Compare commits


135 Commits

Author SHA1 Message Date
Paul Masurel
95f1114870 blop 2020-04-05 16:20:45 +09:00
Paul Masurel
186d7fc20e Fix build 2020-04-01 09:32:45 +09:00
Paul Masurel
cfbdef5186 Using tantivy-fst version 0.3. 2020-03-31 23:24:54 +09:00
Paul Masurel
d04368b1d4 Closes #788. OR not working when using conjunction by default. (#802) 2020-03-31 21:13:50 +09:00
Chen Xu
b167058028 Fix prefix option for FuzzyTermQuery (#797)
* Fix prefix option for FuzzyTermQuery

* Update changelog
2020-03-19 20:19:32 +09:00
Paul Masurel
262957717b unit test fix and use of matches 2020-03-15 00:20:17 +09:00
Paul Masurel
873a808321 Removed itertools (#792) 2020-03-11 18:41:04 +09:00
dependabot-preview[bot]
6fa8f9330e Update base64 requirement from 0.11.0 to 0.12.0 (#791)
Updates the requirements on [base64](https://github.com/marshallpierce/rust-base64) to permit the latest version.
- [Release notes](https://github.com/marshallpierce/rust-base64/releases)
- [Changelog](https://github.com/marshallpierce/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/marshallpierce/rust-base64/compare/v0.11.0...v0.12.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2020-03-11 17:51:22 +09:00
Paul Masurel
b3f0ef0878 Avoid writing a new delete file if there was no actual deletes. (#787)
When applying the delete operations in the delete queue, it is possible
that no document was actually deleted.

In this case, avoid creating a new delete file, and updating the delete
opstamp.
2020-03-08 13:04:21 +09:00
Paul Masurel
04304262ba cargo fmt 2020-03-08 09:58:42 +09:00
Paul Masurel
920ced364a Added a method to persist the RAMDirectory into a different directory. 2020-03-07 17:00:50 +09:00
Paul Masurel
e0499118e2 Minor refactoring 2020-03-07 15:56:03 +09:00
Paul Masurel
50b5efae46 Added derive feature to serde crate 2020-03-06 23:46:29 +09:00
Paul Masurel
486b8fa9c5 Removing serde-derive dependency (#786) 2020-03-06 23:33:58 +09:00
Minoru Osuka
b2baed9bdd Add Lindera to README.md (#785)
* Add Lindera to README.md

* Put lindera in first place
2020-03-03 20:23:59 +09:00
Paul Masurel
b591542c0b Removing err.description() before deprecation. 2020-03-03 09:58:49 +09:00
Paul Masurel
a83fa00ac4 Faster compilation of query-grammar. (#784) 2020-03-02 22:12:42 +09:00
Paul Masurel
7ff5c7c797 Removing the fst feature in the levenshtein_automata crate. 2020-03-02 21:47:05 +09:00
Paul Masurel
1748602691 ignore -> compile_fail 2020-03-02 09:59:48 +09:00
Paul Masurel
6542dd5337 Removing parenthesis. 2020-03-01 09:41:53 +09:00
Nicholas Connor
c64a44b9e1 Slight re-organization to increase contrast of "Getting Started" (#783) 2020-02-28 08:42:38 +09:00
Paul Masurel
fccc5b3bed Closes #758 2020-02-27 17:58:43 +09:00
Paul Masurel
98b9d5c6c4 Closes #780. Will be fixed on the next published release. 2020-02-21 09:41:52 +09:00
Paul Masurel
afd2c1a8ad Merge branch 'master' of github.com:tantivy-search/tantivy 2020-02-19 22:08:44 +09:00
Paul Masurel
81f35a3ceb Bumped tantivy-grammar version 2020-02-19 22:08:31 +09:00
Paul Masurel
7e2e765f4a Bumped tantivy-grammar version 2020-02-19 22:07:54 +09:00
Paul Masurel
7d6cfa58e1 [WIP] Alternative take on boosted queries (#772)
* Alternative take on boosted queries

* Fixing unit test

* Added boosting to the query grammar.

* Made BoostQuery public.

* Added support for boosting field in QueryParser

Closes #547
2020-02-19 11:04:38 +09:00
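The PR above makes `BoostQuery` public and adds boosting to the query grammar. Below is a hedged sketch of wrapping a query in a `BoostQuery`; the exact constructor signature and the schema used here are assumptions for illustration, not code from the PR.

```rust
// Hedged sketch: wrap a query in BoostQuery to scale its score contribution.
// The BoostQuery::new signature and this schema are assumptions, not the PR's code.
use tantivy::query::{BoostQuery, Query, TermQuery};
use tantivy::schema::{IndexRecordOption, Schema, TEXT};
use tantivy::Term;

fn boosted_title_query() -> Box<dyn Query> {
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT);
    let _schema = schema_builder.build();
    // A plain term query on the title field ...
    let term_query = TermQuery::new(
        Term::from_field_text(title, "jackson"),
        IndexRecordOption::Basic,
    );
    // ... whose score is multiplied by 2 when wrapped in a BoostQuery.
    Box::new(BoostQuery::new(Box::new(term_query), 2.0))
}
```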
Paul Masurel
14735ce3aa Update snap version to 1. (#781) 2020-02-17 10:41:44 +09:00
Paul Masurel
72f7cc1569 Closes #777 (#779) 2020-02-17 09:53:38 +09:00
Paul Masurel
abef5c4e74 Updating combine to version 4 (#775) 2020-02-06 23:02:48 +09:00
Paul Masurel
ae14022bf0 Removed use::Result. (#771) 2020-01-31 18:47:02 +09:00
Alexander
55f5658d40 Make Executor public so the Searcher::search_in_executor method can now be used (#769)
* Make Executor public so the Searcher::search_in_executor method can now be used

* Fixed cargo fmt
2020-01-31 15:50:26 +09:00
Paul Masurel
3ae6363462 Updated CHANGELOG 2020-01-30 10:16:56 +09:00
Halvor Fladsrud Bø
9e20d7f8a5 Maximum size of segment to be considered for merge (#765)
* Replicated changes from dead PR

* Ran formatter.
2020-01-30 10:14:34 +09:00
Halvor Fladsrud Bø
ab13ffe377 Facet path string (#759)
* Added to_path_string

* Fixed logic. Found strange behavior with string comparisons.

* ran formatter

* Fixed test

* Fixed format

* Fixed comment
2020-01-30 10:11:29 +09:00
Paul Masurel
039138ed50 Added the empty dictionary item in the CHANGELOG 2020-01-30 10:10:34 +09:00
Paul Masurel
6227a0555a Added unit test for empty dictionaries. 2020-01-30 10:08:27 +09:00
Audun Halland
f85d0a522a Optimize TermDictionary::empty by precomputed data source (#767) 2020-01-30 10:04:58 +09:00
Halvor Fladsrud Bø
5795488ba7 Backward iteration for termdict range (#757)
* Added backwards iteration to termdict

* Ran formatter

* Updated fst dependency

* Updated dependency

* Changelog and version

* Fixed version

* Made it part of 12.0
2020-01-30 09:59:21 +09:00
Paul Masurel
c3045dfb5c Remove time dev-deps by relying on chrono::Duration reexport. 2020-01-29 23:25:03 +09:00
Paul Masurel
811fd0cb9e Dynamic analyzer (#755)
* Removed generics in tokenizers

* lowercaser

* Added TokenizerExt

* Introducing BoxedTokenizer

* Introducing BoxXXXXX helper struct

* Closes #762.

* Introducing a TextAnalyzer
2020-01-29 18:23:37 +09:00
dependabot-preview[bot]
f6847c46d7 Update tantivy-fst requirement from 0.1 to 0.2 (#750)
Updates the requirements on [tantivy-fst](https://github.com/tantivy-search/fst) to permit the latest version.
- [Release notes](https://github.com/tantivy-search/fst/releases)
- [Commits](https://github.com/tantivy-search/fst/compare/0.1.1...0.2.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-01-21 07:57:39 +09:00
Paul Masurel
92dac7af5c Return an error instead of panicking when sorting by a non fast field. (#748)
Closes #747
2020-01-08 13:41:02 +09:00
Paul Masurel
801905d77f Davide romanini arm atomic mutex (#746)
* Add atomic mutex implementation for ARM.

* Applied rustfmt.

* rustfmt

Co-authored-by: davide-romanini <davide.romanini@gmail.com>
2019-12-30 23:42:11 +09:00
Paul Horn
8f5ac86f30 Expose UserOperation as a public type. (#744)
In order to make `IndexWriter::run` callable from outside of the crate,
the `UserOperation` type needs to be publicly available.
Since the `indexer` module is private, we just export the `UserOperation`
type directly.
2019-12-29 22:37:13 +09:00
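Since the commit above exposes `UserOperation` precisely so that `IndexWriter::run` can be called by downstream crates, here is a minimal, hedged sketch of what such a call might look like. The variant names (`Add`/`Delete`) and the exact `run` signature are assumptions based on the commit description, not code taken from it.

```rust
// Hedged sketch: batch an add and a delete through IndexWriter::run.
// UserOperation variant names and run's return type are assumptions.
use tantivy::schema::{Schema, STORED, TEXT};
use tantivy::{doc, Index, Term, UserOperation};

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT | STORED);
    let index = Index::create_in_ram(schema_builder.build());
    let mut writer = index.writer(50_000_000)?;
    // Group an add and a delete into a single atomic batch.
    let _ = writer.run(vec![
        UserOperation::Add(doc!(title => "of mice and men")),
        UserOperation::Delete(Term::from_field_text(title, "mice")),
    ]);
    writer.commit()?;
    Ok(())
}
```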
Paul Masurel
d12a06b65b Tiny code simplification. 2019-12-26 09:33:17 +09:00
Minoru Osuka
749432f949 Make SchemaBuilder::add_field() public (#742)
* Make add_field() to public

* cargo format
2019-12-25 20:37:34 +09:00
Paul Masurel
c1400f25a7 Handle facet search in the QueryParser. (#741)
Closes #738
2019-12-25 17:43:33 +09:00
Paul Masurel
87120acf7c Bump version 2019-12-20 21:22:43 +09:00
Paul Masurel
401f74f7ae Implement fast field for DateTime. (#736) 2019-12-20 21:20:15 +09:00
Paul Masurel
03d31f6713 Update CHANGELOG 2019-12-19 10:07:43 +09:00
Paul Masurel
a57faf07f6 Added a constructor for WatchHandle (#734)
Closes #731
2019-12-19 10:06:02 +09:00
Paul Masurel
562ea9a839 Merge branch 'master' of github.com:tantivy-search/tantivy 2019-12-19 09:32:50 +09:00
Paul Masurel
cf92cc1ada Closes #732 (#733)
The future returned by `IndexWriter::merge` does not borrow `&mut self`
2019-12-18 23:25:22 +09:00
Paul Masurel
f6000aece7 Closes #732
The future returned by `IndexWriter::merge` does not borrow `&mut self`
2019-12-18 21:48:51 +09:00
Paul Masurel
2b3fe3a2b5 Bumped version for hotfix 2019-12-17 21:10:50 +09:00
Paul Masurel
0fde90faac Closes #729 (#730)
Bug related with merge and deletes...
2019-12-17 21:09:08 +09:00
Paul Masurel
5838644b03 Added README in tantivy-query-grammar 2019-12-16 08:41:21 +09:00
Paul Masurel
c0011edd05 Added version for tantivy-grammar before publish 2019-12-16 08:35:17 +09:00
petr-tik
431c187a60 Make error handling richer in Footer::is_compatible (#724)
* WIP implemented is_compatible

hide Footer::from_bytes from public consumption - only found Footer::extract
used outside the module

Add a new error type for IncompatibleIndex
add a prototypical call to footer.is_compatible() in ManagedDirectory::open_read
to make sure we error before reading it further

* Make error handling more ergonomic

Add an error subtype for OpenReadError and converters to TantivyError

* Remove an unnecessary assert

it's followed by the same check, which errors instead of panicking

* Correct the compatibility check logic

Leave a defensive versioned footer check to make sure we add new logic handling
when we add possible footer versions

Restricted VersionedFooter::from_bytes to be used inside the crate only

remove a half-baked test

* WIP.

* Return an error if index incompatible - closes #662

Enrich the error type with incompatibility

Change return type to Result<bool, TantivyError>, instead of bool

Add an Incompatibility enum that enriches the IncompatibleIndex error variant
with information, which then allows us to generate a developer-friendly hint how
to upgrade library version or switch feature flags for a different compression
algorithm

Updated changelog

Change the signature of is_compatible

Added documentation to the Incompatibility
Added a conditional test on a Footer with lz4 erroring
2019-12-14 09:14:33 +09:00
Caio Romão
392abec420 Make u64_lenient() handle f64 fast fields too (#726)
* Make u64_lenient() handle f64 fast fields too

Without this, we get a panic during merge since the merger will
get a `None` where it expects something.

Prior to this patch, you can reproduce the panic with:

    use tantivy::{
        self,
        schema::{SchemaBuilder, FAST},
        Document, Index, Result,
    };

    #[test]
    fn pass() -> Result<()> {
        let mut builder = SchemaBuilder::new();
        let field = builder.add_f64_field("f64", FAST);
        let index = Index::create_in_ram(builder.build());

        let mut writer = index.writer_with_num_threads(1, 50_000_000)?;

        for i in 0..1000 {
            let mut doc = Document::new();
            doc.add_f64(field, 0.42);
            writer.add_document(doc);

            if i % 5 == 0 {
                writer.commit()?;
            }
        }

        writer.commit()?;

        Ok(())
    }

* Add test to verify that f64 fields are merged

* Ensure multi-valued fast fields can be merged too
2019-12-13 23:41:22 +09:00
Paul Masurel
dfbe337fe2 Optimize deletes (#723)
Closes #710
2019-12-13 09:50:00 +09:00
Paul Masurel
b9896c4962 Cleanup 2019-12-10 23:01:07 +09:00
Paul Masurel
afa5715e56 Added unit test. 2019-12-10 22:49:32 +09:00
Paul Masurel
79474288d0 Some clippy minor fixes (#722) 2019-12-09 13:40:04 +09:00
Paul Masurel
daf64487b4 Fixing JSON se/deserialization of dates. (#721)
Closes #719
2019-12-09 13:31:35 +09:00
Ximo Guanter
00816f5529 Fix outdated reference in documentation (#720) 2019-12-08 18:10:50 +09:00
Paul Masurel
f73787e6e5 Merge branch 'master' of github.com:tantivy-search/tantivy 2019-12-06 10:06:09 +09:00
Paul Masurel
5cffa71467 Using census 0.4 2019-12-06 10:04:01 +09:00
Christian Hunstad
02af28b3b7 add norwegian stemmer (#717) 2019-11-27 21:08:59 +09:00
Paul Masurel
afe0134d0f Kkoziara remove tokens from doc store (#715)
* Prevent tokens from being stored in the document store.

The commit adds a prepare_for_store method to Document, which changes all
PreTokenizedString values into String values. The method is called
before adding a document to the document store, to prevent tokens from
being saved there. The commit also makes small changes to comments in the
pre_tokenized_text example.

* Avoid storing the pretokenized text.
2019-11-25 22:39:12 +09:00
Christian Hunstad
db9e81d0f9 Updated rust-stemmers version to 1.2 (#716)
* Updated rust-stemmers version to 1.2

* 1.2.0 -> 1.2
2019-11-25 22:38:48 +09:00
Paul Masurel
3821f57ecc Closes #712 (#714)
Fixing the memory leak in the DeleteQueue.
2019-11-25 15:57:29 +09:00
Paul Masurel
d379f98b22 Waiting for indexing threads when dropping IndexWriter 2019-11-23 15:00:27 +09:00
Paul Masurel
ef3eddf3da clippy first stab (#711) 2019-11-22 13:09:35 +09:00
Paul Masurel
08a2368845 Closes #708 (#709)
Fixes a race condition in the test.
2019-11-21 11:41:59 +09:00
Paul Masurel
1868fc1e2c Text fix 2019-11-20 23:00:39 +09:00
Paul Masurel
451a0252ab thread pool merge (#704) 2019-11-20 21:18:05 +09:00
Paul Masurel
42756c7474 Removing futures-cpupool and upgrading to futures-0.3 2019-11-15 18:35:31 +09:00
Paul Masurel
598b076240 Making some of the IndexWriter's method public. 2019-11-11 12:41:45 +09:00
Paul Masurel
f1f96fc417 Updating some doc. 2019-11-11 10:04:12 +09:00
Paul Masurel
9c941603f5 Petr tik n662 error incompatible footer version (#696)
* code tidy-up

Replace `20` magic constant with COMMON_FOOTER_SIZE

Add a docstring showing how footer is serialised
Add a test for footer length checking

* Add more tests for VersionedFooter

successful and panicking .to_bytes() calls

* Minor changes in footer.rs
2019-11-10 14:40:06 +09:00
Paul Masurel
fb3d6fa332 Adding Value::From<PretokenizedText> (#697) 2019-11-10 14:39:44 +09:00
Paul Masurel
88fd7f091a SegmentUpdater.add_segment does not need to return true (#693) 2019-11-09 21:18:51 +09:00
Jacob Brown
6e4fdfd4bf replace scoped_pool (#685) 2019-11-07 10:26:08 +09:00
kkoziara
0519056bd8 Added handling of pre-tokenized text fields (#642). (#669)
* Added handling of pre-tokenized text fields (#642).

* * Updated changelog and examples concerning #642.
* Added tokenized_text method to Value implementation.
* Implemented From<TokenizedString> for TokenizedStream.

* * Removed tokenized flag from TextOptions and code reliance on the flag.
* Changed naming to use word "pre-tokenized" instead of "tokenized".
* Updated example code.
* Fixed comments.

* Minor code refactoring. Test improvements.
2019-11-07 10:10:56 +09:00
dependabot-preview[bot]
7305ad575e Update smallvec requirement from 0.6 to 1.0 (#686)
Updates the requirements on [smallvec](https://github.com/servo/rust-smallvec) to permit the latest version.
- [Release notes](https://github.com/servo/rust-smallvec/releases)
- [Commits](https://github.com/servo/rust-smallvec/compare/v0.6.0...v1.0.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-11-07 09:55:33 +09:00
Paul Masurel
79f64ac2f4 Create FUNDING.yml 2019-11-05 16:26:12 +09:00
Paul Masurel
67bce6cbf2 Fixing the construction of the DeleteBitset. (#683)
Closes #681
2019-11-04 15:39:11 +09:00
xiaoniu-578fa6bff964d005
e5316a4388 Reduce unnecessary clone. (#684) 2019-11-04 13:57:59 +09:00
Mathias Svensson
6a8a8557d2 Use slice::iter instead of into_iter to avoid future breakage (#679)
* Use `slice::iter` instead of `into_iter` to avoid future breakage

`an_array.into_iter()` currently just works because of the autoref
feature, which then calls `<[T] as IntoIterator>::into_iter`. But
in the future, arrays will implement `IntoIterator`, too. In order
to avoid problems in the future, the call is replaced by `iter()`
which is shorter and more explicit.

* cargo fmt
2019-10-31 20:59:50 +09:00
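Because the change above is about a Rust language subtlety rather than tantivy itself, here is a tiny standalone illustration (not taken from the PR) of the preferred style: calling `iter()` on an array explicitly instead of relying on `into_iter()` auto-reffing to the slice implementation.

```rust
fn main() {
    let weights = [0.5f32, 1.0, 2.0];
    // `iter()` unambiguously yields `&f32`, regardless of how
    // `[T; N]: IntoIterator` evolves in future editions.
    let total: f32 = weights.iter().sum();
    assert_eq!(total, 3.5);
}
```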
Alberto Piai
3a65dc84c8 TopDocs: ensure stable sorting on equal score (#675)
* TopDocs: ensure stable sorting on equal score

When selecting the top K documents by score, we need to ensure stable
sorting. Until now, for documents with the same score, we were relying
on the (arbitrary) order returned by the BinaryHeap used to implement
the collectors.

This patch fixes the problem by explicitly using the doc address when
harvesting the `TopSegmentCollector` and when merging the results in
`TopCollector::merge_fruits()`.

This is important (for example) to implement pagination correctly using
the TopDocs collector. If sorting isn't stable, documents that have the
same score might be ranked in different positions depending on the
specific K that was used, thus appearing in two different pages, or in
none at all.

Fixes gh-671

* TMP: alternative solution (see previous commit)

If we add the constraint that D is also PartialOrd in ComparableDoc<T,
D>, then we can move the comparison by doc address directly in the cmp
implementation of ComparableDoc.

* TMP rebase as first commit: add benchmarks for TopSegmentCollector

* fixup! TMP: alternative solution (see previous commit)

* TMP add changelog entry

* TMP run cargo fmt
2019-10-26 15:27:25 +09:00
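The tie-breaking idea described in this commit can be shown with a small standalone sketch (not the actual tantivy implementation): order primarily by descending score and fall back to the document address when scores are equal, so the top-K cut is deterministic for any K.

```rust
// Standalone sketch of the stable-ordering rule: score first, doc address second.
use std::cmp::Ordering;

fn rank(a: &(f32, u32), b: &(f32, u32)) -> Ordering {
    b.0.partial_cmp(&a.0)
        .unwrap_or(Ordering::Equal) // higher score first
        .then_with(|| a.1.cmp(&b.1)) // then lower doc address first
}

fn main() {
    let mut hits = vec![(1.0_f32, 7_u32), (2.0, 3), (1.0, 2)];
    hits.sort_by(rank);
    // The two equal-score documents now always appear in the same order.
    assert_eq!(hits, vec![(2.0, 3), (1.0, 2), (1.0, 7)]);
}
```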
dependabot-preview[bot]
ce42bbf5c9 Update base64 requirement from 0.10.0 to 0.11.0 (#676)
Updates the requirements on [base64](https://github.com/marshallpierce/rust-base64) to permit the latest version.
- [Release notes](https://github.com/marshallpierce/rust-base64/releases)
- [Changelog](https://github.com/marshallpierce/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/marshallpierce/rust-base64/compare/v0.10.0...v0.11.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-10-26 15:24:47 +09:00
Paul Masurel
7b21b3f25a Refactoring around Field (#673)
* Refactoring around Field

Removing the contract about the order of the field, and the
field id allocation.

* Update delete_queue.rs

* Update field.rs
2019-10-25 09:06:44 +09:00
Paul Masurel
46caec1040 Updating uuid to 0.8 (#674) 2019-10-25 09:02:00 +09:00
petr-tik
1187a02a3e Fixed #664 (#667)
Removed references to u8 and old documentation
2019-10-22 09:34:10 +09:00
Andrew Banchich
f6c525b19e Fix grammar / punctuation (#668) 2019-10-21 10:50:53 +09:00
petr-tik
4a8f7712f3 Add a doctest to BooleanQuery (#630)
* Add a doctest to BooleanQuery

Closes #446

Mark a function that is only used in tests to be compiled for tests only

Fix doc-comments in a couple of related files

* Minor corrections

remove whitespace, fix typos, add explicit dyn marker

* WIP: BooleanQuery doc test

Trying to nest several BooleanQueries together

* Addressed old review

rust 2018 edition + make function available to everyone

* Box the previous query to resolve the type error

* Rework wording in DocAddress document strings

* Reworded and restructured the docstring
2019-10-07 10:05:12 +09:00
Paul Masurel
2f867aad17 Fix bench (#663)
* fmt

* Fixing bench compilation
2019-10-04 17:07:49 +09:00
Paul Masurel
5c6580eb15 fmt (#661) 2019-10-04 12:10:01 +09:00
Paul Masurel
4c3941750b Waiting potentially longer on watch 2019-10-01 09:50:46 +09:00
Paul Masurel
2ea8e618f2 Merge branch 'hotfix-656' 2019-10-01 09:44:56 +09:00
Paul Masurel
94f27f990b Address #656
Broke the reference loop to make sure that the watch_router can
be dropped, and the thread exits.
2019-10-01 09:34:22 +09:00
Paul Masurel
349e8aa348 Removed enum variants on type alias 2019-09-26 18:43:29 +09:00
Paul Masurel
cde9b78b8d Fixing the issue associated with the Regex performance change 2019-09-18 18:29:27 +09:00
fdb-hiroshima
d8894f0bd2 add checksum check in ManagedDirectory (#605)
* add checksum check in ManagedDirectory

fix #400

* flush after writing checksum

* don't checksum atomic file access and clone managed_paths

* implement a footer storing metadata about a file

this is more of a poc, it require some refactoring into multiple files
`terminate(self)` is implemented, but not used anywhere yet

* address comments and simplify things with new contract

use BitOrder for integer to raw byte conversion
consider atomic write imply atomic read, which might not actually be true
use some indirection to have a boxable terminating writer

* implement TerminatingWrite and make terminate() be called where it should

add a dependency on drop_bomb to help find where terminate() should be called
implement TerminatingWrite for wrapper writers
make tests pass
/!\ some tests seem to pass where they shouldn't

* remove usage of drop_bomb

* fmt

* add test for checksum

* address some review comments

* update changelog

* fmt
2019-09-18 18:26:25 +09:00
fdb-hiroshima
7e08e0047b fix Term documentation (#655)
u64-based fields are actually 4+8=12 bytes long
2019-09-11 18:49:35 +09:00
fdb-hiroshima
1a817f117f fix documentation error (#654)
Union misdocumented as doing an intersection
Union and Intersection can hold more than 2 DocSets
2019-09-11 17:12:08 +09:00
petr-tik
2ec19b21ae Remove unnecessary duplicate methods (#650)
Closes #649

Spotted by @imor
2019-09-09 06:36:04 +09:00
Raminder Singh
141f5a93f7 Using FnvHashMap for mapping UnorderedTermId to TermOrdinal. Fixes #507 (#647)
* Using FnvHashMap for mapping UnorderedTermId to TermOrdinal. Fixes #507

* Fixed cargo fmt errors
2019-09-07 19:40:21 +09:00
Paul Masurel
df47d55cd2 Occur debug interface (#648) 2019-09-07 15:08:45 +09:00
Raminder Singh
5e579fd6b7 Fixed clippy warning: unneeded return statement (#646) 2019-09-07 10:14:37 +09:00
Paul Masurel
4b9c1dce69 Moving query grammar to a different crate. (#645) 2019-09-05 09:37:28 +09:00
Paul Masurel
d74f71bbef Lighter regex dependency. (#644)
Detail on https://github.com/rust-lang/regex/pull/613
2019-09-04 13:10:12 +09:00
Paul Masurel
5196ca41d8 Small code clean up 2019-09-03 09:22:32 +09:00
dependabot-preview[bot]
4959e06151 Update once_cell requirement from 0.2 to 1.0 (#643)
Updates the requirements on [once_cell](https://github.com/matklad/once_cell) to permit the latest version.
- [Release notes](https://github.com/matklad/once_cell/releases)
- [Changelog](https://github.com/matklad/once_cell/blob/master/CHANGELOG.md)
- [Commits](https://github.com/matklad/once_cell/compare/v0.2.0...v1.0.2)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-09-03 07:00:45 +09:00
Paul Masurel
c1635c13f6 RegexQuery performance: make it possible to cache Regexes - remastered by fulmicoton (Closes #639) (#641)
* small docs cleanup

* only compile a regex once per RegexQuery

Building a `Regex` is an expensive operation. Users of `RegexQuery`
need to cache and reuse regexes when searching across multiple fields.

This is the first step towards allowing that: we can store the `Regex`
directly in the `RegexQuery`, instead of the string pattern.

* RegexQuery: account for possible failure in the constructor

When building a regex from a str pattern, we have to account for the
possibility that the pattern is invalid. Before the previous commit, the
failure would happen in the `specialized_weight` method. Now that we
store a compiled `Regex` in `RegexQuery`, `specialized_weight` doesn't
fail anymore, and we can fail early while constructing `RegexQuery` if
the pattern is invalid.

This is a breaking change for users of `RegexQuery::new`.

* add RegexQuery::from_regex method

This builds a `RegexQuery` from an already compiled `Regex`. The use of
`Into<Arc<Regex>>` is to allow the caller to either simply pass a
`Regex`, or an `Arc<Regex>`, in case it needs to be cached and shared on
the caller's side.

* Using an Arc in AutomatonWeight

Closes #639
2019-08-22 16:14:01 +09:00
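A hedged sketch of the post-change construction described above follows: the pattern is compiled when the `RegexQuery` is built, so the constructor is fallible. The exact argument order and error type of `RegexQuery::new` are assumptions, not taken from the PR.

```rust
// Hedged sketch: RegexQuery::new now compiles the pattern eagerly, so it can fail.
// Argument order, error type, and this schema are assumptions for illustration.
use tantivy::query::RegexQuery;
use tantivy::schema::{Schema, TEXT};

fn build_query() -> tantivy::Result<RegexQuery> {
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT);
    let _schema = schema_builder.build();
    // An invalid pattern now fails here rather than later in
    // `specialized_weight`, so callers must handle the `Result`.
    RegexQuery::new("tant.*vy", title)
}
```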
Paul Masurel
135e0ea2e9 Expose new segment meta from Index (#637) 2019-08-19 10:39:15 +09:00
Paul Masurel
f283bfd7ab Added segmentid_from_string (#636) 2019-08-19 10:37:30 +09:00
Joshua Dutton
9f74786db2 Update import statements in examples, doctests (#633)
Update import statements to edition 2018, including removing
`extern crate` and  `#[macro_use]`. Alphabetize the statements.
2019-08-19 07:26:35 +09:00
Joshua Dutton
32e5d7a0c7 Fix trait object in doctest (#635) 2019-08-19 07:25:00 +09:00
Joshua Dutton
84c615cff1 Fixing typos (#634) 2019-08-19 07:24:05 +09:00
Paul Masurel
039c0a0863 Introducing a wrapper struct instead of Boxed<BoxableTokenizer> (#631)
Closes #629
2019-08-15 16:37:04 +09:00
Paul Masurel
b3b0138b82 Change for tantivy-py
Schema.convert_named_doc
Better Debug string for Terms and TermQueries
2019-08-14 17:44:25 +09:00
petr-tik
ea56160cdc Added cargo-fmt to CI runs (#627)
* Added cargo-fmt to CI runs

Closes #625

* Remove fmt from appveyor builds

Windows seems to have issues with installing components through rustup.

Formatting should be equally informative regardless of the OS,
so best to keep it in Linux on Travis
2019-08-12 08:25:47 +09:00
petr-tik
028b0a749c Elastic unbounded range query (#624)
* Tidy up

fmt

remove unnecessary -> Result<()> followed by run.unwrap() in a test

* Adding support for elasticsearch-style unbounded queries

Extend the UserInputBound to include Unbounded, so we can reuse formatting and
internal query format

* Still working on elastic-style range queries

Fixes #498

Merge the elastic_range into range

Reformat to make code easier to follow, use optional() macro to return Some

* Fixed bugs

Made the range parser insensitive to whitespace between the ":" and the range.

Removed optional parsing of field.

Added a unit test for the range parser.

Derived PartialEq to compare the results of parsing as structs, instead of
strings. Found a bug with that unit test - "*}" was parsed as an
UserInputBound::Exclusive, instead of UserInputBound::Unbounded. Added an early
detection-and-return for * in the original range parser

* Correct failing test

Assume that we will use "{*" for Unbounded ranges

* Add a note in the changelog

cargo-fmt

* Moved parenthesis to a newline to make nested if-else more visible
2019-08-12 08:24:47 +09:00
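The unbounded range syntax added in the PR above can be sketched through the query parser; the schema, field names, and exact parser behaviour below are assumptions for illustration only, following the examples given in the changelog ("title:>hello", "weight:>=70.5").

```rust
// Hedged sketch: Elastic-style one-sided range bounds parsed by QueryParser.
// The schema and the precise accepted syntax are assumptions, not the PR's tests.
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, INDEXED, TEXT};
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT);
    let weight = schema_builder.add_u64_field("weight", INDEXED);
    let index = Index::create_in_ram(schema_builder.build());
    let parser = QueryParser::for_index(&index, vec![title, weight]);
    // One-sided bound: every document whose weight is at least 70.
    let _at_least_70 = parser.parse_query("weight:>=70")?;
    // One-sided bound on a text field, as in the changelog example.
    let _after_hello = parser.parse_query("title:>hello")?;
    Ok(())
}
```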
Paul Masurel
941f06eb9f Added Schema.from_named_doc 2019-08-11 16:50:32 +09:00
Paul Masurel
04832a86eb WTF is this file doing here (#622) 2019-08-08 21:54:10 +09:00
fdb-hiroshima
beb8e990cd fix parsing neg float in range query (#621)
fix #620
2019-08-08 20:41:04 +09:00
Paul Masurel
001af3876f cargo fmt 2019-08-08 18:07:19 +09:00
Paul Masurel
f428f344da Various bugfix in the query parser (#619) 2019-08-08 17:48:21 +09:00
Paul Masurel
143f78eced Trying to fix #609 (#616) 2019-08-06 20:33:30 +09:00
Kornel
754b55eee5 Bump deps (#613)
* Bump crossbeam

* Warnings--

* Remove outdated tempdir
2019-08-05 22:21:22 +09:00
Paul Masurel
280ea1209c Changes required for python binding (#610) 2019-08-01 17:26:21 +09:00
petr-tik
0154dbe477 Replace unwrap with match and proper Error handling (#606)
* Replace unwrap with match and proper Error handling

* Replaced 'magic' values with a documented variable

Didn't like the unexplained 0..3 range, thought it was best as a variable

Calculating Levenshtein distance is expensive, so best explain why we should
keep it low
2019-07-31 08:16:02 +09:00
170 changed files with 6692 additions and 3078 deletions

.github/FUNDING.yml (new file)

@@ -0,0 +1,12 @@
# These are supported funding model platforms
github: fulmicoton
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.travis.yml
@@ -47,6 +47,7 @@ matrix:
 before_install:
 - set -e
 - rustup self update
+- rustup component add rustfmt
 install:
 - sh ci/install.sh
@@ -60,6 +61,7 @@ before_script:
 script:
 - bash ci/script.sh
+- cargo fmt --all -- --check
 before_deploy:
 - sh ci/before_deploy.sh

CHANGELOG.md
@@ -1,7 +1,65 @@
+Tantivy 0.13.0
+======================
+- Bugfix in `FuzzyTermQuery` not matching terms by prefix when it should (@Peachball)
+Tantivy 0.12.0
+======================
+- Removing static dispatch in tokenizers for simplicity. (#762)
+- Added backward iteration for `TermDictionary` stream. (@halvorboe)
+- Fixed a performance issue when searching for the posting lists of a missing term (@audunhalland)
+- Added a configurable maximum number of docs (10M by default) for a segment to be considered for merge (@hntd187, landed by @halvorboe #713)
+- Important Bugfix #777, causing tantivy to retain memory mapping. (diagnosed by @poljar)
+- Added support for field boosting. (#547, @fulmicoton)
+## How to update?
+Crates relying on custom tokenizer, or registering tokenizer in the manager will require some
+minor changes. Check https://github.com/tantivy-search/tantivy/blob/master/examples/custom_tokenizer.rs
+to check for some code sample.
+Tantivy 0.11.3
+=======================
+- Fixed DateTime as a fast field (#735)
+Tantivy 0.11.2
+=======================
+- The future returned by `IndexWriter::merge` does not borrow `self` mutably anymore (#732)
+- Exposing a constructor for `WatchHandle` (#731)
+Tantivy 0.11.1
+=====================
+- Bug fix #729
 Tantivy 0.11.0
 =====================
 - Added f64 field. Internally reuse u64 code the same way i64 does (@fdb-hiroshima)
+- Various bugfixes in the query parser.
+- Better handling of hyphens in query parser. (#609)
+- Better handling of whitespaces.
+- Closes #498 - add support for Elastic-style unbounded range queries for alphanumeric types eg. "title:>hello", "weight:>=70.5", "height:<200" (@petr-tik)
+- API change around `Box<BoxableTokenizer>`. See detail in #629
+- Avoid rebuilding Regex automaton whenever a regex query is reused. #639 (@brainlock)
+- Add footer with some metadata to index files. #605 (@fdb-hiroshima)
+- Add a method to check the compatibility of the footer in the index with the running version of tantivy (@petr-tik)
+- TopDocs collector: ensure stable sorting on equal score. #671 (@brainlock)
+- Added handling of pre-tokenized text fields (#642), which will enable users to
+load tokens created outside tantivy. See usage in examples/pre_tokenized_text. (@kkoziara)
+- Fix crash when committing multiple times with deleted documents. #681 (@brainlock)
+## How to update?
+- The index format is changed. You are required to reindex your data to use tantivy 0.11.
+- `Box<dyn BoxableTokenizer>` has been replaced by a `BoxedTokenizer` struct.
+- Regex are now compiled when the `RegexQuery` instance is built. As a result, it can now return
+an error and handling the `Result` is required.
+- `tantivy::version()` now returns a `Version` object. This object implements `ToString()`
+Tantivy 0.10.2
+=====================
+- Closes #656. Solving memory leak.
 Tantivy 0.10.1
 =====================

Cargo.toml
@@ -1,11 +1,11 @@
 [package]
 name = "tantivy"
-version = "0.10.1"
+version = "0.12.0"
 authors = ["Paul Masurel <paul.masurel@gmail.com>"]
 license = "MIT"
 categories = ["database-implementations", "data-structures"]
 description = """Search engine library"""
-documentation = "https://tantivy-search.github.io/tantivy/tantivy/index.html"
+documentation = "https://docs.rs/tantivy/"
 homepage = "https://github.com/tantivy-search/tantivy"
 repository = "https://github.com/tantivy-search/tantivy"
 readme = "README.md"
@@ -13,47 +13,43 @@ keywords = ["search", "information", "retrieval"]
 edition = "2018"
 [dependencies]
-base64 = "0.10.0"
+base64 = "0.12.0"
 byteorder = "1.0"
-once_cell = "0.2"
-regex = "1.0"
-tantivy-fst = "0.1"
+crc32fast = "1.2.0"
+once_cell = "1.0"
+regex ={version = "1.3.0", default-features = false, features = ["std"]}
+tantivy-fst = {path="../tantivy-fst", version="0.3"}
 memmap = {version = "0.7", optional=true}
 lz4 = {version="1.20", optional=true}
-snap = {version="0.2"}
+snap = "1"
 atomicwrites = {version="0.2.2", optional=true}
 tempfile = "3.0"
 log = "0.4"
-combine = ">=3.6.0,<4.0.0"
-tempdir = "0.3"
-serde = "1.0"
-serde_derive = "1.0"
+serde = {version="1.0", features=["derive"]}
 serde_json = "1.0"
 num_cpus = "1.2"
 fs2={version="0.4", optional=true}
-itertools = "0.8"
-levenshtein_automata = {version="0.1", features=["fst_automaton"]}
+levenshtein_automata = "0.2"
 notify = {version="4", optional=true}
-bit-set = "0.5"
-uuid = { version = "0.7.2", features = ["v4", "serde"] }
-crossbeam = "0.5"
-futures = "0.1"
-futures-cpupool = "0.1"
+uuid = { version = "0.8", features = ["v4", "serde"] }
+crossbeam = "0.7"
+futures = {version = "0.3", features=["thread-pool"] }
 owning_ref = "0.4"
 stable_deref_trait = "1.0.0"
-rust-stemmers = "1.1"
+rust-stemmers = "1.2"
 downcast-rs = { version="1.0" }
+tantivy-query-grammar = { version="0.13", path="./query-grammar" }
 bitpacking = {version="0.8", default-features = false, features=["bitpacker4x"]}
-census = "0.2"
+census = "0.4"
 fnv = "1.0.6"
 owned-read = "0.4"
 failure = "0.1"
 htmlescape = "0.3.1"
 fail = "0.3"
-scoped-pool = "1.0"
 murmurhash32 = "0.2"
 chrono = "0.4"
-smallvec = "0.6"
+smallvec = "1.0"
+rayon = "1"
 [target.'cfg(windows)'.dependencies]
 winapi = "0.3"
@@ -62,7 +58,10 @@ winapi = "0.3"
 rand = "0.7"
 maplit = "1"
 matches = "0.1.8"
-time = "0.1.42"
+[dev-dependencies.fail]
+version = "0.3"
+features = ["failpoints"]
 [profile.release]
 opt-level = 3
@@ -81,13 +80,12 @@ failpoints = ["fail/failpoints"]
 unstable = [] # useful for benches.
 wasm-bindgen = ["uuid/wasm-bindgen"]
+[workspace]
+members = ["query-grammar"]
 [badges]
 travis-ci = { repository = "tantivy-search/tantivy" }
-[dev-dependencies.fail]
-features = ["failpoints"]
 # Following the "fail" crate best practises, we isolate
 # tests that define specific behavior in fail check points
 # in a different binary.
@@ -98,4 +96,4 @@ features = ["failpoints"]
 [[test]]
 name = "failpoints"
 path = "tests/failpoints/mod.rs"
 required-features = ["fail/failpoints"]

Makefile (new file)

@@ -0,0 +1,3 @@
test:
echo "Run test only... No examples."
cargo test --tests --lib

README.md
@@ -21,9 +21,9 @@
 [![Become a patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/fulmicoton)
-**Tantivy** is a **full text search engine library** written in rust.
+**Tantivy** is a **full text search engine library** written in Rust.
-It is closer to [Apache Lucene](https://lucene.apache.org/) than to [Elasticsearch](https://www.elastic.co/products/elasticsearch) and [Apache Solr](https://lucene.apache.org/solr/) in the sense it is not
+It is closer to [Apache Lucene](https://lucene.apache.org/) than to [Elasticsearch](https://www.elastic.co/products/elasticsearch) or [Apache Solr](https://lucene.apache.org/solr/) in the sense it is not
 an off-the-shelf search engine server, but rather a crate that can be used
 to build such a search engine.
@@ -31,7 +31,7 @@ Tantivy is, in fact, strongly inspired by Lucene's design.
 # Benchmark
-Tantivy is typically faster than Lucene, but the results will depend on
+Tantivy is typically faster than Lucene, but the results depend on
 the nature of the queries in your workload.
 The following [benchmark](https://tantivy-search.github.io/bench/) break downs
@@ -40,64 +40,62 @@ performance for different type of queries / collection.
 # Features
 - Full-text search
-- Configurable tokenizer. (stemming available for 17 latin languages. Third party support for Chinese ([tantivy-jieba](https://crates.io/crates/tantivy-jieba) and [cang-jie](https://crates.io/crates/cang-jie)) and [Japanese](https://crates.io/crates/tantivy-tokenizer-tiny-segmenter)
+- Configurable tokenizer (stemming available for 17 Latin languages with third party support for Chinese ([tantivy-jieba](https://crates.io/crates/tantivy-jieba) and [cang-jie](https://crates.io/crates/cang-jie)), Japanese ([lindera](https://github.com/lindera-morphology/lindera-tantivy) and [tantivy-tokenizer-tiny-segmente](https://crates.io/crates/tantivy-tokenizer-tiny-segmenter)) and Korean ([lindera](https://github.com/lindera-morphology/lindera-tantivy) + [lindera-ko-dic-builder](https://github.com/lindera-morphology/lindera-ko-dic-builder))
 - Fast (check out the :racehorse: :sparkles: [benchmark](https://tantivy-search.github.io/bench/) :sparkles: :racehorse:)
 - Tiny startup time (<10ms), perfect for command line tools
-- BM25 scoring (the same as lucene)
+- BM25 scoring (the same as Lucene)
-- Natural query language `(michael AND jackson) OR "king of pop"`
+- Natural query language (e.g. `(michael AND jackson) OR "king of pop"`)
-- Phrase queries search (`"michael jackson"`)
+- Phrase queries search (e.g. `"michael jackson"`)
 - Incremental indexing
 - Multithreaded indexing (indexing English Wikipedia takes < 3 minutes on my desktop)
 - Mmap directory
-- SIMD integer compression when the platform/CPU includes the SSE2 instruction set.
+- SIMD integer compression when the platform/CPU includes the SSE2 instruction set
-- Single valued and multivalued u64, i64 and f64 fast fields (equivalent of doc values in Lucene)
+- Single valued and multivalued u64, i64, and f64 fast fields (equivalent of doc values in Lucene)
 - `&[u8]` fast fields
-- Text, i64, u64, f64, dates and hierarchical facet fields
+- Text, i64, u64, f64, dates, and hierarchical facet fields
 - LZ4 compressed document store
 - Range queries
 - Faceted search
 - Configurable indexing (optional term frequency and position indexing)
 - Cheesy logo with a horse
-# Non-features
+## Non-features
-- Distributed search is out of the scope of tantivy. That being said, tantivy is meant as a
+- Distributed search is out of the scope of Tantivy. That being said, Tantivy is a
 library upon which one could build a distributed search. Serializable/mergeable collector state for instance,
-are within the scope of tantivy.
+are within the scope of Tantivy.
-# Supported OS and compiler
-Tantivy works on stable rust (>= 1.27) and supports Linux, MacOS and Windows.
 # Getting started
-- [tantivy's simple search example](https://tantivy-search.github.io/examples/basic_search.html)
-- [tantivy-cli and its tutorial](https://github.com/tantivy-search/tantivy-cli).
-`tantivy-cli` is an actual command line interface that makes it easy for you to create a search engine,
-index documents and search via the CLI or a small server with a REST API.
-It will walk you through getting a wikipedia search engine up and running in a few minutes.
-- [reference doc for the last released version](https://docs.rs/tantivy/)
+Tantivy works on stable Rust (>= 1.27) and supports Linux, MacOS, and Windows.
+- [Tantivy's simple search example](https://tantivy-search.github.io/examples/basic_search.html)
+- [tantivy-cli and its tutorial](https://github.com/tantivy-search/tantivy-cli) - `tantivy-cli` is an actual command line interface that makes it easy for you to create a search engine,
+index documents, and search via the CLI or a small server with a REST API.
+It walks you through getting a wikipedia search engine up and running in a few minutes.
+- [Reference doc for the last released version](https://docs.rs/tantivy/)
 # How can I support this project?
 There are many ways to support this project.
-- Use tantivy and tell us about your experience on [gitter](https://gitter.im/tantivy-search/tantivy) or by email (paul.masurel@gmail.com)
+- Use Tantivy and tell us about your experience on [Gitter](https://gitter.im/tantivy-search/tantivy) or by email (paul.masurel@gmail.com)
 - Report bugs
 - Write a blog post
 - Help with documentation by asking questions or submitting PRs
-- Contribute code (you can join [our gitter](https://gitter.im/tantivy-search/tantivy) )
+- Contribute code (you can join [our Gitter](https://gitter.im/tantivy-search/tantivy))
-- Talk about tantivy around you
+- Talk about Tantivy around you
 - Drop a word on on [![Say Thanks!](https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg)](https://saythanks.io/to/fulmicoton) or even [![Become a patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/fulmicoton)
 # Contributing code
-We use the GitHub Pull Request workflow - reference a GitHub ticket and/or include a comprehensive commit message when opening a PR.
+We use the GitHub Pull Request workflow: reference a GitHub ticket and/or include a comprehensive commit message when opening a PR.
 ## Clone and build locally
-Tantivy compiles on stable rust but requires `Rust >= 1.27`.
+Tantivy compiles on stable Rust but requires `Rust >= 1.27`.
-To check out and run tests, you can simply run :
+To check out and run tests, you can simply run:
 ```bash
 git clone https://github.com/tantivy-search/tantivy.git
@@ -108,7 +106,7 @@ To check out and run tests, you can simply run :
 ## Run tests
 Some tests will not run with just `cargo test` because of `fail-rs`.
-To run the tests exhaustively, run `./run-tests.sh`
+To run the tests exhaustively, run `./run-tests.sh`.
 ## Debug
@@ -116,13 +114,13 @@ You might find it useful to step through the programme with a debugger.
 ### A failing test
-Make sure you haven't run `cargo clean` after the most recent `cargo test` or `cargo build` to guarantee that `target/` dir exists. Use this bash script to find the most name of the most recent debug build of tantivy and run it under rust-gdb.
+Make sure you haven't run `cargo clean` after the most recent `cargo test` or `cargo build` to guarantee that the `target/` directory exists. Use this bash script to find the name of the most recent debug build of Tantivy and run it under `rust-gdb`:
 ```bash
 find target/debug/ -maxdepth 1 -executable -type f -name "tantivy*" -printf '%TY-%Tm-%Td %TT %p\n' | sort -r | cut -d " " -f 3 | xargs -I RECENT_DBG_TANTIVY rust-gdb RECENT_DBG_TANTIVY
 ```
-Now that you are in rust-gdb, you can set breakpoints on lines and methods that match your source-code and run the debug executable with flags that you normally pass to `cargo test` to like this
+Now that you are in `rust-gdb`, you can set breakpoints on lines and methods that match your source code and run the debug executable with flags that you normally pass to `cargo test` like this:
 ```bash
 $gdb run --test-threads 1 --test $NAME_OF_TEST
@@ -130,7 +128,7 @@ $gdb run --test-threads 1 --test $NAME_OF_TEST
 ### An example
-By default, rustc compiles everything in the `examples/` dir in debug mode. This makes it easy for you to make examples to reproduce bugs.
+By default, `rustc` compiles everything in the `examples/` directory in debug mode. This makes it easy for you to make examples to reproduce bugs:
 ```bash
 rust-gdb target/debug/examples/$EXAMPLE_NAME

appveyor.yml
@@ -18,5 +18,5 @@ install:
 build: false
 test_script:
-- REM SET RUST_LOG=tantivy,test & cargo test --verbose --no-default-features --features mmap
+- REM SET RUST_LOG=tantivy,test & cargo test --all --verbose --no-default-features --features mmap
 - REM SET RUST_BACKTRACE=1 & cargo build --examples

ci/script.sh
@@ -7,7 +7,7 @@ set -ex
 main() {
 if [ ! -z $CODECOV ]; then
 echo "Codecov"
-cargo build --verbose && cargo coverage --verbose && bash <(curl -s https://codecov.io/bash) -s target/kcov
+cargo build --verbose && cargo coverage --verbose --all && bash <(curl -s https://codecov.io/bash) -s target/kcov
 else
 echo "Build"
 cross build --target $TARGET
@@ -15,7 +15,8 @@ main() {
 return
 fi
 echo "Test"
-cross test --target $TARGET --no-default-features --features mmap -- --test-threads 1
+cross test --target $TARGET --no-default-features --features mmap
+cross test --target $TARGET --no-default-features --features mmap query-grammar
 fi
 for example in $(ls examples/*.rs)
 do

examples/basic_search.rs
@@ -5,26 +5,23 @@
 //
 // We will :
 // - define our schema
-// = create an index in a directory
+// - create an index in a directory
-// - index few documents in our index
+// - index a few documents into our index
-// - search for the best document matchings "sea whale"
+// - search for the best document matching a basic query
-// - retrieve the best document original content.
+// - retrieve the best document's original content.
 // ---
 // Importing tantivy...
-#[macro_use]
-extern crate tantivy;
 use tantivy::collector::TopDocs;
 use tantivy::query::QueryParser;
 use tantivy::schema::*;
-use tantivy::Index;
-use tantivy::ReloadPolicy;
-use tempdir::TempDir;
+use tantivy::{doc, Index, ReloadPolicy};
+use tempfile::TempDir;
 fn main() -> tantivy::Result<()> {
 // Let's create a temporary directory for the
 // sake of this example
-let index_path = TempDir::new("tantivy_example_dir")?;
+let index_path = TempDir::new()?;
 // # Defining the schema
 //
@@ -33,7 +30,7 @@ fn main() -> tantivy::Result<()> {
 // and for each field, its type and "the way it should
 // be indexed".
-// first we need to define a schema ...
+// First we need to define a schema ...
 let mut schema_builder = Schema::builder();
 // Our first field is title.
@@ -48,7 +45,7 @@ fn main() -> tantivy::Result<()> {
 //
 // `STORED` means that the field will also be saved
 // in a compressed, row-oriented key-value store.
-// This store is useful to reconstruct the
+// This store is useful for reconstructing the
 // documents that were selected during the search phase.
 schema_builder.add_text_field("title", TEXT | STORED);
@@ -57,8 +54,7 @@ fn main() -> tantivy::Result<()> {
 // need to be able to be able to retrieve it
 // for our application.
 //
-// We can make our index lighter and
-// by omitting `STORED` flag.
+// We can make our index lighter by omitting the `STORED` flag.
 schema_builder.add_text_field("body", TEXT);
 let schema = schema_builder.build();
@@ -71,7 +67,7 @@ fn main() -> tantivy::Result<()> {
 // with our schema in the directory.
 let index = Index::create_in_dir(&index_path, schema.clone())?;
-// To insert document we need an index writer.
+// To insert a document we will need an index writer.
 // There must be only one writer at a time.
 // This single `IndexWriter` is already
 // multithreaded.
@@ -149,8 +145,8 @@ fn main() -> tantivy::Result<()> {
 // At this point our documents are not searchable.
 //
 //
-// We need to call .commit() explicitly to force the
-// index_writer to finish processing the documents in the queue,
+// We need to call `.commit()` explicitly to force the
+// `index_writer` to finish processing the documents in the queue,
 // flush the current index to the disk, and advertise
 // the existence of new documents.
 //
@@ -162,14 +158,14 @@ fn main() -> tantivy::Result<()> {
 // persistently indexed.
 //
 // In the scenario of a crash or a power failure,
-// tantivy behaves as if has rolled back to its last
+// tantivy behaves as if it has rolled back to its last
 // commit.
 // # Searching
 //
 // ### Searcher
 //
-// A reader is required to get search the index.
+// A reader is required first in order to search an index.
 // It acts as a `Searcher` pool that reloads itself,
 // depending on a `ReloadPolicy`.
 //
@@ -185,7 +181,7 @@ fn main() -> tantivy::Result<()> {
 // We now need to acquire a searcher.
 //
-// A searcher points to snapshotted, immutable version of the index.
+// A searcher points to a snapshotted, immutable version of the index.
 //
 // Some search experience might require more than
 // one query. Using the same searcher ensures that all of these queries will run on the
@@ -205,7 +201,7 @@ fn main() -> tantivy::Result<()> {
 // in both title and body.
 let query_parser = QueryParser::for_index(&index, vec![title, body]);
-// QueryParser may fail if the query is not in the right
+// `QueryParser` may fail if the query is not in the right
 // format. For user facing applications, this can be a problem.
 // A ticket has been opened regarding this problem.
 let query = query_parser.parse_query("sea whale")?;
@@ -221,7 +217,7 @@ fn main() -> tantivy::Result<()> {
 //
 // We are not interested in all of the documents but
 // only in the top 10. Keeping track of our top 10 best documents
-// is the role of the TopDocs.
+// is the role of the `TopDocs` collector.
 // We can now perform our query.
 let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;

@@ -9,15 +9,12 @@
 // ---
 // Importing tantivy...
-#[macro_use]
-extern crate tantivy;
 use tantivy::collector::{Collector, SegmentCollector};
 use tantivy::fastfield::FastFieldReader;
 use tantivy::query::QueryParser;
 use tantivy::schema::Field;
 use tantivy::schema::{Schema, FAST, INDEXED, TEXT};
-use tantivy::SegmentReader;
-use tantivy::{Index, TantivyError};
+use tantivy::{doc, Index, SegmentReader, TantivyError};
 #[derive(Default)]
 struct Stats {

@@ -2,14 +2,11 @@
 //
 // In this example, we'll see how to define a tokenizer pipeline
 // by aligning a bunch of `TokenFilter`.
-#[macro_use]
-extern crate tantivy;
 use tantivy::collector::TopDocs;
 use tantivy::query::QueryParser;
 use tantivy::schema::*;
 use tantivy::tokenizer::NgramTokenizer;
-use tantivy::Index;
+use tantivy::{doc, Index};
 fn main() -> tantivy::Result<()> {
 // # Defining the schema

@@ -8,13 +8,10 @@
 //
 // ---
 // Importing tantivy...
-#[macro_use]
-extern crate tantivy;
 use tantivy::collector::TopDocs;
 use tantivy::query::TermQuery;
 use tantivy::schema::*;
-use tantivy::Index;
-use tantivy::IndexReader;
+use tantivy::{doc, Index, IndexReader};
 // A simple helper function to fetch a single document
 // given its id from our index.

View File

@@ -12,67 +12,101 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::FacetCollector; use tantivy::collector::FacetCollector;
use tantivy::query::AllQuery; use tantivy::query::{AllQuery, TermQuery};
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::Index; use tantivy::{doc, Index};
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the // Let's create a temporary directory for the sake of this example
// sake of this example
let index_path = TempDir::new("tantivy_facet_example_dir")?;
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
schema_builder.add_text_field("name", TEXT | STORED); let name = schema_builder.add_text_field("felin_name", TEXT | STORED);
// this is our faceted field: its scientific classification
// this is our faceted field let classification = schema_builder.add_facet_field("classification");
schema_builder.add_facet_field("tags");
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let index = Index::create_in_dir(&index_path, schema.clone())?; let mut index_writer = index.writer(30_000_000)?;
let mut index_writer = index.writer(50_000_000)?;
let name = schema.get_field("name").unwrap();
let tags = schema.get_field("tags").unwrap();
// For convenience, tantivy also comes with a macro to // For convenience, tantivy also comes with a macro to
// reduce the boilerplate above. // reduce the boilerplate above.
index_writer.add_document(doc!( index_writer.add_document(doc!(
name => "the ditch", name => "Cat",
tags => Facet::from("/pools/north") classification => Facet::from("/Felidae/Felinae/Felis")
)); ));
index_writer.add_document(doc!( index_writer.add_document(doc!(
name => "little stacey", name => "Canada lynx",
tags => Facet::from("/pools/south") classification => Facet::from("/Felidae/Felinae/Lynx")
));
index_writer.add_document(doc!(
name => "Cheetah",
classification => Facet::from("/Felidae/Felinae/Acinonyx")
));
index_writer.add_document(doc!(
name => "Tiger",
classification => Facet::from("/Felidae/Pantherinae/Panthera")
));
index_writer.add_document(doc!(
name => "Lion",
classification => Facet::from("/Felidae/Pantherinae/Panthera")
));
index_writer.add_document(doc!(
name => "Jaguar",
classification => Facet::from("/Felidae/Pantherinae/Panthera")
));
index_writer.add_document(doc!(
name => "Sunda clouded leopard",
classification => Facet::from("/Felidae/Pantherinae/Neofelis")
));
index_writer.add_document(doc!(
name => "Fossa",
classification => Facet::from("/Eupleridae/Cryptoprocta")
)); ));
index_writer.commit()?; index_writer.commit()?;
let reader = index.reader()?; let reader = index.reader()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
{
let mut facet_collector = FacetCollector::for_field(classification);
facet_collector.add_facet("/Felidae");
let facet_counts = searcher.search(&AllQuery, &facet_collector)?;
// This lists all of the facet counts, right below "/Felidae".
let facets: Vec<(&Facet, u64)> = facet_counts.get("/Felidae").collect();
assert_eq!(
facets,
vec![
(&Facet::from("/Felidae/Felinae"), 3),
(&Facet::from("/Felidae/Pantherinae"), 4),
]
);
}
let mut facet_collector = FacetCollector::for_field(tags); // Facets are also searchable.
facet_collector.add_facet("/pools"); //
// // For instance a common UI pattern is to allow the user to click on a facet link
// (e.g: `Pantherinae`) to drill down and filter the current result set with this subfacet.
//
// The search would then look as follows.
let facet_counts = searcher.search(&AllQuery, &facet_collector).unwrap(); // Check the reference doc for different ways to create a `Facet` object.
{
// This lists all of the facet counts let facet = Facet::from_text("/Felidae/Pantherinae");
let facets: Vec<(&Facet, u64)> = facet_counts.get("/pools").collect(); let facet_term = Term::from_facet(classification, &facet);
assert_eq!( let facet_term_query = TermQuery::new(facet_term, IndexRecordOption::Basic);
facets, let mut facet_collector = FacetCollector::for_field(classification);
vec![ facet_collector.add_facet("/Felidae/Pantherinae");
(&Facet::from("/pools/north"), 1), let facet_counts = searcher.search(&facet_term_query, &facet_collector)?;
(&Facet::from("/pools/south"), 1), let facets: Vec<(&Facet, u64)> = facet_counts.get("/Felidae/Pantherinae").collect();
] assert_eq!(
); facets,
vec![
(&Facet::from("/Felidae/Pantherinae/Neofelis"), 1),
(&Facet::from("/Felidae/Pantherinae/Panthera"), 3),
]
);
}
Ok(()) Ok(())
} }
use tempdir::TempDir;
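As an aside on the comment above about the different ways to create a `Facet`: the two constructors used in this diff, `Facet::from` and `Facet::from_text`, build the same value from a path string. A minimal sketch, assuming `tantivy::schema::Facet` is in scope as in the example:

let drill_down = Facet::from("/Felidae/Pantherinae");
assert_eq!(drill_down, Facet::from_text("/Felidae/Pantherinae"));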


@@ -2,14 +2,10 @@
// //
// Below is an example of creating an indexed integer field in your schema // Below is an example of creating an indexed integer field in your schema
// You can use RangeQuery to get a Count of all occurrences in a given range. // You can use RangeQuery to get a Count of all occurrences in a given range.
#[macro_use]
extern crate tantivy;
use tantivy::collector::Count; use tantivy::collector::Count;
use tantivy::query::RangeQuery; use tantivy::query::RangeQuery;
use tantivy::schema::{Schema, INDEXED}; use tantivy::schema::{Schema, INDEXED};
use tantivy::Index; use tantivy::{doc, Index, Result};
use tantivy::Result;
fn run() -> Result<()> { fn run() -> Result<()> {
// For the sake of simplicity, this schema will only have 1 field // For the sake of simplicity, this schema will only have 1 field


@@ -9,11 +9,8 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::Index; use tantivy::{doc, DocId, DocSet, Index, Postings};
use tantivy::{DocId, DocSet, Postings};
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// We first create a schema for the sake of the // We first create a schema for the sake of the


@@ -25,14 +25,11 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use std::sync::{Arc, RwLock}; use std::sync::{Arc, RwLock};
use std::thread; use std::thread;
use std::time::Duration; use std::time::Duration;
use tantivy::schema::{Schema, STORED, TEXT}; use tantivy::schema::{Schema, STORED, TEXT};
use tantivy::Opstamp; use tantivy::{doc, Index, IndexWriter, Opstamp};
use tantivy::{Index, IndexWriter};
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// # Defining the schema // # Defining the schema
@@ -49,10 +46,9 @@ fn main() -> tantivy::Result<()> {
thread::spawn(move || { thread::spawn(move || {
// we index 100 times the document... for the sake of the example. // we index 100 times the document... for the sake of the example.
for i in 0..100 { for i in 0..100 {
let opstamp = { let opstamp = index_writer_clone_1
// A read lock is sufficient here. .read().unwrap() //< A read lock is sufficient here.
let index_writer_rlock = index_writer_clone_1.read().unwrap(); .add_document(
index_writer_rlock.add_document(
doc!( doc!(
title => "Of Mice and Men", title => "Of Mice and Men",
body => "A few miles south of Soledad, the Salinas River drops in close to the hillside \ body => "A few miles south of Soledad, the Salinas River drops in close to the hillside \
@@ -63,8 +59,7 @@ fn main() -> tantivy::Result<()> {
fresh and green with every spring, carrying in their lower leaf junctures the \ fresh and green with every spring, carrying in their lower leaf junctures the \
debris of the winters flooding; and sycamores with mottled, white, recumbent \ debris of the winters flooding; and sycamores with mottled, white, recumbent \
limbs and branches that arch over the pool" limbs and branches that arch over the pool"
)) ));
};
println!("add doc {} from thread 1 - opstamp {}", i, opstamp); println!("add doc {} from thread 1 - opstamp {}", i, opstamp);
thread::sleep(Duration::from_millis(20)); thread::sleep(Duration::from_millis(20));
} }
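For context, the part of this example outside the hunks shares the writer across threads. A rough sketch of that setup (the 50 MB heap budget and the helper name `sketch` are assumptions; only `index_writer_clone_1` comes from the hunk above):

use std::sync::{Arc, RwLock};
use tantivy::{Index, IndexWriter};

fn sketch(index: &Index) -> tantivy::Result<()> {
    let index_writer: Arc<RwLock<IndexWriter>> = Arc::new(RwLock::new(index.writer(50_000_000)?));
    // Clones of the Arc are moved into the indexing threads; `add_document`
    // takes `&self`, so those threads only need a read lock, as in the hunk above.
    let _index_writer_clone_1 = Arc::clone(&index_writer);
    // `commit` takes `&mut self`, so the committing thread needs a write lock.
    index_writer.write().unwrap().commit()?;
    Ok(())
}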


@@ -0,0 +1,139 @@
// # Pre-tokenized text example
//
// This example shows how to use pre-tokenized text. Sometimes you might
// want to index and search through text which is already split into
// tokens by some external tool.
//
// In this example we will:
// - use tantivy tokenizer to create tokens and load them directly into tantivy,
// - import tokenized text straight from json,
// - perform a search on documents with pre-tokenized text
use tantivy::collector::{Count, TopDocs};
use tantivy::query::TermQuery;
use tantivy::schema::*;
use tantivy::tokenizer::{PreTokenizedString, SimpleTokenizer, Token, Tokenizer};
use tantivy::{doc, Index, ReloadPolicy};
use tempfile::TempDir;
fn pre_tokenize_text(text: &str) -> Vec<Token> {
let mut token_stream = SimpleTokenizer.token_stream(text);
let mut tokens = vec![];
while token_stream.advance() {
tokens.push(token_stream.token().clone());
}
tokens
}
fn main() -> tantivy::Result<()> {
let index_path = TempDir::new()?;
let mut schema_builder = Schema::builder();
schema_builder.add_text_field("title", TEXT | STORED);
schema_builder.add_text_field("body", TEXT);
let schema = schema_builder.build();
let index = Index::create_in_dir(&index_path, schema.clone())?;
let mut index_writer = index.writer(50_000_000)?;
// We can create a document manually, by setting the fields
// one by one in a Document object.
let title = schema.get_field("title").unwrap();
let body = schema.get_field("body").unwrap();
let title_text = "The Old Man and the Sea";
let body_text = "He was an old man who fished alone in a skiff in the Gulf Stream";
// Content of our first document
// We create `PreTokenizedString` which contains original text and vector of tokens
let title_tok = PreTokenizedString {
text: String::from(title_text),
tokens: pre_tokenize_text(title_text),
};
println!(
"Original text: \"{}\" and tokens: {:?}",
title_tok.text, title_tok.tokens
);
let body_tok = PreTokenizedString {
text: String::from(body_text),
tokens: pre_tokenize_text(body_text),
};
// Now lets create a document and add our `PreTokenizedString`
let old_man_doc = doc!(title => title_tok, body => body_tok);
// ... now let's just add it to the IndexWriter
index_writer.add_document(old_man_doc);
// Pretokenized text can also be fed as JSON
let short_man_json = r#"{
"title":[{
"text":"The Old Man",
"tokens":[
{"offset_from":0,"offset_to":3,"position":0,"text":"The","position_length":1},
{"offset_from":4,"offset_to":7,"position":1,"text":"Old","position_length":1},
{"offset_from":8,"offset_to":11,"position":2,"text":"Man","position_length":1}
]
}]
}"#;
let short_man_doc = schema.parse_document(&short_man_json)?;
index_writer.add_document(short_man_doc);
// Let's commit changes
index_writer.commit()?;
// ... and now is the time to query our index
let reader = index
.reader_builder()
.reload_policy(ReloadPolicy::OnCommit)
.try_into()?;
let searcher = reader.searcher();
// We want to get documents with the token "Man"; we will use a TermQuery to do that.
// Using PreTokenizedString means the tokens are indexed as-is, avoiding stemming
// and lowercasing, which preserves full words in their original form.
let query = TermQuery::new(
Term::from_field_text(title, "Man"),
IndexRecordOption::Basic,
);
let (top_docs, count) = searcher
.search(&query, &(TopDocs::with_limit(2), Count))
.unwrap();
assert_eq!(count, 2);
// Now let's print out the results.
// Note that the tokens are not stored along with the original text
// in the document store
for (_score, doc_address) in top_docs {
let retrieved_doc = searcher.doc(doc_address)?;
println!("Document: {}", schema.to_json(&retrieved_doc));
}
// Contrary to the previous query, when we search for the "man" term we
// should get no results, as it's not one of the indexed tokens. SimpleTokenizer
// only splits text on whitespace / punctuation.
let query = TermQuery::new(
Term::from_field_text(title, "man"),
IndexRecordOption::Basic,
);
let (_top_docs, count) = searcher
.search(&query, &(TopDocs::with_limit(2), Count))
.unwrap();
assert_eq!(count, 0);
Ok(())
}
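The JSON above also documents the shape of a `Token`, so the same document could be built without `parse_document`, directly from `Token` values. A sketch (field names taken from the JSON; it assumes `Token`'s fields are public exactly as serialized):

use tantivy::tokenizer::{PreTokenizedString, Token};

let manual_title = PreTokenizedString {
    text: "The Old Man".to_string(),
    tokens: vec![
        Token { offset_from: 0, offset_to: 3, position: 0, text: "The".to_string(), position_length: 1 },
        Token { offset_from: 4, offset_to: 7, position: 1, text: "Old".to_string(), position_length: 1 },
        Token { offset_from: 8, offset_to: 11, position: 2, text: "Man".to_string(), position_length: 1 },
    ],
};
// index_writer.add_document(doc!(title => manual_title));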


@@ -7,19 +7,16 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs; use tantivy::collector::TopDocs;
use tantivy::query::QueryParser; use tantivy::query::QueryParser;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::Index; use tantivy::{doc, Index, Snippet, SnippetGenerator};
use tantivy::{Snippet, SnippetGenerator}; use tempfile::TempDir;
use tempdir::TempDir;
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the // Let's create a temporary directory for the
// sake of this example // sake of this example
let index_path = TempDir::new("tantivy_example_dir")?; let index_path = TempDir::new()?;
// # Defining the schema // # Defining the schema
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();


@@ -11,13 +11,11 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs; use tantivy::collector::TopDocs;
use tantivy::query::QueryParser; use tantivy::query::QueryParser;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::tokenizer::*; use tantivy::tokenizer::*;
use tantivy::Index; use tantivy::{doc, Index};
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// this example assumes you understand the content in `basic_search` // this example assumes you understand the content in `basic_search`
@@ -52,7 +50,7 @@ fn main() -> tantivy::Result<()> {
// This tokenizer lowers all of the text (to help with stop word matching) // This tokenizer lowers all of the text (to help with stop word matching)
// then removes all instances of `the` and `and` from the corpus // then removes all instances of `the` and `and` from the corpus
let tokenizer = SimpleTokenizer let tokenizer = TextAnalyzer::from(SimpleTokenizer)
.filter(LowerCaser) .filter(LowerCaser)
.filter(StopWordFilter::remove(vec![ .filter(StopWordFilter::remove(vec![
"the".to_string(), "the".to_string(),

query-grammar/Cargo.toml (new file)

@@ -0,0 +1,16 @@
[package]
name = "tantivy-query-grammar"
version = "0.13.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT"
categories = ["database-implementations", "data-structures"]
description = """Search engine library"""
documentation = "https://tantivy-search.github.io/tantivy/tantivy/index.html"
homepage = "https://github.com/tantivy-search/tantivy"
repository = "https://github.com/tantivy-search/tantivy"
readme = "README.md"
keywords = ["search", "information", "retrieval"]
edition = "2018"
[dependencies]
combine = {version="4", default-features=false, features=[] }

query-grammar/README.md (new file)

@@ -0,0 +1,3 @@
# Tantivy Query Grammar
This crate is used by tantivy to parse queries.

query-grammar/src/lib.rs (new file)

@@ -0,0 +1,15 @@
mod occur;
mod query_grammar;
mod user_input_ast;
use combine::parser::Parser;
pub use crate::occur::Occur;
use crate::query_grammar::parse_to_ast;
pub use crate::user_input_ast::{UserInputAST, UserInputBound, UserInputLeaf, UserInputLiteral};
pub struct Error;
pub fn parse_query(query: &str) -> Result<UserInputAST, Error> {
let (user_input_ast, _remaining) = parse_to_ast().parse(query).map_err(|_| Error)?;
Ok(user_input_ast)
}
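A minimal usage sketch of this new entry point, assuming the crate is consumed under its package name `tantivy-query-grammar` (so `tantivy_query_grammar` in Rust). The Debug rendering follows the parser tests further down:

use tantivy_query_grammar::parse_query;

fn demo() {
    if let Ok(ast) = parse_query("title:diary AND year:[2000 TO 2010]") {
        // Debug output along the lines of: (+title:"diary" +year:["2000" TO "2010"])
        println!("{:?}", ast);
    }
}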


@@ -1,5 +1,8 @@
use std::fmt;
use std::fmt::Write;
/// Defines whether a term in a query must be present, /// Defines whether a term in a query must be present,
/// should be present or must not be present. /// should be present or must not be present.
#[derive(Debug, Clone, Hash, Copy, Eq, PartialEq)] #[derive(Debug, Clone, Hash, Copy, Eq, PartialEq)]
pub enum Occur { pub enum Occur {
/// For a given document to be considered for scoring, /// For a given document to be considered for scoring,
@@ -18,32 +21,38 @@ impl Occur {
/// - `Should` => '?', /// - `Should` => '?',
/// - `Must` => '+' /// - `Must` => '+'
/// - `Not` => '-' /// - `Not` => '-'
pub fn to_char(self) -> char { fn to_char(self) -> char {
match self { match self {
Occur::Should => '?', Occur::Should => '?',
Occur::Must => '+', Occur::Must => '+',
Occur::MustNot => '-', Occur::MustNot => '-',
} }
} }
}
/// Compose two occur values. /// Compose two occur values.
pub fn compose_occur(left: Occur, right: Occur) -> Occur { pub fn compose(left: Occur, right: Occur) -> Occur {
match left { match left {
Occur::Should => right, Occur::Should => right,
Occur::Must => { Occur::Must => {
if right == Occur::MustNot { if right == Occur::MustNot {
Occur::MustNot Occur::MustNot
} else { } else {
Occur::Must Occur::Must
}
} }
} Occur::MustNot => {
Occur::MustNot => { if right == Occur::MustNot {
if right == Occur::MustNot { Occur::Must
Occur::Must } else {
} else { Occur::MustNot
Occur::MustNot }
} }
} }
} }
} }
impl fmt::Display for Occur {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_char(self.to_char())
}
}
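The composition rules above are easy to misread, so here is a small sanity-check sketch. It assumes `compose` now lives on `Occur` as an associated function (replacing the old free function `compose_occur`), which is what the diff suggests:

#[cfg(test)]
mod compose_sketch {
    use super::Occur;

    #[test]
    fn compose_table() {
        // `Should` defers to the right-hand side.
        assert_eq!(Occur::compose(Occur::Should, Occur::MustNot), Occur::MustNot);
        // `Must` stays required unless the right-hand side negates it.
        assert_eq!(Occur::compose(Occur::Must, Occur::MustNot), Occur::MustNot);
        // Two negations cancel out.
        assert_eq!(Occur::compose(Occur::MustNot, Occur::MustNot), Occur::Must);
    }
}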


@@ -0,0 +1,510 @@
use super::user_input_ast::{UserInputAST, UserInputBound, UserInputLeaf, UserInputLiteral};
use crate::Occur;
use combine::error::StringStreamError;
use combine::parser::char::{char, digit, letter, space, spaces, string};
use combine::parser::Parser;
use combine::{
attempt, choice, eof, many, many1, one_of, optional, parser, satisfy, skip_many1, value,
};
fn field<'a>() -> impl Parser<&'a str, Output = String> {
(
letter(),
many(satisfy(|c: char| c.is_alphanumeric() || c == '_')),
)
.skip(char(':'))
.map(|(s1, s2): (char, String)| format!("{}{}", s1, s2))
}
fn word<'a>() -> impl Parser<&'a str, Output = String> {
(
satisfy(|c: char| {
!c.is_whitespace()
&& !['-', '^', '`', ':', '{', '}', '"', '[', ']', '(', ')'].contains(&c)
}),
many(satisfy(|c: char| {
!c.is_whitespace() && ![':', '^', '{', '}', '"', '[', ']', '(', ')'].contains(&c)
})),
)
.map(|(s1, s2): (char, String)| format!("{}{}", s1, s2))
.and_then(|s: String| match s.as_str() {
"OR" | "AND " | "NOT" => Err(StringStreamError::UnexpectedParse),
_ => Ok(s),
})
}
fn term_val<'a>() -> impl Parser<&'a str, Output = String> {
let phrase = char('"').with(many1(satisfy(|c| c != '"'))).skip(char('"'));
phrase.or(word())
}
fn term_query<'a>() -> impl Parser<&'a str, Output = UserInputLiteral> {
let term_val_with_field = negative_number().or(term_val());
(field(), term_val_with_field).map(|(field_name, phrase)| UserInputLiteral {
field_name: Some(field_name),
phrase,
})
}
fn literal<'a>() -> impl Parser<&'a str, Output = UserInputLeaf> {
let term_default_field = term_val().map(|phrase| UserInputLiteral {
field_name: None,
phrase,
});
attempt(term_query())
.or(term_default_field)
.map(UserInputLeaf::from)
}
fn negative_number<'a>() -> impl Parser<&'a str, Output = String> {
(
char('-'),
many1(digit()),
optional((char('.'), many1(digit()))),
)
.map(|(s1, s2, s3): (char, String, Option<(char, String)>)| {
if let Some(('.', s3)) = s3 {
format!("{}{}.{}", s1, s2, s3)
} else {
format!("{}{}", s1, s2)
}
})
}
fn spaces1<'a>() -> impl Parser<&'a str, Output = ()> {
skip_many1(space())
}
/// Function that parses a range out of a Stream
/// Supports ranges like:
/// [5 TO 10], {5 TO 10}, [* TO 10], [10 TO *], {10 TO *], >5, <=10
/// [a TO *], [a TO c], [abc TO bcd}
fn range<'a>() -> impl Parser<&'a str, Output = UserInputLeaf> {
let range_term_val = || {
word()
.or(negative_number())
.or(char('*').with(value("*".to_string())))
};
// check for unbounded range in the form of <5, <=10, >5, >=5
let elastic_unbounded_range = (
choice([
attempt(string(">=")),
attempt(string("<=")),
attempt(string("<")),
attempt(string(">")),
])
.skip(spaces()),
range_term_val(),
)
.map(
|(comparison_sign, bound): (&str, String)| match comparison_sign {
">=" => (UserInputBound::Inclusive(bound), UserInputBound::Unbounded),
"<=" => (UserInputBound::Unbounded, UserInputBound::Inclusive(bound)),
"<" => (UserInputBound::Unbounded, UserInputBound::Exclusive(bound)),
">" => (UserInputBound::Exclusive(bound), UserInputBound::Unbounded),
// default case
_ => (UserInputBound::Unbounded, UserInputBound::Unbounded),
},
);
let lower_bound = (one_of("{[".chars()), range_term_val()).map(
|(boundary_char, lower_bound): (char, String)| {
if lower_bound == "*" {
UserInputBound::Unbounded
} else if boundary_char == '{' {
UserInputBound::Exclusive(lower_bound)
} else {
UserInputBound::Inclusive(lower_bound)
}
},
);
let upper_bound = (range_term_val(), one_of("}]".chars())).map(
|(higher_bound, boundary_char): (String, char)| {
if higher_bound == "*" {
UserInputBound::Unbounded
} else if boundary_char == '}' {
UserInputBound::Exclusive(higher_bound)
} else {
UserInputBound::Inclusive(higher_bound)
}
},
);
// return only lower and upper
let lower_to_upper = (
lower_bound.skip((spaces(), string("TO"), spaces())),
upper_bound,
);
(
optional(field()).skip(spaces()),
// try elastic first, if it matches, the range is unbounded
attempt(elastic_unbounded_range).or(lower_to_upper),
)
.map(|(field, (lower, upper))|
// Construct the leaf from extracted field (optional)
// and bounds
UserInputLeaf::Range {
field,
lower,
upper
})
}
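A few concrete mappings for the syntaxes listed in the doc comment above; they match `test_range_parser` and `test_parse_elastic_query_ranges` below:

// weight:>=70        -> field: Some("weight"), lower: Inclusive("70"),   upper: Unbounded
// weight:[71.2 TO *}  -> field: Some("weight"), lower: Inclusive("71.2"), upper: Unbounded
// title:{* TO hello}  -> field: Some("title"),  lower: Unbounded,         upper: Exclusive("hello")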
fn negate(expr: UserInputAST) -> UserInputAST {
expr.unary(Occur::MustNot)
}
fn leaf<'a>() -> impl Parser<&'a str, Output = UserInputAST> {
parser(|input| {
char('(')
.with(ast())
.skip(char(')'))
.or(char('*').map(|_| UserInputAST::from(UserInputLeaf::All)))
.or(attempt(
string("NOT").skip(spaces1()).with(leaf()).map(negate),
))
.or(attempt(range().map(UserInputAST::from)))
.or(literal().map(UserInputAST::from))
.parse_stream(input)
.into_result()
})
}
fn occur_symbol<'a>() -> impl Parser<&'a str, Output = Occur> {
char('-')
.map(|_| Occur::MustNot)
.or(char('+').map(|_| Occur::Must))
}
fn occur_leaf<'a>() -> impl Parser<&'a str, Output = (Option<Occur>, UserInputAST)> {
(optional(occur_symbol()), boosted_leaf())
}
fn positive_float_number<'a>() -> impl Parser<&'a str, Output = f32> {
(many1(digit()), optional((char('.'), many1(digit())))).map(
|(int_part, decimal_part_opt): (String, Option<(char, String)>)| {
let mut float_str = int_part;
if let Some((chr, decimal_str)) = decimal_part_opt {
float_str.push(chr);
float_str.push_str(&decimal_str);
}
float_str.parse::<f32>().unwrap()
},
)
}
fn boost<'a>() -> impl Parser<&'a str, Output = f32> {
(char('^'), positive_float_number()).map(|(_, boost)| boost)
}
fn boosted_leaf<'a>() -> impl Parser<&'a str, Output = UserInputAST> {
(leaf(), optional(boost())).map(|(leaf, boost_opt)| match boost_opt {
Some(boost) if (boost - 1.0).abs() > std::f32::EPSILON => {
UserInputAST::Boost(Box::new(leaf), boost)
}
_ => leaf,
})
}
#[derive(Clone, Copy)]
enum BinaryOperand {
Or,
And,
}
fn binary_operand<'a>() -> impl Parser<&'a str, Output = BinaryOperand> {
string("AND")
.with(value(BinaryOperand::And))
.or(string("OR").with(value(BinaryOperand::Or)))
}
fn aggregate_binary_expressions(
left: UserInputAST,
others: Vec<(BinaryOperand, UserInputAST)>,
) -> UserInputAST {
let mut dnf: Vec<Vec<UserInputAST>> = vec![vec![left]];
for (operator, operand_ast) in others {
match operator {
BinaryOperand::And => {
if let Some(last) = dnf.last_mut() {
last.push(operand_ast);
}
}
BinaryOperand::Or => {
dnf.push(vec![operand_ast]);
}
}
}
if dnf.len() == 1 {
UserInputAST::and(dnf.into_iter().next().unwrap()) //< safe
} else {
let conjunctions = dnf.into_iter().map(UserInputAST::and).collect();
UserInputAST::or(conjunctions)
}
}
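To make the disjunctive-normal-form grouping above concrete, here is a worked trace in comments; the final rendering matches `test_parse_query_to_ast_binary_op` below:

// Input: a OR b AND c
//   left   = a, others = [(Or, b), (And, c)]
//   dnf    = [[a], [b, c]]                 (AND appends to the last group, OR opens a new one)
//   result = or([and([a]), and([b, c])]) -> Debug-prints as (?"a" ?(+"b" +"c"))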
fn operand_leaf<'a>() -> impl Parser<&'a str, Output = (BinaryOperand, UserInputAST)> {
(
binary_operand().skip(spaces()),
boosted_leaf().skip(spaces()),
)
}
pub fn ast<'a>() -> impl Parser<&'a str, Output = UserInputAST> {
let boolean_expr = (boosted_leaf().skip(spaces()), many1(operand_leaf()))
.map(|(left, right)| aggregate_binary_expressions(left, right));
let whitespace_separated_leaves = many1(occur_leaf().skip(spaces().silent())).map(
|subqueries: Vec<(Option<Occur>, UserInputAST)>| {
if subqueries.len() == 1 {
let (occur_opt, ast) = subqueries.into_iter().next().unwrap();
match occur_opt.unwrap_or(Occur::Should) {
Occur::Must | Occur::Should => ast,
Occur::MustNot => UserInputAST::Clause(vec![(Some(Occur::MustNot), ast)]),
}
} else {
UserInputAST::Clause(subqueries.into_iter().collect())
}
},
);
let expr = attempt(boolean_expr).or(whitespace_separated_leaves);
spaces().with(expr).skip(spaces())
}
pub fn parse_to_ast<'a>() -> impl Parser<&'a str, Output = UserInputAST> {
spaces()
.with(optional(ast()).skip(eof()))
.map(|opt_ast| opt_ast.unwrap_or_else(UserInputAST::empty_query))
}
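One behaviour worth calling out: because `ast()` is wrapped in `optional`, an empty input is not a parse error.

// parse_to_ast().parse("") succeeds and yields UserInputAST::empty_query(),
// i.e. an empty Clause, which Debug-prints as <emptyclause>
// (see test_parse_empty_to_ast below).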
#[cfg(test)]
mod test {
use super::*;
use combine::parser::Parser;
pub fn nearly_equals(a: f32, b: f32) -> bool {
(a - b).abs() < 0.0005 * (a + b).abs()
}
fn assert_nearly_equals(expected: f32, val: f32) {
assert!(
nearly_equals(val, expected),
"Got {}, expected {}.",
val,
expected
);
}
#[test]
fn test_occur_symbol() {
assert_eq!(super::occur_symbol().parse("-"), Ok((Occur::MustNot, "")));
assert_eq!(super::occur_symbol().parse("+"), Ok((Occur::Must, "")));
}
#[test]
fn test_positive_float_number() {
fn valid_parse(float_str: &str, expected_val: f32, expected_remaining: &str) {
let (val, remaining) = positive_float_number().parse(float_str).unwrap();
assert_eq!(remaining, expected_remaining);
assert_nearly_equals(val, expected_val);
}
fn error_parse(float_str: &str) {
assert!(positive_float_number().parse(float_str).is_err());
}
valid_parse("1.0", 1.0f32, "");
valid_parse("1", 1.0f32, "");
valid_parse("0.234234 aaa", 0.234234f32, " aaa");
error_parse(".3332");
error_parse("1.");
error_parse("-1.");
}
fn test_parse_query_to_ast_helper(query: &str, expected: &str) {
let query = parse_to_ast().parse(query).unwrap().0;
let query_str = format!("{:?}", query);
assert_eq!(query_str, expected);
}
fn test_is_parse_err(query: &str) {
assert!(parse_to_ast().parse(query).is_err());
}
#[test]
fn test_parse_empty_to_ast() {
test_parse_query_to_ast_helper("", "<emptyclause>");
}
#[test]
fn test_parse_query_to_ast_hyphen() {
test_parse_query_to_ast_helper("\"www-form-encoded\"", "\"www-form-encoded\"");
test_parse_query_to_ast_helper("www-form-encoded", "\"www-form-encoded\"");
test_parse_query_to_ast_helper("www-form-encoded", "\"www-form-encoded\"");
}
#[test]
fn test_parse_query_to_ast_not_op() {
assert_eq!(
format!("{:?}", parse_to_ast().parse("NOT")),
"Err(UnexpectedParse)"
);
test_parse_query_to_ast_helper("NOTa", "\"NOTa\"");
test_parse_query_to_ast_helper("NOT a", "(-\"a\")");
}
#[test]
fn test_boosting() {
assert!(parse_to_ast().parse("a^2^3").is_err());
assert!(parse_to_ast().parse("a^2^").is_err());
test_parse_query_to_ast_helper("a^3", "(\"a\")^3");
test_parse_query_to_ast_helper("a^3 b^2", "(*(\"a\")^3 *(\"b\")^2)");
test_parse_query_to_ast_helper("a^1", "\"a\"");
}
#[test]
fn test_parse_query_to_ast_binary_op() {
test_parse_query_to_ast_helper("a AND b", "(+\"a\" +\"b\")");
test_parse_query_to_ast_helper("a OR b", "(?\"a\" ?\"b\")");
test_parse_query_to_ast_helper("a OR b AND c", "(?\"a\" ?(+\"b\" +\"c\"))");
test_parse_query_to_ast_helper("a AND b AND c", "(+\"a\" +\"b\" +\"c\")");
assert_eq!(
format!("{:?}", parse_to_ast().parse("a OR b aaa")),
"Err(UnexpectedParse)"
);
assert_eq!(
format!("{:?}", parse_to_ast().parse("a AND b aaa")),
"Err(UnexpectedParse)"
);
assert_eq!(
format!("{:?}", parse_to_ast().parse("aaa a OR b ")),
"Err(UnexpectedParse)"
);
assert_eq!(
format!("{:?}", parse_to_ast().parse("aaa ccc a OR b ")),
"Err(UnexpectedParse)"
);
}
#[test]
fn test_parse_elastic_query_ranges() {
test_parse_query_to_ast_helper("title: >a", "title:{\"a\" TO \"*\"}");
test_parse_query_to_ast_helper("title:>=a", "title:[\"a\" TO \"*\"}");
test_parse_query_to_ast_helper("title: <a", "title:{\"*\" TO \"a\"}");
test_parse_query_to_ast_helper("title:<=a", "title:{\"*\" TO \"a\"]");
test_parse_query_to_ast_helper("title:<=bsd", "title:{\"*\" TO \"bsd\"]");
test_parse_query_to_ast_helper("weight: >70", "weight:{\"70\" TO \"*\"}");
test_parse_query_to_ast_helper("weight:>=70", "weight:[\"70\" TO \"*\"}");
test_parse_query_to_ast_helper("weight: <70", "weight:{\"*\" TO \"70\"}");
test_parse_query_to_ast_helper("weight:<=70", "weight:{\"*\" TO \"70\"]");
test_parse_query_to_ast_helper("weight: >60.7", "weight:{\"60.7\" TO \"*\"}");
test_parse_query_to_ast_helper("weight: <= 70", "weight:{\"*\" TO \"70\"]");
test_parse_query_to_ast_helper("weight: <= 70.5", "weight:{\"*\" TO \"70.5\"]");
}
#[test]
fn test_occur_leaf() {
let ((occur, ast), _) = super::occur_leaf().parse("+abc").unwrap();
assert_eq!(occur, Some(Occur::Must));
assert_eq!(format!("{:?}", ast), "\"abc\"");
}
#[test]
fn test_range_parser() {
// testing the range() parser separately
let res = range().parse("title: <hello").unwrap().0;
let expected = UserInputLeaf::Range {
field: Some("title".to_string()),
lower: UserInputBound::Unbounded,
upper: UserInputBound::Exclusive("hello".to_string()),
};
let res2 = range().parse("title:{* TO hello}").unwrap().0;
assert_eq!(res, expected);
assert_eq!(res2, expected);
let expected_weight = UserInputLeaf::Range {
field: Some("weight".to_string()),
lower: UserInputBound::Inclusive("71.2".to_string()),
upper: UserInputBound::Unbounded,
};
let res3 = range().parse("weight: >=71.2").unwrap().0;
let res4 = range().parse("weight:[71.2 TO *}").unwrap().0;
assert_eq!(res3, expected_weight);
assert_eq!(res4, expected_weight);
}
#[test]
fn test_parse_query_to_triming_spaces() {
test_parse_query_to_ast_helper(" abc", "\"abc\"");
test_parse_query_to_ast_helper("abc ", "\"abc\"");
test_parse_query_to_ast_helper("( a OR abc)", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("(a OR abc)", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("(a OR abc)", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("a OR abc ", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("(a OR abc )", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("(a OR abc) ", "(?\"a\" ?\"abc\")");
}
#[test]
fn test_parse_query_single_term() {
test_parse_query_to_ast_helper("abc", "\"abc\"");
}
#[test]
fn test_parse_query_default_clause() {
test_parse_query_to_ast_helper("a b", "(*\"a\" *\"b\")");
}
#[test]
fn test_parse_query_must_default_clause() {
test_parse_query_to_ast_helper("+(a b)", "(*\"a\" *\"b\")");
}
#[test]
fn test_parse_query_must_single_term() {
test_parse_query_to_ast_helper("+d", "\"d\"");
}
#[test]
fn test_single_term_with_field() {
test_parse_query_to_ast_helper("abc:toto", "abc:\"toto\"");
}
#[test]
fn test_single_term_with_float() {
test_parse_query_to_ast_helper("abc:1.1", "abc:\"1.1\"");
}
#[test]
fn test_must_clause() {
test_parse_query_to_ast_helper("(+a +b)", "(+\"a\" +\"b\")");
}
#[test]
fn test_parse_test_query_plus_a_b_plus_d() {
test_parse_query_to_ast_helper("+(a b) +d", "(+(*\"a\" *\"b\") +\"d\")");
}
#[test]
fn test_parse_test_query_other() {
test_parse_query_to_ast_helper("(+a +b) d", "(*(+\"a\" +\"b\") *\"d\")");
test_parse_query_to_ast_helper("+abc:toto", "abc:\"toto\"");
test_parse_query_to_ast_helper("(+abc:toto -titi)", "(+abc:\"toto\" -\"titi\")");
test_parse_query_to_ast_helper("-abc:toto", "(-abc:\"toto\")");
test_parse_query_to_ast_helper("abc:a b", "(*abc:\"a\" *\"b\")");
test_parse_query_to_ast_helper("abc:\"a b\"", "abc:\"a b\"");
test_parse_query_to_ast_helper("foo:[1 TO 5]", "foo:[\"1\" TO \"5\"]");
}
#[test]
fn test_parse_query_with_range() {
test_parse_query_to_ast_helper("[1 TO 5]", "[\"1\" TO \"5\"]");
test_parse_query_to_ast_helper("foo:{a TO z}", "foo:{\"a\" TO \"z\"}");
test_parse_query_to_ast_helper("foo:[1 TO toto}", "foo:[\"1\" TO \"toto\"}");
test_parse_query_to_ast_helper("foo:[* TO toto}", "foo:{\"*\" TO \"toto\"}");
test_parse_query_to_ast_helper("foo:[1 TO *}", "foo:[\"1\" TO \"*\"}");
test_parse_query_to_ast_helper("foo:[1.1 TO *}", "foo:[\"1.1\" TO \"*\"}");
test_is_parse_err("abc + ");
}
}


@@ -1,8 +1,9 @@
use std::fmt; use std::fmt;
use std::fmt::{Debug, Formatter}; use std::fmt::{Debug, Formatter};
use crate::query::Occur; use crate::Occur;
#[derive(PartialEq)]
pub enum UserInputLeaf { pub enum UserInputLeaf {
Literal(UserInputLiteral), Literal(UserInputLiteral),
All, All,
@@ -35,6 +36,7 @@ impl Debug for UserInputLeaf {
} }
} }
#[derive(PartialEq)]
pub struct UserInputLiteral { pub struct UserInputLiteral {
pub field_name: Option<String>, pub field_name: Option<String>,
pub phrase: String, pub phrase: String,
@@ -49,9 +51,11 @@ impl fmt::Debug for UserInputLiteral {
} }
} }
#[derive(PartialEq)]
pub enum UserInputBound { pub enum UserInputBound {
Inclusive(String), Inclusive(String),
Exclusive(String), Exclusive(String),
Unbounded,
} }
impl UserInputBound { impl UserInputBound {
@@ -59,6 +63,7 @@ impl UserInputBound {
match *self { match *self {
UserInputBound::Inclusive(ref word) => write!(formatter, "[\"{}\"", word), UserInputBound::Inclusive(ref word) => write!(formatter, "[\"{}\"", word),
UserInputBound::Exclusive(ref word) => write!(formatter, "{{\"{}\"", word), UserInputBound::Exclusive(ref word) => write!(formatter, "{{\"{}\"", word),
UserInputBound::Unbounded => write!(formatter, "{{\"*\""),
} }
} }
@@ -66,6 +71,7 @@ impl UserInputBound {
match *self { match *self {
UserInputBound::Inclusive(ref word) => write!(formatter, "\"{}\"]", word), UserInputBound::Inclusive(ref word) => write!(formatter, "\"{}\"]", word),
UserInputBound::Exclusive(ref word) => write!(formatter, "\"{}\"}}", word), UserInputBound::Exclusive(ref word) => write!(formatter, "\"{}\"}}", word),
UserInputBound::Unbounded => write!(formatter, "\"*\"}}"),
} }
} }
@@ -73,38 +79,40 @@ impl UserInputBound {
match *self { match *self {
UserInputBound::Inclusive(ref contents) => contents, UserInputBound::Inclusive(ref contents) => contents,
UserInputBound::Exclusive(ref contents) => contents, UserInputBound::Exclusive(ref contents) => contents,
UserInputBound::Unbounded => &"*",
} }
} }
} }
pub enum UserInputAST { pub enum UserInputAST {
Clause(Vec<UserInputAST>), Clause(Vec<(Option<Occur>, UserInputAST)>),
Unary(Occur, Box<UserInputAST>),
// Not(Box<UserInputAST>),
// Should(Box<UserInputAST>),
// Must(Box<UserInputAST>),
Leaf(Box<UserInputLeaf>), Leaf(Box<UserInputLeaf>),
Boost(Box<UserInputAST>, f32),
} }
impl UserInputAST { impl UserInputAST {
pub fn unary(self, occur: Occur) -> UserInputAST { pub fn unary(self, occur: Occur) -> UserInputAST {
UserInputAST::Unary(occur, Box::new(self)) UserInputAST::Clause(vec![(Some(occur), self)])
} }
fn compose(occur: Occur, asts: Vec<UserInputAST>) -> UserInputAST { fn compose(occur: Occur, asts: Vec<UserInputAST>) -> UserInputAST {
assert!(occur != Occur::MustNot); assert_ne!(occur, Occur::MustNot);
assert!(!asts.is_empty()); assert!(!asts.is_empty());
if asts.len() == 1 { if asts.len() == 1 {
asts.into_iter().next().unwrap() //< safe asts.into_iter().next().unwrap() //< safe
} else { } else {
UserInputAST::Clause( UserInputAST::Clause(
asts.into_iter() asts.into_iter()
.map(|ast: UserInputAST| ast.unary(occur)) .map(|ast: UserInputAST| (Some(occur), ast))
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
) )
} }
} }
pub fn empty_query() -> UserInputAST {
UserInputAST::Clause(Vec::default())
}
pub fn and(asts: Vec<UserInputAST>) -> UserInputAST { pub fn and(asts: Vec<UserInputAST>) -> UserInputAST {
UserInputAST::compose(Occur::Must, asts) UserInputAST::compose(Occur::Must, asts)
} }
@@ -114,42 +122,6 @@ impl UserInputAST {
} }
} }
/*
impl UserInputAST {
fn compose_occur(self, occur: Occur) -> UserInputAST {
match self {
UserInputAST::Not(other) => {
let new_occur = compose_occur(Occur::MustNot, occur);
other.simplify()
}
_ => {
self
}
}
}
pub fn simplify(self) -> UserInputAST {
match self {
UserInputAST::Clause(els) => {
if els.len() == 1 {
return els.into_iter().next().unwrap();
} else {
return self;
}
}
UserInputAST::Not(els) => {
if els.len() == 1 {
return els.into_iter().next().unwrap();
} else {
return self;
}
}
}
}
}
*/
impl From<UserInputLiteral> for UserInputLeaf { impl From<UserInputLiteral> for UserInputLeaf {
fn from(literal: UserInputLiteral) -> UserInputLeaf { fn from(literal: UserInputLiteral) -> UserInputLeaf {
UserInputLeaf::Literal(literal) UserInputLeaf::Literal(literal)
@@ -162,26 +134,38 @@ impl From<UserInputLeaf> for UserInputAST {
} }
} }
fn print_occur_ast(
occur_opt: Option<Occur>,
ast: &UserInputAST,
formatter: &mut fmt::Formatter,
) -> fmt::Result {
if let Some(occur) = occur_opt {
write!(formatter, "{}{:?}", occur, ast)?;
} else {
write!(formatter, "*{:?}", ast)?;
}
Ok(())
}
impl fmt::Debug for UserInputAST { impl fmt::Debug for UserInputAST {
fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> { fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
match *self { match *self {
UserInputAST::Clause(ref subqueries) => { UserInputAST::Clause(ref subqueries) => {
if subqueries.is_empty() { if subqueries.is_empty() {
write!(formatter, "<emptyclause>")?; write!(formatter, "<emptyclause>")?;
} else { } else {
write!(formatter, "(")?; write!(formatter, "(")?;
write!(formatter, "{:?}", &subqueries[0])?; print_occur_ast(subqueries[0].0, &subqueries[0].1, formatter)?;
for subquery in &subqueries[1..] { for subquery in &subqueries[1..] {
write!(formatter, " {:?}", subquery)?; write!(formatter, " ")?;
print_occur_ast(subquery.0, &subquery.1, formatter)?;
} }
write!(formatter, ")")?; write!(formatter, ")")?;
} }
Ok(()) Ok(())
} }
UserInputAST::Unary(ref occur, ref subquery) => {
write!(formatter, "{}({:?})", occur.to_char(), subquery)
}
UserInputAST::Leaf(ref subquery) => write!(formatter, "{:?}", subquery), UserInputAST::Leaf(ref subquery) => write!(formatter, "{:?}", subquery),
UserInputAST::Boost(ref leaf, boost) => write!(formatter, "({:?})^{}", leaf, boost),
} }
} }
} }
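A few rendering examples for the new `Clause(Vec<(Option<Occur>, UserInputAST)>)` shape implemented above, taken from the parser tests: a `None` occur prints as `*`, an explicit occur prints its symbol, and a boost wraps the sub-tree.

// "a b"      -> (*"a" *"b")
// "(+a +b)"  -> (+"a" +"b")
// "NOT a"    -> (-"a")
// "a^3"      -> ("a")^3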


@@ -1,7 +1,6 @@
use super::Collector; use super::Collector;
use crate::collector::SegmentCollector; use crate::collector::SegmentCollector;
use crate::DocId; use crate::DocId;
use crate::Result;
use crate::Score; use crate::Score;
use crate::SegmentLocalId; use crate::SegmentLocalId;
use crate::SegmentReader; use crate::SegmentReader;
@@ -10,49 +9,32 @@ use crate::SegmentReader;
/// documents match the query. /// documents match the query.
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{Index, Result};
/// use tantivy::collector::Count; /// use tantivy::collector::Count;
/// use tantivy::query::QueryParser; /// use tantivy::query::QueryParser;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{doc, Index};
/// ///
/// # fn main() { example().unwrap(); } /// let mut schema_builder = Schema::builder();
/// fn example() -> Result<()> { /// let title = schema_builder.add_text_field("title", TEXT);
/// let mut schema_builder = Schema::builder(); /// let schema = schema_builder.build();
/// let title = schema_builder.add_text_field("title", TEXT); /// let index = Index::create_in_ram(schema);
/// let schema = schema_builder.build();
/// let index = Index::create_in_ram(schema);
/// {
/// let mut index_writer = index.writer(3_000_000)?;
/// index_writer.add_document(doc!(
/// title => "The Name of the Wind",
/// ));
/// index_writer.add_document(doc!(
/// title => "The Diary of Muadib",
/// ));
/// index_writer.add_document(doc!(
/// title => "A Dairy Cow",
/// ));
/// index_writer.add_document(doc!(
/// title => "The Diary of a Young Girl",
/// ));
/// index_writer.commit().unwrap();
/// }
/// ///
/// let reader = index.reader()?; /// let mut index_writer = index.writer(3_000_000).unwrap();
/// let searcher = reader.searcher(); /// index_writer.add_document(doc!(title => "The Name of the Wind"));
/// index_writer.add_document(doc!(title => "The Diary of Muadib"));
/// index_writer.add_document(doc!(title => "A Dairy Cow"));
/// index_writer.add_document(doc!(title => "The Diary of a Young Girl"));
/// assert!(index_writer.commit().is_ok());
/// ///
/// { /// let reader = index.reader().unwrap();
/// let query_parser = QueryParser::for_index(&index, vec![title]); /// let searcher = reader.searcher();
/// let query = query_parser.parse_query("diary")?;
/// let count = searcher.search(&query, &Count).unwrap();
/// ///
/// assert_eq!(count, 2); /// // Here comes the important part
/// } /// let query_parser = QueryParser::for_index(&index, vec![title]);
/// let query = query_parser.parse_query("diary").unwrap();
/// let count = searcher.search(&query, &Count).unwrap();
/// ///
/// Ok(()) /// assert_eq!(count, 2);
/// }
/// ``` /// ```
pub struct Count; pub struct Count;
@@ -61,7 +43,11 @@ impl Collector for Count {
type Child = SegmentCountCollector; type Child = SegmentCountCollector;
fn for_segment(&self, _: SegmentLocalId, _: &SegmentReader) -> Result<SegmentCountCollector> { fn for_segment(
&self,
_: SegmentLocalId,
_: &SegmentReader,
) -> crate::Result<SegmentCountCollector> {
Ok(SegmentCountCollector::default()) Ok(SegmentCountCollector::default())
} }
@@ -69,7 +55,7 @@ impl Collector for Count {
false false
} }
fn merge_fruits(&self, segment_counts: Vec<usize>) -> Result<usize> { fn merge_fruits(&self, segment_counts: Vec<usize>) -> crate::Result<usize> {
Ok(segment_counts.into_iter().sum()) Ok(segment_counts.into_iter().sum())
} }
} }
@@ -125,5 +111,4 @@ mod tests {
assert_eq!(count_collector.harvest(), 2); assert_eq!(count_collector.harvest(), 2);
} }
} }
} }


@@ -1,6 +1,5 @@
use crate::collector::top_collector::{TopCollector, TopSegmentCollector}; use crate::collector::top_collector::{TopCollector, TopSegmentCollector};
use crate::collector::{Collector, SegmentCollector}; use crate::collector::{Collector, SegmentCollector};
use crate::Result;
use crate::{DocAddress, DocId, Score, SegmentReader}; use crate::{DocAddress, DocId, Score, SegmentReader};
pub(crate) struct CustomScoreTopCollector<TCustomScorer, TScore = Score> { pub(crate) struct CustomScoreTopCollector<TCustomScorer, TScore = Score> {
@@ -42,7 +41,7 @@ pub trait CustomScorer<TScore>: Sync {
type Child: CustomSegmentScorer<TScore>; type Child: CustomSegmentScorer<TScore>;
/// Builds a child scorer for a specific segment. The child scorer is associated to /// Builds a child scorer for a specific segment. The child scorer is associated to
/// a specific segment. /// a specific segment.
fn segment_scorer(&self, segment_reader: &SegmentReader) -> Result<Self::Child>; fn segment_scorer(&self, segment_reader: &SegmentReader) -> crate::Result<Self::Child>;
} }
impl<TCustomScorer, TScore> Collector for CustomScoreTopCollector<TCustomScorer, TScore> impl<TCustomScorer, TScore> Collector for CustomScoreTopCollector<TCustomScorer, TScore>
@@ -58,7 +57,7 @@ where
&self, &self,
segment_local_id: u32, segment_local_id: u32,
segment_reader: &SegmentReader, segment_reader: &SegmentReader,
) -> Result<Self::Child> { ) -> crate::Result<Self::Child> {
let segment_scorer = self.custom_scorer.segment_scorer(segment_reader)?; let segment_scorer = self.custom_scorer.segment_scorer(segment_reader)?;
let segment_collector = self let segment_collector = self
.collector .collector
@@ -73,7 +72,7 @@ where
false false
} }
fn merge_fruits(&self, segment_fruits: Vec<Self::Fruit>) -> Result<Self::Fruit> { fn merge_fruits(&self, segment_fruits: Vec<Self::Fruit>) -> crate::Result<Self::Fruit> {
self.collector.merge_fruits(segment_fruits) self.collector.merge_fruits(segment_fruits)
} }
} }
@@ -111,7 +110,7 @@ where
{ {
type Child = T; type Child = T;
fn segment_scorer(&self, segment_reader: &SegmentReader) -> Result<Self::Child> { fn segment_scorer(&self, segment_reader: &SegmentReader) -> crate::Result<Self::Child> {
Ok((self)(segment_reader)) Ok((self)(segment_reader))
} }
} }


@@ -5,7 +5,6 @@ use crate::fastfield::FacetReader;
use crate::schema::Facet; use crate::schema::Facet;
use crate::schema::Field; use crate::schema::Field;
use crate::DocId; use crate::DocId;
use crate::Result;
use crate::Score; use crate::Score;
use crate::SegmentLocalId; use crate::SegmentLocalId;
use crate::SegmentReader; use crate::SegmentReader;
@@ -81,15 +80,12 @@ fn facet_depth(facet_bytes: &[u8]) -> usize {
/// ///
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Facet, Schema, TEXT};
/// use tantivy::{Index, Result};
/// use tantivy::collector::FacetCollector; /// use tantivy::collector::FacetCollector;
/// use tantivy::query::AllQuery; /// use tantivy::query::AllQuery;
/// use tantivy::schema::{Facet, Schema, TEXT};
/// use tantivy::{doc, Index};
/// ///
/// # fn main() { example().unwrap(); } /// fn example() -> tantivy::Result<()> {
/// fn example() -> Result<()> {
/// let mut schema_builder = Schema::builder(); /// let mut schema_builder = Schema::builder();
/// ///
/// // Facet have their own specific type. /// // Facet have their own specific type.
@@ -129,7 +125,7 @@ fn facet_depth(facet_bytes: &[u8]) -> usize {
/// let searcher = reader.searcher(); /// let searcher = reader.searcher();
/// ///
/// { /// {
/// let mut facet_collector = FacetCollector::for_field(facet); /// let mut facet_collector = FacetCollector::for_field(facet);
/// facet_collector.add_facet("/lang"); /// facet_collector.add_facet("/lang");
/// facet_collector.add_facet("/category"); /// facet_collector.add_facet("/category");
/// let facet_counts = searcher.search(&AllQuery, &facet_collector)?; /// let facet_counts = searcher.search(&AllQuery, &facet_collector)?;
@@ -145,7 +141,7 @@ fn facet_depth(facet_bytes: &[u8]) -> usize {
/// } /// }
/// ///
/// { /// {
/// let mut facet_collector = FacetCollector::for_field(facet); /// let mut facet_collector = FacetCollector::for_field(facet);
/// facet_collector.add_facet("/category/fiction"); /// facet_collector.add_facet("/category/fiction");
/// let facet_counts = searcher.search(&AllQuery, &facet_collector)?; /// let facet_counts = searcher.search(&AllQuery, &facet_collector)?;
/// ///
@@ -160,8 +156,8 @@ fn facet_depth(facet_bytes: &[u8]) -> usize {
/// ]); /// ]);
/// } /// }
/// ///
/// { /// {
/// let mut facet_collector = FacetCollector::for_field(facet); /// let mut facet_collector = FacetCollector::for_field(facet);
/// facet_collector.add_facet("/category/fiction"); /// facet_collector.add_facet("/category/fiction");
/// let facet_counts = searcher.search(&AllQuery, &facet_collector)?; /// let facet_counts = searcher.search(&AllQuery, &facet_collector)?;
/// ///
@@ -174,6 +170,7 @@ fn facet_depth(facet_bytes: &[u8]) -> usize {
/// ///
/// Ok(()) /// Ok(())
/// } /// }
/// # assert!(example().is_ok());
/// ``` /// ```
pub struct FacetCollector { pub struct FacetCollector {
field: Field, field: Field,
@@ -264,7 +261,7 @@ impl Collector for FacetCollector {
&self, &self,
_: SegmentLocalId, _: SegmentLocalId,
reader: &SegmentReader, reader: &SegmentReader,
) -> Result<FacetSegmentCollector> { ) -> crate::Result<FacetSegmentCollector> {
let field_name = reader.schema().get_field_name(self.field); let field_name = reader.schema().get_field_name(self.field);
let facet_reader = reader.facet_reader(self.field).ok_or_else(|| { let facet_reader = reader.facet_reader(self.field).ok_or_else(|| {
TantivyError::SchemaError(format!("Field {:?} is not a facet field.", field_name)) TantivyError::SchemaError(format!("Field {:?} is not a facet field.", field_name))
@@ -330,7 +327,7 @@ impl Collector for FacetCollector {
false false
} }
fn merge_fruits(&self, segments_facet_counts: Vec<FacetCounts>) -> Result<FacetCounts> { fn merge_fruits(&self, segments_facet_counts: Vec<FacetCounts>) -> crate::Result<FacetCounts> {
let mut facet_counts: BTreeMap<Facet, u64> = BTreeMap::new(); let mut facet_counts: BTreeMap<Facet, u64> = BTreeMap::new();
for segment_facet_counts in segments_facet_counts { for segment_facet_counts in segments_facet_counts {
for (facet, count) in segment_facet_counts.facet_counts { for (facet, count) in segment_facet_counts.facet_counts {
@@ -454,9 +451,11 @@ impl FacetCounts {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::{FacetCollector, FacetCounts}; use super::{FacetCollector, FacetCounts};
use crate::collector::Count;
use crate::core::Index; use crate::core::Index;
use crate::query::AllQuery; use crate::query::{AllQuery, QueryParser, TermQuery};
use crate::schema::{Document, Facet, Field, Schema}; use crate::schema::{Document, Facet, Field, IndexRecordOption, Schema};
use crate::Term;
use rand::distributions::Uniform; use rand::distributions::Uniform;
use rand::prelude::SliceRandom; use rand::prelude::SliceRandom;
use rand::{thread_rng, Rng}; use rand::{thread_rng, Rng};
@@ -517,7 +516,7 @@ mod tests {
#[should_panic(expected = "Tried to add a facet which is a descendant of \ #[should_panic(expected = "Tried to add a facet which is a descendant of \
an already added facet.")] an already added facet.")]
fn test_misused_facet_collector() { fn test_misused_facet_collector() {
let mut facet_collector = FacetCollector::for_field(Field(0)); let mut facet_collector = FacetCollector::for_field(Field::from_field_id(0));
facet_collector.add_facet(Facet::from("/country")); facet_collector.add_facet(Facet::from("/country"));
facet_collector.add_facet(Facet::from("/country/europe")); facet_collector.add_facet(Facet::from("/country/europe"));
} }
@@ -546,9 +545,59 @@ mod tests {
assert_eq!(facets[0].1, 1); assert_eq!(facets[0].1, 1);
} }
#[test]
fn test_doc_search_by_facet() {
let mut schema_builder = Schema::builder();
let facet_field = schema_builder.add_facet_field("facet");
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer.add_document(doc!(
facet_field => Facet::from_text(&"/A/A"),
));
index_writer.add_document(doc!(
facet_field => Facet::from_text(&"/A/B"),
));
index_writer.add_document(doc!(
facet_field => Facet::from_text(&"/A/C/A"),
));
index_writer.add_document(doc!(
facet_field => Facet::from_text(&"/D/C/A"),
));
index_writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
assert_eq!(searcher.num_docs(), 4);
let count_facet = |facet_str: &str| {
let term = Term::from_facet(facet_field, &Facet::from_text(facet_str));
searcher
.search(&TermQuery::new(term, IndexRecordOption::Basic), &Count)
.unwrap()
};
assert_eq!(count_facet("/"), 4);
assert_eq!(count_facet("/A"), 3);
assert_eq!(count_facet("/A/B"), 1);
assert_eq!(count_facet("/A/C"), 1);
assert_eq!(count_facet("/A/C/A"), 1);
assert_eq!(count_facet("/C/A"), 0);
{
let query_parser = QueryParser::for_index(&index, vec![]);
{
let query = query_parser.parse_query("facet:/A/B").unwrap();
assert_eq!(1, searcher.search(&query, &Count).unwrap());
}
{
let query = query_parser.parse_query("facet:/A").unwrap();
assert_eq!(3, searcher.search(&query, &Count).unwrap());
}
}
}
#[test] #[test]
fn test_non_used_facet_collector() { fn test_non_used_facet_collector() {
let mut facet_collector = FacetCollector::for_field(Field(0)); let mut facet_collector = FacetCollector::for_field(Field::from_field_id(0));
facet_collector.add_facet(Facet::from("/country")); facet_collector.add_facet(Facet::from("/country"));
facet_collector.add_facet(Facet::from("/countryeurope")); facet_collector.add_facet(Facet::from("/countryeurope"));
} }
@@ -601,19 +650,18 @@ mod tests {
); );
} }
} }
} }
#[cfg(all(test, feature = "unstable"))] #[cfg(all(test, feature = "unstable"))]
mod bench { mod bench {
use collector::FacetCollector; use crate::collector::FacetCollector;
use query::AllQuery; use crate::query::AllQuery;
use rand::{thread_rng, Rng}; use crate::schema::{Facet, Schema};
use schema::Facet; use crate::Index;
use schema::Schema; use rand::seq::SliceRandom;
use rand::thread_rng;
use test::Bencher; use test::Bencher;
use Index;
#[bench] #[bench]
fn bench_facet_collector(b: &mut Bencher) { fn bench_facet_collector(b: &mut Bencher) {
@@ -630,7 +678,7 @@ mod bench {
} }
} }
// 40425 docs // 40425 docs
thread_rng().shuffle(&mut docs[..]); docs[..].shuffle(&mut thread_rng());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
for doc in docs { for doc in docs {
@@ -639,7 +687,7 @@ mod bench {
index_writer.commit().unwrap(); index_writer.commit().unwrap();
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
b.iter(|| { b.iter(|| {
let searcher = index.searcher(); let searcher = reader.searcher();
let facet_collector = FacetCollector::for_field(facet_field); let facet_collector = FacetCollector::for_field(facet_field);
searcher.search(&AllQuery, &facet_collector).unwrap(); searcher.search(&AllQuery, &facet_collector).unwrap();
}); });


@@ -35,7 +35,6 @@ The resulting `Fruit` will then be a typed tuple with each collector's original
in their respective position. in their respective position.
```rust ```rust
# extern crate tantivy;
# use tantivy::schema::*; # use tantivy::schema::*;
# use tantivy::*; # use tantivy::*;
# use tantivy::query::*; # use tantivy::query::*;
@@ -86,7 +85,6 @@ See the `custom_collector` example.
*/ */
use crate::DocId; use crate::DocId;
use crate::Result;
use crate::Score; use crate::Score;
use crate::SegmentLocalId; use crate::SegmentLocalId;
use crate::SegmentReader; use crate::SegmentReader;
@@ -148,14 +146,14 @@ pub trait Collector: Sync {
&self, &self,
segment_local_id: SegmentLocalId, segment_local_id: SegmentLocalId,
segment: &SegmentReader, segment: &SegmentReader,
) -> Result<Self::Child>; ) -> crate::Result<Self::Child>;
/// Returns true iff the collector requires to compute scores for documents. /// Returns true iff the collector requires to compute scores for documents.
fn requires_scoring(&self) -> bool; fn requires_scoring(&self) -> bool;
/// Combines the fruit associated to the collection of each segments /// Combines the fruit associated to the collection of each segments
/// into one fruit. /// into one fruit.
fn merge_fruits(&self, segment_fruits: Vec<Self::Fruit>) -> Result<Self::Fruit>; fn merge_fruits(&self, segment_fruits: Vec<Self::Fruit>) -> crate::Result<Self::Fruit>;
} }
/// The `SegmentCollector` is the trait in charge of defining the /// The `SegmentCollector` is the trait in charge of defining the
@@ -186,7 +184,11 @@ where
type Fruit = (Left::Fruit, Right::Fruit); type Fruit = (Left::Fruit, Right::Fruit);
type Child = (Left::Child, Right::Child); type Child = (Left::Child, Right::Child);
fn for_segment(&self, segment_local_id: u32, segment: &SegmentReader) -> Result<Self::Child> { fn for_segment(
&self,
segment_local_id: u32,
segment: &SegmentReader,
) -> crate::Result<Self::Child> {
let left = self.0.for_segment(segment_local_id, segment)?; let left = self.0.for_segment(segment_local_id, segment)?;
let right = self.1.for_segment(segment_local_id, segment)?; let right = self.1.for_segment(segment_local_id, segment)?;
Ok((left, right)) Ok((left, right))
@@ -199,7 +201,7 @@ where
fn merge_fruits( fn merge_fruits(
&self, &self,
children: Vec<(Left::Fruit, Right::Fruit)>, children: Vec<(Left::Fruit, Right::Fruit)>,
) -> Result<(Left::Fruit, Right::Fruit)> { ) -> crate::Result<(Left::Fruit, Right::Fruit)> {
let mut left_fruits = vec![]; let mut left_fruits = vec![];
let mut right_fruits = vec![]; let mut right_fruits = vec![];
for (left_fruit, right_fruit) in children { for (left_fruit, right_fruit) in children {
@@ -241,7 +243,11 @@ where
type Fruit = (One::Fruit, Two::Fruit, Three::Fruit); type Fruit = (One::Fruit, Two::Fruit, Three::Fruit);
type Child = (One::Child, Two::Child, Three::Child); type Child = (One::Child, Two::Child, Three::Child);
fn for_segment(&self, segment_local_id: u32, segment: &SegmentReader) -> Result<Self::Child> { fn for_segment(
&self,
segment_local_id: u32,
segment: &SegmentReader,
) -> crate::Result<Self::Child> {
let one = self.0.for_segment(segment_local_id, segment)?; let one = self.0.for_segment(segment_local_id, segment)?;
let two = self.1.for_segment(segment_local_id, segment)?; let two = self.1.for_segment(segment_local_id, segment)?;
let three = self.2.for_segment(segment_local_id, segment)?; let three = self.2.for_segment(segment_local_id, segment)?;
@@ -252,7 +258,7 @@ where
self.0.requires_scoring() || self.1.requires_scoring() || self.2.requires_scoring() self.0.requires_scoring() || self.1.requires_scoring() || self.2.requires_scoring()
} }
fn merge_fruits(&self, children: Vec<Self::Fruit>) -> Result<Self::Fruit> { fn merge_fruits(&self, children: Vec<Self::Fruit>) -> crate::Result<Self::Fruit> {
let mut one_fruits = vec![]; let mut one_fruits = vec![];
let mut two_fruits = vec![]; let mut two_fruits = vec![];
let mut three_fruits = vec![]; let mut three_fruits = vec![];
@@ -300,7 +306,11 @@ where
type Fruit = (One::Fruit, Two::Fruit, Three::Fruit, Four::Fruit); type Fruit = (One::Fruit, Two::Fruit, Three::Fruit, Four::Fruit);
type Child = (One::Child, Two::Child, Three::Child, Four::Child); type Child = (One::Child, Two::Child, Three::Child, Four::Child);
fn for_segment(&self, segment_local_id: u32, segment: &SegmentReader) -> Result<Self::Child> { fn for_segment(
&self,
segment_local_id: u32,
segment: &SegmentReader,
) -> crate::Result<Self::Child> {
let one = self.0.for_segment(segment_local_id, segment)?; let one = self.0.for_segment(segment_local_id, segment)?;
let two = self.1.for_segment(segment_local_id, segment)?; let two = self.1.for_segment(segment_local_id, segment)?;
let three = self.2.for_segment(segment_local_id, segment)?; let three = self.2.for_segment(segment_local_id, segment)?;
@@ -315,7 +325,7 @@ where
|| self.3.requires_scoring() || self.3.requires_scoring()
} }
fn merge_fruits(&self, children: Vec<Self::Fruit>) -> Result<Self::Fruit> { fn merge_fruits(&self, children: Vec<Self::Fruit>) -> crate::Result<Self::Fruit> {
let mut one_fruits = vec![]; let mut one_fruits = vec![];
let mut two_fruits = vec![]; let mut two_fruits = vec![];
let mut three_fruits = vec![]; let mut three_fruits = vec![];


@@ -2,7 +2,6 @@ use super::Collector;
use super::SegmentCollector; use super::SegmentCollector;
use crate::collector::Fruit; use crate::collector::Fruit;
use crate::DocId; use crate::DocId;
use crate::Result;
use crate::Score; use crate::Score;
use crate::SegmentLocalId; use crate::SegmentLocalId;
use crate::SegmentReader; use crate::SegmentReader;
@@ -24,7 +23,7 @@ impl<TCollector: Collector> Collector for CollectorWrapper<TCollector> {
&self, &self,
segment_local_id: u32, segment_local_id: u32,
reader: &SegmentReader, reader: &SegmentReader,
) -> Result<Box<dyn BoxableSegmentCollector>> { ) -> crate::Result<Box<dyn BoxableSegmentCollector>> {
let child = self.0.for_segment(segment_local_id, reader)?; let child = self.0.for_segment(segment_local_id, reader)?;
Ok(Box::new(SegmentCollectorWrapper(child))) Ok(Box::new(SegmentCollectorWrapper(child)))
} }
@@ -33,7 +32,10 @@ impl<TCollector: Collector> Collector for CollectorWrapper<TCollector> {
self.0.requires_scoring() self.0.requires_scoring()
} }
fn merge_fruits(&self, children: Vec<<Self as Collector>::Fruit>) -> Result<Box<dyn Fruit>> { fn merge_fruits(
&self,
children: Vec<<Self as Collector>::Fruit>,
) -> crate::Result<Box<dyn Fruit>> {
let typed_fruit: Vec<TCollector::Fruit> = children let typed_fruit: Vec<TCollector::Fruit> = children
.into_iter() .into_iter()
.map(|untyped_fruit| { .map(|untyped_fruit| {
@@ -44,7 +46,7 @@ impl<TCollector: Collector> Collector for CollectorWrapper<TCollector> {
TantivyError::InvalidArgument("Failed to cast child fruit.".to_string()) TantivyError::InvalidArgument("Failed to cast child fruit.".to_string())
}) })
}) })
.collect::<Result<_>>()?; .collect::<crate::Result<_>>()?;
let merged_fruit = self.0.merge_fruits(typed_fruit)?; let merged_fruit = self.0.merge_fruits(typed_fruit)?;
Ok(Box::new(merged_fruit)) Ok(Box::new(merged_fruit))
} }
@@ -105,54 +107,38 @@ impl<TFruit: Fruit> FruitHandle<TFruit> {
 /// [Combining several collectors section of the collector documentation](./index.html#combining-several-collectors).
 ///
 /// ```rust
-/// #[macro_use]
-/// extern crate tantivy;
-/// use tantivy::schema::{Schema, TEXT};
-/// use tantivy::{Index, Result};
 /// use tantivy::collector::{Count, TopDocs, MultiCollector};
 /// use tantivy::query::QueryParser;
+/// use tantivy::schema::{Schema, TEXT};
+/// use tantivy::{doc, Index};
 ///
-/// # fn main() { example().unwrap(); }
-/// fn example() -> Result<()> {
-///     let mut schema_builder = Schema::builder();
-///     let title = schema_builder.add_text_field("title", TEXT);
-///     let schema = schema_builder.build();
-///     let index = Index::create_in_ram(schema);
-///     {
-///         let mut index_writer = index.writer(3_000_000)?;
-///         index_writer.add_document(doc!(
-///             title => "The Name of the Wind",
-///         ));
-///         index_writer.add_document(doc!(
-///             title => "The Diary of Muadib",
-///         ));
-///         index_writer.add_document(doc!(
-///             title => "A Dairy Cow",
-///         ));
-///         index_writer.add_document(doc!(
-///             title => "The Diary of a Young Girl",
-///         ));
-///         index_writer.commit().unwrap();
-///     }
+/// let mut schema_builder = Schema::builder();
+/// let title = schema_builder.add_text_field("title", TEXT);
+/// let schema = schema_builder.build();
+/// let index = Index::create_in_ram(schema);
 ///
-///     let reader = index.reader()?;
-///     let searcher = reader.searcher();
+/// let mut index_writer = index.writer(3_000_000).unwrap();
+/// index_writer.add_document(doc!(title => "The Name of the Wind"));
+/// index_writer.add_document(doc!(title => "The Diary of Muadib"));
+/// index_writer.add_document(doc!(title => "A Dairy Cow"));
+/// index_writer.add_document(doc!(title => "The Diary of a Young Girl"));
+/// assert!(index_writer.commit().is_ok());
 ///
-///     let mut collectors = MultiCollector::new();
-///     let top_docs_handle = collectors.add_collector(TopDocs::with_limit(2));
-///     let count_handle = collectors.add_collector(Count);
-///     let query_parser = QueryParser::for_index(&index, vec![title]);
-///     let query = query_parser.parse_query("diary")?;
-///     let mut multi_fruit = searcher.search(&query, &collectors)?;
+/// let reader = index.reader().unwrap();
+/// let searcher = reader.searcher();
 ///
-///     let count = count_handle.extract(&mut multi_fruit);
-///     let top_docs = top_docs_handle.extract(&mut multi_fruit);
+/// let mut collectors = MultiCollector::new();
+/// let top_docs_handle = collectors.add_collector(TopDocs::with_limit(2));
+/// let count_handle = collectors.add_collector(Count);
+/// let query_parser = QueryParser::for_index(&index, vec![title]);
+/// let query = query_parser.parse_query("diary").unwrap();
+/// let mut multi_fruit = searcher.search(&query, &collectors).unwrap();
 ///
-///     # assert_eq!(count, 2);
-///     # assert_eq!(top_docs.len(), 2);
+/// let count = count_handle.extract(&mut multi_fruit);
+/// let top_docs = top_docs_handle.extract(&mut multi_fruit);
 ///
-///     Ok(())
-/// }
+/// assert_eq!(count, 2);
+/// assert_eq!(top_docs.len(), 2);
 /// ```
#[allow(clippy::type_complexity)] #[allow(clippy::type_complexity)]
#[derive(Default)] #[derive(Default)]
@@ -191,12 +177,12 @@ impl<'a> Collector for MultiCollector<'a> {
&self, &self,
segment_local_id: SegmentLocalId, segment_local_id: SegmentLocalId,
segment: &SegmentReader, segment: &SegmentReader,
) -> Result<MultiCollectorChild> { ) -> crate::Result<MultiCollectorChild> {
let children = self let children = self
.collector_wrappers .collector_wrappers
.iter() .iter()
.map(|collector_wrapper| collector_wrapper.for_segment(segment_local_id, segment)) .map(|collector_wrapper| collector_wrapper.for_segment(segment_local_id, segment))
.collect::<Result<Vec<_>>>()?; .collect::<crate::Result<Vec<_>>>()?;
Ok(MultiCollectorChild { children }) Ok(MultiCollectorChild { children })
} }
@@ -207,7 +193,7 @@ impl<'a> Collector for MultiCollector<'a> {
.any(Collector::requires_scoring) .any(Collector::requires_scoring)
} }
fn merge_fruits(&self, segments_multifruits: Vec<MultiFruit>) -> Result<MultiFruit> { fn merge_fruits(&self, segments_multifruits: Vec<MultiFruit>) -> crate::Result<MultiFruit> {
let mut segment_fruits_list: Vec<Vec<Box<dyn Fruit>>> = (0..self.collector_wrappers.len()) let mut segment_fruits_list: Vec<Vec<Box<dyn Fruit>>> = (0..self.collector_wrappers.len())
.map(|_| Vec::with_capacity(segments_multifruits.len())) .map(|_| Vec::with_capacity(segments_multifruits.len()))
.collect::<Vec<_>>(); .collect::<Vec<_>>();
@@ -225,7 +211,7 @@ impl<'a> Collector for MultiCollector<'a> {
.map(|(child_collector, segment_fruits)| { .map(|(child_collector, segment_fruits)| {
Ok(Some(child_collector.merge_fruits(segment_fruits)?)) Ok(Some(child_collector.merge_fruits(segment_fruits)?))
}) })
.collect::<Result<_>>()?; .collect::<crate::Result<_>>()?;
Ok(MultiFruit { sub_fruits }) Ok(MultiFruit { sub_fruits })
} }
} }


@@ -55,7 +55,7 @@ impl Collector for TestCollector {
&self, &self,
segment_id: SegmentLocalId, segment_id: SegmentLocalId,
_reader: &SegmentReader, _reader: &SegmentReader,
) -> Result<TestSegmentCollector> { ) -> crate::Result<TestSegmentCollector> {
Ok(TestSegmentCollector { Ok(TestSegmentCollector {
segment_id, segment_id,
fruit: TestFruit::default(), fruit: TestFruit::default(),
@@ -66,7 +66,7 @@ impl Collector for TestCollector {
self.compute_score self.compute_score
} }
fn merge_fruits(&self, mut children: Vec<TestFruit>) -> Result<TestFruit> { fn merge_fruits(&self, mut children: Vec<TestFruit>) -> crate::Result<TestFruit> {
children.sort_by_key(|fruit| { children.sort_by_key(|fruit| {
if fruit.docs().is_empty() { if fruit.docs().is_empty() {
0 0
@@ -124,7 +124,7 @@ impl Collector for FastFieldTestCollector {
&self, &self,
_: SegmentLocalId, _: SegmentLocalId,
segment_reader: &SegmentReader, segment_reader: &SegmentReader,
) -> Result<FastFieldSegmentCollector> { ) -> crate::Result<FastFieldSegmentCollector> {
let reader = segment_reader let reader = segment_reader
.fast_fields() .fast_fields()
.u64(self.field) .u64(self.field)
@@ -139,7 +139,7 @@ impl Collector for FastFieldTestCollector {
false false
} }
fn merge_fruits(&self, children: Vec<Vec<u64>>) -> Result<Vec<u64>> { fn merge_fruits(&self, children: Vec<Vec<u64>>) -> crate::Result<Vec<u64>> {
Ok(children.into_iter().flat_map(|v| v.into_iter()).collect()) Ok(children.into_iter().flat_map(|v| v.into_iter()).collect())
} }
} }
@@ -184,7 +184,7 @@ impl Collector for BytesFastFieldTestCollector {
&self, &self,
_segment_local_id: u32, _segment_local_id: u32,
segment_reader: &SegmentReader, segment_reader: &SegmentReader,
) -> Result<BytesFastFieldSegmentCollector> { ) -> crate::Result<BytesFastFieldSegmentCollector> {
Ok(BytesFastFieldSegmentCollector { Ok(BytesFastFieldSegmentCollector {
vals: Vec::new(), vals: Vec::new(),
reader: segment_reader reader: segment_reader
@@ -198,7 +198,7 @@ impl Collector for BytesFastFieldTestCollector {
false false
} }
fn merge_fruits(&self, children: Vec<Vec<u8>>) -> Result<Vec<u8>> { fn merge_fruits(&self, children: Vec<Vec<u8>>) -> crate::Result<Vec<u8>> {
Ok(children.into_iter().flat_map(|c| c.into_iter()).collect()) Ok(children.into_iter().flat_map(|c| c.into_iter()).collect())
} }
} }


@@ -1,6 +1,5 @@
 use crate::DocAddress;
 use crate::DocId;
-use crate::Result;
 use crate::SegmentLocalId;
 use crate::SegmentReader;
 use serde::export::PhantomData;
@@ -12,6 +11,9 @@ use std::collections::BinaryHeap;
 /// It has a custom implementation of `PartialOrd` that reverses the order. This is because the
 /// default Rust heap is a max heap, whereas a min heap is needed.
 ///
+/// Additionally, it guarantees stable sorting: in case of a tie on the feature, the document
+/// address is used.
+///
 /// WARNING: equality is not what you would expect here.
 /// Two elements are equal if their feature is equal, and regardless of whether `doc`
 /// is equal. This should be perfectly fine for this usage, but let's make sure this
@@ -21,29 +23,37 @@ struct ComparableDoc<T, D> {
     doc: D,
 }
-impl<T: PartialOrd, D> PartialOrd for ComparableDoc<T, D> {
+impl<T: PartialOrd, D: PartialOrd> PartialOrd for ComparableDoc<T, D> {
     fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
         Some(self.cmp(other))
     }
 }
-impl<T: PartialOrd, D> Ord for ComparableDoc<T, D> {
+impl<T: PartialOrd, D: PartialOrd> Ord for ComparableDoc<T, D> {
     #[inline]
     fn cmp(&self, other: &Self) -> Ordering {
-        other
+        // Reversed to make BinaryHeap work as a min-heap
+        let by_feature = other
             .feature
             .partial_cmp(&self.feature)
-            .unwrap_or_else(|| Ordering::Equal)
+            .unwrap_or(Ordering::Equal);
+        let lazy_by_doc_address = || self.doc.partial_cmp(&other.doc).unwrap_or(Ordering::Equal);
+        // In case of a tie on the feature, we sort by ascending
+        // `DocAddress` in order to ensure a stable sorting of the
+        // documents.
+        by_feature.then_with(lazy_by_doc_address)
     }
 }
-impl<T: PartialOrd, D> PartialEq for ComparableDoc<T, D> {
+impl<T: PartialOrd, D: PartialOrd> PartialEq for ComparableDoc<T, D> {
     fn eq(&self, other: &Self) -> bool {
         self.cmp(other) == Ordering::Equal
     }
 }
-impl<T: PartialOrd, D> Eq for ComparableDoc<T, D> {}
+impl<T: PartialOrd, D: PartialOrd> Eq for ComparableDoc<T, D> {}
 pub(crate) struct TopCollector<T> {
     limit: usize,
@@ -75,7 +85,7 @@ where
     pub fn merge_fruits(
         &self,
         children: Vec<Vec<(T, DocAddress)>>,
-    ) -> Result<Vec<(T, DocAddress)>> {
+    ) -> crate::Result<Vec<(T, DocAddress)>> {
         if self.limit == 0 {
             return Ok(Vec::new());
         }
@@ -102,7 +112,7 @@ where
         &self,
         segment_id: SegmentLocalId,
         _: &SegmentReader,
-    ) -> Result<TopSegmentCollector<F>> {
+    ) -> crate::Result<TopSegmentCollector<F>> {
         Ok(TopSegmentCollector::new(segment_id, self.limit))
     }
 }
@@ -214,4 +224,94 @@ mod tests {
         ]
     );
 }
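Note: the `ComparableDoc` change above reverses the comparison so that Rust's max-`BinaryHeap` behaves as a min-heap, and breaks feature ties by ascending document address so that the top-K order is stable. A standalone sketch of the same technique, independent of tantivy's types (names and scores here are illustrative):

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// (score, doc_id) wrapper whose ordering is reversed on the score and
// tie-broken by ascending doc_id, so that BinaryHeap (a max-heap) pops the
// lowest-ranked entry first and we can keep only the top K.
#[derive(PartialEq)]
struct Entry {
    score: f32,
    doc_id: u32,
}

impl Eq for Entry {}

impl Ord for Entry {
    fn cmp(&self, other: &Self) -> Ordering {
        // Reversed score comparison -> min-heap behaviour on score.
        other
            .score
            .partial_cmp(&self.score)
            .unwrap_or(Ordering::Equal)
            // Ascending doc_id as a tie-breaker keeps the ordering stable.
            .then_with(|| self.doc_id.cmp(&other.doc_id))
    }
}

impl PartialOrd for Entry {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn top_k(docs: impl Iterator<Item = (f32, u32)>, k: usize) -> Vec<(f32, u32)> {
    let mut heap = BinaryHeap::with_capacity(k);
    for (score, doc_id) in docs {
        heap.push(Entry { score, doc_id });
        if heap.len() > k {
            heap.pop(); // evicts the lowest-ranked entry
        }
    }
    let mut out: Vec<(f32, u32)> = heap.into_iter().map(|e| (e.score, e.doc_id)).collect();
    out.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap().then(a.1.cmp(&b.1)));
    out
}

fn main() {
    let docs = vec![(0.5, 1), (0.9, 2), (0.9, 3), (0.1, 4)].into_iter();
    assert_eq!(top_k(docs, 2), vec![(0.9, 2), (0.9, 3)]);
    // With a tie on the score, the smallest doc ids are kept: page 1 of a
    // limit-2 search is a prefix of a limit-3 search, which is what the
    // stable-ordering tests in this diff assert.
    let tied = vec![(1.0, 5), (1.0, 1), (1.0, 3)].into_iter();
    assert_eq!(top_k(tied, 2), vec![(1.0, 1), (1.0, 3)]);
}
```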
#[test]
fn test_top_segment_collector_stable_ordering_for_equal_feature() {
// given that the documents are collected in ascending doc id order,
// when harvesting we have to guarantee stable sorting in case of a tie
// on the score
let doc_ids_collection = [4, 5, 6];
let score = 3.14;
let mut top_collector_limit_2 = TopSegmentCollector::new(0, 2);
for id in &doc_ids_collection {
top_collector_limit_2.collect(*id, score);
}
let mut top_collector_limit_3 = TopSegmentCollector::new(0, 3);
for id in &doc_ids_collection {
top_collector_limit_3.collect(*id, score);
}
assert_eq!(
top_collector_limit_2.harvest(),
top_collector_limit_3.harvest()[..2].to_vec(),
);
}
}
#[cfg(all(test, feature = "unstable"))]
mod bench {
use super::TopSegmentCollector;
use test::Bencher;
#[bench]
fn bench_top_segment_collector_collect_not_at_capacity(b: &mut Bencher) {
let mut top_collector = TopSegmentCollector::new(0, 400);
b.iter(|| {
for i in 0..100 {
top_collector.collect(i, 0.8);
}
});
}
#[bench]
fn bench_top_segment_collector_collect_at_capacity(b: &mut Bencher) {
let mut top_collector = TopSegmentCollector::new(0, 100);
for i in 0..100 {
top_collector.collect(i, 0.8);
}
b.iter(|| {
for i in 0..100 {
top_collector.collect(i, 0.8);
}
});
}
#[bench]
fn bench_top_segment_collector_collect_and_harvest_many_ties(b: &mut Bencher) {
b.iter(|| {
let mut top_collector = TopSegmentCollector::new(0, 100);
for i in 0..100 {
top_collector.collect(i, 0.8);
}
// it would be nice to be able to do the setup N times but still
// measure only harvest(). We can't since harvest() consumes
// the top_collector.
top_collector.harvest()
});
}
#[bench]
fn bench_top_segment_collector_collect_and_harvest_no_tie(b: &mut Bencher) {
b.iter(|| {
let mut top_collector = TopSegmentCollector::new(0, 100);
let mut score = 1.0;
for i in 0..100 {
score += 1.0;
top_collector.collect(i, score);
}
// it would be nice to be able to do the setup N times but still
// measure only harvest(). We can't since harvest() consumes
// the top_collector.
top_collector.harvest()
});
}
} }


@@ -6,66 +6,52 @@ use crate::collector::tweak_score_top_collector::TweakedScoreTopCollector;
 use crate::collector::{
     CustomScorer, CustomSegmentScorer, ScoreSegmentTweaker, ScoreTweaker, SegmentCollector,
 };
+use crate::fastfield::FastFieldReader;
 use crate::schema::Field;
 use crate::DocAddress;
 use crate::DocId;
-use crate::Result;
 use crate::Score;
 use crate::SegmentLocalId;
 use crate::SegmentReader;
 use std::fmt;
-/// The Top Score Collector keeps track of the K documents
+/// The `TopDocs` collector keeps track of the top `K` documents
 /// sorted by their score.
 ///
 /// The implementation is based on a `BinaryHeap`.
 /// The theorical complexity for collecting the top `K` out of `n` documents
 /// is `O(n log K)`.
 ///
+/// This collector guarantees a stable sorting in case of a tie on the
+/// document score. As such, it is suitable to implement pagination.
+///
 /// ```rust
-/// #[macro_use]
-/// extern crate tantivy;
-/// use tantivy::DocAddress;
-/// use tantivy::schema::{Schema, TEXT};
-/// use tantivy::{Index, Result};
 /// use tantivy::collector::TopDocs;
 /// use tantivy::query::QueryParser;
+/// use tantivy::schema::{Schema, TEXT};
+/// use tantivy::{doc, DocAddress, Index};
 ///
-/// # fn main() { example().unwrap(); }
-/// fn example() -> Result<()> {
-///     let mut schema_builder = Schema::builder();
-///     let title = schema_builder.add_text_field("title", TEXT);
-///     let schema = schema_builder.build();
-///     let index = Index::create_in_ram(schema);
-///     {
-///         let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?;
-///         index_writer.add_document(doc!(
-///             title => "The Name of the Wind",
-///         ));
-///         index_writer.add_document(doc!(
-///             title => "The Diary of Muadib",
-///         ));
-///         index_writer.add_document(doc!(
-///             title => "A Dairy Cow",
-///         ));
-///         index_writer.add_document(doc!(
-///             title => "The Diary of a Young Girl",
-///         ));
-///         index_writer.commit().unwrap();
-///     }
+/// let mut schema_builder = Schema::builder();
+/// let title = schema_builder.add_text_field("title", TEXT);
+/// let schema = schema_builder.build();
+/// let index = Index::create_in_ram(schema);
 ///
-///     let reader = index.reader()?;
-///     let searcher = reader.searcher();
+/// let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+/// index_writer.add_document(doc!(title => "The Name of the Wind"));
+/// index_writer.add_document(doc!(title => "The Diary of Muadib"));
+/// index_writer.add_document(doc!(title => "A Dairy Cow"));
+/// index_writer.add_document(doc!(title => "The Diary of a Young Girl"));
+/// assert!(index_writer.commit().is_ok());
 ///
-///     let query_parser = QueryParser::for_index(&index, vec![title]);
-///     let query = query_parser.parse_query("diary")?;
-///     let top_docs = searcher.search(&query, &TopDocs::with_limit(2))?;
+/// let reader = index.reader().unwrap();
+/// let searcher = reader.searcher();
 ///
-///     assert_eq!(&top_docs[0], &(0.7261542, DocAddress(0, 1)));
-///     assert_eq!(&top_docs[1], &(0.6099695, DocAddress(0, 3)));
+/// let query_parser = QueryParser::for_index(&index, vec![title]);
+/// let query = query_parser.parse_query("diary").unwrap();
+/// let top_docs = searcher.search(&query, &TopDocs::with_limit(2)).unwrap();
 ///
-///     Ok(())
-/// }
+/// assert_eq!(&top_docs[0], &(0.7261542, DocAddress(0, 1)));
+/// assert_eq!(&top_docs[1], &(0.6099695, DocAddress(0, 3)));
 /// ```
 pub struct TopDocs(TopCollector<Score>);
@@ -75,6 +61,37 @@ impl fmt::Debug for TopDocs {
     }
 }
struct ScorerByFastFieldReader {
ff_reader: FastFieldReader<u64>,
}
impl CustomSegmentScorer<u64> for ScorerByFastFieldReader {
fn score(&self, doc: DocId) -> u64 {
self.ff_reader.get_u64(u64::from(doc))
}
}
struct ScorerByField {
field: Field,
}
impl CustomScorer<u64> for ScorerByField {
type Child = ScorerByFastFieldReader;
fn segment_scorer(&self, segment_reader: &SegmentReader) -> crate::Result<Self::Child> {
let ff_reader = segment_reader
.fast_fields()
.u64(self.field)
.ok_or_else(|| {
crate::TantivyError::SchemaError(format!(
"Field requested ({:?}) is not a i64/u64 fast field.",
self.field
))
})?;
Ok(ScorerByFastFieldReader { ff_reader })
}
}
impl TopDocs { impl TopDocs {
/// Creates a top score collector, with a number of documents equal to "limit". /// Creates a top score collector, with a number of documents equal to "limit".
/// ///
@@ -87,10 +104,8 @@ impl TopDocs {
/// Set top-K to rank documents by a given fast field. /// Set top-K to rank documents by a given fast field.
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// # use tantivy::schema::{Schema, FAST, TEXT}; /// # use tantivy::schema::{Schema, FAST, TEXT};
/// # use tantivy::{Index, Result, DocAddress}; /// # use tantivy::{doc, Index, DocAddress};
/// # use tantivy::query::{Query, QueryParser}; /// # use tantivy::query::{Query, QueryParser};
/// use tantivy::Searcher; /// use tantivy::Searcher;
/// use tantivy::collector::TopDocs; /// use tantivy::collector::TopDocs;
@@ -104,15 +119,12 @@ impl TopDocs {
/// # /// #
/// # let index = Index::create_in_ram(schema); /// # let index = Index::create_in_ram(schema);
/// # let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?; /// # let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?;
/// # index_writer.add_document(doc!( /// # index_writer.add_document(doc!(title => "The Name of the Wind", rating => 92u64));
/// # title => "The Name of the Wind",
/// # rating => 92u64,
/// # ));
/// # index_writer.add_document(doc!(title => "The Diary of Muadib", rating => 97u64)); /// # index_writer.add_document(doc!(title => "The Diary of Muadib", rating => 97u64));
/// # index_writer.add_document(doc!(title => "A Dairy Cow", rating => 63u64)); /// # index_writer.add_document(doc!(title => "A Dairy Cow", rating => 63u64));
/// # index_writer.add_document(doc!(title => "The Diary of a Young Girl", rating => 80u64)); /// # index_writer.add_document(doc!(title => "The Diary of a Young Girl", rating => 80u64));
/// # index_writer.commit()?; /// # assert!(index_writer.commit().is_ok());
/// # let reader = index.reader()?; /// # let reader = index.reader().unwrap();
/// # let query = QueryParser::for_index(&index, vec![title]).parse_query("diary")?; /// # let query = QueryParser::for_index(&index, vec![title]).parse_query("diary")?;
/// # let top_docs = docs_sorted_by_rating(&reader.searcher(), &query, rating)?; /// # let top_docs = docs_sorted_by_rating(&reader.searcher(), &query, rating)?;
/// # assert_eq!(top_docs, /// # assert_eq!(top_docs,
@@ -128,9 +140,9 @@ impl TopDocs {
/// /// /// ///
/// /// `field` is required to be a FAST field. /// /// `field` is required to be a FAST field.
/// fn docs_sorted_by_rating(searcher: &Searcher, /// fn docs_sorted_by_rating(searcher: &Searcher,
/// query: &Query, /// query: &dyn Query,
/// sort_by_field: Field) /// sort_by_field: Field)
/// -> Result<Vec<(u64, DocAddress)>> { /// -> tantivy::Result<Vec<(u64, DocAddress)>> {
/// ///
/// // This is where we build our topdocs collector /// // This is where we build our topdocs collector
/// // /// //
@@ -162,14 +174,7 @@ impl TopDocs {
self, self,
field: Field, field: Field,
) -> impl Collector<Fruit = Vec<(u64, DocAddress)>> { ) -> impl Collector<Fruit = Vec<(u64, DocAddress)>> {
self.custom_score(move |segment_reader: &SegmentReader| { self.custom_score(ScorerByField { field })
let ff_reader = segment_reader
.fast_fields()
.u64(field)
.expect("Field requested is not a i64/u64 fast field.");
//TODO error message missmatch actual behavior for i64
move |doc: DocId| ff_reader.get(doc)
})
} }
/// Ranks the documents using a custom score. /// Ranks the documents using a custom score.
@@ -197,36 +202,40 @@ impl TopDocs {
 /// learning-to-rank model over various features
 ///
 /// ```rust
-/// #[macro_use]
-/// extern crate tantivy;
 /// # use tantivy::schema::{Schema, FAST, TEXT};
-/// # use tantivy::{Index, DocAddress, DocId, Score};
+/// # use tantivy::{doc, Index, DocAddress, DocId, Score};
 /// # use tantivy::query::QueryParser;
 /// use tantivy::SegmentReader;
 /// use tantivy::collector::TopDocs;
 /// use tantivy::schema::Field;
 ///
-/// # fn create_schema() -> Schema {
-/// #     let mut schema_builder = Schema::builder();
-/// #     schema_builder.add_text_field("product_name", TEXT);
-/// #     schema_builder.add_u64_field("popularity", FAST);
-/// #     schema_builder.build()
-/// # }
-/// #
-/// # fn main() -> tantivy::Result<()> {
-/// #     let schema = create_schema();
-/// #     let index = Index::create_in_ram(schema);
-/// #     let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?;
-/// #     let product_name = index.schema().get_field("product_name").unwrap();
-/// #
+/// fn create_schema() -> Schema {
+///     let mut schema_builder = Schema::builder();
+///     schema_builder.add_text_field("product_name", TEXT);
+///     schema_builder.add_u64_field("popularity", FAST);
+///     schema_builder.build()
+/// }
+///
+/// fn create_index() -> tantivy::Result<Index> {
+///     let schema = create_schema();
+///     let index = Index::create_in_ram(schema);
+///     let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?;
+///     let product_name = index.schema().get_field("product_name").unwrap();
+///     let popularity: Field = index.schema().get_field("popularity").unwrap();
+///     index_writer.add_document(doc!(product_name => "The Diary of Muadib", popularity => 1u64));
+///     index_writer.add_document(doc!(product_name => "A Dairy Cow", popularity => 10u64));
+///     index_writer.add_document(doc!(product_name => "The Diary of a Young Girl", popularity => 15u64));
+///     index_writer.commit()?;
+///     Ok(index)
+/// }
+///
+/// let index = create_index().unwrap();
+/// let product_name = index.schema().get_field("product_name").unwrap();
 /// let popularity: Field = index.schema().get_field("popularity").unwrap();
-/// # index_writer.add_document(doc!(product_name => "The Diary of Muadib", popularity => 1u64));
-/// # index_writer.add_document(doc!(product_name => "A Dairy Cow", popularity => 10u64));
-/// # index_writer.add_document(doc!(product_name => "The Diary of a Young Girl", popularity => 15u64));
-/// # index_writer.commit()?;
-/// // ...
-/// # let user_query = "diary";
-/// # let query = QueryParser::for_index(&index, vec![product_name]).parse_query(user_query)?;
+///
+/// let user_query_str = "diary";
+/// let query_parser = QueryParser::for_index(&index, vec![product_name]);
+/// let query = query_parser.parse_query(user_query_str).unwrap();
 ///
 /// // This is where we build our collector with our custom score.
 /// let top_docs_by_custom_score = TopDocs
@@ -253,15 +262,12 @@ impl TopDocs {
 ///         popularity_boost_score * original_score
 ///     }
 /// });
-/// # let reader = index.reader()?;
-/// # let searcher = reader.searcher();
+/// let reader = index.reader().unwrap();
+/// let searcher = reader.searcher();
 /// // ... and here are our documents. Note this is a simple vec.
 /// // The `Score` in the pair is our tweaked score.
 /// let resulting_docs: Vec<(Score, DocAddress)> =
-///     searcher.search(&*query, &top_docs_by_custom_score)?;
-///
-/// # Ok(())
-/// # }
+///     searcher.search(&query, &top_docs_by_custom_score).unwrap();
 /// ```
 ///
 /// # See also
@@ -302,10 +308,8 @@ impl TopDocs {
/// # Example /// # Example
/// ///
/// ```rust /// ```rust
/// # #[macro_use]
/// # extern crate tantivy;
/// # use tantivy::schema::{Schema, FAST, TEXT}; /// # use tantivy::schema::{Schema, FAST, TEXT};
/// # use tantivy::{Index, DocAddress, DocId}; /// # use tantivy::{doc, Index, DocAddress, DocId};
/// # use tantivy::query::QueryParser; /// # use tantivy::query::QueryParser;
/// use tantivy::SegmentReader; /// use tantivy::SegmentReader;
/// use tantivy::collector::TopDocs; /// use tantivy::collector::TopDocs;
@@ -404,7 +408,7 @@ impl Collector for TopDocs {
&self, &self,
segment_local_id: SegmentLocalId, segment_local_id: SegmentLocalId,
reader: &SegmentReader, reader: &SegmentReader,
) -> Result<Self::Child> { ) -> crate::Result<Self::Child> {
let collector = self.0.for_segment(segment_local_id, reader)?; let collector = self.0.for_segment(segment_local_id, reader)?;
Ok(TopScoreSegmentCollector(collector)) Ok(TopScoreSegmentCollector(collector))
} }
@@ -413,7 +417,10 @@ impl Collector for TopDocs {
true true
} }
fn merge_fruits(&self, child_fruits: Vec<Vec<(Score, DocAddress)>>) -> Result<Self::Fruit> { fn merge_fruits(
&self,
child_fruits: Vec<Vec<(Score, DocAddress)>>,
) -> crate::Result<Self::Fruit> {
self.0.merge_fruits(child_fruits) self.0.merge_fruits(child_fruits)
} }
} }
@@ -437,7 +444,7 @@ impl SegmentCollector for TopScoreSegmentCollector {
mod tests { mod tests {
use super::TopDocs; use super::TopDocs;
use crate::collector::Collector; use crate::collector::Collector;
use crate::query::{Query, QueryParser}; use crate::query::{AllQuery, Query, QueryParser};
use crate::schema::{Field, Schema, FAST, STORED, TEXT}; use crate::schema::{Field, Schema, FAST, STORED, TEXT};
use crate::DocAddress; use crate::DocAddress;
use crate::Index; use crate::Index;
@@ -503,6 +510,29 @@ mod tests {
); );
} }
#[test]
fn test_top_collector_stable_sorting() {
let index = make_index();
// using AllQuery to get a constant score
let searcher = index.reader().unwrap().searcher();
let page_1 = searcher.search(&AllQuery, &TopDocs::with_limit(2)).unwrap();
let page_2 = searcher.search(&AllQuery, &TopDocs::with_limit(3)).unwrap();
// precondition for the test to be meaningful: we did get documents
// with the same score
assert!(page_1.iter().all(|result| result.0 == page_1[0].0));
assert!(page_2.iter().all(|result| result.0 == page_2[0].0));
// sanity check since we're relying on make_index()
assert_eq!(page_1.len(), 2);
assert_eq!(page_2.len(), 3);
assert_eq!(page_1, &page_2[..page_1.len()]);
}
#[test] #[test]
#[should_panic] #[should_panic]
fn test_top_0() { fn test_top_0() {
@@ -560,7 +590,7 @@ mod tests {
             ));
         });
         let searcher = index.reader().unwrap().searcher();
-        let top_collector = TopDocs::with_limit(4).order_by_u64_field(Field(2));
+        let top_collector = TopDocs::with_limit(4).order_by_u64_field(Field::from_field_id(2));
         let segment_reader = searcher.segment_reader(0u32);
         top_collector
             .for_segment(0, segment_reader)
@@ -568,7 +598,6 @@ mod tests {
     }
     #[test]
-    #[should_panic(expected = "Field requested is not a i64/u64 fast field")]
     fn test_field_not_fast_field() {
         let mut schema_builder = Schema::builder();
         let title = schema_builder.add_text_field(TITLE, TEXT);
@@ -583,7 +612,15 @@ mod tests {
         let searcher = index.reader().unwrap().searcher();
         let segment = searcher.segment_reader(0);
         let top_collector = TopDocs::with_limit(4).order_by_u64_field(size);
-        assert!(top_collector.for_segment(0, segment).is_ok());
+        let err = top_collector.for_segment(0, segment);
+        if let Err(crate::TantivyError::SchemaError(msg)) = err {
+            assert_eq!(
+                msg,
+                "Field requested (Field(1)) is not a i64/u64 fast field."
+            );
+        } else {
+            assert!(false);
+        }
     }
     fn index(
@@ -591,7 +628,7 @@ mod tests {
         query_field: Field,
         schema: Schema,
         mut doc_adder: impl FnMut(&mut IndexWriter) -> (),
-    ) -> (Index, Box<Query>) {
+    ) -> (Index, Box<dyn Query>) {
         let index = Index::create_in_ram(schema);
         let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
@@ -601,5 +638,4 @@ mod tests {
         let query = query_parser.parse_query(query).unwrap();
         (index, query)
     }
 }


@@ -2,7 +2,7 @@ use crate::common::BinarySerializable;
use crate::common::CountingWriter; use crate::common::CountingWriter;
use crate::common::VInt; use crate::common::VInt;
use crate::directory::ReadOnlySource; use crate::directory::ReadOnlySource;
use crate::directory::WritePtr; use crate::directory::{TerminatingWrite, WritePtr};
use crate::schema::Field; use crate::schema::Field;
use crate::space_usage::FieldUsage; use crate::space_usage::FieldUsage;
use crate::space_usage::PerFieldSpaceUsage; use crate::space_usage::PerFieldSpaceUsage;
@@ -42,7 +42,7 @@ pub struct CompositeWrite<W = WritePtr> {
offsets: HashMap<FileAddr, u64>, offsets: HashMap<FileAddr, u64>,
} }
impl<W: Write> CompositeWrite<W> { impl<W: TerminatingWrite + Write> CompositeWrite<W> {
/// Crate a new API writer that writes a composite file /// Crate a new API writer that writes a composite file
/// in a given write. /// in a given write.
pub fn wrap(w: W) -> CompositeWrite<W> { pub fn wrap(w: W) -> CompositeWrite<W> {
@@ -91,8 +91,7 @@ impl<W: Write> CompositeWrite<W> {
let footer_len = (self.write.written_bytes() - footer_offset) as u32; let footer_len = (self.write.written_bytes() - footer_offset) as u32;
footer_len.serialize(&mut self.write)?; footer_len.serialize(&mut self.write)?;
self.write.flush()?; self.write.terminate()
Ok(())
} }
} }
@@ -200,13 +199,13 @@ mod test {
let w = directory.open_write(path).unwrap(); let w = directory.open_write(path).unwrap();
let mut composite_write = CompositeWrite::wrap(w); let mut composite_write = CompositeWrite::wrap(w);
{ {
let mut write_0 = composite_write.for_field(Field(0u32)); let mut write_0 = composite_write.for_field(Field::from_field_id(0u32));
VInt(32431123u64).serialize(&mut write_0).unwrap(); VInt(32431123u64).serialize(&mut write_0).unwrap();
write_0.flush().unwrap(); write_0.flush().unwrap();
} }
{ {
let mut write_4 = composite_write.for_field(Field(4u32)); let mut write_4 = composite_write.for_field(Field::from_field_id(4u32));
VInt(2).serialize(&mut write_4).unwrap(); VInt(2).serialize(&mut write_4).unwrap();
write_4.flush().unwrap(); write_4.flush().unwrap();
} }
@@ -216,14 +215,18 @@ mod test {
let r = directory.open_read(path).unwrap(); let r = directory.open_read(path).unwrap();
let composite_file = CompositeFile::open(&r).unwrap(); let composite_file = CompositeFile::open(&r).unwrap();
{ {
let file0 = composite_file.open_read(Field(0u32)).unwrap(); let file0 = composite_file
.open_read(Field::from_field_id(0u32))
.unwrap();
let mut file0_buf = file0.as_slice(); let mut file0_buf = file0.as_slice();
let payload_0 = VInt::deserialize(&mut file0_buf).unwrap().0; let payload_0 = VInt::deserialize(&mut file0_buf).unwrap().0;
assert_eq!(file0_buf.len(), 0); assert_eq!(file0_buf.len(), 0);
assert_eq!(payload_0, 32431123u64); assert_eq!(payload_0, 32431123u64);
} }
{ {
let file4 = composite_file.open_read(Field(4u32)).unwrap(); let file4 = composite_file
.open_read(Field::from_field_id(4u32))
.unwrap();
let mut file4_buf = file4.as_slice(); let mut file4_buf = file4.as_slice();
let payload_4 = VInt::deserialize(&mut file4_buf).unwrap().0; let payload_4 = VInt::deserialize(&mut file4_buf).unwrap().0;
assert_eq!(file4_buf.len(), 0); assert_eq!(file4_buf.len(), 0);
@@ -231,5 +234,4 @@ mod test {
} }
} }
} }
} }


@@ -1,3 +1,5 @@
use crate::directory::AntiCallToken;
use crate::directory::TerminatingWrite;
use std::io; use std::io;
use std::io::Write; use std::io::Write;
@@ -42,6 +44,13 @@ impl<W: Write> Write for CountingWriter<W> {
} }
} }
impl<W: TerminatingWrite> TerminatingWrite for CountingWriter<W> {
fn terminate_ref(&mut self, token: AntiCallToken) -> io::Result<()> {
self.flush()?;
self.underlying.terminate_ref(token)
}
}
#[cfg(test)] #[cfg(test)]
mod test { mod test {


@@ -18,6 +18,19 @@ pub use byteorder::LittleEndian as Endianness;
/// We do not allow segments with more than /// We do not allow segments with more than
pub const MAX_DOC_LIMIT: u32 = 1 << 31; pub const MAX_DOC_LIMIT: u32 = 1 << 31;
pub fn minmax<I, T>(mut vals: I) -> Option<(T, T)>
where
I: Iterator<Item = T>,
T: Copy + Ord,
{
if let Some(first_el) = vals.next() {
return Some(vals.fold((first_el, first_el), |(min_val, max_val), el| {
(min_val.min(el), max_val.max(el))
}));
}
None
}
/// Computes the number of bits that will be used for bitpacking. /// Computes the number of bits that will be used for bitpacking.
/// ///
/// In general the target is the minimum number of bits /// In general the target is the minimum number of bits
@@ -124,26 +137,25 @@ pub fn f64_to_u64(val: f64) -> u64 {
 /// Reverse the mapping given by [`i64_to_u64`](./fn.i64_to_u64.html).
 #[inline(always)]
 pub fn u64_to_f64(val: u64) -> f64 {
-    f64::from_bits(
-        if val & HIGHEST_BIT != 0 {
-            val ^ HIGHEST_BIT
-        } else {
-            !val
-        }
-    )
+    f64::from_bits(if val & HIGHEST_BIT != 0 {
+        val ^ HIGHEST_BIT
+    } else {
+        !val
+    })
 }
 #[cfg(test)]
 pub(crate) mod test {
+    pub use super::minmax;
     pub use super::serialize::test::fixed_size_test;
-    use super::{compute_num_bits, i64_to_u64, u64_to_i64, f64_to_u64, u64_to_f64};
+    use super::{compute_num_bits, f64_to_u64, i64_to_u64, u64_to_f64, u64_to_i64};
     use std::f64;
     fn test_i64_converter_helper(val: i64) {
         assert_eq!(u64_to_i64(i64_to_u64(val)), val);
     }
     fn test_f64_converter_helper(val: f64) {
         assert_eq!(u64_to_f64(f64_to_u64(val)), val);
     }
@@ -172,7 +184,8 @@ pub(crate) mod test {
     #[test]
     fn test_f64_order() {
-        assert!(!(f64_to_u64(f64::NEG_INFINITY)..f64_to_u64(f64::INFINITY)).contains(&f64_to_u64(f64::NAN))); //nan is not a number
+        assert!(!(f64_to_u64(f64::NEG_INFINITY)..f64_to_u64(f64::INFINITY))
+            .contains(&f64_to_u64(f64::NAN))); //nan is not a number
         assert!(f64_to_u64(1.5) > f64_to_u64(1.0)); //same exponent, different mantissa
assert!(f64_to_u64(2.0) > f64_to_u64(1.0)); //same mantissa, different exponent assert!(f64_to_u64(2.0) > f64_to_u64(1.0)); //same mantissa, different exponent
assert!(f64_to_u64(2.0) > f64_to_u64(1.5)); //different exponent and mantissa assert!(f64_to_u64(2.0) > f64_to_u64(1.5)); //different exponent and mantissa
@@ -200,4 +213,21 @@ pub(crate) mod test {
assert!(((super::MAX_DOC_LIMIT - 1) as i32) >= 0); assert!(((super::MAX_DOC_LIMIT - 1) as i32) >= 0);
assert!((super::MAX_DOC_LIMIT as i32) < 0); assert!((super::MAX_DOC_LIMIT as i32) < 0);
} }
#[test]
fn test_minmax_empty() {
let vals: Vec<u32> = vec![];
assert_eq!(minmax(vals.into_iter()), None);
}
#[test]
fn test_minmax_one() {
assert_eq!(minmax(vec![1].into_iter()), Some((1, 1)));
}
#[test]
fn test_minmax_two() {
assert_eq!(minmax(vec![1, 2].into_iter()), Some((1, 2)));
assert_eq!(minmax(vec![2, 1].into_iter()), Some((1, 2)));
}
} }
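Note: the `u64_to_f64` body reformatted above is the inverse half of the classic order-preserving `f64`-to-`u64` mapping that tantivy uses for fast fields. A self-contained sketch of that trick, under my own helper names (the forward mapping below is inferred so that it matches the inverse shown in the diff; it is not copied from tantivy's source):

```rust
// Sign bit of an IEEE-754 f64.
const HIGHEST_BIT: u64 = 1 << 63;

/// Maps f64 to u64 so that the u64 ordering matches the f64 ordering
/// (NaN excluded). Positives get their sign bit set; negatives get all
/// bits flipped so that "more negative" sorts lower.
fn to_monotonic_u64(val: f64) -> u64 {
    let bits = val.to_bits();
    if bits & HIGHEST_BIT == 0 {
        bits ^ HIGHEST_BIT
    } else {
        !bits
    }
}

/// Exact inverse, mirroring the `u64_to_f64` body shown above.
fn from_monotonic_u64(val: u64) -> f64 {
    f64::from_bits(if val & HIGHEST_BIT != 0 {
        val ^ HIGHEST_BIT
    } else {
        !val
    })
}

fn main() {
    assert!(to_monotonic_u64(-2.0) < to_monotonic_u64(-1.0));
    assert!(to_monotonic_u64(-1.0) < to_monotonic_u64(0.0));
    assert!(to_monotonic_u64(1.0) < to_monotonic_u64(1.5));
    assert_eq!(from_monotonic_u64(to_monotonic_u64(3.25)), 3.25);
    assert_eq!(from_monotonic_u64(to_monotonic_u64(-3.25)), -3.25);
}
```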


@@ -199,10 +199,7 @@ pub mod test {
     fn test_serialize_string() {
         assert_eq!(serialize_test(String::from("")), 1);
         assert_eq!(serialize_test(String::from("ぽよぽよ")), 1 + 3 * 4);
-        assert_eq!(
-            serialize_test(String::from("富士さん見える。")),
-            1 + 3 * 8
-        );
+        assert_eq!(serialize_test(String::from("富士さん見える。")), 1 + 3 * 8);
     }
     #[test]


@@ -1,6 +1,5 @@
-use crate::Result;
 use crossbeam::channel;
-use scoped_pool::{Pool, ThreadConfig};
+use rayon::{ThreadPool, ThreadPoolBuilder};
 /// Search executor whether search request are single thread or multithread.
 ///
@@ -10,8 +9,10 @@ use scoped_pool::{Pool, ThreadConfig};
 /// API of a dependency, knowing it might conflict with a different version
 /// used by the client. Second, we may stop using rayon in the future.
 pub enum Executor {
+    /// Single thread variant of an Executor
     SingleThread,
-    ThreadPool(Pool),
+    /// Thread pool variant of an Executor
+    ThreadPool(ThreadPool),
 }
 impl Executor {
@@ -20,37 +21,39 @@ impl Executor {
         Executor::SingleThread
     }
-    // Creates an Executor that dispatches the tasks in a thread pool.
-    pub fn multi_thread(num_threads: usize, prefix: &'static str) -> Executor {
-        let thread_config = ThreadConfig::new().prefix(prefix);
-        let pool = Pool::with_thread_config(num_threads, thread_config);
-        Executor::ThreadPool(pool)
+    /// Creates an Executor that dispatches the tasks in a thread pool.
+    pub fn multi_thread(num_threads: usize, prefix: &'static str) -> crate::Result<Executor> {
+        let pool = ThreadPoolBuilder::new()
+            .num_threads(num_threads)
+            .thread_name(move |num| format!("{}{}", prefix, num))
+            .build()?;
+        Ok(Executor::ThreadPool(pool))
     }
-    // Perform a map in the thread pool.
-    //
-    // Regardless of the executor (`SingleThread` or `ThreadPool`), panics in the task
-    // will propagate to the caller.
+    /// Perform a map in the thread pool.
+    ///
+    /// Regardless of the executor (`SingleThread` or `ThreadPool`), panics in the task
+    /// will propagate to the caller.
     pub fn map<
         A: Send,
         R: Send,
         AIterator: Iterator<Item = A>,
-        F: Sized + Sync + Fn(A) -> Result<R>,
+        F: Sized + Sync + Fn(A) -> crate::Result<R>,
     >(
         &self,
         f: F,
         args: AIterator,
-    ) -> Result<Vec<R>> {
+    ) -> crate::Result<Vec<R>> {
         match self {
-            Executor::SingleThread => args.map(f).collect::<Result<_>>(),
+            Executor::SingleThread => args.map(f).collect::<crate::Result<_>>(),
             Executor::ThreadPool(pool) => {
                 let args_with_indices: Vec<(usize, A)> = args.enumerate().collect();
                 let num_fruits = args_with_indices.len();
                 let fruit_receiver = {
                     let (fruit_sender, fruit_receiver) = channel::unbounded();
-                    pool.scoped(|scope| {
+                    pool.scope(|scope| {
                         for arg_with_idx in args_with_indices {
-                            scope.execute(|| {
+                            scope.spawn(|_| {
                                 let (idx, arg) = arg_with_idx;
                                 let fruit = f(arg);
                                 if let Err(err) = fruit_sender.send((idx, fruit)) {
@@ -103,6 +106,7 @@ mod tests {
     #[should_panic] //< unfortunately the panic message is not propagated
     fn test_panic_propagates_multi_thread() {
         let _result: Vec<usize> = Executor::multi_thread(1, "search-test")
+            .unwrap()
             .map(
                 |_| {
                     panic!("panic should propagate");
@@ -126,6 +130,7 @@ mod tests {
     #[test]
     fn test_map_multithread() {
         let result: Vec<usize> = Executor::multi_thread(3, "search-test")
+            .unwrap()
            .map(|i| Ok(i * 2), 0..10)
            .unwrap();
         assert_eq!(result.len(), 10);
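Note: the rayon-based `Executor::map` above fans tasks out to a scoped pool, tags every result with the index of its input over a channel, and reorders the fruits before returning them. A minimal, dependency-free sketch of that index-tagging pattern using only the standard library (Rust 1.63+ for `thread::scope`; the function name is mine, not tantivy's):

```rust
use std::sync::mpsc;
use std::thread;

// Runs `f` over `args` on scoped threads and returns the results in input
// order, mirroring the "send (index, result), then reorder" approach used by
// Executor::map in the diff above.
fn ordered_parallel_map<A, R, F>(args: Vec<A>, f: F) -> Vec<R>
where
    A: Send,
    R: Send,
    F: Fn(A) -> R + Sync,
{
    let n = args.len();
    let (tx, rx) = mpsc::channel();
    thread::scope(|scope| {
        for (idx, arg) in args.into_iter().enumerate() {
            let tx = tx.clone();
            let f = &f;
            scope.spawn(move || {
                // Tag each result with the index of its input.
                tx.send((idx, f(arg))).expect("receiver alive");
            });
        }
    });
    drop(tx);
    // Collect in arrival order, then restore input order.
    let mut tagged: Vec<(usize, R)> = rx.into_iter().collect();
    tagged.sort_by_key(|(idx, _)| *idx);
    assert_eq!(tagged.len(), n);
    tagged.into_iter().map(|(_, r)| r).collect()
}

fn main() {
    let doubled = ordered_parallel_map(vec![1, 2, 3, 4], |i| i * 2);
    assert_eq!(doubled, vec![2, 4, 6, 8]);
}
```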


@@ -1,4 +1,3 @@
-use super::segment::create_segment;
 use super::segment::Segment;
 use crate::core::Executor;
 use crate::core::IndexMeta;
@@ -20,18 +19,20 @@ use crate::reader::IndexReaderBuilder;
 use crate::schema::Field;
 use crate::schema::FieldType;
 use crate::schema::Schema;
-use crate::tokenizer::BoxedTokenizer;
-use crate::tokenizer::TokenizerManager;
+use crate::tokenizer::{TextAnalyzer, TokenizerManager};
 use crate::IndexWriter;
-use crate::Result;
 use num_cpus;
 use std::borrow::BorrowMut;
+use std::collections::HashSet;
 use std::fmt;
 #[cfg(feature = "mmap")]
-use std::path::Path;
+use std::path::{Path, PathBuf};
 use std::sync::Arc;
-fn load_metas(directory: &dyn Directory, inventory: &SegmentMetaInventory) -> Result<IndexMeta> {
+fn load_metas(
+    directory: &dyn Directory,
+    inventory: &SegmentMetaInventory,
+) -> crate::Result<IndexMeta> {
     let meta_data = directory.atomic_read(&META_FILEPATH)?;
     let meta_string = String::from_utf8_lossy(&meta_data);
     IndexMeta::deserialize(&meta_string, &inventory)
@@ -72,15 +73,16 @@ impl Index {
     /// Replace the default single thread search executor pool
     /// by a thread pool with a given number of threads.
-    pub fn set_multithread_executor(&mut self, num_threads: usize) {
-        self.executor = Arc::new(Executor::multi_thread(num_threads, "thrd-tantivy-search-"));
+    pub fn set_multithread_executor(&mut self, num_threads: usize) -> crate::Result<()> {
+        self.executor = Arc::new(Executor::multi_thread(num_threads, "thrd-tantivy-search-")?);
+        Ok(())
     }
     /// Replace the default single thread search executor pool
     /// by a thread pool with a given number of threads.
-    pub fn set_default_multithread_executor(&mut self) {
+    pub fn set_default_multithread_executor(&mut self) -> crate::Result<()> {
         let default_num_threads = num_cpus::get();
-        self.set_multithread_executor(default_num_threads);
+        self.set_multithread_executor(default_num_threads)
     }
     /// Creates a new index using the `RAMDirectory`.
@@ -97,28 +99,29 @@ impl Index {
     ///
     /// If a previous index was in this directory, then its meta file will be destroyed.
     #[cfg(feature = "mmap")]
-    pub fn create_in_dir<P: AsRef<Path>>(directory_path: P, schema: Schema) -> Result<Index> {
+    pub fn create_in_dir<P: AsRef<Path>>(
+        directory_path: P,
+        schema: Schema,
+    ) -> crate::Result<Index> {
         let mmap_directory = MmapDirectory::open(directory_path)?;
         if Index::exists(&mmap_directory) {
             return Err(TantivyError::IndexAlreadyExists);
         }
         Index::create(mmap_directory, schema)
     }
     /// Opens or creates a new index in the provided directory
-    pub fn open_or_create<Dir: Directory>(dir: Dir, schema: Schema) -> Result<Index> {
-        if Index::exists(&dir) {
-            let index = Index::open(dir)?;
-            if index.schema() == schema {
-                Ok(index)
-            } else {
-                Err(TantivyError::SchemaError(
-                    "An index exists but the schema does not match.".to_string(),
-                ))
-            }
-        } else {
-            Index::create(dir, schema)
+    pub fn open_or_create<Dir: Directory>(dir: Dir, schema: Schema) -> crate::Result<Index> {
+        if !Index::exists(&dir) {
+            return Index::create(dir, schema);
+        }
+        let index = Index::open(dir)?;
+        if index.schema() == schema {
+            Ok(index)
+        } else {
+            Err(TantivyError::SchemaError(
+                "An index exists but the schema does not match.".to_string(),
+            ))
        }
     }
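Note: the reworked `open_or_create` above returns early with a freshly created index when nothing exists yet, and otherwise insists that the supplied schema matches the one already on disk. A hedged usage sketch of that behaviour (the `RAMDirectory` setup is illustrative and mirrors how tantivy's own tests share a cloned in-memory directory):

```rust
use tantivy::directory::RAMDirectory;
use tantivy::schema::{Schema, TEXT};
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    schema_builder.add_text_field("title", TEXT);
    let schema = schema_builder.build();

    // First call: the directory is empty, so the index is created.
    let dir = RAMDirectory::create();
    let _index = Index::open_or_create(dir.clone(), schema.clone())?;

    // Second call with the same schema: the existing index is reopened.
    let _reopened = Index::open_or_create(dir, schema)?;

    // A call with a different schema would fail with TantivyError::SchemaError
    // ("An index exists but the schema does not match.").
    Ok(())
}
```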
@@ -131,13 +134,13 @@ impl Index {
/// The temp directory is only used for testing the `MmapDirectory`. /// The temp directory is only used for testing the `MmapDirectory`.
/// For other unit tests, prefer the `RAMDirectory`, see: `create_in_ram`. /// For other unit tests, prefer the `RAMDirectory`, see: `create_in_ram`.
#[cfg(feature = "mmap")] #[cfg(feature = "mmap")]
pub fn create_from_tempdir(schema: Schema) -> Result<Index> { pub fn create_from_tempdir(schema: Schema) -> crate::Result<Index> {
let mmap_directory = MmapDirectory::create_from_tempdir()?; let mmap_directory = MmapDirectory::create_from_tempdir()?;
Index::create(mmap_directory, schema) Index::create(mmap_directory, schema)
} }
/// Creates a new index given an implementation of the trait `Directory` /// Creates a new index given an implementation of the trait `Directory`
pub fn create<Dir: Directory>(dir: Dir, schema: Schema) -> Result<Index> { pub fn create<Dir: Directory>(dir: Dir, schema: Schema) -> crate::Result<Index> {
let directory = ManagedDirectory::wrap(dir)?; let directory = ManagedDirectory::wrap(dir)?;
Index::from_directory(directory, schema) Index::from_directory(directory, schema)
} }
@@ -145,7 +148,7 @@ impl Index {
/// Create a new index from a directory. /// Create a new index from a directory.
/// ///
/// This will overwrite existing meta.json /// This will overwrite existing meta.json
fn from_directory(mut directory: ManagedDirectory, schema: Schema) -> Result<Index> { fn from_directory(mut directory: ManagedDirectory, schema: Schema) -> crate::Result<Index> {
save_new_metas(schema.clone(), directory.borrow_mut())?; save_new_metas(schema.clone(), directory.borrow_mut())?;
let metas = IndexMeta::with_schema(schema); let metas = IndexMeta::with_schema(schema);
Index::create_from_metas(directory, &metas, SegmentMetaInventory::default()) Index::create_from_metas(directory, &metas, SegmentMetaInventory::default())
@@ -156,7 +159,7 @@ impl Index {
directory: ManagedDirectory, directory: ManagedDirectory,
metas: &IndexMeta, metas: &IndexMeta,
inventory: SegmentMetaInventory, inventory: SegmentMetaInventory,
) -> Result<Index> { ) -> crate::Result<Index> {
let schema = metas.schema.clone(); let schema = metas.schema.clone();
Ok(Index { Ok(Index {
directory, directory,
@@ -173,11 +176,11 @@ impl Index {
} }
/// Helper to access the tokenizer associated to a specific field. /// Helper to access the tokenizer associated to a specific field.
pub fn tokenizer_for_field(&self, field: Field) -> Result<Box<dyn BoxedTokenizer>> { pub fn tokenizer_for_field(&self, field: Field) -> crate::Result<TextAnalyzer> {
let field_entry = self.schema.get_field_entry(field); let field_entry = self.schema.get_field_entry(field);
let field_type = field_entry.field_type(); let field_type = field_entry.field_type();
let tokenizer_manager: &TokenizerManager = self.tokenizers(); let tokenizer_manager: &TokenizerManager = self.tokenizers();
let tokenizer_name_opt: Option<Box<dyn BoxedTokenizer>> = match field_type { let tokenizer_name_opt: Option<TextAnalyzer> = match field_type {
FieldType::Str(text_options) => text_options FieldType::Str(text_options) => text_options
.get_indexing_options() .get_indexing_options()
.map(|text_indexing_options| text_indexing_options.tokenizer().to_string()) .map(|text_indexing_options| text_indexing_options.tokenizer().to_string())
@@ -196,7 +199,7 @@ impl Index {
/// Create a default `IndexReader` for the given index. /// Create a default `IndexReader` for the given index.
/// ///
/// See [`Index.reader_builder()`](#method.reader_builder). /// See [`Index.reader_builder()`](#method.reader_builder).
pub fn reader(&self) -> Result<IndexReader> { pub fn reader(&self) -> crate::Result<IndexReader> {
self.reader_builder().try_into() self.reader_builder().try_into()
} }
@@ -211,17 +214,31 @@ impl Index {
     /// Opens a new directory from an index path.
     #[cfg(feature = "mmap")]
-    pub fn open_in_dir<P: AsRef<Path>>(directory_path: P) -> Result<Index> {
+    pub fn open_in_dir<P: AsRef<Path>>(directory_path: P) -> crate::Result<Index> {
         let mmap_directory = MmapDirectory::open(directory_path)?;
         Index::open(mmap_directory)
     }
-    pub(crate) fn inventory(&self) -> &SegmentMetaInventory {
-        &self.inventory
+    /// Returns the list of the segment metas tracked by the index.
+    ///
+    /// Such segments can of course be part of the index,
+    /// but also they could be segments being currently built or in the middle of a merge
+    /// operation.
+    pub fn list_all_segment_metas(&self) -> Vec<SegmentMeta> {
+        self.inventory.all()
+    }
+    /// Creates a new segment_meta (Advanced user only).
+    ///
+    /// As long as the `SegmentMeta` lives, the files associated with the
+    /// `SegmentMeta` are guaranteed to not be garbage collected, regardless of
+    /// whether the segment is recorded as part of the index or not.
+    pub fn new_segment_meta(&self, segment_id: SegmentId, max_doc: u32) -> SegmentMeta {
+        self.inventory.new_segment_meta(segment_id, max_doc)
     }
/// Open the index using the provided directory /// Open the index using the provided directory
pub fn open<D: Directory>(directory: D) -> Result<Index> { pub fn open<D: Directory>(directory: D) -> crate::Result<Index> {
let directory = ManagedDirectory::wrap(directory)?; let directory = ManagedDirectory::wrap(directory)?;
let inventory = SegmentMetaInventory::default(); let inventory = SegmentMetaInventory::default();
let metas = load_metas(&directory, &inventory)?; let metas = load_metas(&directory, &inventory)?;
@@ -229,7 +246,7 @@ impl Index {
} }
/// Reads the index meta file from the directory. /// Reads the index meta file from the directory.
pub fn load_metas(&self) -> Result<IndexMeta> { pub fn load_metas(&self) -> crate::Result<IndexMeta> {
load_metas(self.directory(), &self.inventory) load_metas(self.directory(), &self.inventory)
} }
@@ -257,7 +274,7 @@ impl Index {
&self, &self,
num_threads: usize, num_threads: usize,
overall_heap_size_in_bytes: usize, overall_heap_size_in_bytes: usize,
) -> Result<IndexWriter> { ) -> crate::Result<IndexWriter> {
let directory_lock = self let directory_lock = self
.directory .directory
.acquire_lock(&INDEX_WRITER_LOCK) .acquire_lock(&INDEX_WRITER_LOCK)
@@ -292,7 +309,7 @@ impl Index {
/// If the lockfile already exists, returns `Error::FileAlreadyExists`. /// If the lockfile already exists, returns `Error::FileAlreadyExists`.
/// # Panics /// # Panics
/// If the heap size per thread is too small, panics. /// If the heap size per thread is too small, panics.
pub fn writer(&self, overall_heap_size_in_bytes: usize) -> Result<IndexWriter> { pub fn writer(&self, overall_heap_size_in_bytes: usize) -> crate::Result<IndexWriter> {
let mut num_threads = num_cpus::get(); let mut num_threads = num_cpus::get();
let heap_size_in_bytes_per_thread = overall_heap_size_in_bytes / num_threads; let heap_size_in_bytes_per_thread = overall_heap_size_in_bytes / num_threads;
if heap_size_in_bytes_per_thread < HEAP_SIZE_MIN { if heap_size_in_bytes_per_thread < HEAP_SIZE_MIN {
@@ -309,7 +326,7 @@ impl Index {
} }
/// Returns the list of segments that are searchable /// Returns the list of segments that are searchable
pub fn searchable_segments(&self) -> Result<Vec<Segment>> { pub fn searchable_segments(&self) -> crate::Result<Vec<Segment>> {
Ok(self Ok(self
.searchable_segment_metas()? .searchable_segment_metas()?
.into_iter() .into_iter()
@@ -319,7 +336,7 @@ impl Index {
#[doc(hidden)] #[doc(hidden)]
pub fn segment(&self, segment_meta: SegmentMeta) -> Segment { pub fn segment(&self, segment_meta: SegmentMeta) -> Segment {
create_segment(self.clone(), segment_meta) Segment::for_index(self.clone(), segment_meta)
} }
/// Creates a new segment. /// Creates a new segment.
@@ -342,18 +359,23 @@ impl Index {
/// Reads the meta.json and returns the list of /// Reads the meta.json and returns the list of
/// `SegmentMeta` from the last commit. /// `SegmentMeta` from the last commit.
pub fn searchable_segment_metas(&self) -> Result<Vec<SegmentMeta>> { pub fn searchable_segment_metas(&self) -> crate::Result<Vec<SegmentMeta>> {
Ok(self.load_metas()?.segments) Ok(self.load_metas()?.segments)
} }
/// Returns the list of segment ids that are searchable. /// Returns the list of segment ids that are searchable.
pub fn searchable_segment_ids(&self) -> Result<Vec<SegmentId>> { pub fn searchable_segment_ids(&self) -> crate::Result<Vec<SegmentId>> {
Ok(self Ok(self
.searchable_segment_metas()? .searchable_segment_metas()?
.iter() .iter()
.map(SegmentMeta::id) .map(SegmentMeta::id)
.collect()) .collect())
} }
/// Returns the set of corrupted files
pub fn validate_checksum(&self) -> crate::Result<HashSet<PathBuf>> {
self.directory.list_damaged().map_err(Into::into)
}
} }
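Note: the new `validate_checksum` method added just above surfaces `Directory::list_damaged` on the index and returns the set of corrupted file paths. A hedged sketch of how a caller might use it once this change ships (the index path is illustrative and requires the "mmap" feature):

```rust
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    let index = Index::open_in_dir("/tmp/my-index")?;
    let damaged = index.validate_checksum()?;
    if damaged.is_empty() {
        println!("all index files passed their checksum check");
    } else {
        for path in damaged {
            eprintln!("corrupted file: {:?}", path);
        }
    }
    Ok(())
}
```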
impl fmt::Debug for Index { impl fmt::Debug for Index {
@@ -367,12 +389,9 @@ mod tests {
     use crate::directory::RAMDirectory;
     use crate::schema::Field;
     use crate::schema::{Schema, INDEXED, TEXT};
-    use crate::Index;
     use crate::IndexReader;
-    use crate::IndexWriter;
     use crate::ReloadPolicy;
-    use std::thread;
-    use std::time::Duration;
+    use crate::{Directory, Index};
     #[test]
     fn test_indexer_for_field() {
@@ -450,40 +469,38 @@ mod tests {
             .try_into()
             .unwrap();
         assert_eq!(reader.searcher().num_docs(), 0);
-        let mut writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
-        test_index_on_commit_reload_policy_aux(field, &mut writer, &reader);
+        test_index_on_commit_reload_policy_aux(field, &index, &reader);
     }
     #[cfg(feature = "mmap")]
     mod mmap_specific {
         use super::*;
-        use crate::Directory;
         use std::path::PathBuf;
-        use tempdir::TempDir;
+        use tempfile::TempDir;
         #[test]
         fn test_index_on_commit_reload_policy_mmap() {
             let schema = throw_away_schema();
             let field = schema.get_field("num_likes").unwrap();
-            let tempdir = TempDir::new("index").unwrap();
+            let tempdir = TempDir::new().unwrap();
             let tempdir_path = PathBuf::from(tempdir.path());
             let index = Index::create_in_dir(&tempdir_path, schema).unwrap();
-            let mut writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
-            writer.commit().unwrap();
             let reader = index
                 .reader_builder()
                 .reload_policy(ReloadPolicy::OnCommit)
                 .try_into()
                 .unwrap();
             assert_eq!(reader.searcher().num_docs(), 0);
-            test_index_on_commit_reload_policy_aux(field, &mut writer, &reader);
+            test_index_on_commit_reload_policy_aux(field, &index, &reader);
         }
         #[test]
         fn test_index_manual_policy_mmap() {
             let schema = throw_away_schema();
             let field = schema.get_field("num_likes").unwrap();
-            let index = Index::create_from_tempdir(schema).unwrap();
+            let mut index = Index::create_from_tempdir(schema).unwrap();
             let mut writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
             writer.commit().unwrap();
             let reader = index
@@ -493,8 +510,12 @@ mod tests {
                 .unwrap();
             assert_eq!(reader.searcher().num_docs(), 0);
             writer.add_document(doc!(field=>1u64));
+            let (sender, receiver) = crossbeam::channel::unbounded();
+            let _handle = index.directory_mut().watch(Box::new(move || {
+                let _ = sender.send(());
+            }));
             writer.commit().unwrap();
-            thread::sleep(Duration::from_millis(500));
+            assert!(receiver.recv().is_ok());
             assert_eq!(reader.searcher().num_docs(), 0);
             reader.reload().unwrap();
             assert_eq!(reader.searcher().num_docs(), 1);
@@ -504,7 +525,7 @@ mod tests {
fn test_index_on_commit_reload_policy_different_directories() { fn test_index_on_commit_reload_policy_different_directories() {
let schema = throw_away_schema(); let schema = throw_away_schema();
let field = schema.get_field("num_likes").unwrap(); let field = schema.get_field("num_likes").unwrap();
let tempdir = TempDir::new("index").unwrap(); let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path()); let tempdir_path = PathBuf::from(tempdir.path());
let write_index = Index::create_in_dir(&tempdir_path, schema).unwrap(); let write_index = Index::create_in_dir(&tempdir_path, schema).unwrap();
let read_index = Index::open_in_dir(&tempdir_path).unwrap(); let read_index = Index::open_in_dir(&tempdir_path).unwrap();
@@ -514,39 +535,26 @@ mod tests {
.try_into() .try_into()
.unwrap(); .unwrap();
assert_eq!(reader.searcher().num_docs(), 0); assert_eq!(reader.searcher().num_docs(), 0);
let mut writer = write_index.writer_with_num_threads(1, 3_000_000).unwrap(); test_index_on_commit_reload_policy_aux(field, &write_index, &reader);
test_index_on_commit_reload_policy_aux(field, &mut writer, &reader);
} }
} }
fn test_index_on_commit_reload_policy_aux( fn test_index_on_commit_reload_policy_aux(field: Field, index: &Index, reader: &IndexReader) {
field: Field, let mut reader_index = reader.index();
writer: &mut IndexWriter, let (sender, receiver) = crossbeam::channel::unbounded();
reader: &IndexReader, let _watch_handle = reader_index.directory_mut().watch(Box::new(move || {
) { let _ = sender.send(());
}));
let mut writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
assert_eq!(reader.searcher().num_docs(), 0); assert_eq!(reader.searcher().num_docs(), 0);
writer.add_document(doc!(field=>1u64)); writer.add_document(doc!(field=>1u64));
writer.commit().unwrap(); writer.commit().unwrap();
let mut count = 0; assert!(receiver.recv().is_ok());
for _ in 0..100 { assert_eq!(reader.searcher().num_docs(), 1);
count = reader.searcher().num_docs();
if count > 0 {
break;
}
thread::sleep(Duration::from_millis(100));
}
assert_eq!(count, 1);
writer.add_document(doc!(field=>2u64)); writer.add_document(doc!(field=>2u64));
writer.commit().unwrap(); writer.commit().unwrap();
let mut count = 0; assert!(receiver.recv().is_ok());
for _ in 0..10 { assert_eq!(reader.searcher().num_docs(), 2);
count = reader.searcher().num_docs();
if count > 1 {
break;
}
thread::sleep(Duration::from_millis(100));
}
assert_eq!(count, 2);
} }
// This test will not pass on windows, because windows // This test will not pass on windows, because windows
@@ -563,9 +571,13 @@ mod tests {
for i in 0u64..8_000u64 { for i in 0u64..8_000u64 {
writer.add_document(doc!(field => i)); writer.add_document(doc!(field => i));
} }
let (sender, receiver) = crossbeam::channel::unbounded();
let _handle = directory.watch(Box::new(move || {
let _ = sender.send(());
}));
writer.commit().unwrap(); writer.commit().unwrap();
let mem_right_after_commit = directory.total_mem_usage(); let mem_right_after_commit = directory.total_mem_usage();
thread::sleep(Duration::from_millis(1_000)); assert!(receiver.recv().is_ok());
let reader = index let reader = index
.reader_builder() .reader_builder()
.reload_policy(ReloadPolicy::Manual) .reload_policy(ReloadPolicy::Manual)
@@ -579,7 +591,11 @@ mod tests {
reader.reload().unwrap(); reader.reload().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
assert_eq!(searcher.num_docs(), 8_000); assert_eq!(searcher.num_docs(), 8_000);
assert!(mem_right_after_merge_finished < mem_right_after_commit); assert!(
mem_right_after_merge_finished < mem_right_after_commit,
"(mem after merge){} is expected < (mem before merge){}",
mem_right_after_merge_finished,
mem_right_after_commit
);
} }
} }


@@ -4,6 +4,7 @@ use crate::schema::Schema;
use crate::Opstamp; use crate::Opstamp;
use census::{Inventory, TrackedObject}; use census::{Inventory, TrackedObject};
use serde; use serde;
use serde::{Deserialize, Serialize};
use serde_json; use serde_json;
use std::collections::HashSet; use std::collections::HashSet;
use std::fmt; use std::fmt;
@@ -30,7 +31,6 @@ impl SegmentMetaInventory {
.collect::<Vec<_>>() .collect::<Vec<_>>()
} }
#[doc(hidden)]
pub fn new_segment_meta(&self, segment_id: SegmentId, max_doc: u32) -> SegmentMeta { pub fn new_segment_meta(&self, segment_id: SegmentId, max_doc: u32) -> SegmentMeta {
let inner = InnerSegmentMeta { let inner = InnerSegmentMeta {
segment_id, segment_id,
@@ -151,6 +151,21 @@ impl SegmentMeta {
self.num_deleted_docs() > 0 self.num_deleted_docs() > 0
} }
/// Updates the max_doc value from the `SegmentMeta`.
///
/// This method is only used when updating `max_doc` from 0
/// as we finalize a fresh new segment.
pub(crate) fn with_max_doc(self, max_doc: u32) -> SegmentMeta {
assert_eq!(self.tracked.max_doc, 0);
assert!(self.tracked.deletes.is_none());
let tracked = self.tracked.map(move |inner_meta| InnerSegmentMeta {
segment_id: inner_meta.segment_id,
max_doc,
deletes: None,
});
SegmentMeta { tracked }
}
#[doc(hidden)] #[doc(hidden)]
pub fn with_delete_meta(self, num_deleted_docs: u32, opstamp: Opstamp) -> SegmentMeta { pub fn with_delete_meta(self, num_deleted_docs: u32, opstamp: Opstamp) -> SegmentMeta {
let delete_meta = DeleteMeta { let delete_meta = DeleteMeta {
@@ -286,6 +301,9 @@ mod tests {
payload: None, payload: None,
}; };
let json = serde_json::ser::to_string(&index_metas).expect("serialization failed"); let json = serde_json::ser::to_string(&index_metas).expect("serialization failed");
assert_eq!(json, r#"{"segments":[],"schema":[{"name":"text","type":"text","options":{"indexing":{"record":"position","tokenizer":"default"},"stored":false}}],"opstamp":0}"#); assert_eq!(
json,
r#"{"segments":[],"schema":[{"name":"text","type":"text","options":{"indexing":{"record":"position","tokenizer":"default"},"stored":false}}],"opstamp":0}"#
);
} }
} }


@@ -60,7 +60,7 @@ impl InvertedIndexReader {
.get_index_record_option() .get_index_record_option()
.unwrap_or(IndexRecordOption::Basic); .unwrap_or(IndexRecordOption::Basic);
InvertedIndexReader { InvertedIndexReader {
termdict: TermDictionary::empty(&field_type), termdict: TermDictionary::empty(),
postings_source: ReadOnlySource::empty(), postings_source: ReadOnlySource::empty(),
positions_source: ReadOnlySource::empty(), positions_source: ReadOnlySource::empty(),
positions_idx_source: ReadOnlySource::empty(), positions_idx_source: ReadOnlySource::empty(),


@@ -14,7 +14,6 @@ use crate::store::StoreReader;
use crate::termdict::TermMerger; use crate::termdict::TermMerger;
use crate::DocAddress; use crate::DocAddress;
use crate::Index; use crate::Index;
use crate::Result;
use std::fmt; use std::fmt;
use std::sync::Arc; use std::sync::Arc;
@@ -23,8 +22,8 @@ fn collect_segment<C: Collector>(
weight: &dyn Weight, weight: &dyn Weight,
segment_ord: u32, segment_ord: u32,
segment_reader: &SegmentReader, segment_reader: &SegmentReader,
) -> Result<C::Fruit> { ) -> crate::Result<C::Fruit> {
let mut scorer = weight.scorer(segment_reader)?; let mut scorer = weight.scorer(segment_reader, 1.0f32)?;
let mut segment_collector = collector.for_segment(segment_ord as u32, segment_reader)?; let mut segment_collector = collector.for_segment(segment_ord as u32, segment_reader)?;
if let Some(delete_bitset) = segment_reader.delete_bitset() { if let Some(delete_bitset) = segment_reader.delete_bitset() {
scorer.for_each(&mut |doc, score| { scorer.for_each(&mut |doc, score| {
@@ -78,7 +77,7 @@ impl Searcher {
/// ///
/// The searcher uses the segment ordinal to route the /// The searcher uses the segment ordinal to route the
/// request to the right `Segment`. /// request to the right `Segment`.
pub fn doc(&self, doc_address: DocAddress) -> Result<Document> { pub fn doc(&self, doc_address: DocAddress) -> crate::Result<Document> {
let DocAddress(segment_local_id, doc_id) = doc_address; let DocAddress(segment_local_id, doc_id) = doc_address;
let store_reader = &self.store_readers[segment_local_id as usize]; let store_reader = &self.store_readers[segment_local_id as usize];
store_reader.get(doc_id) store_reader.get(doc_id)
@@ -132,7 +131,11 @@ impl Searcher {
/// ///
/// Finally, the Collector merges each of the child collectors into itself for result usability /// Finally, the Collector merges each of the child collectors into itself for result usability
/// by the caller. /// by the caller.
pub fn search<C: Collector>(&self, query: &dyn Query, collector: &C) -> Result<C::Fruit> { pub fn search<C: Collector>(
&self,
query: &dyn Query,
collector: &C,
) -> crate::Result<C::Fruit> {
let executor = self.index.search_executor(); let executor = self.index.search_executor();
self.search_with_executor(query, collector, executor) self.search_with_executor(query, collector, executor)
} }
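From the caller's side only the crate::Result alias changes; search still takes a query and a collector. A minimal sketch, assuming the stock Count collector and a TermQuery built elsewhere:

use tantivy::collector::Count;
use tantivy::query::TermQuery;
use tantivy::schema::IndexRecordOption;
use tantivy::{Index, Term};

fn count_matches(index: &Index, term: Term) -> tantivy::Result<usize> {
    let reader = index.reader()?;
    let searcher = reader.searcher();
    let query = TermQuery::new(term, IndexRecordOption::Basic);
    // The collector decides what gets gathered per segment; Count simply
    // counts matching documents across all searchable segments.
    searcher.search(&query, &Count)
}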
@@ -154,7 +157,7 @@ impl Searcher {
query: &dyn Query, query: &dyn Query,
collector: &C, collector: &C,
executor: &Executor, executor: &Executor,
) -> Result<C::Fruit> { ) -> crate::Result<C::Fruit> {
let scoring_enabled = collector.requires_scoring(); let scoring_enabled = collector.requires_scoring();
let weight = query.weight(self, scoring_enabled)?; let weight = query.weight(self, scoring_enabled)?;
let segment_readers = self.segment_readers(); let segment_readers = self.segment_readers();


@@ -8,10 +8,8 @@ use crate::directory::{ReadOnlySource, WritePtr};
use crate::indexer::segment_serializer::SegmentSerializer; use crate::indexer::segment_serializer::SegmentSerializer;
use crate::schema::Schema; use crate::schema::Schema;
use crate::Opstamp; use crate::Opstamp;
use crate::Result;
use std::fmt; use std::fmt;
use std::path::PathBuf; use std::path::PathBuf;
use std::result;
/// A segment is a piece of the index. /// A segment is a piece of the index.
#[derive(Clone)] #[derive(Clone)]
@@ -26,15 +24,12 @@ impl fmt::Debug for Segment {
} }
} }
/// Creates a new segment given an `Index` and a `SegmentId`
///
/// The function is here to make it private outside `tantivy`.
/// #[doc(hidden)]
pub fn create_segment(index: Index, meta: SegmentMeta) -> Segment {
Segment { index, meta }
}
impl Segment { impl Segment {
/// Creates a new segment given an `Index` and a `SegmentId`
pub(crate) fn for_index(index: Index, meta: SegmentMeta) -> Segment {
Segment { index, meta }
}
/// Returns the index the segment belongs to. /// Returns the index the segment belongs to.
pub fn index(&self) -> &Index { pub fn index(&self) -> &Index {
&self.index &self.index
@@ -50,6 +45,17 @@ impl Segment {
&self.meta &self.meta
} }
/// Updates the max_doc value from the `SegmentMeta`.
///
/// This method is only used when updating `max_doc` from 0
/// as we finalize a fresh new segment.
pub(crate) fn with_max_doc(self, max_doc: u32) -> Segment {
Segment {
index: self.index,
meta: self.meta.with_max_doc(max_doc),
}
}
#[doc(hidden)] #[doc(hidden)]
pub fn with_delete_meta(self, num_deleted_docs: u32, opstamp: Opstamp) -> Segment { pub fn with_delete_meta(self, num_deleted_docs: u32, opstamp: Opstamp) -> Segment {
Segment { Segment {
@@ -72,20 +78,14 @@ impl Segment {
} }
/// Open one of the component file for a *regular* read. /// Open one of the component file for a *regular* read.
pub fn open_read( pub fn open_read(&self, component: SegmentComponent) -> Result<ReadOnlySource, OpenReadError> {
&self,
component: SegmentComponent,
) -> result::Result<ReadOnlySource, OpenReadError> {
let path = self.relative_path(component); let path = self.relative_path(component);
let source = self.index.directory().open_read(&path)?; let source = self.index.directory().open_read(&path)?;
Ok(source) Ok(source)
} }
/// Open one of the component file for *regular* write. /// Open one of the component file for *regular* write.
pub fn open_write( pub fn open_write(&mut self, component: SegmentComponent) -> Result<WritePtr, OpenWriteError> {
&mut self,
component: SegmentComponent,
) -> result::Result<WritePtr, OpenWriteError> {
let path = self.relative_path(component); let path = self.relative_path(component);
let write = self.index.directory_mut().open_write(&path)?; let write = self.index.directory_mut().open_write(&path)?;
Ok(write) Ok(write)
@@ -98,5 +98,5 @@ pub trait SerializableSegment {
/// ///
/// # Returns /// # Returns
/// The number of documents in the segment. /// The number of documents in the segment.
fn write(&self, serializer: SegmentSerializer) -> Result<u32>; fn write(&self, serializer: SegmentSerializer) -> crate::Result<u32>;
} }


@@ -4,6 +4,9 @@ use uuid::Uuid;
#[cfg(test)] #[cfg(test)]
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use serde::{Deserialize, Serialize};
use std::error::Error;
use std::str::FromStr;
#[cfg(test)] #[cfg(test)]
use std::sync::atomic; use std::sync::atomic;
@@ -52,15 +55,51 @@ impl SegmentId {
/// and the rest is random. /// and the rest is random.
/// ///
/// Picking the first 8 chars is ok to identify /// Picking the first 8 chars is ok to identify
/// segments in a display message. /// segments in a display message (e.g. a5c4dfcb).
pub fn short_uuid_string(&self) -> String { pub fn short_uuid_string(&self) -> String {
(&self.0.to_simple_ref().to_string()[..8]).to_string() (&self.0.to_simple_ref().to_string()[..8]).to_string()
} }
/// Returns a segment uuid string. /// Returns a segment uuid string.
///
/// It consists of 32 lowercase hexadecimal chars
/// (e.g. a5c4dfcbdfe645089129e308e26d5523)
pub fn uuid_string(&self) -> String { pub fn uuid_string(&self) -> String {
self.0.to_simple_ref().to_string() self.0.to_simple_ref().to_string()
} }
/// Build a `SegmentId` string from the full uuid string.
///
/// E.g. "a5c4dfcbdfe645089129e308e26d5523"
pub fn from_uuid_string(uuid_string: &str) -> Result<SegmentId, SegmentIdParseError> {
FromStr::from_str(uuid_string)
}
}
/// Error type used when parsing a `SegmentId` from a string fails.
pub struct SegmentIdParseError(uuid::Error);
impl Error for SegmentIdParseError {}
impl fmt::Debug for SegmentIdParseError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.0.fmt(f)
}
}
impl fmt::Display for SegmentIdParseError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.0.fmt(f)
}
}
impl FromStr for SegmentId {
type Err = SegmentIdParseError;
fn from_str(uuid_string: &str) -> Result<Self, SegmentIdParseError> {
let uuid = Uuid::parse_str(uuid_string).map_err(SegmentIdParseError)?;
Ok(SegmentId(uuid))
}
} }
impl fmt::Debug for SegmentId { impl fmt::Debug for SegmentId {
@@ -80,3 +119,18 @@ impl Ord for SegmentId {
self.0.as_bytes().cmp(other.0.as_bytes()) self.0.as_bytes().cmp(other.0.as_bytes())
} }
} }
#[cfg(test)]
mod tests {
use super::SegmentId;
#[test]
fn test_to_uuid_string() {
let full_uuid = "a5c4dfcbdfe645089129e308e26d5523";
let segment_id = SegmentId::from_uuid_string(full_uuid).unwrap();
assert_eq!(segment_id.uuid_string(), full_uuid);
assert_eq!(segment_id.short_uuid_string(), "a5c4dfcb");
// one extra char
assert!(SegmentId::from_uuid_string("a5c4dfcbdfe645089129e308e26d5523b").is_err());
}
}
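Because SegmentId now implements FromStr, the usual str::parse syntax works as well; a short sketch (the tantivy::SegmentId re-export is assumed):

use tantivy::SegmentId;

fn parse_segment_id() {
    let segment_id: SegmentId = "a5c4dfcbdfe645089129e308e26d5523".parse().unwrap();
    assert_eq!(segment_id.short_uuid_string(), "a5c4dfcb");
}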


@@ -16,7 +16,6 @@ use crate::space_usage::SegmentSpaceUsage;
use crate::store::StoreReader; use crate::store::StoreReader;
use crate::termdict::TermDictionary; use crate::termdict::TermDictionary;
use crate::DocId; use crate::DocId;
use crate::Result;
use fail::fail_point; use fail::fail_point;
use std::collections::HashMap; use std::collections::HashMap;
use std::fmt; use std::fmt;
@@ -145,7 +144,7 @@ impl SegmentReader {
} }
/// Open a new segment for reading. /// Open a new segment for reading.
pub fn open(segment: &Segment) -> Result<SegmentReader> { pub fn open(segment: &Segment) -> crate::Result<SegmentReader> {
let termdict_source = segment.open_read(SegmentComponent::TERMS)?; let termdict_source = segment.open_read(SegmentComponent::TERMS)?;
let termdict_composite = CompositeFile::open(&termdict_source)?; let termdict_composite = CompositeFile::open(&termdict_source)?;


@@ -48,14 +48,14 @@ impl RetryPolicy {
/// ///
/// It is transparently associated to a lock file, that gets deleted /// It is transparently associated to a lock file, that gets deleted
/// on `Drop.` The lock is released automatically on `Drop`. /// on `Drop.` The lock is released automatically on `Drop`.
pub struct DirectoryLock(Box<dyn Drop + Send + Sync + 'static>); pub struct DirectoryLock(Box<dyn Send + Sync + 'static>);
struct DirectoryLockGuard { struct DirectoryLockGuard {
directory: Box<dyn Directory>, directory: Box<dyn Directory>,
path: PathBuf, path: PathBuf,
} }
impl<T: Drop + Send + Sync + 'static> From<Box<T>> for DirectoryLock { impl<T: Send + Sync + 'static> From<Box<T>> for DirectoryLock {
fn from(underlying: Box<T>) -> Self { fn from(underlying: Box<T>) -> Self {
DirectoryLock(underlying) DirectoryLock(underlying)
} }
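The lock is still released when the returned DirectoryLock guard is dropped; only the Drop bound on the boxed inner value goes away. A minimal sketch of the scoping pattern, reusing the META_LOCK referenced elsewhere in this changeset (its public re-export is assumed):

use tantivy::directory::META_LOCK;
use tantivy::Directory;

fn with_meta_lock<D: Directory>(directory: &D) -> tantivy::Result<()> {
    {
        // Holding the guard keeps the lock file alive.
        let _guard = directory.acquire_lock(&META_LOCK)?;
        // ... critical section ...
    } // _guard is dropped here and the lock is released.
    Ok(())
}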
@@ -118,6 +118,8 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// ///
/// Specifically, subsequent writes or flushes should /// Specifically, subsequent writes or flushes should
/// have no effect on the returned `ReadOnlySource` object. /// have no effect on the returned `ReadOnlySource` object.
///
/// You should only use this to read files created with [Directory::open_write].
fn open_read(&self, path: &Path) -> result::Result<ReadOnlySource, OpenReadError>; fn open_read(&self, path: &Path) -> result::Result<ReadOnlySource, OpenReadError>;
/// Removes a file /// Removes a file
@@ -157,6 +159,8 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// atomic_write. /// atomic_write.
/// ///
/// This should only be used for small files. /// This should only be used for small files.
///
/// You should only use this to read files created with [Directory::atomic_write].
fn atomic_read(&self, path: &Path) -> Result<Vec<u8>, OpenReadError>; fn atomic_read(&self, path: &Path) -> Result<Vec<u8>, OpenReadError>;
/// Atomically replace the content of a file with data. /// Atomically replace the content of a file with data.
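The two new doc lines pin down the pairing: data written through open_write must be read back with open_read, while atomic_write pairs with atomic_read. A rough sketch against a ManagedDirectory wrapping a RAMDirectory, assuming these types are publicly re-exported the way they are used inside the crate:

use std::io::Write;
use std::path::Path;
use tantivy::directory::{ManagedDirectory, RAMDirectory, TerminatingWrite};
use tantivy::Directory;

fn roundtrip() {
    let mut directory = ManagedDirectory::wrap(RAMDirectory::create()).unwrap();
    // Streamed write: the data goes through the footer/checksum machinery,
    // so terminate() appends the footer that open_read later strips off.
    let mut wrt = directory.open_write(Path::new("segment.data")).unwrap();
    wrt.write_all(b"payload").unwrap();
    wrt.terminate().unwrap();
    let source = directory.open_read(Path::new("segment.data")).unwrap();
    assert_eq!(source.as_slice(), &b"payload"[..]);
    // Atomic write adds no footer, so it pairs with atomic_read.
    directory.atomic_write(Path::new("meta.json"), b"{}").unwrap();
    let meta = directory.atomic_read(Path::new("meta.json")).unwrap();
    assert_eq!(&meta[..], &b"{}"[..]);
}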
@@ -193,7 +197,7 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// Registers a callback that will be called whenever a change on the `meta.json` /// Registers a callback that will be called whenever a change on the `meta.json`
/// using the `atomic_write` API is detected. /// using the `atomic_write` API is detected.
/// ///
/// The behavior when using `.watch()` on a file using `.open_write(...)` is, on the other /// The behavior when using `.watch()` on a file using [Directory::open_write] is, on the other
/// hand, undefined. /// hand, undefined.
/// ///
/// The file will be watched for the lifetime of the returned `WatchHandle`. The caller is /// The file will be watched for the lifetime of the returned `WatchHandle`. The caller is


@@ -1,3 +1,4 @@
use crate::Version;
use std::error::Error as StdError; use std::error::Error as StdError;
use std::fmt; use std::fmt;
use std::io; use std::io;
@@ -156,6 +157,65 @@ impl StdError for OpenWriteError {
} }
} }
/// Type of index incompatibility between the library and the index found on disk
/// Used to catch and provide a hint to solve this incompatibility issue
pub enum Incompatibility {
/// This library cannot decompress the index found on disk
CompressionMismatch {
/// Compression algorithm used by the current version of tantivy
library_compression_format: String,
/// Compression algorithm that was used to serialise the index
index_compression_format: String,
},
/// The index format found on disk isn't supported by this version of the library
IndexMismatch {
/// Version used by the library
library_version: Version,
/// Version the index was built with
index_version: Version,
},
}
impl fmt::Debug for Incompatibility {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
match self {
Incompatibility::CompressionMismatch {
library_compression_format,
index_compression_format,
} => {
let err = format!(
"Library was compiled with {:?} compression, index was compressed with {:?}",
library_compression_format, index_compression_format
);
let advice = format!(
"Change the feature flag to {:?} and rebuild the library",
index_compression_format
);
write!(f, "{}. {}", err, advice)?;
}
Incompatibility::IndexMismatch {
library_version,
index_version,
} => {
let err = format!(
"Library version: {}, index version: {}",
library_version.index_format_version, index_version.index_format_version
);
// TODO make a more useful error message
// include the version range that supports this index_format_version
let advice = format!(
"Change tantivy to a version compatible with index format {} (e.g. {}.{}.x) \
and rebuild your project.",
index_version.index_format_version, index_version.major, index_version.minor
);
write!(f, "{}. {}", err, advice)?;
}
}
Ok(())
}
}
/// Error that may occur when accessing a file read /// Error that may occur when accessing a file read
#[derive(Debug)] #[derive(Debug)]
pub enum OpenReadError { pub enum OpenReadError {
@@ -164,6 +224,8 @@ pub enum OpenReadError {
/// Any kind of IO error that happens when /// Any kind of IO error that happens when
/// interacting with the underlying IO device. /// interacting with the underlying IO device.
IOError(IOError), IOError(IOError),
/// This library doesn't support the index version found on disk
IncompatibleIndex(Incompatibility),
} }
impl From<IOError> for OpenReadError { impl From<IOError> for OpenReadError {
@@ -183,19 +245,9 @@ impl fmt::Display for OpenReadError {
"an io error occurred while opening a file for reading: '{}'", "an io error occurred while opening a file for reading: '{}'",
err err
), ),
} OpenReadError::IncompatibleIndex(ref footer) => {
} write!(f, "Incompatible index format: {:?}", footer)
} }
impl StdError for OpenReadError {
fn description(&self) -> &str {
"error occurred while opening a file for reading"
}
fn cause(&self) -> Option<&dyn StdError> {
match *self {
OpenReadError::FileDoesNotExist(_) => None,
OpenReadError::IOError(ref err) => Some(err),
} }
} }
} }
@@ -216,6 +268,12 @@ impl From<IOError> for DeleteError {
} }
} }
impl From<Incompatibility> for OpenReadError {
fn from(incompatibility: Incompatibility) -> Self {
OpenReadError::IncompatibleIndex(incompatibility)
}
}
impl fmt::Display for DeleteError { impl fmt::Display for DeleteError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self { match *self {

src/directory/footer.rs Normal file

@@ -0,0 +1,369 @@
use crate::common::{BinarySerializable, CountingWriter, FixedSize, VInt};
use crate::directory::error::Incompatibility;
use crate::directory::read_only_source::ReadOnlySource;
use crate::directory::{AntiCallToken, TerminatingWrite};
use crate::Version;
use byteorder::{ByteOrder, LittleEndian, WriteBytesExt};
use crc32fast::Hasher;
use std::io;
use std::io::Write;
const FOOTER_MAX_LEN: usize = 10_000;
type CrcHashU32 = u32;
#[derive(Debug, Clone, PartialEq)]
pub struct Footer {
pub version: Version,
pub meta: String,
pub versioned_footer: VersionedFooter,
}
/// Serialises the footer to a byte-array
/// - versioned_footer_len: 4 bytes
/// - versioned_footer: variable bytes
/// - meta_len: 4 bytes
/// - meta: variable bytes
/// - version_len: 4 bytes
/// - version json: variable bytes
impl BinarySerializable for Footer {
fn serialize<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
BinarySerializable::serialize(&self.versioned_footer, writer)?;
BinarySerializable::serialize(&self.meta, writer)?;
let version_string =
serde_json::to_string(&self.version).map_err(|_err| io::ErrorKind::InvalidInput)?;
BinarySerializable::serialize(&version_string, writer)?;
Ok(())
}
fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let versioned_footer = VersionedFooter::deserialize(reader)?;
let meta = String::deserialize(reader)?;
let version_json = String::deserialize(reader)?;
let version = serde_json::from_str(&version_json)?;
Ok(Footer {
version,
meta,
versioned_footer,
})
}
}
impl Footer {
pub fn new(versioned_footer: VersionedFooter) -> Self {
let version = crate::VERSION.clone();
let meta = version.to_string();
Footer {
version,
meta,
versioned_footer,
}
}
pub fn append_footer<W: io::Write>(&self, mut write: &mut W) -> io::Result<()> {
let mut counting_write = CountingWriter::wrap(&mut write);
self.serialize(&mut counting_write)?;
let written_len = counting_write.written_bytes();
write.write_u32::<LittleEndian>(written_len as u32)?;
Ok(())
}
pub fn extract_footer(source: ReadOnlySource) -> Result<(Footer, ReadOnlySource), io::Error> {
if source.len() < 4 {
return Err(io::Error::new(
io::ErrorKind::UnexpectedEof,
format!(
"File corrupted. The file is smaller than 4 bytes (len={}).",
source.len()
),
));
}
let (body_footer, footer_len_bytes) = source.split_from_end(u32::SIZE_IN_BYTES);
let footer_len = LittleEndian::read_u32(footer_len_bytes.as_slice()) as usize;
let body_len = body_footer.len() - footer_len;
let (body, footer_data) = body_footer.split(body_len);
let mut cursor = footer_data.as_slice();
let footer = Footer::deserialize(&mut cursor)?;
Ok((footer, body))
}
/// Confirms that the index will be read correctly by this version of tantivy
/// Has to be called after `extract_footer` to make sure it's not accessing uninitialised memory
pub fn is_compatible(&self) -> Result<(), Incompatibility> {
let library_version = crate::version();
match &self.versioned_footer {
VersionedFooter::V1 {
crc32: _crc,
store_compression: compression,
} => {
if &library_version.store_compression != compression {
return Err(Incompatibility::CompressionMismatch {
library_compression_format: library_version.store_compression.to_string(),
index_compression_format: compression.to_string(),
});
}
Ok(())
}
VersionedFooter::UnknownVersion => Err(Incompatibility::IndexMismatch {
library_version: library_version.clone(),
index_version: self.version.clone(),
}),
}
}
}
/// Footer that includes a crc32 hash that enables us to checksum files in the index
#[derive(Debug, Clone, PartialEq)]
pub enum VersionedFooter {
UnknownVersion,
V1 {
crc32: CrcHashU32,
store_compression: String,
},
}
impl BinarySerializable for VersionedFooter {
fn serialize<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
let mut buf = Vec::new();
match self {
VersionedFooter::V1 {
crc32,
store_compression: compression,
} => {
// Serializes a valid `VersionedFooter` or panics if the version is unknown
// [ version | crc_hash | compression_mode ]
// [ 0..4 | 4..8 | variable ]
BinarySerializable::serialize(&1u32, &mut buf)?;
BinarySerializable::serialize(crc32, &mut buf)?;
BinarySerializable::serialize(compression, &mut buf)?;
}
VersionedFooter::UnknownVersion => {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
"Cannot serialize an unknown versioned footer ",
));
}
}
BinarySerializable::serialize(&VInt(buf.len() as u64), writer)?;
assert!(buf.len() <= FOOTER_MAX_LEN);
writer.write_all(&buf[..])?;
Ok(())
}
fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let len = VInt::deserialize(reader)?.0 as usize;
if len > FOOTER_MAX_LEN {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
format!(
"Footer seems invalid as it suggests a footer len of {}. File is corrupted, \
or the index was created with a different & old version of tantivy.",
len
),
));
}
let mut buf = vec![0u8; len];
reader.read_exact(&mut buf[..])?;
let mut cursor = &buf[..];
let version = u32::deserialize(&mut cursor)?;
if version == 1 {
let crc32 = u32::deserialize(&mut cursor)?;
let compression = String::deserialize(&mut cursor)?;
Ok(VersionedFooter::V1 {
crc32,
store_compression: compression,
})
} else {
Ok(VersionedFooter::UnknownVersion)
}
}
}
impl VersionedFooter {
pub fn crc(&self) -> Option<CrcHashU32> {
match self {
VersionedFooter::V1 { crc32, .. } => Some(*crc32),
VersionedFooter::UnknownVersion { .. } => None,
}
}
}
pub(crate) struct FooterProxy<W: TerminatingWrite> {
/// always Some except after terminate call
hasher: Option<Hasher>,
/// always Some except after terminate call
writer: Option<W>,
}
impl<W: TerminatingWrite> FooterProxy<W> {
pub fn new(writer: W) -> Self {
FooterProxy {
hasher: Some(Hasher::new()),
writer: Some(writer),
}
}
}
impl<W: TerminatingWrite> Write for FooterProxy<W> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
let count = self.writer.as_mut().unwrap().write(buf)?;
self.hasher.as_mut().unwrap().update(&buf[..count]);
Ok(count)
}
fn flush(&mut self) -> io::Result<()> {
self.writer.as_mut().unwrap().flush()
}
}
impl<W: TerminatingWrite> TerminatingWrite for FooterProxy<W> {
fn terminate_ref(&mut self, _: AntiCallToken) -> io::Result<()> {
let crc32 = self.hasher.take().unwrap().finalize();
let footer = Footer::new(VersionedFooter::V1 {
crc32,
store_compression: crate::store::COMPRESSION.to_string(),
});
let mut writer = self.writer.take().unwrap();
footer.append_footer(&mut writer)?;
writer.terminate()
}
}
#[cfg(test)]
mod tests {
use super::CrcHashU32;
use super::FooterProxy;
use crate::common::{BinarySerializable, VInt};
use crate::directory::footer::{Footer, VersionedFooter};
use crate::directory::TerminatingWrite;
use byteorder::{ByteOrder, LittleEndian};
use regex::Regex;
use std::io;
#[test]
fn test_versioned_footer() {
let mut vec = Vec::new();
let footer_proxy = FooterProxy::new(&mut vec);
assert!(footer_proxy.terminate().is_ok());
assert_eq!(vec.len(), 167);
let footer = Footer::deserialize(&mut &vec[..]).unwrap();
if let VersionedFooter::V1 {
crc32: _,
store_compression,
} = footer.versioned_footer
{
assert_eq!(store_compression, crate::store::COMPRESSION);
} else {
panic!("Versioned footer should be V1.");
}
assert_eq!(&footer.version, crate::version());
}
#[test]
fn test_serialize_deserialize_footer() {
let mut buffer = Vec::new();
let crc32 = 123456u32;
let footer: Footer = Footer::new(VersionedFooter::V1 {
crc32,
store_compression: "lz4".to_string(),
});
footer.serialize(&mut buffer).unwrap();
let footer_deser = Footer::deserialize(&mut &buffer[..]).unwrap();
assert_eq!(footer_deser, footer);
}
#[test]
fn footer_length() {
let crc32 = 1111111u32;
let versioned_footer = VersionedFooter::V1 {
crc32,
store_compression: "lz4".to_string(),
};
let mut buf = Vec::new();
versioned_footer.serialize(&mut buf).unwrap();
assert_eq!(buf.len(), 13);
let footer = Footer::new(versioned_footer);
let regex_ptn = Regex::new(
"tantivy v[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.{0,10}, index_format v[0-9]{1,5}",
)
.unwrap();
assert!(regex_ptn.is_match(&footer.meta));
}
#[test]
fn versioned_footer_from_bytes() {
let v_footer_bytes = vec![
// versioned footer length
12 | 128,
// index format version
1,
0,
0,
0,
// crc 32
12,
35,
89,
18,
// compression format
3 | 128,
b'l',
b'z',
b'4',
];
let mut cursor = &v_footer_bytes[..];
let versioned_footer = VersionedFooter::deserialize(&mut cursor).unwrap();
assert!(cursor.is_empty());
let expected_crc: u32 = LittleEndian::read_u32(&v_footer_bytes[5..9]) as CrcHashU32;
let expected_versioned_footer: VersionedFooter = VersionedFooter::V1 {
crc32: expected_crc,
store_compression: "lz4".to_string(),
};
assert_eq!(versioned_footer, expected_versioned_footer);
let mut buffer = Vec::new();
assert!(versioned_footer.serialize(&mut buffer).is_ok());
assert_eq!(&v_footer_bytes[..], &buffer[..]);
}
#[test]
fn versioned_footer_panic() {
let v_footer_bytes = vec![6u8 | 128u8, 3u8, 0u8, 0u8, 1u8, 0u8, 0u8];
let mut b = &v_footer_bytes[..];
let versioned_footer = VersionedFooter::deserialize(&mut b).unwrap();
assert!(b.is_empty());
let expected_versioned_footer = VersionedFooter::UnknownVersion;
assert_eq!(versioned_footer, expected_versioned_footer);
let mut buf = Vec::new();
assert!(versioned_footer.serialize(&mut buf).is_err());
}
#[test]
#[cfg(not(feature = "lz4"))]
fn compression_mismatch() {
let crc32 = 1111111u32;
let versioned_footer = VersionedFooter::V1 {
crc32,
store_compression: "lz4".to_string(),
};
let footer = Footer::new(versioned_footer);
let res = footer.is_compatible();
assert!(res.is_err());
}
#[test]
fn test_deserialize_too_large_footer() {
let mut buf = vec![];
assert!(FooterProxy::new(&mut buf).terminate().is_ok());
let mut long_len_buf = [0u8; 10];
let num_bytes = VInt(super::FOOTER_MAX_LEN as u64 + 1u64).serialize_into(&mut long_len_buf);
buf[0..num_bytes].copy_from_slice(&long_len_buf[..num_bytes]);
let err = Footer::deserialize(&mut &buf[..]).unwrap_err();
assert_eq!(err.kind(), io::ErrorKind::InvalidData);
assert_eq!(
err.to_string(),
"Footer seems invalid as it suggests a footer len of 10001. File is corrupted, \
or the index was created with a different & old version of tantivy."
);
}
}


@@ -1,13 +1,16 @@
use crate::core::MANAGED_FILEPATH; use crate::core::MANAGED_FILEPATH;
use crate::directory::error::{DeleteError, IOError, LockError, OpenReadError, OpenWriteError}; use crate::directory::error::{DeleteError, IOError, LockError, OpenReadError, OpenWriteError};
use crate::directory::footer::{Footer, FooterProxy};
use crate::directory::DirectoryLock; use crate::directory::DirectoryLock;
use crate::directory::GarbageCollectionResult;
use crate::directory::Lock; use crate::directory::Lock;
use crate::directory::META_LOCK; use crate::directory::META_LOCK;
use crate::directory::{ReadOnlySource, WritePtr}; use crate::directory::{ReadOnlySource, WritePtr};
use crate::directory::{WatchCallback, WatchHandle}; use crate::directory::{WatchCallback, WatchHandle};
use crate::error::DataCorruption; use crate::error::DataCorruption;
use crate::Directory; use crate::Directory;
use crate::Result;
use crc32fast::Hasher;
use serde_json; use serde_json;
use std::collections::HashSet; use std::collections::HashSet;
use std::io; use std::io;
@@ -62,7 +65,7 @@ fn save_managed_paths(
impl ManagedDirectory { impl ManagedDirectory {
/// Wraps a directory as managed directory. /// Wraps a directory as managed directory.
pub fn wrap<Dir: Directory>(directory: Dir) -> Result<ManagedDirectory> { pub fn wrap<Dir: Directory>(directory: Dir) -> crate::Result<ManagedDirectory> {
match directory.atomic_read(&MANAGED_FILEPATH) { match directory.atomic_read(&MANAGED_FILEPATH) {
Ok(data) => { Ok(data) => {
let managed_files_json = String::from_utf8_lossy(&data); let managed_files_json = String::from_utf8_lossy(&data);
@@ -85,6 +88,11 @@ impl ManagedDirectory {
meta_informations: Arc::default(), meta_informations: Arc::default(),
}), }),
Err(OpenReadError::IOError(e)) => Err(From::from(e)), Err(OpenReadError::IOError(e)) => Err(From::from(e)),
Err(OpenReadError::IncompatibleIndex(incompatibility)) => {
// For the moment, this should never happen: `meta.json`
// does not have any footer, so we cannot detect incompatibility.
Err(crate::TantivyError::IncompatibleIndex(incompatibility))
}
} }
} }
@@ -102,7 +110,10 @@ impl ManagedDirectory {
/// If a file cannot be deleted (for permission reasons for instance) /// If a file cannot be deleted (for permission reasons for instance)
/// an error is simply logged, and the file remains in the list of managed /// an error is simply logged, and the file remains in the list of managed
/// files. /// files.
pub fn garbage_collect<L: FnOnce() -> HashSet<PathBuf>>(&mut self, get_living_files: L) { pub fn garbage_collect<L: FnOnce() -> HashSet<PathBuf>>(
&mut self,
get_living_files: L,
) -> crate::Result<GarbageCollectionResult> {
info!("Garbage collect"); info!("Garbage collect");
let mut files_to_delete = vec![]; let mut files_to_delete = vec![];
@@ -128,19 +139,25 @@ impl ManagedDirectory {
// 2) writer change meta.json (for instance after a merge or a commit) // 2) writer change meta.json (for instance after a merge or a commit)
// 3) gc kicks in. // 3) gc kicks in.
// 4) gc removes a file that was useful for process B, before process B opened it. // 4) gc removes a file that was useful for process B, before process B opened it.
if let Ok(_meta_lock) = self.acquire_lock(&META_LOCK) { match self.acquire_lock(&META_LOCK) {
let living_files = get_living_files(); Ok(_meta_lock) => {
for managed_path in &meta_informations_rlock.managed_paths { let living_files = get_living_files();
if !living_files.contains(managed_path) { for managed_path in &meta_informations_rlock.managed_paths {
files_to_delete.push(managed_path.clone()); if !living_files.contains(managed_path) {
files_to_delete.push(managed_path.clone());
}
} }
} }
} else { Err(err) => {
error!("Failed to acquire lock for GC"); error!("Failed to acquire lock for GC");
return Err(crate::TantivyError::from(err));
}
} }
} }
let mut failed_to_delete_files = vec![];
let mut deleted_files = vec![]; let mut deleted_files = vec![];
for file_to_delete in files_to_delete { for file_to_delete in files_to_delete {
match self.delete(&file_to_delete) { match self.delete(&file_to_delete) {
Ok(_) => { Ok(_) => {
@@ -150,9 +167,10 @@ impl ManagedDirectory {
Err(file_error) => { Err(file_error) => {
match file_error { match file_error {
DeleteError::FileDoesNotExist(_) => { DeleteError::FileDoesNotExist(_) => {
deleted_files.push(file_to_delete); deleted_files.push(file_to_delete.clone());
} }
DeleteError::IOError(_) => { DeleteError::IOError(_) => {
failed_to_delete_files.push(file_to_delete.clone());
if !cfg!(target_os = "windows") { if !cfg!(target_os = "windows") {
// On windows, delete is expected to fail if the file // On windows, delete is expected to fail if the file
// is mmapped. // is mmapped.
@@ -175,10 +193,13 @@ impl ManagedDirectory {
for delete_file in &deleted_files { for delete_file in &deleted_files {
managed_paths_write.remove(delete_file); managed_paths_write.remove(delete_file);
} }
if save_managed_paths(self.directory.as_mut(), &meta_informations_wlock).is_err() { save_managed_paths(self.directory.as_mut(), &meta_informations_wlock)?;
error!("Failed to save the list of managed files.");
}
} }
Ok(GarbageCollectionResult {
deleted_files,
failed_to_delete_files,
})
} }
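garbage_collect now reports what actually happened instead of merely logging failures. A sketch of consuming the result, assuming GarbageCollectionResult exposes the two fields constructed above as public:

use std::collections::HashSet;
use std::path::PathBuf;
use tantivy::directory::ManagedDirectory;

fn collect_garbage(directory: &mut ManagedDirectory, living_files: HashSet<PathBuf>) -> tantivy::Result<()> {
    let gc = directory.garbage_collect(|| living_files)?;
    println!("deleted {} files", gc.deleted_files.len());
    // Files that are still mmapped (e.g. on Windows) land here and get
    // retried on a later garbage collection pass.
    println!("{} files could not be deleted yet", gc.failed_to_delete_files.len());
    Ok(())
}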
/// Registers a file as managed /// Registers a file as managed
@@ -207,17 +228,60 @@ impl ManagedDirectory {
} }
Ok(()) Ok(())
} }
/// Verify checksum of a managed file
pub fn validate_checksum(&self, path: &Path) -> result::Result<bool, OpenReadError> {
let reader = self.directory.open_read(path)?;
let (footer, data) = Footer::extract_footer(reader)
.map_err(|err| IOError::with_path(path.to_path_buf(), err))?;
let mut hasher = Hasher::new();
hasher.update(data.as_slice());
let crc = hasher.finalize();
Ok(footer
.versioned_footer
.crc()
.map(|v| v == crc)
.unwrap_or(false))
}
/// List files for which checksum does not match content
pub fn list_damaged(&self) -> result::Result<HashSet<PathBuf>, OpenReadError> {
let mut hashset = HashSet::new();
let managed_paths = self
.meta_informations
.read()
.expect("Managed directory rlock poisoned in list damaged.")
.managed_paths
.clone();
for path in managed_paths.into_iter() {
if !self.validate_checksum(&path)? {
hashset.insert(path);
}
}
Ok(hashset)
}
} }
impl Directory for ManagedDirectory { impl Directory for ManagedDirectory {
fn open_read(&self, path: &Path) -> result::Result<ReadOnlySource, OpenReadError> { fn open_read(&self, path: &Path) -> result::Result<ReadOnlySource, OpenReadError> {
self.directory.open_read(path) let read_only_source = self.directory.open_read(path)?;
let (footer, reader) = Footer::extract_footer(read_only_source)
.map_err(|err| IOError::with_path(path.to_path_buf(), err))?;
footer.is_compatible()?;
Ok(reader)
} }
fn open_write(&mut self, path: &Path) -> result::Result<WritePtr, OpenWriteError> { fn open_write(&mut self, path: &Path) -> result::Result<WritePtr, OpenWriteError> {
self.register_file_as_managed(path) self.register_file_as_managed(path)
.map_err(|e| IOError::with_path(path.to_owned(), e))?; .map_err(|e| IOError::with_path(path.to_owned(), e))?;
self.directory.open_write(path) Ok(io::BufWriter::new(Box::new(FooterProxy::new(
self.directory
.open_write(path)?
.into_inner()
.map_err(|_| ())
.expect("buffer should be empty"),
))))
} }
fn atomic_write(&mut self, path: &Path, data: &[u8]) -> io::Result<()> { fn atomic_write(&mut self, path: &Path, data: &[u8]) -> io::Result<()> {
@@ -259,15 +323,16 @@ impl Clone for ManagedDirectory {
#[cfg(test)] #[cfg(test)]
mod tests_mmap_specific { mod tests_mmap_specific {
use crate::directory::{Directory, ManagedDirectory, MmapDirectory}; use crate::directory::{Directory, ManagedDirectory, MmapDirectory, TerminatingWrite};
use std::collections::HashSet; use std::collections::HashSet;
use std::fs::OpenOptions;
use std::io::Write; use std::io::Write;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use tempdir::TempDir; use tempfile::TempDir;
#[test] #[test]
fn test_managed_directory() { fn test_managed_directory() {
let tempdir = TempDir::new("tantivy-test").unwrap(); let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path()); let tempdir_path = PathBuf::from(tempdir.path());
let test_path1: &'static Path = Path::new("some_path_for_test"); let test_path1: &'static Path = Path::new("some_path_for_test");
@@ -275,16 +340,15 @@ mod tests_mmap_specific {
{ {
let mmap_directory = MmapDirectory::open(&tempdir_path).unwrap(); let mmap_directory = MmapDirectory::open(&tempdir_path).unwrap();
let mut managed_directory = ManagedDirectory::wrap(mmap_directory).unwrap(); let mut managed_directory = ManagedDirectory::wrap(mmap_directory).unwrap();
let mut write_file = managed_directory.open_write(test_path1).unwrap(); let write_file = managed_directory.open_write(test_path1).unwrap();
write_file.flush().unwrap(); write_file.terminate().unwrap();
managed_directory managed_directory
.atomic_write(test_path2, &[0u8, 1u8]) .atomic_write(test_path2, &[0u8, 1u8])
.unwrap(); .unwrap();
assert!(managed_directory.exists(test_path1)); assert!(managed_directory.exists(test_path1));
assert!(managed_directory.exists(test_path2)); assert!(managed_directory.exists(test_path2));
let living_files: HashSet<PathBuf> = let living_files: HashSet<PathBuf> = [test_path1.to_owned()].iter().cloned().collect();
[test_path1.to_owned()].into_iter().cloned().collect(); assert!(managed_directory.garbage_collect(|| living_files).is_ok());
managed_directory.garbage_collect(|| living_files);
assert!(managed_directory.exists(test_path1)); assert!(managed_directory.exists(test_path1));
assert!(!managed_directory.exists(test_path2)); assert!(!managed_directory.exists(test_path2));
} }
@@ -294,7 +358,7 @@ mod tests_mmap_specific {
assert!(managed_directory.exists(test_path1)); assert!(managed_directory.exists(test_path1));
assert!(!managed_directory.exists(test_path2)); assert!(!managed_directory.exists(test_path2));
let living_files: HashSet<PathBuf> = HashSet::new(); let living_files: HashSet<PathBuf> = HashSet::new();
managed_directory.garbage_collect(|| living_files); assert!(managed_directory.garbage_collect(|| living_files).is_ok());
assert!(!managed_directory.exists(test_path1)); assert!(!managed_directory.exists(test_path1));
assert!(!managed_directory.exists(test_path2)); assert!(!managed_directory.exists(test_path2));
} }
@@ -304,19 +368,21 @@ mod tests_mmap_specific {
fn test_managed_directory_gc_while_mmapped() { fn test_managed_directory_gc_while_mmapped() {
let test_path1: &'static Path = Path::new("some_path_for_test"); let test_path1: &'static Path = Path::new("some_path_for_test");
let tempdir = TempDir::new("index").unwrap(); let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path()); let tempdir_path = PathBuf::from(tempdir.path());
let living_files = HashSet::new(); let living_files = HashSet::new();
let mmap_directory = MmapDirectory::open(&tempdir_path).unwrap(); let mmap_directory = MmapDirectory::open(&tempdir_path).unwrap();
let mut managed_directory = ManagedDirectory::wrap(mmap_directory).unwrap(); let mut managed_directory = ManagedDirectory::wrap(mmap_directory).unwrap();
managed_directory let mut write = managed_directory.open_write(test_path1).unwrap();
.atomic_write(test_path1, &vec![0u8, 1u8]) write.write_all(&[0u8, 1u8]).unwrap();
.unwrap(); write.terminate().unwrap();
assert!(managed_directory.exists(test_path1)); assert!(managed_directory.exists(test_path1));
let _mmap_read = managed_directory.open_read(test_path1).unwrap(); let _mmap_read = managed_directory.open_read(test_path1).unwrap();
managed_directory.garbage_collect(|| living_files.clone()); assert!(managed_directory
.garbage_collect(|| living_files.clone())
.is_ok());
if cfg!(target_os = "windows") { if cfg!(target_os = "windows") {
// On Windows, gc should try to delete the file and fail, as it is mmapped. // On Windows, gc should try to delete the file and fail, as it is mmapped.
assert!(managed_directory.exists(test_path1)); assert!(managed_directory.exists(test_path1));
@@ -324,11 +390,47 @@ mod tests_mmap_specific {
drop(_mmap_read); drop(_mmap_read);
// The file should still be in the list of managed file and // The file should still be in the list of managed file and
// eventually be deleted once mmap is released. // eventually be deleted once mmap is released.
managed_directory.garbage_collect(|| living_files); assert!(managed_directory.garbage_collect(|| living_files).is_ok());
assert!(!managed_directory.exists(test_path1)); assert!(!managed_directory.exists(test_path1));
} else { } else {
assert!(!managed_directory.exists(test_path1)); assert!(!managed_directory.exists(test_path1));
} }
} }
#[test]
fn test_checksum() {
let test_path1: &'static Path = Path::new("some_path_for_test");
let test_path2: &'static Path = Path::new("other_test_path");
let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path());
let mmap_directory = MmapDirectory::open(&tempdir_path).unwrap();
let mut managed_directory = ManagedDirectory::wrap(mmap_directory).unwrap();
let mut write = managed_directory.open_write(test_path1).unwrap();
write.write_all(&[0u8, 1u8]).unwrap();
write.terminate().unwrap();
let mut write = managed_directory.open_write(test_path2).unwrap();
write.write_all(&[3u8, 4u8, 5u8]).unwrap();
write.terminate().unwrap();
let read_source = managed_directory.open_read(test_path2).unwrap();
assert_eq!(read_source.as_slice(), &[3u8, 4u8, 5u8]);
assert!(managed_directory.list_damaged().unwrap().is_empty());
let mut corrupted_path = tempdir_path.clone();
corrupted_path.push(test_path2);
let mut file = OpenOptions::new()
.write(true)
.open(&corrupted_path)
.unwrap();
file.write_all(&[255u8]).unwrap();
file.flush().unwrap();
drop(file);
let damaged = managed_directory.list_damaged().unwrap();
assert_eq!(damaged.len(), 1);
assert!(damaged.contains(test_path2));
}
} }


@@ -11,6 +11,7 @@ use crate::directory::error::{
DeleteError, IOError, OpenDirectoryError, OpenReadError, OpenWriteError, DeleteError, IOError, OpenDirectoryError, OpenReadError, OpenWriteError,
}; };
use crate::directory::read_only_source::BoxedData; use crate::directory::read_only_source::BoxedData;
use crate::directory::AntiCallToken;
use crate::directory::Directory; use crate::directory::Directory;
use crate::directory::DirectoryLock; use crate::directory::DirectoryLock;
use crate::directory::Lock; use crate::directory::Lock;
@@ -18,9 +19,10 @@ use crate::directory::ReadOnlySource;
use crate::directory::WatchCallback; use crate::directory::WatchCallback;
use crate::directory::WatchCallbackList; use crate::directory::WatchCallbackList;
use crate::directory::WatchHandle; use crate::directory::WatchHandle;
use crate::directory::WritePtr; use crate::directory::{TerminatingWrite, WritePtr};
use atomicwrites; use atomicwrites;
use memmap::Mmap; use memmap::Mmap;
use serde::{Deserialize, Serialize};
use std::collections::HashMap; use std::collections::HashMap;
use std::convert::From; use std::convert::From;
use std::fmt; use std::fmt;
@@ -36,7 +38,7 @@ use std::sync::Mutex;
use std::sync::RwLock; use std::sync::RwLock;
use std::sync::Weak; use std::sync::Weak;
use std::thread; use std::thread;
use tempdir::TempDir; use tempfile::TempDir;
/// Create a default io error given a string. /// Create a default io error given a string.
pub(crate) fn make_io_err(msg: String) -> io::Error { pub(crate) fn make_io_err(msg: String) -> io::Error {
@@ -130,53 +132,38 @@ impl MmapCache {
} }
self.cache.remove(full_path); self.cache.remove(full_path);
self.counters.miss += 1; self.counters.miss += 1;
Ok(if let Some(mmap) = open_mmap(full_path)? { let mmap_opt = open_mmap(full_path)?;
Ok(mmap_opt.map(|mmap| {
let mmap_arc: Arc<BoxedData> = Arc::new(Box::new(mmap)); let mmap_arc: Arc<BoxedData> = Arc::new(Box::new(mmap));
let mmap_weak = Arc::downgrade(&mmap_arc); let mmap_weak = Arc::downgrade(&mmap_arc);
self.cache.insert(full_path.to_owned(), mmap_weak); self.cache.insert(full_path.to_owned(), mmap_weak);
Some(mmap_arc) mmap_arc
} else { }))
None
})
} }
} }
struct InnerWatcherWrapper {
_watcher: Mutex<notify::RecommendedWatcher>,
watcher_router: WatchCallbackList,
}
impl InnerWatcherWrapper {
pub fn new(path: &Path) -> Result<(Self, Receiver<notify::RawEvent>), notify::Error> {
let (tx, watcher_recv): (Sender<RawEvent>, Receiver<RawEvent>) = channel();
// We need to initialize the
let mut watcher = notify::raw_watcher(tx)?;
watcher.watch(path, RecursiveMode::Recursive)?;
let inner = InnerWatcherWrapper {
_watcher: Mutex::new(watcher),
watcher_router: Default::default(),
};
Ok((inner, watcher_recv))
}
}
#[derive(Clone)]
struct WatcherWrapper { struct WatcherWrapper {
inner: Arc<InnerWatcherWrapper>, _watcher: Mutex<notify::RecommendedWatcher>,
watcher_router: Arc<WatchCallbackList>,
} }
impl WatcherWrapper { impl WatcherWrapper {
pub fn new(path: &Path) -> Result<Self, OpenDirectoryError> { pub fn new(path: &Path) -> Result<Self, OpenDirectoryError> {
let (inner, watcher_recv) = InnerWatcherWrapper::new(path).map_err(|err| match err { let (tx, watcher_recv): (Sender<RawEvent>, Receiver<RawEvent>) = channel();
notify::Error::PathNotFound => OpenDirectoryError::DoesNotExist(path.to_owned()), // We need to initialize the
_ => { let watcher = notify::raw_watcher(tx)
panic!("Unknown error while starting watching directory {:?}", path); .and_then(|mut watcher| {
} watcher.watch(path, RecursiveMode::Recursive)?;
-            })?;
-        let watcher_wrapper = WatcherWrapper {
-            inner: Arc::new(inner),
-        };
-        let watcher_wrapper_clone = watcher_wrapper.clone();
+                Ok(watcher)
+            })
+            .map_err(|err| match err {
+                notify::Error::PathNotFound => OpenDirectoryError::DoesNotExist(path.to_owned()),
+                _ => {
+                    panic!("Unknown error while starting watching directory {:?}", path);
+                }
+            })?;
+        let watcher_router: Arc<WatchCallbackList> = Default::default();
+        let watcher_router_clone = watcher_router.clone();
         thread::Builder::new()
             .name("meta-file-watch-thread".to_string())
             .spawn(move || {
@@ -187,7 +174,7 @@ impl WatcherWrapper {
                         // We might want to be more accurate than this at one point.
                         if let Some(filename) = changed_path.file_name() {
                             if filename == *META_FILEPATH {
-                                watcher_wrapper_clone.inner.watcher_router.broadcast();
+                                let _ = watcher_router_clone.broadcast();
                             }
                         }
                     }
@@ -200,13 +187,15 @@ impl WatcherWrapper {
                     }
                 }
             }
-            })
-            .expect("Failed to spawn thread to watch meta.json");
-        Ok(watcher_wrapper)
+            })?;
+        Ok(WatcherWrapper {
+            _watcher: Mutex::new(watcher),
+            watcher_router,
+        })
     }
     pub fn watch(&mut self, watch_callback: WatchCallback) -> WatchHandle {
-        self.inner.watcher_router.subscribe(watch_callback)
+        self.watcher_router.subscribe(watch_callback)
     }
 }
@@ -265,7 +254,7 @@ impl MmapDirectoryInner {
         }
     }
     if let Some(watch_wrapper) = self.watcher.write().unwrap().as_mut() {
-        return Ok(watch_wrapper.watch(watch_callback));
+        Ok(watch_wrapper.watch(watch_callback))
     } else {
         unreachable!("At this point, watch wrapper is supposed to be initialized");
     }
@@ -294,7 +283,7 @@ impl MmapDirectory {
     /// This is mostly useful to test the MmapDirectory itself.
     /// For your unit tests, prefer the RAMDirectory.
     pub fn create_from_tempdir() -> Result<MmapDirectory, OpenDirectoryError> {
-        let tempdir = TempDir::new("index").map_err(OpenDirectoryError::IoError)?;
+        let tempdir = TempDir::new().map_err(OpenDirectoryError::IoError)?;
         let tempdir_path = PathBuf::from(tempdir.path());
         MmapDirectory::new(tempdir_path, Some(tempdir))
     }
@@ -412,6 +401,12 @@ impl Seek for SafeFileWriter {
     }
 }
+impl TerminatingWrite for SafeFileWriter {
+    fn terminate_ref(&mut self, _: AntiCallToken) -> io::Result<()> {
+        self.flush()
+    }
+}
 impl Directory for MmapDirectory {
     fn open_read(&self, path: &Path) -> result::Result<ReadOnlySource, OpenReadError> {
         debug!("Open Read {:?}", path);
@@ -543,16 +538,15 @@ mod tests {
     // The following tests are specific to the MmapDirectory
     use super::*;
+    use crate::indexer::LogMergePolicy;
     use crate::schema::{Schema, SchemaBuilder, TEXT};
     use crate::Index;
     use crate::ReloadPolicy;
     use std::fs;
     use std::sync::atomic::{AtomicUsize, Ordering};
-    use std::thread;
-    use std::time::Duration;
     #[test]
-    fn test_open_non_existant_path() {
+    fn test_open_non_existent_path() {
         assert!(MmapDirectory::open(PathBuf::from("./nowhere")).is_err());
     }
@@ -642,16 +636,21 @@ mod tests {
     fn test_watch_wrapper() {
         let counter: Arc<AtomicUsize> = Default::default();
         let counter_clone = counter.clone();
-        let tmp_dir: TempDir = tempdir::TempDir::new("test_watch_wrapper").unwrap();
+        let tmp_dir = tempfile::TempDir::new().unwrap();
         let tmp_dirpath = tmp_dir.path().to_owned();
         let mut watch_wrapper = WatcherWrapper::new(&tmp_dirpath).unwrap();
-        let tmp_file = tmp_dirpath.join("coucou");
+        let tmp_file = tmp_dirpath.join(*META_FILEPATH);
         let _handle = watch_wrapper.watch(Box::new(move || {
             counter_clone.fetch_add(1, Ordering::SeqCst);
         }));
+        let (sender, receiver) = crossbeam::channel::unbounded();
+        let _handle2 = watch_wrapper.watch(Box::new(move || {
+            let _ = sender.send(());
+        }));
         assert_eq!(counter.load(Ordering::SeqCst), 0);
         fs::write(&tmp_file, b"whateverwilldo").unwrap();
-        thread::sleep(Duration::new(0, 1_000u32));
+        assert!(receiver.recv().is_ok());
+        assert!(counter.load(Ordering::SeqCst) >= 1);
     }
     #[test]
@@ -660,34 +659,42 @@ mod tests {
         let mut schema_builder: SchemaBuilder = Schema::builder();
         let text_field = schema_builder.add_text_field("text", TEXT);
         let schema = schema_builder.build();
         {
             let index = Index::create(mmap_directory.clone(), schema).unwrap();
             let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
-            for _num_commits in 0..16 {
+            let mut log_merge_policy = LogMergePolicy::default();
+            log_merge_policy.set_min_merge_size(3);
+            index_writer.set_merge_policy(Box::new(log_merge_policy));
+            for _num_commits in 0..10 {
                 for _ in 0..10 {
                     index_writer.add_document(doc!(text_field=>"abc"));
                 }
                 index_writer.commit().unwrap();
             }
             let reader = index
                 .reader_builder()
                 .reload_policy(ReloadPolicy::Manual)
                 .try_into()
                 .unwrap();
-            for _ in 0..30 {
+            for _ in 0..4 {
                 index_writer.add_document(doc!(text_field=>"abc"));
                 index_writer.commit().unwrap();
                 reader.reload().unwrap();
             }
             index_writer.wait_merging_threads().unwrap();
             reader.reload().unwrap();
             let num_segments = reader.searcher().segment_readers().len();
-            assert_eq!(num_segments, 4);
+            assert!(num_segments <= 4);
             assert_eq!(
                 num_segments * 7,
                 mmap_directory.get_cache_info().mmapped.len()
             );
         }
-        assert_eq!(mmap_directory.get_cache_info().mmapped.len(), 0);
+        assert!(mmap_directory.get_cache_info().mmapped.is_empty());
     }
 }
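The reworked test above waits on the watch callback through a channel instead of sleeping. A minimal sketch of the same pattern against the public `Directory::watch` API follows; the index path and the crossbeam dependency are assumptions for illustration, not part of this diff.

    use std::path::PathBuf;
    use tantivy::directory::{Directory, MmapDirectory};

    fn wait_for_meta_change() {
        // Hypothetical path to an existing index directory.
        let mut dir = MmapDirectory::open(PathBuf::from("/tmp/some_index"))
            .expect("could not open index directory");
        let (sender, receiver) = crossbeam::channel::unbounded();
        // Keep the handle alive: dropping it unregisters the callback.
        let _handle = dir
            .watch(Box::new(move || {
                let _ = sender.send(());
            }))
            .expect("could not register watch callback");
        // ... something rewrites meta.json here ...
        receiver.recv().expect("meta.json change was never broadcast");
    }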


@@ -9,6 +9,7 @@ mod mmap_directory;
 mod directory;
 mod directory_lock;
+mod footer;
 mod managed_directory;
 mod ram_directory;
 mod read_only_source;
@@ -22,20 +23,74 @@ pub use self::directory::{Directory, DirectoryClone};
 pub use self::directory_lock::{Lock, INDEX_WRITER_LOCK, META_LOCK};
 pub use self::ram_directory::RAMDirectory;
 pub use self::read_only_source::ReadOnlySource;
-pub(crate) use self::watch_event_router::WatchCallbackList;
-pub use self::watch_event_router::{WatchCallback, WatchHandle};
-use std::io::{BufWriter, Write};
+pub use self::watch_event_router::{WatchCallback, WatchCallbackList, WatchHandle};
+use std::io::{self, BufWriter, Write};
+use std::path::PathBuf;
+/// Outcome of the Garbage collection
+pub struct GarbageCollectionResult {
+    /// List of files that were deleted in this cycle
+    pub deleted_files: Vec<PathBuf>,
+    /// List of files that were schedule to be deleted in this cycle,
+    /// but deletion did not work. This typically happens on windows,
+    /// as deleting a memory mapped file is forbidden.
+    ///
+    /// If a searcher is still held, a file cannot be deleted.
+    /// This is not considered a bug, the file will simply be deleted
+    /// in the next GC.
+    pub failed_to_delete_files: Vec<PathBuf>,
+}
 #[cfg(feature = "mmap")]
 pub use self::mmap_directory::MmapDirectory;
 pub use self::managed_directory::ManagedDirectory;
+/// Struct used to prevent from calling [`terminate_ref`](trait.TerminatingWrite#method.terminate_ref) directly
+///
+/// The point is that while the type is public, it cannot be built by anyone
+/// outside of this module.
+pub struct AntiCallToken(());
+/// Trait used to indicate when no more write need to be done on a writer
+pub trait TerminatingWrite: Write {
+    /// Indicate that the writer will no longer be used. Internally call terminate_ref.
+    fn terminate(mut self) -> io::Result<()>
+    where
+        Self: Sized,
+    {
+        self.terminate_ref(AntiCallToken(()))
+    }
+    /// You should implement this function to define custom behavior.
+    /// This function should flush any buffer it may hold.
+    fn terminate_ref(&mut self, _: AntiCallToken) -> io::Result<()>;
+}
+impl<W: TerminatingWrite + ?Sized> TerminatingWrite for Box<W> {
+    fn terminate_ref(&mut self, token: AntiCallToken) -> io::Result<()> {
+        self.as_mut().terminate_ref(token)
+    }
+}
+impl<W: TerminatingWrite> TerminatingWrite for BufWriter<W> {
+    fn terminate_ref(&mut self, a: AntiCallToken) -> io::Result<()> {
+        self.flush()?;
+        self.get_mut().terminate_ref(a)
+    }
+}
+#[cfg(test)]
+impl<'a> TerminatingWrite for &'a mut Vec<u8> {
+    fn terminate_ref(&mut self, _a: AntiCallToken) -> io::Result<()> {
+        self.flush()
+    }
+}
 /// Write object for Directory.
 ///
 /// `WritePtr` are required to implement both Write
 /// and Seek.
-pub type WritePtr = BufWriter<Box<dyn Write>>;
+pub type WritePtr = BufWriter<Box<dyn TerminatingWrite>>;
 #[cfg(test)]
 mod tests;
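The new `TerminatingWrite` trait marks the point after which a directory writer receives no more bytes, and `AntiCallToken` keeps callers from invoking `terminate_ref` directly. A minimal sketch of an implementor outside this diff; the `CountingWriter` type is hypothetical, and the import path assumes the re-exports shown above.

    use std::io::{self, Write};
    use tantivy::directory::{AntiCallToken, TerminatingWrite};

    // Hypothetical writer that counts bytes before forwarding them.
    struct CountingWriter<W: Write> {
        inner: W,
        written: u64,
    }

    impl<W: Write> Write for CountingWriter<W> {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
            let n = self.inner.write(buf)?;
            self.written += n as u64;
            Ok(n)
        }
        fn flush(&mut self) -> io::Result<()> {
            self.inner.flush()
        }
    }

    impl<W: Write> TerminatingWrite for CountingWriter<W> {
        // Called exactly once, through `terminate()`, when the writer is done.
        fn terminate_ref(&mut self, _: AntiCallToken) -> io::Result<()> {
            self.flush()
        }
    }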


@@ -1,8 +1,9 @@
 use crate::core::META_FILEPATH;
 use crate::directory::error::{DeleteError, OpenReadError, OpenWriteError};
+use crate::directory::AntiCallToken;
 use crate::directory::WatchCallbackList;
-use crate::directory::WritePtr;
 use crate::directory::{Directory, ReadOnlySource, WatchCallback, WatchHandle};
+use crate::directory::{TerminatingWrite, WritePtr};
 use fail::fail_point;
 use std::collections::HashMap;
 use std::fmt;
@@ -71,6 +72,12 @@ impl Write for VecWriter {
     }
 }
+impl TerminatingWrite for VecWriter {
+    fn terminate_ref(&mut self, _: AntiCallToken) -> io::Result<()> {
+        self.flush()
+    }
+}
 #[derive(Default)]
 struct InnerDirectory {
     fs: HashMap<PathBuf, ReadOnlySource>,
@@ -137,6 +144,22 @@ impl RAMDirectory {
     pub fn total_mem_usage(&self) -> usize {
         self.fs.read().unwrap().total_mem_usage()
     }
+    /// Write a copy of all of the files saved in the RAMDirectory in the target `Directory`.
+    ///
+    /// Files are all written using the `Directory::write` meaning, even if they were
+    /// written using the `atomic_write` api.
+    ///
+    /// If an error is encounterred, files may be persisted partially.
+    pub fn persist(&self, dest: &mut dyn Directory) -> crate::Result<()> {
+        let wlock = self.fs.write().unwrap();
+        for (path, source) in wlock.fs.iter() {
+            let mut dest_wrt = dest.open_write(path)?;
+            dest_wrt.write_all(source.as_slice())?;
+            dest_wrt.terminate()?;
+        }
+        Ok(())
+    }
 }
 impl Directory for RAMDirectory {
@@ -177,18 +200,18 @@ impl Directory for RAMDirectory {
     fn atomic_write(&mut self, path: &Path, data: &[u8]) -> io::Result<()> {
         fail_point!("RAMDirectory::atomic_write", |msg| Err(io::Error::new(
             io::ErrorKind::Other,
-            msg.unwrap_or("Undefined".to_string())
+            msg.unwrap_or_else(|| "Undefined".to_string())
         )));
         let path_buf = PathBuf::from(path);
         // Reserve the path to prevent calls to .write() to succeed.
         self.fs.write().unwrap().write(path_buf.clone(), &[]);
-        let mut vec_writer = VecWriter::new(path_buf.clone(), self.clone());
+        let mut vec_writer = VecWriter::new(path_buf, self.clone());
         vec_writer.write_all(data)?;
         vec_writer.flush()?;
         if path == Path::new(&*META_FILEPATH) {
-            self.fs.write().unwrap().watch_router.broadcast();
+            let _ = self.fs.write().unwrap().watch_router.broadcast();
         }
         Ok(())
     }
@@ -197,3 +220,28 @@ impl Directory for RAMDirectory {
         Ok(self.fs.write().unwrap().watch(watch_callback))
     }
 }
+#[cfg(test)]
+mod tests {
+    use super::RAMDirectory;
+    use crate::Directory;
+    use std::io::Write;
+    use std::path::Path;
+    #[test]
+    fn test_persist() {
+        let msg_atomic: &'static [u8] = b"atomic is the way";
+        let msg_seq: &'static [u8] = b"sequential is the way";
+        let path_atomic: &'static Path = Path::new("atomic");
+        let path_seq: &'static Path = Path::new("seq");
+        let mut directory = RAMDirectory::create();
+        assert!(directory.atomic_write(path_atomic, msg_atomic).is_ok());
+        let mut wrt = directory.open_write(path_seq).unwrap();
+        assert!(wrt.write_all(msg_seq).is_ok());
+        assert!(wrt.flush().is_ok());
+        let mut directory_copy = RAMDirectory::create();
+        assert!(directory.persist(&mut directory_copy).is_ok());
+        assert_eq!(directory_copy.atomic_read(path_atomic).unwrap(), msg_atomic);
+        assert_eq!(directory_copy.atomic_read(path_seq).unwrap(), msg_seq);
+    }
+}
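`persist` copies every file of the in-memory directory into another `Directory` through `open_write` plus `terminate`. A short sketch of copying an in-memory index onto disk; the target path is an assumption and must already exist when `MmapDirectory::open` is called.

    use std::path::PathBuf;
    use tantivy::directory::{MmapDirectory, RAMDirectory};

    fn copy_ram_to_disk() {
        let ram_directory = RAMDirectory::create();
        // ... build an index inside `ram_directory` here ...
        let mut on_disk = MmapDirectory::open(PathBuf::from("/tmp/persisted_index"))
            .expect("target directory must exist");
        ram_directory
            .persist(&mut on_disk)
            .expect("failed to persist RAMDirectory");
    }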


@@ -70,6 +70,12 @@ impl ReadOnlySource {
         (left, right)
     }
+    /// Splits into 2 `ReadOnlySource`, at the offset `end - right_len`.
+    pub fn split_from_end(self, right_len: usize) -> (ReadOnlySource, ReadOnlySource) {
+        let left_len = self.len() - right_len;
+        self.split(left_len)
+    }
     /// Creates a ReadOnlySource that is just a
     /// view over a slice of the data.
     ///
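`split_from_end` is a convenience over `split` for peeling a fixed-size suffix, such as a footer, off a source. A small illustrative use, assuming `ReadOnlySource` is in scope as re-exported above:

    use tantivy::directory::ReadOnlySource;

    // Split off a fixed-size footer from the end of a source.
    fn split_footer(source: ReadOnlySource, footer_len: usize) -> (ReadOnlySource, ReadOnlySource) {
        // Equivalent to source.split(source.len() - footer_len).
        source.split_from_end(footer_len)
    }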


@@ -1,25 +1,117 @@
 use super::*;
+use futures::channel::oneshot;
+use futures::executor::block_on;
 use std::io::Write;
 use std::mem;
 use std::path::{Path, PathBuf};
-use std::sync::atomic::AtomicUsize;
-use std::sync::atomic::Ordering;
+use std::sync::atomic::Ordering::SeqCst;
+use std::sync::atomic::{AtomicBool, AtomicUsize};
 use std::sync::Arc;
-use std::thread;
-use std::time;
 use std::time::Duration;
-#[test]
-fn test_ram_directory() {
-    let mut ram_directory = RAMDirectory::create();
-    test_directory(&mut ram_directory);
-}
-#[test]
-#[cfg(feature = "mmap")]
-fn test_mmap_directory() {
-    let mut mmap_directory = MmapDirectory::create_from_tempdir().unwrap();
-    test_directory(&mut mmap_directory);
-}
+#[cfg(feature = "mmap")]
+mod mmap_directory_tests {
+    use crate::directory::MmapDirectory;
+    type DirectoryImpl = MmapDirectory;
+    fn make_directory() -> DirectoryImpl {
+        MmapDirectory::create_from_tempdir().unwrap()
+    }
+    #[test]
+    fn test_simple() {
+        let mut directory = make_directory();
+        super::test_simple(&mut directory);
+    }
+    #[test]
+    fn test_write_create_the_file() {
+        let mut directory = make_directory();
+        super::test_write_create_the_file(&mut directory);
+    }
+    #[test]
+    fn test_rewrite_forbidden() {
+        let mut directory = make_directory();
+        super::test_rewrite_forbidden(&mut directory);
+    }
+    #[test]
+    fn test_directory_delete() {
+        let mut directory = make_directory();
+        super::test_directory_delete(&mut directory);
+    }
+    #[test]
+    fn test_lock_non_blocking() {
+        let mut directory = make_directory();
+        super::test_lock_non_blocking(&mut directory);
+    }
+    #[test]
+    fn test_lock_blocking() {
+        let mut directory = make_directory();
+        super::test_lock_blocking(&mut directory);
+    }
+    #[test]
+    fn test_watch() {
+        let mut directory = make_directory();
+        super::test_watch(&mut directory);
+    }
+}
+mod ram_directory_tests {
+    use crate::directory::RAMDirectory;
+    type DirectoryImpl = RAMDirectory;
+    fn make_directory() -> DirectoryImpl {
+        RAMDirectory::default()
+    }
+    #[test]
+    fn test_simple() {
+        let mut directory = make_directory();
+        super::test_simple(&mut directory);
+    }
+    #[test]
+    fn test_write_create_the_file() {
+        let mut directory = make_directory();
+        super::test_write_create_the_file(&mut directory);
+    }
+    #[test]
+    fn test_rewrite_forbidden() {
+        let mut directory = make_directory();
+        super::test_rewrite_forbidden(&mut directory);
+    }
+    #[test]
+    fn test_directory_delete() {
+        let mut directory = make_directory();
+        super::test_directory_delete(&mut directory);
+    }
+    #[test]
+    fn test_lock_non_blocking() {
+        let mut directory = make_directory();
+        super::test_lock_non_blocking(&mut directory);
+    }
+    #[test]
+    fn test_lock_blocking() {
+        let mut directory = make_directory();
+        super::test_lock_blocking(&mut directory);
+    }
+    #[test]
+    fn test_watch() {
+        let mut directory = make_directory();
+        super::test_watch(&mut directory);
+    }
+}
 #[test]
@@ -99,48 +191,39 @@ fn test_directory_delete(directory: &mut dyn Directory) {
     assert!(directory.delete(&test_path).is_err());
 }
-fn test_directory(directory: &mut dyn Directory) {
-    test_simple(directory);
-    test_rewrite_forbidden(directory);
-    test_write_create_the_file(directory);
-    test_directory_delete(directory);
-    test_lock_non_blocking(directory);
-    test_lock_blocking(directory);
-    test_watch(directory);
-}
 fn test_watch(directory: &mut dyn Directory) {
+    let num_progress: Arc<AtomicUsize> = Default::default();
     let counter: Arc<AtomicUsize> = Default::default();
     let counter_clone = counter.clone();
+    let (sender, receiver) = crossbeam::channel::unbounded();
     let watch_callback = Box::new(move || {
-        counter_clone.fetch_add(1, Ordering::SeqCst);
+        counter_clone.fetch_add(1, SeqCst);
     });
-    assert!(directory
-        .atomic_write(Path::new("meta.json"), b"random_test_data")
-        .is_ok());
-    thread::sleep(Duration::new(0, 10_000));
-    assert_eq!(0, counter.load(Ordering::SeqCst));
+    // This callback is used to synchronize watching in our unit test.
+    // We bind it to a variable because the callback is removed when that
+    // handle is dropped.
     let watch_handle = directory.watch(watch_callback).unwrap();
+    let _progress_listener = directory
+        .watch(Box::new(move || {
+            let val = num_progress.fetch_add(1, SeqCst);
+            let _ = sender.send(val);
+        }))
+        .unwrap();
     for i in 0..10 {
-        assert_eq!(i, counter.load(Ordering::SeqCst));
+        assert_eq!(i, counter.load(SeqCst));
         assert!(directory
             .atomic_write(Path::new("meta.json"), b"random_test_data_2")
            .is_ok());
-        for _ in 0..100 {
-            if counter.load(Ordering::SeqCst) > i {
-                break;
-            }
-            thread::sleep(Duration::from_millis(10));
-        }
-        assert_eq!(i + 1, counter.load(Ordering::SeqCst));
+        assert_eq!(receiver.recv_timeout(Duration::from_millis(500)), Ok(i));
+        assert_eq!(i + 1, counter.load(SeqCst));
     }
     mem::drop(watch_handle);
     assert!(directory
         .atomic_write(Path::new("meta.json"), b"random_test_data")
         .is_ok());
-    thread::sleep(Duration::from_millis(200));
-    assert_eq!(10, counter.load(Ordering::SeqCst));
+    assert!(receiver.recv_timeout(Duration::from_millis(500)).is_ok());
+    assert_eq!(10, counter.load(SeqCst));
 }
 fn test_lock_non_blocking(directory: &mut dyn Directory) {
@@ -174,9 +257,13 @@ fn test_lock_blocking(directory: &mut dyn Directory) {
         is_blocking: true,
     });
     assert!(lock_a_res.is_ok());
+    let in_thread = Arc::new(AtomicBool::default());
+    let in_thread_clone = in_thread.clone();
+    let (sender, receiver) = oneshot::channel();
     std::thread::spawn(move || {
         //< lock_a_res is sent to the thread.
-        std::thread::sleep(time::Duration::from_millis(10));
+        in_thread_clone.store(true, SeqCst);
+        let _just_sync = block_on(receiver);
         // explicitely droping lock_a_res. It would have been sufficient to just force it
         // to be part of the move, but the intent seems clearer that way.
         drop(lock_a_res);
@@ -189,14 +276,18 @@ fn test_lock_blocking(directory: &mut dyn Directory) {
         });
         assert!(lock_a_res.is_err());
     }
-    {
-        // the blocking call should wait for at least 10ms.
-        let start = time::Instant::now();
-        let lock_a_res = directory.acquire_lock(&Lock {
+    let directory_clone = directory.box_clone();
+    let (sender2, receiver2) = oneshot::channel();
+    let join_handle = std::thread::spawn(move || {
+        assert!(sender2.send(()).is_ok());
+        let lock_a_res = directory_clone.acquire_lock(&Lock {
             filepath: PathBuf::from("a.lock"),
             is_blocking: true,
         });
+        assert!(in_thread.load(SeqCst));
         assert!(lock_a_res.is_ok());
-        assert!(start.elapsed().subsec_millis() >= 10);
-    }
+    });
+    assert!(block_on(receiver2).is_ok());
+    assert!(sender.send(()).is_ok());
+    assert!(join_handle.join().is_ok());
 }
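The reworked blocking-lock test above coordinates the two threads with a oneshot channel and an `AtomicBool` instead of timing. For reference, here is a minimal sketch of the blocking-lock behaviour itself using the `Lock` fields shown in this diff; the lock file name is illustrative.

    use std::path::PathBuf;
    use tantivy::directory::{Directory, Lock, RAMDirectory};

    fn lock_example() {
        let directory = RAMDirectory::create();
        let lock = Lock {
            filepath: PathBuf::from("a.lock"),
            is_blocking: true,
        };
        // First acquisition succeeds immediately.
        let guard = directory.acquire_lock(&lock).unwrap();
        // A second blocking acquisition would wait until `guard` is dropped.
        drop(guard);
        let _guard2 = directory.acquire_lock(&lock).unwrap();
    }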


@@ -1,3 +1,5 @@
+use futures::channel::oneshot;
+use futures::{Future, TryFutureExt};
 use std::sync::Arc;
 use std::sync::RwLock;
 use std::sync::Weak;
@@ -22,13 +24,20 @@ pub struct WatchCallbackList {
 #[derive(Clone)]
 pub struct WatchHandle(Arc<WatchCallback>);
+impl WatchHandle {
+    /// Create a WatchHandle handle.
+    pub fn new(watch_callback: Arc<WatchCallback>) -> WatchHandle {
+        WatchHandle(watch_callback)
+    }
+}
 impl WatchCallbackList {
     /// Suscribes a new callback and returns a handle that controls the lifetime of the callback.
     pub fn subscribe(&self, watch_callback: WatchCallback) -> WatchHandle {
         let watch_callback_arc = Arc::new(watch_callback);
         let watch_callback_weak = Arc::downgrade(&watch_callback_arc);
         self.router.write().unwrap().push(watch_callback_weak);
-        WatchHandle(watch_callback_arc)
+        WatchHandle::new(watch_callback_arc)
     }
     fn list_callback(&self) -> Vec<Arc<WatchCallback>> {
@@ -47,14 +56,21 @@ impl WatchCallbackList {
     }
     /// Triggers all callbacks
-    pub fn broadcast(&self) {
+    pub fn broadcast(&self) -> impl Future<Output = ()> {
         let callbacks = self.list_callback();
+        let (sender, receiver) = oneshot::channel();
+        let result = receiver.unwrap_or_else(|_| ());
+        if callbacks.is_empty() {
+            let _ = sender.send(());
+            return result;
+        }
         let spawn_res = std::thread::Builder::new()
             .name("watch-callbacks".to_string())
             .spawn(move || {
                 for callback in callbacks {
                     callback();
                 }
+                let _ = sender.send(());
             });
         if let Err(err) = spawn_res {
             error!(
@@ -62,19 +78,17 @@ impl WatchCallbackList {
                 err
             );
         }
+        result
     }
 }
 #[cfg(test)]
 mod tests {
     use crate::directory::WatchCallbackList;
+    use futures::executor::block_on;
     use std::mem;
     use std::sync::atomic::{AtomicUsize, Ordering};
     use std::sync::Arc;
-    use std::thread;
-    use std::time::Duration;
-    const WAIT_TIME: u64 = 20;
     #[test]
     fn test_watch_event_router_simple() {
@@ -84,22 +98,22 @@ mod tests {
         let inc_callback = Box::new(move || {
             counter_clone.fetch_add(1, Ordering::SeqCst);
         });
-        watch_event_router.broadcast();
+        block_on(watch_event_router.broadcast());
         assert_eq!(0, counter.load(Ordering::SeqCst));
         let handle_a = watch_event_router.subscribe(inc_callback);
-        thread::sleep(Duration::from_millis(WAIT_TIME));
         assert_eq!(0, counter.load(Ordering::SeqCst));
-        watch_event_router.broadcast();
-        thread::sleep(Duration::from_millis(WAIT_TIME));
+        block_on(watch_event_router.broadcast());
         assert_eq!(1, counter.load(Ordering::SeqCst));
-        watch_event_router.broadcast();
-        watch_event_router.broadcast();
-        watch_event_router.broadcast();
-        thread::sleep(Duration::from_millis(WAIT_TIME));
+        block_on(async {
+            (
+                watch_event_router.broadcast().await,
+                watch_event_router.broadcast().await,
+                watch_event_router.broadcast().await,
+            )
+        });
         assert_eq!(4, counter.load(Ordering::SeqCst));
         mem::drop(handle_a);
-        watch_event_router.broadcast();
-        thread::sleep(Duration::from_millis(WAIT_TIME));
+        block_on(watch_event_router.broadcast());
         assert_eq!(4, counter.load(Ordering::SeqCst));
     }
@@ -115,20 +129,20 @@ mod tests {
         };
         let handle_a = watch_event_router.subscribe(inc_callback(1));
         let handle_a2 = watch_event_router.subscribe(inc_callback(10));
-        thread::sleep(Duration::from_millis(WAIT_TIME));
         assert_eq!(0, counter.load(Ordering::SeqCst));
-        watch_event_router.broadcast();
-        watch_event_router.broadcast();
-        thread::sleep(Duration::from_millis(WAIT_TIME));
+        block_on(async {
+            futures::join!(
+                watch_event_router.broadcast(),
+                watch_event_router.broadcast()
+            )
+        });
         assert_eq!(22, counter.load(Ordering::SeqCst));
         mem::drop(handle_a);
-        watch_event_router.broadcast();
-        thread::sleep(Duration::from_millis(WAIT_TIME));
+        block_on(watch_event_router.broadcast());
         assert_eq!(32, counter.load(Ordering::SeqCst));
         mem::drop(handle_a2);
-        watch_event_router.broadcast();
-        watch_event_router.broadcast();
-        thread::sleep(Duration::from_millis(WAIT_TIME));
+        block_on(watch_event_router.broadcast());
+        block_on(watch_event_router.broadcast());
         assert_eq!(32, counter.load(Ordering::SeqCst));
     }
@@ -142,15 +156,15 @@ mod tests {
         });
         let handle_a = watch_event_router.subscribe(inc_callback);
         assert_eq!(0, counter.load(Ordering::SeqCst));
-        watch_event_router.broadcast();
-        watch_event_router.broadcast();
-        thread::sleep(Duration::from_millis(WAIT_TIME));
+        block_on(async {
+            let future1 = watch_event_router.broadcast();
+            let future2 = watch_event_router.broadcast();
+            futures::join!(future1, future2)
+        });
         assert_eq!(2, counter.load(Ordering::SeqCst));
-        thread::sleep(Duration::from_millis(WAIT_TIME));
         mem::drop(handle_a);
-        watch_event_router.broadcast();
-        thread::sleep(Duration::from_millis(WAIT_TIME));
+        let _ = watch_event_router.broadcast();
+        block_on(watch_event_router.broadcast());
         assert_eq!(2, counter.load(Ordering::SeqCst));
     }
 }
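With this change `broadcast` returns a future that resolves once every registered callback has run on the callback thread, so callers can await completion instead of sleeping. A minimal sketch of that usage, assuming `WatchCallbackList` can be built with `Default` as the tests above suggest and is re-exported as shown in the directory module:

    use futures::executor::block_on;
    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Arc;
    use tantivy::directory::WatchCallbackList;

    fn broadcast_and_wait() {
        let watch_list = WatchCallbackList::default();
        let counter = Arc::new(AtomicUsize::new(0));
        let counter_clone = counter.clone();
        let handle = watch_list.subscribe(Box::new(move || {
            counter_clone.fetch_add(1, Ordering::SeqCst);
        }));
        // The returned future completes only after the callback has run.
        block_on(watch_list.broadcast());
        assert_eq!(counter.load(Ordering::SeqCst), 1);
        drop(handle);
    }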


@@ -2,8 +2,8 @@
 use std::io;
-use crate::directory::error::LockError;
 use crate::directory::error::{IOError, OpenDirectoryError, OpenReadError, OpenWriteError};
+use crate::directory::error::{Incompatibility, LockError};
 use crate::fastfield::FastFieldNotAvailableError;
 use crate::query;
 use crate::schema;
@@ -80,6 +80,9 @@ pub enum TantivyError {
     /// System error. (e.g.: We failed spawning a new thread)
     #[fail(display = "System error.'{}'", _0)]
     SystemError(String),
+    /// Index incompatible with current version of tantivy
+    #[fail(display = "{:?}", _0)]
+    IncompatibleIndex(Incompatibility),
 }
 impl From<DataCorruption> for TantivyError {
@@ -129,6 +132,9 @@ impl From<OpenReadError> for TantivyError {
         match error {
             OpenReadError::FileDoesNotExist(filepath) => TantivyError::PathDoesNotExist(filepath),
             OpenReadError::IOError(io_error) => TantivyError::IOError(io_error),
+            OpenReadError::IncompatibleIndex(incompatibility) => {
+                TantivyError::IncompatibleIndex(incompatibility)
+            }
         }
     }
 }
@@ -170,3 +176,9 @@ impl From<serde_json::Error> for TantivyError {
         TantivyError::IOError(io_err.into())
     }
 }
+impl From<rayon::ThreadPoolBuildError> for TantivyError {
+    fn from(error: rayon::ThreadPoolBuildError) -> TantivyError {
+        TantivyError::SystemError(error.to_string())
+    }
+}


@@ -1,17 +1,19 @@
-use crate::common::HasLen;
+use crate::common::{BitSet, HasLen};
 use crate::directory::ReadOnlySource;
 use crate::directory::WritePtr;
 use crate::space_usage::ByteCount;
 use crate::DocId;
-use bit_set::BitSet;
 use std::io;
 use std::io::Write;
 /// Write a delete `BitSet`
 ///
 /// where `delete_bitset` is the set of deleted `DocId`.
-pub fn write_delete_bitset(delete_bitset: &BitSet, writer: &mut WritePtr) -> io::Result<()> {
-    let max_doc = delete_bitset.capacity();
+pub fn write_delete_bitset(
+    delete_bitset: &BitSet,
+    max_doc: u32,
+    writer: &mut WritePtr,
+) -> io::Result<()> {
     let mut byte = 0u8;
     let mut shift = 0u8;
     for doc in 0..max_doc {
@@ -29,7 +31,7 @@ pub fn write_delete_bitset(delete_bitset: &BitSet, writer: &mut WritePtr) -> io:
     if max_doc % 8 > 0 {
         writer.write_all(&[byte])?;
     }
-    writer.flush()
+    Ok(())
 }
 /// Set of deleted `DocId`s.
@@ -83,43 +85,40 @@ impl HasLen for DeleteBitSet {
 mod tests {
     use super::*;
     use crate::directory::*;
-    use bit_set::BitSet;
     use std::path::PathBuf;
-    fn test_delete_bitset_helper(bitset: &BitSet) {
+    fn test_delete_bitset_helper(bitset: &BitSet, max_doc: u32) {
         let test_path = PathBuf::from("test");
         let mut directory = RAMDirectory::create();
         {
             let mut writer = directory.open_write(&*test_path).unwrap();
-            write_delete_bitset(bitset, &mut writer).unwrap();
+            write_delete_bitset(bitset, max_doc, &mut writer).unwrap();
+            writer.terminate().unwrap();
         }
-        {
-            let source = directory.open_read(&test_path).unwrap();
-            let delete_bitset = DeleteBitSet::open(source);
-            let n = bitset.capacity();
-            for doc in 0..n {
-                assert_eq!(bitset.contains(doc), delete_bitset.is_deleted(doc as DocId));
-            }
-            assert_eq!(delete_bitset.len(), bitset.len());
-        }
+        let source = directory.open_read(&test_path).unwrap();
+        let delete_bitset = DeleteBitSet::open(source);
+        for doc in 0..max_doc {
+            assert_eq!(bitset.contains(doc), delete_bitset.is_deleted(doc as DocId));
+        }
+        assert_eq!(delete_bitset.len(), bitset.len());
     }
     #[test]
     fn test_delete_bitset() {
         {
-            let mut bitset = BitSet::with_capacity(10);
+            let mut bitset = BitSet::with_max_value(10);
             bitset.insert(1);
             bitset.insert(9);
-            test_delete_bitset_helper(&bitset);
+            test_delete_bitset_helper(&bitset, 10);
         }
         {
-            let mut bitset = BitSet::with_capacity(8);
+            let mut bitset = BitSet::with_max_value(8);
             bitset.insert(1);
             bitset.insert(2);
             bitset.insert(3);
             bitset.insert(5);
             bitset.insert(7);
-            test_delete_bitset_helper(&bitset);
+            test_delete_bitset_helper(&bitset, 8);
         }
     }
 }
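The delete bitset is packed one bit per document, least-significant bit first, with a byte emitted every eight documents and a trailing partial byte when `max_doc` is not a multiple of eight. A standalone sketch of the same packing, written against plain Rust rather than tantivy's crate-private helpers:

    // Pack the given deleted doc ids (all < max_doc) into a bitmap,
    // one bit per document, LSB first within each byte.
    fn pack_delete_bitmap(deleted: &[u32], max_doc: u32) -> Vec<u8> {
        let mut bytes = vec![0u8; ((max_doc as usize) + 7) / 8];
        for &doc in deleted {
            assert!(doc < max_doc);
            bytes[(doc / 8) as usize] |= 1u8 << (doc % 8);
        }
        bytes
    }

    fn main() {
        // Documents 1 and 9 deleted out of 10: bytes 0b0000_0010, 0b0000_0010.
        assert_eq!(pack_delete_bitmap(&[1, 9], 10), vec![2u8, 2u8]);
    }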


@@ -33,6 +33,7 @@ pub use self::reader::FastFieldReader;
 pub use self::readers::FastFieldReaders;
 pub use self::serializer::FastFieldSerializer;
 pub use self::writer::{FastFieldsWriter, IntFastFieldWriter};
+use crate::chrono::{NaiveDateTime, Utc};
 use crate::common;
 use crate::schema::Cardinality;
 use crate::schema::FieldType;
@@ -49,7 +50,7 @@ mod serializer;
 mod writer;
 /// Trait for types that are allowed for fast fields: (u64, i64 and f64).
-pub trait FastValue: Default + Clone + Copy + Send + Sync + PartialOrd {
+pub trait FastValue: Clone + Copy + Send + Sync + PartialOrd {
     /// Converts a value from u64
     ///
     /// Internally all fast field values are encoded as u64.
@@ -69,6 +70,12 @@ pub trait FastValue: Default + Clone + Copy + Send + Sync + PartialOrd {
     /// Cast value to `u64`.
     /// The value is just reinterpreted in memory.
     fn as_u64(&self) -> u64;
+    /// Build a default value. This default value is never used, so the value does not
+    /// really matter.
+    fn make_zero() -> Self {
+        Self::from_u64(0i64.to_u64())
+    }
 }
 impl FastValue for u64 {
@@ -135,11 +142,34 @@ impl FastValue for f64 {
     }
 }
+impl FastValue for crate::DateTime {
+    fn from_u64(timestamp_u64: u64) -> Self {
+        let timestamp_i64 = i64::from_u64(timestamp_u64);
+        crate::DateTime::from_utc(NaiveDateTime::from_timestamp(timestamp_i64, 0), Utc)
+    }
+    fn to_u64(&self) -> u64 {
+        self.timestamp().to_u64()
+    }
+    fn fast_field_cardinality(field_type: &FieldType) -> Option<Cardinality> {
+        match *field_type {
+            FieldType::Date(ref integer_options) => integer_options.get_fastfield_cardinality(),
+            _ => None,
+        }
+    }
+    fn as_u64(&self) -> u64 {
+        self.timestamp().as_u64()
+    }
+}
 fn value_to_u64(value: &Value) -> u64 {
     match *value {
         Value::U64(ref val) => *val,
         Value::I64(ref val) => common::i64_to_u64(*val),
         Value::F64(ref val) => common::f64_to_u64(*val),
+        Value::Date(ref datetime) => common::i64_to_u64(datetime.timestamp()),
         _ => panic!("Expected a u64/i64/f64 field, got {:?} ", value),
     }
 }
@@ -151,10 +181,12 @@ mod tests {
     use crate::common::CompositeFile;
     use crate::directory::{Directory, RAMDirectory, WritePtr};
     use crate::fastfield::FastFieldReader;
-    use crate::schema::Document;
+    use crate::merge_policy::NoMergePolicy;
     use crate::schema::Field;
     use crate::schema::Schema;
     use crate::schema::FAST;
+    use crate::schema::{Document, IntOptions};
+    use crate::{Index, SegmentId, SegmentReader};
     use once_cell::sync::Lazy;
     use rand::prelude::SliceRandom;
     use rand::rngs::StdRng;
@@ -178,6 +210,12 @@ mod tests {
         assert_eq!(test_fastfield.get(2), 300);
     }
+    #[test]
+    pub fn test_fastfield_i64_u64() {
+        let datetime = crate::DateTime::from_utc(NaiveDateTime::from_timestamp(0i64, 0), Utc);
+        assert_eq!(i64::from_u64(datetime.to_u64()), 0i64);
+    }
     #[test]
     fn test_intfastfield_small() {
         let path = Path::new("test");
@@ -430,6 +468,92 @@ mod tests {
         }
     }
+    #[test]
+    fn test_merge_missing_date_fast_field() {
+        let mut schema_builder = Schema::builder();
+        let date_field = schema_builder.add_date_field("date", FAST);
+        let schema = schema_builder.build();
+        let index = Index::create_in_ram(schema);
+        let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+        index_writer.set_merge_policy(Box::new(NoMergePolicy));
+        index_writer.add_document(doc!(date_field =>crate::chrono::prelude::Utc::now()));
+        index_writer.commit().unwrap();
+        index_writer.add_document(doc!());
+        index_writer.commit().unwrap();
+        let reader = index.reader().unwrap();
+        let segment_ids: Vec<SegmentId> = reader
+            .searcher()
+            .segment_readers()
+            .iter()
+            .map(SegmentReader::segment_id)
+            .collect();
+        assert_eq!(segment_ids.len(), 2);
+        let merge_future = index_writer.merge(&segment_ids[..]);
+        let merge_res = futures::executor::block_on(merge_future);
+        assert!(merge_res.is_ok());
+        assert!(reader.reload().is_ok());
+        assert_eq!(reader.searcher().segment_readers().len(), 1);
+    }
+    #[test]
+    fn test_default_datetime() {
+        assert_eq!(crate::DateTime::make_zero().timestamp(), 0i64);
+    }
+    #[test]
+    fn test_datefastfield() {
+        use crate::fastfield::FastValue;
+        let mut schema_builder = Schema::builder();
+        let date_field = schema_builder.add_date_field("date", FAST);
+        let multi_date_field = schema_builder.add_date_field(
+            "multi_date",
+            IntOptions::default().set_fast(Cardinality::MultiValues),
+        );
+        let schema = schema_builder.build();
+        let index = Index::create_in_ram(schema);
+        let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+        index_writer.set_merge_policy(Box::new(NoMergePolicy));
+        index_writer.add_document(doc!(
+            date_field => crate::DateTime::from_u64(1i64.to_u64()),
+            multi_date_field => crate::DateTime::from_u64(2i64.to_u64()),
+            multi_date_field => crate::DateTime::from_u64(3i64.to_u64())
+        ));
+        index_writer.add_document(doc!(
+            date_field => crate::DateTime::from_u64(4i64.to_u64())
+        ));
+        index_writer.add_document(doc!(
+            multi_date_field => crate::DateTime::from_u64(5i64.to_u64()),
+            multi_date_field => crate::DateTime::from_u64(6i64.to_u64())
+        ));
+        index_writer.commit().unwrap();
+        let reader = index.reader().unwrap();
+        let searcher = reader.searcher();
+        assert_eq!(searcher.segment_readers().len(), 1);
+        let segment_reader = searcher.segment_reader(0);
+        let fast_fields = segment_reader.fast_fields();
+        let date_fast_field = fast_fields.date(date_field).unwrap();
+        let dates_fast_field = fast_fields.dates(multi_date_field).unwrap();
+        let mut dates = vec![];
+        {
+            assert_eq!(date_fast_field.get(0u32).timestamp(), 1i64);
+            dates_fast_field.get_vals(0u32, &mut dates);
+            assert_eq!(dates.len(), 2);
+            assert_eq!(dates[0].timestamp(), 2i64);
+            assert_eq!(dates[1].timestamp(), 3i64);
+        }
+        {
+            assert_eq!(date_fast_field.get(1u32).timestamp(), 4i64);
+            dates_fast_field.get_vals(1u32, &mut dates);
+            assert!(dates.is_empty());
+        }
+        {
+            assert_eq!(date_fast_field.get(2u32).timestamp(), 0i64);
+            dates_fast_field.get_vals(2u32, &mut dates);
+            assert_eq!(dates.len(), 2);
+            assert_eq!(dates[0].timestamp(), 5i64);
+            assert_eq!(dates[1].timestamp(), 6i64);
+        }
+    }
 }
 #[cfg(all(test, feature = "unstable"))]
@@ -437,9 +561,9 @@ mod bench {
     use super::tests::FIELD;
     use super::tests::{generate_permutation, SCHEMA};
     use super::*;
-    use common::CompositeFile;
-    use directory::{Directory, RAMDirectory, WritePtr};
-    use fastfield::FastFieldReader;
+    use crate::common::CompositeFile;
+    use crate::directory::{Directory, RAMDirectory, WritePtr};
+    use crate::fastfield::FastFieldReader;
     use std::collections::HashMap;
     use std::path::Path;
     use test::{self, Bencher};
@@ -537,5 +661,4 @@ mod bench {
         });
     }
 }
 }
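Dates are encoded through their second-precision UNIX timestamp, reusing the existing i64 to u64 monotonic mapping. A small round-trip sketch, assuming `chrono` and the `FastValue` trait are reachable through `tantivy::chrono` and `tantivy::fastfield` as the crate-internal paths above suggest:

    use tantivy::chrono::{NaiveDateTime, Utc};
    use tantivy::fastfield::FastValue;

    fn datetime_roundtrip() {
        let date = tantivy::DateTime::from_utc(NaiveDateTime::from_timestamp(1_577_836_800, 0), Utc);
        // The DateTime is encoded as its timestamp, then mapped to u64.
        let encoded: u64 = date.to_u64();
        let decoded = tantivy::DateTime::from_u64(encoded);
        assert_eq!(decoded.timestamp(), 1_577_836_800);
    }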


@@ -7,9 +7,6 @@ pub use self::writer::MultiValueIntFastFieldWriter;
 #[cfg(test)]
 mod tests {
-    use time;
-    use self::time::Duration;
     use crate::collector::TopDocs;
     use crate::query::QueryParser;
     use crate::schema::Cardinality;
@@ -17,6 +14,7 @@ mod tests {
     use crate::schema::IntOptions;
     use crate::schema::Schema;
     use crate::Index;
+    use chrono::Duration;
     #[test]
     fn test_multivalued_u64() {


@@ -45,7 +45,7 @@ impl<Item: FastValue> MultiValueIntFastFieldReader<Item> {
     pub fn get_vals(&self, doc: DocId, vals: &mut Vec<Item>) {
         let (start, stop) = self.range(doc);
         let len = (stop - start) as usize;
-        vals.resize(len, Item::default());
+        vals.resize(len, Item::make_zero());
         self.vals_reader.get_range_u64(start, &mut vals[..]);
     }


@@ -5,8 +5,7 @@ use crate::postings::UnorderedTermId;
 use crate::schema::{Document, Field};
 use crate::termdict::TermOrdinal;
 use crate::DocId;
-use itertools::Itertools;
-use std::collections::HashMap;
+use fnv::FnvHashMap;
 use std::io;
 /// Writer for multi-valued (as in, more than one value per document)
@@ -102,7 +101,7 @@ impl MultiValueIntFastFieldWriter {
     pub fn serialize(
         &self,
         serializer: &mut FastFieldSerializer,
-        mapping_opt: Option<&HashMap<UnorderedTermId, TermOrdinal>>,
+        mapping_opt: Option<&FnvHashMap<UnorderedTermId, TermOrdinal>>,
     ) -> io::Result<()> {
         {
             // writing the offset index
@@ -151,8 +150,8 @@ impl MultiValueIntFastFieldWriter {
             }
         }
         None => {
-            let val_min_max = self.vals.iter().cloned().minmax();
-            let (val_min, val_max) = val_min_max.into_option().unwrap_or((0u64, 0u64));
+            let val_min_max = crate::common::minmax(self.vals.iter().cloned());
+            let (val_min, val_max) = val_min_max.unwrap_or((0u64, 0u64));
             value_serializer =
                 serializer.new_u64_fast_field_with_idx(self.field, val_min, val_max, 1)?;
             for &val in &self.vals {


@@ -4,7 +4,6 @@ use crate::fastfield::MultiValueIntFastFieldReader;
 use crate::fastfield::{FastFieldNotAvailableError, FastFieldReader};
 use crate::schema::{Cardinality, Field, FieldType, Schema};
 use crate::space_usage::PerFieldSpaceUsage;
-use crate::Result;
 use std::collections::HashMap;
 /// Provides access to all of the FastFieldReader.
@@ -15,9 +14,11 @@ pub struct FastFieldReaders {
     fast_field_i64: HashMap<Field, FastFieldReader<i64>>,
     fast_field_u64: HashMap<Field, FastFieldReader<u64>>,
     fast_field_f64: HashMap<Field, FastFieldReader<f64>>,
+    fast_field_date: HashMap<Field, FastFieldReader<crate::DateTime>>,
     fast_field_i64s: HashMap<Field, MultiValueIntFastFieldReader<i64>>,
     fast_field_u64s: HashMap<Field, MultiValueIntFastFieldReader<u64>>,
     fast_field_f64s: HashMap<Field, MultiValueIntFastFieldReader<f64>>,
+    fast_field_dates: HashMap<Field, MultiValueIntFastFieldReader<crate::DateTime>>,
     fast_bytes: HashMap<Field, BytesFastFieldReader>,
     fast_fields_composite: CompositeFile,
 }
@@ -26,6 +27,7 @@ enum FastType {
     I64,
     U64,
     F64,
+    Date,
 }
 fn type_and_cardinality(field_type: &FieldType) -> Option<(FastType, Cardinality)> {
@@ -39,6 +41,9 @@ fn type_and_cardinality(field_type: &FieldType) -> Option<(FastType, Cardinality
         FieldType::F64(options) => options
             .get_fastfield_cardinality()
             .map(|cardinality| (FastType::F64, cardinality)),
+        FieldType::Date(options) => options
+            .get_fastfield_cardinality()
+            .map(|cardinality| (FastType::Date, cardinality)),
         FieldType::HierarchicalFacet => Some((FastType::U64, Cardinality::MultiValues)),
         _ => None,
     }
@@ -48,19 +53,20 @@ impl FastFieldReaders {
     pub(crate) fn load_all(
         schema: &Schema,
         fast_fields_composite: &CompositeFile,
-    ) -> Result<FastFieldReaders> {
+    ) -> crate::Result<FastFieldReaders> {
         let mut fast_field_readers = FastFieldReaders {
             fast_field_i64: Default::default(),
             fast_field_u64: Default::default(),
             fast_field_f64: Default::default(),
+            fast_field_date: Default::default(),
             fast_field_i64s: Default::default(),
             fast_field_u64s: Default::default(),
             fast_field_f64s: Default::default(),
+            fast_field_dates: Default::default(),
             fast_bytes: Default::default(),
             fast_fields_composite: fast_fields_composite.clone(),
         };
-        for (field_id, field_entry) in schema.fields().iter().enumerate() {
-            let field = Field(field_id as u32);
+        for (field, field_entry) in schema.fields() {
             let field_type = field_entry.field_type();
             if field_type == &FieldType::Bytes {
                 let idx_reader = fast_fields_composite
@@ -96,6 +102,12 @@ impl FastFieldReaders {
                         FastFieldReader::open(fast_field_data.clone()),
                     );
                 }
+                FastType::Date => {
+                    fast_field_readers.fast_field_date.insert(
+                        field,
+                        FastFieldReader::open(fast_field_data.clone()),
+                    );
+                }
             }
         } else {
             return Err(From::from(FastFieldNotAvailableError::new(field_entry)));
@@ -131,6 +143,14 @@ impl FastFieldReaders {
                         .fast_field_f64s
                         .insert(field, multivalued_int_fast_field);
                 }
+                FastType::Date => {
+                    let vals_reader = FastFieldReader::open(fast_field_data);
+                    let multivalued_int_fast_field =
+                        MultiValueIntFastFieldReader::open(idx_reader, vals_reader);
+                    fast_field_readers
+                        .fast_field_dates
+                        .insert(field, multivalued_int_fast_field);
+                }
             }
         } else {
             return Err(From::from(FastFieldNotAvailableError::new(field_entry)));
@@ -157,8 +177,6 @@ impl FastFieldReaders {
     /// If the field is a i64-fast field, return the associated u64 reader. Values are
     /// mapped from i64 to u64 using a (well the, it is unique) monotonic mapping. ///
     ///
-    ///TODO should it also be lenient with f64?
-    ///
     /// This method is useful when merging segment reader.
     pub(crate) fn u64_lenient(&self, field: Field) -> Option<FastFieldReader<u64>> {
         if let Some(u64_ff_reader) = self.u64(field) {
@@ -167,6 +185,12 @@ impl FastFieldReaders {
         if let Some(i64_ff_reader) = self.i64(field) {
             return Some(i64_ff_reader.into_u64_reader());
         }
+        if let Some(f64_ff_reader) = self.f64(field) {
+            return Some(f64_ff_reader.into_u64_reader());
+        }
+        if let Some(date_ff_reader) = self.date(field) {
+            return Some(date_ff_reader.into_u64_reader());
+        }
         None
     }
@@ -177,6 +201,13 @@ impl FastFieldReaders {
         self.fast_field_i64.get(&field).cloned()
     }
+    /// Returns the `i64` fast field reader reader associated to `field`.
+    ///
+    /// If `field` is not a i64 fast field, this method returns `None`.
+    pub fn date(&self, field: Field) -> Option<FastFieldReader<crate::DateTime>> {
+        self.fast_field_date.get(&field).cloned()
+    }
     /// Returns the `f64` fast field reader reader associated to `field`.
     ///
     /// If `field` is not a f64 fast field, this method returns `None`.
@@ -203,6 +234,9 @@ impl FastFieldReaders {
         if let Some(i64s_ff_reader) = self.i64s(field) {
             return Some(i64s_ff_reader.into_u64s_reader());
         }
+        if let Some(f64s_ff_reader) = self.f64s(field) {
+            return Some(f64s_ff_reader.into_u64s_reader());
+        }
         None
     }
@@ -220,6 +254,13 @@ impl FastFieldReaders {
         self.fast_field_f64s.get(&field).cloned()
     }
+    /// Returns a `crate::DateTime` multi-valued fast field reader reader associated to `field`.
+    ///
+    /// If `field` is not a `crate::DateTime` multi-valued fast field, this method returns `None`.
+    pub fn dates(&self, field: Field) -> Option<MultiValueIntFastFieldReader<crate::DateTime>> {
+        self.fast_field_dates.get(&field).cloned()
+    }
     /// Returns the `bytes` fast field reader associated to `field`.
     ///
     /// If `field` is not a bytes fast field, returns `None`.
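The new `date()` and `dates()` accessors mirror the existing `u64()`/`u64s()` pair. A short sketch of reading a single-valued date fast field end to end, based on the APIs exercised in the tests above; the field name and values are illustrative.

    use tantivy::schema::{Schema, FAST};
    use tantivy::{doc, Index};

    fn read_date_fast_field() -> tantivy::Result<()> {
        let mut schema_builder = Schema::builder();
        let date_field = schema_builder.add_date_field("date", FAST);
        let schema = schema_builder.build();
        let index = Index::create_in_ram(schema);
        let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?;
        index_writer.add_document(doc!(date_field => tantivy::chrono::Utc::now()));
        index_writer.commit()?;
        let reader = index.reader()?;
        let searcher = reader.searcher();
        let segment_reader = searcher.segment_reader(0);
        // `date()` returns None if the field is not a single-valued date fast field.
        let date_reader = segment_reader.fast_fields().date(date_field).unwrap();
        println!("doc 0 timestamp: {}", date_reader.get(0u32).timestamp());
        Ok(())
    }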


@@ -4,8 +4,9 @@ use crate::common::BinarySerializable;
 use crate::common::VInt;
 use crate::fastfield::{BytesFastFieldWriter, FastFieldSerializer};
 use crate::postings::UnorderedTermId;
-use crate::schema::{Cardinality, Document, Field, FieldType, Schema};
+use crate::schema::{Cardinality, Document, Field, FieldEntry, FieldType, Schema};
 use crate::termdict::TermOrdinal;
+use fnv::FnvHashMap;
 use std::collections::HashMap;
 use std::io;
@@ -16,6 +17,14 @@ pub struct FastFieldsWriter {
     bytes_value_writers: Vec<BytesFastFieldWriter>,
 }
+fn fast_field_default_value(field_entry: &FieldEntry) -> u64 {
+    match *field_entry.field_type() {
+        FieldType::I64(_) | FieldType::Date(_) => common::i64_to_u64(0i64),
+        FieldType::F64(_) => common::f64_to_u64(0.0f64),
+        _ => 0u64,
+    }
+}
 impl FastFieldsWriter {
     /// Create all `FastFieldWriter` required by the schema.
     pub fn from_schema(schema: &Schema) -> FastFieldsWriter {
@@ -23,18 +32,16 @@ impl FastFieldsWriter {
         let mut multi_values_writers = Vec::new();
         let mut bytes_value_writers = Vec::new();
-        for (field_id, field_entry) in schema.fields().iter().enumerate() {
-            let field = Field(field_id as u32);
-            let default_value = match *field_entry.field_type() {
-                FieldType::I64(_) => common::i64_to_u64(0i64),
-                FieldType::F64(_) => common::f64_to_u64(0.0f64),
-                _ => 0u64,
-            };
+        for (field, field_entry) in schema.fields() {
             match *field_entry.field_type() {
-                FieldType::I64(ref int_options) | FieldType::U64(ref int_options) | FieldType::F64(ref int_options) => {
+                FieldType::I64(ref int_options)
+                | FieldType::U64(ref int_options)
+                | FieldType::F64(ref int_options)
+                | FieldType::Date(ref int_options) => {
                     match int_options.get_fastfield_cardinality() {
                         Some(Cardinality::SingleValue) => {
                             let mut fast_field_writer = IntFastFieldWriter::new(field);
+                            let default_value = fast_field_default_value(field_entry);
                             fast_field_writer.set_val_if_missing(default_value);
                             single_value_writers.push(fast_field_writer);
                         }
@@ -114,7 +121,7 @@ impl FastFieldsWriter {
     pub fn serialize(
         &self,
         serializer: &mut FastFieldSerializer,
-        mapping: &HashMap<Field, HashMap<UnorderedTermId, TermOrdinal>>,
+        mapping: &HashMap<Field, FnvHashMap<UnorderedTermId, TermOrdinal>>,
     ) -> io::Result<()> {
         for field_writer in &self.single_value_writers {
             field_writer.serialize(serializer)?;


@@ -22,11 +22,14 @@ impl FieldNormsWriter {
     pub(crate) fn fields_with_fieldnorm(schema: &Schema) -> Vec<Field> {
         schema
             .fields()
-            .iter()
-            .enumerate()
-            .filter(|&(_, field_entry)| field_entry.is_indexed())
-            .map(|(field, _)| Field(field as u32))
-            .collect::<Vec<Field>>()
+            .filter_map(|(field, field_entry)| {
+                if field_entry.is_indexed() {
+                    Some(field)
+                } else {
+                    None
+                }
+            })
+            .collect::<Vec<_>>()
     }
     /// Initialize with state for tracking the field norm fields
@@ -35,7 +38,7 @@ impl FieldNormsWriter {
         let fields = FieldNormsWriter::fields_with_fieldnorm(schema);
         let max_field = fields
             .iter()
-            .map(|field| field.0)
+            .map(Field::field_id)
             .max()
             .map(|max_field_id| max_field_id as usize + 1)
             .unwrap_or(0);
@@ -50,8 +53,8 @@ impl FieldNormsWriter {
     ///
     /// Will extend with 0-bytes for documents that have not been seen.
     pub fn fill_up_to_max_doc(&mut self, max_doc: DocId) {
-        for &field in self.fields.iter() {
-            self.fieldnorms_buffer[field.0 as usize].resize(max_doc as usize, 0u8);
+        for field in self.fields.iter() {
+            self.fieldnorms_buffer[field.field_id() as usize].resize(max_doc as usize, 0u8);
         }
     }
@@ -64,7 +67,7 @@ impl FieldNormsWriter {
     /// * field - the field being set
     /// * fieldnorm - the number of terms present in document `doc` in field `field`
     pub fn record(&mut self, doc: DocId, field: Field, fieldnorm: u32) {
-        let fieldnorm_buffer: &mut Vec<u8> = &mut self.fieldnorms_buffer[field.0 as usize];
+        let fieldnorm_buffer: &mut Vec<u8> = &mut self.fieldnorms_buffer[field.field_id() as usize];
         assert!(
             fieldnorm_buffer.len() <= doc as usize,
             "Cannot register a given fieldnorm twice"
@@ -77,7 +80,7 @@ impl FieldNormsWriter {
     /// Serialize the seen fieldnorm values to the serializer for all fields.
     pub fn serialize(&self, fieldnorms_serializer: &mut FieldNormsSerializer) -> io::Result<()> {
         for &field in self.fields.iter() {
-            let fieldnorm_values: &[u8] = &self.fieldnorms_buffer[field.0 as usize][..];
+            let fieldnorm_values: &[u8] = &self.fieldnorms_buffer[field.field_id() as usize][..];
             fieldnorms_serializer.serialize_field(field, fieldnorm_values)?;
         }
         Ok(())


@@ -2,7 +2,7 @@ use super::operation::DeleteOperation;
use crate::Opstamp; use crate::Opstamp;
use std::mem; use std::mem;
use std::ops::DerefMut; use std::ops::DerefMut;
use std::sync::{Arc, RwLock}; use std::sync::{Arc, RwLock, Weak};
// The DeleteQueue is similar in conceptually to a multiple // The DeleteQueue is similar in conceptually to a multiple
// consumer single producer broadcast channel. // consumer single producer broadcast channel.
@@ -14,14 +14,15 @@ use std::sync::{Arc, RwLock};
// //
// New consumer can be created in two ways // New consumer can be created in two ways
// - calling `delete_queue.cursor()` returns a cursor, that // - calling `delete_queue.cursor()` returns a cursor, that
// will include all future delete operation (and no past operations). // will include all future delete operation (and some or none
// of the past operations... The client is in charge of checking the opstamps.).
// - cloning an existing cursor returns a new cursor, that // - cloning an existing cursor returns a new cursor, that
// is at the exact same position, and can now advance independently // is at the exact same position, and can now advance independently
// from the original cursor. // from the original cursor.
#[derive(Default)] #[derive(Default)]
struct InnerDeleteQueue { struct InnerDeleteQueue {
writer: Vec<DeleteOperation>, writer: Vec<DeleteOperation>,
last_block: Option<Arc<Block>>, last_block: Weak<Block>,
} }
#[derive(Clone)] #[derive(Clone)]
@@ -32,21 +33,31 @@ pub struct DeleteQueue {
impl DeleteQueue { impl DeleteQueue {
// Creates a new delete queue. // Creates a new delete queue.
pub fn new() -> DeleteQueue { pub fn new() -> DeleteQueue {
let delete_queue = DeleteQueue { DeleteQueue {
inner: Arc::default(), inner: Arc::default(),
};
let next_block = NextBlock::from(delete_queue.clone());
{
let mut delete_queue_wlock = delete_queue.inner.write().unwrap();
delete_queue_wlock.last_block = Some(Arc::new(Block {
operations: Arc::default(),
next: next_block,
}));
} }
}
delete_queue fn get_last_block(&self) -> Arc<Block> {
{
// try to get the last block by simply acquiring the read lock.
let rlock = self.inner.read().unwrap();
if let Some(block) = rlock.last_block.upgrade() {
return block;
}
}
// It failed. Let's double-check after acquiring the write lock, as someone could have called
// `get_last_block` right after we released the rlock.
let mut wlock = self.inner.write().unwrap();
if let Some(block) = wlock.last_block.upgrade() {
return block;
}
let block = Arc::new(Block {
operations: Arc::default(),
next: NextBlock::from(self.clone()),
});
wlock.last_block = Arc::downgrade(&block);
block
} }
// Creates a new cursor that makes it possible to // Creates a new cursor that makes it possible to
@@ -54,17 +65,7 @@ impl DeleteQueue {
// //
// Past delete operations are not accessible. // Past delete operations are not accessible.
pub fn cursor(&self) -> DeleteCursor { pub fn cursor(&self) -> DeleteCursor {
let last_block = self let last_block = self.get_last_block();
.inner
.read()
.expect("Read lock poisoned when opening delete queue cursor")
.last_block
.clone()
.expect(
"Failed to unwrap last_block. This should never happen
as the Option<> is only here to make
initialization possible",
);
let operations_len = last_block.operations.len(); let operations_len = last_block.operations.len();
DeleteCursor { DeleteCursor {
block: last_block, block: last_block,
@@ -100,23 +101,19 @@ impl DeleteQueue {
.write() .write()
.expect("Failed to acquire write lock on delete queue writer"); .expect("Failed to acquire write lock on delete queue writer");
let delete_operations; if self_wlock.writer.is_empty() {
{ return None;
let writer: &mut Vec<DeleteOperation> = &mut self_wlock.writer;
if writer.is_empty() {
return None;
}
delete_operations = mem::replace(writer, vec![]);
} }
let next_block = NextBlock::from(self.clone()); let delete_operations = mem::replace(&mut self_wlock.writer, vec![]);
{
self_wlock.last_block = Some(Arc::new(Block { let new_block = Arc::new(Block {
operations: Arc::new(delete_operations), operations: Arc::new(delete_operations.into_boxed_slice()),
next: next_block, next: NextBlock::from(self.clone()),
})); });
}
self_wlock.last_block.clone() self_wlock.last_block = Arc::downgrade(&new_block);
Some(new_block)
} }
} }
@@ -170,7 +167,7 @@ impl NextBlock {
} }
struct Block { struct Block {
operations: Arc<Vec<DeleteOperation>>, operations: Arc<Box<[DeleteOperation]>>,
next: NextBlock, next: NextBlock,
} }
@@ -258,7 +255,7 @@ mod tests {
let delete_queue = DeleteQueue::new(); let delete_queue = DeleteQueue::new();
let make_op = |i: usize| { let make_op = |i: usize| {
let field = Field(1u32); let field = Field::from_field_id(1u32);
DeleteOperation { DeleteOperation {
opstamp: i as u64, opstamp: i as u64,
term: Term::from_field_u64(field, i as u64), term: Term::from_field_u64(field, i as u64),
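The `get_last_block` helper above replaces the eagerly built `Option<Arc<Block>>` with a lazily re-materialized `Weak<Block>`: try to upgrade under the read lock first, and only fall back to the write lock (re-checking after acquiring it, since another thread may have won the race) when the weak reference is dead. A self-contained sketch of that read-then-write double-check pattern, caching a plain `String` instead of a delete `Block`:

```rust
use std::sync::{Arc, RwLock, Weak};

/// Sketch of a lazily rebuilt, weakly held cached value.
#[derive(Default)]
struct LazyCache {
    inner: RwLock<Weak<String>>,
}

impl LazyCache {
    fn get_or_create(&self, make: impl FnOnce() -> String) -> Arc<String> {
        {
            // Fast path: most callers succeed with just the read lock.
            let rlock = self.inner.read().unwrap();
            if let Some(value) = rlock.upgrade() {
                return value;
            }
        }
        // Slow path: re-check under the write lock, because another thread
        // may have rebuilt the value between the two lock acquisitions.
        let mut wlock = self.inner.write().unwrap();
        if let Some(value) = wlock.upgrade() {
            return value;
        }
        let value = Arc::new(make());
        *wlock = Arc::downgrade(&value);
        value
    }
}

fn main() {
    let cache = LazyCache::default();
    let a = cache.get_or_create(|| "block".to_string());
    let b = cache.get_or_create(|| unreachable!("still cached"));
    assert!(Arc::ptr_eq(&a, &b));
    drop((a, b));
    // Once every strong reference is gone, the next call rebuilds the value.
    let c = cache.get_or_create(|| "rebuilt".to_string());
    assert_eq!(*c, "rebuilt");
}
```

Holding only a `Weak` means the last block can be dropped as soon as no cursor points at it anymore, instead of being kept alive by the queue itself.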


@@ -1,13 +1,15 @@
use super::operation::{AddOperation, UserOperation}; use super::operation::{AddOperation, UserOperation};
use super::segment_updater::SegmentUpdater; use super::segment_updater::SegmentUpdater;
use super::PreparedCommit; use super::PreparedCommit;
use crate::common::BitSet;
use crate::core::Index; use crate::core::Index;
use crate::core::Segment; use crate::core::Segment;
use crate::core::SegmentComponent; use crate::core::SegmentComponent;
use crate::core::SegmentId; use crate::core::SegmentId;
use crate::core::SegmentMeta; use crate::core::SegmentMeta;
use crate::core::SegmentReader; use crate::core::SegmentReader;
use crate::directory::DirectoryLock; use crate::directory::TerminatingWrite;
use crate::directory::{DirectoryLock, GarbageCollectionResult};
use crate::docset::DocSet; use crate::docset::DocSet;
use crate::error::TantivyError; use crate::error::TantivyError;
use crate::fastfield::write_delete_bitset; use crate::fastfield::write_delete_bitset;
@@ -22,10 +24,9 @@ use crate::schema::Document;
use crate::schema::IndexRecordOption; use crate::schema::IndexRecordOption;
use crate::schema::Term; use crate::schema::Term;
use crate::Opstamp; use crate::Opstamp;
use crate::Result;
use bit_set::BitSet;
use crossbeam::channel; use crossbeam::channel;
use futures::{Canceled, Future}; use futures::executor::block_on;
use futures::future::Future;
use smallvec::smallvec; use smallvec::smallvec;
use smallvec::SmallVec; use smallvec::SmallVec;
use std::mem; use std::mem;
@@ -71,7 +72,7 @@ pub struct IndexWriter {
heap_size_in_bytes_per_thread: usize, heap_size_in_bytes_per_thread: usize,
workers_join_handle: Vec<JoinHandle<Result<()>>>, workers_join_handle: Vec<JoinHandle<crate::Result<()>>>,
operation_receiver: OperationReceiver, operation_receiver: OperationReceiver,
operation_sender: OperationSender, operation_sender: OperationSender,
@@ -94,7 +95,7 @@ fn compute_deleted_bitset(
delete_cursor: &mut DeleteCursor, delete_cursor: &mut DeleteCursor,
doc_opstamps: &DocToOpstampMapping, doc_opstamps: &DocToOpstampMapping,
target_opstamp: Opstamp, target_opstamp: Opstamp,
) -> Result<bool> { ) -> crate::Result<bool> {
let mut might_have_changed = false; let mut might_have_changed = false;
while let Some(delete_op) = delete_cursor.get() { while let Some(delete_op) = delete_cursor.get() {
if delete_op.opstamp > target_opstamp { if delete_op.opstamp > target_opstamp {
@@ -114,7 +115,7 @@ fn compute_deleted_bitset(
while docset.advance() { while docset.advance() {
let deleted_doc = docset.doc(); let deleted_doc = docset.doc();
if deleted_doc < limit_doc { if deleted_doc < limit_doc {
delete_bitset.insert(deleted_doc as usize); delete_bitset.insert(deleted_doc);
might_have_changed = true; might_have_changed = true;
} }
} }
@@ -125,64 +126,78 @@ fn compute_deleted_bitset(
Ok(might_have_changed) Ok(might_have_changed)
} }
/// Advance delete for the given segment up /// Advance delete for the given segment up to the target opstamp.
/// to the target opstamp. ///
/// Note that there is no guarantee that the resulting `segment_entry` delete_opstamp
/// is `==` target_opstamp.
/// For instance, if there was no delete operation between the state of the `segment_entry` and
/// the `target_opstamp`, the `segment_entry` is not updated.
pub(crate) fn advance_deletes( pub(crate) fn advance_deletes(
mut segment: Segment, mut segment: Segment,
segment_entry: &mut SegmentEntry, segment_entry: &mut SegmentEntry,
target_opstamp: Opstamp, target_opstamp: Opstamp,
) -> Result<()> { ) -> crate::Result<()> {
{ if segment_entry.meta().delete_opstamp() == Some(target_opstamp) {
if segment_entry.meta().delete_opstamp() == Some(target_opstamp) { // We are already up-to-date here.
// We are already up-to-date here. return Ok(());
return Ok(()); }
}
let segment_reader = SegmentReader::open(&segment)?; if segment_entry.delete_bitset().is_none() && segment_entry.delete_cursor().get().is_none() {
// There has been no `DeleteOperation` between the segment status and `target_opstamp`.
return Ok(());
}
let max_doc = segment_reader.max_doc(); let segment_reader = SegmentReader::open(&segment)?;
let mut delete_bitset: BitSet = match segment_entry.delete_bitset() {
Some(previous_delete_bitset) => (*previous_delete_bitset).clone(),
None => BitSet::with_capacity(max_doc as usize),
};
let delete_cursor = segment_entry.delete_cursor(); let max_doc = segment_reader.max_doc();
let mut delete_bitset: BitSet = match segment_entry.delete_bitset() {
Some(previous_delete_bitset) => (*previous_delete_bitset).clone(),
None => BitSet::with_max_value(max_doc),
};
compute_deleted_bitset( let num_deleted_docs_before = segment.meta().num_deleted_docs();
&mut delete_bitset,
&segment_reader,
delete_cursor,
&DocToOpstampMapping::None,
target_opstamp,
)?;
// TODO optimize compute_deleted_bitset(
&mut delete_bitset,
&segment_reader,
segment_entry.delete_cursor(),
&DocToOpstampMapping::None,
target_opstamp,
)?;
// TODO optimize
// It should be possible to do something smarter by manipulating bitsets directly
// to compute this union.
if let Some(seg_delete_bitset) = segment_reader.delete_bitset() {
for doc in 0u32..max_doc { for doc in 0u32..max_doc {
if segment_reader.is_deleted(doc) { if seg_delete_bitset.is_deleted(doc) {
delete_bitset.insert(doc as usize); delete_bitset.insert(doc);
} }
} }
let num_deleted_docs = delete_bitset.len();
if num_deleted_docs > 0 {
segment = segment.with_delete_meta(num_deleted_docs as u32, target_opstamp);
let mut delete_file = segment.open_write(SegmentComponent::DELETE)?;
write_delete_bitset(&delete_bitset, &mut delete_file)?;
}
} }
let num_deleted_docs: u32 = delete_bitset.len() as u32;
if num_deleted_docs > num_deleted_docs_before {
// There are new deletes. We need to write a new delete file.
segment = segment.with_delete_meta(num_deleted_docs as u32, target_opstamp);
let mut delete_file = segment.open_write(SegmentComponent::DELETE)?;
write_delete_bitset(&delete_bitset, max_doc, &mut delete_file)?;
delete_file.terminate()?;
}
segment_entry.set_meta(segment.meta().clone()); segment_entry.set_meta(segment.meta().clone());
Ok(()) Ok(())
} }
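The per-document loop above (still flagged `// TODO optimize`) unions the segment's existing delete bitset into the freshly computed one by testing every document. The "something smarter" the comment alludes to would be a word-level union; here is a sketch under the assumption of a simplified bitset backed by `u64` words (tantivy's `BitSet` does not necessarily expose this representation):

```rust
/// Simplified bitset backed by `u64` words, purely for illustration.
struct WordBitSet {
    words: Vec<u64>,
}

impl WordBitSet {
    fn with_max_value(max_value: u32) -> WordBitSet {
        WordBitSet { words: vec![0u64; (max_value as usize + 63) / 64] }
    }
    fn insert(&mut self, el: u32) {
        self.words[el as usize / 64] |= 1u64 << (el % 64);
    }
    fn contains(&self, el: u32) -> bool {
        self.words[el as usize / 64] & (1u64 << (el % 64)) != 0
    }
    /// Unions `other` into `self`, one word (64 documents) at a time,
    /// instead of testing documents one by one.
    fn union_with(&mut self, other: &WordBitSet) {
        for (dst, src) in self.words.iter_mut().zip(&other.words) {
            *dst |= src;
        }
    }
}

fn main() {
    let mut fresh_deletes = WordBitSet::with_max_value(128);
    fresh_deletes.insert(3);
    let mut previous_deletes = WordBitSet::with_max_value(128);
    previous_deletes.insert(100);
    fresh_deletes.union_with(&previous_deletes);
    assert!(fresh_deletes.contains(3) && fresh_deletes.contains(100));
}
```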
fn index_documents( fn index_documents(
memory_budget: usize, memory_budget: usize,
segment: &Segment, segment: Segment,
grouped_document_iterator: &mut dyn Iterator<Item = OperationGroup>, grouped_document_iterator: &mut dyn Iterator<Item = OperationGroup>,
segment_updater: &mut SegmentUpdater, segment_updater: &mut SegmentUpdater,
mut delete_cursor: DeleteCursor, mut delete_cursor: DeleteCursor,
) -> Result<bool> { ) -> crate::Result<bool> {
let schema = segment.schema(); let schema = segment.schema();
let segment_id = segment.id();
let mut segment_writer = SegmentWriter::for_segment(memory_budget, segment.clone(), &schema)?; let mut segment_writer = SegmentWriter::for_segment(memory_budget, segment.clone(), &schema)?;
for document_group in grouped_document_iterator { for document_group in grouped_document_iterator {
for doc in document_group { for doc in document_group {
@@ -202,25 +217,32 @@ fn index_documents(
return Ok(false); return Ok(false);
} }
let num_docs = segment_writer.max_doc(); let max_doc = segment_writer.max_doc();
// this is ensured by the call to peek before starting // this is ensured by the call to peek before starting
// the worker thread. // the worker thread.
assert!(num_docs > 0); assert!(max_doc > 0);
let doc_opstamps: Vec<Opstamp> = segment_writer.finalize()?; let doc_opstamps: Vec<Opstamp> = segment_writer.finalize()?;
let segment_meta = segment
.index() let segment_with_max_doc = segment.with_max_doc(max_doc);
.inventory()
.new_segment_meta(segment_id, num_docs);
let last_docstamp: Opstamp = *(doc_opstamps.last().unwrap()); let last_docstamp: Opstamp = *(doc_opstamps.last().unwrap());
let delete_bitset_opt = let delete_bitset_opt = apply_deletes(
apply_deletes(&segment, &mut delete_cursor, &doc_opstamps, last_docstamp)?; &segment_with_max_doc,
&mut delete_cursor,
&doc_opstamps,
last_docstamp,
)?;
let segment_entry = SegmentEntry::new(segment_meta, delete_cursor, delete_bitset_opt); let segment_entry = SegmentEntry::new(
Ok(segment_updater.add_segment(segment_entry)) segment_with_max_doc.meta().clone(),
delete_cursor,
delete_bitset_opt,
);
block_on(segment_updater.schedule_add_segment(segment_entry))?;
Ok(true)
} }
fn apply_deletes( fn apply_deletes(
@@ -228,7 +250,7 @@ fn apply_deletes(
mut delete_cursor: &mut DeleteCursor, mut delete_cursor: &mut DeleteCursor,
doc_opstamps: &[Opstamp], doc_opstamps: &[Opstamp],
last_docstamp: Opstamp, last_docstamp: Opstamp,
) -> Result<Option<BitSet<u32>>> { ) -> crate::Result<Option<BitSet>> {
if delete_cursor.get().is_none() { if delete_cursor.get().is_none() {
// if there are no delete operation in the queue, no need // if there are no delete operation in the queue, no need
// to even open the segment. // to even open the segment.
@@ -236,7 +258,9 @@ fn apply_deletes(
} }
let segment_reader = SegmentReader::open(segment)?; let segment_reader = SegmentReader::open(segment)?;
let doc_to_opstamps = DocToOpstampMapping::from(doc_opstamps); let doc_to_opstamps = DocToOpstampMapping::from(doc_opstamps);
let mut deleted_bitset = BitSet::with_capacity(segment_reader.max_doc() as usize);
let max_doc = segment.meta().max_doc();
let mut deleted_bitset = BitSet::with_max_value(max_doc);
let may_have_deletes = compute_deleted_bitset( let may_have_deletes = compute_deleted_bitset(
&mut deleted_bitset, &mut deleted_bitset,
&segment_reader, &segment_reader,
@@ -271,7 +295,7 @@ impl IndexWriter {
num_threads: usize, num_threads: usize,
heap_size_in_bytes_per_thread: usize, heap_size_in_bytes_per_thread: usize,
directory_lock: DirectoryLock, directory_lock: DirectoryLock,
) -> Result<IndexWriter> { ) -> crate::Result<IndexWriter> {
if heap_size_in_bytes_per_thread < HEAP_SIZE_MIN { if heap_size_in_bytes_per_thread < HEAP_SIZE_MIN {
let err_msg = format!( let err_msg = format!(
"The heap size per thread needs to be at least {}.", "The heap size per thread needs to be at least {}.",
@@ -320,12 +344,17 @@ impl IndexWriter {
Ok(index_writer) Ok(index_writer)
} }
fn drop_sender(&mut self) {
let (sender, _receiver) = channel::bounded(1);
mem::replace(&mut self.operation_sender, sender);
}
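`drop_sender` closes the document pipeline without consuming `self`: swapping the real sender for one tied to a throw-away channel drops the last sender of the original channel, so every worker's receive loop observes a disconnect and exits. A standalone sketch of the same trick with a plain crossbeam channel and a hypothetical worker (not tantivy's indexing thread):

```rust
use crossbeam::channel;
use std::mem;
use std::thread;

struct Pipeline {
    sender: channel::Sender<String>,
    worker: Option<thread::JoinHandle<usize>>,
}

impl Pipeline {
    fn new() -> Pipeline {
        let (sender, receiver) = channel::unbounded::<String>();
        let worker = thread::spawn(move || {
            // The iterator ends as soon as every sender has been dropped.
            receiver.iter().count()
        });
        Pipeline { sender, worker: Some(worker) }
    }

    /// Swaps the real sender for one from a throw-away channel.
    fn drop_sender(&mut self) {
        let (dummy_sender, _dummy_receiver) = channel::bounded(1);
        // Dropping the previous sender here closes the worker's channel.
        let _previous_sender = mem::replace(&mut self.sender, dummy_sender);
    }
}

fn main() {
    let mut pipeline = Pipeline::new();
    pipeline.sender.send("doc".to_string()).unwrap();
    pipeline.drop_sender();
    // The worker drains the queue and exits because the channel is now closed.
    let processed = pipeline.worker.take().unwrap().join().unwrap();
    assert_eq!(processed, 1);
}
```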
/// If there are some merging threads, blocks until they all finish their work and /// If there are some merging threads, blocks until they all finish their work and
/// then drop the `IndexWriter`. /// then drop the `IndexWriter`.
pub fn wait_merging_threads(mut self) -> Result<()> { pub fn wait_merging_threads(mut self) -> crate::Result<()> {
// this will stop the indexing thread, // this will stop the indexing thread,
// dropping the last reference to the segment_updater. // dropping the last reference to the segment_updater.
drop(self.operation_sender); self.drop_sender();
let former_workers_handles = mem::replace(&mut self.workers_join_handle, vec![]); let former_workers_handles = mem::replace(&mut self.workers_join_handle, vec![]);
for join_handle in former_workers_handles { for join_handle in former_workers_handles {
@@ -336,7 +365,6 @@ impl IndexWriter {
TantivyError::ErrorInThread("Error in indexing worker thread.".into()) TantivyError::ErrorInThread("Error in indexing worker thread.".into())
})?; })?;
} }
drop(self.workers_join_handle);
let result = self let result = self
.segment_updater .segment_updater
@@ -351,10 +379,10 @@ impl IndexWriter {
} }
#[doc(hidden)] #[doc(hidden)]
pub fn add_segment(&mut self, segment_meta: SegmentMeta) { pub fn add_segment(&self, segment_meta: SegmentMeta) -> crate::Result<()> {
let delete_cursor = self.delete_queue.cursor(); let delete_cursor = self.delete_queue.cursor();
let segment_entry = SegmentEntry::new(segment_meta, delete_cursor, None); let segment_entry = SegmentEntry::new(segment_meta, delete_cursor, None);
self.segment_updater.add_segment(segment_entry); block_on(self.segment_updater.schedule_add_segment(segment_entry))
} }
/// Creates a new segment. /// Creates a new segment.
@@ -371,7 +399,7 @@ impl IndexWriter {
/// Spawns a new worker thread for indexing. /// Spawns a new worker thread for indexing.
/// The thread consumes documents from the pipeline. /// The thread consumes documents from the pipeline.
fn add_indexing_worker(&mut self) -> Result<()> { fn add_indexing_worker(&mut self) -> crate::Result<()> {
let document_receiver_clone = self.operation_receiver.clone(); let document_receiver_clone = self.operation_receiver.clone();
let mut segment_updater = self.segment_updater.clone(); let mut segment_updater = self.segment_updater.clone();
@@ -379,7 +407,7 @@ impl IndexWriter {
let mem_budget = self.heap_size_in_bytes_per_thread; let mem_budget = self.heap_size_in_bytes_per_thread;
let index = self.index.clone(); let index = self.index.clone();
let join_handle: JoinHandle<Result<()>> = thread::Builder::new() let join_handle: JoinHandle<crate::Result<()>> = thread::Builder::new()
.name(format!("thrd-tantivy-index{}", self.worker_id)) .name(format!("thrd-tantivy-index{}", self.worker_id))
.spawn(move || { .spawn(move || {
loop { loop {
@@ -408,7 +436,7 @@ impl IndexWriter {
let segment = index.new_segment(); let segment = index.new_segment();
index_documents( index_documents(
mem_budget, mem_budget,
&segment, segment,
&mut document_iterator, &mut document_iterator,
&mut segment_updater, &mut segment_updater,
delete_cursor.clone(), delete_cursor.clone(),
@@ -425,22 +453,23 @@ impl IndexWriter {
self.segment_updater.get_merge_policy() self.segment_updater.get_merge_policy()
} }
/// Set the merge policy. /// Setter for the merge policy.
pub fn set_merge_policy(&self, merge_policy: Box<dyn MergePolicy>) { pub fn set_merge_policy(&self, merge_policy: Box<dyn MergePolicy>) {
self.segment_updater.set_merge_policy(merge_policy); self.segment_updater.set_merge_policy(merge_policy);
} }
fn start_workers(&mut self) -> Result<()> { fn start_workers(&mut self) -> crate::Result<()> {
for _ in 0..self.num_threads { for _ in 0..self.num_threads {
self.add_indexing_worker()?; self.add_indexing_worker()?;
} }
Ok(()) Ok(())
} }
/// Detects and removes the files that /// Detects and removes the files that are not used by the index anymore.
/// are not used by the index anymore. pub fn garbage_collect_files(
pub fn garbage_collect_files(&mut self) -> Result<()> { &self,
self.segment_updater.garbage_collect_files().wait() ) -> impl Future<Output = crate::Result<GarbageCollectionResult>> {
self.segment_updater.schedule_garbage_collect()
} }
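`garbage_collect_files` now returns a future instead of blocking internally; callers that want the old synchronous behaviour drive it with the same `futures::executor::block_on` used throughout this changeset. A hedged usage sketch in the style of the doctests in this file (in-RAM index, one worker thread; the exact shape of `GarbageCollectionResult` is not shown in this diff, so the result is ignored):

```rust
use futures::executor::block_on;
use tantivy::schema::{Schema, TEXT};
use tantivy::{doc, Index};

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let text = schema_builder.add_text_field("text", TEXT);
    let index = Index::create_in_ram(schema_builder.build());
    let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?;
    index_writer.add_document(doc!(text => "hello"));
    index_writer.commit()?;
    // Garbage collection is now scheduled on the segment updater's executor;
    // block on the returned future to keep the old synchronous behaviour.
    let _gc_result = block_on(index_writer.garbage_collect_files())?;
    Ok(())
}
```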
/// Deletes all documents from the index /// Deletes all documents from the index
@@ -450,12 +479,10 @@ impl IndexWriter {
/// by clearing and resubmitting necessary documents /// by clearing and resubmitting necessary documents
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::query::QueryParser;
/// use tantivy::collector::TopDocs; /// use tantivy::collector::TopDocs;
/// use tantivy::query::QueryParser;
/// use tantivy::schema::*; /// use tantivy::schema::*;
/// use tantivy::Index; /// use tantivy::{doc, Index};
/// ///
/// fn main() -> tantivy::Result<()> { /// fn main() -> tantivy::Result<()> {
/// let mut schema_builder = Schema::builder(); /// let mut schema_builder = Schema::builder();
@@ -481,7 +508,7 @@ impl IndexWriter {
/// Ok(()) /// Ok(())
/// } /// }
/// ``` /// ```
pub fn delete_all_documents(&mut self) -> Result<Opstamp> { pub fn delete_all_documents(&self) -> crate::Result<Opstamp> {
// Delete segments // Delete segments
self.segment_updater.remove_all_segments(); self.segment_updater.remove_all_segments();
// Return new stamp - reverted stamp // Return new stamp - reverted stamp
@@ -495,8 +522,10 @@ impl IndexWriter {
pub fn merge( pub fn merge(
&mut self, &mut self,
segment_ids: &[SegmentId], segment_ids: &[SegmentId],
) -> Result<impl Future<Item = SegmentMeta, Error = Canceled>> { ) -> impl Future<Output = crate::Result<SegmentMeta>> {
self.segment_updater.start_merge(segment_ids) let merge_operation = self.segment_updater.make_merge_operation(segment_ids);
let segment_updater = self.segment_updater.clone();
async move { segment_updater.start_merge(merge_operation)?.await }
} }
/// Closes the current document channel send. /// Closes the current document channel send.
@@ -522,13 +551,8 @@ impl IndexWriter {
/// state as it was after the last commit. /// state as it was after the last commit.
/// ///
/// The opstamp at the last commit is returned. /// The opstamp at the last commit is returned.
pub fn rollback(&mut self) -> Result<Opstamp> { pub fn rollback(&mut self) -> crate::Result<Opstamp> {
info!("Rolling back to opstamp {}", self.committed_opstamp); info!("Rolling back to opstamp {}", self.committed_opstamp);
self.rollback_impl()
}
/// Private, implementation of rollback
fn rollback_impl(&mut self) -> Result<Opstamp> {
// marks the segment updater as killed. From now on, all // marks the segment updater as killed. From now on, all
// segment updates will be ignored. // segment updates will be ignored.
self.segment_updater.kill(); self.segment_updater.kill();
@@ -584,7 +608,7 @@ impl IndexWriter {
/// It is also possible to add a payload to the `commit` /// It is also possible to add a payload to the `commit`
/// using this API. /// using this API.
/// See [`PreparedCommit::set_payload()`](PreparedCommit.html) /// See [`PreparedCommit::set_payload()`](PreparedCommit.html)
pub fn prepare_commit(&mut self) -> Result<PreparedCommit<'_>> { pub fn prepare_commit(&mut self) -> crate::Result<PreparedCommit> {
// Here, because we join all of the worker threads, // Here, because we join all of the worker threads,
// all of the segment update for this commit have been // all of the segment update for this commit have been
// sent. // sent.
@@ -631,7 +655,7 @@ impl IndexWriter {
/// Commit returns the `opstamp` of the last document /// Commit returns the `opstamp` of the last document
/// that made it in the commit. /// that made it in the commit.
/// ///
pub fn commit(&mut self) -> Result<Opstamp> { pub fn commit(&mut self) -> crate::Result<Opstamp> {
self.prepare_commit()?.commit() self.prepare_commit()?.commit()
} }
@@ -672,9 +696,6 @@ impl IndexWriter {
/// The opstamp is an increasing `u64` that can /// The opstamp is an increasing `u64` that can
/// be used by the client to align commits with its own /// be used by the client to align commits with its own
/// document queue. /// document queue.
///
/// Currently it represents the number of documents that
/// have been added since the creation of the index.
pub fn add_document(&self, document: Document) -> Opstamp { pub fn add_document(&self, document: Document) -> Opstamp {
let opstamp = self.stamper.stamp(); let opstamp = self.stamper.stamp();
let add_operation = AddOperation { opstamp, document }; let add_operation = AddOperation { opstamp, document };
@@ -748,6 +769,16 @@ impl IndexWriter {
} }
} }
impl Drop for IndexWriter {
fn drop(&mut self) {
self.segment_updater.kill();
self.drop_sender();
for work in self.workers_join_handle.drain(..) {
let _ = work.join();
}
}
}
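The new `Drop` impl makes an abandoned `IndexWriter` shut down in order: kill the segment updater, close the document channel via `drop_sender`, then join any remaining worker threads while ignoring their results. A std-only sketch of the join-on-drop half, with hypothetical worker threads standing in for the indexing workers:

```rust
use std::thread::{self, JoinHandle};

/// Sketch: a struct owning worker threads that must be joined when dropped.
struct Workers {
    handles: Vec<JoinHandle<()>>,
}

impl Drop for Workers {
    fn drop(&mut self) {
        // `drain` empties `handles`, so an explicit shutdown method could
        // reuse the same loop without double-joining.
        for handle in self.handles.drain(..) {
            // Ignore panicked workers, mirroring `let _ = work.join();` above.
            let _ = handle.join();
        }
    }
}

fn main() {
    let handles = (0..4)
        .map(|i| thread::spawn(move || println!("worker {} done", i)))
        .collect();
    let _workers = Workers { handles };
    // Dropping `_workers` at the end of `main` joins all four threads.
}
```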
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
@@ -757,11 +788,10 @@ mod tests {
use crate::error::*; use crate::error::*;
use crate::indexer::NoMergePolicy; use crate::indexer::NoMergePolicy;
use crate::query::TermQuery; use crate::query::TermQuery;
use crate::schema::{self, IndexRecordOption}; use crate::schema::{self, IndexRecordOption, STRING};
use crate::Index; use crate::Index;
use crate::ReloadPolicy; use crate::ReloadPolicy;
use crate::Term; use crate::Term;
use fail;
#[test] #[test]
fn test_operations_group() { fn test_operations_group() {
@@ -778,6 +808,46 @@ mod tests {
assert_eq!(batch_opstamp1, 2u64); assert_eq!(batch_opstamp1, 2u64);
} }
#[test]
fn test_no_need_to_rewrite_delete_file_if_no_new_deletes() {
let mut schema_builder = schema::Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer.add_document(doc!(text_field => "hello1"));
index_writer.add_document(doc!(text_field => "hello2"));
assert!(index_writer.commit().is_ok());
let reader = index.reader().unwrap();
let searcher = reader.searcher();
assert_eq!(searcher.segment_readers().len(), 1);
assert_eq!(searcher.segment_reader(0u32).num_deleted_docs(), 0);
index_writer.delete_term(Term::from_field_text(text_field, "hello1"));
assert!(index_writer.commit().is_ok());
assert!(reader.reload().is_ok());
let searcher = reader.searcher();
assert_eq!(searcher.segment_readers().len(), 1);
assert_eq!(searcher.segment_reader(0u32).num_deleted_docs(), 1);
let previous_delete_opstamp = index.load_metas().unwrap().segments[0].delete_opstamp();
// All docs containing hello1 have already been removed.
// We should not update the delete meta.
index_writer.delete_term(Term::from_field_text(text_field, "hello1"));
assert!(index_writer.commit().is_ok());
assert!(reader.reload().is_ok());
let searcher = reader.searcher();
assert_eq!(searcher.segment_readers().len(), 1);
assert_eq!(searcher.segment_reader(0u32).num_deleted_docs(), 1);
let after_delete_opstamp = index.load_metas().unwrap().segments[0].delete_opstamp();
assert_eq!(after_delete_opstamp, previous_delete_opstamp);
}
#[test] #[test]
fn test_ordered_batched_operations() { fn test_ordered_batched_operations() {
// * one delete for `doc!(field=>"a")` // * one delete for `doc!(field=>"a")`
@@ -872,7 +942,7 @@ mod tests {
let index_writer = index.writer(3_000_000).unwrap(); let index_writer = index.writer(3_000_000).unwrap();
assert_eq!( assert_eq!(
format!("{:?}", index_writer.get_merge_policy()), format!("{:?}", index_writer.get_merge_policy()),
"LogMergePolicy { min_merge_size: 8, min_layer_size: 10000, \ "LogMergePolicy { min_merge_size: 8, max_merge_size: 10000000, min_layer_size: 10000, \
level_log_size: 0.75 }" level_log_size: 0.75 }"
); );
let merge_policy = Box::new(NoMergePolicy::default()); let merge_policy = Box::new(NoMergePolicy::default());
@@ -1184,4 +1254,15 @@ mod tests {
assert!(commit_again.is_ok()); assert!(commit_again.is_ok());
} }
#[test]
fn test_index_doc_missing_field() {
let mut schema_builder = schema::Schema::builder();
let idfield = schema_builder.add_text_field("id", STRING);
schema_builder.add_text_field("optfield", STRING);
let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer.add_document(doc!(idfield=>"myid"));
let commit = index_writer.commit();
assert!(commit.is_ok());
}
} }


@@ -6,12 +6,14 @@ use std::f64;
const DEFAULT_LEVEL_LOG_SIZE: f64 = 0.75; const DEFAULT_LEVEL_LOG_SIZE: f64 = 0.75;
const DEFAULT_MIN_LAYER_SIZE: u32 = 10_000; const DEFAULT_MIN_LAYER_SIZE: u32 = 10_000;
const DEFAULT_MIN_MERGE_SIZE: usize = 8; const DEFAULT_MIN_MERGE_SIZE: usize = 8;
const DEFAULT_MAX_MERGE_SIZE: usize = 10_000_000;
/// `LogMergePolicy` tries to merge segments that have a similar number of /// `LogMergePolicy` tries to merge segments that have a similar number of
/// documents. /// documents.
#[derive(Debug, Clone)] #[derive(Debug, Clone)]
pub struct LogMergePolicy { pub struct LogMergePolicy {
min_merge_size: usize, min_merge_size: usize,
max_merge_size: usize,
min_layer_size: u32, min_layer_size: u32,
level_log_size: f64, level_log_size: f64,
} }
@@ -26,6 +28,12 @@ impl LogMergePolicy {
self.min_merge_size = min_merge_size; self.min_merge_size = min_merge_size;
} }
/// Set the maximum number of docs in a segment for it to be considered for
/// merging.
pub fn set_max_merge_size(&mut self, max_merge_size: usize) {
self.max_merge_size = max_merge_size;
}
/// Set the minimum segment size under which all segments belong /// Set the minimum segment size under which all segments belong
/// to the same level. /// to the same level.
pub fn set_min_layer_size(&mut self, min_layer_size: u32) { pub fn set_min_layer_size(&mut self, min_layer_size: u32) {
@@ -53,6 +61,7 @@ impl MergePolicy for LogMergePolicy {
let mut size_sorted_tuples = segments let mut size_sorted_tuples = segments
.iter() .iter()
.map(SegmentMeta::num_docs) .map(SegmentMeta::num_docs)
.filter(|s| s <= &(self.max_merge_size as u32))
.enumerate() .enumerate()
.collect::<Vec<(usize, u32)>>(); .collect::<Vec<(usize, u32)>>();
@@ -86,6 +95,7 @@ impl Default for LogMergePolicy {
fn default() -> LogMergePolicy { fn default() -> LogMergePolicy {
LogMergePolicy { LogMergePolicy {
min_merge_size: DEFAULT_MIN_MERGE_SIZE, min_merge_size: DEFAULT_MIN_MERGE_SIZE,
max_merge_size: DEFAULT_MAX_MERGE_SIZE,
min_layer_size: DEFAULT_MIN_LAYER_SIZE, min_layer_size: DEFAULT_MIN_LAYER_SIZE,
level_log_size: DEFAULT_LEVEL_LOG_SIZE, level_log_size: DEFAULT_LEVEL_LOG_SIZE,
} }
@@ -104,6 +114,7 @@ mod tests {
fn test_merge_policy() -> LogMergePolicy { fn test_merge_policy() -> LogMergePolicy {
let mut log_merge_policy = LogMergePolicy::default(); let mut log_merge_policy = LogMergePolicy::default();
log_merge_policy.set_min_merge_size(3); log_merge_policy.set_min_merge_size(3);
log_merge_policy.set_max_merge_size(100_000);
log_merge_policy.set_min_layer_size(2); log_merge_policy.set_min_layer_size(2);
log_merge_policy log_merge_policy
} }
@@ -141,11 +152,11 @@ mod tests {
create_random_segment_meta(10), create_random_segment_meta(10),
create_random_segment_meta(10), create_random_segment_meta(10),
create_random_segment_meta(10), create_random_segment_meta(10),
create_random_segment_meta(1000), create_random_segment_meta(1_000),
create_random_segment_meta(1000), create_random_segment_meta(1_000),
create_random_segment_meta(1000), create_random_segment_meta(1_000),
create_random_segment_meta(10000), create_random_segment_meta(10_000),
create_random_segment_meta(10000), create_random_segment_meta(10_000),
create_random_segment_meta(10), create_random_segment_meta(10),
create_random_segment_meta(10), create_random_segment_meta(10),
create_random_segment_meta(10), create_random_segment_meta(10),
@@ -182,4 +193,19 @@ mod tests {
let result_list = test_merge_policy().compute_merge_candidates(&test_input); let result_list = test_merge_policy().compute_merge_candidates(&test_input);
assert_eq!(result_list.len(), 1); assert_eq!(result_list.len(), 1);
} }
#[test]
fn test_large_merge_segments() {
let test_input = vec![
create_random_segment_meta(1_000_000),
create_random_segment_meta(100_001),
create_random_segment_meta(100_000),
create_random_segment_meta(100_000),
create_random_segment_meta(100_000),
];
let result_list = test_merge_policy().compute_merge_candidates(&test_input);
// Do not include large segments
assert_eq!(result_list.len(), 1);
assert_eq!(result_list[0].0.len(), 3)
}
} }
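The new `max_merge_size` knob makes the policy skip segments that already exceed a document-count threshold (10,000,000 by default), so very large segments are no longer folded back into merge candidates. A hedged configuration sketch mirroring how the tests in this changeset install a custom policy; the `tantivy::merge_policy::LogMergePolicy` import path is an assumption (inside the crate, the tests reach it as `crate::indexer::LogMergePolicy`):

```rust
use tantivy::merge_policy::LogMergePolicy; // assumed public re-export path
use tantivy::schema::{Schema, TEXT};
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    schema_builder.add_text_field("text", TEXT);
    let index = Index::create_in_ram(schema_builder.build());
    let index_writer = index.writer_with_num_threads(1, 3_000_000)?;

    // Merge aggressively, but leave segments above 100_000 documents alone.
    let mut merge_policy = LogMergePolicy::default();
    merge_policy.set_min_merge_size(2);
    merge_policy.set_max_merge_size(100_000);
    index_writer.set_merge_policy(Box::new(merge_policy));
    Ok(())
}
```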


@@ -2,14 +2,23 @@ use crate::Opstamp;
use crate::SegmentId; use crate::SegmentId;
use census::{Inventory, TrackedObject}; use census::{Inventory, TrackedObject};
use std::collections::HashSet; use std::collections::HashSet;
use std::ops::Deref;
#[derive(Default)] #[derive(Default)]
pub struct MergeOperationInventory(Inventory<InnerMergeOperation>); pub(crate) struct MergeOperationInventory(Inventory<InnerMergeOperation>);
impl Deref for MergeOperationInventory {
type Target = Inventory<InnerMergeOperation>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl MergeOperationInventory { impl MergeOperationInventory {
pub fn segment_in_merge(&self) -> HashSet<SegmentId> { pub fn segment_in_merge(&self) -> HashSet<SegmentId> {
let mut segment_in_merge = HashSet::default(); let mut segment_in_merge = HashSet::default();
for merge_op in self.0.list() { for merge_op in self.list() {
for &segment_id in &merge_op.segment_ids { for &segment_id in &merge_op.segment_ids {
segment_in_merge.insert(segment_id); segment_in_merge.insert(segment_id);
} }
@@ -35,13 +44,13 @@ pub struct MergeOperation {
inner: TrackedObject<InnerMergeOperation>, inner: TrackedObject<InnerMergeOperation>,
} }
struct InnerMergeOperation { pub(crate) struct InnerMergeOperation {
target_opstamp: Opstamp, target_opstamp: Opstamp,
segment_ids: Vec<SegmentId>, segment_ids: Vec<SegmentId>,
} }
impl MergeOperation { impl MergeOperation {
pub fn new( pub(crate) fn new(
inventory: &MergeOperationInventory, inventory: &MergeOperationInventory,
target_opstamp: Opstamp, target_opstamp: Opstamp,
segment_ids: Vec<SegmentId>, segment_ids: Vec<SegmentId>,
@@ -51,7 +60,7 @@ impl MergeOperation {
segment_ids, segment_ids,
}; };
MergeOperation { MergeOperation {
inner: inventory.0.track(inner_merge_operation), inner: inventory.track(inner_merge_operation),
} }
} }
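The `Deref` impl above lets the `MergeOperationInventory` newtype borrow the whole `census::Inventory` API (`list`, `track`, ...) without re-wrapping each method, while still adding its own helpers such as `segment_in_merge`. A std-only sketch of the same newtype-plus-`Deref` pattern, with a `Vec<u32>` standing in for the inventory:

```rust
use std::ops::Deref;

/// Newtype that adds a domain-specific helper while `Deref`-ing to the
/// wrapped collection for everything else.
#[derive(Default)]
struct SegmentIdInventory(Vec<u32>);

impl Deref for SegmentIdInventory {
    type Target = Vec<u32>;
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

impl SegmentIdInventory {
    fn max_segment_id(&self) -> Option<u32> {
        // `iter()` resolves through `Deref` down to the slice iterator.
        self.iter().copied().max()
    }
}

fn main() {
    let inventory = SegmentIdInventory(vec![3, 7, 1]);
    assert_eq!(inventory.len(), 3); // `Vec::len` reached through `Deref`
    assert_eq!(inventory.max_segment_id(), Some(7));
}
```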


@@ -21,9 +21,6 @@ use crate::store::StoreWriter;
use crate::termdict::TermMerger; use crate::termdict::TermMerger;
use crate::termdict::TermOrdinal; use crate::termdict::TermOrdinal;
use crate::DocId; use crate::DocId;
use crate::Result;
use crate::TantivyError;
use itertools::Itertools;
use std::cmp; use std::cmp;
use std::collections::HashMap; use std::collections::HashMap;
@@ -72,11 +69,11 @@ fn compute_min_max_val(
Some(delete_bitset) => { Some(delete_bitset) => {
// some deleted documents, // some deleted documents,
// we need to recompute the max / min // we need to recompute the max / min
(0..max_doc) crate::common::minmax(
.filter(|doc_id| delete_bitset.is_alive(*doc_id)) (0..max_doc)
.map(|doc_id| u64_reader.get(doc_id)) .filter(|doc_id| delete_bitset.is_alive(*doc_id))
.minmax() .map(|doc_id| u64_reader.get(doc_id)),
.into_option() )
} }
None => { None => {
// no deleted documents, // no deleted documents,
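The hunk above drops itertools' `.minmax().into_option()` in favour of a small in-crate helper. A single-pass sketch of what such a `minmax` helper can look like; the real one lives in `crate::common` and its exact signature is not shown in this diff:

```rust
/// Returns the minimum and maximum of an iterator in one pass,
/// or `None` if the iterator is empty.
fn minmax<I, T>(mut values: I) -> Option<(T, T)>
where
    I: Iterator<Item = T>,
    T: Copy + Ord,
{
    let first = values.next()?;
    let (mut min, mut max) = (first, first);
    for value in values {
        if value < min {
            min = value;
        }
        if value > max {
            max = value;
        }
    }
    Some((min, max))
}

fn main() {
    assert_eq!(minmax([3u64, 1, 7, 2].iter().copied()), Some((1, 7)));
    assert_eq!(minmax(std::iter::empty::<u64>()), None);
}
```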
@@ -143,7 +140,7 @@ impl DeltaComputer {
} }
impl IndexMerger { impl IndexMerger {
pub fn open(schema: Schema, segments: &[Segment]) -> Result<IndexMerger> { pub fn open(schema: Schema, segments: &[Segment]) -> crate::Result<IndexMerger> {
let mut readers = vec![]; let mut readers = vec![];
let mut max_doc: u32 = 0u32; let mut max_doc: u32 = 0u32;
for segment in segments { for segment in segments {
@@ -159,7 +156,7 @@ impl IndexMerger {
which exceeds the limit {}.", which exceeds the limit {}.",
max_doc, MAX_DOC_LIMIT max_doc, MAX_DOC_LIMIT
); );
return Err(TantivyError::InvalidArgument(err_msg)); return Err(crate::TantivyError::InvalidArgument(err_msg));
} }
Ok(IndexMerger { Ok(IndexMerger {
schema, schema,
@@ -168,7 +165,10 @@ impl IndexMerger {
}) })
} }
fn write_fieldnorms(&self, fieldnorms_serializer: &mut FieldNormsSerializer) -> Result<()> { fn write_fieldnorms(
&self,
fieldnorms_serializer: &mut FieldNormsSerializer,
) -> crate::Result<()> {
let fields = FieldNormsWriter::fields_with_fieldnorm(&self.schema); let fields = FieldNormsWriter::fields_with_fieldnorm(&self.schema);
let mut fieldnorms_data = Vec::with_capacity(self.max_doc as usize); let mut fieldnorms_data = Vec::with_capacity(self.max_doc as usize);
for field in fields { for field in fields {
@@ -189,9 +189,8 @@ impl IndexMerger {
&self, &self,
fast_field_serializer: &mut FastFieldSerializer, fast_field_serializer: &mut FastFieldSerializer,
mut term_ord_mappings: HashMap<Field, TermOrdinalMapping>, mut term_ord_mappings: HashMap<Field, TermOrdinalMapping>,
) -> Result<()> { ) -> crate::Result<()> {
for (field_id, field_entry) in self.schema.fields().iter().enumerate() { for (field, field_entry) in self.schema.fields() {
let field = Field(field_id as u32);
let field_type = field_entry.field_type(); let field_type = field_entry.field_type();
match *field_type { match *field_type {
FieldType::HierarchicalFacet => { FieldType::HierarchicalFacet => {
@@ -235,7 +234,7 @@ impl IndexMerger {
&self, &self,
field: Field, field: Field,
fast_field_serializer: &mut FastFieldSerializer, fast_field_serializer: &mut FastFieldSerializer,
) -> Result<()> { ) -> crate::Result<()> {
let mut u64_readers = vec![]; let mut u64_readers = vec![];
let mut min_value = u64::max_value(); let mut min_value = u64::max_value();
let mut max_value = u64::min_value(); let mut max_value = u64::min_value();
@@ -285,7 +284,7 @@ impl IndexMerger {
&self, &self,
field: Field, field: Field,
fast_field_serializer: &mut FastFieldSerializer, fast_field_serializer: &mut FastFieldSerializer,
) -> Result<()> { ) -> crate::Result<()> {
let mut total_num_vals = 0u64; let mut total_num_vals = 0u64;
let mut u64s_readers: Vec<MultiValueIntFastFieldReader<u64>> = Vec::new(); let mut u64s_readers: Vec<MultiValueIntFastFieldReader<u64>> = Vec::new();
@@ -332,7 +331,7 @@ impl IndexMerger {
field: Field, field: Field,
term_ordinal_mappings: &TermOrdinalMapping, term_ordinal_mappings: &TermOrdinalMapping,
fast_field_serializer: &mut FastFieldSerializer, fast_field_serializer: &mut FastFieldSerializer,
) -> Result<()> { ) -> crate::Result<()> {
// Multifastfield consists of 2 fastfields. // Multifastfield consists of 2 fastfields.
// The first serves as an index into the second one and is strictly increasing. // The first serves as an index into the second one and is strictly increasing.
// The second contains the actual values. // The second contains the actual values.
@@ -372,7 +371,7 @@ impl IndexMerger {
&self, &self,
field: Field, field: Field,
fast_field_serializer: &mut FastFieldSerializer, fast_field_serializer: &mut FastFieldSerializer,
) -> Result<()> { ) -> crate::Result<()> {
// Multifastfield consists of 2 fastfields. // Multifastfield consists of 2 fastfields.
// The first serves as an index into the second one and is strictly increasing. // The first serves as an index into the second one and is strictly increasing.
// The second contains the actual values. // The second contains the actual values.
@@ -437,7 +436,7 @@ impl IndexMerger {
&self, &self,
field: Field, field: Field,
fast_field_serializer: &mut FastFieldSerializer, fast_field_serializer: &mut FastFieldSerializer,
) -> Result<()> { ) -> crate::Result<()> {
let mut total_num_vals = 0u64; let mut total_num_vals = 0u64;
let mut bytes_readers: Vec<BytesFastFieldReader> = Vec::new(); let mut bytes_readers: Vec<BytesFastFieldReader> = Vec::new();
@@ -493,7 +492,7 @@ impl IndexMerger {
indexed_field: Field, indexed_field: Field,
field_type: &FieldType, field_type: &FieldType,
serializer: &mut InvertedIndexSerializer, serializer: &mut InvertedIndexSerializer,
) -> Result<Option<TermOrdinalMapping>> { ) -> crate::Result<Option<TermOrdinalMapping>> {
let mut positions_buffer: Vec<u32> = Vec::with_capacity(1_000); let mut positions_buffer: Vec<u32> = Vec::with_capacity(1_000);
let mut delta_computer = DeltaComputer::new(); let mut delta_computer = DeltaComputer::new();
let field_readers = self let field_readers = self
@@ -647,24 +646,21 @@ impl IndexMerger {
fn write_postings( fn write_postings(
&self, &self,
serializer: &mut InvertedIndexSerializer, serializer: &mut InvertedIndexSerializer,
) -> Result<HashMap<Field, TermOrdinalMapping>> { ) -> crate::Result<HashMap<Field, TermOrdinalMapping>> {
let mut term_ordinal_mappings = HashMap::new(); let mut term_ordinal_mappings = HashMap::new();
for (field_ord, field_entry) in self.schema.fields().iter().enumerate() { for (field, field_entry) in self.schema.fields() {
if field_entry.is_indexed() { if field_entry.is_indexed() {
let indexed_field = Field(field_ord as u32); if let Some(term_ordinal_mapping) =
if let Some(term_ordinal_mapping) = self.write_postings_for_field( self.write_postings_for_field(field, field_entry.field_type(), serializer)?
indexed_field, {
field_entry.field_type(), term_ordinal_mappings.insert(field, term_ordinal_mapping);
serializer,
)? {
term_ordinal_mappings.insert(indexed_field, term_ordinal_mapping);
} }
} }
} }
Ok(term_ordinal_mappings) Ok(term_ordinal_mappings)
} }
fn write_storable_fields(&self, store_writer: &mut StoreWriter) -> Result<()> { fn write_storable_fields(&self, store_writer: &mut StoreWriter) -> crate::Result<()> {
for reader in &self.readers { for reader in &self.readers {
let store_reader = reader.get_store_reader(); let store_reader = reader.get_store_reader();
if reader.num_deleted_docs() > 0 { if reader.num_deleted_docs() > 0 {
@@ -681,7 +677,7 @@ impl IndexMerger {
} }
impl SerializableSegment for IndexMerger { impl SerializableSegment for IndexMerger {
fn write(&self, mut serializer: SegmentSerializer) -> Result<u32> { fn write(&self, mut serializer: SegmentSerializer) -> crate::Result<u32> {
let term_ord_mappings = self.write_postings(serializer.get_postings_serializer())?; let term_ord_mappings = self.write_postings(serializer.get_postings_serializer())?;
self.write_fieldnorms(serializer.get_fieldnorms_serializer())?; self.write_fieldnorms(serializer.get_fieldnorms_serializer())?;
self.write_fast_fields(serializer.get_fast_field_serializer(), term_ord_mappings)?; self.write_fast_fields(serializer.get_fast_field_serializer(), term_ord_mappings)?;
@@ -713,7 +709,7 @@ mod tests {
use crate::IndexWriter; use crate::IndexWriter;
use crate::Searcher; use crate::Searcher;
use byteorder::{BigEndian, ReadBytesExt, WriteBytesExt}; use byteorder::{BigEndian, ReadBytesExt, WriteBytesExt};
use futures::Future; use futures::executor::block_on;
use std::io::Cursor; use std::io::Cursor;
#[test] #[test]
@@ -796,11 +792,7 @@ mod tests {
.searchable_segment_ids() .searchable_segment_ids()
.expect("Searchable segments failed."); .expect("Searchable segments failed.");
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer block_on(index_writer.merge(&segment_ids)).expect("Merging failed");
.merge(&segment_ids)
.expect("Failed to initiate merge")
.wait()
.expect("Merging failed");
index_writer.wait_merging_threads().unwrap(); index_writer.wait_merging_threads().unwrap();
} }
{ {
@@ -1044,11 +1036,7 @@ mod tests {
let segment_ids = index let segment_ids = index
.searchable_segment_ids() .searchable_segment_ids()
.expect("Searchable segments failed."); .expect("Searchable segments failed.");
index_writer block_on(index_writer.merge(&segment_ids)).expect("Merging failed");
.merge(&segment_ids)
.expect("Failed to initiate merge")
.wait()
.expect("Merging failed");
reader.reload().unwrap(); reader.reload().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
assert_eq!(searcher.segment_readers().len(), 1); assert_eq!(searcher.segment_readers().len(), 1);
@@ -1143,11 +1131,7 @@ mod tests {
let segment_ids = index let segment_ids = index
.searchable_segment_ids() .searchable_segment_ids()
.expect("Searchable segments failed."); .expect("Searchable segments failed.");
index_writer block_on(index_writer.merge(&segment_ids)).expect("Merging failed");
.merge(&segment_ids)
.expect("Failed to initiate merge")
.wait()
.expect("Merging failed");
reader.reload().unwrap(); reader.reload().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
@@ -1281,11 +1265,7 @@ mod tests {
.searchable_segment_ids() .searchable_segment_ids()
.expect("Searchable segments failed."); .expect("Searchable segments failed.");
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer block_on(index_writer.merge(&segment_ids)).expect("Merging failed");
.merge(&segment_ids)
.expect("Failed to initiate merge")
.wait()
.expect("Merging failed");
index_writer.wait_merging_threads().unwrap(); index_writer.wait_merging_threads().unwrap();
reader.reload().unwrap(); reader.reload().unwrap();
test_searcher( test_searcher(
@@ -1340,11 +1320,7 @@ mod tests {
let segment_ids = index let segment_ids = index
.searchable_segment_ids() .searchable_segment_ids()
.expect("Searchable segments failed."); .expect("Searchable segments failed.");
index_writer block_on(index_writer.merge(&segment_ids)).expect("Merging failed");
.merge(&segment_ids)
.expect("Failed to initiate merge")
.wait()
.expect("Merging failed");
reader.reload().unwrap(); reader.reload().unwrap();
// commit has not been called yet. The document should still be // commit has not been called yet. The document should still be
// there. // there.
@@ -1365,22 +1341,18 @@ mod tests {
let mut doc = Document::default(); let mut doc = Document::default();
doc.add_u64(int_field, 1); doc.add_u64(int_field, 1);
index_writer.add_document(doc.clone()); index_writer.add_document(doc.clone());
index_writer.commit().expect("commit failed"); assert!(index_writer.commit().is_ok());
index_writer.add_document(doc); index_writer.add_document(doc);
index_writer.commit().expect("commit failed"); assert!(index_writer.commit().is_ok());
index_writer.delete_term(Term::from_field_u64(int_field, 1)); index_writer.delete_term(Term::from_field_u64(int_field, 1));
let segment_ids = index let segment_ids = index
.searchable_segment_ids() .searchable_segment_ids()
.expect("Searchable segments failed."); .expect("Searchable segments failed.");
index_writer assert!(block_on(index_writer.merge(&segment_ids)).is_ok());
.merge(&segment_ids)
.expect("Failed to initiate merge")
.wait()
.expect("Merging failed");
// assert delete has not been committed // assert delete has not been committed
reader.reload().expect("failed to load searcher 1"); assert!(reader.reload().is_ok());
let searcher = reader.searcher(); let searcher = reader.searcher();
assert_eq!(searcher.num_docs(), 2); assert_eq!(searcher.num_docs(), 2);
@@ -1419,12 +1391,12 @@ mod tests {
index_doc(&mut index_writer, &[1, 5]); index_doc(&mut index_writer, &[1, 5]);
index_doc(&mut index_writer, &[3]); index_doc(&mut index_writer, &[3]);
index_doc(&mut index_writer, &[17]); index_doc(&mut index_writer, &[17]);
index_writer.commit().expect("committed"); assert!(index_writer.commit().is_ok());
index_doc(&mut index_writer, &[20]); index_doc(&mut index_writer, &[20]);
index_writer.commit().expect("committed"); assert!(index_writer.commit().is_ok());
index_doc(&mut index_writer, &[28, 27]); index_doc(&mut index_writer, &[28, 27]);
index_doc(&mut index_writer, &[1_000]); index_doc(&mut index_writer, &[1_000]);
index_writer.commit().expect("committed"); assert!(index_writer.commit().is_ok());
} }
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
@@ -1456,15 +1428,6 @@ mod tests {
assert_eq!(&vals, &[17]); assert_eq!(&vals, &[17]);
} }
println!(
"{:?}",
searcher
.segment_readers()
.iter()
.map(|reader| reader.max_doc())
.collect::<Vec<_>>()
);
{ {
let segment = searcher.segment_reader(1u32); let segment = searcher.segment_reader(1u32);
let ff_reader = segment.fast_fields().u64s(int_field).unwrap(); let ff_reader = segment.fast_fields().u64s(int_field).unwrap();
@@ -1488,27 +1451,13 @@ mod tests {
.searchable_segment_ids() .searchable_segment_ids()
.expect("Searchable segments failed."); .expect("Searchable segments failed.");
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer assert!(block_on(index_writer.merge(&segment_ids)).is_ok());
.merge(&segment_ids) assert!(index_writer.wait_merging_threads().is_ok());
.expect("Failed to initiate merge")
.wait()
.expect("Merging failed");
index_writer
.wait_merging_threads()
.expect("Wait for merging threads");
} }
reader.reload().expect("Load searcher"); assert!(reader.reload().is_ok());
{ {
let searcher = reader.searcher(); let searcher = reader.searcher();
println!(
"{:?}",
searcher
.segment_readers()
.iter()
.map(|reader| reader.max_doc())
.collect::<Vec<_>>()
);
let segment = searcher.segment_reader(0u32); let segment = searcher.segment_reader(0u32);
let ff_reader = segment.fast_fields().u64s(int_field).unwrap(); let ff_reader = segment.fast_fields().u64s(int_field).unwrap();
@@ -1543,4 +1492,46 @@ mod tests {
assert_eq!(&vals, &[20]); assert_eq!(&vals, &[20]);
} }
} }
#[test]
fn merges_f64_fast_fields_correctly() -> crate::Result<()> {
let mut builder = schema::SchemaBuilder::new();
let fast_multi = IntOptions::default().set_fast(Cardinality::MultiValues);
let field = builder.add_f64_field("f64", schema::FAST);
let multi_field = builder.add_f64_field("f64s", fast_multi);
let index = Index::create_in_ram(builder.build());
let mut writer = index.writer_with_num_threads(1, 3_000_000)?;
// Make sure we'll attempt to merge every created segment
let mut policy = crate::indexer::LogMergePolicy::default();
policy.set_min_merge_size(2);
writer.set_merge_policy(Box::new(policy));
for i in 0..100 {
let mut doc = Document::new();
doc.add_f64(field, 42.0);
doc.add_f64(multi_field, 0.24);
doc.add_f64(multi_field, 0.27);
writer.add_document(doc);
if i % 5 == 0 {
writer.commit()?;
}
}
writer.commit()?;
writer.wait_merging_threads()?;
// If a merging thread fails, we should end up with more
// than one segment here
assert_eq!(1, index.searchable_segments()?.len());
Ok(())
}
} }


@@ -18,7 +18,7 @@ mod stamper;
pub use self::index_writer::IndexWriter; pub use self::index_writer::IndexWriter;
pub use self::log_merge_policy::LogMergePolicy; pub use self::log_merge_policy::LogMergePolicy;
pub use self::merge_operation::{MergeOperation, MergeOperationInventory}; pub use self::merge_operation::MergeOperation;
pub use self::merge_policy::{MergeCandidate, MergePolicy, NoMergePolicy}; pub use self::merge_policy::{MergeCandidate, MergePolicy, NoMergePolicy};
pub use self::prepared_commit::PreparedCommit; pub use self::prepared_commit::PreparedCommit;
pub use self::segment_entry::SegmentEntry; pub use self::segment_entry::SegmentEntry;
@@ -28,3 +28,26 @@ pub use self::segment_writer::SegmentWriter;
/// Alias for the default merge policy, which is the `LogMergePolicy`. /// Alias for the default merge policy, which is the `LogMergePolicy`.
pub type DefaultMergePolicy = LogMergePolicy; pub type DefaultMergePolicy = LogMergePolicy;
#[cfg(test)]
mod tests {
use crate::schema::{self, Schema};
use crate::{Index, Term};
#[test]
fn test_advance_delete_bug() {
let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_from_tempdir(schema_builder.build()).unwrap();
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
// there must be one deleted document in the segment
index_writer.add_document(doc!(text_field=>"b"));
index_writer.delete_term(Term::from_field_text(text_field, "b"));
// we need enough data to trigger the bug (at least 32 documents)
for _ in 0..32 {
index_writer.add_document(doc!(text_field=>"c"));
}
index_writer.commit().unwrap();
index_writer.commit().unwrap();
}
}


@@ -19,6 +19,8 @@ pub struct AddOperation {
/// UserOperation is an enum type that encapsulates other operation types. /// UserOperation is an enum type that encapsulates other operation types.
#[derive(Eq, PartialEq, Debug)] #[derive(Eq, PartialEq, Debug)]
pub enum UserOperation { pub enum UserOperation {
/// Add operation
Add(Document), Add(Document),
/// Delete operation
Delete(Term), Delete(Term),
} }


@@ -1,6 +1,6 @@
use super::IndexWriter; use super::IndexWriter;
use crate::Opstamp; use crate::Opstamp;
use crate::Result; use futures::executor::block_on;
/// A prepared commit /// A prepared commit
pub struct PreparedCommit<'a> { pub struct PreparedCommit<'a> {
@@ -26,15 +26,17 @@ impl<'a> PreparedCommit<'a> {
self.payload = Some(payload.to_string()) self.payload = Some(payload.to_string())
} }
pub fn abort(self) -> Result<Opstamp> { pub fn abort(self) -> crate::Result<Opstamp> {
self.index_writer.rollback() self.index_writer.rollback()
} }
pub fn commit(self) -> Result<Opstamp> { pub fn commit(self) -> crate::Result<Opstamp> {
info!("committing {}", self.opstamp); info!("committing {}", self.opstamp);
self.index_writer let _ = block_on(
.segment_updater() self.index_writer
.commit(self.opstamp, self.payload)?; .segment_updater()
.schedule_commit(self.opstamp, self.payload),
);
Ok(self.opstamp) Ok(self.opstamp)
} }
} }
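`PreparedCommit::commit` now drives the segment updater through `futures::executor::block_on`, part of the broader move in this changeset from `futures-cpupool` to the futures 0.3 executor. A standalone sketch of that executor setup, assuming the `futures` crate with its `thread-pool` feature (as the new imports suggest) and a hypothetical commit task in place of the segment updater:

```rust
use futures::channel::oneshot;
use futures::executor::{block_on, ThreadPoolBuilder};

fn main() {
    // A small pool plays the role of the segment updater's executor.
    let pool = ThreadPoolBuilder::new()
        .pool_size(1)
        .name_prefix("thrd-tantivy-sketch-")
        .create()
        .expect("failed to spawn thread pool");

    let (sender, receiver) = oneshot::channel::<u64>();
    pool.spawn_ok(async move {
        // Pretend this is the segment updater applying a scheduled commit.
        let committed_opstamp = 42u64;
        let _ = sender.send(committed_opstamp);
    });

    // Synchronous callers, like `PreparedCommit::commit`, simply block on
    // the receiving end of the oneshot channel.
    let opstamp = block_on(receiver).expect("sender dropped");
    assert_eq!(opstamp, 42);
}
```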


@@ -1,7 +1,7 @@
use crate::common::BitSet;
use crate::core::SegmentId; use crate::core::SegmentId;
use crate::core::SegmentMeta; use crate::core::SegmentMeta;
use crate::indexer::delete_queue::DeleteCursor; use crate::indexer::delete_queue::DeleteCursor;
use bit_set::BitSet;
use std::fmt; use std::fmt;
/// A segment entry describes the state of /// A segment entry describes the state of


@@ -4,7 +4,6 @@ use crate::core::SegmentMeta;
use crate::error::TantivyError; use crate::error::TantivyError;
use crate::indexer::delete_queue::DeleteCursor; use crate::indexer::delete_queue::DeleteCursor;
use crate::indexer::SegmentEntry; use crate::indexer::SegmentEntry;
use crate::Result as TantivyResult;
use std::collections::hash_set::HashSet; use std::collections::hash_set::HashSet;
use std::fmt::{self, Debug, Formatter}; use std::fmt::{self, Debug, Formatter};
use std::sync::RwLock; use std::sync::RwLock;
@@ -16,6 +15,28 @@ struct SegmentRegisters {
committed: SegmentRegister, committed: SegmentRegister,
} }
#[derive(PartialEq, Eq)]
pub(crate) enum SegmentsStatus {
Committed,
Uncommitted,
}
impl SegmentRegisters {
/// Check if all the segments are committed or uncommitted.
///
/// If some segment is missing, or the segments are not all in the same state (this should
/// not happen if tantivy is used correctly), returns `None`.
fn segments_status(&self, segment_ids: &[SegmentId]) -> Option<SegmentsStatus> {
if self.uncommitted.contains_all(segment_ids) {
Some(SegmentsStatus::Uncommitted)
} else if self.committed.contains_all(segment_ids) {
Some(SegmentsStatus::Committed)
} else {
None
}
}
}
/// The segment manager stores the list of segments /// The segment manager stores the list of segments
/// as well as their state. /// as well as their state.
/// ///
@@ -27,7 +48,7 @@ pub struct SegmentManager {
} }
impl Debug for SegmentManager { impl Debug for SegmentManager {
fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), fmt::Error> { fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
let lock = self.read(); let lock = self.read();
write!( write!(
f, f,
@@ -123,7 +144,7 @@ impl SegmentManager {
/// Returns an error if some segments are missing, or if /// Returns an error if some segments are missing, or if
/// the `segment_ids` are not either all committed or all /// the `segment_ids` are not either all committed or all
/// uncommitted. /// uncommitted.
pub fn start_merge(&self, segment_ids: &[SegmentId]) -> TantivyResult<Vec<SegmentEntry>> { pub fn start_merge(&self, segment_ids: &[SegmentId]) -> crate::Result<Vec<SegmentEntry>> {
let registers_lock = self.read(); let registers_lock = self.read();
let mut segment_entries = vec![]; let mut segment_entries = vec![];
if registers_lock.uncommitted.contains_all(segment_ids) { if registers_lock.uncommitted.contains_all(segment_ids) {
@@ -153,33 +174,35 @@ impl SegmentManager {
let mut registers_lock = self.write(); let mut registers_lock = self.write();
registers_lock.uncommitted.add_segment_entry(segment_entry); registers_lock.uncommitted.add_segment_entry(segment_entry);
} }
// Replace a list of segments with their equivalent merged segment.
pub fn end_merge( //
// Returns whether the merged segments were committed or uncommitted.
pub(crate) fn end_merge(
&self, &self,
before_merge_segment_ids: &[SegmentId], before_merge_segment_ids: &[SegmentId],
after_merge_segment_entry: SegmentEntry, after_merge_segment_entry: SegmentEntry,
) { ) -> crate::Result<SegmentsStatus> {
let mut registers_lock = self.write(); let mut registers_lock = self.write();
let target_register: &mut SegmentRegister = { let segments_status = registers_lock
if registers_lock .segments_status(before_merge_segment_ids)
.uncommitted .ok_or_else(|| {
.contains_all(before_merge_segment_ids)
{
&mut registers_lock.uncommitted
} else if registers_lock
.committed
.contains_all(before_merge_segment_ids)
{
&mut registers_lock.committed
} else {
warn!("couldn't find segment in SegmentManager"); warn!("couldn't find segment in SegmentManager");
return; crate::TantivyError::InvalidArgument(
} "The segments that were merged could not be found in the SegmentManager. \
This is not necessarily a bug, and can happen after a rollback for instance."
.to_string(),
)
})?;
let target_register: &mut SegmentRegister = match segments_status {
SegmentsStatus::Uncommitted => &mut registers_lock.uncommitted,
SegmentsStatus::Committed => &mut registers_lock.committed,
}; };
for segment_id in before_merge_segment_ids { for segment_id in before_merge_segment_ids {
target_register.remove_segment(segment_id); target_register.remove_segment(segment_id);
} }
target_register.add_segment_entry(after_merge_segment_entry); target_register.add_segment_entry(after_merge_segment_entry);
Ok(segments_status)
} }
pub fn committed_segment_metas(&self) -> Vec<SegmentMeta> { pub fn committed_segment_metas(&self) -> Vec<SegmentMeta> {


@@ -134,5 +134,4 @@ mod tests {
} }
assert_eq!(segment_ids(&segment_register), vec![segment_id_merged]); assert_eq!(segment_ids(&segment_register), vec![segment_id_merged]);
} }
} }


@@ -1,5 +1,3 @@
use crate::Result;
use crate::core::Segment; use crate::core::Segment;
use crate::core::SegmentComponent; use crate::core::SegmentComponent;
use crate::fastfield::FastFieldSerializer; use crate::fastfield::FastFieldSerializer;
@@ -18,7 +16,7 @@ pub struct SegmentSerializer {
impl SegmentSerializer { impl SegmentSerializer {
/// Creates a new `SegmentSerializer`. /// Creates a new `SegmentSerializer`.
pub fn for_segment(segment: &mut Segment) -> Result<SegmentSerializer> { pub fn for_segment(segment: &mut Segment) -> crate::Result<SegmentSerializer> {
let store_write = segment.open_write(SegmentComponent::STORE)?; let store_write = segment.open_write(SegmentComponent::STORE)?;
let fast_field_write = segment.open_write(SegmentComponent::FASTFIELDS)?; let fast_field_write = segment.open_write(SegmentComponent::FASTFIELDS)?;
@@ -57,7 +55,7 @@ impl SegmentSerializer {
} }
/// Finalize the segment serialization. /// Finalize the segment serialization.
pub fn close(self) -> Result<()> { pub fn close(self) -> crate::Result<()> {
self.fast_field_serializer.close()?; self.fast_field_serializer.close()?;
self.postings_serializer.close()?; self.postings_serializer.close()?;
self.store_writer.close()?; self.store_writer.close()?;


@@ -6,39 +6,34 @@ use crate::core::SegmentId;
use crate::core::SegmentMeta; use crate::core::SegmentMeta;
use crate::core::SerializableSegment; use crate::core::SerializableSegment;
use crate::core::META_FILEPATH; use crate::core::META_FILEPATH;
use crate::directory::{Directory, DirectoryClone}; use crate::directory::{Directory, DirectoryClone, GarbageCollectionResult};
use crate::error::TantivyError;
use crate::indexer::delete_queue::DeleteCursor; use crate::indexer::delete_queue::DeleteCursor;
use crate::indexer::index_writer::advance_deletes; use crate::indexer::index_writer::advance_deletes;
use crate::indexer::merge_operation::MergeOperationInventory; use crate::indexer::merge_operation::MergeOperationInventory;
use crate::indexer::merger::IndexMerger; use crate::indexer::merger::IndexMerger;
use crate::indexer::segment_manager::SegmentsStatus;
use crate::indexer::stamper::Stamper; use crate::indexer::stamper::Stamper;
use crate::indexer::MergeOperation;
use crate::indexer::SegmentEntry; use crate::indexer::SegmentEntry;
use crate::indexer::SegmentSerializer; use crate::indexer::SegmentSerializer;
use crate::indexer::{DefaultMergePolicy, MergePolicy}; use crate::indexer::{DefaultMergePolicy, MergePolicy};
use crate::indexer::{MergeCandidate, MergeOperation};
use crate::schema::Schema; use crate::schema::Schema;
use crate::Opstamp; use crate::Opstamp;
use crate::Result; use futures::channel::oneshot;
use futures::oneshot; use futures::executor::{ThreadPool, ThreadPoolBuilder};
use futures::sync::oneshot::Receiver; use futures::future::Future;
use futures::Future; use futures::future::TryFutureExt;
use futures_cpupool::Builder as CpuPoolBuilder;
use futures_cpupool::CpuFuture;
use futures_cpupool::CpuPool;
use serde_json; use serde_json;
use std::borrow::BorrowMut; use std::borrow::BorrowMut;
use std::collections::HashMap;
use std::collections::HashSet; use std::collections::HashSet;
use std::io::Write; use std::io::Write;
use std::mem; use std::ops::Deref;
use std::ops::DerefMut;
use std::path::PathBuf; use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering}; use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc; use std::sync::Arc;
use std::sync::RwLock; use std::sync::RwLock;
use std::thread;
use std::thread::JoinHandle; const NUM_MERGE_THREADS: usize = 4;
/// Save the index meta file. /// Save the index meta file.
/// This operation is atomic : /// This operation is atomic :
@@ -49,7 +44,7 @@ use std::thread::JoinHandle;
/// and flushed. /// and flushed.
/// ///
/// This method is not part of tantivy's public API /// This method is not part of tantivy's public API
pub fn save_new_metas(schema: Schema, directory: &mut dyn Directory) -> Result<()> { pub fn save_new_metas(schema: Schema, directory: &mut dyn Directory) -> crate::Result<()> {
save_metas( save_metas(
&IndexMeta { &IndexMeta {
segments: Vec::new(), segments: Vec::new(),
@@ -70,7 +65,7 @@ pub fn save_new_metas(schema: Schema, directory: &mut dyn Directory) -> Result<(
/// and flushed. /// and flushed.
/// ///
/// This method is not part of tantivy's public API /// This method is not part of tantivy's public API
fn save_metas(metas: &IndexMeta, directory: &mut dyn Directory) -> Result<()> { fn save_metas(metas: &IndexMeta, directory: &mut dyn Directory) -> crate::Result<()> {
info!("save metas"); info!("save metas");
let mut buffer = serde_json::to_vec_pretty(metas)?; let mut buffer = serde_json::to_vec_pretty(metas)?;
// Just adding a new line at the end of the buffer. // Just adding a new line at the end of the buffer.
@@ -89,21 +84,38 @@ fn save_metas(metas: &IndexMeta, directory: &mut dyn Directory) -> Result<()> {
// We voluntarily pass a merge_operation ref to guarantee that // We voluntarily pass a merge_operation ref to guarantee that
// the merge_operation is alive during the process // the merge_operation is alive during the process
#[derive(Clone)] #[derive(Clone)]
pub struct SegmentUpdater(Arc<InnerSegmentUpdater>); pub(crate) struct SegmentUpdater(Arc<InnerSegmentUpdater>);
fn perform_merge( impl Deref for SegmentUpdater {
merge_operation: &MergeOperation, type Target = InnerSegmentUpdater;
#[inline]
fn deref(&self) -> &Self::Target {
&self.0
}
}
async fn garbage_collect_files(
segment_updater: SegmentUpdater,
) -> crate::Result<GarbageCollectionResult> {
info!("Running garbage collection");
let mut index = segment_updater.index.clone();
index
.directory_mut()
.garbage_collect(move || segment_updater.list_files())
}
/// Merges the list of segments given in `segment_entries`.
/// This function runs in the calling thread and is computationally expensive.
fn merge(
index: &Index, index: &Index,
mut segment_entries: Vec<SegmentEntry>, mut segment_entries: Vec<SegmentEntry>,
) -> Result<SegmentEntry> { target_opstamp: Opstamp,
let target_opstamp = merge_operation.target_opstamp(); ) -> crate::Result<SegmentEntry> {
// first we need to apply deletes to our segment. // first we need to apply deletes to our segment.
let mut merged_segment = index.new_segment(); let mut merged_segment = index.new_segment();
// TODO add logging // First we apply all of the deletes to the merged segment, up to the target opstamp.
let schema = index.schema();
for segment_entry in &mut segment_entries { for segment_entry in &mut segment_entries {
let segment = index.segment(segment_entry.meta().clone()); let segment = index.segment(segment_entry.meta().clone());
advance_deletes(segment, segment_entry, target_opstamp)?; advance_deletes(segment, segment_entry, target_opstamp)?;
@@ -117,24 +129,19 @@ fn perform_merge(
.collect(); .collect();
// An IndexMerger is like a "view" of our merged segments. // An IndexMerger is like a "view" of our merged segments.
let merger: IndexMerger = IndexMerger::open(schema, &segments[..])?; let merger: IndexMerger = IndexMerger::open(index.schema(), &segments[..])?;
// ... we just serialize this index merger in our new segment
// to merge the two segments.
// ... we just serialize this index merger in our new segment to merge the two segments.
let segment_serializer = SegmentSerializer::for_segment(&mut merged_segment)?; let segment_serializer = SegmentSerializer::for_segment(&mut merged_segment)?;
let num_docs = merger.write(segment_serializer)?; let num_docs = merger.write(segment_serializer)?;
let segment_meta = index let segment_meta = index.new_segment_meta(merged_segment.id(), num_docs);
.inventory()
.new_segment_meta(merged_segment.id(), num_docs);
let after_merge_segment_entry = SegmentEntry::new(segment_meta.clone(), delete_cursor, None); Ok(SegmentEntry::new(segment_meta, delete_cursor, None))
Ok(after_merge_segment_entry)
} }
struct InnerSegmentUpdater { pub(crate) struct InnerSegmentUpdater {
// we keep a copy of the current active IndexMeta to // we keep a copy of the current active IndexMeta to
// avoid loading the file every time we need it in the // avoid loading the file every time we need it in the
// `SegmentUpdater`. // `SegmentUpdater`.
@@ -142,12 +149,12 @@ struct InnerSegmentUpdater {
// This should be up to date as all update happen through // This should be up to date as all update happen through
// the unique active `SegmentUpdater`. // the unique active `SegmentUpdater`.
active_metas: RwLock<Arc<IndexMeta>>, active_metas: RwLock<Arc<IndexMeta>>,
pool: CpuPool, pool: ThreadPool,
merge_thread_pool: ThreadPool,
index: Index, index: Index,
segment_manager: SegmentManager, segment_manager: SegmentManager,
merge_policy: RwLock<Arc<Box<dyn MergePolicy>>>, merge_policy: RwLock<Arc<Box<dyn MergePolicy>>>,
merging_thread_id: AtomicUsize,
merging_threads: RwLock<HashMap<usize, JoinHandle<Result<()>>>>,
killed: AtomicBool, killed: AtomicBool,
stamper: Stamper, stamper: Stamper,
merge_operations: MergeOperationInventory, merge_operations: MergeOperationInventory,
@@ -158,22 +165,35 @@ impl SegmentUpdater {
index: Index, index: Index,
stamper: Stamper, stamper: Stamper,
delete_cursor: &DeleteCursor, delete_cursor: &DeleteCursor,
) -> Result<SegmentUpdater> { ) -> crate::Result<SegmentUpdater> {
let segments = index.searchable_segment_metas()?; let segments = index.searchable_segment_metas()?;
let segment_manager = SegmentManager::from_segments(segments, delete_cursor); let segment_manager = SegmentManager::from_segments(segments, delete_cursor);
let pool = CpuPoolBuilder::new() let pool = ThreadPoolBuilder::new()
.name_prefix("segment_updater") .name_prefix("segment_updater")
.pool_size(1) .pool_size(1)
.create(); .create()
.map_err(|_| {
crate::TantivyError::SystemError(
"Failed to spawn segment updater thread".to_string(),
)
})?;
let merge_thread_pool = ThreadPoolBuilder::new()
.name_prefix("merge_thread")
.pool_size(NUM_MERGE_THREADS)
.create()
.map_err(|_| {
crate::TantivyError::SystemError(
"Failed to spawn segment merging thread".to_string(),
)
})?;
let index_meta = index.load_metas()?; let index_meta = index.load_metas()?;
Ok(SegmentUpdater(Arc::new(InnerSegmentUpdater { Ok(SegmentUpdater(Arc::new(InnerSegmentUpdater {
active_metas: RwLock::new(Arc::new(index_meta)), active_metas: RwLock::new(Arc::new(index_meta)),
pool, pool,
merge_thread_pool,
index, index,
segment_manager, segment_manager,
merge_policy: RwLock::new(Arc::new(Box::new(DefaultMergePolicy::default()))), merge_policy: RwLock::new(Arc::new(Box::new(DefaultMergePolicy::default()))),
merging_thread_id: AtomicUsize::default(),
merging_threads: RwLock::new(HashMap::new()),
killed: AtomicBool::new(false), killed: AtomicBool::new(false),
stamper, stamper,
merge_operations: Default::default(), merge_operations: Default::default(),
@@ -181,67 +201,82 @@ impl SegmentUpdater {
} }
pub fn get_merge_policy(&self) -> Arc<Box<dyn MergePolicy>> { pub fn get_merge_policy(&self) -> Arc<Box<dyn MergePolicy>> {
self.0.merge_policy.read().unwrap().clone() self.merge_policy.read().unwrap().clone()
} }
pub fn set_merge_policy(&self, merge_policy: Box<dyn MergePolicy>) { pub fn set_merge_policy(&self, merge_policy: Box<dyn MergePolicy>) {
let arc_merge_policy = Arc::new(merge_policy); let arc_merge_policy = Arc::new(merge_policy);
*self.0.merge_policy.write().unwrap() = arc_merge_policy; *self.merge_policy.write().unwrap() = arc_merge_policy;
} }
fn get_merging_thread_id(&self) -> usize { fn schedule_future<T: 'static + Send, F: Future<Output = crate::Result<T>> + 'static + Send>(
self.0.merging_thread_id.fetch_add(1, Ordering::SeqCst)
}
fn run_async<T: 'static + Send, F: 'static + Send + FnOnce(SegmentUpdater) -> T>(
&self, &self,
f: F, f: F,
) -> CpuFuture<T, TantivyError> { ) -> impl Future<Output = crate::Result<T>> {
let me_clone = self.clone(); let (sender, receiver) = oneshot::channel();
self.0.pool.spawn_fn(move || Ok(f(me_clone))) if self.is_alive() {
self.pool.spawn_ok(async move {
let _ = sender.send(f.await);
});
} else {
let _ = sender.send(Err(crate::TantivyError::SystemError(
"Segment updater killed".to_string(),
)));
}
receiver.unwrap_or_else(|_| {
let err_msg =
"A segment_updater future did not success. This should never happen.".to_string();
Err(crate::TantivyError::SystemError(err_msg))
})
} }
pub fn add_segment(&self, segment_entry: SegmentEntry) -> bool { pub fn schedule_add_segment(
self.run_async(|segment_updater| { &self,
segment_updater.0.segment_manager.add_segment(segment_entry); segment_entry: SegmentEntry,
segment_updater.consider_merge_options(); ) -> impl Future<Output = crate::Result<()>> {
true let segment_updater = self.clone();
self.schedule_future(async move {
segment_updater.segment_manager.add_segment(segment_entry);
segment_updater.consider_merge_options().await;
Ok(())
}) })
.forget();
true
} }
/// Orders `SegmentManager` to remove all segments /// Orders `SegmentManager` to remove all segments
pub(crate) fn remove_all_segments(&self) { pub(crate) fn remove_all_segments(&self) {
self.0.segment_manager.remove_all_segments(); self.segment_manager.remove_all_segments();
} }
pub fn kill(&mut self) { pub fn kill(&mut self) {
self.0.killed.store(true, Ordering::Release); self.killed.store(true, Ordering::Release);
} }
pub fn is_alive(&self) -> bool { pub fn is_alive(&self) -> bool {
!self.0.killed.load(Ordering::Acquire) !self.killed.load(Ordering::Acquire)
} }
/// Apply deletes up to the target opstamp to all segments. /// Apply deletes up to the target opstamp to all segments.
/// ///
/// The method returns copies of the segment entries, /// The method returns copies of the segment entries,
/// updated with the delete information. /// updated with the delete information.
fn purge_deletes(&self, target_opstamp: Opstamp) -> Result<Vec<SegmentEntry>> { fn purge_deletes(&self, target_opstamp: Opstamp) -> crate::Result<Vec<SegmentEntry>> {
let mut segment_entries = self.0.segment_manager.segment_entries(); let mut segment_entries = self.segment_manager.segment_entries();
for segment_entry in &mut segment_entries { for segment_entry in &mut segment_entries {
let segment = self.0.index.segment(segment_entry.meta().clone()); let segment = self.index.segment(segment_entry.meta().clone());
advance_deletes(segment, segment_entry, target_opstamp)?; advance_deletes(segment, segment_entry, target_opstamp)?;
} }
Ok(segment_entries) Ok(segment_entries)
} }
pub fn save_metas(&self, opstamp: Opstamp, commit_message: Option<String>) { pub fn save_metas(
&self,
opstamp: Opstamp,
commit_message: Option<String>,
) -> crate::Result<()> {
if self.is_alive() { if self.is_alive() {
let index = &self.0.index; let index = &self.index;
let directory = index.directory(); let directory = index.directory();
let mut commited_segment_metas = self.0.segment_manager.committed_segment_metas(); let mut commited_segment_metas = self.segment_manager.committed_segment_metas();
// We sort segment_readers by number of documents. // We sort segment_readers by number of documents.
// This is a heuristic to make multithreading more efficient. // This is a heuristic to make multithreading more efficient.
@@ -263,16 +298,18 @@ impl SegmentUpdater {
opstamp, opstamp,
payload: commit_message, payload: commit_message,
}; };
save_metas(&index_meta, directory.box_clone().borrow_mut()) // TODO add context to the error.
.expect("Could not save metas."); save_metas(&index_meta, directory.box_clone().borrow_mut())?;
self.store_meta(&index_meta); self.store_meta(&index_meta);
} }
Ok(())
} }
pub fn garbage_collect_files(&self) -> CpuFuture<(), TantivyError> { pub fn schedule_garbage_collect(
self.run_async(move |segment_updater| { &self,
segment_updater.garbage_collect_files_exec(); ) -> impl Future<Output = crate::Result<GarbageCollectionResult>> {
}) let garbage_collect_future = garbage_collect_files(self.clone());
self.schedule_future(garbage_collect_future)
} }
/// List the files that are useful to the index. /// List the files that are useful to the index.
@@ -280,148 +317,130 @@ impl SegmentUpdater {
/// This does not include lock files, or files that are obsolete /// This does not include lock files, or files that are obsolete
/// but have not yet been deleted by the garbage collector. /// but have not yet been deleted by the garbage collector.
fn list_files(&self) -> HashSet<PathBuf> { fn list_files(&self) -> HashSet<PathBuf> {
let mut files = HashSet::new(); let mut files: HashSet<PathBuf> = self
.index
.list_all_segment_metas()
.into_iter()
.flat_map(|segment_meta| segment_meta.list_files())
.collect();
files.insert(META_FILEPATH.to_path_buf()); files.insert(META_FILEPATH.to_path_buf());
for segment_meta in self.0.index.inventory().all() {
files.extend(segment_meta.list_files());
}
files files
} }
fn garbage_collect_files_exec(&self) { pub fn schedule_commit(
info!("Running garbage collection"); &self,
let mut index = self.0.index.clone(); opstamp: Opstamp,
index.directory_mut().garbage_collect(|| self.list_files()); payload: Option<String>,
} ) -> impl Future<Output = crate::Result<()>> {
let segment_updater: SegmentUpdater = self.clone();
pub fn commit(&self, opstamp: Opstamp, payload: Option<String>) -> Result<()> { self.schedule_future(async move {
self.run_async(move |segment_updater| { let segment_entries = segment_updater.purge_deletes(opstamp)?;
if segment_updater.is_alive() { segment_updater.segment_manager.commit(segment_entries);
let segment_entries = segment_updater segment_updater.save_metas(opstamp, payload)?;
.purge_deletes(opstamp) let _ = garbage_collect_files(segment_updater.clone()).await;
.expect("Failed purge deletes"); segment_updater.consider_merge_options().await;
segment_updater.0.segment_manager.commit(segment_entries); Ok(())
segment_updater.save_metas(opstamp, payload);
segment_updater.garbage_collect_files_exec();
segment_updater.consider_merge_options();
}
}) })
.wait()
}
pub fn start_merge(&self, segment_ids: &[SegmentId]) -> Result<Receiver<SegmentMeta>> {
let commit_opstamp = self.load_metas().opstamp;
let merge_operation = MergeOperation::new(
&self.0.merge_operations,
commit_opstamp,
segment_ids.to_vec(),
);
self.run_async(move |segment_updater| segment_updater.start_merge_impl(merge_operation))
.wait()?
} }
fn store_meta(&self, index_meta: &IndexMeta) { fn store_meta(&self, index_meta: &IndexMeta) {
*self.0.active_metas.write().unwrap() = Arc::new(index_meta.clone()); *self.active_metas.write().unwrap() = Arc::new(index_meta.clone());
}
fn load_metas(&self) -> Arc<IndexMeta> {
self.0.active_metas.read().unwrap().clone()
} }
fn load_metas(&self) -> Arc<IndexMeta> {
self.active_metas.read().unwrap().clone()
}
pub(crate) fn make_merge_operation(&self, segment_ids: &[SegmentId]) -> MergeOperation {
let commit_opstamp = self.load_metas().opstamp;
MergeOperation::new(&self.merge_operations, commit_opstamp, segment_ids.to_vec())
}
// Starts a merge operation. This function will block until the merge operation is effectively
// started. Note that it does not wait for the merge to terminate.
// The calling thread should not be blocked for a long time, as this only involves waiting for the
// `SegmentUpdater` queue, which in turn only contains lightweight operations.
//
// The merge itself happens on a different thread.
//
// When successful, this function returns a `Future` for a `Result<SegmentMeta>` that represents
// the actual outcome of the merge operation.
//
// It returns an error if for some reason the merge operation could not be started.
//
// At this point an error is not necessarily the sign of a malfunction.
// (e.g. A rollback could have happened between the instant when the merge operation was
// suggested and the moment when it ended up being executed.)
//
// `segment_ids` is required to be non-empty. // `segment_ids` is required to be non-empty.
fn start_merge_impl(&self, merge_operation: MergeOperation) -> Result<Receiver<SegmentMeta>> { pub fn start_merge(
&self,
merge_operation: MergeOperation,
) -> crate::Result<impl Future<Output = crate::Result<SegmentMeta>>> {
assert!( assert!(
!merge_operation.segment_ids().is_empty(), !merge_operation.segment_ids().is_empty(),
"Segment_ids cannot be empty." "Segment_ids cannot be empty."
); );
let segment_updater_clone = self.clone(); let segment_updater = self.clone();
let segment_entries: Vec<SegmentEntry> = self let segment_entries: Vec<SegmentEntry> = self
.0
.segment_manager .segment_manager
.start_merge(merge_operation.segment_ids())?; .start_merge(merge_operation.segment_ids())?;
// let segment_ids_vec = merge_operation.segment_ids.to_vec(); info!("Starting merge - {:?}", merge_operation.segment_ids());
let merging_thread_id = self.get_merging_thread_id(); let (merging_future_send, merging_future_recv) =
info!( oneshot::channel::<crate::Result<SegmentMeta>>();
"Starting merge thread #{} - {:?}",
merging_thread_id,
merge_operation.segment_ids()
);
let (merging_future_send, merging_future_recv) = oneshot();
// first we need to apply deletes to our segment. self.merge_thread_pool.spawn_ok(async move {
let merging_join_handle = thread::Builder::new() // The fact that `merge_operation` is moved here is important.
.name(format!("mergingthread-{}", merging_thread_id)) // Its lifetime is used to track how many merging thread are currently running,
.spawn(move || { // as well as which segment is currently in merge and therefore should not be
// first we need to apply deletes to our segment. // candidate for another merge.
let merge_result = perform_merge( match merge(
&merge_operation, &segment_updater.index,
&segment_updater_clone.0.index, segment_entries,
segment_entries, merge_operation.target_opstamp(),
); ) {
Ok(after_merge_segment_entry) => {
match merge_result { let segment_meta = segment_updater
Ok(after_merge_segment_entry) => { .end_merge(merge_operation, after_merge_segment_entry)
let merged_segment_meta = after_merge_segment_entry.meta().clone(); .await;
segment_updater_clone let _send_result = merging_future_send.send(segment_meta);
.end_merge(merge_operation, after_merge_segment_entry) }
.expect("Segment updater thread is corrupted."); Err(e) => {
warn!(
// the future may fail if the listener of the oneshot future "Merge of {:?} was cancelled: {:?}",
// has been destroyed. merge_operation.segment_ids().to_vec(),
// e
// This is not a problem here, so we just ignore any );
// possible error. // ... cancel merge
let _merging_future_res = merging_future_send.send(merged_segment_meta); if cfg!(test) {
} panic!("Merge failed.");
Err(e) => {
warn!(
"Merge of {:?} was cancelled: {:?}",
merge_operation.segment_ids(),
e
);
// ... cancel merge
if cfg!(test) {
panic!("Merge failed.");
}
// As `merge_operation` will be dropped, the segment in merge state will
// be available for merge again.
// `merging_future_send` will be dropped, sending an error to the future.
} }
} }
segment_updater_clone }
.0 });
.merging_threads
.write() Ok(merging_future_recv
.unwrap() .unwrap_or_else(|_| Err(crate::TantivyError::SystemError("Merge failed".to_string()))))
.remove(&merging_thread_id);
Ok(())
})
.expect("Failed to spawn a thread.");
self.0
.merging_threads
.write()
.unwrap()
.insert(merging_thread_id, merging_join_handle);
Ok(merging_future_recv)
} }
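A short usage sketch of the behaviour described in the comment above `start_merge`: build a `MergeOperation`, start the merge (which returns quickly), then wait on the returned future for the resulting `SegmentMeta`. The `segment_updater` and `segment_ids` bindings are assumed for illustration.

    use futures::executor::block_on;

    fn merge_and_wait(
        segment_updater: &SegmentUpdater,
        segment_ids: &[SegmentId],
    ) -> crate::Result<SegmentMeta> {
        // Stamps the operation and marks the segments as "in merge".
        let merge_operation = segment_updater.make_merge_operation(segment_ids);
        // Returns almost immediately; the merge itself runs on the merge thread pool.
        let merge_future = segment_updater.start_merge(merge_operation)?;
        // Blocking here is only for the sake of the example.
        block_on(merge_future)
    }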
fn consider_merge_options(&self) { async fn consider_merge_options(&self) {
let merge_segment_ids: HashSet<SegmentId> = self.0.merge_operations.segment_in_merge(); let merge_segment_ids: HashSet<SegmentId> = self.merge_operations.segment_in_merge();
let (committed_segments, uncommitted_segments) = let (committed_segments, uncommitted_segments) =
get_mergeable_segments(&merge_segment_ids, &self.0.segment_manager); get_mergeable_segments(&merge_segment_ids, &self.segment_manager);
// Committed segments cannot be merged with uncommitted_segments. // Committed segments cannot be merged with uncommitted_segments.
// We therefore consider merges using these two sets of segments independently. // We therefore consider merges using these two sets of segments independently.
let merge_policy = self.get_merge_policy(); let merge_policy = self.get_merge_policy();
let current_opstamp = self.0.stamper.stamp(); let current_opstamp = self.stamper.stamp();
let mut merge_candidates: Vec<MergeOperation> = merge_policy let mut merge_candidates: Vec<MergeOperation> = merge_policy
.compute_merge_candidates(&uncommitted_segments) .compute_merge_candidates(&uncommitted_segments)
.into_iter() .into_iter()
.map(|merge_candidate| { .map(|merge_candidate| {
MergeOperation::new(&self.0.merge_operations, current_opstamp, merge_candidate.0) MergeOperation::new(&self.merge_operations, current_opstamp, merge_candidate.0)
}) })
.collect(); .collect();
@@ -429,25 +448,18 @@ impl SegmentUpdater {
let committed_merge_candidates = merge_policy let committed_merge_candidates = merge_policy
.compute_merge_candidates(&committed_segments) .compute_merge_candidates(&committed_segments)
.into_iter() .into_iter()
.map(|merge_candidate| { .map(|merge_candidate: MergeCandidate| {
MergeOperation::new(&self.0.merge_operations, commit_opstamp, merge_candidate.0) MergeOperation::new(&self.merge_operations, commit_opstamp, merge_candidate.0)
}) })
.collect::<Vec<_>>(); .collect::<Vec<_>>();
merge_candidates.extend(committed_merge_candidates.into_iter()); merge_candidates.extend(committed_merge_candidates.into_iter());
for merge_operation in merge_candidates { for merge_operation in merge_candidates {
match self.start_merge_impl(merge_operation) { if let Err(err) = self.start_merge(merge_operation) {
Ok(merge_future) => { warn!(
if let Err(e) = merge_future.fuse().poll() { "Starting the merge failed for the following reason. This is not fatal. {}",
error!("The merge task failed quickly after starting: {:?}", e); err
} );
}
Err(err) => {
warn!(
"Starting the merge failed for the following reason. This is not fatal. {}",
err
);
}
} }
} }
} }
@@ -456,15 +468,17 @@ impl SegmentUpdater {
&self, &self,
merge_operation: MergeOperation, merge_operation: MergeOperation,
mut after_merge_segment_entry: SegmentEntry, mut after_merge_segment_entry: SegmentEntry,
) -> Result<()> { ) -> impl Future<Output = crate::Result<SegmentMeta>> {
self.run_async(move |segment_updater| { let segment_updater = self.clone();
let after_merge_segment_meta = after_merge_segment_entry.meta().clone();
let end_merge_future = self.schedule_future(async move {
info!("End merge {:?}", after_merge_segment_entry.meta()); info!("End merge {:?}", after_merge_segment_entry.meta());
{ {
let mut delete_cursor = after_merge_segment_entry.delete_cursor().clone(); let mut delete_cursor = after_merge_segment_entry.delete_cursor().clone();
if let Some(delete_operation) = delete_cursor.get() { if let Some(delete_operation) = delete_cursor.get() {
let committed_opstamp = segment_updater.load_metas().opstamp; let committed_opstamp = segment_updater.load_metas().opstamp;
if delete_operation.opstamp < committed_opstamp { if delete_operation.opstamp < committed_opstamp {
let index = &segment_updater.0.index; let index = &segment_updater.index;
let segment = index.segment(after_merge_segment_entry.meta().clone()); let segment = index.segment(after_merge_segment_entry.meta().clone());
if let Err(e) = advance_deletes( if let Err(e) = advance_deletes(
segment, segment,
@@ -482,21 +496,26 @@ impl SegmentUpdater {
// ... cancel merge // ... cancel merge
// `merge_operations` are tracked. As it is dropped, the // `merge_operations` are tracked. As it is dropped, the
// the segment_ids will be available again for merge. // the segment_ids will be available again for merge.
return; return Err(e);
} }
} }
} }
let previous_metas = segment_updater.load_metas(); let previous_metas = segment_updater.load_metas();
segment_updater let segments_status = segment_updater
.0
.segment_manager .segment_manager
.end_merge(merge_operation.segment_ids(), after_merge_segment_entry); .end_merge(merge_operation.segment_ids(), after_merge_segment_entry)?;
segment_updater.consider_merge_options();
segment_updater.save_metas(previous_metas.opstamp, previous_metas.payload.clone()); if segments_status == SegmentsStatus::Committed {
segment_updater
.save_metas(previous_metas.opstamp, previous_metas.payload.clone())?;
}
segment_updater.consider_merge_options().await;
} // we drop all possible handles to a now useless `SegmentMeta`. } // we drop all possible handles to a now useless `SegmentMeta`.
segment_updater.garbage_collect_files_exec(); let _ = garbage_collect_files(segment_updater).await;
}) Ok(())
.wait() });
end_merge_future.map_ok(|_| after_merge_segment_meta)
} }
/// Wait for current merging threads. /// Wait for current merging threads.
@@ -514,26 +533,9 @@ impl SegmentUpdater {
/// ///
/// Obsolete files will eventually be cleaned up /// Obsolete files will eventually be cleaned up
/// by the directory garbage collector. /// by the directory garbage collector.
pub fn wait_merging_thread(&self) -> Result<()> { pub fn wait_merging_thread(&self) -> crate::Result<()> {
loop { self.merge_operations.wait_until_empty();
let merging_threads: HashMap<usize, JoinHandle<Result<()>>> = { Ok(())
let mut merging_threads = self.0.merging_threads.write().unwrap();
mem::replace(merging_threads.deref_mut(), HashMap::new())
};
if merging_threads.is_empty() {
return Ok(());
}
debug!("wait merging thread {}", merging_threads.len());
for (_, merging_thread_handle) in merging_threads {
merging_thread_handle
.join()
.map(|_| ())
.map_err(|_| TantivyError::ErrorInThread("Merging thread failed.".into()))?;
}
// Our merging thread may have queued their completed merged segment.
// Let's wait for that too.
self.run_async(move |_| {}).wait()?;
}
} }
} }
@@ -689,7 +691,6 @@ mod tests {
index_writer.segment_updater().remove_all_segments(); index_writer.segment_updater().remove_all_segments();
let seg_vec = index_writer let seg_vec = index_writer
.segment_updater() .segment_updater()
.0
.segment_manager .segment_manager
.segment_entries(); .segment_entries();
assert!(seg_vec.is_empty()); assert!(seg_vec.is_empty());


@@ -6,25 +6,23 @@ use crate::fieldnorm::FieldNormsWriter;
use crate::indexer::segment_serializer::SegmentSerializer; use crate::indexer::segment_serializer::SegmentSerializer;
use crate::postings::compute_table_size; use crate::postings::compute_table_size;
use crate::postings::MultiFieldPostingsWriter; use crate::postings::MultiFieldPostingsWriter;
use crate::schema::FieldEntry;
use crate::schema::FieldType; use crate::schema::FieldType;
use crate::schema::Schema; use crate::schema::Schema;
use crate::schema::Term; use crate::schema::Term;
use crate::schema::Value; use crate::schema::Value;
use crate::tokenizer::BoxedTokenizer; use crate::schema::{Field, FieldEntry};
use crate::tokenizer::FacetTokenizer; use crate::tokenizer::{BoxTokenStream, PreTokenizedStream};
use crate::tokenizer::{TokenStream, Tokenizer}; use crate::tokenizer::{FacetTokenizer, TextAnalyzer};
use crate::tokenizer::{TokenStreamChain, Tokenizer};
use crate::DocId; use crate::DocId;
use crate::Opstamp; use crate::Opstamp;
use crate::Result;
use crate::TantivyError;
use std::io; use std::io;
use std::str; use std::str;
/// Computes the initial size of the hash table. /// Computes the initial size of the hash table.
/// ///
/// Returns a number of bit `b`, such that the recommended initial table size is 2^b. /// Returns a number of bit `b`, such that the recommended initial table size is 2^b.
fn initial_table_size(per_thread_memory_budget: usize) -> Result<usize> { fn initial_table_size(per_thread_memory_budget: usize) -> crate::Result<usize> {
let table_memory_upper_bound = per_thread_memory_budget / 3; let table_memory_upper_bound = per_thread_memory_budget / 3;
if let Some(limit) = (10..) if let Some(limit) = (10..)
.take_while(|num_bits: &usize| compute_table_size(*num_bits) < table_memory_upper_bound) .take_while(|num_bits: &usize| compute_table_size(*num_bits) < table_memory_upper_bound)
@@ -32,7 +30,7 @@ fn initial_table_size(per_thread_memory_budget: usize) -> Result<usize> {
{ {
Ok(limit.min(19)) // we cap it at 2^19 = 512K. Ok(limit.min(19)) // we cap it at 2^19 = 512K.
} else { } else {
Err(TantivyError::InvalidArgument( Err(crate::TantivyError::InvalidArgument(
format!("per thread memory budget (={}) is too small. Raise the memory budget or lower the number of threads.", per_thread_memory_budget))) format!("per thread memory budget (={}) is too small. Raise the memory budget or lower the number of threads.", per_thread_memory_budget)))
} }
} }
@@ -49,7 +47,7 @@ pub struct SegmentWriter {
fast_field_writers: FastFieldsWriter, fast_field_writers: FastFieldsWriter,
fieldnorms_writer: FieldNormsWriter, fieldnorms_writer: FieldNormsWriter,
doc_opstamps: Vec<Opstamp>, doc_opstamps: Vec<Opstamp>,
tokenizers: Vec<Option<Box<dyn BoxedTokenizer>>>, tokenizers: Vec<Option<TextAnalyzer>>,
} }
impl SegmentWriter { impl SegmentWriter {
@@ -66,16 +64,14 @@ impl SegmentWriter {
memory_budget: usize, memory_budget: usize,
mut segment: Segment, mut segment: Segment,
schema: &Schema, schema: &Schema,
) -> Result<SegmentWriter> { ) -> crate::Result<SegmentWriter> {
let table_num_bits = initial_table_size(memory_budget)?; let table_num_bits = initial_table_size(memory_budget)?;
let segment_serializer = SegmentSerializer::for_segment(&mut segment)?; let segment_serializer = SegmentSerializer::for_segment(&mut segment)?;
let multifield_postings = MultiFieldPostingsWriter::new(schema, table_num_bits); let multifield_postings = MultiFieldPostingsWriter::new(schema, table_num_bits);
let tokenizers = let tokenizers = schema
schema .fields()
.fields() .map(
.iter() |(_, field_entry): (Field, &FieldEntry)| match field_entry.field_type() {
.map(FieldEntry::field_type)
.map(|field_type| match *field_type {
FieldType::Str(ref text_options) => text_options FieldType::Str(ref text_options) => text_options
.get_indexing_options() .get_indexing_options()
.and_then(|text_index_option| { .and_then(|text_index_option| {
@@ -83,8 +79,9 @@ impl SegmentWriter {
segment.index().tokenizers().get(tokenizer_name) segment.index().tokenizers().get(tokenizer_name)
}), }),
_ => None, _ => None,
}) },
.collect(); )
.collect();
Ok(SegmentWriter { Ok(SegmentWriter {
max_doc: 0, max_doc: 0,
multifield_postings, multifield_postings,
@@ -100,7 +97,7 @@ impl SegmentWriter {
/// ///
/// Finalize consumes the `SegmentWriter`, so that it cannot /// Finalize consumes the `SegmentWriter`, so that it cannot
/// be used afterwards. /// be used afterwards.
pub fn finalize(mut self) -> Result<Vec<u64>> { pub fn finalize(mut self) -> crate::Result<Vec<u64>> {
self.fieldnorms_writer.fill_up_to_max_doc(self.max_doc); self.fieldnorms_writer.fill_up_to_max_doc(self.max_doc);
write( write(
&self.multifield_postings, &self.multifield_postings,
@@ -159,26 +156,43 @@ impl SegmentWriter {
} }
} }
FieldType::Str(_) => { FieldType::Str(_) => {
let num_tokens = if let Some(ref mut tokenizer) = let mut token_streams: Vec<BoxTokenStream> = vec![];
self.tokenizers[field.0 as usize] let mut offsets = vec![];
{ let mut total_offset = 0;
let texts: Vec<&str> = field_values
.iter() for field_value in field_values {
.flat_map(|field_value| match *field_value.value() { match field_value.value() {
Value::Str(ref text) => Some(text.as_str()), Value::PreTokStr(tok_str) => {
_ => None, offsets.push(total_offset);
}) if let Some(last_token) = tok_str.tokens.last() {
.collect(); total_offset += last_token.offset_to;
if texts.is_empty() { }
0
} else { token_streams
let mut token_stream = tokenizer.token_stream_texts(&texts[..]); .push(PreTokenizedStream::from(tok_str.clone()).into());
self.multifield_postings }
.index_text(doc_id, field, &mut token_stream) Value::Str(ref text) => {
if let Some(ref mut tokenizer) =
self.tokenizers[field.field_id() as usize]
{
offsets.push(total_offset);
total_offset += text.len();
token_streams.push(tokenizer.token_stream(text));
}
}
_ => (),
} }
} else { }
let num_tokens = if token_streams.is_empty() {
0 0
} else {
let mut token_stream = TokenStreamChain::new(offsets, token_streams);
self.multifield_postings
.index_text(doc_id, field, &mut token_stream)
}; };
self.fieldnorms_writer.record(doc_id, field, num_tokens); self.fieldnorms_writer.record(doc_id, field, num_tokens);
} }
FieldType::U64(ref int_option) => { FieldType::U64(ref int_option) => {
@@ -231,6 +245,7 @@ impl SegmentWriter {
} }
} }
doc.filter_fields(|field| schema.get_field_entry(field).is_stored()); doc.filter_fields(|field| schema.get_field_entry(field).is_stored());
doc.prepare_for_store();
let doc_writer = self.segment_serializer.get_store_writer(); let doc_writer = self.segment_serializer.get_store_writer();
doc_writer.store(&doc)?; doc_writer.store(&doc)?;
self.max_doc += 1; self.max_doc += 1;
@@ -264,7 +279,7 @@ fn write(
fast_field_writers: &FastFieldsWriter, fast_field_writers: &FastFieldsWriter,
fieldnorms_writer: &FieldNormsWriter, fieldnorms_writer: &FieldNormsWriter,
mut serializer: SegmentSerializer, mut serializer: SegmentSerializer,
) -> Result<()> { ) -> crate::Result<()> {
let term_ord_map = multifield_postings.serialize(serializer.get_postings_serializer())?; let term_ord_map = multifield_postings.serialize(serializer.get_postings_serializer())?;
fast_field_writers.serialize(serializer.get_fast_field_serializer(), &term_ord_map)?; fast_field_writers.serialize(serializer.get_fast_field_serializer(), &term_ord_map)?;
fieldnorms_writer.serialize(serializer.get_fieldnorms_serializer())?; fieldnorms_writer.serialize(serializer.get_fieldnorms_serializer())?;
@@ -273,7 +288,7 @@ fn write(
} }
impl SerializableSegment for SegmentWriter { impl SerializableSegment for SegmentWriter {
fn write(&self, serializer: SegmentSerializer) -> Result<u32> { fn write(&self, serializer: SegmentSerializer) -> crate::Result<u32> {
let max_doc = self.max_doc; let max_doc = self.max_doc;
write( write(
&self.multifield_postings, &self.multifield_postings,
@@ -296,5 +311,4 @@ mod tests {
assert_eq!(initial_table_size(10_000_000).unwrap(), 17); assert_eq!(initial_table_size(10_000_000).unwrap(), 17);
assert_eq!(initial_table_size(1_000_000_000).unwrap(), 19); assert_eq!(initial_table_size(1_000_000_000).unwrap(), 19);
} }
} }


@@ -1,18 +1,76 @@
use crate::Opstamp; use crate::Opstamp;
use std::ops::Range; use std::ops::Range;
use std::sync::atomic::{AtomicU64, Ordering}; use std::sync::atomic::Ordering;
use std::sync::Arc; use std::sync::Arc;
#[cfg(not(target_arch = "arm"))]
mod atomic_impl {
use crate::Opstamp;
use std::sync::atomic::{AtomicU64, Ordering};
#[derive(Default)]
pub struct AtomicU64Wrapper(AtomicU64);
impl AtomicU64Wrapper {
pub fn new(first_opstamp: Opstamp) -> AtomicU64Wrapper {
AtomicU64Wrapper(AtomicU64::new(first_opstamp as u64))
}
pub fn fetch_add(&self, val: u64, order: Ordering) -> u64 {
self.0.fetch_add(val as u64, order) as u64
}
pub fn revert(&self, val: u64, order: Ordering) -> u64 {
self.0.store(val, order);
val
}
}
}
#[cfg(target_arch = "arm")]
mod atomic_impl {
use crate::Opstamp;
/// On other architectures, we rely on a lock (`RwLock`) instead of a 64-bit atomic.
use std::sync::atomic::Ordering;
use std::sync::RwLock;
#[derive(Default)]
pub struct AtomicU64Wrapper(RwLock<u64>);
impl AtomicU64Wrapper {
pub fn new(first_opstamp: Opstamp) -> AtomicU64Wrapper {
AtomicU64Wrapper(RwLock::new(first_opstamp))
}
pub fn fetch_add(&self, incr: u64, _order: Ordering) -> u64 {
let mut lock = self.0.write().unwrap();
let previous_val = *lock;
*lock = previous_val + incr;
previous_val
}
pub fn revert(&self, val: u64, _order: Ordering) -> u64 {
let mut lock = self.0.write().unwrap();
*lock = val;
val
}
}
}
use self::atomic_impl::AtomicU64Wrapper;
/// Stamper provides `Opstamp`s, which are just auto-incrementing ids used to label /// Stamper provides `Opstamp`s, which are just auto-incrementing ids used to label
/// operations. /// operations.
/// ///
/// Cloning does not "fork" the stamp generation. The stamper actually wraps an `Arc`. /// Cloning does not "fork" the stamp generation. The stamper actually wraps an `Arc`.
#[derive(Clone, Default)] #[derive(Clone, Default)]
pub struct Stamper(Arc<AtomicU64>); pub struct Stamper(Arc<AtomicU64Wrapper>);
impl Stamper { impl Stamper {
pub fn new(first_opstamp: Opstamp) -> Stamper { pub fn new(first_opstamp: Opstamp) -> Stamper {
Stamper(Arc::new(AtomicU64::new(first_opstamp))) Stamper(Arc::new(AtomicU64Wrapper::new(first_opstamp)))
} }
pub fn stamp(&self) -> Opstamp { pub fn stamp(&self) -> Opstamp {
@@ -31,8 +89,7 @@ impl Stamper {
/// Reverts the stamper to a given `Opstamp` value and returns it /// Reverts the stamper to a given `Opstamp` value and returns it
pub fn revert(&self, to_opstamp: Opstamp) -> Opstamp { pub fn revert(&self, to_opstamp: Opstamp) -> Opstamp {
self.0.store(to_opstamp, Ordering::SeqCst); self.0.revert(to_opstamp, Ordering::SeqCst)
to_opstamp
} }
} }
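A small sketch of the shared-counter behaviour documented above: since the `Stamper` wraps an `Arc`, clones draw from the same sequence, and `revert` rewinds it for every clone. This assumes `stamp()` hands back the pre-increment value, as the `fetch_add` wrappers above suggest.

    let stamper = Stamper::new(0u64);
    let stamper_clone = stamper.clone();
    assert_eq!(stamper.stamp(), 0u64);
    // The clone continues the same sequence instead of restarting at 0.
    assert_eq!(stamper_clone.stamp(), 1u64);
    // Reverting (e.g. on rollback) affects every clone.
    stamper.revert(5u64);
    assert_eq!(stamper_clone.stamp(), 5u64);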

src/lib.rs Executable file → Normal file

@@ -3,7 +3,6 @@
#![cfg_attr(feature = "cargo-clippy", allow(clippy::module_inception))] #![cfg_attr(feature = "cargo-clippy", allow(clippy::module_inception))]
#![doc(test(attr(allow(unused_variables), deny(warnings))))] #![doc(test(attr(allow(unused_variables), deny(warnings))))]
#![warn(missing_docs)] #![warn(missing_docs)]
#![recursion_limit = "80"]
//! # `tantivy` //! # `tantivy`
//! //!
@@ -11,26 +10,17 @@
//! Think `Lucene`, but in Rust. //! Think `Lucene`, but in Rust.
//! //!
//! ```rust //! ```rust
//! # extern crate tempdir;
//! #
//! #[macro_use]
//! extern crate tantivy;
//!
//! // ...
//!
//! # use std::path::Path; //! # use std::path::Path;
//! # use tempdir::TempDir; //! # use tempfile::TempDir;
//! # use tantivy::Index;
//! # use tantivy::schema::*;
//! # use tantivy::{Score, DocAddress};
//! # use tantivy::collector::TopDocs; //! # use tantivy::collector::TopDocs;
//! # use tantivy::query::QueryParser; //! # use tantivy::query::QueryParser;
//! # use tantivy::schema::*;
//! # use tantivy::{doc, DocAddress, Index, Score};
//! # //! #
//! # fn main() { //! # fn main() {
//! # // Let's create a temporary directory for the //! # // Let's create a temporary directory for the
//! # // sake of this example //! # // sake of this example
//! # if let Ok(dir) = TempDir::new("tantivy_example_dir") { //! # if let Ok(dir) = TempDir::new() {
//! # run_example(dir.path()).unwrap(); //! # run_example(dir.path()).unwrap();
//! # dir.close().unwrap(); //! # dir.close().unwrap();
//! # } //! # }
@@ -108,9 +98,6 @@
//! [literate programming](https://tantivy-search.github.io/examples/basic_search.html) / //! [literate programming](https://tantivy-search.github.io/examples/basic_search.html) /
//! [source code](https://github.com/tantivy-search/tantivy/blob/master/examples/basic_search.rs)) //! [source code](https://github.com/tantivy-search/tantivy/blob/master/examples/basic_search.rs))
#[macro_use]
extern crate serde_derive;
#[cfg_attr(test, macro_use)] #[cfg_attr(test, macro_use)]
extern crate serde_json; extern crate serde_json;
@@ -131,13 +118,13 @@ mod functional_test;
mod macros; mod macros;
pub use crate::error::TantivyError; pub use crate::error::TantivyError;
#[deprecated(since = "0.7.0", note = "please use `tantivy::TantivyError` instead")]
pub use crate::error::TantivyError as Error;
pub use chrono; pub use chrono;
/// Tantivy result. /// Tantivy result.
pub type Result<T> = std::result::Result<T, error::TantivyError>; ///
/// Within tantivy, please avoid importing `Result` using `use crate::Result`
/// and instead refer to it as `crate::Result<T>`.
pub type Result<T> = std::result::Result<T, TantivyError>;
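A tiny illustration of the convention stated above, which this change set applies throughout the crate (every `-> Result<...>` signature becomes `-> crate::Result<...>`); `open_index` is just a placeholder name.

    // Preferred inside the crate:
    fn open_index() -> crate::Result<()> {
        Ok(())
    }
    // Avoided: `use crate::Result;` followed by `fn open_index() -> Result<()>`,
    // which shadows `std::result::Result` for the whole module.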
/// Tantivy DateTime /// Tantivy DateTime
pub type DateTime = chrono::DateTime<chrono::Utc>; pub type DateTime = chrono::DateTime<chrono::Utc>;
@@ -170,21 +157,69 @@ pub use self::snippet::{Snippet, SnippetGenerator};
mod docset; mod docset;
pub use self::docset::{DocSet, SkipResult}; pub use self::docset::{DocSet, SkipResult};
pub use crate::common::{f64_to_u64, i64_to_u64, u64_to_f64, u64_to_i64}; pub use crate::common::{f64_to_u64, i64_to_u64, u64_to_f64, u64_to_i64};
pub use crate::core::SegmentComponent; pub use crate::core::{Executor, SegmentComponent};
pub use crate::core::{Index, IndexMeta, Searcher, Segment, SegmentId, SegmentMeta}; pub use crate::core::{Index, IndexMeta, Searcher, Segment, SegmentId, SegmentMeta};
pub use crate::core::{InvertedIndexReader, SegmentReader}; pub use crate::core::{InvertedIndexReader, SegmentReader};
pub use crate::directory::Directory; pub use crate::directory::Directory;
pub use crate::indexer::operation::UserOperation;
pub use crate::indexer::IndexWriter; pub use crate::indexer::IndexWriter;
pub use crate::postings::Postings; pub use crate::postings::Postings;
pub use crate::reader::LeasedItem; pub use crate::reader::LeasedItem;
pub use crate::schema::{Document, Term}; pub use crate::schema::{Document, Term};
use std::fmt;
/// Expose the current version of tantivy, as well use once_cell::sync::Lazy;
/// whether it was compiled with the simd compression. use serde::{Deserialize, Serialize};
pub fn version() -> &'static str {
env!("CARGO_PKG_VERSION") /// Index format version.
const INDEX_FORMAT_VERSION: u32 = 1;
/// Structure version for the index.
#[derive(Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct Version {
major: u32,
minor: u32,
patch: u32,
index_format_version: u32,
store_compression: String,
}
impl fmt::Debug for Version {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.to_string())
}
}
static VERSION: Lazy<Version> = Lazy::new(|| Version {
major: env!("CARGO_PKG_VERSION_MAJOR").parse().unwrap(),
minor: env!("CARGO_PKG_VERSION_MINOR").parse().unwrap(),
patch: env!("CARGO_PKG_VERSION_PATCH").parse().unwrap(),
index_format_version: INDEX_FORMAT_VERSION,
store_compression: crate::store::COMPRESSION.to_string(),
});
impl ToString for Version {
fn to_string(&self) -> String {
format!(
"tantivy v{}.{}.{}, index_format v{}, store_compression: {}",
self.major, self.minor, self.patch, self.index_format_version, self.store_compression
)
}
}
static VERSION_STRING: Lazy<String> = Lazy::new(|| VERSION.to_string());
/// Expose the current version of tantivy as found in Cargo.toml during compilation.
/// eg. "0.11.0" as well as the compression scheme used in the docstore.
pub fn version() -> &'static Version {
&VERSION
}
/// Exposes the complete version of tantivy as found in Cargo.toml during compilation as a string.
/// eg. "tantivy v0.11.0, index_format v1, store_compression: lz4".
pub fn version_string() -> &'static str {
VERSION_STRING.as_str()
} }
/// Defines tantivy's merging strategy /// Defines tantivy's merging strategy
@@ -222,28 +257,164 @@ pub type Score = f32;
pub type SegmentLocalId = u32; pub type SegmentLocalId = u32;
impl DocAddress { impl DocAddress {
/// Return the segment ordinal. /// Return the segment ordinal id that identifies the segment
/// The segment ordinal is an id identifying the segment /// hosting the document in the `Searcher` it is called from.
/// hosting the document. It is only meaningful, in the context
/// of a searcher.
pub fn segment_ord(self) -> SegmentLocalId { pub fn segment_ord(self) -> SegmentLocalId {
self.0 self.0
} }
/// Return the segment local `DocId` /// Return the segment-local `DocId`
pub fn doc(self) -> DocId { pub fn doc(self) -> DocId {
self.1 self.1
} }
} }
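A one-line sketch of the two accessors above, using the public tuple-struct constructor defined further down in this file.

    // Segment ordinal 0 (within the Searcher), segment-local DocId 5.
    let doc_address = tantivy::DocAddress(0u32, 5u32);
    assert_eq!(doc_address.segment_ord(), 0u32);
    assert_eq!(doc_address.doc(), 5u32);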
mod sse2 {
use crate::postings::compression::{AlignedBuffer, COMPRESSION_BLOCK_SIZE};
use std::arch::x86_64::__m128i as DataType;
use std::arch::x86_64::_mm_add_epi32 as op_add;
use std::arch::x86_64::_mm_cmplt_epi32 as op_lt;
use std::arch::x86_64::_mm_load_si128 as op_load;
// requires 128-bits alignment
use std::arch::x86_64::_mm_set1_epi32 as set1;
use std::arch::x86_64::_mm_setzero_si128 as set0;
use std::arch::x86_64::_mm_sub_epi32 as op_sub;
use std::arch::x86_64::{_mm_cvtsi128_si32, _mm_shuffle_epi32};
const MASK1: i32 = 78;
const MASK2: i32 = 177;
/// Performs an exhaustive linear search over the block.
///
/// There is no early exit here. We simply count the
/// number of elements that are `< target`.
pub unsafe fn linear_search_sse2_128(arr: &AlignedBuffer, target: u32) -> usize {
let ptr = arr as *const AlignedBuffer as *const DataType;
let vkey = set1(target as i32);
let mut cnt = set0();
// We work over 4 `__m128i` at a time.
// A single `__m128i` actually contains 4 `u32`.
for i in 0..(COMPRESSION_BLOCK_SIZE as isize) / (4 * 4) {
let cmp1 = op_lt(op_load(ptr.offset(i * 4)), vkey);
let cmp2 = op_lt(op_load(ptr.offset(i * 4 + 1)), vkey);
let cmp3 = op_lt(op_load(ptr.offset(i * 4 + 2)), vkey);
let cmp4 = op_lt(op_load(ptr.offset(i * 4 + 3)), vkey);
let sum = op_add(op_add(cmp1, cmp2), op_add(cmp3, cmp4));
cnt = op_sub(cnt, sum);
}
cnt = op_add(cnt, _mm_shuffle_epi32(cnt, MASK1));
cnt = op_add(cnt, _mm_shuffle_epi32(cnt, MASK2));
_mm_cvtsi128_si32(cnt) as usize
}
}
const MAX_DOC: u32 = 2_000_000_000u32;
use sse2::linear_search_sse2_128;
use crate::postings::BlockSegmentPostings;
use crate::schema::IndexRecordOption;
struct DocCursor {
block_segment_postings: BlockSegmentPostings,
cursor: usize,
}
impl DocCursor {
fn new(mut block_segment_postings: BlockSegmentPostings) -> DocCursor {
block_segment_postings.advance();
DocCursor {
block_segment_postings,
cursor: 0,
}
}
fn doc(&self) -> DocId {
self.block_segment_postings.doc(self.cursor)
}
fn len(&self) -> usize {
self.block_segment_postings.doc_freq()
}
fn advance(&mut self) {
if self.cursor == 127 {
self.block_segment_postings.advance();
self.cursor = 0;
} else {
self.cursor += 1;
}
}
fn skip_to(&mut self, target: DocId) {
if self.block_segment_postings.skip_reader.doc() >= target {
unsafe {
let mut ptr = self.block_segment_postings.docs_aligned_b().0.get_unchecked(self.cursor) as *const u32;
while *ptr < target {
self.cursor += 1;
ptr = ptr.offset(1);
}
}
return;
}
self.block_segment_postings.skip_to_b(target);
let block = self.block_segment_postings.docs_aligned_b();
self.cursor = unsafe { linear_search_sse2_128(&block, target) };
}
}
pub fn intersection(idx: &InvertedIndexReader, terms: &[Term]) -> crate::Result<u32> {
let mut posting_lists: Vec<DocCursor> = Vec::with_capacity(terms.len());
for term in terms {
if let Some(block_postings) = idx.read_block_postings(term, IndexRecordOption::Basic) {
posting_lists.push(DocCursor::new(block_postings));
} else {
return Ok(0);
}
}
posting_lists.sort_by_key(|posting_list| posting_list.len());
Ok(intersection_idx(&mut posting_lists[..]))
}
fn intersection_idx(posting_lists: &mut [DocCursor]) -> DocId {
let mut candidate = posting_lists[0].doc();
let mut i = 1;
let mut count = 0u32;
let num_posting_lists = posting_lists.len();
loop {
while i < num_posting_lists {
posting_lists[i].skip_to(candidate);
if posting_lists[i].doc() != candidate {
candidate = posting_lists[i].doc();
i = 0;
continue;
}
i += 1;
}
count += 1;
if candidate == MAX_DOC {
break;
}
posting_lists[0].advance();
candidate = posting_lists[0].doc();
i = 1;
}
count - 1u32
}
/// `DocAddress` contains all the necessary information /// `DocAddress` contains all the necessary information
/// to identify a document given a `Searcher` object. /// to identify a document given a `Searcher` object.
/// ///
/// It consists in an id identifying its segment, and /// It consists of an id identifying its segment, and
/// its segment-local `DocId`. /// a segment-local `DocId`.
/// ///
/// The id used for the segment is actually an ordinal /// The id used for the segment is actually an ordinal
/// in the list of segment hold by a `Searcher`. /// in the list of `Segment`s held by a `Searcher`.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)] #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct DocAddress(pub SegmentLocalId, pub DocId); pub struct DocAddress(pub SegmentLocalId, pub DocId);
@@ -299,6 +470,18 @@ mod tests {
sample_with_seed(n, ratio, 4) sample_with_seed(n, ratio, 4)
} }
#[test]
#[cfg(not(feature = "lz4"))]
fn test_version_string() {
use regex::Regex;
let regex_ptn = Regex::new(
"tantivy v[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.{0,10}, index_format v[0-9]{1,5}",
)
.unwrap();
let version = super::version().to_string();
assert!(regex_ptn.find(&version).is_some());
}
#[test] #[test]
#[cfg(feature = "mmap")] #[cfg(feature = "mmap")]
fn test_indexing() { fn test_indexing() {
@@ -894,4 +1077,73 @@ mod tests {
assert_eq!(fast_field_reader.get(0), 4f64) assert_eq!(fast_field_reader.get(0), 4f64)
} }
} }
// motivated by #729
#[test]
fn test_update_via_delete_insert() {
use crate::collector::Count;
use crate::indexer::NoMergePolicy;
use crate::query::AllQuery;
use crate::SegmentId;
use futures::executor::block_on;
const DOC_COUNT: u64 = 2u64;
let mut schema_builder = SchemaBuilder::default();
let id = schema_builder.add_u64_field("id", INDEXED);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema.clone());
let index_reader = index.reader().unwrap();
let mut index_writer = index.writer(3_000_000).unwrap();
index_writer.set_merge_policy(Box::new(NoMergePolicy));
for doc_id in 0u64..DOC_COUNT {
index_writer.add_document(doc!(id => doc_id));
}
index_writer.commit().unwrap();
index_reader.reload().unwrap();
let searcher = index_reader.searcher();
assert_eq!(
searcher.search(&AllQuery, &Count).unwrap(),
DOC_COUNT as usize
);
// update the elements by deleting and re-adding them
for doc_id in 0u64..DOC_COUNT {
index_writer.delete_term(Term::from_field_u64(id, doc_id));
index_writer.commit().unwrap();
index_reader.reload().unwrap();
let doc = doc!(id => doc_id);
index_writer.add_document(doc);
index_writer.commit().unwrap();
index_reader.reload().unwrap();
let searcher = index_reader.searcher();
// The number of documents should be stable.
assert_eq!(
searcher.search(&AllQuery, &Count).unwrap(),
DOC_COUNT as usize
);
}
index_reader.reload().unwrap();
let searcher = index_reader.searcher();
let segment_ids: Vec<SegmentId> = searcher
.segment_readers()
.into_iter()
.map(|reader| reader.segment_id())
.collect();
block_on(index_writer.merge(&segment_ids)).unwrap();
index_reader.reload().unwrap();
let searcher = index_reader.searcher();
assert_eq!(
searcher.search(&AllQuery, &Count).unwrap(),
DOC_COUNT as usize
);
}
} }


@@ -22,11 +22,9 @@
/// ///
/// # Example /// # Example
/// ///
/// ``` /// ```rust
/// #[macro_use]
/// extern crate tantivy;
///
/// use tantivy::schema::{Schema, TEXT, FAST}; /// use tantivy::schema::{Schema, TEXT, FAST};
/// use tantivy::doc;
/// ///
/// //... /// //...
/// ///
@@ -37,9 +35,9 @@
/// let likes = schema_builder.add_u64_field("num_u64", FAST); /// let likes = schema_builder.add_u64_field("num_u64", FAST);
/// let schema = schema_builder.build(); /// let schema = schema_builder.build();
/// let doc = doc!( /// let doc = doc!(
/// title => "Life Aquatic", /// title => "Life Aquatic",
/// author => "Wes Anderson", /// author => "Wes Anderson",
/// likes => 4u64 /// likes => 4u64
/// ); /// );
/// # } /// # }
/// ``` /// ```


@@ -36,11 +36,10 @@ struct Positions {
impl Positions { impl Positions {
pub fn new(position_source: ReadOnlySource, skip_source: ReadOnlySource) -> Positions { pub fn new(position_source: ReadOnlySource, skip_source: ReadOnlySource) -> Positions {
let skip_len = skip_source.len(); let (body, footer) = skip_source.split_from_end(u32::SIZE_IN_BYTES);
let (body, footer) = skip_source.split(skip_len - u32::SIZE_IN_BYTES);
let num_long_skips = u32::deserialize(&mut footer.as_slice()).expect("Index corrupted"); let num_long_skips = u32::deserialize(&mut footer.as_slice()).expect("Index corrupted");
let body_split = body.len() - u64::SIZE_IN_BYTES * (num_long_skips as usize); let (skip_source, long_skip_source) =
let (skip_source, long_skip_source) = body.split(body_split); body.split_from_end(u64::SIZE_IN_BYTES * (num_long_skips as usize));
Positions { Positions {
bit_packer: BitPacker4x::new(), bit_packer: BitPacker4x::new(),
skip_source, skip_source,


@@ -25,7 +25,7 @@ mod sse2 {
/// ///
/// There is no early exit here. We simply count the /// There is no early exit here. We simply count the
/// number of elements that are `< target`. /// number of elements that are `< target`.
pub(crate) fn linear_search_sse2_128(arr: &AlignedBuffer, target: u32) -> usize { pub fn linear_search_sse2_128(arr: &AlignedBuffer, target: u32) -> usize {
unsafe { unsafe {
let ptr = arr as *const AlignedBuffer as *const DataType; let ptr = arr as *const AlignedBuffer as *const DataType;
let vkey = set1(target as i32); let vkey = set1(target as i32);
@@ -63,6 +63,8 @@ mod sse2 {
} }
} }
pub use self::sse2::linear_search_sse2_128;
/// This `linear search` browses exhaustively through the array. /// but the early exit is very difficult to predict.
/// but the early exit is very difficult to predict. /// but the early exit is very difficult to predict.
/// ///
@@ -106,7 +108,7 @@ impl BlockSearcher {
/// the target. /// the target.
/// ///
/// The results should be equivalent to /// The results should be equivalent to
/// ```ignore /// ```compile_fail
/// block[..] /// block[..]
// .iter() // .iter()
// .take_while(|&&val| val < target) // .take_while(|&&val| val < target)

View File

@@ -46,7 +46,7 @@ impl BlockEncoder {
/// We ensure that the OutputBuffer is align on 128 bits /// We ensure that the OutputBuffer is align on 128 bits
/// in order to run SSE2 linear search on it. /// in order to run SSE2 linear search on it.
#[repr(align(128))] #[repr(align(128))]
pub(crate) struct AlignedBuffer(pub [u32; COMPRESSION_BLOCK_SIZE]); pub struct AlignedBuffer(pub [u32; COMPRESSION_BLOCK_SIZE]);
pub struct BlockDecoder { pub struct BlockDecoder {
bitpacker: BitPacker4x, bitpacker: BitPacker4x,
@@ -67,6 +67,18 @@ impl BlockDecoder {
} }
} }
pub fn uncompress_vint_sorted_b<'a>(
&mut self,
compressed_data: &'a [u8],
offset: u32,
num_els: usize,
) {
if num_els > 0 {
vint::uncompress_sorted(compressed_data, &mut self.output.0[..num_els], offset);
}
self.output.0[num_els..].iter_mut().for_each(|val| *val = 2_000_000_000u32);
}
pub fn uncompress_block_sorted( pub fn uncompress_block_sorted(
&mut self, &mut self,
compressed_data: &[u8], compressed_data: &[u8],
@@ -94,6 +106,11 @@ impl BlockDecoder {
(&self.output, self.output_len) (&self.output, self.output_len)
} }
#[inline]
pub(crate) fn output_aligned_2(&self) -> &AlignedBuffer {
&self.output
}
#[inline] #[inline]
pub fn output(&self, idx: usize) -> u32 { pub fn output(&self, idx: usize) -> u32 {
self.output.0[idx] self.output.0[idx]
@@ -274,13 +291,15 @@ pub mod tests {
mod bench { mod bench {
use super::*; use super::*;
use rand::rngs::StdRng;
use rand::Rng;
use rand::SeedableRng; use rand::SeedableRng;
use rand::{Rng, XorShiftRng};
use test::Bencher; use test::Bencher;
fn generate_array_with_seed(n: usize, ratio: f64, seed_val: u8) -> Vec<u32> { fn generate_array_with_seed(n: usize, ratio: f64, seed_val: u8) -> Vec<u32> {
let seed: &[u8; 16] = &[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, seed_val]; let mut seed: [u8; 32] = [0; 32];
let mut rng: XorShiftRng = XorShiftRng::from_seed(*seed); seed[31] = seed_val;
let mut rng = StdRng::from_seed(seed);
(0u32..).filter(|_| rng.gen_bool(ratio)).take(n).collect() (0u32..).filter(|_| rng.gen_bool(ratio)).take(n).collect()
} }
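The bench hunk above moves from `XorShiftRng` with a 16-byte seed to `StdRng`, which in the rand 0.7 line is seeded from 32 bytes. A small stand-alone sketch of the same seeding pattern, assuming `rand = "0.7"` as a dependency:

```rust
// rand 0.7-style seeding: StdRng takes a 32-byte seed, so only the last byte varies
// here, mirroring the old 16-byte XorShift seed.
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};

fn generate_sorted_sample(n: usize, ratio: f64, seed_val: u8) -> Vec<u32> {
    let mut seed = [0u8; 32];
    seed[31] = seed_val;
    let mut rng = StdRng::from_seed(seed);
    (0u32..).filter(|_| rng.gen_bool(ratio)).take(n).collect()
}

fn main() {
    let sample = generate_sorted_sample(5, 0.5, 42);
    assert_eq!(sample.len(), 5);
    // Filtering an increasing range keeps the output strictly increasing.
    assert!(sample.windows(2).all(|w| w[0] < w[1]));
}
```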

View File

@@ -3,7 +3,7 @@ Postings module (also called inverted index)
*/ */
mod block_search; mod block_search;
pub(crate) mod compression; pub mod compression;
/// Postings module /// Postings module
/// ///
/// Postings, also called inverted lists, is the key datastructure /// Postings, also called inverted lists, is the key datastructure
@@ -356,9 +356,9 @@ pub mod tests {
#[test] #[test]
fn test_skip_next() { fn test_skip_next() {
let term_0 = Term::from_field_u64(Field(0), 0); let term_0 = Term::from_field_u64(Field::from_field_id(0), 0);
let term_1 = Term::from_field_u64(Field(0), 1); let term_1 = Term::from_field_u64(Field::from_field_id(0), 1);
let term_2 = Term::from_field_u64(Field(0), 2); let term_2 = Term::from_field_u64(Field::from_field_id(0), 2);
let num_docs = 300u32; let num_docs = 300u32;
@@ -511,19 +511,19 @@ pub mod tests {
} }
pub static TERM_A: Lazy<Term> = Lazy::new(|| { pub static TERM_A: Lazy<Term> = Lazy::new(|| {
let field = Field(0); let field = Field::from_field_id(0);
Term::from_field_text(field, "a") Term::from_field_text(field, "a")
}); });
pub static TERM_B: Lazy<Term> = Lazy::new(|| { pub static TERM_B: Lazy<Term> = Lazy::new(|| {
let field = Field(0); let field = Field::from_field_id(0);
Term::from_field_text(field, "b") Term::from_field_text(field, "b")
}); });
pub static TERM_C: Lazy<Term> = Lazy::new(|| { pub static TERM_C: Lazy<Term> = Lazy::new(|| {
let field = Field(0); let field = Field::from_field_id(0);
Term::from_field_text(field, "c") Term::from_field_text(field, "c")
}); });
pub static TERM_D: Lazy<Term> = Lazy::new(|| { pub static TERM_D: Lazy<Term> = Lazy::new(|| {
let field = Field(0); let field = Field::from_field_id(0);
Term::from_field_text(field, "d") Term::from_field_text(field, "d")
}); });
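These hunks replace the tuple-struct constructor `Field(0)` with `Field::from_field_id(0)`. A sketch of building terms against the post-change API (field id and values are illustrative):

```rust
// Post-change construction: Field::from_field_id instead of the Field(..) tuple form.
use tantivy::schema::{Field, Term};

fn main() {
    let field = Field::from_field_id(0);
    let term_a = Term::from_field_text(field, "a");
    let term_zero = Term::from_field_u64(field, 0u64);
    assert_eq!(term_a.field(), field);
    assert_eq!(term_zero.field(), field);
}
```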
@@ -622,23 +622,23 @@ pub mod tests {
assert!(!postings_unopt.advance()); assert!(!postings_unopt.advance());
} }
} }
} }
#[cfg(all(test, feature = "unstable"))] #[cfg(all(test, feature = "unstable"))]
mod bench { mod bench {
use super::tests::*; use super::tests::*;
use docset::SkipResult; use crate::docset::SkipResult;
use query::Intersection; use crate::query::Intersection;
use schema::IndexRecordOption; use crate::schema::IndexRecordOption;
use crate::tests;
use crate::DocSet;
use test::{self, Bencher}; use test::{self, Bencher};
use tests;
use DocSet;
#[bench] #[bench]
fn bench_segment_postings(b: &mut Bencher) { fn bench_segment_postings(b: &mut Bencher) {
let searcher = INDEX.searcher(); let reader = INDEX.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
b.iter(|| { b.iter(|| {
@@ -652,7 +652,8 @@ mod bench {
#[bench] #[bench]
fn bench_segment_intersection(b: &mut Bencher) { fn bench_segment_intersection(b: &mut Bencher) {
let searcher = INDEX.searcher(); let reader = INDEX.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
b.iter(|| { b.iter(|| {
let segment_postings_a = segment_reader let segment_postings_a = segment_reader
@@ -682,7 +683,8 @@ mod bench {
} }
fn bench_skip_next(p: f64, b: &mut Bencher) { fn bench_skip_next(p: f64, b: &mut Bencher) {
let searcher = INDEX.searcher(); let reader = INDEX.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
let docs = tests::sample(segment_reader.num_docs(), p); let docs = tests::sample(segment_reader.num_docs(), p);
@@ -737,7 +739,8 @@ mod bench {
#[bench] #[bench]
fn bench_iterate_segment_postings(b: &mut Bencher) { fn bench_iterate_segment_postings(b: &mut Bencher) {
let searcher = INDEX.searcher(); let reader = INDEX.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
b.iter(|| { b.iter(|| {
let n: u32 = test::black_box(17); let n: u32 = test::black_box(17);
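The bench changes above follow the reader API: a `Searcher` is obtained from an `IndexReader` rather than directly from the `Index`. A minimal end-to-end sketch of that flow; the schema and the single committed document are only there to make it runnable:

```rust
// Obtaining a Searcher through an IndexReader, as the updated benches do.
use tantivy::schema::{Schema, TEXT};
use tantivy::{doc, Index};

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let text = schema_builder.add_text_field("text", TEXT);
    let index = Index::create_in_ram(schema_builder.build());
    let mut writer = index.writer_with_num_threads(1, 3_000_000)?;
    writer.add_document(doc!(text => "hello"));
    writer.commit()?;
    let reader = index.reader()?; // IndexReader: hands out point-in-time searchers
    let searcher = reader.searcher();
    assert_eq!(searcher.num_docs(), 1);
    Ok(())
}
```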

View File

@@ -11,7 +11,7 @@ use crate::termdict::TermOrdinal;
use crate::tokenizer::TokenStream; use crate::tokenizer::TokenStream;
use crate::tokenizer::{Token, MAX_TOKEN_LEN}; use crate::tokenizer::{Token, MAX_TOKEN_LEN};
use crate::DocId; use crate::DocId;
use crate::Result; use fnv::FnvHashMap;
use std::collections::HashMap; use std::collections::HashMap;
use std::io; use std::io;
use std::marker::PhantomData; use std::marker::PhantomData;
@@ -60,12 +60,12 @@ fn make_field_partition(
.iter() .iter()
.map(|(key, _, _)| Term::wrap(key).field()) .map(|(key, _, _)| Term::wrap(key).field())
.enumerate(); .enumerate();
let mut prev_field = Field(u32::max_value()); let mut prev_field_opt = None;
let mut fields = vec![]; let mut fields = vec![];
let mut offsets = vec![]; let mut offsets = vec![];
for (offset, field) in term_offsets_it { for (offset, field) in term_offsets_it {
if field != prev_field { if Some(field) != prev_field_opt {
prev_field = field; prev_field_opt = Some(field);
fields.push(field); fields.push(field);
offsets.push(offset); offsets.push(offset);
} }
@@ -85,8 +85,7 @@ impl MultiFieldPostingsWriter {
let term_index = TermHashMap::new(table_bits); let term_index = TermHashMap::new(table_bits);
let per_field_postings_writers: Vec<_> = schema let per_field_postings_writers: Vec<_> = schema
.fields() .fields()
.iter() .map(|(_, field_entry)| posting_from_field_entry(field_entry))
.map(|field_entry| posting_from_field_entry(field_entry))
.collect(); .collect();
MultiFieldPostingsWriter { MultiFieldPostingsWriter {
heap: MemoryArena::new(), heap: MemoryArena::new(),
@@ -106,7 +105,8 @@ impl MultiFieldPostingsWriter {
field: Field, field: Field,
token_stream: &mut dyn TokenStream, token_stream: &mut dyn TokenStream,
) -> u32 { ) -> u32 {
let postings_writer = self.per_field_postings_writers[field.0 as usize].deref_mut(); let postings_writer =
self.per_field_postings_writers[field.field_id() as usize].deref_mut();
postings_writer.index_text( postings_writer.index_text(
&mut self.term_index, &mut self.term_index,
doc, doc,
@@ -117,7 +117,8 @@ impl MultiFieldPostingsWriter {
} }
pub fn subscribe(&mut self, doc: DocId, term: &Term) -> UnorderedTermId { pub fn subscribe(&mut self, doc: DocId, term: &Term) -> UnorderedTermId {
let postings_writer = self.per_field_postings_writers[term.field().0 as usize].deref_mut(); let postings_writer =
self.per_field_postings_writers[term.field().field_id() as usize].deref_mut();
postings_writer.subscribe(&mut self.term_index, doc, 0u32, term, &mut self.heap) postings_writer.subscribe(&mut self.term_index, doc, 0u32, term, &mut self.heap)
} }
@@ -127,12 +128,12 @@ impl MultiFieldPostingsWriter {
pub fn serialize( pub fn serialize(
&self, &self,
serializer: &mut InvertedIndexSerializer, serializer: &mut InvertedIndexSerializer,
) -> Result<HashMap<Field, HashMap<UnorderedTermId, TermOrdinal>>> { ) -> crate::Result<HashMap<Field, FnvHashMap<UnorderedTermId, TermOrdinal>>> {
let mut term_offsets: Vec<(&[u8], Addr, UnorderedTermId)> = let mut term_offsets: Vec<(&[u8], Addr, UnorderedTermId)> =
self.term_index.iter().collect(); self.term_index.iter().collect();
term_offsets.sort_unstable_by_key(|&(k, _, _)| k); term_offsets.sort_unstable_by_key(|&(k, _, _)| k);
let mut unordered_term_mappings: HashMap<Field, HashMap<UnorderedTermId, TermOrdinal>> = let mut unordered_term_mappings: HashMap<Field, FnvHashMap<UnorderedTermId, TermOrdinal>> =
HashMap::new(); HashMap::new();
let field_offsets = make_field_partition(&term_offsets); let field_offsets = make_field_partition(&term_offsets);
@@ -147,7 +148,7 @@ impl MultiFieldPostingsWriter {
let unordered_term_ids = term_offsets[start..stop] let unordered_term_ids = term_offsets[start..stop]
.iter() .iter()
.map(|&(_, _, bucket)| bucket); .map(|&(_, _, bucket)| bucket);
let mapping: HashMap<UnorderedTermId, TermOrdinal> = unordered_term_ids let mapping: FnvHashMap<UnorderedTermId, TermOrdinal> = unordered_term_ids
.enumerate() .enumerate()
.map(|(term_ord, unord_term_id)| { .map(|(term_ord, unord_term_id)| {
(unord_term_id as UnorderedTermId, term_ord as TermOrdinal) (unord_term_id as UnorderedTermId, term_ord as TermOrdinal)
@@ -159,7 +160,7 @@ impl MultiFieldPostingsWriter {
FieldType::Bytes => {} FieldType::Bytes => {}
} }
let postings_writer = &self.per_field_postings_writers[field.0 as usize]; let postings_writer = &self.per_field_postings_writers[field.field_id() as usize];
let mut field_serializer = let mut field_serializer =
serializer.new_field(field, postings_writer.total_num_tokens())?; serializer.new_field(field, postings_writer.total_num_tokens())?;
postings_writer.serialize( postings_writer.serialize(
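The serializer hunk swaps the per-field term-id mapping from `HashMap` to `FnvHashMap`, which is std's `HashMap` with the FNV hasher and therefore a drop-in replacement. A sketch of the same mapping construction, assuming `fnv = "1"` as a dependency and treating `UnorderedTermId`/`TermOrdinal` as `u64` aliases for illustration:

```rust
// FnvHashMap as a drop-in HashMap; the key/value aliases mirror the mapping above.
use fnv::FnvHashMap;

type UnorderedTermId = u64;
type TermOrdinal = u64;

fn main() {
    // Term ids in unordered (insertion) order; their position is the term ordinal.
    let unordered_ids = [42u64, 7, 19];
    let mapping: FnvHashMap<UnorderedTermId, TermOrdinal> = unordered_ids
        .iter()
        .copied()
        .enumerate()
        .map(|(term_ord, unord_id)| (unord_id, term_ord as TermOrdinal))
        .collect();
    assert_eq!(mapping.get(&7u64).copied(), Some(1));
}
```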

View File

@@ -317,7 +317,7 @@ pub struct BlockSegmentPostings {
num_vint_docs: usize, num_vint_docs: usize,
remaining_data: OwnedRead, remaining_data: OwnedRead,
skip_reader: SkipReader, pub skip_reader: SkipReader,
} }
fn split_into_skips_and_postings( fn split_into_skips_and_postings(
@@ -414,7 +414,12 @@ impl BlockSegmentPostings {
self.doc_decoder.output_array() self.doc_decoder.output_array()
} }
pub(crate) fn docs_aligned(&self) -> (&AlignedBuffer, usize) { #[inline(always)]
pub fn docs_aligned_b(&self) -> &AlignedBuffer {
self.doc_decoder.output_aligned_2()
}
pub fn docs_aligned(&self) -> (&AlignedBuffer, usize) {
self.doc_decoder.output_aligned() self.doc_decoder.output_aligned()
} }
@@ -486,6 +491,7 @@ impl BlockSegmentPostings {
} }
} }
self.doc_offset = self.skip_reader.doc(); self.doc_offset = self.skip_reader.doc();
unsafe { core::arch::x86_64::_mm_prefetch(self.remaining_data.as_ref().as_ptr() as *const i8, core::arch::x86_64::_MM_HINT_T1) };
return BlockSegmentPostingsSkipResult::Success(skip_freqs); return BlockSegmentPostingsSkipResult::Success(skip_freqs);
} else { } else {
skip_freqs += self.skip_reader.tf_sum(); skip_freqs += self.skip_reader.tf_sum();
@@ -526,6 +532,41 @@ impl BlockSegmentPostings {
BlockSegmentPostingsSkipResult::Terminated BlockSegmentPostingsSkipResult::Terminated
} }
/// Ensure we are located on the right block.
#[inline(never)]
pub fn skip_to_b(&mut self, target_doc: DocId) {
while self.skip_reader.advance() {
if self.skip_reader.doc() >= target_doc {
// the last document of the current block is larger
// than the target.
//
// We found our block!
let num_bits = self.skip_reader.doc_num_bits();
let num_consumed_bytes = self.doc_decoder.uncompress_block_sorted(
self.remaining_data.as_ref(),
self.doc_offset,
num_bits,
);
let tf_num_bits = self.skip_reader.tf_num_bits();
let num_bytes_to_skip = compressed_block_size(tf_num_bits);
self.remaining_data.advance(num_consumed_bytes + num_bytes_to_skip);
self.doc_offset = self.skip_reader.doc();
return;
} else {
let advance_len = self.skip_reader.total_block_len();
self.doc_offset = self.skip_reader.doc();
self.remaining_data.advance(advance_len);
}
}
// we are now on the last, incomplete, variable encoded block.
self.doc_decoder.uncompress_vint_sorted_b(
self.remaining_data.as_ref(),
self.doc_offset,
self.num_vint_docs
);
}
/// Advance to the next block. /// Advance to the next block.
/// ///
/// Returns false iff there was no remaining blocks. /// Returns false iff there was no remaining blocks.
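`skip_to_b` above walks the skip data until it finds the first block whose last document reaches the target, and only then decodes a block. A dependency-free sketch of that walk, with blocks reduced to `(last_doc, docs)` pairs for illustration:

```rust
// Block-skip walk in miniature: skip whole blocks by their last doc, decode only the
// block that can contain the target.
fn skip_to_block(blocks: &[(u32, Vec<u32>)], target: u32) -> Option<&[u32]> {
    for (last_doc, docs) in blocks {
        if *last_doc >= target {
            return Some(docs.as_slice()); // this block may contain `target`
        }
        // otherwise the whole block is skipped without decoding it
    }
    None // target lies beyond the last block
}

fn main() {
    let blocks = vec![
        (10u32, vec![2, 5, 10]),
        (25, vec![12, 20, 25]),
        (40, vec![30, 40]),
    ];
    assert_eq!(skip_to_block(&blocks, 13), Some(&[12u32, 20, 25][..]));
    assert_eq!(skip_to_block(&blocks, 41), None);
}
```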

View File

@@ -11,7 +11,6 @@ use crate::schema::Schema;
use crate::schema::{Field, FieldEntry, FieldType}; use crate::schema::{Field, FieldEntry, FieldType};
use crate::termdict::{TermDictionaryBuilder, TermOrdinal}; use crate::termdict::{TermDictionaryBuilder, TermOrdinal};
use crate::DocId; use crate::DocId;
use crate::Result;
use std::io::{self, Write}; use std::io::{self, Write};
/// `InvertedIndexSerializer` is in charge of serializing /// `InvertedIndexSerializer` is in charge of serializing
@@ -61,7 +60,7 @@ impl InvertedIndexSerializer {
positions_write: CompositeWrite<WritePtr>, positions_write: CompositeWrite<WritePtr>,
positionsidx_write: CompositeWrite<WritePtr>, positionsidx_write: CompositeWrite<WritePtr>,
schema: Schema, schema: Schema,
) -> Result<InvertedIndexSerializer> { ) -> crate::Result<InvertedIndexSerializer> {
Ok(InvertedIndexSerializer { Ok(InvertedIndexSerializer {
terms_write, terms_write,
postings_write, postings_write,
@@ -72,7 +71,7 @@ impl InvertedIndexSerializer {
} }
/// Open a new `PostingsSerializer` for the given segment /// Open a new `PostingsSerializer` for the given segment
pub fn open(segment: &mut Segment) -> Result<InvertedIndexSerializer> { pub fn open(segment: &mut Segment) -> crate::Result<InvertedIndexSerializer> {
use crate::SegmentComponent::{POSITIONS, POSITIONSSKIP, POSTINGS, TERMS}; use crate::SegmentComponent::{POSITIONS, POSITIONSSKIP, POSTINGS, TERMS};
InvertedIndexSerializer::create( InvertedIndexSerializer::create(
CompositeWrite::wrap(segment.open_write(TERMS)?), CompositeWrite::wrap(segment.open_write(TERMS)?),
@@ -141,18 +140,14 @@ impl<'a> FieldSerializer<'a> {
FieldType::Str(ref text_options) => { FieldType::Str(ref text_options) => {
if let Some(text_indexing_options) = text_options.get_indexing_options() { if let Some(text_indexing_options) = text_options.get_indexing_options() {
let index_option = text_indexing_options.index_option(); let index_option = text_indexing_options.index_option();
( (index_option.has_freq(), index_option.has_positions())
index_option.is_termfreq_enabled(),
index_option.is_position_enabled(),
)
} else { } else {
(false, false) (false, false)
} }
} }
_ => (false, false), _ => (false, false),
}; };
let term_dictionary_builder = let term_dictionary_builder = TermDictionaryBuilder::create(term_dictionary_write)?;
TermDictionaryBuilder::create(term_dictionary_write, &field_type)?;
let postings_serializer = let postings_serializer =
PostingsSerializer::new(postings_write, term_freq_enabled, position_enabled); PostingsSerializer::new(postings_write, term_freq_enabled, position_enabled);
let positions_serializer_opt = if position_enabled { let positions_serializer_opt = if position_enabled {

View File

@@ -49,7 +49,7 @@ impl SkipSerializer {
} }
} }
pub(crate) struct SkipReader { pub struct SkipReader {
doc: DocId, doc: DocId,
owned_read: OwnedRead, owned_read: OwnedRead,
doc_num_bits: u8, doc_num_bits: u8,

View File

@@ -310,6 +310,7 @@ mod bench {
use super::super::MemoryArena; use super::super::MemoryArena;
use super::ExpUnrolledLinkedList; use super::ExpUnrolledLinkedList;
use byteorder::{NativeEndian, WriteBytesExt}; use byteorder::{NativeEndian, WriteBytesExt};
use std::iter;
use test::Bencher; use test::Bencher;
const NUM_STACK: usize = 10_000; const NUM_STACK: usize = 10_000;
@@ -335,11 +336,10 @@ mod bench {
fn bench_push_stack(bench: &mut Bencher) { fn bench_push_stack(bench: &mut Bencher) {
bench.iter(|| { bench.iter(|| {
let mut heap = MemoryArena::new(); let mut heap = MemoryArena::new();
let mut stacks = Vec::with_capacity(100); let mut stacks: Vec<ExpUnrolledLinkedList> =
for _ in 0..NUM_STACK { iter::repeat_with(ExpUnrolledLinkedList::new)
let mut stack = ExpUnrolledLinkedList::new(); .take(NUM_STACK)
stacks.push(stack); .collect();
}
for s in 0..NUM_STACK { for s in 0..NUM_STACK {
for i in 0u32..STACK_SIZE { for i in 0u32..STACK_SIZE {
let t = s * 392017 % NUM_STACK; let t = s * 392017 % NUM_STACK;
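The bench hunk builds the stacks with `iter::repeat_with` instead of a push loop. The same pattern on a plain `Vec`, for illustration:

```rust
// Building N empty containers with iter::repeat_with instead of a push loop.
use std::iter;

const NUM_STACK: usize = 10_000;

fn main() {
    let stacks: Vec<Vec<u32>> = iter::repeat_with(Vec::new).take(NUM_STACK).collect();
    assert_eq!(stacks.len(), NUM_STACK);
    assert!(stacks.iter().all(|stack| stack.is_empty()));
}
```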

View File

@@ -1,10 +1,10 @@
use crate::core::Searcher; use crate::core::Searcher;
use crate::core::SegmentReader; use crate::core::SegmentReader;
use crate::docset::DocSet; use crate::docset::DocSet;
use crate::query::boost_query::BoostScorer;
use crate::query::explanation::does_not_match; use crate::query::explanation::does_not_match;
use crate::query::{Explanation, Query, Scorer, Weight}; use crate::query::{Explanation, Query, Scorer, Weight};
use crate::DocId; use crate::DocId;
use crate::Result;
use crate::Score; use crate::Score;
/// Query that matches all of the documents. /// Query that matches all of the documents.
@@ -14,7 +14,7 @@ use crate::Score;
pub struct AllQuery; pub struct AllQuery;
impl Query for AllQuery { impl Query for AllQuery {
fn weight(&self, _: &Searcher, _: bool) -> Result<Box<dyn Weight>> { fn weight(&self, _: &Searcher, _: bool) -> crate::Result<Box<dyn Weight>> {
Ok(Box::new(AllWeight)) Ok(Box::new(AllWeight))
} }
} }
@@ -23,15 +23,16 @@ impl Query for AllQuery {
pub struct AllWeight; pub struct AllWeight;
impl Weight for AllWeight { impl Weight for AllWeight {
fn scorer(&self, reader: &SegmentReader) -> Result<Box<dyn Scorer>> { fn scorer(&self, reader: &SegmentReader, boost: f32) -> crate::Result<Box<dyn Scorer>> {
Ok(Box::new(AllScorer { let all_scorer = AllScorer {
state: State::NotStarted, state: State::NotStarted,
doc: 0u32, doc: 0u32,
max_doc: reader.max_doc(), max_doc: reader.max_doc(),
})) };
Ok(Box::new(BoostScorer::new(all_scorer, boost)))
} }
fn explain(&self, reader: &SegmentReader, doc: DocId) -> Result<Explanation> { fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
if doc >= reader.max_doc() { if doc >= reader.max_doc() {
return Err(does_not_match(doc)); return Err(does_not_match(doc));
} }
@@ -91,14 +92,12 @@ impl Scorer for AllScorer {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::AllQuery; use super::AllQuery;
use crate::query::Query; use crate::query::Query;
use crate::schema::{Schema, TEXT}; use crate::schema::{Schema, TEXT};
use crate::Index; use crate::Index;
#[test] fn create_test_index() -> Index {
fn test_all_query() {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let field = schema_builder.add_text_field("text", TEXT); let field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
@@ -109,13 +108,18 @@ mod tests {
index_writer.commit().unwrap(); index_writer.commit().unwrap();
index_writer.add_document(doc!(field=>"ccc")); index_writer.add_document(doc!(field=>"ccc"));
index_writer.commit().unwrap(); index_writer.commit().unwrap();
index
}
#[test]
fn test_all_query() {
let index = create_test_index();
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
reader.reload().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
let weight = AllQuery.weight(&searcher, false).unwrap(); let weight = AllQuery.weight(&searcher, false).unwrap();
{ {
let reader = searcher.segment_reader(0); let reader = searcher.segment_reader(0);
let mut scorer = weight.scorer(reader).unwrap(); let mut scorer = weight.scorer(reader, 1.0f32).unwrap();
assert!(scorer.advance()); assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32); assert_eq!(scorer.doc(), 0u32);
assert!(scorer.advance()); assert!(scorer.advance());
@@ -124,11 +128,31 @@ mod tests {
} }
{ {
let reader = searcher.segment_reader(1); let reader = searcher.segment_reader(1);
let mut scorer = weight.scorer(reader).unwrap(); let mut scorer = weight.scorer(reader, 1.0f32).unwrap();
assert!(scorer.advance()); assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32); assert_eq!(scorer.doc(), 0u32);
assert!(!scorer.advance()); assert!(!scorer.advance());
} }
} }
#[test]
fn test_all_query_with_boost() {
let index = create_test_index();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let weight = AllQuery.weight(&searcher, false).unwrap();
let reader = searcher.segment_reader(0);
{
let mut scorer = weight.scorer(reader, 2.0f32).unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32);
assert_eq!(scorer.score(), 2.0f32);
}
{
let mut scorer = weight.scorer(reader, 1.5f32).unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32);
assert_eq!(scorer.score(), 1.5f32);
}
}
} }

View File

@@ -8,15 +8,13 @@ use crate::termdict::{TermDictionary, TermStreamer};
use crate::DocId; use crate::DocId;
use crate::TantivyError; use crate::TantivyError;
use crate::{Result, SkipResult}; use crate::{Result, SkipResult};
use std::sync::Arc;
use tantivy_fst::Automaton; use tantivy_fst::Automaton;
/// A weight struct for Fuzzy Term and Regex Queries /// A weight struct for Fuzzy Term and Regex Queries
pub struct AutomatonWeight<A> pub struct AutomatonWeight<A> {
where
A: Automaton + Send + Sync + 'static,
{
field: Field, field: Field,
automaton: A, automaton: Arc<A>,
} }
impl<A> AutomatonWeight<A> impl<A> AutomatonWeight<A>
@@ -24,12 +22,16 @@ where
A: Automaton + Send + Sync + 'static, A: Automaton + Send + Sync + 'static,
{ {
/// Create a new AutomationWeight /// Create a new AutomationWeight
pub fn new(field: Field, automaton: A) -> AutomatonWeight<A> { pub fn new<IntoArcA: Into<Arc<A>>>(field: Field, automaton: IntoArcA) -> AutomatonWeight<A> {
AutomatonWeight { field, automaton } AutomatonWeight {
field,
automaton: automaton.into(),
}
} }
fn automaton_stream<'a>(&'a self, term_dict: &'a TermDictionary) -> TermStreamer<'a, &'a A> { fn automaton_stream<'a>(&'a self, term_dict: &'a TermDictionary) -> TermStreamer<'a, &'a A> {
let term_stream_builder = term_dict.search(&self.automaton); let automaton: &A = &*self.automaton;
let term_stream_builder = term_dict.search(automaton);
term_stream_builder.into_stream() term_stream_builder.into_stream()
} }
} }
@@ -38,7 +40,7 @@ impl<A> Weight for AutomatonWeight<A>
where where
A: Automaton + Send + Sync + 'static, A: Automaton + Send + Sync + 'static,
{ {
fn scorer(&self, reader: &SegmentReader) -> Result<Box<dyn Scorer>> { fn scorer(&self, reader: &SegmentReader, boost: f32) -> Result<Box<dyn Scorer>> {
let max_doc = reader.max_doc(); let max_doc = reader.max_doc();
let mut doc_bitset = BitSet::with_max_value(max_doc); let mut doc_bitset = BitSet::with_max_value(max_doc);
@@ -56,11 +58,12 @@ where
} }
} }
let doc_bitset = BitSetDocSet::from(doc_bitset); let doc_bitset = BitSetDocSet::from(doc_bitset);
Ok(Box::new(ConstScorer::new(doc_bitset))) let const_scorer = ConstScorer::new(doc_bitset, boost);
Ok(Box::new(const_scorer))
} }
fn explain(&self, reader: &SegmentReader, doc: DocId) -> Result<Explanation> { fn explain(&self, reader: &SegmentReader, doc: DocId) -> Result<Explanation> {
let mut scorer = self.scorer(reader)?; let mut scorer = self.scorer(reader, 1.0f32)?;
if scorer.skip_next(doc) == SkipResult::Reached { if scorer.skip_next(doc) == SkipResult::Reached {
Ok(Explanation::new("AutomatonScorer", 1.0f32)) Ok(Explanation::new("AutomatonScorer", 1.0f32))
} else { } else {
@@ -70,3 +73,95 @@ where
} }
} }
} }
#[cfg(test)]
mod tests {
use super::AutomatonWeight;
use crate::query::Weight;
use crate::schema::{Schema, STRING};
use crate::Index;
use tantivy_fst::Automaton;
fn create_index() -> Index {
let mut schema = Schema::builder();
let title = schema.add_text_field("title", STRING);
let index = Index::create_in_ram(schema.build());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer.add_document(doc!(title=>"abc"));
index_writer.add_document(doc!(title=>"bcd"));
index_writer.add_document(doc!(title=>"abcd"));
assert!(index_writer.commit().is_ok());
index
}
enum State {
Start,
NotMatching,
AfterA,
}
struct PrefixedByA;
impl Automaton for PrefixedByA {
type State = State;
fn start(&self) -> Self::State {
State::Start
}
fn is_match(&self, state: &Self::State) -> bool {
match *state {
State::AfterA => true,
_ => false,
}
}
fn accept(&self, state: &Self::State, byte: u8) -> Self::State {
match *state {
State::Start => {
if byte == b'a' {
State::AfterA
} else {
State::NotMatching
}
}
State::AfterA => State::AfterA,
State::NotMatching => State::NotMatching,
}
}
}
#[test]
fn test_automaton_weight() {
let index = create_index();
let field = index.schema().get_field("title").unwrap();
let automaton_weight = AutomatonWeight::new(field, PrefixedByA);
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let mut scorer = automaton_weight
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32);
assert_eq!(scorer.score(), 1.0f32);
assert!(scorer.advance());
assert_eq!(scorer.doc(), 2u32);
assert_eq!(scorer.score(), 1.0f32);
assert!(!scorer.advance());
}
#[test]
fn test_automaton_weight_boost() {
let index = create_index();
let field = index.schema().get_field("title").unwrap();
let automaton_weight = AutomatonWeight::new(field, PrefixedByA);
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let mut scorer = automaton_weight
.scorer(searcher.segment_reader(0u32), 1.32f32)
.unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32);
assert_eq!(scorer.score(), 1.32f32);
}
}
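The constructor above is now generic over `Into<Arc<A>>`, so callers can pass either an owned automaton or an `Arc` they already share. A dependency-free sketch of that pattern with illustrative stand-in types:

```rust
// Illustrative stand-ins: a constructor generic over Into<Arc<A>> accepts either an
// owned value (an Arc is created internally) or an Arc that is already shared.
use std::sync::Arc;

struct MyAutomaton;

struct Holder {
    automaton: Arc<MyAutomaton>,
}

impl Holder {
    fn new<A: Into<Arc<MyAutomaton>>>(automaton: A) -> Holder {
        Holder {
            automaton: automaton.into(),
        }
    }
}

fn main() {
    let owned = Holder::new(MyAutomaton);
    assert_eq!(Arc::strong_count(&owned.automaton), 1);

    let shared = Arc::new(MyAutomaton);
    let from_arc = Holder::new(Arc::clone(&shared));
    assert_eq!(Arc::strong_count(&from_arc.automaton), 2);
}
```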

View File

@@ -216,7 +216,6 @@ mod tests {
assert!(!docset.advance()); assert!(!docset.advance());
} }
} }
} }
#[cfg(all(test, feature = "unstable"))] #[cfg(all(test, feature = "unstable"))]
@@ -224,13 +223,12 @@ mod bench {
use super::BitSet; use super::BitSet;
use super::BitSetDocSet; use super::BitSetDocSet;
use test; use crate::test;
use tests; use crate::tests;
use DocSet; use crate::DocSet;
#[bench] #[bench]
fn bench_bitset_1pct_insert(b: &mut test::Bencher) { fn bench_bitset_1pct_insert(b: &mut test::Bencher) {
use tests;
let els = tests::generate_nonunique_unsorted(1_000_000u32, 10_000); let els = tests::generate_nonunique_unsorted(1_000_000u32, 10_000);
b.iter(|| { b.iter(|| {
let mut bitset = BitSet::with_max_value(1_000_000); let mut bitset = BitSet::with_max_value(1_000_000);
@@ -242,7 +240,6 @@ mod bench {
#[bench] #[bench]
fn bench_bitset_1pct_clone(b: &mut test::Bencher) { fn bench_bitset_1pct_clone(b: &mut test::Bencher) {
use tests;
let els = tests::generate_nonunique_unsorted(1_000_000u32, 10_000); let els = tests::generate_nonunique_unsorted(1_000_000u32, 10_000);
let mut bitset = BitSet::with_max_value(1_000_000); let mut bitset = BitSet::with_max_value(1_000_000);
for el in els { for el in els {

View File

@@ -25,7 +25,6 @@ fn compute_tf_cache(average_fieldnorm: f32) -> [f32; 256] {
cache cache
} }
#[derive(Clone)]
pub struct BM25Weight { pub struct BM25Weight {
idf_explain: Explanation, idf_explain: Explanation,
weight: f32, weight: f32,
@@ -34,6 +33,15 @@ pub struct BM25Weight {
} }
impl BM25Weight { impl BM25Weight {
pub fn boost_by(&self, boost: f32) -> BM25Weight {
BM25Weight {
idf_explain: self.idf_explain.clone(),
weight: self.weight * boost,
cache: self.cache,
average_fieldnorm: self.average_fieldnorm,
}
}
pub fn for_terms(searcher: &Searcher, terms: &[Term]) -> BM25Weight { pub fn for_terms(searcher: &Searcher, terms: &[Term]) -> BM25Weight {
assert!(!terms.is_empty(), "BM25 requires at least one term"); assert!(!terms.is_empty(), "BM25 requires at least one term");
let field = terms[0].field(); let field = terms[0].field();
@@ -137,5 +145,4 @@ mod tests {
fn test_idf() { fn test_idf() {
assert_nearly_equals(idf(1, 2), 0.6931472); assert_nearly_equals(idf(1, 2), 0.6931472);
} }
} }

View File

@@ -5,11 +5,11 @@ use crate::query::TermQuery;
use crate::query::Weight; use crate::query::Weight;
use crate::schema::IndexRecordOption; use crate::schema::IndexRecordOption;
use crate::schema::Term; use crate::schema::Term;
use crate::Result;
use crate::Searcher; use crate::Searcher;
use std::collections::BTreeSet; use std::collections::BTreeSet;
/// The boolean query combines a set of queries /// The boolean query returns a set of documents
/// that matches the Boolean combination of constituent subqueries.
/// ///
/// The documents matched by the boolean query are /// The documents matched by the boolean query are
/// those which /// those which
@@ -19,6 +19,113 @@ use std::collections::BTreeSet;
/// `MustNot` occurrence. /// `MustNot` occurrence.
/// * match at least one of the subqueries that is not /// * match at least one of the subqueries that is not
/// a `MustNot` occurrence. /// a `MustNot` occurrence.
///
///
/// You can combine other query types and their `Occur`rences into one `BooleanQuery`
///
/// ```rust
///use tantivy::collector::Count;
///use tantivy::doc;
///use tantivy::query::{BooleanQuery, Occur, PhraseQuery, Query, TermQuery};
///use tantivy::schema::{IndexRecordOption, Schema, TEXT};
///use tantivy::Term;
///use tantivy::Index;
///
///fn main() -> tantivy::Result<()> {
/// let mut schema_builder = Schema::builder();
/// let title = schema_builder.add_text_field("title", TEXT);
/// let body = schema_builder.add_text_field("body", TEXT);
/// let schema = schema_builder.build();
/// let index = Index::create_in_ram(schema);
/// {
/// let mut index_writer = index.writer(3_000_000)?;
/// index_writer.add_document(doc!(
/// title => "The Name of the Wind",
/// ));
/// index_writer.add_document(doc!(
/// title => "The Diary of Muadib",
/// ));
/// index_writer.add_document(doc!(
/// title => "A Dairy Cow",
/// body => "hidden",
/// ));
/// index_writer.add_document(doc!(
/// title => "A Dairy Cow",
/// body => "found",
/// ));
/// index_writer.add_document(doc!(
/// title => "The Diary of a Young Girl",
/// ));
/// index_writer.commit().unwrap();
/// }
///
/// let reader = index.reader()?;
/// let searcher = reader.searcher();
///
/// // Make TermQuery's for "girl" and "diary" in the title
/// let girl_term_query: Box<dyn Query> = Box::new(TermQuery::new(
/// Term::from_field_text(title, "girl"),
/// IndexRecordOption::Basic,
/// ));
/// let diary_term_query: Box<dyn Query> = Box::new(TermQuery::new(
/// Term::from_field_text(title, "diary"),
/// IndexRecordOption::Basic,
/// ));
/// // A TermQuery with "found" in the body
/// let body_term_query: Box<dyn Query> = Box::new(TermQuery::new(
/// Term::from_field_text(body, "found"),
/// IndexRecordOption::Basic,
/// ));
/// // TermQuery "diary" must and "girl" must not be present
/// let queries_with_occurs1 = vec![
/// (Occur::Must, diary_term_query.box_clone()),
/// (Occur::MustNot, girl_term_query),
/// ];
/// // Make a BooleanQuery equivalent to
/// // title:+diary title:-girl
/// let diary_must_and_girl_mustnot = BooleanQuery::from(queries_with_occurs1);
/// let count1 = searcher.search(&diary_must_and_girl_mustnot, &Count)?;
/// assert_eq!(count1, 1);
///
/// // TermQuery for "cow" in the title
/// let cow_term_query: Box<dyn Query> = Box::new(TermQuery::new(
/// Term::from_field_text(title, "cow"),
/// IndexRecordOption::Basic,
/// ));
/// // "title:diary OR title:cow"
/// let title_diary_or_cow = BooleanQuery::from(vec![
/// (Occur::Should, diary_term_query.box_clone()),
/// (Occur::Should, cow_term_query),
/// ]);
/// let count2 = searcher.search(&title_diary_or_cow, &Count)?;
/// assert_eq!(count2, 4);
///
/// // Make a `PhraseQuery` from a vector of `Term`s
/// let phrase_query: Box<dyn Query> = Box::new(PhraseQuery::new(vec![
/// Term::from_field_text(title, "dairy"),
/// Term::from_field_text(title, "cow"),
/// ]));
/// // You can combine subqueries of different types into 1 BooleanQuery:
/// // `TermQuery` and `PhraseQuery`
/// // "title:diary OR "dairy cow"
/// let term_of_phrase_query = BooleanQuery::from(vec![
/// (Occur::Should, diary_term_query.box_clone()),
/// (Occur::Should, phrase_query.box_clone()),
/// ]);
/// let count3 = searcher.search(&term_of_phrase_query, &Count)?;
/// assert_eq!(count3, 4);
///
/// // You can nest one BooleanQuery inside another
/// // body:found AND ("title:diary OR "dairy cow")
/// let nested_query = BooleanQuery::from(vec![
/// (Occur::Must, body_term_query),
/// (Occur::Must, Box::new(term_of_phrase_query))
/// ]);
/// let count4 = searcher.search(&nested_query, &Count)?;
/// assert_eq!(count4, 1);
/// Ok(())
///}
/// ```
#[derive(Debug)] #[derive(Debug)]
pub struct BooleanQuery { pub struct BooleanQuery {
subqueries: Vec<(Occur, Box<dyn Query>)>, subqueries: Vec<(Occur, Box<dyn Query>)>,
@@ -41,14 +148,14 @@ impl From<Vec<(Occur, Box<dyn Query>)>> for BooleanQuery {
} }
impl Query for BooleanQuery { impl Query for BooleanQuery {
fn weight(&self, searcher: &Searcher, scoring_enabled: bool) -> Result<Box<dyn Weight>> { fn weight(&self, searcher: &Searcher, scoring_enabled: bool) -> crate::Result<Box<dyn Weight>> {
let sub_weights = self let sub_weights = self
.subqueries .subqueries
.iter() .iter()
.map(|&(ref occur, ref subquery)| { .map(|&(ref occur, ref subquery)| {
Ok((*occur, subquery.weight(searcher, scoring_enabled)?)) Ok((*occur, subquery.weight(searcher, scoring_enabled)?))
}) })
.collect::<Result<_>>()?; .collect::<crate::Result<_>>()?;
Ok(Box::new(BooleanWeight::new(sub_weights, scoring_enabled))) Ok(Box::new(BooleanWeight::new(sub_weights, scoring_enabled)))
} }

View File

@@ -10,7 +10,6 @@ use crate::query::Scorer;
use crate::query::Union; use crate::query::Union;
use crate::query::Weight; use crate::query::Weight;
use crate::query::{intersect_scorers, Explanation}; use crate::query::{intersect_scorers, Explanation};
use crate::Result;
use crate::{DocId, SkipResult}; use crate::{DocId, SkipResult};
use std::collections::HashMap; use std::collections::HashMap;
@@ -56,10 +55,11 @@ impl BooleanWeight {
fn per_occur_scorers( fn per_occur_scorers(
&self, &self,
reader: &SegmentReader, reader: &SegmentReader,
) -> Result<HashMap<Occur, Vec<Box<dyn Scorer>>>> { boost: f32,
) -> crate::Result<HashMap<Occur, Vec<Box<dyn Scorer>>>> {
let mut per_occur_scorers: HashMap<Occur, Vec<Box<dyn Scorer>>> = HashMap::new(); let mut per_occur_scorers: HashMap<Occur, Vec<Box<dyn Scorer>>> = HashMap::new();
for &(ref occur, ref subweight) in &self.weights { for &(ref occur, ref subweight) in &self.weights {
let sub_scorer: Box<dyn Scorer> = subweight.scorer(reader)?; let sub_scorer: Box<dyn Scorer> = subweight.scorer(reader, boost)?;
per_occur_scorers per_occur_scorers
.entry(*occur) .entry(*occur)
.or_insert_with(Vec::new) .or_insert_with(Vec::new)
@@ -71,8 +71,9 @@ impl BooleanWeight {
fn complex_scorer<TScoreCombiner: ScoreCombiner>( fn complex_scorer<TScoreCombiner: ScoreCombiner>(
&self, &self,
reader: &SegmentReader, reader: &SegmentReader,
) -> Result<Box<dyn Scorer>> { boost: f32,
let mut per_occur_scorers = self.per_occur_scorers(reader)?; ) -> crate::Result<Box<dyn Scorer>> {
let mut per_occur_scorers = self.per_occur_scorers(reader, boost)?;
let should_scorer_opt: Option<Box<dyn Scorer>> = per_occur_scorers let should_scorer_opt: Option<Box<dyn Scorer>> = per_occur_scorers
.remove(&Occur::Should) .remove(&Occur::Should)
@@ -113,7 +114,7 @@ impl BooleanWeight {
} }
impl Weight for BooleanWeight { impl Weight for BooleanWeight {
fn scorer(&self, reader: &SegmentReader) -> Result<Box<dyn Scorer>> { fn scorer(&self, reader: &SegmentReader, boost: f32) -> crate::Result<Box<dyn Scorer>> {
if self.weights.is_empty() { if self.weights.is_empty() {
Ok(Box::new(EmptyScorer)) Ok(Box::new(EmptyScorer))
} else if self.weights.len() == 1 { } else if self.weights.len() == 1 {
@@ -121,17 +122,17 @@ impl Weight for BooleanWeight {
if occur == Occur::MustNot { if occur == Occur::MustNot {
Ok(Box::new(EmptyScorer)) Ok(Box::new(EmptyScorer))
} else { } else {
weight.scorer(reader) weight.scorer(reader, boost)
} }
} else if self.scoring_enabled { } else if self.scoring_enabled {
self.complex_scorer::<SumWithCoordsCombiner>(reader) self.complex_scorer::<SumWithCoordsCombiner>(reader, boost)
} else { } else {
self.complex_scorer::<DoNothingCombiner>(reader) self.complex_scorer::<DoNothingCombiner>(reader, boost)
} }
} }
fn explain(&self, reader: &SegmentReader, doc: DocId) -> Result<Explanation> { fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader)?; let mut scorer = self.scorer(reader, 1.0f32)?;
if scorer.skip_next(doc) != SkipResult::Reached { if scorer.skip_next(doc) != SkipResult::Reached {
return Err(does_not_match(doc)); return Err(does_not_match(doc));
} }

View File

@@ -18,6 +18,7 @@ mod tests {
use crate::query::Scorer; use crate::query::Scorer;
use crate::query::TermQuery; use crate::query::TermQuery;
use crate::schema::*; use crate::schema::*;
use crate::tests::assert_nearly_equals;
use crate::Index; use crate::Index;
use crate::{DocAddress, DocId}; use crate::{DocAddress, DocId};
@@ -30,24 +31,11 @@ mod tests {
// writing the segment // writing the segment
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
{ {
let doc = doc!(text_field => "a b c"); index_writer.add_document(doc!(text_field => "a b c"));
index_writer.add_document(doc); index_writer.add_document(doc!(text_field => "a c"));
} index_writer.add_document(doc!(text_field => "b c"));
{ index_writer.add_document(doc!(text_field => "a b c d"));
let doc = doc!(text_field => "a c"); index_writer.add_document(doc!(text_field => "d"));
index_writer.add_document(doc);
}
{
let doc = doc!(text_field => "b c");
index_writer.add_document(doc);
}
{
let doc = doc!(text_field => "a b c d");
index_writer.add_document(doc);
}
{
let doc = doc!(text_field => "d");
index_writer.add_document(doc);
} }
assert!(index_writer.commit().is_ok()); assert!(index_writer.commit().is_ok());
} }
@@ -70,7 +58,9 @@ mod tests {
let query = query_parser.parse_query("+a").unwrap(); let query = query_parser.parse_query("+a").unwrap();
let searcher = index.reader().unwrap().searcher(); let searcher = index.reader().unwrap().searcher();
let weight = query.weight(&searcher, true).unwrap(); let weight = query.weight(&searcher, true).unwrap();
let scorer = weight.scorer(searcher.segment_reader(0u32)).unwrap(); let scorer = weight
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<TermScorer>()); assert!(scorer.is::<TermScorer>());
} }
@@ -82,13 +72,17 @@ mod tests {
{ {
let query = query_parser.parse_query("+a +b +c").unwrap(); let query = query_parser.parse_query("+a +b +c").unwrap();
let weight = query.weight(&searcher, true).unwrap(); let weight = query.weight(&searcher, true).unwrap();
let scorer = weight.scorer(searcher.segment_reader(0u32)).unwrap(); let scorer = weight
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<Intersection<TermScorer>>()); assert!(scorer.is::<Intersection<TermScorer>>());
} }
{ {
let query = query_parser.parse_query("+a +(b c)").unwrap(); let query = query_parser.parse_query("+a +(b c)").unwrap();
let weight = query.weight(&searcher, true).unwrap(); let weight = query.weight(&searcher, true).unwrap();
let scorer = weight.scorer(searcher.segment_reader(0u32)).unwrap(); let scorer = weight
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<Intersection<Box<dyn Scorer>>>()); assert!(scorer.is::<Intersection<Box<dyn Scorer>>>());
} }
} }
@@ -101,7 +95,9 @@ mod tests {
{ {
let query = query_parser.parse_query("+a b").unwrap(); let query = query_parser.parse_query("+a b").unwrap();
let weight = query.weight(&searcher, true).unwrap(); let weight = query.weight(&searcher, true).unwrap();
let scorer = weight.scorer(searcher.segment_reader(0u32)).unwrap(); let scorer = weight
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<RequiredOptionalScorer< assert!(scorer.is::<RequiredOptionalScorer<
Box<dyn Scorer>, Box<dyn Scorer>,
Box<dyn Scorer>, Box<dyn Scorer>,
@@ -111,7 +107,9 @@ mod tests {
{ {
let query = query_parser.parse_query("+a b").unwrap(); let query = query_parser.parse_query("+a b").unwrap();
let weight = query.weight(&searcher, false).unwrap(); let weight = query.weight(&searcher, false).unwrap();
let scorer = weight.scorer(searcher.segment_reader(0u32)).unwrap(); let scorer = weight
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<TermScorer>()); assert!(scorer.is::<TermScorer>());
} }
} }
@@ -179,6 +177,50 @@ mod tests {
} }
} }
#[test]
pub fn test_boolean_query_with_weight() {
let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
{
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer.add_document(doc!(text_field => "a b c"));
index_writer.add_document(doc!(text_field => "a c"));
index_writer.add_document(doc!(text_field => "b c"));
assert!(index_writer.commit().is_ok());
}
let term_a: Box<dyn Query> = Box::new(TermQuery::new(
Term::from_field_text(text_field, "a"),
IndexRecordOption::WithFreqs,
));
let term_b: Box<dyn Query> = Box::new(TermQuery::new(
Term::from_field_text(text_field, "b"),
IndexRecordOption::WithFreqs,
));
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let boolean_query =
BooleanQuery::from(vec![(Occur::Should, term_a), (Occur::Should, term_b)]);
let boolean_weight = boolean_query.weight(&searcher, true).unwrap();
{
let mut boolean_scorer = boolean_weight
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(boolean_scorer.advance());
assert_eq!(boolean_scorer.doc(), 0u32);
assert_nearly_equals(boolean_scorer.score(), 0.84163445f32);
}
{
let mut boolean_scorer = boolean_weight
.scorer(searcher.segment_reader(0u32), 2.0f32)
.unwrap();
assert!(boolean_scorer.advance());
assert_eq!(boolean_scorer.doc(), 0u32);
assert_nearly_equals(boolean_scorer.score(), 1.6832689f32);
}
}
#[test] #[test]
pub fn test_intersection_score() { pub fn test_intersection_score() {
let (index, text_field) = aux_test_helper(); let (index, text_field) = aux_test_helper();
@@ -247,11 +289,11 @@ mod tests {
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
let query_parser = QueryParser::for_index(&index, vec![title, text]); let query_parser = QueryParser::for_index(&index, vec![title, text]);
let query = query_parser let query = query_parser.parse_query("Оксана Лифенко").unwrap();
.parse_query("Оксана Лифенко")
.unwrap();
let weight = query.weight(&searcher, true).unwrap(); let weight = query.weight(&searcher, true).unwrap();
let mut scorer = weight.scorer(searcher.segment_reader(0u32)).unwrap(); let mut scorer = weight
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
scorer.advance(); scorer.advance();
let explanation = query.explain(&searcher, DocAddress(0u32, 0u32)).unwrap(); let explanation = query.explain(&searcher, DocAddress(0u32, 0u32)).unwrap();

src/query/boost_query.rs (new file, 164 lines added)
View File

@@ -0,0 +1,164 @@
use crate::common::BitSet;
use crate::fastfield::DeleteBitSet;
use crate::query::explanation::does_not_match;
use crate::query::{Explanation, Query, Scorer, Weight};
use crate::{DocId, DocSet, Searcher, SegmentReader, SkipResult, Term};
use std::collections::BTreeSet;
use std::fmt;
/// `BoostQuery` is a wrapper over a query used to boost its score.
///
/// The document set matched by the `BoostQuery` is strictly the same as the underlying query.
/// The score of each document, is the score of the underlying query multiplied by the `boost`
/// factor.
pub struct BoostQuery {
query: Box<dyn Query>,
boost: f32,
}
impl BoostQuery {
/// Builds a boost query.
pub fn new(query: Box<dyn Query>, boost: f32) -> BoostQuery {
BoostQuery { query, boost }
}
}
impl Clone for BoostQuery {
fn clone(&self) -> Self {
BoostQuery {
query: self.query.box_clone(),
boost: self.boost,
}
}
}
impl fmt::Debug for BoostQuery {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "Boost(query={:?}, boost={})", self.query, self.boost)
}
}
impl Query for BoostQuery {
fn weight(&self, searcher: &Searcher, scoring_enabled: bool) -> crate::Result<Box<dyn Weight>> {
let weight_without_boost = self.query.weight(searcher, scoring_enabled)?;
let boosted_weight = if scoring_enabled {
Box::new(BoostWeight::new(weight_without_boost, self.boost))
} else {
weight_without_boost
};
Ok(boosted_weight)
}
fn query_terms(&self, term_set: &mut BTreeSet<Term>) {
self.query.query_terms(term_set)
}
}
pub(crate) struct BoostWeight {
weight: Box<dyn Weight>,
boost: f32,
}
impl BoostWeight {
pub fn new(weight: Box<dyn Weight>, boost: f32) -> Self {
BoostWeight { weight, boost }
}
}
impl Weight for BoostWeight {
fn scorer(&self, reader: &SegmentReader, boost: f32) -> crate::Result<Box<dyn Scorer>> {
self.weight.scorer(reader, boost * self.boost)
}
fn explain(&self, reader: &SegmentReader, doc: u32) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0f32)?;
if scorer.skip_next(doc) != SkipResult::Reached {
return Err(does_not_match(doc));
}
let mut explanation =
Explanation::new(format!("Boost x{} of ...", self.boost), scorer.score());
let underlying_explanation = self.weight.explain(reader, doc)?;
explanation.add_detail(underlying_explanation);
Ok(explanation)
}
fn count(&self, reader: &SegmentReader) -> crate::Result<u32> {
self.weight.count(reader)
}
}
pub(crate) struct BoostScorer<S: Scorer> {
underlying: S,
boost: f32,
}
impl<S: Scorer> BoostScorer<S> {
pub fn new(underlying: S, boost: f32) -> BoostScorer<S> {
BoostScorer { underlying, boost }
}
}
impl<S: Scorer> DocSet for BoostScorer<S> {
fn advance(&mut self) -> bool {
self.underlying.advance()
}
fn skip_next(&mut self, target: DocId) -> SkipResult {
self.underlying.skip_next(target)
}
fn fill_buffer(&mut self, buffer: &mut [DocId]) -> usize {
self.underlying.fill_buffer(buffer)
}
fn doc(&self) -> u32 {
self.underlying.doc()
}
fn size_hint(&self) -> u32 {
self.underlying.size_hint()
}
fn append_to_bitset(&mut self, bitset: &mut BitSet) {
self.underlying.append_to_bitset(bitset)
}
fn count(&mut self, delete_bitset: &DeleteBitSet) -> u32 {
self.underlying.count(delete_bitset)
}
fn count_including_deleted(&mut self) -> u32 {
self.underlying.count_including_deleted()
}
}
impl<S: Scorer> Scorer for BoostScorer<S> {
fn score(&mut self) -> f32 {
self.underlying.score() * self.boost
}
}
#[cfg(test)]
mod tests {
use super::BoostQuery;
use crate::query::{AllQuery, Query};
use crate::schema::Schema;
use crate::{DocAddress, Document, Index};
#[test]
fn test_boost_query_explain() {
let schema = Schema::builder().build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer.add_document(Document::new());
assert!(index_writer.commit().is_ok());
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let query = BoostQuery::new(Box::new(AllQuery), 0.2);
let explanation = query.explain(&searcher, DocAddress(0, 0u32)).unwrap();
assert_eq!(
explanation.to_pretty_json(),
"{\n \"value\": 0.2,\n \"description\": \"Boost x0.2 of ...\",\n \"details\": [\n {\n \"value\": 1.0,\n \"description\": \"AllQuery\"\n }\n ]\n}"
)
}
}
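A minimal usage sketch of the new `BoostQuery`: wrapping a query multiplies its scores by the boost factor while leaving the matched document set unchanged. The empty schema and single empty document mirror the explain test above; the expected score assumes `AllQuery` scores 1.0, as that test indicates:

```rust
// End-to-end sketch: BoostQuery multiplies the wrapped query's score by the boost.
use tantivy::collector::TopDocs;
use tantivy::query::{AllQuery, BoostQuery};
use tantivy::schema::Schema;
use tantivy::{Document, Index};

fn main() -> tantivy::Result<()> {
    let schema = Schema::builder().build();
    let index = Index::create_in_ram(schema);
    let mut writer = index.writer_with_num_threads(1, 3_000_000)?;
    writer.add_document(Document::new());
    writer.commit()?;
    let searcher = index.reader()?.searcher();

    let boosted = BoostQuery::new(Box::new(AllQuery), 0.2);
    let hits = searcher.search(&boosted, &TopDocs::with_limit(1))?;
    let (score, _address) = hits[0];
    assert!((score - 0.2).abs() < 1e-6); // AllQuery scores 1.0, boosted to 0.2
    Ok(())
}
```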

View File

@@ -4,7 +4,6 @@ use crate::query::Weight;
use crate::query::{Explanation, Query}; use crate::query::{Explanation, Query};
use crate::DocId; use crate::DocId;
use crate::DocSet; use crate::DocSet;
use crate::Result;
use crate::Score; use crate::Score;
use crate::Searcher; use crate::Searcher;
use crate::SegmentReader; use crate::SegmentReader;
@@ -16,11 +15,15 @@ use crate::SegmentReader;
pub struct EmptyQuery; pub struct EmptyQuery;
impl Query for EmptyQuery { impl Query for EmptyQuery {
fn weight(&self, _searcher: &Searcher, _scoring_enabled: bool) -> Result<Box<dyn Weight>> { fn weight(
&self,
_searcher: &Searcher,
_scoring_enabled: bool,
) -> crate::Result<Box<dyn Weight>> {
Ok(Box::new(EmptyWeight)) Ok(Box::new(EmptyWeight))
} }
fn count(&self, _searcher: &Searcher) -> Result<usize> { fn count(&self, _searcher: &Searcher) -> crate::Result<usize> {
Ok(0) Ok(0)
} }
} }
@@ -30,11 +33,11 @@ impl Query for EmptyQuery {
/// It is useful for tests and handling edge cases. /// It is useful for tests and handling edge cases.
pub struct EmptyWeight; pub struct EmptyWeight;
impl Weight for EmptyWeight { impl Weight for EmptyWeight {
fn scorer(&self, _reader: &SegmentReader) -> Result<Box<dyn Scorer>> { fn scorer(&self, _reader: &SegmentReader, _boost: f32) -> crate::Result<Box<dyn Scorer>> {
Ok(Box::new(EmptyScorer)) Ok(Box::new(EmptyScorer))
} }
fn explain(&self, _reader: &SegmentReader, doc: DocId) -> Result<Explanation> { fn explain(&self, _reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
Err(does_not_match(doc)) Err(does_not_match(doc))
} }
} }

View File

@@ -54,21 +54,21 @@ where
match self.excluding_state { match self.excluding_state {
State::ExcludeOne(excluded_doc) => { State::ExcludeOne(excluded_doc) => {
if doc == excluded_doc { if doc == excluded_doc {
false return false;
} else if excluded_doc > doc { }
true if excluded_doc > doc {
} else { return true;
match self.excluding_docset.skip_next(doc) { }
SkipResult::OverStep => { match self.excluding_docset.skip_next(doc) {
self.excluding_state = State::ExcludeOne(self.excluding_docset.doc()); SkipResult::OverStep => {
true self.excluding_state = State::ExcludeOne(self.excluding_docset.doc());
} true
SkipResult::End => {
self.excluding_state = State::Finished;
true
}
SkipResult::Reached => false,
} }
SkipResult::End => {
self.excluding_state = State::Finished;
true
}
SkipResult::Reached => false,
} }
} }
State::Finished => true, State::Finished => true,
@@ -175,5 +175,4 @@ mod tests {
sample_skip, sample_skip,
); );
} }
} }

Some files were not shown because too many files have changed in this diff Show More