Compare commits

...

101 Commits

Author SHA1 Message Date
Paul Masurel
7cb018c640 Add an option to opt out of fieldnorms for indexed fields.
Closes #922
2020-11-03 16:15:20 +09:00
Paul Masurel
6b5a5ac1d0 Merge pull request #923 from tantivy-search/refact-param-serialize
Minor refactoring postings serializers options.
2020-11-03 15:49:34 +09:00
Paul Masurel
581c2bb718 Minor refactoring postings serializers options. 2020-11-03 15:47:25 +09:00
Paul Masurel
3d192c0f57 Merge pull request #921 from tantivy-search/more-pub-for-hot-directory
Exposing API for the hot directory
2020-10-29 13:04:37 +09:00
Paul Masurel
9dc36f4431 Exposing API for the hot directory 2020-10-29 13:04:13 +09:00
Paul Masurel
730ccefffb Fixes a bug in TermQuery::explain.
Closes #915
2020-10-28 22:29:15 +09:00
Paul Masurel
2c56f4b583 Updated CHANGELOG 2020-10-28 17:39:01 +09:00
Paul Masurel
9e27da8b4e Added CR comments.
Added Unit tests.
2020-10-28 17:35:34 +09:00
Adrien Guillo
7f373f232a Add helper methods for BooleanQuery 2020-10-28 17:35:34 +09:00
Stephen Becker IV
6f0487979c Removing Inoperable 'Say Thanks' Links (#919)
Dearest Maintainer,

The say thanks project moved to email https://github.com/BlitzKraft/saythanks.io/issues/60.  I removed the links. You might want to use your email but at that point people could just email you thanks?

Anyway, Thanks for the hard work on the project. I am enjoying it.

Dictated but not reviewed,
Becker
2020-10-28 15:08:47 +09:00
Pasha Podolsky
71c66a5405 [tantivy] Run clippy linter (#914) 2020-10-27 14:36:02 +09:00
Paul Masurel
2eb5326aa4 Fixing compilation 2020-10-27 14:00:14 +09:00
Paul Masurel
91e92fa8a3 Made public. 2020-10-20 14:59:41 +09:00
Paul Masurel
9cc1661ce2 Updating crossbeam (#909) 2020-10-13 10:55:50 +09:00
Paul Masurel
c3f44d38f3 Moving HasLen (#910) 2020-10-13 10:19:30 +09:00
Paul Masurel
01b4aa9adc Refactoring dir (#905) 2020-10-11 22:22:56 +09:00
Paul Masurel
7a78b1cba3 Fix unit test on windows 2020-10-09 14:57:39 +09:00
Paul Masurel
4d011cc648 Updated changelog 2020-10-09 14:54:07 +09:00
Pasha Podolsky
80cbe889ba [tantivy] Add brotli codec for row storage (#885)
* [tantivy] Add brotli codec for row storage

* [tantivy] Fix not actual comments for code

* [CR] Fixes for comment and cursor
2020-10-09 14:51:42 +09:00
Paul Masurel
c23a03ad81 Large API Change in the Directory API. (#901)
Tantivy used to assume that all files could be somehow memory mapped. After this change, Directory returns a `FileSlice` that can be reduced and eventually read into an `OwnedBytes` object. Long and blocking IO operations are still required, but they no longer span the entire file.
2020-10-08 16:36:51 +09:00
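For context, a minimal sketch of what reading through the new `FileSlice`-based API could look like. This is not code from the change itself; the method names (`open_read`, `slice`, `read_bytes`) and error handling are assumptions based on the description above.

```rust
use std::path::Path;

use tantivy::directory::{Directory, FileSlice, OwnedBytes};

// Hypothetical helper: read only the first 8 bytes of a file.
fn read_header(dir: &dyn Directory, path: &Path) -> std::io::Result<OwnedBytes> {
    // `open_read` now hands back a `FileSlice`; no bytes are read at this point.
    let file: FileSlice = dir.open_read(path).expect("file should exist");
    // The slice can be reduced without touching the disk...
    let header = file.slice(0..8);
    // ...and only this call performs the (possibly blocking) read, for 8 bytes only.
    header.read_bytes()
}
```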
Paul Masurel
579e3d1ed8 Removed dev-deps to serde_yaml 2020-10-06 10:04:06 +09:00
Pasha Podolsky
687a36a49c [tantivy] Fix for schema deserialization error (#902)
Co-authored-by: Pasha <pasha@izihawa.net>
2020-10-05 11:24:48 +09:00
Paul Masurel
ad82b455a3 Minor change 2020-10-01 20:45:07 +09:00
Paul Masurel
848afa43ee Merge branch 'issue/896' into main 2020-10-01 20:43:42 +09:00
Paul Masurel
7720d21265 Closes #896 - Facet reader related
Bugfix. Acquiring a facet reader on a segment that does not contain any
doc with this facet returns `None`.
2020-10-01 20:25:28 +09:00
Paul Masurel
96f946d4c3 Raultang master (#879)
* add support for indexed bytes fast field

* remove backup code file

* refine test cases

* Simplified unit test. Renamed it as it is testing the storable part. Not the indexed part.

* Small refactoring and added unit test. If multivalued we only retain the first FAST value.

Co-authored-by: Raul <raul.tang.lc@gmail.com>
2020-10-01 18:03:18 +09:00
dependabot-preview[bot]
3432149759 Update base64 requirement from 0.12 to 0.13 (#895)
Updates the requirements on [base64](https://github.com/marshallpierce/rust-base64) to permit the latest version.
- [Release notes](https://github.com/marshallpierce/rust-base64/releases)
- [Changelog](https://github.com/marshallpierce/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/marshallpierce/rust-base64/compare/v0.12.0...v0.13.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2020-10-01 11:37:36 +09:00
Paul Masurel
392221e36a Removing dead file 2020-10-01 11:36:55 +09:00
Paul Masurel
674cae8ee2 Issue/822 TopDocs sorted by i64, and date fastfield (in addition to u64) (#890)
* Unsatisfactory implementation.

The fast fields are hit. But for performance, we want the comparison to happen on u64,
and the conversion to the fast type to be done only on the selected top-K
elements.

For i64, the current approach might be ok.
For DateTime, it is most likely catastrophic.

Closes #822

* Decoupled SegmentCollector Fruit from Collector Fruit.

Deferred the conversion from u64 to the proper fast field type until after the overall collection.
(tantivy guarantees that u64 encoding is consistent with the original
ordering of the fastfield)

Closes #882
2020-09-30 17:51:11 +09:00
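The guarantee mentioned in that last point is what makes the deferred conversion safe: fast field values are stored as u64 with an order-preserving encoding, so the top-K selection can run on raw u64 values and only the winners need to be decoded. A minimal illustration of such a mapping for i64 (the standard sign-bit flip, shown for explanation rather than as tantivy's exact internal code):

```rust
// Order-preserving i64 -> u64 mapping: flipping the sign bit sends
// i64::MIN to 0 and i64::MAX to u64::MAX, so u64 order == i64 order.
fn i64_to_u64(val: i64) -> u64 {
    (val as u64) ^ (1u64 << 63)
}

fn u64_to_i64(val: u64) -> i64 {
    (val ^ (1u64 << 63)) as i64
}

fn main() {
    let values = [-5i64, 3, -1, 7];
    let mut encoded: Vec<u64> = values.iter().copied().map(i64_to_u64).collect();
    // Sorting (or selecting a top-K) on the encoded values preserves the i64 order,
    // so decoding can be deferred to the final, already-selected elements.
    encoded.sort_unstable();
    let decoded: Vec<i64> = encoded.into_iter().map(u64_to_i64).collect();
    assert_eq!(decoded, vec![-5, -1, 3, 7]);
}
```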
Paul Masurel
838c476733 Hirevo move to thiserror (#889)
* Migrated from `failure` to `thiserror`

* Refactoring

Co-authored-by: Nicolas Polomack <nicolas@polomack.eu>
2020-09-30 16:34:10 +09:00
Paul Masurel
5f574348d1 Syntactic change. 2020-09-26 21:33:00 +09:00
Paul Masurel
19a02b2c30 Merge tag '0.13.1'
0.13.1 was published as a hotfix to accommodate tantivy-py.
2020-09-19 21:20:27 +09:00
Paul Masurel
c339b05789 Bumped version and edited changelog 2020-09-19 21:13:19 +09:00
Paul Masurel
2d3c657f9d Added Send Sync to collectors. 2020-09-19 21:04:44 +09:00
Paul Masurel
07f9b828ae Added Send and Sync to the Query trait. 2020-09-19 21:04:29 +09:00
Paul Masurel
70bae7ce4c Removing Term Vec allocation (#881) 2020-09-08 23:11:00 +09:00
Paul Masurel
ac2a7273e6 Re-added comment to Score. 2020-09-08 21:41:34 +09:00
Paul Masurel
4ce9517a82 fix unit test for bench. remove scoref64 feature. fixed test for lz4 feature. 2020-09-08 07:35:00 +09:00
Paul Masurel
73024a8af3 Fixing compilation of bench and doctests. 2020-09-08 07:18:43 +09:00
Paul Masurel
e70e605fc3 fix unit test (at least on linux) 2020-09-07 23:35:04 +09:00
Paul Masurel
439d6956a9 Returning Result in some of the API (#880)
* Returning Result in some of the API

* Introducing `.writer_for_test(..)`
2020-09-07 15:52:34 +09:00
Paul Masurel
6530bf0eae Make field types less strict when populating documents. 2020-09-06 10:24:03 +09:00
Paul Masurel
151498cbe7 Creating the tempfile for atomicwrites in the same directory as the MmapDirectory. (#878) 2020-09-05 23:06:29 +09:00
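The reason for staging the temporary file next to the target is that an atomic write ends with a rename, and a rename is only atomic when source and destination live on the same filesystem. A minimal sketch of the pattern using the `tempfile` crate (illustrative, not the project's actual code):

```rust
use std::io::Write;
use std::path::Path;

use tempfile::NamedTempFile;

// Write `data` to `path` atomically by staging the bytes in `dir`
// (the same directory as `path`), so the final rename never crosses
// a filesystem boundary.
fn atomic_write(dir: &Path, path: &Path, data: &[u8]) -> std::io::Result<()> {
    let mut tmp = NamedTempFile::new_in(dir)?;
    tmp.write_all(data)?;
    tmp.as_file().sync_all()?;
    tmp.persist(path)?;
    Ok(())
}
```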
Paul Masurel
3a72b1cb98 Accept dash within field names. (#874)
Accept dash in field names and enforce field names constraint at the
creation of the schema.

Closes #796
2020-09-01 13:38:52 +09:00
Paul Masurel
2737822620 Fixing unit tests. (#868)
There was a unit test that failed when notify sent more than one event
on atomicwrites.

It was observed on the macOS CI.
2020-08-27 16:43:39 +09:00
b8591340
06c12ae221 Filter meta.json from validate_checksum (#872) 2020-08-27 07:54:37 +09:00
Paul Masurel
4e4400af7f Added cargo timing report to .gitignore 2020-08-23 16:15:28 +09:00
Paul Masurel
3f1ecf53ab Merge branch 'master' of github.com:tantivy-search/tantivy 2020-08-22 21:30:47 +09:00
Paul Masurel
0b583b8130 Plastic changes 2020-08-22 21:29:12 +09:00
Paul Masurel
31d18dca1c Removing dependency to atomicwrites (#866) 2020-08-21 21:37:05 +09:00
stephenlagree
5e06e7de5a Update basic_search.rs (#865)
Remove duplicated document entry.
2020-08-21 11:23:09 +09:00
Paul Masurel
8af53cbd36 Merge branch 'master' of github.com:tantivy-search/tantivy 2020-08-21 08:57:42 +09:00
Paul Masurel
4914076e8f Fixing release build 2020-08-21 08:57:27 +09:00
Paul Masurel
e04f47e922 Using block wand for term queries too. 2020-08-20 15:51:21 +09:00
Paul Masurel
f355695581 Code clean up 2020-08-20 15:42:50 +09:00
Paul Masurel
cbacdf0de8 Edited README. 2020-08-20 14:28:24 +09:00
Paul Masurel
3dd0322f4c Bumped version 2020-08-19 22:41:48 +09:00
Paul Masurel
2481c87be8 Block wand (#856) 2020-08-19 22:36:36 +09:00
Paul Masurel
b6a664b5f8 cargo fmt 2020-08-16 12:40:50 +09:00
lyj
25b666a7c9 Update occur.rs (#862) 2020-08-16 10:49:55 +09:00
Paul Masurel
9b41912e66 Bugfix (#861) 2020-08-12 16:06:24 +09:00
Paul Masurel
8e74bb98b5 Added field norm readers (#854) 2020-07-20 13:05:05 +09:00
Paul Masurel
6db8bb49d6 Assert nearly equals macro (#853)
* Assert nearly equals macro

* Renamed specialized_scorer in TermScorer
2020-07-17 16:40:41 +09:00
lyj
410aed0176 Update segment_updater.rs (#848) 2020-07-16 12:33:11 +09:00
aptend
00a239a712 fix typo in index_meta.rs (#851) 2020-07-16 12:32:45 +09:00
Paul Masurel
68fe406924 Removed asserts (#850) 2020-07-16 12:24:55 +09:00
Paul Masurel
f71b04acb0 Bugfix. (#849)
go_to_first_doc was typically calling seek with a target smaller than
doc.

Since SegmentPostings typically does a linear search on the full block,
regardless of the current position, this could make the segment postings
go backward.
2020-07-16 10:57:51 +09:00
lyj
1ab7f660a4 Update index.rs (#846) 2020-07-02 15:11:38 +09:00
Sean Stangl
0ebbc4cb5a Fix incorrect SimpleTokenizer link in documentation (#844) 2020-07-01 10:26:36 +09:00
lyj
5300cb5da0 Update mod.rs (#845) 2020-07-01 10:25:26 +09:00
Ype Kingma
7d773abc92 Boolean query: do not combine excluded scores. (#840)
* Do nothing when combining score values of excluded scores.

* Add test case for two excluded.

* Test score for two excluded terms.

* Use TopDocs in test_boolean_query_two_excluded
2020-06-08 20:01:19 +09:00
Paul Masurel
c34541ccce Alive doc iterator. (#837) 2020-06-05 19:42:51 +09:00
Paul Masurel
1cc5bd706c Fixes build for no-default-features (#839) 2020-06-05 19:41:55 +09:00
Paul Masurel
4026d183bc Small readability change 2020-06-03 09:04:57 +09:00
Paul Masurel
c0f5645cd9 Move for_each functions from Scorer to Weight. (#836)
* Move for_each functions from Scorer to Weight.

* Specialized foreach / foreach_pruning for union of termscorer.
2020-06-01 11:31:18 +09:00
Paul Masurel
cbff874e43 Change the loading of blocks. 2020-05-27 16:36:50 +09:00
Paul Masurel
baf015fc57 Simplification of the segment postings seek implementation. (#834) 2020-05-27 08:49:47 +09:00
Paul Masurel
7275ebdf3c Skiprefactoring skipabsolute (#831)
Simplification of the way we handle positions.
2020-05-25 09:51:23 +09:00
Paul Masurel
b974e7ce34 Closes #828. (#829)
There was a bug in the LogMergePolicy that surfaced when there were
segments, but all of the segments were larger than the max limit.

After filtering, the list of segments that were candidates for merge was empty, and
the code was indexing the first element of an empty Vec.
2020-05-22 16:24:07 +09:00
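Hypothetical illustration of the guard implied by the fix (names are made up; this is not the actual LogMergePolicy code):

```rust
// Keep only segments small enough to be merged, and bail out early
// instead of indexing into an empty candidate list.
fn merge_candidates(segment_sizes: &[u64], max_merge_size: u64) -> Vec<u64> {
    let candidates: Vec<u64> = segment_sizes
        .iter()
        .copied()
        .filter(|&size| size <= max_merge_size)
        .collect();
    if candidates.is_empty() {
        // Previously the code went on to read `candidates[0]` here, which panics.
        return Vec::new();
    }
    candidates
}
```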
Paul Masurel
8f8f34499f Updated CHANGELOG with the TopCollector offset information and cargo fmt. 2020-05-20 22:26:54 +09:00
Rob Young
6ea6f4bfcd Add offset to TopDocsCollector (#826)
* Add offset to TopDocsCollector

Add an offset to TopDocsCollector and TopDocs to make it clearer how to
handle pagination.

Closes #822

* Address review comments

- Make Debug formatting of TopDocs clearer.
- Add unit tests for limit and offset on TopCollector.
- Change API for using offset to a fluent interface.
- Add some context to the docstring to clarify what limit and offset are
  equivalent to in other projects.

* Changes required by rebase on e25284

- Pass Collector into TweakedScoreTopCollector and
  CustomScoreTopCollector.
- Add std:: qualifier to f32, i32 etc. Not sure why this was not failing
  already.
- Add unit tests for TopDocs with offset including for tweaked and
  custom score collectors.

In order to convert a TopCollector<Score> to a TopCollector<TScore> I
had to add an `into_tscore` method to `TopCollector`. This is a hack, but
I don't know how to avoid it.
2020-05-20 22:25:24 +09:00
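The fluent interface mentioned above presumably reads roughly as sketched below; the method name `and_offset` is an assumption, not taken from the PR itself.

```rust
use tantivy::collector::TopDocs;

// Collector for one "page" of results: equivalent to
// LIMIT page_size OFFSET page * page_size in SQL.
// Everything is still scored; only the requested window is returned.
fn page_collector(page: usize, page_size: usize) -> TopDocs {
    TopDocs::with_limit(page_size).and_offset(page * page_size)
}
```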
Paul Masurel
e25284bafe Major change in the DocSet/Scorer API (#824)
- Change in the DocSet and Scorer API. (@fulmicoton). 
A freshly created DocSet points directly to its first doc. A sentinel value called TERMINATED marks the end of a DocSet.
`.advance()` returns the new DocId. `Scorer::skip(target)` has been replaced by `Scorer::seek(target)` and returns the resulting DocId.
As a result, iterating through a DocSet now looks as follows:
```rust
let mut doc = docset.doc();
while doc != TERMINATED {
   // ...
   doc = docset.advance();
}
```
The change made it possible to greatly simplify a lot of the docset's code.
- Misc internal optimization and introduction of the `Scorer::for_each_pruning` function. (@fulmicoton)
2020-05-16 16:33:36 +09:00
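As a companion to the loop above, `seek` in the new API also returns the `DocId` it lands on, so skip-style code no longer needs a separate success flag. A small hedged sketch (the helper name is illustrative):

```rust
use tantivy::{DocId, DocSet, TERMINATED};

// Advance `docset` to the first doc >= `target`.
// Under the new API, `seek` returns the doc it lands on,
// or TERMINATED once the docset is exhausted.
fn seek_at_least<D: DocSet>(docset: &mut D, target: DocId) -> DocId {
    let doc = docset.seek(target);
    debug_assert!(doc == TERMINATED || doc >= target);
    doc
}
```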
Fisher Darling
8b67877cd5 Made field methods const fns (#823) 2020-05-16 10:59:50 +09:00
Rob Young
9de1360538 Minor doc and test improvements around fuzzy querying (#825) 2020-05-16 10:59:24 +09:00
Paul Masurel
c55db83609 Closes #805 (#820)
Added TryInto implementation for IndexReaderBuilder
2020-04-27 12:01:17 +09:00
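Presumably this lets the reader builder be converted through the standard `TryInto` trait, roughly as sketched below (assumed usage, not taken from the PR itself):

```rust
use std::convert::TryInto;

use tantivy::{Index, IndexReader};

fn open_reader(index: &Index) -> tantivy::Result<IndexReader> {
    // With the `TryInto` impl, the builder converts directly into a reader.
    let reader: IndexReader = index.reader_builder().try_into()?;
    Ok(reader)
}
```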
Paul Masurel
1e5ebdbf3c Format and remove useless import (#819) 2020-04-27 11:56:49 +09:00
Paul Masurel
9a2090ab21 Create the MMapDirectory does not return a Directory. (#818) 2020-04-27 11:42:20 +09:00
Paul Masurel
e4aaacdb86 Minor change in README.md 2020-04-21 21:30:34 +09:00
Paul Masurel
29acf1104d Update README's claim on performance. 2020-04-21 14:44:26 +09:00
Paul Masurel
3d34fa0b69 Fixed changelog 2020-04-19 15:55:54 +09:00
Rob Young
77f363987a Make TweakScore and CustomScore mutable at the segment level (#807)
* Make TweakScore and CustomScore mutable

Make TweakScore and CustomScore mutable at the segment level.

Addresses issue #806

* Add example to show tweak_score working for facets
2020-04-19 15:54:00 +09:00
Paul Masurel
c0be461191 Removing tantivy-fst conf and removing warning. (#813) 2020-04-18 20:19:23 +09:00
dependabot-preview[bot]
1fb562f44a Update fail requirement from 0.3 to 0.4 (#810)
Updates the requirements on [fail](https://github.com/tikv/fail-rs) to permit the latest version.
- [Release notes](https://github.com/tikv/fail-rs/releases)
- [Changelog](https://github.com/tikv/fail-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tikv/fail-rs/compare/v0.3.0...v0.4.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2020-04-17 07:14:19 +09:00
Rob Young
c591d0e591 Switch fst dependency to git (#808)
Closes #803

This allows the package to be built without first cloning the
tantivy-search/fst repo into the expected place. This should fix CI.
2020-04-16 23:05:12 +09:00
Paul Masurel
186d7fc20e Fix build 2020-04-01 09:32:45 +09:00
Paul Masurel
cfbdef5186 Using tantivy-fst version 0.3. 2020-03-31 23:24:54 +09:00
Paul Masurel
d04368b1d4 Closes #788. OR not working when using conjunction by default. (#802) 2020-03-31 21:13:50 +09:00
Chen Xu
b167058028 Fix prefix option for FuzzyTermQuery (#797)
* Fix prefix option for FuzzyTermQuery

* Update changelog
2020-03-19 20:19:32 +09:00
Paul Masurel
262957717b unit test fix and use of matches 2020-03-15 00:20:17 +09:00
Paul Masurel
873a808321 Removed itertools (#792) 2020-03-11 18:41:04 +09:00
dependabot-preview[bot]
6fa8f9330e Update base64 requirement from 0.11.0 to 0.12.0 (#791)
Updates the requirements on [base64](https://github.com/marshallpierce/rust-base64) to permit the latest version.
- [Release notes](https://github.com/marshallpierce/rust-base64/releases)
- [Changelog](https://github.com/marshallpierce/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/marshallpierce/rust-base64/compare/v0.11.0...v0.12.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2020-03-11 17:51:22 +09:00
150 changed files with 8182 additions and 5089 deletions

2
.gitignore vendored
View File

@@ -1,4 +1,5 @@
 tantivy.iml
+proptest-regressions
 *.swp
 target
 target/debug
@@ -11,3 +12,4 @@ cpp/simdcomp/bitpackingbenchmark
 *.bk
 .idea
 trace.dat
+cargo-timing*

View File

@@ -1,3 +1,53 @@
Tantivy 0.14.0
=========================
- Removed dependency on atomicwrites #833. (Implemented by @pmasurel upon suggestion and research from @asafigan.)
- Migrated the tantivy error type from the now deprecated `failure` crate to `thiserror` #760. (@hirevo)
- API Change. Accessing the typed value of a `Schema::Value` now returns an Option instead of panicking if the type does not match.
- Large API Change in the Directory API. Tantivy used to assume that all files could be somehow memory mapped. After this change, Directory returns a `FileSlice` that can be reduced and eventually read into an `OwnedBytes` object. Long and blocking IO operations are still required, but they no longer span the entire file.
- Added support for Brotli compression in the DocStore. (@ppodolsky)
- Added helper for building intersections and unions in BooleanQuery (@guilload)
- Bugfix in `Query::explain`
- Making it possible to opt out of generating fieldnorm information for indexed fields. This change breaks compatibility as the meta.json file format is slightly changed. (#922, @pmasurel)
Tantivy 0.13.2
===================
Bugfix. Acquiring a facet reader on a segment that does not contain any
doc with this facet returns `None`. (#896)
Tantivy 0.13.1
===================
Made `Query` and `Collector` `Send + Sync`.
Updated misc dependency versions.
Tantivy 0.13.0
======================
Tantivy 0.13 introduces a change in the index format that will require
you to reindex your index (BlockWAND information is added to the skip list).
The index size increase is minor as this information is only added for
full blocks.
If you have a massive index for which reindexing is not an option, please contact me
so that we can discuss possible solutions.
- Bugfix in `FuzzyTermQuery` not matching terms by prefix when it should (@Peachball)
- Relaxed constraints on the custom/tweak score functions. At the segment level, they can be mut, and they are not required to be Sync + Send.
- `MMapDirectory::open` does not return a `Result` anymore.
- Change in the DocSet and Scorer API. (@fulmicoton).
A freshly created DocSet points directly to its first doc. A sentinel value called TERMINATED marks the end of a DocSet.
`.advance()` returns the new DocId. `Scorer::skip(target)` has been replaced by `Scorer::seek(target)` and returns the resulting DocId.
As a result, iterating through a DocSet now looks as follows:
```rust
let mut doc = docset.doc();
while doc != TERMINATED {
// ...
doc = docset.advance();
}
```
The change made it possible to greatly simplify a lot of the docset's code.
- Misc internal optimization and introduction of the `Scorer::for_each_pruning` function. (@fulmicoton)
- Added an offset option to the Top(.*)Collectors. (@robyoung)
- Added Block WAND. Performance of top-K queries on term unions should be greatly improved. (@fulmicoton, and special thanks
to the PISA team for answering all my questions!)
Tantivy 0.12.0 Tantivy 0.12.0
====================== ======================
- Removing static dispatch in tokenizers for simplicity. (#762) - Removing static dispatch in tokenizers for simplicity. (#762)

View File

@@ -1,6 +1,6 @@
[package] [package]
name = "tantivy" name = "tantivy"
version = "0.12.0" version = "0.14.0-dev"
authors = ["Paul Masurel <paul.masurel@gmail.com>"] authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT" license = "MIT"
categories = ["database-implementations", "data-structures"] categories = ["database-implementations", "data-structures"]
@@ -13,43 +13,40 @@ keywords = ["search", "information", "retrieval"]
edition = "2018" edition = "2018"
[dependencies] [dependencies]
base64 = "0.11.0" base64 = "0.13"
byteorder = "1.0" byteorder = "1"
crc32fast = "1.2.0" crc32fast = "1"
once_cell = "1.0" once_cell = "1"
regex ={version = "1.3.0", default-features = false, features = ["std"]} regex ={version = "1", default-features = false, features = ["std"]}
tantivy-fst = "0.2.1" tantivy-fst = "0.3"
memmap = {version = "0.7", optional=true} memmap = {version = "0.7", optional=true}
lz4 = {version="1.20", optional=true} lz4 = {version="1", optional=true}
brotli = {version="3.3.0", optional=true}
snap = "1" snap = "1"
atomicwrites = {version="0.2.2", optional=true} tempfile = {version="3", optional=true}
tempfile = "3.0"
log = "0.4" log = "0.4"
serde = {version="1.0", features=["derive"]} serde = {version="1", features=["derive"]}
serde_json = "1.0" serde_json = "1"
num_cpus = "1.2" num_cpus = "1"
fs2={version="0.4", optional=true} fs2={version="0.4", optional=true}
itertools = "0.8" levenshtein_automata = "0.2"
levenshtein_automata = "0.1"
notify = {version="4", optional=true} notify = {version="4", optional=true}
uuid = { version = "0.8", features = ["v4", "serde"] } uuid = { version = "0.8", features = ["v4", "serde"] }
crossbeam = "0.7" crossbeam = "0.8"
futures = {version = "0.3", features=["thread-pool"] } futures = {version = "0.3", features=["thread-pool"] }
owning_ref = "0.4" tantivy-query-grammar = { version="0.14.0-dev", path="./query-grammar" }
stable_deref_trait = "1.0.0" stable_deref_trait = "1"
rust-stemmers = "1.2" rust-stemmers = "1"
downcast-rs = { version="1.0" } downcast-rs = "1"
tantivy-query-grammar = { version="0.12", path="./query-grammar" }
bitpacking = {version="0.8", default-features = false, features=["bitpacker4x"]} bitpacking = {version="0.8", default-features = false, features=["bitpacker4x"]}
census = "0.4" census = "0.4"
fnv = "1.0.6" fnv = "1"
owned-read = "0.4" thiserror = "1.0"
failure = "0.1" htmlescape = "0.3"
htmlescape = "0.3.1" fail = "0.4"
fail = "0.3"
murmurhash32 = "0.2" murmurhash32 = "0.2"
chrono = "0.4" chrono = "0.4"
smallvec = "1.0" smallvec = "1"
rayon = "1" rayon = "1"
[target.'cfg(windows)'.dependencies] [target.'cfg(windows)'.dependencies]
@@ -59,9 +56,10 @@ winapi = "0.3"
rand = "0.7" rand = "0.7"
maplit = "1" maplit = "1"
matches = "0.1.8" matches = "0.1.8"
proptest = "0.10"
[dev-dependencies.fail] [dev-dependencies.fail]
version = "0.3" version = "0.4"
features = ["failpoints"] features = ["failpoints"]
[profile.release] [profile.release]
@@ -75,7 +73,8 @@ overflow-checks = true
[features] [features]
default = ["mmap"] default = ["mmap"]
mmap = ["atomicwrites", "fs2", "memmap", "notify"] mmap = ["fs2", "tempfile", "memmap", "notify"]
brotli-compression = ["brotli"]
lz4-compression = ["lz4"] lz4-compression = ["lz4"]
failpoints = ["fail/failpoints"] failpoints = ["fail/failpoints"]
unstable = [] # useful for benches. unstable = [] # useful for benches.

View File

@@ -5,7 +5,6 @@
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Build status](https://ci.appveyor.com/api/projects/status/r7nb13kj23u8m9pj/branch/master?svg=true)](https://ci.appveyor.com/project/fulmicoton/tantivy/branch/master) [![Build status](https://ci.appveyor.com/api/projects/status/r7nb13kj23u8m9pj/branch/master?svg=true)](https://ci.appveyor.com/project/fulmicoton/tantivy/branch/master)
[![Crates.io](https://img.shields.io/crates/v/tantivy.svg)](https://crates.io/crates/tantivy) [![Crates.io](https://img.shields.io/crates/v/tantivy.svg)](https://crates.io/crates/tantivy)
[![Say Thanks!](https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg)](https://saythanks.io/to/fulmicoton)
![Tantivy](https://tantivy-search.github.io/logo/tantivy-logo.png) ![Tantivy](https://tantivy-search.github.io/logo/tantivy-logo.png)
@@ -31,12 +30,11 @@ Tantivy is, in fact, strongly inspired by Lucene's design.
# Benchmark # Benchmark
Tantivy is typically faster than Lucene, but the results depend on
the nature of the queries in your workload.
The following [benchmark](https://tantivy-search.github.io/bench/) break downs The following [benchmark](https://tantivy-search.github.io/bench/) break downs
performance for different type of queries / collection. performance for different type of queries / collection.
Your mileage WILL vary depending on the nature of queries and their load.
# Features # Features
- Full-text search - Full-text search
@@ -86,7 +84,7 @@ There are many ways to support this project.
- Help with documentation by asking questions or submitting PRs - Help with documentation by asking questions or submitting PRs
- Contribute code (you can join [our Gitter](https://gitter.im/tantivy-search/tantivy)) - Contribute code (you can join [our Gitter](https://gitter.im/tantivy-search/tantivy))
- Talk about Tantivy around you - Talk about Tantivy around you
- Drop a word on on [![Say Thanks!](https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg)](https://saythanks.io/to/fulmicoton) or even [![Become a patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/fulmicoton) - [![Become a patron](https://c5.patreon.com/external/logo/become_a_patron_button.png)](https://www.patreon.com/fulmicoton)
# Contributing code # Contributing code

View File

@@ -18,5 +18,5 @@ install:
 build: false
 test_script:
- - REM SET RUST_LOG=tantivy,test & cargo test --verbose --no-default-features --features mmap
+ - REM SET RUST_LOG=tantivy,test & cargo test --all --verbose --no-default-features --features mmap
  - REM SET RUST_BACKTRACE=1 & cargo build --examples

View File

@@ -112,18 +112,6 @@ fn main() -> tantivy::Result<()> {
limbs and branches that arch over the pool" limbs and branches that arch over the pool"
)); ));
index_writer.add_document(doc!(
title => "Of Mice and Men",
body => "A few miles south of Soledad, the Salinas River drops in close to the hillside \
bank and runs deep and green. The water is warm too, for it has slipped twinkling \
over the yellow sands in the sunlight before reaching the narrow pool. On one \
side of the river the golden foothill slopes curve up to the strong and rocky \
Gabilan Mountains, but on the valley side the water is lined with trees—willows \
fresh and green with every spring, carrying in their lower leaf junctures the \
debris of the winters flooding; and sycamores with mottled, white, recumbent \
limbs and branches that arch over the pool"
));
// Multivalued field just need to be repeated. // Multivalued field just need to be repeated.
index_writer.add_document(doc!( index_writer.add_document(doc!(
title => "Frankenstein", title => "Frankenstein",

View File

@@ -14,7 +14,7 @@ use tantivy::fastfield::FastFieldReader;
use tantivy::query::QueryParser; use tantivy::query::QueryParser;
use tantivy::schema::Field; use tantivy::schema::Field;
use tantivy::schema::{Schema, FAST, INDEXED, TEXT}; use tantivy::schema::{Schema, FAST, INDEXED, TEXT};
use tantivy::{doc, Index, SegmentReader, TantivyError}; use tantivy::{doc, Index, Score, SegmentReader, TantivyError};
#[derive(Default)] #[derive(Default)]
struct Stats { struct Stats {
@@ -114,7 +114,7 @@ struct StatsSegmentCollector {
impl SegmentCollector for StatsSegmentCollector { impl SegmentCollector for StatsSegmentCollector {
type Fruit = Option<Stats>; type Fruit = Option<Stats>;
fn collect(&mut self, doc: u32, _score: f32) { fn collect(&mut self, doc: u32, _score: Score) {
let value = self.fast_field_reader.get(doc) as f64; let value = self.fast_field_reader.get(doc) as f64;
self.stats.count += 1; self.stats.count += 1;
self.stats.sum += value; self.stats.sum += value;

View File

@@ -0,0 +1,98 @@
use std::collections::HashSet;
use tantivy::collector::TopDocs;
use tantivy::doc;
use tantivy::query::BooleanQuery;
use tantivy::schema::*;
use tantivy::{DocId, Index, Score, SegmentReader};
fn main() -> tantivy::Result<()> {
let mut schema_builder = Schema::builder();
let title = schema_builder.add_text_field("title", STORED);
let ingredient = schema_builder.add_facet_field("ingredient");
let schema = schema_builder.build();
let index = Index::create_in_ram(schema.clone());
let mut index_writer = index.writer(30_000_000)?;
index_writer.add_document(doc!(
title => "Fried egg",
ingredient => Facet::from("/ingredient/egg"),
ingredient => Facet::from("/ingredient/oil"),
));
index_writer.add_document(doc!(
title => "Scrambled egg",
ingredient => Facet::from("/ingredient/egg"),
ingredient => Facet::from("/ingredient/butter"),
ingredient => Facet::from("/ingredient/milk"),
ingredient => Facet::from("/ingredient/salt"),
));
index_writer.add_document(doc!(
title => "Egg rolls",
ingredient => Facet::from("/ingredient/egg"),
ingredient => Facet::from("/ingredient/garlic"),
ingredient => Facet::from("/ingredient/salt"),
ingredient => Facet::from("/ingredient/oil"),
ingredient => Facet::from("/ingredient/tortilla-wrap"),
ingredient => Facet::from("/ingredient/mushroom"),
));
index_writer.commit()?;
let reader = index.reader()?;
let searcher = reader.searcher();
{
let facets = vec![
Facet::from("/ingredient/egg"),
Facet::from("/ingredient/oil"),
Facet::from("/ingredient/garlic"),
Facet::from("/ingredient/mushroom"),
];
let query = BooleanQuery::new_multiterms_query(
facets
.iter()
.map(|key| Term::from_facet(ingredient, &key))
.collect(),
);
let top_docs_by_custom_score =
TopDocs::with_limit(2).tweak_score(move |segment_reader: &SegmentReader| {
let ingredient_reader = segment_reader.facet_reader(ingredient).unwrap();
let facet_dict = ingredient_reader.facet_dict();
let query_ords: HashSet<u64> = facets
.iter()
.filter_map(|key| facet_dict.term_ord(key.encoded_str()))
.collect();
let mut facet_ords_buffer: Vec<u64> = Vec::with_capacity(20);
move |doc: DocId, original_score: Score| {
ingredient_reader.facet_ords(doc, &mut facet_ords_buffer);
let missing_ingredients = facet_ords_buffer
.iter()
.filter(|ord| !query_ords.contains(ord))
.count();
let tweak = 1.0 / 4_f32.powi(missing_ingredients as i32);
original_score * tweak
}
});
let top_docs = searcher.search(&query, &top_docs_by_custom_score)?;
let titles: Vec<String> = top_docs
.iter()
.map(|(_, doc_id)| {
searcher
.doc(*doc_id)
.unwrap()
.get_first(title)
.unwrap()
.text()
.unwrap()
.to_owned()
})
.collect();
assert_eq!(titles, vec!["Fried egg", "Egg rolls"]);
}
Ok(())
}

View File

@@ -10,7 +10,7 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::{doc, DocId, DocSet, Index, Postings}; use tantivy::{doc, DocSet, Index, Postings, TERMINATED};
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// We first create a schema for the sake of the // We first create a schema for the sake of the
@@ -45,7 +45,7 @@ fn main() -> tantivy::Result<()> {
// Inverted index stands for the combination of // Inverted index stands for the combination of
// - the term dictionary // - the term dictionary
// - the inverted lists associated to each terms and their positions // - the inverted lists associated to each terms and their positions
let inverted_index = segment_reader.inverted_index(title); let inverted_index = segment_reader.inverted_index(title)?;
// A `Term` is a text token associated with a field. // A `Term` is a text token associated with a field.
// Let's go through all docs containing the term `title:the` and access their position // Let's go through all docs containing the term `title:the` and access their position
@@ -58,16 +58,15 @@ fn main() -> tantivy::Result<()> {
// If you don't need all this information, you may get better performance by decompressing less // If you don't need all this information, you may get better performance by decompressing less
// information. // information.
if let Some(mut segment_postings) = if let Some(mut segment_postings) =
inverted_index.read_postings(&term_the, IndexRecordOption::WithFreqsAndPositions) inverted_index.read_postings(&term_the, IndexRecordOption::WithFreqsAndPositions)?
{ {
// this buffer will be used to request for positions // this buffer will be used to request for positions
let mut positions: Vec<u32> = Vec::with_capacity(100); let mut positions: Vec<u32> = Vec::with_capacity(100);
while segment_postings.advance() { let mut doc_id = segment_postings.doc();
// the number of time the term appears in the document. while doc_id != TERMINATED {
let doc_id: DocId = segment_postings.doc(); //< do not try to access this before calling advance once.
// This MAY contains deleted documents as well. // This MAY contains deleted documents as well.
if segment_reader.is_deleted(doc_id) { if segment_reader.is_deleted(doc_id) {
doc_id = segment_postings.advance();
continue; continue;
} }
@@ -86,6 +85,7 @@ fn main() -> tantivy::Result<()> {
// Doc 2: TermFreq 1: [0] // Doc 2: TermFreq 1: [0]
// ``` // ```
println!("Doc {}: TermFreq {}: {:?}", doc_id, term_freq, positions); println!("Doc {}: TermFreq {}: {:?}", doc_id, term_freq, positions);
doc_id = segment_postings.advance();
} }
} }
} }
@@ -106,7 +106,7 @@ fn main() -> tantivy::Result<()> {
// Inverted index stands for the combination of // Inverted index stands for the combination of
// - the term dictionary // - the term dictionary
// - the inverted lists associated to each terms and their positions // - the inverted lists associated to each terms and their positions
let inverted_index = segment_reader.inverted_index(title); let inverted_index = segment_reader.inverted_index(title)?;
// This segment posting object is like a cursor over the documents matching the term. // This segment posting object is like a cursor over the documents matching the term.
// The `IndexRecordOption` arguments tells tantivy we will be interested in both term frequencies // The `IndexRecordOption` arguments tells tantivy we will be interested in both term frequencies
@@ -115,13 +115,18 @@ fn main() -> tantivy::Result<()> {
// If you don't need all this information, you may get better performance by decompressing less // If you don't need all this information, you may get better performance by decompressing less
// information. // information.
if let Some(mut block_segment_postings) = if let Some(mut block_segment_postings) =
inverted_index.read_block_postings(&term_the, IndexRecordOption::Basic) inverted_index.read_block_postings(&term_the, IndexRecordOption::Basic)?
{ {
while block_segment_postings.advance() { loop {
let docs = block_segment_postings.docs();
if docs.is_empty() {
break;
}
// Once again these docs MAY contains deleted documents as well. // Once again these docs MAY contains deleted documents as well.
let docs = block_segment_postings.docs(); let docs = block_segment_postings.docs();
// Prints `Docs [0, 2].` // Prints `Docs [0, 2].`
println!("Docs {:?}", docs); println!("Docs {:?}", docs);
block_segment_postings.advance();
} }
} }
} }

View File

@@ -1,6 +1,6 @@
 [package]
 name = "tantivy-query-grammar"
-version = "0.12.0"
+version = "0.14.0-dev"
 authors = ["Paul Masurel <paul.masurel@gmail.com>"]
 license = "MIT"
 categories = ["database-implementations", "data-structures"]

View File

@@ -31,22 +31,12 @@ impl Occur {
/// Compose two occur values. /// Compose two occur values.
pub fn compose(left: Occur, right: Occur) -> Occur { pub fn compose(left: Occur, right: Occur) -> Occur {
match left { match (left, right) {
Occur::Should => right, (Occur::Should, _) => right,
Occur::Must => { (Occur::Must, Occur::MustNot) => Occur::MustNot,
if right == Occur::MustNot { (Occur::Must, _) => Occur::Must,
Occur::MustNot (Occur::MustNot, Occur::MustNot) => Occur::Must,
} else { (Occur::MustNot, _) => Occur::MustNot,
Occur::Must
}
}
Occur::MustNot => {
if right == Occur::MustNot {
Occur::Must
} else {
Occur::MustNot
}
}
} }
} }
} }
@@ -56,3 +46,27 @@ impl fmt::Display for Occur {
f.write_char(self.to_char()) f.write_char(self.to_char())
} }
} }
#[cfg(test)]
mod test {
use crate::Occur;
#[test]
fn test_occur_compose() {
assert_eq!(Occur::compose(Occur::Should, Occur::Should), Occur::Should);
assert_eq!(Occur::compose(Occur::Should, Occur::Must), Occur::Must);
assert_eq!(
Occur::compose(Occur::Should, Occur::MustNot),
Occur::MustNot
);
assert_eq!(Occur::compose(Occur::Must, Occur::Should), Occur::Must);
assert_eq!(Occur::compose(Occur::Must, Occur::Must), Occur::Must);
assert_eq!(Occur::compose(Occur::Must, Occur::MustNot), Occur::MustNot);
assert_eq!(
Occur::compose(Occur::MustNot, Occur::Should),
Occur::MustNot
);
assert_eq!(Occur::compose(Occur::MustNot, Occur::Must), Occur::MustNot);
assert_eq!(Occur::compose(Occur::MustNot, Occur::MustNot), Occur::Must);
}
}

View File

@@ -9,8 +9,10 @@ use combine::{
fn field<'a>() -> impl Parser<&'a str, Output = String> { fn field<'a>() -> impl Parser<&'a str, Output = String> {
( (
letter(), (letter().or(char('_'))),
many(satisfy(|c: char| c.is_alphanumeric() || c == '_')), many(satisfy(|c: char| {
c.is_alphanumeric() || c == '_' || c == '-'
})),
) )
.skip(char(':')) .skip(char(':'))
.map(|(s1, s2): (char, String)| format!("{}{}", s1, s2)) .map(|(s1, s2): (char, String)| format!("{}{}", s1, s2))
@@ -154,17 +156,11 @@ fn negate(expr: UserInputAST) -> UserInputAST {
expr.unary(Occur::MustNot) expr.unary(Occur::MustNot)
} }
fn must(expr: UserInputAST) -> UserInputAST {
expr.unary(Occur::Must)
}
fn leaf<'a>() -> impl Parser<&'a str, Output = UserInputAST> { fn leaf<'a>() -> impl Parser<&'a str, Output = UserInputAST> {
parser(|input| { parser(|input| {
char('-') char('(')
.with(leaf()) .with(ast())
.map(negate) .skip(char(')'))
.or(char('+').with(leaf()).map(must))
.or(char('(').with(ast()).skip(char(')')))
.or(char('*').map(|_| UserInputAST::from(UserInputLeaf::All))) .or(char('*').map(|_| UserInputAST::from(UserInputLeaf::All)))
.or(attempt( .or(attempt(
string("NOT").skip(spaces1()).with(leaf()).map(negate), string("NOT").skip(spaces1()).with(leaf()).map(negate),
@@ -176,7 +172,17 @@ fn leaf<'a>() -> impl Parser<&'a str, Output = UserInputAST> {
}) })
} }
fn positive_float_number<'a>() -> impl Parser<&'a str, Output = f32> { fn occur_symbol<'a>() -> impl Parser<&'a str, Output = Occur> {
char('-')
.map(|_| Occur::MustNot)
.or(char('+').map(|_| Occur::Must))
}
fn occur_leaf<'a>() -> impl Parser<&'a str, Output = (Option<Occur>, UserInputAST)> {
(optional(occur_symbol()), boosted_leaf())
}
fn positive_float_number<'a>() -> impl Parser<&'a str, Output = f64> {
(many1(digit()), optional((char('.'), many1(digit())))).map( (many1(digit()), optional((char('.'), many1(digit())))).map(
|(int_part, decimal_part_opt): (String, Option<(char, String)>)| { |(int_part, decimal_part_opt): (String, Option<(char, String)>)| {
let mut float_str = int_part; let mut float_str = int_part;
@@ -184,18 +190,18 @@ fn positive_float_number<'a>() -> impl Parser<&'a str, Output = f32> {
float_str.push(chr); float_str.push(chr);
float_str.push_str(&decimal_str); float_str.push_str(&decimal_str);
} }
float_str.parse::<f32>().unwrap() float_str.parse::<f64>().unwrap()
}, },
) )
} }
fn boost<'a>() -> impl Parser<&'a str, Output = f32> { fn boost<'a>() -> impl Parser<&'a str, Output = f64> {
(char('^'), positive_float_number()).map(|(_, boost)| boost) (char('^'), positive_float_number()).map(|(_, boost)| boost)
} }
fn boosted_leaf<'a>() -> impl Parser<&'a str, Output = UserInputAST> { fn boosted_leaf<'a>() -> impl Parser<&'a str, Output = UserInputAST> {
(leaf(), optional(boost())).map(|(leaf, boost_opt)| match boost_opt { (leaf(), optional(boost())).map(|(leaf, boost_opt)| match boost_opt {
Some(boost) if (boost - 1.0).abs() > std::f32::EPSILON => { Some(boost) if (boost - 1.0).abs() > std::f64::EPSILON => {
UserInputAST::Boost(Box::new(leaf), boost) UserInputAST::Boost(Box::new(leaf), boost)
} }
_ => leaf, _ => leaf,
@@ -239,21 +245,29 @@ fn aggregate_binary_expressions(
} }
} }
pub fn ast<'a>() -> impl Parser<&'a str, Output = UserInputAST> { fn operand_leaf<'a>() -> impl Parser<&'a str, Output = (BinaryOperand, UserInputAST)> {
let operand_leaf = ( (
binary_operand().skip(spaces()), binary_operand().skip(spaces()),
boosted_leaf().skip(spaces()), boosted_leaf().skip(spaces()),
); )
let boolean_expr = (boosted_leaf().skip(spaces().silent()), many1(operand_leaf)) }
pub fn ast<'a>() -> impl Parser<&'a str, Output = UserInputAST> {
let boolean_expr = (boosted_leaf().skip(spaces()), many1(operand_leaf()))
.map(|(left, right)| aggregate_binary_expressions(left, right)); .map(|(left, right)| aggregate_binary_expressions(left, right));
let whitespace_separated_leaves = let whitespace_separated_leaves = many1(occur_leaf().skip(spaces().silent())).map(
many1(boosted_leaf().skip(spaces().silent())).map(|subqueries: Vec<UserInputAST>| { |subqueries: Vec<(Option<Occur>, UserInputAST)>| {
if subqueries.len() == 1 { if subqueries.len() == 1 {
subqueries.into_iter().next().unwrap() let (occur_opt, ast) = subqueries.into_iter().next().unwrap();
match occur_opt.unwrap_or(Occur::Should) {
Occur::Must | Occur::Should => ast,
Occur::MustNot => UserInputAST::Clause(vec![(Some(Occur::MustNot), ast)]),
}
} else { } else {
UserInputAST::Clause(subqueries.into_iter().collect()) UserInputAST::Clause(subqueries.into_iter().collect())
} }
}); },
);
let expr = attempt(boolean_expr).or(whitespace_separated_leaves); let expr = attempt(boolean_expr).or(whitespace_separated_leaves);
spaces().with(expr).skip(spaces()) spaces().with(expr).skip(spaces())
} }
@@ -267,14 +281,16 @@ pub fn parse_to_ast<'a>() -> impl Parser<&'a str, Output = UserInputAST> {
#[cfg(test)] #[cfg(test)]
mod test { mod test {
type TestParseResult = Result<(), StringStreamError>;
use super::*; use super::*;
use combine::parser::Parser; use combine::parser::Parser;
pub fn nearly_equals(a: f32, b: f32) -> bool { pub fn nearly_equals(a: f64, b: f64) -> bool {
(a - b).abs() < 0.0005 * (a + b).abs() (a - b).abs() < 0.0005 * (a + b).abs()
} }
fn assert_nearly_equals(expected: f32, val: f32) { fn assert_nearly_equals(expected: f64, val: f64) {
assert!( assert!(
nearly_equals(val, expected), nearly_equals(val, expected),
"Got {}, expected {}.", "Got {}, expected {}.",
@@ -283,9 +299,16 @@ mod test {
); );
} }
#[test]
fn test_occur_symbol() -> TestParseResult {
assert_eq!(super::occur_symbol().parse("-")?, (Occur::MustNot, ""));
assert_eq!(super::occur_symbol().parse("+")?, (Occur::Must, ""));
Ok(())
}
#[test] #[test]
fn test_positive_float_number() { fn test_positive_float_number() {
fn valid_parse(float_str: &str, expected_val: f32, expected_remaining: &str) { fn valid_parse(float_str: &str, expected_val: f64, expected_remaining: &str) {
let (val, remaining) = positive_float_number().parse(float_str).unwrap(); let (val, remaining) = positive_float_number().parse(float_str).unwrap();
assert_eq!(remaining, expected_remaining); assert_eq!(remaining, expected_remaining);
assert_nearly_equals(val, expected_val); assert_nearly_equals(val, expected_val);
@@ -293,9 +316,9 @@ mod test {
fn error_parse(float_str: &str) { fn error_parse(float_str: &str) {
assert!(positive_float_number().parse(float_str).is_err()); assert!(positive_float_number().parse(float_str).is_err());
} }
valid_parse("1.0", 1.0f32, ""); valid_parse("1.0", 1.0, "");
valid_parse("1", 1.0f32, ""); valid_parse("1", 1.0, "");
valid_parse("0.234234 aaa", 0.234234f32, " aaa"); valid_parse("0.234234 aaa", 0.234234f64, " aaa");
error_parse(".3332"); error_parse(".3332");
error_parse("1."); error_parse("1.");
error_parse("-1."); error_parse("-1.");
@@ -330,7 +353,7 @@ mod test {
"Err(UnexpectedParse)" "Err(UnexpectedParse)"
); );
test_parse_query_to_ast_helper("NOTa", "\"NOTa\""); test_parse_query_to_ast_helper("NOTa", "\"NOTa\"");
test_parse_query_to_ast_helper("NOT a", "-(\"a\")"); test_parse_query_to_ast_helper("NOT a", "(-\"a\")");
} }
#[test] #[test]
@@ -338,16 +361,16 @@ mod test {
assert!(parse_to_ast().parse("a^2^3").is_err()); assert!(parse_to_ast().parse("a^2^3").is_err());
assert!(parse_to_ast().parse("a^2^").is_err()); assert!(parse_to_ast().parse("a^2^").is_err());
test_parse_query_to_ast_helper("a^3", "(\"a\")^3"); test_parse_query_to_ast_helper("a^3", "(\"a\")^3");
test_parse_query_to_ast_helper("a^3 b^2", "((\"a\")^3 (\"b\")^2)"); test_parse_query_to_ast_helper("a^3 b^2", "(*(\"a\")^3 *(\"b\")^2)");
test_parse_query_to_ast_helper("a^1", "\"a\""); test_parse_query_to_ast_helper("a^1", "\"a\"");
} }
#[test] #[test]
fn test_parse_query_to_ast_binary_op() { fn test_parse_query_to_ast_binary_op() {
test_parse_query_to_ast_helper("a AND b", "(+(\"a\") +(\"b\"))"); test_parse_query_to_ast_helper("a AND b", "(+\"a\" +\"b\")");
test_parse_query_to_ast_helper("a OR b", "(?(\"a\") ?(\"b\"))"); test_parse_query_to_ast_helper("a OR b", "(?\"a\" ?\"b\")");
test_parse_query_to_ast_helper("a OR b AND c", "(?(\"a\") ?((+(\"b\") +(\"c\"))))"); test_parse_query_to_ast_helper("a OR b AND c", "(?\"a\" ?(+\"b\" +\"c\"))");
test_parse_query_to_ast_helper("a AND b AND c", "(+(\"a\") +(\"b\") +(\"c\"))"); test_parse_query_to_ast_helper("a AND b AND c", "(+\"a\" +\"b\" +\"c\")");
assert_eq!( assert_eq!(
format!("{:?}", parse_to_ast().parse("a OR b aaa")), format!("{:?}", parse_to_ast().parse("a OR b aaa")),
"Err(UnexpectedParse)" "Err(UnexpectedParse)"
@@ -385,6 +408,32 @@ mod test {
test_parse_query_to_ast_helper("weight: <= 70.5", "weight:{\"*\" TO \"70.5\"]"); test_parse_query_to_ast_helper("weight: <= 70.5", "weight:{\"*\" TO \"70.5\"]");
} }
#[test]
fn test_occur_leaf() {
let ((occur, ast), _) = super::occur_leaf().parse("+abc").unwrap();
assert_eq!(occur, Some(Occur::Must));
assert_eq!(format!("{:?}", ast), "\"abc\"");
}
#[test]
fn test_field_name() -> TestParseResult {
assert_eq!(
super::field().parse("my-field-name:a")?,
("my-field-name".to_string(), "a")
);
assert_eq!(
super::field().parse("my_field_name:a")?,
("my_field_name".to_string(), "a")
);
assert!(super::field().parse(":a").is_err());
assert!(super::field().parse("-my_field:a").is_err());
assert_eq!(
super::field().parse("_my_field:a")?,
("_my_field".to_string(), "a")
);
Ok(())
}
#[test] #[test]
fn test_range_parser() { fn test_range_parser() {
// testing the range() parser separately // testing the range() parser separately
@@ -413,32 +462,67 @@ mod test {
fn test_parse_query_to_triming_spaces() { fn test_parse_query_to_triming_spaces() {
test_parse_query_to_ast_helper(" abc", "\"abc\""); test_parse_query_to_ast_helper(" abc", "\"abc\"");
test_parse_query_to_ast_helper("abc ", "\"abc\""); test_parse_query_to_ast_helper("abc ", "\"abc\"");
test_parse_query_to_ast_helper("( a OR abc)", "(?(\"a\") ?(\"abc\"))"); test_parse_query_to_ast_helper("( a OR abc)", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("(a OR abc)", "(?(\"a\") ?(\"abc\"))"); test_parse_query_to_ast_helper("(a OR abc)", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("(a OR abc)", "(?(\"a\") ?(\"abc\"))"); test_parse_query_to_ast_helper("(a OR abc)", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("a OR abc ", "(?(\"a\") ?(\"abc\"))"); test_parse_query_to_ast_helper("a OR abc ", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("(a OR abc )", "(?(\"a\") ?(\"abc\"))"); test_parse_query_to_ast_helper("(a OR abc )", "(?\"a\" ?\"abc\")");
test_parse_query_to_ast_helper("(a OR abc) ", "(?(\"a\") ?(\"abc\"))"); test_parse_query_to_ast_helper("(a OR abc) ", "(?\"a\" ?\"abc\")");
} }
#[test] #[test]
fn test_parse_query_to_ast() { fn test_parse_query_single_term() {
test_parse_query_to_ast_helper("abc", "\"abc\""); test_parse_query_to_ast_helper("abc", "\"abc\"");
test_parse_query_to_ast_helper("a b", "(\"a\" \"b\")"); }
test_parse_query_to_ast_helper("+(a b)", "+((\"a\" \"b\"))");
test_parse_query_to_ast_helper("+d", "+(\"d\")"); #[test]
test_parse_query_to_ast_helper("+(a b) +d", "(+((\"a\" \"b\")) +(\"d\"))"); fn test_parse_query_default_clause() {
test_parse_query_to_ast_helper("(+a +b) d", "((+(\"a\") +(\"b\")) \"d\")"); test_parse_query_to_ast_helper("a b", "(*\"a\" *\"b\")");
test_parse_query_to_ast_helper("(+a)", "+(\"a\")"); }
test_parse_query_to_ast_helper("(+a +b)", "(+(\"a\") +(\"b\"))");
#[test]
fn test_parse_query_must_default_clause() {
test_parse_query_to_ast_helper("+(a b)", "(*\"a\" *\"b\")");
}
#[test]
fn test_parse_query_must_single_term() {
test_parse_query_to_ast_helper("+d", "\"d\"");
}
#[test]
fn test_single_term_with_field() {
test_parse_query_to_ast_helper("abc:toto", "abc:\"toto\""); test_parse_query_to_ast_helper("abc:toto", "abc:\"toto\"");
}
#[test]
fn test_single_term_with_float() {
test_parse_query_to_ast_helper("abc:1.1", "abc:\"1.1\""); test_parse_query_to_ast_helper("abc:1.1", "abc:\"1.1\"");
test_parse_query_to_ast_helper("+abc:toto", "+(abc:\"toto\")"); }
test_parse_query_to_ast_helper("(+abc:toto -titi)", "(+(abc:\"toto\") -(\"titi\"))");
test_parse_query_to_ast_helper("-abc:toto", "-(abc:\"toto\")"); #[test]
test_parse_query_to_ast_helper("abc:a b", "(abc:\"a\" \"b\")"); fn test_must_clause() {
test_parse_query_to_ast_helper("(+a +b)", "(+\"a\" +\"b\")");
}
#[test]
fn test_parse_test_query_plus_a_b_plus_d() {
test_parse_query_to_ast_helper("+(a b) +d", "(+(*\"a\" *\"b\") +\"d\")");
}
#[test]
fn test_parse_test_query_other() {
test_parse_query_to_ast_helper("(+a +b) d", "(*(+\"a\" +\"b\") *\"d\")");
test_parse_query_to_ast_helper("+abc:toto", "abc:\"toto\"");
test_parse_query_to_ast_helper("(+abc:toto -titi)", "(+abc:\"toto\" -\"titi\")");
test_parse_query_to_ast_helper("-abc:toto", "(-abc:\"toto\")");
test_parse_query_to_ast_helper("abc:a b", "(*abc:\"a\" *\"b\")");
test_parse_query_to_ast_helper("abc:\"a b\"", "abc:\"a b\""); test_parse_query_to_ast_helper("abc:\"a b\"", "abc:\"a b\"");
test_parse_query_to_ast_helper("foo:[1 TO 5]", "foo:[\"1\" TO \"5\"]"); test_parse_query_to_ast_helper("foo:[1 TO 5]", "foo:[\"1\" TO \"5\"]");
}
#[test]
fn test_parse_query_with_range() {
test_parse_query_to_ast_helper("[1 TO 5]", "[\"1\" TO \"5\"]"); test_parse_query_to_ast_helper("[1 TO 5]", "[\"1\" TO \"5\"]");
test_parse_query_to_ast_helper("foo:{a TO z}", "foo:{\"a\" TO \"z\"}"); test_parse_query_to_ast_helper("foo:{a TO z}", "foo:{\"a\" TO \"z\"}");
test_parse_query_to_ast_helper("foo:[1 TO toto}", "foo:[\"1\" TO \"toto\"}"); test_parse_query_to_ast_helper("foo:[1 TO toto}", "foo:[\"1\" TO \"toto\"}");

View File

@@ -85,15 +85,14 @@ impl UserInputBound {
} }
pub enum UserInputAST { pub enum UserInputAST {
Clause(Vec<UserInputAST>), Clause(Vec<(Option<Occur>, UserInputAST)>),
Unary(Occur, Box<UserInputAST>),
Leaf(Box<UserInputLeaf>), Leaf(Box<UserInputLeaf>),
Boost(Box<UserInputAST>, f32), Boost(Box<UserInputAST>, f64),
} }
impl UserInputAST { impl UserInputAST {
pub fn unary(self, occur: Occur) -> UserInputAST { pub fn unary(self, occur: Occur) -> UserInputAST {
UserInputAST::Unary(occur, Box::new(self)) UserInputAST::Clause(vec![(Some(occur), self)])
} }
fn compose(occur: Occur, asts: Vec<UserInputAST>) -> UserInputAST { fn compose(occur: Occur, asts: Vec<UserInputAST>) -> UserInputAST {
@@ -104,7 +103,7 @@ impl UserInputAST {
} else { } else {
UserInputAST::Clause( UserInputAST::Clause(
asts.into_iter() asts.into_iter()
.map(|ast: UserInputAST| ast.unary(occur)) .map(|ast: UserInputAST| (Some(occur), ast))
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
) )
} }
@@ -135,25 +134,36 @@ impl From<UserInputLeaf> for UserInputAST {
} }
} }
fn print_occur_ast(
occur_opt: Option<Occur>,
ast: &UserInputAST,
formatter: &mut fmt::Formatter,
) -> fmt::Result {
if let Some(occur) = occur_opt {
write!(formatter, "{}{:?}", occur, ast)?;
} else {
write!(formatter, "*{:?}", ast)?;
}
Ok(())
}
impl fmt::Debug for UserInputAST { impl fmt::Debug for UserInputAST {
fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> { fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
match *self { match *self {
UserInputAST::Clause(ref subqueries) => { UserInputAST::Clause(ref subqueries) => {
if subqueries.is_empty() { if subqueries.is_empty() {
write!(formatter, "<emptyclause>")?; write!(formatter, "<emptyclause>")?;
} else { } else {
write!(formatter, "(")?; write!(formatter, "(")?;
write!(formatter, "{:?}", &subqueries[0])?; print_occur_ast(subqueries[0].0, &subqueries[0].1, formatter)?;
for subquery in &subqueries[1..] { for subquery in &subqueries[1..] {
write!(formatter, " {:?}", subquery)?; write!(formatter, " ")?;
print_occur_ast(subquery.0, &subquery.1, formatter)?;
} }
write!(formatter, ")")?; write!(formatter, ")")?;
} }
Ok(()) Ok(())
} }
UserInputAST::Unary(ref occur, ref subquery) => {
write!(formatter, "{}({:?})", occur, subquery)
}
UserInputAST::Leaf(ref subquery) => write!(formatter, "{:?}", subquery), UserInputAST::Leaf(ref subquery) => write!(formatter, "{:?}", subquery),
UserInputAST::Boost(ref leaf, boost) => write!(formatter, "({:?})^{}", leaf, boost), UserInputAST::Boost(ref leaf, boost) => write!(formatter, "({:?})^{}", leaf, boost),
} }

View File

@@ -96,18 +96,18 @@ mod tests {
} }
{ {
let mut count_collector = SegmentCountCollector::default(); let mut count_collector = SegmentCountCollector::default();
count_collector.collect(0u32, 1f32); count_collector.collect(0u32, 1.0);
assert_eq!(count_collector.harvest(), 1); assert_eq!(count_collector.harvest(), 1);
} }
{ {
let mut count_collector = SegmentCountCollector::default(); let mut count_collector = SegmentCountCollector::default();
count_collector.collect(0u32, 1f32); count_collector.collect(0u32, 1.0);
assert_eq!(count_collector.harvest(), 1); assert_eq!(count_collector.harvest(), 1);
} }
{ {
let mut count_collector = SegmentCountCollector::default(); let mut count_collector = SegmentCountCollector::default();
count_collector.collect(0u32, 1f32); count_collector.collect(0u32, 1.0);
count_collector.collect(1u32, 1f32); count_collector.collect(1u32, 1.0);
assert_eq!(count_collector.harvest(), 2); assert_eq!(count_collector.harvest(), 2);
} }
} }

View File

@@ -11,13 +11,13 @@ impl<TCustomScorer, TScore> CustomScoreTopCollector<TCustomScorer, TScore>
where where
TScore: Clone + PartialOrd, TScore: Clone + PartialOrd,
{ {
pub fn new( pub(crate) fn new(
custom_scorer: TCustomScorer, custom_scorer: TCustomScorer,
limit: usize, collector: TopCollector<TScore>,
) -> CustomScoreTopCollector<TCustomScorer, TScore> { ) -> CustomScoreTopCollector<TCustomScorer, TScore> {
CustomScoreTopCollector { CustomScoreTopCollector {
custom_scorer, custom_scorer,
collector: TopCollector::with_limit(limit), collector,
} }
} }
} }
@@ -28,7 +28,7 @@ where
/// It is the segment local version of the [`CustomScorer`](./trait.CustomScorer.html). /// It is the segment local version of the [`CustomScorer`](./trait.CustomScorer.html).
pub trait CustomSegmentScorer<TScore>: 'static { pub trait CustomSegmentScorer<TScore>: 'static {
/// Computes the score of a specific `doc`. /// Computes the score of a specific `doc`.
fn score(&self, doc: DocId) -> TScore; fn score(&mut self, doc: DocId) -> TScore;
} }
/// `CustomScorer` makes it possible to define any kind of score. /// `CustomScorer` makes it possible to define any kind of score.
@@ -46,7 +46,7 @@ pub trait CustomScorer<TScore>: Sync {
impl<TCustomScorer, TScore> Collector for CustomScoreTopCollector<TCustomScorer, TScore> impl<TCustomScorer, TScore> Collector for CustomScoreTopCollector<TCustomScorer, TScore>
where where
TCustomScorer: CustomScorer<TScore>, TCustomScorer: CustomScorer<TScore> + Send + Sync,
TScore: 'static + PartialOrd + Clone + Send + Sync, TScore: 'static + PartialOrd + Clone + Send + Sync,
{ {
type Fruit = Vec<(TScore, DocAddress)>; type Fruit = Vec<(TScore, DocAddress)>;
@@ -58,10 +58,10 @@ where
segment_local_id: u32, segment_local_id: u32,
segment_reader: &SegmentReader, segment_reader: &SegmentReader,
) -> crate::Result<Self::Child> { ) -> crate::Result<Self::Child> {
let segment_scorer = self.custom_scorer.segment_scorer(segment_reader)?;
let segment_collector = self let segment_collector = self
.collector .collector
.for_segment(segment_local_id, segment_reader)?; .for_segment(segment_local_id, segment_reader)?;
let segment_scorer = self.custom_scorer.segment_scorer(segment_reader)?;
Ok(CustomScoreTopSegmentCollector { Ok(CustomScoreTopSegmentCollector {
segment_collector, segment_collector,
segment_scorer, segment_scorer,
@@ -117,9 +117,9 @@ where
impl<F, TScore> CustomSegmentScorer<TScore> for F impl<F, TScore> CustomSegmentScorer<TScore> for F
where where
F: 'static + Sync + Send + Fn(DocId) -> TScore, F: 'static + FnMut(DocId) -> TScore,
{ {
fn score(&self, doc: DocId) -> TScore { fn score(&mut self, doc: DocId) -> TScore {
(self)(doc) (self)(doc)
} }
} }

View File

@@ -0,0 +1,61 @@
use std::collections::HashSet;
use crate::{DocAddress, DocId, Score};
use super::{Collector, SegmentCollector};
/// Collectors that returns the set of DocAddress that matches the query.
///
/// This collector is mostly useful for tests.
pub struct DocSetCollector;
impl Collector for DocSetCollector {
type Fruit = HashSet<DocAddress>;
type Child = DocSetChildCollector;
fn for_segment(
&self,
segment_local_id: crate::SegmentLocalId,
_segment: &crate::SegmentReader,
) -> crate::Result<Self::Child> {
Ok(DocSetChildCollector {
segment_local_id,
docs: HashSet::new(),
})
}
fn requires_scoring(&self) -> bool {
false
}
fn merge_fruits(
&self,
segment_fruits: Vec<(u32, HashSet<DocId>)>,
) -> crate::Result<Self::Fruit> {
let len: usize = segment_fruits.iter().map(|(_, docset)| docset.len()).sum();
let mut result = HashSet::with_capacity(len);
for (segment_local_id, docs) in segment_fruits {
for doc in docs {
result.insert(DocAddress(segment_local_id, doc));
}
}
Ok(result)
}
}
pub struct DocSetChildCollector {
segment_local_id: u32,
docs: HashSet<DocId>,
}
impl SegmentCollector for DocSetChildCollector {
type Fruit = (u32, HashSet<DocId>);
fn collect(&mut self, doc: crate::DocId, _score: Score) {
self.docs.insert(doc);
}
fn harvest(self) -> (u32, HashSet<DocId>) {
(self.segment_local_id, self.docs)
}
}
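
A rough usage sketch for the new collector (`AllQuery` here stands in for any query):

use std::collections::HashSet;
use tantivy::collector::DocSetCollector;
use tantivy::query::AllQuery;
use tantivy::{DocAddress, Searcher};

// Collects every matching DocAddress; handy in tests where exact membership
// matters more than ranking.
fn matching_docs(searcher: &Searcher) -> tantivy::Result<HashSet<DocAddress>> {
    searcher.search(&AllQuery, &DocSetCollector)
}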

View File

@@ -1,6 +1,5 @@
use crate::collector::Collector; use crate::collector::Collector;
use crate::collector::SegmentCollector; use crate::collector::SegmentCollector;
use crate::docset::SkipResult;
use crate::fastfield::FacetReader; use crate::fastfield::FacetReader;
use crate::schema::Facet; use crate::schema::Facet;
use crate::schema::Field; use crate::schema::Field;
@@ -8,7 +7,6 @@ use crate::DocId;
use crate::Score; use crate::Score;
use crate::SegmentLocalId; use crate::SegmentLocalId;
use crate::SegmentReader; use crate::SegmentReader;
use crate::TantivyError;
use std::cmp::Ordering; use std::cmp::Ordering;
use std::collections::btree_map; use std::collections::btree_map;
use std::collections::BTreeMap; use std::collections::BTreeMap;
@@ -188,6 +186,11 @@ pub struct FacetSegmentCollector {
collapse_facet_ords: Vec<u64>, collapse_facet_ords: Vec<u64>,
} }
enum SkipResult {
Found,
NotFound,
}
fn skip<'a, I: Iterator<Item = &'a Facet>>( fn skip<'a, I: Iterator<Item = &'a Facet>>(
target: &[u8], target: &[u8],
collapse_it: &mut Peekable<I>, collapse_it: &mut Peekable<I>,
@@ -197,14 +200,14 @@ fn skip<'a, I: Iterator<Item = &'a Facet>>(
Some(facet_bytes) => match facet_bytes.encoded_str().as_bytes().cmp(target) { Some(facet_bytes) => match facet_bytes.encoded_str().as_bytes().cmp(target) {
Ordering::Less => {} Ordering::Less => {}
Ordering::Greater => { Ordering::Greater => {
return SkipResult::OverStep; return SkipResult::NotFound;
} }
Ordering::Equal => { Ordering::Equal => {
return SkipResult::Reached; return SkipResult::Found;
} }
}, },
None => { None => {
return SkipResult::End; return SkipResult::NotFound;
} }
} }
collapse_it.next(); collapse_it.next();
@@ -262,10 +265,7 @@ impl Collector for FacetCollector {
_: SegmentLocalId, _: SegmentLocalId,
reader: &SegmentReader, reader: &SegmentReader,
) -> crate::Result<FacetSegmentCollector> { ) -> crate::Result<FacetSegmentCollector> {
let field_name = reader.schema().get_field_name(self.field); let facet_reader = reader.facet_reader(self.field)?;
let facet_reader = reader.facet_reader(self.field).ok_or_else(|| {
TantivyError::SchemaError(format!("Field {:?} is not a facet field.", field_name))
})?;
let mut collapse_mapping = Vec::new(); let mut collapse_mapping = Vec::new();
let mut counts = Vec::new(); let mut counts = Vec::new();
@@ -281,7 +281,7 @@ impl Collector for FacetCollector {
// is positioned on a term that has not been processed yet. // is positioned on a term that has not been processed yet.
let skip_result = skip(facet_streamer.key(), &mut collapse_facet_it); let skip_result = skip(facet_streamer.key(), &mut collapse_facet_it);
match skip_result { match skip_result {
SkipResult::Reached => { SkipResult::Found => {
// we reach a facet we decided to collapse. // we reach a facet we decided to collapse.
let collapse_depth = facet_depth(facet_streamer.key()); let collapse_depth = facet_depth(facet_streamer.key());
let mut collapsed_id = 0; let mut collapsed_id = 0;
@@ -301,7 +301,7 @@ impl Collector for FacetCollector {
} }
break; break;
} }
SkipResult::End | SkipResult::OverStep => { SkipResult::NotFound => {
collapse_mapping.push(0); collapse_mapping.push(0);
if !facet_streamer.advance() { if !facet_streamer.advance() {
break; break;
@@ -468,7 +468,7 @@ mod tests {
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
let num_facets: usize = 3 * 4 * 5; let num_facets: usize = 3 * 4 * 5;
let facets: Vec<Facet> = (0..num_facets) let facets: Vec<Facet> = (0..num_facets)
.map(|mut n| { .map(|mut n| {
@@ -527,7 +527,7 @@ mod tests {
let facet_field = schema_builder.add_facet_field("facets"); let facet_field = schema_builder.add_facet_field("facets");
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.add_document(doc!( index_writer.add_document(doc!(
facet_field => Facet::from_text(&"/subjects/A/a"), facet_field => Facet::from_text(&"/subjects/A/a"),
facet_field => Facet::from_text(&"/subjects/B/a"), facet_field => Facet::from_text(&"/subjects/B/a"),
@@ -546,12 +546,12 @@ mod tests {
} }
#[test] #[test]
fn test_doc_search_by_facet() { fn test_doc_search_by_facet() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let facet_field = schema_builder.add_facet_field("facet"); let facet_field = schema_builder.add_facet_field("facet");
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!( index_writer.add_document(doc!(
facet_field => Facet::from_text(&"/A/A"), facet_field => Facet::from_text(&"/A/A"),
)); ));
@@ -564,8 +564,8 @@ mod tests {
index_writer.add_document(doc!( index_writer.add_document(doc!(
facet_field => Facet::from_text(&"/D/C/A"), facet_field => Facet::from_text(&"/D/C/A"),
)); ));
index_writer.commit().unwrap(); index_writer.commit()?;
let reader = index.reader().unwrap(); let reader = index.reader()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
assert_eq!(searcher.num_docs(), 4); assert_eq!(searcher.num_docs(), 4);
@@ -582,17 +582,17 @@ mod tests {
assert_eq!(count_facet("/A/C"), 1); assert_eq!(count_facet("/A/C"), 1);
assert_eq!(count_facet("/A/C/A"), 1); assert_eq!(count_facet("/A/C/A"), 1);
assert_eq!(count_facet("/C/A"), 0); assert_eq!(count_facet("/C/A"), 0);
let query_parser = QueryParser::for_index(&index, vec![]);
{ {
let query_parser = QueryParser::for_index(&index, vec![]); let query = query_parser.parse_query("facet:/A/B")?;
{ assert_eq!(1, searcher.search(&query, &Count).unwrap());
let query = query_parser.parse_query("facet:/A/B").unwrap();
assert_eq!(1, searcher.search(&query, &Count).unwrap());
}
{
let query = query_parser.parse_query("facet:/A").unwrap();
assert_eq!(3, searcher.search(&query, &Count).unwrap());
}
} }
{
let query = query_parser.parse_query("facet:/A")?;
assert_eq!(3, searcher.search(&query, &Count)?);
}
Ok(())
} }
#[test] #[test]
@@ -627,7 +627,7 @@ mod tests {
.collect(); .collect();
docs[..].shuffle(&mut thread_rng()); docs[..].shuffle(&mut thread_rng());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
for doc in docs { for doc in docs {
index_writer.add_document(doc); index_writer.add_document(doc);
} }
@@ -680,7 +680,7 @@ mod bench {
// 40425 docs // 40425 docs
docs[..].shuffle(&mut thread_rng()); docs[..].shuffle(&mut thread_rng());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
for doc in docs { for doc in docs {
index_writer.add_document(doc); index_writer.add_document(doc);
} }
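
For reference, a sketch of the end-to-end facet counting these tests rely on; the `/A` path is illustrative and the shape of `FacetCounts::get` is assumed to match this version of the API:

use tantivy::collector::FacetCollector;
use tantivy::query::AllQuery;
use tantivy::schema::{Facet, Field};
use tantivy::Searcher;

// Counts matching documents under each immediate child of "/A".
fn count_children_of_a(
    searcher: &Searcher,
    facet_field: Field,
) -> tantivy::Result<Vec<(Facet, u64)>> {
    let mut facet_collector = FacetCollector::for_field(facet_field);
    facet_collector.add_facet("/A");
    let facet_counts = searcher.search(&AllQuery, &facet_collector)?;
    Ok(facet_counts
        .get("/A")
        .map(|(facet, count)| (facet.clone(), count))
        .collect())
}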

View File

@@ -1,127 +0,0 @@
use std::cmp::Eq;
use std::collections::HashMap;
use std::hash::Hash;
use collector::Collector;
use fastfield::FastFieldReader;
use schema::Field;
use DocId;
use Result;
use Score;
use SegmentReader;
use SegmentLocalId;
/// Facet collector for i64/u64 fast field
pub struct IntFacetCollector<T>
where
T: FastFieldReader,
T::ValueType: Eq + Hash,
{
counters: HashMap<T::ValueType, u64>,
field: Field,
ff_reader: Option<T>,
}
impl<T> IntFacetCollector<T>
where
T: FastFieldReader,
T::ValueType: Eq + Hash,
{
/// Creates a new facet collector for aggregating a given field.
pub fn new(field: Field) -> IntFacetCollector<T> {
IntFacetCollector {
counters: HashMap::new(),
field: field,
ff_reader: None,
}
}
}
impl<T> Collector for IntFacetCollector<T>
where
T: FastFieldReader,
T::ValueType: Eq + Hash,
{
fn set_segment(&mut self, _: SegmentLocalId, reader: &SegmentReader) -> Result<()> {
self.ff_reader = Some(reader.get_fast_field_reader(self.field)?);
Ok(())
}
fn collect(&mut self, doc: DocId, _: Score) {
let val = self.ff_reader
.as_ref()
.expect(
"collect() was called before set_segment. \
This should never happen.",
)
.get(doc);
*(self.counters.entry(val).or_insert(0)) += 1;
}
}
#[cfg(test)]
mod tests {
use collector::{chain, IntFacetCollector};
use query::QueryParser;
use fastfield::{I64FastFieldReader, U64FastFieldReader};
use schema::{self, FAST, STRING};
use Index;
#[test]
// create 10 documents, set num field value to 0 or 1 for even/odd ones
// make sure we have facet counters correctly filled
fn test_facet_collector_results() {
let mut schema_builder = schema::Schema::builder();
let num_field_i64 = schema_builder.add_i64_field("num_i64", FAST);
let num_field_u64 = schema_builder.add_u64_field("num_u64", FAST);
let num_field_f64 = schema_builder.add_f64_field("num_f64", FAST);
let text_field = schema_builder.add_text_field("text", STRING);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema.clone());
{
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
{
for i in 0u64..10u64 {
index_writer.add_document(doc!(
num_field_i64 => ((i as i64) % 3i64) as i64,
num_field_u64 => (i % 2u64) as u64,
num_field_f64 => (i % 4u64) as f64,
text_field => "text"
));
}
}
assert_eq!(index_writer.commit().unwrap(), 10u64);
}
let searcher = index.reader().searcher();
let mut ffvf_i64: IntFacetCollector<I64FastFieldReader> = IntFacetCollector::new(num_field_i64);
let mut ffvf_u64: IntFacetCollector<U64FastFieldReader> = IntFacetCollector::new(num_field_u64);
let mut ffvf_f64: IntFacetCollector<F64FastFieldReader> = IntFacetCollector::new(num_field_f64);
{
// perform the query
let mut facet_collectors = chain().push(&mut ffvf_i64).push(&mut ffvf_u64).push(&mut ffvf_f64);
let mut query_parser = QueryParser::for_index(index, vec![text_field]);
let query = query_parser.parse_query("text:text").unwrap();
query.search(&searcher, &mut facet_collectors).unwrap();
}
assert_eq!(ffvf_u64.counters[&0], 5);
assert_eq!(ffvf_u64.counters[&1], 5);
assert_eq!(ffvf_i64.counters[&0], 4);
assert_eq!(ffvf_i64.counters[&1], 3);
assert_eq!(ffvf_f64.counters[&0.0], 3);
assert_eq!(ffvf_f64.counters[&2.0], 2);
}
}

View File

@@ -109,6 +109,10 @@ pub use self::tweak_score_top_collector::{ScoreSegmentTweaker, ScoreTweaker};
mod facet_collector; mod facet_collector;
pub use self::facet_collector::FacetCollector; pub use self::facet_collector::FacetCollector;
use crate::query::Weight;
mod docset_collector;
pub use self::docset_collector::DocSetCollector;
/// `Fruit` is the type for the result of our collection. /// `Fruit` is the type for the result of our collection.
/// e.g. `usize` for the `Count` collector. /// e.g. `usize` for the `Count` collector.
@@ -132,13 +136,13 @@ impl<T> Fruit for T where T: Send + downcast_rs::Downcast {}
/// The collection logic itself is in the `SegmentCollector`. /// The collection logic itself is in the `SegmentCollector`.
/// ///
/// Segments are not guaranteed to be visited in any specific order. /// Segments are not guaranteed to be visited in any specific order.
pub trait Collector: Sync { pub trait Collector: Sync + Send {
/// `Fruit` is the type for the result of our collection. /// `Fruit` is the type for the result of our collection.
/// e.g. `usize` for the `Count` collector. /// e.g. `usize` for the `Count` collector.
type Fruit: Fruit; type Fruit: Fruit;
/// Type of the `SegmentCollector` associated to this collector. /// Type of the `SegmentCollector` associated to this collector.
type Child: SegmentCollector<Fruit = Self::Fruit>; type Child: SegmentCollector;
/// `set_segment` is called before beginning to enumerate /// `set_segment` is called before beginning to enumerate
/// on this segment. /// on this segment.
@@ -153,7 +157,33 @@ pub trait Collector: Sync {
/// Combines the fruit associated with the collection of each segment /// Combines the fruit associated with the collection of each segment
/// into one fruit. /// into one fruit.
fn merge_fruits(&self, segment_fruits: Vec<Self::Fruit>) -> crate::Result<Self::Fruit>; fn merge_fruits(
&self,
segment_fruits: Vec<<Self::Child as SegmentCollector>::Fruit>,
) -> crate::Result<Self::Fruit>;
/// Creates a segment collector and runs the collection over a single segment, returning its fruit.
fn collect_segment(
&self,
weight: &dyn Weight,
segment_ord: u32,
reader: &SegmentReader,
) -> crate::Result<<Self::Child as SegmentCollector>::Fruit> {
let mut segment_collector = self.for_segment(segment_ord as u32, reader)?;
if let Some(delete_bitset) = reader.delete_bitset() {
weight.for_each(reader, &mut |doc, score| {
if delete_bitset.is_alive(doc) {
segment_collector.collect(doc, score);
}
})?;
} else {
weight.for_each(reader, &mut |doc, score| {
segment_collector.collect(doc, score);
})?;
}
Ok(segment_collector.harvest())
}
} }
/// The `SegmentCollector` is the trait in charge of defining the /// The `SegmentCollector` is the trait in charge of defining the
@@ -200,11 +230,11 @@ where
fn merge_fruits( fn merge_fruits(
&self, &self,
children: Vec<(Left::Fruit, Right::Fruit)>, segment_fruits: Vec<<Self::Child as SegmentCollector>::Fruit>,
) -> crate::Result<(Left::Fruit, Right::Fruit)> { ) -> crate::Result<(Left::Fruit, Right::Fruit)> {
let mut left_fruits = vec![]; let mut left_fruits = vec![];
let mut right_fruits = vec![]; let mut right_fruits = vec![];
for (left_fruit, right_fruit) in children { for (left_fruit, right_fruit) in segment_fruits {
left_fruits.push(left_fruit); left_fruits.push(left_fruit);
right_fruits.push(right_fruit); right_fruits.push(right_fruit);
} }
@@ -258,7 +288,10 @@ where
self.0.requires_scoring() || self.1.requires_scoring() || self.2.requires_scoring() self.0.requires_scoring() || self.1.requires_scoring() || self.2.requires_scoring()
} }
fn merge_fruits(&self, children: Vec<Self::Fruit>) -> crate::Result<Self::Fruit> { fn merge_fruits(
&self,
children: Vec<<Self::Child as SegmentCollector>::Fruit>,
) -> crate::Result<Self::Fruit> {
let mut one_fruits = vec![]; let mut one_fruits = vec![];
let mut two_fruits = vec![]; let mut two_fruits = vec![];
let mut three_fruits = vec![]; let mut three_fruits = vec![];
@@ -325,7 +358,10 @@ where
|| self.3.requires_scoring() || self.3.requires_scoring()
} }
fn merge_fruits(&self, children: Vec<Self::Fruit>) -> crate::Result<Self::Fruit> { fn merge_fruits(
&self,
children: Vec<<Self::Child as SegmentCollector>::Fruit>,
) -> crate::Result<Self::Fruit> {
let mut one_fruits = vec![]; let mut one_fruits = vec![];
let mut two_fruits = vec![]; let mut two_fruits = vec![];
let mut three_fruits = vec![]; let mut three_fruits = vec![];
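
To make the new trait shape concrete, here is a minimal hand-rolled collector written against the signatures above (a toy per-segment counter, not part of this change): `Child` no longer has to share the parent's `Fruit`, `merge_fruits` receives the children's fruits explicitly, and the default `collect_segment` takes care of driving the weight.

use tantivy::collector::{Collector, SegmentCollector};
use tantivy::{DocId, Score, SegmentLocalId, SegmentReader};

struct CountingCollector;

struct CountingSegmentCollector(usize);

impl SegmentCollector for CountingSegmentCollector {
    type Fruit = usize;

    fn collect(&mut self, _doc: DocId, _score: Score) {
        self.0 += 1;
    }

    fn harvest(self) -> usize {
        self.0
    }
}

impl Collector for CountingCollector {
    type Fruit = usize;
    type Child = CountingSegmentCollector;

    fn for_segment(
        &self,
        _segment_local_id: SegmentLocalId,
        _reader: &SegmentReader,
    ) -> tantivy::Result<CountingSegmentCollector> {
        Ok(CountingSegmentCollector(0))
    }

    fn requires_scoring(&self) -> bool {
        false
    }

    fn merge_fruits(&self, segment_fruits: Vec<usize>) -> tantivy::Result<usize> {
        Ok(segment_fruits.into_iter().sum())
    }
}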

View File

@@ -34,13 +34,13 @@ impl<TCollector: Collector> Collector for CollectorWrapper<TCollector> {
fn merge_fruits( fn merge_fruits(
&self, &self,
children: Vec<<Self as Collector>::Fruit>, children: Vec<<Self::Child as SegmentCollector>::Fruit>,
) -> crate::Result<Box<dyn Fruit>> { ) -> crate::Result<Box<dyn Fruit>> {
let typed_fruit: Vec<TCollector::Fruit> = children let typed_fruit: Vec<<TCollector::Child as SegmentCollector>::Fruit> = children
.into_iter() .into_iter()
.map(|untyped_fruit| { .map(|untyped_fruit| {
untyped_fruit untyped_fruit
.downcast::<TCollector::Fruit>() .downcast::<<TCollector::Child as SegmentCollector>::Fruit>()
.map(|boxed_but_typed| *boxed_but_typed) .map(|boxed_but_typed| *boxed_but_typed)
.map_err(|_| { .map_err(|_| {
TantivyError::InvalidArgument("Failed to cast child fruit.".to_string()) TantivyError::InvalidArgument("Failed to cast child fruit.".to_string())
@@ -55,7 +55,7 @@ impl<TCollector: Collector> Collector for CollectorWrapper<TCollector> {
impl SegmentCollector for Box<dyn BoxableSegmentCollector> { impl SegmentCollector for Box<dyn BoxableSegmentCollector> {
type Fruit = Box<dyn Fruit>; type Fruit = Box<dyn Fruit>;
fn collect(&mut self, doc: u32, score: f32) { fn collect(&mut self, doc: u32, score: Score) {
self.as_mut().collect(doc, score); self.as_mut().collect(doc, score);
} }
@@ -65,7 +65,7 @@ impl SegmentCollector for Box<dyn BoxableSegmentCollector> {
} }
pub trait BoxableSegmentCollector { pub trait BoxableSegmentCollector {
fn collect(&mut self, doc: u32, score: f32); fn collect(&mut self, doc: u32, score: Score);
fn harvest_from_box(self: Box<Self>) -> Box<dyn Fruit>; fn harvest_from_box(self: Box<Self>) -> Box<dyn Fruit>;
} }
@@ -74,7 +74,7 @@ pub struct SegmentCollectorWrapper<TSegmentCollector: SegmentCollector>(TSegment
impl<TSegmentCollector: SegmentCollector> BoxableSegmentCollector impl<TSegmentCollector: SegmentCollector> BoxableSegmentCollector
for SegmentCollectorWrapper<TSegmentCollector> for SegmentCollectorWrapper<TSegmentCollector>
{ {
fn collect(&mut self, doc: u32, score: f32) { fn collect(&mut self, doc: u32, score: Score) {
self.0.collect(doc, score); self.0.collect(doc, score);
} }
@@ -259,7 +259,7 @@ mod tests {
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
{ {
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.add_document(doc!(text=>"abc")); index_writer.add_document(doc!(text=>"abc"));
index_writer.add_document(doc!(text=>"abc abc abc")); index_writer.add_document(doc!(text=>"abc abc abc"));
index_writer.add_document(doc!(text=>"abc abc")); index_writer.add_document(doc!(text=>"abc abc"));

View File

@@ -185,12 +185,15 @@ impl Collector for BytesFastFieldTestCollector {
_segment_local_id: u32, _segment_local_id: u32,
segment_reader: &SegmentReader, segment_reader: &SegmentReader,
) -> crate::Result<BytesFastFieldSegmentCollector> { ) -> crate::Result<BytesFastFieldSegmentCollector> {
let reader = segment_reader
.fast_fields()
.bytes(self.field)
.ok_or_else(|| {
crate::TantivyError::InvalidArgument("Field is not a bytes fast field.".to_string())
})?;
Ok(BytesFastFieldSegmentCollector { Ok(BytesFastFieldSegmentCollector {
vals: Vec::new(), vals: Vec::new(),
reader: segment_reader reader,
.fast_fields()
.bytes(self.field)
.expect("Field is not a bytes fast field."),
}) })
} }
@@ -206,7 +209,7 @@ impl Collector for BytesFastFieldTestCollector {
impl SegmentCollector for BytesFastFieldSegmentCollector { impl SegmentCollector for BytesFastFieldSegmentCollector {
type Fruit = Vec<u8>; type Fruit = Vec<u8>;
fn collect(&mut self, doc: u32, _score: f32) { fn collect(&mut self, doc: u32, _score: Score) {
let data = self.reader.get_bytes(doc); let data = self.reader.get_bytes(doc);
self.vals.extend(data); self.vals.extend(data);
} }

View File

@@ -18,9 +18,9 @@ use std::collections::BinaryHeap;
/// Two elements are equal if their feature is equal, regardless of whether `doc` /// Two elements are equal if their feature is equal, regardless of whether `doc`
/// is equal. This should be perfectly fine for this usage, but let's make sure this /// is equal. This should be perfectly fine for this usage, but let's make sure this
/// struct is never public. /// struct is never public.
struct ComparableDoc<T, D> { pub(crate) struct ComparableDoc<T, D> {
feature: T, pub feature: T,
doc: D, pub doc: D,
} }
impl<T: PartialOrd, D: PartialOrd> PartialOrd for ComparableDoc<T, D> { impl<T: PartialOrd, D: PartialOrd> PartialOrd for ComparableDoc<T, D> {
@@ -56,7 +56,8 @@ impl<T: PartialOrd, D: PartialOrd> PartialEq for ComparableDoc<T, D> {
impl<T: PartialOrd, D: PartialOrd> Eq for ComparableDoc<T, D> {} impl<T: PartialOrd, D: PartialOrd> Eq for ComparableDoc<T, D> {}
pub(crate) struct TopCollector<T> { pub(crate) struct TopCollector<T> {
limit: usize, pub limit: usize,
pub offset: usize,
_marker: PhantomData<T>, _marker: PhantomData<T>,
} }
@@ -72,14 +73,20 @@ where
if limit < 1 { if limit < 1 {
panic!("Limit must be strictly greater than 0."); panic!("Limit must be strictly greater than 0.");
} }
TopCollector { Self {
limit, limit,
offset: 0,
_marker: PhantomData, _marker: PhantomData,
} }
} }
pub fn limit(&self) -> usize { /// Skip the first "offset" documents when collecting.
self.limit ///
/// This is equivalent to `OFFSET` in MySQL or PostgreSQL and `start` in
/// Lucene's TopDocsCollector.
pub fn and_offset(mut self, offset: usize) -> TopCollector<T> {
self.offset = offset;
self
} }
pub fn merge_fruits( pub fn merge_fruits(
@@ -92,7 +99,7 @@ where
let mut top_collector = BinaryHeap::new(); let mut top_collector = BinaryHeap::new();
for child_fruit in children { for child_fruit in children {
for (feature, doc) in child_fruit { for (feature, doc) in child_fruit {
if top_collector.len() < self.limit { if top_collector.len() < (self.limit + self.offset) {
top_collector.push(ComparableDoc { feature, doc }); top_collector.push(ComparableDoc { feature, doc });
} else if let Some(mut head) = top_collector.peek_mut() { } else if let Some(mut head) = top_collector.peek_mut() {
if head.feature < feature { if head.feature < feature {
@@ -104,6 +111,7 @@ where
Ok(top_collector Ok(top_collector
.into_sorted_vec() .into_sorted_vec()
.into_iter() .into_iter()
.skip(self.offset)
.map(|cdoc| (cdoc.feature, cdoc.doc)) .map(|cdoc| (cdoc.feature, cdoc.doc))
.collect()) .collect())
} }
@@ -113,7 +121,23 @@ where
segment_id: SegmentLocalId, segment_id: SegmentLocalId,
_: &SegmentReader, _: &SegmentReader,
) -> crate::Result<TopSegmentCollector<F>> { ) -> crate::Result<TopSegmentCollector<F>> {
Ok(TopSegmentCollector::new(segment_id, self.limit)) Ok(TopSegmentCollector::new(
segment_id,
self.limit + self.offset,
))
}
/// Create a new TopCollector with the same limit and offset.
///
/// Ideally we would use Into but the blanket implementation seems to cause the Scorer traits
/// to fail.
#[doc(hidden)]
pub(crate) fn into_tscore<TScore: PartialOrd + Clone>(self) -> TopCollector<TScore> {
TopCollector {
limit: self.limit,
offset: self.offset,
_marker: PhantomData,
}
} }
} }
@@ -187,7 +211,7 @@ impl<T: PartialOrd + Clone> TopSegmentCollector<T> {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::TopSegmentCollector; use super::{TopCollector, TopSegmentCollector};
use crate::DocAddress; use crate::DocAddress;
#[test] #[test]
@@ -248,6 +272,48 @@ mod tests {
top_collector_limit_3.harvest()[..2].to_vec(), top_collector_limit_3.harvest()[..2].to_vec(),
); );
} }
#[test]
fn test_top_collector_with_limit_and_offset() {
let collector = TopCollector::with_limit(2).and_offset(1);
let results = collector
.merge_fruits(vec![vec![
(0.9, DocAddress(0, 1)),
(0.8, DocAddress(0, 2)),
(0.7, DocAddress(0, 3)),
(0.6, DocAddress(0, 4)),
(0.5, DocAddress(0, 5)),
]])
.unwrap();
assert_eq!(
results,
vec![(0.8, DocAddress(0, 2)), (0.7, DocAddress(0, 3)),]
);
}
#[test]
fn test_top_collector_with_limit_larger_than_set_and_offset() {
let collector = TopCollector::with_limit(2).and_offset(1);
let results = collector
.merge_fruits(vec![vec![(0.9, DocAddress(0, 1)), (0.8, DocAddress(0, 2))]])
.unwrap();
assert_eq!(results, vec![(0.8, DocAddress(0, 2)),]);
}
#[test]
fn test_top_collector_with_limit_and_offset_larger_than_set() {
let collector = TopCollector::with_limit(2).and_offset(20);
let results = collector
.merge_fruits(vec![vec![(0.9, DocAddress(0, 1)), (0.8, DocAddress(0, 2))]])
.unwrap();
assert_eq!(results, vec![]);
}
} }
#[cfg(all(test, feature = "unstable"))] #[cfg(all(test, feature = "unstable"))]
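
The interplay between `limit` and `offset` above, in isolation: each segment has to contribute its best `limit + offset` documents, because the globally skipped prefix might come entirely from a single segment, and the skip is only applied after the global merge. A simplified sketch with plain `u64` scores (floats are avoided here only because they are not `Ord`):

// Merge per-segment top-(limit + offset) lists, then drop the first `offset`
// results globally and keep at most `limit`.
// e.g. merge_with_offset(vec![vec![9, 3], vec![8, 7]], 2, 1) == vec![8, 7]
fn merge_with_offset(per_segment: Vec<Vec<u64>>, limit: usize, offset: usize) -> Vec<u64> {
    let mut merged: Vec<u64> = per_segment.into_iter().flatten().collect();
    merged.sort_unstable_by(|a, b| b.cmp(a)); // best score first
    merged.into_iter().skip(offset).take(limit).collect()
}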

View File

@@ -1,19 +1,82 @@
use super::Collector; use super::Collector;
use crate::collector::custom_score_top_collector::CustomScoreTopCollector; use crate::collector::top_collector::{ComparableDoc, TopCollector};
use crate::collector::top_collector::TopCollector;
use crate::collector::top_collector::TopSegmentCollector;
use crate::collector::tweak_score_top_collector::TweakedScoreTopCollector; use crate::collector::tweak_score_top_collector::TweakedScoreTopCollector;
use crate::collector::{ use crate::collector::{
CustomScorer, CustomSegmentScorer, ScoreSegmentTweaker, ScoreTweaker, SegmentCollector, CustomScorer, CustomSegmentScorer, ScoreSegmentTweaker, ScoreTweaker, SegmentCollector,
}; };
use crate::fastfield::FastFieldReader; use crate::fastfield::FastFieldReader;
use crate::query::Weight;
use crate::schema::Field; use crate::schema::Field;
use crate::DocAddress; use crate::DocAddress;
use crate::DocId; use crate::DocId;
use crate::Score; use crate::Score;
use crate::SegmentLocalId; use crate::SegmentLocalId;
use crate::SegmentReader; use crate::SegmentReader;
use crate::{collector::custom_score_top_collector::CustomScoreTopCollector, fastfield::FastValue};
use crate::{collector::top_collector::TopSegmentCollector, TantivyError};
use std::fmt; use std::fmt;
use std::{collections::BinaryHeap, marker::PhantomData};
struct FastFieldConvertCollector<
TCollector: Collector<Fruit = Vec<(u64, DocAddress)>>,
TFastValue: FastValue,
> {
pub collector: TCollector,
pub field: Field,
pub fast_value: std::marker::PhantomData<TFastValue>,
}
impl<TCollector, TFastValue> Collector for FastFieldConvertCollector<TCollector, TFastValue>
where
TCollector: Collector<Fruit = Vec<(u64, DocAddress)>>,
TFastValue: FastValue + 'static,
{
type Fruit = Vec<(TFastValue, DocAddress)>;
type Child = TCollector::Child;
fn for_segment(
&self,
segment_local_id: crate::SegmentLocalId,
segment: &SegmentReader,
) -> crate::Result<Self::Child> {
let schema = segment.schema();
let field_entry = schema.get_field_entry(self.field);
if !field_entry.is_fast() {
return Err(TantivyError::SchemaError(format!(
"Field {:?} is not a fast field.",
field_entry.name()
)));
}
let schema_type = TFastValue::to_type();
let requested_type = field_entry.field_type().value_type();
if schema_type != requested_type {
return Err(TantivyError::SchemaError(format!(
"Field {:?} is of type {:?}!={:?}",
field_entry.name(),
schema_type,
requested_type
)));
}
self.collector.for_segment(segment_local_id, segment)
}
fn requires_scoring(&self) -> bool {
self.collector.requires_scoring()
}
fn merge_fruits(
&self,
segment_fruits: Vec<<Self::Child as SegmentCollector>::Fruit>,
) -> crate::Result<Self::Fruit> {
let raw_result = self.collector.merge_fruits(segment_fruits)?;
let transformed_result = raw_result
.into_iter()
.map(|(score, doc_address)| (TFastValue::from_u64(score), doc_address))
.collect::<Vec<_>>();
Ok(transformed_result)
}
}
/// The `TopDocs` collector keeps track of the top `K` documents /// The `TopDocs` collector keeps track of the top `K` documents
/// sorted by their score. /// sorted by their score.
@@ -36,7 +99,7 @@ use std::fmt;
/// let schema = schema_builder.build(); /// let schema = schema_builder.build();
/// let index = Index::create_in_ram(schema); /// let index = Index::create_in_ram(schema);
/// ///
/// let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); /// let mut index_writer = index.writer_with_num_threads(1, 10_000_000).unwrap();
/// index_writer.add_document(doc!(title => "The Name of the Wind")); /// index_writer.add_document(doc!(title => "The Name of the Wind"));
/// index_writer.add_document(doc!(title => "The Diary of Muadib")); /// index_writer.add_document(doc!(title => "The Diary of Muadib"));
/// index_writer.add_document(doc!(title => "A Dairy Cow")); /// index_writer.add_document(doc!(title => "A Dairy Cow"));
@@ -50,14 +113,18 @@ use std::fmt;
/// let query = query_parser.parse_query("diary").unwrap(); /// let query = query_parser.parse_query("diary").unwrap();
/// let top_docs = searcher.search(&query, &TopDocs::with_limit(2)).unwrap(); /// let top_docs = searcher.search(&query, &TopDocs::with_limit(2)).unwrap();
/// ///
/// assert_eq!(&top_docs[0], &(0.7261542, DocAddress(0, 1))); /// assert_eq!(top_docs[0].1, DocAddress(0, 1));
/// assert_eq!(&top_docs[1], &(0.6099695, DocAddress(0, 3))); /// assert_eq!(top_docs[1].1, DocAddress(0, 3));
/// ``` /// ```
pub struct TopDocs(TopCollector<Score>); pub struct TopDocs(TopCollector<Score>);
impl fmt::Debug for TopDocs { impl fmt::Debug for TopDocs {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "TopDocs({})", self.0.limit()) write!(
f,
"TopDocs(limit={}, offset={})",
self.0.limit, self.0.offset
)
} }
} }
@@ -66,8 +133,8 @@ struct ScorerByFastFieldReader {
} }
impl CustomSegmentScorer<u64> for ScorerByFastFieldReader { impl CustomSegmentScorer<u64> for ScorerByFastFieldReader {
fn score(&self, doc: DocId) -> u64 { fn score(&mut self, doc: DocId) -> u64 {
self.ff_reader.get_u64(u64::from(doc)) self.ff_reader.get(doc)
} }
} }
@@ -81,10 +148,10 @@ impl CustomScorer<u64> for ScorerByField {
fn segment_scorer(&self, segment_reader: &SegmentReader) -> crate::Result<Self::Child> { fn segment_scorer(&self, segment_reader: &SegmentReader) -> crate::Result<Self::Child> {
let ff_reader = segment_reader let ff_reader = segment_reader
.fast_fields() .fast_fields()
.u64(self.field) .u64_lenient(self.field)
.ok_or_else(|| { .ok_or_else(|| {
crate::TantivyError::SchemaError(format!( crate::TantivyError::SchemaError(format!(
"Field requested ({:?}) is not a i64/u64 fast field.", "Field requested ({:?}) is not a fast field.",
self.field self.field
)) ))
})?; })?;
@@ -101,8 +168,57 @@ impl TopDocs {
TopDocs(TopCollector::with_limit(limit)) TopDocs(TopCollector::with_limit(limit))
} }
/// Skip the first "offset" documents when collecting.
///
/// This is equivalent to `OFFSET` in MySQL or PostgreSQL and `start` in
/// Lucene's TopDocsCollector.
///
/// # Example
///
/// ```rust
/// use tantivy::collector::TopDocs;
/// use tantivy::query::QueryParser;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{doc, DocAddress, Index};
///
/// let mut schema_builder = Schema::builder();
/// let title = schema_builder.add_text_field("title", TEXT);
/// let schema = schema_builder.build();
/// let index = Index::create_in_ram(schema);
///
/// let mut index_writer = index.writer_with_num_threads(1, 10_000_000).unwrap();
/// index_writer.add_document(doc!(title => "The Name of the Wind"));
/// index_writer.add_document(doc!(title => "The Diary of Muadib"));
/// index_writer.add_document(doc!(title => "A Dairy Cow"));
/// index_writer.add_document(doc!(title => "The Diary of a Young Girl"));
/// index_writer.add_document(doc!(title => "The Diary of Lena Mukhina"));
/// assert!(index_writer.commit().is_ok());
///
/// let reader = index.reader().unwrap();
/// let searcher = reader.searcher();
///
/// let query_parser = QueryParser::for_index(&index, vec![title]);
/// let query = query_parser.parse_query("diary").unwrap();
/// let top_docs = searcher.search(&query, &TopDocs::with_limit(2).and_offset(1)).unwrap();
///
/// assert_eq!(top_docs.len(), 2);
/// assert_eq!(top_docs[0].1, DocAddress(0, 4));
/// assert_eq!(top_docs[1].1, DocAddress(0, 3));
/// ```
pub fn and_offset(self, offset: usize) -> TopDocs {
TopDocs(self.0.and_offset(offset))
}
/// Set top-K to rank documents by a given fast field. /// Set top-K to rank documents by a given fast field.
/// ///
/// If the field is not a fast field or does not exist, this method returns successfully (it is not aware of any schema).
/// An error will be returned at the moment of search.
///
/// If the field is a FAST field but not a u64 field, the search will return successfully, but it will
/// return a monotonic u64 representation (i.e. the order is still correct) of the requested field type.
///
/// # Example
///
/// ```rust /// ```rust
/// # use tantivy::schema::{Schema, FAST, TEXT}; /// # use tantivy::schema::{Schema, FAST, TEXT};
/// # use tantivy::{doc, Index, DocAddress}; /// # use tantivy::{doc, Index, DocAddress};
@@ -118,13 +234,13 @@ impl TopDocs {
/// # let schema = schema_builder.build(); /// # let schema = schema_builder.build();
/// # /// #
/// # let index = Index::create_in_ram(schema); /// # let index = Index::create_in_ram(schema);
/// # let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?; /// # let mut index_writer = index.writer_with_num_threads(1, 10_000_000)?;
/// # index_writer.add_document(doc!(title => "The Name of the Wind", rating => 92u64)); /// # index_writer.add_document(doc!(title => "The Name of the Wind", rating => 92u64));
/// # index_writer.add_document(doc!(title => "The Diary of Muadib", rating => 97u64)); /// # index_writer.add_document(doc!(title => "The Diary of Muadib", rating => 97u64));
/// # index_writer.add_document(doc!(title => "A Dairy Cow", rating => 63u64)); /// # index_writer.add_document(doc!(title => "A Dairy Cow", rating => 63u64));
/// # index_writer.add_document(doc!(title => "The Diary of a Young Girl", rating => 80u64)); /// # index_writer.add_document(doc!(title => "The Diary of a Young Girl", rating => 80u64));
/// # assert!(index_writer.commit().is_ok()); /// # assert!(index_writer.commit().is_ok());
/// # let reader = index.reader().unwrap(); /// # let reader = index.reader()?;
/// # let query = QueryParser::for_index(&index, vec![title]).parse_query("diary")?; /// # let query = QueryParser::for_index(&index, vec![title]).parse_query("diary")?;
/// # let top_docs = docs_sorted_by_rating(&reader.searcher(), &query, rating)?; /// # let top_docs = docs_sorted_by_rating(&reader.searcher(), &query, rating)?;
/// # assert_eq!(top_docs, /// # assert_eq!(top_docs,
@@ -132,25 +248,20 @@ impl TopDocs {
/// # (80u64, DocAddress(0u32, 3))]); /// # (80u64, DocAddress(0u32, 3))]);
/// # Ok(()) /// # Ok(())
/// # } /// # }
///
///
/// /// Searches the documents matching the given query, and /// /// Searches the documents matching the given query, and
/// /// collects the top 10 documents, ordered by the u64-`field` /// /// collects the top 10 documents, ordered by the u64-`field`
/// /// given in argument. /// /// given in argument.
/// ///
/// /// `field` is required to be a FAST field.
/// fn docs_sorted_by_rating(searcher: &Searcher, /// fn docs_sorted_by_rating(searcher: &Searcher,
/// query: &dyn Query, /// query: &dyn Query,
/// sort_by_field: Field) /// rating_field: Field)
/// -> tantivy::Result<Vec<(u64, DocAddress)>> { /// -> tantivy::Result<Vec<(u64, DocAddress)>> {
/// ///
/// // This is where we build our topdocs collector /// // This is where we build our topdocs collector
/// // /// //
/// // Note the generics parameter that needs to match the /// // Note the `rating_field` needs to be a FAST field here.
/// // type `sort_by_field`. /// let top_books_by_rating = TopDocs
/// let top_docs_by_rating = TopDocs
/// ::with_limit(10) /// ::with_limit(10)
/// .order_by_u64_field(sort_by_field); /// .order_by_u64_field(rating_field);
/// ///
/// // ... and here are our documents. Note this is a simple vec. /// // ... and here are our documents. Note this is a simple vec.
/// // The `u64` in the pair is the value of our fast field for /// // The `u64` in the pair is the value of our fast field for
@@ -160,21 +271,105 @@ impl TopDocs {
/// // length of 10, or less if not enough documents matched the /// // length of 10, or less if not enough documents matched the
/// // query. /// // query.
/// let resulting_docs: Vec<(u64, DocAddress)> = /// let resulting_docs: Vec<(u64, DocAddress)> =
/// searcher.search(query, &top_docs_by_rating)?; /// searcher.search(query, &top_books_by_rating)?;
/// ///
/// Ok(resulting_docs) /// Ok(resulting_docs)
/// } /// }
/// ``` /// ```
/// ///
/// # Panics /// # See also
/// ///
/// May panic if the field requested is not a fast field. /// To comfortably work with `u64`s, `i64`s, `f64`s, or `date`s, please refer to
/// /// [.order_by_fast_field(...)](#method.order_by_fast_field) method.
pub fn order_by_u64_field( pub fn order_by_u64_field(
self, self,
field: Field, field: Field,
) -> impl Collector<Fruit = Vec<(u64, DocAddress)>> { ) -> impl Collector<Fruit = Vec<(u64, DocAddress)>> {
self.custom_score(ScorerByField { field }) CustomScoreTopCollector::new(ScorerByField { field }, self.0.into_tscore())
}
/// Set top-K to rank documents by a given fast field.
///
/// If the field is not a fast field, or its field type does not match the generic type, this method does not panic,
/// but an explicit error will be returned at the moment of collection.
///
/// Note that this method is generic. The requested fast field type will often be
/// inferred in your code by the Rust compiler.
///
/// Implementation-wise, for performance reasons, tantivy will manipulate the u64 representation of your fast
/// field until the last moment.
///
/// # Example
///
/// ```rust
/// # use tantivy::schema::{Schema, FAST, TEXT};
/// # use tantivy::{doc, Index, DocAddress};
/// # use tantivy::query::{Query, AllQuery};
/// use tantivy::Searcher;
/// use tantivy::collector::TopDocs;
/// use tantivy::schema::Field;
///
/// # fn main() -> tantivy::Result<()> {
/// # let mut schema_builder = Schema::builder();
/// # let title = schema_builder.add_text_field("company", TEXT);
/// # let rating = schema_builder.add_i64_field("revenue", FAST);
/// # let schema = schema_builder.build();
/// #
/// # let index = Index::create_in_ram(schema);
/// # let mut index_writer = index.writer_with_num_threads(1, 10_000_000)?;
/// # index_writer.add_document(doc!(title => "MadCow Inc.", rating => 92_000_000i64));
/// # index_writer.add_document(doc!(title => "Zozo Cow KKK", rating => 119_000_000i64));
/// # index_writer.add_document(doc!(title => "Declining Cow", rating => -63_000_000i64));
/// # assert!(index_writer.commit().is_ok());
/// # let reader = index.reader()?;
/// # let top_docs = docs_sorted_by_revenue(&reader.searcher(), &AllQuery, rating)?;
/// # assert_eq!(top_docs,
/// # vec![(119_000_000i64, DocAddress(0, 1)),
/// # (92_000_000i64, DocAddress(0, 0))]);
/// # Ok(())
/// # }
/// /// Searches the documents matching the given query, and
/// /// collects the top 2 documents, ordered by the i64 fast field
/// /// given in argument.
/// fn docs_sorted_by_revenue(searcher: &Searcher,
/// query: &dyn Query,
/// revenue_field: Field)
/// -> tantivy::Result<Vec<(i64, DocAddress)>> {
///
/// // This is where we build our topdocs collector
/// //
/// // Note the generics parameter that needs to match the
/// // type `sort_by_field`. revenue_field here is a FAST i64 field.
/// let top_company_by_revenue = TopDocs
/// ::with_limit(2)
/// .order_by_fast_field(revenue_field);
///
/// // ... and here are our documents. Note this is a simple vec.
/// // The `i64` in the pair is the value of our fast field for
/// // each document.
/// //
/// // The vec is sorted in decreasing order of `revenue_field`, and has a
/// // length of 2, or less if not enough documents matched the
/// // query.
/// let resulting_docs: Vec<(i64, DocAddress)> =
/// searcher.search(query, &top_company_by_revenue)?;
///
/// Ok(resulting_docs)
/// }
/// ```
pub fn order_by_fast_field<TFastValue>(
self,
fast_field: Field,
) -> impl Collector<Fruit = Vec<(TFastValue, DocAddress)>>
where
TFastValue: FastValue + 'static,
{
let u64_collector = self.order_by_u64_field(fast_field);
FastFieldConvertCollector {
collector: u64_collector,
field: fast_field,
fast_value: PhantomData,
}
} }
/// Ranks the documents using a custom score. /// Ranks the documents using a custom score.
@@ -219,7 +414,7 @@ impl TopDocs {
/// fn create_index() -> tantivy::Result<Index> { /// fn create_index() -> tantivy::Result<Index> {
/// let schema = create_schema(); /// let schema = create_schema();
/// let index = Index::create_in_ram(schema); /// let index = Index::create_in_ram(schema);
/// let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?; /// let mut index_writer = index.writer_with_num_threads(1, 10_000_000)?;
/// let product_name = index.schema().get_field("product_name").unwrap(); /// let product_name = index.schema().get_field("product_name").unwrap();
/// let popularity: Field = index.schema().get_field("popularity").unwrap(); /// let popularity: Field = index.schema().get_field("popularity").unwrap();
/// index_writer.add_document(doc!(product_name => "The Diary of Muadib", popularity => 1u64)); /// index_writer.add_document(doc!(product_name => "The Diary of Muadib", popularity => 1u64));
@@ -258,7 +453,7 @@ impl TopDocs {
/// let popularity: u64 = popularity_reader.get(doc); /// let popularity: u64 = popularity_reader.get(doc);
/// // Well.. For the sake of the example we use a simple logarithm /// // Well.. For the sake of the example we use a simple logarithm
/// // function. /// // function.
/// let popularity_boost_score = ((2u64 + popularity) as f32).log2(); /// let popularity_boost_score = ((2u64 + popularity) as Score).log2();
/// popularity_boost_score * original_score /// popularity_boost_score * original_score
/// } /// }
/// }); /// });
@@ -279,9 +474,9 @@ impl TopDocs {
where where
TScore: 'static + Send + Sync + Clone + PartialOrd, TScore: 'static + Send + Sync + Clone + PartialOrd,
TScoreSegmentTweaker: ScoreSegmentTweaker<TScore> + 'static, TScoreSegmentTweaker: ScoreSegmentTweaker<TScore> + 'static,
TScoreTweaker: ScoreTweaker<TScore, Child = TScoreSegmentTweaker>, TScoreTweaker: ScoreTweaker<TScore, Child = TScoreSegmentTweaker> + Send + Sync,
{ {
TweakedScoreTopCollector::new(score_tweaker, self.0.limit()) TweakedScoreTopCollector::new(score_tweaker, self.0.into_tscore())
} }
/// Ranks the documents using a custom score. /// Ranks the documents using a custom score.
@@ -326,7 +521,7 @@ impl TopDocs {
/// # fn main() -> tantivy::Result<()> { /// # fn main() -> tantivy::Result<()> {
/// # let schema = create_schema(); /// # let schema = create_schema();
/// # let index = Index::create_in_ram(schema); /// # let index = Index::create_in_ram(schema);
/// # let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?; /// # let mut index_writer = index.writer_with_num_threads(1, 10_000_000)?;
/// # let product_name = index.schema().get_field("product_name").unwrap(); /// # let product_name = index.schema().get_field("product_name").unwrap();
/// # /// #
/// let popularity: Field = index.schema().get_field("popularity").unwrap(); /// let popularity: Field = index.schema().get_field("popularity").unwrap();
@@ -393,9 +588,9 @@ impl TopDocs {
where where
TScore: 'static + Send + Sync + Clone + PartialOrd, TScore: 'static + Send + Sync + Clone + PartialOrd,
TCustomSegmentScorer: CustomSegmentScorer<TScore> + 'static, TCustomSegmentScorer: CustomSegmentScorer<TScore> + 'static,
TCustomScorer: CustomScorer<TScore, Child = TCustomSegmentScorer>, TCustomScorer: CustomScorer<TScore, Child = TCustomSegmentScorer> + Send + Sync,
{ {
CustomScoreTopCollector::new(custom_score, self.0.limit()) CustomScoreTopCollector::new(custom_score, self.0.into_tscore())
} }
} }
@@ -423,6 +618,64 @@ impl Collector for TopDocs {
) -> crate::Result<Self::Fruit> { ) -> crate::Result<Self::Fruit> {
self.0.merge_fruits(child_fruits) self.0.merge_fruits(child_fruits)
} }
fn collect_segment(
&self,
weight: &dyn Weight,
segment_ord: u32,
reader: &SegmentReader,
) -> crate::Result<<Self::Child as SegmentCollector>::Fruit> {
let heap_len = self.0.limit + self.0.offset;
let mut heap: BinaryHeap<ComparableDoc<Score, DocId>> = BinaryHeap::with_capacity(heap_len);
if let Some(delete_bitset) = reader.delete_bitset() {
let mut threshold = Score::MIN;
weight.for_each_pruning(threshold, reader, &mut |doc, score| {
if delete_bitset.is_deleted(doc) {
return threshold;
}
let heap_item = ComparableDoc {
feature: score,
doc,
};
if heap.len() < heap_len {
heap.push(heap_item);
if heap.len() == heap_len {
threshold = heap.peek().map(|el| el.feature).unwrap_or(Score::MIN);
}
return threshold;
}
*heap.peek_mut().unwrap() = heap_item;
threshold = heap.peek().map(|el| el.feature).unwrap_or(Score::MIN);
threshold
})?;
} else {
weight.for_each_pruning(Score::MIN, reader, &mut |doc, score| {
let heap_item = ComparableDoc {
feature: score,
doc,
};
if heap.len() < heap_len {
heap.push(heap_item);
// TODO the threshold is suboptimal for heap.len == heap_len
if heap.len() == heap_len {
return heap.peek().map(|el| el.feature).unwrap_or(Score::MIN);
} else {
return Score::MIN;
}
}
*heap.peek_mut().unwrap() = heap_item;
heap.peek().map(|el| el.feature).unwrap_or(Score::MIN)
})?;
}
let fruit = heap
.into_sorted_vec()
.into_iter()
.map(|cid| (cid.feature, DocAddress(segment_ord, cid.doc)))
.collect();
Ok(fruit)
}
} }
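
The `collect_segment` override above is the top-K-by-score fast path: it keeps a bounded `BinaryHeap` of `ComparableDoc` of size `limit + offset` and hands the current worst retained score back to `Weight::for_each_pruning` as a threshold, so scorers that support pruning can skip documents that cannot enter the heap. The bounded-heap part of that pattern in isolation, using `std::cmp::Reverse` for the min-heap ordering and integer scores to sidestep float ordering (a sketch, not the code above):

use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Keeps the k largest scores seen so far. `Reverse` turns the max-heap into a
// min-heap over scores, so `peek()` always exposes the current threshold: a new
// score at or below it cannot enter the heap and can be skipped outright.
// e.g. top_k(vec![3u64, 9, 1, 7], 2) == vec![9, 7]
fn top_k(scores: impl IntoIterator<Item = u64>, k: usize) -> Vec<u64> {
    let mut heap: BinaryHeap<Reverse<u64>> = BinaryHeap::with_capacity(k);
    for score in scores {
        if heap.len() < k {
            heap.push(Reverse(score));
        } else if let Some(mut head) = heap.peek_mut() {
            if head.0 < score {
                *head = Reverse(score);
            }
        }
    }
    // `into_sorted_vec` on `Reverse` values yields best-first order.
    heap.into_sorted_vec().into_iter().map(|Reverse(s)| s).collect()
}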
/// Segment Collector associated with `TopDocs`. /// Segment Collector associated with `TopDocs`.
@@ -432,7 +685,7 @@ impl SegmentCollector for TopScoreSegmentCollector {
type Fruit = Vec<(Score, DocAddress)>; type Fruit = Vec<(Score, DocAddress)>;
fn collect(&mut self, doc: DocId, score: Score) { fn collect(&mut self, doc: DocId, score: Score) {
self.0.collect(doc, score) self.0.collect(doc, score);
} }
fn harvest(self) -> Vec<(Score, DocAddress)> { fn harvest(self) -> Vec<(Score, DocAddress)> {
@@ -446,11 +699,10 @@ mod tests {
use crate::collector::Collector; use crate::collector::Collector;
use crate::query::{AllQuery, Query, QueryParser}; use crate::query::{AllQuery, Query, QueryParser};
use crate::schema::{Field, Schema, FAST, STORED, TEXT}; use crate::schema::{Field, Schema, FAST, STORED, TEXT};
use crate::DocAddress;
use crate::Index; use crate::Index;
use crate::IndexWriter; use crate::IndexWriter;
use crate::Score; use crate::Score;
use itertools::Itertools; use crate::{DocAddress, DocId, SegmentReader};
fn make_index() -> Index { fn make_index() -> Index {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
@@ -459,7 +711,7 @@ mod tests {
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
{ {
// writing the segment // writing the segment
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_with_num_threads(1, 10_000_000).unwrap();
index_writer.add_document(doc!(text_field=>"Hello happy tax payer.")); index_writer.add_document(doc!(text_field=>"Hello happy tax payer."));
index_writer.add_document(doc!(text_field=>"Droopy says hello happy tax payer")); index_writer.add_document(doc!(text_field=>"Droopy says hello happy tax payer"));
index_writer.add_document(doc!(text_field=>"I like Droopy")); index_writer.add_document(doc!(text_field=>"I like Droopy"));
@@ -468,6 +720,13 @@ mod tests {
index index
} }
fn assert_results_equals(results: &[(Score, DocAddress)], expected: &[(Score, DocAddress)]) {
for (result, expected) in results.iter().zip(expected.iter()) {
assert_eq!(result.1, expected.1);
crate::assert_nearly_equals!(result.0, expected.0);
}
}
#[test] #[test]
fn test_top_collector_not_at_capacity() { fn test_top_collector_not_at_capacity() {
let index = make_index(); let index = make_index();
@@ -480,16 +739,31 @@ mod tests {
.searcher() .searcher()
.search(&text_query, &TopDocs::with_limit(4)) .search(&text_query, &TopDocs::with_limit(4))
.unwrap(); .unwrap();
assert_eq!( assert_results_equals(
score_docs, &score_docs,
vec![ &[
(0.81221175, DocAddress(0u32, 1)), (0.81221175, DocAddress(0u32, 1)),
(0.5376842, DocAddress(0u32, 2)), (0.5376842, DocAddress(0u32, 2)),
(0.48527452, DocAddress(0, 0)) (0.48527452, DocAddress(0, 0)),
] ],
); );
} }
#[test]
fn test_top_collector_not_at_capacity_with_offset() {
let index = make_index();
let field = index.schema().get_field("text").unwrap();
let query_parser = QueryParser::for_index(&index, vec![field]);
let text_query = query_parser.parse_query("droopy tax").unwrap();
let score_docs: Vec<(Score, DocAddress)> = index
.reader()
.unwrap()
.searcher()
.search(&text_query, &TopDocs::with_limit(4).and_offset(2))
.unwrap();
assert_results_equals(&score_docs[..], &[(0.48527452, DocAddress(0, 0))]);
}
#[test] #[test]
fn test_top_collector_at_capacity() { fn test_top_collector_at_capacity() {
let index = make_index(); let index = make_index();
@@ -502,12 +776,33 @@ mod tests {
.searcher() .searcher()
.search(&text_query, &TopDocs::with_limit(2)) .search(&text_query, &TopDocs::with_limit(2))
.unwrap(); .unwrap();
assert_eq!( assert_results_equals(
score_docs, &score_docs,
vec![ &[
(0.81221175, DocAddress(0u32, 1)), (0.81221175, DocAddress(0u32, 1)),
(0.5376842, DocAddress(0u32, 2)), (0.5376842, DocAddress(0u32, 2)),
] ],
);
}
#[test]
fn test_top_collector_at_capacity_with_offset() {
let index = make_index();
let field = index.schema().get_field("text").unwrap();
let query_parser = QueryParser::for_index(&index, vec![field]);
let text_query = query_parser.parse_query("droopy tax").unwrap();
let score_docs: Vec<(Score, DocAddress)> = index
.reader()
.unwrap()
.searcher()
.search(&text_query, &TopDocs::with_limit(2).and_offset(1))
.unwrap();
assert_results_equals(
&score_docs[..],
&[
(0.5376842, DocAddress(0u32, 2)),
(0.48527452, DocAddress(0, 0)),
],
); );
} }
@@ -524,8 +819,8 @@ mod tests {
// precondition for the test to be meaningful: we did get documents // precondition for the test to be meaningful: we did get documents
// with the same score // with the same score
assert!(page_1.iter().map(|result| result.0).all_equal()); assert!(page_1.iter().all(|result| result.0 == page_1[0].0));
assert!(page_2.iter().map(|result| result.0).all_equal()); assert!(page_2.iter().all(|result| result.0 == page_2[0].0));
// sanity check since we're relying on make_index() // sanity check since we're relying on make_index()
assert_eq!(page_1.len(), 2); assert_eq!(page_1.len(), 2);
@@ -568,8 +863,8 @@ mod tests {
let top_collector = TopDocs::with_limit(4).order_by_u64_field(size); let top_collector = TopDocs::with_limit(4).order_by_u64_field(size);
let top_docs: Vec<(u64, DocAddress)> = searcher.search(&query, &top_collector).unwrap(); let top_docs: Vec<(u64, DocAddress)> = searcher.search(&query, &top_collector).unwrap();
assert_eq!( assert_eq!(
top_docs, &top_docs[..],
vec![ &[
(64, DocAddress(0, 1)), (64, DocAddress(0, 1)),
(16, DocAddress(0, 2)), (16, DocAddress(0, 2)),
(12, DocAddress(0, 0)) (12, DocAddress(0, 0))
@@ -577,6 +872,94 @@ mod tests {
); );
} }
#[test]
fn test_top_field_collector_datetime() -> crate::Result<()> {
use std::str::FromStr;
let mut schema_builder = Schema::builder();
let name = schema_builder.add_text_field("name", TEXT);
let birthday = schema_builder.add_date_field("birthday", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
let pr_birthday = crate::DateTime::from_str("1898-04-09T00:00:00+00:00")?;
index_writer.add_document(doc!(
name => "Paul Robeson",
birthday => pr_birthday
));
let mr_birthday = crate::DateTime::from_str("1947-11-08T00:00:00+00:00")?;
index_writer.add_document(doc!(
name => "Minnie Riperton",
birthday => mr_birthday
));
index_writer.commit()?;
let searcher = index.reader()?.searcher();
let top_collector = TopDocs::with_limit(3).order_by_fast_field(birthday);
let top_docs: Vec<(crate::DateTime, DocAddress)> =
searcher.search(&AllQuery, &top_collector)?;
assert_eq!(
&top_docs[..],
&[
(mr_birthday, DocAddress(0, 1)),
(pr_birthday, DocAddress(0, 0)),
]
);
Ok(())
}
#[test]
fn test_top_field_collector_i64() -> crate::Result<()> {
let mut schema_builder = Schema::builder();
let city = schema_builder.add_text_field("city", TEXT);
let altitude = schema_builder.add_i64_field("altitude", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(
city => "georgetown",
altitude => -1i64,
));
index_writer.add_document(doc!(
city => "tokyo",
altitude => 40i64,
));
index_writer.commit()?;
let searcher = index.reader()?.searcher();
let top_collector = TopDocs::with_limit(3).order_by_fast_field(altitude);
let top_docs: Vec<(i64, DocAddress)> = searcher.search(&AllQuery, &top_collector)?;
assert_eq!(
&top_docs[..],
&[(40i64, DocAddress(0, 1)), (-1i64, DocAddress(0, 0)),]
);
Ok(())
}
#[test]
fn test_top_field_collector_f64() -> crate::Result<()> {
let mut schema_builder = Schema::builder();
let city = schema_builder.add_text_field("city", TEXT);
let altitude = schema_builder.add_f64_field("altitude", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(
city => "georgetown",
altitude => -1.0f64,
));
index_writer.add_document(doc!(
city => "tokyo",
altitude => 40f64,
));
index_writer.commit()?;
let searcher = index.reader()?.searcher();
let top_collector = TopDocs::with_limit(3).order_by_fast_field(altitude);
let top_docs: Vec<(f64, DocAddress)> = searcher.search(&AllQuery, &top_collector)?;
assert_eq!(
&top_docs[..],
&[(40f64, DocAddress(0, 1)), (-1.0f64, DocAddress(0, 0)),]
);
Ok(())
}
#[test] #[test]
#[should_panic] #[should_panic]
fn test_field_does_not_exist() { fn test_field_does_not_exist() {
@@ -599,29 +982,85 @@ mod tests {
} }
#[test] #[test]
fn test_field_not_fast_field() { fn test_field_not_fast_field() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let title = schema_builder.add_text_field(TITLE, TEXT);
let size = schema_builder.add_u64_field(SIZE, STORED); let size = schema_builder.add_u64_field(SIZE, STORED);
let schema = schema_builder.build(); let schema = schema_builder.build();
let (index, _) = index("beer", title, schema, |index_writer| { let index = Index::create_in_ram(schema);
index_writer.add_document(doc!( let mut index_writer = index.writer_for_tests()?;
title => "bottle of beer", index_writer.add_document(doc!(size=>1u64));
size => 12u64, index_writer.commit()?;
)); let searcher = index.reader()?.searcher();
});
let searcher = index.reader().unwrap().searcher();
let segment = searcher.segment_reader(0); let segment = searcher.segment_reader(0);
let top_collector = TopDocs::with_limit(4).order_by_u64_field(size); let top_collector = TopDocs::with_limit(4).order_by_u64_field(size);
let err = top_collector.for_segment(0, segment); let err = top_collector.for_segment(0, segment).err().unwrap();
if let Err(crate::TantivyError::SchemaError(msg)) = err { assert!(
assert_eq!( matches!(err, crate::TantivyError::SchemaError(msg) if msg == "Field requested (Field(0)) is not a fast field.")
msg, );
"Field requested (Field(1)) is not a i64/u64 fast field." Ok(())
); }
} else {
assert!(false); #[test]
} fn test_field_wrong_type() -> crate::Result<()> {
let mut schema_builder = Schema::builder();
let size = schema_builder.add_u64_field(SIZE, STORED);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(size=>1u64));
index_writer.commit()?;
let searcher = index.reader()?.searcher();
let segment = searcher.segment_reader(0);
let top_collector = TopDocs::with_limit(4).order_by_fast_field::<i64>(size);
let err = top_collector.for_segment(0, segment).err().unwrap();
assert!(
matches!(err, crate::TantivyError::SchemaError(msg) if msg == "Field \"size\" is not a fast field.")
);
Ok(())
}
#[test]
fn test_tweak_score_top_collector_with_offset() {
let index = make_index();
let field = index.schema().get_field("text").unwrap();
let query_parser = QueryParser::for_index(&index, vec![field]);
let text_query = query_parser.parse_query("droopy tax").unwrap();
let collector = TopDocs::with_limit(2).and_offset(1).tweak_score(
move |_segment_reader: &SegmentReader| move |doc: DocId, _original_score: Score| doc,
);
let score_docs: Vec<(u32, DocAddress)> = index
.reader()
.unwrap()
.searcher()
.search(&text_query, &collector)
.unwrap();
assert_eq!(
score_docs,
vec![(1, DocAddress(0, 1)), (0, DocAddress(0, 0)),]
);
}
#[test]
fn test_custom_score_top_collector_with_offset() {
let index = make_index();
let field = index.schema().get_field("text").unwrap();
let query_parser = QueryParser::for_index(&index, vec![field]);
let text_query = query_parser.parse_query("droopy tax").unwrap();
let collector = TopDocs::with_limit(2)
.and_offset(1)
.custom_score(move |_segment_reader: &SegmentReader| move |doc: DocId| doc);
let score_docs: Vec<(u32, DocAddress)> = index
.reader()
.unwrap()
.searcher()
.search(&text_query, &collector)
.unwrap();
assert_eq!(
score_docs,
vec![(1, DocAddress(0, 1)), (0, DocAddress(0, 0)),]
);
} }
fn index(
@@ -631,8 +1070,7 @@ mod tests {
    mut doc_adder: impl FnMut(&mut IndexWriter) -> (),
) -> (Index, Box<dyn Query>) {
    let index = Index::create_in_ram(schema);
-   let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+   let mut index_writer = index.writer_with_num_threads(1, 10_000_000).unwrap();
    doc_adder(&mut index_writer);
    index_writer.commit().unwrap();
    let query_parser = QueryParser::for_index(&index, vec![query_field]);


@@ -14,11 +14,11 @@ where
{
    pub fn new(
        score_tweaker: TScoreTweaker,
-       limit: usize,
+       collector: TopCollector<TScore>,
    ) -> TweakedScoreTopCollector<TScoreTweaker, TScore> {
        TweakedScoreTopCollector {
            score_tweaker,
-           collector: TopCollector::with_limit(limit),
+           collector,
        }
    }
}
@@ -29,7 +29,7 @@ where
/// It is the segment local version of the [`ScoreTweaker`](./trait.ScoreTweaker.html).
pub trait ScoreSegmentTweaker<TScore>: 'static {
    /// Tweak the given `score` for the document `doc`.
-   fn score(&self, doc: DocId, score: Score) -> TScore;
+   fn score(&mut self, doc: DocId, score: Score) -> TScore;
}

/// `ScoreTweaker` makes it possible to tweak the score
@@ -49,7 +49,7 @@ pub trait ScoreTweaker<TScore>: Sync {
impl<TScoreTweaker, TScore> Collector for TweakedScoreTopCollector<TScoreTweaker, TScore>
where
-   TScoreTweaker: ScoreTweaker<TScore>,
+   TScoreTweaker: ScoreTweaker<TScore> + Send + Sync,
    TScore: 'static + PartialOrd + Clone + Send + Sync,
{
    type Fruit = Vec<(TScore, DocAddress)>;
@@ -121,9 +121,9 @@ where
impl<F, TScore> ScoreSegmentTweaker<TScore> for F
where
-   F: 'static + Sync + Send + Fn(DocId, Score) -> TScore,
+   F: 'static + FnMut(DocId, Score) -> TScore,
{
-   fn score(&self, doc: DocId, score: Score) -> TScore {
+   fn score(&mut self, doc: DocId, score: Score) -> TScore {
        (self)(doc, score)
    }
}
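Since the blanket `ScoreSegmentTweaker` impl now only requires `FnMut`, a per-segment tweak closure may own and mutate local state. A minimal sketch, reusing the `make_index()` / query-parser setup from the tests above (the counter and score formula below are illustrative, not part of the change):

    let index = make_index();
    let field = index.schema().get_field("text").unwrap();
    let text_query = QueryParser::for_index(&index, vec![field])
        .parse_query("droopy tax")
        .unwrap();
    let collector = TopDocs::with_limit(2).tweak_score(move |_segment_reader: &SegmentReader| {
        let mut tweak_calls: u32 = 0; // mutable per-segment state, legal now that only `FnMut` is required
        move |doc: DocId, _original_score: Score| {
            tweak_calls += 1;
            doc + tweak_calls // hypothetical score, in the spirit of the offset tests above
        }
    });
    let score_docs: Vec<(u32, DocAddress)> = index
        .reader()
        .unwrap()
        .searcher()
        .search(&text_query, &collector)
        .unwrap();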


@@ -1,6 +1,7 @@
use byteorder::{ByteOrder, LittleEndian, WriteBytesExt};
use std::io;
-use std::ops::Deref;
+use crate::directory::OwnedBytes;

pub(crate) struct BitPacker {
    mini_buffer: u64,
@@ -60,20 +61,14 @@ impl BitPacker {
}

#[derive(Clone)]
-pub struct BitUnpacker<Data>
-where
-   Data: Deref<Target = [u8]>,
-{
+pub struct BitUnpacker {
    num_bits: u64,
    mask: u64,
-   data: Data,
+   data: OwnedBytes,
}

-impl<Data> BitUnpacker<Data>
-where
-   Data: Deref<Target = [u8]>,
-{
-   pub fn new(data: Data, num_bits: u8) -> BitUnpacker<Data> {
+impl BitUnpacker {
+   pub fn new(data: OwnedBytes, num_bits: u8) -> BitUnpacker {
        let mask: u64 = if num_bits == 64 {
            !0u64
        } else {
@@ -90,7 +85,7 @@ where
        if self.num_bits == 0 {
            return 0u64;
        }
-       let data: &[u8] = &*self.data;
+       let data: &[u8] = self.data.as_slice();
        let num_bits = self.num_bits;
        let mask = self.mask;
        let addr_in_bits = idx * num_bits;
@@ -109,8 +104,9 @@ where
#[cfg(test)]
mod test {
    use super::{BitPacker, BitUnpacker};
+   use crate::directory::OwnedBytes;

-   fn create_fastfield_bitpacker(len: usize, num_bits: u8) -> (BitUnpacker<Vec<u8>>, Vec<u64>) {
+   fn create_fastfield_bitpacker(len: usize, num_bits: u8) -> (BitUnpacker, Vec<u64>) {
        let mut data = Vec::new();
        let mut bitpacker = BitPacker::new();
        let max_val: u64 = (1u64 << num_bits as u64) - 1u64;
@@ -122,7 +118,7 @@ mod test {
        }
        bitpacker.close(&mut data).unwrap();
        assert_eq!(data.len(), ((num_bits as usize) * len + 7) / 8 + 7);
-       let bitunpacker = BitUnpacker::new(data, num_bits);
+       let bitunpacker = BitUnpacker::new(OwnedBytes::new(data), num_bits);
        (bitunpacker, vals)
    }
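With the `Deref` type parameter gone, callers wrap whatever buffer they have in `OwnedBytes` up front, as the updated test does. A tiny sketch (the buffer contents and bit width are placeholders):

    let packed: Vec<u8> = vec![0u8; 16]; // some bit-packed buffer, e.g. produced by BitPacker
    let unpacker = BitUnpacker::new(OwnedBytes::new(packed), 5); // 5 bits per value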


@@ -33,6 +33,10 @@ impl TinySet {
        TinySet(0u64)
    }

+   pub fn clear(&mut self) {
+       self.0 = 0u64;
+   }
+
    /// Returns the complement of the set in `[0, 64[`.
    fn complement(self) -> TinySet {
        TinySet(!self.0)
@@ -43,6 +47,11 @@ impl TinySet {
        !self.intersect(TinySet::singleton(el)).is_empty()
    }

+   /// Returns the number of elements in the TinySet.
+   pub fn len(self) -> u32 {
+       self.0.count_ones()
+   }
+
    /// Returns the intersection of `self` and `other`
    pub fn intersect(self, other: TinySet) -> TinySet {
        TinySet(self.0 & other.0)
@@ -109,22 +118,12 @@ impl TinySet {
    pub fn range_greater_or_equal(from_included: u32) -> TinySet {
        TinySet::range_lower(from_included).complement()
    }
-
-   pub fn clear(&mut self) {
-       self.0 = 0u64;
-   }
-
-   pub fn len(self) -> u32 {
-       self.0.count_ones()
-   }
}

#[derive(Clone)]
pub struct BitSet {
    tinysets: Box<[TinySet]>,
-   len: usize, //< Technically it should be u32, but we
-   // count multiple inserts.
-   // `usize` guards us from overflow.
+   len: usize,
    max_value: u32,
}
@@ -204,7 +203,7 @@ mod tests {
    use super::BitSet;
    use super::TinySet;
-   use crate::docset::DocSet;
+   use crate::docset::{DocSet, TERMINATED};
    use crate::query::BitSetDocSet;
    use crate::tests;
    use crate::tests::generate_nonunique_unsorted;
@@ -278,11 +277,13 @@ mod tests {
    }
    assert_eq!(btreeset.len(), bitset.len());
    let mut bitset_docset = BitSetDocSet::from(bitset);
+   let mut remaining = true;
    for el in btreeset.into_iter() {
-       bitset_docset.advance();
+       assert!(remaining);
        assert_eq!(bitset_docset.doc(), el);
+       remaining = bitset_docset.advance() != TERMINATED;
    }
-   assert!(!bitset_docset.advance());
+   assert!(!remaining);
}
#[test] #[test]


@@ -1,14 +1,15 @@
use crate::common::BinarySerializable;
use crate::common::CountingWriter;
use crate::common::VInt;
-use crate::directory::ReadOnlySource;
+use crate::directory::FileSlice;
use crate::directory::{TerminatingWrite, WritePtr};
use crate::schema::Field;
use crate::space_usage::FieldUsage;
use crate::space_usage::PerFieldSpaceUsage;
use std::collections::HashMap;
-use std::io::Write;
-use std::io::{self, Read};
+use std::io::{self, Read, Write};
+
+use super::HasLen;

#[derive(Eq, PartialEq, Hash, Copy, Ord, PartialOrd, Clone, Debug)]
pub struct FileAddr {
@@ -103,25 +104,26 @@ impl<W: TerminatingWrite + Write> CompositeWrite<W> {
/// for each field.
#[derive(Clone)]
pub struct CompositeFile {
-   data: ReadOnlySource,
+   data: FileSlice,
    offsets_index: HashMap<FileAddr, (usize, usize)>,
}

impl CompositeFile {
    /// Opens a composite file stored in a given
-   /// `ReadOnlySource`.
-   pub fn open(data: &ReadOnlySource) -> io::Result<CompositeFile> {
+   /// `FileSlice`.
+   pub fn open(data: &FileSlice) -> io::Result<CompositeFile> {
        let end = data.len();
-       let footer_len_data = data.slice_from(end - 4);
+       let footer_len_data = data.slice_from(end - 4).read_bytes()?;
        let footer_len = u32::deserialize(&mut footer_len_data.as_slice())? as usize;
        let footer_start = end - 4 - footer_len;
-       let footer_data = data.slice(footer_start, footer_start + footer_len);
+       let footer_data = data
+           .slice(footer_start, footer_start + footer_len)
+           .read_bytes()?;
        let mut footer_buffer = footer_data.as_slice();
        let num_fields = VInt::deserialize(&mut footer_buffer)?.0 as usize;
        let mut file_addrs = vec![];
        let mut offsets = vec![];
        let mut field_index = HashMap::new();
        let mut offset = 0;
@@ -150,19 +152,19 @@ impl CompositeFile {
    pub fn empty() -> CompositeFile {
        CompositeFile {
            offsets_index: HashMap::new(),
-           data: ReadOnlySource::empty(),
+           data: FileSlice::empty(),
        }
    }

-   /// Returns the `ReadOnlySource` associated
+   /// Returns the `FileSlice` associated
    /// to a given `Field` and stored in a `CompositeFile`.
-   pub fn open_read(&self, field: Field) -> Option<ReadOnlySource> {
+   pub fn open_read(&self, field: Field) -> Option<FileSlice> {
        self.open_read_with_idx(field, 0)
    }

-   /// Returns the `ReadOnlySource` associated
+   /// Returns the `FileSlice` associated
    /// to a given `Field` and stored in a `CompositeFile`.
-   pub fn open_read_with_idx(&self, field: Field, idx: usize) -> Option<ReadOnlySource> {
+   pub fn open_read_with_idx(&self, field: Field, idx: usize) -> Option<FileSlice> {
        self.offsets_index
            .get(&FileAddr { field, idx })
            .map(|&(from, to)| self.data.slice(from, to))
@@ -192,46 +194,44 @@ mod test {
use std::path::Path; use std::path::Path;
#[test] #[test]
fn test_composite_file() { fn test_composite_file() -> crate::Result<()> {
let path = Path::new("test_path"); let path = Path::new("test_path");
let mut directory = RAMDirectory::create(); let directory = RAMDirectory::create();
{ {
let w = directory.open_write(path).unwrap(); let w = directory.open_write(path).unwrap();
let mut composite_write = CompositeWrite::wrap(w); let mut composite_write = CompositeWrite::wrap(w);
{ let mut write_0 = composite_write.for_field(Field::from_field_id(0u32));
let mut write_0 = composite_write.for_field(Field::from_field_id(0u32)); VInt(32431123u64).serialize(&mut write_0)?;
VInt(32431123u64).serialize(&mut write_0).unwrap(); write_0.flush()?;
write_0.flush().unwrap(); let mut write_4 = composite_write.for_field(Field::from_field_id(4u32));
} VInt(2).serialize(&mut write_4)?;
write_4.flush()?;
{ composite_write.close()?;
let mut write_4 = composite_write.for_field(Field::from_field_id(4u32));
VInt(2).serialize(&mut write_4).unwrap();
write_4.flush().unwrap();
}
composite_write.close().unwrap();
} }
{ {
let r = directory.open_read(path).unwrap(); let r = directory.open_read(path)?;
let composite_file = CompositeFile::open(&r).unwrap(); let composite_file = CompositeFile::open(&r)?;
{ {
let file0 = composite_file let file0 = composite_file
.open_read(Field::from_field_id(0u32)) .open_read(Field::from_field_id(0u32))
.unwrap(); .unwrap()
.read_bytes()?;
let mut file0_buf = file0.as_slice(); let mut file0_buf = file0.as_slice();
let payload_0 = VInt::deserialize(&mut file0_buf).unwrap().0; let payload_0 = VInt::deserialize(&mut file0_buf)?.0;
assert_eq!(file0_buf.len(), 0); assert_eq!(file0_buf.len(), 0);
assert_eq!(payload_0, 32431123u64); assert_eq!(payload_0, 32431123u64);
} }
{ {
let file4 = composite_file let file4 = composite_file
.open_read(Field::from_field_id(4u32)) .open_read(Field::from_field_id(4u32))
.unwrap(); .unwrap()
.read_bytes()?;
let mut file4_buf = file4.as_slice(); let mut file4_buf = file4.as_slice();
let payload_4 = VInt::deserialize(&mut file4_buf).unwrap().0; let payload_4 = VInt::deserialize(&mut file4_buf)?.0;
assert_eq!(file4_buf.len(), 0); assert_eq!(file4_buf.len(), 0);
assert_eq!(payload_4, 2u64); assert_eq!(payload_4, 2u64);
} }
} }
Ok(())
} }
} }
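Reading a per-field payload now goes through `FileSlice` plus an explicit `read_bytes()` call. A sketch under the assumption that `file_slice` holds a serialized composite file (the field id and payload are made up for illustration, mirroring the test above):

    let composite_file = CompositeFile::open(&file_slice)?;
    if let Some(field_slice) = composite_file.open_read(Field::from_field_id(0u32)) {
        let bytes = field_slice.read_bytes()?; // blocking read of just this field's byte range
        let mut cursor = bytes.as_slice();
        let payload = VInt::deserialize(&mut cursor)?.0;
        println!("field 0 payload: {}", payload);
    }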


@@ -10,7 +10,9 @@ pub(crate) use self::bitset::TinySet;
pub(crate) use self::composite_file::{CompositeFile, CompositeWrite};
pub use self::counting_writer::CountingWriter;
pub use self::serialize::{BinarySerializable, FixedSize};
-pub use self::vint::{read_u32_vint, serialize_vint_u32, write_u32_vint, VInt};
+pub use self::vint::{
+   read_u32_vint, read_u32_vint_no_advance, serialize_vint_u32, write_u32_vint, VInt,
+};
pub use byteorder::LittleEndian as Endianness;

/// Segment's max doc must be `< MAX_DOC_LIMIT`.
@@ -18,6 +20,19 @@ pub use byteorder::LittleEndian as Endianness;
/// We do not allow segments with more than
pub const MAX_DOC_LIMIT: u32 = 1 << 31;
pub fn minmax<I, T>(mut vals: I) -> Option<(T, T)>
where
I: Iterator<Item = T>,
T: Copy + Ord,
{
if let Some(first_el) = vals.next() {
return Some(vals.fold((first_el, first_el), |(min_val, max_val), el| {
(min_val.min(el), max_val.max(el))
}));
}
None
}
/// Computes the number of bits that will be used for bitpacking.
///
/// In general the target is the minimum number of bits
@@ -134,6 +149,7 @@ pub fn u64_to_f64(val: u64) -> f64 {
#[cfg(test)]
pub(crate) mod test {
+   pub use super::minmax;
    pub use super::serialize::test::fixed_size_test;
    use super::{compute_num_bits, f64_to_u64, i64_to_u64, u64_to_f64, u64_to_i64};
    use std::f64;
@@ -199,4 +215,21 @@ pub(crate) mod test {
    assert!(((super::MAX_DOC_LIMIT - 1) as i32) >= 0);
    assert!((super::MAX_DOC_LIMIT as i32) < 0);
}
#[test]
fn test_minmax_empty() {
let vals: Vec<u32> = vec![];
assert_eq!(minmax(vals.into_iter()), None);
}
#[test]
fn test_minmax_one() {
assert_eq!(minmax(vec![1].into_iter()), Some((1, 1)));
}
#[test]
fn test_minmax_two() {
assert_eq!(minmax(vec![1, 2].into_iter()), Some((1, 2)));
assert_eq!(minmax(vec![2, 1].into_iter()), Some((1, 2)));
}
} }


@@ -89,6 +89,19 @@ impl FixedSize for u64 {
    const SIZE_IN_BYTES: usize = 8;
}

+impl BinarySerializable for f32 {
+   fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
+       writer.write_f32::<Endianness>(*self)
+   }
+   fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
+       reader.read_f32::<Endianness>()
+   }
+}
+
+impl FixedSize for f32 {
+   const SIZE_IN_BYTES: usize = 4;
+}
+
impl BinarySerializable for i64 {
    fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
        writer.write_i64::<Endianness>(*self)
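A quick round-trip sketch of the new `f32` impl, assuming `BinarySerializable` and `FixedSize` are in scope and the snippet runs inside a test returning `crate::Result<()>` (illustrative only, not part of the change):

    let mut buffer: Vec<u8> = Vec::new();
    3.5f32.serialize(&mut buffer)?;
    assert_eq!(buffer.len(), f32::SIZE_IN_BYTES); // 4 bytes, little-endian
    let value = f32::deserialize(&mut &buffer[..])?;
    assert_eq!(value, 3.5f32);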


@@ -5,12 +5,12 @@ use std::io::Read;
use std::io::Write;

/// Wrapper over a `u64` that serializes as a variable int.
-#[derive(Debug, Eq, PartialEq)]
+#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub struct VInt(pub u64);

const STOP_BIT: u8 = 128;

-pub fn serialize_vint_u32(val: u32) -> (u64, usize) {
+pub fn serialize_vint_u32(val: u32, buf: &mut [u8; 8]) -> &[u8] {
    const START_2: u64 = 1 << 7;
    const START_3: u64 = 1 << 14;
    const START_4: u64 = 1 << 21;
@@ -29,7 +29,7 @@ pub fn serialize_vint_u32(val: u32) -> (u64, usize) {
    let val = u64::from(val);
    const STOP_BIT: u64 = 128u64;
-   match val {
+   let (res, num_bytes) = match val {
        0..=STOP_1 => (val | STOP_BIT, 1),
        START_2..=STOP_2 => (
            (val & MASK_1) | ((val & MASK_2) << 1) | (STOP_BIT << (8)),
@@ -56,7 +56,9 @@ pub fn serialize_vint_u32(val: u32) -> (u64, usize) {
            | (STOP_BIT << (8 * 4)),
            5,
        ),
-   }
+   };
+   LittleEndian::write_u64(&mut buf[..], res);
+   &buf[0..num_bytes]
}

/// Returns the number of bytes covered by a
@@ -85,23 +87,26 @@ fn vint_len(data: &[u8]) -> usize {
/// If the buffer does not start by a valid
/// vint payload
pub fn read_u32_vint(data: &mut &[u8]) -> u32 {
-   let vlen = vint_len(*data);
+   let (result, vlen) = read_u32_vint_no_advance(*data);
+   *data = &data[vlen..];
+   result
+}
+
+pub fn read_u32_vint_no_advance(data: &[u8]) -> (u32, usize) {
+   let vlen = vint_len(data);
    let mut result = 0u32;
    let mut shift = 0u64;
    for &b in &data[..vlen] {
        result |= u32::from(b & 127u8) << shift;
        shift += 7;
    }
-   *data = &data[vlen..];
-   result
+   (result, vlen)
}

/// Write a `u32` as a vint payload.
pub fn write_u32_vint<W: io::Write>(val: u32, writer: &mut W) -> io::Result<()> {
-   let (val, num_bytes) = serialize_vint_u32(val);
-   let mut buffer = [0u8; 8];
-   LittleEndian::write_u64(&mut buffer, val);
-   writer.write_all(&buffer[..num_bytes])
+   let mut buf = [0u8; 8];
+   let data = serialize_vint_u32(val, &mut buf);
+   writer.write_all(&data)
}

impl VInt {
@@ -172,7 +177,6 @@ mod tests {
    use super::serialize_vint_u32;
    use super::VInt;
    use crate::common::BinarySerializable;
-   use byteorder::{ByteOrder, LittleEndian};

    fn aux_test_vint(val: u64) {
        let mut v = [14u8; 10];
@@ -208,12 +212,10 @@ mod tests {
    fn aux_test_serialize_vint_u32(val: u32) {
        let mut buffer = [0u8; 10];
-       let mut buffer2 = [0u8; 10];
+       let mut buffer2 = [0u8; 8];
        let len_vint = VInt(val as u64).serialize_into(&mut buffer);
-       let (vint, len) = serialize_vint_u32(val);
-       assert_eq!(len, len_vint, "len wrong for val {}", val);
-       LittleEndian::write_u64(&mut buffer2, vint);
-       assert_eq!(&buffer[..len], &buffer2[..len], "array wrong for {}", val);
+       let res2 = serialize_vint_u32(val, &mut buffer2);
+       assert_eq!(&buffer[..len_vint], res2, "array wrong for {}", val);
    }

    #[test]
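The new `serialize_vint_u32` signature writes into a caller-provided 8-byte scratch buffer and hands back the sub-slice that was actually used, instead of returning a packed `(u64, usize)` pair. A hedged usage sketch:

    let mut buf = [0u8; 8];
    let bytes: &[u8] = serialize_vint_u32(300u32, &mut buf);
    assert_eq!(bytes.len(), 2); // 300 does not fit in 7 bits, so it takes two vint bytes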


@@ -21,12 +21,12 @@ use crate::schema::FieldType;
use crate::schema::Schema;
use crate::tokenizer::{TextAnalyzer, TokenizerManager};
use crate::IndexWriter;
-use num_cpus;
-use std::borrow::BorrowMut;
use std::collections::HashSet;
use std::fmt;
#[cfg(feature = "mmap")]
-use std::path::{Path, PathBuf};
+use std::path::Path;
+use std::path::PathBuf;
use std::sync::Arc;
fn load_metas( fn load_metas(
@@ -56,7 +56,9 @@ pub struct Index {
}

impl Index {
-   /// Examines the director to see if it contains an index
+   /// Examines the directory to see if it contains an index.
+   ///
+   /// Effectively, it only checks for the presence of the `meta.json` file.
    pub fn exists<Dir: Directory>(dir: &Dir) -> bool {
        dir.exists(&META_FILEPATH)
    }
@@ -139,7 +141,9 @@ impl Index {
Index::create(mmap_directory, schema) Index::create(mmap_directory, schema)
} }
/// Creates a new index given an implementation of the trait `Directory` /// Creates a new index given an implementation of the trait `Directory`.
///
/// If a directory previously existed, it will be erased.
pub fn create<Dir: Directory>(dir: Dir, schema: Schema) -> crate::Result<Index> { pub fn create<Dir: Directory>(dir: Dir, schema: Schema) -> crate::Result<Index> {
let directory = ManagedDirectory::wrap(dir)?; let directory = ManagedDirectory::wrap(dir)?;
Index::from_directory(directory, schema) Index::from_directory(directory, schema)
@@ -148,8 +152,8 @@ impl Index {
/// Create a new index from a directory. /// Create a new index from a directory.
/// ///
/// This will overwrite existing meta.json /// This will overwrite existing meta.json
fn from_directory(mut directory: ManagedDirectory, schema: Schema) -> crate::Result<Index> { fn from_directory(directory: ManagedDirectory, schema: Schema) -> crate::Result<Index> {
save_new_metas(schema.clone(), directory.borrow_mut())?; save_new_metas(schema.clone(), &directory)?;
let metas = IndexMeta::with_schema(schema); let metas = IndexMeta::with_schema(schema);
Index::create_from_metas(directory, &metas, SegmentMetaInventory::default()) Index::create_from_metas(directory, &metas, SegmentMetaInventory::default())
} }
@@ -282,7 +286,7 @@ impl Index {
TantivyError::LockFailure( TantivyError::LockFailure(
err, err,
Some( Some(
"Failed to acquire index lock. If you are using\ "Failed to acquire index lock. If you are using \
a regular directory, this means there is already an \ a regular directory, this means there is already an \
`IndexWriter` working on this `Directory`, in this process \ `IndexWriter` working on this `Directory`, in this process \
or in a different process." or in a different process."
@@ -299,6 +303,15 @@ impl Index {
) )
} }
/// Helper to create an index writer for tests.
///
/// That index writer simply has a single thread and a heap of 10 MB.
/// Using a single thread gives us a deterministic allocation of DocId.
#[cfg(test)]
pub fn writer_for_tests(&self) -> crate::Result<IndexWriter> {
self.writer_with_num_threads(1, 10_000_000)
}
/// Creates a multithreaded writer
///
/// Tantivy will automatically define the number of threads to use.
@@ -501,7 +514,7 @@ mod tests {
let schema = throw_away_schema(); let schema = throw_away_schema();
let field = schema.get_field("num_likes").unwrap(); let field = schema.get_field("num_likes").unwrap();
let mut index = Index::create_from_tempdir(schema).unwrap(); let mut index = Index::create_from_tempdir(schema).unwrap();
let mut writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut writer = index.writer_for_tests().unwrap();
writer.commit().unwrap(); writer.commit().unwrap();
let reader = index let reader = index
.reader_builder() .reader_builder()
@@ -538,23 +551,33 @@ mod tests {
        test_index_on_commit_reload_policy_aux(field, &write_index, &reader);
    }
}

fn test_index_on_commit_reload_policy_aux(field: Field, index: &Index, reader: &IndexReader) {
    let mut reader_index = reader.index();
    let (sender, receiver) = crossbeam::channel::unbounded();
    let _watch_handle = reader_index.directory_mut().watch(Box::new(move || {
        let _ = sender.send(());
    }));
-   let mut writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+   let mut writer = index.writer_for_tests().unwrap();
    assert_eq!(reader.searcher().num_docs(), 0);
    writer.add_document(doc!(field=>1u64));
    writer.commit().unwrap();
-   assert!(receiver.recv().is_ok());
-   assert_eq!(reader.searcher().num_docs(), 1);
+   // We need a loop here because it is possible for notify to send more than
+   // one modify event. It was observed on CI on MacOS.
+   loop {
+       assert!(receiver.recv().is_ok());
+       if reader.searcher().num_docs() == 1 {
+           break;
+       }
+   }
    writer.add_document(doc!(field=>2u64));
    writer.commit().unwrap();
-   assert!(receiver.recv().is_ok());
-   assert_eq!(reader.searcher().num_docs(), 2);
+   // ... Same as above
+   loop {
+       assert!(receiver.recv().is_ok());
+       if reader.searcher().num_docs() == 2 {
+           break;
+       }
+   }
}
// This test will not pass on windows, because windows // This test will not pass on windows, because windows


@@ -3,9 +3,7 @@ use crate::core::SegmentId;
use crate::schema::Schema;
use crate::Opstamp;
use census::{Inventory, TrackedObject};
-use serde;
use serde::{Deserialize, Serialize};
-use serde_json;
use std::collections::HashSet;
use std::fmt;
use std::path::PathBuf;
@@ -215,7 +213,7 @@ pub struct IndexMeta {
    #[serde(skip_serializing_if = "Option::is_none")]
    /// Payload associated to the last commit.
    ///
-   /// Upon commit, clients can optionally add a small `Striing` payload to their commit
+   /// Upon commit, clients can optionally add a small `String` payload to their commit
    /// to help identify this commit.
    /// This payload is entirely unused by tantivy.
    pub payload: Option<String>,
@@ -303,7 +301,7 @@ mod tests {
    let json = serde_json::ser::to_string(&index_metas).expect("serialization failed");
    assert_eq!(
        json,
-       r#"{"segments":[],"schema":[{"name":"text","type":"text","options":{"indexing":{"record":"position","tokenizer":"default"},"stored":false}}],"opstamp":0}"#
+       r#"{"segments":[],"schema":[{"name":"text","type":"text","options":{"indexing":{"record":"position","tokenizer":"default","fieldnorms":true},"stored":false}}],"opstamp":0}"#
    );
    }
}


@@ -1,13 +1,13 @@
+use std::io;
+
use crate::common::BinarySerializable;
-use crate::directory::ReadOnlySource;
+use crate::directory::FileSlice;
use crate::positions::PositionReader;
use crate::postings::TermInfo;
use crate::postings::{BlockSegmentPostings, SegmentPostings};
-use crate::schema::FieldType;
use crate::schema::IndexRecordOption;
use crate::schema::Term;
use crate::termdict::TermDictionary;
-use owned_read::OwnedRead;

/// The inverted index reader is in charge of accessing
/// the inverted index associated to a specific field.
@@ -16,7 +16,7 @@ use owned_read::OwnedRead;
///
/// It is safe to delete the segment associated to
/// an `InvertedIndexReader`. As long as it is open,
-/// the `ReadOnlySource` it is relying on should
+/// the `FileSlice` it is relying on should
/// stay available.
///
///
@@ -24,9 +24,9 @@ use owned_read::OwnedRead;
/// the `SegmentReader`'s [`.inverted_index(...)`] method
pub struct InvertedIndexReader {
    termdict: TermDictionary,
-   postings_source: ReadOnlySource,
-   positions_source: ReadOnlySource,
-   positions_idx_source: ReadOnlySource,
+   postings_file_slice: FileSlice,
+   positions_file_slice: FileSlice,
+   positions_idx_file_slice: FileSlice,
    record_option: IndexRecordOption,
    total_num_tokens: u64,
}
@@ -35,35 +35,31 @@ impl InvertedIndexReader {
#[cfg_attr(feature = "cargo-clippy", allow(clippy::needless_pass_by_value))] // for symmetry #[cfg_attr(feature = "cargo-clippy", allow(clippy::needless_pass_by_value))] // for symmetry
pub(crate) fn new( pub(crate) fn new(
termdict: TermDictionary, termdict: TermDictionary,
postings_source: ReadOnlySource, postings_file_slice: FileSlice,
positions_source: ReadOnlySource, positions_file_slice: FileSlice,
positions_idx_source: ReadOnlySource, positions_idx_file_slice: FileSlice,
record_option: IndexRecordOption, record_option: IndexRecordOption,
) -> InvertedIndexReader { ) -> io::Result<InvertedIndexReader> {
let total_num_tokens_data = postings_source.slice(0, 8); let (total_num_tokens_slice, postings_body) = postings_file_slice.split(8);
let mut total_num_tokens_cursor = total_num_tokens_data.as_slice(); let total_num_tokens = u64::deserialize(&mut total_num_tokens_slice.read_bytes()?)?;
let total_num_tokens = u64::deserialize(&mut total_num_tokens_cursor).unwrap_or(0u64); Ok(InvertedIndexReader {
InvertedIndexReader {
termdict, termdict,
postings_source: postings_source.slice_from(8), postings_file_slice: postings_body,
positions_source, positions_file_slice,
positions_idx_source, positions_idx_file_slice,
record_option, record_option,
total_num_tokens, total_num_tokens,
} })
} }
/// Creates an empty `InvertedIndexReader` object, which /// Creates an empty `InvertedIndexReader` object, which
/// contains no terms at all. /// contains no terms at all.
pub fn empty(field_type: &FieldType) -> InvertedIndexReader { pub fn empty(record_option: IndexRecordOption) -> InvertedIndexReader {
let record_option = field_type
.get_index_record_option()
.unwrap_or(IndexRecordOption::Basic);
InvertedIndexReader { InvertedIndexReader {
termdict: TermDictionary::empty(), termdict: TermDictionary::empty(),
postings_source: ReadOnlySource::empty(), postings_file_slice: FileSlice::empty(),
positions_source: ReadOnlySource::empty(), positions_file_slice: FileSlice::empty(),
positions_idx_source: ReadOnlySource::empty(), positions_idx_file_slice: FileSlice::empty(),
record_option, record_option,
total_num_tokens: 0u64, total_num_tokens: 0u64,
} }
@@ -93,12 +89,12 @@ impl InvertedIndexReader {
&self, &self,
term_info: &TermInfo, term_info: &TermInfo,
block_postings: &mut BlockSegmentPostings, block_postings: &mut BlockSegmentPostings,
) { ) -> io::Result<()> {
let offset = term_info.postings_offset as usize; let postings_slice = self
let end_source = self.postings_source.len(); .postings_file_slice
let postings_slice = self.postings_source.slice(offset, end_source); .slice_from(term_info.postings_offset as usize);
let postings_reader = OwnedRead::new(postings_slice); block_postings.reset(term_info.doc_freq, postings_slice.read_bytes()?);
block_postings.reset(term_info.doc_freq, postings_reader); Ok(())
} }
/// Returns a block postings given a `Term`. /// Returns a block postings given a `Term`.
@@ -109,9 +105,11 @@ impl InvertedIndexReader {
&self, &self,
term: &Term, term: &Term,
option: IndexRecordOption, option: IndexRecordOption,
) -> Option<BlockSegmentPostings> { ) -> io::Result<Option<BlockSegmentPostings>> {
self.get_term_info(term) Ok(self
.get_term_info(term)
.map(move |term_info| self.read_block_postings_from_terminfo(&term_info, option)) .map(move |term_info| self.read_block_postings_from_terminfo(&term_info, option))
.transpose()?)
} }
/// Returns a block postings given a `term_info`. /// Returns a block postings given a `term_info`.
@@ -122,12 +120,12 @@ impl InvertedIndexReader {
&self, &self,
term_info: &TermInfo, term_info: &TermInfo,
requested_option: IndexRecordOption, requested_option: IndexRecordOption,
) -> BlockSegmentPostings { ) -> io::Result<BlockSegmentPostings> {
let offset = term_info.postings_offset as usize; let offset = term_info.postings_offset as usize;
let postings_data = self.postings_source.slice_from(offset); let postings_data = self.postings_file_slice.slice_from(offset);
BlockSegmentPostings::from_data( BlockSegmentPostings::open(
term_info.doc_freq, term_info.doc_freq,
OwnedRead::new(postings_data), postings_data,
self.record_option, self.record_option,
requested_option, requested_option,
) )
@@ -141,20 +139,23 @@ impl InvertedIndexReader {
&self, &self,
term_info: &TermInfo, term_info: &TermInfo,
option: IndexRecordOption, option: IndexRecordOption,
) -> SegmentPostings { ) -> io::Result<SegmentPostings> {
let block_postings = self.read_block_postings_from_terminfo(term_info, option); let block_postings = self.read_block_postings_from_terminfo(term_info, option)?;
let position_stream = { let position_stream = {
if option.has_positions() { if option.has_positions() {
let position_reader = self.positions_source.clone(); let position_reader = self.positions_file_slice.clone();
let skip_reader = self.positions_idx_source.clone(); let skip_reader = self.positions_idx_file_slice.clone();
let position_reader = let position_reader =
PositionReader::new(position_reader, skip_reader, term_info.positions_idx); PositionReader::new(position_reader, skip_reader, term_info.positions_idx)?;
Some(position_reader) Some(position_reader)
} else { } else {
None None
} }
}; };
SegmentPostings::from_block_postings(block_postings, position_stream) Ok(SegmentPostings::from_block_postings(
block_postings,
position_stream,
))
} }
/// Returns the total number of tokens recorded for all documents /// Returns the total number of tokens recorded for all documents
@@ -173,24 +174,31 @@ impl InvertedIndexReader {
/// For instance, requesting `IndexRecordOption::Freq` for a /// For instance, requesting `IndexRecordOption::Freq` for a
/// `TextIndexingOptions` that does not index position will return a `SegmentPostings` /// `TextIndexingOptions` that does not index position will return a `SegmentPostings`
/// with `DocId`s and frequencies. /// with `DocId`s and frequencies.
pub fn read_postings(&self, term: &Term, option: IndexRecordOption) -> Option<SegmentPostings> { pub fn read_postings(
&self,
term: &Term,
option: IndexRecordOption,
) -> io::Result<Option<SegmentPostings>> {
self.get_term_info(term) self.get_term_info(term)
.map(move |term_info| self.read_postings_from_terminfo(&term_info, option)) .map(move |term_info| self.read_postings_from_terminfo(&term_info, option))
.transpose()
} }
pub(crate) fn read_postings_no_deletes( pub(crate) fn read_postings_no_deletes(
&self, &self,
term: &Term, term: &Term,
option: IndexRecordOption, option: IndexRecordOption,
) -> Option<SegmentPostings> { ) -> io::Result<Option<SegmentPostings>> {
self.get_term_info(term) self.get_term_info(term)
.map(|term_info| self.read_postings_from_terminfo(&term_info, option)) .map(|term_info| self.read_postings_from_terminfo(&term_info, option))
.transpose()
} }
/// Returns the number of documents containing the term. /// Returns the number of documents containing the term.
pub fn doc_freq(&self, term: &Term) -> u32 { pub fn doc_freq(&self, term: &Term) -> io::Result<u32> {
self.get_term_info(term) Ok(self
.get_term_info(term)
.map(|term_info| term_info.doc_freq) .map(|term_info| term_info.doc_freq)
.unwrap_or(0u32) .unwrap_or(0u32))
} }
} }
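Postings access is now fallible end to end. A hedged sketch, assuming a `segment_reader`, an indexed text field `title` from the caller's schema, and the `TERMINATED` sentinel in scope (none of this comes from the diff itself):

    let inverted_index = segment_reader.inverted_index(title)?;
    let term = Term::from_field_text(title, "beer");
    if let Some(mut postings) = inverted_index.read_postings(&term, IndexRecordOption::WithFreqs)? {
        let mut doc = postings.doc();
        while doc != TERMINATED {
            // use `doc` (and its term frequency) here
            doc = postings.advance();
        }
    }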


@@ -1,11 +1,8 @@
use crate::collector::Collector;
-use crate::collector::SegmentCollector;
use crate::core::Executor;
use crate::core::InvertedIndexReader;
use crate::core::SegmentReader;
use crate::query::Query;
-use crate::query::Scorer;
-use crate::query::Weight;
use crate::schema::Document;
use crate::schema::Schema;
use crate::schema::{Field, Term};
@@ -14,28 +11,8 @@ use crate::store::StoreReader;
use crate::termdict::TermMerger;
use crate::DocAddress;
use crate::Index;
-use std::fmt;
use std::sync::Arc;
+use std::{fmt, io};
fn collect_segment<C: Collector>(
collector: &C,
weight: &dyn Weight,
segment_ord: u32,
segment_reader: &SegmentReader,
) -> crate::Result<C::Fruit> {
let mut scorer = weight.scorer(segment_reader, 1.0f32)?;
let mut segment_collector = collector.for_segment(segment_ord as u32, segment_reader)?;
if let Some(delete_bitset) = segment_reader.delete_bitset() {
scorer.for_each(&mut |doc, score| {
if delete_bitset.is_alive(doc) {
segment_collector.collect(doc, score);
}
});
} else {
scorer.for_each(&mut |doc, score| segment_collector.collect(doc, score));
}
Ok(segment_collector.harvest())
}
/// Holds a list of `SegmentReader`s ready for search. /// Holds a list of `SegmentReader`s ready for search.
/// ///
@@ -55,17 +32,17 @@ impl Searcher {
schema: Schema, schema: Schema,
index: Index, index: Index,
segment_readers: Vec<SegmentReader>, segment_readers: Vec<SegmentReader>,
) -> Searcher { ) -> io::Result<Searcher> {
let store_readers = segment_readers let store_readers: Vec<StoreReader> = segment_readers
.iter() .iter()
.map(SegmentReader::get_store_reader) .map(SegmentReader::get_store_reader)
.collect(); .collect::<io::Result<Vec<_>>>()?;
Searcher { Ok(Searcher {
schema, schema,
index, index,
segment_readers, segment_readers,
store_readers, store_readers,
} })
} }
/// Returns the `Index` associated to the `Searcher` /// Returns the `Index` associated to the `Searcher`
@@ -98,13 +75,14 @@ impl Searcher {
    /// Return the overall number of documents containing
    /// the given term.
-   pub fn doc_freq(&self, term: &Term) -> u64 {
-       self.segment_readers
-           .iter()
-           .map(|segment_reader| {
-               u64::from(segment_reader.inverted_index(term.field()).doc_freq(term))
-           })
-           .sum::<u64>()
+   pub fn doc_freq(&self, term: &Term) -> crate::Result<u64> {
+       let mut total_doc_freq = 0;
+       for segment_reader in &self.segment_readers {
+           let inverted_index = segment_reader.inverted_index(term.field())?;
+           let doc_freq = inverted_index.doc_freq(term)?;
+           total_doc_freq += u64::from(doc_freq);
+       }
+       Ok(total_doc_freq)
    }

    /// Return the list of segment readers
@@ -163,12 +141,7 @@ impl Searcher {
let segment_readers = self.segment_readers(); let segment_readers = self.segment_readers();
let fruits = executor.map( let fruits = executor.map(
|(segment_ord, segment_reader)| { |(segment_ord, segment_reader)| {
collect_segment( collector.collect_segment(weight.as_ref(), segment_ord as u32, segment_reader)
collector,
weight.as_ref(),
segment_ord as u32,
segment_reader,
)
}, },
segment_readers.iter().enumerate(), segment_readers.iter().enumerate(),
)?; )?;
@@ -176,22 +149,22 @@ impl Searcher {
} }
/// Return the field searcher associated to a `Field`. /// Return the field searcher associated to a `Field`.
pub fn field(&self, field: Field) -> FieldSearcher { pub fn field(&self, field: Field) -> crate::Result<FieldSearcher> {
let inv_index_readers = self let inv_index_readers: Vec<Arc<InvertedIndexReader>> = self
.segment_readers .segment_readers
.iter() .iter()
.map(|segment_reader| segment_reader.inverted_index(field)) .map(|segment_reader| segment_reader.inverted_index(field))
.collect::<Vec<_>>(); .collect::<crate::Result<Vec<_>>>()?;
FieldSearcher::new(inv_index_readers) Ok(FieldSearcher::new(inv_index_readers))
} }
/// Summarize total space usage of this searcher. /// Summarize total space usage of this searcher.
pub fn space_usage(&self) -> SearcherSpaceUsage { pub fn space_usage(&self) -> io::Result<SearcherSpaceUsage> {
let mut space_usage = SearcherSpaceUsage::new(); let mut space_usage = SearcherSpaceUsage::new();
for segment_reader in self.segment_readers.iter() { for segment_reader in &self.segment_readers {
space_usage.add_segment(segment_reader.space_usage()); space_usage.add_segment(segment_reader.space_usage()?);
} }
space_usage Ok(space_usage)
} }
} }
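Call sites now have to propagate errors from the `Searcher`. A short sketch assuming an existing `index` with an indexed text field `title`, inside a function returning `crate::Result<()>` (the names are illustrative):

    let searcher = index.reader()?.searcher();
    let term = Term::from_field_text(title, "beer");
    let frequency: u64 = searcher.doc_freq(&term)?; // was an infallible u64 before
    let _space = searcher.space_usage()?;           // also fallible now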


@@ -4,7 +4,7 @@ use crate::core::SegmentId;
use crate::core::SegmentMeta;
use crate::directory::error::{OpenReadError, OpenWriteError};
use crate::directory::Directory;
-use crate::directory::{ReadOnlySource, WritePtr};
+use crate::directory::{FileSlice, WritePtr};
use crate::indexer::segment_serializer::SegmentSerializer;
use crate::schema::Schema;
use crate::Opstamp;
@@ -78,10 +78,9 @@ impl Segment {
    }

    /// Open one of the component file for a *regular* read.
-   pub fn open_read(&self, component: SegmentComponent) -> Result<ReadOnlySource, OpenReadError> {
+   pub fn open_read(&self, component: SegmentComponent) -> Result<FileSlice, OpenReadError> {
        let path = self.relative_path(component);
-       let source = self.index.directory().open_read(&path)?;
-       Ok(source)
+       self.index.directory().open_read(&path)
    }

    /// Open one of the component file for *regular* write.


@@ -20,7 +20,7 @@ pub enum SegmentComponent {
    /// Dictionary associating `Term`s to `TermInfo`s which is
    /// simply an address into the `postings` file and the `positions` file.
    TERMS,
-   /// Row-oriented, LZ4-compressed storage of the documents.
+   /// Row-oriented, compressed storage of the documents.
    /// Accessing a document from the store is relatively slow, as it
    /// requires to decompress the entire block it belongs to.
    STORE,


@@ -1,26 +1,26 @@
-use crate::common::CompositeFile;
use crate::common::HasLen;
use crate::core::InvertedIndexReader;
use crate::core::Segment;
use crate::core::SegmentComponent;
use crate::core::SegmentId;
-use crate::directory::ReadOnlySource;
+use crate::directory::FileSlice;
use crate::fastfield::DeleteBitSet;
use crate::fastfield::FacetReader;
use crate::fastfield::FastFieldReaders;
-use crate::fieldnorm::FieldNormReader;
+use crate::fieldnorm::{FieldNormReader, FieldNormReaders};
-use crate::schema::Field;
use crate::schema::FieldType;
use crate::schema::Schema;
+use crate::schema::{Field, IndexRecordOption};
use crate::space_usage::SegmentSpaceUsage;
use crate::store::StoreReader;
use crate::termdict::TermDictionary;
use crate::DocId;
+use crate::{common::CompositeFile, error::DataCorruption};
use fail::fail_point;
-use std::collections::HashMap;
use std::fmt;
use std::sync::Arc;
use std::sync::RwLock;
+use std::{collections::HashMap, io};

/// Entry point to access all of the datastructures of the `Segment`
///
@@ -48,9 +48,9 @@ pub struct SegmentReader {
    positions_composite: CompositeFile,
    positions_idx_composite: CompositeFile,
    fast_fields_readers: Arc<FastFieldReaders>,
-   fieldnorms_composite: CompositeFile,
-   store_source: ReadOnlySource,
+   fieldnorm_readers: FieldNormReaders,
+   store_file: FileSlice,
    delete_bitset_opt: Option<DeleteBitSet>,
    schema: Schema,
}
@@ -106,16 +106,26 @@ impl SegmentReader {
    }

    /// Accessor to the `FacetReader` associated to a given `Field`.
-   pub fn facet_reader(&self, field: Field) -> Option<FacetReader> {
+   pub fn facet_reader(&self, field: Field) -> crate::Result<FacetReader> {
        let field_entry = self.schema.get_field_entry(field);
        if field_entry.field_type() != &FieldType::HierarchicalFacet {
-           return None;
+           return Err(crate::TantivyError::InvalidArgument(format!(
+               "Field {:?} is not a facet field.",
+               field_entry.name()
+           )));
        }
-       let term_ords_reader = self.fast_fields().u64s(field)?;
-       let termdict_source = self.termdict_composite.open_read(field)?;
-       let termdict = TermDictionary::from_source(&termdict_source);
-       let facet_reader = FacetReader::new(term_ords_reader, termdict);
-       Some(facet_reader)
+       let term_ords_reader = self.fast_fields().u64s(field).ok_or_else(|| {
+           DataCorruption::comment_only(format!(
+               "Cannot find data for hierarchical facet {:?}",
+               field_entry.name()
+           ))
+       })?;
+       let termdict = self
+           .termdict_composite
+           .open_read(field)
+           .map(TermDictionary::open)
+           .unwrap_or_else(|| Ok(TermDictionary::empty()))?;
+       Ok(FacetReader::new(term_ords_reader, termdict))
    }

    /// Accessor to the segment's `Field norms`'s reader.
@@ -125,47 +135,45 @@ impl SegmentReader {
    ///
    /// They are simply stored as a fast field, serialized in
    /// the `.fieldnorm` file of the segment.
-   pub fn get_fieldnorms_reader(&self, field: Field) -> FieldNormReader {
-       if let Some(fieldnorm_source) = self.fieldnorms_composite.open_read(field) {
-           FieldNormReader::open(fieldnorm_source)
-       } else {
+   pub fn get_fieldnorms_reader(&self, field: Field) -> crate::Result<FieldNormReader> {
+       self.fieldnorm_readers.get_field(field)?.ok_or_else(|| {
            let field_name = self.schema.get_field_name(field);
            let err_msg = format!(
-               "Field norm not found for field {:?}. Was it market as indexed during indexing.",
+               "Field norm not found for field {:?}. Was it marked as indexed during indexing?",
                field_name
            );
-           panic!(err_msg);
-       }
+           crate::TantivyError::SchemaError(err_msg)
+       })
    }

    /// Accessor to the segment's `StoreReader`.
-   pub fn get_store_reader(&self) -> StoreReader {
-       StoreReader::from_source(self.store_source.clone())
+   pub fn get_store_reader(&self) -> io::Result<StoreReader> {
+       StoreReader::open(self.store_file.clone())
    }
/// Open a new segment for reading. /// Open a new segment for reading.
pub fn open(segment: &Segment) -> crate::Result<SegmentReader> { pub fn open(segment: &Segment) -> crate::Result<SegmentReader> {
let termdict_source = segment.open_read(SegmentComponent::TERMS)?; let termdict_file = segment.open_read(SegmentComponent::TERMS)?;
let termdict_composite = CompositeFile::open(&termdict_source)?; let termdict_composite = CompositeFile::open(&termdict_file)?;
let store_source = segment.open_read(SegmentComponent::STORE)?; let store_file = segment.open_read(SegmentComponent::STORE)?;
fail_point!("SegmentReader::open#middle"); fail_point!("SegmentReader::open#middle");
let postings_source = segment.open_read(SegmentComponent::POSTINGS)?; let postings_file = segment.open_read(SegmentComponent::POSTINGS)?;
let postings_composite = CompositeFile::open(&postings_source)?; let postings_composite = CompositeFile::open(&postings_file)?;
let positions_composite = { let positions_composite = {
if let Ok(source) = segment.open_read(SegmentComponent::POSITIONS) { if let Ok(positions_file) = segment.open_read(SegmentComponent::POSITIONS) {
CompositeFile::open(&source)? CompositeFile::open(&positions_file)?
} else { } else {
CompositeFile::empty() CompositeFile::empty()
} }
}; };
let positions_idx_composite = { let positions_idx_composite = {
if let Ok(source) = segment.open_read(SegmentComponent::POSITIONSSKIP) { if let Ok(positions_skip_file) = segment.open_read(SegmentComponent::POSITIONSSKIP) {
CompositeFile::open(&source)? CompositeFile::open(&positions_skip_file)?
} else { } else {
CompositeFile::empty() CompositeFile::empty()
} }
@@ -178,26 +186,27 @@ impl SegmentReader {
let fast_field_readers = let fast_field_readers =
Arc::new(FastFieldReaders::load_all(&schema, &fast_fields_composite)?); Arc::new(FastFieldReaders::load_all(&schema, &fast_fields_composite)?);
let fieldnorms_data = segment.open_read(SegmentComponent::FIELDNORMS)?; let fieldnorm_data = segment.open_read(SegmentComponent::FIELDNORMS)?;
let fieldnorms_composite = CompositeFile::open(&fieldnorms_data)?; let fieldnorm_readers = FieldNormReaders::open(fieldnorm_data)?;
let delete_bitset_opt = if segment.meta().has_deletes() { let delete_bitset_opt = if segment.meta().has_deletes() {
let delete_data = segment.open_read(SegmentComponent::DELETE)?; let delete_data = segment.open_read(SegmentComponent::DELETE)?;
Some(DeleteBitSet::open(delete_data)) let delete_bitset = DeleteBitSet::open(delete_data)?;
Some(delete_bitset)
} else { } else {
None None
}; };
Ok(SegmentReader { Ok(SegmentReader {
inv_idx_reader_cache: Arc::new(RwLock::new(HashMap::new())), inv_idx_reader_cache: Default::default(),
max_doc: segment.meta().max_doc(), max_doc: segment.meta().max_doc(),
num_docs: segment.meta().num_docs(), num_docs: segment.meta().num_docs(),
termdict_composite, termdict_composite,
postings_composite, postings_composite,
fast_fields_readers: fast_field_readers, fast_fields_readers: fast_field_readers,
fieldnorms_composite, fieldnorm_readers,
segment_id: segment.id(), segment_id: segment.id(),
store_source, store_file,
delete_bitset_opt, delete_bitset_opt,
positions_composite, positions_composite,
positions_idx_composite, positions_idx_composite,
@@ -212,58 +221,64 @@ impl SegmentReader {
/// The field reader is in charge of iterating through the /// The field reader is in charge of iterating through the
/// term dictionary associated to a specific field, /// term dictionary associated to a specific field,
/// and opening the posting list associated to any term. /// and opening the posting list associated to any term.
-   pub fn inverted_index(&self, field: Field) -> Arc<InvertedIndexReader> {
+   ///
+   /// If the field is not marked as indexed, a warning is logged and an empty
+   /// `InvertedIndexReader` is returned.
+   /// Similarly, if the field is marked as indexed but no term has been indexed for the
+   /// given index, an empty `InvertedIndexReader` is returned (but no warning is logged).
+   pub fn inverted_index(&self, field: Field) -> crate::Result<Arc<InvertedIndexReader>> {
if let Some(inv_idx_reader) = self if let Some(inv_idx_reader) = self
.inv_idx_reader_cache .inv_idx_reader_cache
.read() .read()
.expect("Lock poisoned. This should never happen") .expect("Lock poisoned. This should never happen")
.get(&field) .get(&field)
{ {
return Arc::clone(inv_idx_reader); return Ok(Arc::clone(inv_idx_reader));
} }
let field_entry = self.schema.get_field_entry(field); let field_entry = self.schema.get_field_entry(field);
let field_type = field_entry.field_type(); let field_type = field_entry.field_type();
let record_option_opt = field_type.get_index_record_option(); let record_option_opt = field_type.get_index_record_option();
if record_option_opt.is_none() { if record_option_opt.is_none() {
panic!("Field {:?} does not seem indexed.", field_entry.name()); warn!("Field {:?} does not seem indexed.", field_entry.name());
} }
let record_option = record_option_opt.unwrap(); let postings_file_opt = self.postings_composite.open_read(field);
let postings_source_opt = self.postings_composite.open_read(field); if postings_file_opt.is_none() || record_option_opt.is_none() {
if postings_source_opt.is_none() {
// no documents in the segment contained this field. // no documents in the segment contained this field.
// As a result, no data is associated to the inverted index. // As a result, no data is associated to the inverted index.
// //
// Returns an empty inverted index. // Returns an empty inverted index.
return Arc::new(InvertedIndexReader::empty(field_type)); let record_option = record_option_opt.unwrap_or(IndexRecordOption::Basic);
return Ok(Arc::new(InvertedIndexReader::empty(record_option)));
} }
let postings_source = postings_source_opt.unwrap(); let record_option = record_option_opt.unwrap();
let postings_file = postings_file_opt.unwrap();
let termdict_source = self.termdict_composite.open_read(field).expect( let termdict_file: FileSlice = self.termdict_composite.open_read(field)
"Failed to open field term dictionary in composite file. Is the field indexed?", .ok_or_else(||
); DataCorruption::comment_only(format!("Failed to open field {:?}'s term dictionary in the composite file. Has the schema been modified?", field_entry.name()))
)?;
let positions_source = self let positions_file = self
.positions_composite .positions_composite
.open_read(field) .open_read(field)
.expect("Index corrupted. Failed to open field positions in composite file."); .expect("Index corrupted. Failed to open field positions in composite file.");
let positions_idx_source = self let positions_idx_file = self
.positions_idx_composite .positions_idx_composite
.open_read(field) .open_read(field)
.expect("Index corrupted. Failed to open field positions in composite file."); .expect("Index corrupted. Failed to open field positions in composite file.");
let inv_idx_reader = Arc::new(InvertedIndexReader::new( let inv_idx_reader = Arc::new(InvertedIndexReader::new(
TermDictionary::from_source(&termdict_source), TermDictionary::open(termdict_file)?,
postings_source, postings_file,
positions_source, positions_file,
positions_idx_source, positions_idx_file,
record_option, record_option,
)); )?);
// by releasing the lock in between, we may end up opening the inverting index // by releasing the lock in between, we may end up opening the inverting index
// twice, but this is fine. // twice, but this is fine.
@@ -272,7 +287,7 @@ impl SegmentReader {
.expect("Field reader cache lock poisoned. This should never happen.") .expect("Field reader cache lock poisoned. This should never happen.")
.insert(field, Arc::clone(&inv_idx_reader)); .insert(field, Arc::clone(&inv_idx_reader));
inv_idx_reader Ok(inv_idx_reader)
} }
/// Returns the segment id /// Returns the segment id
@@ -295,26 +310,26 @@ impl SegmentReader {
} }
/// Returns an iterator that will iterate over the alive document ids /// Returns an iterator that will iterate over the alive document ids
pub fn doc_ids_alive(&self) -> SegmentReaderAliveDocsIterator<'_> { pub fn doc_ids_alive<'a>(&'a self) -> impl Iterator<Item = DocId> + 'a {
SegmentReaderAliveDocsIterator::new(&self) (0u32..self.max_doc).filter(move |doc| !self.is_deleted(*doc))
} }
/// Summarize total space usage of this segment. /// Summarize total space usage of this segment.
pub fn space_usage(&self) -> SegmentSpaceUsage { pub fn space_usage(&self) -> io::Result<SegmentSpaceUsage> {
SegmentSpaceUsage::new( Ok(SegmentSpaceUsage::new(
self.num_docs(), self.num_docs(),
self.termdict_composite.space_usage(), self.termdict_composite.space_usage(),
self.postings_composite.space_usage(), self.postings_composite.space_usage(),
self.positions_composite.space_usage(), self.positions_composite.space_usage(),
self.positions_idx_composite.space_usage(), self.positions_idx_composite.space_usage(),
self.fast_fields_readers.space_usage(), self.fast_fields_readers.space_usage(),
self.fieldnorms_composite.space_usage(), self.fieldnorm_readers.space_usage(),
self.get_store_reader().space_usage(), self.get_store_reader()?.space_usage(),
self.delete_bitset_opt self.delete_bitset_opt
.as_ref() .as_ref()
.map(DeleteBitSet::space_usage) .map(DeleteBitSet::space_usage)
.unwrap_or(0), .unwrap_or(0),
) ))
} }
} }
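The hand-rolled alive-docs iterator below is replaced by a plain filtered range. A minimal, self-contained sketch of that same pattern (the types and field names here are illustrative stand-ins, not tantivy's actual `SegmentReader`):

type DocId = u32;

struct ToySegment {
    max_doc: DocId,
    deleted: Vec<bool>, // one flag per doc
}

impl ToySegment {
    fn is_deleted(&self, doc: DocId) -> bool {
        self.deleted.get(doc as usize).copied().unwrap_or(false)
    }

    // Returns an iterator over non-deleted doc ids, borrowing `self` for its
    // lifetime, exactly like the `(0..max_doc).filter(...)` body in the diff.
    fn doc_ids_alive(&self) -> impl Iterator<Item = DocId> + '_ {
        (0..self.max_doc).filter(move |doc| !self.is_deleted(*doc))
    }
}

fn main() {
    let segment = ToySegment { max_doc: 4, deleted: vec![false, true, false, true] };
    let alive: Vec<DocId> = segment.doc_ids_alive().collect();
    assert_eq!(alive, vec![0, 2]);
}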
@@ -324,52 +339,6 @@ impl fmt::Debug for SegmentReader {
} }
} }
/// Implements the iterator trait to allow easy iteration
/// over non-deleted ("alive") DocIds in a SegmentReader
pub struct SegmentReaderAliveDocsIterator<'a> {
reader: &'a SegmentReader,
max_doc: DocId,
current: DocId,
}
impl<'a> SegmentReaderAliveDocsIterator<'a> {
pub fn new(reader: &'a SegmentReader) -> SegmentReaderAliveDocsIterator<'a> {
SegmentReaderAliveDocsIterator {
reader,
max_doc: reader.max_doc(),
current: 0,
}
}
}
impl<'a> Iterator for SegmentReaderAliveDocsIterator<'a> {
type Item = DocId;
fn next(&mut self) -> Option<Self::Item> {
// TODO: Use TinySet (like in BitSetDocSet) to speed this process up
if self.current >= self.max_doc {
return None;
}
// find the next alive doc id
while self.reader.is_deleted(self.current) {
self.current += 1;
if self.current >= self.max_doc {
return None;
}
}
// capture the current alive DocId
let result = Some(self.current);
// move down the chain
self.current += 1;
result
}
}
#[cfg(test)] #[cfg(test)]
mod test { mod test {
use crate::core::Index; use crate::core::Index;
@@ -377,7 +346,7 @@ mod test {
use crate::DocId; use crate::DocId;
#[test] #[test]
fn test_alive_docs_iterator() { fn test_alive_docs_iterator() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
schema_builder.add_text_field("name", TEXT | STORED); schema_builder.add_text_field("name", TEXT | STORED);
let schema = schema_builder.build(); let schema = schema_builder.build();
@@ -385,26 +354,26 @@ mod test {
let name = schema.get_field("name").unwrap(); let name = schema.get_field("name").unwrap();
{ {
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(name => "tantivy")); index_writer.add_document(doc!(name => "tantivy"));
index_writer.add_document(doc!(name => "horse")); index_writer.add_document(doc!(name => "horse"));
index_writer.add_document(doc!(name => "jockey")); index_writer.add_document(doc!(name => "jockey"));
index_writer.add_document(doc!(name => "cap")); index_writer.add_document(doc!(name => "cap"));
// we should now have one segment with two docs // we should now have one segment with two docs
index_writer.commit().unwrap(); index_writer.commit()?;
} }
{ {
let mut index_writer2 = index.writer(50_000_000).unwrap(); let mut index_writer2 = index.writer(50_000_000)?;
index_writer2.delete_term(Term::from_field_text(name, "horse")); index_writer2.delete_term(Term::from_field_text(name, "horse"));
index_writer2.delete_term(Term::from_field_text(name, "cap")); index_writer2.delete_term(Term::from_field_text(name, "cap"));
// ok, now we should have a deleted doc // ok, now we should have a deleted doc
index_writer2.commit().unwrap(); index_writer2.commit()?;
} }
let searcher = index.reader().unwrap().searcher(); let searcher = index.reader()?.searcher();
let docs: Vec<DocId> = searcher.segment_reader(0).doc_ids_alive().collect(); let docs: Vec<DocId> = searcher.segment_reader(0).doc_ids_alive().collect();
assert_eq!(vec![0u32, 2u32], docs); assert_eq!(vec![0u32, 2u32], docs);
Ok(())
} }
} }

@@ -3,7 +3,7 @@ use crate::directory::error::LockError;
use crate::directory::error::{DeleteError, OpenReadError, OpenWriteError}; use crate::directory::error::{DeleteError, OpenReadError, OpenWriteError};
use crate::directory::WatchCallback; use crate::directory::WatchCallback;
use crate::directory::WatchHandle; use crate::directory::WatchHandle;
use crate::directory::{ReadOnlySource, WritePtr}; use crate::directory::{FileSlice, WritePtr};
use std::fmt; use std::fmt;
use std::io; use std::io;
use std::io::Write; use std::io::Write;
@@ -11,7 +11,6 @@ use std::marker::Send;
use std::marker::Sync; use std::marker::Sync;
use std::path::Path; use std::path::Path;
use std::path::PathBuf; use std::path::PathBuf;
use std::result;
use std::thread; use std::thread;
use std::time::Duration; use std::time::Duration;
@@ -80,7 +79,7 @@ fn try_acquire_lock(
) -> Result<DirectoryLock, TryAcquireLockError> { ) -> Result<DirectoryLock, TryAcquireLockError> {
let mut write = directory.open_write(filepath).map_err(|e| match e { let mut write = directory.open_write(filepath).map_err(|e| match e {
OpenWriteError::FileAlreadyExists(_) => TryAcquireLockError::FileExists, OpenWriteError::FileAlreadyExists(_) => TryAcquireLockError::FileExists,
OpenWriteError::IOError(io_error) => TryAcquireLockError::IOError(io_error.into()), OpenWriteError::IOError { io_error, .. } => TryAcquireLockError::IOError(io_error),
})?; })?;
write.flush().map_err(TryAcquireLockError::IOError)?; write.flush().map_err(TryAcquireLockError::IOError)?;
Ok(DirectoryLock::from(Box::new(DirectoryLockGuard { Ok(DirectoryLock::from(Box::new(DirectoryLockGuard {
@@ -117,19 +116,19 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// change. /// change.
/// ///
/// Specifically, subsequent writes or flushes should /// Specifically, subsequent writes or flushes should
/// have no effect on the returned `ReadOnlySource` object. /// have no effect on the returned `FileSlice` object.
/// ///
/// You should only use this to read files create with [Directory::open_write]. /// You should only use this to read files create with [Directory::open_write].
fn open_read(&self, path: &Path) -> result::Result<ReadOnlySource, OpenReadError>; fn open_read(&self, path: &Path) -> Result<FileSlice, OpenReadError>;
/// Removes a file /// Removes a file
/// ///
/// Removing a file will not affect an eventual /// Removing a file will not affect an eventual
/// existing ReadOnlySource pointing to it. /// existing FileSlice pointing to it.
/// ///
/// Removing a nonexistent file, yields a /// Removing a nonexistent file, yields a
/// `DeleteError::DoesNotExist`. /// `DeleteError::DoesNotExist`.
fn delete(&self, path: &Path) -> result::Result<(), DeleteError>; fn delete(&self, path: &Path) -> Result<(), DeleteError>;
/// Returns true iff the file exists /// Returns true iff the file exists
fn exists(&self, path: &Path) -> bool; fn exists(&self, path: &Path) -> bool;
@@ -139,7 +138,7 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// ///
/// Right after this call, the file should be created /// Right after this call, the file should be created
/// and any subsequent call to `open_read` for the /// and any subsequent call to `open_read` for the
/// same path should return a `ReadOnlySource`. /// same path should return a `FileSlice`.
/// ///
/// Write operations may be aggressively buffered. /// Write operations may be aggressively buffered.
/// The client of this trait is responsible for calling flush /// The client of this trait is responsible for calling flush
@@ -153,7 +152,7 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// was not called. /// was not called.
/// ///
/// The file may not previously exist. /// The file may not previously exist.
fn open_write(&mut self, path: &Path) -> Result<WritePtr, OpenWriteError>; fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError>;
/// Reads the full content file that has been written using /// Reads the full content file that has been written using
/// atomic_write. /// atomic_write.
@@ -169,7 +168,7 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// a partially written file. /// a partially written file.
/// ///
/// The file may or may not previously exist. /// The file may or may not previously exist.
fn atomic_write(&mut self, path: &Path, data: &[u8]) -> io::Result<()>; fn atomic_write(&self, path: &Path, data: &[u8]) -> io::Result<()>;
/// Acquire a lock in the given directory. /// Acquire a lock in the given directory.
/// ///
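`open_write` and `atomic_write` now take `&self` instead of `&mut self`, so a `Directory` implementation that keeps mutable bookkeeping has to rely on interior mutability. A hedged, std-only sketch of that pattern (a toy in-memory store, not tantivy's `RAMDirectory`):

use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock};

// Toy write-through store showing how a `&self` write API can be backed by
// interior mutability. Illustration of the pattern only.
#[derive(Clone, Default)]
struct InMemoryFiles {
    files: Arc<RwLock<HashMap<PathBuf, Vec<u8>>>>,
}

impl InMemoryFiles {
    // Note: `&self`, not `&mut self`. The lock provides the mutability.
    fn atomic_write(&self, path: &Path, data: &[u8]) -> std::io::Result<()> {
        let mut files = self.files.write().expect("lock poisoned");
        files.insert(path.to_path_buf(), data.to_vec());
        Ok(())
    }

    fn read(&self, path: &Path) -> Option<Vec<u8>> {
        self.files.read().expect("lock poisoned").get(path).cloned()
    }
}

fn main() -> std::io::Result<()> {
    let store = InMemoryFiles::default();
    store.atomic_write(Path::new("meta.json"), b"{}")?;
    assert_eq!(store.read(Path::new("meta.json")), Some(b"{}".to_vec()));
    Ok(())
}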

@@ -1,162 +1,67 @@
use crate::Version; use crate::Version;
use std::error::Error as StdError;
use std::fmt; use std::fmt;
use std::io; use std::io;
use std::path::PathBuf; use std::path::PathBuf;
/// Error while trying to acquire a directory lock. /// Error while trying to acquire a directory lock.
#[derive(Debug, Fail)] #[derive(Debug, Error)]
pub enum LockError { pub enum LockError {
/// Failed to acquired a lock as it is already held by another /// Failed to acquired a lock as it is already held by another
/// client. /// client.
/// - In the context of a blocking lock, this means the lock was not released within some `timeout` period. /// - In the context of a blocking lock, this means the lock was not released within some `timeout` period.
/// - In the context of a non-blocking lock, this means the lock was busy at the moment of the call. /// - In the context of a non-blocking lock, this means the lock was busy at the moment of the call.
#[fail( #[error("Could not acquire lock as it is already held, possibly by a different process.")]
display = "Could not acquire lock as it is already held, possibly by a different process."
)]
LockBusy, LockBusy,
/// Trying to acquire a lock failed with an `IOError` /// Trying to acquire a lock failed with an `IOError`
#[fail(display = "Failed to acquire the lock due to an io:Error.")] #[error("Failed to acquire the lock due to an io:Error.")]
IOError(io::Error), IOError(io::Error),
} }
/// General IO error with an optional path to the offending file.
#[derive(Debug)]
pub struct IOError {
path: Option<PathBuf>,
err: io::Error,
}
impl Into<io::Error> for IOError {
fn into(self) -> io::Error {
self.err
}
}
impl fmt::Display for IOError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self.path {
Some(ref path) => write!(f, "io error occurred on path '{:?}': '{}'", path, self.err),
None => write!(f, "io error occurred: '{}'", self.err),
}
}
}
impl StdError for IOError {
fn description(&self) -> &str {
"io error occurred"
}
fn cause(&self) -> Option<&dyn StdError> {
Some(&self.err)
}
}
impl IOError {
pub(crate) fn with_path(path: PathBuf, err: io::Error) -> Self {
IOError {
path: Some(path),
err,
}
}
}
impl From<io::Error> for IOError {
fn from(err: io::Error) -> IOError {
IOError { path: None, err }
}
}
/// Error that may occur when opening a directory /// Error that may occur when opening a directory
#[derive(Debug)] #[derive(Debug, Error)]
pub enum OpenDirectoryError { pub enum OpenDirectoryError {
/// The underlying directory does not exists. /// The underlying directory does not exists.
#[error("Directory does not exist: '{0}'.")]
DoesNotExist(PathBuf), DoesNotExist(PathBuf),
/// The path exists but is not a directory. /// The path exists but is not a directory.
#[error("Path exists but is not a directory: '{0}'.")]
NotADirectory(PathBuf), NotADirectory(PathBuf),
/// Failed to create a temp directory.
#[error("Failed to create a temporary directory: '{0}'.")]
FailedToCreateTempDir(io::Error),
/// IoError /// IoError
IoError(io::Error), #[error("IOError '{io_error:?}' while create directory in: '{directory_path:?}'.")]
} IoError {
/// underlying io Error.
impl From<io::Error> for OpenDirectoryError { io_error: io::Error,
fn from(io_err: io::Error) -> Self { /// directory we tried to open.
OpenDirectoryError::IoError(io_err) directory_path: PathBuf,
} },
}
impl fmt::Display for OpenDirectoryError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
OpenDirectoryError::DoesNotExist(ref path) => {
write!(f, "the underlying directory '{:?}' does not exist", path)
}
OpenDirectoryError::NotADirectory(ref path) => {
write!(f, "the path '{:?}' exists but is not a directory", path)
}
OpenDirectoryError::IoError(ref err) => write!(
f,
"IOError while trying to open/create the directory. {:?}",
err
),
}
}
}
impl StdError for OpenDirectoryError {
fn description(&self) -> &str {
"error occurred while opening a directory"
}
fn cause(&self) -> Option<&dyn StdError> {
None
}
} }
/// Error that may occur when starting to write in a file /// Error that may occur when starting to write in a file
#[derive(Debug)] #[derive(Debug, Error)]
pub enum OpenWriteError { pub enum OpenWriteError {
/// Our directory is WORM, writing an existing file is forbidden. /// Our directory is WORM, writing an existing file is forbidden.
/// Checkout the `Directory` documentation. /// Checkout the `Directory` documentation.
#[error("File already exists: '{0}'")]
FileAlreadyExists(PathBuf), FileAlreadyExists(PathBuf),
/// Any kind of IO error that happens when /// Any kind of IO error that happens when
/// writing in the underlying IO device. /// writing in the underlying IO device.
IOError(IOError), #[error("IOError '{io_error:?}' while opening file for write: '{filepath}'.")]
IOError {
/// The underlying `io::Error`.
io_error: io::Error,
/// File path of the file that tantivy failed to open for write.
filepath: PathBuf,
},
} }
impl From<IOError> for OpenWriteError { impl OpenWriteError {
fn from(err: IOError) -> OpenWriteError { pub(crate) fn wrap_io_error(io_error: io::Error, filepath: PathBuf) -> Self {
OpenWriteError::IOError(err) Self::IOError { io_error, filepath }
} }
} }
impl fmt::Display for OpenWriteError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
OpenWriteError::FileAlreadyExists(ref path) => {
write!(f, "the file '{:?}' already exists", path)
}
OpenWriteError::IOError(ref err) => write!(
f,
"an io error occurred while opening a file for writing: '{}'",
err
),
}
}
}
impl StdError for OpenWriteError {
fn description(&self) -> &str {
"error occurred while opening a file for writing"
}
fn cause(&self) -> Option<&dyn StdError> {
match *self {
OpenWriteError::FileAlreadyExists(_) => None,
OpenWriteError::IOError(ref err) => Some(err),
}
}
}
/// Type of index incompatibility between the library and the index found on disk /// Type of index incompatibility between the library and the index found on disk
/// Used to catch and provide a hint to solve this incompatibility issue /// Used to catch and provide a hint to solve this incompatibility issue
pub enum Incompatibility { pub enum Incompatibility {
@@ -217,55 +122,46 @@ impl fmt::Debug for Incompatibility {
} }
/// Error that may occur when accessing a file read /// Error that may occur when accessing a file read
#[derive(Debug)] #[derive(Debug, Error)]
pub enum OpenReadError { pub enum OpenReadError {
/// The file does not exists. /// The file does not exists.
#[error("Files does not exists: {0:?}")]
FileDoesNotExist(PathBuf), FileDoesNotExist(PathBuf),
/// Any kind of IO error that happens when /// Any kind of io::Error.
/// interacting with the underlying IO device. #[error(
IOError(IOError), "IOError: '{io_error:?}' happened while opening the following file for Read: {filepath}."
/// This library doesn't support the index version found on disk )]
IOError {
/// The underlying `io::Error`.
io_error: io::Error,
/// File path of the file that tantivy failed to open for read.
filepath: PathBuf,
},
/// This library does not support the index version found in file footer.
#[error("Index version unsupported: {0:?}")]
IncompatibleIndex(Incompatibility), IncompatibleIndex(Incompatibility),
} }
impl From<IOError> for OpenReadError { impl OpenReadError {
fn from(err: IOError) -> OpenReadError { pub(crate) fn wrap_io_error(io_error: io::Error, filepath: PathBuf) -> Self {
OpenReadError::IOError(err) Self::IOError { io_error, filepath }
} }
} }
impl fmt::Display for OpenReadError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
OpenReadError::FileDoesNotExist(ref path) => {
write!(f, "the file '{:?}' does not exist", path)
}
OpenReadError::IOError(ref err) => write!(
f,
"an io error occurred while opening a file for reading: '{}'",
err
),
OpenReadError::IncompatibleIndex(ref footer) => {
write!(f, "Incompatible index format: {:?}", footer)
}
}
}
}
/// Error that may occur when trying to delete a file /// Error that may occur when trying to delete a file
#[derive(Debug)] #[derive(Debug, Error)]
pub enum DeleteError { pub enum DeleteError {
/// The file does not exists. /// The file does not exists.
#[error("File does not exists: '{0}'.")]
FileDoesNotExist(PathBuf), FileDoesNotExist(PathBuf),
/// Any kind of IO error that happens when /// Any kind of IO error that happens when
/// interacting with the underlying IO device. /// interacting with the underlying IO device.
IOError(IOError), #[error("The following IO error happened while deleting file '{filepath}': '{io_error:?}'.")]
} IOError {
/// The underlying `io::Error`.
impl From<IOError> for DeleteError { io_error: io::Error,
fn from(err: IOError) -> DeleteError { /// File path of the file that tantivy failed to delete.
DeleteError::IOError(err) filepath: PathBuf,
} },
} }
impl From<Incompatibility> for OpenReadError { impl From<Incompatibility> for OpenReadError {
@@ -273,29 +169,3 @@ impl From<Incompatibility> for OpenReadError {
OpenReadError::IncompatibleIndex(incompatibility) OpenReadError::IncompatibleIndex(incompatibility)
} }
} }
impl fmt::Display for DeleteError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
DeleteError::FileDoesNotExist(ref path) => {
write!(f, "the file '{:?}' does not exist", path)
}
DeleteError::IOError(ref err) => {
write!(f, "an io error occurred while deleting a file: '{}'", err)
}
}
}
}
impl StdError for DeleteError {
fn description(&self) -> &str {
"error occurred while deleting a file"
}
fn cause(&self) -> Option<&dyn StdError> {
match *self {
DeleteError::FileDoesNotExist(_) => None,
DeleteError::IOError(ref err) => Some(err),
}
}
}
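The error enums above drop the hand-written `Display`/`StdError` impls (and the `Fail` derive) in favor of `thiserror`, and the IO variants become struct variants that carry the offending path. A small sketch of that shape, assuming `thiserror` is a dependency; the enum and helper names here are illustrative, not the actual tantivy definitions:

use std::io;
use std::path::PathBuf;
use thiserror::Error;

// Illustrative error type mirroring the struct-variant + #[error(...)] style.
#[derive(Debug, Error)]
pub enum ReadError {
    #[error("File does not exist: '{0:?}'.")]
    FileDoesNotExist(PathBuf),
    #[error("IOError '{io_error:?}' while opening '{filepath:?}' for read.")]
    IOError {
        io_error: io::Error,
        filepath: PathBuf,
    },
}

impl ReadError {
    // Same convenience-constructor shape as `wrap_io_error` in the diff.
    pub fn wrap_io_error(io_error: io::Error, filepath: PathBuf) -> Self {
        Self::IOError { io_error, filepath }
    }
}

fn main() {
    let err = ReadError::wrap_io_error(
        io::Error::new(io::ErrorKind::NotFound, "gone"),
        PathBuf::from("meta.json"),
    );
    // thiserror generates the Display impl from the #[error] attributes.
    println!("{}", err);
}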

src/directory/file_slice.rs (new file)

@@ -0,0 +1,237 @@
use stable_deref_trait::StableDeref;
use crate::common::HasLen;
use crate::directory::OwnedBytes;
use std::sync::Arc;
use std::{io, ops::Deref};
pub type BoxedData = Box<dyn Deref<Target = [u8]> + Send + Sync + 'static>;
/// Objects that represents files sections in tantivy.
///
/// By contract, whatever happens to the directory file, as long as a FileHandle
/// is alive, the data associated with it cannot be altered or destroyed.
///
/// The underlying behavior is therefore specific to the `Directory` that created it.
/// Despite its name, a `FileSlice` may or may not directly map to an actual file
/// on the filesystem.
pub trait FileHandle: 'static + Send + Sync + HasLen {
/// Reads a slice of bytes.
///
/// This method may panic if the range requested is invalid.
fn read_bytes(&self, from: usize, to: usize) -> io::Result<OwnedBytes>;
}
impl FileHandle for &'static [u8] {
fn read_bytes(&self, from: usize, to: usize) -> io::Result<OwnedBytes> {
let bytes = &self[from..to];
Ok(OwnedBytes::new(bytes))
}
}
impl<T: Deref<Target = [u8]>> HasLen for T {
fn len(&self) -> usize {
self.as_ref().len()
}
}
impl<B> From<B> for FileSlice
where
B: StableDeref + Deref<Target = [u8]> + 'static + Send + Sync,
{
fn from(bytes: B) -> FileSlice {
FileSlice::new(OwnedBytes::new(bytes))
}
}
/// Logical slice of read only file in tantivy.
//
/// It can be cloned and sliced cheaply.
///
#[derive(Clone)]
pub struct FileSlice {
data: Arc<Box<dyn FileHandle>>,
start: usize,
stop: usize,
}
impl FileSlice {
/// Wraps a FileHandle.
pub fn new<D>(data: D) -> Self
where
D: FileHandle,
{
let len = data.len();
FileSlice {
data: Arc::new(Box::new(data)),
start: 0,
stop: len,
}
}
/// Creates a fileslice that is just a view over a slice of the data.
///
/// # Panics
///
/// Panics if `to < from` or if `to` exceeds the filesize.
pub fn slice(&self, from: usize, to: usize) -> FileSlice {
assert!(to <= self.len());
assert!(to >= from);
FileSlice {
data: self.data.clone(),
start: self.start + from,
stop: self.start + to,
}
}
/// Creates an empty FileSlice
pub fn empty() -> FileSlice {
const EMPTY_SLICE: &[u8] = &[];
FileSlice::from(EMPTY_SLICE)
}
/// Returns a `OwnedBytes` with all of the data in the `FileSlice`.
///
/// The behavior is strongly dependant on the implementation of the underlying
/// `Directory` and the `FileSliceTrait` it creates.
/// In particular, it is up to the `Directory` implementation
/// to handle caching if needed.
pub fn read_bytes(&self) -> io::Result<OwnedBytes> {
self.data.read_bytes(self.start, self.stop)
}
/// Reads a specific slice of data.
///
/// This is equivalent to running `file_slice.slice(from, to).read_bytes()`.
pub fn read_bytes_slice(&self, from: usize, to: usize) -> io::Result<OwnedBytes> {
assert!(from <= to);
assert!(
self.start + to <= self.stop,
"`to` exceeds the fileslice length"
);
self.data.read_bytes(self.start + from, self.start + to)
}
/// Splits the FileSlice at the given offset and return two file slices.
/// `file_slice[..split_offset]` and `file_slice[split_offset..]`.
///
/// This operation is cheap and must not copy any underlying data.
pub fn split(self, left_len: usize) -> (FileSlice, FileSlice) {
let left = self.slice_to(left_len);
let right = self.slice_from(left_len);
(left, right)
}
/// Splits the file slice at the given offset and return two file slices.
/// `file_slice[..split_offset]` and `file_slice[split_offset..]`.
pub fn split_from_end(self, right_len: usize) -> (FileSlice, FileSlice) {
let left_len = self.len() - right_len;
self.split(left_len)
}
/// Like `.slice(...)` but enforcing only the `from`
/// boundary.
///
/// Equivalent to `.slice(from_offset, self.len())`
pub fn slice_from(&self, from_offset: usize) -> FileSlice {
self.slice(from_offset, self.len())
}
/// Like `.slice(...)` but enforcing only the `to`
/// boundary.
///
/// Equivalent to `.slice(0, to_offset)`
pub fn slice_to(&self, to_offset: usize) -> FileSlice {
self.slice(0, to_offset)
}
}
impl HasLen for FileSlice {
fn len(&self) -> usize {
self.stop - self.start
}
}
#[cfg(test)]
mod tests {
use super::{FileHandle, FileSlice};
use crate::common::HasLen;
use std::io;
#[test]
fn test_file_slice() -> io::Result<()> {
let file_slice = FileSlice::new(b"abcdef".as_ref());
assert_eq!(file_slice.len(), 6);
assert_eq!(file_slice.slice_from(2).read_bytes()?.as_slice(), b"cdef");
assert_eq!(file_slice.slice_to(2).read_bytes()?.as_slice(), b"ab");
assert_eq!(
file_slice
.slice_from(1)
.slice_to(2)
.read_bytes()?
.as_slice(),
b"bc"
);
{
let (left, right) = file_slice.clone().split(0);
assert_eq!(left.read_bytes()?.as_slice(), b"");
assert_eq!(right.read_bytes()?.as_slice(), b"abcdef");
}
{
let (left, right) = file_slice.clone().split(2);
assert_eq!(left.read_bytes()?.as_slice(), b"ab");
assert_eq!(right.read_bytes()?.as_slice(), b"cdef");
}
{
let (left, right) = file_slice.clone().split_from_end(0);
assert_eq!(left.read_bytes()?.as_slice(), b"abcdef");
assert_eq!(right.read_bytes()?.as_slice(), b"");
}
{
let (left, right) = file_slice.clone().split_from_end(2);
assert_eq!(left.read_bytes()?.as_slice(), b"abcd");
assert_eq!(right.read_bytes()?.as_slice(), b"ef");
}
Ok(())
}
#[test]
fn test_file_slice_trait_slice_len() {
let blop: &'static [u8] = b"abc";
let owned_bytes: Box<dyn FileHandle> = Box::new(blop);
assert_eq!(owned_bytes.len(), 3);
}
#[test]
fn test_slice_simple_read() -> io::Result<()> {
let slice = FileSlice::new(&b"abcdef"[..]);
assert_eq!(slice.len(), 6);
assert_eq!(slice.read_bytes()?.as_ref(), b"abcdef");
assert_eq!(slice.slice(1, 4).read_bytes()?.as_ref(), b"bcd");
Ok(())
}
#[test]
fn test_slice_read_slice() -> io::Result<()> {
let slice_deref = FileSlice::new(&b"abcdef"[..]);
assert_eq!(slice_deref.read_bytes_slice(1, 4)?.as_ref(), b"bcd");
Ok(())
}
#[test]
#[should_panic(expected = "assertion failed: from <= to")]
fn test_slice_read_slice_invalid_range() {
let slice_deref = FileSlice::new(&b"abcdef"[..]);
assert_eq!(slice_deref.read_bytes_slice(1, 0).unwrap().as_ref(), b"bcd");
}
#[test]
#[should_panic(expected = "`to` exceeds the fileslice length")]
fn test_slice_read_slice_invalid_range_exceeds() {
let slice_deref = FileSlice::new(&b"abcdef"[..]);
assert_eq!(
slice_deref.read_bytes_slice(0, 10).unwrap().as_ref(),
b"bcd"
);
}
}
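`FileSlice` wraps any `FileHandle`, so a `Directory` backed by something other than a mmap can still hand out cheap, cloneable slices. A hedged sketch of a custom handle over an `Arc<Vec<u8>>`, assuming the `FileHandle`, `HasLen`, `OwnedBytes`, and `FileSlice` items shown above keep these signatures and are re-exported from `tantivy`:

use std::io;
use std::sync::Arc;

use tantivy::common::HasLen;
use tantivy::directory::{FileHandle, FileSlice, OwnedBytes};

// A handle over shared in-memory bytes. The Arc keeps the data alive for as
// long as any FileSlice derived from it exists, as the FileHandle contract requires.
struct VecFile(Arc<Vec<u8>>);

impl HasLen for VecFile {
    fn len(&self) -> usize {
        self.0.len()
    }
}

impl FileHandle for VecFile {
    fn read_bytes(&self, from: usize, to: usize) -> io::Result<OwnedBytes> {
        // Copy the requested range; a smarter handle could avoid the copy.
        Ok(OwnedBytes::new(self.0[from..to].to_vec()))
    }
}

fn main() -> io::Result<()> {
    let handle = VecFile(Arc::new(b"abcdef".to_vec()));
    let slice = FileSlice::new(handle);
    assert_eq!(slice.slice(1, 4).read_bytes()?.as_slice(), b"bcd");
    Ok(())
}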

@@ -1,9 +1,8 @@
use crate::common::{BinarySerializable, CountingWriter, FixedSize, VInt}; use crate::common::{BinarySerializable, CountingWriter, FixedSize, HasLen, VInt};
use crate::directory::error::Incompatibility; use crate::directory::error::Incompatibility;
use crate::directory::read_only_source::ReadOnlySource; use crate::directory::FileSlice;
use crate::directory::{AntiCallToken, TerminatingWrite}; use crate::directory::{AntiCallToken, TerminatingWrite};
use crate::Version; use crate::Version;
use byteorder::{ByteOrder, LittleEndian, WriteBytesExt};
use crc32fast::Hasher; use crc32fast::Hasher;
use std::io; use std::io;
use std::io::Write; use std::io::Write;
@@ -64,26 +63,26 @@ impl Footer {
let mut counting_write = CountingWriter::wrap(&mut write); let mut counting_write = CountingWriter::wrap(&mut write);
self.serialize(&mut counting_write)?; self.serialize(&mut counting_write)?;
let written_len = counting_write.written_bytes(); let written_len = counting_write.written_bytes();
write.write_u32::<LittleEndian>(written_len as u32)?; (written_len as u32).serialize(write)?;
Ok(()) Ok(())
} }
pub fn extract_footer(source: ReadOnlySource) -> Result<(Footer, ReadOnlySource), io::Error> { pub fn extract_footer(file: FileSlice) -> io::Result<(Footer, FileSlice)> {
if source.len() < 4 { if file.len() < 4 {
return Err(io::Error::new( return Err(io::Error::new(
io::ErrorKind::UnexpectedEof, io::ErrorKind::UnexpectedEof,
format!( format!(
"File corrupted. The file is smaller than 4 bytes (len={}).", "File corrupted. The file is smaller than 4 bytes (len={}).",
source.len() file.len()
), ),
)); ));
} }
let (body_footer, footer_len_bytes) = source.split_from_end(u32::SIZE_IN_BYTES); let (body_footer, footer_len_file) = file.split_from_end(u32::SIZE_IN_BYTES);
let footer_len = LittleEndian::read_u32(footer_len_bytes.as_slice()) as usize; let mut footer_len_bytes = footer_len_file.read_bytes()?;
let body_len = body_footer.len() - footer_len; let footer_len = u32::deserialize(&mut footer_len_bytes)? as usize;
let (body, footer_data) = body_footer.split(body_len); let (body, footer) = body_footer.split_from_end(footer_len);
let mut cursor = footer_data.as_slice(); let mut footer_bytes = footer.read_bytes()?;
let footer = Footer::deserialize(&mut cursor)?; let footer = Footer::deserialize(&mut footer_bytes)?;
Ok((footer, body)) Ok((footer, body))
} }
@@ -94,12 +93,24 @@ impl Footer {
match &self.versioned_footer { match &self.versioned_footer {
VersionedFooter::V1 { VersionedFooter::V1 {
crc32: _crc, crc32: _crc,
store_compression: compression, store_compression,
} => { } => {
if &library_version.store_compression != compression { if &library_version.store_compression != store_compression {
return Err(Incompatibility::CompressionMismatch { return Err(Incompatibility::CompressionMismatch {
library_compression_format: library_version.store_compression.to_string(), library_compression_format: library_version.store_compression.to_string(),
index_compression_format: compression.to_string(), index_compression_format: store_compression.to_string(),
});
}
Ok(())
}
VersionedFooter::V2 {
crc32: _crc,
store_compression,
} => {
if &library_version.store_compression != store_compression {
return Err(Incompatibility::CompressionMismatch {
library_compression_format: library_version.store_compression.to_string(),
index_compression_format: store_compression.to_string(),
}); });
} }
Ok(()) Ok(())
@@ -120,24 +131,29 @@ pub enum VersionedFooter {
crc32: CrcHashU32, crc32: CrcHashU32,
store_compression: String, store_compression: String,
}, },
// Introduction of the Block WAND information.
V2 {
crc32: CrcHashU32,
store_compression: String,
},
} }
impl BinarySerializable for VersionedFooter { impl BinarySerializable for VersionedFooter {
fn serialize<W: io::Write>(&self, writer: &mut W) -> io::Result<()> { fn serialize<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
let mut buf = Vec::new(); let mut buf = Vec::new();
match self { match self {
VersionedFooter::V1 { VersionedFooter::V2 {
crc32, crc32,
store_compression: compression, store_compression: compression,
} => { } => {
// Serializes a valid `VersionedFooter` or panics if the version is unknown // Serializes a valid `VersionedFooter` or panics if the version is unknown
// [ version | crc_hash | compression_mode ] // [ version | crc_hash | compression_mode ]
// [ 0..4 | 4..8 | variable ] // [ 0..4 | 4..8 | variable ]
BinarySerializable::serialize(&1u32, &mut buf)?; BinarySerializable::serialize(&2u32, &mut buf)?;
BinarySerializable::serialize(crc32, &mut buf)?; BinarySerializable::serialize(crc32, &mut buf)?;
BinarySerializable::serialize(compression, &mut buf)?; BinarySerializable::serialize(compression, &mut buf)?;
} }
VersionedFooter::UnknownVersion => { VersionedFooter::V1 { .. } | VersionedFooter::UnknownVersion => {
return Err(io::Error::new( return Err(io::Error::new(
io::ErrorKind::InvalidInput, io::ErrorKind::InvalidInput,
"Cannot serialize an unknown versioned footer ", "Cannot serialize an unknown versioned footer ",
@@ -166,22 +182,30 @@ impl BinarySerializable for VersionedFooter {
reader.read_exact(&mut buf[..])?; reader.read_exact(&mut buf[..])?;
let mut cursor = &buf[..]; let mut cursor = &buf[..];
let version = u32::deserialize(&mut cursor)?; let version = u32::deserialize(&mut cursor)?;
if version == 1 { if version != 1 && version != 2 {
let crc32 = u32::deserialize(&mut cursor)?; return Ok(VersionedFooter::UnknownVersion);
let compression = String::deserialize(&mut cursor)?;
Ok(VersionedFooter::V1 {
crc32,
store_compression: compression,
})
} else {
Ok(VersionedFooter::UnknownVersion)
} }
let crc32 = u32::deserialize(&mut cursor)?;
let store_compression = String::deserialize(&mut cursor)?;
Ok(if version == 1 {
VersionedFooter::V1 {
crc32,
store_compression,
}
} else {
assert_eq!(version, 2);
VersionedFooter::V2 {
crc32,
store_compression,
}
})
} }
} }
impl VersionedFooter { impl VersionedFooter {
pub fn crc(&self) -> Option<CrcHashU32> { pub fn crc(&self) -> Option<CrcHashU32> {
match self { match self {
VersionedFooter::V2 { crc32, .. } => Some(*crc32),
VersionedFooter::V1 { crc32, .. } => Some(*crc32), VersionedFooter::V1 { crc32, .. } => Some(*crc32),
VersionedFooter::UnknownVersion { .. } => None, VersionedFooter::UnknownVersion { .. } => None,
} }
@@ -219,7 +243,7 @@ impl<W: TerminatingWrite> Write for FooterProxy<W> {
impl<W: TerminatingWrite> TerminatingWrite for FooterProxy<W> { impl<W: TerminatingWrite> TerminatingWrite for FooterProxy<W> {
fn terminate_ref(&mut self, _: AntiCallToken) -> io::Result<()> { fn terminate_ref(&mut self, _: AntiCallToken) -> io::Result<()> {
let crc32 = self.hasher.take().unwrap().finalize(); let crc32 = self.hasher.take().unwrap().finalize();
let footer = Footer::new(VersionedFooter::V1 { let footer = Footer::new(VersionedFooter::V2 {
crc32, crc32,
store_compression: crate::store::COMPRESSION.to_string(), store_compression: crate::store::COMPRESSION.to_string(),
}); });
@@ -246,17 +270,17 @@ mod tests {
let mut vec = Vec::new(); let mut vec = Vec::new();
let footer_proxy = FooterProxy::new(&mut vec); let footer_proxy = FooterProxy::new(&mut vec);
assert!(footer_proxy.terminate().is_ok()); assert!(footer_proxy.terminate().is_ok());
assert_eq!(vec.len(), 167); if crate::store::COMPRESSION == "lz4" {
let footer = Footer::deserialize(&mut &vec[..]).unwrap(); assert_eq!(vec.len(), 158);
if let VersionedFooter::V1 {
crc32: _,
store_compression,
} = footer.versioned_footer
{
assert_eq!(store_compression, crate::store::COMPRESSION);
} else { } else {
panic!("Versioned footer should be V1."); assert_eq!(vec.len(), 167);
} }
let footer = Footer::deserialize(&mut &vec[..]).unwrap();
assert!(matches!(
footer.versioned_footer,
VersionedFooter::V2 { store_compression, .. }
if store_compression == crate::store::COMPRESSION
));
assert_eq!(&footer.version, crate::version()); assert_eq!(&footer.version, crate::version());
} }
@@ -264,7 +288,7 @@ mod tests {
fn test_serialize_deserialize_footer() { fn test_serialize_deserialize_footer() {
let mut buffer = Vec::new(); let mut buffer = Vec::new();
let crc32 = 123456u32; let crc32 = 123456u32;
let footer: Footer = Footer::new(VersionedFooter::V1 { let footer: Footer = Footer::new(VersionedFooter::V2 {
crc32, crc32,
store_compression: "lz4".to_string(), store_compression: "lz4".to_string(),
}); });
@@ -276,7 +300,7 @@ mod tests {
#[test] #[test]
fn footer_length() { fn footer_length() {
let crc32 = 1111111u32; let crc32 = 1111111u32;
let versioned_footer = VersionedFooter::V1 { let versioned_footer = VersionedFooter::V2 {
crc32, crc32,
store_compression: "lz4".to_string(), store_compression: "lz4".to_string(),
}; };
@@ -297,7 +321,7 @@ mod tests {
// versionned footer length // versionned footer length
12 | 128, 12 | 128,
// index format version // index format version
1, 2,
0, 0,
0, 0,
0, 0,
@@ -316,7 +340,7 @@ mod tests {
let versioned_footer = VersionedFooter::deserialize(&mut cursor).unwrap(); let versioned_footer = VersionedFooter::deserialize(&mut cursor).unwrap();
assert!(cursor.is_empty()); assert!(cursor.is_empty());
let expected_crc: u32 = LittleEndian::read_u32(&v_footer_bytes[5..9]) as CrcHashU32; let expected_crc: u32 = LittleEndian::read_u32(&v_footer_bytes[5..9]) as CrcHashU32;
let expected_versioned_footer: VersionedFooter = VersionedFooter::V1 { let expected_versioned_footer: VersionedFooter = VersionedFooter::V2 {
crc32: expected_crc, crc32: expected_crc,
store_compression: "lz4".to_string(), store_compression: "lz4".to_string(),
}; };
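The footer is serialized after the file body and followed by its own length as a little-endian `u32`, which is what lets `extract_footer` split a `FileSlice` from the end. A std-only sketch of that length-suffixed layout (toy payload bytes, not tantivy's actual footer fields):

use std::convert::TryInto;

// Append a footer to a body, suffixing the footer with its length as a LE u32,
// then recover both parts -- the same trick `extract_footer` relies on.
fn write_with_footer(body: &[u8], footer: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(body.len() + footer.len() + 4);
    out.extend_from_slice(body);
    out.extend_from_slice(footer);
    out.extend_from_slice(&(footer.len() as u32).to_le_bytes());
    out
}

fn split_footer(file: &[u8]) -> (&[u8], &[u8]) {
    assert!(file.len() >= 4, "file smaller than the 4-byte length suffix");
    let (body_footer, len_bytes) = file.split_at(file.len() - 4);
    let footer_len = u32::from_le_bytes(len_bytes.try_into().unwrap()) as usize;
    let (body, footer) = body_footer.split_at(body_footer.len() - footer_len);
    (body, footer)
}

fn main() {
    let file = write_with_footer(b"segment data", b"toy-footer");
    let (body, footer) = split_footer(&file);
    assert_eq!(body, b"segment data");
    assert_eq!(footer, b"toy-footer");
}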

@@ -1,17 +1,16 @@
use crate::core::MANAGED_FILEPATH; use crate::core::{MANAGED_FILEPATH, META_FILEPATH};
use crate::directory::error::{DeleteError, IOError, LockError, OpenReadError, OpenWriteError}; use crate::directory::error::{DeleteError, LockError, OpenReadError, OpenWriteError};
use crate::directory::footer::{Footer, FooterProxy}; use crate::directory::footer::{Footer, FooterProxy};
use crate::directory::DirectoryLock; use crate::directory::DirectoryLock;
use crate::directory::GarbageCollectionResult; use crate::directory::GarbageCollectionResult;
use crate::directory::Lock; use crate::directory::Lock;
use crate::directory::META_LOCK; use crate::directory::META_LOCK;
use crate::directory::{ReadOnlySource, WritePtr}; use crate::directory::{FileSlice, WritePtr};
use crate::directory::{WatchCallback, WatchHandle}; use crate::directory::{WatchCallback, WatchHandle};
use crate::error::DataCorruption; use crate::error::DataCorruption;
use crate::Directory; use crate::Directory;
use crc32fast::Hasher; use crc32fast::Hasher;
use serde_json;
use std::collections::HashSet; use std::collections::HashSet;
use std::io; use std::io;
use std::io::Write; use std::io::Write;
@@ -54,7 +53,7 @@ struct MetaInformation {
/// Saves the file containing the list of existing files /// Saves the file containing the list of existing files
/// that were created by tantivy. /// that were created by tantivy.
fn save_managed_paths( fn save_managed_paths(
directory: &mut dyn Directory, directory: &dyn Directory,
wlock: &RwLockWriteGuard<'_, MetaInformation>, wlock: &RwLockWriteGuard<'_, MetaInformation>,
) -> io::Result<()> { ) -> io::Result<()> {
let mut w = serde_json::to_vec(&wlock.managed_paths)?; let mut w = serde_json::to_vec(&wlock.managed_paths)?;
@@ -87,7 +86,7 @@ impl ManagedDirectory {
directory: Box::new(directory), directory: Box::new(directory),
meta_informations: Arc::default(), meta_informations: Arc::default(),
}), }),
Err(OpenReadError::IOError(e)) => Err(From::from(e)), io_err @ Err(OpenReadError::IOError { .. }) => Err(io_err.err().unwrap().into()),
Err(OpenReadError::IncompatibleIndex(incompatibility)) => { Err(OpenReadError::IncompatibleIndex(incompatibility)) => {
// For the moment, this should never happen `meta.json` // For the moment, this should never happen `meta.json`
// do not have any footer and cannot detect incompatibility. // do not have any footer and cannot detect incompatibility.
@@ -169,7 +168,7 @@ impl ManagedDirectory {
DeleteError::FileDoesNotExist(_) => { DeleteError::FileDoesNotExist(_) => {
deleted_files.push(file_to_delete.clone()); deleted_files.push(file_to_delete.clone());
} }
DeleteError::IOError(_) => { DeleteError::IOError { .. } => {
failed_to_delete_files.push(file_to_delete.clone()); failed_to_delete_files.push(file_to_delete.clone());
if !cfg!(target_os = "windows") { if !cfg!(target_os = "windows") {
// On windows, delete is expected to fail if the file // On windows, delete is expected to fail if the file
@@ -213,7 +212,7 @@ impl ManagedDirectory {
/// File starting by "." are reserved to locks. /// File starting by "." are reserved to locks.
/// They are not managed and cannot be subjected /// They are not managed and cannot be subjected
/// to garbage collection. /// to garbage collection.
fn register_file_as_managed(&mut self, filepath: &Path) -> io::Result<()> { fn register_file_as_managed(&self, filepath: &Path) -> io::Result<()> {
// Files starting by "." (e.g. lock files) are not managed. // Files starting by "." (e.g. lock files) are not managed.
if !is_managed(filepath) { if !is_managed(filepath) {
return Ok(()); return Ok(());
@@ -224,7 +223,7 @@ impl ManagedDirectory {
.expect("Managed file lock poisoned"); .expect("Managed file lock poisoned");
let has_changed = meta_wlock.managed_paths.insert(filepath.to_owned()); let has_changed = meta_wlock.managed_paths.insert(filepath.to_owned());
if has_changed { if has_changed {
save_managed_paths(self.directory.as_mut(), &meta_wlock)?; save_managed_paths(self.directory.as_ref(), &meta_wlock)?;
} }
Ok(()) Ok(())
} }
@@ -232,10 +231,19 @@ impl ManagedDirectory {
/// Verify checksum of a managed file /// Verify checksum of a managed file
pub fn validate_checksum(&self, path: &Path) -> result::Result<bool, OpenReadError> { pub fn validate_checksum(&self, path: &Path) -> result::Result<bool, OpenReadError> {
let reader = self.directory.open_read(path)?; let reader = self.directory.open_read(path)?;
let (footer, data) = Footer::extract_footer(reader) let (footer, data) =
.map_err(|err| IOError::with_path(path.to_path_buf(), err))?; Footer::extract_footer(reader).map_err(|io_error| OpenReadError::IOError {
io_error,
filepath: path.to_path_buf(),
})?;
let bytes = data
.read_bytes()
.map_err(|io_error| OpenReadError::IOError {
filepath: path.to_path_buf(),
io_error,
})?;
let mut hasher = Hasher::new(); let mut hasher = Hasher::new();
hasher.update(data.as_slice()); hasher.update(bytes.as_slice());
let crc = hasher.finalize(); let crc = hasher.finalize();
Ok(footer Ok(footer
.versioned_footer .versioned_footer
@@ -246,35 +254,37 @@ impl ManagedDirectory {
/// List files for which checksum does not match content /// List files for which checksum does not match content
pub fn list_damaged(&self) -> result::Result<HashSet<PathBuf>, OpenReadError> { pub fn list_damaged(&self) -> result::Result<HashSet<PathBuf>, OpenReadError> {
let mut hashset = HashSet::new(); let mut managed_paths = self
let managed_paths = self
.meta_informations .meta_informations
.read() .read()
.expect("Managed directory rlock poisoned in list damaged.") .expect("Managed directory rlock poisoned in list damaged.")
.managed_paths .managed_paths
.clone(); .clone();
for path in managed_paths.into_iter() { managed_paths.remove(*META_FILEPATH);
let mut damaged_files = HashSet::new();
for path in managed_paths {
if !self.validate_checksum(&path)? { if !self.validate_checksum(&path)? {
hashset.insert(path); damaged_files.insert(path);
} }
} }
Ok(hashset) Ok(damaged_files)
} }
} }
impl Directory for ManagedDirectory { impl Directory for ManagedDirectory {
fn open_read(&self, path: &Path) -> result::Result<ReadOnlySource, OpenReadError> { fn open_read(&self, path: &Path) -> result::Result<FileSlice, OpenReadError> {
let read_only_source = self.directory.open_read(path)?; let file_slice = self.directory.open_read(path)?;
let (footer, reader) = Footer::extract_footer(read_only_source) let (footer, reader) = Footer::extract_footer(file_slice)
.map_err(|err| IOError::with_path(path.to_path_buf(), err))?; .map_err(|io_error| OpenReadError::wrap_io_error(io_error, path.to_path_buf()))?;
footer.is_compatible()?; footer.is_compatible()?;
Ok(reader) Ok(reader)
} }
fn open_write(&mut self, path: &Path) -> result::Result<WritePtr, OpenWriteError> { fn open_write(&self, path: &Path) -> result::Result<WritePtr, OpenWriteError> {
self.register_file_as_managed(path) self.register_file_as_managed(path)
.map_err(|e| IOError::with_path(path.to_owned(), e))?; .map_err(|io_error| OpenWriteError::wrap_io_error(io_error, path.to_path_buf()))?;
Ok(io::BufWriter::new(Box::new(FooterProxy::new( Ok(io::BufWriter::new(Box::new(FooterProxy::new(
self.directory self.directory
.open_write(path)? .open_write(path)?
@@ -284,7 +294,7 @@ impl Directory for ManagedDirectory {
)))) ))))
} }
fn atomic_write(&mut self, path: &Path, data: &[u8]) -> io::Result<()> { fn atomic_write(&self, path: &Path, data: &[u8]) -> io::Result<()> {
self.register_file_as_managed(path)?; self.register_file_as_managed(path)?;
self.directory.atomic_write(path, data) self.directory.atomic_write(path, data)
} }
@@ -398,39 +408,37 @@ mod tests_mmap_specific {
} }
#[test] #[test]
fn test_checksum() { fn test_checksum() -> crate::Result<()> {
let test_path1: &'static Path = Path::new("some_path_for_test"); let test_path1: &'static Path = Path::new("some_path_for_test");
let test_path2: &'static Path = Path::new("other_test_path"); let test_path2: &'static Path = Path::new("other_test_path");
let tempdir = TempDir::new().unwrap(); let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path()); let tempdir_path = PathBuf::from(tempdir.path());
let mmap_directory = MmapDirectory::open(&tempdir_path).unwrap(); let mmap_directory = MmapDirectory::open(&tempdir_path)?;
let mut managed_directory = ManagedDirectory::wrap(mmap_directory).unwrap(); let managed_directory = ManagedDirectory::wrap(mmap_directory)?;
let mut write = managed_directory.open_write(test_path1).unwrap(); let mut write = managed_directory.open_write(test_path1)?;
write.write_all(&[0u8, 1u8]).unwrap(); write.write_all(&[0u8, 1u8])?;
write.terminate().unwrap(); write.terminate()?;
let mut write = managed_directory.open_write(test_path2).unwrap(); let mut write = managed_directory.open_write(test_path2)?;
write.write_all(&[3u8, 4u8, 5u8]).unwrap(); write.write_all(&[3u8, 4u8, 5u8])?;
write.terminate().unwrap(); write.terminate()?;
let read_source = managed_directory.open_read(test_path2).unwrap(); let read_file = managed_directory.open_read(test_path2)?.read_bytes()?;
assert_eq!(read_source.as_slice(), &[3u8, 4u8, 5u8]); assert_eq!(read_file.as_slice(), &[3u8, 4u8, 5u8]);
assert!(managed_directory.list_damaged().unwrap().is_empty()); assert!(managed_directory.list_damaged().unwrap().is_empty());
let mut corrupted_path = tempdir_path.clone(); let mut corrupted_path = tempdir_path.clone();
corrupted_path.push(test_path2); corrupted_path.push(test_path2);
let mut file = OpenOptions::new() let mut file = OpenOptions::new().write(true).open(&corrupted_path)?;
.write(true) file.write_all(&[255u8])?;
.open(&corrupted_path) file.flush()?;
.unwrap();
file.write_all(&[255u8]).unwrap();
file.flush().unwrap();
drop(file); drop(file);
let damaged = managed_directory.list_damaged().unwrap(); let damaged = managed_directory.list_damaged()?;
assert_eq!(damaged.len(), 1); assert_eq!(damaged.len(), 1);
assert!(damaged.contains(test_path2)); assert!(damaged.contains(test_path2));
Ok(())
} }
} }

@@ -1,29 +1,23 @@
use fs2;
use notify;
use self::fs2::FileExt;
use self::notify::RawEvent;
use self::notify::RecursiveMode;
use self::notify::Watcher;
use crate::core::META_FILEPATH; use crate::core::META_FILEPATH;
use crate::directory::error::LockError; use crate::directory::error::LockError;
use crate::directory::error::{ use crate::directory::error::{DeleteError, OpenDirectoryError, OpenReadError, OpenWriteError};
DeleteError, IOError, OpenDirectoryError, OpenReadError, OpenWriteError,
};
use crate::directory::read_only_source::BoxedData;
use crate::directory::AntiCallToken; use crate::directory::AntiCallToken;
use crate::directory::BoxedData;
use crate::directory::Directory; use crate::directory::Directory;
use crate::directory::DirectoryLock; use crate::directory::DirectoryLock;
use crate::directory::FileSlice;
use crate::directory::Lock; use crate::directory::Lock;
use crate::directory::ReadOnlySource;
use crate::directory::WatchCallback; use crate::directory::WatchCallback;
use crate::directory::WatchCallbackList; use crate::directory::WatchCallbackList;
use crate::directory::WatchHandle; use crate::directory::WatchHandle;
use crate::directory::{TerminatingWrite, WritePtr}; use crate::directory::{TerminatingWrite, WritePtr};
use atomicwrites; use fs2::FileExt;
use memmap::Mmap; use memmap::Mmap;
use notify::RawEvent;
use notify::RecursiveMode;
use notify::Watcher;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use std::collections::HashMap; use stable_deref_trait::StableDeref;
use std::convert::From; use std::convert::From;
use std::fmt; use std::fmt;
use std::fs::OpenOptions; use std::fs::OpenOptions;
@@ -38,6 +32,7 @@ use std::sync::Mutex;
use std::sync::RwLock; use std::sync::RwLock;
use std::sync::Weak; use std::sync::Weak;
use std::thread; use std::thread;
use std::{collections::HashMap, ops::Deref};
use tempfile::TempDir; use tempfile::TempDir;
/// Create a default io error given a string. /// Create a default io error given a string.
@@ -48,17 +43,17 @@ pub(crate) fn make_io_err(msg: String) -> io::Error {
/// Returns None iff the file exists, can be read, but is empty (and hence /// Returns None iff the file exists, can be read, but is empty (and hence
/// cannot be mmapped) /// cannot be mmapped)
fn open_mmap(full_path: &Path) -> result::Result<Option<Mmap>, OpenReadError> { fn open_mmap(full_path: &Path) -> result::Result<Option<Mmap>, OpenReadError> {
let file = File::open(full_path).map_err(|e| { let file = File::open(full_path).map_err(|io_err| {
if e.kind() == io::ErrorKind::NotFound { if io_err.kind() == io::ErrorKind::NotFound {
OpenReadError::FileDoesNotExist(full_path.to_owned()) OpenReadError::FileDoesNotExist(full_path.to_path_buf())
} else { } else {
OpenReadError::IOError(IOError::with_path(full_path.to_owned(), e)) OpenReadError::wrap_io_error(io_err, full_path.to_path_buf())
} }
})?; })?;
let meta_data = file let meta_data = file
.metadata() .metadata()
.map_err(|e| IOError::with_path(full_path.to_owned(), e))?; .map_err(|io_err| OpenReadError::wrap_io_error(io_err, full_path.to_owned()))?;
if meta_data.len() == 0 { if meta_data.len() == 0 {
// if the file size is 0, it will not be possible // if the file size is 0, it will not be possible
// to mmap the file, so we return None // to mmap the file, so we return None
@@ -68,7 +63,7 @@ fn open_mmap(full_path: &Path) -> result::Result<Option<Mmap>, OpenReadError> {
unsafe { unsafe {
memmap::Mmap::map(&file) memmap::Mmap::map(&file)
.map(Some) .map(Some)
.map_err(|e| From::from(IOError::with_path(full_path.to_owned(), e))) .map_err(|io_err| OpenReadError::wrap_io_error(io_err, full_path.to_path_buf()))
} }
} }
@@ -187,6 +182,10 @@ impl WatcherWrapper {
} }
} }
} }
})
.map_err(|io_error| OpenDirectoryError::IoError {
io_error,
directory_path: path.to_path_buf(),
})?; })?;
Ok(WatcherWrapper { Ok(WatcherWrapper {
_watcher: Mutex::new(watcher), _watcher: Mutex::new(watcher),
@@ -224,17 +223,13 @@ struct MmapDirectoryInner {
} }
impl MmapDirectoryInner { impl MmapDirectoryInner {
fn new( fn new(root_path: PathBuf, temp_directory: Option<TempDir>) -> MmapDirectoryInner {
root_path: PathBuf, MmapDirectoryInner {
temp_directory: Option<TempDir>,
) -> Result<MmapDirectoryInner, OpenDirectoryError> {
let mmap_directory_inner = MmapDirectoryInner {
root_path, root_path,
mmap_cache: Default::default(), mmap_cache: Default::default(),
_temp_directory: temp_directory, _temp_directory: temp_directory,
watcher: RwLock::new(None), watcher: RwLock::new(None),
}; }
Ok(mmap_directory_inner)
} }
fn watch(&self, watch_callback: WatchCallback) -> crate::Result<WatchHandle> { fn watch(&self, watch_callback: WatchCallback) -> crate::Result<WatchHandle> {
@@ -268,14 +263,11 @@ impl fmt::Debug for MmapDirectory {
} }
impl MmapDirectory { impl MmapDirectory {
fn new( fn new(root_path: PathBuf, temp_directory: Option<TempDir>) -> MmapDirectory {
root_path: PathBuf, let inner = MmapDirectoryInner::new(root_path, temp_directory);
temp_directory: Option<TempDir>, MmapDirectory {
) -> Result<MmapDirectory, OpenDirectoryError> {
let inner = MmapDirectoryInner::new(root_path, temp_directory)?;
Ok(MmapDirectory {
inner: Arc::new(inner), inner: Arc::new(inner),
}) }
} }
/// Creates a new MmapDirectory in a temporary directory. /// Creates a new MmapDirectory in a temporary directory.
@@ -283,9 +275,11 @@ impl MmapDirectory {
/// This is mostly useful to test the MmapDirectory itself. /// This is mostly useful to test the MmapDirectory itself.
/// For your unit tests, prefer the RAMDirectory. /// For your unit tests, prefer the RAMDirectory.
pub fn create_from_tempdir() -> Result<MmapDirectory, OpenDirectoryError> { pub fn create_from_tempdir() -> Result<MmapDirectory, OpenDirectoryError> {
let tempdir = TempDir::new().map_err(OpenDirectoryError::IoError)?; let tempdir = TempDir::new().map_err(OpenDirectoryError::FailedToCreateTempDir)?;
let tempdir_path = PathBuf::from(tempdir.path()); Ok(MmapDirectory::new(
MmapDirectory::new(tempdir_path, Some(tempdir)) tempdir.path().to_path_buf(),
Some(tempdir),
))
} }
/// Opens a MmapDirectory in a directory. /// Opens a MmapDirectory in a directory.
@@ -303,7 +297,7 @@ impl MmapDirectory {
directory_path, directory_path,
))) )))
} else { } else {
Ok(MmapDirectory::new(PathBuf::from(directory_path), None)?) Ok(MmapDirectory::new(PathBuf::from(directory_path), None))
} }
} }
@@ -407,8 +401,20 @@ impl TerminatingWrite for SafeFileWriter {
} }
} }
#[derive(Clone)]
struct MmapArc(Arc<Box<dyn Deref<Target = [u8]> + Send + Sync>>);
impl Deref for MmapArc {
type Target = [u8];
fn deref(&self) -> &[u8] {
self.0.deref()
}
}
unsafe impl StableDeref for MmapArc {}
impl Directory for MmapDirectory { impl Directory for MmapDirectory {
fn open_read(&self, path: &Path) -> result::Result<ReadOnlySource, OpenReadError> { fn open_read(&self, path: &Path) -> result::Result<FileSlice, OpenReadError> {
debug!("Open Read {:?}", path); debug!("Open Read {:?}", path);
let full_path = self.resolve_path(path); let full_path = self.resolve_path(path);
@@ -418,12 +424,14 @@ impl Directory for MmapDirectory {
on mmap cache while reading {:?}", on mmap cache while reading {:?}",
path path
); );
IOError::with_path(path.to_owned(), make_io_err(msg)) let io_err = make_io_err(msg);
OpenReadError::wrap_io_error(io_err, path.to_path_buf())
})?; })?;
Ok(mmap_cache if let Some(mmap_arc) = mmap_cache.get_mmap(&full_path)? {
.get_mmap(&full_path)? Ok(FileSlice::from(MmapArc(mmap_arc)))
.map(ReadOnlySource::from) } else {
.unwrap_or_else(ReadOnlySource::empty)) Ok(FileSlice::empty())
}
} }
/// Any entry associated to the path in the mmap will be /// Any entry associated to the path in the mmap will be
@@ -431,14 +439,18 @@ impl Directory for MmapDirectory {
fn delete(&self, path: &Path) -> result::Result<(), DeleteError> { fn delete(&self, path: &Path) -> result::Result<(), DeleteError> {
let full_path = self.resolve_path(path); let full_path = self.resolve_path(path);
match fs::remove_file(&full_path) { match fs::remove_file(&full_path) {
Ok(_) => self Ok(_) => self.sync_directory().map_err(|e| DeleteError::IOError {
.sync_directory() io_error: e,
.map_err(|e| IOError::with_path(path.to_owned(), e).into()), filepath: path.to_path_buf(),
}),
Err(e) => { Err(e) => {
if e.kind() == io::ErrorKind::NotFound { if e.kind() == io::ErrorKind::NotFound {
Err(DeleteError::FileDoesNotExist(path.to_owned())) Err(DeleteError::FileDoesNotExist(path.to_owned()))
} else { } else {
Err(IOError::with_path(path.to_owned(), e).into()) Err(DeleteError::IOError {
io_error: e,
filepath: path.to_path_buf(),
})
} }
} }
} }
@@ -449,7 +461,7 @@ impl Directory for MmapDirectory {
full_path.exists() full_path.exists()
} }
fn open_write(&mut self, path: &Path) -> Result<WritePtr, OpenWriteError> { fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError> {
debug!("Open Write {:?}", path); debug!("Open Write {:?}", path);
let full_path = self.resolve_path(path); let full_path = self.resolve_path(path);
@@ -458,22 +470,22 @@ impl Directory for MmapDirectory {
.create_new(true) .create_new(true)
.open(full_path); .open(full_path);
let mut file = open_res.map_err(|err| { let mut file = open_res.map_err(|io_err| {
if err.kind() == io::ErrorKind::AlreadyExists { if io_err.kind() == io::ErrorKind::AlreadyExists {
OpenWriteError::FileAlreadyExists(path.to_owned()) OpenWriteError::FileAlreadyExists(path.to_path_buf())
} else { } else {
IOError::with_path(path.to_owned(), err).into() OpenWriteError::wrap_io_error(io_err, path.to_path_buf())
} }
})?; })?;
// making sure the file is created. // making sure the file is created.
file.flush() file.flush()
.map_err(|e| IOError::with_path(path.to_owned(), e))?; .map_err(|io_error| OpenWriteError::wrap_io_error(io_error, path.to_path_buf()))?;
// Apparetntly, on some filesystem syncing the parent // Apparetntly, on some filesystem syncing the parent
// directory is required. // directory is required.
self.sync_directory() self.sync_directory()
.map_err(|e| IOError::with_path(path.to_owned(), e))?; .map_err(|io_err| OpenWriteError::wrap_io_error(io_err, path.to_path_buf()))?;
let writer = SafeFileWriter::new(file); let writer = SafeFileWriter::new(file);
Ok(BufWriter::new(Box::new(writer))) Ok(BufWriter::new(Box::new(writer)))
@@ -484,25 +496,28 @@ impl Directory for MmapDirectory {
let mut buffer = Vec::new(); let mut buffer = Vec::new();
match File::open(&full_path) { match File::open(&full_path) {
Ok(mut file) => { Ok(mut file) => {
file.read_to_end(&mut buffer) file.read_to_end(&mut buffer).map_err(|io_error| {
.map_err(|e| IOError::with_path(path.to_owned(), e))?; OpenReadError::wrap_io_error(io_error, path.to_path_buf())
})?;
Ok(buffer) Ok(buffer)
} }
Err(e) => { Err(io_error) => {
if e.kind() == io::ErrorKind::NotFound { if io_error.kind() == io::ErrorKind::NotFound {
Err(OpenReadError::FileDoesNotExist(path.to_owned())) Err(OpenReadError::FileDoesNotExist(path.to_owned()))
} else { } else {
Err(IOError::with_path(path.to_owned(), e).into()) Err(OpenReadError::wrap_io_error(io_error, path.to_path_buf()))
} }
} }
} }
} }
fn atomic_write(&mut self, path: &Path, data: &[u8]) -> io::Result<()> { fn atomic_write(&self, path: &Path, content: &[u8]) -> io::Result<()> {
debug!("Atomic Write {:?}", path); debug!("Atomic Write {:?}", path);
let mut tempfile = tempfile::Builder::new().tempfile_in(&self.inner.root_path)?;
tempfile.write_all(content)?;
tempfile.flush()?;
let full_path = self.resolve_path(path); let full_path = self.resolve_path(path);
let meta_file = atomicwrites::AtomicFile::new(full_path, atomicwrites::AllowOverwrite); tempfile.into_temp_path().persist(full_path)?;
meta_file.write(|f| f.write_all(data))?;
Ok(()) Ok(())
} }
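The new `atomic_write` writes the data to a temporary file in the same directory and then persists (renames) it over the target, so readers only ever observe the old or the new content. A short sketch of that pattern using the `tempfile` crate; the paths and payload here are illustrative:

use std::io::{self, Write};
use std::path::Path;

// Write-then-rename: on the same filesystem the final `persist` is a rename,
// which is atomic, so `dest` never contains a partially written file.
fn atomic_write(dir: &Path, dest: &Path, data: &[u8]) -> io::Result<()> {
    let mut tmp = tempfile::Builder::new().tempfile_in(dir)?;
    tmp.write_all(data)?;
    tmp.flush()?;
    tmp.into_temp_path().persist(dest)?;
    Ok(())
}

fn main() -> io::Result<()> {
    let dir = tempfile::tempdir()?;
    let dest = dir.path().join("meta.json");
    atomic_write(dir.path(), &dest, b"{\"segments\": []}")?;
    assert_eq!(std::fs::read(&dest)?, b"{\"segments\": []}");
    Ok(())
}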
@@ -538,10 +553,10 @@ mod tests {
     // The following tests are specific to the MmapDirectory
     use super::*;
-    use crate::indexer::LogMergePolicy;
     use crate::schema::{Schema, SchemaBuilder, TEXT};
     use crate::Index;
     use crate::ReloadPolicy;
+    use crate::{common::HasLen, indexer::LogMergePolicy};
     use std::fs;
     use std::sync::atomic::{AtomicUsize, Ordering};
@@ -556,7 +571,7 @@ mod tests {
     // cannot be mmapped.
     //
     // In that case the directory returns a SharedVecSlice.
-    let mut mmap_directory = MmapDirectory::create_from_tempdir().unwrap();
+    let mmap_directory = MmapDirectory::create_from_tempdir().unwrap();
     let path = PathBuf::from("test");
     {
         let mut w = mmap_directory.open_write(&path).unwrap();
@@ -572,7 +587,7 @@ mod tests {
     // here we test if the cache releases
     // mmaps correctly.
-    let mut mmap_directory = MmapDirectory::create_from_tempdir().unwrap();
+    let mmap_directory = MmapDirectory::create_from_tempdir().unwrap();
     let num_paths = 10;
     let paths: Vec<PathBuf> = (0..num_paths)
         .map(|i| PathBuf::from(&*format!("file_{}", i)))
@@ -663,7 +678,7 @@ mod tests {
     {
         let index = Index::create(mmap_directory.clone(), schema).unwrap();
-        let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+        let mut index_writer = index.writer_for_tests().unwrap();
         let mut log_merge_policy = LogMergePolicy::default();
         log_merge_policy.set_min_merge_size(3);
         index_writer.set_merge_policy(Box::new(log_merge_policy));


@@ -9,10 +9,11 @@ mod mmap_directory;
 mod directory;
 mod directory_lock;
+mod file_slice;
 mod footer;
 mod managed_directory;
+mod owned_bytes;
 mod ram_directory;
-mod read_only_source;
 mod watch_event_router;

 /// Errors specific to the directory module.
@@ -21,11 +22,14 @@ pub mod error;
 pub use self::directory::DirectoryLock;
 pub use self::directory::{Directory, DirectoryClone};
 pub use self::directory_lock::{Lock, INDEX_WRITER_LOCK, META_LOCK};
+pub(crate) use self::file_slice::BoxedData;
+pub use self::file_slice::{FileHandle, FileSlice};
+pub use self::owned_bytes::OwnedBytes;
 pub use self::ram_directory::RAMDirectory;
-pub use self::read_only_source::ReadOnlySource;
 pub use self::watch_event_router::{WatchCallback, WatchCallbackList, WatchHandle};
 use std::io::{self, BufWriter, Write};
 use std::path::PathBuf;

 /// Outcome of the Garbage collection
 pub struct GarbageCollectionResult {
     /// List of files that were deleted in this cycle
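With `ReadOnlySource` removed from the exports above, callers go through `FileSlice` and `OwnedBytes` instead. A hedged sketch of reading a file through the new surface (the re-export paths are the ones introduced in this hunk; error conversion relies on the `#[from]` impls added in `error.rs` further down):

use std::path::Path;
use tantivy::directory::{Directory, OwnedBytes, RAMDirectory};

fn read_whole_file() -> tantivy::Result<()> {
    let directory = RAMDirectory::create();
    directory.atomic_write(Path::new("data"), b"hello")?;
    // `open_read` now hands back a lazy `FileSlice`; the bytes are only
    // materialized when `read_bytes()` is called, yielding an `OwnedBytes`.
    let file_slice = directory.open_read(Path::new("data"))?;
    let bytes: OwnedBytes = file_slice.read_bytes()?;
    assert_eq!(bytes.as_slice(), b"hello");
    Ok(())
}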


@@ -0,0 +1,255 @@
use crate::directory::FileHandle;
use stable_deref_trait::StableDeref;
use std::mem;
use std::ops::Deref;
use std::sync::Arc;
use std::{fmt, io};
/// An OwnedBytes simply wraps an object that owns a slice of data and exposes
/// this data as a static slice.
///
/// The backing object is required to be `StableDeref`.
#[derive(Clone)]
pub struct OwnedBytes {
data: &'static [u8],
box_stable_deref: Arc<dyn Deref<Target = [u8]> + Sync + Send>,
}
impl FileHandle for OwnedBytes {
fn read_bytes(&self, from: usize, to: usize) -> io::Result<OwnedBytes> {
Ok(self.slice(from, to))
}
}
impl OwnedBytes {
/// Creates an empty `OwnedBytes`.
pub fn empty() -> OwnedBytes {
OwnedBytes::new(&[][..])
}
/// Creates an `OwnedBytes` instance given a `StableDeref` object.
pub fn new<T: StableDeref + Deref<Target = [u8]> + 'static + Send + Sync>(
data_holder: T,
) -> OwnedBytes {
let box_stable_deref = Arc::new(data_holder);
let bytes: &[u8] = box_stable_deref.as_ref();
let data = unsafe { mem::transmute::<_, &'static [u8]>(bytes.deref()) };
OwnedBytes {
box_stable_deref,
data,
}
}
/// Creates an `OwnedBytes` that is just a view over a slice of the data.
pub fn slice(&self, from: usize, to: usize) -> Self {
OwnedBytes {
data: &self.data[from..to],
box_stable_deref: self.box_stable_deref.clone(),
}
}
/// Returns the underlying slice of data.
/// `Deref` and `AsRef` are also available.
#[inline(always)]
pub fn as_slice(&self) -> &[u8] {
self.data
}
/// Returns the len of the slice.
#[inline(always)]
pub fn len(&self) -> usize {
self.data.len()
}
/// Splits the OwnedBytes into two OwnedBytes `(left, right)`.
///
/// Left will hold `split_len` bytes.
///
/// This operation is cheap and does not require to copy any memory.
/// On the other hand, both `left` and `right` retain a handle over
/// the entire slice of memory. In other words, the memory will only
/// be released when both left and right are dropped.
pub fn split(self, split_len: usize) -> (OwnedBytes, OwnedBytes) {
let right_box_stable_deref = self.box_stable_deref.clone();
let left = OwnedBytes {
data: &self.data[..split_len],
box_stable_deref: self.box_stable_deref,
};
let right = OwnedBytes {
data: &self.data[split_len..],
box_stable_deref: right_box_stable_deref,
};
(left, right)
}
/// Returns true iff this `OwnedBytes` is empty.
#[inline(always)]
pub fn is_empty(&self) -> bool {
self.as_slice().is_empty()
}
/// Drops the left most `advance_len` bytes.
///
/// See also [.clip(clip_len: usize))](#method.clip).
#[inline(always)]
pub fn advance(&mut self, advance_len: usize) {
self.data = &self.data[advance_len..]
}
}
impl fmt::Debug for OwnedBytes {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
// We truncate the bytes in order to make sure the debug string
// is not too long.
// The guard matches the slice length below; otherwise 9- and 10-byte
// buffers would panic with an out-of-bounds slice.
let bytes_truncated: &[u8] = if self.len() > 10 {
&self.as_slice()[..10]
} else {
self.as_slice()
};
write!(f, "OwnedBytes({:?}, len={})", bytes_truncated, self.len())
}
}
impl Deref for OwnedBytes {
type Target = [u8];
fn deref(&self) -> &Self::Target {
self.as_slice()
}
}
impl io::Read for OwnedBytes {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
let read_len = {
let data = self.as_slice();
if data.len() >= buf.len() {
let buf_len = buf.len();
buf.copy_from_slice(&data[..buf_len]);
buf.len()
} else {
let data_len = data.len();
buf[..data_len].copy_from_slice(data);
data_len
}
};
self.advance(read_len);
Ok(read_len)
}
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> io::Result<usize> {
let read_len = {
let data = self.as_slice();
buf.extend(data);
data.len()
};
self.advance(read_len);
Ok(read_len)
}
fn read_exact(&mut self, buf: &mut [u8]) -> io::Result<()> {
let read_len = self.read(buf)?;
if read_len != buf.len() {
return Err(io::Error::new(
io::ErrorKind::UnexpectedEof,
"failed to fill whole buffer",
));
}
Ok(())
}
}
impl AsRef<[u8]> for OwnedBytes {
fn as_ref(&self) -> &[u8] {
self.as_slice()
}
}
#[cfg(test)]
mod tests {
use std::io::{self, Read};
use super::OwnedBytes;
#[test]
fn test_owned_bytes_debug() {
let short_bytes = OwnedBytes::new(b"abcd".as_ref());
assert_eq!(
format!("{:?}", short_bytes),
"OwnedBytes([97, 98, 99, 100], len=4)"
);
let long_bytes = OwnedBytes::new(b"abcdefghijklmnopq".as_ref());
assert_eq!(
format!("{:?}", long_bytes),
"OwnedBytes([97, 98, 99, 100, 101, 102, 103, 104, 105, 106], len=17)"
);
}
#[test]
fn test_owned_bytes_read() -> io::Result<()> {
let mut bytes = OwnedBytes::new(b"abcdefghiklmnopqrstuvwxyz".as_ref());
{
let mut buf = [0u8; 5];
bytes.read_exact(&mut buf[..]).unwrap();
assert_eq!(&buf, b"abcde");
assert_eq!(bytes.as_slice(), b"fghiklmnopqrstuvwxyz")
}
{
let mut buf = [0u8; 2];
bytes.read_exact(&mut buf[..]).unwrap();
assert_eq!(&buf, b"fg");
assert_eq!(bytes.as_slice(), b"hiklmnopqrstuvwxyz")
}
Ok(())
}
#[test]
fn test_owned_bytes_read_right_at_the_end() -> io::Result<()> {
let mut bytes = OwnedBytes::new(b"abcde".as_ref());
let mut buf = [0u8; 5];
assert_eq!(bytes.read(&mut buf[..]).unwrap(), 5);
assert_eq!(&buf, b"abcde");
assert_eq!(bytes.as_slice(), b"");
assert_eq!(bytes.read(&mut buf[..]).unwrap(), 0);
assert_eq!(&buf, b"abcde");
Ok(())
}
#[test]
fn test_owned_bytes_read_incomplete() -> io::Result<()> {
let mut bytes = OwnedBytes::new(b"abcde".as_ref());
let mut buf = [0u8; 7];
assert_eq!(bytes.read(&mut buf[..]).unwrap(), 5);
assert_eq!(&buf[..5], b"abcde");
assert_eq!(bytes.read(&mut buf[..]).unwrap(), 0);
Ok(())
}
#[test]
fn test_owned_bytes_read_to_end() -> io::Result<()> {
let mut bytes = OwnedBytes::new(b"abcde".as_ref());
let mut buf = Vec::new();
bytes.read_to_end(&mut buf)?;
assert_eq!(buf.as_slice(), b"abcde".as_ref());
Ok(())
}
#[test]
fn test_owned_bytes_split() {
let bytes = OwnedBytes::new(b"abcdefghi".as_ref());
let (left, right) = bytes.split(3);
assert_eq!(left.as_slice(), b"abc");
assert_eq!(right.as_slice(), b"defghi");
}
#[test]
fn test_owned_bytes_split_boundary() {
let bytes = OwnedBytes::new(b"abcdefghi".as_ref());
{
let (left, right) = bytes.clone().split(0);
assert_eq!(left.as_slice(), b"");
assert_eq!(right.as_slice(), b"abcdefghi");
}
{
let (left, right) = bytes.split(9);
assert_eq!(left.as_slice(), b"abcdefghi");
assert_eq!(right.as_slice(), b"");
}
}
}
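As an illustration (not part of the commit), the snippet below exercises the `OwnedBytes` API defined in this file: slicing and splitting are zero-copy views that keep the shared backing buffer alive, and the `io::Read` impl consumes the front of the buffer in place.

use std::io::Read;
use tantivy::directory::OwnedBytes;

fn owned_bytes_demo() -> std::io::Result<()> {
    let bytes = OwnedBytes::new(b"hello world".to_vec());
    // Views share the same Arc-backed buffer; nothing is copied.
    let hello = bytes.slice(0, 5);
    assert_eq!(hello.as_slice(), b"hello");
    let (left, right) = bytes.clone().split(6);
    assert_eq!(left.as_slice(), b"hello ");
    assert_eq!(right.as_slice(), b"world");
    // Reading advances the slice in place.
    let mut reader = right;
    let mut buf = [0u8; 2];
    reader.read_exact(&mut buf)?;
    assert_eq!(&buf, b"wo");
    assert_eq!(reader.as_slice(), b"rld");
    Ok(())
}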


@@ -1,9 +1,9 @@
-use crate::core::META_FILEPATH;
 use crate::directory::error::{DeleteError, OpenReadError, OpenWriteError};
 use crate::directory::AntiCallToken;
 use crate::directory::WatchCallbackList;
-use crate::directory::{Directory, ReadOnlySource, WatchCallback, WatchHandle};
+use crate::directory::{Directory, FileSlice, WatchCallback, WatchHandle};
 use crate::directory::{TerminatingWrite, WritePtr};
+use crate::{common::HasLen, core::META_FILEPATH};
 use fail::fail_point;
 use std::collections::HashMap;
 use std::fmt;
@@ -80,17 +80,17 @@ impl TerminatingWrite for VecWriter {
 #[derive(Default)]
 struct InnerDirectory {
-    fs: HashMap<PathBuf, ReadOnlySource>,
+    fs: HashMap<PathBuf, FileSlice>,
     watch_router: WatchCallbackList,
 }

 impl InnerDirectory {
     fn write(&mut self, path: PathBuf, data: &[u8]) -> bool {
-        let data = ReadOnlySource::new(Vec::from(data));
+        let data = FileSlice::from(data.to_vec());
         self.fs.insert(path, data).is_some()
     }

-    fn open_read(&self, path: &Path) -> Result<ReadOnlySource, OpenReadError> {
+    fn open_read(&self, path: &Path) -> Result<FileSlice, OpenReadError> {
         self.fs
             .get(path)
             .ok_or_else(|| OpenReadError::FileDoesNotExist(PathBuf::from(path)))
@@ -151,11 +151,11 @@ impl RAMDirectory {
     /// written using the `atomic_write` api.
     ///
     /// If an error is encountered, files may be persisted partially.
-    pub fn persist(&self, dest: &mut dyn Directory) -> crate::Result<()> {
+    pub fn persist(&self, dest: &dyn Directory) -> crate::Result<()> {
         let wlock = self.fs.write().unwrap();
-        for (path, source) in wlock.fs.iter() {
+        for (path, file) in wlock.fs.iter() {
             let mut dest_wrt = dest.open_write(path)?;
-            dest_wrt.write_all(source.as_slice())?;
+            dest_wrt.write_all(file.read_bytes()?.as_slice())?;
             dest_wrt.terminate()?;
         }
         Ok(())
@@ -163,15 +163,16 @@ impl RAMDirectory {
 }

 impl Directory for RAMDirectory {
-    fn open_read(&self, path: &Path) -> result::Result<ReadOnlySource, OpenReadError> {
+    fn open_read(&self, path: &Path) -> result::Result<FileSlice, OpenReadError> {
         self.fs.read().unwrap().open_read(path)
     }

     fn delete(&self, path: &Path) -> result::Result<(), DeleteError> {
         fail_point!("RAMDirectory::delete", |_| {
-            use crate::directory::error::IOError;
-            let io_error = IOError::from(io::Error::from(io::ErrorKind::Other));
-            Err(DeleteError::from(io_error))
+            Err(DeleteError::IOError {
+                io_error: io::Error::from(io::ErrorKind::Other),
+                filepath: path.to_path_buf(),
+            })
         });
         self.fs.write().unwrap().delete(path)
     }
@@ -180,7 +181,7 @@ impl Directory for RAMDirectory {
         self.fs.read().unwrap().exists(path)
     }

-    fn open_write(&mut self, path: &Path) -> Result<WritePtr, OpenWriteError> {
+    fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError> {
         let mut fs = self.fs.write().unwrap();
         let path_buf = PathBuf::from(path);
         let vec_writer = VecWriter::new(path_buf.clone(), self.clone());
@@ -194,10 +195,17 @@ impl Directory for RAMDirectory {
     }

     fn atomic_read(&self, path: &Path) -> Result<Vec<u8>, OpenReadError> {
-        Ok(self.open_read(path)?.as_slice().to_owned())
+        let bytes =
+            self.open_read(path)?
+                .read_bytes()
+                .map_err(|io_error| OpenReadError::IOError {
+                    io_error,
+                    filepath: path.to_path_buf(),
+                })?;
+        Ok(bytes.as_slice().to_owned())
     }

-    fn atomic_write(&mut self, path: &Path, data: &[u8]) -> io::Result<()> {
+    fn atomic_write(&self, path: &Path, data: &[u8]) -> io::Result<()> {
         fail_point!("RAMDirectory::atomic_write", |msg| Err(io::Error::new(
             io::ErrorKind::Other,
             msg.unwrap_or_else(|| "Undefined".to_string())
@@ -234,13 +242,13 @@ mod tests {
         let msg_seq: &'static [u8] = b"sequential is the way";
         let path_atomic: &'static Path = Path::new("atomic");
         let path_seq: &'static Path = Path::new("seq");
-        let mut directory = RAMDirectory::create();
+        let directory = RAMDirectory::create();
         assert!(directory.atomic_write(path_atomic, msg_atomic).is_ok());
         let mut wrt = directory.open_write(path_seq).unwrap();
         assert!(wrt.write_all(msg_seq).is_ok());
         assert!(wrt.flush().is_ok());
-        let mut directory_copy = RAMDirectory::create();
-        assert!(directory.persist(&mut directory_copy).is_ok());
+        let directory_copy = RAMDirectory::create();
+        assert!(directory.persist(&directory_copy).is_ok());
         assert_eq!(directory_copy.atomic_read(path_atomic).unwrap(), msg_atomic);
         assert_eq!(directory_copy.atomic_read(path_seq).unwrap(), msg_seq);
     }


@@ -1,137 +0,0 @@
use crate::common::HasLen;
use stable_deref_trait::{CloneStableDeref, StableDeref};
use std::ops::Deref;
use std::sync::Arc;
pub type BoxedData = Box<dyn Deref<Target = [u8]> + Send + Sync + 'static>;
/// Read object that represents files in tantivy.
///
/// These read objects are only in charge to deliver
/// the data in the form of a constant read-only `&[u8]`.
/// Whatever happens to the directory file, the data
/// hold by this object should never be altered or destroyed.
pub struct ReadOnlySource {
data: Arc<BoxedData>,
start: usize,
stop: usize,
}
unsafe impl StableDeref for ReadOnlySource {}
unsafe impl CloneStableDeref for ReadOnlySource {}
impl Deref for ReadOnlySource {
type Target = [u8];
fn deref(&self) -> &[u8] {
self.as_slice()
}
}
impl From<Arc<BoxedData>> for ReadOnlySource {
fn from(data: Arc<BoxedData>) -> Self {
let len = data.len();
ReadOnlySource {
data,
start: 0,
stop: len,
}
}
}
impl ReadOnlySource {
pub(crate) fn new<D>(data: D) -> ReadOnlySource
where
D: Deref<Target = [u8]> + Send + Sync + 'static,
{
let len = data.len();
ReadOnlySource {
data: Arc::new(Box::new(data)),
start: 0,
stop: len,
}
}
/// Creates an empty ReadOnlySource
pub fn empty() -> ReadOnlySource {
ReadOnlySource::new(&[][..])
}
/// Returns the data underlying the ReadOnlySource object.
pub fn as_slice(&self) -> &[u8] {
&self.data[self.start..self.stop]
}
/// Splits into 2 `ReadOnlySource`, at the offset given
/// as an argument.
pub fn split(self, addr: usize) -> (ReadOnlySource, ReadOnlySource) {
let left = self.slice(0, addr);
let right = self.slice_from(addr);
(left, right)
}
/// Splits into 2 `ReadOnlySource`, at the offset `end - right_len`.
pub fn split_from_end(self, right_len: usize) -> (ReadOnlySource, ReadOnlySource) {
let left_len = self.len() - right_len;
self.split(left_len)
}
/// Creates a ReadOnlySource that is just a
/// view over a slice of the data.
///
/// Keep in mind that any living slice extends
/// the lifetime of the original ReadOnlySource,
///
/// For instance, if `ReadOnlySource` wraps 500MB
/// worth of data in anonymous memory, and only a
/// 1KB slice is remaining, the whole `500MBs`
/// are retained in memory.
pub fn slice(&self, start: usize, stop: usize) -> ReadOnlySource {
assert!(
start <= stop,
"Requested negative slice [{}..{}]",
start,
stop
);
assert!(stop <= self.len());
ReadOnlySource {
data: self.data.clone(),
start: self.start + start,
stop: self.start + stop,
}
}
/// Like `.slice(...)` but enforcing only the `from`
/// boundary.
///
/// Equivalent to `.slice(from_offset, self.len())`
pub fn slice_from(&self, from_offset: usize) -> ReadOnlySource {
self.slice(from_offset, self.len())
}
/// Like `.slice(...)` but enforcing only the `to`
/// boundary.
///
/// Equivalent to `.slice(0, to_offset)`
pub fn slice_to(&self, to_offset: usize) -> ReadOnlySource {
self.slice(0, to_offset)
}
}
impl HasLen for ReadOnlySource {
fn len(&self) -> usize {
self.stop - self.start
}
}
impl Clone for ReadOnlySource {
fn clone(&self) -> Self {
self.slice_from(0)
}
}
impl From<Vec<u8>> for ReadOnlySource {
fn from(data: Vec<u8>) -> ReadOnlySource {
ReadOnlySource::new(data)
}
}
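For code that held a `ReadOnlySource`, the replacement after this deletion is a `FileSlice` (lazy handle) or an `OwnedBytes` (materialized bytes). A rough, illustrative correspondence; the 20-byte minimum file length is assumed purely for the example:

use tantivy::directory::{FileSlice, OwnedBytes};

fn migrate(file_slice: FileSlice) -> std::io::Result<()> {
    let bytes: OwnedBytes = file_slice.read_bytes()?;
    let sub = bytes.slice(10, 20); // was: ReadOnlySource::slice(10, 20)
    let (head, tail) = bytes.split(10); // was: ReadOnlySource::split(10)
    assert_eq!(head.len(), 10);
    assert_eq!(sub.len(), 10);
    let _ = tail;
    Ok(())
}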


@@ -20,45 +20,47 @@ mod mmap_directory_tests {
} }
#[test] #[test]
fn test_simple() { fn test_simple() -> crate::Result<()> {
let mut directory = make_directory(); let directory = make_directory();
super::test_simple(&mut directory); super::test_simple(&directory)
} }
#[test] #[test]
fn test_write_create_the_file() { fn test_write_create_the_file() {
let mut directory = make_directory(); let directory = make_directory();
super::test_write_create_the_file(&mut directory); super::test_write_create_the_file(&directory);
} }
#[test] #[test]
fn test_rewrite_forbidden() { fn test_rewrite_forbidden() -> crate::Result<()> {
let mut directory = make_directory(); let directory = make_directory();
super::test_rewrite_forbidden(&mut directory); super::test_rewrite_forbidden(&directory)?;
Ok(())
} }
#[test] #[test]
fn test_directory_delete() { fn test_directory_delete() -> crate::Result<()> {
let mut directory = make_directory(); let directory = make_directory();
super::test_directory_delete(&mut directory); super::test_directory_delete(&directory)?;
Ok(())
} }
#[test] #[test]
fn test_lock_non_blocking() { fn test_lock_non_blocking() {
let mut directory = make_directory(); let directory = make_directory();
super::test_lock_non_blocking(&mut directory); super::test_lock_non_blocking(&directory);
} }
#[test] #[test]
fn test_lock_blocking() { fn test_lock_blocking() {
let mut directory = make_directory(); let directory = make_directory();
super::test_lock_blocking(&mut directory); super::test_lock_blocking(&directory);
} }
#[test] #[test]
fn test_watch() { fn test_watch() {
let mut directory = make_directory(); let directory = make_directory();
super::test_watch(&mut directory); super::test_watch(&directory);
} }
} }
@@ -72,45 +74,47 @@ mod ram_directory_tests {
} }
#[test] #[test]
fn test_simple() { fn test_simple() -> crate::Result<()> {
let mut directory = make_directory(); let directory = make_directory();
super::test_simple(&mut directory); super::test_simple(&directory)
} }
#[test] #[test]
fn test_write_create_the_file() { fn test_write_create_the_file() {
let mut directory = make_directory(); let directory = make_directory();
super::test_write_create_the_file(&mut directory); super::test_write_create_the_file(&directory);
} }
#[test] #[test]
fn test_rewrite_forbidden() { fn test_rewrite_forbidden() -> crate::Result<()> {
let mut directory = make_directory(); let directory = make_directory();
super::test_rewrite_forbidden(&mut directory); super::test_rewrite_forbidden(&directory)?;
Ok(())
} }
#[test] #[test]
fn test_directory_delete() { fn test_directory_delete() -> crate::Result<()> {
let mut directory = make_directory(); let directory = make_directory();
super::test_directory_delete(&mut directory); super::test_directory_delete(&directory)?;
Ok(())
} }
#[test] #[test]
fn test_lock_non_blocking() { fn test_lock_non_blocking() {
let mut directory = make_directory(); let directory = make_directory();
super::test_lock_non_blocking(&mut directory); super::test_lock_non_blocking(&directory);
} }
#[test] #[test]
fn test_lock_blocking() { fn test_lock_blocking() {
let mut directory = make_directory(); let directory = make_directory();
super::test_lock_blocking(&mut directory); super::test_lock_blocking(&directory);
} }
#[test] #[test]
fn test_watch() { fn test_watch() {
let mut directory = make_directory(); let directory = make_directory();
super::test_watch(&mut directory); super::test_watch(&directory);
} }
} }
@@ -118,43 +122,37 @@ mod ram_directory_tests {
#[should_panic] #[should_panic]
fn ram_directory_panics_if_flush_forgotten() { fn ram_directory_panics_if_flush_forgotten() {
let test_path: &'static Path = Path::new("some_path_for_test"); let test_path: &'static Path = Path::new("some_path_for_test");
let mut ram_directory = RAMDirectory::create(); let ram_directory = RAMDirectory::create();
let mut write_file = ram_directory.open_write(test_path).unwrap(); let mut write_file = ram_directory.open_write(test_path).unwrap();
assert!(write_file.write_all(&[4]).is_ok()); assert!(write_file.write_all(&[4]).is_ok());
} }
fn test_simple(directory: &mut dyn Directory) { fn test_simple(directory: &dyn Directory) -> crate::Result<()> {
let test_path: &'static Path = Path::new("some_path_for_test"); let test_path: &'static Path = Path::new("some_path_for_test");
{ let mut write_file = directory.open_write(test_path)?;
let mut write_file = directory.open_write(test_path).unwrap(); assert!(directory.exists(test_path));
assert!(directory.exists(test_path)); write_file.write_all(&[4])?;
write_file.write_all(&[4]).unwrap(); write_file.write_all(&[3])?;
write_file.write_all(&[3]).unwrap(); write_file.write_all(&[7, 3, 5])?;
write_file.write_all(&[7, 3, 5]).unwrap(); write_file.flush()?;
write_file.flush().unwrap(); let read_file = directory.open_read(test_path)?.read_bytes()?;
} assert_eq!(read_file.as_slice(), &[4u8, 3u8, 7u8, 3u8, 5u8]);
{ mem::drop(read_file);
let read_file = directory.open_read(test_path).unwrap();
let data: &[u8] = &*read_file;
assert_eq!(data, &[4u8, 3u8, 7u8, 3u8, 5u8]);
}
assert!(directory.delete(test_path).is_ok()); assert!(directory.delete(test_path).is_ok());
assert!(!directory.exists(test_path)); assert!(!directory.exists(test_path));
Ok(())
} }
fn test_rewrite_forbidden(directory: &mut dyn Directory) { fn test_rewrite_forbidden(directory: &dyn Directory) -> crate::Result<()> {
let test_path: &'static Path = Path::new("some_path_for_test"); let test_path: &'static Path = Path::new("some_path_for_test");
{ directory.open_write(test_path)?;
directory.open_write(test_path).unwrap(); assert!(directory.exists(test_path));
assert!(directory.exists(test_path)); assert!(directory.open_write(test_path).is_err());
}
{
assert!(directory.open_write(test_path).is_err());
}
assert!(directory.delete(test_path).is_ok()); assert!(directory.delete(test_path).is_ok());
Ok(())
} }
fn test_write_create_the_file(directory: &mut dyn Directory) { fn test_write_create_the_file(directory: &dyn Directory) {
let test_path: &'static Path = Path::new("some_path_for_test"); let test_path: &'static Path = Path::new("some_path_for_test");
{ {
assert!(directory.open_read(test_path).is_err()); assert!(directory.open_read(test_path).is_err());
@@ -165,21 +163,20 @@ fn test_write_create_the_file(directory: &mut dyn Directory) {
} }
} }
fn test_directory_delete(directory: &mut dyn Directory) { fn test_directory_delete(directory: &dyn Directory) -> crate::Result<()> {
let test_path: &'static Path = Path::new("some_path_for_test"); let test_path: &'static Path = Path::new("some_path_for_test");
assert!(directory.open_read(test_path).is_err()); assert!(directory.open_read(test_path).is_err());
let mut write_file = directory.open_write(&test_path).unwrap(); let mut write_file = directory.open_write(&test_path)?;
write_file.write_all(&[1, 2, 3, 4]).unwrap(); write_file.write_all(&[1, 2, 3, 4])?;
write_file.flush().unwrap(); write_file.flush()?;
{ {
let read_handle = directory.open_read(&test_path).unwrap(); let read_handle = directory.open_read(&test_path)?.read_bytes()?;
assert_eq!(&*read_handle, &[1u8, 2u8, 3u8, 4u8]); assert_eq!(read_handle.as_slice(), &[1u8, 2u8, 3u8, 4u8]);
// Mapped files can't be deleted on Windows // Mapped files can't be deleted on Windows
if !cfg!(windows) { if !cfg!(windows) {
assert!(directory.delete(&test_path).is_ok()); assert!(directory.delete(&test_path).is_ok());
assert_eq!(&*read_handle, &[1u8, 2u8, 3u8, 4u8]); assert_eq!(read_handle.as_slice(), &[1u8, 2u8, 3u8, 4u8]);
} }
assert!(directory.delete(Path::new("SomeOtherPath")).is_err()); assert!(directory.delete(Path::new("SomeOtherPath")).is_err());
} }
@@ -189,9 +186,10 @@ fn test_directory_delete(directory: &mut dyn Directory) {
assert!(directory.open_read(&test_path).is_err()); assert!(directory.open_read(&test_path).is_err());
assert!(directory.delete(&test_path).is_err()); assert!(directory.delete(&test_path).is_err());
Ok(())
} }
fn test_watch(directory: &mut dyn Directory) { fn test_watch(directory: &dyn Directory) {
let num_progress: Arc<AtomicUsize> = Default::default(); let num_progress: Arc<AtomicUsize> = Default::default();
let counter: Arc<AtomicUsize> = Default::default(); let counter: Arc<AtomicUsize> = Default::default();
let counter_clone = counter.clone(); let counter_clone = counter.clone();
@@ -211,22 +209,22 @@ fn test_watch(directory: &mut dyn Directory) {
.unwrap(); .unwrap();
for i in 0..10 { for i in 0..10 {
assert_eq!(i, counter.load(SeqCst)); assert!(i <= counter.load(SeqCst));
assert!(directory assert!(directory
.atomic_write(Path::new("meta.json"), b"random_test_data_2") .atomic_write(Path::new("meta.json"), b"random_test_data_2")
.is_ok()); .is_ok());
assert_eq!(receiver.recv_timeout(Duration::from_millis(500)), Ok(i)); assert_eq!(receiver.recv_timeout(Duration::from_millis(500)), Ok(i));
assert_eq!(i + 1, counter.load(SeqCst)); assert!(i + 1 <= counter.load(SeqCst)); // notify can trigger more than once.
} }
mem::drop(watch_handle); mem::drop(watch_handle);
assert!(directory assert!(directory
.atomic_write(Path::new("meta.json"), b"random_test_data") .atomic_write(Path::new("meta.json"), b"random_test_data")
.is_ok()); .is_ok());
assert!(receiver.recv_timeout(Duration::from_millis(500)).is_ok()); assert!(receiver.recv_timeout(Duration::from_millis(500)).is_ok());
assert_eq!(10, counter.load(SeqCst)); assert!(10 <= counter.load(SeqCst));
} }
fn test_lock_non_blocking(directory: &mut dyn Directory) { fn test_lock_non_blocking(directory: &dyn Directory) {
{ {
let lock_a_res = directory.acquire_lock(&Lock { let lock_a_res = directory.acquire_lock(&Lock {
filepath: PathBuf::from("a.lock"), filepath: PathBuf::from("a.lock"),
@@ -251,7 +249,7 @@ fn test_lock_non_blocking(directory: &mut dyn Directory) {
assert!(lock_a_res.is_ok()); assert!(lock_a_res.is_ok());
} }
fn test_lock_blocking(directory: &mut dyn Directory) { fn test_lock_blocking(directory: &dyn Directory) {
let lock_a_res = directory.acquire_lock(&Lock { let lock_a_res = directory.acquire_lock(&Lock {
filepath: PathBuf::from("a.lock"), filepath: PathBuf::from("a.lock"),
is_blocking: true, is_blocking: true,


@@ -5,7 +5,7 @@ use std::sync::RwLock;
 use std::sync::Weak;

 /// Type alias for callbacks registered when watching files of a `Directory`.
-pub type WatchCallback = Box<dyn Fn() -> () + Sync + Send>;
+pub type WatchCallback = Box<dyn Fn() + Sync + Send>;

 /// Helper struct to implement the watch method in `Directory` implementations.
 ///
@@ -29,10 +29,17 @@ impl WatchHandle {
     pub fn new(watch_callback: Arc<WatchCallback>) -> WatchHandle {
         WatchHandle(watch_callback)
     }
+
+    /// Returns an empty watch handle.
+    ///
+    /// This function is only useful when implementing a readonly directory.
+    pub fn empty() -> WatchHandle {
+        WatchHandle::new(Arc::new(Box::new(|| {})))
+    }
 }

 impl WatchCallbackList {
-    /// Suscribes a new callback and returns a handle that controls the lifetime of the callback.
+    /// Subscribes a new callback and returns a handle that controls the lifetime of the callback.
     pub fn subscribe(&self, watch_callback: WatchCallback) -> WatchHandle {
         let watch_callback_arc = Arc::new(watch_callback);
         let watch_callback_weak = Arc::downgrade(&watch_callback_arc);
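The callback alias drops the redundant `-> ()`, and `WatchHandle::empty()` gives read-only directories a no-op handle. An illustrative subscription through `WatchCallbackList` (as exported above); the `Default` construction is assumed, and the handle must be kept alive since dropping it unsubscribes the callback:

use tantivy::directory::{WatchCallback, WatchCallbackList, WatchHandle};

fn watch_demo() {
    let router = WatchCallbackList::default();
    // The alias is now simply `Box<dyn Fn() + Sync + Send>`.
    let callback: WatchCallback = Box::new(|| println!("meta.json changed"));
    // Keep the handle around: dropping it removes the subscription.
    let _handle: WatchHandle = router.subscribe(callback);
    // A read-only directory can hand out a no-op handle instead.
    let _noop = WatchHandle::empty();
}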


@@ -1,58 +1,48 @@
use crate::common::BitSet;
use crate::fastfield::DeleteBitSet; use crate::fastfield::DeleteBitSet;
use crate::DocId; use crate::DocId;
use std::borrow::Borrow; use std::borrow::Borrow;
use std::borrow::BorrowMut; use std::borrow::BorrowMut;
use std::cmp::Ordering;
/// Expresses the outcome of a call to `DocSet`'s `.skip_next(...)`. /// Sentinel value returned when a DocSet has been entirely consumed.
#[derive(PartialEq, Eq, Debug)] ///
pub enum SkipResult { /// This is not u32::MAX as one would have expected, due to the lack of SSE2 instructions
/// target was in the docset /// to compare [u32; 4].
Reached, pub const TERMINATED: DocId = std::i32::MAX as u32;
/// target was not in the docset, skipping stopped as a greater element was found
OverStep,
/// the docset was entirely consumed without finding the target, nor any
/// element greater than the target.
End,
}
/// Represents an iterable set of sorted doc ids. /// Represents an iterable set of sorted doc ids.
pub trait DocSet { pub trait DocSet {
/// Goes to the next element. /// Goes to the next element.
/// `.advance(...)` needs to be called a first time to point to the correct ///
/// element. /// The DocId of the next element is returned.
fn advance(&mut self) -> bool; /// In other words we should always have :
/// ```ignore
/// let doc = docset.advance();
/// assert_eq!(doc, docset.doc());
/// ```
///
/// If we reached the end of the DocSet, TERMINATED should be returned.
///
/// Calling `.advance()` on a terminated DocSet should be supported, and TERMINATED should
/// be returned.
/// TODO Test existing docsets.
fn advance(&mut self) -> DocId;
/// After skipping, position the iterator in such a way that `.doc()` /// Advances the DocSet forward until reaching the target, or going to the
/// will return a value greater than or equal to target. /// lowest DocId greater than the target.
/// ///
/// SkipResult expresses whether the `target value` was reached, overstepped, /// If the end of the DocSet is reached, TERMINATED is returned.
/// or if the `DocSet` was entirely consumed without finding any value
/// greater or equal to the `target`.
/// ///
/// WARNING: Calling skip always advances the docset. /// Calling `.seek(target)` on a terminated DocSet is legal. Implementation
/// More specifically, if the docset is already positionned on the target /// of DocSet should support it.
/// skipping will advance to the next position and return SkipResult::Overstep.
/// ///
/// If `.skip_next()` oversteps, then the docset must be positionned correctly /// Calling `seek(TERMINATED)` is also legal and is the normal way to consume a DocSet.
/// on an existing document. In other words, `.doc()` should return the first document fn seek(&mut self, target: DocId) -> DocId {
/// greater than `DocId`. let mut doc = self.doc();
fn skip_next(&mut self, target: DocId) -> SkipResult { debug_assert!(doc <= target);
if !self.advance() { while doc < target {
return SkipResult::End; doc = self.advance();
}
loop {
match self.doc().cmp(&target) {
Ordering::Less => {
if !self.advance() {
return SkipResult::End;
}
}
Ordering::Equal => return SkipResult::Reached,
Ordering::Greater => return SkipResult::OverStep,
}
} }
doc
} }
/// Fills a given mutable buffer with the next doc ids from the /// Fills a given mutable buffer with the next doc ids from the
@@ -71,38 +61,38 @@ pub trait DocSet {
/// use case where batching. The normal way to /// use case where batching. The normal way to
/// go through the `DocId`'s is to call `.advance()`. /// go through the `DocId`'s is to call `.advance()`.
fn fill_buffer(&mut self, buffer: &mut [DocId]) -> usize { fn fill_buffer(&mut self, buffer: &mut [DocId]) -> usize {
if self.doc() == TERMINATED {
return 0;
}
for (i, buffer_val) in buffer.iter_mut().enumerate() { for (i, buffer_val) in buffer.iter_mut().enumerate() {
if self.advance() { *buffer_val = self.doc();
*buffer_val = self.doc(); if self.advance() == TERMINATED {
} else { return i + 1;
return i;
} }
} }
buffer.len() buffer.len()
} }
/// Returns the current document /// Returns the current document
/// Right after creating a new DocSet, the docset points to the first document.
///
/// If the DocSet is empty, .doc() should return `TERMINATED`.
fn doc(&self) -> DocId; fn doc(&self) -> DocId;
/// Returns a best-effort hint of the /// Returns a best-effort hint of the
/// length of the docset. /// length of the docset.
fn size_hint(&self) -> u32; fn size_hint(&self) -> u32;
/// Appends all docs to a `bitset`.
fn append_to_bitset(&mut self, bitset: &mut BitSet) {
while self.advance() {
bitset.insert(self.doc());
}
}
/// Returns the number documents matching. /// Returns the number documents matching.
/// Calling this method consumes the `DocSet`. /// Calling this method consumes the `DocSet`.
fn count(&mut self, delete_bitset: &DeleteBitSet) -> u32 { fn count(&mut self, delete_bitset: &DeleteBitSet) -> u32 {
let mut count = 0u32; let mut count = 0u32;
while self.advance() { let mut doc = self.doc();
if !delete_bitset.is_deleted(self.doc()) { while doc != TERMINATED {
if !delete_bitset.is_deleted(doc) {
count += 1u32; count += 1u32;
} }
doc = self.advance();
} }
count count
} }
@@ -114,22 +104,42 @@ pub trait DocSet {
/// given by `count()`. /// given by `count()`.
fn count_including_deleted(&mut self) -> u32 { fn count_including_deleted(&mut self) -> u32 {
let mut count = 0u32; let mut count = 0u32;
while self.advance() { let mut doc = self.doc();
while doc != TERMINATED {
count += 1u32; count += 1u32;
doc = self.advance();
} }
count count
} }
} }
impl<'a> DocSet for &'a mut dyn DocSet {
fn advance(&mut self) -> u32 {
(**self).advance()
}
fn seek(&mut self, target: DocId) -> DocId {
(**self).seek(target)
}
fn doc(&self) -> u32 {
(**self).doc()
}
fn size_hint(&self) -> u32 {
(**self).size_hint()
}
}
impl<TDocSet: DocSet + ?Sized> DocSet for Box<TDocSet> { impl<TDocSet: DocSet + ?Sized> DocSet for Box<TDocSet> {
fn advance(&mut self) -> bool { fn advance(&mut self) -> DocId {
let unboxed: &mut TDocSet = self.borrow_mut(); let unboxed: &mut TDocSet = self.borrow_mut();
unboxed.advance() unboxed.advance()
} }
fn skip_next(&mut self, target: DocId) -> SkipResult { fn seek(&mut self, target: DocId) -> DocId {
let unboxed: &mut TDocSet = self.borrow_mut(); let unboxed: &mut TDocSet = self.borrow_mut();
unboxed.skip_next(target) unboxed.seek(target)
} }
fn doc(&self) -> DocId { fn doc(&self) -> DocId {
@@ -151,9 +161,4 @@ impl<TDocSet: DocSet + ?Sized> DocSet for Box<TDocSet> {
let unboxed: &mut TDocSet = self.borrow_mut(); let unboxed: &mut TDocSet = self.borrow_mut();
unboxed.count_including_deleted() unboxed.count_including_deleted()
} }
fn append_to_bitset(&mut self, bitset: &mut BitSet) {
let unboxed: &mut TDocSet = self.borrow_mut();
unboxed.append_to_bitset(bitset);
}
} }
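To make the new contract concrete: `advance()` returns the next `DocId` (or `TERMINATED`), a fresh `DocSet` already points at its first document, and `seek(target)` jumps to the first doc greater than or equal to `target`. A sketch of the typical consumption loop; the `tantivy::{DocSet, TERMINATED}` re-export paths are assumed:

use tantivy::{DocId, DocSet, TERMINATED};

// Collect every DocId of a DocSet under the new contract.
fn collect_docs<D: DocSet>(mut docset: D) -> Vec<DocId> {
    let mut docs = Vec::new();
    let mut doc = docset.doc();
    while doc != TERMINATED {
        docs.push(doc);
        doc = docset.advance();
    }
    docs
}

// `seek` returns the first doc >= target, or TERMINATED. Precondition (see the
// debug_assert in the default impl): the docset must not already be past `target`.
fn contains<D: DocSet>(docset: &mut D, target: DocId) -> bool {
    docset.seek(target) == target
}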


@@ -2,22 +2,27 @@
use std::io; use std::io;
use crate::directory::error::{IOError, OpenDirectoryError, OpenReadError, OpenWriteError};
use crate::directory::error::{Incompatibility, LockError}; use crate::directory::error::{Incompatibility, LockError};
use crate::fastfield::FastFieldNotAvailableError; use crate::fastfield::FastFieldNotAvailableError;
use crate::query; use crate::query;
use crate::schema; use crate::{
use serde_json; directory::error::{OpenDirectoryError, OpenReadError, OpenWriteError},
schema,
};
use std::fmt; use std::fmt;
use std::path::PathBuf; use std::path::PathBuf;
use std::sync::PoisonError; use std::sync::PoisonError;
/// Represents a `DataCorruption` error.
///
/// When facing data corruption, tantivy either panics or returns this error.
pub struct DataCorruption { pub struct DataCorruption {
filepath: Option<PathBuf>, filepath: Option<PathBuf>,
comment: String, comment: String,
} }
impl DataCorruption { impl DataCorruption {
/// Creates a `DataCorruption` Error.
pub fn new(filepath: PathBuf, comment: String) -> DataCorruption { pub fn new(filepath: PathBuf, comment: String) -> DataCorruption {
DataCorruption { DataCorruption {
filepath: Some(filepath), filepath: Some(filepath),
@@ -25,10 +30,11 @@ impl DataCorruption {
} }
} }
pub fn comment_only(comment: String) -> DataCorruption { /// Creates a `DataCorruption` Error, when the filepath is irrelevant.
pub fn comment_only<TStr: ToString>(comment: TStr) -> DataCorruption {
DataCorruption { DataCorruption {
filepath: None, filepath: None,
comment, comment: comment.to_string(),
} }
} }
} }
@@ -44,44 +50,47 @@ impl fmt::Debug for DataCorruption {
} }
} }
/// The library's failure based error enum /// The library's error enum
#[derive(Debug, Fail)] #[derive(Debug, Error)]
pub enum TantivyError { pub enum TantivyError {
/// Path does not exist. /// Failed to open the directory.
#[fail(display = "Path does not exist: '{:?}'", _0)] #[error("Failed to open the directory: '{0:?}'")]
PathDoesNotExist(PathBuf), OpenDirectoryError(#[from] OpenDirectoryError),
/// File already exists, this is a problem when we try to write into a new file. /// Failed to open a file for read.
#[fail(display = "File already exists: '{:?}'", _0)] #[error("Failed to open file for read: '{0:?}'")]
FileAlreadyExists(PathBuf), OpenReadError(#[from] OpenReadError),
/// Failed to open a file for write.
#[error("Failed to open file for write: '{0:?}'")]
OpenWriteError(#[from] OpenWriteError),
/// Index already exists in this directory /// Index already exists in this directory
#[fail(display = "Index already exists")] #[error("Index already exists")]
IndexAlreadyExists, IndexAlreadyExists,
/// Failed to acquire file lock /// Failed to acquire file lock
#[fail(display = "Failed to acquire Lockfile: {:?}. {:?}", _0, _1)] #[error("Failed to acquire Lockfile: {0:?}. {1:?}")]
LockFailure(LockError, Option<String>), LockFailure(LockError, Option<String>),
/// IO Error. /// IO Error.
#[fail(display = "An IO error occurred: '{}'", _0)] #[error("An IO error occurred: '{0}'")]
IOError(#[cause] IOError), IOError(#[from] io::Error),
/// Data corruption. /// Data corruption.
#[fail(display = "{:?}", _0)] #[error("Data corrupted: '{0:?}'")]
DataCorruption(DataCorruption), DataCorruption(DataCorruption),
/// A thread holding the locked panicked and poisoned the lock. /// A thread holding the locked panicked and poisoned the lock.
#[fail(display = "A thread holding the locked panicked and poisoned the lock")] #[error("A thread holding the locked panicked and poisoned the lock")]
Poisoned, Poisoned,
/// Invalid argument was passed by the user. /// Invalid argument was passed by the user.
#[fail(display = "An invalid argument was passed: '{}'", _0)] #[error("An invalid argument was passed: '{0}'")]
InvalidArgument(String), InvalidArgument(String),
/// An Error happened in one of the thread. /// An Error happened in one of the thread.
#[fail(display = "An error occurred in a thread: '{}'", _0)] #[error("An error occurred in a thread: '{0}'")]
ErrorInThread(String), ErrorInThread(String),
/// An Error appeared related to the schema. /// An Error appeared related to the schema.
#[fail(display = "Schema error: '{}'", _0)] #[error("Schema error: '{0}'")]
SchemaError(String), SchemaError(String),
/// System error. (e.g.: We failed spawning a new thread) /// System error. (e.g.: We failed spawning a new thread)
#[fail(display = "System error.'{}'", _0)] #[error("System error.'{0}'")]
SystemError(String), SystemError(String),
/// Index incompatible with current version of tantivy /// Index incompatible with current version of tantivy
#[fail(display = "{:?}", _0)] #[error("{0:?}")]
IncompatibleIndex(Incompatibility), IncompatibleIndex(Incompatibility),
} }
@@ -90,31 +99,17 @@ impl From<DataCorruption> for TantivyError {
TantivyError::DataCorruption(data_corruption) TantivyError::DataCorruption(data_corruption)
} }
} }
impl From<FastFieldNotAvailableError> for TantivyError { impl From<FastFieldNotAvailableError> for TantivyError {
fn from(fastfield_error: FastFieldNotAvailableError) -> TantivyError { fn from(fastfield_error: FastFieldNotAvailableError) -> TantivyError {
TantivyError::SchemaError(format!("{}", fastfield_error)) TantivyError::SchemaError(format!("{}", fastfield_error))
} }
} }
impl From<LockError> for TantivyError { impl From<LockError> for TantivyError {
fn from(lock_error: LockError) -> TantivyError { fn from(lock_error: LockError) -> TantivyError {
TantivyError::LockFailure(lock_error, None) TantivyError::LockFailure(lock_error, None)
} }
} }
impl From<IOError> for TantivyError {
fn from(io_error: IOError) -> TantivyError {
TantivyError::IOError(io_error)
}
}
impl From<io::Error> for TantivyError {
fn from(io_error: io::Error) -> TantivyError {
TantivyError::IOError(io_error.into())
}
}
impl From<query::QueryParserError> for TantivyError { impl From<query::QueryParserError> for TantivyError {
fn from(parsing_error: query::QueryParserError) -> TantivyError { fn from(parsing_error: query::QueryParserError) -> TantivyError {
TantivyError::InvalidArgument(format!("Query is invalid. {:?}", parsing_error)) TantivyError::InvalidArgument(format!("Query is invalid. {:?}", parsing_error))
@@ -127,15 +122,9 @@ impl<Guard> From<PoisonError<Guard>> for TantivyError {
} }
} }
impl From<OpenReadError> for TantivyError { impl From<chrono::ParseError> for TantivyError {
fn from(error: OpenReadError) -> TantivyError { fn from(err: chrono::ParseError) -> TantivyError {
match error { TantivyError::InvalidArgument(err.to_string())
OpenReadError::FileDoesNotExist(filepath) => TantivyError::PathDoesNotExist(filepath),
OpenReadError::IOError(io_error) => TantivyError::IOError(io_error),
OpenReadError::IncompatibleIndex(incompatibility) => {
TantivyError::IncompatibleIndex(incompatibility)
}
}
} }
} }
@@ -145,35 +134,9 @@ impl From<schema::DocParsingError> for TantivyError {
} }
} }
impl From<OpenWriteError> for TantivyError {
fn from(error: OpenWriteError) -> TantivyError {
match error {
OpenWriteError::FileAlreadyExists(filepath) => {
TantivyError::FileAlreadyExists(filepath)
}
OpenWriteError::IOError(io_error) => TantivyError::IOError(io_error),
}
}
}
impl From<OpenDirectoryError> for TantivyError {
fn from(error: OpenDirectoryError) -> TantivyError {
match error {
OpenDirectoryError::DoesNotExist(directory_path) => {
TantivyError::PathDoesNotExist(directory_path)
}
OpenDirectoryError::NotADirectory(directory_path) => {
TantivyError::InvalidArgument(format!("{:?} is not a directory", directory_path))
}
OpenDirectoryError::IoError(err) => TantivyError::IOError(IOError::from(err)),
}
}
}
impl From<serde_json::Error> for TantivyError { impl From<serde_json::Error> for TantivyError {
fn from(error: serde_json::Error) -> TantivyError { fn from(error: serde_json::Error) -> TantivyError {
let io_err = io::Error::from(error); TantivyError::IOError(error.into())
TantivyError::IOError(io_err.into())
} }
} }
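The move from `failure`'s `#[derive(Fail)]` to `thiserror`'s `#[derive(Error)]` lets the `#[from]` attributes generate the conversions that the hand-written `From` impls above used to provide, so `?` keeps compiling. A minimal sketch of the same pattern on a toy error type (not tantivy's actual enum):

use std::io;
use thiserror::Error;

#[derive(Debug, Error)]
enum MyError {
    // `#[from]` generates `impl From<io::Error> for MyError`,
    // which is what makes the `?` operator below work.
    #[error("An IO error occurred: '{0}'")]
    Io(#[from] io::Error),
    #[error("An invalid argument was passed: '{0}'")]
    InvalidArgument(String),
}

fn read_config(path: &std::path::Path) -> Result<String, MyError> {
    // The io::Error is converted into MyError::Io automatically.
    Ok(std::fs::read_to_string(path)?)
}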


@@ -6,31 +6,114 @@ pub use self::writer::BytesFastFieldWriter;
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use crate::schema::Schema; use crate::schema::{BytesOptions, IndexRecordOption, Schema, Value};
use crate::Index; use crate::{query::TermQuery, schema::FAST, schema::INDEXED, schema::STORED};
use crate::{DocAddress, DocSet, Index, Searcher, Term};
use std::ops::Deref;
#[test] #[test]
fn test_bytes() { fn test_bytes() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let field = schema_builder.add_bytes_field("bytesfield"); let bytes_field = schema_builder.add_bytes_field("bytesfield", FAST);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(field=>vec![0u8, 1, 2, 3])); index_writer.add_document(doc!(bytes_field=>vec![0u8, 1, 2, 3]));
index_writer.add_document(doc!(field=>vec![])); index_writer.add_document(doc!(bytes_field=>vec![]));
index_writer.add_document(doc!(field=>vec![255u8])); index_writer.add_document(doc!(bytes_field=>vec![255u8]));
index_writer.add_document(doc!(field=>vec![1u8, 3, 5, 7, 9])); index_writer.add_document(doc!(bytes_field=>vec![1u8, 3, 5, 7, 9]));
index_writer.add_document(doc!(field=>vec![0u8; 1000])); index_writer.add_document(doc!(bytes_field=>vec![0u8; 1000]));
assert!(index_writer.commit().is_ok()); index_writer.commit()?;
let searcher = index.reader().unwrap().searcher(); let searcher = index.reader()?.searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
let bytes_reader = segment_reader.fast_fields().bytes(field).unwrap(); let bytes_reader = segment_reader.fast_fields().bytes(bytes_field).unwrap();
assert_eq!(bytes_reader.get_bytes(0), &[0u8, 1, 2, 3]); assert_eq!(bytes_reader.get_bytes(0), &[0u8, 1, 2, 3]);
assert!(bytes_reader.get_bytes(1).is_empty()); assert!(bytes_reader.get_bytes(1).is_empty());
assert_eq!(bytes_reader.get_bytes(2), &[255u8]); assert_eq!(bytes_reader.get_bytes(2), &[255u8]);
assert_eq!(bytes_reader.get_bytes(3), &[1u8, 3, 5, 7, 9]); assert_eq!(bytes_reader.get_bytes(3), &[1u8, 3, 5, 7, 9]);
let long = vec![0u8; 1000]; let long = vec![0u8; 1000];
assert_eq!(bytes_reader.get_bytes(4), long.as_slice()); assert_eq!(bytes_reader.get_bytes(4), long.as_slice());
Ok(())
}
fn create_index_for_test<T: Into<BytesOptions>>(
byte_options: T,
) -> crate::Result<impl Deref<Target = Searcher>> {
let mut schema_builder = Schema::builder();
let field = schema_builder.add_bytes_field("string_bytes", byte_options.into());
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(
field => b"tantivy".as_ref(),
field => b"lucene".as_ref()
));
index_writer.commit()?;
Ok(index.reader()?.searcher())
}
#[test]
fn test_stored_bytes() -> crate::Result<()> {
let searcher = create_index_for_test(STORED)?;
assert_eq!(searcher.num_docs(), 1);
let retrieved_doc = searcher.doc(DocAddress(0u32, 0u32))?;
let field = searcher.schema().get_field("string_bytes").unwrap();
let values: Vec<&Value> = retrieved_doc.get_all(field).collect();
assert_eq!(values.len(), 2);
let values_bytes: Vec<&[u8]> = values
.into_iter()
.flat_map(|value| value.bytes_value())
.collect();
assert_eq!(values_bytes, &[&b"tantivy"[..], &b"lucene"[..]]);
Ok(())
}
#[test]
fn test_non_stored_bytes() -> crate::Result<()> {
let searcher = create_index_for_test(INDEXED)?;
assert_eq!(searcher.num_docs(), 1);
let retrieved_doc = searcher.doc(DocAddress(0u32, 0u32))?;
let field = searcher.schema().get_field("string_bytes").unwrap();
assert!(retrieved_doc.get_first(field).is_none());
Ok(())
}
#[test]
fn test_index_bytes() -> crate::Result<()> {
let searcher = create_index_for_test(INDEXED)?;
assert_eq!(searcher.num_docs(), 1);
let field = searcher.schema().get_field("string_bytes").unwrap();
let term = Term::from_field_bytes(field, b"lucene".as_ref());
let term_query = TermQuery::new(term, IndexRecordOption::Basic);
let term_weight = term_query.specialized_weight(&searcher, true)?;
let term_scorer = term_weight.specialized_scorer(searcher.segment_reader(0), 1.0f32)?;
assert_eq!(term_scorer.doc(), 0u32);
Ok(())
}
#[test]
fn test_non_index_bytes() -> crate::Result<()> {
let searcher = create_index_for_test(STORED)?;
assert_eq!(searcher.num_docs(), 1);
let field = searcher.schema().get_field("string_bytes").unwrap();
let term = Term::from_field_bytes(field, b"lucene".as_ref());
let term_query = TermQuery::new(term, IndexRecordOption::Basic);
let term_weight_res = term_query.specialized_weight(&searcher, false);
assert!(matches!(
term_weight_res,
Err(crate::TantivyError::SchemaError(_))
));
Ok(())
}
#[test]
fn test_fast_bytes_multivalue_value() -> crate::Result<()> {
let searcher = create_index_for_test(FAST)?;
assert_eq!(searcher.num_docs(), 1);
let fast_fields = searcher.segment_reader(0u32).fast_fields();
let field = searcher.schema().get_field("string_bytes").unwrap();
let fast_field_reader = fast_fields.bytes(field).unwrap();
assert_eq!(fast_field_reader.get_bytes(0u32), b"tantivy");
Ok(())
} }
} }


@@ -1,6 +1,5 @@
-use owning_ref::OwningRef;
-use crate::directory::ReadOnlySource;
+use crate::directory::FileSlice;
+use crate::directory::OwnedBytes;
 use crate::fastfield::FastFieldReader;
 use crate::DocId;
@@ -17,16 +16,16 @@ use crate::DocId;
 #[derive(Clone)]
 pub struct BytesFastFieldReader {
     idx_reader: FastFieldReader<u64>,
-    values: OwningRef<ReadOnlySource, [u8]>,
+    values: OwnedBytes,
 }

 impl BytesFastFieldReader {
     pub(crate) fn open(
         idx_reader: FastFieldReader<u64>,
-        values_source: ReadOnlySource,
-    ) -> BytesFastFieldReader {
-        let values = OwningRef::new(values_source).map(|source| &source[..]);
-        BytesFastFieldReader { idx_reader, values }
+        values_file: FileSlice,
+    ) -> crate::Result<BytesFastFieldReader> {
+        let values = values_file.read_bytes()?;
+        Ok(BytesFastFieldReader { idx_reader, values })
     }

     fn range(&self, doc: DocId) -> (usize, usize) {
@@ -38,7 +37,7 @@ impl BytesFastFieldReader {
     /// Returns the bytes associated to the given `doc`
     pub fn get_bytes(&self, doc: DocId) -> &[u8] {
         let (start, stop) = self.range(doc);
-        &self.values[start..stop]
+        &self.values.as_slice()[start..stop]
     }

     /// Returns the overall number of bytes in this bytes fast field.


@@ -49,16 +49,10 @@ impl BytesFastFieldWriter {
     /// matching field values present in the document.
     pub fn add_document(&mut self, doc: &Document) {
         self.next_doc();
-        for field_value in doc.field_values() {
-            if field_value.field() == self.field {
-                if let Value::Bytes(ref bytes) = *field_value.value() {
-                    self.vals.extend_from_slice(bytes);
-                } else {
-                    panic!(
-                        "Bytes field contained non-Bytes Value!. Field {:?} = {:?}",
-                        self.field, field_value
-                    );
-                }
+        for field_value in doc.get_all(self.field) {
+            if let Value::Bytes(ref bytes) = field_value {
+                self.vals.extend_from_slice(bytes);
+                return;
             }
         }
     }
@@ -76,21 +70,18 @@ impl BytesFastFieldWriter {
     /// Serializes the fast field values by pushing them to the `FastFieldSerializer`.
     pub fn serialize(&self, serializer: &mut FastFieldSerializer) -> io::Result<()> {
-        {
-            // writing the offset index
-            let mut doc_index_serializer =
-                serializer.new_u64_fast_field_with_idx(self.field, 0, self.vals.len() as u64, 0)?;
-            for &offset in &self.doc_index {
-                doc_index_serializer.add_val(offset)?;
-            }
-            doc_index_serializer.add_val(self.vals.len() as u64)?;
-            doc_index_serializer.close_field()?;
-        }
-        {
-            // writing the values themselves
-            let mut value_serializer = serializer.new_bytes_fast_field_with_idx(self.field, 1)?;
-            value_serializer.write_all(&self.vals)?;
-        }
+        // writing the offset index
+        let mut doc_index_serializer =
+            serializer.new_u64_fast_field_with_idx(self.field, 0, self.vals.len() as u64, 0)?;
+        for &offset in &self.doc_index {
+            doc_index_serializer.add_val(offset)?;
+        }
+        doc_index_serializer.add_val(self.vals.len() as u64)?;
+        doc_index_serializer.close_field()?;
+        // writing the values themselves
+        serializer
+            .new_bytes_fast_field_with_idx(self.field, 1)?
+            .write_all(&self.vals)?;
         Ok(())
     }
 }


@@ -1,5 +1,6 @@
use crate::common::{BitSet, HasLen}; use crate::common::{BitSet, HasLen};
use crate::directory::ReadOnlySource; use crate::directory::FileSlice;
use crate::directory::OwnedBytes;
use crate::directory::WritePtr; use crate::directory::WritePtr;
use crate::space_usage::ByteCount; use crate::space_usage::ByteCount;
use crate::DocId; use crate::DocId;
@@ -9,6 +10,8 @@ use std::io::Write;
/// Write a delete `BitSet` /// Write a delete `BitSet`
/// ///
/// where `delete_bitset` is the set of deleted `DocId`. /// where `delete_bitset` is the set of deleted `DocId`.
/// Warning: this function does not call terminate. The caller is in charge of
/// closing the writer properly.
pub fn write_delete_bitset( pub fn write_delete_bitset(
delete_bitset: &BitSet, delete_bitset: &BitSet,
max_doc: u32, max_doc: u32,
@@ -37,22 +40,41 @@ pub fn write_delete_bitset(
 /// Set of deleted `DocId`s.
 #[derive(Clone)]
 pub struct DeleteBitSet {
-    data: ReadOnlySource,
+    data: OwnedBytes,
     len: usize,
 }

 impl DeleteBitSet {
-    /// Opens a delete bitset given its data source.
-    pub fn open(data: ReadOnlySource) -> DeleteBitSet {
-        let num_deleted: usize = data
+    #[cfg(test)]
+    pub(crate) fn for_test(docs: &[DocId], max_doc: u32) -> DeleteBitSet {
+        use crate::directory::{Directory, RAMDirectory, TerminatingWrite};
+        use std::path::Path;
+        assert!(docs.iter().all(|&doc| doc < max_doc));
+        let mut bitset = BitSet::with_max_value(max_doc);
+        for &doc in docs {
+            bitset.insert(doc);
+        }
+        let directory = RAMDirectory::create();
+        let path = Path::new("dummydeletebitset");
+        let mut wrt = directory.open_write(path).unwrap();
+        write_delete_bitset(&bitset, max_doc, &mut wrt).unwrap();
+        wrt.terminate().unwrap();
+        let file = directory.open_read(path).unwrap();
+        Self::open(file).unwrap()
+    }
+
+    /// Opens a delete bitset given its file.
+    pub fn open(file: FileSlice) -> crate::Result<DeleteBitSet> {
+        let bytes = file.read_bytes()?;
+        let num_deleted: usize = bytes
             .as_slice()
             .iter()
             .map(|b| b.count_ones() as usize)
             .sum();
-        DeleteBitSet {
-            data,
+        Ok(DeleteBitSet {
+            data: bytes,
             len: num_deleted,
-        }
+        })
     }

     /// Returns true iff the document is still "alive". In other words, if it has not been deleted.
@@ -64,7 +86,7 @@ impl DeleteBitSet {
     #[inline(always)]
     pub fn is_deleted(&self, doc: DocId) -> bool {
         let byte_offset = doc / 8u32;
-        let b: u8 = (*self.data)[byte_offset as usize];
+        let b: u8 = self.data.as_slice()[byte_offset as usize];
         let shift = (doc & 7u32) as u8;
         b & (1u8 << shift) != 0
     }
@@ -83,42 +105,35 @@ impl HasLen for DeleteBitSet {
 #[cfg(test)]
 mod tests {
-    use super::*;
-    use crate::directory::*;
-    use std::path::PathBuf;
+    use super::DeleteBitSet;
+    use crate::common::HasLen;

-    fn test_delete_bitset_helper(bitset: &BitSet, max_doc: u32) {
-        let test_path = PathBuf::from("test");
-        let mut directory = RAMDirectory::create();
-        {
-            let mut writer = directory.open_write(&*test_path).unwrap();
-            write_delete_bitset(bitset, max_doc, &mut writer).unwrap();
-            writer.terminate().unwrap();
-        }
-        let source = directory.open_read(&test_path).unwrap();
-        let delete_bitset = DeleteBitSet::open(source);
-        for doc in 0..max_doc {
-            assert_eq!(bitset.contains(doc), delete_bitset.is_deleted(doc as DocId));
-        }
-        assert_eq!(delete_bitset.len(), bitset.len());
+    #[test]
+    fn test_delete_bitset_empty() {
+        let delete_bitset = DeleteBitSet::for_test(&[], 10);
+        for doc in 0..10 {
+            assert_eq!(delete_bitset.is_deleted(doc), !delete_bitset.is_alive(doc));
+        }
+        assert_eq!(delete_bitset.len(), 0);
     }

     #[test]
     fn test_delete_bitset() {
-        {
-            let mut bitset = BitSet::with_max_value(10);
-            bitset.insert(1);
-            bitset.insert(9);
-            test_delete_bitset_helper(&bitset, 10);
-        }
-        {
-            let mut bitset = BitSet::with_max_value(8);
-            bitset.insert(1);
-            bitset.insert(2);
-            bitset.insert(3);
-            bitset.insert(5);
-            bitset.insert(7);
-            test_delete_bitset_helper(&bitset, 8);
-        }
+        let delete_bitset = DeleteBitSet::for_test(&[1, 9], 10);
+        assert!(delete_bitset.is_alive(0));
+        assert!(delete_bitset.is_deleted(1));
+        assert!(delete_bitset.is_alive(2));
+        assert!(delete_bitset.is_alive(3));
+        assert!(delete_bitset.is_alive(4));
+        assert!(delete_bitset.is_alive(5));
+        assert!(delete_bitset.is_alive(6));
+        assert!(delete_bitset.is_alive(6));
+        assert!(delete_bitset.is_alive(7));
+        assert!(delete_bitset.is_alive(8));
+        assert!(delete_bitset.is_deleted(9));
+        for doc in 0..10 {
+            assert_eq!(delete_bitset.is_deleted(doc), !delete_bitset.is_alive(doc));
+        }
+        assert_eq!(delete_bitset.len(), 2);
     }
 }

View File

@@ -4,8 +4,8 @@ use std::result;
 /// `FastFieldNotAvailableError` is returned when the
 /// user requested for a fast field reader, and the field was not
 /// defined in the schema as a fast field.
-#[derive(Debug, Fail)]
-#[fail(display = "Fast field not available: '{:?}'", field_name)]
+#[derive(Debug, Error)]
+#[error("Fast field not available: '{field_name:?}'")]
 pub struct FastFieldNotAvailableError {
     field_name: String,
 }
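The derive moves off the `failure`-style attributes. Assuming the `Error` derive here comes from the `thiserror` crate (the attribute syntax matches it), the same pattern in isolation:

use thiserror::Error;

#[derive(Debug, Error)]
#[error("Fast field not available: '{field_name:?}'")]
pub struct FastFieldNotAvailableError {
    field_name: String,
}

fn main() {
    let err = FastFieldNotAvailableError { field_name: "title".to_string() };
    // Display is generated from the #[error(...)] attribute.
    assert_eq!(err.to_string(), "Fast field not available: '\"title\"'");
}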

View File

@@ -73,7 +73,61 @@ impl FacetReader {
     }

     /// Return the list of facet ordinals associated to a document.
-    pub fn facet_ords(&mut self, doc: DocId, output: &mut Vec<u64>) {
+    pub fn facet_ords(&self, doc: DocId, output: &mut Vec<u64>) {
         self.term_ords.get_vals(doc, output);
     }
 }
#[cfg(test)]
mod tests {
use crate::Index;
use crate::{
schema::{Facet, SchemaBuilder},
Document,
};
#[test]
fn test_facet_not_populated_for_all_docs() -> crate::Result<()> {
let mut schema_builder = SchemaBuilder::default();
let facet_field = schema_builder.add_facet_field("facet");
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(facet_field=>Facet::from_text("/a/b")));
index_writer.add_document(Document::default());
index_writer.commit()?;
let searcher = index.reader()?.searcher();
let facet_reader = searcher
.segment_reader(0u32)
.facet_reader(facet_field)
.unwrap();
let mut facet_ords = Vec::new();
facet_reader.facet_ords(0u32, &mut facet_ords);
assert_eq!(&facet_ords, &[2u64]);
facet_reader.facet_ords(1u32, &mut facet_ords);
assert!(facet_ords.is_empty());
Ok(())
}
#[test]
fn test_facet_not_populated_for_any_docs() -> crate::Result<()> {
let mut schema_builder = SchemaBuilder::default();
let facet_field = schema_builder.add_facet_field("facet");
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(Document::default());
index_writer.add_document(Document::default());
index_writer.commit()?;
let searcher = index.reader()?.searcher();
let facet_reader = searcher
.segment_reader(0u32)
.facet_reader(facet_field)
.unwrap();
let mut facet_ords = Vec::new();
facet_reader.facet_ords(0u32, &mut facet_ords);
assert!(facet_ords.is_empty());
facet_reader.facet_ords(1u32, &mut facet_ords);
assert!(facet_ords.is_empty());
Ok(())
}
}

View File

@@ -33,11 +33,14 @@ pub use self::reader::FastFieldReader;
 pub use self::readers::FastFieldReaders;
 pub use self::serializer::FastFieldSerializer;
 pub use self::writer::{FastFieldsWriter, IntFastFieldWriter};
-use crate::chrono::{NaiveDateTime, Utc};
 use crate::common;
 use crate::schema::Cardinality;
 use crate::schema::FieldType;
 use crate::schema::Value;
+use crate::{
+    chrono::{NaiveDateTime, Utc},
+    schema::Type,
+};
mod bytes; mod bytes;
mod delete; mod delete;
@@ -76,6 +79,9 @@ pub trait FastValue: Clone + Copy + Send + Sync + PartialOrd {
fn make_zero() -> Self { fn make_zero() -> Self {
Self::from_u64(0i64.to_u64()) Self::from_u64(0i64.to_u64())
} }
/// Returns the `schema::Type` for this FastValue.
fn to_type() -> Type;
} }
impl FastValue for u64 { impl FastValue for u64 {
@@ -98,6 +104,10 @@ impl FastValue for u64 {
fn as_u64(&self) -> u64 { fn as_u64(&self) -> u64 {
*self *self
} }
fn to_type() -> Type {
Type::U64
}
} }
impl FastValue for i64 { impl FastValue for i64 {
@@ -119,6 +129,10 @@ impl FastValue for i64 {
fn as_u64(&self) -> u64 { fn as_u64(&self) -> u64 {
*self as u64 *self as u64
} }
fn to_type() -> Type {
Type::I64
}
} }
impl FastValue for f64 { impl FastValue for f64 {
@@ -140,6 +154,10 @@ impl FastValue for f64 {
fn as_u64(&self) -> u64 { fn as_u64(&self) -> u64 {
self.to_bits() self.to_bits()
} }
fn to_type() -> Type {
Type::F64
}
} }
impl FastValue for crate::DateTime { impl FastValue for crate::DateTime {
@@ -162,6 +180,10 @@ impl FastValue for crate::DateTime {
fn as_u64(&self) -> u64 { fn as_u64(&self) -> u64 {
self.timestamp().as_u64() self.timestamp().as_u64()
} }
fn to_type() -> Type {
Type::Date
}
} }
fn value_to_u64(value: &Value) -> u64 { fn value_to_u64(value: &Value) -> u64 {
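A hypothetical helper showing how the new `FastValue::to_type()` hook could be used to branch on the concrete value type; the function name is illustrative.

use crate::schema::Type;

fn describe_fast_value<Item: FastValue>() -> &'static str {
    // Dispatch on the schema-level type reported by the fast value.
    match Item::to_type() {
        Type::U64 => "unsigned 64-bit fast field",
        Type::I64 => "signed 64-bit fast field",
        Type::F64 => "64-bit float fast field",
        Type::Date => "date fast field",
        _ => "not a fast value type",
    }
}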
@@ -187,6 +209,7 @@ mod tests {
use crate::schema::FAST; use crate::schema::FAST;
use crate::schema::{Document, IntOptions}; use crate::schema::{Document, IntOptions};
use crate::{Index, SegmentId, SegmentReader}; use crate::{Index, SegmentId, SegmentReader};
use common::HasLen;
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use rand::prelude::SliceRandom; use rand::prelude::SliceRandom;
use rand::rngs::StdRng; use rand::rngs::StdRng;
@@ -217,9 +240,9 @@ mod tests {
} }
#[test] #[test]
fn test_intfastfield_small() { fn test_intfastfield_small() -> crate::Result<()> {
let path = Path::new("test"); let path = Path::new("test");
let mut directory: RAMDirectory = RAMDirectory::create(); let directory: RAMDirectory = RAMDirectory::create();
{ {
let write: WritePtr = directory.open_write(Path::new("test")).unwrap(); let write: WritePtr = directory.open_write(Path::new("test")).unwrap();
let mut serializer = FastFieldSerializer::from_write(write).unwrap(); let mut serializer = FastFieldSerializer::from_write(write).unwrap();
@@ -232,27 +255,24 @@ mod tests {
             .unwrap();
         serializer.close().unwrap();
     }
-    let source = directory.open_read(&path).unwrap();
-    {
-        assert_eq!(source.len(), 36 as usize);
-    }
-    {
-        let composite_file = CompositeFile::open(&source).unwrap();
-        let field_source = composite_file.open_read(*FIELD).unwrap();
-        let fast_field_reader = FastFieldReader::<u64>::open(field_source);
-        assert_eq!(fast_field_reader.get(0), 13u64);
-        assert_eq!(fast_field_reader.get(1), 14u64);
-        assert_eq!(fast_field_reader.get(2), 2u64);
-    }
+    let file = directory.open_read(&path).unwrap();
+    assert_eq!(file.len(), 36 as usize);
+    let composite_file = CompositeFile::open(&file)?;
+    let file = composite_file.open_read(*FIELD).unwrap();
+    let fast_field_reader = FastFieldReader::<u64>::open(file)?;
+    assert_eq!(fast_field_reader.get(0), 13u64);
+    assert_eq!(fast_field_reader.get(1), 14u64);
+    assert_eq!(fast_field_reader.get(2), 2u64);
+    Ok(())
 }
#[test] #[test]
fn test_intfastfield_large() { fn test_intfastfield_large() -> crate::Result<()> {
let path = Path::new("test"); let path = Path::new("test");
let mut directory: RAMDirectory = RAMDirectory::create(); let directory: RAMDirectory = RAMDirectory::create();
{ {
let write: WritePtr = directory.open_write(Path::new("test")).unwrap(); let write: WritePtr = directory.open_write(Path::new("test"))?;
let mut serializer = FastFieldSerializer::from_write(write).unwrap(); let mut serializer = FastFieldSerializer::from_write(write)?;
let mut fast_field_writers = FastFieldsWriter::from_schema(&SCHEMA); let mut fast_field_writers = FastFieldsWriter::from_schema(&SCHEMA);
fast_field_writers.add_document(&doc!(*FIELD=>4u64)); fast_field_writers.add_document(&doc!(*FIELD=>4u64));
fast_field_writers.add_document(&doc!(*FIELD=>14_082_001u64)); fast_field_writers.add_document(&doc!(*FIELD=>14_082_001u64));
@@ -263,19 +283,15 @@ mod tests {
fast_field_writers.add_document(&doc!(*FIELD=>1_002u64)); fast_field_writers.add_document(&doc!(*FIELD=>1_002u64));
fast_field_writers.add_document(&doc!(*FIELD=>1_501u64)); fast_field_writers.add_document(&doc!(*FIELD=>1_501u64));
fast_field_writers.add_document(&doc!(*FIELD=>215u64)); fast_field_writers.add_document(&doc!(*FIELD=>215u64));
fast_field_writers fast_field_writers.serialize(&mut serializer, &HashMap::new())?;
.serialize(&mut serializer, &HashMap::new()) serializer.close()?;
.unwrap();
serializer.close().unwrap();
} }
let source = directory.open_read(&path).unwrap(); let file = directory.open_read(&path)?;
assert_eq!(file.len(), 61 as usize);
{ {
assert_eq!(source.len(), 61 as usize); let fast_fields_composite = CompositeFile::open(&file)?;
}
{
let fast_fields_composite = CompositeFile::open(&source).unwrap();
let data = fast_fields_composite.open_read(*FIELD).unwrap(); let data = fast_fields_composite.open_read(*FIELD).unwrap();
let fast_field_reader = FastFieldReader::<u64>::open(data); let fast_field_reader = FastFieldReader::<u64>::open(data)?;
assert_eq!(fast_field_reader.get(0), 4u64); assert_eq!(fast_field_reader.get(0), 4u64);
assert_eq!(fast_field_reader.get(1), 14_082_001u64); assert_eq!(fast_field_reader.get(1), 14_082_001u64);
assert_eq!(fast_field_reader.get(2), 3_052u64); assert_eq!(fast_field_reader.get(2), 3_052u64);
@@ -286,12 +302,13 @@ mod tests {
assert_eq!(fast_field_reader.get(7), 1_501u64); assert_eq!(fast_field_reader.get(7), 1_501u64);
assert_eq!(fast_field_reader.get(8), 215u64); assert_eq!(fast_field_reader.get(8), 215u64);
} }
Ok(())
} }
#[test] #[test]
fn test_intfastfield_null_amplitude() { fn test_intfastfield_null_amplitude() -> crate::Result<()> {
let path = Path::new("test"); let path = Path::new("test");
let mut directory: RAMDirectory = RAMDirectory::create(); let directory: RAMDirectory = RAMDirectory::create();
{ {
let write: WritePtr = directory.open_write(Path::new("test")).unwrap(); let write: WritePtr = directory.open_write(Path::new("test")).unwrap();
@@ -305,24 +322,23 @@ mod tests {
.unwrap(); .unwrap();
serializer.close().unwrap(); serializer.close().unwrap();
} }
let source = directory.open_read(&path).unwrap(); let file = directory.open_read(&path).unwrap();
assert_eq!(file.len(), 34 as usize);
{ {
assert_eq!(source.len(), 34 as usize); let fast_fields_composite = CompositeFile::open(&file).unwrap();
}
{
let fast_fields_composite = CompositeFile::open(&source).unwrap();
let data = fast_fields_composite.open_read(*FIELD).unwrap(); let data = fast_fields_composite.open_read(*FIELD).unwrap();
let fast_field_reader = FastFieldReader::<u64>::open(data); let fast_field_reader = FastFieldReader::<u64>::open(data)?;
for doc in 0..10_000 { for doc in 0..10_000 {
assert_eq!(fast_field_reader.get(doc), 100_000u64); assert_eq!(fast_field_reader.get(doc), 100_000u64);
} }
} }
Ok(())
} }
#[test] #[test]
fn test_intfastfield_large_numbers() { fn test_intfastfield_large_numbers() -> crate::Result<()> {
let path = Path::new("test"); let path = Path::new("test");
let mut directory: RAMDirectory = RAMDirectory::create(); let directory: RAMDirectory = RAMDirectory::create();
{ {
let write: WritePtr = directory.open_write(Path::new("test")).unwrap(); let write: WritePtr = directory.open_write(Path::new("test")).unwrap();
@@ -338,14 +354,12 @@ mod tests {
.unwrap(); .unwrap();
serializer.close().unwrap(); serializer.close().unwrap();
} }
let source = directory.open_read(&path).unwrap(); let file = directory.open_read(&path).unwrap();
assert_eq!(file.len(), 80042 as usize);
{ {
assert_eq!(source.len(), 80042 as usize); let fast_fields_composite = CompositeFile::open(&file)?;
}
{
let fast_fields_composite = CompositeFile::open(&source).unwrap();
let data = fast_fields_composite.open_read(*FIELD).unwrap(); let data = fast_fields_composite.open_read(*FIELD).unwrap();
let fast_field_reader = FastFieldReader::<u64>::open(data); let fast_field_reader = FastFieldReader::<u64>::open(data)?;
assert_eq!(fast_field_reader.get(0), 0u64); assert_eq!(fast_field_reader.get(0), 0u64);
for doc in 1..10_001 { for doc in 1..10_001 {
assert_eq!( assert_eq!(
@@ -354,12 +368,13 @@ mod tests {
); );
} }
} }
Ok(())
} }
#[test] #[test]
fn test_signed_intfastfield() { fn test_signed_intfastfield() -> crate::Result<()> {
let path = Path::new("test"); let path = Path::new("test");
let mut directory: RAMDirectory = RAMDirectory::create(); let directory: RAMDirectory = RAMDirectory::create();
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let i64_field = schema_builder.add_i64_field("field", FAST); let i64_field = schema_builder.add_i64_field("field", FAST);
@@ -378,14 +393,12 @@ mod tests {
.unwrap(); .unwrap();
serializer.close().unwrap(); serializer.close().unwrap();
} }
let source = directory.open_read(&path).unwrap(); let file = directory.open_read(&path).unwrap();
assert_eq!(file.len(), 17709 as usize);
{ {
assert_eq!(source.len(), 17709 as usize); let fast_fields_composite = CompositeFile::open(&file)?;
}
{
let fast_fields_composite = CompositeFile::open(&source).unwrap();
let data = fast_fields_composite.open_read(i64_field).unwrap(); let data = fast_fields_composite.open_read(i64_field).unwrap();
let fast_field_reader = FastFieldReader::<i64>::open(data); let fast_field_reader = FastFieldReader::<i64>::open(data)?;
assert_eq!(fast_field_reader.min_value(), -100i64); assert_eq!(fast_field_reader.min_value(), -100i64);
assert_eq!(fast_field_reader.max_value(), 9_999i64); assert_eq!(fast_field_reader.max_value(), 9_999i64);
@@ -398,12 +411,13 @@ mod tests {
assert_eq!(buffer[i], -100i64 + 53i64 + i as i64); assert_eq!(buffer[i], -100i64 + 53i64 + i as i64);
} }
} }
Ok(())
} }
#[test] #[test]
fn test_signed_intfastfield_default_val() { fn test_signed_intfastfield_default_val() -> crate::Result<()> {
let path = Path::new("test"); let path = Path::new("test");
let mut directory: RAMDirectory = RAMDirectory::create(); let directory: RAMDirectory = RAMDirectory::create();
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let i64_field = schema_builder.add_i64_field("field", FAST); let i64_field = schema_builder.add_i64_field("field", FAST);
let schema = schema_builder.build(); let schema = schema_builder.build();
@@ -420,13 +434,14 @@ mod tests {
serializer.close().unwrap(); serializer.close().unwrap();
} }
let source = directory.open_read(&path).unwrap(); let file = directory.open_read(&path).unwrap();
{ {
let fast_fields_composite = CompositeFile::open(&source).unwrap(); let fast_fields_composite = CompositeFile::open(&file).unwrap();
let data = fast_fields_composite.open_read(i64_field).unwrap(); let data = fast_fields_composite.open_read(i64_field).unwrap();
let fast_field_reader = FastFieldReader::<i64>::open(data); let fast_field_reader = FastFieldReader::<i64>::open(data)?;
assert_eq!(fast_field_reader.get(0u32), 0i64); assert_eq!(fast_field_reader.get(0u32), 0i64);
} }
Ok(())
} }
// Warning: this generates the same permutation at each call // Warning: this generates the same permutation at each call
@@ -437,28 +452,26 @@ mod tests {
} }
#[test] #[test]
fn test_intfastfield_permutation() { fn test_intfastfield_permutation() -> crate::Result<()> {
let path = Path::new("test"); let path = Path::new("test");
let permutation = generate_permutation(); let permutation = generate_permutation();
let n = permutation.len(); let n = permutation.len();
let mut directory = RAMDirectory::create(); let directory = RAMDirectory::create();
{ {
let write: WritePtr = directory.open_write(Path::new("test")).unwrap(); let write: WritePtr = directory.open_write(Path::new("test"))?;
let mut serializer = FastFieldSerializer::from_write(write).unwrap(); let mut serializer = FastFieldSerializer::from_write(write)?;
let mut fast_field_writers = FastFieldsWriter::from_schema(&SCHEMA); let mut fast_field_writers = FastFieldsWriter::from_schema(&SCHEMA);
for &x in &permutation { for &x in &permutation {
fast_field_writers.add_document(&doc!(*FIELD=>x)); fast_field_writers.add_document(&doc!(*FIELD=>x));
} }
fast_field_writers fast_field_writers.serialize(&mut serializer, &HashMap::new())?;
.serialize(&mut serializer, &HashMap::new()) serializer.close()?;
.unwrap();
serializer.close().unwrap();
} }
let source = directory.open_read(&path).unwrap(); let file = directory.open_read(&path)?;
{ {
let fast_fields_composite = CompositeFile::open(&source).unwrap(); let fast_fields_composite = CompositeFile::open(&file)?;
let data = fast_fields_composite.open_read(*FIELD).unwrap(); let data = fast_fields_composite.open_read(*FIELD).unwrap();
let fast_field_reader = FastFieldReader::<u64>::open(data); let fast_field_reader = FastFieldReader::<u64>::open(data)?;
let mut a = 0u64; let mut a = 0u64;
for _ in 0..n { for _ in 0..n {
@@ -466,6 +479,7 @@ mod tests {
a = fast_field_reader.get(a as u32); a = fast_field_reader.get(a as u32);
} }
} }
Ok(())
} }
#[test] #[test]
@@ -474,7 +488,7 @@ mod tests {
let date_field = schema_builder.add_date_field("date", FAST); let date_field = schema_builder.add_date_field("date", FAST);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.set_merge_policy(Box::new(NoMergePolicy)); index_writer.set_merge_policy(Box::new(NoMergePolicy));
index_writer.add_document(doc!(date_field =>crate::chrono::prelude::Utc::now())); index_writer.add_document(doc!(date_field =>crate::chrono::prelude::Utc::now()));
index_writer.commit().unwrap(); index_writer.commit().unwrap();
@@ -511,7 +525,7 @@ mod tests {
); );
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.set_merge_policy(Box::new(NoMergePolicy)); index_writer.set_merge_policy(Box::new(NoMergePolicy));
index_writer.add_document(doc!( index_writer.add_document(doc!(
date_field => crate::DateTime::from_u64(1i64.to_u64()), date_field => crate::DateTime::from_u64(1i64.to_u64()),
@@ -598,7 +612,7 @@ mod bench {
fn bench_intfastfield_linear_fflookup(b: &mut Bencher) { fn bench_intfastfield_linear_fflookup(b: &mut Bencher) {
let path = Path::new("test"); let path = Path::new("test");
let permutation = generate_permutation(); let permutation = generate_permutation();
let mut directory: RAMDirectory = RAMDirectory::create(); let directory: RAMDirectory = RAMDirectory::create();
{ {
let write: WritePtr = directory.open_write(Path::new("test")).unwrap(); let write: WritePtr = directory.open_write(Path::new("test")).unwrap();
let mut serializer = FastFieldSerializer::from_write(write).unwrap(); let mut serializer = FastFieldSerializer::from_write(write).unwrap();
@@ -611,9 +625,9 @@ mod bench {
.unwrap(); .unwrap();
serializer.close().unwrap(); serializer.close().unwrap();
} }
let source = directory.open_read(&path).unwrap(); let file = directory.open_read(&path).unwrap();
{ {
let fast_fields_composite = CompositeFile::open(&source).unwrap(); let fast_fields_composite = CompositeFile::open(&file).unwrap();
let data = fast_fields_composite.open_read(*FIELD).unwrap(); let data = fast_fields_composite.open_read(*FIELD).unwrap();
let fast_field_reader = FastFieldReader::<u64>::open(data); let fast_field_reader = FastFieldReader::<u64>::open(data);
@@ -632,7 +646,7 @@ mod bench {
fn bench_intfastfield_fflookup(b: &mut Bencher) { fn bench_intfastfield_fflookup(b: &mut Bencher) {
let path = Path::new("test"); let path = Path::new("test");
let permutation = generate_permutation(); let permutation = generate_permutation();
let mut directory: RAMDirectory = RAMDirectory::create(); let directory: RAMDirectory = RAMDirectory::create();
{ {
let write: WritePtr = directory.open_write(Path::new("test")).unwrap(); let write: WritePtr = directory.open_write(Path::new("test")).unwrap();
let mut serializer = FastFieldSerializer::from_write(write).unwrap(); let mut serializer = FastFieldSerializer::from_write(write).unwrap();
@@ -645,9 +659,9 @@ mod bench {
.unwrap(); .unwrap();
serializer.close().unwrap(); serializer.close().unwrap();
} }
let source = directory.open_read(&path).unwrap(); let file = directory.open_read(&path).unwrap();
{ {
let fast_fields_composite = CompositeFile::open(&source).unwrap(); let fast_fields_composite = CompositeFile::open(&file).unwrap();
let data = fast_fields_composite.open_read(*FIELD).unwrap(); let data = fast_fields_composite.open_read(*FIELD).unwrap();
let fast_field_reader = FastFieldReader::<u64>::open(data); let fast_field_reader = FastFieldReader::<u64>::open(data);

View File

@@ -25,7 +25,7 @@ mod tests {
); );
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.add_document(doc!(field=>1u64, field=>3u64)); index_writer.add_document(doc!(field=>1u64, field=>3u64));
index_writer.add_document(doc!()); index_writer.add_document(doc!());
index_writer.add_document(doc!(field=>4u64)); index_writer.add_document(doc!(field=>4u64));
@@ -64,7 +64,7 @@ mod tests {
schema_builder.add_i64_field("time_stamp_i", IntOptions::default().set_stored()); schema_builder.add_i64_field("time_stamp_i", IntOptions::default().set_stored());
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
let first_time_stamp = chrono::Utc::now(); let first_time_stamp = chrono::Utc::now();
index_writer.add_document( index_writer.add_document(
doc!(date_field=>first_time_stamp, date_field=>first_time_stamp, time_i=>1i64), doc!(date_field=>first_time_stamp, date_field=>first_time_stamp, time_i=>1i64),
@@ -100,6 +100,7 @@ mod tests {
.get_first(date_field) .get_first(date_field)
.expect("cannot find value") .expect("cannot find value")
.date_value() .date_value()
.unwrap()
.timestamp(), .timestamp(),
first_time_stamp.timestamp() first_time_stamp.timestamp()
); );
@@ -108,7 +109,7 @@ mod tests {
.get_first(time_i) .get_first(time_i)
.expect("cannot find value") .expect("cannot find value")
.i64_value(), .i64_value(),
1i64 Some(1i64)
); );
} }
} }
@@ -131,6 +132,7 @@ mod tests {
.get_first(date_field) .get_first(date_field)
.expect("cannot find value") .expect("cannot find value")
.date_value() .date_value()
.unwrap()
.timestamp(), .timestamp(),
two_secs_ahead.timestamp() two_secs_ahead.timestamp()
); );
@@ -139,7 +141,7 @@ mod tests {
.get_first(time_i) .get_first(time_i)
.expect("cannot find value") .expect("cannot find value")
.i64_value(), .i64_value(),
3i64 Some(3i64)
); );
} }
} }
@@ -186,7 +188,7 @@ mod tests {
); );
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.add_document(doc!(field=> 1i64, field => 3i64)); index_writer.add_document(doc!(field=> 1i64, field => 3i64));
index_writer.add_document(doc!()); index_writer.add_document(doc!());
index_writer.add_document(doc!(field=> -4i64)); index_writer.add_document(doc!(field=> -4i64));
@@ -197,22 +199,14 @@ mod tests {
     let segment_reader = searcher.segment_reader(0);
     let mut vals = Vec::new();
     let multi_value_reader = segment_reader.fast_fields().i64s(field).unwrap();
-    {
-        multi_value_reader.get_vals(2, &mut vals);
-        assert_eq!(&vals, &[-4i64]);
-    }
-    {
-        multi_value_reader.get_vals(0, &mut vals);
-        assert_eq!(&vals, &[1i64, 3i64]);
-    }
-    {
-        multi_value_reader.get_vals(1, &mut vals);
-        assert!(vals.is_empty());
-    }
-    {
-        multi_value_reader.get_vals(3, &mut vals);
-        assert_eq!(&vals, &[-5i64, -20i64, 1i64]);
-    }
+    multi_value_reader.get_vals(2, &mut vals);
+    assert_eq!(&vals, &[-4i64]);
+    multi_value_reader.get_vals(0, &mut vals);
+    assert_eq!(&vals, &[1i64, 3i64]);
+    multi_value_reader.get_vals(1, &mut vals);
+    assert!(vals.is_empty());
+    multi_value_reader.get_vals(3, &mut vals);
+    assert_eq!(&vals, &[-5i64, -20i64, 1i64]);
 }
#[test] #[test]
#[ignore] #[ignore]
@@ -221,7 +215,7 @@ mod tests {
let field = schema_builder.add_facet_field("facetfield"); let field = schema_builder.add_facet_field("facetfield");
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
for i in 0..100_000 { for i in 0..100_000 {
index_writer.add_document(doc!(field=> Facet::from(format!("/lang/{}", i).as_str()))); index_writer.add_document(doc!(field=> Facet::from(format!("/lang/{}", i).as_str())));
} }

View File

@@ -74,7 +74,7 @@ mod tests {
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index let mut index_writer = index
.writer_with_num_threads(1, 30_000_000) .writer_for_tests()
.expect("Failed to create index writer."); .expect("Failed to create index writer.");
index_writer.add_document(doc!( index_writer.add_document(doc!(
facet_field => Facet::from("/category/cat2"), facet_field => Facet::from("/category/cat2"),

View File

@@ -6,7 +6,6 @@ use crate::schema::{Document, Field};
use crate::termdict::TermOrdinal; use crate::termdict::TermOrdinal;
use crate::DocId; use crate::DocId;
use fnv::FnvHashMap; use fnv::FnvHashMap;
use itertools::Itertools;
use std::io; use std::io;
/// Writer for multi-valued (as in, more than one value per document) /// Writer for multi-valued (as in, more than one value per document)
@@ -144,15 +143,15 @@ impl MultiValueIntFastFieldWriter {
                 .iter()
                 .map(|val| *mapping.get(val).expect("Missing term ordinal"));
             doc_vals.extend(remapped_vals);
-            doc_vals.sort();
+            doc_vals.sort_unstable();
             for &val in &doc_vals {
                 value_serializer.add_val(val)?;
             }
         }
     }
     None => {
-        let val_min_max = self.vals.iter().cloned().minmax();
-        let (val_min, val_max) = val_min_max.into_option().unwrap_or((0u64, 0u64));
+        let val_min_max = crate::common::minmax(self.vals.iter().cloned());
+        let (val_min, val_max) = val_min_max.unwrap_or((0u64, 0u64));
         value_serializer =
             serializer.new_u64_fast_field_with_idx(self.field, val_min, val_max, 1)?;
         for &val in &self.vals {
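The body of `crate::common::minmax` is not shown in this diff; a plausible single-pass equivalent of the dropped `itertools` call might look like the following sketch.

// Sketch only: returns None for an empty iterator, otherwise (min, max) in one pass.
fn minmax<I, T>(mut it: I) -> Option<(T, T)>
where
    I: Iterator<Item = T>,
    T: Copy + PartialOrd,
{
    let first = it.next()?;
    let mut min = first;
    let mut max = first;
    for v in it {
        if v < min {
            min = v;
        }
        if v > max {
            max = v;
        }
    }
    Some((min, max))
}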

View File

@@ -3,13 +3,12 @@ use crate::common::bitpacker::BitUnpacker;
use crate::common::compute_num_bits; use crate::common::compute_num_bits;
use crate::common::BinarySerializable; use crate::common::BinarySerializable;
use crate::common::CompositeFile; use crate::common::CompositeFile;
use crate::directory::ReadOnlySource; use crate::directory::FileSlice;
use crate::directory::{Directory, RAMDirectory, WritePtr}; use crate::directory::{Directory, RAMDirectory, WritePtr};
use crate::fastfield::{FastFieldSerializer, FastFieldsWriter}; use crate::fastfield::{FastFieldSerializer, FastFieldsWriter};
use crate::schema::Schema; use crate::schema::Schema;
use crate::schema::FAST; use crate::schema::FAST;
use crate::DocId; use crate::DocId;
use owning_ref::OwningRef;
use std::collections::HashMap; use std::collections::HashMap;
use std::marker::PhantomData; use std::marker::PhantomData;
use std::path::Path; use std::path::Path;
@@ -20,34 +19,27 @@ use std::path::Path;
 /// fast field is required.
 #[derive(Clone)]
 pub struct FastFieldReader<Item: FastValue> {
-    bit_unpacker: BitUnpacker<OwningRef<ReadOnlySource, [u8]>>,
+    bit_unpacker: BitUnpacker,
     min_value_u64: u64,
     max_value_u64: u64,
     _phantom: PhantomData<Item>,
 }

 impl<Item: FastValue> FastFieldReader<Item> {
-    /// Opens a fast field given a source.
-    pub fn open(data: ReadOnlySource) -> Self {
-        let min_value: u64;
-        let amplitude: u64;
-        {
-            let mut cursor = data.as_slice();
-            min_value =
-                u64::deserialize(&mut cursor).expect("Failed to read the min_value of fast field.");
-            amplitude =
-                u64::deserialize(&mut cursor).expect("Failed to read the amplitude of fast field.");
-        }
+    /// Opens a fast field given a file.
+    pub fn open(file: FileSlice) -> crate::Result<Self> {
+        let mut bytes = file.read_bytes()?;
+        let min_value = u64::deserialize(&mut bytes)?;
+        let amplitude = u64::deserialize(&mut bytes)?;
         let max_value = min_value + amplitude;
         let num_bits = compute_num_bits(amplitude);
-        let owning_ref = OwningRef::new(data).map(|data| &data[16..]);
-        let bit_unpacker = BitUnpacker::new(owning_ref, num_bits);
-        FastFieldReader {
+        let bit_unpacker = BitUnpacker::new(bytes, num_bits);
+        Ok(FastFieldReader {
             min_value_u64: min_value,
             max_value_u64: max_value,
             bit_unpacker,
             _phantom: PhantomData,
-        }
+        })
     }
pub(crate) fn into_u64_reader(self) -> FastFieldReader<u64> { pub(crate) fn into_u64_reader(self) -> FastFieldReader<u64> {
@@ -135,7 +127,7 @@ impl<Item: FastValue> From<Vec<Item>> for FastFieldReader<Item> {
let field = schema_builder.add_u64_field("field", FAST); let field = schema_builder.add_u64_field("field", FAST);
let schema = schema_builder.build(); let schema = schema_builder.build();
let path = Path::new("__dummy__"); let path = Path::new("__dummy__");
let mut directory: RAMDirectory = RAMDirectory::create(); let directory: RAMDirectory = RAMDirectory::create();
{ {
let write: WritePtr = directory let write: WritePtr = directory
.open_write(path) .open_write(path)
@@ -157,12 +149,11 @@ impl<Item: FastValue> From<Vec<Item>> for FastFieldReader<Item> {
serializer.close().unwrap(); serializer.close().unwrap();
} }
-        let source = directory.open_read(path).expect("Failed to open the file");
-        let composite_file =
-            CompositeFile::open(&source).expect("Failed to read the composite file");
-        let field_source = composite_file
+        let file = directory.open_read(path).expect("Failed to open the file");
+        let composite_file = CompositeFile::open(&file).expect("Failed to read the composite file");
+        let field_file = composite_file
             .open_read(field)
             .expect("File component not found");
-        FastFieldReader::open(field_source)
+        FastFieldReader::open(field_file).unwrap()
} }
} }
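Since `FastFieldReader::open` is now fallible, call sites thread `?` through instead of panicking inside the reader. A sketch mirroring the tests above; the helper name is illustrative.

fn read_first_value(directory: &RAMDirectory, path: &Path, field: Field) -> crate::Result<u64> {
    let file = directory.open_read(path)?;
    let composite_file = CompositeFile::open(&file)?;
    let field_file = composite_file
        .open_read(field)
        .expect("field not found in composite file");
    let fast_field_reader = FastFieldReader::<u64>::open(field_file)?;
    Ok(fast_field_reader.get(0u32))
}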

View File

@@ -68,45 +68,52 @@ impl FastFieldReaders {
}; };
for (field, field_entry) in schema.fields() { for (field, field_entry) in schema.fields() {
let field_type = field_entry.field_type(); let field_type = field_entry.field_type();
if field_type == &FieldType::Bytes { if let FieldType::Bytes(bytes_option) = field_type {
let idx_reader = fast_fields_composite if !bytes_option.is_fast() {
continue;
}
let fast_field_idx_file = fast_fields_composite
.open_read_with_idx(field, 0) .open_read_with_idx(field, 0)
.ok_or_else(|| FastFieldNotAvailableError::new(field_entry)) .ok_or_else(|| FastFieldNotAvailableError::new(field_entry))?;
.map(FastFieldReader::open)?; let idx_reader = FastFieldReader::open(fast_field_idx_file)?;
let data = fast_fields_composite let data = fast_fields_composite
.open_read_with_idx(field, 1) .open_read_with_idx(field, 1)
.ok_or_else(|| FastFieldNotAvailableError::new(field_entry))?; .ok_or_else(|| FastFieldNotAvailableError::new(field_entry))?;
let bytes_fast_field_reader = BytesFastFieldReader::open(idx_reader, data)?;
fast_field_readers fast_field_readers
.fast_bytes .fast_bytes
.insert(field, BytesFastFieldReader::open(idx_reader, data)); .insert(field, bytes_fast_field_reader);
} else if let Some((fast_type, cardinality)) = type_and_cardinality(field_type) { } else if let Some((fast_type, cardinality)) = type_and_cardinality(field_type) {
match cardinality { match cardinality {
Cardinality::SingleValue => { Cardinality::SingleValue => {
if let Some(fast_field_data) = fast_fields_composite.open_read(field) { if let Some(fast_field_data) = fast_fields_composite.open_read(field) {
match fast_type { match fast_type {
FastType::U64 => { FastType::U64 => {
let fast_field_reader = FastFieldReader::open(fast_field_data); let fast_field_reader = FastFieldReader::open(fast_field_data)?;
fast_field_readers fast_field_readers
.fast_field_u64 .fast_field_u64
.insert(field, fast_field_reader); .insert(field, fast_field_reader);
} }
FastType::I64 => { FastType::I64 => {
fast_field_readers.fast_field_i64.insert( let fast_field_reader =
field, FastFieldReader::open(fast_field_data.clone())?;
FastFieldReader::open(fast_field_data.clone()), fast_field_readers
); .fast_field_i64
.insert(field, fast_field_reader);
} }
FastType::F64 => { FastType::F64 => {
fast_field_readers.fast_field_f64.insert( let fast_field_reader =
field, FastFieldReader::open(fast_field_data.clone())?;
FastFieldReader::open(fast_field_data.clone()), fast_field_readers
); .fast_field_f64
.insert(field, fast_field_reader);
} }
FastType::Date => { FastType::Date => {
fast_field_readers.fast_field_date.insert( let fast_field_reader =
field, FastFieldReader::open(fast_field_data.clone())?;
FastFieldReader::open(fast_field_data.clone()), fast_field_readers
); .fast_field_date
.insert(field, fast_field_reader);
} }
} }
} else { } else {
@@ -117,10 +124,10 @@ impl FastFieldReaders {
let idx_opt = fast_fields_composite.open_read_with_idx(field, 0); let idx_opt = fast_fields_composite.open_read_with_idx(field, 0);
let data_opt = fast_fields_composite.open_read_with_idx(field, 1); let data_opt = fast_fields_composite.open_read_with_idx(field, 1);
if let (Some(fast_field_idx), Some(fast_field_data)) = (idx_opt, data_opt) { if let (Some(fast_field_idx), Some(fast_field_data)) = (idx_opt, data_opt) {
let idx_reader = FastFieldReader::open(fast_field_idx); let idx_reader = FastFieldReader::open(fast_field_idx)?;
match fast_type { match fast_type {
FastType::I64 => { FastType::I64 => {
let vals_reader = FastFieldReader::open(fast_field_data); let vals_reader = FastFieldReader::open(fast_field_data)?;
let multivalued_int_fast_field = let multivalued_int_fast_field =
MultiValueIntFastFieldReader::open(idx_reader, vals_reader); MultiValueIntFastFieldReader::open(idx_reader, vals_reader);
fast_field_readers fast_field_readers
@@ -128,7 +135,7 @@ impl FastFieldReaders {
.insert(field, multivalued_int_fast_field); .insert(field, multivalued_int_fast_field);
} }
FastType::U64 => { FastType::U64 => {
let vals_reader = FastFieldReader::open(fast_field_data); let vals_reader = FastFieldReader::open(fast_field_data)?;
let multivalued_int_fast_field = let multivalued_int_fast_field =
MultiValueIntFastFieldReader::open(idx_reader, vals_reader); MultiValueIntFastFieldReader::open(idx_reader, vals_reader);
fast_field_readers fast_field_readers
@@ -136,7 +143,7 @@ impl FastFieldReaders {
.insert(field, multivalued_int_fast_field); .insert(field, multivalued_int_fast_field);
} }
FastType::F64 => { FastType::F64 => {
let vals_reader = FastFieldReader::open(fast_field_data); let vals_reader = FastFieldReader::open(fast_field_data)?;
let multivalued_int_fast_field = let multivalued_int_fast_field =
MultiValueIntFastFieldReader::open(idx_reader, vals_reader); MultiValueIntFastFieldReader::open(idx_reader, vals_reader);
fast_field_readers fast_field_readers
@@ -144,7 +151,7 @@ impl FastFieldReaders {
.insert(field, multivalued_int_fast_field); .insert(field, multivalued_int_fast_field);
} }
FastType::Date => { FastType::Date => {
let vals_reader = FastFieldReader::open(fast_field_data); let vals_reader = FastFieldReader::open(fast_field_data)?;
let multivalued_int_fast_field = let multivalued_int_fast_field =
MultiValueIntFastFieldReader::open(idx_reader, vals_reader); MultiValueIntFastFieldReader::open(idx_reader, vals_reader);
fast_field_readers fast_field_readers

View File

@@ -33,7 +33,7 @@ impl FastFieldsWriter {
let mut bytes_value_writers = Vec::new(); let mut bytes_value_writers = Vec::new();
for (field, field_entry) in schema.fields() { for (field, field_entry) in schema.fields() {
match *field_entry.field_type() { match field_entry.field_type() {
FieldType::I64(ref int_options) FieldType::I64(ref int_options)
| FieldType::U64(ref int_options) | FieldType::U64(ref int_options)
| FieldType::F64(ref int_options) | FieldType::F64(ref int_options)
@@ -56,9 +56,11 @@ impl FastFieldsWriter {
let fast_field_writer = MultiValueIntFastFieldWriter::new(field, true); let fast_field_writer = MultiValueIntFastFieldWriter::new(field, true);
multi_values_writers.push(fast_field_writer); multi_values_writers.push(fast_field_writer);
} }
-            FieldType::Bytes => {
-                let fast_field_writer = BytesFastFieldWriter::new(field);
-                bytes_value_writers.push(fast_field_writer);
+            FieldType::Bytes(bytes_option) => {
+                if bytes_option.is_fast() {
+                    let fast_field_writer = BytesFastFieldWriter::new(field);
+                    bytes_value_writers.push(fast_field_writer);
+                }
             }
_ => {} _ => {}
} }
@@ -126,6 +128,7 @@ impl FastFieldsWriter {
for field_writer in &self.single_value_writers { for field_writer in &self.single_value_writers {
field_writer.serialize(serializer)?; field_writer.serialize(serializer)?;
} }
for field_writer in &self.multi_values_writers { for field_writer in &self.multi_values_writers {
let field = field_writer.field(); let field = field_writer.field();
field_writer.serialize(serializer, mapping.get(&field))?; field_writer.serialize(serializer, mapping.get(&field))?;

View File

@@ -21,7 +21,7 @@ mod reader;
mod serializer; mod serializer;
mod writer; mod writer;
pub use self::reader::FieldNormReader; pub use self::reader::{FieldNormReader, FieldNormReaders};
pub use self::serializer::FieldNormsSerializer; pub use self::serializer::FieldNormsSerializer;
pub use self::writer::FieldNormsWriter; pub use self::writer::FieldNormsWriter;

View File

@@ -1,6 +1,47 @@
use super::{fieldnorm_to_id, id_to_fieldnorm}; use super::{fieldnorm_to_id, id_to_fieldnorm};
use crate::directory::ReadOnlySource; use crate::common::CompositeFile;
use crate::directory::FileSlice;
use crate::directory::OwnedBytes;
use crate::schema::Field;
use crate::space_usage::PerFieldSpaceUsage;
use crate::DocId; use crate::DocId;
use std::sync::Arc;
/// Reader for the fieldnorm (for each document, the number of tokens indexed in the
/// field) of all indexed fields in the index.
///
/// Each fieldnorm is approximately compressed over one byte. We refer to this byte as
/// `fieldnorm_id`.
/// The mapping from `fieldnorm` to `fieldnorm_id` is given by monotonic.
#[derive(Clone)]
pub struct FieldNormReaders {
data: Arc<CompositeFile>,
}
impl FieldNormReaders {
/// Creates a field norm reader.
pub fn open(file: FileSlice) -> crate::Result<FieldNormReaders> {
let data = CompositeFile::open(&file)?;
Ok(FieldNormReaders {
data: Arc::new(data),
})
}
/// Returns the FieldNormReader for a specific field.
pub fn get_field(&self, field: Field) -> crate::Result<Option<FieldNormReader>> {
if let Some(file) = self.data.open_read(field) {
let fieldnorm_reader = FieldNormReader::open(file)?;
Ok(Some(fieldnorm_reader))
} else {
Ok(None)
}
}
/// Return a break down of the space usage per field.
pub fn space_usage(&self) -> PerFieldSpaceUsage {
self.data.space_usage()
}
}
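A sketch of the new per-segment entry point, using only the methods introduced above; the helper and local names are illustrative.

fn title_fieldnorm(fieldnorm_file: FileSlice, title: Field, doc: DocId) -> crate::Result<Option<u32>> {
    // One CompositeFile holds the fieldnorms of all fields in the segment.
    let readers = FieldNormReaders::open(fieldnorm_file)?;
    // `get_field` yields None when the field has no fieldnorm data.
    Ok(readers.get_field(title)?.map(|reader| reader.fieldnorm(doc)))
}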
/// Reads the fieldnorm associated to a document. /// Reads the fieldnorm associated to a document.
/// The fieldnorm represents the length associated to /// The fieldnorm represents the length associated to
@@ -8,7 +49,7 @@ use crate::DocId;
 ///
 /// This metric is important to compute the score of a
 /// document : a document having a query word in one of its short fields
 /// (e.g. title) is likely to be more relevant than in one of its longer fields
 /// (e.g. body).
 ///
 /// tantivy encodes `fieldnorm` on one byte with some precision loss,
@@ -19,14 +60,32 @@ use crate::DocId;
/// Apart from compression, this scale also makes it possible to /// Apart from compression, this scale also makes it possible to
/// precompute computationally expensive functions of the fieldnorm /// precompute computationally expensive functions of the fieldnorm
/// in a very short array. /// in a very short array.
-pub struct FieldNormReader {
-    data: ReadOnlySource,
+#[derive(Clone)]
+pub enum FieldNormReader {
+    ConstFieldNorm { fieldnorm_id: u8, num_docs: u32 },
+    OneByte(OwnedBytes),
 }

 impl FieldNormReader {
-    /// Opens a field norm reader given its data source.
-    pub fn open(data: ReadOnlySource) -> Self {
-        FieldNormReader { data }
+    pub fn const_fieldnorm_id(fieldnorm_id: u8, num_docs: u32) -> FieldNormReader {
+        FieldNormReader::ConstFieldNorm {
+            fieldnorm_id,
+            num_docs,
+        }
+    }
+
+    /// Opens a field norm reader given its file.
+    pub fn open(fieldnorm_file: FileSlice) -> crate::Result<Self> {
+        let data = fieldnorm_file.read_bytes()?;
+        Ok(FieldNormReader::OneByte(data))
+    }
+
+    /// Returns the number of documents in this segment.
+    pub fn num_docs(&self) -> u32 {
+        match self {
+            Self::ConstFieldNorm { num_docs, .. } => *num_docs,
+            FieldNormReader::OneByte(vals) => vals.len() as u32,
+        }
     }

     /// Returns the `fieldnorm` associated to a doc id.
@@ -38,6 +97,7 @@ impl FieldNormReader {
     ///
     /// The fieldnorm is effectively decoded from the
     /// `fieldnorm_id` by doing a simple table lookup.
+    #[inline(always)]
     pub fn fieldnorm(&self, doc_id: DocId) -> u32 {
         let fieldnorm_id = self.fieldnorm_id(doc_id);
         id_to_fieldnorm(fieldnorm_id)
@@ -46,8 +106,11 @@
     /// Returns the `fieldnorm_id` associated to a document.
     #[inline(always)]
     pub fn fieldnorm_id(&self, doc_id: DocId) -> u8 {
-        let fielnorms_data = self.data.as_slice();
-        fielnorms_data[doc_id as usize]
+        match self {
+            FieldNormReader::ConstFieldNorm { fieldnorm_id, .. } => *fieldnorm_id,
+            FieldNormReader::OneByte(data) => data.as_slice()[doc_id as usize],
+        }
     }

     /// Converts a `fieldnorm_id` into a fieldnorm.
@@ -62,18 +125,32 @@
     pub fn fieldnorm_to_id(fieldnorm: u32) -> u8 {
         fieldnorm_to_id(fieldnorm)
     }
+
+    #[cfg(test)]
+    pub fn for_test(field_norms: &[u32]) -> FieldNormReader {
+        let field_norms_id = field_norms
+            .iter()
+            .cloned()
+            .map(FieldNormReader::fieldnorm_to_id)
+            .collect::<Vec<u8>>();
+        let field_norms_data = OwnedBytes::new(field_norms_id);
+        FieldNormReader::OneByte(field_norms_data)
+    }
 }

 #[cfg(test)]
-impl From<Vec<u32>> for FieldNormReader {
-    fn from(field_norms: Vec<u32>) -> FieldNormReader {
-        let field_norms_id = field_norms
-            .into_iter()
-            .map(FieldNormReader::fieldnorm_to_id)
-            .collect::<Vec<u8>>();
-        let field_norms_data = ReadOnlySource::from(field_norms_id);
-        FieldNormReader {
-            data: field_norms_data,
-        }
+mod tests {
+    use crate::fieldnorm::FieldNormReader;
+
+    #[test]
+    fn test_from_fieldnorms_array() {
+        let fieldnorms = &[1, 2, 3, 4, 1_000_000];
+        let fieldnorm_reader = FieldNormReader::for_test(fieldnorms);
+        assert_eq!(fieldnorm_reader.num_docs(), 5);
+        assert_eq!(fieldnorm_reader.fieldnorm(0), 1);
+        assert_eq!(fieldnorm_reader.fieldnorm(1), 2);
+        assert_eq!(fieldnorm_reader.fieldnorm(2), 3);
+        assert_eq!(fieldnorm_reader.fieldnorm(3), 4);
+        assert_eq!(fieldnorm_reader.fieldnorm(4), 983_064);
     }
 }
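A small sketch of the constant variant added above: every document reports the same `fieldnorm_id`, regardless of the doc id passed in.

let reader = FieldNormReader::const_fieldnorm_id(10u8, 1_000u32);
assert_eq!(reader.num_docs(), 1_000);
assert_eq!(reader.fieldnorm_id(0), 10u8);
assert_eq!(reader.fieldnorm_id(999), 10u8);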

View File

@@ -13,7 +13,7 @@ use std::io;
/// byte per document per field. /// byte per document per field.
pub struct FieldNormsWriter { pub struct FieldNormsWriter {
fields: Vec<Field>, fields: Vec<Field>,
fieldnorms_buffer: Vec<Vec<u8>>, fieldnorms_buffer: Vec<Option<Vec<u8>>>,
} }
impl FieldNormsWriter { impl FieldNormsWriter {
@@ -23,7 +23,7 @@ impl FieldNormsWriter {
schema schema
.fields() .fields()
.filter_map(|(field, field_entry)| { .filter_map(|(field, field_entry)| {
if field_entry.is_indexed() { if field_entry.has_fieldnorms() {
Some(field) Some(field)
} else { } else {
None None
@@ -36,15 +36,14 @@ impl FieldNormsWriter {
/// specified in the schema. /// specified in the schema.
pub fn for_schema(schema: &Schema) -> FieldNormsWriter { pub fn for_schema(schema: &Schema) -> FieldNormsWriter {
         let fields = FieldNormsWriter::fields_with_fieldnorm(schema);
-        let max_field = fields
-            .iter()
-            .map(Field::field_id)
-            .max()
-            .map(|max_field_id| max_field_id as usize + 1)
-            .unwrap_or(0);
+        let num_fields = schema.num_fields();
+        let mut fieldnorms_buffer: Vec<Option<Vec<u8>>> = vec![None; num_fields];
+        for field in &fields {
+            fieldnorms_buffer[field.field_id() as usize] = Some(Vec::new());
+        }
         FieldNormsWriter {
             fields,
-            fieldnorms_buffer: (0..max_field).map(|_| Vec::new()).collect::<Vec<_>>(),
+            fieldnorms_buffer,
         }
     }
@@ -53,8 +52,10 @@ impl FieldNormsWriter {
/// ///
/// Will extend with 0-bytes for documents that have not been seen. /// Will extend with 0-bytes for documents that have not been seen.
pub fn fill_up_to_max_doc(&mut self, max_doc: DocId) { pub fn fill_up_to_max_doc(&mut self, max_doc: DocId) {
for field in self.fields.iter() { for buffer_opt in self.fieldnorms_buffer.iter_mut() {
self.fieldnorms_buffer[field.field_id() as usize].resize(max_doc as usize, 0u8); if let Some(buffer) = buffer_opt {
buffer.resize(max_doc as usize, 0u8);
}
} }
} }
@@ -67,22 +68,24 @@ impl FieldNormsWriter {
/// * field - the field being set /// * field - the field being set
/// * fieldnorm - the number of terms present in document `doc` in field `field` /// * fieldnorm - the number of terms present in document `doc` in field `field`
pub fn record(&mut self, doc: DocId, field: Field, fieldnorm: u32) { pub fn record(&mut self, doc: DocId, field: Field, fieldnorm: u32) {
-        let fieldnorm_buffer: &mut Vec<u8> = &mut self.fieldnorms_buffer[field.field_id() as usize];
-        assert!(
-            fieldnorm_buffer.len() <= doc as usize,
-            "Cannot register a given fieldnorm twice"
-        );
-        // we fill intermediary `DocId` as having a fieldnorm of 0.
-        fieldnorm_buffer.resize(doc as usize + 1, 0u8);
-        fieldnorm_buffer[doc as usize] = fieldnorm_to_id(fieldnorm);
+        if let Some(fieldnorm_buffer) = self.fieldnorms_buffer[field.field_id() as usize].as_mut() {
+            assert!(
+                fieldnorm_buffer.len() <= doc as usize,
+                "Cannot register a given fieldnorm twice"
+            );
+            // we fill intermediary `DocId` as having a fieldnorm of 0.
+            fieldnorm_buffer.resize(doc as usize + 1, 0u8);
+            fieldnorm_buffer[doc as usize] = fieldnorm_to_id(fieldnorm);
+        }
     }

     /// Serialize the seen fieldnorm values to the serializer for all fields.
-    pub fn serialize(&self, fieldnorms_serializer: &mut FieldNormsSerializer) -> io::Result<()> {
+    pub fn serialize(&self, mut fieldnorms_serializer: FieldNormsSerializer) -> io::Result<()> {
         for &field in self.fields.iter() {
-            let fieldnorm_values: &[u8] = &self.fieldnorms_buffer[field.field_id() as usize][..];
-            fieldnorms_serializer.serialize_field(field, fieldnorm_values)?;
+            if let Some(buffer) = self.fieldnorms_buffer[field.field_id() as usize].as_ref() {
+                fieldnorms_serializer.serialize_field(field, &buffer[..])?;
+            }
         }
+        fieldnorms_serializer.close()?;
         Ok(())
     }
} }

View File

@@ -10,7 +10,7 @@ use crate::core::SegmentMeta;
use crate::core::SegmentReader; use crate::core::SegmentReader;
use crate::directory::TerminatingWrite; use crate::directory::TerminatingWrite;
use crate::directory::{DirectoryLock, GarbageCollectionResult}; use crate::directory::{DirectoryLock, GarbageCollectionResult};
use crate::docset::DocSet; use crate::docset::{DocSet, TERMINATED};
use crate::error::TantivyError; use crate::error::TantivyError;
use crate::fastfield::write_delete_bitset; use crate::fastfield::write_delete_bitset;
use crate::indexer::delete_queue::{DeleteCursor, DeleteQueue}; use crate::indexer::delete_queue::{DeleteCursor, DeleteQueue};
@@ -108,19 +108,19 @@ fn compute_deleted_bitset(
// Limit doc helps identify the first document // Limit doc helps identify the first document
// that may be affected by the delete operation. // that may be affected by the delete operation.
let limit_doc = doc_opstamps.compute_doc_limit(delete_op.opstamp); let limit_doc = doc_opstamps.compute_doc_limit(delete_op.opstamp);
-        let inverted_index = segment_reader.inverted_index(delete_op.term.field());
+        let inverted_index = segment_reader.inverted_index(delete_op.term.field())?;
         if let Some(mut docset) =
-            inverted_index.read_postings(&delete_op.term, IndexRecordOption::Basic)
+            inverted_index.read_postings(&delete_op.term, IndexRecordOption::Basic)?
         {
-            while docset.advance() {
-                let deleted_doc = docset.doc();
+            let mut deleted_doc = docset.doc();
+            while deleted_doc != TERMINATED {
                 if deleted_doc < limit_doc {
                     delete_bitset.insert(deleted_doc);
                     might_have_changed = true;
                 }
+                deleted_doc = docset.advance();
             }
         }
         delete_cursor.advance();
     }
     Ok(might_have_changed)
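The new cursor convention shown above, in isolation: `doc()` stays valid until it equals `TERMINATED`, and `advance()` returns the next doc id directly. A hedged sketch; the helper is illustrative.

use crate::docset::{DocSet, TERMINATED};
use crate::DocId;

fn collect_docs<D: DocSet>(mut docset: D) -> Vec<DocId> {
    let mut docs = Vec::new();
    let mut doc = docset.doc();
    while doc != TERMINATED {
        docs.push(doc);
        doc = docset.advance();
    }
    docs
}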
@@ -346,7 +346,7 @@ impl IndexWriter {
fn drop_sender(&mut self) { fn drop_sender(&mut self) {
let (sender, _receiver) = channel::bounded(1); let (sender, _receiver) = channel::bounded(1);
mem::replace(&mut self.operation_sender, sender); self.operation_sender = sender;
} }
/// If there are some merging threads, blocks until they all finish their work and /// If there are some merging threads, blocks until they all finish their work and
@@ -536,6 +536,7 @@ impl IndexWriter {
/// when no documents are remaining. /// when no documents are remaining.
/// ///
/// Returns the former segment_ready channel. /// Returns the former segment_ready channel.
#[allow(unused_must_use)]
fn recreate_document_channel(&mut self) -> OperationReceiver { fn recreate_document_channel(&mut self) -> OperationReceiver {
let (document_sender, document_receiver): (OperationSender, OperationReceiver) = let (document_sender, document_receiver): (OperationSender, OperationReceiver) =
channel::bounded(PIPELINE_MAX_SIZE_IN_DOCS); channel::bounded(PIPELINE_MAX_SIZE_IN_DOCS);
@@ -575,7 +576,7 @@ impl IndexWriter {
// //
// This will drop the document queue, and the thread // This will drop the document queue, and the thread
// should terminate. // should terminate.
mem::replace(self, new_index_writer); *self = new_index_writer;
// Drains the document receiver pipeline : // Drains the document receiver pipeline :
// Workers don't need to index the pending documents. // Workers don't need to index the pending documents.
@@ -799,7 +800,7 @@ mod tests {
let mut schema_builder = schema::Schema::builder(); let mut schema_builder = schema::Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT); let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build()); let index = Index::create_in_ram(schema_builder.build());
let index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let index_writer = index.writer_for_tests().unwrap();
let operations = vec![ let operations = vec![
UserOperation::Add(doc!(text_field=>"a")), UserOperation::Add(doc!(text_field=>"a")),
UserOperation::Add(doc!(text_field=>"b")), UserOperation::Add(doc!(text_field=>"b")),
@@ -814,7 +815,7 @@ mod tests {
let text_field = schema_builder.add_text_field("text", schema::TEXT); let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build()); let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.add_document(doc!(text_field => "hello1")); index_writer.add_document(doc!(text_field => "hello1"));
index_writer.add_document(doc!(text_field => "hello2")); index_writer.add_document(doc!(text_field => "hello2"));
assert!(index_writer.commit().is_ok()); assert!(index_writer.commit().is_ok());
@@ -863,7 +864,7 @@ mod tests {
.reload_policy(ReloadPolicy::Manual) .reload_policy(ReloadPolicy::Manual)
.try_into() .try_into()
.unwrap(); .unwrap();
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
let a_term = Term::from_field_text(text_field, "a"); let a_term = Term::from_field_text(text_field, "a");
let b_term = Term::from_field_text(text_field, "b"); let b_term = Term::from_field_text(text_field, "b");
let operations = vec![ let operations = vec![
@@ -925,8 +926,8 @@ mod tests {
fn test_lockfile_already_exists_error_msg() { fn test_lockfile_already_exists_error_msg() {
let schema_builder = schema::Schema::builder(); let schema_builder = schema::Schema::builder();
let index = Index::create_in_ram(schema_builder.build()); let index = Index::create_in_ram(schema_builder.build());
let _index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let _index_writer = index.writer_for_tests().unwrap();
match index.writer_with_num_threads(1, 3_000_000) { match index.writer_for_tests() {
Err(err) => { Err(err) => {
let err_msg = err.to_string(); let err_msg = err.to_string();
assert!(err_msg.contains("already an `IndexWriter`")); assert!(err_msg.contains("already an `IndexWriter`"));
@@ -978,7 +979,7 @@ mod tests {
         let num_docs_containing = |s: &str| {
             let searcher = reader.searcher();
             let term = Term::from_field_text(text_field, s);
-            searcher.doc_freq(&term)
+            searcher.doc_freq(&term).unwrap()
         };
         {
@@ -1014,7 +1015,7 @@ mod tests {
             .unwrap();
         let num_docs_containing = |s: &str| {
             let term_a = Term::from_field_text(text_field, s);
-            reader.searcher().doc_freq(&term_a)
+            reader.searcher().doc_freq(&term_a).unwrap()
         };
         {
             // writing the segment
@@ -1109,6 +1110,7 @@ mod tests {
                 .unwrap()
                 .searcher()
                 .doc_freq(&term_a)
+                .unwrap()
         };
         assert_eq!(num_docs_containing("a"), 0);
         assert_eq!(num_docs_containing("b"), 100);
@@ -1128,7 +1130,7 @@ mod tests {
             reader.reload().unwrap();
             let searcher = reader.searcher();
             let term = Term::from_field_text(text_field, s);
-            searcher.doc_freq(&term)
+            searcher.doc_freq(&term).unwrap()
         };
         let mut index_writer = index.writer_with_num_threads(4, 12_000_000).unwrap();
@@ -1179,7 +1181,15 @@ mod tests {
         // working with an empty index == no documents
         let term_b = Term::from_field_text(text_field, "b");
-        assert_eq!(index.reader().unwrap().searcher().doc_freq(&term_b), 0);
+        assert_eq!(
+            index
+                .reader()
+                .unwrap()
+                .searcher()
+                .doc_freq(&term_b)
+                .unwrap(),
+            0
+        );
     }

     #[test]
@@ -1199,7 +1209,15 @@ mod tests {
         let term_a = Term::from_field_text(text_field, "a");
         // expect the document with that term to be in the index
-        assert_eq!(index.reader().unwrap().searcher().doc_freq(&term_a), 1);
+        assert_eq!(
+            index
+                .reader()
+                .unwrap()
+                .searcher()
+                .doc_freq(&term_a)
+                .unwrap(),
+            1
+        );
     }

     #[test]
@@ -1225,7 +1243,15 @@ mod tests {
         // Find original docs in the index
         let term_a = Term::from_field_text(text_field, "a");
         // expect the document with that term to be in the index
-        assert_eq!(index.reader().unwrap().searcher().doc_freq(&term_a), 1);
+        assert_eq!(
+            index
+                .reader()
+                .unwrap()
+                .searcher()
+                .doc_freq(&term_a)
+                .unwrap(),
+            1
+        );
     }

     #[test]
@@ -1260,7 +1286,7 @@ mod tests {
         let idfield = schema_builder.add_text_field("id", STRING);
         schema_builder.add_text_field("optfield", STRING);
         let index = Index::create_in_ram(schema_builder.build());
-        let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+        let mut index_writer = index.writer_for_tests().unwrap();
        index_writer.add_document(doc!(idfield=>"myid"));
        let commit = index_writer.commit();
        assert!(commit.is_ok());
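The hunks above add `.unwrap()` because `Searcher::doc_freq` now returns a `Result` instead of a bare count. A hedged sketch of a caller propagating the error with `?` instead; the schema and field name are illustrative and not taken from this diff:

```rust
use tantivy::schema::{Schema, TEXT};
use tantivy::{doc, Index, Term};

fn main() -> tantivy::Result<()> {
    // Example index with a single text field (names are illustrative).
    let mut schema_builder = Schema::builder();
    let text_field = schema_builder.add_text_field("text", TEXT);
    let index = Index::create_in_ram(schema_builder.build());

    let mut index_writer = index.writer(3_000_000)?;
    index_writer.add_document(doc!(text_field => "a b a"));
    index_writer.commit()?;

    let searcher = index.reader()?.searcher();
    let term_a = Term::from_field_text(text_field, "a");
    // `doc_freq` is now fallible: propagate with `?` rather than unwrapping.
    assert_eq!(searcher.doc_freq(&term_a)?, 1);
    Ok(())
}
```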

View File

@@ -54,10 +54,6 @@ impl LogMergePolicy {
 impl MergePolicy for LogMergePolicy {
     fn compute_merge_candidates(&self, segments: &[SegmentMeta]) -> Vec<MergeCandidate> {
-        if segments.is_empty() {
-            return Vec::new();
-        }
         let mut size_sorted_tuples = segments
             .iter()
             .map(SegmentMeta::num_docs)
@@ -67,27 +63,35 @@ impl MergePolicy for LogMergePolicy {
         size_sorted_tuples.sort_by(|x, y| y.1.cmp(&(x.1)));
-        if size_sorted_tuples.len() <= 1 {
-            return Vec::new();
-        }
         let size_sorted_log_tuples: Vec<_> = size_sorted_tuples
             .into_iter()
             .map(|(ind, num_docs)| (ind, f64::from(self.clip_min_size(num_docs)).log2()))
             .collect();
-        let (first_ind, first_score) = size_sorted_log_tuples[0];
-        let mut current_max_log_size = first_score;
-        let mut levels = vec![vec![first_ind]];
-        for &(ind, score) in (&size_sorted_log_tuples).iter().skip(1) {
-            if score < (current_max_log_size - self.level_log_size) {
-                current_max_log_size = score;
-                levels.push(Vec::new());
-            }
-            levels.last_mut().unwrap().push(ind);
-        }
-        levels
-            .iter()
-            .filter(|level| level.len() >= self.min_merge_size)
-            .map(|ind_vec| MergeCandidate(ind_vec.iter().map(|&ind| segments[ind].id()).collect()))
-            .collect()
+        if let Some(&(first_ind, first_score)) = size_sorted_log_tuples.first() {
+            let mut current_max_log_size = first_score;
+            let mut levels = vec![vec![first_ind]];
+            for &(ind, score) in (&size_sorted_log_tuples).iter().skip(1) {
+                if score < (current_max_log_size - self.level_log_size) {
+                    current_max_log_size = score;
+                    levels.push(Vec::new());
+                }
+                levels.last_mut().unwrap().push(ind);
+            }
+            levels
+                .iter()
+                .filter(|level| level.len() >= self.min_merge_size)
+                .map(|ind_vec| {
+                    MergeCandidate(ind_vec.iter().map(|&ind| segments[ind].id()).collect())
+                })
+                .collect()
+        } else {
+            return vec![];
+        }
     }
 }
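For reference, a small standalone sketch of the level-grouping idea in the hunk above (not the tantivy implementation; `min_merge_size` filtering, `clip_min_size` and `MergeCandidate` construction are omitted): segments are sorted by size, and a new level opens whenever log2(num_docs) falls more than `level_log_size` below the current level's maximum.

```rust
fn group_into_levels(mut num_docs: Vec<u32>, level_log_size: f64) -> Vec<Vec<u32>> {
    // Largest segments first, like the size-sorted tuples above.
    num_docs.sort_by(|a, b| b.cmp(a));
    let mut levels: Vec<Vec<u32>> = Vec::new();
    let mut current_max_log_size = f64::MAX;
    for docs in num_docs {
        let score = f64::from(docs.max(1)).log2();
        if levels.is_empty() || score < current_max_log_size - level_log_size {
            // A segment far smaller than the current level opens a new level.
            levels.push(Vec::new());
            current_max_log_size = score;
        }
        levels.last_mut().unwrap().push(docs);
    }
    levels
}

fn main() {
    // The ~10k-doc segments form one level; the ~10-doc ones fall into a lower level.
    let levels = group_into_levels(vec![10_000, 9_000, 10, 12, 11_000], 0.75);
    assert_eq!(levels.len(), 2);
}
```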
@@ -179,6 +183,7 @@ mod tests {
         let result_list = test_merge_policy().compute_merge_candidates(&test_input);
         assert_eq!(result_list.len(), 2);
     }

     #[test]
     fn test_log_merge_policy_small_segments() {
         // segments under min_layer_size are merged together
@@ -194,6 +199,17 @@ mod tests {
         assert_eq!(result_list.len(), 1);
     }

+    #[test]
+    fn test_log_merge_policy_all_segments_too_large_to_merge() {
+        let eight_large_segments: Vec<SegmentMeta> =
+            std::iter::repeat_with(|| create_random_segment_meta(100_001))
+                .take(8)
+                .collect();
+        assert!(test_merge_policy()
+            .compute_merge_candidates(&eight_large_segments)
+            .is_empty());
+    }
+
     #[test]
     fn test_large_merge_segments() {
         let test_input = vec![

File diff suppressed because it is too large

View File

@@ -29,8 +29,9 @@ pub use self::segment_writer::SegmentWriter;
 /// Alias for the default merge policy, which is the `LogMergePolicy`.
 pub type DefaultMergePolicy = LogMergePolicy;

+#[cfg(feature = "mmap")]
 #[cfg(test)]
-mod tests {
+mod tests_mmap {
     use crate::schema::{self, Schema};
     use crate::{Index, Term};
@@ -39,7 +40,7 @@ mod tests {
         let mut schema_builder = Schema::builder();
         let text_field = schema_builder.add_text_field("text", schema::TEXT);
         let index = Index::create_from_tempdir(schema_builder.build()).unwrap();
-        let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+        let mut index_writer = index.writer_for_tests().unwrap();
         // there must be one deleted document in the segment
         index_writer.add_document(doc!(text_field=>"b"));
         index_writer.delete_term(Term::from_field_text(text_field, "b"));

View File

@@ -8,15 +8,16 @@ use crate::store::StoreWriter;
 /// Segment serializer is in charge of laying out on disk
 /// the data accumulated and sorted by the `SegmentWriter`.
 pub struct SegmentSerializer {
+    segment: Segment,
     store_writer: StoreWriter,
     fast_field_serializer: FastFieldSerializer,
-    fieldnorms_serializer: FieldNormsSerializer,
+    fieldnorms_serializer: Option<FieldNormsSerializer>,
     postings_serializer: InvertedIndexSerializer,
 }

 impl SegmentSerializer {
     /// Creates a new `SegmentSerializer`.
-    pub fn for_segment(segment: &mut Segment) -> crate::Result<SegmentSerializer> {
+    pub fn for_segment(mut segment: Segment) -> crate::Result<SegmentSerializer> {
         let store_write = segment.open_write(SegmentComponent::STORE)?;
         let fast_field_write = segment.open_write(SegmentComponent::FASTFIELDS)?;
@@ -25,15 +26,20 @@ impl SegmentSerializer {
         let fieldnorms_write = segment.open_write(SegmentComponent::FIELDNORMS)?;
         let fieldnorms_serializer = FieldNormsSerializer::from_write(fieldnorms_write)?;
-        let postings_serializer = InvertedIndexSerializer::open(segment)?;
+        let postings_serializer = InvertedIndexSerializer::open(&mut segment)?;
         Ok(SegmentSerializer {
+            segment,
             store_writer: StoreWriter::new(store_write),
             fast_field_serializer,
-            fieldnorms_serializer,
+            fieldnorms_serializer: Some(fieldnorms_serializer),
             postings_serializer,
         })
     }

+    pub fn segment(&self) -> &Segment {
+        &self.segment
+    }
+
     /// Accessor to the `PostingsSerializer`.
     pub fn get_postings_serializer(&mut self) -> &mut InvertedIndexSerializer {
         &mut self.postings_serializer
@@ -44,9 +50,11 @@ impl SegmentSerializer {
         &mut self.fast_field_serializer
     }

-    /// Accessor to the field norm serializer.
-    pub fn get_fieldnorms_serializer(&mut self) -> &mut FieldNormsSerializer {
-        &mut self.fieldnorms_serializer
+    /// Extract the field norm serializer.
+    ///
+    /// Note the fieldnorms serializer can only be extracted once.
+    pub fn extract_fieldnorms_serializer(&mut self) -> Option<FieldNormsSerializer> {
+        self.fieldnorms_serializer.take()
     }

     /// Accessor to the `StoreWriter`.
@@ -55,11 +63,13 @@ impl SegmentSerializer {
     }

     /// Finalize the segment serialization.
-    pub fn close(self) -> crate::Result<()> {
+    pub fn close(mut self) -> crate::Result<()> {
+        if let Some(fieldnorms_serializer) = self.extract_fieldnorms_serializer() {
+            fieldnorms_serializer.close()?;
+        }
         self.fast_field_serializer.close()?;
         self.postings_serializer.close()?;
         self.store_writer.close()?;
-        self.fieldnorms_serializer.close()?;
         Ok(())
     }
 }
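The fieldnorms serializer is now stored as an `Option` and handed out through `Option::take`, so it can be consumed at most once and `close()` can skip a component that was already written. A minimal standalone sketch of that take-once accessor pattern (placeholder types, not the tantivy ones):

```rust
struct FieldNormsSerializer;

struct SegmentSerializer {
    fieldnorms_serializer: Option<FieldNormsSerializer>,
}

impl SegmentSerializer {
    /// Hands the serializer out at most once; later calls return `None`.
    fn extract_fieldnorms_serializer(&mut self) -> Option<FieldNormsSerializer> {
        self.fieldnorms_serializer.take()
    }
}

fn main() {
    let mut serializer = SegmentSerializer {
        fieldnorms_serializer: Some(FieldNormsSerializer),
    };
    assert!(serializer.extract_fieldnorms_serializer().is_some());
    // Second extraction yields nothing, so a later `close()` can skip it safely.
    assert!(serializer.extract_fieldnorms_serializer().is_none());
}
```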

View File

@@ -23,7 +23,6 @@ use futures::channel::oneshot;
 use futures::executor::{ThreadPool, ThreadPoolBuilder};
 use futures::future::Future;
 use futures::future::TryFutureExt;
-use serde_json;
 use std::borrow::BorrowMut;
 use std::collections::HashSet;
 use std::io::Write;
@@ -44,7 +43,7 @@ const NUM_MERGE_THREADS: usize = 4;
 /// and flushed.
 ///
 /// This method is not part of tantivy's public API
-pub fn save_new_metas(schema: Schema, directory: &mut dyn Directory) -> crate::Result<()> {
+pub fn save_new_metas(schema: Schema, directory: &dyn Directory) -> crate::Result<()> {
     save_metas(
         &IndexMeta {
             segments: Vec::new(),
@@ -65,7 +64,7 @@ pub fn save_new_metas(schema: Schema, directory: &mut dyn Directory) -> crate::R
 /// and flushed.
 ///
 /// This method is not part of tantivy's public API
-fn save_metas(metas: &IndexMeta, directory: &mut dyn Directory) -> crate::Result<()> {
+fn save_metas(metas: &IndexMeta, directory: &dyn Directory) -> crate::Result<()> {
     info!("save metas");
     let mut buffer = serde_json::to_vec_pretty(metas)?;
     // Just adding a new line at the end of the buffer.
@@ -113,7 +112,7 @@ fn merge(
     target_opstamp: Opstamp,
 ) -> crate::Result<SegmentEntry> {
     // first we need to apply deletes to our segment.
-    let mut merged_segment = index.new_segment();
+    let merged_segment = index.new_segment();

     // First we apply all of the delet to the merged segment, up to the target opstamp.
     for segment_entry in &mut segment_entries {
@@ -132,12 +131,13 @@ fn merge(
     let merger: IndexMerger = IndexMerger::open(index.schema(), &segments[..])?;

     // ... we just serialize this index merger in our new segment to merge the two segments.
-    let segment_serializer = SegmentSerializer::for_segment(&mut merged_segment)?;
+    let segment_serializer = SegmentSerializer::for_segment(merged_segment.clone())?;
     let num_docs = merger.write(segment_serializer)?;

-    let segment_meta = index.new_segment_meta(merged_segment.id(), num_docs);
+    let merged_segment_id = merged_segment.id();
+    let segment_meta = index.new_segment_meta(merged_segment_id, num_docs);

     Ok(SegmentEntry::new(segment_meta, delete_cursor, None))
 }
@@ -450,9 +450,8 @@ impl SegmentUpdater {
             .into_iter()
             .map(|merge_candidate: MergeCandidate| {
                 MergeOperation::new(&self.merge_operations, commit_opstamp, merge_candidate.0)
-            })
-            .collect::<Vec<_>>();
-        merge_candidates.extend(committed_merge_candidates.into_iter());
+            });
+        merge_candidates.extend(committed_merge_candidates);

         for merge_operation in merge_candidates {
             if let Err(err) = self.start_merge(merge_operation) {
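The change above extends `merge_candidates` straight from the iterator rather than collecting into a temporary `Vec` first. A tiny standalone sketch of the same simplification:

```rust
fn main() {
    let committed = vec![3, 4, 5];
    let mut merge_candidates = vec![1, 2];

    // Before: collect into a temporary Vec, then extend from it.
    // After: extend straight from the iterator; no intermediate allocation.
    merge_candidates.extend(committed.iter().map(|x| x * 10));

    assert_eq!(merge_candidates, vec![1, 2, 30, 40, 50]);
}
```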
@@ -522,7 +521,7 @@ impl SegmentUpdater {
     ///
     /// Upon termination of the current merging threads,
     /// merge opportunity may appear.
-    //
+    ///
     /// We keep waiting until the merge policy judges that
     /// no opportunity is available.
     ///
@@ -555,7 +554,7 @@ mod tests {
         let index = Index::create_in_ram(schema);

         // writing the segment
-        let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+        let mut index_writer = index.writer_for_tests().unwrap();
         index_writer.set_merge_policy(Box::new(MergeWheneverPossible));
         {
@@ -608,7 +607,7 @@ mod tests {
         let index = Index::create_in_ram(schema);

         // writing the segment
-        let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+        let mut index_writer = index.writer_for_tests().unwrap();
         {
             for _ in 0..100 {
@@ -679,7 +678,7 @@ mod tests {
         let index = Index::create_in_ram(schema);

         // writing the segment
-        let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+        let mut index_writer = index.writer_for_tests().unwrap();
         {
             for _ in 0..100 {

View File

@@ -2,7 +2,7 @@ use super::operation::AddOperation;
 use crate::core::Segment;
 use crate::core::SerializableSegment;
 use crate::fastfield::FastFieldsWriter;
-use crate::fieldnorm::FieldNormsWriter;
+use crate::fieldnorm::{FieldNormReaders, FieldNormsWriter};
 use crate::indexer::segment_serializer::SegmentSerializer;
 use crate::postings::compute_table_size;
 use crate::postings::MultiFieldPostingsWriter;
@@ -14,10 +14,8 @@ use crate::schema::{Field, FieldEntry};
 use crate::tokenizer::{BoxTokenStream, PreTokenizedStream};
 use crate::tokenizer::{FacetTokenizer, TextAnalyzer};
 use crate::tokenizer::{TokenStreamChain, Tokenizer};
-use crate::DocId;
 use crate::Opstamp;
-use std::io;
-use std::str;
+use crate::{DocId, SegmentComponent};

 /// Computes the initial size of the hash table.
 ///
@@ -48,6 +46,7 @@ pub struct SegmentWriter {
     fieldnorms_writer: FieldNormsWriter,
     doc_opstamps: Vec<Opstamp>,
     tokenizers: Vec<Option<TextAnalyzer>>,
+    term_buffer: Term,
 }

 impl SegmentWriter {
@@ -62,11 +61,12 @@ impl SegmentWriter {
     /// - schema
     pub fn for_segment(
         memory_budget: usize,
-        mut segment: Segment,
+        segment: Segment,
         schema: &Schema,
     ) -> crate::Result<SegmentWriter> {
+        let tokenizer_manager = segment.index().tokenizers().clone();
         let table_num_bits = initial_table_size(memory_budget)?;
-        let segment_serializer = SegmentSerializer::for_segment(&mut segment)?;
+        let segment_serializer = SegmentSerializer::for_segment(segment)?;
         let multifield_postings = MultiFieldPostingsWriter::new(schema, table_num_bits);
         let tokenizers = schema
             .fields()
@@ -76,7 +76,7 @@ impl SegmentWriter {
                 .get_indexing_options()
                 .and_then(|text_index_option| {
                     let tokenizer_name = &text_index_option.tokenizer();
-                    segment.index().tokenizers().get(tokenizer_name)
+                    tokenizer_manager.get(tokenizer_name)
                 }),
             _ => None,
         },
@@ -90,6 +90,7 @@ impl SegmentWriter {
             fast_field_writers: FastFieldsWriter::from_schema(schema),
             doc_opstamps: Vec::with_capacity(1_000),
             tokenizers,
+            term_buffer: Term::new(),
         })
     }
@@ -115,7 +116,11 @@ impl SegmentWriter {
     /// Indexes a new document
     ///
     /// As a user, you should rather use `IndexWriter`'s add_document.
-    pub fn add_document(&mut self, add_operation: AddOperation, schema: &Schema) -> io::Result<()> {
+    pub fn add_document(
+        &mut self,
+        add_operation: AddOperation,
+        schema: &Schema,
+    ) -> crate::Result<()> {
         let doc_id = self.max_doc;
         let mut doc = add_operation.document;
         self.doc_opstamps.push(add_operation.opstamp);
@@ -123,34 +128,45 @@ impl SegmentWriter {
         self.fast_field_writers.add_document(&doc);
         for (field, field_values) in doc.get_sorted_field_values() {
-            let field_options = schema.get_field_entry(field);
-            if !field_options.is_indexed() {
+            let field_entry = schema.get_field_entry(field);
+            let make_schema_error = || {
+                crate::TantivyError::SchemaError(format!(
+                    "Expected a {:?} for field {:?}",
+                    field_entry.field_type().value_type(),
+                    field_entry.name()
+                ))
+            };
+            if !field_entry.is_indexed() {
                 continue;
             }
-            match *field_options.field_type() {
+            let (term_buffer, multifield_postings) =
+                (&mut self.term_buffer, &mut self.multifield_postings);
+            match *field_entry.field_type() {
                 FieldType::HierarchicalFacet => {
-                    let facets: Vec<&str> = field_values
-                        .iter()
-                        .flat_map(|field_value| match *field_value.value() {
-                            Value::Facet(ref facet) => Some(facet.encoded_str()),
-                            _ => {
-                                panic!("Expected hierarchical facet");
-                            }
-                        })
-                        .collect();
-                    let mut term = Term::for_field(field); // we set the Term
-                    for fake_str in facets {
+                    term_buffer.set_field(field);
+                    let facets =
+                        field_values
+                            .iter()
+                            .flat_map(|field_value| match *field_value.value() {
+                                Value::Facet(ref facet) => Some(facet.encoded_str()),
+                                _ => {
+                                    panic!("Expected hierarchical facet");
+                                }
+                            });
+                    for facet_str in facets {
                         let mut unordered_term_id_opt = None;
-                        FacetTokenizer.token_stream(fake_str).process(&mut |token| {
-                            term.set_text(&token.text);
-                            let unordered_term_id =
-                                self.multifield_postings.subscribe(doc_id, &term);
-                            unordered_term_id_opt = Some(unordered_term_id);
-                        });
+                        FacetTokenizer
+                            .token_stream(facet_str)
+                            .process(&mut |token| {
+                                term_buffer.set_text(&token.text);
+                                let unordered_term_id =
+                                    multifield_postings.subscribe(doc_id, &term_buffer);
+                                unordered_term_id_opt = Some(unordered_term_id);
+                            });
                         if let Some(unordered_term_id) = unordered_term_id_opt {
                             self.fast_field_writers
                                 .get_multivalue_writer(field)
-                                .expect("multified writer for facet missing")
+                                .expect("writer for facet missing")
                                 .add_val(unordered_term_id);
                         }
                     }
@@ -167,7 +183,6 @@ impl SegmentWriter {
                             if let Some(last_token) = tok_str.tokens.last() {
                                 total_offset += last_token.offset_to;
                             }
-
                             token_streams
                                 .push(PreTokenizedStream::from(tok_str.clone()).into());
                         }
@@ -177,7 +192,6 @@ impl SegmentWriter {
                         {
                             offsets.push(total_offset);
                             total_offset += text.len();
-
                             token_streams.push(tokenizer.token_stream(text));
                         }
                     }
@@ -189,8 +203,12 @@ impl SegmentWriter {
                         0
                     } else {
                         let mut token_stream = TokenStreamChain::new(offsets, token_streams);
-                        self.multifield_postings
-                            .index_text(doc_id, field, &mut token_stream)
+                        multifield_postings.index_text(
+                            doc_id,
+                            field,
+                            &mut token_stream,
+                            term_buffer,
+                        )
                     };
                     self.fieldnorms_writer.record(doc_id, field, num_tokens);
@@ -198,49 +216,67 @@ impl SegmentWriter {
                 FieldType::U64(ref int_option) => {
                     if int_option.is_indexed() {
                         for field_value in field_values {
-                            let term = Term::from_field_u64(
-                                field_value.field(),
-                                field_value.value().u64_value(),
-                            );
-                            self.multifield_postings.subscribe(doc_id, &term);
+                            term_buffer.set_field(field_value.field());
+                            let u64_val = field_value
+                                .value()
+                                .u64_value()
+                                .ok_or_else(make_schema_error)?;
+                            term_buffer.set_u64(u64_val);
+                            multifield_postings.subscribe(doc_id, &term_buffer);
                         }
                     }
                 }
                 FieldType::Date(ref int_option) => {
                     if int_option.is_indexed() {
                         for field_value in field_values {
-                            let term = Term::from_field_i64(
-                                field_value.field(),
-                                field_value.value().date_value().timestamp(),
-                            );
-                            self.multifield_postings.subscribe(doc_id, &term);
+                            term_buffer.set_field(field_value.field());
+                            let date_val = field_value
+                                .value()
+                                .date_value()
+                                .ok_or_else(make_schema_error)?;
+                            term_buffer.set_i64(date_val.timestamp());
+                            multifield_postings.subscribe(doc_id, &term_buffer);
                         }
                     }
                 }
                 FieldType::I64(ref int_option) => {
                     if int_option.is_indexed() {
                         for field_value in field_values {
-                            let term = Term::from_field_i64(
-                                field_value.field(),
-                                field_value.value().i64_value(),
-                            );
-                            self.multifield_postings.subscribe(doc_id, &term);
+                            term_buffer.set_field(field_value.field());
+                            let i64_val = field_value
+                                .value()
+                                .i64_value()
+                                .ok_or_else(make_schema_error)?;
+                            term_buffer.set_i64(i64_val);
+                            multifield_postings.subscribe(doc_id, &term_buffer);
                         }
                     }
                 }
                 FieldType::F64(ref int_option) => {
                     if int_option.is_indexed() {
                         for field_value in field_values {
-                            let term = Term::from_field_f64(
-                                field_value.field(),
-                                field_value.value().f64_value(),
-                            );
-                            self.multifield_postings.subscribe(doc_id, &term);
+                            term_buffer.set_field(field_value.field());
+                            let f64_val = field_value
+                                .value()
+                                .f64_value()
+                                .ok_or_else(make_schema_error)?;
+                            term_buffer.set_f64(f64_val);
+                            multifield_postings.subscribe(doc_id, &term_buffer);
                         }
                     }
                 }
-                FieldType::Bytes => {
-                    // Do nothing. Bytes only supports fast fields.
+                FieldType::Bytes(ref option) => {
+                    if option.is_indexed() {
+                        for field_value in field_values {
+                            term_buffer.set_field(field_value.field());
+                            let bytes = field_value
+                                .value()
+                                .bytes_value()
+                                .ok_or_else(make_schema_error)?;
+                            term_buffer.set_bytes(bytes);
+                            self.multifield_postings.subscribe(doc_id, &term_buffer);
+                        }
+                    }
                 }
             }
         }
@@ -280,9 +316,16 @@ fn write(
     fieldnorms_writer: &FieldNormsWriter,
     mut serializer: SegmentSerializer,
 ) -> crate::Result<()> {
-    let term_ord_map = multifield_postings.serialize(serializer.get_postings_serializer())?;
+    if let Some(fieldnorms_serializer) = serializer.extract_fieldnorms_serializer() {
+        fieldnorms_writer.serialize(fieldnorms_serializer)?;
+    }
+    let fieldnorm_data = serializer
+        .segment()
+        .open_read(SegmentComponent::FIELDNORMS)?;
+    let fieldnorm_readers = FieldNormReaders::open(fieldnorm_data)?;
+    let term_ord_map =
+        multifield_postings.serialize(serializer.get_postings_serializer(), fieldnorm_readers)?;
     fast_field_writers.serialize(serializer.get_fast_field_serializer(), &term_ord_map)?;
-    fieldnorms_writer.serialize(serializer.get_fieldnorms_serializer())?;
     serializer.close()?;
     Ok(())
 }
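Throughout this file the writer now reuses a single `Term` buffer (`set_field`, `set_u64`, ...) instead of building a fresh `Term` per value. A standalone sketch of the buffer-reuse pattern using a hypothetical `TermBuffer` type; the 4-byte field prefix merely mimics, and is not, tantivy's actual term layout:

```rust
struct TermBuffer {
    bytes: Vec<u8>,
}

impl TermBuffer {
    fn new() -> TermBuffer {
        TermBuffer { bytes: Vec::new() }
    }
    fn set_field(&mut self, field_id: u32) {
        self.bytes.clear();
        self.bytes.extend_from_slice(&field_id.to_be_bytes());
    }
    fn set_u64(&mut self, val: u64) {
        self.bytes.truncate(4); // keep the field prefix, replace the value
        self.bytes.extend_from_slice(&val.to_be_bytes());
    }
}

fn main() {
    // One buffer reused across values, instead of a fresh allocation per term.
    let mut term_buffer = TermBuffer::new();
    for &val in &[1u64, 2, 3] {
        term_buffer.set_field(7);
        term_buffer.set_u64(val);
        assert_eq!(term_buffer.bytes.len(), 12);
    }
}
```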

View File

@@ -105,7 +105,7 @@ extern crate serde_json;
 extern crate log;
 #[macro_use]
-extern crate failure;
+extern crate thiserror;
 #[cfg(all(test, feature = "unstable"))]
 extern crate test;
@@ -134,7 +134,7 @@ mod core;
 mod indexer;
 #[allow(unused_doc_comments)]
-mod error;
+pub mod error;
 pub mod tokenizer;
 pub mod collector;
@@ -156,7 +156,8 @@ mod snippet;
 pub use self::snippet::{Snippet, SnippetGenerator};
 mod docset;
-pub use self::docset::{DocSet, SkipResult};
+pub use self::docset::{DocSet, TERMINATED};
+pub use crate::common::HasLen;
 pub use crate::common::{f64_to_u64, i64_to_u64, u64_to_f64, u64_to_i64};
 pub use crate::core::{Executor, SegmentComponent};
 pub use crate::core::{Index, IndexMeta, Searcher, Segment, SegmentId, SegmentMeta};
@@ -173,7 +174,7 @@ use once_cell::sync::Lazy;
 use serde::{Deserialize, Serialize};
 /// Index format version.
-const INDEX_FORMAT_VERSION: u32 = 1;
+const INDEX_FORMAT_VERSION: u32 = 2;
 /// Structure version for the index.
 #[derive(Clone, PartialEq, Eq, Serialize, Deserialize)]
@@ -245,11 +246,10 @@ pub type DocId = u32;
 /// with opstamp `n+1`.
 pub type Opstamp = u64;
-/// A f32 that represents the relevance of the document to the query
+/// A Score that represents the relevance of the document to the query
 ///
-/// This is modelled internally as a `f32`. The
-/// larger the number, the more relevant the document
-/// to the search
+/// This is modelled internally as a `f32`. The larger the number, the more relevant
+/// the document to the search query.
 pub type Score = f32;
 /// A `SegmentLocalId` identifies a segment.
@@ -277,20 +277,18 @@ impl DocAddress {
 ///
 /// The id used for the segment is actually an ordinal
 /// in the list of `Segment`s held by a `Searcher`.
-#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
 pub struct DocAddress(pub SegmentLocalId, pub DocId);
 #[cfg(test)]
 mod tests {
     use crate::collector::tests::TEST_COLLECTOR_WITH_SCORE;
     use crate::core::SegmentReader;
-    use crate::docset::DocSet;
+    use crate::docset::{DocSet, TERMINATED};
     use crate::query::BooleanQuery;
     use crate::schema::*;
     use crate::DocAddress;
     use crate::Index;
-    use crate::IndexWriter;
     use crate::Postings;
     use crate::ReloadPolicy;
     use rand::distributions::Bernoulli;
@@ -298,17 +296,26 @@ mod tests {
     use rand::rngs::StdRng;
     use rand::{Rng, SeedableRng};

-    pub fn assert_nearly_equals(expected: f32, val: f32) {
-        assert!(
-            nearly_equals(val, expected),
-            "Got {}, expected {}.",
-            val,
-            expected
-        );
-    }
-
-    pub fn nearly_equals(a: f32, b: f32) -> bool {
-        (a - b).abs() < 0.0005 * (a + b).abs()
+    /// Checks if left and right are close one to each other.
+    /// Panics if the two values are more than 0.5% apart.
+    #[macro_export]
+    macro_rules! assert_nearly_equals {
+        ($left:expr, $right:expr) => {{
+            match (&$left, &$right) {
+                (left_val, right_val) => {
+                    let diff = (left_val - right_val).abs();
+                    let add = left_val.abs() + right_val.abs();
+                    if diff > 0.0005 * add {
+                        panic!(
+                            r#"assertion failed: `(left ~= right)`
+  left: `{:?}`,
+ right: `{:?}`"#,
+                            &*left_val, &*right_val
+                        )
+                    }
+                }
+            }
+        }};
     }

     pub fn generate_nonunique_unsorted(max_value: u32, n_elems: usize) -> Vec<u32> {
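A usage sketch for the new `assert_nearly_equals!` macro, assuming it is invoked from inside the crate, where the `#[macro_export]`ed macro is reachable as `crate::assert_nearly_equals!`; the module and test names are illustrative:

```rust
#[cfg(test)]
mod nearly_equals_usage {
    #[test]
    fn scores_within_half_a_percent_pass() {
        // Tolerance is relative: panics only if |a - b| > 0.0005 * (|a| + |b|).
        crate::assert_nearly_equals!(1.0f32, 1.0004f32);
    }

    #[test]
    #[should_panic]
    fn scores_further_apart_panic() {
        crate::assert_nearly_equals!(1.0f32, 1.1f32);
    }
}
```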
@@ -346,14 +353,14 @@ mod tests {
     #[test]
     #[cfg(feature = "mmap")]
-    fn test_indexing() {
+    fn test_indexing() -> crate::Result<()> {
         let mut schema_builder = Schema::builder();
         let text_field = schema_builder.add_text_field("text", TEXT);
         let schema = schema_builder.build();
         let index = Index::create_from_tempdir(schema).unwrap();
         {
             // writing the segment
-            let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+            let mut index_writer = index.writer_for_tests()?;
             {
                 let doc = doc!(text_field=>"af b");
                 index_writer.add_document(doc);
@@ -368,120 +375,91 @@ mod tests {
} }
assert!(index_writer.commit().is_ok()); assert!(index_writer.commit().is_ok());
} }
Ok(())
} }
#[test] #[test]
fn test_docfreq1() { fn test_docfreq1() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let index = Index::create_in_ram(schema_builder.build()); let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
{ index_writer.add_document(doc!(text_field=>"a b c"));
index_writer.add_document(doc!(text_field=>"a b c")); index_writer.commit()?;
index_writer.commit().unwrap(); index_writer.add_document(doc!(text_field=>"a"));
} index_writer.add_document(doc!(text_field=>"a a"));
{ index_writer.commit()?;
{ index_writer.add_document(doc!(text_field=>"c"));
let doc = doc!(text_field=>"a"); index_writer.commit()?;
index_writer.add_document(doc); let reader = index.reader()?;
} let searcher = reader.searcher();
{ let term_a = Term::from_field_text(text_field, "a");
let doc = doc!(text_field=>"a a"); assert_eq!(searcher.doc_freq(&term_a)?, 3);
index_writer.add_document(doc); let term_b = Term::from_field_text(text_field, "b");
} assert_eq!(searcher.doc_freq(&term_b)?, 1);
index_writer.commit().unwrap(); let term_c = Term::from_field_text(text_field, "c");
} assert_eq!(searcher.doc_freq(&term_c)?, 2);
{ let term_d = Term::from_field_text(text_field, "d");
let doc = doc!(text_field=>"c"); assert_eq!(searcher.doc_freq(&term_d)?, 0);
index_writer.add_document(doc); Ok(())
index_writer.commit().unwrap();
}
{
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let term_a = Term::from_field_text(text_field, "a");
assert_eq!(searcher.doc_freq(&term_a), 3);
let term_b = Term::from_field_text(text_field, "b");
assert_eq!(searcher.doc_freq(&term_b), 1);
let term_c = Term::from_field_text(text_field, "c");
assert_eq!(searcher.doc_freq(&term_c), 2);
let term_d = Term::from_field_text(text_field, "d");
assert_eq!(searcher.doc_freq(&term_d), 0);
}
} }
#[test] #[test]
fn test_fieldnorm_no_docs_with_field() { fn test_fieldnorm_no_docs_with_field() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let title_field = schema_builder.add_text_field("title", TEXT); let title_field = schema_builder.add_text_field("title", TEXT);
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let index = Index::create_in_ram(schema_builder.build()); let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(text_field=>"a b c"));
index_writer.commit()?;
let index_reader = index.reader()?;
let searcher = index_reader.searcher();
let reader = searcher.segment_reader(0);
{ {
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let fieldnorm_reader = reader.get_fieldnorms_reader(text_field)?;
{ assert_eq!(fieldnorm_reader.fieldnorm(0), 3);
let doc = doc!(text_field=>"a b c");
index_writer.add_document(doc);
}
index_writer.commit().unwrap();
} }
{ {
let index_reader = index.reader().unwrap(); let fieldnorm_reader = reader.get_fieldnorms_reader(title_field)?;
let searcher = index_reader.searcher(); assert_eq!(fieldnorm_reader.fieldnorm_id(0), 0);
let reader = searcher.segment_reader(0);
{
let fieldnorm_reader = reader.get_fieldnorms_reader(text_field);
assert_eq!(fieldnorm_reader.fieldnorm(0), 3);
}
{
let fieldnorm_reader = reader.get_fieldnorms_reader(title_field);
assert_eq!(fieldnorm_reader.fieldnorm_id(0), 0);
}
} }
Ok(())
} }
#[test] #[test]
fn test_fieldnorm() { fn test_fieldnorm() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let index = Index::create_in_ram(schema_builder.build()); let index = Index::create_in_ram(schema_builder.build());
{ let mut index_writer = index.writer_for_tests()?;
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); index_writer.add_document(doc!(text_field=>"a b c"));
{ index_writer.add_document(doc!());
let doc = doc!(text_field=>"a b c"); index_writer.add_document(doc!(text_field=>"a b"));
index_writer.add_document(doc); index_writer.commit()?;
} let reader = index.reader()?;
{ let searcher = reader.searcher();
let doc = doc!(); let segment_reader: &SegmentReader = searcher.segment_reader(0);
index_writer.add_document(doc); let fieldnorms_reader = segment_reader.get_fieldnorms_reader(text_field)?;
} assert_eq!(fieldnorms_reader.fieldnorm(0), 3);
{ assert_eq!(fieldnorms_reader.fieldnorm(1), 0);
let doc = doc!(text_field=>"a b"); assert_eq!(fieldnorms_reader.fieldnorm(2), 2);
index_writer.add_document(doc); Ok(())
}
index_writer.commit().unwrap();
}
{
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let segment_reader: &SegmentReader = searcher.segment_reader(0);
let fieldnorms_reader = segment_reader.get_fieldnorms_reader(text_field);
assert_eq!(fieldnorms_reader.fieldnorm(0), 3);
assert_eq!(fieldnorms_reader.fieldnorm(1), 0);
assert_eq!(fieldnorms_reader.fieldnorm(2), 2);
}
} }
     fn advance_undeleted(docset: &mut dyn DocSet, reader: &SegmentReader) -> bool {
-        while docset.advance() {
-            if !reader.is_deleted(docset.doc()) {
+        let mut doc = docset.advance();
+        while doc != TERMINATED {
+            if !reader.is_deleted(doc) {
                 return true;
             }
+            doc = docset.advance();
         }
         false
     }
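The helper above is rewritten for the new `DocSet` cursor contract: `advance()` now returns the next `DocId`, or `TERMINATED` once the set is exhausted, instead of a `bool`. A hedged sketch of draining a `DocSet` under that contract, assuming the set starts positioned on its first document:

```rust
use tantivy::{DocId, DocSet, TERMINATED};

// Collects every remaining doc id from a DocSet cursor.
fn collect_docs(docset: &mut dyn DocSet) -> Vec<DocId> {
    let mut docs = Vec::new();
    let mut doc = docset.doc();
    while doc != TERMINATED {
        docs.push(doc);
        doc = docset.advance();
    }
    docs
}
```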
     #[test]
-    fn test_delete_postings1() {
+    fn test_delete_postings1() -> crate::Result<()> {
         let mut schema_builder = Schema::builder();
         let text_field = schema_builder.add_text_field("text", TEXT);
         let term_abcd = Term::from_field_text(text_field, "abcd");
@@ -497,7 +475,7 @@ mod tests {
.unwrap(); .unwrap();
{ {
// writing the segment // writing the segment
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
// 0 // 0
index_writer.add_document(doc!(text_field=>"a b")); index_writer.add_document(doc!(text_field=>"a b"));
// 1 // 1
@@ -513,19 +491,19 @@ mod tests {
index_writer.add_document(doc!(text_field=>" b c")); index_writer.add_document(doc!(text_field=>" b c"));
// 5 // 5
index_writer.add_document(doc!(text_field=>" a")); index_writer.add_document(doc!(text_field=>" a"));
index_writer.commit().unwrap(); index_writer.commit()?;
} }
{ {
reader.reload().unwrap(); reader.reload()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
let inverted_index = segment_reader.inverted_index(text_field); let inverted_index = segment_reader.inverted_index(text_field)?;
assert!(inverted_index assert!(inverted_index
.read_postings(&term_abcd, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_abcd, IndexRecordOption::WithFreqsAndPositions)?
.is_none()); .is_none());
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert!(advance_undeleted(&mut postings, segment_reader)); assert!(advance_undeleted(&mut postings, segment_reader));
assert_eq!(postings.doc(), 5); assert_eq!(postings.doc(), 5);
@@ -533,7 +511,7 @@ mod tests {
} }
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term_b, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_b, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert!(advance_undeleted(&mut postings, segment_reader)); assert!(advance_undeleted(&mut postings, segment_reader));
assert_eq!(postings.doc(), 3); assert_eq!(postings.doc(), 3);
@@ -544,25 +522,25 @@ mod tests {
} }
{ {
// writing the segment // writing the segment
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
// 0 // 0
index_writer.add_document(doc!(text_field=>"a b")); index_writer.add_document(doc!(text_field=>"a b"));
// 1 // 1
index_writer.delete_term(Term::from_field_text(text_field, "c")); index_writer.delete_term(Term::from_field_text(text_field, "c"));
index_writer.rollback().unwrap(); index_writer.rollback()?;
} }
{ {
reader.reload().unwrap(); reader.reload()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
let seg_reader = searcher.segment_reader(0); let seg_reader = searcher.segment_reader(0);
let inverted_index = seg_reader.inverted_index(term_abcd.field()); let inverted_index = seg_reader.inverted_index(term_abcd.field())?;
assert!(inverted_index assert!(inverted_index
.read_postings(&term_abcd, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_abcd, IndexRecordOption::WithFreqsAndPositions)?
.is_none()); .is_none());
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert!(advance_undeleted(&mut postings, seg_reader)); assert!(advance_undeleted(&mut postings, seg_reader));
assert_eq!(postings.doc(), 5); assert_eq!(postings.doc(), 5);
@@ -570,7 +548,7 @@ mod tests {
} }
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term_b, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_b, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert!(advance_undeleted(&mut postings, seg_reader)); assert!(advance_undeleted(&mut postings, seg_reader));
assert_eq!(postings.doc(), 3); assert_eq!(postings.doc(), 3);
@@ -581,30 +559,30 @@ mod tests {
} }
{ {
// writing the segment // writing the segment
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(text_field=>"a b")); index_writer.add_document(doc!(text_field=>"a b"));
index_writer.delete_term(Term::from_field_text(text_field, "c")); index_writer.delete_term(Term::from_field_text(text_field, "c"));
index_writer.rollback().unwrap(); index_writer.rollback()?;
index_writer.delete_term(Term::from_field_text(text_field, "a")); index_writer.delete_term(Term::from_field_text(text_field, "a"));
index_writer.commit().unwrap(); index_writer.commit()?;
} }
{ {
reader.reload().unwrap(); reader.reload()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
let inverted_index = segment_reader.inverted_index(term_abcd.field()); let inverted_index = segment_reader.inverted_index(term_abcd.field())?;
assert!(inverted_index assert!(inverted_index
.read_postings(&term_abcd, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_abcd, IndexRecordOption::WithFreqsAndPositions)?
.is_none()); .is_none());
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert!(!advance_undeleted(&mut postings, segment_reader)); assert!(!advance_undeleted(&mut postings, segment_reader));
} }
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term_b, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_b, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert!(advance_undeleted(&mut postings, segment_reader)); assert!(advance_undeleted(&mut postings, segment_reader));
assert_eq!(postings.doc(), 3); assert_eq!(postings.doc(), 3);
@@ -614,104 +592,107 @@ mod tests {
} }
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term_c, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_c, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert!(advance_undeleted(&mut postings, segment_reader)); assert!(advance_undeleted(&mut postings, segment_reader));
assert_eq!(postings.doc(), 4); assert_eq!(postings.doc(), 4);
assert!(!advance_undeleted(&mut postings, segment_reader)); assert!(!advance_undeleted(&mut postings, segment_reader));
} }
} }
Ok(())
} }
#[test] #[test]
fn test_indexed_u64() { fn test_indexed_u64() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let field = schema_builder.add_u64_field("value", INDEXED); let field = schema_builder.add_u64_field("value", INDEXED);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(field=>1u64)); index_writer.add_document(doc!(field=>1u64));
index_writer.commit().unwrap(); index_writer.commit()?;
let reader = index.reader().unwrap(); let reader = index.reader()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
let term = Term::from_field_u64(field, 1u64); let term = Term::from_field_u64(field, 1u64);
let mut postings = searcher let mut postings = searcher
.segment_reader(0) .segment_reader(0)
.inverted_index(term.field()) .inverted_index(term.field())?
.read_postings(&term, IndexRecordOption::Basic) .read_postings(&term, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
assert!(postings.advance());
assert_eq!(postings.doc(), 0); assert_eq!(postings.doc(), 0);
assert!(!postings.advance()); assert_eq!(postings.advance(), TERMINATED);
Ok(())
} }
#[test] #[test]
fn test_indexed_i64() { fn test_indexed_i64() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let value_field = schema_builder.add_i64_field("value", INDEXED); let value_field = schema_builder.add_i64_field("value", INDEXED);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
let negative_val = -1i64; let negative_val = -1i64;
index_writer.add_document(doc!(value_field => negative_val)); index_writer.add_document(doc!(value_field => negative_val));
index_writer.commit().unwrap(); index_writer.commit()?;
let reader = index.reader().unwrap(); let reader = index.reader()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
let term = Term::from_field_i64(value_field, negative_val); let term = Term::from_field_i64(value_field, negative_val);
let mut postings = searcher let mut postings = searcher
.segment_reader(0) .segment_reader(0)
.inverted_index(term.field()) .inverted_index(term.field())?
.read_postings(&term, IndexRecordOption::Basic) .read_postings(&term, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
assert!(postings.advance());
assert_eq!(postings.doc(), 0); assert_eq!(postings.doc(), 0);
assert!(!postings.advance()); assert_eq!(postings.advance(), TERMINATED);
Ok(())
} }
#[test] #[test]
fn test_indexed_f64() { fn test_indexed_f64() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let value_field = schema_builder.add_f64_field("value", INDEXED); let value_field = schema_builder.add_f64_field("value", INDEXED);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
let val = std::f64::consts::PI; let val = std::f64::consts::PI;
index_writer.add_document(doc!(value_field => val)); index_writer.add_document(doc!(value_field => val));
index_writer.commit().unwrap(); index_writer.commit()?;
let reader = index.reader().unwrap(); let reader = index.reader()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
let term = Term::from_field_f64(value_field, val); let term = Term::from_field_f64(value_field, val);
let mut postings = searcher let mut postings = searcher
.segment_reader(0) .segment_reader(0)
.inverted_index(term.field()) .inverted_index(term.field())?
.read_postings(&term, IndexRecordOption::Basic) .read_postings(&term, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
assert!(postings.advance());
assert_eq!(postings.doc(), 0); assert_eq!(postings.doc(), 0);
assert!(!postings.advance()); assert_eq!(postings.advance(), TERMINATED);
Ok(())
} }
#[test] #[test]
fn test_indexedfield_not_in_documents() { fn test_indexedfield_not_in_documents() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let absent_field = schema_builder.add_text_field("text", TEXT); let absent_field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(2, 6_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(text_field=>"a")); index_writer.add_document(doc!(text_field=>"a"));
assert!(index_writer.commit().is_ok()); assert!(index_writer.commit().is_ok());
let reader = index.reader().unwrap(); let reader = index.reader()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
segment_reader.inverted_index(absent_field); //< should not panic let inverted_index = segment_reader.inverted_index(absent_field)?;
assert_eq!(inverted_index.terms().num_terms(), 0);
Ok(())
} }
#[test] #[test]
fn test_delete_postings2() { fn test_delete_postings2() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
@@ -719,128 +700,112 @@ mod tests {
let reader = index let reader = index
.reader_builder() .reader_builder()
.reload_policy(ReloadPolicy::Manual) .reload_policy(ReloadPolicy::Manual)
.try_into() .try_into()?;
.unwrap();
// writing the segment // writing the segment
let mut index_writer = index.writer_with_num_threads(2, 6_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(text_field=>"63"));
let add_document = |index_writer: &mut IndexWriter, val: &'static str| { index_writer.add_document(doc!(text_field=>"70"));
let doc = doc!(text_field=>val); index_writer.add_document(doc!(text_field=>"34"));
index_writer.add_document(doc); index_writer.add_document(doc!(text_field=>"1"));
}; index_writer.add_document(doc!(text_field=>"38"));
index_writer.add_document(doc!(text_field=>"33"));
let remove_document = |index_writer: &mut IndexWriter, val: &'static str| { index_writer.add_document(doc!(text_field=>"40"));
let delterm = Term::from_field_text(text_field, val); index_writer.add_document(doc!(text_field=>"17"));
index_writer.delete_term(delterm); index_writer.delete_term(Term::from_field_text(text_field, "38"));
}; index_writer.delete_term(Term::from_field_text(text_field, "34"));
index_writer.commit()?;
add_document(&mut index_writer, "63"); reader.reload()?;
add_document(&mut index_writer, "70"); assert_eq!(reader.searcher().num_docs(), 6);
add_document(&mut index_writer, "34"); Ok(())
add_document(&mut index_writer, "1");
add_document(&mut index_writer, "38");
add_document(&mut index_writer, "33");
add_document(&mut index_writer, "40");
add_document(&mut index_writer, "17");
remove_document(&mut index_writer, "38");
remove_document(&mut index_writer, "34");
index_writer.commit().unwrap();
reader.reload().unwrap();
let searcher = reader.searcher();
assert_eq!(searcher.num_docs(), 6);
} }
#[test] #[test]
fn test_termfreq() { fn test_termfreq() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
{ {
// writing the segment // writing the segment
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
{ index_writer.add_document(doc!(text_field=>"af af af bc bc"));
let doc = doc!(text_field=>"af af af bc bc"); index_writer.commit()?;
index_writer.add_document(doc);
}
index_writer.commit().unwrap();
} }
{ {
let index_reader = index.reader().unwrap(); let index_reader = index.reader()?;
let searcher = index_reader.searcher(); let searcher = index_reader.searcher();
let reader = searcher.segment_reader(0); let reader = searcher.segment_reader(0);
let inverted_index = reader.inverted_index(text_field); let inverted_index = reader.inverted_index(text_field)?;
let term_abcd = Term::from_field_text(text_field, "abcd"); let term_abcd = Term::from_field_text(text_field, "abcd");
assert!(inverted_index assert!(inverted_index
.read_postings(&term_abcd, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_abcd, IndexRecordOption::WithFreqsAndPositions)?
.is_none()); .is_none());
let term_af = Term::from_field_text(text_field, "af"); let term_af = Term::from_field_text(text_field, "af");
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term_af, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_af, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert!(postings.advance());
assert_eq!(postings.doc(), 0); assert_eq!(postings.doc(), 0);
assert_eq!(postings.term_freq(), 3); assert_eq!(postings.term_freq(), 3);
assert!(!postings.advance()); assert_eq!(postings.advance(), TERMINATED);
} }
Ok(())
} }
#[test] #[test]
fn test_searcher_1() { fn test_searcher_1() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let reader = index.reader().unwrap(); let reader = index.reader()?;
{ // writing the segment
// writing the segment let mut index_writer = index.writer_for_tests()?;
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); index_writer.add_document(doc!(text_field=>"af af af b"));
index_writer.add_document(doc!(text_field=>"af af af b")); index_writer.add_document(doc!(text_field=>"a b c"));
index_writer.add_document(doc!(text_field=>"a b c")); index_writer.add_document(doc!(text_field=>"a b c d"));
index_writer.add_document(doc!(text_field=>"a b c d")); index_writer.commit()?;
index_writer.commit().unwrap();
} reader.reload()?;
{ let searcher = reader.searcher();
reader.reload().unwrap(); let get_doc_ids = |terms: Vec<Term>| {
let searcher = reader.searcher(); let query = BooleanQuery::new_multiterms_query(terms);
let get_doc_ids = |terms: Vec<Term>| { searcher
let query = BooleanQuery::new_multiterms_query(terms); .search(&query, &TEST_COLLECTOR_WITH_SCORE)
let topdocs = searcher.search(&query, &TEST_COLLECTOR_WITH_SCORE).unwrap(); .map(|topdocs| topdocs.docs().to_vec())
topdocs.docs().to_vec() };
}; assert_eq!(
assert_eq!( get_doc_ids(vec![Term::from_field_text(text_field, "a")])?,
get_doc_ids(vec![Term::from_field_text(text_field, "a")]), vec![DocAddress(0, 1), DocAddress(0, 2)]
vec![DocAddress(0, 1), DocAddress(0, 2)] );
); assert_eq!(
assert_eq!( get_doc_ids(vec![Term::from_field_text(text_field, "af")])?,
get_doc_ids(vec![Term::from_field_text(text_field, "af")]), vec![DocAddress(0, 0)]
vec![DocAddress(0, 0)] );
); assert_eq!(
assert_eq!( get_doc_ids(vec![Term::from_field_text(text_field, "b")])?,
get_doc_ids(vec![Term::from_field_text(text_field, "b")]), vec![DocAddress(0, 0), DocAddress(0, 1), DocAddress(0, 2)]
vec![DocAddress(0, 0), DocAddress(0, 1), DocAddress(0, 2)] );
); assert_eq!(
assert_eq!( get_doc_ids(vec![Term::from_field_text(text_field, "c")])?,
get_doc_ids(vec![Term::from_field_text(text_field, "c")]), vec![DocAddress(0, 1), DocAddress(0, 2)]
vec![DocAddress(0, 1), DocAddress(0, 2)] );
); assert_eq!(
assert_eq!( get_doc_ids(vec![Term::from_field_text(text_field, "d")])?,
get_doc_ids(vec![Term::from_field_text(text_field, "d")]), vec![DocAddress(0, 2)]
vec![DocAddress(0, 2)] );
); assert_eq!(
assert_eq!( get_doc_ids(vec![
get_doc_ids(vec![ Term::from_field_text(text_field, "b"),
Term::from_field_text(text_field, "b"), Term::from_field_text(text_field, "a"),
Term::from_field_text(text_field, "a"), ])?,
]), vec![DocAddress(0, 0), DocAddress(0, 1), DocAddress(0, 2)]
vec![DocAddress(0, 0), DocAddress(0, 1), DocAddress(0, 2)] );
); Ok(())
}
} }
#[test] #[test]
fn test_searcher_2() { fn test_searcher_2() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
@@ -848,19 +813,17 @@ mod tests {
let reader = index let reader = index
.reader_builder() .reader_builder()
.reload_policy(ReloadPolicy::Manual) .reload_policy(ReloadPolicy::Manual)
.try_into() .try_into()?;
.unwrap();
assert_eq!(reader.searcher().num_docs(), 0u64); assert_eq!(reader.searcher().num_docs(), 0u64);
{ // writing the segment
// writing the segment let mut index_writer = index.writer_for_tests()?;
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); index_writer.add_document(doc!(text_field=>"af b"));
index_writer.add_document(doc!(text_field=>"af b")); index_writer.add_document(doc!(text_field=>"a b c"));
index_writer.add_document(doc!(text_field=>"a b c")); index_writer.add_document(doc!(text_field=>"a b c d"));
index_writer.add_document(doc!(text_field=>"a b c d")); index_writer.commit()?;
index_writer.commit().unwrap(); reader.reload()?;
}
reader.reload().unwrap();
assert_eq!(reader.searcher().num_docs(), 3u64); assert_eq!(reader.searcher().num_docs(), 3u64);
Ok(())
} }
#[test] #[test]
@@ -872,17 +835,17 @@ mod tests {
text_field => "some other value", text_field => "some other value",
other_text_field => "short"); other_text_field => "short");
assert_eq!(document.len(), 3); assert_eq!(document.len(), 3);
let values = document.get_all(text_field); let values: Vec<&Value> = document.get_all(text_field).collect();
assert_eq!(values.len(), 2); assert_eq!(values.len(), 2);
assert_eq!(values[0].text(), Some("tantivy")); assert_eq!(values[0].text(), Some("tantivy"));
assert_eq!(values[1].text(), Some("some other value")); assert_eq!(values[1].text(), Some("some other value"));
let values = document.get_all(other_text_field); let values: Vec<&Value> = document.get_all(other_text_field).collect();
assert_eq!(values.len(), 1); assert_eq!(values.len(), 1);
assert_eq!(values[0].text(), Some("short")); assert_eq!(values[0].text(), Some("short"));
} }
#[test] #[test]
fn test_wrong_fast_field_type() { fn test_wrong_fast_field_type() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let fast_field_unsigned = schema_builder.add_u64_field("unsigned", FAST); let fast_field_unsigned = schema_builder.add_u64_field("unsigned", FAST);
let fast_field_signed = schema_builder.add_i64_field("signed", FAST); let fast_field_signed = schema_builder.add_i64_field("signed", FAST);
@@ -892,14 +855,14 @@ mod tests {
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 50_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
{ {
let document = let document =
doc!(fast_field_unsigned => 4u64, fast_field_signed=>4i64, fast_field_float=>4f64); doc!(fast_field_unsigned => 4u64, fast_field_signed=>4i64, fast_field_float=>4f64);
index_writer.add_document(document); index_writer.add_document(document);
index_writer.commit().unwrap(); index_writer.commit()?;
} }
let reader = index.reader().unwrap(); let reader = index.reader()?;
let searcher = reader.searcher(); let searcher = reader.searcher();
let segment_reader: &SegmentReader = searcher.segment_reader(0); let segment_reader: &SegmentReader = searcher.segment_reader(0);
{ {
@@ -938,11 +901,12 @@ mod tests {
let fast_field_reader = fast_field_reader_opt.unwrap(); let fast_field_reader = fast_field_reader_opt.unwrap();
assert_eq!(fast_field_reader.get(0), 4f64) assert_eq!(fast_field_reader.get(0), 4f64)
} }
Ok(())
} }
// motivated by #729 // motivated by #729
#[test] #[test]
fn test_update_via_delete_insert() { fn test_update_via_delete_insert() -> crate::Result<()> {
use crate::collector::Count; use crate::collector::Count;
use crate::indexer::NoMergePolicy; use crate::indexer::NoMergePolicy;
use crate::query::AllQuery; use crate::query::AllQuery;
@@ -956,17 +920,17 @@ mod tests {
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema.clone()); let index = Index::create_in_ram(schema.clone());
let index_reader = index.reader().unwrap(); let index_reader = index.reader()?;
let mut index_writer = index.writer(3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.set_merge_policy(Box::new(NoMergePolicy)); index_writer.set_merge_policy(Box::new(NoMergePolicy));
for doc_id in 0u64..DOC_COUNT { for doc_id in 0u64..DOC_COUNT {
index_writer.add_document(doc!(id => doc_id)); index_writer.add_document(doc!(id => doc_id));
} }
index_writer.commit().unwrap(); index_writer.commit()?;
index_reader.reload().unwrap(); index_reader.reload()?;
let searcher = index_reader.searcher(); let searcher = index_reader.searcher();
assert_eq!( assert_eq!(
@@ -977,12 +941,11 @@ mod tests {
// update the 10 elements by deleting and re-adding // update the 10 elements by deleting and re-adding
for doc_id in 0u64..DOC_COUNT { for doc_id in 0u64..DOC_COUNT {
index_writer.delete_term(Term::from_field_u64(id, doc_id)); index_writer.delete_term(Term::from_field_u64(id, doc_id));
index_writer.commit().unwrap(); index_writer.commit()?;
index_reader.reload().unwrap(); index_reader.reload()?;
let doc = doc!(id => doc_id); index_writer.add_document(doc!(id => doc_id));
index_writer.add_document(doc); index_writer.commit()?;
index_writer.commit().unwrap(); index_reader.reload()?;
index_reader.reload().unwrap();
let searcher = index_reader.searcher(); let searcher = index_reader.searcher();
// The number of documents should be stable. // The number of documents should be stable.
assert_eq!( assert_eq!(
@@ -991,7 +954,7 @@ mod tests {
); );
} }
index_reader.reload().unwrap(); index_reader.reload()?;
let searcher = index_reader.searcher(); let searcher = index_reader.searcher();
let segment_ids: Vec<SegmentId> = searcher let segment_ids: Vec<SegmentId> = searcher
.segment_readers() .segment_readers()
@@ -1000,12 +963,18 @@ mod tests {
.collect(); .collect();
block_on(index_writer.merge(&segment_ids)).unwrap(); block_on(index_writer.merge(&segment_ids)).unwrap();
index_reader.reload().unwrap(); index_reader.reload()?;
let searcher = index_reader.searcher(); let searcher = index_reader.searcher();
assert_eq!(searcher.search(&AllQuery, &Count)?, DOC_COUNT as usize);
Ok(())
}
assert_eq!( #[test]
searcher.search(&AllQuery, &Count).unwrap(), fn test_validate_checksum() -> crate::Result<()> {
DOC_COUNT as usize let index_path = tempfile::tempdir().expect("dir");
); let schema = Schema::builder().build();
let index = Index::create_in_dir(&index_path, schema)?;
assert!(index.validate_checksum()?.is_empty());
Ok(())
} }
} }
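Not part of the diff: a minimal sketch of the delete-then-reinsert update pattern that `test_update_via_delete_insert` above exercises, written against the public API (the field name and writer heap size here are illustrative).

use tantivy::schema::{Schema, INDEXED};
use tantivy::{doc, Index, Term};

fn update_by_id() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let id = schema_builder.add_u64_field("id", INDEXED);
    let index = Index::create_in_ram(schema_builder.build());
    let reader = index.reader()?;
    let mut index_writer = index.writer(3_000_000)?;
    index_writer.add_document(doc!(id => 1u64));
    index_writer.commit()?;
    // "Updating" a document means deleting every document carrying its id term,
    // then re-adding the new version.
    index_writer.delete_term(Term::from_field_u64(id, 1u64));
    index_writer.add_document(doc!(id => 1u64));
    index_writer.commit()?;
    reader.reload()?;
    assert_eq!(reader.searcher().num_docs(), 1);
    Ok(())
}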


@@ -37,12 +37,12 @@ const LONG_SKIP_INTERVAL: u64 = (LONG_SKIP_IN_BLOCKS * COMPRESSION_BLOCK_SIZE) a
#[cfg(test)] #[cfg(test)]
pub mod tests { pub mod tests {
use super::{PositionReader, PositionSerializer}; use super::PositionSerializer;
use crate::directory::ReadOnlySource; use crate::positions::reader::PositionReader;
use crate::positions::COMPRESSION_BLOCK_SIZE; use crate::{common::HasLen, directory::FileSlice};
use std::iter; use std::iter;
fn create_stream_buffer(vals: &[u32]) -> (ReadOnlySource, ReadOnlySource) { fn create_stream_buffer(vals: &[u32]) -> (FileSlice, FileSlice) {
let mut skip_buffer = vec![]; let mut skip_buffer = vec![];
let mut stream_buffer = vec![]; let mut stream_buffer = vec![];
{ {
@@ -53,10 +53,7 @@ pub mod tests {
} }
serializer.close().unwrap(); serializer.close().unwrap();
} }
( (FileSlice::from(stream_buffer), FileSlice::from(skip_buffer))
ReadOnlySource::from(stream_buffer),
ReadOnlySource::from(skip_buffer),
)
} }
#[test] #[test]
@@ -65,10 +62,10 @@ pub mod tests {
let (stream, skip) = create_stream_buffer(&v[..]); let (stream, skip) = create_stream_buffer(&v[..]);
assert_eq!(skip.len(), 12); assert_eq!(skip.len(), 12);
assert_eq!(stream.len(), 1168); assert_eq!(stream.len(), 1168);
let mut position_reader = PositionReader::new(stream, skip, 0u64); let mut position_reader = PositionReader::new(stream, skip, 0u64).unwrap();
for &n in &[1, 10, 127, 128, 130, 312] { for &n in &[1, 10, 127, 128, 130, 312] {
let mut v = vec![0u32; n]; let mut v = vec![0u32; n];
position_reader.read(&mut v[..n]); position_reader.read(0, &mut v[..]);
for i in 0..n { for i in 0..n {
assert_eq!(v[i], i as u32); assert_eq!(v[i], i as u32);
} }
@@ -76,19 +73,19 @@ pub mod tests {
} }
#[test] #[test]
fn test_position_skip() { fn test_position_read_with_offset() {
let v: Vec<u32> = (0..1_000).collect(); let v: Vec<u32> = (0..1000).collect();
let (stream, skip) = create_stream_buffer(&v[..]); let (stream, skip) = create_stream_buffer(&v[..]);
assert_eq!(skip.len(), 12); assert_eq!(skip.len(), 12);
assert_eq!(stream.len(), 1168); assert_eq!(stream.len(), 1168);
let mut position_reader = PositionReader::new(stream, skip, 0u64).unwrap();
let mut position_reader = PositionReader::new(stream, skip, 0u64); for &offset in &[1u64, 10u64, 127u64, 128u64, 130u64, 312u64] {
position_reader.skip(10); for &len in &[1, 10, 130, 500] {
for &n in &[10, 127, COMPRESSION_BLOCK_SIZE, 130, 312] { let mut v = vec![0u32; len];
let mut v = vec![0u32; n]; position_reader.read(offset, &mut v[..]);
position_reader.read(&mut v[..n]); for i in 0..len {
for i in 0..n { assert_eq!(v[i], i as u32 + offset as u32);
assert_eq!(v[i], 10u32 + i as u32); }
} }
} }
} }
@@ -100,14 +97,15 @@ pub mod tests {
assert_eq!(skip.len(), 12); assert_eq!(skip.len(), 12);
assert_eq!(stream.len(), 1168); assert_eq!(stream.len(), 1168);
let mut position_reader = PositionReader::new(stream, skip, 0u64); let mut position_reader = PositionReader::new(stream, skip, 0u64).unwrap();
let mut buf = [0u32; 7]; let mut buf = [0u32; 7];
let mut c = 0; let mut c = 0;
let mut offset = 0;
for _ in 0..100 { for _ in 0..100 {
position_reader.read(&mut buf); position_reader.read(offset, &mut buf);
position_reader.read(&mut buf); position_reader.read(offset, &mut buf);
position_reader.skip(4); offset += 7;
position_reader.skip(3);
for &el in &buf { for &el in &buf {
assert_eq!(c, el); assert_eq!(c, el);
c += 1; c += 1;
@@ -115,6 +113,59 @@ pub mod tests {
} }
} }
#[test]
fn test_position_reread_anchor_different_than_block() {
let v: Vec<u32> = (0..2_000_000).collect();
let (stream, skip) = create_stream_buffer(&v[..]);
assert_eq!(skip.len(), 15_749);
assert_eq!(stream.len(), 4_987_872);
let mut position_reader = PositionReader::new(stream.clone(), skip.clone(), 0).unwrap();
let mut buf = [0u32; 256];
position_reader.read(128, &mut buf);
for i in 0..256 {
assert_eq!(buf[i], (128 + i) as u32);
}
position_reader.read(128, &mut buf);
for i in 0..256 {
assert_eq!(buf[i], (128 + i) as u32);
}
}
#[test]
#[should_panic(expected = "offset arguments should be increasing.")]
fn test_position_panic_if_called_previous_anchor() {
let v: Vec<u32> = (0..2_000_000).collect();
let (stream, skip) = create_stream_buffer(&v[..]);
assert_eq!(skip.len(), 15_749);
assert_eq!(stream.len(), 4_987_872);
let mut buf = [0u32; 1];
let mut position_reader =
PositionReader::new(stream.clone(), skip.clone(), 200_000).unwrap();
position_reader.read(230, &mut buf);
position_reader.read(9, &mut buf);
}
#[test]
fn test_positions_bug() {
let mut v: Vec<u32> = vec![];
for i in 1..200 {
for j in 0..i {
v.push(j);
}
}
let (stream, skip) = create_stream_buffer(&v[..]);
let mut buf = Vec::new();
let mut position_reader = PositionReader::new(stream.clone(), skip.clone(), 0).unwrap();
let mut offset = 0;
for i in 1..24 {
buf.resize(i, 0);
position_reader.read(offset, &mut buf[..]);
offset += i as u64;
let r: Vec<u32> = (0..i).map(|el| el as u32).collect();
assert_eq!(buf, &r[..]);
}
}
#[test] #[test]
fn test_position_long_skip_const() { fn test_position_long_skip_const() {
const CONST_VAL: u32 = 9u32; const CONST_VAL: u32 = 9u32;
@@ -122,9 +173,9 @@ pub mod tests {
let (stream, skip) = create_stream_buffer(&v[..]); let (stream, skip) = create_stream_buffer(&v[..]);
assert_eq!(skip.len(), 15_749); assert_eq!(skip.len(), 15_749);
assert_eq!(stream.len(), 1_000_000); assert_eq!(stream.len(), 1_000_000);
let mut position_reader = PositionReader::new(stream, skip, 128 * 1024); let mut position_reader = PositionReader::new(stream, skip, 128 * 1024).unwrap();
let mut buf = [0u32; 1]; let mut buf = [0u32; 1];
position_reader.read(&mut buf); position_reader.read(0, &mut buf);
assert_eq!(buf[0], CONST_VAL); assert_eq!(buf[0], CONST_VAL);
} }
@@ -141,9 +192,10 @@ pub mod tests {
128 * 1024 + 7, 128 * 1024 + 7,
128 * 10 * 1024 + 10, 128 * 10 * 1024 + 10,
] { ] {
let mut position_reader = PositionReader::new(stream.clone(), skip.clone(), offset); let mut position_reader =
PositionReader::new(stream.clone(), skip.clone(), offset).unwrap();
let mut buf = [0u32; 1]; let mut buf = [0u32; 1];
position_reader.read(&mut buf); position_reader.read(0, &mut buf);
assert_eq!(buf[0], offset as u32); assert_eq!(buf[0], offset as u32);
} }
} }
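Not part of the diff: a short sketch of the new offset-addressed read contract, reusing the `create_stream_buffer` helper defined in this test module. Offsets are absolute positions and may only grow between calls, as asserted by `test_position_panic_if_called_previous_anchor` above.

let v: Vec<u32> = (0..1000).collect();
let (stream, skip) = create_stream_buffer(&v[..]);
let mut position_reader = PositionReader::new(stream, skip, 0u64).unwrap();
let mut buf = [0u32; 4];
position_reader.read(10, &mut buf);
assert_eq!(buf, [10, 11, 12, 13]);
position_reader.read(500, &mut buf);    // fine: offsets never decrease
assert_eq!(buf, [500, 501, 502, 503]);
// position_reader.read(9, &mut buf);   // would panic: "offset arguments should be increasing."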


@@ -1,9 +1,13 @@
use std::io;
use crate::common::{BinarySerializable, FixedSize}; use crate::common::{BinarySerializable, FixedSize};
use crate::directory::ReadOnlySource; use crate::directory::FileSlice;
use crate::directory::OwnedBytes;
use crate::positions::COMPRESSION_BLOCK_SIZE; use crate::positions::COMPRESSION_BLOCK_SIZE;
use crate::positions::LONG_SKIP_INTERVAL; use crate::positions::LONG_SKIP_INTERVAL;
use crate::positions::LONG_SKIP_IN_BLOCKS; use crate::positions::LONG_SKIP_IN_BLOCKS;
use crate::postings::compression::compressed_block_size; use bitpacking::{BitPacker, BitPacker4x};
/// Positions works as a long sequence of compressed block. /// Positions works as a long sequence of compressed block.
/// All terms are chained one after the other. /// All terms are chained one after the other.
/// ///
@@ -24,28 +28,28 @@ use crate::postings::compression::compressed_block_size;
/// A given block obviously takes `(128 x num_bit_for_the_block / num_bits_in_a_byte)`, /// A given block obviously takes `(128 x num_bit_for_the_block / num_bits_in_a_byte)`,
/// so skipping a block without decompressing it is just a matter of advancing that many /// so skipping a block without decompressing it is just a matter of advancing that many
/// bytes. /// bytes.
use bitpacking::{BitPacker, BitPacker4x};
use owned_read::OwnedRead;
struct Positions { struct Positions {
bit_packer: BitPacker4x, bit_packer: BitPacker4x,
skip_source: ReadOnlySource, skip_file: FileSlice,
position_source: ReadOnlySource, position_file: FileSlice,
long_skip_source: ReadOnlySource, long_skip_data: OwnedBytes,
} }
impl Positions { impl Positions {
pub fn new(position_source: ReadOnlySource, skip_source: ReadOnlySource) -> Positions { pub fn new(position_file: FileSlice, skip_file: FileSlice) -> io::Result<Positions> {
let (body, footer) = skip_source.split_from_end(u32::SIZE_IN_BYTES); let (body, footer) = skip_file.split_from_end(u32::SIZE_IN_BYTES);
let num_long_skips = u32::deserialize(&mut footer.as_slice()).expect("Index corrupted"); let footer_data = footer.read_bytes()?;
let (skip_source, long_skip_source) = let num_long_skips = u32::deserialize(&mut footer_data.as_slice())?;
let (skip_file, long_skip_file) =
body.split_from_end(u64::SIZE_IN_BYTES * (num_long_skips as usize)); body.split_from_end(u64::SIZE_IN_BYTES * (num_long_skips as usize));
Positions { let long_skip_data = long_skip_file.read_bytes()?;
Ok(Positions {
bit_packer: BitPacker4x::new(), bit_packer: BitPacker4x::new(),
skip_source, skip_file,
long_skip_source, long_skip_data,
position_source, position_file,
} })
} }
/// Returns the offset of the block associated to the given `long_skip_id`. /// Returns the offset of the block associated to the given `long_skip_id`.
@@ -55,143 +59,116 @@ impl Positions {
if long_skip_id == 0 { if long_skip_id == 0 {
return 0; return 0;
} }
let long_skip_slice = self.long_skip_source.as_slice(); let long_skip_slice = self.long_skip_data.as_slice();
let mut long_skip_blocks: &[u8] = &long_skip_slice[(long_skip_id - 1) * 8..][..8]; let mut long_skip_blocks: &[u8] = &long_skip_slice[(long_skip_id - 1) * 8..][..8];
u64::deserialize(&mut long_skip_blocks).expect("Index corrupted") u64::deserialize(&mut long_skip_blocks).expect("Index corrupted")
} }
fn reader(&self, offset: u64) -> PositionReader { fn reader(&self, offset: u64) -> io::Result<PositionReader> {
let long_skip_id = (offset / LONG_SKIP_INTERVAL) as usize; let long_skip_id = (offset / LONG_SKIP_INTERVAL) as usize;
let small_skip = (offset % LONG_SKIP_INTERVAL) as usize;
let offset_num_bytes: u64 = self.long_skip(long_skip_id); let offset_num_bytes: u64 = self.long_skip(long_skip_id);
let mut position_read = OwnedRead::new(self.position_source.clone()); let position_read = self
position_read.advance(offset_num_bytes as usize); .position_file
let mut skip_read = OwnedRead::new(self.skip_source.clone()); .slice_from(offset_num_bytes as usize)
skip_read.advance(long_skip_id * LONG_SKIP_IN_BLOCKS); .read_bytes()?;
let mut position_reader = PositionReader { let skip_read = self
.skip_file
.slice_from(long_skip_id * LONG_SKIP_IN_BLOCKS)
.read_bytes()?;
Ok(PositionReader {
bit_packer: self.bit_packer, bit_packer: self.bit_packer,
skip_read, skip_read,
position_read, position_read,
inner_offset: 0,
buffer: Box::new([0u32; 128]), buffer: Box::new([0u32; 128]),
ahead: None, block_offset: std::i64::MAX as u64,
}; anchor_offset: (long_skip_id as u64) * LONG_SKIP_INTERVAL,
position_reader.skip(small_skip); abs_offset: offset,
position_reader })
} }
} }
#[derive(Clone)]
pub struct PositionReader { pub struct PositionReader {
skip_read: OwnedRead, skip_read: OwnedBytes,
position_read: OwnedRead, position_read: OwnedBytes,
bit_packer: BitPacker4x, bit_packer: BitPacker4x,
inner_offset: usize, buffer: Box<[u32; COMPRESSION_BLOCK_SIZE]>,
buffer: Box<[u32; 128]>,
ahead: Option<usize>, // if None, no block is loaded.
// if Some(num_blocks), the block currently loaded is num_blocks ahead
// of the block of the next int to read.
}
// `ahead` represents the offset of the block currently loaded block_offset: u64,
// compared to the cursor of the actual stream. anchor_offset: u64,
//
// By contract, when this function is called, the current block has to be abs_offset: u64,
// decompressed.
//
// If the requested number of els ends exactly at a given block, the next
// block is not decompressed.
fn read_impl(
bit_packer: BitPacker4x,
mut position: &[u8],
buffer: &mut [u32; 128],
mut inner_offset: usize,
num_bits: &[u8],
output: &mut [u32],
) -> usize {
let mut output_start = 0;
let mut output_len = output.len();
let mut ahead = 0;
loop {
let available_len = COMPRESSION_BLOCK_SIZE - inner_offset;
// We have enough elements in the current block.
// Let's copy the requested elements in the output buffer,
// and return.
if output_len <= available_len {
output[output_start..].copy_from_slice(&buffer[inner_offset..][..output_len]);
return ahead;
}
output[output_start..][..available_len].copy_from_slice(&buffer[inner_offset..]);
output_len -= available_len;
output_start += available_len;
inner_offset = 0;
let num_bits = num_bits[ahead];
bit_packer.decompress(position, &mut buffer[..], num_bits);
let block_len = compressed_block_size(num_bits);
position = &position[block_len..];
ahead += 1;
}
} }
impl PositionReader { impl PositionReader {
pub fn new( pub fn new(
position_source: ReadOnlySource, position_file: FileSlice,
skip_source: ReadOnlySource, skip_file: FileSlice,
offset: u64, offset: u64,
) -> PositionReader { ) -> io::Result<PositionReader> {
Positions::new(position_source, skip_source).reader(offset) let positions = Positions::new(position_file, skip_file)?;
positions.reader(offset)
} }
/// Fills a buffer with the next `output.len()` integers. fn advance_num_blocks(&mut self, num_blocks: usize) {
/// This does not consume / advance the stream. let num_bits: usize = self.skip_read.as_ref()[..num_blocks]
pub fn read(&mut self, output: &mut [u32]) { .iter()
let skip_data = self.skip_read.as_ref(); .cloned()
let position_data = self.position_read.as_ref(); .map(|num_bits| num_bits as usize)
let num_bits = self.skip_read.get(0); .sum();
if self.ahead != Some(0) { let num_bytes_to_skip = num_bits * COMPRESSION_BLOCK_SIZE / 8;
// the block currently available is not the block self.skip_read.advance(num_blocks as usize);
// for the current position self.position_read.advance(num_bytes_to_skip);
}
/// Fills the output buffer with the positions in the range `[offset..offset + output.len())`.
///
/// `offset` is required to be greater than or equal to the offsets given in previous calls
/// to the same `PositionReader` instance.
pub fn read(&mut self, mut offset: u64, mut output: &mut [u32]) {
offset += self.abs_offset;
assert!(
offset >= self.anchor_offset,
"offset arguments should be increasing."
);
let delta_to_block_offset = offset as i64 - self.block_offset as i64;
if delta_to_block_offset < 0 || delta_to_block_offset >= 128 {
// The first position requested is not in the currently decoded block:
// skip ahead and decompress the block that contains it.
let delta_to_anchor_offset = offset - self.anchor_offset;
let num_blocks_to_skip =
(delta_to_anchor_offset / (COMPRESSION_BLOCK_SIZE as u64)) as usize;
self.advance_num_blocks(num_blocks_to_skip);
self.anchor_offset = offset - (offset % COMPRESSION_BLOCK_SIZE as u64);
self.block_offset = self.anchor_offset;
let num_bits = self.skip_read.as_slice()[0];
self.bit_packer
.decompress(self.position_read.as_ref(), self.buffer.as_mut(), num_bits);
} else {
let num_blocks_to_skip =
((self.block_offset - self.anchor_offset) / COMPRESSION_BLOCK_SIZE as u64) as usize;
self.advance_num_blocks(num_blocks_to_skip);
self.anchor_offset = self.block_offset;
}
let mut num_bits = self.skip_read.as_slice()[0];
let mut position_data = self.position_read.as_ref();
for i in 1.. {
let offset_in_block = (offset as usize) % COMPRESSION_BLOCK_SIZE;
let remaining_in_block = COMPRESSION_BLOCK_SIZE - offset_in_block;
if remaining_in_block >= output.len() {
output.copy_from_slice(&self.buffer[offset_in_block..][..output.len()]);
break;
}
output[..remaining_in_block].copy_from_slice(&self.buffer[offset_in_block..]);
output = &mut output[remaining_in_block..];
offset += remaining_in_block as u64;
position_data = &position_data[(num_bits as usize * COMPRESSION_BLOCK_SIZE / 8)..];
num_bits = self.skip_read.as_slice()[i];
self.bit_packer self.bit_packer
.decompress(position_data, self.buffer.as_mut(), num_bits); .decompress(position_data, self.buffer.as_mut(), num_bits);
self.ahead = Some(0); self.block_offset += COMPRESSION_BLOCK_SIZE as u64;
} }
let block_len = compressed_block_size(num_bits);
self.ahead = Some(read_impl(
self.bit_packer,
&position_data[block_len..],
self.buffer.as_mut(),
self.inner_offset,
&skip_data[1..],
output,
));
}
/// Skip the next `skip_len` integer.
///
/// If a full block is skipped, calling
/// `.skip(...)` will avoid decompressing it.
///
/// May panic if the end of the stream is reached.
pub fn skip(&mut self, skip_len: usize) {
let skip_len_plus_inner_offset = skip_len + self.inner_offset;
let num_blocks_to_advance = skip_len_plus_inner_offset / COMPRESSION_BLOCK_SIZE;
self.inner_offset = skip_len_plus_inner_offset % COMPRESSION_BLOCK_SIZE;
self.ahead = self.ahead.and_then(|num_blocks| {
if num_blocks >= num_blocks_to_advance {
Some(num_blocks - num_blocks_to_advance)
} else {
None
}
});
let skip_len_in_bits = self.skip_read.as_ref()[..num_blocks_to_advance]
.iter()
.map(|num_bits| *num_bits as usize)
.sum::<usize>()
* COMPRESSION_BLOCK_SIZE;
let skip_len_in_bytes = skip_len_in_bits / 8;
self.skip_read.advance(num_blocks_to_advance);
self.position_read.advance(skip_len_in_bytes);
} }
} }
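To make the arithmetic in `advance_num_blocks` concrete: the skip stream stores one `num_bits` byte per block, and a bit-packed block of 128 positions at, say, 7 bits each occupies 7 * 128 / 8 = 112 bytes. Skipping such a block therefore advances the skip bytes by 1 and the position bytes by 112, without decompressing anything.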


@@ -87,6 +87,7 @@ fn exponential_search(arr: &[u32], target: u32) -> (usize, usize) {
(begin, end) (begin, end)
} }
#[inline(never)]
fn galloping(block_docs: &[u32], target: u32) -> usize { fn galloping(block_docs: &[u32], target: u32) -> usize {
let (start, end) = exponential_search(&block_docs, target); let (start, end) = exponential_search(&block_docs, target);
start + linear_search(&block_docs[start..end], target) start + linear_search(&block_docs[start..end], target)
@@ -129,23 +130,18 @@ impl BlockSearcher {
/// ///
/// If SSE2 instructions are available in the `(platform, running CPU)`, /// If SSE2 instructions are available in the `(platform, running CPU)`,
/// then we use a different implementation that does an exhaustive linear search over /// then we use a different implementation that does an exhaustive linear search over
/// the full block whenever the block is full (`len == 128`). It is surprisingly faster, most likely because of the lack /// the block regardless of whether the block is full or not.
/// of branch. ///
pub(crate) fn search_in_block( /// Indeed, if the block is not full, the remaining items are TERMINATED.
self, /// It is surprisingly faster, most likely because of the lack of branch misprediction.
block_docs: &AlignedBuffer, pub(crate) fn search_in_block(self, block_docs: &AlignedBuffer, target: u32) -> usize {
len: usize,
start: usize,
target: u32,
) -> usize {
#[cfg(target_arch = "x86_64")] #[cfg(target_arch = "x86_64")]
{ {
use crate::postings::compression::COMPRESSION_BLOCK_SIZE; if self == BlockSearcher::SSE2 {
if self == BlockSearcher::SSE2 && len == COMPRESSION_BLOCK_SIZE {
return sse2::linear_search_sse2_128(block_docs, target); return sse2::linear_search_sse2_128(block_docs, target);
} }
} }
start + galloping(&block_docs.0[start..len], target) galloping(&block_docs.0[..], target)
} }
} }
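Not part of the diff: a scalar sketch of the contract the branchless SSE2 search relies on, assuming the usual "first doc >= target" semantics mirrored by `search_in_block_trivial_but_slow` in the tests below (the function name here is illustrative).

fn search_in_block_reference(block: &[u32; 128], target: u32) -> usize {
    // Count entries strictly below `target`. Because partially filled blocks are
    // padded with the TERMINATED sentinel, which is larger than any valid doc id,
    // the padding never contributes to the count.
    block.iter().take_while(|&&doc| doc < target).count()
}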
@@ -166,6 +162,7 @@ mod tests {
use super::exponential_search; use super::exponential_search;
use super::linear_search; use super::linear_search;
use super::BlockSearcher; use super::BlockSearcher;
use crate::docset::TERMINATED;
use crate::postings::compression::{AlignedBuffer, COMPRESSION_BLOCK_SIZE}; use crate::postings::compression::{AlignedBuffer, COMPRESSION_BLOCK_SIZE};
#[test] #[test]
@@ -196,19 +193,12 @@ mod tests {
fn util_test_search_in_block(block_searcher: BlockSearcher, block: &[u32], target: u32) { fn util_test_search_in_block(block_searcher: BlockSearcher, block: &[u32], target: u32) {
let cursor = search_in_block_trivial_but_slow(block, target); let cursor = search_in_block_trivial_but_slow(block, target);
assert!(block.len() < COMPRESSION_BLOCK_SIZE); assert!(block.len() < COMPRESSION_BLOCK_SIZE);
let mut output_buffer = [u32::max_value(); COMPRESSION_BLOCK_SIZE]; let mut output_buffer = [TERMINATED; COMPRESSION_BLOCK_SIZE];
output_buffer[..block.len()].copy_from_slice(block); output_buffer[..block.len()].copy_from_slice(block);
for i in 0..cursor { assert_eq!(
assert_eq!( block_searcher.search_in_block(&AlignedBuffer(output_buffer), target),
block_searcher.search_in_block( cursor
&AlignedBuffer(output_buffer), );
block.len(),
i,
target
),
cursor
);
}
} }
fn util_test_search_in_block_all(block_searcher: BlockSearcher, block: &[u32]) { fn util_test_search_in_block_all(block_searcher: BlockSearcher, block: &[u32]) {


@@ -0,0 +1,530 @@
use std::io;
use crate::common::{BinarySerializable, VInt};
use crate::directory::FileSlice;
use crate::directory::OwnedBytes;
use crate::fieldnorm::FieldNormReader;
use crate::postings::compression::{
AlignedBuffer, BlockDecoder, VIntDecoder, COMPRESSION_BLOCK_SIZE,
};
use crate::postings::{BlockInfo, FreqReadingOption, SkipReader};
use crate::query::BM25Weight;
use crate::schema::IndexRecordOption;
use crate::{DocId, Score, TERMINATED};
fn max_score<I: Iterator<Item = Score>>(mut it: I) -> Option<Score> {
if let Some(first) = it.next() {
Some(it.fold(first, Score::max))
} else {
None
}
}
/// `BlockSegmentPostings` is a cursor iterating over blocks
/// of documents.
///
/// # Warning
///
/// While it is useful for some very specific high-performance
/// use cases, you should prefer using `SegmentPostings` for most usage.
#[derive(Clone)]
pub struct BlockSegmentPostings {
pub(crate) doc_decoder: BlockDecoder,
loaded_offset: usize,
freq_decoder: BlockDecoder,
freq_reading_option: FreqReadingOption,
block_max_score_cache: Option<Score>,
doc_freq: u32,
data: OwnedBytes,
pub(crate) skip_reader: SkipReader,
}
fn decode_bitpacked_block(
doc_decoder: &mut BlockDecoder,
freq_decoder_opt: Option<&mut BlockDecoder>,
data: &[u8],
doc_offset: DocId,
doc_num_bits: u8,
tf_num_bits: u8,
) {
let num_consumed_bytes = doc_decoder.uncompress_block_sorted(data, doc_offset, doc_num_bits);
if let Some(freq_decoder) = freq_decoder_opt {
freq_decoder.uncompress_block_unsorted(&data[num_consumed_bytes..], tf_num_bits);
}
}
fn decode_vint_block(
doc_decoder: &mut BlockDecoder,
freq_decoder_opt: Option<&mut BlockDecoder>,
data: &[u8],
doc_offset: DocId,
num_vint_docs: usize,
) {
let num_consumed_bytes =
doc_decoder.uncompress_vint_sorted(data, doc_offset, num_vint_docs, TERMINATED);
if let Some(freq_decoder) = freq_decoder_opt {
freq_decoder.uncompress_vint_unsorted(
&data[num_consumed_bytes..],
num_vint_docs,
TERMINATED,
);
}
}
fn split_into_skips_and_postings(
doc_freq: u32,
mut bytes: OwnedBytes,
) -> (Option<OwnedBytes>, OwnedBytes) {
if doc_freq < COMPRESSION_BLOCK_SIZE as u32 {
return (None, bytes);
}
let skip_len = VInt::deserialize(&mut bytes).expect("Data corrupted").0 as usize;
let (skip_data, postings_data) = bytes.split(skip_len);
(Some(skip_data), postings_data)
}
impl BlockSegmentPostings {
pub(crate) fn open(
doc_freq: u32,
data: FileSlice,
record_option: IndexRecordOption,
requested_option: IndexRecordOption,
) -> io::Result<BlockSegmentPostings> {
let freq_reading_option = match (record_option, requested_option) {
(IndexRecordOption::Basic, _) => FreqReadingOption::NoFreq,
(_, IndexRecordOption::Basic) => FreqReadingOption::SkipFreq,
(_, _) => FreqReadingOption::ReadFreq,
};
let (skip_data_opt, postings_data) =
split_into_skips_and_postings(doc_freq, data.read_bytes()?);
let skip_reader = match skip_data_opt {
Some(skip_data) => SkipReader::new(skip_data, doc_freq, record_option),
None => SkipReader::new(OwnedBytes::empty(), doc_freq, record_option),
};
let mut block_segment_postings = BlockSegmentPostings {
doc_decoder: BlockDecoder::with_val(TERMINATED),
loaded_offset: std::usize::MAX,
freq_decoder: BlockDecoder::with_val(1),
freq_reading_option,
block_max_score_cache: None,
doc_freq,
data: postings_data,
skip_reader,
};
block_segment_postings.load_block();
Ok(block_segment_postings)
}
/// Returns the block_max_score for the current block.
/// It does not require the block to be loaded. For instance, it is ok to call this method
/// after having called `.shallow_advance(..)`.
///
/// See `TermScorer::block_max_score(..)` for more information.
pub fn block_max_score(
&mut self,
fieldnorm_reader: &FieldNormReader,
bm25_weight: &BM25Weight,
) -> Score {
if let Some(score) = self.block_max_score_cache {
return score;
}
if let Some(skip_reader_max_score) = self.skip_reader.block_max_score(bm25_weight) {
// if we are on a full block, the skip reader should have the block max information
// for us
self.block_max_score_cache = Some(skip_reader_max_score);
return skip_reader_max_score;
}
// this is the last block of the segment posting list.
// If it is actually loaded, we can compute block max manually.
if self.block_is_loaded() {
let docs = self.doc_decoder.output_array().iter().cloned();
let freqs = self.freq_decoder.output_array().iter().cloned();
let bm25_scores = docs.zip(freqs).map(|(doc, term_freq)| {
let fieldnorm_id = fieldnorm_reader.fieldnorm_id(doc);
bm25_weight.score(fieldnorm_id, term_freq)
});
let block_max_score = max_score(bm25_scores).unwrap_or(0.0);
self.block_max_score_cache = Some(block_max_score);
return block_max_score;
}
        // We do not have access to any good block max value. We return bm25_weight.max_score()
        // as it is a valid upper bound.
        //
        // We do not cache it however, so that it gets computed once the block is loaded.
bm25_weight.max_score()
}
pub(crate) fn freq_reading_option(&self) -> FreqReadingOption {
self.freq_reading_option
}
    // Resets the block segment postings to another position
    // in the postings file.
//
// This is useful for enumerating through a list of terms,
// and consuming the associated posting lists while avoiding
// reallocating a `BlockSegmentPostings`.
//
// # Warning
//
// This does not reset the positions list.
pub(crate) fn reset(&mut self, doc_freq: u32, postings_data: OwnedBytes) {
let (skip_data_opt, postings_data) = split_into_skips_and_postings(doc_freq, postings_data);
self.data = postings_data;
self.block_max_score_cache = None;
self.loaded_offset = std::usize::MAX;
if let Some(skip_data) = skip_data_opt {
self.skip_reader.reset(skip_data, doc_freq);
} else {
self.skip_reader.reset(OwnedBytes::empty(), doc_freq);
}
self.doc_freq = doc_freq;
self.load_block();
}
    /// Returns the overall number of documents in the block postings.
    /// It does not take into account whether documents are deleted or not.
    ///
    /// This `doc_freq` is simply the sum of the lengths of all of the blocks,
    /// and it does not take into account deleted documents.
pub fn doc_freq(&self) -> u32 {
self.doc_freq
}
/// Returns the array of docs in the current block.
///
/// Before the first call to `.advance()`, the block
/// returned by `.docs()` is empty.
#[inline]
pub fn docs(&self) -> &[DocId] {
debug_assert!(self.block_is_loaded());
self.doc_decoder.output_array()
}
    /// Returns a full block, regardless of whether the block is complete or incomplete
    /// (as happens for the last block of the posting list).
///
/// In the latter case, the block is guaranteed to be padded with the sentinel value:
/// `TERMINATED`. The array is also guaranteed to be aligned on 16 bytes = 128 bits.
///
/// This method is useful to run SSE2 linear search.
#[inline(always)]
pub(crate) fn docs_aligned(&self) -> &AlignedBuffer {
debug_assert!(self.block_is_loaded());
self.doc_decoder.output_aligned()
}
/// Return the document at index `idx` of the block.
#[inline(always)]
pub fn doc(&self, idx: usize) -> u32 {
self.doc_decoder.output(idx)
}
/// Return the array of `term freq` in the block.
#[inline]
pub fn freqs(&self) -> &[u32] {
debug_assert!(self.block_is_loaded());
self.freq_decoder.output_array()
}
/// Return the frequency at index `idx` of the block.
#[inline]
pub fn freq(&self, idx: usize) -> u32 {
debug_assert!(self.block_is_loaded());
self.freq_decoder.output(idx)
}
/// Returns the length of the current block.
///
    /// All blocks have a length of `NUM_DOCS_PER_BLOCK`,
    /// except the last block, which may have any length
    /// between 1 and `NUM_DOCS_PER_BLOCK - 1`.
#[inline]
pub fn block_len(&self) -> usize {
debug_assert!(self.block_is_loaded());
self.doc_decoder.output_len
}
    /// Positions the cursor on a block that may contain `target_doc`.
    ///
    /// If all docs are smaller than the target, the block loaded may be empty,
    /// or be the last, incomplete VInt block.
pub fn seek(&mut self, target_doc: DocId) {
self.shallow_seek(target_doc);
self.load_block();
}
pub(crate) fn position_offset(&self) -> u64 {
self.skip_reader.position_offset()
}
/// Dangerous API! This calls seek on the skip list,
/// but does not `.load_block()` afterwards.
///
/// `.load_block()` needs to be called manually afterwards.
    /// If all docs are smaller than the target, the block loaded may be empty,
    /// or be the last, incomplete VInt block.
pub(crate) fn shallow_seek(&mut self, target_doc: DocId) {
if self.skip_reader.seek(target_doc) {
self.block_max_score_cache = None;
}
}
pub(crate) fn block_is_loaded(&self) -> bool {
self.loaded_offset == self.skip_reader.byte_offset()
}
pub(crate) fn load_block(&mut self) {
let offset = self.skip_reader.byte_offset();
if self.loaded_offset == offset {
return;
}
self.loaded_offset = offset;
match self.skip_reader.block_info() {
BlockInfo::BitPacked {
doc_num_bits,
tf_num_bits,
..
} => {
decode_bitpacked_block(
&mut self.doc_decoder,
if let FreqReadingOption::ReadFreq = self.freq_reading_option {
Some(&mut self.freq_decoder)
} else {
None
},
&self.data.as_slice()[offset..],
self.skip_reader.last_doc_in_previous_block,
doc_num_bits,
tf_num_bits,
);
}
BlockInfo::VInt { num_docs } => {
let data = {
if num_docs == 0 {
&[]
} else {
&self.data.as_slice()[offset..]
}
};
decode_vint_block(
&mut self.doc_decoder,
if let FreqReadingOption::ReadFreq = self.freq_reading_option {
Some(&mut self.freq_decoder)
} else {
None
},
data,
self.skip_reader.last_doc_in_previous_block,
num_docs as usize,
);
}
}
}
/// Advance to the next block.
pub fn advance(&mut self) {
self.skip_reader.advance();
self.block_max_score_cache = None;
self.load_block();
}
/// Returns an empty segment postings object
pub fn empty() -> BlockSegmentPostings {
BlockSegmentPostings {
doc_decoder: BlockDecoder::with_val(TERMINATED),
loaded_offset: 0,
freq_decoder: BlockDecoder::with_val(1),
freq_reading_option: FreqReadingOption::NoFreq,
block_max_score_cache: None,
doc_freq: 0,
data: OwnedBytes::empty(),
skip_reader: SkipReader::new(OwnedBytes::empty(), 0, IndexRecordOption::Basic),
}
}
}
#[cfg(test)]
mod tests {
use super::BlockSegmentPostings;
use crate::common::HasLen;
use crate::core::Index;
use crate::docset::{DocSet, TERMINATED};
use crate::postings::compression::COMPRESSION_BLOCK_SIZE;
use crate::postings::postings::Postings;
use crate::postings::SegmentPostings;
use crate::schema::IndexRecordOption;
use crate::schema::Schema;
use crate::schema::Term;
use crate::schema::INDEXED;
use crate::DocId;
#[test]
fn test_empty_segment_postings() {
let mut postings = SegmentPostings::empty();
assert_eq!(postings.doc(), TERMINATED);
assert_eq!(postings.advance(), TERMINATED);
assert_eq!(postings.advance(), TERMINATED);
assert_eq!(postings.doc_freq(), 0);
assert_eq!(postings.len(), 0);
}
#[test]
fn test_empty_postings_doc_returns_terminated() {
let mut postings = SegmentPostings::empty();
assert_eq!(postings.doc(), TERMINATED);
assert_eq!(postings.advance(), TERMINATED);
}
#[test]
fn test_empty_postings_doc_term_freq_returns_0() {
let postings = SegmentPostings::empty();
assert_eq!(postings.term_freq(), 1);
}
#[test]
fn test_empty_block_segment_postings() {
let mut postings = BlockSegmentPostings::empty();
assert!(postings.docs().is_empty());
assert_eq!(postings.doc_freq(), 0);
postings.advance();
assert!(postings.docs().is_empty());
assert_eq!(postings.doc_freq(), 0);
}
#[test]
fn test_block_segment_postings() {
let mut block_segments = build_block_postings(&(0..100_000).collect::<Vec<u32>>());
let mut offset: u32 = 0u32;
// checking that the `doc_freq` is correct
assert_eq!(block_segments.doc_freq(), 100_000);
loop {
let block = block_segments.docs();
if block.is_empty() {
break;
}
for (i, doc) in block.iter().cloned().enumerate() {
assert_eq!(offset + (i as u32), doc);
}
offset += block.len() as u32;
block_segments.advance();
}
}
#[test]
fn test_skip_right_at_new_block() {
let mut doc_ids = (0..128).collect::<Vec<u32>>();
// 128 is missing
doc_ids.push(129);
doc_ids.push(130);
{
let block_segments = build_block_postings(&doc_ids);
let mut docset = SegmentPostings::from_block_postings(block_segments, None);
assert_eq!(docset.seek(128), 129);
assert_eq!(docset.doc(), 129);
assert_eq!(docset.advance(), 130);
assert_eq!(docset.doc(), 130);
assert_eq!(docset.advance(), TERMINATED);
}
{
let block_segments = build_block_postings(&doc_ids);
let mut docset = SegmentPostings::from_block_postings(block_segments, None);
assert_eq!(docset.seek(129), 129);
assert_eq!(docset.doc(), 129);
assert_eq!(docset.advance(), 130);
assert_eq!(docset.doc(), 130);
assert_eq!(docset.advance(), TERMINATED);
}
{
let block_segments = build_block_postings(&doc_ids);
let mut docset = SegmentPostings::from_block_postings(block_segments, None);
assert_eq!(docset.doc(), 0);
assert_eq!(docset.seek(131), TERMINATED);
assert_eq!(docset.doc(), TERMINATED);
}
}
fn build_block_postings(docs: &[DocId]) -> BlockSegmentPostings {
let mut schema_builder = Schema::builder();
let int_field = schema_builder.add_u64_field("id", INDEXED);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests().unwrap();
let mut last_doc = 0u32;
for &doc in docs {
for _ in last_doc..doc {
index_writer.add_document(doc!(int_field=>1u64));
}
index_writer.add_document(doc!(int_field=>0u64));
last_doc = doc + 1;
}
index_writer.commit().unwrap();
let searcher = index.reader().unwrap().searcher();
let segment_reader = searcher.segment_reader(0);
let inverted_index = segment_reader.inverted_index(int_field).unwrap();
let term = Term::from_field_u64(int_field, 0u64);
let term_info = inverted_index.get_term_info(&term).unwrap();
inverted_index
.read_block_postings_from_terminfo(&term_info, IndexRecordOption::Basic)
.unwrap()
}
#[test]
fn test_block_segment_postings_seek() {
let mut docs = vec![0];
for i in 0..1300 {
docs.push((i * i / 100) + i);
}
let mut block_postings = build_block_postings(&docs[..]);
for i in vec![0, 424, 10000] {
block_postings.seek(i);
let docs = block_postings.docs();
assert!(docs[0] <= i);
assert!(docs.last().cloned().unwrap_or(0u32) >= i);
}
block_postings.seek(100_000);
assert_eq!(block_postings.doc(COMPRESSION_BLOCK_SIZE - 1), TERMINATED);
}
#[test]
fn test_reset_block_segment_postings() -> crate::Result<()> {
let mut schema_builder = Schema::builder();
let int_field = schema_builder.add_u64_field("id", INDEXED);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
        // Create two posting lists: one containing even numbers,
        // the other containing odd numbers.
for i in 0..6 {
let doc = doc!(int_field=> (i % 2) as u64);
index_writer.add_document(doc);
}
index_writer.commit()?;
let searcher = index.reader()?.searcher();
let segment_reader = searcher.segment_reader(0);
let mut block_segments;
{
let term = Term::from_field_u64(int_field, 0u64);
let inverted_index = segment_reader.inverted_index(int_field)?;
let term_info = inverted_index.get_term_info(&term).unwrap();
block_segments = inverted_index
.read_block_postings_from_terminfo(&term_info, IndexRecordOption::Basic)?;
}
assert_eq!(block_segments.docs(), &[0, 2, 4]);
{
let term = Term::from_field_u64(int_field, 1u64);
let inverted_index = segment_reader.inverted_index(int_field)?;
let term_info = inverted_index.get_term_info(&term).unwrap();
inverted_index.reset_block_postings_from_terminfo(&term_info, &mut block_segments)?;
}
assert_eq!(block_segments.docs(), &[1, 3, 5]);
Ok(())
}
}
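Not part of the diff: a minimal sketch of consuming a posting list block by block, mirroring `test_block_segment_postings` above (`build_block_postings` is the test helper defined in this module).

let mut block_segments = build_block_postings(&(0..100_000).collect::<Vec<u32>>());
loop {
    let block = block_segments.docs();
    if block.is_empty() {
        // Past the last block, `docs()` returns an empty slice.
        break;
    }
    // ... consume the up-to-128 doc ids in `block` ...
    block_segments.advance();
}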


@@ -17,6 +17,12 @@ pub struct BlockEncoder {
pub output_len: usize, pub output_len: usize,
} }
impl Default for BlockEncoder {
fn default() -> Self {
BlockEncoder::new()
}
}
impl BlockEncoder { impl BlockEncoder {
pub fn new() -> BlockEncoder { pub fn new() -> BlockEncoder {
BlockEncoder { BlockEncoder {
@@ -46,19 +52,23 @@ impl BlockEncoder {
/// We ensure that the OutputBuffer is align on 128 bits /// We ensure that the OutputBuffer is align on 128 bits
/// in order to run SSE2 linear search on it. /// in order to run SSE2 linear search on it.
#[repr(align(128))] #[repr(align(128))]
#[derive(Clone)]
pub(crate) struct AlignedBuffer(pub [u32; COMPRESSION_BLOCK_SIZE]); pub(crate) struct AlignedBuffer(pub [u32; COMPRESSION_BLOCK_SIZE]);
#[derive(Clone)]
pub struct BlockDecoder { pub struct BlockDecoder {
bitpacker: BitPacker4x, bitpacker: BitPacker4x,
output: AlignedBuffer, output: AlignedBuffer,
pub output_len: usize, pub output_len: usize,
} }
impl BlockDecoder { impl Default for BlockDecoder {
pub fn new() -> BlockDecoder { fn default() -> Self {
BlockDecoder::with_val(0u32) BlockDecoder::with_val(0u32)
} }
}
impl BlockDecoder {
pub fn with_val(val: u32) -> BlockDecoder { pub fn with_val(val: u32) -> BlockDecoder {
BlockDecoder { BlockDecoder {
bitpacker: BitPacker4x::new(), bitpacker: BitPacker4x::new(),
@@ -90,8 +100,8 @@ impl BlockDecoder {
} }
#[inline] #[inline]
pub(crate) fn output_aligned(&self) -> (&AlignedBuffer, usize) { pub(crate) fn output_aligned(&self) -> &AlignedBuffer {
(&self.output, self.output_len) &self.output
} }
#[inline] #[inline]
@@ -134,11 +144,14 @@ pub trait VIntDecoder {
/// For instance, if delta encoded are `1, 3, 9`, and the /// For instance, if delta encoded are `1, 3, 9`, and the
/// `offset` is 5, then the output will be: /// `offset` is 5, then the output will be:
/// `5 + 1 = 6, 6 + 3= 9, 9 + 9 = 18` /// `5 + 1 = 6, 6 + 3= 9, 9 + 9 = 18`
fn uncompress_vint_sorted<'a>( ///
/// The value given in `padding` will be used to fill the remaining `128 - num_els` values.
fn uncompress_vint_sorted(
&mut self, &mut self,
compressed_data: &'a [u8], compressed_data: &[u8],
offset: u32, offset: u32,
num_els: usize, num_els: usize,
padding: u32,
) -> usize; ) -> usize;
/// Uncompress an array of `u32s`, compressed using variable /// Uncompress an array of `u32s`, compressed using variable
@@ -146,7 +159,14 @@ pub trait VIntDecoder {
/// ///
/// The method takes a number of int to decompress, and returns /// The method takes a number of int to decompress, and returns
/// the amount of bytes that were read to decompress them. /// the amount of bytes that were read to decompress them.
fn uncompress_vint_unsorted<'a>(&mut self, compressed_data: &'a [u8], num_els: usize) -> usize; ///
/// The value given in `padding` will be used to fill the remaining `128 - num_els` values.
fn uncompress_vint_unsorted(
&mut self,
compressed_data: &[u8],
num_els: usize,
padding: u32,
) -> usize;
} }
impl VIntEncoder for BlockEncoder { impl VIntEncoder for BlockEncoder {
@@ -160,18 +180,26 @@ impl VIntEncoder for BlockEncoder {
} }
impl VIntDecoder for BlockDecoder { impl VIntDecoder for BlockDecoder {
fn uncompress_vint_sorted<'a>( fn uncompress_vint_sorted(
&mut self, &mut self,
compressed_data: &'a [u8], compressed_data: &[u8],
offset: u32, offset: u32,
num_els: usize, num_els: usize,
padding: u32,
) -> usize { ) -> usize {
self.output_len = num_els; self.output_len = num_els;
self.output.0.iter_mut().for_each(|el| *el = padding);
vint::uncompress_sorted(compressed_data, &mut self.output.0[..num_els], offset) vint::uncompress_sorted(compressed_data, &mut self.output.0[..num_els], offset)
} }
fn uncompress_vint_unsorted<'a>(&mut self, compressed_data: &'a [u8], num_els: usize) -> usize { fn uncompress_vint_unsorted(
&mut self,
compressed_data: &[u8],
num_els: usize,
padding: u32,
) -> usize {
self.output_len = num_els; self.output_len = num_els;
self.output.0.iter_mut().for_each(|el| *el = padding);
vint::uncompress_unsorted(compressed_data, &mut self.output.0[..num_els]) vint::uncompress_unsorted(compressed_data, &mut self.output.0[..num_els])
} }
} }
@@ -180,13 +208,14 @@ impl VIntDecoder for BlockDecoder {
pub mod tests { pub mod tests {
use super::*; use super::*;
use crate::TERMINATED;
#[test] #[test]
fn test_encode_sorted_block() { fn test_encode_sorted_block() {
let vals: Vec<u32> = (0u32..128u32).map(|i| i * 7).collect(); let vals: Vec<u32> = (0u32..128u32).map(|i| i * 7).collect();
let mut encoder = BlockEncoder::new(); let mut encoder = BlockEncoder::new();
let (num_bits, compressed_data) = encoder.compress_block_sorted(&vals, 0); let (num_bits, compressed_data) = encoder.compress_block_sorted(&vals, 0);
let mut decoder = BlockDecoder::new(); let mut decoder = BlockDecoder::default();
{ {
let consumed_num_bytes = decoder.uncompress_block_sorted(compressed_data, 0, num_bits); let consumed_num_bytes = decoder.uncompress_block_sorted(compressed_data, 0, num_bits);
assert_eq!(consumed_num_bytes, compressed_data.len()); assert_eq!(consumed_num_bytes, compressed_data.len());
@@ -199,9 +228,9 @@ pub mod tests {
#[test] #[test]
fn test_encode_sorted_block_with_offset() { fn test_encode_sorted_block_with_offset() {
let vals: Vec<u32> = (0u32..128u32).map(|i| 11 + i * 7).collect(); let vals: Vec<u32> = (0u32..128u32).map(|i| 11 + i * 7).collect();
let mut encoder = BlockEncoder::new(); let mut encoder = BlockEncoder::default();
let (num_bits, compressed_data) = encoder.compress_block_sorted(&vals, 10); let (num_bits, compressed_data) = encoder.compress_block_sorted(&vals, 10);
let mut decoder = BlockDecoder::new(); let mut decoder = BlockDecoder::default();
{ {
let consumed_num_bytes = decoder.uncompress_block_sorted(compressed_data, 10, num_bits); let consumed_num_bytes = decoder.uncompress_block_sorted(compressed_data, 10, num_bits);
assert_eq!(consumed_num_bytes, compressed_data.len()); assert_eq!(consumed_num_bytes, compressed_data.len());
@@ -216,11 +245,11 @@ pub mod tests {
let mut compressed: Vec<u8> = Vec::new(); let mut compressed: Vec<u8> = Vec::new();
let n = 128; let n = 128;
let vals: Vec<u32> = (0..n).map(|i| 11u32 + (i as u32) * 7u32).collect(); let vals: Vec<u32> = (0..n).map(|i| 11u32 + (i as u32) * 7u32).collect();
let mut encoder = BlockEncoder::new(); let mut encoder = BlockEncoder::default();
let (num_bits, compressed_data) = encoder.compress_block_sorted(&vals, 10); let (num_bits, compressed_data) = encoder.compress_block_sorted(&vals, 10);
compressed.extend_from_slice(compressed_data); compressed.extend_from_slice(compressed_data);
compressed.push(173u8); compressed.push(173u8);
let mut decoder = BlockDecoder::new(); let mut decoder = BlockDecoder::default();
{ {
let consumed_num_bytes = decoder.uncompress_block_sorted(&compressed, 10, num_bits); let consumed_num_bytes = decoder.uncompress_block_sorted(&compressed, 10, num_bits);
assert_eq!(consumed_num_bytes, compressed.len() - 1); assert_eq!(consumed_num_bytes, compressed.len() - 1);
@@ -236,11 +265,11 @@ pub mod tests {
let mut compressed: Vec<u8> = Vec::new(); let mut compressed: Vec<u8> = Vec::new();
let n = 128; let n = 128;
let vals: Vec<u32> = (0..n).map(|i| 11u32 + (i as u32) * 7u32 % 12).collect(); let vals: Vec<u32> = (0..n).map(|i| 11u32 + (i as u32) * 7u32 % 12).collect();
let mut encoder = BlockEncoder::new(); let mut encoder = BlockEncoder::default();
let (num_bits, compressed_data) = encoder.compress_block_unsorted(&vals); let (num_bits, compressed_data) = encoder.compress_block_unsorted(&vals);
compressed.extend_from_slice(compressed_data); compressed.extend_from_slice(compressed_data);
compressed.push(173u8); compressed.push(173u8);
let mut decoder = BlockDecoder::new(); let mut decoder = BlockDecoder::default();
{ {
let consumed_num_bytes = decoder.uncompress_block_unsorted(&compressed, num_bits); let consumed_num_bytes = decoder.uncompress_block_unsorted(&compressed, num_bits);
assert_eq!(consumed_num_bytes + 1, compressed.len()); assert_eq!(consumed_num_bytes + 1, compressed.len());
@@ -251,20 +280,27 @@ pub mod tests {
} }
} }
#[test]
fn test_block_decoder_initialization() {
let block = BlockDecoder::with_val(TERMINATED);
assert_eq!(block.output(0), TERMINATED);
}
#[test] #[test]
fn test_encode_vint() { fn test_encode_vint() {
{ const PADDING_VALUE: u32 = 234_234_345u32;
let expected_length = 154; let expected_length = 154;
let mut encoder = BlockEncoder::new(); let mut encoder = BlockEncoder::new();
let input: Vec<u32> = (0u32..123u32).map(|i| 4 + i * 7 / 2).into_iter().collect(); let input: Vec<u32> = (0u32..123u32).map(|i| 4 + i * 7 / 2).into_iter().collect();
for offset in &[0u32, 1u32, 2u32] { for offset in &[0u32, 1u32, 2u32] {
let encoded_data = encoder.compress_vint_sorted(&input, *offset); let encoded_data = encoder.compress_vint_sorted(&input, *offset);
assert!(encoded_data.len() <= expected_length); assert!(encoded_data.len() <= expected_length);
let mut decoder = BlockDecoder::new(); let mut decoder = BlockDecoder::default();
let consumed_num_bytes = let consumed_num_bytes =
decoder.uncompress_vint_sorted(&encoded_data, *offset, input.len()); decoder.uncompress_vint_sorted(&encoded_data, *offset, input.len(), PADDING_VALUE);
assert_eq!(consumed_num_bytes, encoded_data.len()); assert_eq!(consumed_num_bytes, encoded_data.len());
assert_eq!(input, decoder.output_array()); assert_eq!(input, decoder.output_array());
for i in input.len()..COMPRESSION_BLOCK_SIZE {
assert_eq!(decoder.output(i), PADDING_VALUE);
} }
} }
} }
@@ -274,6 +310,7 @@ pub mod tests {
mod bench { mod bench {
use super::*; use super::*;
use crate::TERMINATED;
use rand::rngs::StdRng; use rand::rngs::StdRng;
use rand::Rng; use rand::Rng;
use rand::SeedableRng; use rand::SeedableRng;
@@ -304,7 +341,7 @@ mod bench {
let mut encoder = BlockEncoder::new(); let mut encoder = BlockEncoder::new();
let data = generate_array(COMPRESSION_BLOCK_SIZE, 0.1); let data = generate_array(COMPRESSION_BLOCK_SIZE, 0.1);
let (num_bits, compressed) = encoder.compress_block_sorted(&data, 0u32); let (num_bits, compressed) = encoder.compress_block_sorted(&data, 0u32);
let mut decoder = BlockDecoder::new(); let mut decoder = BlockDecoder::default();
b.iter(|| { b.iter(|| {
decoder.uncompress_block_sorted(compressed, 0u32, num_bits); decoder.uncompress_block_sorted(compressed, 0u32, num_bits);
}); });
@@ -339,9 +376,9 @@ mod bench {
let mut encoder = BlockEncoder::new(); let mut encoder = BlockEncoder::new();
let data = generate_array(NUM_INTS_BENCH_VINT, 0.001); let data = generate_array(NUM_INTS_BENCH_VINT, 0.001);
let compressed = encoder.compress_vint_sorted(&data, 0u32); let compressed = encoder.compress_vint_sorted(&data, 0u32);
let mut decoder = BlockDecoder::new(); let mut decoder = BlockDecoder::default();
b.iter(|| { b.iter(|| {
decoder.uncompress_vint_sorted(compressed, 0u32, NUM_INTS_BENCH_VINT); decoder.uncompress_vint_sorted(compressed, 0u32, NUM_INTS_BENCH_VINT, TERMINATED);
}); });
} }
} }
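Not part of the diff: a short sketch of the new `padding` argument, mirroring `test_encode_vint` above. Every slot past `num_els` in the decoder's 128-slot output is overwritten with the padding value; this is what lets an incomplete VInt block be padded with `TERMINATED` for the branchless block search.

let input: Vec<u32> = (0u32..100u32).collect();
let mut encoder = BlockEncoder::new();
let encoded_data = encoder.compress_vint_sorted(&input, 0u32);
let mut decoder = BlockDecoder::default();
let consumed = decoder.uncompress_vint_sorted(&encoded_data, 0u32, input.len(), TERMINATED);
assert_eq!(consumed, encoded_data.len());
assert_eq!(decoder.output(99), 99);
assert_eq!(decoder.output(100), TERMINATED); // padded past `num_els`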


@@ -42,7 +42,7 @@ pub(crate) fn compress_unsorted<'a>(input: &[u32], output: &'a mut [u8]) -> &'a
} }
#[inline(always)] #[inline(always)]
pub fn uncompress_sorted<'a>(compressed_data: &'a [u8], output: &mut [u32], offset: u32) -> usize { pub fn uncompress_sorted(compressed_data: &[u8], output: &mut [u32], offset: u32) -> usize {
let mut read_byte = 0; let mut read_byte = 0;
let mut result = offset; let mut result = offset;
for output_mut in output.iter_mut() { for output_mut in output.iter_mut() {


@@ -3,11 +3,8 @@ Postings module (also called inverted index)
*/ */
mod block_search; mod block_search;
mod block_segment_postings;
pub(crate) mod compression; pub(crate) mod compression;
/// Postings module
///
/// Postings, also called inverted lists, is the key datastructure
/// to full-text search.
mod postings; mod postings;
mod postings_writer; mod postings_writer;
mod recorder; mod recorder;
@@ -22,18 +19,15 @@ pub(crate) use self::block_search::BlockSearcher;
pub(crate) use self::postings_writer::MultiFieldPostingsWriter; pub(crate) use self::postings_writer::MultiFieldPostingsWriter;
pub use self::serializer::{FieldSerializer, InvertedIndexSerializer}; pub use self::serializer::{FieldSerializer, InvertedIndexSerializer};
use self::compression::COMPRESSION_BLOCK_SIZE;
pub use self::postings::Postings; pub use self::postings::Postings;
pub(crate) use self::skip::SkipReader; pub(crate) use self::skip::{BlockInfo, SkipReader};
pub use self::term_info::TermInfo; pub use self::term_info::TermInfo;
pub use self::segment_postings::{BlockSegmentPostings, SegmentPostings}; pub use self::block_segment_postings::BlockSegmentPostings;
pub use self::segment_postings::SegmentPostings;
pub(crate) use self::stacker::compute_table_size; pub(crate) use self::stacker::compute_table_size;
pub use crate::common::HasLen;
pub(crate) const USE_SKIP_INFO_LIMIT: u32 = COMPRESSION_BLOCK_SIZE as u32;
pub(crate) type UnorderedTermId = u64; pub(crate) type UnorderedTermId = u64;
#[cfg_attr(feature = "cargo-clippy", allow(clippy::enum_variant_names))] #[cfg_attr(feature = "cargo-clippy", allow(clippy::enum_variant_names))]
@@ -46,12 +40,12 @@ pub(crate) enum FreqReadingOption {
#[cfg(test)] #[cfg(test)]
pub mod tests { pub mod tests {
use super::InvertedIndexSerializer;
use super::*; use super::Postings;
use crate::core::Index; use crate::core::Index;
use crate::core::SegmentComponent; use crate::core::SegmentComponent;
use crate::core::SegmentReader; use crate::core::SegmentReader;
use crate::docset::{DocSet, SkipResult}; use crate::docset::{DocSet, TERMINATED};
use crate::fieldnorm::FieldNormReader; use crate::fieldnorm::FieldNormReader;
use crate::indexer::operation::AddOperation; use crate::indexer::operation::AddOperation;
use crate::indexer::SegmentWriter; use crate::indexer::SegmentWriter;
@@ -62,6 +56,7 @@ pub mod tests {
use crate::schema::{IndexRecordOption, TextFieldIndexing}; use crate::schema::{IndexRecordOption, TextFieldIndexing};
use crate::tokenizer::{SimpleTokenizer, MAX_TOKEN_LEN}; use crate::tokenizer::{SimpleTokenizer, MAX_TOKEN_LEN};
use crate::DocId; use crate::DocId;
use crate::HasLen;
use crate::Score; use crate::Score;
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use rand::rngs::StdRng; use rand::rngs::StdRng;
@@ -69,102 +64,101 @@ pub mod tests {
use std::iter; use std::iter;
#[test] #[test]
pub fn test_position_write() { pub fn test_position_write() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut segment = index.new_segment(); let mut segment = index.new_segment();
let mut posting_serializer = InvertedIndexSerializer::open(&mut segment).unwrap(); let mut posting_serializer = InvertedIndexSerializer::open(&mut segment)?;
{ let mut field_serializer = posting_serializer.new_field(text_field, 120 * 4, None)?;
let mut field_serializer = posting_serializer.new_field(text_field, 120 * 4).unwrap(); field_serializer.new_term("abc".as_bytes(), 12u32)?;
field_serializer.new_term("abc".as_bytes()).unwrap(); for doc_id in 0u32..120u32 {
for doc_id in 0u32..120u32 { let delta_positions = vec![1, 2, 3, 2];
let delta_positions = vec![1, 2, 3, 2]; field_serializer.write_doc(doc_id, 4, &delta_positions)?;
field_serializer
.write_doc(doc_id, 4, &delta_positions)
.unwrap();
}
field_serializer.close_term().unwrap();
} }
posting_serializer.close().unwrap(); field_serializer.close_term()?;
let read = segment.open_read(SegmentComponent::POSITIONS).unwrap(); posting_serializer.close()?;
let read = segment.open_read(SegmentComponent::POSITIONS)?;
assert!(read.len() <= 140); assert!(read.len() <= 140);
Ok(())
} }
#[test] #[test]
pub fn test_skip_positions() { pub fn test_skip_positions() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let title = schema_builder.add_text_field("title", TEXT); let title = schema_builder.add_text_field("title", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 30_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(title => r#"abc abc abc"#)); index_writer.add_document(doc!(title => r#"abc abc abc"#));
index_writer.add_document(doc!(title => r#"abc be be be be abc"#)); index_writer.add_document(doc!(title => r#"abc be be be be abc"#));
for _ in 0..1_000 { for _ in 0..1_000 {
index_writer.add_document(doc!(title => r#"abc abc abc"#)); index_writer.add_document(doc!(title => r#"abc abc abc"#));
} }
index_writer.add_document(doc!(title => r#"abc be be be be abc"#)); index_writer.add_document(doc!(title => r#"abc be be be be abc"#));
index_writer.commit().unwrap(); index_writer.commit()?;
let searcher = index.reader().unwrap().searcher(); let searcher = index.reader()?.searcher();
let inverted_index = searcher.segment_reader(0u32).inverted_index(title); let inverted_index = searcher.segment_reader(0u32).inverted_index(title)?;
let term = Term::from_field_text(title, "abc"); let term = Term::from_field_text(title, "abc");
let mut positions = Vec::new(); let mut positions = Vec::new();
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
postings.advance(); assert_eq!(postings.doc(), 0);
postings.positions(&mut positions); postings.positions(&mut positions);
assert_eq!(&[0, 1, 2], &positions[..]); assert_eq!(&[0, 1, 2], &positions[..]);
postings.positions(&mut positions); postings.positions(&mut positions);
assert_eq!(&[0, 1, 2], &positions[..]); assert_eq!(&[0, 1, 2], &positions[..]);
postings.advance(); assert_eq!(postings.advance(), 1);
postings.positions(&mut positions);
assert_eq!(&[0, 5], &positions[..]);
}
{
let mut postings = inverted_index
.read_postings(&term, IndexRecordOption::WithFreqsAndPositions)
.unwrap();
postings.advance();
postings.advance();
postings.positions(&mut positions);
assert_eq!(&[0, 5], &positions[..]);
}
{
let mut postings = inverted_index
.read_postings(&term, IndexRecordOption::WithFreqsAndPositions)
.unwrap();
assert_eq!(postings.skip_next(1), SkipResult::Reached);
assert_eq!(postings.doc(), 1); assert_eq!(postings.doc(), 1);
postings.positions(&mut positions); postings.positions(&mut positions);
assert_eq!(&[0, 5], &positions[..]); assert_eq!(&[0, 5], &positions[..]);
} }
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert_eq!(postings.skip_next(1002), SkipResult::Reached); assert_eq!(postings.doc(), 0);
assert_eq!(postings.advance(), 1);
postings.positions(&mut positions);
assert_eq!(&[0, 5], &positions[..]);
}
{
let mut postings = inverted_index
.read_postings(&term, IndexRecordOption::WithFreqsAndPositions)?
.unwrap();
assert_eq!(postings.seek(1), 1);
assert_eq!(postings.doc(), 1);
postings.positions(&mut positions);
assert_eq!(&[0, 5], &positions[..]);
}
{
let mut postings = inverted_index
.read_postings(&term, IndexRecordOption::WithFreqsAndPositions)?
.unwrap();
assert_eq!(postings.seek(1002), 1002);
assert_eq!(postings.doc(), 1002); assert_eq!(postings.doc(), 1002);
postings.positions(&mut positions); postings.positions(&mut positions);
assert_eq!(&[0, 5], &positions[..]); assert_eq!(&[0, 5], &positions[..]);
} }
{ {
let mut postings = inverted_index let mut postings = inverted_index
.read_postings(&term, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert_eq!(postings.skip_next(100), SkipResult::Reached); assert_eq!(postings.seek(100), 100);
assert_eq!(postings.skip_next(1002), SkipResult::Reached); assert_eq!(postings.seek(1002), 1002);
assert_eq!(postings.doc(), 1002); assert_eq!(postings.doc(), 1002);
postings.positions(&mut positions); postings.positions(&mut positions);
assert_eq!(&[0, 5], &positions[..]); assert_eq!(&[0, 5], &positions[..]);
} }
Ok(())
} }
#[test] #[test]
pub fn test_drop_token_that_are_too_long() { pub fn test_drop_token_that_are_too_long() -> crate::Result<()> {
let ok_token_text: String = iter::repeat('A').take(MAX_TOKEN_LEN).collect(); let ok_token_text: String = iter::repeat('A').take(MAX_TOKEN_LEN).collect();
let mut exceeding_token_text: String = iter::repeat('A').take(MAX_TOKEN_LEN + 1).collect(); let mut exceeding_token_text: String = iter::repeat('A').take(MAX_TOKEN_LEN + 1).collect();
exceeding_token_text.push_str(" hello"); exceeding_token_text.push_str(" hello");
@@ -181,7 +175,7 @@ pub mod tests {
.tokenizers() .tokenizers()
.register("simple_no_truncation", SimpleTokenizer); .register("simple_no_truncation", SimpleTokenizer);
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.set_merge_policy(Box::new(NoMergePolicy)); index_writer.set_merge_policy(Box::new(NoMergePolicy));
{ {
index_writer.add_document(doc!(text_field=>exceeding_token_text)); index_writer.add_document(doc!(text_field=>exceeding_token_text));
@@ -189,7 +183,7 @@ pub mod tests {
reader.reload().unwrap(); reader.reload().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0u32); let segment_reader = searcher.segment_reader(0u32);
let inverted_index = segment_reader.inverted_index(text_field); let inverted_index = segment_reader.inverted_index(text_field)?;
assert_eq!(inverted_index.terms().num_terms(), 1); assert_eq!(inverted_index.terms().num_terms(), 1);
let mut bytes = vec![]; let mut bytes = vec![];
assert!(inverted_index.terms().ord_to_term(0, &mut bytes)); assert!(inverted_index.terms().ord_to_term(0, &mut bytes));
@@ -201,16 +195,17 @@ pub mod tests {
reader.reload().unwrap(); reader.reload().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(1u32); let segment_reader = searcher.segment_reader(1u32);
let inverted_index = segment_reader.inverted_index(text_field); let inverted_index = segment_reader.inverted_index(text_field)?;
assert_eq!(inverted_index.terms().num_terms(), 1); assert_eq!(inverted_index.terms().num_terms(), 1);
let mut bytes = vec![]; let mut bytes = vec![];
assert!(inverted_index.terms().ord_to_term(0, &mut bytes)); assert!(inverted_index.terms().ord_to_term(0, &mut bytes));
assert_eq!(&bytes[..], ok_token_text.as_bytes()); assert_eq!(&bytes[..], ok_token_text.as_bytes());
} }
Ok(())
} }
#[test] #[test]
pub fn test_position_and_fieldnorm1() { pub fn test_position_and_fieldnorm1() -> crate::Result<()> {
let mut positions = Vec::new(); let mut positions = Vec::new();
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
@@ -222,42 +217,38 @@ pub mod tests {
let mut segment_writer = let mut segment_writer =
SegmentWriter::for_segment(3_000_000, segment.clone(), &schema).unwrap(); SegmentWriter::for_segment(3_000_000, segment.clone(), &schema).unwrap();
{ {
let mut doc = Document::default();
// checking that position works if the field has two values // checking that position works if the field has two values
doc.add_text(text_field, "a b a c a d a a.");
doc.add_text(text_field, "d d d d a");
let op = AddOperation { let op = AddOperation {
opstamp: 0u64, opstamp: 0u64,
document: doc, document: doc!(
text_field => "a b a c a d a a.",
text_field => "d d d d a"
),
}; };
segment_writer.add_document(op, &schema).unwrap(); segment_writer.add_document(op, &schema)?;
} }
{ {
let mut doc = Document::default();
doc.add_text(text_field, "b a");
let op = AddOperation { let op = AddOperation {
opstamp: 1u64, opstamp: 1u64,
document: doc, document: doc!(text_field => "b a"),
}; };
segment_writer.add_document(op, &schema).unwrap(); segment_writer.add_document(op, &schema).unwrap();
} }
for i in 2..1000 { for i in 2..1000 {
let mut doc = Document::default(); let mut text: String = iter::repeat("e ").take(i).collect();
let mut text = iter::repeat("e ").take(i).collect::<String>();
text.push_str(" a"); text.push_str(" a");
doc.add_text(text_field, &text);
let op = AddOperation { let op = AddOperation {
opstamp: 2u64, opstamp: 2u64,
document: doc, document: doc!(text_field => text),
}; };
segment_writer.add_document(op, &schema).unwrap(); segment_writer.add_document(op, &schema).unwrap();
} }
segment_writer.finalize().unwrap(); segment_writer.finalize()?;
} }
{ {
let segment_reader = SegmentReader::open(&segment).unwrap(); let segment_reader = SegmentReader::open(&segment)?;
{ {
let fieldnorm_reader = segment_reader.get_fieldnorms_reader(text_field); let fieldnorm_reader = segment_reader.get_fieldnorms_reader(text_field)?;
assert_eq!(fieldnorm_reader.fieldnorm(0), 8 + 5); assert_eq!(fieldnorm_reader.fieldnorm(0), 8 + 5);
assert_eq!(fieldnorm_reader.fieldnorm(1), 2); assert_eq!(fieldnorm_reader.fieldnorm(1), 2);
for i in 2..1000 { for i in 2..1000 {
@@ -270,43 +261,41 @@ pub mod tests {
{ {
let term_a = Term::from_field_text(text_field, "abcdef"); let term_a = Term::from_field_text(text_field, "abcdef");
assert!(segment_reader assert!(segment_reader
.inverted_index(term_a.field()) .inverted_index(term_a.field())?
.read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions)?
.is_none()); .is_none());
} }
{ {
let term_a = Term::from_field_text(text_field, "a"); let term_a = Term::from_field_text(text_field, "a");
let mut postings_a = segment_reader let mut postings_a = segment_reader
.inverted_index(term_a.field()) .inverted_index(term_a.field())?
.read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert_eq!(postings_a.len(), 1000); assert_eq!(postings_a.len(), 1000);
assert!(postings_a.advance());
assert_eq!(postings_a.doc(), 0); assert_eq!(postings_a.doc(), 0);
assert_eq!(postings_a.term_freq(), 6); assert_eq!(postings_a.term_freq(), 6);
postings_a.positions(&mut positions); postings_a.positions(&mut positions);
assert_eq!(&positions[..], [0, 2, 4, 6, 7, 13]); assert_eq!(&positions[..], [0, 2, 4, 6, 7, 13]);
assert!(postings_a.advance()); assert_eq!(postings_a.advance(), 1u32);
assert_eq!(postings_a.doc(), 1u32); assert_eq!(postings_a.doc(), 1u32);
assert_eq!(postings_a.term_freq(), 1); assert_eq!(postings_a.term_freq(), 1);
for i in 2u32..1000u32 { for i in 2u32..1000u32 {
assert!(postings_a.advance()); assert_eq!(postings_a.advance(), i);
assert_eq!(postings_a.term_freq(), 1); assert_eq!(postings_a.term_freq(), 1);
postings_a.positions(&mut positions); postings_a.positions(&mut positions);
assert_eq!(&positions[..], [i]); assert_eq!(&positions[..], [i]);
assert_eq!(postings_a.doc(), i); assert_eq!(postings_a.doc(), i);
} }
assert!(!postings_a.advance()); assert_eq!(postings_a.advance(), TERMINATED);
} }
{ {
let term_e = Term::from_field_text(text_field, "e"); let term_e = Term::from_field_text(text_field, "e");
let mut postings_e = segment_reader let mut postings_e = segment_reader
.inverted_index(term_e.field()) .inverted_index(term_e.field())?
.read_postings(&term_e, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_e, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert_eq!(postings_e.len(), 1000 - 2); assert_eq!(postings_e.len(), 1000 - 2);
for i in 2u32..1000u32 { for i in 2u32..1000u32 {
assert!(postings_e.advance());
assert_eq!(postings_e.term_freq(), i); assert_eq!(postings_e.term_freq(), i);
postings_e.positions(&mut positions); postings_e.positions(&mut positions);
assert_eq!(positions.len(), i as usize); assert_eq!(positions.len(), i as usize);
@@ -314,48 +303,42 @@ pub mod tests {
assert_eq!(positions[j], (j as u32)); assert_eq!(positions[j], (j as u32));
} }
assert_eq!(postings_e.doc(), i); assert_eq!(postings_e.doc(), i);
postings_e.advance();
} }
assert!(!postings_e.advance()); assert_eq!(postings_e.doc(), TERMINATED);
} }
} }
Ok(())
} }
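The fieldnorm assertions above encode a simple rule: for a text field, the fieldnorm of a document is the total number of tokens across all of that document's values for the field. A small sketch of that rule, under the assumption of naive whitespace tokenization (illustrative only):

fn fieldnorm(values: &[&str]) -> usize {
    values
        .iter()
        .map(|value| value.split_whitespace().count())
        .sum()
}

fn main() {
    // doc 0 in the test above: "a b a c a d a a." (8 tokens) plus "d d d d a" (5 tokens)
    assert_eq!(fieldnorm(&["a b a c a d a a.", "d d d d a"]), 8 + 5);
    // doc 1: "b a"
    assert_eq!(fieldnorm(&["b a"]), 2);
}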
#[test] #[test]
pub fn test_position_and_fieldnorm2() { pub fn test_position_and_fieldnorm2() -> crate::Result<()> {
let mut positions: Vec<u32> = Vec::new(); let mut positions: Vec<u32> = Vec::new();
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT); let text_field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
{ {
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
{ index_writer.add_document(doc!(text_field => "g b b d c g c"));
let mut doc = Document::default(); index_writer.add_document(doc!(text_field => "g a b b a d c g c"));
doc.add_text(text_field, "g b b d c g c");
index_writer.add_document(doc);
}
{
let mut doc = Document::default();
doc.add_text(text_field, "g a b b a d c g c");
index_writer.add_document(doc);
}
assert!(index_writer.commit().is_ok()); assert!(index_writer.commit().is_ok());
} }
let term_a = Term::from_field_text(text_field, "a"); let term_a = Term::from_field_text(text_field, "a");
let searcher = index.reader().unwrap().searcher(); let searcher = index.reader().unwrap().searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
let mut postings = segment_reader let mut postings = segment_reader
.inverted_index(text_field) .inverted_index(text_field)?
.read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions) .read_postings(&term_a, IndexRecordOption::WithFreqsAndPositions)?
.unwrap(); .unwrap();
assert!(postings.advance());
assert_eq!(postings.doc(), 1u32); assert_eq!(postings.doc(), 1u32);
postings.positions(&mut positions); postings.positions(&mut positions);
assert_eq!(&positions[..], &[1u32, 4]); assert_eq!(&positions[..], &[1u32, 4]);
Ok(())
} }
#[test] #[test]
fn test_skip_next() { fn test_skip_next() -> crate::Result<()> {
let term_0 = Term::from_field_u64(Field::from_field_id(0), 0); let term_0 = Term::from_field_u64(Field::from_field_id(0), 0);
let term_1 = Term::from_field_u64(Field::from_field_id(0), 1); let term_1 = Term::from_field_u64(Field::from_field_id(0), 1);
let term_2 = Term::from_field_u64(Field::from_field_id(0), 2); let term_2 = Term::from_field_u64(Field::from_field_id(0), 2);
@@ -366,105 +349,100 @@ pub mod tests {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let value_field = schema_builder.add_u64_field("value", INDEXED); let value_field = schema_builder.add_u64_field("value", INDEXED);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
{ {
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
for i in 0..num_docs { for i in 0u64..num_docs as u64 {
let mut doc = Document::default(); let doc = doc!(value_field => 2u64, value_field => i % 2u64);
doc.add_u64(value_field, 2);
doc.add_u64(value_field, (i % 2) as u64);
index_writer.add_document(doc); index_writer.add_document(doc);
} }
assert!(index_writer.commit().is_ok()); assert!(index_writer.commit().is_ok());
} }
index index
}; };
let searcher = index.reader().unwrap().searcher(); let searcher = index.reader()?.searcher();
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
// check that the basic usage works // check that the basic usage works
for i in 0..num_docs - 1 { for i in 0..num_docs - 1 {
for j in i + 1..num_docs { for j in i + 1..num_docs {
let mut segment_postings = segment_reader let mut segment_postings = segment_reader
.inverted_index(term_2.field()) .inverted_index(term_2.field())?
.read_postings(&term_2, IndexRecordOption::Basic) .read_postings(&term_2, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
assert_eq!(segment_postings.seek(i), i);
assert_eq!(segment_postings.skip_next(i), SkipResult::Reached);
assert_eq!(segment_postings.doc(), i); assert_eq!(segment_postings.doc(), i);
assert_eq!(segment_postings.skip_next(j), SkipResult::Reached); assert_eq!(segment_postings.seek(j), j);
assert_eq!(segment_postings.doc(), j); assert_eq!(segment_postings.doc(), j);
} }
} }
{ {
let mut segment_postings = segment_reader let mut segment_postings = segment_reader
.inverted_index(term_2.field()) .inverted_index(term_2.field())?
.read_postings(&term_2, IndexRecordOption::Basic) .read_postings(&term_2, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
// check that `skip_next` advances the iterator // check that `skip_next` advances the iterator
assert!(segment_postings.advance());
assert_eq!(segment_postings.doc(), 0); assert_eq!(segment_postings.doc(), 0);
assert_eq!(segment_postings.skip_next(1), SkipResult::Reached); assert_eq!(segment_postings.seek(1), 1);
assert_eq!(segment_postings.doc(), 1); assert_eq!(segment_postings.doc(), 1);
assert_eq!(segment_postings.skip_next(1), SkipResult::OverStep); assert_eq!(segment_postings.seek(1), 1);
assert_eq!(segment_postings.doc(), 2); assert_eq!(segment_postings.doc(), 1);
// check that going beyond the end is handled // check that going beyond the end is handled
assert_eq!(segment_postings.skip_next(num_docs), SkipResult::End); assert_eq!(segment_postings.seek(num_docs), TERMINATED);
} }
// check that filtering works // check that filtering works
{ {
let mut segment_postings = segment_reader let mut segment_postings = segment_reader
.inverted_index(term_0.field()) .inverted_index(term_0.field())?
.read_postings(&term_0, IndexRecordOption::Basic) .read_postings(&term_0, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
for i in 0..num_docs / 2 { for i in 0..num_docs / 2 {
assert_eq!(segment_postings.skip_next(i * 2), SkipResult::Reached); assert_eq!(segment_postings.seek(i * 2), i * 2);
assert_eq!(segment_postings.doc(), i * 2); assert_eq!(segment_postings.doc(), i * 2);
} }
let mut segment_postings = segment_reader let mut segment_postings = segment_reader
.inverted_index(term_0.field()) .inverted_index(term_0.field())?
.read_postings(&term_0, IndexRecordOption::Basic) .read_postings(&term_0, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
for i in 0..num_docs / 2 - 1 { for i in 0..num_docs / 2 - 1 {
assert_eq!(segment_postings.skip_next(i * 2 + 1), SkipResult::OverStep); assert!(segment_postings.seek(i * 2 + 1) > (i * 1) * 2);
assert_eq!(segment_postings.doc(), (i + 1) * 2); assert_eq!(segment_postings.doc(), (i + 1) * 2);
} }
} }
// delete some of the documents // delete some of the documents
{ {
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.delete_term(term_0); index_writer.delete_term(term_0);
assert!(index_writer.commit().is_ok()); assert!(index_writer.commit().is_ok());
} }
let searcher = index.reader().unwrap().searcher(); let searcher = index.reader()?.searcher();
assert_eq!(searcher.segment_readers().len(), 1);
let segment_reader = searcher.segment_reader(0); let segment_reader = searcher.segment_reader(0);
// make sure seeking still works // make sure seeking still works
for i in 0..num_docs { for i in 0..num_docs {
let mut segment_postings = segment_reader let mut segment_postings = segment_reader
.inverted_index(term_2.field()) .inverted_index(term_2.field())?
.read_postings(&term_2, IndexRecordOption::Basic) .read_postings(&term_2, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
if i % 2 == 0 { if i % 2 == 0 {
assert_eq!(segment_postings.skip_next(i), SkipResult::Reached); assert_eq!(segment_postings.seek(i), i);
assert_eq!(segment_postings.doc(), i); assert_eq!(segment_postings.doc(), i);
assert!(segment_reader.is_deleted(i)); assert!(segment_reader.is_deleted(i));
} else { } else {
assert_eq!(segment_postings.skip_next(i), SkipResult::Reached); assert_eq!(segment_postings.seek(i), i);
assert_eq!(segment_postings.doc(), i); assert_eq!(segment_postings.doc(), i);
} }
} }
@@ -472,19 +450,23 @@ pub mod tests {
// now try with a longer sequence // now try with a longer sequence
{ {
let mut segment_postings = segment_reader let mut segment_postings = segment_reader
.inverted_index(term_2.field()) .inverted_index(term_2.field())?
.read_postings(&term_2, IndexRecordOption::Basic) .read_postings(&term_2, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
let mut last = 2; // start from 5 to avoid seeking to 3 twice let mut last = 2; // start from 5 to avoid seeking to 3 twice
let mut cur = 3; let mut cur = 3;
loop { loop {
match segment_postings.skip_next(cur) { let seek = segment_postings.seek(cur);
SkipResult::End => break, if seek == TERMINATED {
SkipResult::Reached => assert_eq!(segment_postings.doc(), cur), break;
SkipResult::OverStep => assert_eq!(segment_postings.doc(), cur + 1), }
assert_eq!(seek, segment_postings.doc());
if seek == cur {
assert_eq!(segment_postings.doc(), cur);
} else {
assert_eq!(segment_postings.doc(), cur + 1);
} }
let next = cur + last; let next = cur + last;
last = cur; last = cur;
cur = next; cur = next;
@@ -494,20 +476,19 @@ pub mod tests {
// delete everything else // delete everything else
{ {
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests()?;
index_writer.delete_term(term_1); index_writer.delete_term(term_1);
assert!(index_writer.commit().is_ok()); assert!(index_writer.commit().is_ok());
} }
let searcher = index.reader().unwrap().searcher(); let searcher = index.reader()?.searcher();
// finally, check that it's empty // finally, check that it's empty
{ {
let searchable_segment_ids = index let searchable_segment_ids = index.searchable_segment_ids()?;
.searchable_segment_ids()
.expect("could not get index segment ids");
assert!(searchable_segment_ids.is_empty()); assert!(searchable_segment_ids.is_empty());
assert_eq!(searcher.num_docs(), 0); assert_eq!(searcher.num_docs(), 0);
} }
Ok(())
} }
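The rewritten test above summarizes the cursor contract that replaces SkipResult throughout this change: a docset starts positioned on its first document, advance() returns the new current doc, seek(target) returns the first doc greater than or equal to target, and both return TERMINATED once the set is exhausted. A toy, self-contained docset following that contract (an assumed helper type, not tantivy's trait):

const TERMINATED: u32 = u32::MAX;

struct VecDocSet {
    docs: Vec<u32>,
    cursor: usize,
}

impl VecDocSet {
    fn new(docs: Vec<u32>) -> Self {
        VecDocSet { docs, cursor: 0 }
    }
    fn doc(&self) -> u32 {
        self.docs.get(self.cursor).copied().unwrap_or(TERMINATED)
    }
    fn advance(&mut self) -> u32 {
        self.cursor += 1;
        self.doc()
    }
    fn seek(&mut self, target: u32) -> u32 {
        // Returns the first doc >= target, or TERMINATED.
        while self.doc() < target {
            self.advance();
        }
        self.doc()
    }
}

fn main() {
    let mut docs = VecDocSet::new(vec![1, 3, 8, 12]);
    assert_eq!(docs.doc(), 1);
    assert_eq!(docs.seek(4), 8); // first doc >= 4
    assert_eq!(docs.advance(), 12);
    assert_eq!(docs.advance(), TERMINATED);
}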
pub static TERM_A: Lazy<Term> = Lazy::new(|| { pub static TERM_A: Lazy<Term> = Lazy::new(|| {
@@ -537,7 +518,7 @@ pub mod tests {
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let posting_list_size = 1_000_000; let posting_list_size = 1_000_000;
{ {
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
for _ in 0..posting_list_size { for _ in 0..posting_list_size {
let mut doc = Document::default(); let mut doc = Document::default();
if rng.gen_bool(1f64 / 15f64) { if rng.gen_bool(1f64 / 15f64) {
@@ -570,7 +551,7 @@ pub mod tests {
} }
impl<TDocSet: DocSet> DocSet for UnoptimizedDocSet<TDocSet> { impl<TDocSet: DocSet> DocSet for UnoptimizedDocSet<TDocSet> {
fn advance(&mut self) -> bool { fn advance(&mut self) -> DocId {
self.0.advance() self.0.advance()
} }
@@ -595,31 +576,26 @@ pub mod tests {
) { ) {
for target in targets { for target in targets {
let mut postings_opt = postings_factory(); let mut postings_opt = postings_factory();
if target < postings_opt.doc() {
continue;
}
let mut postings_unopt = UnoptimizedDocSet::wrap(postings_factory()); let mut postings_unopt = UnoptimizedDocSet::wrap(postings_factory());
let skip_result_opt = postings_opt.skip_next(target); let skip_result_opt = postings_opt.seek(target);
let skip_result_unopt = postings_unopt.skip_next(target); let skip_result_unopt = postings_unopt.seek(target);
assert_eq!( assert_eq!(
skip_result_unopt, skip_result_opt, skip_result_unopt, skip_result_opt,
"Failed while skipping to {}", "Failed while skipping to {}",
target target
); );
match skip_result_opt { assert!(skip_result_opt >= target);
SkipResult::Reached => assert_eq!(postings_opt.doc(), target), assert_eq!(skip_result_opt, postings_opt.doc());
SkipResult::OverStep => assert!(postings_opt.doc() > target), if skip_result_opt == TERMINATED {
SkipResult::End => { return;
return;
}
} }
while postings_opt.advance() { while postings_opt.doc() != TERMINATED {
assert!(postings_unopt.advance()); assert_eq!(postings_opt.doc(), postings_unopt.doc());
assert_eq!( assert_eq!(postings_opt.advance(), postings_unopt.advance());
postings_opt.doc(),
postings_unopt.doc(),
"Failed while skipping to {}",
target
);
} }
assert!(!postings_unopt.advance());
} }
} }
} }
@@ -628,7 +604,7 @@ pub mod tests {
mod bench { mod bench {
use super::tests::*; use super::tests::*;
use crate::docset::SkipResult; use crate::docset::TERMINATED;
use crate::query::Intersection; use crate::query::Intersection;
use crate::schema::IndexRecordOption; use crate::schema::IndexRecordOption;
use crate::tests; use crate::tests;
@@ -644,9 +620,9 @@ mod bench {
b.iter(|| { b.iter(|| {
let mut segment_postings = segment_reader let mut segment_postings = segment_reader
.inverted_index(TERM_A.field()) .inverted_index(TERM_A.field())
.read_postings(&*TERM_A, IndexRecordOption::Basic) .read_postings(&*TERM_A, IndexRecordOption::Basic)?
.unwrap(); .unwrap();
while segment_postings.advance() {} while segment_postings.advance() != TERMINATED {}
}); });
} }
@@ -659,18 +635,22 @@ mod bench {
let segment_postings_a = segment_reader let segment_postings_a = segment_reader
.inverted_index(TERM_A.field()) .inverted_index(TERM_A.field())
.read_postings(&*TERM_A, IndexRecordOption::Basic) .read_postings(&*TERM_A, IndexRecordOption::Basic)
.unwrap()
.unwrap(); .unwrap();
let segment_postings_b = segment_reader let segment_postings_b = segment_reader
.inverted_index(TERM_B.field()) .inverted_index(TERM_B.field())
.read_postings(&*TERM_B, IndexRecordOption::Basic) .read_postings(&*TERM_B, IndexRecordOption::Basic)
.unwrap()
.unwrap(); .unwrap();
let segment_postings_c = segment_reader let segment_postings_c = segment_reader
.inverted_index(TERM_C.field()) .inverted_index(TERM_C.field())
.read_postings(&*TERM_C, IndexRecordOption::Basic) .read_postings(&*TERM_C, IndexRecordOption::Basic)
.unwrap()
.unwrap(); .unwrap();
let segment_postings_d = segment_reader let segment_postings_d = segment_reader
.inverted_index(TERM_D.field()) .inverted_index(TERM_D.field())
.read_postings(&*TERM_D, IndexRecordOption::Basic) .read_postings(&*TERM_D, IndexRecordOption::Basic)
.unwrap()
.unwrap(); .unwrap();
let mut intersection = Intersection::new(vec![ let mut intersection = Intersection::new(vec![
segment_postings_a, segment_postings_a,
@@ -678,7 +658,7 @@ mod bench {
segment_postings_c, segment_postings_c,
segment_postings_d, segment_postings_d,
]); ]);
while intersection.advance() {} while intersection.advance() != TERMINATED {}
}); });
} }
@@ -691,14 +671,14 @@ mod bench {
let mut segment_postings = segment_reader let mut segment_postings = segment_reader
.inverted_index(TERM_A.field()) .inverted_index(TERM_A.field())
.read_postings(&*TERM_A, IndexRecordOption::Basic) .read_postings(&*TERM_A, IndexRecordOption::Basic)
.unwrap()
.unwrap(); .unwrap();
let mut existing_docs = Vec::new(); let mut existing_docs = Vec::new();
segment_postings.advance();
for doc in &docs { for doc in &docs {
if *doc >= segment_postings.doc() { if *doc >= segment_postings.doc() {
existing_docs.push(*doc); existing_docs.push(*doc);
if segment_postings.skip_next(*doc) == SkipResult::End { if segment_postings.seek(*doc) == TERMINATED {
break; break;
} }
} }
@@ -710,7 +690,7 @@ mod bench {
.read_postings(&*TERM_A, IndexRecordOption::Basic) .read_postings(&*TERM_A, IndexRecordOption::Basic)
.unwrap(); .unwrap();
for doc in &existing_docs { for doc in &existing_docs {
if segment_postings.skip_next(*doc) == SkipResult::End { if segment_postings.seek(*doc) == TERMINATED {
break; break;
} }
} }
@@ -749,8 +729,9 @@ mod bench {
.read_postings(&*TERM_A, IndexRecordOption::Basic) .read_postings(&*TERM_A, IndexRecordOption::Basic)
.unwrap(); .unwrap();
let mut s = 0u32; let mut s = 0u32;
while segment_postings.advance() { while segment_postings.doc() != TERMINATED {
s += (segment_postings.doc() & n) % 1024; s += (segment_postings.doc() & n) % 1024;
segment_postings.advance();
} }
s s
}); });


@@ -1,5 +1,6 @@
use super::stacker::{Addr, MemoryArena, TermHashMap}; use super::stacker::{Addr, MemoryArena, TermHashMap};
use crate::fieldnorm::FieldNormReaders;
use crate::postings::recorder::{ use crate::postings::recorder::{
BufferLender, NothingRecorder, Recorder, TFAndPositionRecorder, TermFrequencyRecorder, BufferLender, NothingRecorder, Recorder, TFAndPositionRecorder, TermFrequencyRecorder,
}; };
@@ -37,12 +38,8 @@ fn posting_from_field_entry(field_entry: &FieldEntry) -> Box<dyn PostingsWriter>
| FieldType::I64(_) | FieldType::I64(_)
| FieldType::F64(_) | FieldType::F64(_)
| FieldType::Date(_) | FieldType::Date(_)
| FieldType::Bytes(_)
| FieldType::HierarchicalFacet => SpecializedPostingsWriter::<NothingRecorder>::new_boxed(), | FieldType::HierarchicalFacet => SpecializedPostingsWriter::<NothingRecorder>::new_boxed(),
FieldType::Bytes => {
// FieldType::Bytes cannot actually be indexed.
// TODO fix during the indexer refactoring described in #276
SpecializedPostingsWriter::<NothingRecorder>::new_boxed()
}
} }
} }
@@ -104,6 +101,7 @@ impl MultiFieldPostingsWriter {
doc: DocId, doc: DocId,
field: Field, field: Field,
token_stream: &mut dyn TokenStream, token_stream: &mut dyn TokenStream,
term_buffer: &mut Term,
) -> u32 { ) -> u32 {
let postings_writer = let postings_writer =
self.per_field_postings_writers[field.field_id() as usize].deref_mut(); self.per_field_postings_writers[field.field_id() as usize].deref_mut();
@@ -113,6 +111,7 @@ impl MultiFieldPostingsWriter {
field, field,
token_stream, token_stream,
&mut self.heap, &mut self.heap,
term_buffer,
) )
} }
@@ -128,6 +127,7 @@ impl MultiFieldPostingsWriter {
pub fn serialize( pub fn serialize(
&self, &self,
serializer: &mut InvertedIndexSerializer, serializer: &mut InvertedIndexSerializer,
fieldnorm_readers: FieldNormReaders,
) -> crate::Result<HashMap<Field, FnvHashMap<UnorderedTermId, TermOrdinal>>> { ) -> crate::Result<HashMap<Field, FnvHashMap<UnorderedTermId, TermOrdinal>>> {
let mut term_offsets: Vec<(&[u8], Addr, UnorderedTermId)> = let mut term_offsets: Vec<(&[u8], Addr, UnorderedTermId)> =
self.term_index.iter().collect(); self.term_index.iter().collect();
@@ -157,12 +157,17 @@ impl MultiFieldPostingsWriter {
unordered_term_mappings.insert(field, mapping); unordered_term_mappings.insert(field, mapping);
} }
FieldType::U64(_) | FieldType::I64(_) | FieldType::F64(_) | FieldType::Date(_) => {} FieldType::U64(_) | FieldType::I64(_) | FieldType::F64(_) | FieldType::Date(_) => {}
FieldType::Bytes => {} FieldType::Bytes(_) => {}
} }
let postings_writer = &self.per_field_postings_writers[field.field_id() as usize]; let postings_writer =
let mut field_serializer = self.per_field_postings_writers[field.field_id() as usize].as_ref();
serializer.new_field(field, postings_writer.total_num_tokens())?; let fieldnorm_reader = fieldnorm_readers.get_field(field)?;
let mut field_serializer = serializer.new_field(
field,
postings_writer.total_num_tokens(),
fieldnorm_reader,
)?;
postings_writer.serialize( postings_writer.serialize(
&term_offsets[start..stop], &term_offsets[start..stop],
&mut field_serializer, &mut field_serializer,
@@ -214,13 +219,20 @@ pub trait PostingsWriter {
field: Field, field: Field,
token_stream: &mut dyn TokenStream, token_stream: &mut dyn TokenStream,
heap: &mut MemoryArena, heap: &mut MemoryArena,
term_buffer: &mut Term,
) -> u32 { ) -> u32 {
let mut term = Term::for_field(field); term_buffer.set_field(field);
let mut sink = |token: &Token| { let mut sink = |token: &Token| {
// We skip all tokens with a len greater than u16. // We skip all tokens with a len greater than u16.
if token.text.len() <= MAX_TOKEN_LEN { if token.text.len() <= MAX_TOKEN_LEN {
term.set_text(token.text.as_str()); term_buffer.set_text(token.text.as_str());
self.subscribe(term_index, doc_id, token.position as u32, &term, heap); self.subscribe(
term_index,
doc_id,
token.position as u32,
&term_buffer,
heap,
);
} else { } else {
info!( info!(
"A token exceeding MAX_TOKEN_LEN ({}>{}) was dropped. Search for \ "A token exceeding MAX_TOKEN_LEN ({}>{}) was dropped. Search for \
@@ -297,7 +309,8 @@ impl<Rec: Recorder + 'static> PostingsWriter for SpecializedPostingsWriter<Rec>
let mut buffer_lender = BufferLender::default(); let mut buffer_lender = BufferLender::default();
for &(term_bytes, addr, _) in term_addrs { for &(term_bytes, addr, _) in term_addrs {
let recorder: Rec = termdict_heap.read(addr); let recorder: Rec = termdict_heap.read(addr);
serializer.new_term(&term_bytes[4..])?; let term_doc_freq = recorder.term_doc_freq().unwrap_or(0u32);
serializer.new_term(&term_bytes[4..], term_doc_freq)?;
recorder.serialize(&mut buffer_lender, serializer, heap)?; recorder.serialize(&mut buffer_lender, serializer, heap)?;
serializer.close_term()?; serializer.close_term()?;
} }
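The signature changes above thread one reusable term_buffer through index_text and subscribe instead of building a fresh Term per field, and hand the serializer the term's document frequency up front via new_term. A hypothetical sketch of the buffer-reuse half of that change; the type, byte layout, and helper names here are illustrative assumptions, not tantivy's Term:

struct TermBuffer {
    bytes: Vec<u8>,
}

impl TermBuffer {
    fn set_field(&mut self, field_id: u32) {
        // Reset the buffer and write a field prefix once per field.
        self.bytes.clear();
        self.bytes.extend_from_slice(&field_id.to_be_bytes());
    }
    fn set_text(&mut self, text: &str) {
        // Keep the 4-byte field prefix, replace only the term bytes.
        self.bytes.truncate(4);
        self.bytes.extend_from_slice(text.as_bytes());
    }
}

fn index_tokens(tokens: &[&str], field_id: u32, term_buffer: &mut TermBuffer) {
    term_buffer.set_field(field_id);
    for token in tokens {
        term_buffer.set_text(token);
        // The real writer would call subscribe(..., &term_buffer, ...) here.
    }
}

fn main() {
    let mut buffer = TermBuffer { bytes: Vec::new() };
    index_tokens(&["hello", "world"], 0, &mut buffer);
    assert_eq!(&buffer.bytes[4..], &b"world"[..]);
}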


@@ -75,6 +75,10 @@ pub(crate) trait Recorder: Copy + 'static {
serializer: &mut FieldSerializer<'_>, serializer: &mut FieldSerializer<'_>,
heap: &MemoryArena, heap: &MemoryArena,
) -> io::Result<()>; ) -> io::Result<()>;
/// Returns the number of documents containing this term.
///
/// Returns `None` if not available.
fn term_doc_freq(&self) -> Option<u32>;
} }
/// Only records the doc ids /// Only records the doc ids
@@ -113,11 +117,16 @@ impl Recorder for NothingRecorder {
) -> io::Result<()> { ) -> io::Result<()> {
let buffer = buffer_lender.lend_u8(); let buffer = buffer_lender.lend_u8();
self.stack.read_to_end(heap, buffer); self.stack.read_to_end(heap, buffer);
// TODO avoid reading twice.
for doc in VInt32Reader::new(&buffer[..]) { for doc in VInt32Reader::new(&buffer[..]) {
serializer.write_doc(doc as u32, 0u32, &[][..])?; serializer.write_doc(doc as u32, 0u32, &[][..])?;
} }
Ok(()) Ok(())
} }
fn term_doc_freq(&self) -> Option<u32> {
None
}
} }
/// Recorder encoding document ids, and term frequencies /// Recorder encoding document ids, and term frequencies
@@ -126,6 +135,7 @@ pub struct TermFrequencyRecorder {
stack: ExpUnrolledLinkedList, stack: ExpUnrolledLinkedList,
current_doc: DocId, current_doc: DocId,
current_tf: u32, current_tf: u32,
term_doc_freq: u32,
} }
impl Recorder for TermFrequencyRecorder { impl Recorder for TermFrequencyRecorder {
@@ -134,6 +144,7 @@ impl Recorder for TermFrequencyRecorder {
stack: ExpUnrolledLinkedList::new(), stack: ExpUnrolledLinkedList::new(),
current_doc: u32::max_value(), current_doc: u32::max_value(),
current_tf: 0u32, current_tf: 0u32,
term_doc_freq: 0u32,
} }
} }
@@ -142,6 +153,7 @@ impl Recorder for TermFrequencyRecorder {
} }
fn new_doc(&mut self, doc: DocId, heap: &mut MemoryArena) { fn new_doc(&mut self, doc: DocId, heap: &mut MemoryArena) {
self.term_doc_freq += 1;
self.current_doc = doc; self.current_doc = doc;
let _ = write_u32_vint(doc, &mut self.stack.writer(heap)); let _ = write_u32_vint(doc, &mut self.stack.writer(heap));
} }
@@ -172,6 +184,10 @@ impl Recorder for TermFrequencyRecorder {
Ok(()) Ok(())
} }
fn term_doc_freq(&self) -> Option<u32> {
Some(self.term_doc_freq)
}
} }
/// Recorder encoding term frequencies as well as positions. /// Recorder encoding term frequencies as well as positions.
@@ -179,12 +195,14 @@ impl Recorder for TermFrequencyRecorder {
pub struct TFAndPositionRecorder { pub struct TFAndPositionRecorder {
stack: ExpUnrolledLinkedList, stack: ExpUnrolledLinkedList,
current_doc: DocId, current_doc: DocId,
term_doc_freq: u32,
} }
impl Recorder for TFAndPositionRecorder { impl Recorder for TFAndPositionRecorder {
fn new() -> Self { fn new() -> Self {
TFAndPositionRecorder { TFAndPositionRecorder {
stack: ExpUnrolledLinkedList::new(), stack: ExpUnrolledLinkedList::new(),
current_doc: u32::max_value(), current_doc: u32::max_value(),
term_doc_freq: 0u32,
} }
} }
@@ -194,6 +212,7 @@ impl Recorder for TFAndPositionRecorder {
fn new_doc(&mut self, doc: DocId, heap: &mut MemoryArena) { fn new_doc(&mut self, doc: DocId, heap: &mut MemoryArena) {
self.current_doc = doc; self.current_doc = doc;
self.term_doc_freq += 1u32;
let _ = write_u32_vint(doc, &mut self.stack.writer(heap)); let _ = write_u32_vint(doc, &mut self.stack.writer(heap));
} }
@@ -233,6 +252,10 @@ impl Recorder for TFAndPositionRecorder {
} }
Ok(()) Ok(())
} }
fn term_doc_freq(&self) -> Option<u32> {
Some(self.term_doc_freq)
}
} }
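The recorder changes above accumulate the term's document frequency while the posting list is being built: every new_doc call bumps a counter that is later handed to the serializer through term_doc_freq(). A toy version of that bookkeeping with assumed types (the NothingRecorder keeps returning None because it never tracks per-doc information):

#[derive(Default)]
struct ToyRecorder {
    docs: Vec<u32>,
    term_doc_freq: u32,
}

impl ToyRecorder {
    fn new_doc(&mut self, doc: u32) {
        // One call per document containing the term.
        self.term_doc_freq += 1;
        self.docs.push(doc);
    }
    fn term_doc_freq(&self) -> Option<u32> {
        Some(self.term_doc_freq)
    }
}

fn main() {
    let mut recorder = ToyRecorder::default();
    for &doc in [0u32, 5, 9].iter() {
        recorder.new_doc(doc);
    }
    assert_eq!(recorder.term_doc_freq(), Some(3));
}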
#[cfg(test)] #[cfg(test)]


@@ -1,81 +1,65 @@
use crate::common::BitSet;
use crate::common::HasLen; use crate::common::HasLen;
use crate::common::{BinarySerializable, VInt}; use crate::directory::FileSlice;
use crate::docset::{DocSet, SkipResult}; use crate::docset::DocSet;
use crate::fastfield::DeleteBitSet;
use crate::positions::PositionReader; use crate::positions::PositionReader;
use crate::postings::compression::{compressed_block_size, AlignedBuffer}; use crate::postings::compression::COMPRESSION_BLOCK_SIZE;
use crate::postings::compression::{BlockDecoder, VIntDecoder, COMPRESSION_BLOCK_SIZE};
use crate::postings::serializer::PostingsSerializer; use crate::postings::serializer::PostingsSerializer;
use crate::postings::BlockSearcher; use crate::postings::BlockSearcher;
use crate::postings::FreqReadingOption; use crate::postings::BlockSegmentPostings;
use crate::postings::Postings; use crate::postings::Postings;
use crate::postings::SkipReader;
use crate::postings::USE_SKIP_INFO_LIMIT;
use crate::schema::IndexRecordOption; use crate::schema::IndexRecordOption;
use crate::DocId; use crate::{DocId, TERMINATED};
use owned_read::OwnedRead;
use std::cmp::Ordering;
use tantivy_fst::Streamer;
struct PositionComputer {
// store the amount of position int
// before reading positions.
//
// if none, position are already loaded in
// the positions vec.
position_to_skip: usize,
position_reader: PositionReader,
}
impl PositionComputer {
pub fn new(position_reader: PositionReader) -> PositionComputer {
PositionComputer {
position_to_skip: 0,
position_reader,
}
}
pub fn add_skip(&mut self, num_skip: usize) {
self.position_to_skip += num_skip;
}
// Positions can only be read once.
pub fn positions_with_offset(&mut self, offset: u32, output: &mut [u32]) {
self.position_reader.skip(self.position_to_skip);
self.position_to_skip = 0;
self.position_reader.read(output);
let mut cum = offset;
for output_mut in output.iter_mut() {
cum += *output_mut;
*output_mut = cum;
}
}
}
/// `SegmentPostings` represents the inverted list or postings associated to /// `SegmentPostings` represents the inverted list or postings associated to
/// a term in a `Segment`. /// a term in a `Segment`.
/// ///
/// As we iterate through the `SegmentPostings`, the frequencies are optionally decoded. /// As we iterate through the `SegmentPostings`, the frequencies are optionally decoded.
/// Positions on the other hand, are optionally entirely decoded upfront. /// Positions on the other hand, are optionally entirely decoded upfront.
#[derive(Clone)]
pub struct SegmentPostings { pub struct SegmentPostings {
block_cursor: BlockSegmentPostings, pub(crate) block_cursor: BlockSegmentPostings,
cur: usize, cur: usize,
position_computer: Option<PositionComputer>, position_reader: Option<PositionReader>,
block_searcher: BlockSearcher, block_searcher: BlockSearcher,
} }
impl SegmentPostings { impl SegmentPostings {
/// Returns an empty segment postings object /// Returns an empty segment postings object
pub fn empty() -> Self { pub fn empty() -> Self {
let empty_block_cursor = BlockSegmentPostings::empty();
SegmentPostings { SegmentPostings {
block_cursor: empty_block_cursor, block_cursor: BlockSegmentPostings::empty(),
cur: COMPRESSION_BLOCK_SIZE, cur: 0,
position_computer: None, position_reader: None,
block_searcher: BlockSearcher::default(), block_searcher: BlockSearcher::default(),
} }
} }
/// Compute the number of non-deleted documents.
///
/// This method will clone and scan through the posting lists.
/// (this is a rather expensive operation).
pub fn doc_freq_given_deletes(&self, delete_bitset: &DeleteBitSet) -> u32 {
let mut docset = self.clone();
let mut doc_freq = 0;
loop {
let doc = docset.doc();
if doc == TERMINATED {
return doc_freq;
}
if delete_bitset.is_alive(doc) {
doc_freq += 1u32;
}
docset.advance();
}
}
/// Returns the overall number of documents in the block postings.
/// It does not take in account whether documents are deleted or not.
pub fn doc_freq(&self) -> u32 {
self.block_cursor.doc_freq()
}
/// Creates a segment postings object with the given documents /// Creates a segment postings object with the given documents
/// and no frequency encoded. /// and no frequency encoded.
/// ///
@@ -87,7 +71,9 @@ impl SegmentPostings {
pub fn create_from_docs(docs: &[u32]) -> SegmentPostings { pub fn create_from_docs(docs: &[u32]) -> SegmentPostings {
let mut buffer = Vec::new(); let mut buffer = Vec::new();
{ {
let mut postings_serializer = PostingsSerializer::new(&mut buffer, false, false); let mut postings_serializer =
PostingsSerializer::new(&mut buffer, 0.0, IndexRecordOption::Basic, None);
postings_serializer.new_term(docs.len() as u32);
for &doc in docs { for &doc in docs {
postings_serializer.write_doc(doc, 1u32); postings_serializer.write_doc(doc, 1u32);
} }
@@ -95,17 +81,61 @@ impl SegmentPostings {
.close_term(docs.len() as u32) .close_term(docs.len() as u32)
.expect("In memory Serialization should never fail."); .expect("In memory Serialization should never fail.");
} }
let block_segment_postings = BlockSegmentPostings::from_data( let block_segment_postings = BlockSegmentPostings::open(
docs.len() as u32, docs.len() as u32,
OwnedRead::new(buffer), FileSlice::from(buffer),
IndexRecordOption::Basic, IndexRecordOption::Basic,
IndexRecordOption::Basic, IndexRecordOption::Basic,
); )
.unwrap();
SegmentPostings::from_block_postings(block_segment_postings, None)
}
/// Helper functions to create `SegmentPostings` for tests.
#[cfg(test)]
pub fn create_from_docs_and_tfs(
doc_and_tfs: &[(u32, u32)],
fieldnorms: Option<&[u32]>,
) -> SegmentPostings {
use crate::fieldnorm::FieldNormReader;
use crate::Score;
let mut buffer: Vec<u8> = Vec::new();
let fieldnorm_reader = fieldnorms.map(FieldNormReader::for_test);
let average_field_norm = fieldnorms
.map(|fieldnorms| {
if fieldnorms.len() == 0 {
return 0.0;
}
let total_num_tokens: u64 = fieldnorms
.iter()
.map(|&fieldnorm| fieldnorm as u64)
.sum::<u64>();
total_num_tokens as Score / fieldnorms.len() as Score
})
.unwrap_or(0.0);
let mut postings_serializer = PostingsSerializer::new(
&mut buffer,
average_field_norm,
IndexRecordOption::WithFreqs,
fieldnorm_reader,
);
postings_serializer.new_term(doc_and_tfs.len() as u32);
for &(doc, tf) in doc_and_tfs {
postings_serializer.write_doc(doc, tf);
}
postings_serializer
.close_term(doc_and_tfs.len() as u32)
.unwrap();
let block_segment_postings = BlockSegmentPostings::open(
doc_and_tfs.len() as u32,
FileSlice::from(buffer),
IndexRecordOption::WithFreqs,
IndexRecordOption::WithFreqs,
)
.unwrap();
SegmentPostings::from_block_postings(block_segment_postings, None) SegmentPostings::from_block_postings(block_segment_postings, None)
} }
}
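The doc_freq_given_deletes helper added above walks a clone of the posting list and counts only the documents still alive in the delete bitset. The same idea over a plain slice of doc ids, with the alive check abstracted as a closure (a sketch for illustration, not tantivy's DeleteBitSet API):

const TERMINATED: u32 = u32::MAX;

fn doc_freq_given_deletes(postings: &[u32], is_alive: impl Fn(u32) -> bool) -> u32 {
    let mut doc_freq = 0u32;
    for &doc in postings {
        if doc == TERMINATED {
            break;
        }
        if is_alive(doc) {
            doc_freq += 1;
        }
    }
    doc_freq
}

fn main() {
    let postings = [2u32, 5, 9, 11];
    let deleted = [5u32, 11];
    let alive = |doc: u32| !deleted.contains(&doc);
    assert_eq!(doc_freq_given_deletes(&postings, alive), 2);
}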
impl SegmentPostings {
/// Reads a Segment postings from an &[u8] /// Reads a Segment postings from an &[u8]
/// ///
/// * `len` - number of document in the posting lists. /// * `len` - number of document in the posting lists.
@@ -114,12 +144,12 @@ impl SegmentPostings {
/// frequencies and/or positions /// frequencies and/or positions
pub(crate) fn from_block_postings( pub(crate) fn from_block_postings(
segment_block_postings: BlockSegmentPostings, segment_block_postings: BlockSegmentPostings,
positions_stream_opt: Option<PositionReader>, position_reader: Option<PositionReader>,
) -> SegmentPostings { ) -> SegmentPostings {
SegmentPostings { SegmentPostings {
block_cursor: segment_block_postings, block_cursor: segment_block_postings,
cur: COMPRESSION_BLOCK_SIZE, // cursor within the block cur: 0, // cursor within the block
position_computer: positions_stream_opt.map(PositionComputer::new), position_reader,
block_searcher: BlockSearcher::default(), block_searcher: BlockSearcher::default(),
} }
} }
@@ -129,139 +159,60 @@ impl DocSet for SegmentPostings {
// goes to the next element. // goes to the next element.
// next needs to be called a first time to point to the correct element. // next needs to be called a first time to point to the correct element.
#[inline] #[inline]
fn advance(&mut self) -> bool { fn advance(&mut self) -> DocId {
if self.position_computer.is_some() && self.cur < COMPRESSION_BLOCK_SIZE { debug_assert!(self.block_cursor.block_is_loaded());
let term_freq = self.term_freq() as usize; if self.cur == COMPRESSION_BLOCK_SIZE - 1 {
if let Some(position_computer) = self.position_computer.as_mut() {
position_computer.add_skip(term_freq);
}
}
self.cur += 1;
if self.cur >= self.block_cursor.block_len() {
self.cur = 0; self.cur = 0;
if !self.block_cursor.advance() { self.block_cursor.advance();
self.cur = COMPRESSION_BLOCK_SIZE; } else {
return false; self.cur += 1;
}
} }
true self.doc()
} }
fn skip_next(&mut self, target: DocId) -> SkipResult { fn seek(&mut self, target: DocId) -> DocId {
if !self.advance() { debug_assert!(self.doc() <= target);
return SkipResult::End; if self.doc() >= target {
} return self.doc();
match self.doc().cmp(&target) {
Ordering::Equal => {
return SkipResult::Reached;
}
Ordering::Greater => {
return SkipResult::OverStep;
}
_ => {
// ...
}
} }
// In the following, thanks to the call to advance above, self.block_cursor.seek(target);
// we know that the position is not loaded and we need
// to skip every doc_freq we cross.
// skip blocks until one that might contain the target // At this point we are on the block, that might contain our document.
// check if we need to go to the next block let output = self.block_cursor.docs_aligned();
let mut sum_freqs_skipped: u32 = 0; self.cur = self.block_searcher.search_in_block(&output, target);
if !self
.block_cursor
.docs()
.last()
.map(|doc| *doc >= target)
.unwrap_or(false)
// there should always be at least a document in the block
// since advance returned.
{
// we are not in the right block.
//
// First compute all of the freqs skipped from the current block.
if self.position_computer.is_some() {
sum_freqs_skipped = self.block_cursor.freqs()[self.cur..].iter().sum();
match self.block_cursor.skip_to(target) {
BlockSegmentPostingsSkipResult::Success(block_skip_freqs) => {
sum_freqs_skipped += block_skip_freqs;
}
BlockSegmentPostingsSkipResult::Terminated => {
return SkipResult::End;
}
}
} else if self.block_cursor.skip_to(target)
== BlockSegmentPostingsSkipResult::Terminated
{
// no positions needed. no need to sum freqs.
return SkipResult::End;
}
self.cur = 0;
}
let cur = self.cur; // The last block is not full and padded with the value TERMINATED,
// so that we are guaranteed to have at least one doc in the block (a real one or the padding)
// we're in the right block now, start with an exponential search // that is greater or equal to the target.
let (output, len) = self.block_cursor.docs_aligned(); debug_assert!(self.cur < COMPRESSION_BLOCK_SIZE);
let new_cur = self
.block_searcher
.search_in_block(&output, len, cur, target);
if let Some(position_computer) = self.position_computer.as_mut() {
sum_freqs_skipped += self.block_cursor.freqs()[cur..new_cur].iter().sum::<u32>();
position_computer.add_skip(sum_freqs_skipped as usize);
}
self.cur = new_cur;
// `doc` is now the first element >= `target` // `doc` is now the first element >= `target`
let doc = output.0[new_cur];
// If all docs are smaller than the target, the current block should be incomplete and padded
// with the value `TERMINATED`.
//
// After the search, the cursor should point to the first value of TERMINATED.
let doc = output.0[self.cur];
debug_assert!(doc >= target); debug_assert!(doc >= target);
if doc == target { debug_assert_eq!(doc, self.doc());
SkipResult::Reached doc
} else {
SkipResult::OverStep
}
} }
/// Return the current document's `DocId`. /// Return the current document's `DocId`.
/// #[inline(always)]
/// # Panics
///
/// Will panics if called without having called advance before.
#[inline]
fn doc(&self) -> DocId { fn doc(&self) -> DocId {
let docs = self.block_cursor.docs(); self.block_cursor.doc(self.cur)
debug_assert!(
self.cur < docs.len(),
"Have you forgotten to call `.advance()` at least once before calling `.doc()` ."
);
docs[self.cur]
} }
fn size_hint(&self) -> u32 { fn size_hint(&self) -> u32 {
self.len() as u32 self.len() as u32
} }
fn append_to_bitset(&mut self, bitset: &mut BitSet) {
// finish the current block
if self.advance() {
for &doc in &self.block_cursor.docs()[self.cur..] {
bitset.insert(doc);
}
// ... iterate through the remaining blocks.
while self.block_cursor.advance() {
for &doc in self.block_cursor.docs() {
bitset.insert(doc);
}
}
}
}
} }
impl HasLen for SegmentPostings { impl HasLen for SegmentPostings {
fn len(&self) -> usize { fn len(&self) -> usize {
self.block_cursor.doc_freq() self.block_cursor.doc_freq() as usize
} }
} }
@@ -290,515 +241,63 @@ impl Postings for SegmentPostings {
fn positions_with_offset(&mut self, offset: u32, output: &mut Vec<u32>) { fn positions_with_offset(&mut self, offset: u32, output: &mut Vec<u32>) {
let term_freq = self.term_freq() as usize; let term_freq = self.term_freq() as usize;
if let Some(position_comp) = self.position_computer.as_mut() { if let Some(position_reader) = self.position_reader.as_mut() {
let read_offset = self.block_cursor.position_offset()
+ (self.block_cursor.freqs()[..self.cur]
.iter()
.cloned()
.sum::<u32>() as u64);
output.resize(term_freq, 0u32); output.resize(term_freq, 0u32);
position_comp.positions_with_offset(offset, &mut output[..]); position_reader.read(read_offset, &mut output[..]);
let mut cum = offset;
for output_mut in output.iter_mut() {
cum += *output_mut;
*output_mut = cum;
}
} else { } else {
output.clear(); output.clear();
} }
} }
} }
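In the new positions_with_offset above, the read offset into the position stream is recomputed from the block state (the block's base position offset plus the frequencies of the documents already passed within the block), which is why positions can now be read repeatedly for the same document, as test_skip_positions exercises. The deltas that come back are then turned into absolute positions by a running sum starting at offset; a minimal sketch of that final step:

fn positions_with_offset(deltas: &[u32], offset: u32) -> Vec<u32> {
    let mut cum = offset;
    deltas
        .iter()
        .map(|&delta| {
            // Each stored value is a gap from the previous position.
            cum += delta;
            cum
        })
        .collect()
}

fn main() {
    // Deltas [0, 2, 3] decode to absolute positions [0, 2, 5].
    assert_eq!(positions_with_offset(&[0, 2, 3], 0), vec![0, 2, 5]);
    // The same deltas shifted by a per-value offset of 10.
    assert_eq!(positions_with_offset(&[0, 2, 3], 10), vec![10, 12, 15]);
}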
/// `BlockSegmentPostings` is a cursor iterating over blocks
/// of documents.
///
/// # Warning
///
/// While it is useful for some very specific high-performance
/// use cases, you should prefer using `SegmentPostings` for most usage.
pub struct BlockSegmentPostings {
doc_decoder: BlockDecoder,
freq_decoder: BlockDecoder,
freq_reading_option: FreqReadingOption,
doc_freq: usize,
doc_offset: DocId,
num_vint_docs: usize,
remaining_data: OwnedRead,
skip_reader: SkipReader,
}
fn split_into_skips_and_postings(
doc_freq: u32,
mut data: OwnedRead,
) -> (Option<OwnedRead>, OwnedRead) {
if doc_freq >= USE_SKIP_INFO_LIMIT {
let skip_len = VInt::deserialize(&mut data).expect("Data corrupted").0 as usize;
let mut postings_data = data.clone();
postings_data.advance(skip_len);
data.clip(skip_len);
(Some(data), postings_data)
} else {
(None, data)
}
}
#[derive(Debug, Eq, PartialEq)]
pub enum BlockSegmentPostingsSkipResult {
Terminated,
Success(u32), //< number of term freqs to skip
}
impl BlockSegmentPostings {
pub(crate) fn from_data(
doc_freq: u32,
data: OwnedRead,
record_option: IndexRecordOption,
requested_option: IndexRecordOption,
) -> BlockSegmentPostings {
let freq_reading_option = match (record_option, requested_option) {
(IndexRecordOption::Basic, _) => FreqReadingOption::NoFreq,
(_, IndexRecordOption::Basic) => FreqReadingOption::SkipFreq,
(_, _) => FreqReadingOption::ReadFreq,
};
let (skip_data_opt, postings_data) = split_into_skips_and_postings(doc_freq, data);
let skip_reader = match skip_data_opt {
Some(skip_data) => SkipReader::new(skip_data, record_option),
None => SkipReader::new(OwnedRead::new(&[][..]), record_option),
};
let doc_freq = doc_freq as usize;
let num_vint_docs = doc_freq % COMPRESSION_BLOCK_SIZE;
BlockSegmentPostings {
num_vint_docs,
doc_decoder: BlockDecoder::new(),
freq_decoder: BlockDecoder::with_val(1),
freq_reading_option,
doc_offset: 0,
doc_freq,
remaining_data: postings_data,
skip_reader,
}
}
// Resets the block segment postings to another position
// in the postings file.
//
// This is useful for enumerating through a list of terms,
// and consuming the associated posting lists while avoiding
// reallocating a `BlockSegmentPostings`.
//
// # Warning
//
// This does not reset the positions list.
pub(crate) fn reset(&mut self, doc_freq: u32, postings_data: OwnedRead) {
let (skip_data_opt, postings_data) = split_into_skips_and_postings(doc_freq, postings_data);
let num_vint_docs = (doc_freq as usize) & (COMPRESSION_BLOCK_SIZE - 1);
self.num_vint_docs = num_vint_docs;
self.remaining_data = postings_data;
if let Some(skip_data) = skip_data_opt {
self.skip_reader.reset(skip_data);
} else {
self.skip_reader.reset(OwnedRead::new(&[][..]))
}
self.doc_offset = 0;
self.doc_freq = doc_freq as usize;
}
/// Returns the document frequency associated to this block postings.
///
/// This `doc_freq` is simply the sum of the lengths of all the blocks,
/// and it does not take deleted documents into account.
pub fn doc_freq(&self) -> usize {
self.doc_freq
}
/// Returns the array of docs in the current block.
///
/// Before the first call to `.advance()`, the block
/// returned by `.docs()` is empty.
#[inline]
pub fn docs(&self) -> &[DocId] {
self.doc_decoder.output_array()
}
pub(crate) fn docs_aligned(&self) -> (&AlignedBuffer, usize) {
self.doc_decoder.output_aligned()
}
/// Return the document at index `idx` of the block.
#[inline]
pub fn doc(&self, idx: usize) -> u32 {
self.doc_decoder.output(idx)
}
/// Return the array of `term freq` in the block.
#[inline]
pub fn freqs(&self) -> &[u32] {
self.freq_decoder.output_array()
}
/// Return the frequency at index `idx` of the block.
#[inline]
pub fn freq(&self, idx: usize) -> u32 {
self.freq_decoder.output(idx)
}
/// Returns the length of the current block.
///
/// All blocks have a length of `NUM_DOCS_PER_BLOCK`,
/// except the last block, which may have any length
/// between 1 and `NUM_DOCS_PER_BLOCK - 1`.
#[inline]
fn block_len(&self) -> usize {
self.doc_decoder.output_len
}
/// Positions the cursor on a block that may contain `doc_id`.
/// Always advances the current block.
///
/// Returns `Success` if a block with an element greater than or equal to the target is found.
/// This does not guarantee that the smallest element of the block is smaller
/// than the target. It only guarantees that the last element is greater than or equal to it.
///
/// Returns `Terminated` iff all of the remaining documents are smaller than
/// `doc_id`. In that case, all of these documents are consumed.
///
pub fn skip_to(&mut self, target_doc: DocId) -> BlockSegmentPostingsSkipResult {
let mut skip_freqs = 0u32;
while self.skip_reader.advance() {
if self.skip_reader.doc() >= target_doc {
// the last document of the current block is larger
// than the target.
//
// We found our block!
let num_bits = self.skip_reader.doc_num_bits();
let num_consumed_bytes = self.doc_decoder.uncompress_block_sorted(
self.remaining_data.as_ref(),
self.doc_offset,
num_bits,
);
self.remaining_data.advance(num_consumed_bytes);
let tf_num_bits = self.skip_reader.tf_num_bits();
match self.freq_reading_option {
FreqReadingOption::NoFreq => {}
FreqReadingOption::SkipFreq => {
let num_bytes_to_skip = compressed_block_size(tf_num_bits);
self.remaining_data.advance(num_bytes_to_skip);
}
FreqReadingOption::ReadFreq => {
let num_consumed_bytes = self
.freq_decoder
.uncompress_block_unsorted(self.remaining_data.as_ref(), tf_num_bits);
self.remaining_data.advance(num_consumed_bytes);
}
}
self.doc_offset = self.skip_reader.doc();
return BlockSegmentPostingsSkipResult::Success(skip_freqs);
} else {
skip_freqs += self.skip_reader.tf_sum();
let advance_len = self.skip_reader.total_block_len();
self.doc_offset = self.skip_reader.doc();
self.remaining_data.advance(advance_len);
}
}
// we are now on the last, incomplete, variable encoded block.
if self.num_vint_docs > 0 {
let num_compressed_bytes = self.doc_decoder.uncompress_vint_sorted(
self.remaining_data.as_ref(),
self.doc_offset,
self.num_vint_docs,
);
self.remaining_data.advance(num_compressed_bytes);
match self.freq_reading_option {
FreqReadingOption::NoFreq | FreqReadingOption::SkipFreq => {}
FreqReadingOption::ReadFreq => {
self.freq_decoder
.uncompress_vint_unsorted(self.remaining_data.as_ref(), self.num_vint_docs);
}
}
self.num_vint_docs = 0;
return self
.docs()
.last()
.map(|last_doc| {
if *last_doc >= target_doc {
BlockSegmentPostingsSkipResult::Success(skip_freqs)
} else {
BlockSegmentPostingsSkipResult::Terminated
}
})
.unwrap_or(BlockSegmentPostingsSkipResult::Terminated);
}
BlockSegmentPostingsSkipResult::Terminated
}
/// Advance to the next block.
///
/// Returns false iff there was no remaining blocks.
pub fn advance(&mut self) -> bool {
if self.skip_reader.advance() {
let num_bits = self.skip_reader.doc_num_bits();
let num_consumed_bytes = self.doc_decoder.uncompress_block_sorted(
self.remaining_data.as_ref(),
self.doc_offset,
num_bits,
);
self.remaining_data.advance(num_consumed_bytes);
let tf_num_bits = self.skip_reader.tf_num_bits();
match self.freq_reading_option {
FreqReadingOption::NoFreq => {}
FreqReadingOption::SkipFreq => {
let num_bytes_to_skip = compressed_block_size(tf_num_bits);
self.remaining_data.advance(num_bytes_to_skip);
}
FreqReadingOption::ReadFreq => {
let num_consumed_bytes = self
.freq_decoder
.uncompress_block_unsorted(self.remaining_data.as_ref(), tf_num_bits);
self.remaining_data.advance(num_consumed_bytes);
}
}
// it will be used as the next offset.
self.doc_offset = self.doc_decoder.output(COMPRESSION_BLOCK_SIZE - 1);
true
} else if self.num_vint_docs > 0 {
let num_compressed_bytes = self.doc_decoder.uncompress_vint_sorted(
self.remaining_data.as_ref(),
self.doc_offset,
self.num_vint_docs,
);
self.remaining_data.advance(num_compressed_bytes);
match self.freq_reading_option {
FreqReadingOption::NoFreq | FreqReadingOption::SkipFreq => {}
FreqReadingOption::ReadFreq => {
self.freq_decoder
.uncompress_vint_unsorted(self.remaining_data.as_ref(), self.num_vint_docs);
}
}
self.num_vint_docs = 0;
true
} else {
false
}
}
/// Returns an empty segment postings object
pub fn empty() -> BlockSegmentPostings {
BlockSegmentPostings {
num_vint_docs: 0,
doc_decoder: BlockDecoder::new(),
freq_decoder: BlockDecoder::with_val(1),
freq_reading_option: FreqReadingOption::NoFreq,
doc_offset: 0,
doc_freq: 0,
remaining_data: OwnedRead::new(vec![]),
skip_reader: SkipReader::new(OwnedRead::new(vec![]), IndexRecordOption::Basic),
}
}
}
impl<'b> Streamer<'b> for BlockSegmentPostings {
type Item = &'b [DocId];
fn next(&'b mut self) -> Option<&'b [DocId]> {
if self.advance() {
Some(self.docs())
} else {
None
}
}
}
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::BlockSegmentPostings;
use super::BlockSegmentPostingsSkipResult;
use super::SegmentPostings; use super::SegmentPostings;
use crate::common::HasLen; use crate::common::HasLen;
use crate::core::Index;
use crate::docset::DocSet; use crate::docset::{DocSet, TERMINATED};
use crate::fastfield::DeleteBitSet;
use crate::postings::postings::Postings; use crate::postings::postings::Postings;
use crate::schema::IndexRecordOption;
use crate::schema::Schema;
use crate::schema::Term;
use crate::schema::INDEXED;
use crate::DocId;
use crate::SkipResult;
use tantivy_fst::Streamer;
#[test] #[test]
fn test_empty_segment_postings() { fn test_empty_segment_postings() {
let mut postings = SegmentPostings::empty(); let mut postings = SegmentPostings::empty();
assert!(!postings.advance()); assert_eq!(postings.advance(), TERMINATED);
assert!(!postings.advance()); assert_eq!(postings.advance(), TERMINATED);
assert_eq!(postings.len(), 0); assert_eq!(postings.len(), 0);
} }
#[test] #[test]
#[should_panic(expected = "Have you forgotten to call `.advance()`")] fn test_empty_postings_doc_returns_terminated() {
fn test_panic_if_doc_called_before_advance() { let mut postings = SegmentPostings::empty();
SegmentPostings::empty().doc(); assert_eq!(postings.doc(), TERMINATED);
assert_eq!(postings.advance(), TERMINATED);
} }
#[test] #[test]
#[should_panic(expected = "Have you forgotten to call `.advance()`")] fn test_empty_postings_doc_term_freq_returns_0() {
fn test_panic_if_freq_called_before_advance() { let postings = SegmentPostings::empty();
SegmentPostings::empty().term_freq(); assert_eq!(postings.term_freq(), 1);
} }
#[test] #[test]
fn test_empty_block_segment_postings() { fn test_doc_freq() {
let mut postings = BlockSegmentPostings::empty(); let docs = SegmentPostings::create_from_docs(&[0, 2, 10]);
assert!(!postings.advance()); assert_eq!(docs.doc_freq(), 3);
assert_eq!(postings.doc_freq(), 0); let delete_bitset = DeleteBitSet::for_test(&[2], 12);
} assert_eq!(docs.doc_freq_given_deletes(&delete_bitset), 2);
let all_deleted = DeleteBitSet::for_test(&[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], 12);
#[test] assert_eq!(docs.doc_freq_given_deletes(&all_deleted), 0);
fn test_block_segment_postings() {
let mut block_segments = build_block_postings(&(0..100_000).collect::<Vec<u32>>());
let mut offset: u32 = 0u32;
// checking that the block before calling advance is empty
assert!(block_segments.docs().is_empty());
// checking that the `doc_freq` is correct
assert_eq!(block_segments.doc_freq(), 100_000);
while let Some(block) = block_segments.next() {
for (i, doc) in block.iter().cloned().enumerate() {
assert_eq!(offset + (i as u32), doc);
}
offset += block.len() as u32;
}
}
#[test]
fn test_skip_right_at_new_block() {
let mut doc_ids = (0..128).collect::<Vec<u32>>();
doc_ids.push(129);
doc_ids.push(130);
{
let block_segments = build_block_postings(&doc_ids);
let mut docset = SegmentPostings::from_block_postings(block_segments, None);
assert_eq!(docset.skip_next(128), SkipResult::OverStep);
assert_eq!(docset.doc(), 129);
assert!(docset.advance());
assert_eq!(docset.doc(), 130);
assert!(!docset.advance());
}
{
let block_segments = build_block_postings(&doc_ids);
let mut docset = SegmentPostings::from_block_postings(block_segments, None);
assert_eq!(docset.skip_next(129), SkipResult::Reached);
assert_eq!(docset.doc(), 129);
assert!(docset.advance());
assert_eq!(docset.doc(), 130);
assert!(!docset.advance());
}
{
let block_segments = build_block_postings(&doc_ids);
let mut docset = SegmentPostings::from_block_postings(block_segments, None);
assert_eq!(docset.skip_next(131), SkipResult::End);
}
}
fn build_block_postings(docs: &[DocId]) -> BlockSegmentPostings {
let mut schema_builder = Schema::builder();
let int_field = schema_builder.add_u64_field("id", INDEXED);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
let mut last_doc = 0u32;
for &doc in docs {
for _ in last_doc..doc {
index_writer.add_document(doc!(int_field=>1u64));
}
index_writer.add_document(doc!(int_field=>0u64));
last_doc = doc + 1;
}
index_writer.commit().unwrap();
let searcher = index.reader().unwrap().searcher();
let segment_reader = searcher.segment_reader(0);
let inverted_index = segment_reader.inverted_index(int_field);
let term = Term::from_field_u64(int_field, 0u64);
let term_info = inverted_index.get_term_info(&term).unwrap();
inverted_index.read_block_postings_from_terminfo(&term_info, IndexRecordOption::Basic)
}
#[test]
fn test_block_segment_postings_skip() {
for i in 0..4 {
let mut block_postings = build_block_postings(&[3]);
assert_eq!(
block_postings.skip_to(i),
BlockSegmentPostingsSkipResult::Success(0u32)
);
assert_eq!(
block_postings.skip_to(i),
BlockSegmentPostingsSkipResult::Terminated
);
}
let mut block_postings = build_block_postings(&[3]);
assert_eq!(
block_postings.skip_to(4u32),
BlockSegmentPostingsSkipResult::Terminated
);
}
#[test]
fn test_block_segment_postings_skip2() {
let mut docs = vec![0];
for i in 0..1300 {
docs.push((i * i / 100) + i);
}
let mut block_postings = build_block_postings(&docs[..]);
for i in vec![0, 424, 10000] {
assert_eq!(
block_postings.skip_to(i),
BlockSegmentPostingsSkipResult::Success(0u32)
);
let docs = block_postings.docs();
assert!(docs[0] <= i);
assert!(docs.last().cloned().unwrap_or(0u32) >= i);
}
assert_eq!(
block_postings.skip_to(100_000),
BlockSegmentPostingsSkipResult::Terminated
);
assert_eq!(
block_postings.skip_to(101_000),
BlockSegmentPostingsSkipResult::Terminated
);
}
#[test]
fn test_reset_block_segment_postings() {
let mut schema_builder = Schema::builder();
let int_field = schema_builder.add_u64_field("id", INDEXED);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
// Create two posting lists: one containing even numbers,
// the other containing odd numbers.
for i in 0..6 {
let doc = doc!(int_field=> (i % 2) as u64);
index_writer.add_document(doc);
}
index_writer.commit().unwrap();
let searcher = index.reader().unwrap().searcher();
let segment_reader = searcher.segment_reader(0);
let mut block_segments;
{
let term = Term::from_field_u64(int_field, 0u64);
let inverted_index = segment_reader.inverted_index(int_field);
let term_info = inverted_index.get_term_info(&term).unwrap();
block_segments = inverted_index
.read_block_postings_from_terminfo(&term_info, IndexRecordOption::Basic);
}
assert!(block_segments.advance());
assert_eq!(block_segments.docs(), &[0, 2, 4]);
{
let term = Term::from_field_u64(int_field, 1u64);
let inverted_index = segment_reader.inverted_index(int_field);
let term_info = inverted_index.get_term_info(&term).unwrap();
inverted_index.reset_block_postings_from_terminfo(&term_info, &mut block_segments);
}
assert!(block_segments.advance());
assert_eq!(block_segments.docs(), &[1, 3, 5]);
} }
} }


@@ -3,14 +3,16 @@ use crate::common::{BinarySerializable, VInt};
use crate::common::{CompositeWrite, CountingWriter}; use crate::common::{CompositeWrite, CountingWriter};
use crate::core::Segment; use crate::core::Segment;
use crate::directory::WritePtr; use crate::directory::WritePtr;
use crate::fieldnorm::FieldNormReader;
use crate::positions::PositionSerializer; use crate::positions::PositionSerializer;
use crate::postings::compression::{BlockEncoder, VIntEncoder, COMPRESSION_BLOCK_SIZE}; use crate::postings::compression::{BlockEncoder, VIntEncoder, COMPRESSION_BLOCK_SIZE};
use crate::postings::skip::SkipSerializer; use crate::postings::skip::SkipSerializer;
use crate::postings::USE_SKIP_INFO_LIMIT; use crate::query::BM25Weight;
use crate::schema::Schema;
use crate::schema::{Field, FieldEntry, FieldType}; use crate::schema::{Field, FieldEntry, FieldType};
use crate::schema::{IndexRecordOption, Schema};
use crate::termdict::{TermDictionaryBuilder, TermOrdinal}; use crate::termdict::{TermDictionaryBuilder, TermOrdinal};
use crate::DocId; use crate::{DocId, Score};
use std::cmp::Ordering;
use std::io::{self, Write}; use std::io::{self, Write};
/// `InvertedIndexSerializer` is in charge of serializing /// `InvertedIndexSerializer` is in charge of serializing
@@ -90,20 +92,22 @@ impl InvertedIndexSerializer {
&mut self, &mut self,
field: Field, field: Field,
total_num_tokens: u64, total_num_tokens: u64,
fieldnorm_reader: Option<FieldNormReader>,
) -> io::Result<FieldSerializer<'_>> { ) -> io::Result<FieldSerializer<'_>> {
let field_entry: &FieldEntry = self.schema.get_field_entry(field); let field_entry: &FieldEntry = self.schema.get_field_entry(field);
let term_dictionary_write = self.terms_write.for_field(field); let term_dictionary_write = self.terms_write.for_field(field);
let postings_write = self.postings_write.for_field(field); let postings_write = self.postings_write.for_field(field);
total_num_tokens.serialize(postings_write)?;
let positions_write = self.positions_write.for_field(field); let positions_write = self.positions_write.for_field(field);
let positionsidx_write = self.positionsidx_write.for_field(field); let positionsidx_write = self.positionsidx_write.for_field(field);
let field_type: FieldType = (*field_entry.field_type()).clone(); let field_type: FieldType = (*field_entry.field_type()).clone();
FieldSerializer::create( FieldSerializer::create(
&field_type, &field_type,
total_num_tokens,
term_dictionary_write, term_dictionary_write,
postings_write, postings_write,
positions_write, positions_write,
positionsidx_write, positionsidx_write,
fieldnorm_reader,
) )
} }
@@ -131,26 +135,32 @@ pub struct FieldSerializer<'a> {
impl<'a> FieldSerializer<'a> { impl<'a> FieldSerializer<'a> {
fn create( fn create(
field_type: &FieldType, field_type: &FieldType,
total_num_tokens: u64,
term_dictionary_write: &'a mut CountingWriter<WritePtr>, term_dictionary_write: &'a mut CountingWriter<WritePtr>,
postings_write: &'a mut CountingWriter<WritePtr>, postings_write: &'a mut CountingWriter<WritePtr>,
positions_write: &'a mut CountingWriter<WritePtr>, positions_write: &'a mut CountingWriter<WritePtr>,
positionsidx_write: &'a mut CountingWriter<WritePtr>, positionsidx_write: &'a mut CountingWriter<WritePtr>,
fieldnorm_reader: Option<FieldNormReader>,
) -> io::Result<FieldSerializer<'a>> { ) -> io::Result<FieldSerializer<'a>> {
let (term_freq_enabled, position_enabled): (bool, bool) = match field_type { total_num_tokens.serialize(postings_write)?;
let mode = match field_type {
FieldType::Str(ref text_options) => { FieldType::Str(ref text_options) => {
if let Some(text_indexing_options) = text_options.get_indexing_options() { if let Some(text_indexing_options) = text_options.get_indexing_options() {
let index_option = text_indexing_options.index_option(); text_indexing_options.index_option()
(index_option.has_freq(), index_option.has_positions())
} else { } else {
(false, false) IndexRecordOption::Basic
} }
} }
_ => (false, false), _ => IndexRecordOption::Basic,
}; };
let term_dictionary_builder = TermDictionaryBuilder::create(term_dictionary_write)?; let term_dictionary_builder = TermDictionaryBuilder::create(term_dictionary_write)?;
let average_fieldnorm = fieldnorm_reader
.as_ref()
.map(|ff_reader| (total_num_tokens as Score / ff_reader.num_docs() as Score))
.unwrap_or(0.0);
let postings_serializer = let postings_serializer =
PostingsSerializer::new(postings_write, term_freq_enabled, position_enabled); PostingsSerializer::new(postings_write, average_fieldnorm, mode, fieldnorm_reader);
let positions_serializer_opt = if position_enabled { let positions_serializer_opt = if mode.has_positions() {
Some(PositionSerializer::new(positions_write, positionsidx_write)) Some(PositionSerializer::new(positions_write, positionsidx_write))
} else { } else {
None None
@@ -182,18 +192,20 @@ impl<'a> FieldSerializer<'a> {
/// Starts the postings for a new term. /// Starts the postings for a new term.
/// * term - the term. It needs to come after the previous term according /// * term - the term. It needs to come after the previous term according
/// to the lexicographical order. /// to the lexicographical order.
/// * doc_freq - the number of documents containing the term. /// * term_doc_freq - the number of documents containing the term.
pub fn new_term(&mut self, term: &[u8]) -> io::Result<TermOrdinal> { pub fn new_term(&mut self, term: &[u8], term_doc_freq: u32) -> io::Result<TermOrdinal> {
assert!( assert!(
!self.term_open, !self.term_open,
"Called new_term, while the previous term was not closed." "Called new_term, while the previous term was not closed."
); );
self.term_open = true; self.term_open = true;
self.postings_serializer.clear(); self.postings_serializer.clear();
self.current_term_info = self.current_term_info(); self.current_term_info = self.current_term_info();
self.term_dictionary_builder.insert_key(term)?; self.term_dictionary_builder.insert_key(term)?;
let term_ordinal = self.num_terms; let term_ordinal = self.num_terms;
self.num_terms += 1; self.num_terms += 1;
self.postings_serializer.new_term(term_doc_freq);
Ok(term_ordinal) Ok(term_ordinal)
} }
@@ -305,15 +317,21 @@ pub struct PostingsSerializer<W: Write> {
postings_write: Vec<u8>, postings_write: Vec<u8>,
skip_write: SkipSerializer, skip_write: SkipSerializer,
termfreq_enabled: bool, mode: IndexRecordOption,
termfreq_sum_enabled: bool, fieldnorm_reader: Option<FieldNormReader>,
bm25_weight: Option<BM25Weight>,
avg_fieldnorm: Score, // Average number of term in the field for that segment.
// this value is used to compute the block wand information.
} }
impl<W: Write> PostingsSerializer<W> { impl<W: Write> PostingsSerializer<W> {
pub fn new( pub fn new(
write: W, write: W,
termfreq_enabled: bool, avg_fieldnorm: Score,
termfreq_sum_enabled: bool, mode: IndexRecordOption,
fieldnorm_reader: Option<FieldNormReader>,
) -> PostingsSerializer<W> { ) -> PostingsSerializer<W> {
PostingsSerializer { PostingsSerializer {
output_write: CountingWriter::wrap(write), output_write: CountingWriter::wrap(write),
@@ -325,11 +343,32 @@ impl<W: Write> PostingsSerializer<W> {
skip_write: SkipSerializer::new(), skip_write: SkipSerializer::new(),
last_doc_id_encoded: 0u32, last_doc_id_encoded: 0u32,
termfreq_enabled, mode,
termfreq_sum_enabled,
fieldnorm_reader,
bm25_weight: None,
avg_fieldnorm,
} }
} }
/// Returns the number of documents in the segment currently being serialized.
/// This function may return `None` if there are no fieldnorms for that field.
fn num_docs_in_segment(&self) -> Option<u32> {
self.fieldnorm_reader
.as_ref()
.map(|reader| reader.num_docs())
}
pub fn new_term(&mut self, term_doc_freq: u32) {
if !self.mode.has_freq() {
return;
}
self.bm25_weight = self.num_docs_in_segment().map(|num_docs| {
BM25Weight::for_one_term(term_doc_freq as u64, num_docs as u64, self.avg_fieldnorm)
});
}
fn write_block(&mut self) { fn write_block(&mut self) {
{ {
// encode the doc ids // encode the doc ids
@@ -342,17 +381,43 @@ impl<W: Write> PostingsSerializer<W> {
// last el block 0, offset block 1, // last el block 0, offset block 1,
self.postings_write.extend(block_encoded); self.postings_write.extend(block_encoded);
} }
if self.termfreq_enabled { if self.mode.has_freq() {
// encode the term_freqs
let (num_bits, block_encoded): (u8, &[u8]) = self let (num_bits, block_encoded): (u8, &[u8]) = self
.block_encoder .block_encoder
.compress_block_unsorted(&self.block.term_freqs()); .compress_block_unsorted(&self.block.term_freqs());
self.postings_write.extend(block_encoded); self.postings_write.extend(block_encoded);
self.skip_write.write_term_freq(num_bits); self.skip_write.write_term_freq(num_bits);
if self.termfreq_sum_enabled { if self.mode.has_positions() {
// We serialize the sum of term freqs within the skip information
// in order to navigate through positions.
let sum_freq = self.block.term_freqs().iter().cloned().sum(); let sum_freq = self.block.term_freqs().iter().cloned().sum();
self.skip_write.write_total_term_freq(sum_freq); self.skip_write.write_total_term_freq(sum_freq);
} }
let mut blockwand_params = (0u8, 0u32);
if let Some(bm25_weight) = self.bm25_weight.as_ref() {
if let Some(fieldnorm_reader) = self.fieldnorm_reader.as_ref() {
let docs = self.block.doc_ids().iter().cloned();
let term_freqs = self.block.term_freqs().iter().cloned();
let fieldnorms = docs.map(|doc| fieldnorm_reader.fieldnorm_id(doc));
blockwand_params = fieldnorms
.zip(term_freqs)
.max_by(
|(left_fieldnorm_id, left_term_freq),
(right_fieldnorm_id, right_term_freq)| {
let left_score =
bm25_weight.tf_factor(*left_fieldnorm_id, *left_term_freq);
let right_score =
bm25_weight.tf_factor(*right_fieldnorm_id, *right_term_freq);
left_score
.partial_cmp(&right_score)
.unwrap_or(Ordering::Equal)
},
)
.unwrap();
}
}
let (fieldnorm_id, term_freq) = blockwand_params;
self.skip_write.write_blockwand_max(fieldnorm_id, term_freq);
} }
self.block.clear(); self.block.clear();
} }
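Note: the `blockwand_params` selection above keeps, for every full bit-packed block, the `(fieldnorm_id, term_freq)` pair of the document that maximizes the BM25 term-frequency factor. Under the classical BM25 shape that the surrounding `tf_factor`/`score` calls suggest (a hedged reading, not a quote of the implementation), the per-block upper bound later reconstructed by `SkipReader::block_max_score` is:

```latex
\mathrm{block\_max}(t) \;=\; w_t \cdot \max_{d \in \mathrm{block}}
  \frac{\mathrm{tf}_{t,d}}{\mathrm{tf}_{t,d} + k_1\left(1 - b + b\,\tfrac{|d|}{\mathrm{avgdl}}\right)}
```

Storing the arg-max pair rather than a precomputed float lets the reader re-evaluate the bound with whatever `BM25Weight` the query supplies.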
@@ -383,7 +448,7 @@ impl<W: Write> PostingsSerializer<W> {
self.postings_write.write_all(block_encoded)?; self.postings_write.write_all(block_encoded)?;
} }
// ... Idem for term frequencies // ... Idem for term frequencies
if self.termfreq_enabled { if self.mode.has_freq() {
let block_encoded = self let block_encoded = self
.block_encoder .block_encoder
.compress_vint_unsorted(self.block.term_freqs()); .compress_vint_unsorted(self.block.term_freqs());
@@ -391,7 +456,7 @@ impl<W: Write> PostingsSerializer<W> {
} }
self.block.clear(); self.block.clear();
} }
if doc_freq >= USE_SKIP_INFO_LIMIT { if doc_freq >= COMPRESSION_BLOCK_SIZE as u32 {
let skip_data = self.skip_write.data(); let skip_data = self.skip_write.data();
VInt(skip_data.len() as u64).serialize(&mut self.output_write)?; VInt(skip_data.len() as u64).serialize(&mut self.output_write)?;
self.output_write.write_all(skip_data)?; self.output_write.write_all(skip_data)?;
@@ -401,6 +466,7 @@ impl<W: Write> PostingsSerializer<W> {
} }
self.skip_write.clear(); self.skip_write.clear();
self.postings_write.clear(); self.postings_write.clear();
self.bm25_weight = None;
Ok(()) Ok(())
} }


@@ -1,8 +1,9 @@
use crate::common::BinarySerializable; use crate::common::{read_u32_vint_no_advance, serialize_vint_u32, BinarySerializable};
use crate::postings::compression::COMPRESSION_BLOCK_SIZE; use crate::directory::OwnedBytes;
use crate::postings::compression::{compressed_block_size, COMPRESSION_BLOCK_SIZE};
use crate::query::BM25Weight;
use crate::schema::IndexRecordOption; use crate::schema::IndexRecordOption;
use crate::DocId; use crate::{DocId, Score, TERMINATED};
use owned_read::OwnedRead;
pub struct SkipSerializer { pub struct SkipSerializer {
buffer: Vec<u8>, buffer: Vec<u8>,
@@ -39,6 +40,13 @@ impl SkipSerializer {
.expect("Should never fail"); .expect("Should never fail");
} }
pub fn write_blockwand_max(&mut self, fieldnorm_id: u8, term_freq: u32) {
self.buffer.push(fieldnorm_id);
let mut buf = [0u8; 8];
let bytes = serialize_vint_u32(term_freq, &mut buf);
self.buffer.extend_from_slice(bytes);
}
pub fn data(&self) -> &[u8] { pub fn data(&self) -> &[u8] {
&self.buffer[..] &self.buffer[..]
} }
@@ -49,81 +57,210 @@ impl SkipSerializer {
} }
} }
#[derive(Clone)]
pub(crate) struct SkipReader { pub(crate) struct SkipReader {
doc: DocId, last_doc_in_block: DocId,
owned_read: OwnedRead, pub(crate) last_doc_in_previous_block: DocId,
doc_num_bits: u8, owned_read: OwnedBytes,
tf_num_bits: u8,
tf_sum: u32,
skip_info: IndexRecordOption, skip_info: IndexRecordOption,
byte_offset: usize,
remaining_docs: u32, // number of docs remaining, including the
// documents in the current block.
block_info: BlockInfo,
position_offset: u64,
}
#[derive(Clone, Eq, PartialEq, Copy, Debug)]
pub(crate) enum BlockInfo {
BitPacked {
doc_num_bits: u8,
tf_num_bits: u8,
tf_sum: u32,
block_wand_fieldnorm_id: u8,
block_wand_term_freq: u32,
},
VInt {
num_docs: u32,
},
}
impl Default for BlockInfo {
fn default() -> Self {
BlockInfo::VInt { num_docs: 0u32 }
}
} }
impl SkipReader { impl SkipReader {
pub fn new(data: OwnedRead, skip_info: IndexRecordOption) -> SkipReader { pub fn new(data: OwnedBytes, doc_freq: u32, skip_info: IndexRecordOption) -> SkipReader {
SkipReader { let mut skip_reader = SkipReader {
doc: 0u32, last_doc_in_block: if doc_freq >= COMPRESSION_BLOCK_SIZE as u32 {
0
} else {
TERMINATED
},
last_doc_in_previous_block: 0u32,
owned_read: data, owned_read: data,
skip_info, skip_info,
doc_num_bits: 0u8, block_info: BlockInfo::VInt { num_docs: doc_freq },
tf_num_bits: 0u8, byte_offset: 0,
tf_sum: 0u32, remaining_docs: doc_freq,
position_offset: 0u64,
};
if doc_freq >= COMPRESSION_BLOCK_SIZE as u32 {
skip_reader.read_block_info();
}
skip_reader
}
pub fn reset(&mut self, data: OwnedBytes, doc_freq: u32) {
self.last_doc_in_block = if doc_freq >= COMPRESSION_BLOCK_SIZE as u32 {
0
} else {
TERMINATED
};
self.last_doc_in_previous_block = 0u32;
self.owned_read = data;
self.block_info = BlockInfo::VInt { num_docs: doc_freq };
self.byte_offset = 0;
self.remaining_docs = doc_freq;
self.position_offset = 0u64;
if doc_freq >= COMPRESSION_BLOCK_SIZE as u32 {
self.read_block_info();
} }
} }
pub fn reset(&mut self, data: OwnedRead) { // Returns the block max score for this block if available.
self.doc = 0u32; //
self.owned_read = data; // The block max score is available for all full bit-packed blocks,
self.doc_num_bits = 0u8; // but not available for the last VInt-encoded incomplete block.
self.tf_num_bits = 0u8; pub fn block_max_score(&self, bm25_weight: &BM25Weight) -> Option<Score> {
self.tf_sum = 0u32; match self.block_info {
BlockInfo::BitPacked {
block_wand_fieldnorm_id,
block_wand_term_freq,
..
} => Some(bm25_weight.score(block_wand_fieldnorm_id, block_wand_term_freq)),
BlockInfo::VInt { .. } => None,
}
} }
pub fn total_block_len(&self) -> usize { pub(crate) fn last_doc_in_block(&self) -> DocId {
(self.doc_num_bits + self.tf_num_bits) as usize * COMPRESSION_BLOCK_SIZE / 8 self.last_doc_in_block
} }
pub fn doc(&self) -> DocId { pub fn position_offset(&self) -> u64 {
self.doc self.position_offset
} }
pub fn doc_num_bits(&self) -> u8 { #[inline(always)]
self.doc_num_bits pub fn byte_offset(&self) -> usize {
self.byte_offset
} }
/// Number of bits used to encode term frequencies fn read_block_info(&mut self) {
/// let doc_delta = {
/// 0 if term frequencies are not enabled. let bytes = self.owned_read.as_slice();
pub fn tf_num_bits(&self) -> u8 { let mut buf = [0; 4];
self.tf_num_bits buf.copy_from_slice(&bytes[..4]);
} u32::from_le_bytes(buf)
};
self.last_doc_in_block += doc_delta as DocId;
let doc_num_bits = self.owned_read.as_slice()[4];
pub fn tf_sum(&self) -> u32 { match self.skip_info {
self.tf_sum IndexRecordOption::Basic => {
} self.owned_read.advance(5);
self.block_info = BlockInfo::BitPacked {
pub fn advance(&mut self) -> bool { doc_num_bits,
if self.owned_read.as_ref().is_empty() { tf_num_bits: 0,
false tf_sum: 0,
} else { block_wand_fieldnorm_id: 0,
let doc_delta = u32::deserialize(&mut self.owned_read).expect("Skip data corrupted"); block_wand_term_freq: 0,
self.doc += doc_delta as DocId; };
self.doc_num_bits = self.owned_read.get(0);
match self.skip_info {
IndexRecordOption::Basic => {
self.owned_read.advance(1);
}
IndexRecordOption::WithFreqs => {
self.tf_num_bits = self.owned_read.get(1);
self.owned_read.advance(2);
}
IndexRecordOption::WithFreqsAndPositions => {
self.tf_num_bits = self.owned_read.get(1);
self.owned_read.advance(2);
self.tf_sum =
u32::deserialize(&mut self.owned_read).expect("Failed reading tf_sum");
}
} }
true IndexRecordOption::WithFreqs => {
let bytes = self.owned_read.as_slice();
let tf_num_bits = bytes[5];
let block_wand_fieldnorm_id = bytes[6];
let (block_wand_term_freq, num_bytes) = read_u32_vint_no_advance(&bytes[7..]);
self.owned_read.advance(7 + num_bytes);
self.block_info = BlockInfo::BitPacked {
doc_num_bits,
tf_num_bits,
tf_sum: 0,
block_wand_fieldnorm_id,
block_wand_term_freq,
};
}
IndexRecordOption::WithFreqsAndPositions => {
let bytes = self.owned_read.as_slice();
let tf_num_bits = bytes[5];
let tf_sum = {
let mut buf = [0; 4];
buf.copy_from_slice(&bytes[6..10]);
u32::from_le_bytes(buf)
};
let block_wand_fieldnorm_id = bytes[10];
let (block_wand_term_freq, num_bytes) = read_u32_vint_no_advance(&bytes[11..]);
self.owned_read.advance(11 + num_bytes);
self.block_info = BlockInfo::BitPacked {
doc_num_bits,
tf_num_bits,
tf_sum,
block_wand_fieldnorm_id,
block_wand_term_freq,
};
}
}
}
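Note: `read_block_info` above fixes the per-block skip entry layout for each `IndexRecordOption`. As a reading aid, here is a hedged, standalone sketch of an encoder matching the `WithFreqsAndPositions` branch; the function and the inline VInt routine are illustrative only and are not tantivy's `SkipSerializer` API.

```rust
// One skip entry, mirroring the `WithFreqsAndPositions` branch above:
//   bytes [0..4]   doc value written by `write_doc` (little-endian u32)
//   byte  [4]      doc_num_bits
//   byte  [5]      tf_num_bits
//   bytes [6..10]  tf_sum (little-endian u32)
//   byte  [10]     block_wand_fieldnorm_id
//   bytes [11..]   block_wand_term_freq (VInt)
fn encode_skip_entry_with_positions(
    doc: u32,
    doc_num_bits: u8,
    tf_num_bits: u8,
    tf_sum: u32,
    block_wand_fieldnorm_id: u8,
    block_wand_term_freq: u32,
) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&doc.to_le_bytes());
    out.push(doc_num_bits);
    out.push(tf_num_bits);
    out.extend_from_slice(&tf_sum.to_le_bytes());
    out.push(block_wand_fieldnorm_id);
    // Illustrative VInt: 7 data bits per byte, high bit marking the last byte.
    let mut value = block_wand_term_freq;
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte | 0x80);
            break;
        }
        out.push(byte);
    }
    out
}
```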
pub fn block_info(&self) -> BlockInfo {
self.block_info
}
/// Advances the skip reader to the block that may contain the target.
///
/// If the target is larger than all documents, the skip reader
/// then advances to the last VInt-encoded block.
pub fn seek(&mut self, target: DocId) -> bool {
if self.last_doc_in_block() >= target {
return false;
}
loop {
self.advance();
if self.last_doc_in_block() >= target {
return true;
}
}
}
pub fn advance(&mut self) {
match self.block_info {
BlockInfo::BitPacked {
doc_num_bits,
tf_num_bits,
tf_sum,
..
} => {
self.remaining_docs -= COMPRESSION_BLOCK_SIZE as u32;
self.byte_offset += compressed_block_size(doc_num_bits + tf_num_bits);
self.position_offset += tf_sum as u64;
}
BlockInfo::VInt { num_docs } => {
debug_assert_eq!(num_docs, self.remaining_docs);
self.remaining_docs = 0;
self.byte_offset = std::usize::MAX;
}
}
self.last_doc_in_previous_block = self.last_doc_in_block;
if self.remaining_docs >= COMPRESSION_BLOCK_SIZE as u32 {
self.read_block_info();
} else {
self.last_doc_in_block = TERMINATED;
self.block_info = BlockInfo::VInt {
num_docs: self.remaining_docs,
};
} }
} }
} }
@@ -131,9 +268,11 @@ impl SkipReader {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::BlockInfo;
use super::IndexRecordOption; use super::IndexRecordOption;
use super::{SkipReader, SkipSerializer}; use super::{SkipReader, SkipSerializer};
use owned_read::OwnedRead; use crate::directory::OwnedBytes;
use crate::postings::compression::COMPRESSION_BLOCK_SIZE;
#[test] #[test]
fn test_skip_with_freq() { fn test_skip_with_freq() {
@@ -141,20 +280,44 @@ mod tests {
let mut skip_serializer = SkipSerializer::new(); let mut skip_serializer = SkipSerializer::new();
skip_serializer.write_doc(1u32, 2u8); skip_serializer.write_doc(1u32, 2u8);
skip_serializer.write_term_freq(3u8); skip_serializer.write_term_freq(3u8);
skip_serializer.write_blockwand_max(13u8, 3u32);
skip_serializer.write_doc(5u32, 5u8); skip_serializer.write_doc(5u32, 5u8);
skip_serializer.write_term_freq(2u8); skip_serializer.write_term_freq(2u8);
skip_serializer.write_blockwand_max(8u8, 2u32);
skip_serializer.data().to_owned() skip_serializer.data().to_owned()
}; };
let mut skip_reader = SkipReader::new(OwnedRead::new(buf), IndexRecordOption::WithFreqs); let doc_freq = 3u32 + (COMPRESSION_BLOCK_SIZE * 2) as u32;
assert!(skip_reader.advance()); let mut skip_reader =
assert_eq!(skip_reader.doc(), 1u32); SkipReader::new(OwnedBytes::new(buf), doc_freq, IndexRecordOption::WithFreqs);
assert_eq!(skip_reader.doc_num_bits(), 2u8); assert_eq!(skip_reader.last_doc_in_block(), 1u32);
assert_eq!(skip_reader.tf_num_bits(), 3u8); assert_eq!(
assert!(skip_reader.advance()); skip_reader.block_info,
assert_eq!(skip_reader.doc(), 5u32); BlockInfo::BitPacked {
assert_eq!(skip_reader.doc_num_bits(), 5u8); doc_num_bits: 2u8,
assert_eq!(skip_reader.tf_num_bits(), 2u8); tf_num_bits: 3u8,
assert!(!skip_reader.advance()); tf_sum: 0,
block_wand_fieldnorm_id: 13,
block_wand_term_freq: 3
}
);
skip_reader.advance();
assert_eq!(skip_reader.last_doc_in_block(), 5u32);
assert_eq!(
skip_reader.block_info(),
BlockInfo::BitPacked {
doc_num_bits: 5u8,
tf_num_bits: 2u8,
tf_sum: 0,
block_wand_fieldnorm_id: 8,
block_wand_term_freq: 2
}
);
skip_reader.advance();
assert_eq!(skip_reader.block_info(), BlockInfo::VInt { num_docs: 3u32 });
skip_reader.advance();
assert_eq!(skip_reader.block_info(), BlockInfo::VInt { num_docs: 0u32 });
skip_reader.advance();
assert_eq!(skip_reader.block_info(), BlockInfo::VInt { num_docs: 0u32 });
} }
#[test] #[test]
@@ -165,13 +328,62 @@ mod tests {
skip_serializer.write_doc(5u32, 5u8); skip_serializer.write_doc(5u32, 5u8);
skip_serializer.data().to_owned() skip_serializer.data().to_owned()
}; };
let mut skip_reader = SkipReader::new(OwnedRead::new(buf), IndexRecordOption::Basic); let doc_freq = 3u32 + (COMPRESSION_BLOCK_SIZE * 2) as u32;
assert!(skip_reader.advance()); let mut skip_reader =
assert_eq!(skip_reader.doc(), 1u32); SkipReader::new(OwnedBytes::new(buf), doc_freq, IndexRecordOption::Basic);
assert_eq!(skip_reader.doc_num_bits(), 2u8); assert_eq!(skip_reader.last_doc_in_block(), 1u32);
assert!(skip_reader.advance()); assert_eq!(
assert_eq!(skip_reader.doc(), 5u32); skip_reader.block_info(),
assert_eq!(skip_reader.doc_num_bits(), 5u8); BlockInfo::BitPacked {
assert!(!skip_reader.advance()); doc_num_bits: 2u8,
tf_num_bits: 0,
tf_sum: 0u32,
block_wand_fieldnorm_id: 0,
block_wand_term_freq: 0
}
);
skip_reader.advance();
assert_eq!(skip_reader.last_doc_in_block(), 5u32);
assert_eq!(
skip_reader.block_info(),
BlockInfo::BitPacked {
doc_num_bits: 5u8,
tf_num_bits: 0,
tf_sum: 0u32,
block_wand_fieldnorm_id: 0,
block_wand_term_freq: 0
}
);
skip_reader.advance();
assert_eq!(skip_reader.block_info(), BlockInfo::VInt { num_docs: 3u32 });
skip_reader.advance();
assert_eq!(skip_reader.block_info(), BlockInfo::VInt { num_docs: 0u32 });
skip_reader.advance();
assert_eq!(skip_reader.block_info(), BlockInfo::VInt { num_docs: 0u32 });
}
#[test]
fn test_skip_multiple_of_block_size() {
let buf = {
let mut skip_serializer = SkipSerializer::new();
skip_serializer.write_doc(1u32, 2u8);
skip_serializer.data().to_owned()
};
let doc_freq = COMPRESSION_BLOCK_SIZE as u32;
let mut skip_reader =
SkipReader::new(OwnedBytes::new(buf), doc_freq, IndexRecordOption::Basic);
assert_eq!(skip_reader.last_doc_in_block(), 1u32);
assert_eq!(
skip_reader.block_info(),
BlockInfo::BitPacked {
doc_num_bits: 2u8,
tf_num_bits: 0,
tf_sum: 0u32,
block_wand_fieldnorm_id: 0,
block_wand_term_freq: 0
}
);
skip_reader.advance();
assert_eq!(skip_reader.block_info(), BlockInfo::VInt { num_docs: 0u32 });
} }
} }


@@ -206,8 +206,8 @@ mod tests {
fn test_stack_long() { fn test_stack_long() {
let mut heap = MemoryArena::new(); let mut heap = MemoryArena::new();
let mut stack = ExpUnrolledLinkedList::new(); let mut stack = ExpUnrolledLinkedList::new();
let source: Vec<u32> = (0..100).collect(); let data: Vec<u32> = (0..100).collect();
for &el in &source { for &el in &data {
assert!(stack assert!(stack
.writer(&mut heap) .writer(&mut heap)
.write_u32::<LittleEndian>(el) .write_u32::<LittleEndian>(el)
@@ -221,7 +221,7 @@ mod tests {
result.push(LittleEndian::read_u32(&remaining[..4])); result.push(LittleEndian::read_u32(&remaining[..4]));
remaining = &remaining[4..]; remaining = &remaining[4..];
} }
assert_eq!(&result[..], &source[..]); assert_eq!(&result[..], &data[..]);
} }
#[test] #[test]


@@ -1,6 +1,4 @@
use murmurhash32; use murmurhash32::murmurhash2;
use self::murmurhash32::murmurhash2;
use super::{Addr, MemoryArena}; use super::{Addr, MemoryArena};
use crate::postings::stacker::memory_arena::store; use crate::postings::stacker::memory_arena::store;


@@ -1,6 +1,6 @@
use crate::core::Searcher; use crate::core::Searcher;
use crate::core::SegmentReader; use crate::core::SegmentReader;
use crate::docset::DocSet; use crate::docset::{DocSet, TERMINATED};
use crate::query::boost_query::BoostScorer; use crate::query::boost_query::BoostScorer;
use crate::query::explanation::does_not_match; use crate::query::explanation::does_not_match;
use crate::query::{Explanation, Query, Scorer, Weight}; use crate::query::{Explanation, Query, Scorer, Weight};
@@ -9,7 +9,7 @@ use crate::Score;
/// Query that matches all of the documents. /// Query that matches all of the documents.
/// ///
/// All of the documents get the score 1f32. /// All of the documents get the score 1.0.
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
pub struct AllQuery; pub struct AllQuery;
@@ -23,9 +23,8 @@ impl Query for AllQuery {
pub struct AllWeight; pub struct AllWeight;
impl Weight for AllWeight { impl Weight for AllWeight {
fn scorer(&self, reader: &SegmentReader, boost: f32) -> crate::Result<Box<dyn Scorer>> { fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
let all_scorer = AllScorer { let all_scorer = AllScorer {
state: State::NotStarted,
doc: 0u32, doc: 0u32,
max_doc: reader.max_doc(), max_doc: reader.max_doc(),
}; };
@@ -36,43 +35,24 @@ impl Weight for AllWeight {
if doc >= reader.max_doc() { if doc >= reader.max_doc() {
return Err(does_not_match(doc)); return Err(does_not_match(doc));
} }
Ok(Explanation::new("AllQuery", 1f32)) Ok(Explanation::new("AllQuery", 1.0))
} }
} }
enum State {
NotStarted,
Started,
Finished,
}
/// Scorer associated to the `AllQuery` query. /// Scorer associated to the `AllQuery` query.
pub struct AllScorer { pub struct AllScorer {
state: State,
doc: DocId, doc: DocId,
max_doc: DocId, max_doc: DocId,
} }
impl DocSet for AllScorer { impl DocSet for AllScorer {
fn advance(&mut self) -> bool { fn advance(&mut self) -> DocId {
match self.state { if self.doc + 1 >= self.max_doc {
State::NotStarted => { self.doc = TERMINATED;
self.state = State::Started; return TERMINATED;
self.doc = 0;
}
State::Started => {
self.doc += 1u32;
}
State::Finished => {
return false;
}
}
if self.doc < self.max_doc {
true
} else {
self.state = State::Finished;
false
} }
self.doc += 1;
self.doc
} }
fn doc(&self) -> DocId { fn doc(&self) -> DocId {
@@ -86,13 +66,14 @@ impl DocSet for AllScorer {
impl Scorer for AllScorer { impl Scorer for AllScorer {
fn score(&mut self) -> Score { fn score(&mut self) -> Score {
1f32 1.0
} }
} }
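Note: throughout this diff, `DocSet::advance` changes from returning `bool` to returning the next `DocId`, with the sentinel `TERMINATED` replacing `SkipResult`, and a freshly built scorer is already positioned on its first document. A hedged sketch of the resulting consumption loop, using the root re-exports visible elsewhere in the diff:

```rust
// Post-refactor DocSet iteration: doc() is valid immediately (or equals
// TERMINATED for an empty set), and advance() returns the new doc id.
use tantivy::{DocSet, TERMINATED};

fn collect_all_docs<D: DocSet>(docset: &mut D) -> Vec<u32> {
    let mut docs = Vec::new();
    let mut doc = docset.doc();
    while doc != TERMINATED {
        docs.push(doc);
        doc = docset.advance();
    }
    docs
}
```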
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::AllQuery; use super::AllQuery;
use crate::docset::TERMINATED;
use crate::query::Query; use crate::query::Query;
use crate::schema::{Schema, TEXT}; use crate::schema::{Schema, TEXT};
use crate::Index; use crate::Index;
@@ -102,7 +83,7 @@ mod tests {
let field = schema_builder.add_text_field("text", TEXT); let field = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 10_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.add_document(doc!(field=>"aaa")); index_writer.add_document(doc!(field=>"aaa"));
index_writer.add_document(doc!(field=>"bbb")); index_writer.add_document(doc!(field=>"bbb"));
index_writer.commit().unwrap(); index_writer.commit().unwrap();
@@ -119,19 +100,17 @@ mod tests {
let weight = AllQuery.weight(&searcher, false).unwrap(); let weight = AllQuery.weight(&searcher, false).unwrap();
{ {
let reader = searcher.segment_reader(0); let reader = searcher.segment_reader(0);
let mut scorer = weight.scorer(reader, 1.0f32).unwrap(); let mut scorer = weight.scorer(reader, 1.0).unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32); assert_eq!(scorer.doc(), 0u32);
assert!(scorer.advance()); assert_eq!(scorer.advance(), 1u32);
assert_eq!(scorer.doc(), 1u32); assert_eq!(scorer.doc(), 1u32);
assert!(!scorer.advance()); assert_eq!(scorer.advance(), TERMINATED);
} }
{ {
let reader = searcher.segment_reader(1); let reader = searcher.segment_reader(1);
let mut scorer = weight.scorer(reader, 1.0f32).unwrap(); let mut scorer = weight.scorer(reader, 1.0).unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32); assert_eq!(scorer.doc(), 0u32);
assert!(!scorer.advance()); assert_eq!(scorer.advance(), TERMINATED);
} }
} }
@@ -143,16 +122,14 @@ mod tests {
let weight = AllQuery.weight(&searcher, false).unwrap(); let weight = AllQuery.weight(&searcher, false).unwrap();
let reader = searcher.segment_reader(0); let reader = searcher.segment_reader(0);
{ {
let mut scorer = weight.scorer(reader, 2.0f32).unwrap(); let mut scorer = weight.scorer(reader, 2.0).unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32); assert_eq!(scorer.doc(), 0u32);
assert_eq!(scorer.score(), 2.0f32); assert_eq!(scorer.score(), 2.0);
} }
{ {
let mut scorer = weight.scorer(reader, 1.5f32).unwrap(); let mut scorer = weight.scorer(reader, 1.5).unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32); assert_eq!(scorer.doc(), 0u32);
assert_eq!(scorer.score(), 1.5f32); assert_eq!(scorer.score(), 1.5);
} }
} }
} }


@@ -5,9 +5,8 @@ use crate::query::{BitSetDocSet, Explanation};
use crate::query::{Scorer, Weight}; use crate::query::{Scorer, Weight};
use crate::schema::{Field, IndexRecordOption}; use crate::schema::{Field, IndexRecordOption};
use crate::termdict::{TermDictionary, TermStreamer}; use crate::termdict::{TermDictionary, TermStreamer};
use crate::DocId;
use crate::TantivyError; use crate::TantivyError;
use crate::{Result, SkipResult}; use crate::{DocId, Score};
use std::sync::Arc; use std::sync::Arc;
use tantivy_fst::Automaton; use tantivy_fst::Automaton;
@@ -40,21 +39,25 @@ impl<A> Weight for AutomatonWeight<A>
where where
A: Automaton + Send + Sync + 'static, A: Automaton + Send + Sync + 'static,
{ {
fn scorer(&self, reader: &SegmentReader, boost: f32) -> Result<Box<dyn Scorer>> { fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
let max_doc = reader.max_doc(); let max_doc = reader.max_doc();
let mut doc_bitset = BitSet::with_max_value(max_doc); let mut doc_bitset = BitSet::with_max_value(max_doc);
let inverted_index = reader.inverted_index(self.field)?;
let inverted_index = reader.inverted_index(self.field);
let term_dict = inverted_index.terms(); let term_dict = inverted_index.terms();
let mut term_stream = self.automaton_stream(term_dict); let mut term_stream = self.automaton_stream(term_dict);
while term_stream.advance() { while term_stream.advance() {
let term_info = term_stream.value(); let term_info = term_stream.value();
let mut block_segment_postings = inverted_index let mut block_segment_postings = inverted_index
.read_block_postings_from_terminfo(term_info, IndexRecordOption::Basic); .read_block_postings_from_terminfo(term_info, IndexRecordOption::Basic)?;
while block_segment_postings.advance() { loop {
for &doc in block_segment_postings.docs() { let docs = block_segment_postings.docs();
if docs.is_empty() {
break;
}
for &doc in docs {
doc_bitset.insert(doc); doc_bitset.insert(doc);
} }
block_segment_postings.advance();
} }
} }
let doc_bitset = BitSetDocSet::from(doc_bitset); let doc_bitset = BitSetDocSet::from(doc_bitset);
@@ -62,10 +65,10 @@ where
Ok(Box::new(const_scorer)) Ok(Box::new(const_scorer))
} }
fn explain(&self, reader: &SegmentReader, doc: DocId) -> Result<Explanation> { fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0f32)?; let mut scorer = self.scorer(reader, 1.0)?;
if scorer.skip_next(doc) == SkipResult::Reached { if scorer.seek(doc) == doc {
Ok(Explanation::new("AutomatonScorer", 1.0f32)) Ok(Explanation::new("AutomatonScorer", 1.0))
} else { } else {
Err(TantivyError::InvalidArgument( Err(TantivyError::InvalidArgument(
"Document does not exist".to_string(), "Document does not exist".to_string(),
@@ -77,6 +80,7 @@ where
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::AutomatonWeight; use super::AutomatonWeight;
use crate::docset::TERMINATED;
use crate::query::Weight; use crate::query::Weight;
use crate::schema::{Schema, STRING}; use crate::schema::{Schema, STRING};
use crate::Index; use crate::Index;
@@ -86,7 +90,7 @@ mod tests {
let mut schema = Schema::builder(); let mut schema = Schema::builder();
let title = schema.add_text_field("title", STRING); let title = schema.add_text_field("title", STRING);
let index = Index::create_in_ram(schema.build()); let index = Index::create_in_ram(schema.build());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.add_document(doc!(title=>"abc")); index_writer.add_document(doc!(title=>"abc"));
index_writer.add_document(doc!(title=>"bcd")); index_writer.add_document(doc!(title=>"bcd"));
index_writer.add_document(doc!(title=>"abcd")); index_writer.add_document(doc!(title=>"abcd"));
@@ -139,15 +143,14 @@ mod tests {
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
let mut scorer = automaton_weight let mut scorer = automaton_weight
.scorer(searcher.segment_reader(0u32), 1.0f32) .scorer(searcher.segment_reader(0u32), 1.0)
.unwrap(); .unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32); assert_eq!(scorer.doc(), 0u32);
assert_eq!(scorer.score(), 1.0f32); assert_eq!(scorer.score(), 1.0);
assert!(scorer.advance()); assert_eq!(scorer.advance(), 2u32);
assert_eq!(scorer.doc(), 2u32); assert_eq!(scorer.doc(), 2u32);
assert_eq!(scorer.score(), 1.0f32); assert_eq!(scorer.score(), 1.0);
assert!(!scorer.advance()); assert_eq!(scorer.advance(), TERMINATED);
} }
#[test] #[test]
@@ -158,10 +161,9 @@ mod tests {
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
let mut scorer = automaton_weight let mut scorer = automaton_weight
.scorer(searcher.segment_reader(0u32), 1.32f32) .scorer(searcher.segment_reader(0u32), 1.32)
.unwrap(); .unwrap();
assert!(scorer.advance());
assert_eq!(scorer.doc(), 0u32); assert_eq!(scorer.doc(), 0u32);
assert_eq!(scorer.score(), 1.32f32); assert_eq!(scorer.score(), 1.32);
} }
} }


@@ -1,7 +1,6 @@
use crate::common::{BitSet, TinySet}; use crate::common::{BitSet, TinySet};
use crate::docset::{DocSet, SkipResult}; use crate::docset::{DocSet, TERMINATED};
use crate::DocId; use crate::DocId;
use std::cmp::Ordering;
/// A `BitSetDocSet` makes it possible to iterate through a bitset as if it was a `DocSet`. /// A `BitSetDocSet` makes it possible to iterate through a bitset as if it was a `DocSet`.
/// ///
@@ -33,74 +32,51 @@ impl From<BitSet> for BitSetDocSet {
} else { } else {
docs.tinyset(0) docs.tinyset(0)
}; };
BitSetDocSet { let mut docset = BitSetDocSet {
docs, docs,
cursor_bucket: 0, cursor_bucket: 0,
cursor_tinybitset: first_tiny_bitset, cursor_tinybitset: first_tiny_bitset,
doc: 0u32, doc: 0u32,
} };
docset.advance();
docset
} }
} }
impl DocSet for BitSetDocSet { impl DocSet for BitSetDocSet {
fn advance(&mut self) -> bool { fn advance(&mut self) -> DocId {
if let Some(lower) = self.cursor_tinybitset.pop_lowest() { if let Some(lower) = self.cursor_tinybitset.pop_lowest() {
self.doc = (self.cursor_bucket as u32 * 64u32) | lower; self.doc = (self.cursor_bucket as u32 * 64u32) | lower;
return true; return self.doc;
} }
if let Some(cursor_bucket) = self.docs.first_non_empty_bucket(self.cursor_bucket + 1) { if let Some(cursor_bucket) = self.docs.first_non_empty_bucket(self.cursor_bucket + 1) {
self.go_to_bucket(cursor_bucket); self.go_to_bucket(cursor_bucket);
let lower = self.cursor_tinybitset.pop_lowest().unwrap(); let lower = self.cursor_tinybitset.pop_lowest().unwrap();
self.doc = (cursor_bucket * 64u32) | lower; self.doc = (cursor_bucket * 64u32) | lower;
true self.doc
} else { } else {
false self.doc = TERMINATED;
TERMINATED
} }
} }
fn skip_next(&mut self, target: DocId) -> SkipResult { fn seek(&mut self, target: DocId) -> DocId {
// skip is required to advance. if target >= self.docs.max_value() {
if !self.advance() { self.doc = TERMINATED;
return SkipResult::End; return TERMINATED;
} }
let target_bucket = target / 64u32; let target_bucket = target / 64u32;
if target_bucket > self.cursor_bucket {
// Mask for all of the bits greater or equal self.go_to_bucket(target_bucket);
// to our target document. let greater_filter: TinySet = TinySet::range_greater_or_equal(target);
match target_bucket.cmp(&self.cursor_bucket) { self.cursor_tinybitset = self.cursor_tinybitset.intersect(greater_filter);
Ordering::Greater => { self.advance()
self.go_to_bucket(target_bucket); } else {
let greater_filter: TinySet = TinySet::range_greater_or_equal(target); let mut doc = self.doc();
self.cursor_tinybitset = self.cursor_tinybitset.intersect(greater_filter); while doc < target {
if !self.advance() { doc = self.advance();
SkipResult::End
} else if self.doc() == target {
SkipResult::Reached
} else {
debug_assert!(self.doc() > target);
SkipResult::OverStep
}
}
Ordering::Equal => loop {
match self.doc().cmp(&target) {
Ordering::Less => {
if !self.advance() {
return SkipResult::End;
}
}
Ordering::Equal => {
return SkipResult::Reached;
}
Ordering::Greater => {
debug_assert!(self.doc() > target);
return SkipResult::OverStep;
}
}
},
Ordering::Less => {
debug_assert!(self.doc() > target);
SkipResult::OverStep
} }
doc
} }
} }
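Note: `BitSetDocSet` stores documents in 64-bit `TinySet` buckets, so a document id splits into a bucket index and a bit position, and `seek` can jump straight to bucket `target / 64` before masking off the bits below the target. A tiny hedged illustration of that arithmetic with plain integers (not the tantivy types):

```rust
// doc = bucket * 64 | bit, so seek(target) can jump to bucket target / 64.
fn bucket_and_bit(doc: u32) -> (u32, u32) {
    (doc / 64, doc % 64)
}

fn doc_from_bucket_and_bit(bucket: u32, bit: u32) -> u32 {
    bucket * 64 | bit
}

#[test]
fn bucket_arithmetic_example() {
    // 5112, used in the tests below, lives at bit 56 of bucket 79.
    assert_eq!(bucket_and_bit(5112), (79, 56));
    assert_eq!(doc_from_bucket_and_bit(79, 56), 5112);
}
```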
@@ -122,7 +98,7 @@ impl DocSet for BitSetDocSet {
mod tests { mod tests {
use super::BitSetDocSet; use super::BitSetDocSet;
use crate::common::BitSet; use crate::common::BitSet;
use crate::docset::{DocSet, SkipResult}; use crate::docset::{DocSet, TERMINATED};
use crate::DocId; use crate::DocId;
fn create_docbitset(docs: &[DocId], max_doc: DocId) -> BitSetDocSet { fn create_docbitset(docs: &[DocId], max_doc: DocId) -> BitSetDocSet {
@@ -133,19 +109,31 @@ mod tests {
BitSetDocSet::from(docset) BitSetDocSet::from(docset)
} }
#[test]
fn test_empty() {
let bitset = BitSet::with_max_value(1000);
let mut empty = BitSetDocSet::from(bitset);
assert_eq!(empty.advance(), TERMINATED)
}
#[test]
fn test_seek_terminated() {
let bitset = BitSet::with_max_value(1000);
let mut empty = BitSetDocSet::from(bitset);
assert_eq!(empty.seek(TERMINATED), TERMINATED)
}
fn test_go_through_sequential(docs: &[DocId]) { fn test_go_through_sequential(docs: &[DocId]) {
let mut docset = create_docbitset(docs, 1_000u32); let mut docset = create_docbitset(docs, 1_000u32);
for &doc in docs { for &doc in docs {
assert!(docset.advance());
assert_eq!(doc, docset.doc()); assert_eq!(doc, docset.doc());
docset.advance();
} }
assert!(!docset.advance()); assert_eq!(docset.advance(), TERMINATED);
assert!(!docset.advance());
} }
#[test] #[test]
fn test_docbitset_sequential() { fn test_docbitset_sequential() {
test_go_through_sequential(&[]);
test_go_through_sequential(&[1, 2, 3]); test_go_through_sequential(&[1, 2, 3]);
test_go_through_sequential(&[1, 2, 3, 4, 5, 63, 64, 65]); test_go_through_sequential(&[1, 2, 3, 4, 5, 63, 64, 65]);
test_go_through_sequential(&[63, 64, 65]); test_go_through_sequential(&[63, 64, 65]);
@@ -156,64 +144,64 @@ mod tests {
fn test_docbitset_skip() { fn test_docbitset_skip() {
{ {
let mut docset = create_docbitset(&[1, 5, 6, 7, 5112], 10_000); let mut docset = create_docbitset(&[1, 5, 6, 7, 5112], 10_000);
assert_eq!(docset.skip_next(7), SkipResult::Reached); assert_eq!(docset.seek(7), 7);
assert_eq!(docset.doc(), 7); assert_eq!(docset.doc(), 7);
assert!(docset.advance(), 7); assert_eq!(docset.advance(), 5112);
assert_eq!(docset.doc(), 5112); assert_eq!(docset.doc(), 5112);
assert!(!docset.advance()); assert_eq!(docset.advance(), TERMINATED);
} }
{ {
let mut docset = create_docbitset(&[1, 5, 6, 7, 5112], 10_000); let mut docset = create_docbitset(&[1, 5, 6, 7, 5112], 10_000);
assert_eq!(docset.skip_next(3), SkipResult::OverStep); assert_eq!(docset.seek(3), 5);
assert_eq!(docset.doc(), 5); assert_eq!(docset.doc(), 5);
assert!(docset.advance()); assert_eq!(docset.advance(), 6);
} }
{ {
let mut docset = create_docbitset(&[5112], 10_000); let mut docset = create_docbitset(&[5112], 10_000);
assert_eq!(docset.skip_next(5112), SkipResult::Reached); assert_eq!(docset.seek(5112), 5112);
assert_eq!(docset.doc(), 5112); assert_eq!(docset.doc(), 5112);
assert!(!docset.advance()); assert_eq!(docset.advance(), TERMINATED);
} }
{ {
let mut docset = create_docbitset(&[5112], 10_000); let mut docset = create_docbitset(&[5112], 10_000);
assert_eq!(docset.skip_next(5113), SkipResult::End); assert_eq!(docset.seek(5113), TERMINATED);
assert!(!docset.advance()); assert_eq!(docset.advance(), TERMINATED);
} }
{ {
let mut docset = create_docbitset(&[5112], 10_000); let mut docset = create_docbitset(&[5112], 10_000);
assert_eq!(docset.skip_next(5111), SkipResult::OverStep); assert_eq!(docset.seek(5111), 5112);
assert_eq!(docset.doc(), 5112); assert_eq!(docset.doc(), 5112);
assert!(!docset.advance()); assert_eq!(docset.advance(), TERMINATED);
} }
{ {
let mut docset = create_docbitset(&[1, 5, 6, 7, 5112, 5500, 6666], 10_000); let mut docset = create_docbitset(&[1, 5, 6, 7, 5112, 5500, 6666], 10_000);
assert_eq!(docset.skip_next(5112), SkipResult::Reached); assert_eq!(docset.seek(5112), 5112);
assert_eq!(docset.doc(), 5112); assert_eq!(docset.doc(), 5112);
assert!(docset.advance()); assert_eq!(docset.advance(), 5500);
assert_eq!(docset.doc(), 5500); assert_eq!(docset.doc(), 5500);
assert!(docset.advance()); assert_eq!(docset.advance(), 6666);
assert_eq!(docset.doc(), 6666); assert_eq!(docset.doc(), 6666);
assert!(!docset.advance()); assert_eq!(docset.advance(), TERMINATED);
} }
{ {
let mut docset = create_docbitset(&[1, 5, 6, 7, 5112, 5500, 6666], 10_000); let mut docset = create_docbitset(&[1, 5, 6, 7, 5112, 5500, 6666], 10_000);
assert_eq!(docset.skip_next(5111), SkipResult::OverStep); assert_eq!(docset.seek(5111), 5112);
assert_eq!(docset.doc(), 5112); assert_eq!(docset.doc(), 5112);
assert!(docset.advance()); assert_eq!(docset.advance(), 5500);
assert_eq!(docset.doc(), 5500); assert_eq!(docset.doc(), 5500);
assert!(docset.advance()); assert_eq!(docset.advance(), 6666);
assert_eq!(docset.doc(), 6666); assert_eq!(docset.doc(), 6666);
assert!(!docset.advance()); assert_eq!(docset.advance(), TERMINATED);
} }
{ {
let mut docset = create_docbitset(&[1, 5, 6, 7, 5112, 5513, 6666], 10_000); let mut docset = create_docbitset(&[1, 5, 6, 7, 5112, 5513, 6666], 10_000);
assert_eq!(docset.skip_next(5111), SkipResult::OverStep); assert_eq!(docset.seek(5111), 5112);
assert_eq!(docset.doc(), 5112); assert_eq!(docset.doc(), 5112);
assert!(docset.advance()); assert_eq!(docset.advance(), 5513);
assert_eq!(docset.doc(), 5513); assert_eq!(docset.doc(), 5513);
assert!(docset.advance()); assert_eq!(docset.advance(), 6666);
assert_eq!(docset.doc(), 6666); assert_eq!(docset.doc(), 6666);
assert!(!docset.advance()); assert_eq!(docset.advance(), TERMINATED);
} }
} }
} }
@@ -223,6 +211,7 @@ mod bench {
use super::BitSet; use super::BitSet;
use super::BitSetDocSet; use super::BitSetDocSet;
use crate::docset::TERMINATED;
use crate::test; use crate::test;
use crate::tests; use crate::tests;
use crate::DocSet; use crate::DocSet;
@@ -257,7 +246,7 @@ mod bench {
} }
b.iter(|| { b.iter(|| {
let mut docset = BitSetDocSet::from(bitset.clone()); let mut docset = BitSetDocSet::from(bitset.clone());
while docset.advance() {} while docset.advance() != TERMINATED {}
}); });
} }
} }


@@ -3,21 +3,24 @@ use crate::query::Explanation;
use crate::Score; use crate::Score;
use crate::Searcher; use crate::Searcher;
use crate::Term; use crate::Term;
use serde::Deserialize;
use serde::Serialize;
const K1: f32 = 1.2; const K1: Score = 1.2;
const B: f32 = 0.75; const B: Score = 0.75;
fn idf(doc_freq: u64, doc_count: u64) -> f32 { fn idf(doc_freq: u64, doc_count: u64) -> Score {
let x = ((doc_count - doc_freq) as f32 + 0.5) / (doc_freq as f32 + 0.5); assert!(doc_count >= doc_freq, "{} >= {}", doc_count, doc_freq);
(1f32 + x).ln() let x = ((doc_count - doc_freq) as Score + 0.5) / (doc_freq as Score + 0.5);
(1.0 + x).ln()
} }
fn cached_tf_component(fieldnorm: u32, average_fieldnorm: f32) -> f32 { fn cached_tf_component(fieldnorm: u32, average_fieldnorm: Score) -> Score {
K1 * (1f32 - B + B * fieldnorm as f32 / average_fieldnorm) K1 * (1.0 - B + B * fieldnorm as Score / average_fieldnorm)
} }
fn compute_tf_cache(average_fieldnorm: f32) -> [f32; 256] { fn compute_tf_cache(average_fieldnorm: Score) -> [Score; 256] {
let mut cache = [0f32; 256]; let mut cache: [Score; 256] = [0.0; 256];
for (fieldnorm_id, cache_mut) in cache.iter_mut().enumerate() { for (fieldnorm_id, cache_mut) in cache.iter_mut().enumerate() {
let fieldnorm = FieldNormReader::id_to_fieldnorm(fieldnorm_id as u8); let fieldnorm = FieldNormReader::id_to_fieldnorm(fieldnorm_id as u8);
*cache_mut = cached_tf_component(fieldnorm, average_fieldnorm); *cache_mut = cached_tf_component(fieldnorm, average_fieldnorm);
@@ -25,15 +28,22 @@ fn compute_tf_cache(average_fieldnorm: f32) -> [f32; 256] {
cache cache
} }
#[derive(Clone, PartialEq, Debug, Serialize, Deserialize)]
pub struct BM25Params {
pub idf: Score,
pub avg_fieldnorm: Score,
}
#[derive(Clone)]
pub struct BM25Weight { pub struct BM25Weight {
idf_explain: Explanation, idf_explain: Explanation,
weight: f32, weight: Score,
cache: [f32; 256], cache: [Score; 256],
average_fieldnorm: f32, average_fieldnorm: Score,
} }
impl BM25Weight { impl BM25Weight {
pub fn boost_by(&self, boost: f32) -> BM25Weight { pub fn boost_by(&self, boost: Score) -> BM25Weight {
BM25Weight { BM25Weight {
idf_explain: self.idf_explain.clone(), idf_explain: self.idf_explain.clone(),
weight: self.weight * boost, weight: self.weight * boost,
@@ -42,7 +52,7 @@ impl BM25Weight {
} }
} }
pub fn for_terms(searcher: &Searcher, terms: &[Term]) -> BM25Weight { pub fn for_terms(searcher: &Searcher, terms: &[Term]) -> crate::Result<BM25Weight> {
assert!(!terms.is_empty(), "BM25 requires at least one term"); assert!(!terms.is_empty(), "BM25 requires at least one term");
let field = terms[0].field(); let field = terms[0].field();
for term in &terms[1..] { for term in &terms[1..] {
@@ -56,38 +66,48 @@ impl BM25Weight {
let mut total_num_tokens = 0u64; let mut total_num_tokens = 0u64;
let mut total_num_docs = 0u64; let mut total_num_docs = 0u64;
for segment_reader in searcher.segment_readers() { for segment_reader in searcher.segment_readers() {
let inverted_index = segment_reader.inverted_index(field); let inverted_index = segment_reader.inverted_index(field)?;
total_num_tokens += inverted_index.total_num_tokens(); total_num_tokens += inverted_index.total_num_tokens();
total_num_docs += u64::from(segment_reader.max_doc()); total_num_docs += u64::from(segment_reader.max_doc());
} }
let average_fieldnorm = total_num_tokens as f32 / total_num_docs as f32; let average_fieldnorm = total_num_tokens as Score / total_num_docs as Score;
let mut idf_explain: Explanation;
if terms.len() == 1 { if terms.len() == 1 {
let term_doc_freq = searcher.doc_freq(&terms[0]); let term_doc_freq = searcher.doc_freq(&terms[0])?;
let idf = idf(term_doc_freq, total_num_docs); Ok(BM25Weight::for_one_term(
idf_explain = term_doc_freq,
Explanation::new("idf, computed as log(1 + (N - n + 0.5) / (n + 0.5))", idf); total_num_docs,
idf_explain.add_const( average_fieldnorm,
"n, number of docs containing this term", ))
term_doc_freq as f32,
);
idf_explain.add_const("N, total number of docs", total_num_docs as f32);
} else { } else {
let idf = terms let mut idf_sum: Score = 0.0;
.iter() for term in terms {
.map(|term| { let term_doc_freq = searcher.doc_freq(term)?;
let term_doc_freq = searcher.doc_freq(term); idf_sum += idf(term_doc_freq, total_num_docs);
idf(term_doc_freq, total_num_docs) }
}) let idf_explain = Explanation::new("idf", idf_sum);
.sum::<f32>(); Ok(BM25Weight::new(idf_explain, average_fieldnorm))
idf_explain = Explanation::new("idf", idf);
} }
BM25Weight::new(idf_explain, average_fieldnorm)
} }
fn new(idf_explain: Explanation, average_fieldnorm: f32) -> BM25Weight { pub fn for_one_term(
let weight = idf_explain.value() * (1f32 + K1); term_doc_freq: u64,
total_num_docs: u64,
avg_fieldnorm: Score,
) -> BM25Weight {
let idf = idf(term_doc_freq, total_num_docs);
let mut idf_explain =
Explanation::new("idf, computed as log(1 + (N - n + 0.5) / (n + 0.5))", idf);
idf_explain.add_const(
"n, number of docs containing this term",
term_doc_freq as Score,
);
idf_explain.add_const("N, total number of docs", total_num_docs as Score);
BM25Weight::new(idf_explain, avg_fieldnorm)
}
fn new(idf_explain: Explanation, average_fieldnorm: Score) -> BM25Weight {
let weight = idf_explain.value() * (1.0 + K1);
BM25Weight { BM25Weight {
idf_explain, idf_explain,
weight, weight,
@@ -98,19 +118,27 @@ impl BM25Weight {
#[inline(always)] #[inline(always)]
pub fn score(&self, fieldnorm_id: u8, term_freq: u32) -> Score { pub fn score(&self, fieldnorm_id: u8, term_freq: u32) -> Score {
self.weight * self.tf_factor(fieldnorm_id, term_freq)
}
pub fn max_score(&self) -> Score {
self.score(255u8, 2_013_265_944)
}
#[inline(always)]
pub(crate) fn tf_factor(&self, fieldnorm_id: u8, term_freq: u32) -> Score {
let term_freq = term_freq as Score;
let norm = self.cache[fieldnorm_id as usize]; let norm = self.cache[fieldnorm_id as usize];
let term_freq = term_freq as f32; term_freq / (term_freq + norm)
self.weight * term_freq / (term_freq + norm)
} }
pub fn explain(&self, fieldnorm_id: u8, term_freq: u32) -> Explanation { pub fn explain(&self, fieldnorm_id: u8, term_freq: u32) -> Explanation {
// The explain format is directly copied from Lucene's. // The explain format is directly copied from Lucene's.
// (So, Kudos to Lucene) // (So, Kudos to Lucene)
let score = self.score(fieldnorm_id, term_freq); let score = self.score(fieldnorm_id, term_freq);
let norm = self.cache[fieldnorm_id as usize]; let norm = self.cache[fieldnorm_id as usize];
let term_freq = term_freq as f32; let term_freq = term_freq as Score;
let right_factor = term_freq / (term_freq + norm); let right_factor = term_freq / (term_freq + norm);
let mut tf_explanation = Explanation::new( let mut tf_explanation = Explanation::new(
@@ -123,12 +151,12 @@ impl BM25Weight {
tf_explanation.add_const("b, length normalization parameter", B); tf_explanation.add_const("b, length normalization parameter", B);
tf_explanation.add_const( tf_explanation.add_const(
"dl, length of field", "dl, length of field",
FieldNormReader::id_to_fieldnorm(fieldnorm_id) as f32, FieldNormReader::id_to_fieldnorm(fieldnorm_id) as Score,
); );
tf_explanation.add_const("avgdl, average length of field", self.average_fieldnorm); tf_explanation.add_const("avgdl, average length of field", self.average_fieldnorm);
let mut explanation = Explanation::new("TermQuery, product of...", score); let mut explanation = Explanation::new("TermQuery, product of...", score);
explanation.add_detail(Explanation::new("(K1+1)", K1 + 1f32)); explanation.add_detail(Explanation::new("(K1+1)", K1 + 1.0));
explanation.add_detail(self.idf_explain.clone()); explanation.add_detail(self.idf_explain.clone());
explanation.add_detail(tf_explanation); explanation.add_detail(tf_explanation);
explanation explanation
@@ -139,10 +167,11 @@ impl BM25Weight {
mod tests { mod tests {
use super::idf; use super::idf;
use crate::tests::assert_nearly_equals; use crate::{assert_nearly_equals, Score};
#[test] #[test]
fn test_idf() { fn test_idf() {
assert_nearly_equals(idf(1, 2), 0.6931472); let score: Score = 2.0;
assert_nearly_equals!(idf(1, 2), score.ln());
} }
} }
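Putting the pieces of this file together, a document's score works out to idf * (K1 + 1) * freq / (freq + K1 * (1 - B + B * dl / avgdl)). Below is only a rough sketch recomputing that by hand, assuming `Score` is plain `f32` as in this crate; the helper name `bm25_by_hand` is made up for illustration and is not part of this change:

fn bm25_by_hand(doc_freq: u64, doc_count: u64, term_freq: u32, dl: u32, avgdl: f32) -> f32 {
    const K1: f32 = 1.2; // same constants as above
    const B: f32 = 0.75;
    let x = ((doc_count - doc_freq) as f32 + 0.5) / (doc_freq as f32 + 0.5);
    let idf = (1.0 + x).ln();
    // cached per fieldnorm_id in the real code (compute_tf_cache)
    let norm = K1 * (1.0 - B + B * dl as f32 / avgdl);
    let tf = term_freq as f32;
    idf * (1.0 + K1) * tf / (tf + norm)
}

// Example: doc_freq = 1, doc_count = 2, term_freq = 1, dl = avgdl
// gives idf = ln(2) ≈ 0.693, norm = K1 = 1.2, and a score of ≈ 0.693 * 2.2 / 2.2 ≈ 0.693.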


@@ -0,0 +1,434 @@
use crate::query::term_query::TermScorer;
use crate::query::Scorer;
use crate::{DocId, DocSet, Score, TERMINATED};
use std::ops::Deref;
use std::ops::DerefMut;
/// Takes term_scorers sorted by their current `doc()` and a threshold, and returns
/// `Some((before_pivot_len, pivot_len, pivot_doc))` defined as follows:
/// - `pivot_doc` the lowest document that has a chance of exceeding (>) the threshold score.
/// - `before_pivot_len` number of term_scorers such that term_scorer.doc() < pivot.
/// - `pivot_len` number of term_scorers such that term_scorer.doc() <= pivot.
///
/// We always have `before_pivot_len` < `pivot_len`.
///
/// None is returned if we establish that no document can exceed the threshold.
fn find_pivot_doc(
term_scorers: &[TermScorerWithMaxScore],
threshold: Score,
) -> Option<(usize, usize, DocId)> {
let mut max_score = 0.0;
let mut before_pivot_len = 0;
let mut pivot_doc = TERMINATED;
while before_pivot_len < term_scorers.len() {
let term_scorer = &term_scorers[before_pivot_len];
max_score += term_scorer.max_score;
if max_score > threshold {
pivot_doc = term_scorer.doc();
break;
}
before_pivot_len += 1;
}
if pivot_doc == TERMINATED {
return None;
}
// Right now `before_pivot_len` is an ordinal; we want a length, hence the + 1.
let mut pivot_len = before_pivot_len + 1;
// Some other term_scorer may be positioned on the same document.
pivot_len += term_scorers[pivot_len..]
.iter()
.take_while(|term_scorer| term_scorer.doc() == pivot_doc)
.count();
Some((before_pivot_len, pivot_len, pivot_doc))
}
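// To make the pivot selection concrete, a small hypothetical walk-through
// (numbers invented for illustration, not taken from the tests below):
//
//   threshold = 2.5, three scorers sorted by doc():
//   scorer 0: doc = 3, max_score = 1.0  -> running sum 1.0 (not > threshold)
//   scorer 1: doc = 7, max_score = 2.0  -> running sum 3.0 (> threshold: pivot found)
//   scorer 2: doc = 7, max_score = 1.5  (also positioned on the pivot doc)
//
// find_pivot_doc returns Some((1, 3, 7)): before_pivot_len = 1, pivot_len = 3
// (scorer 2 is picked up by the take_while above), and pivot_doc = 7.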
// Before and after calling this method, scorers need to be sorted by their `.doc()`.
fn block_max_was_too_low_advance_one_scorer(
scorers: &mut Vec<TermScorerWithMaxScore>,
pivot_len: usize,
) {
debug_assert!(is_sorted(scorers.iter().map(|scorer| scorer.doc())));
let mut scorer_to_seek = pivot_len - 1;
let mut doc_to_seek_after = scorers[scorer_to_seek].doc();
for scorer_ord in (0..pivot_len - 1).rev() {
let scorer = &scorers[scorer_ord];
if scorer.last_doc_in_block() <= doc_to_seek_after {
doc_to_seek_after = scorer.last_doc_in_block();
scorer_to_seek = scorer_ord;
}
}
for scorer in &scorers[pivot_len..] {
if scorer.doc() <= doc_to_seek_after {
doc_to_seek_after = scorer.doc();
}
}
scorers[scorer_to_seek].seek(doc_to_seek_after + 1);
restore_ordering(scorers, scorer_to_seek);
debug_assert!(is_sorted(scorers.iter().map(|scorer| scorer.doc())));
}
// Given a list of term_scorers and an `ord`, and assuming that `term_scorers` is sorted
// by doc except for `term_scorers[ord]`, which may have advanced past its rank,
// bubble up `term_scorers[ord]` in order to restore the ordering.
fn restore_ordering(term_scorers: &mut Vec<TermScorerWithMaxScore>, ord: usize) {
let doc = term_scorers[ord].doc();
for i in ord + 1..term_scorers.len() {
if term_scorers[i].doc() >= doc {
break;
}
term_scorers.swap(i, i - 1);
}
debug_assert!(is_sorted(term_scorers.iter().map(|scorer| scorer.doc())));
}
// Attempts to advance all term_scorers in `term_scorers[0..before_pivot_len]` to the pivot.
// If this works, returns true.
// If this fails (i.e. one of the term_scorers does not contain `pivot_doc` and its seek goes
// past the pivot), reorders the term_scorers to ensure the list is still sorted and returns
// `false`. If a term_scorer reaches TERMINATED in the process, it is removed from the list
// before returning `false`.
fn align_scorers(
term_scorers: &mut Vec<TermScorerWithMaxScore>,
pivot_doc: DocId,
before_pivot_len: usize,
) -> bool {
debug_assert_ne!(pivot_doc, TERMINATED);
for i in (0..before_pivot_len).rev() {
let new_doc = term_scorers[i].seek(pivot_doc);
if new_doc != pivot_doc {
if new_doc == TERMINATED {
term_scorers.swap_remove(i);
}
// We went past the pivot.
// We just go through the outer loop mechanics (note that the pivot is
// still a possible candidate).
//
// Termination is still guaranteed since we can only consider the same
// pivot at most term_scorers.len() - 1 times.
restore_ordering(term_scorers, i);
return false;
}
}
true
}
// Assumes term_scorers[..pivot_len] are all positioned on the same doc (pivot_doc).
// Advances term_scorers[..pivot_len], removes the scorers that reached TERMINATED,
// and restores the ordering of term_scorers.
fn advance_all_scorers_on_pivot(term_scorers: &mut Vec<TermScorerWithMaxScore>, pivot_len: usize) {
for term_scorer in &mut term_scorers[..pivot_len] {
term_scorer.advance();
}
// TODO use drain_filter when available.
let mut i = 0;
while i != term_scorers.len() {
if term_scorers[i].doc() == TERMINATED {
term_scorers.swap_remove(i);
} else {
i += 1;
}
}
term_scorers.sort_by_key(|scorer| scorer.doc());
}
pub fn block_wand(
mut scorers: Vec<TermScorer>,
mut threshold: Score,
callback: &mut dyn FnMut(u32, Score) -> Score,
) {
let mut scorers: Vec<TermScorerWithMaxScore> = scorers
.iter_mut()
.map(TermScorerWithMaxScore::from)
.collect();
scorers.sort_by_key(|scorer| scorer.doc());
// At this point we need to ensure that the scorers are sorted!
debug_assert!(is_sorted(scorers.iter().map(|scorer| scorer.doc())));
while let Some((before_pivot_len, pivot_len, pivot_doc)) =
find_pivot_doc(&scorers[..], threshold)
{
debug_assert!(is_sorted(scorers.iter().map(|scorer| scorer.doc())));
debug_assert_ne!(pivot_doc, TERMINATED);
debug_assert!(before_pivot_len < pivot_len);
let block_max_score_upperbound: Score = scorers[..pivot_len]
.iter_mut()
.map(|scorer| {
scorer.shallow_seek(pivot_doc);
scorer.block_max_score()
})
.sum();
// Beware: after a shallow seek, the skip readers can be ahead of
// the segment posting lists.
//
// `block_segment_postings.load_block()` needs to be called separately.
if block_max_score_upperbound <= threshold {
// The block max condition was not reached.
// We could get away with simply advancing the scorers to pivot_doc + 1, but it would
// be inefficient. The optimization requires a proper explanation and was
// isolated in a different function.
block_max_was_too_low_advance_one_scorer(&mut scorers, pivot_len);
continue;
}
// Block max condition is observed.
//
// Let's try and advance all scorers before the pivot to the pivot.
if !align_scorers(&mut scorers, pivot_doc, before_pivot_len) {
// At least one of the scorers does not contain the pivot.
//
// Let's stop scoring this pivot and go through the pivot selection again.
// Note that the current pivot is not necessarily a bad candidate and it
// may be picked again.
continue;
}
// At this point, all scorers are positioned on the doc.
let score = scorers[..pivot_len]
.iter_mut()
.map(|scorer| scorer.score())
.sum();
if score > threshold {
threshold = callback(pivot_doc, score);
}
// let's advance all of the scorers that are currently positioned on the pivot.
advance_all_scorers_on_pivot(&mut scorers, pivot_len);
}
}
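// The callback drives the pruning: it is handed every (doc, score) that beats the
// current threshold and must return the new threshold. A minimal caller sketch,
// assuming the TermScorers were built elsewhere for a single segment (the helper
// name `collect_top_10` is hypothetical, not part of this change):
fn collect_top_10(scorers: Vec<TermScorer>) -> Vec<(DocId, Score)> {
    let mut best: Vec<(DocId, Score)> = Vec::new();
    block_wand(scorers, Score::MIN, &mut |doc, score| {
        // Keep the 10 best (doc, score) pairs seen so far.
        best.push((doc, score));
        best.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
        best.truncate(10);
        // Once full, only documents beating the 10th best score are of interest.
        if best.len() == 10 { best.last().unwrap().1 } else { Score::MIN }
    });
    best
}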
struct TermScorerWithMaxScore<'a> {
scorer: &'a mut TermScorer,
max_score: Score,
}
impl<'a> From<&'a mut TermScorer> for TermScorerWithMaxScore<'a> {
fn from(scorer: &'a mut TermScorer) -> Self {
let max_score = scorer.max_score();
TermScorerWithMaxScore { scorer, max_score }
}
}
impl<'a> Deref for TermScorerWithMaxScore<'a> {
type Target = TermScorer;
fn deref(&self) -> &Self::Target {
self.scorer
}
}
impl<'a> DerefMut for TermScorerWithMaxScore<'a> {
fn deref_mut(&mut self) -> &mut Self::Target {
self.scorer
}
}
fn is_sorted<I: Iterator<Item = DocId>>(mut it: I) -> bool {
if let Some(first) = it.next() {
let mut prev = first;
for doc in it {
if doc < prev {
return false;
}
prev = doc;
}
}
true
}
#[cfg(test)]
mod tests {
use crate::query::score_combiner::SumCombiner;
use crate::query::term_query::TermScorer;
use crate::query::Union;
use crate::query::{BM25Weight, Scorer};
use crate::{DocId, DocSet, Score, TERMINATED};
use proptest::prelude::*;
use std::cmp::Ordering;
use std::collections::BinaryHeap;
use std::iter;
struct Float(Score);
impl Eq for Float {}
impl PartialEq for Float {
fn eq(&self, other: &Self) -> bool {
self.cmp(&other) == Ordering::Equal
}
}
impl PartialOrd for Float {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for Float {
fn cmp(&self, other: &Self) -> Ordering {
other.0.partial_cmp(&self.0).unwrap_or(Ordering::Equal)
}
}
fn nearly_equals(left: Score, right: Score) -> bool {
(left - right).abs() < 0.000001 * (left + right).abs()
}
fn compute_checkpoints_for_each_pruning(
term_scorers: Vec<TermScorer>,
n: usize,
) -> Vec<(DocId, Score)> {
let mut heap: BinaryHeap<Float> = BinaryHeap::with_capacity(n);
let mut checkpoints: Vec<(DocId, Score)> = Vec::new();
let mut limit: Score = 0.0;
super::block_wand(term_scorers, Score::MIN, &mut |doc, score| {
heap.push(Float(score));
if heap.len() > n {
heap.pop().unwrap();
}
if heap.len() == n {
limit = heap.peek().unwrap().0;
}
if !nearly_equals(score, limit) {
checkpoints.push((doc, score));
}
return limit;
});
checkpoints
}
fn compute_checkpoints_manual(term_scorers: Vec<TermScorer>, n: usize) -> Vec<(DocId, Score)> {
let mut heap: BinaryHeap<Float> = BinaryHeap::with_capacity(n);
let mut checkpoints: Vec<(DocId, Score)> = Vec::new();
let mut scorer: Union<TermScorer, SumCombiner> = Union::from(term_scorers);
let mut limit = Score::MIN;
loop {
if scorer.doc() == TERMINATED {
break;
}
let doc = scorer.doc();
let score = scorer.score();
if score > limit {
heap.push(Float(score));
if heap.len() > n {
heap.pop().unwrap();
}
if heap.len() == n {
limit = heap.peek().unwrap().0;
}
if !nearly_equals(score, limit) {
checkpoints.push((doc, score));
}
}
scorer.advance();
}
checkpoints
}
const MAX_TERM_FREQ: u32 = 100u32;
fn posting_list(max_doc: u32) -> BoxedStrategy<Vec<(DocId, u32)>> {
(1..max_doc + 1)
.prop_flat_map(move |doc_freq| {
(
proptest::bits::bitset::sampled(doc_freq as usize, 0..max_doc as usize),
proptest::collection::vec(1u32..MAX_TERM_FREQ, doc_freq as usize),
)
})
.prop_map(|(docset, term_freqs)| {
docset
.iter()
.map(|doc| doc as u32)
.zip(term_freqs.iter().cloned())
.collect::<Vec<_>>()
})
.boxed()
}
fn gen_term_scorers(num_scorers: usize) -> BoxedStrategy<(Vec<Vec<(DocId, u32)>>, Vec<u32>)> {
(1u32..100u32)
.prop_flat_map(move |max_doc: u32| {
(
proptest::collection::vec(posting_list(max_doc), num_scorers),
proptest::collection::vec(2u32..10u32 * MAX_TERM_FREQ, max_doc as usize),
)
})
.boxed()
}
fn test_block_wand_aux(posting_lists: &[Vec<(DocId, u32)>], fieldnorms: &[u32]) {
// We virtually repeat all docs 64 times in order to emulate blocks of 2 documents
// and surface bugs more easily.
const REPEAT: usize = 64;
let fieldnorms_expanded = fieldnorms
.iter()
.cloned()
.flat_map(|fieldnorm| iter::repeat(fieldnorm).take(REPEAT))
.collect::<Vec<u32>>();
let postings_lists_expanded: Vec<Vec<(DocId, u32)>> = posting_lists
.iter()
.map(|posting_list| {
posting_list
.into_iter()
.cloned()
.flat_map(|(doc, term_freq)| {
(0 as u32..REPEAT as u32).map(move |offset| {
(
doc * (REPEAT as u32) + offset,
if offset == 0 { term_freq } else { 1 },
)
})
})
.collect::<Vec<(DocId, u32)>>()
})
.collect::<Vec<_>>();
let total_fieldnorms: u64 = fieldnorms_expanded
.iter()
.cloned()
.map(|fieldnorm| fieldnorm as u64)
.sum();
let average_fieldnorm = (total_fieldnorms as Score) / (fieldnorms_expanded.len() as Score);
let max_doc = fieldnorms_expanded.len();
let term_scorers: Vec<TermScorer> = postings_lists_expanded
.iter()
.map(|postings| {
let bm25_weight = BM25Weight::for_one_term(
postings.len() as u64,
max_doc as u64,
average_fieldnorm,
);
TermScorer::create_for_test(postings, &fieldnorms_expanded[..], bm25_weight)
})
.collect();
for top_k in 1..4 {
let checkpoints_for_each_pruning =
compute_checkpoints_for_each_pruning(term_scorers.clone(), top_k);
let checkpoints_manual = compute_checkpoints_manual(term_scorers.clone(), top_k);
assert_eq!(checkpoints_for_each_pruning.len(), checkpoints_manual.len());
for (&(left_doc, left_score), &(right_doc, right_score)) in checkpoints_for_each_pruning
.iter()
.zip(checkpoints_manual.iter())
{
assert_eq!(left_doc, right_doc);
assert!(nearly_equals(left_score, right_score));
}
}
}
proptest! {
#![proptest_config(ProptestConfig::with_cases(500))]
#[test]
fn test_block_wand_two_term_scorers((posting_lists, fieldnorms) in gen_term_scorers(2)) {
test_block_wand_aux(&posting_lists[..], &fieldnorms[..]);
}
}
proptest! {
#![proptest_config(ProptestConfig::with_cases(500))]
#[test]
fn test_block_wand_three_term_scorers((posting_lists, fieldnorms) in gen_term_scorers(3)) {
test_block_wand_aux(&posting_lists[..], &fieldnorms[..]);
}
}
}


@@ -83,7 +83,7 @@ use std::collections::BTreeSet;
/// ]; /// ];
/// // Make a BooleanQuery equivalent to /// // Make a BooleanQuery equivalent to
/// // title:+diary title:-girl /// // title:+diary title:-girl
/// let diary_must_and_girl_mustnot = BooleanQuery::from(queries_with_occurs1); /// let diary_must_and_girl_mustnot = BooleanQuery::new(queries_with_occurs1);
/// let count1 = searcher.search(&diary_must_and_girl_mustnot, &Count)?; /// let count1 = searcher.search(&diary_must_and_girl_mustnot, &Count)?;
/// assert_eq!(count1, 1); /// assert_eq!(count1, 1);
/// ///
@@ -93,7 +93,7 @@ use std::collections::BTreeSet;
/// IndexRecordOption::Basic, /// IndexRecordOption::Basic,
/// )); /// ));
/// // "title:diary OR title:cow" /// // "title:diary OR title:cow"
/// let title_diary_or_cow = BooleanQuery::from(vec![ /// let title_diary_or_cow = BooleanQuery::new(vec![
/// (Occur::Should, diary_term_query.box_clone()), /// (Occur::Should, diary_term_query.box_clone()),
/// (Occur::Should, cow_term_query), /// (Occur::Should, cow_term_query),
/// ]); /// ]);
@@ -108,7 +108,7 @@ use std::collections::BTreeSet;
/// // You can combine subqueries of different types into 1 BooleanQuery: /// // You can combine subqueries of different types into 1 BooleanQuery:
/// // `TermQuery` and `PhraseQuery` /// // `TermQuery` and `PhraseQuery`
/// // "title:diary OR "dairy cow" /// // "title:diary OR "dairy cow"
/// let term_of_phrase_query = BooleanQuery::from(vec![ /// let term_of_phrase_query = BooleanQuery::new(vec![
/// (Occur::Should, diary_term_query.box_clone()), /// (Occur::Should, diary_term_query.box_clone()),
/// (Occur::Should, phrase_query.box_clone()), /// (Occur::Should, phrase_query.box_clone()),
/// ]); /// ]);
@@ -117,7 +117,7 @@ use std::collections::BTreeSet;
/// ///
/// // You can nest one BooleanQuery inside another /// // You can nest one BooleanQuery inside another
/// // body:found AND ("title:diary OR "dairy cow") /// // body:found AND ("title:diary OR "dairy cow")
/// let nested_query = BooleanQuery::from(vec![ /// let nested_query = BooleanQuery::new(vec![
/// (Occur::Must, body_term_query), /// (Occur::Must, body_term_query),
/// (Occur::Must, Box::new(term_of_phrase_query)) /// (Occur::Must, Box::new(term_of_phrase_query))
/// ]); /// ]);
@@ -143,7 +143,7 @@ impl Clone for BooleanQuery {
impl From<Vec<(Occur, Box<dyn Query>)>> for BooleanQuery { impl From<Vec<(Occur, Box<dyn Query>)>> for BooleanQuery {
fn from(subqueries: Vec<(Occur, Box<dyn Query>)>) -> BooleanQuery { fn from(subqueries: Vec<(Occur, Box<dyn Query>)>) -> BooleanQuery {
BooleanQuery { subqueries } BooleanQuery::new(subqueries)
} }
} }
@@ -167,6 +167,23 @@ impl Query for BooleanQuery {
} }
impl BooleanQuery { impl BooleanQuery {
/// Creates a new boolean query.
pub fn new(subqueries: Vec<(Occur, Box<dyn Query>)>) -> BooleanQuery {
BooleanQuery { subqueries }
}
/// Returns the intersection of the queries.
pub fn intersection(queries: Vec<Box<dyn Query>>) -> BooleanQuery {
let subqueries = queries.into_iter().map(|s| (Occur::Must, s)).collect();
BooleanQuery::new(subqueries)
}
/// Returns the union of the queries.
pub fn union(queries: Vec<Box<dyn Query>>) -> BooleanQuery {
let subqueries = queries.into_iter().map(|s| (Occur::Should, s)).collect();
BooleanQuery::new(subqueries)
}
/// Helper method to create a boolean query matching a given list of terms. /// Helper method to create a boolean query matching a given list of terms.
/// The resulting query is a disjunction of the terms. /// The resulting query is a disjunction of the terms.
pub fn new_multiterms_query(terms: Vec<Term>) -> BooleanQuery { pub fn new_multiterms_query(terms: Vec<Term>) -> BooleanQuery {
@@ -178,7 +195,7 @@ impl BooleanQuery {
(Occur::Should, term_query) (Occur::Should, term_query)
}) })
.collect(); .collect();
BooleanQuery::from(occur_term_queries) BooleanQuery::new(occur_term_queries)
} }
/// Deconstructed view of the clauses making up this query. /// Deconstructed view of the clauses making up this query.
@@ -186,3 +203,77 @@ impl BooleanQuery {
&self.subqueries[..] &self.subqueries[..]
} }
} }
#[cfg(test)]
mod tests {
use super::BooleanQuery;
use crate::collector::DocSetCollector;
use crate::query::{QueryClone, TermQuery};
use crate::schema::{IndexRecordOption, Schema, TEXT};
use crate::{DocAddress, Index, Term};
fn create_test_index() -> crate::Result<Index> {
let mut schema_builder = Schema::builder();
let text = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut writer = index.writer_for_tests().unwrap();
writer.add_document(doc!(text=>"b c"));
writer.add_document(doc!(text=>"a c"));
writer.add_document(doc!(text=>"a b"));
writer.add_document(doc!(text=>"a d"));
writer.commit()?;
Ok(index)
}
#[test]
fn test_union() -> crate::Result<()> {
let index = create_test_index()?;
let searcher = index.reader()?.searcher();
let text = index.schema().get_field("text").unwrap();
let term_a = TermQuery::new(Term::from_field_text(text, "a"), IndexRecordOption::Basic);
let term_d = TermQuery::new(Term::from_field_text(text, "d"), IndexRecordOption::Basic);
let union_ad = BooleanQuery::union(vec![term_a.box_clone(), term_d.box_clone()]);
let docs = searcher.search(&union_ad, &DocSetCollector)?;
assert_eq!(
docs,
vec![
DocAddress(0u32, 1u32),
DocAddress(0u32, 2u32),
DocAddress(0u32, 3u32)
]
.into_iter()
.collect()
);
Ok(())
}
#[test]
fn test_intersection() -> crate::Result<()> {
let index = create_test_index()?;
let searcher = index.reader()?.searcher();
let text = index.schema().get_field("text").unwrap();
let term_a = TermQuery::new(Term::from_field_text(text, "a"), IndexRecordOption::Basic);
let term_b = TermQuery::new(Term::from_field_text(text, "b"), IndexRecordOption::Basic);
let term_c = TermQuery::new(Term::from_field_text(text, "c"), IndexRecordOption::Basic);
let intersection_ab =
BooleanQuery::intersection(vec![term_a.box_clone(), term_b.box_clone()]);
let intersection_ac =
BooleanQuery::intersection(vec![term_a.box_clone(), term_c.box_clone()]);
let intersection_bc =
BooleanQuery::intersection(vec![term_b.box_clone(), term_c.box_clone()]);
{
let docs = searcher.search(&intersection_ab, &DocSetCollector)?;
assert_eq!(docs, vec![DocAddress(0u32, 2u32)].into_iter().collect());
}
{
let docs = searcher.search(&intersection_ac, &DocSetCollector)?;
assert_eq!(docs, vec![DocAddress(0u32, 1u32)].into_iter().collect());
}
{
let docs = searcher.search(&intersection_bc, &DocSetCollector)?;
assert_eq!(docs, vec![DocAddress(0u32, 0u32)].into_iter().collect());
}
Ok(())
}
}


@@ -1,7 +1,9 @@
use crate::core::SegmentReader; use crate::core::SegmentReader;
use crate::postings::FreqReadingOption;
use crate::query::explanation::does_not_match; use crate::query::explanation::does_not_match;
use crate::query::score_combiner::{DoNothingCombiner, ScoreCombiner, SumWithCoordsCombiner}; use crate::query::score_combiner::{DoNothingCombiner, ScoreCombiner, SumWithCoordsCombiner};
use crate::query::term_query::TermScorer; use crate::query::term_query::TermScorer;
use crate::query::weight::{for_each_pruning_scorer, for_each_scorer};
use crate::query::EmptyScorer; use crate::query::EmptyScorer;
use crate::query::Exclude; use crate::query::Exclude;
use crate::query::Occur; use crate::query::Occur;
@@ -10,16 +12,21 @@ use crate::query::Scorer;
use crate::query::Union; use crate::query::Union;
use crate::query::Weight; use crate::query::Weight;
use crate::query::{intersect_scorers, Explanation}; use crate::query::{intersect_scorers, Explanation};
use crate::{DocId, SkipResult}; use crate::{DocId, Score};
use std::collections::HashMap; use std::collections::HashMap;
fn scorer_union<TScoreCombiner>(scorers: Vec<Box<dyn Scorer>>) -> Box<dyn Scorer> enum SpecializedScorer {
TermUnion(Vec<TermScorer>),
Other(Box<dyn Scorer>),
}
fn scorer_union<TScoreCombiner>(scorers: Vec<Box<dyn Scorer>>) -> SpecializedScorer
where where
TScoreCombiner: ScoreCombiner, TScoreCombiner: ScoreCombiner,
{ {
assert!(!scorers.is_empty()); assert!(!scorers.is_empty());
if scorers.len() == 1 { if scorers.len() == 1 {
return scorers.into_iter().next().unwrap(); //< we checked the size beforehand return SpecializedScorer::Other(scorers.into_iter().next().unwrap()); //< we checked the size beforehand
} }
{ {
@@ -29,14 +36,30 @@ where
.into_iter() .into_iter()
.map(|scorer| *(scorer.downcast::<TermScorer>().map_err(|_| ()).unwrap())) .map(|scorer| *(scorer.downcast::<TermScorer>().map_err(|_| ()).unwrap()))
.collect(); .collect();
let scorer: Box<dyn Scorer> = if scorers
Box::new(Union::<TermScorer, TScoreCombiner>::from(scorers)); .iter()
return scorer; .all(|scorer| scorer.freq_reading_option() == FreqReadingOption::ReadFreq)
{
// Block wand is only available iff we read frequencies.
return SpecializedScorer::TermUnion(scorers);
} else {
return SpecializedScorer::Other(Box::new(Union::<_, TScoreCombiner>::from(
scorers,
)));
}
} }
} }
SpecializedScorer::Other(Box::new(Union::<_, TScoreCombiner>::from(scorers)))
}
let scorer: Box<dyn Scorer> = Box::new(Union::<_, TScoreCombiner>::from(scorers)); fn into_box_scorer<TScoreCombiner: ScoreCombiner>(scorer: SpecializedScorer) -> Box<dyn Scorer> {
scorer match scorer {
SpecializedScorer::TermUnion(term_scorers) => {
let union_scorer = Union::<TermScorer, TScoreCombiner>::from(term_scorers);
Box::new(union_scorer)
}
SpecializedScorer::Other(scorer) => scorer,
}
} }
pub struct BooleanWeight { pub struct BooleanWeight {
@@ -55,7 +78,7 @@ impl BooleanWeight {
fn per_occur_scorers( fn per_occur_scorers(
&self, &self,
reader: &SegmentReader, reader: &SegmentReader,
boost: f32, boost: Score,
) -> crate::Result<HashMap<Occur, Vec<Box<dyn Scorer>>>> { ) -> crate::Result<HashMap<Occur, Vec<Box<dyn Scorer>>>> {
let mut per_occur_scorers: HashMap<Occur, Vec<Box<dyn Scorer>>> = HashMap::new(); let mut per_occur_scorers: HashMap<Occur, Vec<Box<dyn Scorer>>> = HashMap::new();
for &(ref occur, ref subweight) in &self.weights { for &(ref occur, ref subweight) in &self.weights {
@@ -71,42 +94,52 @@ impl BooleanWeight {
fn complex_scorer<TScoreCombiner: ScoreCombiner>( fn complex_scorer<TScoreCombiner: ScoreCombiner>(
&self, &self,
reader: &SegmentReader, reader: &SegmentReader,
boost: f32, boost: Score,
) -> crate::Result<Box<dyn Scorer>> { ) -> crate::Result<SpecializedScorer> {
let mut per_occur_scorers = self.per_occur_scorers(reader, boost)?; let mut per_occur_scorers = self.per_occur_scorers(reader, boost)?;
let should_scorer_opt: Option<Box<dyn Scorer>> = per_occur_scorers let should_scorer_opt: Option<SpecializedScorer> = per_occur_scorers
.remove(&Occur::Should) .remove(&Occur::Should)
.map(scorer_union::<TScoreCombiner>); .map(scorer_union::<TScoreCombiner>);
let exclude_scorer_opt: Option<Box<dyn Scorer>> = per_occur_scorers let exclude_scorer_opt: Option<Box<dyn Scorer>> = per_occur_scorers
.remove(&Occur::MustNot) .remove(&Occur::MustNot)
.map(scorer_union::<TScoreCombiner>); .map(scorer_union::<DoNothingCombiner>)
.map(into_box_scorer::<DoNothingCombiner>);
let must_scorer_opt: Option<Box<dyn Scorer>> = per_occur_scorers let must_scorer_opt: Option<Box<dyn Scorer>> = per_occur_scorers
.remove(&Occur::Must) .remove(&Occur::Must)
.map(intersect_scorers); .map(intersect_scorers);
let positive_scorer: Box<dyn Scorer> = match (should_scorer_opt, must_scorer_opt) { let positive_scorer: SpecializedScorer = match (should_scorer_opt, must_scorer_opt) {
(Some(should_scorer), Some(must_scorer)) => { (Some(should_scorer), Some(must_scorer)) => {
if self.scoring_enabled { if self.scoring_enabled {
Box::new(RequiredOptionalScorer::<_, _, TScoreCombiner>::new( SpecializedScorer::Other(Box::new(RequiredOptionalScorer::<
Box<dyn Scorer>,
Box<dyn Scorer>,
TScoreCombiner,
>::new(
must_scorer, must_scorer,
should_scorer, into_box_scorer::<TScoreCombiner>(should_scorer),
)) )))
} else { } else {
must_scorer SpecializedScorer::Other(must_scorer)
} }
} }
(None, Some(must_scorer)) => must_scorer, (None, Some(must_scorer)) => SpecializedScorer::Other(must_scorer),
(Some(should_scorer), None) => should_scorer, (Some(should_scorer), None) => should_scorer,
(None, None) => { (None, None) => {
return Ok(Box::new(EmptyScorer)); return Ok(SpecializedScorer::Other(Box::new(EmptyScorer)));
} }
}; };
if let Some(exclude_scorer) = exclude_scorer_opt { if let Some(exclude_scorer) = exclude_scorer_opt {
Ok(Box::new(Exclude::new(positive_scorer, exclude_scorer))) let positive_scorer_boxed: Box<dyn Scorer> =
into_box_scorer::<TScoreCombiner>(positive_scorer);
Ok(SpecializedScorer::Other(Box::new(Exclude::new(
positive_scorer_boxed,
exclude_scorer,
))))
} else { } else {
Ok(positive_scorer) Ok(positive_scorer)
} }
@@ -114,7 +147,7 @@ impl BooleanWeight {
} }
impl Weight for BooleanWeight { impl Weight for BooleanWeight {
fn scorer(&self, reader: &SegmentReader, boost: f32) -> crate::Result<Box<dyn Scorer>> { fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
if self.weights.is_empty() { if self.weights.is_empty() {
Ok(Box::new(EmptyScorer)) Ok(Box::new(EmptyScorer))
} else if self.weights.len() == 1 { } else if self.weights.len() == 1 {
@@ -126,18 +159,22 @@ impl Weight for BooleanWeight {
} }
} else if self.scoring_enabled { } else if self.scoring_enabled {
self.complex_scorer::<SumWithCoordsCombiner>(reader, boost) self.complex_scorer::<SumWithCoordsCombiner>(reader, boost)
.map(|specialized_scorer| {
into_box_scorer::<SumWithCoordsCombiner>(specialized_scorer)
})
} else { } else {
self.complex_scorer::<DoNothingCombiner>(reader, boost) self.complex_scorer::<DoNothingCombiner>(reader, boost)
.map(into_box_scorer::<DoNothingCombiner>)
} }
} }
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> { fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0f32)?; let mut scorer = self.scorer(reader, 1.0)?;
if scorer.skip_next(doc) != SkipResult::Reached { if scorer.seek(doc) != doc {
return Err(does_not_match(doc)); return Err(does_not_match(doc));
} }
if !self.scoring_enabled { if !self.scoring_enabled {
return Ok(Explanation::new("BooleanQuery with no scoring", 1f32)); return Ok(Explanation::new("BooleanQuery with no scoring", 1.0));
} }
let mut explanation = Explanation::new("BooleanClause. Sum of ...", scorer.score()); let mut explanation = Explanation::new("BooleanClause. Sum of ...", scorer.score());
@@ -150,6 +187,53 @@ impl Weight for BooleanWeight {
} }
Ok(explanation) Ok(explanation)
} }
fn for_each(
&self,
reader: &SegmentReader,
callback: &mut dyn FnMut(DocId, Score),
) -> crate::Result<()> {
let scorer = self.complex_scorer::<SumWithCoordsCombiner>(reader, 1.0)?;
match scorer {
SpecializedScorer::TermUnion(term_scorers) => {
let mut union_scorer =
Union::<TermScorer, SumWithCoordsCombiner>::from(term_scorers);
for_each_scorer(&mut union_scorer, callback);
}
SpecializedScorer::Other(mut scorer) => {
for_each_scorer(scorer.as_mut(), callback);
}
}
Ok(())
}
/// Calls `callback` with all of the `(doc, score)` pairs for which the score
/// exceeds the given threshold.
///
/// This method is useful for the TopDocs collector.
/// For all docsets, the blanket implementation has the benefit
/// of prefiltering (doc, score) pairs, avoiding the
/// virtual dispatch cost.
///
/// More importantly, it makes it possible for scorers to implement
/// important optimizations (e.g. BlockWAND for union).
fn for_each_pruning(
&self,
threshold: Score,
reader: &SegmentReader,
callback: &mut dyn FnMut(DocId, Score) -> Score,
) -> crate::Result<()> {
let scorer = self.complex_scorer::<SumWithCoordsCombiner>(reader, 1.0)?;
match scorer {
SpecializedScorer::TermUnion(term_scorers) => {
super::block_wand(term_scorers, threshold, callback);
}
SpecializedScorer::Other(mut scorer) => {
for_each_pruning_scorer(scorer.as_mut(), threshold, callback);
}
}
Ok(())
}
} }
fn is_positive_occur(occur: Occur) -> bool { fn is_positive_occur(occur: Occur) -> bool {


@@ -1,13 +1,17 @@
mod block_wand;
mod boolean_query; mod boolean_query;
mod boolean_weight; mod boolean_weight;
pub(crate) use self::block_wand::block_wand;
pub use self::boolean_query::BooleanQuery; pub use self::boolean_query::BooleanQuery;
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
use crate::assert_nearly_equals;
use crate::collector::tests::TEST_COLLECTOR_WITH_SCORE; use crate::collector::tests::TEST_COLLECTOR_WITH_SCORE;
use crate::collector::TopDocs;
use crate::query::score_combiner::SumWithCoordsCombiner; use crate::query::score_combiner::SumWithCoordsCombiner;
use crate::query::term_query::TermScorer; use crate::query::term_query::TermScorer;
use crate::query::Intersection; use crate::query::Intersection;
@@ -18,9 +22,8 @@ mod tests {
use crate::query::Scorer; use crate::query::Scorer;
use crate::query::TermQuery; use crate::query::TermQuery;
use crate::schema::*; use crate::schema::*;
use crate::tests::assert_nearly_equals;
use crate::Index; use crate::Index;
use crate::{DocAddress, DocId}; use crate::{DocAddress, DocId, Score};
fn aux_test_helper() -> (Index, Field) { fn aux_test_helper() -> (Index, Field) {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
@@ -29,27 +32,12 @@ mod tests {
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
{ {
// writing the segment // writing the segment
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
{ index_writer.add_document(doc!(text_field => "a b c"));
let doc = doc!(text_field => "a b c"); index_writer.add_document(doc!(text_field => "a c"));
index_writer.add_document(doc); index_writer.add_document(doc!(text_field => "b c"));
} index_writer.add_document(doc!(text_field => "a b c d"));
{ index_writer.add_document(doc!(text_field => "d"));
let doc = doc!(text_field => "a c");
index_writer.add_document(doc);
}
{
let doc = doc!(text_field => "b c");
index_writer.add_document(doc);
}
{
let doc = doc!(text_field => "a b c d");
index_writer.add_document(doc);
}
{
let doc = doc!(text_field => "d");
index_writer.add_document(doc);
}
assert!(index_writer.commit().is_ok()); assert!(index_writer.commit().is_ok());
} }
(index, text_field) (index, text_field)
@@ -71,9 +59,7 @@ mod tests {
let query = query_parser.parse_query("+a").unwrap(); let query = query_parser.parse_query("+a").unwrap();
let searcher = index.reader().unwrap().searcher(); let searcher = index.reader().unwrap().searcher();
let weight = query.weight(&searcher, true).unwrap(); let weight = query.weight(&searcher, true).unwrap();
let scorer = weight let scorer = weight.scorer(searcher.segment_reader(0u32), 1.0).unwrap();
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<TermScorer>()); assert!(scorer.is::<TermScorer>());
} }
@@ -85,17 +71,13 @@ mod tests {
{ {
let query = query_parser.parse_query("+a +b +c").unwrap(); let query = query_parser.parse_query("+a +b +c").unwrap();
let weight = query.weight(&searcher, true).unwrap(); let weight = query.weight(&searcher, true).unwrap();
let scorer = weight let scorer = weight.scorer(searcher.segment_reader(0u32), 1.0).unwrap();
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<Intersection<TermScorer>>()); assert!(scorer.is::<Intersection<TermScorer>>());
} }
{ {
let query = query_parser.parse_query("+a +(b c)").unwrap(); let query = query_parser.parse_query("+a +(b c)").unwrap();
let weight = query.weight(&searcher, true).unwrap(); let weight = query.weight(&searcher, true).unwrap();
let scorer = weight let scorer = weight.scorer(searcher.segment_reader(0u32), 1.0).unwrap();
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<Intersection<Box<dyn Scorer>>>()); assert!(scorer.is::<Intersection<Box<dyn Scorer>>>());
} }
} }
@@ -108,9 +90,7 @@ mod tests {
{ {
let query = query_parser.parse_query("+a b").unwrap(); let query = query_parser.parse_query("+a b").unwrap();
let weight = query.weight(&searcher, true).unwrap(); let weight = query.weight(&searcher, true).unwrap();
let scorer = weight let scorer = weight.scorer(searcher.segment_reader(0u32), 1.0).unwrap();
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<RequiredOptionalScorer< assert!(scorer.is::<RequiredOptionalScorer<
Box<dyn Scorer>, Box<dyn Scorer>,
Box<dyn Scorer>, Box<dyn Scorer>,
@@ -120,9 +100,7 @@ mod tests {
{ {
let query = query_parser.parse_query("+a b").unwrap(); let query = query_parser.parse_query("+a b").unwrap();
let weight = query.weight(&searcher, false).unwrap(); let weight = query.weight(&searcher, false).unwrap();
let scorer = weight let scorer = weight.scorer(searcher.segment_reader(0u32), 1.0).unwrap();
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
assert!(scorer.is::<TermScorer>()); assert!(scorer.is::<TermScorer>());
} }
} }
@@ -153,31 +131,30 @@ mod tests {
.map(|doc| doc.1) .map(|doc| doc.1)
.collect::<Vec<DocId>>() .collect::<Vec<DocId>>()
}; };
{ {
let boolean_query = BooleanQuery::from(vec![(Occur::Must, make_term_query("a"))]); let boolean_query = BooleanQuery::new(vec![(Occur::Must, make_term_query("a"))]);
assert_eq!(matching_docs(&boolean_query), vec![0, 1, 3]); assert_eq!(matching_docs(&boolean_query), vec![0, 1, 3]);
} }
{ {
let boolean_query = BooleanQuery::from(vec![(Occur::Should, make_term_query("a"))]); let boolean_query = BooleanQuery::new(vec![(Occur::Should, make_term_query("a"))]);
assert_eq!(matching_docs(&boolean_query), vec![0, 1, 3]); assert_eq!(matching_docs(&boolean_query), vec![0, 1, 3]);
} }
{ {
let boolean_query = BooleanQuery::from(vec![ let boolean_query = BooleanQuery::new(vec![
(Occur::Should, make_term_query("a")), (Occur::Should, make_term_query("a")),
(Occur::Should, make_term_query("b")), (Occur::Should, make_term_query("b")),
]); ]);
assert_eq!(matching_docs(&boolean_query), vec![0, 1, 2, 3]); assert_eq!(matching_docs(&boolean_query), vec![0, 1, 2, 3]);
} }
{ {
let boolean_query = BooleanQuery::from(vec![ let boolean_query = BooleanQuery::new(vec![
(Occur::Must, make_term_query("a")), (Occur::Must, make_term_query("a")),
(Occur::Should, make_term_query("b")), (Occur::Should, make_term_query("b")),
]); ]);
assert_eq!(matching_docs(&boolean_query), vec![0, 1, 3]); assert_eq!(matching_docs(&boolean_query), vec![0, 1, 3]);
} }
{ {
let boolean_query = BooleanQuery::from(vec![ let boolean_query = BooleanQuery::new(vec![
(Occur::Must, make_term_query("a")), (Occur::Must, make_term_query("a")),
(Occur::Should, make_term_query("b")), (Occur::Should, make_term_query("b")),
(Occur::MustNot, make_term_query("d")), (Occur::MustNot, make_term_query("d")),
@@ -185,11 +162,59 @@ mod tests {
assert_eq!(matching_docs(&boolean_query), vec![0, 1]); assert_eq!(matching_docs(&boolean_query), vec![0, 1]);
} }
{ {
let boolean_query = BooleanQuery::from(vec![(Occur::MustNot, make_term_query("d"))]); let boolean_query = BooleanQuery::new(vec![(Occur::MustNot, make_term_query("d"))]);
assert_eq!(matching_docs(&boolean_query), Vec::<u32>::new()); assert_eq!(matching_docs(&boolean_query), Vec::<u32>::new());
} }
} }
#[test]
pub fn test_boolean_query_two_excluded() {
let (index, text_field) = aux_test_helper();
let make_term_query = |text: &str| {
let term_query = TermQuery::new(
Term::from_field_text(text_field, text),
IndexRecordOption::Basic,
);
let query: Box<dyn Query> = Box::new(term_query);
query
};
let reader = index.reader().unwrap();
let matching_topdocs = |query: &dyn Query| {
reader
.searcher()
.search(query, &TopDocs::with_limit(3))
.unwrap()
};
let score_doc_4: Score; // score of doc 4 should not be influenced by exclusion
{
let boolean_query_no_excluded =
BooleanQuery::new(vec![(Occur::Must, make_term_query("d"))]);
let topdocs_no_excluded = matching_topdocs(&boolean_query_no_excluded);
assert_eq!(topdocs_no_excluded.len(), 2);
let (top_score, top_doc) = topdocs_no_excluded[0];
assert_eq!(top_doc, DocAddress(0, 4));
assert_eq!(topdocs_no_excluded[1].1, DocAddress(0, 3)); // ignore score of doc 3.
score_doc_4 = top_score;
}
{
let boolean_query_two_excluded = BooleanQuery::new(vec![
(Occur::Must, make_term_query("d")),
(Occur::MustNot, make_term_query("a")),
(Occur::MustNot, make_term_query("b")),
]);
let topdocs_excluded = matching_topdocs(&boolean_query_two_excluded);
assert_eq!(topdocs_excluded.len(), 1);
let (top_score, top_doc) = topdocs_excluded[0];
assert_eq!(top_doc, DocAddress(0, 4));
assert_eq!(top_score, score_doc_4);
}
}
#[test] #[test]
pub fn test_boolean_query_with_weight() { pub fn test_boolean_query_with_weight() {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
@@ -197,7 +222,7 @@ mod tests {
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
{ {
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_for_tests().unwrap();
index_writer.add_document(doc!(text_field => "a b c")); index_writer.add_document(doc!(text_field => "a b c"));
index_writer.add_document(doc!(text_field => "a c")); index_writer.add_document(doc!(text_field => "a c"));
index_writer.add_document(doc!(text_field => "b c")); index_writer.add_document(doc!(text_field => "b c"));
@@ -214,23 +239,21 @@ mod tests {
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
let searcher = reader.searcher(); let searcher = reader.searcher();
let boolean_query = let boolean_query =
BooleanQuery::from(vec![(Occur::Should, term_a), (Occur::Should, term_b)]); BooleanQuery::new(vec![(Occur::Should, term_a), (Occur::Should, term_b)]);
let boolean_weight = boolean_query.weight(&searcher, true).unwrap(); let boolean_weight = boolean_query.weight(&searcher, true).unwrap();
{ {
let mut boolean_scorer = boolean_weight let mut boolean_scorer = boolean_weight
.scorer(searcher.segment_reader(0u32), 1.0f32) .scorer(searcher.segment_reader(0u32), 1.0)
.unwrap(); .unwrap();
assert!(boolean_scorer.advance());
assert_eq!(boolean_scorer.doc(), 0u32); assert_eq!(boolean_scorer.doc(), 0u32);
assert_nearly_equals(boolean_scorer.score(), 0.84163445f32); assert_nearly_equals!(boolean_scorer.score(), 0.84163445);
} }
{ {
let mut boolean_scorer = boolean_weight let mut boolean_scorer = boolean_weight
.scorer(searcher.segment_reader(0u32), 2.0f32) .scorer(searcher.segment_reader(0u32), 2.0)
.unwrap(); .unwrap();
assert!(boolean_scorer.advance());
assert_eq!(boolean_scorer.doc(), 0u32); assert_eq!(boolean_scorer.doc(), 0u32);
assert_nearly_equals(boolean_scorer.score(), 1.6832689f32); assert_nearly_equals!(boolean_scorer.score(), 1.6832689);
} }
} }
@@ -256,174 +279,38 @@ mod tests {
}; };
{ {
let boolean_query = BooleanQuery::from(vec![ let boolean_query = BooleanQuery::new(vec![
(Occur::Must, make_term_query("a")), (Occur::Must, make_term_query("a")),
(Occur::Must, make_term_query("b")), (Occur::Must, make_term_query("b")),
]); ]);
assert_eq!(score_docs(&boolean_query), vec![0.977973, 0.84699446]); let scores = score_docs(&boolean_query);
assert_nearly_equals!(scores[0], 0.977973);
assert_nearly_equals!(scores[1], 0.84699446);
} }
} }
// motivated by #554
#[test] #[test]
fn test_bm25_several_fields() { pub fn test_explain() -> crate::Result<()> {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let title = schema_builder.add_text_field("title", TEXT); let text = schema_builder.add_text_field("text", STRING);
let text = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 5_000_000)?;
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); index_writer.add_document(doc!(text=>"a"));
index_writer.add_document(doc!( index_writer.add_document(doc!(text=>"b"));
// tf = 1 0 index_writer.commit()?;
title => "Законы притяжения Оксана Кулакова", let searcher = index.reader()?.searcher();
// tf = 1 0 let term_a: Box<dyn Query> = Box::new(TermQuery::new(
text => "Законы притяжения Оксана Кулакова] \n\nТема: Сексуальное искусство, Женственность\nТип товара: Запись вебинара (аудио)\nПродолжительность: 1,5 часа\n\nСсылка на вебинар:\n ", Term::from_field_text(text, "a"),
IndexRecordOption::Basic,
)); ));
index_writer.add_document(doc!( let term_b: Box<dyn Query> = Box::new(TermQuery::new(
// tf = 1 0 Term::from_field_text(text, "b"),
title => "Любимые русские пироги (Оксана Путан)", IndexRecordOption::Basic,
// tf = 2 0
text => "http://i95.fastpic.ru/big/2017/0628/9a/615b9c8504d94a3893d7f496ac53539a.jpg \n\nОт издателя\nОксана Путан профессиональный повар, автор кулинарных книг и известный кулинарный блогер. Ее рецепты отличаются практичностью, доступностью и пользуются огромной популярностью в русскоязычном интернете. Это третья книга автора о самом вкусном и ароматном настоящих русских пирогах и выпечке!\nДаже новички на кухне легко готовят по ее рецептам. Оксана описывает процесс приготовления настолько подробно и понятно, что вам остается только наслаждаться готовкой и не тратить время на лишние усилия. Готовьте легко и просто!\n\nhttps://www.ozon.ru/context/detail/id/139872462/"
)); ));
index_writer.add_document(doc!( let query = BooleanQuery::from(vec![(Occur::Should, term_a), (Occur::Should, term_b)]);
// tf = 1 1 let explanation = query.explain(&searcher, DocAddress(0, 0u32))?;
title => "PDF Мастер Класс \"Морячок\" (Оксана Лифенко)", assert_nearly_equals!(explanation.value(), 0.6931472f32);
// tf = 0 0 Ok(())
text => "https://i.ibb.co/pzvHrDN/I3d U T6 Gg TM.jpg\nhttps://i.ibb.co/NFrb6v6/N0ls Z9nwjb U.jpg\nВ описание входит штаны, кофта, берет, матросский воротник. Описание продается в формате PDF, состоит из 12 страниц формата А4 и может быть напечатано на любом принтере.\nОписание предназначено для кукол BJD RealPuki от FairyLand, но может подойти и другим подобным куклам. Также вы можете вязать этот наряд из обычной пряжи, и он подойдет для куколок побольше.\nhttps://vk.com/market 95724412?w=product 95724412_2212"
));
for _ in 0..1_000 {
index_writer.add_document(doc!(
title => "a b d e f g",
text => "maitre corbeau sur un arbre perche tenait dans son bec un fromage Maitre rnard par lodeur alleche lui tint a peu pres ce langage."
));
}
index_writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let query_parser = QueryParser::for_index(&index, vec![title, text]);
let query = query_parser.parse_query("Оксана Лифенко").unwrap();
let weight = query.weight(&searcher, true).unwrap();
let mut scorer = weight
.scorer(searcher.segment_reader(0u32), 1.0f32)
.unwrap();
scorer.advance();
let explanation = query.explain(&searcher, DocAddress(0u32, 0u32)).unwrap();
assert_eq!(
explanation.to_pretty_json(),
r#"{
"value": 12.997711,
"description": "BooleanClause. Sum of ...",
"details": [
{
"value": 12.997711,
"description": "BooleanClause. Sum of ...",
"details": [
{
"value": 6.551476,
"description": "TermQuery, product of...",
"details": [
{
"value": 2.2,
"description": "(K1+1)"
},
{
"value": 5.658984,
"description": "idf, computed as log(1 + (N - n + 0.5) / (n + 0.5))",
"details": [
{
"value": 3.0,
"description": "n, number of docs containing this term"
},
{
"value": 1003.0,
"description": "N, total number of docs"
}
]
},
{
"value": 0.5262329,
"description": "freq / (freq + k1 * (1 - b + b * dl / avgdl))",
"details": [
{
"value": 1.0,
"description": "freq, occurrences of term within document"
},
{
"value": 1.2,
"description": "k1, term saturation parameter"
},
{
"value": 0.75,
"description": "b, length normalization parameter"
},
{
"value": 4.0,
"description": "dl, length of field"
},
{
"value": 5.997009,
"description": "avgdl, average length of field"
}
]
}
]
},
{
"value": 6.446235,
"description": "TermQuery, product of...",
"details": [
{
"value": 2.2,
"description": "(K1+1)"
},
{
"value": 5.9954567,
"description": "idf, computed as log(1 + (N - n + 0.5) / (n + 0.5))",
"details": [
{
"value": 2.0,
"description": "n, number of docs containing this term"
},
{
"value": 1003.0,
"description": "N, total number of docs"
}
]
},
{
"value": 0.4887212,
"description": "freq / (freq + k1 * (1 - b + b * dl / avgdl))",
"details": [
{
"value": 1.0,
"description": "freq, occurrences of term within document"
},
{
"value": 1.2,
"description": "k1, term saturation parameter"
},
{
"value": 0.75,
"description": "b, length normalization parameter"
},
{
"value": 20.0,
"description": "dl, length of field"
},
{
"value": 24.123629,
"description": "avgdl, average length of field"
}
]
}
]
}
]
}
]
}"#
);
} }
} }


@@ -1,8 +1,7 @@
-use crate::common::BitSet;
 use crate::fastfield::DeleteBitSet;
 use crate::query::explanation::does_not_match;
 use crate::query::{Explanation, Query, Scorer, Weight};
-use crate::{DocId, DocSet, Searcher, SegmentReader, SkipResult, Term};
+use crate::{DocId, DocSet, Score, Searcher, SegmentReader, Term};
 use std::collections::BTreeSet;
 use std::fmt;
@@ -13,12 +12,12 @@ use std::fmt;
 /// factor.
 pub struct BoostQuery {
     query: Box<dyn Query>,
-    boost: f32,
+    boost: Score,
 }

 impl BoostQuery {
     /// Builds a boost query.
-    pub fn new(query: Box<dyn Query>, boost: f32) -> BoostQuery {
+    pub fn new(query: Box<dyn Query>, boost: Score) -> BoostQuery {
         BoostQuery { query, boost }
     }
 }
@@ -56,23 +55,23 @@ impl Query for BoostQuery {
 pub(crate) struct BoostWeight {
     weight: Box<dyn Weight>,
-    boost: f32,
+    boost: Score,
 }

 impl BoostWeight {
-    pub fn new(weight: Box<dyn Weight>, boost: f32) -> Self {
+    pub fn new(weight: Box<dyn Weight>, boost: Score) -> Self {
         BoostWeight { weight, boost }
     }
 }

 impl Weight for BoostWeight {
-    fn scorer(&self, reader: &SegmentReader, boost: f32) -> crate::Result<Box<dyn Scorer>> {
+    fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
         self.weight.scorer(reader, boost * self.boost)
     }

     fn explain(&self, reader: &SegmentReader, doc: u32) -> crate::Result<Explanation> {
-        let mut scorer = self.scorer(reader, 1.0f32)?;
-        if scorer.skip_next(doc) != SkipResult::Reached {
+        let mut scorer = self.scorer(reader, 1.0)?;
+        if scorer.seek(doc) != doc {
             return Err(does_not_match(doc));
         }
         let mut explanation =
@@ -89,22 +88,22 @@ impl Weight for BoostWeight {
 pub(crate) struct BoostScorer<S: Scorer> {
     underlying: S,
-    boost: f32,
+    boost: Score,
 }

 impl<S: Scorer> BoostScorer<S> {
-    pub fn new(underlying: S, boost: f32) -> BoostScorer<S> {
+    pub fn new(underlying: S, boost: Score) -> BoostScorer<S> {
         BoostScorer { underlying, boost }
     }
 }

 impl<S: Scorer> DocSet for BoostScorer<S> {
-    fn advance(&mut self) -> bool {
+    fn advance(&mut self) -> DocId {
         self.underlying.advance()
     }

-    fn skip_next(&mut self, target: DocId) -> SkipResult {
-        self.underlying.skip_next(target)
+    fn seek(&mut self, target: DocId) -> DocId {
+        self.underlying.seek(target)
     }

     fn fill_buffer(&mut self, buffer: &mut [DocId]) -> usize {
@@ -119,10 +118,6 @@ impl<S: Scorer> DocSet for BoostScorer<S> {
         self.underlying.size_hint()
     }

-    fn append_to_bitset(&mut self, bitset: &mut BitSet) {
-        self.underlying.append_to_bitset(bitset)
-    }
-
     fn count(&mut self, delete_bitset: &DeleteBitSet) -> u32 {
         self.underlying.count(delete_bitset)
     }
@@ -133,7 +128,7 @@ impl<S: Scorer> DocSet for BoostScorer<S> {
 }

 impl<S: Scorer> Scorer for BoostScorer<S> {
-    fn score(&mut self) -> f32 {
+    fn score(&mut self) -> Score {
         self.underlying.score() * self.boost
     }
 }
@@ -149,7 +144,7 @@ mod tests {
     fn test_boost_query_explain() {
         let schema = Schema::builder().build();
         let index = Index::create_in_ram(schema);
-        let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
+        let mut index_writer = index.writer_for_tests().unwrap();
         index_writer.add_document(Document::new());
         assert!(index_writer.commit().is_ok());
         let reader = index.reader().unwrap();
@@ -158,7 +153,7 @@ mod tests {
         let explanation = query.explain(&searcher, DocAddress(0, 0u32)).unwrap();
         assert_eq!(
             explanation.to_pretty_json(),
-            "{\n \"value\": 0.2,\n \"description\": \"Boost x0.2 of ...\",\n \"details\": [\n {\n \"value\": 1.0,\n \"description\": \"AllQuery\"\n }\n ]\n}"
+            "{\n \"value\": 0.2,\n \"description\": \"Boost x0.2 of ...\",\n \"details\": [\n {\n \"value\": 1.0,\n \"description\": \"AllQuery\",\n \"context\": []\n }\n ],\n \"context\": []\n}"
         )
     }
 }
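
The switch from a bare f32 to the Score alias is purely a typing change for callers: a boost is still a plain multiplier applied to whatever the wrapped query scores. A minimal usage sketch against the post-change API (our own example, not taken from this diff):

use tantivy::query::{AllQuery, BoostQuery};
use tantivy::Score;

fn main() {
    // Boost factors are now spelled `Score` (a type alias) instead of `f32`.
    let boost: Score = 0.2;
    // Every document matched by the inner query has its score multiplied by `boost`,
    // which is exactly what the "Boost x0.2 of ..." explanation above reports.
    let _boosted = BoostQuery::new(Box::new(AllQuery), boost);
}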

View File

@@ -1,4 +1,5 @@
 use super::Scorer;
+use crate::docset::TERMINATED;
 use crate::query::explanation::does_not_match;
 use crate::query::Weight;
 use crate::query::{Explanation, Query};
@@ -33,7 +34,7 @@ impl Query for EmptyQuery {
 /// It is useful for tests and handling edge cases.
 pub struct EmptyWeight;

 impl Weight for EmptyWeight {
-    fn scorer(&self, _reader: &SegmentReader, _boost: f32) -> crate::Result<Box<dyn Scorer>> {
+    fn scorer(&self, _reader: &SegmentReader, _boost: Score) -> crate::Result<Box<dyn Scorer>> {
         Ok(Box::new(EmptyScorer))
     }
@@ -48,15 +49,12 @@ impl Weight for EmptyWeight {
 pub struct EmptyScorer;

 impl DocSet for EmptyScorer {
-    fn advance(&mut self) -> bool {
-        false
+    fn advance(&mut self) -> DocId {
+        TERMINATED
     }

     fn doc(&self) -> DocId {
-        panic!(
-            "You may not call .doc() on a scorer \
-             where the last call to advance() did not return true."
-        );
+        TERMINATED
     }

     fn size_hint(&self) -> u32 {
@@ -66,24 +64,21 @@ impl DocSet for EmptyScorer {
 impl Scorer for EmptyScorer {
     fn score(&mut self) -> Score {
-        0f32
+        0.0
     }
 }

 #[cfg(test)]
 mod tests {
+    use crate::docset::TERMINATED;
     use crate::query::EmptyScorer;
     use crate::DocSet;

     #[test]
     fn test_empty_scorer() {
         let mut empty_scorer = EmptyScorer;
-        assert!(!empty_scorer.advance());
-    }
-
-    #[test]
-    #[should_panic]
-    fn test_empty_scorer_panic_on_doc_call() {
-        EmptyScorer.doc();
+        assert_eq!(empty_scorer.doc(), TERMINATED);
+        assert_eq!(empty_scorer.advance(), TERMINATED);
+        assert_eq!(empty_scorer.doc(), TERMINATED);
     }
 }
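
The deleted #[should_panic] test reflects the new DocSet contract: an exhausted docset simply parks on the TERMINATED sentinel, and doc() remains callable. A small sketch of draining any docset under that contract (the drain helper is ours, and the imports assume the usual crate-root re-exports; inside the crate the diff uses crate::docset::TERMINATED):

use tantivy::query::EmptyScorer;
use tantivy::{DocId, DocSet, TERMINATED};

// Collects every document of a docset by looping until TERMINATED.
// A freshly created docset is already positioned on its first document
// (or directly on TERMINATED if, like EmptyScorer, it matches nothing).
fn drain<D: DocSet>(docset: &mut D) -> Vec<DocId> {
    let mut docs = Vec::new();
    while docset.doc() != TERMINATED {
        docs.push(docset.doc());
        docset.advance();
    }
    docs
}

fn main() {
    let mut empty = EmptyScorer;
    assert!(drain(&mut empty).is_empty());
}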

View File

@@ -1,12 +1,11 @@
-use crate::docset::{DocSet, SkipResult};
+use crate::docset::{DocSet, TERMINATED};
 use crate::query::Scorer;
 use crate::DocId;
 use crate::Score;

-#[derive(Clone, Copy, Debug)]
-enum State {
-    ExcludeOne(DocId),
-    Finished,
+#[inline(always)]
+fn is_within<TDocSetExclude: DocSet>(docset: &mut TDocSetExclude, doc: DocId) -> bool {
+    docset.doc() <= doc && docset.seek(doc) == doc
 }

 /// Filters a given `DocSet` by removing the docs from a given `DocSet`.
@@ -15,29 +14,6 @@ enum State {
 pub struct Exclude<TDocSet, TDocSetExclude> {
     underlying_docset: TDocSet,
     excluding_docset: TDocSetExclude,
-    excluding_state: State,
-}
-
-impl<TDocSet, TDocSetExclude> Exclude<TDocSet, TDocSetExclude>
-where
-    TDocSetExclude: DocSet,
-{
-    /// Creates a new `ExcludeScorer`
-    pub fn new(
-        underlying_docset: TDocSet,
-        mut excluding_docset: TDocSetExclude,
-    ) -> Exclude<TDocSet, TDocSetExclude> {
-        let state = if excluding_docset.advance() {
-            State::ExcludeOne(excluding_docset.doc())
-        } else {
-            State::Finished
-        };
-        Exclude {
-            underlying_docset,
-            excluding_docset,
-            excluding_state: state,
-        }
-    }
 }

 impl<TDocSet, TDocSetExclude> Exclude<TDocSet, TDocSetExclude>
@@ -45,33 +21,21 @@ where
     TDocSet: DocSet,
     TDocSetExclude: DocSet,
 {
-    /// Returns true iff the doc is not removed.
-    ///
-    /// The method has to be called with non strictly
-    /// increasing `doc`.
-    fn accept(&mut self) -> bool {
-        let doc = self.underlying_docset.doc();
-        match self.excluding_state {
-            State::ExcludeOne(excluded_doc) => {
-                if doc == excluded_doc {
-                    return false;
-                }
-                if excluded_doc > doc {
-                    return true;
-                }
-                match self.excluding_docset.skip_next(doc) {
-                    SkipResult::OverStep => {
-                        self.excluding_state = State::ExcludeOne(self.excluding_docset.doc());
-                        true
-                    }
-                    SkipResult::End => {
-                        self.excluding_state = State::Finished;
-                        true
-                    }
-                    SkipResult::Reached => false,
-                }
-            }
-            State::Finished => true,
-        }
+    /// Creates a new `ExcludeScorer`
+    pub fn new(
+        mut underlying_docset: TDocSet,
+        mut excluding_docset: TDocSetExclude,
+    ) -> Exclude<TDocSet, TDocSetExclude> {
+        while underlying_docset.doc() != TERMINATED {
+            let target = underlying_docset.doc();
+            if !is_within(&mut excluding_docset, target) {
+                break;
+            }
+            underlying_docset.advance();
+        }
+        Exclude {
+            underlying_docset,
+            excluding_docset,
+        }
     }
 }
@@ -81,27 +45,27 @@ where
     TDocSet: DocSet,
     TDocSetExclude: DocSet,
 {
-    fn advance(&mut self) -> bool {
-        while self.underlying_docset.advance() {
-            if self.accept() {
-                return true;
+    fn advance(&mut self) -> DocId {
+        loop {
+            let candidate = self.underlying_docset.advance();
+            if candidate == TERMINATED {
+                return TERMINATED;
+            }
+            if !is_within(&mut self.excluding_docset, candidate) {
+                return candidate;
             }
         }
-        false
     }

-    fn skip_next(&mut self, target: DocId) -> SkipResult {
-        let underlying_skip_result = self.underlying_docset.skip_next(target);
-        if underlying_skip_result == SkipResult::End {
-            return SkipResult::End;
+    fn seek(&mut self, target: DocId) -> DocId {
+        let candidate = self.underlying_docset.seek(target);
+        if candidate == TERMINATED {
+            return TERMINATED;
        }
-        if self.accept() {
-            underlying_skip_result
-        } else if self.advance() {
-            SkipResult::OverStep
-        } else {
-            SkipResult::End
+        if !is_within(&mut self.excluding_docset, candidate) {
+            return candidate;
         }
+        self.advance()
     }

     fn doc(&self) -> DocId {
@@ -141,8 +105,9 @@ mod tests {
             VecDocSet::from(vec![1, 2, 3, 10, 16, 24]),
         );
         let mut els = vec![];
-        while exclude_scorer.advance() {
+        while exclude_scorer.doc() != TERMINATED {
             els.push(exclude_scorer.doc());
+            exclude_scorer.advance();
         }
         assert_eq!(els, vec![5, 8, 15]);
     }
@@ -156,7 +121,7 @@ mod tests {
                 VecDocSet::from(vec![1, 2, 3, 10, 16, 24]),
             ))
         },
-        vec![1, 2, 5, 8, 10, 15, 24],
+        vec![5, 8, 10, 15, 24],
     );
 }
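
Exclude itself is an internal docset (it is what backs MustNot clauses); from the public API the same "A but not B" behavior is typically expressed through a BooleanQuery. A rough sketch under that assumption, with a hypothetical field and terms:

use tantivy::query::{BooleanQuery, Occur, Query, TermQuery};
use tantivy::schema::{Field, IndexRecordOption};
use tantivy::Term;

// Documents containing "keep" in `field`, excluding those that also contain "drop".
fn keep_but_not_drop(field: Field) -> BooleanQuery {
    let keep: Box<dyn Query> = Box::new(TermQuery::new(
        Term::from_field_text(field, "keep"),
        IndexRecordOption::Basic,
    ));
    let drop: Box<dyn Query> = Box::new(TermQuery::new(
        Term::from_field_text(field, "drop"),
        IndexRecordOption::Basic,
    ));
    BooleanQuery::from(vec![(Occur::Must, keep), (Occur::MustNot, drop)])
}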

Some files were not shown because too many files have changed in this diff.