Compare commits

...

78 Commits

Author SHA1 Message Date
François Massot
14d53851a8 Fix clippy, clean comment. 2022-03-28 01:17:07 +02:00
François Massot
2d176e66b6 Format. 2022-03-26 22:29:43 +01:00
François Massot
838a332db0 Fix fmt. 2022-03-26 21:33:08 +01:00
François Massot
defbd9139b Update fastfield codecs readme. 2022-03-26 21:33:08 +01:00
François Massot
0c87732459 Fix makefile. 2022-03-26 21:33:08 +01:00
François Massot
4d66a3f0a0 Put deprecated attributes on deprecated codecs. Clean. 2022-03-26 21:33:06 +01:00
François Massot
977f01a8a3 Deprecate linear and multilinear fast field codecs, add piecewise and FOR. Update tests and clean. 2022-03-26 21:27:15 +01:00
François Massot
c14bdd26d4 Clean. 2022-03-26 21:18:13 +01:00
François Massot
3272f80171 Fix clippy. 2022-03-26 21:17:32 +01:00
François Massot
23d5ab5656 Rename new codecs. 2022-03-26 21:17:32 +01:00
François Massot
245ed5fed1 Add float dataset for comparing fast field codec. 2022-03-26 21:17:32 +01:00
François Massot
33bed01168 Clean frame of ref codec. 2022-03-26 21:17:32 +01:00
François Massot
17a5f4f0ff Seed random datasets in fast field codecs comparison. 2022-03-26 21:17:30 +01:00
François Massot
c969582308 Add frame of reference codecs. 2022-03-26 21:16:50 +01:00
François Massot
18d2ee5bb7 Add another multilinear interpolation and real world dataset. 2022-03-26 21:15:50 +01:00
Maxim Kraynyuchenko
447811c111 Update README following sections: features, benchmark illustration & FAQ. (#1318)
* Updated features, benchmark illustration & FAQ.
* Updated README: Feat,Graph,Non-Feat,Companies,FAQ
2022-03-23 10:02:09 +09:00
PSeitz
f29acf5d8c fix clippy (#1321) 2022-03-22 12:48:23 +09:00
Uwe Klotz
125707dbe0 Replace chrono with time (#1307)
For date values `chrono` has been replaced with `time` 
- The `time` crate is re-exported as `tantivy::time` instead of `tantivy::chrono`.
- The type alias `tantivy::DateTime` has been removed.
- `Value::Date` wraps `time::PrimitiveDateTime` without time zone information.
- Internally date/time values are stored as seconds since UNIX epoch in UTC.
- Converting a `time::OffsetDateTime` to `Value::Date` implicitly converts the value into UTC.
If this is not desired do the time zone conversion yourself and use `time::PrimitiveDateTime`
directly instead.

Closes #1304
2022-03-21 10:50:19 +09:00
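A sketch of the explicit conversion the note above recommends, using the `time` 0.3 API; the function name is illustrative and not part of tantivy:

```rust
use time::{OffsetDateTime, PrimitiveDateTime, UtcOffset};

// Convert an offset-aware datetime to UTC explicitly before stripping the
// offset, rather than relying on the implicit UTC conversion performed by
// `Value::Date`.
fn to_utc_primitive(dt: OffsetDateTime) -> PrimitiveDateTime {
    let utc = dt.to_offset(UtcOffset::UTC);
    PrimitiveDateTime::new(utc.date(), utc.time())
}
```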
Paul Masurel
46d5de920d Removes all usage of block_on, and use a oneshot channel instead. (#1315)
* Removes all usage of block_on, and use a oneshot channel instead.

Calling `block_on` panics in certain contexts.
For instance, it panics when it is called in the context of another
call to block.

Using it in tantivy is unnecessary. We replace it by a thin wrapper
around a oneshot channel that supports both async/sync.

* Removing needless uses of async in the API.

Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>
2022-03-18 16:54:58 +09:00
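A minimal sketch, assuming the `oneshot` crate added to Cargo.toml in this compare, of how a single channel can serve both blocking and async call sites; the wrapper name and methods below are illustrative, not tantivy's actual types:

```rust
use oneshot::{channel, Receiver, RecvError};

// A receiver that can be consumed either synchronously or from async code,
// which is what this change uses instead of `block_on`.
struct PendingResult<T>(Receiver<T>);

impl<T> PendingResult<T> {
    /// Blocking wait, for synchronous call sites.
    fn wait(self) -> Result<T, RecvError> {
        self.0.recv()
    }

    /// Async wait: `oneshot::Receiver` itself implements `Future`.
    async fn wait_async(self) -> Result<T, RecvError> {
        self.0.await
    }
}

fn spawn_work() -> PendingResult<u64> {
    let (sender, receiver) = channel();
    std::thread::spawn(move || {
        // ... do the work, then hand back the result ...
        let _ = sender.send(42u64);
    });
    PendingResult(receiver)
}
```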
PSeitz
d2a7bcf217 fix fmt (#1317) 2022-03-18 15:53:27 +09:00
PSeitz
141b9aa245 Merge pull request #1306 from PSeitz/histogram
add Histogram aggregation
2022-03-18 05:03:46 +01:00
PSeitz
c5a6282fa8 Update src/aggregation/bucket/histogram/histogram.rs
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-03-18 04:55:31 +01:00
PSeitz
c0f524e1a3 Update src/aggregation/bucket/histogram/histogram.rs
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-03-18 04:55:25 +01:00
Paul Masurel
958b2bee08 Clippy comments (#1316) 2022-03-17 18:57:55 +09:00
Pascal Seitz
f619658e2c rename 2022-03-17 16:37:57 +08:00
Pascal Seitz
aa391bf843 refactor parameters 2022-03-17 16:28:37 +08:00
Pascal Seitz
47dcbdbeae handle empty results, empty indices, add tests 2022-03-17 10:24:34 +08:00
Pascal Seitz
691245bf20 make code more concise 2022-03-16 14:21:58 +08:00
Pascal Seitz
90798d4b39 address comments, add single bucket test 2022-03-16 13:58:13 +08:00
Pascal Seitz
0b6d9f90cf improve docs 2022-03-16 12:39:26 +08:00
PSeitz
8a5a12d961 add setter to json object options (#1311) 2022-03-16 10:36:30 +09:00
Pascal Seitz
e73542e2e8 Elasticsearch behaviour on hard/extended_bounds 2022-03-15 16:46:45 +08:00
Pascal Seitz
0262e44bbd merge_fruits pass by value 2022-03-15 12:59:22 +08:00
Pascal Seitz
613aad7a8a vec optional, improve performance 2022-03-14 21:29:07 +08:00
Pascal Seitz
1aa88b0c51 improve performance 2022-03-14 20:28:08 +08:00
Pascal Seitz
564fa38085 move sub_aggregations to own vec, use itertools minmax 2022-03-14 16:20:26 +08:00
dependabot[bot]
59ec21479f Update pprof requirement from 0.6 to 0.7 (#1305)
Updates the requirements on [pprof](https://github.com/tikv/pprof-rs) to permit the latest version.
- [Release notes](https://github.com/tikv/pprof-rs/releases)
- [Changelog](https://github.com/tikv/pprof-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tikv/pprof-rs/commits)

---
updated-dependencies:
- dependency-name: pprof
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-14 13:57:22 +09:00
PSeitz
42283f9e91 fix error message UnknownTokenizer (#1308)
closes #1303
2022-03-14 13:54:47 +09:00
PSeitz
b105bf72e1 use defaults in meta.json (#1310)
This change allows unset fields in meta.json to fall back to their defaults.
Until now, it was required to explicitly set e.g. fieldnorms: false
2022-03-14 13:54:06 +09:00
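A sketch of the serde mechanism behind this: a field carrying `#[serde(default)]` (or a default function) is simply filled in when missing from meta.json. Struct and field names here are illustrative, not tantivy's.

```rust
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct TextIndexingSettings {
    // Missing from the JSON? Fall back to the default instead of erroring out.
    #[serde(default = "default_fieldnorms")]
    fieldnorms: bool,
}

fn default_fieldnorms() -> bool {
    true
}

fn main() {
    // "fieldnorms" is absent from the input, so the default (true) is used.
    let settings: TextIndexingSettings = serde_json::from_str("{}").unwrap();
    assert!(settings.fieldnorms);
}
```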
Pascal Seitz
226f577803 Add Histogram aggregation 2022-03-11 21:52:07 +08:00
Paul Masurel
2e255c4bef Preparing for release 2022-03-09 09:59:08 +09:00
Paul Masurel
387592809f Updated CHANGELOG 2022-03-07 15:31:35 +09:00
Halvor Fladsrud Bø
cedced5bb0 Slop support for phrase queries (#1241)
Closes #1068
2022-03-07 15:29:18 +09:00
dependabot[bot]
d31f045872 Bump actions/checkout from 2 to 3 (#1300)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-07 11:54:26 +09:00
PSeitz
6656a70d1b Merge pull request #1301 from saroh/1232-doc-fastfield
update fastfield doc
2022-03-04 08:18:21 +01:00
saroh
d36e0a9549 fix fastfield doc 2022-03-03 17:43:18 +01:00
Antoine G
8771b2673f Update src/fastfield/writer.rs
Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>
2022-03-03 11:25:24 +01:00
Antoine G
a41d3d51a4 Update fastfield_codecs/src/lib.rs 2022-03-03 11:25:06 +01:00
Saroh
cae34ffe47 update fastfield doc 2022-03-02 16:04:15 +01:00
PSeitz
4b62f7907d Merge pull request #1297 from PSeitz/fix_clippy
fix clippy issues
2022-03-02 10:11:56 +01:00
Pascal Seitz
7fa6a0b665 cargo fmt 2022-03-02 09:24:14 +01:00
PSeitz
458ed29a31 Merge pull request #1299 from saroh/1232-doc-lint
doc lint for errors and aggregations
2022-03-02 09:22:07 +01:00
Antoine G
e37775fe21 iff->if or if and only if (#1298)
* has_xxx is_xxx -> if, these functions usually define equivalence
xxx returns bool -> specify equivalence when appropriate

* fix doc
2022-03-02 11:00:00 +09:00
Saroh
1cd2434a32 fix(aggregations) Readme 2022-03-01 20:37:48 +01:00
Saroh
de2cba6d1e error definitions 2022-03-01 20:13:59 +01:00
Paul Masurel
c0b1a58d27 Apply suggestions from code review 2022-03-01 18:41:58 +09:00
Paul Masurel
848b795b9f Apply suggestions from code review 2022-03-01 18:37:51 +09:00
Pascal Seitz
091b668624 fix clippy issues 2022-03-01 08:58:51 +01:00
Paul Masurel
5004290daa Return an error on certain type of corruption. (#1296) 2022-03-01 11:35:56 +09:00
StyMaar
5d2c2b804c Fix link to RamDirectory and MMapDirectory in Directory's documentation (#1295) 2022-03-01 09:46:53 +09:00
PSeitz
1a92b588e0 Merge pull request #1294 from PSeitz/aggregation
fix intermediate result de/serialization
2022-02-28 08:39:23 +01:00
Pascal Seitz
010e92c118 fix intermediate result de/serialization
return None for empty average/stats metric
add test for de/serialization of intermediate result
add test for metric on empty result
2022-02-25 16:39:57 +01:00
Paul Masurel
2ead010c83 Tantivy quickwit (#1293)
* Added sstable, enabled it by default, and added a parallel boolean query.
* Added async API for FileSlice.
* Added async get_doc
* Reduce blocksize to 32_000
* Added debug logs

Quickwit-specific features are hidden behind the `quickwit` feature flag.
2022-02-25 17:32:49 +09:00
PSeitz
c4f66eb185 improve validation in aggregation, extend invalid field test (#1292)
* improve validation in aggregation, extend invalid field test

improve validation in aggregation
extend invalid field test
Fixes #1291

* collect fast field names on request structure

* fix visibility of AggregationSegmentCollector
2022-02-25 15:21:19 +09:00
Paul Masurel
d7b46d2137 Added JSON Type (#1270)
- Removed useless copy when ingesting JSON.
- Bugfix in phrase query with a missing field norms.
- Disabled range query on default fields

Closes #1251
2022-02-24 16:25:22 +09:00
PSeitz
d042ce74c7 Merge pull request #1289 from PSeitz/numeric_options
rename IntOptions to NumericOptions
2022-02-23 14:04:40 +01:00
PSeitz
7ba9e662b8 Merge pull request #1290 from PSeitz/improve_docs
improve aggregation docs
2022-02-23 14:04:20 +01:00
Pascal Seitz
fdd5ef85e5 improve aggregation docs 2022-02-22 10:37:54 +01:00
Pascal Seitz
704498a1ac rename IntOptions to NumericOptions
keep IntOptions with deprecation warning
Fixes #1286
2022-02-21 22:20:07 +01:00
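A minimal sketch of the deprecation mechanism this commit relies on: the old name stays available as a type alias that warns on use (struct body elided; the options shown are not the real ones):

```rust
pub struct NumericOptions {
    // ... fast/indexed/stored options for u64/i64/f64 fields ...
}

// Old code referring to `IntOptions` keeps compiling but gets a warning.
#[deprecated(note = "use `NumericOptions` instead")]
pub type IntOptions = NumericOptions;
```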
PSeitz
1232af7928 fix docs (#1288) 2022-02-21 23:15:58 +09:00
Paul Masurel
d37633e034 Minor changes in indexing. (#1285) 2022-02-21 17:16:52 +09:00
Paul Masurel
9815067171 Minor changes 2022-02-21 13:55:01 +09:00
PSeitz
972cb6c26d Aggregation (#1276)
Added support for aggregation compatible with Elasticsearch's API.
2022-02-21 09:59:11 +09:00
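For reference, an Elasticsearch-style aggregation request of the kind this commit targets, written as JSON via `serde_json`; field names are borrowed from the `examples/aggregation.rs` file added later in this compare, and the snippet only builds the JSON without claiming to match tantivy's deserializer exactly:

```rust
use serde_json::json;

// Range buckets on "highscore" with an average of "price" inside each bucket.
fn main() {
    let request = json!({
        "score_ranges": {
            "range": {
                "field": "highscore",
                "ranges": [
                    { "from": -1.0, "to": 9.0 },
                    { "from": 9.0, "to": 14.0 },
                    { "from": 14.0, "to": 20.0 }
                ]
            },
            "aggs": {
                "average_price": { "avg": { "field": "price" } }
            }
        }
    });
    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}
```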
Paul Masurel
4dc80cfa25 Removes TokenStream chain. (#1283)
This change is mostly motivated by the introduction of json object.

We need to be able to inject a position object to make the position
shift.
2022-02-21 09:51:27 +09:00
PSeitz
cef145790c Fix opening bytes index with dynamic codec (#1279)
* Fix opening bytes index with dynamic codec

Fix #1278

* extend proptest to cover bytes field codec bug
2022-02-18 20:44:21 +09:00
Paul Masurel
e05e2a0c51 Added profiling to indexing bench (#1282) 2022-02-18 20:43:28 +09:00
Paul Masurel
e028515caf Simplified expull code. (#1281) 2022-02-18 18:57:10 +09:00
Paul Masurel
850b9eaea4 added a bench to measure the perf of indexing logs (#1275) 2022-02-18 16:48:29 +09:00
154 changed files with 112511 additions and 1695 deletions


@@ -10,7 +10,7 @@ jobs:
coverage:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
run: rustup toolchain install nightly --component llvm-tools-preview
- name: Install cargo-llvm-cov


@@ -12,13 +12,13 @@ jobs:
functional_test_unsorted:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Run indexing_unsorted
run: cargo test indexing_unsorted -- --ignored
functional_test_sorted:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Run indexing_sorted
run: cargo test indexing_sorted -- --ignored


@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build
run: cargo build --verbose --workspace
- name: Install latest nightly to test also against unstable feature flag
@@ -24,16 +24,23 @@ jobs:
toolchain: nightly
override: true
components: rustfmt
- name: Install latest nightly to test also against unstable feature flag
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
components: rustfmt, clippy
- name: Run tests
run: cargo +stable test --features mmap,brotli-compression,lz4-compression,snappy-compression,failpoints --verbose --workspace
- name: Run tests quickwit feature
run: cargo +stable test --features mmap,quickwit,failpoints --verbose --workspace
- name: Check Formatting
run: cargo +nightly fmt --all -- --check
- uses: actions-rs/clippy-check@v1
with:
toolchain: stable


@@ -1,3 +1,14 @@
Unreleased
================================
- For date values `chrono` has been replaced with `time` (@uklotzde) #1304 :
- The `time` crate is re-exported as `tantivy::time` instead of `tantivy::chrono`.
- The type alias `tantivy::DateTime` has been removed.
- `Value::Date` wraps `time::PrimitiveDateTime` without time zone information.
- Internally date/time values are stored as seconds since UNIX epoch in UTC.
- Converting a `time::OffsetDateTime` to `Value::Date` implicitly converts the value into UTC.
If this is not desired do the time zone conversion yourself and use `time::PrimitiveDateTime`
directly instead.
Tantivy 0.17
================================
- LogMergePolicy now triggers merges if the ratio of deleted documents reaches a threshold (@shikhar @fulmicoton) [#115](https://github.com/quickwit-oss/tantivy/issues/115)
@@ -7,6 +18,10 @@ Tantivy 0.17
- Bugfix that could in theory impact durability in theory on some filesystems [#1224](https://github.com/quickwit-oss/tantivy/issues/1224)
- Schema now offers not indexing fieldnorms (@lpouget) [#922](https://github.com/quickwit-oss/tantivy/issues/922)
- Reduce the number of fsync calls [#1225](https://github.com/quickwit-oss/tantivy/issues/1225)
- Fix opening bytes index with dynamic codec (@PSeitz) [#1278](https://github.com/quickwit-oss/tantivy/issues/1278)
- Added an aggregation collector compatible with Elasticsearch (@PSeitz)
- Added a JSON schema type @fulmicoton [#1251](https://github.com/quickwit-oss/tantivy/issues/1251)
- Added support for slop in phrase queries @halvorboe [#1068](https://github.com/quickwit-oss/tantivy/issues/1068)
Tantivy 0.16.2
================================


@@ -1,6 +1,6 @@
[package]
name = "tantivy"
version = "0.17.0-dev"
version = "0.17.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT"
categories = ["database-implementations", "data-structures"]
@@ -13,6 +13,7 @@ keywords = ["search", "information", "retrieval"]
edition = "2018"
[dependencies]
oneshot = "0.1"
base64 = "0.13"
byteorder = "1.4.3"
crc32fast = "1.2.1"
@@ -32,10 +33,9 @@ fs2={ version = "0.4.3", optional = true }
levenshtein_automata = "0.2"
uuid = { version = "0.8.2", features = ["v4", "serde"] }
crossbeam = "0.8.1"
futures = { version = "0.3.15", features = ["thread-pool"] }
tantivy-query-grammar = { version="0.15.0", path="./query-grammar" }
tantivy-bitpacker = { version="0.1", path="./bitpacker" }
common = { version = "0.1", path = "./common/", package = "tantivy-common" }
common = { version = "0.2", path = "./common/", package = "tantivy-common" }
fastfield_codecs = { version="0.1", path="./fastfield_codecs", default-features = false }
ownedbytes = { version="0.2", path="./ownedbytes" }
stable_deref_trait = "1.2"
@@ -48,13 +48,16 @@ thiserror = "1.0.24"
htmlescape = "0.3.1"
fail = "0.5"
murmurhash32 = "0.2"
chrono = "0.4.19"
time = { version = "0.3.7", features = ["serde-well-known"] }
smallvec = "1.6.1"
rayon = "1.5"
lru = "0.7.0"
fastdivide = "0.4"
itertools = "0.10.0"
measure_time = "0.8.0"
pretty_assertions = "1.1.0"
serde_cbor = {version="0.11", optional=true}
async-trait = "0.1"
[target.'cfg(windows)'.dependencies]
winapi = "0.3.9"
@@ -67,6 +70,8 @@ proptest = "1.0"
criterion = "0.3.5"
test-log = "0.2.8"
env_logger = "0.9.0"
pprof = {version= "0.7", features=["flamegraph", "criterion"]}
futures = "0.3.15"
[dev-dependencies.fail]
version = "0.5"
@@ -92,6 +97,8 @@ snappy-compression = ["snap"]
failpoints = ["fail/failpoints"]
unstable = [] # useful for benches.
quickwit = ["serde_cbor"]
[workspace]
members = ["query-grammar", "bitpacker", "common", "fastfield_codecs", "ownedbytes"]
@@ -110,3 +117,8 @@ required-features = ["fail/failpoints"]
[[bench]]
name = "analyzer"
harness = false
[[bench]]
name = "index-bench"
harness = false


@@ -1,4 +1,3 @@
[![Docs](https://docs.rs/tantivy/badge.svg)](https://docs.rs/crate/tantivy/)
[![Build Status](https://github.com/quickwit-oss/tantivy/actions/workflows/test.yml/badge.svg)](https://github.com/quickwit-oss/tantivy/actions/workflows/test.yml)
[![codecov](https://codecov.io/gh/quickwit-oss/tantivy/branch/main/graph/badge.svg)](https://codecov.io/gh/quickwit-oss/tantivy)
@@ -6,9 +5,10 @@
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Crates.io](https://img.shields.io/crates/v/tantivy.svg)](https://crates.io/crates/tantivy)
![Tantivy](https://tantivy-search.github.io/logo/tantivy-logo.png)
**Tantivy** is a **full text search engine library** written in Rust.
**Tantivy** is a **full-text search engine library** written in Rust.
It is closer to [Apache Lucene](https://lucene.apache.org/) than to [Elasticsearch](https://www.elastic.co/products/elasticsearch) or [Apache Solr](https://lucene.apache.org/solr/) in the sense it is not
an off-the-shelf search engine server, but rather a crate that can be used
@@ -16,19 +16,23 @@ to build such a search engine.
Tantivy is, in fact, strongly inspired by Lucene's design.
If you are looking for an alternative to Elasticsearch or Apache Solr, check out [Quickwit](https://github.com/quickwit-oss/quickwit), our search engine built on top of Tantivy.
# Benchmark
The following [benchmark](https://tantivy-search.github.io/bench/) break downs
performance for different type of queries / collection.
The following [benchmark](https://tantivy-search.github.io/bench/) breakdowns
performance for different types of queries/collections.
Your mileage WILL vary depending on the nature of queries and their load.
<img src="doc/assets/images/searchbenchmark.png">
# Features
- Full-text search
- Configurable tokenizer (stemming available for 17 Latin languages with third party support for Chinese ([tantivy-jieba](https://crates.io/crates/tantivy-jieba) and [cang-jie](https://crates.io/crates/cang-jie)), Japanese ([lindera](https://github.com/lindera-morphology/lindera-tantivy), [Vaporetto](https://crates.io/crates/vaporetto_tantivy), and [tantivy-tokenizer-tiny-segmenter](https://crates.io/crates/tantivy-tokenizer-tiny-segmenter)) and Korean ([lindera](https://github.com/lindera-morphology/lindera-tantivy) + [lindera-ko-dic-builder](https://github.com/lindera-morphology/lindera-ko-dic-builder))
- Fast (check out the :racehorse: :sparkles: [benchmark](https://tantivy-search.github.io/bench/) :sparkles: :racehorse:)
- Tiny startup time (<10ms), perfect for command line tools
- Tiny startup time (<10ms), perfect for command-line tools
- BM25 scoring (the same as Lucene)
- Natural query language (e.g. `(michael AND jackson) OR "king of pop"`)
- Phrase queries search (e.g. `"michael jackson"`)
@@ -43,23 +47,25 @@ Your mileage WILL vary depending on the nature of queries and their load.
- Range queries
- Faceted search
- Configurable indexing (optional term frequency and position indexing)
- JSON Field
- Aggregation Collector: range buckets, average, and stats metrics
- LogMergePolicy with deletes
- Searcher Warmer API
- Cheesy logo with a horse
## Non-features
- Distributed search is out of the scope of Tantivy. That being said, Tantivy is a
library upon which one could build a distributed search. Serializable/mergeable collector state for instance,
are within the scope of Tantivy.
Distributed search is out of the scope of Tantivy, but if you are looking for this feature, check out [Quickwit](https://github.com/quickwit-oss/quickwit/).
# Getting started
Tantivy works on stable Rust (>= 1.27) and supports Linux, MacOS, and Windows.
Tantivy works on stable Rust (>= 1.27) and supports Linux, macOS, and Windows.
- [Tantivy's simple search example](https://tantivy-search.github.io/examples/basic_search.html)
- [tantivy-cli and its tutorial](https://github.com/quickwit-oss/tantivy-cli) - `tantivy-cli` is an actual command line interface that makes it easy for you to create a search engine,
- [tantivy-cli and its tutorial](https://github.com/quickwit-oss/tantivy-cli) - `tantivy-cli` is an actual command-line interface that makes it easy for you to create a search engine,
index documents, and search via the CLI or a small server with a REST API.
It walks you through getting a wikipedia search engine up and running in a few minutes.
It walks you through getting a Wikipedia search engine up and running in a few minutes.
- [Reference doc for the last released version](https://docs.rs/tantivy/)
# How can I support this project?
@@ -119,3 +125,28 @@ By default, `rustc` compiles everything in the `examples/` directory in debug mo
rust-gdb target/debug/examples/$EXAMPLE_NAME
$ gdb run
```
# Companies Using Tantivy
<p align="left">
<img align="center" src="doc/assets/images/Nuclia.png" alt="Nuclia" height="25" width="auto" /> &nbsp;
<img align="center" src="doc/assets/images/humanfirst.png" alt="Humanfirst.ai" height="30" width="auto" />&nbsp;
<img align="center" src="doc/assets/images/element.io.svg" alt="Element.io" height="25" width="auto" />
</p>
# FAQ
### Can I use Tantivy in other languages?
- Python → [tantivy-py](https://github.com/quickwit-oss/tantivy-py)
- Ruby → [tantiny](https://github.com/baygeldin/tantiny)
You can also find other bindings on [GitHub](https://github.com/search?q=tantivy) but they may be less maintained.
### What are some examples of Tantivy use?
- [seshat](https://github.com/matrix-org/seshat/): A matrix message database/indexer
- [tantiny](https://github.com/baygeldin/tantiny): Tiny full-text search for Ruby
- [lnx](https://github.com/lnx-search/lnx): adaptable, typo tolerant search engine with a REST API
- and [more](https://github.com/search?q=tantivy)!
### On average, how much faster is Tantivy compared to Lucene?
- According to our [search latency benchmark](https://tantivy-search.github.io/bench/), Tantivy is approximately 2x faster than Lucene.

benches/hdfs.json (new file, 100,000 lines; diff suppressed because it is too large)

benches/index-bench.rs (new file, 121 lines)

@@ -0,0 +1,121 @@
use criterion::{criterion_group, criterion_main, Criterion};
use pprof::criterion::{Output, PProfProfiler};
use tantivy::schema::{INDEXED, STORED, STRING, TEXT};
use tantivy::Index;
const HDFS_LOGS: &str = include_str!("hdfs.json");
const NUM_REPEATS: usize = 2;
pub fn hdfs_index_benchmark(c: &mut Criterion) {
let schema = {
let mut schema_builder = tantivy::schema::SchemaBuilder::new();
schema_builder.add_u64_field("timestamp", INDEXED);
schema_builder.add_text_field("body", TEXT);
schema_builder.add_text_field("severity", STRING);
schema_builder.build()
};
let schema_with_store = {
let mut schema_builder = tantivy::schema::SchemaBuilder::new();
schema_builder.add_u64_field("timestamp", INDEXED | STORED);
schema_builder.add_text_field("body", TEXT | STORED);
schema_builder.add_text_field("severity", STRING | STORED);
schema_builder.build()
};
let dynamic_schema = {
let mut schema_builder = tantivy::schema::SchemaBuilder::new();
schema_builder.add_json_field("json", TEXT);
schema_builder.build()
};
let mut group = c.benchmark_group("index-hdfs");
group.sample_size(20);
group.bench_function("index-hdfs-no-commit", |b| {
b.iter(|| {
let index = Index::create_in_ram(schema.clone());
let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
let doc = schema.parse_document(doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
}
})
});
group.bench_function("index-hdfs-with-commit", |b| {
b.iter(|| {
let index = Index::create_in_ram(schema.clone());
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
let doc = schema.parse_document(doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
}
index_writer.commit().unwrap();
})
});
group.bench_function("index-hdfs-no-commit-with-docstore", |b| {
b.iter(|| {
let index = Index::create_in_ram(schema_with_store.clone());
let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
let doc = schema.parse_document(doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
}
})
});
group.bench_function("index-hdfs-with-commit-with-docstore", |b| {
b.iter(|| {
let index = Index::create_in_ram(schema_with_store.clone());
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
let doc = schema.parse_document(doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
}
index_writer.commit().unwrap();
})
});
group.bench_function("index-hdfs-no-commit-json-without-docstore", |b| {
b.iter(|| {
let index = Index::create_in_ram(dynamic_schema.clone());
let json_field = dynamic_schema.get_field("json").unwrap();
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(doc_json).unwrap();
let doc = tantivy::doc!(json_field=>json_val);
index_writer.add_document(doc).unwrap();
}
}
index_writer.commit().unwrap();
})
});
group.bench_function("index-hdfs-with-commit-json-without-docstore", |b| {
b.iter(|| {
let index = Index::create_in_ram(dynamic_schema.clone());
let json_field = dynamic_schema.get_field("json").unwrap();
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(doc_json).unwrap();
let doc = tantivy::doc!(json_field=>json_val);
index_writer.add_document(doc).unwrap();
}
}
index_writer.commit().unwrap();
})
});
}
criterion_group! {
name = benches;
config = Criterion::default().with_profiler(PProfProfiler::new(100, Output::Flamegraph(None)));
targets = hdfs_index_benchmark
}
criterion_main!(benches);


@@ -6,6 +6,7 @@ extern crate test;
mod tests {
use tantivy_bitpacker::BlockedBitpacker;
use test::Bencher;
#[bench]
fn bench_blockedbitp_read(b: &mut Bencher) {
let mut blocked_bitpacker = BlockedBitpacker::new();
@@ -20,6 +21,7 @@ mod tests {
out
});
}
#[bench]
fn bench_blockedbitp_create(b: &mut Bencher) {
b.iter(|| {


@@ -1,6 +1,6 @@
[package]
name = "tantivy-common"
version = "0.1.0"
version = "0.2.0"
authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"]
license = "MIT"
edition = "2018"

Binary image file added (3.1 KiB, not shown).


@@ -0,0 +1,8 @@
<svg width="518" height="112" viewBox="0 0 518 112" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M56 112C86.9279 112 112 86.9279 112 56C112 25.0721 86.9279 0 56 0C25.0721 0 0 25.0721 0 56C0 86.9279 25.0721 112 56 112Z" fill="#0DBD8B"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M45.7615 26.093C45.7615 23.8325 47.5977 22.0001 49.8629 22.0001C65.2154 22.0001 77.6611 34.4199 77.6611 49.7406C77.6611 52.001 75.8248 53.8335 73.5597 53.8335C71.2945 53.8335 69.4583 52.001 69.4583 49.7406C69.4583 38.9408 60.6851 30.1859 49.8629 30.1859C47.5977 30.1859 45.7615 28.3534 45.7615 26.093Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M85.8986 45.6477C88.1637 45.6477 89.9999 47.4801 89.9999 49.7406C89.9999 65.0612 77.5543 77.4811 62.2017 77.4811C59.9366 77.4811 58.1003 75.6486 58.1003 73.3882C58.1003 71.1277 59.9366 69.2953 62.2017 69.2953C73.024 69.2953 81.7972 60.5403 81.7972 49.7406C81.7972 47.4801 83.6334 45.6477 85.8986 45.6477Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M66.3031 85.907C66.3031 88.1675 64.4668 89.9999 62.2017 89.9999C46.8492 89.9999 34.4035 77.58 34.4035 62.2594C34.4035 59.9989 36.2398 58.1665 38.5049 58.1665C40.77 58.1665 42.6063 59.9989 42.6063 62.2594C42.6063 73.0592 51.3795 81.8141 62.2017 81.8141C64.4668 81.8141 66.3031 83.6466 66.3031 85.907Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M26.1014 66.3523C23.8363 66.3523 22.0001 64.5199 22.0001 62.2594C22 46.9388 34.4457 34.5189 49.7983 34.5189C52.0634 34.5189 53.8997 36.3514 53.8997 38.6118C53.8997 40.8723 52.0634 42.7047 49.7983 42.7047C38.976 42.7047 30.2028 51.4597 30.2028 62.2594C30.2028 64.5199 28.3666 66.3523 26.1014 66.3523Z" fill="white"/>
<path d="M197 63.5H157.5C157.967 67.6333 159.467 70.9333 162 73.4C164.533 75.8 167.867 77 172 77C174.733 77 177.2 76.3333 179.4 75C181.6 73.6667 183.167 71.8667 184.1 69.6H196.1C194.5 74.8667 191.5 79.1333 187.1 82.4C182.767 85.6 177.633 87.2 171.7 87.2C163.967 87.2 157.7 84.6333 152.9 79.5C148.167 74.3667 145.8 67.8667 145.8 60C145.8 52.3333 148.2 45.9 153 40.7C157.8 35.5 164 32.9 171.6 32.9C179.2 32.9 185.333 35.4667 190 40.6C194.733 45.6667 197.1 52.0667 197.1 59.8L197 63.5ZM171.6 42.6C167.867 42.6 164.767 43.7 162.3 45.9C159.833 48.1 158.3 51.0333 157.7 54.7H185.3C184.767 51.0333 183.3 48.1 180.9 45.9C178.5 43.7 175.4 42.6 171.6 42.6ZM205.289 70.5V11H217.189V70.7C217.189 73.3667 218.656 74.7 221.589 74.7L223.689 74.6V85.9C222.556 86.1 221.356 86.2 220.089 86.2C214.956 86.2 211.189 84.9 208.789 82.3C206.456 79.7 205.289 75.7667 205.289 70.5ZM279.109 63.5H239.609C240.076 67.6333 241.576 70.9333 244.109 73.4C246.643 75.8 249.976 77 254.109 77C256.843 77 259.309 76.3333 261.509 75C263.709 73.6667 265.276 71.8667 266.209 69.6H278.209C276.609 74.8667 273.609 79.1333 269.209 82.4C264.876 85.6 259.743 87.2 253.809 87.2C246.076 87.2 239.809 84.6333 235.009 79.5C230.276 74.3667 227.909 67.8667 227.909 60C227.909 52.3333 230.309 45.9 235.109 40.7C239.909 35.5 246.109 32.9 253.709 32.9C261.309 32.9 267.443 35.4667 272.109 40.6C276.843 45.6667 279.209 52.0667 279.209 59.8L279.109 63.5ZM253.709 42.6C249.976 42.6 246.876 43.7 244.409 45.9C241.943 48.1 240.409 51.0333 239.809 54.7H267.409C266.876 51.0333 265.409 48.1 263.009 45.9C260.609 43.7 257.509 42.6 253.709 42.6ZM332.798 56.2V86H320.898V54.9C320.898 47.0333 317.632 43.1 311.098 43.1C307.565 43.1 304.732 44.2333 302.598 46.5C300.532 48.7667 299.498 51.8667 299.498 55.8V86H287.598V34.1H298.598V41C299.865 38.6667 301.798 36.7333 304.398 35.2C306.998 33.6667 310.232 32.9 314.098 32.9C321.298 32.9 326.498 35.6333 329.698 41.1C334.098 35.6333 339.965 32.9 347.298 32.9C353.365 32.9 358.032 34.8 361.298 38.6C364.565 42.3333 366.198 47.2667 366.198 53.4V86H354.298V54.9C354.298 47.0333 351.032 43.1 344.498 43.1C340.898 43.1 338.032 44.2667 335.898 46.6C333.832 48.8667 332.798 52.0667 332.798 56.2ZM425.379 63.5H385.879C386.346 67.6333 387.846 70.9333 390.379 73.4C392.912 75.8 396.246 77 400.379 77C403.112 77 405.579 76.3333 407.779 75C409.979 73.6667 411.546 71.8667 412.479 69.6H424.479C422.879 74.8667 419.879 79.1333 415.479 82.4C411.146 85.6 406.012 87.2 400.079 87.2C392.346 87.2 386.079 84.6333 381.279 79.5C376.546 74.3667 374.179 67.8667 374.179 60C374.179 52.3333 376.579 45.9 381.379 40.7C386.179 35.5 392.379 32.9 399.979 32.9C407.579 32.9 413.712 35.4667 418.379 40.6C423.112 45.6667 425.479 52.0667 425.479 59.8L425.379 63.5ZM399.979 42.6C396.246 42.6 393.146 43.7 390.679 45.9C388.212 48.1 386.679 51.0333 386.079 54.7H413.679C413.146 51.0333 411.679 48.1 409.279 45.9C406.879 43.7 403.779 42.6 399.979 42.6ZM444.868 34.1V41C446.068 38.7333 448.035 36.8333 450.768 35.3C453.568 33.7 456.935 32.9 460.868 32.9C467.001 32.9 471.735 34.7667 475.068 38.5C478.468 42.2333 480.168 47.2 480.168 53.4V86H468.268V54.9C468.268 51.2333 467.401 48.3667 465.668 46.3C464.001 44.1667 461.435 43.1 457.968 43.1C454.168 43.1 451.168 44.2333 448.968 46.5C446.835 48.7667 445.768 51.9 445.768 55.9V86H433.868V34.1H444.868ZM514.922 75.4V85.7C513.455 86.1 511.389 86.3 508.722 86.3C498.589 86.3 493.522 81.2 493.522 71V43.6H485.622V34.1H493.522V20.6H505.422V34.1H515.122V43.6H505.422V69.8C505.422 73.8667 507.355 75.9 511.222 75.9L514.922 75.4Z" fill="black"/>
</svg>


Binary image file added (102 KiB, not shown).

Binary image file added (653 KiB, not shown).

doc/src/json.md (new file, 128 lines)

@@ -0,0 +1,128 @@
# Json
As of tantivy 0.17, tantivy supports a json object type.
This type can be used to allow for a schema-less search index.
When indexing a json object, we "flatten" the JSON. This operation emits terms that represent a triplet `(json_path, value_type, value)`.
For instance, if `user` is a json field, the following document:
```json
{
"user": {
"name": "Paul Masurel",
"address": {
"city": "Tokyo",
"country": "Japan"
},
"created_at": "2018-11-12T23:20:50.52Z"
}
}
```
emits the following tokens:
- ("name", Text, "Paul")
- ("name", Text, "Masurel")
- ("address.city", Text, "Tokyo")
- ("address.country", Text, "Japan")
- ("created_at", Date, 15420648505)
# Bytes-encoding and lexicographical sort.
Like any other terms, these triplets are encoded into a binary format as follows.
- `json_path`: the json path is a sequence of "segments". In the example above, `address.city`
is just a debug representation of the json path `["address", "city"]`.
Its representation is done by separating segments by a unicode char `\x01`, and ending the path by `\x00`.
- `value type`: One byte represents the `Value` type.
- `value`: The value representation is just the regular Value representation.
This representation is designed to align the natural sort of Terms with the lexicographical sort
of their binary representation (Tantivy's dictionary (whether fst or sstable) is sorted and does prefix encoding).
In the example above, the terms will be sorted as
- ("address.city", Text, "Tokyo")
- ("address.country", Text, "Japan")
- ("name", Text, "Masurel")
- ("name", Text, "Paul")
- ("created_at", Date, 15420648505)
As seen in "Pitfalls", we may end up having to search for a value for the same path in several different fields. Putting the field code after the path maximizes compression opportunities and also increases the chances that the two terms end up in the same term dictionary block.
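A sketch of the term layout described above (not tantivy's actual serializer; the type byte `b't'` below is a made-up code for Text):

```rust
fn encode_json_term(path: &[&str], type_code: u8, value: &[u8]) -> Vec<u8> {
    let mut term = Vec::new();
    for (i, segment) in path.iter().enumerate() {
        if i > 0 {
            term.push(0x01); // segment separator
        }
        term.extend_from_slice(segment.as_bytes());
    }
    term.push(0x00); // end of the json path
    term.push(type_code); // one byte for the value type
    term.extend_from_slice(value); // regular value representation
    term
}

fn main() {
    // ["address", "city"] + Text + "Tokyo"
    let term = encode_json_term(&["address", "city"], b't', b"Tokyo");
    println!("{:?}", term);
}
```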
# Pitfalls, limitation and corner cases.
Json gives very little information about the type of the literals it stores.
All numeric types end up mapped as a "Number" and there are no types for dates.
At indexing time, tantivy will try to interpret numbers and strings as different types, following a
priority order.
Numbers will be interpreted as u64, i64 and f64, in that order.
Strings will be interpreted as RFC 3339 dates or simple strings.
The first working type is picked and is the only term that is emitted for indexing.
Note this interpretation happens on a per-document basis, and there is no effort to try to sniff
a consistent field type at the scale of a segment.
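A sketch of that priority order with a hypothetical helper (tantivy's own code works on parsed JSON values, not raw strings):

```rust
#[derive(Debug, PartialEq)]
enum NumericValue {
    U64(u64),
    I64(i64),
    F64(f64),
}

// Try u64 first, then i64, then f64; keep the first interpretation that works.
fn interpret_number(raw: &str) -> Option<NumericValue> {
    if let Ok(v) = raw.parse::<u64>() {
        return Some(NumericValue::U64(v));
    }
    if let Ok(v) = raw.parse::<i64>() {
        return Some(NumericValue::I64(v));
    }
    raw.parse::<f64>().ok().map(NumericValue::F64)
}

fn main() {
    assert_eq!(interpret_number("233"), Some(NumericValue::U64(233)));
    assert_eq!(interpret_number("-3"), Some(NumericValue::I64(-3)));
    assert_eq!(interpret_number("2.5"), Some(NumericValue::F64(2.5)));
}
```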
On the query parser side, on the other hand, we may end up emitting more than one type.
For instance, we do not even know whether the value is a number or a string.
So the query
```
my_path.my_segment:233
```
will be interpreted as
`(my_path.my_segment, String, 233) or (my_path.my_segment, u64, 233)`.
Likewise, we need to emit two tokens if the query contains an RFC 3339 date.
Indeed, the date could actually have been a single token inside the text of a document at ingestion time. Generally speaking, we will always emit at least a string token in query parsing, and sometimes more.
If one more json field is defined, things get even more complicated.
## Default json field
If the schema contains a text field called "text" and a json field that is set as a default field:
`text:hello` could be reasonably interpreted as targeting the text field or as targeting the json field called `json_dynamic` with the json_path "text".
If there is such an ambiguity, we decide to only search in the "text" field: `text:hello`.
In other words, the parser will not search in default json fields if there is a schema hit.
This is a product decision.
The user can still target the JSON field by specifying its name explicitly:
`json_dynamic.text:hello`.
## Range queries are not supported.
Json fields do not support range queries.
## Arrays do not work like nested objects.
If a json object contains an array, a search query might return more documents
than expected.
Let's take an example.
```json
{
"cart_id": 3234234 ,
"cart": [
{"product_type": "sneakers", "attributes": {"color": "white"} },
{"product_type": "t-shirt", "attributes": {"color": "red"}},
]
}
```
Despite the array structure, a document in tantivy is a bag of terms.
The query:
```
cart.product_type:sneakers AND cart.attributes.color:red
```
actually matches the document above.

examples/aggregation.rs (new file, 129 lines)

@@ -0,0 +1,129 @@
// # Aggregation example
//
// This example shows how you can use built-in aggregations.
// We will use range buckets and compute the average in each bucket.
//
use serde_json::Value;
use tantivy::aggregation::agg_req::{
Aggregation, Aggregations, BucketAggregation, BucketAggregationType, MetricAggregation,
RangeAggregation,
};
use tantivy::aggregation::agg_result::AggregationResults;
use tantivy::aggregation::metric::AverageAggregation;
use tantivy::aggregation::AggregationCollector;
use tantivy::query::TermQuery;
use tantivy::schema::{self, Cardinality, IndexRecordOption, Schema, TextFieldIndexing};
use tantivy::{doc, Index, Term};
fn main() -> tantivy::Result<()> {
let mut schema_builder = Schema::builder();
let text_fieldtype = schema::TextOptions::default()
.set_indexing_options(
TextFieldIndexing::default().set_index_option(IndexRecordOption::WithFreqs),
)
.set_stored();
let text_field = schema_builder.add_text_field("text", text_fieldtype);
let score_fieldtype =
crate::schema::NumericOptions::default().set_fast(Cardinality::SingleValue);
let highscore_field = schema_builder.add_f64_field("highscore", score_fieldtype.clone());
let price_field = schema_builder.add_f64_field("price", score_fieldtype.clone());
let schema = schema_builder.build();
// # Indexing documents
//
// Lets index a bunch of documents for this example.
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer(50_000_000)?;
// writing the segment
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 1f64,
price_field => 0f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 3f64,
price_field => 1f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 5f64,
price_field => 1f64,
))?;
index_writer.add_document(doc!(
text_field => "nohit",
highscore_field => 6f64,
price_field => 2f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 7f64,
price_field => 2f64,
))?;
index_writer.commit()?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 11f64,
price_field => 10f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 14f64,
price_field => 15f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 15f64,
price_field => 20f64,
))?;
index_writer.commit()?;
let reader = index.reader()?;
let text_field = reader.searcher().schema().get_field("text").unwrap();
let term_query = TermQuery::new(
Term::from_field_text(text_field, "cool"),
IndexRecordOption::Basic,
);
let sub_agg_req_1: Aggregations = vec![(
"average_price".to_string(),
Aggregation::Metric(MetricAggregation::Average(
AverageAggregation::from_field_name("price".to_string()),
)),
)]
.into_iter()
.collect();
let agg_req_1: Aggregations = vec![(
"score_ranges".to_string(),
Aggregation::Bucket(BucketAggregation {
bucket_agg: BucketAggregationType::Range(RangeAggregation {
field: "highscore".to_string(),
ranges: vec![
(-1f64..9f64).into(),
(9f64..14f64).into(),
(14f64..20f64).into(),
],
}),
sub_aggregation: sub_agg_req_1.clone(),
}),
)]
.into_iter()
.collect();
let collector = AggregationCollector::from_aggs(agg_req_1);
let searcher = reader.searcher();
let agg_res: AggregationResults = searcher.search(&term_query, &collector).unwrap();
let res: Value = serde_json::from_str(&serde_json::to_string(&agg_res)?)?;
println!("{}", serde_json::to_string_pretty(&res)?);
Ok(())
}

examples/json_field.rs (new file, 80 lines)

@@ -0,0 +1,80 @@
// # Json field example
//
// This example shows how the json field can be used
// to make tantivy partially schemaless.
use tantivy::collector::{Count, TopDocs};
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, FAST, STORED, STRING, TEXT};
use tantivy::Index;
fn main() -> tantivy::Result<()> {
// # Defining the schema
//
// We need two fields:
// - a timestamp
// - a json object field
let mut schema_builder = Schema::builder();
schema_builder.add_date_field("timestamp", FAST | STORED);
let event_type = schema_builder.add_text_field("event_type", STRING | STORED);
let attributes = schema_builder.add_json_field("attributes", STORED | TEXT);
let schema = schema_builder.build();
// # Indexing documents
let index = Index::create_in_ram(schema.clone());
let mut index_writer = index.writer(50_000_000)?;
let doc = schema.parse_document(
r#"{
"timestamp": "2022-02-22T23:20:50.53Z",
"event_type": "click",
"attributes": {
"target": "submit-button",
"cart": {"product_id": 103},
"description": "the best vacuum cleaner ever"
}
}"#,
)?;
index_writer.add_document(doc)?;
let doc = schema.parse_document(
r#"{
"timestamp": "2022-02-22T23:20:51.53Z",
"event_type": "click",
"attributes": {
"target": "submit-button",
"cart": {"product_id": 133},
"description": "das keyboard"
}
}"#,
)?;
index_writer.add_document(doc)?;
index_writer.commit()?;
let reader = index.reader()?;
let searcher = reader.searcher();
let query_parser = QueryParser::for_index(&index, vec![event_type, attributes]);
{
let query = query_parser.parse_query("target:submit-button")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(count_docs.len(), 2);
}
{
let query = query_parser.parse_query("target:submit")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(count_docs.len(), 2);
}
{
let query = query_parser.parse_query("cart.product_id:103")?;
let count_docs = searcher.search(&*query, &Count)?;
assert_eq!(count_docs, 1);
}
{
let query = query_parser
.parse_query("event_type:click AND cart.product_id:133")
.unwrap();
let hits = searcher.search(&*query, &TopDocs::with_limit(2)).unwrap();
assert_eq!(hits.len(), 1);
}
Ok(())
}

fastfield_codecs/.gitignore (new file, 1 line)

@@ -0,0 +1 @@
datasets/


@@ -6,10 +6,8 @@ license = "MIT"
edition = "2018"
description = "Fast field codecs used by tantivy"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
common = { version = "0.1", path = "../common/", package = "tantivy-common" }
common = { version = "0.2", path = "../common/", package = "tantivy-common" }
tantivy-bitpacker = { version="0.1.1", path = "../bitpacker/" }
prettytable-rs = {version="0.8.0", optional= true}
rand = {version="0.8.3", optional= true}
@@ -19,6 +17,6 @@ more-asserts = "0.2.1"
rand = "0.8.3"
[features]
unstable = [] # useful for benches and experimental codecs.
bin = ["prettytable-rs", "rand"]
default = ["bin"]


@@ -0,0 +1,6 @@
DATASETS ?= hdfs_logs_timestamps http_logs_timestamps amazon_reviews_product_ids nooc_temperatures
download:
@echo "--- Downloading datasets ---"
mkdir -p datasets
@for dataset in $(DATASETS); do curl -o - https://quickwit-datasets-public.s3.amazonaws.com/benchmarks/fastfields/$$dataset.txt.gz | gunzip > datasets/$$dataset.txt; done


@@ -13,6 +13,10 @@ A codec needs to implement 2 traits:
- A reader implementing `FastFieldCodecReader` to read the codec.
- A serializer implementing `FastFieldCodecSerializer` for compression estimation and codec name + id.
### Download real world datasets for codecs comparison
Before comparing codecs, you need to execute `make download` to download real world datasets hosted on AWS S3.
To run with the unstable codecs, execute `cargo run --features unstable`.
### Tests
Once the traits are implemented test and benchmark integration is pretty easy (see `test_with_codec_data_sets` and `bench.rs`).
@@ -23,46 +27,101 @@ cargo run --features bin
```
### TODO
- Add real world data sets in comparison
- Add codec to cover sparse data sets
### Codec Comparison
```
+----------------------------------+-------------------+------------------------+
| | Compression Ratio | Compression Estimation |
+----------------------------------+-------------------+------------------------+
| Autoincrement | | |
+----------------------------------+-------------------+------------------------+
| LinearInterpol | 0.000039572664 | 0.000004396963 |
+----------------------------------+-------------------+------------------------+
| MultiLinearInterpol | 0.1477348 | 0.17275847 |
+----------------------------------+-------------------+------------------------+
| Bitpacked | 0.28126493 | 0.28125 |
+----------------------------------+-------------------+------------------------+
| Monotonically increasing concave | | |
+----------------------------------+-------------------+------------------------+
| LinearInterpol | 0.25003937 | 0.26562938 |
+----------------------------------+-------------------+------------------------+
| MultiLinearInterpol | 0.190665 | 0.1883836 |
+----------------------------------+-------------------+------------------------+
| Bitpacked | 0.31251436 | 0.3125 |
+----------------------------------+-------------------+------------------------+
| Monotonically increasing convex | | |
+----------------------------------+-------------------+------------------------+
| LinearInterpol | 0.25003937 | 0.28125438 |
+----------------------------------+-------------------+------------------------+
| MultiLinearInterpol | 0.18676 | 0.2040086 |
+----------------------------------+-------------------+------------------------+
| Bitpacked | 0.31251436 | 0.3125 |
+----------------------------------+-------------------+------------------------+
| Almost monotonically increasing | | |
+----------------------------------+-------------------+------------------------+
| LinearInterpol | 0.14066513 | 0.1562544 |
+----------------------------------+-------------------+------------------------+
| MultiLinearInterpol | 0.16335973 | 0.17275847 |
+----------------------------------+-------------------+------------------------+
| Bitpacked | 0.28126493 | 0.28125 |
+----------------------------------+-------------------+------------------------+
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| | Compression ratio | Compression ratio estimation | Compression time (micro) | Reading time (micro) |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Autoincrement | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.0051544965 | 0.17251475 | 960 | 211 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.118189104 | 0.14172314 | 708 | 212 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.28126493 | 0.28125 | 474 | 112 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Monotonically increasing concave | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.005955 | 0.18813984 | 885 | 211 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.16113 | 0.15734828 | 704 | 212 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.31251436 | 0.3125 | 478 | 113 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Monotonically increasing convex | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.00613 | 0.20376484 | 889 | 211 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.157175 | 0.17297328 | 706 | 212 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.31251436 | 0.3125 | 471 | 113 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Almost monotonically increasing | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.14549863 | 0.17251475 | 923 | 210 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.14943957 | 0.15734814 | 703 | 211 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.28126493 | 0.28125 | 462 | 112 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Random | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.14533783 | 0.14126475 | 924 | 211 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.13381402 | 0.15734814 | 695 | 211 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.12501445 | 0.125 | 422 | 112 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| HDFS logs timestamps | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.39826187 | 0.4068908 | 5545 | 1086 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.39214826 | 0.40734857 | 5082 | 1073 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.39062786 | 0.390625 | 2864 | 567 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| HDFS logs timestamps SORTED | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.032736875 | 0.094390824 | 4942 | 1067 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.02667125 | 0.079223566 | 3626 | 994 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.39062786 | 0.390625 | 2493 | 566 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| HTTP logs timestamps SORTED | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.047942877 | 0.20376582 | 5121 | 1065 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.06637425 | 0.18859856 | 3929 | 1093 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.26562786 | 0.265625 | 2221 | 526 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Amazon review product ids | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.41900787 | 0.4225158 | 5239 | 1089 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.41504425 | 0.43859857 | 4158 | 1052 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.40625286 | 0.40625 | 2603 | 513 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Amazon review product ids SORTED | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | 0.18364687 | 0.25064084 | 5036 | 990 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 0.21239226 | 0.21984856 | 4087 | 1072 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 0.40625286 | 0.40625 | 2702 | 525 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Temperatures | | | | |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| PiecewiseLinear | | Codec Disabled | 0 | 0 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| FOR | 1.0088086 | 1.001098 | 1306 | 237 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
| Bitpacked | 1.000012 | 1 | 950 | 108 |
+----------------------------------+-------------------+------------------------------+--------------------------+----------------------+
```
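The "FOR" rows above refer to a frame-of-reference codec. A conceptual sketch of the idea (a per-block minimum plus deltas; the real codec bitpacks the deltas with a per-block bit width, while this sketch keeps them as plain `u64`s and is not the actual implementation):

```rust
const BLOCK_SIZE: usize = 128;

struct Block {
    min: u64,
    deltas: Vec<u64>, // bitpacked with `num_bits` bits per value in the real codec
}

fn for_encode(values: &[u64]) -> Vec<Block> {
    values
        .chunks(BLOCK_SIZE)
        .map(|chunk| {
            let min = chunk.iter().copied().min().unwrap_or(0);
            Block {
                min,
                deltas: chunk.iter().map(|v| v - min).collect(),
            }
        })
        .collect()
}

fn for_get(blocks: &[Block], idx: usize) -> u64 {
    let block = &blocks[idx / BLOCK_SIZE];
    block.min + block.deltas[idx % BLOCK_SIZE]
}
```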


@@ -5,11 +5,8 @@ extern crate test;
#[cfg(test)]
mod tests {
use fastfield_codecs::bitpacked::{BitpackedFastFieldReader, BitpackedFastFieldSerializer};
use fastfield_codecs::linearinterpol::{
LinearInterpolFastFieldReader, LinearInterpolFastFieldSerializer,
};
use fastfield_codecs::multilinearinterpol::{
MultiLinearInterpolFastFieldReader, MultiLinearInterpolFastFieldSerializer,
use fastfield_codecs::piecewise_linear::{
PiecewiseLinearFastFieldReader, PiecewiseLinearFastFieldSerializer,
};
use fastfield_codecs::*;
@@ -70,14 +67,9 @@ mod tests {
bench_create::<BitpackedFastFieldSerializer>(b, &data);
}
#[bench]
fn bench_fastfield_linearinterpol_create(b: &mut Bencher) {
fn bench_fastfield_piecewise_linear_create(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_create::<LinearInterpolFastFieldSerializer>(b, &data);
}
#[bench]
fn bench_fastfield_multilinearinterpol_create(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_create::<MultiLinearInterpolFastFieldSerializer>(b, &data);
bench_create::<PiecewiseLinearFastFieldSerializer>(b, &data);
}
#[bench]
fn bench_fastfield_bitpack_get(b: &mut Bencher) {
@@ -85,16 +77,9 @@ mod tests {
bench_get::<BitpackedFastFieldSerializer, BitpackedFastFieldReader>(b, &data);
}
#[bench]
fn bench_fastfield_linearinterpol_get(b: &mut Bencher) {
fn bench_fastfield_piecewise_linear_get(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_get::<LinearInterpolFastFieldSerializer, LinearInterpolFastFieldReader>(b, &data);
}
#[bench]
fn bench_fastfield_multilinearinterpol_get(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_get::<MultiLinearInterpolFastFieldSerializer, MultiLinearInterpolFastFieldReader>(
b, &data,
);
bench_get::<PiecewiseLinearFastFieldSerializer, PiecewiseLinearFastFieldReader>(b, &data);
}
pub fn stats_from_vec(data: &[u64]) -> FastFieldStats {
let min_value = data.iter().cloned().min().unwrap_or(0);

View File

@@ -128,7 +128,10 @@ impl FastFieldCodecSerializer for BitpackedFastFieldSerializer {
) -> bool {
true
}
fn estimate(_fastfield_accessor: &impl FastFieldDataAccess, stats: FastFieldStats) -> f32 {
fn estimate_compression_ratio(
_fastfield_accessor: &impl FastFieldDataAccess,
stats: FastFieldStats,
) -> f32 {
let amplitude = stats.max_value - stats.min_value;
let num_bits = compute_num_bits(amplitude);
let num_bits_uncompressed = 64;

View File

@@ -0,0 +1,272 @@
use std::io::{self, Read, Write};
use common::{BinarySerializable, DeserializeFrom};
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use crate::{FastFieldCodecReader, FastFieldCodecSerializer, FastFieldDataAccess, FastFieldStats};
const BLOCK_SIZE: u64 = 128;
#[derive(Clone)]
pub struct FORFastFieldReader {
num_vals: u64,
min_value: u64,
max_value: u64,
block_readers: Vec<BlockReader>,
}
#[derive(Clone, Debug, Default)]
struct BlockMetadata {
min: u64,
num_bits: u8,
}
#[derive(Clone, Debug, Default)]
struct BlockReader {
metadata: BlockMetadata,
start_offset: u64,
bit_unpacker: BitUnpacker,
}
impl BlockReader {
fn new(metadata: BlockMetadata, start_offset: u64) -> Self {
Self {
bit_unpacker: BitUnpacker::new(metadata.num_bits),
metadata,
start_offset,
}
}
#[inline]
fn get_u64(&self, block_pos: u64, data: &[u8]) -> u64 {
let diff = self
.bit_unpacker
.get(block_pos, &data[self.start_offset as usize..]);
self.metadata.min + diff
}
}
impl BinarySerializable for BlockMetadata {
fn serialize<W: Write>(&self, write: &mut W) -> io::Result<()> {
self.min.serialize(write)?;
self.num_bits.serialize(write)?;
Ok(())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
let min = u64::deserialize(reader)?;
let num_bits = u8::deserialize(reader)?;
Ok(Self { min, num_bits })
}
}
#[derive(Clone, Debug)]
pub struct FORFooter {
pub num_vals: u64,
pub min_value: u64,
pub max_value: u64,
block_metadatas: Vec<BlockMetadata>,
}
impl BinarySerializable for FORFooter {
fn serialize<W: Write>(&self, write: &mut W) -> io::Result<()> {
let mut out = vec![];
self.num_vals.serialize(&mut out)?;
self.min_value.serialize(&mut out)?;
self.max_value.serialize(&mut out)?;
self.block_metadatas.serialize(&mut out)?;
write.write_all(&out)?;
(out.len() as u32).serialize(write)?;
Ok(())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
let footer = Self {
num_vals: u64::deserialize(reader)?,
min_value: u64::deserialize(reader)?,
max_value: u64::deserialize(reader)?,
block_metadatas: Vec::<BlockMetadata>::deserialize(reader)?,
};
Ok(footer)
}
}
impl FastFieldCodecReader for FORFastFieldReader {
/// Opens a fast field given a file.
fn open_from_bytes(bytes: &[u8]) -> io::Result<Self> {
let footer_len: u32 = (&bytes[bytes.len() - 4..]).deserialize()?;
let (_, mut footer) = bytes.split_at(bytes.len() - (4 + footer_len) as usize);
let footer = FORFooter::deserialize(&mut footer)?;
let mut block_readers = Vec::with_capacity(footer.block_metadatas.len());
let mut current_data_offset = 0;
for block_metadata in footer.block_metadatas {
let num_bits = block_metadata.num_bits;
block_readers.push(BlockReader::new(block_metadata, current_data_offset));
current_data_offset += num_bits as u64 * BLOCK_SIZE / 8;
}
Ok(Self {
num_vals: footer.num_vals,
min_value: footer.min_value,
max_value: footer.max_value,
block_readers,
})
}
#[inline]
fn get_u64(&self, idx: u64, data: &[u8]) -> u64 {
let block_idx = (idx / BLOCK_SIZE) as usize;
let block_pos = idx - (block_idx as u64) * BLOCK_SIZE;
let block_reader = &self.block_readers[block_idx];
block_reader.get_u64(block_pos, data)
}
#[inline]
fn min_value(&self) -> u64 {
self.min_value
}
#[inline]
fn max_value(&self) -> u64 {
self.max_value
}
}
/// Frame-of-reference serializer: values are encoded in blocks of BLOCK_SIZE elements,
/// bitpacking for each block the difference to the block minimum.
pub struct FORFastFieldSerializer {}
impl FastFieldCodecSerializer for FORFastFieldSerializer {
const NAME: &'static str = "FOR";
const ID: u8 = 5;
/// Creates a new fast field serializer.
fn serialize(
write: &mut impl Write,
_: &impl FastFieldDataAccess,
stats: FastFieldStats,
data_iter: impl Iterator<Item = u64>,
_data_iter1: impl Iterator<Item = u64>,
) -> io::Result<()> {
let data = data_iter.collect::<Vec<_>>();
let mut bit_packer = BitPacker::new();
let mut block_metadatas = Vec::new();
for data_pos in (0..data.len() as u64).step_by(BLOCK_SIZE as usize) {
let block_num_vals = BLOCK_SIZE.min(data.len() as u64 - data_pos) as usize;
let block_values = &data[data_pos as usize..data_pos as usize + block_num_vals];
let mut min = block_values[0];
let mut max = block_values[0];
for &current_value in block_values[1..].iter() {
min = min.min(current_value);
max = max.max(current_value);
}
let num_bits = compute_num_bits(max - min);
for current_value in block_values.iter() {
bit_packer.write(current_value - min, num_bits, write)?;
}
bit_packer.flush(write)?;
block_metadatas.push(BlockMetadata { min, num_bits });
}
bit_packer.close(write)?;
let footer = FORFooter {
num_vals: stats.num_vals,
min_value: stats.min_value,
max_value: stats.max_value,
block_metadatas,
};
footer.serialize(write)?;
Ok(())
}
fn is_applicable(
_fastfield_accessor: &impl FastFieldDataAccess,
stats: FastFieldStats,
) -> bool {
stats.num_vals > BLOCK_SIZE
}
/// Estimates the compression ratio by computing the ratio of the first block.
fn estimate_compression_ratio(
fastfield_accessor: &impl FastFieldDataAccess,
stats: FastFieldStats,
) -> f32 {
let last_elem_in_first_chunk = BLOCK_SIZE.min(stats.num_vals);
let max_distance = (0..last_elem_in_first_chunk)
.into_iter()
.map(|pos| {
let actual_value = fastfield_accessor.get_val(pos as u64);
actual_value - stats.min_value
})
.max()
.unwrap();
// Estimate one block and multiply by a magic factor of 3 so that this codec is selected
// only when we are almost sure that it is relevant.
let relative_max_value = max_distance as f32 * 3.0;
let num_bits = compute_num_bits(relative_max_value as u64) as u64 * stats.num_vals as u64
// function metadata per block
+ 9 * (stats.num_vals / BLOCK_SIZE);
let num_bits_uncompressed = 64 * stats.num_vals;
num_bits as f32 / num_bits_uncompressed as f32
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::tests::get_codec_test_data_sets;
fn create_and_validate(data: &[u64], name: &str) -> (f32, f32) {
crate::tests::create_and_validate::<FORFastFieldSerializer, FORFastFieldReader>(data, name)
}
#[test]
fn test_compression() {
let data = (10..=6_000_u64).collect::<Vec<_>>();
let (estimate, actual_compression) =
create_and_validate(&data, "simple monotonically large");
println!("{}", actual_compression);
assert!(actual_compression < 0.2);
assert!(actual_compression > 0.006);
assert!(estimate < 0.20);
assert!(estimate > 0.10);
}
#[test]
fn test_with_codec_data_sets() {
let data_sets = get_codec_test_data_sets();
for (mut data, name) in data_sets {
create_and_validate(&data, name);
data.reverse();
create_and_validate(&data, name);
}
}
#[test]
fn test_simple() {
let data = (10..=20_u64).collect::<Vec<_>>();
create_and_validate(&data, "simple monotonically");
}
#[test]
fn border_cases_1() {
let data = (0..1024).collect::<Vec<_>>();
create_and_validate(&data, "border case");
}
#[test]
fn border_case_2() {
let data = (0..1025).collect::<Vec<_>>();
create_and_validate(&data, "border case");
}
#[test]
fn rand() {
for _ in 0..10 {
let mut data = (5_000..20_000)
.map(|_| rand::random::<u32>() as u64)
.collect::<Vec<_>>();
let (estimate, actual_compression) = create_and_validate(&data, "random");
dbg!(estimate);
dbg!(actual_compression);
data.reverse();
create_and_validate(&data, "random");
}
}
}
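
The frame-of-reference layout above can be illustrated outside of the codec: every block stores its own minimum and the bit width of the deltas, so each value costs `compute_num_bits(max - min)` bits plus 9 bytes of metadata per block. The following standalone sketch (illustrative only, it does not use `fastfield_codecs` or `tantivy_bitpacker`) computes that size estimate for a data set:

```rust
// Illustrative frame-of-reference size estimate. Block size and the 9-byte
// per-block metadata (u64 min + u8 bit width) mirror the description above,
// but this is not the actual FORFastFieldSerializer.
const BLOCK_SIZE: usize = 128;

/// Number of bits needed to represent `value` (0 needs 0 bits).
fn num_bits(value: u64) -> u8 {
    (64 - value.leading_zeros()) as u8
}

/// Returns (compressed_bits, uncompressed_bits) for a frame-of-reference layout.
fn for_size_estimate(data: &[u64]) -> (u64, u64) {
    let mut compressed_bits = 0u64;
    for block in data.chunks(BLOCK_SIZE) {
        let min = *block.iter().min().unwrap();
        let max = *block.iter().max().unwrap();
        // Every value in the block is stored as `value - min` with a fixed bit width.
        compressed_bits += num_bits(max - min) as u64 * block.len() as u64;
        // Per-block metadata: min (8 bytes) + num_bits (1 byte).
        compressed_bits += 9 * 8;
    }
    (compressed_bits, 64 * data.len() as u64)
}

fn main() {
    // Monotonically increasing data compresses well: deltas within a block stay small.
    let data: Vec<u64> = (10..=6_000u64).collect();
    let (compressed, uncompressed) = for_size_estimate(&data);
    println!("estimated ratio: {:.3}", compressed as f32 / uncompressed as f32);
}
```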

View File

@@ -6,15 +6,20 @@ use std::io;
use std::io::Write;
pub mod bitpacked;
#[cfg(feature = "unstable")]
pub mod frame_of_reference;
pub mod linearinterpol;
pub mod multilinearinterpol;
pub mod piecewise_linear;
pub trait FastFieldCodecReader: Sized {
/// reads the metadata and returns the CodecReader
/// Reads the metadata and returns the CodecReader.
fn open_from_bytes(bytes: &[u8]) -> std::io::Result<Self>;
fn get_u64(&self, doc: u64, data: &[u8]) -> u64;
/// Reads the u64 value at index `idx`.
/// `idx` can be either a `DocId` or an index used for
/// `multivalued` fast fields.
fn get_u64(&self, idx: u64, data: &[u8]) -> u64;
fn min_value(&self) -> u64;
fn max_value(&self) -> u64;
}
@@ -35,7 +40,10 @@ pub trait FastFieldCodecSerializer {
///
/// It could make sense to also return a value representing
/// computational complexity.
fn estimate(fastfield_accessor: &impl FastFieldDataAccess, stats: FastFieldStats) -> f32;
fn estimate_compression_ratio(
fastfield_accessor: &impl FastFieldDataAccess,
stats: FastFieldStats,
) -> f32;
/// Serializes the data using the serializer into write.
/// There are multiple iterators, in case the codec needs to read the data multiple times.
@@ -63,6 +71,7 @@ pub trait FastFieldDataAccess {
}
#[derive(Debug, Clone)]
/// Statistics are used in codec detection and stored in the fast field footer.
pub struct FastFieldStats {
pub min_value: u64,
pub max_value: u64,
@@ -84,9 +93,8 @@ impl FastFieldDataAccess for Vec<u64> {
#[cfg(test)]
mod tests {
use crate::bitpacked::{BitpackedFastFieldReader, BitpackedFastFieldSerializer};
use crate::linearinterpol::{LinearInterpolFastFieldReader, LinearInterpolFastFieldSerializer};
use crate::multilinearinterpol::{
MultiLinearInterpolFastFieldReader, MultiLinearInterpolFastFieldSerializer,
use crate::piecewise_linear::{
PiecewiseLinearFastFieldReader, PiecewiseLinearFastFieldSerializer,
};
pub fn create_and_validate<S: FastFieldCodecSerializer, R: FastFieldCodecReader>(
@@ -96,7 +104,7 @@ mod tests {
if !S::is_applicable(&data, crate::tests::stats_from_vec(data)) {
return (f32::MAX, 0.0);
}
let estimation = S::estimate(&data, crate::tests::stats_from_vec(data));
let estimation = S::estimate_compression_ratio(&data, crate::tests::stats_from_vec(data));
let mut out = vec![];
S::serialize(
&mut out,
@@ -156,13 +164,10 @@ mod tests {
fn test_codec_bitpacking() {
test_codec::<BitpackedFastFieldSerializer, BitpackedFastFieldReader>();
}
#[test]
fn test_codec_interpolation() {
test_codec::<LinearInterpolFastFieldSerializer, LinearInterpolFastFieldReader>();
}
#[test]
fn test_codec_multi_interpolation() {
test_codec::<MultiLinearInterpolFastFieldSerializer, MultiLinearInterpolFastFieldReader>();
fn test_codec_piecewise_linear() {
test_codec::<PiecewiseLinearFastFieldSerializer, PiecewiseLinearFastFieldReader>();
}
use super::*;
@@ -180,45 +185,50 @@ mod tests {
fn estimation_good_interpolation_case() {
let data = (10..=20000_u64).collect::<Vec<_>>();
let linear_interpol_estimation =
LinearInterpolFastFieldSerializer::estimate(&data, stats_from_vec(&data));
assert_le!(linear_interpol_estimation, 0.01);
let multi_linear_interpol_estimation =
MultiLinearInterpolFastFieldSerializer::estimate(&data, stats_from_vec(&data));
assert_le!(multi_linear_interpol_estimation, 0.2);
assert_le!(linear_interpol_estimation, multi_linear_interpol_estimation);
let piecewise_interpol_estimation =
PiecewiseLinearFastFieldSerializer::estimate_compression_ratio(
&data,
stats_from_vec(&data),
);
assert_le!(piecewise_interpol_estimation, 0.2);
let bitpacked_estimation =
BitpackedFastFieldSerializer::estimate(&data, stats_from_vec(&data));
assert_le!(linear_interpol_estimation, bitpacked_estimation);
BitpackedFastFieldSerializer::estimate_compression_ratio(&data, stats_from_vec(&data));
assert_le!(piecewise_interpol_estimation, bitpacked_estimation);
}
#[test]
fn estimation_test_bad_interpolation_case() {
let data = vec![200, 10, 10, 10, 10, 1000, 20];
let linear_interpol_estimation =
LinearInterpolFastFieldSerializer::estimate(&data, stats_from_vec(&data));
assert_le!(linear_interpol_estimation, 0.32);
let piecewise_interpol_estimation =
PiecewiseLinearFastFieldSerializer::estimate_compression_ratio(
&data,
stats_from_vec(&data),
);
assert_le!(piecewise_interpol_estimation, 0.32);
let bitpacked_estimation =
BitpackedFastFieldSerializer::estimate(&data, stats_from_vec(&data));
assert_le!(bitpacked_estimation, linear_interpol_estimation);
BitpackedFastFieldSerializer::estimate_compression_ratio(&data, stats_from_vec(&data));
assert_le!(bitpacked_estimation, piecewise_interpol_estimation);
}
#[test]
fn estimation_test_bad_interpolation_case_monotonically_increasing() {
fn estimation_test_interpolation_case_monotonically_increasing() {
let mut data = (200..=20000_u64).collect::<Vec<_>>();
data.push(1_000_000);
// In this case the linear interpolation cannot in fact be worse than bitpacking,
// but the estimator adds some threshold, which leads to a worse estimate.
let linear_interpol_estimation =
LinearInterpolFastFieldSerializer::estimate(&data, stats_from_vec(&data));
assert_le!(linear_interpol_estimation, 0.35);
let piecewise_interpol_estimation =
PiecewiseLinearFastFieldSerializer::estimate_compression_ratio(
&data,
stats_from_vec(&data),
);
assert_le!(piecewise_interpol_estimation, 0.2);
let bitpacked_estimation =
BitpackedFastFieldSerializer::estimate(&data, stats_from_vec(&data));
BitpackedFastFieldSerializer::estimate_compression_ratio(&data, stats_from_vec(&data));
println!("{}", bitpacked_estimation);
assert_le!(bitpacked_estimation, 0.32);
assert_le!(bitpacked_estimation, linear_interpol_estimation);
assert_le!(piecewise_interpol_estimation, bitpacked_estimation);
}
}
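
With `is_applicable` and `estimate_compression_ratio` on every serializer, codec selection can happen without serializing the data with each candidate: estimate all applicable codecs and keep the one with the lowest ratio. A minimal sketch of that selection loop follows; the trait and the two toy codecs are stand-ins for illustration, not the crate's `FastFieldCodecSerializer` implementations:

```rust
// Simplified stand-in for the detection part of FastFieldCodecSerializer.
trait CodecEstimate {
    const NAME: &'static str;
    fn is_applicable(data: &[u64]) -> bool;
    fn estimate_compression_ratio(data: &[u64]) -> f32;
}

struct ToyBitpacked;
impl CodecEstimate for ToyBitpacked {
    const NAME: &'static str = "Bitpacked";
    fn is_applicable(_data: &[u64]) -> bool {
        true // bitpacking is always applicable
    }
    fn estimate_compression_ratio(data: &[u64]) -> f32 {
        let max = data.iter().copied().max().unwrap_or(0);
        let min = data.iter().copied().min().unwrap_or(0);
        (64 - (max - min).leading_zeros()) as f32 / 64.0
    }
}

struct ToyPiecewiseLinear;
impl CodecEstimate for ToyPiecewiseLinear {
    const NAME: &'static str = "PiecewiseLinear";
    fn is_applicable(data: &[u64]) -> bool {
        data.len() >= 10 * 512 // needs enough values to amortize block metadata
    }
    fn estimate_compression_ratio(_data: &[u64]) -> f32 {
        0.2 // a real estimator samples the first block; a constant keeps the sketch short
    }
}

/// Picks the applicable codec with the lowest estimated compression ratio.
fn pick_codec(data: &[u64]) -> &'static str {
    let mut candidates = Vec::new();
    if ToyBitpacked::is_applicable(data) {
        candidates.push((ToyBitpacked::NAME, ToyBitpacked::estimate_compression_ratio(data)));
    }
    if ToyPiecewiseLinear::is_applicable(data) {
        candidates.push((
            ToyPiecewiseLinear::NAME,
            ToyPiecewiseLinear::estimate_compression_ratio(data),
        ));
    }
    candidates
        .into_iter()
        .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(name, _)| name)
        .unwrap_or("Bitpacked")
}

fn main() {
    let data: Vec<u64> = (0..10_000u64).map(|i| 1_000 + i).collect();
    println!("selected codec: {}", pick_codec(&data));
}
```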

View File

@@ -71,9 +71,9 @@ impl FastFieldCodecReader for LinearInterpolFastFieldReader {
})
}
#[inline]
fn get_u64(&self, doc: u64, data: &[u8]) -> u64 {
let calculated_value = get_calculated_value(self.footer.first_val, doc, self.slope);
(calculated_value + self.bit_unpacker.get(doc, data)) - self.footer.offset
fn get_u64(&self, idx: u64, data: &[u8]) -> u64 {
let calculated_value = get_calculated_value(self.footer.first_val, idx, self.slope);
(calculated_value + self.bit_unpacker.get(idx, data)) - self.footer.offset
}
#[inline]
@@ -88,6 +88,10 @@ impl FastFieldCodecReader for LinearInterpolFastFieldReader {
/// Fastfield serializer, which tries to guess values by linear interpolation
/// and stores the difference bitpacked.
#[deprecated(
note = "Linear interpolation works best only on very rare cases and piecewise linear codec \
already works great on them."
)]
pub struct LinearInterpolFastFieldSerializer {}
#[inline]
@@ -105,6 +109,7 @@ fn get_calculated_value(first_val: u64, pos: u64, slope: f32) -> u64 {
first_val + (pos as f32 * slope) as u64
}
#[allow(deprecated)]
impl FastFieldCodecSerializer for LinearInterpolFastFieldSerializer {
const NAME: &'static str = "LinearInterpol";
const ID: u8 = 2;
@@ -182,10 +187,16 @@ impl FastFieldCodecSerializer for LinearInterpolFastFieldSerializer {
}
true
}
/// estimation for linear interpolation is hard because, you don't know
/// Estimation for linear interpolation is hard because you don't know
/// where the local maxima for the deviation of the calculated value are and
/// the offset to shift all values to >=0 is also unknown.
fn estimate(fastfield_accessor: &impl FastFieldDataAccess, stats: FastFieldStats) -> f32 {
fn estimate_compression_ratio(
fastfield_accessor: &impl FastFieldDataAccess,
stats: FastFieldStats,
) -> f32 {
if stats.num_vals < 3 {
return f32::MAX;
}
let first_val = fastfield_accessor.get_val(0);
let last_val = fastfield_accessor.get_val(stats.num_vals as u64 - 1);
let slope = get_slope(first_val, last_val, stats.num_vals);
@@ -229,6 +240,7 @@ fn distance<T: Sub<Output = T> + Ord>(x: T, y: T) -> T {
}
}
#[allow(deprecated)]
#[cfg(test)]
mod tests {
use super::*;
@@ -289,8 +301,10 @@ mod tests {
#[test]
fn linear_interpol_fast_field_rand() {
for _ in 0..5000 {
let mut data = (0..50).map(|_| rand::random::<u64>()).collect::<Vec<_>>();
for _ in 0..10 {
let mut data = (5_000..20_000)
.map(|_| rand::random::<u32>() as u64)
.collect::<Vec<_>>();
create_and_validate(&data, "random");
data.reverse();

View File

@@ -1,31 +1,52 @@
#[macro_use]
extern crate prettytable;
use fastfield_codecs::linearinterpol::LinearInterpolFastFieldSerializer;
use fastfield_codecs::multilinearinterpol::MultiLinearInterpolFastFieldSerializer;
use fastfield_codecs::{FastFieldCodecSerializer, FastFieldStats};
use std::fs::File;
use std::io;
use std::io::BufRead;
use std::time::{Duration, Instant};
use common::f64_to_u64;
use fastfield_codecs::bitpacked::BitpackedFastFieldReader;
#[cfg(feature = "unstable")]
use fastfield_codecs::frame_of_reference::{FORFastFieldReader, FORFastFieldSerializer};
use fastfield_codecs::piecewise_linear::{
PiecewiseLinearFastFieldReader, PiecewiseLinearFastFieldSerializer,
};
use fastfield_codecs::{FastFieldCodecReader, FastFieldCodecSerializer, FastFieldStats};
use prettytable::{Cell, Row, Table};
use rand::prelude::StdRng;
use rand::Rng;
fn main() {
let mut table = Table::new();
// Add the header row.
table.add_row(row!["", "Compression Ratio", "Compression Estimation"]);
table.add_row(row![
"",
"Compression ratio",
"Compression ratio estimation",
"Compression time (micro)",
"Reading time (micro)"
]);
for (data, data_set_name) in get_codec_test_data_sets() {
let mut results = vec![];
let res = serialize_with_codec::<LinearInterpolFastFieldSerializer>(&data);
let res = serialize_with_codec::<
PiecewiseLinearFastFieldSerializer,
PiecewiseLinearFastFieldReader,
>(&data);
results.push(res);
let res = serialize_with_codec::<MultiLinearInterpolFastFieldSerializer>(&data);
results.push(res);
let res = serialize_with_codec::<fastfield_codecs::bitpacked::BitpackedFastFieldSerializer>(
&data,
);
#[cfg(feature = "unstable")]
{
let res = serialize_with_codec::<FORFastFieldSerializer, FORFastFieldReader>(&data);
results.push(res);
}
let res = serialize_with_codec::<
fastfield_codecs::bitpacked::BitpackedFastFieldSerializer,
BitpackedFastFieldReader,
>(&data);
results.push(res);
// let best_estimation_codec = results
//.iter()
//.min_by(|res1, res2| res1.partial_cmp(&res2).unwrap())
//.unwrap();
let best_compression_ratio_codec = results
.iter()
.min_by(|res1, res2| res1.partial_cmp(res2).unwrap())
@@ -33,7 +54,7 @@ fn main() {
.unwrap();
table.add_row(Row::new(vec![Cell::new(data_set_name).style_spec("Bbb")]));
for (is_applicable, est, comp, name) in results {
for (is_applicable, est, comp, name, compression_duration, read_duration) in results {
let (est_cell, ratio_cell) = if !is_applicable {
("Codec Disabled".to_string(), "".to_string())
} else {
@@ -49,6 +70,8 @@ fn main() {
Cell::new(name).style_spec("bFg"),
Cell::new(&ratio_cell).style_spec(style),
Cell::new(&est_cell).style_spec(""),
Cell::new(&compression_duration.as_micros().to_string()),
Cell::new(&read_duration.as_micros().to_string()),
]));
}
}
@@ -70,7 +93,6 @@ pub fn get_codec_test_data_sets() -> Vec<(Vec<u64>, &'static str)> {
current_cumulative
})
.collect::<Vec<_>>();
// let data = (1..=200000_u64).map(|num| num + num).collect::<Vec<_>>();
data_and_names.push((data, "Monotonically increasing concave"));
let mut current_cumulative = 0;
@@ -83,22 +105,79 @@ pub fn get_codec_test_data_sets() -> Vec<(Vec<u64>, &'static str)> {
.collect::<Vec<_>>();
data_and_names.push((data, "Monotonically increasing convex"));
let mut rng: StdRng = rand::SeedableRng::seed_from_u64(1);
let data = (1000..=200_000_u64)
.map(|num| num + rand::random::<u8>() as u64)
.map(|num| num + rng.gen::<u8>() as u64)
.collect::<Vec<_>>();
data_and_names.push((data, "Almost monotonically increasing"));
let data = (1000..=200_000_u64)
.map(|_| rng.gen::<u8>() as u64)
.collect::<Vec<_>>();
data_and_names.push((data, "Random"));
let mut data = load_dataset("datasets/hdfs_logs_timestamps.txt");
data_and_names.push((data.clone(), "HDFS logs timestamps"));
data.sort_unstable();
data_and_names.push((data, "HDFS logs timestamps SORTED"));
let data = load_dataset("datasets/http_logs_timestamps.txt");
data_and_names.push((data, "HTTP logs timestamps SORTED"));
let mut data = load_dataset("datasets/amazon_reviews_product_ids.txt");
data_and_names.push((data.clone(), "Amazon review product ids"));
data.sort_unstable();
data_and_names.push((data, "Amazon review product ids SORTED"));
let data = load_float_dataset("datasets/nooc_temperatures.txt");
data_and_names.push((data, "Temperatures"));
data_and_names
}
pub fn serialize_with_codec<S: FastFieldCodecSerializer>(
pub fn load_dataset(file_path: &str) -> Vec<u64> {
println!("Load dataset from `{}`", file_path);
let file = File::open(file_path).expect("Error when opening file.");
let lines = io::BufReader::new(file).lines();
let mut data = Vec::new();
for line in lines {
let l = line.unwrap();
data.push(l.parse::<u64>().unwrap());
}
data
}
pub fn load_float_dataset(file_path: &str) -> Vec<u64> {
println!("Load float dataset from `{}`", file_path);
let file = File::open(file_path).expect("Error when opening file.");
let lines = io::BufReader::new(file).lines();
let mut data = Vec::new();
for line in lines {
let line_string = line.unwrap();
let value = line_string.parse::<f64>().unwrap();
data.push(f64_to_u64(value));
}
data
}
pub fn serialize_with_codec<S: FastFieldCodecSerializer, R: FastFieldCodecReader>(
data: &[u64],
) -> (bool, f32, f32, &'static str) {
) -> (bool, f32, f32, &'static str, Duration, Duration) {
let is_applicable = S::is_applicable(&data, stats_from_vec(data));
if !is_applicable {
return (false, 0.0, 0.0, S::NAME);
return (
false,
0.0,
0.0,
S::NAME,
Duration::from_secs(0),
Duration::from_secs(0),
);
}
let estimation = S::estimate(&data, stats_from_vec(data));
let start_time_compression = Instant::now();
let estimation = S::estimate_compression_ratio(&data, stats_from_vec(data));
let mut out = vec![];
S::serialize(
&mut out,
@@ -108,9 +187,22 @@ pub fn serialize_with_codec<S: FastFieldCodecSerializer>(
data.iter().cloned(),
)
.unwrap();
let elapsed_time_compression = start_time_compression.elapsed();
let actual_compression = out.len() as f32 / (data.len() * 8) as f32;
(true, estimation, actual_compression, S::NAME)
let reader = R::open_from_bytes(&out).unwrap();
let start_time_read = Instant::now();
for doc in 0..data.len() {
reader.get_u64(doc as u64, &out);
}
let elapsed_time_read = start_time_read.elapsed();
(
true,
estimation,
actual_compression,
S::NAME,
elapsed_time_compression,
elapsed_time_read,
)
}
pub fn stats_from_vec(data: &[u64]) -> FastFieldStats {

View File

@@ -155,14 +155,17 @@ impl FastFieldCodecReader for MultiLinearInterpolFastFieldReader {
}
#[inline]
fn get_u64(&self, doc: u64, data: &[u8]) -> u64 {
let interpolation = get_interpolation_function(doc, &self.footer.interpolations);
let doc = doc - interpolation.start_pos;
let calculated_value =
get_calculated_value(interpolation.value_start_pos, doc, interpolation.slope);
fn get_u64(&self, idx: u64, data: &[u8]) -> u64 {
let interpolation = get_interpolation_function(idx, &self.footer.interpolations);
let block_idx = idx - interpolation.start_pos;
let calculated_value = get_calculated_value(
interpolation.value_start_pos,
block_idx,
interpolation.slope,
);
let diff = interpolation
.bit_unpacker
.get(doc, &data[interpolation.data_start_offset as usize..]);
.get(block_idx, &data[interpolation.data_start_offset as usize..]);
(calculated_value + diff) - interpolation.positive_val_offset
}
@@ -187,8 +190,13 @@ fn get_calculated_value(first_val: u64, pos: u64, slope: f32) -> u64 {
}
/// Same as LinearInterpolFastFieldSerializer, but working on chunks of CHUNK_SIZE elements.
#[deprecated(
note = "MultiLinearInterpol is replaced by PiecewiseLinear codec which fixes the slope and is \
a little bit more optimized."
)]
pub struct MultiLinearInterpolFastFieldSerializer {}
#[allow(deprecated)]
impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {
const NAME: &'static str = "MultiLinearInterpol";
const ID: u8 = 3;
@@ -311,10 +319,13 @@ impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {
}
true
}
/// estimation for linear interpolation is hard because, you don't know
/// Estimation for linear interpolation is hard because you don't know
/// where the local maxima are for the deviation of the calculated value and
/// the offset is also unknown.
fn estimate(fastfield_accessor: &impl FastFieldDataAccess, stats: FastFieldStats) -> f32 {
fn estimate_compression_ratio(
fastfield_accessor: &impl FastFieldDataAccess,
stats: FastFieldStats,
) -> f32 {
let first_val_in_first_block = fastfield_accessor.get_val(0);
let last_elem_in_first_chunk = CHUNK_SIZE.min(stats.num_vals);
let last_val_in_first_block =
@@ -366,6 +377,7 @@ fn distance<T: Sub<Output = T> + Ord>(x: T, y: T) -> T {
}
#[cfg(test)]
#[allow(deprecated)]
mod tests {
use super::*;
use crate::tests::get_codec_test_data_sets;
@@ -419,10 +431,7 @@ mod tests {
let mut data = (5_000..20_000)
.map(|_| rand::random::<u32>() as u64)
.collect::<Vec<_>>();
let (estimate, actual_compression) = create_and_validate(&data, "random");
dbg!(estimate);
dbg!(actual_compression);
let _ = create_and_validate(&data, "random");
data.reverse();
create_and_validate(&data, "random");
}

View File

@@ -0,0 +1,365 @@
//! The PiecewiseLinear codec uses a piecewise linear function for every block of 512 values to
//! predict fast field values. Only the difference with the real fast field value is then stored.
//! For every block, the linear function can be expressed as
//! `computed_value = slope * block_position + first_value + positive_offset`
//! where:
//! - `block_position` is the position inside the block, from 0 to 511
//! - `first_value` is the first value in the block
//! - `positive_offset` is computed such that we ensure the diff `real_value - computed_value` is
//! always positive.
//!
//! 21 bytes are needed to store the block metadata, which adds an overhead of 21 * 8 / 512 = 0.33
//! bits per element.
use std::io::{self, Read, Write};
use std::ops::Sub;
use common::{BinarySerializable, DeserializeFrom};
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use crate::{FastFieldCodecReader, FastFieldCodecSerializer, FastFieldDataAccess, FastFieldStats};
const BLOCK_SIZE: u64 = 512;
#[derive(Clone)]
pub struct PiecewiseLinearFastFieldReader {
min_value: u64,
max_value: u64,
block_readers: Vec<BlockReader>,
}
/// Block that stores the metadata needed to predict a value with a linear
/// function `predicted_value = slope * position + first_value + positive_offset`,
/// where `positive_offset` is computed such that the stored diffs
/// are always positive.
#[derive(Clone, Debug, Default)]
struct BlockMetadata {
first_value: u64,
positive_offset: u64,
slope: f32,
num_bits: u8,
}
#[derive(Clone, Debug, Default)]
struct BlockReader {
metadata: BlockMetadata,
start_offset: u64,
bit_unpacker: BitUnpacker,
}
impl BlockReader {
fn new(metadata: BlockMetadata, start_offset: u64) -> Self {
Self {
bit_unpacker: BitUnpacker::new(metadata.num_bits),
metadata,
start_offset,
}
}
#[inline]
fn get_u64(&self, block_pos: u64, data: &[u8]) -> u64 {
let diff = self
.bit_unpacker
.get(block_pos, &data[self.start_offset as usize..]);
let predicted_value =
predict_value(self.metadata.first_value, block_pos, self.metadata.slope);
(predicted_value + diff) - self.metadata.positive_offset
}
}
impl BinarySerializable for BlockMetadata {
fn serialize<W: Write>(&self, write: &mut W) -> io::Result<()> {
self.first_value.serialize(write)?;
self.positive_offset.serialize(write)?;
self.slope.serialize(write)?;
self.num_bits.serialize(write)?;
Ok(())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
let first_value = u64::deserialize(reader)?;
let positive_offset = u64::deserialize(reader)?;
let slope = f32::deserialize(reader)?;
let num_bits = u8::deserialize(reader)?;
Ok(Self {
first_value,
positive_offset,
slope,
num_bits,
})
}
}
#[derive(Clone, Debug)]
pub struct PiecewiseLinearFooter {
pub num_vals: u64,
pub min_value: u64,
pub max_value: u64,
block_metadatas: Vec<BlockMetadata>,
}
impl BinarySerializable for PiecewiseLinearFooter {
fn serialize<W: Write>(&self, write: &mut W) -> io::Result<()> {
let mut out = vec![];
self.num_vals.serialize(&mut out)?;
self.min_value.serialize(&mut out)?;
self.max_value.serialize(&mut out)?;
self.block_metadatas.serialize(&mut out)?;
write.write_all(&out)?;
(out.len() as u32).serialize(write)?;
Ok(())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
let footer = Self {
num_vals: u64::deserialize(reader)?,
min_value: u64::deserialize(reader)?,
max_value: u64::deserialize(reader)?,
block_metadatas: Vec::<BlockMetadata>::deserialize(reader)?,
};
Ok(footer)
}
}
impl FastFieldCodecReader for PiecewiseLinearFastFieldReader {
/// Opens a fast field given a file.
fn open_from_bytes(bytes: &[u8]) -> io::Result<Self> {
let footer_len: u32 = (&bytes[bytes.len() - 4..]).deserialize()?;
let (_, mut footer) = bytes.split_at(bytes.len() - (4 + footer_len) as usize);
let footer = PiecewiseLinearFooter::deserialize(&mut footer)?;
let mut block_readers = Vec::with_capacity(footer.block_metadatas.len());
let mut current_data_offset = 0;
for block_metadata in footer.block_metadatas.into_iter() {
let num_bits = block_metadata.num_bits;
block_readers.push(BlockReader::new(block_metadata, current_data_offset));
current_data_offset += num_bits as u64 * BLOCK_SIZE / 8;
}
Ok(Self {
min_value: footer.min_value,
max_value: footer.max_value,
block_readers,
})
}
#[inline]
fn get_u64(&self, idx: u64, data: &[u8]) -> u64 {
let block_idx = (idx / BLOCK_SIZE) as usize;
let block_pos = idx - (block_idx as u64) * BLOCK_SIZE;
let block_reader = &self.block_readers[block_idx];
block_reader.get_u64(block_pos, data)
}
#[inline]
fn min_value(&self) -> u64 {
self.min_value
}
#[inline]
fn max_value(&self) -> u64 {
self.max_value
}
}
#[inline]
fn predict_value(first_val: u64, pos: u64, slope: f32) -> u64 {
(first_val as i64 + (pos as f32 * slope) as i64) as u64
}
pub struct PiecewiseLinearFastFieldSerializer;
impl FastFieldCodecSerializer for PiecewiseLinearFastFieldSerializer {
const NAME: &'static str = "PiecewiseLinear";
const ID: u8 = 4;
/// Creates a new fast field serializer.
fn serialize(
write: &mut impl Write,
_: &impl FastFieldDataAccess,
stats: FastFieldStats,
data_iter: impl Iterator<Item = u64>,
_data_iter1: impl Iterator<Item = u64>,
) -> io::Result<()> {
let mut data = data_iter.collect::<Vec<_>>();
let mut bit_packer = BitPacker::new();
let mut block_metadatas = Vec::new();
for data_pos in (0..data.len() as u64).step_by(BLOCK_SIZE as usize) {
let block_num_vals = BLOCK_SIZE.min(data.len() as u64 - data_pos) as usize;
let block_values = &mut data[data_pos as usize..data_pos as usize + block_num_vals];
let slope = if block_num_vals == 1 {
0f32
} else {
((block_values[block_values.len() - 1] as f64 - block_values[0] as f64)
/ (block_num_vals - 1) as f64) as f32
};
let first_value = block_values[0];
let mut positive_offset = 0;
let mut max_delta = 0;
for (pos, &current_value) in block_values[1..].iter().enumerate() {
let computed_value = predict_value(first_value, pos as u64 + 1, slope);
if computed_value > current_value {
positive_offset = positive_offset.max(computed_value - current_value);
} else {
max_delta = max_delta.max(current_value - computed_value);
}
}
let num_bits = compute_num_bits(max_delta + positive_offset);
for (pos, current_value) in block_values.iter().enumerate() {
let computed_value = predict_value(first_value, pos as u64, slope);
let diff = (current_value + positive_offset) - computed_value;
bit_packer.write(diff, num_bits, write)?;
}
bit_packer.flush(write)?;
block_metadatas.push(BlockMetadata {
first_value,
positive_offset,
slope,
num_bits,
});
}
bit_packer.close(write)?;
let footer = PiecewiseLinearFooter {
num_vals: stats.num_vals,
min_value: stats.min_value,
max_value: stats.max_value,
block_metadatas,
};
footer.serialize(write)?;
Ok(())
}
fn is_applicable(
_fastfield_accessor: &impl FastFieldDataAccess,
stats: FastFieldStats,
) -> bool {
if stats.num_vals < 10 * BLOCK_SIZE {
return false;
}
// On serialization the offset is added to the actual value.
// We need to make sure this won't run into overflow calculation issues.
// For this we take the maximum theoretical offset and add it to the max value.
// If this doesn't overflow, the algorithm should be fine.
let theorethical_maximum_offset = stats.max_value - stats.min_value;
if stats
.max_value
.checked_add(theorethical_maximum_offset)
.is_none()
{
return false;
}
true
}
/// Estimation for linear interpolation is hard because you don't know
/// where the local maxima are for the deviation of the calculated value and
/// the offset is also unknown.
fn estimate_compression_ratio(
fastfield_accessor: &impl FastFieldDataAccess,
stats: FastFieldStats,
) -> f32 {
let first_val_in_first_block = fastfield_accessor.get_val(0);
let last_elem_in_first_chunk = BLOCK_SIZE.min(stats.num_vals);
let last_val_in_first_block =
fastfield_accessor.get_val(last_elem_in_first_chunk as u64 - 1);
let slope = ((last_val_in_first_block as f64 - first_val_in_first_block as f64)
/ (stats.num_vals - 1) as f64) as f32;
// let's sample at 0%, 5%, 10% .. 95%, 100%, but for the first block only
let sample_positions = (0..20)
.map(|pos| (last_elem_in_first_chunk as f32 / 100.0 * pos as f32 * 5.0) as usize)
.collect::<Vec<_>>();
let max_distance = sample_positions
.iter()
.map(|&pos| {
let calculated_value = predict_value(first_val_in_first_block, pos as u64, slope);
let actual_value = fastfield_accessor.get_val(pos as u64);
distance(calculated_value, actual_value)
})
.max()
.unwrap();
// Estimate one block and extrapolate the cost to all blocks.
// The assumption is that the sampled max_distance is not the actual maximum, but is within a
// 50% threshold of it.
// It is multiplied by 2 because, when the line lies as much above as below the values,
// the offset would equal max_distance.
let relative_max_value = (max_distance as f32 * 1.5) * 2.0;
let num_bits = compute_num_bits(relative_max_value as u64) as u64 * stats.num_vals as u64
// function metadata per block
+ 21 * (stats.num_vals / BLOCK_SIZE);
let num_bits_uncompressed = 64 * stats.num_vals;
num_bits as f32 / num_bits_uncompressed as f32
}
}
fn distance<T: Sub<Output = T> + Ord>(x: T, y: T) -> T {
if x < y {
y - x
} else {
x - y
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::tests::get_codec_test_data_sets;
fn create_and_validate(data: &[u64], name: &str) -> (f32, f32) {
crate::tests::create_and_validate::<
PiecewiseLinearFastFieldSerializer,
PiecewiseLinearFastFieldReader,
>(data, name)
}
#[test]
fn test_compression() {
let data = (10..=6_000_u64).collect::<Vec<_>>();
let (estimate, actual_compression) =
create_and_validate(&data, "simple monotonically large");
assert!(actual_compression < 0.2);
assert!(estimate < 0.20);
assert!(estimate > 0.15);
assert!(actual_compression > 0.001);
}
#[test]
fn test_with_codec_data_sets() {
let data_sets = get_codec_test_data_sets();
for (mut data, name) in data_sets {
create_and_validate(&data, name);
data.reverse();
create_and_validate(&data, name);
}
}
#[test]
fn test_simple() {
let data = (10..=20_u64).collect::<Vec<_>>();
create_and_validate(&data, "simple monotonically");
}
#[test]
fn border_cases_1() {
let data = (0..1024).collect::<Vec<_>>();
create_and_validate(&data, "border case");
}
#[test]
fn border_case_2() {
let data = (0..1025).collect::<Vec<_>>();
create_and_validate(&data, "border case");
}
#[test]
fn rand() {
for _ in 0..10 {
let mut data = (5_000..20_000)
.map(|_| rand::random::<u32>() as u64)
.collect::<Vec<_>>();
let (estimate, actual_compression) = create_and_validate(&data, "random");
dbg!(estimate);
dbg!(actual_compression);
data.reverse();
create_and_validate(&data, "random");
}
}
}
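
The per-block prediction can be checked with a few lines of standalone arithmetic: the slope is fixed by the first and last value of the block, `positive_offset` shifts all residuals above zero, and the block metadata costs 21 bytes (8 + 8 + 4 + 1), i.e. roughly 0.33 bits per element at a block size of 512. The sketch below mirrors the serialization loop above but uses no crate types (block contents are made up for the example):

```rust
// Standalone illustration of the piecewise linear prediction for one block.
fn predict(first_value: u64, pos: u64, slope: f32) -> i64 {
    first_value as i64 + (pos as f32 * slope) as i64
}

fn num_bits(value: u64) -> u8 {
    (64 - value.leading_zeros()) as u8
}

fn main() {
    // One toy "block" of 512 almost-linear values.
    let block: Vec<u64> = (0..512u64).map(|i| 1_000 + 3 * i + (i % 7)).collect();

    let first_value = block[0];
    let slope = ((block[block.len() - 1] as f64 - first_value as f64)
        / (block.len() - 1) as f64) as f32;

    // positive_offset makes every stored diff non-negative, max_delta bounds the diffs.
    let mut positive_offset: i64 = 0;
    let mut max_delta: i64 = 0;
    for (pos, &value) in block.iter().enumerate() {
        let diff = value as i64 - predict(first_value, pos as u64, slope);
        positive_offset = positive_offset.max(-diff);
        max_delta = max_delta.max(diff);
    }
    let bits_per_value = num_bits((max_delta + positive_offset) as u64);

    // 21 bytes of metadata: first_value (8) + positive_offset (8) + slope (4) + num_bits (1).
    let metadata_bits_per_value = 21.0 * 8.0 / block.len() as f64;
    println!(
        "{} bits per value + {:.2} bits of metadata",
        bits_per_value, metadata_bits_per_value
    );
}
```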

View File

@@ -67,7 +67,7 @@ fn word<'a>() -> impl Parser<&'a str, Output = String> {
///
/// NOTE: also accepts 999999-99-99T99:99:99.266051969+99:99
/// We delegate rejecting such invalid dates to the logical AST computation code
/// which invokes chrono::DateTime::parse_from_rfc3339 on the value to actually parse
/// which invokes time::OffsetDateTime::parse(..., &Rfc3339) on the value to actually parse
/// it (instead of merely extracting the datetime value as string as done here).
fn date_time<'a>() -> impl Parser<&'a str, Output = String> {
let two_digits = || recognize::<String, _, _>((digit(), digit()));

View File

@@ -59,7 +59,7 @@ pub enum UserInputBound {
}
impl UserInputBound {
fn display_lower(&self, formatter: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
fn display_lower(&self, formatter: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match *self {
UserInputBound::Inclusive(ref word) => write!(formatter, "[\"{}\"", word),
UserInputBound::Exclusive(ref word) => write!(formatter, "{{\"{}\"", word),
@@ -67,7 +67,7 @@ impl UserInputBound {
}
}
fn display_upper(&self, formatter: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
fn display_upper(&self, formatter: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match *self {
UserInputBound::Inclusive(ref word) => write!(formatter, "\"{}\"]", word),
UserInputBound::Exclusive(ref word) => write!(formatter, "\"{}\"}}", word),

src/aggregation/README.md (new file, 36 lines)
View File

@@ -0,0 +1,36 @@
# Contributing
When adding a new bucket aggregation, make sure to extend the "test_aggregation_flushing" test for at least 2 levels.
# Code Organization
Tantivy's aggregations have been designed to mimic the
[aggregations of elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations.html).
The code is organized in submodules:
## bucket
Contains all bucket aggregations, like the range aggregation. These bucket aggregations group documents into buckets and can contain sub-aggregations.
## metric
Contains all metric aggregations, like the average aggregation. Metric aggregations do not have sub-aggregations.
#### agg_req
agg_req contains the user's aggregation request. Deserialization from JSON is compatible with elasticsearch aggregation requests.
#### agg_req_with_accessor
agg_req_with_accessor contains the user's aggregation request enriched with fast field accessors etc., which are
used during collection.
#### segment_agg_result
segment_agg_result contains the aggregation result tree, which is used for collection of a segment.
The tree from agg_req_with_accessor is passed during collection.
#### intermediate_agg_result
intermediate_agg_result contains the aggregation tree for merging with other trees.
#### agg_result
agg_result contains the final aggregation tree.

src/aggregation/agg_req.rs (new file, 327 lines)
View File

@@ -0,0 +1,327 @@
//! Contains the aggregation request tree. Used to build an
//! [AggregationCollector](super::AggregationCollector).
//!
//! [Aggregations] is the top level entry point to create a request, which is a `HashMap<String,
//! Aggregation>`.
//!
//! Requests are compatible with the json format of elasticsearch.
//!
//! # Example
//!
//! ```
//! use tantivy::aggregation::bucket::RangeAggregation;
//! use tantivy::aggregation::agg_req::BucketAggregationType;
//! use tantivy::aggregation::agg_req::{Aggregation, Aggregations};
//! use tantivy::aggregation::agg_req::BucketAggregation;
//! let agg_req1: Aggregations = vec![
//! (
//! "range".to_string(),
//! Aggregation::Bucket(BucketAggregation {
//! bucket_agg: BucketAggregationType::Range(RangeAggregation{
//! field: "score".to_string(),
//! ranges: vec![(3f64..7f64).into(), (7f64..20f64).into()],
//! }),
//! sub_aggregation: Default::default(),
//! }),
//! ),
//! ]
//! .into_iter()
//! .collect();
//!
//! let elasticsearch_compatible_json_req = r#"
//! {
//! "range": {
//! "range": {
//! "field": "score",
//! "ranges": [
//! { "from": 3.0, "to": 7.0 },
//! { "from": 7.0, "to": 20.0 }
//! ]
//! }
//! }
//! }"#;
//! let agg_req2: Aggregations = serde_json::from_str(elasticsearch_compatible_json_req).unwrap();
//! assert_eq!(agg_req1, agg_req2);
//! ```
use std::collections::{HashMap, HashSet};
use serde::{Deserialize, Serialize};
use super::bucket::HistogramAggregation;
pub use super::bucket::RangeAggregation;
use super::metric::{AverageAggregation, StatsAggregation};
use super::VecWithNames;
/// The top-level aggregation request structure, which contains [Aggregation]s and their user-defined
/// names. It is also used in [buckets](BucketAggregation) to define sub-aggregations.
///
/// The key is the user defined name of the aggregation.
pub type Aggregations = HashMap<String, Aggregation>;
/// Like Aggregations, but optimized to work with the aggregation result
#[derive(Clone, Debug)]
pub(crate) struct AggregationsInternal {
pub(crate) metrics: VecWithNames<MetricAggregation>,
pub(crate) buckets: VecWithNames<BucketAggregationInternal>,
}
impl From<Aggregations> for AggregationsInternal {
fn from(aggs: Aggregations) -> Self {
let mut metrics = vec![];
let mut buckets = vec![];
for (key, agg) in aggs {
match agg {
Aggregation::Bucket(bucket) => buckets.push((
key,
BucketAggregationInternal {
bucket_agg: bucket.bucket_agg,
sub_aggregation: bucket.sub_aggregation.into(),
},
)),
Aggregation::Metric(metric) => metrics.push((key, metric)),
}
}
Self {
metrics: VecWithNames::from_entries(metrics),
buckets: VecWithNames::from_entries(buckets),
}
}
}
#[derive(Clone, Debug)]
// Like BucketAggregation, but optimized to work with the result
pub(crate) struct BucketAggregationInternal {
/// Bucket aggregation strategy to group documents.
pub bucket_agg: BucketAggregationType,
/// The sub_aggregations in the buckets. Each bucket will aggregate on the document set in the
/// bucket.
pub sub_aggregation: AggregationsInternal,
}
impl BucketAggregationInternal {
pub(crate) fn as_histogram(&self) -> &HistogramAggregation {
match &self.bucket_agg {
BucketAggregationType::Range(_) => panic!("unexpected aggregation"),
BucketAggregationType::Histogram(histogram) => histogram,
}
}
}
/// Extract all fast field names used in the tree.
pub fn get_fast_field_names(aggs: &Aggregations) -> HashSet<String> {
let mut fast_field_names = Default::default();
for el in aggs.values() {
el.get_fast_field_names(&mut fast_field_names)
}
fast_field_names
}
/// Aggregation request of [BucketAggregation] or [MetricAggregation].
///
/// An aggregation is either a bucket or a metric.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
#[serde(untagged)]
pub enum Aggregation {
/// Bucket aggregation, see [BucketAggregation] for details.
Bucket(BucketAggregation),
/// Metric aggregation, see [MetricAggregation] for details.
Metric(MetricAggregation),
}
impl Aggregation {
fn get_fast_field_names(&self, fast_field_names: &mut HashSet<String>) {
match self {
Aggregation::Bucket(bucket) => bucket.get_fast_field_names(fast_field_names),
Aggregation::Metric(metric) => metric.get_fast_field_names(fast_field_names),
}
}
}
/// BucketAggregations create buckets of documents. Each bucket is associated with a rule which
/// determines whether or not a document falls into it. In other words, the buckets
/// effectively define document sets. Buckets are not necessarily disjoint, therefore a document can
/// fall into multiple buckets. In addition to the buckets themselves, the bucket aggregations also
/// compute and return the number of documents for each bucket. Bucket aggregations, as opposed to
/// metric aggregations, can hold sub-aggregations. These sub-aggregations will be aggregated for
/// the buckets created by their "parent" bucket aggregation. There are different bucket
/// aggregators, each with a different "bucketing" strategy. Some define a single bucket, some
/// define fixed number of multiple buckets, and others dynamically create the buckets during the
/// aggregation process.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct BucketAggregation {
/// Bucket aggregation strategy to group documents.
#[serde(flatten)]
pub bucket_agg: BucketAggregationType,
/// The sub_aggregations in the buckets. Each bucket will aggregate on the document set in the
/// bucket.
#[serde(rename = "aggs")]
#[serde(default)]
#[serde(skip_serializing_if = "Aggregations::is_empty")]
pub sub_aggregation: Aggregations,
}
impl BucketAggregation {
fn get_fast_field_names(&self, fast_field_names: &mut HashSet<String>) {
self.bucket_agg.get_fast_field_names(fast_field_names);
fast_field_names.extend(get_fast_field_names(&self.sub_aggregation));
}
}
/// The bucket aggregation types.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub enum BucketAggregationType {
/// Put data into buckets of user-defined ranges.
#[serde(rename = "range")]
Range(RangeAggregation),
/// Put data into histogram buckets defined by a fixed interval.
#[serde(rename = "histogram")]
Histogram(HistogramAggregation),
}
impl BucketAggregationType {
fn get_fast_field_names(&self, fast_field_names: &mut HashSet<String>) {
match self {
BucketAggregationType::Range(range) => fast_field_names.insert(range.field.to_string()),
BucketAggregationType::Histogram(histogram) => {
fast_field_names.insert(histogram.field.to_string())
}
};
}
}
/// The aggregations in this family compute metrics based on values extracted
/// from the documents that are being aggregated. Values are extracted from the fast field of
/// the document.
/// Some aggregations output a single numeric metric (e.g. Average) and are called
/// single-value numeric metrics aggregations; others generate multiple metrics (e.g. Stats) and are
/// called multi-value numeric metrics aggregations.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub enum MetricAggregation {
/// Calculates the average.
#[serde(rename = "avg")]
Average(AverageAggregation),
/// Calculates stats sum, average, min, max, standard_deviation on a field.
#[serde(rename = "stats")]
Stats(StatsAggregation),
}
impl MetricAggregation {
fn get_fast_field_names(&self, fast_field_names: &mut HashSet<String>) {
match self {
MetricAggregation::Average(avg) => fast_field_names.insert(avg.field.to_string()),
MetricAggregation::Stats(stats) => fast_field_names.insert(stats.field.to_string()),
};
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn serialize_to_json_test() {
let agg_req1: Aggregations = vec![(
"range".to_string(),
Aggregation::Bucket(BucketAggregation {
bucket_agg: BucketAggregationType::Range(RangeAggregation {
field: "score".to_string(),
ranges: vec![
(f64::MIN..3f64).into(),
(3f64..7f64).into(),
(7f64..20f64).into(),
(20f64..f64::MAX).into(),
],
}),
sub_aggregation: Default::default(),
}),
)]
.into_iter()
.collect();
let elasticsearch_compatible_json_req = r#"{
"range": {
"range": {
"field": "score",
"ranges": [
{
"to": 3.0
},
{
"from": 3.0,
"to": 7.0
},
{
"from": 7.0,
"to": 20.0
},
{
"from": 20.0
}
]
}
}
}"#;
let agg_req2: String = serde_json::to_string_pretty(&agg_req1).unwrap();
assert_eq!(agg_req2, elasticsearch_compatible_json_req);
}
#[test]
fn test_get_fast_field_names() {
let agg_req2: Aggregations = vec![
(
"range".to_string(),
Aggregation::Bucket(BucketAggregation {
bucket_agg: BucketAggregationType::Range(RangeAggregation {
field: "score2".to_string(),
ranges: vec![
(f64::MIN..3f64).into(),
(3f64..7f64).into(),
(7f64..20f64).into(),
(20f64..f64::MAX).into(),
],
}),
sub_aggregation: Default::default(),
}),
),
(
"metric".to_string(),
Aggregation::Metric(MetricAggregation::Average(
AverageAggregation::from_field_name("field123".to_string()),
)),
),
]
.into_iter()
.collect();
let agg_req1: Aggregations = vec![(
"range".to_string(),
Aggregation::Bucket(BucketAggregation {
bucket_agg: BucketAggregationType::Range(RangeAggregation {
field: "score".to_string(),
ranges: vec![
(f64::MIN..3f64).into(),
(3f64..7f64).into(),
(7f64..20f64).into(),
(20f64..f64::MAX).into(),
],
}),
sub_aggregation: agg_req2,
}),
)]
.into_iter()
.collect();
assert_eq!(
get_fast_field_names(&agg_req1),
vec![
"score".to_string(),
"score2".to_string(),
"field123".to_string()
]
.into_iter()
.collect()
)
}
}
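
Since this change also adds the histogram bucket aggregation, a request for it can be built the same way as the range example in the module docs: deserialize elasticsearch-compatible JSON into `Aggregations`. The snippet below is a hedged sketch; the `field` and `interval` parameters follow the elasticsearch histogram format and are assumed to match `HistogramAggregation` here, check that struct for the full set of options:

```rust
// Hypothetical usage sketch: parse an elasticsearch-style histogram request.
// Requires the `tantivy` and `serde_json` crates; parameter names are assumed
// from the elasticsearch-compatible format described in this module.
use tantivy::aggregation::agg_req::Aggregations;

fn main() {
    let histogram_request = r#"
    {
        "prices": {
            "histogram": {
                "field": "price",
                "interval": 10.0
            }
        }
    }"#;
    let agg_req: Aggregations = serde_json::from_str(histogram_request).unwrap();
    assert!(agg_req.contains_key("prices"));
}
```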

View File

@@ -0,0 +1,149 @@
//! This will enhance the request tree with access to the fastfield and metadata.
use super::agg_req::{Aggregation, Aggregations, BucketAggregationType, MetricAggregation};
use super::bucket::{HistogramAggregation, RangeAggregation};
use super::metric::{AverageAggregation, StatsAggregation};
use super::VecWithNames;
use crate::fastfield::{type_and_cardinality, DynamicFastFieldReader, FastType};
use crate::schema::{Cardinality, Type};
use crate::{SegmentReader, TantivyError};
#[derive(Clone, Default)]
pub(crate) struct AggregationsWithAccessor {
pub metrics: VecWithNames<MetricAggregationWithAccessor>,
pub buckets: VecWithNames<BucketAggregationWithAccessor>,
}
impl AggregationsWithAccessor {
fn from_data(
metrics: VecWithNames<MetricAggregationWithAccessor>,
buckets: VecWithNames<BucketAggregationWithAccessor>,
) -> Self {
Self { metrics, buckets }
}
pub fn is_empty(&self) -> bool {
self.metrics.is_empty() && self.buckets.is_empty()
}
}
#[derive(Clone)]
pub struct BucketAggregationWithAccessor {
/// In general there can be buckets without fast field access, e.g. buckets that are created
/// based on search terms. So eventually this needs to be Option or moved.
pub(crate) accessor: DynamicFastFieldReader<u64>,
pub(crate) field_type: Type,
pub(crate) bucket_agg: BucketAggregationType,
pub(crate) sub_aggregation: AggregationsWithAccessor,
}
impl BucketAggregationWithAccessor {
fn try_from_bucket(
bucket: &BucketAggregationType,
sub_aggregation: &Aggregations,
reader: &SegmentReader,
) -> crate::Result<BucketAggregationWithAccessor> {
let (accessor, field_type) = match &bucket {
BucketAggregationType::Range(RangeAggregation {
field: field_name,
ranges: _,
}) => get_ff_reader_and_validate(reader, field_name)?,
BucketAggregationType::Histogram(HistogramAggregation {
field: field_name, ..
}) => get_ff_reader_and_validate(reader, field_name)?,
};
let sub_aggregation = sub_aggregation.clone();
Ok(BucketAggregationWithAccessor {
accessor,
field_type,
sub_aggregation: get_aggs_with_accessor_and_validate(&sub_aggregation, reader)?,
bucket_agg: bucket.clone(),
})
}
}
/// Contains the metric request and the fast field accessor.
#[derive(Clone)]
pub struct MetricAggregationWithAccessor {
pub metric: MetricAggregation,
pub field_type: Type,
pub accessor: DynamicFastFieldReader<u64>,
}
impl MetricAggregationWithAccessor {
fn try_from_metric(
metric: &MetricAggregation,
reader: &SegmentReader,
) -> crate::Result<MetricAggregationWithAccessor> {
match &metric {
MetricAggregation::Average(AverageAggregation { field: field_name })
| MetricAggregation::Stats(StatsAggregation { field: field_name }) => {
let (accessor, field_type) = get_ff_reader_and_validate(reader, field_name)?;
Ok(MetricAggregationWithAccessor {
accessor,
field_type,
metric: metric.clone(),
})
}
}
}
}
pub(crate) fn get_aggs_with_accessor_and_validate(
aggs: &Aggregations,
reader: &SegmentReader,
) -> crate::Result<AggregationsWithAccessor> {
let mut metrics = vec![];
let mut buckets = vec![];
for (key, agg) in aggs.iter() {
match agg {
Aggregation::Bucket(bucket) => buckets.push((
key.to_string(),
BucketAggregationWithAccessor::try_from_bucket(
&bucket.bucket_agg,
&bucket.sub_aggregation,
reader,
)?,
)),
Aggregation::Metric(metric) => metrics.push((
key.to_string(),
MetricAggregationWithAccessor::try_from_metric(metric, reader)?,
)),
}
}
Ok(AggregationsWithAccessor::from_data(
VecWithNames::from_entries(metrics),
VecWithNames::from_entries(buckets),
))
}
fn get_ff_reader_and_validate(
reader: &SegmentReader,
field_name: &str,
) -> crate::Result<(DynamicFastFieldReader<u64>, Type)> {
let field = reader
.schema()
.get_field(field_name)
.ok_or_else(|| TantivyError::FieldNotFound(field_name.to_string()))?;
let field_type = reader.schema().get_field_entry(field).field_type();
if let Some((ff_type, cardinality)) = type_and_cardinality(field_type) {
if cardinality == Cardinality::MultiValues || ff_type == FastType::Date {
return Err(TantivyError::InvalidArgument(format!(
"Invalid field type in aggregation {:?}, only Cardinality::SingleValue supported",
field_type.value_type()
)));
}
} else {
return Err(TantivyError::InvalidArgument(format!(
"Only single value fast fields of type f64, u64, i64 are supported, but got {:?} ",
field_type.value_type()
)));
};
let ff_fields = reader.fast_fields();
ff_fields
.u64_lenient(field)
.map(|field| (field, field_type.value_type()))
}

View File

@@ -0,0 +1,296 @@
//! Contains the final aggregation tree.
//! This tree can be converted via the `into()` method from `IntermediateAggregationResults`.
//! This conversion computes the final result. For example: The intermediate result contains
//! intermediate average results, which is the sum and the number of values. The actual average is
//! calculated on the step from intermediate to final aggregation result tree.
use std::cmp::Ordering;
use std::collections::HashMap;
use itertools::Itertools;
use serde::{Deserialize, Serialize};
use super::agg_req::{Aggregations, AggregationsInternal, BucketAggregationInternal};
use super::bucket::intermediate_buckets_to_final_buckets;
use super::intermediate_agg_result::{
IntermediateAggregationResults, IntermediateBucketResult, IntermediateHistogramBucketEntry,
IntermediateMetricResult, IntermediateRangeBucketEntry,
};
use super::metric::{SingleMetricResult, Stats};
use super::Key;
#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]
/// The final aggregation result.
pub struct AggregationResults(pub HashMap<String, AggregationResult>);
impl AggregationResults {
/// Converts an intermediate result and its aggregation request into the final result.
pub fn from_intermediate_and_req(
results: IntermediateAggregationResults,
agg: Aggregations,
) -> Self {
AggregationResults::from_intermediate_and_req_internal(results, &(agg.into()))
}
/// Converts an intermediate result and its aggregation request into the final result.
///
/// Internal function: AggregationsInternal is used instead of Aggregations, since it is
/// optimized for internal processing.
fn from_intermediate_and_req_internal(
results: IntermediateAggregationResults,
req: &AggregationsInternal,
) -> Self {
let mut result = HashMap::default();
// Important assumption:
// When the tree contains buckets/metrics, we expect it to contain all buckets/metrics from the
// request.
if let Some(buckets) = results.buckets {
result.extend(buckets.into_iter().zip(req.buckets.values()).map(
|((key, bucket), req)| {
(
key,
AggregationResult::BucketResult(BucketResult::from_intermediate_and_req(
bucket, req,
)),
)
},
));
} else {
result.extend(req.buckets.iter().map(|(key, req)| {
let empty_bucket = IntermediateBucketResult::empty_from_req(&req.bucket_agg);
(
key.to_string(),
AggregationResult::BucketResult(BucketResult::from_intermediate_and_req(
empty_bucket,
req,
)),
)
}));
}
if let Some(metrics) = results.metrics {
result.extend(
metrics
.into_iter()
.map(|(key, metric)| (key, AggregationResult::MetricResult(metric.into()))),
);
} else {
result.extend(req.metrics.iter().map(|(key, req)| {
let empty_bucket = IntermediateMetricResult::empty_from_req(req);
(
key.to_string(),
AggregationResult::MetricResult(empty_bucket.into()),
)
}));
}
Self(result)
}
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
#[serde(untagged)]
/// An aggregation is either a bucket or a metric.
pub enum AggregationResult {
/// Bucket result variant.
BucketResult(BucketResult),
/// Metric result variant.
MetricResult(MetricResult),
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
#[serde(untagged)]
/// MetricResult
pub enum MetricResult {
/// Average metric result.
Average(SingleMetricResult),
/// Stats metric result.
Stats(Stats),
}
impl From<IntermediateMetricResult> for MetricResult {
fn from(metric: IntermediateMetricResult) -> Self {
match metric {
IntermediateMetricResult::Average(avg_data) => {
MetricResult::Average(avg_data.finalize().into())
}
IntermediateMetricResult::Stats(intermediate_stats) => {
MetricResult::Stats(intermediate_stats.finalize())
}
}
}
}
/// BucketResult holds the bucket aggregation result types.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
#[serde(untagged)]
pub enum BucketResult {
/// This is the range entry for a bucket, which contains a key, count, from, to, and optionally
/// sub_aggregations.
Range {
/// The range buckets sorted by range.
buckets: Vec<RangeBucketEntry>,
},
/// This is the histogram entry for a bucket, which contains a key, count, and optionally
/// sub_aggregations.
Histogram {
/// The buckets.
///
/// Whether there are holes depends on the request; if min_doc_count is 0, there are no
/// holes between the first and last bucket.
/// See [HistogramAggregation](super::bucket::HistogramAggregation)
buckets: Vec<BucketEntry>,
},
}
impl BucketResult {
fn from_intermediate_and_req(
bucket_result: IntermediateBucketResult,
req: &BucketAggregationInternal,
) -> Self {
match bucket_result {
IntermediateBucketResult::Range(range_map) => {
let mut buckets: Vec<RangeBucketEntry> = range_map
.into_iter()
.map(|(_, bucket)| {
RangeBucketEntry::from_intermediate_and_req(bucket, &req.sub_aggregation)
})
.collect_vec();
buckets.sort_by(|a, b| {
a.from
.unwrap_or(f64::MIN)
.partial_cmp(&b.from.unwrap_or(f64::MIN))
.unwrap_or(Ordering::Equal)
});
BucketResult::Range { buckets }
}
IntermediateBucketResult::Histogram { buckets } => {
let buckets = intermediate_buckets_to_final_buckets(
buckets,
req.as_histogram(),
&req.sub_aggregation,
);
BucketResult::Histogram { buckets }
}
}
}
}
/// This is the default entry for a bucket, which contains a key, count, and optionally
/// sub_aggregations.
///
/// # JSON Format
/// ```json
/// {
/// ...
/// "my_histogram": {
/// "buckets": [
/// {
/// "key": "2.0",
/// "doc_count": 5
/// },
/// {
/// "key": "4.0",
/// "doc_count": 2
/// },
/// {
/// "key": "6.0",
/// "doc_count": 3
/// }
/// ]
/// }
/// ...
/// }
/// ```
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct BucketEntry {
/// The identifier of the bucket.
pub key: Key,
/// Number of documents in the bucket.
pub doc_count: u64,
#[serde(flatten)]
/// sub-aggregations in this bucket.
pub sub_aggregation: AggregationResults,
}
impl BucketEntry {
pub(crate) fn from_intermediate_and_req(
entry: IntermediateHistogramBucketEntry,
req: &AggregationsInternal,
) -> Self {
BucketEntry {
key: Key::F64(entry.key),
doc_count: entry.doc_count,
sub_aggregation: AggregationResults::from_intermediate_and_req_internal(
entry.sub_aggregation,
req,
),
}
}
}
/// This is the range entry for a bucket, which contains a key, count, and optionally
/// sub_aggregations.
///
/// # JSON Format
/// ```json
/// {
/// ...
/// "my_ranges": {
/// "buckets": [
/// {
/// "key": "*-10",
/// "to": 10,
/// "doc_count": 5
/// },
/// {
/// "key": "10-20",
/// "from": 10,
/// "to": 20,
/// "doc_count": 2
/// },
/// {
/// "key": "20-*",
/// "from": 20,
/// "doc_count": 3
/// }
/// ]
/// }
/// ...
/// }
/// ```
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct RangeBucketEntry {
/// The identifier of the bucket.
pub key: Key,
/// Number of documents in the bucket.
pub doc_count: u64,
#[serde(flatten)]
/// sub-aggregations in this bucket.
pub sub_aggregation: AggregationResults,
/// The from range of the bucket. Equals f64::MIN when None.
#[serde(skip_serializing_if = "Option::is_none")]
pub from: Option<f64>,
/// The to range of the bucket. Equals f64::MAX when None.
#[serde(skip_serializing_if = "Option::is_none")]
pub to: Option<f64>,
}
impl RangeBucketEntry {
fn from_intermediate_and_req(
entry: IntermediateRangeBucketEntry,
req: &AggregationsInternal,
) -> Self {
RangeBucketEntry {
key: entry.key,
doc_count: entry.doc_count,
sub_aggregation: AggregationResults::from_intermediate_and_req_internal(
entry.sub_aggregation,
req,
),
to: entry.to,
from: entry.from,
}
}
}

File diff suppressed because it is too large

@@ -0,0 +1,2 @@
mod histogram;
pub use histogram::*;


@@ -0,0 +1,16 @@
//! Module for all bucket aggregations.
//!
//! BucketAggregations create buckets of documents; see
//! [BucketAggregation](super::agg_req::BucketAggregation).
//!
//! Results of final buckets are [BucketResult](super::agg_result::BucketResult).
//! Results of intermediate buckets are
//! [IntermediateBucketResult](super::intermediate_agg_result::IntermediateBucketResult)
mod histogram;
mod range;
pub(crate) use histogram::SegmentHistogramCollector;
pub use histogram::*;
pub(crate) use range::SegmentRangeCollector;
pub use range::*;


@@ -0,0 +1,568 @@
use std::ops::Range;
use serde::{Deserialize, Serialize};
use crate::aggregation::agg_req_with_accessor::{
AggregationsWithAccessor, BucketAggregationWithAccessor,
};
use crate::aggregation::intermediate_agg_result::IntermediateBucketResult;
use crate::aggregation::segment_agg_result::{
SegmentAggregationResultsCollector, SegmentRangeBucketEntry,
};
use crate::aggregation::{f64_from_fastfield_u64, f64_to_fastfield_u64, Key};
use crate::fastfield::FastFieldReader;
use crate::schema::Type;
use crate::{DocId, TantivyError};
/// Provide user-defined buckets to aggregate on.
/// Two special buckets will automatically be created to cover the whole range of values.
/// The provided buckets have to be continuous.
/// During the aggregation, the values extracted from the fast_field `field` will be checked
/// against each bucket range. Note that this aggregation includes the from value and excludes the
/// to value for each range.
///
/// Result type is [BucketResult](crate::aggregation::agg_result::BucketResult) with
/// [RangeBucketEntry](crate::aggregation::agg_result::RangeBucketEntry) on the
/// AggregationCollector.
///
/// Result type is
/// [crate::aggregation::intermediate_agg_result::IntermediateBucketResult] with
/// [crate::aggregation::intermediate_agg_result::IntermediateRangeBucketEntry] on the
/// DistributedAggregationCollector.
///
/// # Limitations/Compatibility
/// Overlapping ranges are not yet supported.
///
/// The keyed parameter (elasticsearch) is not yet supported.
///
/// # Request JSON Format
/// ```json
/// {
/// "range": {
/// "field": "score",
/// "ranges": [
/// { "to": 3.0 },
/// { "from": 3.0, "to": 7.0 },
///       { "from": 7.0, "to": 20.0 },
/// { "from": 20.0 }
/// ]
/// }
/// }
/// ```
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct RangeAggregation {
/// The field to aggregate on.
pub field: String,
/// Note that this aggregation includes the from value and excludes the to value for each
/// range. Extra buckets will be created up to the first `to` and after the last `from`, if necessary.
pub ranges: Vec<RangeAggregationRange>,
}
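For orientation, a hedged sketch of the same request built programmatically, mirroring the tests further down in this diff; the aggregation name "score_ranges" and the field "score" are placeholders:

```rust
// Sketch only: equivalent to the JSON request above.
use crate::aggregation::agg_req::{
    Aggregation, Aggregations, BucketAggregation, BucketAggregationType,
};

let agg_req: Aggregations = vec![(
    "score_ranges".to_string(),
    Aggregation::Bucket(BucketAggregation {
        bucket_agg: BucketAggregationType::Range(RangeAggregation {
            field: "score".to_string(),
            ranges: vec![
                // f64::MIN / f64::MAX are mapped to open-ended bounds by the
                // `From<Range<f64>>` impl below.
                (f64::MIN..3.0).into(),
                (3.0..7.0).into(),
                (7.0..20.0).into(),
                (20.0..f64::MAX).into(),
            ],
        }),
        sub_aggregation: Default::default(),
    }),
)]
.into_iter()
.collect();
```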
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
/// The range for one range bucket.
pub struct RangeAggregationRange {
/// The from range value, which is inclusive in the range.
/// `None` means an open-ended interval.
#[serde(skip_serializing_if = "Option::is_none", default)]
pub from: Option<f64>,
/// The to range value, which is not inclusive in the range.
/// `None` means an open-ended interval.
#[serde(skip_serializing_if = "Option::is_none", default)]
pub to: Option<f64>,
}
impl From<Range<f64>> for RangeAggregationRange {
fn from(range: Range<f64>) -> Self {
let from = if range.start == f64::MIN {
None
} else {
Some(range.start)
};
let to = if range.end == f64::MAX {
None
} else {
Some(range.end)
};
RangeAggregationRange { from, to }
}
}
#[derive(Clone, Debug, PartialEq)]
pub(crate) struct SegmentRangeAndBucketEntry {
range: Range<u64>,
bucket: SegmentRangeBucketEntry,
}
/// The collector puts values from the fast field into the correct buckets and does a conversion to
/// the correct datatype.
#[derive(Clone, Debug, PartialEq)]
pub struct SegmentRangeCollector {
/// The buckets containing the aggregation data.
buckets: Vec<SegmentRangeAndBucketEntry>,
field_type: Type,
}
impl SegmentRangeCollector {
pub fn into_intermediate_bucket_result(self) -> IntermediateBucketResult {
let field_type = self.field_type;
let buckets = self
.buckets
.into_iter()
.map(move |range_bucket| {
(
range_to_string(&range_bucket.range, &field_type),
range_bucket.bucket.into(),
)
})
.collect();
IntermediateBucketResult::Range(buckets)
}
pub(crate) fn from_req_and_validate(
req: &RangeAggregation,
sub_aggregation: &AggregationsWithAccessor,
field_type: Type,
) -> crate::Result<Self> {
// The range input on the request is f64.
// We need to convert to u64 ranges, because we read the values as u64.
// The mapping from the conversion is monotonic so ordering is preserved.
let buckets = extend_validate_ranges(&req.ranges, &field_type)?
.iter()
.map(|range| {
let to = if range.end == u64::MAX {
None
} else {
Some(f64_from_fastfield_u64(range.end, &field_type))
};
let from = if range.start == u64::MIN {
None
} else {
Some(f64_from_fastfield_u64(range.start, &field_type))
};
let sub_aggregation = if sub_aggregation.is_empty() {
None
} else {
Some(SegmentAggregationResultsCollector::from_req_and_validate(
sub_aggregation,
)?)
};
Ok(SegmentRangeAndBucketEntry {
range: range.clone(),
bucket: SegmentRangeBucketEntry {
key: range_to_key(range, &field_type),
doc_count: 0,
sub_aggregation,
from,
to,
},
})
})
.collect::<crate::Result<_>>()?;
Ok(SegmentRangeCollector {
buckets,
field_type,
})
}
#[inline]
pub(crate) fn collect_block(
&mut self,
doc: &[DocId],
bucket_with_accessor: &BucketAggregationWithAccessor,
force_flush: bool,
) {
let mut iter = doc.chunks_exact(4);
for docs in iter.by_ref() {
let val1 = bucket_with_accessor.accessor.get(docs[0]);
let val2 = bucket_with_accessor.accessor.get(docs[1]);
let val3 = bucket_with_accessor.accessor.get(docs[2]);
let val4 = bucket_with_accessor.accessor.get(docs[3]);
let bucket_pos1 = self.get_bucket_pos(val1);
let bucket_pos2 = self.get_bucket_pos(val2);
let bucket_pos3 = self.get_bucket_pos(val3);
let bucket_pos4 = self.get_bucket_pos(val4);
self.increment_bucket(bucket_pos1, docs[0], &bucket_with_accessor.sub_aggregation);
self.increment_bucket(bucket_pos2, docs[1], &bucket_with_accessor.sub_aggregation);
self.increment_bucket(bucket_pos3, docs[2], &bucket_with_accessor.sub_aggregation);
self.increment_bucket(bucket_pos4, docs[3], &bucket_with_accessor.sub_aggregation);
}
for doc in iter.remainder() {
let val = bucket_with_accessor.accessor.get(*doc);
let bucket_pos = self.get_bucket_pos(val);
self.increment_bucket(bucket_pos, *doc, &bucket_with_accessor.sub_aggregation);
}
if force_flush {
for bucket in &mut self.buckets {
if let Some(sub_aggregation) = &mut bucket.bucket.sub_aggregation {
sub_aggregation
.flush_staged_docs(&bucket_with_accessor.sub_aggregation, force_flush);
}
}
}
}
#[inline]
fn increment_bucket(
&mut self,
bucket_pos: usize,
doc: DocId,
bucket_with_accessor: &AggregationsWithAccessor,
) {
let bucket = &mut self.buckets[bucket_pos];
bucket.bucket.doc_count += 1;
if let Some(sub_aggregation) = &mut bucket.bucket.sub_aggregation {
sub_aggregation.collect(doc, bucket_with_accessor);
}
}
#[inline]
fn get_bucket_pos(&self, val: u64) -> usize {
let pos = self
.buckets
.binary_search_by_key(&val, |probe| probe.range.start)
.unwrap_or_else(|pos| pos - 1);
debug_assert!(self.buckets[pos].range.contains(&val));
pos
}
}
/// Converts the user provided f64 range value to fast field value space.
///
/// Internally fast field values are always stored as u64.
/// If the fast field has u64 [1,2,5], these values are stored as is in the fast field.
/// A fast field with f64 [1.0, 2.0, 5.0] is converted to u64 space, using a
/// monotonic mapping function, so the order is preserved.
///
/// Consequently, an f64 user range 1.0..3.0 needs to be converted to fast field value space using
/// the same monotonic mapping function, so that the provided ranges contain the u64 values in the
/// fast field.
/// The alternative would be to convert every value read back to f64, but that is more
/// computationally expensive when many documents are hit.
fn to_u64_range(range: &RangeAggregationRange, field_type: &Type) -> crate::Result<Range<u64>> {
let start = if let Some(from) = range.from {
f64_to_fastfield_u64(from, field_type)
.ok_or_else(|| TantivyError::InvalidArgument("invalid field type".to_string()))?
} else {
u64::MIN
};
let end = if let Some(to) = range.to {
f64_to_fastfield_u64(to, field_type)
.ok_or_else(|| TantivyError::InvalidArgument("invalid field type".to_string()))?
} else {
u64::MAX
};
Ok(start..end)
}
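To illustrate the kind of mapping involved, here is one common order-preserving f64-to-u64 encoding. This is only an illustration, not necessarily the exact implementation behind `f64_to_fastfield_u64`:

```rust
// Illustration only: a common monotonic f64 -> u64 encoding.
fn f64_to_ordered_u64(val: f64) -> u64 {
    let bits = val.to_bits();
    if bits >> 63 == 1 {
        // Negative values: flip all bits so that "more negative" sorts lower.
        !bits
    } else {
        // Positive values (and +0.0): set the sign bit so they sort above all negatives.
        bits | (1u64 << 63)
    }
}

fn main() {
    let vals = [-2.5_f64, -1.0, 0.0, 1.0, 3.5];
    let mapped: Vec<u64> = vals.iter().map(|&v| f64_to_ordered_u64(v)).collect();
    // Order is preserved, so user ranges can be converted once and compared in u64 space.
    assert!(mapped.windows(2).all(|w| w[0] < w[1]));
    println!("{:?}", mapped);
}
```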
/// Extends the provided buckets to contain the whole value range, by inserting buckets at the
/// beginning and end.
fn extend_validate_ranges(
buckets: &[RangeAggregationRange],
field_type: &Type,
) -> crate::Result<Vec<Range<u64>>> {
let mut converted_buckets = buckets
.iter()
.map(|range| to_u64_range(range, field_type))
.collect::<crate::Result<Vec<_>>>()?;
converted_buckets.sort_by_key(|bucket| bucket.start);
if converted_buckets[0].start != u64::MIN {
converted_buckets.insert(0, u64::MIN..converted_buckets[0].start);
}
if converted_buckets[converted_buckets.len() - 1].end != u64::MAX {
converted_buckets.push(converted_buckets[converted_buckets.len() - 1].end..u64::MAX);
}
// fill up holes in the ranges
let find_hole = |converted_buckets: &[Range<u64>]| {
for (pos, ranges) in converted_buckets.windows(2).enumerate() {
if ranges[0].end > ranges[1].start {
return Err(TantivyError::InvalidArgument(format!(
"Overlapping ranges not supported range {:?}, range+1 {:?}",
ranges[0], ranges[1]
)));
}
if ranges[0].end != ranges[1].start {
return Ok(Some(pos));
}
}
Ok(None)
};
while let Some(hole_pos) = find_hole(&converted_buckets)? {
let new_range = converted_buckets[hole_pos].end..converted_buckets[hole_pos + 1].start;
converted_buckets.insert(hole_pos + 1, new_range);
}
Ok(converted_buckets)
}
pub(crate) fn range_to_string(range: &Range<u64>, field_type: &Type) -> String {
// is_start is there for malformed requests, e.g. if the user passes the range u64::MIN..0.0,
// it should be rendered as "*-0" and not "*-*"
let to_str = |val: u64, is_start: bool| {
if (is_start && val == u64::MIN) || (!is_start && val == u64::MAX) {
"*".to_string()
} else {
f64_from_fastfield_u64(val, field_type).to_string()
}
};
format!("{}-{}", to_str(range.start, true), to_str(range.end, false))
}
pub(crate) fn range_to_key(range: &Range<u64>, field_type: &Type) -> Key {
Key::Str(range_to_string(range, field_type))
}
#[cfg(test)]
mod tests {
use serde_json::Value;
use super::*;
use crate::aggregation::agg_req::{
Aggregation, Aggregations, BucketAggregation, BucketAggregationType,
};
use crate::aggregation::tests::get_test_index_with_num_docs;
use crate::aggregation::AggregationCollector;
use crate::fastfield::FastValue;
use crate::query::AllQuery;
pub fn get_collector_from_ranges(
ranges: Vec<RangeAggregationRange>,
field_type: Type,
) -> SegmentRangeCollector {
let req = RangeAggregation {
field: "dummy".to_string(),
ranges,
};
SegmentRangeCollector::from_req_and_validate(&req, &Default::default(), field_type).unwrap()
}
#[test]
fn range_fraction_test() -> crate::Result<()> {
let index = get_test_index_with_num_docs(false, 100)?;
let agg_req: Aggregations = vec![(
"range".to_string(),
Aggregation::Bucket(BucketAggregation {
bucket_agg: BucketAggregationType::Range(RangeAggregation {
field: "fraction_f64".to_string(),
ranges: vec![(0f64..0.1f64).into(), (0.1f64..0.2f64).into()],
}),
sub_aggregation: Default::default(),
}),
)]
.into_iter()
.collect();
let collector = AggregationCollector::from_aggs(agg_req);
let reader = index.reader()?;
let searcher = reader.searcher();
let agg_res = searcher.search(&AllQuery, &collector).unwrap();
let res: Value = serde_json::from_str(&serde_json::to_string(&agg_res)?)?;
assert_eq!(res["range"]["buckets"][0]["key"], "*-0");
assert_eq!(res["range"]["buckets"][0]["doc_count"], 0);
assert_eq!(res["range"]["buckets"][1]["key"], "0-0.1");
assert_eq!(res["range"]["buckets"][1]["doc_count"], 10);
assert_eq!(res["range"]["buckets"][2]["key"], "0.1-0.2");
assert_eq!(res["range"]["buckets"][2]["doc_count"], 10);
assert_eq!(res["range"]["buckets"][3]["key"], "0.2-*");
assert_eq!(res["range"]["buckets"][3]["doc_count"], 80);
Ok(())
}
#[test]
fn bucket_test_extend_range_hole() {
let buckets = vec![(10f64..20f64).into(), (30f64..40f64).into()];
let collector = get_collector_from_ranges(buckets, Type::F64);
let buckets = collector.buckets;
assert_eq!(buckets[0].range.start, u64::MIN);
assert_eq!(buckets[0].range.end, 10f64.to_u64());
assert_eq!(buckets[1].range.start, 10f64.to_u64());
assert_eq!(buckets[1].range.end, 20f64.to_u64());
// Added bucket to fill hole
assert_eq!(buckets[2].range.start, 20f64.to_u64());
assert_eq!(buckets[2].range.end, 30f64.to_u64());
assert_eq!(buckets[3].range.start, 30f64.to_u64());
assert_eq!(buckets[3].range.end, 40f64.to_u64());
}
#[test]
fn bucket_test_range_conversion_special_case() {
// the monotonic conversion between f64 and u64, does not map f64::MIN.to_u64() ==
// u64::MIN, but the into trait converts f64::MIN/MAX to None
let buckets = vec![
(f64::MIN..10f64).into(),
(10f64..20f64).into(),
(20f64..f64::MAX).into(),
];
let collector = get_collector_from_ranges(buckets, Type::F64);
let buckets = collector.buckets;
assert_eq!(buckets[0].range.start, u64::MIN);
assert_eq!(buckets[0].range.end, 10f64.to_u64());
assert_eq!(buckets[1].range.start, 10f64.to_u64());
assert_eq!(buckets[1].range.end, 20f64.to_u64());
assert_eq!(buckets[2].range.start, 20f64.to_u64());
assert_eq!(buckets[2].range.end, u64::MAX);
assert_eq!(buckets.len(), 3);
}
#[test]
fn bucket_range_test_negative_vals() {
let buckets = vec![(-10f64..-1f64).into()];
let collector = get_collector_from_ranges(buckets, Type::F64);
let buckets = collector.buckets;
assert_eq!(&buckets[0].bucket.key.to_string(), "*--10");
assert_eq!(&buckets[buckets.len() - 1].bucket.key.to_string(), "-1-*");
}
#[test]
fn bucket_range_test_positive_vals() {
let buckets = vec![(0f64..10f64).into()];
let collector = get_collector_from_ranges(buckets, Type::F64);
let buckets = collector.buckets;
assert_eq!(&buckets[0].bucket.key.to_string(), "*-0");
assert_eq!(&buckets[buckets.len() - 1].bucket.key.to_string(), "10-*");
}
#[test]
fn range_binary_search_test_u64() {
let check_ranges = |ranges: Vec<RangeAggregationRange>| {
let collector = get_collector_from_ranges(ranges, Type::U64);
let search = |val: u64| collector.get_bucket_pos(val);
assert_eq!(search(u64::MIN), 0);
assert_eq!(search(9), 0);
assert_eq!(search(10), 1);
assert_eq!(search(11), 1);
assert_eq!(search(99), 1);
assert_eq!(search(100), 2);
assert_eq!(search(u64::MAX - 1), 2); // Since the end of a range is never included,
// the max value falls into the last bucket.
};
let ranges = vec![(10.0..100.0).into()];
check_ranges(ranges);
let ranges = vec![
RangeAggregationRange {
to: Some(10.0),
from: None,
},
(10.0..100.0).into(),
];
check_ranges(ranges);
let ranges = vec![
RangeAggregationRange {
to: Some(10.0),
from: None,
},
(10.0..100.0).into(),
RangeAggregationRange {
to: None,
from: Some(100.0),
},
];
check_ranges(ranges);
}
#[test]
fn range_binary_search_test_f64() {
let ranges = vec![
//(f64::MIN..10.0).into(),
(10.0..100.0).into(),
//(100.0..f64::MAX).into(),
];
let collector = get_collector_from_ranges(ranges, Type::F64);
let search = |val: u64| collector.get_bucket_pos(val);
assert_eq!(search(u64::MIN), 0);
assert_eq!(search(9f64.to_u64()), 0);
assert_eq!(search(10f64.to_u64()), 1);
assert_eq!(search(11f64.to_u64()), 1);
assert_eq!(search(99f64.to_u64()), 1);
assert_eq!(search(100f64.to_u64()), 2);
assert_eq!(search(u64::MAX - 1), 2); // Since the end of a range is never included,
// the max value falls into the last bucket.
}
}
#[cfg(all(test, feature = "unstable"))]
mod bench {
use itertools::Itertools;
use rand::seq::SliceRandom;
use rand::thread_rng;
use super::*;
use crate::aggregation::bucket::range::tests::get_collector_from_ranges;
const TOTAL_DOCS: u64 = 1_000_000u64;
const NUM_DOCS: u64 = 50_000u64;
fn get_collector_with_buckets(num_buckets: u64, num_docs: u64) -> SegmentRangeCollector {
let bucket_size = num_docs / num_buckets;
let mut buckets: Vec<RangeAggregationRange> = vec![];
for i in 0..num_buckets {
let bucket_start = (i * bucket_size) as f64;
buckets.push((bucket_start..bucket_start + bucket_size as f64).into())
}
get_collector_from_ranges(buckets, Type::U64)
}
fn get_rand_docs(total_docs: u64, num_docs_returned: u64) -> Vec<u64> {
let mut rng = thread_rng();
let all_docs = (0..total_docs - 1).collect_vec();
let mut vals = all_docs
.as_slice()
.choose_multiple(&mut rng, num_docs_returned as usize)
.cloned()
.collect_vec();
vals.sort();
vals
}
fn bench_range_binary_search(b: &mut test::Bencher, num_buckets: u64) {
let collector = get_collector_with_buckets(num_buckets, TOTAL_DOCS);
let vals = get_rand_docs(TOTAL_DOCS, NUM_DOCS);
b.iter(|| {
let mut bucket_pos = 0;
for val in &vals {
bucket_pos = collector.get_bucket_pos(*val);
}
bucket_pos
})
}
#[bench]
fn bench_range_100_buckets(b: &mut test::Bencher) {
bench_range_binary_search(b, 100)
}
#[bench]
fn bench_range_10_buckets(b: &mut test::Bencher) {
bench_range_binary_search(b, 10)
}
}


@@ -0,0 +1,142 @@
use super::agg_req::Aggregations;
use super::agg_req_with_accessor::AggregationsWithAccessor;
use super::agg_result::AggregationResults;
use super::intermediate_agg_result::IntermediateAggregationResults;
use super::segment_agg_result::SegmentAggregationResultsCollector;
use crate::aggregation::agg_req_with_accessor::get_aggs_with_accessor_and_validate;
use crate::collector::{Collector, SegmentCollector};
use crate::SegmentReader;
/// Collector for aggregations.
///
/// The collector collects all aggregations specified in the underlying aggregation request.
pub struct AggregationCollector {
agg: Aggregations,
}
impl AggregationCollector {
/// Create collector from aggregation request.
pub fn from_aggs(agg: Aggregations) -> Self {
Self { agg }
}
}
/// Collector for distributed aggregations.
///
/// The collector collects all aggregations specified in the underlying aggregation request.
///
/// # Purpose
/// DistributedAggregationCollector returns `IntermediateAggregationResults` and not the final
/// `AggregationResults`, so that results from different indices can be merged and then converted
/// into the final `AggregationResults` via `AggregationResults::from_intermediate_and_req`.
pub struct DistributedAggregationCollector {
agg: Aggregations,
}
impl DistributedAggregationCollector {
/// Create collector from aggregation request.
pub fn from_aggs(agg: Aggregations) -> Self {
Self { agg }
}
}
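A hedged sketch of the intended distributed flow. `agg_req: Aggregations`, `searcher_a`, and `searcher_b` are assumed to exist (index setup omitted), and `AllQuery` stands in for whatever query is being aggregated over:

```rust
// Sketch only: collect per-index intermediate results, merge them, then finalize.
let collector = DistributedAggregationCollector::from_aggs(agg_req.clone());

let mut intermediate = searcher_a.search(&AllQuery, &collector)?;
let intermediate_b = searcher_b.search(&AllQuery, &collector)?;

// Merge the per-index results before producing the final response.
intermediate.merge_fruits(intermediate_b);

// Only the merged result is converted into the serializable final form.
let final_result = AggregationResults::from_intermediate_and_req(intermediate, agg_req);
```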
impl Collector for DistributedAggregationCollector {
type Fruit = IntermediateAggregationResults;
type Child = AggregationSegmentCollector;
fn for_segment(
&self,
_segment_local_id: crate::SegmentOrdinal,
reader: &crate::SegmentReader,
) -> crate::Result<Self::Child> {
AggregationSegmentCollector::from_agg_req_and_reader(&self.agg, reader)
}
fn requires_scoring(&self) -> bool {
false
}
fn merge_fruits(
&self,
segment_fruits: Vec<<Self::Child as SegmentCollector>::Fruit>,
) -> crate::Result<Self::Fruit> {
merge_fruits(segment_fruits)
}
}
impl Collector for AggregationCollector {
type Fruit = AggregationResults;
type Child = AggregationSegmentCollector;
fn for_segment(
&self,
_segment_local_id: crate::SegmentOrdinal,
reader: &crate::SegmentReader,
) -> crate::Result<Self::Child> {
AggregationSegmentCollector::from_agg_req_and_reader(&self.agg, reader)
}
fn requires_scoring(&self) -> bool {
false
}
fn merge_fruits(
&self,
segment_fruits: Vec<<Self::Child as SegmentCollector>::Fruit>,
) -> crate::Result<Self::Fruit> {
merge_fruits(segment_fruits)
.map(|res| AggregationResults::from_intermediate_and_req(res, self.agg.clone()))
}
}
fn merge_fruits(
mut segment_fruits: Vec<IntermediateAggregationResults>,
) -> crate::Result<IntermediateAggregationResults> {
if let Some(mut fruit) = segment_fruits.pop() {
for next_fruit in segment_fruits {
fruit.merge_fruits(next_fruit);
}
Ok(fruit)
} else {
Ok(IntermediateAggregationResults::default())
}
}
/// AggregationSegmentCollector does the aggregation collection on a segment.
pub struct AggregationSegmentCollector {
aggs: AggregationsWithAccessor,
result: SegmentAggregationResultsCollector,
}
impl AggregationSegmentCollector {
/// Creates an AggregationSegmentCollector from an [Aggregations] request and a segment reader.
/// Also includes validation, e.g. checking field types and existence.
pub fn from_agg_req_and_reader(
agg: &Aggregations,
reader: &SegmentReader,
) -> crate::Result<Self> {
let aggs_with_accessor = get_aggs_with_accessor_and_validate(agg, reader)?;
let result =
SegmentAggregationResultsCollector::from_req_and_validate(&aggs_with_accessor)?;
Ok(AggregationSegmentCollector {
aggs: aggs_with_accessor,
result,
})
}
}
impl SegmentCollector for AggregationSegmentCollector {
type Fruit = IntermediateAggregationResults;
#[inline]
fn collect(&mut self, doc: crate::DocId, _score: crate::Score) {
self.result.collect(doc, &self.aggs);
}
fn harvest(mut self) -> Self::Fruit {
self.result.flush_staged_docs(&self.aggs, true);
self.result.into()
}
}


@@ -0,0 +1,473 @@
//! Contains the intermediate aggregation tree, which can be merged.
//! Intermediate aggregation results can be used to merge results between segments or between
//! indices.
use std::cmp::Ordering;
use fnv::FnvHashMap;
use itertools::Itertools;
use serde::{Deserialize, Serialize};
use super::agg_req::{AggregationsInternal, BucketAggregationType, MetricAggregation};
use super::metric::{IntermediateAverage, IntermediateStats};
use super::segment_agg_result::{
SegmentAggregationResultsCollector, SegmentBucketResultCollector, SegmentHistogramBucketEntry,
SegmentMetricResultCollector, SegmentRangeBucketEntry,
};
use super::{Key, SerializedKey, VecWithNames};
/// Contains the intermediate aggregation result, which is optimized to be merged with other
/// intermediate results.
#[derive(Default, Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct IntermediateAggregationResults {
pub(crate) metrics: Option<VecWithNames<IntermediateMetricResult>>,
pub(crate) buckets: Option<VecWithNames<IntermediateBucketResult>>,
}
impl From<SegmentAggregationResultsCollector> for IntermediateAggregationResults {
fn from(tree: SegmentAggregationResultsCollector) -> Self {
let metrics = tree.metrics.map(VecWithNames::from_other);
let buckets = tree.buckets.map(VecWithNames::from_other);
Self { metrics, buckets }
}
}
impl IntermediateAggregationResults {
pub(crate) fn empty_from_req(req: &AggregationsInternal) -> Self {
let metrics = if req.metrics.is_empty() {
None
} else {
let metrics = req
.metrics
.iter()
.map(|(key, req)| {
(
key.to_string(),
IntermediateMetricResult::empty_from_req(req),
)
})
.collect();
Some(VecWithNames::from_entries(metrics))
};
let buckets = if req.buckets.is_empty() {
None
} else {
let buckets = req
.buckets
.iter()
.map(|(key, req)| {
(
key.to_string(),
IntermediateBucketResult::empty_from_req(&req.bucket_agg),
)
})
.collect();
Some(VecWithNames::from_entries(buckets))
};
Self { metrics, buckets }
}
/// Merge another intermediate aggregation result into this result.
///
/// The order of the values needs to be the same in both results. This is ensured when the same
/// keys are present in the underlying VecWithNames struct.
pub fn merge_fruits(&mut self, other: IntermediateAggregationResults) {
if let (Some(buckets_left), Some(buckets_right)) = (&mut self.buckets, other.buckets) {
for (bucket_left, bucket_right) in
buckets_left.values_mut().zip(buckets_right.into_values())
{
bucket_left.merge_fruits(bucket_right);
}
}
if let (Some(metrics_left), Some(metrics_right)) = (&mut self.metrics, other.metrics) {
for (metric_left, metric_right) in
metrics_left.values_mut().zip(metrics_right.into_values())
{
metric_left.merge_fruits(metric_right);
}
}
}
}
/// An aggregation is either a bucket or a metric.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub enum IntermediateAggregationResult {
/// Bucket variant
Bucket(IntermediateBucketResult),
/// Metric variant
Metric(IntermediateMetricResult),
}
/// Holds the intermediate data for metric results
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub enum IntermediateMetricResult {
/// Average variant containing intermediate average data.
Average(IntermediateAverage),
/// Stats variant containing intermediate stats data.
Stats(IntermediateStats),
}
impl From<SegmentMetricResultCollector> for IntermediateMetricResult {
fn from(tree: SegmentMetricResultCollector) -> Self {
match tree {
SegmentMetricResultCollector::Average(collector) => {
IntermediateMetricResult::Average(IntermediateAverage::from_collector(collector))
}
SegmentMetricResultCollector::Stats(collector) => {
IntermediateMetricResult::Stats(collector.stats)
}
}
}
}
impl IntermediateMetricResult {
pub(crate) fn empty_from_req(req: &MetricAggregation) -> Self {
match req {
MetricAggregation::Average(_) => {
IntermediateMetricResult::Average(IntermediateAverage::default())
}
MetricAggregation::Stats(_) => {
IntermediateMetricResult::Stats(IntermediateStats::default())
}
}
}
fn merge_fruits(&mut self, other: IntermediateMetricResult) {
match (self, other) {
(
IntermediateMetricResult::Average(avg_data_left),
IntermediateMetricResult::Average(avg_data_right),
) => {
avg_data_left.merge_fruits(avg_data_right);
}
(
IntermediateMetricResult::Stats(stats_left),
IntermediateMetricResult::Stats(stats_right),
) => {
stats_left.merge_fruits(stats_right);
}
_ => {
panic!("incompatible fruit types in tree");
}
}
}
}
/// The intermediate bucket results. Internally they can be easily merged via the keys of the
/// buckets.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub enum IntermediateBucketResult {
/// This is the range entry for a bucket, which contains a key, count, from, to, and optionally
/// sub_aggregations.
Range(FnvHashMap<SerializedKey, IntermediateRangeBucketEntry>),
/// This is the histogram entry for a bucket, which contains a key, count, and optionally
/// sub_aggregations.
Histogram {
/// The buckets
buckets: Vec<IntermediateHistogramBucketEntry>,
},
}
impl From<SegmentBucketResultCollector> for IntermediateBucketResult {
fn from(collector: SegmentBucketResultCollector) -> Self {
match collector {
SegmentBucketResultCollector::Range(range) => range.into_intermediate_bucket_result(),
SegmentBucketResultCollector::Histogram(histogram) => {
histogram.into_intermediate_bucket_result()
}
}
}
}
impl IntermediateBucketResult {
pub(crate) fn empty_from_req(req: &BucketAggregationType) -> Self {
match req {
BucketAggregationType::Range(_) => IntermediateBucketResult::Range(Default::default()),
BucketAggregationType::Histogram(_) => {
IntermediateBucketResult::Histogram { buckets: vec![] }
}
}
}
fn merge_fruits(&mut self, other: IntermediateBucketResult) {
match (self, other) {
(
IntermediateBucketResult::Range(entries_left),
IntermediateBucketResult::Range(entries_right),
) => {
merge_maps(entries_left, entries_right);
}
(
IntermediateBucketResult::Histogram {
buckets: entries_left,
..
},
IntermediateBucketResult::Histogram {
buckets: entries_right,
..
},
) => {
let mut buckets = entries_left
.drain(..)
.merge_join_by(entries_right.into_iter(), |left, right| {
left.key.partial_cmp(&right.key).unwrap_or(Ordering::Equal)
})
.map(|either| match either {
itertools::EitherOrBoth::Both(mut left, right) => {
left.merge_fruits(right);
left
}
itertools::EitherOrBoth::Left(left) => left,
itertools::EitherOrBoth::Right(right) => right,
})
.collect();
std::mem::swap(entries_left, &mut buckets);
}
(IntermediateBucketResult::Range(_), _) => {
panic!("try merge on different types")
}
(IntermediateBucketResult::Histogram { .. }, _) => {
panic!("try merge on different types")
}
}
}
}
trait MergeFruits {
fn merge_fruits(&mut self, other: Self);
}
fn merge_maps<V: MergeFruits + Clone>(
entries_left: &mut FnvHashMap<SerializedKey, V>,
mut entries_right: FnvHashMap<SerializedKey, V>,
) {
for (name, entry_left) in entries_left.iter_mut() {
if let Some(entry_right) = entries_right.remove(name) {
entry_left.merge_fruits(entry_right);
}
}
for (key, res) in entries_right.into_iter() {
entries_left.entry(key).or_insert(res);
}
}
/// This is the histogram entry for a bucket, which contains a key, count, and optionally
/// sub_aggregations.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct IntermediateHistogramBucketEntry {
/// The key by which the bucket is uniquely identified.
pub key: f64,
/// The number of documents in the bucket.
pub doc_count: u64,
/// The sub_aggregation in this bucket.
pub sub_aggregation: IntermediateAggregationResults,
}
impl From<SegmentHistogramBucketEntry> for IntermediateHistogramBucketEntry {
fn from(entry: SegmentHistogramBucketEntry) -> Self {
IntermediateHistogramBucketEntry {
key: entry.key,
doc_count: entry.doc_count,
sub_aggregation: Default::default(),
}
}
}
impl
From<(
SegmentHistogramBucketEntry,
SegmentAggregationResultsCollector,
)> for IntermediateHistogramBucketEntry
{
fn from(
entry: (
SegmentHistogramBucketEntry,
SegmentAggregationResultsCollector,
),
) -> Self {
IntermediateHistogramBucketEntry {
key: entry.0.key,
doc_count: entry.0.doc_count,
sub_aggregation: entry.1.into(),
}
}
}
/// This is the range entry for a bucket, which contains a key, count, and optionally
/// sub_aggregations.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct IntermediateRangeBucketEntry {
/// The key by which the bucket is uniquely identified.
pub key: Key,
/// The number of documents in the bucket.
pub doc_count: u64,
pub(crate) values: Option<Vec<u64>>,
/// The sub_aggregation in this bucket.
pub sub_aggregation: IntermediateAggregationResults,
/// The from range of the bucket. Equals f64::MIN when None.
#[serde(skip_serializing_if = "Option::is_none")]
pub from: Option<f64>,
/// The to range of the bucket. Equals f64::MAX when None.
#[serde(skip_serializing_if = "Option::is_none")]
pub to: Option<f64>,
}
impl From<SegmentRangeBucketEntry> for IntermediateRangeBucketEntry {
fn from(entry: SegmentRangeBucketEntry) -> Self {
let sub_aggregation = if let Some(sub_aggregation) = entry.sub_aggregation {
sub_aggregation.into()
} else {
Default::default()
};
IntermediateRangeBucketEntry {
key: entry.key,
doc_count: entry.doc_count,
values: None,
sub_aggregation,
to: entry.to,
from: entry.from,
}
}
}
impl MergeFruits for IntermediateRangeBucketEntry {
fn merge_fruits(&mut self, other: IntermediateRangeBucketEntry) {
self.doc_count += other.doc_count;
self.sub_aggregation.merge_fruits(other.sub_aggregation);
}
}
impl MergeFruits for IntermediateHistogramBucketEntry {
fn merge_fruits(&mut self, other: IntermediateHistogramBucketEntry) {
self.doc_count += other.doc_count;
self.sub_aggregation.merge_fruits(other.sub_aggregation);
}
}
#[cfg(test)]
mod tests {
use std::collections::HashMap;
use pretty_assertions::assert_eq;
use super::*;
fn get_sub_test_tree(data: &[(String, u64)]) -> IntermediateAggregationResults {
let mut map = HashMap::new();
let mut buckets = FnvHashMap::default();
for (key, doc_count) in data {
buckets.insert(
key.to_string(),
IntermediateRangeBucketEntry {
key: Key::Str(key.to_string()),
doc_count: *doc_count,
values: None,
sub_aggregation: Default::default(),
from: None,
to: None,
},
);
}
map.insert(
"my_agg_level2".to_string(),
IntermediateBucketResult::Range(buckets),
);
IntermediateAggregationResults {
buckets: Some(VecWithNames::from_entries(map.into_iter().collect())),
metrics: Default::default(),
}
}
fn get_intermediat_tree_with_ranges(
data: &[(String, u64, String, u64)],
) -> IntermediateAggregationResults {
let mut map = HashMap::new();
let mut buckets: FnvHashMap<_, _> = Default::default();
for (key, doc_count, sub_aggregation_key, sub_aggregation_count) in data {
buckets.insert(
key.to_string(),
IntermediateRangeBucketEntry {
key: Key::Str(key.to_string()),
doc_count: *doc_count,
values: None,
from: None,
to: None,
sub_aggregation: get_sub_test_tree(&[(
sub_aggregation_key.to_string(),
*sub_aggregation_count,
)]),
},
);
}
map.insert(
"my_agg_level1".to_string(),
IntermediateBucketResult::Range(buckets),
);
IntermediateAggregationResults {
buckets: Some(VecWithNames::from_entries(map.into_iter().collect())),
metrics: Default::default(),
}
}
#[test]
fn test_merge_fruits_tree_1() {
let mut tree_left = get_intermediat_tree_with_ranges(&[
("red".to_string(), 50, "1900".to_string(), 25),
("blue".to_string(), 30, "1900".to_string(), 30),
]);
let tree_right = get_intermediat_tree_with_ranges(&[
("red".to_string(), 60, "1900".to_string(), 30),
("blue".to_string(), 25, "1900".to_string(), 50),
]);
tree_left.merge_fruits(tree_right);
let tree_expected = get_intermediat_tree_with_ranges(&[
("red".to_string(), 110, "1900".to_string(), 55),
("blue".to_string(), 55, "1900".to_string(), 80),
]);
assert_eq!(tree_left, tree_expected);
}
#[test]
fn test_merge_fruits_tree_2() {
let mut tree_left = get_intermediat_tree_with_ranges(&[
("red".to_string(), 50, "1900".to_string(), 25),
("blue".to_string(), 30, "1900".to_string(), 30),
]);
let tree_right = get_intermediat_tree_with_ranges(&[
("red".to_string(), 60, "1900".to_string(), 30),
("green".to_string(), 25, "1900".to_string(), 50),
]);
tree_left.merge_fruits(tree_right);
let tree_expected = get_intermediat_tree_with_ranges(&[
("red".to_string(), 110, "1900".to_string(), 55),
("blue".to_string(), 30, "1900".to_string(), 30),
("green".to_string(), 25, "1900".to_string(), 50),
]);
assert_eq!(tree_left, tree_expected);
}
#[test]
fn test_merge_fruits_tree_empty() {
let mut tree_left = get_intermediat_tree_with_ranges(&[
("red".to_string(), 50, "1900".to_string(), 25),
("blue".to_string(), 30, "1900".to_string(), 30),
]);
let orig = tree_left.clone();
tree_left.merge_fruits(IntermediateAggregationResults::default());
assert_eq!(tree_left, orig);
}
}


@@ -0,0 +1,114 @@
use std::fmt::Debug;
use serde::{Deserialize, Serialize};
use crate::aggregation::f64_from_fastfield_u64;
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader};
use crate::schema::Type;
use crate::DocId;
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
/// A single-value metric aggregation that computes the average of numeric values that are
/// extracted from the aggregated documents.
/// Supported field types are u64, i64, and f64.
/// See [super::SingleMetricResult] for return value.
///
/// # JSON Format
/// ```json
/// {
/// "avg": {
/// "field": "score",
/// }
/// }
/// ```
pub struct AverageAggregation {
/// The field name to compute the stats on.
pub field: String,
}
impl AverageAggregation {
/// Create new AverageAggregation from a field.
pub fn from_field_name(field_name: String) -> Self {
AverageAggregation { field: field_name }
}
/// Return the field name.
pub fn field_name(&self) -> &str {
&self.field
}
}
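A hedged sketch of the equivalent request built programmatically; the aggregation name "avg_score" and the field "score" are placeholders, and the request types mirror those used in the tests elsewhere in this diff:

```rust
// Sketch only: equivalent to the JSON request above.
use crate::aggregation::agg_req::{Aggregation, Aggregations, MetricAggregation};

let agg_req: Aggregations = vec![(
    "avg_score".to_string(),
    Aggregation::Metric(MetricAggregation::Average(
        AverageAggregation::from_field_name("score".to_string()),
    )),
)]
.into_iter()
.collect();
```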
#[derive(Clone, PartialEq)]
pub(crate) struct SegmentAverageCollector {
pub data: IntermediateAverage,
field_type: Type,
}
impl Debug for SegmentAverageCollector {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("AverageCollector")
.field("data", &self.data)
.finish()
}
}
impl SegmentAverageCollector {
pub fn from_req(field_type: Type) -> Self {
Self {
field_type,
data: Default::default(),
}
}
pub(crate) fn collect_block(&mut self, doc: &[DocId], field: &DynamicFastFieldReader<u64>) {
let mut iter = doc.chunks_exact(4);
for docs in iter.by_ref() {
let val1 = field.get(docs[0]);
let val2 = field.get(docs[1]);
let val3 = field.get(docs[2]);
let val4 = field.get(docs[3]);
let val1 = f64_from_fastfield_u64(val1, &self.field_type);
let val2 = f64_from_fastfield_u64(val2, &self.field_type);
let val3 = f64_from_fastfield_u64(val3, &self.field_type);
let val4 = f64_from_fastfield_u64(val4, &self.field_type);
self.data.collect(val1);
self.data.collect(val2);
self.data.collect(val3);
self.data.collect(val4);
}
for doc in iter.remainder() {
let val = field.get(*doc);
let val = f64_from_fastfield_u64(val, &self.field_type);
self.data.collect(val);
}
}
}
/// Contains mergeable version of average data.
#[derive(Default, Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct IntermediateAverage {
pub(crate) sum: f64,
pub(crate) doc_count: u64,
}
impl IntermediateAverage {
pub(crate) fn from_collector(collector: SegmentAverageCollector) -> Self {
collector.data
}
/// Merge average data into this instance.
pub fn merge_fruits(&mut self, other: IntermediateAverage) {
self.sum += other.sum;
self.doc_count += other.doc_count;
}
/// Compute the final result; returns `None` if no documents were collected.
pub fn finalize(&self) -> Option<f64> {
if self.doc_count == 0 {
None
} else {
Some(self.sum / self.doc_count as f64)
}
}
#[inline]
fn collect(&mut self, val: f64) {
self.doc_count += 1;
self.sum += val;
}
}
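As a worked example of the merge semantics; since the fields are crate-private, such a check would live inside the crate, e.g. next to its tests:

```rust
// Sketch only: two partial averages, e.g. from two segments.
let mut left = IntermediateAverage { sum: 10.0, doc_count: 4 };
let right = IntermediateAverage { sum: 20.0, doc_count: 6 };
left.merge_fruits(right);
assert_eq!(left.finalize(), Some(3.0)); // (10.0 + 20.0) / (4 + 6)
```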


@@ -0,0 +1,30 @@
//! Module for all metric aggregations.
//!
//! The aggregations in this family compute metrics, see [super::agg_req::MetricAggregation] for
//! details.
mod average;
mod stats;
pub use average::*;
use serde::{Deserialize, Serialize};
pub use stats::*;
/// Single-metric aggregations use this common result structure.
///
/// The main reason for the wrapper is to match the Elasticsearch output structure.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct SingleMetricResult {
/// The value of the single value metric.
pub value: Option<f64>,
}
impl From<f64> for SingleMetricResult {
fn from(value: f64) -> Self {
Self { value: Some(value) }
}
}
impl From<Option<f64>> for SingleMetricResult {
fn from(value: Option<f64>) -> Self {
Self { value }
}
}
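For illustration, a hedged sketch of the serialized shape this wrapper produces, matching Elasticsearch's single-value metric output:

```rust
// Sketch only: `From<f64>` wraps the value, serde produces {"value": ...}.
let result: SingleMetricResult = 42.0.into();
assert_eq!(serde_json::to_string(&result).unwrap(), r#"{"value":42.0}"#);
```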


@@ -0,0 +1,353 @@
use serde::{Deserialize, Serialize};
use crate::aggregation::f64_from_fastfield_u64;
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader};
use crate::schema::Type;
use crate::DocId;
/// A multi-value metric aggregation that computes stats of numeric values that are
/// extracted from the aggregated documents.
/// Supported field types are u64, i64, and f64.
/// See [Stats] for returned statistics.
///
/// # JSON Format
/// ```json
/// {
/// "stats": {
/// "field": "score",
/// }
/// }
/// ```
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct StatsAggregation {
/// The field name to compute the stats on.
pub field: String,
}
impl StatsAggregation {
/// Create new StatsAggregation from a field.
pub fn from_field_name(field_name: String) -> Self {
StatsAggregation { field: field_name }
}
/// Return the field name.
pub fn field_name(&self) -> &str {
&self.field
}
}
/// Stats contains a collection of statistics.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct Stats {
/// The number of documents.
pub count: usize,
/// The sum of the fast field values.
pub sum: f64,
/// The standard deviation of the fast field values. None for count == 0.
pub standard_deviation: Option<f64>,
/// The min value of the fast field values.
pub min: Option<f64>,
/// The max value of the fast field values.
pub max: Option<f64>,
/// The average of the values. None for count == 0.
pub avg: Option<f64>,
}
/// IntermediateStats contains the mergeable version for stats.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct IntermediateStats {
count: usize,
sum: f64,
squared_sum: f64,
min: f64,
max: f64,
}
impl Default for IntermediateStats {
fn default() -> Self {
Self {
count: 0,
sum: 0.0,
squared_sum: 0.0,
min: f64::MAX,
max: f64::MIN,
}
}
}
impl IntermediateStats {
pub(crate) fn avg(&self) -> Option<f64> {
if self.count == 0 {
None
} else {
Some(self.sum / (self.count as f64))
}
}
fn square_mean(&self) -> f64 {
self.squared_sum / (self.count as f64)
}
pub(crate) fn standard_deviation(&self) -> Option<f64> {
self.avg()
.map(|average| (self.square_mean() - average * average).sqrt())
}
/// Merge data from other stats into this instance.
pub fn merge_fruits(&mut self, other: IntermediateStats) {
self.count += other.count;
self.sum += other.sum;
self.squared_sum += other.squared_sum;
self.min = self.min.min(other.min);
self.max = self.max.max(other.max);
}
/// Compute the final result.
pub fn finalize(&self) -> Stats {
let min = if self.count == 0 {
None
} else {
Some(self.min)
};
let max = if self.count == 0 {
None
} else {
Some(self.max)
};
Stats {
count: self.count,
sum: self.sum,
standard_deviation: self.standard_deviation(),
min,
max,
avg: self.avg(),
}
}
#[inline]
fn collect(&mut self, value: f64) {
self.count += 1;
self.sum += value;
self.squared_sum += value * value;
self.min = self.min.min(value);
self.max = self.max.max(value);
}
}
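As a worked example of the formulas above (population standard deviation via sqrt(E[X^2] - E[X]^2)); since `collect` is private, such a check would sit next to this module's tests:

```rust
// Sketch only: values [1.0, 2.0, 3.0] -> count 3, sum 6, squared_sum 14.
let mut stats = IntermediateStats::default();
for v in [1.0, 2.0, 3.0] {
    stats.collect(v);
}
let final_stats = stats.finalize();
assert_eq!(final_stats.count, 3);
assert_eq!(final_stats.avg, Some(2.0));
// standard deviation = sqrt(14/3 - 2*2) = sqrt(2/3), about 0.8165
assert!((final_stats.standard_deviation.unwrap() - (2.0_f64 / 3.0).sqrt()).abs() < 1e-9);
```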
#[derive(Clone, Debug, PartialEq)]
pub(crate) struct SegmentStatsCollector {
pub(crate) stats: IntermediateStats,
field_type: Type,
}
impl SegmentStatsCollector {
pub fn from_req(field_type: Type) -> Self {
Self {
field_type,
stats: IntermediateStats::default(),
}
}
pub(crate) fn collect_block(&mut self, doc: &[DocId], field: &DynamicFastFieldReader<u64>) {
let mut iter = doc.chunks_exact(4);
for docs in iter.by_ref() {
let val1 = field.get(docs[0]);
let val2 = field.get(docs[1]);
let val3 = field.get(docs[2]);
let val4 = field.get(docs[3]);
let val1 = f64_from_fastfield_u64(val1, &self.field_type);
let val2 = f64_from_fastfield_u64(val2, &self.field_type);
let val3 = f64_from_fastfield_u64(val3, &self.field_type);
let val4 = f64_from_fastfield_u64(val4, &self.field_type);
self.stats.collect(val1);
self.stats.collect(val2);
self.stats.collect(val3);
self.stats.collect(val4);
}
for doc in iter.remainder() {
let val = field.get(*doc);
let val = f64_from_fastfield_u64(val, &self.field_type);
self.stats.collect(val);
}
}
}
#[cfg(test)]
mod tests {
use std::iter;
use serde_json::Value;
use crate::aggregation::agg_req::{
Aggregation, Aggregations, BucketAggregation, BucketAggregationType, MetricAggregation,
RangeAggregation,
};
use crate::aggregation::agg_result::AggregationResults;
use crate::aggregation::metric::StatsAggregation;
use crate::aggregation::tests::{get_test_index_2_segments, get_test_index_from_values};
use crate::aggregation::AggregationCollector;
use crate::query::{AllQuery, TermQuery};
use crate::schema::IndexRecordOption;
use crate::Term;
#[test]
fn test_aggregation_stats_empty_index() -> crate::Result<()> {
// test index without segments
let values = vec![];
let index = get_test_index_from_values(false, &values)?;
let agg_req_1: Aggregations = vec![(
"stats".to_string(),
Aggregation::Metric(MetricAggregation::Stats(StatsAggregation::from_field_name(
"score".to_string(),
))),
)]
.into_iter()
.collect();
let collector = AggregationCollector::from_aggs(agg_req_1);
let reader = index.reader()?;
let searcher = reader.searcher();
let agg_res: AggregationResults = searcher.search(&AllQuery, &collector).unwrap();
let res: Value = serde_json::from_str(&serde_json::to_string(&agg_res)?)?;
assert_eq!(
res["stats"],
json!({
"avg": Value::Null,
"count": 0,
"max": Value::Null,
"min": Value::Null,
"standard_deviation": Value::Null,
"sum": 0.0
})
);
Ok(())
}
#[test]
fn test_aggregation_stats() -> crate::Result<()> {
let index = get_test_index_2_segments(false)?;
let reader = index.reader()?;
let text_field = reader.searcher().schema().get_field("text").unwrap();
let term_query = TermQuery::new(
Term::from_field_text(text_field, "cool"),
IndexRecordOption::Basic,
);
let agg_req_1: Aggregations = vec![
(
"stats_i64".to_string(),
Aggregation::Metric(MetricAggregation::Stats(StatsAggregation::from_field_name(
"score_i64".to_string(),
))),
),
(
"stats_f64".to_string(),
Aggregation::Metric(MetricAggregation::Stats(StatsAggregation::from_field_name(
"score_f64".to_string(),
))),
),
(
"stats".to_string(),
Aggregation::Metric(MetricAggregation::Stats(StatsAggregation::from_field_name(
"score".to_string(),
))),
),
(
"range".to_string(),
Aggregation::Bucket(BucketAggregation {
bucket_agg: BucketAggregationType::Range(RangeAggregation {
field: "score".to_string(),
ranges: vec![
(3f64..7f64).into(),
(7f64..19f64).into(),
(19f64..20f64).into(),
],
}),
sub_aggregation: iter::once((
"stats".to_string(),
Aggregation::Metric(MetricAggregation::Stats(
StatsAggregation::from_field_name("score".to_string()),
)),
))
.collect(),
}),
),
]
.into_iter()
.collect();
let collector = AggregationCollector::from_aggs(agg_req_1);
let searcher = reader.searcher();
let agg_res: AggregationResults = searcher.search(&term_query, &collector).unwrap();
let res: Value = serde_json::from_str(&serde_json::to_string(&agg_res)?)?;
assert_eq!(
res["stats"],
json!({
"avg": 12.142857142857142,
"count": 7,
"max": 44.0,
"min": 1.0,
"standard_deviation": 13.65313748796613,
"sum": 85.0
})
);
assert_eq!(
res["stats_i64"],
json!({
"avg": 12.142857142857142,
"count": 7,
"max": 44.0,
"min": 1.0,
"standard_deviation": 13.65313748796613,
"sum": 85.0
})
);
assert_eq!(
res["stats_f64"],
json!({
"avg": 12.214285714285714,
"count": 7,
"max": 44.5,
"min": 1.0,
"standard_deviation": 13.819905785437443,
"sum": 85.5
})
);
assert_eq!(
res["range"]["buckets"][2]["stats"],
json!({
"avg": 10.666666666666666,
"count": 3,
"max": 14.0,
"min": 7.0,
"standard_deviation": 2.867441755680877,
"sum": 32.0
})
);
assert_eq!(
res["range"]["buckets"][3]["stats"],
json!({
"avg": serde_json::Value::Null,
"count": 0,
"max": serde_json::Value::Null,
"min": serde_json::Value::Null,
"standard_deviation": serde_json::Value::Null,
"sum": 0.0,
})
);
Ok(())
}
}

src/aggregation/mod.rs (new file, 1369 lines)

File diff suppressed because it is too large

@@ -0,0 +1,233 @@
//! Contains the aggregation tree that is used during collection in a segment.
//! This tree contains data structures optimized for fast collection.
//! The tree can be converted to an intermediate tree, which contains data structures optimized for
//! merging.
use std::fmt::Debug;
use super::agg_req::MetricAggregation;
use super::agg_req_with_accessor::{
AggregationsWithAccessor, BucketAggregationWithAccessor, MetricAggregationWithAccessor,
};
use super::bucket::{SegmentHistogramCollector, SegmentRangeCollector};
use super::metric::{
AverageAggregation, SegmentAverageCollector, SegmentStatsCollector, StatsAggregation,
};
use super::{Key, VecWithNames};
use crate::aggregation::agg_req::BucketAggregationType;
use crate::DocId;
pub(crate) const DOC_BLOCK_SIZE: usize = 64;
pub(crate) type DocBlock = [DocId; DOC_BLOCK_SIZE];
#[derive(Clone, PartialEq)]
pub(crate) struct SegmentAggregationResultsCollector {
pub(crate) metrics: Option<VecWithNames<SegmentMetricResultCollector>>,
pub(crate) buckets: Option<VecWithNames<SegmentBucketResultCollector>>,
staged_docs: DocBlock,
num_staged_docs: usize,
}
impl Debug for SegmentAggregationResultsCollector {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("SegmentAggregationResultsCollector")
.field("metrics", &self.metrics)
.field("buckets", &self.buckets)
.field("staged_docs", &&self.staged_docs[..self.num_staged_docs])
.field("num_staged_docs", &self.num_staged_docs)
.finish()
}
}
impl SegmentAggregationResultsCollector {
pub(crate) fn from_req_and_validate(req: &AggregationsWithAccessor) -> crate::Result<Self> {
let buckets = req
.buckets
.entries()
.map(|(key, req)| {
Ok((
key.to_string(),
SegmentBucketResultCollector::from_req_and_validate(req)?,
))
})
.collect::<crate::Result<Vec<(String, _)>>>()?;
let metrics = req
.metrics
.entries()
.map(|(key, req)| {
Ok((
key.to_string(),
SegmentMetricResultCollector::from_req_and_validate(req)?,
))
})
.collect::<crate::Result<Vec<(String, _)>>>()?;
let metrics = if metrics.is_empty() {
None
} else {
Some(VecWithNames::from_entries(metrics))
};
let buckets = if buckets.is_empty() {
None
} else {
Some(VecWithNames::from_entries(buckets))
};
Ok(SegmentAggregationResultsCollector {
metrics,
buckets,
staged_docs: [0; DOC_BLOCK_SIZE],
num_staged_docs: 0,
})
}
#[inline]
pub(crate) fn collect(
&mut self,
doc: crate::DocId,
agg_with_accessor: &AggregationsWithAccessor,
) {
self.staged_docs[self.num_staged_docs] = doc;
self.num_staged_docs += 1;
if self.num_staged_docs == self.staged_docs.len() {
self.flush_staged_docs(agg_with_accessor, false);
}
}
pub(crate) fn flush_staged_docs(
&mut self,
agg_with_accessor: &AggregationsWithAccessor,
force_flush: bool,
) {
if let Some(metrics) = &mut self.metrics {
for (collector, agg_with_accessor) in
metrics.values_mut().zip(agg_with_accessor.metrics.values())
{
collector
.collect_block(&self.staged_docs[..self.num_staged_docs], agg_with_accessor);
}
}
if let Some(buckets) = &mut self.buckets {
for (collector, agg_with_accessor) in
buckets.values_mut().zip(agg_with_accessor.buckets.values())
{
collector.collect_block(
&self.staged_docs[..self.num_staged_docs],
agg_with_accessor,
force_flush,
);
}
}
self.num_staged_docs = 0;
}
}
#[derive(Clone, Debug, PartialEq)]
pub(crate) enum SegmentMetricResultCollector {
Average(SegmentAverageCollector),
Stats(SegmentStatsCollector),
}
impl SegmentMetricResultCollector {
pub fn from_req_and_validate(req: &MetricAggregationWithAccessor) -> crate::Result<Self> {
match &req.metric {
MetricAggregation::Average(AverageAggregation { field: _ }) => {
Ok(SegmentMetricResultCollector::Average(
SegmentAverageCollector::from_req(req.field_type),
))
}
MetricAggregation::Stats(StatsAggregation { field: _ }) => {
Ok(SegmentMetricResultCollector::Stats(
SegmentStatsCollector::from_req(req.field_type),
))
}
}
}
pub(crate) fn collect_block(&mut self, doc: &[DocId], metric: &MetricAggregationWithAccessor) {
match self {
SegmentMetricResultCollector::Average(avg_collector) => {
avg_collector.collect_block(doc, &metric.accessor);
}
SegmentMetricResultCollector::Stats(stats_collector) => {
stats_collector.collect_block(doc, &metric.accessor);
}
}
}
}
/// SegmentBucketAggregationResultCollectors will have specialized buckets for collection inside
/// segments.
/// The typical structure of Map<Key, Bucket> is not suitable during collection for performance
/// reasons.
#[derive(Clone, Debug, PartialEq)]
pub(crate) enum SegmentBucketResultCollector {
Range(SegmentRangeCollector),
Histogram(SegmentHistogramCollector),
}
impl SegmentBucketResultCollector {
pub fn from_req_and_validate(req: &BucketAggregationWithAccessor) -> crate::Result<Self> {
match &req.bucket_agg {
BucketAggregationType::Range(range_req) => {
Ok(Self::Range(SegmentRangeCollector::from_req_and_validate(
range_req,
&req.sub_aggregation,
req.field_type,
)?))
}
BucketAggregationType::Histogram(histogram) => Ok(Self::Histogram(
SegmentHistogramCollector::from_req_and_validate(
histogram,
&req.sub_aggregation,
req.field_type,
&req.accessor,
)?,
)),
}
}
#[inline]
pub(crate) fn collect_block(
&mut self,
doc: &[DocId],
bucket_with_accessor: &BucketAggregationWithAccessor,
force_flush: bool,
) {
match self {
SegmentBucketResultCollector::Range(range) => {
range.collect_block(doc, bucket_with_accessor, force_flush);
}
SegmentBucketResultCollector::Histogram(histogram) => {
histogram.collect_block(doc, bucket_with_accessor, force_flush)
}
}
}
}
#[derive(Clone, Debug, PartialEq)]
pub(crate) struct SegmentHistogramBucketEntry {
pub key: f64,
pub doc_count: u64,
}
#[derive(Clone, PartialEq)]
pub(crate) struct SegmentRangeBucketEntry {
pub key: Key,
pub doc_count: u64,
pub sub_aggregation: Option<SegmentAggregationResultsCollector>,
/// The from range of the bucket. Equals f64::MIN when None.
pub from: Option<f64>,
/// The to range of the bucket. Equals f64::MAX when None.
pub to: Option<f64>,
}
impl Debug for SegmentRangeBucketEntry {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("SegmentRangeBucketEntry")
.field("key", &self.key)
.field("doc_count", &self.doc_count)
.field("from", &self.from)
.field("to", &self.to)
.finish()
}
}


@@ -19,7 +19,7 @@ use crate::{DocId, Score};
///
/// # Warning
///
/// f64 field. are not supported.
/// f64 fields are not supported.
#[derive(Clone)]
pub struct HistogramCollector {
min_value: u64,
@@ -152,9 +152,9 @@ mod tests {
use query::AllQuery;
use super::{add_vecs, HistogramCollector, HistogramComputer};
use crate::chrono::{TimeZone, Utc};
use crate::schema::{Schema, FAST};
use crate::{doc, query, Index};
use crate::time::{Date, Month};
use crate::{doc, query, DateTime, Index};
#[test]
fn test_add_histograms_simple() {
@@ -273,16 +273,20 @@ mod tests {
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut writer = index.writer_with_num_threads(1, 4_000_000)?;
writer.add_document(doc!(date_field=>Utc.ymd(1982, 9, 17).and_hms(0, 0,0)))?;
writer.add_document(doc!(date_field=>Utc.ymd(1986, 3, 9).and_hms(0, 0, 0)))?;
writer.add_document(doc!(date_field=>Utc.ymd(1983, 9, 27).and_hms(0, 0, 0)))?;
writer.add_document(doc!(date_field=>DateTime::new_primitive(Date::from_calendar_date(1982, Month::September, 17)?.with_hms(0, 0, 0)?)))?;
writer.add_document(
doc!(date_field=>DateTime::new_primitive(Date::from_calendar_date(1986, Month::March, 9)?.with_hms(0, 0, 0)?)),
)?;
writer.add_document(doc!(date_field=>DateTime::new_primitive(Date::from_calendar_date(1983, Month::September, 27)?.with_hms(0, 0, 0)?)))?;
writer.commit()?;
let reader = index.reader()?;
let searcher = reader.searcher();
let all_query = AllQuery;
let week_histogram_collector = HistogramCollector::new(
date_field,
Utc.ymd(1980, 1, 1).and_hms(0, 0, 0),
DateTime::new_primitive(
Date::from_calendar_date(1980, Month::January, 1)?.with_hms(0, 0, 0)?,
),
3600 * 24 * 365, // it is just for a unit test... sorry leap years.
10,
);


@@ -1,11 +1,11 @@
use std::str::FromStr;
use super::*;
use crate::collector::{Count, FilterCollector, TopDocs};
use crate::core::SegmentReader;
use crate::fastfield::{BytesFastFieldReader, DynamicFastFieldReader, FastFieldReader};
use crate::query::{AllQuery, QueryParser};
use crate::schema::{Field, Schema, FAST, TEXT};
use crate::time::format_description::well_known::Rfc3339;
use crate::time::OffsetDateTime;
use crate::{doc, DateTime, DocAddress, DocId, Document, Index, Score, Searcher, SegmentOrdinal};
pub const TEST_COLLECTOR_WITH_SCORE: TestCollector = TestCollector {
@@ -26,11 +26,11 @@ pub fn test_filter_collector() -> crate::Result<()> {
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 10_000_000)?;
index_writer.add_document(doc!(title => "The Name of the Wind", price => 30_200u64, date => DateTime::from_str("1898-04-09T00:00:00+00:00").unwrap()))?;
index_writer.add_document(doc!(title => "The Diary of Muadib", price => 29_240u64, date => DateTime::from_str("2020-04-09T00:00:00+00:00").unwrap()))?;
index_writer.add_document(doc!(title => "The Diary of Anne Frank", price => 18_240u64, date => DateTime::from_str("2019-04-20T00:00:00+00:00").unwrap()))?;
index_writer.add_document(doc!(title => "A Dairy Cow", price => 21_240u64, date => DateTime::from_str("2019-04-09T00:00:00+00:00").unwrap()))?;
index_writer.add_document(doc!(title => "The Diary of a Young Girl", price => 20_120u64, date => DateTime::from_str("2018-04-09T00:00:00+00:00").unwrap()))?;
index_writer.add_document(doc!(title => "The Name of the Wind", price => 30_200u64, date => DateTime::new_utc(OffsetDateTime::parse("1898-04-09T00:00:00+00:00", &Rfc3339).unwrap())))?;
index_writer.add_document(doc!(title => "The Diary of Muadib", price => 29_240u64, date => DateTime::new_utc(OffsetDateTime::parse("2020-04-09T00:00:00+00:00", &Rfc3339).unwrap())))?;
index_writer.add_document(doc!(title => "The Diary of Anne Frank", price => 18_240u64, date => DateTime::new_utc(OffsetDateTime::parse("2019-04-20T00:00:00+00:00", &Rfc3339).unwrap())))?;
index_writer.add_document(doc!(title => "A Dairy Cow", price => 21_240u64, date => DateTime::new_utc(OffsetDateTime::parse("2019-04-09T00:00:00+00:00", &Rfc3339).unwrap())))?;
index_writer.add_document(doc!(title => "The Diary of a Young Girl", price => 20_120u64, date => DateTime::new_utc(OffsetDateTime::parse("2018-04-09T00:00:00+00:00", &Rfc3339).unwrap())))?;
index_writer.commit()?;
let reader = index.reader()?;
@@ -55,7 +55,9 @@ pub fn test_filter_collector() -> crate::Result<()> {
assert_eq!(filtered_top_docs.len(), 0);
fn date_filter(value: DateTime) -> bool {
(value - DateTime::from_str("2019-04-09T00:00:00+00:00").unwrap()).num_weeks() > 0
(value.to_utc() - OffsetDateTime::parse("2019-04-09T00:00:00+00:00", &Rfc3339).unwrap())
.whole_weeks()
> 0
}
let filter_dates_collector = FilterCollector::new(date, &date_filter, TopDocs::with_limit(5));
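For reference, a minimal sketch of the two DateTime construction paths the updated tests rely on, using the `time` types re-exported by tantivy; the function name and date literals are arbitrary, and the `?` conversions depend on the `From<time::error::...>` impls added further down in this changeset.

use tantivy::time::format_description::well_known::Rfc3339;
use tantivy::time::{Date, Month, OffsetDateTime};
use tantivy::DateTime;

fn example_dates() -> tantivy::Result<(DateTime, DateTime)> {
    // UTC value parsed from an RFC 3339 string.
    let parsed =
        DateTime::new_utc(OffsetDateTime::parse("2019-04-09T00:00:00+00:00", &Rfc3339)?);
    // Time-zone-free value built from calendar components.
    let built = DateTime::new_primitive(
        Date::from_calendar_date(1982, Month::September, 17)?.with_hms(0, 0, 0)?,
    );
    Ok((parsed, built))
}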


@@ -173,8 +173,7 @@ impl<T: PartialOrd + Clone> TopSegmentCollector<T> {
.collect()
}
/// Return true iff at least K documents have gone through
/// the collector.
/// Return true if more documents have been collected than the limit.
#[inline]
pub(crate) fn at_capacity(&self) -> bool {
self.heap.len() >= self.limit


@@ -714,7 +714,9 @@ mod tests {
use crate::collector::Collector;
use crate::query::{AllQuery, Query, QueryParser};
use crate::schema::{Field, Schema, FAST, STORED, TEXT};
use crate::{DocAddress, DocId, Index, IndexWriter, Score, SegmentReader};
use crate::time::format_description::well_known::Rfc3339;
use crate::time::OffsetDateTime;
use crate::{DateTime, DocAddress, DocId, Index, IndexWriter, Score, SegmentReader};
fn make_index() -> crate::Result<Index> {
let mut schema_builder = Schema::builder();
@@ -890,28 +892,32 @@ mod tests {
#[test]
fn test_top_field_collector_datetime() -> crate::Result<()> {
use std::str::FromStr;
let mut schema_builder = Schema::builder();
let name = schema_builder.add_text_field("name", TEXT);
let birthday = schema_builder.add_date_field("birthday", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
let pr_birthday = crate::DateTime::from_str("1898-04-09T00:00:00+00:00")?;
let pr_birthday = DateTime::new_utc(OffsetDateTime::parse(
"1898-04-09T00:00:00+00:00",
&Rfc3339,
)?);
index_writer.add_document(doc!(
name => "Paul Robeson",
birthday => pr_birthday
birthday => pr_birthday,
))?;
let mr_birthday = crate::DateTime::from_str("1947-11-08T00:00:00+00:00")?;
let mr_birthday = DateTime::new_utc(OffsetDateTime::parse(
"1947-11-08T00:00:00+00:00",
&Rfc3339,
)?);
index_writer.add_document(doc!(
name => "Minnie Riperton",
birthday => mr_birthday
birthday => mr_birthday,
))?;
index_writer.commit()?;
let searcher = index.reader()?.searcher();
let top_collector = TopDocs::with_limit(3).order_by_fast_field(birthday);
let top_docs: Vec<(crate::DateTime, DocAddress)> =
searcher.search(&AllQuery, &top_collector)?;
let top_docs: Vec<(DateTime, DocAddress)> = searcher.search(&AllQuery, &top_collector)?;
assert_eq!(
&top_docs[..],
&[


@@ -64,7 +64,7 @@ fn load_metas(
/// let body_field = schema_builder.add_text_field("body", TEXT);
/// let number_field = schema_builder.add_u64_field(
/// "number",
/// IntOptions::default().set_fast(Cardinality::SingleValue),
/// NumericOptions::default().set_fast(Cardinality::SingleValue),
/// );
///
/// let schema = schema_builder.build();
@@ -781,24 +781,24 @@ mod tests {
for i in 0u64..8_000u64 {
writer.add_document(doc!(field => i))?;
}
let (sender, receiver) = crossbeam::channel::unbounded();
let _handle = directory.watch(WatchCallback::new(move || {
let _ = sender.send(());
}));
writer.commit()?;
let mem_right_after_commit = directory.total_mem_usage();
assert!(receiver.recv().is_ok());
let reader = index
.reader_builder()
.reload_policy(ReloadPolicy::Manual)
.try_into()?;
assert_eq!(reader.searcher().num_docs(), 8_000);
assert_eq!(reader.searcher().segment_readers().len(), 8);
writer.wait_merging_threads()?;
let mem_right_after_merge_finished = directory.total_mem_usage();
reader.reload().unwrap();
let searcher = reader.searcher();
assert_eq!(searcher.segment_readers().len(), 1);
assert_eq!(searcher.num_docs(), 8_000);
assert!(
mem_right_after_merge_finished < mem_right_after_commit,


@@ -239,7 +239,7 @@ impl InnerSegmentMeta {
///
/// Contains settings which are applied on the whole
/// index, like presort documents.
#[derive(Clone, Default, Serialize, Deserialize, Eq, PartialEq)]
#[derive(Clone, Debug, Default, Serialize, Deserialize, Eq, PartialEq)]
pub struct IndexSettings {
/// Sorts the documents by information
/// provided in `IndexSortByField`
@@ -254,7 +254,7 @@ pub struct IndexSettings {
/// Presorting documents can greatly performance
/// in some scenarios, by applying top n
/// optimizations.
#[derive(Clone, Serialize, Deserialize, Eq, PartialEq)]
#[derive(Clone, Debug, Serialize, Deserialize, Eq, PartialEq)]
pub struct IndexSortByField {
/// The field to sort the documents by
pub field: String,
@@ -262,7 +262,7 @@ pub struct IndexSortByField {
pub order: Order,
}
/// The order to sort by
#[derive(Clone, Serialize, Deserialize, Eq, PartialEq)]
#[derive(Clone, Debug, Serialize, Deserialize, Eq, PartialEq)]
pub enum Order {
/// Ascending Order
Asc,
@@ -298,12 +298,12 @@ pub struct IndexMeta {
pub schema: Schema,
/// Opstamp associated to the last `commit` operation.
pub opstamp: Opstamp,
#[serde(skip_serializing_if = "Option::is_none")]
/// Payload associated to the last commit.
///
/// Upon commit, clients can optionally add a small `String` payload to their commit
/// to help identify this commit.
/// This payload is entirely unused by tantivy.
#[serde(skip_serializing_if = "Option::is_none")]
pub payload: Option<String>,
}
@@ -374,6 +374,7 @@ impl fmt::Debug for IndexMeta {
mod tests {
use super::IndexMeta;
use crate::core::index_meta::UntrackedIndexMeta;
use crate::schema::{Schema, TEXT};
use crate::{IndexSettings, IndexSortByField, Order};
@@ -402,5 +403,10 @@ mod tests {
json,
r#"{"index_settings":{"sort_by_field":{"field":"text","order":"Asc"},"docstore_compression":"lz4"},"segments":[],"schema":[{"name":"text","type":"text","options":{"indexing":{"record":"position","fieldnorms":true,"tokenizer":"default"},"stored":false}}],"opstamp":0}"#
);
let deser_meta: UntrackedIndexMeta = serde_json::from_str(&json).unwrap();
assert_eq!(index_metas.index_settings, deser_meta.index_settings);
assert_eq!(index_metas.schema, deser_meta.schema);
assert_eq!(index_metas.opstamp, deser_meta.opstamp);
}
}
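With Debug now derived on these settings types, sort configuration can be logged directly; a tiny sketch, with "timestamp" standing in as a placeholder field name.

use tantivy::{IndexSortByField, Order};

fn log_sort_setting() {
    let sort = IndexSortByField {
        field: "timestamp".to_string(), // placeholder field name
        order: Order::Asc,
    };
    // Debug is derived on IndexSortByField and Order as of this change.
    println!("{sort:?}");
}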


@@ -88,7 +88,8 @@ impl InvertedIndexReader {
let postings_slice = self
.postings_file_slice
.slice(term_info.postings_range.clone());
block_postings.reset(term_info.doc_freq, postings_slice.read_bytes()?);
let postings_bytes = postings_slice.read_bytes()?;
block_postings.reset(term_info.doc_freq, postings_bytes)?;
Ok(())
}
@@ -197,3 +198,36 @@ impl InvertedIndexReader {
.unwrap_or(0u32))
}
}
#[cfg(feature = "quickwit")]
impl InvertedIndexReader {
pub(crate) async fn get_term_info_async(
&self,
term: &Term,
) -> crate::AsyncIoResult<Option<TermInfo>> {
self.termdict.get_async(term.value_bytes()).await
}
/// Returns a block postings given a `Term`.
/// This method is for an advanced usage only.
///
/// Most user should prefer using `read_postings` instead.
pub async fn warm_postings(
&self,
term: &Term,
with_positions: bool,
) -> crate::AsyncIoResult<()> {
let term_info_opt = self.get_term_info_async(term).await?;
if let Some(term_info) = term_info_opt {
self.postings_file_slice
.read_bytes_slice_async(term_info.postings_range.clone())
.await?;
if with_positions {
self.positions_file_slice
.read_bytes_slice_async(term_info.positions_range.clone())
.await?;
}
}
Ok(())
}
}
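A minimal usage sketch for the new warm-up path (behind the `quickwit` feature; written from inside the crate, and the helper name is arbitrary):

use crate::{SegmentReader, Term};

async fn warm_term(segment_reader: &SegmentReader, term: &Term) -> crate::Result<()> {
    let inverted_index = segment_reader.inverted_index(term.field())?;
    // Pull the term's postings (without positions) into memory ahead of query time.
    inverted_index.warm_postings(term, false).await?;
    Ok(())
}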


@@ -110,6 +110,13 @@ impl Searcher {
store_reader.get(doc_address.doc_id)
}
/// Fetches a document in an asynchronous manner.
#[cfg(feature = "quickwit")]
pub async fn doc_async(&self, doc_address: DocAddress) -> crate::Result<Document> {
let store_reader = &self.store_readers[doc_address.segment_ord as usize];
store_reader.get_async(doc_address.doc_id).await
}
/// Access the schema associated to the index of this searcher.
pub fn schema(&self) -> &Schema {
&self.schema
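A small sketch of the asynchronous fetch in use (also behind the `quickwit` feature; the helper name is arbitrary):

use crate::{DocAddress, Document, Score, Searcher};

async fn fetch_hits(
    searcher: &Searcher,
    hits: &[(Score, DocAddress)],
) -> crate::Result<Vec<Document>> {
    let mut docs = Vec::with_capacity(hits.len());
    for (_score, addr) in hits {
        // Same result as `doc`, but the store read is awaited instead of blocking.
        docs.push(searcher.doc_async(*addr).await?);
    }
    Ok(docs)
}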


@@ -70,7 +70,7 @@ impl SegmentReader {
self.max_doc - self.num_docs
}
/// Returns true iff some of the documents of the segment have been deleted.
/// Returns true if some of the documents of the segment have been deleted.
pub fn has_deletes(&self) -> bool {
self.num_deleted_docs() > 0
}
@@ -121,9 +121,8 @@ impl SegmentReader {
self.fieldnorm_readers.get_field(field)?.ok_or_else(|| {
let field_name = self.schema.get_field_name(field);
let err_msg = format!(
"Field norm not found for field {:?}. Was the field set to record norm during \
indexing?",
field_name
"Field norm not found for field {field_name:?}. Was the field set to record norm \
during indexing?"
);
crate::TantivyError::SchemaError(err_msg)
})
@@ -302,7 +301,7 @@ impl SegmentReader {
self.alive_bitset_opt.as_ref()
}
/// Returns true iff the `doc` is marked
/// Returns true if the `doc` is marked
/// as deleted.
pub fn is_deleted(&self, doc: DocId) -> bool {
self.alive_bitset()


@@ -96,9 +96,9 @@ fn retry_policy(is_blocking: bool) -> RetryPolicy {
///
/// There are currently two implementations of `Directory`
///
/// - The [`MMapDirectory`](struct.MmapDirectory.html), this
/// - The [`MMapDirectory`][crate::directory::MmapDirectory], this
/// should be your default choice.
/// - The [`RamDirectory`](struct.RamDirectory.html), which
/// - The [`RamDirectory`][crate::directory::RamDirectory], which
/// should be used mostly for tests.
pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// Opens a file and returns a boxed `FileHandle`.
@@ -128,7 +128,7 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// `DeleteError::DoesNotExist`.
fn delete(&self, path: &Path) -> Result<(), DeleteError>;
/// Returns true iff the file exists
/// Returns true if and only if the file exists
fn exists(&self, path: &Path) -> Result<bool, OpenReadError>;
/// Opens a writer for the *virtual file* associated with


@@ -2,6 +2,7 @@ use std::ops::{Deref, Range};
use std::sync::{Arc, Weak};
use std::{fmt, io};
use async_trait::async_trait;
use common::HasLen;
use stable_deref_trait::StableDeref;
@@ -18,18 +19,35 @@ pub type WeakArcBytes = Weak<dyn Deref<Target = [u8]> + Send + Sync + 'static>;
/// The underlying behavior is therefore specific to the `Directory` that created it.
/// Despite its name, a `FileSlice` may or may not directly map to an actual file
/// on the filesystem.
#[async_trait]
pub trait FileHandle: 'static + Send + Sync + HasLen + fmt::Debug {
/// Reads a slice of bytes.
///
/// This method may panic if the range requested is invalid.
fn read_bytes(&self, range: Range<usize>) -> io::Result<OwnedBytes>;
#[cfg(feature = "quickwit")]
#[doc(hidden)]
async fn read_bytes_async(
&self,
_byte_range: Range<usize>,
) -> crate::AsyncIoResult<OwnedBytes> {
Err(crate::error::AsyncIoError::AsyncUnsupported)
}
}
#[async_trait]
impl FileHandle for &'static [u8] {
fn read_bytes(&self, range: Range<usize>) -> io::Result<OwnedBytes> {
let bytes = &self[range];
Ok(OwnedBytes::new(bytes))
}
#[cfg(feature = "quickwit")]
async fn read_bytes_async(&self, byte_range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
Ok(self.read_bytes(byte_range)?)
}
}
impl<B> From<B> for FileSlice
@@ -102,6 +120,12 @@ impl FileSlice {
self.data.read_bytes(self.range.clone())
}
#[cfg(feature = "quickwit")]
#[doc(hidden)]
pub async fn read_bytes_async(&self) -> crate::AsyncIoResult<OwnedBytes> {
self.data.read_bytes_async(self.range.clone()).await
}
/// Reads a specific slice of data.
///
/// This is equivalent to running `file_slice.slice(from, to).read_bytes()`.
@@ -116,6 +140,23 @@ impl FileSlice {
.read_bytes(self.range.start + range.start..self.range.start + range.end)
}
#[cfg(feature = "quickwit")]
#[doc(hidden)]
pub async fn read_bytes_slice_async(
&self,
byte_range: Range<usize>,
) -> crate::AsyncIoResult<OwnedBytes> {
assert!(
self.range.start + byte_range.end <= self.range.end,
"`to` exceeds the fileslice length"
);
self.data
.read_bytes_async(
self.range.start + byte_range.start..self.range.start + byte_range.end,
)
.await
}
/// Splits the FileSlice at the given offset and return two file slices.
/// `file_slice[..split_offset]` and `file_slice[split_offset..]`.
///
@@ -160,10 +201,16 @@ impl FileSlice {
}
}
#[async_trait]
impl FileHandle for FileSlice {
fn read_bytes(&self, range: Range<usize>) -> io::Result<OwnedBytes> {
self.read_bytes_slice(range)
}
#[cfg(feature = "quickwit")]
async fn read_bytes_async(&self, byte_range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
self.read_bytes_slice_async(byte_range).await
}
}
impl HasLen for FileSlice {
@@ -172,6 +219,19 @@ impl HasLen for FileSlice {
}
}
#[async_trait]
impl FileHandle for OwnedBytes {
fn read_bytes(&self, range: Range<usize>) -> io::Result<OwnedBytes> {
Ok(self.slice(range))
}
#[cfg(feature = "quickwit")]
async fn read_bytes_async(&self, range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
let bytes = self.read_bytes(range)?;
Ok(bytes)
}
}
#[cfg(test)]
mod tests {
use std::io;
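For directory authors, a sketch of opting into the new async read path on a custom handle; `InMemoryHandle` is a hypothetical type used purely for illustration, and the imports mirror the ones at the top of this file.

use std::io;
use std::ops::Range;

use async_trait::async_trait;
use common::HasLen;
use ownedbytes::OwnedBytes;

use crate::directory::FileHandle;

#[derive(Debug)]
struct InMemoryHandle {
    bytes: OwnedBytes,
}

impl HasLen for InMemoryHandle {
    fn len(&self) -> usize {
        self.bytes.len()
    }
}

#[async_trait]
impl FileHandle for InMemoryHandle {
    fn read_bytes(&self, range: Range<usize>) -> io::Result<OwnedBytes> {
        Ok(self.bytes.slice(range))
    }

    #[cfg(feature = "quickwit")]
    async fn read_bytes_async(&self, range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
        // In-memory data never blocks, so the synchronous path is reused here; a directory
        // backed by remote storage would await a real IO future instead of falling back to
        // the default AsyncUnsupported error.
        Ok(self.read_bytes(range)?)
    }
}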


@@ -53,7 +53,9 @@ impl FileWatcher {
if metafile_has_changed {
info!("Meta file {:?} was modified", path);
current_checksum_opt = Some(checksum);
futures::executor::block_on(callbacks.broadcast());
// We actually ignore callbacks failing here.
// We just wait for the end of their execution.
let _ = callbacks.broadcast().wait();
}
}


@@ -16,7 +16,7 @@ use crate::directory::{
use crate::error::DataCorruption;
use crate::Directory;
/// Returns true iff the file is "managed".
/// Returns true if the file is "managed".
/// Non-managed file are not subject to garbage collection.
///
/// Filenames that starts by a "." -typically locks-


@@ -1,7 +1,6 @@
use std::collections::HashMap;
use std::convert::From;
use std::fs::{self, File, OpenOptions};
use std::io::{self, BufWriter, Read, Seek, SeekFrom, Write};
use std::io::{self, BufWriter, Read, Seek, Write};
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock};
@@ -265,7 +264,7 @@ impl Write for SafeFileWriter {
}
impl Seek for SafeFileWriter {
fn seek(&mut self, pos: SeekFrom) -> io::Result<u64> {
fn seek(&mut self, pos: io::SeekFrom) -> io::Result<u64> {
self.0.seek(pos)
}
}


@@ -9,7 +9,6 @@ mod file_slice;
mod file_watcher;
mod footer;
mod managed_directory;
mod owned_bytes;
mod ram_directory;
mod watch_event_router;
@@ -22,13 +21,13 @@ use std::io::BufWriter;
use std::path::PathBuf;
pub use common::{AntiCallToken, TerminatingWrite};
pub use ownedbytes::OwnedBytes;
pub(crate) use self::composite_file::{CompositeFile, CompositeWrite};
pub use self::directory::{Directory, DirectoryClone, DirectoryLock};
pub use self::directory_lock::{Lock, INDEX_WRITER_LOCK, META_LOCK};
pub(crate) use self::file_slice::{ArcBytes, WeakArcBytes};
pub use self::file_slice::{FileHandle, FileSlice};
pub use self::owned_bytes::OwnedBytes;
pub use self::ram_directory::RamDirectory;
pub use self::watch_event_router::{WatchCallback, WatchCallbackList, WatchHandle};


@@ -1,12 +0,0 @@
use std::io;
use std::ops::Range;
pub use ownedbytes::OwnedBytes;
use crate::directory::FileHandle;
impl FileHandle for OwnedBytes {
fn read_bytes(&self, range: Range<usize>) -> io::Result<OwnedBytes> {
Ok(self.slice(range))
}
}


@@ -6,9 +6,6 @@ use std::sync::atomic::{AtomicBool, AtomicUsize};
use std::sync::Arc;
use std::time::Duration;
use futures::channel::oneshot;
use futures::executor::block_on;
use super::*;
#[cfg(feature = "mmap")]
@@ -249,8 +246,8 @@ fn test_lock_blocking(directory: &dyn Directory) {
std::thread::spawn(move || {
//< lock_a_res is sent to the thread.
in_thread_clone.store(true, SeqCst);
let _just_sync = block_on(receiver);
// explicitely droping lock_a_res. It would have been sufficient to just force it
let _just_sync = receiver.recv();
// explicitely dropping lock_a_res. It would have been sufficient to just force it
// to be part of the move, but the intent seems clearer that way.
drop(lock_a_res);
});
@@ -273,7 +270,7 @@ fn test_lock_blocking(directory: &dyn Directory) {
assert!(in_thread.load(SeqCst));
assert!(lock_a_res.is_ok());
});
assert!(block_on(receiver2).is_ok());
assert!(receiver2.recv().is_ok());
assert!(sender.send(()).is_ok());
assert!(join_handle.join().is_ok());
}


@@ -1,7 +1,6 @@
use std::sync::{Arc, RwLock, Weak};
use futures::channel::oneshot;
use futures::{Future, TryFutureExt};
use crate::FutureResult;
/// Cloneable wrapper for callbacks registered when watching files of a `Directory`.
#[derive(Clone)]
@@ -74,12 +73,11 @@ impl WatchCallbackList {
}
/// Triggers all callbacks
pub fn broadcast(&self) -> impl Future<Output = ()> {
pub fn broadcast(&self) -> FutureResult<()> {
let callbacks = self.list_callback();
let (sender, receiver) = oneshot::channel();
let result = receiver.unwrap_or_else(|_| ());
let (result, sender) = FutureResult::create("One of the callback panicked.");
if callbacks.is_empty() {
let _ = sender.send(());
let _ = sender.send(Ok(()));
return result;
}
let spawn_res = std::thread::Builder::new()
@@ -88,7 +86,7 @@ impl WatchCallbackList {
for callback in callbacks {
callback.call();
}
let _ = sender.send(());
let _ = sender.send(Ok(()));
});
if let Err(err) = spawn_res {
error!(
@@ -106,8 +104,6 @@ mod tests {
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use futures::executor::block_on;
use crate::directory::{WatchCallback, WatchCallbackList};
#[test]
@@ -118,22 +114,18 @@ mod tests {
let inc_callback = WatchCallback::new(move || {
counter_clone.fetch_add(1, Ordering::SeqCst);
});
block_on(watch_event_router.broadcast());
watch_event_router.broadcast().wait().unwrap();
assert_eq!(0, counter.load(Ordering::SeqCst));
let handle_a = watch_event_router.subscribe(inc_callback);
assert_eq!(0, counter.load(Ordering::SeqCst));
block_on(watch_event_router.broadcast());
watch_event_router.broadcast().wait().unwrap();
assert_eq!(1, counter.load(Ordering::SeqCst));
block_on(async {
(
watch_event_router.broadcast().await,
watch_event_router.broadcast().await,
watch_event_router.broadcast().await,
)
});
watch_event_router.broadcast().wait().unwrap();
watch_event_router.broadcast().wait().unwrap();
watch_event_router.broadcast().wait().unwrap();
assert_eq!(4, counter.load(Ordering::SeqCst));
mem::drop(handle_a);
block_on(watch_event_router.broadcast());
watch_event_router.broadcast().wait().unwrap();
assert_eq!(4, counter.load(Ordering::SeqCst));
}
@@ -150,19 +142,15 @@ mod tests {
let handle_a = watch_event_router.subscribe(inc_callback(1));
let handle_a2 = watch_event_router.subscribe(inc_callback(10));
assert_eq!(0, counter.load(Ordering::SeqCst));
block_on(async {
futures::join!(
watch_event_router.broadcast(),
watch_event_router.broadcast()
)
});
watch_event_router.broadcast().wait().unwrap();
watch_event_router.broadcast().wait().unwrap();
assert_eq!(22, counter.load(Ordering::SeqCst));
mem::drop(handle_a);
block_on(watch_event_router.broadcast());
watch_event_router.broadcast().wait().unwrap();
assert_eq!(32, counter.load(Ordering::SeqCst));
mem::drop(handle_a2);
block_on(watch_event_router.broadcast());
block_on(watch_event_router.broadcast());
watch_event_router.broadcast().wait().unwrap();
watch_event_router.broadcast().wait().unwrap();
assert_eq!(32, counter.load(Ordering::SeqCst));
}
@@ -176,15 +164,12 @@ mod tests {
});
let handle_a = watch_event_router.subscribe(inc_callback);
assert_eq!(0, counter.load(Ordering::SeqCst));
block_on(async {
let future1 = watch_event_router.broadcast();
let future2 = watch_event_router.broadcast();
futures::join!(future1, future2)
});
watch_event_router.broadcast().wait().unwrap();
watch_event_router.broadcast().wait().unwrap();
assert_eq!(2, counter.load(Ordering::SeqCst));
mem::drop(handle_a);
let _ = watch_event_router.broadcast();
block_on(watch_event_router.broadcast());
watch_event_router.broadcast().wait().unwrap();
assert_eq!(2, counter.load(Ordering::SeqCst));
}
}
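A condensed sketch of the call pattern these tests exercise: broadcast() now returns a FutureResult<()> that is resolved with a blocking wait() rather than an executor. This assumes a default-constructed WatchCallbackList; the callback body is arbitrary.

use crate::directory::{WatchCallback, WatchCallbackList};

fn notify_watchers() {
    let router = WatchCallbackList::default();
    let _handle = router.subscribe(WatchCallback::new(|| println!("meta file changed")));
    // Blocks until every registered callback has run; no block_on or executor involved.
    router.broadcast().wait().unwrap();
}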


@@ -1,9 +1,11 @@
//! Definition of Tantivy's error and result.
//! Definition of Tantivy's errors and results.
use std::path::PathBuf;
use std::sync::PoisonError;
use std::{fmt, io};
use thiserror::Error;
use crate::directory::error::{
Incompatibility, LockError, OpenDirectoryError, OpenReadError, OpenWriteError,
};
@@ -12,7 +14,7 @@ use crate::{query, schema};
/// Represents a `DataCorruption` error.
///
/// When facing data corruption, tantivy actually panic or return this error.
/// When facing data corruption, tantivy actually panics or returns this error.
pub struct DataCorruption {
filepath: Option<PathBuf>,
comment: String,
@@ -38,9 +40,9 @@ impl DataCorruption {
impl fmt::Debug for DataCorruption {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
write!(f, "Data corruption: ")?;
write!(f, "Data corruption")?;
if let Some(ref filepath) = &self.filepath {
write!(f, "(in file `{:?}`)", filepath)?;
write!(f, " (in file `{:?}`)", filepath)?;
}
write!(f, ": {}.", self.comment)?;
Ok(())
@@ -59,10 +61,10 @@ pub enum TantivyError {
/// Failed to open a file for write.
#[error("Failed to open file for write: '{0:?}'")]
OpenWriteError(#[from] OpenWriteError),
/// Index already exists in this directory
/// Index already exists in this directory.
#[error("Index already exists")]
IndexAlreadyExists,
/// Failed to acquire file lock
/// Failed to acquire file lock.
#[error("Failed to acquire Lockfile: {0:?}. {1:?}")]
LockFailure(LockError, Option<String>),
/// IO Error.
@@ -74,26 +76,51 @@ pub enum TantivyError {
/// A thread holding the locked panicked and poisoned the lock.
#[error("A thread holding the locked panicked and poisoned the lock")]
Poisoned,
/// The provided field name does not exist.
#[error("The field does not exist: '{0}'")]
FieldNotFound(String),
/// Invalid argument was passed by the user.
#[error("An invalid argument was passed: '{0}'")]
InvalidArgument(String),
/// An Error happened in one of the thread.
/// An Error occurred in one of the threads.
#[error("An error occurred in a thread: '{0}'")]
ErrorInThread(String),
/// An Error appeared related to opening or creating a index.
/// An Error occurred related to opening or creating a index.
#[error("Missing required index builder argument when open/create index: '{0}'")]
IndexBuilderMissingArgument(&'static str),
/// An Error appeared related to the schema.
/// An Error occurred related to the schema.
#[error("Schema error: '{0}'")]
SchemaError(String),
/// System error. (e.g.: We failed spawning a new thread)
/// System error. (e.g.: We failed spawning a new thread).
#[error("System error.'{0}'")]
SystemError(String),
/// Index incompatible with current version of tantivy
/// Index incompatible with current version of Tantivy.
#[error("{0:?}")]
IncompatibleIndex(Incompatibility),
}
#[cfg(feature = "quickwit")]
#[derive(Error, Debug)]
#[doc(hidden)]
pub enum AsyncIoError {
#[error("io::Error `{0}`")]
Io(#[from] io::Error),
#[error("Asynchronous API is unsupported by this directory")]
AsyncUnsupported,
}
#[cfg(feature = "quickwit")]
impl From<AsyncIoError> for TantivyError {
fn from(async_io_err: AsyncIoError) -> Self {
match async_io_err {
AsyncIoError::Io(io_err) => TantivyError::from(io_err),
AsyncIoError::AsyncUnsupported => {
TantivyError::SystemError(format!("{:?}", async_io_err))
}
}
}
}
impl From<DataCorruption> for TantivyError {
fn from(data_corruption: DataCorruption) -> TantivyError {
TantivyError::DataCorruption(data_corruption)
@@ -122,9 +149,21 @@ impl<Guard> From<PoisonError<Guard>> for TantivyError {
}
}
impl From<chrono::ParseError> for TantivyError {
fn from(err: chrono::ParseError) -> TantivyError {
TantivyError::InvalidArgument(err.to_string())
impl From<time::error::Format> for TantivyError {
fn from(err: time::error::Format) -> TantivyError {
TantivyError::InvalidArgument(format!("Date formatting error: {err}"))
}
}
impl From<time::error::Parse> for TantivyError {
fn from(err: time::error::Parse) -> TantivyError {
TantivyError::InvalidArgument(format!("Date parsing error: {err}"))
}
}
impl From<time::error::ComponentRange> for TantivyError {
fn from(err: time::error::ComponentRange) -> TantivyError {
TantivyError::InvalidArgument(format!("Date range error: {err}"))
}
}
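With these conversions in place, `time` parsing errors propagate through `?` as `TantivyError::InvalidArgument`; a short sketch (the helper name is arbitrary):

use time::format_description::well_known::Rfc3339;
use time::OffsetDateTime;

fn parse_rfc3339(input: &str) -> crate::Result<crate::DateTime> {
    // A time::error::Parse here is converted by the From impl above.
    let parsed = OffsetDateTime::parse(input, &Rfc3339)?;
    Ok(crate::DateTime::new_utc(parsed))
}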


@@ -7,7 +7,7 @@ use ownedbytes::OwnedBytes;
use crate::space_usage::ByteCount;
use crate::DocId;
/// Write a alive `BitSet`
/// Write an alive `BitSet`
///
/// where `alive_bitset` is the set of alive `DocId`.
/// Warning: this function does not call terminate. The caller is in charge of
@@ -55,19 +55,19 @@ impl AliveBitSet {
AliveBitSet::from(readonly_bitset)
}
/// Opens a delete bitset given its file.
/// Opens an alive bitset given its file.
pub fn open(bytes: OwnedBytes) -> AliveBitSet {
let bitset = ReadOnlyBitSet::open(bytes);
AliveBitSet::from(bitset)
}
/// Returns true iff the document is still "alive". In other words, if it has not been deleted.
/// Returns true if the document is still "alive". In other words, if it has not been deleted.
#[inline]
pub fn is_alive(&self, doc: DocId) -> bool {
self.bitset.contains(doc)
}
/// Returns true iff the document has been marked as deleted.
/// Returns true if the document has been marked as deleted.
#[inline]
pub fn is_deleted(&self, doc: DocId) -> bool {
!self.is_alive(doc)
@@ -79,13 +79,13 @@ impl AliveBitSet {
self.bitset.iter()
}
/// Get underlying bitset
/// Get underlying bitset.
#[inline]
pub fn bitset(&self) -> &ReadOnlyBitSet {
&self.bitset
}
/// The number of deleted docs
/// The number of alive documents.
pub fn num_alive_docs(&self) -> usize {
self.num_alive_docs
}


@@ -86,7 +86,7 @@ mod tests {
let field = searcher.schema().get_field("string_bytes").unwrap();
let term = Term::from_field_bytes(field, b"lucene".as_ref());
let term_query = TermQuery::new(term, IndexRecordOption::Basic);
let term_weight = term_query.specialized_weight(&searcher, true)?;
let term_weight = term_query.specialized_weight(&*searcher, true)?;
let term_scorer = term_weight.specialized_scorer(searcher.segment_reader(0), 1.0)?;
assert_eq!(term_scorer.doc(), 0u32);
Ok(())
@@ -99,7 +99,7 @@ mod tests {
let field = searcher.schema().get_field("string_bytes").unwrap();
let term = Term::from_field_bytes(field, b"lucene".as_ref());
let term_query = TermQuery::new(term, IndexRecordOption::Basic);
let term_weight_err = term_query.specialized_weight(&searcher, false);
let term_weight_err = term_query.specialized_weight(&*searcher, false);
assert!(matches!(
term_weight_err,
Err(crate::TantivyError::SchemaError(_))


@@ -1,5 +1,5 @@
use crate::directory::{FileSlice, OwnedBytes};
use crate::fastfield::{BitpackedFastFieldReader, FastFieldReader, MultiValueLength};
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader, MultiValueLength};
use crate::DocId;
/// Reader for byte array fast fields
@@ -14,13 +14,13 @@ use crate::DocId;
/// and the start index for the next document, and keeping the bytes in between.
#[derive(Clone)]
pub struct BytesFastFieldReader {
idx_reader: BitpackedFastFieldReader<u64>,
idx_reader: DynamicFastFieldReader<u64>,
values: OwnedBytes,
}
impl BytesFastFieldReader {
pub(crate) fn open(
idx_reader: BitpackedFastFieldReader<u64>,
idx_reader: DynamicFastFieldReader<u64>,
values_file: FileSlice,
) -> crate::Result<BytesFastFieldReader> {
let values = values_file.read_bytes()?;


@@ -7,7 +7,7 @@ use crate::DocId;
/// Writer for byte array (as in, any number of bytes per document) fast fields
///
/// This `BytesFastFieldWriter` is only useful for advanced user.
/// This `BytesFastFieldWriter` is only useful for advanced users.
/// The normal way to get your associated bytes in your index
/// is to
/// - declare your field with fast set to `Cardinality::SingleValue`


@@ -2,7 +2,7 @@
//!
//! It is the equivalent of `Lucene`'s `DocValues`.
//!
//! Fast fields is a column-oriented fashion storage of `tantivy`.
//! A fast field is a column-oriented fashion storage for `tantivy`.
//!
//! It is designed for the fast random access of some document
//! fields given a document id.
@@ -12,11 +12,10 @@
//!
//!
//! Fields have to be declared as `FAST` in the schema.
//! Currently only 64-bits integers (signed or unsigned) are
//! supported.
//! Currently supported fields are: u64, i64, f64 and bytes.
//!
//! They are stored in a bit-packed fashion so that their
//! memory usage is directly linear with the amplitude of the
//! u64, i64 and f64 fields are stored in a bit-packed fashion so that
//! their memory usage is directly linear with the amplitude of the
//! values stored.
//!
//! Read access performance is comparable to that of an array lookup.
@@ -26,14 +25,13 @@ pub use self::bytes::{BytesFastFieldReader, BytesFastFieldWriter};
pub use self::error::{FastFieldNotAvailableError, Result};
pub use self::facet_reader::FacetReader;
pub use self::multivalued::{MultiValuedFastFieldReader, MultiValuedFastFieldWriter};
pub(crate) use self::reader::BitpackedFastFieldReader;
pub use self::reader::{DynamicFastFieldReader, FastFieldReader};
pub use self::readers::FastFieldReaders;
pub(crate) use self::readers::{type_and_cardinality, FastType};
pub use self::serializer::{CompositeFastFieldSerializer, FastFieldDataAccess, FastFieldStats};
pub use self::writer::{FastFieldsWriter, IntFastFieldWriter};
use crate::chrono::{NaiveDateTime, Utc};
use crate::schema::{Cardinality, FieldType, Type, Value};
use crate::DocId;
use crate::{DateTime, DocId};
mod alive_bitset;
mod bytes;
@@ -162,14 +160,14 @@ impl FastValue for f64 {
}
}
impl FastValue for crate::DateTime {
impl FastValue for DateTime {
fn from_u64(timestamp_u64: u64) -> Self {
let timestamp_i64 = i64::from_u64(timestamp_u64);
crate::DateTime::from_utc(NaiveDateTime::from_timestamp(timestamp_i64, 0), Utc)
let unix_timestamp = i64::from_u64(timestamp_u64);
Self::from_unix_timestamp(unix_timestamp)
}
fn to_u64(&self) -> u64 {
self.timestamp().to_u64()
self.to_unix_timestamp().to_u64()
}
fn fast_field_cardinality(field_type: &FieldType) -> Option<Cardinality> {
@@ -180,7 +178,7 @@ impl FastValue for crate::DateTime {
}
fn as_u64(&self) -> u64 {
self.timestamp().as_u64()
self.to_unix_timestamp().as_u64()
}
fn to_type() -> Type {
@@ -189,12 +187,12 @@ impl FastValue for crate::DateTime {
}
fn value_to_u64(value: &Value) -> u64 {
match *value {
Value::U64(ref val) => *val,
Value::I64(ref val) => common::i64_to_u64(*val),
Value::F64(ref val) => common::f64_to_u64(*val),
Value::Date(ref datetime) => common::i64_to_u64(datetime.timestamp()),
_ => panic!("Expected a u64/i64/f64 field, got {:?} ", value),
match value {
Value::U64(val) => val.to_u64(),
Value::I64(val) => val.to_u64(),
Value::F64(val) => val.to_u64(),
Value::Date(val) => val.to_u64(),
_ => panic!("Expected a u64/i64/f64/date field, got {:?} ", value),
}
}
@@ -213,7 +211,8 @@ mod tests {
use super::*;
use crate::directory::{CompositeFile, Directory, RamDirectory, WritePtr};
use crate::merge_policy::NoMergePolicy;
use crate::schema::{Document, Field, IntOptions, Schema, FAST};
use crate::schema::{Document, Field, NumericOptions, Schema, FAST};
use crate::time::OffsetDateTime;
use crate::{Index, SegmentId, SegmentReader};
pub static SCHEMA: Lazy<Schema> = Lazy::new(|| {
@@ -234,7 +233,7 @@ mod tests {
#[test]
pub fn test_fastfield_i64_u64() {
let datetime = crate::DateTime::from_utc(NaiveDateTime::from_timestamp(0i64, 0), Utc);
let datetime = DateTime::new_utc(OffsetDateTime::UNIX_EPOCH);
assert_eq!(i64::from_u64(datetime.to_u64()), 0i64);
}
@@ -393,8 +392,7 @@ mod tests {
serializer.close().unwrap();
}
let file = directory.open_read(path).unwrap();
// assert_eq!(file.len(), 17710 as usize); //bitpacked size
assert_eq!(file.len(), 10175_usize); // linear interpol size
assert_eq!(file.len(), 12471_usize); // Piecewise linear codec size
{
let fast_fields_composite = CompositeFile::open(&file)?;
let data = fast_fields_composite.open_read(i64_field).unwrap();
@@ -490,7 +488,8 @@ mod tests {
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests().unwrap();
index_writer.set_merge_policy(Box::new(NoMergePolicy));
index_writer.add_document(doc!(date_field =>crate::chrono::prelude::Utc::now()))?;
index_writer
.add_document(doc!(date_field =>DateTime::new_utc(OffsetDateTime::now_utc())))?;
index_writer.commit()?;
index_writer.add_document(doc!())?;
index_writer.commit()?;
@@ -502,8 +501,7 @@ mod tests {
.map(SegmentReader::segment_id)
.collect();
assert_eq!(segment_ids.len(), 2);
let merge_future = index_writer.merge(&segment_ids[..]);
futures::executor::block_on(merge_future)?;
index_writer.merge(&segment_ids[..]).wait().unwrap();
reader.reload()?;
assert_eq!(reader.searcher().segment_readers().len(), 1);
Ok(())
@@ -511,7 +509,7 @@ mod tests {
#[test]
fn test_default_datetime() {
assert_eq!(crate::DateTime::make_zero().timestamp(), 0i64);
assert_eq!(0, DateTime::make_zero().to_unix_timestamp());
}
#[test]
@@ -521,23 +519,23 @@ mod tests {
let date_field = schema_builder.add_date_field("date", FAST);
let multi_date_field = schema_builder.add_date_field(
"multi_date",
IntOptions::default().set_fast(Cardinality::MultiValues),
NumericOptions::default().set_fast(Cardinality::MultiValues),
);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
index_writer.set_merge_policy(Box::new(NoMergePolicy));
index_writer.add_document(doc!(
date_field => crate::DateTime::from_u64(1i64.to_u64()),
multi_date_field => crate::DateTime::from_u64(2i64.to_u64()),
multi_date_field => crate::DateTime::from_u64(3i64.to_u64())
date_field => DateTime::from_u64(1i64.to_u64()),
multi_date_field => DateTime::from_u64(2i64.to_u64()),
multi_date_field => DateTime::from_u64(3i64.to_u64())
))?;
index_writer.add_document(doc!(
date_field => crate::DateTime::from_u64(4i64.to_u64())
date_field => DateTime::from_u64(4i64.to_u64())
))?;
index_writer.add_document(doc!(
multi_date_field => crate::DateTime::from_u64(5i64.to_u64()),
multi_date_field => crate::DateTime::from_u64(6i64.to_u64())
multi_date_field => DateTime::from_u64(5i64.to_u64()),
multi_date_field => DateTime::from_u64(6i64.to_u64())
))?;
index_writer.commit()?;
let reader = index.reader()?;
@@ -549,23 +547,23 @@ mod tests {
let dates_fast_field = fast_fields.dates(multi_date_field).unwrap();
let mut dates = vec![];
{
assert_eq!(date_fast_field.get(0u32).timestamp(), 1i64);
assert_eq!(date_fast_field.get(0u32).to_unix_timestamp(), 1i64);
dates_fast_field.get_vals(0u32, &mut dates);
assert_eq!(dates.len(), 2);
assert_eq!(dates[0].timestamp(), 2i64);
assert_eq!(dates[1].timestamp(), 3i64);
assert_eq!(dates[0].to_unix_timestamp(), 2i64);
assert_eq!(dates[1].to_unix_timestamp(), 3i64);
}
{
assert_eq!(date_fast_field.get(1u32).timestamp(), 4i64);
assert_eq!(date_fast_field.get(1u32).to_unix_timestamp(), 4i64);
dates_fast_field.get_vals(1u32, &mut dates);
assert!(dates.is_empty());
}
{
assert_eq!(date_fast_field.get(2u32).timestamp(), 0i64);
assert_eq!(date_fast_field.get(2u32).to_unix_timestamp(), 0i64);
dates_fast_field.get_vals(2u32, &mut dates);
assert_eq!(dates.len(), 2);
assert_eq!(dates[0].timestamp(), 5i64);
assert_eq!(dates[1].timestamp(), 6i64);
assert_eq!(dates[0].to_unix_timestamp(), 5i64);
assert_eq!(dates[1].to_unix_timestamp(), 6i64);
}
Ok(())
}
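The FastValue impl above maps dates to and from u64 through their UNIX timestamp; a tiny round-trip sketch (the timestamp value is arbitrary):

use crate::fastfield::FastValue;
use crate::DateTime;

fn date_roundtrip() {
    let date = DateTime::from_unix_timestamp(42);
    assert_eq!(date.to_unix_timestamp(), 42);
    // The u64 representation is what actually lands in the fast field column.
    assert_eq!(DateTime::from_u64(date.to_u64()).to_unix_timestamp(), 42);
}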


@@ -6,9 +6,6 @@ pub use self::writer::MultiValuedFastFieldWriter;
#[cfg(test)]
mod tests {
use chrono::Duration;
use futures::executor::block_on;
use proptest::strategy::Strategy;
use proptest::{prop_oneof, proptest};
use test_log::test;
@@ -16,15 +13,17 @@ mod tests {
use crate::collector::TopDocs;
use crate::indexer::NoMergePolicy;
use crate::query::QueryParser;
use crate::schema::{Cardinality, Facet, FacetOptions, IntOptions, Schema};
use crate::{Document, Index, Term};
use crate::schema::{Cardinality, Facet, FacetOptions, NumericOptions, Schema};
use crate::time::format_description::well_known::Rfc3339;
use crate::time::{Duration, OffsetDateTime};
use crate::{DateTime, Document, Index, Term};
#[test]
fn test_multivalued_u64() -> crate::Result<()> {
let mut schema_builder = Schema::builder();
let field = schema_builder.add_u64_field(
"multifield",
IntOptions::default().set_fast(Cardinality::MultiValues),
NumericOptions::default().set_fast(Cardinality::MultiValues),
);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
@@ -59,33 +58,38 @@ mod tests {
let mut schema_builder = Schema::builder();
let date_field = schema_builder.add_date_field(
"multi_date_field",
IntOptions::default()
NumericOptions::default()
.set_fast(Cardinality::MultiValues)
.set_indexed()
.set_fieldnorm()
.set_stored(),
);
let time_i =
schema_builder.add_i64_field("time_stamp_i", IntOptions::default().set_stored());
schema_builder.add_i64_field("time_stamp_i", NumericOptions::default().set_stored());
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
let first_time_stamp = chrono::Utc::now();
index_writer.add_document(
doc!(date_field=>first_time_stamp, date_field=>first_time_stamp, time_i=>1i64),
)?;
index_writer.add_document(doc!(time_i=>0i64))?;
let first_time_stamp = OffsetDateTime::now_utc();
index_writer.add_document(doc!(
date_field => DateTime::new_utc(first_time_stamp),
date_field => DateTime::new_utc(first_time_stamp),
time_i=>1i64))?;
index_writer.add_document(doc!(time_i => 0i64))?;
// add one second
index_writer.add_document(
doc!(date_field=>first_time_stamp + Duration::seconds(1), time_i=>2i64),
)?;
index_writer.add_document(doc!(
date_field => DateTime::new_utc(first_time_stamp + Duration::seconds(1)),
time_i => 2i64))?;
// add another second
let two_secs_ahead = first_time_stamp + Duration::seconds(2);
index_writer.add_document(doc!(date_field=>two_secs_ahead, date_field=>two_secs_ahead,date_field=>two_secs_ahead, time_i=>3i64))?;
index_writer.add_document(doc!(
date_field => DateTime::new_utc(two_secs_ahead),
date_field => DateTime::new_utc(two_secs_ahead),
date_field => DateTime::new_utc(two_secs_ahead),
time_i => 3i64))?;
// add three seconds
index_writer.add_document(
doc!(date_field=>first_time_stamp + Duration::seconds(3), time_i=>4i64),
)?;
index_writer.add_document(doc!(
date_field => DateTime::new_utc(first_time_stamp + Duration::seconds(3)),
time_i => 4i64))?;
index_writer.commit()?;
let reader = index.reader()?;
@@ -94,8 +98,11 @@ mod tests {
assert_eq!(reader.num_docs(), 5);
{
let parser = QueryParser::for_index(&index, vec![date_field]);
let query = parser.parse_query(&format!("\"{}\"", first_time_stamp.to_rfc3339()))?;
let parser = QueryParser::for_index(&index, vec![]);
let query = parser.parse_query(&format!(
"multi_date_field:\"{}\"",
first_time_stamp.format(&Rfc3339)?,
))?;
let results = searcher.search(&query, &TopDocs::with_limit(5))?;
assert_eq!(results.len(), 1);
for (_score, doc_address) in results {
@@ -105,9 +112,8 @@ mod tests {
.get_first(date_field)
.expect("cannot find value")
.as_date()
.unwrap()
.timestamp(),
first_time_stamp.timestamp()
.unwrap(),
DateTime::new_utc(first_time_stamp),
);
assert_eq!(
retrieved_doc
@@ -121,7 +127,7 @@ mod tests {
{
let parser = QueryParser::for_index(&index, vec![date_field]);
let query = parser.parse_query(&format!("\"{}\"", two_secs_ahead.to_rfc3339()))?;
let query = parser.parse_query(&format!("\"{}\"", two_secs_ahead.format(&Rfc3339)?))?;
let results = searcher.search(&query, &TopDocs::with_limit(5))?;
assert_eq!(results.len(), 1);
@@ -133,9 +139,8 @@ mod tests {
.get_first(date_field)
.expect("cannot find value")
.as_date()
.unwrap()
.timestamp(),
two_secs_ahead.timestamp()
.unwrap(),
DateTime::new_utc(two_secs_ahead)
);
assert_eq!(
retrieved_doc
@@ -150,9 +155,9 @@ mod tests {
{
let parser = QueryParser::for_index(&index, vec![date_field]);
let range_q = format!(
"[{} TO {}}}",
(first_time_stamp + Duration::seconds(1)).to_rfc3339(),
(first_time_stamp + Duration::seconds(3)).to_rfc3339()
"multi_date_field:[{} TO {}}}",
(first_time_stamp + Duration::seconds(1)).format(&Rfc3339)?,
(first_time_stamp + Duration::seconds(3)).format(&Rfc3339)?
);
let query = parser.parse_query(&range_q)?;
let results = searcher.search(&query, &TopDocs::with_limit(5))?;
@@ -175,9 +180,8 @@ mod tests {
.get_first(date_field)
.expect("cannot find value")
.as_date()
.expect("value not of Date type")
.timestamp(),
(first_time_stamp + Duration::seconds(offset_sec)).timestamp()
.expect("value not of Date type"),
DateTime::new_utc(first_time_stamp + Duration::seconds(offset_sec)),
);
assert_eq!(
retrieved_doc
@@ -196,7 +200,7 @@ mod tests {
let mut schema_builder = Schema::builder();
let field = schema_builder.add_i64_field(
"multifield",
IntOptions::default().set_fast(Cardinality::MultiValues),
NumericOptions::default().set_fast(Cardinality::MultiValues),
);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
@@ -226,7 +230,7 @@ mod tests {
let mut schema_builder = Schema::builder();
let field = schema_builder.add_u64_field(
"multifield",
IntOptions::default()
NumericOptions::default()
.set_fast(Cardinality::MultiValues)
.set_indexed(),
);
@@ -265,7 +269,7 @@ mod tests {
IndexingOp::Merge => {
let segment_ids = index.searchable_segment_ids()?;
if segment_ids.len() >= 2 {
block_on(index_writer.merge(&segment_ids))?;
index_writer.merge(&segment_ids).wait()?;
index_writer.segment_updater().wait_merging_thread()?;
}
}
@@ -280,7 +284,7 @@ mod tests {
.searchable_segment_ids()
.expect("Searchable segments failed.");
if !segment_ids.is_empty() {
block_on(index_writer.merge(&segment_ids)).unwrap();
index_writer.merge(&segment_ids).wait()?;
assert!(index_writer.wait_merging_threads().is_ok());
}
}
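The same pattern outside of a test, for anyone replacing their own block_on(merge(...)) calls; a sketch assuming an existing index and writer (the helper name is arbitrary):

use crate::{Index, IndexWriter};

fn force_merge(index: &Index, mut index_writer: IndexWriter) -> crate::Result<()> {
    let segment_ids = index.searchable_segment_ids()?;
    if segment_ids.len() >= 2 {
        // The returned FutureResult is resolved synchronously; no executor is needed.
        index_writer.merge(&segment_ids).wait()?;
    }
    index_writer.wait_merging_threads()?;
    Ok(())
}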


@@ -90,7 +90,7 @@ impl<Item: FastValue> MultiValueLength for MultiValuedFastFieldReader<Item> {
mod tests {
use crate::core::Index;
use crate::schema::{Cardinality, Facet, FacetOptions, IntOptions, Schema};
use crate::schema::{Cardinality, Facet, FacetOptions, NumericOptions, Schema};
#[test]
fn test_multifastfield_reader() -> crate::Result<()> {
@@ -148,7 +148,7 @@ mod tests {
#[test]
fn test_multifastfield_reader_min_max() -> crate::Result<()> {
let mut schema_builder = Schema::builder();
let field_options = IntOptions::default()
let field_options = NumericOptions::default()
.set_indexed()
.set_fast(Cardinality::MultiValues);
let item_field = schema_builder.add_i64_field("items", field_options);


@@ -14,7 +14,7 @@ use crate::DocId;
/// Writer for multi-valued (as in, more than one value per document)
/// int fast field.
///
/// This `Writer` is only useful for advanced user.
/// This `Writer` is only useful for advanced users.
/// The normal way to get your multivalued int in your index
/// is to
/// - declare your field with fast set to `Cardinality::MultiValues`
@@ -23,10 +23,11 @@ use crate::DocId;
///
/// The `MultiValuedFastFieldWriter` can be acquired from the
/// fastfield writer, by calling
/// [`.get_multivalue_writer(...)`](./struct.FastFieldsWriter.html#method.get_multivalue_writer).
/// [`.get_multivalue_writer_mut(...)`](./struct.FastFieldsWriter.html#method.
/// get_multivalue_writer_mut).
///
/// Once acquired, writing is done by calling calls to
/// `.add_document_vals(&[u64])` once per document.
/// Once acquired, writing is done by calling
/// [`.add_document_vals(&[u64])`](MultiValuedFastFieldWriter::add_document_vals) once per document.
///
/// The serializer makes it possible to remap all of the values
/// that were pushed to the writer using a mapping.


@@ -6,12 +6,17 @@ use common::BinarySerializable;
use fastfield_codecs::bitpacked::{
BitpackedFastFieldReader as BitpackedReader, BitpackedFastFieldSerializer,
};
#[allow(deprecated)]
use fastfield_codecs::linearinterpol::{
LinearInterpolFastFieldReader, LinearInterpolFastFieldSerializer,
};
#[allow(deprecated)]
use fastfield_codecs::multilinearinterpol::{
MultiLinearInterpolFastFieldReader, MultiLinearInterpolFastFieldSerializer,
};
use fastfield_codecs::piecewise_linear::{
PiecewiseLinearFastFieldReader, PiecewiseLinearFastFieldSerializer,
};
use fastfield_codecs::{FastFieldCodecReader, FastFieldCodecSerializer};
use super::FastValue;
@@ -71,6 +76,8 @@ pub enum DynamicFastFieldReader<Item: FastValue> {
LinearInterpol(FastFieldReaderCodecWrapper<Item, LinearInterpolFastFieldReader>),
/// Blockwise linear interpolated values + bitpacked
MultiLinearInterpol(FastFieldReaderCodecWrapper<Item, MultiLinearInterpolFastFieldReader>),
/// Piecewise linear interpolated values + bitpacked
PiecewiseLinear(FastFieldReaderCodecWrapper<Item, PiecewiseLinearFastFieldReader>),
}
impl<Item: FastValue> DynamicFastFieldReader<Item> {
@@ -86,12 +93,14 @@ impl<Item: FastValue> DynamicFastFieldReader<Item> {
BitpackedReader,
>::open_from_bytes(bytes)?)
}
#[allow(deprecated)]
LinearInterpolFastFieldSerializer::ID => {
DynamicFastFieldReader::LinearInterpol(FastFieldReaderCodecWrapper::<
Item,
LinearInterpolFastFieldReader,
>::open_from_bytes(bytes)?)
}
#[allow(deprecated)]
MultiLinearInterpolFastFieldSerializer::ID => {
DynamicFastFieldReader::MultiLinearInterpol(FastFieldReaderCodecWrapper::<
Item,
@@ -100,6 +109,12 @@ impl<Item: FastValue> DynamicFastFieldReader<Item> {
bytes
)?)
}
PiecewiseLinearFastFieldSerializer::ID => {
DynamicFastFieldReader::PiecewiseLinear(FastFieldReaderCodecWrapper::<
Item,
PiecewiseLinearFastFieldReader,
>::open_from_bytes(bytes)?)
}
_ => {
panic!(
"unknown fastfield id {:?}. Data corrupted or using old tantivy version.",
@@ -112,18 +127,22 @@ impl<Item: FastValue> DynamicFastFieldReader<Item> {
}
impl<Item: FastValue> FastFieldReader<Item> for DynamicFastFieldReader<Item> {
#[inline]
fn get(&self, doc: DocId) -> Item {
match self {
Self::Bitpacked(reader) => reader.get(doc),
Self::LinearInterpol(reader) => reader.get(doc),
Self::MultiLinearInterpol(reader) => reader.get(doc),
Self::PiecewiseLinear(reader) => reader.get(doc),
}
}
#[inline]
fn get_range(&self, start: u64, output: &mut [Item]) {
match self {
Self::Bitpacked(reader) => reader.get_range(start, output),
Self::LinearInterpol(reader) => reader.get_range(start, output),
Self::MultiLinearInterpol(reader) => reader.get_range(start, output),
Self::PiecewiseLinear(reader) => reader.get_range(start, output),
}
}
fn min_value(&self) -> Item {
@@ -131,6 +150,7 @@ impl<Item: FastValue> FastFieldReader<Item> for DynamicFastFieldReader<Item> {
Self::Bitpacked(reader) => reader.min_value(),
Self::LinearInterpol(reader) => reader.min_value(),
Self::MultiLinearInterpol(reader) => reader.min_value(),
Self::PiecewiseLinear(reader) => reader.min_value(),
}
}
fn max_value(&self) -> Item {
@@ -138,6 +158,7 @@ impl<Item: FastValue> FastFieldReader<Item> for DynamicFastFieldReader<Item> {
Self::Bitpacked(reader) => reader.max_value(),
Self::LinearInterpol(reader) => reader.max_value(),
Self::MultiLinearInterpol(reader) => reader.max_value(),
Self::PiecewiseLinear(reader) => reader.max_value(),
}
}
}
@@ -174,8 +195,12 @@ impl<Item: FastValue, C: FastFieldCodecReader> FastFieldReaderCodecWrapper<Item,
_phantom: PhantomData,
})
}
pub(crate) fn get_u64(&self, doc: u64) -> Item {
Item::from_u64(self.reader.get_u64(doc, self.bytes.as_slice()))
/// Get u64 for indice `idx`.
/// `idx` can be either a `DocId` or an index used for
/// `multivalued` fast field. See [`get_range`] for more details.
pub(crate) fn get_u64(&self, idx: u64) -> Item {
Item::from_u64(self.reader.get_u64(idx, self.bytes.as_slice()))
}
/// Internally `multivalued` also use SingleValue Fast fields.
@@ -248,8 +273,6 @@ impl<Item: FastValue, C: FastFieldCodecReader + Clone> FastFieldReader<Item>
}
}
pub(crate) type BitpackedFastFieldReader<Item> = FastFieldReaderCodecWrapper<Item, BitpackedReader>;
impl<Item: FastValue> From<Vec<Item>> for DynamicFastFieldReader<Item> {
fn from(vals: Vec<Item>) -> DynamicFastFieldReader<Item> {
let mut schema_builder = Schema::builder();


@@ -1,12 +1,11 @@
use super::reader::DynamicFastFieldReader;
use crate::directory::{CompositeFile, FileSlice};
use crate::fastfield::{
BitpackedFastFieldReader, BytesFastFieldReader, FastFieldNotAvailableError, FastValue,
MultiValuedFastFieldReader,
BytesFastFieldReader, FastFieldNotAvailableError, FastValue, MultiValuedFastFieldReader,
};
use crate::schema::{Cardinality, Field, FieldType, Schema};
use crate::space_usage::PerFieldSpaceUsage;
use crate::TantivyError;
use crate::{DateTime, TantivyError};
/// Provides access to all of the BitpackedFastFieldReader.
///
@@ -18,14 +17,14 @@ pub struct FastFieldReaders {
fast_fields_composite: CompositeFile,
}
#[derive(Eq, PartialEq, Debug)]
enum FastType {
pub(crate) enum FastType {
I64,
U64,
F64,
Date,
}
fn type_and_cardinality(field_type: &FieldType) -> Option<(FastType, Cardinality)> {
pub(crate) fn type_and_cardinality(field_type: &FieldType) -> Option<(FastType, Cardinality)> {
match field_type {
FieldType::U64(options) => options
.get_fastfield_cardinality()
@@ -56,7 +55,8 @@ impl FastFieldReaders {
self.fast_fields_composite.space_usage()
}
fn fast_field_data(&self, field: Field, idx: usize) -> crate::Result<FileSlice> {
#[doc(hidden)]
pub fn fast_field_data(&self, field: Field, idx: usize) -> crate::Result<FileSlice> {
self.fast_fields_composite
.open_read_with_idx(field, idx)
.ok_or_else(|| {
@@ -147,10 +147,10 @@ impl FastFieldReaders {
self.typed_fast_field_reader(field)
}
/// Returns the `i64` fast field reader reader associated to `field`.
/// Returns the `date` fast field reader reader associated to `field`.
///
/// If `field` is not a i64 fast field, this method returns an Error.
pub fn date(&self, field: Field) -> crate::Result<DynamicFastFieldReader<crate::DateTime>> {
/// If `field` is not a date fast field, this method returns an Error.
pub fn date(&self, field: Field) -> crate::Result<DynamicFastFieldReader<DateTime>> {
self.check_type(field, FastType::Date, Cardinality::SingleValue)?;
self.typed_fast_field_reader(field)
}
@@ -195,13 +195,12 @@ impl FastFieldReaders {
self.typed_fast_field_multi_reader(field)
}
/// Returns a `crate::DateTime` multi-valued fast field reader reader associated to `field`.
/// Returns a `time::OffsetDateTime` multi-valued fast field reader reader associated to
/// `field`.
///
/// If `field` is not a `crate::DateTime` multi-valued fast field, this method returns an Error.
pub fn dates(
&self,
field: Field,
) -> crate::Result<MultiValuedFastFieldReader<crate::DateTime>> {
/// If `field` is not a `time::OffsetDateTime` multi-valued fast field, this method returns an
/// Error.
pub fn dates(&self, field: Field) -> crate::Result<MultiValuedFastFieldReader<DateTime>> {
self.check_type(field, FastType::Date, Cardinality::MultiValues)?;
self.typed_fast_field_multi_reader(field)
}
@@ -219,7 +218,7 @@ impl FastFieldReaders {
)));
}
let fast_field_idx_file = self.fast_field_data(field, 0)?;
let idx_reader = BitpackedFastFieldReader::open(fast_field_idx_file)?;
let idx_reader = DynamicFastFieldReader::open(fast_field_idx_file)?;
let data = self.fast_field_data(field, 1)?;
BytesFastFieldReader::open(idx_reader, data)
} else {


@@ -4,9 +4,9 @@ use common::{BinarySerializable, CountingWriter};
pub use fastfield_codecs::bitpacked::{
BitpackedFastFieldSerializer, BitpackedFastFieldSerializerLegacy,
};
use fastfield_codecs::linearinterpol::LinearInterpolFastFieldSerializer;
use fastfield_codecs::multilinearinterpol::MultiLinearInterpolFastFieldSerializer;
use fastfield_codecs::piecewise_linear::PiecewiseLinearFastFieldSerializer;
pub use fastfield_codecs::{FastFieldCodecSerializer, FastFieldDataAccess, FastFieldStats};
use itertools::Itertools;
use crate::directory::{CompositeWrite, WritePtr};
use crate::schema::Field;
@@ -35,18 +35,31 @@ pub struct CompositeFastFieldSerializer {
composite_write: CompositeWrite<WritePtr>,
}
// use this, when this is merged and stabilized explicit_generic_args_with_impl_trait
#[derive(Debug)]
pub struct CodecEstimationResult<'a> {
pub ratio: f32,
pub name: &'a str,
pub id: u8,
}
// TODO: use this when this is merged and stabilized explicit_generic_args_with_impl_trait
// https://github.com/rust-lang/rust/pull/86176
fn codec_estimation<T: FastFieldCodecSerializer, A: FastFieldDataAccess>(
stats: FastFieldStats,
fastfield_accessor: &A,
estimations: &mut Vec<(f32, &str, u8)>,
) {
) -> CodecEstimationResult {
if !T::is_applicable(fastfield_accessor, stats.clone()) {
return;
return CodecEstimationResult {
ratio: f32::MAX,
name: T::NAME,
id: T::ID,
};
}
CodecEstimationResult {
ratio: T::estimate_compression_ratio(fastfield_accessor, stats),
name: T::NAME,
id: T::ID,
}
let (ratio, name, id) = (T::estimate(fastfield_accessor, stats), T::NAME, T::ID);
estimations.push((ratio, name, id));
}
impl CompositeFastFieldSerializer {
@@ -59,7 +72,7 @@ impl CompositeFastFieldSerializer {
/// Serialize data into a new u64 fast field. The best compression codec will be chosen
/// automatically.
pub fn create_auto_detect_u64_fast_field(
pub fn new_u64_fast_field_with_best_codec(
&mut self,
field: Field,
stats: FastFieldStats,
@@ -67,7 +80,7 @@ impl CompositeFastFieldSerializer {
data_iter_1: impl Iterator<Item = u64>,
data_iter_2: impl Iterator<Item = u64>,
) -> io::Result<()> {
self.create_auto_detect_u64_fast_field_with_idx(
self.new_u64_fast_field_with_idx_with_best_codec(
field,
stats,
fastfield_accessor,
@@ -78,7 +91,7 @@ impl CompositeFastFieldSerializer {
}
/// Serialize data into a new u64 fast field. The best compression codec will be chosen
/// automatically.
pub fn create_auto_detect_u64_fast_field_with_idx(
pub fn new_u64_fast_field_with_idx_with_best_codec(
&mut self,
field: Field,
stats: FastFieldStats,
@@ -88,42 +101,29 @@ impl CompositeFastFieldSerializer {
idx: usize,
) -> io::Result<()> {
let field_write = self.composite_write.for_field_with_idx(field, idx);
let mut estimations = vec![];
codec_estimation::<BitpackedFastFieldSerializer, _>(
stats.clone(),
&fastfield_accessor,
&mut estimations,
);
codec_estimation::<LinearInterpolFastFieldSerializer, _>(
stats.clone(),
&fastfield_accessor,
&mut estimations,
);
codec_estimation::<MultiLinearInterpolFastFieldSerializer, _>(
stats.clone(),
&fastfield_accessor,
&mut estimations,
);
if let Some(broken_estimation) = estimations.iter().find(|estimation| estimation.0.is_nan())
{
warn!(
"broken estimation for fast field codec {}",
broken_estimation.1
);
}
// removing nan values for codecs with broken calculations, and max values which disables
// codecs
estimations.retain(|estimation| !estimation.0.is_nan() && estimation.0 != f32::MAX);
estimations.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
let (_ratio, name, id) = estimations[0];
let estimations = vec![
codec_estimation::<BitpackedFastFieldSerializer, _>(stats.clone(), &fastfield_accessor),
codec_estimation::<PiecewiseLinearFastFieldSerializer, _>(
stats.clone(),
&fastfield_accessor,
),
];
let best_codec_result = estimations
.iter()
.sorted_by(|result_a, result_b| {
result_a
.ratio
.partial_cmp(&result_b.ratio)
.expect("Ratio cannot be nan.")
})
.next()
.expect("A codec must be present.");
debug!(
"choosing fast field codec {} for field_id {:?}",
name, field
); // todo print actual field name
id.serialize(field_write)?;
match name {
"Choosing fast field codec {} for field_id {:?} among {:?}",
best_codec_result.name, field, estimations,
);
best_codec_result.id.serialize(field_write)?;
match best_codec_result.name {
BitpackedFastFieldSerializer::NAME => {
BitpackedFastFieldSerializer::serialize(
field_write,
@@ -133,17 +133,8 @@ impl CompositeFastFieldSerializer {
data_iter_2,
)?;
}
LinearInterpolFastFieldSerializer::NAME => {
LinearInterpolFastFieldSerializer::serialize(
field_write,
&fastfield_accessor,
stats,
data_iter_1,
data_iter_2,
)?;
}
MultiLinearInterpolFastFieldSerializer::NAME => {
MultiLinearInterpolFastFieldSerializer::serialize(
PiecewiseLinearFastFieldSerializer::NAME => {
PiecewiseLinearFastFieldSerializer::serialize(
field_write,
&fastfield_accessor,
stats,
@@ -152,7 +143,7 @@ impl CompositeFastFieldSerializer {
)?;
}
_ => {
panic!("unknown fastfield serializer {}", name)
panic!("unknown fastfield serializer {}", best_codec_result.name)
}
};
field_write.flush()?;
@@ -197,7 +188,7 @@ impl CompositeFastFieldSerializer {
/// Closes the serializer
///
/// After this call the data must be persistently save on disk.
/// After this call the data must be persistently saved on disk.
pub fn close(self) -> io::Result<()> {
self.composite_write.close()
}
@@ -216,3 +207,45 @@ impl<'a, W: Write> FastBytesFieldSerializer<'a, W> {
self.write.flush()
}
}
#[cfg(test)]
mod tests {
use std::path::Path;
use common::BinarySerializable;
use fastfield_codecs::FastFieldStats;
use itertools::Itertools;
use super::CompositeFastFieldSerializer;
use crate::directory::{RamDirectory, WritePtr};
use crate::schema::Field;
use crate::Directory;
#[test]
fn new_u64_fast_field_with_best_codec() -> crate::Result<()> {
let directory: RamDirectory = RamDirectory::create();
let path = Path::new("test");
let write: WritePtr = directory.open_write(path)?;
let mut serializer = CompositeFastFieldSerializer::from_write(write)?;
let vals = (0..10000u64).into_iter().collect_vec();
let stats = FastFieldStats {
min_value: 0,
max_value: 9999,
num_vals: vals.len() as u64,
};
serializer.new_u64_fast_field_with_best_codec(
Field::from_field_id(0),
stats,
vals.clone(),
vals.clone().into_iter(),
vals.into_iter(),
)?;
serializer.close()?;
// get the codec id
let mut bytes = directory.open_read(path)?.read_bytes()?;
let codec_id = u8::deserialize(&mut bytes)?;
// Codec id = 4 is piecewise linear.
assert_eq!(codec_id, 4);
Ok(())
}
}


@@ -14,7 +14,7 @@ use crate::postings::UnorderedTermId;
use crate::schema::{Cardinality, Document, Field, FieldEntry, FieldType, Schema};
use crate::termdict::TermOrdinal;
/// The fastfieldswriter regroup all of the fast field writers.
/// The `FastFieldsWriter` groups all of the fast field writers.
pub struct FastFieldsWriter {
single_value_writers: Vec<IntFastFieldWriter>,
multi_values_writers: Vec<MultiValuedFastFieldWriter>,
@@ -298,7 +298,7 @@ impl IntFastFieldWriter {
let iter = doc_id_map
.iter_old_doc_ids()
.map(|doc_id| self.vals.get(doc_id as usize));
serializer.create_auto_detect_u64_fast_field(
serializer.new_u64_fast_field_with_best_codec(
self.field,
stats,
fastfield_accessor,
@@ -306,7 +306,7 @@ impl IntFastFieldWriter {
iter,
)?;
} else {
serializer.create_auto_detect_u64_fast_field(
serializer.new_u64_fast_field_with_best_codec(
self.field,
stats,
fastfield_accessor,


@@ -35,8 +35,7 @@ fn test_functional_store() -> crate::Result<()> {
let mut doc_set: Vec<u64> = Vec::new();
let mut doc_id = 0u64;
for iteration in 0..get_num_iterations() {
dbg!(iteration);
for _iteration in 0..get_num_iterations() {
let num_docs: usize = rng.gen_range(0..4);
if !doc_set.is_empty() {
let doc_to_remove_id = rng.gen_range(0..doc_set.len());

src/future_result.rs (new file, 130 lines)

@@ -0,0 +1,130 @@
use std::future::Future;
use std::pin::Pin;
use std::task::Poll;
use crate::TantivyError;
/// `FutureResult` is a handle that makes it possible to wait for the completion
/// of an ongoing task.
///
/// Contrary to some `Future`, it does not need to be polled for the task to
/// progress. Dropping the `FutureResult` does not cancel the task being executed
/// either.
///
/// - In a sync context, you can call `FutureResult::wait()`. The function
/// does not rely on `block_on`.
/// - In an async context, you can simply use `FutureResult` as a future.
pub struct FutureResult<T> {
inner: Inner<T>,
}
enum Inner<T> {
FailedBeforeStart(Option<TantivyError>),
InProgress {
receiver: oneshot::Receiver<crate::Result<T>>,
error_msg_if_failure: &'static str,
},
}
impl<T> From<TantivyError> for FutureResult<T> {
fn from(err: TantivyError) -> Self {
FutureResult {
inner: Inner::FailedBeforeStart(Some(err)),
}
}
}
impl<T> FutureResult<T> {
pub(crate) fn create(
error_msg_if_failure: &'static str,
) -> (Self, oneshot::Sender<crate::Result<T>>) {
let (sender, receiver) = oneshot::channel();
let inner: Inner<T> = Inner::InProgress {
receiver,
error_msg_if_failure,
};
(FutureResult { inner }, sender)
}
/// Blocks until the scheduled result is available.
///
/// In an async context, you should simply use `FutureResult` as a future.
pub fn wait(self) -> crate::Result<T> {
match self.inner {
Inner::FailedBeforeStart(err) => Err(err.unwrap()),
Inner::InProgress {
receiver,
error_msg_if_failure,
} => receiver.recv().unwrap_or_else(|_| {
Err(crate::TantivyError::SystemError(
error_msg_if_failure.to_string(),
))
}),
}
}
}
impl<T> Future for FutureResult<T> {
type Output = crate::Result<T>;
fn poll(self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Self::Output> {
unsafe {
match &mut Pin::get_unchecked_mut(self).inner {
Inner::FailedBeforeStart(err) => Poll::Ready(Err(err.take().unwrap())),
Inner::InProgress {
receiver,
error_msg_if_failure,
} => match Future::poll(Pin::new_unchecked(receiver), cx) {
Poll::Ready(oneshot_res) => {
let res = oneshot_res.unwrap_or_else(|_| {
Err(crate::TantivyError::SystemError(
error_msg_if_failure.to_string(),
))
});
Poll::Ready(res)
}
Poll::Pending => Poll::Pending,
},
}
}
}
}
#[cfg(test)]
mod tests {
use futures::executor::block_on;
use super::FutureResult;
use crate::TantivyError;
#[test]
fn test_scheduled_result_failed_to_schedule() {
let scheduled_result: FutureResult<()> = FutureResult::from(TantivyError::Poisoned);
let res = block_on(scheduled_result);
assert!(matches!(res, Err(TantivyError::Poisoned)));
}
#[test]
fn test_scheduled_result_error() {
let (scheduled_result, tx): (FutureResult<()>, _) = FutureResult::create("failed");
drop(tx);
let res = block_on(scheduled_result);
assert!(matches!(res, Err(TantivyError::SystemError(_))));
}
#[test]
fn test_scheduled_result_sent_success() {
let (scheduled_result, tx): (FutureResult<u64>, _) = FutureResult::create("failed");
tx.send(Ok(2u64)).unwrap();
assert_eq!(block_on(scheduled_result).unwrap(), 2u64);
}
#[test]
fn test_scheduled_result_sent_error() {
let (scheduled_result, tx): (FutureResult<u64>, _) = FutureResult::create("failed");
tx.send(Err(TantivyError::Poisoned)).unwrap();
let res = block_on(scheduled_result);
assert!(matches!(res, Err(TantivyError::Poisoned)));
}
}
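A minimal sketch, not part of the patch, of how a caller consumes the new handle, using `garbage_collect_files()` from the `IndexWriter` hunks further down (which now returns a `FutureResult`):

```rust
use tantivy::IndexWriter;

// Sync call site: wait() blocks on the underlying oneshot channel, no executor needed.
fn gc_blocking(writer: &IndexWriter) -> tantivy::Result<()> {
    let _report = writer.garbage_collect_files().wait()?;
    Ok(())
}

// Async call site: FutureResult implements Future, so the same handle can simply be awaited.
async fn gc_awaited(writer: &IndexWriter) -> tantivy::Result<()> {
    let _report = writer.garbage_collect_files().await?;
    Ok(())
}
```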


@@ -221,7 +221,7 @@ impl DeleteCursor {
}
/// Advance to the next delete operation.
/// Returns true iff there is such an operation.
/// Returns true if and only if there is such an operation.
pub fn advance(&mut self) -> bool {
if self.load_block_if_required() {
self.pos += 1;


@@ -168,12 +168,12 @@ mod tests_indexsorting {
let my_string_field = schema_builder.add_text_field("string_field", STRING | STORED);
let my_number = schema_builder.add_u64_field(
"my_number",
IntOptions::default().set_fast(Cardinality::SingleValue),
NumericOptions::default().set_fast(Cardinality::SingleValue),
);
let multi_numbers = schema_builder.add_u64_field(
"multi_numbers",
IntOptions::default().set_fast(Cardinality::MultiValues),
NumericOptions::default().set_fast(Cardinality::MultiValues),
);
let schema = schema_builder.build();


@@ -5,8 +5,6 @@ use std::thread::JoinHandle;
use common::BitSet;
use crossbeam::channel;
use futures::executor::block_on;
use futures::future::Future;
use smallvec::smallvec;
use super::operation::{AddOperation, UserOperation};
@@ -24,7 +22,7 @@ use crate::indexer::operation::DeleteOperation;
use crate::indexer::stamper::Stamper;
use crate::indexer::{MergePolicy, SegmentEntry, SegmentWriter};
use crate::schema::{Document, IndexRecordOption, Term};
use crate::Opstamp;
use crate::{FutureResult, Opstamp};
// Size of the margin for the `memory_arena`. A segment is closed when the remaining memory
// in the `memory_arena` goes below MARGIN_IN_BYTES.
@@ -214,7 +212,7 @@ fn index_documents(
meta.untrack_temp_docstore();
// update segment_updater inventory to remove tempstore
let segment_entry = SegmentEntry::new(meta, delete_cursor, alive_bitset_opt);
block_on(segment_updater.schedule_add_segment(segment_entry))?;
segment_updater.schedule_add_segment(segment_entry).wait()?;
Ok(())
}
@@ -368,7 +366,9 @@ impl IndexWriter {
pub fn add_segment(&self, segment_meta: SegmentMeta) -> crate::Result<()> {
let delete_cursor = self.delete_queue.cursor();
let segment_entry = SegmentEntry::new(segment_meta, delete_cursor, None);
block_on(self.segment_updater.schedule_add_segment(segment_entry))
self.segment_updater
.schedule_add_segment(segment_entry)
.wait()
}
/// Creates a new segment.
@@ -465,8 +465,8 @@ impl IndexWriter {
}
/// Detects and removes the files that are not used by the index anymore.
pub async fn garbage_collect_files(&self) -> crate::Result<GarbageCollectionResult> {
self.segment_updater.schedule_garbage_collect().await
pub fn garbage_collect_files(&self) -> FutureResult<GarbageCollectionResult> {
self.segment_updater.schedule_garbage_collect()
}
/// Deletes all documents from the index
@@ -516,13 +516,10 @@ impl IndexWriter {
/// Merges a given list of segments
///
/// `segment_ids` is required to be non-empty.
pub fn merge(
&mut self,
segment_ids: &[SegmentId],
) -> impl Future<Output = crate::Result<SegmentMeta>> {
pub fn merge(&mut self, segment_ids: &[SegmentId]) -> FutureResult<SegmentMeta> {
let merge_operation = self.segment_updater.make_merge_operation(segment_ids);
let segment_updater = self.segment_updater.clone();
async move { segment_updater.start_merge(merge_operation)?.await }
segment_updater.start_merge(merge_operation)
}
/// Closes the current document channel send.
@@ -781,7 +778,6 @@ impl Drop for IndexWriter {
mod tests {
use std::collections::{HashMap, HashSet};
use futures::executor::block_on;
use proptest::prelude::*;
use proptest::prop_oneof;
use proptest::strategy::Strategy;
@@ -794,8 +790,8 @@ mod tests {
use crate::indexer::NoMergePolicy;
use crate::query::{QueryParser, TermQuery};
use crate::schema::{
self, Cardinality, Facet, FacetOptions, IndexRecordOption, IntOptions, TextFieldIndexing,
TextOptions, FAST, INDEXED, STORED, STRING, TEXT,
self, Cardinality, Facet, FacetOptions, IndexRecordOption, NumericOptions,
TextFieldIndexing, TextOptions, FAST, INDEXED, STORED, STRING, TEXT,
};
use crate::{DocAddress, Index, IndexSettings, IndexSortByField, Order, ReloadPolicy, Term};
@@ -1389,6 +1385,7 @@ mod tests {
) -> crate::Result<()> {
let mut schema_builder = schema::Schema::builder();
let id_field = schema_builder.add_u64_field("id", FAST | INDEXED | STORED);
let bytes_field = schema_builder.add_bytes_field("bytes", FAST | INDEXED | STORED);
let text_field = schema_builder.add_text_field(
"text_field",
TextOptions::default()
@@ -1403,7 +1400,7 @@ mod tests {
let multi_numbers = schema_builder.add_u64_field(
"multi_numbers",
IntOptions::default()
NumericOptions::default()
.set_fast(Cardinality::MultiValues)
.set_stored(),
);
@@ -1435,8 +1432,14 @@ mod tests {
match op {
IndexingOp::AddDoc { id } => {
let facet = Facet::from(&("/cola/".to_string() + &id.to_string()));
index_writer
.add_document(doc!(id_field=>id, multi_numbers=> id, multi_numbers => id, text_field => id.to_string(), facet_field => facet, large_text_field=> LOREM))?;
index_writer.add_document(doc!(id_field=>id,
bytes_field => id.to_le_bytes().as_slice(),
multi_numbers=> id,
multi_numbers => id,
text_field => id.to_string(),
facet_field => facet,
large_text_field=> LOREM
))?;
}
IndexingOp::DeleteDoc { id } => {
index_writer.delete_term(Term::from_field_u64(id_field, id));
@@ -1449,7 +1452,7 @@ mod tests {
.searchable_segment_ids()
.expect("Searchable segments failed.");
if segment_ids.len() >= 2 {
block_on(index_writer.merge(&segment_ids)).unwrap();
index_writer.merge(&segment_ids).wait().unwrap();
assert!(index_writer.segment_updater().wait_merging_thread().is_ok());
}
}
@@ -1465,7 +1468,7 @@ mod tests {
.searchable_segment_ids()
.expect("Searchable segments failed.");
if segment_ids.len() >= 2 {
block_on(index_writer.merge(&segment_ids)).unwrap();
index_writer.merge(&segment_ids).wait().unwrap();
assert!(index_writer.wait_merging_threads().is_ok());
}
}


@@ -0,0 +1,416 @@
use fnv::FnvHashMap;
use murmurhash32::murmurhash2;
use crate::fastfield::FastValue;
use crate::postings::{IndexingContext, IndexingPosition, PostingsWriter};
use crate::schema::term::{JSON_END_OF_PATH, JSON_PATH_SEGMENT_SEP};
use crate::schema::Type;
use crate::time::format_description::well_known::Rfc3339;
use crate::time::{OffsetDateTime, UtcOffset};
use crate::tokenizer::TextAnalyzer;
use crate::{DateTime, DocId, Term};
/// This object is a map storing the last position for a given path for the current document
/// being indexed.
///
/// It is key to solve the following problem:
/// If we index a JsonObject emitting several terms with the same path
/// we do not want to create false positive in phrase queries.
///
/// For instance:
///
/// ```json
/// {"bands": [
/// {"band_name": "Elliot Smith"},
/// {"band_name": "The Who"},
/// ]}
/// ```
///
/// If we are careless and index each band name independently,
/// `Elliot` and `The` will end up indexed at position 0, and `Smith` and `Who` will be indexed at
/// position 1.
/// As a result, with lemmatization, "The Smiths" will match our object.
///
/// Worse, if the same term appears in the second object, a non-increasing value would be pushed
/// to the position recorder, probably provoking a panic.
///
/// This problem is solved for regular multivalued objects by offsetting the position
/// of values with a position gap. Here we would like `The` and `Who` to get indexed at
/// positions 2 and 3 respectively.
///
/// With regular fields, we sort the fields beforehand, so that all terms with the same
/// path are indexed consecutively.
///
/// In a JSON object, we do not have this comfort, so we need to record these position offsets in
/// a map.
///
/// Note that using a single position for the entire object would not hurt correctness.
/// It would however hurt compression.
///
/// We can therefore afford to work with a map that is not perfect. It is fine if several
/// paths map to the same index position as long as the probability is relatively low.
#[derive(Default)]
struct IndexingPositionsPerPath {
positions_per_path: FnvHashMap<u32, IndexingPosition>,
}
impl IndexingPositionsPerPath {
fn get_position(&mut self, term: &Term) -> &mut IndexingPosition {
self.positions_per_path
.entry(murmurhash2(term.as_slice()))
.or_insert_with(Default::default)
}
}
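The comment above carries the key reasoning, so here is a standalone sketch of the same idea in plain Rust (a simplified illustration, not tantivy's implementation):

```rust
use std::collections::HashMap;

// Track the next indexing position separately for each JSON path so that
// repeated values under the same path keep strictly increasing positions.
fn main() {
    let mut next_position: HashMap<&str, u32> = HashMap::new();
    let values = [("bands.band_name", "Elliot Smith"), ("bands.band_name", "The Who")];
    for (path, text) in values {
        let pos = next_position.entry(path).or_insert(0);
        for token in text.split_whitespace() {
            println!("path={} token={} position={}", path, token, pos);
            *pos += 1;
        }
        // Leave a position gap between values of the same path so that a
        // phrase query cannot match across two distinct JSON objects.
        *pos += 2;
    }
}
```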
pub(crate) fn index_json_values<'a>(
doc: DocId,
json_values: impl Iterator<Item = crate::Result<&'a serde_json::Map<String, serde_json::Value>>>,
text_analyzer: &TextAnalyzer,
term_buffer: &mut Term,
postings_writer: &mut dyn PostingsWriter,
ctx: &mut IndexingContext,
) -> crate::Result<()> {
let mut json_term_writer = JsonTermWriter::wrap(term_buffer);
let mut positions_per_path: IndexingPositionsPerPath = Default::default();
for json_value_res in json_values {
let json_value = json_value_res?;
index_json_object(
doc,
json_value,
text_analyzer,
&mut json_term_writer,
postings_writer,
ctx,
&mut positions_per_path,
);
}
Ok(())
}
fn index_json_object<'a>(
doc: DocId,
json_value: &serde_json::Map<String, serde_json::Value>,
text_analyzer: &TextAnalyzer,
json_term_writer: &mut JsonTermWriter<'a>,
postings_writer: &mut dyn PostingsWriter,
ctx: &mut IndexingContext,
positions_per_path: &mut IndexingPositionsPerPath,
) {
for (json_path_segment, json_value) in json_value {
json_term_writer.push_path_segment(json_path_segment);
index_json_value(
doc,
json_value,
text_analyzer,
json_term_writer,
postings_writer,
ctx,
positions_per_path,
);
json_term_writer.pop_path_segment();
}
}
fn index_json_value<'a>(
doc: DocId,
json_value: &serde_json::Value,
text_analyzer: &TextAnalyzer,
json_term_writer: &mut JsonTermWriter<'a>,
postings_writer: &mut dyn PostingsWriter,
ctx: &mut IndexingContext,
positions_per_path: &mut IndexingPositionsPerPath,
) {
match json_value {
serde_json::Value::Null => {}
serde_json::Value::Bool(val_bool) => {
let bool_u64 = if *val_bool { 1u64 } else { 0u64 };
json_term_writer.set_fast_value(bool_u64);
postings_writer.subscribe(doc, 0u32, json_term_writer.term(), ctx);
}
serde_json::Value::Number(number) => {
if let Some(number_u64) = number.as_u64() {
json_term_writer.set_fast_value(number_u64);
} else if let Some(number_i64) = number.as_i64() {
json_term_writer.set_fast_value(number_i64);
} else if let Some(number_f64) = number.as_f64() {
json_term_writer.set_fast_value(number_f64);
}
postings_writer.subscribe(doc, 0u32, json_term_writer.term(), ctx);
}
serde_json::Value::String(text) => match infer_type_from_str(text) {
TextOrDateTime::Text(text) => {
let mut token_stream = text_analyzer.token_stream(text);
// TODO make sure the chain position works out.
json_term_writer.close_path_and_set_type(Type::Str);
let indexing_position = positions_per_path.get_position(json_term_writer.term());
postings_writer.index_text(
doc,
&mut *token_stream,
json_term_writer.term_buffer,
ctx,
indexing_position,
);
}
TextOrDateTime::DateTime(dt) => {
json_term_writer.set_fast_value(DateTime::new_utc(dt));
postings_writer.subscribe(doc, 0u32, json_term_writer.term(), ctx);
}
},
serde_json::Value::Array(arr) => {
for val in arr {
index_json_value(
doc,
val,
text_analyzer,
json_term_writer,
postings_writer,
ctx,
positions_per_path,
);
}
}
serde_json::Value::Object(map) => {
index_json_object(
doc,
map,
text_analyzer,
json_term_writer,
postings_writer,
ctx,
positions_per_path,
);
}
}
}
enum TextOrDateTime<'a> {
Text(&'a str),
DateTime(OffsetDateTime),
}
fn infer_type_from_str(text: &str) -> TextOrDateTime {
match OffsetDateTime::parse(text, &Rfc3339) {
Ok(dt) => {
let dt_utc = dt.to_offset(UtcOffset::UTC);
TextOrDateTime::DateTime(dt_utc)
}
Err(_) => TextOrDateTime::Text(text),
}
}
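A short standalone sketch of this sniffing rule, assuming the `time` crate (with parsing enabled) as re-exported by this patch: a JSON string is treated as a date only if it parses as RFC 3339, otherwise it falls back to text.

```rust
use time::format_description::well_known::Rfc3339;
use time::{OffsetDateTime, UtcOffset};

// Classify a JSON string the same way index_json_value does above.
fn classify(text: &str) -> String {
    match OffsetDateTime::parse(text, &Rfc3339) {
        // Like the patch, normalize any offset to UTC before indexing.
        Ok(dt) => format!("date in UTC: {:?}", dt.to_offset(UtcOffset::UTC)),
        Err(_) => format!("text: {}", text),
    }
}

fn main() {
    println!("{}", classify("1985-04-12T23:20:50.52Z")); // date
    println!("{}", classify("red"));                     // text
}
```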
pub struct JsonTermWriter<'a> {
term_buffer: &'a mut Term,
path_stack: Vec<usize>,
}
impl<'a> JsonTermWriter<'a> {
pub fn wrap(term_buffer: &'a mut Term) -> Self {
term_buffer.clear_with_type(Type::Json);
let mut path_stack = Vec::with_capacity(10);
path_stack.push(5);
Self {
term_buffer,
path_stack,
}
}
fn trim_to_end_of_path(&mut self) {
let end_of_path = *self.path_stack.last().unwrap();
self.term_buffer.truncate(end_of_path);
}
pub fn close_path_and_set_type(&mut self, typ: Type) {
self.trim_to_end_of_path();
let buffer = self.term_buffer.as_mut();
let buffer_len = buffer.len();
buffer[buffer_len - 1] = JSON_END_OF_PATH;
buffer.push(typ.to_code());
}
pub fn push_path_segment(&mut self, segment: &str) {
// the path stack should never be empty.
self.trim_to_end_of_path();
let buffer = self.term_buffer.as_mut();
let buffer_len = buffer.len();
if self.path_stack.len() > 1 {
buffer[buffer_len - 1] = JSON_PATH_SEGMENT_SEP;
}
buffer.extend(segment.as_bytes());
buffer.push(JSON_PATH_SEGMENT_SEP);
self.path_stack.push(buffer.len());
}
pub fn pop_path_segment(&mut self) {
self.path_stack.pop();
assert!(!self.path_stack.is_empty());
self.trim_to_end_of_path();
}
/// Returns the json path of the term being currently built.
#[cfg(test)]
pub(crate) fn path(&self) -> &[u8] {
let end_of_path = self.path_stack.last().cloned().unwrap_or(6);
&self.term().as_slice()[5..end_of_path - 1]
}
pub fn set_fast_value<T: FastValue>(&mut self, val: T) {
self.close_path_and_set_type(T::to_type());
self.term_buffer
.as_mut()
.extend_from_slice(val.to_u64().to_be_bytes().as_slice());
}
#[cfg(test)]
pub(crate) fn set_str(&mut self, text: &str) {
self.close_path_and_set_type(Type::Str);
self.term_buffer.as_mut().extend_from_slice(text.as_bytes());
}
pub fn term(&self) -> &Term {
self.term_buffer
}
}
#[cfg(test)]
mod tests {
use super::JsonTermWriter;
use crate::schema::{Field, Type};
use crate::Term;
#[test]
fn test_json_writer() {
let field = Field::from_field_id(1);
let mut term = Term::new();
term.set_field(Type::Json, field);
let mut json_writer = JsonTermWriter::wrap(&mut term);
json_writer.push_path_segment("attributes");
json_writer.push_path_segment("color");
json_writer.set_str("red");
assert_eq!(
format!("{:?}", json_writer.term()),
"Term(type=Json, field=1, path=attributes.color, vtype=Str, \"red\")"
);
json_writer.set_str("blue");
assert_eq!(
format!("{:?}", json_writer.term()),
"Term(type=Json, field=1, path=attributes.color, vtype=Str, \"blue\")"
);
json_writer.pop_path_segment();
json_writer.push_path_segment("dimensions");
json_writer.push_path_segment("width");
json_writer.set_fast_value(400i64);
assert_eq!(
format!("{:?}", json_writer.term()),
"Term(type=Json, field=1, path=attributes.dimensions.width, vtype=I64, 400)"
);
json_writer.pop_path_segment();
json_writer.push_path_segment("height");
json_writer.set_fast_value(300i64);
assert_eq!(
format!("{:?}", json_writer.term()),
"Term(type=Json, field=1, path=attributes.dimensions.height, vtype=I64, 300)"
);
}
#[test]
fn test_string_term() {
let field = Field::from_field_id(1);
let mut term = Term::new();
term.set_field(Type::Json, field);
let mut json_writer = JsonTermWriter::wrap(&mut term);
json_writer.push_path_segment("color");
json_writer.set_str("red");
assert_eq!(
json_writer.term().as_slice(),
b"\x00\x00\x00\x01jcolor\x00sred"
)
}
#[test]
fn test_i64_term() {
let field = Field::from_field_id(1);
let mut term = Term::new();
term.set_field(Type::Json, field);
let mut json_writer = JsonTermWriter::wrap(&mut term);
json_writer.push_path_segment("color");
json_writer.set_fast_value(-4i64);
assert_eq!(
json_writer.term().as_slice(),
b"\x00\x00\x00\x01jcolor\x00i\x7f\xff\xff\xff\xff\xff\xff\xfc"
)
}
#[test]
fn test_u64_term() {
let field = Field::from_field_id(1);
let mut term = Term::new();
term.set_field(Type::Json, field);
let mut json_writer = JsonTermWriter::wrap(&mut term);
json_writer.push_path_segment("color");
json_writer.set_fast_value(4u64);
assert_eq!(
json_writer.term().as_slice(),
b"\x00\x00\x00\x01jcolor\x00u\x00\x00\x00\x00\x00\x00\x00\x04"
)
}
#[test]
fn test_f64_term() {
let field = Field::from_field_id(1);
let mut term = Term::new();
term.set_field(Type::Json, field);
let mut json_writer = JsonTermWriter::wrap(&mut term);
json_writer.push_path_segment("color");
json_writer.set_fast_value(4.0f64);
assert_eq!(
json_writer.term().as_slice(),
b"\x00\x00\x00\x01jcolor\x00f\xc0\x10\x00\x00\x00\x00\x00\x00"
)
}
#[test]
fn test_push_after_set_path_segment() {
let field = Field::from_field_id(1);
let mut term = Term::new();
term.set_field(Type::Json, field);
let mut json_writer = JsonTermWriter::wrap(&mut term);
json_writer.push_path_segment("attribute");
json_writer.set_str("something");
json_writer.push_path_segment("color");
json_writer.set_str("red");
assert_eq!(
json_writer.term().as_slice(),
b"\x00\x00\x00\x01jattribute\x01color\x00sred"
)
}
#[test]
fn test_pop_segment() {
let field = Field::from_field_id(1);
let mut term = Term::new();
term.set_field(Type::Json, field);
let mut json_writer = JsonTermWriter::wrap(&mut term);
json_writer.push_path_segment("color");
json_writer.push_path_segment("hue");
json_writer.pop_path_segment();
json_writer.set_str("red");
assert_eq!(
json_writer.term().as_slice(),
b"\x00\x00\x00\x01jcolor\x00sred"
)
}
#[test]
fn test_json_writer_path() {
let field = Field::from_field_id(1);
let mut term = Term::new();
term.set_field(Type::Json, field);
let mut json_writer = JsonTermWriter::wrap(&mut term);
json_writer.push_path_segment("color");
assert_eq!(json_writer.path(), b"color");
json_writer.push_path_segment("hue");
assert_eq!(json_writer.path(), b"color\x01hue");
json_writer.set_str("pink");
assert_eq!(json_writer.path(), b"color\x01hue");
}
}
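For readers decoding the raw byte assertions above, here is a small standalone helper (not an exported tantivy API) that rebuilds the same layout: field id as a big-endian `u32`, the `j` (Json) type code, path segments separated by `0x01` and terminated by `0x00`, then the value type code (`s` for Str) and the value bytes.

```rust
// Rebuild the expected term bytes from the tests above.
fn json_str_term(field_id: u32, path: &[&str], value: &str) -> Vec<u8> {
    let mut bytes = field_id.to_be_bytes().to_vec(); // 4-byte big-endian field id
    bytes.push(b'j');                                // Json type code
    bytes.extend_from_slice(path.join("\u{1}").as_bytes()); // path, 0x01-separated
    bytes.push(0u8);                                 // end-of-path marker
    bytes.push(b's');                                // value type code: Str
    bytes.extend_from_slice(value.as_bytes());
    bytes
}

fn main() {
    // Mirrors `test_string_term`.
    assert_eq!(
        json_str_term(1, &["color"], "red").as_slice(),
        b"\x00\x00\x00\x01jcolor\x00sred"
    );
    // Mirrors `test_push_after_set_path_segment`.
    assert_eq!(
        json_str_term(1, &["attribute", "color"], "red").as_slice(),
        b"\x00\x00\x00\x01jattribute\x01color\x00sred"
    );
}
```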


@@ -278,7 +278,7 @@ impl IndexMerger {
mut term_ord_mappings: HashMap<Field, TermOrdinalMapping>,
doc_id_mapping: &SegmentDocIdMapping,
) -> crate::Result<()> {
debug_time!("write_fast_fields");
debug_time!("write-fast-fields");
for (field, field_entry) in self.schema.fields() {
let field_type = field_entry.field_type();
@@ -307,16 +307,16 @@ impl IndexMerger {
}
None => {}
},
FieldType::Str(_) => {
// We don't handle str fast field for the moment
// They can be implemented using what is done
// for facets in the future.
}
FieldType::Bytes(byte_options) => {
if byte_options.is_fast() {
self.write_bytes_fast_field(field, fast_field_serializer, doc_id_mapping)?;
}
}
FieldType::Str(_) | FieldType::JsonObject(_) => {
// We don't handle json / string fast field for the moment
// They can be implemented using what is done
// for facets in the future
}
}
}
Ok(())
@@ -384,7 +384,7 @@ impl IndexMerger {
let fast_field_reader = &fast_field_readers[*reader_ordinal as usize];
fast_field_reader.get(*doc_id)
});
fast_field_serializer.create_auto_detect_u64_fast_field(
fast_field_serializer.new_u64_fast_field_with_best_codec(
field,
stats,
fastfield_accessor,
@@ -551,7 +551,7 @@ impl IndexMerger {
}
offsets.push(offset);
fast_field_serializer.create_auto_detect_u64_fast_field(
fast_field_serializer.new_u64_fast_field_with_best_codec(
field,
stats,
&offsets[..],
@@ -597,7 +597,7 @@ impl IndexMerger {
fast_field_serializer: &mut CompositeFastFieldSerializer,
doc_id_mapping: &SegmentDocIdMapping,
) -> crate::Result<()> {
debug_time!("write_hierarchical_facet_field");
debug_time!("write-hierarchical-facet-field");
// Multifastfield consists of 2 fastfields.
// The first serves as an index into the second one and is strictly increasing.
@@ -771,7 +771,7 @@ impl IndexMerger {
ff_reader.get_vals(*doc_id, &mut vals);
vals.into_iter()
});
fast_field_serializer.create_auto_detect_u64_fast_field_with_idx(
fast_field_serializer.new_u64_fast_field_with_idx_with_best_codec(
field,
stats,
fastfield_accessor,
@@ -827,7 +827,7 @@ impl IndexMerger {
fieldnorm_reader: Option<FieldNormReader>,
doc_id_mapping: &SegmentDocIdMapping,
) -> crate::Result<Option<TermOrdinalMapping>> {
debug_time!("write_postings_for_field");
debug_time!("write-postings-for-field");
let mut positions_buffer: Vec<u32> = Vec::with_capacity(1_000);
let mut delta_computer = DeltaComputer::new();
@@ -1023,7 +1023,8 @@ impl IndexMerger {
store_writer: &mut StoreWriter,
doc_id_mapping: &SegmentDocIdMapping,
) -> crate::Result<()> {
debug_time!("write_storable_fields");
debug_time!("write-storable-fields");
debug!("write-storable-field");
let store_readers: Vec<_> = self
.readers
@@ -1036,6 +1037,7 @@ impl IndexMerger {
.map(|(i, store)| store.iter_raw(self.readers[i].alive_bitset()))
.collect();
if !doc_id_mapping.is_trivial() {
debug!("non-trivial-doc-id-mapping");
for (old_doc_id, reader_ordinal) in doc_id_mapping.iter() {
let doc_bytes_it = &mut document_iterators[*reader_ordinal as usize];
if let Some(doc_bytes_res) = doc_bytes_it.next() {
@@ -1050,6 +1052,7 @@ impl IndexMerger {
}
}
} else {
debug!("trivial-doc-id-mapping");
for reader in &self.readers {
let store_reader = reader.get_store_reader()?;
if reader.has_deletes()
@@ -1099,10 +1102,11 @@ impl IndexMerger {
} else {
self.get_doc_id_from_concatenated_data()?
};
debug!("write-fieldnorms");
if let Some(fieldnorms_serializer) = serializer.extract_fieldnorms_serializer() {
self.write_fieldnorms(fieldnorms_serializer, &doc_id_mapping)?;
}
debug!("write-postings");
let fieldnorm_data = serializer
.segment()
.open_read(SegmentComponent::FieldNorms)?;
@@ -1112,12 +1116,15 @@ impl IndexMerger {
fieldnorm_readers,
&doc_id_mapping,
)?;
debug!("write-fastfields");
self.write_fast_fields(
serializer.get_fast_field_serializer(),
term_ord_mappings,
&doc_id_mapping,
)?;
debug!("write-storagefields");
self.write_storable_fields(serializer.get_store_writer(), &doc_id_mapping)?;
debug!("close-serializer");
serializer.close()?;
Ok(self.max_doc)
}
@@ -1126,7 +1133,6 @@ impl IndexMerger {
#[cfg(test)]
mod tests {
use byteorder::{BigEndian, ReadBytesExt};
use futures::executor::block_on;
use schema::FAST;
use crate::collector::tests::{
@@ -1137,12 +1143,13 @@ mod tests {
use crate::fastfield::FastFieldReader;
use crate::query::{AllQuery, BooleanQuery, Scorer, TermQuery};
use crate::schema::{
Cardinality, Document, Facet, FacetOptions, IndexRecordOption, IntOptions, Term,
Cardinality, Document, Facet, FacetOptions, IndexRecordOption, NumericOptions, Term,
TextFieldIndexing, INDEXED, TEXT,
};
use crate::time::OffsetDateTime;
use crate::{
assert_nearly_equals, schema, DocAddress, DocSet, IndexSettings, IndexSortByField,
IndexWriter, Order, Searcher, SegmentId,
assert_nearly_equals, schema, DateTime, DocAddress, DocSet, IndexSettings,
IndexSortByField, IndexWriter, Order, Searcher, SegmentId,
};
#[test]
@@ -1150,26 +1157,24 @@ mod tests {
let mut schema_builder = schema::Schema::builder();
let text_fieldtype = schema::TextOptions::default()
.set_indexing_options(
TextFieldIndexing::default()
.set_tokenizer("default")
.set_index_option(IndexRecordOption::WithFreqs),
TextFieldIndexing::default().set_index_option(IndexRecordOption::WithFreqs),
)
.set_stored();
let text_field = schema_builder.add_text_field("text", text_fieldtype);
let date_field = schema_builder.add_date_field("date", INDEXED);
let score_fieldtype = schema::IntOptions::default().set_fast(Cardinality::SingleValue);
let score_fieldtype = schema::NumericOptions::default().set_fast(Cardinality::SingleValue);
let score_field = schema_builder.add_u64_field("score", score_fieldtype);
let bytes_score_field = schema_builder.add_bytes_field("score_bytes", FAST);
let index = Index::create_in_ram(schema_builder.build());
let reader = index.reader()?;
let curr_time = chrono::Utc::now();
let curr_time = OffsetDateTime::now_utc();
{
let mut index_writer = index.writer_for_tests()?;
// writing the segment
index_writer.add_document(doc!(
text_field => "af b",
score_field => 3u64,
date_field => curr_time,
date_field => DateTime::new_utc(curr_time),
bytes_score_field => 3u32.to_be_bytes().as_ref()
))?;
index_writer.add_document(doc!(
@@ -1186,7 +1191,7 @@ mod tests {
// writing the segment
index_writer.add_document(doc!(
text_field => "af b",
date_field => curr_time,
date_field => DateTime::new_utc(curr_time),
score_field => 11u64,
bytes_score_field => 11u32.to_be_bytes().as_ref()
))?;
@@ -1202,7 +1207,7 @@ mod tests {
.searchable_segment_ids()
.expect("Searchable segments failed.");
let mut index_writer = index.writer_for_tests()?;
block_on(index_writer.merge(&segment_ids))?;
index_writer.merge(&segment_ids).wait()?;
index_writer.wait_merging_threads()?;
}
{
@@ -1242,7 +1247,10 @@ mod tests {
]
);
assert_eq!(
get_doc_ids(vec![Term::from_field_date(date_field, &curr_time)])?,
get_doc_ids(vec![Term::from_field_date(
date_field,
DateTime::new_utc(curr_time)
)])?,
vec![DocAddress::new(0, 0), DocAddress::new(0, 3)]
);
}
@@ -1306,7 +1314,7 @@ mod tests {
)
.set_stored();
let text_field = schema_builder.add_text_field("text", text_fieldtype);
let score_fieldtype = schema::IntOptions::default().set_fast(Cardinality::SingleValue);
let score_fieldtype = schema::NumericOptions::default().set_fast(Cardinality::SingleValue);
let score_field = schema_builder.add_u64_field("score", score_fieldtype);
let bytes_score_field = schema_builder.add_bytes_field("score_bytes", FAST);
let index = Index::create_in_ram(schema_builder.build());
@@ -1451,7 +1459,7 @@ mod tests {
{
// merging the segments
let segment_ids = index.searchable_segment_ids()?;
block_on(index_writer.merge(&segment_ids))?;
index_writer.merge(&segment_ids).wait()?;
reader.reload()?;
let searcher = reader.searcher();
assert_eq!(searcher.segment_readers().len(), 1);
@@ -1544,7 +1552,7 @@ mod tests {
{
// Test merging a single segment in order to remove deletes.
let segment_ids = index.searchable_segment_ids()?;
block_on(index_writer.merge(&segment_ids))?;
index_writer.merge(&segment_ids).wait()?;
reader.reload()?;
let searcher = reader.searcher();
@@ -1666,7 +1674,7 @@ mod tests {
fn test_merge_facets(index_settings: Option<IndexSettings>, force_segment_value_overlap: bool) {
let mut schema_builder = schema::Schema::builder();
let facet_field = schema_builder.add_facet_field("facet", FacetOptions::default());
let int_options = IntOptions::default()
let int_options = NumericOptions::default()
.set_fast(Cardinality::SingleValue)
.set_indexed();
let int_field = schema_builder.add_u64_field("intval", int_options);
@@ -1764,7 +1772,10 @@ mod tests {
.searchable_segment_ids()
.expect("Searchable segments failed.");
let mut index_writer = index.writer_for_tests().unwrap();
block_on(index_writer.merge(&segment_ids)).expect("Merging failed");
index_writer
.merge(&segment_ids)
.wait()
.expect("Merging failed");
index_writer.wait_merging_threads().unwrap();
reader.reload().unwrap();
test_searcher(
@@ -1819,7 +1830,7 @@ mod tests {
let segment_ids = index
.searchable_segment_ids()
.expect("Searchable segments failed.");
block_on(index_writer.merge(&segment_ids))?;
index_writer.merge(&segment_ids).wait()?;
reader.reload()?;
// commit has not been called yet. The document should still be
// there.
@@ -1830,7 +1841,7 @@ mod tests {
#[test]
fn test_merge_multivalued_int_fields_all_deleted() -> crate::Result<()> {
let mut schema_builder = schema::Schema::builder();
let int_options = IntOptions::default()
let int_options = NumericOptions::default()
.set_fast(Cardinality::MultiValues)
.set_indexed();
let int_field = schema_builder.add_u64_field("intvals", int_options);
@@ -1846,7 +1857,7 @@ mod tests {
index_writer.commit()?;
index_writer.delete_term(Term::from_field_u64(int_field, 1));
let segment_ids = index.searchable_segment_ids()?;
block_on(index_writer.merge(&segment_ids))?;
index_writer.merge(&segment_ids).wait()?;
// assert delete has not been committed
reader.reload()?;
@@ -1867,7 +1878,7 @@ mod tests {
#[test]
fn test_merge_multivalued_int_fields_simple() -> crate::Result<()> {
let mut schema_builder = schema::Schema::builder();
let int_options = IntOptions::default()
let int_options = NumericOptions::default()
.set_fast(Cardinality::MultiValues)
.set_indexed();
let int_field = schema_builder.add_u64_field("intvals", int_options);
@@ -1947,7 +1958,7 @@ mod tests {
{
let segment_ids = index.searchable_segment_ids()?;
let mut index_writer = index.writer_for_tests()?;
block_on(index_writer.merge(&segment_ids))?;
index_writer.merge(&segment_ids).wait()?;
index_writer.wait_merging_threads()?;
}
reader.reload()?;
@@ -1994,7 +2005,7 @@ mod tests {
fn merges_f64_fast_fields_correctly() -> crate::Result<()> {
let mut builder = schema::SchemaBuilder::new();
let fast_multi = IntOptions::default().set_fast(Cardinality::MultiValues);
let fast_multi = NumericOptions::default().set_fast(Cardinality::MultiValues);
let field = builder.add_f64_field("f64", schema::FAST);
let multi_field = builder.add_f64_field("f64s", fast_multi);
@@ -2075,7 +2086,7 @@ mod tests {
.iter()
.map(|reader| reader.segment_id())
.collect();
block_on(writer.merge(&segment_ids[..]))?;
writer.merge(&segment_ids[..]).wait()?;
reader.reload()?;
let searcher = reader.searcher();


@@ -1,20 +1,18 @@
#[cfg(test)]
mod tests {
use futures::executor::block_on;
use crate::collector::TopDocs;
use crate::core::Index;
use crate::fastfield::{AliveBitSet, FastFieldReader, MultiValuedFastFieldReader};
use crate::query::QueryParser;
use crate::schema::{
self, BytesOptions, Cardinality, Facet, FacetOptions, IndexRecordOption, IntOptions,
self, BytesOptions, Cardinality, Facet, FacetOptions, IndexRecordOption, NumericOptions,
TextFieldIndexing, TextOptions,
};
use crate::{DocAddress, DocSet, IndexSettings, IndexSortByField, Order, Postings, Term};
fn create_test_index_posting_list_issue(index_settings: Option<IndexSettings>) -> Index {
let mut schema_builder = schema::Schema::builder();
let int_options = IntOptions::default()
let int_options = NumericOptions::default()
.set_fast(Cardinality::SingleValue)
.set_indexed();
let int_field = schema_builder.add_u64_field("intval", int_options);
@@ -50,7 +48,7 @@ mod tests {
.searchable_segment_ids()
.expect("Searchable segments failed.");
let mut index_writer = index.writer_for_tests().unwrap();
assert!(block_on(index_writer.merge(&segment_ids)).is_ok());
assert!(index_writer.merge(&segment_ids).wait().is_ok());
assert!(index_writer.wait_merging_threads().is_ok());
}
index
@@ -63,7 +61,7 @@ mod tests {
force_disjunct_segment_sort_values: bool,
) -> crate::Result<Index> {
let mut schema_builder = schema::Schema::builder();
let int_options = IntOptions::default()
let int_options = NumericOptions::default()
.set_fast(Cardinality::SingleValue)
.set_stored()
.set_indexed();
@@ -75,7 +73,7 @@ mod tests {
let multi_numbers = schema_builder.add_u64_field(
"multi_numbers",
IntOptions::default().set_fast(Cardinality::MultiValues),
NumericOptions::default().set_fast(Cardinality::MultiValues),
);
let text_field_options = TextOptions::default()
.set_indexing_options(
@@ -140,7 +138,7 @@ mod tests {
{
let segment_ids = index.searchable_segment_ids()?;
let mut index_writer = index.writer_for_tests()?;
block_on(index_writer.merge(&segment_ids))?;
index_writer.merge(&segment_ids).wait()?;
index_writer.wait_merging_threads()?;
}
Ok(index)
@@ -486,11 +484,11 @@ mod bench_sorted_index_merge {
// use cratedoc_id, readerdoc_id_mappinglet vals = reader.fate::schema;
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader};
use crate::indexer::merger::IndexMerger;
use crate::schema::{Cardinality, Document, IntOptions, Schema};
use crate::schema::{Cardinality, Document, NumericOptions, Schema};
use crate::{IndexSettings, IndexSortByField, IndexWriter, Order};
fn create_index(sort_by_field: Option<IndexSortByField>) -> Index {
let mut schema_builder = Schema::builder();
let int_options = IntOptions::default()
let int_options = NumericOptions::default()
.set_fast(Cardinality::SingleValue)
.set_indexed();
let int_field = schema_builder.add_u64_field("intval", int_options);


@@ -5,6 +5,7 @@ pub mod doc_id_mapping;
mod doc_opstamp_mapping;
pub mod index_writer;
mod index_writer_status;
mod json_term_writer;
mod log_merge_policy;
mod merge_operation;
pub mod merge_policy;
@@ -24,6 +25,7 @@ use crossbeam::channel;
use smallvec::SmallVec;
pub use self::index_writer::IndexWriter;
pub(crate) use self::json_term_writer::JsonTermWriter;
pub use self::log_merge_policy::LogMergePolicy;
pub use self::merge_operation::MergeOperation;
pub use self::merge_policy::{MergeCandidate, MergePolicy, NoMergePolicy};


@@ -1,7 +1,5 @@
use futures::executor::block_on;
use super::IndexWriter;
use crate::Opstamp;
use crate::{FutureResult, Opstamp};
/// A prepared commit
pub struct PreparedCommit<'a> {
@@ -35,9 +33,9 @@ impl<'a> PreparedCommit<'a> {
}
/// Proceeds to commit.
/// See `.commit_async()`.
/// See `.commit_future()`.
pub fn commit(self) -> crate::Result<Opstamp> {
block_on(self.commit_async())
self.commit_future().wait()
}
/// Proceeds to commit.
@@ -45,12 +43,10 @@ impl<'a> PreparedCommit<'a> {
/// Unfortunately, contrary to what `PrepareCommit` may suggest,
/// this operation is not at all really light.
/// At this point deletes have not been flushed yet.
pub async fn commit_async(self) -> crate::Result<Opstamp> {
pub fn commit_future(self) -> FutureResult<Opstamp> {
info!("committing {}", self.opstamp);
self.index_writer
.segment_updater()
.schedule_commit(self.opstamp, self.payload)
.await?;
Ok(self.opstamp)
}
}
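A hedged sketch, not part of the patch, of the two call paths after this change, assuming the existing `IndexWriter::prepare_commit()` entry point:

```rust
use tantivy::{IndexWriter, Opstamp};

// Synchronous path: commit() now wraps commit_future().wait() instead of block_on().
fn commit_blocking(writer: &mut IndexWriter) -> tantivy::Result<Opstamp> {
    writer.prepare_commit()?.commit()
}

// Async path: await the FutureResult returned by commit_future().
async fn commit_awaited(writer: &mut IndexWriter) -> tantivy::Result<Opstamp> {
    writer.prepare_commit()?.commit_future().await
}
```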


@@ -8,9 +8,7 @@ use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, RwLock};
use fail::fail_point;
use futures::channel::oneshot;
use futures::executor::{ThreadPool, ThreadPoolBuilder};
use futures::future::{Future, TryFutureExt};
use rayon::{ThreadPool, ThreadPoolBuilder};
use super::segment_manager::SegmentManager;
use crate::core::{
@@ -29,7 +27,7 @@ use crate::indexer::{
SegmentSerializer,
};
use crate::schema::Schema;
use crate::{Opstamp, TantivyError};
use crate::{FutureResult, Opstamp, TantivyError};
const NUM_MERGE_THREADS: usize = 4;
@@ -105,7 +103,7 @@ impl Deref for SegmentUpdater {
}
}
async fn garbage_collect_files(
fn garbage_collect_files(
segment_updater: SegmentUpdater,
) -> crate::Result<GarbageCollectionResult> {
info!("Running garbage collection");
@@ -309,18 +307,18 @@ impl SegmentUpdater {
let segments = index.searchable_segment_metas()?;
let segment_manager = SegmentManager::from_segments(segments, delete_cursor);
let pool = ThreadPoolBuilder::new()
.name_prefix("segment_updater")
.pool_size(1)
.create()
.thread_name(|_| "segment_updater".to_string())
.num_threads(1)
.build()
.map_err(|_| {
crate::TantivyError::SystemError(
"Failed to spawn segment updater thread".to_string(),
)
})?;
let merge_thread_pool = ThreadPoolBuilder::new()
.name_prefix("merge_thread")
.pool_size(NUM_MERGE_THREADS)
.create()
.thread_name(|i| format!("merge_thread_{i}"))
.num_threads(NUM_MERGE_THREADS)
.build()
.map_err(|_| {
crate::TantivyError::SystemError(
"Failed to spawn segment merging thread".to_string(),
@@ -349,39 +347,30 @@ impl SegmentUpdater {
*self.merge_policy.write().unwrap() = arc_merge_policy;
}
async fn schedule_task<
T: 'static + Send,
F: Future<Output = crate::Result<T>> + 'static + Send,
>(
fn schedule_task<T: 'static + Send, F: FnOnce() -> crate::Result<T> + 'static + Send>(
&self,
task: F,
) -> crate::Result<T> {
) -> FutureResult<T> {
if !self.is_alive() {
return Err(crate::TantivyError::SystemError(
"Segment updater killed".to_string(),
));
return crate::TantivyError::SystemError("Segment updater killed".to_string()).into();
}
let (sender, receiver) = oneshot::channel();
self.pool.spawn_ok(async move {
let task_result = task.await;
let (scheduled_result, sender) = FutureResult::create(
"A segment_updater future did not succeed. This should never happen.",
);
self.pool.spawn(|| {
let task_result = task();
let _ = sender.send(task_result);
});
let task_result = receiver.await;
task_result.unwrap_or_else(|_| {
let err_msg =
"A segment_updater future did not success. This should never happen.".to_string();
Err(crate::TantivyError::SystemError(err_msg))
})
scheduled_result
}
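A standalone sketch of the pattern `schedule_task` now relies on, with names simplified and assuming the `rayon` and `oneshot` crates this patch switches to: the task runs on a dedicated pool and its result comes back over a oneshot channel rather than through an async executor.

```rust
use rayon::ThreadPoolBuilder;

fn main() {
    // Single-threaded pool, mirroring the segment updater pool built below.
    let pool = ThreadPoolBuilder::new()
        .thread_name(|_| "segment_updater".to_string())
        .num_threads(1)
        .build()
        .expect("failed to build the segment updater pool");

    let (sender, receiver) = oneshot::channel::<Result<u64, String>>();
    pool.spawn(move || {
        // The task runs on the pool thread; its result travels back through
        // the oneshot channel instead of going through an async executor.
        let _ = sender.send(Ok(42));
    });

    // Synchronous consumption, mirroring FutureResult::wait(); the receiver
    // also implements Future, which is what the async path relies on.
    assert_eq!(receiver.recv().expect("sender dropped"), Ok(42));
}
```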
pub async fn schedule_add_segment(&self, segment_entry: SegmentEntry) -> crate::Result<()> {
pub fn schedule_add_segment(&self, segment_entry: SegmentEntry) -> FutureResult<()> {
let segment_updater = self.clone();
self.schedule_task(async move {
self.schedule_task(move || {
segment_updater.segment_manager.add_segment(segment_entry);
segment_updater.consider_merge_options().await;
segment_updater.consider_merge_options();
Ok(())
})
.await
}
/// Orders `SegmentManager` to remove all segments
@@ -448,9 +437,9 @@ impl SegmentUpdater {
Ok(())
}
pub async fn schedule_garbage_collect(&self) -> crate::Result<GarbageCollectionResult> {
let garbage_collect_future = garbage_collect_files(self.clone());
self.schedule_task(garbage_collect_future).await
pub fn schedule_garbage_collect(&self) -> FutureResult<GarbageCollectionResult> {
let self_clone = self.clone();
self.schedule_task(move || garbage_collect_files(self_clone))
}
/// List the files that are useful to the index.
@@ -468,21 +457,20 @@ impl SegmentUpdater {
files
}
pub(crate) async fn schedule_commit(
pub(crate) fn schedule_commit(
&self,
opstamp: Opstamp,
payload: Option<String>,
) -> crate::Result<()> {
) -> FutureResult<Opstamp> {
let segment_updater: SegmentUpdater = self.clone();
self.schedule_task(async move {
self.schedule_task(move || {
let segment_entries = segment_updater.purge_deletes(opstamp)?;
segment_updater.segment_manager.commit(segment_entries);
segment_updater.save_metas(opstamp, payload)?;
let _ = garbage_collect_files(segment_updater.clone()).await;
segment_updater.consider_merge_options().await;
Ok(())
let _ = garbage_collect_files(segment_updater.clone());
segment_updater.consider_merge_options();
Ok(opstamp)
})
.await
}
fn store_meta(&self, index_meta: &IndexMeta) {
@@ -515,26 +503,33 @@ impl SegmentUpdater {
// suggested and the moment when it ended up being executed.)
//
// `segment_ids` is required to be non-empty.
pub fn start_merge(
&self,
merge_operation: MergeOperation,
) -> crate::Result<impl Future<Output = crate::Result<SegmentMeta>>> {
pub fn start_merge(&self, merge_operation: MergeOperation) -> FutureResult<SegmentMeta> {
assert!(
!merge_operation.segment_ids().is_empty(),
"Segment_ids cannot be empty."
);
let segment_updater = self.clone();
let segment_entries: Vec<SegmentEntry> = self
let segment_entries: Vec<SegmentEntry> = match self
.segment_manager
.start_merge(merge_operation.segment_ids())?;
.start_merge(merge_operation.segment_ids())
{
Ok(segment_entries) => segment_entries,
Err(err) => {
warn!(
"Starting the merge failed for the following reason. This is not fatal. {}",
err
);
return err.into();
}
};
info!("Starting merge - {:?}", merge_operation.segment_ids());
let (merging_future_send, merging_future_recv) =
oneshot::channel::<crate::Result<SegmentMeta>>();
let (scheduled_result, merging_future_send) =
FutureResult::create("Merge operation failed.");
self.merge_thread_pool.spawn_ok(async move {
self.merge_thread_pool.spawn(move || {
// The fact that `merge_operation` is moved here is important.
// Its lifetime is used to track how many merging thread are currently running,
// as well as which segment is currently in merge and therefore should not be
@@ -545,28 +540,23 @@ impl SegmentUpdater {
merge_operation.target_opstamp(),
) {
Ok(after_merge_segment_entry) => {
let segment_meta = segment_updater
.end_merge(merge_operation, after_merge_segment_entry)
.await;
let _send_result = merging_future_send.send(segment_meta);
let segment_meta_res =
segment_updater.end_merge(merge_operation, after_merge_segment_entry);
let _send_result = merging_future_send.send(segment_meta_res);
}
Err(e) => {
Err(merge_error) => {
warn!(
"Merge of {:?} was cancelled: {:?}",
merge_operation.segment_ids().to_vec(),
e
merge_error
);
// ... cancel merge
let _send_result = merging_future_send.send(Err(merge_error));
assert!(!cfg!(test), "Merge failed.");
}
}
});
Ok(merging_future_recv.unwrap_or_else(|e| {
Err(crate::TantivyError::SystemError(
"Merge failed:".to_string() + &e.to_string(),
))
}))
scheduled_result
}
pub(crate) fn get_mergeable_segments(&self) -> (Vec<SegmentMeta>, Vec<SegmentMeta>) {
@@ -575,7 +565,7 @@ impl SegmentUpdater {
.get_mergeable_segments(&merge_segment_ids)
}
async fn consider_merge_options(&self) {
fn consider_merge_options(&self) {
let (committed_segments, uncommitted_segments) = self.get_mergeable_segments();
// Committed segments cannot be merged with uncommitted_segments.
@@ -601,23 +591,21 @@ impl SegmentUpdater {
merge_candidates.extend(committed_merge_candidates);
for merge_operation in merge_candidates {
if let Err(err) = self.start_merge(merge_operation) {
warn!(
"Starting the merge failed for the following reason. This is not fatal. {}",
err
);
}
// If a merge cannot be started this is not a fatal error.
// We do log a warning in `start_merge`.
let _ = self.start_merge(merge_operation);
}
}
async fn end_merge(
/// Queues an `end_merge` in the segment updater and blocks until it is successfully processed.
fn end_merge(
&self,
merge_operation: MergeOperation,
mut after_merge_segment_entry: SegmentEntry,
) -> crate::Result<SegmentMeta> {
let segment_updater = self.clone();
let after_merge_segment_meta = after_merge_segment_entry.meta().clone();
self.schedule_task(async move {
self.schedule_task(move || {
info!("End merge {:?}", after_merge_segment_entry.meta());
{
let mut delete_cursor = after_merge_segment_entry.delete_cursor().clone();
@@ -655,13 +643,13 @@ impl SegmentUpdater {
.save_metas(previous_metas.opstamp, previous_metas.payload.clone())?;
}
segment_updater.consider_merge_options().await;
segment_updater.consider_merge_options();
} // we drop all possible handle to a now useless `SegmentMeta`.
let _ = garbage_collect_files(segment_updater).await;
let _ = garbage_collect_files(segment_updater);
Ok(())
})
.await?;
.wait()?;
Ok(after_merge_segment_meta)
}


@@ -1,16 +1,18 @@
use super::doc_id_mapping::{get_doc_id_mapping_from_field, DocIdMapping};
use super::operation::AddOperation;
use crate::core::Segment;
use crate::fastfield::FastFieldsWriter;
use crate::fastfield::{FastFieldsWriter, FastValue as _};
use crate::fieldnorm::{FieldNormReaders, FieldNormsWriter};
use crate::indexer::json_term_writer::index_json_values;
use crate::indexer::segment_serializer::SegmentSerializer;
use crate::postings::{
compute_table_size, serialize_postings, IndexingContext, PerFieldPostingsWriter, PostingsWriter,
compute_table_size, serialize_postings, IndexingContext, IndexingPosition,
PerFieldPostingsWriter, PostingsWriter,
};
use crate::schema::{Field, FieldEntry, FieldType, FieldValue, Schema, Term, Type, Value};
use crate::schema::{FieldEntry, FieldType, FieldValue, Schema, Term, Value};
use crate::store::{StoreReader, StoreWriter};
use crate::tokenizer::{
BoxTokenStream, FacetTokenizer, PreTokenizedStream, TextAnalyzer, TokenStreamChain, Tokenizer,
BoxTokenStream, FacetTokenizer, PreTokenizedStream, TextAnalyzer, Tokenizer,
};
use crate::{DocId, Document, Opstamp, SegmentComponent};
@@ -54,13 +56,13 @@ fn remap_doc_opstamps(
/// The segment is laid out on disk when the segment gets `finalized`.
pub struct SegmentWriter {
pub(crate) max_doc: DocId,
pub(crate) indexing_context: IndexingContext,
pub(crate) ctx: IndexingContext,
pub(crate) per_field_postings_writers: PerFieldPostingsWriter,
pub(crate) segment_serializer: SegmentSerializer,
pub(crate) fast_field_writers: FastFieldsWriter,
pub(crate) fieldnorms_writer: FieldNormsWriter,
pub(crate) doc_opstamps: Vec<Opstamp>,
tokenizers: Vec<Option<TextAnalyzer>>,
per_field_text_analyzers: Vec<TextAnalyzer>,
term_buffer: Term,
schema: Schema,
}
@@ -84,29 +86,33 @@ impl SegmentWriter {
let table_size = compute_initial_table_size(memory_budget_in_bytes)?;
let segment_serializer = SegmentSerializer::for_segment(segment, false)?;
let per_field_postings_writers = PerFieldPostingsWriter::for_schema(&schema);
let tokenizers = schema
let per_field_text_analyzers = schema
.fields()
.map(
|(_, field_entry): (Field, &FieldEntry)| match field_entry.field_type() {
FieldType::Str(ref text_options) => text_options
.get_indexing_options()
.and_then(|text_index_option| {
let tokenizer_name = &text_index_option.tokenizer();
tokenizer_manager.get(tokenizer_name)
}),
.map(|(_, field_entry): (_, &FieldEntry)| {
let text_options = match field_entry.field_type() {
FieldType::Str(ref text_options) => text_options.get_indexing_options(),
FieldType::JsonObject(ref json_object_options) => {
json_object_options.get_text_indexing_options()
}
_ => None,
},
)
};
text_options
.and_then(|text_index_option| {
let tokenizer_name = &text_index_option.tokenizer();
tokenizer_manager.get(tokenizer_name)
})
.unwrap_or_default()
})
.collect();
Ok(SegmentWriter {
max_doc: 0,
indexing_context: IndexingContext::new(table_size),
ctx: IndexingContext::new(table_size),
per_field_postings_writers,
fieldnorms_writer: FieldNormsWriter::for_schema(&schema),
segment_serializer,
fast_field_writers: FastFieldsWriter::from_schema(&schema),
doc_opstamps: Vec::with_capacity(1_000),
tokenizers,
per_field_text_analyzers,
term_buffer: Term::new(),
schema,
})
@@ -129,7 +135,7 @@ impl SegmentWriter {
.transpose()?;
remap_and_write(
&self.per_field_postings_writers,
self.indexing_context,
self.ctx,
&self.fast_field_writers,
&self.fieldnorms_writer,
&self.schema,
@@ -141,7 +147,7 @@ impl SegmentWriter {
}
pub fn mem_usage(&self) -> usize {
self.indexing_context.mem_usage()
self.ctx.mem_usage()
+ self.fieldnorms_writer.mem_usage()
+ self.fast_field_writers.mem_usage()
+ self.segment_serializer.mem_usage()
@@ -161,13 +167,12 @@ impl SegmentWriter {
if !field_entry.is_indexed() {
continue;
}
let (term_buffer, indexing_context) =
(&mut self.term_buffer, &mut self.indexing_context);
let (term_buffer, ctx) = (&mut self.term_buffer, &mut self.ctx);
let postings_writer: &mut dyn PostingsWriter =
self.per_field_postings_writers.get_for_field_mut(field);
term_buffer.set_field(field_entry.field_type().value_type(), field);
match *field_entry.field_type() {
FieldType::Facet(_) => {
term_buffer.set_field(Type::Facet, field);
for value in values {
let facet = value.as_facet().ok_or_else(make_schema_error)?;
let facet_str = facet.encoded_str();
@@ -176,12 +181,8 @@ impl SegmentWriter {
.token_stream(facet_str)
.process(&mut |token| {
term_buffer.set_text(&token.text);
let unordered_term_id = postings_writer.subscribe(
doc_id,
0u32,
term_buffer,
indexing_context,
);
let unordered_term_id =
postings_writer.subscribe(doc_id, 0u32, term_buffer, ctx);
// TODO pass indexing context directly in subscribe function
unordered_term_id_opt = Some(unordered_term_id);
});
@@ -209,72 +210,79 @@ impl SegmentWriter {
.push(PreTokenizedStream::from(tok_str.clone()).into());
}
Value::Str(ref text) => {
if let Some(ref mut tokenizer) =
self.tokenizers[field.field_id() as usize]
{
offsets.push(total_offset);
total_offset += text.len();
token_streams.push(tokenizer.token_stream(text));
}
let text_analyzer =
&self.per_field_text_analyzers[field.field_id() as usize];
offsets.push(total_offset);
total_offset += text.len();
token_streams.push(text_analyzer.token_stream(text));
}
_ => (),
}
}
let num_tokens = if token_streams.is_empty() {
0
} else {
let mut token_stream = TokenStreamChain::new(offsets, token_streams);
let mut indexing_position = IndexingPosition::default();
for mut token_stream in token_streams {
assert_eq!(term_buffer.as_slice().len(), 5);
postings_writer.index_text(
doc_id,
field,
&mut token_stream,
&mut *token_stream,
term_buffer,
indexing_context,
)
};
self.fieldnorms_writer.record(doc_id, field, num_tokens);
ctx,
&mut indexing_position,
);
}
self.fieldnorms_writer
.record(doc_id, field, indexing_position.num_tokens);
}
FieldType::U64(_) => {
for value in values {
term_buffer.set_field(Type::U64, field);
let u64_val = value.as_u64().ok_or_else(make_schema_error)?;
term_buffer.set_u64(u64_val);
postings_writer.subscribe(doc_id, 0u32, term_buffer, indexing_context);
postings_writer.subscribe(doc_id, 0u32, term_buffer, ctx);
}
}
FieldType::Date(_) => {
for value in values {
term_buffer.set_field(Type::Date, field);
let date_val = value.as_date().ok_or_else(make_schema_error)?;
term_buffer.set_i64(date_val.timestamp());
postings_writer.subscribe(doc_id, 0u32, term_buffer, indexing_context);
term_buffer.set_u64(date_val.to_u64());
postings_writer.subscribe(doc_id, 0u32, term_buffer, ctx);
}
}
FieldType::I64(_) => {
for value in values {
term_buffer.set_field(Type::I64, field);
let i64_val = value.as_i64().ok_or_else(make_schema_error)?;
term_buffer.set_i64(i64_val);
postings_writer.subscribe(doc_id, 0u32, term_buffer, indexing_context);
postings_writer.subscribe(doc_id, 0u32, term_buffer, ctx);
}
}
FieldType::F64(_) => {
for value in values {
term_buffer.set_field(Type::F64, field);
let f64_val = value.as_f64().ok_or_else(make_schema_error)?;
term_buffer.set_f64(f64_val);
postings_writer.subscribe(doc_id, 0u32, term_buffer, indexing_context);
postings_writer.subscribe(doc_id, 0u32, term_buffer, ctx);
}
}
FieldType::Bytes(_) => {
for value in values {
term_buffer.set_field(Type::Bytes, field);
let bytes = value.as_bytes().ok_or_else(make_schema_error)?;
term_buffer.set_bytes(bytes);
postings_writer.subscribe(doc_id, 0u32, term_buffer, indexing_context);
postings_writer.subscribe(doc_id, 0u32, term_buffer, ctx);
}
}
FieldType::JsonObject(_) => {
let text_analyzer = &self.per_field_text_analyzers[field.field_id() as usize];
let json_values_it = values
.iter()
.map(|value| value.as_json().ok_or_else(make_schema_error));
index_json_values(
doc_id,
json_values_it,
text_analyzer,
term_buffer,
postings_writer,
ctx,
)?;
}
}
}
Ok(())
@@ -323,13 +331,14 @@ impl SegmentWriter {
/// `doc_id_map` is used to map to the new doc_id order.
fn remap_and_write(
per_field_postings_writers: &PerFieldPostingsWriter,
indexing_context: IndexingContext,
ctx: IndexingContext,
fast_field_writers: &FastFieldsWriter,
fieldnorms_writer: &FieldNormsWriter,
schema: &Schema,
mut serializer: SegmentSerializer,
doc_id_map: Option<&DocIdMapping>,
) -> crate::Result<()> {
debug!("remap-and-write");
if let Some(fieldnorms_serializer) = serializer.extract_fieldnorms_serializer() {
fieldnorms_writer.serialize(fieldnorms_serializer, doc_id_map)?;
}
@@ -338,19 +347,21 @@ fn remap_and_write(
.open_read(SegmentComponent::FieldNorms)?;
let fieldnorm_readers = FieldNormReaders::open(fieldnorm_data)?;
let term_ord_map = serialize_postings(
indexing_context,
ctx,
per_field_postings_writers,
fieldnorm_readers,
doc_id_map,
schema,
serializer.get_postings_serializer(),
)?;
debug!("fastfield-serialize");
fast_field_writers.serialize(
serializer.get_fast_field_serializer(),
&term_ord_map,
doc_id_map,
)?;
debug!("resort-docstore");
// finalize the temp docstore and create a version that reflects the doc_id_map
if let Some(doc_id_map) = doc_id_map {
let store_write = serializer
@@ -373,6 +384,7 @@ fn remap_and_write(
}
}
debug!("serializer-close");
serializer.close()?;
Ok(())
@@ -403,9 +415,15 @@ pub fn prepare_doc_for_store(doc: Document, schema: &Schema) -> Document {
#[cfg(test)]
mod tests {
use super::compute_initial_table_size;
use crate::schema::{Schema, STORED, TEXT};
use crate::collector::Count;
use crate::indexer::json_term_writer::JsonTermWriter;
use crate::postings::TermInfo;
use crate::query::PhraseQuery;
use crate::schema::{IndexRecordOption, Schema, Type, STORED, STRING, TEXT};
use crate::time::format_description::well_known::Rfc3339;
use crate::time::OffsetDateTime;
use crate::tokenizer::{PreTokenizedString, Token};
use crate::Document;
use crate::{DateTime, DocAddress, DocSet, Document, Index, Postings, Term, TERMINATED};
#[test]
fn test_hashmap_size() {
@@ -444,4 +462,245 @@ mod tests {
Some("title")
);
}
#[test]
fn test_json_indexing() {
let mut schema_builder = Schema::builder();
let json_field = schema_builder.add_json_field("json", STORED | TEXT);
let schema = schema_builder.build();
let json_val: serde_json::Map<String, serde_json::Value> = serde_json::from_str(
r#"{
"toto": "titi",
"float": -0.2,
"unsigned": 1,
"signed": -2,
"complexobject": {
"field.with.dot": 1
},
"date": "1985-04-12T23:20:50.52Z",
"my_arr": [2, 3, {"my_key": "two tokens"}, 4]
}"#,
)
.unwrap();
let doc = doc!(json_field=>json_val.clone());
let index = Index::create_in_ram(schema.clone());
let mut writer = index.writer_for_tests().unwrap();
writer.add_document(doc).unwrap();
writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let doc = searcher
.doc(DocAddress {
segment_ord: 0u32,
doc_id: 0u32,
})
.unwrap();
let serdeser_json_val = serde_json::from_str::<serde_json::Map<String, serde_json::Value>>(
&schema.to_json(&doc),
)
.unwrap()
.get("json")
.unwrap()[0]
.as_object()
.unwrap()
.clone();
assert_eq!(json_val, serdeser_json_val);
let segment_reader = searcher.segment_reader(0u32);
let inv_idx = segment_reader.inverted_index(json_field).unwrap();
let term_dict = inv_idx.terms();
let mut term = Term::new();
term.set_field(Type::Json, json_field);
let mut term_stream = term_dict.stream().unwrap();
let mut json_term_writer = JsonTermWriter::wrap(&mut term);
json_term_writer.push_path_segment("complexobject");
json_term_writer.push_path_segment("field.with.dot");
json_term_writer.set_fast_value(1u64);
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.pop_path_segment();
json_term_writer.pop_path_segment();
json_term_writer.push_path_segment("date");
json_term_writer.set_fast_value(DateTime::new_utc(
OffsetDateTime::parse("1985-04-12T23:20:50.52Z", &Rfc3339).unwrap(),
));
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.pop_path_segment();
json_term_writer.push_path_segment("float");
json_term_writer.set_fast_value(-0.2f64);
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.pop_path_segment();
json_term_writer.push_path_segment("my_arr");
json_term_writer.set_fast_value(2u64);
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.set_fast_value(3u64);
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.set_fast_value(4u64);
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.push_path_segment("my_key");
json_term_writer.set_str("tokens");
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.set_str("two");
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.pop_path_segment();
json_term_writer.pop_path_segment();
json_term_writer.push_path_segment("signed");
json_term_writer.set_fast_value(-2i64);
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.pop_path_segment();
json_term_writer.push_path_segment("toto");
json_term_writer.set_str("titi");
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
json_term_writer.pop_path_segment();
json_term_writer.push_path_segment("unsigned");
json_term_writer.set_fast_value(1u64);
assert!(term_stream.advance());
assert_eq!(term_stream.key(), json_term_writer.term().value_bytes());
assert!(!term_stream.advance());
}
#[test]
fn test_json_tokenized_with_position() {
let mut schema_builder = Schema::builder();
let json_field = schema_builder.add_json_field("json", STORED | TEXT);
let schema = schema_builder.build();
let mut doc = Document::default();
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(r#"{"mykey": "repeated token token"}"#).unwrap();
doc.add_json_object(json_field, json_val);
let index = Index::create_in_ram(schema);
let mut writer = index.writer_for_tests().unwrap();
writer.add_document(doc).unwrap();
writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0u32);
let inv_index = segment_reader.inverted_index(json_field).unwrap();
let mut term = Term::new();
term.set_field(Type::Json, json_field);
let mut json_term_writer = JsonTermWriter::wrap(&mut term);
json_term_writer.push_path_segment("mykey");
json_term_writer.set_str("token");
let term_info = inv_index
.get_term_info(json_term_writer.term())
.unwrap()
.unwrap();
assert_eq!(
term_info,
TermInfo {
doc_freq: 1,
postings_range: 2..4,
positions_range: 2..5
}
);
let mut postings = inv_index
.read_postings(&term, IndexRecordOption::WithFreqsAndPositions)
.unwrap()
.unwrap();
assert_eq!(postings.doc(), 0);
assert_eq!(postings.term_freq(), 2);
let mut positions = Vec::new();
postings.positions(&mut positions);
assert_eq!(&positions[..], &[1, 2]);
assert_eq!(postings.advance(), TERMINATED);
}
#[test]
fn test_json_raw_no_position() {
let mut schema_builder = Schema::builder();
let json_field = schema_builder.add_json_field("json", STRING);
let schema = schema_builder.build();
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(r#"{"mykey": "two tokens"}"#).unwrap();
let doc = doc!(json_field=>json_val);
let index = Index::create_in_ram(schema);
let mut writer = index.writer_for_tests().unwrap();
writer.add_document(doc).unwrap();
writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0u32);
let inv_index = segment_reader.inverted_index(json_field).unwrap();
let mut term = Term::new();
term.set_field(Type::Json, json_field);
let mut json_term_writer = JsonTermWriter::wrap(&mut term);
json_term_writer.push_path_segment("mykey");
json_term_writer.set_str("two tokens");
let term_info = inv_index
.get_term_info(json_term_writer.term())
.unwrap()
.unwrap();
assert_eq!(
term_info,
TermInfo {
doc_freq: 1,
postings_range: 0..1,
positions_range: 0..0
}
);
let mut postings = inv_index
.read_postings(&term, IndexRecordOption::WithFreqs)
.unwrap()
.unwrap();
assert_eq!(postings.doc(), 0);
assert_eq!(postings.term_freq(), 1);
let mut positions = Vec::new();
postings.positions(&mut positions);
assert_eq!(postings.advance(), TERMINATED);
}
#[test]
fn test_position_overlapping_path() {
// This test checks that we do not end up detecting a phrase match due
// to several string literals in the same JSON object having overlapping positions.
let mut schema_builder = Schema::builder();
let json_field = schema_builder.add_json_field("json", TEXT);
let schema = schema_builder.build();
let json_val: serde_json::Map<String, serde_json::Value> = serde_json::from_str(
r#"{"mykey": [{"field": "hello happy tax payer"}, {"field": "nothello"}]}"#,
)
.unwrap();
let doc = doc!(json_field=>json_val);
let index = Index::create_in_ram(schema);
let mut writer = index.writer_for_tests().unwrap();
writer.add_document(doc).unwrap();
writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let mut term = Term::new();
term.set_field(Type::Json, json_field);
let mut json_term_writer = JsonTermWriter::wrap(&mut term);
json_term_writer.push_path_segment("mykey");
json_term_writer.push_path_segment("field");
json_term_writer.set_str("hello");
let hello_term = json_term_writer.term().clone();
json_term_writer.set_str("nothello");
let nothello_term = json_term_writer.term().clone();
json_term_writer.set_str("happy");
let happy_term = json_term_writer.term().clone();
let phrase_query = PhraseQuery::new(vec![hello_term, happy_term.clone()]);
assert_eq!(searcher.search(&phrase_query, &Count).unwrap(), 1);
let phrase_query = PhraseQuery::new(vec![nothello_term, happy_term]);
assert_eq!(searcher.search(&phrase_query, &Count).unwrap(), 0);
}
}


@@ -123,10 +123,95 @@ mod functional_test;
#[macro_use]
mod macros;
mod future_result;
pub use chrono;
/// Re-export of the `time` crate
///
/// Tantivy uses [`time`](https://crates.io/crates/time) for dates.
pub use time;
use crate::time::format_description::well_known::Rfc3339;
use crate::time::{OffsetDateTime, PrimitiveDateTime, UtcOffset};
/// A date/time value with second precision.
///
/// This timestamp does not carry any explicit time zone information.
/// Users are responsible for applying the provided conversion
/// functions consistently. Internally the time zone is assumed
/// to be UTC, which is also used implicitly for JSON serialization.
///
/// All constructors and conversions are provided as explicit
/// functions and not by implementing any `From`/`Into` traits
/// to prevent unintended usage.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct DateTime {
unix_timestamp: i64,
}
impl DateTime {
/// Create new from UNIX timestamp
pub const fn from_unix_timestamp(unix_timestamp: i64) -> Self {
Self { unix_timestamp }
}
/// Create new from `OffsetDateTime`
///
/// The given date/time is converted to UTC and the actual
/// time zone is discarded.
pub const fn new_utc(dt: OffsetDateTime) -> Self {
Self::from_unix_timestamp(dt.unix_timestamp())
}
/// Create new from `PrimitiveDateTime`
///
/// Implicitly assumes that the given date/time is in UTC!
/// If it is not, the original value can only be recovered with
/// [`to_primitive()`].
pub const fn new_primitive(dt: PrimitiveDateTime) -> Self {
Self::new_utc(dt.assume_utc())
}
/// Convert to UNIX timestamp
pub const fn to_unix_timestamp(self) -> i64 {
let Self { unix_timestamp } = self;
unix_timestamp
}
/// Convert to UTC `OffsetDateTime`
pub fn to_utc(self) -> OffsetDateTime {
let Self { unix_timestamp } = self;
let utc_datetime =
OffsetDateTime::from_unix_timestamp(unix_timestamp).expect("valid UNIX timestamp");
debug_assert_eq!(UtcOffset::UTC, utc_datetime.offset());
utc_datetime
}
/// Convert to `OffsetDateTime` with the given time zone
pub fn to_offset(self, offset: UtcOffset) -> OffsetDateTime {
self.to_utc().to_offset(offset)
}
/// Convert to `PrimitiveDateTime` without any time zone
///
/// The value should have been constructed with [`new_primitive()`].
/// Otherwise the time zone is implicitly assumed to be UTC.
pub fn to_primitive(self) -> PrimitiveDateTime {
let utc_datetime = self.to_utc();
// Discard the UTC time zone offset
debug_assert_eq!(UtcOffset::UTC, utc_datetime.offset());
PrimitiveDateTime::new(utc_datetime.date(), utc_datetime.time())
}
}
impl fmt::Debug for DateTime {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let utc_rfc3339 = self.to_utc().format(&Rfc3339).map_err(|_| fmt::Error)?;
f.write_str(&utc_rfc3339)
}
}
pub use crate::error::TantivyError;
pub use crate::future_result::FutureResult;
/// Tantivy result.
///
@@ -134,8 +219,9 @@ pub use crate::error::TantivyError;
/// and instead, refer to this as `crate::Result<T>`.
pub type Result<T> = std::result::Result<T, TantivyError>;
/// Tantivy DateTime
pub type DateTime = chrono::DateTime<chrono::Utc>;
/// Result for an Async io operation.
#[cfg(feature = "quickwit")]
pub type AsyncIoResult<T> = std::result::Result<T, crate::error::AsyncIoError>;
mod core;
mod indexer;
@@ -144,6 +230,7 @@ mod indexer;
pub mod error;
pub mod tokenizer;
pub mod aggregation;
pub mod collector;
pub mod directory;
pub mod fastfield;
@@ -303,6 +390,7 @@ pub mod tests {
use crate::core::SegmentReader;
use crate::docset::{DocSet, TERMINATED};
use crate::fastfield::FastFieldReader;
use crate::merge_policy::NoMergePolicy;
use crate::query::BooleanQuery;
use crate::schema::*;
use crate::{DocAddress, Index, Postings, ReloadPolicy};
@@ -930,8 +1018,6 @@ pub mod tests {
// motivated by #729
#[test]
fn test_update_via_delete_insert() -> crate::Result<()> {
use futures::executor::block_on;
use crate::collector::Count;
use crate::indexer::NoMergePolicy;
use crate::query::AllQuery;
@@ -985,8 +1071,7 @@ pub mod tests {
.iter()
.map(|reader| reader.segment_id())
.collect();
block_on(index_writer.merge(&segment_ids)).unwrap();
index_writer.merge(&segment_ids).wait()?;
index_reader.reload()?;
let searcher = index_reader.searcher();
assert_eq!(searcher.search(&AllQuery, &Count)?, DOC_COUNT as usize);
@@ -1001,6 +1086,7 @@ pub mod tests {
let schema = builder.build();
let index = Index::create_in_dir(&index_path, schema)?;
let mut writer = index.writer(50_000_000)?;
writer.set_merge_policy(Box::new(NoMergePolicy));
for _ in 0..5000 {
writer.add_document(doc!(body => "foo"))?;
writer.add_document(doc!(body => "boo"))?;
@@ -1012,8 +1098,7 @@ pub mod tests {
writer.delete_term(Term::from_field_text(body, "foo"));
writer.commit()?;
let segment_ids = index.searchable_segment_ids()?;
let _ = futures::executor::block_on(writer.merge(&segment_ids));
writer.merge(&segment_ids).wait()?;
assert!(index.validate_checksum()?.is_empty());
Ok(())
}
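A minimal usage sketch of the `DateTime` API introduced above, assuming only the constructors and conversions shown in this file (second precision, UTC assumed internally):

```rust
use tantivy::time::format_description::well_known::Rfc3339;
use tantivy::time::OffsetDateTime;
use tantivy::DateTime;

fn datetime_round_trip() -> Result<(), Box<dyn std::error::Error>> {
    // Parse an RFC 3339 timestamp with the re-exported `time` crate.
    let offset_dt = OffsetDateTime::parse("1985-04-12T23:20:50.52Z", &Rfc3339)?;
    // Converting to tantivy's DateTime keeps only whole seconds and assumes UTC.
    let dt = DateTime::new_utc(offset_dt);
    assert_eq!(dt.to_unix_timestamp(), offset_dt.unix_timestamp());
    // Going back yields a UTC OffsetDateTime; the sub-second part is gone.
    let back = dt.to_utc();
    assert_eq!(back.unix_timestamp(), offset_dt.unix_timestamp());
    Ok(())
}
```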


@@ -1,6 +1,6 @@
use std::io;
use common::{BinarySerializable, VInt};
use common::VInt;
use crate::directory::{FileSlice, OwnedBytes};
use crate::fieldnorm::FieldNormReader;
@@ -28,9 +28,7 @@ pub struct BlockSegmentPostings {
freq_decoder: BlockDecoder,
freq_reading_option: FreqReadingOption,
block_max_score_cache: Option<Score>,
doc_freq: u32,
data: OwnedBytes,
pub(crate) skip_reader: SkipReader,
}
@@ -70,13 +68,13 @@ fn decode_vint_block(
fn split_into_skips_and_postings(
doc_freq: u32,
mut bytes: OwnedBytes,
) -> (Option<OwnedBytes>, OwnedBytes) {
) -> io::Result<(Option<OwnedBytes>, OwnedBytes)> {
if doc_freq < COMPRESSION_BLOCK_SIZE as u32 {
return (None, bytes);
return Ok((None, bytes));
}
let skip_len = VInt::deserialize(&mut bytes).expect("Data corrupted").0 as usize;
let skip_len = VInt::deserialize_u64(&mut bytes)? as usize;
let (skip_data, postings_data) = bytes.split(skip_len);
(Some(skip_data), postings_data)
Ok((Some(skip_data), postings_data))
}
impl BlockSegmentPostings {
@@ -92,8 +90,8 @@ impl BlockSegmentPostings {
(_, _) => FreqReadingOption::ReadFreq,
};
let (skip_data_opt, postings_data) =
split_into_skips_and_postings(doc_freq, data.read_bytes()?);
let bytes = data.read_bytes()?;
let (skip_data_opt, postings_data) = split_into_skips_and_postings(doc_freq, bytes)?;
let skip_reader = match skip_data_opt {
Some(skip_data) => SkipReader::new(skip_data, doc_freq, record_option),
None => SkipReader::new(OwnedBytes::empty(), doc_freq, record_option),
@@ -166,8 +164,9 @@ impl BlockSegmentPostings {
// # Warning
//
// This does not reset the positions list.
pub(crate) fn reset(&mut self, doc_freq: u32, postings_data: OwnedBytes) {
let (skip_data_opt, postings_data) = split_into_skips_and_postings(doc_freq, postings_data);
pub(crate) fn reset(&mut self, doc_freq: u32, postings_data: OwnedBytes) -> io::Result<()> {
let (skip_data_opt, postings_data) =
split_into_skips_and_postings(doc_freq, postings_data)?;
self.data = postings_data;
self.block_max_score_cache = None;
self.loaded_offset = std::usize::MAX;
@@ -178,6 +177,7 @@ impl BlockSegmentPostings {
}
self.doc_freq = doc_freq;
self.load_block();
Ok(())
}
/// Returns the overall number of documents in the block postings.
@@ -322,7 +322,7 @@ impl BlockSegmentPostings {
/// Advance to the next block.
///
/// Returns false iff there was no remaining blocks.
/// Returns false if and only if there is no remaining block.
pub fn advance(&mut self) {
self.skip_reader.advance();
self.block_max_score_cache = None;


@@ -0,0 +1,94 @@
use std::io;
use crate::indexer::doc_id_mapping::DocIdMapping;
use crate::postings::postings_writer::SpecializedPostingsWriter;
use crate::postings::recorder::{BufferLender, NothingRecorder, Recorder};
use crate::postings::stacker::Addr;
use crate::postings::{
FieldSerializer, IndexingContext, IndexingPosition, PostingsWriter, UnorderedTermId,
};
use crate::schema::term::as_json_path_type_value_bytes;
use crate::schema::Type;
use crate::tokenizer::TokenStream;
use crate::{DocId, Term};
#[derive(Default)]
pub(crate) struct JsonPostingsWriter<Rec: Recorder> {
str_posting_writer: SpecializedPostingsWriter<Rec>,
non_str_posting_writer: SpecializedPostingsWriter<NothingRecorder>,
}
impl<Rec: Recorder> From<JsonPostingsWriter<Rec>> for Box<dyn PostingsWriter> {
fn from(json_postings_writer: JsonPostingsWriter<Rec>) -> Box<dyn PostingsWriter> {
Box::new(json_postings_writer)
}
}
impl<Rec: Recorder> PostingsWriter for JsonPostingsWriter<Rec> {
fn subscribe(
&mut self,
doc: crate::DocId,
pos: u32,
term: &crate::Term,
ctx: &mut IndexingContext,
) -> UnorderedTermId {
self.non_str_posting_writer.subscribe(doc, pos, term, ctx)
}
fn index_text(
&mut self,
doc_id: DocId,
token_stream: &mut dyn TokenStream,
term_buffer: &mut Term,
ctx: &mut IndexingContext,
indexing_position: &mut IndexingPosition,
) {
self.str_posting_writer.index_text(
doc_id,
token_stream,
term_buffer,
ctx,
indexing_position,
);
}
/// The actual serialization format is handled by the `PostingsSerializer`.
fn serialize(
&self,
term_addrs: &[(Term<&[u8]>, Addr, UnorderedTermId)],
doc_id_map: Option<&DocIdMapping>,
ctx: &IndexingContext,
serializer: &mut FieldSerializer,
) -> io::Result<()> {
let mut buffer_lender = BufferLender::default();
for (term, addr, _) in term_addrs {
// TODO optimization opportunity here.
if let Some((_, typ, _)) = as_json_path_type_value_bytes(term.value_bytes()) {
if typ == Type::Str {
SpecializedPostingsWriter::<Rec>::serialize_one_term(
term,
*addr,
doc_id_map,
&mut buffer_lender,
ctx,
serializer,
)?;
} else {
SpecializedPostingsWriter::<NothingRecorder>::serialize_one_term(
term,
*addr,
doc_id_map,
&mut buffer_lender,
ctx,
serializer,
)?;
}
}
}
Ok(())
}
fn total_num_tokens(&self) -> u64 {
self.str_posting_writer.total_num_tokens() + self.non_str_posting_writer.total_num_tokens()
}
}


@@ -7,6 +7,7 @@ pub(crate) use self::block_search::branchless_binary_search;
mod block_segment_postings;
pub(crate) mod compression;
mod indexing_context;
mod json_postings_writer;
mod per_field_postings_writer;
mod postings;
mod postings_writer;
@@ -21,7 +22,7 @@ pub use self::block_segment_postings::BlockSegmentPostings;
pub(crate) use self::indexing_context::IndexingContext;
pub(crate) use self::per_field_postings_writer::PerFieldPostingsWriter;
pub use self::postings::Postings;
pub(crate) use self::postings_writer::{serialize_postings, PostingsWriter};
pub(crate) use self::postings_writer::{serialize_postings, IndexingPosition, PostingsWriter};
pub use self::segment_postings::SegmentPostings;
pub use self::serializer::{FieldSerializer, InvertedIndexSerializer};
pub(crate) use self::skip::{BlockInfo, SkipReader};


@@ -1,3 +1,4 @@
use crate::postings::json_postings_writer::JsonPostingsWriter;
use crate::postings::postings_writer::SpecializedPostingsWriter;
use crate::postings::recorder::{NothingRecorder, TermFrequencyRecorder, TfAndPositionRecorder};
use crate::postings::PostingsWriter;
@@ -33,21 +34,38 @@ fn posting_writer_from_field_entry(field_entry: &FieldEntry) -> Box<dyn Postings
.get_indexing_options()
.map(|indexing_options| match indexing_options.index_option() {
IndexRecordOption::Basic => {
SpecializedPostingsWriter::<NothingRecorder>::new_boxed()
SpecializedPostingsWriter::<NothingRecorder>::default().into()
}
IndexRecordOption::WithFreqs => {
SpecializedPostingsWriter::<TermFrequencyRecorder>::new_boxed()
SpecializedPostingsWriter::<TermFrequencyRecorder>::default().into()
}
IndexRecordOption::WithFreqsAndPositions => {
SpecializedPostingsWriter::<TfAndPositionRecorder>::new_boxed()
SpecializedPostingsWriter::<TfAndPositionRecorder>::default().into()
}
})
.unwrap_or_else(SpecializedPostingsWriter::<NothingRecorder>::new_boxed),
.unwrap_or_else(|| SpecializedPostingsWriter::<NothingRecorder>::default().into()),
FieldType::U64(_)
| FieldType::I64(_)
| FieldType::F64(_)
| FieldType::Date(_)
| FieldType::Bytes(_)
| FieldType::Facet(_) => SpecializedPostingsWriter::<NothingRecorder>::new_boxed(),
| FieldType::Facet(_) => Box::new(SpecializedPostingsWriter::<NothingRecorder>::default()),
FieldType::JsonObject(ref json_object_options) => {
if let Some(text_indexing_option) = json_object_options.get_text_indexing_options() {
match text_indexing_option.index_option() {
IndexRecordOption::Basic => {
JsonPostingsWriter::<NothingRecorder>::default().into()
}
IndexRecordOption::WithFreqs => {
JsonPostingsWriter::<TermFrequencyRecorder>::default().into()
}
IndexRecordOption::WithFreqsAndPositions => {
JsonPostingsWriter::<TfAndPositionRecorder>::default().into()
}
}
} else {
JsonPostingsWriter::<NothingRecorder>::default().into()
}
}
}
}
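A minimal sketch of how a schema might opt into these writers for a JSON field. The field names are illustrative; the mapping (`TEXT` recording term frequencies and positions, `STRING` indexing raw tokens without positions) follows the dispatch above and `test_json_raw_no_position` earlier in this diff:

```rust
use tantivy::schema::{Schema, STRING, TEXT};

fn build_json_schema() -> tantivy::schema::Schema {
    let mut schema_builder = Schema::builder();
    // Tokenized string values, with term frequencies and positions recorded.
    let _attributes = schema_builder.add_json_field("attributes", TEXT);
    // Raw string values: one token per value, no positions stored.
    let _tags = schema_builder.add_json_field("tags", STRING);
    schema_builder.build()
}
```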


@@ -13,11 +13,13 @@ use crate::postings::{
FieldSerializer, IndexingContext, InvertedIndexSerializer, PerFieldPostingsWriter,
UnorderedTermId,
};
use crate::schema::{Field, FieldType, Schema, Term, Type};
use crate::schema::{Field, FieldType, Schema, Term};
use crate::termdict::TermOrdinal;
use crate::tokenizer::{Token, TokenStream, MAX_TOKEN_LEN};
use crate::DocId;
const POSITION_GAP: u32 = 1;
fn make_field_partition(
term_offsets: &[(Term<&[u8]>, Addr, UnorderedTermId)],
) -> Vec<(Field, Range<usize>)> {
@@ -47,7 +49,7 @@ fn make_field_partition(
/// It pushes all terms, one field at a time, towards the
/// postings serializer.
pub(crate) fn serialize_postings(
indexing_context: IndexingContext,
ctx: IndexingContext,
per_field_postings_writers: &PerFieldPostingsWriter,
fieldnorm_readers: FieldNormReaders,
doc_id_map: Option<&DocIdMapping>,
@@ -55,15 +57,13 @@ pub(crate) fn serialize_postings(
serializer: &mut InvertedIndexSerializer,
) -> crate::Result<HashMap<Field, FnvHashMap<UnorderedTermId, TermOrdinal>>> {
let mut term_offsets: Vec<(Term<&[u8]>, Addr, UnorderedTermId)> =
Vec::with_capacity(indexing_context.term_index.len());
term_offsets.extend(indexing_context.term_index.iter());
Vec::with_capacity(ctx.term_index.len());
term_offsets.extend(ctx.term_index.iter());
term_offsets.sort_unstable_by_key(|(k, _, _)| k.clone());
let mut unordered_term_mappings: HashMap<Field, FnvHashMap<UnorderedTermId, TermOrdinal>> =
HashMap::new();
let field_offsets = make_field_partition(&term_offsets);
for (field, byte_offsets) in field_offsets {
let field_entry = schema.get_field_entry(field);
match *field_entry.field_type() {
@@ -83,6 +83,7 @@ pub(crate) fn serialize_postings(
}
FieldType::U64(_) | FieldType::I64(_) | FieldType::F64(_) | FieldType::Date(_) => {}
FieldType::Bytes(_) => {}
FieldType::JsonObject(_) => {}
}
let postings_writer = per_field_postings_writers.get_for_field(field);
@@ -92,7 +93,7 @@ pub(crate) fn serialize_postings(
postings_writer.serialize(
&term_offsets[byte_offsets],
doc_id_map,
&indexing_context,
&ctx,
&mut field_serializer,
)?;
field_serializer.close()?;
@@ -100,6 +101,12 @@ pub(crate) fn serialize_postings(
Ok(unordered_term_mappings)
}
#[derive(Default)]
pub(crate) struct IndexingPosition {
pub num_tokens: u32,
pub end_position: u32,
}
/// The `PostingsWriter` is in charge of receiving documents
/// and building a `Segment` in anonymous memory.
///
@@ -110,14 +117,14 @@ pub(crate) trait PostingsWriter {
/// * doc - the document id
/// * pos - the term position (expressed in tokens)
/// * term - the term
/// * indexing_context - Contains a term hashmap and a memory arena to store all necessary
/// posting list information.
/// * ctx - Contains a term hashmap and a memory arena to store all necessary posting list
/// information.
fn subscribe(
&mut self,
doc: DocId,
pos: u32,
term: &Term,
indexing_context: &mut IndexingContext,
ctx: &mut IndexingContext,
) -> UnorderedTermId;
/// Serializes the postings on disk.
@@ -126,7 +133,7 @@ pub(crate) trait PostingsWriter {
&self,
term_addrs: &[(Term<&[u8]>, Addr, UnorderedTermId)],
doc_id_map: Option<&DocIdMapping>,
indexing_context: &IndexingContext,
ctx: &IndexingContext,
serializer: &mut FieldSerializer,
) -> io::Result<()>;
@@ -134,27 +141,35 @@ pub(crate) trait PostingsWriter {
fn index_text(
&mut self,
doc_id: DocId,
field: Field,
token_stream: &mut dyn TokenStream,
term_buffer: &mut Term,
indexing_context: &mut IndexingContext,
) -> u32 {
term_buffer.set_field(Type::Str, field);
let mut sink = |token: &Token| {
ctx: &mut IndexingContext,
indexing_position: &mut IndexingPosition,
) {
let end_of_path_idx = term_buffer.as_slice().len();
let mut num_tokens = 0;
let mut end_position = 0;
token_stream.process(&mut |token: &Token| {
// We skip all tokens whose length exceeds MAX_TOKEN_LEN.
if token.text.len() <= MAX_TOKEN_LEN {
term_buffer.set_text(token.text.as_str());
self.subscribe(doc_id, token.position as u32, term_buffer, indexing_context);
} else {
if token.text.len() > MAX_TOKEN_LEN {
warn!(
"A token exceeding MAX_TOKEN_LEN ({}>{}) was dropped. Search for \
MAX_TOKEN_LEN in the documentation for more information.",
token.text.len(),
MAX_TOKEN_LEN
);
return;
}
};
token_stream.process(&mut sink)
term_buffer.truncate(end_of_path_idx);
term_buffer.append_bytes(token.text.as_bytes());
let start_position = indexing_position.end_position + token.position as u32;
end_position = start_position + token.position_length as u32;
self.subscribe(doc_id, start_position, term_buffer, ctx);
num_tokens += 1;
});
indexing_position.end_position = end_position + POSITION_GAP;
indexing_position.num_tokens += num_tokens;
term_buffer.truncate(end_of_path_idx);
}
fn total_num_tokens(&self) -> u64;
@@ -162,40 +177,50 @@ pub(crate) trait PostingsWriter {
/// The `SpecializedPostingsWriter` is just here to remove dynamic
/// dispatch to the recorder information.
pub(crate) struct SpecializedPostingsWriter<Rec: Recorder + 'static> {
#[derive(Default)]
pub(crate) struct SpecializedPostingsWriter<Rec: Recorder> {
total_num_tokens: u64,
_recorder_type: PhantomData<Rec>,
}
impl<Rec: Recorder + 'static> SpecializedPostingsWriter<Rec> {
/// constructor
pub fn new() -> SpecializedPostingsWriter<Rec> {
SpecializedPostingsWriter {
total_num_tokens: 0u64,
_recorder_type: PhantomData,
}
}
/// Builds a `SpecializedPostingsWriter` storing its data in a memory arena.
pub fn new_boxed() -> Box<dyn PostingsWriter> {
Box::new(SpecializedPostingsWriter::<Rec>::new())
impl<Rec: Recorder> From<SpecializedPostingsWriter<Rec>> for Box<dyn PostingsWriter> {
fn from(
specialized_postings_writer: SpecializedPostingsWriter<Rec>,
) -> Box<dyn PostingsWriter> {
Box::new(specialized_postings_writer)
}
}
impl<Rec: Recorder + 'static> PostingsWriter for SpecializedPostingsWriter<Rec> {
impl<Rec: Recorder> SpecializedPostingsWriter<Rec> {
#[inline]
pub(crate) fn serialize_one_term(
term: &Term<&[u8]>,
addr: Addr,
doc_id_map: Option<&DocIdMapping>,
buffer_lender: &mut BufferLender,
ctx: &IndexingContext,
serializer: &mut FieldSerializer,
) -> io::Result<()> {
let recorder: Rec = ctx.term_index.read(addr);
let term_doc_freq = recorder.term_doc_freq().unwrap_or(0u32);
serializer.new_term(term.value_bytes(), term_doc_freq)?;
recorder.serialize(&ctx.arena, doc_id_map, serializer, buffer_lender);
serializer.close_term()?;
Ok(())
}
}
impl<Rec: Recorder> PostingsWriter for SpecializedPostingsWriter<Rec> {
fn subscribe(
&mut self,
doc: DocId,
position: u32,
term: &Term,
indexing_context: &mut IndexingContext,
ctx: &mut IndexingContext,
) -> UnorderedTermId {
debug_assert!(term.as_slice().len() >= 4);
self.total_num_tokens += 1;
let (term_index, arena) = (
&mut indexing_context.term_index,
&mut indexing_context.arena,
);
let (term_index, arena) = (&mut ctx.term_index, &mut ctx.arena);
term_index.mutate_or_create(term.as_slice(), |opt_recorder: Option<Rec>| {
if let Some(mut recorder) = opt_recorder {
let current_doc = recorder.current_doc();
@@ -206,7 +231,7 @@ impl<Rec: Recorder + 'static> PostingsWriter for SpecializedPostingsWriter<Rec>
recorder.record_position(position, arena);
recorder
} else {
let mut recorder = Rec::new();
let mut recorder = Rec::default();
recorder.new_doc(doc, arena);
recorder.record_position(position, arena);
recorder
@@ -218,21 +243,12 @@ impl<Rec: Recorder + 'static> PostingsWriter for SpecializedPostingsWriter<Rec>
&self,
term_addrs: &[(Term<&[u8]>, Addr, UnorderedTermId)],
doc_id_map: Option<&DocIdMapping>,
indexing_context: &IndexingContext,
ctx: &IndexingContext,
serializer: &mut FieldSerializer,
) -> io::Result<()> {
let mut buffer_lender = BufferLender::default();
for (term, addr, _) in term_addrs {
let recorder: Rec = indexing_context.term_index.read(*addr);
let term_doc_freq = recorder.term_doc_freq().unwrap_or(0u32);
serializer.new_term(term.value_bytes(), term_doc_freq)?;
recorder.serialize(
&indexing_context.arena,
doc_id_map,
serializer,
&mut buffer_lender,
);
serializer.close_term()?;
Self::serialize_one_term(term, *addr, doc_id_map, &mut buffer_lender, ctx, serializer)?;
}
Ok(())
}
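A rough sketch of the observable effect of `POSITION_GAP`: consecutive values of a multi-valued text field are separated by one extra position, so a phrase cannot match across a value boundary. The field name and the exact positions (0 and 2) are assumptions derived from the bookkeeping in `index_text` above:

```rust
use tantivy::collector::Count;
use tantivy::query::PhraseQuery;
use tantivy::schema::{Schema, TEXT};
use tantivy::{doc, Index, Term};

fn phrase_does_not_cross_value_boundary() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let body = schema_builder.add_text_field("body", TEXT);
    let schema = schema_builder.build();
    let index = Index::create_in_ram(schema);
    let mut writer = index.writer(50_000_000)?;
    // Two values of the same field: "hello" should land at position 0 and
    // "happy" at position 2, because of the POSITION_GAP of 1 between values.
    writer.add_document(doc!(body => "hello", body => "happy"))?;
    writer.commit()?;
    let searcher = index.reader()?.searcher();
    let phrase = PhraseQuery::new(vec![
        Term::from_field_text(body, "hello"),
        Term::from_field_text(body, "happy"),
    ]);
    // The positions are not consecutive, so the phrase should not match.
    assert_eq!(searcher.search(&phrase, &Count)?, 0);
    Ok(())
}
```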


@@ -1,4 +1,4 @@
use common::{read_u32_vint, write_u32_vint};
use common::read_u32_vint;
use super::stacker::{ExpUnrolledLinkedList, MemoryArena};
use crate::indexer::doc_id_mapping::DocIdMapping;
@@ -56,9 +56,7 @@ impl<'a> Iterator for VInt32Reader<'a> {
/// * the document id
/// * the term frequency
/// * the term positions
pub(crate) trait Recorder: Copy + 'static {
///
fn new() -> Self;
pub(crate) trait Recorder: Copy + Default + 'static {
/// Returns the current document
fn current_doc(&self) -> u32;
/// Starts recording information about a new document
@@ -90,21 +88,23 @@ pub struct NothingRecorder {
current_doc: DocId,
}
impl Recorder for NothingRecorder {
fn new() -> Self {
impl Default for NothingRecorder {
fn default() -> Self {
NothingRecorder {
stack: ExpUnrolledLinkedList::new(),
current_doc: u32::max_value(),
}
}
}
impl Recorder for NothingRecorder {
fn current_doc(&self) -> DocId {
self.current_doc
}
fn new_doc(&mut self, doc: DocId, arena: &mut MemoryArena) {
self.current_doc = doc;
let _ = write_u32_vint(doc, &mut self.stack.writer(arena));
self.stack.writer(arena).write_u32_vint(doc);
}
fn record_position(&mut self, _position: u32, _arena: &mut MemoryArena) {}
@@ -152,8 +152,8 @@ pub struct TermFrequencyRecorder {
term_doc_freq: u32,
}
impl Recorder for TermFrequencyRecorder {
fn new() -> Self {
impl Default for TermFrequencyRecorder {
fn default() -> Self {
TermFrequencyRecorder {
stack: ExpUnrolledLinkedList::new(),
current_doc: 0,
@@ -161,7 +161,9 @@ impl Recorder for TermFrequencyRecorder {
term_doc_freq: 0u32,
}
}
}
impl Recorder for TermFrequencyRecorder {
fn current_doc(&self) -> DocId {
self.current_doc
}
@@ -169,7 +171,7 @@ impl Recorder for TermFrequencyRecorder {
fn new_doc(&mut self, doc: DocId, arena: &mut MemoryArena) {
self.term_doc_freq += 1;
self.current_doc = doc;
let _ = write_u32_vint(doc, &mut self.stack.writer(arena));
self.stack.writer(arena).write_u32_vint(doc);
}
fn record_position(&mut self, _position: u32, _arena: &mut MemoryArena) {
@@ -178,7 +180,7 @@ impl Recorder for TermFrequencyRecorder {
fn close_doc(&mut self, arena: &mut MemoryArena) {
debug_assert!(self.current_tf > 0);
let _ = write_u32_vint(self.current_tf, &mut self.stack.writer(arena));
self.stack.writer(arena).write_u32_vint(self.current_tf);
self.current_tf = 0;
}
@@ -223,15 +225,18 @@ pub struct TfAndPositionRecorder {
current_doc: DocId,
term_doc_freq: u32,
}
impl Recorder for TfAndPositionRecorder {
fn new() -> Self {
impl Default for TfAndPositionRecorder {
fn default() -> Self {
TfAndPositionRecorder {
stack: ExpUnrolledLinkedList::new(),
current_doc: u32::max_value(),
term_doc_freq: 0u32,
}
}
}
impl Recorder for TfAndPositionRecorder {
fn current_doc(&self) -> DocId {
self.current_doc
}
@@ -239,15 +244,17 @@ impl Recorder for TfAndPositionRecorder {
fn new_doc(&mut self, doc: DocId, arena: &mut MemoryArena) {
self.current_doc = doc;
self.term_doc_freq += 1u32;
let _ = write_u32_vint(doc, &mut self.stack.writer(arena));
self.stack.writer(arena).write_u32_vint(doc);
}
fn record_position(&mut self, position: u32, arena: &mut MemoryArena) {
let _ = write_u32_vint(position + 1u32, &mut self.stack.writer(arena));
self.stack
.writer(arena)
.write_u32_vint(position.wrapping_add(1u32));
}
fn close_doc(&mut self, arena: &mut MemoryArena) {
let _ = write_u32_vint(POSITION_END, &mut self.stack.writer(arena));
self.stack.writer(arena).write_u32_vint(POSITION_END);
}
fn serialize(
@@ -300,7 +307,9 @@ impl Recorder for TfAndPositionRecorder {
#[cfg(test)]
mod tests {
use super::{write_u32_vint, BufferLender, VInt32Reader};
use common::write_u32_vint;
use super::{BufferLender, VInt32Reader};
#[test]
fn test_buffer_lender() {


@@ -76,7 +76,7 @@ impl InvertedIndexSerializer {
field: Field,
total_num_tokens: u64,
fieldnorm_reader: Option<FieldNormReader>,
) -> io::Result<FieldSerializer<'_>> {
) -> io::Result<FieldSerializer> {
let field_entry: &FieldEntry = self.schema.get_field_entry(field);
let term_dictionary_write = self.terms_write.for_field(field);
let postings_write = self.postings_write.for_field(field);
@@ -122,24 +122,21 @@ impl<'a> FieldSerializer<'a> {
fieldnorm_reader: Option<FieldNormReader>,
) -> io::Result<FieldSerializer<'a>> {
total_num_tokens.serialize(postings_write)?;
let mode = match field_type {
FieldType::Str(ref text_options) => {
if let Some(text_indexing_options) = text_options.get_indexing_options() {
text_indexing_options.index_option()
} else {
IndexRecordOption::Basic
}
}
_ => IndexRecordOption::Basic,
};
let index_record_option = field_type
.index_record_option()
.unwrap_or(IndexRecordOption::Basic);
let term_dictionary_builder = TermDictionaryBuilder::create(term_dictionary_write)?;
let average_fieldnorm = fieldnorm_reader
.as_ref()
.map(|ff_reader| (total_num_tokens as Score / ff_reader.num_docs() as Score))
.unwrap_or(0.0);
let postings_serializer =
PostingsSerializer::new(postings_write, average_fieldnorm, mode, fieldnorm_reader);
let positions_serializer_opt = if mode.has_positions() {
let postings_serializer = PostingsSerializer::new(
postings_write,
average_fieldnorm,
index_record_option,
fieldnorm_reader,
);
let positions_serializer_opt = if index_record_option.has_positions() {
Some(PositionSerializer::new(positions_write))
} else {
None
@@ -203,6 +200,7 @@ impl<'a> FieldSerializer<'a> {
self.current_term_info.doc_freq += 1;
self.postings_serializer.write_doc(doc_id, term_freq);
if let Some(ref mut positions_serializer) = self.positions_serializer_opt.as_mut() {
assert_eq!(term_freq as usize, position_deltas.len());
positions_serializer.write_positions_delta(position_deltas);
}
}


@@ -1,4 +1,6 @@
use std::{io, mem};
use std::mem;
use common::serialize_vint_u32;
use super::{Addr, MemoryArena};
use crate::postings::stacker::memory_arena::{load, store};
@@ -97,12 +99,13 @@ fn ensure_capacity<'a>(
}
impl<'a> ExpUnrolledLinkedListWriter<'a> {
pub fn write_u32_vint(&mut self, val: u32) {
let mut buf = [0u8; 8];
let data = serialize_vint_u32(val, &mut buf);
self.extend_from_slice(data);
}
pub fn extend_from_slice(&mut self, mut buf: &[u8]) {
if buf.is_empty() {
// we need to cut early, because `ensure_capacity`
// allocates if there is no capacity at all right now.
return;
}
while !buf.is_empty() {
let add_len: usize;
{
@@ -117,25 +120,6 @@ impl<'a> ExpUnrolledLinkedListWriter<'a> {
}
}
impl<'a> io::Write for ExpUnrolledLinkedListWriter<'a> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
// There is no use case to only write the capacity.
// This is not IO after all, so we write the whole
// buffer even if the contract of `.write` is looser.
self.extend_from_slice(buf);
Ok(buf.len())
}
fn write_all(&mut self, buf: &[u8]) -> io::Result<()> {
self.extend_from_slice(buf);
Ok(())
}
fn flush(&mut self) -> io::Result<()> {
Ok(())
}
}
impl ExpUnrolledLinkedList {
pub fn new() -> ExpUnrolledLinkedList {
ExpUnrolledLinkedList {
@@ -178,8 +162,7 @@ impl ExpUnrolledLinkedList {
#[cfg(test)]
mod tests {
use byteorder::{ByteOrder, LittleEndian, WriteBytesExt};
use common::{read_u32_vint, write_u32_vint};
use super::super::MemoryArena;
use super::{len_to_capacity, *};
@@ -205,18 +188,14 @@ mod tests {
let mut eull = ExpUnrolledLinkedList::new();
let data: Vec<u32> = (0..100).collect();
for &el in &data {
assert!(eull
.writer(&mut arena)
.write_u32::<LittleEndian>(el)
.is_ok());
eull.writer(&mut arena).write_u32_vint(el);
}
let mut buffer = Vec::new();
eull.read_to_end(&arena, &mut buffer);
let mut result = vec![];
let mut remaining = &buffer[..];
while !remaining.is_empty() {
result.push(LittleEndian::read_u32(&remaining[..4]));
remaining = &remaining[4..];
result.push(read_u32_vint(&mut remaining));
}
assert_eq!(&result[..], &data[..]);
}
@@ -231,14 +210,11 @@ mod tests {
let mut vec2: Vec<u8> = vec![];
for i in 0..9 {
assert!(stack.writer(&mut eull).write_u32::<LittleEndian>(i).is_ok());
assert!(vec1.write_u32::<LittleEndian>(i).is_ok());
stack.writer(&mut eull).write_u32_vint(i);
assert!(write_u32_vint(i, &mut vec1).is_ok());
if i % 2 == 0 {
assert!(stack2
.writer(&mut eull)
.write_u32::<LittleEndian>(i)
.is_ok());
assert!(vec2.write_u32::<LittleEndian>(i).is_ok());
stack2.writer(&mut eull).write_u32_vint(i);
assert!(write_u32_vint(i, &mut vec2).is_ok());
}
}
let mut res1 = vec![];
@@ -303,7 +279,6 @@ mod tests {
mod bench {
use std::iter;
use byteorder::{NativeEndian, WriteBytesExt};
use test::Bencher;
use super::super::MemoryArena;
@@ -339,7 +314,9 @@ mod bench {
for s in 0..NUM_STACK {
for i in 0u32..STACK_SIZE {
let t = s * 392017 % NUM_STACK;
let _ = stacks[t].writer(&mut arena).write_u32::<NativeEndian>(i);
stacks[t]
.writer(&mut arena)
.extend_from_slice(&i.to_ne_bytes());
}
}
});


@@ -47,7 +47,7 @@ fn find_pivot_doc(
/// scorer in scorers[..pivot_len] and `scorer.doc()` for scorer in scorers[pivot_len..].
/// Note: before and after calling this method, scorers need to be sorted by their `.doc()`.
fn block_max_was_too_low_advance_one_scorer(
scorers: &mut Vec<TermScorerWithMaxScore>,
scorers: &mut [TermScorerWithMaxScore],
pivot_len: usize,
) {
debug_assert!(is_sorted(scorers.iter().map(|scorer| scorer.doc())));
@@ -82,7 +82,7 @@ fn block_max_was_too_low_advance_one_scorer(
// Given a list of term_scorers and a `ord` and assuming that `term_scorers[ord]` is sorted
// except term_scorers[ord] that might be in advance compared to its ranks,
// bubble up term_scorers[ord] in order to restore the ordering.
fn restore_ordering(term_scorers: &mut Vec<TermScorerWithMaxScore>, ord: usize) {
fn restore_ordering(term_scorers: &mut [TermScorerWithMaxScore], ord: usize) {
let doc = term_scorers[ord].doc();
for i in ord + 1..term_scorers.len() {
if term_scorers[i].doc() >= doc {


@@ -204,8 +204,8 @@ impl BooleanQuery {
#[cfg(test)]
mod tests {
use super::BooleanQuery;
use crate::collector::DocSetCollector;
use crate::query::{QueryClone, TermQuery};
use crate::collector::{Count, DocSetCollector};
use crate::query::{QueryClone, QueryParser, TermQuery};
use crate::schema::{IndexRecordOption, Schema, TEXT};
use crate::{DocAddress, Index, Term};
@@ -282,4 +282,42 @@ mod tests {
}
Ok(())
}
#[test]
pub fn test_json_array_pitfall_bag_of_terms() -> crate::Result<()> {
let mut schema_builder = Schema::builder();
let json_field = schema_builder.add_json_field("json", TEXT);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
{
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(json_field=>json!({
"cart": [
{"product_type": "sneakers", "attributes": {"color": "white"}},
{"product_type": "t-shirt", "attributes": {"color": "red"}},
{"product_type": "cd", "attributes": {"genre": "blues"}},
]
})))?;
index_writer.commit()?;
}
let searcher = index.reader()?.searcher();
let doc_matches = |query: &str| {
let query_parser = QueryParser::for_index(&index, vec![json_field]);
let query = query_parser.parse_query(query).unwrap();
searcher.search(&query, &Count).unwrap() == 1
};
// As expected
assert!(doc_matches(
r#"cart.product_type:sneakers AND cart.attributes.color:white"#
));
// Unexpected match, due to the fact that arrays do not act as nested docs.
assert!(doc_matches(
r#"cart.product_type:sneakers AND cart.attributes.color:red"#
));
// However, obviously this one works as expected (no match)...
assert!(!doc_matches(
r#"cart.product_type:sneakers AND cart.attributes.color:blues"#
));
Ok(())
}
}

Some files were not shown because too many files have changed in this diff Show More