Compare commits

...

17 Commits

Author SHA1 Message Date
Pascal Seitz
a88e659e02 make convert_to_fast_value_and_append_to_json_term pub 2024-07-11 08:54:01 +08:00
Pascal Seitz
dd2c4a8963 clippy 2024-04-22 09:56:49 +08:00
Pascal Seitz
786781d0fc cleanup 2024-04-22 09:44:41 +08:00
Pascal Seitz
2d7483e3d4 add JsonTermSerializer 2024-04-20 18:56:27 +08:00
Pascal Seitz
87b9f0678c split term and indexing term 2024-04-18 23:38:21 +08:00
PSeitz
0e9fced336 remove JsonTermWriter (#2238)
* remove JsonTermWriter

remove JsonTermWriter
remove path truncation logic, add assertion

* fix json_path_writer add sep logic
2024-04-18 16:28:05 +02:00
PSeitz
b257b960b3 validate sort by field type (#2336)
* validate sort by field type

* Update src/index/index.rs

Co-authored-by: Adam Reichold <adamreichold@users.noreply.github.com>

---------

Co-authored-by: Adam Reichold <adamreichold@users.noreply.github.com>
2024-04-16 04:42:24 +02:00
Adam Reichold
4708171a32 Fix some of the things current Clippy complains about (#2363) 2024-04-16 04:27:06 +02:00
Adam Reichold
b493743f8d Fix trait bound of StoreReader::iter (#2360)
* Fix trait bound of StoreReader::iter

Similar to `StoreReader::get`, `StoreReader::iter` should only require
`DocumentDeserialize` and not `Document`.

* Mark the iterator returned by SegmentReader::doc_ids_alive as Send so it can be used in impls of Stream/AsyncIterator.
2024-04-15 15:50:02 +02:00
trinity-1686a
d2955a3fd2 extend field grouping (#2333)
* extend field grouping
2024-04-15 10:36:32 +02:00
PSeitz
17d5869ad6 update CHANGELOG, use github API in cliff (#2354)
* update CHANGELOG, use github API in cliff

* reset version to 0.21.1, before release

* chore: Release

* remove unreleased from CHANGELOG
2024-04-15 10:07:20 +02:00
PSeitz
dfa3aed32d check unsupported parameters top_hits (#2351)
* check unsupported parameters top_hits

* move to function
2024-04-10 08:20:52 +02:00
PSeitz
398817ce7b add index sorting deprecation warning (#2353)
* add index sorting deprecation warning

* remove deprecated IntOptions and DatePrecision
2024-04-10 08:09:09 +02:00
PSeitz
74940e9345 clippy (#2349)
* fix clippy

* fix clippy

* fix duplicate imports
2024-04-09 07:54:44 +02:00
PSeitz
1e9fc51535 update ahash (#2344) 2024-04-09 06:35:39 +02:00
PSeitz
92c32979d2 fix postcard compatibility for top_hits, add postcard test (#2346)
* fix postcard compatibility for top_hits, add postcard test

* fix top_hits naming, delay data fetch

closes #2347

* fix import
2024-04-09 06:17:25 +02:00
PSeitz
b644d78a32 fix null byte handling in JSON paths (#2345)
* fix null byte handling in JSON paths

closes https://github.com/quickwit-oss/tantivy/issues/2193
closes https://github.com/quickwit-oss/tantivy/issues/2340

* avoid repeated term truncation

* fix test

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* add comment

---------

Co-authored-by: Paul Masurel <paul@quickwit.io>
2024-04-05 09:53:35 +02:00
90 changed files with 1321 additions and 1231 deletions

View File

@@ -1,3 +1,65 @@
+Tantivy 0.22
+================================
+Tantivy 0.22 will be able to read indices created with Tantivy 0.21.
+#### Bugfixes
+- Fix null byte handling in JSON paths (null bytes in JSON keys caused a panic during indexing) [#2345](https://github.com/quickwit-oss/tantivy/pull/2345)(@PSeitz)
+- Fix bug that can cause `get_docids_for_value_range` to panic. [#2295](https://github.com/quickwit-oss/tantivy/pull/2295)(@fulmicoton)
+- Avoid single-document indices by increasing the minimum indexing memory to 15MB [#2176](https://github.com/quickwit-oss/tantivy/pull/2176)(@PSeitz)
+- Fix merge panic for JSON fields [#2284](https://github.com/quickwit-oss/tantivy/pull/2284)(@PSeitz)
+- Fix bug occurring when merging JSON objects indexed with positions. [#2253](https://github.com/quickwit-oss/tantivy/pull/2253)(@fulmicoton)
+- Fix empty DateHistogram gap bug [#2183](https://github.com/quickwit-oss/tantivy/pull/2183)(@PSeitz)
+- Fix range query end check (fields with less than 1 value per doc are affected) [#2226](https://github.com/quickwit-oss/tantivy/pull/2226)(@PSeitz)
+- Handle exclusive out-of-bounds ranges on fastfield range queries [#2174](https://github.com/quickwit-oss/tantivy/pull/2174)(@PSeitz)
+#### Breaking API Changes
+- rename ReloadPolicy onCommit to onCommitWithDelay [#2235](https://github.com/quickwit-oss/tantivy/pull/2235)(@giovannicuccu)
+- Move exports from the root into modules [#2220](https://github.com/quickwit-oss/tantivy/pull/2220)(@PSeitz)
+- Accept field name instead of `Field` in FilterCollector [#2196](https://github.com/quickwit-oss/tantivy/pull/2196)(@PSeitz)
+- remove deprecated IntOptions and DatePrecision [#2353](https://github.com/quickwit-oss/tantivy/pull/2353)(@PSeitz)
+#### Features/Improvements
+- Tantivy documents as a trait: index data directly without converting to tantivy types first [#2071](https://github.com/quickwit-oss/tantivy/pull/2071)(@ChillFish8)
+- encode some part of posting list as -1 instead of direct values (smaller inverted indices) [#2185](https://github.com/quickwit-oss/tantivy/pull/2185)(@trinity-1686a)
+- **Aggregation**
+    - Support to deserialize f64 from string [#2311](https://github.com/quickwit-oss/tantivy/pull/2311)(@PSeitz)
+    - Add a top_hits aggregator [#2198](https://github.com/quickwit-oss/tantivy/pull/2198)(@ditsuke)
+    - Support bool type in term aggregation [#2318](https://github.com/quickwit-oss/tantivy/pull/2318)(@PSeitz)
+    - Support IP addresses in term aggregation [#2319](https://github.com/quickwit-oss/tantivy/pull/2319)(@PSeitz)
+    - Support date type in term aggregation [#2172](https://github.com/quickwit-oss/tantivy/pull/2172)(@PSeitz)
+    - Support escaped dot when addressing field [#2250](https://github.com/quickwit-oss/tantivy/pull/2250)(@PSeitz)
+- Add ExistsQuery to check documents that have a value [#2160](https://github.com/quickwit-oss/tantivy/pull/2160)(@imotov)
+- Expose TopDocs::order_by_u64_field again [#2282](https://github.com/quickwit-oss/tantivy/pull/2282)(@ditsuke)
+- **Memory/Performance**
+    - Faster TopN: replace BinaryHeap with TopNComputer [#2186](https://github.com/quickwit-oss/tantivy/pull/2186)(@PSeitz)
+    - reduce number of allocations during indexing [#2257](https://github.com/quickwit-oss/tantivy/pull/2257)(@PSeitz)
+    - Less memory while indexing: docid deltas while indexing [#2249](https://github.com/quickwit-oss/tantivy/pull/2249)(@PSeitz)
+    - Faster indexing: use term hashmap in fastfield [#2243](https://github.com/quickwit-oss/tantivy/pull/2243)(@PSeitz)
+    - term hashmap: remove copy in is_empty, unused unordered_id [#2229](https://github.com/quickwit-oss/tantivy/pull/2229)(@PSeitz)
+    - add method to fetch block of first values in columnar [#2330](https://github.com/quickwit-oss/tantivy/pull/2330)(@PSeitz)
+    - Faster aggregations: add fast path for full columns in fetch_block [#2328](https://github.com/quickwit-oss/tantivy/pull/2328)(@PSeitz)
+    - Faster sstable loading: use fst for sstable index [#2268](https://github.com/quickwit-oss/tantivy/pull/2268)(@trinity-1686a)
+- **QueryParser**
+    - allow newline where we allow space in query parser [#2302](https://github.com/quickwit-oss/tantivy/pull/2302)(@trinity-1686a)
+    - allow some mixing of occur and bool in strict query parser [#2323](https://github.com/quickwit-oss/tantivy/pull/2323)(@trinity-1686a)
+    - handle * inside term in lenient query parser [#2228](https://github.com/quickwit-oss/tantivy/pull/2228)(@trinity-1686a)
+    - add support for exists query syntax in query parser [#2170](https://github.com/quickwit-oss/tantivy/pull/2170)(@trinity-1686a)
+- Add shared search executor [#2312](https://github.com/quickwit-oss/tantivy/pull/2312)(@MochiXu)
+- Truncate keys to u16::MAX in term hashmap [#2299](https://github.com/quickwit-oss/tantivy/pull/2299)(@PSeitz)
+- report if a term matched when warming up posting list [#2309](https://github.com/quickwit-oss/tantivy/pull/2309)(@trinity-1686a)
+- Support json fields in FuzzyTermQuery [#2173](https://github.com/quickwit-oss/tantivy/pull/2173)(@PingXia-at)
+- Read list of fields encoded in term dictionary for JSON fields [#2184](https://github.com/quickwit-oss/tantivy/pull/2184)(@PSeitz)
+- add collect_block to BoxableSegmentCollector [#2331](https://github.com/quickwit-oss/tantivy/pull/2331)(@PSeitz)
+- expose collect_block buffer size [#2326](https://github.com/quickwit-oss/tantivy/pull/2326)(@PSeitz)
+- Forward regex parser errors [#2288](https://github.com/quickwit-oss/tantivy/pull/2288)(@adamreichold)
+- Make FacetCounts defaultable and cloneable. [#2322](https://github.com/quickwit-oss/tantivy/pull/2322)(@adamreichold)
+- Derive Debug for SchemaBuilder [#2254](https://github.com/quickwit-oss/tantivy/pull/2254)(@GodTamIt)
+- add missing inlines to tantivy options [#2245](https://github.com/quickwit-oss/tantivy/pull/2245)(@PSeitz)
 Tantivy 0.21.1
 ================================
 #### Bugfixes
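
Of the breaking changes listed above, the ReloadPolicy rename (#2235) is the one most upgraders hit first. A minimal sketch of the 0.22 spelling, assuming an existing `index: tantivy::Index`:

    use tantivy::{Index, IndexReader, ReloadPolicy};

    fn reader(index: &Index) -> tantivy::Result<IndexReader> {
        // 0.21 spelled this variant `ReloadPolicy::OnCommit`.
        index
            .reader_builder()
            .reload_policy(ReloadPolicy::OnCommitWithDelay)
            .try_into()
    }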

View File

@@ -1,6 +1,6 @@
 [package]
 name = "tantivy"
-version = "0.22.0-dev"
+version = "0.22.0"
 authors = ["Paul Masurel <paul.masurel@gmail.com>"]
 license = "MIT"
 categories = ["database-implementations", "data-structures"]
@@ -11,7 +11,7 @@ repository = "https://github.com/quickwit-oss/tantivy"
 readme = "README.md"
 keywords = ["search", "information", "retrieval"]
 edition = "2021"
-rust-version = "1.62"
+rust-version = "1.63"
 exclude = ["benches/*.json", "benches/*.txt"]
 [dependencies]
@@ -52,13 +52,13 @@ itertools = "0.12.0"
 measure_time = "0.8.2"
 arc-swap = "1.5.0"
-columnar = { version= "0.2", path="./columnar", package ="tantivy-columnar" }
-sstable = { version= "0.2", path="./sstable", package ="tantivy-sstable", optional = true }
-stacker = { version= "0.2", path="./stacker", package ="tantivy-stacker" }
-query-grammar = { version= "0.21.0", path="./query-grammar", package = "tantivy-query-grammar" }
-tantivy-bitpacker = { version= "0.5", path="./bitpacker" }
-common = { version= "0.6", path = "./common/", package = "tantivy-common" }
-tokenizer-api = { version= "0.2", path="./tokenizer-api", package="tantivy-tokenizer-api" }
+columnar = { version= "0.3", path="./columnar", package ="tantivy-columnar" }
+sstable = { version= "0.3", path="./sstable", package ="tantivy-sstable", optional = true }
+stacker = { version= "0.3", path="./stacker", package ="tantivy-stacker" }
+query-grammar = { version= "0.22.0", path="./query-grammar", package = "tantivy-query-grammar" }
+tantivy-bitpacker = { version= "0.6", path="./bitpacker" }
+common = { version= "0.7", path = "./common/", package = "tantivy-common" }
+tokenizer-api = { version= "0.3", path="./tokenizer-api", package="tantivy-tokenizer-api" }
 sketches-ddsketch = { version = "0.2.1", features = ["use_serde"] }
 futures-util = { version = "0.3.28", optional = true }
 fnv = "1.0.7"
@@ -78,6 +78,9 @@ paste = "1.0.11"
 more-asserts = "0.3.1"
 rand_distr = "0.4.3"
 time = { version = "0.3.10", features = ["serde-well-known", "macros"] }
+postcard = { version = "1.0.4", features = [
+    "use-std",
+], default-features = false }
 [target.'cfg(not(windows))'.dev-dependencies]
 criterion = { version = "0.5", default-features = false }

View File

@@ -1,6 +1,6 @@
 [package]
 name = "tantivy-bitpacker"
-version = "0.5.0"
+version = "0.6.0"
 edition = "2021"
 authors = ["Paul Masurel <paul.masurel@gmail.com>"]
 license = "MIT"

View File

@@ -1,6 +1,10 @@
 # configuration file for git-cliff
 # see https://github.com/orhun/git-cliff#configuration-file
+[remote.github]
+owner = "quickwit-oss"
+repo = "tantivy"
 [changelog]
 # changelog header
 header = """
@@ -8,15 +12,43 @@ header = """
 # template for the changelog body
 # https://tera.netlify.app/docs/#introduction
 body = """
-{% if version %}\
-{{ version | trim_start_matches(pat="v") }} ({{ timestamp | date(format="%Y-%m-%d") }})
-==================
-{% else %}\
-## [unreleased]
-{% endif %}\
+## What's Changed
+{%- if version %} in {{ version }}{%- endif -%}
 {% for commit in commits %}
-- {% if commit.breaking %}[**breaking**] {% endif %}{{ commit.message | split(pat="\n") | first | trim | upper_first }}(@{{ commit.author.name }})\
-{% endfor %}
+{% if commit.github.pr_title -%}
+{%- set commit_message = commit.github.pr_title -%}
+{%- else -%}
+{%- set commit_message = commit.message -%}
+{%- endif -%}
+- {{ commit_message | split(pat="\n") | first | trim }}\
+{% if commit.github.pr_number %} \
+[#{{ commit.github.pr_number }}]({{ self::remote_url() }}/pull/{{ commit.github.pr_number }}){% if commit.github.username %}(@{{ commit.github.username }}){%- endif -%} \
+{%- endif %}
+{%- endfor -%}
+{% if github.contributors | filter(attribute="is_first_time", value=true) | length != 0 %}
+{% raw %}\n{% endraw -%}
+## New Contributors
+{%- endif %}\
+{% for contributor in github.contributors | filter(attribute="is_first_time", value=true) %}
+* @{{ contributor.username }} made their first contribution
+{%- if contributor.pr_number %} in \
+[#{{ contributor.pr_number }}]({{ self::remote_url() }}/pull/{{ contributor.pr_number }}) \
+{%- endif %}
+{%- endfor -%}
+{% if version %}
+{% if previous.version %}
+**Full Changelog**: {{ self::remote_url() }}/compare/{{ previous.version }}...{{ version }}
+{% endif %}
+{% else -%}
+{% raw %}\n{% endraw %}
+{% endif %}
+{%- macro remote_url() -%}
+https://github.com/{{ remote.github.owner }}/{{ remote.github.repo }}
+{%- endmacro -%}
 """
 # remove the leading and trailing whitespace from the template
 trim = true
@@ -25,53 +57,24 @@ footer = """
 """
 postprocessors = [
-    { pattern = 'Paul Masurel', replace = "fulmicoton"}, # replace with github user
-    { pattern = 'PSeitz', replace = "PSeitz"}, # replace with github user
-    { pattern = 'Adam Reichold', replace = "adamreichold"}, # replace with github user
-    { pattern = 'trinity-1686a', replace = "trinity-1686a"}, # replace with github user
-    { pattern = 'Michael Kleen', replace = "mkleen"}, # replace with github user
-    { pattern = 'Adrien Guillo', replace = "guilload"}, # replace with github user
-    { pattern = 'François Massot', replace = "fmassot"}, # replace with github user
-    { pattern = 'Naveen Aiathurai', replace = "naveenann"}, # replace with github user
-    { pattern = '', replace = ""}, # replace with github user
 ]
 [git]
 # parse the commits based on https://www.conventionalcommits.org
-conventional_commits = true
+# This is required or commit.message contains the whole commit message and not just the title
+conventional_commits = false
 # filter out the commits that are not conventional
-filter_unconventional = false
+filter_unconventional = true
 # process each line of a commit as an individual commit
 split_commits = false
 # regex for preprocessing the commit messages
 commit_preprocessors = [
-    { pattern = '\((\w+\s)?#([0-9]+)\)', replace = "[#${2}](https://github.com/quickwit-oss/tantivy/issues/${2})"}, # replace issue numbers
+    { pattern = '\((\w+\s)?#([0-9]+)\)', replace = ""},
 ]
 #link_parsers = [
 #{ pattern = "#(\\d+)", href = "https://github.com/quickwit-oss/tantivy/pulls/$1"},
 #]
 # regex for parsing and grouping commits
-commit_parsers = [
-    { message = "^feat", group = "Features"},
-    { message = "^fix", group = "Bug Fixes"},
-    { message = "^doc", group = "Documentation"},
-    { message = "^perf", group = "Performance"},
-    { message = "^refactor", group = "Refactor"},
-    { message = "^style", group = "Styling"},
-    { message = "^test", group = "Testing"},
-    { message = "^chore\\(release\\): prepare for", skip = true},
-    { message = "(?i)clippy", skip = true},
-    { message = "(?i)dependabot", skip = true},
-    { message = "(?i)fmt", skip = true},
-    { message = "(?i)bump", skip = true},
-    { message = "(?i)readme", skip = true},
-    { message = "(?i)comment", skip = true},
-    { message = "(?i)spelling", skip = true},
-    { message = "^chore", group = "Miscellaneous Tasks"},
-    { body = ".*security", group = "Security"},
-    { message = ".*", group = "Other", default_scope = "other"},
-]
 # protect breaking changes from being skipped due to matching a skipping commit_parser
 protect_breaking_commits = false
 # filter out the commits that are not matched by commit parsers

View File

@@ -1,6 +1,6 @@
 [package]
 name = "tantivy-columnar"
-version = "0.2.0"
+version = "0.3.0"
 edition = "2021"
 license = "MIT"
 homepage = "https://github.com/quickwit-oss/tantivy"
@@ -12,10 +12,10 @@ categories = ["database-implementations", "data-structures", "compression"]
 itertools = "0.12.0"
 fastdivide = "0.4.0"
-stacker = { version= "0.2", path = "../stacker", package="tantivy-stacker"}
-sstable = { version= "0.2", path = "../sstable", package = "tantivy-sstable" }
-common = { version= "0.6", path = "../common", package = "tantivy-common" }
-tantivy-bitpacker = { version= "0.5", path = "../bitpacker/" }
+stacker = { version= "0.3", path = "../stacker", package="tantivy-stacker"}
+sstable = { version= "0.3", path = "../sstable", package = "tantivy-sstable" }
+common = { version= "0.7", path = "../common", package = "tantivy-common" }
+tantivy-bitpacker = { version= "0.6", path = "../bitpacker/" }
 serde = "1.0.152"
 downcast-rs = "1.2.0"

View File

@@ -140,7 +140,7 @@ mod tests {
     #[test]
     fn test_merge_column_index_optional_shuffle() {
         let optional_index: ColumnIndex = OptionalIndex::for_test(2, &[0]).into();
-        let column_indexes = vec![optional_index, ColumnIndex::Full];
+        let column_indexes = [optional_index, ColumnIndex::Full];
         let row_addrs = vec![
             RowAddr {
                 segment_ord: 0u32,

View File

@@ -75,7 +75,7 @@ pub trait ColumnValues<T: PartialOrd = u64>: Send + Sync + DowncastSync {
         let out_and_idx_chunks = output
             .chunks_exact_mut(4)
             .into_remainder()
-            .into_iter()
+            .iter_mut()
             .zip(indexes.chunks_exact(4).remainder());
         for (out, idx) in out_and_idx_chunks {
             *out = self.get_val(*idx);
@@ -102,7 +102,7 @@ pub trait ColumnValues<T: PartialOrd = u64>: Send + Sync + DowncastSync {
         let out_and_idx_chunks = output
             .chunks_exact_mut(4)
             .into_remainder()
-            .into_iter()
+            .iter_mut()
             .zip(indexes.chunks_exact(4).remainder());
         for (out, idx) in out_and_idx_chunks {
             *out = Some(self.get_val(*idx));

View File

@@ -148,7 +148,7 @@ impl CompactSpace {
             .binary_search_by_key(&compact, |range_mapping| range_mapping.compact_start)
             // Correctness: Overflow. The first range starts at compact space 0, the error from
             // binary search can never be 0
-            .map_or_else(|e| e - 1, |v| v);
+            .unwrap_or_else(|e| e - 1);
         let range_mapping = &self.ranges_mapping[pos];
         let diff = compact - range_mapping.compact_start;

View File

@@ -18,7 +18,12 @@ pub struct ColumnarSerializer<W: io::Write> {
 /// code.
 fn prepare_key(key: &[u8], column_type: ColumnType, buffer: &mut Vec<u8>) {
     buffer.clear();
-    buffer.extend_from_slice(key);
+    // Convert 0 bytes to '0' string, as 0 bytes are reserved for the end of the path.
+    if key.contains(&0u8) {
+        buffer.extend(key.iter().map(|&b| if b == 0 { b'0' } else { b }));
+    } else {
+        buffer.extend_from_slice(key);
+    }
     buffer.push(0u8);
     buffer.push(column_type.to_code());
 }
@@ -102,7 +107,7 @@ mod tests {
     let mut buffer: Vec<u8> = b"somegarbage".to_vec();
     prepare_key(b"root\0child", ColumnType::Str, &mut buffer);
     assert_eq!(buffer.len(), 12);
-    assert_eq!(&buffer[..10], b"root\0child");
+    assert_eq!(&buffer[..10], b"root0child");
     assert_eq!(buffer[10], 0u8);
     assert_eq!(buffer[11], ColumnType::Str.to_code());
 }
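
A standalone sketch of the new `prepare_key` behavior shown above, with `ColumnType` reduced to a plain type-code byte for brevity:

    // NUL bytes inside a key are mapped to the ASCII character '0',
    // because 0u8 is reserved as the end-of-path marker of the key layout.
    fn prepare_key(key: &[u8], type_code: u8, buffer: &mut Vec<u8>) {
        buffer.clear();
        if key.contains(&0u8) {
            buffer.extend(key.iter().map(|&b| if b == 0 { b'0' } else { b }));
        } else {
            buffer.extend_from_slice(key);
        }
        buffer.push(0u8);       // end-of-path marker
        buffer.push(type_code); // column type code
    }

    fn main() {
        let mut buf = Vec::new();
        prepare_key(b"root\0child", b's', &mut buf);
        assert_eq!(&buf[..10], b"root0child"); // matches the updated test
        assert_eq!(buf[10], 0u8);
    }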

View File

@@ -1,6 +1,6 @@
 [package]
 name = "tantivy-common"
-version = "0.6.0"
+version = "0.7.0"
 authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"]
 license = "MIT"
 edition = "2021"
@@ -14,7 +14,7 @@ repository = "https://github.com/quickwit-oss/tantivy"
 [dependencies]
 byteorder = "1.4.3"
-ownedbytes = { version= "0.6", path="../ownedbytes" }
+ownedbytes = { version= "0.7", path="../ownedbytes" }
 async-trait = "0.1"
 time = { version = "0.3.10", features = ["serde-well-known"] }
 serde = { version = "1.0.136", features = ["derive"] }

View File

@@ -1,5 +1,5 @@
 use std::io::Write;
-use std::{fmt, io, u64};
+use std::{fmt, io};
 use ownedbytes::OwnedBytes;

View File

@@ -1,5 +1,3 @@
-#![allow(deprecated)]
 use std::fmt;
 use std::io::{Read, Write};
@@ -27,9 +25,6 @@ pub enum DateTimePrecision {
     Nanoseconds,
 }
-#[deprecated(since = "0.20.0", note = "Use `DateTimePrecision` instead")]
-pub type DatePrecision = DateTimePrecision;
 /// A date/time value with nanoseconds precision.
 ///
 /// This timestamp does not carry any explicit time zone information.
@@ -40,7 +35,7 @@ pub type DatePrecision = DateTimePrecision;
 /// All constructors and conversions are provided as explicit
 /// functions and not by implementing any `From`/`Into` traits
 /// to prevent unintended usage.
-#[derive(Clone, Default, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
+#[derive(Clone, Default, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
 pub struct DateTime {
     // Timestamp in nanoseconds.
     pub(crate) timestamp_nanos: i64,

View File

@@ -5,6 +5,12 @@ pub const JSON_PATH_SEGMENT_SEP: u8 = 1u8;
 pub const JSON_PATH_SEGMENT_SEP_STR: &str =
     unsafe { std::str::from_utf8_unchecked(&[JSON_PATH_SEGMENT_SEP]) };
+/// Separates the json path and the value in
+/// a JSON term binary representation.
+pub const JSON_END_OF_PATH: u8 = 0u8;
+pub const JSON_END_OF_PATH_STR: &str =
+    unsafe { std::str::from_utf8_unchecked(&[JSON_END_OF_PATH]) };
 /// Create a new JsonPathWriter, that creates flattened json paths for tantivy.
 #[derive(Clone, Debug, Default)]
 pub struct JsonPathWriter {
@@ -14,6 +20,14 @@ pub struct JsonPathWriter {
 }
 impl JsonPathWriter {
+    pub fn with_expand_dots(expand_dots: bool) -> Self {
+        JsonPathWriter {
+            path: String::new(),
+            indices: Vec::new(),
+            expand_dots,
+        }
+    }
     pub fn new() -> Self {
         JsonPathWriter {
             path: String::new(),
@@ -39,8 +53,8 @@ impl JsonPathWriter {
     pub fn push(&mut self, segment: &str) {
         let len_path = self.path.len();
         self.indices.push(len_path);
-        if !self.path.is_empty() {
-            self.path.push_str(JSON_PATH_SEGMENT_SEP_STR);
+        if self.indices.len() > 1 {
+            self.path.push(JSON_PATH_SEGMENT_SEP as char);
         }
         self.path.push_str(segment);
         if self.expand_dots {
@@ -55,6 +69,12 @@
         }
     }
+    /// Set the end of JSON path marker.
+    #[inline]
+    pub fn set_end(&mut self) {
+        self.path.push_str(JSON_END_OF_PATH_STR);
+    }
     /// Remove the last segment. Does nothing if the path is empty.
     #[inline]
     pub fn pop(&mut self) {
@@ -91,6 +111,7 @@ mod tests {
     #[test]
     fn json_path_writer_test() {
         let mut writer = JsonPathWriter::new();
+        writer.set_expand_dots(false);
         writer.push("root");
         assert_eq!(writer.as_str(), "root");
@@ -109,4 +130,15 @@
         writer.push("k8s.node.id");
         assert_eq!(writer.as_str(), "root\u{1}k8s\u{1}node\u{1}id");
     }
+    #[test]
+    fn test_json_path_expand_dots_enabled_pop_segment() {
+        let mut json_writer = JsonPathWriter::with_expand_dots(true);
+        json_writer.push("hello");
+        assert_eq!(json_writer.as_str(), "hello");
+        json_writer.push("color.hue");
+        assert_eq!(json_writer.as_str(), "hello\x01color\x01hue");
+        json_writer.pop();
+        assert_eq!(json_writer.as_str(), "hello");
+    }
 }
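
A usage sketch of the new JsonPathWriter surface, assuming the crate is imported under the `common` alias that tantivy's own Cargo.toml uses:

    use common::json_path_writer::JSON_END_OF_PATH;
    use common::JsonPathWriter;

    fn main() {
        let mut path = JsonPathWriter::with_expand_dots(true);
        path.push("root");
        path.push("k8s.node"); // expand_dots splits this into two segments
        assert_eq!(path.as_str(), "root\u{1}k8s\u{1}node");
        path.set_end(); // appends the 0u8 end-of-path marker
        assert!(path.as_str().ends_with(JSON_END_OF_PATH as char));
    }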

View File

@@ -9,14 +9,12 @@ mod byte_count;
 mod datetime;
 pub mod file_slice;
 mod group_by;
-mod json_path_writer;
+pub mod json_path_writer;
 mod serialize;
 mod vint;
 mod writer;
 pub use bitset::*;
 pub use byte_count::ByteCount;
-#[allow(deprecated)]
-pub use datetime::DatePrecision;
 pub use datetime::{DateTime, DateTimePrecision};
 pub use group_by::GroupByIteratorExtended;
 pub use json_path_writer::JsonPathWriter;

View File

@@ -290,8 +290,7 @@ impl<'a> BinarySerializable for Cow<'a, [u8]> {
 #[cfg(test)]
 pub mod test {
-    use super::{VInt, *};
-    use crate::serialize::BinarySerializable;
+    use super::*;
     pub fn fixed_size_test<O: BinarySerializable + FixedSize + Default>() {
         let mut buffer = Vec::new();
         O::default().serialize(&mut buffer).unwrap();

View File

@@ -1,7 +1,7 @@
 [package]
 authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"]
 name = "ownedbytes"
-version = "0.6.0"
+version = "0.7.0"
 edition = "2021"
 description = "Expose data as static slice"
 license = "MIT"

View File

@@ -1,6 +1,6 @@
 [package]
 name = "tantivy-query-grammar"
-version = "0.21.0"
+version = "0.22.0"
 authors = ["Paul Masurel <paul.masurel@gmail.com>"]
 license = "MIT"
 categories = ["database-implementations", "data-structures"]

View File

@@ -218,27 +218,14 @@ fn term_or_phrase_infallible(inp: &str) -> JResult<&str, Option<UserInputLeaf>>
 }
 fn term_group(inp: &str) -> IResult<&str, UserInputAst> {
-    let occur_symbol = alt((
-        value(Occur::MustNot, char('-')),
-        value(Occur::Must, char('+')),
-    ));
     map(
         tuple((
             terminated(field_name, multispace0),
-            delimited(
-                tuple((char('('), multispace0)),
-                separated_list0(multispace1, tuple((opt(occur_symbol), term_or_phrase))),
-                char(')'),
-            ),
+            delimited(tuple((char('('), multispace0)), ast, char(')')),
        )),
-        |(field_name, terms)| {
-            UserInputAst::Clause(
-                terms
-                    .into_iter()
-                    .map(|(occur, leaf)| (occur, leaf.set_field(Some(field_name.clone())).into()))
-                    .collect(),
-            )
+        |(field_name, mut ast)| {
+            ast.set_default_field(field_name);
+            ast
         },
     )(inp)
 }
@@ -258,46 +245,18 @@ fn term_group_precond(inp: &str) -> IResult<&str, (), ()> {
 }
 fn term_group_infallible(inp: &str) -> JResult<&str, UserInputAst> {
-    let (mut inp, (field_name, _, _, _)) =
+    let (inp, (field_name, _, _, _)) =
         tuple((field_name, multispace0, char('('), multispace0))(inp).expect("precondition failed");
-    let mut terms = Vec::new();
-    let mut errs = Vec::new();
-    let mut first_round = true;
-    loop {
-        let mut space_error = if first_round {
-            first_round = false;
-            Vec::new()
-        } else {
-            let (rest, (_, err)) = space1_infallible(inp)?;
-            inp = rest;
-            err
-        };
-        if inp.is_empty() {
-            errs.push(LenientErrorInternal {
-                pos: inp.len(),
-                message: "missing )".to_string(),
-            });
-            break Ok((inp, (UserInputAst::Clause(terms), errs)));
-        }
-        if let Some(inp) = inp.strip_prefix(')') {
-            break Ok((inp, (UserInputAst::Clause(terms), errs)));
-        }
-        // only append missing space error if we did not reach the end of group
-        errs.append(&mut space_error);
-        // here we do the assumption term_or_phrase_infallible always consume something if the
-        // first byte is not `)` or ' '. If it did not, we would end up looping.
-        let (rest, ((occur, leaf), mut err)) =
-            tuple_infallible((occur_symbol, term_or_phrase_infallible))(inp)?;
-        errs.append(&mut err);
-        if let Some(leaf) = leaf {
-            terms.push((occur, leaf.set_field(Some(field_name.clone())).into()));
-        }
-        inp = rest;
-    }
+    let res = delimited_infallible(
+        nothing,
+        map(ast_infallible, |(mut ast, errors)| {
+            ast.set_default_field(field_name.to_string());
+            (ast, errors)
+        }),
+        opt_i_err(char(')'), "expected ')'"),
+    )(inp);
+    res
 }
 fn exists(inp: &str) -> IResult<&str, UserInputLeaf> {
@@ -1468,8 +1427,18 @@ mod test {
     #[test]
     fn test_parse_query_term_group() {
-        test_parse_query_to_ast_helper(r#"field:(abc)"#, r#"(*"field":abc)"#);
+        test_parse_query_to_ast_helper(r#"field:(abc)"#, r#""field":abc"#);
         test_parse_query_to_ast_helper(r#"field:(+a -"b c")"#, r#"(+"field":a -"field":"b c")"#);
+        test_parse_query_to_ast_helper(r#"field:(a AND "b c")"#, r#"(+"field":a +"field":"b c")"#);
+        test_parse_query_to_ast_helper(r#"field:(a OR "b c")"#, r#"(?"field":a ?"field":"b c")"#);
+        test_parse_query_to_ast_helper(
+            r#"field:(a OR (b AND c))"#,
+            r#"(?"field":a ?(+"field":b +"field":c))"#,
+        );
+        test_parse_query_to_ast_helper(
+            r#"field:(a [b TO c])"#,
+            r#"(*"field":a *"field":["b" TO "c"])"#,
+        );
         test_is_parse_err(r#"field:(+a -"b c""#, r#"(+"field":a -"field":"b c")"#);
     }
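
The new expected strings in the test are `Debug` renderings of the parsed AST. A minimal sketch against the crate's public `parse_query` entry point (assuming it is exposed as in current tantivy-query-grammar; the assertion string is taken from the test above):

    use tantivy_query_grammar::parse_query;

    fn main() {
        // Boolean operators are now accepted inside a term group; the group's
        // field is pushed down onto every leaf of the inner AST.
        let ast = parse_query(r#"field:(a OR (b AND c))"#).unwrap();
        assert_eq!(
            format!("{ast:?}"),
            r#"(?"field":a ?(+"field":b +"field":c))"#
        );
    }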

View File

@@ -44,6 +44,26 @@ impl UserInputLeaf {
             },
         }
     }
+    pub(crate) fn set_default_field(&mut self, default_field: String) {
+        match self {
+            UserInputLeaf::Literal(ref mut literal) if literal.field_name.is_none() => {
+                literal.field_name = Some(default_field)
+            }
+            UserInputLeaf::All => {
+                *self = UserInputLeaf::Exists {
+                    field: default_field,
+                }
+            }
+            UserInputLeaf::Range { ref mut field, .. } if field.is_none() => {
+                *field = Some(default_field)
+            }
+            UserInputLeaf::Set { ref mut field, .. } if field.is_none() => {
+                *field = Some(default_field)
+            }
+            _ => (), // field was already set, do nothing
+        }
+    }
 }
 impl Debug for UserInputLeaf {
@@ -205,6 +225,16 @@ impl UserInputAst {
     pub fn or(asts: Vec<UserInputAst>) -> UserInputAst {
         UserInputAst::compose(Occur::Should, asts)
     }
+    pub(crate) fn set_default_field(&mut self, field: String) {
+        match self {
+            UserInputAst::Clause(clauses) => clauses
+                .iter_mut()
+                .for_each(|(_, ast)| ast.set_default_field(field.clone())),
+            UserInputAst::Leaf(leaf) => leaf.set_default_field(field),
+            UserInputAst::Boost(ref mut ast, _) => ast.set_default_field(field),
+        }
+    }
 }
 impl From<UserInputLiteral> for UserInputLeaf {

View File

@@ -292,7 +292,7 @@ impl AggregationWithAccessor {
             add_agg_with_accessor(&agg, accessor, column_type, &mut res)?;
         }
         TopHits(ref mut top_hits) => {
-            top_hits.validate_and_resolve(reader.fast_fields().columnar())?;
+            top_hits.validate_and_resolve_field_names(reader.fast_fields().columnar())?;
             let accessors: Vec<(Column<u64>, ColumnType)> = top_hits
                 .field_names()
                 .iter()
View File

@@ -4,6 +4,7 @@ use crate::aggregation::agg_req::{Aggregation, Aggregations};
 use crate::aggregation::agg_result::AggregationResults;
 use crate::aggregation::buf_collector::DOC_BLOCK_SIZE;
 use crate::aggregation::collector::AggregationCollector;
+use crate::aggregation::intermediate_agg_result::IntermediateAggregationResults;
 use crate::aggregation::segment_agg_result::AggregationLimits;
 use crate::aggregation::tests::{get_test_index_2_segments, get_test_index_from_values_and_terms};
 use crate::aggregation::DistributedAggregationCollector;
@@ -66,6 +67,22 @@ fn test_aggregation_flushing(
         }
     }
 },
+"top_hits_test":{
+    "terms": {
+        "field": "string_id"
+    },
+    "aggs": {
+        "bucketsL2": {
+            "top_hits": {
+                "size": 2,
+                "sort": [
+                    { "score": "asc" }
+                ],
+                "docvalue_fields": ["score"]
+            }
+        }
+    }
+},
 "histogram_test":{
 "histogram": {
 "field": "score",
@@ -108,6 +125,16 @@ fn test_aggregation_flushing(
 let searcher = reader.searcher();
 let intermediate_agg_result = searcher.search(&AllQuery, &collector).unwrap();
+// Test postcard roundtrip serialization
+let intermediate_agg_result_bytes = postcard::to_allocvec(&intermediate_agg_result).expect(
+    "Postcard Serialization failed, flatten etc. is not supported in the intermediate \
+     result",
+);
+let intermediate_agg_result: IntermediateAggregationResults =
+    postcard::from_bytes(&intermediate_agg_result_bytes)
+        .expect("Post deserialization failed");
 intermediate_agg_result
     .into_final_result(agg_req, &Default::default())
     .unwrap()
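
The round-trip in the test, reduced to a self-contained sketch with a stand-in struct (the real `IntermediateAggregationResults` carries far more state):

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize, Debug, PartialEq)]
    struct Intermediate {
        count: u64,
        sum: f64,
    }

    fn main() {
        let before = Intermediate { count: 3, sum: 1.5 };
        // postcard is a compact, non-self-describing format; serde constructs
        // like `flatten` are unsupported, which is what the test guards against.
        let bytes = postcard::to_allocvec(&before).unwrap();
        let after: Intermediate = postcard::from_bytes(&bytes).unwrap();
        assert_eq!(before, after);
    }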

View File

@@ -1,7 +1,5 @@
 use std::cmp::Ordering;
-use columnar::ColumnType;
-use itertools::Itertools;
 use rustc_hash::FxHashMap;
 use serde::{Deserialize, Serialize};
 use tantivy_bitpacker::minmax;
@@ -17,7 +15,7 @@ use crate::aggregation::intermediate_agg_result::{
     IntermediateHistogramBucketEntry,
 };
 use crate::aggregation::segment_agg_result::{
-    build_segment_agg_collector, AggregationLimits, SegmentAggregationCollector,
+    build_segment_agg_collector, SegmentAggregationCollector,
 };
 use crate::aggregation::*;
 use crate::TantivyError;

View File

@@ -28,6 +28,7 @@ mod term_agg;
 mod term_missing_agg;
 use std::collections::HashMap;
+use std::fmt;
 pub use histogram::*;
 pub use range::*;
@@ -72,12 +73,12 @@ impl From<&str> for OrderTarget {
     }
 }
-impl ToString for OrderTarget {
-    fn to_string(&self) -> String {
+impl fmt::Display for OrderTarget {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
         match self {
-            OrderTarget::Key => "_key".to_string(),
-            OrderTarget::Count => "_count".to_string(),
-            OrderTarget::SubAggregation(agg) => agg.to_string(),
+            OrderTarget::Key => f.write_str("_key"),
+            OrderTarget::Count => f.write_str("_count"),
+            OrderTarget::SubAggregation(agg) => agg.fmt(f),
         }
     }
 }

View File

@@ -1,7 +1,6 @@
 use std::fmt::Debug;
 use std::ops::Range;
-use columnar::{ColumnType, MonotonicallyMappableToU64};
 use rustc_hash::FxHashMap;
 use serde::{Deserialize, Serialize};
@@ -450,7 +449,6 @@ pub(crate) fn range_to_key(range: &Range<u64>, field_type: &ColumnType) -> crate
 #[cfg(test)]
 mod tests {
-    use columnar::MonotonicallyMappableToU64;
     use serde_json::Value;
     use super::*;
@@ -459,7 +457,6 @@ mod tests {
         exec_request, exec_request_with_query, get_test_index_2_segments,
         get_test_index_with_num_docs,
     };
-    use crate::aggregation::AggregationLimits;
     pub fn get_collector_from_ranges(
         ranges: Vec<RangeAggregationRange>,

View File

@@ -20,7 +20,7 @@ use super::bucket::{
 };
 use super::metric::{
     IntermediateAverage, IntermediateCount, IntermediateMax, IntermediateMin, IntermediateStats,
-    IntermediateSum, PercentilesCollector, TopHitsCollector,
+    IntermediateSum, PercentilesCollector, TopHitsTopNComputer,
 };
 use super::segment_agg_result::AggregationLimits;
 use super::{format_date, AggregationError, Key, SerializedKey};
@@ -221,9 +221,9 @@ pub(crate) fn empty_from_req(req: &Aggregation) -> IntermediateAggregationResult
     Percentiles(_) => IntermediateAggregationResult::Metric(
         IntermediateMetricResult::Percentiles(PercentilesCollector::default()),
     ),
-    TopHits(_) => IntermediateAggregationResult::Metric(IntermediateMetricResult::TopHits(
-        TopHitsCollector::default(),
-    )),
+    TopHits(ref req) => IntermediateAggregationResult::Metric(
+        IntermediateMetricResult::TopHits(TopHitsTopNComputer::new(req.clone())),
+    ),
 }
 }
@@ -285,7 +285,7 @@ pub enum IntermediateMetricResult {
     /// Intermediate sum result.
     Sum(IntermediateSum),
     /// Intermediate top_hits result
-    TopHits(TopHitsCollector),
+    TopHits(TopHitsTopNComputer),
 }
 impl IntermediateMetricResult {
@@ -314,7 +314,7 @@ impl IntermediateMetricResult {
         .into_final_result(req.agg.as_percentile().expect("unexpected metric type")),
     ),
     IntermediateMetricResult::TopHits(top_hits) => {
-        MetricResult::TopHits(top_hits.finalize())
+        MetricResult::TopHits(top_hits.into_final_result())
     }
 }
 }

View File

@@ -25,6 +25,8 @@ mod stats;
 mod sum;
 mod top_hits;
+use std::collections::HashMap;
 pub use average::*;
 pub use count::*;
 pub use max::*;
@@ -36,6 +38,8 @@ pub use stats::*;
 pub use sum::*;
 pub use top_hits::*;
+use crate::schema::OwnedValue;
 /// Single-metric aggregations use this common result structure.
 ///
 /// Main reason to wrap it in value is to match elasticsearch output structure.
@@ -92,8 +96,9 @@ pub struct TopHitsVecEntry {
     /// Search results, for queries that include field retrieval requests
     /// (`docvalue_fields`).
-    #[serde(flatten)]
-    pub search_results: FieldRetrivalResult,
+    #[serde(rename = "docvalue_fields")]
+    #[serde(skip_serializing_if = "HashMap::is_empty")]
+    pub doc_value_fields: HashMap<String, OwnedValue>,
 }
 /// The top_hits metric aggregation results a list of top hits by sort criteria.

View File

@@ -1,6 +1,5 @@
 use std::fmt::Debug;
-use columnar::ColumnType;
 use serde::{Deserialize, Serialize};
 use super::*;

View File

@@ -1,4 +1,3 @@
-use columnar::ColumnType;
 use serde::{Deserialize, Serialize};
 use super::*;

View File

@@ -1,7 +1,9 @@
use std::collections::HashMap; use std::collections::HashMap;
use std::fmt::Formatter; use std::net::Ipv6Addr;
use columnar::{ColumnarReader, DynamicColumn}; use columnar::{ColumnarReader, DynamicColumn};
use common::json_path_writer::JSON_PATH_SEGMENT_SEP_STR;
use common::DateTime;
use regex::Regex; use regex::Regex;
use serde::ser::SerializeMap; use serde::ser::SerializeMap;
use serde::{Deserialize, Deserializer, Serialize, Serializer}; use serde::{Deserialize, Deserializer, Serialize, Serializer};
@@ -12,8 +14,8 @@ use crate::aggregation::intermediate_agg_result::{
IntermediateAggregationResult, IntermediateMetricResult, IntermediateAggregationResult, IntermediateMetricResult,
}; };
use crate::aggregation::segment_agg_result::SegmentAggregationCollector; use crate::aggregation::segment_agg_result::SegmentAggregationCollector;
use crate::aggregation::AggregationError;
use crate::collector::TopNComputer; use crate::collector::TopNComputer;
use crate::schema::term::JSON_PATH_SEGMENT_SEP_STR;
use crate::schema::OwnedValue; use crate::schema::OwnedValue;
use crate::{DocAddress, DocId, SegmentOrdinal}; use crate::{DocAddress, DocId, SegmentOrdinal};
@@ -92,53 +94,106 @@ pub struct TopHitsAggregation {
size: usize, size: usize,
from: Option<usize>, from: Option<usize>,
#[serde(flatten)]
retrieval: RetrievalFields,
}
const fn default_doc_value_fields() -> Vec<String> {
Vec::new()
}
/// Search query spec for each matched document
/// TODO: move this to a common module
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Default)]
pub struct RetrievalFields {
/// The fast fields to return for each hit.
/// This is the only variant supported for now.
/// TODO: support the {field, format} variant for custom formatting.
#[serde(rename = "docvalue_fields")] #[serde(rename = "docvalue_fields")]
#[serde(default = "default_doc_value_fields")] #[serde(default)]
pub doc_value_fields: Vec<String>, doc_value_fields: Vec<String>,
// Not supported
_source: Option<serde_json::Value>,
fields: Option<serde_json::Value>,
script_fields: Option<serde_json::Value>,
highlight: Option<serde_json::Value>,
explain: Option<serde_json::Value>,
version: Option<serde_json::Value>,
} }
/// Search query result for each matched document #[derive(Debug, Clone, PartialEq, Default)]
/// TODO: move this to a common module struct KeyOrder {
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Default)] field: String,
pub struct FieldRetrivalResult { order: Order,
/// The fast fields returned for each hit.
#[serde(rename = "docvalue_fields")]
#[serde(skip_serializing_if = "HashMap::is_empty")]
pub doc_value_fields: HashMap<String, OwnedValue>,
} }
impl RetrievalFields { impl Serialize for KeyOrder {
fn get_field_names(&self) -> Vec<&str> { fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
self.doc_value_fields.iter().map(|s| s.as_str()).collect() let KeyOrder { field, order } = self;
let mut map = serializer.serialize_map(Some(1))?;
map.serialize_entry(field, order)?;
map.end()
} }
}
impl<'de> Deserialize<'de> for KeyOrder {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where D: Deserializer<'de> {
let mut key_order = <HashMap<String, Order>>::deserialize(deserializer)?.into_iter();
let (field, order) = key_order.next().ok_or(serde::de::Error::custom(
"Expected exactly one key-value pair in sort parameter of top_hits, found none",
))?;
if key_order.next().is_some() {
return Err(serde::de::Error::custom(format!(
"Expected exactly one key-value pair in sort parameter of top_hits, found {:?}",
key_order
)));
}
Ok(Self { field, order })
}
}
// Tranform a glob (`pattern*`, for example) into a regex::Regex (`^pattern.*$`)
fn globbed_string_to_regex(glob: &str) -> Result<Regex, crate::TantivyError> {
// Replace `*` glob with `.*` regex
let sanitized = format!("^{}$", regex::escape(glob).replace(r"\*", ".*"));
Regex::new(&sanitized.replace('*', ".*")).map_err(|e| {
crate::TantivyError::SchemaError(format!(
"Invalid regex '{}' in docvalue_fields: {}",
glob, e
))
})
}
fn use_doc_value_fields_err(parameter: &str) -> crate::Result<()> {
Err(crate::TantivyError::AggregationError(
AggregationError::InvalidRequest(format!(
"The `{}` parameter is not supported, only `docvalue_fields` is supported in \
`top_hits` aggregation",
parameter
)),
))
}
fn unsupported_err(parameter: &str) -> crate::Result<()> {
Err(crate::TantivyError::AggregationError(
AggregationError::InvalidRequest(format!(
"The `{}` parameter is not supported in the `top_hits` aggregation",
parameter
)),
))
}
impl TopHitsAggregation {
/// Validate and resolve field retrieval parameters
pub fn validate_and_resolve_field_names(
&mut self,
reader: &ColumnarReader,
) -> crate::Result<()> {
if self._source.is_some() {
use_doc_value_fields_err("_source")?;
}
if self.fields.is_some() {
use_doc_value_fields_err("fields")?;
}
if self.script_fields.is_some() {
use_doc_value_fields_err("script_fields")?;
}
if self.explain.is_some() {
unsupported_err("explain")?;
}
if self.highlight.is_some() {
unsupported_err("highlight")?;
}
if self.version.is_some() {
unsupported_err("version")?;
}
fn resolve_field_names(&mut self, reader: &ColumnarReader) -> crate::Result<()> {
// Tranform a glob (`pattern*`, for example) into a regex::Regex (`^pattern.*$`)
let globbed_string_to_regex = |glob: &str| {
// Replace `*` glob with `.*` regex
let sanitized = format!("^{}$", regex::escape(glob).replace(r"\*", ".*"));
Regex::new(&sanitized.replace('*', ".*")).map_err(|e| {
crate::TantivyError::SchemaError(format!(
"Invalid regex '{}' in docvalue_fields: {}",
glob, e
))
})
};
self.doc_value_fields = self self.doc_value_fields = self
.doc_value_fields .doc_value_fields
.iter() .iter()
@@ -175,12 +230,25 @@ impl RetrievalFields {
Ok(()) Ok(())
} }
/// Return fields accessed by the aggregator, in order.
pub fn field_names(&self) -> Vec<&str> {
self.sort
.iter()
.map(|KeyOrder { field, .. }| field.as_str())
.collect()
}
/// Return fields accessed by the aggregator's value retrieval.
pub fn value_field_names(&self) -> Vec<&str> {
self.doc_value_fields.iter().map(|s| s.as_str()).collect()
}
fn get_document_field_data( fn get_document_field_data(
&self, &self,
accessors: &HashMap<String, Vec<DynamicColumn>>, accessors: &HashMap<String, Vec<DynamicColumn>>,
doc_id: DocId, doc_id: DocId,
) -> FieldRetrivalResult { ) -> HashMap<String, FastFieldValue> {
let dvf = self let doc_value_fields = self
.doc_value_fields .doc_value_fields
.iter() .iter()
.map(|field| { .map(|field| {
@@ -188,20 +256,20 @@ impl RetrievalFields {
.get(field) .get(field)
.unwrap_or_else(|| panic!("field '{}' not found in accessors", field)); .unwrap_or_else(|| panic!("field '{}' not found in accessors", field));
let values: Vec<OwnedValue> = accessors let values: Vec<FastFieldValue> = accessors
.iter() .iter()
.flat_map(|accessor| match accessor { .flat_map(|accessor| match accessor {
DynamicColumn::U64(accessor) => accessor DynamicColumn::U64(accessor) => accessor
.values_for_doc(doc_id) .values_for_doc(doc_id)
.map(OwnedValue::U64) .map(FastFieldValue::U64)
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
DynamicColumn::I64(accessor) => accessor DynamicColumn::I64(accessor) => accessor
.values_for_doc(doc_id) .values_for_doc(doc_id)
.map(OwnedValue::I64) .map(FastFieldValue::I64)
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
DynamicColumn::F64(accessor) => accessor DynamicColumn::F64(accessor) => accessor
.values_for_doc(doc_id) .values_for_doc(doc_id)
.map(OwnedValue::F64) .map(FastFieldValue::F64)
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
DynamicColumn::Bytes(accessor) => accessor DynamicColumn::Bytes(accessor) => accessor
.term_ords(doc_id) .term_ords(doc_id)
@@ -213,7 +281,7 @@ impl RetrievalFields {
.expect("could not read term dictionary"), .expect("could not read term dictionary"),
"term corresponding to term_ord does not exist" "term corresponding to term_ord does not exist"
); );
OwnedValue::Bytes(buffer) FastFieldValue::Bytes(buffer)
}) })
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
DynamicColumn::Str(accessor) => accessor DynamicColumn::Str(accessor) => accessor
@@ -226,94 +294,82 @@ impl RetrievalFields {
.expect("could not read term dictionary"), .expect("could not read term dictionary"),
"term corresponding to term_ord does not exist" "term corresponding to term_ord does not exist"
); );
OwnedValue::Str(String::from_utf8(buffer).unwrap()) FastFieldValue::Str(String::from_utf8(buffer).unwrap())
}) })
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
DynamicColumn::Bool(accessor) => accessor DynamicColumn::Bool(accessor) => accessor
.values_for_doc(doc_id) .values_for_doc(doc_id)
.map(OwnedValue::Bool) .map(FastFieldValue::Bool)
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
DynamicColumn::IpAddr(accessor) => accessor DynamicColumn::IpAddr(accessor) => accessor
.values_for_doc(doc_id) .values_for_doc(doc_id)
.map(OwnedValue::IpAddr) .map(FastFieldValue::IpAddr)
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
DynamicColumn::DateTime(accessor) => accessor DynamicColumn::DateTime(accessor) => accessor
.values_for_doc(doc_id) .values_for_doc(doc_id)
.map(OwnedValue::Date) .map(FastFieldValue::Date)
.collect::<Vec<_>>(), .collect::<Vec<_>>(),
}) })
.collect(); .collect();
(field.to_owned(), OwnedValue::Array(values)) (field.to_owned(), FastFieldValue::Array(values))
}) })
.collect(); .collect();
FieldRetrivalResult { doc_value_fields
doc_value_fields: dvf, }
}
/// A retrieved value from a fast field.
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum FastFieldValue {
/// The str type is used for any text information.
Str(String),
/// Unsigned 64-bits Integer `u64`
U64(u64),
/// Signed 64-bits Integer `i64`
I64(i64),
/// 64-bits Float `f64`
F64(f64),
/// Bool value
Bool(bool),
/// Date/time with nanoseconds precision
Date(DateTime),
/// Arbitrarily sized byte array
Bytes(Vec<u8>),
/// IpV6 Address. Internally there is no IpV4, it needs to be converted to `Ipv6Addr`.
IpAddr(Ipv6Addr),
/// A list of values.
Array(Vec<Self>),
}
impl From<FastFieldValue> for OwnedValue {
fn from(value: FastFieldValue) -> Self {
match value {
FastFieldValue::Str(s) => OwnedValue::Str(s),
FastFieldValue::U64(u) => OwnedValue::U64(u),
FastFieldValue::I64(i) => OwnedValue::I64(i),
FastFieldValue::F64(f) => OwnedValue::F64(f),
FastFieldValue::Bool(b) => OwnedValue::Bool(b),
FastFieldValue::Date(d) => OwnedValue::Date(d),
FastFieldValue::Bytes(b) => OwnedValue::Bytes(b),
FastFieldValue::IpAddr(ip) => OwnedValue::IpAddr(ip),
FastFieldValue::Array(a) => {
OwnedValue::Array(a.into_iter().map(OwnedValue::from).collect())
}
} }
} }
} }
#[derive(Debug, Clone, PartialEq, Default)] /// Holds a fast field value in its u64 representation, and the order in which it should be sorted.
struct KeyOrder {
field: String,
order: Order,
}
impl Serialize for KeyOrder {
fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
let KeyOrder { field, order } = self;
let mut map = serializer.serialize_map(Some(1))?;
map.serialize_entry(field, order)?;
map.end()
}
}
impl<'de> Deserialize<'de> for KeyOrder {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where D: Deserializer<'de> {
let mut k_o = <HashMap<String, Order>>::deserialize(deserializer)?.into_iter();
let (k, v) = k_o.next().ok_or(serde::de::Error::custom(
"Expected exactly one key-value pair in KeyOrder, found none",
))?;
if k_o.next().is_some() {
return Err(serde::de::Error::custom(
"Expected exactly one key-value pair in KeyOrder, found more",
));
}
Ok(Self { field: k, order: v })
}
}
impl TopHitsAggregation {
/// Validate and resolve field retrieval parameters
pub fn validate_and_resolve(&mut self, reader: &ColumnarReader) -> crate::Result<()> {
self.retrieval.resolve_field_names(reader)
}
/// Return fields accessed by the aggregator, in order.
pub fn field_names(&self) -> Vec<&str> {
self.sort
.iter()
.map(|KeyOrder { field, .. }| field.as_str())
.collect()
}
/// Return fields accessed by the aggregator's value retrieval.
pub fn value_field_names(&self) -> Vec<&str> {
self.retrieval.get_field_names()
}
}
-/// Holds a single comparable doc feature, and the order in which it should be sorted.
+/// Holds a fast field value in its u64 representation, and the order in which it should be sorted.
#[derive(Clone, Serialize, Deserialize, Debug)]
-struct ComparableDocFeature {
-    /// Stores any u64-mappable feature.
+struct DocValueAndOrder {
+    /// A fast field value in its u64 representation.
    value: Option<u64>,
-    /// Sort order for the doc feature
+    /// Sort order for the value
    order: Order,
}

-impl Ord for ComparableDocFeature {
+impl Ord for DocValueAndOrder {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        let invert = |cmp: std::cmp::Ordering| match self.order {
            Order::Asc => cmp,
@@ -329,26 +385,32 @@ impl Ord for ComparableDocFeature {
    }
}

-impl PartialOrd for ComparableDocFeature {
+impl PartialOrd for DocValueAndOrder {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}

-impl PartialEq for ComparableDocFeature {
+impl PartialEq for DocValueAndOrder {
    fn eq(&self, other: &Self) -> bool {
        self.value.cmp(&other.value) == std::cmp::Ordering::Equal
    }
}

-impl Eq for ComparableDocFeature {}
+impl Eq for DocValueAndOrder {}
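A small sketch of what the order flag does to comparisons (assuming, as the elided hunk suggests, that `Order::Desc` reverses the underlying ordering; not code from the diff):

    let a = DocValueAndOrder { value: Some(1), order: Order::Desc };
    let b = DocValueAndOrder { value: Some(2), order: Order::Desc };
    assert!(a > b); // descending: the smaller raw value compares as greater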
#[derive(Clone, Serialize, Deserialize, Debug)]
-struct ComparableDocFeatures(Vec<ComparableDocFeature>, FieldRetrivalResult);
+struct DocSortValuesAndFields {
+    sorts: Vec<DocValueAndOrder>,
+    #[serde(rename = "docvalue_fields")]
+    #[serde(skip_serializing_if = "HashMap::is_empty")]
+    doc_value_fields: HashMap<String, FastFieldValue>,
+}

-impl Ord for ComparableDocFeatures {
+impl Ord for DocSortValuesAndFields {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
-        for (self_feature, other_feature) in self.0.iter().zip(other.0.iter()) {
+        for (self_feature, other_feature) in self.sorts.iter().zip(other.sorts.iter()) {
            let cmp = self_feature.cmp(other_feature);
            if cmp != std::cmp::Ordering::Equal {
                return cmp;
@@ -358,53 +420,43 @@ impl Ord for ComparableDocFeatures {
    }
}

-impl PartialOrd for ComparableDocFeatures {
+impl PartialOrd for DocSortValuesAndFields {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}

-impl PartialEq for ComparableDocFeatures {
+impl PartialEq for DocSortValuesAndFields {
    fn eq(&self, other: &Self) -> bool {
        self.cmp(other) == std::cmp::Ordering::Equal
    }
}

-impl Eq for ComparableDocFeatures {}
+impl Eq for DocSortValuesAndFields {}
/// The TopHitsCollector used for collecting over segments and merging results.
-#[derive(Clone, Serialize, Deserialize)]
-pub struct TopHitsCollector {
+#[derive(Clone, Serialize, Deserialize, Debug)]
+pub struct TopHitsTopNComputer {
    req: TopHitsAggregation,
-    top_n: TopNComputer<ComparableDocFeatures, DocAddress, false>,
+    top_n: TopNComputer<DocSortValuesAndFields, DocAddress, false>,
}

-impl Default for TopHitsCollector {
-    fn default() -> Self {
-        Self {
-            req: TopHitsAggregation::default(),
-            top_n: TopNComputer::new(1),
-        }
-    }
-}
-
-impl std::fmt::Debug for TopHitsCollector {
-    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
-        f.debug_struct("TopHitsCollector")
-            .field("req", &self.req)
-            .field("top_n_threshold", &self.top_n.threshold)
-            .finish()
-    }
-}
-
-impl std::cmp::PartialEq for TopHitsCollector {
+impl std::cmp::PartialEq for TopHitsTopNComputer {
    fn eq(&self, _other: &Self) -> bool {
        false
    }
}

-impl TopHitsCollector {
-    fn collect(&mut self, features: ComparableDocFeatures, doc: DocAddress) {
+impl TopHitsTopNComputer {
+    /// Create a new TopHitsCollector
+    pub fn new(req: TopHitsAggregation) -> Self {
+        Self {
+            top_n: TopNComputer::new(req.size + req.from.unwrap_or(0)),
+            req,
+        }
+    }
+
+    fn collect(&mut self, features: DocSortValuesAndFields, doc: DocAddress) {
        self.top_n.push(features, doc);
    }
@@ -416,14 +468,19 @@ impl TopHitsCollector {
    }

    /// Finalize by converting self into the final result form
-    pub fn finalize(self) -> TopHitsMetricResult {
+    pub fn into_final_result(self) -> TopHitsMetricResult {
        let mut hits: Vec<TopHitsVecEntry> = self
            .top_n
            .into_sorted_vec()
            .into_iter()
            .map(|doc| TopHitsVecEntry {
-                sort: doc.feature.0.iter().map(|f| f.value).collect(),
-                search_results: doc.feature.1,
+                sort: doc.feature.sorts.iter().map(|f| f.value).collect(),
+                doc_value_fields: doc
+                    .feature
+                    .doc_value_fields
+                    .into_iter()
+                    .map(|(k, v)| (k, v.into()))
+                    .collect(),
            })
            .collect();
@@ -436,48 +493,63 @@ impl TopHitsCollector {
    }
}

-#[derive(Clone)]
-pub(crate) struct SegmentTopHitsCollector {
+#[derive(Clone, Debug)]
+pub(crate) struct TopHitsSegmentCollector {
    segment_ordinal: SegmentOrdinal,
    accessor_idx: usize,
-    inner_collector: TopHitsCollector,
+    req: TopHitsAggregation,
+    top_n: TopNComputer<Vec<DocValueAndOrder>, DocAddress, false>,
}

-impl SegmentTopHitsCollector {
+impl TopHitsSegmentCollector {
    pub fn from_req(
        req: &TopHitsAggregation,
        accessor_idx: usize,
        segment_ordinal: SegmentOrdinal,
    ) -> Self {
        Self {
-            inner_collector: TopHitsCollector {
-                req: req.clone(),
-                top_n: TopNComputer::new(req.size + req.from.unwrap_or(0)),
-            },
+            req: req.clone(),
+            top_n: TopNComputer::new(req.size + req.from.unwrap_or(0)),
            segment_ordinal,
            accessor_idx,
        }
    }
-}

-impl std::fmt::Debug for SegmentTopHitsCollector {
-    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
-        f.debug_struct("SegmentTopHitsCollector")
-            .field("segment_id", &self.segment_ordinal)
-            .field("accessor_idx", &self.accessor_idx)
-            .field("inner_collector", &self.inner_collector)
-            .finish()
+    fn into_top_hits_collector(
+        self,
+        value_accessors: &HashMap<String, Vec<DynamicColumn>>,
+    ) -> TopHitsTopNComputer {
+        let mut top_hits_computer = TopHitsTopNComputer::new(self.req.clone());
+        let top_results = self.top_n.into_vec();
+        for res in top_results {
+            let doc_value_fields = self
+                .req
+                .get_document_field_data(value_accessors, res.doc.doc_id);
+            top_hits_computer.collect(
+                DocSortValuesAndFields {
+                    sorts: res.feature,
+                    doc_value_fields,
+                },
+                res.doc,
+            );
+        }
+        top_hits_computer
    }
}
-impl SegmentAggregationCollector for SegmentTopHitsCollector {
+impl SegmentAggregationCollector for TopHitsSegmentCollector {
    fn add_intermediate_aggregation_result(
        self: Box<Self>,
        agg_with_accessor: &crate::aggregation::agg_req_with_accessor::AggregationsWithAccessor,
        results: &mut crate::aggregation::intermediate_agg_result::IntermediateAggregationResults,
    ) -> crate::Result<()> {
        let name = agg_with_accessor.aggs.keys[self.accessor_idx].to_string();
-        let intermediate_result = IntermediateMetricResult::TopHits(self.inner_collector);
+        let value_accessors = &agg_with_accessor.aggs.values[self.accessor_idx].value_accessors;
+        let intermediate_result =
+            IntermediateMetricResult::TopHits(self.into_top_hits_collector(value_accessors));
        results.push(
            name,
            IntermediateAggregationResult::Metric(intermediate_result),
@@ -490,9 +562,7 @@ impl SegmentAggregationCollector for SegmentTopHitsCollector {
        agg_with_accessor: &mut crate::aggregation::agg_req_with_accessor::AggregationsWithAccessor,
    ) -> crate::Result<()> {
        let accessors = &agg_with_accessor.aggs.values[self.accessor_idx].accessors;
-        let value_accessors = &agg_with_accessor.aggs.values[self.accessor_idx].value_accessors;
-        let features: Vec<ComparableDocFeature> = self
-            .inner_collector
+        let sorts: Vec<DocValueAndOrder> = self
            .req
            .sort
            .iter()
@@ -505,18 +575,12 @@ impl SegmentAggregationCollector for SegmentTopHitsCollector {
                    .0
                    .values_for_doc(doc_id)
                    .next();
-                ComparableDocFeature { value, order }
+                DocValueAndOrder { value, order }
            })
            .collect();
-        let retrieval_result = self
-            .inner_collector
-            .req
-            .retrieval
-            .get_document_field_data(value_accessors, doc_id);
-
-        self.inner_collector.collect(
-            ComparableDocFeatures(features, retrieval_result),
+        self.top_n.push(
+            sorts,
            DocAddress {
                segment_ord: self.segment_ordinal,
                doc_id,
@@ -530,11 +594,7 @@ impl SegmentAggregationCollector for SegmentTopHitsCollector {
        docs: &[crate::DocId],
        agg_with_accessor: &mut crate::aggregation::agg_req_with_accessor::AggregationsWithAccessor,
    ) -> crate::Result<()> {
-        // TODO: Consider getting fields with the column block accessor and refactor this.
-        // ---
-        // Would the additional complexity of getting fields with the column_block_accessor
-        // make sense here? Probably yes, but I want to get a first-pass review first
-        // before proceeding.
+        // TODO: Consider getting fields with the column block accessor.
        for doc in docs {
            self.collect(*doc, agg_with_accessor)?;
        }
@@ -549,7 +609,7 @@ mod tests {
    use serde_json::Value;
    use time::macros::datetime;

-    use super::{ComparableDocFeature, ComparableDocFeatures, Order};
+    use super::{DocSortValuesAndFields, DocValueAndOrder, Order};
    use crate::aggregation::agg_req::Aggregations;
    use crate::aggregation::agg_result::AggregationResults;
    use crate::aggregation::bucket::tests::get_test_index_from_docs;
@@ -557,44 +617,44 @@ mod tests {
    use crate::aggregation::AggregationCollector;
    use crate::collector::ComparableDoc;
    use crate::query::AllQuery;
-    use crate::schema::OwnedValue as SchemaValue;
+    use crate::schema::OwnedValue;

-    fn invert_order(cmp_feature: ComparableDocFeature) -> ComparableDocFeature {
-        let ComparableDocFeature { value, order } = cmp_feature;
+    fn invert_order(cmp_feature: DocValueAndOrder) -> DocValueAndOrder {
+        let DocValueAndOrder { value, order } = cmp_feature;
        let order = match order {
            Order::Asc => Order::Desc,
            Order::Desc => Order::Asc,
        };
-        ComparableDocFeature { value, order }
+        DocValueAndOrder { value, order }
    }

-    fn collector_with_capacity(capacity: usize) -> super::TopHitsCollector {
-        super::TopHitsCollector {
+    fn collector_with_capacity(capacity: usize) -> super::TopHitsTopNComputer {
+        super::TopHitsTopNComputer {
            top_n: super::TopNComputer::new(capacity),
-            ..Default::default()
+            req: Default::default(),
        }
    }

-    fn invert_order_features(cmp_features: ComparableDocFeatures) -> ComparableDocFeatures {
-        let ComparableDocFeatures(cmp_features, search_results) = cmp_features;
-        let cmp_features = cmp_features
+    fn invert_order_features(mut cmp_features: DocSortValuesAndFields) -> DocSortValuesAndFields {
+        cmp_features.sorts = cmp_features
+            .sorts
            .into_iter()
            .map(invert_order)
            .collect::<Vec<_>>();
-        ComparableDocFeatures(cmp_features, search_results)
+        cmp_features
    }

    #[test]
    fn test_comparable_doc_feature() -> crate::Result<()> {
-        let small = ComparableDocFeature {
+        let small = DocValueAndOrder {
            value: Some(1),
            order: Order::Asc,
        };
-        let big = ComparableDocFeature {
+        let big = DocValueAndOrder {
            value: Some(2),
            order: Order::Asc,
        };
-        let none = ComparableDocFeature {
+        let none = DocValueAndOrder {
            value: None,
            order: Order::Asc,
        };
@@ -616,21 +676,21 @@ mod tests {
    #[test]
    fn test_comparable_doc_features() -> crate::Result<()> {
-        let features_1 = ComparableDocFeatures(
-            vec![ComparableDocFeature {
+        let features_1 = DocSortValuesAndFields {
+            sorts: vec![DocValueAndOrder {
                value: Some(1),
                order: Order::Asc,
            }],
-            Default::default(),
-        );
+            doc_value_fields: Default::default(),
+        };

-        let features_2 = ComparableDocFeatures(
-            vec![ComparableDocFeature {
+        let features_2 = DocSortValuesAndFields {
+            sorts: vec![DocValueAndOrder {
                value: Some(2),
                order: Order::Asc,
            }],
-            Default::default(),
-        );
+            doc_value_fields: Default::default(),
+        };

        assert!(features_1 < features_2);
@@ -689,39 +749,39 @@ mod tests {
                segment_ord: 0,
                doc_id: 0,
            },
-            feature: ComparableDocFeatures(
-                vec![ComparableDocFeature {
+            feature: DocSortValuesAndFields {
+                sorts: vec![DocValueAndOrder {
                    value: Some(1),
                    order: Order::Asc,
                }],
-                Default::default(),
-            ),
+                doc_value_fields: Default::default(),
+            },
        },
        ComparableDoc {
            doc: crate::DocAddress {
                segment_ord: 0,
                doc_id: 2,
            },
-            feature: ComparableDocFeatures(
-                vec![ComparableDocFeature {
+            feature: DocSortValuesAndFields {
+                sorts: vec![DocValueAndOrder {
                    value: Some(3),
                    order: Order::Asc,
                }],
-                Default::default(),
-            ),
+                doc_value_fields: Default::default(),
+            },
        },
        ComparableDoc {
            doc: crate::DocAddress {
                segment_ord: 0,
                doc_id: 1,
            },
-            feature: ComparableDocFeatures(
-                vec![ComparableDocFeature {
+            feature: DocSortValuesAndFields {
+                sorts: vec![DocValueAndOrder {
                    value: Some(5),
                    order: Order::Asc,
                }],
-                Default::default(),
-            ),
+                doc_value_fields: Default::default(),
+            },
        },
    ];
@@ -730,23 +790,23 @@ mod tests {
        collector.collect(doc.feature, doc.doc);
    }

-    let res = collector.finalize();
+    let res = collector.into_final_result();
    assert_eq!(
        res,
        super::TopHitsMetricResult {
            hits: vec![
                super::TopHitsVecEntry {
-                    sort: vec![docs[0].feature.0[0].value],
-                    search_results: Default::default(),
+                    sort: vec![docs[0].feature.sorts[0].value],
+                    doc_value_fields: Default::default(),
                },
                super::TopHitsVecEntry {
-                    sort: vec![docs[1].feature.0[0].value],
-                    search_results: Default::default(),
+                    sort: vec![docs[1].feature.sorts[0].value],
+                    doc_value_fields: Default::default(),
                },
                super::TopHitsVecEntry {
-                    sort: vec![docs[2].feature.0[0].value],
-                    search_results: Default::default(),
+                    sort: vec![docs[2].feature.sorts[0].value],
+                    doc_value_fields: Default::default(),
                },
            ]
        }
@@ -803,7 +863,7 @@ mod tests {
            {
                "sort": [common::i64_to_u64(date_2017.unix_timestamp_nanos() as i64)],
                "docvalue_fields": {
-                    "date": [ SchemaValue::Date(DateTime::from_utc(date_2017)) ],
+                    "date": [ OwnedValue::Date(DateTime::from_utc(date_2017)) ],
                    "text": [ "ccc" ],
                    "text2": [ "ddd" ],
                    "mixed.dyn_arr": [ 3, "4" ],
@@ -812,7 +872,7 @@
            {
                "sort": [common::i64_to_u64(date_2016.unix_timestamp_nanos() as i64)],
                "docvalue_fields": {
-                    "date": [ SchemaValue::Date(DateTime::from_utc(date_2016)) ],
+                    "date": [ OwnedValue::Date(DateTime::from_utc(date_2016)) ],
                    "text": [ "aaa" ],
                    "text2": [ "bbb" ],
                    "mixed.dyn_arr": [ 6, "7" ],


@@ -417,7 +417,6 @@ mod tests {
    use time::OffsetDateTime;

    use super::agg_req::Aggregations;
-    use super::segment_agg_result::AggregationLimits;
    use super::*;
    use crate::indexer::NoMergePolicy;
    use crate::query::{AllQuery, TermQuery};


@@ -16,7 +16,7 @@ use super::metric::{
    SumAggregation,
};
use crate::aggregation::bucket::TermMissingAgg;
-use crate::aggregation::metric::SegmentTopHitsCollector;
+use crate::aggregation::metric::TopHitsSegmentCollector;

pub(crate) trait SegmentAggregationCollector: CollectorClone + Debug {
    fn add_intermediate_aggregation_result(
@@ -161,7 +161,7 @@ pub(crate) fn build_single_agg_segment_collector(
            accessor_idx,
        )?,
    )),
-        TopHits(top_hits_req) => Ok(Box::new(SegmentTopHitsCollector::from_req(
+        TopHits(top_hits_req) => Ok(Box::new(TopHitsSegmentCollector::from_req(
            top_hits_req,
            accessor_idx,
            req.segment_ordinal,


@@ -1,7 +1,7 @@
use std::cmp::Ordering;
use std::collections::{btree_map, BTreeMap, BTreeSet, BinaryHeap};
+use std::io;
use std::ops::Bound;
-use std::{io, u64, usize};

use crate::collector::{Collector, SegmentCollector};
use crate::fastfield::FacetReader;


@@ -160,7 +160,7 @@ mod tests {
    use super::{add_vecs, HistogramCollector, HistogramComputer};
    use crate::schema::{Schema, FAST};
    use crate::time::{Date, Month};
-    use crate::{doc, query, DateTime, Index};
+    use crate::{query, DateTime, Index};

    #[test]
    fn test_add_histograms_simple() {


@@ -1,15 +1,11 @@
use columnar::{BytesColumn, Column};

use super::*;
-use crate::collector::{Count, FilterCollector, TopDocs};
-use crate::index::SegmentReader;
use crate::query::{AllQuery, QueryParser};
use crate::schema::{Schema, FAST, TEXT};
use crate::time::format_description::well_known::Rfc3339;
use crate::time::OffsetDateTime;
-use crate::{
-    doc, DateTime, DocAddress, DocId, Index, Score, Searcher, SegmentOrdinal, TantivyDocument,
-};
+use crate::{DateTime, DocAddress, Index, Searcher, TantivyDocument};

pub const TEST_COLLECTOR_WITH_SCORE: TestCollector = TestCollector {
    compute_score: true,


@@ -732,6 +732,19 @@ pub struct TopNComputer<Score, D, const REVERSE_ORDER: bool = true> {
    top_n: usize,
    pub(crate) threshold: Option<Score>,
}

+impl<Score: std::fmt::Debug, D, const REVERSE_ORDER: bool> std::fmt::Debug
+    for TopNComputer<Score, D, REVERSE_ORDER>
+{
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("TopNComputer")
+            .field("buffer_len", &self.buffer.len())
+            .field("top_n", &self.top_n)
+            .field("current_threshold", &self.threshold)
+            .finish()
+    }
+}
+
// Intermediate struct for TopNComputer for deserialization, to keep vec capacity
#[derive(Deserialize)]
struct TopNComputerDeser<Score, D, const REVERSE_ORDER: bool> {


@@ -1,12 +1,11 @@
-use columnar::MonotonicallyMappableToU64;
+use common::json_path_writer::JSON_PATH_SEGMENT_SEP;
use common::{replace_in_place, JsonPathWriter};
use rustc_hash::FxHashMap;

-use crate::fastfield::FastValue;
use crate::postings::{IndexingContext, IndexingPosition, PostingsWriter};
use crate::schema::document::{ReferenceValue, ReferenceValueLeaf, Value};
-use crate::schema::term::JSON_PATH_SEGMENT_SEP;
-use crate::schema::{Field, Type, DATE_TIME_PRECISION_INDEXED};
+use crate::schema::indexing_term::IndexingTerm;
+use crate::schema::{Field, Type};
use crate::time::format_description::well_known::Rfc3339;
use crate::time::{OffsetDateTime, UtcOffset};
use crate::tokenizer::TextAnalyzer;
@@ -76,7 +75,7 @@ pub(crate) fn index_json_values<'a, V: Value<'a>>(
    json_visitors: impl Iterator<Item = crate::Result<V::ObjectIter>>,
    text_analyzer: &mut TextAnalyzer,
    expand_dots_enabled: bool,
-    term_buffer: &mut Term,
+    term_buffer: &mut IndexingTerm,
    postings_writer: &mut dyn PostingsWriter,
    json_path_writer: &mut JsonPathWriter,
    ctx: &mut IndexingContext,
@@ -105,7 +104,7 @@ fn index_json_object<'a, V: Value<'a>>(
    doc: DocId,
    json_visitor: V::ObjectIter,
    text_analyzer: &mut TextAnalyzer,
-    term_buffer: &mut Term,
+    term_buffer: &mut IndexingTerm,
    json_path_writer: &mut JsonPathWriter,
    postings_writer: &mut dyn PostingsWriter,
    ctx: &mut IndexingContext,
@@ -132,19 +131,16 @@ fn index_json_value<'a, V: Value<'a>>(
    doc: DocId,
    json_value: V,
    text_analyzer: &mut TextAnalyzer,
-    term_buffer: &mut Term,
+    term_buffer: &mut IndexingTerm,
    json_path_writer: &mut JsonPathWriter,
    postings_writer: &mut dyn PostingsWriter,
    ctx: &mut IndexingContext,
    positions_per_path: &mut IndexingPositionsPerPath,
) {
-    let set_path_id = |term_buffer: &mut Term, unordered_id: u32| {
+    let set_path_id = |term_buffer: &mut IndexingTerm, unordered_id: u32| {
        term_buffer.truncate_value_bytes(0);
        term_buffer.append_bytes(&unordered_id.to_be_bytes());
    };
-    let set_type = |term_buffer: &mut Term, typ: Type| {
-        term_buffer.append_bytes(&[typ.to_code()]);
-    };

    match json_value.as_value() {
        ReferenceValue::Leaf(leaf) => match leaf {
@@ -157,7 +153,7 @@ fn index_json_value<'a, V: Value<'a>>(
                // TODO: make sure the chain position works out.
                set_path_id(term_buffer, unordered_id);
-                set_type(term_buffer, Type::Str);
+                term_buffer.append_bytes(&[Type::Str.to_code()]);
                let indexing_position = positions_per_path.get_position_from_id(unordered_id);
                postings_writer.index_text(
                    doc,
@@ -213,18 +209,16 @@ fn index_json_value<'a, V: Value<'a>>(
                postings_writer.subscribe(doc, 0u32, term_buffer, ctx);
            }
            ReferenceValueLeaf::PreTokStr(_) => {
-                unimplemented!(
-                    "Pre-tokenized string support in dynamic fields is not yet implemented"
-                )
+                unimplemented!("Pre-tokenized string support in JSON fields is not yet implemented")
            }
            ReferenceValueLeaf::Bytes(_) => {
-                unimplemented!("Bytes support in dynamic fields is not yet implemented")
+                unimplemented!("Bytes support in JSON fields is not yet implemented")
            }
            ReferenceValueLeaf::Facet(_) => {
-                unimplemented!("Facet support in dynamic fields is not yet implemented")
+                unimplemented!("Facet support in JSON fields is not yet implemented")
            }
            ReferenceValueLeaf::IpAddr(_) => {
-                unimplemented!("IP address support in dynamic fields is not yet implemented")
+                unimplemented!("IP address support in JSON fields is not yet implemented")
            }
        },
        ReferenceValue::Array(elements) => {
@@ -256,71 +250,43 @@ fn index_json_value<'a, V: Value<'a>>(
        }
    }
}

-// Tries to infer a JSON type from a string.
-pub fn convert_to_fast_value_and_get_term(
-    json_term_writer: &mut JsonTermWriter,
-    phrase: &str,
-) -> Option<Term> {
+/// Tries to infer a JSON type from a string and append it to the term.
+///
+/// The term must be json + JSON path.
+pub fn convert_to_fast_value_and_append_to_json_term(mut term: Term, phrase: &str) -> Option<Term> {
+    assert_eq!(
+        term.value()
+            .as_json()
+            .expect("expecting a Term with a json type and json path")
+            .1
+            .as_serialized()
+            .len(),
+        0,
+        "JSON value bytes should be empty"
+    );
    if let Ok(dt) = OffsetDateTime::parse(phrase, &Rfc3339) {
        let dt_utc = dt.to_offset(UtcOffset::UTC);
-        return Some(set_fastvalue_and_get_term(
-            json_term_writer,
-            DateTime::from_utc(dt_utc),
-        ));
+        term.append_type_and_fast_value(DateTime::from_utc(dt_utc));
+        return Some(term);
    }
    if let Ok(i64_val) = str::parse::<i64>(phrase) {
-        return Some(set_fastvalue_and_get_term(json_term_writer, i64_val));
+        term.append_type_and_fast_value(i64_val);
+        return Some(term);
    }
    if let Ok(u64_val) = str::parse::<u64>(phrase) {
-        return Some(set_fastvalue_and_get_term(json_term_writer, u64_val));
+        term.append_type_and_fast_value(u64_val);
+        return Some(term);
    }
    if let Ok(f64_val) = str::parse::<f64>(phrase) {
-        return Some(set_fastvalue_and_get_term(json_term_writer, f64_val));
+        term.append_type_and_fast_value(f64_val);
+        return Some(term);
    }
    if let Ok(bool_val) = str::parse::<bool>(phrase) {
-        return Some(set_fastvalue_and_get_term(json_term_writer, bool_val));
+        term.append_type_and_fast_value(bool_val);
+        return Some(term);
    }
    None
}

-// helper function to generate a Term from a json fastvalue
-pub(crate) fn set_fastvalue_and_get_term<T: FastValue>(
-    json_term_writer: &mut JsonTermWriter,
-    value: T,
-) -> Term {
-    json_term_writer.set_fast_value(value);
-    json_term_writer.term().clone()
-}
-
-// helper function to generate a list of terms with their positions from a textual json value
-pub(crate) fn set_string_and_get_terms(
-    json_term_writer: &mut JsonTermWriter,
-    value: &str,
-    text_analyzer: &mut TextAnalyzer,
-) -> Vec<(usize, Term)> {
-    let mut positions_and_terms = Vec::<(usize, Term)>::new();
-    json_term_writer.close_path_and_set_type(Type::Str);
-    let term_num_bytes = json_term_writer.term_buffer.len_bytes();
-    let mut token_stream = text_analyzer.token_stream(value);
-    token_stream.process(&mut |token| {
-        json_term_writer
-            .term_buffer
-            .truncate_value_bytes(term_num_bytes);
-        json_term_writer
-            .term_buffer
-            .append_bytes(token.text.as_bytes());
-        positions_and_terms.push((token.position, json_term_writer.term().clone()));
-    });
-    positions_and_terms
-}
-
-/// Writes a value of a JSON field to a `Term`.
-/// The Term format is as follows:
-/// `[JSON_TYPE][JSON_PATH][JSON_END_OF_PATH][VALUE_BYTES]`
-pub struct JsonTermWriter<'a> {
-    term_buffer: &'a mut Term,
-    path_stack: Vec<usize>,
-    expand_dots_enabled: bool,
-}
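Usage sketch for the new public API (illustrative; `json_field` stands for an assumed JSON field of the schema, and `term_from_json_paths` is the helper introduced further down):

    let term = term_from_json_paths(json_field, ["attributes", "price"].into_iter(), false);
    // "400" parses as i64, so the helper appends type code + value to the term.
    if let Some(typed_term) = convert_to_fast_value_and_append_to_json_term(term, "400") {
        // typed_term can now be used in e.g. a TermQuery.
    }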
/// Splits a json path supplied to the query parser in such a way that
/// `.` can be escaped.
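As a reading aid (not in the diff): an unescaped dot separates path segments while an escaped one stays literal, so split_json_path("k8s.node.id") yields ["k8s", "node", "id"], whereas split_json_path(r"k8s\.node.id") yields ["k8s.node", "id"].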
@@ -377,275 +343,106 @@ pub(crate) fn encode_column_name(
    path.into()
}

-impl<'a> JsonTermWriter<'a> {
-    pub fn from_field_and_json_path(
-        field: Field,
-        json_path: &str,
-        expand_dots_enabled: bool,
-        term_buffer: &'a mut Term,
-    ) -> Self {
-        term_buffer.set_field_and_type(field, Type::Json);
-        let mut json_term_writer = Self::wrap(term_buffer, expand_dots_enabled);
-        for segment in split_json_path(json_path) {
-            json_term_writer.push_path_segment(&segment);
-        }
-        json_term_writer
-    }
-
-    pub fn wrap(term_buffer: &'a mut Term, expand_dots_enabled: bool) -> Self {
-        term_buffer.clear_with_type(Type::Json);
-        let mut path_stack = Vec::with_capacity(10);
-        path_stack.push(0);
-        Self {
-            term_buffer,
-            path_stack,
-            expand_dots_enabled,
-        }
-    }
-
-    fn trim_to_end_of_path(&mut self) {
-        let end_of_path = *self.path_stack.last().unwrap();
-        self.term_buffer.truncate_value_bytes(end_of_path);
-    }
-
-    pub fn close_path_and_set_type(&mut self, typ: Type) {
-        self.trim_to_end_of_path();
-        self.term_buffer.set_json_path_end();
-        self.term_buffer.append_bytes(&[typ.to_code()]);
-    }
-
-    // TODO: Remove this function and use JsonPathWriter instead.
-    pub fn push_path_segment(&mut self, segment: &str) {
-        // the path stack should never be empty.
-        self.trim_to_end_of_path();
-        if self.path_stack.len() > 1 {
-            self.term_buffer.set_json_path_separator();
-        }
-        let appended_segment = self.term_buffer.append_bytes(segment.as_bytes());
-        if self.expand_dots_enabled {
-            // We need to replace `.` by JSON_PATH_SEGMENT_SEP.
-            replace_in_place(b'.', JSON_PATH_SEGMENT_SEP, appended_segment);
-        }
-        self.term_buffer.add_json_path_separator();
-        self.path_stack.push(self.term_buffer.len_bytes());
-    }
-
-    pub fn pop_path_segment(&mut self) {
-        self.path_stack.pop();
-        assert!(!self.path_stack.is_empty());
-        self.trim_to_end_of_path();
-    }
-
-    /// Returns the json path of the term being currently built.
-    #[cfg(test)]
-    pub(crate) fn path(&self) -> &[u8] {
-        let end_of_path = self.path_stack.last().cloned().unwrap_or(1);
-        &self.term().serialized_value_bytes()[..end_of_path - 1]
-    }
-
-    pub(crate) fn set_fast_value<T: FastValue>(&mut self, val: T) {
-        self.close_path_and_set_type(T::to_type());
-        let value = if T::to_type() == Type::Date {
-            DateTime::from_u64(val.to_u64())
-                .truncate(DATE_TIME_PRECISION_INDEXED)
-                .to_u64()
-        } else {
-            val.to_u64()
-        };
-        self.term_buffer
-            .append_bytes(value.to_be_bytes().as_slice());
-    }
-
-    pub fn set_str(&mut self, text: &str) {
-        self.close_path_and_set_type(Type::Str);
-        self.term_buffer.append_bytes(text.as_bytes());
-    }
-
-    pub fn term(&self) -> &Term {
-        self.term_buffer
-    }
-}
+pub fn term_from_json_paths<'a>(
+    json_field: Field,
+    paths: impl Iterator<Item = &'a str>,
+    expand_dots_enabled: bool,
+) -> Term {
+    let mut json_path = JsonPathWriter::with_expand_dots(expand_dots_enabled);
+    for path in paths {
+        json_path.push(path);
+    }
+    json_path.set_end();
+    let mut term = Term::with_type_and_field(Type::Json, json_field);
+    term.append_bytes(json_path.as_str().as_bytes());
+    term
+}
#[cfg(test)]
mod tests {
-    use super::{split_json_path, JsonTermWriter};
-    use crate::schema::{Field, Type};
-    use crate::Term;
+    use super::split_json_path;
+    use crate::json_utils::term_from_json_paths;
+    use crate::schema::Field;

    #[test]
    fn test_json_writer() {
        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("attributes");
-        json_writer.push_path_segment("color");
-        json_writer.set_str("red");
+        let mut term = term_from_json_paths(field, ["attributes", "color"].into_iter(), false);
+        term.append_type_and_str("red");
        assert_eq!(
-            format!("{:?}", json_writer.term()),
+            format!("{:?}", term),
            "Term(field=1, type=Json, path=attributes.color, type=Str, \"red\")"
        );
-        json_writer.set_str("blue");
-        assert_eq!(
-            format!("{:?}", json_writer.term()),
-            "Term(field=1, type=Json, path=attributes.color, type=Str, \"blue\")"
-        );
-        json_writer.pop_path_segment();
-        json_writer.push_path_segment("dimensions");
-        json_writer.push_path_segment("width");
-        json_writer.set_fast_value(400i64);
+        let mut term = term_from_json_paths(
+            field,
+            ["attributes", "dimensions", "width"].into_iter(),
+            false,
+        );
+        term.append_type_and_fast_value(400i64);
        assert_eq!(
-            format!("{:?}", json_writer.term()),
+            format!("{:?}", term),
            "Term(field=1, type=Json, path=attributes.dimensions.width, type=I64, 400)"
        );
-        json_writer.pop_path_segment();
-        json_writer.push_path_segment("height");
-        json_writer.set_fast_value(300i64);
-        assert_eq!(
-            format!("{:?}", json_writer.term()),
-            "Term(field=1, type=Json, path=attributes.dimensions.height, type=I64, 300)"
-        );
    }

    #[test]
    fn test_string_term() {
        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("color");
-        json_writer.set_str("red");
-        assert_eq!(
-            json_writer.term().serialized_term(),
-            b"\x00\x00\x00\x01jcolor\x00sred"
-        )
+        let mut term = term_from_json_paths(field, ["color"].into_iter(), false);
+        term.append_type_and_str("red");
+        assert_eq!(term.serialized_term(), b"\x00\x00\x00\x01jcolor\x00sred")
    }

    #[test]
    fn test_i64_term() {
        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("color");
-        json_writer.set_fast_value(-4i64);
+        let mut term = term_from_json_paths(field, ["color"].into_iter(), false);
+        term.append_type_and_fast_value(-4i64);
        assert_eq!(
-            json_writer.term().serialized_term(),
-            b"\x00\x00\x00\x01jcolor\x00i\x7f\xff\xff\xff\xff\xff\xff\xfc"
+            term.value().as_serialized(),
+            b"jcolor\x00i\x7f\xff\xff\xff\xff\xff\xff\xfc"
        )
    }

    #[test]
    fn test_u64_term() {
        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("color");
-        json_writer.set_fast_value(4u64);
+        let mut term = term_from_json_paths(field, ["color"].into_iter(), false);
+        term.append_type_and_fast_value(4u64);
        assert_eq!(
-            json_writer.term().serialized_term(),
-            b"\x00\x00\x00\x01jcolor\x00u\x00\x00\x00\x00\x00\x00\x00\x04"
+            term.value().as_serialized(),
+            b"jcolor\x00u\x00\x00\x00\x00\x00\x00\x00\x04"
        )
    }

    #[test]
    fn test_f64_term() {
        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("color");
-        json_writer.set_fast_value(4.0f64);
+        let mut term = term_from_json_paths(field, ["color"].into_iter(), false);
+        term.append_type_and_fast_value(4.0f64);
        assert_eq!(
-            json_writer.term().serialized_term(),
-            b"\x00\x00\x00\x01jcolor\x00f\xc0\x10\x00\x00\x00\x00\x00\x00"
+            term.value().as_serialized(),
+            b"jcolor\x00f\xc0\x10\x00\x00\x00\x00\x00\x00"
        )
    }

    #[test]
    fn test_bool_term() {
        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("color");
-        json_writer.set_fast_value(true);
+        let mut term = term_from_json_paths(field, ["color"].into_iter(), false);
+        term.append_type_and_fast_value(true);
        assert_eq!(
-            json_writer.term().serialized_term(),
-            b"\x00\x00\x00\x01jcolor\x00o\x00\x00\x00\x00\x00\x00\x00\x01"
+            term.value().as_serialized(),
+            b"jcolor\x00o\x00\x00\x00\x00\x00\x00\x00\x01"
        )
    }

-    #[test]
-    fn test_push_after_set_path_segment() {
-        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("attribute");
-        json_writer.set_str("something");
-        json_writer.push_path_segment("color");
-        json_writer.set_str("red");
-        assert_eq!(
-            json_writer.term().serialized_term(),
-            b"\x00\x00\x00\x01jattribute\x01color\x00sred"
-        )
-    }
-
-    #[test]
-    fn test_pop_segment() {
-        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("color");
-        json_writer.push_path_segment("hue");
-        json_writer.pop_path_segment();
-        json_writer.set_str("red");
-        assert_eq!(
-            json_writer.term().serialized_term(),
-            b"\x00\x00\x00\x01jcolor\x00sred"
-        )
-    }
-
-    #[test]
-    fn test_json_writer_path() {
-        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("color");
-        assert_eq!(json_writer.path(), b"color");
-        json_writer.push_path_segment("hue");
-        assert_eq!(json_writer.path(), b"color\x01hue");
-        json_writer.set_str("pink");
-        assert_eq!(json_writer.path(), b"color\x01hue");
-    }
-
-    #[test]
-    fn test_json_path_expand_dots_disabled() {
-        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, false);
-        json_writer.push_path_segment("color.hue");
-        assert_eq!(json_writer.path(), b"color.hue");
-    }
-
-    #[test]
-    fn test_json_path_expand_dots_enabled() {
-        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, true);
-        json_writer.push_path_segment("color.hue");
-        assert_eq!(json_writer.path(), b"color\x01hue");
-    }
-
-    #[test]
-    fn test_json_path_expand_dots_enabled_pop_segment() {
-        let field = Field::from_field_id(1);
-        let mut term = Term::with_type_and_field(Type::Json, field);
-        let mut json_writer = JsonTermWriter::wrap(&mut term, true);
-        json_writer.push_path_segment("hello");
-        assert_eq!(json_writer.path(), b"hello");
-        json_writer.push_path_segment("color.hue");
-        assert_eq!(json_writer.path(), b"hello\x01color\x01hue");
-        json_writer.pop_path_segment();
-        assert_eq!(json_writer.path(), b"hello");
-    }
-
    #[test]
    fn test_split_json_path_simple() {
        let json_path = split_json_path("titi.toto");


@@ -1,9 +1,9 @@
use crate::collector::Count;
use crate::directory::{RamDirectory, WatchCallback};
use crate::indexer::{LogMergePolicy, NoMergePolicy};
-use crate::json_utils::JsonTermWriter;
+use crate::json_utils::term_from_json_paths;
use crate::query::TermQuery;
-use crate::schema::{Field, IndexRecordOption, Schema, Type, INDEXED, STRING, TEXT};
+use crate::schema::{Field, IndexRecordOption, Schema, INDEXED, STRING, TEXT};
use crate::tokenizer::TokenizerManager;
use crate::{
    Directory, DocSet, Index, IndexBuilder, IndexReader, IndexSettings, IndexWriter, Postings,
@@ -416,16 +416,12 @@ fn test_non_text_json_term_freq() {
    let searcher = reader.searcher();
    let segment_reader = searcher.segment_reader(0u32);
    let inv_idx = segment_reader.inverted_index(field).unwrap();
-    let mut term = Term::with_type_and_field(Type::Json, field);
-    let mut json_term_writer = JsonTermWriter::wrap(&mut term, false);
-    json_term_writer.push_path_segment("tenant_id");
-    json_term_writer.close_path_and_set_type(Type::U64);
-    json_term_writer.set_fast_value(75u64);
+    let mut term = term_from_json_paths(field, ["tenant_id"].iter().cloned(), false);
+    term.append_type_and_fast_value(75u64);
    let postings = inv_idx
-        .read_postings(
-            json_term_writer.term(),
-            IndexRecordOption::WithFreqsAndPositions,
-        )
+        .read_postings(&term, IndexRecordOption::WithFreqsAndPositions)
        .unwrap()
        .unwrap();
    assert_eq!(postings.doc(), 0);
@@ -454,16 +450,12 @@ fn test_non_text_json_term_freq_bitpacked() {
    let searcher = reader.searcher();
    let segment_reader = searcher.segment_reader(0u32);
    let inv_idx = segment_reader.inverted_index(field).unwrap();
-    let mut term = Term::with_type_and_field(Type::Json, field);
-    let mut json_term_writer = JsonTermWriter::wrap(&mut term, false);
-    json_term_writer.push_path_segment("tenant_id");
-    json_term_writer.close_path_and_set_type(Type::U64);
-    json_term_writer.set_fast_value(75u64);
+    let mut term = term_from_json_paths(field, ["tenant_id"].iter().cloned(), false);
+    term.append_type_and_fast_value(75u64);
    let mut postings = inv_idx
-        .read_postings(
-            json_term_writer.term(),
-            IndexRecordOption::WithFreqsAndPositions,
-        )
+        .read_postings(&term, IndexRecordOption::WithFreqsAndPositions)
        .unwrap()
        .unwrap();
    assert_eq!(postings.doc(), 0);
assert_eq!(postings.doc(), 0); assert_eq!(postings.doc(), 0);

View File

@@ -1,6 +1,5 @@
use std::collections::HashMap; use std::collections::HashMap;
use std::io::{self, Read, Write}; use std::io::{self, Read, Write};
use std::iter::ExactSizeIterator;
use std::ops::Range; use std::ops::Range;
use common::{BinarySerializable, CountingWriter, HasLen, VInt}; use common::{BinarySerializable, CountingWriter, HasLen, VInt};


@@ -1,5 +1,4 @@
use std::io::Write;
-use std::marker::{Send, Sync};
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Duration;
@@ -40,6 +39,7 @@ impl RetryPolicy {
/// The `DirectoryLock` is an object that represents a file lock.
///
/// It is associated with a lock file, that gets deleted on `Drop.`
+#[allow(dead_code)]
pub struct DirectoryLock(Box<dyn Send + Sync + 'static>);

struct DirectoryLockGuard {


@@ -1,6 +1,6 @@
use std::io::Write;
use std::mem;
-use std::path::{Path, PathBuf};
+use std::path::Path;
use std::sync::atomic::Ordering::SeqCst;
use std::sync::atomic::{AtomicBool, AtomicUsize};
use std::sync::Arc;


@@ -32,6 +32,7 @@ pub struct WatchCallbackList {
/// file change is detected.
#[must_use = "This `WatchHandle` controls the lifetime of the watch and should therefore be used."]
#[derive(Clone)]
+#[allow(dead_code)]
pub struct WatchHandle(Arc<WatchCallback>);

impl WatchHandle {


@@ -79,7 +79,7 @@ mod tests {
    use std::ops::{Range, RangeInclusive};
    use std::path::Path;

-    use columnar::{Column, MonotonicallyMappableToU64, StrColumn};
+    use columnar::StrColumn;
    use common::{ByteCount, HasLen, TerminatingWrite};
    use once_cell::sync::Lazy;
    use rand::prelude::SliceRandom;


@@ -1,3 +1,5 @@
+#![allow(deprecated)] // Remove with index sorting
+
use std::collections::HashSet;

use rand::{thread_rng, Rng};


@@ -20,7 +20,7 @@ use crate::indexer::segment_updater::save_metas;
use crate::indexer::{IndexWriter, SingleSegmentIndexWriter};
use crate::reader::{IndexReader, IndexReaderBuilder};
use crate::schema::document::Document;
-use crate::schema::{Field, FieldType, Schema};
+use crate::schema::{Field, FieldType, Schema, Type};
use crate::tokenizer::{TextAnalyzer, TokenizerManager};
use crate::SegmentReader;
@@ -83,7 +83,7 @@ fn save_new_metas(
///
/// ```
/// use tantivy::schema::*;
-/// use tantivy::{Index, IndexSettings, IndexSortByField, Order};
+/// use tantivy::{Index, IndexSettings};
///
/// let mut schema_builder = Schema::builder();
/// let id_field = schema_builder.add_text_field("id", STRING);
@@ -96,10 +96,7 @@ fn save_new_metas(
///
/// let schema = schema_builder.build();
/// let settings = IndexSettings{
-///     sort_by_field: Some(IndexSortByField{
-///         field: "number".to_string(),
-///         order: Order::Asc
-///     }),
+///     docstore_blocksize: 100_000,
///     ..Default::default()
/// };
/// let index = Index::builder().schema(schema).settings(settings).create_in_ram();
@@ -251,6 +248,15 @@ impl IndexBuilder {
                    sort_by_field.field
                )));
            }
+            let supported_field_types = [Type::I64, Type::U64, Type::F64, Type::Date];
+            let field_type = entry.field_type().value_type();
+            if !supported_field_types.contains(&field_type) {
+                return Err(TantivyError::InvalidArgument(format!(
+                    "Unsupported field type in sort_by_field: {:?}. Supported field types: \
+                     {:?} ",
+                    field_type, supported_field_types,
+                )));
+            }
        }
        Ok(())
    } else {


@@ -288,6 +288,10 @@ impl Default for IndexSettings {
/// Presorting documents can greatly improve performance
/// in some scenarios, by applying top n
/// optimizations.
+#[deprecated(
+    since = "0.22.0",
+    note = "We plan to remove index sorting in `0.23`. If you need index sorting, please comment on the related issue https://github.com/quickwit-oss/tantivy/issues/2352 and explain your use case."
+)]
#[derive(Clone, Debug, Serialize, Deserialize, Eq, PartialEq)]
pub struct IndexSortByField {
    /// The field to sort the documents by


@@ -1,12 +1,13 @@
use std::io;

+use common::json_path_writer::JSON_END_OF_PATH;
use common::BinarySerializable;
use fnv::FnvHashSet;

use crate::directory::FileSlice;
use crate::positions::PositionReader;
use crate::postings::{BlockSegmentPostings, SegmentPostings, TermInfo};
-use crate::schema::{IndexRecordOption, Term, Type, JSON_END_OF_PATH};
+use crate::schema::{IndexRecordOption, Term, Type};
use crate::termdict::TermDictionary;

/// The inverted index reader is in charge of accessing


@@ -1,4 +1,4 @@
-use std::cmp::{Ord, Ordering};
+use std::cmp::Ordering;
use std::error::Error;
use std::fmt;
use std::str::FromStr;


@@ -406,7 +406,7 @@ impl SegmentReader {
    }

    /// Returns an iterator that will iterate over the alive document ids
-    pub fn doc_ids_alive(&self) -> Box<dyn Iterator<Item = DocId> + '_> {
+    pub fn doc_ids_alive(&self) -> Box<dyn Iterator<Item = DocId> + Send + '_> {
        if let Some(alive_bitset) = &self.alive_bitset_opt {
            Box::new(alive_bitset.iter_alive())
        } else {
@@ -516,8 +516,8 @@ impl fmt::Debug for SegmentReader {
mod test {
    use super::*;
    use crate::index::Index;
-    use crate::schema::{Schema, SchemaBuilder, Term, STORED, TEXT};
-    use crate::{DocId, IndexWriter};
+    use crate::schema::{SchemaBuilder, Term, STORED, TEXT};
+    use crate::IndexWriter;

    #[test]
    fn test_merge_field_meta_data_same() {


@@ -159,7 +159,7 @@ mod tests_indexsorting {
    use crate::indexer::NoMergePolicy;
    use crate::query::QueryParser;
    use crate::schema::*;
-    use crate::{DocAddress, Index, IndexSettings, IndexSortByField, Order};
+    use crate::{DocAddress, Index, IndexBuilder, IndexSettings, IndexSortByField, Order};

    fn create_test_index(
        index_settings: Option<IndexSettings>,
@@ -557,4 +557,28 @@ mod tests_indexsorting {
            &[2000, 8000, 3000]
        );
    }
+
+    #[test]
+    fn test_text_sort() -> crate::Result<()> {
+        let mut schema_builder = SchemaBuilder::new();
+        schema_builder.add_text_field("id", STRING | FAST | STORED);
+        schema_builder.add_text_field("name", TEXT | STORED);
+        let resp = IndexBuilder::new()
+            .schema(schema_builder.build())
+            .settings(IndexSettings {
+                sort_by_field: Some(IndexSortByField {
+                    field: "id".to_string(),
+                    order: Order::Asc,
+                }),
+                ..Default::default()
+            })
+            .create_in_ram();
+        assert!(resp
+            .unwrap_err()
+            .to_string()
+            .contains("Unsupported field type"));
+        Ok(())
+    }
}


@@ -22,6 +22,7 @@ where
    }
}

+#[allow(dead_code)]
pub trait FlatMapWithBufferIter: Iterator {
    /// Function similar to `flat_map`, but allows reusing a shared `Vec`.
    fn flat_map_with_buffer<F, T>(self, fill_buffer: F) -> FlatMapWithBuffer<T, F, Self>


@@ -145,7 +145,6 @@ mod tests {
    use super::*;
    use crate::index::SegmentMetaInventory;
-    use crate::indexer::merge_policy::MergePolicy;
    use crate::schema::INDEXED;
    use crate::{schema, SegmentId};


@@ -39,7 +39,6 @@ impl MergePolicy for NoMergePolicy {
pub mod tests {
    use super::*;
-    use crate::index::{SegmentId, SegmentMeta};

    /// `MergePolicy` useful for test purposes.
    ///


@@ -576,7 +576,7 @@ impl IndexMerger {
        //
        // Overall the reliable way to know if we have actual frequencies loaded or not
        // is to check whether the actual decoded array is empty or not.
-        if has_term_freq != !postings.block_cursor.freqs().is_empty() {
+        if has_term_freq == postings.block_cursor.freqs().is_empty() {
            return Err(DataCorruption::comment_only(
                "Term freqs are inconsistent across segments",
            )
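Aside (not part of the diff): the rewritten guard is equivalent to the one it replaces, since for booleans `a != !b` holds exactly when `a == b`; the change only drops the double negation.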


@@ -144,6 +144,123 @@ mod tests_mmap {
            assert_eq!(num_docs, 256);
        }
    }
#[test]
fn test_json_field_null_byte() {
// Test when field name contains a zero byte, which has special meaning in tantivy.
// As a workaround, we convert the zero byte to the ASCII character '0'.
// https://github.com/quickwit-oss/tantivy/issues/2340
// https://github.com/quickwit-oss/tantivy/issues/2193
let field_name_in = "\u{0000}";
let field_name_out = "0";
test_json_field_name(field_name_in, field_name_out);
}
#[test]
fn test_json_field_1byte() {
// Test when field name contains a '1' byte, which has special meaning in tantivy.
// The 1 byte can be addressed as '1' byte or '.'.
let field_name_in = "\u{0001}";
let field_name_out = "\u{0001}";
test_json_field_name(field_name_in, field_name_out);
// Test when field name contains a '1' byte, which has special meaning in tantivy.
let field_name_in = "\u{0001}";
let field_name_out = ".";
test_json_field_name(field_name_in, field_name_out);
}
#[test]
fn test_json_field_dot() {
// Test when field name contains a '.'
let field_name_in = ".";
let field_name_out = ".";
test_json_field_name(field_name_in, field_name_out);
}
fn test_json_field_name(field_name_in: &str, field_name_out: &str) {
let mut schema_builder = Schema::builder();
let options = JsonObjectOptions::from(TEXT | FAST).set_expand_dots_enabled();
let field = schema_builder.add_json_field("json", options);
let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_for_tests().unwrap();
index_writer
.add_document(doc!(field=>json!({format!("{field_name_in}"): "test1"})))
.unwrap();
index_writer
.add_document(doc!(field=>json!({format!("a{field_name_in}"): "test2"})))
.unwrap();
index_writer
.add_document(doc!(field=>json!({format!("a{field_name_in}a"): "test3"})))
.unwrap();
index_writer
.add_document(
doc!(field=>json!({format!("a{field_name_in}a{field_name_in}"): "test4"})),
)
.unwrap();
index_writer
.add_document(
doc!(field=>json!({format!("a{field_name_in}.ab{field_name_in}"): "test5"})),
)
.unwrap();
index_writer
.add_document(
doc!(field=>json!({format!("a{field_name_in}"): json!({format!("a{field_name_in}"): "test6"}) })),
)
.unwrap();
index_writer
.add_document(doc!(field=>json!({format!("{field_name_in}a" ): "test7"})))
.unwrap();
index_writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let parse_query = QueryParser::for_index(&index, Vec::new());
let test_query = |query_str: &str| {
let query = parse_query.parse_query(query_str).unwrap();
let num_docs = searcher.search(&query, &Count).unwrap();
assert_eq!(num_docs, 1, "{}", query_str);
};
test_query(format!("json.{field_name_out}:test1").as_str());
test_query(format!("json.a{field_name_out}:test2").as_str());
test_query(format!("json.a{field_name_out}a:test3").as_str());
test_query(format!("json.a{field_name_out}a{field_name_out}:test4").as_str());
test_query(format!("json.a{field_name_out}.ab{field_name_out}:test5").as_str());
test_query(format!("json.a{field_name_out}.a{field_name_out}:test6").as_str());
test_query(format!("json.{field_name_out}a:test7").as_str());
let test_agg = |field_name: &str, expected: &str| {
let agg_req_str = json!(
{
"termagg": {
"terms": {
"field": field_name,
}
}
});
let agg_req: Aggregations = serde_json::from_value(agg_req_str).unwrap();
let collector = AggregationCollector::from_aggs(agg_req, Default::default());
let agg_res: AggregationResults = searcher.search(&AllQuery, &collector).unwrap();
let res = serde_json::to_value(agg_res).unwrap();
assert_eq!(res["termagg"]["buckets"][0]["doc_count"], 1);
assert_eq!(res["termagg"]["buckets"][0]["key"], expected);
};
test_agg(format!("json.{field_name_out}").as_str(), "test1");
test_agg(format!("json.a{field_name_out}").as_str(), "test2");
test_agg(format!("json.a{field_name_out}a").as_str(), "test3");
test_agg(
format!("json.a{field_name_out}a{field_name_out}").as_str(),
"test4",
);
test_agg(
format!("json.a{field_name_out}.ab{field_name_out}").as_str(),
"test5",
);
test_agg(
format!("json.a{field_name_out}.a{field_name_out}").as_str(),
"test6",
);
test_agg(format!("json.{field_name_out}a").as_str(), "test7");
}
    #[test]
    fn test_json_field_expand_dots_enabled_dot_escape_not_required() {


@@ -1,4 +1,3 @@
-use columnar::MonotonicallyMappableToU64;
use common::JsonPathWriter;
use itertools::Itertools;
use tokenizer_api::BoxTokenStream;
@@ -15,7 +14,8 @@ use crate::postings::{
    PerFieldPostingsWriter, PostingsWriter,
};
use crate::schema::document::{Document, ReferenceValue, Value};
-use crate::schema::{FieldEntry, FieldType, Schema, Term, DATE_TIME_PRECISION_INDEXED};
+use crate::schema::indexing_term::IndexingTerm;
+use crate::schema::{FieldEntry, FieldType, Schema};
use crate::store::{StoreReader, StoreWriter};
use crate::tokenizer::{FacetTokenizer, PreTokenizedStream, TextAnalyzer, Tokenizer};
use crate::{DocId, Opstamp, SegmentComponent, TantivyError};
@@ -70,7 +70,7 @@ pub struct SegmentWriter {
    pub(crate) json_path_writer: JsonPathWriter,
    pub(crate) doc_opstamps: Vec<Opstamp>,
    per_field_text_analyzers: Vec<TextAnalyzer>,
-    term_buffer: Term,
+    term_buffer: IndexingTerm,
    schema: Schema,
}
@@ -126,7 +126,7 @@ impl SegmentWriter {
            )?,
            doc_opstamps: Vec::with_capacity(1_000),
            per_field_text_analyzers,
-            term_buffer: Term::with_capacity(16),
+            term_buffer: IndexingTerm::new(),
            schema,
        })
    }
@@ -195,7 +195,7 @@ impl SegmentWriter {
            let (term_buffer, ctx) = (&mut self.term_buffer, &mut self.ctx);
            let postings_writer: &mut dyn PostingsWriter =
                self.per_field_postings_writers.get_for_field_mut(field);
-            term_buffer.clear_with_field_and_type(field_entry.field_type().value_type(), field);
+            term_buffer.clear_with_field(field);
            match field_entry.field_type() {
                FieldType::Facet(_) => {
@@ -271,8 +271,7 @@ impl SegmentWriter {
                        num_vals += 1;
                        let date_val = value.as_datetime().ok_or_else(make_schema_error)?;
-                        term_buffer
-                            .set_u64(date_val.truncate(DATE_TIME_PRECISION_INDEXED).to_u64());
+                        term_buffer.set_date(date_val);
                        postings_writer.subscribe(doc_id, 0u32, term_buffer, ctx);
                    }
                    if field_entry.has_fieldnorms() {
@@ -332,7 +331,7 @@ impl SegmentWriter {
                        num_vals += 1;
                        let bytes = value.as_bytes().ok_or_else(make_schema_error)?;
-                        term_buffer.set_bytes(bytes);
+                        term_buffer.set_value_bytes(bytes);
                        postings_writer.subscribe(doc_id, 0u32, term_buffer, ctx);
                    }
                    if field_entry.has_fieldnorms() {
@@ -496,14 +495,14 @@ mod tests {
    use tempfile::TempDir;
    use crate::collector::{Count, TopDocs};
-    use crate::core::json_utils::JsonTermWriter;
    use crate::directory::RamDirectory;
+    use crate::fastfield::FastValue;
+    use crate::json_utils::term_from_json_paths;
    use crate::postings::TermInfo;
    use crate::query::{PhraseQuery, QueryParser};
    use crate::schema::document::Value;
    use crate::schema::{
-        Document, IndexRecordOption, Schema, TextFieldIndexing, TextOptions, Type, STORED, STRING,
-        TEXT,
+        Document, IndexRecordOption, Schema, TextFieldIndexing, TextOptions, STORED, STRING, TEXT,
    };
    use crate::store::{Compressor, StoreReader, StoreWriter};
    use crate::time::format_description::well_known::Rfc3339;
@@ -645,115 +644,117 @@ mod tests {
        let inv_idx = segment_reader.inverted_index(json_field).unwrap();
        let term_dict = inv_idx.terms();
-        let mut term = Term::with_type_and_field(Type::Json, json_field);
        let mut term_stream = term_dict.stream().unwrap();
-        let mut json_term_writer = JsonTermWriter::wrap(&mut term, false);
-        json_term_writer.push_path_segment("bool");
-        json_term_writer.set_fast_value(true);
+        let term_from_path = |paths: &[&str]| -> Term {
+            term_from_json_paths(json_field, paths.iter().cloned(), false)
+        };
+        fn set_fast_val<T: FastValue>(val: T, mut term: Term) -> Term {
+            term.append_type_and_fast_value(val);
+            term
+        }
+        fn set_str(val: &str, mut term: Term) -> Term {
+            term.append_type_and_str(val);
+            term
+        }
+        let term = term_from_path(&["bool"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_fast_val(true, term).serialized_value_bytes()
        );
-        json_term_writer.pop_path_segment();
-        json_term_writer.push_path_segment("complexobject");
-        json_term_writer.push_path_segment("field.with.dot");
-        json_term_writer.set_fast_value(1i64);
+        let term = term_from_path(&["complexobject", "field.with.dot"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_fast_val(1i64, term).serialized_value_bytes()
        );
-        json_term_writer.pop_path_segment();
-        json_term_writer.pop_path_segment();
-        json_term_writer.push_path_segment("date");
-        json_term_writer.set_fast_value(DateTime::from_utc(
-            OffsetDateTime::parse("1985-04-12T23:20:50.52Z", &Rfc3339).unwrap(),
-        ));
+        // Date
+        let term = term_from_path(&["date"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_fast_val(
+                DateTime::from_utc(
+                    OffsetDateTime::parse("1985-04-12T23:20:50.52Z", &Rfc3339).unwrap(),
+                ),
+                term
+            )
+            .serialized_value_bytes()
        );
-        json_term_writer.pop_path_segment();
-        json_term_writer.push_path_segment("float");
-        json_term_writer.set_fast_value(-0.2f64);
+        // Float
+        let term = term_from_path(&["float"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_fast_val(-0.2f64, term).serialized_value_bytes()
        );
-        json_term_writer.pop_path_segment();
-        json_term_writer.push_path_segment("my_arr");
-        json_term_writer.set_fast_value(2i64);
+        // Number In Array
+        let term = term_from_path(&["my_arr"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_fast_val(2i64, term).serialized_value_bytes()
        );
-        json_term_writer.set_fast_value(3i64);
+        let term = term_from_path(&["my_arr"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_fast_val(3i64, term).serialized_value_bytes()
        );
-        json_term_writer.set_fast_value(4i64);
+        let term = term_from_path(&["my_arr"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_fast_val(4i64, term).serialized_value_bytes()
        );
-        json_term_writer.push_path_segment("my_key");
-        json_term_writer.set_str("tokens");
+        // El in Array
+        let term = term_from_path(&["my_arr", "my_key"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_str("tokens", term).serialized_value_bytes()
        );
-        json_term_writer.set_str("two");
+        let term = term_from_path(&["my_arr", "my_key"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_str("two", term).serialized_value_bytes()
        );
-        json_term_writer.pop_path_segment();
-        json_term_writer.pop_path_segment();
-        json_term_writer.push_path_segment("signed");
-        json_term_writer.set_fast_value(-2i64);
+        // Signed
+        let term = term_from_path(&["signed"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_fast_val(-2i64, term).serialized_value_bytes()
        );
-        json_term_writer.pop_path_segment();
-        json_term_writer.push_path_segment("toto");
-        json_term_writer.set_str("titi");
+        let term = term_from_path(&["toto"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_str("titi", term).serialized_value_bytes()
        );
-        json_term_writer.pop_path_segment();
-        json_term_writer.push_path_segment("unsigned");
-        json_term_writer.set_fast_value(1i64);
+        // Unsigned
+        let term = term_from_path(&["unsigned"]);
        assert!(term_stream.advance());
        assert_eq!(
            term_stream.key(),
-            json_term_writer.term().serialized_value_bytes()
+            set_fast_val(1i64, term).serialized_value_bytes()
        );
        assert!(!term_stream.advance());
    }
@@ -774,14 +775,9 @@ mod tests {
        let searcher = reader.searcher();
        let segment_reader = searcher.segment_reader(0u32);
        let inv_index = segment_reader.inverted_index(json_field).unwrap();
-        let mut term = Term::with_type_and_field(Type::Json, json_field);
-        let mut json_term_writer = JsonTermWriter::wrap(&mut term, false);
-        json_term_writer.push_path_segment("mykey");
-        json_term_writer.set_str("token");
-        let term_info = inv_index
-            .get_term_info(json_term_writer.term())
-            .unwrap()
-            .unwrap();
+        let mut term = term_from_json_paths(json_field, ["mykey"].into_iter(), false);
+        term.append_type_and_str("token");
+        let term_info = inv_index.get_term_info(&term).unwrap().unwrap();
        assert_eq!(
            term_info,
            TermInfo {
@@ -818,14 +814,9 @@ mod tests {
        let searcher = reader.searcher();
        let segment_reader = searcher.segment_reader(0u32);
        let inv_index = segment_reader.inverted_index(json_field).unwrap();
-        let mut term = Term::with_type_and_field(Type::Json, json_field);
-        let mut json_term_writer = JsonTermWriter::wrap(&mut term, false);
-        json_term_writer.push_path_segment("mykey");
-        json_term_writer.set_str("two tokens");
-        let term_info = inv_index
-            .get_term_info(json_term_writer.term())
-            .unwrap()
-            .unwrap();
+        let mut term = term_from_json_paths(json_field, ["mykey"].into_iter(), false);
+        term.append_type_and_str("two tokens");
+        let term_info = inv_index.get_term_info(&term).unwrap().unwrap();
        assert_eq!(
            term_info,
            TermInfo {
@@ -863,16 +854,18 @@ mod tests {
        writer.commit().unwrap();
        let reader = index.reader().unwrap();
        let searcher = reader.searcher();
-        let mut term = Term::with_type_and_field(Type::Json, json_field);
-        let mut json_term_writer = JsonTermWriter::wrap(&mut term, false);
-        json_term_writer.push_path_segment("mykey");
-        json_term_writer.push_path_segment("field");
-        json_term_writer.set_str("hello");
-        let hello_term = json_term_writer.term().clone();
-        json_term_writer.set_str("nothello");
-        let nothello_term = json_term_writer.term().clone();
-        json_term_writer.set_str("happy");
-        let happy_term = json_term_writer.term().clone();
+        let term = term_from_json_paths(json_field, ["mykey", "field"].into_iter(), false);
+        let mut hello_term = term.clone();
+        hello_term.append_type_and_str("hello");
+        let mut nothello_term = term.clone();
+        nothello_term.append_type_and_str("nothello");
+        let mut happy_term = term.clone();
+        happy_term.append_type_and_str("happy");
        let phrase_query = PhraseQuery::new(vec![hello_term, happy_term.clone()]);
        assert_eq!(searcher.search(&phrase_query, &Count).unwrap(), 1);
        let phrase_query = PhraseQuery::new(vec![nothello_term, happy_term]);
@@ -178,6 +178,7 @@ pub use crate::future_result::FutureResult;
pub type Result<T> = std::result::Result<T, TantivyError>;
mod core;
+#[allow(deprecated)] // Remove with index sorting
pub mod indexer;
#[allow(unused_doc_comments)]
@@ -189,6 +190,7 @@ pub mod collector;
pub mod directory;
pub mod fastfield;
pub mod fieldnorm;
+#[allow(deprecated)] // Remove with index sorting
pub mod index;
pub mod positions;
pub mod postings;
@@ -223,6 +225,7 @@ pub use self::snippet::{Snippet, SnippetGenerator};
pub use crate::core::json_utils;
pub use crate::core::{Executor, Searcher, SearcherGeneration};
pub use crate::directory::Directory;
+#[allow(deprecated)] // Remove with index sorting
pub use crate::index::{
    Index, IndexBuilder, IndexMeta, IndexSettings, IndexSortByField, InvertedIndexReader, Order,
    Segment, SegmentComponent, SegmentId, SegmentMeta, SegmentReader,
@@ -234,8 +237,6 @@ pub use crate::index::{
pub use crate::indexer::PreparedCommit;
pub use crate::indexer::{IndexWriter, SingleSegmentIndexWriter};
pub use crate::postings::Postings;
-#[allow(deprecated)]
-pub use crate::schema::DatePrecision;
pub use crate::schema::{DateOptions, DateTimePrecision, Document, TantivyDocument, Term};
/// Index format version.
@@ -254,7 +255,7 @@ pub struct Version {
impl fmt::Debug for Version {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        write!(f, "{}", self.to_string())
+        fmt::Display::fmt(self, f)
    }
}
@@ -265,9 +266,10 @@ static VERSION: Lazy<Version> = Lazy::new(|| Version {
    index_format_version: INDEX_FORMAT_VERSION,
});
-impl ToString for Version {
-    fn to_string(&self) -> String {
-        format!(
+impl fmt::Display for Version {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        write!(
+            f,
            "tantivy v{}.{}.{}, index_format v{}",
            self.major, self.minor, self.patch, self.index_format_version
        )
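Editor's note on the hunk above: implementing `fmt::Display` instead of `ToString` is the idiomatic direction, since the standard library's blanket `impl<T: fmt::Display> ToString for T` then provides `to_string()` for free, and `Debug` can delegate without an intermediate allocation. A generic sketch of the pattern (the type here is illustrative, not tantivy's):

// Editor's sketch: prefer implementing Display; ToString comes for free.
use std::fmt;

struct SemVer {
    major: u32,
    minor: u32,
    patch: u32,
}

impl fmt::Display for SemVer {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "v{}.{}.{}", self.major, self.minor, self.patch)
    }
}

fn main() {
    let v = SemVer { major: 0, minor: 22, patch: 0 };
    println!("{v}");                // uses Display directly
    let _s: String = v.to_string(); // provided by the blanket impl
}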
@@ -14,7 +14,6 @@ pub fn compressed_block_size(num_bits: u8) -> usize {
pub struct BlockEncoder {
    bitpacker: BitPacker4x,
    pub output: [u8; COMPRESSED_BLOCK_MAX_SIZE],
-    pub output_len: usize,
}
impl Default for BlockEncoder {
@@ -28,7 +27,6 @@ impl BlockEncoder {
        BlockEncoder {
            bitpacker: BitPacker4x::new(),
            output: [0u8; COMPRESSED_BLOCK_MAX_SIZE],
-            output_len: 0,
        }
    }
@@ -1,5 +1,6 @@
use std::io;
+use common::json_path_writer::JSON_END_OF_PATH;
use stacker::Addr;
use crate::indexer::doc_id_mapping::DocIdMapping;
@@ -7,9 +8,10 @@ use crate::indexer::path_to_unordered_id::OrderedPathId;
use crate::postings::postings_writer::SpecializedPostingsWriter;
use crate::postings::recorder::{BufferLender, DocIdRecorder, Recorder};
use crate::postings::{FieldSerializer, IndexingContext, IndexingPosition, PostingsWriter};
-use crate::schema::{Field, Type, JSON_END_OF_PATH};
+use crate::schema::indexing_term::IndexingTerm;
+use crate::schema::{Field, Type, ValueBytes};
use crate::tokenizer::TokenStream;
-use crate::{DocId, Term};
+use crate::DocId;
/// The `JsonPostingsWriter` is odd in that it relies on a hidden contract:
///
@@ -33,7 +35,7 @@ impl<Rec: Recorder> PostingsWriter for JsonPostingsWriter<Rec> {
        &mut self,
        doc: crate::DocId,
        pos: u32,
-        term: &crate::Term,
+        term: &IndexingTerm,
        ctx: &mut IndexingContext,
    ) {
        self.non_str_posting_writer.subscribe(doc, pos, term, ctx);
@@ -43,7 +45,7 @@ impl<Rec: Recorder> PostingsWriter for JsonPostingsWriter<Rec> {
        &mut self,
        doc_id: DocId,
        token_stream: &mut dyn TokenStream,
-        term_buffer: &mut Term,
+        term_buffer: &mut IndexingTerm,
        ctx: &mut IndexingContext,
        indexing_position: &mut IndexingPosition,
    ) {
@@ -65,34 +67,40 @@ impl<Rec: Recorder> PostingsWriter for JsonPostingsWriter<Rec> {
        ctx: &IndexingContext,
        serializer: &mut FieldSerializer,
    ) -> io::Result<()> {
-        let mut term_buffer = Term::with_capacity(48);
+        let mut term_buffer = JsonTermSerializer(Vec::with_capacity(48));
        let mut buffer_lender = BufferLender::default();
+        let mut prev_term_id = u32::MAX;
+        let mut term_path_len = 0; // this will be set in the first iteration
        for (_field, path_id, term, addr) in term_addrs {
-            term_buffer.clear_with_field_and_type(Type::Json, Field::from_field_id(0));
-            term_buffer.append_bytes(ordered_id_to_path[path_id.path_id() as usize].as_bytes());
-            term_buffer.append_bytes(&[JSON_END_OF_PATH]);
+            if prev_term_id != path_id.path_id() {
+                term_buffer.clear();
+                term_buffer.append_path(ordered_id_to_path[path_id.path_id() as usize].as_bytes());
+                term_buffer.append_bytes(&[JSON_END_OF_PATH]);
+                term_path_len = term_buffer.len();
+                prev_term_id = path_id.path_id();
+            }
+            term_buffer.truncate(term_path_len);
            term_buffer.append_bytes(term);
-            if let Some(json_value) = term_buffer.value().as_json_value_bytes() {
-                let typ = json_value.typ();
-                if typ == Type::Str {
-                    SpecializedPostingsWriter::<Rec>::serialize_one_term(
-                        term_buffer.serialized_value_bytes(),
-                        *addr,
-                        doc_id_map,
-                        &mut buffer_lender,
-                        ctx,
-                        serializer,
-                    )?;
-                } else {
-                    SpecializedPostingsWriter::<DocIdRecorder>::serialize_one_term(
-                        term_buffer.serialized_value_bytes(),
-                        *addr,
-                        doc_id_map,
-                        &mut buffer_lender,
-                        ctx,
-                        serializer,
-                    )?;
-                }
-            }
+            let json_value = ValueBytes::wrap(term);
+            let typ = json_value.typ();
+            if typ == Type::Str {
+                SpecializedPostingsWriter::<Rec>::serialize_one_term(
+                    term_buffer.as_bytes(),
+                    *addr,
+                    doc_id_map,
+                    &mut buffer_lender,
+                    ctx,
+                    serializer,
+                )?;
+            } else {
+                SpecializedPostingsWriter::<DocIdRecorder>::serialize_one_term(
+                    term_buffer.as_bytes(),
+                    *addr,
+                    doc_id_map,
+                    &mut buffer_lender,
+                    ctx,
+                    serializer,
+                )?;
+            }
        }
        Ok(())
@@ -102,3 +110,40 @@ impl<Rec: Recorder> PostingsWriter for JsonPostingsWriter<Rec> {
        self.str_posting_writer.total_num_tokens() + self.non_str_posting_writer.total_num_tokens()
    }
}
+struct JsonTermSerializer(Vec<u8>);
+impl JsonTermSerializer {
+    #[inline]
+    pub fn append_path(&mut self, bytes: &[u8]) {
+        if bytes.contains(&0u8) {
+            self.0
+                .extend(bytes.iter().map(|&b| if b == 0 { b'0' } else { b }));
+        } else {
+            self.0.extend_from_slice(bytes);
+        }
+    }
+    /// Appends value bytes to the serializer buffer.
+    ///
+    /// This function returns the segment that has just been added.
+    #[inline]
+    pub fn append_bytes(&mut self, bytes: &[u8]) -> &mut [u8] {
+        let len_before = self.0.len();
+        self.0.extend_from_slice(bytes);
+        &mut self.0[len_before..]
+    }
+    fn clear(&mut self) {
+        self.0.clear();
+    }
+    fn truncate(&mut self, len: usize) {
+        self.0.truncate(len);
+    }
+    fn len(&self) -> usize {
+        self.0.len()
+    }
+    fn as_bytes(&self) -> &[u8] {
+        &self.0
+    }
+}
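Editor's note: to make the byte layout above concrete, the serializer emits the JSON path (with NUL bytes sanitized so a key cannot collide with the end-of-path marker), then a `JSON_END_OF_PATH` byte, then the value bytes. A standalone sketch of that layout (editor's illustration; `JSON_END_OF_PATH` is `0u8`, as defined in common's `json_path_writer`):

// Editor's sketch of the serialized layout:
// [path bytes, with 0u8 mapped to b'0'][JSON_END_OF_PATH = 0u8][value bytes]
const JSON_END_OF_PATH: u8 = 0;

fn encode_json_term(path: &[u8], value: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(path.len() + 1 + value.len());
    // A raw NUL inside a key would look like the end-of-path marker, so replace it.
    out.extend(path.iter().map(|&b| if b == 0 { b'0' } else { b }));
    out.push(JSON_END_OF_PATH);
    out.extend_from_slice(value);
    out
}

#[test]
fn json_term_layout() {
    let term = encode_json_term(b"color\x00shade", b"red");
    assert_eq!(term, b"color0shade\x00red");
}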
@@ -11,7 +11,8 @@ use crate::postings::recorder::{BufferLender, Recorder};
use crate::postings::{
    FieldSerializer, IndexingContext, InvertedIndexSerializer, PerFieldPostingsWriter,
};
-use crate::schema::{Field, Schema, Term, Type};
+use crate::schema::indexing_term::{get_field_from_indexing_term, IndexingTerm};
+use crate::schema::{Field, Schema, Type};
use crate::tokenizer::{Token, TokenStream, MAX_TOKEN_LEN};
use crate::DocId;
@@ -60,14 +61,14 @@ pub(crate) fn serialize_postings(
    let mut term_offsets: Vec<(Field, OrderedPathId, &[u8], Addr)> =
        Vec::with_capacity(ctx.term_index.len());
    term_offsets.extend(ctx.term_index.iter().map(|(key, addr)| {
-        let field = Term::wrap(key).field();
+        let field = get_field_from_indexing_term(key);
        if schema.get_field_entry(field).field_type().value_type() == Type::Json {
-            let byte_range_path = 5..5 + 4;
+            let byte_range_path = 4..4 + 4;
            let unordered_id = u32::from_be_bytes(key[byte_range_path.clone()].try_into().unwrap());
            let path_id = unordered_id_to_ordered_id[unordered_id as usize];
            (field, path_id, &key[byte_range_path.end..], addr)
        } else {
-            (field, 0.into(), &key[5..], addr)
+            (field, 0.into(), &key[4..], addr)
        }
    }));
    // Sort by field, path, and term
@@ -114,7 +115,7 @@ pub(crate) trait PostingsWriter: Send + Sync {
    /// * term - the term
    /// * ctx - Contains a term hashmap and a memory arena to store all necessary posting list
    ///   information.
-    fn subscribe(&mut self, doc: DocId, pos: u32, term: &Term, ctx: &mut IndexingContext);
+    fn subscribe(&mut self, doc: DocId, pos: u32, term: &IndexingTerm, ctx: &mut IndexingContext);
    /// Serializes the postings on disk.
    /// The actual serialization format is handled by the `PostingsSerializer`.
@@ -132,7 +133,7 @@ pub(crate) trait PostingsWriter: Send + Sync {
        &mut self,
        doc_id: DocId,
        token_stream: &mut dyn TokenStream,
-        term_buffer: &mut Term,
+        term_buffer: &mut IndexingTerm,
        ctx: &mut IndexingContext,
        indexing_position: &mut IndexingPosition,
    ) {
@@ -203,26 +204,35 @@ impl<Rec: Recorder> SpecializedPostingsWriter<Rec> {
impl<Rec: Recorder> PostingsWriter for SpecializedPostingsWriter<Rec> {
    #[inline]
-    fn subscribe(&mut self, doc: DocId, position: u32, term: &Term, ctx: &mut IndexingContext) {
-        debug_assert!(term.serialized_term().len() >= 4);
+    fn subscribe(
+        &mut self,
+        doc: DocId,
+        position: u32,
+        term: &IndexingTerm,
+        ctx: &mut IndexingContext,
+    ) {
+        debug_assert!(term.serialized_for_hashmap().len() >= 4);
        self.total_num_tokens += 1;
        let (term_index, arena) = (&mut ctx.term_index, &mut ctx.arena);
-        term_index.mutate_or_create(term.serialized_term(), |opt_recorder: Option<Rec>| {
-            if let Some(mut recorder) = opt_recorder {
-                let current_doc = recorder.current_doc();
-                if current_doc != doc {
-                    recorder.close_doc(arena);
-                    recorder.new_doc(doc, arena);
-                }
-                recorder.record_position(position, arena);
-                recorder
-            } else {
-                let mut recorder = Rec::default();
-                recorder.new_doc(doc, arena);
-                recorder.record_position(position, arena);
-                recorder
-            }
-        });
+        term_index.mutate_or_create(
+            term.serialized_for_hashmap(),
+            |opt_recorder: Option<Rec>| {
+                if let Some(mut recorder) = opt_recorder {
+                    let current_doc = recorder.current_doc();
+                    if current_doc != doc {
+                        recorder.close_doc(arena);
+                        recorder.new_doc(doc, arena);
+                    }
+                    recorder.record_position(position, arena);
+                    recorder
+                } else {
+                    let mut recorder = Rec::default();
+                    recorder.new_doc(doc, arena);
+                    recorder.record_position(position, arena);
+                    recorder
+                }
+            },
+        );
    }
    fn serialize(
@@ -1,5 +1,3 @@
-use std::convert::TryInto;
use crate::directory::OwnedBytes;
use crate::postings::compression::{compressed_block_size, COMPRESSION_BLOCK_SIZE};
use crate::query::Bm25Weight;
@@ -1,5 +1,4 @@
use std::io;
-use std::iter::ExactSizeIterator;
use std::ops::Range;
use common::{BinarySerializable, FixedSize};
@@ -149,7 +149,7 @@ mod tests {
    use crate::query::exist_query::ExistsQuery;
    use crate::query::{BooleanQuery, RangeQuery};
    use crate::schema::{Facet, FacetOptions, Schema, FAST, INDEXED, STRING, TEXT};
-    use crate::{doc, Index, Searcher};
+    use crate::{Index, Searcher};
    #[test]
    fn test_exists_query_simple() -> crate::Result<()> {
@@ -3,7 +3,7 @@ use once_cell::sync::OnceCell;
use tantivy_fst::Automaton;
use crate::query::{AutomatonWeight, EnableScoring, Query, Weight};
-use crate::schema::{Term, Type};
+use crate::schema::Term;
use crate::TantivyError::InvalidArgument;
pub(crate) struct DfaWrapper(pub DFA);
@@ -84,7 +84,7 @@ pub struct FuzzyTermQuery {
    distance: u8,
    /// Should a transposition cost 1 or 2?
    transposition_cost_one: bool,
-    ///
+    /// is a starts with query
    prefix: bool,
}
@@ -133,40 +133,33 @@ impl FuzzyTermQuery {
        let term_value = self.term.value();
-        let term_text = if term_value.typ() == Type::Json {
-            if let Some(json_path_type) = term_value.json_path_type() {
-                if json_path_type != Type::Str {
-                    return Err(InvalidArgument(format!(
-                        "The fuzzy term query requires a string path type for a json term. Found \
-                         {:?}",
-                        json_path_type
-                    )));
-                }
-            }
-            std::str::from_utf8(self.term.serialized_value_bytes()).map_err(|_| {
-                InvalidArgument(
-                    "Failed to convert json term value bytes to utf8 string.".to_string(),
-                )
-            })?
-        } else {
-            term_value.as_str().ok_or_else(|| {
-                InvalidArgument("The fuzzy term query requires a string term.".to_string())
-            })?
-        };
-        let automaton = if self.prefix {
-            automaton_builder.build_prefix_dfa(term_text)
-        } else {
-            automaton_builder.build_dfa(term_text)
-        };
+        let get_automaton = |term_text: &str| {
+            if self.prefix {
+                automaton_builder.build_prefix_dfa(term_text)
+            } else {
+                automaton_builder.build_dfa(term_text)
+            }
+        };
-        if let Some((json_path_bytes, _)) = term_value.as_json() {
+        if let Some((json_path_bytes, _term_value)) = term_value.as_json() {
+            let term_text =
+                std::str::from_utf8(self.term.serialized_value_bytes()).map_err(|_| {
+                    InvalidArgument(
+                        "Failed to convert json term value bytes to utf8 string.".to_string(),
+                    )
+                })?;
+            let automaton = get_automaton(term_text);
            Ok(AutomatonWeight::new_for_json_path(
                self.term.field(),
                DfaWrapper(automaton),
                json_path_bytes,
            ))
        } else {
+            let term_text = term_value.as_str().ok_or_else(|| {
+                InvalidArgument("The fuzzy term query requires a string term.".to_string())
+            })?;
+            let automaton = get_automaton(term_text);
            Ok(AutomatonWeight::new(
                self.term.field(),
                DfaWrapper(automaton),
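Editor's note: for orientation, this is the query being refactored above. A minimal usage sketch (editor's illustration with a made-up schema; it assumes the usual `FuzzyTermQuery::new(term, distance, transposition_cost_one)` constructor):

// Editor's sketch: fuzzy-match "tantivi" against an indexed "tantivy"
// with Levenshtein distance 1.
use tantivy::collector::Count;
use tantivy::query::FuzzyTermQuery;
use tantivy::schema::{Schema, TEXT};
use tantivy::{doc, Index, Term};

fn fuzzy_sketch() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT);
    let index = Index::create_in_ram(schema_builder.build());
    let mut writer = index.writer(15_000_000)?;
    writer.add_document(doc!(title => "tantivy"))?;
    writer.commit()?;

    let term = Term::from_field_text(title, "tantivi");
    let query = FuzzyTermQuery::new(term, 1, true);
    let searcher = index.reader()?.searcher();
    assert_eq!(searcher.search(&query, &Count)?, 1);
    Ok(())
}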
@@ -137,7 +137,7 @@ impl Query for PhrasePrefixQuery {
            // There are no prefix. Let's just match the suffix.
            let end_term =
                if let Some(end_value) = prefix_end(self.prefix.1.serialized_value_bytes()) {
-                    let mut end_term = Term::with_capacity(end_value.len());
+                    let mut end_term = Term::new();
                    end_term.set_field_and_type(self.field, self.prefix.1.typ());
                    end_term.append_bytes(&end_value);
                    Bound::Excluded(end_term)
@@ -10,10 +10,10 @@ use query_grammar::{UserInputAst, UserInputBound, UserInputLeaf, UserInputLitera
use rustc_hash::FxHashMap;
use super::logical_ast::*;
-use crate::core::json_utils::{
-    convert_to_fast_value_and_get_term, set_string_and_get_terms, JsonTermWriter,
-};
use crate::index::Index;
+use crate::json_utils::{
+    convert_to_fast_value_and_append_to_json_term, split_json_path, term_from_json_paths,
+};
use crate::query::range_query::{is_type_valid_for_fastfield_range_query, RangeQuery};
use crate::query::{
    AllQuery, BooleanQuery, BoostQuery, EmptyQuery, FuzzyTermQuery, Occur, PhrasePrefixQuery,
@@ -965,20 +965,33 @@ fn generate_literals_for_json_object(
    })?;
    let index_record_option = text_options.index_option();
    let mut logical_literals = Vec::new();
-    let mut term = Term::with_capacity(100);
-    let mut json_term_writer = JsonTermWriter::from_field_and_json_path(
-        field,
-        json_path,
-        json_options.is_expand_dots_enabled(),
-        &mut term,
-    );
-    if let Some(term) = convert_to_fast_value_and_get_term(&mut json_term_writer, phrase) {
+    let paths = split_json_path(json_path);
+    let get_term_with_path = || {
+        term_from_json_paths(
+            field,
+            paths.iter().map(|el| el.as_str()),
+            json_options.is_expand_dots_enabled(),
+        )
+    };
+    // Try to convert the phrase to a fast value
+    if let Some(term) = convert_to_fast_value_and_append_to_json_term(get_term_with_path(), phrase)
+    {
        logical_literals.push(LogicalLiteral::Term(term));
    }
-    let terms = set_string_and_get_terms(&mut json_term_writer, phrase, &mut text_analyzer);
-    drop(json_term_writer);
-    if terms.len() <= 1 {
-        for (_, term) in terms {
+    // Try to tokenize the phrase and create Terms.
+    let mut positions_and_terms = Vec::<(usize, Term)>::new();
+    let mut token_stream = text_analyzer.token_stream(phrase);
+    token_stream.process(&mut |token| {
+        let mut term = get_term_with_path();
+        term.append_type_and_str(&token.text);
+        positions_and_terms.push((token.position, term.clone()));
+    });
+    if positions_and_terms.len() <= 1 {
+        for (_, term) in positions_and_terms {
            logical_literals.push(LogicalLiteral::Term(term));
        }
        return Ok(logical_literals);
@@ -989,7 +1002,7 @@ fn generate_literals_for_json_object(
        ));
    }
    logical_literals.push(LogicalLiteral::Phrase {
-        terms,
+        terms: positions_and_terms,
        slop: 0,
        prefix: false,
    });
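Editor's note: the parser change above relies on `split_json_path` to turn a user-supplied `a.b` path into segments before the term is built. As a rough illustration of the splitting rule it depends on, including backslash-escaped dots, here is a standalone reimplementation (editor's sketch of the expected behavior, not the crate's code):

// Editor's sketch: "a.b" -> ["a", "b"], while a backslash-escaped dot
// ("a\.b") stays inside a single segment.
fn split_dotted_path(path: &str) -> Vec<String> {
    let mut segments = vec![String::new()];
    let mut chars = path.chars();
    while let Some(c) = chars.next() {
        match c {
            '\\' => {
                // An escaped character is taken literally.
                if let Some(escaped) = chars.next() {
                    segments.last_mut().unwrap().push(escaped);
                }
            }
            '.' => segments.push(String::new()),
            _ => segments.last_mut().unwrap().push(c),
        }
    }
    segments
}

#[test]
fn split_path_sketch() {
    assert_eq!(split_dotted_path("k8s.node.name"), vec!["k8s", "node", "name"]);
    assert_eq!(split_dotted_path(r"k8s\.node"), vec!["k8s.node"]);
}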
@@ -477,7 +477,7 @@ mod tests {
    use crate::schema::{
        Field, IntoIpv6Addr, Schema, TantivyDocument, FAST, INDEXED, STORED, TEXT,
    };
-    use crate::{doc, Index, IndexWriter};
+    use crate::{Index, IndexWriter};
    #[test]
    fn test_range_query_simple() -> crate::Result<()> {
@@ -139,7 +139,7 @@ mod tests {
    use crate::collector::{Count, TopDocs};
    use crate::query::{Query, QueryParser, TermQuery};
    use crate::schema::{IndexRecordOption, IntoIpv6Addr, Schema, INDEXED, STORED};
-    use crate::{doc, Index, IndexWriter, Term};
+    use crate::{Index, IndexWriter, Term};
    #[test]
    fn search_ip_test() {
@@ -53,8 +53,7 @@ impl HasLen for VecDocSet {
pub mod tests {
    use super::*;
-    use crate::docset::{DocSet, COLLECT_BLOCK_BUFFER_LEN};
-    use crate::DocId;
+    use crate::docset::COLLECT_BLOCK_BUFFER_LEN;
    #[test]
    pub fn test_vec_postings() {
@@ -1,7 +1,5 @@
use std::ops::BitOr;
-#[allow(deprecated)]
-pub use common::DatePrecision;
pub use common::DateTimePrecision;
use serde::{Deserialize, Serialize};
@@ -160,7 +160,7 @@ pub enum ValueType {
    /// A dynamic object value.
    Object,
    /// A JSON object value. Deprecated.
-    #[deprecated]
+    #[deprecated(note = "We keep this for backwards compatibility, use Object instead")]
    JSONObject,
}
@@ -819,7 +819,6 @@ mod tests {
    use crate::schema::document::existing_type_impls::JsonObjectIter;
    use crate::schema::document::se::BinaryValueSerializer;
    use crate::schema::document::{ReferenceValue, ReferenceValueLeaf};
-    use crate::schema::OwnedValue;
    fn serialize_value<'a>(value: ReferenceValue<'a, &'a serde_json::Value>) -> Vec<u8> {
        let mut writer = Vec::new();
@@ -256,7 +256,6 @@ impl DocParsingError {
#[cfg(test)]
mod tests {
-    use crate::schema::document::default_document::TantivyDocument;
    use crate::schema::*;
    #[test]
@@ -443,9 +443,7 @@ impl<'a> Iterator for ObjectMapIter<'a> {
mod tests {
    use super::*;
    use crate::schema::{BytesOptions, Schema};
-    use crate::time::format_description::well_known::Rfc3339;
-    use crate::time::OffsetDateTime;
-    use crate::{DateTime, Document, TantivyDocument};
+    use crate::{Document, TantivyDocument};
    #[test]
    fn test_parse_bytes_doc() {
@@ -136,7 +136,6 @@ impl FieldEntry {
#[cfg(test)]
mod tests {
-    use serde_json;
    use super::*;
    use crate::schema::{Schema, TextFieldIndexing, TEXT};
src/schema/indexing_term.rs (new file, 147 lines)
@@ -0,0 +1,147 @@
use std::net::Ipv6Addr;
use columnar::{MonotonicallyMappableToU128, MonotonicallyMappableToU64};
use super::date_time_options::DATE_TIME_PRECISION_INDEXED;
use super::Field;
use crate::fastfield::FastValue;
use crate::schema::Type;
use crate::DateTime;
/// IndexingTerm is the serialized representation of a term during indexing.
/// It covers the different value types behind a single byte format.
///
/// It actually wraps a `Vec<u8>`.
///
/// The format is as follows:
/// `[field id: u32][serialized value]`
///
/// For JSON it is:
/// `[field id: u32][path id: u32][type code: u8][serialized value]`
///
/// The format is chosen to make it easy to partition the terms by field during
/// serialization, as all terms are stored in one hashmap.
#[derive(Clone)]
pub(crate) struct IndexingTerm(Vec<u8>);
/// The number of bytes used for the field id by `IndexingTerm`.
const FIELD_ID_LENGTH: usize = 4;
impl IndexingTerm {
/// Create a new IndexingTerm.
pub fn new() -> IndexingTerm {
let mut data = Vec::with_capacity(FIELD_ID_LENGTH + 32);
data.resize(FIELD_ID_LENGTH, 0u8);
IndexingTerm(data)
}
/// Returns true if there are no value bytes.
pub fn is_empty(&self) -> bool {
self.0.len() == FIELD_ID_LENGTH
}
/// Removes the value_bytes and set the field
pub(crate) fn clear_with_field(&mut self, field: Field) {
self.truncate_value_bytes(0);
self.0[0..4].clone_from_slice(field.field_id().to_be_bytes().as_ref());
}
/// Sets a u64 value in the term.
///
/// U64 are serialized using (8-byte) BigEndian
/// representation.
/// The use of BigEndian has the benefit of preserving
/// the natural order of the values.
pub fn set_u64(&mut self, val: u64) {
self.set_fast_value(val);
}
/// Sets a `DateTime` value in the term.
pub fn set_date(&mut self, val: DateTime) {
self.set_fast_value(val);
}
/// Sets a `i64` value in the term.
pub fn set_i64(&mut self, val: i64) {
self.set_fast_value(val);
}
/// Sets a `f64` value in the term.
pub fn set_f64(&mut self, val: f64) {
self.set_fast_value(val);
}
/// Sets a `bool` value in the term.
pub fn set_bool(&mut self, val: bool) {
self.set_fast_value(val);
}
fn set_fast_value<T: FastValue>(&mut self, val: T) {
self.truncate_value_bytes(0);
self.append_fast_value(val);
}
/// Sets a `Ipv6Addr` value in the term.
pub fn set_ip_addr(&mut self, val: Ipv6Addr) {
self.set_value_bytes(val.to_u128().to_be_bytes().as_ref());
}
/// Sets the value bytes of the term.
pub fn set_value_bytes(&mut self, bytes: &[u8]) {
self.truncate_value_bytes(0);
self.0.extend(bytes);
}
/// Append a type marker + fast value to a term.
/// This is used in JSON type to append a fast value after the path.
///
/// It will not clear existing bytes.
pub(crate) fn append_type_and_fast_value<T: FastValue>(&mut self, val: T) {
self.0.push(T::to_type().to_code());
self.append_fast_value(val)
}
/// Append a fast value to a term.
///
/// It will not clear existing bytes.
pub fn append_fast_value<T: FastValue>(&mut self, val: T) {
let value = if T::to_type() == Type::Date {
DateTime::from_u64(val.to_u64())
.truncate(DATE_TIME_PRECISION_INDEXED)
.to_u64()
} else {
val.to_u64()
};
self.0.extend(value.to_be_bytes().as_ref());
}
/// Truncates the value bytes of the term. Value and field type stays the same.
pub fn truncate_value_bytes(&mut self, len: usize) {
self.0.truncate(len + FIELD_ID_LENGTH);
}
/// The length of the bytes.
pub fn len_bytes(&self) -> usize {
self.0.len() - FIELD_ID_LENGTH
}
/// Appends bytes to the term.
#[inline]
pub fn append_bytes(&mut self, bytes: &[u8]) {
self.0.extend_from_slice(bytes);
}
/// Returns the serialized representation of the term,
/// including the field id and the value bytes.
#[inline]
pub fn serialized_for_hashmap(&self) -> &[u8] {
self.0.as_ref()
}
}
pub fn get_field_from_indexing_term(bytes: &[u8]) -> Field {
let field_id_bytes: [u8; 4] = bytes[..4].try_into().unwrap();
Field::from_field_id(u32::from_be_bytes(field_id_bytes))
}
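Editor's note: since every field's terms share one hashmap, the fixed 4-byte big-endian field id prefix described above is what lets serialization later partition keys by field. A self-contained sketch of that prefix decode (editor's illustration mirroring `get_field_from_indexing_term`):

// Editor's sketch: decode the [field id: u32 BE] prefix of a serialized key.
fn field_id_of_key(key: &[u8]) -> u32 {
    let prefix: [u8; 4] = key[..4].try_into().expect("key shorter than 4 bytes");
    u32::from_be_bytes(prefix)
}

#[test]
fn field_prefix_roundtrip() {
    let mut key = 7u32.to_be_bytes().to_vec();
    key.extend_from_slice(b"value bytes");
    assert_eq!(field_id_of_key(&key), 7);
}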
@@ -109,6 +109,7 @@
pub mod document;
mod facet;
mod facet_options;
+pub(crate) mod indexing_term;
mod schema;
pub(crate) mod term;
@@ -130,8 +131,6 @@ mod text_options;
use columnar::ColumnType;
pub use self::bytes_options::BytesOptions;
-#[allow(deprecated)]
-pub use self::date_time_options::DatePrecision;
pub use self::date_time_options::{DateOptions, DateTimePrecision, DATE_TIME_PRECISION_INDEXED};
pub use self::document::{DocParsingError, Document, OwnedValue, TantivyDocument, Value};
pub(crate) use self::facet::FACET_SEP_BYTE;
@@ -146,11 +145,9 @@ pub use self::index_record_option::IndexRecordOption;
pub use self::ip_options::{IntoIpv6Addr, IpAddrOptions};
pub use self::json_object_options::JsonObjectOptions;
pub use self::named_field_document::NamedFieldDocument;
-#[allow(deprecated)]
-pub use self::numeric_options::IntOptions;
pub use self::numeric_options::NumericOptions;
pub use self::schema::{Schema, SchemaBuilder};
-pub use self::term::{Term, ValueBytes, JSON_END_OF_PATH};
+pub use self::term::{Term, ValueBytes};
pub use self::text_options::{TextFieldIndexing, TextOptions, STRING, TEXT};
/// Validator for a potential `field_name`.
@@ -5,10 +5,6 @@ use serde::{Deserialize, Serialize};
use super::flags::CoerceFlag;
use crate::schema::flags::{FastFlag, IndexedFlag, SchemaFlagList, StoredFlag};
-#[deprecated(since = "0.17.0", note = "Use NumericOptions instead.")]
-/// Deprecated use [`NumericOptions`] instead.
-pub type IntOptions = NumericOptions;
/// Define how an `u64`, `i64`, or `f64` field should be handled by tantivy.
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(from = "NumericOptionsDeser")]
@@ -6,10 +6,8 @@ use serde::de::{SeqAccess, Visitor};
use serde::ser::SerializeSeq;
use serde::{Deserialize, Deserializer, Serialize, Serializer};
-use super::ip_options::IpAddrOptions;
use super::*;
use crate::json_utils::split_json_path;
-use crate::schema::bytes_options::BytesOptions;
use crate::TantivyError;
/// Tantivy has a very strict schema.
@@ -421,9 +419,7 @@ mod tests {
    use matches::{assert_matches, matches};
    use pretty_assertions::assert_eq;
-    use serde_json;
-    use crate::schema::document::Value;
    use crate::schema::field_type::ValueParsingError;
    use crate::schema::schema::DocParsingError::InvalidJson;
    use crate::schema::*;
@@ -1,9 +1,9 @@
use std::convert::TryInto; use std::hash::Hash;
use std::hash::{Hash, Hasher};
use std::net::Ipv6Addr; use std::net::Ipv6Addr;
use std::{fmt, str}; use std::{fmt, str};
use columnar::{MonotonicallyMappableToU128, MonotonicallyMappableToU64}; use columnar::{MonotonicallyMappableToU128, MonotonicallyMappableToU64};
use common::json_path_writer::{JSON_END_OF_PATH, JSON_PATH_SEGMENT_SEP_STR};
use super::date_time_options::DATE_TIME_PRECISION_INDEXED; use super::date_time_options::DATE_TIME_PRECISION_INDEXED;
use super::Field; use super::Field;
@@ -11,15 +11,6 @@ use crate::fastfield::FastValue;
use crate::schema::{Facet, Type}; use crate::schema::{Facet, Type};
use crate::DateTime; use crate::DateTime;
/// Separates the different segments of a json path.
pub const JSON_PATH_SEGMENT_SEP: u8 = 1u8;
pub const JSON_PATH_SEGMENT_SEP_STR: &str =
unsafe { std::str::from_utf8_unchecked(&[JSON_PATH_SEGMENT_SEP]) };
/// Separates the json path and the value in
/// a JSON term binary representation.
pub const JSON_END_OF_PATH: u8 = 0u8;
/// Term represents the value that the token can take. /// Term represents the value that the token can take.
/// It's a serialized representation over different types. /// It's a serialized representation over different types.
/// ///
@@ -27,37 +18,45 @@ pub const JSON_END_OF_PATH: u8 = 0u8;
/// 4 bytes are the field id, and the last byte is the type. /// 4 bytes are the field id, and the last byte is the type.
/// ///
/// The serialized value `ValueBytes` is considered everything after the 4 first bytes (term id). /// The serialized value `ValueBytes` is considered everything after the 4 first bytes (term id).
#[derive(Clone)] #[derive(Clone, Hash, PartialEq, Ord, PartialOrd, Eq)]
pub struct Term<B = Vec<u8>>(B) pub struct Term(Vec<u8>);
where B: AsRef<[u8]>; impl Default for Term {
fn default() -> Self {
Self::new()
}
}
/// The number of bytes used as metadata by `Term`. /// The number of bytes used as metadata by `Term`.
const TERM_METADATA_LENGTH: usize = 5; const TERM_METADATA_LENGTH: usize = 5;
impl Term { impl Term {
/// Create a new Term with a buffer with a given capacity. /// Create a new Term
pub fn with_capacity(capacity: usize) -> Term { pub fn new() -> Term {
let mut data = Vec::with_capacity(TERM_METADATA_LENGTH + capacity); let mut data = Vec::with_capacity(TERM_METADATA_LENGTH + 32);
data.resize(TERM_METADATA_LENGTH, 0u8); data.resize(TERM_METADATA_LENGTH, 0u8);
Term(data) Term(data)
} }
pub(crate) fn with_type_and_field(typ: Type, field: Field) -> Term { pub(crate) fn with_type_and_field(typ: Type, field: Field) -> Term {
let mut term = Self::with_capacity(8); Self::with_bytes_and_field_and_payload(typ, field, &[])
term.set_field_and_type(field, typ);
term
} }
fn with_bytes_and_field_and_payload(typ: Type, field: Field, bytes: &[u8]) -> Term { fn with_bytes_and_field_and_payload(typ: Type, field: Field, bytes: &[u8]) -> Term {
let mut term = Self::with_capacity(bytes.len()); let mut term = Self::new();
term.set_field_and_type(field, typ); term.set_field_and_type(field, typ);
term.0.extend_from_slice(bytes); term.0.extend_from_slice(bytes);
term term
} }
/// Sets a fast value in the term.
///
/// fast values are converted to u64 and then serialized using (8-byte) BigEndian
/// representation.
/// The use of BigEndian has the benefit of preserving
/// the natural order of the values.
fn from_fast_value<T: FastValue>(field: Field, val: &T) -> Term { fn from_fast_value<T: FastValue>(field: Field, val: &T) -> Term {
let mut term = Self::with_type_and_field(T::to_type(), field); let mut term = Self::with_type_and_field(T::to_type(), field);
term.set_u64(val.to_u64()); term.set_bytes(val.to_u64().to_be_bytes().as_ref());
term term
} }
@@ -79,7 +78,7 @@ impl Term {
/// Builds a term given a field, and a `Ipv6Addr`-value /// Builds a term given a field, and a `Ipv6Addr`-value
pub fn from_field_ip_addr(field: Field, ip_addr: Ipv6Addr) -> Term { pub fn from_field_ip_addr(field: Field, ip_addr: Ipv6Addr) -> Term {
let mut term = Self::with_type_and_field(Type::IpAddr, field); let mut term = Self::with_type_and_field(Type::IpAddr, field);
term.set_ip_addr(ip_addr); term.set_bytes(ip_addr.to_u128().to_be_bytes().as_ref());
term term
} }
@@ -124,52 +123,16 @@ impl Term {
Term::with_bytes_and_field_and_payload(Type::Bytes, field, bytes) Term::with_bytes_and_field_and_payload(Type::Bytes, field, bytes)
} }
/// Removes the value_bytes and set the field and type code.
pub(crate) fn clear_with_field_and_type(&mut self, typ: Type, field: Field) {
self.truncate_value_bytes(0);
self.set_field_and_type(field, typ);
}
/// Removes the value_bytes and set the type code. /// Removes the value_bytes and set the type code.
pub fn clear_with_type(&mut self, typ: Type) { pub fn clear_with_type(&mut self, typ: Type) {
self.truncate_value_bytes(0); self.truncate_value_bytes(0);
self.0[4] = typ.to_code(); self.0[4] = typ.to_code();
} }
/// Sets a u64 value in the term. /// Append a type marker + fast value to a term.
/// This is used in JSON type to append a fast value after the path.
/// ///
/// U64 are serialized using (8-byte) BigEndian /// It will not clear existing bytes.
/// representation.
/// The use of BigEndian has the benefit of preserving
/// the natural order of the values.
pub fn set_u64(&mut self, val: u64) {
self.set_fast_value(val);
}
/// Sets a `i64` value in the term.
pub fn set_i64(&mut self, val: i64) {
self.set_fast_value(val);
}
/// Sets a `DateTime` value in the term.
pub fn set_date(&mut self, date: DateTime) {
self.set_fast_value(date);
}
/// Sets a `f64` value in the term.
pub fn set_f64(&mut self, val: f64) {
self.set_fast_value(val);
}
/// Sets a `bool` value in the term.
pub fn set_bool(&mut self, val: bool) {
self.set_fast_value(val);
}
fn set_fast_value<T: FastValue>(&mut self, val: T) {
self.set_bytes(val.to_u64().to_be_bytes().as_ref());
}
pub(crate) fn append_type_and_fast_value<T: FastValue>(&mut self, val: T) { pub(crate) fn append_type_and_fast_value<T: FastValue>(&mut self, val: T) {
self.0.push(T::to_type().to_code()); self.0.push(T::to_type().to_code());
let value = if T::to_type() == Type::Date { let value = if T::to_type() == Type::Date {
@@ -182,32 +145,26 @@ impl Term {
self.0.extend(value.to_be_bytes().as_ref()); self.0.extend(value.to_be_bytes().as_ref());
} }
/// Sets a `Ipv6Addr` value in the term. /// Append a string type marker + string to a term.
pub fn set_ip_addr(&mut self, val: Ipv6Addr) { /// This is used in JSON type to append a str after the path.
self.set_bytes(val.to_u128().to_be_bytes().as_ref()); ///
/// It will not clear existing bytes.
pub(crate) fn append_type_and_str(&mut self, val: &str) {
self.0.push(Type::Str.to_code());
self.0.extend(val.as_bytes().as_ref());
} }
/// Sets the value of a `Bytes` field. /// Sets the value of a `Bytes` field.
pub fn set_bytes(&mut self, bytes: &[u8]) { fn set_bytes(&mut self, bytes: &[u8]) {
self.truncate_value_bytes(0); self.truncate_value_bytes(0);
self.0.extend(bytes); self.0.extend(bytes);
} }
/// Set the texts only, keeping the field untouched.
pub fn set_text(&mut self, text: &str) {
self.set_bytes(text.as_bytes());
}
/// Truncates the value bytes of the term. Value and field type stays the same. /// Truncates the value bytes of the term. Value and field type stays the same.
pub fn truncate_value_bytes(&mut self, len: usize) { pub fn truncate_value_bytes(&mut self, len: usize) {
self.0.truncate(len + TERM_METADATA_LENGTH); self.0.truncate(len + TERM_METADATA_LENGTH);
} }
/// The length of the bytes.
pub fn len_bytes(&self) -> usize {
self.0.len() - TERM_METADATA_LENGTH
}
/// Appends value bytes to the Term. /// Appends value bytes to the Term.
/// ///
/// This function returns the segment that has just been added. /// This function returns the segment that has just been added.
@@ -218,61 +175,19 @@ impl Term {
&mut self.0[len_before..] &mut self.0[len_before..]
} }
/// Appends a JSON_PATH_SEGMENT_SEP to the term.
/// Only used for JSON type.
#[inline]
pub fn add_json_path_separator(&mut self) {
self.0.push(JSON_PATH_SEGMENT_SEP);
}
/// Sets the current end to JSON_END_OF_PATH.
/// Only used for JSON type.
#[inline]
pub fn set_json_path_end(&mut self) {
let buffer_len = self.0.len();
self.0[buffer_len - 1] = JSON_END_OF_PATH;
}
/// Sets the current end to JSON_PATH_SEGMENT_SEP.
/// Only used for JSON type.
#[inline]
pub fn set_json_path_separator(&mut self) {
let buffer_len = self.0.len();
self.0[buffer_len - 1] = JSON_PATH_SEGMENT_SEP;
}
}
impl<B> Term<B>
where B: AsRef<[u8]>
{
/// Wraps a object holding bytes
pub fn wrap(data: B) -> Term<B> {
Term(data)
}
/// Return the type of the term. /// Return the type of the term.
pub fn typ(&self) -> Type { pub fn typ(&self) -> Type {
self.value().typ() self.value().typ()
} }
/// Returns the field.
pub fn field(&self) -> Field {
let field_id_bytes: [u8; 4] = (&self.0.as_ref()[..4]).try_into().unwrap();
Field::from_field_id(u32::from_be_bytes(field_id_bytes))
}
/// Returns the serialized representation of the value. /// Returns the serialized representation of the value.
/// (this does neither include the field id nor the value type.) /// (this does neither include the field id nor the value type.)
/// ///
/// If the term is a string, its value is utf-8 encoded. /// If the term is a string, its value is utf-8 encoded.
/// If the term is a u64, its value is encoded according /// If the term is a u64, its value is encoded according
/// to `byteorder::BigEndian`. /// to `byteorder::BigEndian`.
pub fn serialized_value_bytes(&self) -> &[u8] { pub(crate) fn serialized_value_bytes(&self) -> &[u8] {
&self.0.as_ref()[TERM_METADATA_LENGTH..] &self.0[TERM_METADATA_LENGTH..]
}
/// Returns the value of the term.
/// address or JSON path + value. (this does not include the field.)
pub fn value(&self) -> ValueBytes<&[u8]> {
ValueBytes::wrap(&self.0.as_ref()[4..])
} }
/// Returns the serialized representation of Term. /// Returns the serialized representation of Term.
@@ -281,9 +196,22 @@ where B: AsRef<[u8]>
/// Do NOT rely on this byte representation in the index. /// Do NOT rely on this byte representation in the index.
/// This value is likely to change in the future. /// This value is likely to change in the future.
#[inline] #[inline]
#[cfg(test)]
pub fn serialized_term(&self) -> &[u8] { pub fn serialized_term(&self) -> &[u8] {
self.0.as_ref() self.0.as_ref()
} }
/// Returns the field.
pub fn field(&self) -> Field {
let field_id_bytes: [u8; 4] = (&self.0[..4]).try_into().unwrap();
Field::from_field_id(u32::from_be_bytes(field_id_bytes))
}
/// Returns the value of the term.
/// address or JSON path + value. (this does not include the field.)
pub fn value(&self) -> ValueBytes<&[u8]> {
ValueBytes::wrap(&self.0[4..])
}
} }
/// ValueBytes represents a serialized value. /// ValueBytes represents a serialized value.
@@ -314,18 +242,10 @@ where B: AsRef<[u8]>
} }
/// Return the type of the term. /// Return the type of the term.
pub fn typ(&self) -> Type { pub(crate) fn typ(&self) -> Type {
Type::from_code(self.typ_code()).expect("The term has an invalid type code") Type::from_code(self.typ_code()).expect("The term has an invalid type code")
} }
/// Returns the `u64` value stored in a term.
///
/// Returns `None` if the term is not of the u64 type, or if the term byte representation
/// is invalid.
pub fn as_u64(&self) -> Option<u64> {
self.get_fast_type::<u64>()
}
fn get_fast_type<T: FastValue>(&self) -> Option<T> { fn get_fast_type<T: FastValue>(&self) -> Option<T> {
if self.typ() != T::to_type() { if self.typ() != T::to_type() {
return None; return None;
@@ -335,38 +255,6 @@ where B: AsRef<[u8]>
Some(T::from_u64(value_u64)) Some(T::from_u64(value_u64))
} }
/// Returns the `i64` value stored in a term.
///
/// Returns `None` if the term is not of the i64 type, or if the term byte representation
/// is invalid.
pub fn as_i64(&self) -> Option<i64> {
self.get_fast_type::<i64>()
}
/// Returns the `f64` value stored in a term.
///
/// Returns `None` if the term is not of the f64 type, or if the term byte representation
/// is invalid.
pub fn as_f64(&self) -> Option<f64> {
self.get_fast_type::<f64>()
}
/// Returns the `bool` value stored in a term.
///
/// Returns `None` if the term is not of the bool type, or if the term byte representation
/// is invalid.
pub fn as_bool(&self) -> Option<bool> {
self.get_fast_type::<bool>()
}
/// Returns the `Date` value stored in a term.
///
/// Returns `None` if the term is not of the Date type, or if the term byte representation
/// is invalid.
pub fn as_date(&self) -> Option<DateTime> {
self.get_fast_type::<DateTime>()
}
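// Editorial sketch, not part of the diff: the accessors above all go through
// a monotone fast-value <-> u64 mapping; for i64 this is commonly a sign-bit
// flip, shown here with hypothetical helper names.
fn i64_to_u64_sketch(val: i64) -> u64 {
    (val as u64) ^ (1u64 << 63) // flip the sign bit so byte order matches numeric order
}
fn u64_to_i64_sketch(val: u64) -> i64 {
    (val ^ (1u64 << 63)) as i64
}
fn fast_value_roundtrip_sketch() {
    for val in [i64::MIN, -1, 0, 1, i64::MAX] {
        assert_eq!(u64_to_i64_sketch(i64_to_u64_sketch(val)), val);
    }
    assert!(i64_to_u64_sketch(-1) < i64_to_u64_sketch(0)); // ordering preserved
}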
/// Returns the text associated with the term.
///
/// Returns `None` if the field is not of string type
@@ -382,7 +270,7 @@ where B: AsRef<[u8]>
///
/// Returns `None` if the field is not of facet type
/// or if the bytes are not valid utf-8.
-pub fn as_facet(&self) -> Option<Facet> {
+pub(crate) fn as_facet(&self) -> Option<Facet> {
if self.typ() != Type::Facet {
return None;
}
@@ -393,7 +281,7 @@ where B: AsRef<[u8]>
/// Returns the bytes associated with the term.
///
/// Returns `None` if the field is not of bytes type.
-pub fn as_bytes(&self) -> Option<&[u8]> {
+pub(crate) fn as_bytes(&self) -> Option<&[u8]> {
if self.typ() != Type::Bytes {
return None;
}
@@ -401,7 +289,7 @@ where B: AsRef<[u8]>
}
/// Returns an `Ipv6Addr` value from the term.
-pub fn as_ip_addr(&self) -> Option<Ipv6Addr> {
+pub(crate) fn as_ip_addr(&self) -> Option<Ipv6Addr> {
if self.typ() != Type::IpAddr {
return None;
}
@@ -409,15 +297,6 @@ where B: AsRef<[u8]>
Some(Ipv6Addr::from_u128(ip_u128))
}
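// Editorial sketch, not part of the diff: the 16-byte IP payload round-trips
// through u128, matching the `from_u128` call above (std `From` impls shown).
fn ip_roundtrip_sketch() {
    let ip: std::net::Ipv6Addr = "::1".parse().unwrap();
    assert_eq!(std::net::Ipv6Addr::from(u128::from(ip)), ip);
}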
/// Returns the json path type.
///
/// Returns `None` if the value is not JSON.
pub fn json_path_type(&self) -> Option<Type> {
let json_value_bytes = self.as_json_value_bytes()?;
Some(json_value_bytes.typ())
}
/// Returns the json path bytes (including the JSON_END_OF_PATH byte),
/// and the encoded ValueBytes after the json path.
///
@@ -434,18 +313,6 @@ where B: AsRef<[u8]>
Some((json_path_bytes, ValueBytes::wrap(term)))
}
/// Returns the encoded ValueBytes after the json path.
///
/// Returns `None` if the value is not JSON.
pub(crate) fn as_json_value_bytes(&self) -> Option<ValueBytes<&[u8]>> {
if self.typ() != Type::Json {
return None;
}
let bytes = self.value_bytes();
let pos = bytes.iter().cloned().position(|b| b == JSON_END_OF_PATH)?;
Some(ValueBytes::wrap(&bytes[pos + 1..]))
}
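// Editorial sketch, not part of the diff: the path/value split performed by
// the two functions above, assuming JSON_END_OF_PATH is a single marker byte
// (the 0u8 value here is illustrative).
fn split_json_term_sketch(bytes: &[u8]) -> Option<(&[u8], &[u8])> {
    let pos = bytes.iter().position(|&b| b == 0u8)?;
    Some((&bytes[..=pos], &bytes[pos + 1..])) // path incl. marker, then the value
}
fn json_split_sketch() {
    let term = b"color\0sred"; // path "color", marker, then a type byte and "red"
    let (path, value) = split_json_term_sketch(term).unwrap();
    assert_eq!(path, b"color\0");
    assert_eq!(value, b"sred");
}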
/// Returns the serialized value of ValueBytes without the type.
fn value_bytes(&self) -> &[u8] {
&self.0.as_ref()[1..]
@@ -468,20 +335,20 @@ where B: AsRef<[u8]>
write_opt(f, s)?;
}
Type::U64 => {
-write_opt(f, self.as_u64())?;
+write_opt(f, self.get_fast_type::<u64>())?;
}
Type::I64 => {
-write_opt(f, self.as_i64())?;
+write_opt(f, self.get_fast_type::<i64>())?;
}
Type::F64 => {
-write_opt(f, self.as_f64())?;
+write_opt(f, self.get_fast_type::<f64>())?;
}
Type::Bool => {
-write_opt(f, self.as_bool())?;
+write_opt(f, self.get_fast_type::<bool>())?;
}
// TODO pretty print these types too.
Type::Date => {
-write_opt(f, self.as_date())?;
+write_opt(f, self.get_fast_type::<DateTime>())?;
}
Type::Facet => {
write_opt(f, self.as_facet())?;
@@ -507,40 +374,6 @@ where B: AsRef<[u8]>
}
}
impl<B> Ord for Term<B>
where B: AsRef<[u8]>
{
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
self.serialized_term().cmp(other.serialized_term())
}
}
impl<B> PartialOrd for Term<B>
where B: AsRef<[u8]>
{
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl<B> PartialEq for Term<B>
where B: AsRef<[u8]>
{
fn eq(&self, other: &Self) -> bool {
self.serialized_term() == other.serialized_term()
}
}
impl<B> Eq for Term<B> where B: AsRef<[u8]> {}
impl<B> Hash for Term<B>
where B: AsRef<[u8]>
{
fn hash<H: Hasher>(&self, state: &mut H) {
self.0.as_ref().hash(state)
}
}
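// Editorial sketch, not part of the diff: the removed impls above all
// delegate to the serialized byte representation; the pattern for a
// hypothetical byte-backed wrapper looks like this.
#[derive(PartialEq, Eq, Hash)]
struct BytesTermSketch(Vec<u8>);
impl Ord for BytesTermSketch {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        self.0.cmp(&other.0) // lexicographic over the serialized bytes
    }
}
impl PartialOrd for BytesTermSketch {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}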
fn write_opt<T: std::fmt::Debug>(f: &mut fmt::Formatter, val_opt: Option<T>) -> fmt::Result {
if let Some(val) = val_opt {
write!(f, "{val:?}")?;
@@ -548,13 +381,11 @@ fn write_opt<T: std::fmt::Debug>(f: &mut fmt::Formatter, val_opt: Option<T>) ->
Ok(())
}
-impl<B> fmt::Debug for Term<B>
+impl fmt::Debug for Term {
where B: AsRef<[u8]>
{
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let field_id = self.field().field_id();
write!(f, "Term(field={field_id}, ")?;
-let value_bytes = ValueBytes::wrap(&self.0.as_ref()[4..]);
+let value_bytes = ValueBytes::wrap(&self.0[4..]);
value_bytes.debug_value_bytes(f)?;
write!(f, ")",)?;
Ok(())
@@ -576,38 +407,4 @@ mod tests {
assert_eq!(term.typ(), Type::Str);
assert_eq!(term.value().as_str(), Some("test"))
}
/// Size (in bytes) of the buffer of a fast value (u64, i64, f64, or date) term.
/// <field> + <type byte> + <value>
///
/// - <field> is a big endian encoded u32 field id
/// - <type_byte>'s most significant bit expresses whether the term is a json term or not
/// The remaining 7 bits are used to encode the type of the value.
/// If this is a JSON term, the type is the type of the leaf of the json.
///
/// - <value> is, if this is not the json term, a binary representation specific to the type.
/// If it is a JSON Term, then it is prepended with the path that leads to this leaf value.
const FAST_VALUE_TERM_LEN: usize = 4 + 1 + 8;
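// Editorial sketch, not part of the diff: decoding the layout documented
// above from a raw 13-byte buffer (buffer contents hypothetical).
fn fast_value_layout_sketch() {
    let buf: [u8; 13] = [0, 0, 0, 2, 1, 0, 0, 0, 0, 0, 0, 3, 215];
    let field_id = u32::from_be_bytes(buf[..4].try_into().unwrap());
    let type_code = buf[4] & 0x7f; // low 7 bits; the most significant bit is the json flag
    let value = u64::from_be_bytes(buf[5..].try_into().unwrap());
    assert_eq!((field_id, type_code, value), (2, 1, 983));
}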
#[test]
pub fn test_term_u64() {
let mut schema_builder = Schema::builder();
let count_field = schema_builder.add_u64_field("count", INDEXED);
let term = Term::from_field_u64(count_field, 983u64);
assert_eq!(term.field(), count_field);
assert_eq!(term.typ(), Type::U64);
assert_eq!(term.serialized_term().len(), FAST_VALUE_TERM_LEN);
assert_eq!(term.value().as_u64(), Some(983u64))
}
#[test]
pub fn test_term_bool() {
let mut schema_builder = Schema::builder();
let bool_field = schema_builder.add_bool_field("bool", INDEXED);
let term = Term::from_field_bool(bool_field, true);
assert_eq!(term.field(), bool_field);
assert_eq!(term.typ(), Type::Bool);
assert_eq!(term.serialized_term().len(), FAST_VALUE_TERM_LEN);
assert_eq!(term.value().as_bool(), Some(true))
}
}

View File

@@ -1,4 +1,3 @@
use core::convert::TryInto;
use std::io::{self};
use std::mem;

View File

@@ -2,12 +2,6 @@ use std::io;
use serde::{Deserialize, Deserializer, Serialize};
pub trait StoreCompressor {
fn compress(&self, uncompressed: &[u8], compressed: &mut Vec<u8>) -> io::Result<()>;
fn decompress(&self, compressed: &[u8], decompressed: &mut Vec<u8>) -> io::Result<()>;
fn get_compressor_id() -> u8;
}
/// Compressor can be used on `IndexSettings` to choose
/// the compressor used to compress the doc store.
///
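// Editorial sketch, not part of the diff: choosing the doc store compressor
// through IndexSettings, per the doc comment above (variant availability
// depends on the enabled cargo features; values illustrative).
fn compressor_settings_sketch() -> tantivy::IndexSettings {
    tantivy::IndexSettings {
        docstore_compression: tantivy::store::Compressor::Lz4,
        ..Default::default()
    }
}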

View File

@@ -4,12 +4,6 @@ use serde::{Deserialize, Serialize};
use super::Compressor;
pub trait StoreCompressor {
fn compress(&self, uncompressed: &[u8], compressed: &mut Vec<u8>) -> io::Result<()>;
fn decompress(&self, compressed: &[u8], decompressed: &mut Vec<u8>) -> io::Result<()>;
fn get_compressor_id() -> u8;
}
/// Decompressor is deserialized from the doc store footer, when opening an index.
#[derive(Clone, Debug, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum Decompressor {
@@ -86,7 +80,6 @@ impl Decompressor {
#[cfg(test)]
mod tests {
use super::*;
use crate::store::Compressor;
#[test]
fn compressor_decompressor_id_test() {

View File

@@ -41,7 +41,7 @@ mod tests {
use std::io;
-use proptest::strategy::{BoxedStrategy, Strategy};
+use proptest::prelude::*;
use super::{SkipIndex, SkipIndexBuilder};
use crate::directory::OwnedBytes;
@@ -227,8 +227,6 @@ mod tests {
}
}
use proptest::prelude::*;
proptest! {
#![proptest_config(ProptestConfig::with_cases(20))]
#[test]

View File

@@ -14,7 +14,7 @@ use super::Decompressor;
use crate::directory::FileSlice;
use crate::error::DataCorruption;
use crate::fastfield::AliveBitSet;
-use crate::schema::document::{BinaryDocumentDeserializer, Document, DocumentDeserialize};
+use crate::schema::document::{BinaryDocumentDeserializer, DocumentDeserialize};
use crate::space_usage::StoreSpaceUsage;
use crate::store::index::Checkpoint;
use crate::DocId;
@@ -235,7 +235,7 @@ impl StoreReader {
/// Iterator over all Documents in their order as they are stored in the doc store.
/// Use this if you want to extract all Documents from the doc store.
/// The `alive_bitset` has to be forwarded from the `SegmentReader` or the results may be wrong.
-pub fn iter<'a: 'b, 'b, D: Document + DocumentDeserialize>(
+pub fn iter<'a: 'b, 'b, D: DocumentDeserialize>(
&'b self,
alive_bitset: Option<&'a AliveBitSet>,
) -> impl Iterator<Item = crate::Result<D>> + 'b {
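// Editorial usage sketch, not part of the diff: with the relaxed bound, any
// DocumentDeserialize type can be streamed from the store (cache size and
// document type here are illustrative).
fn dump_store_docs(segment_reader: &tantivy::SegmentReader) -> tantivy::Result<()> {
    let store_reader = segment_reader.get_store_reader(1)?;
    for doc in store_reader.iter::<tantivy::TantivyDocument>(segment_reader.alive_bitset()) {
        let _doc: tantivy::TantivyDocument = doc?;
    }
    Ok(())
}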

View File

@@ -1,6 +1,6 @@
[package]
name = "tantivy-sstable"
-version = "0.2.0"
+version = "0.3.0"
edition = "2021"
license = "MIT"
homepage = "https://github.com/quickwit-oss/tantivy"
@@ -10,8 +10,8 @@ categories = ["database-implementations", "data-structures", "compression"]
description = "sstables for tantivy" description = "sstables for tantivy"
[dependencies] [dependencies]
common = {version= "0.6", path="../common", package="tantivy-common"} common = {version= "0.7", path="../common", package="tantivy-common"}
tantivy-bitpacker = { version= "0.5", path="../bitpacker" } tantivy-bitpacker = { version= "0.6", path="../bitpacker" }
tantivy-fst = "0.5" tantivy-fst = "0.5"
# experimental gives us access to Decompressor::upper_bound # experimental gives us access to Decompressor::upper_bound
zstd = { version = "0.13", features = ["experimental"] } zstd = { version = "0.13", features = ["experimental"] }

View File

@@ -3,7 +3,7 @@ use std::sync::Arc;
use common::file_slice::FileSlice;
use common::OwnedBytes;
use criterion::{criterion_group, criterion_main, Criterion};
-use tantivy_sstable::{self, Dictionary, MonotonicU64SSTable};
+use tantivy_sstable::{Dictionary, MonotonicU64SSTable};
fn make_test_sstable(suffix: &str) -> FileSlice {
let mut builder = Dictionary::<MonotonicU64SSTable>::builder(Vec::new()).unwrap();

View File

@@ -5,7 +5,7 @@ use common::file_slice::FileSlice;
use criterion::{criterion_group, criterion_main, Criterion};
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
-use tantivy_sstable::{self, Dictionary, MonotonicU64SSTable};
+use tantivy_sstable::{Dictionary, MonotonicU64SSTable};
const CHARSET: &'static [u8] = b"abcdefghij";

View File

@@ -1,6 +1,5 @@
use std::io::{self, Write};
use std::ops::Range;
use std::usize;
use merge::ValueMerger;

View File

@@ -1,6 +1,6 @@
[package]
name = "tantivy-stacker"
-version = "0.2.0"
+version = "0.3.0"
edition = "2021"
license = "MIT"
homepage = "https://github.com/quickwit-oss/tantivy"
@@ -9,8 +9,8 @@ description = "term hashmap used for indexing"
[dependencies]
murmurhash32 = "0.3"
-common = { version = "0.6", path = "../common/", package = "tantivy-common" }
-ahash = { version = "0.8.3", default-features = false, optional = true }
+common = { version = "0.7", path = "../common/", package = "tantivy-common" }
+ahash = { version = "0.8.11", default-features = false, optional = true }
rand_distr = "0.4.3"
[[bench]]

View File

@@ -151,7 +151,6 @@ impl ExpUnrolledLinkedList {
mod tests {
use common::{read_u32_vint, write_u32_vint};
use super::super::MemoryArena;
use super::*;
#[test]

View File

@@ -1,6 +1,6 @@
[package]
name = "tantivy-tokenizer-api"
-version = "0.2.0"
+version = "0.3.0"
license = "MIT"
edition = "2021"
description = "Tokenizer API of tantivy"