Compare commits


1 Commit

Author          SHA1        Message                   Date
Pascal Seitz    4a9262cd2c  accept * as field name    2024-12-06 09:42:18 +01:00
151 changed files with 938 additions and 3251 deletions


@@ -1,26 +1,12 @@
Tantivy 0.25 Tantivy 0.23 - Unreleased
================================ ================================
Tantivy 0.23 will be backwards compatible with indices created with v0.22 and v0.21. The new minimum rust version will be 1.75.
## Bugfixes
- fix union performance regression in tantivy 0.24 [#2663](https://github.com/quickwit-oss/tantivy/pull/2663)(@PSeitz-dd)
- make zstd optional in sstable [#2633](https://github.com/quickwit-oss/tantivy/pull/2633)(@Parth)
## Features/Improvements
- add docs/example and Vec<u32> values to sstable [#2660](https://github.com/quickwit-oss/tantivy/pull/2660)(@PSeitz)
- Add string fast field support to `TopDocs`. [#2642](https://github.com/quickwit-oss/tantivy/pull/2642)(@stuhood)
- update edition to 2024 [#2620](https://github.com/quickwit-oss/tantivy/pull/2620)(@PSeitz)
Tantivy 0.24
================================
Tantivy 0.24 will be backwards compatible with indices created with v0.22 and v0.21. The new minimum rust version will be 1.75. Tantivy 0.23 will be skipped.
#### Bugfixes #### Bugfixes
- fix potential endless loop in merge [#2457](https://github.com/quickwit-oss/tantivy/pull/2457)(@PSeitz) - fix potential endless loop in merge [#2457](https://github.com/quickwit-oss/tantivy/pull/2457)(@PSeitz)
- fix bug that causes out-of-order sstable key. [#2445](https://github.com/quickwit-oss/tantivy/pull/2445)(@fulmicoton) - fix bug that causes out-of-order sstable key. [#2445](https://github.com/quickwit-oss/tantivy/pull/2445)(@fulmicoton)
- fix ReferenceValue API flaw [#2372](https://github.com/quickwit-oss/tantivy/pull/2372)(@PSeitz) - fix ReferenceValue API flaw [#2372](https://github.com/quickwit-oss/tantivy/pull/2372)(@PSeitz)
- fix `OwnedBytes` debug panic [#2512](https://github.com/quickwit-oss/tantivy/pull/2512)(@b41sh) - fix `OwnedBytes` debug panic [#2512](https://github.com/quickwit-oss/tantivy/pull/2512)(@b41sh)
- catch panics during merges [#2582](https://github.com/quickwit-oss/tantivy/pull/2582)(@rdettai)
- switch from u32 to usize in bitpacker. This enables multivalued columns larger than 4GB, which crashed during merge before. [#2581](https://github.com/quickwit-oss/tantivy/pull/2581) [#2586](https://github.com/quickwit-oss/tantivy/pull/2586)(@fulmicoton-dd @PSeitz)
#### Breaking API Changes #### Breaking API Changes
- remove index sorting [#2434](https://github.com/quickwit-oss/tantivy/pull/2434)(@PSeitz) - remove index sorting [#2434](https://github.com/quickwit-oss/tantivy/pull/2434)(@PSeitz)
@@ -38,7 +24,6 @@ Tantivy 0.24 will be backwards compatible with indices created with v0.22 and v0
- reduce top hits memory consumption [#2426](https://github.com/quickwit-oss/tantivy/pull/2426)(@PSeitz) - reduce top hits memory consumption [#2426](https://github.com/quickwit-oss/tantivy/pull/2426)(@PSeitz)
- check unsupported parameters top_hits [#2351](https://github.com/quickwit-oss/tantivy/pull/2351)(@PSeitz) - check unsupported parameters top_hits [#2351](https://github.com/quickwit-oss/tantivy/pull/2351)(@PSeitz)
- Change AggregationLimits to AggregationLimitsGuard [#2495](https://github.com/quickwit-oss/tantivy/pull/2495)(@PSeitz) - Change AggregationLimits to AggregationLimitsGuard [#2495](https://github.com/quickwit-oss/tantivy/pull/2495)(@PSeitz)
- add support for counting non integer in aggregation [#2547](https://github.com/quickwit-oss/tantivy/pull/2547)(@trinity-1686a)
- **Range Queries** - **Range Queries**
- Support fast field range queries on json fields [#2456](https://github.com/quickwit-oss/tantivy/pull/2456)(@PSeitz) - Support fast field range queries on json fields [#2456](https://github.com/quickwit-oss/tantivy/pull/2456)(@PSeitz)
- Add support for str fast field range query [#2460](https://github.com/quickwit-oss/tantivy/pull/2460) [#2452](https://github.com/quickwit-oss/tantivy/pull/2452) [#2453](https://github.com/quickwit-oss/tantivy/pull/2453)(@PSeitz) - Add support for str fast field range query [#2460](https://github.com/quickwit-oss/tantivy/pull/2460) [#2452](https://github.com/quickwit-oss/tantivy/pull/2452) [#2453](https://github.com/quickwit-oss/tantivy/pull/2453)(@PSeitz)
@@ -49,8 +34,7 @@ Tantivy 0.24 will be backwards compatible with indices created with v0.22 and v0
- add columnar format compatibility tests [#2433](https://github.com/quickwit-oss/tantivy/pull/2433)(@PSeitz) - add columnar format compatibility tests [#2433](https://github.com/quickwit-oss/tantivy/pull/2433)(@PSeitz)
- Improved snippet ranges algorithm [#2474](https://github.com/quickwit-oss/tantivy/pull/2474)(@gezihuzi) - Improved snippet ranges algorithm [#2474](https://github.com/quickwit-oss/tantivy/pull/2474)(@gezihuzi)
- make find_field_with_default return json fields without path [#2476](https://github.com/quickwit-oss/tantivy/pull/2476)(@trinity-1686a) - make find_field_with_default return json fields without path [#2476](https://github.com/quickwit-oss/tantivy/pull/2476)(@trinity-1686a)
- Make `BooleanQuery` support `minimum_number_should_match` [#2405](https://github.com/quickwit-oss/tantivy/pull/2405)(@LebranceBW) - feat(query): Make `BooleanQuery` support `minimum_number_should_match` [#2405](https://github.com/quickwit-oss/tantivy/pull/2405)(@LebranceBW)
- Make `NUM_MERGE_THREADS` configurable [#2535](https://github.com/quickwit-oss/tantivy/pull/2535)(@Barre)
- **RegexPhraseQuery** - **RegexPhraseQuery**
`RegexPhraseQuery` supports phrase queries with regex. E.g. query "b.* b.* wolf" matches "big bad wolf". Slop is supported as well: "b.* wolf"~2 matches "big bad wolf" [#2516](https://github.com/quickwit-oss/tantivy/pull/2516)(@PSeitz) `RegexPhraseQuery` supports phrase queries with regex. E.g. query "b.* b.* wolf" matches "big bad wolf". Slop is supported as well: "b.* wolf"~2 matches "big bad wolf" [#2516](https://github.com/quickwit-oss/tantivy/pull/2516)(@PSeitz)
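To make the semantics concrete without relying on the `RegexPhraseQuery` API (not shown in this diff), here is a minimal sketch using the `regex` crate, where position-by-position token matching stands in for the actual posting-list machinery:

```rust
use regex::Regex;

fn main() {
    // Tokens of an indexed phrase and the regex tokens of a query.
    let doc_tokens = ["big", "bad", "wolf"];
    let query_tokens = ["b.*", "b.*", "wolf"];

    // Each query token must match the document token at the same
    // position; anchoring makes each regex cover the whole token.
    let is_phrase_match = query_tokens
        .iter()
        .zip(doc_tokens.iter())
        .all(|(pat, tok)| Regex::new(&format!("^{pat}$")).unwrap().is_match(tok));

    assert!(is_phrase_match); // "b.* b.* wolf" matches "big bad wolf"
}
```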
@@ -76,9 +60,7 @@ This will slightly increase space and access time. [#2439](https://github.com/qu
- fix de-escaping too much in query parser [#2427](https://github.com/quickwit-oss/tantivy/pull/2427)(@trinity-1686a) - fix de-escaping too much in query parser [#2427](https://github.com/quickwit-oss/tantivy/pull/2427)(@trinity-1686a)
- improve query parser [#2416](https://github.com/quickwit-oss/tantivy/pull/2416)(@trinity-1686a) - improve query parser [#2416](https://github.com/quickwit-oss/tantivy/pull/2416)(@trinity-1686a)
- Support field grouping `title:(return AND "pink panther")` [#2333](https://github.com/quickwit-oss/tantivy/pull/2333)(@trinity-1686a) - Support field grouping `title:(return AND "pink panther")` [#2333](https://github.com/quickwit-oss/tantivy/pull/2333)(@trinity-1686a)
- allow term starting with wildcard [#2568](https://github.com/quickwit-oss/tantivy/pull/2568)(@trinity-1686a)
- Exist queries match subpath fields [#2558](https://github.com/quickwit-oss/tantivy/pull/2558)(@rdettai)
- add access benchmark for columnar [#2432](https://github.com/quickwit-oss/tantivy/pull/2432)(@PSeitz) - add access benchmark for columnar [#2432](https://github.com/quickwit-oss/tantivy/pull/2432)(@PSeitz)
- extend indexwriter proptests [#2342](https://github.com/quickwit-oss/tantivy/pull/2342)(@PSeitz) - extend indexwriter proptests [#2342](https://github.com/quickwit-oss/tantivy/pull/2342)(@PSeitz)
- add bench & test for columnar merging [#2428](https://github.com/quickwit-oss/tantivy/pull/2428)(@PSeitz) - add bench & test for columnar merging [#2428](https://github.com/quickwit-oss/tantivy/pull/2428)(@PSeitz)


@@ -1,6 +1,6 @@
[package] [package]
name = "tantivy" name = "tantivy"
version = "0.24.0" version = "0.23.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"] authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT" license = "MIT"
categories = ["database-implementations", "data-structures"] categories = ["database-implementations", "data-structures"]
@@ -11,7 +11,7 @@ repository = "https://github.com/quickwit-oss/tantivy"
readme = "README.md" readme = "README.md"
keywords = ["search", "information", "retrieval"] keywords = ["search", "information", "retrieval"]
edition = "2021" edition = "2021"
rust-version = "1.85" rust-version = "1.75"
exclude = ["benches/*.json", "benches/*.txt"] exclude = ["benches/*.json", "benches/*.txt"]
[dependencies] [dependencies]
@@ -31,14 +31,14 @@ lz4_flex = { version = "0.11", default-features = false, optional = true }
zstd = { version = "0.13", optional = true, default-features = false } zstd = { version = "0.13", optional = true, default-features = false }
tempfile = { version = "3.12.0", optional = true } tempfile = { version = "3.12.0", optional = true }
log = "0.4.16" log = "0.4.16"
serde = { version = "1.0.219", features = ["derive"] } serde = { version = "1.0.136", features = ["derive"] }
serde_json = "1.0.140" serde_json = "1.0.79"
fs4 = { version = "0.13.1", optional = true } fs4 = { version = "0.8.0", optional = true }
levenshtein_automata = "0.2.1" levenshtein_automata = "0.2.1"
uuid = { version = "1.0.0", features = ["v4", "serde"] } uuid = { version = "1.0.0", features = ["v4", "serde"] }
crossbeam-channel = "0.5.4" crossbeam-channel = "0.5.4"
rust-stemmers = "1.2.0" rust-stemmers = "1.2.0"
downcast-rs = "2.0.1" downcast-rs = "1.2.1"
bitpacking = { version = "0.9.2", default-features = false, features = [ bitpacking = { version = "0.9.2", default-features = false, features = [
"bitpacker4x", "bitpacker4x",
] } ] }
@@ -52,22 +52,20 @@ smallvec = "1.8.0"
rayon = "1.5.2" rayon = "1.5.2"
lru = "0.12.0" lru = "0.12.0"
fastdivide = "0.4.0" fastdivide = "0.4.0"
itertools = "0.14.0" itertools = "0.13.0"
measure_time = "0.9.0" measure_time = "0.8.2"
arc-swap = "1.5.0" arc-swap = "1.5.0"
bon = "3.3.1"
columnar = { version = "0.5", path = "./columnar", package = "tantivy-columnar" } columnar = { version = "0.3", path = "./columnar", package = "tantivy-columnar" }
sstable = { version = "0.5", path = "./sstable", package = "tantivy-sstable", optional = true } sstable = { version = "0.3", path = "./sstable", package = "tantivy-sstable", optional = true }
stacker = { version = "0.5", path = "./stacker", package = "tantivy-stacker" } stacker = { version = "0.3", path = "./stacker", package = "tantivy-stacker" }
query-grammar = { version = "0.24.0", path = "./query-grammar", package = "tantivy-query-grammar" } query-grammar = { version = "0.22.0", path = "./query-grammar", package = "tantivy-query-grammar" }
tantivy-bitpacker = { version = "0.8", path = "./bitpacker" } tantivy-bitpacker = { version = "0.6", path = "./bitpacker" }
common = { version = "0.9", path = "./common/", package = "tantivy-common" } common = { version = "0.7", path = "./common/", package = "tantivy-common" }
tokenizer-api = { version = "0.5", path = "./tokenizer-api", package = "tantivy-tokenizer-api" } tokenizer-api = { version = "0.3", path = "./tokenizer-api", package = "tantivy-tokenizer-api" }
sketches-ddsketch = { version = "0.3.0", features = ["use_serde"] } sketches-ddsketch = { version = "0.3.0", features = ["use_serde"] }
hyperloglogplus = { version = "0.4.1", features = ["const-loop"] } hyperloglogplus = { version = "0.4.1", features = ["const-loop"] }
futures-util = { version = "0.3.28", optional = true } futures-util = { version = "0.3.28", optional = true }
futures-channel = { version = "0.3.28", optional = true }
fnv = "1.0.7" fnv = "1.0.7"
[target.'cfg(windows)'.dependencies] [target.'cfg(windows)'.dependencies]
@@ -112,20 +110,17 @@ debug-assertions = true
overflow-checks = true overflow-checks = true
[features] [features]
default = ["mmap", "stopwords", "lz4-compression", "columnar-zstd-compression"] default = ["mmap", "stopwords", "lz4-compression"]
mmap = ["fs4", "tempfile", "memmap2"] mmap = ["fs4", "tempfile", "memmap2"]
stopwords = [] stopwords = []
lz4-compression = ["lz4_flex"] lz4-compression = ["lz4_flex"]
zstd-compression = ["zstd"] zstd-compression = ["zstd"]
# enable zstd-compression in columnar (and sstable)
columnar-zstd-compression = ["columnar/zstd-compression"]
failpoints = ["fail", "fail/failpoints"] failpoints = ["fail", "fail/failpoints"]
unstable = [] # useful for benches. unstable = [] # useful for benches.
quickwit = ["sstable", "futures-util", "futures-channel"] quickwit = ["sstable", "futures-util"]
# Compares only the hash of a string when indexing data. # Compares only the hash of a string when indexing data.
# Increases indexing speed, but may lead to extremely rare missing terms, when there's a hash collision. # Increases indexing speed, but may lead to extremely rare missing terms, when there's a hash collision.
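The trade-off the comment above describes can be sketched with a hypothetical hash-keyed term table (illustrative only, not tantivy's actual indexing structure):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

fn hash64(s: &str) -> u64 {
    let mut h = DefaultHasher::new();
    s.hash(&mut h);
    h.finish()
}

fn main() {
    // Keying the term table by the 64-bit hash alone skips full string
    // comparisons while indexing...
    let mut term_ids: HashMap<u64, u32> = HashMap::new();
    for term in ["big", "bad", "wolf", "bad"] {
        let next_id = term_ids.len() as u32;
        term_ids.entry(hash64(term)).or_insert(next_id);
    }
    assert_eq!(term_ids.len(), 3);
    // ...but if two *distinct* terms ever collide on the same u64, the
    // later one silently reuses the earlier id: the "extremely rare
    // missing terms" the comment warns about.
}
```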


@@ -1,7 +1,7 @@
[package] [package]
name = "tantivy-bitpacker" name = "tantivy-bitpacker"
version = "0.8.0" version = "0.6.0"
edition = "2024" edition = "2021"
authors = ["Paul Masurel <paul.masurel@gmail.com>"] authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT" license = "MIT"
categories = [] categories = []


@@ -1,7 +1,3 @@
// manual divceil actually generates code that is not optimal (to accept the full range of u32) and
// perf matters here.
#![allow(clippy::manual_div_ceil)]
use std::io; use std::io;
use std::ops::{Range, RangeInclusive}; use std::ops::{Range, RangeInclusive};
@@ -69,7 +65,7 @@ impl BitPacker {
#[derive(Clone, Debug, Default, Copy)] #[derive(Clone, Debug, Default, Copy)]
pub struct BitUnpacker { pub struct BitUnpacker {
num_bits: usize, num_bits: u32,
mask: u64, mask: u64,
} }
@@ -87,7 +83,7 @@ impl BitUnpacker {
(1u64 << num_bits) - 1u64 (1u64 << num_bits) - 1u64
}; };
BitUnpacker { BitUnpacker {
num_bits: usize::from(num_bits), num_bits: u32::from(num_bits),
mask, mask,
} }
} }
@@ -98,14 +94,14 @@ impl BitUnpacker {
#[inline] #[inline]
pub fn get(&self, idx: u32, data: &[u8]) -> u64 { pub fn get(&self, idx: u32, data: &[u8]) -> u64 {
let addr_in_bits = idx as usize * self.num_bits; let addr_in_bits = idx * self.num_bits;
let addr = addr_in_bits >> 3; let addr = (addr_in_bits >> 3) as usize;
if addr + 8 > data.len() { if addr + 8 > data.len() {
if self.num_bits == 0 { if self.num_bits == 0 {
return 0; return 0;
} }
let bit_shift = addr_in_bits & 7; let bit_shift = addr_in_bits & 7;
return self.get_slow_path(addr, bit_shift as u32, data); return self.get_slow_path(addr, bit_shift, data);
} }
let bit_shift = addr_in_bits & 7; let bit_shift = addr_in_bits & 7;
let bytes: [u8; 8] = (&data[addr..addr + 8]).try_into().unwrap(); let bytes: [u8; 8] = (&data[addr..addr + 8]).try_into().unwrap();
@@ -138,13 +134,12 @@ impl BitUnpacker {
"Bitwidth must be <= 32 to use this method." "Bitwidth must be <= 32 to use this method."
); );
let end_idx: u32 = start_idx + output.len() as u32; let end_idx = start_idx + output.len() as u32;
// We use `usize` here to avoid overflow issues. let end_bit_read = end_idx * self.num_bits;
let end_bit_read = (end_idx as usize) * self.num_bits;
let end_byte_read = (end_bit_read + 7) / 8; let end_byte_read = (end_bit_read + 7) / 8;
assert!( assert!(
end_byte_read <= data.len(), end_byte_read as usize <= data.len(),
"Requested index is out of bounds." "Requested index is out of bounds."
); );
@@ -164,24 +159,24 @@ impl BitUnpacker {
// We want the start of the fast track to start align with bytes. // We want the start of the fast track to start align with bytes.
// A sufficient condition is to start with an idx that is a multiple of 8, // A sufficient condition is to start with an idx that is a multiple of 8,
// so highway start is the closest multiple of 8 that is >= start_idx. // so highway start is the closest multiple of 8 that is >= start_idx.
let entrance_ramp_len: u32 = 8 - (start_idx % 8) % 8; let entrance_ramp_len = 8 - (start_idx % 8) % 8;
let highway_start: u32 = start_idx + entrance_ramp_len; let highway_start: u32 = start_idx + entrance_ramp_len;
if highway_start + (BitPacker1x::BLOCK_LEN as u32) > end_idx { if highway_start + BitPacker1x::BLOCK_LEN as u32 > end_idx {
// We don't have enough values to have even a single block of highway. // We don't have enough values to have even a single block of highway.
// Let's just supply the values the simple way. // Let's just supply the values the simple way.
get_batch_ramp(start_idx, output); get_batch_ramp(start_idx, output);
return; return;
} }
let num_blocks: usize = (end_idx - highway_start) as usize / BitPacker1x::BLOCK_LEN; let num_blocks: u32 = (end_idx - highway_start) / BitPacker1x::BLOCK_LEN as u32;
// Entrance ramp // Entrance ramp
get_batch_ramp(start_idx, &mut output[..entrance_ramp_len as usize]); get_batch_ramp(start_idx, &mut output[..entrance_ramp_len as usize]);
// Highway // Highway
let mut offset = (highway_start as usize * self.num_bits) / 8; let mut offset = (highway_start * self.num_bits) as usize / 8;
let mut output_cursor = (highway_start - start_idx) as usize; let mut output_cursor = (highway_start - start_idx) as usize;
for _ in 0..num_blocks { for _ in 0..num_blocks {
offset += BitPacker1x.decompress( offset += BitPacker1x.decompress(
@@ -193,7 +188,7 @@ impl BitUnpacker {
} }
// Exit ramp // Exit ramp
let highway_end: u32 = highway_start + (num_blocks * BitPacker1x::BLOCK_LEN) as u32; let highway_end = highway_start + num_blocks * BitPacker1x::BLOCK_LEN as u32;
get_batch_ramp(highway_end, &mut output[output_cursor..]); get_batch_ramp(highway_end, &mut output[output_cursor..]);
} }
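The widening from `u32` to `usize` in `get` matters because the intermediate bit address `idx * num_bits` overflows 32 bits long before the data itself is unreasonably large; this is what the new "We use `usize` here to avoid overflow issues" comment and the changelog entry about multivalued columns larger than 4GB refer to. A sketch with illustrative numbers:

```rust
fn main() {
    // 200M values bitpacked at 32 bits/value: plausible for a large column.
    let idx: u32 = 200_000_000;
    let num_bits: usize = 32;

    // A u32 bit address would overflow: 200_000_000 * 32 > u32::MAX (~4.29e9).
    assert!((idx as u64) * (num_bits as u64) > u64::from(u32::MAX));

    // Widening before the multiplication keeps the byte address correct.
    let addr_in_bits = idx as usize * num_bits;
    let addr = addr_in_bits >> 3; // byte offset
    assert_eq!(addr, 800_000_000);
}
```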


@@ -1,6 +1,6 @@
use super::bitpacker::BitPacker; use super::bitpacker::BitPacker;
use super::compute_num_bits; use super::compute_num_bits;
use crate::{BitUnpacker, minmax}; use crate::{minmax, BitUnpacker};
const BLOCK_SIZE: usize = 128; const BLOCK_SIZE: usize = 128;
@@ -34,7 +34,7 @@ struct BlockedBitpackerEntryMetaData {
impl BlockedBitpackerEntryMetaData { impl BlockedBitpackerEntryMetaData {
fn new(offset: u64, num_bits: u8, base_value: u64) -> Self { fn new(offset: u64, num_bits: u8, base_value: u64) -> Self {
let encoded = offset | (u64::from(num_bits) << (64 - 8)); let encoded = offset | (num_bits as u64) << (64 - 8);
Self { Self {
encoded, encoded,
base_value, base_value,
@@ -140,9 +140,10 @@ impl BlockedBitpacker {
pub fn iter(&self) -> impl Iterator<Item = u64> + '_ { pub fn iter(&self) -> impl Iterator<Item = u64> + '_ {
// todo performance: we could decompress a whole block and cache it instead // todo performance: we could decompress a whole block and cache it instead
let bitpacked_elems = self.offset_and_bits.len() * BLOCK_SIZE; let bitpacked_elems = self.offset_and_bits.len() * BLOCK_SIZE;
(0..bitpacked_elems) let iter = (0..bitpacked_elems)
.map(move |idx| self.get(idx)) .map(move |idx| self.get(idx))
.chain(self.buffer.iter().cloned()) .chain(self.buffer.iter().cloned());
iter
} }
} }
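A sketch of the packing used by `BlockedBitpackerEntryMetaData::new` above: `num_bits` occupies the top byte of the `u64`, the offset the remaining 56 bits. The decode side is inferred from the encoding and purely illustrative:

```rust
fn encode(offset: u64, num_bits: u8) -> u64 {
    // Mirrors the constructor above; the offset must fit in 56 bits.
    debug_assert!(offset < (1u64 << 56));
    offset | (u64::from(num_bits) << (64 - 8))
}

fn num_bits(encoded: u64) -> u8 {
    (encoded >> (64 - 8)) as u8
}

fn offset(encoded: u64) -> u64 {
    encoded & ((1u64 << (64 - 8)) - 1)
}

fn main() {
    let packed = encode(1_234_567, 17);
    assert_eq!(num_bits(packed), 17);
    assert_eq!(offset(packed), 1_234_567);
}
```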


@@ -1,5 +1,3 @@
// #[allow(clippy::manual_div_ceil)]
mod bitpacker; mod bitpacker;
mod blocked_bitpacker; mod blocked_bitpacker;
mod filter_vec; mod filter_vec;
@@ -35,7 +33,11 @@ pub use crate::blocked_bitpacker::BlockedBitpacker;
/// number of bits. /// number of bits.
pub fn compute_num_bits(n: u64) -> u8 { pub fn compute_num_bits(n: u64) -> u8 {
let amplitude = (64u32 - n.leading_zeros()) as u8; let amplitude = (64u32 - n.leading_zeros()) as u8;
if amplitude <= 64 - 8 { amplitude } else { 64 } if amplitude <= 64 - 8 {
amplitude
} else {
64
}
} }
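Restating `compute_num_bits` with a few worked values (the assertions are illustrative): amplitudes above 56 bits are rounded up to a full 64.

```rust
fn compute_num_bits(n: u64) -> u8 {
    let amplitude = (64u32 - n.leading_zeros()) as u8;
    if amplitude <= 64 - 8 { amplitude } else { 64 }
}

fn main() {
    assert_eq!(compute_num_bits(0), 0); // zero needs no bits
    assert_eq!(compute_num_bits(1), 1);
    assert_eq!(compute_num_bits(255), 8); // fits in one byte
    assert_eq!(compute_num_bits(256), 9);
    assert_eq!(compute_num_bits(u64::MAX), 64); // amplitude > 56 => 64
}
```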
/// Computes the (min, max) of an iterator of `PartialOrd` values. /// Computes the (min, max) of an iterator of `PartialOrd` values.


@@ -16,14 +16,14 @@ body = """
{%- if version %} in {{ version }}{%- endif -%} {%- if version %} in {{ version }}{%- endif -%}
{% for commit in commits %} {% for commit in commits %}
{% if commit.remote.pr_title -%} {% if commit.github.pr_title -%}
{%- set commit_message = commit.remote.pr_title -%} {%- set commit_message = commit.github.pr_title -%}
{%- else -%} {%- else -%}
{%- set commit_message = commit.message -%} {%- set commit_message = commit.message -%}
{%- endif -%} {%- endif -%}
- {{ commit_message | split(pat="\n") | first | trim }}\ - {{ commit_message | split(pat="\n") | first | trim }}\
{% if commit.remote.pr_number %} \ {% if commit.github.pr_number %} \
[#{{ commit.remote.pr_number }}]({{ self::remote_url() }}/pull/{{ commit.remote.pr_number }}){% if commit.remote.username %}(@{{ commit.remote.username }}){%- endif -%} \ [#{{ commit.github.pr_number }}]({{ self::remote_url() }}/pull/{{ commit.github.pr_number }}){% if commit.github.username %}(@{{ commit.github.username }}){%- endif -%} \
{%- endif %} {%- endif %}
{%- endfor -%} {%- endfor -%}


@@ -1,7 +1,7 @@
[package] [package]
name = "tantivy-columnar" name = "tantivy-columnar"
version = "0.5.0" version = "0.3.0"
edition = "2024" edition = "2021"
license = "MIT" license = "MIT"
homepage = "https://github.com/quickwit-oss/tantivy" homepage = "https://github.com/quickwit-oss/tantivy"
repository = "https://github.com/quickwit-oss/tantivy" repository = "https://github.com/quickwit-oss/tantivy"
@@ -9,15 +9,15 @@ description = "column oriented storage for tantivy"
categories = ["database-implementations", "data-structures", "compression"] categories = ["database-implementations", "data-structures", "compression"]
[dependencies] [dependencies]
itertools = "0.14.0" itertools = "0.13.0"
fastdivide = "0.4.0" fastdivide = "0.4.0"
stacker = { version= "0.5", path = "../stacker", package="tantivy-stacker"} stacker = { version= "0.3", path = "../stacker", package="tantivy-stacker"}
sstable = { version= "0.5", path = "../sstable", package = "tantivy-sstable" } sstable = { version= "0.3", path = "../sstable", package = "tantivy-sstable" }
common = { version= "0.9", path = "../common", package = "tantivy-common" } common = { version= "0.7", path = "../common", package = "tantivy-common" }
tantivy-bitpacker = { version= "0.8", path = "../bitpacker/" } tantivy-bitpacker = { version= "0.6", path = "../bitpacker/" }
serde = "1.0.152" serde = "1.0.152"
downcast-rs = "2.0.1" downcast-rs = "1.2.0"
[dev-dependencies] [dev-dependencies]
proptest = "1" proptest = "1"
@@ -33,6 +33,6 @@ harness = false
name = "bench_access" name = "bench_access"
harness = false harness = false
[features] [features]
unstable = [] unstable = []
zstd-compression = ["sstable/zstd-compression"]


@@ -1,4 +1,4 @@
use binggan::{InputGroup, black_box}; use binggan::{black_box, InputGroup};
use common::*; use common::*;
use tantivy_columnar::Column; use tantivy_columnar::Column;


@@ -4,9 +4,9 @@ extern crate test;
use std::sync::Arc; use std::sync::Arc;
use rand::prelude::*; use rand::prelude::*;
use tantivy_columnar::column_values::{CodecType, serialize_and_load_u64_based_column_values}; use tantivy_columnar::column_values::{serialize_and_load_u64_based_column_values, CodecType};
use tantivy_columnar::*; use tantivy_columnar::*;
use test::{Bencher, black_box}; use test::{black_box, Bencher};
struct Columns { struct Columns {
pub optional: Column, pub optional: Column,


@@ -1,7 +1,7 @@
pub mod common; pub mod common;
use binggan::BenchRunner; use binggan::BenchRunner;
use common::{Card, generate_columnar_with_name}; use common::{generate_columnar_with_name, Card};
use tantivy_columnar::*; use tantivy_columnar::*;
const NUM_DOCS: u32 = 100_000; const NUM_DOCS: u32 = 100_000;


@@ -6,7 +6,7 @@ use std::sync::Arc;
use common::OwnedBytes; use common::OwnedBytes;
use rand::rngs::StdRng; use rand::rngs::StdRng;
use rand::seq::SliceRandom; use rand::seq::SliceRandom;
use rand::{Rng, SeedableRng, random}; use rand::{random, Rng, SeedableRng};
use tantivy_columnar::ColumnValues; use tantivy_columnar::ColumnValues;
use test::Bencher; use test::Bencher;
extern crate test; extern crate test;


@@ -5,7 +5,7 @@ use std::ops::RangeInclusive;
use std::sync::Arc; use std::sync::Arc;
use rand::prelude::*; use rand::prelude::*;
use tantivy_columnar::column_values::{CodecType, serialize_and_load_u64_based_column_values}; use tantivy_columnar::column_values::{serialize_and_load_u64_based_column_values, CodecType};
use tantivy_columnar::*; use tantivy_columnar::*;
use test::Bencher; use test::Bencher;


@@ -1,18 +0,0 @@
[package]
name = "tantivy-columnar-inspect"
version = "0.1.0"
edition = "2021"
license = "MIT"
[dependencies]
tantivy = {path="../..", package="tantivy"}
columnar = {path="../", package="tantivy-columnar"}
common = {path="../../common", package="tantivy-common"}
[workspace]
members = []
[profile.release]
debug = true
#debug-assertions = true
#overflow-checks = true


@@ -1,54 +0,0 @@
use columnar::ColumnarReader;
use common::file_slice::{FileSlice, WrapFile};
use std::io;
use std::path::Path;
use tantivy::directory::footer::Footer;
fn main() -> io::Result<()> {
println!("Opens a columnar file written by tantivy and validates it.");
let path = std::env::args().nth(1).unwrap();
let path = Path::new(&path);
println!("Reading {:?}", path);
let _reader = open_and_validate_columnar(path.to_str().unwrap())?;
Ok(())
}
pub fn validate_columnar_reader(reader: &ColumnarReader) {
let num_rows = reader.num_rows();
println!("num_rows: {}", num_rows);
let columns = reader.list_columns().unwrap();
println!("num columns: {:?}", columns.len());
for (col_name, dynamic_column_handle) in columns {
let col = dynamic_column_handle.open().unwrap();
match col {
columnar::DynamicColumn::Bool(_)
| columnar::DynamicColumn::I64(_)
| columnar::DynamicColumn::U64(_)
| columnar::DynamicColumn::F64(_)
| columnar::DynamicColumn::IpAddr(_)
| columnar::DynamicColumn::DateTime(_)
| columnar::DynamicColumn::Bytes(_) => {}
columnar::DynamicColumn::Str(str_column) => {
let num_vals = str_column.ords().values.num_vals();
let num_terms_dict = str_column.num_terms() as u64;
let max_ord = str_column.ords().values.iter().max().unwrap_or_default();
println!("{col_name:35} num_vals {num_vals:10} \t num_terms_dict {num_terms_dict:8} max_ord: {max_ord:8}",);
for ord in str_column.ords().values.iter() {
assert!(ord < num_terms_dict);
}
}
}
}
}
/// Opens a columnar file that was written by tantivy and validates it.
pub fn open_and_validate_columnar(path: &str) -> io::Result<ColumnarReader> {
let wrap_file = WrapFile::new(std::fs::File::open(path)?)?;
let slice = FileSlice::new(std::sync::Arc::new(wrap_file));
let (_footer, slice) = Footer::extract_footer(slice.clone()).unwrap();
let reader = ColumnarReader::open(slice).unwrap();
validate_columnar_reader(&reader);
Ok(reader)
}


@@ -66,7 +66,7 @@ impl<T: PartialOrd + Copy + std::fmt::Debug + Send + Sync + 'static + Default>
&'a self, &'a self,
docs: &'a [u32], docs: &'a [u32],
accessor: &Column<T>, accessor: &Column<T>,
) -> impl Iterator<Item = (DocId, T)> + 'a + use<'a, T> { ) -> impl Iterator<Item = (DocId, T)> + 'a {
if accessor.index.get_cardinality().is_full() { if accessor.index.get_cardinality().is_full() {
docs.iter().cloned().zip(self.val_cache.iter().cloned()) docs.iter().cloned().zip(self.val_cache.iter().cloned())
} else { } else {
@@ -139,7 +139,7 @@ mod tests {
missing_docs.push(missing_doc); missing_docs.push(missing_doc);
}); });
assert_eq!(missing_docs, Vec::<u32>::new()); assert_eq!(missing_docs, vec![]);
} }
#[test] #[test]


@@ -4,8 +4,8 @@ use std::{fmt, io};
use sstable::{Dictionary, VoidSSTable}; use sstable::{Dictionary, VoidSSTable};
use crate::RowId;
use crate::column::Column; use crate::column::Column;
use crate::RowId;
/// Dictionary encoded column. /// Dictionary encoded column.
/// ///


@@ -9,14 +9,13 @@ use std::sync::Arc;
use common::BinarySerializable; use common::BinarySerializable;
pub use dictionary_encoded::{BytesColumn, StrColumn}; pub use dictionary_encoded::{BytesColumn, StrColumn};
pub use serialize::{ pub use serialize::{
open_column_bytes, open_column_str, open_column_u64, open_column_u128, open_column_bytes, open_column_str, open_column_u128, open_column_u128_as_compact_u64,
open_column_u128_as_compact_u64, serialize_column_mappable_to_u64, open_column_u64, serialize_column_mappable_to_u128, serialize_column_mappable_to_u64,
serialize_column_mappable_to_u128,
}; };
use crate::column_index::{ColumnIndex, Set}; use crate::column_index::{ColumnIndex, Set};
use crate::column_values::monotonic_mapping::StrictlyMonotonicMappingToInternal; use crate::column_values::monotonic_mapping::StrictlyMonotonicMappingToInternal;
use crate::column_values::{ColumnValues, monotonic_map_column}; use crate::column_values::{monotonic_map_column, ColumnValues};
use crate::{Cardinality, DocId, EmptyColumnValues, MonotonicallyMappableToU64, RowId}; use crate::{Cardinality, DocId, EmptyColumnValues, MonotonicallyMappableToU64, RowId};
#[derive(Clone)] #[derive(Clone)]
@@ -114,7 +113,7 @@ impl<T: PartialOrd + Copy + Debug + Send + Sync + 'static> Column<T> {
} }
} }
/// Translates a block of docids to row_ids. /// Translates a block of docis to row_ids.
/// ///
/// returns the row_ids and the matching docids on the same index /// returns the row_ids and the matching docids on the same index
/// e.g. /// e.g.


@@ -6,10 +6,10 @@ use common::OwnedBytes;
use sstable::Dictionary; use sstable::Dictionary;
use crate::column::{BytesColumn, Column}; use crate::column::{BytesColumn, Column};
use crate::column_index::{SerializableColumnIndex, serialize_column_index}; use crate::column_index::{serialize_column_index, SerializableColumnIndex};
use crate::column_values::{ use crate::column_values::{
CodecType, MonotonicallyMappableToU64, MonotonicallyMappableToU128,
load_u64_based_column_values, serialize_column_values_u128, serialize_u64_based_column_values, load_u64_based_column_values, serialize_column_values_u128, serialize_u64_based_column_values,
CodecType, MonotonicallyMappableToU128, MonotonicallyMappableToU64,
}; };
use crate::iterable::Iterable; use crate::iterable::Iterable;
use crate::{StrColumn, Version}; use crate::{StrColumn, Version};


@@ -99,9 +99,9 @@ mod tests {
use crate::column_index::merge::detect_cardinality; use crate::column_index::merge::detect_cardinality;
use crate::column_index::multivalued_index::{ use crate::column_index::multivalued_index::{
MultiValueIndex, open_multivalued_index, serialize_multivalued_index, open_multivalued_index, serialize_multivalued_index, MultiValueIndex,
}; };
use crate::column_index::{OptionalIndex, SerializableColumnIndex, merge_column_index}; use crate::column_index::{merge_column_index, OptionalIndex, SerializableColumnIndex};
use crate::{ use crate::{
Cardinality, ColumnIndex, MergeRowOrder, RowAddr, RowId, ShuffleMergeOrder, StackMergeOrder, Cardinality, ColumnIndex, MergeRowOrder, RowAddr, RowId, ShuffleMergeOrder, StackMergeOrder,
}; };


@@ -137,8 +137,8 @@ impl Iterable<u32> for ShuffledMultivaluedIndex<'_> {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
use crate::RowAddr;
use crate::column_index::OptionalIndex; use crate::column_index::OptionalIndex;
use crate::RowAddr;
#[test] #[test]
fn test_integrate_num_vals_empty() { fn test_integrate_num_vals_empty() {


@@ -1,8 +1,8 @@
use std::ops::Range; use std::ops::Range;
use crate::column_index::SerializableColumnIndex;
use crate::column_index::multivalued_index::{MultiValueIndex, SerializableMultivalueIndex}; use crate::column_index::multivalued_index::{MultiValueIndex, SerializableMultivalueIndex};
use crate::column_index::serialize::SerializableOptionalIndex; use crate::column_index::serialize::SerializableOptionalIndex;
use crate::column_index::SerializableColumnIndex;
use crate::iterable::Iterable; use crate::iterable::Iterable;
use crate::{Cardinality, ColumnIndex, RowId, StackMergeOrder}; use crate::{Cardinality, ColumnIndex, RowId, StackMergeOrder};
@@ -56,7 +56,7 @@ fn get_doc_ids_with_values<'a>(
ColumnIndex::Full => Box::new(doc_range), ColumnIndex::Full => Box::new(doc_range),
ColumnIndex::Optional(optional_index) => Box::new( ColumnIndex::Optional(optional_index) => Box::new(
optional_index optional_index
.iter_docs() .iter_rows()
.map(move |row| row + doc_range.start), .map(move |row| row + doc_range.start),
), ),
ColumnIndex::Multivalued(multivalued_index) => match multivalued_index { ColumnIndex::Multivalued(multivalued_index) => match multivalued_index {
@@ -73,7 +73,7 @@ fn get_doc_ids_with_values<'a>(
MultiValueIndex::MultiValueIndexV2(multivalued_index) => Box::new( MultiValueIndex::MultiValueIndexV2(multivalued_index) => Box::new(
multivalued_index multivalued_index
.optional_index .optional_index
.iter_docs() .iter_rows()
.map(move |row| row + doc_range.start), .map(move |row| row + doc_range.start),
), ),
}, },
@@ -177,7 +177,7 @@ impl<'a> Iterable<RowId> for StackedOptionalIndex<'a> {
ColumnIndex::Full => Box::new(columnar_row_range), ColumnIndex::Full => Box::new(columnar_row_range),
ColumnIndex::Optional(optional_index) => Box::new( ColumnIndex::Optional(optional_index) => Box::new(
optional_index optional_index
.iter_docs() .iter_rows()
.map(move |row_id: RowId| columnar_row_range.start + row_id), .map(move |row_id: RowId| columnar_row_range.start + row_id),
), ),
ColumnIndex::Multivalued(_) => { ColumnIndex::Multivalued(_) => {


@@ -14,7 +14,7 @@ pub use merge::merge_column_index;
pub(crate) use multivalued_index::SerializableMultivalueIndex; pub(crate) use multivalued_index::SerializableMultivalueIndex;
pub use optional_index::{OptionalIndex, Set}; pub use optional_index::{OptionalIndex, Set};
pub use serialize::{ pub use serialize::{
SerializableColumnIndex, SerializableOptionalIndex, open_column_index, serialize_column_index, open_column_index, serialize_column_index, SerializableColumnIndex, SerializableOptionalIndex,
}; };
use crate::column_index::multivalued_index::MultiValueIndex; use crate::column_index::multivalued_index::MultiValueIndex;


@@ -8,7 +8,7 @@ use common::{CountingWriter, OwnedBytes};
use super::optional_index::{open_optional_index, serialize_optional_index}; use super::optional_index::{open_optional_index, serialize_optional_index};
use super::{OptionalIndex, SerializableOptionalIndex, Set}; use super::{OptionalIndex, SerializableOptionalIndex, Set};
use crate::column_values::{ use crate::column_values::{
CodecType, ColumnValues, load_u64_based_column_values, serialize_u64_based_column_values, load_u64_based_column_values, serialize_u64_based_column_values, CodecType, ColumnValues,
}; };
use crate::iterable::Iterable; use crate::iterable::Iterable;
use crate::{DocId, RowId, Version}; use crate::{DocId, RowId, Version};


@@ -7,7 +7,7 @@ mod set_block;
use common::{BinarySerializable, OwnedBytes, VInt}; use common::{BinarySerializable, OwnedBytes, VInt};
pub use set::{SelectCursor, Set, SetCodec}; pub use set::{SelectCursor, Set, SetCodec};
use set_block::{ use set_block::{
DENSE_BLOCK_NUM_BYTES, DenseBlock, DenseBlockCodec, SparseBlock, SparseBlockCodec, DenseBlock, DenseBlockCodec, SparseBlock, SparseBlockCodec, DENSE_BLOCK_NUM_BYTES,
}; };
use crate::iterable::Iterable; use crate::iterable::Iterable;
@@ -80,23 +80,23 @@ impl BlockVariant {
/// index is the block index. For each block `byte_start` and `offset` is computed. /// index is the block index. For each block `byte_start` and `offset` is computed.
#[derive(Clone)] #[derive(Clone)]
pub struct OptionalIndex { pub struct OptionalIndex {
num_docs: RowId, num_rows: RowId,
num_non_null_docs: RowId, num_non_null_rows: RowId,
block_data: OwnedBytes, block_data: OwnedBytes,
block_metas: Arc<[BlockMeta]>, block_metas: Arc<[BlockMeta]>,
} }
impl Iterable<u32> for &OptionalIndex { impl Iterable<u32> for &OptionalIndex {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = u32> + '_> { fn boxed_iter(&self) -> Box<dyn Iterator<Item = u32> + '_> {
Box::new(self.iter_docs()) Box::new(self.iter_rows())
} }
} }
impl std::fmt::Debug for OptionalIndex { impl std::fmt::Debug for OptionalIndex {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
f.debug_struct("OptionalIndex") f.debug_struct("OptionalIndex")
.field("num_docs", &self.num_docs) .field("num_rows", &self.num_rows)
.field("num_non_null_docs", &self.num_non_null_docs) .field("num_non_null_rows", &self.num_non_null_rows)
.finish_non_exhaustive() .finish_non_exhaustive()
} }
} }
@@ -259,13 +259,11 @@ impl Set<RowId> for OptionalIndex {
impl OptionalIndex { impl OptionalIndex {
pub fn for_test(num_rows: RowId, row_ids: &[RowId]) -> OptionalIndex { pub fn for_test(num_rows: RowId, row_ids: &[RowId]) -> OptionalIndex {
assert!( assert!(row_ids
row_ids .last()
.last() .copied()
.copied() .map(|last_row_id| last_row_id < num_rows)
.map(|last_row_id| last_row_id < num_rows) .unwrap_or(true));
.unwrap_or(true)
);
let mut buffer = Vec::new(); let mut buffer = Vec::new();
serialize_optional_index(&row_ids, num_rows, &mut buffer).unwrap(); serialize_optional_index(&row_ids, num_rows, &mut buffer).unwrap();
let bytes = OwnedBytes::new(buffer); let bytes = OwnedBytes::new(buffer);
@@ -273,17 +271,17 @@ impl OptionalIndex {
} }
pub fn num_docs(&self) -> RowId { pub fn num_docs(&self) -> RowId {
self.num_docs self.num_rows
} }
pub fn num_non_nulls(&self) -> RowId { pub fn num_non_nulls(&self) -> RowId {
self.num_non_null_docs self.num_non_null_rows
} }
pub fn iter_docs(&self) -> impl Iterator<Item = RowId> + '_ { pub fn iter_rows(&self) -> impl Iterator<Item = RowId> + '_ {
// TODO optimize // TODO optimize
let mut select_batch = self.select_cursor(); let mut select_batch = self.select_cursor();
(0..self.num_non_null_docs).map(move |rank| select_batch.select(rank)) (0..self.num_non_null_rows).map(move |rank| select_batch.select(rank))
} }
pub fn select_batch(&self, ranks: &mut [RowId]) { pub fn select_batch(&self, ranks: &mut [RowId]) {
let mut select_cursor = self.select_cursor(); let mut select_cursor = self.select_cursor();
@@ -521,15 +519,15 @@ pub fn open_optional_index(bytes: OwnedBytes) -> io::Result<OptionalIndex> {
let (mut bytes, num_non_empty_blocks_bytes) = bytes.rsplit(2); let (mut bytes, num_non_empty_blocks_bytes) = bytes.rsplit(2);
let num_non_empty_block_bytes = let num_non_empty_block_bytes =
u16::from_le_bytes(num_non_empty_blocks_bytes.as_slice().try_into().unwrap()); u16::from_le_bytes(num_non_empty_blocks_bytes.as_slice().try_into().unwrap());
let num_docs = VInt::deserialize_u64(&mut bytes)? as u32; let num_rows = VInt::deserialize_u64(&mut bytes)? as u32;
let block_metas_num_bytes = let block_metas_num_bytes =
num_non_empty_block_bytes as usize * SERIALIZED_BLOCK_META_NUM_BYTES; num_non_empty_block_bytes as usize * SERIALIZED_BLOCK_META_NUM_BYTES;
let (block_data, block_metas) = bytes.rsplit(block_metas_num_bytes); let (block_data, block_metas) = bytes.rsplit(block_metas_num_bytes);
let (block_metas, num_non_null_docs) = let (block_metas, num_non_null_rows) =
deserialize_optional_index_block_metadatas(block_metas.as_slice(), num_docs); deserialize_optional_index_block_metadatas(block_metas.as_slice(), num_rows);
let optional_index = OptionalIndex { let optional_index = OptionalIndex {
num_docs, num_rows,
num_non_null_docs, num_non_null_rows,
block_data, block_data,
block_metas: block_metas.into(), block_metas: block_metas.into(),
}; };


@@ -2,7 +2,7 @@ use std::io::{self, Write};
use common::BinarySerializable; use common::BinarySerializable;
use crate::column_index::optional_index::{ELEMENTS_PER_BLOCK, SelectCursor, Set, SetCodec}; use crate::column_index::optional_index::{SelectCursor, Set, SetCodec, ELEMENTS_PER_BLOCK};
#[inline(always)] #[inline(always)]
fn get_bit_at(input: u64, n: u16) -> bool { fn get_bit_at(input: u64, n: u16) -> bool {


@@ -1,7 +1,7 @@
mod dense; mod dense;
mod sparse; mod sparse;
pub use dense::{DENSE_BLOCK_NUM_BYTES, DenseBlock, DenseBlockCodec}; pub use dense::{DenseBlock, DenseBlockCodec, DENSE_BLOCK_NUM_BYTES};
pub use sparse::{SparseBlock, SparseBlockCodec}; pub use sparse::{SparseBlock, SparseBlockCodec};
#[cfg(test)] #[cfg(test)]


@@ -164,7 +164,7 @@ fn test_optional_index_large() {
fn test_optional_index_iter_aux(row_ids: &[RowId], num_rows: RowId) { fn test_optional_index_iter_aux(row_ids: &[RowId], num_rows: RowId) {
let optional_index = OptionalIndex::for_test(num_rows, row_ids); let optional_index = OptionalIndex::for_test(num_rows, row_ids);
assert_eq!(optional_index.num_docs(), num_rows); assert_eq!(optional_index.num_docs(), num_rows);
assert!(optional_index.iter_docs().eq(row_ids.iter().copied())); assert!(optional_index.iter_rows().eq(row_ids.iter().copied()));
} }
#[test] #[test]
@@ -254,7 +254,11 @@ mod bench {
let mut current = start; let mut current = start;
std::iter::from_fn(move || { std::iter::from_fn(move || {
current += rng.gen_range(avg_step_size - avg_deviation..=avg_step_size + avg_deviation); current += rng.gen_range(avg_step_size - avg_deviation..=avg_step_size + avg_deviation);
if current >= end { None } else { Some(current) } if current >= end {
None
} else {
Some(current)
}
}) })
} }


@@ -3,11 +3,11 @@ use std::io::Write;
use common::{CountingWriter, OwnedBytes}; use common::{CountingWriter, OwnedBytes};
use super::OptionalIndex;
use super::multivalued_index::SerializableMultivalueIndex; use super::multivalued_index::SerializableMultivalueIndex;
use crate::column_index::ColumnIndex; use super::OptionalIndex;
use crate::column_index::multivalued_index::serialize_multivalued_index; use crate::column_index::multivalued_index::serialize_multivalued_index;
use crate::column_index::optional_index::serialize_optional_index; use crate::column_index::optional_index::serialize_optional_index;
use crate::column_index::ColumnIndex;
use crate::iterable::Iterable; use crate::iterable::Iterable;
use crate::{Cardinality, RowId, Version}; use crate::{Cardinality, RowId, Version};


@@ -11,7 +11,7 @@ use crate::column_values::u64_based::*;
fn get_data() -> Vec<u64> { fn get_data() -> Vec<u64> {
let mut rng = StdRng::seed_from_u64(2u64); let mut rng = StdRng::seed_from_u64(2u64);
let mut data: Vec<_> = (100..55000_u64) let mut data: Vec<_> = (100..55000_u64)
.map(|num| num + rng.r#gen::<u8>() as u64) .map(|num| num + rng.gen::<u8>() as u64)
.collect(); .collect();
data.push(99_000); data.push(99_000);
data.insert(1000, 2000); data.insert(1000, 2000);


@@ -26,13 +26,13 @@ mod monotonic_column;
pub(crate) use merge::MergedColumnValues; pub(crate) use merge::MergedColumnValues;
pub use stats::ColumnStats; pub use stats::ColumnStats;
pub use u64_based::{
ALL_U64_CODEC_TYPES, CodecType, load_u64_based_column_values,
serialize_and_load_u64_based_column_values, serialize_u64_based_column_values,
};
pub use u128_based::{ pub use u128_based::{
CompactSpaceU64Accessor, open_u128_as_compact_u64, open_u128_mapped, open_u128_as_compact_u64, open_u128_mapped, serialize_column_values_u128,
serialize_column_values_u128, CompactSpaceU64Accessor,
};
pub use u64_based::{
load_u64_based_column_values, serialize_and_load_u64_based_column_values,
serialize_u64_based_column_values, CodecType, ALL_U64_CODEC_TYPES,
}; };
pub use vec_column::VecColumn; pub use vec_column::VecColumn;


@@ -2,8 +2,8 @@ use std::fmt::Debug;
use std::marker::PhantomData; use std::marker::PhantomData;
use std::ops::{Range, RangeInclusive}; use std::ops::{Range, RangeInclusive};
use crate::ColumnValues;
use crate::column_values::monotonic_mapping::StrictlyMonotonicFn; use crate::column_values::monotonic_mapping::StrictlyMonotonicFn;
use crate::ColumnValues;
struct MonotonicMappingColumn<C, T, Input> { struct MonotonicMappingColumn<C, T, Input> {
from_column: C, from_column: C,
@@ -99,10 +99,10 @@ where
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
use crate::column_values::VecColumn;
use crate::column_values::monotonic_mapping::{ use crate::column_values::monotonic_mapping::{
StrictlyMonotonicMappingInverter, StrictlyMonotonicMappingToInternal, StrictlyMonotonicMappingInverter, StrictlyMonotonicMappingToInternal,
}; };
use crate::column_values::VecColumn;
#[test] #[test]
fn test_monotonic_mapping_iter() { fn test_monotonic_mapping_iter() {


@@ -24,8 +24,8 @@ use build_compact_space::get_compact_space;
use common::{BinarySerializable, CountingWriter, OwnedBytes, VInt, VIntU128}; use common::{BinarySerializable, CountingWriter, OwnedBytes, VInt, VIntU128};
use tantivy_bitpacker::{BitPacker, BitUnpacker}; use tantivy_bitpacker::{BitPacker, BitUnpacker};
use crate::RowId;
use crate::column_values::ColumnValues; use crate::column_values::ColumnValues;
use crate::RowId;
/// The cost per blank is quite hard actually, since blanks are delta encoded, the actual cost of /// The cost per blank is quite hard actually, since blanks are delta encoded, the actual cost of
/// blanks depends on the number of blanks. /// blanks depends on the number of blanks.
@@ -653,14 +653,12 @@ mod tests {
), ),
&[3] &[3]
); );
assert!( assert!(get_positions_for_value_range_helper(
get_positions_for_value_range_helper( &decomp,
&decomp, 99998u128..=99998u128,
99998u128..=99998u128, complete_range.clone()
complete_range.clone() )
) .is_empty());
.is_empty()
);
assert_eq!( assert_eq!(
&get_positions_for_value_range_helper( &get_positions_for_value_range_helper(
&decomp, &decomp,


@@ -130,11 +130,11 @@ pub fn open_u128_as_compact_u64(mut bytes: OwnedBytes) -> io::Result<Arc<dyn Col
#[cfg(test)] #[cfg(test)]
pub(crate) mod tests { pub(crate) mod tests {
use super::*; use super::*;
use crate::column_values::CodecType;
use crate::column_values::u64_based::{ use crate::column_values::u64_based::{
ALL_U64_CODEC_TYPES, serialize_and_load_u64_based_column_values, serialize_and_load_u64_based_column_values, serialize_u64_based_column_values,
serialize_u64_based_column_values, ALL_U64_CODEC_TYPES,
}; };
use crate::column_values::CodecType;
#[test] #[test]
fn test_serialize_deserialize_u128_header() { fn test_serialize_deserialize_u128_header() {


@@ -4,7 +4,7 @@ use std::ops::{Range, RangeInclusive};
use common::{BinarySerializable, OwnedBytes}; use common::{BinarySerializable, OwnedBytes};
use fastdivide::DividerU64; use fastdivide::DividerU64;
use tantivy_bitpacker::{BitPacker, BitUnpacker, compute_num_bits}; use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats}; use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats};
use crate::{ColumnValues, RowId}; use crate::{ColumnValues, RowId};
@@ -23,7 +23,11 @@ const fn div_ceil(n: u64, q: NonZeroU64) -> u64 {
// copied from unstable rust standard library. // copied from unstable rust standard library.
let d = n / q.get(); let d = n / q.get();
let r = n % q.get(); let r = n % q.get();
if r > 0 { d + 1 } else { d } if r > 0 {
d + 1
} else {
d
}
} }
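Worked values for the `div_ceil` helper above (assertions illustrative):

```rust
use std::num::NonZeroU64;

const fn div_ceil(n: u64, q: NonZeroU64) -> u64 {
    // Same rounding-up division as above, copied for a self-contained run.
    let d = n / q.get();
    let r = n % q.get();
    if r > 0 { d + 1 } else { d }
}

fn main() {
    let q = NonZeroU64::new(8).unwrap();
    assert_eq!(div_ceil(0, q), 0);
    assert_eq!(div_ceil(64, q), 8); // exact division: no rounding
    assert_eq!(div_ceil(65, q), 9); // one extra unit for the remainder
}
```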
// The bitpacked codec applies a linear transformation `f` over data that are bitpacked. // The bitpacked codec applies a linear transformation `f` over data that are bitpacked.


@@ -4,12 +4,12 @@ use std::{io, iter};
use common::{BinarySerializable, CountingWriter, DeserializeFrom, OwnedBytes}; use common::{BinarySerializable, CountingWriter, DeserializeFrom, OwnedBytes};
use fastdivide::DividerU64; use fastdivide::DividerU64;
use tantivy_bitpacker::{BitPacker, BitUnpacker, compute_num_bits}; use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use crate::MonotonicallyMappableToU64;
use crate::column_values::u64_based::line::Line; use crate::column_values::u64_based::line::Line;
use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats}; use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats};
use crate::column_values::{ColumnValues, VecColumn}; use crate::column_values::{ColumnValues, VecColumn};
use crate::MonotonicallyMappableToU64;
const BLOCK_SIZE: u32 = 512u32; const BLOCK_SIZE: u32 = 512u32;


@@ -1,13 +1,13 @@
use std::io; use std::io;
use common::{BinarySerializable, OwnedBytes}; use common::{BinarySerializable, OwnedBytes};
use tantivy_bitpacker::{BitPacker, BitUnpacker, compute_num_bits}; use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use super::ColumnValues;
use super::line::Line; use super::line::Line;
use crate::RowId; use super::ColumnValues;
use crate::column_values::VecColumn;
use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats}; use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats};
use crate::column_values::VecColumn;
use crate::RowId;
const HALF_SPACE: u64 = u64::MAX / 2; const HALF_SPACE: u64 = u64::MAX / 2;
const LINE_ESTIMATION_BLOCK_LEN: usize = 512; const LINE_ESTIMATION_BLOCK_LEN: usize = 512;


@@ -17,7 +17,7 @@ pub use crate::column_values::u64_based::bitpacked::BitpackedCodec;
pub use crate::column_values::u64_based::blockwise_linear::BlockwiseLinearCodec; pub use crate::column_values::u64_based::blockwise_linear::BlockwiseLinearCodec;
pub use crate::column_values::u64_based::linear::LinearCodec; pub use crate::column_values::u64_based::linear::LinearCodec;
pub use crate::column_values::u64_based::stats_collector::StatsCollector; pub use crate::column_values::u64_based::stats_collector::StatsCollector;
use crate::column_values::{ColumnStats, monotonic_map_column}; use crate::column_values::{monotonic_map_column, ColumnStats};
use crate::iterable::Iterable; use crate::iterable::Iterable;
use crate::{ColumnValues, MonotonicallyMappableToU64}; use crate::{ColumnValues, MonotonicallyMappableToU64};


@@ -2,8 +2,8 @@ use std::num::NonZeroU64;
use fastdivide::DividerU64; use fastdivide::DividerU64;
use crate::RowId;
use crate::column_values::ColumnStats; use crate::column_values::ColumnStats;
use crate::RowId;
/// Compute the gcd of two non null numbers. /// Compute the gcd of two non null numbers.
/// ///
@@ -96,8 +96,8 @@ impl StatsCollector {
mod tests { mod tests {
use std::num::NonZeroU64; use std::num::NonZeroU64;
use crate::column_values::u64_based::stats_collector::{compute_gcd, StatsCollector};
use crate::column_values::u64_based::ColumnStats; use crate::column_values::u64_based::ColumnStats;
use crate::column_values::u64_based::stats_collector::{StatsCollector, compute_gcd};
fn compute_stats(vals: impl Iterator<Item = u64>) -> ColumnStats { fn compute_stats(vals: impl Iterator<Item = u64>) -> ColumnStats {
let mut stats_collector = StatsCollector::default(); let mut stats_collector = StatsCollector::default();
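The doc comment above references a `compute_gcd` helper whose body falls outside this diff's context; as an assumption of what such a helper computes, a minimal Euclidean sketch over `NonZeroU64`:

```rust
use std::num::NonZeroU64;

// Illustrative only: the real compute_gcd in stats_collector.rs is not
// shown in this diff.
fn gcd(mut a: NonZeroU64, mut b: NonZeroU64) -> NonZeroU64 {
    loop {
        match NonZeroU64::new(a.get() % b.get()) {
            Some(r) => {
                a = b;
                b = r;
            }
            None => return b,
        }
    }
}

fn main() {
    let g = gcd(NonZeroU64::new(12).unwrap(), NonZeroU64::new(18).unwrap());
    assert_eq!(g.get(), 6);
}
```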


@@ -1,6 +1,5 @@
use proptest::prelude::*; use proptest::prelude::*;
use proptest::{prop_oneof, proptest}; use proptest::{prop_oneof, proptest};
use rand::Rng;
#[test] #[test]
fn test_serialize_and_load_simple() { fn test_serialize_and_load_simple() {


@@ -4,8 +4,8 @@ use std::net::Ipv6Addr;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use crate::InvalidData;
use crate::value::NumericalType; use crate::value::NumericalType;
use crate::InvalidData;
/// The column type represents the column type. /// The column type represents the column type.
/// Any changes need to be propagated to `COLUMN_TYPES`. /// Any changes need to be propagated to `COLUMN_TYPES`.


@@ -3,7 +3,7 @@ use std::io::{self, Write};
use common::{BitSet, CountingWriter, ReadOnlyBitSet}; use common::{BitSet, CountingWriter, ReadOnlyBitSet};
use sstable::{SSTable, Streamer, TermOrdinal, VoidSSTable}; use sstable::{SSTable, Streamer, TermOrdinal, VoidSSTable};
use super::term_merger::{TermMerger, TermsWithSegmentOrd}; use super::term_merger::TermMerger;
use crate::column::serialize_column_mappable_to_u64; use crate::column::serialize_column_mappable_to_u64;
use crate::column_index::SerializableColumnIndex; use crate::column_index::SerializableColumnIndex;
use crate::iterable::Iterable; use crate::iterable::Iterable;
@@ -126,17 +126,14 @@ fn serialize_merged_dict(
let mut term_ord_mapping = TermOrdinalMapping::default(); let mut term_ord_mapping = TermOrdinalMapping::default();
let mut field_term_streams = Vec::new(); let mut field_term_streams = Vec::new();
for (segment_ord, column_opt) in bytes_columns.iter().enumerate() { for column_opt in bytes_columns.iter() {
if let Some(column) = column_opt { if let Some(column) = column_opt {
term_ord_mapping.add_segment(column.dictionary.num_terms()); term_ord_mapping.add_segment(column.dictionary.num_terms());
let terms: Streamer<VoidSSTable> = column.dictionary.stream()?; let terms: Streamer<VoidSSTable> = column.dictionary.stream()?;
field_term_streams.push(TermsWithSegmentOrd { terms, segment_ord }); field_term_streams.push(terms);
} else { } else {
term_ord_mapping.add_segment(0); term_ord_mapping.add_segment(0);
field_term_streams.push(TermsWithSegmentOrd { field_term_streams.push(Streamer::empty());
terms: Streamer::empty(),
segment_ord,
});
} }
} }
@@ -194,7 +191,6 @@ fn serialize_merged_dict(
#[derive(Default, Debug)] #[derive(Default, Debug)]
struct TermOrdinalMapping { struct TermOrdinalMapping {
/// Contains the new term ordinals for each segment.
per_segment_new_term_ordinals: Vec<Vec<TermOrdinal>>, per_segment_new_term_ordinals: Vec<Vec<TermOrdinal>>,
} }
@@ -209,6 +205,6 @@ impl TermOrdinalMapping {
} }
fn get_segment(&self, segment_ord: u32) -> &[TermOrdinal] { fn get_segment(&self, segment_ord: u32) -> &[TermOrdinal] {
&self.per_segment_new_term_ordinals[segment_ord as usize] &(self.per_segment_new_term_ordinals[segment_ord as usize])[..]
} }
} }


@@ -26,7 +26,7 @@ impl StackMergeOrder {
let mut cumulated_row_ids: Vec<RowId> = Vec::with_capacity(columnars.len()); let mut cumulated_row_ids: Vec<RowId> = Vec::with_capacity(columnars.len());
let mut cumulated_row_id = 0; let mut cumulated_row_id = 0;
for columnar in columnars { for columnar in columnars {
cumulated_row_id += columnar.num_docs(); cumulated_row_id += columnar.num_rows();
cumulated_row_ids.push(cumulated_row_id); cumulated_row_ids.push(cumulated_row_id);
} }
StackMergeOrder { cumulated_row_ids } StackMergeOrder { cumulated_row_ids }


@@ -10,11 +10,11 @@ use std::sync::Arc;
pub use merge_mapping::{MergeRowOrder, ShuffleMergeOrder, StackMergeOrder}; pub use merge_mapping::{MergeRowOrder, ShuffleMergeOrder, StackMergeOrder};
use super::writer::ColumnarSerializer; use super::writer::ColumnarSerializer;
use crate::column::{serialize_column_mappable_to_u64, serialize_column_mappable_to_u128}; use crate::column::{serialize_column_mappable_to_u128, serialize_column_mappable_to_u64};
use crate::column_values::MergedColumnValues; use crate::column_values::MergedColumnValues;
use crate::columnar::ColumnarReader;
use crate::columnar::merge::merge_dict_column::merge_bytes_or_str_column; use crate::columnar::merge::merge_dict_column::merge_bytes_or_str_column;
use crate::columnar::writer::CompatibleNumericalTypes; use crate::columnar::writer::CompatibleNumericalTypes;
use crate::columnar::ColumnarReader;
use crate::dynamic_column::DynamicColumn; use crate::dynamic_column::DynamicColumn;
use crate::{ use crate::{
BytesColumn, Column, ColumnIndex, ColumnType, ColumnValues, DynamicColumnHandle, NumericalType, BytesColumn, Column, ColumnIndex, ColumnType, ColumnValues, DynamicColumnHandle, NumericalType,
@@ -80,12 +80,13 @@ pub fn merge_columnar(
output: &mut impl io::Write, output: &mut impl io::Write,
) -> io::Result<()> { ) -> io::Result<()> {
let mut serializer = ColumnarSerializer::new(output); let mut serializer = ColumnarSerializer::new(output);
let num_docs_per_columnar = columnar_readers let num_rows_per_columnar = columnar_readers
.iter() .iter()
.map(|reader| reader.num_docs()) .map(|reader| reader.num_rows())
.collect::<Vec<u32>>(); .collect::<Vec<u32>>();
let columns_to_merge = group_columns_for_merge(columnar_readers, required_columns)?; let columns_to_merge =
group_columns_for_merge(columnar_readers, required_columns, &merge_row_order)?;
for res in columns_to_merge { for res in columns_to_merge {
let ((column_name, _column_type_category), grouped_columns) = res; let ((column_name, _column_type_category), grouped_columns) = res;
let grouped_columns = grouped_columns.open(&merge_row_order)?; let grouped_columns = grouped_columns.open(&merge_row_order)?;
@@ -93,18 +94,15 @@ pub fn merge_columnar(
continue; continue;
} }
let column_type_after_merge = grouped_columns.column_type_after_merge(); let column_type = grouped_columns.column_type_after_merge();
let mut columns = grouped_columns.columns; let mut columns = grouped_columns.columns;
// Make sure the number of columns is the same as the number of columnar readers. coerce_columns(column_type, &mut columns)?;
// Or num_docs_per_columnar would be incorrect.
assert_eq!(columns.len(), columnar_readers.len());
coerce_columns(column_type_after_merge, &mut columns)?;
let mut column_serializer = let mut column_serializer =
serializer.start_serialize_column(column_name.as_bytes(), column_type_after_merge); serializer.start_serialize_column(column_name.as_bytes(), column_type);
merge_column( merge_column(
column_type_after_merge, column_type,
&num_docs_per_columnar, &num_rows_per_columnar,
columns, columns,
&merge_row_order, &merge_row_order,
&mut column_serializer, &mut column_serializer,
@@ -130,7 +128,7 @@ fn dynamic_column_to_u64_monotonic(dynamic_column: DynamicColumn) -> Option<Colu
fn merge_column( fn merge_column(
column_type: ColumnType, column_type: ColumnType,
num_docs_per_column: &[u32], num_docs_per_column: &[u32],
columns_to_merge: Vec<Option<DynamicColumn>>, columns: Vec<Option<DynamicColumn>>,
merge_row_order: &MergeRowOrder, merge_row_order: &MergeRowOrder,
wrt: &mut impl io::Write, wrt: &mut impl io::Write,
) -> io::Result<()> { ) -> io::Result<()> {
@@ -140,21 +138,20 @@ fn merge_column(
| ColumnType::F64 | ColumnType::F64
| ColumnType::DateTime | ColumnType::DateTime
| ColumnType::Bool => { | ColumnType::Bool => {
let mut column_indexes: Vec<ColumnIndex> = Vec::with_capacity(columns_to_merge.len()); let mut column_indexes: Vec<ColumnIndex> = Vec::with_capacity(columns.len());
let mut column_values: Vec<Option<Arc<dyn ColumnValues>>> = let mut column_values: Vec<Option<Arc<dyn ColumnValues>>> =
Vec::with_capacity(columns_to_merge.len()); Vec::with_capacity(columns.len());
for (i, dynamic_column_opt) in columns_to_merge.into_iter().enumerate() { for (i, dynamic_column_opt) in columns.into_iter().enumerate() {
match dynamic_column_opt.and_then(dynamic_column_to_u64_monotonic) { if let Some(Column { index: idx, values }) =
Some(Column { index: idx, values }) => { dynamic_column_opt.and_then(dynamic_column_to_u64_monotonic)
column_indexes.push(idx); {
column_values.push(Some(values)); column_indexes.push(idx);
} column_values.push(Some(values));
None => { } else {
column_indexes.push(ColumnIndex::Empty { column_indexes.push(ColumnIndex::Empty {
num_docs: num_docs_per_column[i], num_docs: num_docs_per_column[i],
}); });
column_values.push(None); column_values.push(None);
}
} }
} }
let merged_column_index = let merged_column_index =
@@ -167,10 +164,10 @@ fn merge_column(
serialize_column_mappable_to_u64(merged_column_index, &merge_column_values, wrt)?; serialize_column_mappable_to_u64(merged_column_index, &merge_column_values, wrt)?;
} }
ColumnType::IpAddr => { ColumnType::IpAddr => {
let mut column_indexes: Vec<ColumnIndex> = Vec::with_capacity(columns_to_merge.len()); let mut column_indexes: Vec<ColumnIndex> = Vec::with_capacity(columns.len());
let mut column_values: Vec<Option<Arc<dyn ColumnValues<Ipv6Addr>>>> = let mut column_values: Vec<Option<Arc<dyn ColumnValues<Ipv6Addr>>>> =
Vec::with_capacity(columns_to_merge.len()); Vec::with_capacity(columns.len());
for (i, dynamic_column_opt) in columns_to_merge.into_iter().enumerate() { for (i, dynamic_column_opt) in columns.into_iter().enumerate() {
if let Some(DynamicColumn::IpAddr(Column { index: idx, values })) = if let Some(DynamicColumn::IpAddr(Column { index: idx, values })) =
dynamic_column_opt dynamic_column_opt
{ {
@@ -195,10 +192,9 @@ fn merge_column(
serialize_column_mappable_to_u128(merged_column_index, &merge_column_values, wrt)?; serialize_column_mappable_to_u128(merged_column_index, &merge_column_values, wrt)?;
} }
ColumnType::Bytes | ColumnType::Str => { ColumnType::Bytes | ColumnType::Str => {
let mut column_indexes: Vec<ColumnIndex> = Vec::with_capacity(columns_to_merge.len()); let mut column_indexes: Vec<ColumnIndex> = Vec::with_capacity(columns.len());
let mut bytes_columns: Vec<Option<BytesColumn>> = let mut bytes_columns: Vec<Option<BytesColumn>> = Vec::with_capacity(columns.len());
Vec::with_capacity(columns_to_merge.len()); for (i, dynamic_column_opt) in columns.into_iter().enumerate() {
for (i, dynamic_column_opt) in columns_to_merge.into_iter().enumerate() {
match dynamic_column_opt { match dynamic_column_opt {
Some(DynamicColumn::Str(str_column)) => { Some(DynamicColumn::Str(str_column)) => {
column_indexes.push(str_column.term_ord_column.index.clone()); column_indexes.push(str_column.term_ord_column.index.clone());
@@ -252,15 +248,13 @@ impl GroupedColumns {
if column_type.len() == 1 { if column_type.len() == 1 {
return column_type.into_iter().next().unwrap(); return column_type.into_iter().next().unwrap();
} }
// At the moment, only the numerical column type category has more than one possible // At the moment, only the numerical categorical column type has more than one possible
// column type. // column type.
assert!( assert!(self
self.columns .columns
.iter() .iter()
.flatten() .flatten()
.all(|el| ColumnTypeCategory::from(el.column_type()) .all(|el| ColumnTypeCategory::from(el.column_type()) == ColumnTypeCategory::Numerical));
== ColumnTypeCategory::Numerical)
);
merged_numerical_columns_type(self.columns.iter().flatten()).into() merged_numerical_columns_type(self.columns.iter().flatten()).into()
} }
} }
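
The `column_type_after_merge` logic above only has to disambiguate within the numerical category. A simplified sketch of value-range-based coercion, consistent with the coercion tests in this diff (tantivy's real rule works on per-column compatibility metadata, not a flat value list):

// Pick the narrowest integer type whose range covers every merged value,
// falling back to f64. `NumType` is a stand-in for tantivy's NumericalType.
#[derive(Debug, PartialEq)]
enum NumType { I64, U64, F64 }

fn merged_type(values: &[i128]) -> NumType {
    if values.iter().all(|&v| v >= 0 && v <= u64::MAX as i128) {
        NumType::U64
    } else if values.iter().all(|&v| v >= i64::MIN as i128 && v <= i64::MAX as i128) {
        NumType::I64
    } else {
        NumType::F64
    }
}

fn main() {
    // Mirrors the coercion tests in this diff: {1, u64::MAX} -> u64, {-1, 2} -> i64.
    assert_eq!(merged_type(&[1, u64::MAX as i128]), NumType::U64);
    assert_eq!(merged_type(&[-1, 2]), NumType::I64);
}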
@@ -367,7 +361,7 @@ fn is_empty_after_merge(
ColumnIndex::Empty { .. } => true, ColumnIndex::Empty { .. } => true,
ColumnIndex::Full => alive_bitset.len() == 0, ColumnIndex::Full => alive_bitset.len() == 0,
ColumnIndex::Optional(optional_index) => { ColumnIndex::Optional(optional_index) => {
for doc in optional_index.iter_docs() { for doc in optional_index.iter_rows() {
if alive_bitset.contains(doc) { if alive_bitset.contains(doc) {
return false; return false;
} }
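
The loop above decides whether an optional column still carries any value once deleted rows are dropped: it scans the rows holding a value and tests each against the alive bitset. A self-contained sketch of the same check, with a plain `Vec<bool>` standing in for the alive bitset:

// A column survives the merge only if at least one row that holds a value
// is still alive. `rows_with_values` stands in for the OptionalIndex.
fn is_empty_after_merge(rows_with_values: &[u32], alive: &[bool]) -> bool {
    rows_with_values
        .iter()
        .all(|&row| !alive.get(row as usize).copied().unwrap_or(false))
}

fn main() {
    let alive = vec![true, false, true];
    assert!(!is_empty_after_merge(&[1, 2], &alive)); // row 2 is alive and has a value
    assert!(is_empty_after_merge(&[1], &alive));     // only a deleted row had a value
}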
@@ -397,6 +391,7 @@ fn is_empty_after_merge(
fn group_columns_for_merge<'a>( fn group_columns_for_merge<'a>(
columnar_readers: &'a [&'a ColumnarReader], columnar_readers: &'a [&'a ColumnarReader],
required_columns: &'a [(String, ColumnType)], required_columns: &'a [(String, ColumnType)],
_merge_row_order: &'a MergeRowOrder,
) -> io::Result<BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle>> { ) -> io::Result<BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle>> {
let mut columns: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> = BTreeMap::new(); let mut columns: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> = BTreeMap::new();

View File

@@ -5,29 +5,28 @@ use sstable::TermOrdinal;
use crate::Streamer; use crate::Streamer;
/// The terms of a column with the ordinal of the segment. pub struct HeapItem<'a> {
pub struct TermsWithSegmentOrd<'a> { pub streamer: Streamer<'a>,
pub terms: Streamer<'a>,
pub segment_ord: usize, pub segment_ord: usize,
} }
impl PartialEq for TermsWithSegmentOrd<'_> { impl PartialEq for HeapItem<'_> {
fn eq(&self, other: &Self) -> bool { fn eq(&self, other: &Self) -> bool {
self.segment_ord == other.segment_ord self.segment_ord == other.segment_ord
} }
} }
impl Eq for TermsWithSegmentOrd<'_> {} impl Eq for HeapItem<'_> {}
impl<'a> PartialOrd for TermsWithSegmentOrd<'a> { impl<'a> PartialOrd for HeapItem<'a> {
fn partial_cmp(&self, other: &TermsWithSegmentOrd<'a>) -> Option<Ordering> { fn partial_cmp(&self, other: &HeapItem<'a>) -> Option<Ordering> {
Some(self.cmp(other)) Some(self.cmp(other))
} }
} }
impl<'a> Ord for TermsWithSegmentOrd<'a> { impl<'a> Ord for HeapItem<'a> {
fn cmp(&self, other: &TermsWithSegmentOrd<'a>) -> Ordering { fn cmp(&self, other: &HeapItem<'a>) -> Ordering {
(&other.terms.key(), &other.segment_ord).cmp(&(&self.terms.key(), &self.segment_ord)) (&other.streamer.key(), &other.segment_ord).cmp(&(&self.streamer.key(), &self.segment_ord))
} }
} }
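
`std::collections::BinaryHeap` is a max-heap, so the `Ord` impl above deliberately compares `other` against `self`; that reversal makes the heap pop the lexicographically smallest term first. The same trick in isolation (`Item` is illustrative only):

use std::cmp::Ordering;
use std::collections::BinaryHeap;

struct Item(&'static str);

impl PartialEq for Item {
    fn eq(&self, other: &Self) -> bool { self.0 == other.0 }
}
impl Eq for Item {}
impl PartialOrd for Item {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) }
}
impl Ord for Item {
    fn cmp(&self, other: &Self) -> Ordering {
        other.0.cmp(self.0) // reversed on purpose: min-heap behavior
    }
}

fn main() {
    let mut heap = BinaryHeap::new();
    heap.push(Item("b"));
    heap.push(Item("a"));
    assert_eq!(heap.pop().unwrap().0, "a"); // smallest term comes out first
}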
@@ -38,32 +37,39 @@ impl<'a> Ord for TermsWithSegmentOrd<'a> {
/// - the term /// - the term
/// - a slice with the ordinals of the segments containing the term. /// - a slice with the ordinals of the segments containing the term.
pub struct TermMerger<'a> { pub struct TermMerger<'a> {
heap: BinaryHeap<TermsWithSegmentOrd<'a>>, heap: BinaryHeap<HeapItem<'a>>,
term_streams_with_segment: Vec<TermsWithSegmentOrd<'a>>, current_streamers: Vec<HeapItem<'a>>,
} }
impl<'a> TermMerger<'a> { impl<'a> TermMerger<'a> {
/// Stream of merged term dictionary /// Stream of merged term dictionary
pub fn new(term_streams_with_segment: Vec<TermsWithSegmentOrd<'a>>) -> TermMerger<'a> { pub fn new(streams: Vec<Streamer<'a>>) -> TermMerger<'a> {
TermMerger { TermMerger {
heap: BinaryHeap::new(), heap: BinaryHeap::new(),
term_streams_with_segment, current_streamers: streams
.into_iter()
.enumerate()
.map(|(ord, streamer)| HeapItem {
streamer,
segment_ord: ord,
})
.collect(),
} }
} }
pub(crate) fn matching_segments<'b: 'a>( pub(crate) fn matching_segments<'b: 'a>(
&'b self, &'b self,
) -> impl 'b + Iterator<Item = (usize, TermOrdinal)> { ) -> impl 'b + Iterator<Item = (usize, TermOrdinal)> {
self.term_streams_with_segment self.current_streamers
.iter() .iter()
.map(|heap_item| (heap_item.segment_ord, heap_item.terms.term_ord())) .map(|heap_item| (heap_item.segment_ord, heap_item.streamer.term_ord()))
} }
fn advance_segments(&mut self) { fn advance_segments(&mut self) {
let streamers = &mut self.term_streams_with_segment; let streamers = &mut self.current_streamers;
let heap = &mut self.heap; let heap = &mut self.heap;
for mut heap_item in streamers.drain(..) { for mut heap_item in streamers.drain(..) {
if heap_item.terms.advance() { if heap_item.streamer.advance() {
heap.push(heap_item); heap.push(heap_item);
} }
} }
@@ -74,19 +80,18 @@ impl<'a> TermMerger<'a> {
/// False if there is none. /// False if there is none.
pub fn advance(&mut self) -> bool { pub fn advance(&mut self) -> bool {
self.advance_segments(); self.advance_segments();
match self.heap.pop() { if let Some(head) = self.heap.pop() {
Some(head) => { self.current_streamers.push(head);
self.term_streams_with_segment.push(head); while let Some(next_streamer) = self.heap.peek() {
while let Some(next_streamer) = self.heap.peek() { if self.current_streamers[0].streamer.key() != next_streamer.streamer.key() {
if self.term_streams_with_segment[0].terms.key() != next_streamer.terms.key() { break;
break;
}
let next_heap_it = self.heap.pop().unwrap(); // safe : we peeked beforehand
self.term_streams_with_segment.push(next_heap_it);
} }
true let next_heap_it = self.heap.pop().unwrap(); // safe : we peeked beforehand
self.current_streamers.push(next_heap_it);
} }
_ => false, true
} else {
false
} }
} }
@@ -96,6 +101,6 @@ impl<'a> TermMerger<'a> {
/// if and only if advance() has been called before /// if and only if advance() has been called before
/// and "true" was returned. /// and "true" was returned.
pub fn key(&self) -> &[u8] { pub fn key(&self) -> &[u8] {
self.term_streams_with_segment[0].terms.key() self.current_streamers[0].streamer.key()
} }
} }
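
`advance` pops the smallest key and then keeps draining the heap while its head carries the same key, so every segment positioned on the current term is visible through `matching_segments`. A std-only sketch of this k-way merge, with sorted `Vec<&str>`s standing in for the sstable streamers:

use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Returns each distinct term together with the ordinals of the segments
// that contain it, in sorted term order.
fn merge_terms(segments: Vec<Vec<&str>>) -> Vec<(String, Vec<usize>)> {
    let mut heap = BinaryHeap::new();
    let mut cursors = vec![0usize; segments.len()];
    for (ord, seg) in segments.iter().enumerate() {
        if let Some(&t) = seg.first() {
            heap.push(Reverse((t, ord)));
        }
    }
    let mut out = Vec::new();
    while let Some(Reverse((term, ord))) = heap.pop() {
        let mut ords = vec![ord];
        // Drain every other segment positioned on the same term.
        while let Some(&Reverse((t, o))) = heap.peek() {
            if t != term { break; }
            heap.pop();
            ords.push(o);
        }
        // Advance all matching segments and re-insert the ones not exhausted.
        for &o in &ords {
            cursors[o] += 1;
            if let Some(&next) = segments[o].get(cursors[o]) {
                heap.push(Reverse((next, o)));
            }
        }
        out.push((term.to_string(), ords));
    }
    out
}

fn main() {
    let merged = merge_terms(vec![vec!["a", "c"], vec!["a", "b"]]);
    assert_eq!(merged[0], ("a".to_string(), vec![0, 1]));
}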

View File

@@ -1,10 +1,7 @@
use itertools::Itertools; use itertools::Itertools;
use proptest::collection::vec;
use proptest::prelude::*;
use super::*; use super::*;
use crate::columnar::{ColumnarReader, MergeRowOrder, StackMergeOrder, merge_columnar}; use crate::{Cardinality, ColumnarWriter, HasAssociatedColumnType, RowId};
use crate::{Cardinality, ColumnarWriter, DynamicColumn, HasAssociatedColumnType, RowId};
fn make_columnar<T: Into<NumericalValue> + HasAssociatedColumnType + Copy>( fn make_columnar<T: Into<NumericalValue> + HasAssociatedColumnType + Copy>(
column_name: &str, column_name: &str,
@@ -29,8 +26,9 @@ fn test_column_coercion_to_u64() {
// u64 type // u64 type
let columnar2 = make_columnar("numbers", &[u64::MAX]); let columnar2 = make_columnar("numbers", &[u64::MAX]);
let columnars = &[&columnar1, &columnar2]; let columnars = &[&columnar1, &columnar2];
let merge_order = StackMergeOrder::stack(columnars).into();
let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> = let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> =
group_columns_for_merge(columnars, &[]).unwrap(); group_columns_for_merge(columnars, &[], &merge_order).unwrap();
assert_eq!(column_map.len(), 1); assert_eq!(column_map.len(), 1);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical))); assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical)));
} }
@@ -40,8 +38,9 @@ fn test_column_coercion_to_i64() {
let columnar1 = make_columnar("numbers", &[-1i64]); let columnar1 = make_columnar("numbers", &[-1i64]);
let columnar2 = make_columnar("numbers", &[2u64]); let columnar2 = make_columnar("numbers", &[2u64]);
let columnars = &[&columnar1, &columnar2]; let columnars = &[&columnar1, &columnar2];
let merge_order = StackMergeOrder::stack(columnars).into();
let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> = let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> =
group_columns_for_merge(columnars, &[]).unwrap(); group_columns_for_merge(columnars, &[], &merge_order).unwrap();
assert_eq!(column_map.len(), 1); assert_eq!(column_map.len(), 1);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical))); assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical)));
} }
@@ -64,8 +63,14 @@ fn test_group_columns_with_required_column() {
let columnar1 = make_columnar("numbers", &[1i64]); let columnar1 = make_columnar("numbers", &[1i64]);
let columnar2 = make_columnar("numbers", &[2u64]); let columnar2 = make_columnar("numbers", &[2u64]);
let columnars = &[&columnar1, &columnar2]; let columnars = &[&columnar1, &columnar2];
let merge_order = StackMergeOrder::stack(columnars).into();
let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> = let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> =
group_columns_for_merge(columnars, &[("numbers".to_string(), ColumnType::U64)]).unwrap(); group_columns_for_merge(
&[&columnar1, &columnar2],
&[("numbers".to_string(), ColumnType::U64)],
&merge_order,
)
.unwrap();
assert_eq!(column_map.len(), 1); assert_eq!(column_map.len(), 1);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical))); assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical)));
} }
@@ -75,9 +80,13 @@ fn test_group_columns_required_column_with_no_existing_columns() {
let columnar1 = make_columnar("numbers", &[2u64]); let columnar1 = make_columnar("numbers", &[2u64]);
let columnar2 = make_columnar("numbers", &[2u64]); let columnar2 = make_columnar("numbers", &[2u64]);
let columnars = &[&columnar1, &columnar2]; let columnars = &[&columnar1, &columnar2];
let column_map: BTreeMap<_, _> = let merge_order = StackMergeOrder::stack(columnars).into();
group_columns_for_merge(columnars, &[("required_col".to_string(), ColumnType::Str)]) let column_map: BTreeMap<_, _> = group_columns_for_merge(
.unwrap(); columnars,
&[("required_col".to_string(), ColumnType::Str)],
&merge_order,
)
.unwrap();
assert_eq!(column_map.len(), 2); assert_eq!(column_map.len(), 2);
let columns = &column_map let columns = &column_map
.get(&("required_col".to_string(), ColumnTypeCategory::Str)) .get(&("required_col".to_string(), ColumnTypeCategory::Str))
@@ -93,8 +102,14 @@ fn test_group_columns_required_column_is_above_all_columns_have_the_same_type_ru
let columnar1 = make_columnar("numbers", &[2i64]); let columnar1 = make_columnar("numbers", &[2i64]);
let columnar2 = make_columnar("numbers", &[2i64]); let columnar2 = make_columnar("numbers", &[2i64]);
let columnars = &[&columnar1, &columnar2]; let columnars = &[&columnar1, &columnar2];
let merge_order = StackMergeOrder::stack(columnars).into();
let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> = let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> =
group_columns_for_merge(columnars, &[("numbers".to_string(), ColumnType::U64)]).unwrap(); group_columns_for_merge(
columnars,
&[("numbers".to_string(), ColumnType::U64)],
&merge_order,
)
.unwrap();
assert_eq!(column_map.len(), 1); assert_eq!(column_map.len(), 1);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical))); assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical)));
} }
@@ -104,8 +119,9 @@ fn test_missing_column() {
let columnar1 = make_columnar("numbers", &[-1i64]); let columnar1 = make_columnar("numbers", &[-1i64]);
let columnar2 = make_columnar("numbers2", &[2u64]); let columnar2 = make_columnar("numbers2", &[2u64]);
let columnars = &[&columnar1, &columnar2]; let columnars = &[&columnar1, &columnar2];
let merge_order = StackMergeOrder::stack(columnars).into();
let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> = let column_map: BTreeMap<(String, ColumnTypeCategory), GroupedColumnsHandle> =
group_columns_for_merge(columnars, &[]).unwrap(); group_columns_for_merge(columnars, &[], &merge_order).unwrap();
assert_eq!(column_map.len(), 2); assert_eq!(column_map.len(), 2);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical))); assert!(column_map.contains_key(&("numbers".to_string(), ColumnTypeCategory::Numerical)));
{ {
@@ -208,7 +224,7 @@ fn test_merge_columnar_numbers() {
) )
.unwrap(); .unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap(); let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_docs(), 3); assert_eq!(columnar_reader.num_rows(), 3);
assert_eq!(columnar_reader.num_columns(), 1); assert_eq!(columnar_reader.num_columns(), 1);
let cols = columnar_reader.read_columns("numbers").unwrap(); let cols = columnar_reader.read_columns("numbers").unwrap();
let dynamic_column = cols[0].open().unwrap(); let dynamic_column = cols[0].open().unwrap();
@@ -236,7 +252,7 @@ fn test_merge_columnar_texts() {
) )
.unwrap(); .unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap(); let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_docs(), 3); assert_eq!(columnar_reader.num_rows(), 3);
assert_eq!(columnar_reader.num_columns(), 1); assert_eq!(columnar_reader.num_columns(), 1);
let cols = columnar_reader.read_columns("texts").unwrap(); let cols = columnar_reader.read_columns("texts").unwrap();
let dynamic_column = cols[0].open().unwrap(); let dynamic_column = cols[0].open().unwrap();
@@ -285,7 +301,7 @@ fn test_merge_columnar_byte() {
) )
.unwrap(); .unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap(); let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_docs(), 4); assert_eq!(columnar_reader.num_rows(), 4);
assert_eq!(columnar_reader.num_columns(), 1); assert_eq!(columnar_reader.num_columns(), 1);
let cols = columnar_reader.read_columns("bytes").unwrap(); let cols = columnar_reader.read_columns("bytes").unwrap();
let dynamic_column = cols[0].open().unwrap(); let dynamic_column = cols[0].open().unwrap();
@@ -341,7 +357,7 @@ fn test_merge_columnar_byte_with_missing() {
) )
.unwrap(); .unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap(); let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_docs(), 3 + 2 + 3); assert_eq!(columnar_reader.num_rows(), 3 + 2 + 3);
assert_eq!(columnar_reader.num_columns(), 2); assert_eq!(columnar_reader.num_columns(), 2);
let cols = columnar_reader.read_columns("col").unwrap(); let cols = columnar_reader.read_columns("col").unwrap();
let dynamic_column = cols[0].open().unwrap(); let dynamic_column = cols[0].open().unwrap();
@@ -393,7 +409,7 @@ fn test_merge_columnar_different_types() {
) )
.unwrap(); .unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap(); let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_docs(), 4); assert_eq!(columnar_reader.num_rows(), 4);
assert_eq!(columnar_reader.num_columns(), 2); assert_eq!(columnar_reader.num_columns(), 2);
let cols = columnar_reader.read_columns("mixed").unwrap(); let cols = columnar_reader.read_columns("mixed").unwrap();
@@ -403,11 +419,11 @@ fn test_merge_columnar_different_types() {
panic!() panic!()
}; };
assert_eq!(vals.get_cardinality(), Cardinality::Optional); assert_eq!(vals.get_cardinality(), Cardinality::Optional);
assert_eq!(vals.values_for_doc(0).collect_vec(), Vec::<i64>::new()); assert_eq!(vals.values_for_doc(0).collect_vec(), vec![]);
assert_eq!(vals.values_for_doc(1).collect_vec(), Vec::<i64>::new()); assert_eq!(vals.values_for_doc(1).collect_vec(), vec![]);
assert_eq!(vals.values_for_doc(2).collect_vec(), Vec::<i64>::new()); assert_eq!(vals.values_for_doc(2).collect_vec(), vec![]);
assert_eq!(vals.values_for_doc(3).collect_vec(), vec![1]); assert_eq!(vals.values_for_doc(3).collect_vec(), vec![1]);
assert_eq!(vals.values_for_doc(4).collect_vec(), Vec::<i64>::new()); assert_eq!(vals.values_for_doc(4).collect_vec(), vec![]);
// text column // text column
let dynamic_column = cols[1].open().unwrap(); let dynamic_column = cols[1].open().unwrap();
@@ -458,7 +474,7 @@ fn test_merge_columnar_different_empty_cardinality() {
) )
.unwrap(); .unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap(); let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_docs(), 2); assert_eq!(columnar_reader.num_rows(), 2);
assert_eq!(columnar_reader.num_columns(), 2); assert_eq!(columnar_reader.num_columns(), 2);
let cols = columnar_reader.read_columns("mixed").unwrap(); let cols = columnar_reader.read_columns("mixed").unwrap();
@@ -470,119 +486,3 @@ fn test_merge_columnar_different_empty_cardinality() {
let dynamic_column = cols[1].open().unwrap(); let dynamic_column = cols[1].open().unwrap();
assert_eq!(dynamic_column.get_cardinality(), Cardinality::Optional); assert_eq!(dynamic_column.get_cardinality(), Cardinality::Optional);
} }
#[derive(Debug, Clone)]
struct ColumnSpec {
column_name: String,
/// (row_id, term)
terms: Vec<(RowId, Vec<u8>)>,
}
#[derive(Clone, Debug)]
struct ColumnarSpec {
columns: Vec<ColumnSpec>,
}
/// Generate a random (row_id, term) pair:
/// - row_id in [0..10]
/// - term is either from POSSIBLE_TERMS or random bytes
fn rowid_and_term_strategy() -> impl Strategy<Value = (RowId, Vec<u8>)> {
const POSSIBLE_TERMS: &[&[u8]] = &[b"a", b"b", b"allo"];
let term_strat = prop_oneof![
// pick from the fixed list
(0..POSSIBLE_TERMS.len()).prop_map(|i| POSSIBLE_TERMS[i].to_vec()),
// or random bytes (length 0..10)
prop::collection::vec(any::<u8>(), 0..10),
];
(0u32..11, term_strat)
}
/// Generate one ColumnSpec, with a random name and a random list of (row_id, term).
/// We sort it by row_id so that data is in ascending order.
fn column_spec_strategy() -> impl Strategy<Value = ColumnSpec> {
let column_name = prop_oneof![
Just("col".to_string()),
Just("col2".to_string()),
"col.*".prop_map(|s| s),
];
// We'll produce 0..8 (rowid,term) entries for this column
let data_strat = vec(rowid_and_term_strategy(), 0..8).prop_map(|mut pairs| {
// Sort by row_id
pairs.sort_by_key(|(row_id, _)| *row_id);
pairs
});
(column_name, data_strat).prop_map(|(name, data)| ColumnSpec {
column_name: name,
terms: data,
})
}
/// Strategy to generate a ColumnarSpec
fn columnar_strategy() -> impl Strategy<Value = ColumnarSpec> {
vec(column_spec_strategy(), 0..3).prop_map(|columns| ColumnarSpec { columns })
}
/// Strategy to generate multiple ColumnarSpecs, each of which we will treat
/// as one "columnar" to be merged together.
fn columnars_strategy() -> impl Strategy<Value = Vec<ColumnarSpec>> {
vec(columnar_strategy(), 1..4)
}
/// Build a `ColumnarReader` from a `ColumnarSpec`
fn build_columnar(spec: &ColumnarSpec) -> ColumnarReader {
let mut writer = ColumnarWriter::default();
let mut max_row_id = 0;
for col in &spec.columns {
for &(row_id, ref term) in &col.terms {
writer.record_bytes(row_id, &col.column_name, term);
max_row_id = max_row_id.max(row_id);
}
}
let mut buffer = Vec::new();
writer.serialize(max_row_id + 1, &mut buffer).unwrap();
ColumnarReader::open(buffer).unwrap()
}
proptest! {
// We just test that the merge_columnar function doesn't crash.
#![proptest_config(ProptestConfig::with_cases(256))]
#[test]
fn test_merge_columnar_bytes_no_crash(columnars in columnars_strategy(), second_merge_columnars in columnars_strategy()) {
let columnars: Vec<ColumnarReader> = columnars.iter()
.map(build_columnar)
.collect();
let mut out = Vec::new();
let columnar_refs: Vec<&ColumnarReader> = columnars.iter().collect();
let stack_merge_order = StackMergeOrder::stack(&columnar_refs);
merge_columnar(
&columnar_refs,
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut out,
).unwrap();
let merged_reader = ColumnarReader::open(out).unwrap();
// Merge the second set of columnars with the result of the first merge
let mut columnars: Vec<ColumnarReader> = second_merge_columnars.iter()
.map(build_columnar)
.collect();
columnars.push(merged_reader);
let mut out = Vec::new();
let columnar_refs: Vec<&ColumnarReader> = columnars.iter().collect();
let stack_merge_order = StackMergeOrder::stack(&columnar_refs);
merge_columnar(
&columnar_refs,
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut out,
).unwrap();
}
}

View File

@@ -1,5 +1,3 @@
#![allow(clippy::manual_div_ceil)]
mod column_type; mod column_type;
mod format_version; mod format_version;
mod merge; mod merge;
@@ -7,9 +5,9 @@ mod reader;
mod writer; mod writer;
pub use column_type::{ColumnType, HasAssociatedColumnType}; pub use column_type::{ColumnType, HasAssociatedColumnType};
pub use format_version::{CURRENT_VERSION, Version}; pub use format_version::{Version, CURRENT_VERSION};
#[cfg(test)] #[cfg(test)]
pub(crate) use merge::ColumnTypeCategory; pub(crate) use merge::ColumnTypeCategory;
pub use merge::{MergeRowOrder, ShuffleMergeOrder, StackMergeOrder, merge_columnar}; pub use merge::{merge_columnar, MergeRowOrder, ShuffleMergeOrder, StackMergeOrder};
pub use reader::ColumnarReader; pub use reader::ColumnarReader;
pub use writer::ColumnarWriter; pub use writer::ColumnarWriter;

View File

@@ -1,11 +1,10 @@
use std::{fmt, io, mem}; use std::{fmt, io, mem};
use common::BinarySerializable;
use common::file_slice::FileSlice; use common::file_slice::FileSlice;
use common::json_path_writer::JSON_PATH_SEGMENT_SEP; use common::BinarySerializable;
use sstable::{Dictionary, RangeSSTable}; use sstable::{Dictionary, RangeSSTable};
use crate::columnar::{ColumnType, format_version}; use crate::columnar::{format_version, ColumnType};
use crate::dynamic_column::DynamicColumnHandle; use crate::dynamic_column::DynamicColumnHandle;
use crate::{RowId, Version}; use crate::{RowId, Version};
@@ -19,13 +18,13 @@ fn io_invalid_data(msg: String) -> io::Error {
pub struct ColumnarReader { pub struct ColumnarReader {
column_dictionary: Dictionary<RangeSSTable>, column_dictionary: Dictionary<RangeSSTable>,
column_data: FileSlice, column_data: FileSlice,
num_docs: RowId, num_rows: RowId,
format_version: Version, format_version: Version,
} }
impl fmt::Debug for ColumnarReader { impl fmt::Debug for ColumnarReader {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let num_rows = self.num_docs(); let num_rows = self.num_rows();
let columns = self.list_columns().unwrap(); let columns = self.list_columns().unwrap();
let num_cols = columns.len(); let num_cols = columns.len();
let mut debug_struct = f.debug_struct("Columnar"); let mut debug_struct = f.debug_struct("Columnar");
@@ -77,19 +76,6 @@ fn read_all_columns_in_stream(
Ok(results) Ok(results)
} }
fn column_dictionary_prefix_for_column_name(column_name: &str) -> String {
// Each column is associated with a given `column_key`,
// which starts with `column_name\0column_header`.
//
// Listing the columns associated with the given column name is therefore
// equivalent to listing every `column_key` with the prefix `column_name\0`.
format!("{}{}", column_name, '\0')
}
fn column_dictionary_prefix_for_subpath(root_path: &str) -> String {
format!("{}{}", root_path, JSON_PATH_SEGMENT_SEP as char)
}
impl ColumnarReader { impl ColumnarReader {
/// Opens a new Columnar file. /// Opens a new Columnar file.
pub fn open<F>(file_slice: F) -> io::Result<ColumnarReader> pub fn open<F>(file_slice: F) -> io::Result<ColumnarReader>
@@ -112,13 +98,13 @@ impl ColumnarReader {
Ok(ColumnarReader { Ok(ColumnarReader {
column_dictionary, column_dictionary,
column_data, column_data,
num_docs: num_rows, num_rows,
format_version, format_version,
}) })
} }
pub fn num_docs(&self) -> RowId { pub fn num_rows(&self) -> RowId {
self.num_docs self.num_rows
} }
// Iterate over the columns in a sorted way // Iterate over the columns in a sorted way
pub fn iter_columns( pub fn iter_columns(
@@ -158,14 +144,32 @@ impl ColumnarReader {
Ok(self.iter_columns()?.collect()) Ok(self.iter_columns()?.collect())
} }
fn stream_for_column_range(&self, column_name: &str) -> sstable::StreamerBuilder<RangeSSTable> {
// Each column is associated with a given `column_key`,
// which starts with `column_name\0column_header`.
//
// Listing the columns associated with the given column name is therefore
// equivalent to listing every `column_key` with the prefix `column_name\0`.
//
// This is in turn equivalent to searching for the range
// `[column_name\0` .. `column_name\1)`.
// TODO can we get some more generic `prefix(..)` logic in the dictionary.
let mut start_key = column_name.to_string();
start_key.push('\0');
let mut end_key = column_name.to_string();
end_key.push(1u8 as char);
self.column_dictionary
.range()
.ge(start_key.as_bytes())
.lt(end_key.as_bytes())
}
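
The `\0`/`\x01` bounds above implement a prefix scan: every key starting with `column_name\0` sorts inside `[column_name\0 .. column_name\x01)`, and nothing else does. The same range trick demonstrated on a plain `BTreeMap`:

use std::collections::BTreeMap;

fn main() {
    // Keys follow the `column_name\0<header>` layout described above.
    let mut dict: BTreeMap<Vec<u8>, u32> = BTreeMap::new();
    dict.insert(b"col\0h1".to_vec(), 1);
    dict.insert(b"col\0h2".to_vec(), 2);
    dict.insert(b"col2\0h1".to_vec(), 3); // different column, must not match

    let start = b"col\0".to_vec();
    let end = b"col\x01".to_vec(); // first byte after '\0' closes the prefix range
    let hits: Vec<u32> = dict.range(start..end).map(|(_, v)| *v).collect();
    assert_eq!(hits, vec![1, 2]);
}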
pub async fn read_columns_async( pub async fn read_columns_async(
&self, &self,
column_name: &str, column_name: &str,
) -> io::Result<Vec<DynamicColumnHandle>> { ) -> io::Result<Vec<DynamicColumnHandle>> {
let prefix = column_dictionary_prefix_for_column_name(column_name);
let stream = self let stream = self
.column_dictionary .stream_for_column_range(column_name)
.prefix_range(prefix)
.into_stream_async() .into_stream_async()
.await?; .await?;
read_all_columns_in_stream(stream, &self.column_data, self.format_version) read_all_columns_in_stream(stream, &self.column_data, self.format_version)
@@ -176,35 +180,7 @@ impl ColumnarReader {
/// There can be more than one column associated to a given column name, provided they have /// There can be more than one column associated to a given column name, provided they have
/// different types. /// different types.
pub fn read_columns(&self, column_name: &str) -> io::Result<Vec<DynamicColumnHandle>> { pub fn read_columns(&self, column_name: &str) -> io::Result<Vec<DynamicColumnHandle>> {
let prefix = column_dictionary_prefix_for_column_name(column_name); let stream = self.stream_for_column_range(column_name).into_stream()?;
let stream = self.column_dictionary.prefix_range(prefix).into_stream()?;
read_all_columns_in_stream(stream, &self.column_data, self.format_version)
}
pub async fn read_subpath_columns_async(
&self,
root_path: &str,
) -> io::Result<Vec<DynamicColumnHandle>> {
let prefix = column_dictionary_prefix_for_subpath(root_path);
let stream = self
.column_dictionary
.prefix_range(prefix)
.into_stream_async()
.await?;
read_all_columns_in_stream(stream, &self.column_data, self.format_version)
}
/// Get all inner columns for a given JSON prefix, i.e. columns whose name starts
/// with the prefix followed by the [`JSON_PATH_SEGMENT_SEP`].
///
/// There can be more than one column associated to each path within the JSON structure,
/// provided they have different types.
pub fn read_subpath_columns(&self, root_path: &str) -> io::Result<Vec<DynamicColumnHandle>> {
let prefix = column_dictionary_prefix_for_subpath(root_path);
let stream = self
.column_dictionary
.prefix_range(prefix.as_bytes())
.into_stream()?;
read_all_columns_in_stream(stream, &self.column_data, self.format_version) read_all_columns_in_stream(stream, &self.column_data, self.format_version)
} }
@@ -216,8 +192,6 @@ impl ColumnarReader {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use common::json_path_writer::JSON_PATH_SEGMENT_SEP;
use crate::{ColumnType, ColumnarReader, ColumnarWriter}; use crate::{ColumnType, ColumnarReader, ColumnarWriter};
#[test] #[test]
@@ -250,64 +224,6 @@ mod tests {
assert_eq!(columns[0].1.column_type(), ColumnType::U64); assert_eq!(columns[0].1.column_type(), ColumnType::U64);
} }
#[test]
fn test_read_columns() {
let mut columnar_writer = ColumnarWriter::default();
columnar_writer.record_column_type("col", ColumnType::U64, false);
columnar_writer.record_numerical(1, "col", 1u64);
let mut buffer = Vec::new();
columnar_writer.serialize(2, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
{
let columns = columnar.read_columns("col").unwrap();
assert_eq!(columns.len(), 1);
assert_eq!(columns[0].column_type(), ColumnType::U64);
}
{
let columns = columnar.read_columns("other").unwrap();
assert_eq!(columns.len(), 0);
}
}
#[test]
fn test_read_subpath_columns() {
let mut columnar_writer = ColumnarWriter::default();
columnar_writer.record_str(
0,
&format!("col1{}subcol1", JSON_PATH_SEGMENT_SEP as char),
"hello",
);
columnar_writer.record_numerical(
0,
&format!("col1{}subcol2", JSON_PATH_SEGMENT_SEP as char),
1i64,
);
columnar_writer.record_str(1, "col1", "hello");
columnar_writer.record_str(0, "col2", "hello");
let mut buffer = Vec::new();
columnar_writer.serialize(2, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
{
let columns = columnar.read_subpath_columns("col1").unwrap();
assert_eq!(columns.len(), 2);
assert_eq!(columns[0].column_type(), ColumnType::Str);
assert_eq!(columns[1].column_type(), ColumnType::I64);
}
{
let columns = columnar.read_subpath_columns("col1.subcol1").unwrap();
assert_eq!(columns.len(), 0);
}
{
let columns = columnar.read_subpath_columns("col2").unwrap();
assert_eq!(columns.len(), 0);
}
{
let columns = columnar.read_subpath_columns("other").unwrap();
assert_eq!(columns.len(), 0);
}
}
#[test] #[test]
#[should_panic(expected = "Input type forbidden")] #[should_panic(expected = "Input type forbidden")]
fn test_list_columns_strict_typing_panics_on_wrong_types() { fn test_list_columns_strict_typing_panics_on_wrong_types() {

View File

@@ -42,7 +42,7 @@ impl ColumnWriter {
&self, &self,
arena: &MemoryArena, arena: &MemoryArena,
buffer: &'a mut Vec<u8>, buffer: &'a mut Vec<u8>,
) -> impl Iterator<Item = ColumnOperation<V>> + 'a + use<'a, V> { ) -> impl Iterator<Item = ColumnOperation<V>> + 'a {
buffer.clear(); buffer.clear();
self.values.read_to_end(arena, buffer); self.values.read_to_end(arena, buffer);
let mut cursor: &[u8] = &buffer[..]; let mut cursor: &[u8] = &buffer[..];
@@ -104,10 +104,9 @@ pub(crate) struct NumericalColumnWriter {
impl NumericalColumnWriter { impl NumericalColumnWriter {
pub fn force_numerical_type(&mut self, numerical_type: NumericalType) { pub fn force_numerical_type(&mut self, numerical_type: NumericalType) {
assert!( assert!(self
self.compatible_numerical_types .compatible_numerical_types
.is_type_accepted(numerical_type) .is_type_accepted(numerical_type));
);
self.compatible_numerical_types = CompatibleNumericalTypes::StaticType(numerical_type); self.compatible_numerical_types = CompatibleNumericalTypes::StaticType(numerical_type);
} }
} }
@@ -212,7 +211,7 @@ impl NumericalColumnWriter {
self, self,
arena: &MemoryArena, arena: &MemoryArena,
buffer: &'a mut Vec<u8>, buffer: &'a mut Vec<u8>,
) -> impl Iterator<Item = ColumnOperation<NumericalValue>> + 'a + use<'a> { ) -> impl Iterator<Item = ColumnOperation<NumericalValue>> + 'a {
self.column_writer.operation_iterator(arena, buffer) self.column_writer.operation_iterator(arena, buffer)
} }
} }
@@ -256,7 +255,7 @@ impl StrOrBytesColumnWriter {
&self, &self,
arena: &MemoryArena, arena: &MemoryArena,
byte_buffer: &'a mut Vec<u8>, byte_buffer: &'a mut Vec<u8>,
) -> impl Iterator<Item = ColumnOperation<UnorderedId>> + 'a + use<'a> { ) -> impl Iterator<Item = ColumnOperation<UnorderedId>> + 'a {
self.column_writer.operation_iterator(arena, byte_buffer) self.column_writer.operation_iterator(arena, byte_buffer)
} }
} }

View File

@@ -8,13 +8,13 @@ use std::net::Ipv6Addr;
use column_operation::ColumnOperation; use column_operation::ColumnOperation;
pub(crate) use column_writers::CompatibleNumericalTypes; pub(crate) use column_writers::CompatibleNumericalTypes;
use common::CountingWriter;
use common::json_path_writer::JSON_END_OF_PATH; use common::json_path_writer::JSON_END_OF_PATH;
use common::CountingWriter;
pub(crate) use serializer::ColumnarSerializer; pub(crate) use serializer::ColumnarSerializer;
use stacker::{Addr, ArenaHashMap, MemoryArena}; use stacker::{Addr, ArenaHashMap, MemoryArena};
use crate::column_index::{SerializableColumnIndex, SerializableOptionalIndex}; use crate::column_index::{SerializableColumnIndex, SerializableOptionalIndex};
use crate::column_values::{MonotonicallyMappableToU64, MonotonicallyMappableToU128}; use crate::column_values::{MonotonicallyMappableToU128, MonotonicallyMappableToU64};
use crate::columnar::column_type::ColumnType; use crate::columnar::column_type::ColumnType;
use crate::columnar::writer::column_writers::{ use crate::columnar::writer::column_writers::{
ColumnWriter, NumericalColumnWriter, StrOrBytesColumnWriter, ColumnWriter, NumericalColumnWriter, StrOrBytesColumnWriter,
@@ -285,6 +285,7 @@ impl ColumnarWriter {
.map(|(column_name, addr)| (column_name, ColumnType::DateTime, addr)), .map(|(column_name, addr)| (column_name, ColumnType::DateTime, addr)),
); );
columns.sort_unstable_by_key(|(column_name, col_type, _)| (*column_name, *col_type)); columns.sort_unstable_by_key(|(column_name, col_type, _)| (*column_name, *col_type));
let (arena, buffers, dictionaries) = (&self.arena, &mut self.buffers, &self.dictionaries); let (arena, buffers, dictionaries) = (&self.arena, &mut self.buffers, &self.dictionaries);
let mut symbol_byte_buffer: Vec<u8> = Vec::new(); let mut symbol_byte_buffer: Vec<u8> = Vec::new();
for (column_name, column_type, addr) in columns { for (column_name, column_type, addr) in columns {

View File

@@ -3,11 +3,11 @@ use std::io::Write;
use common::json_path_writer::JSON_END_OF_PATH; use common::json_path_writer::JSON_END_OF_PATH;
use common::{BinarySerializable, CountingWriter}; use common::{BinarySerializable, CountingWriter};
use sstable::RangeSSTable;
use sstable::value::RangeValueWriter; use sstable::value::RangeValueWriter;
use sstable::RangeSSTable;
use crate::RowId;
use crate::columnar::ColumnType; use crate::columnar::ColumnType;
use crate::RowId;
pub struct ColumnarSerializer<W: io::Write> { pub struct ColumnarSerializer<W: io::Write> {
wrt: CountingWriter<W>, wrt: CountingWriter<W>,

View File

@@ -1,6 +1,6 @@
use crate::RowId;
use crate::column_index::{SerializableMultivalueIndex, SerializableOptionalIndex}; use crate::column_index::{SerializableMultivalueIndex, SerializableOptionalIndex};
use crate::iterable::Iterable; use crate::iterable::Iterable;
use crate::RowId;
/// The `IndexBuilder` interprets a sequence of /// The `IndexBuilder` interprets a sequence of
/// calls of the form: /// calls of the form:
@@ -31,13 +31,12 @@ pub struct OptionalIndexBuilder {
impl OptionalIndexBuilder { impl OptionalIndexBuilder {
pub fn finish(&mut self, num_rows: RowId) -> impl Iterable<RowId> + '_ { pub fn finish(&mut self, num_rows: RowId) -> impl Iterable<RowId> + '_ {
debug_assert!( debug_assert!(self
self.docs .docs
.last() .last()
.copied() .copied()
.map(|last_doc| last_doc < num_rows) .map(|last_doc| last_doc < num_rows)
.unwrap_or(true) .unwrap_or(true));
);
&self.docs[..] &self.docs[..]
} }
@@ -49,13 +48,12 @@ impl OptionalIndexBuilder {
impl IndexBuilder for OptionalIndexBuilder { impl IndexBuilder for OptionalIndexBuilder {
#[inline(always)] #[inline(always)]
fn record_row(&mut self, doc: RowId) { fn record_row(&mut self, doc: RowId) {
debug_assert!( debug_assert!(self
self.docs .docs
.last() .last()
.copied() .copied()
.map(|prev_doc| doc > prev_doc) .map(|prev_doc| doc > prev_doc)
.unwrap_or(true) .unwrap_or(true));
);
self.docs.push(doc); self.docs.push(doc);
} }
} }
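
Both `debug_assert!`s in this file enforce one invariant: rows must be recorded in strictly increasing order, so the backing `Vec` is sorted by construction. A minimal sketch of that contract:

// Each recorded row must be greater than the previous one; the Vec then
// stays sorted for free and never holds duplicates.
#[derive(Default)]
struct OptionalIndexBuilder {
    docs: Vec<u32>,
}

impl OptionalIndexBuilder {
    fn record_row(&mut self, doc: u32) {
        debug_assert!(self.docs.last().map(|&prev| doc > prev).unwrap_or(true));
        self.docs.push(doc);
    }
}

fn main() {
    let mut builder = OptionalIndexBuilder::default();
    builder.record_row(0);
    builder.record_row(5); // ok: strictly increasing
    assert_eq!(builder.docs, vec![0, 5]);
}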

View File

@@ -3,8 +3,8 @@ use std::path::PathBuf;
use itertools::Itertools; use itertools::Itertools;
use crate::{ use crate::{
CURRENT_VERSION, Cardinality, Column, ColumnarReader, DynamicColumn, StackMergeOrder, merge_columnar, Cardinality, Column, ColumnarReader, DynamicColumn, StackMergeOrder,
merge_columnar, CURRENT_VERSION,
}; };
const NUM_DOCS: u32 = u16::MAX as u32; const NUM_DOCS: u32 = u16::MAX as u32;

View File

@@ -6,7 +6,7 @@ use common::file_slice::FileSlice;
use common::{ByteCount, DateTime, HasLen, OwnedBytes}; use common::{ByteCount, DateTime, HasLen, OwnedBytes};
use crate::column::{BytesColumn, Column, StrColumn}; use crate::column::{BytesColumn, Column, StrColumn};
use crate::column_values::{StrictlyMonotonicFn, monotonic_map_column}; use crate::column_values::{monotonic_map_column, StrictlyMonotonicFn};
use crate::columnar::ColumnType; use crate::columnar::ColumnType;
use crate::{Cardinality, ColumnIndex, ColumnValues, NumericalType, Version}; use crate::{Cardinality, ColumnIndex, ColumnValues, NumericalType, Version};

View File

@@ -17,7 +17,7 @@
//! column. //! column.
//! - [column_values]: Stores the values of a column in a dense format. //! - [column_values]: Stores the values of a column in a dense format.
// #![cfg_attr(all(feature = "unstable", test), feature(test))] #![cfg_attr(all(feature = "unstable", test), feature(test))]
#[cfg(test)] #[cfg(test)]
#[macro_use] #[macro_use]
@@ -44,11 +44,11 @@ pub use block_accessor::ColumnBlockAccessor;
pub use column::{BytesColumn, Column, StrColumn}; pub use column::{BytesColumn, Column, StrColumn};
pub use column_index::ColumnIndex; pub use column_index::ColumnIndex;
pub use column_values::{ pub use column_values::{
ColumnValues, EmptyColumnValues, MonotonicallyMappableToU64, MonotonicallyMappableToU128, ColumnValues, EmptyColumnValues, MonotonicallyMappableToU128, MonotonicallyMappableToU64,
}; };
pub use columnar::{ pub use columnar::{
CURRENT_VERSION, ColumnType, ColumnarReader, ColumnarWriter, HasAssociatedColumnType, merge_columnar, ColumnType, ColumnarReader, ColumnarWriter, HasAssociatedColumnType,
MergeRowOrder, ShuffleMergeOrder, StackMergeOrder, Version, merge_columnar, MergeRowOrder, ShuffleMergeOrder, StackMergeOrder, Version, CURRENT_VERSION,
}; };
use sstable::VoidSSTable; use sstable::VoidSSTable;
pub use value::{NumericalType, NumericalValue}; pub use value::{NumericalType, NumericalValue};

View File

@@ -380,7 +380,7 @@ fn assert_columnar_eq(
right: &ColumnarReader, right: &ColumnarReader,
lenient_on_numerical_value: bool, lenient_on_numerical_value: bool,
) { ) {
assert_eq!(left.num_docs(), right.num_docs()); assert_eq!(left.num_rows(), right.num_rows());
let left_columns = left.list_columns().unwrap(); let left_columns = left.list_columns().unwrap();
let right_columns = right.list_columns().unwrap(); let right_columns = right.list_columns().unwrap();
assert_eq!(left_columns.len(), right_columns.len()); assert_eq!(left_columns.len(), right_columns.len());
@@ -588,7 +588,7 @@ proptest! {
#[test] #[test]
fn test_single_columnar_builder_proptest(docs in columnar_docs_strategy()) { fn test_single_columnar_builder_proptest(docs in columnar_docs_strategy()) {
let columnar = build_columnar(&docs[..]); let columnar = build_columnar(&docs[..]);
assert_eq!(columnar.num_docs() as usize, docs.len()); assert_eq!(columnar.num_rows() as usize, docs.len());
let mut expected_columns: HashMap<(&str, ColumnTypeCategory), HashMap<u32, Vec<&ColumnValue>> > = Default::default(); let mut expected_columns: HashMap<(&str, ColumnTypeCategory), HashMap<u32, Vec<&ColumnValue>> > = Default::default();
for (doc_id, doc_vals) in docs.iter().enumerate() { for (doc_id, doc_vals) in docs.iter().enumerate() {
for (col_name, col_val) in doc_vals { for (col_name, col_val) in doc_vals {
@@ -715,9 +715,8 @@ fn test_columnar_merging_number_columns() {
// TODO test required_columns // TODO test required_columns
// TODO document edge case: required_columns incompatible with values. // TODO document edge case: required_columns incompatible with values.
#[allow(clippy::type_complexity)] fn columnar_docs_and_remap(
fn columnar_docs_and_remap() ) -> impl Strategy<Value = (Vec<Vec<Vec<(&'static str, ColumnValue)>>>, Vec<RowAddr>)> {
-> impl Strategy<Value = (Vec<Vec<Vec<(&'static str, ColumnValue)>>>, Vec<RowAddr>)> {
proptest::collection::vec(columnar_docs_strategy(), 2..=3).prop_flat_map( proptest::collection::vec(columnar_docs_strategy(), 2..=3).prop_flat_map(
|columnars_docs: Vec<Vec<Vec<(&str, ColumnValue)>>>| { |columnars_docs: Vec<Vec<Vec<(&str, ColumnValue)>>>| {
let row_addrs: Vec<RowAddr> = columnars_docs let row_addrs: Vec<RowAddr> = columnars_docs
@@ -820,7 +819,7 @@ fn test_columnar_merge_empty() {
) )
.unwrap(); .unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap(); let merged_columnar = ColumnarReader::open(output).unwrap();
assert_eq!(merged_columnar.num_docs(), 0); assert_eq!(merged_columnar.num_rows(), 0);
assert_eq!(merged_columnar.num_columns(), 0); assert_eq!(merged_columnar.num_columns(), 0);
} }
@@ -846,7 +845,7 @@ fn test_columnar_merge_single_str_column() {
) )
.unwrap(); .unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap(); let merged_columnar = ColumnarReader::open(output).unwrap();
assert_eq!(merged_columnar.num_docs(), 1); assert_eq!(merged_columnar.num_rows(), 1);
assert_eq!(merged_columnar.num_columns(), 1); assert_eq!(merged_columnar.num_columns(), 1);
} }
@@ -878,7 +877,7 @@ fn test_delete_decrease_cardinality() {
) )
.unwrap(); .unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap(); let merged_columnar = ColumnarReader::open(output).unwrap();
assert_eq!(merged_columnar.num_docs(), 1); assert_eq!(merged_columnar.num_rows(), 1);
assert_eq!(merged_columnar.num_columns(), 1); assert_eq!(merged_columnar.num_columns(), 1);
let cols = merged_columnar.read_columns("c").unwrap(); let cols = merged_columnar.read_columns("c").unwrap();
assert_eq!(cols.len(), 1); assert_eq!(cols.len(), 1);

View File

@@ -1,9 +1,9 @@
[package] [package]
name = "tantivy-common" name = "tantivy-common"
version = "0.9.0" version = "0.7.0"
authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"] authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"]
license = "MIT" license = "MIT"
edition = "2024" edition = "2021"
description = "common traits and utility functions used by multiple tantivy subcrates" description = "common traits and utility functions used by multiple tantivy subcrates"
documentation = "https://docs.rs/tantivy_common/" documentation = "https://docs.rs/tantivy_common/"
homepage = "https://github.com/quickwit-oss/tantivy" homepage = "https://github.com/quickwit-oss/tantivy"
@@ -13,7 +13,7 @@ repository = "https://github.com/quickwit-oss/tantivy"
[dependencies] [dependencies]
byteorder = "1.4.3" byteorder = "1.4.3"
ownedbytes = { version= "0.9", path="../ownedbytes" } ownedbytes = { version= "0.7", path="../ownedbytes" }
async-trait = "0.1" async-trait = "0.1"
time = { version = "0.3.10", features = ["serde-well-known"] } time = { version = "0.3.10", features = ["serde-well-known"] }
serde = { version = "1.0.136", features = ["derive"] } serde = { version = "1.0.136", features = ["derive"] }

View File

@@ -1,7 +1,7 @@
use binggan::{BenchRunner, black_box}; use binggan::{black_box, BenchRunner};
use rand::seq::IteratorRandom; use rand::seq::IteratorRandom;
use rand::thread_rng; use rand::thread_rng;
use tantivy_common::{BitSet, TinySet, serialize_vint_u32}; use tantivy_common::{serialize_vint_u32, BitSet, TinySet};
fn bench_vint() { fn bench_vint() {
let mut runner = BenchRunner::new(); let mut runner = BenchRunner::new();

View File

@@ -9,7 +9,7 @@ use crate::ByteCount;
pub struct TinySet(u64); pub struct TinySet(u64);
impl fmt::Debug for TinySet { impl fmt::Debug for TinySet {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.into_iter().collect::<Vec<u32>>().fmt(f) self.into_iter().collect::<Vec<u32>>().fmt(f)
} }
} }
@@ -182,7 +182,6 @@ pub struct BitSet {
max_value: u32, max_value: u32,
} }
#[inline(always)]
fn num_buckets(max_val: u32) -> u32 { fn num_buckets(max_val: u32) -> u32 {
(max_val + 63u32) / 64u32 (max_val + 63u32) / 64u32
} }
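
`(max_val + 63) / 64` is a ceiling division: the number of 64-bit buckets needed to cover `max_val` bits. Per the `clippy::manual_div_ceil` allow-comment removed elsewhere in this diff, the manual form is kept for performance; the trade-off is that `max_val + 63` can overflow near `u32::MAX`, which `u32::div_ceil` avoids. A quick check of the equivalence:

// Ceiling division by 64: how many 64-bit buckets cover `max_val` bits.
fn num_buckets(max_val: u32) -> u32 {
    (max_val + 63) / 64
}

fn main() {
    assert_eq!(num_buckets(0), 0);
    assert_eq!(num_buckets(1), 1);
    assert_eq!(num_buckets(64), 1);
    assert_eq!(num_buckets(65), 2);
    assert_eq!(num_buckets(65), 65u32.div_ceil(64)); // same result, no overflow risk here
}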

View File

@@ -65,11 +65,11 @@ pub fn transform_bound_inner_res<TFrom, TTo>(
) -> io::Result<Bound<TTo>> { ) -> io::Result<Bound<TTo>> {
use self::Bound::*; use self::Bound::*;
Ok(match bound { Ok(match bound {
Excluded(from_val) => match transform(from_val)? { Excluded(ref from_val) => match transform(from_val)? {
TransformBound::NewBound(new_val) => new_val, TransformBound::NewBound(new_val) => new_val,
TransformBound::Existing(new_val) => Excluded(new_val), TransformBound::Existing(new_val) => Excluded(new_val),
}, },
Included(from_val) => match transform(from_val)? { Included(ref from_val) => match transform(from_val)? {
TransformBound::NewBound(new_val) => new_val, TransformBound::NewBound(new_val) => new_val,
TransformBound::Existing(new_val) => Included(new_val), TransformBound::Existing(new_val) => Included(new_val),
}, },
@@ -85,11 +85,11 @@ pub fn transform_bound_inner<TFrom, TTo>(
) -> Bound<TTo> { ) -> Bound<TTo> {
use self::Bound::*; use self::Bound::*;
match bound { match bound {
Excluded(from_val) => match transform(from_val) { Excluded(ref from_val) => match transform(from_val) {
TransformBound::NewBound(new_val) => new_val, TransformBound::NewBound(new_val) => new_val,
TransformBound::Existing(new_val) => Excluded(new_val), TransformBound::Existing(new_val) => Excluded(new_val),
}, },
Included(from_val) => match transform(from_val) { Included(ref from_val) => match transform(from_val) {
TransformBound::NewBound(new_val) => new_val, TransformBound::NewBound(new_val) => new_val,
TransformBound::Existing(new_val) => Included(new_val), TransformBound::Existing(new_val) => Included(new_val),
}, },
@@ -111,8 +111,8 @@ pub fn map_bound<TFrom, TTo>(
) -> Bound<TTo> { ) -> Bound<TTo> {
use self::Bound::*; use self::Bound::*;
match bound { match bound {
Excluded(from_val) => Bound::Excluded(transform(from_val)), Excluded(ref from_val) => Bound::Excluded(transform(from_val)),
Included(from_val) => Bound::Included(transform(from_val)), Included(ref from_val) => Bound::Included(transform(from_val)),
Unbounded => Unbounded, Unbounded => Unbounded,
} }
} }
@@ -123,8 +123,8 @@ pub fn map_bound_res<TFrom, TTo, Err>(
) -> Result<Bound<TTo>, Err> { ) -> Result<Bound<TTo>, Err> {
use self::Bound::*; use self::Bound::*;
Ok(match bound { Ok(match bound {
Excluded(from_val) => Excluded(transform(from_val)?), Excluded(ref from_val) => Excluded(transform(from_val)?),
Included(from_val) => Included(transform(from_val)?), Included(ref from_val) => Included(transform(from_val)?),
Unbounded => Unbounded, Unbounded => Unbounded,
}) })
} }

View File

@@ -1,6 +1,5 @@
use std::fs::File; use std::fs::File;
use std::ops::{Deref, Range, RangeBounds}; use std::ops::{Deref, Range, RangeBounds};
use std::path::Path;
use std::sync::Arc; use std::sync::Arc;
use std::{fmt, io}; use std::{fmt, io};
@@ -74,7 +73,7 @@ impl FileHandle for WrapFile {
{ {
use std::io::{Read, Seek}; use std::io::{Read, Seek};
let mut file = self.file.try_clone()?; // Clone the file to read from it separately let mut file = self.file.try_clone()?; // Clone the file to read from it separately
// Seek to the start position in the file // Seek to the start position in the file
file.seek(io::SeekFrom::Start(start as u64))?; file.seek(io::SeekFrom::Start(start as u64))?;
// Read the data into the buffer // Read the data into the buffer
file.read_exact(&mut buffer)?; file.read_exact(&mut buffer)?;
@@ -178,12 +177,6 @@ fn combine_ranges<R: RangeBounds<usize>>(orig_range: Range<usize>, rel_range: R)
} }
impl FileSlice { impl FileSlice {
/// Creates a FileSlice from a path.
pub fn open(path: &Path) -> io::Result<FileSlice> {
let wrap_file = WrapFile::new(File::open(path)?)?;
Ok(FileSlice::new(Arc::new(wrap_file)))
}
/// Wraps a FileHandle. /// Wraps a FileHandle.
pub fn new(file_handle: Arc<dyn FileHandle>) -> Self { pub fn new(file_handle: Arc<dyn FileHandle>) -> Self {
let num_bytes = file_handle.len(); let num_bytes = file_handle.len();
@@ -346,8 +339,8 @@ mod tests {
use std::sync::Arc; use std::sync::Arc;
use super::{FileHandle, FileSlice}; use super::{FileHandle, FileSlice};
use crate::HasLen;
use crate::file_slice::combine_ranges; use crate::file_slice::combine_ranges;
use crate::HasLen;
#[test] #[test]
fn test_file_slice() -> io::Result<()> { fn test_file_slice() -> io::Result<()> {

View File

@@ -1,6 +1,4 @@
// manual divceil actually generates code that is not optimal (to accept the full range of u32) and #![allow(clippy::len_without_is_empty)]
// perf matters here.
#![allow(clippy::len_without_is_empty, clippy::manual_div_ceil)]
use std::ops::Deref; use std::ops::Deref;
@@ -24,7 +22,7 @@ pub use json_path_writer::JsonPathWriter;
pub use ownedbytes::{OwnedBytes, StableDeref}; pub use ownedbytes::{OwnedBytes, StableDeref};
pub use serialize::{BinarySerializable, DeserializeFrom, FixedSize}; pub use serialize::{BinarySerializable, DeserializeFrom, FixedSize};
pub use vint::{ pub use vint::{
VInt, VIntU128, read_u32_vint, read_u32_vint_no_advance, serialize_vint_u32, write_u32_vint, read_u32_vint, read_u32_vint_no_advance, serialize_vint_u32, write_u32_vint, VInt, VIntU128,
}; };
pub use writer::{AntiCallToken, CountingWriter, TerminatingWrite}; pub use writer::{AntiCallToken, CountingWriter, TerminatingWrite};
@@ -179,10 +177,8 @@ pub(crate) mod test {
#[test] #[test]
fn test_f64_order() { fn test_f64_order() {
assert!( assert!(!(f64_to_u64(f64::NEG_INFINITY)..f64_to_u64(f64::INFINITY))
!(f64_to_u64(f64::NEG_INFINITY)..f64_to_u64(f64::INFINITY)) .contains(&f64_to_u64(f64::NAN))); // nan is not a number
.contains(&f64_to_u64(f64::NAN))
); // nan is not a number
assert!(f64_to_u64(1.5) > f64_to_u64(1.0)); // same exponent, different mantissa assert!(f64_to_u64(1.5) > f64_to_u64(1.0)); // same exponent, different mantissa
assert!(f64_to_u64(2.0) > f64_to_u64(1.0)); // same mantissa, different exponent assert!(f64_to_u64(2.0) > f64_to_u64(1.0)); // same mantissa, different exponent
assert!(f64_to_u64(2.0) > f64_to_u64(1.5)); // different exponent and mantissa assert!(f64_to_u64(2.0) > f64_to_u64(1.5)); // different exponent and mantissa
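
The test relies on `f64_to_u64` being order-preserving. A common construction for such a mapping, shown as a sketch rather than copied from tantivy's source: positive floats get their sign bit set, negative floats get all bits flipped, so unsigned comparison agrees with numeric order:

fn f64_to_u64(val: f64) -> u64 {
    let bits = val.to_bits();
    if bits & (1u64 << 63) == 0 {
        bits | (1u64 << 63) // positive: set the sign bit
    } else {
        !bits // negative: flip all bits so larger magnitudes sort lower
    }
}

fn main() {
    assert!(f64_to_u64(1.5) > f64_to_u64(1.0));
    assert!(f64_to_u64(-1.0) < f64_to_u64(1.0));
    assert!(f64_to_u64(-2.0) < f64_to_u64(-1.0));
}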

View File

@@ -222,7 +222,7 @@ impl BinarySerializable for VInt {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::{BinarySerializable, VInt, serialize_vint_u32}; use super::{serialize_vint_u32, BinarySerializable, VInt};
fn aux_test_vint(val: u64) { fn aux_test_vint(val: u64) {
let mut v = [14u8; 10]; let mut v = [14u8; 10];
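
For context on `serialize_vint_u32`: vints store 7 payload bits per byte, with the high bit flagging that more bytes follow. A sketch of that encoding scheme (illustrative; tantivy's exact wire format may differ in details):

// LEB128-style variable-length encoding: 7 bits per byte, low bits first.
fn write_vint_u32(mut val: u32, out: &mut Vec<u8>) {
    loop {
        let byte = (val & 0x7f) as u8;
        val >>= 7;
        if val == 0 {
            out.push(byte);
            return;
        }
        out.push(byte | 0x80); // high bit set: continuation
    }
}

fn main() {
    let mut buf = Vec::new();
    write_vint_u32(300, &mut buf);
    assert_eq!(buf, vec![0b1010_1100, 0b0000_0010]); // 300 = 0b1_0010_1100
}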

Binary file not shown (image; 7.4 KiB before, 30 KiB after).

View File

@@ -51,7 +51,7 @@ fn main() -> tantivy::Result<()> {
// Our second field is body. // Our second field is body.
// We want full-text search for it, but we do not // We want full-text search for it, but we do not
// need to be able to retrieve it // need to be able to retrieve it
// for our application. // for our application.
// //
// We can make our index lighter by omitting the `STORED` flag. // We can make our index lighter by omitting the `STORED` flag.

View File

@@ -1,7 +1,7 @@
[package] [package]
authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"] authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"]
name = "ownedbytes" name = "ownedbytes"
version = "0.9.0" version = "0.7.0"
edition = "2021" edition = "2021"
description = "Expose data as static slice" description = "Expose data as static slice"
license = "MIT" license = "MIT"

View File

@@ -1,6 +1,6 @@
[package] [package]
name = "tantivy-query-grammar" name = "tantivy-query-grammar"
version = "0.24.0" version = "0.22.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"] authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT" license = "MIT"
categories = ["database-implementations", "data-structures"] categories = ["database-implementations", "data-structures"]
@@ -9,9 +9,7 @@ homepage = "https://github.com/quickwit-oss/tantivy"
repository = "https://github.com/quickwit-oss/tantivy" repository = "https://github.com/quickwit-oss/tantivy"
readme = "README.md" readme = "README.md"
keywords = ["search", "information", "retrieval"] keywords = ["search", "information", "retrieval"]
edition = "2024" edition = "2021"
[dependencies] [dependencies]
nom = "7" nom = "7"
serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140"

View File

@@ -3,7 +3,6 @@
use std::convert::Infallible; use std::convert::Infallible;
use nom::{AsChar, IResult, InputLength, InputTakeAtPosition}; use nom::{AsChar, IResult, InputLength, InputTakeAtPosition};
use serde::Serialize;
pub(crate) type ErrorList = Vec<LenientErrorInternal>; pub(crate) type ErrorList = Vec<LenientErrorInternal>;
pub(crate) type JResult<I, O> = IResult<I, (O, ErrorList), Infallible>; pub(crate) type JResult<I, O> = IResult<I, (O, ErrorList), Infallible>;
@@ -16,8 +15,7 @@ pub(crate) struct LenientErrorInternal {
} }
/// A recoverable error and the position it happened at /// A recoverable error and the position it happened at
#[derive(Debug, PartialEq, Serialize)] #[derive(Debug, PartialEq)]
#[serde(rename_all = "snake_case")]
pub struct LenientError { pub struct LenientError {
pub pos: usize, pub pos: usize,
pub message: String, pub message: String,
@@ -186,19 +184,19 @@ macro_rules! tuple_trait_impl(
); );
macro_rules! tuple_trait_inner( macro_rules! tuple_trait_inner(
($it:tt, $self:expr_2021, $input:expr_2021, (), $error_list:expr_2021, $head:ident $($id:ident)+) => ({ ($it:tt, $self:expr, $input:expr, (), $error_list:expr, $head:ident $($id:ident)+) => ({
let (i, (o, mut err)) = $self.$it.parse($input.clone())?; let (i, (o, mut err)) = $self.$it.parse($input.clone())?;
$error_list.append(&mut err); $error_list.append(&mut err);
succ!($it, tuple_trait_inner!($self, i, ( o ), $error_list, $($id)+)) succ!($it, tuple_trait_inner!($self, i, ( o ), $error_list, $($id)+))
}); });
($it:tt, $self:expr_2021, $input:expr_2021, ($($parsed:tt)*), $error_list:expr_2021, $head:ident $($id:ident)+) => ({ ($it:tt, $self:expr, $input:expr, ($($parsed:tt)*), $error_list:expr, $head:ident $($id:ident)+) => ({
let (i, (o, mut err)) = $self.$it.parse($input.clone())?; let (i, (o, mut err)) = $self.$it.parse($input.clone())?;
$error_list.append(&mut err); $error_list.append(&mut err);
succ!($it, tuple_trait_inner!($self, i, ($($parsed)* , o), $error_list, $($id)+)) succ!($it, tuple_trait_inner!($self, i, ($($parsed)* , o), $error_list, $($id)+))
}); });
($it:tt, $self:expr_2021, $input:expr_2021, ($($parsed:tt)*), $error_list:expr_2021, $head:ident) => ({ ($it:tt, $self:expr, $input:expr, ($($parsed:tt)*), $error_list:expr, $head:ident) => ({
let (i, (o, mut err)) = $self.$it.parse($input.clone())?; let (i, (o, mut err)) = $self.$it.parse($input.clone())?;
$error_list.append(&mut err); $error_list.append(&mut err);
@@ -328,13 +326,13 @@ macro_rules! alt_trait_impl(
); );
macro_rules! alt_trait_inner( macro_rules! alt_trait_inner(
($it:tt, $self:expr_2021, $input:expr_2021, $head_cond:ident $head:ident, $($id_cond:ident $id:ident),+) => ( ($it:tt, $self:expr, $input:expr, $head_cond:ident $head:ident, $($id_cond:ident $id:ident),+) => (
match $self.$it.0.parse($input.clone()) { match $self.$it.0.parse($input.clone()) {
Err(_) => succ!($it, alt_trait_inner!($self, $input, $($id_cond $id),+)), Err(_) => succ!($it, alt_trait_inner!($self, $input, $($id_cond $id),+)),
Ok((input_left, _)) => Some($self.$it.1.parse(input_left)), Ok((input_left, _)) => Some($self.$it.1.parse(input_left)),
} }
); );
($it:tt, $self:expr_2021, $input:expr_2021, $head_cond:ident $head:ident) => ( ($it:tt, $self:expr, $input:expr, $head_cond:ident $head:ident) => (
None None
); );
); );
@@ -355,21 +353,3 @@ where
{ {
move |i: I| l.choice(i.clone()).unwrap_or_else(|| default.parse(i)) move |i: I| l.choice(i.clone()).unwrap_or_else(|| default.parse(i))
} }
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_lenient_error_serialization() {
let error = LenientError {
pos: 42,
message: "test error message".to_string(),
};
assert_eq!(
serde_json::to_string(&error).unwrap(),
"{\"pos\":42,\"message\":\"test error message\"}"
);
}
}

View File

@@ -1,7 +1,5 @@
#![allow(clippy::derive_partial_eq_without_eq)] #![allow(clippy::derive_partial_eq_without_eq)]
use serde::Serialize;
mod infallible; mod infallible;
mod occur; mod occur;
mod query_grammar; mod query_grammar;
@@ -14,8 +12,6 @@ pub use crate::user_input_ast::{
Delimiter, UserInputAst, UserInputBound, UserInputLeaf, UserInputLiteral, Delimiter, UserInputAst, UserInputBound, UserInputLeaf, UserInputLiteral,
}; };
#[derive(Debug, Serialize)]
#[serde(rename_all = "snake_case")]
pub struct Error; pub struct Error;
/// Parse a query /// Parse a query
@@ -28,31 +24,3 @@ pub fn parse_query(query: &str) -> Result<UserInputAst, Error> {
pub fn parse_query_lenient(query: &str) -> (UserInputAst, Vec<LenientError>) { pub fn parse_query_lenient(query: &str) -> (UserInputAst, Vec<LenientError>) {
parse_to_ast_lenient(query) parse_to_ast_lenient(query)
} }
#[cfg(test)]
mod tests {
use crate::{parse_query, parse_query_lenient};
#[test]
fn test_parse_query_serialization() {
let ast = parse_query("title:hello OR title:x").unwrap();
let json = serde_json::to_string(&ast).unwrap();
assert_eq!(
json,
r#"{"type":"bool","clauses":[["should",{"type":"literal","field_name":"title","phrase":"hello","delimiter":"none","slop":0,"prefix":false}],["should",{"type":"literal","field_name":"title","phrase":"x","delimiter":"none","slop":0,"prefix":false}]]}"#
);
}
#[test]
fn test_parse_query_wrong_query() {
assert!(parse_query("title:").is_err());
}
#[test]
fn test_parse_query_lenient_wrong_query() {
let (_, errors) = parse_query_lenient("title:");
assert!(errors.len() == 1);
let json = serde_json::to_string(&errors).unwrap();
assert_eq!(json, r#"[{"pos":6,"message":"expected word"}]"#);
}
}

View File

@@ -1,12 +1,9 @@
use std::fmt; use std::fmt;
use std::fmt::Write; use std::fmt::Write;
use serde::Serialize;
/// Defines whether a term in a query must be present, /// Defines whether a term in a query must be present,
/// should be present or must not be present. /// should be present or must not be present.
#[derive(Debug, Clone, Hash, Copy, Eq, PartialEq, Serialize)] #[derive(Debug, Clone, Hash, Copy, Eq, PartialEq)]
#[serde(rename_all = "snake_case")]
pub enum Occur { pub enum Occur {
/// For a given document to be considered for scoring, /// For a given document to be considered for scoring,
/// at least one of the queries with the Should or the Must /// at least one of the queries with the Should or the Must

View File

@@ -1,26 +1,26 @@
use std::borrow::Cow; use std::borrow::Cow;
use std::iter::once; use std::iter::once;
use nom::IResult;
use nom::branch::alt; use nom::branch::alt;
use nom::bytes::complete::tag; use nom::bytes::complete::tag;
use nom::character::complete::{ use nom::character::complete::{
anychar, char, digit1, multispace0, multispace1, none_of, one_of, satisfy, u32, anychar, char, digit1, multispace0, multispace1, none_of, one_of, satisfy, u32,
}; };
use nom::combinator::{eof, map, map_res, opt, peek, recognize, value, verify}; use nom::combinator::{eof, map, map_res, not, opt, peek, recognize, value, verify};
use nom::error::{Error, ErrorKind}; use nom::error::{Error, ErrorKind};
use nom::multi::{many0, many1, separated_list0}; use nom::multi::{many0, many1, separated_list0};
use nom::sequence::{delimited, preceded, separated_pair, terminated, tuple}; use nom::sequence::{delimited, preceded, separated_pair, terminated, tuple};
use nom::IResult;
use super::user_input_ast::{UserInputAst, UserInputBound, UserInputLeaf, UserInputLiteral}; use super::user_input_ast::{UserInputAst, UserInputBound, UserInputLeaf, UserInputLiteral};
use crate::Occur;
use crate::infallible::*; use crate::infallible::*;
use crate::user_input_ast::Delimiter; use crate::user_input_ast::Delimiter;
use crate::Occur;
// Note: '-' char is only forbidden at the beginning of a field name, would be clearer to add it to // Note: '-' char is only forbidden at the beginning of a field name, would be clearer to add it to
// special characters. // special characters.
const SPECIAL_CHARS: &[char] = &[ const SPECIAL_CHARS: &[char] = &[
'+', '^', '`', ':', '{', '}', '"', '\'', '[', ']', '(', ')', '!', '\\', '*', ' ', '+', '^', '`', ':', '{', '}', '"', '\'', '[', ']', '(', ')', '!', '\\', ' ',
]; ];
/// consume a field name followed by colon. Return the field name with escape sequence /// consume a field name followed by colon. Return the field name with escape sequence
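
A rough, nom-free model of that contract (hypothetical helper; the real parser also handles escape sequences and the leading `-` rule). With `'*'` absent from the special set, `take_field_name("*:a", …)` returns `("a", "*")`, matching the `field_name("*:a")` assertion added in the tests below.

```rust
/// Consume everything up to the first ':' as a field name, rejecting names
/// that contain a special character; returns (remaining input, field name).
fn take_field_name<'a>(input: &'a str, special: &[char]) -> Option<(&'a str, String)> {
    let colon = input.find(':')?;
    let name = &input[..colon];
    if name.is_empty() || name.chars().any(|c| special.contains(&c)) {
        return None;
    }
    Some((&input[colon + 1..], name.to_string()))
}
```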
@@ -305,14 +305,15 @@ fn term_group_infallible(inp: &str) -> JResult<&str, UserInputAst> {
let (inp, (field_name, _, _, _)) = let (inp, (field_name, _, _, _)) =
tuple((field_name, multispace0, char('('), multispace0))(inp).expect("precondition failed"); tuple((field_name, multispace0, char('('), multispace0))(inp).expect("precondition failed");
delimited_infallible( let res = delimited_infallible(
nothing, nothing,
map(ast_infallible, |(mut ast, errors)| { map(ast_infallible, |(mut ast, errors)| {
ast.set_default_field(field_name.to_string()); ast.set_default_field(field_name.to_string());
(ast, errors) (ast, errors)
}), }),
opt_i_err(char(')'), "expected ')'"), opt_i_err(char(')'), "expected ')'"),
)(inp) )(inp);
res
} }
fn exists(inp: &str) -> IResult<&str, UserInputLeaf> { fn exists(inp: &str) -> IResult<&str, UserInputLeaf> {
@@ -320,17 +321,7 @@ fn exists(inp: &str) -> IResult<&str, UserInputLeaf> {
UserInputLeaf::Exists { UserInputLeaf::Exists {
field: String::new(), field: String::new(),
}, },
tuple(( tuple((multispace0, char('*'))),
multispace0,
char('*'),
peek(alt((
value(
"",
satisfy(|c: char| c.is_whitespace() || ESCAPE_IN_WORD.contains(&c)),
),
eof,
))),
)),
)(inp) )(inp)
} }
@@ -340,14 +331,7 @@ fn exists_precond(inp: &str) -> IResult<&str, (), ()> {
peek(tuple(( peek(tuple((
field_name, field_name,
multispace0, multispace0,
char('*'), char('*'), // when we are here, we know it can't be anything but a exists
peek(alt((
value(
"",
satisfy(|c: char| c.is_whitespace() || ESCAPE_IN_WORD.contains(&c)),
),
eof,
))), // we need to check this isn't a wildcard query
))), ))),
)(inp) )(inp)
.map_err(|e| e.map(|_| ())) .map_err(|e| e.map(|_| ()))
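
The peek-guarded variant of `exists` encodes one rule: `field:*` is an exists query only when nothing word-like follows the star. A nom-free sketch, assuming `ESCAPE_IN_WORD` is the grammar's set of word-terminating characters:

```rust
/// `a:*` (end of input), `a:* b` (whitespace) and `a:*)` (word-ending char)
/// read as exists queries; `a:*b` does not -- there the star starts a
/// wildcard term instead.
fn star_means_exists(after_star: &str, escape_in_word: &[char]) -> bool {
    match after_star.chars().next() {
        None => true,
        Some(c) => c.is_whitespace() || escape_in_word.contains(&c),
    }
}
```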
@@ -695,7 +679,10 @@ fn negate(expr: UserInputAst) -> UserInputAst {
fn leaf(inp: &str) -> IResult<&str, UserInputAst> { fn leaf(inp: &str) -> IResult<&str, UserInputAst> {
alt(( alt((
delimited(char('('), ast, char(')')), delimited(char('('), ast, char(')')),
map(char('*'), |_| UserInputAst::from(UserInputLeaf::All)), preceded(
peek(not(tag("*:"))),
map(char('*'), |_| UserInputAst::from(UserInputLeaf::All)),
),
map(preceded(tuple((tag("NOT"), multispace1)), leaf), negate), map(preceded(tuple((tag("NOT"), multispace1)), leaf), negate),
literal, literal,
))(inp) ))(inp)
@@ -716,7 +703,13 @@ fn leaf_infallible(inp: &str) -> JResult<&str, Option<UserInputAst>> {
), ),
), ),
( (
value((), char('*')), value(
(),
preceded(
peek(not(tag("*:"))), // Fail if `*:` is detected
char('*'), // Match standalone `*`
),
),
map(nothing, |_| { map(nothing, |_| {
(Some(UserInputAst::from(UserInputLeaf::All)), Vec::new()) (Some(UserInputAst::from(UserInputLeaf::All)), Vec::new())
}), }),
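
In the guarded version, the `peek(not(tag("*:")))` check on the match-all branch reduces to a one-line predicate (sketch):

```rust
/// A lone '*' is the match-all query, unless ':' follows -- then the star is
/// a field name and the literal branch must win.
fn star_is_match_all(input: &str) -> bool {
    input.starts_with('*') && !input.starts_with("*:")
}
```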
@@ -1029,7 +1022,7 @@ fn rewrite_ast(mut input: UserInputAst) -> UserInputAst {
fn rewrite_ast_clause(input: &mut (Option<Occur>, UserInputAst)) { fn rewrite_ast_clause(input: &mut (Option<Occur>, UserInputAst)) {
match input { match input {
(None, UserInputAst::Clause(clauses)) if clauses.len() == 1 => { (None, UserInputAst::Clause(ref mut clauses)) if clauses.len() == 1 => {
*input = clauses.pop().unwrap(); // safe because clauses.len() == 1 *input = clauses.pop().unwrap(); // safe because clauses.len() == 1
} }
_ => {} _ => {}
@@ -1238,6 +1231,7 @@ mod test {
#[test] #[test]
fn test_field_name() { fn test_field_name() {
assert_eq!(super::field_name("*:a"), Ok(("a", "*".to_string())));
assert_eq!( assert_eq!(
super::field_name(".my.field.name:a"), super::field_name(".my.field.name:a"),
Ok(("a", ".my.field.name".to_string())) Ok(("a", ".my.field.name".to_string()))
@@ -1375,7 +1369,7 @@ mod test {
#[test] #[test]
fn test_range_parser_lenient() { fn test_range_parser_lenient() {
let literal = |query| literal_infallible(query).unwrap().1.0.unwrap(); let literal = |query| literal_infallible(query).unwrap().1 .0.unwrap();
// same tests as non-lenient // same tests as non-lenient
let res = literal("title: <hello"); let res = literal("title: <hello");
@@ -1543,6 +1537,11 @@ mod test {
test_parse_query_to_ast_helper("abc:toto", "\"abc\":toto"); test_parse_query_to_ast_helper("abc:toto", "\"abc\":toto");
} }
#[test]
fn all_field_star() {
test_parse_query_to_ast_helper("*:toto", "\"*\":toto");
}
#[test] #[test]
fn test_phrase_with_field() { fn test_phrase_with_field() {
test_parse_query_to_ast_helper("abc:\"happy tax payer\"", "\"abc\":\"happy tax payer\""); test_parse_query_to_ast_helper("abc:\"happy tax payer\"", "\"abc\":\"happy tax payer\"");
@@ -1640,19 +1639,13 @@ mod test {
#[test] #[test]
fn test_exist_query() { fn test_exist_query() {
test_parse_query_to_ast_helper("a:*", "$exists(\"a\")"); test_parse_query_to_ast_helper("a:*", "\"a\":*");
test_parse_query_to_ast_helper("a: *", "$exists(\"a\")"); test_parse_query_to_ast_helper("a: *", "\"a\":*");
// an exist followed by default term being b
test_is_parse_err("a:*b", "(*\"a\":* *b)");
test_parse_query_to_ast_helper( // this is a term query (not a phrase prefix)
"(hello AND toto:*) OR happy",
"(?(+hello +$exists(\"toto\")) ?happy)",
);
test_parse_query_to_ast_helper("(a:*)", "$exists(\"a\")");
// these are term/wildcard query (not a phrase prefix)
test_parse_query_to_ast_helper("a:b*", "\"a\":b*"); test_parse_query_to_ast_helper("a:b*", "\"a\":b*");
test_parse_query_to_ast_helper("a:*b", "\"a\":*b");
test_parse_query_to_ast_helper(r#"a:*def*"#, "\"a\":*def*");
} }
#[test] #[test]

View File

@@ -1,13 +1,9 @@
use std::fmt; use std::fmt;
use std::fmt::{Debug, Formatter}; use std::fmt::{Debug, Formatter};
use serde::Serialize;
use crate::Occur; use crate::Occur;
#[derive(PartialEq, Clone, Serialize)] #[derive(PartialEq, Clone)]
#[serde(tag = "type")]
#[serde(rename_all = "snake_case")]
pub enum UserInputLeaf { pub enum UserInputLeaf {
Literal(UserInputLiteral), Literal(UserInputLiteral),
All, All,
@@ -51,7 +47,7 @@ impl UserInputLeaf {
pub(crate) fn set_default_field(&mut self, default_field: String) { pub(crate) fn set_default_field(&mut self, default_field: String) {
match self { match self {
UserInputLeaf::Literal(literal) if literal.field_name.is_none() => { UserInputLeaf::Literal(ref mut literal) if literal.field_name.is_none() => {
literal.field_name = Some(default_field) literal.field_name = Some(default_field)
} }
UserInputLeaf::All => { UserInputLeaf::All => {
@@ -59,8 +55,12 @@ impl UserInputLeaf {
field: default_field, field: default_field,
} }
} }
UserInputLeaf::Range { field, .. } if field.is_none() => *field = Some(default_field), UserInputLeaf::Range { ref mut field, .. } if field.is_none() => {
UserInputLeaf::Set { field, .. } if field.is_none() => *field = Some(default_field), *field = Some(default_field)
}
UserInputLeaf::Set { ref mut field, .. } if field.is_none() => {
*field = Some(default_field)
}
_ => (), // field was already set, do nothing _ => (), // field was already set, do nothing
} }
} }
@@ -71,11 +71,11 @@ impl Debug for UserInputLeaf {
match self { match self {
UserInputLeaf::Literal(literal) => literal.fmt(formatter), UserInputLeaf::Literal(literal) => literal.fmt(formatter),
UserInputLeaf::Range { UserInputLeaf::Range {
field, ref field,
lower, ref lower,
upper, ref upper,
} => { } => {
if let Some(field) = field { if let Some(ref field) = field {
// TODO properly escape field (in case of \") // TODO properly escape field (in case of \")
write!(formatter, "\"{field}\":")?; write!(formatter, "\"{field}\":")?;
} }
@@ -85,7 +85,7 @@ impl Debug for UserInputLeaf {
Ok(()) Ok(())
} }
UserInputLeaf::Set { field, elements } => { UserInputLeaf::Set { field, elements } => {
if let Some(field) = field { if let Some(ref field) = field {
// TODO properly escape field (in case of \") // TODO properly escape field (in case of \")
write!(formatter, "\"{field}\": ")?; write!(formatter, "\"{field}\": ")?;
} }
@@ -101,22 +101,20 @@ impl Debug for UserInputLeaf {
} }
UserInputLeaf::All => write!(formatter, "*"), UserInputLeaf::All => write!(formatter, "*"),
UserInputLeaf::Exists { field } => { UserInputLeaf::Exists { field } => {
write!(formatter, "$exists(\"{field}\")") write!(formatter, "\"{field}\":*")
} }
} }
} }
} }
#[derive(Copy, Clone, Eq, PartialEq, Debug, Serialize)] #[derive(Copy, Clone, Eq, PartialEq, Debug)]
#[serde(rename_all = "snake_case")]
pub enum Delimiter { pub enum Delimiter {
SingleQuotes, SingleQuotes,
DoubleQuotes, DoubleQuotes,
None, None,
} }
#[derive(PartialEq, Clone, Serialize)] #[derive(PartialEq, Clone)]
#[serde(rename_all = "snake_case")]
pub struct UserInputLiteral { pub struct UserInputLiteral {
pub field_name: Option<String>, pub field_name: Option<String>,
pub phrase: String, pub phrase: String,
@@ -154,9 +152,7 @@ impl fmt::Debug for UserInputLiteral {
} }
} }
#[derive(PartialEq, Debug, Clone, Serialize)] #[derive(PartialEq, Debug, Clone)]
#[serde(tag = "type", content = "value")]
#[serde(rename_all = "snake_case")]
pub enum UserInputBound { pub enum UserInputBound {
Inclusive(String), Inclusive(String),
Exclusive(String), Exclusive(String),
@@ -191,38 +187,11 @@ impl UserInputBound {
} }
} }
#[derive(PartialEq, Clone, Serialize)] #[derive(PartialEq, Clone)]
#[serde(into = "UserInputAstSerde")]
pub enum UserInputAst { pub enum UserInputAst {
Clause(Vec<(Option<Occur>, UserInputAst)>), Clause(Vec<(Option<Occur>, UserInputAst)>),
Leaf(Box<UserInputLeaf>),
Boost(Box<UserInputAst>, f64), Boost(Box<UserInputAst>, f64),
Leaf(Box<UserInputLeaf>),
}
#[derive(Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum UserInputAstSerde {
Bool {
clauses: Vec<(Option<Occur>, UserInputAst)>,
},
Boost {
underlying: Box<UserInputAst>,
boost: f64,
},
#[serde(untagged)]
Leaf(Box<UserInputLeaf>),
}
impl From<UserInputAst> for UserInputAstSerde {
fn from(ast: UserInputAst) -> Self {
match ast {
UserInputAst::Clause(clause) => UserInputAstSerde::Bool { clauses: clause },
UserInputAst::Boost(underlying, boost) => {
UserInputAstSerde::Boost { underlying, boost }
}
UserInputAst::Leaf(leaf) => UserInputAstSerde::Leaf(leaf),
}
}
} }
impl UserInputAst { impl UserInputAst {
@@ -263,7 +232,7 @@ impl UserInputAst {
.iter_mut() .iter_mut()
.for_each(|(_, ast)| ast.set_default_field(field.clone())), .for_each(|(_, ast)| ast.set_default_field(field.clone())),
UserInputAst::Leaf(leaf) => leaf.set_default_field(field), UserInputAst::Leaf(leaf) => leaf.set_default_field(field),
UserInputAst::Boost(ast, _) => ast.set_default_field(field), UserInputAst::Boost(ref mut ast, _) => ast.set_default_field(field),
} }
} }
} }
@@ -316,126 +285,3 @@ impl fmt::Debug for UserInputAst {
} }
} }
} }
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_all_leaf_serialization() {
let ast = UserInputAst::Leaf(Box::new(UserInputLeaf::All));
let json = serde_json::to_string(&ast).unwrap();
assert_eq!(json, r#"{"type":"all"}"#);
}
#[test]
fn test_literal_leaf_serialization() {
let literal = UserInputLiteral {
field_name: Some("title".to_string()),
phrase: "hello".to_string(),
delimiter: Delimiter::None,
slop: 0,
prefix: false,
};
let ast = UserInputAst::Leaf(Box::new(UserInputLeaf::Literal(literal)));
let json = serde_json::to_string(&ast).unwrap();
assert_eq!(
json,
r#"{"type":"literal","field_name":"title","phrase":"hello","delimiter":"none","slop":0,"prefix":false}"#
);
}
#[test]
fn test_range_leaf_serialization() {
let range = UserInputLeaf::Range {
field: Some("price".to_string()),
lower: UserInputBound::Inclusive("10".to_string()),
upper: UserInputBound::Exclusive("100".to_string()),
};
let ast = UserInputAst::Leaf(Box::new(range));
let json = serde_json::to_string(&ast).unwrap();
assert_eq!(
json,
r#"{"type":"range","field":"price","lower":{"type":"inclusive","value":"10"},"upper":{"type":"exclusive","value":"100"}}"#
);
}
#[test]
fn test_range_leaf_unbounded_serialization() {
let range = UserInputLeaf::Range {
field: Some("price".to_string()),
lower: UserInputBound::Inclusive("10".to_string()),
upper: UserInputBound::Unbounded,
};
let ast = UserInputAst::Leaf(Box::new(range));
let json = serde_json::to_string(&ast).unwrap();
assert_eq!(
json,
r#"{"type":"range","field":"price","lower":{"type":"inclusive","value":"10"},"upper":{"type":"unbounded"}}"#
);
}
#[test]
fn test_boost_serialization() {
let inner_ast = UserInputAst::Leaf(Box::new(UserInputLeaf::All));
let boost_ast = UserInputAst::Boost(Box::new(inner_ast), 2.5);
let json = serde_json::to_string(&boost_ast).unwrap();
assert_eq!(
json,
r#"{"type":"boost","underlying":{"type":"all"},"boost":2.5}"#
);
}
#[test]
fn test_boost_serialization2() {
let boost_ast = UserInputAst::Boost(
Box::new(UserInputAst::Clause(vec![
(
Some(Occur::Must),
UserInputAst::Leaf(Box::new(UserInputLeaf::All)),
),
(
Some(Occur::Should),
UserInputAst::Leaf(Box::new(UserInputLeaf::Literal(UserInputLiteral {
field_name: Some("title".to_string()),
phrase: "hello".to_string(),
delimiter: Delimiter::None,
slop: 0,
prefix: false,
}))),
),
])),
2.5,
);
let json = serde_json::to_string(&boost_ast).unwrap();
assert_eq!(
json,
r#"{"type":"boost","underlying":{"type":"bool","clauses":[["must",{"type":"all"}],["should",{"type":"literal","field_name":"title","phrase":"hello","delimiter":"none","slop":0,"prefix":false}]]},"boost":2.5}"#
);
}
#[test]
fn test_clause_serialization() {
let clause = UserInputAst::Clause(vec![
(
Some(Occur::Must),
UserInputAst::Leaf(Box::new(UserInputLeaf::All)),
),
(
Some(Occur::Should),
UserInputAst::Leaf(Box::new(UserInputLeaf::Literal(UserInputLiteral {
field_name: Some("title".to_string()),
phrase: "hello".to_string(),
delimiter: Delimiter::None,
slop: 0,
prefix: false,
}))),
),
]);
let json = serde_json::to_string(&clause).unwrap();
assert_eq!(
json,
r#"{"type":"bool","clauses":[["must",{"type":"all"}],["should",{"type":"literal","field_name":"title","phrase":"hello","delimiter":"none","slop":0,"prefix":false}]]}"#
);
}
}

View File

@@ -271,6 +271,10 @@ impl AggregationWithAccessor {
field: ref field_name, field: ref field_name,
.. ..
}) })
| Count(CountAggregation {
field: ref field_name,
..
})
| Max(MaxAggregation { | Max(MaxAggregation {
field: ref field_name, field: ref field_name,
.. ..
@@ -295,24 +299,6 @@ impl AggregationWithAccessor {
get_ff_reader(reader, field_name, Some(get_numeric_or_date_column_types()))?; get_ff_reader(reader, field_name, Some(get_numeric_or_date_column_types()))?;
add_agg_with_accessor(&agg, accessor, column_type, &mut res)?; add_agg_with_accessor(&agg, accessor, column_type, &mut res)?;
} }
Count(CountAggregation {
field: ref field_name,
..
}) => {
let allowed_column_types = [
ColumnType::I64,
ColumnType::U64,
ColumnType::F64,
ColumnType::Str,
ColumnType::DateTime,
ColumnType::Bool,
ColumnType::IpAddr,
// ColumnType::Bytes Unsupported
];
let (accessor, column_type) =
get_ff_reader(reader, field_name, Some(&allowed_column_types))?;
add_agg_with_accessor(&agg, accessor, column_type, &mut res)?;
}
Percentiles(ref percentiles) => { Percentiles(ref percentiles) => {
let (accessor, column_type) = get_ff_reader( let (accessor, column_type) = get_ff_reader(
reader, reader,

View File

@@ -34,10 +34,10 @@ use crate::aggregation::*;
pub struct DateHistogramAggregationReq { pub struct DateHistogramAggregationReq {
#[doc(hidden)] #[doc(hidden)]
/// Only for validation /// Only for validation
pub interval: Option<String>, interval: Option<String>,
#[doc(hidden)] #[doc(hidden)]
/// Only for validation /// Only for validation
pub calendar_interval: Option<String>, calendar_interval: Option<String>,
/// The field to aggregate on. /// The field to aggregate on.
pub field: String, pub field: String,
/// The format to format dates. Unsupported currently. /// The format to format dates. Unsupported currently.

View File

@@ -518,7 +518,7 @@ impl SegmentTermCollector {
|term| { |term| {
let entry = entries[idx]; let entry = entries[idx];
let intermediate_entry = into_intermediate_bucket_entry(entry.0, entry.1) let intermediate_entry = into_intermediate_bucket_entry(entry.0, entry.1)
.map_err(io::Error::other)?; .map_err(|err| io::Error::new(io::ErrorKind::Other, err))?;
dict.insert( dict.insert(
IntermediateKey::Str( IntermediateKey::Str(
String::from_utf8(term.to_vec()).expect("could not convert to String"), String::from_utf8(term.to_vec()).expect("could not convert to String"),

View File

@@ -220,23 +220,9 @@ impl SegmentStatsCollector {
.column_block_accessor .column_block_accessor
.fetch_block(docs, &agg_accessor.accessor); .fetch_block(docs, &agg_accessor.accessor);
} }
if [ for val in agg_accessor.column_block_accessor.iter_vals() {
ColumnType::I64, let val1 = f64_from_fastfield_u64(val, &self.field_type);
ColumnType::U64, self.stats.collect(val1);
ColumnType::F64,
ColumnType::DateTime,
]
.contains(&self.field_type)
{
for val in agg_accessor.column_block_accessor.iter_vals() {
let val1 = f64_from_fastfield_u64(val, &self.field_type);
self.stats.collect(val1);
}
} else {
for _val in agg_accessor.column_block_accessor.iter_vals() {
// we ignore the value and simply record that we got something
self.stats.collect(0.0);
}
} }
} }
} }
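
Condensed sketch of the branchier variant above (the closure stands in for `self.stats.collect`; `f64_from_fastfield_u64` and `ColumnType` are taken from the surrounding code, the function itself is hypothetical):

```rust
fn collect_block_vals(
    mut collect: impl FnMut(f64),
    vals: impl Iterator<Item = u64>,
    field_type: ColumnType,
) {
    let is_numeric = matches!(
        field_type,
        ColumnType::I64 | ColumnType::U64 | ColumnType::F64 | ColumnType::DateTime
    );
    for val in vals {
        if is_numeric {
            // Convert the fast-field u64 back into its logical f64 value.
            collect(f64_from_fastfield_u64(val, &field_type));
        } else {
            // Str/Bool/IpAddr etc.: the value is irrelevant, only the count is.
            collect(0.0);
        }
    }
}
```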
@@ -449,11 +435,6 @@ mod tests {
"field": "score", "field": "score",
}, },
}, },
"count_str": {
"value_count": {
"field": "text",
},
},
"range": range_agg "range": range_agg
})) }))
.unwrap(); .unwrap();
@@ -519,13 +500,6 @@ mod tests {
}) })
); );
assert_eq!(
res["count_str"],
json!({
"value": 7.0,
})
);
Ok(()) Ok(())
} }

View File

@@ -229,7 +229,6 @@ impl TopHitsAggregationReq {
self.sort self.sort
.iter() .iter()
.map(|KeyOrder { field, .. }| field.as_str()) .map(|KeyOrder { field, .. }| field.as_str())
.chain(self.doc_value_fields.iter().map(|s| s.as_str()))
.collect() .collect()
} }

View File

@@ -366,12 +366,8 @@ impl PartialEq for Key {
fn eq(&self, other: &Self) -> bool { fn eq(&self, other: &Self) -> bool {
match (self, other) { match (self, other) {
(Self::Str(l), Self::Str(r)) => l == r, (Self::Str(l), Self::Str(r)) => l == r,
(Self::F64(l), Self::F64(r)) => l.to_bits() == r.to_bits(), (Self::F64(l), Self::F64(r)) => l == r,
(Self::I64(l), Self::I64(r)) => l == r, _ => false,
(Self::U64(l), Self::U64(r)) => l == r,
// we list all variant of left operand to make sure this gets updated when we add
// variants to the enum
(Self::Str(_) | Self::F64(_) | Self::I64(_) | Self::U64(_), _) => false,
} }
} }
} }
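
The `to_bits` comparison in one variant gives `Key::F64` a reflexive, hash-consistent equality, at the price of treating `+0.0` and `-0.0` as distinct keys; illustratively:

```rust
let nan = f64::NAN;
assert!(nan != nan);                                 // IEEE 754: NaN != NaN
assert_eq!(nan.to_bits(), nan.to_bits());            // bitwise: reflexive
assert_ne!(0.0_f64.to_bits(), (-0.0_f64).to_bits()); // zeros become distinct
```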
@@ -582,7 +578,7 @@ mod tests {
.set_indexing_options( .set_indexing_options(
TextFieldIndexing::default().set_index_option(IndexRecordOption::WithFreqs), TextFieldIndexing::default().set_index_option(IndexRecordOption::WithFreqs),
) )
.set_fast(Some("raw")) .set_fast(None)
.set_stored(); .set_stored();
let text_field = schema_builder.add_text_field("text", text_fieldtype); let text_field = schema_builder.add_text_field("text", text_fieldtype);
let date_field = schema_builder.add_date_field("date", FAST); let date_field = schema_builder.add_date_field("date", FAST);

View File

@@ -484,6 +484,7 @@ impl FacetCounts {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use std::collections::BTreeSet; use std::collections::BTreeSet;
use std::iter;
use columnar::Dictionary; use columnar::Dictionary;
use rand::distributions::Uniform; use rand::distributions::Uniform;
@@ -738,7 +739,7 @@ mod tests {
.flat_map(|(c, count)| { .flat_map(|(c, count)| {
let facet = Facet::from(&format!("/facet/{c}")); let facet = Facet::from(&format!("/facet/{c}"));
let doc = doc!(facet_field => facet); let doc = doc!(facet_field => facet);
std::iter::repeat_n(doc, count) iter::repeat(doc).take(count)
}) })
.map(|mut doc| { .map(|mut doc| {
doc.add_facet( doc.add_facet(
@@ -786,7 +787,7 @@ mod tests {
.flat_map(|(c, count)| { .flat_map(|(c, count)| {
let facet = Facet::from(&format!("/facet/{c}")); let facet = Facet::from(&format!("/facet/{c}"));
let doc = doc!(facet_field => facet); let doc = doc!(facet_field => facet);
std::iter::repeat_n(doc, count) iter::repeat(doc).take(count)
}) })
.collect(); .collect();

View File

@@ -2,13 +2,11 @@ use std::fmt;
use std::marker::PhantomData; use std::marker::PhantomData;
use std::sync::Arc; use std::sync::Arc;
use columnar::{ColumnValues, StrColumn}; use columnar::ColumnValues;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use super::Collector; use super::Collector;
use crate::collector::custom_score_top_collector::{ use crate::collector::custom_score_top_collector::CustomScoreTopCollector;
CustomScoreTopCollector, CustomScoreTopSegmentCollector,
};
use crate::collector::top_collector::{ComparableDoc, TopCollector, TopSegmentCollector}; use crate::collector::top_collector::{ComparableDoc, TopCollector, TopSegmentCollector};
use crate::collector::tweak_score_top_collector::TweakedScoreTopCollector; use crate::collector::tweak_score_top_collector::TweakedScoreTopCollector;
use crate::collector::{ use crate::collector::{
@@ -16,7 +14,6 @@ use crate::collector::{
}; };
use crate::fastfield::{FastFieldNotAvailableError, FastValue}; use crate::fastfield::{FastFieldNotAvailableError, FastValue};
use crate::query::Weight; use crate::query::Weight;
use crate::termdict::TermOrdinal;
use crate::{DocAddress, DocId, Order, Score, SegmentOrdinal, SegmentReader, TantivyError}; use crate::{DocAddress, DocId, Order, Score, SegmentOrdinal, SegmentReader, TantivyError};
struct FastFieldConvertCollector< struct FastFieldConvertCollector<
@@ -86,163 +83,6 @@ where
} }
} }
struct StringConvertCollector {
pub collector: CustomScoreTopCollector<ScorerByField, u64>,
pub field: String,
order: Order,
limit: usize,
offset: usize,
}
impl Collector for StringConvertCollector {
type Fruit = Vec<(String, DocAddress)>;
type Child = StringConvertSegmentCollector;
fn for_segment(
&self,
segment_local_id: crate::SegmentOrdinal,
segment: &SegmentReader,
) -> crate::Result<Self::Child> {
let schema = segment.schema();
let field = schema.get_field(&self.field)?;
let field_entry = schema.get_field_entry(field);
if !field_entry.is_fast() {
return Err(TantivyError::SchemaError(format!(
"Field {:?} is not a fast field.",
field_entry.name()
)));
}
let requested_type = crate::schema::Type::Str;
let schema_type = field_entry.field_type().value_type();
if schema_type != requested_type {
return Err(TantivyError::SchemaError(format!(
"Field {:?} is of type {schema_type:?}!={requested_type:?}",
field_entry.name()
)));
}
let ff = segment
.fast_fields()
.str(&self.field)?
.expect("ff should be a str field");
Ok(StringConvertSegmentCollector {
collector: self.collector.for_segment(segment_local_id, segment)?,
ff,
order: self.order.clone(),
})
}
fn requires_scoring(&self) -> bool {
self.collector.requires_scoring()
}
fn merge_fruits(
&self,
child_fruits: Vec<<Self::Child as SegmentCollector>::Fruit>,
) -> crate::Result<Self::Fruit> {
if self.limit == 0 {
return Ok(Vec::new());
}
if self.order.is_desc() {
let mut top_collector: TopNComputer<_, _, true> =
TopNComputer::new(self.limit + self.offset);
for child_fruit in child_fruits {
for (feature, doc) in child_fruit {
top_collector.push(feature, doc);
}
}
Ok(top_collector
.into_sorted_vec()
.into_iter()
.skip(self.offset)
.map(|cdoc| (cdoc.feature, cdoc.doc))
.collect())
} else {
let mut top_collector: TopNComputer<_, _, false> =
TopNComputer::new(self.limit + self.offset);
for child_fruit in child_fruits {
for (feature, doc) in child_fruit {
top_collector.push(feature, doc);
}
}
Ok(top_collector
.into_sorted_vec()
.into_iter()
.skip(self.offset)
.map(|cdoc| (cdoc.feature, cdoc.doc))
.collect())
}
}
}
struct StringConvertSegmentCollector {
pub collector: CustomScoreTopSegmentCollector<ScorerByFastFieldReader, u64>,
ff: StrColumn,
order: Order,
}
impl SegmentCollector for StringConvertSegmentCollector {
type Fruit = Vec<(String, DocAddress)>;
fn collect(&mut self, doc: DocId, score: Score) {
self.collector.collect(doc, score);
}
fn harvest(self) -> Vec<(String, DocAddress)> {
let top_ordinals: Vec<(TermOrdinal, DocAddress)> = self.collector.harvest();
// Collect terms.
let mut terms: Vec<String> = Vec::with_capacity(top_ordinals.len());
let result = if self.order.is_asc() {
self.ff.dictionary().sorted_ords_to_term_cb(
top_ordinals.iter().map(|(term_ord, _)| u64::MAX - term_ord),
|term| {
terms.push(
std::str::from_utf8(term)
.expect("Failed to decode term as unicode")
.to_owned(),
);
Ok(())
},
)
} else {
self.ff.dictionary().sorted_ords_to_term_cb(
top_ordinals.iter().rev().map(|(term_ord, _)| *term_ord),
|term| {
terms.push(
std::str::from_utf8(term)
.expect("Failed to decode term as unicode")
.to_owned(),
);
Ok(())
},
)
};
assert!(
result.expect("Failed to read terms from term dictionary"),
"Not all terms were matched in segment."
);
// Zip them back with their docs.
if self.order.is_asc() {
terms
.into_iter()
.zip(top_ordinals)
.map(|(term, (_, doc))| (term, doc))
.collect()
} else {
terms
.into_iter()
.rev()
.zip(top_ordinals)
.map(|(term, (_, doc))| (term, doc))
.collect()
}
}
}
/// The `TopDocs` collector keeps track of the top `K` documents /// The `TopDocs` collector keeps track of the top `K` documents
/// sorted by their score. /// sorted by their score.
/// ///
@@ -570,30 +410,6 @@ impl TopDocs {
} }
} }
/// Like `order_by_fast_field`, but for a `String` fast field.
pub fn order_by_string_fast_field(
self,
fast_field: impl ToString,
order: Order,
) -> impl Collector<Fruit = Vec<(String, DocAddress)>> {
let limit = self.0.limit;
let offset = self.0.offset;
let u64_collector = CustomScoreTopCollector::new(
ScorerByField {
field: fast_field.to_string(),
order: order.clone(),
},
self.0.into_tscore(),
);
StringConvertCollector {
collector: u64_collector,
field: fast_field.to_string(),
order,
limit,
offset,
}
}
/// Ranks the documents using a custom score. /// Ranks the documents using a custom score.
/// ///
/// This method offers a convenient way to tweak or replace /// This method offers a convenient way to tweak or replace
@@ -970,7 +786,7 @@ impl<Score, D, const R: bool> From<TopNComputerDeser<Score, D, R>> for TopNCompu
} }
} }
impl<Score, D, const REVERSE_ORDER: bool> TopNComputer<Score, D, REVERSE_ORDER> impl<Score, D, const R: bool> TopNComputer<Score, D, R>
where where
Score: PartialOrd + Clone, Score: PartialOrd + Clone,
D: Ord, D: Ord,
@@ -991,10 +807,7 @@ where
#[inline] #[inline]
pub fn push(&mut self, feature: Score, doc: D) { pub fn push(&mut self, feature: Score, doc: D) {
if let Some(last_median) = self.threshold.clone() { if let Some(last_median) = self.threshold.clone() {
if !REVERSE_ORDER && feature > last_median { if feature < last_median {
return;
}
if REVERSE_ORDER && feature < last_median {
return; return;
} }
} }
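
Once the buffer has been truncated at least once, `threshold` holds the current cut-off, so `push` can cheaply discard features on the wrong side of it. A usage sketch; the expected result mirrors the `test_topn_computer_asc` test later in this diff:

```rust
// REVERSE_ORDER = false keeps the smallest features.
let mut asc: TopNComputer<u32, u32, false> = TopNComputer::new(2);
for (feature, doc) in [(1u32, 1u32), (2, 2), (3, 3), (2, 4), (4, 5), (1, 6)] {
    asc.push(feature, doc);
}
let top = asc.into_sorted_vec();
assert_eq!((top[0].feature, top[0].doc), (1, 1));
assert_eq!((top[1].feature, top[1].doc), (1, 6));
```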
@@ -1029,7 +842,7 @@ where
} }
/// Returns the top n elements in sorted order. /// Returns the top n elements in sorted order.
pub fn into_sorted_vec(mut self) -> Vec<ComparableDoc<Score, D, REVERSE_ORDER>> { pub fn into_sorted_vec(mut self) -> Vec<ComparableDoc<Score, D, R>> {
if self.buffer.len() > self.top_n { if self.buffer.len() > self.top_n {
self.truncate_top_n(); self.truncate_top_n();
} }
@@ -1040,7 +853,7 @@ where
/// Returns the top n elements in stored order. /// Returns the top n elements in stored order.
/// Useful if you do not need the elements in sorted order, /// Useful if you do not need the elements in sorted order,
/// for example when merging the results of multiple segments. /// for example when merging the results of multiple segments.
pub fn into_vec(mut self) -> Vec<ComparableDoc<Score, D, REVERSE_ORDER>> { pub fn into_vec(mut self) -> Vec<ComparableDoc<Score, D, R>> {
if self.buffer.len() > self.top_n { if self.buffer.len() > self.top_n {
self.truncate_top_n(); self.truncate_top_n();
} }
@@ -1050,11 +863,9 @@ where
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use proptest::prelude::*;
use super::{TopDocs, TopNComputer}; use super::{TopDocs, TopNComputer};
use crate::collector::top_collector::ComparableDoc; use crate::collector::top_collector::ComparableDoc;
use crate::collector::{Collector, DocSetCollector}; use crate::collector::Collector;
use crate::query::{AllQuery, Query, QueryParser}; use crate::query::{AllQuery, Query, QueryParser};
use crate::schema::{Field, Schema, FAST, STORED, TEXT}; use crate::schema::{Field, Schema, FAST, STORED, TEXT};
use crate::time::format_description::well_known::Rfc3339; use crate::time::format_description::well_known::Rfc3339;
@@ -1149,44 +960,6 @@ mod tests {
} }
} }
proptest! {
#[test]
fn test_topn_computer_asc_prop(
limit in 0..10_usize,
docs in proptest::collection::vec((0..100_u64, 0..100_u64), 0..100_usize),
) {
let mut computer: TopNComputer<_, _, false> = TopNComputer::new(limit);
for (feature, doc) in &docs {
computer.push(*feature, *doc);
}
let mut comparable_docs = docs.into_iter().map(|(feature, doc)| ComparableDoc { feature, doc }).collect::<Vec<_>>();
comparable_docs.sort();
comparable_docs.truncate(limit);
prop_assert_eq!(
computer.into_sorted_vec(),
comparable_docs,
);
}
#[test]
fn test_topn_computer_desc_prop(
limit in 0..10_usize,
docs in proptest::collection::vec((0..100_u64, 0..100_u64), 0..100_usize),
) {
let mut computer: TopNComputer<_, _, true> = TopNComputer::new(limit);
for (feature, doc) in &docs {
computer.push(*feature, *doc);
}
let mut comparable_docs = docs.into_iter().map(|(feature, doc)| ComparableDoc { feature, doc }).collect::<Vec<_>>();
comparable_docs.sort();
comparable_docs.truncate(limit);
prop_assert_eq!(
computer.into_sorted_vec(),
comparable_docs,
);
}
}
#[test] #[test]
fn test_top_collector_not_at_capacity_without_offset() -> crate::Result<()> { fn test_top_collector_not_at_capacity_without_offset() -> crate::Result<()> {
let index = make_index()?; let index = make_index()?;
@@ -1441,160 +1214,6 @@ mod tests {
Ok(()) Ok(())
} }
#[test]
fn test_top_field_collector_string() -> crate::Result<()> {
let mut schema_builder = Schema::builder();
let city = schema_builder.add_text_field("city", TEXT | FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(
city => "austin",
))?;
index_writer.add_document(doc!(
city => "greenville",
))?;
index_writer.add_document(doc!(
city => "tokyo",
))?;
index_writer.commit()?;
fn query(
index: &Index,
order: Order,
limit: usize,
offset: usize,
) -> crate::Result<Vec<(String, DocAddress)>> {
let searcher = index.reader()?.searcher();
let top_collector = TopDocs::with_limit(limit)
.and_offset(offset)
.order_by_string_fast_field("city", order);
searcher.search(&AllQuery, &top_collector)
}
assert_eq!(
&query(&index, Order::Desc, 3, 0)?,
&[
("tokyo".to_owned(), DocAddress::new(0, 2)),
("greenville".to_owned(), DocAddress::new(0, 1)),
("austin".to_owned(), DocAddress::new(0, 0)),
]
);
assert_eq!(
&query(&index, Order::Desc, 2, 0)?,
&[
("tokyo".to_owned(), DocAddress::new(0, 2)),
("greenville".to_owned(), DocAddress::new(0, 1)),
]
);
assert_eq!(&query(&index, Order::Desc, 3, 3)?, &[]);
assert_eq!(
&query(&index, Order::Desc, 2, 1)?,
&[
("greenville".to_owned(), DocAddress::new(0, 1)),
("austin".to_owned(), DocAddress::new(0, 0)),
]
);
assert_eq!(
&query(&index, Order::Asc, 3, 0)?,
&[
("austin".to_owned(), DocAddress::new(0, 0)),
("greenville".to_owned(), DocAddress::new(0, 1)),
("tokyo".to_owned(), DocAddress::new(0, 2)),
]
);
assert_eq!(
&query(&index, Order::Asc, 2, 1)?,
&[
("greenville".to_owned(), DocAddress::new(0, 1)),
("tokyo".to_owned(), DocAddress::new(0, 2)),
]
);
assert_eq!(
&query(&index, Order::Asc, 2, 0)?,
&[
("austin".to_owned(), DocAddress::new(0, 0)),
("greenville".to_owned(), DocAddress::new(0, 1)),
]
);
assert_eq!(&query(&index, Order::Asc, 3, 3)?, &[]);
Ok(())
}
proptest! {
#[test]
fn test_top_field_collect_string_prop(
order in prop_oneof!(Just(Order::Desc), Just(Order::Asc)),
limit in 1..256_usize,
offset in 0..256_usize,
segments_terms in
proptest::collection::vec(
proptest::collection::vec(0..32_u8, 1..32_usize),
0..8_usize,
)
) {
let mut schema_builder = Schema::builder();
let city = schema_builder.add_text_field("city", TEXT | FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
// A Vec<Vec<u8>>, where the outer Vec represents segments, and the inner Vec
// represents terms.
for segment_terms in segments_terms.into_iter() {
for term in segment_terms.into_iter() {
let term = format!("{term:0>3}");
index_writer.add_document(doc!(
city => term,
))?;
}
index_writer.commit()?;
}
let searcher = index.reader()?.searcher();
let top_n_results = searcher.search(&AllQuery, &TopDocs::with_limit(limit)
.and_offset(offset)
.order_by_string_fast_field("city", order.clone()))?;
let all_results = searcher.search(&AllQuery, &DocSetCollector)?.into_iter().map(|doc_address| {
// Get the term for this address.
// NOTE: We can't determine the SegmentIds that will be generated for Segments
// ahead of time, so we can't pre-compute the expected `DocAddress`es.
let column = searcher.segment_readers()[doc_address.segment_ord as usize].fast_fields().str("city").unwrap().unwrap();
let term_ord = column.term_ords(doc_address.doc_id).next().unwrap();
let mut city = Vec::new();
column.dictionary().ord_to_term(term_ord, &mut city).unwrap();
(String::try_from(city).unwrap(), doc_address)
});
// Using the TopDocs collector should always be equivalent to sorting, skipping the
// offset, and then taking the limit.
let sorted_docs: Vec<_> = if order.is_desc() {
let mut comparable_docs: Vec<ComparableDoc<_, _, true>> =
all_results.into_iter().map(|(feature, doc)| ComparableDoc { feature, doc}).collect();
comparable_docs.sort();
comparable_docs.into_iter().map(|cd| (cd.feature, cd.doc)).collect()
} else {
let mut comparable_docs: Vec<ComparableDoc<_, _, false>> =
all_results.into_iter().map(|(feature, doc)| ComparableDoc { feature, doc}).collect();
comparable_docs.sort();
comparable_docs.into_iter().map(|cd| (cd.feature, cd.doc)).collect()
};
let expected_docs = sorted_docs.into_iter().skip(offset).take(limit).collect::<Vec<_>>();
prop_assert_eq!(
expected_docs,
top_n_results
);
}
}
#[test] #[test]
#[should_panic] #[should_panic]
fn test_field_does_not_exist() { fn test_field_does_not_exist() {
@@ -1754,29 +1373,4 @@ mod tests {
); );
Ok(()) Ok(())
} }
#[test]
fn test_topn_computer_asc() {
let mut computer: TopNComputer<u32, u32, false> = TopNComputer::new(2);
computer.push(1u32, 1u32);
computer.push(2u32, 2u32);
computer.push(3u32, 3u32);
computer.push(2u32, 4u32);
computer.push(4u32, 5u32);
computer.push(1u32, 6u32);
assert_eq!(
computer.into_sorted_vec(),
&[
ComparableDoc {
feature: 1u32,
doc: 1u32,
},
ComparableDoc {
feature: 1u32,
doc: 6u32,
}
]
);
}
} }

View File

@@ -30,7 +30,7 @@ fn create_format() {
} }
fn path_for_version(version: &str) -> String { fn path_for_version(version: &str) -> String {
format!("./tests/compat_tests_data/index_v{version}/") format!("./tests/compat_tests_data/index_v{}/", version)
} }
/// feature flag quickwit uses a different dictionary type /// feature flag quickwit uses a different dictionary type

View File

@@ -41,12 +41,16 @@ impl Executor {
/// ///
/// Regardless of the executor (`SingleThread` or `ThreadPool`), panics in the task /// Regardless of the executor (`SingleThread` or `ThreadPool`), panics in the task
/// will propagate to the caller. /// will propagate to the caller.
pub fn map<A, R, F>(&self, f: F, args: impl Iterator<Item = A>) -> crate::Result<Vec<R>> pub fn map<
where
A: Send, A: Send,
R: Send, R: Send,
AIterator: Iterator<Item = A>,
F: Sized + Sync + Fn(A) -> crate::Result<R>, F: Sized + Sync + Fn(A) -> crate::Result<R>,
{ >(
&self,
f: F,
args: AIterator,
) -> crate::Result<Vec<R>> {
match self { match self {
Executor::SingleThread => args.map(f).collect::<crate::Result<_>>(), Executor::SingleThread => args.map(f).collect::<crate::Result<_>>(),
Executor::ThreadPool(pool) => { Executor::ThreadPool(pool) => {
@@ -65,7 +69,8 @@ impl Executor {
if let Err(err) = fruit_sender_ref.send((idx, fruit)) { if let Err(err) = fruit_sender_ref.send((idx, fruit)) {
error!( error!(
"Failed to send search task. It probably means all search \ "Failed to send search task. It probably means all search \
threads have panicked. {err:?}" threads have panicked. {:?}",
err
); );
} }
}); });
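
Usage sketch (`multi_thread` as the pool constructor is an assumption): results come back in input order regardless of which worker ran each task, because fruits are re-assembled by index.

```rust
let executor = Executor::multi_thread(4, "search-")?;
let squares: Vec<u64> = executor.map(|x| Ok(x * x), 0..8u64)?;
assert_eq!(squares, vec![0, 1, 4, 9, 16, 25, 36, 49]);
```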

View File

@@ -214,7 +214,7 @@ impl Searcher {
/// It is powerless at making search faster if your index consists in /// It is powerless at making search faster if your index consists in
/// one large segment. /// one large segment.
/// ///
/// Also, keep in mind multithreading a single query on several /// Also, keep in my multithreading a single query on several
/// threads will not improve your throughput. It can actually /// threads will not improve your throughput. It can actually
/// hurt it. It will however, decrease the average response time. /// hurt it. It will however, decrease the average response time.
pub fn search_with_executor<C: Collector>( pub fn search_with_executor<C: Collector>(

View File

@@ -56,7 +56,7 @@ impl<T: Send + Sync + 'static> From<Box<T>> for DirectoryLock {
impl Drop for DirectoryLockGuard { impl Drop for DirectoryLockGuard {
fn drop(&mut self) { fn drop(&mut self) {
if let Err(e) = self.directory.delete(&self.path) { if let Err(e) = self.directory.delete(&self.path) {
error!("Failed to remove the lock file. {e:?}"); error!("Failed to remove the lock file. {:?}", e);
} }
} }
} }

View File

@@ -51,7 +51,7 @@ impl FileWatcher {
.map(|current_checksum| current_checksum != checksum) .map(|current_checksum| current_checksum != checksum)
.unwrap_or(true); .unwrap_or(true);
if metafile_has_changed { if metafile_has_changed {
info!("Meta file {path:?} was modified"); info!("Meta file {:?} was modified", path);
current_checksum_opt = Some(checksum); current_checksum_opt = Some(checksum);
// We actually ignore callbacks failing here. // We actually ignore callbacks failing here.
// We just wait for the end of their execution. // We just wait for the end of their execution.
@@ -75,7 +75,7 @@ impl FileWatcher {
let reader = match fs::File::open(path) { let reader = match fs::File::open(path) {
Ok(f) => io::BufReader::new(f), Ok(f) => io::BufReader::new(f),
Err(e) => { Err(e) => {
warn!("Failed to open meta file {path:?}: {e:?}"); warn!("Failed to open meta file {:?}: {:?}", path, e);
return Err(e); return Err(e);
} }
}; };

View File

@@ -1,9 +1,3 @@
//! The footer is a small metadata structure that is appended at the end of every file.
//!
//! The footer is used to store a checksum of the file content.
//! The footer also stores the version of the index format.
//! This version is used to detect incompatibility between the index and the library version.
use std::io; use std::io;
use std::io::Write; use std::io::Write;
@@ -26,22 +20,20 @@ type CrcHashU32 = u32;
/// A Footer is appended to every file /// A Footer is appended to every file
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] #[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct Footer { pub struct Footer {
/// The version of the index format
pub version: Version, pub version: Version,
/// The crc32 hash of the body
pub crc: CrcHashU32, pub crc: CrcHashU32,
} }
impl Footer { impl Footer {
pub(crate) fn new(crc: CrcHashU32) -> Self { pub fn new(crc: CrcHashU32) -> Self {
let version = crate::VERSION.clone(); let version = crate::VERSION.clone();
Footer { version, crc } Footer { version, crc }
} }
pub(crate) fn crc(&self) -> CrcHashU32 { pub fn crc(&self) -> CrcHashU32 {
self.crc self.crc
} }
pub(crate) fn append_footer<W: io::Write>(&self, mut write: &mut W) -> io::Result<()> { pub fn append_footer<W: io::Write>(&self, mut write: &mut W) -> io::Result<()> {
let mut counting_write = CountingWriter::wrap(&mut write); let mut counting_write = CountingWriter::wrap(&mut write);
counting_write.write_all(serde_json::to_string(&self)?.as_ref())?; counting_write.write_all(serde_json::to_string(&self)?.as_ref())?;
let footer_payload_len = counting_write.written_bytes(); let footer_payload_len = counting_write.written_bytes();
@@ -50,7 +42,6 @@ impl Footer {
Ok(()) Ok(())
} }
/// Extracts the tantivy Footer from the file and returns the footer and the rest of the file
pub fn extract_footer(file: FileSlice) -> io::Result<(Footer, FileSlice)> { pub fn extract_footer(file: FileSlice) -> io::Result<(Footer, FileSlice)> {
if file.len() < 4 { if file.len() < 4 {
return Err(io::Error::new( return Err(io::Error::new(

View File

@@ -157,7 +157,7 @@ impl ManagedDirectory {
for file_to_delete in files_to_delete { for file_to_delete in files_to_delete {
match self.delete(&file_to_delete) { match self.delete(&file_to_delete) {
Ok(_) => { Ok(_) => {
info!("Deleted {file_to_delete:?}"); info!("Deleted {:?}", file_to_delete);
deleted_files.push(file_to_delete); deleted_files.push(file_to_delete);
} }
Err(file_error) => { Err(file_error) => {
@@ -170,7 +170,7 @@ impl ManagedDirectory {
if !cfg!(target_os = "windows") { if !cfg!(target_os = "windows") {
// On windows, delete is expected to fail if the file // On windows, delete is expected to fail if the file
// is mmapped. // is mmapped.
error!("Failed to delete {file_to_delete:?}"); error!("Failed to delete {:?}", file_to_delete);
} }
} }
} }

View File

@@ -7,7 +7,7 @@ use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock, Weak}; use std::sync::{Arc, RwLock, Weak};
use common::StableDeref; use common::StableDeref;
use fs4::fs_std::FileExt; use fs4::FileExt;
#[cfg(all(feature = "mmap", unix))] #[cfg(all(feature = "mmap", unix))]
pub use memmap2::Advice; pub use memmap2::Advice;
use memmap2::Mmap; use memmap2::Mmap;
@@ -29,7 +29,7 @@ pub type WeakArcBytes = Weak<dyn Deref<Target = [u8]> + Send + Sync + 'static>;
/// Create a default io error given a string. /// Create a default io error given a string.
pub(crate) fn make_io_err(msg: String) -> io::Error { pub(crate) fn make_io_err(msg: String) -> io::Error {
io::Error::other(msg) io::Error::new(io::ErrorKind::Other, msg)
} }
/// Returns `None` iff the file exists, can be read, but is empty (and hence /// Returns `None` iff the file exists, can be read, but is empty (and hence
@@ -369,7 +369,7 @@ pub(crate) fn atomic_write(path: &Path, content: &[u8]) -> io::Result<()> {
impl Directory for MmapDirectory { impl Directory for MmapDirectory {
fn get_file_handle(&self, path: &Path) -> Result<Arc<dyn FileHandle>, OpenReadError> { fn get_file_handle(&self, path: &Path) -> Result<Arc<dyn FileHandle>, OpenReadError> {
debug!("Open Read {path:?}"); debug!("Open Read {:?}", path);
let full_path = self.resolve_path(path); let full_path = self.resolve_path(path);
let mut mmap_cache = self.inner.mmap_cache.write().map_err(|_| { let mut mmap_cache = self.inner.mmap_cache.write().map_err(|_| {
@@ -414,7 +414,7 @@ impl Directory for MmapDirectory {
} }
fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError> { fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError> {
debug!("Open Write {path:?}"); debug!("Open Write {:?}", path);
let full_path = self.resolve_path(path); let full_path = self.resolve_path(path);
let open_res = OpenOptions::new() let open_res = OpenOptions::new()
@@ -467,7 +467,7 @@ impl Directory for MmapDirectory {
} }
fn atomic_write(&self, path: &Path, content: &[u8]) -> io::Result<()> { fn atomic_write(&self, path: &Path, content: &[u8]) -> io::Result<()> {
debug!("Atomic Write {path:?}"); debug!("Atomic Write {:?}", path);
let full_path = self.resolve_path(path); let full_path = self.resolve_path(path);
atomic_write(&full_path, content)?; atomic_write(&full_path, content)?;
Ok(()) Ok(())
@@ -485,9 +485,7 @@ impl Directory for MmapDirectory {
if lock.is_blocking { if lock.is_blocking {
file.lock_exclusive().map_err(LockError::wrap_io_error)?; file.lock_exclusive().map_err(LockError::wrap_io_error)?;
} else { } else {
if !file.try_lock_exclusive().map_err(|_| LockError::LockBusy)? { file.try_lock_exclusive().map_err(|_| LockError::LockBusy)?
return Err(LockError::LockBusy);
}
} }
// dropping the file handle will release the lock. // dropping the file handle will release the lock.
Ok(DirectoryLock::from(Box::new(ReleaseLockFile { Ok(DirectoryLock::from(Box::new(ReleaseLockFile {

View File

@@ -6,7 +6,7 @@ mod mmap_directory;
mod directory; mod directory;
mod directory_lock; mod directory_lock;
mod file_watcher; mod file_watcher;
pub mod footer; mod footer;
mod managed_directory; mod managed_directory;
mod ram_directory; mod ram_directory;
mod watch_event_router; mod watch_event_router;

View File

@@ -191,7 +191,7 @@ impl Directory for RamDirectory {
.fs .fs
.read() .read()
.map_err(|e| OpenReadError::IoError { .map_err(|e| OpenReadError::IoError {
io_error: Arc::new(io::Error::other(e.to_string())), io_error: Arc::new(io::Error::new(io::ErrorKind::Other, e.to_string())),
filepath: path.to_path_buf(), filepath: path.to_path_buf(),
})? })?
.exists(path)) .exists(path))

View File

@@ -90,7 +90,10 @@ impl WatchCallbackList {
let _ = sender.send(Ok(())); let _ = sender.send(Ok(()));
}); });
if let Err(err) = spawn_res { if let Err(err) = spawn_res {
error!("Failed to spawn thread to call watch callbacks. Cause: {err:?}"); error!(
"Failed to spawn thread to call watch callbacks. Cause: {:?}",
err
);
} }
result result
} }

View File

@@ -942,7 +942,7 @@ mod tests {
let numbers = [100, 200, 300]; let numbers = [100, 200, 300];
let test_range = |range: RangeInclusive<u64>| { let test_range = |range: RangeInclusive<u64>| {
let expected_count = numbers.iter().filter(|num| range.contains(*num)).count(); let expected_count = numbers.iter().filter(|num| range.contains(num)).count();
let mut vec = vec![]; let mut vec = vec![];
field.get_row_ids_for_value_range(range, 0..u32::MAX, &mut vec); field.get_row_ids_for_value_range(range, 0..u32::MAX, &mut vec);
assert_eq!(vec.len(), expected_count); assert_eq!(vec.len(), expected_count);
@@ -1020,7 +1020,7 @@ mod tests {
let numbers = [1000, 1001, 1003]; let numbers = [1000, 1001, 1003];
let test_range = |range: RangeInclusive<u64>| { let test_range = |range: RangeInclusive<u64>| {
let expected_count = numbers.iter().filter(|num| range.contains(*num)).count(); let expected_count = numbers.iter().filter(|num| range.contains(num)).count();
let mut vec = vec![]; let mut vec = vec![];
field.get_row_ids_for_value_range(range, 0..u32::MAX, &mut vec); field.get_row_ids_for_value_range(range, 0..u32::MAX, &mut vec);
assert_eq!(vec.len(), expected_count); assert_eq!(vec.len(), expected_count);

View File

@@ -217,7 +217,7 @@ impl FastFieldReaders {
Ok(dynamic_column.into()) Ok(dynamic_column.into())
} }
/// Returns a `dynamic_column_handle`. /// Returning a `dynamic_column_handle`.
pub fn dynamic_column_handle( pub fn dynamic_column_handle(
&self, &self,
field_name: &str, field_name: &str,
@@ -234,7 +234,7 @@ impl FastFieldReaders {
Ok(dynamic_column_handle_opt) Ok(dynamic_column_handle_opt)
} }
/// Returns all `dynamic_column_handle` that match the given field name. /// Returning all `dynamic_column_handle`.
pub fn dynamic_column_handles( pub fn dynamic_column_handles(
&self, &self,
field_name: &str, field_name: &str,
@@ -250,22 +250,6 @@ impl FastFieldReaders {
Ok(dynamic_column_handles) Ok(dynamic_column_handles)
} }
/// Returns all `dynamic_column_handle` that are inner fields of the provided JSON path.
pub fn dynamic_subpath_column_handles(
&self,
root_path: &str,
) -> crate::Result<Vec<DynamicColumnHandle>> {
let Some(resolved_field_name) = self.resolve_field(root_path)? else {
return Ok(Vec::new());
};
let dynamic_column_handles = self
.columnar
.read_subpath_columns(&resolved_field_name)?
.into_iter()
.collect();
Ok(dynamic_column_handles)
}
#[doc(hidden)] #[doc(hidden)]
pub async fn list_dynamic_column_handles( pub async fn list_dynamic_column_handles(
&self, &self,
@@ -281,21 +265,6 @@ impl FastFieldReaders {
Ok(columns) Ok(columns)
} }
#[doc(hidden)]
pub async fn list_subpath_dynamic_column_handles(
&self,
root_path: &str,
) -> crate::Result<Vec<DynamicColumnHandle>> {
let Some(resolved_field_name) = self.resolve_field(root_path)? else {
return Ok(Vec::new());
};
let columns = self
.columnar
.read_subpath_columns_async(&resolved_field_name)
.await?;
Ok(columns)
}
/// Returns the `u64` column used to represent any `u64`-mapped typed (String/Bytes term ids, /// Returns the `u64` column used to represent any `u64`-mapped typed (String/Bytes term ids,
/// i64, u64, f64, DateTime). /// i64, u64, f64, DateTime).
/// ///
@@ -507,15 +476,6 @@ mod tests {
.iter() .iter()
.any(|column| column.column_type() == ColumnType::Str)); .any(|column| column.column_type() == ColumnType::Str));
let json_columns = fast_fields.dynamic_column_handles("json").unwrap(); println!("*** {:?}", fast_fields.columnar().list_columns());
assert_eq!(json_columns.len(), 0);
let json_subcolumns = fast_fields.dynamic_subpath_column_handles("json").unwrap();
assert_eq!(json_subcolumns.len(), 3);
let foo_subcolumns = fast_fields
.dynamic_subpath_column_handles("json.foo")
.unwrap();
assert_eq!(foo_subcolumns.len(), 0);
} }
} }
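
Usage sketch of the subpath lookup exercised above (expected counts taken from the assertions; `segment_reader` is an assumed variable):

```rust
let fast_fields = segment_reader.fast_fields();
// "json" itself has no column, but three columns exist underneath it...
assert_eq!(fast_fields.dynamic_column_handles("json")?.len(), 0);
assert_eq!(fast_fields.dynamic_subpath_column_handles("json")?.len(), 3);
// ...while a leaf path has none.
assert!(fast_fields.dynamic_subpath_column_handles("json.foo")?.is_empty());
```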

View File

@@ -15,9 +15,7 @@ use crate::directory::MmapDirectory;
 use crate::directory::{Directory, ManagedDirectory, RamDirectory, INDEX_WRITER_LOCK};
 use crate::error::{DataCorruption, TantivyError};
 use crate::index::{IndexMeta, SegmentId, SegmentMeta, SegmentMetaInventory};
-use crate::indexer::index_writer::{
-    IndexWriterOptions, MAX_NUM_THREAD, MEMORY_BUDGET_NUM_BYTES_MIN,
-};
+use crate::indexer::index_writer::{MAX_NUM_THREAD, MEMORY_BUDGET_NUM_BYTES_MIN};
 use crate::indexer::segment_updater::save_metas;
 use crate::indexer::{IndexWriter, SingleSegmentIndexWriter};
 use crate::reader::{IndexReader, IndexReaderBuilder};
@@ -521,43 +519,6 @@ impl Index {
         load_metas(self.directory(), &self.inventory)
     }
-    /// Open a new index writer with the given options. Attempts to acquire a lockfile.
-    ///
-    /// The lockfile should be deleted on drop, but it is possible
-    /// that due to a panic or other error, a stale lockfile will be
-    /// left in the index directory. If you are sure that no other
-    /// `IndexWriter` on the system is accessing the index directory,
-    /// it is safe to manually delete the lockfile.
-    ///
-    /// - `options` defines the writer configuration, which includes things like buffer sizes,
-    ///   indexer threads, etc.
-    ///
-    /// # Errors
-    /// If the lockfile already exists, returns `TantivyError::LockFailure`.
-    /// If the memory arena per thread is too small or too big, returns
-    /// `TantivyError::InvalidArgument`.
-    pub fn writer_with_options<D: Document>(
-        &self,
-        options: IndexWriterOptions,
-    ) -> crate::Result<IndexWriter<D>> {
-        let directory_lock = self
-            .directory
-            .acquire_lock(&INDEX_WRITER_LOCK)
-            .map_err(|err| {
-                TantivyError::LockFailure(
-                    err,
-                    Some(
-                        "Failed to acquire index lock. If you are using a regular directory, this \
-                         means there is already an `IndexWriter` working on this `Directory`, in \
-                         this process or in a different process."
-                            .to_string(),
-                    ),
-                )
-            })?;
-        IndexWriter::new(self, options, directory_lock)
-    }
     /// Open a new index writer. Attempts to acquire a lockfile.
     ///
     /// The lockfile should be deleted on drop, but it is possible
@@ -582,12 +543,27 @@ impl Index {
         num_threads: usize,
         overall_memory_budget_in_bytes: usize,
     ) -> crate::Result<IndexWriter<D>> {
+        let directory_lock = self
+            .directory
+            .acquire_lock(&INDEX_WRITER_LOCK)
+            .map_err(|err| {
+                TantivyError::LockFailure(
+                    err,
+                    Some(
+                        "Failed to acquire index lock. If you are using a regular directory, this \
+                         means there is already an `IndexWriter` working on this `Directory`, in \
+                         this process or in a different process."
+                            .to_string(),
+                    ),
+                )
+            })?;
         let memory_arena_in_bytes_per_thread = overall_memory_budget_in_bytes / num_threads;
-        let options = IndexWriterOptions::builder()
-            .num_worker_threads(num_threads)
-            .memory_budget_per_thread(memory_arena_in_bytes_per_thread)
-            .build();
-        self.writer_with_options(options)
+        IndexWriter::new(
+            self,
+            num_threads,
+            memory_arena_in_bytes_per_thread,
+            directory_lock,
+        )
     }
     /// Helper to create an index writer for tests.
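
On the `-` side of these hunks, `writer_with_num_threads` was a thin wrapper that split the overall budget evenly across threads (e.g. 120 MB over 4 threads gives a 30 MB arena per indexing thread) and delegated to `writer_with_options`; the `+` side inlines the lock acquisition instead. A hedged sketch of the two equivalent pre-revert entry points (budget figures are arbitrary, and `IndexWriterOptions` only exists on the `-` side):

```rust
use tantivy::indexer::IndexWriterOptions;
use tantivy::schema::Schema;
use tantivy::{Index, IndexWriter, TantivyDocument};

fn main() -> tantivy::Result<()> {
    let index = Index::create_in_ram(Schema::builder().build());

    // 120 MB overall across 4 threads -> a 30 MB arena per indexing thread.
    let writer_a: IndexWriter = index.writer_with_num_threads(4, 120_000_000)?;
    drop(writer_a); // release the lockfile before opening the next writer

    // The same configuration, spelled out through the options builder.
    let options = IndexWriterOptions::builder()
        .num_worker_threads(4)
        .memory_budget_per_thread(30_000_000)
        .build();
    let _writer_b = index.writer_with_options::<TantivyDocument>(options)?;
    Ok(())
}
```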

View File

@@ -3,12 +3,6 @@ use std::io;
 use common::json_path_writer::JSON_END_OF_PATH;
 use common::BinarySerializable;
 use fnv::FnvHashSet;
-#[cfg(feature = "quickwit")]
-use futures_util::{FutureExt, StreamExt, TryStreamExt};
-#[cfg(feature = "quickwit")]
-use itertools::Itertools;
-#[cfg(feature = "quickwit")]
-use tantivy_fst::automaton::{AlwaysMatch, Automaton};
 use crate::directory::FileSlice;
 use crate::positions::PositionReader;
@@ -225,18 +219,13 @@ impl InvertedIndexReader {
         self.termdict.get_async(term.serialized_value_bytes()).await
     }
-    async fn get_term_range_async<'a, A: Automaton + 'a>(
-        &'a self,
+    async fn get_term_range_async(
+        &self,
         terms: impl std::ops::RangeBounds<Term>,
-        automaton: A,
         limit: Option<u64>,
-        merge_holes_under_bytes: usize,
-    ) -> io::Result<impl Iterator<Item = TermInfo> + 'a>
-    where
-        A::State: Clone,
-    {
+    ) -> io::Result<impl Iterator<Item = TermInfo> + '_> {
         use std::ops::Bound;
-        let range_builder = self.termdict.search(automaton);
+        let range_builder = self.termdict.range();
         let range_builder = match terms.start_bound() {
             Bound::Included(bound) => range_builder.ge(bound.serialized_value_bytes()),
             Bound::Excluded(bound) => range_builder.gt(bound.serialized_value_bytes()),
@@ -253,9 +242,7 @@ impl InvertedIndexReader {
             range_builder
         };
-        let mut stream = range_builder
-            .into_stream_async_merging_holes(merge_holes_under_bytes)
-            .await?;
+        let mut stream = range_builder.into_stream_async().await?;
         let iter = std::iter::from_fn(move || stream.next().map(|(_k, v)| v.clone()));
@@ -301,9 +288,7 @@ impl InvertedIndexReader {
         limit: Option<u64>,
         with_positions: bool,
     ) -> io::Result<bool> {
-        let mut term_info = self
-            .get_term_range_async(terms, AlwaysMatch, limit, 0)
-            .await?;
+        let mut term_info = self.get_term_range_async(terms, limit).await?;
         let Some(first_terminfo) = term_info.next() else {
             // no key matches, nothing more to load
@@ -330,84 +315,6 @@ impl InvertedIndexReader {
         Ok(true)
     }
-    /// Warms up the block postings for all terms matching the given automaton.
-    /// This method is for advanced usage only.
-    ///
-    /// Returns a boolean: whether a term matching the automaton was found in the dictionary.
-    pub async fn warm_postings_automaton<
-        A: Automaton + Clone + Send + 'static,
-        E: FnOnce(Box<dyn FnOnce() -> io::Result<()> + Send>) -> F,
-        F: std::future::Future<Output = io::Result<()>>,
-    >(
-        &self,
-        automaton: A,
-        // with_positions: bool, at the moment we have no use for it, and supporting it would add
-        // complexity to the coalesce
-        executor: E,
-    ) -> io::Result<bool>
-    where
-        A::State: Clone,
-    {
-        // Merge holes under 4MiB: that's how many bytes we can hope to receive within one TTFB
-        // from S3 (~80MiB/s bandwidth, ~50ms latency).
-        const MERGE_HOLES_UNDER_BYTES: usize = (80 * 1024 * 1024 * 50) / 1000;
-        // We build a first iterator to download everything. Simply calling the function already
-        // downloads everything we need from the sstable, but doesn't start iterating over it.
-        let _term_info_iter = self
-            .get_term_range_async(.., automaton.clone(), None, MERGE_HOLES_UNDER_BYTES)
-            .await?;
-        let (sender, posting_ranges_to_load_stream) = futures_channel::mpsc::unbounded();
-        let termdict = self.termdict.clone();
-        let cpu_bound_task = move || {
-            // Then we build a 2nd iterator, this one with no holes, so we don't go through blocks
-            // we can't match.
-            // This makes the assumption there is a caching layer below us, which gives sync reads
-            // for free after the initial async access. This might not always be true, but is in
-            // Quickwit.
-            // We build things from this closure, otherwise we get into lifetime issues that can
-            // only be solved with self-referential structs. Returning an io::Result from here is
-            // a bit more leaky abstraction-wise, but a lot better than the alternative.
-            let mut stream = termdict.search(automaton).into_stream()?;
-            // We could do without an iterator, but this gives us access to coalesce, which
-            // simplifies things.
-            let posting_ranges_iter =
-                std::iter::from_fn(move || stream.next().map(|(_k, v)| v.postings_range.clone()));
-            let merged_posting_ranges_iter = posting_ranges_iter.coalesce(|range1, range2| {
-                if range1.end + MERGE_HOLES_UNDER_BYTES >= range2.start {
-                    Ok(range1.start..range2.end)
-                } else {
-                    Err((range1, range2))
-                }
-            });
-            for posting_range in merged_posting_ranges_iter {
-                if let Err(_) = sender.unbounded_send(posting_range) {
-                    // this should happen only when search is cancelled
-                    return Err(io::Error::other("failed to send posting range back"));
-                }
-            }
-            Ok(())
-        };
-        let task_handle = executor(Box::new(cpu_bound_task));
-        let posting_downloader = posting_ranges_to_load_stream
-            .map(|posting_slice| {
-                self.postings_file_slice
-                    .read_bytes_slice_async(posting_slice)
-                    .map(|result| result.map(|_slice| ()))
-            })
-            .buffer_unordered(5)
-            .try_collect::<Vec<()>>();
-        let (_, slices_downloaded) =
-            futures_util::future::try_join(task_handle, posting_downloader).await?;
-        Ok(!slices_downloaded.is_empty())
-    }
     /// Warmup the block postings for all terms.
     /// This method is for advanced usage only.
     ///
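
The 4 MiB constant in the removed `warm_postings_automaton` follows directly from the numbers in its comment: at ~80 MiB/s with ~50 ms time-to-first-byte, 80 MiB/s × 0.05 s ≈ 4 MiB arrive "for free" in the time an extra request would spend waiting, so downloading a hole smaller than that is cheaper than issuing a second fetch. A standalone sketch of the same coalescing rule (`coalesce_ranges` is a hypothetical stand-in for the itertools `coalesce` call used above, and assumes sorted, non-overlapping input):

```rust
use std::ops::Range;

/// Merges sorted, non-overlapping byte ranges whose gap is below `max_hole`.
fn coalesce_ranges(
    ranges: impl IntoIterator<Item = Range<usize>>,
    max_hole: usize,
) -> Vec<Range<usize>> {
    let mut merged: Vec<Range<usize>> = Vec::new();
    for range in ranges {
        match merged.last_mut() {
            // Close enough: extend the previous range over the hole.
            Some(prev) if prev.end + max_hole >= range.start => prev.end = range.end,
            _ => merged.push(range),
        }
    }
    merged
}

fn main() {
    // ~80 MiB/s * 50 ms => 4 MiB of "free" bytes per saved round-trip.
    const MERGE_HOLES_UNDER_BYTES: usize = (80 * 1024 * 1024 * 50) / 1000;
    assert_eq!(MERGE_HOLES_UNDER_BYTES, 4 * 1024 * 1024);

    let merged = coalesce_ranges(
        [0..1_000, 2_000..3_000, 10_000_000..10_000_100],
        MERGE_HOLES_UNDER_BYTES,
    );
    // The first two ranges sit within 4 MiB of each other and fuse; the third does not.
    assert_eq!(merged, vec![0..3_000, 10_000_000..10_000_100]);
}
```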

View File

@@ -45,23 +45,6 @@ fn error_in_index_worker_thread(context: &str) -> TantivyError {
     ))
 }
-#[derive(Clone, bon::Builder)]
-/// A builder for creating a new [IndexWriter] for an index.
-pub struct IndexWriterOptions {
-    #[builder(default = MEMORY_BUDGET_NUM_BYTES_MIN)]
-    /// The memory budget per indexer thread.
-    ///
-    /// When an indexer thread has buffered this much data in memory,
-    /// it will flush the segment to disk (although this is not searchable until commit is called).
-    memory_budget_per_thread: usize,
-    #[builder(default = 1)]
-    /// The number of indexer worker threads to use.
-    num_worker_threads: usize,
-    #[builder(default = 4)]
-    /// Defines the number of merger threads to use.
-    num_merge_threads: usize,
-}
 /// `IndexWriter` is the user entry-point to add documents to an index.
 ///
 /// It manages a small number of indexing threads, as well as a shared
@@ -75,7 +58,8 @@ pub struct IndexWriter<D: Document = TantivyDocument> {
     index: Index,
-    options: IndexWriterOptions,
+    // The memory budget per thread, after which a commit is triggered.
+    memory_budget_in_bytes_per_thread: usize,
     workers_join_handle: Vec<JoinHandle<crate::Result<()>>>,
@@ -86,6 +70,8 @@ pub struct IndexWriter<D: Document = TantivyDocument> {
     worker_id: usize,
+    num_threads: usize,
     delete_queue: DeleteQueue,
     stamper: Stamper,
@@ -279,27 +265,23 @@ impl<D: Document> IndexWriter<D> {
     /// `TantivyError::InvalidArgument`
     pub(crate) fn new(
         index: &Index,
-        options: IndexWriterOptions,
+        num_threads: usize,
+        memory_budget_in_bytes_per_thread: usize,
         directory_lock: DirectoryLock,
     ) -> crate::Result<Self> {
-        if options.memory_budget_per_thread < MEMORY_BUDGET_NUM_BYTES_MIN {
+        if memory_budget_in_bytes_per_thread < MEMORY_BUDGET_NUM_BYTES_MIN {
             let err_msg = format!(
                 "The memory arena in bytes per thread needs to be at least \
                  {MEMORY_BUDGET_NUM_BYTES_MIN}."
             );
             return Err(TantivyError::InvalidArgument(err_msg));
         }
-        if options.memory_budget_per_thread >= MEMORY_BUDGET_NUM_BYTES_MAX {
+        if memory_budget_in_bytes_per_thread >= MEMORY_BUDGET_NUM_BYTES_MAX {
             let err_msg = format!(
                 "The memory arena in bytes per thread cannot exceed {MEMORY_BUDGET_NUM_BYTES_MAX}"
             );
             return Err(TantivyError::InvalidArgument(err_msg));
         }
-        if options.num_worker_threads == 0 {
-            let err_msg = "At least one worker thread is required, got 0".to_string();
-            return Err(TantivyError::InvalidArgument(err_msg));
-        }
         let (document_sender, document_receiver) =
             crossbeam_channel::bounded(PIPELINE_MAX_SIZE_IN_DOCS);
@@ -309,17 +291,13 @@ impl<D: Document> IndexWriter<D> {
         let stamper = Stamper::new(current_opstamp);
-        let segment_updater = SegmentUpdater::create(
-            index.clone(),
-            stamper.clone(),
-            &delete_queue.cursor(),
-            options.num_merge_threads,
-        )?;
+        let segment_updater =
+            SegmentUpdater::create(index.clone(), stamper.clone(), &delete_queue.cursor())?;
         let mut index_writer = Self {
             _directory_lock: Some(directory_lock),
-            options: options.clone(),
+            memory_budget_in_bytes_per_thread,
             index: index.clone(),
             index_writer_status: IndexWriterStatus::from(document_receiver),
             operation_sender: document_sender,
@@ -327,6 +305,7 @@ impl<D: Document> IndexWriter<D> {
             segment_updater,
             workers_join_handle: vec![],
+            num_threads,
             delete_queue,
@@ -370,7 +349,7 @@ impl<D: Document> IndexWriter<D> {
             .map_err(|_| error_in_index_worker_thread("Failed to join merging thread."));
         if let Err(ref e) = result {
-            error!("Some merging thread failed {e:?}");
+            error!("Some merging thread failed {:?}", e);
         }
         result
@@ -419,7 +398,7 @@ impl<D: Document> IndexWriter<D> {
         let mut delete_cursor = self.delete_queue.cursor();
-        let mem_budget = self.options.memory_budget_per_thread;
+        let mem_budget = self.memory_budget_in_bytes_per_thread;
         let index = self.index.clone();
         let join_handle: JoinHandle<crate::Result<()>> = thread::Builder::new()
             .name(format!("thrd-tantivy-index{}", self.worker_id))
@@ -472,7 +451,7 @@ impl<D: Document> IndexWriter<D> {
     }
     fn start_workers(&mut self) -> crate::Result<()> {
-        for _ in 0..self.options.num_worker_threads {
+        for _ in 0..self.num_threads {
             self.add_indexing_worker()?;
         }
         Ok(())
@@ -574,7 +553,12 @@ impl<D: Document> IndexWriter<D> {
             .take()
             .expect("The IndexWriter does not have any lock. This is a bug, please report.");
-        let new_index_writer = IndexWriter::new(&self.index, self.options.clone(), directory_lock)?;
+        let new_index_writer = IndexWriter::new(
+            &self.index,
+            self.num_threads,
+            self.memory_budget_in_bytes_per_thread,
+            directory_lock,
+        )?;
         // the current `self` is dropped right away because of this call.
         //
@@ -644,7 +628,7 @@ impl<D: Document> IndexWriter<D> {
         let commit_opstamp = self.stamper.stamp();
         let prepared_commit = PreparedCommit::new(self, commit_opstamp);
-        info!("Prepared commit {commit_opstamp}");
+        info!("Prepared commit {}", commit_opstamp);
         Ok(prepared_commit)
     }
@@ -828,7 +812,7 @@ mod tests {
     use crate::directory::error::LockError;
     use crate::error::*;
     use crate::indexer::index_writer::MEMORY_BUDGET_NUM_BYTES_MIN;
-    use crate::indexer::{IndexWriterOptions, NoMergePolicy};
+    use crate::indexer::NoMergePolicy;
     use crate::query::{QueryParser, TermQuery};
     use crate::schema::{
         self, Facet, FacetOptions, IndexRecordOption, IpAddrOptions, JsonObjectOptions,
@@ -2549,36 +2533,4 @@ mod tests {
         index_writer.commit().unwrap();
         Ok(())
     }
-    #[test]
-    fn test_writer_options_validation() {
-        let mut schema_builder = Schema::builder();
-        let _field = schema_builder.add_bool_field("example", STORED);
-        let index = Index::create_in_ram(schema_builder.build());
-        let opt_wo_threads = IndexWriterOptions::builder().num_worker_threads(0).build();
-        let result = index.writer_with_options::<TantivyDocument>(opt_wo_threads);
-        assert!(result.is_err(), "Writer should reject 0 thread count");
-        assert!(matches!(result, Err(TantivyError::InvalidArgument(_))));
-        let opt_with_low_memory = IndexWriterOptions::builder()
-            .memory_budget_per_thread(10 << 10)
-            .build();
-        let result = index.writer_with_options::<TantivyDocument>(opt_with_low_memory);
-        assert!(
-            result.is_err(),
-            "Writer should reject options with too low memory size"
-        );
-        assert!(matches!(result, Err(TantivyError::InvalidArgument(_))));
-        let opt_with_high_memory = IndexWriterOptions::builder()
-            .memory_budget_per_thread(5 << 30)
-            .build();
-        let result = index.writer_with_options::<TantivyDocument>(opt_with_high_memory);
-        assert!(
-            result.is_err(),
-            "Writer should reject options with too high memory size"
-        );
-        assert!(matches!(result, Err(TantivyError::InvalidArgument(_))));
-    }
 }

View File

@@ -31,7 +31,7 @@ mod stamper;
 use crossbeam_channel as channel;
 use smallvec::SmallVec;
-pub use self::index_writer::{IndexWriter, IndexWriterOptions};
+pub use self::index_writer::IndexWriter;
 pub use self::log_merge_policy::LogMergePolicy;
 pub use self::merge_operation::MergeOperation;
 pub use self::merge_policy::{MergeCandidate, MergePolicy, NoMergePolicy};

View File

@@ -1,4 +1,3 @@
-use std::any::Any;
 use std::borrow::BorrowMut;
 use std::collections::HashSet;
 use std::io::Write;
@@ -24,9 +23,9 @@ use crate::indexer::{
     DefaultMergePolicy, MergeCandidate, MergeOperation, MergePolicy, SegmentEntry,
     SegmentSerializer,
 };
-use crate::{FutureResult, Opstamp, TantivyError};
+use crate::{FutureResult, Opstamp};
-const PANIC_CAUGHT: &str = "Panic caught in merge thread";
+const NUM_MERGE_THREADS: usize = 4;
 /// Save the index meta file.
 /// This operation is atomic:
@@ -274,7 +273,6 @@ impl SegmentUpdater {
         index: Index,
         stamper: Stamper,
         delete_cursor: &DeleteCursor,
-        num_merge_threads: usize,
     ) -> crate::Result<SegmentUpdater> {
         let segments = index.searchable_segment_metas()?;
         let segment_manager = SegmentManager::from_segments(segments, delete_cursor);
@@ -289,16 +287,7 @@ impl SegmentUpdater {
         })?;
         let merge_thread_pool = ThreadPoolBuilder::new()
             .thread_name(|i| format!("merge_thread_{i}"))
-            .num_threads(num_merge_threads)
-            .panic_handler(move |panic| {
-                // We don't print the panic content itself;
-                // it is already printed during the unwinding.
-                if let Some(message) = panic.downcast_ref::<&str>() {
-                    if *message != PANIC_CAUGHT {
-                        error!("uncaught merge panic")
-                    }
-                }
-            })
+            .num_threads(NUM_MERGE_THREADS)
             .build()
             .map_err(|_| {
                 crate::TantivyError::SystemError(
@@ -501,7 +490,8 @@ impl SegmentUpdater {
             Ok(segment_entries) => segment_entries,
             Err(err) => {
                 warn!(
-                    "Starting the merge failed for the following reason. This is not fatal. {err}"
+                    "Starting the merge failed for the following reason. This is not fatal. {}",
+                    err
                 );
                 return err.into();
             }
@@ -517,34 +507,11 @@ impl SegmentUpdater {
         // Its lifetime is used to track how many merging threads are currently running,
         // as well as which segment is currently in merge and therefore should not be
         // a candidate for another merge.
-        let merge_panic_res = std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| {
-            merge(
-                &segment_updater.index,
-                segment_entries,
-                merge_operation.target_opstamp(),
-            )
-        }));
-        let merge_res = match merge_panic_res {
-            Ok(merge_res) => merge_res,
-            Err(panic_err) => {
-                let panic_str = if let Some(msg) = panic_err.downcast_ref::<&str>() {
-                    *msg
-                } else if let Some(msg) = panic_err.downcast_ref::<String>() {
-                    msg.as_str()
-                } else {
-                    "UNKNOWN"
-                };
-                let _send_result = merging_future_send.send(Err(TantivyError::SystemError(
-                    format!("Merge thread panicked: {panic_str}"),
-                )));
-                // Resume unwinding because we forced unwind safety with
-                // `std::panic::AssertUnwindSafe`. Use a specific message so
-                // the panic_handler can double-check that we properly caught the panic.
-                let boxed_panic_message: Box<dyn Any + Send> = Box::new(PANIC_CAUGHT);
-                std::panic::resume_unwind(boxed_panic_message);
-            }
-        };
-        match merge_res {
+        match merge(
+            &segment_updater.index,
+            segment_entries,
+            merge_operation.target_opstamp(),
+        ) {
             Ok(after_merge_segment_entry) => {
                 let res = segment_updater.end_merge(merge_operation, after_merge_segment_entry);
                 let _send_result = merging_future_send.send(res);
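
The `-` side of the last hunk wraps the merge in `std::panic::catch_unwind` so a panicking merge reports an error on the merge future instead of silently killing a pool thread, then re-raises a sentinel payload that the pool's `panic_handler` (removed further up) recognizes. A minimal, self-contained sketch of that catch-and-report pattern (the closure and message here are invented for illustration):

```rust
use std::any::Any;
use std::panic::{self, AssertUnwindSafe};

/// Best-effort extraction of a message from a panic payload: `panic!` with a
/// string literal carries a `&str`, a formatted panic carries a `String`.
fn panic_message(payload: &(dyn Any + Send)) -> &str {
    if let Some(msg) = payload.downcast_ref::<&str>() {
        msg
    } else if let Some(msg) = payload.downcast_ref::<String>() {
        msg.as_str()
    } else {
        "UNKNOWN"
    }
}

fn main() {
    // AssertUnwindSafe mirrors the removed code: the captured state is declared
    // unwind-safe by fiat, which is why the panic is re-raised afterwards there.
    let result = panic::catch_unwind(AssertUnwindSafe(|| -> Result<(), String> {
        panic!("simulated merge failure")
    }));
    if let Err(payload) = result {
        eprintln!("merge thread panicked: {}", panic_message(payload.as_ref()));
        // The removed code sent this message over `merging_future_send`, then
        // called `std::panic::resume_unwind` with a sentinel payload so the
        // pool's panic_handler could tell reported panics from unexpected ones.
    }
}
```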

View File

@@ -422,7 +422,6 @@ mod tests {
     use std::collections::BTreeMap;
     use std::path::{Path, PathBuf};
-    use columnar::ColumnType;
     use tempfile::TempDir;
     use crate::collector::{Count, TopDocs};
@@ -432,15 +431,15 @@ mod tests {
     use crate::query::{PhraseQuery, QueryParser};
     use crate::schema::{
         Document, IndexRecordOption, OwnedValue, Schema, TextFieldIndexing, TextOptions, Value,
-        DATE_TIME_PRECISION_INDEXED, FAST, STORED, STRING, TEXT,
+        DATE_TIME_PRECISION_INDEXED, STORED, STRING, TEXT,
     };
     use crate::store::{Compressor, StoreReader, StoreWriter};
     use crate::time::format_description::well_known::Rfc3339;
     use crate::time::OffsetDateTime;
     use crate::tokenizer::{PreTokenizedString, Token};
     use crate::{
-        DateTime, Directory, DocAddress, DocSet, Index, IndexWriter, SegmentReader,
-        TantivyDocument, Term, TERMINATED,
+        DateTime, Directory, DocAddress, DocSet, Index, IndexWriter, TantivyDocument, Term,
+        TERMINATED,
     };
     #[test]
@@ -842,75 +841,6 @@ mod tests {
         assert_eq!(searcher.search(&phrase_query, &Count).unwrap(), 0);
     }
-    #[test]
-    fn test_json_fast() {
-        let mut schema_builder = Schema::builder();
-        let json_field = schema_builder.add_json_field("json", FAST);
-        let schema = schema_builder.build();
-        let json_val: serde_json::Value = serde_json::from_str(
-            r#"{
-                "toto": "titi",
-                "float": -0.2,
-                "bool": true,
-                "unsigned": 1,
-                "signed": -2,
-                "complexobject": {
-                    "field.with.dot": 1
-                },
-                "date": "1985-04-12T23:20:50.52Z",
-                "my_arr": [2, 3, {"my_key": "two tokens"}, 4]
-            }"#,
-        )
-        .unwrap();
-        let doc = doc!(json_field=>json_val.clone());
-        let index = Index::create_in_ram(schema.clone());
-        let mut writer = index.writer_for_tests().unwrap();
-        writer.add_document(doc).unwrap();
-        writer.commit().unwrap();
-        let reader = index.reader().unwrap();
-        let searcher = reader.searcher();
-        let segment_reader = searcher.segment_reader(0u32);
-        fn assert_type(reader: &SegmentReader, field: &str, typ: ColumnType) {
-            let cols = reader.fast_fields().dynamic_column_handles(field).unwrap();
-            assert_eq!(cols.len(), 1, "{field}");
-            assert_eq!(cols[0].column_type(), typ, "{field}");
-        }
-        assert_type(segment_reader, "json.toto", ColumnType::Str);
-        assert_type(segment_reader, "json.float", ColumnType::F64);
-        assert_type(segment_reader, "json.bool", ColumnType::Bool);
-        assert_type(segment_reader, "json.unsigned", ColumnType::I64);
-        assert_type(segment_reader, "json.signed", ColumnType::I64);
-        assert_type(
-            segment_reader,
-            "json.complexobject.field\\.with\\.dot",
-            ColumnType::I64,
-        );
-        assert_type(segment_reader, "json.date", ColumnType::DateTime);
-        assert_type(segment_reader, "json.my_arr", ColumnType::I64);
-        assert_type(segment_reader, "json.my_arr.my_key", ColumnType::Str);
-        fn assert_empty(reader: &SegmentReader, field: &str) {
-            let cols = reader.fast_fields().dynamic_column_handles(field).unwrap();
-            assert_eq!(cols.len(), 0);
-        }
-        assert_empty(segment_reader, "unknown");
-        assert_empty(segment_reader, "json");
-        assert_empty(segment_reader, "json.toto.titi");
-        let sub_columns = segment_reader
-            .fast_fields()
-            .dynamic_subpath_column_handles("json")
-            .unwrap();
-        assert_eq!(sub_columns.len(), 9);
-        let subsub_columns = segment_reader
-            .fast_fields()
-            .dynamic_subpath_column_handles("json.complexobject")
-            .unwrap();
-        assert_eq!(subsub_columns.len(), 1);
-    }
     #[test]
     fn test_json_term_with_numeric_merge_panic_regression_bug_2283() {
         // https://github.com/quickwit-oss/tantivy/issues/2283
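
One detail worth keeping from the deleted `test_json_fast`: JSON keys that themselves contain a dot are addressed by escaping it, as in `json.complexobject.field\.with\.dot`, while an unescaped dot is read as one level of nesting. A small probe of that convention (the schema and `serde_json` usage are illustrative assumptions):

```rust
use tantivy::schema::{Schema, FAST};
use tantivy::{doc, Index};

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let json = schema_builder.add_json_field("json", FAST);
    let index = Index::create_in_ram(schema_builder.build());

    let mut writer = index.writer(50_000_000)?;
    writer.add_document(doc!(json => serde_json::json!({"field.with.dot": 1})))?;
    writer.commit()?;

    let searcher = index.reader()?.searcher();
    let fast_fields = searcher.segment_reader(0u32).fast_fields();
    // Escaped: one path segment whose key is literally "field.with.dot".
    // Unescaped ("json.field.with.dot"), the lookup would descend into
    // field -> with -> dot, which does not exist in this document.
    let cols = fast_fields.dynamic_column_handles(r"json.field\.with\.dot")?;
    assert_eq!(cols.len(), 1);
    Ok(())
}
```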

View File

@@ -1,19 +1,79 @@
 use std::ops::Range;
-use std::sync::atomic::{AtomicU64, Ordering};
+use std::sync::atomic::Ordering;
 use std::sync::Arc;
 use crate::Opstamp;
+#[cfg(not(target_arch = "arm"))]
+mod atomic_impl {
+    use std::sync::atomic::{AtomicU64, Ordering};
+    use crate::Opstamp;
+    #[derive(Default)]
+    pub struct AtomicU64Wrapper(AtomicU64);
+    impl AtomicU64Wrapper {
+        pub fn new(first_opstamp: Opstamp) -> AtomicU64Wrapper {
+            AtomicU64Wrapper(AtomicU64::new(first_opstamp))
+        }
+        pub fn fetch_add(&self, val: u64, order: Ordering) -> u64 {
+            self.0.fetch_add(val, order)
+        }
+        pub fn revert(&self, val: u64, order: Ordering) -> u64 {
+            self.0.store(val, order);
+            val
+        }
+    }
+}
+/// On `arm`, which lacks native 64-bit atomics, we rely on a lock instead.
+#[cfg(target_arch = "arm")]
+mod atomic_impl {
+    use std::sync::atomic::Ordering;
+    use std::sync::RwLock;
+    use crate::Opstamp;
+    #[derive(Default)]
+    pub struct AtomicU64Wrapper(RwLock<u64>);
+    impl AtomicU64Wrapper {
+        pub fn new(first_opstamp: Opstamp) -> AtomicU64Wrapper {
+            AtomicU64Wrapper(RwLock::new(first_opstamp))
+        }
+        pub fn fetch_add(&self, incr: u64, _order: Ordering) -> u64 {
+            let mut lock = self.0.write().unwrap();
+            let previous_val = *lock;
+            *lock = previous_val + incr;
+            previous_val
+        }
+        pub fn revert(&self, val: u64, _order: Ordering) -> u64 {
+            let mut lock = self.0.write().unwrap();
+            *lock = val;
+            val
+        }
+    }
+}
+use self::atomic_impl::AtomicU64Wrapper;
 /// Stamper provides Opstamps, which is just an auto-increment id to label
 /// an operation.
 ///
 /// Cloning does not "fork" the stamp generation. The stamper actually wraps an `Arc`.
 #[derive(Clone, Default)]
-pub struct Stamper(Arc<AtomicU64>);
+pub struct Stamper(Arc<AtomicU64Wrapper>);
 impl Stamper {
     pub fn new(first_opstamp: Opstamp) -> Stamper {
-        Stamper(Arc::new(AtomicU64::new(first_opstamp)))
+        Stamper(Arc::new(AtomicU64Wrapper::new(first_opstamp)))
     }
     pub fn stamp(&self) -> Opstamp {
@@ -32,8 +92,7 @@ impl Stamper {
     /// Reverts the stamper to a given `Opstamp` value and returns it.
     pub fn revert(&self, to_opstamp: Opstamp) -> Opstamp {
-        self.0.store(to_opstamp, Ordering::SeqCst);
-        to_opstamp
+        self.0.revert(to_opstamp, Ordering::SeqCst)
     }
 }
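
Both implementations above satisfy the same small contract: clones share one counter, `stamp` hands out consecutive ids, and `revert` rolls the counter back. A std-only restatement mirroring the non-arm path (a standalone sketch, not the crate's actual type):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

#[derive(Clone, Default)]
struct Stamper(Arc<AtomicU64>);

impl Stamper {
    /// Returns the current stamp and advances the counter.
    fn stamp(&self) -> u64 {
        self.0.fetch_add(1, Ordering::SeqCst)
    }
    /// Rolls the counter back, e.g. after uncommitted operations are dropped.
    fn revert(&self, to: u64) -> u64 {
        self.0.store(to, Ordering::SeqCst);
        to
    }
}

fn main() {
    let stamper = Stamper::default();
    let clone = stamper.clone();
    assert_eq!(stamper.stamp(), 0);
    assert_eq!(clone.stamp(), 1); // clones share the same counter
    assert_eq!(stamper.revert(0), 0);
    assert_eq!(stamper.stamp(), 0);
}
```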

Some files were not shown because too many files have changed in this diff.