Compare commits


24 Commits

Author SHA1 Message Date
Paul Masurel
1a72844048 Added simple columnar CLI program 2022-12-23 22:25:45 +09:00
Paul Masurel
d91df6cc7e Added support for dynamic fast field.
See README for more information.
2022-12-23 22:24:40 +09:00
Paul Masurel
bc959006fa Ooops. Removing ordered_floats. 2022-12-22 19:50:34 +09:00
Paul Masurel
7385a8f80c Supporting PartialCmp in VectorColumn. (#1735)
* Supporting PartialCmp in VectorColumn.
* Apply suggestions from code review

Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>
2022-12-22 17:47:25 +09:00
Paul Masurel
13b89cba17 Adding inlines. 2022-12-22 14:29:41 +09:00
Hasnain Lakhani
f4804ce2f5 Adjust spelling of "returns" in docs for DisjunctionMaxQuery (#1733) 2022-12-22 14:04:07 +09:00
Paul Masurel
2a6d1eaf78 Added missing license. 2022-12-22 12:47:43 +09:00
Paul Masurel
540a9972bd Support for NotNaN in fast fields 2022-12-22 12:28:25 +09:00
Paul Masurel
bb48c3e488 Refactoring to prepare for the addition of dynamic fast field (#1730)
* Refactoring to prepare for the addition of dynamic fast field

- Exposing insert_key / insert_value
- Renamed SSTable::{Reader/Writer}-> SSTable::{ValueReader/ValueWriter}
- Added a generic Dictionary object in the sstable crate
- Removing the TermDictionary wrapper from tantivy, relying directly on
  an alias of the generic Dictionary object.
- dropped the use of byteorder in sstable.
- Stopped scanning / reading the entire dictionary when streaming a range.

* Added a benchmark for streaming sstable ranges.

* CR comments.

Rename deserialize_u64 -> deserialize_vint_u64

* Removed needless allocation, split serialize into serialize and clear.
2022-12-22 12:25:46 +09:00
Paul Masurel
3339a3ec05 Removed feature(quickwit) in tantivy-common. 2022-12-22 10:19:57 +09:00
Paul Masurel
f39165e1e7 Moving FileSlice to tantivy-common (#1729) 2022-12-21 16:35:11 +09:00
Paul Masurel
32cb1d22da Removed AsyncIoResult. (#1728) 2022-12-21 16:01:17 +09:00
Paul Masurel
4a6bf50e78 Clippy 2022-12-21 15:43:34 +09:00
PSeitz
2ac1cc2fc0 add sparse codec (#1723)
* add sparse codec

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* add the -1 u16 fix for metadata num_vals

* add dense block encoding to sparse codec

* add comment, refactor u16 reading

Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-12-20 15:30:33 +01:00
PSeitz
f9171a3981 fix clippy (#1725)
* fix clippy

* fix clippy fastfield codecs

* fix clippy bitpacker

* fix clippy common

* fix clippy stacker

* fix clippy sstable

* fmt
2022-12-20 07:30:06 +01:00
PSeitz
a2cf6a79b4 Sparse dense index (#1716)
* add dense codec

* benchmark fix and important optimisation

* move code to DenseIndexBlock

improve benchmark

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* extend benchmarks

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-12-13 07:50:09 +01:00
Paul Masurel
f6e87a5319 Cargo fmt 2022-12-13 12:30:40 +09:00
Paul Masurel
f9971e15fe Fixing unit test with sstable test. 2022-12-13 12:22:44 +09:00
PSeitz
3cdc8e7472 pass index info to serialize (#1719) 2022-12-13 04:20:31 +01:00
dependabot[bot]
fbb0f8b55d Update base64 requirement from 0.13.0 to 0.20.0 (#1720)
Updates the requirements on [base64](https://github.com/marshallpierce/rust-base64) to permit the latest version.
- [Release notes](https://github.com/marshallpierce/rust-base64/releases)
- [Changelog](https://github.com/marshallpierce/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/marshallpierce/rust-base64/compare/v0.13.0...v0.20.0)

---
updated-dependencies:
- dependency-name: base64
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-13 11:46:23 +09:00
Paul Masurel
136a8f4124 Isolating sstable and stacker in independent crates. (#1718)
Both crates will be used in the new (optional + dynamic) fastfield work.
2022-12-13 11:44:17 +09:00
PSeitz
5d4535de83 Changelog fix (#1717) 2022-12-12 14:28:42 +09:00
PSeitz
2c50b02eb3 Fix max bucket limit in histogram (#1703)
* Fix max bucket limit in histogram

The max bucket limit in histogram was broken: some code introduced temporary filtering of buckets, which then resulted in an incorrect increment of the bucket count.
The provided solution covers more scenarios, but some scenarios remain unhandled (see #1702).

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-12-12 04:40:15 +01:00
PSeitz
509adab79d Bump version (#1715)
* group workspace deps

* update cargo.toml

* revert tant version

* chore: Release
2022-12-12 04:39:43 +01:00
115 changed files with 4816 additions and 947 deletions

.gitignore

@@ -13,3 +13,5 @@ benchmark
.idea
trace.dat
cargo-timing*
columnar/columnar-cli/*.json
**/perf.data*

CHANGELOG.md

@@ -2,22 +2,21 @@ Tantivy 0.19
================================
#### Bugfixes
- Fix missing fieldnorms for u64, i64, f64, bool, bytes and date [#1620](https://github.com/quickwit-oss/tantivy/pull/1620) (@PSeitz)
- Fix interpolation overflow in linear interpolation fastfield codec [#1480](https://github.com/quickwit-oss/tantivy/pull/1480 (@PSeitz @fulmicoton)
- Fix interpolation overflow in linear interpolation fastfield codec [#1480](https://github.com/quickwit-oss/tantivy/pull/1480) (@PSeitz @fulmicoton)
#### Features/Improvements
- Add support for `IN` in queryparser , e.g. `field: IN [val1 val2 val3]` [#1683](https://github.com/quickwit-oss/tantivy/pull/1683) (@trinity-1686a)
- Skip score calculation, when no scoring is required [#1646](https://github.com/quickwit-oss/tantivy/pull/1646) (@PSeitz)
- Limit fast fields to u32 (`get_val(u32)`) [#1644](https://github.com/quickwit-oss/tantivy/pull/1644) (@PSeitz)
- Updated [Date Field Type](https://github.com/quickwit-oss/tantivy/pull/1396)
The `DateTime` type has been updated to hold timestamps with microseconds precision.
`DateOptions` and `DatePrecision` have been added to configure Date fields. The precision is used to hint on fast values compression. Otherwise, seconds precision is used everywhere else (i.e terms, indexing). (@evanxg852000)
- The `DateTime` type has been updated to hold timestamps with microseconds precision.
`DateOptions` and `DatePrecision` have been added to configure Date fields. The precision is used to hint on fast values compression. Otherwise, seconds precision is used everywhere else (i.e terms, indexing) [#1396](https://github.com/quickwit-oss/tantivy/pull/1396) (@evanxg852000)
- Add IP address field type [#1553](https://github.com/quickwit-oss/tantivy/pull/1553) (@PSeitz)
- Add boolean field type [#1382](https://github.com/quickwit-oss/tantivy/pull/1382) (@boraarslan)
- Remove Searcher pool and make `Searcher` cloneable. (@PSeitz)
- Validate settings on create [#1570](https://github.com/quickwit-oss/tantivy/pull/1570 (@PSeitz)
- Validate settings on create [#1570](https://github.com/quickwit-oss/tantivy/pull/1570) (@PSeitz)
- Detect and apply gcd on fastfield codecs [#1418](https://github.com/quickwit-oss/tantivy/pull/1418) (@PSeitz)
- Doc store
- use separate thread to compress block store [#1389](https://github.com/quickwit-oss/tantivy/pull/1389) [#1510](https://github.com/quickwit-oss/tantivy/pull/1510 (@PSeitz @fulmicoton)
- use separate thread to compress block store [#1389](https://github.com/quickwit-oss/tantivy/pull/1389) [#1510](https://github.com/quickwit-oss/tantivy/pull/1510) (@PSeitz @fulmicoton)
- Expose doc store cache size [#1403](https://github.com/quickwit-oss/tantivy/pull/1403) (@PSeitz)
- Enable compression levels for doc store [#1378](https://github.com/quickwit-oss/tantivy/pull/1378) (@PSeitz)
- Make block size configurable [#1374](https://github.com/quickwit-oss/tantivy/pull/1374) (@kryesh)

Cargo.toml

@@ -15,7 +15,7 @@ rust-version = "1.62"
[dependencies]
oneshot = "0.1.5"
base64 = "0.13.0"
base64 = "0.20.0"
byteorder = "1.4.3"
crc32fast = "1.3.2"
once_cell = "1.10.0"
@@ -36,7 +36,6 @@ fs2 = { version = "0.4.3", optional = true }
levenshtein_automata = "0.2.1"
uuid = { version = "1.0.0", features = ["v4", "serde"] }
crossbeam-channel = "0.5.4"
stable_deref_trait = "1.2.0"
rust-stemmers = "1.2.0"
downcast-rs = "1.2.0"
bitpacking = { version = "0.8.4", default-features = false, features = ["bitpacker4x"] }
@@ -53,15 +52,15 @@ lru = "0.7.5"
fastdivide = "0.4.0"
itertools = "0.10.3"
measure_time = "0.8.2"
ciborium = { version = "0.2", optional = true}
async-trait = "0.1.53"
arc-swap = "1.5.0"
sstable = { version="0.1", path="./sstable", package ="tantivy-sstable", optional = true }
stacker = { version="0.1", path="./stacker", package ="tantivy-stacker" }
tantivy-query-grammar = { version= "0.19.0", path="./query-grammar" }
tantivy-bitpacker = { version= "0.3", path="./bitpacker" }
common = { version= "0.4", path = "./common/", package = "tantivy-common" }
common = { version= "0.5", path = "./common/", package = "tantivy-common" }
fastfield_codecs = { version= "0.3", path="./fastfield_codecs", default-features = false }
ownedbytes = { version= "0.4", path="./ownedbytes" }
[target.'cfg(windows)'.dependencies]
winapi = "0.3.9"
@@ -104,10 +103,10 @@ zstd-compression = ["zstd"]
failpoints = ["fail/failpoints"]
unstable = [] # useful for benches.
quickwit = ["ciborium"]
quickwit = ["sstable"]
[workspace]
members = ["query-grammar", "bitpacker", "common", "fastfield_codecs", "ownedbytes"]
members = ["query-grammar", "bitpacker", "common", "fastfield_codecs", "ownedbytes", "stacker", "sstable", "columnar"]
# Following the "fail" crate best practises, we isolate
# tests that define specific behavior in fail check points

bitpacker/src/bitpacker.rs

@@ -25,15 +25,14 @@ impl BitPacker {
num_bits: u8,
output: &mut TWrite,
) -> io::Result<()> {
let val_u64 = val as u64;
let num_bits = num_bits as usize;
if self.mini_buffer_written + num_bits > 64 {
self.mini_buffer |= val_u64.wrapping_shl(self.mini_buffer_written as u32);
self.mini_buffer |= val.wrapping_shl(self.mini_buffer_written as u32);
output.write_all(self.mini_buffer.to_le_bytes().as_ref())?;
self.mini_buffer = val_u64.wrapping_shr((64 - self.mini_buffer_written) as u32);
self.mini_buffer = val.wrapping_shr((64 - self.mini_buffer_written) as u32);
self.mini_buffer_written = self.mini_buffer_written + num_bits - 64;
} else {
self.mini_buffer |= val_u64 << self.mini_buffer_written;
self.mini_buffer |= val << self.mini_buffer_written;
self.mini_buffer_written += num_bits;
if self.mini_buffer_written == 64 {
output.write_all(self.mini_buffer.to_le_bytes().as_ref())?;
@@ -92,17 +91,15 @@ impl BitUnpacker {
return 0u64;
}
let addr_in_bits = idx * self.num_bits as u32;
let addr = addr_in_bits >> 3;
let addr = (addr_in_bits >> 3) as usize;
let bit_shift = addr_in_bits & 7;
debug_assert!(
addr + 8 <= data.len() as u32,
addr + 8 <= data.len(),
"The fast field field should have been padded with 7 bytes."
);
let bytes: [u8; 8] = (&data[(addr as usize)..(addr as usize) + 8])
.try_into()
.unwrap();
let bytes: [u8; 8] = (&data[addr..addr + 8]).try_into().unwrap();
let val_unshifted_unmasked: u64 = u64::from_le_bytes(bytes);
let val_shifted = (val_unshifted_unmasked >> bit_shift) as u64;
let val_shifted = val_unshifted_unmasked >> bit_shift;
val_shifted & self.mask
}
}

bitpacker/src/blocked_bitpacker.rs

@@ -84,7 +84,7 @@ impl BlockedBitpacker {
#[inline]
pub fn add(&mut self, val: u64) {
self.buffer.push(val);
if self.buffer.len() == BLOCK_SIZE as usize {
if self.buffer.len() == BLOCK_SIZE {
self.flush();
}
}
@@ -126,8 +126,8 @@ impl BlockedBitpacker {
}
#[inline]
pub fn get(&self, idx: usize) -> u64 {
let metadata_pos = idx / BLOCK_SIZE as usize;
let pos_in_block = idx % BLOCK_SIZE as usize;
let metadata_pos = idx / BLOCK_SIZE;
let pos_in_block = idx % BLOCK_SIZE;
if let Some(metadata) = self.offset_and_bits.get(metadata_pos) {
let unpacked = BitUnpacker::new(metadata.num_bits()).get(
pos_in_block as u32,

bitpacker/src/lib.rs

@@ -1,6 +1,8 @@
mod bitpacker;
mod blocked_bitpacker;
use std::cmp::Ordering;
pub use crate::bitpacker::{BitPacker, BitUnpacker};
pub use crate::blocked_bitpacker::BlockedBitpacker;
@@ -37,44 +39,104 @@ pub fn compute_num_bits(n: u64) -> u8 {
}
}
/// Computes the (min, max) of an iterator of `PartialOrd` values.
///
/// For values implementing `Ord` (in a way consistent with their `PartialOrd` impl),
/// this function behaves as expected.
///
/// For values with a partial ordering, the behavior is non-trivial and may
/// depend on the order of the values.
/// For floats, however, it simply returns the same result as if NaN values
/// were skipped.
pub fn minmax<I, T>(mut vals: I) -> Option<(T, T)>
where
I: Iterator<Item = T>,
T: Copy + Ord,
T: Copy + PartialOrd,
{
if let Some(first_el) = vals.next() {
return Some(vals.fold((first_el, first_el), |(min_val, max_val), el| {
(min_val.min(el), max_val.max(el))
}));
let first_el = vals.find(|val| {
// We use this to make sure we skip all NaN values when
// working with a float type.
val.partial_cmp(val) == Some(Ordering::Equal)
})?;
let mut min_so_far: T = first_el;
let mut max_so_far: T = first_el;
for val in vals {
if val.partial_cmp(&min_so_far) == Some(Ordering::Less) {
min_so_far = val;
}
if val.partial_cmp(&max_so_far) == Some(Ordering::Greater) {
max_so_far = val;
}
}
None
Some((min_so_far, max_so_far))
}
#[test]
fn test_compute_num_bits() {
assert_eq!(compute_num_bits(1), 1u8);
assert_eq!(compute_num_bits(0), 0u8);
assert_eq!(compute_num_bits(2), 2u8);
assert_eq!(compute_num_bits(3), 2u8);
assert_eq!(compute_num_bits(4), 3u8);
assert_eq!(compute_num_bits(255), 8u8);
assert_eq!(compute_num_bits(256), 9u8);
assert_eq!(compute_num_bits(5_000_000_000), 33u8);
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_minmax_empty() {
let vals: Vec<u32> = vec![];
assert_eq!(minmax(vals.into_iter()), None);
}
#[test]
fn test_compute_num_bits() {
assert_eq!(compute_num_bits(1), 1u8);
assert_eq!(compute_num_bits(0), 0u8);
assert_eq!(compute_num_bits(2), 2u8);
assert_eq!(compute_num_bits(3), 2u8);
assert_eq!(compute_num_bits(4), 3u8);
assert_eq!(compute_num_bits(255), 8u8);
assert_eq!(compute_num_bits(256), 9u8);
assert_eq!(compute_num_bits(5_000_000_000), 33u8);
}
#[test]
fn test_minmax_one() {
assert_eq!(minmax(vec![1].into_iter()), Some((1, 1)));
}
#[test]
fn test_minmax_empty() {
let vals: Vec<u32> = vec![];
assert_eq!(minmax(vals.into_iter()), None);
}
#[test]
fn test_minmax_two() {
assert_eq!(minmax(vec![1, 2].into_iter()), Some((1, 2)));
assert_eq!(minmax(vec![2, 1].into_iter()), Some((1, 2)));
#[test]
fn test_minmax_one() {
assert_eq!(minmax(vec![1].into_iter()), Some((1, 1)));
}
#[test]
fn test_minmax_two() {
assert_eq!(minmax(vec![1, 2].into_iter()), Some((1, 2)));
assert_eq!(minmax(vec![2, 1].into_iter()), Some((1, 2)));
}
#[test]
fn test_minmax_nan() {
assert_eq!(
minmax(vec![f64::NAN, 1f64, 2f64].into_iter()),
Some((1f64, 2f64))
);
assert_eq!(
minmax(vec![2f64, f64::NAN, 1f64].into_iter()),
Some((1f64, 2f64))
);
assert_eq!(
minmax(vec![2f64, 1f64, f64::NAN].into_iter()),
Some((1f64, 2f64))
);
}
#[test]
fn test_minmax_inf() {
assert_eq!(
minmax(vec![f64::INFINITY, 1f64, 2f64].into_iter()),
Some((1f64, f64::INFINITY))
);
assert_eq!(
minmax(vec![-f64::INFINITY, 1f64, 2f64].into_iter()),
Some((-f64::INFINITY, 2f64))
);
assert_eq!(
minmax(vec![2f64, f64::INFINITY, 1f64].into_iter()),
Some((1f64, f64::INFINITY))
);
assert_eq!(
minmax(vec![2f64, 1f64, -f64::INFINITY].into_iter()),
Some((-f64::INFINITY, 2f64))
);
}
}

columnar/Cargo.toml

@@ -0,0 +1,19 @@
[package]
name = "tantivy-columnar"
version = "0.1.0"
edition = "2021"
license = "MIT"
[dependencies]
stacker = { path = "../stacker", package="tantivy-stacker"}
serde_json = "1"
thiserror = "1"
fnv = "1"
sstable = { path = "../sstable", package = "tantivy-sstable" }
zstd = "0.12"
common = { path = "../common", package = "tantivy-common" }
fastfield_codecs = { path = "../fastfield_codecs"}
itertools = "0.10"
[dev-dependencies]
proptest = "1"

columnar/README.md

@@ -0,0 +1,73 @@
# Columnar format
This crate describes the columnar format used in tantivy.
## Goals
This format is special in the following ways:
- it needs to be compact
- it does not need to be loaded in memory.
- it is designed to fit well with quickwit's strange constraint:
we need to be able to load columns rapidly.
- columns of several types can be associated with the same column name.
- it needs to support columns with different types `(str, u64, i64, f64)`
and different cardinalities `(required, optional, multivalued)`.
- columns, once loaded, offer cheap random access.
## Coercion rules
Users can create a columnar by appending rows to a writer.
Nothing prevents a user from recording values with different types for the same `column_key`.
In that case, `tantivy-columnar`'s behavior is as follows:
- Values that correspond to different JsonValue types are mapped to different columns. For instance, String values are treated independently from Number or Boolean values. `tantivy-columnar` will simply emit several columns associated with a given `column_name`.
- Only one column for a given JSON value type is emitted. If number values with different number types are recorded (e.g. u64, i64, f64), `tantivy-columnar` will pick the first type that can represent the set of appended values, with the following priority order (`i64`, `u64`, `f64`). `i64` is picked over `u64` as it is less likely to force a change of type: most use cases strictly requiring `u64` hit the restriction on 50% of the values (e.g. a 64-bit hash), while many use cases only rarely contain a negative value. This rule is sketched in the code below.
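The following is a minimal, self-contained sketch of that coercion rule (the names mirror this crate's `NumericalValue`/`NumericalType`; the actual logic lives in `CompatibleNumericalTypes` in `writer/column_writers.rs`):
```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum NumericalValue {
    I64(i64),
    U64(u64),
    F64(f64),
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum NumericalType {
    I64,
    U64,
    F64,
}

/// Picks the first type able to represent all appended values,
/// in the priority order (i64, u64, f64).
fn pick_column_type(values: &[NumericalValue]) -> NumericalType {
    let mut all_fit_i64 = true;
    let mut all_fit_u64 = true;
    for value in values {
        match *value {
            // A negative value rules out u64.
            NumericalValue::I64(v) => all_fit_u64 &= v >= 0,
            // A value >= i64::MAX rules out i64
            // (mirroring the `<` comparison used by the crate).
            NumericalValue::U64(v) => all_fit_i64 &= v < i64::MAX as u64,
            // Any float rules out both integer types.
            NumericalValue::F64(_) => {
                all_fit_i64 = false;
                all_fit_u64 = false;
            }
        }
    }
    if all_fit_i64 {
        NumericalType::I64
    } else if all_fit_u64 {
        NumericalType::U64
    } else {
        NumericalType::F64
    }
}

fn main() {
    use NumericalValue::*;
    assert_eq!(pick_column_type(&[I64(1), U64(1)]), NumericalType::I64);
    assert_eq!(pick_column_type(&[U64(u64::MAX)]), NumericalType::U64);
    assert_eq!(pick_column_type(&[U64(u64::MAX), I64(-1)]), NumericalType::F64);
}
```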
## Columnar format
Because this columnar format tries to avoid coercion,
there can be several columns (with different types) associated with a single `column_name`.
Each column is identified by a `column_key`.
The format of that key is:
`[column_name][ZERO_BYTE][column_type_header: u8]`
```
COLUMNAR:=
[COLUMNAR_DATA]
[COLUMNAR_INDEX]
[COLUMNAR_FOOTER];
# Columns are sorted by their column key.
COLUMNAR_DATA:=
[COLUMN]+;
COLUMN:=
COMPRESSED_COLUMN | NON_COMPRESSED_COLUMN;
# COLUMN_DATA is compressed when it exceeds a threshold of 100KB.
COMPRESSED_COLUMN := [b'1'][zstd(COLUMN_DATA)]
NON_COMPRESSED_COLUMN:= [b'0'][COLUMN_DATA]
COLUMNAR_INDEX := [RANGE_SSTABLE_BYTES]
COLUMNAR_FOOTER := [RANGE_SSTABLE_BYTES_LEN: 8 bytes little endian]
```
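As an illustration, building such a column key could look like the following (a hypothetical helper; in this changeset the header byte is actually produced by `ColumnTypeAndCardinality::to_code` in `column_type_header.rs`):
```rust
/// Builds the sstable key for a column:
/// `[column_name][ZERO_BYTE][column_type_header: u8]`.
/// The header byte packs the cardinality (2 high bits)
/// and the column type (6 low bits).
fn column_key(column_name: &str, type_header: u8) -> Vec<u8> {
    // Column names may not contain the zero byte,
    // so the separator is unambiguous.
    assert!(!column_name.as_bytes().contains(&0u8));
    let mut key = Vec::with_capacity(column_name.len() + 2);
    key.extend_from_slice(column_name.as_bytes());
    key.push(0u8); // ZERO_BYTE
    key.push(type_header); // column_type_header: u8
    key
}
```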
The columnar file starts with the actual column data, concatenated one after the other
and sorted by column key.
A quickwit/tantivy-style sstable then associates
`(column_name, column_cardinality, column_type)` to a range of bytes.
A column name may not contain the zero byte.
Listing all columns associated with a `column_name` can therefore
be done by listing all keys prefixed by
`[column_name][ZERO_BYTE]`.
The associated range of bytes refers to the column's byte range within the column data section.
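Putting it together, writing a columnar and listing the columns recorded under a given column name could look like this (a sketch based on the `ColumnarWriter`/`ColumnarReader` APIs added in this changeset; the column name and values are illustrative):
```rust
use columnar::{ColumnarReader, ColumnarWriter, NumericalValue};

fn main() -> std::io::Result<()> {
    let mut writer = ColumnarWriter::default();
    // Recording a string and a number under the same column name
    // yields two distinct columns sharing the `attributes.color\0` prefix.
    writer.record_str(0u32, "attributes.color", b"red");
    writer.record_numerical(1u32, "attributes.color", NumericalValue::U64(3u64));

    let mut buffer: Vec<u8> = Vec::new();
    writer.serialize(2, &mut buffer)?;

    let reader = ColumnarReader::open(buffer)?;
    // `read_columns` lists all dictionary keys prefixed by
    // `attributes.color\0`, one entry per column.
    for (type_and_cardinality, byte_range) in reader.read_columns("attributes.color")? {
        println!("{type_and_cardinality:?} -> bytes {byte_range:?}");
    }
    Ok(())
}
```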

columnar/columnar-cli/Cargo.toml

@@ -0,0 +1,17 @@
[package]
name = "tantivy-columnar-cli"
version = "0.1.0"
edition = "2021"
license = "MIT"
[dependencies]
columnar = {path="../", package="tantivy-columnar"}
serde_json = "1"
serde_json_borrow = {git="https://github.com/PSeitz/serde_json_borrow/"}
serde = "1"
[workspace]
members = []
[profile.release]
debug = true

columnar/columnar-cli/src/main.rs

@@ -0,0 +1,126 @@
use columnar::ColumnarWriter;
use columnar::NumericalValue;
use serde_json_borrow;
use std::fs::File;
use std::io;
use std::io::BufRead;
use std::io::BufReader;
use std::time::Instant;
#[derive(Default)]
struct JsonStack {
path: String,
stack: Vec<usize>,
}
impl JsonStack {
fn push(&mut self, seg: &str) {
let len = self.path.len();
self.stack.push(len);
self.path.push('.');
self.path.push_str(seg);
}
fn pop(&mut self) {
if let Some(len) = self.stack.pop() {
self.path.truncate(len);
}
}
fn path(&self) -> &str {
&self.path[1..]
}
}
fn append_json_to_columnar(
doc: u32,
json_value: &serde_json_borrow::Value,
columnar: &mut ColumnarWriter,
stack: &mut JsonStack,
) -> usize {
let mut count = 0;
match json_value {
serde_json_borrow::Value::Null => {}
serde_json_borrow::Value::Bool(val) => {
columnar.record_numerical(
doc,
stack.path(),
NumericalValue::from(if *val { 1u64 } else { 0u64 }),
);
count += 1;
}
serde_json_borrow::Value::Number(num) => {
let numerical_value: NumericalValue = if let Some(num_i64) = num.as_i64() {
num_i64.into()
} else if let Some(num_u64) = num.as_u64() {
num_u64.into()
} else if let Some(num_f64) = num.as_f64() {
num_f64.into()
} else {
panic!();
};
count += 1;
columnar.record_numerical(
doc,
stack.path(),
numerical_value,
);
}
serde_json_borrow::Value::Str(msg) => {
columnar.record_str(
doc,
stack.path(),
msg.as_bytes(),
);
count += 1;
},
serde_json_borrow::Value::Array(vals) => {
for val in vals {
count += append_json_to_columnar(doc, val, columnar, stack);
}
},
serde_json_borrow::Value::Object(json_map) => {
for (child_key, child_val) in json_map {
stack.push(child_key);
count += append_json_to_columnar(doc, child_val, columnar, stack);
stack.pop();
}
},
}
count
}
fn main() -> io::Result<()> {
let file = File::open("gh_small.json")?;
let mut reader = BufReader::new(file);
let mut line = String::with_capacity(100);
let mut columnar = columnar::ColumnarWriter::default();
let mut doc = 0;
let start = Instant::now();
let mut stack = JsonStack::default();
let mut total_count = 0;
loop {
line.clear();
let len = reader.read_line(&mut line)?;
if len == 0 {
break;
}
let Ok(json_value) = serde_json::from_str::<serde_json_borrow::Value>(&line) else { continue; };
total_count += append_json_to_columnar(doc, &json_value, &mut columnar, &mut stack);
doc += 1;
}
println!("value count {total_count}");
println!("record {:?}", start.elapsed());
let mut buffer = Vec::new();
columnar.serialize(doc, &mut buffer)?;
println!("num docs: {doc}, {:?}", start.elapsed());
println!("buffer len {} MB", buffer.len() / 1_000_000);
let columnar = columnar::ColumnarReader::open(buffer)?;
for (column_name, typ, offsets, num_bytes) in columnar.list_columns()? {
if num_bytes>1_000_000 {
println!("{column_name} {typ:?} {offsets:?} {}", num_bytes / 1_000_000);
}
}
println!("{} columns", columnar.num_columns());
Ok(())
}

columnar/src/column_type_header.rs

@@ -0,0 +1,188 @@
use crate::utils::{place_bits, select_bits};
use crate::value::NumericalType;
/// Enum describing the number of values that can exist per document
/// (or per row if you will).
#[derive(Clone, Copy, Hash, Default, Debug, PartialEq, Eq, PartialOrd, Ord)]
#[repr(u8)]
pub enum Cardinality {
/// All documents contain exactly one value.
#[default]
Required = 0,
/// All documents contain at most one value.
Optional = 1,
/// All documents may contain any number of values.
Multivalued = 2,
}
impl Cardinality {
pub(crate) fn to_code(self) -> u8 {
self as u8
}
pub(crate) fn try_from_code(code: u8) -> Option<Cardinality> {
match code {
0 => Some(Cardinality::Required),
1 => Some(Cardinality::Optional),
2 => Some(Cardinality::Multivalued),
_ => None,
}
}
}
#[derive(Hash, Eq, PartialEq, Debug, Clone, Copy)]
pub enum ColumnType {
Bytes,
Numerical(NumericalType),
Bool,
}
impl ColumnType {
/// Encoded over 6 bits.
pub(crate) fn to_code(self) -> u8 {
let high_type;
let low_code: u8;
match self {
ColumnType::Bytes => {
high_type = GeneralType::Str;
low_code = 0u8;
}
ColumnType::Numerical(numerical_type) => {
high_type = GeneralType::Numerical;
low_code = numerical_type.to_code();
}
ColumnType::Bool => {
high_type = GeneralType::Bool;
low_code = 0u8;
}
}
place_bits::<3, 6>(high_type.to_code()) | place_bits::<0, 3>(low_code)
}
pub(crate) fn try_from_code(code: u8) -> Option<ColumnType> {
if select_bits::<6, 8>(code) != 0u8 {
return None;
}
let high_code = select_bits::<3, 6>(code);
let low_code = select_bits::<0, 3>(code);
let high_type = GeneralType::try_from_code(high_code)?;
match high_type {
GeneralType::Bool => {
if low_code != 0u8 {
return None;
}
Some(ColumnType::Bool)
}
GeneralType::Str => {
if low_code != 0u8 {
return None;
}
Some(ColumnType::Bytes)
}
GeneralType::Numerical => {
let numerical_type = NumericalType::try_from_code(low_code)?;
Some(ColumnType::Numerical(numerical_type))
}
}
}
}
/// This corresponds to the JsonType.
#[derive(Copy, Clone, Ord, PartialOrd, Eq, PartialEq, Debug)]
#[repr(u8)]
pub(crate) enum GeneralType {
Bool = 0u8,
Str = 1u8,
Numerical = 2u8,
}
impl GeneralType {
pub fn to_code(self) -> u8 {
self as u8
}
pub fn try_from_code(code: u8) -> Option<Self> {
match code {
0u8 => Some(Self::Bool),
1u8 => Some(Self::Str),
2u8 => Some(Self::Numerical),
_ => None,
}
}
}
/// Represents the type and cardinality of a column.
/// This is encoded over one byte and added to a column key in the
/// columnar sstable.
///
/// The cardinality is encoded in the two highest bits.
/// The 6 lowest bits encode the column type.
#[derive(Eq, Hash, PartialEq, Debug, Copy, Clone)]
pub struct ColumnTypeAndCardinality {
pub cardinality: Cardinality,
pub typ: ColumnType,
}
impl ColumnTypeAndCardinality {
pub fn to_code(self) -> u8 {
place_bits::<6, 8>(self.cardinality.to_code()) | place_bits::<0, 6>(self.typ.to_code())
}
pub fn try_from_code(code: u8) -> Option<ColumnTypeAndCardinality> {
let typ_code = select_bits::<0, 6>(code);
let cardinality_code = select_bits::<6, 8>(code);
let cardinality = Cardinality::try_from_code(cardinality_code)?;
let typ = ColumnType::try_from_code(typ_code)?;
assert_eq!(typ.to_code(), typ_code);
Some(ColumnTypeAndCardinality { cardinality, typ })
}
}
#[cfg(test)]
mod tests {
use std::collections::HashSet;
use super::ColumnTypeAndCardinality;
use crate::column_type_header::{Cardinality, ColumnType};
#[test]
fn test_column_type_header_to_code() {
let mut column_type_header_set: HashSet<ColumnTypeAndCardinality> = HashSet::new();
for code in u8::MIN..=u8::MAX {
if let Some(column_type_header) = ColumnTypeAndCardinality::try_from_code(code) {
assert_eq!(column_type_header.to_code(), code);
assert!(column_type_header_set.insert(column_type_header));
}
}
assert_eq!(
column_type_header_set.len(),
3 /* cardinality */ *
(1 + 1 + 3) // column_types (str, bool, numerical x 3)
);
}
#[test]
fn test_column_type_to_code() {
let mut column_type_set: HashSet<ColumnType> = HashSet::new();
for code in u8::MIN..=u8::MAX {
if let Some(column_type) = ColumnType::try_from_code(code) {
assert_eq!(column_type.to_code(), code);
assert!(column_type_set.insert(column_type));
}
}
assert_eq!(column_type_set.len(), 2 + 3);
}
#[test]
fn test_cardinality_to_code() {
let mut num_cardinality = 0;
for code in u8::MIN..=u8::MAX {
let cardinality_opt = Cardinality::try_from_code(code);
if let Some(cardinality) = cardinality_opt {
assert_eq!(cardinality.to_code(), code);
num_cardinality += 1;
}
}
assert_eq!(num_cardinality, 3);
}
}

columnar/src/dictionary.rs

@@ -0,0 +1,84 @@
use std::io;
use fnv::FnvHashMap;
use sstable::SSTable;
pub(crate) struct IdMapping {
unordered_to_ord: Vec<OrderedId>,
}
impl IdMapping {
pub fn to_ord(&self, unordered: UnorderedId) -> OrderedId {
self.unordered_to_ord[unordered.0 as usize]
}
}
/// When we add values, we cannot know their ordered id yet.
/// For this reason, we temporarily assign them an `UnorderedId`
/// that will be mapped to an `OrderedId` upon serialization.
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq)]
pub struct UnorderedId(pub u32);
#[derive(Clone, Copy, Hash, PartialEq, Eq, Debug)]
pub struct OrderedId(pub u32);
/// `DictionaryBuilder` for dictionary encoding.
///
/// It stores the different terms encountered and assigns them a temporary value
/// we call an unordered id.
///
/// Upon serialization, we sort the terms and hence build an `UnorderedId -> Term ordinal`
/// mapping.
#[derive(Default)]
pub(crate) struct DictionaryBuilder {
dict: FnvHashMap<Vec<u8>, UnorderedId>,
}
impl DictionaryBuilder {
/// Get or allocate an unordered id.
/// (This ID is simply an auto-incremented id.)
pub fn get_or_allocate_id(&mut self, term: &[u8]) -> UnorderedId {
if let Some(term_id) = self.dict.get(term) {
return *term_id;
}
let new_id = UnorderedId(self.dict.len() as u32);
self.dict.insert(term.to_vec(), new_id);
new_id
}
/// Serializes the dictionary into an sstable, and returns the
/// `UnorderedId -> TermOrdinal` map.
pub fn serialize<'a, W: io::Write + 'a>(&self, wrt: &mut W) -> io::Result<IdMapping> {
let mut terms: Vec<(&[u8], UnorderedId)> =
self.dict.iter().map(|(k, v)| (k.as_slice(), *v)).collect();
terms.sort_unstable_by_key(|(key, _)| *key);
// TODO Remove the allocation.
let mut unordered_to_ord: Vec<OrderedId> = vec![OrderedId(0u32); terms.len()];
let mut sstable_builder = sstable::VoidSSTable::writer(wrt);
for (ord, (key, unordered_id)) in terms.into_iter().enumerate() {
let ordered_id = OrderedId(ord as u32);
sstable_builder.insert(key, &())?;
unordered_to_ord[unordered_id.0 as usize] = ordered_id;
}
sstable_builder.finish()?;
Ok(IdMapping { unordered_to_ord })
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_dictionary_builder() {
let mut dictionary_builder = DictionaryBuilder::default();
let hello_uid = dictionary_builder.get_or_allocate_id(b"hello");
let happy_uid = dictionary_builder.get_or_allocate_id(b"happy");
let tax_uid = dictionary_builder.get_or_allocate_id(b"tax");
let mut buffer = Vec::new();
let id_mapping = dictionary_builder.serialize(&mut buffer).unwrap();
assert_eq!(id_mapping.to_ord(hello_uid), OrderedId(1));
assert_eq!(id_mapping.to_ord(happy_uid), OrderedId(0));
assert_eq!(id_mapping.to_ord(tax_uid), OrderedId(2));
}
}

columnar/src/lib.rs

@@ -0,0 +1,86 @@
mod column_type_header;
mod dictionary;
mod reader;
pub(crate) mod utils;
mod value;
mod writer;
pub use column_type_header::Cardinality;
pub use reader::ColumnarReader;
pub use value::{NumericalType, NumericalValue};
pub use writer::ColumnarWriter;
pub type DocId = u32;
#[cfg(test)]
mod tests {
use std::ops::Range;
use common::file_slice::FileSlice;
use crate::column_type_header::{ColumnType, ColumnTypeAndCardinality};
use crate::reader::ColumnarReader;
use crate::value::NumericalValue;
use crate::{Cardinality, ColumnarWriter};
#[test]
fn test_dataframe_writer_bytes() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_str(1u32, "my_string", b"hello");
dataframe_writer.record_str(3u32, "my_string", b"helloeee");
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(5, &mut buffer).unwrap();
let columnar_fileslice = FileSlice::from(buffer);
let columnar = ColumnarReader::open(columnar_fileslice).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<(ColumnTypeAndCardinality, Range<u64>)> =
columnar.read_columns("my_string").unwrap();
assert_eq!(cols.len(), 1);
assert_eq!(cols[0].1, 0..159);
}
#[test]
fn test_dataframe_writer_bool() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_bool(1u32, "bool.value", false);
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(5, &mut buffer).unwrap();
let columnar_fileslice = FileSlice::from(buffer);
let columnar = ColumnarReader::open(columnar_fileslice).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<(ColumnTypeAndCardinality, Range<u64>)> =
columnar.read_columns("bool.value").unwrap();
assert_eq!(cols.len(), 1);
assert_eq!(
cols[0].0,
ColumnTypeAndCardinality {
cardinality: Cardinality::Optional,
typ: ColumnType::Bool
}
);
assert_eq!(cols[0].1, 0..22);
}
#[test]
fn test_dataframe_writer_numerical() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_numerical(1u32, "srical.value", NumericalValue::U64(12u64));
dataframe_writer.record_numerical(2u32, "srical.value", NumericalValue::U64(13u64));
dataframe_writer.record_numerical(4u32, "srical.value", NumericalValue::U64(15u64));
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(5, &mut buffer).unwrap();
let columnar_fileslice = FileSlice::from(buffer);
let columnar = ColumnarReader::open(columnar_fileslice).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<(ColumnTypeAndCardinality, Range<u64>)> =
columnar.read_columns("srical.value").unwrap();
assert_eq!(cols.len(), 1);
// Right now these 31 bytes are spent as follows
//
// - header 14 bytes
// - vals 8 //< due to padding? could have been 1byte?.
// - null footer 6 bytes
// - version footer 3 bytes // Should be file-wide
assert_eq!(cols[0].1, 0..32);
}
}

columnar/src/reader/mod.rs

@@ -0,0 +1,102 @@
use std::ops::Range;
use std::{io, mem};
use common::file_slice::FileSlice;
use common::BinarySerializable;
use sstable::{Dictionary, RangeSSTable};
use crate::column_type_header::ColumnTypeAndCardinality;
fn io_invalid_data(msg: String) -> io::Error {
io::Error::new(io::ErrorKind::InvalidData, msg)
}
/// The ColumnarReader makes it possible to access a set of columns
/// associated to field names.
pub struct ColumnarReader {
column_dictionary: Dictionary<RangeSSTable>,
column_data: FileSlice,
}
impl ColumnarReader {
/// Opens a new Columnar file.
pub fn open<F>(file_slice: F) -> io::Result<ColumnarReader>
where FileSlice: From<F> {
Self::open_inner(file_slice.into())
}
fn open_inner(file_slice: FileSlice) -> io::Result<ColumnarReader> {
let (file_slice_without_sstable_len, sstable_len_bytes) =
file_slice.split_from_end(mem::size_of::<u64>());
let mut sstable_len_bytes = sstable_len_bytes.read_bytes()?;
let sstable_len = u64::deserialize(&mut sstable_len_bytes)?;
let (column_data, sstable) =
file_slice_without_sstable_len.split_from_end(sstable_len as usize);
let column_dictionary = Dictionary::open(sstable)?;
Ok(ColumnarReader {
column_dictionary,
column_data,
})
}
// TODO fix ugly API
pub fn list_columns(
&self,
) -> io::Result<Vec<(String, ColumnTypeAndCardinality, Range<u64>, u64)>> {
let mut stream = self.column_dictionary.stream()?;
let mut results = Vec::new();
while stream.advance() {
let key_bytes: &[u8] = stream.key();
let column_code: u8 = key_bytes.last().cloned().unwrap();
let column_type_and_cardinality = ColumnTypeAndCardinality::try_from_code(column_code)
.ok_or_else(|| io_invalid_data(format!("Unknown column code `{column_code}`")))?;
let range = stream.value().clone();
let column_name = String::from_utf8_lossy(&key_bytes[..key_bytes.len() - 1]);
let range_len = range.end - range.start;
results.push((
column_name.to_string(),
column_type_and_cardinality,
range,
range_len,
));
}
Ok(results)
}
/// Get all columns for the given field_name.
// TODO fix ugly API
pub fn read_columns(
&self,
field_name: &str,
) -> io::Result<Vec<(ColumnTypeAndCardinality, Range<u64>)>> {
let mut start_key = field_name.to_string();
start_key.push('\0');
let mut end_key = field_name.to_string();
end_key.push(1u8 as char);
let mut stream = self
.column_dictionary
.range()
.ge(start_key.as_bytes())
.lt(end_key.as_bytes())
.into_stream()?;
let mut results = Vec::new();
while stream.advance() {
let key_bytes: &[u8] = stream.key();
if !key_bytes.starts_with(start_key.as_bytes()) {
return Err(io_invalid_data(format!("Invalid key found. {key_bytes:?}")));
}
let column_code: u8 = key_bytes.last().cloned().unwrap();
let column_type_and_cardinality = ColumnTypeAndCardinality::try_from_code(column_code)
.ok_or_else(|| io_invalid_data(format!("Unknown column code `{column_code}`")))?;
let range = stream.value().clone();
results.push((column_type_and_cardinality, range));
}
Ok(results)
}
/// Return the number of columns in the columnar.
pub fn num_columns(&self) -> usize {
self.column_dictionary.num_terms()
}
}

columnar/src/utils.rs

@@ -0,0 +1,76 @@
const fn compute_mask(num_bits: u8) -> u8 {
if num_bits == 8 {
u8::MAX
} else {
(1u8 << num_bits) - 1
}
}
#[inline(always)]
#[must_use]
pub(crate) fn select_bits<const START: u8, const END: u8>(code: u8) -> u8 {
assert!(START <= END);
assert!(END <= 8);
let num_bits: u8 = END - START;
let mask: u8 = compute_mask(num_bits);
(code >> START) & mask
}
#[inline(always)]
#[must_use]
pub(crate) fn place_bits<const START: u8, const END: u8>(code: u8) -> u8 {
assert!(START <= END);
assert!(END <= 8);
let num_bits: u8 = END - START;
let mask: u8 = compute_mask(num_bits);
assert!(code <= mask);
code << START
}
/// Pops the first byte off a slice of bytes.
#[inline(always)]
pub fn pop_first_byte(bytes: &mut &[u8]) -> Option<u8> {
if bytes.is_empty() {
return None;
}
let first_byte = bytes[0];
*bytes = &bytes[1..];
Some(first_byte)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_select_bits() {
assert_eq!(255u8, select_bits::<0, 8>(255u8));
assert_eq!(0u8, select_bits::<0, 0>(255u8));
assert_eq!(8u8, select_bits::<0, 4>(8u8));
assert_eq!(4u8, select_bits::<1, 4>(8u8));
assert_eq!(0u8, select_bits::<1, 3>(8u8));
}
#[test]
fn test_place_bits() {
assert_eq!(255u8, place_bits::<0, 8>(255u8));
assert_eq!(4u8, place_bits::<2, 3>(1u8));
assert_eq!(0u8, place_bits::<2, 2>(0u8));
}
#[test]
#[should_panic]
fn test_place_bits_overflows() {
let _ = place_bits::<1, 4>(8u8);
}
#[test]
fn test_pop_first_byte() {
let mut cursor: &[u8] = &b"abcd"[..];
assert_eq!(pop_first_byte(&mut cursor), Some(b'a'));
assert_eq!(pop_first_byte(&mut cursor), Some(b'b'));
assert_eq!(pop_first_byte(&mut cursor), Some(b'c'));
assert_eq!(pop_first_byte(&mut cursor), Some(b'd'));
assert_eq!(pop_first_byte(&mut cursor), None);
}
}

columnar/src/value.rs

@@ -0,0 +1,121 @@
#[derive(Copy, Clone, Debug, PartialEq)]
pub enum NumericalValue {
I64(i64),
U64(u64),
F64(f64),
}
impl From<u64> for NumericalValue {
fn from(val: u64) -> NumericalValue {
NumericalValue::U64(val)
}
}
impl From<i64> for NumericalValue {
fn from(val: i64) -> Self {
NumericalValue::I64(val)
}
}
impl From<f64> for NumericalValue {
fn from(val: f64) -> Self {
NumericalValue::F64(val)
}
}
impl NumericalValue {
pub fn numerical_type(&self) -> NumericalType {
match self {
NumericalValue::F64(_) => NumericalType::F64,
NumericalValue::I64(_) => NumericalType::I64,
NumericalValue::U64(_) => NumericalType::U64,
}
}
}
impl Eq for NumericalValue {}
#[derive(Clone, Copy, Debug, Default, Hash, Eq, PartialEq)]
#[repr(u8)]
pub enum NumericalType {
#[default]
I64 = 0,
U64 = 1,
F64 = 2,
}
impl NumericalType {
pub fn to_code(self) -> u8 {
self as u8
}
pub fn try_from_code(code: u8) -> Option<NumericalType> {
match code {
0 => Some(NumericalType::I64),
1 => Some(NumericalType::U64),
2 => Some(NumericalType::F64),
_ => None,
}
}
}
/// We voluntarily avoid using `Into` here to keep this
/// implementation quirk as private as possible.
///
/// This coercion trait actually panics if it is used
/// to convert a loose type to a stricter type.
///
/// The level of strictness is somewhat arbitrary:
/// - i64
/// - u64
/// - f64.
pub(crate) trait Coerce {
fn coerce(numerical_value: NumericalValue) -> Self;
}
impl Coerce for i64 {
fn coerce(value: NumericalValue) -> Self {
match value {
NumericalValue::I64(val) => val,
NumericalValue::U64(val) => val as i64,
NumericalValue::F64(_) => unreachable!(),
}
}
}
impl Coerce for u64 {
fn coerce(value: NumericalValue) -> Self {
match value {
NumericalValue::I64(val) => val as u64,
NumericalValue::U64(val) => val,
NumericalValue::F64(_) => unreachable!(),
}
}
}
impl Coerce for f64 {
fn coerce(value: NumericalValue) -> Self {
match value {
NumericalValue::I64(val) => val as f64,
NumericalValue::U64(val) => val as f64,
NumericalValue::F64(val) => val,
}
}
}
#[cfg(test)]
mod tests {
use super::NumericalType;
#[test]
fn test_numerical_type_code() {
let mut num_numerical_type = 0;
for code in u8::MIN..=u8::MAX {
if let Some(numerical_type) = NumericalType::try_from_code(code) {
assert_eq!(numerical_type.to_code(), code);
num_numerical_type += 1;
}
}
assert_eq!(num_numerical_type, 3);
}
}

columnar/src/writer/column_operation.rs

@@ -0,0 +1,311 @@
use crate::dictionary::UnorderedId;
use crate::utils::{place_bits, pop_first_byte, select_bits};
use crate::value::NumericalValue;
use crate::{DocId, NumericalType};
/// When we build a columnar dataframe, we first just group
/// all mutations per column, and append them to an append-only object.
///
/// We represent all of these operations as `ColumnOperation`s.
#[derive(Eq, PartialEq, Debug, Clone, Copy)]
pub(crate) enum ColumnOperation<T> {
NewDoc(DocId),
Value(T),
}
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
struct ColumnOperationHeader {
typ_code: u8,
len: u8,
}
impl ColumnOperationHeader {
fn to_code(self) -> u8 {
place_bits::<0, 4>(self.len) | place_bits::<4, 8>(self.typ_code)
}
fn from_code(code: u8) -> Self {
let len = select_bits::<0, 4>(code);
let typ_code = select_bits::<4, 8>(code);
ColumnOperationHeader { typ_code, len }
}
}
const NEW_DOC_CODE: u8 = 0u8;
const NEW_VALUE_CODE: u8 = 1u8;
impl<V: SymbolValue> ColumnOperation<V> {
pub fn serialize(self) -> impl AsRef<[u8]> {
let mut minibuf = MiniBuffer::default();
let header = match self {
ColumnOperation::NewDoc(new_doc) => {
let symbol_len = new_doc.serialize(&mut minibuf.bytes[1..]);
ColumnOperationHeader {
typ_code: NEW_DOC_CODE,
len: symbol_len,
}
}
ColumnOperation::Value(val) => {
let symbol_len = val.serialize(&mut minibuf.bytes[1..]);
ColumnOperationHeader {
typ_code: NEW_VALUE_CODE,
len: symbol_len,
}
}
};
minibuf.bytes[0] = header.to_code();
minibuf.len = 1 + header.len;
minibuf
}
/// Deserializes a column operation.
/// Returns None if the buffer is empty.
///
/// Panics if the payload is invalid.
pub fn deserialize(bytes: &mut &[u8]) -> Option<Self> {
let header_byte = pop_first_byte(bytes)?;
let column_op_header = ColumnOperationHeader::from_code(header_byte);
let symbol_bytes: &[u8];
(symbol_bytes, *bytes) = bytes.split_at(column_op_header.len as usize);
match column_op_header.typ_code {
NEW_DOC_CODE => {
let new_doc = u32::deserialize(symbol_bytes);
Some(ColumnOperation::NewDoc(new_doc))
}
NEW_VALUE_CODE => {
let value = V::deserialize(symbol_bytes);
Some(ColumnOperation::Value(value))
}
_ => {
panic!("Unknown code {}", column_op_header.typ_code);
}
}
}
}
impl<T> From<T> for ColumnOperation<T> {
fn from(value: T) -> Self {
ColumnOperation::Value(value)
}
}
#[allow(clippy::from_over_into)]
pub(crate) trait SymbolValue: Clone + Copy {
fn serialize(self, buffer: &mut [u8]) -> u8;
// Deserializes a value from the given bytes.
//
// `bytes` does not contain the header byte and has already been
// truncated to the length recorded in the header.
fn deserialize(bytes: &[u8]) -> Self;
}
impl SymbolValue for bool {
fn serialize(self, buffer: &mut [u8]) -> u8 {
buffer[0] = if self { 1u8 } else { 0u8 };
1u8
}
fn deserialize(bytes: &[u8]) -> Self {
bytes[0] == 1u8
}
}
#[derive(Default)]
struct MiniBuffer {
pub bytes: [u8; 10],
pub len: u8,
}
impl AsRef<[u8]> for MiniBuffer {
fn as_ref(&self) -> &[u8] {
&self.bytes[..self.len as usize]
}
}
impl SymbolValue for NumericalValue {
fn deserialize(mut bytes: &[u8]) -> Self {
let type_code = pop_first_byte(&mut bytes).unwrap();
let symbol_type = NumericalType::try_from_code(type_code).unwrap();
let mut octet: [u8; 8] = [0u8; 8];
octet[..bytes.len()].copy_from_slice(bytes);
match symbol_type {
NumericalType::U64 => {
let val: u64 = u64::from_le_bytes(octet);
NumericalValue::U64(val)
}
NumericalType::I64 => {
let encoded: u64 = u64::from_le_bytes(octet);
let val: i64 = decode_zig_zag(encoded);
NumericalValue::I64(val)
}
NumericalType::F64 => {
debug_assert_eq!(bytes.len(), 8);
let val: f64 = f64::from_le_bytes(octet);
NumericalValue::F64(val)
}
}
}
fn serialize(self, output: &mut [u8]) -> u8 {
match self {
NumericalValue::F64(val) => {
output[0] = NumericalType::F64 as u8;
output[1..9].copy_from_slice(&val.to_le_bytes());
9u8
}
NumericalValue::U64(val) => {
let len = compute_num_bytes_for_u64(val) as u8;
output[0] = NumericalType::U64 as u8;
output[1..9].copy_from_slice(&val.to_le_bytes());
len + 1u8
}
NumericalValue::I64(val) => {
let zig_zag_encoded = encode_zig_zag(val);
let len = compute_num_bytes_for_u64(zig_zag_encoded) as u8;
output[0] = NumericalType::I64 as u8;
output[1..9].copy_from_slice(&zig_zag_encoded.to_le_bytes());
len + 1u8
}
}
}
}
impl SymbolValue for u32 {
fn serialize(self, output: &mut [u8]) -> u8 {
let len = compute_num_bytes_for_u64(self as u64);
output[0..4].copy_from_slice(&self.to_le_bytes());
len as u8
}
fn deserialize(bytes: &[u8]) -> Self {
let mut quartet: [u8; 4] = [0u8; 4];
quartet[..bytes.len()].copy_from_slice(bytes);
u32::from_le_bytes(quartet)
}
}
impl SymbolValue for UnorderedId {
fn serialize(self, output: &mut [u8]) -> u8 {
self.0.serialize(output)
}
fn deserialize(bytes: &[u8]) -> Self {
UnorderedId(u32::deserialize(bytes))
}
}
fn compute_num_bytes_for_u64(val: u64) -> usize {
let msb = (64u32 - val.leading_zeros()) as usize;
(msb + 7) / 8
}
fn encode_zig_zag(n: i64) -> u64 {
((n << 1) ^ (n >> 63)) as u64
}
fn decode_zig_zag(n: u64) -> i64 {
((n >> 1) as i64) ^ (-((n & 1) as i64))
}
#[cfg(test)]
mod tests {
use super::*;
#[track_caller]
fn test_zig_zag_aux(val: i64) {
let encoded = super::encode_zig_zag(val);
assert_eq!(decode_zig_zag(encoded), val);
if let Some(abs_val) = val.checked_abs() {
let abs_val = abs_val as u64;
assert!(encoded <= abs_val * 2);
}
}
#[test]
fn test_zig_zag() {
assert_eq!(encode_zig_zag(0i64), 0u64);
assert_eq!(encode_zig_zag(-1i64), 1u64);
assert_eq!(encode_zig_zag(1i64), 2u64);
test_zig_zag_aux(0i64);
test_zig_zag_aux(i64::MIN);
test_zig_zag_aux(i64::MAX);
}
use proptest::prelude::any;
use proptest::proptest;
proptest! {
#[test]
fn test_proptest_zig_zag(val in any::<i64>()) {
test_zig_zag_aux(val);
}
}
#[test]
fn test_header_byte_serialization() {
for len in 0..=15 {
for typ_code in 0..=15 {
let header = ColumnOperationHeader { typ_code, len };
let header_code = header.to_code();
let serdeser_header = ColumnOperationHeader::from_code(header_code);
assert_eq!(header, serdeser_header);
}
}
}
#[track_caller]
fn ser_deser_symbol(column_op: ColumnOperation<NumericalValue>) {
let buf = column_op.serialize();
let mut buffer = buf.as_ref().to_vec();
buffer.extend_from_slice(b"234234");
let mut bytes = &buffer[..];
let serdeser_symbol = ColumnOperation::deserialize(&mut bytes).unwrap();
assert_eq!(bytes.len() + buf.as_ref().len() as usize, buffer.len());
assert_eq!(column_op, serdeser_symbol);
}
#[test]
fn test_compute_num_bytes_for_u64() {
assert_eq!(compute_num_bytes_for_u64(0), 0);
assert_eq!(compute_num_bytes_for_u64(1), 1);
assert_eq!(compute_num_bytes_for_u64(255), 1);
assert_eq!(compute_num_bytes_for_u64(256), 2);
assert_eq!(compute_num_bytes_for_u64((1 << 16) - 1), 2);
assert_eq!(compute_num_bytes_for_u64(1 << 16), 3);
}
#[test]
fn test_symbol_serialization() {
ser_deser_symbol(ColumnOperation::NewDoc(0));
ser_deser_symbol(ColumnOperation::NewDoc(3));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::I64(0i64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::I64(1i64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::U64(257u64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::I64(-257i64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::I64(i64::MIN)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::U64(0u64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::U64(u64::MIN)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::U64(u64::MAX)));
}
fn test_column_operation_unordered_aux(val: u32, expected_len: usize) {
let column_op = ColumnOperation::Value(UnorderedId(val));
let minibuf = column_op.serialize();
assert_eq!(minibuf.as_ref().len() as usize, expected_len);
let mut buf = minibuf.as_ref().to_vec();
buf.extend_from_slice(&[2, 2, 2, 2, 2, 2]);
let mut cursor = &buf[..];
let column_op_serdeser: ColumnOperation<UnorderedId> =
ColumnOperation::deserialize(&mut cursor).unwrap();
assert_eq!(column_op_serdeser, ColumnOperation::Value(UnorderedId(val)));
assert_eq!(cursor.len() + expected_len, buf.len());
}
#[test]
fn test_column_operation_unordered() {
test_column_operation_unordered_aux(300u32, 3);
test_column_operation_unordered_aux(1u32, 2);
test_column_operation_unordered_aux(0u32, 1);
}
}

columnar/src/writer/column_writers.rs

@@ -0,0 +1,270 @@
use std::cmp::Ordering;
use stacker::{ExpUnrolledLinkedList, MemoryArena};
use crate::dictionary::{DictionaryBuilder, UnorderedId};
use crate::writer::column_operation::{ColumnOperation, SymbolValue};
use crate::{Cardinality, DocId, NumericalType, NumericalValue};
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
#[repr(u8)]
enum DocumentStep {
SameDoc = 0,
NextDoc = 1,
SkippedDoc = 2,
}
#[inline(always)]
fn delta_with_last_doc(last_doc_opt: Option<u32>, doc: u32) -> DocumentStep {
let expected_next_doc = last_doc_opt.map(|last_doc| last_doc + 1).unwrap_or(0u32);
match doc.cmp(&expected_next_doc) {
Ordering::Less => DocumentStep::SameDoc,
Ordering::Equal => DocumentStep::NextDoc,
Ordering::Greater => DocumentStep::SkippedDoc,
}
}
#[derive(Copy, Clone, Default)]
pub struct ColumnWriter {
// Detected cardinality of the column so far.
cardinality: Cardinality,
// Last document inserted.
// None if no doc has been added yet.
last_doc_opt: Option<u32>,
// Buffer containing the serialized values.
values: ExpUnrolledLinkedList,
}
impl ColumnWriter {
/// Returns an iterator over the symbols that have been recorded
/// for the given column.
pub(crate) fn operation_iterator<'a, V: SymbolValue>(
&self,
arena: &MemoryArena,
buffer: &'a mut Vec<u8>,
) -> impl Iterator<Item = ColumnOperation<V>> + 'a {
buffer.clear();
self.values.read_to_end(arena, buffer);
let mut cursor: &[u8] = &buffer[..];
std::iter::from_fn(move || ColumnOperation::deserialize(&mut cursor))
}
/// Records a value for the given document.
///
/// This function will also update the cardinality of the column
/// if necessary.
pub(crate) fn record<S: SymbolValue>(&mut self, doc: DocId, value: S, arena: &mut MemoryArena) {
// Difference between `doc` and the last doc.
match delta_with_last_doc(self.last_doc_opt, doc) {
DocumentStep::SameDoc => {
// Same doc as the last one encountered: the column is multivalued.
self.cardinality = Cardinality::Multivalued;
}
DocumentStep::NextDoc => {
self.last_doc_opt = Some(doc);
self.write_symbol::<S>(ColumnOperation::NewDoc(doc), arena);
}
DocumentStep::SkippedDoc => {
self.cardinality = self.cardinality.max(Cardinality::Optional);
self.last_doc_opt = Some(doc);
self.write_symbol::<S>(ColumnOperation::NewDoc(doc), arena);
}
}
self.write_symbol(ColumnOperation::Value(value), arena);
}
// Get the cardinality.
// The overall number of docs in the column is necessary to
// deal with the case where all docs contain 1 value, except some documents
// at the end of the column.
pub fn get_cardinality(&self, num_docs: DocId) -> Cardinality {
match delta_with_last_doc(self.last_doc_opt, num_docs) {
DocumentStep::SameDoc | DocumentStep::NextDoc => self.cardinality,
DocumentStep::SkippedDoc => self.cardinality.max(Cardinality::Optional),
}
}
/// Appends a new symbol to the `ColumnWriter`.
fn write_symbol<V: SymbolValue>(
&mut self,
column_operation: ColumnOperation<V>,
arena: &mut MemoryArena,
) {
self.values
.writer(arena)
.extend_from_slice(column_operation.serialize().as_ref());
}
}
#[derive(Clone, Copy, Default)]
pub(crate) struct NumericalColumnWriter {
compatible_numerical_types: CompatibleNumericalTypes,
column_writer: ColumnWriter,
}
/// State used to store what types are still acceptable
/// after having seen a set of numerical values.
#[derive(Clone, Copy)]
pub(crate) struct CompatibleNumericalTypes {
all_values_within_i64_range: bool,
all_values_within_u64_range: bool,
// f64 is always acceptable.
}
impl Default for CompatibleNumericalTypes {
fn default() -> CompatibleNumericalTypes {
CompatibleNumericalTypes {
all_values_within_i64_range: true,
all_values_within_u64_range: true,
}
}
}
impl CompatibleNumericalTypes {
fn accept_value(&mut self, numerical_value: NumericalValue) {
match numerical_value {
NumericalValue::I64(val_i64) => {
let value_within_u64_range = val_i64 >= 0i64;
self.all_values_within_u64_range &= value_within_u64_range;
}
NumericalValue::U64(val_u64) => {
let value_within_i64_range = val_u64 < i64::MAX as u64;
self.all_values_within_i64_range &= value_within_i64_range;
}
NumericalValue::F64(_) => {
self.all_values_within_i64_range = false;
self.all_values_within_u64_range = false;
}
}
}
pub fn to_numerical_type(self) -> NumericalType {
if self.all_values_within_i64_range {
NumericalType::I64
} else if self.all_values_within_u64_range {
NumericalType::U64
} else {
NumericalType::F64
}
}
}
impl NumericalColumnWriter {
pub fn column_type_and_cardinality(&self, num_docs: DocId) -> (NumericalType, Cardinality) {
let numerical_type = self.compatible_numerical_types.to_numerical_type();
let cardinality = self.column_writer.get_cardinality(num_docs);
(numerical_type, cardinality)
}
pub fn record_numerical_value(
&mut self,
doc: DocId,
value: NumericalValue,
arena: &mut MemoryArena,
) {
self.compatible_numerical_types.accept_value(value);
self.column_writer.record(doc, value, arena);
}
pub fn operation_iterator<'a>(
self,
arena: &MemoryArena,
buffer: &'a mut Vec<u8>,
) -> impl Iterator<Item = ColumnOperation<NumericalValue>> + 'a {
self.column_writer.operation_iterator(arena, buffer)
}
}
#[derive(Copy, Clone, Default)]
pub struct StrColumnWriter {
pub(crate) dictionary_id: u32,
pub(crate) column_writer: ColumnWriter,
}
impl StrColumnWriter {
pub fn with_dictionary_id(dictionary_id: u32) -> StrColumnWriter {
StrColumnWriter {
dictionary_id,
column_writer: Default::default(),
}
}
pub(crate) fn record_bytes(
&mut self,
doc: DocId,
bytes: &[u8],
dictionaries: &mut [DictionaryBuilder],
arena: &mut MemoryArena,
) {
let unordered_id = dictionaries[self.dictionary_id as usize].get_or_allocate_id(bytes);
self.column_writer.record(doc, unordered_id, arena);
}
pub(crate) fn operation_iterator<'a>(
&self,
arena: &MemoryArena,
byte_buffer: &'a mut Vec<u8>,
) -> impl Iterator<Item = ColumnOperation<UnorderedId>> + 'a {
self.column_writer.operation_iterator(arena, byte_buffer)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_delta_with_last_doc() {
assert_eq!(delta_with_last_doc(None, 0u32), DocumentStep::NextDoc);
assert_eq!(delta_with_last_doc(None, 1u32), DocumentStep::SkippedDoc);
assert_eq!(delta_with_last_doc(None, 2u32), DocumentStep::SkippedDoc);
assert_eq!(delta_with_last_doc(Some(0u32), 0u32), DocumentStep::SameDoc);
assert_eq!(delta_with_last_doc(Some(1u32), 1u32), DocumentStep::SameDoc);
assert_eq!(delta_with_last_doc(Some(1u32), 2u32), DocumentStep::NextDoc);
assert_eq!(
delta_with_last_doc(Some(1u32), 3u32),
DocumentStep::SkippedDoc
);
assert_eq!(
delta_with_last_doc(Some(1u32), 4u32),
DocumentStep::SkippedDoc
);
}
#[track_caller]
fn test_column_writer_coercion_iter_aux(
values: impl Iterator<Item = NumericalValue>,
expected_numerical_type: NumericalType,
) {
let mut compatible_numerical_types = CompatibleNumericalTypes::default();
for value in values {
compatible_numerical_types.accept_value(value);
}
assert_eq!(
compatible_numerical_types.to_numerical_type(),
expected_numerical_type
);
}
#[track_caller]
fn test_column_writer_coercion_aux(
values: &[NumericalValue],
expected_numerical_type: NumericalType,
) {
test_column_writer_coercion_iter_aux(values.iter().copied(), expected_numerical_type);
test_column_writer_coercion_iter_aux(values.iter().rev().copied(), expected_numerical_type);
}
#[test]
fn test_column_writer_coercion() {
test_column_writer_coercion_aux(&[], NumericalType::I64);
test_column_writer_coercion_aux(&[1i64.into()], NumericalType::I64);
test_column_writer_coercion_aux(&[1u64.into()], NumericalType::I64);
// We don't detect exact integer at the moment. We could!
test_column_writer_coercion_aux(&[1f64.into()], NumericalType::F64);
test_column_writer_coercion_aux(&[u64::MAX.into()], NumericalType::U64);
test_column_writer_coercion_aux(&[(i64::MAX as u64).into()], NumericalType::U64);
test_column_writer_coercion_aux(&[(1u64 << 63).into()], NumericalType::U64);
test_column_writer_coercion_aux(&[1i64.into(), 1u64.into()], NumericalType::I64);
test_column_writer_coercion_aux(&[u64::MAX.into(), (-1i64).into()], NumericalType::F64);
}
}

columnar/src/writer/mod.rs

@@ -0,0 +1,526 @@
mod column_operation;
mod column_writers;
mod serializer;
mod value_index;
use std::io::{self, Write};
use column_operation::ColumnOperation;
use fastfield_codecs::serialize::ValueIndexInfo;
use fastfield_codecs::{Column, MonotonicallyMappableToU64, VecColumn};
use serializer::ColumnarSerializer;
use stacker::{Addr, ArenaHashMap, MemoryArena};
use crate::column_type_header::{ColumnType, ColumnTypeAndCardinality, GeneralType};
use crate::dictionary::{DictionaryBuilder, IdMapping, UnorderedId};
use crate::value::{Coerce, NumericalType, NumericalValue};
use crate::writer::column_writers::{ColumnWriter, NumericalColumnWriter, StrColumnWriter};
use crate::writer::value_index::{IndexBuilder, SpareIndexBuilders};
use crate::{Cardinality, DocId};
/// Threshold, in bytes, above which column data is compressed
/// using ZSTD.
const COLUMN_COMPRESSION_THRESHOLD: usize = 100_000;
/// This is a set of buffers that are only here
/// to limit the amount of allocation.
#[derive(Default)]
struct SpareBuffers {
value_index_builders: SpareIndexBuilders,
i64_values: Vec<i64>,
u64_values: Vec<u64>,
f64_values: Vec<f64>,
bool_values: Vec<bool>,
column_buffer: Vec<u8>,
}
pub struct ColumnarWriter {
numerical_field_hash_map: ArenaHashMap,
bool_field_hash_map: ArenaHashMap,
bytes_field_hash_map: ArenaHashMap,
arena: MemoryArena,
// Dictionaries used to store dictionary-encoded values.
dictionaries: Vec<DictionaryBuilder>,
buffers: SpareBuffers,
}
impl Default for ColumnarWriter {
fn default() -> Self {
ColumnarWriter {
numerical_field_hash_map: ArenaHashMap::new(10_000),
bool_field_hash_map: ArenaHashMap::new(10_000),
bytes_field_hash_map: ArenaHashMap::new(10_000),
dictionaries: Vec::new(),
arena: MemoryArena::default(),
buffers: SpareBuffers::default(),
}
}
}
impl ColumnarWriter {
pub fn record_numerical(
&mut self,
doc: DocId,
column_name: &str,
numerical_value: NumericalValue,
) {
assert!(
!column_name.as_bytes().contains(&0u8),
"key may not contain the 0 byte"
);
let (hash_map, arena) = (&mut self.numerical_field_hash_map, &mut self.arena);
hash_map.mutate_or_create(
column_name.as_bytes(),
|column_opt: Option<NumericalColumnWriter>| {
let mut column: NumericalColumnWriter = column_opt.unwrap_or_default();
column.record_numerical_value(doc, numerical_value, arena);
column
},
);
}
pub fn record_bool(&mut self, doc: DocId, column_name: &str, val: bool) {
assert!(
!column_name.as_bytes().contains(&0u8),
"key may not contain the 0 byte"
);
let (hash_map, arena) = (&mut self.bool_field_hash_map, &mut self.arena);
hash_map.mutate_or_create(
column_name.as_bytes(),
|column_opt: Option<ColumnWriter>| {
let mut column: ColumnWriter = column_opt.unwrap_or_default();
column.record(doc, val, arena);
column
},
);
}
pub fn record_str(&mut self, doc: DocId, column_name: &str, value: &[u8]) {
assert!(
!column_name.as_bytes().contains(&0u8),
"key may not contain the 0 byte"
);
let (hash_map, arena, dictionaries) = (
&mut self.bytes_field_hash_map,
&mut self.arena,
&mut self.dictionaries,
);
hash_map.mutate_or_create(
column_name.as_bytes(),
|column_opt: Option<StrColumnWriter>| {
let mut column: StrColumnWriter = column_opt.unwrap_or_else(|| {
let dictionary_id = dictionaries.len() as u32;
dictionaries.push(DictionaryBuilder::default());
StrColumnWriter::with_dictionary_id(dictionary_id)
});
column.record_bytes(doc, value, dictionaries, arena);
column
},
);
}
pub fn serialize(&mut self, num_docs: DocId, wrt: &mut dyn io::Write) -> io::Result<()> {
let mut serializer = ColumnarSerializer::new(wrt);
let mut field_columns: Vec<(&[u8], GeneralType, Addr)> = self
.numerical_field_hash_map
.iter()
.map(|(term, addr, _)| (term, GeneralType::Numerical, addr))
.collect();
field_columns.extend(
self.bytes_field_hash_map
.iter()
.map(|(term, addr, _)| (term, GeneralType::Str, addr)),
);
field_columns.extend(
self.bool_field_hash_map
.iter()
.map(|(term, addr, _)| (term, GeneralType::Bool, addr)),
);
field_columns.sort_unstable_by_key(|(column_name, col_type, _)| (*column_name, *col_type));
let (arena, buffers, dictionaries) = (&self.arena, &mut self.buffers, &self.dictionaries);
let mut symbol_byte_buffer: Vec<u8> = Vec::new();
for (column_name, bytes_or_numerical, addr) in field_columns {
match bytes_or_numerical {
GeneralType::Bool => {
let column_writer: ColumnWriter = self.bool_field_hash_map.read(addr);
let cardinality = column_writer.get_cardinality(num_docs);
let column_type_and_cardinality = ColumnTypeAndCardinality {
cardinality,
typ: ColumnType::Bool,
};
let column_serializer =
serializer.serialize_column(column_name, column_type_and_cardinality);
serialize_bool_column(
cardinality,
num_docs,
column_writer.operation_iterator(arena, &mut symbol_byte_buffer),
buffers,
column_serializer,
)?;
}
GeneralType::Str => {
let str_column_writer: StrColumnWriter = self.bytes_field_hash_map.read(addr);
let dictionary_builder =
&dictionaries[str_column_writer.dictionary_id as usize];
let cardinality = str_column_writer.column_writer.get_cardinality(num_docs);
let column_type_and_cardinality = ColumnTypeAndCardinality {
cardinality,
typ: ColumnType::Bytes,
};
let column_serializer =
serializer.serialize_column(column_name, column_type_and_cardinality);
serialize_bytes_column(
cardinality,
num_docs,
dictionary_builder,
str_column_writer.operation_iterator(arena, &mut symbol_byte_buffer),
buffers,
column_serializer,
)?;
}
GeneralType::Numerical => {
let numerical_column_writer: NumericalColumnWriter =
self.numerical_field_hash_map.read(addr);
let (numerical_type, cardinality) =
numerical_column_writer.column_type_and_cardinality(num_docs);
let column_type_and_cardinality = ColumnTypeAndCardinality {
cardinality,
typ: ColumnType::Numerical(numerical_type),
};
let column_serializer =
serializer.serialize_column(column_name, column_type_and_cardinality);
serialize_numerical_column(
cardinality,
num_docs,
numerical_type,
numerical_column_writer.operation_iterator(arena, &mut symbol_byte_buffer),
buffers,
column_serializer,
)?;
}
};
}
serializer.finalize()?;
Ok(())
}
}
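// A minimal illustrative sketch of the intended end-to-end use of
// `ColumnarWriter` (the column names here are made up):
#[test]
fn columnar_writer_end_to_end_sketch() -> io::Result<()> {
    let mut writer = ColumnarWriter::default();
    writer.record_numerical(0u32, "price", NumericalValue::from(10i64));
    writer.record_str(0u32, "color", b"red");
    writer.record_bool(1u32, "in_stock", true);
    let mut out: Vec<u8> = Vec::new();
    // Two documents were recorded: doc 0 and doc 1.
    writer.serialize(2u32, &mut out)?;
    assert!(!out.is_empty());
    Ok(())
}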
fn compress_and_write_column<W: io::Write>(column_bytes: &[u8], wrt: &mut W) -> io::Result<()> {
if column_bytes.len() >= COLUMN_COMPRESSION_THRESHOLD {
wrt.write_all(&[1])?;
let mut encoder = zstd::Encoder::new(wrt, 3)?;
encoder.write_all(column_bytes)?;
encoder.finish()?;
} else {
wrt.write_all(&[0])?;
wrt.write_all(column_bytes)?;
}
Ok(())
}
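// A hypothetical reader-side counterpart (`read_column_bytes` is not part of
// this changeset), assuming the one-byte flag layout written above:
// 1 = ZSTD-compressed payload, 0 = raw payload.
#[allow(dead_code)]
fn read_column_bytes(bytes: &[u8]) -> io::Result<Vec<u8>> {
    match bytes[0] {
        1 => zstd::decode_all(&bytes[1..]),
        0 => Ok(bytes[1..].to_vec()),
        flag => Err(io::Error::new(
            io::ErrorKind::InvalidData,
            format!("unknown compression flag: {flag}"),
        )),
    }
}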
fn serialize_bytes_column<W: io::Write>(
cardinality: Cardinality,
num_docs: DocId,
dictionary_builder: &DictionaryBuilder,
operation_it: impl Iterator<Item = ColumnOperation<UnorderedId>>,
buffers: &mut SpareBuffers,
mut wrt: W,
) -> io::Result<()> {
let SpareBuffers {
value_index_builders,
u64_values,
column_buffer,
..
} = buffers;
column_buffer.clear();
let id_mapping: IdMapping = dictionary_builder.serialize(column_buffer)?;
let dictionary_num_bytes: u32 = column_buffer.len() as u32;
let operation_iterator = operation_it.map(|symbol: ColumnOperation<UnorderedId>| {
// We map unordered ids to ordered ids.
match symbol {
ColumnOperation::Value(unordered_id) => {
let ordered_id = id_mapping.to_ord(unordered_id);
ColumnOperation::Value(ordered_id.0 as u64)
}
ColumnOperation::NewDoc(doc) => ColumnOperation::NewDoc(doc),
}
});
serialize_column(
operation_iterator,
cardinality,
num_docs,
value_index_builders,
u64_values,
column_buffer,
)?;
column_buffer.write_all(&dictionary_num_bytes.to_le_bytes()[..])?;
compress_and_write_column(column_buffer, &mut wrt)?;
Ok(())
}
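// Layout recap: after decompression, a bytes column written by
// `serialize_bytes_column` reads as
//   [serialized dictionary][column body][u32 LE: dictionary byte length].
// A hypothetical reader (`split_bytes_column` is not part of this changeset)
// could split it back as follows:
#[allow(dead_code)]
fn split_bytes_column(decompressed: &[u8]) -> (&[u8], &[u8]) {
    let total_len = decompressed.len();
    let dict_len =
        u32::from_le_bytes(decompressed[total_len - 4..].try_into().unwrap()) as usize;
    // Returns (dictionary bytes, column body).
    decompressed[..total_len - 4].split_at(dict_len)
}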
fn serialize_numerical_column<W: io::Write>(
cardinality: Cardinality,
num_docs: DocId,
numerical_type: NumericalType,
op_iterator: impl Iterator<Item = ColumnOperation<NumericalValue>>,
buffers: &mut SpareBuffers,
mut wrt: W,
) -> io::Result<()> {
let SpareBuffers {
value_index_builders,
u64_values,
i64_values,
f64_values,
column_buffer,
..
} = buffers;
column_buffer.clear();
match numerical_type {
NumericalType::I64 => {
serialize_column(
coerce_numerical_symbol::<i64>(op_iterator),
cardinality,
num_docs,
value_index_builders,
i64_values,
column_buffer,
)?;
}
NumericalType::U64 => {
serialize_column(
coerce_numerical_symbol::<u64>(op_iterator),
cardinality,
num_docs,
value_index_builders,
u64_values,
column_buffer,
)?;
}
NumericalType::F64 => {
serialize_column(
coerce_numerical_symbol::<f64>(op_iterator),
cardinality,
num_docs,
value_index_builders,
f64_values,
column_buffer,
)?;
}
};
compress_and_write_column(column_buffer, &mut wrt)?;
Ok(())
}
fn serialize_bool_column<W: io::Write>(
cardinality: Cardinality,
num_docs: DocId,
column_operations_it: impl Iterator<Item = ColumnOperation<bool>>,
buffers: &mut SpareBuffers,
mut wrt: W,
) -> io::Result<()> {
let SpareBuffers {
value_index_builders,
bool_values,
column_buffer,
..
} = buffers;
column_buffer.clear();
serialize_column(
column_operations_it,
cardinality,
num_docs,
value_index_builders,
bool_values,
column_buffer,
)?;
compress_and_write_column(column_buffer, &mut wrt)?;
Ok(())
}
fn serialize_column<
T: Copy + Default + std::fmt::Debug + Send + Sync + MonotonicallyMappableToU64 + PartialOrd,
>(
op_iterator: impl Iterator<Item = ColumnOperation<T>>,
cardinality: Cardinality,
num_docs: DocId,
value_index_builders: &mut SpareIndexBuilders,
values: &mut Vec<T>,
wrt: &mut Vec<u8>,
) -> io::Result<()>
where
for<'a> VecColumn<'a, T>: Column<T>,
{
values.clear();
match cardinality {
Cardinality::Required => {
consume_operation_iterator(
op_iterator,
value_index_builders.borrow_required_index_builder(),
values,
);
fastfield_codecs::serialize(
VecColumn::from(&values[..]),
wrt,
&fastfield_codecs::ALL_CODEC_TYPES[..],
)?;
}
Cardinality::Optional => {
let optional_index_builder = value_index_builders.borrow_optional_index_builder();
consume_operation_iterator(op_iterator, optional_index_builder, values);
let optional_index = optional_index_builder.finish(num_docs);
fastfield_codecs::serialize::serialize_new(
ValueIndexInfo::SingleValue(Box::new(optional_index)),
VecColumn::from(&values[..]),
wrt,
&fastfield_codecs::ALL_CODEC_TYPES[..],
)?;
}
Cardinality::Multivalued => {
let multivalued_index_builder = value_index_builders.borrow_multivalued_index_builder();
consume_operation_iterator(op_iterator, multivalued_index_builder, values);
let multivalued_index = multivalued_index_builder.finish(num_docs);
fastfield_codecs::serialize::serialize_new(
ValueIndexInfo::MultiValue(Box::new(multivalued_index)),
VecColumn::from(&values[..]),
wrt,
&fastfield_codecs::ALL_CODEC_TYPES[..],
)?;
}
}
Ok(())
}
fn coerce_numerical_symbol<T>(
operation_iterator: impl Iterator<Item = ColumnOperation<NumericalValue>>,
) -> impl Iterator<Item = ColumnOperation<T>>
where T: Coerce {
operation_iterator.map(|symbol| match symbol {
ColumnOperation::NewDoc(doc) => ColumnOperation::NewDoc(doc),
ColumnOperation::Value(numerical_value) => {
ColumnOperation::Value(Coerce::coerce(numerical_value))
}
})
}
fn consume_operation_iterator<T: std::fmt::Debug, TIndexBuilder: IndexBuilder>(
operation_iterator: impl Iterator<Item = ColumnOperation<T>>,
index_builder: &mut TIndexBuilder,
values: &mut Vec<T>,
) {
for symbol in operation_iterator {
match symbol {
ColumnOperation::NewDoc(doc) => {
index_builder.record_doc(doc);
}
ColumnOperation::Value(value) => {
index_builder.record_value();
values.push(value);
}
}
}
}
#[cfg(test)]
mod tests {
use column_operation::ColumnOperation;
use stacker::MemoryArena;
use super::*;
use crate::value::NumericalValue;
use crate::Cardinality;
#[test]
fn test_column_writer_required_simple() {
let mut arena = MemoryArena::default();
let mut column_writer = super::ColumnWriter::default();
column_writer.record(0u32, NumericalValue::from(14i64), &mut arena);
column_writer.record(1u32, NumericalValue::from(15i64), &mut arena);
column_writer.record(2u32, NumericalValue::from(-16i64), &mut arena);
assert_eq!(column_writer.get_cardinality(3), Cardinality::Required);
let mut buffer = Vec::new();
let symbols: Vec<ColumnOperation<NumericalValue>> = column_writer
.operation_iterator(&mut arena, &mut buffer)
.collect();
assert_eq!(symbols.len(), 6);
assert!(matches!(symbols[0], ColumnOperation::NewDoc(0u32)));
assert!(matches!(
symbols[1],
ColumnOperation::Value(NumericalValue::I64(14i64))
));
assert!(matches!(symbols[2], ColumnOperation::NewDoc(1u32)));
assert!(matches!(
symbols[3],
ColumnOperation::Value(NumericalValue::I64(15i64))
));
assert!(matches!(symbols[4], ColumnOperation::NewDoc(2u32)));
assert!(matches!(
symbols[5],
ColumnOperation::Value(NumericalValue::I64(-16i64))
));
}
#[test]
fn test_column_writer_optional_cardinality_missing_first() {
let mut arena = MemoryArena::default();
let mut column_writer = super::ColumnWriter::default();
column_writer.record(1u32, NumericalValue::from(15i64), &mut arena);
column_writer.record(2u32, NumericalValue::from(-16i64), &mut arena);
assert_eq!(column_writer.get_cardinality(3), Cardinality::Optional);
let mut buffer = Vec::new();
let symbols: Vec<ColumnOperation<NumericalValue>> = column_writer
.operation_iterator(&mut arena, &mut buffer)
.collect();
assert_eq!(symbols.len(), 4);
assert!(matches!(symbols[0], ColumnOperation::NewDoc(1u32)));
assert!(matches!(
symbols[1],
ColumnOperation::Value(NumericalValue::I64(15i64))
));
assert!(matches!(symbols[2], ColumnOperation::NewDoc(2u32)));
assert!(matches!(
symbols[3],
ColumnOperation::Value(NumericalValue::I64(-16i64))
));
}
#[test]
fn test_column_writer_optional_cardinality_missing_last() {
let mut arena = MemoryArena::default();
let mut column_writer = super::ColumnWriter::default();
column_writer.record(0u32, NumericalValue::from(15i64), &mut arena);
assert_eq!(column_writer.get_cardinality(2), Cardinality::Optional);
let mut buffer = Vec::new();
let symbols: Vec<ColumnOperation<NumericalValue>> = column_writer
.operation_iterator(&mut arena, &mut buffer)
.collect();
assert_eq!(symbols.len(), 2);
assert!(matches!(symbols[0], ColumnOperation::NewDoc(0u32)));
assert!(matches!(
symbols[1],
ColumnOperation::Value(NumericalValue::I64(15i64))
));
}
#[test]
fn test_column_writer_multivalued() {
let mut arena = MemoryArena::default();
let mut column_writer = super::ColumnWriter::default();
column_writer.record(0u32, NumericalValue::from(16i64), &mut arena);
column_writer.record(0u32, NumericalValue::from(17i64), &mut arena);
assert_eq!(column_writer.get_cardinality(1), Cardinality::Multivalued);
let mut buffer = Vec::new();
let symbols: Vec<ColumnOperation<NumericalValue>> = column_writer
.operation_iterator(&mut arena, &mut buffer)
.collect();
assert_eq!(symbols.len(), 3);
assert!(matches!(symbols[0], ColumnOperation::NewDoc(0u32)));
assert!(matches!(
symbols[1],
ColumnOperation::Value(NumericalValue::I64(16i64))
));
assert!(matches!(
symbols[2],
ColumnOperation::Value(NumericalValue::I64(17i64))
));
}
}

View File

@@ -0,0 +1,116 @@
use std::io;
use std::io::Write;
use common::CountingWriter;
use sstable::value::RangeValueWriter;
use sstable::RangeSSTable;
use crate::column_type_header::ColumnTypeAndCardinality;
pub struct ColumnarSerializer<W: io::Write> {
wrt: CountingWriter<W>,
sstable_range: sstable::Writer<Vec<u8>, RangeValueWriter>,
prepare_key_buffer: Vec<u8>,
}
/// Builds, in `buffer`, a key consisting of the concatenation of the column
/// name, a zero byte, and the column_type_and_cardinality code.
fn prepare_key<'a>(
key: &[u8],
column_type_cardinality: ColumnTypeAndCardinality,
buffer: &'a mut Vec<u8>,
) {
buffer.clear();
buffer.extend_from_slice(key);
buffer.push(0u8);
buffer.push(column_type_cardinality.to_code());
}
impl<W: io::Write> ColumnarSerializer<W> {
pub(crate) fn new(wrt: W) -> ColumnarSerializer<W> {
let sstable_range: sstable::Writer<Vec<u8>, RangeValueWriter> =
sstable::Dictionary::<RangeSSTable>::builder(Vec::with_capacity(100_000)).unwrap();
ColumnarSerializer {
wrt: CountingWriter::wrap(wrt),
sstable_range,
prepare_key_buffer: Vec::new(),
}
}
pub fn serialize_column<'a>(
&'a mut self,
column_name: &[u8],
column_type_cardinality: ColumnTypeAndCardinality,
) -> impl io::Write + 'a {
let start_offset = self.wrt.written_bytes();
prepare_key(
column_name,
column_type_cardinality,
&mut self.prepare_key_buffer,
);
ColumnSerializer {
columnar_serializer: self,
start_offset,
}
}
pub(crate) fn finalize(mut self) -> io::Result<()> {
let sstable_bytes: Vec<u8> = self.sstable_range.finish()?;
let sstable_num_bytes: u64 = sstable_bytes.len() as u64;
self.wrt.write_all(&sstable_bytes)?;
self.wrt.write_all(&sstable_num_bytes.to_le_bytes()[..])?;
Ok(())
}
}
struct ColumnSerializer<'a, W: io::Write> {
columnar_serializer: &'a mut ColumnarSerializer<W>,
start_offset: u64,
}
impl<'a, W: io::Write> Drop for ColumnSerializer<'a, W> {
fn drop(&mut self) {
let end_offset: u64 = self.columnar_serializer.wrt.written_bytes();
let byte_range = self.start_offset..end_offset;
self.columnar_serializer.sstable_range.insert_cannot_fail(
&self.columnar_serializer.prepare_key_buffer[..],
&byte_range,
);
self.columnar_serializer.prepare_key_buffer.clear();
}
}
impl<'a, W: io::Write> io::Write for ColumnSerializer<'a, W> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.columnar_serializer.wrt.write(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.columnar_serializer.wrt.flush()
}
fn write_all(&mut self, buf: &[u8]) -> io::Result<()> {
self.columnar_serializer.wrt.write_all(buf)
}
}
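// A minimal illustrative sketch of the call sequence (the column name is made
// up): `serialize_column` hands out a writer, dropping it registers the
// column's byte range under its prepared key, and `finalize` appends the
// range sstable plus its length footer.
#[test]
fn columnar_serializer_sketch() -> io::Result<()> {
    let mut out: Vec<u8> = Vec::new();
    let mut serializer = ColumnarSerializer::new(&mut out);
    {
        let mut column_wrt = serializer.serialize_column(
            b"price",
            ColumnTypeAndCardinality {
                typ: crate::column_type_header::ColumnType::Bool,
                cardinality: crate::Cardinality::Required,
            },
        );
        column_wrt.write_all(b"fake column payload")?;
    } // `ColumnSerializer::drop` runs here and records the byte range.
    serializer.finalize()?;
    assert!(!out.is_empty());
    Ok(())
}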
#[cfg(test)]
mod tests {
use super::*;
use crate::column_type_header::ColumnType;
use crate::Cardinality;
#[test]
fn test_prepare_key_bytes() {
let mut buffer: Vec<u8> = b"somegarbage".to_vec();
let column_type_and_cardinality = ColumnTypeAndCardinality {
typ: ColumnType::Bytes,
cardinality: Cardinality::Optional,
};
prepare_key(b"root\0child", column_type_and_cardinality, &mut buffer);
assert_eq!(buffer.len(), 12);
assert_eq!(&buffer[..10], b"root\0child");
assert_eq!(buffer[10], 0u8);
assert_eq!(buffer[11], column_type_and_cardinality.to_code());
}
}

View File

@@ -0,0 +1,218 @@
use fastfield_codecs::serialize::{MultiValueIndexInfo, SingleValueIndexInfo};
use crate::DocId;
/// The `IndexBuilder` interprets a sequence of
/// calls of the form:
/// (record_doc,record_value+)*
/// and can then serialize the results into an index.
///
/// It has different implementations depending on whether the
/// cardinality is required, optional, or multivalued.
pub(crate) trait IndexBuilder {
fn record_doc(&mut self, doc: DocId);
#[inline]
fn record_value(&mut self) {}
}
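// A minimal illustrative sketch of the call grammar above, using the
// `MultivaluedIndexBuilder` defined below: values doc0 -> [a, b],
// doc1 -> none, doc2 -> [c].
#[test]
fn index_builder_call_grammar_sketch() {
    let mut builder = MultivaluedIndexBuilder::default();
    builder.record_doc(0);
    builder.record_value(); // a
    builder.record_value(); // b
    builder.record_doc(2); // doc 1 has no values and is simply skipped
    builder.record_value(); // c
    let index = builder.finish(3);
    assert_eq!(index.num_docs(), 3);
    assert_eq!(index.num_vals(), 3);
}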
/// The RequiredIndexBuilder does nothing: when every document has exactly one value, no index is needed.
#[derive(Default)]
pub struct RequiredIndexBuilder;
impl IndexBuilder for RequiredIndexBuilder {
#[inline(always)]
fn record_doc(&mut self, _doc: DocId) {}
}
#[derive(Default)]
pub struct OptionalIndexBuilder {
docs: Vec<DocId>,
}
struct SingleValueArrayIndex<'a> {
docs: &'a [DocId],
num_docs: DocId,
}
impl<'a> SingleValueIndexInfo for SingleValueArrayIndex<'a> {
fn num_vals(&self) -> u32 {
self.num_docs as u32
}
fn num_non_nulls(&self) -> u32 {
self.docs.len() as u32
}
fn iter(&self) -> Box<dyn Iterator<Item = u32> + '_> {
Box::new(self.docs.iter().copied())
}
}
impl OptionalIndexBuilder {
pub fn finish(&mut self, num_docs: DocId) -> impl SingleValueIndexInfo + '_ {
debug_assert!(self
.docs
.last()
.copied()
.map(|last_doc| last_doc < num_docs)
.unwrap_or(true));
SingleValueArrayIndex {
docs: &self.docs[..],
num_docs,
}
}
fn reset(&mut self) {
self.docs.clear();
}
}
impl IndexBuilder for OptionalIndexBuilder {
#[inline(always)]
fn record_doc(&mut self, doc: DocId) {
debug_assert!(self
.docs
.last()
.copied()
.map(|prev_doc| doc > prev_doc)
.unwrap_or(true));
self.docs.push(doc);
}
}
#[derive(Default)]
pub struct MultivaluedIndexBuilder {
// TODO should we switch to `start_offset`?
end_values: Vec<DocId>,
total_num_vals_seen: u32,
}
pub struct MultivaluedValueArrayIndex<'a> {
end_offsets: &'a [DocId],
}
impl<'a> MultiValueIndexInfo for MultivaluedValueArrayIndex<'a> {
fn num_docs(&self) -> u32 {
self.end_offsets.len() as u32
}
fn num_vals(&self) -> u32 {
self.end_offsets.last().copied().unwrap_or(0u32)
}
fn iter(&self) -> Box<dyn Iterator<Item = u32> + '_> {
if self.end_offsets.is_empty() {
return Box::new(std::iter::empty());
}
let n = self.end_offsets.len();
Box::new(std::iter::once(0u32).chain(self.end_offsets[..n - 1].iter().copied()))
}
}
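// To make the offset convention concrete: `end_offsets` holds, per doc, the
// value index one past its last value; `iter` re-derives start offsets by
// prepending 0 and dropping the last entry.
#[test]
fn end_offsets_to_start_offsets_sketch() {
    // doc0 owns values [0..2), doc1 [2..2) (none), doc2 [2..3).
    let end_offsets = [2u32, 2, 3];
    let starts: Vec<u32> = std::iter::once(0u32)
        .chain(end_offsets[..end_offsets.len() - 1].iter().copied())
        .collect();
    assert_eq!(starts, vec![0, 2, 2]);
}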
impl MultivaluedIndexBuilder {
pub fn finish(&mut self, num_docs: DocId) -> impl MultiValueIndexInfo + '_ {
self.end_values
.resize(num_docs as usize, self.total_num_vals_seen);
MultivaluedValueArrayIndex {
end_offsets: &self.end_values[..],
}
}
fn reset(&mut self) {
self.end_values.clear();
self.total_num_vals_seen = 0;
}
}
impl IndexBuilder for MultivaluedIndexBuilder {
fn record_doc(&mut self, doc: DocId) {
self.end_values
.resize(doc as usize, self.total_num_vals_seen);
}
fn record_value(&mut self) {
self.total_num_vals_seen += 1;
}
}
/// The `SpareIndexBuilders` is there to avoid allocating a
/// new index builder for every single column.
#[derive(Default)]
pub struct SpareIndexBuilders {
required_index_builder: RequiredIndexBuilder,
optional_index_builder: OptionalIndexBuilder,
multivalued_index_builder: MultivaluedIndexBuilder,
}
impl SpareIndexBuilders {
pub fn borrow_required_index_builder(&mut self) -> &mut RequiredIndexBuilder {
&mut self.required_index_builder
}
pub fn borrow_optional_index_builder(&mut self) -> &mut OptionalIndexBuilder {
self.optional_index_builder.reset();
&mut self.optional_index_builder
}
pub fn borrow_multivalued_index_builder(&mut self) -> &mut MultivaluedIndexBuilder {
self.multivalued_index_builder.reset();
&mut self.multivalued_index_builder
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_optional_value_index_builder() {
let mut opt_value_index_builder = OptionalIndexBuilder::default();
opt_value_index_builder.record_doc(0u32);
opt_value_index_builder.record_value();
assert_eq!(
&opt_value_index_builder
.finish(1u32)
.iter()
.collect::<Vec<u32>>(),
&[0]
);
opt_value_index_builder.reset();
opt_value_index_builder.record_doc(1u32);
opt_value_index_builder.record_value();
assert_eq!(
&opt_value_index_builder
.finish(2u32)
.iter()
.collect::<Vec<u32>>(),
&[1]
);
}
#[test]
fn test_multivalued_value_index_builder() {
let mut multivalued_value_index_builder = MultivaluedIndexBuilder::default();
multivalued_value_index_builder.record_doc(1u32);
multivalued_value_index_builder.record_value();
multivalued_value_index_builder.record_value();
multivalued_value_index_builder.record_doc(2u32);
multivalued_value_index_builder.record_value();
assert_eq!(
multivalued_value_index_builder
.finish(4u32)
.iter()
.collect::<Vec<u32>>(),
vec![0, 0, 2, 3]
);
multivalued_value_index_builder.reset();
multivalued_value_index_builder.record_doc(2u32);
multivalued_value_index_builder.record_value();
multivalued_value_index_builder.record_value();
assert_eq!(
multivalued_value_index_builder
.finish(4u32)
.iter()
.collect::<Vec<u32>>(),
vec![0, 0, 0, 2]
);
}
}

View File

@@ -1,6 +1,6 @@
[package]
name = "tantivy-common"
version = "0.4.0"
version = "0.5.0"
authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"]
license = "MIT"
edition = "2021"
@@ -14,7 +14,8 @@ repository = "https://github.com/quickwit-oss/tantivy"
[dependencies]
byteorder = "1.4.3"
ownedbytes = { version= "0.4", path="../ownedbytes" }
ownedbytes = { version= "0.5", path="../ownedbytes" }
async-trait = "0.1"
[dev-dependencies]
proptest = "1.0.0"

View File

@@ -151,7 +151,7 @@ impl TinySet {
if self.is_empty() {
None
} else {
let lowest = self.0.trailing_zeros() as u32;
let lowest = self.0.trailing_zeros();
self.0 ^= TinySet::singleton(lowest).0;
Some(lowest)
}
@@ -421,7 +421,7 @@ mod tests {
bitset.serialize(&mut out).unwrap();
let bitset = ReadOnlyBitSet::open(OwnedBytes::new(out));
assert_eq!(bitset.len() as usize, i as usize);
assert_eq!(bitset.len(), i as usize);
}
}
@@ -432,7 +432,7 @@ mod tests {
bitset.serialize(&mut out).unwrap();
let bitset = ReadOnlyBitSet::open(OwnedBytes::new(out));
assert_eq!(bitset.len() as usize, 64);
assert_eq!(bitset.len(), 64);
}
#[test]

View File

@@ -1,19 +1,18 @@
use std::ops::{Deref, Range};
use std::ops::{Deref, Range, RangeBounds};
use std::sync::Arc;
use std::{fmt, io};
use async_trait::async_trait;
use common::HasLen;
use stable_deref_trait::StableDeref;
use ownedbytes::{OwnedBytes, StableDeref};
use crate::directory::OwnedBytes;
use crate::HasLen;
/// Objects that represent file sections in tantivy.
///
/// By contract, whatever happens to the directory file, as long as a FileHandle
/// is alive, the data associated with it cannot be altered or destroyed.
///
/// The underlying behavior is therefore specific to the [`Directory`](crate::Directory) that
/// The underlying behavior is therefore specific to the `Directory` that
/// created it. Despite its name, a [`FileSlice`] may or may not directly map to an actual file
/// on the filesystem.
@@ -24,13 +23,12 @@ pub trait FileHandle: 'static + Send + Sync + HasLen + fmt::Debug {
/// This method may panic if the range requested is invalid.
fn read_bytes(&self, range: Range<usize>) -> io::Result<OwnedBytes>;
#[cfg(feature = "quickwit")]
#[doc(hidden)]
async fn read_bytes_async(
&self,
_byte_range: Range<usize>,
) -> crate::AsyncIoResult<OwnedBytes> {
Err(crate::error::AsyncIoError::AsyncUnsupported)
async fn read_bytes_async(&self, _byte_range: Range<usize>) -> io::Result<OwnedBytes> {
Err(io::Error::new(
io::ErrorKind::Unsupported,
"Async read is not supported.",
))
}
}
@@ -41,8 +39,7 @@ impl FileHandle for &'static [u8] {
Ok(OwnedBytes::new(bytes))
}
#[cfg(feature = "quickwit")]
async fn read_bytes_async(&self, byte_range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
async fn read_bytes_async(&self, byte_range: Range<usize>) -> io::Result<OwnedBytes> {
Ok(self.read_bytes(byte_range)?)
}
}
@@ -70,6 +67,34 @@ impl fmt::Debug for FileSlice {
}
}
/// Takes a `Range` and a `RangeBounds` object, and returns
/// the `Range` that corresponds to the relative application of the
/// `RangeBounds` object to the original `Range`.
///
/// For instance, `combine_ranges(2..11, 5..=7)` returns `7..10`:
/// the sub-range that starts at the 5th element of `2..11`
/// and ends at its 7th element included (absolute positions 7 through 9).
///
/// This function panics if the resulting range would fall outside
/// the bounds of the original range.
fn combine_ranges<R: RangeBounds<usize>>(orig_range: Range<usize>, rel_range: R) -> Range<usize> {
let start: usize = orig_range.start
+ match rel_range.start_bound().cloned() {
std::ops::Bound::Included(rel_start) => rel_start,
std::ops::Bound::Excluded(rel_start) => rel_start + 1,
std::ops::Bound::Unbounded => 0,
};
assert!(start <= orig_range.end);
let end: usize = match rel_range.end_bound().cloned() {
std::ops::Bound::Included(rel_end) => orig_range.start + rel_end + 1,
std::ops::Bound::Excluded(rel_end) => orig_range.start + rel_end,
std::ops::Bound::Unbounded => orig_range.end,
};
assert!(end >= start);
assert!(end <= orig_range.end);
start..end
}
impl FileSlice {
/// Wraps a FileHandle.
pub fn new(file_handle: Arc<dyn FileHandle>) -> Self {
@@ -93,11 +118,11 @@ impl FileSlice {
///
/// Panics if `byte_range.end` exceeds the filesize.
#[must_use]
pub fn slice(&self, byte_range: Range<usize>) -> FileSlice {
assert!(byte_range.end <= self.len());
#[inline]
pub fn slice<R: RangeBounds<usize>>(&self, byte_range: R) -> FileSlice {
FileSlice {
data: self.data.clone(),
range: self.range.start + byte_range.start..self.range.start + byte_range.end,
range: combine_ranges(self.range.clone(), byte_range),
}
}
@@ -117,9 +142,8 @@ impl FileSlice {
self.data.read_bytes(self.range.clone())
}
#[cfg(feature = "quickwit")]
#[doc(hidden)]
pub async fn read_bytes_async(&self) -> crate::AsyncIoResult<OwnedBytes> {
pub async fn read_bytes_async(&self) -> io::Result<OwnedBytes> {
self.data.read_bytes_async(self.range.clone()).await
}
@@ -137,12 +161,8 @@ impl FileSlice {
.read_bytes(self.range.start + range.start..self.range.start + range.end)
}
#[cfg(feature = "quickwit")]
#[doc(hidden)]
pub async fn read_bytes_slice_async(
&self,
byte_range: Range<usize>,
) -> crate::AsyncIoResult<OwnedBytes> {
pub async fn read_bytes_slice_async(&self, byte_range: Range<usize>) -> io::Result<OwnedBytes> {
assert!(
self.range.start + byte_range.end <= self.range.end,
"`to` exceeds the fileslice length"
@@ -204,8 +224,7 @@ impl FileHandle for FileSlice {
self.read_bytes_slice(range)
}
#[cfg(feature = "quickwit")]
async fn read_bytes_async(&self, byte_range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
async fn read_bytes_async(&self, byte_range: Range<usize>) -> io::Result<OwnedBytes> {
self.read_bytes_slice_async(byte_range).await
}
}
@@ -222,21 +241,20 @@ impl FileHandle for OwnedBytes {
Ok(self.slice(range))
}
#[cfg(feature = "quickwit")]
async fn read_bytes_async(&self, range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
let bytes = self.read_bytes(range)?;
Ok(bytes)
async fn read_bytes_async(&self, range: Range<usize>) -> io::Result<OwnedBytes> {
self.read_bytes(range)
}
}
#[cfg(test)]
mod tests {
use std::io;
use std::ops::Bound;
use std::sync::Arc;
use common::HasLen;
use super::{FileHandle, FileSlice};
use crate::file_slice::combine_ranges;
use crate::HasLen;
#[test]
fn test_file_slice() -> io::Result<()> {
@@ -307,4 +325,23 @@ mod tests {
b"bcd"
);
}
#[test]
fn test_combine_range() {
assert_eq!(combine_ranges(1..3, 0..1), 1..2);
assert_eq!(combine_ranges(1..3, 1..), 2..3);
assert_eq!(combine_ranges(1..4, ..2), 1..3);
assert_eq!(combine_ranges(3..10, 2..5), 5..8);
assert_eq!(combine_ranges(2..11, 5..=7), 7..10);
assert_eq!(
combine_ranges(2..11, (Bound::Excluded(5), Bound::Unbounded)),
8..11
);
}
#[test]
#[should_panic]
fn test_combine_range_panics() {
let _ = combine_ranges(3..5, 1..4);
}
}
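// A minimal illustrative sketch: the `RangeBounds`-based `slice` accepts
// open-ended and inclusive ranges, and `OwnedBytes` implements `FileHandle`,
// so it can back a `FileSlice` directly.
#[test]
fn file_slice_range_bounds_sketch() -> io::Result<()> {
    let slice = FileSlice::new(Arc::new(OwnedBytes::new(b"abcdef".to_vec())));
    assert_eq!(slice.slice(2..).read_bytes()?.as_slice(), b"cdef");
    assert_eq!(slice.slice(..=2).read_bytes()?.as_slice(), b"abc");
    Ok(())
}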

View File

@@ -5,11 +5,12 @@ use std::ops::Deref;
pub use byteorder::LittleEndian as Endianness;
mod bitset;
pub mod file_slice;
mod serialize;
mod vint;
mod writer;
pub use bitset::*;
pub use ownedbytes::{OwnedBytes, StableDeref};
pub use serialize::{BinarySerializable, DeserializeFrom, FixedSize};
pub use vint::{
deserialize_vint_u128, read_u32_vint, read_u32_vint_no_advance, serialize_vint_u128,

View File

@@ -12,9 +12,8 @@ repository = "https://github.com/quickwit-oss/tantivy"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
common = { version = "0.4", path = "../common/", package = "tantivy-common" }
common = { version = "0.5", path = "../common/", package = "tantivy-common" }
tantivy-bitpacker = { version= "0.3", path = "../bitpacker/" }
ownedbytes = { version = "0.4.0", path = "../ownedbytes" }
prettytable-rs = {version="0.9.0", optional= true}
rand = {version="0.8.3", optional= true}
fastdivide = "0.4"

View File

@@ -7,8 +7,8 @@ mod tests {
use std::iter;
use std::sync::Arc;
use common::OwnedBytes;
use fastfield_codecs::*;
use ownedbytes::OwnedBytes;
use rand::prelude::*;
use test::Bencher;

View File

@@ -1,6 +1,6 @@
use std::io::{self, Write};
use ownedbytes::OwnedBytes;
use common::OwnedBytes;
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use crate::serialize::NormalizedHeader;

View File

@@ -1,8 +1,7 @@
use std::sync::Arc;
use std::{io, iter};
use common::{BinarySerializable, CountingWriter, DeserializeFrom};
use ownedbytes::OwnedBytes;
use common::{BinarySerializable, CountingWriter, DeserializeFrom, OwnedBytes};
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use crate::line::Line;
@@ -47,7 +46,7 @@ impl FastFieldCodec for BlockwiseLinearCodec {
type Reader = BlockwiseLinearReader;
fn open_from_bytes(
bytes: ownedbytes::OwnedBytes,
bytes: common::OwnedBytes,
normalized_header: NormalizedHeader,
) -> io::Result<Self::Reader> {
let footer_len: u32 = (&bytes[bytes.len() - 4..]).deserialize()?;
@@ -75,7 +74,7 @@ impl FastFieldCodec for BlockwiseLinearCodec {
if column.num_vals() < 10 * CHUNK_SIZE as u32 {
return None;
}
let mut first_chunk: Vec<u64> = column.iter().take(CHUNK_SIZE as usize).collect();
let mut first_chunk: Vec<u64> = column.iter().take(CHUNK_SIZE).collect();
let line = Line::train(&VecColumn::from(&first_chunk));
for (i, buffer_val) in first_chunk.iter_mut().enumerate() {
let interpolated_val = line.eval(i as u32);
@@ -171,15 +170,18 @@ impl Column for BlockwiseLinearReader {
interpoled_val.wrapping_add(bitpacked_diff)
}
#[inline(always)]
fn min_value(&self) -> u64 {
// The BlockwiseLinearReader assumes a normalized vector.
0u64
}
#[inline(always)]
fn max_value(&self) -> u64 {
self.normalized_header.max_value
}
#[inline(always)]
fn num_vals(&self) -> u32 {
self.normalized_header.num_vals
}

View File

@@ -135,7 +135,7 @@ impl<'a, T: Copy + PartialOrd + Send + Sync> Column<T> for VecColumn<'a, T> {
}
}
impl<'a, T: Copy + Ord + Default, V> From<&'a V> for VecColumn<'a, T>
impl<'a, T: Copy + PartialOrd + Default, V> From<&'a V> for VecColumn<'a, T>
where V: AsRef<[T]> + ?Sized
{
fn from(values: &'a V) -> Self {

View File

@@ -208,7 +208,7 @@ impl CompactSpaceBuilder {
};
let covered_range_len = range_mapping.range_length();
ranges_mapping.push(range_mapping);
compact_start += covered_range_len as u64;
compact_start += covered_range_len;
}
// println!("num ranges {}", ranges_mapping.len());
CompactSpace { ranges_mapping }

View File

@@ -17,8 +17,7 @@ use std::{
ops::{Range, RangeInclusive},
};
use common::{BinarySerializable, CountingWriter, VInt, VIntU128};
use ownedbytes::OwnedBytes;
use common::{BinarySerializable, CountingWriter, OwnedBytes, VInt, VIntU128};
use tantivy_bitpacker::{self, BitPacker, BitUnpacker};
use crate::compact_space::build_compact_space::get_compact_space;
@@ -97,7 +96,7 @@ impl BinarySerializable for CompactSpace {
};
let range_length = range_mapping.range_length();
ranges_mapping.push(range_mapping);
compact_start += range_length as u64;
compact_start += range_length;
}
Ok(Self { ranges_mapping })
@@ -407,10 +406,10 @@ impl CompactSpaceDecompressor {
let idx2 = idx + 1;
let idx3 = idx + 2;
let idx4 = idx + 3;
let val1 = get_val(idx1 as u32);
let val2 = get_val(idx2 as u32);
let val3 = get_val(idx3 as u32);
let val4 = get_val(idx4 as u32);
let val1 = get_val(idx1);
let val2 = get_val(idx2);
let val3 = get_val(idx3);
let val4 = get_val(idx4);
push_if_in_range(idx1, val1);
push_if_in_range(idx2, val2);
push_if_in_range(idx3, val3);
@@ -419,14 +418,13 @@ impl CompactSpaceDecompressor {
// handle rest
for idx in cutoff..position_range.end {
push_if_in_range(idx, get_val(idx as u32));
push_if_in_range(idx, get_val(idx));
}
}
#[inline]
fn iter_compact(&self) -> impl Iterator<Item = u64> + '_ {
(0..self.params.num_vals)
.map(move |idx| self.params.bit_unpacker.get(idx, &self.data) as u64)
(0..self.params.num_vals).map(move |idx| self.params.bit_unpacker.get(idx, &self.data))
}
#[inline]
@@ -569,7 +567,7 @@ mod tests {
let decomp = CompactSpaceDecompressor::open(data).unwrap();
let complete_range = 0..vals.len() as u32;
for (pos, val) in vals.iter().enumerate() {
let val = *val as u128;
let val = *val;
let pos = pos as u32;
let mut positions = Vec::new();
decomp.get_positions_for_value_range(val..=val, pos..pos + 1, &mut positions);
@@ -666,7 +664,7 @@ mod tests {
get_positions_for_value_range_helper(
&decomp,
4_000_211_221u128..=5_000_000_000u128,
complete_range.clone()
complete_range
),
vec![6, 7]
);
@@ -703,7 +701,7 @@ mod tests {
vec![0]
);
assert_eq!(
get_positions_for_value_range_helper(&decomp, 0..=105, complete_range.clone()),
get_positions_for_value_range_helper(&decomp, 0..=105, complete_range),
vec![0]
);
}
@@ -756,11 +754,7 @@ mod tests {
);
assert_eq!(
get_positions_for_value_range_helper(
&*decomp,
1_000_000..=1_000_000,
complete_range.clone()
),
get_positions_for_value_range_helper(&*decomp, 1_000_000..=1_000_000, complete_range),
vec![11]
);
}

View File

@@ -1,7 +1,6 @@
use std::io;
use common::BinarySerializable;
use ownedbytes::OwnedBytes;
use common::{BinarySerializable, OwnedBytes};
const MAGIC_NUMBER: u16 = 4335u16;
const FASTFIELD_FORMAT_VERSION: u8 = 1;

View File

@@ -45,7 +45,7 @@ mod tests {
use std::io;
use std::num::NonZeroU64;
use ownedbytes::OwnedBytes;
use common::OwnedBytes;
use crate::gcd::{compute_gcd, find_gcd};
use crate::{FastFieldCodecType, VecColumn};

View File

@@ -18,7 +18,7 @@ use std::io;
use std::io::Write;
use std::sync::Arc;
use common::BinarySerializable;
use common::{BinarySerializable, OwnedBytes};
use compact_space::CompactSpaceDecompressor;
use format_version::read_format_version;
use monotonic_mapping::{
@@ -26,7 +26,6 @@ use monotonic_mapping::{
StrictlyMonotonicMappingToInternalBaseval, StrictlyMonotonicMappingToInternalGCDBaseval,
};
use null_index_footer::read_null_index_footer;
use ownedbytes::OwnedBytes;
use serialize::{Header, U128Header};
mod bitpacked;
@@ -37,15 +36,13 @@ mod line;
mod linear;
mod monotonic_mapping;
mod monotonic_mapping_u128;
#[allow(dead_code)]
mod null_index;
mod null_index_footer;
mod column;
mod gcd;
mod serialize;
/// TODO: remove when codec is used
pub use null_index::*;
pub mod serialize;
use self::bitpacked::BitpackedCodec;
use self::blockwise_linear::BlockwiseLinearCodec;
@@ -438,7 +435,7 @@ mod tests {
mod bench {
use std::sync::Arc;
use ownedbytes::OwnedBytes;
use common::OwnedBytes;
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use test::{self, Bencher};

View File

@@ -1,7 +1,6 @@
use std::io::{self, Write};
use common::BinarySerializable;
use ownedbytes::OwnedBytes;
use common::{BinarySerializable, OwnedBytes};
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use crate::line::Line;
@@ -25,13 +24,13 @@ impl Column for LinearReader {
interpoled_val.wrapping_add(bitpacked_diff)
}
#[inline]
#[inline(always)]
fn min_value(&self) -> u64 {
// The LinearReader assumes a normalized vector.
0u64
}
#[inline]
#[inline(always)]
fn max_value(&self) -> u64 {
self.header.max_value
}

View File

@@ -6,10 +6,10 @@ use std::io::BufRead;
use std::net::{IpAddr, Ipv6Addr};
use std::str::FromStr;
use common::OwnedBytes;
use fastfield_codecs::{open_u128, serialize_u128, Column, FastFieldCodecType, VecColumn};
use itertools::Itertools;
use measure_time::print_time;
use ownedbytes::OwnedBytes;
use prettytable::{Cell, Row, Table};
fn print_set_stats(ip_addrs: &[u128]) {

View File

@@ -56,10 +56,12 @@ impl<T> From<T> for StrictlyMonotonicMappingInverter<T> {
impl<From, To, T> StrictlyMonotonicFn<To, From> for StrictlyMonotonicMappingInverter<T>
where T: StrictlyMonotonicFn<From, To>
{
#[inline(always)]
fn mapping(&self, val: To) -> From {
self.orig_mapping.inverse(val)
}
#[inline(always)]
fn inverse(&self, val: From) -> To {
self.orig_mapping.mapping(val)
}
@@ -82,10 +84,12 @@ impl<External: MonotonicallyMappableToU128, T: MonotonicallyMappableToU128>
StrictlyMonotonicFn<External, u128> for StrictlyMonotonicMappingToInternal<T>
where T: MonotonicallyMappableToU128
{
#[inline(always)]
fn mapping(&self, inp: External) -> u128 {
External::to_u128(inp)
}
#[inline(always)]
fn inverse(&self, out: u128) -> External {
External::from_u128(out)
}
@@ -95,10 +99,12 @@ impl<External: MonotonicallyMappableToU64, T: MonotonicallyMappableToU64>
StrictlyMonotonicFn<External, u64> for StrictlyMonotonicMappingToInternal<T>
where T: MonotonicallyMappableToU64
{
#[inline(always)]
fn mapping(&self, inp: External) -> u64 {
External::to_u64(inp)
}
#[inline(always)]
fn inverse(&self, out: u64) -> External {
External::from_u64(out)
}
@@ -126,11 +132,13 @@ impl StrictlyMonotonicMappingToInternalGCDBaseval {
impl<External: MonotonicallyMappableToU64> StrictlyMonotonicFn<External, u64>
for StrictlyMonotonicMappingToInternalGCDBaseval
{
#[inline(always)]
fn mapping(&self, inp: External) -> u64 {
self.gcd_divider
.divide(External::to_u64(inp) - self.min_value)
}
#[inline(always)]
fn inverse(&self, out: u64) -> External {
External::from_u64(self.min_value + out * self.gcd)
}
@@ -141,6 +149,7 @@ pub(crate) struct StrictlyMonotonicMappingToInternalBaseval {
min_value: u64,
}
impl StrictlyMonotonicMappingToInternalBaseval {
#[inline(always)]
pub(crate) fn new(min_value: u64) -> Self {
Self { min_value }
}
@@ -149,20 +158,24 @@ impl StrictlyMonotonicMappingToInternalBaseval {
impl<External: MonotonicallyMappableToU64> StrictlyMonotonicFn<External, u64>
for StrictlyMonotonicMappingToInternalBaseval
{
#[inline(always)]
fn mapping(&self, val: External) -> u64 {
External::to_u64(val) - self.min_value
}
#[inline(always)]
fn inverse(&self, val: u64) -> External {
External::from_u64(self.min_value + val)
}
}
impl MonotonicallyMappableToU64 for u64 {
#[inline(always)]
fn to_u64(self) -> u64 {
self
}
#[inline(always)]
fn from_u64(val: u64) -> Self {
val
}
@@ -192,11 +205,15 @@ impl MonotonicallyMappableToU64 for bool {
}
}
// TODO remove me.
// Tantivy should refuse NaN values and work with NotNaN internally.
impl MonotonicallyMappableToU64 for f64 {
#[inline(always)]
fn to_u64(self) -> u64 {
common::f64_to_u64(self)
}
#[inline(always)]
fn from_u64(val: u64) -> Self {
common::u64_to_f64(val)
}
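// A property sketch, not a new API: the f64 <-> u64 mapping used here is an
// order-preserving bijection, which is what lets f64 fast fields reuse the
// u64 codecs.
#[test]
fn f64_monotonic_mapping_sketch() {
    let (a, b) = (-1.5f64, 2.5f64);
    assert!(a.to_u64() < b.to_u64()); // order is preserved across the mapping
    assert_eq!(f64::from_u64(a.to_u64()), a); // and it round-trips exactly
}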

View File

@@ -1,9 +1,8 @@
use std::convert::TryInto;
use std::io::{self, Write};
use common::BinarySerializable;
use common::{BinarySerializable, OwnedBytes};
use itertools::Itertools;
use ownedbytes::OwnedBytes;
use super::{get_bit_at, set_bit_at};
@@ -16,6 +15,7 @@ use super::{get_bit_at, set_bit_at};
///
/// When translating a dense index to the original index, we can use the offset to find the correct
/// block. Direct computation is not possible, but we can employ a linear or binary search.
#[derive(Clone)]
pub struct DenseCodec {
// data consists of blocks of 64 bits.
//
@@ -77,7 +77,7 @@ impl DenseCodec {
}
/// Return the number of non-null values in an index
pub fn num_non_null_vals(&self) -> u32 {
pub fn num_non_nulls(&self) -> u32 {
let last_block = (self.data.len() / SERIALIZED_BLOCK_SIZE) - 1;
self.dense_index_block(last_block as u32).offset
}
@@ -101,7 +101,7 @@ impl DenseCodec {
///
/// # Panics
///
/// May panic if any `idx` is greater than the column length.
/// May panic if any `idx` is greater than the max codec index.
pub fn translate_codec_idx_to_original_idx<'a>(
&'a self,
iter: impl Iterator<Item = u32> + 'a,
@@ -120,7 +120,7 @@ impl DenseCodec {
num_set_bits += 1;
}
if num_set_bits == (dense_idx - index_block.offset + 1) {
let orig_idx = block_pos * ELEMENTS_PER_BLOCK + idx_in_bitvec as u32;
let orig_idx = block_pos * ELEMENTS_PER_BLOCK + idx_in_bitvec;
return orig_idx;
}
}
@@ -173,7 +173,7 @@ pub fn serialize_dense_codec(
block.serialize(&mut out)?;
offset.serialize(&mut out)?;
offset += block.count_ones() as u32;
offset += block.count_ones();
}
// Add sentinel block for the offset
let block: u64 = 0;
@@ -379,7 +379,7 @@ mod bench {
}
#[bench]
fn bench_dense_codec_translate_orig_to_dense_90percent_filled_random_stride(
fn bench_dense_codec_translate_orig_to_codec_90percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.9f64);
@@ -387,7 +387,7 @@ mod bench {
}
#[bench]
fn bench_dense_codec_translate_orig_to_dense_50percent_filled_random_stride(
fn bench_dense_codec_translate_orig_to_codec_50percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.5f64);
@@ -395,19 +395,19 @@ mod bench {
}
#[bench]
fn bench_dense_codec_translate_orig_to_dense_full_scan_10percent(bench: &mut Bencher) {
fn bench_dense_codec_translate_orig_to_codec_full_scan_10percent(bench: &mut Bencher) {
let codec = gen_bools(0.1f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_dense_codec_translate_orig_to_dense_full_scan_90percent(bench: &mut Bencher) {
fn bench_dense_codec_translate_orig_to_codec_full_scan_90percent(bench: &mut Bencher) {
let codec = gen_bools(0.9f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_dense_codec_translate_orig_to_dense_10percent_filled_random_stride(
fn bench_dense_codec_translate_orig_to_codec_10percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.1f64);
@@ -415,11 +415,11 @@ mod bench {
}
#[bench]
fn bench_dense_codec_translate_dense_to_orig_90percent_filled_random_stride_big_step(
fn bench_dense_codec_translate_codec_to_orig_90percent_filled_random_stride_big_step(
bench: &mut Bencher,
) {
let codec = gen_bools(0.9f64);
let num_vals = codec.num_non_null_vals();
let num_vals = codec.num_non_nulls();
bench.iter(|| {
codec
.translate_codec_idx_to_original_idx(random_range_iterator(0, num_vals, 50_000))
@@ -428,11 +428,11 @@ mod bench {
}
#[bench]
fn bench_dense_codec_translate_dense_to_orig_90percent_filled_random_stride(
fn bench_dense_codec_translate_codec_to_orig_90percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.9f64);
let num_vals = codec.num_non_null_vals();
let num_vals = codec.num_non_nulls();
bench.iter(|| {
codec
.translate_codec_idx_to_original_idx(random_range_iterator(0, num_vals, 100))
@@ -441,9 +441,9 @@ mod bench {
}
#[bench]
fn bench_dense_codec_translate_dense_to_orig_90percent_filled_full_scan(bench: &mut Bencher) {
fn bench_dense_codec_translate_codec_to_orig_90percent_filled_full_scan(bench: &mut Bencher) {
let codec = gen_bools(0.9f64);
let num_vals = codec.num_non_null_vals();
let num_vals = codec.num_non_nulls();
bench.iter(|| {
codec
.translate_codec_idx_to_original_idx(0..num_vals)

View File

@@ -1,6 +1,7 @@
pub use dense::{serialize_dense_codec, DenseCodec};
mod dense;
mod sparse;
#[inline]
fn get_bit_at(input: u64, n: u32) -> bool {

View File

@@ -0,0 +1,752 @@
use std::io::{self, Write};
use common::{BitSet, OwnedBytes};
use super::{serialize_dense_codec, DenseCodec};
/// `SparseCodec` is the codec used when only a few documents have values.
/// In contrast to `DenseCodec`, opening a `SparseCodec` builds runtime data for
/// faster access.
///
/// The lower 16 bits of doc ids are stored as u16 while the upper 16 bits are given by the block
/// id. Each block contains 1<<16 docids.
///
/// # Serialized Data Layout
/// The data starts with the block data. Each block is either dense or sparse encoded, depending on
/// the number of values in the block. A block is sparse when it contains fewer than
/// DENSE_BLOCK_THRESHOLD (6144) values.
/// [Sparse data block | dense data block, .. #repeat*; Desc: Either a sparse or dense encoded
/// block]
/// ### Sparse block data
/// [u16 LE, .. #repeat*; Desc: Positions with values in a block]
/// ### Dense block data
/// [Dense codec for the whole block; Desc: Similar to a bitvec(0..ELEMENTS_PER_BLOCK) + Metadata
/// for faster lookups. See dense.rs]
///
/// The data is followed by block metadata, to know which area of the raw block data belongs to
/// which block. Only metadata for blocks with elements is recorded to
/// keep the overhead low for scenarios with many very sparse columns. The block metadata consists
/// of the block index and the number of values in the block. Since we don't store empty blocks,
/// num_vals is stored decremented by 1 (a stored 0 means 1 value).
///
/// The last u16 stores the number of blocks that contain values.
/// [u16 LE, .. #repeat*; Desc: Positions with values in a block][(u16 LE, u16 LE), .. #repeat*;
/// Desc: (Block Id u16, Num Elements u16)][u16 LE; Desc: num blocks with values u16]
///
/// # Opening
/// When opening, the data is expanded into a `Vec<SparseCodecBlockVariant>`, where the
/// index is the block index. For each block, `byte_start` and `offset` are computed.
pub struct SparseCodec {
data: OwnedBytes,
blocks: Vec<SparseCodecBlockVariant>,
}
/// The threshold on the number of elements in a block at which we switch to dense encoding
const DENSE_BLOCK_THRESHOLD: u32 = 6144;
const ELEMENTS_PER_BLOCK: u32 = u16::MAX as u32 + 1;
/// 1.5 bits per element + 12 bytes for the sentinel block
const NUM_BYTES_DENSE_BLOCK: u32 = (ELEMENTS_PER_BLOCK + ELEMENTS_PER_BLOCK / 2 + 64 + 32) / 8;
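// A small consistency check, assuming the threshold was picked at the
// break-even point between the two encodings: a sparse block costs
// 2 bytes per value, while a dense block has the fixed cost computed above.
#[test]
fn block_encoding_break_even_sketch() {
    assert_eq!(NUM_BYTES_DENSE_BLOCK, 12_300); // (65_536 + 32_768 + 64 + 32) / 8
    assert_eq!(DENSE_BLOCK_THRESHOLD * 2, 12_288); // sparse cost right at the threshold
}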
#[derive(Clone)]
enum SparseCodecBlockVariant {
Empty { offset: u32 },
Dense(DenseBlock),
Sparse(SparseBlock),
}
impl SparseCodecBlockVariant {
/// The number of non-null values that preceded this block.
#[inline]
fn offset(&self) -> u32 {
match self {
SparseCodecBlockVariant::Empty { offset } => *offset,
SparseCodecBlockVariant::Dense(dense) => dense.offset,
SparseCodecBlockVariant::Sparse(sparse) => sparse.offset,
}
}
}
/// A block covers at most `ELEMENTS_PER_BLOCK` (u16::MAX + 1) values
#[derive(Clone)]
struct DenseBlock {
/// The number of values set before the block
offset: u32,
/// The data for the dense encoding
codec: DenseCodec,
}
impl DenseBlock {
pub fn exists(&self, idx: u32) -> bool {
self.codec.exists(idx)
}
pub fn translate_to_codec_idx(&self, idx: u32) -> Option<u32> {
self.codec.translate_to_codec_idx(idx)
}
pub fn translate_codec_idx_to_original_idx(&self, idx: u32) -> u32 {
self.codec
.translate_codec_idx_to_original_idx(idx..=idx)
.next()
.unwrap()
}
}
/// A block covers at most `ELEMENTS_PER_BLOCK` (u16::MAX + 1) values
#[derive(Debug, Copy, Clone)]
struct SparseBlock {
/// The number of values in the block
num_vals: u32,
/// The number of values set before the block
offset: u32,
/// The start position of the data for the block
byte_start: u32,
}
impl SparseBlock {
fn empty_block(offset: u32) -> Self {
Self {
num_vals: 0,
byte_start: 0,
offset,
}
}
#[inline]
fn value_at_idx(&self, data: &[u8], idx: u16) -> u16 {
let start_offset: usize = self.byte_start as usize + (idx as u32 as usize * 2);
get_u16(data, start_offset)
}
#[inline]
#[allow(clippy::comparison_chain)]
// Looks for the element in the block. Returns its position if found.
fn binary_search(&self, data: &[u8], target: u16) -> Option<u16> {
let mut size = self.num_vals as u16;
let mut left = 0;
let mut right = size;
// TODO try a different implementation,
// e.g. exponential search narrowing into binary search
while left < right {
let mid = left + size / 2;
// TODO do boundary check only once, and then use an
// unsafe `value_at_idx`
let mid_val = self.value_at_idx(data, mid);
if target > mid_val {
left = mid + 1;
} else if target < mid_val {
right = mid;
} else {
return Some(mid);
}
size = right - left;
}
None
}
}
#[inline]
fn get_u16(data: &[u8], byte_position: usize) -> u16 {
let bytes: [u8; 2] = data[byte_position..byte_position + 2].try_into().unwrap();
u16::from_le_bytes(bytes)
}
const SERIALIZED_BLOCK_METADATA_SIZE: usize = 4;
fn deserialize_sparse_codec_block(data: &OwnedBytes) -> Vec<SparseCodecBlockVariant> {
// The number of vals so far
let mut offset = 0;
let mut sparse_codec_blocks = Vec::new();
let num_blocks = get_u16(data, data.len() - 2);
let block_data_index_start =
data.len() - 2 - num_blocks as usize * SERIALIZED_BLOCK_METADATA_SIZE;
let mut byte_start = 0;
for block_num in 0..num_blocks as usize {
let block_data_index = block_data_index_start + SERIALIZED_BLOCK_METADATA_SIZE * block_num;
let block_idx = get_u16(data, block_data_index);
let num_vals = get_u16(data, block_data_index + 2) as u32 + 1;
sparse_codec_blocks.resize(
block_idx as usize,
SparseCodecBlockVariant::Empty { offset },
);
if is_sparse(num_vals) {
let block = SparseBlock {
num_vals,
offset,
byte_start,
};
sparse_codec_blocks.push(SparseCodecBlockVariant::Sparse(block));
byte_start += 2 * num_vals;
} else {
let block = DenseBlock {
offset,
codec: DenseCodec::open(data.slice(byte_start as usize..data.len()).clone()),
};
sparse_codec_blocks.push(SparseCodecBlockVariant::Dense(block));
// Dense blocks have a fixed size spanning ELEMENTS_PER_BLOCK.
byte_start += NUM_BYTES_DENSE_BLOCK;
}
offset += num_vals;
}
sparse_codec_blocks.push(SparseCodecBlockVariant::Empty { offset });
sparse_codec_blocks
}
/// Splits a value address into lower and upper 16 bits.
/// The lower 16 bits are the value within the block.
/// The upper 16 bits are the block index.
#[derive(Debug, Clone, Copy)]
struct ValueAddr {
block_idx: u16,
value_in_block: u16,
}
/// Splits an idx into its block index and its value within the block
fn value_addr(idx: u32) -> ValueAddr {
/// Static assert on the number of elements per block this method expects
#[allow(clippy::assertions_on_constants)]
const _: () = assert!(ELEMENTS_PER_BLOCK == (1 << 16));
let value_in_block = idx as u16;
let block_idx = (idx >> 16) as u16;
ValueAddr {
block_idx,
value_in_block,
}
}
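// A worked example of the split: doc id 70_000 has high 16 bits 1 and low
// 16 bits 70_000 - 65_536 = 4_464.
#[test]
fn value_addr_sketch() {
    let addr = value_addr(70_000);
    assert_eq!(addr.block_idx, 1u16);
    assert_eq!(addr.value_in_block, 4_464u16);
}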
impl SparseCodec {
/// Open the SparseCodec from OwnedBytes
pub fn open(data: OwnedBytes) -> Self {
let blocks = deserialize_sparse_codec_block(&data);
Self { data, blocks }
}
#[inline]
/// Check if value at position is not null.
pub fn exists(&self, idx: u32) -> bool {
let value_addr = value_addr(idx);
// There may be trailing nulls without data; those are not stored as blocks. It would be
// possible to create empty blocks, but for that we would need to serialize the number of
// values, or pass it when opening.
if let Some(block) = self.blocks.get(value_addr.block_idx as usize) {
match block {
SparseCodecBlockVariant::Empty { offset: _ } => false,
SparseCodecBlockVariant::Dense(block) => {
block.exists(value_addr.value_in_block as u32)
}
SparseCodecBlockVariant::Sparse(block) => block
.binary_search(&self.data, value_addr.value_in_block)
.is_some(),
}
} else {
false
}
}
/// Return the number of non-null values in an index
pub fn num_non_nulls(&self) -> u32 {
self.blocks.last().map(|block| block.offset()).unwrap_or(0)
}
#[inline]
/// Translate from the original index to the codec index.
pub fn translate_to_codec_idx(&self, idx: u32) -> Option<u32> {
let value_addr = value_addr(idx);
let block = self.blocks.get(value_addr.block_idx as usize)?;
match block {
SparseCodecBlockVariant::Empty { offset: _ } => None,
SparseCodecBlockVariant::Dense(block) => block
.translate_to_codec_idx(value_addr.value_in_block as u32)
.map(|pos_in_block| pos_in_block + block.offset),
SparseCodecBlockVariant::Sparse(block) => {
let pos_in_block = block.binary_search(&self.data, value_addr.value_in_block);
pos_in_block.map(|pos_in_block: u16| block.offset + pos_in_block as u32)
}
}
}
fn find_block(&self, dense_idx: u32, mut block_pos: u32) -> u32 {
loop {
let offset = self.blocks[block_pos as usize].offset();
if offset > dense_idx {
return block_pos - 1;
}
block_pos += 1;
}
}
/// Translate positions from the codec index to the original index.
///
/// # Panics
///
/// May panic if any `idx` is greater than the max codec index.
pub fn translate_codec_idx_to_original_idx<'a>(
&'a self,
iter: impl Iterator<Item = u32> + 'a,
) -> impl Iterator<Item = u32> + 'a {
// TODO: There's a big potential performance gain by using per-block iterators instead of
// random access for each element in a block.
// itertools' group_by won't help though, since it requires a temporary local variable.
let mut block_pos = 0u32;
iter.map(move |codec_idx| {
// update block_pos to limit search scope
block_pos = self.find_block(codec_idx, block_pos);
let block_doc_idx_start = block_pos * ELEMENTS_PER_BLOCK;
let block = &self.blocks[block_pos as usize];
let idx_in_block = codec_idx - block.offset();
match block {
SparseCodecBlockVariant::Empty { offset: _ } => {
panic!(
"invalid input, cannot translate to original index. associated empty \
block with dense idx. block_pos {}, idx_in_block {}",
block_pos, idx_in_block
)
}
SparseCodecBlockVariant::Dense(dense) => {
dense.translate_codec_idx_to_original_idx(idx_in_block) + block_doc_idx_start
}
SparseCodecBlockVariant::Sparse(block) => {
block.value_at_idx(&self.data, idx_in_block as u16) as u32 + block_doc_idx_start
}
}
})
}
}
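// A minimal illustrative round trip, using only the functions in this file.
// Note that `find_block` only ever scans forward, so
// `translate_codec_idx_to_original_idx` implicitly expects an ascending
// input iterator.
#[test]
fn sparse_codec_round_trip_sketch() -> io::Result<()> {
    let mut out = Vec::new();
    // Positions 0 and 2 land in block 0; 70_000 lands in block 1.
    serialize_sparse_codec([0u32, 2, 70_000].into_iter(), &mut out)?;
    let codec = SparseCodec::open(OwnedBytes::new(out));
    assert_eq!(codec.num_non_nulls(), 3);
    let originals: Vec<u32> = codec
        .translate_codec_idx_to_original_idx(0..codec.num_non_nulls())
        .collect();
    assert_eq!(originals, vec![0, 2, 70_000]);
    Ok(())
}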
fn is_sparse(num_elem_in_block: u32) -> bool {
num_elem_in_block < DENSE_BLOCK_THRESHOLD
}
#[derive(Default)]
struct BlockDataSerialized {
block_idx: u16,
num_vals: u32,
}
/// Serializes the sparse codec. `iter` yields the positions of set values, in increasing order.
pub fn serialize_sparse_codec<W: Write>(
mut iter: impl Iterator<Item = u32>,
mut out: W,
) -> io::Result<()> {
let mut block_metadata: Vec<BlockDataSerialized> = Vec::new();
let mut current_block = Vec::new();
// This if-statement for the first element ensures that
// `block_metadata` is not empty in the loop below.
if let Some(idx) = iter.next() {
let value_addr = value_addr(idx);
block_metadata.push(BlockDataSerialized {
block_idx: value_addr.block_idx,
num_vals: 1,
});
current_block.push(value_addr.value_in_block);
}
let flush_block = |current_block: &mut Vec<u16>, out: &mut W| -> io::Result<()> {
let is_sparse = is_sparse(current_block.len() as u32);
if is_sparse {
for val_in_block in current_block.iter() {
out.write_all(val_in_block.to_le_bytes().as_ref())?;
}
} else {
let mut bitset = BitSet::with_max_value(ELEMENTS_PER_BLOCK + 1);
for val_in_block in current_block.iter() {
bitset.insert(*val_in_block as u32);
}
let iter = (0..ELEMENTS_PER_BLOCK).map(|idx| bitset.contains(idx));
serialize_dense_codec(iter, out)?;
}
current_block.clear();
Ok(())
};
for idx in iter {
let value_addr = value_addr(idx);
if block_metadata[block_metadata.len() - 1].block_idx == value_addr.block_idx {
let last_idx_metadata = block_metadata.len() - 1;
block_metadata[last_idx_metadata].num_vals += 1;
} else {
// flush prev block
flush_block(&mut current_block, &mut out)?;
block_metadata.push(BlockDataSerialized {
block_idx: value_addr.block_idx,
num_vals: 1,
});
}
current_block.push(value_addr.value_in_block);
}
// handle last block
flush_block(&mut current_block, &mut out)?;
for block in &block_metadata {
out.write_all(block.block_idx.to_le_bytes().as_ref())?;
// We don't store empty blocks, so we can subtract 1.
// This way the count still fits in a u16 even when a block holds 1 << 16 (u16::MAX + 1) elements.
out.write_all(((block.num_vals - 1) as u16).to_le_bytes().as_ref())?;
}
out.write_all((block_metadata.len() as u16).to_le_bytes().as_ref())?;
Ok(())
}
#[cfg(test)]
mod tests {
use itertools::Itertools;
use proptest::prelude::{any, prop, *};
use proptest::strategy::Strategy;
use proptest::{prop_oneof, proptest};
use super::*;
fn random_bitvec() -> BoxedStrategy<Vec<bool>> {
prop_oneof![
1 => prop::collection::vec(proptest::bool::weighted(1.0), 0..100),
1 => prop::collection::vec(proptest::bool::weighted(0.00), 0..(ELEMENTS_PER_BLOCK as usize * 3)), // empty blocks
1 => prop::collection::vec(proptest::bool::weighted(1.00), 0..(ELEMENTS_PER_BLOCK as usize + 10)), // full block
1 => prop::collection::vec(proptest::bool::weighted(0.01), 0..100),
1 => prop::collection::vec(proptest::bool::weighted(0.01), 0..u16::MAX as usize),
8 => vec![any::<bool>()],
]
.boxed()
}
proptest! {
#![proptest_config(ProptestConfig::with_cases(50))]
#[test]
fn test_with_random_bitvecs(bitvec1 in random_bitvec(), bitvec2 in random_bitvec(), bitvec3 in random_bitvec()) {
let mut bitvec = Vec::new();
bitvec.extend_from_slice(&bitvec1);
bitvec.extend_from_slice(&bitvec2);
bitvec.extend_from_slice(&bitvec3);
test_null_index(bitvec);
}
}
#[test]
fn sparse_codec_test_one_block_false() {
let mut iter = vec![false; ELEMENTS_PER_BLOCK as usize];
iter.push(true);
test_null_index(iter);
}
#[test]
fn sparse_codec_test_one_block_true() {
let mut iter = vec![true; ELEMENTS_PER_BLOCK as usize];
iter.push(true);
test_null_index(iter);
}
fn test_null_index(data: Vec<bool>) {
let mut out = vec![];
serialize_sparse_codec(
data.iter()
.cloned()
.enumerate()
.filter(|(_pos, val)| *val)
.map(|(pos, _val)| pos as u32),
&mut out,
)
.unwrap();
let null_index = SparseCodec::open(OwnedBytes::new(out));
let orig_idx_with_value: Vec<u32> = data
.iter()
.enumerate()
.filter(|(_pos, val)| **val)
.map(|(pos, _val)| pos as u32)
.collect();
assert_eq!(
null_index
.translate_codec_idx_to_original_idx(0..orig_idx_with_value.len() as u32)
.collect_vec(),
orig_idx_with_value
);
let step_size = (orig_idx_with_value.len() / 100).max(1);
for (dense_idx, orig_idx) in orig_idx_with_value.iter().enumerate().step_by(step_size) {
assert_eq!(
null_index.translate_to_codec_idx(*orig_idx),
Some(dense_idx as u32)
);
}
// 100 samples
let step_size = (data.len() / 100).max(1);
for (pos, value) in data.iter().enumerate().step_by(step_size) {
assert_eq!(null_index.exists(pos as u32), *value);
}
}
#[test]
fn sparse_codec_test_translation() {
let mut out = vec![];
let iter = ([true, false, true, false]).iter().cloned();
serialize_sparse_codec(
iter.enumerate()
.filter(|(_pos, val)| *val)
.map(|(pos, _val)| pos as u32),
&mut out,
)
.unwrap();
let null_index = SparseCodec::open(OwnedBytes::new(out));
assert_eq!(
null_index
.translate_codec_idx_to_original_idx(0..2)
.collect_vec(),
vec![0, 2]
);
}
#[test]
fn sparse_codec_translate() {
let mut out = vec![];
let iter = ([true, false, true, false]).iter().cloned();
serialize_sparse_codec(
iter.enumerate()
.filter(|(_pos, val)| *val)
.map(|(pos, _val)| pos as u32),
&mut out,
)
.unwrap();
let null_index = SparseCodec::open(OwnedBytes::new(out));
assert_eq!(null_index.translate_to_codec_idx(0), Some(0));
assert_eq!(null_index.translate_to_codec_idx(2), Some(1));
}
#[test]
fn sparse_codec_test_small() {
let mut out = vec![];
let iter = ([true, false, true, false]).iter().cloned();
serialize_sparse_codec(
iter.enumerate()
.filter(|(_pos, val)| *val)
.map(|(pos, _val)| pos as u32),
&mut out,
)
.unwrap();
let null_index = SparseCodec::open(OwnedBytes::new(out));
assert!(null_index.exists(0));
assert!(!null_index.exists(1));
assert!(null_index.exists(2));
assert!(!null_index.exists(3));
}
#[test]
fn sparse_codec_test_large() {
let mut docs = vec![];
docs.extend((0..ELEMENTS_PER_BLOCK).map(|_idx| false));
docs.extend((0..=1).map(|_idx| true));
let iter = docs.iter().cloned();
let mut out = vec![];
serialize_sparse_codec(
iter.enumerate()
.filter(|(_pos, val)| *val)
.map(|(pos, _val)| pos as u32),
&mut out,
)
.unwrap();
let null_index = SparseCodec::open(OwnedBytes::new(out));
assert!(!null_index.exists(0));
assert!(!null_index.exists(100));
assert!(!null_index.exists(ELEMENTS_PER_BLOCK - 1));
assert!(null_index.exists(ELEMENTS_PER_BLOCK));
assert!(null_index.exists(ELEMENTS_PER_BLOCK + 1));
}
}
#[cfg(all(test, feature = "unstable"))]
mod bench {
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use test::Bencher;
use super::*;
const TOTAL_NUM_VALUES: u32 = 1_000_000;
fn gen_bools(fill_ratio: f64) -> SparseCodec {
let mut out = Vec::new();
let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
serialize_sparse_codec(
(0..TOTAL_NUM_VALUES)
.map(|_| rng.gen_bool(fill_ratio))
.enumerate()
.filter(|(_pos, val)| *val)
.map(|(pos, _val)| pos as u32),
&mut out,
)
.unwrap();
let codec = SparseCodec::open(OwnedBytes::new(out));
codec
}
fn random_range_iterator(start: u32, end: u32, step_size: u32) -> impl Iterator<Item = u32> {
let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
let mut current = start;
std::iter::from_fn(move || {
current += rng.gen_range(1..step_size + 1);
if current >= end {
None
} else {
Some(current)
}
})
}
fn walk_over_data(codec: &SparseCodec, max_step_size: u32) -> Option<u32> {
walk_over_data_from_positions(
codec,
random_range_iterator(0, TOTAL_NUM_VALUES, max_step_size),
)
}
fn walk_over_data_from_positions(
codec: &SparseCodec,
positions: impl Iterator<Item = u32>,
) -> Option<u32> {
let mut dense_idx: Option<u32> = None;
for idx in positions {
dense_idx = dense_idx.or(codec.translate_to_codec_idx(idx));
}
dense_idx
}
#[bench]
fn bench_sparse_codec_translate_orig_to_codec_1percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.01f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_sparse_codec_translate_orig_to_codec_5percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.05f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_sparse_codec_translate_orig_to_codec_full_scan_10percent(bench: &mut Bencher) {
let codec = gen_bools(0.1f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_sparse_codec_translate_orig_to_codec_full_scan_90percent(bench: &mut Bencher) {
let codec = gen_bools(0.9f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_sparse_codec_translate_orig_to_codec_full_scan_1percent(bench: &mut Bencher) {
let codec = gen_bools(0.01f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_sparse_codec_translate_orig_to_codec_10percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.1f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_sparse_codec_translate_orig_to_codec_90percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.9f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_sparse_codec_translate_codec_to_orig_1percent_filled_random_stride_big_step(
bench: &mut Bencher,
) {
let codec = gen_bools(0.01f64);
let num_vals = codec.num_non_nulls();
bench.iter(|| {
codec
.translate_codec_idx_to_original_idx(random_range_iterator(0, num_vals, 50_000))
.last()
});
}
#[bench]
fn bench_sparse_codec_translate_codec_to_orig_1percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.01f64);
let num_vals = codec.num_non_nulls();
bench.iter(|| {
codec
.translate_codec_idx_to_original_idx(random_range_iterator(0, num_vals, 100))
.last()
});
}
#[bench]
fn bench_sparse_codec_translate_codec_to_orig_1percent_filled_full_scan(bench: &mut Bencher) {
let codec = gen_bools(0.01f64);
let num_vals = codec.num_non_nulls();
bench.iter(|| {
codec
.translate_codec_idx_to_original_idx(0..num_vals)
.last()
});
}
#[bench]
fn bench_sparse_codec_translate_codec_to_orig_90percent_filled_random_stride_big_step(
bench: &mut Bencher,
) {
let codec = gen_bools(0.90f64);
let num_vals = codec.num_non_nulls();
bench.iter(|| {
codec
.translate_codec_idx_to_original_idx(random_range_iterator(0, num_vals, 50_000))
.last()
});
}
#[bench]
fn bench_sparse_codec_translate_codec_to_orig_90percent_filled_random_stride(
bench: &mut Bencher,
) {
let codec = gen_bools(0.9f64);
let num_vals = codec.num_non_nulls();
bench.iter(|| {
codec
.translate_codec_idx_to_original_idx(random_range_iterator(0, num_vals, 100))
.last()
});
}
#[bench]
fn bench_sparse_codec_translate_codec_to_orig_90percent_filled_full_scan(bench: &mut Bencher) {
let codec = gen_bools(0.9f64);
let num_vals = codec.num_non_nulls();
bench.iter(|| {
codec
.translate_codec_idx_to_original_idx(0..num_vals)
.last()
});
}
}

View File

@@ -1,12 +1,12 @@
use std::io::{self, Write};
use std::ops::Range;
use common::{BinarySerializable, CountingWriter, VInt};
use ownedbytes::OwnedBytes;
use common::{BinarySerializable, CountingWriter, OwnedBytes, VInt};
#[derive(Debug, Clone, Copy, Eq, PartialEq)]
pub(crate) enum FastFieldCardinality {
Single = 1,
Multi = 2,
}
impl BinarySerializable for FastFieldCardinality {
@@ -30,6 +30,7 @@ impl FastFieldCardinality {
pub(crate) fn from_code(code: u8) -> Option<Self> {
match code {
1 => Some(Self::Single),
2 => Some(Self::Multi),
_ => None,
}
}

View File

@@ -21,9 +21,8 @@ use std::io;
use std::num::NonZeroU64;
use std::sync::Arc;
use common::{BinarySerializable, VInt};
use common::{BinarySerializable, OwnedBytes, VInt};
use log::warn;
use ownedbytes::OwnedBytes;
use crate::bitpacked::BitpackedCodec;
use crate::blockwise_linear::BlockwiseLinearCodec;
@@ -193,6 +192,69 @@ pub fn serialize_u128<F: Fn() -> I, I: Iterator<Item = u128>>(
iter_gen: F,
num_vals: u32,
output: &mut impl io::Write,
) -> io::Result<()> {
serialize_u128_new(ValueIndexInfo::default(), iter_gen, num_vals, output)
}
#[allow(dead_code)]
pub enum ValueIndexInfo<'a> {
MultiValue(Box<dyn MultiValueIndexInfo + 'a>),
SingleValue(Box<dyn SingleValueIndexInfo + 'a>),
}
// TODO Remove me
impl Default for ValueIndexInfo<'static> {
fn default() -> Self {
struct Dummy {}
impl SingleValueIndexInfo for Dummy {
fn num_vals(&self) -> u32 {
todo!()
}
fn num_non_nulls(&self) -> u32 {
todo!()
}
fn iter(&self) -> Box<dyn Iterator<Item = u32>> {
todo!()
}
}
Self::SingleValue(Box::new(Dummy {}))
}
}
impl<'a> ValueIndexInfo<'a> {
fn get_cardinality(&self) -> FastFieldCardinality {
match self {
ValueIndexInfo::MultiValue(_) => FastFieldCardinality::Multi,
ValueIndexInfo::SingleValue(_) => FastFieldCardinality::Single,
}
}
}
pub trait MultiValueIndexInfo {
/// The number of docs in the column.
fn num_docs(&self) -> u32;
/// The number of values in the column.
fn num_vals(&self) -> u32;
/// Returns the start index of the values for each doc.
fn iter(&self) -> Box<dyn Iterator<Item = u32> + '_>;
}
pub trait SingleValueIndexInfo {
/// The number of values including nulls in the column.
fn num_vals(&self) -> u32;
/// The number of non-null values in the column.
fn num_non_nulls(&self) -> u32;
/// Returns an iterator over the positions of docs with a value.
fn iter(&self) -> Box<dyn Iterator<Item = u32> + '_>;
}
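// Hypothetical illustration (not part of this change; names and layout are
// assumptions): a `SingleValueIndexInfo` backed by a sorted list of the
// positions of non-null docs, showing the contract of the trait above.
struct SortedPositionsIndexInfo {
    /// Positions (doc ids) that hold a value, in increasing order.
    positions: Vec<u32>,
    /// Total number of values, nulls included.
    num_vals: u32,
}
impl SingleValueIndexInfo for SortedPositionsIndexInfo {
    fn num_vals(&self) -> u32 {
        self.num_vals
    }
    fn num_non_nulls(&self) -> u32 {
        self.positions.len() as u32
    }
    fn iter(&self) -> Box<dyn Iterator<Item = u32> + '_> {
        Box::new(self.positions.iter().copied())
    }
}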
/// Serializes u128 values with the compact space codec.
pub fn serialize_u128_new<F: Fn() -> I, I: Iterator<Item = u128>>(
value_index: ValueIndexInfo,
iter_gen: F,
num_vals: u32,
output: &mut impl io::Write,
) -> io::Result<()> {
let header = U128Header {
num_vals,
@@ -203,7 +265,7 @@ pub fn serialize_u128<F: Fn() -> I, I: Iterator<Item = u128>>(
compressor.compress_into(iter_gen(), output).unwrap();
let null_index_footer = NullIndexFooter {
cardinality: FastFieldCardinality::Single,
cardinality: value_index.get_cardinality(),
null_index_codec: NullIndexCodec::Full,
null_index_byte_range: 0..0,
};
@@ -218,6 +280,16 @@ pub fn serialize<T: MonotonicallyMappableToU64>(
typed_column: impl Column<T>,
output: &mut impl io::Write,
codecs: &[FastFieldCodecType],
) -> io::Result<()> {
serialize_new(ValueIndexInfo::default(), typed_column, output, codecs)
}
/// Serializes the column with the codec with the best estimate on the data.
pub fn serialize_new<T: MonotonicallyMappableToU64>(
value_index: ValueIndexInfo,
typed_column: impl Column<T>,
output: &mut impl io::Write,
codecs: &[FastFieldCodecType],
) -> io::Result<()> {
let column = monotonic_map_column(typed_column, StrictlyMonotonicMappingToInternal::<T>::new());
let header = Header::compute_header(&column, codecs).ok_or_else(|| {
@@ -235,7 +307,7 @@ pub fn serialize<T: MonotonicallyMappableToU64>(
serialize_given_codec(normalized_column, header.codec_type, output)?;
let null_index_footer = NullIndexFooter {
cardinality: FastFieldCardinality::Single,
cardinality: value_index.get_cardinality(),
null_index_codec: NullIndexCodec::Full,
null_index_byte_range: 0..0,
};

View File

@@ -1,7 +1,7 @@
[package]
authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"]
name = "ownedbytes"
version = "0.4.0"
version = "0.5.0"
edition = "2021"
description = "Expose data as static slice"
license = "MIT"

View File

@@ -3,7 +3,7 @@ use std::ops::{Deref, Range};
use std::sync::Arc;
use std::{fmt, io, mem};
use stable_deref_trait::StableDeref;
pub use stable_deref_trait::StableDeref;
/// An OwnedBytes simply wraps an object that owns a slice of data and exposes
/// this data as a slice.

View File

@@ -206,6 +206,7 @@ pub struct SegmentHistogramCollector {
field_type: Type,
interval: f64,
offset: f64,
min_doc_count: u64,
first_bucket_num: i64,
bounds: HistogramBounds,
}
@@ -215,6 +216,30 @@ impl SegmentHistogramCollector {
self,
agg_with_accessor: &BucketAggregationWithAccessor,
) -> crate::Result<IntermediateBucketResult> {
// Compute the number of buckets to validate against the maximum number of buckets.
// Note: We use min_doc_count here, but it is only a lower bound, since we are at the
// intermediate level and, after merging, the number of documents in a bucket could exceed
// `min_doc_count`.
{
let cut_off_buckets_front = self
.buckets
.iter()
.take_while(|bucket| bucket.doc_count <= self.min_doc_count)
.count();
let cut_off_buckets_back = self.buckets[cut_off_buckets_front..]
.iter()
.rev()
.take_while(|bucket| bucket.doc_count <= self.min_doc_count)
.count();
let estimate_num_buckets =
self.buckets.len() - cut_off_buckets_front - cut_off_buckets_back;
agg_with_accessor
.bucket_count
.add_count(estimate_num_buckets as u32);
agg_with_accessor.bucket_count.validate_bucket_count()?;
}
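// Worked example (illustrative only): with min_doc_count = 1 and bucket
// doc_counts [1, 0, 5, 1, 3, 0], the two leading buckets (1 and 0) and the
// trailing bucket (0) are trimmed, so estimate_num_buckets = 6 - 2 - 1 = 3.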
let mut buckets = Vec::with_capacity(
self.buckets
.iter()
@@ -251,11 +276,6 @@ impl SegmentHistogramCollector {
);
};
agg_with_accessor
.bucket_count
.add_count(buckets.len() as u32);
agg_with_accessor.bucket_count.validate_bucket_count()?;
Ok(IntermediateBucketResult::Histogram { buckets })
}
@@ -308,6 +328,7 @@ impl SegmentHistogramCollector {
first_bucket_num,
bounds,
sub_aggregations,
min_doc_count: req.min_doc_count(),
})
}
@@ -380,7 +401,7 @@ impl SegmentHistogramCollector {
debug_assert_eq!(
self.buckets[bucket_pos].key,
get_bucket_val(val, self.interval, self.offset) as f64
get_bucket_val(val, self.interval, self.offset)
);
self.increment_bucket(bucket_pos, doc, &bucket_with_accessor.sub_aggregation)?;
}
@@ -407,7 +428,7 @@ impl SegmentHistogramCollector {
if bounds.contains(val) {
debug_assert_eq!(
self.buckets[bucket_pos].key,
get_bucket_val(val, self.interval, self.offset) as f64
get_bucket_val(val, self.interval, self.offset)
);
self.increment_bucket(bucket_pos, doc, bucket_with_accessor)?;
@@ -1521,4 +1542,36 @@ mod tests {
Ok(())
}
#[test]
fn histogram_test_max_buckets_segments() -> crate::Result<()> {
let values = vec![0.0, 70000.0];
let index = get_test_index_from_values(true, &values)?;
let agg_req: Aggregations = vec![(
"my_interval".to_string(),
Aggregation::Bucket(BucketAggregation {
bucket_agg: BucketAggregationType::Histogram(HistogramAggregation {
field: "score_f64".to_string(),
interval: 1.0,
..Default::default()
}),
sub_aggregation: Default::default(),
}),
)]
.into_iter()
.collect();
let res = exec_request(agg_req, &index);
assert_eq!(
res.unwrap_err().to_string(),
"An invalid argument was passed: 'Aborting aggregation because too many buckets were \
created'"
.to_string()
);
Ok(())
}
}

View File

@@ -282,8 +282,8 @@ impl IntermediateBucketResult {
IntermediateBucketResult::Range(range_res) => {
let mut buckets: Vec<RangeBucketEntry> = range_res
.buckets
.into_iter()
.map(|(_, bucket)| {
.into_values()
.map(|bucket| {
bucket.into_final_bucket_entry(
&req.sub_aggregation,
schema,

View File

@@ -451,9 +451,9 @@ mod tests {
text_field_id => term.to_string(),
string_field_id => term.to_string(),
score_field => i as u64,
score_field_f64 => i as f64,
score_field_f64 => i,
score_field_i64 => i as i64,
fraction_field => i as f64/100.0,
fraction_field => i/100.0,
))?;
}
index_writer.commit()?;

View File

@@ -305,7 +305,7 @@ impl BucketCount {
}
pub(crate) fn add_count(&self, count: u32) {
self.bucket_count
.fetch_add(count as u32, std::sync::atomic::Ordering::Relaxed);
.fetch_add(count, std::sync::atomic::Ordering::Relaxed);
}
pub(crate) fn get_count(&self) -> u32 {
self.bucket_count.load(std::sync::atomic::Ordering::Relaxed)

View File

@@ -357,7 +357,7 @@ impl SegmentCollector for FacetSegmentCollector {
let mut facet = vec![];
let facet_ord = self.collapse_facet_ords[collapsed_facet_ord];
// TODO handle errors.
if facet_dict.ord_to_term(facet_ord as u64, &mut facet).is_ok() {
if facet_dict.ord_to_term(facet_ord, &mut facet).is_ok() {
if let Ok(facet) = Facet::from_encoded(facet) {
facet_counts.insert(facet, count);
}

View File

@@ -170,7 +170,7 @@ pub trait Collector: Sync + Send {
segment_ord: u32,
reader: &SegmentReader,
) -> crate::Result<<Self::Child as SegmentCollector>::Fruit> {
let mut segment_collector = self.for_segment(segment_ord as u32, reader)?;
let mut segment_collector = self.for_segment(segment_ord, reader)?;
match (reader.alive_bitset(), self.requires_scoring()) {
(Some(alive_bitset), true) => {

View File

@@ -813,7 +813,7 @@ mod tests {
let field = schema.get_field("num_likes").unwrap();
let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path());
let index = Index::create_in_dir(&tempdir_path, schema).unwrap();
let index = Index::create_in_dir(tempdir_path, schema).unwrap();
let reader = index
.reader_builder()
.reload_policy(ReloadPolicy::OnCommit)

View File

@@ -200,10 +200,7 @@ impl InvertedIndexReader {
#[cfg(feature = "quickwit")]
impl InvertedIndexReader {
pub(crate) async fn get_term_info_async(
&self,
term: &Term,
) -> crate::AsyncIoResult<Option<TermInfo>> {
pub(crate) async fn get_term_info_async(&self, term: &Term) -> io::Result<Option<TermInfo>> {
self.termdict.get_async(term.value_bytes()).await
}
@@ -211,12 +208,8 @@ impl InvertedIndexReader {
/// This method is for an advanced usage only.
///
/// Most users should prefer using [`Self::read_postings()`] instead.
pub async fn warm_postings(
&self,
term: &Term,
with_positions: bool,
) -> crate::AsyncIoResult<()> {
let term_info_opt = self.get_term_info_async(term).await?;
pub async fn warm_postings(&self, term: &Term, with_positions: bool) -> io::Result<()> {
let term_info_opt: Option<TermInfo> = self.get_term_info_async(term).await?;
if let Some(term_info) = term_info_opt {
self.postings_file_slice
.read_bytes_slice_async(term_info.postings_range.clone())
@@ -234,7 +227,7 @@ impl InvertedIndexReader {
/// This method is for an advanced usage only.
///
/// If you know which terms to pre-load, prefer using [`Self::warm_postings`] instead.
pub async fn warm_postings_full(&self, with_positions: bool) -> crate::AsyncIoResult<()> {
pub async fn warm_postings_full(&self, with_positions: bool) -> io::Result<()> {
self.postings_file_slice.read_bytes_async().await?;
if with_positions {
self.positions_file_slice.read_bytes_async().await?;
@@ -243,7 +236,7 @@ impl InvertedIndexReader {
}
/// Returns the number of documents containing the term asynchronously.
pub async fn doc_freq_async(&self, term: &Term) -> crate::AsyncIoResult<u32> {
pub async fn doc_freq_async(&self, term: &Term) -> io::Result<u32> {
Ok(self
.get_term_info_async(term)
.await?

View File

@@ -75,7 +75,7 @@ impl<W: TerminatingWrite + Write> CompositeWrite<W> {
let mut prev_offset = 0;
for (file_addr, offset) in self.offsets {
VInt((offset - prev_offset) as u64).serialize(&mut self.write)?;
VInt(offset - prev_offset).serialize(&mut self.write)?;
file_addr.serialize(&mut self.write)?;
prev_offset = offset;
}

View File

@@ -38,7 +38,7 @@ impl Footer {
counting_write.write_all(serde_json::to_string(&self)?.as_ref())?;
let footer_payload_len = counting_write.written_bytes();
BinarySerializable::serialize(&(footer_payload_len as u32), write)?;
BinarySerializable::serialize(&(FOOTER_MAGIC_NUMBER as u32), write)?;
BinarySerializable::serialize(&FOOTER_MAGIC_NUMBER, write)?;
Ok(())
}
@@ -90,9 +90,10 @@ impl Footer {
));
}
let footer: Footer = serde_json::from_slice(&file.read_bytes_slice(
file.len() - total_footer_size..file.len() - footer_metadata_len as usize,
)?)?;
let footer: Footer =
serde_json::from_slice(&file.read_bytes_slice(
file.len() - total_footer_size..file.len() - footer_metadata_len,
)?)?;
let body = file.slice_to(file.len() - total_footer_size);
Ok((footer, body))

View File

@@ -388,7 +388,7 @@ mod tests_mmap_specific {
let tempdir_path = PathBuf::from(tempdir.path());
let living_files = HashSet::new();
let mmap_directory = MmapDirectory::open(&tempdir_path).unwrap();
let mmap_directory = MmapDirectory::open(tempdir_path).unwrap();
let mut managed_directory = ManagedDirectory::wrap(Box::new(mmap_directory)).unwrap();
let mut write = managed_directory.open_write(test_path1).unwrap();
write.write_all(&[0u8, 1u8]).unwrap();

View File

@@ -6,10 +6,10 @@ use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock, Weak};
use std::{fmt, result};
use common::StableDeref;
use fs2::FileExt;
use memmap2::Mmap;
use serde::{Deserialize, Serialize};
use stable_deref_trait::StableDeref;
use tempfile::TempDir;
use crate::core::META_FILEPATH;
@@ -341,7 +341,7 @@ impl Directory for MmapDirectory {
/// removed before the file is deleted.
fn delete(&self, path: &Path) -> result::Result<(), DeleteError> {
let full_path = self.resolve_path(path);
fs::remove_file(&full_path).map_err(|e| {
fs::remove_file(full_path).map_err(|e| {
if e.kind() == io::ErrorKind::NotFound {
DeleteError::FileDoesNotExist(path.to_owned())
} else {
@@ -395,7 +395,7 @@ impl Directory for MmapDirectory {
fn atomic_read(&self, path: &Path) -> Result<Vec<u8>, OpenReadError> {
let full_path = self.resolve_path(path);
let mut buffer = Vec::new();
match File::open(&full_path) {
match File::open(full_path) {
Ok(mut file) => {
file.read_to_end(&mut buffer).map_err(|io_error| {
OpenReadError::wrap_io_error(io_error, path.to_path_buf())
@@ -425,7 +425,7 @@ impl Directory for MmapDirectory {
let file: File = OpenOptions::new()
.write(true)
.create(true) //< if the file does not exist yet, create it.
.open(&full_path)
.open(full_path)
.map_err(LockError::wrap_io_error)?;
if lock.is_blocking {
file.lock_exclusive().map_err(LockError::wrap_io_error)?;

View File

@@ -5,7 +5,6 @@ mod mmap_directory;
mod directory;
mod directory_lock;
mod file_slice;
mod file_watcher;
mod footer;
mod managed_directory;
@@ -20,13 +19,12 @@ mod composite_file;
use std::io::BufWriter;
use std::path::PathBuf;
pub use common::{AntiCallToken, TerminatingWrite};
pub use ownedbytes::OwnedBytes;
pub use common::file_slice::{FileHandle, FileSlice};
pub use common::{AntiCallToken, OwnedBytes, TerminatingWrite};
pub(crate) use self::composite_file::{CompositeFile, CompositeWrite};
pub use self::directory::{Directory, DirectoryClone, DirectoryLock};
pub use self::directory_lock::{Lock, INDEX_WRITER_LOCK, META_LOCK};
pub use self::file_slice::{FileHandle, FileSlice};
pub use self::ram_directory::RamDirectory;
pub use self::watch_event_router::{WatchCallback, WatchCallbackList, WatchHandle};

View File

@@ -104,28 +104,6 @@ pub enum TantivyError {
InternalError(String),
}
#[cfg(feature = "quickwit")]
#[derive(Error, Debug)]
#[doc(hidden)]
pub enum AsyncIoError {
#[error("io::Error `{0}`")]
Io(#[from] io::Error),
#[error("Asynchronous API is unsupported by this directory")]
AsyncUnsupported,
}
#[cfg(feature = "quickwit")]
impl From<AsyncIoError> for TantivyError {
fn from(async_io_err: AsyncIoError) -> Self {
match async_io_err {
AsyncIoError::Io(io_err) => TantivyError::from(io_err),
AsyncIoError::AsyncUnsupported => {
TantivyError::SystemError(format!("{:?}", async_io_err))
}
}
}
}
impl From<io::Error> for TantivyError {
fn from(io_err: io::Error) -> TantivyError {
TantivyError::IoError(Arc::new(io_err))

View File

@@ -1,8 +1,7 @@
use std::io;
use std::io::Write;
use common::{intersect_bitsets, BitSet, ReadOnlyBitSet};
use ownedbytes::OwnedBytes;
use common::{intersect_bitsets, BitSet, OwnedBytes, ReadOnlyBitSet};
use crate::space_usage::ByteCount;
use crate::DocId;

View File

@@ -64,9 +64,7 @@ impl FacetReader {
facet_ord: TermOrdinal,
output: &mut Facet,
) -> crate::Result<()> {
let found_term = self
.term_dict
.ord_to_term(facet_ord as u64, &mut self.buffer)?;
let found_term = self.term_dict.ord_to_term(facet_ord, &mut self.buffer)?;
assert!(found_term, "Term ordinal {} not found.", facet_ord);
let facet_str = str::from_utf8(&self.buffer[..])
.map_err(|utf8_err| DataCorruption::comment_only(utf8_err.to_string()))?;

View File

@@ -473,7 +473,7 @@ mod tests {
let fast_field_reader = open::<u64>(data)?;
for a in 0..n {
assert_eq!(fast_field_reader.get_val(a as u32), permutation[a as usize]);
assert_eq!(fast_field_reader.get_val(a as u32), permutation[a]);
}
}
Ok(())

View File

@@ -80,6 +80,7 @@ impl MultiValueIndex {
///
/// TODO: Instead of a linear scan we can employ a exponential search into binary search to
/// match a docid to its value position.
#[allow(clippy::bool_to_int_with_if)]
pub(crate) fn positions_to_docids(&self, doc_id_range: Range<u32>, positions: &mut Vec<u32>) {
if positions.is_empty() {
return;

View File

@@ -264,7 +264,7 @@ fn iter_remapped_multivalue_index<'a, C: Column>(
std::iter::once(0).chain(doc_id_map.iter_old_doc_ids().map(move |old_doc| {
let num_vals_for_doc = column.get_val(old_doc + 1) - column.get_val(old_doc);
offset += num_vals_for_doc;
offset as u64
offset
}))
}

View File

@@ -360,20 +360,10 @@ impl U128FastFieldWriter {
.map(|idx| self.vals[idx as usize])
};
serializer.create_u128_fast_field_with_idx(
self.field,
iter_gen,
self.val_count as u32,
0,
)?;
serializer.create_u128_fast_field_with_idx(self.field, iter_gen, self.val_count, 0)?;
} else {
let iter_gen = || self.vals.iter().cloned();
serializer.create_u128_fast_field_with_idx(
self.field,
iter_gen,
self.val_count as u32,
0,
)?;
serializer.create_u128_fast_field_with_idx(self.field, iter_gen, self.val_count, 0)?;
}
Ok(())

View File

@@ -252,8 +252,8 @@ mod tests {
&demux_mapping,
target_settings,
vec![
Box::new(RamDirectory::default()),
Box::new(RamDirectory::default()),
Box::<RamDirectory>::default(),
Box::<RamDirectory>::default(),
],
)?;

View File

@@ -152,7 +152,7 @@ pub(crate) fn advance_deletes(
let num_deleted_docs = max_doc - num_alive_docs;
if num_deleted_docs > num_deleted_docs_before {
// There are new deletes. We need to write a new delete file.
segment = segment.with_delete_meta(num_deleted_docs as u32, target_opstamp);
segment = segment.with_delete_meta(num_deleted_docs, target_opstamp);
let mut alive_doc_file = segment.open_write(SegmentComponent::Delete)?;
write_alive_bitset(&alive_bitset, &mut alive_doc_file)?;
alive_doc_file.terminate()?;
@@ -984,7 +984,7 @@ mod tests {
"LogMergePolicy { min_num_segments: 8, max_docs_before_merge: 10000000, \
min_layer_size: 10000, level_log_size: 0.75, del_docs_ratio_before_merge: 1.0 }"
);
let merge_policy = Box::new(NoMergePolicy::default());
let merge_policy = Box::<NoMergePolicy>::default();
index_writer.set_merge_policy(merge_policy);
assert_eq!(
format!("{:?}", index_writer.get_merge_policy()),
@@ -1813,8 +1813,8 @@ mod tests {
}
let num_docs_expected = expected_ids_and_num_occurrences
.iter()
.map(|(_, id_occurrences)| *id_occurrences as usize)
.values()
.map(|id_occurrences| *id_occurrences as usize)
.sum::<usize>();
assert_eq!(searcher.num_docs() as usize, num_docs_expected);
assert_eq!(old_searcher.num_docs() as usize, num_docs_expected);

View File

@@ -366,7 +366,7 @@ impl IndexMerger {
.map(|doc| reader.num_vals(doc))
.sum()
} else {
reader.total_num_vals() as u32
reader.total_num_vals()
}
})
.sum();
@@ -968,7 +968,7 @@ impl IndexMerger {
let doc_bytes = doc_bytes_res?;
store_writer.store_bytes(&doc_bytes)?;
} else {
return Err(DataCorruption::comment_only(&format!(
return Err(DataCorruption::comment_only(format!(
"unexpected missing document in docstore on merge, doc address \
{old_doc_addr:?}",
))

View File

@@ -447,8 +447,8 @@ impl SegmentUpdater {
let segment_entries = segment_updater.purge_deletes(opstamp)?;
segment_updater.segment_manager.commit(segment_entries);
segment_updater.save_metas(opstamp, payload)?;
// let _ = garbage_collect_files(segment_updater.clone());
// segment_updater.consider_merge_options();
let _ = garbage_collect_files(segment_updater.clone());
segment_updater.consider_merge_options();
Ok(opstamp)
})
}
@@ -866,7 +866,7 @@ mod tests {
}
assert_eq!(indices.len(), 3);
let output_directory: Box<dyn Directory> = Box::new(RamDirectory::default());
let output_directory: Box<dyn Directory> = Box::<RamDirectory>::default();
let index = merge_indices(&indices, output_directory)?;
assert_eq!(index.schema(), schema);

View File

@@ -16,11 +16,11 @@ mod atomic_impl {
impl AtomicU64Wrapper {
pub fn new(first_opstamp: Opstamp) -> AtomicU64Wrapper {
AtomicU64Wrapper(AtomicU64::new(first_opstamp as u64))
AtomicU64Wrapper(AtomicU64::new(first_opstamp))
}
pub fn fetch_add(&self, val: u64, order: Ordering) -> u64 {
self.0.fetch_add(val as u64, order) as u64
self.0.fetch_add(val, order)
}
pub fn revert(&self, val: u64, order: Ordering) -> u64 {
@@ -77,7 +77,7 @@ impl Stamper {
}
pub fn stamp(&self) -> Opstamp {
self.0.fetch_add(1u64, Ordering::SeqCst) as u64
self.0.fetch_add(1u64, Ordering::SeqCst)
}
/// Given a desired count `n`, `stamps` returns an iterator that

View File

@@ -177,7 +177,7 @@ impl DateTime {
/// The given date/time is converted to UTC and the actual
/// time zone is discarded.
pub const fn from_utc(dt: OffsetDateTime) -> Self {
let timestamp_micros = dt.unix_timestamp() as i64 * 1_000_000 + dt.microsecond() as i64;
let timestamp_micros = dt.unix_timestamp() * 1_000_000 + dt.microsecond() as i64;
Self { timestamp_micros }
}
@@ -259,10 +259,6 @@ pub use crate::future_result::FutureResult;
/// and instead, refer to this as `crate::Result<T>`.
pub type Result<T> = std::result::Result<T, TantivyError>;
/// Result for an Async io operation.
#[cfg(feature = "quickwit")]
pub type AsyncIoResult<T> = std::result::Result<T, crate::error::AsyncIoError>;
mod core;
mod indexer;

View File

@@ -71,7 +71,7 @@ impl PositionReader {
.map(|num_bits| num_bits as usize)
.sum();
let num_bytes_to_skip = num_bits * COMPRESSION_BLOCK_SIZE / 8;
self.bit_widths.advance(num_blocks as usize);
self.bit_widths.advance(num_blocks);
self.positions.advance(num_bytes_to_skip);
self.anchor_offset += (num_blocks * COMPRESSION_BLOCK_SIZE) as u64;
}

View File

@@ -1,11 +1,11 @@
use crate::postings::stacker::{MemoryArena, TermHashMap};
use stacker::{ArenaHashMap, MemoryArena};
/// IndexingContext contains all of the transient memory arenas
/// required for building the inverted index.
pub(crate) struct IndexingContext {
/// The term index is an adhoc hashmap,
/// itself backed by a dedicated memory arena.
pub term_index: TermHashMap,
pub term_index: ArenaHashMap,
/// Arena is a memory arena that stores posting lists / term frequencies / positions.
pub arena: MemoryArena,
}
@@ -13,9 +13,9 @@ pub(crate) struct IndexingContext {
impl IndexingContext {
/// Create a new IndexingContext given the size of the term hash map.
pub(crate) fn new(table_size: usize) -> IndexingContext {
let term_index = TermHashMap::new(table_size);
let term_index = ArenaHashMap::new(table_size);
IndexingContext {
arena: MemoryArena::new(),
arena: MemoryArena::default(),
term_index,
}
}

View File

@@ -1,10 +1,11 @@
use std::io;
use stacker::Addr;
use crate::fastfield::MultiValuedFastFieldWriter;
use crate::indexer::doc_id_mapping::DocIdMapping;
use crate::postings::postings_writer::SpecializedPostingsWriter;
use crate::postings::recorder::{BufferLender, DocIdRecorder, Recorder};
use crate::postings::stacker::Addr;
use crate::postings::{
FieldSerializer, IndexingContext, IndexingPosition, PostingsWriter, UnorderedTermId,
};

View File

@@ -15,9 +15,10 @@ mod recorder;
mod segment_postings;
mod serializer;
mod skip;
mod stacker;
mod term_info;
pub(crate) use stacker::compute_table_size;
pub use self::block_segment_postings::BlockSegmentPostings;
pub(crate) use self::indexing_context::IndexingContext;
pub(crate) use self::per_field_postings_writer::PerFieldPostingsWriter;
@@ -26,10 +27,9 @@ pub(crate) use self::postings_writer::{serialize_postings, IndexingPosition, Pos
pub use self::segment_postings::SegmentPostings;
pub use self::serializer::{FieldSerializer, InvertedIndexSerializer};
pub(crate) use self::skip::{BlockInfo, SkipReader};
pub(crate) use self::stacker::compute_table_size;
pub use self::term_info::TermInfo;
pub(crate) type UnorderedTermId = u64;
pub(crate) type UnorderedTermId = stacker::UnorderedId;
#[allow(clippy::enum_variant_names)]
#[derive(Debug, PartialEq, Clone, Copy, Eq)]

View File

@@ -51,7 +51,7 @@ fn posting_writer_from_field_entry(field_entry: &FieldEntry) -> Box<dyn Postings
| FieldType::Date(_)
| FieldType::Bytes(_)
| FieldType::IpAddr(_)
| FieldType::Facet(_) => Box::new(SpecializedPostingsWriter::<DocIdRecorder>::default()),
| FieldType::Facet(_) => Box::<SpecializedPostingsWriter<DocIdRecorder>>::default(),
FieldType::JsonObject(ref json_object_options) => {
if let Some(text_indexing_option) = json_object_options.get_text_indexing_options() {
match text_indexing_option.index_option() {

View File

@@ -4,8 +4,8 @@ use std::marker::PhantomData;
use std::ops::Range;
use rustc_hash::FxHashMap;
use stacker::Addr;
use super::stacker::Addr;
use crate::fastfield::MultiValuedFastFieldWriter;
use crate::fieldnorm::FieldNormReaders;
use crate::indexer::doc_id_mapping::DocIdMapping;
@@ -59,7 +59,11 @@ pub(crate) fn serialize_postings(
) -> crate::Result<HashMap<Field, FxHashMap<UnorderedTermId, TermOrdinal>>> {
let mut term_offsets: Vec<(Term<&[u8]>, Addr, UnorderedTermId)> =
Vec::with_capacity(ctx.term_index.len());
term_offsets.extend(ctx.term_index.iter());
term_offsets.extend(
ctx.term_index
.iter()
.map(|(bytes, addr, unordered_id)| (Term::wrap(bytes), addr, unordered_id)),
);
term_offsets.sort_unstable_by_key(|(k, _, _)| k.clone());
let mut unordered_term_mappings: HashMap<Field, FxHashMap<UnorderedTermId, TermOrdinal>> =
HashMap::new();

View File

@@ -1,6 +1,6 @@
use common::read_u32_vint;
use stacker::{ExpUnrolledLinkedList, MemoryArena};
use super::stacker::{ExpUnrolledLinkedList, MemoryArena};
use crate::indexer::doc_id_mapping::DocIdMapping;
use crate::postings::FieldSerializer;
use crate::DocId;
@@ -91,7 +91,7 @@ pub struct DocIdRecorder {
impl Default for DocIdRecorder {
fn default() -> Self {
DocIdRecorder {
stack: ExpUnrolledLinkedList::new(),
stack: ExpUnrolledLinkedList::default(),
current_doc: u32::MAX,
}
}
@@ -144,7 +144,7 @@ impl Recorder for DocIdRecorder {
}
/// Recorder encoding document ids, and term frequencies
#[derive(Clone, Copy)]
#[derive(Clone, Copy, Default)]
pub struct TermFrequencyRecorder {
stack: ExpUnrolledLinkedList,
current_doc: DocId,
@@ -152,17 +152,6 @@ pub struct TermFrequencyRecorder {
term_doc_freq: u32,
}
impl Default for TermFrequencyRecorder {
fn default() -> Self {
TermFrequencyRecorder {
stack: ExpUnrolledLinkedList::new(),
current_doc: 0,
current_tf: 0u32,
term_doc_freq: 0u32,
}
}
}
impl Recorder for TermFrequencyRecorder {
fn current_doc(&self) -> DocId {
self.current_doc
@@ -229,7 +218,7 @@ pub struct TfAndPositionRecorder {
impl Default for TfAndPositionRecorder {
fn default() -> Self {
TfAndPositionRecorder {
stack: ExpUnrolledLinkedList::new(),
stack: ExpUnrolledLinkedList::default(),
current_doc: u32::MAX,
term_doc_freq: 0u32,
}

View File

@@ -465,7 +465,7 @@ impl<W: Write> PostingsSerializer<W> {
/// When called after writing the postings of a term, this value is used as a
/// end offset.
fn written_bytes(&self) -> u64 {
self.output_write.written_bytes() as u64
self.output_write.written_bytes()
}
fn clear(&mut self) {

View File

@@ -1,7 +0,0 @@
mod expull;
mod memory_arena;
mod term_hashmap;
pub(crate) use self::expull::ExpUnrolledLinkedList;
pub(crate) use self::memory_arena::{Addr, MemoryArena};
pub(crate) use self::term_hashmap::{compute_table_size, TermHashMap};

View File

@@ -47,7 +47,7 @@ impl From<BitSet> for BitSetDocSet {
impl DocSet for BitSetDocSet {
fn advance(&mut self) -> DocId {
if let Some(lower) = self.cursor_tinybitset.pop_lowest() {
self.doc = (self.cursor_bucket as u32 * 64u32) | lower;
self.doc = (self.cursor_bucket * 64u32) | lower;
return self.doc;
}
if let Some(cursor_bucket) = self.docs.first_non_empty_bucket(self.cursor_bucket + 1) {

View File

@@ -3,7 +3,7 @@ use tantivy_query_grammar::Occur;
use crate::query::{BooleanWeight, DisjunctionMaxCombiner, EnableScoring, Query, Weight};
use crate::{Score, Term};
/// The disjunction max query кeturns documents matching one or more wrapped queries,
/// The disjunction max query returns documents matching one or more wrapped queries,
/// called query clauses or clauses.
///
/// If a returned document matches multiple query clauses,

View File

@@ -126,7 +126,7 @@ impl VecCursor {
}
#[inline]
fn current(&self) -> Option<u32> {
self.docs.get(self.current_pos).map(|el| *el as u32)
self.docs.get(self.current_pos).copied()
}
fn get_cleared_data(&mut self) -> &mut Vec<u32> {
self.docs.clear();
@@ -268,9 +268,9 @@ impl DocSet for IpRangeDocSet {
#[inline]
fn advance(&mut self) -> DocId {
if let Some(docid) = self.loaded_docs.next() {
docid as u32
docid
} else {
if self.next_fetch_start >= self.ip_addr_fast_field.num_docs() as u32 {
if self.next_fetch_start >= self.ip_addr_fast_field.num_docs() {
return TERMINATED;
}
self.fetch_block();
@@ -280,10 +280,7 @@ impl DocSet for IpRangeDocSet {
#[inline]
fn doc(&self) -> DocId {
self.loaded_docs
.current()
.map(|el| el as u32)
.unwrap_or(TERMINATED)
self.loaded_docs.current().unwrap_or(TERMINATED)
}
/// Advances the `DocSet` forward until reaching the target, or going to the

View File

@@ -43,7 +43,7 @@ fn refill<TScorer: Scorer, TScoreCombiner: ScoreCombiner>(
min_doc: DocId,
) {
unordered_drain_filter(scorers, |scorer| {
let horizon = min_doc + HORIZON as u32;
let horizon = min_doc + HORIZON;
loop {
let doc = scorer.doc();
if doc >= horizon {

View File

@@ -236,7 +236,7 @@ mod tests {
)
.unwrap();
let date_options_json = serde_json::to_value(&date_options).unwrap();
let date_options_json = serde_json::to_value(date_options).unwrap();
assert_eq!(
date_options_json,
serde_json::json!({

View File

@@ -193,8 +193,8 @@ mod tests {
(0..max_len)
.prop_flat_map(move |len: usize| {
(
proptest::collection::vec(1usize..20, len as usize).prop_map(integrate_delta),
proptest::collection::vec(1usize..26, len as usize).prop_map(integrate_delta),
proptest::collection::vec(1usize..20, len).prop_map(integrate_delta),
proptest::collection::vec(1usize..26, len).prop_map(integrate_delta),
)
.prop_map(|(docs, offsets)| {
(0..docs.len() - 1)

View File

@@ -4,9 +4,8 @@ use std::ops::{AddAssign, Range};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use common::{BinarySerializable, HasLen};
use common::{BinarySerializable, HasLen, OwnedBytes};
use lru::LruCache;
use ownedbytes::OwnedBytes;
use super::footer::DocStoreFooter;
use super::index::SkipIndex;
@@ -66,11 +65,7 @@ impl BlockCache {
#[cfg(test)]
fn peek_lru(&self) -> Option<usize> {
self.cache
.lock()
.unwrap()
.peek_lru()
.map(|(&k, _)| k as usize)
self.cache.lock().unwrap().peek_lru().map(|(&k, _)| k)
}
}
@@ -323,7 +318,7 @@ impl StoreReader {
/// In most cases use [`get_async`](Self::get_async)
///
/// Loads and decompresses a block asynchronously.
async fn read_block_async(&self, checkpoint: &Checkpoint) -> crate::AsyncIoResult<Block> {
async fn read_block_async(&self, checkpoint: &Checkpoint) -> io::Result<Block> {
let cache_key = checkpoint.byte_range.start;
if let Some(block) = self.cache.get_from_cache(checkpoint.byte_range.start) {
return Ok(block);

View File

@@ -142,7 +142,7 @@ impl BlockCompressorImpl {
}
fn close(mut self) -> io::Result<()> {
let header_offset: u64 = self.writer.written_bytes() as u64;
let header_offset: u64 = self.writer.written_bytes();
let docstore_footer =
DocStoreFooter::new(header_offset, Decompressor::from(self.compressor));
self.offset_index_writer.serialize_into(&mut self.writer)?;

View File

@@ -69,7 +69,7 @@ impl TermInfoBlockMeta {
let posting_end_addr = posting_start_addr + num_bits;
let positions_start_addr = posting_start_addr + self.postings_offset_nbits as usize;
// the position_end is the positions_start of the next term info.
let positions_end_addr = positions_start_addr + num_bits as usize;
let positions_end_addr = positions_start_addr + num_bits;
let doc_freq_addr = positions_start_addr + self.positions_offset_nbits as usize;
@@ -121,7 +121,7 @@ fn extract_bits(data: &[u8], addr_bits: usize, num_bits: u8) -> u64 {
}
impl TermInfoStore {
pub fn open(term_info_store_file: FileSlice) -> crate::Result<TermInfoStore> {
pub fn open(term_info_store_file: FileSlice) -> io::Result<TermInfoStore> {
let (len_slice, main_slice) = term_info_store_file.split(16);
let mut bytes = len_slice.read_bytes()?;
let len = u64::deserialize(&mut bytes)? as usize;

View File

@@ -8,7 +8,6 @@ use tantivy_fst::Automaton;
use super::term_info_store::{TermInfoStore, TermInfoStoreWriter};
use super::{TermStreamer, TermStreamerBuilder};
use crate::directory::{FileSlice, OwnedBytes};
use crate::error::DataCorruption;
use crate::postings::TermInfo;
use crate::termdict::TermOrdinal;
@@ -55,7 +54,7 @@ where W: Write
/// to insert_key and insert_value.
///
/// Prefer using `.insert(key, value)`
pub(crate) fn insert_key(&mut self, key: &[u8]) -> io::Result<()> {
pub fn insert_key(&mut self, key: &[u8]) -> io::Result<()> {
self.fst_builder
.insert(key, self.term_ord)
.map_err(convert_fst_error)?;
@@ -66,7 +65,7 @@ where W: Write
/// # Warning
///
/// Horribly dangerous internal API. See `.insert_key(...)`.
pub(crate) fn insert_value(&mut self, term_info: &TermInfo) -> io::Result<()> {
pub fn insert_value(&mut self, term_info: &TermInfo) -> io::Result<()> {
self.term_info_store_writer.write_term_info(term_info)?;
Ok(())
}
@@ -80,16 +79,20 @@ where W: Write
self.term_info_store_writer
.serialize(&mut counting_writer)?;
let footer_size = counting_writer.written_bytes();
(footer_size as u64).serialize(&mut counting_writer)?;
footer_size.serialize(&mut counting_writer)?;
}
Ok(file)
}
}
fn open_fst_index(fst_file: FileSlice) -> crate::Result<tantivy_fst::Map<OwnedBytes>> {
fn open_fst_index(fst_file: FileSlice) -> io::Result<tantivy_fst::Map<OwnedBytes>> {
let bytes = fst_file.read_bytes()?;
let fst = Fst::new(bytes)
.map_err(|err| DataCorruption::comment_only(format!("Fst data is corrupted: {:?}", err)))?;
let fst = Fst::new(bytes).map_err(|err| {
io::Error::new(
io::ErrorKind::InvalidData,
format!("Fst data is corrupted: {:?}", err),
)
})?;
Ok(tantivy_fst::Map::from(fst))
}
@@ -114,7 +117,7 @@ pub struct TermDictionary {
impl TermDictionary {
/// Opens a `TermDictionary`.
pub fn open(file: FileSlice) -> crate::Result<Self> {
pub fn open(file: FileSlice) -> io::Result<Self> {
let (main_slice, footer_len_slice) = file.split_from_end(8);
let mut footer_len_bytes = footer_len_slice.read_bytes()?;
let footer_size = u64::deserialize(&mut footer_len_bytes)?;

View File

@@ -1,50 +1,64 @@
use std::io;
mod merger;
mod sstable;
mod streamer;
mod termdict;
use std::iter::ExactSizeIterator;
use common::VInt;
use sstable::value::{ValueReader, ValueWriter};
use sstable::SSTable;
use tantivy_fst::automaton::AlwaysMatch;
pub use self::merger::TermMerger;
use self::sstable::value::{ValueReader, ValueWriter};
use self::sstable::{BlockReader, SSTable};
pub use self::streamer::{TermStreamer, TermStreamerBuilder};
pub use self::termdict::{TermDictionary, TermDictionaryBuilder};
use crate::postings::TermInfo;
/// The term dictionary contains all of the terms in
/// a `tantivy` index, in sorted order.
///
/// The `Fst` crate is used to associate terms to their
/// respective `TermOrdinal`. The `TermInfoStore` then makes it
/// possible to fetch the associated `TermInfo`.
pub type TermDictionary = sstable::Dictionary<TermSSTable>;
/// Builder for the new term dictionary.
pub type TermDictionaryBuilder<W> = sstable::Writer<W, TermInfoValueWriter>;
/// `TermStreamer` acts as a cursor over a range of terms of a segment.
/// Terms are guaranteed to be sorted.
pub type TermStreamer<'a, A = AlwaysMatch> = sstable::Streamer<'a, TermSSTable, A>;
/// SSTable used to store TermInfo objects.
pub struct TermSSTable;
impl SSTable for TermSSTable {
type Value = TermInfo;
type Reader = TermInfoReader;
type Writer = TermInfoWriter;
type ValueReader = TermInfoValueReader;
type ValueWriter = TermInfoValueWriter;
}
#[derive(Default)]
pub struct TermInfoReader {
pub struct TermInfoValueReader {
term_infos: Vec<TermInfo>,
}
impl ValueReader for TermInfoReader {
impl ValueReader for TermInfoValueReader {
type Value = TermInfo;
#[inline(always)]
fn value(&self, idx: usize) -> &TermInfo {
&self.term_infos[idx]
}
fn read(&mut self, reader: &mut BlockReader) -> io::Result<()> {
fn load(&mut self, mut data: &[u8]) -> io::Result<usize> {
let len_before = data.len();
self.term_infos.clear();
let num_els = VInt::deserialize_u64(reader)?;
let mut postings_start = VInt::deserialize_u64(reader)? as usize;
let mut positions_start = VInt::deserialize_u64(reader)? as usize;
let num_els = VInt::deserialize_u64(&mut data)?;
let mut postings_start = VInt::deserialize_u64(&mut data)? as usize;
let mut positions_start = VInt::deserialize_u64(&mut data)? as usize;
for _ in 0..num_els {
let doc_freq = VInt::deserialize_u64(reader)? as u32;
let postings_num_bytes = VInt::deserialize_u64(reader)?;
let positions_num_bytes = VInt::deserialize_u64(reader)?;
let doc_freq = VInt::deserialize_u64(&mut data)? as u32;
let postings_num_bytes = VInt::deserialize_u64(&mut data)?;
let positions_num_bytes = VInt::deserialize_u64(&mut data)?;
let postings_end = postings_start + postings_num_bytes as usize;
let positions_end = positions_start + positions_num_bytes as usize;
let term_info = TermInfo {
@@ -56,23 +70,24 @@ impl ValueReader for TermInfoReader {
postings_start = postings_end;
positions_start = positions_end;
}
Ok(())
let consumed_len = len_before - data.len();
Ok(consumed_len)
}
}
#[derive(Default)]
pub struct TermInfoWriter {
pub struct TermInfoValueWriter {
term_infos: Vec<TermInfo>,
}
impl ValueWriter for TermInfoWriter {
impl ValueWriter for TermInfoValueWriter {
type Value = TermInfo;
fn write(&mut self, term_info: &TermInfo) {
self.term_infos.push(term_info.clone());
}
fn write_block(&mut self, buffer: &mut Vec<u8>) {
fn serialize_block(&self, buffer: &mut Vec<u8>) {
VInt(self.term_infos.len() as u64).serialize_into_vec(buffer);
if self.term_infos.is_empty() {
return;
@@ -84,23 +99,23 @@ impl ValueWriter for TermInfoWriter {
VInt(term_info.postings_range.len() as u64).serialize_into_vec(buffer);
VInt(term_info.positions_range.len() as u64).serialize_into_vec(buffer);
}
}
fn clear(&mut self) {
self.term_infos.clear();
}
}
#[cfg(test)]
mod tests {
use std::io;
use sstable::value::{ValueReader, ValueWriter};
use super::BlockReader;
use crate::directory::OwnedBytes;
use crate::postings::TermInfo;
use crate::termdict::sstable_termdict::sstable::value::{ValueReader, ValueWriter};
use crate::termdict::sstable_termdict::TermInfoReader;
use crate::termdict::sstable_termdict::TermInfoValueReader;
#[test]
fn test_block_terminfos() -> io::Result<()> {
let mut term_info_writer = super::TermInfoWriter::default();
fn test_block_terminfos() {
let mut term_info_writer = super::TermInfoValueWriter::default();
term_info_writer.write(&TermInfo {
doc_freq: 120u32,
postings_range: 17..45,
@@ -117,10 +132,9 @@ mod tests {
positions_range: 1100..1302,
});
let mut buffer = Vec::new();
term_info_writer.write_block(&mut buffer);
let mut block_reader = make_block_reader(&buffer[..]);
let mut term_info_reader = TermInfoReader::default();
term_info_reader.read(&mut block_reader)?;
term_info_writer.serialize_block(&mut buffer);
let mut term_info_reader = TermInfoValueReader::default();
let num_bytes: usize = term_info_reader.load(&buffer[..]).unwrap();
assert_eq!(
term_info_reader.value(0),
&TermInfo {
@@ -129,16 +143,6 @@ mod tests {
positions_range: 10..122
}
);
assert!(block_reader.buffer().is_empty());
Ok(())
}
fn make_block_reader(data: &[u8]) -> BlockReader {
let mut buffer = (data.len() as u32).to_le_bytes().to_vec();
buffer.extend_from_slice(data);
let owned_bytes = OwnedBytes::new(buffer);
let mut block_reader = BlockReader::new(Box::new(owned_bytes));
block_reader.read_block().unwrap();
block_reader
assert_eq!(buffer.len(), num_bytes);
}
}

View File

@@ -1,95 +0,0 @@
use std::io;
use super::{vint, BlockReader};
pub trait ValueReader: Default {
type Value;
fn value(&self, idx: usize) -> &Self::Value;
fn read(&mut self, reader: &mut BlockReader) -> io::Result<()>;
}
pub trait ValueWriter: Default {
type Value;
fn write(&mut self, val: &Self::Value);
fn write_block(&mut self, writer: &mut Vec<u8>);
}
#[derive(Default)]
pub struct VoidReader;
impl ValueReader for VoidReader {
type Value = ();
fn value(&self, _idx: usize) -> &() {
&()
}
fn read(&mut self, _reader: &mut BlockReader) -> io::Result<()> {
Ok(())
}
}
#[derive(Default)]
pub struct VoidWriter;
impl ValueWriter for VoidWriter {
type Value = ();
fn write(&mut self, _val: &()) {}
fn write_block(&mut self, _writer: &mut Vec<u8>) {}
}
#[derive(Default)]
pub struct U64MonotonicWriter {
vals: Vec<u64>,
}
impl ValueWriter for U64MonotonicWriter {
type Value = u64;
fn write(&mut self, val: &Self::Value) {
self.vals.push(*val);
}
fn write_block(&mut self, writer: &mut Vec<u8>) {
let mut prev_val = 0u64;
vint::serialize_into_vec(self.vals.len() as u64, writer);
for &val in &self.vals {
let delta = val - prev_val;
vint::serialize_into_vec(delta, writer);
prev_val = val;
}
self.vals.clear();
}
}
#[derive(Default)]
pub struct U64MonotonicReader {
vals: Vec<u64>,
}
impl ValueReader for U64MonotonicReader {
type Value = u64;
fn value(&self, idx: usize) -> &Self::Value {
&self.vals[idx]
}
fn read(&mut self, reader: &mut BlockReader) -> io::Result<()> {
let len = reader.deserialize_u64() as usize;
self.vals.clear();
let mut prev_val = 0u64;
for _ in 0..len {
let delta = reader.deserialize_u64() as u64;
let val = prev_val + delta;
self.vals.push(val);
prev_val = val;
}
Ok(())
}
}

View File

@@ -1,258 +0,0 @@
use std::io;
use std::sync::Arc;
use common::BinarySerializable;
use once_cell::sync::Lazy;
use tantivy_fst::automaton::AlwaysMatch;
use tantivy_fst::Automaton;
use crate::directory::{FileSlice, OwnedBytes};
use crate::postings::TermInfo;
use crate::termdict::sstable_termdict::sstable::sstable_index::BlockAddr;
use crate::termdict::sstable_termdict::sstable::{
DeltaReader, Reader, SSTable, SSTableIndex, Writer,
};
use crate::termdict::sstable_termdict::{
TermInfoReader, TermInfoWriter, TermSSTable, TermStreamer, TermStreamerBuilder,
};
use crate::termdict::TermOrdinal;
use crate::AsyncIoResult;
pub struct TermInfoSSTable;
impl SSTable for TermInfoSSTable {
type Value = TermInfo;
type Reader = TermInfoReader;
type Writer = TermInfoWriter;
}
/// Builder for the new term dictionary.
pub struct TermDictionaryBuilder<W: io::Write> {
sstable_writer: Writer<W, TermInfoWriter>,
}
impl<W: io::Write> TermDictionaryBuilder<W> {
/// Creates a new `TermDictionaryBuilder`
pub fn create(w: W) -> io::Result<Self> {
let sstable_writer = TermSSTable::writer(w);
Ok(TermDictionaryBuilder { sstable_writer })
}
/// Inserts a `(key, value)` pair in the term dictionary.
///
/// *Keys have to be inserted in order.*
pub fn insert<K: AsRef<[u8]>>(&mut self, key_ref: K, value: &TermInfo) -> io::Result<()> {
let key = key_ref.as_ref();
self.insert_key(key)?;
self.insert_value(value)?;
Ok(())
}
/// # Warning
/// Horribly dangerous internal API
///
/// If used, it must be used by systematically alternating calls
/// to insert_key and insert_value.
///
/// Prefer using `.insert(key, value)`
#[allow(clippy::unnecessary_wraps)]
pub(crate) fn insert_key(&mut self, key: &[u8]) -> io::Result<()> {
self.sstable_writer.write_key(key);
Ok(())
}
/// # Warning
///
/// Horribly dangerous internal API. See `.insert_key(...)`.
pub(crate) fn insert_value(&mut self, term_info: &TermInfo) -> io::Result<()> {
self.sstable_writer.write_value(term_info)
}
/// Finalize writing the builder, and returns the underlying
/// `Write` object.
pub fn finish(self) -> io::Result<W> {
self.sstable_writer.finalize()
}
}
static EMPTY_TERM_DICT_FILE: Lazy<FileSlice> = Lazy::new(|| {
let term_dictionary_data: Vec<u8> = TermDictionaryBuilder::create(Vec::<u8>::new())
.expect("Creating a TermDictionaryBuilder in a Vec<u8> should never fail")
.finish()
.expect("Writing in a Vec<u8> should never fail");
FileSlice::from(term_dictionary_data)
});
/// The term dictionary contains all of the terms in
/// `tantivy index` in a sorted manner.
///
/// The `Fst` crate is used to associate terms to their
/// respective `TermOrdinal`. The `TermInfoStore` then makes it
/// possible to fetch the associated `TermInfo`.
pub struct TermDictionary {
sstable_slice: FileSlice,
sstable_index: SSTableIndex,
num_terms: u64,
}
impl TermDictionary {
pub(crate) fn sstable_reader(&self) -> io::Result<Reader<'static, TermInfoReader>> {
let data = self.sstable_slice.read_bytes()?;
Ok(TermInfoSSTable::reader(data))
}
pub(crate) fn sstable_reader_block(
&self,
block_addr: BlockAddr,
) -> io::Result<Reader<'static, TermInfoReader>> {
let data = self.sstable_slice.read_bytes_slice(block_addr.byte_range)?;
Ok(TermInfoSSTable::reader(data))
}
pub(crate) async fn sstable_reader_block_async(
&self,
block_addr: BlockAddr,
) -> AsyncIoResult<Reader<'static, TermInfoReader>> {
let data = self
.sstable_slice
.read_bytes_slice_async(block_addr.byte_range)
.await?;
Ok(TermInfoSSTable::reader(data))
}
pub(crate) fn sstable_delta_reader(&self) -> io::Result<DeltaReader<'static, TermInfoReader>> {
let data = self.sstable_slice.read_bytes()?;
Ok(TermInfoSSTable::delta_reader(data))
}
/// Opens a `TermDictionary`.
pub fn open(term_dictionary_file: FileSlice) -> crate::Result<Self> {
let (main_slice, footer_len_slice) = term_dictionary_file.split_from_end(16);
let mut footer_len_bytes: OwnedBytes = footer_len_slice.read_bytes()?;
let index_offset = u64::deserialize(&mut footer_len_bytes)?;
let num_terms = u64::deserialize(&mut footer_len_bytes)?;
let (sstable_slice, index_slice) = main_slice.split(index_offset as usize);
let sstable_index_bytes = index_slice.read_bytes()?;
let sstable_index = SSTableIndex::load(sstable_index_bytes.as_slice())?;
Ok(TermDictionary {
sstable_slice,
sstable_index,
num_terms,
})
}
/// Creates a term dictionary from the supplied bytes.
pub fn from_bytes(owned_bytes: OwnedBytes) -> crate::Result<TermDictionary> {
TermDictionary::open(FileSlice::new(Arc::new(owned_bytes)))
}
/// Creates an empty term dictionary which contains no terms.
pub fn empty() -> Self {
TermDictionary::open(EMPTY_TERM_DICT_FILE.clone()).unwrap()
}
/// Returns the number of terms in the dictionary.
/// Term ordinals range from 0 to `num_terms() - 1`.
pub fn num_terms(&self) -> usize {
self.num_terms as usize
}
/// Returns the ordinal associated with a given term.
pub fn term_ord<K: AsRef<[u8]>>(&self, key: K) -> io::Result<Option<TermOrdinal>> {
let mut term_ord = 0u64;
let key_bytes = key.as_ref();
let mut sstable_reader = self.sstable_reader()?;
while sstable_reader.advance().unwrap_or(false) {
if sstable_reader.key() == key_bytes {
return Ok(Some(term_ord));
}
term_ord += 1;
}
Ok(None)
}
/// Returns the term associated with a given term ordinal.
///
/// Term ordinals are defined as the position of the term in
/// the sorted list of terms.
///
/// Returns true if and only if the term has been found.
///
/// Regardless of whether the term is found or not,
/// the buffer may be modified.
pub fn ord_to_term(&self, ord: TermOrdinal, bytes: &mut Vec<u8>) -> io::Result<bool> {
let mut sstable_reader = self.sstable_reader()?;
bytes.clear();
for _ in 0..(ord + 1) {
if !sstable_reader.advance().unwrap_or(false) {
return Ok(false);
}
}
bytes.extend_from_slice(sstable_reader.key());
Ok(true)
}
/// Returns the number of terms in the dictionary.
pub fn term_info_from_ord(&self, term_ord: TermOrdinal) -> io::Result<TermInfo> {
let mut sstable_reader = self.sstable_reader()?;
for _ in 0..(term_ord + 1) {
if !sstable_reader.advance().unwrap_or(false) {
return Ok(TermInfo::default());
}
}
Ok(sstable_reader.value().clone())
}
/// Lookups the value corresponding to the key.
pub fn get<K: AsRef<[u8]>>(&self, key: K) -> io::Result<Option<TermInfo>> {
if let Some(block_addr) = self.sstable_index.search(key.as_ref()) {
let mut sstable_reader = self.sstable_reader_block(block_addr)?;
let key_bytes = key.as_ref();
while sstable_reader.advance().unwrap_or(false) {
if sstable_reader.key() == key_bytes {
let term_info = sstable_reader.value().clone();
return Ok(Some(term_info));
}
}
}
Ok(None)
}
/// Looks up the value corresponding to the key.
pub async fn get_async<K: AsRef<[u8]>>(&self, key: K) -> AsyncIoResult<Option<TermInfo>> {
if let Some(block_addr) = self.sstable_index.search(key.as_ref()) {
let mut sstable_reader = self.sstable_reader_block_async(block_addr).await?;
let key_bytes = key.as_ref();
while sstable_reader.advance().unwrap_or(false) {
if sstable_reader.key() == key_bytes {
let term_info = sstable_reader.value().clone();
return Ok(Some(term_info));
}
}
}
Ok(None)
}
/// Returns a range builder, to stream all of the terms
/// within an interval.
pub fn range(&self) -> TermStreamerBuilder<'_> {
TermStreamerBuilder::new(self, AlwaysMatch)
}
/// A stream of all the sorted terms.
pub fn stream(&self) -> io::Result<TermStreamer<'_>> {
self.range().into_stream()
}
/// Returns a search builder, to stream all of the terms
/// within the automaton.
pub fn search<'a, A: Automaton + 'a>(&'a self, automaton: A) -> TermStreamerBuilder<'a, A>
where A::State: Clone {
TermStreamerBuilder::<A>::new(self, automaton)
}
#[doc(hidden)]
pub async fn warm_up_dictionary(&self) -> AsyncIoResult<()> {
self.sstable_slice.read_bytes_async().await?;
Ok(())
}
}


@@ -1,5 +1,5 @@
use std::path::PathBuf;
use std::str;
use std::{io, str};
use super::{TermDictionary, TermDictionaryBuilder, TermStreamer};
use crate::directory::{Directory, FileSlice, RamDirectory, TerminatingWrite};
@@ -247,7 +247,7 @@ fn test_empty_string() -> crate::Result<()> {
Ok(())
}
fn stream_range_test_dict() -> crate::Result<TermDictionary> {
fn stream_range_test_dict() -> io::Result<TermDictionary> {
let buffer: Vec<u8> = {
let mut term_dictionary_builder = TermDictionaryBuilder::create(Vec::new())?;
for i in 0u8..10u8 {

sstable/Cargo.toml

@@ -0,0 +1,21 @@
[package]
name = "tantivy-sstable"
version = "0.1.0"
edition = "2021"
license = "MIT"
[dependencies]
common = {path="../common", package="tantivy-common"}
ciborium = "0.2"
serde = "1"
tantivy-fst = "0.4"
[dev-dependencies]
proptest = "1"
criterion = "0.4"
names = "0.14"
rand = "0.8"
[[bench]]
name = "stream_bench"
harness = false

sstable/README.md

@@ -0,0 +1,28 @@
# SSTable
The `tantivy-sstable` crate is yet another sstable crate.
It has been designed to be used in `quickwit`:
- as an alternative to the default tantivy fst dictionary.
- as a way to store the column index for dynamic fast fields.
The benefit compared to the fst crate is locality.
Searching a key in the fst crate requires downloading the entire dictionary.
Once the sstable index is downloaded, running a `get` in the sstable
crate only requires a single fetch.
Right now, the block index and the default block size have been tuned
for quickwit, and the performance of a single `get` is still very bad.
# Sorted strings?
SSTable stands for Sorted String Table.
Strings have to be inserted in sorted order.
That sorted order is used in different ways:
- it makes `get` operations and the streaming of key ranges possible.
- it allows incremental encoding of the keys.
- the front compression is leveraged to optimize
the intersection with an automaton.
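
A minimal usage sketch, relying on the `Dictionary` builder and `get` API
exercised by this crate's stream benchmark (the keys and values below are
made up for illustration):

```rust
use std::io;

use common::file_slice::FileSlice;
use tantivy_sstable::{Dictionary, MonotonicU64SSTable};

fn main() -> io::Result<()> {
    // Keys must be inserted in sorted order; the values here increase
    // monotonically, matching how the benchmark uses this codec.
    let mut builder = Dictionary::<MonotonicU64SSTable>::builder(Vec::new())?;
    builder.insert(b"apple", &1)?;
    builder.insert(b"banana", &2)?;
    let buffer = builder.finish()?;

    // Once the sstable index is loaded, a `get` only touches the block
    // that may contain the key.
    let dictionary = Dictionary::<MonotonicU64SSTable>::open(FileSlice::from(buffer))?;
    assert_eq!(dictionary.get(b"banana")?, Some(2));
    Ok(())
}
```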


@@ -0,0 +1,87 @@
use std::collections::BTreeSet;
use std::io;
use common::file_slice::FileSlice;
use criterion::{criterion_group, criterion_main, Criterion};
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use tantivy_sstable::{self, Dictionary, MonotonicU64SSTable};
const CHARSET: &[u8] = b"abcdefghij";
fn generate_key(rng: &mut impl Rng) -> String {
let len = rng.gen_range(3..12);
std::iter::from_fn(|| {
let idx = rng.gen_range(0..CHARSET.len());
Some(CHARSET[idx] as char)
})
.take(len)
.collect()
}
fn prepare_sstable() -> io::Result<Dictionary<MonotonicU64SSTable>> {
let mut rng = StdRng::from_seed([3u8; 32]);
let mut els = BTreeSet::new();
while els.len() < 100_000 {
els.insert(generate_key(&mut rng));
}
let mut dictionary_builder = Dictionary::<MonotonicU64SSTable>::builder(Vec::new())?;
for (ord, word) in els.iter().enumerate() {
dictionary_builder.insert(word, &(ord as u64))?;
}
let buffer = dictionary_builder.finish()?;
let dictionary = Dictionary::open(FileSlice::from(buffer))?;
Ok(dictionary)
}
fn stream_bench(
dictionary: &Dictionary<MonotonicU64SSTable>,
lower: &[u8],
upper: &[u8],
do_scan: bool,
) -> usize {
let mut stream = dictionary
.range()
.ge(lower)
.lt(upper)
.into_stream()
.unwrap();
if !do_scan {
return 0;
}
let mut count = 0;
while stream.advance() {
count += 1;
}
count
}
pub fn criterion_benchmark(c: &mut Criterion) {
let dict = prepare_sstable().unwrap();
c.bench_function("short_scan_init", |b| {
b.iter(|| stream_bench(&dict, b"fa", b"fana", false))
});
c.bench_function("short_scan_init_and_scan", |b| {
b.iter(|| {
assert_eq!(stream_bench(&dict, b"fa", b"faz", true), 971);
})
});
c.bench_function("full_scan_init_and_scan_full_with_bound", |b| {
b.iter(|| {
assert_eq!(stream_bench(&dict, b"", b"z", true), 100_000);
})
});
c.bench_function("full_scan_init_and_scan_full_no_bounds", |b| {
b.iter(|| {
let mut stream = dict.stream().unwrap();
let mut count = 0;
while stream.advance() {
count += 1;
}
count
})
});
}
criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);


@@ -1,6 +1,5 @@
use std::io::{self, Read};
use byteorder::{LittleEndian, ReadBytesExt};
use std::io;
use std::ops::Range;
pub struct BlockReader<'a> {
buffer: Vec<u8>,
@@ -8,6 +7,13 @@ pub struct BlockReader<'a> {
offset: usize,
}
#[inline]
fn read_u32(read: &mut dyn io::Read) -> io::Result<u32> {
let mut buf = [0u8; 4];
read.read_exact(&mut buf)?;
Ok(u32::from_le_bytes(buf))
}
impl<'a> BlockReader<'a> {
pub fn new(reader: Box<dyn io::Read + 'a>) -> BlockReader<'a> {
BlockReader {
@@ -24,13 +30,13 @@ impl<'a> BlockReader<'a> {
}
#[inline(always)]
pub fn buffer_from_to(&self, start: usize, end: usize) -> &[u8] {
&self.buffer[start..end]
pub fn buffer_from_to(&self, range: Range<usize>) -> &[u8] {
&self.buffer[range]
}
pub fn read_block(&mut self) -> io::Result<bool> {
self.offset = 0;
let block_len_res = self.reader.read_u32::<LittleEndian>();
let block_len_res = read_u32(self.reader.as_mut());
if let Err(err) = &block_len_res {
if err.kind() == io::ErrorKind::UnexpectedEof {
return Ok(false);
@@ -46,14 +52,17 @@ impl<'a> BlockReader<'a> {
Ok(true)
}
#[inline(always)]
pub fn offset(&self) -> usize {
self.offset
}
#[inline(always)]
pub fn advance(&mut self, num_bytes: usize) {
self.offset += num_bytes;
}
#[inline(always)]
pub fn buffer(&self) -> &[u8] {
&self.buffer[self.offset..]
}


@@ -16,6 +16,8 @@ where W: io::Write
block: Vec<u8>,
write: CountingWriter<BufWriter<W>>,
value_writer: TValueWriter,
// Only here to avoid allocations.
stateless_buffer: Vec<u8>,
}
impl<W, TValueWriter> DeltaWriter<W, TValueWriter>
@@ -28,6 +30,7 @@ where
block: Vec::with_capacity(BLOCK_LEN * 2),
write: CountingWriter::wrap(BufWriter::new(wrt)),
value_writer: TValueWriter::default(),
stateless_buffer: Vec::new(),
}
}
}
@@ -42,15 +45,16 @@ where
return Ok(None);
}
let start_offset = self.write.written_bytes() as usize;
// TODO avoid buffer allocation
let mut buffer = Vec::new();
self.value_writer.write_block(&mut buffer);
let buffer: &mut Vec<u8> = &mut self.stateless_buffer;
self.value_writer.serialize_block(buffer);
self.value_writer.clear();
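// On-disk block layout: a little-endian `u32` length prefix, then the
// serialized values, then the delta-encoded keys. `block_len` counts the
// value and key bytes, but not the length prefix itself.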
let block_len = buffer.len() + self.block.len();
self.write.write_all(&(block_len as u32).to_le_bytes())?;
self.write.write_all(&buffer[..])?;
self.write.write_all(&self.block[..])?;
let end_offset = self.write.written_bytes() as usize;
self.block.clear();
buffer.clear();
Ok(Some(start_offset..end_offset))
}
@@ -84,15 +88,14 @@ where
Ok(None)
}
pub fn finalize(self) -> CountingWriter<BufWriter<W>> {
pub fn finish(self) -> CountingWriter<BufWriter<W>> {
self.write
}
}
pub struct DeltaReader<'a, TValueReader> {
common_prefix_len: usize,
suffix_start: usize,
suffix_end: usize,
suffix_range: Range<usize>,
value_reader: TValueReader,
block_reader: BlockReader<'a>,
idx: usize,
@@ -105,13 +108,16 @@ where TValueReader: value::ValueReader
DeltaReader {
idx: 0,
common_prefix_len: 0,
suffix_start: 0,
suffix_end: 0,
suffix_range: 0..0,
value_reader: TValueReader::default(),
block_reader: BlockReader::new(Box::new(reader)),
}
}
pub fn empty() -> Self {
DeltaReader::new(&b""[..])
}
fn deserialize_vint(&mut self) -> u64 {
self.block_reader.deserialize_u64()
}
@@ -140,15 +146,14 @@ where TValueReader: value::ValueReader
}
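// Incremental (front) coding: each key is stored as the number of leading
// bytes it shares with the previous key (`keep`), followed by `add` fresh
// suffix bytes. For example, after the key `hello`, the pair (keep=3, add=2)
// with suffix bytes `lp` decodes to the key `hellp`.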
fn read_delta_key(&mut self) -> bool {
if let Some((keep, add)) = self.read_keep_add() {
self.common_prefix_len = keep;
self.suffix_start = self.block_reader.offset();
self.suffix_end = self.suffix_start + add;
self.block_reader.advance(add);
true
} else {
false
}
let Some((keep, add)) = self.read_keep_add() else {
return false;
};
self.common_prefix_len = keep;
let suffix_start = self.block_reader.offset();
self.suffix_range = suffix_start..(suffix_start + add);
self.block_reader.advance(add);
true
}
pub fn advance(&mut self) -> io::Result<bool> {
@@ -156,7 +161,8 @@ where TValueReader: value::ValueReader
if !self.block_reader.read_block()? {
return Ok(false);
}
self.value_reader.read(&mut self.block_reader)?;
let consumed_len = self.value_reader.load(self.block_reader.buffer())?;
self.block_reader.advance(consumed_len);
self.idx = 0;
} else {
self.idx += 1;
@@ -167,16 +173,30 @@ where TValueReader: value::ValueReader
Ok(true)
}
#[inline(always)]
pub fn common_prefix_len(&self) -> usize {
self.common_prefix_len
}
#[inline(always)]
pub fn suffix(&self) -> &[u8] {
self.block_reader
.buffer_from_to(self.suffix_start, self.suffix_end)
self.block_reader.buffer_from_to(self.suffix_range.clone())
}
#[inline(always)]
pub fn value(&self) -> &TValueReader::Value {
self.value_reader.value(self.idx)
}
}
#[cfg(test)]
mod tests {
use super::DeltaReader;
use crate::value::U64MonotonicValueReader;
#[test]
fn test_empty() {
let mut delta_reader: DeltaReader<U64MonotonicValueReader> = DeltaReader::empty();
assert!(!delta_reader.advance().unwrap());
}
}

sstable/src/dictionary.rs

@@ -0,0 +1,261 @@
use std::io;
use std::marker::PhantomData;
use std::ops::{Bound, RangeBounds};
use std::sync::Arc;
use common::file_slice::FileSlice;
use common::{BinarySerializable, OwnedBytes};
use tantivy_fst::automaton::AlwaysMatch;
use tantivy_fst::Automaton;
use crate::streamer::{Streamer, StreamerBuilder};
use crate::{BlockAddr, DeltaReader, Reader, SSTable, SSTableIndex, TermOrdinal};
/// An SSTable is a sorted map that associates `&[u8]` keys
/// with any kind of typed values.
///
/// The SSTable is organized in blocks.
/// In each block, keys and values are encoded separately.
///
/// The keys are encoded using incremental encoding.
/// The values on the other hand, are encoded according to a value-specific
/// codec defined in the TSSTable generic argument.
///
/// Finally, an index is appended to the dictionary to make it
/// possible, given a key, to identify which block contains that key.
///
/// The codec was designed in such a way that the sstable
/// reader is not aware of blocks, and yet can read any sequence of blocks,
/// as long as the slice of bytes it is given starts and stops at
/// a block boundary.
///
/// (See also README.md)
pub struct Dictionary<TSSTable: SSTable> {
pub sstable_slice: FileSlice,
pub sstable_index: SSTableIndex,
num_terms: u64,
phantom_data: PhantomData<TSSTable>,
}
impl<TSSTable: SSTable> Dictionary<TSSTable> {
pub fn builder<W: io::Write>(wrt: W) -> io::Result<crate::Writer<W, TSSTable::ValueWriter>> {
Ok(TSSTable::writer(wrt))
}
pub(crate) fn sstable_reader(&self) -> io::Result<Reader<'static, TSSTable::ValueReader>> {
let data = self.sstable_slice.read_bytes()?;
Ok(TSSTable::reader(data))
}
pub(crate) fn sstable_reader_block(
&self,
block_addr: BlockAddr,
) -> io::Result<Reader<'static, TSSTable::ValueReader>> {
let data = self.sstable_slice.read_bytes_slice(block_addr.byte_range)?;
Ok(TSSTable::reader(data))
}
pub(crate) async fn sstable_reader_block_async(
&self,
block_addr: BlockAddr,
) -> io::Result<Reader<'static, TSSTable::ValueReader>> {
let data = self
.sstable_slice
.read_bytes_slice_async(block_addr.byte_range)
.await?;
Ok(TSSTable::reader(data))
}
pub(crate) fn sstable_delta_reader_for_key_range(
&self,
key_range: impl RangeBounds<[u8]>,
) -> io::Result<DeltaReader<'static, TSSTable::ValueReader>> {
let slice = self.file_slice_for_range(key_range);
let data = slice.read_bytes()?;
Ok(TSSTable::delta_reader(data))
}
/// This function returns a file slice covering a set of sstable blocks
/// that includes the key range passed in arguments.
///
/// It works by identifying
/// - `first_block`: the block containing the start boundary key
/// - `last_block`: the block containing the end boundary key.
///
/// It then returns the range spanning over all blocks between
/// and including `first_block` and `last_block`, i.e.:
/// `[first_block.start_offset .. last_block.end_offset)`
///
/// Technically this function does not provide the tightest fit, as,
/// for simplification, it treats the start bound of the `key_range`
/// as if it were inclusive, even if it is exclusive.
/// In the rare edge case where a user asks for `(start_key, end_key]`
/// and `start_key` happens to be the last key of a block, we return a
/// slice in which the first block was not actually necessary.
fn file_slice_for_range(&self, key_range: impl RangeBounds<[u8]>) -> FileSlice {
let start_bound: Bound<usize> = match key_range.start_bound() {
Bound::Included(key) | Bound::Excluded(key) => {
let Some(first_block_addr) = self.sstable_index.search_block(key) else {
return FileSlice::empty();
};
Bound::Included(first_block_addr.byte_range.start)
}
Bound::Unbounded => Bound::Unbounded,
};
let end_bound: Bound<usize> = match key_range.end_bound() {
Bound::Included(key) | Bound::Excluded(key) => {
if let Some(block_addr) = self.sstable_index.search_block(key) {
Bound::Excluded(block_addr.byte_range.end)
} else {
Bound::Unbounded
}
}
Bound::Unbounded => Bound::Unbounded,
};
self.sstable_slice.slice((start_bound, end_bound))
}
/// Opens a `Dictionary`.
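///
/// The file is expected to end with a 16-byte footer holding two
/// `u64` values: the byte offset at which the sstable index starts,
/// followed by the number of terms. Everything before that offset is
/// sstable data; the bytes between the offset and the footer hold the
/// serialized `SSTableIndex`.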
pub fn open(term_dictionary_file: FileSlice) -> io::Result<Self> {
let (main_slice, footer_len_slice) = term_dictionary_file.split_from_end(16);
let mut footer_len_bytes: OwnedBytes = footer_len_slice.read_bytes()?;
let index_offset = u64::deserialize(&mut footer_len_bytes)?;
let num_terms = u64::deserialize(&mut footer_len_bytes)?;
let (sstable_slice, index_slice) = main_slice.split(index_offset as usize);
let sstable_index_bytes = index_slice.read_bytes()?;
let sstable_index = SSTableIndex::load(sstable_index_bytes.as_slice())
.map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "SSTable corruption"))?;
Ok(Dictionary {
sstable_slice,
sstable_index,
num_terms,
phantom_data: PhantomData,
})
}
/// Creates a term dictionary from the supplied bytes.
pub fn from_bytes(owned_bytes: OwnedBytes) -> io::Result<Self> {
Dictionary::open(FileSlice::new(Arc::new(owned_bytes)))
}
/// Creates an empty term dictionary which contains no terms.
pub fn empty() -> Self {
let term_dictionary_data: Vec<u8> = Self::builder(Vec::<u8>::new())
.expect("Creating a TermDictionaryBuilder in a Vec<u8> should never fail")
.finish()
.expect("Writing in a Vec<u8> should never fail");
let empty_dict_file = FileSlice::from(term_dictionary_data);
Dictionary::open(empty_dict_file).unwrap()
}
/// Returns the number of terms in the dictionary.
/// Term ordinals range from 0 to `num_terms() - 1`.
pub fn num_terms(&self) -> usize {
self.num_terms as usize
}
/// Returns the ordinal associated with a given term.
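///
/// Note: this performs a linear scan over the entire sstable.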
pub fn term_ord<K: AsRef<[u8]>>(&self, key: K) -> io::Result<Option<TermOrdinal>> {
let mut term_ord = 0u64;
let key_bytes = key.as_ref();
let mut sstable_reader = self.sstable_reader()?;
while sstable_reader.advance().unwrap_or(false) {
if sstable_reader.key() == key_bytes {
return Ok(Some(term_ord));
}
term_ord += 1;
}
Ok(None)
}
/// Returns the term associated with a given term ordinal.
///
/// Term ordinals are defined as the position of the term in
/// the sorted list of terms.
///
/// Returns true if and only if the term has been found.
///
/// Regardless of whether the term is found or not,
/// the buffer may be modified.
pub fn ord_to_term(&self, ord: TermOrdinal, bytes: &mut Vec<u8>) -> io::Result<bool> {
let mut sstable_reader = self.sstable_reader()?;
bytes.clear();
for _ in 0..(ord + 1) {
if !sstable_reader.advance().unwrap_or(false) {
return Ok(false);
}
}
bytes.extend_from_slice(sstable_reader.key());
Ok(true)
}
/// Returns the value associated with a given term ordinal,
/// or `None` if the ordinal is out of bounds.
pub fn term_info_from_ord(&self, term_ord: TermOrdinal) -> io::Result<Option<TSSTable::Value>> {
let mut sstable_reader = self.sstable_reader()?;
for _ in 0..(term_ord + 1) {
if !sstable_reader.advance().unwrap_or(false) {
return Ok(None);
}
}
Ok(Some(sstable_reader.value().clone()))
}
/// Looks up the value corresponding to the key.
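///
/// Unlike `term_ord`, only the single block that may contain the key
/// is read and scanned.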
pub fn get<K: AsRef<[u8]>>(&self, key: K) -> io::Result<Option<TSSTable::Value>> {
if let Some(block_addr) = self.sstable_index.search_block(key.as_ref()) {
let mut sstable_reader = self.sstable_reader_block(block_addr)?;
let key_bytes = key.as_ref();
while sstable_reader.advance().unwrap_or(false) {
if sstable_reader.key() == key_bytes {
let value = sstable_reader.value().clone();
return Ok(Some(value));
}
}
}
Ok(None)
}
/// Looks up the value corresponding to the key.
pub async fn get_async<K: AsRef<[u8]>>(&self, key: K) -> io::Result<Option<TSSTable::Value>> {
if let Some(block_addr) = self.sstable_index.search_block(key.as_ref()) {
let mut sstable_reader = self.sstable_reader_block_async(block_addr).await?;
let key_bytes = key.as_ref();
while sstable_reader.advance().unwrap_or(false) {
if sstable_reader.key() == key_bytes {
let value = sstable_reader.value().clone();
return Ok(Some(value));
}
}
}
Ok(None)
}
/// Returns a range builder, to stream all of the terms
/// within an interval.
pub fn range(&self) -> StreamerBuilder<'_, TSSTable> {
StreamerBuilder::new(self, AlwaysMatch)
}
/// A stream of all the sorted terms.
pub fn stream(&self) -> io::Result<Streamer<'_, TSSTable>> {
self.range().into_stream()
}
/// Returns a search builder, to stream all of the terms
/// within the automaton.
pub fn search<'a, A: Automaton + 'a>(
&'a self,
automaton: A,
) -> StreamerBuilder<'a, TSSTable, A>
where
A::State: Clone,
{
StreamerBuilder::<TSSTable, A>::new(self, automaton)
}
#[doc(hidden)]
pub async fn warm_up_dictionary(&self) -> io::Result<()> {
self.sstable_slice.read_bytes_async().await?;
Ok(())
}
}
