Compare commits

...

22 Commits

Author SHA1 Message Date
Paul Masurel
d7a8053cc2 Introduced a select cursor. 2023-01-20 23:27:39 +09:00
Paul Masurel
9548570e88 Fixing broken test build 2023-01-20 18:18:32 +09:00
Paul Masurel
9a296b29b7 Renamed dense file to dense.rs 2023-01-20 17:22:25 +09:00
PSeitz
b31fd389d8 collect columns for merge (#1812)
* collect columns for merge

* return column_type from, fix visibility

* fix

Co-authored-by: Paul Masurel <paul@quickwit.io>
2023-01-20 07:58:29 +01:00
Paul Masurel
89cec79813 Make it possible to force a column type and intricate bugfix. (#1815) 2023-01-20 14:30:56 +09:00
PSeitz
d09d91a856 fix tests (#1813) 2023-01-19 23:41:21 +09:00
PSeitz
50d8a8bc32 Update README (#1804)
Some parts are outdated

For the debugging tutorial, debugging is really easy now with VSCode, and there are plenty of other sources for debugging rust
2023-01-19 18:09:45 +09:00
Paul Masurel
08919a2900 Improvement on the scalar / random bitpacker code. (#1781)
* Improvement on the scalar / random bitpacker code.

Added proptesting
Added simple benchmark
Added assert and comments on the very non trivial hidden contract
Remove the need for an extra padding.

The last point introduces a small performance regression (~10%).

* Fixing unit tests
2023-01-19 18:09:13 +09:00
Lonre Wang
8ba333f1b4 Typo fix (#1803)
* Update text_options.rs

* Update src/schema/text_options.rs

Co-authored-by: Paul Masurel <paul@quickwit.io>
2023-01-19 17:56:05 +09:00
PSeitz
a2ca12995e update aggregation docs (#1807) 2023-01-19 09:52:47 +01:00
Paul Masurel
e3d504d833 Minor code cleanup (#1810) 2023-01-19 17:47:26 +09:00
Paul Masurel
5a42c5aae9 Add support for multivalues (#1809) 2023-01-19 16:55:01 +09:00
Paul Masurel
a86b104a40 Differentiating between str and bytes, + unit test 2023-01-19 14:38:12 +09:00
PSeitz
f9abd256b7 add ip addr to columnar (#1805) 2023-01-19 05:36:06 +01:00
Paul Masurel
9f42b6440a Completed unit test for dictionary encoded column 2023-01-19 12:15:27 +09:00
Paul Masurel
c723ed3f0b Columnar merge (#1806) 2023-01-19 11:52:27 +09:00
trinity-1686a
d72ea7d353 modify getters for sstable metadata (#1793)
* add way to get up to `limit` terms from sstable

* make some function of sstable load less data

* add some tests to sstable

* add tests on sstable dictionary

* fix some bugs with sstable
2023-01-18 14:42:55 +01:00
Paul Masurel
5180b612ef Removing the demuxer code (#1799) 2023-01-18 16:12:35 +09:00
PSeitz
f687b3a5aa start migrate Field to &str (#1772)
start migrate Field to &str in preparation of columnar
return Result for get_field
2023-01-18 16:12:07 +09:00
PSeitz
c4af63e588 add rename (#1797) 2023-01-18 13:28:37 +09:00
Adrien Guillo
4b343b3189 Merge pull request #1802 from quickwit-oss/guilload/clippy-fixes
Fix some Clippy warnings
2023-01-17 10:39:55 -05:00
Adrien Guillo
c51d9f9f83 Fix some Clippy warnings 2023-01-17 10:17:51 -05:00
85 changed files with 2391 additions and 1269 deletions

View File

@@ -41,7 +41,7 @@ Your mileage WILL vary depending on the nature of queries and their load.
 - SIMD integer compression when the platform/CPU includes the SSE2 instruction set
 - Single valued and multivalued u64, i64, and f64 fast fields (equivalent of doc values in Lucene)
 - `&[u8]` fast fields
-- Text, i64, u64, f64, dates, and hierarchical facet fields
+- Text, i64, u64, f64, dates, ip, bool, and hierarchical facet fields
 - Compressed document store (LZ4, Zstd, None, Brotli, Snap)
 - Range queries
 - Faceted search
@@ -80,56 +80,21 @@ There are many ways to support this project.
 # Contributing code
 We use the GitHub Pull Request workflow: reference a GitHub ticket and/or include a comprehensive commit message when opening a PR.
+Feel free to update CHANGELOG.md with your contribution.
 ## Tokenizer
 When implementing a tokenizer for tantivy, depend on the `tantivy-tokenizer-api` crate.
-## Minimum supported Rust version
-Tantivy currently requires at least Rust 1.62 or later to compile.
 ## Clone and build locally
 Tantivy compiles on stable Rust.
 To check out and run tests, you can simply run:
 ```bash
 git clone https://github.com/quickwit-oss/tantivy.git
 cd tantivy
-cargo build
-```
-## Run tests
-Some tests will not run with just `cargo test` because of `fail-rs`.
-To run the tests exhaustively, run `./run-tests.sh`.
-## Debug
-You might find it useful to step through the program with a debugger.
-### A failing test
-Make sure you haven't run `cargo clean` after the most recent `cargo test` or `cargo build` to guarantee that the `target/` directory exists. Use this bash script to find the name of the most recent debug build of Tantivy and run it under `rust-gdb`:
-```bash
-find target/debug/ -maxdepth 1 -executable -type f -name "tantivy*" -printf '%TY-%Tm-%Td %TT %p\n' | sort -r | cut -d " " -f 3 | xargs -I RECENT_DBG_TANTIVY rust-gdb RECENT_DBG_TANTIVY
-```
-Now that you are in `rust-gdb`, you can set breakpoints on lines and methods that match your source code and run the debug executable with flags that you normally pass to `cargo test` like this:
-```bash
-$gdb run --test-threads 1 --test $NAME_OF_TEST
-```
-### An example
-By default, `rustc` compiles everything in the `examples/` directory in debug mode. This makes it easy for you to make examples to reproduce bugs:
-```bash
-rust-gdb target/debug/examples/$EXAMPLE_NAME
-$ gdb run
-```
+cargo test
+```
 # Companies Using Tantivy

View File

@@ -34,7 +34,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
             let index = Index::create_in_ram(schema.clone());
             let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
             for _ in 0..NUM_REPEATS {
-                for doc_json in HDFS_LOGS.trim().split("\n") {
+                for doc_json in HDFS_LOGS.trim().split('\n') {
                     let doc = schema.parse_document(doc_json).unwrap();
                     index_writer.add_document(doc).unwrap();
                 }
@@ -46,7 +46,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
             let index = Index::create_in_ram(schema.clone());
             let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
             for _ in 0..NUM_REPEATS {
-                for doc_json in HDFS_LOGS.trim().split("\n") {
+                for doc_json in HDFS_LOGS.trim().split('\n') {
                     let doc = schema.parse_document(doc_json).unwrap();
                     index_writer.add_document(doc).unwrap();
                 }
@@ -59,7 +59,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
             let index = Index::create_in_ram(schema_with_store.clone());
             let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
             for _ in 0..NUM_REPEATS {
-                for doc_json in HDFS_LOGS.trim().split("\n") {
+                for doc_json in HDFS_LOGS.trim().split('\n') {
                     let doc = schema.parse_document(doc_json).unwrap();
                     index_writer.add_document(doc).unwrap();
                 }
@@ -71,7 +71,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
             let index = Index::create_in_ram(schema_with_store.clone());
             let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
             for _ in 0..NUM_REPEATS {
-                for doc_json in HDFS_LOGS.trim().split("\n") {
+                for doc_json in HDFS_LOGS.trim().split('\n') {
                     let doc = schema.parse_document(doc_json).unwrap();
                     index_writer.add_document(doc).unwrap();
                 }
@@ -85,7 +85,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
             let json_field = dynamic_schema.get_field("json").unwrap();
             let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
             for _ in 0..NUM_REPEATS {
-                for doc_json in HDFS_LOGS.trim().split("\n") {
+                for doc_json in HDFS_LOGS.trim().split('\n') {
                     let json_val: serde_json::Map<String, serde_json::Value> =
                         serde_json::from_str(doc_json).unwrap();
                     let doc = tantivy::doc!(json_field=>json_val);
@@ -101,7 +101,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
             let json_field = dynamic_schema.get_field("json").unwrap();
             let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
             for _ in 0..NUM_REPEATS {
-                for doc_json in HDFS_LOGS.trim().split("\n") {
+                for doc_json in HDFS_LOGS.trim().split('\n') {
                     let json_val: serde_json::Map<String, serde_json::Value> =
                         serde_json::from_str(doc_json).unwrap();
                     let doc = tantivy::doc!(json_field=>json_val);
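Every hunk in this file makes the same one-character change: `split("\n")` becomes `split('\n')`. A `char` pattern lets the standard library use a dedicated single-byte search instead of generic substring matching, which is what clippy's `single_char_pattern` lint nudges toward. A minimal standalone illustration (not part of the diff):

```rust
// Both forms behave identically; the char pattern is the idiomatic
// (and slightly cheaper) choice for single-character separators.
fn main() {
    let logs = "line1\nline2\nline3";
    let a: Vec<&str> = logs.split("\n").collect(); // &str pattern
    let b: Vec<&str> = logs.split('\n').collect(); // char pattern
    assert_eq!(a, b);
}
```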

View File

@@ -15,3 +15,7 @@ homepage = "https://github.com/quickwit-oss/tantivy"
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
 [dependencies]
+
+[dev-dependencies]
+rand = "0.8"
+proptest = "1"

View File

@@ -4,9 +4,39 @@ extern crate test;
 #[cfg(test)]
 mod tests {
-    use tantivy_bitpacker::BlockedBitpacker;
+    use rand::seq::IteratorRandom;
+    use rand::thread_rng;
+    use tantivy_bitpacker::{BitPacker, BitUnpacker, BlockedBitpacker};
     use test::Bencher;
 
+    #[inline(never)]
+    fn create_bitpacked_data(bit_width: u8, num_els: u32) -> Vec<u8> {
+        let mut bitpacker = BitPacker::new();
+        let mut buffer = Vec::new();
+        for _ in 0..num_els {
+            // The values do not matter.
+            bitpacker.write(0u64, bit_width, &mut buffer).unwrap();
+        }
+        bitpacker.flush(&mut buffer).unwrap();
+        buffer
+    }
+
+    #[bench]
+    fn bench_bitpacking_read(b: &mut Bencher) {
+        let bit_width = 3;
+        let num_els = 1_000_000u32;
+        let bit_unpacker = BitUnpacker::new(bit_width);
+        let data = create_bitpacked_data(bit_width, num_els);
+        let idxs: Vec<u32> = (0..num_els).choose_multiple(&mut thread_rng(), 100_000);
+        b.iter(|| {
+            let mut out = 0u64;
+            for &idx in &idxs {
+                out = out.wrapping_add(bit_unpacker.get(idx, &data[..]));
+            }
+            out
+        });
+    }
+
     #[bench]
     fn bench_blockedbitp_read(b: &mut Bencher) {
         let mut blocked_bitpacker = BlockedBitpacker::new();
@@ -14,9 +44,9 @@ mod tests {
             blocked_bitpacker.add(val * val);
         }
         b.iter(|| {
-            let mut out = 0;
+            let mut out = 0u64;
             for val in 0..=21500 {
-                out = blocked_bitpacker.get(val);
+                out = out.wrapping_add(blocked_bitpacker.get(val));
            }
             out
         });
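Two things change in the pre-existing benchmark: the accumulator gets an explicit `u64` type, and each read is folded into the result with `wrapping_add` instead of overwriting it. Overwriting lets the optimizer discard all but the last `get`; a wrapping sum depends on every read and cannot panic on overflow in debug builds. A standalone sketch of the pattern (the function name is illustrative):

```rust
fn expensive_read(i: u64) -> u64 {
    // Stand-in for a bitpacked read.
    (i * i) % 1013
}

fn main() {
    let mut out = 0u64;
    for i in 0..21_500u64 {
        // Folding every result into `out` keeps each call observable to the
        // optimizer; wrapping_add avoids debug-build overflow panics.
        out = out.wrapping_add(expensive_read(i));
    }
    println!("checksum: {out}");
}
```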

View File

@@ -56,27 +56,31 @@ impl BitPacker {
     pub fn close<TWrite: io::Write>(&mut self, output: &mut TWrite) -> io::Result<()> {
         self.flush(output)?;
-        // Padding the write file to simplify reads.
-        output.write_all(&[0u8; 7])?;
         Ok(())
     }
 }
 
-#[derive(Clone, Debug, Default)]
+#[derive(Clone, Debug, Default, Copy)]
 pub struct BitUnpacker {
-    num_bits: u64,
+    num_bits: u32,
     mask: u64,
 }
 
 impl BitUnpacker {
+    /// Creates a bit unpacker that assumes the same bit width for all values.
+    ///
+    /// The bit unpacker works by doing an unaligned read of 8 bytes.
+    /// For this reason, values of `num_bits` in [57..63] are forbidden.
     pub fn new(num_bits: u8) -> BitUnpacker {
+        assert!(num_bits <= 7 * 8 || num_bits == 64);
         let mask: u64 = if num_bits == 64 {
             !0u64
         } else {
             (1u64 << num_bits) - 1u64
         };
         BitUnpacker {
-            num_bits: u64::from(num_bits),
+            num_bits: u32::from(num_bits),
             mask,
         }
     }
@@ -87,28 +91,40 @@ impl BitUnpacker {
     #[inline]
     pub fn get(&self, idx: u32, data: &[u8]) -> u64 {
-        if self.num_bits == 0 {
-            return 0u64;
-        }
-        let addr_in_bits = idx * self.num_bits as u32;
+        let addr_in_bits = idx * self.num_bits;
         let addr = (addr_in_bits >> 3) as usize;
+        if addr + 8 > data.len() {
+            if self.num_bits == 0 {
+                return 0;
+            }
+            let bit_shift = addr_in_bits & 7;
+            return self.get_slow_path(addr, bit_shift, data);
+        }
         let bit_shift = addr_in_bits & 7;
-        debug_assert!(
-            addr + 8 <= data.len(),
-            "The fast field should have been padded with 7 bytes."
-        );
         let bytes: [u8; 8] = (&data[addr..addr + 8]).try_into().unwrap();
         let val_unshifted_unmasked: u64 = u64::from_le_bytes(bytes);
         let val_shifted = val_unshifted_unmasked >> bit_shift;
         val_shifted & self.mask
     }
+
+    #[inline(never)]
+    fn get_slow_path(&self, addr: usize, bit_shift: u32, data: &[u8]) -> u64 {
+        let mut bytes: [u8; 8] = [0u8; 8];
+        let available_bytes = data.len() - addr;
+        // This function is only meant to be called if we did not have 8 bytes to load.
+        debug_assert!(available_bytes < 8);
+        bytes[..available_bytes].copy_from_slice(&data[addr..]);
+        let val_unshifted_unmasked: u64 = u64::from_le_bytes(bytes);
+        let val_shifted = val_unshifted_unmasked >> bit_shift;
+        val_shifted & self.mask
+    }
 }
 
 #[cfg(test)]
 mod test {
     use super::{BitPacker, BitUnpacker};
 
-    fn create_fastfield_bitpacker(len: usize, num_bits: u8) -> (BitUnpacker, Vec<u64>, Vec<u8>) {
+    fn create_bitpacker(len: usize, num_bits: u8) -> (BitUnpacker, Vec<u64>, Vec<u8>) {
         let mut data = Vec::new();
         let mut bitpacker = BitPacker::new();
         let max_val: u64 = (1u64 << num_bits as u64) - 1u64;
@@ -119,13 +135,13 @@ mod test {
             bitpacker.write(val, num_bits, &mut data).unwrap();
         }
         bitpacker.close(&mut data).unwrap();
-        assert_eq!(data.len(), ((num_bits as usize) * len + 7) / 8 + 7);
+        assert_eq!(data.len(), ((num_bits as usize) * len + 7) / 8);
         let bitunpacker = BitUnpacker::new(num_bits);
         (bitunpacker, vals, data)
     }
 
     fn test_bitpacker_util(len: usize, num_bits: u8) {
-        let (bitunpacker, vals, data) = create_fastfield_bitpacker(len, num_bits);
+        let (bitunpacker, vals, data) = create_bitpacker(len, num_bits);
         for (i, val) in vals.iter().enumerate() {
             assert_eq!(bitunpacker.get(i as u32, &data), *val);
         }
@@ -139,4 +155,49 @@ mod test {
         test_bitpacker_util(6, 14);
         test_bitpacker_util(1000, 14);
     }
+
+    use proptest::prelude::*;
+
+    fn num_bits_strategy() -> impl Strategy<Value = u8> {
+        prop_oneof!(Just(0), Just(1), 2u8..56u8, Just(56), Just(64))
+    }
+
+    fn vals_strategy() -> impl Strategy<Value = (u8, Vec<u64>)> {
+        (num_bits_strategy(), 0usize..100usize).prop_flat_map(|(num_bits, len)| {
+            let max_val = if num_bits == 64 {
+                u64::MAX
+            } else {
+                (1u64 << num_bits as u32) - 1
+            };
+            let vals = proptest::collection::vec(0..=max_val, len);
+            vals.prop_map(move |vals| (num_bits, vals))
+        })
+    }
+
+    fn test_bitpacker_aux(num_bits: u8, vals: &[u64]) {
+        let mut buffer: Vec<u8> = Vec::new();
+        let mut bitpacker = BitPacker::new();
+        for &val in vals {
+            bitpacker.write(val, num_bits, &mut buffer).unwrap();
+        }
+        bitpacker.flush(&mut buffer).unwrap();
+        assert_eq!(buffer.len(), (vals.len() * num_bits as usize + 7) / 8);
+        let bitunpacker = BitUnpacker::new(num_bits);
+        let max_val = if num_bits == 64 {
+            u64::MAX
+        } else {
+            (1u64 << num_bits) - 1
+        };
+        for (i, val) in vals.iter().copied().enumerate() {
+            assert!(val <= max_val);
+            assert_eq!(bitunpacker.get(i as u32, &buffer), val);
+        }
+    }
+
+    proptest::proptest! {
+        #[test]
+        fn test_bitpacker_proptest((num_bits, vals) in vals_strategy()) {
+            test_bitpacker_aux(num_bits, &vals);
+        }
+    }
 }
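The core of this change: `close` no longer pads the buffer with 7 extra bytes, so `get` can no longer assume 8 readable bytes at every position and instead branches to a zero-padding slow path near the end of the buffer (the small regression mentioned in the commit message). The fast-path arithmetic itself is unchanged. A simplified, self-contained sketch of that arithmetic, with its assumptions noted in comments:

```rust
// Fixed-width bit addressing: value i lives at bit offset i * num_bits.
// An unaligned 8-byte little-endian load, a shift, and a mask recover it,
// provided num_bits <= 56 so that value plus shift always fit in 8 bytes.
fn get_bits(data: &[u8], idx: u32, num_bits: u32) -> u64 {
    let mask = if num_bits == 64 { !0u64 } else { (1u64 << num_bits) - 1 };
    let addr_in_bits = idx as usize * num_bits as usize;
    let addr = addr_in_bits / 8;
    let bit_shift = (addr_in_bits % 8) as u32;
    // Fast path only: assumes 8 readable bytes at `addr` (the real code
    // falls back to a zero-padded slow path near the end of the buffer).
    let bytes: [u8; 8] = data[addr..addr + 8].try_into().unwrap();
    (u64::from_le_bytes(bytes) >> bit_shift) & mask
}

fn main() {
    // Two 3-bit values, 0b101 then 0b011, packed little-endian into one byte.
    let data = [0b0001_1101u8, 0, 0, 0, 0, 0, 0, 0];
    assert_eq!(get_bits(&data, 0, 3), 0b101);
    assert_eq!(get_bits(&data, 1, 3), 0b011);
}
```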

View File

@@ -35,6 +35,7 @@ remove all doc_id occurences -> row_id
 use the rank & select naming in unit tests branch.
 multi-linear -> blockwise
 linear codec -> simply a multiplication for the index column
+rename columnar to something more explicit, like column_dictionary or columnar_table
 # Other
 fix enhance column-cli

View File

@@ -5,9 +5,16 @@ use std::sync::Arc;
 use sstable::{Dictionary, VoidSSTable};
 
 use crate::column::Column;
-use crate::column_index::ColumnIndex;
+use crate::RowId;
 
 /// Dictionary encoded column.
+///
+/// The column simply gives access to a regular u64 column, in
+/// which the values are term ordinals.
+///
+/// These ordinals are ids that uniquely identify the bytes stored in
+/// the column. They are small, and sorted in the same order
+/// as the `term_ord_column`.
 #[derive(Clone)]
 pub struct BytesColumn {
     pub(crate) dictionary: Arc<Dictionary<VoidSSTable>>,
@@ -15,26 +22,57 @@ pub struct BytesColumn {
 }
 
 impl BytesColumn {
+    /// Fills the given `output` buffer with the term associated to the ordinal `ord`.
+    ///
     /// Returns `false` if the term does not exist (e.g. `term_ord` is greater or equal to the
     /// overall number of terms).
-    pub fn term_ord_to_str(&self, term_ord: u64, output: &mut Vec<u8>) -> io::Result<bool> {
-        self.dictionary.ord_to_term(term_ord, output)
+    pub fn ord_to_bytes(&self, ord: u64, output: &mut Vec<u8>) -> io::Result<bool> {
+        self.dictionary.ord_to_term(ord, output)
     }
 
-    pub fn term_ords(&self) -> &Column<u64> {
+    /// Returns the number of rows in the column.
+    pub fn num_rows(&self) -> RowId {
+        self.term_ord_column.num_rows()
+    }
+
+    /// Returns the column of ordinals.
+    pub fn ords(&self) -> &Column<u64> {
         &self.term_ord_column
     }
 }
 
-impl Deref for BytesColumn {
-    type Target = ColumnIndex<'static>;
-
-    fn deref(&self) -> &Self::Target {
-        &**self.term_ords()
-    }
-}
-
-#[cfg(test)]
-mod tests {
-    use crate::{ColumnarReader, ColumnarWriter};
+#[derive(Clone)]
+pub struct StrColumn(BytesColumn);
+
+impl From<BytesColumn> for StrColumn {
+    fn from(bytes_col: BytesColumn) -> Self {
+        StrColumn(bytes_col)
+    }
+}
+
+impl StrColumn {
+    /// Fills the buffer with the term at `term_ord`.
+    pub fn ord_to_str(&self, term_ord: u64, output: &mut String) -> io::Result<bool> {
+        unsafe {
+            let buf = output.as_mut_vec();
+            self.0.dictionary.ord_to_term(term_ord, buf)?;
+            // TODO consider removing the check if it hurts performance.
+            if std::str::from_utf8(buf.as_slice()).is_err() {
+                buf.clear();
+                return Err(io::Error::new(
+                    io::ErrorKind::InvalidData,
+                    "Not valid utf-8",
+                ));
+            }
+        }
+        Ok(true)
+    }
+}
+
+impl Deref for StrColumn {
+    type Target = BytesColumn;
+
+    fn deref(&self) -> &Self::Target {
+        &self.0
+    }
 }
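For readers new to the columnar crate, here is a toy model of what `BytesColumn` decomposes into: `ords()` exposes a plain u64 column of ordinals, and `ord_to_bytes` resolves an ordinal against a sorted dictionary. This is illustrative only, not the crate's API:

```rust
fn main() {
    // A sorted dictionary of distinct terms; a term's ordinal is its
    // position in this sorted order.
    let dictionary: Vec<&[u8]> = vec![b"apple" as &[u8], b"banana", b"cherry"];
    // Each row stores only a small ordinal instead of the full bytes.
    let ords: Vec<u64> = vec![2, 0, 0, 1];
    // Looking up a row = ordinal fetch + dictionary fetch.
    let row = 3;
    let term = dictionary[ords[row] as usize];
    assert_eq!(term, b"banana".as_slice());
}
```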

View File

@@ -5,8 +5,11 @@ use std::ops::Deref;
 use std::sync::Arc;
 
 use common::BinarySerializable;
-pub use dictionary_encoded::BytesColumn;
-pub use serialize::{open_column_bytes, open_column_u64, serialize_column_u64};
+pub use dictionary_encoded::{BytesColumn, StrColumn};
+pub use serialize::{
+    open_column_bytes, open_column_u128, open_column_u64, serialize_column_mappable_to_u128,
+    serialize_column_mappable_to_u64,
+};
 
 use crate::column_index::ColumnIndex;
 use crate::column_values::ColumnValues;
@@ -18,21 +21,43 @@ pub struct Column<T> {
     pub values: Arc<dyn ColumnValues<T>>,
 }
 
+use crate::column_index::Set;
+
 impl<T: PartialOrd> Column<T> {
-    pub fn first(&self, row_id: RowId) -> Option<T> {
+    pub fn num_rows(&self) -> RowId {
         match &self.idx {
-            ColumnIndex::Full => Some(self.values.get_val(row_id)),
-            ColumnIndex::Optional(opt_idx) => {
-                let value_row_idx = opt_idx.rank_if_exists(row_id)?;
-                Some(self.values.get_val(value_row_idx))
-            }
-            ColumnIndex::Multivalued(_multivalued_index) => {
-                todo!();
+            ColumnIndex::Full => self.values.num_vals() as u32,
+            ColumnIndex::Optional(optional_index) => optional_index.num_rows(),
+            ColumnIndex::Multivalued(col_index) => {
+                // The multivalued index contains all value start row_ids,
+                // and one extra value at the end with the overall number of rows.
+                col_index.num_vals() - 1
             }
         }
     }
+
+    pub fn min_value(&self) -> T {
+        self.values.min_value()
+    }
+
+    pub fn max_value(&self) -> T {
+        self.values.max_value()
+    }
+}
+
+impl<T: PartialOrd + Copy + Send + Sync + 'static> Column<T> {
+    pub fn first(&self, row_id: RowId) -> Option<T> {
+        self.values(row_id).next()
+    }
+
+    pub fn values(&self, row_id: RowId) -> impl Iterator<Item = T> + '_ {
+        self.value_row_ids(row_id)
+            .map(|value_row_id: RowId| self.values.get_val(value_row_id))
+    }
+
+    pub fn first_or_default_col(self, default_value: T) -> Arc<dyn ColumnValues<T>> {
+        Arc::new(FirstValueWithDefault {
+            column: self,
+            default_value,
+        })
+    }
 }
 
 impl<T> Deref for Column<T> {
@@ -54,3 +79,31 @@ impl BinarySerializable for Cardinality {
         Ok(cardinality)
     }
 }
+
+// TODO simplify or optimize
+struct FirstValueWithDefault<T: Copy> {
+    column: Column<T>,
+    default_value: T,
+}
+
+impl<T: PartialOrd + Send + Sync + Copy + 'static> ColumnValues<T> for FirstValueWithDefault<T> {
+    fn get_val(&self, idx: u32) -> T {
+        self.column.first(idx).unwrap_or(self.default_value)
+    }
+
+    fn min_value(&self) -> T {
+        self.column.values.min_value()
+    }
+
+    fn max_value(&self) -> T {
+        self.column.values.max_value()
+    }
+
+    fn num_vals(&self) -> u32 {
+        match &self.column.idx {
+            ColumnIndex::Full => self.column.values.num_vals(),
+            ColumnIndex::Optional(optional_idx) => optional_idx.num_rows(),
+            ColumnIndex::Multivalued(_) => todo!(),
+        }
+    }
+}
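The accessor layering introduced here is worth spelling out: `value_row_ids` maps a row to a range of value slots, `values` fetches them, and `first` is just the first element of that iterator, so one code path serves full, optional, and multivalued columns alike. A self-contained sketch with simplified stand-in types (not the crate's):

```rust
enum ColumnIndex {
    Full,
    Optional(Vec<Option<u32>>), // row -> value slot, if any
    Multivalued(Vec<u32>),      // start offsets, plus one trailing sentinel
}

struct Column {
    idx: ColumnIndex,
    vals: Vec<u64>,
}

impl Column {
    fn value_row_ids(&self, row_id: u32) -> std::ops::Range<u32> {
        match &self.idx {
            ColumnIndex::Full => row_id..row_id + 1,
            ColumnIndex::Optional(slots) => match slots[row_id as usize] {
                Some(slot) => slot..slot + 1,
                None => 0..0,
            },
            ColumnIndex::Multivalued(starts) => {
                starts[row_id as usize]..starts[row_id as usize + 1]
            }
        }
    }

    fn values(&self, row_id: u32) -> impl Iterator<Item = u64> + '_ {
        self.value_row_ids(row_id).map(|slot| self.vals[slot as usize])
    }

    fn first(&self, row_id: u32) -> Option<u64> {
        self.values(row_id).next()
    }
}

fn main() {
    let col = Column {
        // Rows: [10, 11], [], [12] -- start offsets with a trailing sentinel.
        idx: ColumnIndex::Multivalued(vec![0, 2, 2, 3]),
        vals: vec![10, 11, 12],
    };
    assert_eq!(col.values(0).collect::<Vec<_>>(), vec![10, 11]);
    assert_eq!(col.first(1), None);
    assert_eq!(col.first(2), Some(12));
}
```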

View File

@@ -2,24 +2,51 @@ use std::io;
 use std::io::Write;
 use std::sync::Arc;
 
-use common::{CountingWriter, OwnedBytes};
+use common::OwnedBytes;
 use sstable::Dictionary;
 
 use crate::column::{BytesColumn, Column};
 use crate::column_index::{serialize_column_index, SerializableColumnIndex};
+use crate::column_values::serialize::serialize_column_values_u128;
 use crate::column_values::{
-    serialize_column_values, ColumnValues, MonotonicallyMappableToU64, ALL_CODEC_TYPES,
+    serialize_column_values, ColumnValues, FastFieldCodecType, MonotonicallyMappableToU128,
+    MonotonicallyMappableToU64,
 };
 
-pub fn serialize_column_u64<T: MonotonicallyMappableToU64>(
+pub fn serialize_column_mappable_to_u128<
+    F: Fn() -> I,
+    I: Iterator<Item = T>,
+    T: MonotonicallyMappableToU128,
+>(
+    column_index: SerializableColumnIndex<'_>,
+    column_values: F,
+    num_vals: u32,
+    output: &mut impl Write,
+) -> io::Result<()> {
+    let column_index_num_bytes = serialize_column_index(column_index, output)?;
+    serialize_column_values_u128(
+        || column_values().map(|val| val.to_u128()),
+        num_vals,
+        output,
+    )?;
+    output.write_all(&column_index_num_bytes.to_le_bytes())?;
+    Ok(())
+}
+
+pub fn serialize_column_mappable_to_u64<T: MonotonicallyMappableToU64>(
     column_index: SerializableColumnIndex<'_>,
     column_values: &impl ColumnValues<T>,
     output: &mut impl Write,
 ) -> io::Result<()> {
-    let mut counting_writer = CountingWriter::wrap(output);
-    serialize_column_index(column_index, &mut counting_writer)?;
-    let column_index_num_bytes = counting_writer.written_bytes() as u32;
-    let output = counting_writer.finish();
-    serialize_column_values(column_values, &ALL_CODEC_TYPES[..], output)?;
+    let column_index_num_bytes = serialize_column_index(column_index, output)?;
+    serialize_column_values(
+        column_values,
+        &[
+            FastFieldCodecType::Bitpacked,
+            FastFieldCodecType::BlockwiseLinear,
+        ],
+        output,
+    )?;
     output.write_all(&column_index_num_bytes.to_le_bytes())?;
     Ok(())
 }
@@ -41,14 +68,34 @@ pub fn open_column_u64<T: MonotonicallyMappableToU64>(bytes: OwnedBytes) -> io::
     })
 }
 
-pub fn open_column_bytes(data: OwnedBytes) -> io::Result<BytesColumn> {
+pub fn open_column_u128<T: MonotonicallyMappableToU128>(
+    bytes: OwnedBytes,
+) -> io::Result<Column<T>> {
+    let (body, column_index_num_bytes_payload) = bytes.rsplit(4);
+    let column_index_num_bytes = u32::from_le_bytes(
+        column_index_num_bytes_payload
+            .as_slice()
+            .try_into()
+            .unwrap(),
+    );
+    let (column_index_data, column_values_data) = body.split(column_index_num_bytes as usize);
+    let column_index = crate::column_index::open_column_index(column_index_data)?;
+    let column_values = crate::column_values::open_u128_mapped(column_values_data)?;
+    Ok(Column {
+        idx: column_index,
+        values: column_values,
+    })
+}
+
+pub fn open_column_bytes<T: From<BytesColumn>>(data: OwnedBytes) -> io::Result<T> {
     let (body, dictionary_len_bytes) = data.rsplit(4);
     let dictionary_len = u32::from_le_bytes(dictionary_len_bytes.as_slice().try_into().unwrap());
     let (dictionary_bytes, column_bytes) = body.split(dictionary_len as usize);
     let dictionary = Arc::new(Dictionary::from_bytes(dictionary_bytes)?);
     let term_ord_column = crate::column::open_column_u64::<u64>(column_bytes)?;
-    Ok(BytesColumn {
+    let bytes_column = BytesColumn {
         dictionary,
         term_ord_column,
-    })
+    };
+    Ok(bytes_column.into())
 }
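Both `open_column_u128` and `open_column_bytes` rely on the same framing the serializers write: payload sections first, then a 4-byte little-endian length appended at the very end, so a reader can split the sections back apart starting from the tail. A simplified sketch of the scheme using only standard-library types:

```rust
use std::io::{self, Write};

// Write index, then values, then the index's byte length as a u32 suffix.
fn serialize(index: &[u8], values: &[u8], output: &mut impl Write) -> io::Result<()> {
    output.write_all(index)?;
    output.write_all(values)?;
    output.write_all(&(index.len() as u32).to_le_bytes())?;
    Ok(())
}

// Split from the back: peel off the length suffix, then cut the body.
fn open(bytes: &[u8]) -> (&[u8], &[u8]) {
    let (body, len_bytes) = bytes.split_at(bytes.len() - 4);
    let index_len = u32::from_le_bytes(len_bytes.try_into().unwrap()) as usize;
    body.split_at(index_len) // (index, values)
}

fn main() -> io::Result<()> {
    let mut buf = Vec::new();
    serialize(b"IDX", b"VALUES", &mut buf)?;
    let (index, values) = open(&buf);
    assert_eq!(index, b"IDX");
    assert_eq!(values, b"VALUES");
    Ok(())
}
```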

View File

@@ -2,6 +2,7 @@ mod multivalued_index;
 mod optional_index;
 mod serialize;
 
+use std::ops::Range;
 use std::sync::Arc;
 
 pub use optional_index::{OptionalIndex, SerializableOptionalIndex, Set};
@@ -14,8 +15,12 @@ use crate::{Cardinality, RowId};
 pub enum ColumnIndex<'a> {
     Full,
     Optional(OptionalIndex),
-    // TODO remove the Arc<dyn> apart from serialization this is not
-    // dynamic at all.
+    // TODO Remove the static by fixing the codec if possible.
+    /// The enclosed column values contain, for every row_id,
+    /// the start index of its values.
+    ///
+    /// In addition, at index num_rows, an extra value is added
+    /// containing the overall number of values.
     Multivalued(Arc<dyn ColumnValues<RowId> + 'a>),
 }
 
@@ -28,13 +33,22 @@ impl<'a> ColumnIndex<'a> {
         }
     }
 
-    pub fn num_rows(&self) -> RowId {
+    pub fn value_row_ids(&self, row_id: RowId) -> Range<RowId> {
         match self {
-            ColumnIndex::Full => {
-                todo!()
+            ColumnIndex::Full => row_id..row_id + 1,
+            ColumnIndex::Optional(optional_index) => {
+                if let Some(val) = optional_index.rank_if_exists(row_id) {
+                    val..val + 1
+                } else {
+                    0..0
+                }
+            }
+            ColumnIndex::Multivalued(multivalued_index) => {
+                let multivalued_index_ref = &**multivalued_index;
+                let start: u32 = multivalued_index_ref.get_val(row_id);
+                let end: u32 = multivalued_index_ref.get_val(row_id + 1);
+                start..end
             }
-            ColumnIndex::Optional(optional_index) => optional_index.num_rows(),
-            ColumnIndex::Multivalued(multivalued_index) => multivalued_index.num_vals() - 1,
         }
     }
 }

View File

@@ -11,11 +11,11 @@ use crate::RowId;
 pub struct MultivaluedIndex(Arc<dyn ColumnValues<RowId>>);
 
 pub fn serialize_multivalued_index(
-    multivalued_index: MultivaluedIndex,
+    multivalued_index: &dyn ColumnValues<RowId>,
     output: &mut impl Write,
 ) -> io::Result<()> {
     crate::column_values::serialize_column_values(
-        &*multivalued_index.0,
+        &*multivalued_index,
         &[FastFieldCodecType::Bitpacked, FastFieldCodecType::Linear],
         output,
     )?;
@@ -23,5 +23,7 @@ pub fn serialize_multivalued_index(
 }
 
 pub fn open_multivalued_index(bytes: OwnedBytes) -> io::Result<Arc<dyn ColumnValues<RowId>>> {
-    todo!();
+    let start_index_column: Arc<dyn ColumnValues<RowId>> =
+        crate::column_values::open_u64_mapped(bytes)?;
+    Ok(start_index_column)
 }

View File

@@ -5,8 +5,8 @@ use std::sync::Arc;
 mod set;
 mod set_block;
 
-use common::{BinarySerializable, GroupByIteratorExtended, OwnedBytes, VInt};
-pub use set::{Set, SetCodec};
+use common::{BinarySerializable, OwnedBytes, VInt};
+pub use set::{Set, SetCodec, SelectCursor};
 use set_block::{
     DenseBlock, DenseBlockCodec, SparseBlock, SparseBlockCodec, DENSE_BLOCK_NUM_BYTES,
 };
@@ -115,7 +115,59 @@ fn row_addr_from_row_id(row_id: RowId) -> RowAddr {
     }
 }
 
+enum BlockSelectCursor<'a> {
+    Dense(<DenseBlock<'a> as Set<u16>>::SelectCursor<'a>),
+    Sparse(<SparseBlock<'a> as Set<u16>>::SelectCursor<'a>),
+}
+
+impl<'a> BlockSelectCursor<'a> {
+    fn select(&mut self, rank: u16) -> u16 {
+        match self {
+            BlockSelectCursor::Dense(dense_select_cursor) => dense_select_cursor.select(rank),
+            BlockSelectCursor::Sparse(sparse_select_cursor) => sparse_select_cursor.select(rank),
+        }
+    }
+}
+
+pub struct OptionalIndexSelectCursor<'a> {
+    current_block_cursor: BlockSelectCursor<'a>,
+    current_block_id: u16,
+    // The current block is guaranteed to contain ranks < end_rank.
+    current_block_end_rank: RowId,
+    optional_index: &'a OptionalIndex,
+    block_doc_idx_start: RowId,
+    num_null_rows_before_block: RowId,
+}
+
+impl<'a> OptionalIndexSelectCursor<'a> {
+    fn search_and_load_block(&mut self, rank: RowId) {
+        if rank < self.current_block_end_rank {
+            // We are already in the right block.
+            return;
+        }
+        self.current_block_id = self.optional_index.find_block(rank, self.current_block_id);
+        self.current_block_end_rank = self
+            .optional_index
+            .block_metas
+            .get(self.current_block_id as usize + 1)
+            .map(|block_meta| block_meta.non_null_rows_before_block)
+            .unwrap_or(u32::MAX);
+        self.block_doc_idx_start = (self.current_block_id as u32) * ELEMENTS_PER_BLOCK;
+        let block_meta = self.optional_index.block_metas[self.current_block_id as usize];
+        self.num_null_rows_before_block = block_meta.non_null_rows_before_block;
+        let block: Block<'_> = self.optional_index.block(block_meta);
+        self.current_block_cursor = match block {
+            Block::Dense(dense_block) => BlockSelectCursor::Dense(dense_block.select_cursor()),
+            Block::Sparse(sparse_block) => {
+                BlockSelectCursor::Sparse(sparse_block.select_cursor())
+            }
+        };
+    }
+}
+
+impl<'a> SelectCursor<RowId> for OptionalIndexSelectCursor<'a> {
+    fn select(&mut self, rank: RowId) -> RowId {
+        self.search_and_load_block(rank);
+        let index_in_block = (rank - self.num_null_rows_before_block) as u16;
+        self.current_block_cursor.select(index_in_block) as RowId + self.block_doc_idx_start
+    }
+}
+
 impl Set<RowId> for OptionalIndex {
+    type SelectCursor<'b> = OptionalIndexSelectCursor<'b> where Self: 'b;
+
     // Check if value at position is not null.
     #[inline]
     fn contains(&self, row_id: RowId) -> bool {
@@ -148,7 +200,7 @@ impl Set<RowId> for OptionalIndex {
     #[inline]
     fn select(&self, rank: RowId) -> RowId {
         let block_pos = self.find_block(rank, 0);
-        let block_doc_idx_start = block_pos * ELEMENTS_PER_BLOCK;
+        let block_doc_idx_start = (block_pos as u32) * ELEMENTS_PER_BLOCK;
         let block_meta = self.block_metas[block_pos as usize];
         let block: Block<'_> = self.block(block_meta);
         let index_in_block = (rank - block_meta.non_null_rows_before_block) as u16;
@@ -159,39 +211,27 @@ impl Set<RowId> for OptionalIndex {
         block_doc_idx_start + in_block_rank as u32
     }
 
-    fn select_batch(&self, ranks: &[u32], output_idxs: &mut [u32]) {
-        let mut block_pos = 0u32;
-        let mut start = 0;
-        let group_by_it = ranks.iter().copied().group_by(move |codec_idx| {
-            block_pos = self.find_block(*codec_idx, block_pos);
-            block_pos
-        });
-        for (block_pos, block_iter) in group_by_it {
-            let block_doc_idx_start = block_pos * ELEMENTS_PER_BLOCK;
-            let block_meta = self.block_metas[block_pos as usize];
-            let block: Block<'_> = self.block(block_meta);
-            let offset = block_meta.non_null_rows_before_block;
-            let indexes_in_block_iter =
-                block_iter.map(move |codec_idx| (codec_idx - offset) as u16);
-            match block {
-                Block::Dense(dense_block) => {
-                    for in_offset in dense_block.select_iter(indexes_in_block_iter) {
-                        output_idxs[start] = in_offset as u32 + block_doc_idx_start;
-                        start += 1;
-                    }
-                }
-                Block::Sparse(sparse_block) => {
-                    for in_offset in sparse_block.select_iter(indexes_in_block_iter) {
-                        output_idxs[start] = in_offset as u32 + block_doc_idx_start;
-                        start += 1;
-                    }
-                }
-            };
+    fn select_cursor<'b>(&'b self) -> OptionalIndexSelectCursor<'b> {
+        OptionalIndexSelectCursor {
+            current_block_cursor: BlockSelectCursor::Sparse(
+                SparseBlockCodec::open(b"").select_cursor(),
+            ),
+            current_block_id: 0u16,
+            current_block_end_rank: 0u32, //< this is sufficient to force the first load
+            optional_index: self,
+            block_doc_idx_start: 0u32,
+            num_null_rows_before_block: 0u32,
         }
     }
 }
 
 impl OptionalIndex {
+    pub fn select_batch(&self, ranks: &mut [RowId]) {
+        let mut select_cursor = self.select_cursor();
+        for rank in ranks.iter_mut() {
+            *rank = select_cursor.select(*rank);
+        }
+    }
+
     #[inline]
     fn block<'a>(&'a self, block_meta: BlockMeta) -> Block<'a> {
         let BlockMeta {
@@ -214,14 +254,14 @@ impl OptionalIndex {
     }
 
     #[inline]
-    fn find_block(&self, dense_idx: u32, start_block_pos: u32) -> u32 {
-        for block_pos in start_block_pos..self.block_metas.len() as u32 {
+    fn find_block(&self, dense_idx: u32, start_block_pos: u16) -> u16 {
+        for block_pos in start_block_pos..self.block_metas.len() as u16 {
             let offset = self.block_metas[block_pos as usize].non_null_rows_before_block;
             if offset > dense_idx {
-                return block_pos - 1;
+                return block_pos - 1u16;
            }
         }
-        self.block_metas.len() as u32 - 1u32
+        self.block_metas.len() as u16 - 1u16
    }
 
    // TODO Add a good API for the codec_idx to original_idx translation.
View File

@@ -13,7 +13,19 @@ pub trait SetCodec {
     fn open<'a>(data: &'a [u8]) -> Self::Reader<'a>;
 }
 
+/// Stateful object that makes it possible to compute several selects in a row,
+/// provided the ranks passed as argument are increasing.
+pub trait SelectCursor<T> {
+    // May panic if rank is greater than the number of elements in the Set,
+    // or if rank is lower than the value provided in the previous call.
+    fn select(&mut self, rank: T) -> T;
+}
+
 pub trait Set<T> {
+    type SelectCursor<'b>: SelectCursor<T> where Self: 'b;
+
     /// Returns true if the element is contained in the Set.
     fn contains(&self, el: T) -> bool;
 
@@ -28,11 +40,6 @@ pub trait Set<T> {
     /// May panic if rank is greater than the number of elements in the Set.
     fn select(&self, rank: T) -> T;
 
-    /// Batch version of select.
-    /// `ranks` is assumed to be sorted.
-    ///
-    /// # Panics
-    ///
-    /// May panic if rank is greater than the number of elements in the Set.
-    fn select_batch(&self, ranks: &[T], outputs: &mut [T]);
+    /// Creates a brand new select cursor.
+    fn select_cursor<'b>(&'b self) -> Self::SelectCursor<'b>;
 }
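The `SelectCursor` contract is the heart of this refactor: because callers promise non-decreasing ranks, each implementation can resume scanning from where the previous `select` stopped instead of restarting, which is how the removed `select_batch` methods get re-expressed as a single left-to-right pass. A minimal self-contained model over a plain bitset (illustrative types, not the crate's):

```rust
// select(rank) returns the position of the rank-th set bit. The cursor
// remembers the word it stopped in and how many set bits precede it, so a
// batch of monotonic selects costs one scan instead of one scan per call.
struct BitsetSelectCursor<'a> {
    words: &'a [u64],
    word_idx: usize,
    ranks_before_word: u32, // set bits strictly before words[word_idx]
}

impl<'a> BitsetSelectCursor<'a> {
    fn new(words: &'a [u64]) -> Self {
        BitsetSelectCursor { words, word_idx: 0, ranks_before_word: 0 }
    }

    fn select(&mut self, rank: u32) -> u32 {
        // Advance to the word containing the rank-th set bit; monotonic
        // ranks guarantee we never need to move backwards.
        loop {
            let ones = self.words[self.word_idx].count_ones();
            if self.ranks_before_word + ones > rank {
                break;
            }
            self.ranks_before_word += ones;
            self.word_idx += 1;
        }
        // Select within the word by clearing lower set bits.
        let mut word = self.words[self.word_idx];
        for _ in 0..rank - self.ranks_before_word {
            word &= word - 1; // clear lowest set bit
        }
        self.word_idx as u32 * 64 + word.trailing_zeros()
    }
}

fn main() {
    let words = [0b1010u64, 0b1u64]; // set bits at positions 1, 3, 64
    let mut cursor = BitsetSelectCursor::new(&words);
    assert_eq!(cursor.select(0), 1);
    assert_eq!(cursor.select(1), 3);
    assert_eq!(cursor.select(2), 64);
}
```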

View File

@@ -3,7 +3,7 @@ use std::io::{self, Write};
 
 use common::BinarySerializable;
 
-use crate::column_index::optional_index::{Set, SetCodec, ELEMENTS_PER_BLOCK};
+use crate::column_index::optional_index::{Set, SetCodec, SelectCursor, ELEMENTS_PER_BLOCK};
 
 #[inline(always)]
 fn get_bit_at(input: u64, n: u16) -> bool {
@@ -105,7 +105,24 @@ impl DenseMiniBlock {
 #[derive(Copy, Clone)]
 pub struct DenseBlock<'a>(&'a [u8]);
 
+pub struct DenseBlockSelectCursor<'a> {
+    block_id: u16,
+    dense_block: DenseBlock<'a>,
+}
+
+impl<'a> SelectCursor<u16> for DenseBlockSelectCursor<'a> {
+    #[inline]
+    fn select(&mut self, rank: u16) -> u16 {
+        self.block_id = self
+            .dense_block
+            .find_miniblock_containing_rank(rank, self.block_id)
+            .unwrap();
+        let index_block = self.dense_block.mini_block(self.block_id);
+        let in_block_rank = rank - index_block.rank;
+        self.block_id * ELEMENTS_PER_MINI_BLOCK + select_u64(index_block.bitvec, in_block_rank)
+    }
+}
+
 impl<'a> Set<u16> for DenseBlock<'a> {
+    type SelectCursor<'b> = DenseBlockSelectCursor<'a> where Self: 'b;
+
     #[inline(always)]
     fn contains(&self, el: u16) -> bool {
         let mini_block_id = el / ELEMENTS_PER_MINI_BLOCK;
@@ -136,37 +153,15 @@ impl<'a> Set<u16> for DenseBlock<'a> {
         block_id * ELEMENTS_PER_MINI_BLOCK + select_u64(index_block.bitvec, in_block_rank)
     }
 
-    fn select_batch(&self, ranks: &[u16], outputs: &mut [u16]) {
-        let orig_ids = self.select_iter(ranks.iter().copied());
-        for (output, original_id) in outputs.iter_mut().zip(orig_ids) {
-            *output = original_id;
-        }
-    }
-}
-
-impl<'a> DenseBlock<'a> {
-    /// Iterator version of select.
-    ///
-    /// # Panics
-    /// Panics if one of the ranks is higher than the number of elements in the set.
-    pub fn select_iter<'b>(
-        &self,
-        rank_it: impl Iterator<Item = u16> + 'b,
-    ) -> impl Iterator<Item = u16> + 'b
-    where
-        Self: 'b,
-    {
-        let mut block_id = 0u16;
-        let me = *self;
-        rank_it.map(move |rank| {
-            block_id = me.find_miniblock_containing_rank(rank, block_id).unwrap();
-            let index_block = me.mini_block(block_id);
-            let in_block_rank = rank - index_block.rank;
-            block_id * ELEMENTS_PER_MINI_BLOCK + select_u64(index_block.bitvec, in_block_rank)
-        })
-    }
+    #[inline(always)]
+    fn select_cursor<'b>(&'b self) -> Self::SelectCursor<'b> {
+        DenseBlockSelectCursor {
+            block_id: 0,
+            dense_block: *self,
+        }
+    }
 }
 
 impl<'a> DenseBlock<'a> {
     #[inline]
     fn mini_block(&self, mini_block_id: u16) -> DenseMiniBlock {

View File

@@ -1,7 +1,7 @@
-mod set_block;
+mod dense;
 mod sparse;
 
-pub use set_block::{DenseBlock, DenseBlockCodec, DENSE_BLOCK_NUM_BYTES};
+pub use dense::{DenseBlock, DenseBlockCodec, DENSE_BLOCK_NUM_BYTES};
 pub use sparse::{SparseBlock, SparseBlockCodec};
 
 #[cfg(test)]
View File

@@ -1,4 +1,4 @@
-use crate::column_index::optional_index::{Set, SetCodec};
+use crate::column_index::optional_index::{Set, SetCodec, SelectCursor};
 
 pub struct SparseBlockCodec;
 
@@ -24,7 +24,17 @@ impl SetCodec for SparseBlockCodec {
 #[derive(Copy, Clone)]
 pub struct SparseBlock<'a>(&'a [u8]);
 
+impl<'a> SelectCursor<u16> for SparseBlock<'a> {
+    #[inline]
+    fn select(&mut self, rank: u16) -> u16 {
+        <SparseBlock<'a> as Set<u16>>::select(self, rank)
+    }
+}
+
 impl<'a> Set<u16> for SparseBlock<'a> {
+    type SelectCursor<'b> = Self where Self: 'b;
+
     #[inline(always)]
     fn contains(&self, el: u16) -> bool {
         self.binary_search(el).is_ok()
@@ -41,12 +51,11 @@ impl<'a> Set<u16> for SparseBlock<'a> {
         u16::from_le_bytes(self.0[offset..offset + 2].try_into().unwrap())
     }
 
-    fn select_batch(&self, ranks: &[u16], outputs: &mut [u16]) {
-        let orig_ids = self.select_iter(ranks.iter().copied());
-        for (output, original_id) in outputs.iter_mut().zip(orig_ids) {
-            *output = original_id;
-        }
+    #[inline(always)]
+    fn select_cursor<'b>(&'b self) -> Self::SelectCursor<'b> {
+        *self
     }
 }
 
 #[inline(always)]
@@ -96,17 +105,4 @@ impl<'a> SparseBlock<'a> {
         }
         Err(left)
     }
-
-    pub fn select_iter<'b>(
-        &self,
-        iter: impl Iterator<Item = u16> + 'b,
-    ) -> impl Iterator<Item = u16> + 'b
-    where
-        Self: 'b,
-    {
-        iter.map(|codec_id| {
-            let offset = codec_id as usize * 2;
-            u16::from_le_bytes(self.0[offset..offset + 2].try_into().unwrap())
-        })
-    }
 }

View File

@@ -1,8 +1,8 @@
 use std::collections::HashMap;
 
-use crate::column_index::optional_index::set_block::set_block::DENSE_BLOCK_NUM_BYTES;
+use crate::column_index::optional_index::set_block::dense::DENSE_BLOCK_NUM_BYTES;
 use crate::column_index::optional_index::set_block::{DenseBlockCodec, SparseBlockCodec};
-use crate::column_index::optional_index::{Set, SetCodec};
+use crate::column_index::optional_index::{Set, SetCodec, SelectCursor};
 
 fn test_set_helper<C: SetCodec<Item = u16>>(vals: &[u16]) -> usize {
     let mut buffer = Vec::new();
@@ -51,6 +51,7 @@ fn test_sparse_block_set_u16_max() {
 use proptest::prelude::*;
 
 proptest! {
+    #![proptest_config(ProptestConfig::with_cases(1))]
     #[test]
     fn test_prop_test_dense(els in proptest::collection::btree_set(0..=u16::MAX, 0..=u16::MAX as usize)) {
         let vals: Vec<u16> = els.into_iter().collect();
@@ -73,12 +74,10 @@ fn test_simple_translate_codec_codec_idx_to_original_idx_dense() {
         .unwrap();
     let tested_set = DenseBlockCodec::open(buffer.as_slice());
     assert!(tested_set.contains(1));
-    assert_eq!(
-        &tested_set
-            .select_iter([0, 1, 2, 5].iter().copied())
-            .collect::<Vec<u16>>(),
-        &[1, 3, 17, 30_001]
-    );
+    let mut select_cursor = tested_set.select_cursor();
+    assert_eq!(select_cursor.select(0), 1);
+    assert_eq!(select_cursor.select(1), 3);
+    assert_eq!(select_cursor.select(2), 17);
 }
 
 #[test]
@@ -87,12 +86,10 @@ fn test_simple_translate_codec_idx_to_original_idx_sparse() {
     SparseBlockCodec::serialize([1, 3, 17].iter().copied(), &mut buffer).unwrap();
     let tested_set = SparseBlockCodec::open(buffer.as_slice());
     assert!(tested_set.contains(1));
-    assert_eq!(
-        &tested_set
-            .select_iter([0, 1, 2].iter().copied())
-            .collect::<Vec<u16>>(),
-        &[1, 3, 17]
-    );
+    let mut select_cursor = tested_set.select_cursor();
+    assert_eq!(SelectCursor::select(&mut select_cursor, 0), 1);
+    assert_eq!(SelectCursor::select(&mut select_cursor, 1), 3);
+    assert_eq!(SelectCursor::select(&mut select_cursor, 2), 17);
 }
 
 #[test]
@@ -101,10 +98,8 @@ fn test_simple_translate_codec_idx_to_original_idx_dense() {
     DenseBlockCodec::serialize(0u16..150u16, &mut buffer).unwrap();
     let tested_set = DenseBlockCodec::open(buffer.as_slice());
    assert!(tested_set.contains(1));
-    let rg = 0u16..150u16;
-    let els: Vec<u16> = rg.clone().collect();
-    assert_eq!(
-        &tested_set.select_iter(rg.clone()).collect::<Vec<u16>>(),
-        &els
-    );
+    let mut select_cursor = tested_set.select_cursor();
+    for i in 0..150 {
+        assert_eq!(i, select_cursor.select(i));
+    }
 }

View File

@@ -41,9 +41,10 @@ fn test_with_random_sets_simple() {
     let null_index = open_optional_index(OwnedBytes::new(out)).unwrap();
     let ranks: Vec<u32> = (65_472u32..65_473u32).collect();
     let els: Vec<u32> = ranks.iter().copied().map(|rank| rank + 10).collect();
-    let mut output = vec![0u32; ranks.len()];
-    null_index.select_batch(&ranks[..], &mut output[..]);
-    assert_eq!(&output, &els);
+    let mut select_cursor = null_index.select_cursor();
+    for (rank, el) in ranks.iter().copied().zip(els.iter().copied()) {
+        assert_eq!(select_cursor.select(rank), el);
+    }
 }
 
 #[test]
@@ -91,11 +92,10 @@ fn test_null_index(data: &[bool]) {
         .filter(|(_pos, val)| **val)
         .map(|(pos, _val)| pos as u32)
         .collect();
-    let ids: Vec<u32> = (0..orig_idx_with_value.len() as u32).collect();
-    let mut output = vec![0u32; ids.len()];
-    null_index.select_batch(&ids[..], &mut output);
-    // assert_eq!(&output[0..100], &orig_idx_with_value[0..100]);
-    assert_eq!(output, orig_idx_with_value);
+    let mut select_iter = null_index.select_cursor();
+    for i in 0..orig_idx_with_value.len() {
+        assert_eq!(select_iter.select(i as u32), orig_idx_with_value[i]);
+    }
 
     let step_size = (orig_idx_with_value.len() / 100).max(1);
     for (dense_idx, orig_idx) in orig_idx_with_value.iter().enumerate().step_by(step_size) {
@@ -115,9 +115,9 @@ fn test_optional_index_test_translation() {
     let iter = &[true, false, true, false];
     serialize_optional_index(&&iter[..], &mut out).unwrap();
     let null_index = open_optional_index(OwnedBytes::new(out)).unwrap();
-    let mut output = vec![0u32; 2];
-    null_index.select_batch(&[0, 1], &mut output);
-    assert_eq!(output, &[0, 2]);
+    let mut select_cursor = null_index.select_cursor();
+    assert_eq!(select_cursor.select(0), 0);
+    assert_eq!(select_cursor.select(1), 2);
 }
 
 #[test]
@@ -175,7 +175,6 @@ mod bench {
         .map(|_| rng.gen_bool(fill_ratio))
         .collect();
     serialize_optional_index(&&vals[..], &mut out).unwrap();
-
     let codec = open_optional_index(OwnedBytes::new(out)).unwrap();
     codec
 }
@@ -311,7 +310,8 @@ mod bench {
     };
     let mut output = vec![0u32; idxs.len()];
    bench.iter(|| {
-        codec.select_batch(&idxs[..], &mut output);
+        output.copy_from_slice(&idxs[..]);
+        codec.select_batch(&mut output);
    });
 }

View File

@@ -1,19 +1,20 @@
 use std::io;
 use std::io::Write;
 
-use common::OwnedBytes;
+use common::{CountingWriter, OwnedBytes};
 
-use crate::column_index::multivalued_index::{serialize_multivalued_index, MultivaluedIndex};
+use crate::column_index::multivalued_index::serialize_multivalued_index;
 use crate::column_index::optional_index::serialize_optional_index;
 use crate::column_index::{ColumnIndex, SerializableOptionalIndex};
-use crate::Cardinality;
+use crate::column_values::ColumnValues;
+use crate::{Cardinality, RowId};
 
 pub enum SerializableColumnIndex<'a> {
     Full,
     Optional(Box<dyn SerializableOptionalIndex<'a> + 'a>),
     // TODO remove the Box<dyn>: apart from serialization, this is not
     // dynamic at all.
-    Multivalued(MultivaluedIndex),
+    Multivalued(Box<dyn ColumnValues<RowId> + 'a>),
 }
 
 impl<'a> SerializableColumnIndex<'a> {
@@ -29,19 +30,21 @@ impl<'a> SerializableColumnIndex<'a> {
 pub fn serialize_column_index(
     column_index: SerializableColumnIndex,
     output: &mut impl Write,
-) -> io::Result<()> {
+) -> io::Result<u32> {
+    let mut output = CountingWriter::wrap(output);
     let cardinality = column_index.get_cardinality().to_code();
     output.write_all(&[cardinality])?;
     match column_index {
         SerializableColumnIndex::Full => {}
         SerializableColumnIndex::Optional(optional_index) => {
-            serialize_optional_index(&*optional_index, output)?
+            serialize_optional_index(&*optional_index, &mut output)?
        }
         SerializableColumnIndex::Multivalued(multivalued_index) => {
-            serialize_multivalued_index(multivalued_index, output)?
+            serialize_multivalued_index(&*multivalued_index, &mut output)?
        }
    }
-    Ok(())
+    let column_index_num_bytes = output.written_bytes() as u32;
+    Ok(column_index_num_bytes)
 }
 
 pub fn open_column_index(mut bytes: OwnedBytes) -> io::Result<ColumnIndex<'static>> {

View File

@@ -78,6 +78,32 @@ pub trait ColumnValues<T: PartialOrd = u64>: Send + Sync {
     }
 }
 
+impl<T: Copy + PartialOrd> ColumnValues<T> for std::sync::Arc<dyn ColumnValues<T>> {
+    fn get_val(&self, idx: u32) -> T {
+        self.as_ref().get_val(idx)
+    }
+
+    fn min_value(&self) -> T {
+        self.as_ref().min_value()
+    }
+
+    fn max_value(&self) -> T {
+        self.as_ref().max_value()
+    }
+
+    fn num_vals(&self) -> u32 {
+        self.as_ref().num_vals()
+    }
+
+    fn iter<'b>(&'b self) -> Box<dyn Iterator<Item = T> + 'b> {
+        self.as_ref().iter()
+    }
+
+    fn get_range(&self, start: u64, output: &mut [T]) {
+        self.as_ref().get_range(start, output)
+    }
+}
+
 impl<'a, C: ColumnValues<T> + ?Sized, T: Copy + PartialOrd> ColumnValues<T> for &'a C {
     fn get_val(&self, idx: u32) -> T {
         (*self).get_val(idx)
View File

@@ -28,7 +28,7 @@ mod compact_space;
 mod line;
 mod linear;
 pub(crate) mod monotonic_mapping;
-// mod monotonic_mapping_u128;
+pub(crate) mod monotonic_mapping_u128;
 
 mod column;
 mod column_with_cardinality;
@@ -37,8 +37,10 @@ pub mod serialize;
 
 pub use self::column::{monotonic_map_column, ColumnValues, IterColumn, VecColumn};
 pub use self::monotonic_mapping::{MonotonicallyMappableToU64, StrictlyMonotonicFn};
-// pub use self::monotonic_mapping_u128::MonotonicallyMappableToU128;
-pub use self::serialize::{serialize_and_load, serialize_column_values, NormalizedHeader};
+pub use self::monotonic_mapping_u128::MonotonicallyMappableToU128;
+#[cfg(test)]
+pub use self::serialize::tests::serialize_and_load;
+pub use self::serialize::{serialize_column_values, NormalizedHeader};
 
 use crate::column_values::bitpacked::BitpackedCodec;
 use crate::column_values::blockwise_linear::BlockwiseLinearCodec;
 use crate::column_values::linear::LinearCodec;
@@ -122,19 +124,17 @@ impl U128FastFieldCodecType {
 }
 
 /// Returns the correct codec reader wrapped in the `Arc` for the data.
-// pub fn open_u128<Item: MonotonicallyMappableToU128>(
-//     bytes: OwnedBytes,
-// ) -> io::Result<Arc<dyn Column<Item>>> {
-//     todo!();
-//     // let (bytes, _format_version) = read_format_version(bytes)?;
-//     // let (mut bytes, _null_index_footer) = read_null_index_footer(bytes)?;
-//     // let header = U128Header::deserialize(&mut bytes)?;
-//     // assert_eq!(header.codec_type, U128FastFieldCodecType::CompactSpace);
-//     // let reader = CompactSpaceDecompressor::open(bytes)?;
-//     // let inverted: StrictlyMonotonicMappingInverter<StrictlyMonotonicMappingToInternal<Item>> =
-//     //     StrictlyMonotonicMappingToInternal::<Item>::new().into();
-//     // Ok(Arc::new(monotonic_map_column(reader, inverted)))
-// }
+pub fn open_u128_mapped<T: MonotonicallyMappableToU128>(
+    mut bytes: OwnedBytes,
+) -> io::Result<Arc<dyn ColumnValues<T>>> {
+    let header = U128Header::deserialize(&mut bytes)?;
+    assert_eq!(header.codec_type, U128FastFieldCodecType::CompactSpace);
+    let reader = CompactSpaceDecompressor::open(bytes)?;
+    let inverted: StrictlyMonotonicMappingInverter<StrictlyMonotonicMappingToInternal<T>> =
+        StrictlyMonotonicMappingToInternal::<T>::new().into();
+    Ok(Arc::new(monotonic_map_column(reader, inverted)))
+}
 
 /// Returns the correct codec reader wrapped in the `Arc` for the data.
 pub fn open_u64_mapped<T: MonotonicallyMappableToU64>(
@@ -198,13 +198,6 @@ pub(crate) trait FastFieldCodec: 'static {
     fn estimate(column: &dyn ColumnValues) -> Option<f32>;
 }
 
-/// The list of all available codecs for u64 convertible data.
-pub const ALL_CODEC_TYPES: [FastFieldCodecType; 3] = [
-    FastFieldCodecType::Bitpacked,
-    FastFieldCodecType::BlockwiseLinear,
-    FastFieldCodecType::Linear,
-];
-
 #[cfg(all(test, feature = "unstable"))]
 mod bench {
     use std::sync::Arc;

View File

@@ -2,6 +2,7 @@ use std::marker::PhantomData;
 use fastdivide::DividerU64;

+use super::MonotonicallyMappableToU128;
 use crate::RowId;

 /// Monotonic maps a value to u64 value space.

@@ -80,21 +81,20 @@ impl<T> StrictlyMonotonicMappingToInternal<T> {
     }
 }

-// TODO
-// impl<External: MonotonicallyMappableToU128, T: MonotonicallyMappableToU128>
-//     StrictlyMonotonicFn<External, u128> for StrictlyMonotonicMappingToInternal<T>
-//     where T: MonotonicallyMappableToU128
-// {
-//     #[inline(always)]
-//     fn mapping(&self, inp: External) -> u128 {
-//         External::to_u128(inp)
-//     }
-//     #[inline(always)]
-//     fn inverse(&self, out: u128) -> External {
-//         External::from_u128(out)
-//     }
-// }
+impl<External: MonotonicallyMappableToU128, T: MonotonicallyMappableToU128>
+    StrictlyMonotonicFn<External, u128> for StrictlyMonotonicMappingToInternal<T>
+where T: MonotonicallyMappableToU128
+{
+    #[inline(always)]
+    fn mapping(&self, inp: External) -> u128 {
+        External::to_u128(inp)
+    }
+    #[inline(always)]
+    fn inverse(&self, out: u128) -> External {
+        External::from_u128(out)
+    }
+}

 impl<External: MonotonicallyMappableToU64, T: MonotonicallyMappableToU64>
     StrictlyMonotonicFn<External, u64> for StrictlyMonotonicMappingToInternal<T>

@@ -194,6 +194,20 @@ impl MonotonicallyMappableToU64 for i64 {
     }
 }

+impl MonotonicallyMappableToU64 for crate::DateTime {
+    #[inline(always)]
+    fn to_u64(self) -> u64 {
+        common::i64_to_u64(self.timestamp_micros)
+    }
+
+    #[inline(always)]
+    fn from_u64(val: u64) -> Self {
+        crate::DateTime {
+            timestamp_micros: common::u64_to_i64(val),
+        }
+    }
+}
+
 impl MonotonicallyMappableToU64 for bool {
     #[inline(always)]
     fn to_u64(self) -> u64 {
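A note on the `DateTime` impl above: it reuses the crate's order-preserving i64 to u64 mapping so that timestamps still sort correctly in u64 space. A sketch of that mapping, under the assumption that `common::i64_to_u64` is the usual sign-bit flip (the actual implementation lives in the `common` crate):

    // Sketch: flipping the sign bit keeps ordering intact across i64 -> u64.
    fn i64_to_u64_sketch(val: i64) -> u64 {
        (val as u64) ^ (1u64 << 63)
    }

    fn order_preserved() {
        assert!(i64_to_u64_sketch(-1) < i64_to_u64_sketch(0));
        assert!(i64_to_u64_sketch(0) < i64_to_u64_sketch(i64::MAX));
    }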


@@ -19,9 +19,8 @@
 use std::io;
 use std::num::NonZeroU64;
-use std::sync::Arc;

-use common::{BinarySerializable, OwnedBytes, VInt};
+use common::{BinarySerializable, VInt};
 use log::warn;

 use super::bitpacked::BitpackedCodec;

@@ -33,8 +32,9 @@ use super::monotonic_mapping::{
 };
 use super::{
     monotonic_map_column, ColumnValues, FastFieldCodec, FastFieldCodecType,
-    MonotonicallyMappableToU64, U128FastFieldCodecType, VecColumn, ALL_CODEC_TYPES,
+    MonotonicallyMappableToU64, U128FastFieldCodecType,
 };
+use crate::column_values::compact_space::CompactSpaceCompressor;

 /// The normalized header gives some parameters after applying the following
 /// normalization of the vector:

@@ -160,54 +160,22 @@ impl BinarySerializable for Header {
     }
 }

-/// Return estimated compression for given codec in the value range [0.0..1.0], where 1.0 means no
-/// compression.
-pub(crate) fn estimate<T: MonotonicallyMappableToU64>(
-    typed_column: impl ColumnValues<T>,
-    codec_type: FastFieldCodecType,
-) -> Option<f32> {
-    let column = monotonic_map_column(typed_column, StrictlyMonotonicMappingToInternal::<T>::new());
-    let min_value = column.min_value();
-    let gcd = super::gcd::find_gcd(column.iter().map(|val| val - min_value))
-        .filter(|gcd| gcd.get() > 1u64);
-    let mapping = StrictlyMonotonicMappingToInternalGCDBaseval::new(
-        gcd.map(|gcd| gcd.get()).unwrap_or(1u64),
-        min_value,
-    );
-    let normalized_column = monotonic_map_column(&column, mapping);
-    match codec_type {
-        FastFieldCodecType::Bitpacked => BitpackedCodec::estimate(&normalized_column),
-        FastFieldCodecType::Linear => LinearCodec::estimate(&normalized_column),
-        FastFieldCodecType::BlockwiseLinear => BlockwiseLinearCodec::estimate(&normalized_column),
-    }
-}
-
-// TODO
 /// Serializes u128 values with the compact space codec.
-// pub fn serialize_u128_new<F: Fn() -> I, I: Iterator<Item = u128>>(
-//     value_index: ColumnIndex,
-//     iter_gen: F,
-//     num_vals: u32,
-//     output: &mut impl io::Write,
-// ) -> io::Result<()> {
-//     let header = U128Header {
-//         num_vals,
-//         codec_type: U128FastFieldCodecType::CompactSpace,
-//     };
-//     header.serialize(output)?;
-//     let compressor = CompactSpaceCompressor::train_from(iter_gen(), num_vals);
-//     compressor.compress_into(iter_gen(), output).unwrap();
-//     let null_index_footer = ColumnFooter {
-//         cardinality: value_index.get_cardinality(),
-//         null_index_codec: NullIndexCodec::Full,
-//         null_index_byte_range: 0..0,
-//     };
-//     append_null_index_footer(output, null_index_footer)?;
-//     append_format_version(output)?;
-//     Ok(())
-// }
+pub fn serialize_column_values_u128<F: Fn() -> I, I: Iterator<Item = u128>>(
+    iter_gen: F,
+    num_vals: u32,
+    output: &mut impl io::Write,
+) -> io::Result<()> {
+    let header = U128Header {
+        num_vals,
+        codec_type: U128FastFieldCodecType::CompactSpace,
+    };
+    header.serialize(output)?;
+    let compressor = CompactSpaceCompressor::train_from(iter_gen(), num_vals);
+    compressor.compress_into(iter_gen(), output)?;
+    Ok(())
+}

 /// Serializes the column with the codec with the best estimate on the data.
 pub fn serialize_column_values<T: MonotonicallyMappableToU64>(

@@ -279,20 +247,29 @@ pub(crate) fn serialize_given_codec(
     Ok(())
 }

-/// Helper function to serialize a column (autodetect from all codecs) and then open it
-pub fn serialize_and_load<T: MonotonicallyMappableToU64 + Ord + Default>(
-    column: &[T],
-) -> Arc<dyn ColumnValues<T>> {
-    let mut buffer = Vec::new();
-    super::serialize_column_values(&VecColumn::from(&column), &ALL_CODEC_TYPES, &mut buffer)
-        .unwrap();
-    super::open_u64_mapped(OwnedBytes::new(buffer)).unwrap()
-}
-
 #[cfg(test)]
-mod tests {
-    use super::*;
+pub mod tests {
+    use std::sync::Arc;
+
+    use common::OwnedBytes;
+
+    use super::*;
+    use crate::column_values::{open_u64_mapped, VecColumn};
+
+    const ALL_CODEC_TYPES: [FastFieldCodecType; 3] = [
+        FastFieldCodecType::Bitpacked,
+        FastFieldCodecType::Linear,
+        FastFieldCodecType::BlockwiseLinear,
+    ];
+
+    /// Helper function to serialize a column (autodetect from all codecs) and then open it
+    pub fn serialize_and_load<T: MonotonicallyMappableToU64 + Ord + Default>(
+        column: &[T],
+    ) -> Arc<dyn ColumnValues<T>> {
+        let mut buffer = Vec::new();
+        serialize_column_values(&VecColumn::from(&column), &ALL_CODEC_TYPES, &mut buffer).unwrap();
+        open_u64_mapped(OwnedBytes::new(buffer)).unwrap()
+    }

     #[test]
     fn test_serialize_deserialize_u128_header() {
         let original = U128Header {

@@ -319,7 +296,7 @@ mod tests {
         serialize_column_values(&col, &ALL_CODEC_TYPES, &mut buffer).unwrap();
         // TODO put the header as a footer so that it serves as a padding.
         // 5 bytes of header, 1 byte of value, 7 bytes of padding.
-        assert_eq!(buffer.len(), 5 + 1 + 7);
+        assert_eq!(buffer.len(), 5 + 1);
     }

     #[test]

@@ -328,7 +305,7 @@ mod tests {
         let col = VecColumn::from(&[true][..]);
         serialize_column_values(&col, &ALL_CODEC_TYPES, &mut buffer).unwrap();
         // 5 bytes of header, 0 bytes of value, 7 bytes of padding.
-        assert_eq!(buffer.len(), 5 + 7);
+        assert_eq!(buffer.len(), 5);
     }

     #[test]

@@ -338,6 +315,6 @@ mod tests {
         let col = VecColumn::from(&vals[..]);
         serialize_column_values(&col, &[FastFieldCodecType::Bitpacked], &mut buffer).unwrap();
         // Values are stored over 3 bits.
-        assert_eq!(buffer.len(), 7 + (3 * 80 / 8) + 7);
+        assert_eq!(buffer.len(), 7 + (3 * 80 / 8));
     }
 }
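The relocated test helper is a convenient way to see the whole u64 pipeline in one place. A hypothetical round trip, assuming `num_vals` and `get_val` are the `ColumnValues` accessors:

    // Sketch: serialize with the best-estimating codec, then reopen and read back.
    let column = serialize_and_load(&[10u64, 20u64, 30u64]);
    assert_eq!(column.num_vals(), 3);
    assert_eq!(column.get_val(1), 20u64);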


@@ -1,4 +1,5 @@
-use crate::utils::{place_bits, select_bits};
+use std::net::Ipv6Addr;
+
 use crate::value::NumericalType;
 use crate::InvalidData;

@@ -7,62 +8,152 @@ use crate::InvalidData;
 /// - bits[0..3]: Column category type.
 /// - bits[3..6]: Numerical type if necessary.
 #[derive(Hash, Eq, PartialEq, Debug, Clone, Copy)]
+#[repr(u8)]
 pub enum ColumnType {
-    Bytes,
-    Numerical(NumericalType),
-    Bool,
+    I64 = 0u8,
+    U64 = 1u8,
+    F64 = 2u8,
+    Bytes = 10u8,
+    Str = 14u8,
+    Bool = 18u8,
+    IpAddr = 22u8,
+    DateTime = 26u8,
 }

+#[cfg(test)]
+const COLUMN_TYPES: [ColumnType; 8] = [
+    ColumnType::I64,
+    ColumnType::U64,
+    ColumnType::F64,
+    ColumnType::Bytes,
+    ColumnType::Str,
+    ColumnType::Bool,
+    ColumnType::IpAddr,
+    ColumnType::DateTime,
+];
+
 impl ColumnType {
-    /// Encoded over 6 bits.
-    pub(crate) fn to_code(self) -> u8 {
-        let column_type_category;
-        let numerical_type_code: u8;
-        match self {
-            ColumnType::Bytes => {
-                column_type_category = ColumnTypeCategory::Str;
-                numerical_type_code = 0u8;
-            }
-            ColumnType::Numerical(numerical_type) => {
-                column_type_category = ColumnTypeCategory::Numerical;
-                numerical_type_code = numerical_type.to_code();
-            }
-            ColumnType::Bool => {
-                column_type_category = ColumnTypeCategory::Bool;
-                numerical_type_code = 0u8;
-            }
-        }
-        place_bits::<0, 3>(column_type_category.to_code()) | place_bits::<3, 6>(numerical_type_code)
+    pub fn to_code(self) -> u8 {
+        self as u8
     }

     pub(crate) fn try_from_code(code: u8) -> Result<ColumnType, InvalidData> {
-        if select_bits::<6, 8>(code) != 0u8 {
-            return Err(InvalidData);
-        }
-        let column_type_category_code = select_bits::<0, 3>(code);
-        let numerical_type_code = select_bits::<3, 6>(code);
-        let column_type_category = ColumnTypeCategory::try_from_code(column_type_category_code)?;
-        match column_type_category {
-            ColumnTypeCategory::Bool => {
-                if numerical_type_code != 0u8 {
-                    return Err(InvalidData);
-                }
-                Ok(ColumnType::Bool)
-            }
-            ColumnTypeCategory::Str => {
-                if numerical_type_code != 0u8 {
-                    return Err(InvalidData);
-                }
-                Ok(ColumnType::Bytes)
-            }
-            ColumnTypeCategory::Numerical => {
-                let numerical_type = NumericalType::try_from_code(numerical_type_code)?;
-                Ok(ColumnType::Numerical(numerical_type))
-            }
-        }
+        use ColumnType::*;
+        match code {
+            0u8 => Ok(I64),
+            1u8 => Ok(U64),
+            2u8 => Ok(F64),
+            10u8 => Ok(Bytes),
+            14u8 => Ok(Str),
+            18u8 => Ok(Bool),
+            22u8 => Ok(IpAddr),
+            26u8 => Ok(Self::DateTime),
+            _ => Err(InvalidData),
+        }
+    }
+}
+
+impl From<NumericalType> for ColumnType {
+    fn from(numerical_type: NumericalType) -> Self {
+        match numerical_type {
+            NumericalType::I64 => ColumnType::I64,
+            NumericalType::U64 => ColumnType::U64,
+            NumericalType::F64 => ColumnType::F64,
+        }
     }
 }

+impl ColumnType {
+    /// get column type category
+    pub(crate) fn column_type_category(self) -> ColumnTypeCategory {
+        match self {
+            ColumnType::I64 | ColumnType::U64 | ColumnType::F64 => ColumnTypeCategory::Numerical,
+            ColumnType::Bytes => ColumnTypeCategory::Bytes,
+            ColumnType::Str => ColumnTypeCategory::Str,
+            ColumnType::Bool => ColumnTypeCategory::Bool,
+            ColumnType::IpAddr => ColumnTypeCategory::IpAddr,
+            ColumnType::DateTime => ColumnTypeCategory::DateTime,
+        }
+    }
+
+    pub fn numerical_type(&self) -> Option<NumericalType> {
+        match self {
+            ColumnType::I64 => Some(NumericalType::I64),
+            ColumnType::U64 => Some(NumericalType::U64),
+            ColumnType::F64 => Some(NumericalType::F64),
+            ColumnType::Bytes
+            | ColumnType::Str
+            | ColumnType::Bool
+            | ColumnType::IpAddr
+            | ColumnType::DateTime => None,
+        }
+    }
+}
+
+// TODO remove if possible
+pub trait HasAssociatedColumnType: 'static + Send + Sync + Copy + PartialOrd {
+    fn column_type() -> ColumnType;
+    fn default_value() -> Self;
+}
+
+impl HasAssociatedColumnType for u64 {
+    fn column_type() -> ColumnType {
+        ColumnType::U64
+    }
+    fn default_value() -> Self {
+        0u64
+    }
+}
+
+impl HasAssociatedColumnType for i64 {
+    fn column_type() -> ColumnType {
+        ColumnType::I64
+    }
+    fn default_value() -> Self {
+        0i64
+    }
+}
+
+impl HasAssociatedColumnType for f64 {
+    fn column_type() -> ColumnType {
+        ColumnType::F64
+    }
+    fn default_value() -> Self {
+        Default::default()
+    }
+}
+
+impl HasAssociatedColumnType for bool {
+    fn column_type() -> ColumnType {
+        ColumnType::Bool
+    }
+    fn default_value() -> Self {
+        Default::default()
+    }
+}
+
+impl HasAssociatedColumnType for crate::DateTime {
+    fn column_type() -> ColumnType {
+        ColumnType::DateTime
+    }
+    fn default_value() -> Self {
+        Default::default()
+    }
+}
+
+impl HasAssociatedColumnType for Ipv6Addr {
+    fn column_type() -> ColumnType {
+        ColumnType::IpAddr
+    }
+    fn default_value() -> Self {
+        Ipv6Addr::from([0u8; 16])
+    }
+}
+
 /// Column types are grouped into different categories that
 /// corresponds to the different types of `JsonValue` types.
 ///

@@ -70,25 +161,28 @@ impl ColumnType {
 /// at most one column exist per `ColumnTypeCategory`.
 ///
 /// See also [README.md].
-#[derive(Copy, Clone, Ord, PartialOrd, Eq, PartialEq, Debug)]
+#[derive(Copy, Clone, Ord, PartialOrd, Eq, PartialEq, Hash, Debug)]
 #[repr(u8)]
-pub(crate) enum ColumnTypeCategory {
-    Bool = 0u8,
-    Str = 1u8,
-    Numerical = 2u8,
+pub enum ColumnTypeCategory {
+    Bool,
+    Str,
+    Numerical,
+    DateTime,
+    Bytes,
+    IpAddr,
 }

-impl ColumnTypeCategory {
-    pub fn to_code(self) -> u8 {
-        self as u8
-    }
-
-    pub fn try_from_code(code: u8) -> Result<Self, InvalidData> {
-        match code {
-            0u8 => Ok(Self::Bool),
-            1u8 => Ok(Self::Str),
-            2u8 => Ok(Self::Numerical),
-            _ => Err(InvalidData),
+impl From<ColumnType> for ColumnTypeCategory {
+    fn from(column_type: ColumnType) -> Self {
+        match column_type {
+            ColumnType::I64 => ColumnTypeCategory::Numerical,
+            ColumnType::U64 => ColumnTypeCategory::Numerical,
+            ColumnType::F64 => ColumnTypeCategory::Numerical,
+            ColumnType::Bytes => ColumnTypeCategory::Bytes,
+            ColumnType::Str => ColumnTypeCategory::Str,
+            ColumnType::Bool => ColumnTypeCategory::Bool,
+            ColumnType::IpAddr => ColumnTypeCategory::IpAddr,
+            ColumnType::DateTime => ColumnTypeCategory::DateTime,
         }
     }
 }

@@ -109,7 +203,22 @@ mod tests {
             assert!(column_type_set.insert(column_type));
         }
-        assert_eq!(column_type_set.len(), 2 + 3);
+        assert_eq!(column_type_set.len(), super::COLUMN_TYPES.len());
+    }
+
+    #[test]
+    fn test_column_category_sort_consistent_with_column_type_sort() {
+        // This is a very important property because we
+        // need to serialize columns in the right order.
+        let mut column_types: Vec<ColumnType> = super::COLUMN_TYPES.iter().copied().collect();
+        column_types.sort_by_key(|col| col.to_code());
+        let column_categories: Vec<ColumnTypeCategory> = column_types
+            .into_iter()
+            .map(ColumnTypeCategory::from)
+            .collect();
+        for (prev, next) in column_categories.iter().zip(column_categories.iter().skip(1)) {
+            assert!(prev <= next);
+        }
     }

     #[test]
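Since the enum discriminants now double as the wire codes, the invariant worth keeping in mind is the code round trip. A sketch that must hold for every valid code (it mirrors the `COLUMN_TYPES` uniqueness test above):

    // Sketch: every code that deserializes must serialize back to itself.
    for code in 0u8..=255u8 {
        if let Ok(column_type) = ColumnType::try_from_code(code) {
            assert_eq!(column_type.to_code(), code);
        }
    }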


@@ -0,0 +1,176 @@
use std::collections::HashMap;
use std::io;

use super::column_type::ColumnTypeCategory;
use crate::columnar::ColumnarReader;
use crate::dynamic_column::DynamicColumn;

pub enum MergeDocOrder {
    /// Columnar tables are simply stacked one above the other.
    /// If the i-th columnar_readers has n_rows_i rows, then
    /// in the resulting columnar,
    /// rows [0..n_rows_0) contain the rows of columnar_readers[0], in order,
    /// rows [n_rows_0..n_rows_0 + n_rows_1) contain the rows of columnar_readers[1], in order,
    /// ..
    Stack,
    /// Some more complex mapping, that can interleave rows from the different readers and
    /// possibly drop rows.
    Complex(()),
}

pub fn merge_columnar(
    _columnar_readers: &[ColumnarReader],
    mapping: MergeDocOrder,
    _output: &mut impl io::Write,
) -> io::Result<()> {
    match mapping {
        MergeDocOrder::Stack => {
            // implement me :)
            todo!();
        }
        MergeDocOrder::Complex(_) => {
            // for later
            todo!();
        }
    }
}
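To make the `Stack` contract concrete: row `row_id` of reader `i` lands at the sum of the row counts of readers `0..i`, plus `row_id`. A hypothetical helper (not part of the commit) spelling out that arithmetic:

    // Sketch: where a row of `columnar_readers[reader_ord]` lands after stacking.
    fn stacked_row_id(num_rows_per_reader: &[u32], reader_ord: usize, row_id: u32) -> u32 {
        let offset: u32 = num_rows_per_reader[..reader_ord].iter().sum();
        offset + row_id
    }
    // e.g. with readers of 2, 3 and 5 rows, row 1 of reader 2 lands at 2 + 3 + 1 = 6.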
pub fn collect_columns(
    columnar_readers: &[&ColumnarReader],
) -> io::Result<HashMap<String, HashMap<ColumnTypeCategory, Vec<DynamicColumn>>>> {
    // Each column name may have multiple types of column associated.
    // For merging we are interested in the same column type category since they can be merged.
    let mut field_name_to_group: HashMap<String, HashMap<ColumnTypeCategory, Vec<DynamicColumn>>> =
        HashMap::new();

    for columnar_reader in columnar_readers {
        let column_name_and_handle = columnar_reader.list_columns()?;
        for (column_name, handle) in column_name_and_handle {
            let column_type_to_handles = field_name_to_group
                .entry(column_name.to_string())
                .or_default();

            let columns = column_type_to_handles
                .entry(handle.column_type().column_type_category())
                .or_default();
            columns.push(handle.open()?);
        }
    }
    normalize_columns(&mut field_name_to_group);
    Ok(field_name_to_group)
}

/// Cast numerical type columns to the same type
pub(crate) fn normalize_columns(
    map: &mut HashMap<String, HashMap<ColumnTypeCategory, Vec<DynamicColumn>>>,
) {
    for (_field_name, type_category_to_columns) in map.iter_mut() {
        for (type_category, columns) in type_category_to_columns {
            if type_category == &ColumnTypeCategory::Numerical {
                let casted_columns = cast_to_common_numerical_column(&columns);
                *columns = casted_columns;
            }
        }
    }
}

/// Receives a list of columns of numerical types (u64, i64, f64).
///
/// Returns a list of `DynamicColumn` which are all of the same numerical type.
fn cast_to_common_numerical_column(columns: &[DynamicColumn]) -> Vec<DynamicColumn> {
    assert!(columns
        .iter()
        .all(|column| column.column_type().numerical_type().is_some()));
    let coerce_to_i64: Vec<_> = columns
        .iter()
        .map(|column| column.clone().coerce_to_i64())
        .collect();

    if coerce_to_i64.iter().all(|column| column.is_some()) {
        return coerce_to_i64
            .into_iter()
            .map(|column| column.unwrap())
            .collect();
    }

    let coerce_to_u64: Vec<_> = columns
        .iter()
        .map(|column| column.clone().coerce_to_u64())
        .collect();

    if coerce_to_u64.iter().all(|column| column.is_some()) {
        return coerce_to_u64
            .into_iter()
            .map(|column| column.unwrap())
            .collect();
    }

    columns
        .iter()
        .map(|column| {
            column
                .clone()
                .coerce_to_f64()
                .expect("couldn't cast column to f64")
        })
        .collect()
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::ColumnarWriter;

    #[test]
    fn test_column_coercion() {
        // i64 type
        let columnar1 = {
            let mut dataframe_writer = ColumnarWriter::default();
            dataframe_writer.record_numerical(1u32, "numbers", 1i64);
            let mut buffer: Vec<u8> = Vec::new();
            dataframe_writer.serialize(2, &mut buffer).unwrap();
            ColumnarReader::open(buffer).unwrap()
        };
        // u64 type
        let columnar2 = {
            let mut dataframe_writer = ColumnarWriter::default();
            dataframe_writer.record_numerical(1u32, "numbers", u64::MAX - 100);
            let mut buffer: Vec<u8> = Vec::new();
            dataframe_writer.serialize(2, &mut buffer).unwrap();
            ColumnarReader::open(buffer).unwrap()
        };
        // f64 type
        let columnar3 = {
            let mut dataframe_writer = ColumnarWriter::default();
            dataframe_writer.record_numerical(1u32, "numbers", 30.5);
            let mut buffer: Vec<u8> = Vec::new();
            dataframe_writer.serialize(2, &mut buffer).unwrap();
            ColumnarReader::open(buffer).unwrap()
        };

        let column_map = collect_columns(&[&columnar1, &columnar2, &columnar3]).unwrap();
        assert_eq!(column_map.len(), 1);
        let cat_to_columns = column_map.get("numbers").unwrap();
        assert_eq!(cat_to_columns.len(), 1);
        let numerical = cat_to_columns.get(&ColumnTypeCategory::Numerical).unwrap();
        assert!(numerical.iter().all(|column| column.is_f64()));

        let column_map = collect_columns(&[&columnar1, &columnar1]).unwrap();
        assert_eq!(column_map.len(), 1);
        let cat_to_columns = column_map.get("numbers").unwrap();
        assert_eq!(cat_to_columns.len(), 1);
        let numerical = cat_to_columns.get(&ColumnTypeCategory::Numerical).unwrap();
        assert!(numerical.iter().all(|column| column.is_i64()));

        let column_map = collect_columns(&[&columnar2, &columnar2]).unwrap();
        assert_eq!(column_map.len(), 1);
        let cat_to_columns = column_map.get("numbers").unwrap();
        assert_eq!(cat_to_columns.len(), 1);
        let numerical = cat_to_columns.get(&ColumnTypeCategory::Numerical).unwrap();
        assert!(numerical.iter().all(|column| column.is_u64()));
    }
}


@@ -1,28 +1,10 @@
-// Copyright (C) 2022 Quickwit, Inc.
-//
-// Quickwit is offered under the AGPL v3.0 and as commercial software.
-// For commercial licensing, contact us at hello@quickwit.io.
-//
-// AGPL:
-// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as
-// published by the Free Software Foundation, either version 3 of the
-// License, or (at your option) any later version.
-//
-// This program is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
-//
-// You should have received a copy of the GNU Affero General Public License
-// along with this program. If not, see <http://www.gnu.org/licenses/>.
-//
 mod column_type;
 mod format_version;
+mod merge;
 mod reader;
 mod writer;

-pub use column_type::ColumnType;
+pub use column_type::{ColumnType, HasAssociatedColumnType};
+pub use merge::{merge_columnar, MergeDocOrder};
 pub use reader::ColumnarReader;
 pub use writer::ColumnarWriter;


@@ -44,7 +44,7 @@ impl ColumnarReader {
         })
     }

-    // TODO fix ugly API
+    // TODO Add unit tests
     pub fn list_columns(&self) -> io::Result<Vec<(String, DynamicColumnHandle)>> {
         let mut stream = self.column_dictionary.stream()?;
         let mut results = Vec::new();

@@ -55,7 +55,8 @@ impl ColumnarReader {
                 .map_err(|_| io_invalid_data(format!("Unknown column code `{column_code}`")))?;
             let range = stream.value().clone();
             let column_name =
-                String::from_utf8_lossy(&key_bytes[..key_bytes.len() - 1]).to_string();
+                // The last two bytes are respectively the 0u8 separator and the column_type.
+                String::from_utf8_lossy(&key_bytes[..key_bytes.len() - 2]).to_string();
             let file_slice = self
                 .column_data
                 .slice(range.start as usize..range.end as usize);


@@ -1,3 +1,5 @@
+use std::net::Ipv6Addr;
+
 use crate::dictionary::UnorderedId;
 use crate::utils::{place_bits, pop_first_byte, select_bits};
 use crate::value::NumericalValue;

@@ -25,12 +27,12 @@ struct ColumnOperationMetadata {
 impl ColumnOperationMetadata {
     fn to_code(self) -> u8 {
-        place_bits::<0, 4>(self.len) | place_bits::<4, 8>(self.op_type.to_code())
+        place_bits::<0, 6>(self.len) | place_bits::<6, 8>(self.op_type.to_code())
     }

     fn try_from_code(code: u8) -> Result<Self, InvalidData> {
-        let len = select_bits::<0, 4>(code);
-        let typ_code = select_bits::<4, 8>(code);
+        let len = select_bits::<0, 6>(code);
+        let typ_code = select_bits::<6, 8>(code);
         let column_type = ColumnOperationType::try_from_code(typ_code)?;
         Ok(ColumnOperationMetadata {
             op_type: column_type,

@@ -142,9 +144,21 @@ impl SymbolValue for bool {
     }
 }

+impl SymbolValue for Ipv6Addr {
+    fn serialize(self, buffer: &mut [u8]) -> u8 {
+        buffer[0..16].copy_from_slice(&self.octets());
+        16
+    }
+
+    fn deserialize(bytes: &[u8]) -> Self {
+        let octets: [u8; 16] = bytes[0..16].try_into().unwrap();
+        Ipv6Addr::from(octets)
+    }
+}
+
 #[derive(Default)]
 struct MiniBuffer {
-    pub bytes: [u8; 10],
+    pub bytes: [u8; 17],
     pub len: u8,
 }
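The widened metadata byte explains the `MiniBuffer` bump: the low 6 bits now carry the payload length (up to 63 bytes, comfortably covering a 16-byte `Ipv6Addr` symbol) and the top 2 bits the operation type, so 1 metadata byte + 16 payload bytes = 17. A sketch of the packing (the `op_type_code` values here are illustrative, not the crate's actual codes):

    // Sketch: pack a payload length (< 64) and an op-type code (< 4) into one byte.
    fn pack_metadata_sketch(len: u8, op_type_code: u8) -> u8 {
        debug_assert!(len < 64);
        debug_assert!(op_type_code < 4);
        len | (op_type_code << 6)
    }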


@@ -102,18 +102,29 @@ pub(crate) struct NumericalColumnWriter {
     column_writer: ColumnWriter,
 }

+impl NumericalColumnWriter {
+    pub fn force_numerical_type(&mut self, numerical_type: NumericalType) {
+        assert!(self
+            .compatible_numerical_types
+            .is_type_accepted(numerical_type));
+        self.compatible_numerical_types = CompatibleNumericalTypes::StaticType(numerical_type);
+    }
+}
+
 /// State used to store what types are still acceptable
 /// after having seen a set of numerical values.
 #[derive(Clone, Copy)]
-struct CompatibleNumericalTypes {
-    all_values_within_i64_range: bool,
-    all_values_within_u64_range: bool,
-    // f64 is always acceptable.
+enum CompatibleNumericalTypes {
+    Dynamic {
+        all_values_within_i64_range: bool,
+        all_values_within_u64_range: bool,
+    },
+    StaticType(NumericalType),
 }

 impl Default for CompatibleNumericalTypes {
     fn default() -> CompatibleNumericalTypes {
-        CompatibleNumericalTypes {
+        CompatibleNumericalTypes::Dynamic {
             all_values_within_i64_range: true,
             all_values_within_u64_range: true,
         }

@@ -121,31 +132,54 @@ impl Default for CompatibleNumericalTypes {
 }

 impl CompatibleNumericalTypes {
+    fn is_type_accepted(&self, numerical_type: NumericalType) -> bool {
+        match self {
+            CompatibleNumericalTypes::Dynamic {
+                all_values_within_i64_range,
+                all_values_within_u64_range,
+            } => match numerical_type {
+                NumericalType::I64 => *all_values_within_i64_range,
+                NumericalType::U64 => *all_values_within_u64_range,
+                NumericalType::F64 => true,
+            },
+            CompatibleNumericalTypes::StaticType(static_numerical_type) => {
+                *static_numerical_type == numerical_type
+            }
+        }
+    }
+
     fn accept_value(&mut self, numerical_value: NumericalValue) {
-        match numerical_value {
-            NumericalValue::I64(val_i64) => {
-                let value_within_u64_range = val_i64 >= 0i64;
-                self.all_values_within_u64_range &= value_within_u64_range;
-            }
-            NumericalValue::U64(val_u64) => {
-                let value_within_i64_range = val_u64 < i64::MAX as u64;
-                self.all_values_within_i64_range &= value_within_i64_range;
-            }
-            NumericalValue::F64(_) => {
-                self.all_values_within_i64_range = false;
-                self.all_values_within_u64_range = false;
+        match self {
+            CompatibleNumericalTypes::Dynamic {
+                all_values_within_i64_range,
+                all_values_within_u64_range,
+            } => match numerical_value {
+                NumericalValue::I64(val_i64) => {
+                    let value_within_u64_range = val_i64 >= 0i64;
+                    *all_values_within_u64_range &= value_within_u64_range;
+                }
+                NumericalValue::U64(val_u64) => {
+                    let value_within_i64_range = val_u64 < i64::MAX as u64;
+                    *all_values_within_i64_range &= value_within_i64_range;
+                }
+                NumericalValue::F64(_) => {
+                    *all_values_within_i64_range = false;
+                    *all_values_within_u64_range = false;
+                }
+            },
+            CompatibleNumericalTypes::StaticType(typ) => {
+                assert_eq!(numerical_value.numerical_type(), *typ);
             }
         }
     }

     pub fn to_numerical_type(self) -> NumericalType {
-        if self.all_values_within_i64_range {
-            NumericalType::I64
-        } else if self.all_values_within_u64_range {
-            NumericalType::U64
-        } else {
-            NumericalType::F64
+        for numerical_type in [NumericalType::I64, NumericalType::U64] {
+            if self.is_type_accepted(numerical_type) {
+                return numerical_type;
+            }
         }
+        NumericalType::F64
     }
 }

@@ -175,15 +209,15 @@ impl NumericalColumnWriter {
     }
 }

-#[derive(Copy, Clone, Default)]
-pub(crate) struct StrColumnWriter {
+#[derive(Copy, Clone)]
+pub(crate) struct StrOrBytesColumnWriter {
     pub(crate) dictionary_id: u32,
     pub(crate) column_writer: ColumnWriter,
 }

-impl StrColumnWriter {
-    pub(crate) fn with_dictionary_id(dictionary_id: u32) -> StrColumnWriter {
-        StrColumnWriter {
+impl StrOrBytesColumnWriter {
+    pub(crate) fn with_dictionary_id(dictionary_id: u32) -> StrOrBytesColumnWriter {
+        StrOrBytesColumnWriter {
             dictionary_id,
             column_writer: Default::default(),
         }

@@ -262,4 +296,27 @@ mod tests {
         test_column_writer_coercion_aux(&[1i64.into(), 1u64.into()], NumericalType::I64);
         test_column_writer_coercion_aux(&[u64::MAX.into(), (-1i64).into()], NumericalType::F64);
     }
+
+    #[test]
+    #[should_panic]
+    fn test_compatible_numerical_types_static_incompatible_type() {
+        let mut compatible_numerical_types =
+            CompatibleNumericalTypes::StaticType(NumericalType::U64);
+        compatible_numerical_types.accept_value(NumericalValue::I64(1i64));
+    }
+
+    #[test]
+    fn test_compatible_numerical_types_static_same_type_allowed() {
+        let mut compatible_numerical_types =
+            CompatibleNumericalTypes::StaticType(NumericalType::U64);
+        compatible_numerical_types.accept_value(NumericalValue::U64(u64::MAX));
+    }
+
+    #[test]
+    fn test_compatible_numerical_types_static() {
+        for typ in [NumericalType::I64, NumericalType::U64, NumericalType::F64] {
+            let compatible_numerical_types = CompatibleNumericalTypes::StaticType(typ);
+            assert_eq!(compatible_numerical_types.to_numerical_type(), typ);
+        }
+    }
 }
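Read together, the `Dynamic` variant degrades monotonically as values arrive, and `to_numerical_type` then picks the first type still accepted, preferring i64, then u64, then f64. A step-by-step sketch restating the coercion tests above:

    // Sketch: a negative i64 rules out u64; a huge u64 then rules out i64 too.
    let mut compat = CompatibleNumericalTypes::default();
    compat.accept_value(NumericalValue::I64(-1));
    assert!(!compat.is_type_accepted(NumericalType::U64));
    compat.accept_value(NumericalValue::U64(u64::MAX));
    assert_eq!(compat.to_numerical_type(), NumericalType::F64);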


@@ -4,6 +4,7 @@ mod serializer;
 mod value_index;

 use std::io;
+use std::net::Ipv6Addr;

 use column_operation::ColumnOperation;
 use common::CountingWriter;

@@ -11,10 +12,12 @@ use serializer::ColumnarSerializer;
 use stacker::{Addr, ArenaHashMap, MemoryArena};

 use crate::column_index::SerializableColumnIndex;
-use crate::column_values::{ColumnValues, MonotonicallyMappableToU64, VecColumn};
+use crate::column_values::{
+    ColumnValues, MonotonicallyMappableToU128, MonotonicallyMappableToU64, VecColumn,
+};
 use crate::columnar::column_type::{ColumnType, ColumnTypeCategory};
 use crate::columnar::writer::column_writers::{
-    ColumnWriter, NumericalColumnWriter, StrColumnWriter,
+    ColumnWriter, NumericalColumnWriter, StrOrBytesColumnWriter,
 };
 use crate::columnar::writer::value_index::{IndexBuilder, PreallocatedIndexBuilders};
 use crate::dictionary::{DictionaryBuilder, TermIdMapping, UnorderedId};

@@ -30,6 +33,7 @@ struct SpareBuffers {
     u64_values: Vec<u64>,
     f64_values: Vec<f64>,
     bool_values: Vec<bool>,
+    ip_addr_values: Vec<Ipv6Addr>,
 }

 /// Makes it possible to create a new columnar.

@@ -47,8 +51,11 @@ struct SpareBuffers {
 /// ```
 pub struct ColumnarWriter {
     numerical_field_hash_map: ArenaHashMap,
+    datetime_field_hash_map: ArenaHashMap,
     bool_field_hash_map: ArenaHashMap,
+    ip_addr_field_hash_map: ArenaHashMap,
     bytes_field_hash_map: ArenaHashMap,
+    str_field_hash_map: ArenaHashMap,
     arena: MemoryArena,
     // Dictionaries used to store dictionary-encoded values.
     dictionaries: Vec<DictionaryBuilder>,

@@ -60,7 +67,10 @@ impl Default for ColumnarWriter {
         ColumnarWriter {
             numerical_field_hash_map: ArenaHashMap::new(10_000),
             bool_field_hash_map: ArenaHashMap::new(10_000),
+            ip_addr_field_hash_map: ArenaHashMap::new(10_000),
             bytes_field_hash_map: ArenaHashMap::new(10_000),
+            str_field_hash_map: ArenaHashMap::new(10_000),
+            datetime_field_hash_map: ArenaHashMap::new(10_000),
             dictionaries: Vec::new(),
             arena: MemoryArena::default(),
             buffers: SpareBuffers::default(),

@@ -68,20 +78,115 @@ impl Default for ColumnarWriter {
     }
 }
+#[inline]
+fn mutate_or_create_column<V, TMutator>(
+    arena_hash_map: &mut ArenaHashMap,
+    column_name: &str,
+    updater: TMutator,
+) where
+    V: Copy + 'static,
+    TMutator: FnMut(Option<V>) -> V,
+{
+    assert!(
+        !column_name.as_bytes().contains(&0u8),
+        "key may not contain the 0 byte"
+    );
+    arena_hash_map.mutate_or_create(column_name.as_bytes(), updater);
+}
+
 impl ColumnarWriter {
+    pub fn mem_usage(&self) -> usize {
+        // TODO add dictionary builders.
+        self.arena.mem_usage()
+            + self.numerical_field_hash_map.mem_usage()
+            + self.bool_field_hash_map.mem_usage()
+            + self.bytes_field_hash_map.mem_usage()
+            + self.str_field_hash_map.mem_usage()
+            + self.ip_addr_field_hash_map.mem_usage()
+            + self.datetime_field_hash_map.mem_usage()
+    }
+
+    pub fn record_column_type(&mut self, column_name: &str, column_type: ColumnType) {
+        match column_type {
+            ColumnType::Str | ColumnType::Bytes => {
+                let (hash_map, dictionaries) = (
+                    if column_type == ColumnType::Str {
+                        &mut self.str_field_hash_map
+                    } else {
+                        &mut self.bytes_field_hash_map
+                    },
+                    &mut self.dictionaries,
+                );
+                mutate_or_create_column(
+                    hash_map,
+                    column_name,
+                    |column_opt: Option<StrOrBytesColumnWriter>| {
+                        if let Some(column_writer) = column_opt {
+                            column_writer
+                        } else {
+                            let dictionary_id = dictionaries.len() as u32;
+                            dictionaries.push(DictionaryBuilder::default());
+                            StrOrBytesColumnWriter::with_dictionary_id(dictionary_id)
+                        }
+                    },
+                );
+            }
+            ColumnType::Bool => {
+                mutate_or_create_column(
+                    &mut self.bool_field_hash_map,
+                    column_name,
+                    |column_opt: Option<ColumnWriter>| column_opt.unwrap_or_default(),
+                );
+            }
+            ColumnType::DateTime => {
+                mutate_or_create_column(
+                    &mut self.datetime_field_hash_map,
+                    column_name,
+                    |column_opt: Option<ColumnWriter>| column_opt.unwrap_or_default(),
+                );
+            }
+            ColumnType::I64 | ColumnType::F64 | ColumnType::U64 => {
+                let numerical_type = column_type.numerical_type().unwrap();
+                mutate_or_create_column(
+                    &mut self.numerical_field_hash_map,
+                    column_name,
+                    |column_opt: Option<NumericalColumnWriter>| {
+                        let mut column: NumericalColumnWriter = column_opt.unwrap_or_default();
+                        column.force_numerical_type(numerical_type);
+                        column
+                    },
+                );
+            }
+            ColumnType::IpAddr => mutate_or_create_column(
+                &mut self.ip_addr_field_hash_map,
+                column_name,
+                |column_opt: Option<ColumnWriter>| column_opt.unwrap_or_default(),
+            ),
+        }
+    }
+
+    pub fn force_numerical_type(&mut self, column_name: &str, numerical_type: NumericalType) {
+        mutate_or_create_column(
+            &mut self.numerical_field_hash_map,
+            column_name,
+            |column_opt: Option<NumericalColumnWriter>| {
+                let mut column: NumericalColumnWriter = column_opt.unwrap_or_default();
+                column.force_numerical_type(numerical_type);
+                column
+            },
+        );
+    }
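A sketch of the writer's intended call pattern with these new entry points (hypothetical column names and values; the dataframe-style usage mirrors the crate's own tests):

    // Sketch: pin a column's type up front, record a few rows, then serialize.
    let mut writer = ColumnarWriter::default();
    writer.record_column_type("price", ColumnType::U64);
    writer.record_numerical(0u32, "price", 10u64);
    writer.record_str(0u32, "category", "books");
    writer.record_bool(1u32, "in_stock", true);
    let mut buffer: Vec<u8> = Vec::new();
    writer.serialize(2, &mut buffer)?; // 2 rows in the resulting columnar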
     pub fn record_numerical<T: Into<NumericalValue> + Copy>(
         &mut self,
         doc: RowId,
         column_name: &str,
         numerical_value: T,
     ) {
-        assert!(
-            !column_name.as_bytes().contains(&0u8),
-            "key may not contain the 0 byte"
-        );
         let (hash_map, arena) = (&mut self.numerical_field_hash_map, &mut self.arena);
-        hash_map.mutate_or_create(
-            column_name.as_bytes(),
+        mutate_or_create_column(
+            hash_map,
+            column_name,
             |column_opt: Option<NumericalColumnWriter>| {
                 let mut column: NumericalColumnWriter = column_opt.unwrap_or_default();
                 column.record_numerical_value(doc, numerical_value.into(), arena);

@@ -90,23 +195,62 @@ impl ColumnarWriter {
         );
     }

-    pub fn record_bool(&mut self, doc: RowId, column_name: &str, val: bool) {
+    pub fn record_ip_addr(&mut self, doc: RowId, column_name: &str, ip_addr: Ipv6Addr) {
         assert!(
             !column_name.as_bytes().contains(&0u8),
             "key may not contain the 0 byte"
         );
-        let (hash_map, arena) = (&mut self.bool_field_hash_map, &mut self.arena);
+        let (hash_map, arena) = (&mut self.ip_addr_field_hash_map, &mut self.arena);
         hash_map.mutate_or_create(
             column_name.as_bytes(),
             |column_opt: Option<ColumnWriter>| {
                 let mut column: ColumnWriter = column_opt.unwrap_or_default();
-                column.record(doc, val, arena);
+                column.record(doc, ip_addr, arena);
                 column
             },
         );
     }

+    pub fn record_bool(&mut self, doc: RowId, column_name: &str, val: bool) {
+        let (hash_map, arena) = (&mut self.bool_field_hash_map, &mut self.arena);
+        mutate_or_create_column(hash_map, column_name, |column_opt: Option<ColumnWriter>| {
+            let mut column: ColumnWriter = column_opt.unwrap_or_default();
+            column.record(doc, val, arena);
+            column
+        });
+    }
+
+    pub fn record_datetime(&mut self, doc: RowId, column_name: &str, datetime: crate::DateTime) {
+        let (hash_map, arena) = (&mut self.datetime_field_hash_map, &mut self.arena);
+        mutate_or_create_column(hash_map, column_name, |column_opt: Option<ColumnWriter>| {
+            let mut column: ColumnWriter = column_opt.unwrap_or_default();
+            column.record(doc, NumericalValue::I64(datetime.timestamp_micros), arena);
+            column
+        });
+    }
+
     pub fn record_str(&mut self, doc: RowId, column_name: &str, value: &str) {
+        let (hash_map, arena, dictionaries) = (
+            &mut self.str_field_hash_map,
+            &mut self.arena,
+            &mut self.dictionaries,
+        );
+        hash_map.mutate_or_create(
+            column_name.as_bytes(),
+            |column_opt: Option<StrOrBytesColumnWriter>| {
+                let mut column: StrOrBytesColumnWriter = column_opt.unwrap_or_else(|| {
+                    // Each column has its own dictionary
+                    let dictionary_id = dictionaries.len() as u32;
+                    dictionaries.push(DictionaryBuilder::default());
+                    StrOrBytesColumnWriter::with_dictionary_id(dictionary_id)
+                });
+                column.record_bytes(doc, value.as_bytes(), dictionaries, arena);
+                column
+            },
+        );
+    }
+
+    pub fn record_bytes(&mut self, doc: RowId, column_name: &str, value: &[u8]) {
         assert!(
             !column_name.as_bytes().contains(&0u8),
             "key may not contain the 0 byte"

@@ -118,41 +262,56 @@ impl ColumnarWriter {
         );
         hash_map.mutate_or_create(
             column_name.as_bytes(),
-            |column_opt: Option<StrColumnWriter>| {
-                let mut column: StrColumnWriter = column_opt.unwrap_or_else(|| {
+            |column_opt: Option<StrOrBytesColumnWriter>| {
+                let mut column: StrOrBytesColumnWriter = column_opt.unwrap_or_else(|| {
                     // Each column has its own dictionary
                     let dictionary_id = dictionaries.len() as u32;
                     dictionaries.push(DictionaryBuilder::default());
-                    StrColumnWriter::with_dictionary_id(dictionary_id)
+                    StrOrBytesColumnWriter::with_dictionary_id(dictionary_id)
                 });
-                column.record_bytes(doc, value.as_bytes(), dictionaries, arena);
+                column.record_bytes(doc, value, dictionaries, arena);
                 column
             },
         );
     }

     pub fn serialize(&mut self, num_docs: RowId, wrt: &mut dyn io::Write) -> io::Result<()> {
         let mut serializer = ColumnarSerializer::new(wrt);
-        let mut field_columns: Vec<(&[u8], ColumnTypeCategory, Addr)> = self
+        let mut columns: Vec<(&[u8], ColumnTypeCategory, Addr)> = self
             .numerical_field_hash_map
             .iter()
-            .map(|(term, addr, _)| (term, ColumnTypeCategory::Numerical, addr))
+            .map(|(column_name, addr, _)| (column_name, ColumnTypeCategory::Numerical, addr))
             .collect();
-        field_columns.extend(
+        columns.extend(
             self.bytes_field_hash_map
                 .iter()
-                .map(|(term, addr, _)| (term, ColumnTypeCategory::Str, addr)),
+                .map(|(term, addr, _)| (term, ColumnTypeCategory::Bytes, addr)),
         );
-        field_columns.extend(
+        columns.extend(
+            self.str_field_hash_map
+                .iter()
+                .map(|(column_name, addr, _)| (column_name, ColumnTypeCategory::Str, addr)),
+        );
+        columns.extend(
             self.bool_field_hash_map
                 .iter()
-                .map(|(term, addr, _)| (term, ColumnTypeCategory::Bool, addr)),
+                .map(|(column_name, addr, _)| (column_name, ColumnTypeCategory::Bool, addr)),
         );
-        field_columns.sort_unstable_by_key(|(column_name, col_type, _)| (*column_name, *col_type));
+        columns.extend(
+            self.ip_addr_field_hash_map
+                .iter()
+                .map(|(column_name, addr, _)| (column_name, ColumnTypeCategory::IpAddr, addr)),
+        );
+        columns.extend(
+            self.datetime_field_hash_map
+                .iter()
+                .map(|(column_name, addr, _)| (column_name, ColumnTypeCategory::DateTime, addr)),
+        );
+        columns.sort_unstable_by_key(|(column_name, col_type, _)| (*column_name, *col_type));
         let (arena, buffers, dictionaries) = (&self.arena, &mut self.buffers, &self.dictionaries);
         let mut symbol_byte_buffer: Vec<u8> = Vec::new();
-        for (column_name, bytes_or_numerical, addr) in field_columns {
-            match bytes_or_numerical {
+        for (column_name, column_type, addr) in columns {
+            match column_type {
                 ColumnTypeCategory::Bool => {
                     let column_writer: ColumnWriter = self.bool_field_hash_map.read(addr);
                     let cardinality = column_writer.get_cardinality(num_docs);
@@ -166,14 +325,32 @@ impl ColumnarWriter {
                         &mut column_serializer,
                     )?;
                 }
-                ColumnTypeCategory::Str => {
-                    let str_column_writer: StrColumnWriter = self.bytes_field_hash_map.read(addr);
+                ColumnTypeCategory::IpAddr => {
+                    let column_writer: ColumnWriter = self.ip_addr_field_hash_map.read(addr);
+                    let cardinality = column_writer.get_cardinality(num_docs);
+                    let mut column_serializer =
+                        serializer.serialize_column(column_name, ColumnType::IpAddr);
+                    serialize_ip_addr_column(
+                        cardinality,
+                        num_docs,
+                        column_writer.operation_iterator(arena, &mut symbol_byte_buffer),
+                        buffers,
+                        &mut column_serializer,
+                    )?;
+                }
+                ColumnTypeCategory::Bytes | ColumnTypeCategory::Str => {
+                    let (column_type, str_column_writer): (ColumnType, StrOrBytesColumnWriter) =
+                        if column_type == ColumnTypeCategory::Bytes {
+                            (ColumnType::Bytes, self.bytes_field_hash_map.read(addr))
+                        } else {
+                            (ColumnType::Str, self.str_field_hash_map.read(addr))
+                        };
                     let dictionary_builder =
                         &dictionaries[str_column_writer.dictionary_id as usize];
                     let cardinality = str_column_writer.column_writer.get_cardinality(num_docs);
                     let mut column_serializer =
-                        serializer.serialize_column(column_name, ColumnType::Bytes);
-                    serialize_bytes_column(
+                        serializer.serialize_column(column_name, column_type);
+                    serialize_bytes_or_str_column(
                         cardinality,
                         num_docs,
                         dictionary_builder,

@@ -187,8 +364,8 @@ impl ColumnarWriter {
                         self.numerical_field_hash_map.read(addr);
                     let (numerical_type, cardinality) =
                         numerical_column_writer.column_type_and_cardinality(num_docs);
-                    let mut column_serializer = serializer
-                        .serialize_column(column_name, ColumnType::Numerical(numerical_type));
+                    let mut column_serializer =
+                        serializer.serialize_column(column_name, ColumnType::from(numerical_type));
                     serialize_numerical_column(
                         cardinality,
                         num_docs,

@@ -198,6 +375,20 @@ impl ColumnarWriter {
                         &mut column_serializer,
                     )?;
                 }
+                ColumnTypeCategory::DateTime => {
+                    let column_writer: ColumnWriter = self.datetime_field_hash_map.read(addr);
+                    let cardinality = column_writer.get_cardinality(num_docs);
+                    let mut column_serializer =
+                        serializer.serialize_column(column_name, ColumnType::DateTime);
+                    serialize_numerical_column(
+                        cardinality,
+                        num_docs,
+                        NumericalType::I64,
+                        column_writer.operation_iterator(arena, &mut symbol_byte_buffer),
+                        buffers,
+                        &mut column_serializer,
+                    )?;
+                }
             };
         }
         serializer.finalize()?;

@@ -205,7 +396,7 @@ impl ColumnarWriter {
     }
 }
-fn serialize_bytes_column(
+fn serialize_bytes_or_str_column(
     cardinality: Cardinality,
     num_docs: RowId,
     dictionary_builder: &DictionaryBuilder,

@@ -232,7 +423,7 @@ fn serialize_bytes_column(
             ColumnOperation::NewDoc(doc) => ColumnOperation::NewDoc(doc),
         }
     });
-    serialize_column(
+    send_to_serialize_column_mappable_to_u64(
         operation_iterator,
         cardinality,
         num_docs,

@@ -261,7 +452,7 @@ fn serialize_numerical_column(
     } = buffers;
     match numerical_type {
         NumericalType::I64 => {
-            serialize_column(
+            send_to_serialize_column_mappable_to_u64(
                 coerce_numerical_symbol::<i64>(op_iterator),
                 cardinality,
                 num_docs,

@@ -271,7 +462,7 @@ fn serialize_numerical_column(
             )?;
         }
         NumericalType::U64 => {
-            serialize_column(
+            send_to_serialize_column_mappable_to_u64(
                 coerce_numerical_symbol::<u64>(op_iterator),
                 cardinality,
                 num_docs,

@@ -281,7 +472,7 @@ fn serialize_numerical_column(
             )?;
         }
         NumericalType::F64 => {
-            serialize_column(
+            send_to_serialize_column_mappable_to_u64(
                 coerce_numerical_symbol::<f64>(op_iterator),
                 cardinality,
                 num_docs,

@@ -306,7 +497,7 @@ fn serialize_bool_column(
         bool_values,
         ..
     } = buffers;
-    serialize_column(
+    send_to_serialize_column_mappable_to_u64(
         column_operations_it,
         cardinality,
         num_docs,

@@ -317,7 +508,76 @@ fn serialize_bool_column(
     Ok(())
 }
-fn serialize_column<
+fn serialize_ip_addr_column(
+    cardinality: Cardinality,
+    num_docs: RowId,
+    column_operations_it: impl Iterator<Item = ColumnOperation<Ipv6Addr>>,
+    buffers: &mut SpareBuffers,
+    wrt: &mut impl io::Write,
+) -> io::Result<()> {
+    let SpareBuffers {
+        value_index_builders,
+        ip_addr_values,
+        ..
+    } = buffers;
+    send_to_serialize_column_mappable_to_u128(
+        column_operations_it,
+        cardinality,
+        num_docs,
+        value_index_builders,
+        ip_addr_values,
+        wrt,
+    )?;
+    Ok(())
+}
+
+fn send_to_serialize_column_mappable_to_u128<
+    T: Copy + std::fmt::Debug + Send + Sync + MonotonicallyMappableToU128 + PartialOrd,
+>(
+    op_iterator: impl Iterator<Item = ColumnOperation<T>>,
+    cardinality: Cardinality,
+    num_docs: RowId,
+    value_index_builders: &mut PreallocatedIndexBuilders,
+    values: &mut Vec<T>,
+    mut wrt: impl io::Write,
+) -> io::Result<()>
+where
+    for<'a> VecColumn<'a, T>: ColumnValues<T>,
+{
+    values.clear();
+    // TODO: split index and values
+    let serializable_column_index = match cardinality {
+        Cardinality::Full => {
+            consume_operation_iterator(
+                op_iterator,
+                value_index_builders.borrow_required_index_builder(),
+                values,
+            );
+            SerializableColumnIndex::Full
+        }
+        Cardinality::Optional => {
+            let optional_index_builder = value_index_builders.borrow_optional_index_builder();
+            consume_operation_iterator(op_iterator, optional_index_builder, values);
+            let optional_index = optional_index_builder.finish(num_docs);
+            SerializableColumnIndex::Optional(Box::new(optional_index))
+        }
+        Cardinality::Multivalued => {
+            let multivalued_index_builder = value_index_builders.borrow_multivalued_index_builder();
+            consume_operation_iterator(op_iterator, multivalued_index_builder, values);
+            let multivalued_index = multivalued_index_builder.finish(num_docs);
+            SerializableColumnIndex::Multivalued(Box::new(multivalued_index))
+        }
+    };
+    crate::column::serialize_column_mappable_to_u128(
+        serializable_column_index,
+        || values.iter().cloned(),
+        values.len() as u32,
+        &mut wrt,
+    )?;
+    Ok(())
+}
+
+fn send_to_serialize_column_mappable_to_u64<
     T: Copy + Default + std::fmt::Debug + Send + Sync + MonotonicallyMappableToU64 + PartialOrd,
 >(
     op_iterator: impl Iterator<Item = ColumnOperation<T>>,
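The u128 path above mirrors the u64 path that follows: the `Cardinality` decides which index gets serialized alongside the values. A toy illustration of the three shapes over 3 rows (hypothetical data, not from the diff):

    // Full:        values = [a, b, c]                       one value per row
    // Optional:    non_null_rows = [0, 2], values = [a, c]  some rows have no value
    // Multivalued: start_offsets = [0, 2, 2, 3],
    //              values = [a, b, c]                       row 0 owns two values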
@@ -350,11 +610,10 @@ where
             let multivalued_index_builder = value_index_builders.borrow_multivalued_index_builder();
             consume_operation_iterator(op_iterator, multivalued_index_builder, values);
             let multivalued_index = multivalued_index_builder.finish(num_docs);
-            todo!();
-            // SerializableColumnIndex::Multivalued(Box::new(multivalued_index))
+            SerializableColumnIndex::Multivalued(Box::new(multivalued_index))
         }
     };
-    crate::column::serialize_column_u64(
+    crate::column::serialize_column_mappable_to_u64(
         serializable_column_index,
         &VecColumn::from(&values[..]),
         &mut wrt,

@@ -392,59 +651,12 @@ fn consume_operation_iterator<T: std::fmt::Debug, TIndexBuilder: IndexBuilder>(
     }
 }
-// /// Serializes the column with the codec with the best estimate on the data.
-// fn serialize_numerical<T: MonotonicallyMappableToU64>(
-//     value_index: ValueIndexInfo,
-//     typed_column: impl Column<T>,
-//     output: &mut impl io::Write,
-//     codecs: &[FastFieldCodecType],
-// ) -> io::Result<()> {
-//     let counting_writer = CountingWriter::wrap(output);
-//     serialize_value_index(value_index, output)?;
-//     let value_index_len = counting_writer.written_bytes();
-//     let output = counting_writer.finish();
-//     serialize_column(value_index, output)?;
-//     let column = monotonic_map_column(
-//         typed_column,
-//         crate::column::monotonic_mapping::StrictlyMonotonicMappingToInternal::<T>::new(),
-//     );
-//     let header = Header::compute_header(&column, codecs).ok_or_else(|| {
-//         io::Error::new(
-//             io::ErrorKind::InvalidInput,
-//             format!(
-//                 "Data cannot be serialized with this list of codec. {:?}",
-//                 codecs
-//             ),
-//         )
-//     })?;
-//     header.serialize(output)?;
-//     let normalized_column = header.normalize_column(column);
-//     assert_eq!(normalized_column.min_value(), 0u64);
-//     serialize_given_codec(normalized_column, header.codec_type, output)?;
-//     let column_header = ColumnFooter {
-//         value_index_len: todo!(),
-//         cardinality: todo!(),
-//     };
-//     let null_index_footer = NullIndexFooter {
-//         cardinality: value_index.get_cardinality(),
-//         null_index_codec: NullIndexCodec::Full,
-//         null_index_byte_range: 0..0,
-//     };
-//     append_null_index_footer(output, null_index_footer)?;
-//     Ok(())
-// }
-
 #[cfg(test)]
 mod tests {
-    use column_operation::ColumnOperation;
     use stacker::MemoryArena;

     use super::*;
-    use crate::value::NumericalValue;
+    use crate::columnar::writer::column_operation::ColumnOperation;
+    use crate::{Cardinality, NumericalValue};

     #[test]
     fn test_column_writer_required_simple() {


@@ -97,10 +97,10 @@ mod tests {
     #[test]
     fn test_prepare_key_bytes() {
         let mut buffer: Vec<u8> = b"somegarbage".to_vec();
-        prepare_key(b"root\0child", ColumnType::Bytes, &mut buffer);
+        prepare_key(b"root\0child", ColumnType::Str, &mut buffer);
         assert_eq!(buffer.len(), 12);
         assert_eq!(&buffer[..10], b"root\0child");
         assert_eq!(buffer[10], 0u8);
-        assert_eq!(buffer[11], ColumnType::Bytes.to_code());
+        assert_eq!(buffer[11], ColumnType::Str.to_code());
     }
 }
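For reference, the key layout this test pins down: a serialized column key is the column name, a `0u8` separator, then the `ColumnType` code, which is exactly why `ColumnarReader::list_columns` now strips the last two bytes. A hypothetical re-implementation of `prepare_key`, for illustration only:

    // Sketch: name + 0u8 separator + column-type code.
    fn prepare_key_sketch(column_name: &[u8], column_type: ColumnType, buffer: &mut Vec<u8>) {
        buffer.clear();
        buffer.extend_from_slice(column_name);
        buffer.push(0u8);
        buffer.push(column_type.to_code());
    }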


@@ -45,16 +45,6 @@ impl<'a> SerializableOptionalIndex<'a> for SingleValueArrayIndex<'a> {
     }
 }

-impl OptionalIndexBuilder {
-    fn num_non_nulls(&self) -> u32 {
-        self.docs.len() as u32
-    }
-
-    fn iter(&self) -> Box<dyn Iterator<Item = u32> + '_> {
-        Box::new(self.docs.iter().copied())
-    }
-}
-
 impl OptionalIndexBuilder {
     pub fn finish<'a>(&'a mut self, num_rows: RowId) -> impl SerializableOptionalIndex + 'a {
         debug_assert!(self

@@ -96,7 +86,7 @@ pub struct MultivaluedIndexBuilder {
 impl MultivaluedIndexBuilder {
     pub fn finish(&mut self, num_docs: RowId) -> impl ColumnValues<u32> + '_ {
         self.start_offsets
-            .resize(num_docs as usize, self.total_num_vals_seen);
+            .resize(num_docs as usize + 1, self.total_num_vals_seen);
         VecColumn {
             values: &&self.start_offsets[..],
             min_value: 0,

@@ -188,7 +178,7 @@ mod tests {
                 .finish(4u32)
                 .iter()
                 .collect::<Vec<u32>>(),
-            vec![0, 0, 2, 3]
+            vec![0, 0, 2, 3, 3]
         );
         multivalued_value_index_builder.reset();
         multivalued_value_index_builder.record_row(2u32);

@@ -199,7 +189,7 @@ mod tests {
                 .finish(4u32)
                 .iter()
                 .collect::<Vec<u32>>(),
-            vec![0, 0, 0, 2]
+            vec![0, 0, 0, 2, 2]
        );
    }
 }
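The `num_docs + 1` resize gives the offsets array its CSR shape: the values of doc `d` live in `values[start_offsets[d]..start_offsets[d + 1]]`, so a trailing entry is needed to delimit the last doc. Worked against the first test vector above:

    // start_offsets = [0, 0, 2, 3, 3] for 4 docs and 3 values in total.
    let start_offsets = [0u32, 0, 2, 3, 3];
    let doc = 1;
    let range = start_offsets[doc] as usize..start_offsets[doc + 1] as usize;
    assert_eq!(range, 0..2); // doc 1 owns the first two values; doc 3 owns none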


@@ -1,12 +1,14 @@
use std::io; use std::io;
use std::net::IpAddr; use std::net::Ipv6Addr;
use std::sync::Arc;
use common::file_slice::FileSlice; use common::file_slice::FileSlice;
use common::{HasLen, OwnedBytes}; use common::{HasLen, OwnedBytes};
use crate::column::{BytesColumn, Column}; use crate::column::{BytesColumn, Column, StrColumn};
use crate::column_values::{monotonic_map_column, StrictlyMonotonicFn};
use crate::columnar::ColumnType; use crate::columnar::ColumnType;
use crate::DateTime; use crate::{DateTime, NumericalType};
#[derive(Clone)] #[derive(Clone)]
pub enum DynamicColumn { pub enum DynamicColumn {
@@ -14,41 +16,163 @@ pub enum DynamicColumn {
I64(Column<i64>), I64(Column<i64>),
U64(Column<u64>), U64(Column<u64>),
F64(Column<f64>), F64(Column<f64>),
IpAddr(Column<IpAddr>), IpAddr(Column<Ipv6Addr>),
DateTime(Column<DateTime>), DateTime(Column<DateTime>),
Str(BytesColumn), Bytes(BytesColumn),
Str(StrColumn),
} }
-impl From<Column<i64>> for DynamicColumn {
-    fn from(column_i64: Column<i64>) -> Self {
-        DynamicColumn::I64(column_i64)
-    }
-}
-impl From<Column<u64>> for DynamicColumn {
-    fn from(column_u64: Column<u64>) -> Self {
-        DynamicColumn::U64(column_u64)
-    }
-}
-impl From<Column<f64>> for DynamicColumn {
-    fn from(column_f64: Column<f64>) -> Self {
-        DynamicColumn::F64(column_f64)
-    }
-}
-impl From<Column<bool>> for DynamicColumn {
-    fn from(bool_column: Column<bool>) -> Self {
-        DynamicColumn::Bool(bool_column)
-    }
-}
-impl From<BytesColumn> for DynamicColumn {
-    fn from(dictionary_encoded_col: BytesColumn) -> Self {
-        DynamicColumn::Str(dictionary_encoded_col)
-    }
-}
+impl DynamicColumn {
+    pub fn column_type(&self) -> ColumnType {
+        match self {
+            DynamicColumn::Bool(_) => ColumnType::Bool,
+            DynamicColumn::I64(_) => ColumnType::I64,
+            DynamicColumn::U64(_) => ColumnType::U64,
+            DynamicColumn::F64(_) => ColumnType::F64,
+            DynamicColumn::IpAddr(_) => ColumnType::IpAddr,
+            DynamicColumn::DateTime(_) => ColumnType::DateTime,
+            DynamicColumn::Bytes(_) => ColumnType::Bytes,
+            DynamicColumn::Str(_) => ColumnType::Str,
+        }
+    }
+    pub fn is_numerical(&self) -> bool {
+        self.column_type().numerical_type().is_some()
+    }
+    pub fn is_f64(&self) -> bool {
+        self.column_type().numerical_type() == Some(NumericalType::F64)
+    }
+    pub fn is_i64(&self) -> bool {
+        self.column_type().numerical_type() == Some(NumericalType::I64)
+    }
+    pub fn is_u64(&self) -> bool {
+        self.column_type().numerical_type() == Some(NumericalType::U64)
+    }
+    pub fn coerce_to_f64(self) -> Option<DynamicColumn> {
+        match self {
+            DynamicColumn::I64(column) => Some(DynamicColumn::F64(Column {
+                idx: column.idx,
+                values: Arc::new(monotonic_map_column(column.values, MapI64ToF64)),
+            })),
+            DynamicColumn::U64(column) => Some(DynamicColumn::F64(Column {
+                idx: column.idx,
+                values: Arc::new(monotonic_map_column(column.values, MapU64ToF64)),
+            })),
+            DynamicColumn::F64(_) => Some(self),
+            _ => None,
+        }
+    }
+    pub fn coerce_to_i64(self) -> Option<DynamicColumn> {
+        match self {
+            DynamicColumn::U64(column) => {
+                if column.max_value() > i64::MAX as u64 {
+                    return None;
+                }
+                Some(DynamicColumn::I64(Column {
+                    idx: column.idx,
+                    values: Arc::new(monotonic_map_column(column.values, MapU64ToI64)),
+                }))
+            }
+            DynamicColumn::I64(_) => Some(self),
+            _ => None,
+        }
+    }
+    pub fn coerce_to_u64(self) -> Option<DynamicColumn> {
+        match self {
+            DynamicColumn::I64(column) => {
+                if column.min_value() < 0 {
+                    return None;
+                }
+                Some(DynamicColumn::U64(Column {
+                    idx: column.idx,
+                    values: Arc::new(monotonic_map_column(column.values, MapI64ToU64)),
+                }))
+            }
+            DynamicColumn::U64(_) => Some(self),
+            _ => None,
+        }
+    }
+}
+struct MapI64ToF64;
+impl StrictlyMonotonicFn<i64, f64> for MapI64ToF64 {
+    #[inline(always)]
+    fn mapping(&self, inp: i64) -> f64 {
+        inp as f64
+    }
+    #[inline(always)]
+    fn inverse(&self, out: f64) -> i64 {
+        out as i64
+    }
+}
+struct MapU64ToF64;
+impl StrictlyMonotonicFn<u64, f64> for MapU64ToF64 {
+    #[inline(always)]
+    fn mapping(&self, inp: u64) -> f64 {
+        inp as f64
+    }
+    #[inline(always)]
+    fn inverse(&self, out: f64) -> u64 {
+        out as u64
+    }
+}
+struct MapU64ToI64;
+impl StrictlyMonotonicFn<u64, i64> for MapU64ToI64 {
+    #[inline(always)]
+    fn mapping(&self, inp: u64) -> i64 {
+        inp as i64
+    }
+    #[inline(always)]
+    fn inverse(&self, out: i64) -> u64 {
+        out as u64
+    }
+}
+struct MapI64ToU64;
+impl StrictlyMonotonicFn<i64, u64> for MapI64ToU64 {
+    #[inline(always)]
+    fn mapping(&self, inp: i64) -> u64 {
+        inp as u64
+    }
+    #[inline(always)]
+    fn inverse(&self, out: u64) -> i64 {
+        out as i64
+    }
+}
+macro_rules! static_dynamic_conversions {
+    ($typ:ty, $enum_name:ident) => {
+        impl Into<Option<$typ>> for DynamicColumn {
+            fn into(self) -> Option<$typ> {
+                if let DynamicColumn::$enum_name(col) = self {
+                    Some(col)
+                } else {
+                    None
+                }
+            }
+        }
+        impl From<$typ> for DynamicColumn {
+            fn from(typed_column: $typ) -> Self {
+                DynamicColumn::$enum_name(typed_column)
+            }
+        }
+    };
+}
+static_dynamic_conversions!(Column<bool>, Bool);
+static_dynamic_conversions!(Column<u64>, U64);
+static_dynamic_conversions!(Column<i64>, I64);
+static_dynamic_conversions!(Column<f64>, F64);
+static_dynamic_conversions!(Column<crate::DateTime>, DateTime);
+static_dynamic_conversions!(StrColumn, Str);
+static_dynamic_conversions!(BytesColumn, Bytes);
+static_dynamic_conversions!(Column<Ipv6Addr>, IpAddr);
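The coercions above never rewrite the stored data: they wrap the value column in a strictly monotonic map, so ordering and `min_value`/`max_value` stay meaningful after the type change, and `inverse` lets range bounds be translated back. A minimal standalone sketch of that contract using a local trait (the real `StrictlyMonotonicFn` lives in `column_values`); the hidden assumption, as with the crate's `MapI64ToF64`, is that the round trip is only exact while values fit the 53-bit mantissa of an f64:

/// Local stand-in for the `StrictlyMonotonicFn` contract used above.
trait Monotonic<In, Out> {
    fn mapping(&self, inp: In) -> Out;
    fn inverse(&self, out: Out) -> In;
}

struct I64ToF64;

impl Monotonic<i64, f64> for I64ToF64 {
    fn mapping(&self, inp: i64) -> f64 {
        inp as f64
    }
    fn inverse(&self, out: f64) -> i64 {
        out as i64
    }
}

fn main() {
    let map = I64ToF64;
    // Order is preserved, so column min/max stay meaningful after coercion.
    assert!(map.mapping(-3) < map.mapping(7));
    // inverse() undoes mapping() for values that f64 can represent exactly.
    for v in [-5i64, 0, 1, 1 << 40] {
        assert_eq!(map.inverse(map.mapping(v)), v);
    }
}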
 #[derive(Clone)]
 pub struct DynamicColumnHandle {
     pub(crate) file_slice: FileSlice,
@@ -56,31 +180,53 @@ pub struct DynamicColumnHandle {
 }
 impl DynamicColumnHandle {
+    // TODO rename load
     pub fn open(&self) -> io::Result<DynamicColumn> {
         let column_bytes: OwnedBytes = self.file_slice.read_bytes()?;
         self.open_internal(column_bytes)
     }
+    // TODO rename load_async
     pub async fn open_async(&self) -> io::Result<DynamicColumn> {
         let column_bytes: OwnedBytes = self.file_slice.read_bytes_async().await?;
         self.open_internal(column_bytes)
     }
+    /// Returns the `u64` fast field reader associated with fields of type
+    /// Str, u64, i64, f64, or DateTime.
+    ///
+    /// If the field is not of type u64, the reader returns the u64
+    /// representation associated with the original FastValue (e.g. the term
+    /// ordinal for Str columns).
+    pub fn open_u64_lenient(&self) -> io::Result<Option<Column<u64>>> {
+        let column_bytes = self.file_slice.read_bytes()?;
+        match self.column_type {
+            ColumnType::Str | ColumnType::Bytes => {
+                let column: BytesColumn = crate::column::open_column_bytes(column_bytes)?;
+                Ok(Some(column.term_ord_column))
+            }
+            ColumnType::Bool => Ok(None),
+            ColumnType::IpAddr => Ok(None),
+            ColumnType::I64 | ColumnType::U64 | ColumnType::F64 | ColumnType::DateTime => {
+                let column = crate::column::open_column_u64::<u64>(column_bytes)?;
+                Ok(Some(column))
+            }
+        }
+    }
     fn open_internal(&self, column_bytes: OwnedBytes) -> io::Result<DynamicColumn> {
         let dynamic_column: DynamicColumn = match self.column_type {
-            ColumnType::Bytes => crate::column::open_column_bytes(column_bytes)?.into(),
-            ColumnType::Numerical(numerical_type) => match numerical_type {
-                crate::NumericalType::I64 => {
-                    crate::column::open_column_u64::<i64>(column_bytes)?.into()
-                }
-                crate::NumericalType::U64 => {
-                    crate::column::open_column_u64::<u64>(column_bytes)?.into()
-                }
-                crate::NumericalType::F64 => {
-                    crate::column::open_column_u64::<f64>(column_bytes)?.into()
-                }
-            },
+            ColumnType::Bytes => {
+                crate::column::open_column_bytes::<BytesColumn>(column_bytes)?.into()
+            }
+            ColumnType::Str => crate::column::open_column_bytes::<StrColumn>(column_bytes)?.into(),
+            ColumnType::I64 => crate::column::open_column_u64::<i64>(column_bytes)?.into(),
+            ColumnType::U64 => crate::column::open_column_u64::<u64>(column_bytes)?.into(),
+            ColumnType::F64 => crate::column::open_column_u64::<f64>(column_bytes)?.into(),
             ColumnType::Bool => crate::column::open_column_u64::<bool>(column_bytes)?.into(),
+            ColumnType::IpAddr => crate::column::open_column_u128::<Ipv6Addr>(column_bytes)?.into(),
+            ColumnType::DateTime => {
+                crate::column::open_column_u64::<crate::DateTime>(column_bytes)?.into()
+            }
         };
         Ok(dynamic_column)
    }
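`open_u64_lenient` can hand every numeric column back as `Column<u64>` because fast values are persisted through an order-preserving map into `u64` (term ordinals already are `u64`, hence the `term_ord_column` branch). A standalone sketch of one standard encoding of this kind; treat the exact formulas as illustrative rather than a quote of the crate:

const HIGHEST_BIT: u64 = 1 << 63;

/// Order-preserving i64 -> u64: flipping the sign bit makes negative
/// values sort below positive ones in unsigned order.
fn i64_to_u64(val: i64) -> u64 {
    (val as u64) ^ HIGHEST_BIT
}

/// Order-preserving f64 -> u64 (total order over non-NaN values).
fn f64_to_u64(val: f64) -> u64 {
    let bits = val.to_bits();
    if val.is_sign_positive() {
        bits ^ HIGHEST_BIT
    } else {
        !bits
    }
}

fn main() {
    let ints = [-7i64, -1, 0, 1, 42];
    assert!(ints.windows(2).all(|w| i64_to_u64(w[0]) < i64_to_u64(w[1])));

    let floats = [-3.5f64, -0.0, 0.0, 1.25, 1e9];
    assert!(floats.windows(2).all(|w| f64_to_u64(w[0]) < f64_to_u64(w[1])));
}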
@@ -18,16 +18,21 @@ mod dynamic_column;
 pub(crate) mod utils;
 mod value;
-pub use columnar::{ColumnarReader, ColumnarWriter};
+pub use column::{BytesColumn, Column, StrColumn};
+pub use column_values::ColumnValues;
+pub use columnar::{
+    merge_columnar, ColumnType, ColumnarReader, ColumnarWriter, HasAssociatedColumnType,
+    MergeDocOrder,
+};
 pub use value::{NumericalType, NumericalValue};
-// pub use self::dynamic_column::DynamicColumnHandle;
+pub use self::dynamic_column::{DynamicColumn, DynamicColumnHandle};
 pub type RowId = u32;
-#[derive(Clone, Copy)]
+#[derive(Clone, Copy, PartialOrd, PartialEq, Default, Debug)]
 pub struct DateTime {
-    timestamp_micros: i64,
+    pub timestamp_micros: i64,
 }
 #[derive(Copy, Clone, Debug)]
@@ -1,10 +1,13 @@
+use std::net::Ipv6Addr;
+use crate::column_values::MonotonicallyMappableToU128;
 use crate::columnar::ColumnType;
 use crate::dynamic_column::{DynamicColumn, DynamicColumnHandle};
 use crate::value::NumericalValue;
 use crate::{Cardinality, ColumnarReader, ColumnarWriter};
 #[test]
-fn test_dataframe_writer_bytes() {
+fn test_dataframe_writer_str() {
     let mut dataframe_writer = ColumnarWriter::default();
     dataframe_writer.record_str(1u32, "my_string", "hello");
     dataframe_writer.record_str(3u32, "my_string", "helloeee");
@@ -14,7 +17,21 @@ fn test_dataframe_writer_bytes() {
     assert_eq!(columnar.num_columns(), 1);
     let cols: Vec<DynamicColumnHandle> = columnar.read_columns("my_string").unwrap();
     assert_eq!(cols.len(), 1);
-    assert_eq!(cols[0].num_bytes(), 165);
+    assert_eq!(cols[0].num_bytes(), 158);
+}
+#[test]
+fn test_dataframe_writer_bytes() {
+    let mut dataframe_writer = ColumnarWriter::default();
+    dataframe_writer.record_bytes(1u32, "my_string", b"hello");
+    dataframe_writer.record_bytes(3u32, "my_string", b"helloeee");
+    let mut buffer: Vec<u8> = Vec::new();
+    dataframe_writer.serialize(5, &mut buffer).unwrap();
+    let columnar = ColumnarReader::open(buffer).unwrap();
+    assert_eq!(columnar.num_columns(), 1);
+    let cols: Vec<DynamicColumnHandle> = columnar.read_columns("my_string").unwrap();
+    assert_eq!(cols.len(), 1);
+    assert_eq!(cols[0].num_bytes(), 158);
 }
 #[test]
@@ -28,7 +45,7 @@ fn test_dataframe_writer_bool() {
     assert_eq!(columnar.num_columns(), 1);
     let cols: Vec<DynamicColumnHandle> = columnar.read_columns("bool.value").unwrap();
     assert_eq!(cols.len(), 1);
-    assert_eq!(cols[0].num_bytes(), 29);
+    assert_eq!(cols[0].num_bytes(), 22);
     assert_eq!(cols[0].column_type(), ColumnType::Bool);
     let dyn_bool_col = cols[0].open().unwrap();
     let DynamicColumn::Bool(bool_col) = dyn_bool_col else { panic!(); };
@@ -36,6 +53,59 @@ fn test_dataframe_writer_bool() {
     assert_eq!(&vals, &[None, Some(false), None, Some(true), None,]);
 }
#[test]
fn test_dataframe_writer_u64_multivalued() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_numerical(2u32, "divisor", 2u64);
dataframe_writer.record_numerical(3u32, "divisor", 3u64);
dataframe_writer.record_numerical(4u32, "divisor", 2u64);
dataframe_writer.record_numerical(5u32, "divisor", 5u64);
dataframe_writer.record_numerical(6u32, "divisor", 2u64);
dataframe_writer.record_numerical(6u32, "divisor", 3u64);
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(7, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<DynamicColumnHandle> = columnar.read_columns("divisor").unwrap();
assert_eq!(cols.len(), 1);
assert_eq!(cols[0].num_bytes(), 29);
let dyn_i64_col = cols[0].open().unwrap();
let DynamicColumn::I64(divisor_col) = dyn_i64_col else { panic!(); };
assert_eq!(
divisor_col.get_cardinality(),
crate::Cardinality::Multivalued
);
assert_eq!(divisor_col.num_rows(), 7);
}
#[test]
fn test_dataframe_writer_ip_addr() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_ip_addr(1, "ip_addr", Ipv6Addr::from_u128(1001));
dataframe_writer.record_ip_addr(3, "ip_addr", Ipv6Addr::from_u128(1050));
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(5, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<DynamicColumnHandle> = columnar.read_columns("ip_addr").unwrap();
assert_eq!(cols.len(), 1);
assert_eq!(cols[0].num_bytes(), 42);
assert_eq!(cols[0].column_type(), ColumnType::IpAddr);
let dyn_bool_col = cols[0].open().unwrap();
let DynamicColumn::IpAddr(ip_col) = dyn_bool_col else { panic!(); };
let vals: Vec<Option<Ipv6Addr>> = (0..5).map(|row_id| ip_col.first(row_id)).collect();
assert_eq!(
&vals,
&[
None,
Some(Ipv6Addr::from_u128(1001)),
None,
Some(Ipv6Addr::from_u128(1050)),
None,
]
);
}
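IP columns are stored through the `u128` representation of an `Ipv6Addr`, which is what the `MonotonicallyMappableToU128` import and `from_u128` above provide (std's `From<u128>` is the equivalent used in this sketch). The round trip is lossless and order-preserving, which is exactly what range scans over an IP column need:

use std::net::Ipv6Addr;

fn main() {
    let low = Ipv6Addr::from(1001u128);
    let high = Ipv6Addr::from(1050u128);
    // The round trip through the backing u128 is lossless...
    assert_eq!(Ipv6Addr::from(u128::from(low)), low);
    // ...and u128 ordering matches address ordering, so a range scan over
    // the stored integers is a range scan over IP addresses.
    assert!(u128::from(low) < u128::from(high));
    assert!(low < high);
}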
 #[test]
 fn test_dataframe_writer_numerical() {
     let mut dataframe_writer = ColumnarWriter::default();
@@ -53,7 +123,7 @@ fn test_dataframe_writer_numerical() {
     // - header 14 bytes
     // - vals 8 //< due to padding? could have been 1 byte?
     // - null footer 6 bytes
-    assert_eq!(cols[0].num_bytes(), 40);
+    assert_eq!(cols[0].num_bytes(), 33);
     let column = cols[0].open().unwrap();
     let DynamicColumn::I64(column_i64) = column else { panic!(); };
     assert_eq!(column_i64.idx.get_cardinality(), Cardinality::Optional);
@@ -67,18 +137,76 @@ fn test_dataframe_writer_numerical() {
 }
 #[test]
-fn test_dictionary_encoded() {
+fn test_dictionary_encoded_str() {
     let mut buffer = Vec::new();
     let mut columnar_writer = ColumnarWriter::default();
-    columnar_writer.record_str(1, "my.column", "my.key");
-    columnar_writer.record_str(3, "my.column", "my.key2");
+    columnar_writer.record_str(1, "my.column", "a");
+    columnar_writer.record_str(3, "my.column", "c");
     columnar_writer.record_str(3, "my.column2", "different_column!");
+    columnar_writer.record_str(4, "my.column", "b");
     columnar_writer.serialize(5, &mut buffer).unwrap();
     let columnar_reader = ColumnarReader::open(buffer).unwrap();
     assert_eq!(columnar_reader.num_columns(), 2);
     let col_handles = columnar_reader.read_columns("my.column").unwrap();
     assert_eq!(col_handles.len(), 1);
     let DynamicColumn::Str(str_col) = col_handles[0].open().unwrap() else { panic!(); };
+    let index: Vec<Option<u64>> = (0..5).map(|row_id| str_col.ords().first(row_id)).collect();
+    assert_eq!(index, &[None, Some(0), None, Some(2), Some(1)]);
     assert_eq!(str_col.num_rows(), 5);
-    // let term_ords = (0..)
+    let mut term_buffer = String::new();
let term_ords = str_col.ords();
assert_eq!(term_ords.first(0), None);
assert_eq!(term_ords.first(1), Some(0));
str_col.ord_to_str(0u64, &mut term_buffer).unwrap();
assert_eq!(term_buffer, "a");
assert_eq!(term_ords.first(2), None);
assert_eq!(term_ords.first(3), Some(2));
str_col.ord_to_str(2u64, &mut term_buffer).unwrap();
assert_eq!(term_buffer, "c");
assert_eq!(term_ords.first(4), Some(1));
str_col.ord_to_str(1u64, &mut term_buffer).unwrap();
assert_eq!(term_buffer, "b");
}
#[test]
fn test_dictionary_encoded_bytes() {
let mut buffer = Vec::new();
let mut columnar_writer = ColumnarWriter::default();
columnar_writer.record_bytes(1, "my.column", b"a");
columnar_writer.record_bytes(3, "my.column", b"c");
columnar_writer.record_bytes(3, "my.column2", b"different_column!");
columnar_writer.record_bytes(4, "my.column", b"b");
columnar_writer.serialize(5, &mut buffer).unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_columns(), 2);
let col_handles = columnar_reader.read_columns("my.column").unwrap();
assert_eq!(col_handles.len(), 1);
let DynamicColumn::Bytes(bytes_col) = col_handles[0].open().unwrap() else { panic!(); };
let index: Vec<Option<u64>> = (0..5)
.map(|row_id| bytes_col.ords().first(row_id))
.collect();
assert_eq!(index, &[None, Some(0), None, Some(2), Some(1)]);
assert_eq!(bytes_col.num_rows(), 5);
let mut term_buffer = Vec::new();
let term_ords = bytes_col.ords();
assert_eq!(term_ords.first(0), None);
assert_eq!(term_ords.first(1), Some(0));
bytes_col
.dictionary
.ord_to_term(0u64, &mut term_buffer)
.unwrap();
assert_eq!(term_buffer, b"a");
assert_eq!(term_ords.first(2), None);
assert_eq!(term_ords.first(3), Some(2));
bytes_col
.dictionary
.ord_to_term(2u64, &mut term_buffer)
.unwrap();
assert_eq!(term_buffer, b"c");
assert_eq!(term_ords.first(4), Some(1));
bytes_col
.dictionary
.ord_to_term(1u64, &mut term_buffer)
.unwrap();
assert_eq!(term_buffer, b"b");
 }
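Both dictionary tests hinge on ordinals being assigned in sorted term order rather than insertion order: recording "a", "c", "b" still yields ords 0, 2, 1. A standalone sketch of the idea behind a dictionary-encoded column:

use std::collections::BTreeSet;

fn main() {
    // Terms arrive in document order: "a" (row 1), "c" (row 3), "b" (row 4).
    let terms = ["a", "c", "b"];

    // A sorted, deduplicated dictionary assigns dense ordinals in term order.
    let dictionary: Vec<&str> = terms.iter().copied().collect::<BTreeSet<_>>().into_iter().collect();
    let ord_of = |term: &str| dictionary.binary_search(&term).unwrap() as u64;

    assert_eq!(ord_of("a"), 0);
    assert_eq!(ord_of("b"), 1);
    assert_eq!(ord_of("c"), 2);
    // Matches the expectations above: rows [1, 3, 4] map to ords [0, 2, 1].
    assert_eq!([ord_of("a"), ord_of("c"), ord_of("b")], [0, 2, 1]);
}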
@@ -1,12 +1,22 @@
 use crate::InvalidData;
-#[derive(Copy, Clone, Debug, PartialEq)]
+#[derive(Copy, Clone, PartialEq, Debug)]
 pub enum NumericalValue {
     I64(i64),
     U64(u64),
     F64(f64),
 }
+impl NumericalValue {
+    pub fn numerical_type(&self) -> NumericalType {
+        match self {
+            NumericalValue::I64(_) => NumericalType::I64,
+            NumericalValue::U64(_) => NumericalType::U64,
+            NumericalValue::F64(_) => NumericalType::F64,
+        }
+    }
+}
 impl From<u64> for NumericalValue {
     fn from(val: u64) -> NumericalValue {
         NumericalValue::U64(val)
@@ -25,18 +35,6 @@ impl From<f64> for NumericalValue {
     }
 }
-impl NumericalValue {
-    pub fn numerical_type(&self) -> NumericalType {
-        match self {
-            NumericalValue::F64(_) => NumericalType::F64,
-            NumericalValue::I64(_) => NumericalType::I64,
-            NumericalValue::U64(_) => NumericalType::U64,
-        }
-    }
-}
+impl Eq for NumericalValue {}
 #[derive(Clone, Copy, Debug, Default, Hash, Eq, PartialEq)]
 #[repr(u8)]
 pub enum NumericalType {
@@ -106,6 +104,13 @@ impl Coerce for f64 {
     }
 }
+impl Coerce for crate::DateTime {
+    fn coerce(value: NumericalValue) -> Self {
+        let timestamp_micros = i64::coerce(value);
+        crate::DateTime { timestamp_micros }
+    }
+}
 #[cfg(test)]
 mod tests {
     use super::NumericalType;
@@ -27,7 +27,7 @@ fn main() -> tantivy::Result<()> {
     let score_fieldtype =
         crate::schema::NumericOptions::default().set_fast(Cardinality::SingleValue);
     let highscore_field = schema_builder.add_f64_field("highscore", score_fieldtype.clone());
-    let price_field = schema_builder.add_f64_field("price", score_fieldtype.clone());
+    let price_field = schema_builder.add_f64_field("price", score_fieldtype);
     let schema = schema_builder.build();
@@ -112,7 +112,7 @@ fn main() -> tantivy::Result<()> {
             ],
             ..Default::default()
         }),
-        sub_aggregation: sub_agg_req_1.clone(),
+        sub_aggregation: sub_agg_req_1,
     }),
 )]
 .into_iter()
@@ -123,7 +123,7 @@ fn main() -> tantivy::Result<()> {
     let searcher = reader.searcher();
     let agg_res: AggregationResults = searcher.search(&term_query, &collector).unwrap();
-    let res: Value = serde_json::to_value(&agg_res)?;
+    let res: Value = serde_json::to_value(agg_res)?;
     println!("{}", serde_json::to_string_pretty(&res)?);
     Ok(())
@@ -14,7 +14,7 @@ use fastfield_codecs::Column;
 // Importing tantivy...
 use tantivy::collector::{Collector, SegmentCollector};
 use tantivy::query::QueryParser;
-use tantivy::schema::{Field, Schema, FAST, INDEXED, TEXT};
+use tantivy::schema::{Schema, FAST, INDEXED, TEXT};
 use tantivy::{doc, Index, Score, SegmentReader};
 #[derive(Default)]
@@ -52,11 +52,11 @@ impl Stats {
 }
 struct StatsCollector {
-    field: Field,
+    field: String,
 }
 impl StatsCollector {
-    fn with_field(field: Field) -> StatsCollector {
+    fn with_field(field: String) -> StatsCollector {
         StatsCollector { field }
     }
 }
@@ -73,7 +73,7 @@ impl Collector for StatsCollector {
         _segment_local_id: u32,
         segment_reader: &SegmentReader,
     ) -> tantivy::Result<StatsSegmentCollector> {
-        let fast_field_reader = segment_reader.fast_fields().u64(self.field)?;
+        let fast_field_reader = segment_reader.fast_fields().u64(&self.field)?;
         Ok(StatsSegmentCollector {
             fast_field_reader,
             stats: Stats::default(),
@@ -171,7 +171,9 @@ fn main() -> tantivy::Result<()> {
     // here we want to get a hit on the 'ken' in Frankenstein
     let query = query_parser.parse_query("broom")?;
-    if let Some(stats) = searcher.search(&query, &StatsCollector::with_field(price))? {
+    if let Some(stats) =
+        searcher.search(&query, &StatsCollector::with_field("price".to_string()))?
+    {
         println!("count: {}", stats.count());
         println!("mean: {}", stats.mean());
         println!("standard deviation: {}", stats.standard_deviation());
@@ -27,7 +27,7 @@ fn main() -> Result<()> {
     reader.reload()?;
     let searcher = reader.searcher();
     // The end is excluded, i.e. here we are searching up to 1969.
-    let docs_in_the_sixties = RangeQuery::new_u64(year_field, 1960..1970);
+    let docs_in_the_sixties = RangeQuery::new_u64("year".to_string(), 1960..1970);
     // Uses a Count collector to sum the total number of docs in the range
     let num_60s_books = searcher.search(&docs_in_the_sixties, &Count)?;
     assert_eq!(num_60s_books, 10);
@@ -4,7 +4,7 @@ use std::sync::{Arc, RwLock, Weak};
 use tantivy::collector::TopDocs;
 use tantivy::query::QueryParser;
-use tantivy::schema::{Field, Schema, FAST, TEXT};
+use tantivy::schema::{Schema, FAST, TEXT};
 use tantivy::{
     doc, DocAddress, DocId, Index, IndexReader, Opstamp, Searcher, SearcherGeneration, SegmentId,
     SegmentReader, Warmer,
@@ -25,13 +25,13 @@ pub trait PriceFetcher: Send + Sync + 'static {
 }
 struct DynamicPriceColumn {
-    field: Field,
+    field: String,
     price_cache: RwLock<HashMap<(SegmentId, Option<Opstamp>), Arc<Vec<Price>>>>,
     price_fetcher: Box<dyn PriceFetcher>,
 }
 impl DynamicPriceColumn {
-    pub fn with_product_id_field<T: PriceFetcher>(field: Field, price_fetcher: T) -> Self {
+    pub fn with_product_id_field<T: PriceFetcher>(field: String, price_fetcher: T) -> Self {
         DynamicPriceColumn {
             field,
             price_cache: Default::default(),
@@ -48,7 +48,7 @@ impl Warmer for DynamicPriceColumn {
     fn warm(&self, searcher: &Searcher) -> tantivy::Result<()> {
         for segment in searcher.segment_readers() {
             let key = (segment.segment_id(), segment.delete_opstamp());
-            let product_id_reader = segment.fast_fields().u64(self.field)?;
+            let product_id_reader = segment.fast_fields().u64(&self.field)?;
             let product_ids: Vec<ProductId> = segment
                 .doc_ids_alive()
                 .map(|doc| product_id_reader.get_val(doc))
@@ -123,7 +123,7 @@ fn main() -> tantivy::Result<()> {
     let price_table = ExternalPriceTable::default();
     let price_dynamic_column = Arc::new(DynamicPriceColumn::with_product_id_field(
-        product_id,
+        "product_id".to_string(),
         price_table.clone(),
     ));
     price_table.update_price(OLIVE_OIL, 12);
@@ -402,8 +402,8 @@ mod tests {
         let mut buffer = Vec::new();
         let col = VecColumn::from(&[false, true][..]);
         serialize(col, &mut buffer, &ALL_CODEC_TYPES).unwrap();
-        // 5 bytes of header, 1 byte of value, 7 bytes of padding.
-        assert_eq!(buffer.len(), 3 + 5 + 8 + 4 + 2);
+        // 5 bytes of header, 1 byte of value
+        assert_eq!(buffer.len(), 3 + 5 + 1 + 4 + 2);
     }
     #[test]
@@ -411,8 +411,8 @@ mod tests {
         let mut buffer = Vec::new();
         let col = VecColumn::from(&[true][..]);
         serialize(col, &mut buffer, &ALL_CODEC_TYPES).unwrap();
-        // 5 bytes of header, 0 bytes of value, 7 bytes of padding.
-        assert_eq!(buffer.len(), 3 + 5 + 7 + 4 + 2);
+        // 5 bytes of header, 0 bytes of value
+        assert_eq!(buffer.len(), 3 + 5 + 4 + 2);
     }
     #[test]
@@ -422,6 +422,6 @@ mod tests {
         let col = VecColumn::from(&vals[..]);
         serialize(col, &mut buffer, &[FastFieldCodecType::Bitpacked]).unwrap();
         // Values are stored over 3 bits.
-        assert_eq!(buffer.len(), 3 + 7 + (3 * 80 / 8) + 7 + 4 + 2);
+        assert_eq!(buffer.len(), 3 + 7 + (3 * 80 / 8) + 4 + 2);
     }
 }
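With the trailing padding gone, a bitpacked body occupies exactly ceil(num_vals * bits_per_val / 8) bytes, which is where the `3 * 80 / 8` term above comes from. A standalone sketch of the size arithmetic:

/// Size in bytes of a bitpacked block, without trailing padding.
fn bitpacked_len(num_vals: u64, bits_per_val: u64) -> u64 {
    (num_vals * bits_per_val + 7) / 8
}

fn main() {
    // 80 values stored over 3 bits each: the `3 * 80 / 8` in the test above.
    assert_eq!(bitpacked_len(80, 3), 30);
    // Two bools over 1 bit each still need a full byte.
    assert_eq!(bitpacked_len(2, 1), 1);
}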
@@ -1,2 +0,0 @@
-#!/bin/bash
-cargo test
@@ -94,10 +94,7 @@ impl BucketAggregationWithAccessor {
             BucketAggregationType::Terms(TermsAggregation {
                 field: field_name, ..
             }) => {
-                let field = reader
-                    .schema()
-                    .get_field(field_name)
-                    .ok_or_else(|| TantivyError::FieldNotFound(field_name.to_string()))?;
+                let field = reader.schema().get_field(field_name)?;
                 inverted_index = Some(reader.inverted_index(field)?);
                 get_ff_reader_and_validate(reader, field_name, Cardinality::MultiValues)?
             }
@@ -195,10 +192,7 @@
     field_name: &str,
     cardinality: Cardinality,
 ) -> crate::Result<(FastFieldAccessor, Type)> {
-    let field = reader
-        .schema()
-        .get_field(field_name)
-        .ok_or_else(|| TantivyError::FieldNotFound(field_name.to_string()))?;
+    let field = reader.schema().get_field(field_name)?;
     let field_type = reader.schema().get_field_entry(field).field_type();
     if let Some((_ff_type, field_cardinality)) = type_and_cardinality(field_type) {
@@ -218,10 +212,10 @@
     let ff_fields = reader.fast_fields();
     match cardinality {
         Cardinality::SingleValue => ff_fields
-            .u64_lenient(field)
+            .u64_lenient(field_name)
            .map(|field| (FastFieldAccessor::Single(field), field_type.value_type())),
         Cardinality::MultiValues => ff_fields
-            .u64s_lenient(field)
+            .u64s_lenient(field_name)
             .map(|field| (FastFieldAccessor::Multi(field), field_type.value_type())),
     }
 }
@@ -548,9 +548,7 @@ pub(crate) fn intermediate_histogram_buckets_to_final_buckets(
     };
     // If we have a date type on the histogram buckets, we add the `key_as_string` field as rfc3339
-    let field = schema
-        .get_field(&histogram_req.field)
-        .ok_or_else(|| TantivyError::FieldNotFound(histogram_req.field.to_string()))?;
+    let field = schema.get_field(&histogram_req.field)?;
     if schema.get_field_entry(field).field_type().is_date() {
         for bucket in buckets.iter_mut() {
             if let crate::aggregation::Key::F64(val) = bucket.key {
@@ -26,7 +26,6 @@ use super::{format_date, Key, SerializedKey, VecWithNames};
 use crate::aggregation::agg_result::{AggregationResults, BucketEntries, BucketEntry};
 use crate::aggregation::bucket::TermsAggregationInternal;
 use crate::schema::Schema;
-use crate::TantivyError;
 /// Contains the intermediate aggregation result, which is optimized to be merged with other
 /// intermediate results.
@@ -658,9 +657,7 @@ impl IntermediateRangeBucketEntry {
         // If we have a date type on the histogram buckets, we add the `key_as_string` field as
         // rfc3339
-        let field = schema
-            .get_field(&range_req.field)
-            .ok_or_else(|| TantivyError::FieldNotFound(range_req.field.to_string()))?;
+        let field = schema.get_field(&range_req.field)?;
         if schema.get_field_entry(field).field_type().is_date() {
             if let Some(val) = range_bucket_entry.to {
                 let key_as_string = format_date(val as i64)?;
@@ -6,14 +6,13 @@ use super::{IntermediateStats, SegmentStatsCollector};
 /// A single-value metric aggregation that computes the average of numeric values that are
 /// extracted from the aggregated documents.
-/// Supported field types are u64, i64, and f64.
 /// See [super::SingleMetricResult] for return value.
 ///
 /// # JSON Format
 /// ```json
 /// {
 ///   "avg": {
-///     "field": "score",
+///     "field": "score"
 ///   }
 /// }
 /// ```
@@ -6,14 +6,13 @@ use super::{IntermediateStats, SegmentStatsCollector};
 /// A single-value metric aggregation that counts the number of values that are
 /// extracted from the aggregated documents.
-/// Supported field types are u64, i64, and f64.
 /// See [super::SingleMetricResult] for return value.
 ///
 /// # JSON Format
 /// ```json
 /// {
 ///   "value_count": {
-///     "field": "score",
+///     "field": "score"
 ///   }
 /// }
 /// ```
@@ -6,14 +6,13 @@ use super::{IntermediateStats, SegmentStatsCollector};
 /// A single-value metric aggregation that computes the maximum of numeric values that are
 /// extracted from the aggregated documents.
-/// Supported field types are u64, i64, and f64.
 /// See [super::SingleMetricResult] for return value.
 ///
 /// # JSON Format
 /// ```json
 /// {
 ///   "max": {
-///     "field": "score",
+///     "field": "score"
 ///   }
 /// }
 /// ```
@@ -6,14 +6,13 @@ use super::{IntermediateStats, SegmentStatsCollector};
 /// A single-value metric aggregation that computes the minimum of numeric values that are
 /// extracted from the aggregated documents.
-/// Supported field types are u64, i64, and f64.
 /// See [super::SingleMetricResult] for return value.
 ///
 /// # JSON Format
 /// ```json
 /// {
 ///   "min": {
-///     "field": "score",
+///     "field": "score"
 ///   }
 /// }
 /// ```
@@ -80,12 +80,12 @@ mod tests {
             "price_stats": { "stats": { "field": "price" } },
             "price_sum": { "sum": { "field": "price" } }
         }"#;
-        let aggregations: Aggregations = serde_json::from_str(&aggregations_json).unwrap();
+        let aggregations: Aggregations = serde_json::from_str(aggregations_json).unwrap();
         let collector = AggregationCollector::from_aggs(aggregations, None, index.schema());
         let reader = index.reader().unwrap();
         let searcher = reader.searcher();
         let aggregations_res: AggregationResults = searcher.search(&AllQuery, &collector).unwrap();
-        let aggregations_res_json = serde_json::to_value(&aggregations_res).unwrap();
+        let aggregations_res_json = serde_json::to_value(aggregations_res).unwrap();
         assert_eq!(aggregations_res_json["price_avg"]["value"], 2.5);
         assert_eq!(aggregations_res_json["price_count"]["value"], 6.0);
@@ -7,14 +7,13 @@ use crate::{DocId, TantivyError};
 /// A multi-value metric aggregation that computes a collection of statistics on numeric values that
 /// are extracted from the aggregated documents.
-/// Supported field types are `u64`, `i64`, and `f64`.
 /// See [`Stats`] for returned statistics.
 ///
 /// # JSON Format
 /// ```json
 /// {
 ///   "stats": {
-///     "field": "score",
+///     "field": "score"
 ///   }
 /// }
 /// ```
@@ -6,14 +6,13 @@ use super::{IntermediateStats, SegmentStatsCollector};
 /// A single-value metric aggregation that sums up numeric values that are
 /// extracted from the aggregated documents.
-/// Supported field types are u64, i64, and f64.
 /// See [super::SingleMetricResult] for return value.
 ///
 /// # JSON Format
 /// ```json
 /// {
 ///   "sum": {
-///     "field": "score",
+///     "field": "score"
 ///   }
 /// }
 /// ```
@@ -1,6 +1,5 @@
 //! # Aggregations
 //!
-//!
 //! An aggregation summarizes your data as statistics on buckets or metrics.
 //!
 //! Aggregations can provide answers to questions like:
@@ -41,6 +40,10 @@
 //! - [Metric](metric)
 //!     - [Average](metric::AverageAggregation)
 //!     - [Stats](metric::StatsAggregation)
+//!     - [Min](metric::MinAggregation)
+//!     - [Max](metric::MaxAggregation)
+//!     - [Sum](metric::SumAggregation)
+//!     - [Count](metric::CountAggregation)
 //!
 //! # Example
 //! Compute the average metric, by building [`agg_req::Aggregations`], which is built from an
@@ -75,7 +78,7 @@
 //! }
 //! ```
 //! # Example JSON
-//! Requests are compatible with the elasticsearch json request format.
+//! Requests are compatible with the elasticsearch JSON request format.
 //!
 //! ```
 //! use tantivy::aggregation::agg_req::Aggregations;
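A minimal sketch of the JSON route end to end, following the same `from_aggs` call shape used by the test further below; it assumes an already-built `Index` with a fast numeric field named "score" and `serde_json` as a dependency:

use tantivy::aggregation::agg_req::Aggregations;
use tantivy::aggregation::agg_result::AggregationResults;
use tantivy::aggregation::AggregationCollector;
use tantivy::query::AllQuery;
use tantivy::Index;

fn run_aggs(index: &Index) -> tantivy::Result<()> {
    // Elasticsearch-style aggregation request, deserialized with serde.
    let request = r#"{
        "score_avg": { "avg": { "field": "score" } },
        "score_stats": { "stats": { "field": "score" } }
    }"#;
    let aggregations: Aggregations = serde_json::from_str(request).unwrap();
    let collector = AggregationCollector::from_aggs(aggregations, None, index.schema());
    let searcher = index.reader()?.searcher();
    let res: AggregationResults = searcher.search(&AllQuery, &collector)?;
    println!("{}", serde_json::to_string_pretty(&res).unwrap());
    Ok(())
}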
@@ -130,7 +130,7 @@ where
         let fast_field_reader = segment_reader
             .fast_fields()
-            .typed_fast_field_reader(self.field)?;
+            .typed_fast_field_reader(schema.get_field_name(self.field))?;
         let segment_collector = self
             .collector
@@ -5,7 +5,7 @@ use fastfield_codecs::Column;
 use crate::collector::{Collector, SegmentCollector};
 use crate::fastfield::FastValue;
-use crate::schema::{Field, Type};
+use crate::schema::Type;
 use crate::{DocId, Score};
 /// Histogram builds a histogram of the values of a fastfield for the
@@ -28,7 +28,7 @@ pub struct HistogramCollector {
     min_value: u64,
     num_buckets: usize,
     divider: DividerU64,
-    field: Field,
+    field: String,
 }
 impl HistogramCollector {
@@ -46,7 +46,7 @@ impl HistogramCollector {
     /// # Disclaimer
     /// This function panics if the field given is of type f64.
     pub fn new<TFastValue: FastValue>(
-        field: Field,
+        field: String,
         min_value: TFastValue,
         bucket_width: u64,
         num_buckets: usize,
@@ -112,7 +112,7 @@ impl Collector for HistogramCollector {
         _segment_local_id: crate::SegmentOrdinal,
         segment: &crate::SegmentReader,
     ) -> crate::Result<Self::Child> {
-        let ff_reader = segment.fast_fields().u64_lenient(self.field)?;
+        let ff_reader = segment.fast_fields().u64_lenient(&self.field)?;
         Ok(SegmentHistogramCollector {
             histogram_computer: HistogramComputer {
                 counts: vec![0; self.num_buckets],
@@ -211,13 +211,13 @@ mod tests {
     #[test]
     fn test_no_segments() -> crate::Result<()> {
         let mut schema_builder = Schema::builder();
-        let val_field = schema_builder.add_u64_field("val_field", FAST);
+        schema_builder.add_u64_field("val_field", FAST);
         let schema = schema_builder.build();
         let index = Index::create_in_ram(schema);
         let reader = index.reader()?;
         let searcher = reader.searcher();
         let all_query = AllQuery;
-        let histogram_collector = HistogramCollector::new(val_field, 0u64, 2, 5);
+        let histogram_collector = HistogramCollector::new("val_field".to_string(), 0u64, 2, 5);
         let histogram = searcher.search(&all_query, &histogram_collector)?;
         assert_eq!(histogram, vec![0; 5]);
         Ok(())
@@ -238,7 +238,8 @@ mod tests {
         let reader = index.reader()?;
         let searcher = reader.searcher();
         let all_query = AllQuery;
-        let histogram_collector = HistogramCollector::new(val_field, -20i64, 10u64, 4);
+        let histogram_collector =
+            HistogramCollector::new("val_field".to_string(), -20i64, 10u64, 4);
         let histogram = searcher.search(&all_query, &histogram_collector)?;
         assert_eq!(histogram, vec![1, 1, 0, 1]);
         Ok(())
@@ -262,7 +263,8 @@ mod tests {
         let reader = index.reader()?;
         let searcher = reader.searcher();
         let all_query = AllQuery;
-        let histogram_collector = HistogramCollector::new(val_field, -20i64, 10u64, 4);
+        let histogram_collector =
+            HistogramCollector::new("val_field".to_string(), -20i64, 10u64, 4);
         let histogram = searcher.search(&all_query, &histogram_collector)?;
         assert_eq!(histogram, vec![1, 1, 0, 1]);
         Ok(())
@@ -285,7 +287,7 @@ mod tests {
         let searcher = reader.searcher();
         let all_query = AllQuery;
         let week_histogram_collector = HistogramCollector::new(
-            date_field,
+            "date_field".to_string(),
             DateTime::from_primitive(
                 Date::from_calendar_date(1980, Month::January, 1)?.with_hms(0, 0, 0)?,
             ),
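Each collected value lands in bucket (value - min) / bucket_width, with out-of-range values dropped; the `DividerU64` field above is a fast-division helper for that step. A standalone sketch that reproduces the `vec![1, 1, 0, 1]` expectation from these tests (the input values here are chosen for illustration, since the tests' documents are not shown in this hunk):

/// Bucket values the way the histogram collector does:
/// (value - min) / bucket_width, ignoring out-of-range values.
fn histogram(vals: &[i64], min: i64, width: u64, num_buckets: usize) -> Vec<u64> {
    let mut counts = vec![0u64; num_buckets];
    for &val in vals {
        if val < min {
            continue;
        }
        let bucket = ((val - min) as u64 / width) as usize;
        if bucket < num_buckets {
            counts[bucket] += 1;
        }
    }
    counts
}

fn main() {
    // min = -20, bucket width = 10, 4 buckets.
    assert_eq!(histogram(&[-20, -10, 15], -20, 10, 4), vec![1, 1, 0, 1]);
}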
@@ -155,7 +155,7 @@ impl SegmentCollector for TestSegmentCollector {
 ///
 /// This collector is mainly useful for tests.
 pub struct FastFieldTestCollector {
-    field: Field,
+    field: String,
 }
 pub struct FastFieldSegmentCollector {
@@ -164,7 +164,7 @@ pub struct FastFieldSegmentCollector {
 }
 impl FastFieldTestCollector {
-    pub fn for_field(field: Field) -> FastFieldTestCollector {
+    pub fn for_field(field: String) -> FastFieldTestCollector {
         FastFieldTestCollector { field }
     }
 }
@@ -180,7 +180,7 @@ impl Collector for FastFieldTestCollector {
     ) -> crate::Result<FastFieldSegmentCollector> {
         let reader = segment_reader
             .fast_fields()
-            .u64(self.field)
+            .u64(&self.field)
             .expect("Requested field is not a fast field.");
         Ok(FastFieldSegmentCollector {
             vals: Vec::new(),
@@ -238,7 +238,9 @@ impl Collector for BytesFastFieldTestCollector {
         _segment_local_id: u32,
         segment_reader: &SegmentReader,
     ) -> crate::Result<BytesFastFieldSegmentCollector> {
-        let reader = segment_reader.fast_fields().bytes(self.field)?;
+        let reader = segment_reader
+            .fast_fields()
+            .bytes(segment_reader.schema().get_field_name(self.field))?;
         Ok(BytesFastFieldSegmentCollector {
             vals: Vec::new(),
             reader,
@@ -156,7 +156,7 @@ impl CustomScorer<u64> for ScorerByField {
         // The conversion will then happen only on the top-K docs.
         let ff_reader = segment_reader
             .fast_fields()
-            .typed_fast_field_reader(self.field)?;
+            .typed_fast_field_reader(segment_reader.schema().get_field_name(self.field))?;
         Ok(ScorerByFastFieldReader { ff_reader })
     }
 }
@@ -454,7 +454,7 @@ impl TopDocs {
     ///         // In our case, we will get a reader for the popularity
     ///         // fast field.
     ///         let popularity_reader =
-    ///             segment_reader.fast_fields().u64(popularity).unwrap();
+    ///             segment_reader.fast_fields().u64("popularity").unwrap();
     ///
     ///         // We can now define our actual scoring function
     ///         move |doc: DocId, original_score: Score| {
@@ -561,9 +561,9 @@ impl TopDocs {
     ///         // Note that this is implemented by using a `(u64, u64)`
     ///         // as a score.
     ///         let popularity_reader =
-    ///             segment_reader.fast_fields().u64(popularity).unwrap();
+    ///             segment_reader.fast_fields().u64("popularity").unwrap();
     ///         let boosted_reader =
-    ///             segment_reader.fast_fields().u64(boosted).unwrap();
+    ///             segment_reader.fast_fields().u64("boosted").unwrap();
     ///
     ///         // We can now define our actual scoring function
     ///         move |doc: DocId| {
@@ -231,7 +231,7 @@ impl IndexBuilder {
     fn validate(&self) -> crate::Result<()> {
         if let Some(schema) = self.schema.as_ref() {
             if let Some(sort_by_field) = self.index_settings.sort_by_field.as_ref() {
-                let schema_field = schema.get_field(&sort_by_field.field).ok_or_else(|| {
+                let schema_field = schema.get_field(&sort_by_field.field).map_err(|_| {
                     TantivyError::InvalidArgument(format!(
                         "Field to sort index {} not found in schema",
                         sort_by_field.field
@@ -95,7 +95,8 @@ impl SegmentReader {
         match field_entry.field_type() {
             FieldType::Facet(_) => {
-                let term_ords_reader = self.fast_fields().u64s(field)?;
+                let term_ords_reader =
+                    self.fast_fields().u64s(self.schema.get_field_name(field))?;
                 let termdict = self
                     .termdict_composite
                     .open_read(field)
@@ -25,7 +25,7 @@ mod tests {
         index_writer.commit()?;
         let searcher = index.reader()?.searcher();
         let segment_reader = searcher.segment_reader(0);
-        let bytes_reader = segment_reader.fast_fields().bytes(bytes_field).unwrap();
+        let bytes_reader = segment_reader.fast_fields().bytes("bytesfield").unwrap();
         assert_eq!(bytes_reader.get_bytes(0), &[0u8, 1, 2, 3]);
         assert!(bytes_reader.get_bytes(1).is_empty());
         assert_eq!(bytes_reader.get_bytes(2), &[255u8]);
@@ -109,8 +109,7 @@ mod tests {
         let searcher = create_index_for_test(FAST)?;
         assert_eq!(searcher.num_docs(), 1);
         let fast_fields = searcher.segment_reader(0u32).fast_fields();
-        let field = searcher.schema().get_field("string_bytes").unwrap();
-        let fast_field_reader = fast_fields.bytes(field).unwrap();
+        let fast_field_reader = fast_fields.bytes("string_bytes").unwrap();
         assert_eq!(fast_field_reader.get_bytes(0u32), b"tantivy");
         Ok(())
     }
@@ -226,7 +226,7 @@ mod tests {
             serializer.close().unwrap();
         }
         let file = directory.open_read(path).unwrap();
-        assert_eq!(file.len(), 34);
+        assert_eq!(file.len(), 27);
         let composite_file = CompositeFile::open(&file)?;
         let fast_field_bytes = composite_file.open_read(*FIELD).unwrap().read_bytes()?;
         let fast_field_reader = open::<u64>(fast_field_bytes)?;
@@ -275,7 +275,7 @@ mod tests {
             serializer.close()?;
         }
         let file = directory.open_read(path)?;
-        assert_eq!(file.len(), 62);
+        assert_eq!(file.len(), 55);
         {
             let fast_fields_composite = CompositeFile::open(&file)?;
             let data = fast_fields_composite
@@ -316,7 +316,7 @@ mod tests {
             serializer.close().unwrap();
         }
         let file = directory.open_read(path).unwrap();
-        assert_eq!(file.len(), 35);
+        assert_eq!(file.len(), 28);
         {
             let fast_fields_composite = CompositeFile::open(&file).unwrap();
             let data = fast_fields_composite
@@ -355,7 +355,7 @@ mod tests {
             serializer.close().unwrap();
         }
         let file = directory.open_read(path).unwrap();
-        assert_eq!(file.len(), 80049);
+        assert_eq!(file.len(), 80042);
         {
             let fast_fields_composite = CompositeFile::open(&file)?;
             let data = fast_fields_composite
@@ -397,7 +397,7 @@ mod tests {
             serializer.close().unwrap();
         }
         let file = directory.open_read(path).unwrap();
-        assert_eq!(file.len(), 49_usize);
+        assert_eq!(file.len(), 42_usize);
         {
             let fast_fields_composite = CompositeFile::open(&file)?;
@@ -583,7 +583,7 @@ mod tests {
         assert_eq!(searcher.segment_readers().len(), 1);
         let segment_reader = searcher.segment_reader(0);
         let fast_fields = segment_reader.fast_fields();
-        let text_fast_field = fast_fields.u64s(text_field).unwrap();
+        let text_fast_field = fast_fields.u64s("text").unwrap();
         assert_eq!(
             get_vals_for_docs(&text_fast_field, 0..5),
@@ -622,7 +622,7 @@ mod tests {
         assert_eq!(searcher.segment_readers().len(), 2);
         let segment_reader = searcher.segment_reader(1);
         let fast_fields = segment_reader.fast_fields();
-        let text_fast_field = fast_fields.u64s(text_field).unwrap();
+        let text_fast_field = fast_fields.u64s("text").unwrap();
         assert_eq!(get_vals_for_docs(&text_fast_field, 0..3), vec![0, 1, 0]);
     }
@@ -638,7 +638,7 @@ mod tests {
         let searcher = reader.searcher();
         let segment_reader = searcher.segment_reader(0);
         let fast_fields = segment_reader.fast_fields();
-        let text_fast_field = fast_fields.u64s(text_field).unwrap();
+        let text_fast_field = fast_fields.u64s("text").unwrap();
         assert_eq!(
             get_vals_for_docs(&text_fast_field, 0..8),
@@ -681,7 +681,7 @@ mod tests {
         assert_eq!(searcher.segment_readers().len(), 1);
         let segment_reader = searcher.segment_reader(0);
         let fast_fields = segment_reader.fast_fields();
-        let text_fast_field = fast_fields.u64s(text_field).unwrap();
+        let text_fast_field = fast_fields.u64s("text").unwrap();
         assert_eq!(get_vals_for_docs(&text_fast_field, 0..6), vec![1, 0, 0, 2]);
@@ -712,7 +712,7 @@ mod tests {
         assert_eq!(searcher.segment_readers().len(), 2);
         let segment_reader = searcher.segment_reader(1);
         let fast_fields = segment_reader.fast_fields();
-        let text_fast_field = fast_fields.u64s(text_field).unwrap();
+        let text_fast_field = fast_fields.u64s("text").unwrap();
         assert_eq!(get_vals_for_docs(&text_fast_field, 0..2), vec![0, 1]);
     }
@@ -728,7 +728,7 @@ mod tests {
         let searcher = reader.searcher();
         let segment_reader = searcher.segment_reader(0);
         let fast_fields = segment_reader.fast_fields();
-        let text_fast_field = fast_fields.u64s(text_field).unwrap();
+        let text_fast_field = fast_fields.u64s("text").unwrap();
         assert_eq!(
             get_vals_for_docs(&text_fast_field, 0..9),
@@ -773,8 +773,8 @@ mod tests {
         assert_eq!(searcher.segment_readers().len(), 1);
         let segment_reader = searcher.segment_reader(0);
         let fast_fields = segment_reader.fast_fields();
-        let date_fast_field = fast_fields.date(date_field).unwrap();
-        let dates_fast_field = fast_fields.dates(multi_date_field).unwrap();
+        let date_fast_field = fast_fields.date("date").unwrap();
+        let dates_fast_field = fast_fields.dates("multi_date").unwrap();
         let mut dates = vec![];
         {
             assert_eq!(date_fast_field.get_val(0).into_timestamp_micros(), 1i64);
@@ -836,7 +836,7 @@ mod tests {
             serializer.close().unwrap();
         }
         let file = directory.open_read(path).unwrap();
-        assert_eq!(file.len(), 33);
+        assert_eq!(file.len(), 26);
         let composite_file = CompositeFile::open(&file)?;
         let data = composite_file.open_read(field).unwrap().read_bytes()?;
         let fast_field_reader = open::<bool>(data)?;
@@ -874,7 +874,7 @@ mod tests {
             serializer.close().unwrap();
         }
         let file = directory.open_read(path).unwrap();
-        assert_eq!(file.len(), 45);
+        assert_eq!(file.len(), 38);
         let composite_file = CompositeFile::open(&file)?;
         let data = composite_file.open_read(field).unwrap().read_bytes()?;
         let fast_field_reader = open::<bool>(data)?;
@@ -906,7 +906,7 @@ mod tests {
         }
         let file = directory.open_read(path).unwrap();
         let composite_file = CompositeFile::open(&file)?;
-        assert_eq!(file.len(), 32);
+        assert_eq!(file.len(), 25);
         let data = composite_file.open_read(field).unwrap().read_bytes()?;
         let fast_field_reader = open::<bool>(data)?;
         assert_eq!(fast_field_reader.get_val(0), false);
@@ -940,10 +940,10 @@ mod tests {
     pub fn test_gcd_date() -> crate::Result<()> {
         let size_prec_sec =
             test_gcd_date_with_codec(FastFieldCodecType::Bitpacked, DatePrecision::Seconds)?;
-        assert_eq!(size_prec_sec, 5 + 4 + 28 + (1_000 * 13) / 8); // 13 bits per val = ceil(log2(number of seconds in 2 hours))
+        assert_eq!(size_prec_sec, 5 + 4 + 21 + (1_000 * 13) / 8); // 13 bits per val = ceil(log2(number of seconds in 2 hours))
         let size_prec_micro =
             test_gcd_date_with_codec(FastFieldCodecType::Bitpacked, DatePrecision::Microseconds)?;
-        assert_eq!(size_prec_micro, 5 + 4 + 26 + (1_000 * 33) / 8); // 33 bits per val = ceil(log2(number of microseconds in 2 hours))
+        assert_eq!(size_prec_micro, 5 + 4 + 19 + (1_000 * 33) / 8); // 33 bits per val = ceil(log2(number of microseconds in 2 hours))
         Ok(())
     }
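The 13 and 33 in these expectations are plain ceil(log2(range)) arithmetic: after GCD normalization there are 7200 distinct seconds in two hours, and about 7.2e9 microseconds. A standalone sketch:

/// Minimum number of bits needed to bitpack values in 0..=max_val.
fn num_bits(max_val: u64) -> u32 {
    64 - max_val.leading_zeros()
}

fn main() {
    // Two hours of seconds: 7200 distinct values fit in 13 bits.
    assert_eq!(num_bits(7_200), 13);
    // Two hours of microseconds need 33 bits.
    assert_eq!(num_bits(7_200_000_000), 33);
}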
@@ -1014,7 +1014,7 @@ mod tests {
        let reader = index.reader().unwrap();
        let searcher = reader.searcher();
        let segment = &searcher.segment_readers()[0];
-       let field = segment.fast_fields().u64(num_field).unwrap();
+       let field = segment.fast_fields().u64("url_norm_hash").unwrap();
        let numbers = vec![100, 200, 300];
        let test_range = |range: RangeInclusive<u64>| {
@@ -1063,7 +1063,7 @@ mod tests {
        let reader = index.reader().unwrap();
        let searcher = reader.searcher();
        let segment = &searcher.segment_readers()[0];
-       let field = segment.fast_fields().u64(num_field).unwrap();
+       let field = segment.fast_fields().u64("url_norm_hash").unwrap();
        let numbers = vec![1000, 1001, 1003];
        let test_range = |range: RangeInclusive<u64>| {
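Taken together, these test updates show the new lookup convention: fast field readers are fetched by field name instead of by `Field` handle. A minimal end-to-end sketch of the post-patch call pattern (field name and values are illustrative):

    use tantivy::schema::{Schema, FAST};
    use tantivy::{doc, Index};

    fn main() -> tantivy::Result<()> {
        let mut schema_builder = Schema::builder();
        let num_field = schema_builder.add_u64_field("url_norm_hash", FAST);
        let index = Index::create_in_ram(schema_builder.build());
        let mut writer = index.writer(50_000_000)?;
        writer.add_document(doc!(num_field => 100u64))?;
        writer.commit()?;
        let searcher = index.reader()?.searcher();
        // Post-patch: the reader is requested by name; the schema lookup happens internally.
        let column = searcher.segment_reader(0).fast_fields().u64("url_norm_hash")?;
        assert_eq!(column.get_val(0), 100u64);
        Ok(())
    }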

View File

@@ -52,7 +52,7 @@ mod tests {
        let searcher = index.reader()?.searcher();
        let segment_reader = searcher.segment_reader(0);
        let mut vals = Vec::new();
-       let multi_value_reader = segment_reader.fast_fields().u64s(field)?;
+       let multi_value_reader = segment_reader.fast_fields().u64s("multifield")?;
        {
            multi_value_reader.get_vals(2, &mut vals);
            assert_eq!(&vals, &[4u64]);
@@ -229,7 +229,7 @@ mod tests {
        let searcher = index.reader()?.searcher();
        let segment_reader = searcher.segment_reader(0);
        let mut vals = Vec::new();
-       let multi_value_reader = segment_reader.fast_fields().i64s(field).unwrap();
+       let multi_value_reader = segment_reader.fast_fields().i64s("multifield").unwrap();
        multi_value_reader.get_vals(2, &mut vals);
        assert_eq!(&vals, &[-4i64]);
        multi_value_reader.get_vals(0, &mut vals);
@@ -261,7 +261,7 @@ mod tests {
        let searcher = index.reader()?.searcher();
        let segment_reader = searcher.segment_reader(0);
        let mut vals = Vec::new();
-       let multi_value_reader = segment_reader.fast_fields().bools(bool_field).unwrap();
+       let multi_value_reader = segment_reader.fast_fields().bools("multifield").unwrap();
        multi_value_reader.get_vals(2, &mut vals);
        assert_eq!(&vals, &[false]);
        multi_value_reader.get_vals(0, &mut vals);
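The multi-valued accessors follow the same name-based convention. A hedged sketch of reading one back (`multifield` mirrors the tests above; each test uses its own schema):

    use tantivy::SegmentReader;

    // Sketch: collect the values of a multi-valued u64 fast field for one doc,
    // assuming the post-patch name-based u64s() accessor shown above.
    fn multi_vals(segment_reader: &SegmentReader, doc: u32) -> tantivy::Result<Vec<u64>> {
        let reader = segment_reader.fast_fields().u64s("multifield")?;
        let mut vals = Vec::new();
        reader.get_vals(doc, &mut vals);
        Ok(vals)
    }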

View File

@@ -159,7 +159,7 @@ mod tests {
        let searcher = reader.searcher();
        let reader = searcher.segment_reader(0);
-       let date_ff_reader = reader.fast_fields().dates(date_field).unwrap();
+       let date_ff_reader = reader.fast_fields().dates("multi_date_field").unwrap();
        let mut docids = vec![];
        date_ff_reader.get_docids_for_value_range(
            DateTime::from_utc(first_time_stamp)..=DateTime::from_utc(two_secs_ahead),
@@ -173,7 +173,7 @@ mod tests {
        assert_eq!(
            count_multiples(RangeQuery::new_date(
-               date_field,
+               "multi_date_field".to_string(),
                DateTime::from_utc(first_time_stamp)..DateTime::from_utc(two_secs_ahead)
            )),
            1
@@ -226,7 +226,7 @@ mod tests {
        let reader = searcher.segment_reader(0);
        assert_eq!(reader.num_docs(), 5);
-       let date_ff_reader = reader.fast_fields().dates(date_field).unwrap();
+       let date_ff_reader = reader.fast_fields().dates("multi_date_field").unwrap();
        let mut docids = vec![];
        date_ff_reader.get_docids_for_value_range(
            DateTime::from_utc(first_time_stamp)..=DateTime::from_utc(two_secs_ahead),
@@ -240,7 +240,7 @@ mod tests {
        assert_eq!(
            count_multiples(RangeQuery::new_date(
-               date_field,
+               "multi_date_field".to_string(),
                DateTime::from_utc(first_time_stamp)..DateTime::from_utc(two_secs_ahead)
            )),
            2
@@ -324,7 +324,7 @@ mod tests {
        index_writer.commit()?;
        let searcher = index.reader()?.searcher();
        let segment_reader = searcher.segment_reader(0);
-       let field_reader = segment_reader.fast_fields().i64s(item_field)?;
+       let field_reader = segment_reader.fast_fields().i64s("items")?;
        assert_eq!(field_reader.min_value(), -2);
        assert_eq!(field_reader.max_value(), 6);
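RangeQuery constructors now take the field name as a `String` rather than a `Field`. A sketch of the new call, reusing the names from the test above:

    use tantivy::collector::Count;
    use tantivy::query::RangeQuery;
    use tantivy::{DateTime, Searcher};

    // Sketch: count docs whose "multi_date_field" value falls in [start, end).
    fn count_in_range(searcher: &Searcher, start: DateTime, end: DateTime) -> tantivy::Result<usize> {
        let query = RangeQuery::new_date("multi_date_field".to_string(), start..end);
        searcher.search(&query, &Count)
    }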

View File

@@ -114,9 +114,11 @@ impl FastFieldReaders {
    pub(crate) fn typed_fast_field_reader_with_idx<TFastValue: FastValue>(
        &self,
-       field: Field,
+       field_name: &str,
        index: usize,
    ) -> crate::Result<Arc<dyn Column<TFastValue>>> {
+       let field = self.schema.get_field(field_name)?;
        let fast_field_slice = self.fast_field_data(field, index)?;
        let bytes = fast_field_slice.read_bytes()?;
        let column = fastfield_codecs::open(bytes)?;
@@ -125,32 +127,37 @@ impl FastFieldReaders {
    pub(crate) fn typed_fast_field_reader<TFastValue: FastValue>(
        &self,
-       field: Field,
+       field_name: &str,
    ) -> crate::Result<Arc<dyn Column<TFastValue>>> {
-       self.typed_fast_field_reader_with_idx(field, 0)
+       self.typed_fast_field_reader_with_idx(field_name, 0)
    }
    pub(crate) fn typed_fast_field_multi_reader<TFastValue: FastValue>(
        &self,
-       field: Field,
+       field_name: &str,
    ) -> crate::Result<MultiValuedFastFieldReader<TFastValue>> {
-       let idx_reader = self.typed_fast_field_reader(field)?;
-       let vals_reader = self.typed_fast_field_reader_with_idx(field, 1)?;
+       let idx_reader = self.typed_fast_field_reader(field_name)?;
+       let vals_reader = self.typed_fast_field_reader_with_idx(field_name, 1)?;
        Ok(MultiValuedFastFieldReader::open(idx_reader, vals_reader))
    }
    /// Returns the `u64` fast field reader associated with `field`.
    ///
    /// If `field` is not a u64 fast field, this method returns an Error.
-   pub fn u64(&self, field: Field) -> crate::Result<Arc<dyn Column<u64>>> {
-       self.check_type(field, FastType::U64, Cardinality::SingleValue)?;
-       self.typed_fast_field_reader(field)
+   pub fn u64(&self, field_name: &str) -> crate::Result<Arc<dyn Column<u64>>> {
+       self.check_type(
+           self.schema.get_field(field_name)?,
+           FastType::U64,
+           Cardinality::SingleValue,
+       )?;
+       self.typed_fast_field_reader(field_name)
    }
    /// Returns the `ip` fast field reader associated to `field`.
    ///
    /// If `field` is not a u128 fast field, this method returns an Error.
-   pub fn ip_addr(&self, field: Field) -> crate::Result<Arc<dyn Column<Ipv6Addr>>> {
+   pub fn ip_addr(&self, field_name: &str) -> crate::Result<Arc<dyn Column<Ipv6Addr>>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::U128, Cardinality::SingleValue)?;
        let bytes = self.fast_field_data(field, 0)?.read_bytes()?;
        Ok(open_u128::<Ipv6Addr>(bytes)?)
@@ -159,9 +166,13 @@ impl FastFieldReaders {
    /// Returns the `ip` fast field reader associated to `field`.
    ///
    /// If `field` is not a u128 fast field, this method returns an Error.
-   pub fn ip_addrs(&self, field: Field) -> crate::Result<MultiValuedFastFieldReader<Ipv6Addr>> {
+   pub fn ip_addrs(
+       &self,
+       field_name: &str,
+   ) -> crate::Result<MultiValuedFastFieldReader<Ipv6Addr>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::U128, Cardinality::MultiValues)?;
-       let idx_reader: Arc<dyn Column<u64>> = self.typed_fast_field_reader(field)?;
+       let idx_reader: Arc<dyn Column<u64>> = self.typed_fast_field_reader(field_name)?;
        let bytes = self.fast_field_data(field, 1)?.read_bytes()?;
        let vals_reader = open_u128::<Ipv6Addr>(bytes)?;
@@ -172,7 +183,8 @@ impl FastFieldReaders {
    /// Returns the `u128` fast field reader associated to `field`.
    ///
    /// If `field` is not a u128 fast field, this method returns an Error.
-   pub(crate) fn u128(&self, field: Field) -> crate::Result<Arc<dyn Column<u128>>> {
+   pub(crate) fn u128(&self, field_name: &str) -> crate::Result<Arc<dyn Column<u128>>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::U128, Cardinality::SingleValue)?;
        let bytes = self.fast_field_data(field, 0)?.read_bytes()?;
        Ok(open_u128::<u128>(bytes)?)
@@ -181,9 +193,11 @@ impl FastFieldReaders {
    /// Returns the `u128` multi-valued fast field reader associated to `field`.
    ///
    /// If `field` is not a u128 multi-valued fast field, this method returns an Error.
-   pub fn u128s(&self, field: Field) -> crate::Result<MultiValuedFastFieldReader<u128>> {
+   pub fn u128s(&self, field_name: &str) -> crate::Result<MultiValuedFastFieldReader<u128>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::U128, Cardinality::MultiValues)?;
-       let idx_reader: Arc<dyn Column<u64>> = self.typed_fast_field_reader(field)?;
+       let idx_reader: Arc<dyn Column<u64>> =
+           self.typed_fast_field_reader(self.schema.get_field_name(field))?;
        let bytes = self.fast_field_data(field, 1)?.read_bytes()?;
        let vals_reader = open_u128::<u128>(bytes)?;
@@ -196,80 +210,88 @@ impl FastFieldReaders {
    ///
    /// If not, the fastfield reader will return the u64-value associated with the original
    /// FastValue.
-   pub fn u64_lenient(&self, field: Field) -> crate::Result<Arc<dyn Column<u64>>> {
-       self.typed_fast_field_reader(field)
+   pub fn u64_lenient(&self, field_name: &str) -> crate::Result<Arc<dyn Column<u64>>> {
+       self.typed_fast_field_reader(field_name)
    }
    /// Returns the `i64` fast field reader associated with `field`.
    ///
    /// If `field` is not an i64 fast field, this method returns an Error.
-   pub fn i64(&self, field: Field) -> crate::Result<Arc<dyn Column<i64>>> {
+   pub fn i64(&self, field_name: &str) -> crate::Result<Arc<dyn Column<i64>>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::I64, Cardinality::SingleValue)?;
-       self.typed_fast_field_reader(field)
+       self.typed_fast_field_reader(self.schema.get_field_name(field))
    }
    /// Returns the `date` fast field reader associated with `field`.
    ///
    /// If `field` is not a date fast field, this method returns an Error.
-   pub fn date(&self, field: Field) -> crate::Result<Arc<dyn Column<DateTime>>> {
+   pub fn date(&self, field_name: &str) -> crate::Result<Arc<dyn Column<DateTime>>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::Date, Cardinality::SingleValue)?;
-       self.typed_fast_field_reader(field)
+       self.typed_fast_field_reader(field_name)
    }
    /// Returns the `f64` fast field reader associated with `field`.
    ///
    /// If `field` is not an f64 fast field, this method returns an Error.
-   pub fn f64(&self, field: Field) -> crate::Result<Arc<dyn Column<f64>>> {
+   pub fn f64(&self, field_name: &str) -> crate::Result<Arc<dyn Column<f64>>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::F64, Cardinality::SingleValue)?;
-       self.typed_fast_field_reader(field)
+       self.typed_fast_field_reader(field_name)
    }
    /// Returns the `bool` fast field reader associated with `field`.
    ///
    /// If `field` is not a bool fast field, this method returns an Error.
-   pub fn bool(&self, field: Field) -> crate::Result<Arc<dyn Column<bool>>> {
+   pub fn bool(&self, field_name: &str) -> crate::Result<Arc<dyn Column<bool>>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::Bool, Cardinality::SingleValue)?;
-       self.typed_fast_field_reader(field)
+       self.typed_fast_field_reader(field_name)
    }
    /// Returns a `u64s` multi-valued fast field reader associated with `field`.
    ///
    /// If `field` is not a u64 multi-valued fast field, this method returns an Error.
-   pub fn u64s(&self, field: Field) -> crate::Result<MultiValuedFastFieldReader<u64>> {
+   pub fn u64s(&self, field_name: &str) -> crate::Result<MultiValuedFastFieldReader<u64>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::U64, Cardinality::MultiValues)?;
-       self.typed_fast_field_multi_reader(field)
+       self.typed_fast_field_multi_reader(field_name)
    }
    /// Returns a `u64s` multi-valued fast field reader associated with `field`, regardless
    /// of whether the given field is effectively of type `u64` or not.
    ///
    /// If `field` is not a u64 multi-valued fast field, this method returns an Error.
-   pub fn u64s_lenient(&self, field: Field) -> crate::Result<MultiValuedFastFieldReader<u64>> {
-       self.typed_fast_field_multi_reader(field)
+   pub fn u64s_lenient(&self, field_name: &str) -> crate::Result<MultiValuedFastFieldReader<u64>> {
+       self.typed_fast_field_multi_reader(field_name)
    }
    /// Returns an `i64s` multi-valued fast field reader associated with `field`.
    ///
    /// If `field` is not an i64 multi-valued fast field, this method returns an Error.
-   pub fn i64s(&self, field: Field) -> crate::Result<MultiValuedFastFieldReader<i64>> {
+   pub fn i64s(&self, field_name: &str) -> crate::Result<MultiValuedFastFieldReader<i64>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::I64, Cardinality::MultiValues)?;
-       self.typed_fast_field_multi_reader(field)
+       self.typed_fast_field_multi_reader(self.schema.get_field_name(field))
    }
    /// Returns an `f64s` multi-valued fast field reader associated with `field`.
    ///
    /// If `field` is not an f64 multi-valued fast field, this method returns an Error.
-   pub fn f64s(&self, field: Field) -> crate::Result<MultiValuedFastFieldReader<f64>> {
+   pub fn f64s(&self, field_name: &str) -> crate::Result<MultiValuedFastFieldReader<f64>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::F64, Cardinality::MultiValues)?;
-       self.typed_fast_field_multi_reader(field)
+       self.typed_fast_field_multi_reader(self.schema.get_field_name(field))
    }
    /// Returns a `bools` multi-valued fast field reader associated with `field`.
    ///
    /// If `field` is not a bool multi-valued fast field, this method returns an Error.
-   pub fn bools(&self, field: Field) -> crate::Result<MultiValuedFastFieldReader<bool>> {
+   pub fn bools(&self, field_name: &str) -> crate::Result<MultiValuedFastFieldReader<bool>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::Bool, Cardinality::MultiValues)?;
-       self.typed_fast_field_multi_reader(field)
+       self.typed_fast_field_multi_reader(self.schema.get_field_name(field))
    }
    /// Returns a `time::OffsetDateTime` multi-valued fast field reader associated with
@@ -277,15 +299,17 @@ impl FastFieldReaders {
    ///
    /// If `field` is not a `time::OffsetDateTime` multi-valued fast field, this method returns an
    /// Error.
-   pub fn dates(&self, field: Field) -> crate::Result<MultiValuedFastFieldReader<DateTime>> {
+   pub fn dates(&self, field_name: &str) -> crate::Result<MultiValuedFastFieldReader<DateTime>> {
+       let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::Date, Cardinality::MultiValues)?;
-       self.typed_fast_field_multi_reader(field)
+       self.typed_fast_field_multi_reader(self.schema.get_field_name(field))
    }
    /// Returns the `bytes` fast field reader associated with `field`.
    ///
    /// If `field` is not a bytes fast field, returns an Error.
-   pub fn bytes(&self, field: Field) -> crate::Result<BytesFastFieldReader> {
+   pub fn bytes(&self, field_name: &str) -> crate::Result<BytesFastFieldReader> {
+       let field = self.schema.get_field(field_name)?;
        let field_entry = self.schema.get_field_entry(field);
        if let FieldType::Bytes(bytes_option) = field_entry.field_type() {
            if !bytes_option.is_fast() {
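Every accessor in this block now follows the same three-step shape. Condensed from the hunks above (a restatement for clarity, not new behavior):

    // 1. resolve the name (get_field now returns crate::Result<Field>),
    // 2. check type and cardinality against the schema,
    // 3. open the typed column by name.
    pub fn f64(&self, field_name: &str) -> crate::Result<Arc<dyn Column<f64>>> {
        let field = self.schema.get_field(field_name)?;
        self.check_type(field, FastType::F64, Cardinality::SingleValue)?;
        self.typed_fast_field_reader(field_name)
    }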

View File

@@ -1,322 +0,0 @@
use common::BitSet;
use itertools::Itertools;
use crate::fastfield::AliveBitSet;
use crate::{merge_filtered_segments, Directory, Index, IndexSettings, Segment, SegmentOrdinal};
/// DemuxMapping can be used to reorganize data from multiple segments.
///
/// DemuxMapping is useful in a multitenant setting, in which each document might actually belong
/// to a different tenant. It allows reorganizing documents as follows:
///
/// e.g. if you have two tenant ids TENANT_A and TENANT_B and two segments with
/// the documents (simplified)
/// Seg 1 [TENANT_A, TENANT_B]
/// Seg 2 [TENANT_A, TENANT_B]
///
/// You may want to group your documents to
/// Seg 1 [TENANT_A, TENANT_A]
/// Seg 2 [TENANT_B, TENANT_B]
///
/// Demuxing is the tool for that.
/// Semantically you can define a mapping from [old segment ordinal, old doc_id] -> [new segment
/// ordinal].
#[derive(Debug, Default)]
pub struct DemuxMapping {
/// [index old segment ordinal] -> [index doc_id] = new segment ordinal
mapping: Vec<DocIdToSegmentOrdinal>,
}
/// DocIdToSegmentOrdinal maps from doc_id within a segment to the new segment ordinal for demuxing.
///
/// For every source segment there is a `DocIdToSegmentOrdinal` to distribute its doc_ids.
#[derive(Debug, Default)]
pub struct DocIdToSegmentOrdinal {
doc_id_index_to_segment_ord: Vec<SegmentOrdinal>,
}
impl DocIdToSegmentOrdinal {
/// Creates a new DocIdToSegmentOrdinal with size of num_doc_ids.
/// Initially all doc_ids point to segment ordinal 0 and need to be set
/// via the `set` method.
pub fn with_max_doc(max_doc: usize) -> Self {
DocIdToSegmentOrdinal {
doc_id_index_to_segment_ord: vec![0; max_doc],
}
}
/// Returns the number of documents in this mapping.
/// It should be equal to the `max_doc` of the segment it targets.
pub fn max_doc(&self) -> u32 {
self.doc_id_index_to_segment_ord.len() as u32
}
/// Associates a doc_id with an output `SegmentOrdinal`.
pub fn set(&mut self, doc_id: u32, segment_ord: SegmentOrdinal) {
self.doc_id_index_to_segment_ord[doc_id as usize] = segment_ord;
}
/// Iterates over the new SegmentOrdinal in the order of the doc_id.
pub fn iter(&self) -> impl Iterator<Item = SegmentOrdinal> + '_ {
self.doc_id_index_to_segment_ord.iter().cloned()
}
}
impl DemuxMapping {
/// Adds a DocIdToSegmentOrdinal. The order of the push calls
/// defines the old segment ordinal. e.g. first push = ordinal 0.
pub fn add(&mut self, segment_mapping: DocIdToSegmentOrdinal) {
self.mapping.push(segment_mapping);
}
/// Returns the old number of segments.
pub fn get_old_num_segments(&self) -> usize {
self.mapping.len()
}
}
fn docs_for_segment_ord(
doc_id_to_segment_ord: &DocIdToSegmentOrdinal,
target_segment_ord: SegmentOrdinal,
) -> AliveBitSet {
let mut bitset = BitSet::with_max_value(doc_id_to_segment_ord.max_doc());
for doc_id in doc_id_to_segment_ord
.iter()
.enumerate()
.filter(|(_doc_id, new_segment_ord)| *new_segment_ord == target_segment_ord)
.map(|(doc_id, _)| doc_id)
{
// add document if segment ordinal = target segment ordinal
bitset.insert(doc_id as u32);
}
AliveBitSet::from_bitset(&bitset)
}
fn get_alive_bitsets(
demux_mapping: &DemuxMapping,
target_segment_ord: SegmentOrdinal,
) -> Vec<AliveBitSet> {
demux_mapping
.mapping
.iter()
.map(|doc_id_to_segment_ord| {
docs_for_segment_ord(doc_id_to_segment_ord, target_segment_ord)
})
.collect_vec()
}
/// Demux the segments according to `demux_mapping`. See `DemuxMapping`.
/// The number of output_directories needs to match the max new segment ordinal from `demux_mapping`.
///
/// The ordinals of `segments` need to match the ordinals provided in `demux_mapping`.
pub fn demux(
segments: &[Segment],
demux_mapping: &DemuxMapping,
target_settings: IndexSettings,
output_directories: Vec<Box<dyn Directory>>,
) -> crate::Result<Vec<Index>> {
let mut indices = vec![];
for (target_segment_ord, output_directory) in output_directories.into_iter().enumerate() {
let alive_bitset = get_alive_bitsets(demux_mapping, target_segment_ord as u32)
.into_iter()
.map(Some)
.collect_vec();
let index = merge_filtered_segments(
segments,
target_settings.clone(),
alive_bitset,
output_directory,
)?;
indices.push(index);
}
Ok(indices)
}
#[cfg(test)]
mod tests {
use super::*;
use crate::collector::TopDocs;
use crate::directory::RamDirectory;
use crate::query::QueryParser;
use crate::schema::{Schema, TEXT};
use crate::{DocAddress, Term};
#[test]
fn test_demux_map_to_alive_bitset() {
let max_value = 2;
let mut demux_mapping = DemuxMapping::default();
// segment ordinal 0 mapping
let mut doc_id_to_segment = DocIdToSegmentOrdinal::with_max_doc(max_value);
doc_id_to_segment.set(0, 1);
doc_id_to_segment.set(1, 0);
demux_mapping.add(doc_id_to_segment);
// segment ordinal 1 mapping
let mut doc_id_to_segment = DocIdToSegmentOrdinal::with_max_doc(max_value);
doc_id_to_segment.set(0, 1);
doc_id_to_segment.set(1, 1);
demux_mapping.add(doc_id_to_segment);
{
let bit_sets_for_demuxing_to_segment_ord_0 = get_alive_bitsets(&demux_mapping, 0);
assert_eq!(
bit_sets_for_demuxing_to_segment_ord_0[0].is_deleted(0),
true
);
assert_eq!(
bit_sets_for_demuxing_to_segment_ord_0[0].is_deleted(1),
false
);
assert_eq!(
bit_sets_for_demuxing_to_segment_ord_0[1].is_deleted(0),
true
);
assert_eq!(
bit_sets_for_demuxing_to_segment_ord_0[1].is_deleted(1),
true
);
}
{
let bit_sets_for_demuxing_to_segment_ord_1 = get_alive_bitsets(&demux_mapping, 1);
assert_eq!(
bit_sets_for_demuxing_to_segment_ord_1[0].is_deleted(0),
false
);
assert_eq!(
bit_sets_for_demuxing_to_segment_ord_1[0].is_deleted(1),
true
);
assert_eq!(
bit_sets_for_demuxing_to_segment_ord_1[1].is_deleted(0),
false
);
assert_eq!(
bit_sets_for_demuxing_to_segment_ord_1[1].is_deleted(1),
false
);
}
}
#[test]
fn test_demux_segments() -> crate::Result<()> {
let first_index = {
let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT);
let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(text_field=>"texto1"))?;
index_writer.add_document(doc!(text_field=>"texto2"))?;
index_writer.commit()?;
index
};
let second_index = {
let mut schema_builder = Schema::builder();
let text_field = schema_builder.add_text_field("text", TEXT);
let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_for_tests()?;
index_writer.add_document(doc!(text_field=>"texto3"))?;
index_writer.add_document(doc!(text_field=>"texto4"))?;
index_writer.delete_term(Term::from_field_text(text_field, "4"));
index_writer.commit()?;
index
};
let mut segments: Vec<Segment> = Vec::new();
segments.extend(first_index.searchable_segments()?);
segments.extend(second_index.searchable_segments()?);
let target_settings = first_index.settings().clone();
let mut demux_mapping = DemuxMapping::default();
{
let max_value = 2;
// segment ordinal 0 mapping
let mut doc_id_to_segment = DocIdToSegmentOrdinal::with_max_doc(max_value);
doc_id_to_segment.set(0, 1);
doc_id_to_segment.set(1, 0);
demux_mapping.add(doc_id_to_segment);
// segment ordinal 1 mapping
let mut doc_id_to_segment = DocIdToSegmentOrdinal::with_max_doc(max_value);
doc_id_to_segment.set(0, 1);
doc_id_to_segment.set(1, 1);
demux_mapping.add(doc_id_to_segment);
}
assert_eq!(demux_mapping.get_old_num_segments(), 2);
let demuxed_indices = demux(
&segments,
&demux_mapping,
target_settings,
vec![
Box::<RamDirectory>::default(),
Box::<RamDirectory>::default(),
],
)?;
{
let index = &demuxed_indices[0];
let segments = index.searchable_segments()?;
assert_eq!(segments.len(), 1);
let segment_metas = segments[0].meta();
assert_eq!(segment_metas.num_deleted_docs(), 0);
assert_eq!(segment_metas.num_docs(), 1);
let searcher = index.reader().unwrap().searcher();
{
let text_field = index.schema().get_field("text").unwrap();
let do_search = |term: &str| {
let query = QueryParser::for_index(index, vec![text_field])
.parse_query(term)
.unwrap();
let top_docs: Vec<(f32, DocAddress)> =
searcher.search(&query, &TopDocs::with_limit(3)).unwrap();
top_docs.iter().map(|el| el.1.doc_id).collect::<Vec<_>>()
};
assert_eq!(do_search("texto1"), vec![] as Vec<u32>);
assert_eq!(do_search("texto2"), vec![0]);
}
}
{
let index = &demuxed_indices[1];
let segments = index.searchable_segments()?;
assert_eq!(segments.len(), 1);
let segment_metas = segments[0].meta();
assert_eq!(segment_metas.num_deleted_docs(), 0);
assert_eq!(segment_metas.num_docs(), 3);
let searcher = index.reader().unwrap().searcher();
{
let text_field = index.schema().get_field("text").unwrap();
let do_search = |term: &str| {
let query = QueryParser::for_index(index, vec![text_field])
.parse_query(term)
.unwrap();
let top_docs: Vec<(f32, DocAddress)> =
searcher.search(&query, &TopDocs::with_limit(3)).unwrap();
top_docs.iter().map(|el| el.1.doc_id).collect::<Vec<_>>()
};
assert_eq!(do_search("texto1"), vec![0]);
assert_eq!(do_search("texto2"), vec![] as Vec<u32>);
assert_eq!(do_search("texto3"), vec![1]);
assert_eq!(do_search("texto4"), vec![2]);
}
}
Ok(())
}
}
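With the demuxer module removed (see also the `pub mod demuxer;` and re-export deletions below), the same document routing remains expressible through `merge_filtered_segments`, which `demux()` merely looped over. A minimal sketch, assuming the signature used in the deleted code above:

    use tantivy::fastfield::AliveBitSet;
    use tantivy::{merge_filtered_segments, Directory, Index, IndexSettings, Segment};

    // Sketch: produce one output index, keeping only the docs selected per source
    // segment (None keeps every doc of that segment).
    fn route_to_output(
        segments: &[Segment],
        settings: IndexSettings,
        alive_bitsets: Vec<Option<AliveBitSet>>,
        output: Box<dyn Directory>,
    ) -> tantivy::Result<Index> {
        merge_filtered_segments(segments, settings, alive_bitsets, output)
    }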

View File

@@ -99,7 +99,7 @@ pub(crate) fn expect_field_id_for_sort_field(
    schema: &Schema,
    sort_by_field: &IndexSortByField,
) -> crate::Result<Field> {
-   schema.get_field(&sort_by_field.field).ok_or_else(|| {
+   schema.get_field(&sort_by_field.field).map_err(|_| {
        TantivyError::InvalidArgument(format!(
            "field to sort index by not found: {:?}",
            sort_by_field.field
@@ -462,15 +462,14 @@ mod tests_indexsorting {
        assert_eq!(searcher.segment_readers().len(), 1);
        let segment_reader = searcher.segment_reader(0);
        let fast_fields = segment_reader.fast_fields();
-       let my_number = index.schema().get_field("my_number").unwrap();
-       let fast_field = fast_fields.u64(my_number).unwrap();
+       index.schema().get_field("my_number").unwrap();
+       let fast_field = fast_fields.u64("my_number").unwrap();
        assert_eq!(fast_field.get_val(0), 10u64);
        assert_eq!(fast_field.get_val(1), 20u64);
        assert_eq!(fast_field.get_val(2), 30u64);
-       let multi_numbers = index.schema().get_field("multi_numbers").unwrap();
-       let multifield = fast_fields.u64s(multi_numbers).unwrap();
+       let multifield = fast_fields.u64s("multi_numbers").unwrap();
        let mut vals = vec![];
        multifield.get_vals(0u32, &mut vals);
        assert_eq!(vals, &[] as &[u64]);
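The `ok_or_else` to `map_err` swap is a consequence of `Schema::get_field` now returning `crate::Result<Field>` instead of `Option<Field>` (part of the Field to `&str` migration). Downstream code can usually just propagate with `?`:

    use tantivy::schema::{Field, Schema};

    // Sketch: resolving a field name after the get_field signature change.
    // Previously: schema.get_field(name).ok_or_else(|| ...)
    fn resolve(schema: &Schema, name: &str) -> tantivy::Result<Field> {
        schema.get_field(name)
    }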

View File

@@ -1465,7 +1465,7 @@ mod tests {
        let segment_reader = searcher.segment_reader(0);
        assert_eq!(segment_reader.num_docs(), 8);
        assert_eq!(segment_reader.max_doc(), 10);
-       let fast_field_reader = segment_reader.fast_fields().u64(id_field)?;
+       let fast_field_reader = segment_reader.fast_fields().u64("id")?;
        let in_order_alive_ids: Vec<u64> = segment_reader
            .doc_ids_alive()
            .map(|doc| fast_field_reader.get_val(doc))
@@ -1526,7 +1526,7 @@ mod tests {
        let segment_reader = searcher.segment_reader(0);
        assert_eq!(segment_reader.num_docs(), 8);
        assert_eq!(segment_reader.max_doc(), 10);
-       let fast_field_reader = segment_reader.fast_fields().u64(id_field)?;
+       let fast_field_reader = segment_reader.fast_fields().u64("id")?;
        let in_order_alive_ids: Vec<u64> = segment_reader
            .doc_ids_alive()
            .map(|doc| fast_field_reader.get_val(doc))
@@ -1778,7 +1778,7 @@ mod tests {
            .segment_readers()
            .iter()
            .flat_map(|segment_reader| {
-               let ff_reader = segment_reader.fast_fields().u64(id_field).unwrap();
+               let ff_reader = segment_reader.fast_fields().u64("id").unwrap();
                segment_reader
                    .doc_ids_alive()
                    .map(move |doc| ff_reader.get_val(doc))
@@ -1789,7 +1789,7 @@ mod tests {
            .segment_readers()
            .iter()
            .flat_map(|segment_reader| {
-               let ff_reader = segment_reader.fast_fields().u64(id_field).unwrap();
+               let ff_reader = segment_reader.fast_fields().u64("id").unwrap();
                segment_reader
                    .doc_ids_alive()
                    .map(move |doc| ff_reader.get_val(doc))
@@ -1804,7 +1804,7 @@ mod tests {
        let mut all_ips = Vec::new();
        let mut num_ips = 0;
        for segment_reader in searcher.segment_readers().iter() {
-           let ip_reader = segment_reader.fast_fields().ip_addrs(ips_field).unwrap();
+           let ip_reader = segment_reader.fast_fields().ip_addrs("ips").unwrap();
            for doc in segment_reader.doc_ids_alive() {
                let mut vals = vec![];
                ip_reader.get_vals(doc, &mut vals);
@@ -1851,7 +1851,7 @@ mod tests {
            .segment_readers()
            .iter()
            .map(|segment_reader| {
-               let ff_reader = segment_reader.fast_fields().ip_addrs(ips_field).unwrap();
+               let ff_reader = segment_reader.fast_fields().ip_addrs("ips").unwrap();
                ff_reader.get_index_reader().num_docs() as usize
            })
            .sum();
@@ -1863,7 +1863,7 @@ mod tests {
            .segment_readers()
            .iter()
            .flat_map(|segment_reader| {
-               let ff_reader = segment_reader.fast_fields().ip_addr(ip_field).unwrap();
+               let ff_reader = segment_reader.fast_fields().ip_addr("ip").unwrap();
                segment_reader.doc_ids_alive().flat_map(move |doc| {
                    let val = ff_reader.get_val(doc);
                    if val == Ipv6Addr::from_u128(0) {
@@ -1902,7 +1902,7 @@ mod tests {
            .segment_readers()
            .iter()
            .flat_map(|segment_reader| {
-               let ff_reader = segment_reader.fast_fields().ip_addrs(ips_field).unwrap();
+               let ff_reader = segment_reader.fast_fields().ip_addrs("ips").unwrap();
                segment_reader.doc_ids_alive().flat_map(move |doc| {
                    let mut vals = vec![];
                    ff_reader.get_vals(doc, &mut vals);
@@ -1914,9 +1914,9 @@ mod tests {
        // multivalue fast field tests
        for segment_reader in searcher.segment_readers().iter() {
-           let id_reader = segment_reader.fast_fields().u64(id_field).unwrap();
-           let ff_reader = segment_reader.fast_fields().u64s(multi_numbers).unwrap();
-           let bool_ff_reader = segment_reader.fast_fields().bools(multi_bools).unwrap();
+           let id_reader = segment_reader.fast_fields().u64("id").unwrap();
+           let ff_reader = segment_reader.fast_fields().u64s("multi_numbers").unwrap();
+           let bool_ff_reader = segment_reader.fast_fields().bools("multi_bools").unwrap();
            for doc in segment_reader.doc_ids_alive() {
                let mut vals = vec![];
                ff_reader.get_vals(doc, &mut vals);
@@ -2109,7 +2109,7 @@ mod tests {
        // test facets
        for segment_reader in searcher.segment_readers().iter() {
            let mut facet_reader = segment_reader.facet_reader(facet_field).unwrap();
-           let ff_reader = segment_reader.fast_fields().u64(id_field).unwrap();
+           let ff_reader = segment_reader.fast_fields().u64("id").unwrap();
            for doc_id in segment_reader.doc_ids_alive() {
                let mut facet_ords = Vec::new();
                facet_reader.facet_ords(doc_id, &mut facet_ords);
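The IP-address fast fields introduced for the columnar work use the same name-based accessors. A hedged sketch combining the single- and multi-valued variants exercised above:

    use std::net::Ipv6Addr;
    use tantivy::SegmentReader;

    // Sketch: read IP fast fields by name ("ip" single-valued, "ips" multi-valued,
    // mirroring the tests above). Values are stored as Ipv6Addr.
    fn read_ips(segment_reader: &SegmentReader, doc: u32) -> tantivy::Result<Vec<Ipv6Addr>> {
        let single = segment_reader.fast_fields().ip_addr("ip")?;
        let _first: Ipv6Addr = single.get_val(doc);
        let multi = segment_reader.fast_fields().ip_addrs("ips")?;
        let mut vals = Vec::new();
        multi.get_vals(doc, &mut vals);
        Ok(vals)
    }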

View File

@@ -16,7 +16,7 @@ use crate::fastfield::{
    MultiValueIndex, MultiValuedFastFieldReader,
};
use crate::fieldnorm::{FieldNormReader, FieldNormReaders, FieldNormsSerializer, FieldNormsWriter};
-use crate::indexer::doc_id_mapping::{expect_field_id_for_sort_field, SegmentDocIdMapping};
+use crate::indexer::doc_id_mapping::SegmentDocIdMapping;
use crate::indexer::sorted_doc_id_column::RemappedDocIdColumn;
use crate::indexer::sorted_doc_id_multivalue_column::RemappedDocIdMultiValueColumn;
use crate::indexer::SegmentSerializer;
@@ -335,8 +335,10 @@ impl IndexMerger {
            .readers
            .iter()
            .map(|segment_reader| {
-               let ff_reader: MultiValuedFastFieldReader<u128> =
-                   segment_reader.fast_fields().u128s(field).expect(
+               let ff_reader: MultiValuedFastFieldReader<u128> = segment_reader
+                   .fast_fields()
+                   .u128s(self.schema.get_field_name(field))
+                   .expect(
                        "Failed to find index for multivalued field. This is a bug in tantivy, \
                        please report.",
                    );
@@ -401,10 +403,13 @@ impl IndexMerger {
            .readers
            .iter()
            .map(|reader| {
-               let u128_reader: Arc<dyn Column<u128>> = reader.fast_fields().u128(field).expect(
-                   "Failed to find a reader for single fast field. This is a tantivy bug and it \
-                   should never happen.",
-               );
+               let u128_reader: Arc<dyn Column<u128>> = reader
+                   .fast_fields()
+                   .u128(self.schema.get_field_name(field))
+                   .expect(
+                       "Failed to find a reader for single fast field. This is a tantivy bug and \
+                       it should never happen.",
+                   );
                u128_reader
            })
            .collect::<Vec<_>>();
@@ -431,7 +436,11 @@ impl IndexMerger {
        fast_field_serializer: &mut CompositeFastFieldSerializer,
        doc_id_mapping: &SegmentDocIdMapping,
    ) -> crate::Result<()> {
-       let fast_field_accessor = RemappedDocIdColumn::new(&self.readers, doc_id_mapping, field);
+       let fast_field_accessor = RemappedDocIdColumn::new(
+           &self.readers,
+           doc_id_mapping,
+           self.schema.get_field_name(field),
+       );
        fast_field_serializer.create_auto_detect_u64_fast_field(field, fast_field_accessor)?;
        Ok(())
@@ -464,8 +473,8 @@ impl IndexMerger {
        reader: &SegmentReader,
        sort_by_field: &IndexSortByField,
    ) -> crate::Result<Arc<dyn Column>> {
-       let field_id = expect_field_id_for_sort_field(reader.schema(), sort_by_field)?; // for now expect fastfield, but not strictly required
-       let value_accessor = reader.fast_fields().u64_lenient(field_id)?;
+       reader.schema().get_field(&sort_by_field.field)?;
+       let value_accessor = reader.fast_fields().u64_lenient(&sort_by_field.field)?;
        Ok(value_accessor)
    }
    /// Collecting value_accessors into a vec to bind the lifetime.
@@ -569,7 +578,7 @@ impl IndexMerger {
            .map(|reader| {
                let u64s_reader: MultiValuedFastFieldReader<u64> = reader
                    .fast_fields()
-                   .typed_fast_field_multi_reader::<u64>(field)
+                   .typed_fast_field_multi_reader::<u64>(self.schema.get_field_name(field))
                    .expect(
                        "Failed to find index for multivalued field. This is a bug in tantivy, \
                        please report.",
@@ -613,7 +622,7 @@ impl IndexMerger {
            .map(|reader| {
                let ff_reader: MultiValuedFastFieldReader<u64> = reader
                    .fast_fields()
-                   .u64s(field)
+                   .u64s(self.schema.get_field_name(field))
                    .expect("Could not find multivalued u64 fast value reader.");
                ff_reader
            })
@@ -684,8 +693,11 @@ impl IndexMerger {
        self.write_multi_value_fast_field_idx(field, fast_field_serializer, doc_id_mapping)?;
-       let fastfield_accessor =
-           RemappedDocIdMultiValueColumn::new(&self.readers, doc_id_mapping, field);
+       let fastfield_accessor = RemappedDocIdMultiValueColumn::new(
+           &self.readers,
+           doc_id_mapping,
+           self.schema.get_field_name(field),
+       );
        fast_field_serializer.create_auto_detect_u64_fast_field_with_idx_and_codecs(
            field,
            fastfield_accessor,
@@ -706,10 +718,13 @@ impl IndexMerger {
            .readers
            .iter()
            .map(|reader| {
-               let bytes_reader = reader.fast_fields().bytes(field).expect(
-                   "Failed to find index for bytes field. This is a bug in tantivy, please \
-                   report.",
-               );
+               let bytes_reader = reader
+                   .fast_fields()
+                   .bytes(self.schema.get_field_name(field))
+                   .expect(
+                       "Failed to find index for bytes field. This is a bug in tantivy, please \
+                       report.",
+                   );
                (reader, bytes_reader)
            })
            .collect::<Vec<_>>();
@@ -1206,7 +1221,10 @@ mod tests {
        {
            let get_fast_vals = |terms: Vec<Term>| {
                let query = BooleanQuery::new_multiterms_query(terms);
-               searcher.search(&query, &FastFieldTestCollector::for_field(score_field))
+               searcher.search(
+                   &query,
+                   &FastFieldTestCollector::for_field("score".to_string()),
+               )
            };
            let get_fast_vals_bytes = |terms: Vec<Term>| {
                let query = BooleanQuery::new_multiterms_query(terms);
@@ -1244,7 +1262,7 @@ mod tests {
        let mut index_writer = index.writer_for_tests()?;
        let reader = index.reader().unwrap();
        let search_term = |searcher: &Searcher, term: Term| {
-           let collector = FastFieldTestCollector::for_field(score_field);
+           let collector = FastFieldTestCollector::for_field("score".to_string());
            let bytes_collector = BytesFastFieldTestCollector::for_field(bytes_score_field);
            let term_query = TermQuery::new(term, IndexRecordOption::Basic);
            searcher
@@ -1366,7 +1384,7 @@ mod tests {
        let score_field_reader = searcher
            .segment_reader(0)
            .fast_fields()
-           .u64(score_field)
+           .u64("score")
            .unwrap();
        assert_eq!(score_field_reader.min_value(), 4000);
        assert_eq!(score_field_reader.max_value(), 7000);
@@ -1374,7 +1392,7 @@ mod tests {
        let score_field_reader = searcher
            .segment_reader(1)
            .fast_fields()
-           .u64(score_field)
+           .u64("score")
            .unwrap();
        assert_eq!(score_field_reader.min_value(), 1);
        assert_eq!(score_field_reader.max_value(), 3);
@@ -1420,7 +1438,7 @@ mod tests {
        let score_field_reader = searcher
            .segment_reader(0)
            .fast_fields()
-           .u64(score_field)
+           .u64("score")
            .unwrap();
        assert_eq!(score_field_reader.min_value(), 3);
        assert_eq!(score_field_reader.max_value(), 7000);
@@ -1467,7 +1485,7 @@ mod tests {
        let score_field_reader = searcher
            .segment_reader(0)
            .fast_fields()
-           .u64(score_field)
+           .u64("score")
            .unwrap();
        assert_eq!(score_field_reader.min_value(), 3);
        assert_eq!(score_field_reader.max_value(), 7000);
@@ -1514,7 +1532,7 @@ mod tests {
        let score_field_reader = searcher
            .segment_reader(0)
            .fast_fields()
-           .u64(score_field)
+           .u64("score")
            .unwrap();
        assert_eq!(score_field_reader.min_value(), 6000);
        assert_eq!(score_field_reader.max_value(), 7000);
@@ -1836,7 +1854,7 @@ mod tests {
        {
            let segment = searcher.segment_reader(0u32);
-           let ff_reader = segment.fast_fields().u64s(int_field).unwrap();
+           let ff_reader = segment.fast_fields().u64s("intvals").unwrap();
            ff_reader.get_vals(0, &mut vals);
            assert_eq!(&vals, &[1, 2]);
@@ -1862,7 +1880,7 @@ mod tests {
        {
            let segment = searcher.segment_reader(1u32);
-           let ff_reader = segment.fast_fields().u64s(int_field).unwrap();
+           let ff_reader = segment.fast_fields().u64s("intvals").unwrap();
            ff_reader.get_vals(0, &mut vals);
            assert_eq!(&vals, &[28, 27]);
@@ -1872,7 +1890,7 @@ mod tests {
        {
            let segment = searcher.segment_reader(2u32);
-           let ff_reader = segment.fast_fields().u64s(int_field).unwrap();
+           let ff_reader = segment.fast_fields().u64s("intvals").unwrap();
            ff_reader.get_vals(0, &mut vals);
            assert_eq!(&vals, &[20]);
        }
@@ -1889,7 +1907,7 @@ mod tests {
        {
            let searcher = reader.searcher();
            let segment = searcher.segment_reader(0u32);
-           let ff_reader = segment.fast_fields().u64s(int_field).unwrap();
+           let ff_reader = segment.fast_fields().u64s("intvals").unwrap();
            ff_reader.get_vals(0, &mut vals);
            assert_eq!(&vals, &[1, 2]);
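Inside the merger, call sites that still hold a `Field` handle bridge to the new name-based readers via `Schema::get_field_name`; the recurring pattern, condensed from the hunks above:

    // Condensed restatement: map the Field handle back to its name before
    // requesting the (now name-keyed) fast field reader.
    let field_name = self.schema.get_field_name(field);
    let bytes_reader = reader.fast_fields().bytes(field_name).expect(
        "Failed to find index for bytes field. This is a bug in tantivy, please report.",
    );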

View File

@@ -185,7 +185,7 @@ mod tests {
        let segment_reader = searcher.segment_readers().last().unwrap();
        let fast_fields = segment_reader.fast_fields();
-       let fast_field = fast_fields.u64(int_field).unwrap();
+       let fast_field = fast_fields.u64("intval").unwrap();
        assert_eq!(fast_field.get_val(5), 1u64);
        assert_eq!(fast_field.get_val(4), 2u64);
        assert_eq!(fast_field.get_val(3), 3u64);
@@ -364,15 +364,13 @@ mod tests {
            .unwrap();
        let int_field = index.schema().get_field("intval").unwrap();
-       let multi_numbers = index.schema().get_field("multi_numbers").unwrap();
-       let bytes_field = index.schema().get_field("bytes").unwrap();
        let reader = index.reader().unwrap();
        let searcher = reader.searcher();
        assert_eq!(searcher.segment_readers().len(), 1);
        let segment_reader = searcher.segment_readers().last().unwrap();
        let fast_fields = segment_reader.fast_fields();
-       let fast_field = fast_fields.u64(int_field).unwrap();
+       let fast_field = fast_fields.u64("intval").unwrap();
        assert_eq!(fast_field.get_val(0), 1u64);
        assert_eq!(fast_field.get_val(1), 2u64);
        assert_eq!(fast_field.get_val(2), 3u64);
@@ -386,7 +384,7 @@ mod tests {
            vals
        };
        let fast_fields = segment_reader.fast_fields();
-       let fast_field = fast_fields.u64s(multi_numbers).unwrap();
+       let fast_field = fast_fields.u64s("multi_numbers").unwrap();
        assert_eq!(&get_vals(&fast_field, 0), &[] as &[u64]);
        assert_eq!(&get_vals(&fast_field, 1), &[2, 3]);
        assert_eq!(&get_vals(&fast_field, 2), &[3, 4]);
@@ -394,7 +392,7 @@ mod tests {
        assert_eq!(&get_vals(&fast_field, 4), &[20]);
        assert_eq!(&get_vals(&fast_field, 5), &[1001, 1002]);
-       let fast_field = fast_fields.bytes(bytes_field).unwrap();
+       let fast_field = fast_fields.bytes("bytes").unwrap();
        assert_eq!(fast_field.get_bytes(0), &[] as &[u8]);
        assert_eq!(fast_field.get_bytes(2), &[1, 2, 3]);
        assert_eq!(fast_field.get_bytes(5), &[5, 5]);
@@ -527,7 +525,6 @@ mod bench_sorted_index_merge {
            order: Order::Desc,
        };
        let index = create_index(Some(sort_by_field.clone()));
-       let field = index.schema().get_field("intval").unwrap();
        let segments = index.searchable_segments().unwrap();
        let merger: IndexMerger =
            IndexMerger::open(index.schema(), index.settings().clone(), &segments[..])?;
@@ -535,8 +532,10 @@ mod bench_sorted_index_merge {
        b.iter(|| {
            let sorted_doc_ids = doc_id_mapping.iter_old_doc_addrs().map(|doc_addr| {
                let reader = &merger.readers[doc_addr.segment_ord as usize];
-               let u64_reader: Arc<dyn Column<u64>> =
-                   reader.fast_fields().typed_fast_field_reader(field).expect(
+               let u64_reader: Arc<dyn Column<u64>> = reader
+                   .fast_fields()
+                   .typed_fast_field_reader("intval")
+                   .expect(
                        "Failed to find a reader for single fast field. This is a tantivy bug and \
                        it should never happen.",
                    );

View File

@@ -1,6 +1,5 @@
pub mod delete_queue;
-pub mod demuxer;
pub mod doc_id_mapping;
mod doc_opstamp_mapping;
mod flat_map_with_buffer;

View File

@@ -4,7 +4,6 @@ use fastfield_codecs::Column;
use itertools::Itertools;
use crate::indexer::doc_id_mapping::SegmentDocIdMapping;
-use crate::schema::Field;
use crate::SegmentReader;
pub(crate) struct RemappedDocIdColumn<'a> {
@@ -41,7 +40,7 @@ impl<'a> RemappedDocIdColumn<'a> {
    pub(crate) fn new(
        readers: &'a [SegmentReader],
        doc_id_mapping: &'a SegmentDocIdMapping,
-       field: Field,
+       field: &str,
    ) -> Self {
        let (min_value, max_value) = readers
            .iter()

View File

@@ -5,7 +5,6 @@ use fastfield_codecs::Column;
use super::flat_map_with_buffer::FlatMapWithBufferIter;
use crate::fastfield::{MultiValueIndex, MultiValuedFastFieldReader};
use crate::indexer::doc_id_mapping::SegmentDocIdMapping;
-use crate::schema::Field;
use crate::{DocAddress, SegmentReader};
pub(crate) struct RemappedDocIdMultiValueColumn<'a> {
@@ -20,7 +19,7 @@ impl<'a> RemappedDocIdMultiValueColumn<'a> {
    pub(crate) fn new(
        readers: &'a [SegmentReader],
        doc_id_mapping: &'a SegmentDocIdMapping,
-       field: Field,
+       field: &str,
    ) -> Self {
        // Our values are bitpacked and we need to know what should be
        // our bitwidth and our minimum value before serializing any values.

View File

@@ -299,7 +299,6 @@ pub use crate::core::{
     SegmentReader, SingleSegmentIndexWriter,
 };
 pub use crate::directory::Directory;
-pub use crate::indexer::demuxer::*;
 pub use crate::indexer::operation::UserOperation;
 pub use crate::indexer::{merge_filtered_segments, merge_indices, IndexWriter, PreparedCommit};
 pub use crate::postings::Postings;
@@ -995,8 +994,8 @@ pub mod tests {
         let fast_field_unsigned = schema_builder.add_u64_field("unsigned", FAST);
         let fast_field_signed = schema_builder.add_i64_field("signed", FAST);
         let fast_field_float = schema_builder.add_f64_field("float", FAST);
-        let text_field = schema_builder.add_text_field("text", TEXT);
-        let stored_int_field = schema_builder.add_u64_field("stored_int", STORED);
+        schema_builder.add_text_field("text", TEXT);
+        schema_builder.add_u64_field("stored_int", STORED);
         let schema = schema_builder.build();
         let index = Index::create_in_ram(schema);
@@ -1011,37 +1010,37 @@ pub mod tests {
         let searcher = reader.searcher();
         let segment_reader: &SegmentReader = searcher.segment_reader(0);
         {
-            let fast_field_reader_res = segment_reader.fast_fields().u64(text_field);
+            let fast_field_reader_res = segment_reader.fast_fields().u64("text");
             assert!(fast_field_reader_res.is_err());
         }
         {
-            let fast_field_reader_opt = segment_reader.fast_fields().u64(stored_int_field);
+            let fast_field_reader_opt = segment_reader.fast_fields().u64("stored_int");
             assert!(fast_field_reader_opt.is_err());
         }
         {
-            let fast_field_reader_opt = segment_reader.fast_fields().u64(fast_field_signed);
+            let fast_field_reader_opt = segment_reader.fast_fields().u64("signed");
             assert!(fast_field_reader_opt.is_err());
         }
         {
-            let fast_field_reader_opt = segment_reader.fast_fields().u64(fast_field_float);
+            let fast_field_reader_opt = segment_reader.fast_fields().u64("float");
             assert!(fast_field_reader_opt.is_err());
         }
         {
-            let fast_field_reader_opt = segment_reader.fast_fields().u64(fast_field_unsigned);
+            let fast_field_reader_opt = segment_reader.fast_fields().u64("unsigned");
             assert!(fast_field_reader_opt.is_ok());
             let fast_field_reader = fast_field_reader_opt.unwrap();
             assert_eq!(fast_field_reader.get_val(0), 4u64)
         }
         {
-            let fast_field_reader_res = segment_reader.fast_fields().i64(fast_field_signed);
+            let fast_field_reader_res = segment_reader.fast_fields().i64("signed");
             assert!(fast_field_reader_res.is_ok());
             let fast_field_reader = fast_field_reader_res.unwrap();
             assert_eq!(fast_field_reader.get_val(0), 4i64)
         }
         {
-            let fast_field_reader_res = segment_reader.fast_fields().f64(fast_field_float);
+            let fast_field_reader_res = segment_reader.fast_fields().f64("float");
             assert!(fast_field_reader_res.is_ok());
             let fast_field_reader = fast_field_reader_res.unwrap();
             assert_eq!(fast_field_reader.get_val(0), 4f64)
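The test above exercises the new name-based fast field accessors. A minimal sketch of the resulting calling convention, assuming the accessors shown in this diff (`fast_fields()` lookups take a field name and return `crate::Result`):

```rust
use tantivy::SegmentReader;

// Minimal sketch, assuming the name-based accessors shown in the test above:
// fast field readers are looked up by field *name*, and a wrong name or a
// non-fast field surfaces as an Err instead of a misused Field handle.
fn first_unsigned_value(segment_reader: &SegmentReader) -> tantivy::Result<u64> {
    let reader = segment_reader.fast_fields().u64("unsigned")?;
    Ok(reader.get_val(0))
}
```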

View File

@@ -10,7 +10,7 @@ pub enum LogicalLiteral {
     Term(Term),
     Phrase(Vec<(usize, Term)>, u32),
     Range {
-        field: Field,
+        field: String,
         value_type: Type,
         lower: Bound<Term>,
         upper: Bound<Term>,

View File

@@ -672,7 +672,7 @@ impl QueryParser {
         let field_entry = self.schema.get_field_entry(field);
         let value_type = field_entry.field_type().value_type();
         let logical_ast = LogicalAst::Leaf(Box::new(LogicalLiteral::Range {
-            field,
+            field: self.schema.get_field_name(field).to_string(),
             value_type,
             lower: self.resolve_bound(field, json_path, &lower)?,
             upper: self.resolve_bound(field, json_path, &upper)?,
@@ -964,7 +964,7 @@ mod test {
         let query = make_query_parser().parse_query("title:[A TO B]").unwrap();
         assert_eq!(
             format!("{:?}", query),
-            "RangeQuery { field: Field(0), value_type: Str, left_bound: Included([97]), \
+            "RangeQuery { field: \"title\", value_type: Str, left_bound: Included([97]), \
             right_bound: Included([98]) }"
         );
     }
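The updated assertion shows the user-visible effect of the migration: a parsed range query's `Debug` output now names the field instead of printing an opaque `Field(0)`. A hedged sketch of checking that, reusing the `make_query_parser` helper from the test module above:

```rust
// Sketch reusing the test module's make_query_parser() helper: the Debug
// output of a parsed range query now carries the field name, not Field(0).
let query = make_query_parser().parse_query("title:[A TO B]").unwrap();
assert!(format!("{query:?}").contains("field: \"title\""));
```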

View File

@@ -67,7 +67,7 @@ pub(crate) fn map_bound<TFrom, TTo, Transform: Fn(&TFrom) -> TTo>(
 ///
 /// let reader = index.reader()?;
 /// let searcher = reader.searcher();
-/// let docs_in_the_sixties = RangeQuery::new_u64(year_field, 1960..1970);
+/// let docs_in_the_sixties = RangeQuery::new_u64("year".to_string(), 1960..1970);
 /// let num_60s_books = searcher.search(&docs_in_the_sixties, &Count)?;
 /// assert_eq!(num_60s_books, 2285);
 /// Ok(())
@@ -76,7 +76,7 @@ pub(crate) fn map_bound<TFrom, TTo, Transform: Fn(&TFrom) -> TTo>(
 /// ```
 #[derive(Clone, Debug)]
 pub struct RangeQuery {
-    field: Field,
+    field: String,
     value_type: Type,
     left_bound: Bound<Vec<u8>>,
     right_bound: Bound<Vec<u8>>,
@@ -88,15 +88,12 @@ impl RangeQuery {
     /// If the value type is not correct, something may go terribly wrong when
     /// the `Weight` object is created.
     pub fn new_term_bounds(
-        field: Field,
+        field: String,
         value_type: Type,
         left_bound: &Bound<Term>,
         right_bound: &Bound<Term>,
     ) -> RangeQuery {
-        let verify_and_unwrap_term = |val: &Term| {
-            assert_eq!(field, val.field());
-            val.value_bytes().to_owned()
-        };
+        let verify_and_unwrap_term = |val: &Term| val.value_bytes().to_owned();
         RangeQuery {
             field,
             value_type,
@@ -109,7 +106,7 @@ impl RangeQuery {
     ///
     /// If the field is not of the type `i64`, tantivy
     /// will panic when the `Weight` object is created.
-    pub fn new_i64(field: Field, range: Range<i64>) -> RangeQuery {
+    pub fn new_i64(field: String, range: Range<i64>) -> RangeQuery {
         RangeQuery::new_i64_bounds(
             field,
             Bound::Included(range.start),
@@ -125,11 +122,15 @@ impl RangeQuery {
     /// If the field is not of the type `i64`, tantivy
     /// will panic when the `Weight` object is created.
     pub fn new_i64_bounds(
-        field: Field,
+        field: String,
         left_bound: Bound<i64>,
         right_bound: Bound<i64>,
     ) -> RangeQuery {
-        let make_term_val = |val: &i64| Term::from_field_i64(field, *val).value_bytes().to_owned();
+        let make_term_val = |val: &i64| {
+            Term::from_field_i64(Field::from_field_id(0), *val)
+                .value_bytes()
+                .to_owned()
+        };
         RangeQuery {
             field,
             value_type: Type::I64,
@@ -142,7 +143,7 @@ impl RangeQuery {
     ///
     /// If the field is not of the type `f64`, tantivy
     /// will panic when the `Weight` object is created.
-    pub fn new_f64(field: Field, range: Range<f64>) -> RangeQuery {
+    pub fn new_f64(field: String, range: Range<f64>) -> RangeQuery {
         RangeQuery::new_f64_bounds(
             field,
             Bound::Included(range.start),
@@ -158,11 +159,15 @@ impl RangeQuery {
     /// If the field is not of the type `f64`, tantivy
     /// will panic when the `Weight` object is created.
     pub fn new_f64_bounds(
-        field: Field,
+        field: String,
         left_bound: Bound<f64>,
         right_bound: Bound<f64>,
     ) -> RangeQuery {
-        let make_term_val = |val: &f64| Term::from_field_f64(field, *val).value_bytes().to_owned();
+        let make_term_val = |val: &f64| {
+            Term::from_field_f64(Field::from_field_id(0), *val)
+                .value_bytes()
+                .to_owned()
+        };
         RangeQuery {
             field,
             value_type: Type::F64,
@@ -179,11 +184,15 @@ impl RangeQuery {
     /// If the field is not of the type `u64`, tantivy
     /// will panic when the `Weight` object is created.
     pub fn new_u64_bounds(
-        field: Field,
+        field: String,
         left_bound: Bound<u64>,
         right_bound: Bound<u64>,
     ) -> RangeQuery {
-        let make_term_val = |val: &u64| Term::from_field_u64(field, *val).value_bytes().to_owned();
+        let make_term_val = |val: &u64| {
+            Term::from_field_u64(Field::from_field_id(0), *val)
+                .value_bytes()
+                .to_owned()
+        };
         RangeQuery {
             field,
             value_type: Type::U64,
@@ -196,7 +205,7 @@ impl RangeQuery {
     ///
     /// If the field is not of the type `u64`, tantivy
     /// will panic when the `Weight` object is created.
-    pub fn new_u64(field: Field, range: Range<u64>) -> RangeQuery {
+    pub fn new_u64(field: String, range: Range<u64>) -> RangeQuery {
         RangeQuery::new_u64_bounds(
             field,
             Bound::Included(range.start),
@@ -212,12 +221,15 @@ impl RangeQuery {
     /// If the field is not of the type `date`, tantivy
     /// will panic when the `Weight` object is created.
     pub fn new_date_bounds(
-        field: Field,
+        field: String,
         left_bound: Bound<DateTime>,
         right_bound: Bound<DateTime>,
     ) -> RangeQuery {
-        let make_term_val =
-            |val: &DateTime| Term::from_field_date(field, *val).value_bytes().to_owned();
+        let make_term_val = |val: &DateTime| {
+            Term::from_field_date(Field::from_field_id(0), *val)
+                .value_bytes()
+                .to_owned()
+        };
         RangeQuery {
             field,
             value_type: Type::Date,
@@ -230,7 +242,7 @@ impl RangeQuery {
     ///
     /// If the field is not of the type `date`, tantivy
     /// will panic when the `Weight` object is created.
-    pub fn new_date(field: Field, range: Range<DateTime>) -> RangeQuery {
+    pub fn new_date(field: String, range: Range<DateTime>) -> RangeQuery {
         RangeQuery::new_date_bounds(
             field,
             Bound::Included(range.start),
@@ -245,7 +257,7 @@ impl RangeQuery {
     ///
     /// If the field is not of the type `Str`, tantivy
     /// will panic when the `Weight` object is created.
-    pub fn new_str_bounds(field: Field, left: Bound<&str>, right: Bound<&str>) -> RangeQuery {
+    pub fn new_str_bounds(field: String, left: Bound<&str>, right: Bound<&str>) -> RangeQuery {
         let make_term_val = |val: &&str| val.as_bytes().to_vec();
         RangeQuery {
             field,
@@ -259,7 +271,7 @@ impl RangeQuery {
     ///
     /// If the field is not of the type `Str`, tantivy
     /// will panic when the `Weight` object is created.
-    pub fn new_str(field: Field, range: Range<&str>) -> RangeQuery {
+    pub fn new_str(field: String, range: Range<&str>) -> RangeQuery {
         RangeQuery::new_str_bounds(
             field,
             Bound::Included(range.start),
@@ -268,22 +280,8 @@ impl RangeQuery {
     }

     /// Field to search over
-    pub fn field(&self) -> Field {
-        self.field
-    }
-
-    /// Lower bound of range
-    pub fn left_bound(&self) -> Bound<Term> {
-        map_bound(&self.left_bound, &|bytes| {
-            Term::from_field_bytes(self.field, bytes)
-        })
-    }
-
-    /// Upper bound of range
-    pub fn right_bound(&self) -> Bound<Term> {
-        map_bound(&self.right_bound, &|bytes| {
-            Term::from_field_bytes(self.field, bytes)
-        })
+    pub fn field(&self) -> &str {
+        &self.field
     }
 }
@@ -307,7 +305,9 @@ pub(crate) fn maps_to_u64_fastfield(typ: Type) -> bool {
 impl Query for RangeQuery {
     fn weight(&self, enable_scoring: EnableScoring<'_>) -> crate::Result<Box<dyn Weight>> {
         let schema = enable_scoring.schema();
-        let field_type = schema.get_field_entry(self.field).field_type();
+        let field_type = schema
+            .get_field_entry(schema.get_field(&self.field)?)
+            .field_type();
         let value_type = field_type.value_type();
         if value_type != self.value_type {
             let err_msg = format!(
@@ -320,7 +320,7 @@ impl Query for RangeQuery {
         if field_type.is_fast() && is_type_valid_for_fastfield_range_query(self.value_type) {
             if field_type.is_ip_addr() {
                 Ok(Box::new(IPFastFieldRangeWeight::new(
-                    self.field,
+                    self.field.to_string(),
                     &self.left_bound,
                     &self.right_bound,
                 )))
@@ -335,14 +335,14 @@ impl Query for RangeQuery {
                 let left_bound = map_bound(&self.left_bound, &parse_from_bytes);
                 let right_bound = map_bound(&self.right_bound, &parse_from_bytes);
                 Ok(Box::new(FastFieldRangeWeight::new(
-                    self.field,
+                    self.field.to_string(),
                     left_bound,
                     right_bound,
                 )))
             }
         } else {
             Ok(Box::new(RangeWeight {
-                field: self.field,
+                field: self.field.to_string(),
                 left_bound: self.left_bound.clone(),
                 right_bound: self.right_bound.clone(),
             }))
@@ -351,7 +351,7 @@ impl Query for RangeQuery {
 }

 pub struct RangeWeight {
-    field: Field,
+    field: String,
     left_bound: Bound<Vec<u8>>,
     right_bound: Bound<Vec<u8>>,
 }
@@ -379,7 +379,7 @@ impl Weight for RangeWeight {
         let max_doc = reader.max_doc();
         let mut doc_bitset = BitSet::with_max_value(max_doc);

-        let inverted_index = reader.inverted_index(self.field)?;
+        let inverted_index = reader.inverted_index(reader.schema().get_field(&self.field)?)?;
         let term_dict = inverted_index.terms();
         let mut term_range = self.term_range(term_dict)?;
         while term_range.advance() {
@@ -443,7 +443,7 @@ mod tests {
         let reader = index.reader()?;
         let searcher = reader.searcher();

-        let docs_in_the_sixties = RangeQuery::new_u64(year_field, 1960u64..1970u64);
+        let docs_in_the_sixties = RangeQuery::new_u64("year".to_string(), 1960u64..1970u64);
         // ... or `1960..=1969` if inclusive range is enabled.
         let count = searcher.search(&docs_in_the_sixties, &Count)?;
@@ -481,10 +481,13 @@ mod tests {
         let count_multiples =
             |range_query: RangeQuery| searcher.search(&range_query, &Count).unwrap();

-        assert_eq!(count_multiples(RangeQuery::new_i64(int_field, 10..11)), 9);
+        assert_eq!(
+            count_multiples(RangeQuery::new_i64("intfield".to_string(), 10..11)),
+            9
+        );
         assert_eq!(
             count_multiples(RangeQuery::new_i64_bounds(
-                int_field,
+                "intfield".to_string(),
                 Bound::Included(10),
                 Bound::Included(11)
             )),
@@ -492,7 +495,7 @@ mod tests {
         );
         assert_eq!(
             count_multiples(RangeQuery::new_i64_bounds(
-                int_field,
+                "intfield".to_string(),
                 Bound::Excluded(9),
                 Bound::Included(10)
             )),
@@ -500,7 +503,7 @@ mod tests {
         );
         assert_eq!(
             count_multiples(RangeQuery::new_i64_bounds(
-                int_field,
+                "intfield".to_string(),
                 Bound::Included(9),
                 Bound::Unbounded
             )),
@@ -540,12 +543,12 @@ mod tests {
             |range_query: RangeQuery| searcher.search(&range_query, &Count).unwrap();

         assert_eq!(
-            count_multiples(RangeQuery::new_f64(float_field, 10.0..11.0)),
+            count_multiples(RangeQuery::new_f64("floatfield".to_string(), 10.0..11.0)),
             9
         );
         assert_eq!(
             count_multiples(RangeQuery::new_f64_bounds(
-                float_field,
+                "floatfield".to_string(),
                 Bound::Included(10.0),
                 Bound::Included(11.0)
             )),
@@ -553,7 +556,7 @@ mod tests {
         );
         assert_eq!(
             count_multiples(RangeQuery::new_f64_bounds(
-                float_field,
+                "floatfield".to_string(),
                 Bound::Excluded(9.0),
                 Bound::Included(10.0)
             )),
@@ -561,7 +564,7 @@ mod tests {
         );
         assert_eq!(
             count_multiples(RangeQuery::new_f64_bounds(
-                float_field,
+                "floatfield".to_string(),
                 Bound::Included(9.0),
                 Bound::Unbounded
             )),
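Taken together, the constructor changes above mean callers identify the target field by name, and the schema lookup is deferred until `weight()` is built. A short sketch under those assumptions, using only constructors shown in this diff:

```rust
use std::ops::Bound;
use tantivy::query::RangeQuery;

// Sketch of the post-migration constructors: the field is a String and is
// only resolved against the schema (possibly failing with FieldNotFound)
// when the Weight is created, not at query-construction time.
fn example_queries() -> (RangeQuery, RangeQuery) {
    let sixties = RangeQuery::new_u64("year".to_string(), 1960u64..1970u64);
    let at_least_nine = RangeQuery::new_i64_bounds(
        "intfield".to_string(),
        Bound::Included(9),
        Bound::Unbounded,
    );
    (sixties, at_least_nine)
}
```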

View File

@@ -11,18 +11,18 @@ use fastfield_codecs::MonotonicallyMappableToU128;
 use super::fast_field_range_query::{FastFieldCardinality, RangeDocSet};
 use super::range_query::map_bound;
 use crate::query::{ConstScorer, Explanation, Scorer, Weight};
-use crate::schema::{Cardinality, Field};
+use crate::schema::Cardinality;
 use crate::{DocId, DocSet, Score, SegmentReader, TantivyError};

 /// `IPFastFieldRangeWeight` uses the ip address fast field to execute range queries.
 pub struct IPFastFieldRangeWeight {
-    field: Field,
+    field: String,
     left_bound: Bound<Ipv6Addr>,
     right_bound: Bound<Ipv6Addr>,
 }

 impl IPFastFieldRangeWeight {
-    pub fn new(field: Field, left_bound: &Bound<Vec<u8>>, right_bound: &Bound<Vec<u8>>) -> Self {
+    pub fn new(field: String, left_bound: &Bound<Vec<u8>>, right_bound: &Bound<Vec<u8>>) -> Self {
         let parse_ip_from_bytes = |data: &Vec<u8>| {
             let ip_u128: u128 =
                 u128::from_be(BinarySerializable::deserialize(&mut &data[..]).unwrap());
@@ -40,10 +40,13 @@ impl IPFastFieldRangeWeight {
 impl Weight for IPFastFieldRangeWeight {
     fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
-        let field_type = reader.schema().get_field_entry(self.field).field_type();
+        let field_type = reader
+            .schema()
+            .get_field_entry(reader.schema().get_field(&self.field)?)
+            .field_type();
         match field_type.fastfield_cardinality().unwrap() {
             Cardinality::SingleValue => {
-                let ip_addr_fast_field = reader.fast_fields().ip_addr(self.field)?;
+                let ip_addr_fast_field = reader.fast_fields().ip_addr(&self.field)?;
                 let value_range = bound_to_value_range(
                     &self.left_bound,
                     &self.right_bound,
@@ -57,7 +60,7 @@ impl Weight for IPFastFieldRangeWeight {
                 Ok(Box::new(ConstScorer::new(docset, boost)))
             }
             Cardinality::MultiValues => {
-                let ip_addr_fast_field = reader.fast_fields().ip_addrs(self.field)?;
+                let ip_addr_fast_field = reader.fast_fields().ip_addrs(&self.field)?;
                 let value_range = bound_to_value_range(
                     &self.left_bound,
                     &self.right_bound,

View File

@@ -9,18 +9,18 @@ use fastfield_codecs::MonotonicallyMappableToU64;
 use super::fast_field_range_query::{FastFieldCardinality, RangeDocSet};
 use super::range_query::map_bound;
 use crate::query::{ConstScorer, Explanation, Scorer, Weight};
-use crate::schema::{Cardinality, Field};
+use crate::schema::Cardinality;
 use crate::{DocId, DocSet, Score, SegmentReader, TantivyError};

 /// `FastFieldRangeWeight` uses the fast field to execute range queries.
 pub struct FastFieldRangeWeight {
-    field: Field,
+    field: String,
     left_bound: Bound<u64>,
     right_bound: Bound<u64>,
 }

 impl FastFieldRangeWeight {
-    pub fn new(field: Field, left_bound: Bound<u64>, right_bound: Bound<u64>) -> Self {
+    pub fn new(field: String, left_bound: Bound<u64>, right_bound: Bound<u64>) -> Self {
         let left_bound = map_bound(&left_bound, &|val| *val);
         let right_bound = map_bound(&right_bound, &|val| *val);
         Self {
@@ -33,10 +33,13 @@ impl FastFieldRangeWeight {
 impl Weight for FastFieldRangeWeight {
     fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
-        let field_type = reader.schema().get_field_entry(self.field).field_type();
+        let field_type = reader
+            .schema()
+            .get_field_entry(reader.schema().get_field(&self.field)?)
+            .field_type();
         match field_type.fastfield_cardinality().unwrap() {
             Cardinality::SingleValue => {
-                let fast_field = reader.fast_fields().u64_lenient(self.field)?;
+                let fast_field = reader.fast_fields().u64_lenient(&self.field)?;
                 let value_range = bound_to_value_range(
                     &self.left_bound,
                     &self.right_bound,
@@ -48,7 +51,7 @@ impl Weight for FastFieldRangeWeight {
                 Ok(Box::new(ConstScorer::new(docset, boost)))
             }
             Cardinality::MultiValues => {
-                let fast_field = reader.fast_fields().u64s_lenient(self.field)?;
+                let fast_field = reader.fast_fields().u64s_lenient(&self.field)?;
                 let value_range = bound_to_value_range(
                     &self.left_bound,
                     &self.right_bound,

View File

@@ -11,6 +11,7 @@ use super::ip_options::IpAddrOptions;
 use super::*;
 use crate::schema::bytes_options::BytesOptions;
 use crate::schema::field_type::ValueParsingError;
+use crate::TantivyError;

 /// Tantivy has a very strict schema.
 /// You need to specify in advance whether a field is indexed or not,
@@ -308,8 +309,12 @@ impl Schema {
     }

     /// Returns the field associated with a given name.
-    pub fn get_field(&self, field_name: &str) -> Option<Field> {
-        self.0.fields_map.get(field_name).cloned()
+    pub fn get_field(&self, field_name: &str) -> crate::Result<Field> {
+        self.0
+            .fields_map
+            .get(field_name)
+            .cloned()
+            .ok_or_else(|| TantivyError::FieldNotFound(field_name.to_string()))
     }

     /// Create document from a named doc.
@@ -319,7 +324,7 @@ impl Schema {
     ) -> Result<Document, DocParsingError> {
         let mut document = Document::new();
         for (field_name, values) in named_doc.0 {
-            if let Some(field) = self.get_field(&field_name) {
+            if let Ok(field) = self.get_field(&field_name) {
                 for value in values {
                     document.add_field_value(field, value);
                 }
@@ -360,7 +365,7 @@ impl Schema {
     ) -> Result<Document, DocParsingError> {
         let mut doc = Document::default();
         for (field_name, json_value) in json_obj {
-            if let Some(field) = self.get_field(&field_name) {
+            if let Ok(field) = self.get_field(&field_name) {
                 let field_entry = self.get_field_entry(field);
                 let field_type = field_entry.field_type();
                 match json_value {
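Since `get_field` now returns `crate::Result<Field>`, a missing field can be propagated with `?` instead of being pattern-matched out of an `Option`. A minimal sketch, assuming the `TantivyError::FieldNotFound` variant introduced above:

```rust
use tantivy::schema::{Field, Schema};

// Minimal sketch: an unknown field name now becomes
// Err(TantivyError::FieldNotFound(..)) and can simply be propagated.
fn resolve_field(schema: &Schema, name: &str) -> tantivy::Result<Field> {
    let field = schema.get_field(name)?;
    Ok(field)
}
```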

View File

@@ -30,7 +30,7 @@ impl TextOptions {
         self.stored
     }

-    /// Returns true iff the value is a fast field.
+    /// Returns true if and only if the value is a fast field.
     pub fn is_fast(&self) -> bool {
         self.fast
     }

View File

@@ -312,7 +312,7 @@ mod tests {
         bitpack.write(51, 6, &mut buffer).unwrap();
         assert_eq!(compute_num_bits(51), 6);
         bitpack.close(&mut buffer).unwrap();
-        assert_eq!(buffer.len(), 3 + 7);
+        assert_eq!(buffer.len(), 3);
         assert_eq!(extract_bits(&buffer[..], 0, 9), 321u64);
         assert_eq!(extract_bits(&buffer[..], 9, 2), 2u64);
         assert_eq!(extract_bits(&buffer[..], 11, 6), 51u64);
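The corrected expectation follows directly from the bit arithmetic: the test writes 9 + 2 + 6 = 17 bits, and with the extra padding removed, `close()` only rounds up to the next whole byte. A one-line check of that arithmetic:

```rust
// 17 bits written in total; without the old 7-byte padding, close() pads to
// the next byte boundary only: ceil(17 / 8) = 3 bytes.
fn main() {
    let bits_written = 9 + 2 + 6;
    assert_eq!((bits_written + 7) / 8, 3);
}
```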

View File

@@ -69,14 +69,16 @@ impl<TSSTable: SSTable> Dictionary<TSSTable> {
     pub(crate) fn sstable_delta_reader_for_key_range(
         &self,
         key_range: impl RangeBounds<[u8]>,
+        limit: Option<u64>,
     ) -> io::Result<DeltaReader<'static, TSSTable::ValueReader>> {
-        let slice = self.file_slice_for_range(key_range);
+        let slice = self.file_slice_for_range(key_range, limit);
         let data = slice.read_bytes()?;
         Ok(TSSTable::delta_reader(data))
     }

     /// This function returns a file slice covering a set of sstable blocks
-    /// that include the key range passed in arguments.
+    /// that include the key range passed in arguments. Optionally returns
+    /// only blocks for up to `limit` matching terms.
     ///
     /// It works by identifying
     /// - `first_block`: the block containing the start boundary key
@@ -92,26 +94,56 @@ impl<TSSTable: SSTable> Dictionary<TSSTable> {
     /// On the rare edge case where a user asks for `(start_key, end_key]`
     /// and `start_key` happens to be the last key of a block, we return a
     /// slice in which the first block was not actually necessary.
-    pub fn file_slice_for_range(&self, key_range: impl RangeBounds<[u8]>) -> FileSlice {
-        let start_bound: Bound<usize> = match key_range.start_bound() {
+    pub fn file_slice_for_range(
+        &self,
+        key_range: impl RangeBounds<[u8]>,
+        limit: Option<u64>,
+    ) -> FileSlice {
+        let first_block_id = match key_range.start_bound() {
             Bound::Included(key) | Bound::Excluded(key) => {
-                let Some(first_block_addr) = self.sstable_index.search_block(key) else {
+                let Some(first_block_id) = self.sstable_index.locate_with_key(key) else {
                     return FileSlice::empty();
                 };
-                Bound::Included(first_block_addr.byte_range.start)
+                Some(first_block_id)
             }
-            Bound::Unbounded => Bound::Unbounded,
+            Bound::Unbounded => None,
         };
-        let end_bound: Bound<usize> = match key_range.end_bound() {
-            Bound::Included(key) | Bound::Excluded(key) => {
-                if let Some(block_addr) = self.sstable_index.search_block(key) {
-                    Bound::Excluded(block_addr.byte_range.end)
-                } else {
-                    Bound::Unbounded
-                }
-            }
-            Bound::Unbounded => Bound::Unbounded,
+
+        let last_block_id = match key_range.end_bound() {
+            Bound::Included(key) | Bound::Excluded(key) => self.sstable_index.locate_with_key(key),
+            Bound::Unbounded => None,
+        };
+
+        let start_bound = if let Some(first_block_id) = first_block_id {
+            let Some(block_addr) = self.sstable_index.get_block(first_block_id) else {
+                return FileSlice::empty();
+            };
+            Bound::Included(block_addr.byte_range.start)
+        } else {
+            Bound::Unbounded
+        };
+
+        let last_block_id = if let Some(limit) = limit {
+            let second_block_id = first_block_id.map(|id| id + 1).unwrap_or(0);
+            if let Some(block_addr) = self.sstable_index.get_block(second_block_id) {
+                let ordinal_limit = block_addr.first_ordinal + limit;
+                let last_block_limit = self.sstable_index.locate_with_ord(ordinal_limit);
+                if let Some(last_block_id) = last_block_id {
+                    Some(last_block_id.min(last_block_limit))
+                } else {
+                    Some(last_block_limit)
+                }
+            } else {
+                last_block_id
+            }
+        } else {
+            last_block_id
+        };
+
+        let end_bound = last_block_id
+            .and_then(|block_id| self.sstable_index.get_block(block_id))
+            .map(|block_addr| Bound::Excluded(block_addr.byte_range.end))
+            .unwrap_or(Bound::Unbounded);

         self.sstable_slice.slice((start_bound, end_bound))
     }
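A self-contained model of the clamping rule above may help: each block records its `first_ordinal`, the `limit` is converted into an ordinal upper bound starting from the block *after* the first matching one, and the last block is clamped to whichever block holds that ordinal. The names below are illustrative, not part of the crate's API:

```rust
// Illustrative model only: `first_ordinals[i]` stands in for
// blocks[i].block_addr.first_ordinal from the real sstable index.
fn clamp_last_block(
    first_ordinals: &[u64],
    first_block: Option<usize>,
    last_block: Option<usize>,
    limit: u64,
) -> Option<usize> {
    let second_block = first_block.map(|id| id + 1).unwrap_or(0);
    let Some(&start_ord) = first_ordinals.get(second_block) else {
        return last_block; // no next block: the limit cannot shrink the range
    };
    let ordinal_limit = start_ord + limit;
    // equivalent of locate_with_ord: block containing `ordinal_limit`
    let by_limit = match first_ordinals.binary_search(&ordinal_limit) {
        Ok(pos) => pos,
        Err(pos) => pos - 1, // Err(0) impossible: blocks start at ordinal 0
    };
    Some(last_block.map_or(by_limit, |b| b.min(by_limit)))
}
```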
@@ -156,10 +188,15 @@ impl<TSSTable: SSTable> Dictionary<TSSTable> {
     /// Returns the ordinal associated with a given term.
     pub fn term_ord<K: AsRef<[u8]>>(&self, key: K) -> io::Result<Option<TermOrdinal>> {
-        let mut term_ord = 0u64;
         let key_bytes = key.as_ref();
-        let mut sstable_reader = self.sstable_reader()?;
-        while sstable_reader.advance().unwrap_or(false) {
+
+        let Some(block_addr) = self.sstable_index.get_block_with_key(key_bytes) else {
+            return Ok(None);
+        };
+        let mut term_ord = block_addr.first_ordinal;
+        let mut sstable_reader = self.sstable_reader_block(block_addr)?;
+        while sstable_reader.advance()? {
             if sstable_reader.key() == key_bytes {
                 return Ok(Some(term_ord));
             }
@@ -178,22 +215,32 @@ impl<TSSTable: SSTable> Dictionary<TSSTable> {
     /// Regardless of whether the term is found or not,
     /// the buffer may be modified.
     pub fn ord_to_term(&self, ord: TermOrdinal, bytes: &mut Vec<u8>) -> io::Result<bool> {
-        let mut sstable_reader = self.sstable_reader()?;
-        bytes.clear();
-        for _ in 0..(ord + 1) {
-            if !sstable_reader.advance().unwrap_or(false) {
+        // find block in which the term would be
+        let block_addr = self.sstable_index.get_block_with_ord(ord);
+        let first_ordinal = block_addr.first_ordinal;
+
+        // then search inside that block only
+        let mut sstable_reader = self.sstable_reader_block(block_addr)?;
+        for _ in first_ordinal..=ord {
+            if !sstable_reader.advance()? {
                 return Ok(false);
             }
         }
+        bytes.clear();
         bytes.extend_from_slice(sstable_reader.key());
         Ok(true)
     }

     /// Returns the number of terms in the dictionary.
     pub fn term_info_from_ord(&self, term_ord: TermOrdinal) -> io::Result<Option<TSSTable::Value>> {
-        let mut sstable_reader = self.sstable_reader()?;
-        for _ in 0..(term_ord + 1) {
-            if !sstable_reader.advance().unwrap_or(false) {
+        // find block in which the term would be
+        let block_addr = self.sstable_index.get_block_with_ord(term_ord);
+        let first_ordinal = block_addr.first_ordinal;
+
+        // then search inside that block only
+        let mut sstable_reader = self.sstable_reader_block(block_addr)?;
+        for _ in first_ordinal..=term_ord {
+            if !sstable_reader.advance()? {
                 return Ok(None);
             }
         }
@@ -202,10 +249,10 @@ impl<TSSTable: SSTable> Dictionary<TSSTable> {
     /// Looks up the value corresponding to the key.
     pub fn get<K: AsRef<[u8]>>(&self, key: K) -> io::Result<Option<TSSTable::Value>> {
-        if let Some(block_addr) = self.sstable_index.search_block(key.as_ref()) {
+        if let Some(block_addr) = self.sstable_index.get_block_with_key(key.as_ref()) {
            let mut sstable_reader = self.sstable_reader_block(block_addr)?;
            let key_bytes = key.as_ref();
-            while sstable_reader.advance().unwrap_or(false) {
+            while sstable_reader.advance()? {
                if sstable_reader.key() == key_bytes {
                    let value = sstable_reader.value().clone();
                    return Ok(Some(value));
@@ -217,10 +264,10 @@ impl<TSSTable: SSTable> Dictionary<TSSTable> {
     /// Looks up the value corresponding to the key.
     pub async fn get_async<K: AsRef<[u8]>>(&self, key: K) -> io::Result<Option<TSSTable::Value>> {
-        if let Some(block_addr) = self.sstable_index.search_block(key.as_ref()) {
+        if let Some(block_addr) = self.sstable_index.get_block_with_key(key.as_ref()) {
            let mut sstable_reader = self.sstable_reader_block_async(block_addr).await?;
            let key_bytes = key.as_ref();
-            while sstable_reader.advance().unwrap_or(false) {
+            while sstable_reader.advance()? {
                if sstable_reader.key() == key_bytes {
                    let value = sstable_reader.value().clone();
                    return Ok(Some(value));
@@ -259,3 +306,192 @@ impl<TSSTable: SSTable> Dictionary<TSSTable> {
         Ok(())
     }
 }
#[cfg(test)]
mod tests {
use std::ops::Range;
use std::sync::{Arc, Mutex};
use common::OwnedBytes;
use super::Dictionary;
use crate::MonotonicU64SSTable;
#[derive(Debug)]
struct PermissionedHandle {
bytes: OwnedBytes,
allowed_range: Mutex<Range<usize>>,
}
impl PermissionedHandle {
fn new(bytes: Vec<u8>) -> Self {
let bytes = OwnedBytes::new(bytes);
PermissionedHandle {
allowed_range: Mutex::new(0..bytes.len()),
bytes,
}
}
fn restrict(&self, range: Range<usize>) {
*self.allowed_range.lock().unwrap() = range;
}
}
impl common::HasLen for PermissionedHandle {
fn len(&self) -> usize {
self.bytes.len()
}
}
impl common::file_slice::FileHandle for PermissionedHandle {
fn read_bytes(&self, range: Range<usize>) -> std::io::Result<OwnedBytes> {
let allowed_range = self.allowed_range.lock().unwrap();
if !allowed_range.contains(&range.start) || !allowed_range.contains(&(range.end - 1)) {
return Err(std::io::Error::new(
std::io::ErrorKind::Other,
format!("invalid range, allowed {allowed_range:?}, requested {range:?}"),
));
}
Ok(self.bytes.slice(range))
}
}
fn make_test_sstable() -> (Dictionary<MonotonicU64SSTable>, Arc<PermissionedHandle>) {
let mut builder = Dictionary::<MonotonicU64SSTable>::builder(Vec::new()).unwrap();
// this makes 256k keys, enough to fill multiple blocks.
for elem in 0..0x3ffff {
let key = format!("{elem:05X}").into_bytes();
builder.insert_cannot_fail(&key, &elem);
}
let table = builder.finish().unwrap();
let table = Arc::new(PermissionedHandle::new(table));
let slice = common::file_slice::FileSlice::new(table.clone());
let dictionary = Dictionary::<MonotonicU64SSTable>::open(slice).unwrap();
// if the last block is id 0, tests are meaningless
assert_ne!(dictionary.sstable_index.locate_with_ord(u64::MAX), 0);
assert_eq!(dictionary.num_terms(), 0x3ffff);
(dictionary, table)
}
#[test]
fn test_ord_term_conversion() {
let (dic, slice) = make_test_sstable();
let block = dic.sstable_index.get_block_with_ord(100_000);
slice.restrict(block.byte_range);
let mut res = Vec::new();
// middle of a block
assert!(dic.ord_to_term(100_000, &mut res).unwrap());
assert_eq!(res, format!("{:05X}", 100_000).into_bytes());
assert_eq!(dic.term_info_from_ord(100_000).unwrap().unwrap(), 100_000);
assert_eq!(dic.get(&res).unwrap().unwrap(), 100_000);
assert_eq!(dic.term_ord(&res).unwrap().unwrap(), 100_000);
// start of a block
assert!(dic.ord_to_term(block.first_ordinal, &mut res).unwrap());
assert_eq!(res, format!("{:05X}", block.first_ordinal).into_bytes());
assert_eq!(
dic.term_info_from_ord(block.first_ordinal)
.unwrap()
.unwrap(),
block.first_ordinal
);
assert_eq!(dic.get(&res).unwrap().unwrap(), block.first_ordinal);
assert_eq!(dic.term_ord(&res).unwrap().unwrap(), block.first_ordinal);
// end of a block
let ordinal = block.first_ordinal - 1;
let new_range = dic.sstable_index.get_block_with_ord(ordinal).byte_range;
slice.restrict(new_range);
assert!(dic.ord_to_term(ordinal, &mut res).unwrap());
assert_eq!(res, format!("{:05X}", ordinal).into_bytes());
assert_eq!(dic.term_info_from_ord(ordinal).unwrap().unwrap(), ordinal);
assert_eq!(dic.get(&res).unwrap().unwrap(), ordinal);
assert_eq!(dic.term_ord(&res).unwrap().unwrap(), ordinal);
// before first block
// 1st block must be loaded for key-related operations
let block = dic.sstable_index.get_block_with_ord(0);
slice.restrict(block.byte_range);
assert!(dic.get(&b"$$$").unwrap().is_none());
assert!(dic.term_ord(&b"$$$").unwrap().is_none());
// after last block
// last block must be loaded for ord related operations
let ordinal = 0x40000 + 10;
let new_range = dic.sstable_index.get_block_with_ord(ordinal).byte_range;
slice.restrict(new_range);
assert!(!dic.ord_to_term(ordinal, &mut res).unwrap());
assert!(dic.term_info_from_ord(ordinal).unwrap().is_none());
// last block isn't required to be loaded for key related operations
slice.restrict(0..0);
assert!(dic.get(&b"~~~").unwrap().is_none());
assert!(dic.term_ord(&b"~~~").unwrap().is_none());
}
#[test]
fn test_range() {
let (dic, slice) = make_test_sstable();
let start = dic
.sstable_index
.get_block_with_key(b"10000")
.unwrap()
.byte_range;
let end = dic
.sstable_index
.get_block_with_key(b"18000")
.unwrap()
.byte_range;
slice.restrict(start.start..end.end);
let mut stream = dic.range().ge(b"10000").lt(b"18000").into_stream().unwrap();
for i in 0x10000..0x18000 {
assert!(stream.advance());
assert_eq!(stream.term_ord(), i);
assert_eq!(stream.value(), &i);
assert_eq!(stream.key(), format!("{i:05X}").into_bytes());
}
assert!(!stream.advance());
// verify that limiting the number of results reduces the data read
slice.restrict(start.start..(end.end - 1));
let mut stream = dic
.range()
.ge(b"10000")
.lt(b"18000")
.limit(0xfff)
.into_stream()
.unwrap();
for i in 0x10000..0x10fff {
assert!(stream.advance());
assert_eq!(stream.term_ord(), i);
assert_eq!(stream.value(), &i);
assert_eq!(stream.key(), format!("{i:05X}").into_bytes());
}
// there might be more successful elements after, though how many is undefined
slice.restrict(0..slice.bytes.len());
let mut stream = dic.stream().unwrap();
for i in 0..0x3ffff {
assert!(stream.advance());
assert_eq!(stream.term_ord(), i);
assert_eq!(stream.value(), &i);
assert_eq!(stream.key(), format!("{i:05X}").into_bytes());
}
assert!(!stream.advance());
}
}
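The new tests above also document the intended usage: ordinal and key lookups now read only the block that can contain the answer. A hedged round-trip sketch over a `Dictionary<MonotonicU64SSTable>` like the one built by `make_test_sstable` (the `tantivy_sstable` crate path is an assumption; adjust to wherever `Dictionary` lives):

```rust
use tantivy_sstable::{Dictionary, MonotonicU64SSTable};

// Round-trip sketch grounded in the tests above; every call only touches the
// sstable block containing the requested ordinal or key.
fn round_trip(dict: &Dictionary<MonotonicU64SSTable>) -> std::io::Result<()> {
    let mut bytes = Vec::new();
    assert!(dict.ord_to_term(100_000, &mut bytes)?); // ord -> term bytes
    assert_eq!(dict.term_ord(&bytes)?, Some(100_000)); // term -> ord
    assert_eq!(dict.get(&bytes)?, Some(100_000)); // term -> value
    Ok(())
}
```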

View File

@@ -3,7 +3,7 @@ use std::ops::Range;
 use serde::{Deserialize, Serialize};

-use crate::{common_prefix_len, SSTableDataCorruption};
+use crate::{common_prefix_len, SSTableDataCorruption, TermOrdinal};

 #[derive(Default, Debug, Serialize, Deserialize)]
 pub struct SSTableIndex {
@@ -11,15 +11,61 @@ pub struct SSTableIndex {
 }

 impl SSTableIndex {
+    /// Load an index from its binary representation
     pub fn load(data: &[u8]) -> Result<SSTableIndex, SSTableDataCorruption> {
         ciborium::de::from_reader(data).map_err(|_| SSTableDataCorruption)
     }

-    pub fn search_block(&self, key: &[u8]) -> Option<BlockAddr> {
+    /// Get the [`BlockAddr`] of the requested block.
+    pub(crate) fn get_block(&self, block_id: usize) -> Option<BlockAddr> {
         self.blocks
-            .iter()
-            .find(|block| &block.last_key_or_greater[..] >= key)
-            .map(|block| block.block_addr.clone())
+            .get(block_id)
+            .map(|block_meta| block_meta.block_addr.clone())
     }
+    /// Get the block id of the block that would contain `key`.
+    ///
+    /// Returns None if `key` is lexicographically after the last key recorded.
+    pub(crate) fn locate_with_key(&self, key: &[u8]) -> Option<usize> {
+        let pos = self
+            .blocks
+            .binary_search_by_key(&key, |block| &block.last_key_or_greater);
+        match pos {
+            Ok(pos) => Some(pos),
+            Err(pos) => {
+                if pos < self.blocks.len() {
+                    Some(pos)
+                } else {
+                    // after end of last block: no block matches
+                    None
+                }
+            }
+        }
+    }
+
+    /// Get the [`BlockAddr`] of the block that would contain `key`.
+    ///
+    /// Returns None if `key` is lexicographically after the last key recorded.
+    pub fn get_block_with_key(&self, key: &[u8]) -> Option<BlockAddr> {
+        self.locate_with_key(key).and_then(|id| self.get_block(id))
+    }
+
+    pub(crate) fn locate_with_ord(&self, ord: TermOrdinal) -> usize {
+        let pos = self
+            .blocks
+            .binary_search_by_key(&ord, |block| block.block_addr.first_ordinal);
+        match pos {
+            Ok(pos) => pos,
+            // Err(0) can't happen as the sstable starts with ordinal zero
+            Err(pos) => pos - 1,
+        }
+    }
+
+    /// Get the [`BlockAddr`] of the block containing the `ord`-th term.
+    pub(crate) fn get_block_with_ord(&self, ord: TermOrdinal) -> BlockAddr {
+        // locate_with_ord always returns an index within range
+        self.get_block(self.locate_with_ord(ord)).unwrap()
+    }
 }
@@ -30,7 +76,7 @@ pub struct BlockAddr {
 }

 #[derive(Debug, Serialize, Deserialize)]
-struct BlockMeta {
+pub(crate) struct BlockMeta {
     /// Any byte string that is lexicographically greater or equal to
     /// the last key in the block,
     /// and yet strictly smaller than the first key in the next block.
@@ -98,26 +144,38 @@ mod tests {
     fn test_sstable_index() {
         let mut sstable_builder = SSTableIndexBuilder::default();
         sstable_builder.add_block(b"aaa", 10..20, 0u64);
-        sstable_builder.add_block(b"bbbbbbb", 20..30, 564);
+        sstable_builder.add_block(b"bbbbbbb", 20..30, 5u64);
         sstable_builder.add_block(b"ccc", 30..40, 10u64);
         sstable_builder.add_block(b"dddd", 40..50, 15u64);
         let mut buffer: Vec<u8> = Vec::new();
         sstable_builder.serialize(&mut buffer).unwrap();
         let sstable_index = SSTableIndex::load(&buffer[..]).unwrap();
         assert_eq!(
-            sstable_index.search_block(b"bbbde"),
+            sstable_index.get_block_with_key(b"bbbde"),
             Some(BlockAddr {
                 first_ordinal: 10u64,
                 byte_range: 30..40
             })
         );
+        assert_eq!(sstable_index.locate_with_key(b"aa").unwrap(), 0);
+        assert_eq!(sstable_index.locate_with_key(b"aaa").unwrap(), 0);
+        assert_eq!(sstable_index.locate_with_key(b"aab").unwrap(), 1);
+        assert_eq!(sstable_index.locate_with_key(b"ccc").unwrap(), 2);
+        assert!(sstable_index.locate_with_key(b"e").is_none());
+        assert_eq!(sstable_index.locate_with_ord(0), 0);
+        assert_eq!(sstable_index.locate_with_ord(1), 0);
+        assert_eq!(sstable_index.locate_with_ord(4), 0);
+        assert_eq!(sstable_index.locate_with_ord(5), 1);
+        assert_eq!(sstable_index.locate_with_ord(100), 3);
     }

     #[test]
     fn test_sstable_with_corrupted_data() {
         let mut sstable_builder = SSTableIndexBuilder::default();
         sstable_builder.add_block(b"aaa", 10..20, 0u64);
-        sstable_builder.add_block(b"bbbbbbb", 20..30, 564);
+        sstable_builder.add_block(b"bbbbbbb", 20..30, 5u64);
         sstable_builder.add_block(b"ccc", 30..40, 10u64);
         sstable_builder.add_block(b"dddd", 40..50, 15u64);
         let mut buffer: Vec<u8> = Vec::new();
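The `locate_with_ord` logic replaces a linear scan with `binary_search_by_key`; the `Err(pos) => pos - 1` arm is what maps an ordinal falling inside a block back to that block. A standalone illustration using the `first_ordinal` values from the test above (0, 5, 10, 15):

```rust
// Standalone illustration of the Err(pos) - 1 trick, using the test's
// first_ordinal values; not the crate's actual code.
fn main() {
    let first_ordinals = [0u64, 5, 10, 15];
    let locate = |ord: u64| match first_ordinals.binary_search(&ord) {
        Ok(pos) => pos,
        Err(pos) => pos - 1, // Err(0) impossible: the sstable starts at ordinal 0
    };
    assert_eq!(locate(0), 0);
    assert_eq!(locate(4), 0);
    assert_eq!(locate(5), 1);
    assert_eq!(locate(100), 3);
}
```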

View File

@@ -19,6 +19,7 @@ where
     automaton: A,
     lower: Bound<Vec<u8>>,
     upper: Bound<Vec<u8>>,
+    limit: Option<u64>,
 }

 fn bound_as_byte_slice(bound: &Bound<Vec<u8>>) -> Bound<&[u8]> {
@@ -41,6 +42,7 @@ where
             automaton,
             lower: Bound::Unbounded,
             upper: Bound::Unbounded,
+            limit: None,
         }
     }
@@ -68,24 +70,46 @@ where
         self
     }
+    /// Load no more data than what's required to get `limit`
+    /// matching entries.
+    ///
+    /// The resulting [`Streamer`] can still return marginally
+    /// more than `limit` elements.
+    pub fn limit(mut self, limit: u64) -> Self {
+        self.limit = Some(limit);
+        self
+    }
     /// Creates the stream corresponding to the range
     /// of terms defined using the `StreamerBuilder`.
     pub fn into_stream(self) -> io::Result<Streamer<'a, TSSTable, A>> {
         // TODO Optimize by skipping to the right first block.
         let start_state = self.automaton.start();
         let key_range = (
             bound_as_byte_slice(&self.lower),
             bound_as_byte_slice(&self.upper),
         );
+        let first_term = match &key_range.0 {
+            Bound::Included(key) | Bound::Excluded(key) => self
+                .term_dict
+                .sstable_index
+                .get_block_with_key(key)
+                .map(|block| block.first_ordinal)
+                .unwrap_or(0),
+            Bound::Unbounded => 0,
+        };
         let delta_reader = self
             .term_dict
-            .sstable_delta_reader_for_key_range(key_range)?;
+            .sstable_delta_reader_for_key_range(key_range, self.limit)?;
         Ok(Streamer {
             automaton: self.automaton,
             states: vec![start_state],
             delta_reader,
             key: Vec::new(),
-            term_ord: None,
+            term_ord: first_term.checked_sub(1),
             lower_bound: self.lower,
             upper_bound: self.upper,
         })
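Mirroring the dictionary test earlier, the new builder method composes with the existing range bounds: `limit` caps how much of the sstable is loaded, while the stream may still yield marginally more entries. A usage sketch under those assumptions (the `tantivy_sstable` crate path is assumed):

```rust
use tantivy_sstable::{Dictionary, MonotonicU64SSTable};

// Usage sketch: bound the range scan and cap the bytes loaded to roughly what
// 0xfff matching entries require; a few extra entries may still come through.
fn bounded_scan(dict: &Dictionary<MonotonicU64SSTable>) -> std::io::Result<()> {
    let mut stream = dict
        .range()
        .ge(b"10000")
        .lt(b"18000")
        .limit(0xfff)
        .into_stream()?;
    while stream.advance() {
        let _ = (stream.key(), stream.term_ord(), stream.value());
    }
    Ok(())
}
```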

View File

@@ -16,7 +16,7 @@ pub trait ValueReader: Default {
     /// Loads a block.
     ///
-    /// Returns the number of bytes that were written.
+    /// Returns the number of bytes that were read.
     fn load(&mut self, data: &[u8]) -> io::Result<usize>;
 }