Compare commits

...

331 Commits

Author SHA1 Message Date
Paul Masurel
72925c2bba Removed azure stuff 2023-03-03 21:47:31 +09:00
Paul Masurel
ed5a3b3172 Bumped murmurhash version 2023-03-03 21:24:32 +09:00
PSeitz
ca20bfa776 add date_histogram (#1900)
* add date_histogram

* add return result
2023-03-02 05:17:35 +01:00
PSeitz
faa706d804 add coerce option for text and number types (#1904)
* add coerce option for text and number types

Allows coercing the field type when indexing if the value's type does not match the schema.

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* add tests,add COERCE flag, include bool in coercion

---------

Co-authored-by: Paul Masurel <paul@quickwit.io>
2023-03-01 11:36:59 +01:00
PSeitz
850a0d7ae2 add agg benchmark for optional and multi value (#1916)
closes #1870
2023-03-01 17:01:52 +09:00
Paul Masurel
7fae4d98d7 Adapting for quickwit2 (#1912)
* Adapting tantivy to make it possible to be plugged to quickwit.

* Apply suggestions from code review

Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>

* Added unit test

---------

Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>
2023-03-01 16:27:46 +09:00
PSeitz
bc36458334 move buffer in front of dynamic dispatch (#1915)
Dynamic dispatch seems to be really expensive; moving the buffer in front of the dynamic dispatch reduces the number of calls into the dynamically dispatched collector.
2023-02-28 13:07:50 +08:00
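To make the trade-off concrete, here is a minimal sketch of the buffering idea with hypothetical names (tantivy's actual collector API differs): the hot path does a plain array write per document and only one virtual call per full block.

```rust
// Hypothetical block-collection trait; dynamic dispatch happens only at
// the block granularity.
trait BlockCollector {
    fn collect_block(&mut self, docs: &[u32]);
}

struct BufferedCollector {
    buffer: [u32; 64],
    len: usize,
    inner: Box<dyn BlockCollector>, // the dynamically dispatched collector
}

impl BufferedCollector {
    fn collect(&mut self, doc: u32) {
        // Cheap, fully static push; no virtual call on the per-doc path.
        self.buffer[self.len] = doc;
        self.len += 1;
        if self.len == self.buffer.len() {
            self.flush();
        }
    }

    fn flush(&mut self) {
        // One dynamic call amortized over up to 64 documents.
        self.inner.collect_block(&self.buffer[..self.len]);
        self.len = 0;
    }
}
```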
trinity-1686a
8a71e00da3 allow limiting the number of matched terms in range query (#1899) 2023-02-27 10:44:08 +01:00
PSeitz
e510f699c8 feat: add support for u64,i64,f64 fields in term aggregation (#1883)
* feat: add support for u64,i64,f64 fields in term aggregation

* hash enum values

* fix build

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

---------

Co-authored-by: Paul Masurel <paul@quickwit.io>
2023-02-27 15:04:41 +08:00
Paul Masurel
d25fc155b2 Making some of the column/termdict operations async-friendly (#1902) 2023-02-27 15:34:47 +09:00
Paul Masurel
8ea97e7d6b Minor refactoring preparing for getting columnar integrated in quickwit. (#1911) 2023-02-27 14:23:30 +09:00
Paul Masurel
0a726a0897 Added Empty ColumnIndex (#1910) 2023-02-27 13:59:22 +09:00
Paul Masurel
66ff53b0f4 Various minor code cleanup (#1909) 2023-02-27 13:48:34 +09:00
Paul Masurel
d002698008 Re-export of query grammar. (#1908) 2023-02-27 12:26:34 +09:00
Paul Masurel
c838aa808b Removed the extra nesting in unit test file (#1907) 2023-02-27 12:17:52 +09:00
Paul Masurel
06850719dc Renaming .values(DocId) to .values_for_doc(DocId) (#1906) 2023-02-27 12:15:13 +09:00
PSeitz
5f23bb7e65 switch to sparse collection for histogram (#1898)
* switch to sparse collection for histogram

Replaces the histogram's Vec collection with a hashmap. This approach works much better for sparse data and enables use cases like drill-downs (filter + small interval).
It is slower for dense cases (1.3x-2x slower). This can be alleviated with a specialized hashmap in the future.
closes #1704
closes #1370

* refactor, clippy

* fix bucket_pos overflow issue
2023-02-23 07:02:58 +01:00
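A minimal sketch of the dense-vs-sparse trade-off described above, using hypothetical shapes rather than tantivy's internal types: a dense Vec pays for every bucket position in the requested range, while a HashMap pays only for buckets that actually receive values.

```rust
use std::collections::HashMap;

// Sparse histogram collection: only buckets that are hit get an entry,
// which is what makes filtered drill-downs with tiny intervals cheap.
fn collect_sparse(values: impl Iterator<Item = f64>, interval: f64) -> HashMap<i64, u64> {
    let mut buckets: HashMap<i64, u64> = HashMap::new();
    for val in values {
        // Bucket key as in a histogram aggregation: floor(val / interval).
        let bucket_pos = (val / interval).floor() as i64;
        *buckets.entry(bucket_pos).or_insert(0) += 1;
    }
    buckets
}
```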
trinity-1686a
533ad99cd5 add PhrasePrefixQuery (#1842)
* add PhrasePrefixQuery
2023-02-22 11:18:33 +01:00
PSeitz
c7278b3258 remove schema in aggs (#1888)
* switch to ColumnType, move tests

* remove Schema dependency in agg
2023-02-22 04:50:28 +01:00
Paul Masurel
6b403e3281 Re-export of columnar 2023-02-22 11:23:54 +09:00
Paul Masurel
789cc8703e Adding unit test testing docfreq after merge (#1895) 2023-02-22 11:05:34 +09:00
Paul Masurel
e5098d9fe8 Moving test around reenabling tests that were disabled. (#1894) 2023-02-22 10:31:52 +09:00
Paul Masurel
f537334e4f Adding a write schema to columnar's merge operations. (#1884)
* Adding a write schema to columnar's merge operations.

* Added unit test checking min/max when columns are empty.

* CR comment

* Rename to value_type_to_column_type
2023-02-21 18:25:16 +09:00
Paul Masurel
e2aa5af075 Clippy warnings fixes (#1885) 2023-02-20 19:04:13 +09:00
Paul Masurel
02bebf4ff5 Cargo fmt 2023-02-20 09:40:04 +09:00
Paul Masurel
0274c982d5 Refactoring. (#1881)
`ColumnValues`, wrongly located in column_values/column.rs for historical
reasons, moves to column_values/mod.rs

u128 stuff gets its own directory like u64 stuff.
2023-02-17 21:57:14 +09:00
PSeitz
74bf60b4f7 implement SegmentAggregationCollector on bucket aggs (#1878) 2023-02-17 12:53:29 +01:00
PSeitz
bf1449b22d update examples for literate docs (#1880) 2023-02-17 11:48:22 +01:00
PSeitz
111f25a8f7 clippy (#1879)
* fix clippy

* fix clippy

* fmt
2023-02-17 11:34:21 +01:00
PSeitz
019db10e8e refactor aggregations (#1875)
* add specialized version for full cardinality

Pre Columnar
test aggregation::tests::bench::bench_aggregation_average_u64                                                            ... bench:   6,681,850 ns/iter (+/- 1,217,385)
test aggregation::tests::bench::bench_aggregation_average_u64_and_f64                                                    ... bench:  10,576,327 ns/iter (+/- 494,380)

Current
test aggregation::tests::bench::bench_aggregation_average_u64                                                            ... bench:  11,562,084 ns/iter (+/- 3,678,682)
test aggregation::tests::bench::bench_aggregation_average_u64_and_f64                                                    ... bench:  18,925,790 ns/iter (+/- 17,616,771)

Post Change
test aggregation::tests::bench::bench_aggregation_average_u64                                                            ... bench:   9,123,811 ns/iter (+/- 399,720)
test aggregation::tests::bench::bench_aggregation_average_u64_and_f64                                                    ... bench:  13,111,825 ns/iter (+/- 273,547)

* refactor aggregation collection

* add buffering collector
2023-02-16 13:15:16 +01:00
Paul Masurel
7423f99719 Issue/columnar for json (#1876)
Adding support for JSON fast field.
2023-02-16 20:38:32 +09:00
Alex Cole
f2f38c43ce Make BM25 scoring more flexible (#1855)
* Introduce Bm25StatisticsProvider to inject statistics

* fix formatting I accidentally changed
2023-02-16 19:14:12 +09:00
PSeitz
71f43ace1d fix dynamic dispatch regression for range queries (#1871) 2023-02-14 16:56:40 +01:00
PSeitz
347614c841 test error for avg agg on ip field (#1873)
closes #1835
2023-02-14 23:22:56 +08:00
Paul Masurel
097fd6138d Fix clippy comments (#1872) 2023-02-14 23:12:45 +09:00
PSeitz
01e5a22759 switch to new ff api (#1868) 2023-02-14 15:57:32 +08:00
Antoine Gauthier
b60b7d2afe fix(CI) enable coverage on doctest (#1839)
* fix(CI) enable coverage on doctest
⚠️ Marked as [unstable](https://github.com/taiki-e/cargo-llvm-cov/issues/2)
refs #1761

* remove obsolete CI directory
2023-02-14 16:42:44 +09:00
Yukun Guo
dfe4e95fde Make index compatible with virtual drives on Windows (#1843)
* Make index compatible with virtual drives on Windows

* Get rid of normpath
2023-02-14 16:41:48 +09:00
Paul Masurel
60cc2644d6 Fixing test_fail_on_flush_segment_but_one_worker_remains (#1869)
The new fast field code, based on columnar, had a larger minimum memory
footprint, causing the first document to trigger a flush of the segment
in this unit test.

This PR prevents the allocation of a large capacity for the different hashmap tables
used in the columnar writer.

Closes #1859
2023-02-14 16:09:42 +09:00
Paul Masurel
10bccac61b Bugfix in parse_into_milliseconds (#1867) 2023-02-14 15:06:40 +09:00
PSeitz
1cfb9ce59a improve range query performance (#1864)
fix RowId vs DocId naming
fixes #1863
2023-02-14 13:25:39 +09:00
trinity-1686a
539ff08a79 move DateTime to tantivy_common (#1861)
* move DateTime to tantivy_common

* resolve imports of columnar::DateTime as import of common::DateTime
2023-02-11 17:03:06 +01:00
PSeitz
dab93df94e fix benchmarks (#1862) 2023-02-11 15:44:47 +09:00
trinity-1686a
3120147a76 re-enable examples (#1860) 2023-02-10 14:51:37 +01:00
PSeitz
cbcafae04c fix: doc store for files larger than 4GB (#1856)
Fixes an issue in the skip list deserialization, which deserialized the byte start offset incorrectly as u32.
`get_doc` will fail for any docs that live in a block with a start offset larger than u32::MAX (~4GB).
This causes index corruption if a segment with a doc store larger than 4GB is merged.

tantivy version 0.19 is affected
2023-02-10 14:29:43 +01:00
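A simplified illustration of this class of bug (not the actual skip-list code): decoding a 64-bit start offset with a 32-bit read silently truncates once the doc store grows past u32::MAX bytes.

```rust
// Buggy: reads only 4 bytes, so offsets beyond ~4GB truncate/wrap.
fn read_start_offset_buggy(bytes: &[u8]) -> u64 {
    let mut buf = [0u8; 4];
    buf.copy_from_slice(&bytes[..4]);
    u32::from_le_bytes(buf) as u64
}

// Fixed: the offset is decoded at its full 64-bit width.
fn read_start_offset_fixed(bytes: &[u8]) -> u64 {
    let mut buf = [0u8; 8];
    buf.copy_from_slice(&bytes[..8]);
    u64::from_le_bytes(buf)
}
```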
PSeitz
36c6138e7f fix: auto downgrade index record option, instead of vint error (#1857)
Prev: thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: IoError(Custom { kind: InvalidData, error: "Reach end of buffer while reading VInt" })', src/main.rs:46:14
Now: Automatic downgrade to next available level
2023-02-10 13:45:23 +01:00
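A sketch of the downgrade logic, using an enum that mirrors tantivy's `IndexRecordOption` variants (the helper function is hypothetical): rather than erroring when a segment carries less information than the request asks for, fall back to the richest level that is actually available.

```rust
// Variant order encodes information richness, so the derived PartialOrd
// ranks Basic < WithFreqs < WithFreqsAndPositions.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum IndexRecordOption {
    Basic,                 // doc ids only
    WithFreqs,             // + term frequencies
    WithFreqsAndPositions, // + positions
}

fn downgrade(requested: IndexRecordOption, available: IndexRecordOption) -> IndexRecordOption {
    // Automatic downgrade to the next available level instead of a
    // deserialization error later on.
    if requested > available { available } else { requested }
}
```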
PSeitz
7a9befd18d fix sort order test for term aggregation (#1858)
fix sort order test for term aggregation
fix invalid request test
2023-02-10 10:26:58 +01:00
Paul Masurel
62c811df2b Added a columnar cli 2023-02-09 19:02:16 +01:00
PSeitz
03345f0aa2 fmt code, update lz4_flex (#1838)
formatting on nightly changed
2023-02-10 01:42:32 +09:00
Paul Masurel
b7bfa20e38 Fixed test performance. 2023-02-09 17:39:55 +01:00
Paul Masurel
db8583db75 Fixing unit test 2023-02-09 16:53:05 +01:00
trinity-1686a
1390834ae8 make Term::as_slice public (#1846) 2023-02-09 15:37:07 +01:00
trinity-1686a
3ac973bea4 fix invalid endianness in documentation (#1845)
* fix doc about term endianness

* rustfmt
2023-02-09 15:36:38 +01:00
Paul Masurel
405e2cf4d9 Merge with main 2023-02-09 14:28:57 +01:00
Paul Masurel
b63c6c27bc adding change from main 2023-02-09 14:18:46 +01:00
Paul Masurel
bd5eea9852 Integrated columnar work. 2023-02-09 13:14:31 +01:00
PSeitz
0f20787917 fix doc store cache docs (#1821)
* fix doc store cache docs

addresses an issue reported in #1820

* rename doc_store_cache_size
2023-01-23 07:06:49 +01:00
Paul Masurel
2874554ee4 Removed the sorting logic that forced column type to be sorted like (#1816)
* Removed the sorting logic that forced column type to be sorted like
ColumnTypes.

* add comments

Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>
2023-01-20 12:43:28 +01:00
PSeitz
cbc70a9eae Cargo.toml cleanup (#1817) 2023-01-20 12:30:35 +01:00
PSeitz
226d0f88bc add columnar to workspace (#1808) 2023-01-20 11:47:10 +01:00
Paul Masurel
9548570e88 Fixing broken test build 2023-01-20 18:18:32 +09:00
Paul Masurel
9a296b29b7 Renamed dense file to dense.rs 2023-01-20 17:22:25 +09:00
PSeitz
b31fd389d8 collect columns for merge (#1812)
* collect columns for merge

* return column_type from, fix visibility

* fix

Co-authored-by: Paul Masurel <paul@quickwit.io>
2023-01-20 07:58:29 +01:00
Paul Masurel
89cec79813 Make it possible to force a column type and intricate bugfix. (#1815) 2023-01-20 14:30:56 +09:00
PSeitz
d09d91a856 fix tests (#1813) 2023-01-19 23:41:21 +09:00
PSeitz
50d8a8bc32 Update README (#1804)
Some parts are outdated

As for the debugging tutorial: debugging is really easy now with VSCode, and there are plenty of other sources on debugging Rust
2023-01-19 18:09:45 +09:00
Paul Masurel
08919a2900 Improvement on the scalar / random bitpacker code. (#1781)
* Improvement on the scalar / random bitpacker code.

Added proptesting
Added simple benchmark
Added asserts and comments on the very non-trivial hidden contract
Removed the need for extra padding.

The last point introduces a small performance regression (~10%).

* Fixing unit tests
2023-01-19 18:09:13 +09:00
Lonre Wang
8ba333f1b4 Typo fix (#1803)
* Update text_options.rs

* Update src/schema/text_options.rs

Co-authored-by: Paul Masurel <paul@quickwit.io>
2023-01-19 17:56:05 +09:00
PSeitz
a2ca12995e update aggregation docs (#1807) 2023-01-19 09:52:47 +01:00
Paul Masurel
e3d504d833 Minor code cleanup (#1810) 2023-01-19 17:47:26 +09:00
Paul Masurel
5a42c5aae9 Add support for multivalues (#1809) 2023-01-19 16:55:01 +09:00
Paul Masurel
a86b104a40 Differentiating between str and bytes, + unit test 2023-01-19 14:38:12 +09:00
PSeitz
f9abd256b7 add ip addr to columnar (#1805) 2023-01-19 05:36:06 +01:00
Paul Masurel
9f42b6440a Completed unit test for dictionary encoded column 2023-01-19 12:15:27 +09:00
Paul Masurel
c723ed3f0b Columnar merge (#1806) 2023-01-19 11:52:27 +09:00
trinity-1686a
d72ea7d353 modify getters for sstable metadata (#1793)
* add way to get up to `limit` terms from sstable

* make some function of sstable load less data

* add some tests to sstable

* add tests on sstable dictionary

* fix some bugs with sstable
2023-01-18 14:42:55 +01:00
Paul Masurel
5180b612ef Removing the demuxer code (#1799) 2023-01-18 16:12:35 +09:00
PSeitz
f687b3a5aa start migrate Field to &str (#1772)
start migrate Field to &str in preparation of columnar
return Result for get_field
2023-01-18 16:12:07 +09:00
PSeitz
c4af63e588 add rename (#1797) 2023-01-18 13:28:37 +09:00
Adrien Guillo
4b343b3189 Merge pull request #1802 from quickwit-oss/guilload/clippy-fixes
Fix some Clippy warnings
2023-01-17 10:39:55 -05:00
Adrien Guillo
c51d9f9f83 Fix some Clippy warnings 2023-01-17 10:17:51 -05:00
Adrien Guillo
c9cb3d04bf Merge pull request #1788 from quickwit-oss/guilload/remove-std-dev-from-stats-agg
Remove standard deviation from stats aggregation
2023-01-16 23:16:36 -05:00
Adrien Guillo
0caaf13a90 Remove standard deviation from stats aggregation 2023-01-16 22:58:23 -05:00
Adrien Guillo
a59bd965cc Merge pull request #1794 from quickwit-oss/guilload/count-min-max-sum-aggs
Add count, min, max, and sum aggregations
2023-01-16 22:45:01 -05:00
Adrien Guillo
f2dad194ea Add count, min, max, and sum aggregations 2023-01-16 12:22:20 -05:00
Paul Masurel
25bad784ad Integrated fastfield codecs into columnar. (#1782)
Introduced asymmetric OptionalCodec / SerializableOptionalCodec
Removed cardinality from the columnar sstable.
Added DynamicColumn
Reorganized all files
Changed DenseCodec serialization logic.
Renamed methods to rank/select
Moved versioning footer to the columnar level
2023-01-16 17:24:49 +09:00
PSeitz
4bac945709 add ip field example (#1775) 2023-01-16 06:06:11 +01:00
trinity-1686a
16b704e190 make file_slice_for_range on sstable public (#1784) 2023-01-16 13:59:57 +09:00
PSeitz
6ca9a477f3 reuse stats for average (#1785)
* reuse stats for average

* fix count type
2023-01-13 23:32:27 +08:00
Shikhar Bhushan
2650111b76 EnableScoring::Disabled - optional Searcher (#1780) 2023-01-12 09:26:50 -05:00
PSeitz
1176555eff handle user input on get_docid_for_value_range (#1760)
* handle user input on get_docid_for_value_range

fixes #1757

* pass range as parameter
2023-01-12 14:20:16 +01:00
Adrien Guillo
f8d111a75e Merge pull request #1777 from quickwit-oss/guilload/ff-range-query-on-not-indexed-fields
Allow range queries via fast fields on non-indexed fields
2023-01-11 10:14:32 -05:00
Adrien Guillo
e17996f2fd Allow range queries via fast fields on non-indexed fields 2023-01-11 09:56:13 -05:00
Adrien Guillo
f3621c0487 Add license to tokenizer-api crate (#1778) 2023-01-11 05:26:41 +01:00
Adrien Guillo
14222a47a3 Fix typo (#1776) 2023-01-11 00:49:13 +09:00
Adam Reichold
8312c882a5 More cosmetic fixes for upcoming Clippy lints. (#1771) 2023-01-10 10:32:45 +01:00
Paul Masurel
7a8fce0ae7 Minor mini fixes 2023-01-10 14:15:30 +09:00
Michael Kleen
196e42f33e Add regex tokenizer (#1759)
This adds a regex tokenizer, which tokenizes the text by splitting on a
regex pattern.

Co-authored-by: Michael Kleen <mkleen@gmailw.com>
2023-01-10 13:38:37 +09:00
Adam Reichold
82a183bc2d Bump dependency on lru to from version 0.7.5 to version 0.9.0. (#1755) 2023-01-10 13:35:37 +09:00
dependabot[bot]
3090d49615 Update base64 requirement from 0.20.0 to 0.21.0 (#1769)
Updates the requirements on [base64](https://github.com/marshallpierce/rust-base64) to permit the latest version.
- [Release notes](https://github.com/marshallpierce/rust-base64/releases)
- [Changelog](https://github.com/marshallpierce/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/marshallpierce/rust-base64/compare/v0.20.0...v0.21.0)

---
updated-dependencies:
- dependency-name: base64
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-10 13:35:05 +09:00
PSeitz
7c6cc818ae enable range query on fast field for u64 compatible types (#1762)
* enable range query on fast field for u64 compatible types

* rename, update benches
2023-01-10 04:08:26 +01:00
PSeitz
514d23a20c move tokenizer API to seperate crate (#1767)
closes #1766

Finding tantivy tokenizers is currently a frustrating experience, since
they need to be updated for each tantivy version. That's unnecessary since
the API is rather stable anyway.
2023-01-09 06:37:38 +01:00
Paul Masurel
4f9efe654c Support for columnar (#1734)
* Added support for dynamic fast field.

See README for more information.

* Apply suggestions from code review

Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>
2023-01-07 17:37:00 +09:00
Adam Reichold
1afa5bf3db Make construction of LevenshteinAutomatonBuilder for FuzzyTermQuery instances lazy. (#1756) 2023-01-06 12:44:49 +09:00
PSeitz
07a51eb7c8 refactor multivalue fastfield, refactor range query (#1749)
Introduce MakeZero trait, remove make_zero from FastValue
Merge two multivalue fastfield implementations into one
prepare range query on fastfield for different types
2023-01-05 12:09:50 +01:00
Adam Reichold
2080c370c2 Enable usage of FuzzyTermQuery for specific fields via QueryParser (#1750)
* Make nightly Clippy mostly happy.

* Document how to produce TermSetQuery queries using QueryParser.

* Enable construction of queries using FuzzyTermQuery via the QueryParser

* Use FxHashMap instead of HashMap in the QueryParser as these hash tables are not exposed to DoS attacks.

* Use a struct instead of a tuple to improve readability.
2023-01-04 18:11:27 +09:00
Daw-Chih Liou
b22f96624e doc: update comments in the faceted search example (#1737)
* doc: update comments in the faceted search example

* chore: format
2023-01-02 11:07:30 +01:00
pinkforest(she/her)
b78dc5e313 Bump prettytables (#1746) 2022-12-31 15:01:39 +01:00
Paul Masurel
3f915925af Fixing unit tests 2022-12-27 12:02:16 +09:00
Paul Masurel
9c5fef5af7 Fixing sstable proptest (#1743) 2022-12-26 16:29:33 +09:00
Paul Masurel
9948a84ebe Simplifies the count_ones definition. (#1742) 2022-12-26 16:08:01 +09:00
PSeitz
45156fd869 use group_by in translate_codec_idx_to_original_id (#1736) 2022-12-26 06:13:29 +01:00
Paul Masurel
bc959006fa Ooops. Removing ordered_floats. 2022-12-22 19:50:34 +09:00
Paul Masurel
7385a8f80c Supporting PartialCmp in VectorColumn. (#1735)
* Supporting PartialCmp in VectorColumn.
* Apply suggestions from code review

Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>
2022-12-22 17:47:25 +09:00
Paul Masurel
13b89cba17 Adding inlines. 2022-12-22 14:29:41 +09:00
Hasnain Lakhani
f4804ce2f5 Adjust spelling of "returns" in docs for DisjunctionMaxQuery (#1733) 2022-12-22 14:04:07 +09:00
Paul Masurel
2a6d1eaf78 Added missing license. 2022-12-22 12:47:43 +09:00
Paul Masurel
540a9972bd Support for NotNaN in fast fields 2022-12-22 12:28:25 +09:00
Paul Masurel
bb48c3e488 Refactoring to prepare for the addition of dynamic fast field (#1730)
* Refactoring to prepare for the addition of dynamic fast field

- Exposing insert_key / insert_value
- Renamed SSTable::{Reader/Writer}-> SSTable::{ValueReader/ValueWriter}
- Added a generic Dictionary object in the sstable crate
- Removing the TermDictionary wrapper from tantivy, relying directly on
  an alias of the generic Dictionary object.
- dropped the use of byteorder in sstable.
- Stopped scanning / reading the entire dictionary when streaming a range.

* Added a benchmark for streaming sstable ranges.

* CR comments.

Rename deserialize_u64 -> deserialize_vint_u64

* Removed needless allocation, split serialize into serialize and clear.
2022-12-22 12:25:46 +09:00
Paul Masurel
3339a3ec05 Removed feature(quickwit) in tantivy-common. 2022-12-22 10:19:57 +09:00
Paul Masurel
f39165e1e7 Moving FileSlice to tantivy-common (#1729) 2022-12-21 16:35:11 +09:00
Paul Masurel
32cb1d22da Removed AsyncIoResult. (#1728) 2022-12-21 16:01:17 +09:00
Paul Masurel
4a6bf50e78 Clippy 2022-12-21 15:43:34 +09:00
PSeitz
2ac1cc2fc0 add sparse codec (#1723)
* add sparse codec

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* add the -1 u16 fix for metadata num_vals

* add dense block encoding to sparse codec

* add comment, refactor u16 reading

Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-12-20 15:30:33 +01:00
PSeitz
f9171a3981 fix clippy (#1725)
* fix clippy

* fix clippy fastfield codecs

* fix clippy bitpacker

* fix clippy common

* fix clippy stacker

* fix clippy sstable

* fmt
2022-12-20 07:30:06 +01:00
PSeitz
a2cf6a79b4 Sparse dense index (#1716)
* add dense codec

* benchmark fix and important optimisation

* move code to DenseIndexBlock

improve benchmark

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

* extend benchmarks

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-12-13 07:50:09 +01:00
Paul Masurel
f6e87a5319 Cargo fmt 2022-12-13 12:30:40 +09:00
Paul Masurel
f9971e15fe Fixing unit test with sstable test. 2022-12-13 12:22:44 +09:00
PSeitz
3cdc8e7472 pass index info to serialize (#1719) 2022-12-13 04:20:31 +01:00
dependabot[bot]
fbb0f8b55d Update base64 requirement from 0.13.0 to 0.20.0 (#1720)
Updates the requirements on [base64](https://github.com/marshallpierce/rust-base64) to permit the latest version.
- [Release notes](https://github.com/marshallpierce/rust-base64/releases)
- [Changelog](https://github.com/marshallpierce/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/marshallpierce/rust-base64/compare/v0.13.0...v0.20.0)

---
updated-dependencies:
- dependency-name: base64
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-13 11:46:23 +09:00
Paul Masurel
136a8f4124 Isolating sstable and stacker in independent crates. (#1718)
Both crates will be used in the new (optional + dynamic) fastfield work.
2022-12-13 11:44:17 +09:00
PSeitz
5d4535de83 Changelog fix (#1717) 2022-12-12 14:28:42 +09:00
PSeitz
2c50b02eb3 Fix max bucket limit in histogram (#1703)
* Fix max bucket limit in histogram

The max bucket limit in histogram was broken, since some code introduced temporary filtering of buckets, which resulted in an incorrect increment of the bucket count.
The provided solution covers more scenarios, but some scenarios remain unhandled (see #1702).

* Apply suggestions from code review

Co-authored-by: Paul Masurel <paul@quickwit.io>

Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-12-12 04:40:15 +01:00
PSeitz
509adab79d Bump version (#1715)
* group workspace deps

* update cargo.toml

* revert tant version

* chore: Release
2022-12-12 04:39:43 +01:00
PSeitz
96c93a6ba3 Merge pull request #1700 from quickwit-oss/PSeitz-patch-1
Update CHANGELOG.md
2022-12-02 16:31:11 +01:00
boraarslan
495824361a Move split_full_path to Schema (#1692) 2022-11-29 20:56:13 +09:00
PSeitz
485a8f507e Update CHANGELOG.md 2022-11-28 15:41:31 +01:00
PSeitz
1119e59eae prepare fastfield format for null index (#1691)
* prepare fastfield format for null index
* add format version for fastfield
* Update fastfield_codecs/src/compact_space/mod.rs
* switch to variable size footer
* serialize delta of end
2022-11-28 17:15:24 +09:00
PSeitz
ee1f2c1f28 add aggregation support for date type (#1693)
* add aggregation support for date type
fixes #1332

* serialize key_as_string as rfc3339 in date histogram
* update docs
* enable date for range aggregation
2022-11-28 09:12:08 +09:00
PSeitz
600548fd26 Merge pull request #1694 from quickwit-oss/dependabot/cargo/zstd-0.12
Update zstd requirement from 0.11 to 0.12
2022-11-25 05:48:59 +01:00
PSeitz
9929c0c221 Merge pull request #1696 from quickwit-oss/dependabot/cargo/env_logger-0.10.0
Update env_logger requirement from 0.9.0 to 0.10.0
2022-11-25 03:28:10 +01:00
dependabot[bot]
f53e65648b Update env_logger requirement from 0.9.0 to 0.10.0
Updates the requirements on [env_logger](https://github.com/rust-cli/env_logger) to permit the latest version.
- [Release notes](https://github.com/rust-cli/env_logger/releases)
- [Changelog](https://github.com/rust-cli/env_logger/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-cli/env_logger/compare/v0.9.0...v0.10.0)

---
updated-dependencies:
- dependency-name: env_logger
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-24 20:07:52 +00:00
PSeitz
0281b22b77 update create_in_ram docs (#1695) 2022-11-24 17:30:09 +01:00
dependabot[bot]
a05c184830 Update zstd requirement from 0.11 to 0.12
Updates the requirements on [zstd](https://github.com/gyscos/zstd-rs) to permit the latest version.
- [Release notes](https://github.com/gyscos/zstd-rs/releases)
- [Commits](https://github.com/gyscos/zstd-rs/commits)

---
updated-dependencies:
- dependency-name: zstd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-23 20:15:32 +00:00
Paul Masurel
0b40a7fe43 Added a expand_dots JsonObjectOptions. (#1687)
Related with quickwit#2345.
2022-11-21 23:03:00 +09:00
trinity-1686a
e758080465 add support for TermSetQuery in query parser (#1683) 2022-11-17 16:49:49 +01:00
Paul Masurel
2a39289a1b Handle escaped dot in json path in the QueryParser. (#1682) 2022-11-16 07:18:34 +09:00
Adam Reichold
ca6231170e Make the built-in stop word lists selectable via the Language enum already used by the Stemmer filter. (#1671) 2022-11-15 17:40:25 +09:00
PSeitz
eda6e5a10a Merge pull request #1681 from quickwit-oss/ip_range_query_multi
remove Column from MultiValuedU128FastFieldReader
2022-11-15 09:27:46 +08:00
Pascal Seitz
8641155cbb remove column from MultiValuedU128FastFieldReader 2022-11-14 18:49:15 +08:00
PSeitz
9a090ed994 Merge pull request #1659 from quickwit-oss/ip_range_query_multi
add support for ip range query on multivalue fastfields
2022-11-14 15:17:41 +08:00
Pascal Seitz
b7d0dd154a fmt 2022-11-14 14:49:15 +08:00
PSeitz
ce10fab20f Apply suggestions from code review
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-11-14 14:21:53 +08:00
Pascal Seitz
e034328a8b Improve position_to_docid, refactor, add tests 2022-11-14 14:21:53 +08:00
Pascal Seitz
f811d1616b add support for ip range query on multivalue fastfields 2022-11-14 14:21:52 +08:00
PSeitz
c665b16ff0 Merge pull request #1672 from quickwit-oss/allow_range_without_indexed
Allow range query on fastfield without INDEXED
2022-11-14 12:45:12 +08:00
PSeitz
3b5f810051 Merge pull request #1677 from quickwit-oss/switch_to_u32
switch total_num_val to u32
2022-11-14 12:01:40 +08:00
trinity-1686a
5765c261aa allow warming up of the full posting list (#1673)
* allow warming up of the full posting list

* cargo fmt
2022-11-14 10:27:56 +09:00
Pascal Seitz
fb9f03118d switch total_num_val to u32 2022-11-11 17:35:52 +08:00
PSeitz
55a9d808d4 Merge pull request #1674 from quickwit-oss/u128_codec_header
add header with codec type for u128
2022-11-11 13:47:51 +08:00
Pascal Seitz
32166682b3 add header deser test 2022-11-11 13:28:12 +08:00
Pascal Seitz
e6acf8f76d add header with codec type for u128 2022-11-11 11:52:17 +08:00
Pascal Seitz
9e8a0c2cca Allow range query on fastfield without INDEXED 2022-11-10 15:56:08 +08:00
Paul Masurel
3edf0a2724 Using the manual reload policy in IndexWriter. (#1667) 2022-11-09 11:20:41 +01:00
Paul Masurel
8ca12a5683 Added stop word filter to CHANGELOG.md 2022-11-09 17:00:45 +09:00
Adam Reichold
a4b759d2fe Include stop word lists from Lucene and the Snowball project (#1666) 2022-11-09 16:57:35 +09:00
PSeitz
3e9c806890 Merge pull request #1665 from quickwit-oss/fix_num_vals
fix num_vals on u128 value index after merge
2022-11-07 21:46:02 +08:00
Pascal Seitz
c69a873dd3 fix num_vals on value index after merge 2022-11-07 21:05:21 +08:00
PSeitz
666afcf641 Merge pull request #1663 from PSeitz/fix_clippy
fix clippy
2022-11-07 18:11:20 +08:00
Pascal Seitz
38ad46e580 fix clippy 2022-11-07 16:09:55 +08:00
PSeitz
e948889f4c Merge pull request #1662 from quickwit-oss/fix_num_vals
fix num_vals in multivalue index after merge
2022-11-07 15:57:32 +08:00
Pascal Seitz
6e636c9cea fix num_vals in multivalue index after merge 2022-11-07 15:00:52 +08:00
PSeitz
5a610efbc1 Merge pull request #1661 from quickwit-oss/upgrade_criterion
update criterion to 0.4
2022-11-04 14:45:34 +08:00
Pascal Seitz
500a0d5e48 update criterion to 0.4 2022-11-04 13:26:29 +08:00
PSeitz
509a265659 add docstore version (#1652)
* add docstore version

closes #1589

* assert for docstore version
2022-11-04 10:19:16 +09:00
PSeitz
5b2cea1b97 Merge pull request #1656 from quickwit-oss/multival_offset_index
move multivalue index to own file
2022-11-02 14:03:06 +08:00
PSeitz
a5a80ffaea Update fastfield_codecs/src/column.rs
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-11-02 06:37:27 +01:00
PSeitz
0f98d91a39 Merge pull request #1646 from quickwit-oss/no_score_calls
No score calls if score is not requested
2022-11-01 20:09:32 +08:00
PSeitz
2af6b01c17 Update src/query/boolean_query/boolean_weight.rs
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-11-01 16:13:00 +08:00
Adam Reichold
c32ab66bbd Small improvements to StopWorldFilter (#1657)
* Do not copy the whole set of stop words for each stream

* Make construction of StopWordFilter more flexible.
2022-11-01 16:47:34 +09:00
PSeitz
3f3a6f9990 Merge pull request #1653 from quickwit-oss/faster_hash
switch to fx hashmap
2022-11-01 14:53:18 +08:00
Pascal Seitz
83325d8f3f move multivalue index to own file
start_doc parameter in positions to docids
2022-11-01 10:36:13 +08:00
PSeitz
4e46f4f8c4 Merge pull request #1649 from adamreichold/split-compound-words
RFC: Add dictionary-based SplitCompoundWords token filter.
2022-10-27 17:12:48 +08:00
Pascal Seitz
43df356010 rename to docset 2022-10-27 16:53:38 +08:00
PSeitz
6647362464 Merge pull request #1648 from adamreichold/stemmer-todo-alloc
Avoid unconditional allocation in StemmerTokenStream.
2022-10-27 16:50:41 +08:00
Pascal Seitz
279b1b28d3 switch to fx hashmap 2022-10-27 16:19:59 +08:00
PSeitz
7a80851e36 Merge pull request #1645 from quickwit-oss/ip_field_range_query
add ip range query benchmark, add seek behaviour
2022-10-27 16:13:52 +08:00
Adam Reichold
cd952429d2 Add dictionary-based SplitCompoundWords token filter. 2022-10-27 08:30:33 +02:00
PSeitz
d777c964da Merge pull request #1650 from adamreichold/fnv-rustc-hash
Replace FNV by rustc-hash
2022-10-27 12:11:26 +08:00
Adam Reichold
bbb058d976 Replace FNV by rustc-hash
Both constructions have similar goals, but rustc-hash is better suited for
contemporary CPUs as it works one word at a time instead of byte by byte.
2022-10-27 00:35:09 +02:00
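Usage-wise the swap is a drop-in replacement; a minimal sketch using the real `rustc_hash` crate (the counting function itself is hypothetical): `FxHashMap` is a std `HashMap` with the FxHash hasher, which mixes a `usize`-sized word per iteration instead of one byte per iteration like FNV.

```rust
use rustc_hash::FxHashMap;

// Term counting with FxHashMap; FxHashMap::default() builds the map
// with the Fx hasher, no code changes needed beyond the type.
fn count_terms<'a>(terms: impl Iterator<Item = &'a str>) -> FxHashMap<&'a str, u64> {
    let mut counts: FxHashMap<&str, u64> = FxHashMap::default();
    for term in terms {
        *counts.entry(term).or_insert(0) += 1;
    }
    counts
}
```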
Adam Reichold
5f7d027a52 Avoid unconditional allocation in StemmerTokenStream.
This fixes the TODO in two ways: if the stemmer already yields an owned string,
it is used directly as the new text of the token. Otherwise, a temporary buffer
is used to copy the stemmed text (just as before), which is then swapped into
the token to reuse its existing buffer.
2022-10-26 18:11:15 +02:00
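A minimal sketch of the two paths just described. `Token`, `stem`, and `stem_token` are simplified stand-ins, but rust-stemmers' `Stemmer::stem` does return a `Cow<str>` like the placeholder below.

```rust
use std::borrow::Cow;

struct Token {
    text: String,
}

// Placeholder stemmer: strips a trailing 's' (Owned) or returns the
// input unchanged (Borrowed).
fn stem(text: &str) -> Cow<'_, str> {
    match text.strip_suffix('s') {
        Some(stripped) => Cow::Owned(stripped.to_string()),
        None => Cow::Borrowed(text),
    }
}

fn stem_token(token: &mut Token, scratch: &mut String) {
    let owned = match stem(&token.text) {
        // Owned result: move it into the token directly, no copy needed.
        Cow::Owned(s) => Some(s),
        // Borrowed result: copy it into the reusable scratch buffer
        // while the borrow of token.text is still live.
        Cow::Borrowed(s) => {
            scratch.clear();
            scratch.push_str(s);
            None
        }
    };
    match owned {
        Some(s) => token.text = s,
        // Swap the scratch buffer in; the token's old allocation becomes
        // the next scratch buffer, so neither path allocates per token.
        None => std::mem::swap(&mut token.text, scratch),
    }
}
```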
Pascal Seitz
dfab201191 for_each_docset to iterate without score 2022-10-26 17:25:05 +08:00
PSeitz
0c2bd36fe3 Panic on duplicate field names (#1647)
fixes #1601
2022-10-26 16:17:33 +09:00
Pascal Seitz
af839753e0 No score calls if score is not requested 2022-10-26 12:18:35 +08:00
Pascal Seitz
fec2b63571 improve bench by adding more blanks in compact space 2022-10-25 22:09:01 +08:00
Pascal Seitz
6213ea476a pass positions parameter 2022-10-25 17:44:51 +08:00
Pascal Seitz
5e159c26bf add ip range query benchmark, add seek behaviour 2022-10-25 15:57:19 +08:00
PSeitz
a5e59ab598 Merge pull request #1644 from quickwit-oss/get_val_u32
switch get_val() to u32
2022-10-24 19:30:03 +08:00
Pascal Seitz
e772d3170d switch get_val() to u32
Fixes #1638
2022-10-24 19:05:57 +08:00
PSeitz
8c2ba7bd55 Merge pull request #1637 from quickwit-oss/ip_field_range_query
add range query via ip fast field
2022-10-24 18:10:47 +08:00
Pascal Seitz
02328b0151 fix proptest 2022-10-24 17:46:06 +08:00
Pascal Seitz
7cc775256c add comments, rename 2022-10-24 17:08:37 +08:00
Pascal Seitz
07b40f8b8b add proptest 2022-10-24 16:52:55 +08:00
PSeitz
9b6b6be5b9 Apply suggestions from code review
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-10-24 16:00:38 +08:00
Pascal Seitz
6bb73a527f add range query via ip fast field 2022-10-24 16:00:38 +08:00
PSeitz
03885d0f3c Merge pull request #1643 from quickwit-oss/range_query_parser
allow more characters in range query
2022-10-24 15:09:47 +08:00
Pascal Seitz
f2e5135870 allow more characters in range query
closes #1642
2022-10-21 18:05:15 +08:00
Paul Masurel
c24157f28b Bumping version format. (#1640)
The docstore format has changed in a non-compatible manner.
2022-10-21 15:35:35 +09:00
PSeitz
873382cdcb Merge pull request #1639 from quickwit-oss/num_vals_u32
switch num_vals() to u32
2022-10-21 12:36:50 +08:00
Pascal Seitz
791350091c switch num_vals() to u32
fixes #1630
2022-10-20 19:44:28 +08:00
Paul Masurel
483b1d13d4 Added unit test for long tokens (#1635)
* Bugfix on long tokens and multivalue text fields.

Fixes a minor bug for the strong edge case
in which a tokenizer would emit tokens where
the last token does not cover the last position.

More importantly, this adds unit tests.

Closes #1634

* Update src/indexer/segment_writer.rs

Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>

Co-authored-by: PSeitz <PSeitz@users.noreply.github.com>
2022-10-20 15:05:37 +09:00
PSeitz
8de7fa9d95 Merge pull request #1631 from quickwit-oss/high_positions
add test for phrase search on multi text field
2022-10-20 10:26:00 +08:00
Paul Masurel
94313b62f8 Hotfix issue/1629 - position broken (#1633)
* Bugfix position broken.

For Field with several FieldValues, with a
value that contained no token at all, the token position
was reinitialized to 0.

As a result, PhraseQueries can show some false positives.
In addition, after the computation of the position delta, we can
underflow u32, and end up with gigantic delta.

We haven't been able to actually explain the bug in 1629, but it
is assumed that in some corner case these delta can cause a panic.

Closes #1629
2022-10-20 11:03:55 +09:00
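A simplified illustration of the failure mode described above (not the actual indexing code): position deltas are taken against the previously emitted position, so if the running position is wrongly reinitialized to 0 for a value that produced no tokens, the next delta underflows u32.

```rust
fn delta(prev_position: u32, position: u32) -> u32 {
    // After the bogus reset, position < prev_position: this wraps in
    // release builds, yielding a gigantic delta.
    position.wrapping_sub(prev_position)
}

fn main() {
    assert_eq!(delta(10, 12), 2); // normal case
    assert_eq!(delta(10, 0), u32::MAX - 9); // after the bogus reset to 0
}
```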
Pascal Seitz
f2b2628feb add test for phrase search on multi text field 2022-10-19 16:29:56 +08:00
PSeitz
449f595832 Merge pull request #1628 from quickwit-oss/skip_index_deser
faster skipindex deserialization, larger blocksize on sort
2022-10-19 11:05:20 +08:00
PSeitz
c9235df059 Merge pull request #1627 from quickwit-oss/ip_field_range_query
add range query handling for ip via term dictionary
2022-10-19 10:53:00 +08:00
Pascal Seitz
a4485f7611 faster skipindex deserialization, larger blocksize on sort 2022-10-18 19:32:23 +08:00
Pascal Seitz
1082ff60f9 add range query handling for ip via term dictionary
Since IPs are mapped monotonically, we can use the term dictionary for range queries.
2022-10-18 13:08:27 +08:00
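A sketch of the monotonic mapping that makes this work (the helper name is hypothetical): every address is normalized to an IPv6-mapped u128. Because the mapping preserves order, the numeric order of addresses matches the lexicographic order of their big-endian term bytes, so an IP range query becomes a plain range scan over the sorted term dictionary.

```rust
use std::net::{IpAddr, Ipv6Addr};

fn ip_to_u128(ip: IpAddr) -> u128 {
    // IPv4 addresses are embedded into the IPv6 space, so v4 and v6
    // values share one total order.
    let v6: Ipv6Addr = match ip {
        IpAddr::V4(v4) => v4.to_ipv6_mapped(),
        IpAddr::V6(v6) => v6,
    };
    // Big-endian bytes of this value sort the same way the number does.
    u128::from(v6)
}
```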
PSeitz
491854155c Merge pull request #1625 from quickwit-oss/index_ip_field
index ip field
2022-10-18 11:18:17 +08:00
Christoph Herzog
96c3d54ac7 fix: Fix power of two computation on 32bit architectures (#1624)
The current `compute_previous_power_of_two()` implementation used for
TermHashmap takes and returns `usize`, but actually only works
correctly on 64-bit architectures (i.e. usize == u64).

On other architectures, the leading_zeros computation runs on the wrong
type (it must be u64) and leads to overflows.

Fixed by simply computing the leading_zeros based on a u64 value.
2022-10-18 11:55:02 +09:00
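A sketch of the fix as described, keeping the same helper name: the bit math runs on u64 so `leading_zeros` always sees a 64-bit value, regardless of the target's pointer width.

```rust
fn compute_previous_power_of_two(n: usize) -> usize {
    debug_assert!(n > 0);
    // Index of the highest set bit, computed on u64 even on 32-bit targets.
    let msb = 63 - (n as u64).leading_zeros();
    // Largest power of two <= n, e.g. 5 -> 4, 8 -> 8.
    (1u64 << msb) as usize
}
```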
Pascal Seitz
6800fdec9d add indexing for ip field
Closes #1595
2022-10-18 10:07:48 +08:00
PSeitz
c9cf9c952a Merge pull request #1614 from quickwit-oss/remove_superfluous_steps
refactor Term
2022-10-17 18:25:31 +08:00
Pascal Seitz
024e53a99c remove truncate 2022-10-17 12:14:35 +08:00
Pascal Seitz
8d75e451bd fix truncate, remove mutable access from term 2022-10-17 12:14:35 +08:00
Pascal Seitz
fcfd76ec55 refactor Term
fixes some issues with Term
Remove duplicate calls to truncate or resize
Replace magic number 5 with a constant
Enforce minimum size of 5 for metadata
Fix broken truncate docs
use constructor instead of new + set calls
normalize constructor stack
replace assert on internal behavior; fixes #1585
2022-10-17 12:14:34 +08:00
PSeitz
6b7b1cc4fa Merge pull request #1623 from quickwit-oss/remove_unused_buffer
remove unused buffer
2022-10-14 20:36:00 +08:00
Pascal Seitz
129f7422f5 remove unused buffer 2022-10-14 20:01:10 +08:00
PSeitz
f39cce2c8b Merge pull request #1622 from quickwit-oss/term_aggregation
add term aggregation clarification
2022-10-14 18:09:18 +08:00
PSeitz
d2478fac8a Merge pull request #1621 from quickwit-oss/changelog
update CHANGELOG
2022-10-14 18:08:57 +08:00
Pascal Seitz
952b048341 add term aggregation clarification 2022-10-14 16:12:19 +08:00
PSeitz
80f9596ec8 Merge pull request #1611 from quickwit-oss/remove_token_stream_alloc
remove tokenstream vec alloc
2022-10-14 15:12:30 +08:00
Pascal Seitz
84f9e77e1d update CHANGELOG 2022-10-14 15:10:33 +08:00
PSeitz
a602c248fb Merge pull request #1590 from waywardmonkeys/fix-doc-warnings-quickwit
Fix missing doc warnings when enabling feature "quickwit".
2022-10-14 14:09:25 +08:00
PSeitz
4b9d1fe828 Merge pull request #1620 from quickwit-oss/fix_fieldnorms_indexing
Fix missing fieldnorm indexing
2022-10-14 13:41:38 +08:00
Pascal Seitz
63bc390b02 Fix missing fieldnorm indexing
Fixes broken search (no results) with BM25 for u64, i64, f64, bool, bytes and date after deletion and merge.
There were no fieldnorms recorded for those fields. After a merge, InvertedIndexReader::total_num_tokens returns 0 (the sum over the fieldnorms is 0). BM25 does not work when total_num_tokens is 0.
Fixes #1617
2022-10-14 12:44:40 +08:00
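To see why a zero total_num_tokens breaks scoring, here is the standard BM25 term-frequency factor (a textbook sketch, not tantivy's exact code): with no fieldnorms recorded, the average field length derived from total_num_tokens is 0, the length normalization divides by zero, and the scores degenerate, so searches return nothing.

```rust
fn bm25_tf_factor(term_freq: f32, field_len: f32, avg_field_len: f32) -> f32 {
    const K1: f32 = 1.2;
    const B: f32 = 0.75;
    // Length normalization: divides by avg_field_len, which is
    // total_num_tokens / num_docs -- 0 when fieldnorms are missing.
    let norm = K1 * (1.0 - B + B * field_len / avg_field_len);
    term_freq * (K1 + 1.0) / (term_freq + norm)
}
```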
Paul Masurel
07393c2fa0 Attempt to fix race condition in test. (#1619)
Close #1550
2022-10-14 10:56:37 +09:00
PSeitz
77a415cbe4 rename NothingRecorder to DocIdRecorder (#1615) 2022-10-13 15:43:40 +09:00
PSeitz
4b4c231bba Merge pull request #1612 from quickwit-oss/no_panic_please
return Error instead panic in fastfields
2022-10-11 18:33:00 +08:00
PSeitz
11d3409286 add missing docs for fastfield_codecs crate (#1613)
closes #1603
2022-10-11 18:54:24 +09:00
Pascal Seitz
9cb8cfbea8 return Error instead panic in fastfields
fixes #1572
2022-10-11 14:15:22 +08:00
PSeitz
8b69aab0fc avoid prepare_doc allocation (#1610)
avoid prepare_doc allocation, ~10% more throughput in the best case
2022-10-11 14:15:55 +09:00
PSeitz
3650d1f36a Merge pull request #1553 from quickwit-oss/ip_field
ip field
2022-10-11 13:09:47 +08:00
Pascal Seitz
2efebdb1bb remove tokenstream vec alloc 2022-10-11 10:30:56 +08:00
François Massot
e443ca63aa Merge pull request #1608 from quickwit-oss/nigel/serialise-bytes-as-b64-#2042
Serialise bytes as base64 strings instead of arrays.
2022-10-10 11:51:23 +02:00
Pascal Seitz
5c9cbee29d handle IpV4 serialization case 2022-10-07 19:52:00 +08:00
Pascal Seitz
b2ca83a93c switch to ipv6, add monotonic_mapping tests 2022-10-07 18:47:55 +08:00
Nigel Andrews
3b189080d4 Use raw string literals in tests 2022-10-07 12:28:25 +02:00
Nigel Andrews
00a6586efe Replaced String::serialize for serializer.serialize_str 2022-10-07 11:55:05 +02:00
Pascal Seitz
b9b913510e fmt 2022-10-07 16:56:19 +08:00
PSeitz
534b1d33c3 use ipv6
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-10-07 16:56:00 +08:00
PSeitz
f465173872 Apply suggestions from code review
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-10-07 16:55:53 +08:00
Pascal Seitz
96315df20d use idx part only for positions_to_docid 2022-10-07 16:54:04 +08:00
Pascal Seitz
9a1609d364 add test 2022-10-07 16:25:01 +08:00
Pascal Seitz
39f4e58450 improve comment 2022-10-07 16:25:01 +08:00
Pascal Seitz
a8a36b62cd enable test 2022-10-07 16:25:01 +08:00
Pascal Seitz
226a49338f add StrictlyMonotonicFn 2022-10-07 16:25:01 +08:00
Pascal Seitz
2864bf7123 use serializer for u128 2022-10-07 16:25:01 +08:00
Pascal Seitz
5171ff611b serialize ip as u128, add test for positions_to_docid 2022-10-07 16:25:01 +08:00
Pascal Seitz
e50e74acf8 remove u128 type 2022-10-07 16:25:01 +08:00
Pascal Seitz
0b86658389 rename ip addr, use buffer 2022-10-07 16:25:01 +08:00
Pascal Seitz
5d6602a8d9 mark null handling TODO 2022-10-07 16:25:01 +08:00
Pascal Seitz
4d29ff4d01 finalize ip addr rename 2022-10-07 16:25:01 +08:00
Pascal Seitz
cdc8e3a8be group montonic mapping and inverse
fix mapping inverse
remove ip indexing
add get_between_vals test
2022-10-07 16:25:01 +08:00
Pascal Seitz
67f453b534 rename to iter_gen 2022-10-07 16:25:01 +08:00
Pascal Seitz
787a37bacf expect instead of unwrap 2022-10-07 16:25:01 +08:00
Pascal Seitz
f5039f1846 remove roaring 2022-10-07 16:25:01 +08:00
Pascal Seitz
eeb1f19093 rename to iter_gen 2022-10-07 16:25:01 +08:00
Pascal Seitz
087beaf328 remove null handling 2022-10-07 16:25:01 +08:00
Pascal Seitz
309449dba3 rename to IpAddr 2022-10-07 16:25:01 +08:00
Pascal Seitz
5a76e6c5d3 fix get_between_vals forwarding
fix get_between_vals forwarding in monotonicmapping column by adding an additional conversion function Output->Input
2022-10-07 16:25:01 +08:00
Pascal Seitz
c8713a01ed use iter api 2022-10-07 16:25:01 +08:00
Pascal Seitz
6113e0408c remove comment 2022-10-07 16:25:01 +08:00
Pascal Seitz
400a20b7af add ip field
add u128 multivalue reader and writer
add ip to schema
add ip writers, handle merge
2022-10-07 16:25:01 +08:00
PSeitz
5f565e77de Merge pull request #1604 from quickwit-oss/replace_cbor
replace cbor with cborium
2022-10-07 14:42:55 +08:00
Pascal Seitz
516e60900d remove unwrap 2022-10-07 14:22:37 +08:00
Pascal Seitz
36e1c79f37 replace cbor with cborium
closes #1526
2022-10-07 13:23:39 +08:00
Bruce Mitchener
c2f1c250f9 doc: Remove reference to Searcher pool. (#1598)
The pool of searchers was removed in 23fe73a6 as part of #1411.
2022-10-06 00:04:11 +09:00
Bruce Mitchener
c694bc039a Fix missing doc warnings when enabling feature "quickwit". 2022-10-05 20:17:10 +07:00
PSeitz
2063f1717f Merge pull request #1591 from quickwit-oss/ff_refact
disable linear codec for multivalue values
2022-10-05 19:39:36 +08:00
Pascal Seitz
d742275048 renames 2022-10-05 19:16:49 +08:00
PSeitz
b9f06bc287 Update src/fastfield/multivalued/mod.rs
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-10-05 19:09:19 +08:00
Pascal Seitz
8b42c4c126 disable linear codec for multivalue value index
don't materialize index column on merge
use simpler chain() variant
2022-10-05 19:09:17 +08:00
PSeitz
7905965800 Merge pull request #1594 from quickwit-oss/flat_map_with_buffer
Removing alloc on all .next() in MultiValueColumn
2022-10-05 18:34:15 +08:00
Pascal Seitz
f60a551890 add flat_map_with_buffer to Iterator trait 2022-10-05 17:44:26 +08:00
Paul Masurel
7baa6e3ec5 Removing alloc on all .next() in MultiValueColumn 2022-10-05 17:12:06 +09:00
PSeitz
2100ec5d26 Merge pull request #1593 from waywardmonkeys/doc-improvements
Documentation improvements.
2022-10-05 15:50:08 +08:00
Bruce Mitchener
b3bf9a5716 Documentation improvements. 2022-10-05 14:18:10 +07:00
Paul Masurel
0dc8c458e0 Flaky unit test. (#1592) 2022-10-05 16:15:48 +09:00
Nigel Andrews
e5043d78d2 added a couple of tests + make fmt 2022-10-04 12:52:44 +02:00
Nigel Andrews
6d0bb82bd2 Fix issue 1576: serialize bytes as base64 strings 2022-10-04 12:18:13 +02:00
trinity-1686a
5945dbf0bd change format for store to make it faster with small documents (#1569)
* use new format for docstore blocks

* move index to end of block

it makes writing the block faster due to one less memcopy
2022-10-04 09:58:55 +02:00
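A sketch of the layout idea (a hypothetical writer, not the actual docstore code): with the per-doc offset index at the end of the block, document bodies stream straight into the output buffer and the index is appended once at flush time, avoiding the extra memcopy needed to make room for a leading index.

```rust
fn write_block(docs: &[&[u8]], out: &mut Vec<u8>) {
    let mut offsets = Vec::with_capacity(docs.len());
    let mut pos = 0u32;
    for doc in docs {
        offsets.push(pos);
        out.extend_from_slice(doc); // bodies written exactly once
        pos += doc.len() as u32;
    }
    // The index goes at the end of the block, followed by its entry
    // count so a reader can locate it from the block's tail.
    for offset in &offsets {
        out.extend_from_slice(&offset.to_le_bytes());
    }
    out.extend_from_slice(&(offsets.len() as u32).to_le_bytes());
}
```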
PSeitz
4cf911d56a Merge pull request #1587 from quickwit-oss/no_get_val_in_serialize
remove get_val in serialization
2022-10-04 12:56:48 +08:00
Pascal Seitz
0f5cff762f move enumerate and remove computation 2022-10-04 12:30:19 +08:00
Pascal Seitz
6d9a123cf2 remove get_val in serialization
remove get_val in serialization and mark as unimplemented!()
replace get_val with iter in linear codec
remove MultivalueStartIndexRandomSeeker
replace MultivalueStartIndexIter with closure
Sample 100 values in linear codec
2022-10-04 12:01:25 +08:00
PSeitz
0f4a47816a Merge pull request #1582 from quickwit-oss/faster_sorted_field_values
use groupby instead of vec allocation
2022-10-04 09:36:24 +08:00
Pascal Seitz
b062ab2196 use groupby instead of vec allocation 2022-10-04 09:26:26 +08:00
Bruce Mitchener
a9d2f3db23 Tantivy requires Rust 1.62 or later. (#1583)
Tantivy needs the `total_cmp` feature to compile, which was stabilized
in Rust 1.62.
2022-10-03 18:31:07 +09:00
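Why the MSRV bump matters in one line: `f64::total_cmp`, stabilized in Rust 1.62, provides a total order over floats (including NaN), so float values can be sorted with an ordinary comparator instead of an `Option`-returning `partial_cmp`.

```rust
fn sort_f64(values: &mut [f64]) {
    // total_cmp never panics and orders NaN deterministically.
    values.sort_by(|a, b| a.total_cmp(b));
}
```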
Bruce Mitchener
44e03791f9 Fix warnings when doc'ing private items. (#1579)
This also fixes a couple of typos, but plenty remain!
2022-10-03 14:24:00 +09:00
Bruce Mitchener
2d23763e9f Use u64::from boolean more. (#1580)
This case is inverted from the previous cases fixed.

This is from nightly clippy.
2022-10-03 14:17:50 +09:00
Bruce Mitchener
a24ae8d924 clippy: Fix needless-borrow warnings. (#1581)
These show on nightly clippy.
2022-10-03 14:15:09 +09:00
PSeitz
927dff5262 Merge pull request #1578 from quickwit-oss/dead_code
remove dead indexing code
2022-10-03 11:25:10 +08:00
Pascal Seitz
a695edcc95 remove dead indexing code 2022-10-03 09:44:02 +08:00
Paul Masurel
b4b4f3fa73 Removing default features for zstd (#1574) 2022-09-30 13:02:46 +09:00
PSeitz
b50e4b7c20 Merge pull request #1566 from quickwit-oss/fix_docstore_sorting
fix docstore settings for temp docstore
2022-09-30 10:10:36 +08:00
PSeitz
f8686ab1ec improve comments
Co-authored-by: Paul Masurel <paul@quickwit.io>
2022-09-30 10:06:34 +08:00
PSeitz
2fe42719d8 Merge pull request #1570 from quickwit-oss/no_sort_on_multi
validate index settings on create
2022-09-30 09:17:03 +08:00
PSeitz
fadd784a25 log improvements (#1564) 2022-09-30 09:39:26 +09:00
Pascal Seitz
0e94213af0 validate index settings on create 2022-09-29 18:58:09 +08:00
PSeitz
0da2a2e70d Merge pull request #1567 from quickwit-oss/dependabot/cargo/tantivy-fst-0.4.0
Update tantivy-fst requirement from 0.3.0 to 0.4.0
2022-09-29 10:00:16 +08:00
dependabot[bot]
0bcdf3cbbf Update tantivy-fst requirement from 0.3.0 to 0.4.0
Updates the requirements on [tantivy-fst](https://github.com/tantivy-search/fst) to permit the latest version.
- [Release notes](https://github.com/tantivy-search/fst/releases)
- [Commits](https://github.com/tantivy-search/fst/commits)

---
updated-dependencies:
- dependency-name: tantivy-fst
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-28 20:50:43 +00:00
Pascal Seitz
8f647b817f fix docstore settings for temp docstore
fixes #1565
2022-09-28 17:53:59 +08:00
trinity-1686a
a86b0df6f4 Add query matching terms in a set (#1539) 2022-09-28 09:43:18 +02:00
Bruce Mitchener
f842da758c Move ArcBytes,WeakArcBytes to mmap_directory. (#1555)
When building without default features (so without mmap, etc),
there are some warnings about unused things. This fixes the
ones related to `ArcBytes` and `WeakArcBytes`, which are only
used with the `mmap_directory` code.
2022-09-27 09:57:28 +09:00
Bruce Mitchener
97ccd6d712 Avoid slicing a string in DocParsingError. (#1559)
Fixes #1339.
2022-09-26 20:27:15 +09:00
Bruce Mitchener
cb252a42af docs: "associated to" -> "associated with" (#1557)
This reads better this way.
2022-09-26 20:23:37 +09:00
Bruce Mitchener
d9609dd6b6 POLLING_INTERVAL needn't be pub. (#1556)
This is only used within the file watcher and is const, so it
can't be configured.
2022-09-26 20:22:55 +09:00
Bruce Mitchener
f03667d967 Remove references to /cpp directory. (#1560)
This was removed in 2018, so these should be fine to remove now.
2022-09-26 20:22:28 +09:00
PSeitz
10f10a322f Merge pull request #1554 from quickwit-oss/prepare_ip_field
prepare for ip field
2022-09-26 16:34:24 +08:00
Pascal Seitz
f757471077 prepare for ip field 2022-09-26 16:27:35 +08:00
PSeitz
21e0adefda use binary search instead of linear for get_val in merge (#1548)
* use binary search instead of linear for get_val in merge

* use partition_point
2022-09-26 09:42:33 +09:00
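A sketch of the change with a hypothetical function shape: for multivalue fast fields, a sorted offsets array records where each doc's values start, and `slice::partition_point` binary-searches for the doc that owns a given value position in O(log n) instead of scanning linearly.

```rust
fn position_to_doc(offsets: &[u64], position: u64) -> usize {
    // offsets is sorted ascending; doc d owns positions
    // offsets[d]..offsets[d + 1]. partition_point finds the first index
    // where the predicate flips from true to false.
    offsets.partition_point(|&start| start <= position) - 1
}

fn main() {
    // Doc 0 owns positions 0..3, doc 1 owns 3..5.
    assert_eq!(position_to_doc(&[0, 3, 5], 4), 1);
}
```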
Bruce Mitchener
ea8e6d7b1d Tidy up clippy config. (#1547)
* Checking cfg_attr is no longer necessary.
* Don't need multiple `clippy::` prefixes on a name.
2022-09-26 09:37:55 +09:00
PSeitz
dac7da780e Merge pull request #1545 from waywardmonkeys/remove-some-refs
clippy: Remove borrows that the compiler will do.
2022-09-23 15:33:23 +08:00
PSeitz
20c87903b2 fix multivalue ff index creation regression (#1543)
Fixes the multivalue ff regression by avoiding `get_val`. Line::train calls get_val repeatedly, but the get_val implementation on Column for multivalues is very slow. The fix is to use the iterator instead. The long-term fix is to remove get_val access in serialization.

Old Code

test fastfield::bench::bench_multi_value_ff_merge_few_segments                                                           ... bench:  46,103,960 ns/iter (+/- 2,066,083)
test fastfield::bench::bench_multi_value_ff_merge_many_segments                                                          ... bench:  83,073,036 ns/iter (+/- 4,373,615)
test fastfield::bench::bench_multi_value_ff_merge_many_segments_log_merge                                                ... bench:  64,178,576 ns/iter (+/- 1,466,700)

Current

running 3 tests
test fastfield::multivalued::bench::bench_multi_value_ff_merge_few_segments                                              ... bench:  57,379,523 ns/iter (+/- 3,220,787)
test fastfield::multivalued::bench::bench_multi_value_ff_merge_many_segments                                             ... bench:  90,831,688 ns/iter (+/- 1,445,486)
test fastfield::multivalued::bench::bench_multi_value_ff_merge_many_segments_log_merge                                   ... bench: 158,313,264 ns/iter (+/- 28,823,250)

With Fix

running 3 tests
test fastfield::multivalued::bench::bench_multi_value_ff_merge_few_segments                                              ... bench:  57,635,671 ns/iter (+/- 2,707,361)
test fastfield::multivalued::bench::bench_multi_value_ff_merge_many_segments                                             ... bench:  91,468,712 ns/iter (+/- 11,393,581)
test fastfield::multivalued::bench::bench_multi_value_ff_merge_many_segments_log_merge                                   ... bench:  73,909,138 ns/iter (+/- 15,846,097)
2022-09-23 15:36:29 +09:00
PSeitz
f9c3947803 Merge pull request #1546 from waywardmonkeys/use-ux-from-bool
Use u8::from(bool), u64::from(bool).
2022-09-23 09:06:24 +08:00
Bruce Mitchener
e9a384bb15 Use u8::from(bool), u64::from(bool). 2022-09-22 22:44:53 +07:00
Bruce Mitchener
d231671fe2 clippy: Remove borrows that the compiler will do.
This started showing up with clippy in rust 1.64.
2022-09-22 22:38:23 +07:00
trinity-1686a
fa3d786a2f Add support for deleting all documents matching query (#1535)
* add support for deleting all documents matching query

#1494
2022-09-22 21:26:09 +09:00
Paul Masurel
75aafeeb9b Added a function to deep clone RamDirectory. (#1544) 2022-09-22 12:04:02 +02:00
PSeitz
6f066c7f65 Merge pull request #1541 from quickwit-oss/add_bench
add benchmarks for multivalued fastfield merge
2022-09-22 15:28:00 +08:00
Pascal Seitz
22e56aaee3 add benchmarks for multivalued fastfield merge 2022-09-22 11:25:41 +08:00
Paul Masurel
d641979127 Minor refactor of fast fields (#1538) 2022-09-21 12:55:03 +09:00
341 changed files with 30398 additions and 13367 deletions

.gitattributes

@@ -1 +0,0 @@
-cpp/* linguist-vendored

.github/workflows/coverage.yml

@@ -2,9 +2,9 @@ name: Coverage
 on:
   push:
-    branches: [ main ]
+    branches: [main]
   pull_request:
-    branches: [ main ]
+    branches: [main]
 jobs:
   coverage:
@@ -16,7 +16,7 @@ jobs:
     - uses: Swatinem/rust-cache@v2
     - uses: taiki-e/install-action@cargo-llvm-cov
     - name: Generate code coverage
-      run: cargo +nightly llvm-cov --all-features --workspace --lcov --output-path lcov.info
+      run: cargo +nightly llvm-cov --all-features --workspace --doctests --lcov --output-path lcov.info
     - name: Upload coverage to Codecov
       uses: codecov/codecov-action@v3
       continue-on-error: true

.github/workflows/test.yml

@@ -48,7 +48,7 @@ jobs:
     strategy:
       matrix:
         features: [
-          { label: "all", flags: "mmap,brotli-compression,lz4-compression,snappy-compression,zstd-compression,failpoints" },
+          { label: "all", flags: "mmap,stopwords,brotli-compression,lz4-compression,snappy-compression,zstd-compression,failpoints" },
           { label: "quickwit", flags: "mmap,quickwit,failpoints" }
         ]

.gitignore

@@ -9,7 +9,6 @@ target/release
 Cargo.lock
 benchmark
 .DS_Store
-cpp/simdcomp/bitpackingbenchmark
 *.bk
 .idea
 trace.dat

CHANGELOG.md

@@ -1,10 +1,37 @@
 Tantivy 0.19
 ================================
 #### Bugfixes
 - Fix missing fieldnorms for u64, i64, f64, bool, bytes and date [#1620](https://github.com/quickwit-oss/tantivy/pull/1620) (@PSeitz)
 - Fix interpolation overflow in linear interpolation fastfield codec [#1480](https://github.com/quickwit-oss/tantivy/pull/1480) (@PSeitz @fulmicoton)
-- Updated [Date Field Type](https://github.com/quickwit-oss/tantivy/pull/1396)
-The `DateTime` type has been updated to hold timestamps with microseconds precision.
-`DateOptions` and `DatePrecision` have been added to configure Date fields. The precision is used to hint on fast values compression. Otherwise, seconds precision is used everywhere else (i.e. terms, indexing).
-- Remove Searcher pool and make `Searcher` cloneable.
 #### Features/Improvements
+- Add support for `IN` in queryparser, e.g. `field: IN [val1 val2 val3]` [#1683](https://github.com/quickwit-oss/tantivy/pull/1683) (@trinity-1686a)
+- Skip score calculation when no scoring is required [#1646](https://github.com/quickwit-oss/tantivy/pull/1646) (@PSeitz)
+- Limit fast fields to u32 (`get_val(u32)`) [#1644](https://github.com/quickwit-oss/tantivy/pull/1644) (@PSeitz)
+- The `DateTime` type has been updated to hold timestamps with microseconds precision.
+`DateOptions` and `DatePrecision` have been added to configure Date fields. The precision is used to hint on fast values compression. Otherwise, seconds precision is used everywhere else (i.e. terms, indexing) [#1396](https://github.com/quickwit-oss/tantivy/pull/1396) (@evanxg852000)
+- Add IP address field type [#1553](https://github.com/quickwit-oss/tantivy/pull/1553) (@PSeitz)
+- Add boolean field type [#1382](https://github.com/quickwit-oss/tantivy/pull/1382) (@boraarslan)
+- Remove Searcher pool and make `Searcher` cloneable. (@PSeitz)
+- Validate settings on create [#1570](https://github.com/quickwit-oss/tantivy/pull/1570) (@PSeitz)
+- Detect and apply gcd on fastfield codecs [#1418](https://github.com/quickwit-oss/tantivy/pull/1418) (@PSeitz)
+- Doc store
+  - use separate thread to compress block store [#1389](https://github.com/quickwit-oss/tantivy/pull/1389) [#1510](https://github.com/quickwit-oss/tantivy/pull/1510) (@PSeitz @fulmicoton)
+  - Expose doc store cache size [#1403](https://github.com/quickwit-oss/tantivy/pull/1403) (@PSeitz)
+  - Enable compression levels for doc store [#1378](https://github.com/quickwit-oss/tantivy/pull/1378) (@PSeitz)
+  - Make block size configurable [#1374](https://github.com/quickwit-oss/tantivy/pull/1374) (@kryesh)
+- Make `tantivy::TantivyError` cloneable [#1402](https://github.com/quickwit-oss/tantivy/pull/1402) (@PSeitz)
+- Add support for phrase slop in query language [#1393](https://github.com/quickwit-oss/tantivy/pull/1393) (@saroh)
+- Aggregation
+  - Add aggregation support for date type [#1693](https://github.com/quickwit-oss/tantivy/pull/1693) (@PSeitz)
+  - Add support for keyed parameter in range and histogram aggregations [#1424](https://github.com/quickwit-oss/tantivy/pull/1424) (@k-yomo)
+  - Add aggregation bucket limit [#1363](https://github.com/quickwit-oss/tantivy/pull/1363) (@PSeitz)
+- Faster indexing
+  - [#1610](https://github.com/quickwit-oss/tantivy/pull/1610) (@PSeitz)
+  - [#1594](https://github.com/quickwit-oss/tantivy/pull/1594) (@PSeitz)
+  - [#1582](https://github.com/quickwit-oss/tantivy/pull/1582) (@PSeitz)
+  - [#1611](https://github.com/quickwit-oss/tantivy/pull/1611) (@PSeitz)
+- Added a pre-configured stop word filter for various languages [#1666](https://github.com/quickwit-oss/tantivy/pull/1666) (@adamreichold)

 Tantivy 0.18
 ================================
@@ -22,6 +49,10 @@ Tantivy 0.18
 - Add terms aggregation (@PSeitz)
 - Add support for zstd compression (@kryesh)
+
+Tantivy 0.18.1
+================================
+- Hotfix: positions computation. #1629 (@fmassot, @fulmicoton, @PSeitz)

 Tantivy 0.17
 ================================
View File

@@ -1,6 +1,6 @@
[package]
name = "tantivy"
version = "0.18.0"
version = "0.19.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT"
categories = ["database-implementations", "data-structures"]
@@ -11,19 +11,20 @@ repository = "https://github.com/quickwit-oss/tantivy"
readme = "README.md"
keywords = ["search", "information", "retrieval"]
edition = "2021"
rust-version = "1.62"
[dependencies]
oneshot = "0.1.3"
base64 = "0.13.0"
byteorder = "1.4.3"
oneshot = "0.1.5"
base64 = "0.21.0"
crc32fast = "1.3.2"
once_cell = "1.10.0"
regex = { version = "1.5.5", default-features = false, features = ["std", "unicode"] }
tantivy-fst = "0.3.0"
aho-corasick = "0.7"
tantivy-fst = "0.4.0"
memmap2 = { version = "0.5.3", optional = true }
lz4_flex = { version = "0.9.2", default-features = false, features = ["checked-decode"], optional = true }
lz4_flex = { version = "0.10", default-features = false, features = ["checked-decode"], optional = true }
brotli = { version = "3.3.4", optional = true }
zstd = { version = "0.11", optional = true }
zstd = { version = "0.12", optional = true, default-features = false }
snap = { version = "1.0.5", optional = true }
tempfile = { version = "3.3.0", optional = true }
log = "0.4.16"
@@ -34,32 +35,33 @@ fs2 = { version = "0.4.3", optional = true }
levenshtein_automata = "0.2.1"
uuid = { version = "1.0.0", features = ["v4", "serde"] }
crossbeam-channel = "0.5.4"
tantivy-query-grammar = { version="0.18.0", path="./query-grammar" }
tantivy-bitpacker = { version="0.2", path="./bitpacker" }
common = { version = "0.3", path = "./common/", package = "tantivy-common" }
fastfield_codecs = { version="0.2", path="./fastfield_codecs", default-features = false }
ownedbytes = { version="0.3", path="./ownedbytes" }
stable_deref_trait = "1.2.0"
rust-stemmers = "1.2.0"
downcast-rs = "1.2.0"
bitpacking = { version = "0.8.4", default-features = false, features = ["bitpacker4x"] }
census = "0.4.0"
fnv = "1.0.7"
rustc-hash = "1.1.0"
thiserror = "1.0.30"
htmlescape = "0.3.1"
fail = "0.5.0"
murmurhash32 = "0.2.0"
murmurhash32 = "0.3.0"
time = { version = "0.3.10", features = ["serde-well-known"] }
smallvec = "1.8.0"
rayon = "1.5.2"
lru = "0.7.5"
lru = "0.9.0"
fastdivide = "0.4.0"
itertools = "0.10.3"
measure_time = "0.8.2"
serde_cbor = { version = "0.11.2", optional = true }
async-trait = "0.1.53"
arc-swap = "1.5.0"
columnar = { version="0.1", path="./columnar", package ="tantivy-columnar" }
sstable = { version="0.1", path="./sstable", package ="tantivy-sstable", optional = true }
stacker = { version="0.1", path="./stacker", package ="tantivy-stacker" }
query-grammar = { version= "0.19.0", path="./query-grammar", package = "tantivy-query-grammar" }
tantivy-bitpacker = { version= "0.3", path="./bitpacker" }
common = { version= "0.5", path = "./common/", package = "tantivy-common" }
tokenizer-api = { version="0.1", path="./tokenizer-api", package="tantivy-tokenizer-api" }
[target.'cfg(windows)'.dependencies]
winapi = "0.3.9"
@@ -69,11 +71,12 @@ maplit = "1.0.2"
matches = "0.1.9"
pretty_assertions = "1.2.1"
proptest = "1.0.0"
criterion = "0.3.5"
criterion = "0.4"
test-log = "0.2.10"
env_logger = "0.9.0"
pprof = { version = "0.10.0", features = ["flamegraph", "criterion"] }
env_logger = "0.10.0"
pprof = { version = "0.11.0", features = ["flamegraph", "criterion"] }
futures = "0.3.21"
paste = "1.0.11"
[dev-dependencies.fail]
version = "0.5.0"
@@ -89,8 +92,9 @@ debug-assertions = true
overflow-checks = true
[features]
default = ["mmap", "lz4-compression" ]
default = ["mmap", "stopwords", "lz4-compression"]
mmap = ["fs2", "tempfile", "memmap2"]
stopwords = []
brotli-compression = ["brotli"]
lz4-compression = ["lz4_flex"]
@@ -100,10 +104,10 @@ zstd-compression = ["zstd"]
failpoints = ["fail/failpoints"]
unstable = [] # useful for benches.
quickwit = ["serde_cbor"]
quickwit = ["sstable"]
[workspace]
members = ["query-grammar", "bitpacker", "common", "fastfield_codecs", "ownedbytes"]
members = ["query-grammar", "bitpacker", "common", "ownedbytes", "stacker", "sstable", "tokenizer-api", "columnar"]
# Following the "fail" crate best practises, we isolate
# tests that define specific behavior in fail check points

View File

@@ -29,7 +29,7 @@ Your mileage WILL vary depending on the nature of queries and their load.
# Features
- Full-text search
- Configurable tokenizer (stemming available for 17 Latin languages with third party support for Chinese ([tantivy-jieba](https://crates.io/crates/tantivy-jieba) and [cang-jie](https://crates.io/crates/cang-jie)), Japanese ([lindera](https://github.com/lindera-morphology/lindera-tantivy), [Vaporetto](https://crates.io/crates/vaporetto_tantivy), and [tantivy-tokenizer-tiny-segmenter](https://crates.io/crates/tantivy-tokenizer-tiny-segmenter)) and Korean ([lindera](https://github.com/lindera-morphology/lindera-tantivy) + [lindera-ko-dic-builder](https://github.com/lindera-morphology/lindera-ko-dic-builder))
- Configurable tokenizer (stemming available for 17 Latin languages) with third party support for Chinese ([tantivy-jieba](https://crates.io/crates/tantivy-jieba) and [cang-jie](https://crates.io/crates/cang-jie)), Japanese ([lindera](https://github.com/lindera-morphology/lindera-tantivy), [Vaporetto](https://crates.io/crates/vaporetto_tantivy), and [tantivy-tokenizer-tiny-segmenter](https://crates.io/crates/tantivy-tokenizer-tiny-segmenter)) and Korean ([lindera](https://github.com/lindera-morphology/lindera-tantivy) + [lindera-ko-dic-builder](https://github.com/lindera-morphology/lindera-ko-dic-builder))
- Fast (check out the :racehorse: :sparkles: [benchmark](https://tantivy-search.github.io/bench/) :sparkles: :racehorse:)
- Tiny startup time (<10ms), perfect for command-line tools
- BM25 scoring (the same as Lucene)
@@ -41,13 +41,13 @@ Your mileage WILL vary depending on the nature of queries and their load.
- SIMD integer compression when the platform/CPU includes the SSE2 instruction set
- Single valued and multivalued u64, i64, and f64 fast fields (equivalent of doc values in Lucene)
- `&[u8]` fast fields
- Text, i64, u64, f64, dates, and hierarchical facet fields
- LZ4 compressed document store
- Text, i64, u64, f64, dates, ip, bool, and hierarchical facet fields
- Compressed document store (LZ4, Zstd, None, Brotli, Snap)
- Range queries
- Faceted search
- Configurable indexing (optional term frequency and position indexing)
- JSON Field
- Aggregation Collector: range buckets, average, and stats metrics
- Aggregation Collector: histogram, range buckets, average, and stats metrics
- LogMergePolicy with deletes
- Searcher Warmer API
- Cheesy logo with a horse
@@ -58,7 +58,7 @@ Distributed search is out of the scope of Tantivy, but if you are looking for th
# Getting started
Tantivy works on stable Rust (>= 1.27) and supports Linux, macOS, and Windows.
Tantivy works on stable Rust and supports Linux, macOS, and Windows.
- [Tantivy's simple search example](https://tantivy-search.github.io/examples/basic_search.html)
- [tantivy-cli and its tutorial](https://github.com/quickwit-oss/tantivy-cli) - `tantivy-cli` is an actual command-line interface that makes it easy for you to create a search engine,
@@ -80,48 +80,21 @@ There are many ways to support this project.
# Contributing code
We use the GitHub Pull Request workflow: reference a GitHub ticket and/or include a comprehensive commit message when opening a PR.
Feel free to update CHANGELOG.md with your contribution.
## Tokenizer
When implementing a tokenizer for tantivy, depend on the `tantivy-tokenizer-api` crate.
## Clone and build locally
Tantivy compiles on stable Rust but requires `Rust >= 1.27`.
Tantivy compiles on stable Rust.
To check out and run tests, you can simply run:
```bash
git clone https://github.com/quickwit-oss/tantivy.git
cd tantivy
cargo build
```
## Run tests
Some tests will not run with just `cargo test` because of `fail-rs`.
To run the tests exhaustively, run `./run-tests.sh`.
## Debug
You might find it useful to step through the programme with a debugger.
### A failing test
Make sure you haven't run `cargo clean` after the most recent `cargo test` or `cargo build` to guarantee that the `target/` directory exists. Use this bash script to find the name of the most recent debug build of Tantivy and run it under `rust-gdb`:
```bash
find target/debug/ -maxdepth 1 -executable -type f -name "tantivy*" -printf '%TY-%Tm-%Td %TT %p\n' | sort -r | cut -d " " -f 3 | xargs -I RECENT_DBG_TANTIVY rust-gdb RECENT_DBG_TANTIVY
```
Now that you are in `rust-gdb`, you can set breakpoints on lines and methods that match your source code and run the debug executable with flags that you normally pass to `cargo test` like this:
```bash
$gdb run --test-threads 1 --test $NAME_OF_TEST
```
### An example
By default, `rustc` compiles everything in the `examples/` directory in debug mode. This makes it easy for you to make examples to reproduce bugs:
```bash
rust-gdb target/debug/examples/$EXAMPLE_NAME
$ gdb run
git clone https://github.com/quickwit-oss/tantivy.git
cd tantivy
cargo test
```
# Companies Using Tantivy

TODO.txt Normal file
View File

@@ -0,0 +1,18 @@
Make schema_builder API fluent.
fix doc serialization and prevent compression problems
u64, etc. should return Result<Option> now that we support optional; missing a column is really not an error
remove fastfield codecs
ditch the first_or_default trick. if it is still useful, improve its implementation.
rename FastFieldReaders::open to load
remove fast field reader
find a way to unify the two DateTime.
readd type check in the filter wrapper
add unit test on columnar list columns.
make sure sort works

View File

@@ -34,7 +34,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let index = Index::create_in_ram(schema.clone());
let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
for doc_json in HDFS_LOGS.trim().split('\n') {
let doc = schema.parse_document(doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
@@ -46,7 +46,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let index = Index::create_in_ram(schema.clone());
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
for doc_json in HDFS_LOGS.trim().split('\n') {
let doc = schema.parse_document(doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
@@ -59,7 +59,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let index = Index::create_in_ram(schema_with_store.clone());
let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
for doc_json in HDFS_LOGS.trim().split('\n') {
let doc = schema.parse_document(doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
@@ -71,7 +71,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let index = Index::create_in_ram(schema_with_store.clone());
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
for doc_json in HDFS_LOGS.trim().split('\n') {
let doc = schema.parse_document(doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
@@ -85,7 +85,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let json_field = dynamic_schema.get_field("json").unwrap();
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
for doc_json in HDFS_LOGS.trim().split('\n') {
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(doc_json).unwrap();
let doc = tantivy::doc!(json_field=>json_val);
@@ -101,7 +101,7 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let json_field = dynamic_schema.get_field("json").unwrap();
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
for _ in 0..NUM_REPEATS {
for doc_json in HDFS_LOGS.trim().split("\n") {
for doc_json in HDFS_LOGS.trim().split('\n') {
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(doc_json).unwrap();
let doc = tantivy::doc!(json_field=>json_val);

View File

@@ -1,6 +1,6 @@
[package]
name = "tantivy-bitpacker"
version = "0.2.0"
version = "0.3.0"
edition = "2021"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT"
@@ -8,8 +8,14 @@ categories = []
description = """Tantivy-sub crate: bitpacking"""
repository = "https://github.com/quickwit-oss/tantivy"
keywords = []
documentation = "https://docs.rs/tantivy-bitpacker/latest/tantivy_bitpacker"
homepage = "https://github.com/quickwit-oss/tantivy"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
[dev-dependencies]
rand = "0.8"
proptest = "1"

View File

@@ -4,9 +4,39 @@ extern crate test;
#[cfg(test)]
mod tests {
use tantivy_bitpacker::BlockedBitpacker;
use rand::seq::IteratorRandom;
use rand::thread_rng;
use tantivy_bitpacker::{BitPacker, BitUnpacker, BlockedBitpacker};
use test::Bencher;
#[inline(never)]
fn create_bitpacked_data(bit_width: u8, num_els: u32) -> Vec<u8> {
let mut bitpacker = BitPacker::new();
let mut buffer = Vec::new();
for _ in 0..num_els {
// the values do not matter for this benchmark.
bitpacker.write(0u64, bit_width, &mut buffer).unwrap();
}
// Flush once after the loop: flushing per value would byte-align each
// entry and break the contiguous bit-packed layout the unpacker expects.
bitpacker.flush(&mut buffer).unwrap();
buffer
}
#[bench]
fn bench_bitpacking_read(b: &mut Bencher) {
let bit_width = 3;
let num_els = 1_000_000u32;
let bit_unpacker = BitUnpacker::new(bit_width);
let data = create_bitpacked_data(bit_width, num_els);
let idxs: Vec<u32> = (0..num_els).choose_multiple(&mut thread_rng(), 100_000);
b.iter(|| {
let mut out = 0u64;
for &idx in &idxs {
out = out.wrapping_add(bit_unpacker.get(idx, &data[..]));
}
out
});
}
#[bench]
fn bench_blockedbitp_read(b: &mut Bencher) {
let mut blocked_bitpacker = BlockedBitpacker::new();
@@ -14,9 +44,9 @@ mod tests {
blocked_bitpacker.add(val * val);
}
b.iter(|| {
let mut out = 0;
let mut out = 0u64;
for val in 0..=21500 {
out = blocked_bitpacker.get(val);
out = out.wrapping_add(blocked_bitpacker.get(val));
}
out
});

View File

@@ -19,21 +19,20 @@ impl BitPacker {
}
#[inline]
pub fn write<TWrite: io::Write>(
pub fn write<TWrite: io::Write + ?Sized>(
&mut self,
val: u64,
num_bits: u8,
output: &mut TWrite,
) -> io::Result<()> {
let val_u64 = val as u64;
let num_bits = num_bits as usize;
if self.mini_buffer_written + num_bits > 64 {
self.mini_buffer |= val_u64.wrapping_shl(self.mini_buffer_written as u32);
self.mini_buffer |= val.wrapping_shl(self.mini_buffer_written as u32);
output.write_all(self.mini_buffer.to_le_bytes().as_ref())?;
self.mini_buffer = val_u64.wrapping_shr((64 - self.mini_buffer_written) as u32);
self.mini_buffer = val.wrapping_shr((64 - self.mini_buffer_written) as u32);
self.mini_buffer_written = self.mini_buffer_written + num_bits - 64;
} else {
self.mini_buffer |= val_u64 << self.mini_buffer_written;
self.mini_buffer |= val << self.mini_buffer_written;
self.mini_buffer_written += num_bits;
if self.mini_buffer_written == 64 {
output.write_all(self.mini_buffer.to_le_bytes().as_ref())?;
@@ -44,7 +43,7 @@ impl BitPacker {
Ok(())
}
pub fn flush<TWrite: io::Write>(&mut self, output: &mut TWrite) -> io::Result<()> {
pub fn flush<TWrite: io::Write + ?Sized>(&mut self, output: &mut TWrite) -> io::Result<()> {
if self.mini_buffer_written > 0 {
let num_bytes = (self.mini_buffer_written + 7) / 8;
let bytes = self.mini_buffer.to_le_bytes();
@@ -55,29 +54,33 @@ impl BitPacker {
Ok(())
}
pub fn close<TWrite: io::Write>(&mut self, output: &mut TWrite) -> io::Result<()> {
pub fn close<TWrite: io::Write + ?Sized>(&mut self, output: &mut TWrite) -> io::Result<()> {
self.flush(output)?;
// Padding the write file to simplify reads.
output.write_all(&[0u8; 7])?;
Ok(())
}
}
#[derive(Clone, Debug, Default)]
#[derive(Clone, Debug, Default, Copy)]
pub struct BitUnpacker {
num_bits: u64,
num_bits: u32,
mask: u64,
}
impl BitUnpacker {
/// Creates a bit unpacker, that assumes the same bitwidth for all values.
///
/// The bitunpacker works by doing an unaligned read of 8 bytes.
/// For this reason, values of `num_bits` between
/// [57..63] are forbidden.
pub fn new(num_bits: u8) -> BitUnpacker {
assert!(num_bits <= 7 * 8 || num_bits == 64);
let mask: u64 = if num_bits == 64 {
!0u64
} else {
(1u64 << num_bits) - 1u64
};
BitUnpacker {
num_bits: u64::from(num_bits),
num_bits: u32::from(num_bits),
mask,
}
}
@@ -87,22 +90,32 @@ impl BitUnpacker {
}
#[inline]
pub fn get(&self, idx: u64, data: &[u8]) -> u64 {
if self.num_bits == 0 {
return 0u64;
}
pub fn get(&self, idx: u32, data: &[u8]) -> u64 {
let addr_in_bits = idx * self.num_bits;
let addr = addr_in_bits >> 3;
let addr = (addr_in_bits >> 3) as usize;
if addr + 8 > data.len() {
if self.num_bits == 0 {
return 0;
}
let bit_shift = addr_in_bits & 7;
return self.get_slow_path(addr, bit_shift, data);
}
let bit_shift = addr_in_bits & 7;
debug_assert!(
addr + 8 <= data.len() as u64,
"The fast field field should have been padded with 7 bytes."
);
let bytes: [u8; 8] = (&data[(addr as usize)..(addr as usize) + 8])
.try_into()
.unwrap();
let bytes: [u8; 8] = (&data[addr..addr + 8]).try_into().unwrap();
let val_unshifted_unmasked: u64 = u64::from_le_bytes(bytes);
let val_shifted = (val_unshifted_unmasked >> bit_shift) as u64;
let val_shifted = val_unshifted_unmasked >> bit_shift;
val_shifted & self.mask
}
#[inline(never)]
fn get_slow_path(&self, addr: usize, bit_shift: u32, data: &[u8]) -> u64 {
let mut bytes: [u8; 8] = [0u8; 8];
let available_bytes = data.len() - addr;
// This function is meant to only be called if we did not have 8 bytes to load.
debug_assert!(available_bytes < 8);
bytes[..available_bytes].copy_from_slice(&data[addr..]);
let val_unshifted_unmasked: u64 = u64::from_le_bytes(bytes);
let val_shifted = val_unshifted_unmasked >> bit_shift;
val_shifted & self.mask
}
}
@@ -111,7 +124,7 @@ impl BitUnpacker {
mod test {
use super::{BitPacker, BitUnpacker};
fn create_fastfield_bitpacker(len: usize, num_bits: u8) -> (BitUnpacker, Vec<u64>, Vec<u8>) {
fn create_bitpacker(len: usize, num_bits: u8) -> (BitUnpacker, Vec<u64>, Vec<u8>) {
let mut data = Vec::new();
let mut bitpacker = BitPacker::new();
let max_val: u64 = (1u64 << num_bits as u64) - 1u64;
@@ -122,15 +135,15 @@ mod test {
bitpacker.write(val, num_bits, &mut data).unwrap();
}
bitpacker.close(&mut data).unwrap();
assert_eq!(data.len(), ((num_bits as usize) * len + 7) / 8 + 7);
assert_eq!(data.len(), ((num_bits as usize) * len + 7) / 8);
let bitunpacker = BitUnpacker::new(num_bits);
(bitunpacker, vals, data)
}
fn test_bitpacker_util(len: usize, num_bits: u8) {
let (bitunpacker, vals, data) = create_fastfield_bitpacker(len, num_bits);
let (bitunpacker, vals, data) = create_bitpacker(len, num_bits);
for (i, val) in vals.iter().enumerate() {
assert_eq!(bitunpacker.get(i as u64, &data), *val);
assert_eq!(bitunpacker.get(i as u32, &data), *val);
}
}
@@ -142,4 +155,49 @@ mod test {
test_bitpacker_util(6, 14);
test_bitpacker_util(1000, 14);
}
use proptest::prelude::*;
fn num_bits_strategy() -> impl Strategy<Value = u8> {
prop_oneof!(Just(0), Just(1), 2u8..56u8, Just(56), Just(64),)
}
fn vals_strategy() -> impl Strategy<Value = (u8, Vec<u64>)> {
(num_bits_strategy(), 0usize..100usize).prop_flat_map(|(num_bits, len)| {
let max_val = if num_bits == 64 {
u64::MAX
} else {
(1u64 << num_bits as u32) - 1
};
let vals = proptest::collection::vec(0..=max_val, len);
vals.prop_map(move |vals| (num_bits, vals))
})
}
fn test_bitpacker_aux(num_bits: u8, vals: &[u64]) {
let mut buffer: Vec<u8> = Vec::new();
let mut bitpacker = BitPacker::new();
for &val in vals {
bitpacker.write(val, num_bits, &mut buffer).unwrap();
}
bitpacker.flush(&mut buffer).unwrap();
assert_eq!(buffer.len(), (vals.len() * num_bits as usize + 7) / 8);
let bitunpacker = BitUnpacker::new(num_bits);
let max_val = if num_bits == 64 {
u64::MAX
} else {
(1u64 << num_bits) - 1
};
for (i, val) in vals.iter().copied().enumerate() {
assert!(val <= max_val);
assert_eq!(bitunpacker.get(i as u32, &buffer), val);
}
}
proptest::proptest! {
#[test]
fn test_bitpacker_proptest((num_bits, vals) in vals_strategy()) {
test_bitpacker_aux(num_bits, &vals);
}
}
}
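For context, a minimal round-trip sketch against the `BitPacker`/`BitUnpacker` API in its post-change form above (`write` takes any `io::Write`, `get` takes a `u32` index):
```rust
use tantivy_bitpacker::{BitPacker, BitUnpacker};

let vals: Vec<u64> = (0u64..100).map(|i| i % 32).collect();
let mut buffer: Vec<u8> = Vec::new();
let mut packer = BitPacker::new();
for &val in &vals {
    // 5 bits is enough for values < 32.
    packer.write(val, 5, &mut buffer).unwrap();
}
// `close` flushes the pending mini buffer and appends 7 bytes of padding,
// so the unpacker's unaligned 8-byte reads never run past the buffer.
packer.close(&mut buffer).unwrap();

let unpacker = BitUnpacker::new(5);
for (i, &val) in vals.iter().enumerate() {
    assert_eq!(unpacker.get(i as u32, &buffer), val);
}
```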

View File

@@ -84,7 +84,7 @@ impl BlockedBitpacker {
#[inline]
pub fn add(&mut self, val: u64) {
self.buffer.push(val);
if self.buffer.len() == BLOCK_SIZE as usize {
if self.buffer.len() == BLOCK_SIZE {
self.flush();
}
}
@@ -126,11 +126,11 @@ impl BlockedBitpacker {
}
#[inline]
pub fn get(&self, idx: usize) -> u64 {
let metadata_pos = idx / BLOCK_SIZE as usize;
let pos_in_block = idx % BLOCK_SIZE as usize;
let metadata_pos = idx / BLOCK_SIZE;
let pos_in_block = idx % BLOCK_SIZE;
if let Some(metadata) = self.offset_and_bits.get(metadata_pos) {
let unpacked = BitUnpacker::new(metadata.num_bits()).get(
pos_in_block as u64,
pos_in_block as u32,
&self.compressed_blocks[metadata.offset() as usize..],
);
unpacked + metadata.base_value()

View File

@@ -1,6 +1,8 @@
mod bitpacker;
mod blocked_bitpacker;
use std::cmp::Ordering;
pub use crate::bitpacker::{BitPacker, BitUnpacker};
pub use crate::blocked_bitpacker::BlockedBitpacker;
@@ -37,44 +39,104 @@ pub fn compute_num_bits(n: u64) -> u8 {
}
}
/// Computes the (min, max) of an iterator of `PartialOrd` values.
///
/// For values implementing `Ord` (in a way consistent with their `PartialOrd` impl),
/// this function behaves as expected.
///
/// For values with only a partial ordering, the behavior is non-trivial and may
/// depend on the order of the values.
/// For floats however, it simply returns the same result as if NaN values were
/// skipped.
pub fn minmax<I, T>(mut vals: I) -> Option<(T, T)>
where
I: Iterator<Item = T>,
T: Copy + Ord,
T: Copy + PartialOrd,
{
if let Some(first_el) = vals.next() {
return Some(vals.fold((first_el, first_el), |(min_val, max_val), el| {
(min_val.min(el), max_val.max(el))
}));
let first_el = vals.find(|val| {
// We use this to make sure we skip all NaN values when
// working with a float type.
val.partial_cmp(val) == Some(Ordering::Equal)
})?;
let mut min_so_far: T = first_el;
let mut max_so_far: T = first_el;
for val in vals {
if val.partial_cmp(&min_so_far) == Some(Ordering::Less) {
min_so_far = val;
}
if val.partial_cmp(&max_so_far) == Some(Ordering::Greater) {
max_so_far = val;
}
}
None
Some((min_so_far, max_so_far))
}
#[test]
fn test_compute_num_bits() {
assert_eq!(compute_num_bits(1), 1u8);
assert_eq!(compute_num_bits(0), 0u8);
assert_eq!(compute_num_bits(2), 2u8);
assert_eq!(compute_num_bits(3), 2u8);
assert_eq!(compute_num_bits(4), 3u8);
assert_eq!(compute_num_bits(255), 8u8);
assert_eq!(compute_num_bits(256), 9u8);
assert_eq!(compute_num_bits(5_000_000_000), 33u8);
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_minmax_empty() {
let vals: Vec<u32> = vec![];
assert_eq!(minmax(vals.into_iter()), None);
}
#[test]
fn test_compute_num_bits() {
assert_eq!(compute_num_bits(1), 1u8);
assert_eq!(compute_num_bits(0), 0u8);
assert_eq!(compute_num_bits(2), 2u8);
assert_eq!(compute_num_bits(3), 2u8);
assert_eq!(compute_num_bits(4), 3u8);
assert_eq!(compute_num_bits(255), 8u8);
assert_eq!(compute_num_bits(256), 9u8);
assert_eq!(compute_num_bits(5_000_000_000), 33u8);
}
#[test]
fn test_minmax_one() {
assert_eq!(minmax(vec![1].into_iter()), Some((1, 1)));
}
#[test]
fn test_minmax_empty() {
let vals: Vec<u32> = vec![];
assert_eq!(minmax(vals.into_iter()), None);
}
#[test]
fn test_minmax_two() {
assert_eq!(minmax(vec![1, 2].into_iter()), Some((1, 2)));
assert_eq!(minmax(vec![2, 1].into_iter()), Some((1, 2)));
#[test]
fn test_minmax_one() {
assert_eq!(minmax(vec![1].into_iter()), Some((1, 1)));
}
#[test]
fn test_minmax_two() {
assert_eq!(minmax(vec![1, 2].into_iter()), Some((1, 2)));
assert_eq!(minmax(vec![2, 1].into_iter()), Some((1, 2)));
}
#[test]
fn test_minmax_nan() {
assert_eq!(
minmax(vec![f64::NAN, 1f64, 2f64].into_iter()),
Some((1f64, 2f64))
);
assert_eq!(
minmax(vec![2f64, f64::NAN, 1f64].into_iter()),
Some((1f64, 2f64))
);
assert_eq!(
minmax(vec![2f64, 1f64, f64::NAN].into_iter()),
Some((1f64, 2f64))
);
}
#[test]
fn test_minmax_inf() {
assert_eq!(
minmax(vec![f64::INFINITY, 1f64, 2f64].into_iter()),
Some((1f64, f64::INFINITY))
);
assert_eq!(
minmax(vec![-f64::INFINITY, 1f64, 2f64].into_iter()),
Some((-f64::INFINITY, 2f64))
);
assert_eq!(
minmax(vec![2f64, f64::INFINITY, 1f64].into_iter()),
Some((1f64, f64::INFINITY))
);
assert_eq!(
minmax(vec![2f64, 1f64, -f64::INFINITY].into_iter()),
Some((-f64::INFINITY, 2f64))
);
}
}

View File

@@ -1,23 +0,0 @@
# This script takes care of packaging the build artifacts that will go in the
# release zipfile
$SRC_DIR = $PWD.Path
$STAGE = [System.Guid]::NewGuid().ToString()
Set-Location $ENV:Temp
New-Item -Type Directory -Name $STAGE
Set-Location $STAGE
$ZIP = "$SRC_DIR\$($Env:CRATE_NAME)-$($Env:APPVEYOR_REPO_TAG_NAME)-$($Env:TARGET).zip"
# TODO Update this to package the right artifacts
Copy-Item "$SRC_DIR\target\$($Env:TARGET)\release\hello.exe" '.\'
7z a "$ZIP" *
Push-AppveyorArtifact "$ZIP"
Remove-Item *.* -Force
Set-Location ..
Remove-Item $STAGE
Set-Location $SRC_DIR

View File

@@ -1,33 +0,0 @@
# This script takes care of building your crate and packaging it for release
set -ex
main() {
local src=$(pwd) \
stage=
case $TRAVIS_OS_NAME in
linux)
stage=$(mktemp -d)
;;
osx)
stage=$(mktemp -d -t tmp)
;;
esac
test -f Cargo.lock || cargo generate-lockfile
# TODO Update this to build the artifacts that matter to you
cross rustc --bin hello --target $TARGET --release -- -C lto
# TODO Update this to package the right artifacts
cp target/$TARGET/release/hello $stage/
cd $stage
tar czf $src/$CRATE_NAME-$TRAVIS_TAG-$TARGET.tar.gz *
cd $src
rm -rf $stage
}
main

View File

@@ -1,47 +0,0 @@
set -ex
main() {
local target=
if [ $TRAVIS_OS_NAME = linux ]; then
target=x86_64-unknown-linux-musl
sort=sort
else
target=x86_64-apple-darwin
sort=gsort # for `sort --sort-version`, from brew's coreutils.
fi
# Builds for iOS are done on OSX, but require the specific target to be
# installed.
case $TARGET in
aarch64-apple-ios)
rustup target install aarch64-apple-ios
;;
armv7-apple-ios)
rustup target install armv7-apple-ios
;;
armv7s-apple-ios)
rustup target install armv7s-apple-ios
;;
i386-apple-ios)
rustup target install i386-apple-ios
;;
x86_64-apple-ios)
rustup target install x86_64-apple-ios
;;
esac
# This fetches latest stable release
local tag=$(git ls-remote --tags --refs --exit-code https://github.com/japaric/cross \
| cut -d/ -f3 \
| grep -E '^v[0.1.0-9.]+$' \
| $sort --version-sort \
| tail -n1)
curl -LSfs https://japaric.github.io/trust/install.sh | \
sh -s -- \
--force \
--git japaric/cross \
--tag $tag \
--target $target
}
main

View File

@@ -1,30 +0,0 @@
#!/usr/bin/env bash
# This script takes care of testing your crate
set -ex
main() {
if [ ! -z $CODECOV ]; then
echo "Codecov"
cargo build --verbose && cargo coverage --verbose --all && bash <(curl -s https://codecov.io/bash) -s target/kcov
else
echo "Build"
cross build --target $TARGET
if [ ! -z $DISABLE_TESTS ]; then
return
fi
echo "Test"
cross test --target $TARGET --no-default-features --features mmap
cross test --target $TARGET --no-default-features --features mmap query-grammar
fi
for example in $(ls examples/*.rs)
do
cargo run --example $(basename $example .rs)
done
}
# we don't run the "test phase" when doing deploys
if [ -z $TRAVIS_TAG ]; then
main
fi

columnar/Cargo.toml Normal file
View File

@@ -0,0 +1,28 @@
[package]
name = "tantivy-columnar"
version = "0.1.0"
edition = "2021"
license = "MIT"
[dependencies]
itertools = "0.10.5"
log = "0.4.17"
fnv = "1.0.7"
fastdivide = "0.4.0"
rand = { version = "0.8.5", optional = true }
measure_time = { version = "0.8.2", optional = true }
prettytable-rs = { version = "0.10.0", optional = true }
stacker = { path = "../stacker", package="tantivy-stacker"}
sstable = { path = "../sstable", package = "tantivy-sstable" }
common = { path = "../common", package = "tantivy-common" }
tantivy-bitpacker = { version= "0.3", path = "../bitpacker/" }
serde = "1.0.152"
[dev-dependencies]
proptest = "1"
more-asserts = "0.3.1"
rand = "0.8.5"
[features]
unstable = []

columnar/README.md Normal file
View File

@@ -0,0 +1,109 @@
# Columnar format
This crate describes the columnar format used in tantivy.
## Goals
This format is special in the following ways:
- it needs to be compact
- accessing a specific column does not require loading the entire columnar file. It can be done in 2 to 3 random accesses.
- columns of several types can be associated with the same column name.
- it needs to support columns with different types `(str, u64, i64, f64)`
and different cardinality `(required, optional, multivalued)`.
- columns, once loaded, offer cheap random access.
- it is designed to allow range queries.
# Coercion rules
Users can create a columnar by inserting rows to a `ColumnarWriter`,
and serializing it into a `Write` object.
Nothing prevents a user from recording values with different types under the same `column_name`.
In that case, `tantivy-columnar`'s behavior is as follows:
- JsonValues are grouped into 3 types (String, Number, bool).
Values that correspond to different groups are mapped to different columns. For instance, String values are treated independently
from Number or boolean values. `tantivy-columnar` will simply emit several columns associated with a given column_name.
- Only one column for a given json value type is emitted. If number values with different number types are recorded (e.g. u64, i64, f64),
`tantivy-columnar` will pick the first type that can represent the set of appended values, with the following priority order (`i64`, `u64`, `f64`).
`i64` is picked over `u64` because it is less likely to force a type change: use cases that strictly require `u64` typically do so for
about 50% of their values (e.g. a 64-bit hash), whereas many nominally unsigned use cases only occasionally contain a negative value.
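For illustration, a minimal sketch of these rules using the `ColumnarWriter` API that appears in the `columnar-cli` example later in this changeset (signatures are assumed from that example and may differ in detail):
```rust
use columnar::{ColumnarWriter, NumericalValue};

let mut writer = ColumnarWriter::default();
// Two numerical types recorded under the same column name...
writer.record_numerical(0u32, "price", NumericalValue::from(10u64));
writer.record_numerical(1u32, "price", NumericalValue::from(-3i64));
// ...and a String value under the same name.
writer.record_str(2u32, "price", "unknown");

let mut buffer: Vec<u8> = Vec::new();
writer.serialize(3u32, None, &mut buffer).unwrap();
// Expected result: one numerical column coerced to `i64` (priority order)
// and one independent str column, both registered under "price".
```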
# Columnar format
This columnar format may have more than one column (with different types) associated with the same `column_name` (see [Coercion rules](#coercion-rules) above).
The `(column_name, column_type)` couple however uniquely identifies a column.
That couple is serialized into the column key. The format of that key is:
`[column_name][ZERO_BYTE][column_type_header: u8]`
```
COLUMNAR:=
[COLUMNAR_DATA]
[COLUMNAR_KEY_TO_DATA_INDEX]
[COLUMNAR_FOOTER];
# Columns are sorted by their column key.
COLUMNAR_DATA:=
[COLUMN_DATA]+;
COLUMNAR_FOOTER := [RANGE_SSTABLE_BYTES_LEN: 8 bytes little endian]
```
The columnar file starts with the actual column data, concatenated one after the other,
sorted by column key.
An sstable then associates each
`(column_name, column_cardinality, column_type)` key with a range of bytes.
Column names may not contain the zero byte `\0`.
Listing all columns associated with `column_name` can therefore
be done by listing all keys prefixed by
`[column_name][ZERO_BYTE]`.
The associated range of bytes refers to the slice of `COLUMNAR_DATA` where that column is serialized.
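A sketch of how such a key could be assembled; the helper name `column_key` is hypothetical, only the `[column_name][ZERO_BYTE][column_type_header]` layout comes from the spec above:
```rust
// Hypothetical helper illustrating the column key layout described above.
fn column_key(column_name: &str, column_type_header: u8) -> Vec<u8> {
    let mut key = Vec::with_capacity(column_name.len() + 2);
    key.extend_from_slice(column_name.as_bytes()); // must not contain `\0`
    key.push(0u8); // ZERO_BYTE separator
    key.push(column_type_header); // one-byte type header
    key
}
```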
This crate exposes a columnar format for tantivy.
This format is described above.
The crate introduces the following concepts.
`Columnar` is an equivalent of a dataframe.
It maps `column_key` to `Column`.
A `Column<T>` associates a `RowId` (u32) with any
number of values.
This is made possible by wrapping a `ColumnIndex` and a `ColumnValue` object.
The `ColumnValue<T>` represents a mapping that associates each `RowId` to
exactly one single value.
The `ColumnIndex` then maps each RowId to a set of `RowId` in the
`ColumnValue`.
For optimization and compression purposes, the `ColumnIndex` has three
possible representations, one for each cardinality.
- Full
All RowIds have exactly one value. The ColumnIndex is the trivial mapping.
- Optional
All RowIds can have at most one value. The ColumnIndex is the trivial mapping `ColumnRowId -> Option<ColumnValueRowId>`.
- Multivalued
All RowIds can have any number of values.
The column index maps each RowId to a range of value rows.
All these objects are implemented and unit tested independently
in their own module:
- columnar
- column_index
- column_values
- column
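As a concrete illustration of the multivalued representation, a small sketch of the start-offset encoding (the layout is implied by the `MultiValueIndex::for_test` calls used in the merge tests later in this changeset):
```rust
// Three rows holding the values ["a", "b"], [], and ["c", "d", "e"].
// The multivalued index stores one start offset per row plus a final
// entry holding the total number of values:
let starts: Vec<u32> = vec![0, 2, 2, 5];
// Row i's values live at value rows starts[i]..starts[i + 1].
let row = 2;
let value_rows = starts[row] as usize..starts[row + 1] as usize;
assert_eq!(value_rows, 2..5);
```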

View File

@@ -0,0 +1,124 @@
#![feature(test)]
use std::ops::RangeInclusive;
use std::sync::Arc;
use common::OwnedBytes;
use rand::rngs::StdRng;
use rand::seq::SliceRandom;
use rand::{random, Rng, SeedableRng};
use tantivy_columnar::ColumnValues;
use test::Bencher;
extern crate test;
// TODO does this make sense for IPv6 ?
fn generate_random() -> Vec<u64> {
let mut permutation: Vec<u64> = (0u64..100_000u64)
.map(|el| el + random::<u16>() as u64)
.collect();
permutation.shuffle(&mut StdRng::from_seed([1u8; 32]));
permutation
}
fn get_u128_column_random() -> Arc<dyn ColumnValues<u128>> {
let permutation = generate_random();
let permutation = permutation.iter().map(|el| *el as u128).collect::<Vec<_>>();
get_u128_column_from_data(&permutation)
}
fn get_u128_column_from_data(data: &[u128]) -> Arc<dyn ColumnValues<u128>> {
let mut out = vec![];
tantivy_columnar::column_values::serialize_column_values_u128(&data, &mut out).unwrap();
let out = OwnedBytes::new(out);
tantivy_columnar::column_values::open_u128_mapped::<u128>(out).unwrap()
}
const FIFTY_PERCENT_RANGE: RangeInclusive<u64> = 1..=50;
const SINGLE_ITEM: u64 = 90;
const SINGLE_ITEM_RANGE: RangeInclusive<u64> = 90..=90;
fn get_data_50percent_item() -> Vec<u128> {
let mut rng = StdRng::from_seed([1u8; 32]);
let mut data = vec![];
for _ in 0..300_000 {
let val = rng.gen_range(1..=100);
data.push(val);
}
data.push(SINGLE_ITEM);
data.shuffle(&mut rng);
let data = data.iter().map(|el| *el as u128).collect::<Vec<_>>();
data
}
#[bench]
fn bench_intfastfield_getrange_u128_50percent_hit(b: &mut Bencher) {
let data = get_data_50percent_item();
let column = get_u128_column_from_data(&data);
b.iter(|| {
let mut positions = Vec::new();
column.get_row_ids_for_value_range(
*FIFTY_PERCENT_RANGE.start() as u128..=*FIFTY_PERCENT_RANGE.end() as u128,
0..data.len() as u32,
&mut positions,
);
positions
});
}
#[bench]
fn bench_intfastfield_getrange_u128_single_hit(b: &mut Bencher) {
let data = get_data_50percent_item();
let column = get_u128_column_from_data(&data);
b.iter(|| {
let mut positions = Vec::new();
column.get_row_ids_for_value_range(
*SINGLE_ITEM_RANGE.start() as u128..=*SINGLE_ITEM_RANGE.end() as u128,
0..data.len() as u32,
&mut positions,
);
positions
});
}
#[bench]
fn bench_intfastfield_getrange_u128_hit_all(b: &mut Bencher) {
let data = get_data_50percent_item();
let column = get_u128_column_from_data(&data);
b.iter(|| {
let mut positions = Vec::new();
column.get_row_ids_for_value_range(0..=u128::MAX, 0..data.len() as u32, &mut positions);
positions
});
}
// U128 RANGE END
#[bench]
fn bench_intfastfield_scan_all_fflookup_u128(b: &mut Bencher) {
let column = get_u128_column_random();
b.iter(|| {
let mut a = 0u128;
for i in 0u64..column.num_vals() as u64 {
a += column.get_val(i as u32);
}
a
});
}
#[bench]
fn bench_intfastfield_jumpy_stride5_u128(b: &mut Bencher) {
let column = get_u128_column_random();
b.iter(|| {
let n = column.num_vals();
let mut a = 0u128;
for i in (0..n / 5).map(|val| val * 5) {
a += column.get_val(i);
}
a
});
}

View File

@@ -0,0 +1,211 @@
#![feature(test)]
extern crate test;
use std::ops::RangeInclusive;
use std::sync::Arc;
use rand::prelude::*;
use tantivy_columnar::column_values::{serialize_and_load_u64_based_column_values, CodecType};
use tantivy_columnar::*;
use test::Bencher;
// Warning: this generates the same permutation at each call
fn generate_permutation() -> Vec<u64> {
let mut permutation: Vec<u64> = (0u64..100_000u64).collect();
permutation.shuffle(&mut StdRng::from_seed([1u8; 32]));
permutation
}
fn generate_random() -> Vec<u64> {
let mut permutation: Vec<u64> = (0u64..100_000u64)
.map(|el| el + random::<u16>() as u64)
.collect();
permutation.shuffle(&mut StdRng::from_seed([1u8; 32]));
permutation
}
// Warning: this generates the same permutation at each call
fn generate_permutation_gcd() -> Vec<u64> {
let mut permutation: Vec<u64> = (1u64..100_000u64).map(|el| el * 1000).collect();
permutation.shuffle(&mut StdRng::from_seed([1u8; 32]));
permutation
}
pub fn serialize_and_load(column: &[u64], codec_type: CodecType) -> Arc<dyn ColumnValues<u64>> {
serialize_and_load_u64_based_column_values(&column, &[codec_type])
}
#[bench]
fn bench_intfastfield_jumpy_veclookup(b: &mut Bencher) {
let permutation = generate_permutation();
let n = permutation.len();
b.iter(|| {
let mut a = 0u64;
for _ in 0..n {
a = permutation[a as usize];
}
a
});
}
#[bench]
fn bench_intfastfield_jumpy_fflookup_bitpacked(b: &mut Bencher) {
let permutation = generate_permutation();
let n = permutation.len();
let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&permutation, CodecType::Bitpacked);
b.iter(|| {
let mut a = 0u64;
for _ in 0..n {
a = column.get_val(a as u32);
}
a
});
}
const FIFTY_PERCENT_RANGE: RangeInclusive<u64> = 1..=50;
const SINGLE_ITEM: u64 = 90;
const SINGLE_ITEM_RANGE: RangeInclusive<u64> = 90..=90;
const ONE_PERCENT_ITEM_RANGE: RangeInclusive<u64> = 49..=49;
fn get_data_50percent_item() -> Vec<u128> {
let mut rng = StdRng::from_seed([1u8; 32]);
let mut data = vec![];
for _ in 0..300_000 {
let val = rng.gen_range(1..=100);
data.push(val);
}
data.push(SINGLE_ITEM);
data.shuffle(&mut rng);
let data = data.iter().map(|el| *el as u128).collect::<Vec<_>>();
data
}
// U64 RANGE START
#[bench]
fn bench_intfastfield_getrange_u64_50percent_hit(b: &mut Bencher) {
let data = get_data_50percent_item();
let data = data.iter().map(|el| *el as u64).collect::<Vec<_>>();
let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&data, CodecType::Bitpacked);
b.iter(|| {
let mut positions = Vec::new();
column.get_row_ids_for_value_range(
FIFTY_PERCENT_RANGE,
0..data.len() as u32,
&mut positions,
);
positions
});
}
#[bench]
fn bench_intfastfield_getrange_u64_1percent_hit(b: &mut Bencher) {
let data = get_data_50percent_item();
let data = data.iter().map(|el| *el as u64).collect::<Vec<_>>();
let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&data, CodecType::Bitpacked);
b.iter(|| {
let mut positions = Vec::new();
column.get_row_ids_for_value_range(
ONE_PERCENT_ITEM_RANGE,
0..data.len() as u32,
&mut positions,
);
positions
});
}
#[bench]
fn bench_intfastfield_getrange_u64_single_hit(b: &mut Bencher) {
let data = get_data_50percent_item();
let data = data.iter().map(|el| *el as u64).collect::<Vec<_>>();
let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&data, CodecType::Bitpacked);
b.iter(|| {
let mut positions = Vec::new();
column.get_row_ids_for_value_range(SINGLE_ITEM_RANGE, 0..data.len() as u32, &mut positions);
positions
});
}
#[bench]
fn bench_intfastfield_getrange_u64_hit_all(b: &mut Bencher) {
let data = get_data_50percent_item();
let data = data.iter().map(|el| *el as u64).collect::<Vec<_>>();
let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&data, CodecType::Bitpacked);
b.iter(|| {
let mut positions = Vec::new();
column.get_row_ids_for_value_range(0..=u64::MAX, 0..data.len() as u32, &mut positions);
positions
});
}
// U64 RANGE END
#[bench]
fn bench_intfastfield_stride7_vec(b: &mut Bencher) {
let permutation = generate_permutation();
let n = permutation.len();
b.iter(|| {
let mut a = 0u64;
for i in (0..n / 7).map(|val| val * 7) {
a += permutation[i as usize];
}
a
});
}
#[bench]
fn bench_intfastfield_stride7_fflookup(b: &mut Bencher) {
let permutation = generate_permutation();
let n = permutation.len();
let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&permutation, CodecType::Bitpacked);
b.iter(|| {
let mut a = 0;
for i in (0..n / 7).map(|val| val * 7) {
a += column.get_val(i as u32);
}
a
});
}
#[bench]
fn bench_intfastfield_scan_all_fflookup(b: &mut Bencher) {
let permutation = generate_permutation();
let n = permutation.len();
let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&permutation, CodecType::Bitpacked);
let column_ref = column.as_ref();
b.iter(|| {
let mut a = 0u64;
for i in 0u32..n as u32 {
a += column_ref.get_val(i);
}
a
});
}
#[bench]
fn bench_intfastfield_scan_all_fflookup_gcd(b: &mut Bencher) {
let permutation = generate_permutation_gcd();
let n = permutation.len();
let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&permutation, CodecType::Bitpacked);
b.iter(|| {
let mut a = 0u64;
for i in 0..n {
a += column.get_val(i as u32);
}
a
});
}
#[bench]
fn bench_intfastfield_scan_all_vec(b: &mut Bencher) {
let permutation = generate_permutation();
b.iter(|| {
let mut a = 0u64;
for i in 0..permutation.len() {
a += permutation[i as usize] as u64;
}
a
});
}

View File

@@ -0,0 +1,17 @@
[package]
name = "tantivy-columnar-cli"
version = "0.1.0"
edition = "2021"
license = "MIT"
[dependencies]
columnar = {path="../", package="tantivy-columnar"}
serde_json = "1"
serde_json_borrow = {git="https://github.com/PSeitz/serde_json_borrow/"}
serde = "1"
[workspace]
members = []
[profile.release]
debug = true

View File

@@ -0,0 +1,134 @@
use columnar::ColumnarWriter;
use columnar::NumericalValue;
use serde_json_borrow;
use std::fs::File;
use std::io;
use std::io::BufRead;
use std::io::BufReader;
use std::time::Instant;
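// Tracks the current dotted JSON path (e.g. "user.address.city") while walking nested objects.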
#[derive(Default)]
struct JsonStack {
path: String,
stack: Vec<usize>,
}
impl JsonStack {
fn push(&mut self, seg: &str) {
let len = self.path.len();
self.stack.push(len);
self.path.push('.');
self.path.push_str(seg);
}
fn pop(&mut self) {
if let Some(len) = self.stack.pop() {
self.path.truncate(len);
}
}
fn path(&self) -> &str {
&self.path[1..]
}
}
fn append_json_to_columnar(
doc: u32,
json_value: &serde_json_borrow::Value,
columnar: &mut ColumnarWriter,
stack: &mut JsonStack,
) -> usize {
let mut count = 0;
match json_value {
serde_json_borrow::Value::Null => {}
serde_json_borrow::Value::Bool(val) => {
columnar.record_numerical(
doc,
stack.path(),
NumericalValue::from(if *val { 1u64 } else { 0u64 }),
);
count += 1;
}
serde_json_borrow::Value::Number(num) => {
let numerical_value: NumericalValue = if let Some(num_i64) = num.as_i64() {
num_i64.into()
} else if let Some(num_u64) = num.as_u64() {
num_u64.into()
} else if let Some(num_f64) = num.as_f64() {
num_f64.into()
} else {
panic!();
};
count += 1;
columnar.record_numerical(
doc,
stack.path(),
numerical_value,
);
}
serde_json_borrow::Value::Str(msg) => {
columnar.record_str(
doc,
stack.path(),
msg,
);
count += 1;
},
serde_json_borrow::Value::Array(vals) => {
for val in vals {
count += append_json_to_columnar(doc, val, columnar, stack);
}
},
serde_json_borrow::Value::Object(json_map) => {
for (child_key, child_val) in json_map {
stack.push(child_key);
count += append_json_to_columnar(doc, child_val, columnar, stack);
stack.pop();
}
},
}
count
}
fn main() -> io::Result<()> {
let file = File::open("gh_small.json")?;
let mut reader = BufReader::new(file);
let mut line = String::with_capacity(100);
let mut columnar = columnar::ColumnarWriter::default();
let mut doc = 0;
let start = Instant::now();
let mut stack = JsonStack::default();
let mut total_count = 0;
let start_build = Instant::now();
loop {
line.clear();
let len = reader.read_line(&mut line)?;
if len == 0 {
break;
}
let Ok(json_value) = serde_json::from_str::<serde_json_borrow::Value>(&line) else { continue; };
total_count += append_json_to_columnar(doc, &json_value, &mut columnar, &mut stack);
doc += 1;
}
println!("Build in {:?}", start_build.elapsed());
println!("value count {total_count}");
let mut buffer = Vec::new();
let start_serialize = Instant::now();
columnar.serialize(doc, None, &mut buffer)?;
println!("Serialized in {:?}", start_serialize.elapsed());
println!("num docs: {doc}, {:?}", start.elapsed());
println!("buffer len {} MB", buffer.len() / 1_000_000);
let columnar = columnar::ColumnarReader::open(buffer)?;
for (column_name, dynamic_column) in columnar.list_columns()? {
let num_bytes = dynamic_column.num_bytes();
let typ = dynamic_column.column_type();
if num_bytes > 1_000_000 {
println!("{column_name} {typ:?} {} KB", num_bytes / 1_000);
}
}
println!("{} columns", columnar.num_columns());
Ok(())
}

columnar/src/TODO.md Normal file
View File

@@ -0,0 +1,47 @@
# zero to one
* revisit line codec
* add columns from schema on merge
* Plugging JSON
* replug examples
* move datetime to quickwit common
* switch to nanos
* reintroduce the gcd map.
# Perf and Size
* remove alloc in `ord_to_term`
* multivalued range queries restart from the beginning all of the time.
* re-add ZSTD compression for dictionaries
no systematic monotonic mapping
consider removing multilinear
f32?
adhoc solution for bool?
add metrics helper for aggregate. sum(row_id)
review inline absence/presence
improve perf of select using PDEP
compare with roaring bitmap/elias fano etc etc.
SIMD range? (see blog post)
Add alignment?
Consider another codec to bridge the gap between few and 5k elements
# Cleanup and rationalization
in benchmark, unify percent vs ratio, f32 vs f64.
investigate whether we should have better errors; io::Error is overused at the moment.
rename rank/select in unit tests
Review the public API via cargo doc
go through TODOs
remove all doc_id occurrences -> row_id
use the rank & select naming in unit tests branch.
multi-linear -> blockwise
linear codec -> simply a multiplication for the index column
rename columnar to something more explicit, like column_dictionary or columnar_table
rename fastfield -> column
document changes
rationalization FastFieldValue, HasColumnType
isolate u128_based and uniform naming
# Other
fix and enhance column-cli
# Santa claus
autodetect datetime ipaddr, plug customizable tokenizer.

View File

@@ -0,0 +1,100 @@
use std::io;
use std::ops::Deref;
use std::sync::Arc;
use sstable::{Dictionary, VoidSSTable};
use crate::column::Column;
use crate::RowId;
/// Dictionary encoded column.
///
/// The column simply gives access to a regular u64 column, in
/// which the values are term ordinals.
///
/// These ordinals are ids that uniquely identify the bytes stored in
/// the dictionary. These ordinals are small, and sorted in the same order
/// as the terms they refer to.
#[derive(Clone)]
pub struct BytesColumn {
pub(crate) dictionary: Arc<Dictionary<VoidSSTable>>,
pub(crate) term_ord_column: Column<u64>,
}
impl BytesColumn {
/// Fills the given `output` buffer with the term associated to the ordinal `ord`.
///
/// Returns `false` if the term does not exist (e.g. `term_ord` is greater than or equal to the
/// overall number of terms).
pub fn ord_to_bytes(&self, ord: u64, output: &mut Vec<u8>) -> io::Result<bool> {
self.dictionary.ord_to_term(ord, output)
}
/// Returns the number of rows in the column.
pub fn num_rows(&self) -> RowId {
self.term_ord_column.num_docs()
}
pub fn term_ords(&self, row_id: RowId) -> impl Iterator<Item = u64> + '_ {
self.term_ord_column.values_for_doc(row_id)
}
/// Returns the column of ordinals
pub fn ords(&self) -> &Column<u64> {
&self.term_ord_column
}
pub fn num_terms(&self) -> usize {
self.dictionary.num_terms()
}
pub fn dictionary(&self) -> &Dictionary<VoidSSTable> {
self.dictionary.as_ref()
}
}
#[derive(Clone)]
pub struct StrColumn(BytesColumn);
impl From<StrColumn> for BytesColumn {
fn from(str_column: StrColumn) -> BytesColumn {
str_column.0
}
}
impl StrColumn {
pub(crate) fn wrap(bytes_column: BytesColumn) -> StrColumn {
StrColumn(bytes_column)
}
pub fn dictionary(&self) -> &Dictionary<VoidSSTable> {
self.0.dictionary.as_ref()
}
/// Fills `output` with the term associated with `term_ord`; returns `false` if the ordinal is out of bounds.
pub fn ord_to_str(&self, term_ord: u64, output: &mut String) -> io::Result<bool> {
unsafe {
let buf = output.as_mut_vec();
if !self.0.dictionary.ord_to_term(term_ord, buf)? {
return Ok(false);
}
// TODO consider remove checks if it hurts performance.
if std::str::from_utf8(buf.as_slice()).is_err() {
buf.clear();
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"Not valid utf-8",
));
}
}
Ok(true)
}
}
impl Deref for StrColumn {
type Target = BytesColumn;
fn deref(&self) -> &Self::Target {
&self.0
}
}

columnar/src/column/mod.rs Normal file
View File

@@ -0,0 +1,161 @@
mod dictionary_encoded;
mod serialize;
use std::fmt::Debug;
use std::io::Write;
use std::ops::{Deref, Range, RangeInclusive};
use std::sync::Arc;
use common::BinarySerializable;
pub use dictionary_encoded::{BytesColumn, StrColumn};
pub use serialize::{
open_column_bytes, open_column_str, open_column_u128, open_column_u64,
serialize_column_mappable_to_u128, serialize_column_mappable_to_u64,
};
use crate::column_index::ColumnIndex;
use crate::column_values::monotonic_mapping::StrictlyMonotonicMappingToInternal;
use crate::column_values::{monotonic_map_column, ColumnValues};
use crate::{Cardinality, MonotonicallyMappableToU64, RowId};
#[derive(Clone)]
pub struct Column<T = u64> {
pub idx: ColumnIndex,
pub values: Arc<dyn ColumnValues<T>>,
}
impl<T: MonotonicallyMappableToU64> Column<T> {
pub fn to_u64_monotonic(self) -> Column<u64> {
let values = Arc::new(monotonic_map_column(
self.values,
StrictlyMonotonicMappingToInternal::<T>::new(),
));
Column {
idx: self.idx,
values,
}
}
}
impl<T: PartialOrd + Copy + Debug + Send + Sync + 'static> Column<T> {
#[inline]
pub fn get_cardinality(&self) -> Cardinality {
self.idx.get_cardinality()
}
pub fn num_docs(&self) -> RowId {
match &self.idx {
ColumnIndex::Empty { num_docs } => *num_docs,
ColumnIndex::Full => self.values.num_vals(),
ColumnIndex::Optional(optional_index) => optional_index.num_docs(),
ColumnIndex::Multivalued(col_index) => {
// The multivalued index stores the start row_id of each doc's values,
// and one extra entry at the end holding the total number of value rows.
col_index.num_docs()
}
}
}
pub fn min_value(&self) -> T {
self.values.min_value()
}
pub fn max_value(&self) -> T {
self.values.max_value()
}
pub fn first(&self, row_id: RowId) -> Option<T> {
self.values_for_doc(row_id).next()
}
pub fn values_for_doc(&self, row_id: RowId) -> impl Iterator<Item = T> + '_ {
self.value_row_ids(row_id)
.map(|value_row_id: RowId| self.values.get_val(value_row_id))
}
/// Get the docids of values which are in the provided value range.
#[inline]
pub fn get_docids_for_value_range(
&self,
value_range: RangeInclusive<T>,
selected_docid_range: Range<u32>,
doc_ids: &mut Vec<u32>,
) {
// convert passed docid range to row id range
let rowid_range = self.idx.docid_range_to_rowids(selected_docid_range.clone());
// Load rows
self.values
.get_row_ids_for_value_range(value_range, rowid_range, doc_ids);
// Convert rows to docids
self.idx
.select_batch_in_place(selected_docid_range.start, doc_ids);
}
/// Fills the output vector with the (possibly multiple) values associated with
/// `row_id`.
///
/// This method clears the `output` vector.
pub fn fill_vals(&self, row_id: RowId, output: &mut Vec<T>) {
output.clear();
output.extend(self.values_for_doc(row_id));
}
pub fn first_or_default_col(self, default_value: T) -> Arc<dyn ColumnValues<T>> {
Arc::new(FirstValueWithDefault {
column: self,
default_value,
})
}
}
impl<T> Deref for Column<T> {
type Target = ColumnIndex;
fn deref(&self) -> &Self::Target {
&self.idx
}
}
impl BinarySerializable for Cardinality {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> std::io::Result<()> {
self.to_code().serialize(writer)
}
fn deserialize<R: std::io::Read>(reader: &mut R) -> std::io::Result<Self> {
let cardinality_code = u8::deserialize(reader)?;
let cardinality = Cardinality::try_from_code(cardinality_code)?;
Ok(cardinality)
}
}
// TODO simplify or optimize
struct FirstValueWithDefault<T: Copy> {
column: Column<T>,
default_value: T,
}
impl<T: PartialOrd + Debug + Send + Sync + Copy + 'static> ColumnValues<T>
for FirstValueWithDefault<T>
{
fn get_val(&self, idx: u32) -> T {
self.column.first(idx).unwrap_or(self.default_value)
}
fn min_value(&self) -> T {
self.column.values.min_value()
}
fn max_value(&self) -> T {
self.column.values.max_value()
}
fn num_vals(&self) -> u32 {
match &self.column.idx {
ColumnIndex::Empty { .. } => 0u32,
ColumnIndex::Full => self.column.values.num_vals(),
ColumnIndex::Optional(optional_idx) => optional_idx.num_docs(),
ColumnIndex::Multivalued(multivalue_idx) => multivalue_idx.num_docs(),
}
}
}
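To make the `Column<T>` API above concrete, a hypothetical usage sketch; `col` is assumed to be a `Column<u64>`, e.g. obtained from `open_column_u64`:
```rust
// `col: Column<u64>` is assumed to come from e.g. `open_column_u64`.
let first_val: Option<u64> = col.first(3 /* row_id */);
let all_vals: Vec<u64> = col.values_for_doc(3).collect();

// Collect the ids of all docs holding at least one value in 10..=20.
let mut doc_ids: Vec<u32> = Vec::new();
col.get_docids_for_value_range(10..=20, 0..col.num_docs(), &mut doc_ids);
```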

View File

@@ -0,0 +1,94 @@
use std::io;
use std::io::Write;
use std::sync::Arc;
use common::OwnedBytes;
use sstable::Dictionary;
use crate::column::{BytesColumn, Column};
use crate::column_index::{serialize_column_index, SerializableColumnIndex};
use crate::column_values::{
load_u64_based_column_values, serialize_column_values_u128, serialize_u64_based_column_values,
CodecType, MonotonicallyMappableToU128, MonotonicallyMappableToU64,
};
use crate::iterable::Iterable;
use crate::StrColumn;
pub fn serialize_column_mappable_to_u128<T: MonotonicallyMappableToU128>(
column_index: SerializableColumnIndex<'_>,
iterable: &dyn Iterable<T>,
output: &mut impl Write,
) -> io::Result<()> {
let column_index_num_bytes = serialize_column_index(column_index, output)?;
serialize_column_values_u128(iterable, output)?;
output.write_all(&column_index_num_bytes.to_le_bytes())?;
Ok(())
}
pub fn serialize_column_mappable_to_u64<T: MonotonicallyMappableToU64>(
column_index: SerializableColumnIndex<'_>,
column_values: &impl Iterable<T>,
output: &mut impl Write,
) -> io::Result<()> {
let column_index_num_bytes = serialize_column_index(column_index, output)?;
serialize_u64_based_column_values(
column_values,
&[CodecType::Bitpacked, CodecType::BlockwiseLinear],
output,
)?;
output.write_all(&column_index_num_bytes.to_le_bytes())?;
Ok(())
}
pub fn open_column_u64<T: MonotonicallyMappableToU64>(bytes: OwnedBytes) -> io::Result<Column<T>> {
let (body, column_index_num_bytes_payload) = bytes.rsplit(4);
let column_index_num_bytes = u32::from_le_bytes(
column_index_num_bytes_payload
.as_slice()
.try_into()
.unwrap(),
);
let (column_index_data, column_values_data) = body.split(column_index_num_bytes as usize);
let column_index = crate::column_index::open_column_index(column_index_data)?;
let column_values = load_u64_based_column_values(column_values_data)?;
Ok(Column {
idx: column_index,
values: column_values,
})
}
pub fn open_column_u128<T: MonotonicallyMappableToU128>(
bytes: OwnedBytes,
) -> io::Result<Column<T>> {
let (body, column_index_num_bytes_payload) = bytes.rsplit(4);
let column_index_num_bytes = u32::from_le_bytes(
column_index_num_bytes_payload
.as_slice()
.try_into()
.unwrap(),
);
let (column_index_data, column_values_data) = body.split(column_index_num_bytes as usize);
let column_index = crate::column_index::open_column_index(column_index_data)?;
let column_values = crate::column_values::open_u128_mapped(column_values_data)?;
Ok(Column {
idx: column_index,
values: column_values,
})
}
pub fn open_column_bytes(data: OwnedBytes) -> io::Result<BytesColumn> {
let (body, dictionary_len_bytes) = data.rsplit(4);
let dictionary_len = u32::from_le_bytes(dictionary_len_bytes.as_slice().try_into().unwrap());
let (dictionary_bytes, column_bytes) = body.split(dictionary_len as usize);
let dictionary = Arc::new(Dictionary::from_bytes(dictionary_bytes)?);
let term_ord_column = crate::column::open_column_u64::<u64>(column_bytes)?;
Ok(BytesColumn {
dictionary,
term_ord_column,
})
}
pub fn open_column_str(data: OwnedBytes) -> io::Result<StrColumn> {
let bytes_column = open_column_bytes(data)?;
Ok(StrColumn::wrap(bytes_column))
}
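The `open_column_*` functions above all rely on the same trailing-length layout. A minimal sketch of that footer parsing, using plain slices instead of `OwnedBytes` for illustration:
```rust
// Layout: [column index bytes][column value bytes][index_num_bytes: u32 LE]
fn split_column(bytes: &[u8]) -> (&[u8], &[u8]) {
    let (body, len_bytes) = bytes.split_at(bytes.len() - 4);
    let index_num_bytes = u32::from_le_bytes(len_bytes.try_into().unwrap()) as usize;
    // Returns (column_index_data, column_values_data).
    body.split_at(index_num_bytes)
}
```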

View File

@@ -0,0 +1,136 @@
mod shuffled;
mod stacked;
use shuffled::merge_column_index_shuffled;
use stacked::merge_column_index_stacked;
use crate::column_index::SerializableColumnIndex;
use crate::{Cardinality, ColumnIndex, MergeRowOrder};
// For simplification, we never have cardinality go down due to deletes.
fn detect_cardinality(columns: &[Option<ColumnIndex>]) -> Cardinality {
columns
.iter()
.flatten()
.map(ColumnIndex::get_cardinality)
.max()
.unwrap_or(Cardinality::Full)
}
pub fn merge_column_index<'a>(
columns: &'a [Option<ColumnIndex>],
merge_row_order: &'a MergeRowOrder,
) -> SerializableColumnIndex<'a> {
// For simplification, we do not try to detect whether the cardinality could be
// downgraded thanks to deletes.
let cardinality_after_merge = detect_cardinality(columns);
match merge_row_order {
MergeRowOrder::Stack(stack_merge_order) => {
merge_column_index_stacked(columns, cardinality_after_merge, stack_merge_order)
}
MergeRowOrder::Shuffled(complex_merge_order) => {
merge_column_index_shuffled(columns, cardinality_after_merge, complex_merge_order)
}
}
}
// TODO actually, the shuffled code path is a bit too general.
// In practice, we do not really shuffle everything.
// The merge order restricted to a specific column keeps the original row order.
//
// This may offer some optimization that we have not explored yet.
#[cfg(test)]
mod tests {
use crate::column_index::merge::detect_cardinality;
use crate::column_index::multivalued_index::MultiValueIndex;
use crate::column_index::{merge_column_index, OptionalIndex, SerializableColumnIndex};
use crate::{Cardinality, ColumnIndex, MergeRowOrder, RowAddr, RowId, ShuffleMergeOrder};
#[test]
fn test_detect_cardinality() {
assert_eq!(detect_cardinality(&[]), Cardinality::Full);
let optional_index: ColumnIndex = OptionalIndex::for_test(1, &[]).into();
let multivalued_index: ColumnIndex = MultiValueIndex::for_test(&[0, 1]).into();
assert_eq!(
detect_cardinality(&[Some(optional_index.clone()), None]),
Cardinality::Optional
);
assert_eq!(
detect_cardinality(&[Some(optional_index.clone()), Some(ColumnIndex::Full)]),
Cardinality::Optional
);
assert_eq!(
detect_cardinality(&[Some(multivalued_index.clone()), None]),
Cardinality::Multivalued
);
assert_eq!(
detect_cardinality(&[
Some(multivalued_index.clone()),
Some(optional_index.clone())
]),
Cardinality::Multivalued
);
assert_eq!(
detect_cardinality(&[Some(optional_index), Some(multivalued_index)]),
Cardinality::Multivalued
);
}
#[test]
fn test_merge_index_multivalued_sorted() {
let column_indexes: Vec<Option<ColumnIndex>> =
vec![Some(MultiValueIndex::for_test(&[0, 2, 5]).into())];
let merge_row_order: MergeRowOrder = ShuffleMergeOrder::for_test(
&[2],
vec![
RowAddr {
segment_ord: 0u32,
row_id: 1u32,
},
RowAddr {
segment_ord: 0u32,
row_id: 0u32,
},
],
)
.into();
let merged_column_index = merge_column_index(&column_indexes[..], &merge_row_order);
let SerializableColumnIndex::Multivalued(start_index_iterable) = merged_column_index
else { panic!("Expected a multivalued index") };
let start_indexes: Vec<RowId> = start_index_iterable.boxed_iter().collect();
assert_eq!(&start_indexes, &[0, 3, 5]);
}
#[test]
fn test_merge_index_multivalued_sorted_several_segment() {
let column_indexes: Vec<Option<ColumnIndex>> = vec![
Some(MultiValueIndex::for_test(&[0, 2, 5]).into()),
None,
Some(MultiValueIndex::for_test(&[0, 1, 4]).into()),
];
let merge_row_order: MergeRowOrder = ShuffleMergeOrder::for_test(
&[2, 0, 2],
vec![
RowAddr {
segment_ord: 2u32,
row_id: 1u32,
},
RowAddr {
segment_ord: 0u32,
row_id: 0u32,
},
RowAddr {
segment_ord: 2u32,
row_id: 0u32,
},
],
)
.into();
let merged_column_index = merge_column_index(&column_indexes[..], &merge_row_order);
let SerializableColumnIndex::Multivalued(start_index_iterable) = merged_column_index
else { panic!("Expected a multivalued index") };
let start_indexes: Vec<RowId> = start_index_iterable.boxed_iter().collect();
assert_eq!(&start_indexes, &[0, 3, 5, 6]);
}
}


@@ -0,0 +1,168 @@
use std::iter;
use crate::column_index::{SerializableColumnIndex, Set};
use crate::iterable::Iterable;
use crate::{Cardinality, ColumnIndex, RowId, ShuffleMergeOrder};
pub fn merge_column_index_shuffled<'a>(
column_indexes: &'a [Option<ColumnIndex>],
cardinality_after_merge: Cardinality,
shuffle_merge_order: &'a ShuffleMergeOrder,
) -> SerializableColumnIndex<'a> {
match cardinality_after_merge {
Cardinality::Full => SerializableColumnIndex::Full,
Cardinality::Optional => {
let non_null_row_ids =
merge_column_index_shuffled_optional(column_indexes, shuffle_merge_order);
SerializableColumnIndex::Optional {
non_null_row_ids,
num_rows: shuffle_merge_order.num_rows(),
}
}
Cardinality::Multivalued => {
let multivalue_start_index =
merge_column_index_shuffled_multivalued(column_indexes, shuffle_merge_order);
SerializableColumnIndex::Multivalued(multivalue_start_index)
}
}
}
/// Merge several column indexes into one, ordering rows according to the merge_order passed as
/// argument. While it is true that the `merge_order` may imply deletes and hence could in theory
/// turn a multivalued index into an optional one, this is not supported today for simplification.
///
/// In other words, the column_indexes passed as argument may NOT be multivalued.
fn merge_column_index_shuffled_optional<'a>(
column_indexes: &'a [Option<ColumnIndex>],
merge_order: &'a ShuffleMergeOrder,
) -> Box<dyn Iterable<RowId> + 'a> {
Box::new(ShuffledOptionalIndex {
column_indexes,
merge_order,
})
}
struct ShuffledOptionalIndex<'a> {
column_indexes: &'a [Option<ColumnIndex>],
merge_order: &'a ShuffleMergeOrder,
}
impl<'a> Iterable<u32> for ShuffledOptionalIndex<'a> {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = u32> + '_> {
Box::new(self.merge_order
.iter_new_to_old_row_addrs()
.enumerate()
.filter_map(|(new_row_id, old_row_addr)| {
let Some(column_index) = &self.column_indexes[old_row_addr.segment_ord as usize] else {
return None;
};
let row_id = new_row_id as u32;
if column_index.has_value(old_row_addr.row_id) {
Some(row_id)
} else {
None
}
}))
}
}
fn merge_column_index_shuffled_multivalued<'a>(
column_indexes: &'a [Option<ColumnIndex>],
merge_order: &'a ShuffleMergeOrder,
) -> Box<dyn Iterable<RowId> + 'a> {
Box::new(ShuffledMultivaluedIndex {
column_indexes,
merge_order,
})
}
struct ShuffledMultivaluedIndex<'a> {
column_indexes: &'a [Option<ColumnIndex>],
merge_order: &'a ShuffleMergeOrder,
}
fn iter_num_values<'a>(
column_indexes: &'a [Option<ColumnIndex>],
merge_order: &'a ShuffleMergeOrder,
) -> impl Iterator<Item = u32> + 'a {
merge_order.iter_new_to_old_row_addrs().map(|row_addr| {
let Some(column_index) = &column_indexes[row_addr.segment_ord as usize] else {
// No values in the entire column, so there are 0 values associated with this row.
return 0u32;
};
match column_index {
ColumnIndex::Empty { .. } => 0u32,
ColumnIndex::Full => 1,
ColumnIndex::Optional(optional_index) => {
u32::from(optional_index.contains(row_addr.row_id))
}
ColumnIndex::Multivalued(multivalued_index) => {
multivalued_index.range(row_addr.row_id).len() as u32
}
}
})
}
/// Transforms an iterator over the number of vals per row (with `num_rows` elements)
/// into a `start_offset` iterator starting at 0 (with `num_rows + 1` elements).
fn integrate_num_vals(num_vals: impl Iterator<Item = u32>) -> impl Iterator<Item = RowId> {
iter::once(0u32).chain(num_vals.scan(0, |state, num_vals| {
*state += num_vals;
Some(*state)
}))
}
impl<'a> Iterable<u32> for ShuffledMultivaluedIndex<'a> {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = u32> + '_> {
let num_vals_per_row = iter_num_values(self.column_indexes, self.merge_order);
Box::new(integrate_num_vals(num_vals_per_row))
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::column_index::OptionalIndex;
use crate::RowAddr;
#[test]
fn test_integrate_num_vals_empty() {
assert!(integrate_num_vals(iter::empty()).eq(iter::once(0)));
}
#[test]
fn test_integrate_num_vals_one_el() {
assert!(integrate_num_vals(iter::once(10)).eq([0, 10].into_iter()));
}
#[test]
fn test_integrate_num_vals_several() {
assert!(integrate_num_vals([3, 0, 10, 20].into_iter()).eq([0, 3, 3, 13, 33].into_iter()));
}
#[test]
fn test_merge_column_index_optional_shuffle() {
let optional_index: ColumnIndex = OptionalIndex::for_test(2, &[0]).into();
let column_indexes = vec![Some(optional_index), Some(ColumnIndex::Full)];
let row_addrs = vec![
RowAddr {
segment_ord: 0u32,
row_id: 1u32,
},
RowAddr {
segment_ord: 1u32,
row_id: 0u32,
},
];
let shuffle_merge_order = ShuffleMergeOrder::for_test(&[2, 1], row_addrs);
let serializable_index = merge_column_index_shuffled(
&column_indexes[..],
Cardinality::Optional,
&shuffle_merge_order,
);
let SerializableColumnIndex::Optional { non_null_row_ids, num_rows } = serializable_index else { panic!() };
assert_eq!(num_rows, 2);
let non_null_rows: Vec<RowId> = non_null_row_ids.boxed_iter().collect();
assert_eq!(&non_null_rows, &[1]);
}
}


@@ -0,0 +1,156 @@
use std::iter;
use crate::column_index::{SerializableColumnIndex, Set};
use crate::iterable::Iterable;
use crate::{Cardinality, ColumnIndex, RowId, StackMergeOrder};
/// Simple case:
/// The new mapping just consists of stacking the different column indexes.
///
/// There is no sorting and there are no deletes involved.
pub fn merge_column_index_stacked<'a>(
columns: &'a [Option<ColumnIndex>],
cardinality_after_merge: Cardinality,
stack_merge_order: &'a StackMergeOrder,
) -> SerializableColumnIndex<'a> {
match cardinality_after_merge {
Cardinality::Full => SerializableColumnIndex::Full,
Cardinality::Optional => SerializableColumnIndex::Optional {
non_null_row_ids: Box::new(StackedOptionalIndex {
columns,
stack_merge_order,
}),
num_rows: stack_merge_order.num_rows(),
},
Cardinality::Multivalued => {
let stacked_multivalued_index = StackedMultivaluedIndex {
columns,
stack_merge_order,
};
SerializableColumnIndex::Multivalued(Box::new(stacked_multivalued_index))
}
}
}
struct StackedOptionalIndex<'a> {
columns: &'a [Option<ColumnIndex>],
stack_merge_order: &'a StackMergeOrder,
}
impl<'a> Iterable<RowId> for StackedOptionalIndex<'a> {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = RowId> + 'a> {
Box::new(
self.columns
.iter()
.enumerate()
.flat_map(|(columnar_id, column_index_opt)| {
let columnar_row_range = self.stack_merge_order.columnar_range(columnar_id);
let rows_it: Box<dyn Iterator<Item = RowId>> = match column_index_opt {
Some(ColumnIndex::Full) => Box::new(columnar_row_range),
Some(ColumnIndex::Optional(optional_index)) => Box::new(
optional_index
.iter_rows()
.map(move |row_id: RowId| columnar_row_range.start + row_id),
),
Some(ColumnIndex::Multivalued(_)) => {
panic!("No multivalued index is allowed when stacking column index");
}
None | Some(ColumnIndex::Empty { .. }) => Box::new(std::iter::empty()),
};
rows_it
}),
)
}
}
#[derive(Clone, Copy)]
struct StackedMultivaluedIndex<'a> {
columns: &'a [Option<ColumnIndex>],
stack_merge_order: &'a StackMergeOrder,
}
fn convert_column_opt_to_multivalued_index<'a>(
column_index_opt: Option<&'a ColumnIndex>,
num_rows: RowId,
) -> Box<dyn Iterator<Item = RowId> + 'a> {
match column_index_opt {
None | Some(ColumnIndex::Empty { .. }) => {
Box::new(iter::repeat(0u32).take(num_rows as usize + 1))
}
Some(ColumnIndex::Full) => Box::new(0..num_rows + 1),
Some(ColumnIndex::Optional(optional_index)) => {
Box::new(
(0..num_rows)
// TODO optimize
.map(|row_id| optional_index.rank(row_id))
.chain(std::iter::once(optional_index.num_non_nulls())),
)
}
Some(ColumnIndex::Multivalued(multivalued_index)) => {
multivalued_index.start_index_column.iter()
}
}
}
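// Worked example of the conversion above, with num_rows = 3:
// - Full                    -> [0, 1, 2, 3] (one value per row)
// - Optional containing {1} -> [0, 0, 1, 1] (row 1 owns value row 0..1)
// - None / Empty            -> [0, 0, 0, 0] (every row is empty)
// Each output is a start-offset list with num_rows + 1 entries.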
impl<'a> Iterable<RowId> for StackedMultivaluedIndex<'a> {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = RowId> + '_> {
let multivalued_indexes =
self.columns
.iter()
.map(Option::as_ref)
.enumerate()
.map(|(columnar_id, column_opt)| {
let num_rows =
self.stack_merge_order.columnar_range(columnar_id).len() as RowId;
convert_column_opt_to_multivalued_index(column_opt, num_rows)
});
stack_multivalued_indexes(multivalued_indexes)
}
}
// Refactor me
fn stack_multivalued_indexes<'a>(
mut multivalued_indexes: impl Iterator<Item = Box<dyn Iterator<Item = RowId> + 'a>> + 'a,
) -> Box<dyn Iterator<Item = RowId> + 'a> {
let mut offset = 0;
let mut last_row_id = 0;
let mut current_it = multivalued_indexes.next();
Box::new(std::iter::from_fn(move || loop {
let Some(multivalued_index) = current_it.as_mut() else {
return None;
};
if let Some(row_id) = multivalued_index.next() {
last_row_id = offset + row_id;
return Some(last_row_id);
}
offset = last_row_id;
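// The current iterator is exhausted: advance to the next column's iterator,
// consuming its leading 0 so the stacked offsets keep accumulating (shifted by
// `offset`) instead of restarting at the new column's local values.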
loop {
current_it = multivalued_indexes.next();
if current_it.as_mut()?.next().is_some() {
break;
}
}
}))
}
#[cfg(test)]
mod tests {
use crate::RowId;
fn it<'a>(row_ids: &'a [RowId]) -> Box<dyn Iterator<Item = RowId> + 'a> {
Box::new(row_ids.iter().copied())
}
#[test]
fn test_stack() {
let columns = [
it(&[0u32, 0u32]),
it(&[0u32, 1u32, 1u32, 4u32]),
it(&[0u32, 3u32, 5u32]),
it(&[0u32, 4u32]),
]
.into_iter();
let start_offsets: Vec<RowId> = super::stack_multivalued_indexes(columns).collect();
assert_eq!(start_offsets, &[0, 0, 1, 1, 4, 7, 9, 13]);
}
}


@@ -0,0 +1,115 @@
mod merge;
mod multivalued_index;
mod optional_index;
mod serialize;
use std::ops::Range;
pub use merge::merge_column_index;
pub use optional_index::{OptionalIndex, Set};
pub use serialize::{open_column_index, serialize_column_index, SerializableColumnIndex};
use crate::column_index::multivalued_index::MultiValueIndex;
use crate::{Cardinality, DocId, RowId};
#[derive(Clone)]
pub enum ColumnIndex {
Empty {
num_docs: u32,
},
Full,
Optional(OptionalIndex),
/// In addition, at index num_rows, an extra value is added
/// containing the overall number of values.
Multivalued(MultiValueIndex),
}
impl From<OptionalIndex> for ColumnIndex {
fn from(optional_index: OptionalIndex) -> ColumnIndex {
ColumnIndex::Optional(optional_index)
}
}
impl From<MultiValueIndex> for ColumnIndex {
fn from(multi_value_index: MultiValueIndex) -> ColumnIndex {
ColumnIndex::Multivalued(multi_value_index)
}
}
impl ColumnIndex {
#[inline]
pub fn get_cardinality(&self) -> Cardinality {
match self {
ColumnIndex::Empty { .. } => Cardinality::Optional,
ColumnIndex::Full => Cardinality::Full,
ColumnIndex::Optional(_) => Cardinality::Optional,
ColumnIndex::Multivalued(_) => Cardinality::Multivalued,
}
}
/// Returns true if and only if there is at least one value associated with the row.
pub fn has_value(&self, doc_id: DocId) -> bool {
match self {
ColumnIndex::Empty { .. } => false,
ColumnIndex::Full => true,
ColumnIndex::Optional(optional_index) => optional_index.contains(doc_id),
ColumnIndex::Multivalued(multivalued_index) => {
!multivalued_index.range(doc_id).is_empty()
}
}
}
pub fn value_row_ids(&self, doc_id: DocId) -> Range<RowId> {
match self {
ColumnIndex::Empty { .. } => 0..0,
ColumnIndex::Full => doc_id..doc_id + 1,
ColumnIndex::Optional(optional_index) => {
if let Some(val) = optional_index.rank_if_exists(doc_id) {
val..val + 1
} else {
0..0
}
}
ColumnIndex::Multivalued(multivalued_index) => multivalued_index.range(doc_id),
}
}
pub fn docid_range_to_rowids(&self, doc_id: Range<DocId>) -> Range<RowId> {
match self {
ColumnIndex::Empty { .. } => 0..0,
ColumnIndex::Full => doc_id,
ColumnIndex::Optional(optional_index) => {
let row_start = optional_index.rank(doc_id.start);
let row_end = optional_index.rank(doc_id.end);
row_start..row_end
}
ColumnIndex::Multivalued(multivalued_index) => {
let end_docid = doc_id.end.min(multivalued_index.num_docs() - 1) + 1;
let start_docid = doc_id.start.min(end_docid);
let row_start = multivalued_index.start_index_column.get_val(start_docid);
let row_end = multivalued_index.start_index_column.get_val(end_docid);
row_start..row_end
}
}
}
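// Worked example (a sketch): for a multivalued index with start offsets [0, 2, 5, 6]
// (i.e. 3 docs), docid_range_to_rowids(1..3) clamps end_docid to 3, reads
// get_val(1) = 2 and get_val(3) = 6, and returns the value rows 2..6.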
pub fn select_batch_in_place(&self, doc_id_start: DocId, rank_ids: &mut Vec<RowId>) {
match self {
ColumnIndex::Empty { .. } => {
rank_ids.clear();
}
ColumnIndex::Full => {
// No need to do anything:
// value_idx and row_idx are the same.
}
ColumnIndex::Optional(optional_index) => {
optional_index.select_batch(&mut rank_ids[..]);
}
ColumnIndex::Multivalued(multivalued_index) => {
multivalued_index.select_batch_in_place(doc_id_start, rank_ids)
}
}
}
}


@@ -0,0 +1,141 @@
use std::io;
use std::io::Write;
use std::ops::Range;
use std::sync::Arc;
use common::OwnedBytes;
use crate::column_values::{
load_u64_based_column_values, serialize_u64_based_column_values, CodecType, ColumnValues,
};
use crate::iterable::Iterable;
use crate::{DocId, RowId};
pub fn serialize_multivalued_index(
multivalued_index: &dyn Iterable<RowId>,
output: &mut impl Write,
) -> io::Result<()> {
serialize_u64_based_column_values(
multivalued_index,
&[CodecType::Bitpacked, CodecType::Linear],
output,
)?;
Ok(())
}
pub fn open_multivalued_index(bytes: OwnedBytes) -> io::Result<MultiValueIndex> {
let start_index_column: Arc<dyn ColumnValues<RowId>> = load_u64_based_column_values(bytes)?;
Ok(MultiValueIndex { start_index_column })
}
#[derive(Clone)]
/// Index to resolve the value range for a given doc_id.
/// Offsets start at 0.
pub struct MultiValueIndex {
pub start_index_column: Arc<dyn crate::ColumnValues<RowId>>,
}
impl From<Arc<dyn ColumnValues<RowId>>> for MultiValueIndex {
fn from(start_index_column: Arc<dyn ColumnValues<RowId>>) -> Self {
MultiValueIndex { start_index_column }
}
}
impl MultiValueIndex {
pub fn for_test(start_offsets: &[RowId]) -> MultiValueIndex {
let mut buffer = Vec::new();
serialize_multivalued_index(&start_offsets, &mut buffer).unwrap();
let bytes = OwnedBytes::new(buffer);
open_multivalued_index(bytes).unwrap()
}
/// Returns `[start, end)`, such that the values associated with
/// the given document are `start..end`.
#[inline]
pub(crate) fn range(&self, doc_id: DocId) -> Range<RowId> {
let start = self.start_index_column.get_val(doc_id);
let end = self.start_index_column.get_val(doc_id + 1);
start..end
}
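// Worked example: start offsets [0, 10, 12, 15] describe 3 docs, with doc 0 owning
// value rows 0..10, doc 1 owning 10..12 and doc 2 owning 12..15.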
/// Returns the number of documents in the index.
#[inline]
pub fn num_docs(&self) -> u32 {
self.start_index_column.num_vals() - 1
}
/// Converts a list of ranks (row ids of values) in a 1:n index to the corresponding list of
/// docids. Positions are converted in place to docids.
///
/// Since there is no index from value position to docid, only from docid to value position
/// range, we scan the index.
///
/// Correctness: positions need to be sorted. The start index column needs to contain
/// monotonically increasing positions.
///
/// TODO: Instead of a linear scan we can employ an exponential search into binary search to
/// match a docid to its value position.
#[allow(clippy::bool_to_int_with_if)]
pub(crate) fn select_batch_in_place(&self, docid_start: DocId, ranks: &mut Vec<u32>) {
if ranks.is_empty() {
return;
}
let mut cur_doc = docid_start;
let mut last_doc = None;
assert!(self.start_index_column.get_val(docid_start) <= ranks[0]);
let mut write_doc_pos = 0;
for i in 0..ranks.len() {
let pos = ranks[i];
loop {
let end = self.start_index_column.get_val(cur_doc + 1);
if end > pos {
ranks[write_doc_pos] = cur_doc;
write_doc_pos += if last_doc == Some(cur_doc) { 0 } else { 1 };
last_doc = Some(cur_doc);
break;
}
cur_doc += 1;
}
}
ranks.truncate(write_doc_pos);
}
}
#[cfg(test)]
mod tests {
use std::ops::Range;
use std::sync::Arc;
use super::MultiValueIndex;
use crate::column_values::IterColumn;
use crate::{ColumnValues, RowId};
fn index_to_pos_helper(
index: &MultiValueIndex,
doc_id_range: Range<u32>,
positions: &[u32],
) -> Vec<u32> {
let mut positions = positions.to_vec();
index.select_batch_in_place(doc_id_range.start, &mut positions);
positions
}
#[test]
fn test_positions_to_docid() {
let offsets: Vec<RowId> = vec![0, 10, 12, 15, 22, 23]; // docid values are [0..10, 10..12, 12..15, etc.]
let column: Arc<dyn ColumnValues<RowId>> = Arc::new(IterColumn::from(offsets.into_iter()));
let index = MultiValueIndex::from(column);
assert_eq!(index.num_docs(), 5);
let positions = &[10u32, 11, 15, 20, 21, 22];
assert_eq!(index_to_pos_helper(&index, 0..5, positions), vec![1, 3, 4]);
assert_eq!(index_to_pos_helper(&index, 1..5, positions), vec![1, 3, 4]);
assert_eq!(index_to_pos_helper(&index, 0..5, &[9]), vec![0]);
assert_eq!(index_to_pos_helper(&index, 1..5, &[10]), vec![1]);
assert_eq!(index_to_pos_helper(&index, 1..5, &[11]), vec![1]);
assert_eq!(index_to_pos_helper(&index, 2..5, &[12]), vec![2]);
assert_eq!(index_to_pos_helper(&index, 2..5, &[12, 14]), vec![2]);
assert_eq!(index_to_pos_helper(&index, 2..5, &[12, 14, 15]), vec![2, 3]);
}
}


@@ -0,0 +1,515 @@
use std::io::{self, Write};
use std::sync::Arc;
mod set;
mod set_block;
use common::{BinarySerializable, OwnedBytes, VInt};
pub use set::{SelectCursor, Set, SetCodec};
use set_block::{
DenseBlock, DenseBlockCodec, SparseBlock, SparseBlockCodec, DENSE_BLOCK_NUM_BYTES,
};
use crate::iterable::Iterable;
use crate::{DocId, InvalidData, RowId};
/// The threshold for the number of elements after which we switch to dense block encoding.
///
/// We simply pick the value that minimizes the size of the blocks.
const DENSE_BLOCK_THRESHOLD: u32 =
set_block::DENSE_BLOCK_NUM_BYTES / std::mem::size_of::<u16>() as u32; //< 5_120
const ELEMENTS_PER_BLOCK: u32 = u16::MAX as u32 + 1;
const BLOCK_SIZE: RowId = 1 << 16;
#[derive(Copy, Clone, Debug)]
struct BlockMeta {
non_null_rows_before_block: u32,
start_byte_offset: u32,
block_variant: BlockVariant,
}
#[derive(Clone, Copy, Debug)]
enum BlockVariant {
Dense,
Sparse { num_vals: u16 },
}
impl BlockVariant {
pub fn empty() -> Self {
Self::Sparse { num_vals: 0 }
}
pub fn num_bytes_in_block(&self) -> u32 {
match *self {
BlockVariant::Dense => set_block::DENSE_BLOCK_NUM_BYTES,
BlockVariant::Sparse { num_vals } => num_vals as u32 * 2,
}
}
}
/// This codec is inspired by roaring bitmaps.
/// In the dense blocks, however, in order to accelerate `select`
/// we interleave an offset over two bytes. (more on this below)
///
/// The lower 16 bits of doc ids are stored as u16 while the upper 16 bits are given by the block
/// id. Each block contains 1<<16 docids.
///
/// # Serialized Data Layout
/// The data starts with the block data. Each block is either dense or sparse encoded, depending on
/// the number of values in the block. A block is sparse when it contains fewer than
/// DENSE_BLOCK_THRESHOLD (5120) values.
/// [Sparse data block | dense data block, .. #repeat*; Desc: Either a sparse or dense encoded
/// block]
/// ### Sparse block data
/// [u16 LE, .. #repeat*; Desc: Positions with values in a block]
/// ### Dense block data
/// [Dense codec for the whole block; Desc: Similar to a bitvec(0..ELEMENTS_PER_BLOCK) + Metadata
/// for faster lookups. See dense.rs]
///
/// The data is followed by block metadata, to know which area of the raw block data belongs to
/// which block. Only metadata for blocks with elements is recorded to
/// keep the overhead low for scenarios with many very sparse columns. The block metadata consists
/// of the block index and the number of values in the block. Since we don't store empty blocks,
/// num_vals is stored decremented by 1, e.g. a stored 0 means 1 value.
///
/// The last u16 stores the number of metadata blocks.
/// [u16 LE, .. #repeat*; Desc: Positions with values in a block][(u16 LE, u16 LE), .. #repeat*;
/// Desc: (Block Id u16, Num Elements u16)][u16 LE; Desc: num blocks with values u16]
///
/// # Opening
/// When opening, the block metadata is expanded into a `Vec<BlockMeta>`, where the index is
/// the block index. For each block, `start_byte_offset` and the number of non-null rows
/// before the block are computed.
#[derive(Clone)]
pub struct OptionalIndex {
num_rows: RowId,
num_non_null_rows: RowId,
block_data: OwnedBytes,
block_metas: Arc<[BlockMeta]>,
}
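// Worked example of the serialized layout sketched above: for non-null rows
// {3, 70_000} with num_rows = 200_000, row 3 falls into block 0 and row 70_000 into
// block 1 (70_000 - 65_536 = 4_464). Both blocks hold fewer than
// DENSE_BLOCK_THRESHOLD values and are therefore sparse:
//   VInt(200_000) | [3u16 LE] | [4_464u16 LE]
//   | (block_id = 0, num_vals - 1 = 0) | (block_id = 1, num_vals - 1 = 0)
//   | 2u16 (number of metadata blocks)
// Blocks 2 and 3 (covering rows up to 200_000) are empty and hence not stored.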
/// Splits a value address into lower and upper 16bits.
/// The lower 16 bits are the value in the block
/// The upper 16 bits are the block index
#[derive(Copy, Debug, Clone)]
struct RowAddr {
block_id: u16,
in_block_row_id: u16,
}
#[inline(always)]
fn row_addr_from_row_id(row_id: RowId) -> RowAddr {
RowAddr {
block_id: (row_id / BLOCK_SIZE) as u16,
in_block_row_id: (row_id % BLOCK_SIZE) as u16,
}
}
enum BlockSelectCursor<'a> {
Dense(<DenseBlock<'a> as Set<u16>>::SelectCursor<'a>),
Sparse(<SparseBlock<'a> as Set<u16>>::SelectCursor<'a>),
}
impl<'a> BlockSelectCursor<'a> {
fn select(&mut self, rank: u16) -> u16 {
match self {
BlockSelectCursor::Dense(dense_select_cursor) => dense_select_cursor.select(rank),
BlockSelectCursor::Sparse(sparse_select_cursor) => sparse_select_cursor.select(rank),
}
}
}
pub struct OptionalIndexSelectCursor<'a> {
current_block_cursor: BlockSelectCursor<'a>,
current_block_id: u16,
// The current block is guaranteed to contain ranks < end_rank.
current_block_end_rank: RowId,
optional_index: &'a OptionalIndex,
block_doc_idx_start: RowId,
num_null_rows_before_block: RowId,
}
impl<'a> OptionalIndexSelectCursor<'a> {
fn search_and_load_block(&mut self, rank: RowId) {
if rank < self.current_block_end_rank {
// we are already in the right block
return;
}
self.current_block_id = self.optional_index.find_block(rank, self.current_block_id);
self.current_block_end_rank = self
.optional_index
.block_metas
.get(self.current_block_id as usize + 1)
.map(|block_meta| block_meta.non_null_rows_before_block)
.unwrap_or(u32::MAX);
self.block_doc_idx_start = (self.current_block_id as u32) * ELEMENTS_PER_BLOCK;
let block_meta = self.optional_index.block_metas[self.current_block_id as usize];
self.num_null_rows_before_block = block_meta.non_null_rows_before_block;
let block: Block<'_> = self.optional_index.block(block_meta);
self.current_block_cursor = match block {
Block::Dense(dense_block) => BlockSelectCursor::Dense(dense_block.select_cursor()),
Block::Sparse(sparse_block) => BlockSelectCursor::Sparse(sparse_block.select_cursor()),
};
}
}
impl<'a> SelectCursor<RowId> for OptionalIndexSelectCursor<'a> {
fn select(&mut self, rank: RowId) -> RowId {
self.search_and_load_block(rank);
let index_in_block = (rank - self.num_null_rows_before_block) as u16;
self.current_block_cursor.select(index_in_block) as RowId + self.block_doc_idx_start
}
}
impl Set<RowId> for OptionalIndex {
type SelectCursor<'b> = OptionalIndexSelectCursor<'b> where Self: 'b;
// Check if value at position is not null.
#[inline]
fn contains(&self, row_id: RowId) -> bool {
let RowAddr {
block_id,
in_block_row_id,
} = row_addr_from_row_id(row_id);
let block_meta = self.block_metas[block_id as usize];
match self.block(block_meta) {
Block::Dense(dense_block) => dense_block.contains(in_block_row_id),
Block::Sparse(sparse_block) => sparse_block.contains(in_block_row_id),
}
}
#[inline]
fn rank(&self, doc_id: DocId) -> RowId {
let RowAddr {
block_id,
in_block_row_id,
} = row_addr_from_row_id(doc_id);
let block_meta = self.block_metas[block_id as usize];
let block = self.block(block_meta);
let block_offset_row_id = match block {
Block::Dense(dense_block) => dense_block.rank(in_block_row_id),
Block::Sparse(sparse_block) => sparse_block.rank(in_block_row_id),
} as u32;
block_meta.non_null_rows_before_block + block_offset_row_id
}
#[inline]
fn rank_if_exists(&self, doc_id: DocId) -> Option<RowId> {
let RowAddr {
block_id,
in_block_row_id,
} = row_addr_from_row_id(doc_id);
let block_meta = self.block_metas[block_id as usize];
let block = self.block(block_meta);
let block_offset_row_id = match block {
Block::Dense(dense_block) => dense_block.rank_if_exists(in_block_row_id),
Block::Sparse(sparse_block) => sparse_block.rank_if_exists(in_block_row_id),
}? as u32;
Some(block_meta.non_null_rows_before_block + block_offset_row_id)
}
#[inline]
fn select(&self, rank: RowId) -> RowId {
let block_pos = self.find_block(rank, 0);
let block_doc_idx_start = (block_pos as u32) * ELEMENTS_PER_BLOCK;
let block_meta = self.block_metas[block_pos as usize];
let block: Block<'_> = self.block(block_meta);
let index_in_block = (rank - block_meta.non_null_rows_before_block) as u16;
let in_block_rank = match block {
Block::Dense(dense_block) => dense_block.select(index_in_block),
Block::Sparse(sparse_block) => sparse_block.select(index_in_block),
};
block_doc_idx_start + in_block_rank as u32
}
fn select_cursor(&self) -> OptionalIndexSelectCursor<'_> {
OptionalIndexSelectCursor {
current_block_cursor: BlockSelectCursor::Sparse(
SparseBlockCodec::open(b"").select_cursor(),
),
current_block_id: 0u16,
current_block_end_rank: 0u32, //< this is sufficient to force the first load
optional_index: self,
block_doc_idx_start: 0u32,
num_null_rows_before_block: 0u32,
}
}
}
impl OptionalIndex {
pub fn for_test(num_rows: RowId, row_ids: &[RowId]) -> OptionalIndex {
assert!(row_ids
.last()
.copied()
.map(|last_row_id| last_row_id < num_rows)
.unwrap_or(true));
let mut buffer = Vec::new();
serialize_optional_index(&row_ids, num_rows, &mut buffer).unwrap();
let bytes = OwnedBytes::new(buffer);
open_optional_index(bytes).unwrap()
}
pub fn num_docs(&self) -> RowId {
self.num_rows
}
pub fn num_non_nulls(&self) -> RowId {
self.num_non_null_rows
}
pub fn iter_rows(&self) -> impl Iterator<Item = RowId> + '_ {
// TODO optimize
let mut select_batch = self.select_cursor();
(0..self.num_non_null_rows).map(move |rank| select_batch.select(rank))
}
pub fn select_batch(&self, ranks: &mut [RowId]) {
let mut select_cursor = self.select_cursor();
for rank in ranks.iter_mut() {
*rank = select_cursor.select(*rank);
}
}
#[inline]
fn block(&self, block_meta: BlockMeta) -> Block<'_> {
let BlockMeta {
start_byte_offset,
block_variant,
..
} = block_meta;
let start_byte_offset = start_byte_offset as usize;
let bytes = self.block_data.as_slice();
match block_variant {
BlockVariant::Dense => Block::Dense(DenseBlockCodec::open(
&bytes[start_byte_offset..start_byte_offset + DENSE_BLOCK_NUM_BYTES as usize],
)),
BlockVariant::Sparse { num_vals } => {
let end_byte_offset = start_byte_offset + num_vals as usize * 2;
let sparse_bytes = &bytes[start_byte_offset..end_byte_offset];
Block::Sparse(SparseBlockCodec::open(sparse_bytes))
}
}
}
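/// Returns the position of the block containing the value with rank `dense_idx`,
/// scanning linearly from `start_block_pos`: the matching block is the last one
/// whose `non_null_rows_before_block` is <= `dense_idx`.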
#[inline]
fn find_block(&self, dense_idx: u32, start_block_pos: u16) -> u16 {
for block_pos in start_block_pos..self.block_metas.len() as u16 {
let offset = self.block_metas[block_pos as usize].non_null_rows_before_block;
if offset > dense_idx {
return block_pos - 1u16;
}
}
self.block_metas.len() as u16 - 1u16
}
// TODO Add a good API for the codec_idx to original_idx translation.
// The Iterator API is a probably a bad idea
}
#[derive(Copy, Clone)]
enum Block<'a> {
Dense(DenseBlock<'a>),
Sparse(SparseBlock<'a>),
}
#[derive(Debug, Copy, Clone)]
enum OptionalIndexCodec {
Dense = 0,
Sparse = 1,
}
impl OptionalIndexCodec {
fn to_code(self) -> u8 {
self as u8
}
fn try_from_code(code: u8) -> Result<Self, InvalidData> {
match code {
0 => Ok(Self::Dense),
1 => Ok(Self::Sparse),
_ => Err(InvalidData),
}
}
}
impl BinarySerializable for OptionalIndexCodec {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&[self.to_code()])
}
fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let optional_codec_code = u8::deserialize(reader)?;
let optional_codec = Self::try_from_code(optional_codec_code)?;
Ok(optional_codec)
}
}
fn serialize_optional_index_block(block_els: &[u16], out: &mut impl io::Write) -> io::Result<()> {
let is_sparse = is_sparse(block_els.len() as u32);
if is_sparse {
SparseBlockCodec::serialize(block_els.iter().copied(), out)?;
} else {
DenseBlockCodec::serialize(block_els.iter().copied(), out)?;
}
Ok(())
}
pub fn serialize_optional_index<W: io::Write>(
non_null_rows: &dyn Iterable<RowId>,
num_rows: RowId,
output: &mut W,
) -> io::Result<()> {
VInt(num_rows as u64).serialize(output)?;
let mut rows_it = non_null_rows.boxed_iter();
let mut block_metadata: Vec<SerializedBlockMeta> = Vec::new();
let mut current_block = Vec::new();
// This if-statement for the first element ensures that
// `block_metadata` is not empty in the loop below.
let Some(idx) = rows_it.next() else {
output.write_all(&0u16.to_le_bytes())?;
return Ok(());
};
let row_addr = row_addr_from_row_id(idx);
let mut current_block_id = row_addr.block_id;
current_block.push(row_addr.in_block_row_id);
for idx in rows_it {
let value_addr = row_addr_from_row_id(idx);
if current_block_id != value_addr.block_id {
serialize_optional_index_block(&current_block[..], output)?;
block_metadata.push(SerializedBlockMeta {
block_id: current_block_id,
num_non_null_rows: current_block.len() as u32,
});
current_block.clear();
current_block_id = value_addr.block_id;
}
current_block.push(value_addr.in_block_row_id);
}
// handle last block
serialize_optional_index_block(&current_block[..], output)?;
block_metadata.push(SerializedBlockMeta {
block_id: current_block_id,
num_non_null_rows: current_block.len() as u32,
});
for block in &block_metadata {
output.write_all(&block.to_bytes())?;
}
output.write_all((block_metadata.len() as u16).to_le_bytes().as_ref())?;
Ok(())
}
const SERIALIZED_BLOCK_META_NUM_BYTES: usize = 4;
#[derive(Clone, Copy, Debug)]
struct SerializedBlockMeta {
block_id: u16,
num_non_null_rows: u32, //< takes values in 1..=u16::MAX + 1
}
// TODO unit tests
impl SerializedBlockMeta {
#[inline]
fn from_bytes(bytes: [u8; SERIALIZED_BLOCK_META_NUM_BYTES]) -> SerializedBlockMeta {
let block_id = u16::from_le_bytes(bytes[0..2].try_into().unwrap());
let num_non_null_rows: u32 =
u16::from_le_bytes(bytes[2..4].try_into().unwrap()) as u32 + 1u32;
SerializedBlockMeta {
block_id,
num_non_null_rows,
}
}
#[inline]
fn to_bytes(self) -> [u8; SERIALIZED_BLOCK_META_NUM_BYTES] {
assert!(self.num_non_null_rows > 0);
let mut bytes = [0u8; SERIALIZED_BLOCK_META_NUM_BYTES];
bytes[0..2].copy_from_slice(&self.block_id.to_le_bytes());
// We don't store empty blocks, therefore we can subtract 1.
// This way we can still use a u16 even when the number of elements is 1 << 16, i.e. u16::MAX + 1.
bytes[2..4].copy_from_slice(&((self.num_non_null_rows - 1u32) as u16).to_le_bytes());
bytes
}
}
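// A minimal round-trip check for the encoding above, addressing the TODO; the field
// values are arbitrary and the module name is ours:
#[cfg(test)]
mod serialized_block_meta_tests {
use super::SerializedBlockMeta;
#[test]
fn test_block_meta_round_trip() {
// Thanks to the -1 encoding, num_non_null_rows may go up to u16::MAX + 1.
let meta = SerializedBlockMeta {
block_id: 3,
num_non_null_rows: u16::MAX as u32 + 1,
};
let decoded = SerializedBlockMeta::from_bytes(meta.to_bytes());
assert_eq!(decoded.block_id, 3);
assert_eq!(decoded.num_non_null_rows, u16::MAX as u32 + 1);
}
}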
#[inline]
fn is_sparse(num_rows_in_block: u32) -> bool {
num_rows_in_block < DENSE_BLOCK_THRESHOLD
}
fn deserialize_optional_index_block_metadatas(
data: &[u8],
num_rows: u32,
) -> (Box<[BlockMeta]>, u32) {
let num_blocks = data.len() / SERIALIZED_BLOCK_META_NUM_BYTES;
let mut block_metas = Vec::with_capacity(num_blocks + 1);
let mut start_byte_offset = 0;
let mut non_null_rows_before_block = 0;
for block_meta_bytes in data.chunks_exact(SERIALIZED_BLOCK_META_NUM_BYTES) {
let block_meta_bytes: [u8; SERIALIZED_BLOCK_META_NUM_BYTES] =
block_meta_bytes.try_into().unwrap();
let SerializedBlockMeta {
block_id,
num_non_null_rows,
} = SerializedBlockMeta::from_bytes(block_meta_bytes);
block_metas.resize(
block_id as usize,
BlockMeta {
non_null_rows_before_block,
start_byte_offset,
block_variant: BlockVariant::empty(),
},
);
let block_variant = if is_sparse(num_non_null_rows) {
BlockVariant::Sparse {
num_vals: num_non_null_rows as u16,
}
} else {
BlockVariant::Dense
};
block_metas.push(BlockMeta {
non_null_rows_before_block,
start_byte_offset,
block_variant,
});
start_byte_offset += block_variant.num_bytes_in_block();
non_null_rows_before_block += num_non_null_rows;
}
block_metas.resize(
((num_rows + BLOCK_SIZE - 1) / BLOCK_SIZE) as usize,
BlockMeta {
non_null_rows_before_block,
start_byte_offset,
block_variant: BlockVariant::empty(),
},
);
(block_metas.into_boxed_slice(), non_null_rows_before_block)
}
pub fn open_optional_index(bytes: OwnedBytes) -> io::Result<OptionalIndex> {
let (mut bytes, num_non_empty_blocks_bytes) = bytes.rsplit(2);
let num_non_empty_block_bytes =
u16::from_le_bytes(num_non_empty_blocks_bytes.as_slice().try_into().unwrap());
let num_rows = VInt::deserialize_u64(&mut bytes)? as u32;
let block_metas_num_bytes =
num_non_empty_block_bytes as usize * SERIALIZED_BLOCK_META_NUM_BYTES;
let (block_data, block_metas) = bytes.rsplit(block_metas_num_bytes);
let (block_metas, num_non_null_rows) =
deserialize_optional_index_block_metadatas(block_metas.as_slice(), num_rows);
let optional_index = OptionalIndex {
num_rows,
num_non_null_rows,
block_data,
block_metas: block_metas.into(),
};
Ok(optional_index)
}
#[cfg(test)]
mod tests;


@@ -0,0 +1,47 @@
use std::io;
/// A codec makes it possible to serialize a set of
/// elements, and open the resulting Set representation.
pub trait SetCodec {
type Item: Copy + TryFrom<usize> + Eq + std::hash::Hash + std::fmt::Debug;
type Reader<'a>: Set<Self::Item>;
/// Serializes a set of unique sorted u16 elements.
///
/// May panic if the elements are not sorted.
fn serialize(els: impl Iterator<Item = Self::Item>, wrt: impl io::Write) -> io::Result<()>;
fn open(data: &[u8]) -> Self::Reader<'_>;
}
/// Stateful object that makes it possible to compute several selects in a row,
/// provided the ranks passed as argument are increasing.
pub trait SelectCursor<T> {
// May panic if rank is greater than the number of elements in the Set,
// or if rank is lower than the value provided in the previous call.
fn select(&mut self, rank: T) -> T;
}
pub trait Set<T> {
type SelectCursor<'b>: SelectCursor<T>
where Self: 'b;
/// Returns true if the element is contained in the Set
fn contains(&self, el: T) -> bool;
/// Returns the number of rows in the set that are < `el`
fn rank(&self, el: T) -> T;
/// If the set contains `el` returns the element rank.
/// If the set does not contain the element, it returns `None`.
fn rank_if_exists(&self, el: T) -> Option<T>;
/// Return the rank-th value stored in this set.
///
/// # Panics
///
/// May panic if rank is greater than the number of elements in the Set.
fn select(&self, rank: T) -> T;
/// Creates a brand new select cursor.
fn select_cursor(&self) -> Self::SelectCursor<'_>;
}
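// Invariants sketch (assumed semantics tying these methods together): for an element
// `el` contained in a set `s`, `s.select(s.rank(el)) == el` and
// `s.rank_if_exists(el) == Some(s.rank(el))`; for an absent element, `rank` returns
// the insertion position and `rank_if_exists` returns `None`.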


@@ -0,0 +1,278 @@
use std::convert::TryInto;
use std::io::{self, Write};
use common::BinarySerializable;
use crate::column_index::optional_index::{SelectCursor, Set, SetCodec, ELEMENTS_PER_BLOCK};
#[inline(always)]
fn get_bit_at(input: u64, n: u16) -> bool {
input & (1 << n) != 0
}
#[inline]
fn set_bit_at(input: &mut u64, n: u16) {
*input |= 1 << n;
}
/// For the `DenseCodec`, `data` contains the encoded mini blocks.
/// Each mini block consists of [u8; 10]: the first 8 bytes are a bitvec for 64 elements,
/// and the last 2 bytes are the offset, i.e. the number of set bits so far.
///
/// When translating the original index to a dense index, the correct block can be computed
/// directly `orig_idx/64`. Inside the block the position is `orig_idx%64`.
///
/// When translating a dense index to the original index, we can use the offset to find the correct
/// block. Direct computation is not possible, but we can employ a linear or binary search.
const ELEMENTS_PER_MINI_BLOCK: u16 = 64;
const MINI_BLOCK_BITVEC_NUM_BYTES: usize = 8;
const MINI_BLOCK_OFFSET_NUM_BYTES: usize = 2;
pub const MINI_BLOCK_NUM_BYTES: usize = MINI_BLOCK_BITVEC_NUM_BYTES + MINI_BLOCK_OFFSET_NUM_BYTES;
/// Number of bytes in a dense block.
pub const DENSE_BLOCK_NUM_BYTES: u32 =
(ELEMENTS_PER_BLOCK / ELEMENTS_PER_MINI_BLOCK as u32) * MINI_BLOCK_NUM_BYTES as u32;
pub struct DenseBlockCodec;
impl SetCodec for DenseBlockCodec {
type Item = u16;
type Reader<'a> = DenseBlock<'a>;
fn serialize(els: impl Iterator<Item = u16>, wrt: impl io::Write) -> io::Result<()> {
serialize_dense_codec(els, wrt)
}
#[inline]
fn open(data: &[u8]) -> Self::Reader<'_> {
assert_eq!(data.len(), DENSE_BLOCK_NUM_BYTES as usize);
DenseBlock(data)
}
}
/// Interpreting the bitvec as a set of integers within 0..=63,
/// and given an element, returns the number of elements in the
/// set smaller than the element.
///
/// # Panics
///
/// May panic or return a wrong result if el >= 64.
#[inline(always)]
fn rank_u64(bitvec: u64, el: u16) -> u16 {
debug_assert!(el < 64);
let mask = (1u64 << el) - 1;
let masked_bitvec = bitvec & mask;
masked_bitvec.count_ones() as u16
}
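/// Returns the position of the `rank`-th (0-based) set bit in `bitvec`.
/// Clearing the lowest set bit `rank` times (Kernighan's trick) leaves the answer as
/// the lowest remaining set bit, recovered with `trailing_zeros`.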
#[inline(always)]
fn select_u64(mut bitvec: u64, rank: u16) -> u16 {
for _ in 0..rank {
bitvec &= bitvec - 1;
}
bitvec.trailing_zeros() as u16
}
// TODO test the following solution on Intel... on Ryzen Zen <3 it is a catastrophe.
// #[target_feature(enable = "bmi2")]
// unsafe fn select_bitvec_unsafe(bitvec: u64, rank: u16) -> u16 {
// let pdep = _pdep_u64(1u64 << rank, bitvec);
// pdep.trailing_zeros() as u16
// }
#[derive(Clone, Copy, Debug)]
struct DenseMiniBlock {
bitvec: u64,
rank: u16,
}
impl DenseMiniBlock {
fn from_bytes(data: [u8; MINI_BLOCK_NUM_BYTES]) -> Self {
let bitvec = u64::from_le_bytes(data[..MINI_BLOCK_BITVEC_NUM_BYTES].try_into().unwrap());
let rank = u16::from_le_bytes(data[MINI_BLOCK_BITVEC_NUM_BYTES..].try_into().unwrap());
Self { bitvec, rank }
}
fn to_bytes(self) -> [u8; MINI_BLOCK_NUM_BYTES] {
let mut bytes = [0u8; MINI_BLOCK_NUM_BYTES];
bytes[..MINI_BLOCK_BITVEC_NUM_BYTES].copy_from_slice(&self.bitvec.to_le_bytes());
bytes[MINI_BLOCK_BITVEC_NUM_BYTES..].copy_from_slice(&self.rank.to_le_bytes());
bytes
}
}
#[derive(Copy, Clone)]
pub struct DenseBlock<'a>(&'a [u8]);
pub struct DenseBlockSelectCursor<'a> {
block_id: u16,
dense_block: DenseBlock<'a>,
}
impl<'a> SelectCursor<u16> for DenseBlockSelectCursor<'a> {
#[inline]
fn select(&mut self, rank: u16) -> u16 {
self.block_id = self
.dense_block
.find_miniblock_containing_rank(rank, self.block_id)
.unwrap();
let index_block = self.dense_block.mini_block(self.block_id);
let in_block_rank = rank - index_block.rank;
self.block_id * ELEMENTS_PER_MINI_BLOCK + select_u64(index_block.bitvec, in_block_rank)
}
}
impl<'a> Set<u16> for DenseBlock<'a> {
type SelectCursor<'b> = DenseBlockSelectCursor<'a> where Self: 'b;
#[inline(always)]
fn contains(&self, el: u16) -> bool {
let mini_block_id = el / ELEMENTS_PER_MINI_BLOCK;
let bitvec = self.mini_block(mini_block_id).bitvec;
let pos_in_bitvec = el % ELEMENTS_PER_MINI_BLOCK;
get_bit_at(bitvec, pos_in_bitvec)
}
#[inline(always)]
fn rank_if_exists(&self, el: u16) -> Option<u16> {
let block_pos = el / ELEMENTS_PER_MINI_BLOCK;
let index_block = self.mini_block(block_pos);
let pos_in_block_bit_vec = el % ELEMENTS_PER_MINI_BLOCK;
let ones_in_block = rank_u64(index_block.bitvec, pos_in_block_bit_vec);
let rank = index_block.rank + ones_in_block;
if get_bit_at(index_block.bitvec, pos_in_block_bit_vec) {
Some(rank)
} else {
None
}
}
#[inline(always)]
fn rank(&self, el: u16) -> u16 {
let block_pos = el / ELEMENTS_PER_MINI_BLOCK;
let index_block = self.mini_block(block_pos);
let pos_in_block_bit_vec = el % ELEMENTS_PER_MINI_BLOCK;
let ones_in_block = rank_u64(index_block.bitvec, pos_in_block_bit_vec);
index_block.rank + ones_in_block
}
#[inline(always)]
fn select(&self, rank: u16) -> u16 {
let block_id = self.find_miniblock_containing_rank(rank, 0).unwrap();
let index_block = self.mini_block(block_id);
let in_block_rank = rank - index_block.rank;
block_id * ELEMENTS_PER_MINI_BLOCK + select_u64(index_block.bitvec, in_block_rank)
}
#[inline(always)]
fn select_cursor(&self) -> Self::SelectCursor<'_> {
DenseBlockSelectCursor {
block_id: 0,
dense_block: *self,
}
}
}
impl<'a> DenseBlock<'a> {
#[inline]
fn mini_block(&self, mini_block_id: u16) -> DenseMiniBlock {
let data_start_pos = mini_block_id as usize * MINI_BLOCK_NUM_BYTES;
DenseMiniBlock::from_bytes(
self.0[data_start_pos..data_start_pos + MINI_BLOCK_NUM_BYTES]
.try_into()
.unwrap(),
)
}
#[inline]
fn iter_miniblocks(
&self,
from_block_id: u16,
) -> impl Iterator<Item = (u16, DenseMiniBlock)> + '_ {
self.0
.chunks_exact(MINI_BLOCK_NUM_BYTES)
.enumerate()
.skip(from_block_id as usize)
.map(|(block_id, bytes)| {
let mini_block = DenseMiniBlock::from_bytes(bytes.try_into().unwrap());
(block_id as u16, mini_block)
})
}
/// Finds the position of the mini block containing the given rank.
///
/// # Correctness
/// `rank` needs to be smaller than the number of values in the block.
///
/// The last offset number is equal to the number of values in the block.
#[inline]
fn find_miniblock_containing_rank(&self, rank: u16, from_block_id: u16) -> Option<u16> {
self.iter_miniblocks(from_block_id)
.take_while(|(_, block)| block.rank <= rank)
.map(|(block_id, _)| block_id)
.last()
}
}
/// Serializes the dense codec from an iterator over the sorted set elements.
pub fn serialize_dense_codec(
els: impl Iterator<Item = u16>,
mut output: impl Write,
) -> io::Result<()> {
let mut non_null_rows_before: u16 = 0u16;
let mut block = 0u64;
let mut current_block_id = 0u16;
for el in els {
let block_id = el / ELEMENTS_PER_MINI_BLOCK;
let in_offset = el % ELEMENTS_PER_MINI_BLOCK;
while block_id > current_block_id {
let dense_mini_block = DenseMiniBlock {
bitvec: block,
rank: non_null_rows_before,
};
output.write_all(&dense_mini_block.to_bytes())?;
non_null_rows_before += block.count_ones() as u16;
block = 0u64;
current_block_id += 1u16;
}
set_bit_at(&mut block, in_offset);
}
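// Pad the remaining mini blocks so a dense block always covers the full 0..=u16::MAX
// range and occupies exactly DENSE_BLOCK_NUM_BYTES bytes.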
while current_block_id <= u16::MAX / ELEMENTS_PER_MINI_BLOCK {
block.serialize(&mut output)?;
non_null_rows_before.serialize(&mut output)?;
// This will overflow to 0 exactly if all bits are set.
// This is however not a problem as we won't use this last value.
non_null_rows_before = non_null_rows_before.wrapping_add(block.count_ones() as u16);
block = 0u64;
current_block_id += 1u16;
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_select_bitvec() {
assert_eq!(select_u64(1u64, 0), 0);
assert_eq!(select_u64(2u64, 0), 1);
assert_eq!(select_u64(4u64, 0), 2);
assert_eq!(select_u64(8u64, 0), 3);
assert_eq!(select_u64(1 | 8u64, 0), 0);
assert_eq!(select_u64(1 | 8u64, 1), 3);
}
#[test]
fn test_count_ones() {
for i in 0..=63 {
assert_eq!(rank_u64(u64::MAX, i), i);
}
}
#[test]
fn test_dense() {
assert_eq!(DENSE_BLOCK_NUM_BYTES, 10_240);
}
}


@@ -0,0 +1,8 @@
mod dense;
mod sparse;
pub use dense::{DenseBlock, DenseBlockCodec, DENSE_BLOCK_NUM_BYTES};
pub use sparse::{SparseBlock, SparseBlockCodec};
#[cfg(test)]
mod tests;


@@ -0,0 +1,111 @@
use crate::column_index::optional_index::{SelectCursor, Set, SetCodec};
pub struct SparseBlockCodec;
impl SetCodec for SparseBlockCodec {
type Item = u16;
type Reader<'a> = SparseBlock<'a>;
fn serialize(
els: impl Iterator<Item = u16>,
mut wrt: impl std::io::Write,
) -> std::io::Result<()> {
for el in els {
wrt.write_all(&el.to_le_bytes())?;
}
Ok(())
}
fn open(data: &[u8]) -> Self::Reader<'_> {
SparseBlock(data)
}
}
#[derive(Copy, Clone)]
pub struct SparseBlock<'a>(&'a [u8]);
impl<'a> SelectCursor<u16> for SparseBlock<'a> {
#[inline]
fn select(&mut self, rank: u16) -> u16 {
<SparseBlock<'a> as Set<u16>>::select(self, rank)
}
}
impl<'a> Set<u16> for SparseBlock<'a> {
type SelectCursor<'b> = Self where Self: 'b;
#[inline(always)]
fn contains(&self, el: u16) -> bool {
self.binary_search(el).is_ok()
}
#[inline(always)]
fn rank_if_exists(&self, el: u16) -> Option<u16> {
self.binary_search(el).ok()
}
#[inline(always)]
fn rank(&self, el: u16) -> u16 {
self.binary_search(el).unwrap_or_else(|el| el)
}
#[inline(always)]
fn select(&self, rank: u16) -> u16 {
let offset = rank as usize * 2;
u16::from_le_bytes(self.0[offset..offset + 2].try_into().unwrap())
}
#[inline(always)]
fn select_cursor(&self) -> Self::SelectCursor<'_> {
*self
}
}
#[inline(always)]
fn get_u16(data: &[u8], byte_position: usize) -> u16 {
let bytes: [u8; 2] = data[byte_position..byte_position + 2].try_into().unwrap();
u16::from_le_bytes(bytes)
}
impl<'a> SparseBlock<'a> {
#[inline(always)]
fn value_at_idx(&self, data: &[u8], idx: u16) -> u16 {
let start_offset: usize = idx as usize * 2;
get_u16(data, start_offset)
}
#[inline]
fn num_vals(&self) -> u16 {
(self.0.len() / 2) as u16
}
#[inline]
#[allow(clippy::comparison_chain)]
// Looks for the element in the block. Returns the position if found, else the insertion position as `Err`.
fn binary_search(&self, target: u16) -> Result<u16, u16> {
let data = &self.0;
let mut size = self.num_vals();
let mut left = 0;
let mut right = size;
// TODO try a different implementation,
// e.g. exponential search into binary search
while left < right {
let mid = left + size / 2;
// TODO do boundary check only once, and then use an
// unsafe `value_at_idx`
let mid_val = self.value_at_idx(data, mid);
if target > mid_val {
left = mid + 1;
} else if target < mid_val {
right = mid;
} else {
return Ok(mid);
}
size = right - left;
}
Err(left)
}
}


@@ -0,0 +1,109 @@
use std::collections::HashMap;
use crate::column_index::optional_index::set_block::dense::DENSE_BLOCK_NUM_BYTES;
use crate::column_index::optional_index::set_block::{DenseBlockCodec, SparseBlockCodec};
use crate::column_index::optional_index::{SelectCursor, Set, SetCodec};
fn test_set_helper<C: SetCodec<Item = u16>>(vals: &[u16]) -> usize {
let mut buffer = Vec::new();
C::serialize(vals.iter().copied(), &mut buffer).unwrap();
let tested_set = C::open(buffer.as_slice());
let hash_set: HashMap<C::Item, C::Item> = vals
.iter()
.copied()
.enumerate()
.map(|(ord, val)| (val, C::Item::try_from(ord).ok().unwrap()))
.collect();
for val in 0u16..=u16::MAX {
assert_eq!(tested_set.contains(val), hash_set.contains_key(&val));
assert_eq!(tested_set.rank_if_exists(val), hash_set.get(&val).copied());
assert_eq!(
tested_set.rank(val),
vals.iter().cloned().take_while(|v| *v < val).count() as u16
);
}
for rank in 0..vals.len() {
assert_eq!(tested_set.select(rank as u16), vals[rank]);
}
buffer.len()
}
#[test]
fn test_dense_block_set_u16_empty() {
let buffer_len = test_set_helper::<DenseBlockCodec>(&[]);
assert_eq!(buffer_len, DENSE_BLOCK_NUM_BYTES as usize);
}
#[test]
fn test_dense_block_set_u16_max() {
let buffer_len = test_set_helper::<DenseBlockCodec>(&[u16::MAX]);
assert_eq!(buffer_len, DENSE_BLOCK_NUM_BYTES as usize);
}
#[test]
fn test_sparse_block_set_u16_empty() {
let buffer_len = test_set_helper::<SparseBlockCodec>(&[]);
assert_eq!(buffer_len, 0);
}
#[test]
fn test_sparse_block_set_u16_max() {
let buffer_len = test_set_helper::<SparseBlockCodec>(&[u16::MAX]);
assert_eq!(buffer_len, 2);
}
use proptest::prelude::*;
proptest! {
#![proptest_config(ProptestConfig::with_cases(1))]
#[test]
fn test_prop_test_dense(els in proptest::collection::btree_set(0..=u16::MAX, 0..=u16::MAX as usize)) {
let vals: Vec<u16> = els.into_iter().collect();
let buffer_len = test_set_helper::<DenseBlockCodec>(&vals);
assert_eq!(buffer_len, DENSE_BLOCK_NUM_BYTES as usize);
}
#[test]
fn test_prop_test_sparse(els in proptest::collection::btree_set(0..=u16::MAX, 0..=u16::MAX as usize)) {
let vals: Vec<u16> = els.into_iter().collect();
let buffer_len = test_set_helper::<SparseBlockCodec>(&vals);
assert_eq!(buffer_len, vals.len() * 2);
}
}
#[test]
fn test_simple_translate_codec_codec_idx_to_original_idx_dense() {
let mut buffer = Vec::new();
DenseBlockCodec::serialize([1, 3, 17, 32, 30_000, 30_001].iter().copied(), &mut buffer)
.unwrap();
let tested_set = DenseBlockCodec::open(buffer.as_slice());
assert!(tested_set.contains(1));
let mut select_cursor = tested_set.select_cursor();
assert_eq!(select_cursor.select(0), 1);
assert_eq!(select_cursor.select(1), 3);
assert_eq!(select_cursor.select(2), 17);
}
#[test]
fn test_simple_translate_codec_idx_to_original_idx_sparse() {
let mut buffer = Vec::new();
SparseBlockCodec::serialize([1, 3, 17].iter().copied(), &mut buffer).unwrap();
let tested_set = SparseBlockCodec::open(buffer.as_slice());
assert!(tested_set.contains(1));
let mut select_cursor = tested_set.select_cursor();
assert_eq!(SelectCursor::select(&mut select_cursor, 0), 1);
assert_eq!(SelectCursor::select(&mut select_cursor, 1), 3);
assert_eq!(SelectCursor::select(&mut select_cursor, 2), 17);
}
#[test]
fn test_simple_translate_codec_idx_to_original_idx_dense() {
let mut buffer = Vec::new();
DenseBlockCodec::serialize(0u16..150u16, &mut buffer).unwrap();
let tested_set = DenseBlockCodec::open(buffer.as_slice());
assert!(tested_set.contains(1));
let mut select_cursor = tested_set.select_cursor();
for i in 0..150 {
assert_eq!(i, select_cursor.select(i));
}
}


@@ -0,0 +1,371 @@
use proptest::prelude::{any, prop, *};
use proptest::strategy::Strategy;
use proptest::{prop_oneof, proptest};
use super::*;
#[test]
fn test_dense_block_threshold() {
assert_eq!(super::DENSE_BLOCK_THRESHOLD, 5_120);
}
fn random_bitvec() -> BoxedStrategy<Vec<bool>> {
prop_oneof![
1 => prop::collection::vec(proptest::bool::weighted(1.0), 0..100),
1 => prop::collection::vec(proptest::bool::weighted(0.00), 0..(ELEMENTS_PER_BLOCK as usize * 3)), // empty blocks
1 => prop::collection::vec(proptest::bool::weighted(1.00), 0..(ELEMENTS_PER_BLOCK as usize + 10)), // full block
1 => prop::collection::vec(proptest::bool::weighted(0.01), 0..100),
1 => prop::collection::vec(proptest::bool::weighted(0.01), 0..u16::MAX as usize),
8 => vec![any::<bool>()],
]
.boxed()
}
proptest! {
#![proptest_config(ProptestConfig::with_cases(50))]
#[test]
fn test_with_random_bitvecs(bitvec1 in random_bitvec(), bitvec2 in random_bitvec(), bitvec3 in random_bitvec()) {
let mut bitvec = Vec::new();
bitvec.extend_from_slice(&bitvec1);
bitvec.extend_from_slice(&bitvec2);
bitvec.extend_from_slice(&bitvec3);
test_null_index(&bitvec[..]);
}
}
#[test]
fn test_with_random_sets_simple() {
let vals = 10..BLOCK_SIZE * 2;
let mut out: Vec<u8> = Vec::new();
serialize_optional_index(&vals, 100, &mut out).unwrap();
let null_index = open_optional_index(OwnedBytes::new(out)).unwrap();
let ranks: Vec<u32> = (65_472u32..65_473u32).collect();
let els: Vec<u32> = ranks.iter().copied().map(|rank| rank + 10).collect();
let mut select_cursor = null_index.select_cursor();
for (rank, el) in ranks.iter().copied().zip(els.iter().copied()) {
assert_eq!(select_cursor.select(rank), el);
}
}
#[test]
fn test_optional_index_trailing_empty_blocks() {
test_null_index(&[false]);
}
#[test]
fn test_optional_index_one_block_false() {
let mut iter = vec![false; ELEMENTS_PER_BLOCK as usize];
iter.push(true);
test_null_index(&iter[..]);
}
#[test]
fn test_optional_index_one_block_true() {
let mut iter = vec![true; ELEMENTS_PER_BLOCK as usize];
iter.push(true);
test_null_index(&iter[..]);
}
impl<'a> Iterable<RowId> for &'a [bool] {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = RowId> + 'a> {
Box::new(
self.iter()
.cloned()
.enumerate()
.filter(|(_pos, val)| *val)
.map(|(pos, _val)| pos as u32),
)
}
}
fn test_null_index(data: &[bool]) {
let mut out: Vec<u8> = Vec::new();
serialize_optional_index(&data, data.len() as RowId, &mut out).unwrap();
let null_index = open_optional_index(OwnedBytes::new(out)).unwrap();
let orig_idx_with_value: Vec<u32> = data
.iter()
.enumerate()
.filter(|(_pos, val)| **val)
.map(|(pos, _val)| pos as u32)
.collect();
let mut select_iter = null_index.select_cursor();
for i in 0..orig_idx_with_value.len() {
assert_eq!(select_iter.select(i as u32), orig_idx_with_value[i]);
}
let step_size = (orig_idx_with_value.len() / 100).max(1);
for (dense_idx, orig_idx) in orig_idx_with_value.iter().enumerate().step_by(step_size) {
assert_eq!(null_index.rank_if_exists(*orig_idx), Some(dense_idx as u32));
}
// 100 samples
let step_size = (data.len() / 100).max(1);
for (pos, value) in data.iter().enumerate().step_by(step_size) {
assert_eq!(null_index.contains(pos as u32), *value);
}
}
#[test]
fn test_optional_index_test_translation() {
let optional_index = OptionalIndex::for_test(4, &[0, 2]);
let mut select_cursor = optional_index.select_cursor();
assert_eq!(select_cursor.select(0), 0);
assert_eq!(select_cursor.select(1), 2);
}
#[test]
fn test_optional_index_translate() {
let optional_index = OptionalIndex::for_test(4, &[0, 2]);
assert_eq!(optional_index.rank_if_exists(0), Some(0));
assert_eq!(optional_index.rank_if_exists(2), Some(1));
}
#[test]
fn test_optional_index_small() {
let optional_index = OptionalIndex::for_test(4, &[0, 2]);
assert!(optional_index.contains(0));
assert!(!optional_index.contains(1));
assert!(optional_index.contains(2));
assert!(!optional_index.contains(3));
}
#[test]
fn test_optional_index_large() {
let row_ids = &[ELEMENTS_PER_BLOCK, ELEMENTS_PER_BLOCK + 1];
let optional_index = OptionalIndex::for_test(ELEMENTS_PER_BLOCK + 2, row_ids);
assert!(!optional_index.contains(0));
assert!(!optional_index.contains(100));
assert!(!optional_index.contains(ELEMENTS_PER_BLOCK - 1));
assert!(optional_index.contains(ELEMENTS_PER_BLOCK));
assert!(optional_index.contains(ELEMENTS_PER_BLOCK + 1));
}
fn test_optional_index_iter_aux(row_ids: &[RowId], num_rows: RowId) {
let optional_index = OptionalIndex::for_test(num_rows, row_ids);
assert_eq!(optional_index.num_docs(), num_rows);
assert!(optional_index.iter_rows().eq(row_ids.iter().copied()));
}
#[test]
fn test_optional_index_iter_empty() {
test_optional_index_iter_aux(&[], 0u32);
}
fn test_optional_index_rank_aux(row_ids: &[RowId]) {
let num_rows = row_ids.last().copied().unwrap_or(0u32) + 1;
let null_index = OptionalIndex::for_test(num_rows, row_ids);
assert_eq!(null_index.num_docs(), num_rows);
for (row_id, row_val) in row_ids.iter().copied().enumerate() {
assert_eq!(null_index.rank(row_val), row_id as u32);
assert_eq!(null_index.rank_if_exists(row_val), Some(row_id as u32));
if row_val > 0 && !null_index.contains(row_val - 1) {
assert_eq!(null_index.rank(row_val - 1), row_id as u32);
}
assert_eq!(null_index.rank(row_val + 1), row_id as u32 + 1);
}
}
#[test]
fn test_optional_index_rank() {
test_optional_index_rank_aux(&[1u32]);
test_optional_index_rank_aux(&[0u32, 1u32]);
let mut block = Vec::new();
block.push(3u32);
block.extend((0..BLOCK_SIZE).map(|i| i + BLOCK_SIZE + 1));
test_optional_index_rank_aux(&block);
}
#[test]
fn test_optional_index_iter_empty_one() {
test_optional_index_iter_aux(&[1], 2u32);
test_optional_index_iter_aux(&[100_000], 200_000u32);
}
#[test]
fn test_optional_index_iter_dense_block() {
let mut block = Vec::new();
block.push(3u32);
block.extend((0..BLOCK_SIZE).map(|i| i + BLOCK_SIZE + 1));
test_optional_index_iter_aux(&block, 3 * BLOCK_SIZE);
}
#[test]
fn test_optional_index_for_tests() {
let optional_index = OptionalIndex::for_test(4, &[1, 2]);
assert!(!optional_index.contains(0));
assert!(optional_index.contains(1));
assert!(optional_index.contains(2));
assert!(!optional_index.contains(3));
assert_eq!(optional_index.num_docs(), 4);
}
#[cfg(all(test, feature = "unstable"))]
mod bench {
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use test::Bencher;
use super::*;
const TOTAL_NUM_VALUES: u32 = 1_000_000;
fn gen_bools(fill_ratio: f64) -> OptionalIndex {
let mut out = Vec::new();
let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
let vals: Vec<RowId> = (0..TOTAL_NUM_VALUES)
.map(|_| rng.gen_bool(fill_ratio))
.enumerate()
.filter(|(_pos, val)| *val)
.map(|(pos, _)| pos as RowId)
.collect();
serialize_optional_index(&&vals[..], TOTAL_NUM_VALUES, &mut out).unwrap();
let codec = open_optional_index(OwnedBytes::new(out)).unwrap();
codec
}
fn random_range_iterator(
start: u32,
end: u32,
avg_step_size: u32,
avg_deviation: u32,
) -> impl Iterator<Item = u32> {
let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
let mut current = start;
std::iter::from_fn(move || {
current += rng.gen_range(avg_step_size - avg_deviation..=avg_step_size + avg_deviation);
if current >= end {
None
} else {
Some(current)
}
})
}
fn n_percent_step_iterator(percent: f32, num_values: u32) -> impl Iterator<Item = u32> {
let ratio = percent / 100.0;
let step_size = (1f32 / ratio) as u32;
let deviation = step_size - 1;
random_range_iterator(0, num_values, step_size, deviation)
}
fn walk_over_data(codec: &OptionalIndex, avg_step_size: u32) -> Option<u32> {
walk_over_data_from_positions(
codec,
random_range_iterator(0, TOTAL_NUM_VALUES, avg_step_size, 0),
)
}
fn walk_over_data_from_positions(
codec: &OptionalIndex,
positions: impl Iterator<Item = u32>,
) -> Option<u32> {
let mut dense_idx: Option<u32> = None;
for idx in positions {
dense_idx = dense_idx.or(codec.rank_if_exists(idx));
}
dense_idx
}
#[bench]
fn bench_translate_orig_to_codec_1percent_filled_10percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.01f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_orig_to_codec_5percent_filled_10percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.05f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_orig_to_codec_5percent_filled_1percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.05f64);
bench.iter(|| walk_over_data(&codec, 1000));
}
#[bench]
fn bench_translate_orig_to_codec_full_scan_1percent_filled(bench: &mut Bencher) {
let codec = gen_bools(0.01f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_translate_orig_to_codec_full_scan_10percent_filled(bench: &mut Bencher) {
let codec = gen_bools(0.1f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_translate_orig_to_codec_full_scan_90percent_filled(bench: &mut Bencher) {
let codec = gen_bools(0.9f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_translate_orig_to_codec_10percent_filled_1percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.1f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_orig_to_codec_50percent_filled_1percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.5f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_orig_to_codec_90percent_filled_1percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.9f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_codec_to_orig_1percent_filled_0comma005percent_hit(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.01f64, 0.005f32, bench);
}
#[bench]
fn bench_translate_codec_to_orig_10percent_filled_0comma005percent_hit(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.1f64, 0.005f32, bench);
}
#[bench]
fn bench_translate_codec_to_orig_1percent_filled_10percent_hit(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.01f64, 10f32, bench);
}
#[bench]
fn bench_translate_codec_to_orig_1percent_filled_full_scan(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.01f64, 100f32, bench);
}
fn bench_translate_codec_to_orig_util(
percent_filled: f64,
percent_hit: f32,
bench: &mut Bencher,
) {
let codec = gen_bools(percent_filled);
let num_non_nulls = codec.num_non_nulls();
let idxs: Vec<u32> = if percent_hit == 100.0f32 {
(0..num_non_nulls).collect()
} else {
n_percent_step_iterator(percent_hit, num_non_nulls).collect()
};
let mut output = vec![0u32; idxs.len()];
bench.iter(|| {
output.copy_from_slice(&idxs[..]);
codec.select_batch(&mut output);
});
}
#[bench]
fn bench_translate_codec_to_orig_90percent_filled_0comma005percent_hit(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.9f64, 0.005, bench);
}
#[bench]
fn bench_translate_codec_to_orig_90percent_filled_full_scan(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.9f64, 100.0f32, bench);
}
}

View File

@@ -0,0 +1,77 @@
use std::io;
use std::io::Write;
use common::{CountingWriter, OwnedBytes};
use crate::column_index::multivalued_index::serialize_multivalued_index;
use crate::column_index::optional_index::serialize_optional_index;
use crate::column_index::ColumnIndex;
use crate::iterable::Iterable;
use crate::{Cardinality, RowId};
pub enum SerializableColumnIndex<'a> {
Full,
Optional {
non_null_row_ids: Box<dyn Iterable<RowId> + 'a>,
num_rows: RowId,
},
// TODO remove the Box<dyn>: apart from serialization, this is not
// dynamic at all.
Multivalued(Box<dyn Iterable<RowId> + 'a>),
}
impl<'a> SerializableColumnIndex<'a> {
pub fn get_cardinality(&self) -> Cardinality {
match self {
SerializableColumnIndex::Full => Cardinality::Full,
SerializableColumnIndex::Optional { .. } => Cardinality::Optional,
SerializableColumnIndex::Multivalued(_) => Cardinality::Multivalued,
}
}
}
pub fn serialize_column_index(
column_index: SerializableColumnIndex,
output: &mut impl Write,
) -> io::Result<u32> {
let mut output = CountingWriter::wrap(output);
let cardinality = column_index.get_cardinality().to_code();
output.write_all(&[cardinality])?;
match column_index {
SerializableColumnIndex::Full => {}
SerializableColumnIndex::Optional {
non_null_row_ids,
num_rows,
} => serialize_optional_index(non_null_row_ids.as_ref(), num_rows, &mut output)?,
SerializableColumnIndex::Multivalued(multivalued_index) => {
serialize_multivalued_index(&*multivalued_index, &mut output)?
}
}
let column_index_num_bytes = output.written_bytes() as u32;
Ok(column_index_num_bytes)
}
pub fn open_column_index(mut bytes: OwnedBytes) -> io::Result<ColumnIndex> {
if bytes.is_empty() {
return Err(io::Error::new(
io::ErrorKind::UnexpectedEof,
"Failed to deserialize column index. Empty buffer.",
));
}
let cardinality_code = bytes[0];
let cardinality = Cardinality::try_from_code(cardinality_code)?;
bytes.advance(1);
match cardinality {
Cardinality::Full => Ok(ColumnIndex::Full),
Cardinality::Optional => {
let optional_index = super::optional_index::open_optional_index(bytes)?;
Ok(ColumnIndex::Optional(optional_index))
}
Cardinality::Multivalued => {
let multivalue_index = super::multivalued_index::open_multivalued_index(bytes)?;
Ok(ColumnIndex::Multivalued(multivalue_index))
}
}
}
// TODO unit tests
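// A minimal round-trip sketch for the TODO above (hedged: it only covers the
// `Full` cardinality and relies on `ColumnIndex::Full` being a unit variant,
// as used in `open_column_index`).
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_full_column_index_round_trip() {
        let mut buffer: Vec<u8> = Vec::new();
        let num_bytes = serialize_column_index(SerializableColumnIndex::Full, &mut buffer).unwrap();
        // The returned byte count should match what was actually written.
        assert_eq!(num_bytes as usize, buffer.len());
        let column_index = open_column_index(OwnedBytes::new(buffer)).unwrap();
        assert!(matches!(column_index, ColumnIndex::Full));
    }
}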

View File

@@ -0,0 +1,135 @@
use std::sync::Arc;
use common::OwnedBytes;
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use test::{self, Bencher};
use super::*;
use crate::column_values::u64_based::*;
fn get_data() -> Vec<u64> {
let mut rng = StdRng::seed_from_u64(2u64);
let mut data: Vec<_> = (100..55000_u64)
.map(|num| num + rng.gen::<u8>() as u64)
.collect();
data.push(99_000);
data.insert(1000, 2000);
data.insert(2000, 100);
data.insert(3000, 4100);
data.insert(4000, 100);
data.insert(5000, 800);
data
}
fn compute_stats(vals: impl Iterator<Item = u64>) -> ColumnStats {
let mut stats_collector = StatsCollector::default();
for val in vals {
stats_collector.collect(val);
}
stats_collector.stats()
}
#[inline(never)]
fn value_iter() -> impl Iterator<Item = u64> {
0..20_000
}
fn get_reader_for_bench<Codec: ColumnCodec>(data: &[u64]) -> Codec::ColumnValues {
let mut bytes = Vec::new();
let stats = compute_stats(data.iter().cloned());
let mut codec_serializer = Codec::estimator();
for val in data {
codec_serializer.collect(*val);
}
codec_serializer.serialize(&stats, Box::new(data.iter().copied()).as_mut(), &mut bytes);
Codec::load(OwnedBytes::new(bytes)).unwrap()
}
fn bench_get<Codec: ColumnCodec>(b: &mut Bencher, data: &[u64]) {
let col = get_reader_for_bench::<Codec>(data);
b.iter(|| {
let mut sum = 0u64;
for pos in value_iter() {
let val = col.get_val(pos as u32);
sum = sum.wrapping_add(val);
}
sum
});
}
#[inline(never)]
fn bench_get_dynamic_helper(b: &mut Bencher, col: Arc<dyn ColumnValues>) {
b.iter(|| {
let mut sum = 0u64;
for pos in value_iter() {
let val = col.get_val(pos as u32);
sum = sum.wrapping_add(val);
}
sum
});
}
fn bench_get_dynamic<Codec: ColumnCodec>(b: &mut Bencher, data: &[u64]) {
let col = Arc::new(get_reader_for_bench::<Codec>(data));
bench_get_dynamic_helper(b, col);
}
fn bench_create<Codec: ColumnCodec>(b: &mut Bencher, data: &[u64]) {
let stats = compute_stats(data.iter().cloned());
let mut bytes = Vec::new();
b.iter(|| {
bytes.clear();
let mut codec_serializer = Codec::estimator();
for val in data.iter().take(1024) {
codec_serializer.collect(*val);
}
codec_serializer.serialize(&stats, Box::new(data.iter().copied()).as_mut(), &mut bytes)
});
}
#[bench]
fn bench_fastfield_bitpack_create(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_create::<BitpackedCodec>(b, &data);
}
#[bench]
fn bench_fastfield_linearinterpol_create(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_create::<LinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_multilinearinterpol_create(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_create::<BlockwiseLinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_bitpack_get(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_get::<BitpackedCodec>(b, &data);
}
#[bench]
fn bench_fastfield_bitpack_get_dynamic(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_get_dynamic::<BitpackedCodec>(b, &data);
}
#[bench]
fn bench_fastfield_linearinterpol_get(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_get::<LinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_linearinterpol_get_dynamic(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_get_dynamic::<LinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_multilinearinterpol_get(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_get::<BlockwiseLinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_multilinearinterpol_get_dynamic(b: &mut Bencher) {
let data: Vec<_> = get_data();
bench_get_dynamic::<BlockwiseLinearCodec>(b, &data);
}

View File

@@ -0,0 +1,41 @@
use std::fmt::Debug;
use std::sync::Arc;
use crate::iterable::Iterable;
use crate::{ColumnIndex, ColumnValues, MergeRowOrder};
pub(crate) struct MergedColumnValues<'a, T> {
pub(crate) column_indexes: &'a [Option<ColumnIndex>],
pub(crate) column_values: &'a [Option<Arc<dyn ColumnValues<T>>>],
pub(crate) merge_row_order: &'a MergeRowOrder,
}
impl<'a, T: Copy + PartialOrd + Debug> Iterable<T> for MergedColumnValues<'a, T> {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = T> + '_> {
match self.merge_row_order {
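// Stack: all segments' values are chained in segment order; segments
// without this column (`None`) are skipped. Shuffled: rows are visited in
// the new row order, mapping each new row address back to its old segment
// and row id before fetching values.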
MergeRowOrder::Stack(_) => Box::new(
self.column_values
.iter()
.flatten()
.flat_map(|column_value| column_value.iter()),
),
MergeRowOrder::Shuffled(shuffle_merge_order) => Box::new(
shuffle_merge_order
.iter_new_to_old_row_addrs()
.flat_map(|row_addr| {
let column_index =
self.column_indexes[row_addr.segment_ord as usize].as_ref()?;
let column_values =
self.column_values[row_addr.segment_ord as usize].as_ref()?;
let value_range = column_index.value_row_ids(row_addr.row_id);
Some((value_range, column_values))
})
.flat_map(|(value_range, column_values)| {
value_range
.into_iter()
.map(|val| column_values.get_val(val))
}),
),
}
}
}

View File

@@ -0,0 +1,220 @@
#![warn(missing_docs)]
//! # `fastfield_codecs`
//!
//! - Columnar storage of data for tantivy [`Column`].
//! - Encode data in different codecs.
//! - Monotonically map values to u64/u128
use std::fmt::Debug;
use std::ops::{Range, RangeInclusive};
use std::sync::Arc;
pub use monotonic_mapping::{MonotonicallyMappableToU64, StrictlyMonotonicFn};
pub use monotonic_mapping_u128::MonotonicallyMappableToU128;
mod merge;
pub(crate) mod monotonic_mapping;
pub(crate) mod monotonic_mapping_u128;
mod stats;
mod u128_based;
mod u64_based;
mod vec_column;
mod monotonic_column;
pub(crate) use merge::MergedColumnValues;
pub use stats::ColumnStats;
pub use u128_based::{open_u128_mapped, serialize_column_values_u128};
pub use u64_based::{
load_u64_based_column_values, serialize_and_load_u64_based_column_values,
serialize_u64_based_column_values, CodecType, ALL_U64_CODEC_TYPES,
};
pub use vec_column::VecColumn;
pub use self::monotonic_column::monotonic_map_column;
use crate::RowId;
/// `ColumnValues` provides access to a dense field column.
///
/// `Column` is just a wrapper over `ColumnValues` and a `ColumnIndex`.
///
/// Any method with both a default and a specialized implementation must also be forwarded
/// by the wrappers that implement this trait: `Arc<dyn ColumnValues>` and
/// `MonotonicMappingColumn`.
pub trait ColumnValues<T: PartialOrd = u64>: Send + Sync {
/// Return the value associated with the given idx.
///
/// This accessor should return as fast as possible.
///
/// # Panics
///
/// May panic if `idx` is greater than or equal to the column length.
fn get_val(&self, idx: u32) -> T;
/// Allows pushing down multiple fetch calls to avoid dynamic dispatch overhead.
///
/// `idx` and `output` must have the same length.
///
/// # Panics
///
/// May panic if any index in `idx` is greater than or equal to the column length.
fn get_vals(&self, idx: &[u32], output: &mut [T]) {
assert!(idx.len() == output.len());
for (out, idx) in output.iter_mut().zip(idx.iter()) {
*out = self.get_val(*idx);
}
}
/// Fills an output buffer with the fast field values
/// associated with the `DocId`s going from
/// `start` to `start + output.len()`.
///
/// # Panics
///
/// Must panic if `start + output.len()` is greater than
/// the segment's `maxdoc`.
#[inline(always)]
fn get_range(&self, start: u64, output: &mut [T]) {
for (out, idx) in output.iter_mut().zip(start..) {
*out = self.get_val(idx as u32);
}
}
/// Get the row ids of values which are in the provided value range.
///
/// Note that `position == doc_id` for single-valued fast fields.
#[inline(always)]
fn get_row_ids_for_value_range(
&self,
value_range: RangeInclusive<T>,
row_id_range: Range<RowId>,
row_id_hits: &mut Vec<RowId>,
) {
let row_id_range = row_id_range.start..row_id_range.end.min(self.num_vals());
for idx in row_id_range.start..row_id_range.end {
let val = self.get_val(idx);
if value_range.contains(&val) {
row_id_hits.push(idx);
}
}
}
/// Returns the minimum value for this fast field.
///
/// This min_value may not be exact.
/// For instance, it does not take possibly deleted documents into account.
/// All values are however guaranteed to be greater than or equal to
/// `.min_value()`.
fn min_value(&self) -> T;
/// Returns the maximum value for this fast field.
///
/// This max_value may not be exact.
/// For instance, it does not take possibly deleted documents into account.
/// All values are however guaranteed to be lower than or equal to
/// `.max_value()`.
fn max_value(&self) -> T;
/// The number of values in the column.
fn num_vals(&self) -> u32;
/// Returns an iterator over the data.
fn iter<'a>(&'a self) -> Box<dyn Iterator<Item = T> + 'a> {
Box::new((0..self.num_vals()).map(|idx| self.get_val(idx)))
}
}
impl<T: Copy + PartialOrd + Debug> ColumnValues<T> for Arc<dyn ColumnValues<T>> {
#[inline(always)]
fn get_val(&self, idx: u32) -> T {
self.as_ref().get_val(idx)
}
#[inline(always)]
fn min_value(&self) -> T {
self.as_ref().min_value()
}
#[inline(always)]
fn max_value(&self) -> T {
self.as_ref().max_value()
}
#[inline(always)]
fn num_vals(&self) -> u32 {
self.as_ref().num_vals()
}
#[inline(always)]
fn iter<'b>(&'b self) -> Box<dyn Iterator<Item = T> + 'b> {
self.as_ref().iter()
}
#[inline(always)]
fn get_range(&self, start: u64, output: &mut [T]) {
self.as_ref().get_range(start, output)
}
#[inline(always)]
fn get_row_ids_for_value_range(
&self,
range: RangeInclusive<T>,
doc_id_range: Range<u32>,
positions: &mut Vec<u32>,
) {
self.as_ref()
.get_row_ids_for_value_range(range, doc_id_range, positions)
}
}
/// Wraps a cloneable iterator into a `ColumnValues`.
pub struct IterColumn<T>(T);
impl<T> From<T> for IterColumn<T>
where T: Iterator + Clone + ExactSizeIterator
{
fn from(iter: T) -> Self {
IterColumn(iter)
}
}
impl<T> ColumnValues<T::Item> for IterColumn<T>
where
T: Iterator + Clone + ExactSizeIterator + Send + Sync,
T::Item: PartialOrd + Debug,
{
fn get_val(&self, idx: u32) -> T::Item {
self.0.clone().nth(idx as usize).unwrap()
}
fn min_value(&self) -> T::Item {
self.0.clone().next().unwrap()
}
fn max_value(&self) -> T::Item {
self.0.clone().last().unwrap()
}
fn num_vals(&self) -> u32 {
self.0.len() as u32
}
fn iter(&self) -> Box<dyn Iterator<Item = T::Item> + '_> {
Box::new(self.0.clone())
}
}
#[cfg(all(test, feature = "unstable"))]
mod bench;
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_range_as_col() {
let col = IterColumn::from(10..100);
assert_eq!(col.num_vals(), 90);
assert_eq!(col.max_value(), 99);
}
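    // Hedged sketch: exercises the default `get_row_ids_for_value_range`
    // implementation, which linearly scans the row range and keeps rows whose
    // value falls inside the inclusive value range (assumes nothing beyond
    // the trait methods defined above).
    #[test]
    fn test_get_row_ids_for_value_range_default_impl() {
        let col = IterColumn::from(10u32..20u32);
        let mut hits = Vec::new();
        col.get_row_ids_for_value_range(12u32..=14u32, 0..col.num_vals(), &mut hits);
        // Values 12, 13, 14 live at rows 2, 3 and 4.
        assert_eq!(hits, vec![2, 3, 4]);
    }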
}

View File

@@ -0,0 +1,120 @@
use std::fmt::Debug;
use std::marker::PhantomData;
use std::ops::{Range, RangeInclusive};
use crate::column_values::monotonic_mapping::StrictlyMonotonicFn;
use crate::ColumnValues;
struct MonotonicMappingColumn<C, T, Input> {
from_column: C,
monotonic_mapping: T,
_phantom: PhantomData<Input>,
}
/// Creates a view of a column transformed by a strictly monotonic mapping. See
/// [`StrictlyMonotonicFn`].
///
/// E.g. a GCD mapping: `monotonic_mapping([100, 200, 300]) == [1, 2, 3]`.
/// `monotonic_mapping.mapping()` is expected to be injective, and we should always have
/// `monotonic_mapping.inverse(monotonic_mapping.mapping(el)) == el`.
///
/// The inverse of the mapping is required for:
/// `fn get_positions_for_value_range(&self, range: RangeInclusive<T>) -> Vec<u64>`
/// The user provides the original value range, and we need to map it monotonically, in the same
/// way the serialization did, before calling the underlying column.
///
/// Note that when opening a codec, the monotonic_mapping should be the inverse of the mapping
/// used during serialization. Therefore, the monotonic_mapping_inv used when opening is the
/// same as the monotonic_mapping used during serialization.
pub fn monotonic_map_column<C, T, Input, Output>(
from_column: C,
monotonic_mapping: T,
) -> impl ColumnValues<Output>
where
C: ColumnValues<Input>,
T: StrictlyMonotonicFn<Input, Output> + Send + Sync,
Input: PartialOrd + Debug + Send + Sync + Clone,
Output: PartialOrd + Debug + Send + Sync + Clone,
{
MonotonicMappingColumn {
from_column,
monotonic_mapping,
_phantom: PhantomData,
}
}
impl<C, T, Input, Output> ColumnValues<Output> for MonotonicMappingColumn<C, T, Input>
where
C: ColumnValues<Input>,
T: StrictlyMonotonicFn<Input, Output> + Send + Sync,
Input: PartialOrd + Send + Debug + Sync + Clone,
Output: PartialOrd + Send + Debug + Sync + Clone,
{
#[inline]
fn get_val(&self, idx: u32) -> Output {
let from_val = self.from_column.get_val(idx);
self.monotonic_mapping.mapping(from_val)
}
fn min_value(&self) -> Output {
let from_min_value = self.from_column.min_value();
self.monotonic_mapping.mapping(from_min_value)
}
fn max_value(&self) -> Output {
let from_max_value = self.from_column.max_value();
self.monotonic_mapping.mapping(from_max_value)
}
fn num_vals(&self) -> u32 {
self.from_column.num_vals()
}
fn iter(&self) -> Box<dyn Iterator<Item = Output> + '_> {
Box::new(
self.from_column
.iter()
.map(|el| self.monotonic_mapping.mapping(el)),
)
}
fn get_row_ids_for_value_range(
&self,
range: RangeInclusive<Output>,
doc_id_range: Range<u32>,
positions: &mut Vec<u32>,
) {
self.from_column.get_row_ids_for_value_range(
self.monotonic_mapping.inverse(range.start().clone())
..=self.monotonic_mapping.inverse(range.end().clone()),
doc_id_range,
positions,
)
}
// We voluntarily do not implement get_range as it yields a regression,
// and we do not have any specialized implementation anyway.
}
#[cfg(test)]
mod tests {
use super::*;
use crate::column_values::monotonic_mapping::{
StrictlyMonotonicMappingInverter, StrictlyMonotonicMappingToInternal,
};
use crate::column_values::VecColumn;
#[test]
fn test_monotonic_mapping_iter() {
let vals: Vec<u64> = (0..100u64).map(|el| el * 10).collect();
let col = VecColumn::from(&vals);
let mapped = monotonic_map_column(
col,
StrictlyMonotonicMappingInverter::from(StrictlyMonotonicMappingToInternal::<i64>::new()),
);
let val_i64s: Vec<u64> = mapped.iter().collect();
for i in 0..100 {
assert_eq!(val_i64s[i as usize], mapped.get_val(i));
}
}
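    // Hedged sketch: the inverse mapping lets a range query expressed in the
    // external (mapped) space be answered by the underlying column. Here the
    // u64 mapping is the identity, so the positions are easy to check.
    #[test]
    fn test_monotonic_mapping_get_row_ids_for_value_range() {
        let vals: Vec<u64> = (0..100u64).collect();
        let col = VecColumn::from(&vals);
        let mapped = monotonic_map_column(
            col,
            StrictlyMonotonicMappingInverter::from(StrictlyMonotonicMappingToInternal::<u64>::new()),
        );
        let mut positions = Vec::new();
        mapped.get_row_ids_for_value_range(10u64..=12u64, 0..100, &mut positions);
        assert_eq!(positions, vec![10, 11, 12]);
    }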
}

View File

@@ -0,0 +1,211 @@
use std::fmt::Debug;
use std::marker::PhantomData;
use common::DateTime;
use super::MonotonicallyMappableToU128;
use crate::RowId;
/// Monotonically maps a value to the u64 value space.
/// Monotonic mapping enables `PartialOrd` on the u64 space without conversion to the original
/// space.
pub trait MonotonicallyMappableToU64: 'static + PartialOrd + Debug + Copy + Send + Sync {
/// Converts a value to u64.
///
/// Internally all fast field values are encoded as u64.
fn to_u64(self) -> u64;
/// Converts a value from u64
///
/// Internally all fast field values are encoded as u64.
/// **Note: To be used for converting encoded Term, Posting values.**
fn from_u64(val: u64) -> Self;
}
/// Values need to be strictly monotonically mapped to an `Internal` value (u64 or u128) that can be
/// used in fast field codecs.
///
/// The monotonic mapping is required so that `PartialOrd` can be used on `Internal` without
/// converting to `External`.
///
/// All strictly monotonic functions are invertible because they are guaranteed to have a one-to-one
/// mapping from their range to their domain. The `inverse` method is required when opening a codec,
/// so a value can be converted back to its original domain (e.g. ip address or f64) from its
/// internal representation.
pub trait StrictlyMonotonicFn<External, Internal> {
/// Strictly monotonically maps the value from External to Internal.
fn mapping(&self, inp: External) -> Internal;
/// Inverse of `mapping`. Maps the value from Internal to External.
fn inverse(&self, out: Internal) -> External;
}
/// Inverts a strictly monotonic mapping from `StrictlyMonotonicFn<A, B>` to
/// `StrictlyMonotonicFn<B, A>`.
///
/// # Warning
///
/// This type comes with a footgun: a mapping being strictly monotonic does not imply that its
/// inverse is strictly monotonic over the entire `External` space, e.g. `a -> a * 2`. Use at
/// your own risk.
pub(crate) struct StrictlyMonotonicMappingInverter<T> {
orig_mapping: T,
}
impl<T> From<T> for StrictlyMonotonicMappingInverter<T> {
fn from(orig_mapping: T) -> Self {
Self { orig_mapping }
}
}
impl<From, To, T> StrictlyMonotonicFn<To, From> for StrictlyMonotonicMappingInverter<T>
where T: StrictlyMonotonicFn<From, To>
{
#[inline(always)]
fn mapping(&self, val: To) -> From {
self.orig_mapping.inverse(val)
}
#[inline(always)]
fn inverse(&self, val: From) -> To {
self.orig_mapping.mapping(val)
}
}
/// Applies the strictly monotonic mapping from `T` without any additional changes.
pub(crate) struct StrictlyMonotonicMappingToInternal<T> {
_phantom: PhantomData<T>,
}
impl<T> StrictlyMonotonicMappingToInternal<T> {
pub(crate) fn new() -> StrictlyMonotonicMappingToInternal<T> {
Self {
_phantom: PhantomData,
}
}
}
impl<External: MonotonicallyMappableToU128, T: MonotonicallyMappableToU128>
StrictlyMonotonicFn<External, u128> for StrictlyMonotonicMappingToInternal<T>
{
#[inline(always)]
fn mapping(&self, inp: External) -> u128 {
External::to_u128(inp)
}
#[inline(always)]
fn inverse(&self, out: u128) -> External {
External::from_u128(out)
}
}
impl<External: MonotonicallyMappableToU64, T: MonotonicallyMappableToU64>
StrictlyMonotonicFn<External, u64> for StrictlyMonotonicMappingToInternal<T>
{
#[inline(always)]
fn mapping(&self, inp: External) -> u64 {
External::to_u64(inp)
}
#[inline(always)]
fn inverse(&self, out: u64) -> External {
External::from_u64(out)
}
}
impl MonotonicallyMappableToU64 for u64 {
#[inline(always)]
fn to_u64(self) -> u64 {
self
}
#[inline(always)]
fn from_u64(val: u64) -> Self {
val
}
}
impl MonotonicallyMappableToU64 for i64 {
#[inline(always)]
fn to_u64(self) -> u64 {
common::i64_to_u64(self)
}
#[inline(always)]
fn from_u64(val: u64) -> Self {
common::u64_to_i64(val)
}
}
impl MonotonicallyMappableToU64 for DateTime {
#[inline(always)]
fn to_u64(self) -> u64 {
common::i64_to_u64(self.into_timestamp_micros())
}
#[inline(always)]
fn from_u64(val: u64) -> Self {
DateTime::from_timestamp_micros(common::u64_to_i64(val))
}
}
impl MonotonicallyMappableToU64 for bool {
#[inline(always)]
fn to_u64(self) -> u64 {
u64::from(self)
}
#[inline(always)]
fn from_u64(val: u64) -> Self {
val > 0
}
}
impl MonotonicallyMappableToU64 for RowId {
#[inline(always)]
fn to_u64(self) -> u64 {
u64::from(self)
}
#[inline(always)]
fn from_u64(val: u64) -> RowId {
val as RowId
}
}
// TODO remove me.
// Tantivy should refuse NaN values and work with NotNaN internally.
impl MonotonicallyMappableToU64 for f64 {
#[inline(always)]
fn to_u64(self) -> u64 {
common::f64_to_u64(self)
}
#[inline(always)]
fn from_u64(val: u64) -> Self {
common::u64_to_f64(val)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn strictly_monotonic_test() {
// identity mapping
test_round_trip(&StrictlyMonotonicMappingToInternal::<u64>::new(), 100u64);
// round trip to i64
test_round_trip(&StrictlyMonotonicMappingToInternal::<i64>::new(), 100u64);
// TODO
// identity mapping
// test_round_trip(&StrictlyMonotonicMappingToInternal::<u128>::new(), 100u128);
}
fn test_round_trip<T: StrictlyMonotonicFn<K, L>, K: std::fmt::Debug + Eq + Copy, L>(
mapping: &T,
test_val: K,
) {
assert_eq!(mapping.inverse(mapping.mapping(test_val)), test_val);
}
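    // Hedged sketch: order preservation is the whole point of the mapping;
    // i64 values must compare the same way after conversion to u64.
    #[test]
    fn test_i64_to_u64_preserves_order() {
        let vals = [i64::MIN, -1i64, 0i64, 1i64, i64::MAX];
        let mapped: Vec<u64> = vals.iter().map(|val| val.to_u64()).collect();
        assert!(mapped.windows(2).all(|pair| pair[0] < pair[1]));
    }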
}

View File

@@ -0,0 +1,41 @@
use std::fmt::Debug;
use std::net::Ipv6Addr;
/// Monotonically maps a value to the u128 value space.
/// Monotonic mapping enables `PartialOrd` on the u128 space without conversion to the original
/// space.
pub trait MonotonicallyMappableToU128: 'static + PartialOrd + Copy + Debug + Send + Sync {
/// Converts a value to u128.
///
/// Internally, u128-based fast field values are encoded as u128.
fn to_u128(self) -> u128;
/// Converts a value from u128
///
/// Internally, u128-based fast field values are encoded as u128.
/// **Note: To be used for converting encoded Term, Posting values.**
fn from_u128(val: u128) -> Self;
}
impl MonotonicallyMappableToU128 for u128 {
fn to_u128(self) -> u128 {
self
}
fn from_u128(val: u128) -> Self {
val
}
}
impl MonotonicallyMappableToU128 for Ipv6Addr {
fn to_u128(self) -> u128 {
ip_to_u128(self)
}
fn from_u128(val: u128) -> Self {
Ipv6Addr::from(val.to_be_bytes())
}
}
fn ip_to_u128(ip_addr: Ipv6Addr) -> u128 {
u128::from_be_bytes(ip_addr.octets())
}
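#[cfg(test)]
mod tests {
    use super::*;

    // Hedged sketch: round trip and order preservation for the Ipv6Addr
    // mapping, using only the trait methods defined above.
    #[test]
    fn test_ipv6_mapping_round_trip_and_order() {
        let low = Ipv6Addr::from(1u128);
        let high = Ipv6Addr::from(u128::MAX);
        assert_eq!(Ipv6Addr::from_u128(low.to_u128()), low);
        assert!(low.to_u128() < high.to_u128());
    }
}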

View File

@@ -0,0 +1,103 @@
use std::io;
use std::io::Write;
use std::num::NonZeroU64;
use common::{BinarySerializable, VInt};
use crate::RowId;
/// Column statistics.
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct ColumnStats {
/// GCD of the elements `el - min(column)`.
pub gcd: NonZeroU64,
/// Minimum value of the column.
pub min_value: u64,
/// Maximum value of the column.
pub max_value: u64,
/// Number of rows in the column.
pub num_rows: RowId,
}
impl ColumnStats {
/// Amplitude of the column: the difference between the maximum and the minimum value.
pub fn amplitude(&self) -> u64 {
self.max_value - self.min_value
}
}
impl BinarySerializable for ColumnStats {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
VInt(self.min_value).serialize(writer)?;
VInt(self.gcd.get()).serialize(writer)?;
VInt(self.amplitude() / self.gcd).serialize(writer)?;
VInt(self.num_rows as u64).serialize(writer)?;
Ok(())
}
fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let min_value = VInt::deserialize(reader)?.0;
let gcd = VInt::deserialize(reader)?.0;
let gcd = NonZeroU64::new(gcd)
.ok_or_else(|| io::Error::new(io::ErrorKind::InvalidData, "GCD of 0 is forbidden"))?;
let amplitude = VInt::deserialize(reader)?.0 * gcd.get();
let max_value = min_value + amplitude;
let num_rows = VInt::deserialize(reader)?.0 as RowId;
Ok(ColumnStats {
min_value,
max_value,
num_rows,
gcd,
})
}
}
#[cfg(test)]
mod tests {
use std::num::NonZeroU64;
use common::BinarySerializable;
use crate::column_values::ColumnStats;
#[track_caller]
fn test_stats_ser_deser_aux(stats: &ColumnStats, num_bytes: usize) {
let mut buffer: Vec<u8> = Vec::new();
stats.serialize(&mut buffer).unwrap();
assert_eq!(buffer.len(), num_bytes);
let deser_stats = ColumnStats::deserialize(&mut &buffer[..]).unwrap();
assert_eq!(stats, &deser_stats);
}
#[test]
fn test_stats_serialization() {
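// Expected sizes follow from the VInt layout in `serialize`: e.g. for the
// first case, min_value=1 (1 byte), gcd=3 (1 byte), amplitude/gcd=1000
// (2 bytes) and num_rows=10 (1 byte) add up to 5 bytes.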
test_stats_ser_deser_aux(
&(ColumnStats {
gcd: NonZeroU64::new(3).unwrap(),
min_value: 1,
max_value: 3001,
num_rows: 10,
}),
5,
);
test_stats_ser_deser_aux(
&(ColumnStats {
gcd: NonZeroU64::new(1_000).unwrap(),
min_value: 1,
max_value: 3001,
num_rows: 10,
}),
5,
);
test_stats_ser_deser_aux(
&(ColumnStats {
gcd: NonZeroU64::new(1).unwrap(),
min_value: 0,
max_value: 0,
num_rows: 0,
}),
4,
);
}
}

View File

@@ -57,7 +57,7 @@ fn num_bits(val: u128) -> u8 {
/// metadata.
pub fn get_compact_space(
values_deduped_sorted: &BTreeSet<u128>,
total_num_values: u64,
total_num_values: u32,
cost_per_blank: usize,
) -> CompactSpace {
let mut compact_space_builder = CompactSpaceBuilder::new();
@@ -208,7 +208,7 @@ impl CompactSpaceBuilder {
};
let covered_range_len = range_mapping.range_length();
ranges_mapping.push(range_mapping);
compact_start += covered_range_len as u64;
compact_start += covered_range_len;
}
// println!("num ranges {}", ranges_mapping.len());
CompactSpace { ranges_mapping }

View File

@@ -14,19 +14,19 @@ use std::{
cmp::Ordering,
collections::BTreeSet,
io::{self, Write},
ops::RangeInclusive,
ops::{Range, RangeInclusive},
};
use common::{BinarySerializable, CountingWriter, VInt, VIntU128};
use ownedbytes::OwnedBytes;
use tantivy_bitpacker::{self, BitPacker, BitUnpacker};
use crate::compact_space::build_compact_space::get_compact_space;
use crate::Column;
mod blank_range;
mod build_compact_space;
use build_compact_space::get_compact_space;
use common::{BinarySerializable, CountingWriter, OwnedBytes, VInt, VIntU128};
use tantivy_bitpacker::{self, BitPacker, BitUnpacker};
use crate::column_values::ColumnValues;
use crate::RowId;
/// The cost per blank is quite hard to pin down: since blanks are delta-encoded, the actual
/// cost of a blank depends on the number of blanks.
///
@@ -56,7 +56,7 @@ impl RangeMapping {
}
impl BinarySerializable for CompactSpace {
fn serialize<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
fn serialize<W: io::Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
VInt(self.ranges_mapping.len() as u64).serialize(writer)?;
let mut prev_value = 0;
@@ -97,7 +97,7 @@ impl BinarySerializable for CompactSpace {
};
let range_length = range_mapping.range_length();
ranges_mapping.push(range_mapping);
compact_start += range_length as u64;
compact_start += range_length;
}
Ok(Self { ranges_mapping })
@@ -159,23 +159,30 @@ impl CompactSpace {
pub struct CompactSpaceCompressor {
params: IPCodecParams,
}
#[derive(Debug, Clone)]
pub struct IPCodecParams {
compact_space: CompactSpace,
bit_unpacker: BitUnpacker,
min_value: u128,
max_value: u128,
num_vals: u64,
num_vals: RowId,
num_bits: u8,
}
impl CompactSpaceCompressor {
/// Taking the vals as Vec may cost a lot of memory. It is used to sort the vals.
pub fn train_from(column: &impl Column<u128>) -> Self {
let mut values_sorted = BTreeSet::new();
values_sorted.extend(column.iter());
let total_num_values = column.num_vals();
pub fn num_vals(&self) -> RowId {
self.params.num_vals
}
/// Taking the vals as Vec may cost a lot of memory. It is used to sort the vals.
pub fn train_from(iter: impl Iterator<Item = u128>) -> Self {
let mut values_sorted = BTreeSet::new();
let mut total_num_values = 0u32;
for val in iter {
total_num_values += 1u32;
values_sorted.insert(val);
}
let compact_space =
get_compact_space(&values_sorted, total_num_values, COST_PER_BLANK_IN_BITS);
let amplitude_compact_space = compact_space.amplitude_compact_space();
@@ -200,7 +207,7 @@ impl CompactSpaceCompressor {
bit_unpacker: BitUnpacker::new(num_bits),
min_value,
max_value,
num_vals: total_num_values as u64,
num_vals: total_num_values,
num_bits,
},
}
@@ -248,7 +255,7 @@ pub struct CompactSpaceDecompressor {
}
impl BinarySerializable for IPCodecParams {
fn serialize<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
fn serialize<W: io::Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
// header flags for future optional dictionary encoding
let footer_flags = 0u64;
footer_flags.serialize(writer)?;
@@ -267,7 +274,7 @@ impl BinarySerializable for IPCodecParams {
let _header_flags = u64::deserialize(reader)?;
let min_value = VIntU128::deserialize(reader)?.0;
let max_value = VIntU128::deserialize(reader)?.0;
let num_vals = VIntU128::deserialize(reader)?.0 as u64;
let num_vals = VIntU128::deserialize(reader)?.0 as u32;
let num_bits = u8::deserialize(reader)?;
let compact_space = CompactSpace::deserialize(reader)?;
@@ -282,9 +289,9 @@ impl BinarySerializable for IPCodecParams {
}
}
impl Column<u128> for CompactSpaceDecompressor {
impl ColumnValues<u128> for CompactSpaceDecompressor {
#[inline]
fn get_val(&self, doc: u64) -> u128 {
fn get_val(&self, doc: u32) -> u128 {
self.get(doc)
}
@@ -296,7 +303,7 @@ impl Column<u128> for CompactSpaceDecompressor {
self.max_value()
}
fn num_vals(&self) -> u64 {
fn num_vals(&self) -> u32 {
self.params.num_vals
}
@@ -304,8 +311,15 @@ impl Column<u128> for CompactSpaceDecompressor {
fn iter(&self) -> Box<dyn Iterator<Item = u128> + '_> {
Box::new(self.iter())
}
fn get_between_vals(&self, range: RangeInclusive<u128>) -> Vec<u64> {
self.get_between_vals(range)
#[inline]
fn get_row_ids_for_value_range(
&self,
value_range: RangeInclusive<u128>,
positions_range: Range<u32>,
positions: &mut Vec<u32>,
) {
self.get_positions_for_value_range(value_range, positions_range, positions)
}
}
@@ -340,12 +354,19 @@ impl CompactSpaceDecompressor {
/// Comparing on compact space: Real dataset 1.08 GElements/s
///
/// Comparing on original space: Real dataset .06 GElements/s (not completely optimized)
pub fn get_between_vals(&self, range: RangeInclusive<u128>) -> Vec<u64> {
if range.start() > range.end() {
return Vec::new();
#[inline]
pub fn get_positions_for_value_range(
&self,
value_range: RangeInclusive<u128>,
position_range: Range<u32>,
positions: &mut Vec<u32>,
) {
if value_range.start() > value_range.end() {
return;
}
let from_value = *range.start();
let to_value = *range.end();
let position_range = position_range.start..position_range.end.min(self.num_vals());
let from_value = *value_range.start();
let to_value = *value_range.end();
assert!(to_value >= from_value);
let compact_from = self.u128_to_compact(from_value);
let compact_to = self.u128_to_compact(to_value);
@@ -353,7 +374,7 @@ impl CompactSpaceDecompressor {
// Quick return, if both ranges fall into the same non-mapped space, the range can't cover
// any values, so we can early exit
match (compact_to, compact_from) {
(Err(pos1), Err(pos2)) if pos1 == pos2 => return Vec::new(),
(Err(pos1), Err(pos2)) if pos1 == pos2 => return,
_ => {}
}
@@ -375,19 +396,20 @@ impl CompactSpaceDecompressor {
});
let range = compact_from..=compact_to;
let mut positions = Vec::new();
let scan_num_docs = position_range.end - position_range.start;
let step_size = 4;
let cutoff = self.params.num_vals - self.params.num_vals % step_size;
let cutoff = position_range.start + scan_num_docs - scan_num_docs % step_size;
let mut push_if_in_range = |idx, val| {
if range.contains(&val) {
positions.push(idx);
}
};
let get_val = |idx| self.params.bit_unpacker.get(idx as u64, &self.data);
let get_val = |idx| self.params.bit_unpacker.get(idx, &self.data);
// unrolled loop
for idx in (0..cutoff).step_by(step_size as usize) {
for idx in (position_range.start..cutoff).step_by(step_size as usize) {
let idx1 = idx;
let idx2 = idx + 1;
let idx3 = idx + 2;
@@ -403,17 +425,14 @@ impl CompactSpaceDecompressor {
}
// handle rest
for idx in cutoff..self.params.num_vals {
for idx in cutoff..position_range.end {
push_if_in_range(idx, get_val(idx));
}
positions
}
#[inline]
fn iter_compact(&self) -> impl Iterator<Item = u64> + '_ {
(0..self.params.num_vals)
.map(move |idx| self.params.bit_unpacker.get(idx as u64, &self.data) as u64)
(0..self.params.num_vals).map(move |idx| self.params.bit_unpacker.get(idx, &self.data))
}
#[inline]
@@ -425,7 +444,7 @@ impl CompactSpaceDecompressor {
}
#[inline]
pub fn get(&self, idx: u64) -> u128 {
pub fn get(&self, idx: u32) -> u128 {
let compact = self.params.bit_unpacker.get(idx, &self.data);
self.compact_to_u128(compact)
}
@@ -442,8 +461,11 @@ impl CompactSpaceDecompressor {
#[cfg(test)]
mod tests {
use itertools::Itertools;
use super::*;
use crate::{open_u128, serialize_u128, VecColumn};
use crate::column_values::u128_based::U128Header;
use crate::column_values::{open_u128_mapped, serialize_column_values_u128};
#[test]
fn compact_space_test() {
@@ -452,7 +474,7 @@ mod tests {
]
.into_iter()
.collect();
let compact_space = get_compact_space(ips, ips.len() as u64, 11);
let compact_space = get_compact_space(ips, ips.len() as u32, 11);
let amplitude = compact_space.amplitude_compact_space();
assert_eq!(amplitude, 17);
assert_eq!(1, compact_space.u128_to_compact(2).unwrap());
@@ -483,24 +505,30 @@ mod tests {
#[test]
fn compact_space_amplitude_test() {
let ips = &[100000u128, 1000000].into_iter().collect();
let compact_space = get_compact_space(ips, ips.len() as u64, 1);
let compact_space = get_compact_space(ips, ips.len() as u32, 1);
let amplitude = compact_space.amplitude_compact_space();
assert_eq!(amplitude, 2);
}
fn test_all(data: OwnedBytes, expected: &[u128]) {
fn test_all(mut data: OwnedBytes, expected: &[u128]) {
let _header = U128Header::deserialize(&mut data);
let decompressor = CompactSpaceDecompressor::open(data).unwrap();
for (idx, expected_val) in expected.iter().cloned().enumerate() {
let val = decompressor.get(idx as u64);
let val = decompressor.get(idx as u32);
assert_eq!(val, expected_val);
let test_range = |range: RangeInclusive<u128>| {
let expected_positions = expected
.iter()
.positions(|val| range.contains(val))
.map(|pos| pos as u64)
.map(|pos| pos as u32)
.collect::<Vec<_>>();
let positions = decompressor.get_between_vals(range);
let mut positions = Vec::new();
decompressor.get_positions_for_value_range(
range,
0..decompressor.num_vals(),
&mut positions,
);
assert_eq!(positions, expected_positions);
};
@@ -513,8 +541,7 @@ mod tests {
fn test_aux_vals(u128_vals: &[u128]) -> OwnedBytes {
let mut out = Vec::new();
serialize_u128(VecColumn::from(u128_vals), &mut out).unwrap();
serialize_column_values_u128(&u128_vals, &mut out).unwrap();
let data = OwnedBytes::new(out);
test_all(data.clone(), u128_vals);
data
@@ -533,27 +560,110 @@ mod tests {
4_000_211_222u128,
333u128,
];
let data = test_aux_vals(vals);
let mut data = test_aux_vals(vals);
let _header = U128Header::deserialize(&mut data);
let decomp = CompactSpaceDecompressor::open(data).unwrap();
let positions = decomp.get_between_vals(0..=1);
let complete_range = 0..vals.len() as u32;
for (pos, val) in vals.iter().enumerate() {
let val = *val;
let pos = pos as u32;
let mut positions = Vec::new();
decomp.get_positions_for_value_range(val..=val, pos..pos + 1, &mut positions);
assert_eq!(positions, vec![pos]);
}
// handle docid range out of bounds
let positions: Vec<u32> = get_positions_for_value_range_helper(&decomp, 0..=1, 1..u32::MAX);
assert!(positions.is_empty());
let positions =
get_positions_for_value_range_helper(&decomp, 0..=1, complete_range.clone());
assert_eq!(positions, vec![0]);
let positions = decomp.get_between_vals(0..=2);
let positions =
get_positions_for_value_range_helper(&decomp, 0..=2, complete_range.clone());
assert_eq!(positions, vec![0]);
let positions = decomp.get_between_vals(0..=3);
let positions =
get_positions_for_value_range_helper(&decomp, 0..=3, complete_range.clone());
assert_eq!(positions, vec![0, 2]);
assert_eq!(decomp.get_between_vals(99999u128..=99999u128), vec![3]);
assert_eq!(decomp.get_between_vals(99999u128..=100000u128), vec![3, 4]);
assert_eq!(decomp.get_between_vals(99998u128..=100000u128), vec![3, 4]);
assert_eq!(decomp.get_between_vals(99998u128..=99999u128), vec![3]);
assert_eq!(decomp.get_between_vals(99998u128..=99998u128), vec![]);
assert_eq!(decomp.get_between_vals(333u128..=333u128), vec![8]);
assert_eq!(decomp.get_between_vals(332u128..=333u128), vec![8]);
assert_eq!(decomp.get_between_vals(332u128..=334u128), vec![8]);
assert_eq!(decomp.get_between_vals(333u128..=334u128), vec![8]);
assert_eq!(
get_positions_for_value_range_helper(
&decomp,
99999u128..=99999u128,
complete_range.clone()
),
vec![3]
);
assert_eq!(
get_positions_for_value_range_helper(
&decomp,
99999u128..=100000u128,
complete_range.clone()
),
vec![3, 4]
);
assert_eq!(
get_positions_for_value_range_helper(
&decomp,
99998u128..=100000u128,
complete_range.clone()
),
vec![3, 4]
);
assert_eq!(
&get_positions_for_value_range_helper(
&decomp,
99998u128..=99999u128,
complete_range.clone()
),
&[3]
);
assert!(get_positions_for_value_range_helper(
&decomp,
99998u128..=99998u128,
complete_range.clone()
)
.is_empty());
assert_eq!(
&get_positions_for_value_range_helper(
&decomp,
333u128..=333u128,
complete_range.clone()
),
&[8]
);
assert_eq!(
&get_positions_for_value_range_helper(
&decomp,
332u128..=333u128,
complete_range.clone()
),
&[8]
);
assert_eq!(
&get_positions_for_value_range_helper(
&decomp,
332u128..=334u128,
complete_range.clone()
),
&[8]
);
assert_eq!(
&get_positions_for_value_range_helper(
&decomp,
333u128..=334u128,
complete_range.clone()
),
&[8]
);
assert_eq!(
decomp.get_between_vals(4_000_211_221u128..=5_000_000_000u128),
vec![6, 7]
&get_positions_for_value_range_helper(
&decomp,
4_000_211_221u128..=5_000_000_000u128,
complete_range
),
&[6, 7]
);
}
@@ -575,14 +685,32 @@ mod tests {
4_000_211_222u128,
333u128,
];
let data = test_aux_vals(vals);
let mut data = test_aux_vals(vals);
let _header = U128Header::deserialize(&mut data);
let decomp = CompactSpaceDecompressor::open(data).unwrap();
let positions = decomp.get_between_vals(0..=5);
assert_eq!(positions, vec![]);
let positions = decomp.get_between_vals(0..=100);
assert_eq!(positions, vec![0]);
let positions = decomp.get_between_vals(0..=105);
assert_eq!(positions, vec![0]);
let complete_range = 0..vals.len() as u32;
assert!(
&get_positions_for_value_range_helper(&decomp, 0..=5, complete_range.clone())
.is_empty(),
);
assert_eq!(
&get_positions_for_value_range_helper(&decomp, 0..=100, complete_range.clone()),
&[0]
);
assert_eq!(
&get_positions_for_value_range_helper(&decomp, 0..=105, complete_range),
&[0]
);
}
fn get_positions_for_value_range_helper<C: ColumnValues<T> + ?Sized, T: PartialOrd>(
column: &C,
value_range: RangeInclusive<T>,
doc_id_range: Range<u32>,
) -> Vec<u32> {
let mut positions = Vec::new();
column.get_row_ids_for_value_range(value_range, doc_id_range, &mut positions);
positions
}
#[test]
@@ -603,13 +731,29 @@ mod tests {
5_000_000_000,
];
let mut out = Vec::new();
serialize_u128(VecColumn::from(vals), &mut out).unwrap();
let decomp = open_u128(OwnedBytes::new(out)).unwrap();
serialize_column_values_u128(&&vals[..], &mut out).unwrap();
let decomp = open_u128_mapped(OwnedBytes::new(out)).unwrap();
let complete_range = 0..vals.len() as u32;
assert_eq!(decomp.get_between_vals(199..=200), vec![0]);
assert_eq!(decomp.get_between_vals(199..=201), vec![0, 1]);
assert_eq!(decomp.get_between_vals(200..=200), vec![0]);
assert_eq!(decomp.get_between_vals(1_000_000..=1_000_000), vec![11]);
assert_eq!(
get_positions_for_value_range_helper(&*decomp, 199..=200, complete_range.clone()),
vec![0]
);
assert_eq!(
get_positions_for_value_range_helper(&*decomp, 199..=201, complete_range.clone()),
vec![0, 1]
);
assert_eq!(
get_positions_for_value_range_helper(&*decomp, 200..=200, complete_range.clone()),
vec![0]
);
assert_eq!(
get_positions_for_value_range_helper(&*decomp, 1_000_000..=1_000_000, complete_range),
vec![11]
);
}
#[test]
@@ -641,7 +785,7 @@ mod tests {
let vals = &[1_000_000_000u128; 100];
let _data = test_aux_vals(vals);
}
use itertools::Itertools;
use proptest::prelude::*;
fn num_strategy() -> impl Strategy<Value = u128> {
@@ -657,10 +801,9 @@ mod tests {
proptest! {
#![proptest_config(ProptestConfig::with_cases(10))]
#[test]
fn compress_decompress_random(vals in proptest::collection::vec(num_strategy()
, 1..1000)) {
let _data = test_aux_vals(&vals);
}
#[test]
fn compress_decompress_random(vals in proptest::collection::vec(num_strategy() , 1..1000)) {
let _data = test_aux_vals(&vals);
}
}
}

View File

@@ -0,0 +1,178 @@
use std::fmt::Debug;
use std::io;
use std::io::Write;
use std::sync::Arc;
mod compact_space;
use common::{BinarySerializable, OwnedBytes, VInt};
use compact_space::{CompactSpaceCompressor, CompactSpaceDecompressor};
use crate::column_values::monotonic_map_column;
use crate::column_values::monotonic_mapping::{
StrictlyMonotonicMappingInverter, StrictlyMonotonicMappingToInternal,
};
use crate::iterable::Iterable;
use crate::{ColumnValues, MonotonicallyMappableToU128};
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub(crate) struct U128Header {
pub num_vals: u32,
pub codec_type: U128FastFieldCodecType,
}
impl BinarySerializable for U128Header {
fn serialize<W: io::Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
VInt(self.num_vals as u64).serialize(writer)?;
self.codec_type.serialize(writer)?;
Ok(())
}
fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let num_vals = VInt::deserialize(reader)?.0 as u32;
let codec_type = U128FastFieldCodecType::deserialize(reader)?;
Ok(U128Header {
num_vals,
codec_type,
})
}
}
/// Serializes u128 values with the compact space codec.
pub fn serialize_column_values_u128<T: MonotonicallyMappableToU128>(
iterable: &dyn Iterable<T>,
output: &mut impl io::Write,
) -> io::Result<()> {
let compressor = CompactSpaceCompressor::train_from(
iterable
.boxed_iter()
.map(MonotonicallyMappableToU128::to_u128),
);
let header = U128Header {
num_vals: compressor.num_vals(),
codec_type: U128FastFieldCodecType::CompactSpace,
};
header.serialize(output)?;
compressor.compress_into(
iterable
.boxed_iter()
.map(MonotonicallyMappableToU128::to_u128),
output,
)?;
Ok(())
}
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
#[repr(u8)]
/// Available codecs for encoding data converted to u128 (via [`MonotonicallyMappableToU128`]).
pub(crate) enum U128FastFieldCodecType {
/// This codec takes a large number space (u128) and reduces it to a compact number space, by
/// removing the holes.
CompactSpace = 1,
}
impl BinarySerializable for U128FastFieldCodecType {
fn serialize<W: Write + ?Sized>(&self, wrt: &mut W) -> io::Result<()> {
self.to_code().serialize(wrt)
}
fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let code = u8::deserialize(reader)?;
let codec_type: Self = Self::from_code(code)
.ok_or_else(|| io::Error::new(io::ErrorKind::InvalidData, "Unknown code `{code}.`"))?;
Ok(codec_type)
}
}
impl U128FastFieldCodecType {
pub(crate) fn to_code(self) -> u8 {
self as u8
}
pub(crate) fn from_code(code: u8) -> Option<Self> {
match code {
1 => Some(Self::CompactSpace),
_ => None,
}
}
}
/// Returns the correct codec reader wrapped in the `Arc` for the data.
pub fn open_u128_mapped<T: MonotonicallyMappableToU128 + Debug>(
mut bytes: OwnedBytes,
) -> io::Result<Arc<dyn ColumnValues<T>>> {
let header = U128Header::deserialize(&mut bytes)?;
assert_eq!(header.codec_type, U128FastFieldCodecType::CompactSpace);
let reader = CompactSpaceDecompressor::open(bytes)?;
let inverted: StrictlyMonotonicMappingInverter<StrictlyMonotonicMappingToInternal<T>> =
StrictlyMonotonicMappingToInternal::<T>::new().into();
Ok(Arc::new(monotonic_map_column(reader, inverted)))
}
#[cfg(test)]
pub mod tests {
use super::*;
use crate::column_values::u64_based::{
serialize_and_load_u64_based_column_values, serialize_u64_based_column_values,
ALL_U64_CODEC_TYPES,
};
use crate::column_values::CodecType;
#[test]
fn test_serialize_deserialize_u128_header() {
let original = U128Header {
num_vals: 11,
codec_type: U128FastFieldCodecType::CompactSpace,
};
let mut out = Vec::new();
original.serialize(&mut out).unwrap();
let restored = U128Header::deserialize(&mut &out[..]).unwrap();
assert_eq!(restored, original);
}
#[test]
fn test_serialize_deserialize() {
let original = [1u64, 5u64, 10u64];
let restored: Vec<u64> =
serialize_and_load_u64_based_column_values(&&original[..], &ALL_U64_CODEC_TYPES)
.iter()
.collect();
assert_eq!(&restored, &original[..]);
}
#[test]
fn test_fastfield_bool_size_bitwidth_1() {
let mut buffer = Vec::new();
serialize_u64_based_column_values::<bool>(
&&[false, true][..],
&ALL_U64_CODEC_TYPES,
&mut buffer,
)
.unwrap();
// TODO put the header as a footer so that it serves as padding.
// 5 bytes of header, 1 byte of bitpacked values.
assert_eq!(buffer.len(), 5 + 1);
}
#[test]
fn test_fastfield_bool_bit_size_bitwidth_0() {
let mut buffer = Vec::new();
serialize_u64_based_column_values::<bool>(
&&[false, true][..],
&ALL_U64_CODEC_TYPES,
&mut buffer,
)
.unwrap();
// 6 bytes of header, 0 bytes of values.
assert_eq!(buffer.len(), 6);
}
#[test]
fn test_fastfield_gcd() {
let mut buffer = Vec::new();
let vals: Vec<u64> = (0..80).map(|val| (val % 7) * 1_000u64).collect();
serialize_u64_based_column_values(&&vals[..], &[CodecType::Bitpacked], &mut buffer)
.unwrap();
// Values are stored over 3 bits.
assert_eq!(buffer.len(), 6 + (3 * 80 / 8));
}
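    // Hedged sketch: round trip a few u128 values through the compact space
    // codec, using only the helpers defined in this module.
    #[test]
    fn test_u128_round_trip() {
        let vals: Vec<u128> = vec![1u128, 5_000u128, u128::MAX >> 1];
        let mut buffer = Vec::new();
        serialize_column_values_u128(&&vals[..], &mut buffer).unwrap();
        let column = open_u128_mapped::<u128>(OwnedBytes::new(buffer)).unwrap();
        let restored: Vec<u128> = column.iter().collect();
        assert_eq!(restored, vals);
    }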
}

View File

@@ -0,0 +1,127 @@
use std::io::{self, Write};
use common::{BinarySerializable, OwnedBytes};
use fastdivide::DividerU64;
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats};
use crate::{ColumnValues, RowId};
/// Reader for a bitpacked column.
/// Values are stored bitpacked as `(val - min_value) / gcd` over a fixed bit width.
#[derive(Clone)]
pub struct BitpackedReader {
data: OwnedBytes,
bit_unpacker: BitUnpacker,
stats: ColumnStats,
}
impl ColumnValues for BitpackedReader {
#[inline(always)]
fn get_val(&self, doc: u32) -> u64 {
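// Values are stored bitpacked as `(val - min_value) / gcd`; decoding
// multiplies the GCD back and re-adds the minimum.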
self.stats.min_value + self.stats.gcd.get() * self.bit_unpacker.get(doc, &self.data)
}
#[inline]
fn min_value(&self) -> u64 {
self.stats.min_value
}
#[inline]
fn max_value(&self) -> u64 {
self.stats.max_value
}
#[inline]
fn num_vals(&self) -> RowId {
self.stats.num_rows
}
}
fn num_bits(stats: &ColumnStats) -> u8 {
compute_num_bits(stats.amplitude() / stats.gcd)
}
#[derive(Default)]
pub struct BitpackedCodecEstimator;
impl ColumnCodecEstimator for BitpackedCodecEstimator {
fn collect(&mut self, _value: u64) {}
fn estimate(&self, stats: &ColumnStats) -> Option<u64> {
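// Estimated size: the serialized stats plus ceil(num_rows * bits_per_value / 8) bytes.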
let num_bits_per_value = num_bits(stats);
Some(stats.num_bytes() + (stats.num_rows as u64 * (num_bits_per_value as u64) + 7) / 8)
}
fn serialize(
&self,
stats: &ColumnStats,
vals: &mut dyn Iterator<Item = u64>,
wrt: &mut dyn Write,
) -> io::Result<()> {
stats.serialize(wrt)?;
let num_bits = num_bits(stats);
let mut bit_packer = BitPacker::new();
let divider = DividerU64::divide_by(stats.gcd.get());
for val in vals {
bit_packer.write(divider.divide(val - stats.min_value), num_bits, wrt)?;
}
bit_packer.close(wrt)?;
Ok(())
}
}
pub struct BitpackedCodec;
impl ColumnCodec for BitpackedCodec {
type ColumnValues = BitpackedReader;
type Estimator = BitpackedCodecEstimator;
/// Opens a fast field given its data.
fn load(mut data: OwnedBytes) -> io::Result<Self::ColumnValues> {
let stats = ColumnStats::deserialize(&mut data)?;
let num_bits = num_bits(&stats);
let bit_unpacker = BitUnpacker::new(num_bits);
Ok(BitpackedReader {
data,
bit_unpacker,
stats,
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::column_values::u64_based::tests::create_and_validate;
#[test]
fn test_with_codec_data_sets_simple() {
create_and_validate::<BitpackedCodec>(&[4, 3, 12], "name");
}
#[test]
fn test_with_codec_data_sets_simple_gcd() {
create_and_validate::<BitpackedCodec>(&[1000, 2000, 3000], "name");
}
#[test]
fn test_with_codec_data_sets() {
let data_sets = crate::column_values::u64_based::tests::get_codec_test_datasets();
for (mut data, name) in data_sets {
create_and_validate::<BitpackedCodec>(&data, name);
data.reverse();
create_and_validate::<BitpackedCodec>(&data, name);
}
}
#[test]
fn bitpacked_fast_field_rand() {
for _ in 0..500 {
let mut data = (0..1 + rand::random::<u8>() as usize)
.map(|_| rand::random::<i64>() as u64 / 2)
.collect::<Vec<_>>();
create_and_validate::<BitpackedCodec>(&data, "rand");
data.reverse();
create_and_validate::<BitpackedCodec>(&data, "rand");
}
}
}

View File

@@ -0,0 +1,281 @@
use std::io::Write;
use std::sync::Arc;
use std::{io, iter};
use common::{BinarySerializable, CountingWriter, DeserializeFrom, OwnedBytes};
use fastdivide::DividerU64;
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use crate::column_values::u64_based::line::Line;
use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats};
use crate::column_values::{ColumnValues, VecColumn};
use crate::MonotonicallyMappableToU64;
const BLOCK_SIZE: u32 = 512u32;
#[derive(Debug, Default)]
struct Block {
line: Line,
bit_unpacker: BitUnpacker,
data_start_offset: usize,
}
impl BinarySerializable for Block {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
self.line.serialize(writer)?;
self.bit_unpacker.bit_width().serialize(writer)?;
Ok(())
}
fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let line = Line::deserialize(reader)?;
let bit_width = u8::deserialize(reader)?;
Ok(Block {
line,
bit_unpacker: BitUnpacker::new(bit_width),
data_start_offset: 0,
})
}
}
fn compute_num_blocks(num_vals: u32) -> u32 {
(num_vals + BLOCK_SIZE - 1) / BLOCK_SIZE
}
pub struct BlockwiseLinearEstimator {
block: Vec<u64>,
values_num_bytes: u64,
meta_num_bytes: u64,
}
impl Default for BlockwiseLinearEstimator {
fn default() -> Self {
Self {
block: Vec::with_capacity(BLOCK_SIZE as usize),
values_num_bytes: 0u64,
meta_num_bytes: 0u64,
}
}
}
impl BlockwiseLinearEstimator {
fn flush_block_estimate(&mut self) {
if self.block.is_empty() {
return;
}
let line = Line::train(&VecColumn::from(&self.block));
let mut max_value = 0u64;
for (i, buffer_val) in self.block.iter().enumerate() {
let interpolated_val = line.eval(i as u32);
let val = buffer_val.wrapping_sub(interpolated_val);
max_value = val.max(max_value);
}
let bit_width = compute_num_bits(max_value) as usize;
self.values_num_bytes += (bit_width * self.block.len() + 7) as u64 / 8;
self.meta_num_bytes += 1 + line.num_bytes();
}
}
impl ColumnCodecEstimator for BlockwiseLinearEstimator {
fn collect(&mut self, value: u64) {
self.block.push(value);
if self.block.len() == BLOCK_SIZE as usize {
self.flush_block_estimate();
self.block.clear();
}
}
fn estimate(&self, stats: &ColumnStats) -> Option<u64> {
let mut estimate = 4 + stats.num_bytes() + self.meta_num_bytes + self.values_num_bytes;
if stats.gcd.get() > 1 {
let estimate_gain_from_gcd =
(stats.gcd.get() as f32).log2().floor() * stats.num_rows as f32 / 8.0f32;
estimate = estimate.saturating_sub(estimate_gain_from_gcd as u64);
}
Some(estimate)
}
fn finalize(&mut self) {
self.flush_block_estimate();
}
fn serialize(
&self,
stats: &ColumnStats,
mut vals: &mut dyn Iterator<Item = u64>,
wrt: &mut dyn Write,
) -> io::Result<()> {
stats.serialize(wrt)?;
let mut buffer = Vec::with_capacity(BLOCK_SIZE as usize);
let num_blocks = compute_num_blocks(stats.num_rows) as usize;
let mut blocks = Vec::with_capacity(num_blocks);
let mut bit_packer = BitPacker::new();
let gcd_divider = DividerU64::divide_by(stats.gcd.get());
for _ in 0..num_blocks {
buffer.clear();
buffer.extend(
(&mut vals)
.map(MonotonicallyMappableToU64::to_u64)
.take(BLOCK_SIZE as usize),
);
for buffer_val in buffer.iter_mut() {
*buffer_val = gcd_divider.divide(*buffer_val - stats.min_value);
}
let line = Line::train(&VecColumn::from(&buffer));
assert!(!buffer.is_empty());
for (i, buffer_val) in buffer.iter_mut().enumerate() {
let interpolated_val = line.eval(i as u32);
*buffer_val = buffer_val.wrapping_sub(interpolated_val);
}
let bit_width = buffer.iter().copied().map(compute_num_bits).max().unwrap();
for &buffer_val in &buffer {
bit_packer.write(buffer_val, bit_width, wrt)?;
}
blocks.push(Block {
line,
bit_unpacker: BitUnpacker::new(bit_width),
data_start_offset: 0,
});
}
bit_packer.close(wrt)?;
assert_eq!(blocks.len(), num_blocks);
let mut counting_wrt = CountingWriter::wrap(wrt);
for block in &blocks {
block.serialize(&mut counting_wrt)?;
}
let footer_len = counting_wrt.written_bytes();
(footer_len as u32).serialize(&mut counting_wrt)?;
Ok(())
}
}
pub struct BlockwiseLinearCodec;
impl ColumnCodec<u64> for BlockwiseLinearCodec {
type ColumnValues = BlockwiseLinearReader;
type Estimator = BlockwiseLinearEstimator;
fn load(mut bytes: OwnedBytes) -> io::Result<Self::ColumnValues> {
let stats = ColumnStats::deserialize(&mut bytes)?;
let footer_len: u32 = (&bytes[bytes.len() - 4..]).deserialize()?;
let footer_offset = bytes.len() - 4 - footer_len as usize;
let (data, mut footer) = bytes.split(footer_offset);
let num_blocks = compute_num_blocks(stats.num_rows);
let mut blocks: Vec<Block> = iter::repeat_with(|| Block::deserialize(&mut footer))
.take(num_blocks as usize)
.collect::<io::Result<_>>()?;
let mut start_offset = 0;
for block in &mut blocks {
block.data_start_offset = start_offset;
start_offset += (block.bit_unpacker.bit_width() as usize) * BLOCK_SIZE as usize / 8;
}
Ok(BlockwiseLinearReader {
blocks: blocks.into_boxed_slice().into(),
data,
stats,
})
}
}
#[derive(Clone)]
pub struct BlockwiseLinearReader {
blocks: Arc<[Block]>,
data: OwnedBytes,
stats: ColumnStats,
}
impl ColumnValues for BlockwiseLinearReader {
#[inline(always)]
fn get_val(&self, idx: u32) -> u64 {
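// Each block of `BLOCK_SIZE` values stores a trained line plus bitpacked
// deltas; decoding evaluates the line, re-adds the wrapped delta, and then
// undoes the GCD/min normalization.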
let block_id = (idx / BLOCK_SIZE) as usize;
let idx_within_block = idx % BLOCK_SIZE;
let block = &self.blocks[block_id];
let interpolated_val: u64 = block.line.eval(idx_within_block);
let block_bytes = &self.data[block.data_start_offset..];
let bitpacked_diff = block.bit_unpacker.get(idx_within_block, block_bytes);
// TODO optimize me! the line parameters could be tweaked to include the multiplication and
// remove the dependency.
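// Decode path: the stored diff `d` plus the line prediction `p` reconstruct
// `(val - min_value) / gcd == p + d`, hence `val = min_value + gcd * (p + d)`,
// with all arithmetic wrapping.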
self.stats.min_value
+ self
.stats
.gcd
.get()
.wrapping_mul(interpolated_val.wrapping_add(bitpacked_diff))
}
#[inline(always)]
fn min_value(&self) -> u64 {
self.stats.min_value
}
#[inline(always)]
fn max_value(&self) -> u64 {
self.stats.max_value
}
#[inline(always)]
fn num_vals(&self) -> u32 {
self.stats.num_rows
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::column_values::u64_based::tests::create_and_validate;
#[test]
fn test_with_codec_data_sets_simple() {
create_and_validate::<BlockwiseLinearCodec>(
&[11, 20, 40, 20, 10, 10, 10, 10, 10, 10],
"simple test",
)
.unwrap();
}
#[test]
fn test_with_codec_data_sets_simple_gcd() {
let (_, actual_compression_rate) = create_and_validate::<BlockwiseLinearCodec>(
&[10, 20, 40, 20, 10, 10, 10, 10, 10, 10],
"name",
)
.unwrap();
assert_eq!(actual_compression_rate, 0.175);
}
#[test]
fn test_with_codec_data_sets() {
let data_sets = crate::column_values::u64_based::tests::get_codec_test_datasets();
for (mut data, name) in data_sets {
create_and_validate::<BlockwiseLinearCodec>(&data, name);
data.reverse();
create_and_validate::<BlockwiseLinearCodec>(&data, name);
}
}
#[test]
fn test_blockwise_linear_fast_field_rand() {
for _ in 0..500 {
let mut data = (0..1 + rand::random::<u8>() as usize)
.map(|_| rand::random::<i64>() as u64 / 2)
.collect::<Vec<_>>();
create_and_validate::<BlockwiseLinearCodec>(&data, "rand");
data.reverse();
create_and_validate::<BlockwiseLinearCodec>(&data, "rand");
}
}
}


@@ -1,9 +1,9 @@
use std::io;
use std::num::NonZeroU64;
use std::num::NonZeroU32;
use common::{BinarySerializable, VInt};
use crate::Column;
use crate::column_values::ColumnValues;
const MID_POINT: u64 = (1u64 << 32) - 1u64;
@@ -17,8 +17,8 @@ const MID_POINT: u64 = (1u64 << 32) - 1u64;
/// `y = m * x >> 32 + b`
#[derive(Debug, Clone, Copy, Default)]
pub struct Line {
slope: u64,
intercept: u64,
pub(crate) slope: u64,
pub(crate) intercept: u64,
}
/// Compute the line slope.
@@ -29,7 +29,7 @@ pub struct Line {
/// compute_slope(y0, y1)
/// = compute_slope(y0 + X % 2^64, y1 + X % 2^64)
/// `
fn compute_slope(y0: u64, y1: u64, num_vals: NonZeroU64) -> u64 {
fn compute_slope(y0: u64, y1: u64, num_vals: NonZeroU32) -> u64 {
let dy = y1.wrapping_sub(y0);
let sign = dy <= (1 << 63);
let abs_dy = if sign {
@@ -43,7 +43,7 @@ fn compute_slope(y0: u64, y1: u64, num_vals: NonZeroU64) -> u64 {
return 0u64;
}
let abs_slope = (abs_dy << 32) / num_vals.get();
let abs_slope = (abs_dy << 32) / num_vals.get() as u64;
if sign {
abs_slope
} else {
@@ -62,29 +62,30 @@ fn compute_slope(y0: u64, y1: u64, num_vals: NonZeroU64) -> u64 {
impl Line {
#[inline(always)]
pub fn eval(&self, x: u64) -> u64 {
let linear_part = (x.wrapping_mul(self.slope) >> 32) as i32 as u64;
pub fn eval(&self, x: u32) -> u64 {
let linear_part = ((x as u64).wrapping_mul(self.slope) >> 32) as i32 as u64;
self.intercept.wrapping_add(linear_part)
}
// Same as train, but the intercept is only estimated from provided sample positions
pub fn estimate(ys: &dyn Column, sample_positions: &[u64]) -> Self {
Self::train_from(ys, sample_positions.iter().cloned())
}
// Intercept is only computed from provided positions
fn train_from(ys: &dyn Column, positions: impl Iterator<Item = u64>) -> Self {
let num_vals = if let Some(num_vals) = NonZeroU64::new(ys.num_vals() - 1) {
num_vals
pub fn train_from(
first_val: u64,
last_val: u64,
num_vals: u32,
positions_and_values: impl Iterator<Item = (u64, u64)>,
) -> Self {
// TODO replace with let else
let idx_last_val = if let Some(idx_last_val) = NonZeroU32::new(num_vals - 1) {
idx_last_val
} else {
return Line::default();
};
let y0 = ys.get_val(0);
let y1 = ys.get_val(num_vals.get());
let y0 = first_val;
let y1 = last_val;
// We first independently pick our slope.
let slope = compute_slope(y0, y1, num_vals);
let slope = compute_slope(y0, y1, idx_last_val);
// We picked our slope. Note that it does not have to be perfect.
// Now we need to compute the best intercept.
@@ -114,11 +115,8 @@ impl Line {
intercept: 0,
};
let heuristic_shift = y0.wrapping_sub(MID_POINT);
line.intercept = positions
.map(|pos| {
let y = ys.get_val(pos);
y.wrapping_sub(line.eval(pos))
})
line.intercept = positions_and_values
.map(|(pos, y)| y.wrapping_sub(line.eval(pos as u32)))
.min_by_key(|&val| val.wrapping_sub(heuristic_shift))
.unwrap_or(0u64); //< Never happens.
line
@@ -134,13 +132,21 @@ impl Line {
///
/// This function is only invariant under translation if all of the
/// `ys` are packed into half of the space. (See heuristic below)
pub fn train(ys: &dyn Column) -> Self {
Self::train_from(ys, 0..ys.num_vals())
/// TODO USE array
pub fn train(ys: &dyn ColumnValues) -> Self {
let first_val = ys.iter().next().unwrap();
let last_val = ys.iter().nth(ys.num_vals() as usize - 1).unwrap();
Self::train_from(
first_val,
last_val,
ys.num_vals(),
ys.iter().enumerate().map(|(pos, val)| (pos as u64, val)),
)
}
}
impl BinarySerializable for Line {
fn serialize<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
fn serialize<W: io::Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
VInt(self.slope).serialize(writer)?;
VInt(self.intercept).serialize(writer)?;
Ok(())
@@ -156,7 +162,7 @@ impl BinarySerializable for Line {
#[cfg(test)]
mod tests {
use super::*;
use crate::VecColumn;
use crate::column_values::VecColumn;
/// Test training a line and ensuring that the maximum difference between
/// the data points and the line is `expected`.
@@ -181,7 +187,7 @@ mod tests {
let line = Line::train(&VecColumn::from(&ys));
ys.iter()
.enumerate()
.map(|(x, y)| y.wrapping_sub(line.eval(x as u64)))
.map(|(x, y)| y.wrapping_sub(line.eval(x as u32)))
.max()
}
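// Illustrative sketch (standalone helper, not part of the diff above): `Line::eval`
// treats the slope as a 32.32 fixed-point number. The linear part is
// `(x * slope) >> 32`, truncated to i32 and sign-extended back to u64 before the
// wrapping add of the intercept, so "negative" slopes wrap correctly.
fn eval_fixed_point(slope: u64, intercept: u64, x: u32) -> u64 {
    let linear_part = ((x as u64).wrapping_mul(slope) >> 32) as i32 as u64;
    intercept.wrapping_add(linear_part)
}
// E.g. with slope = 3 << 32 (i.e. 3.0) and intercept = 10:
// eval_fixed_point(3 << 32, 10, 4) == 22.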


@@ -0,0 +1,277 @@
use std::io;
use common::{BinarySerializable, OwnedBytes};
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};
use super::line::Line;
use super::ColumnValues;
use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats};
use crate::column_values::VecColumn;
use crate::RowId;
const HALF_SPACE: u64 = u64::MAX / 2;
const LINE_ESTIMATION_BLOCK_LEN: usize = 512;
/// Reader for values encoded with the linear codec:
/// each value is stored as a bitpacked offset from a trained line.
#[derive(Clone)]
pub struct LinearReader {
data: OwnedBytes,
linear_params: LinearParams,
stats: ColumnStats,
}
impl ColumnValues for LinearReader {
#[inline]
fn get_val(&self, doc: u32) -> u64 {
let interpolated_val: u64 = self.linear_params.line.eval(doc);
let bitpacked_diff = self.linear_params.bit_unpacker.get(doc, &self.data);
interpolated_val.wrapping_add(bitpacked_diff)
}
#[inline(always)]
fn min_value(&self) -> u64 {
self.stats.min_value
}
#[inline(always)]
fn max_value(&self) -> u64 {
self.stats.max_value
}
#[inline]
fn num_vals(&self) -> u32 {
self.stats.num_rows
}
}
/// Fastfield serializer that approximates values by linear interpolation
/// and stores the differences from the interpolated values bitpacked.
pub struct LinearCodec;
#[derive(Debug, Clone)]
struct LinearParams {
line: Line,
bit_unpacker: BitUnpacker,
}
impl BinarySerializable for LinearParams {
fn serialize<W: io::Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
self.line.serialize(writer)?;
self.bit_unpacker.bit_width().serialize(writer)?;
Ok(())
}
fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let line = Line::deserialize(reader)?;
let bit_width = u8::deserialize(reader)?;
Ok(Self {
line,
bit_unpacker: BitUnpacker::new(bit_width),
})
}
}
pub struct LinearCodecEstimator {
block: Vec<u64>,
line: Option<Line>,
row_id: RowId,
min_deviation: u64,
max_deviation: u64,
first_val: u64,
last_val: u64,
}
impl Default for LinearCodecEstimator {
fn default() -> LinearCodecEstimator {
LinearCodecEstimator {
block: Vec::with_capacity(LINE_ESTIMATION_BLOCK_LEN),
line: None,
row_id: 0,
min_deviation: u64::MAX,
max_deviation: u64::MIN,
first_val: 0u64,
last_val: 0u64,
}
}
}
impl ColumnCodecEstimator for LinearCodecEstimator {
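// Deviations from the line are tracked shifted by HALF_SPACE, so that values
// below as well as above the line remain comparable with unsigned arithmetic.
// `finalize` folds the minimal deviation back into the intercept: the offsets
// serialized later then start at 0 and fit in
// `compute_num_bits(max_deviation - min_deviation)` bits.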
fn finalize(&mut self) {
if let Some(line) = self.line.as_mut() {
line.intercept = line
.intercept
.wrapping_add(self.min_deviation)
.wrapping_sub(HALF_SPACE);
}
}
fn estimate(&self, stats: &ColumnStats) -> Option<u64> {
let line = self.line?;
let amplitude = self.max_deviation - self.min_deviation;
let num_bits = compute_num_bits(amplitude);
let linear_params = LinearParams {
line,
bit_unpacker: BitUnpacker::new(num_bits),
};
Some(
stats.num_bytes()
+ linear_params.num_bytes()
+ (num_bits as u64 * stats.num_rows as u64 + 7) / 8,
)
}
fn serialize(
&self,
stats: &ColumnStats,
vals: &mut dyn Iterator<Item = u64>,
wrt: &mut dyn io::Write,
) -> io::Result<()> {
stats.serialize(wrt)?;
let line = self.line.unwrap();
let amplitude = self.max_deviation - self.min_deviation;
let num_bits = compute_num_bits(amplitude);
let linear_params = LinearParams {
line,
bit_unpacker: BitUnpacker::new(num_bits),
};
linear_params.serialize(wrt)?;
let mut bit_packer = BitPacker::new();
for (pos, value) in vals.enumerate() {
let calculated_value = line.eval(pos as u32);
let offset = value.wrapping_sub(calculated_value);
bit_packer.write(offset, num_bits, wrt)?;
}
bit_packer.close(wrt)?;
Ok(())
}
fn collect(&mut self, value: u64) {
if let Some(line) = self.line {
self.collect_after_line_estimation(&line, value);
} else {
self.collect_before_line_estimation(value);
}
}
}
impl LinearCodecEstimator {
#[inline]
fn collect_after_line_estimation(&mut self, line: &Line, value: u64) {
let interpolated_val: u64 = line.eval(self.row_id);
let deviation = value.wrapping_add(HALF_SPACE).wrapping_sub(interpolated_val);
self.min_deviation = self.min_deviation.min(deviation);
self.max_deviation = self.max_deviation.max(deviation);
if self.row_id == 0 {
self.first_val = value;
}
self.last_val = value;
self.row_id += 1u32;
}
#[inline]
fn collect_before_line_estimation(&mut self, value: u64) {
self.block.push(value);
if self.block.len() == LINE_ESTIMATION_BLOCK_LEN {
let line = Line::train(&VecColumn::from(&self.block));
let block = std::mem::take(&mut self.block);
for val in block {
self.collect_after_line_estimation(&line, val);
}
self.line = Some(line);
}
}
}
impl ColumnCodec for LinearCodec {
type ColumnValues = LinearReader;
type Estimator = LinearCodecEstimator;
fn load(mut data: OwnedBytes) -> io::Result<Self::ColumnValues> {
let stats = ColumnStats::deserialize(&mut data)?;
let linear_params = LinearParams::deserialize(&mut data)?;
Ok(LinearReader {
stats,
linear_params,
data,
})
}
}
#[cfg(test)]
mod tests {
use rand::RngCore;
use super::*;
use crate::column_values::u64_based::tests::{create_and_validate, get_codec_test_datasets};
#[test]
fn test_compression_simple() {
let vals = (100u64..)
.take(super::LINE_ESTIMATION_BLOCK_LEN)
.collect::<Vec<_>>();
create_and_validate::<LinearCodec>(&vals, "simple monotonically large").unwrap();
}
#[test]
fn test_compression() {
let data = (10..=6_000_u64).collect::<Vec<_>>();
let (estimate, actual_compression) =
create_and_validate::<LinearCodec>(&data, "simple monotonically large").unwrap();
assert_le!(actual_compression, 0.001);
assert_le!(estimate, 0.02);
}
#[test]
fn test_with_codec_datasets() {
let data_sets = get_codec_test_datasets();
for (mut data, name) in data_sets {
create_and_validate::<LinearCodec>(&data, name);
data.reverse();
create_and_validate::<LinearCodec>(&data, name);
}
}
#[test]
fn linear_interpol_fast_field_test_large_amplitude() {
let data = vec![
i64::MAX as u64 / 2,
i64::MAX as u64 / 3,
i64::MAX as u64 / 2,
];
create_and_validate::<LinearCodec>(&data, "large amplitude");
}
#[test]
fn overflow_error_test() {
let data = vec![1572656989877777, 1170935903116329, 720575940379279, 0];
create_and_validate::<LinearCodec>(&data, "overflow test");
}
#[test]
fn linear_interpol_fast_concave_data() {
let data = vec![0, 1, 2, 5, 8, 10, 20, 50];
create_and_validate::<LinearCodec>(&data, "concave data");
}
#[test]
fn linear_interpol_fast_convex_data() {
let data = vec![0, 40, 60, 70, 75, 77];
create_and_validate::<LinearCodec>(&data, "convex data");
}
#[test]
fn linear_interpol_fast_field_test_simple() {
let data = (10..=20_u64).collect::<Vec<_>>();
create_and_validate::<LinearCodec>(&data, "simple monotonically");
}
#[test]
fn linear_interpol_fast_field_rand() {
let mut rng = rand::thread_rng();
for _ in 0..50 {
let mut data = (0..10_000).map(|_| rng.next_u64()).collect::<Vec<_>>();
create_and_validate::<LinearCodec>(&data, "random");
data.reverse();
create_and_validate::<LinearCodec>(&data, "random");
}
}
}


@@ -0,0 +1,214 @@
mod bitpacked;
mod blockwise_linear;
mod line;
mod linear;
mod stats_collector;
use std::io;
use std::io::Write;
use std::sync::Arc;
use common::{BinarySerializable, OwnedBytes};
use crate::column_values::monotonic_mapping::{
StrictlyMonotonicMappingInverter, StrictlyMonotonicMappingToInternal,
};
pub use crate::column_values::u64_based::bitpacked::BitpackedCodec;
pub use crate::column_values::u64_based::blockwise_linear::BlockwiseLinearCodec;
pub use crate::column_values::u64_based::linear::LinearCodec;
pub use crate::column_values::u64_based::stats_collector::StatsCollector;
use crate::column_values::{monotonic_map_column, ColumnStats};
use crate::iterable::Iterable;
use crate::{ColumnValues, MonotonicallyMappableToU64};
/// A `ColumnCodecEstimator` is in charge of gathering all
/// data required to serialize a column.
///
/// This happens during a first pass on data of the column elements.
/// During that pass, all column estimators receive a call to their
/// `.collect(el)`.
///
/// After this first pass, `.finalize()` is called.
/// `.estimate(..)` should then return an accurate estimation of the
/// size of the serialized column (were we to pick this codec).
/// `.serialize(..)` then serializes the column using this codec.
pub trait ColumnCodecEstimator<T = u64>: 'static {
/// Records a new value for estimation.
/// This method will be called for each element of the column during
/// `estimation`.
fn collect(&mut self, value: u64);
/// Finalizes the first pass phase.
fn finalize(&mut self) {}
/// Returns an accurate estimation of the number of bytes that will
/// be used to represent this column.
fn estimate(&self, stats: &ColumnStats) -> Option<u64>;
/// Serializes the column using the given codec.
/// This constitutes a second pass over the column's values.
fn serialize(
&self,
stats: &ColumnStats,
vals: &mut dyn Iterator<Item = T>,
wrt: &mut dyn io::Write,
) -> io::Result<()>;
}
/// A column codec describes a column serialization format.
pub trait ColumnCodec<T: PartialOrd = u64> {
/// Specialized `ColumnValues` type.
type ColumnValues: ColumnValues<T> + 'static;
/// `Estimator` for the given codec.
type Estimator: ColumnCodecEstimator + Default;
/// Loads a column that has been serialized using this codec.
fn load(bytes: OwnedBytes) -> io::Result<Self::ColumnValues>;
/// Returns an estimator.
fn estimator() -> Self::Estimator {
Self::Estimator::default()
}
/// Returns a boxed estimator.
fn boxed_estimator() -> Box<dyn ColumnCodecEstimator> {
Box::new(Self::estimator())
}
}
/// Available codecs for encoding data converted to u64 (via [`MonotonicallyMappableToU64`]).
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
#[repr(u8)]
pub enum CodecType {
/// Bitpack all values in the value range. The number of bits is defined by the amplitude
/// `column.max_value() - column.min_value()`
Bitpacked = 0u8,
/// Linear interpolation puts a line between the first and last value and then bitpacks the
/// values by the offset from the line. The number of bits is defined by the max deviation from
/// the line.
Linear = 1u8,
/// Same as [`CodecType::Linear`], but encodes in blocks of 512 elements.
BlockwiseLinear = 2u8,
}
/// List of all available u64-based codecs.
pub const ALL_U64_CODEC_TYPES: [CodecType; 3] = [
CodecType::Bitpacked,
CodecType::Linear,
CodecType::BlockwiseLinear,
];
impl CodecType {
fn to_code(self) -> u8 {
self as u8
}
fn try_from_code(code: u8) -> Option<CodecType> {
match code {
0u8 => Some(CodecType::Bitpacked),
1u8 => Some(CodecType::Linear),
2u8 => Some(CodecType::BlockwiseLinear),
_ => None,
}
}
fn load<T: MonotonicallyMappableToU64>(
&self,
bytes: OwnedBytes,
) -> io::Result<Arc<dyn ColumnValues<T>>> {
match self {
CodecType::Bitpacked => load_specific_codec::<BitpackedCodec, T>(bytes),
CodecType::Linear => load_specific_codec::<LinearCodec, T>(bytes),
CodecType::BlockwiseLinear => load_specific_codec::<BlockwiseLinearCodec, T>(bytes),
}
}
}
fn load_specific_codec<C: ColumnCodec, T: MonotonicallyMappableToU64>(
bytes: OwnedBytes,
) -> io::Result<Arc<dyn ColumnValues<T>>> {
let reader = C::load(bytes)?;
let reader_typed = monotonic_map_column(
reader,
StrictlyMonotonicMappingInverter::from(StrictlyMonotonicMappingToInternal::<T>::new()),
);
Ok(Arc::new(reader_typed))
}
impl CodecType {
/// Returns a boxed codec estimator associated to a given `CodecType`.
pub fn estimator(&self) -> Box<dyn ColumnCodecEstimator> {
match self {
CodecType::Bitpacked => BitpackedCodec::boxed_estimator(),
CodecType::Linear => LinearCodec::boxed_estimator(),
CodecType::BlockwiseLinear => BlockwiseLinearCodec::boxed_estimator(),
}
}
}
/// Serializes a given column of u64-mapped values.
pub fn serialize_u64_based_column_values<T: MonotonicallyMappableToU64>(
vals: &dyn Iterable<T>,
codec_types: &[CodecType],
wrt: &mut dyn Write,
) -> io::Result<()> {
let mut stats_collector = StatsCollector::default();
let mut estimators: Vec<(CodecType, Box<dyn ColumnCodecEstimator>)> =
Vec::with_capacity(codec_types.len());
for &codec_type in codec_types {
estimators.push((codec_type, codec_type.estimator()));
}
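// First pass: gather global stats and feed every candidate estimator.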
for val in vals.boxed_iter() {
let val_u64 = val.to_u64();
stats_collector.collect(val_u64);
for (_, estimator) in &mut estimators {
estimator.collect(val_u64);
}
}
for (_, estimator) in &mut estimators {
estimator.finalize();
}
let stats = stats_collector.stats();
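// Pick the codec with the smallest estimated size. An estimator may return
// `None` to opt out (e.g. the linear codec on very short columns).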
let (_, best_codec, best_codec_estimator) = estimators
.into_iter()
.flat_map(|(codec_type, estimator)| {
let num_bytes = estimator.estimate(&stats)?;
Some((num_bytes, codec_type, estimator))
})
.min_by_key(|(num_bytes, _, _)| *num_bytes)
.ok_or_else(|| {
io::Error::new(io::ErrorKind::InvalidData, "No available applicable codec.")
})?;
best_codec.to_code().serialize(wrt)?;
best_codec_estimator.serialize(
&stats,
&mut vals.boxed_iter().map(MonotonicallyMappableToU64::to_u64),
wrt,
)?;
Ok(())
}
/// Load u64-based column values.
///
/// This method first identifies the codec from the first byte.
pub fn load_u64_based_column_values<T: MonotonicallyMappableToU64>(
mut bytes: OwnedBytes,
) -> io::Result<Arc<dyn ColumnValues<T>>> {
let codec_type: CodecType = bytes
.first()
.copied()
.and_then(CodecType::try_from_code)
.ok_or_else(|| io::Error::new(io::ErrorKind::InvalidData, "Failed to read codec type"))?;
bytes.advance(1);
codec_type.load(bytes)
}
/// Helper function to serialize a column (autodetect from all codecs) and then open it
pub fn serialize_and_load_u64_based_column_values<T: MonotonicallyMappableToU64>(
vals: &dyn Iterable,
codec_types: &[CodecType],
) -> Arc<dyn ColumnValues<T>> {
let mut buffer = Vec::new();
serialize_u64_based_column_values(vals, codec_types, &mut buffer).unwrap();
load_u64_based_column_values::<T>(OwnedBytes::new(buffer)).unwrap()
}
#[cfg(test)]
mod tests;


@@ -0,0 +1,200 @@
use std::num::NonZeroU64;
use fastdivide::DividerU64;
use crate::column_values::ColumnStats;
use crate::RowId;
/// Compute the gcd of two non null numbers.
///
/// It is recommended, but not required, to feed values such that `large >= small`.
fn compute_gcd(mut large: NonZeroU64, mut small: NonZeroU64) -> NonZeroU64 {
loop {
let rem: u64 = large.get() % small;
if let Some(new_small) = NonZeroU64::new(rem) {
(large, small) = (small, new_small);
} else {
return small;
}
}
}
#[derive(Default)]
pub struct StatsCollector {
min_max_opt: Option<(u64, u64)>,
num_rows: RowId,
// We measure the GCD of the difference between the values and the minimal value.
// This is the same as computing the difference between the values and the first value.
//
// This way, we can compress i64-converted-to-u64 (e.g. timestamp that were supplied in
// seconds, only to be converted in microseconds).
increment_gcd_opt: Option<(NonZeroU64, DividerU64)>,
first_value_opt: Option<u64>,
}
impl StatsCollector {
pub fn stats(&self) -> ColumnStats {
let (min_value, max_value) = self.min_max_opt.unwrap_or((0u64, 0u64));
let increment_gcd = if let Some((increment_gcd, _)) = self.increment_gcd_opt {
increment_gcd
} else {
NonZeroU64::new(1u64).unwrap()
};
ColumnStats {
min_value,
max_value,
num_rows: self.num_rows,
gcd: increment_gcd,
}
}
#[inline]
fn update_increment_gcd(&mut self, value: u64) {
let Some(first_value) = self.first_value_opt else {
// We set the first value and just quit.
self.first_value_opt = Some(value);
return;
};
let Some(non_zero_value) = NonZeroU64::new(value.abs_diff(first_value)) else {
// We can simply skip 0 values.
return;
};
let Some((gcd, gcd_divider)) = self.increment_gcd_opt else {
self.set_increment_gcd(non_zero_value);
return;
};
if gcd.get() == 1 {
// It won't see any update now.
return;
}
let remainder =
non_zero_value.get() - (gcd_divider.divide(non_zero_value.get())) * gcd.get();
if remainder == 0 {
return;
}
let new_gcd = compute_gcd(non_zero_value, gcd);
self.set_increment_gcd(new_gcd);
}
fn set_increment_gcd(&mut self, gcd: NonZeroU64) {
let new_divider = DividerU64::divide_by(gcd.get());
self.increment_gcd_opt = Some((gcd, new_divider));
}
pub fn collect(&mut self, value: u64) {
self.min_max_opt = Some(if let Some((min, max)) = self.min_max_opt {
(min.min(value), max.max(value))
} else {
(value, value)
});
self.num_rows += 1;
self.update_increment_gcd(value);
}
}
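// Illustrative sketch (hypothetical helper, not part of the diff): the GCD is
// measured on increments relative to the first value, so a large constant offset
// does not defeat it. Second-aligned timestamps stored in microseconds still
// yield gcd = 1_000_000.
#[allow(dead_code)]
fn gcd_of_increments_example() {
    let mut stats_collector = StatsCollector::default();
    for ts in [1_000_000_007_000_000u64, 1_000_000_009_000_000, 1_000_000_012_000_000] {
        stats_collector.collect(ts);
    }
    assert_eq!(stats_collector.stats().gcd.get(), 1_000_000);
}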
#[cfg(test)]
mod tests {
use std::num::NonZeroU64;
use crate::column_values::u64_based::stats_collector::{compute_gcd, StatsCollector};
use crate::column_values::u64_based::ColumnStats;
fn compute_stats(vals: impl Iterator<Item = u64>) -> ColumnStats {
let mut stats_collector = StatsCollector::default();
for val in vals {
stats_collector.collect(val);
}
stats_collector.stats()
}
fn find_gcd(vals: impl Iterator<Item = u64>) -> u64 {
compute_stats(vals).gcd.get()
}
#[test]
fn test_compute_gcd() {
let test_compute_gcd_aux = |large, small, expected| {
let large = NonZeroU64::new(large).unwrap();
let small = NonZeroU64::new(small).unwrap();
let expected = NonZeroU64::new(expected).unwrap();
assert_eq!(compute_gcd(small, large), expected);
assert_eq!(compute_gcd(large, small), expected);
};
test_compute_gcd_aux(1, 4, 1);
test_compute_gcd_aux(2, 4, 2);
test_compute_gcd_aux(10, 25, 5);
test_compute_gcd_aux(25, 25, 25);
}
#[test]
fn test_gcd() {
assert_eq!(find_gcd([0].into_iter()), 1);
assert_eq!(find_gcd([0, 10].into_iter()), 10);
assert_eq!(find_gcd([10, 0].into_iter()), 10);
assert_eq!(find_gcd([].into_iter()), 1);
assert_eq!(find_gcd([15, 30, 5, 10].into_iter()), 5);
assert_eq!(find_gcd([15, 16, 10].into_iter()), 1);
assert_eq!(find_gcd([0, 5, 5, 5].into_iter()), 5);
assert_eq!(find_gcd([0, 0].into_iter()), 1);
assert_eq!(find_gcd([1, 10, 4, 1, 7, 10].into_iter()), 3);
assert_eq!(find_gcd([1, 10, 0, 4, 1, 7, 10].into_iter()), 1);
}
#[test]
fn test_stats() {
assert_eq!(
compute_stats([].into_iter()),
ColumnStats {
gcd: NonZeroU64::new(1).unwrap(),
min_value: 0,
max_value: 0,
num_rows: 0
}
);
assert_eq!(
compute_stats([0, 1].into_iter()),
ColumnStats {
gcd: NonZeroU64::new(1).unwrap(),
min_value: 0,
max_value: 1,
num_rows: 2
}
);
assert_eq!(
compute_stats([0, 1].into_iter()),
ColumnStats {
gcd: NonZeroU64::new(1).unwrap(),
min_value: 0,
max_value: 1,
num_rows: 2
}
);
assert_eq!(
compute_stats([10, 20, 30].into_iter()),
ColumnStats {
gcd: NonZeroU64::new(10).unwrap(),
min_value: 10,
max_value: 30,
num_rows: 3
}
);
assert_eq!(
compute_stats([10, 50, 10, 30].into_iter()),
ColumnStats {
gcd: NonZeroU64::new(20).unwrap(),
min_value: 10,
max_value: 50,
num_rows: 4
}
);
assert_eq!(
compute_stats([10, 0, 30].into_iter()),
ColumnStats {
gcd: NonZeroU64::new(10).unwrap(),
min_value: 0,
max_value: 30,
num_rows: 3
}
);
}
}


@@ -0,0 +1,401 @@
use proptest::prelude::*;
use proptest::strategy::Strategy;
use proptest::{num, prop_oneof, proptest};
#[test]
fn test_serialize_and_load_simple() {
let mut buffer = Vec::new();
let vals = &[1u64, 2u64, 5u64];
serialize_u64_based_column_values(
&&vals[..],
&[CodecType::Bitpacked, CodecType::BlockwiseLinear],
&mut buffer,
)
.unwrap();
assert_eq!(buffer.len(), 7);
let col = load_u64_based_column_values::<u64>(OwnedBytes::new(buffer)).unwrap();
assert_eq!(col.num_vals(), 3);
assert_eq!(col.get_val(0), 1);
assert_eq!(col.get_val(1), 2);
assert_eq!(col.get_val(2), 5);
}
#[test]
fn test_empty_column_i64() {
let vals: [i64; 0] = [];
let mut num_acceptable_codecs = 0;
for codec in ALL_U64_CODEC_TYPES {
let mut buffer = Vec::new();
if serialize_u64_based_column_values(&&vals[..], &[codec], &mut buffer).is_err() {
continue;
}
num_acceptable_codecs += 1;
let col = load_u64_based_column_values::<i64>(OwnedBytes::new(buffer)).unwrap();
assert_eq!(col.num_vals(), 0);
assert_eq!(col.min_value(), i64::MIN);
assert_eq!(col.max_value(), i64::MIN);
}
assert!(num_acceptable_codecs > 0);
}
#[test]
fn test_empty_column_u64() {
let vals: [u64; 0] = [];
let mut num_acceptable_codecs = 0;
for codec in ALL_U64_CODEC_TYPES {
let mut buffer = Vec::new();
if serialize_u64_based_column_values(&&vals[..], &[codec], &mut buffer).is_err() {
continue;
}
num_acceptable_codecs += 1;
let col = load_u64_based_column_values::<u64>(OwnedBytes::new(buffer)).unwrap();
assert_eq!(col.num_vals(), 0);
assert_eq!(col.min_value(), u64::MIN);
assert_eq!(col.max_value(), u64::MIN);
}
assert!(num_acceptable_codecs > 0);
}
#[test]
fn test_empty_column_f64() {
let vals: [f64; 0] = [];
let mut num_acceptable_codecs = 0;
for codec in ALL_U64_CODEC_TYPES {
let mut buffer = Vec::new();
if serialize_u64_based_column_values(&&vals[..], &[codec], &mut buffer).is_err() {
continue;
}
num_acceptable_codecs += 1;
let col = load_u64_based_column_values::<f64>(OwnedBytes::new(buffer)).unwrap();
assert_eq!(col.num_vals(), 0);
// FIXME. f64::MIN would be better!
assert!(col.min_value().is_nan());
assert!(col.max_value().is_nan());
}
assert!(num_acceptable_codecs > 0);
}
pub(crate) fn create_and_validate<TColumnCodec: ColumnCodec>(
vals: &[u64],
name: &str,
) -> Option<(f32, f32)> {
let mut stats_collector = StatsCollector::default();
let mut codec_estimator: TColumnCodec::Estimator = Default::default();
for val in vals.boxed_iter() {
stats_collector.collect(val);
codec_estimator.collect(val);
}
codec_estimator.finalize();
let stats = stats_collector.stats();
let estimation = codec_estimator.estimate(&stats)?;
let mut buffer = Vec::new();
codec_estimator
.serialize(&stats, vals.boxed_iter().as_mut(), &mut buffer)
.unwrap();
let actual_compression = buffer.len() as u64;
let reader = TColumnCodec::load(OwnedBytes::new(buffer)).unwrap();
assert_eq!(reader.num_vals(), vals.len() as u32);
for (doc, orig_val) in vals.iter().copied().enumerate() {
let val = reader.get_val(doc as u32);
assert_eq!(
val, orig_val,
"val `{val}` does not match orig_val {orig_val:?}, in data set {name}, data `{vals:?}`",
);
}
if !vals.is_empty() {
let test_rand_idx = rand::thread_rng().gen_range(0..=vals.len() - 1);
let expected_positions: Vec<u32> = vals
.iter()
.enumerate()
.filter(|(_, el)| **el == vals[test_rand_idx])
.map(|(pos, _)| pos as u32)
.collect();
let mut positions = Vec::new();
reader.get_row_ids_for_value_range(
vals[test_rand_idx]..=vals[test_rand_idx],
0..vals.len() as u32,
&mut positions,
);
assert_eq!(expected_positions, positions);
}
if actual_compression > 1000 {
assert!(relative_difference(estimation, actual_compression) < 0.10f32);
}
Some((
compression_rate(estimation, stats.num_rows),
compression_rate(actual_compression, stats.num_rows),
))
}
fn compression_rate(num_bytes: u64, num_values: u32) -> f32 {
num_bytes as f32 / (num_values as f32 * 8.0)
}
fn relative_difference(left: u64, right: u64) -> f32 {
let left = left as f32;
let right = right as f32;
2.0f32 * (left - right).abs() / (left + right)
}
proptest! {
#![proptest_config(ProptestConfig::with_cases(100))]
#[test]
fn test_proptest_small_bitpacked(data in proptest::collection::vec(num_strategy(), 1..10)) {
create_and_validate::<BitpackedCodec>(&data, "proptest bitpacked");
}
#[test]
fn test_proptest_small_linear(data in proptest::collection::vec(num_strategy(), 1..10)) {
create_and_validate::<LinearCodec>(&data, "proptest linearinterpol");
}
#[test]
fn test_proptest_small_blockwise_linear(data in proptest::collection::vec(num_strategy(), 1..10)) {
create_and_validate::<BlockwiseLinearCodec>(&data, "proptest multilinearinterpol");
}
}
#[test]
fn test_small_blockwise_linear_example() {
create_and_validate::<BlockwiseLinearCodec>(
&[9223372036854775808, 9223370937344622593],
"proptest multilinearinterpol",
);
}
proptest! {
#![proptest_config(ProptestConfig::with_cases(10))]
#[test]
fn test_proptest_large_bitpacked(data in proptest::collection::vec(num_strategy(), 1..6000)) {
create_and_validate::<BitpackedCodec>(&data, "proptest bitpacked");
}
#[test]
fn test_proptest_large_linear(data in proptest::collection::vec(num_strategy(), 1..6000)) {
create_and_validate::<LinearCodec>(&data, "proptest linearinterpol");
}
#[test]
fn test_proptest_large_blockwise_linear(data in proptest::collection::vec(num_strategy(), 1..6000)) {
create_and_validate::<BlockwiseLinearCodec>(&data, "proptest multilinearinterpol");
}
}
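// Strategy biased towards the extremes: with weight 1 each, values within 10 of
// u64::MAX and values below 10; with weight 20, arbitrary u64 values.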
fn num_strategy() -> impl Strategy<Value = u64> {
prop_oneof![
1 => prop::num::u64::ANY.prop_map(|num| u64::MAX - (num % 10) ),
1 => prop::num::u64::ANY.prop_map(|num| num % 10 ),
20 => prop::num::u64::ANY,
]
}
pub fn get_codec_test_datasets() -> Vec<(Vec<u64>, &'static str)> {
let mut data_and_names = vec![];
let data = (10..=10_000_u64).collect::<Vec<_>>();
data_and_names.push((data, "simple monotonically increasing"));
data_and_names.push((
vec![5, 6, 7, 8, 9, 10, 99, 100],
"offset in linear interpol",
));
data_and_names.push((vec![5, 50, 3, 13, 1, 1000, 35], "rand small"));
data_and_names.push((vec![10], "single value"));
data_and_names.push((
vec![1572656989877777, 1170935903116329, 720575940379279, 0],
"overflow error",
));
data_and_names
}
fn test_codec<C: ColumnCodec>() {
let codec_name = std::any::type_name::<C>();
for (data, dataset_name) in get_codec_test_datasets() {
let estimate_actual_opt: Option<(f32, f32)> =
tests::create_and_validate::<C>(&data, dataset_name);
let result = if let Some((estimate, actual)) = estimate_actual_opt {
format!("Estimate `{estimate}` Actual `{actual}`")
} else {
"Disabled".to_string()
};
println!("Codec {codec_name}, DataSet {dataset_name}, {result}");
}
}
#[test]
fn test_codec_bitpacking() {
test_codec::<BitpackedCodec>();
}
#[test]
fn test_codec_interpolation() {
test_codec::<LinearCodec>();
}
#[test]
fn test_codec_multi_interpolation() {
test_codec::<BlockwiseLinearCodec>();
}
use super::*;
fn estimate<C: ColumnCodec>(vals: &[u64]) -> Option<f32> {
let mut stats_collector = StatsCollector::default();
let mut estimator = C::Estimator::default();
for &val in vals {
stats_collector.collect(val);
estimator.collect(val);
}
estimator.finalize();
let stats = stats_collector.stats();
let num_bytes = estimator.estimate(&stats)?;
if stats.num_rows == 0 {
return None;
}
Some(num_bytes as f32 / (8.0 * stats.num_rows as f32))
}
#[test]
fn estimation_good_interpolation_case() {
let data = (10..=20000_u64).collect::<Vec<_>>();
let linear_interpol_estimation = estimate::<LinearCodec>(&data).unwrap();
assert_le!(linear_interpol_estimation, 0.01);
let multi_linear_interpol_estimation = estimate::<BlockwiseLinearCodec>(&data).unwrap();
assert_le!(multi_linear_interpol_estimation, 0.2);
assert_lt!(linear_interpol_estimation, multi_linear_interpol_estimation);
let bitpacked_estimation = estimate::<BitpackedCodec>(&data).unwrap();
assert_lt!(linear_interpol_estimation, bitpacked_estimation);
}
#[test]
fn estimation_test_bad_interpolation_case_monotonically_increasing() {
let mut data: Vec<u64> = (201..=20000_u64).collect();
data.push(1_000_000);
// In this case the linear interpolation cannot actually be worse than
// bitpacking, but the estimator adds some threshold, which leads to a worse
// estimate.
let linear_interpol_estimation = estimate::<LinearCodec>(&data[..]).unwrap();
assert_le!(linear_interpol_estimation, 0.35);
let bitpacked_estimation = estimate::<BitpackedCodec>(&data).unwrap();
assert_le!(bitpacked_estimation, 0.32);
assert_le!(bitpacked_estimation, linear_interpol_estimation);
}
#[test]
fn test_fast_field_codec_type_to_code() {
let mut count_codec = 0;
for code in 0..=255 {
if let Some(codec_type) = CodecType::try_from_code(code) {
assert_eq!(codec_type.to_code(), code);
count_codec += 1;
}
}
assert_eq!(count_codec, 3);
}
fn test_fastfield_gcd_i64_with_codec(codec_type: CodecType, num_vals: usize) -> io::Result<()> {
let mut vals: Vec<i64> = (-4..=(num_vals as i64) - 5).map(|val| val * 1000).collect();
let mut buffer: Vec<u8> = Vec::new();
crate::column_values::serialize_u64_based_column_values(
&&vals[..],
&[codec_type],
&mut buffer,
)?;
let buffer = OwnedBytes::new(buffer);
let column = crate::column_values::load_u64_based_column_values::<i64>(buffer.clone())?;
assert_eq!(column.get_val(0), -4000i64);
assert_eq!(column.get_val(1), -3000i64);
assert_eq!(column.get_val(2), -2000i64);
assert_eq!(column.max_value(), (num_vals as i64 - 5) * 1000);
assert_eq!(column.min_value(), -4000i64);
// Can't apply gcd
let mut buffer_without_gcd = Vec::new();
vals.pop();
vals.push(1001i64);
crate::column_values::serialize_u64_based_column_values(
&&vals[..],
&[codec_type],
&mut buffer_without_gcd,
)?;
let buffer_without_gcd = OwnedBytes::new(buffer_without_gcd);
assert!(buffer_without_gcd.len() > buffer.len());
Ok(())
}
#[test]
fn test_fastfield_gcd_i64() -> io::Result<()> {
for &codec_type in &[
CodecType::Bitpacked,
CodecType::BlockwiseLinear,
CodecType::Linear,
] {
test_fastfield_gcd_i64_with_codec(codec_type, 5500)?;
}
Ok(())
}
fn test_fastfield_gcd_u64_with_codec(codec_type: CodecType, num_vals: usize) -> io::Result<()> {
let mut vals: Vec<u64> = (1..=num_vals).map(|i| i as u64 * 1000u64).collect();
let mut buffer: Vec<u8> = Vec::new();
crate::column_values::serialize_u64_based_column_values(
&&vals[..],
&[codec_type],
&mut buffer,
)?;
let buffer = OwnedBytes::new(buffer);
let column = crate::column_values::load_u64_based_column_values::<u64>(buffer.clone())?;
assert_eq!(column.get_val(0), 1000u64);
assert_eq!(column.get_val(1), 2000u64);
assert_eq!(column.get_val(2), 3000u64);
assert_eq!(column.max_value(), num_vals as u64 * 1000);
assert_eq!(column.min_value(), 1000u64);
// Can't apply gcd
let mut buffer_without_gcd = Vec::new();
vals.pop();
vals.push(1001u64);
crate::column_values::serialize_u64_based_column_values(
&&vals[..],
&[codec_type],
&mut buffer_without_gcd,
)?;
let buffer_without_gcd = OwnedBytes::new(buffer_without_gcd);
assert!(buffer_without_gcd.len() > buffer.len());
Ok(())
}
#[test]
fn test_fastfield_gcd_u64() -> io::Result<()> {
for &codec_type in &[
CodecType::Bitpacked,
CodecType::BlockwiseLinear,
CodecType::Linear,
] {
test_fastfield_gcd_u64_with_codec(codec_type, 5500)?;
}
Ok(())
}
#[test]
pub fn test_fastfield2() {
let test_fastfield = crate::column_values::serialize_and_load_u64_based_column_values::<u64>(
&&[100u64, 200u64, 300u64][..],
&ALL_U64_CODEC_TYPES,
);
assert_eq!(test_fastfield.get_val(0), 100);
assert_eq!(test_fastfield.get_val(1), 200);
assert_eq!(test_fastfield.get_val(2), 300);
}


@@ -0,0 +1,52 @@
use std::fmt::Debug;
use tantivy_bitpacker::minmax;
use crate::ColumnValues;
/// VecColumn provides `ColumnValues` over a slice.
pub struct VecColumn<'a, T = u64> {
pub(crate) values: &'a [T],
pub(crate) min_value: T,
pub(crate) max_value: T,
}
impl<'a, T: Copy + PartialOrd + Send + Sync + Debug> ColumnValues<T> for VecColumn<'a, T> {
fn get_val(&self, position: u32) -> T {
self.values[position as usize]
}
fn iter(&self) -> Box<dyn Iterator<Item = T> + '_> {
Box::new(self.values.iter().copied())
}
fn min_value(&self) -> T {
self.min_value
}
fn max_value(&self) -> T {
self.max_value
}
fn num_vals(&self) -> u32 {
self.values.len() as u32
}
fn get_range(&self, start: u64, output: &mut [T]) {
output.copy_from_slice(&self.values[start as usize..][..output.len()])
}
}
impl<'a, T: Copy + PartialOrd + Default, V> From<&'a V> for VecColumn<'a, T>
where V: AsRef<[T]> + ?Sized
{
fn from(values: &'a V) -> Self {
let values = values.as_ref();
let (min_value, max_value) = minmax(values.iter().copied()).unwrap_or_default();
Self {
values,
min_value,
max_value,
}
}
}


@@ -0,0 +1,163 @@
use std::fmt::Debug;
use std::net::Ipv6Addr;
use serde::{Deserialize, Serialize};
use crate::value::NumericalType;
use crate::InvalidData;
/// The column type describes the type of the data stored in a column.
/// Any changes need to be propagated to `COLUMN_TYPES`.
#[derive(Hash, Eq, PartialEq, Debug, Clone, Copy, Ord, PartialOrd, Serialize, Deserialize)]
#[repr(u8)]
pub enum ColumnType {
I64 = 0u8,
U64 = 1u8,
F64 = 2u8,
Bytes = 3u8,
Str = 4u8,
Bool = 5u8,
IpAddr = 6u8,
DateTime = 7u8,
}
// The order needs to match _exactly_ the order in the enum
const COLUMN_TYPES: [ColumnType; 8] = [
ColumnType::I64,
ColumnType::U64,
ColumnType::F64,
ColumnType::Bytes,
ColumnType::Str,
ColumnType::Bool,
ColumnType::IpAddr,
ColumnType::DateTime,
];
impl ColumnType {
pub fn to_code(self) -> u8 {
self as u8
}
pub(crate) fn try_from_code(code: u8) -> Result<ColumnType, InvalidData> {
COLUMN_TYPES.get(code as usize).copied().ok_or(InvalidData)
}
}
impl From<NumericalType> for ColumnType {
fn from(numerical_type: NumericalType) -> Self {
match numerical_type {
NumericalType::I64 => ColumnType::I64,
NumericalType::U64 => ColumnType::U64,
NumericalType::F64 => ColumnType::F64,
}
}
}
impl ColumnType {
pub fn numerical_type(&self) -> Option<NumericalType> {
match self {
ColumnType::I64 => Some(NumericalType::I64),
ColumnType::U64 => Some(NumericalType::U64),
ColumnType::F64 => Some(NumericalType::F64),
ColumnType::Bytes
| ColumnType::Str
| ColumnType::Bool
| ColumnType::IpAddr
| ColumnType::DateTime => None,
}
}
}
// TODO remove if possible
pub trait HasAssociatedColumnType: 'static + Debug + Send + Sync + Copy + PartialOrd {
fn column_type() -> ColumnType;
fn default_value() -> Self;
}
impl HasAssociatedColumnType for u64 {
fn column_type() -> ColumnType {
ColumnType::U64
}
fn default_value() -> Self {
0u64
}
}
impl HasAssociatedColumnType for i64 {
fn column_type() -> ColumnType {
ColumnType::I64
}
fn default_value() -> Self {
0i64
}
}
impl HasAssociatedColumnType for f64 {
fn column_type() -> ColumnType {
ColumnType::F64
}
fn default_value() -> Self {
Default::default()
}
}
impl HasAssociatedColumnType for bool {
fn column_type() -> ColumnType {
ColumnType::Bool
}
fn default_value() -> Self {
Default::default()
}
}
impl HasAssociatedColumnType for common::DateTime {
fn column_type() -> ColumnType {
ColumnType::DateTime
}
fn default_value() -> Self {
Default::default()
}
}
impl HasAssociatedColumnType for Ipv6Addr {
fn column_type() -> ColumnType {
ColumnType::IpAddr
}
fn default_value() -> Self {
Ipv6Addr::from([0u8; 16])
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::Cardinality;
#[test]
fn test_column_type_to_code() {
for (code, expected_column_type) in super::COLUMN_TYPES.iter().copied().enumerate() {
if let Ok(column_type) = ColumnType::try_from_code(code as u8) {
assert_eq!(column_type, expected_column_type);
}
}
for code in COLUMN_TYPES.len() as u8..=u8::MAX {
assert!(ColumnType::try_from_code(code).is_err());
}
}
#[test]
fn test_cardinality_to_code() {
let mut num_cardinality = 0;
for code in u8::MIN..=u8::MAX {
if let Ok(cardinality) = Cardinality::try_from_code(code) {
assert_eq!(cardinality.to_code(), code);
num_cardinality += 1;
}
}
assert_eq!(num_cardinality, 3);
}
}


@@ -0,0 +1,73 @@
use crate::InvalidData;
pub const VERSION_FOOTER_NUM_BYTES: usize = MAGIC_BYTES.len() + std::mem::size_of::<u32>();
/// We end the file with these 4 bytes to help identify that
/// this is indeed a columnar file.
const MAGIC_BYTES: [u8; 4] = [2, 113, 119, 66];
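// Footer layout: [version: u32 little-endian][MAGIC_BYTES], 8 bytes in total.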
pub fn footer() -> [u8; VERSION_FOOTER_NUM_BYTES] {
let mut footer_bytes = [0u8; VERSION_FOOTER_NUM_BYTES];
footer_bytes[0..4].copy_from_slice(&Version::V1.to_bytes());
footer_bytes[4..8].copy_from_slice(&MAGIC_BYTES[..]);
footer_bytes
}
pub fn parse_footer(footer_bytes: [u8; VERSION_FOOTER_NUM_BYTES]) -> Result<Version, InvalidData> {
if footer_bytes[4..8] != MAGIC_BYTES {
return Err(InvalidData);
}
Version::try_from_bytes(footer_bytes[0..4].try_into().unwrap())
}
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
#[repr(u32)]
pub enum Version {
V1 = 1u32,
}
impl Version {
fn to_bytes(self) -> [u8; 4] {
(self as u32).to_le_bytes()
}
fn try_from_bytes(bytes: [u8; 4]) -> Result<Version, InvalidData> {
let code = u32::from_le_bytes(bytes);
match code {
1u32 => Ok(Version::V1),
_ => Err(InvalidData),
}
}
}
#[cfg(test)]
mod tests {
use std::collections::HashSet;
use super::*;
#[test]
fn test_footer_dserialization() {
let parsed_version: Version = parse_footer(footer()).unwrap();
assert_eq!(Version::V1, parsed_version);
}
#[test]
fn test_version_serialization() {
let version_to_tests: Vec<u32> = [0, 1 << 8, 1 << 16, 1 << 24]
.iter()
.copied()
.flat_map(|offset| (0..255).map(move |el| el + offset))
.collect();
let mut valid_versions: HashSet<u32> = HashSet::default();
for &i in &version_to_tests {
let version_res = Version::try_from_bytes(i.to_le_bytes());
if let Ok(version) = version_res {
assert_eq!(version, Version::V1);
assert_eq!(version.to_bytes(), i.to_le_bytes());
valid_versions.insert(i);
}
}
assert_eq!(valid_versions.len(), 1);
}
}


@@ -0,0 +1,204 @@
use std::io::{self, Write};
use common::{BitSet, CountingWriter, ReadOnlyBitSet};
use sstable::{SSTable, TermOrdinal};
use super::term_merger::TermMerger;
use crate::column::serialize_column_mappable_to_u64;
use crate::column_index::SerializableColumnIndex;
use crate::iterable::Iterable;
use crate::{BytesColumn, MergeRowOrder, ShuffleMergeOrder};
// Serialize [Dictionary, Column, dictionary num bytes U32::LE]
// Column: [Column Index, Column Values, column index num bytes U32::LE]
pub fn merge_bytes_or_str_column(
column_index: SerializableColumnIndex<'_>,
bytes_columns: &[Option<BytesColumn>],
merge_row_order: &MergeRowOrder,
output: &mut impl Write,
) -> io::Result<()> {
// Serialize dict and generate mapping for values
let mut output = CountingWriter::wrap(output);
// TODO !!! Remove useless terms.
let term_ord_mapping = serialize_merged_dict(bytes_columns, merge_row_order, &mut output)?;
let dictionary_num_bytes: u32 = output.written_bytes() as u32;
let output = output.finish();
let remapped_term_ordinals_values = RemappedTermOrdinalsValues {
bytes_columns,
term_ord_mapping: &term_ord_mapping,
merge_row_order,
};
serialize_column_mappable_to_u64(column_index, &remapped_term_ordinals_values, output)?;
output.write_all(&dictionary_num_bytes.to_le_bytes())?;
Ok(())
}
struct RemappedTermOrdinalsValues<'a> {
bytes_columns: &'a [Option<BytesColumn>],
term_ord_mapping: &'a TermOrdinalMapping,
merge_row_order: &'a MergeRowOrder,
}
impl<'a> Iterable for RemappedTermOrdinalsValues<'a> {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = u64> + '_> {
match self.merge_row_order {
MergeRowOrder::Stack(_) => self.boxed_iter_stacked(),
MergeRowOrder::Shuffled(shuffle_merge_order) => {
self.boxed_iter_shuffled(shuffle_merge_order)
}
}
}
}
impl<'a> RemappedTermOrdinalsValues<'a> {
fn boxed_iter_stacked(&self) -> Box<dyn Iterator<Item = u64> + '_> {
let iter = self
.bytes_columns
.iter()
.enumerate()
.flat_map(|(segment_ord, byte_column)| {
let segment_ord = self.term_ord_mapping.get_segment(segment_ord as u32);
byte_column.iter().flat_map(move |bytes_column| {
bytes_column
.ords()
.values
.iter()
.map(move |term_ord| segment_ord[term_ord as usize])
})
});
// TODO see if we can better decompose the mapping and the stacking
Box::new(iter)
}
fn boxed_iter_shuffled<'b>(
&'b self,
shuffle_merge_order: &'b ShuffleMergeOrder,
) -> Box<dyn Iterator<Item = u64> + 'b> {
Box::new(
shuffle_merge_order
.iter_new_to_old_row_addrs()
.flat_map(move |old_addr| {
let segment_ord = self.term_ord_mapping.get_segment(old_addr.segment_ord);
self.bytes_columns[old_addr.segment_ord as usize]
.as_ref()
.into_iter()
.flat_map(move |bytes_column| {
bytes_column
.term_ords(old_addr.row_id)
.map(|old_term_ord: u64| segment_ord[old_term_ord as usize])
})
}),
)
}
}
fn compute_term_bitset(column: &BytesColumn, row_bitset: &ReadOnlyBitSet) -> BitSet {
let num_terms = column.dictionary().num_terms();
let mut term_bitset = BitSet::with_max_value(num_terms as u32);
for row_id in row_bitset.iter() {
for term_ord in column.term_ord_column.values_for_doc(row_id) {
term_bitset.insert(term_ord as u32);
}
}
term_bitset
}
fn is_term_present(bitsets: &[Option<BitSet>], term_merger: &TermMerger) -> bool {
for (segment_ord, from_term_ord) in term_merger.matching_segments() {
if let Some(bitset) = bitsets[segment_ord].as_ref() {
if bitset.contains(from_term_ord as u32) {
return true;
}
} else {
return true;
}
}
false
}
fn serialize_merged_dict(
bytes_columns: &[Option<BytesColumn>],
merge_row_order: &MergeRowOrder,
output: &mut impl Write,
) -> io::Result<TermOrdinalMapping> {
let mut term_ord_mapping = TermOrdinalMapping::default();
let mut field_term_streams = Vec::new();
for column in bytes_columns.iter().flatten() {
term_ord_mapping.add_segment(column.dictionary.num_terms());
let terms = column.dictionary.stream()?;
field_term_streams.push(terms);
}
let mut merged_terms = TermMerger::new(field_term_streams);
let mut sstable_builder = sstable::VoidSSTable::writer(output);
// TODO support complex `merge_row_order`.
match merge_row_order {
MergeRowOrder::Stack(_) => {
let mut current_term_ord = 0;
while merged_terms.advance() {
let term_bytes: &[u8] = merged_terms.key();
sstable_builder.insert(term_bytes, &())?;
for (segment_ord, from_term_ord) in merged_terms.matching_segments() {
term_ord_mapping.register_from_to(segment_ord, from_term_ord, current_term_ord);
}
current_term_ord += 1;
}
sstable_builder.finish()?;
}
MergeRowOrder::Shuffled(shuffle_merge_order) => {
assert_eq!(shuffle_merge_order.alive_bitsets.len(), bytes_columns.len());
let mut term_bitsets: Vec<Option<BitSet>> = Vec::with_capacity(bytes_columns.len());
for (alive_bitset_opt, bytes_column_opt) in shuffle_merge_order
.alive_bitsets
.iter()
.zip(bytes_columns.iter())
{
match (alive_bitset_opt, bytes_column_opt) {
(Some(alive_bitset), Some(bytes_column)) => {
let term_bitset = compute_term_bitset(bytes_column, alive_bitset);
term_bitsets.push(Some(term_bitset));
}
_ => {
term_bitsets.push(None);
}
}
}
let mut current_term_ord = 0;
while merged_terms.advance() {
let term_bytes: &[u8] = merged_terms.key();
if !is_term_present(&term_bitsets[..], &merged_terms) {
continue;
}
sstable_builder.insert(term_bytes, &())?;
for (segment_ord, from_term_ord) in merged_terms.matching_segments() {
term_ord_mapping.register_from_to(segment_ord, from_term_ord, current_term_ord);
}
current_term_ord += 1;
}
sstable_builder.finish()?;
}
}
Ok(term_ord_mapping)
}
#[derive(Default, Debug)]
struct TermOrdinalMapping {
per_segment_new_term_ordinals: Vec<Vec<TermOrdinal>>,
}
impl TermOrdinalMapping {
fn add_segment(&mut self, max_term_ord: usize) {
self.per_segment_new_term_ordinals
.push(vec![TermOrdinal::default(); max_term_ord]);
}
fn register_from_to(&mut self, segment_ord: usize, from_ord: TermOrdinal, to_ord: TermOrdinal) {
self.per_segment_new_term_ordinals[segment_ord][from_ord as usize] = to_ord;
}
fn get_segment(&self, segment_ord: u32) -> &[TermOrdinal] {
&(self.per_segment_new_term_ordinals[segment_ord as usize])[..]
}
}
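// Example: merging two segments whose dictionaries are ["b"] and ["a", "b"].
// The merged dictionary is ["a", "b"], so the mapping is built as:
//   add_segment(1); add_segment(2);
//   register_from_to(0, 0, 1); // segment 0: "b" (ord 0) -> merged ord 1
//   register_from_to(1, 0, 0); // segment 1: "a" (ord 0) -> merged ord 0
//   register_from_to(1, 1, 1); // segment 1: "b" (ord 1) -> merged ord 1
// get_segment(0) then yields [1], and get_segment(1) yields [0, 1].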


@@ -0,0 +1,118 @@
use std::ops::Range;
use common::{BitSet, OwnedBytes, ReadOnlyBitSet};
use crate::{ColumnarReader, RowAddr, RowId};
pub struct StackMergeOrder {
// This does not start at 0. The first element is the number of
// rows in the first columnar.
cumulated_row_ids: Vec<RowId>,
}
impl StackMergeOrder {
pub fn stack(columnars: &[&ColumnarReader]) -> StackMergeOrder {
let mut cumulated_row_ids: Vec<RowId> = Vec::with_capacity(columnars.len());
let mut cumulated_row_id = 0;
for columnar in columnars {
cumulated_row_id += columnar.num_rows();
cumulated_row_ids.push(cumulated_row_id);
}
StackMergeOrder { cumulated_row_ids }
}
pub fn num_rows(&self) -> RowId {
self.cumulated_row_ids.last().copied().unwrap_or(0)
}
pub fn offset(&self, columnar_id: usize) -> RowId {
if columnar_id == 0 {
return 0;
}
self.cumulated_row_ids[columnar_id - 1]
}
pub fn columnar_range(&self, columnar_id: usize) -> Range<RowId> {
self.offset(columnar_id)..self.offset(columnar_id + 1)
}
}
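// Example: stacking two columnars with 3 and 5 rows yields
// cumulated_row_ids == [3, 8], so columnar_range(0) == 0..3 and
// columnar_range(1) == 3..8.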
pub enum MergeRowOrder {
/// Columnar tables are simply stacked one above the other.
/// If the i-th columnar reader has `n_rows_i` rows, then,
/// in the resulting columnar,
/// rows `[0..n_rows_0)` contain the rows of `columnar_readers[0]`, in order,
/// rows `[n_rows_0..n_rows_0 + n_rows_1)` contain the rows of `columnar_readers[1]`, in order,
/// etc.
/// No document is deleted.
Stack(StackMergeOrder),
/// A more complex mapping that may interleave rows from the different readers,
/// drop rows, or both.
Shuffled(ShuffleMergeOrder),
}
impl From<StackMergeOrder> for MergeRowOrder {
fn from(stack_merge_order: StackMergeOrder) -> MergeRowOrder {
MergeRowOrder::Stack(stack_merge_order)
}
}
impl From<ShuffleMergeOrder> for MergeRowOrder {
fn from(shuffle_merge_order: ShuffleMergeOrder) -> MergeRowOrder {
MergeRowOrder::Shuffled(shuffle_merge_order)
}
}
impl MergeRowOrder {
pub fn num_rows(&self) -> RowId {
match self {
MergeRowOrder::Stack(stack_row_order) => stack_row_order.num_rows(),
MergeRowOrder::Shuffled(complex_mapping) => complex_mapping.num_rows(),
}
}
}
pub struct ShuffleMergeOrder {
pub new_row_id_to_old_row_id: Vec<RowAddr>,
pub alive_bitsets: Vec<Option<ReadOnlyBitSet>>,
}
impl ShuffleMergeOrder {
pub fn for_test(
segment_num_rows: &[RowId],
new_row_id_to_old_row_id: Vec<RowAddr>,
) -> ShuffleMergeOrder {
let mut alive_bitsets: Vec<BitSet> = segment_num_rows
.iter()
.map(|&num_rows| BitSet::with_max_value(num_rows))
.collect();
for &RowAddr {
segment_ord,
row_id,
} in &new_row_id_to_old_row_id
{
alive_bitsets[segment_ord as usize].insert(row_id);
}
let alive_bitsets: Vec<Option<ReadOnlyBitSet>> = alive_bitsets
.into_iter()
.map(|alive_bitset| {
let mut buffer = Vec::new();
alive_bitset.serialize(&mut buffer).unwrap();
let data = OwnedBytes::new(buffer);
Some(ReadOnlyBitSet::open(data))
})
.collect();
ShuffleMergeOrder {
new_row_id_to_old_row_id,
alive_bitsets,
}
}
pub fn num_rows(&self) -> RowId {
self.new_row_id_to_old_row_id.len() as RowId
}
pub fn iter_new_to_old_row_addrs(&self) -> impl Iterator<Item = RowAddr> + '_ {
self.new_row_id_to_old_row_id.iter().copied()
}
}


@@ -0,0 +1,375 @@
mod merge_dict_column;
mod merge_mapping;
mod term_merger;
use std::collections::{BTreeMap, HashMap, HashSet};
use std::io;
use std::net::Ipv6Addr;
use std::sync::Arc;
pub use merge_mapping::{MergeRowOrder, ShuffleMergeOrder, StackMergeOrder};
use super::writer::ColumnarSerializer;
use crate::column::{serialize_column_mappable_to_u128, serialize_column_mappable_to_u64};
use crate::column_values::MergedColumnValues;
use crate::columnar::merge::merge_dict_column::merge_bytes_or_str_column;
use crate::columnar::writer::CompatibleNumericalTypes;
use crate::columnar::ColumnarReader;
use crate::dynamic_column::DynamicColumn;
use crate::{
BytesColumn, Column, ColumnIndex, ColumnType, ColumnValues, NumericalType, NumericalValue,
};
/// Column types are grouped into different categories.
/// After merge, all columns belonging to the same category are coerced to
/// the same column type.
///
/// In practice, today, only numerical columns are coerced into one type.
///
/// See also [README.md].
#[derive(Copy, Clone, Eq, PartialEq, Hash, Debug)]
enum ColumnTypeCategory {
Bool,
Str,
Numerical,
DateTime,
Bytes,
IpAddr,
}
impl From<ColumnType> for ColumnTypeCategory {
fn from(column_type: ColumnType) -> Self {
match column_type {
ColumnType::I64 => ColumnTypeCategory::Numerical,
ColumnType::U64 => ColumnTypeCategory::Numerical,
ColumnType::F64 => ColumnTypeCategory::Numerical,
ColumnType::Bytes => ColumnTypeCategory::Bytes,
ColumnType::Str => ColumnTypeCategory::Str,
ColumnType::Bool => ColumnTypeCategory::Bool,
ColumnType::IpAddr => ColumnTypeCategory::IpAddr,
ColumnType::DateTime => ColumnTypeCategory::DateTime,
}
}
}
/// Merge several columnar tables together.
///
/// If several columns with the same name have conflicting numerical types in the
/// input columnars, the first compatible type out of i64, u64, f64 (in that order) will be used.
///
/// `required_columns` makes it possible to ensure that some columns will be present in the
/// resulting columnar. When a required column is a numerical column type, one of two things can
/// happen:
/// - If the required column type is compatible with all of the input columnars, the resulting
///   merged columnar will simply coerce the input columns and use the required column type.
/// - If the required column type is incompatible with one of the input columnars, the merge
///   will fail with an `InvalidData` error.
///
/// `merge_row_order` makes it possible to remove or reorder rows in the resulting
/// `Columnar` table.
///
/// Reminder: a string and a numerical column may bear the same column name. This is not
/// considered a conflict.
pub fn merge_columnar(
columnar_readers: &[&ColumnarReader],
required_columns: &[(String, ColumnType)],
merge_row_order: MergeRowOrder,
output: &mut impl io::Write,
) -> io::Result<()> {
let mut serializer = ColumnarSerializer::new(output);
let columns_to_merge = group_columns_for_merge(columnar_readers, required_columns)?;
for ((column_name, column_type), columns) in columns_to_merge {
let mut column_serializer =
serializer.serialize_column(column_name.as_bytes(), column_type);
merge_column(
column_type,
columns,
&merge_row_order,
&mut column_serializer,
)?;
}
serializer.finalize(merge_row_order.num_rows())?;
Ok(())
}
fn dynamic_column_to_u64_monotonic(dynamic_column: DynamicColumn) -> Option<Column<u64>> {
match dynamic_column {
DynamicColumn::Bool(column) => Some(column.to_u64_monotonic()),
DynamicColumn::I64(column) => Some(column.to_u64_monotonic()),
DynamicColumn::U64(column) => Some(column.to_u64_monotonic()),
DynamicColumn::F64(column) => Some(column.to_u64_monotonic()),
DynamicColumn::DateTime(column) => Some(column.to_u64_monotonic()),
DynamicColumn::IpAddr(_) | DynamicColumn::Bytes(_) | DynamicColumn::Str(_) => None,
}
}
fn merge_column(
column_type: ColumnType,
columns: Vec<Option<DynamicColumn>>,
merge_row_order: &MergeRowOrder,
wrt: &mut impl io::Write,
) -> io::Result<()> {
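// Dispatch on the column type: all u64-mappable types (numerics, bool,
// datetime) share one code path, IP addresses go through the u128 path, and
// bytes/str columns additionally require merging their term dictionaries.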
match column_type {
ColumnType::I64
| ColumnType::U64
| ColumnType::F64
| ColumnType::DateTime
| ColumnType::Bool => {
let mut column_indexes: Vec<Option<ColumnIndex>> = Vec::with_capacity(columns.len());
let mut column_values: Vec<Option<Arc<dyn ColumnValues>>> =
Vec::with_capacity(columns.len());
for dynamic_column_opt in columns {
if let Some(Column { idx, values }) =
dynamic_column_opt.and_then(dynamic_column_to_u64_monotonic)
{
column_indexes.push(Some(idx));
column_values.push(Some(values));
} else {
column_indexes.push(None);
column_values.push(None);
}
}
let merged_column_index =
crate::column_index::merge_column_index(&column_indexes[..], merge_row_order);
let merge_column_values = MergedColumnValues {
column_indexes: &column_indexes[..],
column_values: &column_values[..],
merge_row_order,
};
serialize_column_mappable_to_u64(merged_column_index, &merge_column_values, wrt)?;
}
ColumnType::IpAddr => {
let mut column_indexes: Vec<Option<ColumnIndex>> = Vec::with_capacity(columns.len());
let mut column_values: Vec<Option<Arc<dyn ColumnValues<Ipv6Addr>>>> =
Vec::with_capacity(columns.len());
for dynamic_column_opt in columns {
if let Some(DynamicColumn::IpAddr(Column { idx, values })) = dynamic_column_opt {
column_indexes.push(Some(idx));
column_values.push(Some(values));
} else {
column_indexes.push(None);
column_values.push(None);
}
}
let merged_column_index =
crate::column_index::merge_column_index(&column_indexes[..], merge_row_order);
let merge_column_values = MergedColumnValues {
column_indexes: &column_indexes[..],
column_values: &column_values,
merge_row_order,
};
serialize_column_mappable_to_u128(merged_column_index, &merge_column_values, wrt)?;
}
ColumnType::Bytes | ColumnType::Str => {
let mut column_indexes: Vec<Option<ColumnIndex>> = Vec::with_capacity(columns.len());
let mut bytes_columns: Vec<Option<BytesColumn>> = Vec::with_capacity(columns.len());
for dynamic_column_opt in columns {
match dynamic_column_opt {
Some(DynamicColumn::Str(str_column)) => {
column_indexes.push(Some(str_column.term_ord_column.idx.clone()));
bytes_columns.push(Some(str_column.into()));
}
Some(DynamicColumn::Bytes(bytes_column)) => {
column_indexes.push(Some(bytes_column.term_ord_column.idx.clone()));
bytes_columns.push(Some(bytes_column));
}
_ => {
column_indexes.push(None);
bytes_columns.push(None);
}
}
}
let merged_column_index =
crate::column_index::merge_column_index(&column_indexes[..], merge_row_order);
merge_bytes_or_str_column(merged_column_index, &bytes_columns, merge_row_order, wrt)?;
}
}
Ok(())
}
struct GroupedColumns {
required_column_type: Option<ColumnType>,
columns: Vec<Option<DynamicColumn>>,
column_category: ColumnTypeCategory,
}
impl GroupedColumns {
fn for_category(column_category: ColumnTypeCategory, num_columnars: usize) -> Self {
GroupedColumns {
required_column_type: None,
columns: vec![None; num_columnars],
column_category,
}
}
/// Set the dynamic column for a given columnar.
fn set_column(&mut self, columnar_id: usize, column: DynamicColumn) {
self.columns[columnar_id] = Some(column);
}
/// Force the existence of a column, as well as its type.
fn require_type(&mut self, required_type: ColumnType) -> io::Result<()> {
if let Some(existing_required_type) = self.required_column_type {
if existing_required_type == required_type {
// This was just a duplicate in the `required_columns`.
// Nothing to do.
return Ok(());
} else {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
"Required column conflicts with another required column of the same type \
category.",
));
}
}
self.required_column_type = Some(required_type);
Ok(())
}
/// Returns the column type after merge.
///
/// This method does not check if the column types can actually be coerced to
/// this type.
fn column_type_after_merge(&self) -> ColumnType {
if let Some(required_type) = self.required_column_type {
return required_type;
}
let column_type: HashSet<ColumnType> = self
.columns
.iter()
.flatten()
.map(|column| column.column_type())
.collect();
if column_type.len() == 1 {
return column_type.into_iter().next().unwrap();
}
// At the moment, only the numerical column category maps to more than one possible
// column type.
assert_eq!(self.column_category, ColumnTypeCategory::Numerical);
merged_numerical_columns_type(self.columns.iter().flatten()).into()
}
}
/// Returns the type of the merged numerical column.
///
/// This function picks the first numerical type out of i64, u64, f64 (order matters
/// here), that is compatible with all the `columns`.
///
/// # Panics
/// Panics if one of the columns is not numerical.
fn merged_numerical_columns_type<'a>(
columns: impl Iterator<Item = &'a DynamicColumn>,
) -> NumericalType {
let mut compatible_numerical_types = CompatibleNumericalTypes::default();
for column in columns {
let (min_value, max_value) =
min_max_if_numerical(column).expect("All columns are required to be numerical");
compatible_numerical_types.accept_value(min_value);
compatible_numerical_types.accept_value(max_value);
}
compatible_numerical_types.to_numerical_type()
}
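// Worked example: a column holding -1i64 and a column holding 2u64 both fit
// within i64, so the merged type is I64. If one column instead held u64::MAX,
// neither i64 nor u64 could represent all values, and F64 would be picked.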
#[allow(clippy::type_complexity)]
fn group_columns_for_merge(
columnar_readers: &[&ColumnarReader],
required_columns: &[(String, ColumnType)],
) -> io::Result<BTreeMap<(String, ColumnType), Vec<Option<DynamicColumn>>>> {
// Each column name may have several column types associated with it.
// For merging, we group columns by column type category, since only columns of the same
// category can be merged together.
let mut columns_grouped: HashMap<(String, ColumnTypeCategory), GroupedColumns> = HashMap::new();
for &(ref column_name, column_type) in required_columns {
columns_grouped
.entry((column_name.clone(), column_type.into()))
.or_insert_with(|| {
GroupedColumns::for_category(column_type.into(), columnar_readers.len())
})
.require_type(column_type)?;
}
for (columnar_id, columnar_reader) in columnar_readers.iter().enumerate() {
let column_name_and_handle = columnar_reader.list_columns()?;
for (column_name, handle) in column_name_and_handle {
let column_category: ColumnTypeCategory = handle.column_type().into();
let column = handle.open()?;
columns_grouped
.entry((column_name, column_category))
.or_insert_with(|| {
GroupedColumns::for_category(column_category, columnar_readers.len())
})
.set_column(columnar_id, column);
}
}
let mut merge_columns: BTreeMap<(String, ColumnType), Vec<Option<DynamicColumn>>> =
Default::default();
for ((column_name, _), mut grouped_columns) in columns_grouped {
let column_type = grouped_columns.column_type_after_merge();
coerce_columns(column_type, &mut grouped_columns.columns)?;
merge_columns.insert((column_name, column_type), grouped_columns.columns);
}
Ok(merge_columns)
}
fn coerce_columns(
column_type: ColumnType,
columns: &mut [Option<DynamicColumn>],
) -> io::Result<()> {
for column_opt in columns.iter_mut() {
if let Some(column) = column_opt.take() {
*column_opt = Some(coerce_column(column_type, column)?);
}
}
Ok(())
}
fn coerce_column(column_type: ColumnType, column: DynamicColumn) -> io::Result<DynamicColumn> {
if let Some(numerical_type) = column_type.numerical_type() {
column
.coerce_numerical(numerical_type)
.ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidInput,
format!("Cannot coerce column to numerical type `{numerical_type:?}`"),
)
})
} else {
if column.column_type() != column_type {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
format!(
"Cannot coerce column of type `{:?}` to `{column_type:?}`",
column.column_type()
),
));
}
Ok(column)
}
}
/// Returns the (min, max) of a column, provided it is numerical (i64, u64, f64).
///
/// The min and the max are simply the numerical values as defined by
/// `ColumnValue::min_value()` and `ColumnValue::max_value()`.
///
/// It is important to note that these values are only guaranteed to be lower/upper
/// bounds (as opposed to exact min/max values).
/// If a column is empty, the min and max values are currently set to 0.
fn min_max_if_numerical(column: &DynamicColumn) -> Option<(NumericalValue, NumericalValue)> {
match column {
DynamicColumn::I64(column) => Some((column.min_value().into(), column.max_value().into())),
DynamicColumn::U64(column) => Some((column.min_value().into(), column.max_value().into())),
DynamicColumn::F64(column) => Some((column.min_value().into(), column.max_value().into())),
DynamicColumn::Bool(_)
| DynamicColumn::IpAddr(_)
| DynamicColumn::DateTime(_)
| DynamicColumn::Bytes(_)
| DynamicColumn::Str(_) => None,
}
}
#[cfg(test)]
mod tests;


@@ -0,0 +1,107 @@
use std::cmp::Ordering;
use std::collections::BinaryHeap;
use sstable::TermOrdinal;
use crate::Streamer;
pub struct HeapItem<'a> {
pub streamer: Streamer<'a>,
pub segment_ord: usize,
}
impl<'a> PartialEq for HeapItem<'a> {
fn eq(&self, other: &Self) -> bool {
self.segment_ord == other.segment_ord
}
}
impl<'a> Eq for HeapItem<'a> {}
impl<'a> PartialOrd for HeapItem<'a> {
fn partial_cmp(&self, other: &HeapItem<'a>) -> Option<Ordering> {
Some(self.cmp(other))
}
}
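// `BinaryHeap` is a max-heap; comparing `other` against `self` reverses the
// order so that the smallest (key, segment_ord) pair is popped first.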
impl<'a> Ord for HeapItem<'a> {
fn cmp(&self, other: &HeapItem<'a>) -> Ordering {
(&other.streamer.key(), &other.segment_ord).cmp(&(&self.streamer.key(), &self.segment_ord))
}
}
/// Given a list of sorted term streams,
/// returns an iterator over sorted unique terms.
///
/// The item yielded is actually a pair of:
/// - the term
/// - a slice with the ordinals of the segments containing
/// the term.
pub struct TermMerger<'a> {
heap: BinaryHeap<HeapItem<'a>>,
current_streamers: Vec<HeapItem<'a>>,
}
impl<'a> TermMerger<'a> {
/// Stream of merged term dictionary
pub fn new(streams: Vec<Streamer<'a>>) -> TermMerger<'a> {
TermMerger {
heap: BinaryHeap::new(),
current_streamers: streams
.into_iter()
.enumerate()
.map(|(ord, streamer)| HeapItem {
streamer,
segment_ord: ord,
})
.collect(),
}
}
pub(crate) fn matching_segments<'b: 'a>(
&'b self,
) -> impl 'b + Iterator<Item = (usize, TermOrdinal)> {
self.current_streamers
.iter()
.map(|heap_item| (heap_item.segment_ord, heap_item.streamer.term_ord()))
}
fn advance_segments(&mut self) {
let streamers = &mut self.current_streamers;
let heap = &mut self.heap;
for mut heap_item in streamers.drain(..) {
if heap_item.streamer.advance() {
heap.push(heap_item);
}
}
}
/// Advances the term iterator to the next term.
/// Returns `true` if there is another term,
/// `false` if there is none.
pub fn advance(&mut self) -> bool {
self.advance_segments();
if let Some(head) = self.heap.pop() {
self.current_streamers.push(head);
while let Some(next_streamer) = self.heap.peek() {
if self.current_streamers[0].streamer.key() != next_streamer.streamer.key() {
break;
}
let next_heap_it = self.heap.pop().unwrap(); // safe : we peeked beforehand
self.current_streamers.push(next_heap_it);
}
true
} else {
false
}
}
/// Returns the current term.
///
/// This method may be called
/// if and only if `advance()` has been called before
/// and returned `true`.
pub fn key(&self) -> &[u8] {
self.current_streamers[0].streamer.key()
}
}
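// Illustrative sketch (not part of the original source): draining a
// `TermMerger` yields each distinct term exactly once, in sorted order,
// however many segments contain it.
#[cfg(test)]
#[allow(dead_code)]
fn collect_merged_terms(mut merger: TermMerger<'_>) -> Vec<Vec<u8>> {
let mut terms: Vec<Vec<u8>> = Vec::new();
while merger.advance() {
terms.push(merger.key().to_vec());
}
terms
}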


@@ -0,0 +1,318 @@
use super::*;
use crate::{Cardinality, ColumnarWriter, HasAssociatedColumnType, RowId};
fn make_columnar<T: Into<NumericalValue> + HasAssociatedColumnType + Copy>(
column_name: &str,
vals: &[T],
) -> ColumnarReader {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_column_type(column_name, T::column_type(), false);
for (row_id, val) in vals.iter().copied().enumerate() {
dataframe_writer.record_numerical(row_id as RowId, column_name, val.into());
}
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer
.serialize(vals.len() as RowId, None, &mut buffer)
.unwrap();
ColumnarReader::open(buffer).unwrap()
}
#[test]
fn test_column_coercion_to_u64() {
// i64 type
let columnar1 = make_columnar("numbers", &[1i64]);
// u64 type
let columnar2 = make_columnar("numbers", &[u64::MAX]);
let column_map: BTreeMap<(String, ColumnType), Vec<Option<DynamicColumn>>> =
group_columns_for_merge(&[&columnar1, &columnar2], &[]).unwrap();
assert_eq!(column_map.len(), 1);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnType::U64)));
}
#[test]
fn test_column_no_coercion_if_all_the_same() {
let columnar1 = make_columnar("numbers", &[1u64]);
let columnar2 = make_columnar("numbers", &[2u64]);
let column_map: BTreeMap<(String, ColumnType), Vec<Option<DynamicColumn>>> =
group_columns_for_merge(&[&columnar1, &columnar2], &[]).unwrap();
assert_eq!(column_map.len(), 1);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnType::U64)));
}
#[test]
fn test_column_coercion_to_i64() {
let columnar1 = make_columnar("numbers", &[-1i64]);
let columnar2 = make_columnar("numbers", &[2u64]);
let column_map: BTreeMap<(String, ColumnType), Vec<Option<DynamicColumn>>> =
group_columns_for_merge(&[&columnar1, &columnar2], &[]).unwrap();
assert_eq!(column_map.len(), 1);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnType::I64)));
}
#[test]
fn test_impossible_coercion_returns_an_error() {
let columnar1 = make_columnar("numbers", &[u64::MAX]);
let group_error =
group_columns_for_merge(&[&columnar1], &[("numbers".to_string(), ColumnType::I64)])
.map(|_| ())
.unwrap_err();
assert_eq!(group_error.kind(), io::ErrorKind::InvalidInput);
}
#[test]
fn test_group_columns_with_required_column() {
let columnar1 = make_columnar("numbers", &[1i64]);
let columnar2 = make_columnar("numbers", &[2u64]);
let column_map: BTreeMap<(String, ColumnType), Vec<Option<DynamicColumn>>> =
group_columns_for_merge(
&[&columnar1, &columnar2],
&[("numbers".to_string(), ColumnType::U64)],
)
.unwrap();
assert_eq!(column_map.len(), 1);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnType::U64)));
}
#[test]
fn test_group_columns_required_column_with_no_existing_columns() {
let columnar1 = make_columnar("numbers", &[2u64]);
let columnar2 = make_columnar("numbers", &[2u64]);
let column_map: BTreeMap<(String, ColumnType), Vec<Option<DynamicColumn>>> =
group_columns_for_merge(
&[&columnar1, &columnar2],
&[("required_col".to_string(), ColumnType::Str)],
)
.unwrap();
assert_eq!(column_map.len(), 2);
let columns = column_map
.get(&("required_col".to_string(), ColumnType::Str))
.unwrap();
assert_eq!(columns.len(), 2);
assert!(columns[0].is_none());
assert!(columns[1].is_none());
}
#[test]
fn test_group_columns_required_column_is_above_all_columns_have_the_same_type_rule() {
let columnar1 = make_columnar("numbers", &[2i64]);
let columnar2 = make_columnar("numbers", &[2i64]);
let column_map: BTreeMap<(String, ColumnType), Vec<Option<DynamicColumn>>> =
group_columns_for_merge(
&[&columnar1, &columnar2],
&[("numbers".to_string(), ColumnType::U64)],
)
.unwrap();
assert_eq!(column_map.len(), 1);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnType::U64)));
}
#[test]
fn test_missing_column() {
let columnar1 = make_columnar("numbers", &[-1i64]);
let columnar2 = make_columnar("numbers2", &[2u64]);
let column_map: BTreeMap<(String, ColumnType), Vec<Option<DynamicColumn>>> =
group_columns_for_merge(&[&columnar1, &columnar2], &[]).unwrap();
assert_eq!(column_map.len(), 2);
assert!(column_map.contains_key(&("numbers".to_string(), ColumnType::I64)));
{
let columns = column_map
.get(&("numbers".to_string(), ColumnType::I64))
.unwrap();
assert!(columns[0].is_some());
assert!(columns[1].is_none());
}
{
let columns = column_map
.get(&("numbers2".to_string(), ColumnType::U64))
.unwrap();
assert!(columns[0].is_none());
assert!(columns[1].is_some());
}
}
fn make_numerical_columnar_multiple_columns(
columns: &[(&str, &[&[NumericalValue]])],
) -> ColumnarReader {
let mut dataframe_writer = ColumnarWriter::default();
for (column_name, column_values) in columns {
for (row_id, vals) in column_values.iter().enumerate() {
for val in vals.iter() {
dataframe_writer.record_numerical(row_id as u32, column_name, *val);
}
}
}
let num_rows = columns
.iter()
.map(|(_, val_rows)| val_rows.len() as RowId)
.max()
.unwrap_or(0u32);
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer
.serialize(num_rows, None, &mut buffer)
.unwrap();
ColumnarReader::open(buffer).unwrap()
}
fn make_byte_columnar_multiple_columns(columns: &[(&str, &[&[&[u8]]])]) -> ColumnarReader {
let mut dataframe_writer = ColumnarWriter::default();
for (column_name, column_values) in columns {
for (row_id, vals) in column_values.iter().enumerate() {
for val in vals.iter() {
dataframe_writer.record_bytes(row_id as u32, column_name, val);
}
}
}
let num_rows = columns
.iter()
.map(|(_, val_rows)| val_rows.len() as RowId)
.max()
.unwrap_or(0u32);
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer
.serialize(num_rows, None, &mut buffer)
.unwrap();
ColumnarReader::open(buffer).unwrap()
}
fn make_text_columnar_multiple_columns(columns: &[(&str, &[&[&str]])]) -> ColumnarReader {
let mut dataframe_writer = ColumnarWriter::default();
for (column_name, column_values) in columns {
for (row_id, vals) in column_values.iter().enumerate() {
for val in vals.iter() {
dataframe_writer.record_str(row_id as u32, column_name, val);
}
}
}
let num_rows = columns
.iter()
.map(|(_, val_rows)| val_rows.len() as RowId)
.max()
.unwrap_or(0u32);
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer
.serialize(num_rows, None, &mut buffer)
.unwrap();
ColumnarReader::open(buffer).unwrap()
}
#[test]
fn test_merge_columnar_numbers() {
let columnar1 =
make_numerical_columnar_multiple_columns(&[("numbers", &[&[NumericalValue::from(-1f64)]])]);
let columnar2 = make_numerical_columnar_multiple_columns(&[(
"numbers",
&[&[], &[NumericalValue::from(-3f64)]],
)]);
let mut buffer = Vec::new();
let columnars = &[&columnar1, &columnar2];
let stack_merge_order = StackMergeOrder::stack(columnars);
crate::columnar::merge_columnar(
columnars,
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut buffer,
)
.unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_rows(), 3);
assert_eq!(columnar_reader.num_columns(), 1);
let cols = columnar_reader.read_columns("numbers").unwrap();
let dynamic_column = cols[0].open().unwrap();
let DynamicColumn::F64(vals) = dynamic_column else { panic!() };
assert_eq!(vals.get_cardinality(), Cardinality::Optional);
assert_eq!(vals.first(0u32), Some(-1f64));
assert_eq!(vals.first(1u32), None);
assert_eq!(vals.first(2u32), Some(-3f64));
}
#[test]
fn test_merge_columnar_texts() {
let columnar1 = make_text_columnar_multiple_columns(&[("texts", &[&["a"]])]);
let columnar2 = make_text_columnar_multiple_columns(&[("texts", &[&[], &["b"]])]);
let mut buffer = Vec::new();
let columnars = &[&columnar1, &columnar2];
let stack_merge_order = StackMergeOrder::stack(columnars);
crate::columnar::merge_columnar(
columnars,
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut buffer,
)
.unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_rows(), 3);
assert_eq!(columnar_reader.num_columns(), 1);
let cols = columnar_reader.read_columns("texts").unwrap();
let dynamic_column = cols[0].open().unwrap();
let DynamicColumn::Str(vals) = dynamic_column else { panic!() };
let get_str_for_ord = |ord| {
let mut out = String::new();
vals.ord_to_str(ord, &mut out).unwrap();
out
};
assert_eq!(vals.dictionary.num_terms(), 2);
assert_eq!(get_str_for_ord(0), "a");
assert_eq!(get_str_for_ord(1), "b");
let get_str_for_row = |row_id| {
let term_ords: Vec<u64> = vals.term_ords(row_id).collect();
assert!(term_ords.len() <= 1);
let mut out = String::new();
if term_ords.len() == 1 {
vals.ord_to_str(term_ords[0], &mut out).unwrap();
}
out
};
assert_eq!(get_str_for_row(0), "a");
assert_eq!(get_str_for_row(1), "");
assert_eq!(get_str_for_row(2), "b");
}
#[test]
fn test_merge_columnar_byte() {
let columnar1 = make_byte_columnar_multiple_columns(&[("bytes", &[&[b"bbbb"], &[b"baaa"]])]);
let columnar2 = make_byte_columnar_multiple_columns(&[("bytes", &[&[], &[b"a"]])]);
let mut buffer = Vec::new();
let columnars = &[&columnar1, &columnar2];
let stack_merge_order = StackMergeOrder::stack(columnars);
crate::columnar::merge_columnar(
columnars,
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut buffer,
)
.unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_rows(), 4);
assert_eq!(columnar_reader.num_columns(), 1);
let cols = columnar_reader.read_columns("bytes").unwrap();
let dynamic_column = cols[0].open().unwrap();
let DynamicColumn::Bytes(vals) = dynamic_column else { panic!() };
let get_bytes_for_ord = |ord| {
let mut out = Vec::new();
vals.ord_to_bytes(ord, &mut out).unwrap();
out
};
assert_eq!(vals.dictionary.num_terms(), 3);
assert_eq!(get_bytes_for_ord(0), b"a");
assert_eq!(get_bytes_for_ord(1), b"baaa");
assert_eq!(get_bytes_for_ord(2), b"bbbb");
let get_bytes_for_row = |row_id| {
let term_ords: Vec<u64> = vals.term_ords(row_id).collect();
assert!(term_ords.len() <= 1);
let mut out = Vec::new();
if term_ords.len() == 1 {
vals.ord_to_bytes(term_ords[0], &mut out).unwrap();
}
out
};
assert_eq!(get_bytes_for_row(0), b"bbbb");
assert_eq!(get_bytes_for_row(1), b"baaa");
assert_eq!(get_bytes_for_row(2), b"");
assert_eq!(get_bytes_for_row(3), b"a");
}


@@ -0,0 +1,10 @@
mod column_type;
mod format_version;
mod merge;
mod reader;
mod writer;
pub use column_type::{ColumnType, HasAssociatedColumnType};
pub use merge::{merge_columnar, MergeRowOrder, ShuffleMergeOrder, StackMergeOrder};
pub use reader::ColumnarReader;
pub use writer::ColumnarWriter;


@@ -0,0 +1,192 @@
use std::{io, mem};
use common::file_slice::FileSlice;
use common::BinarySerializable;
use sstable::{Dictionary, RangeSSTable};
use crate::columnar::{format_version, ColumnType};
use crate::dynamic_column::DynamicColumnHandle;
use crate::RowId;
fn io_invalid_data(msg: String) -> io::Error {
io::Error::new(io::ErrorKind::InvalidData, msg)
}
/// The ColumnarReader makes it possible to access a set of columns
/// associated to field names.
#[derive(Clone)]
pub struct ColumnarReader {
column_dictionary: Dictionary<RangeSSTable>,
column_data: FileSlice,
num_rows: RowId,
}
/// Function used by both the async and the sync code paths listing columns.
/// It takes a stream from the column sstable and returns the list of
/// `DynamicColumnHandle`s available in it.
fn read_all_columns_in_stream(
mut stream: sstable::Streamer<'_, RangeSSTable>,
column_data: &FileSlice,
) -> io::Result<Vec<DynamicColumnHandle>> {
let mut results = Vec::new();
while stream.advance() {
let key_bytes: &[u8] = stream.key();
let Some(column_code) = key_bytes.last().copied() else {
return Err(io_invalid_data("Empty column name.".to_string()));
};
let column_type = ColumnType::try_from_code(column_code)
.map_err(|_| io_invalid_data(format!("Unknown column code `{column_code}`")))?;
let range = stream.value();
let file_slice = column_data.slice(range.start as usize..range.end as usize);
let dynamic_column_handle = DynamicColumnHandle {
file_slice,
column_type,
};
results.push(dynamic_column_handle);
}
Ok(results)
}
impl ColumnarReader {
/// Opens a new Columnar file.
pub fn open<F>(file_slice: F) -> io::Result<ColumnarReader>
where FileSlice: From<F> {
Self::open_inner(file_slice.into())
}
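// Footer layout assumed below, reading from the end of the file:
// [ column data | column sstable | sstable_len: u64 LE | num_rows: u32 LE |
// version footer (VERSION_FOOTER_NUM_BYTES) ]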
fn open_inner(file_slice: FileSlice) -> io::Result<ColumnarReader> {
let (file_slice_without_sstable_len, footer_slice) = file_slice
.split_from_end(mem::size_of::<u64>() + 4 + format_version::VERSION_FOOTER_NUM_BYTES);
let footer_bytes = footer_slice.read_bytes()?;
let sstable_len = u64::deserialize(&mut &footer_bytes[0..8])?;
let num_rows = u32::deserialize(&mut &footer_bytes[8..12])?;
let version_footer_bytes: [u8; format_version::VERSION_FOOTER_NUM_BYTES] =
footer_bytes[12..].try_into().unwrap();
let _version = format_version::parse_footer(version_footer_bytes)?;
let (column_data, sstable) =
file_slice_without_sstable_len.split_from_end(sstable_len as usize);
let column_dictionary = Dictionary::open(sstable)?;
Ok(ColumnarReader {
column_dictionary,
column_data,
num_rows,
})
}
pub fn num_rows(&self) -> RowId {
self.num_rows
}
// TODO Add unit tests
pub fn list_columns(&self) -> io::Result<Vec<(String, DynamicColumnHandle)>> {
let mut stream = self.column_dictionary.stream()?;
let mut results = Vec::new();
while stream.advance() {
let key_bytes: &[u8] = stream.key();
let column_code: u8 = key_bytes.last().cloned().unwrap();
let column_type: ColumnType = ColumnType::try_from_code(column_code)
.map_err(|_| io_invalid_data(format!("Unknown column code `{column_code}`")))?;
let range = stream.value().clone();
let column_name =
// The last two bytes are respectively the 0u8 separator and the column_type.
String::from_utf8_lossy(&key_bytes[..key_bytes.len() - 2]).to_string();
let file_slice = self
.column_data
.slice(range.start as usize..range.end as usize);
let column_handle = DynamicColumnHandle {
file_slice,
column_type,
};
results.push((column_name, column_handle));
}
Ok(results)
}
fn stream_for_column_range(&self, column_name: &str) -> sstable::StreamerBuilder<RangeSSTable> {
// Each column is associated with a given `column_key`,
// which starts with `column_name\0column_header`.
//
// Listing the columns associated with a given column name is therefore equivalent to
// listing the `column_key`s with the prefix `column_name\0`.
//
// This is in turn equivalent to searching for the range
// `[column_name\0 .. column_name\1)`.
// TODO: can we get some more generic `prefix(..)` logic in the dictionary?
let mut start_key = column_name.to_string();
start_key.push('\0');
let mut end_key = column_name.to_string();
end_key.push(1u8 as char);
self.column_dictionary
.range()
.ge(start_key.as_bytes())
.lt(end_key.as_bytes())
}
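// e.g. a u64 column named "price" is stored under the key `b"price\0"`
// followed by one column-type code byte, so the range
// `[b"price\0", b"price\x01")` covers every typed column under that name.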
pub async fn read_columns_async(
&self,
column_name: &str,
) -> io::Result<Vec<DynamicColumnHandle>> {
let stream = self
.stream_for_column_range(column_name)
.into_stream_async()
.await?;
read_all_columns_in_stream(stream, &self.column_data)
}
/// Get all columns for the given column name.
///
/// There can be more than one column associated to a given column name, provided they have
/// different types.
pub fn read_columns(&self, column_name: &str) -> io::Result<Vec<DynamicColumnHandle>> {
let stream = self.stream_for_column_range(column_name).into_stream()?;
read_all_columns_in_stream(stream, &self.column_data)
}
/// Returns the number of columns in the columnar.
pub fn num_columns(&self) -> usize {
self.column_dictionary.num_terms()
}
}
#[cfg(test)]
mod tests {
use crate::{ColumnType, ColumnarReader, ColumnarWriter};
#[test]
fn test_list_columns() {
let mut columnar_writer = ColumnarWriter::default();
columnar_writer.record_column_type("col1", ColumnType::Str, false);
columnar_writer.record_column_type("col2", ColumnType::U64, false);
let mut buffer = Vec::new();
columnar_writer.serialize(1, None, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
let columns = columnar.list_columns().unwrap();
assert_eq!(columns.len(), 2);
assert_eq!(&columns[0].0, "col1");
assert_eq!(columns[0].1.column_type(), ColumnType::Str);
assert_eq!(&columns[1].0, "col2");
assert_eq!(columns[1].1.column_type(), ColumnType::U64);
}
#[test]
fn test_list_columns_strict_typing_prevents_coercion() {
let mut columnar_writer = ColumnarWriter::default();
columnar_writer.record_column_type("count", ColumnType::U64, false);
columnar_writer.record_numerical(1, "count", 1u64);
let mut buffer = Vec::new();
columnar_writer.serialize(2, None, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
let columns = columnar.list_columns().unwrap();
assert_eq!(columns.len(), 1);
assert_eq!(&columns[0].0, "count");
assert_eq!(columns[0].1.column_type(), ColumnType::U64);
}
#[test]
#[should_panic(expected = "Input type forbidden")]
fn test_list_columns_strict_typing_panics_on_wrong_types() {
let mut columnar_writer = ColumnarWriter::default();
columnar_writer.record_column_type("count", ColumnType::U64, false);
columnar_writer.record_numerical(1, "count", 1i64);
}
}


@@ -0,0 +1,360 @@
use std::net::Ipv6Addr;
use crate::dictionary::UnorderedId;
use crate::utils::{place_bits, pop_first_byte, select_bits};
use crate::value::NumericalValue;
use crate::{InvalidData, NumericalType, RowId};
/// When we build a columnar dataframe, we first just group
/// all mutations per column, and append them to an append-only buffer
/// in the stacker.
///
/// These `ColumnOperation<T>` are therefore serialized/deserialized
/// in memory.
///
/// We represent all of these operations as `ColumnOperation`.
#[derive(Eq, PartialEq, Debug, Clone, Copy)]
pub(super) enum ColumnOperation<T> {
NewDoc(RowId),
Value(T),
}
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
struct ColumnOperationMetadata {
op_type: ColumnOperationType,
len: u8,
}
impl ColumnOperationMetadata {
fn to_code(self) -> u8 {
place_bits::<0, 6>(self.len) | place_bits::<6, 8>(self.op_type.to_code())
}
fn try_from_code(code: u8) -> Result<Self, InvalidData> {
let len = select_bits::<0, 6>(code);
let typ_code = select_bits::<6, 8>(code);
let column_type = ColumnOperationType::try_from_code(typ_code)?;
Ok(ColumnOperationMetadata {
op_type: column_type,
len,
})
}
}
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
#[repr(u8)]
enum ColumnOperationType {
NewDoc = 0u8,
AddValue = 1u8,
}
impl ColumnOperationType {
pub fn to_code(self) -> u8 {
self as u8
}
pub fn try_from_code(code: u8) -> Result<Self, InvalidData> {
match code {
0 => Ok(Self::NewDoc),
1 => Ok(Self::AddValue),
_ => Err(InvalidData),
}
}
}
impl<V: SymbolValue> ColumnOperation<V> {
pub(super) fn serialize(self) -> impl AsRef<[u8]> {
let mut minibuf = MiniBuffer::default();
let column_op_metadata = match self {
ColumnOperation::NewDoc(new_doc) => {
let symbol_len = new_doc.serialize(&mut minibuf.bytes[1..]);
ColumnOperationMetadata {
op_type: ColumnOperationType::NewDoc,
len: symbol_len,
}
}
ColumnOperation::Value(val) => {
let symbol_len = val.serialize(&mut minibuf.bytes[1..]);
ColumnOperationMetadata {
op_type: ColumnOperationType::AddValue,
len: symbol_len,
}
}
};
minibuf.bytes[0] = column_op_metadata.to_code();
// +1 for the metadata
minibuf.len = 1 + column_op_metadata.len;
minibuf
}
/// Deserializes a column operation.
/// Returns `None` if the buffer is empty.
///
/// Panics if the payload is invalid:
/// this deserialization method is only meant to be used on the in-memory
/// buffers produced by `serialize`.
pub(super) fn deserialize(bytes: &mut &[u8]) -> Option<Self> {
let column_op_metadata_byte = pop_first_byte(bytes)?;
let column_op_metadata = ColumnOperationMetadata::try_from_code(column_op_metadata_byte)
.expect("Invalid op metadata byte");
let symbol_bytes: &[u8];
(symbol_bytes, *bytes) = bytes.split_at(column_op_metadata.len as usize);
match column_op_metadata.op_type {
ColumnOperationType::NewDoc => {
let new_doc = u32::deserialize(symbol_bytes);
Some(ColumnOperation::NewDoc(new_doc))
}
ColumnOperationType::AddValue => {
let value = V::deserialize(symbol_bytes);
Some(ColumnOperation::Value(value))
}
}
}
}
impl<T> From<T> for ColumnOperation<T> {
fn from(value: T) -> Self {
ColumnOperation::Value(value)
}
}
// Serialization trait very local to the writer.
// As we write fast fields, we accumulate them in memory.
// In order to limit memory usage, and in order to benefit from the stacker,
// we do this by serializing our data as "Symbols".
#[allow(clippy::from_over_into)]
pub(super) trait SymbolValue: Clone + Copy {
/// Serializes the symbol into the given buffer.
/// Returns the number of bytes written into the buffer.
///
/// # Panics
/// May panic if the serialized symbol does not fit into the buffer
/// (the `MiniBuffer` below provides 16 payload bytes).
fn serialize(self, buffer: &mut [u8]) -> u8;
/// Panics if the payload is invalid.
fn deserialize(bytes: &[u8]) -> Self;
}
impl SymbolValue for bool {
fn serialize(self, buffer: &mut [u8]) -> u8 {
buffer[0] = u8::from(self);
1u8
}
fn deserialize(bytes: &[u8]) -> Self {
bytes[0] == 1u8
}
}
impl SymbolValue for Ipv6Addr {
fn serialize(self, buffer: &mut [u8]) -> u8 {
buffer[0..16].copy_from_slice(&self.octets());
16
}
fn deserialize(bytes: &[u8]) -> Self {
let octets: [u8; 16] = bytes[0..16].try_into().unwrap();
Ipv6Addr::from(octets)
}
}
#[derive(Default)]
struct MiniBuffer {
pub bytes: [u8; 17],
pub len: u8,
}
impl AsRef<[u8]> for MiniBuffer {
fn as_ref(&self) -> &[u8] {
&self.bytes[..self.len as usize]
}
}
impl SymbolValue for NumericalValue {
fn deserialize(mut bytes: &[u8]) -> Self {
let type_code = pop_first_byte(&mut bytes).unwrap();
let symbol_type = NumericalType::try_from_code(type_code).unwrap();
let mut octet: [u8; 8] = [0u8; 8];
octet[..bytes.len()].copy_from_slice(bytes);
match symbol_type {
NumericalType::U64 => {
let val: u64 = u64::from_le_bytes(octet);
NumericalValue::U64(val)
}
NumericalType::I64 => {
let encoded: u64 = u64::from_le_bytes(octet);
let val: i64 = decode_zig_zag(encoded);
NumericalValue::I64(val)
}
NumericalType::F64 => {
debug_assert_eq!(bytes.len(), 8);
let val: f64 = f64::from_le_bytes(octet);
NumericalValue::F64(val)
}
}
}
/// F64: serialized with a fixed size of 9 bytes (1 type byte + 8 payload bytes).
/// U64: serialized without leading zero bytes.
/// I64: zig-zag encoded, then serialized without leading zero bytes.
fn serialize(self, output: &mut [u8]) -> u8 {
match self {
NumericalValue::F64(val) => {
output[0] = NumericalType::F64 as u8;
output[1..9].copy_from_slice(&val.to_le_bytes());
9u8
}
NumericalValue::U64(val) => {
let len = compute_num_bytes_for_u64(val) as u8;
output[0] = NumericalType::U64 as u8;
output[1..9].copy_from_slice(&val.to_le_bytes());
len + 1u8
}
NumericalValue::I64(val) => {
let zig_zag_encoded = encode_zig_zag(val);
let len = compute_num_bytes_for_u64(zig_zag_encoded) as u8;
output[0] = NumericalType::I64 as u8;
output[1..9].copy_from_slice(&zig_zag_encoded.to_le_bytes());
len + 1u8
}
}
}
}
impl SymbolValue for u32 {
fn serialize(self, output: &mut [u8]) -> u8 {
let len = compute_num_bytes_for_u64(self as u64);
output[0..4].copy_from_slice(&self.to_le_bytes());
len as u8
}
fn deserialize(bytes: &[u8]) -> Self {
let mut quartet: [u8; 4] = [0u8; 4];
quartet[..bytes.len()].copy_from_slice(bytes);
u32::from_le_bytes(quartet)
}
}
impl SymbolValue for UnorderedId {
fn serialize(self, output: &mut [u8]) -> u8 {
self.0.serialize(output)
}
fn deserialize(bytes: &[u8]) -> Self {
UnorderedId(u32::deserialize(bytes))
}
}
fn compute_num_bytes_for_u64(val: u64) -> usize {
let msb = (64u32 - val.leading_zeros()) as usize;
(msb + 7) / 8
}
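// Worked examples: 0 -> 0 bytes, 255 -> 1 byte, 256 -> 2 bytes: the position
// of the most significant set bit, rounded up to a whole number of bytes.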
fn encode_zig_zag(n: i64) -> u64 {
((n << 1) ^ (n >> 63)) as u64
}
fn decode_zig_zag(n: u64) -> i64 {
((n >> 1) as i64) ^ (-((n & 1) as i64))
}
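// Worked example of the zig-zag mapping: 0, -1, 1, -2, 2, ... map to
// 0, 1, 2, 3, 4, ... so small magnitudes keep the leading zero bytes that
// `compute_num_bytes_for_u64` can then drop.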
#[cfg(test)]
mod tests {
use super::*;
#[track_caller]
fn test_zig_zag_aux(val: i64) {
let encoded = super::encode_zig_zag(val);
assert_eq!(decode_zig_zag(encoded), val);
if let Some(abs_val) = val.checked_abs() {
let abs_val = abs_val as u64;
assert!(encoded <= abs_val * 2);
}
}
#[test]
fn test_zig_zag() {
assert_eq!(encode_zig_zag(0i64), 0u64);
assert_eq!(encode_zig_zag(-1i64), 1u64);
assert_eq!(encode_zig_zag(1i64), 2u64);
test_zig_zag_aux(0i64);
test_zig_zag_aux(i64::MIN);
test_zig_zag_aux(i64::MAX);
}
use proptest::prelude::any;
use proptest::proptest;
proptest! {
#[test]
fn test_proptest_zig_zag(val in any::<i64>()) {
test_zig_zag_aux(val);
}
}
#[test]
fn test_column_op_metadata_byte_serialization() {
for len in 0..=15 {
for op_type in [ColumnOperationType::AddValue, ColumnOperationType::NewDoc] {
let column_op_metadata = ColumnOperationMetadata { op_type, len };
let column_op_metadata_code = column_op_metadata.to_code();
let serdeser_metadata =
ColumnOperationMetadata::try_from_code(column_op_metadata_code).unwrap();
assert_eq!(column_op_metadata, serdeser_metadata);
}
}
}
#[track_caller]
fn ser_deser_symbol(column_op: ColumnOperation<NumericalValue>) {
let buf = column_op.serialize();
let mut buffer = buf.as_ref().to_vec();
buffer.extend_from_slice(b"234234");
let mut bytes = &buffer[..];
let serdeser_symbol = ColumnOperation::deserialize(&mut bytes).unwrap();
assert_eq!(bytes.len() + buf.as_ref().len(), buffer.len());
assert_eq!(column_op, serdeser_symbol);
}
#[test]
fn test_compute_num_bytes_for_u64() {
assert_eq!(compute_num_bytes_for_u64(0), 0);
assert_eq!(compute_num_bytes_for_u64(1), 1);
assert_eq!(compute_num_bytes_for_u64(255), 1);
assert_eq!(compute_num_bytes_for_u64(256), 2);
assert_eq!(compute_num_bytes_for_u64((1 << 16) - 1), 2);
assert_eq!(compute_num_bytes_for_u64(1 << 16), 3);
}
#[test]
fn test_symbol_serialization() {
ser_deser_symbol(ColumnOperation::NewDoc(0));
ser_deser_symbol(ColumnOperation::NewDoc(3));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::I64(0i64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::I64(1i64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::U64(257u64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::I64(-257i64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::I64(i64::MIN)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::U64(0u64)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::U64(u64::MIN)));
ser_deser_symbol(ColumnOperation::Value(NumericalValue::U64(u64::MAX)));
}
fn test_column_operation_unordered_aux(val: u32, expected_len: usize) {
let column_op = ColumnOperation::Value(UnorderedId(val));
let minibuf = column_op.serialize();
assert_eq!(minibuf.as_ref().len(), expected_len);
let mut buf = minibuf.as_ref().to_vec();
buf.extend_from_slice(&[2, 2, 2, 2, 2, 2]);
let mut cursor = &buf[..];
let column_op_serdeser: ColumnOperation<UnorderedId> =
ColumnOperation::deserialize(&mut cursor).unwrap();
assert_eq!(column_op_serdeser, ColumnOperation::Value(UnorderedId(val)));
assert_eq!(cursor.len() + expected_len, buf.len());
}
#[test]
fn test_column_operation_unordered() {
test_column_operation_unordered_aux(300u32, 3);
test_column_operation_unordered_aux(1u32, 2);
test_column_operation_unordered_aux(0u32, 1);
}
}


@@ -0,0 +1,363 @@
use std::cmp::Ordering;
use stacker::{ExpUnrolledLinkedList, MemoryArena};
use crate::columnar::writer::column_operation::{ColumnOperation, SymbolValue};
use crate::dictionary::{DictionaryBuilder, UnorderedId};
use crate::{Cardinality, NumericalType, NumericalValue, RowId};
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
#[repr(u8)]
enum DocumentStep {
Same = 0,
Next = 1,
Skipped = 2,
}
#[inline(always)]
fn delta_with_last_doc(last_doc_opt: Option<u32>, doc: u32) -> DocumentStep {
let expected_next_doc = last_doc_opt.map(|last_doc| last_doc + 1).unwrap_or(0u32);
match doc.cmp(&expected_next_doc) {
Ordering::Less => DocumentStep::Same,
Ordering::Equal => DocumentStep::Next,
Ordering::Greater => DocumentStep::Skipped,
}
}
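// e.g. with last_doc_opt = Some(3): doc 3 => Same (a second value for row 3),
// doc 4 => Next (dense progression), doc 6 => Skipped (rows 4 and 5 hold no
// value).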
#[derive(Copy, Clone, Default)]
pub struct ColumnWriter {
// Detected cardinality of the column so far.
cardinality: Cardinality,
// Last document inserted.
// None if no doc has been added yet.
last_doc_opt: Option<u32>,
// Buffer containing the serialized values.
values: ExpUnrolledLinkedList,
}
impl ColumnWriter {
/// Returns an iterator over the Symbol that have been recorded
/// for the given column.
pub(super) fn operation_iterator<'a, V: SymbolValue>(
&self,
arena: &MemoryArena,
old_to_new_ids_opt: Option<&[RowId]>,
buffer: &'a mut Vec<u8>,
) -> impl Iterator<Item = ColumnOperation<V>> + 'a {
buffer.clear();
self.values.read_to_end(arena, buffer);
if let Some(old_to_new_ids) = old_to_new_ids_opt {
// TODO avoid the extra deserialization / serialization.
let mut sorted_ops: Vec<(RowId, ColumnOperation<V>)> = Vec::new();
let mut new_doc = 0u32;
let mut cursor = &buffer[..];
for op in std::iter::from_fn(|| ColumnOperation::<V>::deserialize(&mut cursor)) {
if let ColumnOperation::NewDoc(doc) = &op {
new_doc = old_to_new_ids[*doc as usize];
sorted_ops.push((new_doc, ColumnOperation::NewDoc(new_doc)));
} else {
sorted_ops.push((new_doc, op));
}
}
// stable sort is crucial here.
sorted_ops.sort_by_key(|(new_doc_id, _)| *new_doc_id);
buffer.clear();
for (_, op) in sorted_ops {
buffer.extend_from_slice(op.serialize().as_ref());
}
}
let mut cursor: &[u8] = &buffer[..];
std::iter::from_fn(move || ColumnOperation::deserialize(&mut cursor))
}
/// Records a change of the document being recorded.
///
/// This function will also update the cardinality of the column
/// if necessary.
pub(super) fn record<S: SymbolValue>(&mut self, doc: RowId, value: S, arena: &mut MemoryArena) {
// Difference between `doc` and the last doc.
match delta_with_last_doc(self.last_doc_opt, doc) {
DocumentStep::Same => {
// Another value for the last encountered document:
// the column is multivalued.
self.cardinality = Cardinality::Multivalued;
}
DocumentStep::Next => {
self.last_doc_opt = Some(doc);
self.write_symbol::<S>(ColumnOperation::NewDoc(doc), arena);
}
DocumentStep::Skipped => {
self.cardinality = self.cardinality.max(Cardinality::Optional);
self.last_doc_opt = Some(doc);
self.write_symbol::<S>(ColumnOperation::NewDoc(doc), arena);
}
}
self.write_symbol(ColumnOperation::Value(value), arena);
}
// Gets the cardinality.
// The overall number of docs in the column is necessary to
// deal with the case where all docs contain 1 value, except some documents
// at the end of the column.
pub(crate) fn get_cardinality(&self, num_docs: RowId) -> Cardinality {
match delta_with_last_doc(self.last_doc_opt, num_docs) {
DocumentStep::Same | DocumentStep::Next => self.cardinality,
DocumentStep::Skipped => self.cardinality.max(Cardinality::Optional),
}
}
/// Appends a new symbol to the `ColumnWriter`.
fn write_symbol<V: SymbolValue>(
&mut self,
column_operation: ColumnOperation<V>,
arena: &mut MemoryArena,
) {
self.values
.writer(arena)
.extend_from_slice(column_operation.serialize().as_ref());
}
}
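// Illustrative sketch (not part of the original source): recording two values
// for the same row upgrades the detected cardinality to `Multivalued`.
#[cfg(test)]
mod cardinality_sketch {
use super::*;
#[test]
fn two_values_for_same_row_is_multivalued() {
let mut arena = MemoryArena::default();
let mut column_writer = ColumnWriter::default();
column_writer.record(0u32, NumericalValue::U64(1u64), &mut arena);
column_writer.record(0u32, NumericalValue::U64(2u64), &mut arena);
assert_eq!(column_writer.get_cardinality(1u32), Cardinality::Multivalued);
}
}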
#[derive(Clone, Copy, Default)]
pub(crate) struct NumericalColumnWriter {
compatible_numerical_types: CompatibleNumericalTypes,
column_writer: ColumnWriter,
}
impl NumericalColumnWriter {
pub fn force_numerical_type(&mut self, numerical_type: NumericalType) {
assert!(self
.compatible_numerical_types
.is_type_accepted(numerical_type));
self.compatible_numerical_types = CompatibleNumericalTypes::StaticType(numerical_type);
}
}
/// State used to store what types are still acceptable
/// after having seen a set of numerical values.
#[derive(Clone, Copy)]
pub(crate) enum CompatibleNumericalTypes {
Dynamic {
all_values_within_i64_range: bool,
all_values_within_u64_range: bool,
},
StaticType(NumericalType),
}
impl Default for CompatibleNumericalTypes {
fn default() -> CompatibleNumericalTypes {
CompatibleNumericalTypes::Dynamic {
all_values_within_i64_range: true,
all_values_within_u64_range: true,
}
}
}
impl CompatibleNumericalTypes {
pub fn is_type_accepted(&self, numerical_type: NumericalType) -> bool {
match self {
CompatibleNumericalTypes::Dynamic {
all_values_within_i64_range,
all_values_within_u64_range,
} => match numerical_type {
NumericalType::I64 => *all_values_within_i64_range,
NumericalType::U64 => *all_values_within_u64_range,
NumericalType::F64 => true,
},
CompatibleNumericalTypes::StaticType(static_numerical_type) => {
*static_numerical_type == numerical_type
}
}
}
pub fn accept_value(&mut self, numerical_value: NumericalValue) {
match self {
CompatibleNumericalTypes::Dynamic {
all_values_within_i64_range,
all_values_within_u64_range,
} => match numerical_value {
NumericalValue::I64(val_i64) => {
let value_within_u64_range = val_i64 >= 0i64;
*all_values_within_u64_range &= value_within_u64_range;
}
NumericalValue::U64(val_u64) => {
let value_within_i64_range = val_u64 < i64::MAX as u64;
*all_values_within_i64_range &= value_within_i64_range;
}
NumericalValue::F64(_) => {
*all_values_within_i64_range = false;
*all_values_within_u64_range = false;
}
},
CompatibleNumericalTypes::StaticType(typ) => {
assert_eq!(
numerical_value.numerical_type(),
*typ,
"Input type forbidden. This column has been forced to type {typ:?}, received \
{numerical_value:?}"
);
}
}
}
pub fn to_numerical_type(self) -> NumericalType {
for numerical_type in [NumericalType::I64, NumericalType::U64] {
if self.is_type_accepted(numerical_type) {
return numerical_type;
}
}
NumericalType::F64
}
}
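// Illustrative sketch (not part of the original source): the dynamic state
// only ever narrows as values are accepted.
#[cfg(test)]
mod compatible_types_sketch {
use super::*;
#[test]
fn narrowing() {
let mut compatible = CompatibleNumericalTypes::default();
compatible.accept_value(NumericalValue::I64(-1i64));
// -1 rules out u64; i64 is still the preferred type.
assert_eq!(compatible.to_numerical_type(), NumericalType::I64);
compatible.accept_value(NumericalValue::U64(u64::MAX));
// u64::MAX rules out i64 as well: only f64 remains.
assert_eq!(compatible.to_numerical_type(), NumericalType::F64);
}
}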
impl NumericalColumnWriter {
pub fn numerical_type(&self) -> NumericalType {
self.compatible_numerical_types.to_numerical_type()
}
pub fn cardinality(&self, num_docs: RowId) -> Cardinality {
self.column_writer.get_cardinality(num_docs)
}
pub fn record_numerical_value(
&mut self,
doc: RowId,
value: NumericalValue,
arena: &mut MemoryArena,
) {
self.compatible_numerical_types.accept_value(value);
self.column_writer.record(doc, value, arena);
}
pub(super) fn operation_iterator<'a>(
self,
arena: &MemoryArena,
old_to_new_ids: Option<&[RowId]>,
buffer: &'a mut Vec<u8>,
) -> impl Iterator<Item = ColumnOperation<NumericalValue>> + 'a {
self.column_writer
.operation_iterator(arena, old_to_new_ids, buffer)
}
}
#[derive(Copy, Clone)]
pub(crate) struct StrOrBytesColumnWriter {
pub(crate) dictionary_id: u32,
pub(crate) column_writer: ColumnWriter,
// If true, when facing a multivalued cardinality,
// values associated to a given document will be sorted.
//
// This is useful for facets.
//
// If false, the order of appearance in the document will be
// observed.
pub(crate) sort_values_within_row: bool,
}
impl StrOrBytesColumnWriter {
pub(crate) fn with_dictionary_id(dictionary_id: u32) -> StrOrBytesColumnWriter {
StrOrBytesColumnWriter {
dictionary_id,
column_writer: Default::default(),
sort_values_within_row: false,
}
}
pub(crate) fn record_bytes(
&mut self,
doc: RowId,
bytes: &[u8],
dictionaries: &mut [DictionaryBuilder],
arena: &mut MemoryArena,
) {
let unordered_id = dictionaries[self.dictionary_id as usize].get_or_allocate_id(bytes);
self.column_writer.record(doc, unordered_id, arena);
}
pub(super) fn operation_iterator<'a>(
&self,
arena: &MemoryArena,
old_to_new_ids: Option<&[RowId]>,
byte_buffer: &'a mut Vec<u8>,
) -> impl Iterator<Item = ColumnOperation<UnorderedId>> + 'a {
self.column_writer
.operation_iterator(arena, old_to_new_ids, byte_buffer)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_delta_with_last_doc() {
assert_eq!(delta_with_last_doc(None, 0u32), DocumentStep::Next);
assert_eq!(delta_with_last_doc(None, 1u32), DocumentStep::Skipped);
assert_eq!(delta_with_last_doc(None, 2u32), DocumentStep::Skipped);
assert_eq!(delta_with_last_doc(Some(0u32), 0u32), DocumentStep::Same);
assert_eq!(delta_with_last_doc(Some(1u32), 1u32), DocumentStep::Same);
assert_eq!(delta_with_last_doc(Some(1u32), 2u32), DocumentStep::Next);
assert_eq!(delta_with_last_doc(Some(1u32), 3u32), DocumentStep::Skipped);
assert_eq!(delta_with_last_doc(Some(1u32), 4u32), DocumentStep::Skipped);
}
#[track_caller]
fn test_column_writer_coercion_iter_aux(
values: impl Iterator<Item = NumericalValue>,
expected_numerical_type: NumericalType,
) {
let mut compatible_numerical_types = CompatibleNumericalTypes::default();
for value in values {
compatible_numerical_types.accept_value(value);
}
assert_eq!(
compatible_numerical_types.to_numerical_type(),
expected_numerical_type
);
}
#[track_caller]
fn test_column_writer_coercion_aux(
values: &[NumericalValue],
expected_numerical_type: NumericalType,
) {
test_column_writer_coercion_iter_aux(values.iter().copied(), expected_numerical_type);
test_column_writer_coercion_iter_aux(values.iter().rev().copied(), expected_numerical_type);
}
#[test]
fn test_column_writer_coercion() {
test_column_writer_coercion_aux(&[], NumericalType::I64);
test_column_writer_coercion_aux(&[1i64.into()], NumericalType::I64);
test_column_writer_coercion_aux(&[1u64.into()], NumericalType::I64);
// We don't detect exact integer at the moment. We could!
test_column_writer_coercion_aux(&[1f64.into()], NumericalType::F64);
test_column_writer_coercion_aux(&[u64::MAX.into()], NumericalType::U64);
test_column_writer_coercion_aux(&[(i64::MAX as u64).into()], NumericalType::U64);
test_column_writer_coercion_aux(&[(1u64 << 63).into()], NumericalType::U64);
test_column_writer_coercion_aux(&[1i64.into(), 1u64.into()], NumericalType::I64);
test_column_writer_coercion_aux(&[u64::MAX.into(), (-1i64).into()], NumericalType::F64);
}
#[test]
#[should_panic]
fn test_compatible_numerical_types_static_incompatible_type() {
let mut compatible_numerical_types =
CompatibleNumericalTypes::StaticType(NumericalType::U64);
compatible_numerical_types.accept_value(NumericalValue::I64(1i64));
}
#[test]
fn test_compatible_numerical_types_static_same_type_accepted() {
let mut compatible_numerical_types =
CompatibleNumericalTypes::StaticType(NumericalType::U64);
compatible_numerical_types.accept_value(NumericalValue::U64(u64::MAX));
}
#[test]
fn test_compatible_numerical_types_static() {
for typ in [NumericalType::I64, NumericalType::U64, NumericalType::F64] {
let compatible_numerical_types = CompatibleNumericalTypes::StaticType(typ);
assert_eq!(compatible_numerical_types.to_numerical_type(), typ);
}
}
}


@@ -0,0 +1,848 @@
mod column_operation;
mod column_writers;
mod serializer;
mod value_index;
use std::io;
use std::net::Ipv6Addr;
use column_operation::ColumnOperation;
pub(crate) use column_writers::CompatibleNumericalTypes;
use common::CountingWriter;
pub(crate) use serializer::ColumnarSerializer;
use stacker::{Addr, ArenaHashMap, MemoryArena};
use crate::column_index::SerializableColumnIndex;
use crate::column_values::{
ColumnValues, MonotonicallyMappableToU128, MonotonicallyMappableToU64, VecColumn,
};
use crate::columnar::column_type::ColumnType;
use crate::columnar::writer::column_writers::{
ColumnWriter, NumericalColumnWriter, StrOrBytesColumnWriter,
};
use crate::columnar::writer::value_index::{IndexBuilder, PreallocatedIndexBuilders};
use crate::dictionary::{DictionaryBuilder, TermIdMapping, UnorderedId};
use crate::value::{Coerce, NumericalType, NumericalValue};
use crate::{Cardinality, RowId};
/// This is a set of buffers that are used to temporarily write the values into before passing them
/// to the fast field codecs.
#[derive(Default)]
struct SpareBuffers {
value_index_builders: PreallocatedIndexBuilders,
u64_values: Vec<u64>,
ip_addr_values: Vec<Ipv6Addr>,
}
/// Makes it possible to create a new columnar.
///
/// ```rust
/// use tantivy_columnar::ColumnarWriter;
///
/// let mut columnar_writer = ColumnarWriter::default();
/// columnar_writer.record_str(0u32 /* doc id */, "product_name", "Red backpack");
/// columnar_writer.record_numerical(0u32 /* doc id */, "price", 10u64);
/// columnar_writer.record_str(1u32 /* doc id */, "product_name", "Apple");
/// columnar_writer.record_numerical(0u32 /* doc id */, "price", 10.5f64); //< uh oh, we ended up mixing integers and floats.
/// let mut wrt: Vec<u8> = Vec::new();
/// columnar_writer.serialize(2u32, None, &mut wrt).unwrap();
/// ```
#[derive(Default)]
pub struct ColumnarWriter {
numerical_field_hash_map: ArenaHashMap,
datetime_field_hash_map: ArenaHashMap,
bool_field_hash_map: ArenaHashMap,
ip_addr_field_hash_map: ArenaHashMap,
bytes_field_hash_map: ArenaHashMap,
str_field_hash_map: ArenaHashMap,
arena: MemoryArena,
// Dictionaries used to store dictionary-encoded values.
dictionaries: Vec<DictionaryBuilder>,
buffers: SpareBuffers,
}
#[inline]
fn mutate_or_create_column<V, TMutator>(
arena_hash_map: &mut ArenaHashMap,
column_name: &str,
updater: TMutator,
) where
V: Copy + 'static,
TMutator: FnMut(Option<V>) -> V,
{
assert!(
!column_name.as_bytes().contains(&0u8),
"key may not contain the 0 byte"
);
arena_hash_map.mutate_or_create(column_name.as_bytes(), updater);
}
impl ColumnarWriter {
pub fn mem_usage(&self) -> usize {
// TODO add dictionary builders.
self.arena.mem_usage()
+ self.numerical_field_hash_map.mem_usage()
+ self.bool_field_hash_map.mem_usage()
+ self.bytes_field_hash_map.mem_usage()
+ self.str_field_hash_map.mem_usage()
+ self.ip_addr_field_hash_map.mem_usage()
+ self.datetime_field_hash_map.mem_usage()
}
/// Returns the list of doc ids from 0..num_docs sorted by the `sort_field`
/// column.
///
/// If the column is multivalued, the first value is used for scoring.
/// If no value is associated with a specific row, the document is assigned
/// the lowest possible score.
///
/// The sort applied is stable.
pub fn sort_order(&self, sort_field: &str, num_docs: RowId, reversed: bool) -> Vec<u32> {
let Some(numerical_col_writer) =
self.numerical_field_hash_map.get::<NumericalColumnWriter>(sort_field.as_bytes()) else {
return Vec::new();
};
let mut symbols_buffer = Vec::new();
let mut values = Vec::new();
let mut last_doc_opt: Option<RowId> = None;
for op in numerical_col_writer.operation_iterator(&self.arena, None, &mut symbols_buffer) {
match op {
ColumnOperation::NewDoc(doc) => {
last_doc_opt = Some(doc);
}
ColumnOperation::Value(numerical_value) => {
if let Some(last_doc) = last_doc_opt {
let score: f32 = f64::coerce(numerical_value) as f32;
values.push((score, last_doc));
}
}
}
}
for doc in values.len() as u32..num_docs {
values.push((0.0f32, doc));
}
values.sort_by(|(left_score, _), (right_score, _)| {
if reversed {
right_score.partial_cmp(left_score).unwrap()
} else {
left_score.partial_cmp(right_score).unwrap()
}
});
values.into_iter().map(|(_score, doc)| doc).collect()
}
/// Records a column type. This is useful to bypass the coercion process,
/// to make sure a column is present in the resulting columnar even if it is
/// empty, or to set `sort_values_within_row`.
///
/// `sort_values_within_row` is only allowed for `Bytes` or `Str` columns.
pub fn record_column_type(
&mut self,
column_name: &str,
column_type: ColumnType,
sort_values_within_row: bool,
) {
if sort_values_within_row {
assert!(
column_type == ColumnType::Bytes || column_type == ColumnType::Str,
"sort_values_within_row is only allowed for Bytes and Str columns",
);
}
match column_type {
ColumnType::Str | ColumnType::Bytes => {
let (hash_map, dictionaries) = (
if column_type == ColumnType::Str {
&mut self.str_field_hash_map
} else {
&mut self.bytes_field_hash_map
},
&mut self.dictionaries,
);
mutate_or_create_column(
hash_map,
column_name,
|column_opt: Option<StrOrBytesColumnWriter>| {
let mut column_writer = if let Some(column_writer) = column_opt {
column_writer
} else {
let dictionary_id = dictionaries.len() as u32;
dictionaries.push(DictionaryBuilder::default());
StrOrBytesColumnWriter::with_dictionary_id(dictionary_id)
};
column_writer.sort_values_within_row = sort_values_within_row;
column_writer
},
);
}
ColumnType::Bool => {
mutate_or_create_column(
&mut self.bool_field_hash_map,
column_name,
|column_opt: Option<ColumnWriter>| column_opt.unwrap_or_default(),
);
}
ColumnType::DateTime => {
mutate_or_create_column(
&mut self.datetime_field_hash_map,
column_name,
|column_opt: Option<ColumnWriter>| column_opt.unwrap_or_default(),
);
}
ColumnType::I64 | ColumnType::F64 | ColumnType::U64 => {
let numerical_type = column_type.numerical_type().unwrap();
mutate_or_create_column(
&mut self.numerical_field_hash_map,
column_name,
|column_opt: Option<NumericalColumnWriter>| {
let mut column: NumericalColumnWriter = column_opt.unwrap_or_default();
column.force_numerical_type(numerical_type);
column
},
);
}
ColumnType::IpAddr => mutate_or_create_column(
&mut self.ip_addr_field_hash_map,
column_name,
|column_opt: Option<ColumnWriter>| column_opt.unwrap_or_default(),
),
}
}
pub fn record_numerical<T: Into<NumericalValue> + Copy>(
&mut self,
doc: RowId,
column_name: &str,
numerical_value: T,
) {
let (hash_map, arena) = (&mut self.numerical_field_hash_map, &mut self.arena);
mutate_or_create_column(
hash_map,
column_name,
|column_opt: Option<NumericalColumnWriter>| {
let mut column: NumericalColumnWriter = column_opt.unwrap_or_default();
column.record_numerical_value(doc, numerical_value.into(), arena);
column
},
);
}
pub fn record_ip_addr(&mut self, doc: RowId, column_name: &str, ip_addr: Ipv6Addr) {
assert!(
!column_name.as_bytes().contains(&0u8),
"key may not contain the 0 byte"
);
let (hash_map, arena) = (&mut self.ip_addr_field_hash_map, &mut self.arena);
hash_map.mutate_or_create(
column_name.as_bytes(),
|column_opt: Option<ColumnWriter>| {
let mut column: ColumnWriter = column_opt.unwrap_or_default();
column.record(doc, ip_addr, arena);
column
},
);
}
pub fn record_bool(&mut self, doc: RowId, column_name: &str, val: bool) {
let (hash_map, arena) = (&mut self.bool_field_hash_map, &mut self.arena);
mutate_or_create_column(hash_map, column_name, |column_opt: Option<ColumnWriter>| {
let mut column: ColumnWriter = column_opt.unwrap_or_default();
column.record(doc, val, arena);
column
});
}
pub fn record_datetime(&mut self, doc: RowId, column_name: &str, datetime: common::DateTime) {
let (hash_map, arena) = (&mut self.datetime_field_hash_map, &mut self.arena);
mutate_or_create_column(hash_map, column_name, |column_opt: Option<ColumnWriter>| {
let mut column: ColumnWriter = column_opt.unwrap_or_default();
column.record(
doc,
NumericalValue::I64(datetime.into_timestamp_micros()),
arena,
);
column
});
}
pub fn record_str(&mut self, doc: RowId, column_name: &str, value: &str) {
let (hash_map, arena, dictionaries) = (
&mut self.str_field_hash_map,
&mut self.arena,
&mut self.dictionaries,
);
hash_map.mutate_or_create(
column_name.as_bytes(),
|column_opt: Option<StrOrBytesColumnWriter>| {
let mut column: StrOrBytesColumnWriter = column_opt.unwrap_or_else(|| {
// Each column has its own dictionary
let dictionary_id = dictionaries.len() as u32;
dictionaries.push(DictionaryBuilder::default());
StrOrBytesColumnWriter::with_dictionary_id(dictionary_id)
});
column.record_bytes(doc, value.as_bytes(), dictionaries, arena);
column
},
);
}
pub fn record_bytes(&mut self, doc: RowId, column_name: &str, value: &[u8]) {
assert!(
!column_name.as_bytes().contains(&0u8),
"key may not contain the 0 byte"
);
let (hash_map, arena, dictionaries) = (
&mut self.bytes_field_hash_map,
&mut self.arena,
&mut self.dictionaries,
);
hash_map.mutate_or_create(
column_name.as_bytes(),
|column_opt: Option<StrOrBytesColumnWriter>| {
let mut column: StrOrBytesColumnWriter = column_opt.unwrap_or_else(|| {
// Each column has its own dictionary
let dictionary_id = dictionaries.len() as u32;
dictionaries.push(DictionaryBuilder::default());
StrOrBytesColumnWriter::with_dictionary_id(dictionary_id)
});
column.record_bytes(doc, value, dictionaries, arena);
column
},
);
}
pub fn serialize(
&mut self,
num_docs: RowId,
old_to_new_row_ids: Option<&[RowId]>,
wrt: &mut dyn io::Write,
) -> io::Result<()> {
let mut serializer = ColumnarSerializer::new(wrt);
let mut columns: Vec<(&[u8], ColumnType, Addr)> = self
.numerical_field_hash_map
.iter()
.map(|(column_name, addr, _)| {
let numerical_column_writer: NumericalColumnWriter =
self.numerical_field_hash_map.read(addr);
let column_type = numerical_column_writer.numerical_type().into();
(column_name, column_type, addr)
})
.collect();
columns.extend(
self.bytes_field_hash_map
.iter()
.map(|(term, addr, _)| (term, ColumnType::Bytes, addr)),
);
columns.extend(
self.str_field_hash_map
.iter()
.map(|(column_name, addr, _)| (column_name, ColumnType::Str, addr)),
);
columns.extend(
self.bool_field_hash_map
.iter()
.map(|(column_name, addr, _)| (column_name, ColumnType::Bool, addr)),
);
columns.extend(
self.ip_addr_field_hash_map
.iter()
.map(|(column_name, addr, _)| (column_name, ColumnType::IpAddr, addr)),
);
columns.extend(
self.datetime_field_hash_map
.iter()
.map(|(column_name, addr, _)| (column_name, ColumnType::DateTime, addr)),
);
columns.sort_unstable_by_key(|(column_name, col_type, _)| (*column_name, *col_type));
let (arena, buffers, dictionaries) = (&self.arena, &mut self.buffers, &self.dictionaries);
let mut symbol_byte_buffer: Vec<u8> = Vec::new();
for (column_name, column_type, addr) in columns {
match column_type {
ColumnType::Bool => {
let column_writer: ColumnWriter = self.bool_field_hash_map.read(addr);
let cardinality = column_writer.get_cardinality(num_docs);
let mut column_serializer =
serializer.serialize_column(column_name, column_type);
serialize_bool_column(
cardinality,
num_docs,
column_writer.operation_iterator(
arena,
old_to_new_row_ids,
&mut symbol_byte_buffer,
),
buffers,
&mut column_serializer,
)?;
}
ColumnType::IpAddr => {
let column_writer: ColumnWriter = self.ip_addr_field_hash_map.read(addr);
let cardinality = column_writer.get_cardinality(num_docs);
let mut column_serializer =
serializer.serialize_column(column_name, ColumnType::IpAddr);
serialize_ip_addr_column(
cardinality,
num_docs,
column_writer.operation_iterator(
arena,
old_to_new_row_ids,
&mut symbol_byte_buffer,
),
buffers,
&mut column_serializer,
)?;
}
ColumnType::Bytes | ColumnType::Str => {
let str_or_bytes_column_writer: StrOrBytesColumnWriter =
if column_type == ColumnType::Bytes {
self.bytes_field_hash_map.read(addr)
} else {
self.str_field_hash_map.read(addr)
};
let dictionary_builder =
&dictionaries[str_or_bytes_column_writer.dictionary_id as usize];
let cardinality = str_or_bytes_column_writer
.column_writer
.get_cardinality(num_docs);
let mut column_serializer =
serializer.serialize_column(column_name, column_type);
serialize_bytes_or_str_column(
cardinality,
num_docs,
str_or_bytes_column_writer.sort_values_within_row,
dictionary_builder,
str_or_bytes_column_writer.operation_iterator(
arena,
old_to_new_row_ids,
&mut symbol_byte_buffer,
),
buffers,
&mut column_serializer,
)?;
}
ColumnType::F64 | ColumnType::I64 | ColumnType::U64 => {
let numerical_column_writer: NumericalColumnWriter =
self.numerical_field_hash_map.read(addr);
let cardinality = numerical_column_writer.cardinality(num_docs);
let mut column_serializer =
serializer.serialize_column(column_name, column_type);
let numerical_type = column_type.numerical_type().unwrap();
serialize_numerical_column(
cardinality,
num_docs,
numerical_type,
numerical_column_writer.operation_iterator(
arena,
old_to_new_row_ids,
&mut symbol_byte_buffer,
),
buffers,
&mut column_serializer,
)?;
}
ColumnType::DateTime => {
let column_writer: ColumnWriter = self.datetime_field_hash_map.read(addr);
let cardinality = column_writer.get_cardinality(num_docs);
let mut column_serializer =
serializer.serialize_column(column_name, ColumnType::DateTime);
serialize_numerical_column(
cardinality,
num_docs,
NumericalType::I64,
column_writer.operation_iterator(
arena,
old_to_new_row_ids,
&mut symbol_byte_buffer,
),
buffers,
&mut column_serializer,
)?;
}
};
}
serializer.finalize(num_docs)?;
Ok(())
}
}
// Serialize [Dictionary, Column, dictionary num bytes U32::LE]
// Column: [Column Index, Column Values, column index num bytes U32::LE]
fn serialize_bytes_or_str_column(
cardinality: Cardinality,
num_docs: RowId,
sort_values_within_row: bool,
dictionary_builder: &DictionaryBuilder,
operation_it: impl Iterator<Item = ColumnOperation<UnorderedId>>,
buffers: &mut SpareBuffers,
wrt: impl io::Write,
) -> io::Result<()> {
let SpareBuffers {
value_index_builders,
u64_values,
..
} = buffers;
let mut counting_writer = CountingWriter::wrap(wrt);
let term_id_mapping: TermIdMapping = dictionary_builder.serialize(&mut counting_writer)?;
let dictionary_num_bytes: u32 = counting_writer.written_bytes() as u32;
let mut wrt = counting_writer.finish();
let operation_iterator = operation_it.map(|symbol: ColumnOperation<UnorderedId>| {
// We map unordered ids to ordered ids.
match symbol {
ColumnOperation::Value(unordered_id) => {
let ordered_id = term_id_mapping.to_ord(unordered_id);
ColumnOperation::Value(ordered_id.0 as u64)
}
ColumnOperation::NewDoc(doc) => ColumnOperation::NewDoc(doc),
}
});
send_to_serialize_column_mappable_to_u64(
operation_iterator,
cardinality,
num_docs,
sort_values_within_row,
value_index_builders,
u64_values,
&mut wrt,
)?;
wrt.write_all(&dictionary_num_bytes.to_le_bytes()[..])?;
Ok(())
}
fn serialize_numerical_column(
cardinality: Cardinality,
num_docs: RowId,
numerical_type: NumericalType,
op_iterator: impl Iterator<Item = ColumnOperation<NumericalValue>>,
buffers: &mut SpareBuffers,
wrt: &mut impl io::Write,
) -> io::Result<()> {
let SpareBuffers {
value_index_builders,
u64_values,
..
} = buffers;
match numerical_type {
NumericalType::I64 => {
send_to_serialize_column_mappable_to_u64(
coerce_numerical_symbol::<i64>(op_iterator),
cardinality,
num_docs,
false,
value_index_builders,
u64_values,
wrt,
)?;
}
NumericalType::U64 => {
send_to_serialize_column_mappable_to_u64(
coerce_numerical_symbol::<u64>(op_iterator),
cardinality,
num_docs,
false,
value_index_builders,
u64_values,
wrt,
)?;
}
NumericalType::F64 => {
send_to_serialize_column_mappable_to_u64(
coerce_numerical_symbol::<f64>(op_iterator),
cardinality,
num_docs,
false,
value_index_builders,
u64_values,
wrt,
)?;
}
};
Ok(())
}
fn serialize_bool_column(
cardinality: Cardinality,
num_docs: RowId,
column_operations_it: impl Iterator<Item = ColumnOperation<bool>>,
buffers: &mut SpareBuffers,
wrt: &mut impl io::Write,
) -> io::Result<()> {
let SpareBuffers {
value_index_builders,
u64_values,
..
} = buffers;
send_to_serialize_column_mappable_to_u64(
column_operations_it.map(|bool_column_operation| match bool_column_operation {
ColumnOperation::NewDoc(doc) => ColumnOperation::NewDoc(doc),
ColumnOperation::Value(bool_val) => ColumnOperation::Value(bool_val.to_u64()),
}),
cardinality,
num_docs,
false,
value_index_builders,
u64_values,
wrt,
)?;
Ok(())
}
fn serialize_ip_addr_column(
cardinality: Cardinality,
num_docs: RowId,
column_operations_it: impl Iterator<Item = ColumnOperation<Ipv6Addr>>,
buffers: &mut SpareBuffers,
wrt: &mut impl io::Write,
) -> io::Result<()> {
let SpareBuffers {
value_index_builders,
ip_addr_values,
..
} = buffers;
send_to_serialize_column_mappable_to_u128(
column_operations_it,
cardinality,
num_docs,
value_index_builders,
ip_addr_values,
wrt,
)?;
Ok(())
}
fn send_to_serialize_column_mappable_to_u128<
T: Copy + Ord + std::fmt::Debug + Send + Sync + MonotonicallyMappableToU128 + PartialOrd,
>(
op_iterator: impl Iterator<Item = ColumnOperation<T>>,
cardinality: Cardinality,
num_rows: RowId,
value_index_builders: &mut PreallocatedIndexBuilders,
values: &mut Vec<T>,
mut wrt: impl io::Write,
) -> io::Result<()>
where
for<'a> VecColumn<'a, T>: ColumnValues<T>,
{
values.clear();
// TODO: split index and values
let serializable_column_index = match cardinality {
Cardinality::Full => {
consume_operation_iterator(
op_iterator,
value_index_builders.borrow_required_index_builder(),
values,
);
SerializableColumnIndex::Full
}
Cardinality::Optional => {
let optional_index_builder = value_index_builders.borrow_optional_index_builder();
consume_operation_iterator(op_iterator, optional_index_builder, values);
let optional_index = optional_index_builder.finish(num_rows);
SerializableColumnIndex::Optional {
num_rows,
non_null_row_ids: Box::new(optional_index),
}
}
Cardinality::Multivalued => {
let multivalued_index_builder = value_index_builders.borrow_multivalued_index_builder();
consume_operation_iterator(op_iterator, multivalued_index_builder, values);
let multivalued_index = multivalued_index_builder.finish(num_rows);
SerializableColumnIndex::Multivalued(Box::new(multivalued_index))
}
};
crate::column::serialize_column_mappable_to_u128(
serializable_column_index,
&&values[..],
&mut wrt,
)?;
Ok(())
}
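/// Sorts the values of each row in place. `multivalued_index` holds the row
/// boundary offsets into `values`: e.g. boundaries `[0, 2, 4]` over values
/// `[5, 3, 9, 7]` yield `[3, 5, 7, 9]`.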
fn sort_values_within_row_in_place(multivalued_index: &[RowId], values: &mut [u64]) {
let mut start_index: usize = 0;
for end_index in multivalued_index.iter().copied() {
let end_index = end_index as usize;
values[start_index..end_index].sort_unstable();
start_index = end_index;
}
}
fn send_to_serialize_column_mappable_to_u64(
op_iterator: impl Iterator<Item = ColumnOperation<u64>>,
cardinality: Cardinality,
num_rows: RowId,
sort_values_within_row: bool,
value_index_builders: &mut PreallocatedIndexBuilders,
values: &mut Vec<u64>,
mut wrt: impl io::Write,
) -> io::Result<()>
where
for<'a> VecColumn<'a, u64>: ColumnValues<u64>,
{
values.clear();
let serializable_column_index = match cardinality {
Cardinality::Full => {
consume_operation_iterator(
op_iterator,
value_index_builders.borrow_required_index_builder(),
values,
);
SerializableColumnIndex::Full
}
Cardinality::Optional => {
let optional_index_builder = value_index_builders.borrow_optional_index_builder();
consume_operation_iterator(op_iterator, optional_index_builder, values);
let optional_index = optional_index_builder.finish(num_rows);
SerializableColumnIndex::Optional {
non_null_row_ids: Box::new(optional_index),
num_rows,
}
}
Cardinality::Multivalued => {
let multivalued_index_builder = value_index_builders.borrow_multivalued_index_builder();
consume_operation_iterator(op_iterator, multivalued_index_builder, values);
let multivalued_index = multivalued_index_builder.finish(num_rows);
if sort_values_within_row {
sort_values_within_row_in_place(multivalued_index, values);
}
SerializableColumnIndex::Multivalued(Box::new(multivalued_index))
}
};
crate::column::serialize_column_mappable_to_u64(
serializable_column_index,
&&values[..],
&mut wrt,
)?;
Ok(())
}
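/// Coerces every value operation to the target numerical type `T` and maps it
/// to its `u64` representation; `NewDoc` operations are passed through.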
fn coerce_numerical_symbol<T>(
operation_iterator: impl Iterator<Item = ColumnOperation<NumericalValue>>,
) -> impl Iterator<Item = ColumnOperation<u64>>
where T: Coerce + MonotonicallyMappableToU64 {
operation_iterator.map(|symbol| match symbol {
ColumnOperation::NewDoc(doc) => ColumnOperation::NewDoc(doc),
ColumnOperation::Value(numerical_value) => {
ColumnOperation::Value(T::coerce(numerical_value).to_u64())
}
})
}
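/// Replays the operation log: `NewDoc` operations are recorded in the index
/// builder, while `Value` operations are appended to `values`.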
fn consume_operation_iterator<T: Ord, TIndexBuilder: IndexBuilder>(
operation_iterator: impl Iterator<Item = ColumnOperation<T>>,
index_builder: &mut TIndexBuilder,
values: &mut Vec<T>,
) {
for symbol in operation_iterator {
match symbol {
ColumnOperation::NewDoc(doc) => {
index_builder.record_row(doc);
}
ColumnOperation::Value(value) => {
index_builder.record_value();
values.push(value);
}
}
}
}
#[cfg(test)]
mod tests {
use stacker::MemoryArena;
use crate::columnar::writer::column_operation::ColumnOperation;
use crate::{Cardinality, NumericalValue};
#[test]
fn test_column_writer_required_simple() {
let mut arena = MemoryArena::default();
let mut column_writer = super::ColumnWriter::default();
column_writer.record(0u32, NumericalValue::from(14i64), &mut arena);
column_writer.record(1u32, NumericalValue::from(15i64), &mut arena);
column_writer.record(2u32, NumericalValue::from(-16i64), &mut arena);
assert_eq!(column_writer.get_cardinality(3), Cardinality::Full);
let mut buffer = Vec::new();
let symbols: Vec<ColumnOperation<NumericalValue>> = column_writer
.operation_iterator(&arena, None, &mut buffer)
.collect();
assert_eq!(symbols.len(), 6);
assert!(matches!(symbols[0], ColumnOperation::NewDoc(0u32)));
assert!(matches!(
symbols[1],
ColumnOperation::Value(NumericalValue::I64(14i64))
));
assert!(matches!(symbols[2], ColumnOperation::NewDoc(1u32)));
assert!(matches!(
symbols[3],
ColumnOperation::Value(NumericalValue::I64(15i64))
));
assert!(matches!(symbols[4], ColumnOperation::NewDoc(2u32)));
assert!(matches!(
symbols[5],
ColumnOperation::Value(NumericalValue::I64(-16i64))
));
}
#[test]
fn test_column_writer_optional_cardinality_missing_first() {
let mut arena = MemoryArena::default();
let mut column_writer = super::ColumnWriter::default();
column_writer.record(1u32, NumericalValue::from(15i64), &mut arena);
column_writer.record(2u32, NumericalValue::from(-16i64), &mut arena);
assert_eq!(column_writer.get_cardinality(3), Cardinality::Optional);
let mut buffer = Vec::new();
let symbols: Vec<ColumnOperation<NumericalValue>> = column_writer
.operation_iterator(&arena, None, &mut buffer)
.collect();
assert_eq!(symbols.len(), 4);
assert!(matches!(symbols[0], ColumnOperation::NewDoc(1u32)));
assert!(matches!(
symbols[1],
ColumnOperation::Value(NumericalValue::I64(15i64))
));
assert!(matches!(symbols[2], ColumnOperation::NewDoc(2u32)));
assert!(matches!(
symbols[3],
ColumnOperation::Value(NumericalValue::I64(-16i64))
));
}
#[test]
fn test_column_writer_optional_cardinality_missing_last() {
let mut arena = MemoryArena::default();
let mut column_writer = super::ColumnWriter::default();
column_writer.record(0u32, NumericalValue::from(15i64), &mut arena);
assert_eq!(column_writer.get_cardinality(2), Cardinality::Optional);
let mut buffer = Vec::new();
let symbols: Vec<ColumnOperation<NumericalValue>> = column_writer
.operation_iterator(&arena, None, &mut buffer)
.collect();
assert_eq!(symbols.len(), 2);
assert!(matches!(symbols[0], ColumnOperation::NewDoc(0u32)));
assert!(matches!(
symbols[1],
ColumnOperation::Value(NumericalValue::I64(15i64))
));
}
#[test]
fn test_column_writer_multivalued() {
let mut arena = MemoryArena::default();
let mut column_writer = super::ColumnWriter::default();
column_writer.record(0u32, NumericalValue::from(16i64), &mut arena);
column_writer.record(0u32, NumericalValue::from(17i64), &mut arena);
assert_eq!(column_writer.get_cardinality(1), Cardinality::Multivalued);
let mut buffer = Vec::new();
let symbols: Vec<ColumnOperation<NumericalValue>> = column_writer
.operation_iterator(&arena, None, &mut buffer)
.collect();
assert_eq!(symbols.len(), 3);
assert!(matches!(symbols[0], ColumnOperation::NewDoc(0u32)));
assert!(matches!(
symbols[1],
ColumnOperation::Value(NumericalValue::I64(16i64))
));
assert!(matches!(
symbols[2],
ColumnOperation::Value(NumericalValue::I64(17i64))
));
}
}


@@ -0,0 +1,108 @@
use std::io;
use std::io::Write;
use common::{BinarySerializable, CountingWriter};
use sstable::value::RangeValueWriter;
use sstable::RangeSSTable;
use crate::columnar::ColumnType;
use crate::RowId;
pub struct ColumnarSerializer<W: io::Write> {
wrt: CountingWriter<W>,
sstable_range: sstable::Writer<Vec<u8>, RangeValueWriter>,
prepare_key_buffer: Vec<u8>,
}
/// Builds a key consisting of the concatenation of the column name, a zero
/// byte separator, and the column type code:
/// `[column_name][0u8][column_type code]`.
fn prepare_key(key: &[u8], column_type: ColumnType, buffer: &mut Vec<u8>) {
buffer.clear();
buffer.extend_from_slice(key);
buffer.push(0u8);
buffer.push(column_type.to_code());
}
impl<W: io::Write> ColumnarSerializer<W> {
pub(crate) fn new(wrt: W) -> ColumnarSerializer<W> {
let sstable_range: sstable::Writer<Vec<u8>, RangeValueWriter> =
sstable::Dictionary::<RangeSSTable>::builder(Vec::with_capacity(100_000)).unwrap();
ColumnarSerializer {
wrt: CountingWriter::wrap(wrt),
sstable_range,
prepare_key_buffer: Vec::new(),
}
}
pub fn serialize_column<'a>(
&'a mut self,
column_name: &[u8],
column_type: ColumnType,
) -> impl io::Write + 'a {
let start_offset = self.wrt.written_bytes();
prepare_key(column_name, column_type, &mut self.prepare_key_buffer);
ColumnSerializer {
columnar_serializer: self,
start_offset,
}
}
pub(crate) fn finalize(mut self, num_rows: RowId) -> io::Result<()> {
let sstable_bytes: Vec<u8> = self.sstable_range.finish()?;
let sstable_num_bytes: u64 = sstable_bytes.len() as u64;
self.wrt.write_all(&sstable_bytes)?;
self.wrt.write_all(&sstable_num_bytes.to_le_bytes()[..])?;
num_rows.serialize(&mut self.wrt)?;
self.wrt
.write_all(&super::super::format_version::footer())?;
self.wrt.flush()?;
Ok(())
}
}
struct ColumnSerializer<'a, W: io::Write> {
columnar_serializer: &'a mut ColumnarSerializer<W>,
start_offset: u64,
}
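// On drop, a `ColumnSerializer` registers the byte range its column occupies
// in the wrapped writer into the range sstable, keyed by the prepared
// `[column_name][0u8][column_type code]` key.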
impl<'a, W: io::Write> Drop for ColumnSerializer<'a, W> {
fn drop(&mut self) {
let end_offset: u64 = self.columnar_serializer.wrt.written_bytes();
let byte_range = self.start_offset..end_offset;
self.columnar_serializer.sstable_range.insert_cannot_fail(
&self.columnar_serializer.prepare_key_buffer[..],
&byte_range,
);
self.columnar_serializer.prepare_key_buffer.clear();
}
}
impl<'a, W: io::Write> io::Write for ColumnSerializer<'a, W> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.columnar_serializer.wrt.write(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.columnar_serializer.wrt.flush()
}
fn write_all(&mut self, buf: &[u8]) -> io::Result<()> {
self.columnar_serializer.wrt.write_all(buf)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::columnar::column_type::ColumnType;
#[test]
fn test_prepare_key_bytes() {
let mut buffer: Vec<u8> = b"somegarbage".to_vec();
prepare_key(b"root\0child", ColumnType::Str, &mut buffer);
assert_eq!(buffer.len(), 12);
assert_eq!(&buffer[..10], b"root\0child");
assert_eq!(buffer[10], 0u8);
assert_eq!(buffer[11], ColumnType::Str.to_code());
}
}


@@ -0,0 +1,165 @@
use crate::iterable::Iterable;
use crate::RowId;
/// The `IndexBuilder` interprets a sequence of
/// calls of the form:
/// `(record_row, record_value+)*`
/// and can then serialize the results into an index that associates doc ids with their value(s).
///
/// It has different implementations depending on whether the
/// cardinality is full, optional, or multivalued.
pub(crate) trait IndexBuilder {
fn record_row(&mut self, doc: RowId);
#[inline]
fn record_value(&mut self) {}
}
/// The `FullIndexBuilder` does nothing: with full cardinality, every row has
/// exactly one value, so no index needs to be recorded.
#[derive(Default)]
pub struct FullIndexBuilder;
impl IndexBuilder for FullIndexBuilder {
#[inline(always)]
fn record_row(&mut self, _doc: RowId) {}
}
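/// Builds the index of an optional column by recording, in increasing order,
/// the row ids that hold a value.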
#[derive(Default)]
pub struct OptionalIndexBuilder {
docs: Vec<RowId>,
}
impl OptionalIndexBuilder {
pub fn finish(&mut self, num_rows: RowId) -> impl Iterable<RowId> + '_ {
debug_assert!(self
.docs
.last()
.copied()
.map(|last_doc| last_doc < num_rows)
.unwrap_or(true));
&self.docs[..]
}
fn reset(&mut self) {
self.docs.clear();
}
}
impl IndexBuilder for OptionalIndexBuilder {
#[inline(always)]
fn record_row(&mut self, doc: RowId) {
debug_assert!(self
.docs
.last()
.copied()
.map(|prev_doc| doc > prev_doc)
.unwrap_or(true));
self.docs.push(doc);
}
}
#[derive(Default)]
pub struct MultivaluedIndexBuilder {
start_offsets: Vec<RowId>,
total_num_vals_seen: u32,
}
impl MultivaluedIndexBuilder {
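/// Returns the row start offsets, padded to `num_docs + 1` entries.
/// E.g. two values recorded for row 1 and one value for row 2 yield
/// `[0, 0, 2, 3, 3]` for `num_docs = 4` (see the unit test below).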
pub fn finish(&mut self, num_docs: RowId) -> &[u32] {
self.start_offsets
.resize(num_docs as usize + 1, self.total_num_vals_seen);
&self.start_offsets[..]
}
fn reset(&mut self) {
self.start_offsets.clear();
self.start_offsets.push(0u32);
self.total_num_vals_seen = 0;
}
}
impl IndexBuilder for MultivaluedIndexBuilder {
fn record_row(&mut self, row_id: RowId) {
self.start_offsets
.resize(row_id as usize + 1, self.total_num_vals_seen);
}
fn record_value(&mut self) {
self.total_num_vals_seen += 1;
}
}
/// The `PreallocatedIndexBuilders` struct is there to avoid allocating a
/// new index builder for every single column.
#[derive(Default)]
pub struct PreallocatedIndexBuilders {
required_index_builder: FullIndexBuilder,
optional_index_builder: OptionalIndexBuilder,
multivalued_index_builder: MultivaluedIndexBuilder,
}
impl PreallocatedIndexBuilders {
pub fn borrow_required_index_builder(&mut self) -> &mut FullIndexBuilder {
&mut self.required_index_builder
}
pub fn borrow_optional_index_builder(&mut self) -> &mut OptionalIndexBuilder {
self.optional_index_builder.reset();
&mut self.optional_index_builder
}
pub fn borrow_multivalued_index_builder(&mut self) -> &mut MultivaluedIndexBuilder {
self.multivalued_index_builder.reset();
&mut self.multivalued_index_builder
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_optional_value_index_builder() {
let mut opt_value_index_builder = OptionalIndexBuilder::default();
opt_value_index_builder.record_row(0u32);
opt_value_index_builder.record_value();
assert_eq!(
&opt_value_index_builder
.finish(1u32)
.boxed_iter()
.collect::<Vec<u32>>(),
&[0]
);
opt_value_index_builder.reset();
opt_value_index_builder.record_row(1u32);
opt_value_index_builder.record_value();
assert_eq!(
&opt_value_index_builder
.finish(2u32)
.boxed_iter()
.collect::<Vec<u32>>(),
&[1]
);
}
#[test]
fn test_multivalued_value_index_builder() {
let mut multivalued_value_index_builder = MultivaluedIndexBuilder::default();
multivalued_value_index_builder.record_row(1u32);
multivalued_value_index_builder.record_value();
multivalued_value_index_builder.record_value();
multivalued_value_index_builder.record_row(2u32);
multivalued_value_index_builder.record_value();
assert_eq!(
multivalued_value_index_builder.finish(4u32).to_vec(),
vec![0, 0, 2, 3, 3]
);
multivalued_value_index_builder.reset();
multivalued_value_index_builder.record_row(2u32);
multivalued_value_index_builder.record_value();
multivalued_value_index_builder.record_value();
assert_eq!(
multivalued_value_index_builder.finish(4u32).to_vec(),
vec![0, 0, 0, 2, 2]
);
}
}


@@ -0,0 +1,84 @@
use std::io;
use fnv::FnvHashMap;
use sstable::SSTable;
pub(crate) struct TermIdMapping {
unordered_to_ord: Vec<OrderedId>,
}
impl TermIdMapping {
pub fn to_ord(&self, unordered: UnorderedId) -> OrderedId {
self.unordered_to_ord[unordered.0 as usize]
}
}
/// When we add values, we cannot know their ordered id yet.
/// For this reason, we temporarily assign them an `UnorderedId`
/// that will be mapped to an `OrderedId` upon serialization.
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq)]
pub struct UnorderedId(pub u32);
#[derive(Clone, Copy, Hash, PartialEq, Eq, Debug)]
pub struct OrderedId(pub u32);
/// `DictionaryBuilder` for dictionary encoding.
///
/// It stores the different terms encountered and assigns them a temporary id
/// we call an unordered id.
///
/// Upon serialization, we sort the terms and hence build an
/// `UnorderedId -> OrderedId` mapping.
#[derive(Default)]
pub(crate) struct DictionaryBuilder {
dict: FnvHashMap<Vec<u8>, UnorderedId>,
}
impl DictionaryBuilder {
/// Get or allocate an unordered id.
/// (This ID is simply an auto-incremented id.)
pub fn get_or_allocate_id(&mut self, term: &[u8]) -> UnorderedId {
if let Some(term_id) = self.dict.get(term) {
return *term_id;
}
let new_id = UnorderedId(self.dict.len() as u32);
self.dict.insert(term.to_vec(), new_id);
new_id
}
/// Serializes the dictionary into an sstable, and returns the
/// `UnorderedId -> OrderedId` mapping.
pub fn serialize<'a, W: io::Write + 'a>(&self, wrt: &mut W) -> io::Result<TermIdMapping> {
let mut terms: Vec<(&[u8], UnorderedId)> =
self.dict.iter().map(|(k, v)| (k.as_slice(), *v)).collect();
terms.sort_unstable_by_key(|(key, _)| *key);
// TODO Remove the allocation.
let mut unordered_to_ord: Vec<OrderedId> = vec![OrderedId(0u32); terms.len()];
let mut sstable_builder = sstable::VoidSSTable::writer(wrt);
for (ord, (key, unordered_id)) in terms.into_iter().enumerate() {
let ordered_id = OrderedId(ord as u32);
sstable_builder.insert(key, &())?;
unordered_to_ord[unordered_id.0 as usize] = ordered_id;
}
sstable_builder.finish()?;
Ok(TermIdMapping { unordered_to_ord })
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_dictionary_builder() {
let mut dictionary_builder = DictionaryBuilder::default();
let hello_uid = dictionary_builder.get_or_allocate_id(b"hello");
let happy_uid = dictionary_builder.get_or_allocate_id(b"happy");
let tax_uid = dictionary_builder.get_or_allocate_id(b"tax");
let mut buffer = Vec::new();
let id_mapping = dictionary_builder.serialize(&mut buffer).unwrap();
assert_eq!(id_mapping.to_ord(hello_uid), OrderedId(1));
assert_eq!(id_mapping.to_ord(happy_uid), OrderedId(0));
assert_eq!(id_mapping.to_ord(tax_uid), OrderedId(2));
}
}


@@ -0,0 +1,258 @@
use std::io;
use std::net::Ipv6Addr;
use std::sync::Arc;
use common::file_slice::FileSlice;
use common::{DateTime, HasLen, OwnedBytes};
use crate::column::{BytesColumn, Column, StrColumn};
use crate::column_values::{monotonic_map_column, StrictlyMonotonicFn};
use crate::columnar::ColumnType;
use crate::{Cardinality, NumericalType};
#[derive(Clone)]
pub enum DynamicColumn {
Bool(Column<bool>),
I64(Column<i64>),
U64(Column<u64>),
F64(Column<f64>),
IpAddr(Column<Ipv6Addr>),
DateTime(Column<DateTime>),
Bytes(BytesColumn),
Str(StrColumn),
}
impl DynamicColumn {
pub fn get_cardinality(&self) -> Cardinality {
match self {
DynamicColumn::Bool(c) => c.get_cardinality(),
DynamicColumn::I64(c) => c.get_cardinality(),
DynamicColumn::U64(c) => c.get_cardinality(),
DynamicColumn::F64(c) => c.get_cardinality(),
DynamicColumn::IpAddr(c) => c.get_cardinality(),
DynamicColumn::DateTime(c) => c.get_cardinality(),
DynamicColumn::Bytes(c) => c.ords().get_cardinality(),
DynamicColumn::Str(c) => c.ords().get_cardinality(),
}
}
pub fn column_type(&self) -> ColumnType {
match self {
DynamicColumn::Bool(_) => ColumnType::Bool,
DynamicColumn::I64(_) => ColumnType::I64,
DynamicColumn::U64(_) => ColumnType::U64,
DynamicColumn::F64(_) => ColumnType::F64,
DynamicColumn::IpAddr(_) => ColumnType::IpAddr,
DynamicColumn::DateTime(_) => ColumnType::DateTime,
DynamicColumn::Bytes(_) => ColumnType::Bytes,
DynamicColumn::Str(_) => ColumnType::Str,
}
}
pub fn coerce_numerical(self, target_numerical_type: NumericalType) -> Option<Self> {
match target_numerical_type {
NumericalType::I64 => self.coerce_to_i64(),
NumericalType::U64 => self.coerce_to_u64(),
NumericalType::F64 => self.coerce_to_f64(),
}
}
pub fn is_numerical(&self) -> bool {
self.column_type().numerical_type().is_some()
}
pub fn is_f64(&self) -> bool {
self.column_type().numerical_type() == Some(NumericalType::F64)
}
pub fn is_i64(&self) -> bool {
self.column_type().numerical_type() == Some(NumericalType::I64)
}
pub fn is_u64(&self) -> bool {
self.column_type().numerical_type() == Some(NumericalType::U64)
}
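// Coercions to f64 always succeed (possibly losing precision); coercions
// between i64 and u64 are checked against the column's value range and
// return `None` when a value would not fit.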
fn coerce_to_f64(self) -> Option<DynamicColumn> {
match self {
DynamicColumn::I64(column) => Some(DynamicColumn::F64(Column {
idx: column.idx,
values: Arc::new(monotonic_map_column(column.values, MapI64ToF64)),
})),
DynamicColumn::U64(column) => Some(DynamicColumn::F64(Column {
idx: column.idx,
values: Arc::new(monotonic_map_column(column.values, MapU64ToF64)),
})),
DynamicColumn::F64(_) => Some(self),
_ => None,
}
}
fn coerce_to_i64(self) -> Option<DynamicColumn> {
match self {
DynamicColumn::U64(column) => {
if column.max_value() > i64::MAX as u64 {
return None;
}
Some(DynamicColumn::I64(Column {
idx: column.idx,
values: Arc::new(monotonic_map_column(column.values, MapU64ToI64)),
}))
}
DynamicColumn::I64(_) => Some(self),
_ => None,
}
}
fn coerce_to_u64(self) -> Option<DynamicColumn> {
match self {
DynamicColumn::I64(column) => {
if column.min_value() < 0 {
return None;
}
Some(DynamicColumn::U64(Column {
idx: column.idx,
values: Arc::new(monotonic_map_column(column.values, MapI64ToU64)),
}))
}
DynamicColumn::U64(_) => Some(self),
_ => None,
}
}
}
struct MapI64ToF64;
impl StrictlyMonotonicFn<i64, f64> for MapI64ToF64 {
#[inline(always)]
fn mapping(&self, inp: i64) -> f64 {
inp as f64
}
#[inline(always)]
fn inverse(&self, out: f64) -> i64 {
out as i64
}
}
struct MapU64ToF64;
impl StrictlyMonotonicFn<u64, f64> for MapU64ToF64 {
#[inline(always)]
fn mapping(&self, inp: u64) -> f64 {
inp as f64
}
#[inline(always)]
fn inverse(&self, out: f64) -> u64 {
out as u64
}
}
struct MapU64ToI64;
impl StrictlyMonotonicFn<u64, i64> for MapU64ToI64 {
#[inline(always)]
fn mapping(&self, inp: u64) -> i64 {
inp as i64
}
#[inline(always)]
fn inverse(&self, out: i64) -> u64 {
out as u64
}
}
struct MapI64ToU64;
impl StrictlyMonotonicFn<i64, u64> for MapI64ToU64 {
#[inline(always)]
fn mapping(&self, inp: i64) -> u64 {
inp as u64
}
#[inline(always)]
fn inverse(&self, out: u64) -> i64 {
out as i64
}
}
macro_rules! static_dynamic_conversions {
($typ:ty, $enum_name:ident) => {
impl From<DynamicColumn> for Option<$typ> {
fn from(dynamic_column: DynamicColumn) -> Option<$typ> {
if let DynamicColumn::$enum_name(col) = dynamic_column {
Some(col)
} else {
None
}
}
}
impl From<$typ> for DynamicColumn {
fn from(typed_column: $typ) -> Self {
DynamicColumn::$enum_name(typed_column)
}
}
};
}
static_dynamic_conversions!(Column<bool>, Bool);
static_dynamic_conversions!(Column<u64>, U64);
static_dynamic_conversions!(Column<i64>, I64);
static_dynamic_conversions!(Column<f64>, F64);
static_dynamic_conversions!(Column<DateTime>, DateTime);
static_dynamic_conversions!(StrColumn, Str);
static_dynamic_conversions!(BytesColumn, Bytes);
static_dynamic_conversions!(Column<Ipv6Addr>, IpAddr);
#[derive(Clone)]
pub struct DynamicColumnHandle {
pub(crate) file_slice: FileSlice,
pub(crate) column_type: ColumnType,
}
impl DynamicColumnHandle {
// TODO rename load
pub fn open(&self) -> io::Result<DynamicColumn> {
let column_bytes: OwnedBytes = self.file_slice.read_bytes()?;
self.open_internal(column_bytes)
}
#[doc(hidden)]
pub fn file_slice(&self) -> &FileSlice {
&self.file_slice
}
/// Returns a `u64`-mapped view of the column, for column types that can be
/// mapped to `u64`: str and bytes columns yield their term-ordinal column,
/// while u64, i64, f64, and datetime columns yield their raw `u64`
/// representation.
///
/// Bool and ip address columns return `None`.
pub fn open_u64_lenient(&self) -> io::Result<Option<Column<u64>>> {
let column_bytes = self.file_slice.read_bytes()?;
match self.column_type {
ColumnType::Str | ColumnType::Bytes => {
let column: BytesColumn = crate::column::open_column_bytes(column_bytes)?;
Ok(Some(column.term_ord_column))
}
ColumnType::Bool => Ok(None),
ColumnType::IpAddr => Ok(None),
ColumnType::I64 | ColumnType::U64 | ColumnType::F64 | ColumnType::DateTime => {
let column = crate::column::open_column_u64::<u64>(column_bytes)?;
Ok(Some(column))
}
}
}
fn open_internal(&self, column_bytes: OwnedBytes) -> io::Result<DynamicColumn> {
let dynamic_column: DynamicColumn = match self.column_type {
ColumnType::Bytes => crate::column::open_column_bytes(column_bytes)?.into(),
ColumnType::Str => crate::column::open_column_str(column_bytes)?.into(),
ColumnType::I64 => crate::column::open_column_u64::<i64>(column_bytes)?.into(),
ColumnType::U64 => crate::column::open_column_u64::<u64>(column_bytes)?.into(),
ColumnType::F64 => crate::column::open_column_u64::<f64>(column_bytes)?.into(),
ColumnType::Bool => crate::column::open_column_u64::<bool>(column_bytes)?.into(),
ColumnType::IpAddr => crate::column::open_column_u128::<Ipv6Addr>(column_bytes)?.into(),
ColumnType::DateTime => {
crate::column::open_column_u64::<DateTime>(column_bytes)?.into()
}
};
Ok(dynamic_column)
}
pub fn num_bytes(&self) -> usize {
self.file_slice.len()
}
pub fn column_type(&self) -> ColumnType {
self.column_type
}
}

columnar/src/iterable.rs Normal file

@@ -0,0 +1,19 @@
use std::ops::Range;
pub trait Iterable<T = u64> {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = T> + '_>;
}
impl<'a, T: Copy> Iterable<T> for &'a [T] {
fn boxed_iter(&self) -> Box<dyn Iterator<Item = T> + '_> {
Box::new(self.iter().copied())
}
}
impl<T: Copy> Iterable<T> for Range<T>
where Range<T>: Iterator<Item = T>
{
fn boxed_iter(&self) -> Box<dyn Iterator<Item = T> + '_> {
Box::new(self.clone())
}
}
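#[cfg(test)]
mod tests {
    use super::Iterable;

    // A minimal usage sketch (added for illustration, not part of the
    // original file): slices and ranges can both be consumed through the
    // same `Iterable` abstraction.
    #[test]
    fn test_iterable_sketch() {
        let slice: &[u64] = &[1u64, 2, 3];
        let from_slice: Vec<u64> = slice.boxed_iter().collect();
        assert_eq!(from_slice, vec![1, 2, 3]);
        let from_range: Vec<u64> = (0u64..3u64).boxed_iter().collect();
        assert_eq!(from_range, vec![0, 1, 2]);
    }
}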

columnar/src/lib.rs Normal file

@@ -0,0 +1,96 @@
#![cfg_attr(all(feature = "unstable", test), feature(test))]
#[cfg(test)]
#[macro_use]
extern crate more_asserts;
#[cfg(all(test, feature = "unstable"))]
extern crate test;
use std::io;
mod column;
mod column_index;
pub mod column_values;
mod columnar;
mod dictionary;
mod dynamic_column;
mod iterable;
pub(crate) mod utils;
mod value;
pub use column::{BytesColumn, Column, StrColumn};
pub use column_index::ColumnIndex;
pub use column_values::{ColumnValues, MonotonicallyMappableToU128, MonotonicallyMappableToU64};
pub use columnar::{
merge_columnar, ColumnType, ColumnarReader, ColumnarWriter, HasAssociatedColumnType,
MergeRowOrder, ShuffleMergeOrder, StackMergeOrder,
};
use sstable::VoidSSTable;
pub use value::{NumericalType, NumericalValue};
pub use self::dynamic_column::{DynamicColumn, DynamicColumnHandle};
pub type RowId = u32;
pub type DocId = u32;
#[derive(Clone, Copy)]
pub struct RowAddr {
pub segment_ord: u32,
pub row_id: RowId,
}
pub use sstable::Dictionary;
pub type Streamer<'a> = sstable::Streamer<'a, VoidSSTable>;
pub use common::DateTime;
#[derive(Copy, Clone, Debug)]
pub struct InvalidData;
impl From<InvalidData> for io::Error {
fn from(_: InvalidData) -> Self {
io::Error::new(io::ErrorKind::InvalidData, "Invalid data")
}
}
/// Enum describing the number of values that can exist per document
/// (or per row if you will).
///
/// The cardinality must fit on 2 bits.
#[derive(Clone, Copy, Hash, Default, Debug, PartialEq, Eq, PartialOrd, Ord)]
#[repr(u8)]
pub enum Cardinality {
/// All documents contain exactly one value.
/// `Full` is the default for auto-detecting the Cardinality, since it is the most strict.
#[default]
Full = 0,
/// All documents contain at most one value.
Optional = 1,
/// All documents may contain any number of values.
Multivalued = 2,
}
impl Cardinality {
pub fn is_optional(&self) -> bool {
matches!(self, Cardinality::Optional)
}
pub fn is_multivalue(&self) -> bool {
matches!(self, Cardinality::Multivalued)
}
pub(crate) fn to_code(self) -> u8 {
self as u8
}
pub(crate) fn try_from_code(code: u8) -> Result<Cardinality, InvalidData> {
match code {
0 => Ok(Cardinality::Full),
1 => Ok(Cardinality::Optional),
2 => Ok(Cardinality::Multivalued),
_ => Err(InvalidData),
}
}
}
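// Illustration (added, not in the original source): cardinality codes
// round-trip through `to_code` / `try_from_code`, e.g.
// `Cardinality::try_from_code(1).unwrap() == Cardinality::Optional`.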
#[cfg(test)]
mod tests;

columnar/src/tests.rs Normal file

@@ -0,0 +1,212 @@
use std::net::Ipv6Addr;
use crate::column_values::MonotonicallyMappableToU128;
use crate::columnar::ColumnType;
use crate::dynamic_column::{DynamicColumn, DynamicColumnHandle};
use crate::value::NumericalValue;
use crate::{Cardinality, ColumnarReader, ColumnarWriter};
#[test]
fn test_dataframe_writer_str() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_str(1u32, "my_string", "hello");
dataframe_writer.record_str(3u32, "my_string", "helloeee");
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(5, None, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<DynamicColumnHandle> = columnar.read_columns("my_string").unwrap();
assert_eq!(cols.len(), 1);
assert_eq!(cols[0].num_bytes(), 158);
}
#[test]
fn test_dataframe_writer_bytes() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_bytes(1u32, "my_string", b"hello");
dataframe_writer.record_bytes(3u32, "my_string", b"helloeee");
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(5, None, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<DynamicColumnHandle> = columnar.read_columns("my_string").unwrap();
assert_eq!(cols.len(), 1);
assert_eq!(cols[0].num_bytes(), 158);
}
#[test]
fn test_dataframe_writer_bool() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_bool(1u32, "bool.value", false);
dataframe_writer.record_bool(3u32, "bool.value", true);
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(5, None, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<DynamicColumnHandle> = columnar.read_columns("bool.value").unwrap();
assert_eq!(cols.len(), 1);
assert_eq!(cols[0].num_bytes(), 22);
assert_eq!(cols[0].column_type(), ColumnType::Bool);
let dyn_bool_col = cols[0].open().unwrap();
let DynamicColumn::Bool(bool_col) = dyn_bool_col else { panic!(); };
let vals: Vec<Option<bool>> = (0..5).map(|row_id| bool_col.first(row_id)).collect();
assert_eq!(&vals, &[None, Some(false), None, Some(true), None,]);
}
#[test]
fn test_dataframe_writer_u64_multivalued() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_numerical(2u32, "divisor", 2u64);
dataframe_writer.record_numerical(3u32, "divisor", 3u64);
dataframe_writer.record_numerical(4u32, "divisor", 2u64);
dataframe_writer.record_numerical(5u32, "divisor", 5u64);
dataframe_writer.record_numerical(6u32, "divisor", 2u64);
dataframe_writer.record_numerical(6u32, "divisor", 3u64);
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(7, None, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<DynamicColumnHandle> = columnar.read_columns("divisor").unwrap();
assert_eq!(cols.len(), 1);
assert_eq!(cols[0].num_bytes(), 29);
let dyn_i64_col = cols[0].open().unwrap();
let DynamicColumn::I64(divisor_col) = dyn_i64_col else { panic!(); };
assert_eq!(
divisor_col.get_cardinality(),
crate::Cardinality::Multivalued
);
assert_eq!(divisor_col.num_docs(), 7);
}
#[test]
fn test_dataframe_writer_ip_addr() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_ip_addr(1, "ip_addr", Ipv6Addr::from_u128(1001));
dataframe_writer.record_ip_addr(3, "ip_addr", Ipv6Addr::from_u128(1050));
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(5, None, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<DynamicColumnHandle> = columnar.read_columns("ip_addr").unwrap();
assert_eq!(cols.len(), 1);
assert_eq!(cols[0].num_bytes(), 42);
assert_eq!(cols[0].column_type(), ColumnType::IpAddr);
let dyn_bool_col = cols[0].open().unwrap();
let DynamicColumn::IpAddr(ip_col) = dyn_bool_col else { panic!(); };
let vals: Vec<Option<Ipv6Addr>> = (0..5).map(|row_id| ip_col.first(row_id)).collect();
assert_eq!(
&vals,
&[
None,
Some(Ipv6Addr::from_u128(1001)),
None,
Some(Ipv6Addr::from_u128(1050)),
None,
]
);
}
#[test]
fn test_dataframe_writer_numerical() {
let mut dataframe_writer = ColumnarWriter::default();
dataframe_writer.record_numerical(1u32, "srical.value", NumericalValue::U64(12u64));
dataframe_writer.record_numerical(2u32, "srical.value", NumericalValue::U64(13u64));
dataframe_writer.record_numerical(4u32, "srical.value", NumericalValue::U64(15u64));
let mut buffer: Vec<u8> = Vec::new();
dataframe_writer.serialize(6, None, &mut buffer).unwrap();
let columnar = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar.num_columns(), 1);
let cols: Vec<DynamicColumnHandle> = columnar.read_columns("srical.value").unwrap();
assert_eq!(cols.len(), 1);
// Right now these 33 bytes are roughly spent as follows:
//
// - header: 14 bytes
// - vals: 8 bytes //< due to padding? could have been 1 byte.
// - null footer: 6 bytes
assert_eq!(cols[0].num_bytes(), 33);
let column = cols[0].open().unwrap();
let DynamicColumn::I64(column_i64) = column else { panic!(); };
assert_eq!(column_i64.idx.get_cardinality(), Cardinality::Optional);
assert_eq!(column_i64.first(0), None);
assert_eq!(column_i64.first(1), Some(12i64));
assert_eq!(column_i64.first(2), Some(13i64));
assert_eq!(column_i64.first(3), None);
assert_eq!(column_i64.first(4), Some(15i64));
assert_eq!(column_i64.first(5), None);
assert_eq!(column_i64.first(6), None); //< we can change the spec for that one.
}
#[test]
fn test_dictionary_encoded_str() {
let mut buffer = Vec::new();
let mut columnar_writer = ColumnarWriter::default();
columnar_writer.record_str(1, "my.column", "a");
columnar_writer.record_str(3, "my.column", "c");
columnar_writer.record_str(3, "my.column2", "different_column!");
columnar_writer.record_str(4, "my.column", "b");
columnar_writer.serialize(5, None, &mut buffer).unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_columns(), 2);
let col_handles = columnar_reader.read_columns("my.column").unwrap();
assert_eq!(col_handles.len(), 1);
let DynamicColumn::Str(str_col) = col_handles[0].open().unwrap() else { panic!(); };
let index: Vec<Option<u64>> = (0..5).map(|row_id| str_col.ords().first(row_id)).collect();
assert_eq!(index, &[None, Some(0), None, Some(2), Some(1)]);
assert_eq!(str_col.num_rows(), 5);
let mut term_buffer = String::new();
let term_ords = str_col.ords();
assert_eq!(term_ords.first(0), None);
assert_eq!(term_ords.first(1), Some(0));
str_col.ord_to_str(0u64, &mut term_buffer).unwrap();
assert_eq!(term_buffer, "a");
assert_eq!(term_ords.first(2), None);
assert_eq!(term_ords.first(3), Some(2));
str_col.ord_to_str(2u64, &mut term_buffer).unwrap();
assert_eq!(term_buffer, "c");
assert_eq!(term_ords.first(4), Some(1));
str_col.ord_to_str(1u64, &mut term_buffer).unwrap();
assert_eq!(term_buffer, "b");
}
#[test]
fn test_dictionary_encoded_bytes() {
let mut buffer = Vec::new();
let mut columnar_writer = ColumnarWriter::default();
columnar_writer.record_bytes(1, "my.column", b"a");
columnar_writer.record_bytes(3, "my.column", b"c");
columnar_writer.record_bytes(3, "my.column2", b"different_column!");
columnar_writer.record_bytes(4, "my.column", b"b");
columnar_writer.serialize(5, None, &mut buffer).unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
assert_eq!(columnar_reader.num_columns(), 2);
let col_handles = columnar_reader.read_columns("my.column").unwrap();
assert_eq!(col_handles.len(), 1);
let DynamicColumn::Bytes(bytes_col) = col_handles[0].open().unwrap() else { panic!(); };
let index: Vec<Option<u64>> = (0..5)
.map(|row_id| bytes_col.ords().first(row_id))
.collect();
assert_eq!(index, &[None, Some(0), None, Some(2), Some(1)]);
assert_eq!(bytes_col.num_rows(), 5);
let mut term_buffer = Vec::new();
let term_ords = bytes_col.ords();
assert_eq!(term_ords.first(0), None);
assert_eq!(term_ords.first(1), Some(0));
bytes_col
.dictionary
.ord_to_term(0u64, &mut term_buffer)
.unwrap();
assert_eq!(term_buffer, b"a");
assert_eq!(term_ords.first(2), None);
assert_eq!(term_ords.first(3), Some(2));
bytes_col
.dictionary
.ord_to_term(2u64, &mut term_buffer)
.unwrap();
assert_eq!(term_buffer, b"c");
assert_eq!(term_ords.first(4), Some(1));
bytes_col
.dictionary
.ord_to_term(1u64, &mut term_buffer)
.unwrap();
assert_eq!(term_buffer, b"b");
}

columnar/src/utils.rs Normal file

@@ -0,0 +1,76 @@
const fn compute_mask(num_bits: u8) -> u8 {
if num_bits == 8 {
u8::MAX
} else {
(1u8 << num_bits) - 1
}
}
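/// Returns the bits of `code` at positions `START..END` (`END` excluded),
/// shifted down to the lowest bits: e.g. `select_bits::<1, 4>(8u8) == 4u8`.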
#[inline(always)]
#[must_use]
pub(crate) fn select_bits<const START: u8, const END: u8>(code: u8) -> u8 {
assert!(START <= END);
assert!(END <= 8);
let num_bits: u8 = END - START;
let mask: u8 = compute_mask(num_bits);
(code >> START) & mask
}
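/// Places the `END - START` low bits of `code` at bit positions `START..END`:
/// e.g. `place_bits::<2, 3>(1u8) == 4u8`. Panics if `code` does not fit in
/// `END - START` bits.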
#[inline(always)]
#[must_use]
pub(crate) fn place_bits<const START: u8, const END: u8>(code: u8) -> u8 {
assert!(START <= END);
assert!(END <= 8);
let num_bits: u8 = END - START;
let mask: u8 = compute_mask(num_bits);
assert!(code <= mask);
code << START
}
/// Pops the first byte off a slice of bytes, advancing the slice.
#[inline(always)]
pub fn pop_first_byte(bytes: &mut &[u8]) -> Option<u8> {
if bytes.is_empty() {
return None;
}
let first_byte = bytes[0];
*bytes = &bytes[1..];
Some(first_byte)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_select_bits() {
assert_eq!(255u8, select_bits::<0, 8>(255u8));
assert_eq!(0u8, select_bits::<0, 0>(255u8));
assert_eq!(8u8, select_bits::<0, 4>(8u8));
assert_eq!(4u8, select_bits::<1, 4>(8u8));
assert_eq!(0u8, select_bits::<1, 3>(8u8));
}
#[test]
fn test_place_bits() {
assert_eq!(255u8, place_bits::<0, 8>(255u8));
assert_eq!(4u8, place_bits::<2, 3>(1u8));
assert_eq!(0u8, place_bits::<2, 2>(0u8));
}
#[test]
#[should_panic]
fn test_place_bits_overflows() {
let _ = place_bits::<1, 4>(8u8);
}
#[test]
fn test_pop_first_byte() {
let mut cursor: &[u8] = &b"abcd"[..];
assert_eq!(pop_first_byte(&mut cursor), Some(b'a'));
assert_eq!(pop_first_byte(&mut cursor), Some(b'b'));
assert_eq!(pop_first_byte(&mut cursor), Some(b'c'));
assert_eq!(pop_first_byte(&mut cursor), Some(b'd'));
assert_eq!(pop_first_byte(&mut cursor), None);
}
}

columnar/src/value.rs Normal file

@@ -0,0 +1,131 @@
use common::DateTime;
use crate::InvalidData;
#[derive(Copy, Clone, PartialEq, Debug)]
pub enum NumericalValue {
I64(i64),
U64(u64),
F64(f64),
}
impl NumericalValue {
pub fn numerical_type(&self) -> NumericalType {
match self {
NumericalValue::I64(_) => NumericalType::I64,
NumericalValue::U64(_) => NumericalType::U64,
NumericalValue::F64(_) => NumericalType::F64,
}
}
}
impl From<u64> for NumericalValue {
fn from(val: u64) -> NumericalValue {
NumericalValue::U64(val)
}
}
impl From<i64> for NumericalValue {
fn from(val: i64) -> Self {
NumericalValue::I64(val)
}
}
impl From<f64> for NumericalValue {
fn from(val: f64) -> Self {
NumericalValue::F64(val)
}
}
#[derive(Clone, Copy, Debug, Default, Hash, Eq, PartialEq)]
#[repr(u8)]
pub enum NumericalType {
#[default]
I64 = 0,
U64 = 1,
F64 = 2,
}
impl NumericalType {
pub fn to_code(self) -> u8 {
self as u8
}
pub fn try_from_code(code: u8) -> Result<NumericalType, InvalidData> {
match code {
0 => Ok(NumericalType::I64),
1 => Ok(NumericalType::U64),
2 => Ok(NumericalType::F64),
_ => Err(InvalidData),
}
}
}
/// We voluntarily avoid using `Into` here to keep this
/// implementation quirk as private as possible.
///
/// # Panics
/// This coercion trait panics if it is used
/// to convert a loose type to a stricter type: coercing an `F64` value
/// to `i64` or `u64` hits `unreachable!()`.
///
/// The order of strictness is somewhat arbitrary:
/// - i64
/// - u64
/// - f64.
pub(crate) trait Coerce {
fn coerce(numerical_value: NumericalValue) -> Self;
}
impl Coerce for i64 {
fn coerce(value: NumericalValue) -> Self {
match value {
NumericalValue::I64(val) => val,
NumericalValue::U64(val) => val as i64,
NumericalValue::F64(_) => unreachable!(),
}
}
}
impl Coerce for u64 {
fn coerce(value: NumericalValue) -> Self {
match value {
NumericalValue::I64(val) => val as u64,
NumericalValue::U64(val) => val,
NumericalValue::F64(_) => unreachable!(),
}
}
}
impl Coerce for f64 {
fn coerce(value: NumericalValue) -> Self {
match value {
NumericalValue::I64(val) => val as f64,
NumericalValue::U64(val) => val as f64,
NumericalValue::F64(val) => val,
}
}
}
impl Coerce for DateTime {
fn coerce(value: NumericalValue) -> Self {
let timestamp_micros = i64::coerce(value);
DateTime::from_timestamp_micros(timestamp_micros)
}
}
#[cfg(test)]
mod tests {
use super::NumericalType;
#[test]
fn test_numerical_type_code() {
let mut num_numerical_type = 0;
for code in u8::MIN..=u8::MAX {
if let Ok(numerical_type) = NumericalType::try_from_code(code) {
assert_eq!(numerical_type.to_code(), code);
num_numerical_type += 1;
}
}
assert_eq!(num_numerical_type, 3);
}
}


@@ -1,16 +1,22 @@
[package]
name = "tantivy-common"
version = "0.3.0"
version = "0.5.0"
authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"]
license = "MIT"
edition = "2021"
description = "common traits and utility functions used by multiple tantivy subcrates"
documentation = "https://docs.rs/tantivy_common/"
homepage = "https://github.com/quickwit-oss/tantivy"
repository = "https://github.com/quickwit-oss/tantivy"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
byteorder = "1.4.3"
ownedbytes = { version="0.3", path="../ownedbytes" }
ownedbytes = { version= "0.5", path="../ownedbytes" }
async-trait = "0.1"
time = { version = "0.3.10", features = ["serde-well-known"] }
serde = { version = "1.0.136", features = ["derive"] }
[dev-dependencies]
proptest = "1.0.0"


@@ -151,7 +151,7 @@ impl TinySet {
if self.is_empty() {
None
} else {
let lowest = self.0.trailing_zeros() as u32;
let lowest = self.0.trailing_zeros();
self.0 ^= TinySet::singleton(lowest).0;
Some(lowest)
}
@@ -259,11 +259,7 @@ impl BitSet {
// we do not check saturated els.
let higher = el / 64u32;
let lower = el % 64u32;
self.len += if self.tinysets[higher as usize].insert_mut(lower) {
1
} else {
0
};
self.len += u64::from(self.tinysets[higher as usize].insert_mut(lower));
}
/// Inserts an element in the `BitSet`
@@ -272,11 +268,7 @@ impl BitSet {
// we do not check saturated els.
let higher = el / 64u32;
let lower = el % 64u32;
self.len -= if self.tinysets[higher as usize].remove_mut(lower) {
1
} else {
0
};
self.len -= u64::from(self.tinysets[higher as usize].remove_mut(lower));
}
/// Returns true iff the elements is in the `BitSet`.
@@ -285,7 +277,7 @@ impl BitSet {
self.tinyset(el / 64u32).contains(el % 64)
}
/// Returns the first non-empty `TinySet` associated to a bucket lower
/// Returns the first non-empty `TinySet` associated with a bucket lower
/// or greater than bucket.
///
/// Reminder: the tiny set with the bucket `bucket`, represents the
@@ -429,7 +421,7 @@ mod tests {
bitset.serialize(&mut out).unwrap();
let bitset = ReadOnlyBitSet::open(OwnedBytes::new(out));
assert_eq!(bitset.len() as usize, i as usize);
assert_eq!(bitset.len(), i as usize);
}
}
@@ -440,7 +432,7 @@ mod tests {
bitset.serialize(&mut out).unwrap();
let bitset = ReadOnlyBitSet::open(OwnedBytes::new(out));
assert_eq!(bitset.len() as usize, 64);
assert_eq!(bitset.len(), 64);
}
#[test]

common/src/datetime.rs Normal file

@@ -0,0 +1,136 @@
use std::fmt;
use serde::{Deserialize, Serialize};
use time::format_description::well_known::Rfc3339;
use time::{OffsetDateTime, PrimitiveDateTime, UtcOffset};
/// DateTime Precision
#[derive(
Clone, Copy, Debug, Hash, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize, Default,
)]
#[serde(rename_all = "lowercase")]
pub enum DatePrecision {
/// Seconds precision
#[default]
Seconds,
/// Milliseconds precision.
Milliseconds,
/// Microseconds precision.
Microseconds,
}
/// A date/time value with microsecond precision.
///
/// This timestamp does not carry any explicit time zone information.
/// Users are responsible for applying the provided conversion
/// functions consistently. Internally the time zone is assumed
/// to be UTC, which is also used implicitly for JSON serialization.
///
/// All constructors and conversions are provided as explicit
/// functions and not by implementing any `From`/`Into` traits
/// to prevent unintended usage.
#[derive(Clone, Default, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct DateTime {
// Timestamp in microseconds.
pub(crate) timestamp_micros: i64,
}
impl DateTime {
/// Create new from UNIX timestamp in seconds
pub const fn from_timestamp_secs(seconds: i64) -> Self {
Self {
timestamp_micros: seconds * 1_000_000,
}
}
/// Create new from UNIX timestamp in milliseconds
pub const fn from_timestamp_millis(milliseconds: i64) -> Self {
Self {
timestamp_micros: milliseconds * 1_000,
}
}
/// Create new from UNIX timestamp in microseconds.
pub const fn from_timestamp_micros(microseconds: i64) -> Self {
Self {
timestamp_micros: microseconds,
}
}
/// Create new from `OffsetDateTime`
///
/// The given date/time is converted to UTC and the actual
/// time zone is discarded.
pub const fn from_utc(dt: OffsetDateTime) -> Self {
let timestamp_micros = dt.unix_timestamp() * 1_000_000 + dt.microsecond() as i64;
Self { timestamp_micros }
}
/// Create new from `PrimitiveDateTime`
///
/// Implicitly assumes that the given date/time is in UTC!
/// Otherwise the original value can only be reobtained with
/// [`Self::into_primitive()`].
pub fn from_primitive(dt: PrimitiveDateTime) -> Self {
Self::from_utc(dt.assume_utc())
}
/// Convert to UNIX timestamp in seconds.
pub const fn into_timestamp_secs(self) -> i64 {
self.timestamp_micros / 1_000_000
}
/// Convert to UNIX timestamp in milliseconds.
pub const fn into_timestamp_millis(self) -> i64 {
self.timestamp_micros / 1_000
}
/// Convert to UNIX timestamp in microseconds.
pub const fn into_timestamp_micros(self) -> i64 {
self.timestamp_micros
}
/// Convert to UTC `OffsetDateTime`
pub fn into_utc(self) -> OffsetDateTime {
let timestamp_nanos = self.timestamp_micros as i128 * 1000;
let utc_datetime = OffsetDateTime::from_unix_timestamp_nanos(timestamp_nanos)
.expect("valid UNIX timestamp");
debug_assert_eq!(UtcOffset::UTC, utc_datetime.offset());
utc_datetime
}
/// Convert to `OffsetDateTime` with the given time zone
pub fn into_offset(self, offset: UtcOffset) -> OffsetDateTime {
self.into_utc().to_offset(offset)
}
/// Convert to `PrimitiveDateTime` without any time zone
///
/// The value should have been constructed with [`Self::from_primitive()`].
/// Otherwise the time zone is implicitly assumed to be UTC.
pub fn into_primitive(self) -> PrimitiveDateTime {
let utc_datetime = self.into_utc();
// Discard the UTC time zone offset
debug_assert_eq!(UtcOffset::UTC, utc_datetime.offset());
PrimitiveDateTime::new(utc_datetime.date(), utc_datetime.time())
}
/// Truncates the microseconds value to the corresponding precision.
pub fn truncate(self, precision: DatePrecision) -> Self {
let truncated_timestamp_micros = match precision {
DatePrecision::Seconds => (self.timestamp_micros / 1_000_000) * 1_000_000,
DatePrecision::Milliseconds => (self.timestamp_micros / 1_000) * 1_000,
DatePrecision::Microseconds => self.timestamp_micros,
};
Self {
timestamp_micros: truncated_timestamp_micros,
}
}
}
impl fmt::Debug for DateTime {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let utc_rfc3339 = self.into_utc().format(&Rfc3339).map_err(|_| fmt::Error)?;
f.write_str(&utc_rfc3339)
}
}
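#[cfg(test)]
mod tests {
    use super::{DatePrecision, DateTime};

    // A minimal sketch (added for illustration, not part of the original
    // file): truncation simply zeroes out the sub-precision part of the
    // microsecond timestamp.
    #[test]
    fn test_truncate_sketch() {
        let dt = DateTime::from_timestamp_micros(1_234_567);
        assert_eq!(
            dt.truncate(DatePrecision::Milliseconds)
                .into_timestamp_micros(),
            1_234_000
        );
        assert_eq!(
            dt.truncate(DatePrecision::Seconds).into_timestamp_micros(),
            1_000_000
        );
    }
}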


@@ -1,23 +1,19 @@
use std::ops::{Deref, Range};
use std::sync::{Arc, Weak};
use std::ops::{Deref, Range, RangeBounds};
use std::sync::Arc;
use std::{fmt, io};
use async_trait::async_trait;
use common::HasLen;
use stable_deref_trait::StableDeref;
use ownedbytes::{OwnedBytes, StableDeref};
use crate::directory::OwnedBytes;
pub type ArcBytes = Arc<dyn Deref<Target = [u8]> + Send + Sync + 'static>;
pub type WeakArcBytes = Weak<dyn Deref<Target = [u8]> + Send + Sync + 'static>;
use crate::HasLen;
/// Objects that represent file sections in tantivy.
///
/// By contract, whatever happens to the directory file, as long as a FileHandle
/// is alive, the data associated with it cannot be altered or destroyed.
///
/// The underlying behavior is therefore specific to the `Directory` that created it.
/// Despite its name, a `FileSlice` may or may not directly map to an actual file
/// The underlying behavior is therefore specific to the `Directory` that
/// created it. Despite its name, a [`FileSlice`] may or may not directly map to an actual file
/// on the filesystem.
#[async_trait]
@@ -27,13 +23,12 @@ pub trait FileHandle: 'static + Send + Sync + HasLen + fmt::Debug {
/// This method may panic if the range requested is invalid.
fn read_bytes(&self, range: Range<usize>) -> io::Result<OwnedBytes>;
#[cfg(feature = "quickwit")]
#[doc(hidden)]
async fn read_bytes_async(
&self,
_byte_range: Range<usize>,
) -> crate::AsyncIoResult<OwnedBytes> {
Err(crate::error::AsyncIoError::AsyncUnsupported)
async fn read_bytes_async(&self, _byte_range: Range<usize>) -> io::Result<OwnedBytes> {
Err(io::Error::new(
io::ErrorKind::Unsupported,
"Async read is not supported.",
))
}
}
@@ -44,8 +39,7 @@ impl FileHandle for &'static [u8] {
Ok(OwnedBytes::new(bytes))
}
#[cfg(feature = "quickwit")]
async fn read_bytes_async(&self, byte_range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
async fn read_bytes_async(&self, byte_range: Range<usize>) -> io::Result<OwnedBytes> {
Ok(self.read_bytes(byte_range)?)
}
}
@@ -73,6 +67,34 @@ impl fmt::Debug for FileSlice {
}
}
/// Takes a range and a `RangeBounds` object, and returns
/// the `Range` that corresponds to the relative application of the
/// `RangeBounds` object to the original `Range`.
///
/// For instance, `combine_ranges(2..11, 5..=7)` returns `7..10`:
/// the sub-range that starts at the 5th element of `2..11`
/// and ends at its 7th element, included.
///
/// This function panics if the result would fall outside
/// of the bounds of the original range.
fn combine_ranges<R: RangeBounds<usize>>(orig_range: Range<usize>, rel_range: R) -> Range<usize> {
let start: usize = orig_range.start
+ match rel_range.start_bound().cloned() {
std::ops::Bound::Included(rel_start) => rel_start,
std::ops::Bound::Excluded(rel_start) => rel_start + 1,
std::ops::Bound::Unbounded => 0,
};
assert!(start <= orig_range.end);
let end: usize = match rel_range.end_bound().cloned() {
std::ops::Bound::Included(rel_end) => orig_range.start + rel_end + 1,
std::ops::Bound::Excluded(rel_end) => orig_range.start + rel_end,
std::ops::Bound::Unbounded => orig_range.end,
};
assert!(end >= start);
assert!(end <= orig_range.end);
start..end
}
impl FileSlice {
/// Wraps a FileHandle.
pub fn new(file_handle: Arc<dyn FileHandle>) -> Self {
@@ -96,11 +118,11 @@ impl FileSlice {
///
/// Panics if `byte_range.end` exceeds the filesize.
#[must_use]
pub fn slice(&self, byte_range: Range<usize>) -> FileSlice {
assert!(byte_range.end <= self.len());
#[inline]
pub fn slice<R: RangeBounds<usize>>(&self, byte_range: R) -> FileSlice {
FileSlice {
data: self.data.clone(),
range: self.range.start + byte_range.start..self.range.start + byte_range.end,
range: combine_ranges(self.range.clone(), byte_range),
}
}
@@ -120,9 +142,8 @@ impl FileSlice {
self.data.read_bytes(self.range.clone())
}
#[cfg(feature = "quickwit")]
#[doc(hidden)]
pub async fn read_bytes_async(&self) -> crate::AsyncIoResult<OwnedBytes> {
pub async fn read_bytes_async(&self) -> io::Result<OwnedBytes> {
self.data.read_bytes_async(self.range.clone()).await
}
@@ -140,12 +161,8 @@ impl FileSlice {
.read_bytes(self.range.start + range.start..self.range.start + range.end)
}
#[cfg(feature = "quickwit")]
#[doc(hidden)]
pub async fn read_bytes_slice_async(
&self,
byte_range: Range<usize>,
) -> crate::AsyncIoResult<OwnedBytes> {
pub async fn read_bytes_slice_async(&self, byte_range: Range<usize>) -> io::Result<OwnedBytes> {
assert!(
self.range.start + byte_range.end <= self.range.end,
"`to` exceeds the fileslice length"
@@ -207,8 +224,7 @@ impl FileHandle for FileSlice {
self.read_bytes_slice(range)
}
#[cfg(feature = "quickwit")]
async fn read_bytes_async(&self, byte_range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
async fn read_bytes_async(&self, byte_range: Range<usize>) -> io::Result<OwnedBytes> {
self.read_bytes_slice_async(byte_range).await
}
}
@@ -225,21 +241,20 @@ impl FileHandle for OwnedBytes {
Ok(self.slice(range))
}
#[cfg(feature = "quickwit")]
async fn read_bytes_async(&self, range: Range<usize>) -> crate::AsyncIoResult<OwnedBytes> {
let bytes = self.read_bytes(range)?;
Ok(bytes)
async fn read_bytes_async(&self, range: Range<usize>) -> io::Result<OwnedBytes> {
self.read_bytes(range)
}
}
#[cfg(test)]
mod tests {
use std::io;
use std::ops::Bound;
use std::sync::Arc;
use common::HasLen;
use super::{FileHandle, FileSlice};
use crate::file_slice::combine_ranges;
use crate::HasLen;
#[test]
fn test_file_slice() -> io::Result<()> {
@@ -310,4 +325,23 @@ mod tests {
b"bcd"
);
}
#[test]
fn test_combine_range() {
assert_eq!(combine_ranges(1..3, 0..1), 1..2);
assert_eq!(combine_ranges(1..3, 1..), 2..3);
assert_eq!(combine_ranges(1..4, ..2), 1..3);
assert_eq!(combine_ranges(3..10, 2..5), 5..8);
assert_eq!(combine_ranges(2..11, 5..=7), 7..10);
assert_eq!(
combine_ranges(2..11, (Bound::Excluded(5), Bound::Unbounded)),
8..11
);
}
#[test]
#[should_panic]
fn test_combine_range_panics() {
let _ = combine_ranges(3..5, 1..4);
}
}

common/src/group_by.rs (new file)

@@ -0,0 +1,166 @@
use std::cell::RefCell;
use std::iter::Peekable;
use std::rc::Rc;
pub trait GroupByIteratorExtended: Iterator {
/// Return an `Iterator` that groups iterator elements. Consecutive elements that map to the
/// same key are assigned to the same group.
///
/// The returned iterator's item is `(K, impl Iterator)`, where the inner
/// iterator yields the items of the group.
///
/// ```
/// use tantivy_common::GroupByIteratorExtended;
///
/// // group data by whether the values are non-negative.
/// let data: Vec<i32> = vec![1, 3, -2, -2, 1, 0, 1, 2];
/// // groups: |---->|------>|--------->|
///
/// let mut data_grouped = Vec::new();
/// // Note: group is an iterator
/// for (key, group) in data.into_iter().group_by(|val| *val >= 0) {
/// data_grouped.push((key, group.collect()));
/// }
/// assert_eq!(data_grouped, vec![(true, vec![1, 3]), (false, vec![-2, -2]), (true, vec![1, 0, 1, 2])]);
/// ```
fn group_by<K, F>(self, key: F) -> GroupByIterator<Self, F, K>
where
Self: Sized,
F: FnMut(&Self::Item) -> K,
K: PartialEq + Copy,
Self::Item: Copy,
{
GroupByIterator::new(self, key)
}
}
impl<I: Iterator> GroupByIteratorExtended for I {}
pub struct GroupByIterator<I, F, K: Copy>
where
I: Iterator,
F: FnMut(&I::Item) -> K,
{
// I really would like to avoid the Rc<RefCell>, but the iterator is shared between
// `GroupByIterator` and `GroupIterator`. In practice they are used consecutively:
// a `GroupIterator` is exhausted before `next` is called on the `GroupByIterator`
// again. I'm not sure this can be expressed with lifetimes, because we would need
// to enforce it at the usage site somehow.
//
// One potential solution would be to replace the iterator approach with an API
// that enforces this usage pattern.
inner: Rc<RefCell<GroupByShared<I, F, K>>>,
}
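// A minimal sketch of the usage pattern the comment above relies on: the outer
// iterator and its group iterators share a single `Peekable`, so each group
// should be drained before calling `next` on the outer `GroupByIterator` again.
#[test]
fn consume_groups_in_order_sketch() {
let mut outer = vec![1u32, 2, 11, 12].into_iter().group_by(|val| *val / 10);
let (key, group) = outer.next().unwrap();
// Drain the first group completely before advancing the outer iterator.
assert_eq!((key, group.collect::<Vec<_>>()), (0, vec![1, 2]));
let (key, group) = outer.next().unwrap();
assert_eq!((key, group.collect::<Vec<_>>()), (1, vec![11, 12]));
}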
struct GroupByShared<I, F, K: Copy>
where
I: Iterator,
F: FnMut(&I::Item) -> K,
{
iter: Peekable<I>,
group_by_fn: F,
}
impl<I, F, K> GroupByIterator<I, F, K>
where
I: Iterator,
F: FnMut(&I::Item) -> K,
K: Copy,
{
fn new(inner: I, group_by_fn: F) -> Self {
let inner = GroupByShared {
iter: inner.peekable(),
group_by_fn,
};
Self {
inner: Rc::new(RefCell::new(inner)),
}
}
}
impl<I, F, K> Iterator for GroupByIterator<I, F, K>
where
I: Iterator,
I::Item: Copy,
F: FnMut(&I::Item) -> K,
K: Copy,
{
type Item = (K, GroupIterator<I, F, K>);
fn next(&mut self) -> Option<Self::Item> {
let mut inner = self.inner.borrow_mut();
let value = *inner.iter.peek()?;
let key = (inner.group_by_fn)(&value);
let inner = self.inner.clone();
let group_iter = GroupIterator {
inner,
group_key: key,
};
Some((key, group_iter))
}
}
pub struct GroupIterator<I, F, K: Copy>
where
I: Iterator,
F: FnMut(&I::Item) -> K,
{
inner: Rc<RefCell<GroupByShared<I, F, K>>>,
group_key: K,
}
impl<I, F, K: PartialEq + Copy> Iterator for GroupIterator<I, F, K>
where
I: Iterator,
I::Item: Copy,
F: FnMut(&I::Item) -> K,
{
type Item = I::Item;
fn next(&mut self) -> Option<Self::Item> {
let mut inner = self.inner.borrow_mut();
// Peek to check whether the next value still belongs to this group.
let peek_val = *inner.iter.peek()?;
if (inner.group_by_fn)(&peek_val) == self.group_key {
inner.iter.next()
} else {
None
}
}
}
#[cfg(test)]
mod tests {
use super::*;
fn group_by_collect<I: Iterator<Item = u32>>(iter: I) -> Vec<(I::Item, Vec<I::Item>)> {
iter.group_by(|val| val / 10)
.map(|(el, iter)| (el, iter.collect::<Vec<_>>()))
.collect::<Vec<_>>()
}
#[test]
fn group_by_two_groups() {
let vals = vec![1u32, 4, 15];
let grouped_vals = group_by_collect(vals.into_iter());
assert_eq!(grouped_vals, vec![(0, vec![1, 4]), (1, vec![15])]);
}
#[test]
fn group_by_test_empty() {
let vals = vec![];
let grouped_vals = group_by_collect(vals.into_iter());
assert_eq!(grouped_vals, vec![]);
}
#[test]
fn group_by_three_groups() {
let vals = vec![1u32, 4, 15, 1];
let grouped_vals = group_by_collect(vals.into_iter());
assert_eq!(
grouped_vals,
vec![(0, vec![1, 4]), (1, vec![15]), (0, vec![1])]
);
}
}

common/src/lib.rs

@@ -2,14 +2,17 @@
use std::ops::Deref;
pub use byteorder::LittleEndian as Endianness;
mod bitset;
mod datetime;
pub mod file_slice;
mod group_by;
mod serialize;
mod vint;
mod writer;
pub use bitset::*;
pub use datetime::{DatePrecision, DateTime};
pub use group_by::GroupByIteratorExtended;
pub use ownedbytes::{OwnedBytes, StableDeref};
pub use serialize::{BinarySerializable, DeserializeFrom, FixedSize};
pub use vint::{
deserialize_vint_u128, read_u32_vint, read_u32_vint_no_advance, serialize_vint_u128,
@@ -104,6 +107,21 @@ pub fn u64_to_f64(val: u64) -> f64 {
})
}
/// Replaces every occurrence of a given byte in the `bytes` slice.
///
/// This function assumes that the needle is rarely present in the byte slice
/// and offers a fast path for the case where it is absent.
pub fn replace_in_place(needle: u8, replacement: u8, bytes: &mut [u8]) {
if !bytes.contains(&needle) {
return;
}
for b in bytes {
if *b == needle {
*b = replacement;
}
}
}
#[cfg(test)]
pub mod test {
@@ -168,4 +186,20 @@ pub mod test {
assert!(f64_to_u64(-2.0) < f64_to_u64(1.0));
assert!(f64_to_u64(-2.0) < f64_to_u64(-1.5));
}
#[test]
fn test_replace_in_place() {
let test_aux = |before_replacement: &[u8], expected: &[u8]| {
let mut bytes: Vec<u8> = before_replacement.to_vec();
super::replace_in_place(b'b', b'c', &mut bytes);
assert_eq!(&bytes[..], expected);
};
test_aux(b"", b"");
test_aux(b"b", b"c");
test_aux(b"baaa", b"caaa");
test_aux(b"aaab", b"aaac");
test_aux(b"aaabaa", b"aaacaa");
test_aux(b"aaaaaa", b"aaaaaa");
test_aux(b"bbbb", b"cccc");
}
}

common/src/serialize.rs

@@ -1,16 +1,39 @@
use std::io::{Read, Write};
use std::{fmt, io};
use byteorder::{ReadBytesExt, WriteBytesExt};
use crate::VInt;
use crate::{Endianness, VInt};
#[derive(Default)]
struct Counter(u64);
impl io::Write for Counter {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.0 += buf.len() as u64;
Ok(buf.len())
}
fn write_all(&mut self, buf: &[u8]) -> io::Result<()> {
self.0 += buf.len() as u64;
Ok(())
}
fn flush(&mut self) -> io::Result<()> {
Ok(())
}
}
/// Trait for a simple binary serialization.
pub trait BinarySerializable: fmt::Debug + Sized {
/// Serialize
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()>;
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()>;
/// Deserialize
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self>;
fn num_bytes(&self) -> u64 {
let mut counter = Counter::default();
self.serialize(&mut counter).unwrap();
counter.0
}
}
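// A minimal sketch of the new `num_bytes` helper: it streams the value through
// the zero-allocation `Counter` writer above instead of serializing into a
// temporary buffer.
#[test]
fn num_bytes_sketch() {
let vals: Vec<u32> = vec![1, 2, 3];
// 1 byte for the VInt length prefix + 3 * 4 bytes of little-endian u32s.
assert_eq!(vals.num_bytes(), 1 + 12);
}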
pub trait DeserializeFrom<T: BinarySerializable> {
@@ -34,7 +57,7 @@ pub trait FixedSize: BinarySerializable {
}
impl BinarySerializable for () {
fn serialize<W: Write>(&self, _: &mut W) -> io::Result<()> {
fn serialize<W: Write + ?Sized>(&self, _: &mut W) -> io::Result<()> {
Ok(())
}
fn deserialize<R: Read>(_: &mut R) -> io::Result<Self> {
@@ -47,7 +70,7 @@ impl FixedSize for () {
}
impl<T: BinarySerializable> BinarySerializable for Vec<T> {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
VInt(self.len() as u64).serialize(writer)?;
for it in self {
it.serialize(writer)?;
@@ -66,7 +89,7 @@ impl<T: BinarySerializable> BinarySerializable for Vec<T> {
}
impl<Left: BinarySerializable, Right: BinarySerializable> BinarySerializable for (Left, Right) {
fn serialize<W: Write>(&self, write: &mut W) -> io::Result<()> {
fn serialize<W: Write + ?Sized>(&self, write: &mut W) -> io::Result<()> {
self.0.serialize(write)?;
self.1.serialize(write)
}
@@ -81,12 +104,14 @@ impl<Left: BinarySerializable + FixedSize, Right: BinarySerializable + FixedSize
}
impl BinarySerializable for u32 {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_u32::<Endianness>(*self)
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&self.to_le_bytes())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<u32> {
reader.read_u32::<Endianness>()
let mut buf = [0u8; 4];
reader.read_exact(&mut buf)?;
Ok(u32::from_le_bytes(buf))
}
}
@@ -94,12 +119,30 @@ impl FixedSize for u32 {
const SIZE_IN_BYTES: usize = 4;
}
impl BinarySerializable for u16 {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&self.to_le_bytes())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<u16> {
let mut buf = [0u8; 2];
reader.read_exact(&mut buf)?;
Ok(Self::from_le_bytes(buf))
}
}
impl FixedSize for u16 {
const SIZE_IN_BYTES: usize = 2;
}
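// A minimal round-trip sketch for the new `u16` impl: values are written as
// little-endian bytes, like the other integer impls in this file.
#[test]
fn u16_roundtrip_sketch() -> io::Result<()> {
let mut buf: Vec<u8> = Vec::new();
300u16.serialize(&mut buf)?;
assert_eq!(buf, 300u16.to_le_bytes());
assert_eq!(buf.len(), u16::SIZE_IN_BYTES);
assert_eq!(u16::deserialize(&mut &buf[..])?, 300u16);
Ok(())
}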
impl BinarySerializable for u64 {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_u64::<Endianness>(*self)
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&self.to_le_bytes())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
reader.read_u64::<Endianness>()
let mut buf = [0u8; 8];
reader.read_exact(&mut buf)?;
Ok(Self::from_le_bytes(buf))
}
}
@@ -107,12 +150,29 @@ impl FixedSize for u64 {
const SIZE_IN_BYTES: usize = 8;
}
impl BinarySerializable for f32 {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_f32::<Endianness>(*self)
impl BinarySerializable for u128 {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&self.to_le_bytes())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
reader.read_f32::<Endianness>()
let mut buf = [0u8; 16];
reader.read_exact(&mut buf)?;
Ok(Self::from_le_bytes(buf))
}
}
impl FixedSize for u128 {
const SIZE_IN_BYTES: usize = 16;
}
impl BinarySerializable for f32 {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&self.to_le_bytes())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
let mut buf = [0u8; 4];
reader.read_exact(&mut buf)?;
Ok(Self::from_le_bytes(buf))
}
}
@@ -121,11 +181,13 @@ impl FixedSize for f32 {
}
impl BinarySerializable for i64 {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_i64::<Endianness>(*self)
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&self.to_le_bytes())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
reader.read_i64::<Endianness>()
let mut buf = [0u8; Self::SIZE_IN_BYTES];
reader.read_exact(&mut buf)?;
Ok(Self::from_le_bytes(buf))
}
}
@@ -134,11 +196,13 @@ impl FixedSize for i64 {
}
impl BinarySerializable for f64 {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_f64::<Endianness>(*self)
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&self.to_le_bytes())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
reader.read_f64::<Endianness>()
let mut buf = [0u8; Self::SIZE_IN_BYTES];
reader.read_exact(&mut buf)?;
Ok(Self::from_le_bytes(buf))
}
}
@@ -147,11 +211,13 @@ impl FixedSize for f64 {
}
impl BinarySerializable for u8 {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_u8(*self)
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&self.to_le_bytes())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<u8> {
reader.read_u8()
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
let mut buf = [0u8; Self::SIZE_IN_BYTES];
reader.read_exact(&mut buf)?;
Ok(Self::from_le_bytes(buf))
}
}
@@ -160,12 +226,11 @@ impl FixedSize for u8 {
}
impl BinarySerializable for bool {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
let val = if *self { 1 } else { 0 };
writer.write_u8(val)
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
(*self as u8).serialize(writer)
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<bool> {
let val = reader.read_u8()?;
let val = u8::deserialize(reader)?;
match val {
0 => Ok(false),
1 => Ok(true),
@@ -182,7 +247,7 @@ impl FixedSize for bool {
}
impl BinarySerializable for String {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
let data: &[u8] = self.as_bytes();
VInt(data.len() as u64).serialize(writer)?;
writer.write_all(data)

common/src/vint.rs

@@ -1,8 +1,6 @@
use std::io;
use std::io::{Read, Write};
use byteorder::{ByteOrder, LittleEndian};
use super::BinarySerializable;
/// Variable int serializes a u128 number
@@ -44,7 +42,7 @@ pub fn deserialize_vint_u128(data: &[u8]) -> io::Result<(u128, &[u8])> {
pub struct VIntU128(pub u128);
impl BinarySerializable for VIntU128 {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
let mut buffer = vec![];
serialize_vint_u128(self.0, &mut buffer);
writer.write_all(&buffer)
@@ -127,7 +125,7 @@ pub fn serialize_vint_u32(val: u32, buf: &mut [u8; 8]) -> &[u8] {
5,
),
};
LittleEndian::write_u64(&mut buf[..], res);
buf.copy_from_slice(&res.to_le_bytes());
&buf[0..num_bytes]
}
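// A minimal round-trip sketch: `serialize_vint_u32` returns the leading
// `num_bytes` of the scratch buffer, and `read_u32_vint` decodes them back,
// advancing the input slice.
#[test]
fn vint_u32_roundtrip_sketch() {
let mut buf = [0u8; 8];
let encoded: &[u8] = serialize_vint_u32(777, &mut buf);
let mut data: &[u8] = encoded;
assert_eq!(read_u32_vint(&mut data), 777);
// The input slice was advanced past the vint payload.
assert!(data.is_empty());
}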
@@ -157,7 +155,7 @@ fn vint_len(data: &[u8]) -> usize {
/// If the buffer does not start with a valid
/// vint payload.
pub fn read_u32_vint(data: &mut &[u8]) -> u32 {
let (result, vlen) = read_u32_vint_no_advance(*data);
let (result, vlen) = read_u32_vint_no_advance(data);
*data = &data[vlen..];
result
}
@@ -211,7 +209,7 @@ impl VInt {
}
impl BinarySerializable for VInt {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
let mut buffer = [0u8; 10];
let num_bytes = self.serialize_into(&mut buffer);
writer.write_all(&buffer[0..num_bytes])


@@ -50,7 +50,7 @@ to get tantivy to fit your use case:
*Example 1* You could for instance use hadoop to build a very large search index in a timely manner, copy all of the resulting segment files in the same directory and edit the `meta.json` to get a functional index.[^2]
*Example 2* You could also disable your merge policy and enforce daily segments. Removing data after one week can then be done very efficiently by just editing the `meta.json` and deleting the files associated to segment `D-7`.
*Example 2* You could also disable your merge policy and enforce daily segments. Removing data after one week can then be done very efficiently by just editing the `meta.json` and deleting the files associated with segment `D-7`.
## Merging

examples/aggregation.rs

@@ -1,130 +1,319 @@
// # Aggregation example
//
// This example shows how you can use built-in aggregations.
// We will use range buckets and compute the average in each bucket.
//
// We will use nested aggregations with buckets and metrics:
// - Range buckets and compute the average in each bucket.
// - Term aggregation and compute the min price in each bucket
// ---
use serde_json::Value;
use serde_json::{Deserializer, Value};
use tantivy::aggregation::agg_req::{
Aggregation, Aggregations, BucketAggregation, BucketAggregationType, MetricAggregation,
RangeAggregation,
};
use tantivy::aggregation::agg_result::AggregationResults;
use tantivy::aggregation::bucket::RangeAggregationRange;
use tantivy::aggregation::metric::AverageAggregation;
use tantivy::aggregation::AggregationCollector;
use tantivy::query::TermQuery;
use tantivy::schema::{self, Cardinality, IndexRecordOption, Schema, TextFieldIndexing};
use tantivy::{doc, Index, Term};
use tantivy::query::AllQuery;
use tantivy::schema::{self, IndexRecordOption, Schema, TextFieldIndexing, FAST};
use tantivy::Index;
fn main() -> tantivy::Result<()> {
// # Create Schema
//
// Let's create a schema for a footwear shop, with 4 fields: name, category, stock and
// price. The category, stock and price fields will be fast fields, as that's a
// requirement for aggregation queries.
//
let mut schema_builder = Schema::builder();
// In preparation for the `TermsAggregation`, the category field is configured with:
// - `set_fast`
// - the `raw` tokenizer
//
// The tokenizer is set to "raw" because the fast field uses the same dictionary as the
// inverted index. (This behaviour will change in tantivy 0.20, where the fast field will
// always be raw-tokenized, independently of the regular tokenization.)
//
let text_fieldtype = schema::TextOptions::default()
.set_indexing_options(
TextFieldIndexing::default().set_index_option(IndexRecordOption::WithFreqs),
TextFieldIndexing::default()
.set_index_option(IndexRecordOption::WithFreqs)
.set_tokenizer("raw"),
)
.set_fast()
.set_stored();
let text_field = schema_builder.add_text_field("text", text_fieldtype);
let score_fieldtype =
crate::schema::NumericOptions::default().set_fast(Cardinality::SingleValue);
let highscore_field = schema_builder.add_f64_field("highscore", score_fieldtype.clone());
let price_field = schema_builder.add_f64_field("price", score_fieldtype.clone());
schema_builder.add_text_field("category", text_fieldtype);
schema_builder.add_f64_field("stock", FAST);
schema_builder.add_f64_field("price", FAST);
let schema = schema_builder.build();
// # Indexing documents
//
// Let's index a bunch of documents for this example.
let index = Index::create_in_ram(schema);
let index = Index::create_in_ram(schema.clone());
let data = r#"{
"name": "Almond Toe Court Shoes, Patent Black",
"category": "Womens Footwear",
"price": 99.00,
"stock": 5
}
{
"name": "Suede Shoes, Blue",
"category": "Womens Footwear",
"price": 42.00,
"stock": 4
}
{
"name": "Leather Driver Saddle Loafers, Tan",
"category": "Mens Footwear",
"price": 34.00,
"stock": 12
}
{
"name": "Flip Flops, Red",
"category": "Mens Footwear",
"price": 19.00,
"stock": 6
}
{
"name": "Flip Flops, Blue",
"category": "Mens Footwear",
"price": 19.00,
"stock": 0
}
{
"name": "Gold Button Cardigan, Black",
"category": "Womens Casualwear",
"price": 167.00,
"stock": 6
}
{
"name": "Cotton Shorts, Medium Red",
"category": "Womens Casualwear",
"price": 30.00,
"stock": 5
}
{
"name": "Fine Stripe Short SleeveShirt, Grey",
"category": "Mens Casualwear",
"price": 49.99,
"stock": 9
}
{
"name": "Fine Stripe Short SleeveShirt, Green",
"category": "Mens Casualwear",
"price": 49.99,
"offer": 39.99,
"stock": 9
}
{
"name": "Sharkskin Waistcoat, Charcoal",
"category": "Mens Formalwear",
"price": 75.00,
"stock": 2
}
{
"name": "Lightweight Patch PocketBlazer, Deer",
"category": "Mens Formalwear",
"price": 175.50,
"stock": 1
}
{
"name": "Bird Print Dress, Black",
"category": "Womens Formalwear",
"price": 270.00,
"stock": 10
}
{
"name": "Mid Twist Cut-Out Dress, Pink",
"category": "Womens Formalwear",
"price": 540.00,
"stock": 5
}"#;
let stream = Deserializer::from_str(data).into_iter::<Value>();
let mut index_writer = index.writer(50_000_000)?;
// writing the segment
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 1f64,
price_field => 0f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 3f64,
price_field => 1f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 5f64,
price_field => 1f64,
))?;
index_writer.add_document(doc!(
text_field => "nohit",
highscore_field => 6f64,
price_field => 2f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 7f64,
price_field => 2f64,
))?;
index_writer.commit()?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 11f64,
price_field => 10f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 14f64,
price_field => 15f64,
))?;
index_writer.add_document(doc!(
text_field => "cool",
highscore_field => 15f64,
price_field => 20f64,
))?;
let mut num_indexed = 0;
for value in stream {
let doc = schema.parse_document(&serde_json::to_string(&value.unwrap())?)?;
index_writer.add_document(doc)?;
num_indexed += 1;
if num_indexed == 5 {
// Writing the first segment
index_writer.commit()?;
}
}
// Writing the second segment
index_writer.commit()?;
// We have two segments now. The `AggregationCollector` will run the aggregation on each
// segment and then merge the results into an `IntermediateAggregationResult`.
let reader = index.reader()?;
let text_field = reader.searcher().schema().get_field("text").unwrap();
let searcher = reader.searcher();
// ---
// # Aggregation Query
//
//
// We can construct the query by building the request structure or by deserializing from JSON.
// The JSON API is more stable and therefore recommended.
//
// ## Request 1
let term_query = TermQuery::new(
Term::from_field_text(text_field, "cool"),
IndexRecordOption::Basic,
);
let agg_req_str = r#"
{
"group_by_stock": {
"aggs": {
"average_price": { "avg": { "field": "price" } }
},
"range": {
"field": "stock",
"ranges": [
{ "key": "few", "to": 1.0 },
{ "key": "some", "from": 1.0, "to": 10.0 },
{ "key": "many", "from": 10.0 }
]
}
}
} "#;
let sub_agg_req_1: Aggregations = vec![(
"average_price".to_string(),
Aggregation::Metric(MetricAggregation::Average(
AverageAggregation::from_field_name("price".to_string()),
)),
)]
.into_iter()
.collect();
// In this aggregation we want to get the average price for different groups, depending on how
// many items are in stock. We define the custom ranges `few`, `some`, and `many` via the
// range aggregation.
// For every bucket we want the average price, so we create a nested metric aggregation on the
// range bucket aggregation. Only bucket aggregations support nested aggregations.
// ### Request JSON API
//
let agg_req_1: Aggregations = vec![(
"score_ranges".to_string(),
let agg_req: Aggregations = serde_json::from_str(agg_req_str)?;
let collector = AggregationCollector::from_aggs(agg_req, None);
let agg_res: AggregationResults = searcher.search(&AllQuery, &collector).unwrap();
let res2: Value = serde_json::to_value(agg_res)?;
// ### Request Rust API
//
// This is exactly the same request as above, but via the Rust structures.
//
let agg_req: Aggregations = vec![(
"group_by_stock".to_string(),
Aggregation::Bucket(BucketAggregation {
bucket_agg: BucketAggregationType::Range(RangeAggregation {
field: "highscore".to_string(),
field: "stock".to_string(),
ranges: vec![
(-1f64..9f64).into(),
(9f64..14f64).into(),
(14f64..20f64).into(),
RangeAggregationRange {
key: Some("few".into()),
from: None,
to: Some(1f64),
},
RangeAggregationRange {
key: Some("some".into()),
from: Some(1f64),
to: Some(10f64),
},
RangeAggregationRange {
key: Some("many".into()),
from: Some(10f64),
to: None,
},
],
..Default::default()
}),
sub_aggregation: sub_agg_req_1.clone(),
sub_aggregation: vec![(
"average_price".to_string(),
Aggregation::Metric(MetricAggregation::Average(
AverageAggregation::from_field_name("price".to_string()),
)),
)]
.into_iter()
.collect(),
}),
)]
.into_iter()
.collect();
let collector = AggregationCollector::from_aggs(agg_req_1, None);
let collector = AggregationCollector::from_aggs(agg_req, None);
// We use the `AllQuery` which will pass all documents to the AggregationCollector.
let agg_res: AggregationResults = searcher.search(&AllQuery, &collector).unwrap();
let searcher = reader.searcher();
let agg_res: AggregationResults = searcher.search(&term_query, &collector).unwrap();
let res1: Value = serde_json::to_value(agg_res)?;
let res: Value = serde_json::to_value(&agg_res)?;
println!("{}", serde_json::to_string_pretty(&res)?);
// ### Aggregation Result
//
// The resulting structure serializes to the same JSON format as Elasticsearch.
//
let expected_res = r#"
{
"group_by_stock":{
"buckets":[
{"average_price":{"value":19.0},"doc_count":1,"key":"few","to":1.0},
{"average_price":{"value":124.748},"doc_count":10,"from":1.0,"key":"some","to":10.0},
{"average_price":{"value":152.0},"doc_count":2,"from":10.0,"key":"many"}
]
}
}
"#;
let expected_json: Value = serde_json::from_str(expected_res)?;
assert_eq!(expected_json, res1);
assert_eq!(expected_json, res2);
// ### Request 2
//
// Now we are interested in the minimum price per category, so we create a bucket per
// category via `TermsAggregation`. We are interested in the highest minimum prices, and set
// the order of the buckets, `"order": { "min_price": "desc" }`, to sort by the metric of
// the sub-aggregation. (awesome)
//
let agg_req_str = r#"
{
"min_price_per_category": {
"aggs": {
"min_price": { "min": { "field": "price" } }
},
"terms": {
"field": "category",
"min_doc_count": 1,
"order": { "min_price": "desc" }
}
}
} "#;
let agg_req: Aggregations = serde_json::from_str(agg_req_str)?;
let collector = AggregationCollector::from_aggs(agg_req, None);
let agg_res: AggregationResults = searcher.search(&AllQuery, &collector).unwrap();
let res: Value = serde_json::to_value(agg_res)?;
// Minimum price per category, sorted by minimum price descending
//
// As you can see, the starting prices for `Formalwear` are higher than for `Casualwear`.
//
let expected_res = r#"
{
"min_price_per_category": {
"buckets": [
{ "doc_count": 2, "key": "Womens Formalwear", "min_price": { "value": 270.0 } },
{ "doc_count": 2, "key": "Mens Formalwear", "min_price": { "value": 75.0 } },
{ "doc_count": 2, "key": "Mens Casualwear", "min_price": { "value": 49.99 } },
{ "doc_count": 2, "key": "Womens Footwear", "min_price": { "value": 42.0 } },
{ "doc_count": 2, "key": "Womens Casualwear", "min_price": { "value": 30.0 } },
{ "doc_count": 3, "key": "Mens Footwear", "min_price": { "value": 19.0 } }
],
"sum_other_doc_count": 0
}
}
"#;
let expected_json: Value = serde_json::from_str(expected_res)?;
assert_eq!(expected_json, res);
Ok(())
}

examples/custom_collector.rs

@@ -7,14 +7,12 @@
// Of course, you can have a look at the tantivy's built-in collectors
// such as the `CountCollector` for more examples.
use std::sync::Arc;
use fastfield_codecs::Column;
use columnar::Column;
// ---
// Importing tantivy...
use tantivy::collector::{Collector, SegmentCollector};
use tantivy::query::QueryParser;
use tantivy::schema::{Field, Schema, FAST, INDEXED, TEXT};
use tantivy::schema::{Schema, FAST, INDEXED, TEXT};
use tantivy::{doc, Index, Score, SegmentReader};
#[derive(Default)]
@@ -52,11 +50,11 @@ impl Stats {
}
struct StatsCollector {
field: Field,
field: String,
}
impl StatsCollector {
fn with_field(field: Field) -> StatsCollector {
fn with_field(field: String) -> StatsCollector {
StatsCollector { field }
}
}
@@ -73,7 +71,7 @@ impl Collector for StatsCollector {
_segment_local_id: u32,
segment_reader: &SegmentReader,
) -> tantivy::Result<StatsSegmentCollector> {
let fast_field_reader = segment_reader.fast_fields().u64(self.field)?;
let fast_field_reader = segment_reader.fast_fields().u64(&self.field)?;
Ok(StatsSegmentCollector {
fast_field_reader,
stats: Stats::default(),
@@ -97,7 +95,7 @@ impl Collector for StatsCollector {
}
struct StatsSegmentCollector {
fast_field_reader: Arc<dyn Column<u64>>,
fast_field_reader: Column,
stats: Stats,
}
@@ -105,10 +103,14 @@ impl SegmentCollector for StatsSegmentCollector {
type Fruit = Option<Stats>;
fn collect(&mut self, doc: u32, _score: Score) {
let value = self.fast_field_reader.get_val(doc as u64) as f64;
self.stats.count += 1;
self.stats.sum += value;
self.stats.squared_sum += value * value;
// Since we know the field is single-valued, we could call `first_or_default_col` on the
// column and fetch values directly instead of iterating.
for value in self.fast_field_reader.values_for_doc(doc) {
let value = value as f64;
self.stats.count += 1;
self.stats.sum += value;
self.stats.squared_sum += value * value;
}
}
fn harvest(self) -> <Self as SegmentCollector>::Fruit {
@@ -169,9 +171,11 @@ fn main() -> tantivy::Result<()> {
let searcher = reader.searcher();
let query_parser = QueryParser::for_index(&index, vec![product_name, product_description]);
// here we want to get a hit on the 'ken' in Frankenstein
// here we want to search for `broom` and use `StatsCollector` on the hits.
let query = query_parser.parse_query("broom")?;
if let Some(stats) = searcher.search(&query, &StatsCollector::with_field(price))? {
if let Some(stats) =
searcher.search(&query, &StatsCollector::with_field("price".to_string()))?
{
println!("count: {}", stats.count());
println!("mean: {}", stats.mean());
println!("standard deviation: {}", stats.standard_deviation());

examples/custom_tokenizer.rs

@@ -1,7 +1,7 @@
// # Defining a tokenizer pipeline
//
// In this example, we'll see how to define a tokenizer pipeline
// by aligning a bunch of `TokenFilter`.
// In this example, we'll see how to define a tokenizer
// by creating a custom `NgramTokenizer`.
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::*;

examples/date_time_field.rs

@@ -4,7 +4,7 @@
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::{Cardinality, DateOptions, Schema, Value, INDEXED, STORED, STRING};
use tantivy::schema::{DateOptions, Schema, Value, INDEXED, STORED, STRING};
use tantivy::Index;
fn main() -> tantivy::Result<()> {
@@ -12,8 +12,9 @@ fn main() -> tantivy::Result<()> {
let mut schema_builder = Schema::builder();
let opts = DateOptions::from(INDEXED)
.set_stored()
.set_fast(Cardinality::SingleValue)
.set_fast()
.set_precision(tantivy::DatePrecision::Seconds);
// Add `occurred_at` date field type
let occurred_at = schema_builder.add_date_field("occurred_at", opts);
let event_type = schema_builder.add_text_field("event", STRING | STORED);
let schema = schema_builder.build();
@@ -22,6 +23,7 @@ fn main() -> tantivy::Result<()> {
let index = Index::create_in_ram(schema.clone());
let mut index_writer = index.writer(50_000_000)?;
// The dates are passed as strings in the RFC 3339 format.
let doc = schema.parse_document(
r#"{
"occurred_at": "2022-06-22T12:53:50.53Z",
@@ -41,14 +43,16 @@ fn main() -> tantivy::Result<()> {
let reader = index.reader()?;
let searcher = reader.searcher();
// # Default fields: event_type
// # Search
let query_parser = QueryParser::for_index(&index, vec![event_type]);
{
let query = query_parser.parse_query("event:comment")?;
// Simple exact search on the date
let query = query_parser.parse_query("occurred_at:\"2022-06-22T12:53:50.53Z\"")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(5))?;
assert_eq!(count_docs.len(), 1);
}
{
// Range query on the date field
let query = query_parser
.parse_query(r#"occurred_at:[2022-06-22T12:58:00Z TO 2022-06-23T00:00:00Z}"#)?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(4))?;

examples/deleting_updating_documents.rs

@@ -113,7 +113,7 @@ fn main() -> tantivy::Result<()> {
// on its id.
//
// Note that `tantivy` does nothing to enforce the idea that
// there is only one document associated to this id.
// there is only one document associated with this id.
//
// Also you might have noticed that we apply the delete before
// having committed. This does not matter really...

examples/faceted_search.rs

@@ -1,15 +1,17 @@
// # Basic Example
// # Faceted Search
//
// This example covers the basic functionalities of
// This example covers the faceted search functionalities of
// tantivy.
//
// We will:
// - define our schema
// = create an index in a directory
// - index few documents in our index
// - search for the best document matchings "sea whale"
// - retrieve the best document original content.
// - define a text field "name" in our schema
// - define a facet field "classification" in our schema
// - create an index in memory
// - index a few documents with their respective facets in our index
// - search for and count the documents whose classification starts with the facet "/Felidae"
// - search for the facet "/Felidae/Pantherinae" and count the documents whose
//   classification includes it
//
// ---
// Importing tantivy...
use tantivy::collector::FacetCollector;
@@ -21,7 +23,7 @@ fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the sake of this example
let mut schema_builder = Schema::builder();
let name = schema_builder.add_text_field("felin_name", TEXT | STORED);
let name = schema_builder.add_text_field("name", TEXT | STORED);
// this is our faceted field: its scientific classification
let classification = schema_builder.add_facet_field("classification", FacetOptions::default());
@@ -69,7 +71,7 @@ fn main() -> tantivy::Result<()> {
let reader = index.reader()?;
let searcher = reader.searcher();
{
let mut facet_collector = FacetCollector::for_field(classification);
let mut facet_collector = FacetCollector::for_field("classification");
facet_collector.add_facet("/Felidae");
let facet_counts = searcher.search(&AllQuery, &facet_collector)?;
// This lists all of the facet counts, right below "/Felidae".
@@ -95,7 +97,7 @@ fn main() -> tantivy::Result<()> {
let facet = Facet::from("/Felidae/Pantherinae");
let facet_term = Term::from_facet(classification, &facet);
let facet_term_query = TermQuery::new(facet_term, IndexRecordOption::Basic);
let mut facet_collector = FacetCollector::for_field(classification);
let mut facet_collector = FacetCollector::for_field("classification");
facet_collector.add_facet("/Felidae/Pantherinae");
let facet_counts = searcher.search(&facet_term_query, &facet_collector)?;
let facets: Vec<(&Facet, u64)> = facet_counts.get("/Felidae/Pantherinae").collect();

examples/faceted_search_with_tweaked_score.rs

@@ -1,3 +1,12 @@
// # Faceted Search With Tweak Score
//
// This example covers the faceted search functionalities of
// tantivy.
//
// We will:
// - define a text field "name" in our schema
// - define a facet field "classification" in our schema
use std::collections::HashSet;
use tantivy::collector::TopDocs;
@@ -55,8 +64,9 @@ fn main() -> tantivy::Result<()> {
.collect(),
);
let top_docs_by_custom_score =
// Call TopDocs with a custom tweak score
TopDocs::with_limit(2).tweak_score(move |segment_reader: &SegmentReader| {
let ingredient_reader = segment_reader.facet_reader(ingredient).unwrap();
let ingredient_reader = segment_reader.facet_reader("ingredient").unwrap();
let facet_dict = ingredient_reader.facet_dict();
let query_ords: HashSet<u64> = facets
@@ -64,12 +74,10 @@ fn main() -> tantivy::Result<()> {
.filter_map(|key| facet_dict.term_ord(key.encoded_str()).unwrap())
.collect();
let mut facet_ords_buffer: Vec<u64> = Vec::with_capacity(20);
move |doc: DocId, original_score: Score| {
ingredient_reader.facet_ords(doc, &mut facet_ords_buffer);
let missing_ingredients = facet_ords_buffer
.iter()
// Update the original score with a tweaked score
let missing_ingredients = ingredient_reader
.facet_ords(doc)
.filter(|ord| !query_ords.contains(ord))
.count();
let tweak = 1.0 / 4_f32.powi(missing_ingredients as i32);

examples/fuzzy_search.rs (new file)

@@ -0,0 +1,167 @@
// # Fuzzy Search Example
//
// This example covers the basic functionalities of
// tantivy.
//
// We will:
// - define our schema
// - create an index in a directory
// - index a few documents into our index
// - search for the best documents matching a fuzzy query
// - retrieve the best document's original content.
// ---
// Importing tantivy...
use tantivy::collector::{Count, TopDocs};
use tantivy::query::FuzzyTermQuery;
use tantivy::schema::*;
use tantivy::{doc, Index, ReloadPolicy};
use tempfile::TempDir;
fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the
// sake of this example
let index_path = TempDir::new()?;
// # Defining the schema
//
// The Tantivy index requires a very strict schema.
// The schema declares which fields are in the index,
// and for each field, its type and "the way it should
// be indexed".
// First we need to define a schema ...
let mut schema_builder = Schema::builder();
// Our first field is title.
// We want full-text search for it, and we also want
// to be able to retrieve the document after the search.
//
// `TEXT | STORED` is some syntactic sugar to describe
// that.
//
// `TEXT` means the field should be tokenized and indexed,
// along with its term frequency and term positions.
//
// `STORED` means that the field will also be saved
// in a compressed, row-oriented key-value store.
// This store is useful for reconstructing the
// documents that were selected during the search phase.
let title = schema_builder.add_text_field("title", TEXT | STORED);
let schema = schema_builder.build();
// # Indexing documents
//
// Let's create a brand new index.
//
// This will actually just save a meta.json
// with our schema in the directory.
let index = Index::create_in_dir(&index_path, schema.clone())?;
// To insert a document we will need an index writer.
// There must be only one writer at a time.
// This single `IndexWriter` is already
// multithreaded.
//
// Here we give tantivy a budget of `50MB`.
// Using a bigger memory_arena for the indexer may increase
// throughput, but 50 MB is already plenty.
let mut index_writer = index.writer(50_000_000)?;
// Let's index our documents!
// We first need a handle on the title field.
// ### Adding documents
//
index_writer.add_document(doc!(
title => "The Name of the Wind",
))?;
index_writer.add_document(doc!(
title => "The Diary of Muadib",
))?;
index_writer.add_document(doc!(
title => "A Dairy Cow",
))?;
index_writer.add_document(doc!(
title => "The Diary of a Young Girl",
))?;
// ### Committing
//
// At this point our documents are not searchable.
//
//
// We need to call `.commit()` explicitly to force the
// `index_writer` to finish processing the documents in the queue,
// flush the current index to the disk, and advertise
// the existence of new documents.
//
// This call is blocking.
index_writer.commit()?;
// If `.commit()` returns correctly, then all of the
// documents that have been added are guaranteed to be
// persistently indexed.
//
// In the scenario of a crash or a power failure,
// tantivy behaves as if it has rolled back to its last
// commit.
// # Searching
//
// ### Searcher
//
// A reader is required first in order to search an index.
// It acts as a `Searcher` pool that reloads itself,
// depending on a `ReloadPolicy`.
//
// For a search server you will typically create one reader for the entire lifetime of your
// program, and acquire a new searcher for every single request.
//
// In the code below, we rely on the 'ON_COMMIT' policy: the reader
// will reload the index automatically after each commit.
let reader = index
.reader_builder()
.reload_policy(ReloadPolicy::OnCommit)
.try_into()?;
// We now need to acquire a searcher.
//
// A searcher points to a snapshotted, immutable version of the index.
//
// Some search experiences might require more than
// one query. Using the same searcher ensures that all of these queries will run on the
// same version of the index.
//
// Acquiring a `searcher` is very cheap.
//
// You should acquire a searcher every time you start processing a request and
// release it right after your query is finished.
let searcher = reader.searcher();
// ### FuzzyTermQuery
{
let term = Term::from_field_text(title, "Diary");
let query = FuzzyTermQuery::new(term, 2, true);
let (top_docs, count) = searcher
.search(&query, &(TopDocs::with_limit(5), Count))
.unwrap();
assert_eq!(count, 3);
assert_eq!(top_docs.len(), 3);
for (score, doc_address) in top_docs {
let retrieved_doc = searcher.doc(doc_address)?;
// Note that the score is not lower for the fuzzy hit.
// There's an issue open for that: https://github.com/quickwit-oss/tantivy/issues/563
println!("score {score:?} doc {}", schema.to_json(&retrieved_doc));
// score 1.0 doc {"title":["The Diary of Muadib"]}
//
// score 1.0 doc {"title":["The Diary of a Young Girl"]}
//
// score 1.0 doc {"title":["A Dairy Cow"]}
}
}
Ok(())
}

Some files were not shown because too many files have changed in this diff.