Mirror of https://github.com/quickwit-oss/tantivy.git, synced 2026-03-15 09:40:41 +00:00.

Comparing 1 commit (dc0aa3d734) between branches `composite-...` and `clippy-and`.
@@ -1,125 +0,0 @@
---
name: rationalize-deps
description: Analyze Cargo.toml dependencies and attempt to remove unused features to reduce compile times and binary size
---

# Rationalize Dependencies

This skill analyzes Cargo.toml dependencies to identify and remove unused features.

## Overview

Many crates enable features by default that may not be needed. This skill:

1. Identifies dependencies with default features enabled
2. Tests if `default-features = false` works
3. Identifies which specific features are actually needed
4. Verifies compilation after changes

## Step 1: Identify the target

Ask the user which crate(s) to analyze:

- A specific crate name (e.g., "tokio", "serde")
- A specific workspace member (e.g., "quickwit-search")
- "all" to scan the entire workspace

## Step 2: Analyze current dependencies

For the workspace Cargo.toml (`quickwit/Cargo.toml`), list dependencies that:

- Do NOT have `default-features = false`
- Have default features that might be unnecessary

Run: `cargo tree -p <crate> -f "{p} {f}" --edges features` to see what features are actually used.
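For example, run against `serde` (the output below is hypothetical; the actual feature list depends on the workspace):

```bash
# Inspect which features of a dependency the workspace actually activates.
cargo tree -p serde -f "{p} {f}" --edges features
# Hypothetical output:
# serde v1.0.203 default,derive,std
# └── serde_derive v1.0.203 default
```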
## Step 3: For each candidate dependency

### 3a: Check the crate's default features

Look up the crate on crates.io or check its Cargo.toml to understand:

- What features are enabled by default
- What each feature provides

Use: `cargo metadata --format-version=1 | jq '.packages[] | select(.name == "<crate>") | .features'`
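For example, queried for `lz4_flex` (already a dependency of this workspace), the feature map might look like this; the output shown is illustrative, not captured from a real run:

```bash
# Dump the feature map of one package from cargo's metadata.
cargo metadata --format-version=1 \
  | jq '.packages[] | select(.name == "lz4_flex") | .features'
# Illustrative output:
# {
#   "default": ["std", "safe-encode", "safe-decode"],
#   "std": [],
#   "safe-encode": [],
#   "safe-decode": []
# }
```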
### 3b: Try disabling default features

Modify the dependency in `quickwit/Cargo.toml`:

From:
```toml
some-crate = { version = "1.0" }
```

To:
```toml
some-crate = { version = "1.0", default-features = false }
```

### 3c: Run cargo check

Run: `cargo check --workspace` (or target specific packages for faster feedback)

If compilation fails:
1. Read the error messages to identify which features are needed
2. Add only the required features explicitly:
```toml
some-crate = { version = "1.0", default-features = false, features = ["needed-feature"] }
```
3. Re-run cargo check

### 3d: Binary search for minimal features

If there are many default features, use binary search (a scripted sketch follows this list):
1. Start with no features
2. If it fails, add half the default features
3. Continue until you find the minimal set
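As a concrete sketch of the loop, here is a greedy per-feature elimination variant of the same idea (one `cargo check` per feature rather than halving, but simpler to script). The dependency name `some-crate` and its feature list are hypothetical placeholders:

```bash
#!/usr/bin/env bash
# Sketch only: greedy feature elimination for a HYPOTHETICAL dependency
# `some-crate` with an assumed default feature list.
set -e

FEATURES=(std serde rayon simd)   # assumed defaults of some-crate (hypothetical)
KEEP=("${FEATURES[@]}")

for feat in "${FEATURES[@]}"; do
    # Build the trial set: everything we still keep, minus $feat.
    trial=()
    for f in "${KEEP[@]}"; do
        if [ "$f" != "$feat" ]; then trial+=("$f"); fi
    done
    flags=$(IFS=,; echo "${trial[*]}")
    # Rewrite the dependency entry without $feat and see if the build survives.
    cargo add some-crate --no-default-features ${flags:+--features "$flags"}
    if cargo check --workspace >/dev/null 2>&1; then
        KEEP=("${trial[@]}")      # still builds: $feat was not needed
    else
        # Restore the previous known-good feature set.
        flags=$(IFS=,; echo "${KEEP[*]}")
        cargo add some-crate --no-default-features ${flags:+--features "$flags"}
    fi
done

echo "minimal feature set: ${KEEP[*]}"
```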
## Step 4: Document findings

For each dependency analyzed, report:
- Original configuration
- New configuration (if changed)
- Features that were removed
- Any features that are required

## Step 5: Verify full build

After all changes, run:
```bash
cargo check --workspace --all-targets
cargo test --workspace --no-run
```

## Common Patterns

### Serde
Often only needs `derive`:
```toml
serde = { version = "1.0", default-features = false, features = ["derive", "std"] }
```

### Tokio
Identify which runtime features are actually used:
```toml
tokio = { version = "1.0", default-features = false, features = ["rt-multi-thread", "macros", "sync"] }
```

### Reqwest
Often doesn't need all TLS backends:
```toml
reqwest = { version = "0.11", default-features = false, features = ["rustls-tls", "json"] }
```

## Rollback

If changes cause issues:
```bash
git checkout quickwit/Cargo.toml
cargo check --workspace
```

## Tips

- Start with large crates that have many default features (tokio, reqwest, hyper)
- Use `cargo bloat --crates` to identify large dependencies
- Check `cargo tree -d` for duplicate dependencies that might indicate feature conflicts (see the example after this list)
- Some features are needed only for tests - consider using `[dev-dependencies]` features
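The duplicate-dependency tip takes one command; the crates shown in the sample output are hypothetical:

```bash
# List dependencies that appear at more than one version in the graph.
cargo tree -d
# Hypothetical output: two major versions of `syn` in the graph, often a
# hint that two dependencies request conflicting feature sets.
# syn v1.0.109
# └── old-proc-macro-crate v0.9.0
# syn v2.0.66
# └── serde_derive v1.0.203
```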
@@ -1,60 +0,0 @@
---
name: simple-pr
description: Create a simple PR from staged changes with an auto-generated commit message
disable-model-invocation: true
---

# Simple PR

Follow these steps to create a simple PR from staged changes:

## Step 1: Check workspace state

Run: `git status`

Verify that all changes have been staged (no unstaged changes). If there are unstaged changes, abort and ask the user to stage their changes first with `git add`.

Also verify that we are on the `main` branch. If not, abort and ask the user to switch to main first.

## Step 2: Ensure main is up to date

Run: `git pull origin main`

This ensures we're working from the latest code.

## Step 3: Review staged changes

Run: `git diff --cached`

Review the staged changes to understand what the PR will contain.

## Step 4: Generate commit message

Based on the staged changes, generate a concise commit message (1-2 sentences) that describes the "why" rather than the "what".

Display the proposed commit message to the user and ask for confirmation before proceeding.

## Step 5: Create a new branch

Get the git username: `git config user.name | tr ' ' '-' | tr '[:upper:]' '[:lower:]'`

Create a short, descriptive branch name based on the changes (e.g., `fix-typo-in-readme`, `add-retry-logic`, `update-deps`).

Create and checkout the branch: `git checkout -b {username}/{short-descriptive-name}`

## Step 6: Commit changes

Commit with the message from step 4:
```
git commit -m "{commit-message}"
```

## Step 7: Push and open a PR

Push the branch and open a PR:
```
git push -u origin {branch-name}
gh pr create --title "{commit-message-title}" --body "{longer-description-if-needed}"
```

Report the PR URL to the user when complete.
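Taken together, steps 1-7 amount to roughly the following script. This is an illustrative consolidation, not part of the skill file: it assumes the `gh` CLI is installed and authenticated, and takes the confirmed commit message and a branch suffix as arguments instead of generating them.

```bash
#!/usr/bin/env bash
# Hypothetical consolidation of the simple-pr steps.
# Usage: simple-pr.sh "commit message" branch-suffix
set -e

MSG="${1:?usage: simple-pr.sh \"commit message\" branch-suffix}"
SUFFIX="${2:?usage: simple-pr.sh \"commit message\" branch-suffix}"

# Step 1: abort on unstaged changes or when not on main.
if [ -n "$(git diff --name-only)" ]; then
    echo "unstaged changes present; run 'git add' first" >&2
    exit 1
fi
if [ "$(git branch --show-current)" != "main" ]; then
    echo "not on main; switch branches first" >&2
    exit 1
fi

# Step 2: make sure main is current.
git pull origin main

# Step 5: username-prefixed branch name.
USERNAME=$(git config user.name | tr ' ' '-' | tr '[:upper:]' '[:lower:]')
git checkout -b "${USERNAME}/${SUFFIX}"

# Steps 6-7: commit, push, open the PR.
git commit -m "$MSG"
git push -u origin "${USERNAME}/${SUFFIX}"
gh pr create --title "$MSG" --body ""
```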
.github/workflows/coverage.yml (vendored): 4 changes
@@ -15,11 +15,11 @@ jobs:
    steps:
      - uses: actions/checkout@v4
      - name: Install Rust
        run: rustup toolchain install nightly-2025-12-01 --profile minimal --component llvm-tools-preview
        run: rustup toolchain install nightly-2024-07-01 --profile minimal --component llvm-tools-preview
      - uses: Swatinem/rust-cache@v2
      - uses: taiki-e/install-action@cargo-llvm-cov
      - name: Generate code coverage
        run: cargo +nightly-2025-12-01 llvm-cov --all-features --workspace --doctests --lcov --output-path lcov.info
        run: cargo +nightly-2024-07-01 llvm-cov --all-features --workspace --doctests --lcov --output-path lcov.info
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        continue-on-error: true
.github/workflows/test.yml (vendored): 30 changes
@@ -39,11 +39,11 @@ jobs:
      - name: Check Formatting
        run: cargo +nightly fmt --all -- --check

      - name: Check Stable Compilation
        run: cargo build --all-features

      - name: Check Bench Compilation
        run: cargo +nightly bench --no-run --profile=dev --all-features

@@ -59,10 +59,10 @@ jobs:
    strategy:
      matrix:
        features:
          - { label: "all", flags: "mmap,stopwords,lz4-compression,zstd-compression,failpoints,stemmer" }
          - { label: "quickwit", flags: "mmap,quickwit,failpoints" }
          - { label: "none", flags: "" }
        features: [
          { label: "all", flags: "mmap,stopwords,lz4-compression,zstd-compression,failpoints" },
          { label: "quickwit", flags: "mmap,quickwit,failpoints" }
        ]

    name: test-${{ matrix.features.label}}

@@ -80,21 +80,7 @@ jobs:
      - uses: Swatinem/rust-cache@v2

      - name: Run tests
        run: |
          # if matrix.feature.flags is empty then run on --lib to avoid compiling examples
          # (as most of them rely on mmap) otherwise run all
          if [ -z "${{ matrix.features.flags }}" ]; then
            cargo +stable nextest run --lib --no-default-features --verbose --workspace
          else
            cargo +stable nextest run --features ${{ matrix.features.flags }} --no-default-features --verbose --workspace
          fi
        run: cargo +stable nextest run --features ${{ matrix.features.flags }} --verbose --workspace

      - name: Run doctests
        run: |
          # if matrix.feature.flags is empty then run on --lib to avoid compiling examples
          # (as most of them rely on mmap) otherwise run all
          if [ -z "${{ matrix.features.flags }}" ]; then
            echo "no doctest for no feature flag"
          else
            cargo +stable test --doc --features ${{ matrix.features.flags }} --verbose --workspace
          fi
        run: cargo +stable test --doc --features ${{ matrix.features.flags }} --verbose --workspace
CHANGELOG.md: 28 changes
@@ -2,30 +2,14 @@ Tantivy 0.25
================================

## Bugfixes
- fix union performance regression in tantivy 0.24 [#2663](https://github.com/quickwit-oss/tantivy/pull/2663)(@PSeitz)
- fix union performance regression in tantivy 0.24 [#2663](https://github.com/quickwit-oss/tantivy/pull/2663)(@PSeitz-dd)
- make zstd optional in sstable [#2633](https://github.com/quickwit-oss/tantivy/pull/2633)(@Parth)
- Fix TopDocs::order_by_string_fast_field for asc order [#2672](https://github.com/quickwit-oss/tantivy/pull/2672)(@stuhood @PSeitz)

## Features/Improvements
- add docs/example and Vec<u32> values to sstable [#2660](https://github.com/quickwit-oss/tantivy/pull/2660)(@PSeitz)
- Add string fast field support to `TopDocs`. [#2642](https://github.com/quickwit-oss/tantivy/pull/2642)(@stuhood)
- update edition to 2024 [#2620](https://github.com/quickwit-oss/tantivy/pull/2620)(@PSeitz)
- Allow optional spaces between the field name and the value in the query parser [#2678](https://github.com/quickwit-oss/tantivy/pull/2678)(@Darkheir)
- Support mixed field types in query parser [#2676](https://github.com/quickwit-oss/tantivy/pull/2676)(@trinity-1686a)
- Add per-field size details [#2679](https://github.com/quickwit-oss/tantivy/pull/2679)(@fulmicoton)

Tantivy 0.24.2
================================
- Fix TopNComputer for reverse order. [#2672](https://github.com/quickwit-oss/tantivy/pull/2672)(@stuhood @PSeitz)

Affected queries are [order_by_fast_field](https://docs.rs/tantivy/latest/tantivy/collector/struct.TopDocs.html#method.order_by_fast_field) and
[order_by_u64_field](https://docs.rs/tantivy/latest/tantivy/collector/struct.TopDocs.html#method.order_by_u64_field)
for `Order::Asc`

Tantivy 0.24.1
================================
- Fix: bump required rust version to 1.81

Tantivy 0.24
================================
Tantivy 0.24 will be backwards compatible with indices created with v0.22 and v0.21. The new minimum rust version will be 1.75. Tantivy 0.23 will be skipped.

@@ -78,7 +62,7 @@ This will slightly increase space and access time. [#2439](https://github.com/qu

- **Store DateTime as nanoseconds in doc store** DateTime in the doc store was truncated to microseconds previously. This removes this truncation, while still keeping backwards compatibility. [#2486](https://github.com/quickwit-oss/tantivy/pull/2486)(@PSeitz)

- **Performance/Memory**
- **Performace/Memory**
  - lift clauses in LogicalAst for optimized ast during execution [#2449](https://github.com/quickwit-oss/tantivy/pull/2449)(@PSeitz)
  - Use Vec instead of BTreeMap to back OwnedValue object [#2364](https://github.com/quickwit-oss/tantivy/pull/2364)(@fulmicoton)
  - Replace TantivyDocument with CompactDoc. CompactDoc is much smaller and provides similar performance. [#2402](https://github.com/quickwit-oss/tantivy/pull/2402)(@PSeitz)

@@ -108,14 +92,6 @@ This will slightly increase space and access time. [#2439](https://github.com/qu
- Fix trait bound of StoreReader::iter [#2360](https://github.com/quickwit-oss/tantivy/pull/2360)(@adamreichold)
- remove read_postings_no_deletes [#2526](https://github.com/quickwit-oss/tantivy/pull/2526)(@PSeitz)

Tantivy 0.22.1
================================
- Fix TopNComputer for reverse order. [#2672](https://github.com/quickwit-oss/tantivy/pull/2672)(@stuhood @PSeitz)

Affected queries are [order_by_fast_field](https://docs.rs/tantivy/latest/tantivy/collector/struct.TopDocs.html#method.order_by_fast_field) and
[order_by_u64_field](https://docs.rs/tantivy/latest/tantivy/collector/struct.TopDocs.html#method.order_by_u64_field)
for `Order::Asc`

Tantivy 0.22
================================
Cargo.toml: 78 changes
@@ -1,6 +1,6 @@
[package]
name = "tantivy"
version = "0.26.0"
version = "0.24.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT"
categories = ["database-implementations", "data-structures"]

@@ -15,7 +15,7 @@ rust-version = "1.85"
exclude = ["benches/*.json", "benches/*.txt"]

[dependencies]
oneshot = "0.1.13"
oneshot = "0.1.7"
base64 = "0.22.0"
byteorder = "1.4.3"
crc32fast = "1.3.2"

@@ -27,7 +27,7 @@ regex = { version = "1.5.5", default-features = false, features = [
aho-corasick = "1.0"
tantivy-fst = "0.5"
memmap2 = { version = "0.9.0", optional = true }
lz4_flex = { version = "0.12", default-features = false, optional = true }
lz4_flex = { version = "0.11", default-features = false, optional = true }
zstd = { version = "0.13", optional = true, default-features = false }
tempfile = { version = "3.12.0", optional = true }
log = "0.4.16"

@@ -37,9 +37,9 @@ fs4 = { version = "0.13.1", optional = true }
levenshtein_automata = "0.2.1"
uuid = { version = "1.0.0", features = ["v4", "serde"] }
crossbeam-channel = "0.5.4"
rust-stemmers = { version = "1.2.0", optional = true }
rust-stemmers = "1.2.0"
downcast-rs = "2.0.1"
bitpacking = { version = "0.9.3", default-features = false, features = [
bitpacking = { version = "0.9.2", default-features = false, features = [
    "bitpacker4x",
] }
census = "0.4.2"

@@ -50,45 +50,44 @@ fail = { version = "0.5.0", optional = true }
time = { version = "0.3.35", features = ["serde-well-known"] }
smallvec = "1.8.0"
rayon = "1.5.2"
lru = "0.16.3"
lru = "0.12.0"
fastdivide = "0.4.0"
itertools = "0.14.0"
measure_time = "0.9.0"
arc-swap = "1.5.0"
bon = "3.3.1"

columnar = { version = "0.6", path = "./columnar", package = "tantivy-columnar" }
sstable = { version = "0.6", path = "./sstable", package = "tantivy-sstable", optional = true }
stacker = { version = "0.6", path = "./stacker", package = "tantivy-stacker" }
query-grammar = { version = "0.25.0", path = "./query-grammar", package = "tantivy-query-grammar" }
tantivy-bitpacker = { version = "0.9", path = "./bitpacker" }
common = { version = "0.10", path = "./common/", package = "tantivy-common" }
tokenizer-api = { version = "0.6", path = "./tokenizer-api", package = "tantivy-tokenizer-api" }
sketches-ddsketch = { path = "./sketches-ddsketch", features = ["use_serde"] }
datasketches = "0.2.0"
columnar = { version = "0.5", path = "./columnar", package = "tantivy-columnar" }
sstable = { version = "0.5", path = "./sstable", package = "tantivy-sstable", optional = true }
stacker = { version = "0.5", path = "./stacker", package = "tantivy-stacker" }
query-grammar = { version = "0.24.0", path = "./query-grammar", package = "tantivy-query-grammar" }
tantivy-bitpacker = { version = "0.8", path = "./bitpacker" }
common = { version = "0.9", path = "./common/", package = "tantivy-common" }
tokenizer-api = { version = "0.5", path = "./tokenizer-api", package = "tantivy-tokenizer-api" }
sketches-ddsketch = { version = "0.3.0", features = ["use_serde"] }
hyperloglogplus = { version = "0.4.1", features = ["const-loop"] }
futures-util = { version = "0.3.28", optional = true }
futures-channel = { version = "0.3.28", optional = true }
fnv = "1.0.7"
typetag = "0.2.21"

[target.'cfg(windows)'.dependencies]
winapi = "0.3.9"

[dev-dependencies]
binggan = "0.14.2"
rand = "0.9"
binggan = "0.14.0"
rand = "0.8.5"
maplit = "1.0.2"
matches = "0.1.9"
pretty_assertions = "1.2.1"
proptest = "1.7.0"
proptest = "1.0.0"
test-log = "0.2.10"
futures = "0.3.21"
paste = "1.0.11"
more-asserts = "0.3.1"
rand_distr = "0.5"
rand_distr = "0.4.3"
time = { version = "0.3.10", features = ["serde-well-known", "macros"] }
postcard = { version = "1.0.4", features = [
    "use-std",
    "use-std",
], default-features = false }

[target.'cfg(not(windows))'.dev-dependencies]

@@ -113,8 +112,7 @@ debug-assertions = true
overflow-checks = true

[features]
default = ["mmap", "stopwords", "lz4-compression", "columnar-zstd-compression", "stemmer"]
stemmer = ["rust-stemmers"]
default = ["mmap", "stopwords", "lz4-compression", "columnar-zstd-compression"]
mmap = ["fs4", "tempfile", "memmap2"]
stopwords = []

@@ -144,7 +142,6 @@ members = [
    "sstable",
    "tokenizer-api",
    "columnar",
    "sketches-ddsketch",
]

# Following the "fail" crate best practises, we isolate

@@ -170,36 +167,3 @@ harness = false
[[bench]]
name = "agg_bench"
harness = false

[[bench]]
name = "exists_json"
harness = false

[[bench]]
name = "range_query"
harness = false

[[bench]]
name = "and_or_queries"
harness = false

[[bench]]
name = "range_queries"
harness = false

[[bench]]
name = "bool_queries_with_range"
harness = false

[[bench]]
name = "str_search_and_get"
harness = false

[[bench]]
name = "merge_segments"
harness = false

[[bench]]
name = "regex_all_terms"
harness = false
@@ -23,6 +23,8 @@ performance for different types of queries/collections.
Your mileage WILL vary depending on the nature of queries and their load.

<img src="doc/assets/images/searchbenchmark.png">

Details about the benchmark can be found at this [repository](https://github.com/quickwit-oss/search-benchmark-game).

## Features

@@ -123,7 +125,6 @@ You can also find other bindings on [GitHub](https://github.com/search?q=tantivy
- [seshat](https://github.com/matrix-org/seshat/): A matrix message database/indexer
- [tantiny](https://github.com/baygeldin/tantiny): Tiny full-text search for Ruby
- [lnx](https://github.com/lnx-search/lnx): adaptable, typo tolerant search engine with a REST API
- [Bichon](https://github.com/rustmailer/bichon): A lightweight, high-performance Rust email archiver with WebUI
- and [more](https://github.com/search?q=tantivy)!

### On average, how much faster is Tantivy compared to Lucene?
RELEASE.md: 27 changes
@@ -1,4 +1,4 @@
# Releasing a new Tantivy Version
# Release a new Tantivy Version

## Steps

@@ -10,29 +10,12 @@
6. Set git tag with new version

[`cargo-release`](https://github.com/crate-ci/cargo-release) will help us with steps 1-5:
In conjunction with `cargo-release` Steps 1-4 (I'm not sure if the change detection works):
Set new packages to version 0.0.0

Replace prev-tag-name
```bash
cargo release --workspace --no-publish -v --prev-tag-name 0.24 --push-remote origin minor --no-tag
cargo release --workspace --no-publish -v --prev-tag-name 0.19 --push-remote origin minor --no-tag --execute
```

`no-tag` or it will create tags for all the subpackages

cargo release will _not_ ignore unchanged packages, but it will print warnings for them.
e.g. "warning: updating ownedbytes to 0.10.0 despite no changes made since tag 0.24"

We need to manually ignore these unchanged packages
```bash
cargo release --workspace --no-publish -v --prev-tag-name 0.24 --push-remote origin minor --no-tag --exclude tokenizer-api
```

Add `--execute` to actually publish the packages, otherwise it will only print the commands that would be run.

### Tag Version
```bash
git tag 0.25.0
git push upstream tag 0.25.0
```

no-tag or it will create tags for all the subpackages
TODO.txt: 2 changes
@@ -10,7 +10,7 @@ rename FastFieldReaders::open to load
remove fast field reader

find a way to unify the two DateTime.
re-add type check in the filter wrapper
readd type check in the filter wrapper

add unit test on columnar list columns.
@@ -1,9 +1,7 @@
use binggan::plugins::PeakMemAllocPlugin;
use binggan::{black_box, InputGroup, PeakMemAlloc, INSTRUMENTED_SYSTEM};
use common::DateTime;
use rand::distr::weighted::WeightedIndex;
use rand::prelude::SliceRandom;
use rand::rngs::StdRng;
use rand::seq::IndexedRandom;
use rand::{Rng, SeedableRng};
use rand_distr::Distribution;
use serde_json::json;

@@ -55,47 +53,26 @@ fn bench_agg(mut group: InputGroup<Index>) {
    register!(group, stats_f64);
    register!(group, extendedstats_f64);
    register!(group, percentiles_f64);
    register!(group, terms_7);
    register!(group, terms_all_unique);
    register!(group, terms_150_000);
    register!(group, terms_few);
    register!(group, terms_many);
    register!(group, terms_many_top_1000);
    register!(group, terms_many_order_by_term);
    register!(group, terms_many_with_top_hits);
    register!(group, terms_all_unique_with_avg_sub_agg);
    register!(group, terms_many_with_avg_sub_agg);
    register!(group, terms_status_with_avg_sub_agg);
    register!(group, terms_status_with_histogram);
    register!(group, terms_zipf_1000);
    register!(group, terms_zipf_1000_with_histogram);
    register!(group, terms_zipf_1000_with_avg_sub_agg);

    register!(group, terms_many_json_mixed_type_with_avg_sub_agg);

    register!(group, composite_term_many_page_1000);
    register!(group, composite_term_many_page_1000_with_avg_sub_agg);
    register!(group, composite_term_few);
    register!(group, composite_histogram);
    register!(group, composite_histogram_calendar);

    register!(group, cardinality_agg);
    register!(group, terms_status_with_cardinality_agg);
    register!(group, terms_few_with_cardinality_agg);

    register!(group, range_agg);
    register!(group, range_agg_with_avg_sub_agg);
    register!(group, range_agg_with_term_agg_status);
    register!(group, range_agg_with_term_agg_few);
    register!(group, range_agg_with_term_agg_many);
    register!(group, histogram);
    register!(group, histogram_hard_bounds);
    register!(group, histogram_with_avg_sub_agg);
    register!(group, histogram_with_term_agg_status);
    register!(group, avg_and_range_with_avg_sub_agg);

    // Filter aggregation benchmarks
    register!(group, filter_agg_all_query_count_agg);
    register!(group, filter_agg_term_query_count_agg);
    register!(group, filter_agg_all_query_with_sub_aggs);
    register!(group, filter_agg_term_query_with_sub_aggs);

    group.run();
}

@@ -146,12 +123,12 @@ fn extendedstats_f64(index: &Index) {
}
fn percentiles_f64(index: &Index) {
    let agg_req = json!({
        "mypercentiles": {
            "percentiles": {
                "field": "score_f64",
                "percents": [ 95, 99, 99.9 ]
            }
        "mypercentiles": {
            "percentiles": {
                "field": "score_f64",
                "percents": [ 95, 99, 99.9 ]
            }
        }
    });
    execute_agg(index, agg_req);
}

@@ -166,10 +143,10 @@ fn cardinality_agg(index: &Index) {
    });
    execute_agg(index, agg_req);
}
fn terms_status_with_cardinality_agg(index: &Index) {
fn terms_few_with_cardinality_agg(index: &Index) {
    let agg_req = json!({
        "my_texts": {
            "terms": { "field": "text_few_terms_status" },
            "terms": { "field": "text_few_terms" },
            "aggs": {
                "cardinality": {
                "cardinality": {

@@ -182,20 +159,13 @@ fn terms_status_with_cardinality_agg(index: &Index) {
    execute_agg(index, agg_req);
}

fn terms_7(index: &Index) {
fn terms_few(index: &Index) {
    let agg_req = json!({
        "my_texts": { "terms": { "field": "text_few_terms_status" } },
        "my_texts": { "terms": { "field": "text_few_terms" } },
    });
    execute_agg(index, agg_req);
}
fn terms_all_unique(index: &Index) {
    let agg_req = json!({
        "my_texts": { "terms": { "field": "text_all_unique_terms" } },
    });
    execute_agg(index, agg_req);
}

fn terms_150_000(index: &Index) {
fn terms_many(index: &Index) {
    let agg_req = json!({
        "my_texts": { "terms": { "field": "text_many_terms" } },
    });

@@ -243,72 +213,6 @@ fn terms_many_with_avg_sub_agg(index: &Index) {
    });
    execute_agg(index, agg_req);
}
fn terms_all_unique_with_avg_sub_agg(index: &Index) {
    let agg_req = json!({
        "my_texts": {
            "terms": { "field": "text_all_unique_terms" },
            "aggs": {
                "average_f64": { "avg": { "field": "score_f64" } }
            }
        },
    });
    execute_agg(index, agg_req);
}
fn terms_status_with_histogram(index: &Index) {
    let agg_req = json!({
        "my_texts": {
            "terms": { "field": "text_few_terms_status" },
            "aggs": {
                "histo": {"histogram": { "field": "score_f64", "interval": 10 }}
            }
        }
    });
    execute_agg(index, agg_req);
}

fn terms_zipf_1000_with_histogram(index: &Index) {
    let agg_req = json!({
        "my_texts": {
            "terms": { "field": "text_1000_terms_zipf" },
            "aggs": {
                "histo": {"histogram": { "field": "score_f64", "interval": 10 }}
            }
        }
    });
    execute_agg(index, agg_req);
}

fn terms_status_with_avg_sub_agg(index: &Index) {
    let agg_req = json!({
        "my_texts": {
            "terms": { "field": "text_few_terms_status" },
            "aggs": {
                "average_f64": { "avg": { "field": "score_f64" } }
            }
        },
    });
    execute_agg(index, agg_req);
}

fn terms_zipf_1000_with_avg_sub_agg(index: &Index) {
    let agg_req = json!({
        "my_texts": {
            "terms": { "field": "text_1000_terms_zipf" },
            "aggs": {
                "average_f64": { "avg": { "field": "score_f64" } }
            }
        },
    });
    execute_agg(index, agg_req);
}

fn terms_zipf_1000(index: &Index) {
    let agg_req = json!({
        "my_texts": { "terms": { "field": "text_1000_terms_zipf" } },
    });
    execute_agg(index, agg_req);
}

fn terms_many_json_mixed_type_with_avg_sub_agg(index: &Index) {
    let agg_req = json!({
        "my_texts": {

@@ -320,75 +224,6 @@ fn terms_many_json_mixed_type_with_avg_sub_agg(index: &Index) {
    });
    execute_agg(index, agg_req);
}
fn composite_term_few(index: &Index) {
    let agg_req = json!({
        "my_ctf": {
            "composite": {
                "sources": [
                    { "text_few_terms": { "terms": { "field": "text_few_terms" } } }
                ],
                "size": 1000
            }
        },
    });
    execute_agg(index, agg_req);
}
fn composite_term_many_page_1000(index: &Index) {
    let agg_req = json!({
        "my_ctmp1000": {
            "composite": {
                "sources": [
                    { "text_many_terms": { "terms": { "field": "text_many_terms" } } }
                ],
                "size": 1000
            }
        },
    });
    execute_agg(index, agg_req);
}
fn composite_term_many_page_1000_with_avg_sub_agg(index: &Index) {
    let agg_req = json!({
        "my_ctmp1000wasa": {
            "composite": {
                "sources": [
                    { "text_many_terms": { "terms": { "field": "text_many_terms" } } }
                ],
                "size": 1000,

            },
            "aggs": {
                "average_f64": { "avg": { "field": "score_f64" } }
            }
        },
    });
    execute_agg(index, agg_req);
}
fn composite_histogram(index: &Index) {
    let agg_req = json!({
        "my_ch": {
            "composite": {
                "sources": [
                    { "f64_histogram": { "histogram": { "field": "score_f64", "interval": 1 } } }
                ],
                "size": 1000
            }
        },
    });
    execute_agg(index, agg_req);
}
fn composite_histogram_calendar(index: &Index) {
    let agg_req = json!({
        "my_chc": {
            "composite": {
                "sources": [
                    { "time_histogram": { "date_histogram": { "field": "timestamp", "calendar_interval": "month" } } }
                ],
                "size": 1000
            }
        },
    });
    execute_agg(index, agg_req);
}

fn execute_agg(index: &Index, agg_req: serde_json::Value) {
    let agg_req: Aggregations = serde_json::from_value(agg_req).unwrap();

@@ -433,7 +268,7 @@ fn range_agg_with_avg_sub_agg(index: &Index) {
    execute_agg(index, agg_req);
}

fn range_agg_with_term_agg_status(index: &Index) {
fn range_agg_with_term_agg_few(index: &Index) {
    let agg_req = json!({
        "rangef64": {
            "range": {

@@ -448,7 +283,7 @@ fn range_agg_with_term_agg_status(index: &Index) {
            ]
            },
            "aggs": {
                "my_texts": { "terms": { "field": "text_few_terms_status" } },
                "my_texts": { "terms": { "field": "text_few_terms" } },
            }
        },
    });

@@ -504,17 +339,6 @@ fn histogram_with_avg_sub_agg(index: &Index) {
    });
    execute_agg(index, agg_req);
}
fn histogram_with_term_agg_status(index: &Index) {
    let agg_req = json!({
        "rangef64": {
            "histogram": { "field": "score_f64", "interval": 10 },
            "aggs": {
                "my_texts": { "terms": { "field": "text_few_terms_status" } }
            }
        }
    });
    execute_agg(index, agg_req);
}
fn avg_and_range_with_avg_sub_agg(index: &Index) {
    let agg_req = json!({
        "rangef64": {

@@ -554,13 +378,6 @@ fn get_collector(agg_req: Aggregations) -> AggregationCollector {
}

fn get_test_index_bench(cardinality: Cardinality) -> tantivy::Result<Index> {
    // Flag to use existing index
    let reuse_index = std::env::var("REUSE_AGG_BENCH_INDEX").is_ok();
    if reuse_index && std::path::Path::new("agg_bench").exists() {
        return Index::open_in_dir("agg_bench");
    }
    // create dir
    std::fs::create_dir_all("agg_bench")?;
    let mut schema_builder = Schema::builder();
    let text_fieldtype = tantivy::schema::TextOptions::default()
        .set_indexing_options(

@@ -569,48 +386,20 @@ fn get_test_index_bench(cardinality: Cardinality) -> tantivy::Result<Index> {
        .set_stored();
    let text_field = schema_builder.add_text_field("text", text_fieldtype);
    let json_field = schema_builder.add_json_field("json", FAST);
    let text_field_all_unique_terms =
        schema_builder.add_text_field("text_all_unique_terms", STRING | FAST);
    let text_field_many_terms = schema_builder.add_text_field("text_many_terms", STRING | FAST);
    let text_field_few_terms_status =
        schema_builder.add_text_field("text_few_terms_status", STRING | FAST);
    let text_field_1000_terms_zipf =
        schema_builder.add_text_field("text_1000_terms_zipf", STRING | FAST);
    let text_field_few_terms = schema_builder.add_text_field("text_few_terms", STRING | FAST);
    let score_fieldtype = tantivy::schema::NumericOptions::default().set_fast();
    let score_field = schema_builder.add_u64_field("score", score_fieldtype.clone());
    let score_field_f64 = schema_builder.add_f64_field("score_f64", score_fieldtype.clone());
    let score_field_i64 = schema_builder.add_i64_field("score_i64", score_fieldtype);
    let date_field = schema_builder.add_date_field("timestamp", FAST);
    // use tmp dir
    let index = if reuse_index {
        Index::create_in_dir("agg_bench", schema_builder.build())?
    } else {
        Index::create_from_tempdir(schema_builder.build())?
    };
    // Approximate log proportions
    let status_field_data = [
        ("INFO", 8000),
        ("ERROR", 300),
        ("WARN", 1200),
        ("DEBUG", 500),
        ("OK", 500),
        ("CRITICAL", 20),
        ("EMERGENCY", 1),
    ];
    let log_level_distribution =
        WeightedIndex::new(status_field_data.iter().map(|item| item.1)).unwrap();
    let index = Index::create_from_tempdir(schema_builder.build())?;
    let few_terms_data = ["INFO", "ERROR", "WARN", "DEBUG"];

    let lg_norm = rand_distr::LogNormal::new(2.996f64, 0.979f64).unwrap();

    let many_terms_data = (0..150_000)
        .map(|num| format!("author{num}"))
        .collect::<Vec<_>>();

    // Prepare 1000 unique terms sampled using a Zipf distribution.
    // Exponent ~1.1 approximates top-20 terms covering around ~20%.
    let terms_1000: Vec<String> = (1..=1000).map(|i| format!("term_{i}")).collect();
    let zipf_1000 = rand_distr::Zipf::new(1000.0, 1.1f64).unwrap();

    {
        let mut rng = StdRng::from_seed([1u8; 32]);
        let mut index_writer = index.writer_with_num_threads(1, 200_000_000)?;

@@ -620,25 +409,15 @@ fn get_test_index_bench(cardinality: Cardinality) -> tantivy::Result<Index> {
            index_writer.add_document(doc!())?;
        }
        if cardinality == Cardinality::Multivalued {
            let log_level_sample_a = status_field_data[log_level_distribution.sample(&mut rng)].0;
            let log_level_sample_b = status_field_data[log_level_distribution.sample(&mut rng)].0;
            let idx_a = zipf_1000.sample(&mut rng) as usize - 1;
            let idx_b = zipf_1000.sample(&mut rng) as usize - 1;
            let term_1000_a = &terms_1000[idx_a];
            let term_1000_b = &terms_1000[idx_b];
            index_writer.add_document(doc!(
                json_field => json!({"mixed_type": 10.0}),
                json_field => json!({"mixed_type": 10.0}),
                text_field => "cool",
                text_field => "cool",
                text_field_all_unique_terms => "cool",
                text_field_all_unique_terms => "coolo",
                text_field_many_terms => "cool",
                text_field_many_terms => "cool",
                text_field_few_terms_status => log_level_sample_a,
                text_field_few_terms_status => log_level_sample_b,
                text_field_1000_terms_zipf => term_1000_a.as_str(),
                text_field_1000_terms_zipf => term_1000_b.as_str(),
                text_field_few_terms => "cool",
                text_field_few_terms => "cool",
                score_field => 1u64,
                score_field => 1u64,
                score_field_f64 => lg_norm.sample(&mut rng),

@@ -653,8 +432,8 @@ fn get_test_index_bench(cardinality: Cardinality) -> tantivy::Result<Index> {
        }
        let _val_max = 1_000_000.0;
        for _ in 0..doc_with_value {
            let val: f64 = rng.random_range(0.0..1_000_000.0);
            let json = if rng.random_bool(0.1) {
            let val: f64 = rng.gen_range(0.0..1_000_000.0);
            let json = if rng.gen_bool(0.1) {
                // 10% are numeric values
                json!({ "mixed_type": val })
            } else {

@@ -663,14 +442,11 @@ fn get_test_index_bench(cardinality: Cardinality) -> tantivy::Result<Index> {
            index_writer.add_document(doc!(
                text_field => "cool",
                json_field => json,
                text_field_all_unique_terms => format!("unique_term_{}", rng.random::<u64>()),
                text_field_many_terms => many_terms_data.choose(&mut rng).unwrap().to_string(),
                text_field_few_terms_status => status_field_data[log_level_distribution.sample(&mut rng)].0,
                text_field_1000_terms_zipf => terms_1000[zipf_1000.sample(&mut rng) as usize - 1].as_str(),
                text_field_few_terms => few_terms_data.choose(&mut rng).unwrap().to_string(),
                score_field => val as u64,
                score_field_f64 => lg_norm.sample(&mut rng),
                score_field_i64 => val as i64,
                date_field => DateTime::from_timestamp_millis((val * 1_000_000.) as i64),
            ))?;
            if cardinality == Cardinality::OptionalSparse {
                for _ in 0..20 {

@@ -684,61 +460,3 @@ fn get_test_index_bench(cardinality: Cardinality) -> tantivy::Result<Index> {

    Ok(index)
}

// Filter aggregation benchmarks

fn filter_agg_all_query_count_agg(index: &Index) {
    let agg_req = json!({
        "filtered": {
            "filter": "*",
            "aggs": {
                "count": { "value_count": { "field": "score" } }
            }
        }
    });
    execute_agg(index, agg_req);
}

fn filter_agg_term_query_count_agg(index: &Index) {
    let agg_req = json!({
        "filtered": {
            "filter": "text:cool",
            "aggs": {
                "count": { "value_count": { "field": "score" } }
            }
        }
    });
    execute_agg(index, agg_req);
}

fn filter_agg_all_query_with_sub_aggs(index: &Index) {
    let agg_req = json!({
        "filtered": {
            "filter": "*",
            "aggs": {
                "avg_score": { "avg": { "field": "score" } },
                "stats_score": { "stats": { "field": "score_f64" } },
                "terms_text": {
                    "terms": { "field": "text_few_terms_status" }
                }
            }
        }
    });
    execute_agg(index, agg_req);
}

fn filter_agg_term_query_with_sub_aggs(index: &Index) {
    let agg_req = json!({
        "filtered": {
            "filter": "text:cool",
            "aggs": {
                "avg_score": { "avg": { "field": "score" } },
                "stats_score": { "stats": { "field": "score_f64" } },
                "terms_text": {
                    "terms": { "field": "text_few_terms_status" }
                }
            }
        }
    });
    execute_agg(index, agg_req);
}
@@ -1,218 +0,0 @@
// Benchmarks boolean conjunction queries using binggan.
//
// What's measured:
// - Or and And queries with varying selectivity (only `Term` queries for now on leafs)
// - Nested AND/OR combinations (on multiple fields)
// - No-scoring path using the Count collector (focus on iterator/skip performance)
// - Top-K retrieval (k=10) using the TopDocs collector
//
// Corpus model:
// - Synthetic docs; each token a/b/c is independently included per doc
// - If none of a/b/c are included, emit a neutral filler token to keep doc length similar
//
// Notes:
// - After optimization, when scoring is disabled Tantivy reads doc-only postings
//   (IndexRecordOption::Basic), avoiding frequency decoding overhead.
// - This bench isolates boolean iteration speed and intersection/union cost.
// - Use `cargo bench --bench boolean_conjunction` to run.

use binggan::{black_box, BenchGroup, BenchRunner};
use rand::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;
use tantivy::collector::sort_key::SortByStaticFastValue;
use tantivy::collector::{Collector, Count, TopDocs};
use tantivy::query::{Query, QueryParser};
use tantivy::schema::{Schema, FAST, TEXT};
use tantivy::{doc, Index, Order, ReloadPolicy, Searcher};

#[derive(Clone)]
struct BenchIndex {
    #[allow(dead_code)]
    index: Index,
    searcher: Searcher,
    query_parser: QueryParser,
}

/// Build a single index containing both fields (title, body) and
/// return two BenchIndex views:
/// - single_field: QueryParser defaults to only "body"
/// - multi_field: QueryParser defaults to ["title", "body"]
fn build_shared_indices(num_docs: usize, p_a: f32, p_b: f32, p_c: f32) -> (BenchIndex, BenchIndex) {
    // Unified schema (two text fields)
    let mut schema_builder = Schema::builder();
    let f_title = schema_builder.add_text_field("title", TEXT);
    let f_body = schema_builder.add_text_field("body", TEXT);
    let f_score = schema_builder.add_u64_field("score", FAST);
    let f_score2 = schema_builder.add_u64_field("score2", FAST);
    let schema = schema_builder.build();
    let index = Index::create_in_ram(schema.clone());

    // Populate index with stable RNG for reproducibility.
    let mut rng = StdRng::from_seed([7u8; 32]);

    // Populate: spread each present token 90/10 to body/title
    {
        let mut writer = index.writer_with_num_threads(1, 500_000_000).unwrap();
        for _ in 0..num_docs {
            let has_a = rng.random_bool(p_a as f64);
            let has_b = rng.random_bool(p_b as f64);
            let has_c = rng.random_bool(p_c as f64);
            let score = rng.random_range(0u64..100u64);
            let score2 = rng.random_range(0u64..100_000u64);
            let mut title_tokens: Vec<&str> = Vec::new();
            let mut body_tokens: Vec<&str> = Vec::new();
            if has_a {
                if rng.random_bool(0.1) {
                    title_tokens.push("a");
                } else {
                    body_tokens.push("a");
                }
            }
            if has_b {
                if rng.random_bool(0.1) {
                    title_tokens.push("b");
                } else {
                    body_tokens.push("b");
                }
            }
            if has_c {
                if rng.random_bool(0.1) {
                    title_tokens.push("c");
                } else {
                    body_tokens.push("c");
                }
            }
            if title_tokens.is_empty() && body_tokens.is_empty() {
                body_tokens.push("z");
            }
            writer
                .add_document(doc!(
                    f_title=>title_tokens.join(" "),
                    f_body=>body_tokens.join(" "),
                    f_score=>score,
                    f_score2=>score2,
                ))
                .unwrap();
        }
        writer.commit().unwrap();
    }

    // Prepare reader/searcher once.
    let reader = index
        .reader_builder()
        .reload_policy(ReloadPolicy::Manual)
        .try_into()
        .unwrap();
    let searcher = reader.searcher();

    // Build two query parsers with different default fields.
    let qp_single = QueryParser::for_index(&index, vec![f_body]);
    let qp_multi = QueryParser::for_index(&index, vec![f_title, f_body]);

    let single_view = BenchIndex {
        index: index.clone(),
        searcher: searcher.clone(),
        query_parser: qp_single,
    };
    let multi_view = BenchIndex {
        index,
        searcher,
        query_parser: qp_multi,
    };
    (single_view, multi_view)
}

fn main() {
    // Prepare corpora with varying selectivity. Build one index per corpus
    // and derive two views (single-field vs multi-field) from it.
    let scenarios = vec![
        (
            "N=1M, p(a)=5%, p(b)=1%, p(c)=15%".to_string(),
            1_000_000,
            0.05,
            0.01,
            0.15,
        ),
        (
            "N=1M, p(a)=1%, p(b)=1%, p(c)=15%".to_string(),
            1_000_000,
            0.01,
            0.01,
            0.15,
        ),
    ];

    let queries = &["a", "+a +b", "+a +b +c", "a OR b", "a OR b OR c"];

    let mut runner = BenchRunner::new();
    for (label, n, pa, pb, pc) in scenarios {
        let (single_view, multi_view) = build_shared_indices(n, pa, pb, pc);

        for (view_name, bench_index) in [("single_field", single_view), ("multi_field", multi_view)]
        {
            // Single-field group: default field is body only
            let mut group = runner.new_group();
            group.set_name(format!("{} — {}", view_name, label));
            for query_str in queries {
                add_bench_task(&mut group, &bench_index, query_str, Count, "count");
                add_bench_task(
                    &mut group,
                    &bench_index,
                    query_str,
                    TopDocs::with_limit(10).order_by_score(),
                    "top10",
                );
                add_bench_task(
                    &mut group,
                    &bench_index,
                    query_str,
                    TopDocs::with_limit(10).order_by_fast_field::<u64>("score", Order::Asc),
                    "top10_by_ff",
                );
                add_bench_task(
                    &mut group,
                    &bench_index,
                    query_str,
                    TopDocs::with_limit(10).order_by((
                        SortByStaticFastValue::<u64>::for_field("score"),
                        SortByStaticFastValue::<u64>::for_field("score2"),
                    )),
                    "top10_by_2ff",
                );
            }
            group.run();
        }
    }
}

fn add_bench_task<C: Collector + 'static>(
    bench_group: &mut BenchGroup,
    bench_index: &BenchIndex,
    query_str: &str,
    collector: C,
    collector_name: &str,
) {
    let task_name = format!("{}_{}", query_str.replace(" ", "_"), collector_name);
    let query = bench_index.query_parser.parse_query(query_str).unwrap();
    let search_task = SearchTask {
        searcher: bench_index.searcher.clone(),
        collector,
        query,
    };
    bench_group.register(task_name, move |_| black_box(search_task.run()));
}

struct SearchTask<C: Collector> {
    searcher: Searcher,
    collector: C,
    query: Box<dyn Query>,
}

impl<C: Collector> SearchTask<C> {
    #[inline(never)]
    pub fn run(&self) -> usize {
        self.searcher.search(&self.query, &self.collector).unwrap();
        1
    }
}
@@ -1,288 +0,0 @@
use binggan::{black_box, BenchGroup, BenchRunner};
use rand::prelude::*;
use rand::rngs::StdRng;
use rand::SeedableRng;
use tantivy::collector::{Collector, Count, DocSetCollector, TopDocs};
use tantivy::query::{Query, QueryParser};
use tantivy::schema::{Schema, FAST, INDEXED, TEXT};
use tantivy::{doc, Index, Order, ReloadPolicy, Searcher};

#[derive(Clone)]
struct BenchIndex {
    #[allow(dead_code)]
    index: Index,
    searcher: Searcher,
    query_parser: QueryParser,
}

fn build_shared_indices(num_docs: usize, p_title_a: f32, distribution: &str) -> BenchIndex {
    // Unified schema
    let mut schema_builder = Schema::builder();
    let f_title = schema_builder.add_text_field("title", TEXT);
    let f_num_rand = schema_builder.add_u64_field("num_rand", INDEXED);
    let f_num_asc = schema_builder.add_u64_field("num_asc", INDEXED);
    let f_num_rand_fast = schema_builder.add_u64_field("num_rand_fast", INDEXED | FAST);
    let f_num_asc_fast = schema_builder.add_u64_field("num_asc_fast", INDEXED | FAST);
    let schema = schema_builder.build();
    let index = Index::create_in_ram(schema.clone());

    // Populate index with stable RNG for reproducibility.
    let mut rng = StdRng::from_seed([7u8; 32]);

    {
        let mut writer = index.writer_with_num_threads(1, 4_000_000_000).unwrap();

        match distribution {
            "dense" => {
                for doc_id in 0..num_docs {
                    // Always add title to avoid empty documents
                    let title_token = if rng.random_bool(p_title_a as f64) {
                        "a"
                    } else {
                        "b"
                    };

                    let num_rand = rng.random_range(0u64..1000u64);

                    let num_asc = (doc_id / 10000) as u64;

                    writer
                        .add_document(doc!(
                            f_title=>title_token,
                            f_num_rand=>num_rand,
                            f_num_asc=>num_asc,
                            f_num_rand_fast=>num_rand,
                            f_num_asc_fast=>num_asc,
                        ))
                        .unwrap();
                }
            }
            "sparse" => {
                for doc_id in 0..num_docs {
                    // Always add title to avoid empty documents
                    let title_token = if rng.random_bool(p_title_a as f64) {
                        "a"
                    } else {
                        "b"
                    };

                    let num_rand = rng.random_range(0u64..10000000u64);

                    let num_asc = doc_id as u64;

                    writer
                        .add_document(doc!(
                            f_title=>title_token,
                            f_num_rand=>num_rand,
                            f_num_asc=>num_asc,
                            f_num_rand_fast=>num_rand,
                            f_num_asc_fast=>num_asc,
                        ))
                        .unwrap();
                }
            }
            _ => {
                panic!("Unsupported distribution type");
            }
        }
        writer.commit().unwrap();
    }

    // Prepare reader/searcher once.
    let reader = index
        .reader_builder()
        .reload_policy(ReloadPolicy::Manual)
        .try_into()
        .unwrap();
    let searcher = reader.searcher();

    // Build query parser for title field
    let qp_title = QueryParser::for_index(&index, vec![f_title]);

    BenchIndex {
        index,
        searcher,
        query_parser: qp_title,
    }
}

fn main() {
    // Prepare corpora with varying scenarios
    let scenarios = vec![
        (
            "dense and 99% a".to_string(),
            10_000_000,
            0.99,
            "dense",
            0,
            9,
        ),
        (
            "dense and 99% a".to_string(),
            10_000_000,
            0.99,
            "dense",
            990,
            999,
        ),
        (
            "sparse and 99% a".to_string(),
            10_000_000,
            0.99,
            "sparse",
            0,
            9,
        ),
        (
            "sparse and 99% a".to_string(),
            10_000_000,
            0.99,
            "sparse",
            9_999_990,
            9_999_999,
        ),
    ];

    let mut runner = BenchRunner::new();
    for (scenario_id, n, p_title_a, num_rand_distribution, range_low, range_high) in scenarios {
        // Build index for this scenario
        let bench_index = build_shared_indices(n, p_title_a, num_rand_distribution);

        // Create benchmark group
        let mut group = runner.new_group();

        // Now set the name (this moves scenario_id)
        group.set_name(scenario_id);

        // Define all four field types
        let field_names = ["num_rand", "num_asc", "num_rand_fast", "num_asc_fast"];

        // Define the three terms we want to test with
        let terms = ["a", "b", "z"];

        // Generate all combinations of terms and field names
        let mut queries = Vec::new();
        for &term in &terms {
            for &field_name in &field_names {
                let query_str = format!(
                    "{} AND {}:[{} TO {}]",
                    term, field_name, range_low, range_high
                );
                queries.push((query_str, field_name.to_string()));
            }
        }

        let query_str = format!(
            "{}:[{} TO {}] AND {}:[{} TO {}]",
            "num_rand_fast", range_low, range_high, "num_asc_fast", range_low, range_high
        );
        queries.push((query_str, "num_asc_fast".to_string()));

        // Run all benchmark tasks for each query and its corresponding field name
        for (query_str, field_name) in queries {
            run_benchmark_tasks(&mut group, &bench_index, &query_str, &field_name);
        }

        group.run();
    }
}

/// Run all benchmark tasks for a given query string and field name
fn run_benchmark_tasks(
    bench_group: &mut BenchGroup,
    bench_index: &BenchIndex,
    query_str: &str,
    field_name: &str,
) {
    // Test count
    add_bench_task(bench_group, bench_index, query_str, Count, "count");

    // Test all results
    add_bench_task(
        bench_group,
        bench_index,
        query_str,
        DocSetCollector,
        "all results",
    );

    // Test top 100 by the field (if it's a FAST field)
    if field_name.ends_with("_fast") {
        // Ascending order
        {
            let collector_name = format!("top100_by_{}_asc", field_name);
            let field_name_owned = field_name.to_string();
            add_bench_task(
                bench_group,
                bench_index,
                query_str,
                TopDocs::with_limit(100).order_by_fast_field::<u64>(field_name_owned, Order::Asc),
                &collector_name,
            );
        }

        // Descending order
        {
            let collector_name = format!("top100_by_{}_desc", field_name);
            let field_name_owned = field_name.to_string();
            add_bench_task(
                bench_group,
                bench_index,
                query_str,
                TopDocs::with_limit(100).order_by_fast_field::<u64>(field_name_owned, Order::Desc),
                &collector_name,
            );
        }
    }
}

fn add_bench_task<C: Collector + 'static>(
    bench_group: &mut BenchGroup,
    bench_index: &BenchIndex,
    query_str: &str,
    collector: C,
    collector_name: &str,
) {
    let task_name = format!("{}_{}", query_str.replace(" ", "_"), collector_name);
    let query = bench_index.query_parser.parse_query(query_str).unwrap();
    let search_task = SearchTask {
        searcher: bench_index.searcher.clone(),
        collector,
        query,
    };
    bench_group.register(task_name, move |_| black_box(search_task.run()));
}

struct SearchTask<C: Collector> {
    searcher: Searcher,
    collector: C,
    query: Box<dyn Query>,
}

impl<C: Collector> SearchTask<C> {
    #[inline(never)]
    pub fn run(&self) -> usize {
        let result = self.searcher.search(&self.query, &self.collector).unwrap();
        if let Some(count) = (&result as &dyn std::any::Any).downcast_ref::<usize>() {
            *count
        } else if let Some(top_docs) = (&result as &dyn std::any::Any)
            .downcast_ref::<Vec<(Option<u64>, tantivy::DocAddress)>>()
        {
            top_docs.len()
        } else if let Some(top_docs) =
            (&result as &dyn std::any::Any).downcast_ref::<Vec<(u64, tantivy::DocAddress)>>()
        {
            top_docs.len()
        } else if let Some(doc_set) = (&result as &dyn std::any::Any)
            .downcast_ref::<std::collections::HashSet<tantivy::DocAddress>>()
        {
            doc_set.len()
        } else {
            eprintln!(
                "Unknown collector result type: {:?}",
                std::any::type_name::<C::Fruit>()
            );
            0
        }
    }
}
@@ -1,69 +0,0 @@
|
||||
use binggan::plugins::PeakMemAllocPlugin;
|
||||
use binggan::{black_box, InputGroup, PeakMemAlloc, INSTRUMENTED_SYSTEM};
|
||||
use serde_json::json;
|
||||
use tantivy::collector::Count;
|
||||
use tantivy::query::ExistsQuery;
|
||||
use tantivy::schema::{Schema, FAST, TEXT};
|
||||
use tantivy::{doc, Index};
|
||||
|
||||
#[global_allocator]
|
||||
pub static GLOBAL: &PeakMemAlloc<std::alloc::System> = &INSTRUMENTED_SYSTEM;
|
||||
|
||||
fn main() {
|
||||
let doc_count: usize = 500_000;
|
||||
let subfield_counts: &[usize] = &[1, 2, 3, 4, 5, 6, 7, 8, 16, 256, 4096, 65536, 262144];
|
||||
|
||||
let indices: Vec<(String, Index)> = subfield_counts
|
||||
.iter()
|
||||
.map(|&sub_fields| {
|
||||
(
|
||||
format!("subfields={sub_fields}"),
|
||||
build_index_with_json_subfields(doc_count, sub_fields),
|
||||
)
|
||||
})
|
||||
.collect();
|
||||
|
||||
let mut group = InputGroup::new_with_inputs(indices);
|
||||
group.add_plugin(PeakMemAllocPlugin::new(GLOBAL));
|
||||
|
||||
group.config().num_iter_group = Some(1);
|
||||
group.config().num_iter_bench = Some(1);
|
||||
group.register("exists_json", exists_json_union);
|
||||
|
||||
group.run();
|
||||
}
|
||||
|
||||
fn exists_json_union(index: &Index) {
|
||||
let reader = index.reader().expect("reader");
|
||||
let searcher = reader.searcher();
|
||||
let query = ExistsQuery::new("json".to_string(), true);
|
||||
let count = searcher.search(&query, &Count).expect("exists search");
|
||||
// Prevents optimizer from eliding the search
|
||||
black_box(count);
|
||||
}
|
||||
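// Note on the flag above (an assumption about the ExistsQuery API, not
// confirmed here): the second argument appears to toggle JSON subpath
// expansion, so `true` behaves like a union over every subpath column under
// "json", which is exactly the cost this benchmark stresses as the number of
// subfields grows.
fn _all_subpaths_query() -> ExistsQuery {
    ExistsQuery::new("json".to_string(), true)
}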
|
||||
fn build_index_with_json_subfields(num_docs: usize, num_subfields: usize) -> Index {
|
||||
// Schema: single JSON field stored as FAST to support ExistsQuery.
|
||||
let mut schema_builder = Schema::builder();
|
||||
let json_field = schema_builder.add_json_field("json", TEXT | FAST);
|
||||
let schema = schema_builder.build();
|
||||
|
||||
let index = Index::create_from_tempdir(schema).expect("create index");
|
||||
{
|
||||
let mut index_writer = index
|
||||
.writer_with_num_threads(1, 200_000_000)
|
||||
.expect("writer");
|
||||
for i in 0..num_docs {
|
||||
let sub = i % num_subfields;
|
||||
// Only one subpath is set per document; subpaths are rotated so that
|
||||
// no single subpath column is dense, yet their union covers all docs.
|
||||
let v = json!({ format!("field_{sub}"): i as u64 });
|
||||
index_writer
|
||||
.add_document(doc!(json_field => v))
|
||||
.expect("add_document");
|
||||
}
|
||||
index_writer.commit().expect("commit");
|
||||
}
|
||||
|
||||
index
|
||||
}
|
||||
@@ -1,224 +0,0 @@
|
||||
// Benchmarks segment merging
|
||||
//
|
||||
// Notes:
|
||||
// - Input segments are kept intact (no deletes / no IndexWriter merge).
|
||||
// - Output is written to a `NullDirectory` that discards all files except
|
||||
// fieldnorms (needed for merging).
|
||||
|
||||
use std::collections::HashMap;
|
||||
use std::io::{self, Write};
|
||||
use std::path::{Path, PathBuf};
|
||||
use std::sync::{Arc, RwLock};
|
||||
|
||||
use binggan::{black_box, BenchRunner};
|
||||
use rand::prelude::*;
|
||||
use rand::rngs::StdRng;
|
||||
use rand::SeedableRng;
|
||||
use tantivy::directory::error::{DeleteError, OpenReadError, OpenWriteError};
|
||||
use tantivy::directory::{
|
||||
AntiCallToken, Directory, FileHandle, OwnedBytes, TerminatingWrite, WatchCallback, WatchHandle,
|
||||
WritePtr,
|
||||
};
|
||||
use tantivy::indexer::{merge_filtered_segments, NoMergePolicy};
|
||||
use tantivy::schema::{Schema, TEXT};
|
||||
use tantivy::{doc, HasLen, Index, IndexSettings, Segment};
|
||||
|
||||
#[derive(Clone, Default, Debug)]
|
||||
struct NullDirectory {
|
||||
blobs: Arc<RwLock<HashMap<PathBuf, OwnedBytes>>>,
|
||||
}
|
||||
|
||||
struct NullWriter;
|
||||
|
||||
impl Write for NullWriter {
|
||||
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
|
||||
Ok(buf.len())
|
||||
}
|
||||
|
||||
fn flush(&mut self) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
impl TerminatingWrite for NullWriter {
|
||||
fn terminate_ref(&mut self, _token: AntiCallToken) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
struct InMemoryWriter {
|
||||
path: PathBuf,
|
||||
buffer: Vec<u8>,
|
||||
blobs: Arc<RwLock<HashMap<PathBuf, OwnedBytes>>>,
|
||||
}
|
||||
|
||||
impl Write for InMemoryWriter {
|
||||
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
|
||||
self.buffer.extend_from_slice(buf);
|
||||
Ok(buf.len())
|
||||
}
|
||||
|
||||
fn flush(&mut self) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
impl TerminatingWrite for InMemoryWriter {
|
||||
fn terminate_ref(&mut self, _token: AntiCallToken) -> io::Result<()> {
|
||||
let bytes = OwnedBytes::new(std::mem::take(&mut self.buffer));
|
||||
self.blobs.write().unwrap().insert(self.path.clone(), bytes);
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Default)]
|
||||
struct NullFileHandle;
|
||||
impl HasLen for NullFileHandle {
|
||||
fn len(&self) -> usize {
|
||||
0
|
||||
}
|
||||
}
|
||||
impl FileHandle for NullFileHandle {
|
||||
fn read_bytes(&self, _range: std::ops::Range<usize>) -> io::Result<OwnedBytes> {
|
||||
unimplemented!()
|
||||
}
|
||||
}
|
||||
|
||||
impl Directory for NullDirectory {
|
||||
fn get_file_handle(&self, path: &Path) -> Result<Arc<dyn FileHandle>, OpenReadError> {
|
||||
if let Some(bytes) = self.blobs.read().unwrap().get(path) {
|
||||
return Ok(Arc::new(bytes.clone()));
|
||||
}
|
||||
Ok(Arc::new(NullFileHandle))
|
||||
}
|
||||
|
||||
fn delete(&self, _path: &Path) -> Result<(), DeleteError> {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn exists(&self, _path: &Path) -> Result<bool, OpenReadError> {
|
||||
Ok(true)
|
||||
}
|
||||
|
||||
fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError> {
|
||||
let path_buf = path.to_path_buf();
|
||||
if path.to_string_lossy().ends_with(".fieldnorm") {
|
||||
let writer = InMemoryWriter {
|
||||
path: path_buf,
|
||||
buffer: Vec::new(),
|
||||
blobs: Arc::clone(&self.blobs),
|
||||
};
|
||||
Ok(io::BufWriter::new(Box::new(writer)))
|
||||
} else {
|
||||
Ok(io::BufWriter::new(Box::new(NullWriter)))
|
||||
}
|
||||
}
|
||||
|
||||
fn atomic_read(&self, path: &Path) -> Result<Vec<u8>, OpenReadError> {
|
||||
if let Some(bytes) = self.blobs.read().unwrap().get(path) {
|
||||
return Ok(bytes.as_slice().to_vec());
|
||||
}
|
||||
Err(OpenReadError::FileDoesNotExist(path.to_path_buf()))
|
||||
}
|
||||
|
||||
fn atomic_write(&self, _path: &Path, _data: &[u8]) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn sync_directory(&self) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn watch(&self, _watch_callback: WatchCallback) -> tantivy::Result<WatchHandle> {
|
||||
Ok(WatchHandle::empty())
|
||||
}
|
||||
}
|
||||
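// Design note: `open_write` above buffers only `.fieldnorm` files in memory,
// since the merge reads those back; every other output file goes to
// `NullWriter` and is discarded, keeping the benchmark focused on merge CPU
// cost rather than output I/O.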
|
||||
struct MergeScenario {
|
||||
#[allow(dead_code)]
|
||||
index: Index,
|
||||
segments: Vec<Segment>,
|
||||
settings: IndexSettings,
|
||||
label: String,
|
||||
}
|
||||
|
||||
fn build_index(
|
||||
num_segments: usize,
|
||||
docs_per_segment: usize,
|
||||
tokens_per_doc: usize,
|
||||
vocab_size: usize,
|
||||
) -> MergeScenario {
|
||||
let mut schema_builder = Schema::builder();
|
||||
let body = schema_builder.add_text_field("body", TEXT);
|
||||
let schema = schema_builder.build();
|
||||
let index = Index::create_in_ram(schema.clone());
|
||||
|
||||
assert!(vocab_size > 0);
|
||||
let total_tokens = num_segments * docs_per_segment * tokens_per_doc;
|
||||
let use_unique_terms = vocab_size >= total_tokens;
|
||||
let mut rng = StdRng::from_seed([7u8; 32]);
|
||||
let mut next_token_id: u64 = 0;
|
||||
|
||||
{
|
||||
let mut writer = index.writer_with_num_threads(1, 256_000_000).unwrap();
|
||||
writer.set_merge_policy(Box::new(NoMergePolicy));
|
||||
for _ in 0..num_segments {
|
||||
for _ in 0..docs_per_segment {
|
||||
let mut tokens = Vec::with_capacity(tokens_per_doc);
|
||||
for _ in 0..tokens_per_doc {
|
||||
let token_id = if use_unique_terms {
|
||||
let id = next_token_id;
|
||||
next_token_id += 1;
|
||||
id
|
||||
} else {
|
||||
rng.random_range(0..vocab_size as u64)
|
||||
};
|
||||
tokens.push(format!("term_{token_id}"));
|
||||
}
|
||||
writer.add_document(doc!(body => tokens.join(" "))).unwrap();
|
||||
}
|
||||
writer.commit().unwrap();
|
||||
}
|
||||
}
|
||||
|
||||
let segments = index.searchable_segments().unwrap();
|
||||
let settings = index.settings().clone();
|
||||
let label = format!(
|
||||
"segments={}, docs/seg={}, tokens/doc={}, vocab={}",
|
||||
num_segments, docs_per_segment, tokens_per_doc, vocab_size
|
||||
);
|
||||
|
||||
MergeScenario {
|
||||
index,
|
||||
segments,
|
||||
settings,
|
||||
label,
|
||||
}
|
||||
}
|
||||
|
||||
fn main() {
|
||||
let scenarios = vec![
|
||||
build_index(8, 50_000, 12, 8),
|
||||
build_index(16, 50_000, 12, 8),
|
||||
build_index(16, 100_000, 12, 8),
|
||||
build_index(8, 50_000, 8, 8 * 50_000 * 8),
|
||||
];
|
||||
|
||||
let mut runner = BenchRunner::new();
|
||||
for scenario in scenarios {
|
||||
let mut group = runner.new_group();
|
||||
group.set_name(format!("merge_segments inv_index — {}", scenario.label));
|
||||
let segments = scenario.segments.clone();
|
||||
let settings = scenario.settings.clone();
|
||||
group.register("merge", move |_| {
|
||||
let output_dir = NullDirectory::default();
|
||||
let filter_doc_ids = vec![None; segments.len()];
|
||||
let merged_index =
|
||||
merge_filtered_segments(&segments, settings.clone(), filter_doc_ids, output_dir)
|
||||
.unwrap();
|
||||
black_box(merged_index);
|
||||
});
|
||||
|
||||
group.run();
|
||||
}
|
||||
}
|
||||
@@ -1,365 +0,0 @@
|
||||
use std::ops::Bound;
|
||||
|
||||
use binggan::{black_box, BenchGroup, BenchRunner};
|
||||
use rand::prelude::*;
|
||||
use rand::rngs::StdRng;
|
||||
use rand::SeedableRng;
|
||||
use tantivy::collector::{Count, DocSetCollector, TopDocs};
|
||||
use tantivy::query::RangeQuery;
|
||||
use tantivy::schema::{Schema, FAST, INDEXED};
|
||||
use tantivy::{doc, Index, Order, ReloadPolicy, Searcher, Term};
|
||||
|
||||
#[derive(Clone)]
|
||||
struct BenchIndex {
|
||||
#[allow(dead_code)]
|
||||
index: Index,
|
||||
searcher: Searcher,
|
||||
}
|
||||
|
||||
fn build_shared_indices(num_docs: usize, distribution: &str) -> BenchIndex {
|
||||
// Schema with fast fields only
|
||||
let mut schema_builder = Schema::builder();
|
||||
let f_num_rand_fast = schema_builder.add_u64_field("num_rand_fast", INDEXED | FAST);
|
||||
let f_num_asc_fast = schema_builder.add_u64_field("num_asc_fast", INDEXED | FAST);
|
||||
let schema = schema_builder.build();
|
||||
let index = Index::create_in_ram(schema.clone());
|
||||
|
||||
// Populate index with stable RNG for reproducibility.
|
||||
let mut rng = StdRng::from_seed([7u8; 32]);
|
||||
|
||||
{
|
||||
let mut writer = index.writer_with_num_threads(1, 4_000_000_000).unwrap();
|
||||
|
||||
match distribution {
|
||||
"dense" => {
|
||||
for doc_id in 0..num_docs {
|
||||
let num_rand = rng.random_range(0u64..1000u64);
|
||||
let num_asc = (doc_id / 10000) as u64;
|
||||
|
||||
writer
|
||||
.add_document(doc!(
|
||||
f_num_rand_fast=>num_rand,
|
||||
f_num_asc_fast=>num_asc,
|
||||
))
|
||||
.unwrap();
|
||||
}
|
||||
}
|
||||
"sparse" => {
|
||||
for doc_id in 0..num_docs {
|
||||
let num_rand = rng.random_range(0u64..10000000u64);
|
||||
let num_asc = doc_id as u64;
|
||||
|
||||
writer
|
||||
.add_document(doc!(
|
||||
f_num_rand_fast=>num_rand,
|
||||
f_num_asc_fast=>num_asc,
|
||||
))
|
||||
.unwrap();
|
||||
}
|
||||
}
|
||||
_ => {
|
||||
panic!("Unsupported distribution type");
|
||||
}
|
||||
}
|
||||
writer.commit().unwrap();
|
||||
}
|
||||
|
||||
// Prepare reader/searcher once.
|
||||
let reader = index
|
||||
.reader_builder()
|
||||
.reload_policy(ReloadPolicy::Manual)
|
||||
.try_into()
|
||||
.unwrap();
|
||||
let searcher = reader.searcher();
|
||||
|
||||
BenchIndex { index, searcher }
|
||||
}
|
||||
|
||||
fn main() {
|
||||
// Prepare corpora with varying scenarios
|
||||
let scenarios = vec![
|
||||
// Dense distribution - random values in small range (0-999)
|
||||
(
|
||||
"dense_values_search_low_value_range".to_string(),
|
||||
10_000_000,
|
||||
"dense",
|
||||
0,
|
||||
9,
|
||||
),
|
||||
(
|
||||
"dense_values_search_high_value_range".to_string(),
|
||||
10_000_000,
|
||||
"dense",
|
||||
990,
|
||||
999,
|
||||
),
|
||||
(
|
||||
"dense_values_search_out_of_range".to_string(),
|
||||
10_000_000,
|
||||
"dense",
|
||||
1000,
|
||||
1002,
|
||||
),
|
||||
(
|
||||
"sparse_values_search_low_value_range".to_string(),
|
||||
10_000_000,
|
||||
"sparse",
|
||||
0,
|
||||
9,
|
||||
),
|
||||
(
|
||||
"sparse_values_search_high_value_range".to_string(),
|
||||
10_000_000,
|
||||
"sparse",
|
||||
9_999_990,
|
||||
9_999_999,
|
||||
),
|
||||
(
|
||||
"sparse_values_search_out_of_range".to_string(),
|
||||
10_000_000,
|
||||
"sparse",
|
||||
10_000_000,
|
||||
10_000_002,
|
||||
),
|
||||
];
|
||||
|
||||
let mut runner = BenchRunner::new();
|
||||
for (scenario_id, n, num_rand_distribution, range_low, range_high) in scenarios {
|
||||
// Build index for this scenario
|
||||
let bench_index = build_shared_indices(n, num_rand_distribution);
|
||||
|
||||
// Create benchmark group
|
||||
let mut group = runner.new_group();
|
||||
|
||||
// Now set the name (this moves scenario_id)
|
||||
group.set_name(scenario_id);
|
||||
|
||||
// Define fast field types
|
||||
let field_names = ["num_rand_fast", "num_asc_fast"];
|
||||
|
||||
// Generate range queries for fast fields
|
||||
for &field_name in &field_names {
|
||||
// Create the range query
|
||||
let field = bench_index.searcher.schema().get_field(field_name).unwrap();
|
||||
let lower_term = Term::from_field_u64(field, range_low);
|
||||
let upper_term = Term::from_field_u64(field, range_high);
|
||||
|
||||
let query = RangeQuery::new(Bound::Included(lower_term), Bound::Included(upper_term));
|
||||
|
||||
run_benchmark_tasks(
|
||||
&mut group,
|
||||
&bench_index,
|
||||
query,
|
||||
field_name,
|
||||
range_low,
|
||||
range_high,
|
||||
);
|
||||
}
|
||||
|
||||
group.run();
|
||||
}
|
||||
}
|
||||
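// Minimal sketch of the term-encoded range built in main() above
// (illustration only): both bounds are inclusive u64 terms on the same
// field, i.e. the query [low TO high] over the encoded key space.
fn _u64_range(field: tantivy::schema::Field, low: u64, high: u64) -> RangeQuery {
    RangeQuery::new(
        Bound::Included(Term::from_field_u64(field, low)),
        Bound::Included(Term::from_field_u64(field, high)),
    )
}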
|
||||
/// Run all benchmark tasks for a given range query and field name
|
||||
fn run_benchmark_tasks(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
field_name: &str,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
) {
|
||||
// Test count
|
||||
add_bench_task_count(
|
||||
bench_group,
|
||||
bench_index,
|
||||
query.clone(),
|
||||
"count",
|
||||
field_name,
|
||||
range_low,
|
||||
range_high,
|
||||
);
|
||||
|
||||
// Test top 100 by the field (ascending order)
|
||||
{
|
||||
let collector_name = format!("top100_by_{}_asc", field_name);
|
||||
let field_name_owned = field_name.to_string();
|
||||
add_bench_task_top100_asc(
|
||||
bench_group,
|
||||
bench_index,
|
||||
query.clone(),
|
||||
&collector_name,
|
||||
field_name,
|
||||
range_low,
|
||||
range_high,
|
||||
field_name_owned,
|
||||
);
|
||||
}
|
||||
|
||||
// Test top 100 by the field (descending order)
|
||||
{
|
||||
let collector_name = format!("top100_by_{}_desc", field_name);
|
||||
let field_name_owned = field_name.to_string();
|
||||
add_bench_task_top100_desc(
|
||||
bench_group,
|
||||
bench_index,
|
||||
query,
|
||||
&collector_name,
|
||||
field_name,
|
||||
range_low,
|
||||
range_high,
|
||||
field_name_owned,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
fn add_bench_task_count(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
collector_name: &str,
|
||||
field_name: &str,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
) {
|
||||
let task_name = format!(
|
||||
"range_{}_[{} TO {}]_{}",
|
||||
field_name, range_low, range_high, collector_name
|
||||
);
|
||||
|
||||
let search_task = CountSearchTask {
|
||||
searcher: bench_index.searcher.clone(),
|
||||
query,
|
||||
};
|
||||
bench_group.register(task_name, move |_| black_box(search_task.run()));
|
||||
}
|
||||
|
||||
fn add_bench_task_docset(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
collector_name: &str,
|
||||
field_name: &str,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
) {
|
||||
let task_name = format!(
|
||||
"range_{}_[{} TO {}]_{}",
|
||||
field_name, range_low, range_high, collector_name
|
||||
);
|
||||
|
||||
let search_task = DocSetSearchTask {
|
||||
searcher: bench_index.searcher.clone(),
|
||||
query,
|
||||
};
|
||||
bench_group.register(task_name, move |_| black_box(search_task.run()));
|
||||
}
|
||||
|
||||
fn add_bench_task_top100_asc(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
collector_name: &str,
|
||||
field_name: &str,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
field_name_owned: String,
|
||||
) {
|
||||
let task_name = format!(
|
||||
"range_{}_[{} TO {}]_{}",
|
||||
field_name, range_low, range_high, collector_name
|
||||
);
|
||||
|
||||
let search_task = Top100AscSearchTask {
|
||||
searcher: bench_index.searcher.clone(),
|
||||
query,
|
||||
field_name: field_name_owned,
|
||||
};
|
||||
bench_group.register(task_name, move |_| black_box(search_task.run()));
|
||||
}
|
||||
|
||||
fn add_bench_task_top100_desc(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
collector_name: &str,
|
||||
field_name: &str,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
field_name_owned: String,
|
||||
) {
|
||||
let task_name = format!(
|
||||
"range_{}_[{} TO {}]_{}",
|
||||
field_name, range_low, range_high, collector_name
|
||||
);
|
||||
|
||||
let search_task = Top100DescSearchTask {
|
||||
searcher: bench_index.searcher.clone(),
|
||||
query,
|
||||
field_name: field_name_owned,
|
||||
};
|
||||
bench_group.register(task_name, move |_| black_box(search_task.run()));
|
||||
}
|
||||
|
||||
struct CountSearchTask {
|
||||
searcher: Searcher,
|
||||
query: RangeQuery,
|
||||
}
|
||||
|
||||
impl CountSearchTask {
|
||||
#[inline(never)]
|
||||
pub fn run(&self) -> usize {
|
||||
self.searcher.search(&self.query, &Count).unwrap()
|
||||
}
|
||||
}
|
||||
|
||||
struct DocSetSearchTask {
|
||||
searcher: Searcher,
|
||||
query: RangeQuery,
|
||||
}
|
||||
|
||||
impl DocSetSearchTask {
|
||||
#[inline(never)]
|
||||
pub fn run(&self) -> usize {
|
||||
let result = self.searcher.search(&self.query, &DocSetCollector).unwrap();
|
||||
result.len()
|
||||
}
|
||||
}
|
||||
|
||||
struct Top100AscSearchTask {
|
||||
searcher: Searcher,
|
||||
query: RangeQuery,
|
||||
field_name: String,
|
||||
}
|
||||
|
||||
impl Top100AscSearchTask {
|
||||
#[inline(never)]
|
||||
pub fn run(&self) -> usize {
|
||||
let collector =
|
||||
TopDocs::with_limit(100).order_by_fast_field::<u64>(&self.field_name, Order::Asc);
|
||||
let result = self.searcher.search(&self.query, &collector).unwrap();
|
||||
for (_score, doc_address) in &result {
|
||||
let _doc: tantivy::TantivyDocument = self.searcher.doc(*doc_address).unwrap();
|
||||
}
|
||||
result.len()
|
||||
}
|
||||
}
|
||||
|
||||
struct Top100DescSearchTask {
|
||||
searcher: Searcher,
|
||||
query: RangeQuery,
|
||||
field_name: String,
|
||||
}
|
||||
|
||||
impl Top100DescSearchTask {
|
||||
#[inline(never)]
|
||||
pub fn run(&self) -> usize {
|
||||
let collector =
|
||||
TopDocs::with_limit(100).order_by_fast_field::<u64>(&self.field_name, Order::Desc);
|
||||
let result = self.searcher.search(&self.query, &collector).unwrap();
|
||||
for (_score, doc_address) in &result {
|
||||
let _doc: tantivy::TantivyDocument = self.searcher.doc(*doc_address).unwrap();
|
||||
}
|
||||
result.len()
|
||||
}
|
||||
}
|
||||
@@ -1,260 +0,0 @@
|
||||
use std::fmt::Display;
|
||||
use std::net::Ipv6Addr;
|
||||
use std::ops::RangeInclusive;
|
||||
|
||||
use binggan::plugins::PeakMemAllocPlugin;
|
||||
use binggan::{black_box, BenchRunner, OutputValue, PeakMemAlloc, INSTRUMENTED_SYSTEM};
|
||||
use columnar::MonotonicallyMappableToU128;
|
||||
use rand::rngs::StdRng;
|
||||
use rand::{Rng, SeedableRng};
|
||||
use tantivy::collector::{Count, TopDocs};
|
||||
use tantivy::query::QueryParser;
|
||||
use tantivy::schema::*;
|
||||
use tantivy::{doc, Index};
|
||||
|
||||
#[global_allocator]
|
||||
pub static GLOBAL: &PeakMemAlloc<std::alloc::System> = &INSTRUMENTED_SYSTEM;
|
||||
|
||||
fn main() {
|
||||
bench_range_query();
|
||||
}
|
||||
|
||||
fn bench_range_query() {
|
||||
let index = get_index_0_to_100();
|
||||
let mut runner = BenchRunner::new();
|
||||
runner.add_plugin(PeakMemAllocPlugin::new(GLOBAL));
|
||||
|
||||
runner.set_name("range_query on u64");
|
||||
let field_name_and_descr: Vec<_> = vec![
|
||||
("id", "Single Valued Range Field"),
|
||||
("ids", "Multi Valued Range Field"),
|
||||
];
|
||||
let range_num_hits = vec![
|
||||
("90_percent", get_90_percent()),
|
||||
("10_percent", get_10_percent()),
|
||||
("1_percent", get_1_percent()),
|
||||
];
|
||||
|
||||
test_range(&mut runner, &index, &field_name_and_descr, range_num_hits);
|
||||
|
||||
runner.set_name("range_query on ip");
|
||||
let field_name_and_descr: Vec<_> = vec![
|
||||
("ip", "Single Valued Range Field"),
|
||||
("ips", "Multi Valued Range Field"),
|
||||
];
|
||||
let range_num_hits = vec![
|
||||
("90_percent", get_90_percent_ip()),
|
||||
("10_percent", get_10_percent_ip()),
|
||||
("1_percent", get_1_percent_ip()),
|
||||
];
|
||||
|
||||
test_range(&mut runner, &index, &field_name_and_descr, range_num_hits);
|
||||
}
|
||||
|
||||
fn test_range<T: Display>(
|
||||
runner: &mut BenchRunner,
|
||||
index: &Index,
|
||||
field_name_and_descr: &[(&str, &str)],
|
||||
range_num_hits: Vec<(&str, RangeInclusive<T>)>,
|
||||
) {
|
||||
for (field, suffix) in field_name_and_descr {
|
||||
let term_num_hits = vec![
|
||||
("", ""),
|
||||
("1_percent", "veryfew"),
|
||||
("10_percent", "few"),
|
||||
("90_percent", "most"),
|
||||
];
|
||||
let mut group = runner.new_group();
|
||||
group.set_name(suffix);
|
||||
// All range/term intersection combinations.
|
||||
for (range_name, range) in &range_num_hits {
|
||||
for (term_name, term) in &term_num_hits {
|
||||
let index = &index;
|
||||
let test_name = if term_name.is_empty() {
|
||||
format!("id_range_hit_{}", range_name)
|
||||
} else {
|
||||
format!(
|
||||
"id_range_hit_{}_intersect_with_term_{}",
|
||||
range_name, term_name
|
||||
)
|
||||
};
|
||||
group.register(test_name, move |_| {
|
||||
let query = if term_name.is_empty() {
|
||||
"".to_string()
|
||||
} else {
|
||||
format!("AND id_name:{}", term)
|
||||
};
|
||||
black_box(execute_query(field, range, &query, index));
|
||||
});
|
||||
}
|
||||
}
|
||||
group.run();
|
||||
}
|
||||
}
|
||||
|
||||
fn get_index_0_to_100() -> Index {
|
||||
let mut rng = StdRng::from_seed([1u8; 32]);
|
||||
let num_vals = 100_000;
|
||||
let docs: Vec<_> = (0..num_vals)
|
||||
.map(|_i| {
|
||||
let id_name = if rng.random_bool(0.01) {
|
||||
"veryfew".to_string() // 1%
|
||||
} else if rng.random_bool(0.1) {
|
||||
"few".to_string() // 9%
|
||||
} else {
|
||||
"most".to_string() // 90%
|
||||
};
|
||||
Doc {
|
||||
id_name,
|
||||
id: rng.random_range(0..100),
|
||||
// Multiply by 1000 so that most buckets land in the compact space.
|
||||
// The benches rely on this range to select n percent of the elements
|
||||
// with the methods below.
|
||||
ip: Ipv6Addr::from_u128(rng.random_range(0..100) * 1000),
|
||||
}
|
||||
})
|
||||
.collect();
|
||||
|
||||
create_index_from_docs(&docs)
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct Doc {
|
||||
pub id_name: String,
|
||||
pub id: u64,
|
||||
pub ip: Ipv6Addr,
|
||||
}
|
||||
|
||||
pub fn create_index_from_docs(docs: &[Doc]) -> Index {
|
||||
let mut schema_builder = Schema::builder();
|
||||
let id_u64_field = schema_builder.add_u64_field("id", INDEXED | STORED | FAST);
|
||||
let ids_u64_field =
|
||||
schema_builder.add_u64_field("ids", NumericOptions::default().set_fast().set_indexed());
|
||||
|
||||
let id_f64_field = schema_builder.add_f64_field("id_f64", INDEXED | STORED | FAST);
|
||||
let ids_f64_field = schema_builder.add_f64_field(
|
||||
"ids_f64",
|
||||
NumericOptions::default().set_fast().set_indexed(),
|
||||
);
|
||||
|
||||
let id_i64_field = schema_builder.add_i64_field("id_i64", INDEXED | STORED | FAST);
|
||||
let ids_i64_field = schema_builder.add_i64_field(
|
||||
"ids_i64",
|
||||
NumericOptions::default().set_fast().set_indexed(),
|
||||
);
|
||||
|
||||
let text_field = schema_builder.add_text_field("id_name", STRING | STORED);
|
||||
let text_field2 = schema_builder.add_text_field("id_name_fast", STRING | STORED | FAST);
|
||||
|
||||
let ip_field = schema_builder.add_ip_addr_field("ip", FAST);
|
||||
let ips_field = schema_builder.add_ip_addr_field("ips", FAST);
|
||||
|
||||
let schema = schema_builder.build();
|
||||
|
||||
let index = Index::create_in_ram(schema);
|
||||
|
||||
{
|
||||
let mut index_writer = index.writer_with_num_threads(1, 50_000_000).unwrap();
|
||||
for doc in docs.iter() {
|
||||
index_writer
|
||||
.add_document(doc!(
|
||||
ids_i64_field => doc.id as i64,
|
||||
ids_i64_field => doc.id as i64,
|
||||
ids_f64_field => doc.id as f64,
|
||||
ids_f64_field => doc.id as f64,
|
||||
ids_u64_field => doc.id,
|
||||
ids_u64_field => doc.id,
|
||||
id_u64_field => doc.id,
|
||||
id_f64_field => doc.id as f64,
|
||||
id_i64_field => doc.id as i64,
|
||||
text_field => doc.id_name.to_string(),
|
||||
text_field2 => doc.id_name.to_string(),
|
||||
ips_field => doc.ip,
|
||||
ips_field => doc.ip,
|
||||
ip_field => doc.ip,
|
||||
))
|
||||
.unwrap();
|
||||
}
|
||||
|
||||
index_writer.commit().unwrap();
|
||||
}
|
||||
index
|
||||
}
|
||||
|
||||
fn get_90_percent() -> RangeInclusive<u64> {
|
||||
0..=90
|
||||
}
|
||||
|
||||
fn get_10_percent() -> RangeInclusive<u64> {
|
||||
0..=10
|
||||
}
|
||||
|
||||
fn get_1_percent() -> RangeInclusive<u64> {
|
||||
10..=10
|
||||
}
|
||||
|
||||
fn get_90_percent_ip() -> RangeInclusive<Ipv6Addr> {
|
||||
let start = Ipv6Addr::from_u128(0);
|
||||
let end = Ipv6Addr::from_u128(90 * 1000);
|
||||
start..=end
|
||||
}
|
||||
|
||||
fn get_10_percent_ip() -> RangeInclusive<Ipv6Addr> {
|
||||
let start = Ipv6Addr::from_u128(0);
|
||||
let end = Ipv6Addr::from_u128(10 * 1000);
|
||||
start..=end
|
||||
}
|
||||
|
||||
fn get_1_percent_ip() -> RangeInclusive<Ipv6Addr> {
|
||||
let start = Ipv6Addr::from_u128(10 * 1000);
|
||||
let end = Ipv6Addr::from_u128(10 * 1000);
|
||||
start..=end
|
||||
}
|
||||
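// Selectivity arithmetic behind the helpers above: ids are uniform in 0..100
// (ips are the same values times 1000), so an inclusive range spanning k
// distinct values matches roughly k% of the documents.
fn _selectivity_example() {
    assert_eq!((0u64..=90).count(), 91); // ~90% bucket
    assert_eq!((0u64..=10).count(), 11); // ~10% bucket
    assert_eq!((10u64..=10).count(), 1); // ~1% bucket
}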
|
||||
struct NumHits {
|
||||
count: usize,
|
||||
}
|
||||
impl OutputValue for NumHits {
|
||||
fn column_title() -> &'static str {
|
||||
"NumHits"
|
||||
}
|
||||
fn format(&self) -> Option<String> {
|
||||
Some(self.count.to_string())
|
||||
}
|
||||
}
|
||||
|
||||
fn execute_query<T: Display>(
|
||||
field: &str,
|
||||
id_range: &RangeInclusive<T>,
|
||||
suffix: &str,
|
||||
index: &Index,
|
||||
) -> NumHits {
|
||||
let gen_query_inclusive = |from: &T, to: &T| {
|
||||
format!(
|
||||
"{}:[{} TO {}] {}",
|
||||
field,
|
||||
&from.to_string(),
|
||||
&to.to_string(),
|
||||
suffix
|
||||
)
|
||||
};
|
||||
|
||||
let query = gen_query_inclusive(id_range.start(), id_range.end());
|
||||
execute_query_(&query, index)
|
||||
}
|
||||
|
||||
fn execute_query_(query: &str, index: &Index) -> NumHits {
|
||||
let query_from_text = |text: &str| {
|
||||
QueryParser::for_index(index, vec![])
|
||||
.parse_query(text)
|
||||
.unwrap()
|
||||
};
|
||||
let query = query_from_text(query);
|
||||
let reader = index.reader().unwrap();
|
||||
let searcher = reader.searcher();
|
||||
let num_hits = searcher
|
||||
.search(&query, &(TopDocs::with_limit(10).order_by_score(), Count))
|
||||
.unwrap()
|
||||
.1;
|
||||
NumHits { count: num_hits }
|
||||
}
|
||||
@@ -1,113 +0,0 @@
|
||||
// Benchmarks regex query that matches all terms in a synthetic index.
|
||||
//
|
||||
// Corpus model:
|
||||
// - N unique terms: t000000, t000001, ...
|
||||
// - M docs
|
||||
// - K tokens per doc: doc i gets terms derived from (i, token_index)
|
||||
//
|
||||
// Query:
|
||||
// - Regex "t.*" to match all terms
|
||||
//
|
||||
// Run with:
|
||||
// - cargo bench --bench regex_all_terms
|
||||
//
|
||||
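// Worked example of the token format used below (illustration only): with
// num_terms = 10_000 the zero-padded width is 5, so term id 42 renders as
// "t00042"; the regex "t.*" therefore matches every token in the index.
fn _token_format_example() {
    let term_width = 10_000usize.to_string().len(); // == 5
    let term_id = 42;
    assert_eq!(format!("t{term_id:0term_width$}"), "t00042");
}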
|
||||
use std::fmt::Write;
|
||||
|
||||
use binggan::{black_box, BenchRunner};
|
||||
use tantivy::collector::Count;
|
||||
use tantivy::query::RegexQuery;
|
||||
use tantivy::schema::{Schema, TEXT};
|
||||
use tantivy::{doc, Index, ReloadPolicy};
|
||||
|
||||
const HEAP_SIZE_BYTES: usize = 200_000_000;
|
||||
|
||||
#[derive(Clone, Copy)]
|
||||
struct BenchConfig {
|
||||
num_terms: usize,
|
||||
num_docs: usize,
|
||||
tokens_per_doc: usize,
|
||||
}
|
||||
|
||||
fn main() {
|
||||
let configs = default_configs();
|
||||
|
||||
let mut runner = BenchRunner::new();
|
||||
for config in configs {
|
||||
let (index, text_field) = build_index(config, HEAP_SIZE_BYTES);
|
||||
let reader = index
|
||||
.reader_builder()
|
||||
.reload_policy(ReloadPolicy::Manual)
|
||||
.try_into()
|
||||
.expect("reader");
|
||||
let searcher = reader.searcher();
|
||||
let query = RegexQuery::from_pattern("t.*", text_field).expect("regex query");
|
||||
|
||||
let mut group = runner.new_group();
|
||||
group.set_name(format!(
|
||||
"regex_all_terms_t{}_d{}_k{}",
|
||||
config.num_terms, config.num_docs, config.tokens_per_doc
|
||||
));
|
||||
group.register("regex_count", move |_| {
|
||||
let count = searcher.search(&query, &Count).expect("search");
|
||||
black_box(count);
|
||||
});
|
||||
group.run();
|
||||
}
|
||||
}
|
||||
|
||||
fn default_configs() -> Vec<BenchConfig> {
|
||||
vec![
|
||||
BenchConfig {
|
||||
num_terms: 10_000,
|
||||
num_docs: 100_000,
|
||||
tokens_per_doc: 1,
|
||||
},
|
||||
BenchConfig {
|
||||
num_terms: 10_000,
|
||||
num_docs: 100_000,
|
||||
tokens_per_doc: 8,
|
||||
},
|
||||
BenchConfig {
|
||||
num_terms: 100_000,
|
||||
num_docs: 100_000,
|
||||
tokens_per_doc: 1,
|
||||
},
|
||||
BenchConfig {
|
||||
num_terms: 100_000,
|
||||
num_docs: 100_000,
|
||||
tokens_per_doc: 8,
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
fn build_index(config: BenchConfig, heap_size_bytes: usize) -> (Index, tantivy::schema::Field) {
|
||||
let mut schema_builder = Schema::builder();
|
||||
let text_field = schema_builder.add_text_field("text", TEXT);
|
||||
let schema = schema_builder.build();
|
||||
let index = Index::create_in_ram(schema);
|
||||
|
||||
let term_width = config.num_terms.to_string().len();
|
||||
{
|
||||
let mut writer = index
|
||||
.writer_with_num_threads(1, heap_size_bytes)
|
||||
.expect("writer");
|
||||
let mut buffer = String::new();
|
||||
for doc_id in 0..config.num_docs {
|
||||
buffer.clear();
|
||||
for token_idx in 0..config.tokens_per_doc {
|
||||
if token_idx > 0 {
|
||||
buffer.push(' ');
|
||||
}
|
||||
let term_id = (doc_id * config.tokens_per_doc + token_idx) % config.num_terms;
|
||||
write!(&mut buffer, "t{term_id:0term_width$}").expect("write token");
|
||||
}
|
||||
writer
|
||||
.add_document(doc!(text_field => buffer.as_str()))
|
||||
.expect("add_document");
|
||||
}
|
||||
writer.commit().expect("commit");
|
||||
}
|
||||
|
||||
(index, text_field)
|
||||
}
|
||||
@@ -1,421 +0,0 @@
|
||||
// This benchmark compares different approaches for retrieving string values:
|
||||
//
|
||||
// 1. Fast Field Approach: retrieves string values via term_ords() and ord_to_str()
|
||||
//
|
||||
// 2. Doc Store Approach: retrieves string values via searcher.doc() and field extraction
|
||||
//
|
||||
// The benchmark includes various data distributions:
|
||||
// - Dense Sequential: Sequential document IDs with dense data
|
||||
// - Dense Random: Random document IDs with dense data
|
||||
// - Sparse Sequential: Sequential document IDs with sparse data
|
||||
// - Sparse Random: Random document IDs with sparse data
|
||||
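// Compact sketch of the fast-field path benchmarked below (method names
// mirror the benchmark code; assumes one value per doc): the string is
// resolved through the term dictionary instead of the row store.
fn _fast_field_lookup(searcher: &Searcher, addr: tantivy::DocAddress) -> Option<String> {
    let segment_reader = &searcher.segment_readers()[addr.segment_ord as usize];
    let str_column = segment_reader.fast_fields().str("str_fast").ok()??;
    let term_ord = str_column.term_ords(addr.doc_id).next()?;
    let mut out = String::new();
    str_column.ord_to_str(term_ord, &mut out).ok()?;
    Some(out)
}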
use std::ops::Bound;
|
||||
|
||||
use binggan::{black_box, BenchGroup, BenchRunner};
|
||||
use rand::prelude::*;
|
||||
use rand::rngs::StdRng;
|
||||
use rand::SeedableRng;
|
||||
use tantivy::collector::{Count, DocSetCollector};
|
||||
use tantivy::query::RangeQuery;
|
||||
use tantivy::schema::document::TantivyDocument;
|
||||
use tantivy::schema::{Schema, Value, FAST, STORED, STRING};
|
||||
use tantivy::{doc, Index, ReloadPolicy, Searcher, Term};
|
||||
|
||||
#[derive(Clone)]
|
||||
struct BenchIndex {
|
||||
#[allow(dead_code)]
|
||||
index: Index,
|
||||
searcher: Searcher,
|
||||
}
|
||||
|
||||
fn build_shared_indices(num_docs: usize, distribution: &str) -> BenchIndex {
|
||||
// Schema with string fast field and stored field for doc access
|
||||
let mut schema_builder = Schema::builder();
|
||||
let f_str_fast = schema_builder.add_text_field("str_fast", STRING | STORED | FAST);
|
||||
let f_str_stored = schema_builder.add_text_field("str_stored", STRING | STORED);
|
||||
let schema = schema_builder.build();
|
||||
let index = Index::create_in_ram(schema.clone());
|
||||
|
||||
// Populate index with stable RNG for reproducibility.
|
||||
let mut rng = StdRng::from_seed([7u8; 32]);
|
||||
|
||||
{
|
||||
let mut writer = index.writer_with_num_threads(1, 4_000_000_000).unwrap();
|
||||
|
||||
match distribution {
|
||||
"dense_random" => {
|
||||
for _doc_id in 0..num_docs {
|
||||
let suffix = rng.gen_range(0u64..1000u64);
|
||||
let str_val = format!("str_{:03}", suffix);
|
||||
|
||||
writer
|
||||
.add_document(doc!(
|
||||
f_str_fast=>str_val.clone(),
|
||||
f_str_stored=>str_val,
|
||||
))
|
||||
.unwrap();
|
||||
}
|
||||
}
|
||||
"dense_sequential" => {
|
||||
for doc_id in 0..num_docs {
|
||||
let suffix = doc_id as u64 % 1000;
|
||||
let str_val = format!("str_{:03}", suffix);
|
||||
|
||||
writer
|
||||
.add_document(doc!(
|
||||
f_str_fast=>str_val.clone(),
|
||||
f_str_stored=>str_val,
|
||||
))
|
||||
.unwrap();
|
||||
}
|
||||
}
|
||||
"sparse_random" => {
|
||||
for _doc_id in 0..num_docs {
|
||||
let suffix = rng.gen_range(0u64..1000000u64);
|
||||
let str_val = format!("str_{:07}", suffix);
|
||||
|
||||
writer
|
||||
.add_document(doc!(
|
||||
f_str_fast=>str_val.clone(),
|
||||
f_str_stored=>str_val,
|
||||
))
|
||||
.unwrap();
|
||||
}
|
||||
}
|
||||
"sparse_sequential" => {
|
||||
for doc_id in 0..num_docs {
|
||||
let suffix = doc_id as u64;
|
||||
let str_val = format!("str_{:07}", suffix);
|
||||
|
||||
writer
|
||||
.add_document(doc!(
|
||||
f_str_fast=>str_val.clone(),
|
||||
f_str_stored=>str_val,
|
||||
))
|
||||
.unwrap();
|
||||
}
|
||||
}
|
||||
_ => {
|
||||
panic!("Unsupported distribution type");
|
||||
}
|
||||
}
|
||||
writer.commit().unwrap();
|
||||
}
|
||||
|
||||
// Prepare reader/searcher once.
|
||||
let reader = index
|
||||
.reader_builder()
|
||||
.reload_policy(ReloadPolicy::Manual)
|
||||
.try_into()
|
||||
.unwrap();
|
||||
let searcher = reader.searcher();
|
||||
|
||||
BenchIndex { index, searcher }
|
||||
}
|
||||
|
||||
fn main() {
|
||||
// Prepare corpora with varying scenarios
|
||||
let scenarios = vec![
|
||||
(
|
||||
"dense_random_search_low_range".to_string(),
|
||||
1_000_000,
|
||||
"dense_random",
|
||||
0,
|
||||
9,
|
||||
),
|
||||
(
|
||||
"dense_random_search_high_range".to_string(),
|
||||
1_000_000,
|
||||
"dense_random",
|
||||
990,
|
||||
999,
|
||||
),
|
||||
(
|
||||
"dense_sequential_search_low_range".to_string(),
|
||||
1_000_000,
|
||||
"dense_sequential",
|
||||
0,
|
||||
9,
|
||||
),
|
||||
(
|
||||
"dense_sequential_search_high_range".to_string(),
|
||||
1_000_000,
|
||||
"dense_sequential",
|
||||
990,
|
||||
999,
|
||||
),
|
||||
(
|
||||
"sparse_random_search_low_range".to_string(),
|
||||
1_000_000,
|
||||
"sparse_random",
|
||||
0,
|
||||
9999,
|
||||
),
|
||||
(
|
||||
"sparse_random_search_high_range".to_string(),
|
||||
1_000_000,
|
||||
"sparse_random",
|
||||
990_000,
|
||||
999_999,
|
||||
),
|
||||
(
|
||||
"sparse_sequential_search_low_range".to_string(),
|
||||
1_000_000,
|
||||
"sparse_sequential",
|
||||
0,
|
||||
9999,
|
||||
),
|
||||
(
|
||||
"sparse_sequential_search_high_range".to_string(),
|
||||
1_000_000,
|
||||
"sparse_sequential",
|
||||
990_000,
|
||||
999_999,
|
||||
),
|
||||
];
|
||||
|
||||
let mut runner = BenchRunner::new();
|
||||
for (scenario_id, n, distribution, range_low, range_high) in scenarios {
|
||||
let bench_index = build_shared_indices(n, distribution);
|
||||
let mut group = runner.new_group();
|
||||
group.set_name(scenario_id);
|
||||
|
||||
let field = bench_index.searcher.schema().get_field("str_fast").unwrap();
|
||||
|
||||
let (lower_str, upper_str) =
|
||||
if distribution == "dense_sequential" || distribution == "dense_random" {
|
||||
(
|
||||
format!("str_{:03}", range_low),
|
||||
format!("str_{:03}", range_high),
|
||||
)
|
||||
} else {
|
||||
(
|
||||
format!("str_{:07}", range_low),
|
||||
format!("str_{:07}", range_high),
|
||||
)
|
||||
};
|
||||
|
||||
let lower_term = Term::from_field_text(field, &lower_str);
|
||||
let upper_term = Term::from_field_text(field, &upper_str);
|
||||
|
||||
let query = RangeQuery::new(Bound::Included(lower_term), Bound::Included(upper_term));
|
||||
|
||||
run_benchmark_tasks(&mut group, &bench_index, query, range_low, range_high);
|
||||
|
||||
group.run();
|
||||
}
|
||||
}
|
||||
|
||||
/// Run all benchmark tasks for a given range query
|
||||
fn run_benchmark_tasks(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
) {
|
||||
// Test count of matching documents
|
||||
add_bench_task_count(
|
||||
bench_group,
|
||||
bench_index,
|
||||
query.clone(),
|
||||
range_low,
|
||||
range_high,
|
||||
);
|
||||
|
||||
// Test fetching all DocIds of matching documents
|
||||
add_bench_task_docset(
|
||||
bench_group,
|
||||
bench_index,
|
||||
query.clone(),
|
||||
range_low,
|
||||
range_high,
|
||||
);
|
||||
|
||||
// Test fetching all string fast field values of matching documents
|
||||
add_bench_task_fetch_all_strings(
|
||||
bench_group,
|
||||
bench_index,
|
||||
query.clone(),
|
||||
range_low,
|
||||
range_high,
|
||||
);
|
||||
|
||||
// Test fetching all string values of matching documents through doc() method
|
||||
add_bench_task_fetch_all_strings_from_doc(
|
||||
bench_group,
|
||||
bench_index,
|
||||
query,
|
||||
range_low,
|
||||
range_high,
|
||||
);
|
||||
}
|
||||
|
||||
fn add_bench_task_count(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
) {
|
||||
let task_name = format!("string_search_count_[{}-{}]", range_low, range_high);
|
||||
|
||||
let search_task = CountSearchTask {
|
||||
searcher: bench_index.searcher.clone(),
|
||||
query,
|
||||
};
|
||||
bench_group.register(task_name, move |_| black_box(search_task.run()));
|
||||
}
|
||||
|
||||
fn add_bench_task_docset(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
) {
|
||||
let task_name = format!("string_fetch_all_docset_[{}-{}]", range_low, range_high);
|
||||
|
||||
let search_task = DocSetSearchTask {
|
||||
searcher: bench_index.searcher.clone(),
|
||||
query,
|
||||
};
|
||||
bench_group.register(task_name, move |_| black_box(search_task.run()));
|
||||
}
|
||||
|
||||
fn add_bench_task_fetch_all_strings(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
) {
|
||||
let task_name = format!(
|
||||
"string_fastfield_fetch_all_strings_[{}-{}]",
|
||||
range_low, range_high
|
||||
);
|
||||
|
||||
let search_task = FetchAllStringsSearchTask {
|
||||
searcher: bench_index.searcher.clone(),
|
||||
query,
|
||||
};
|
||||
|
||||
bench_group.register(task_name, move |_| {
|
||||
let result = black_box(search_task.run());
|
||||
result.len()
|
||||
});
|
||||
}
|
||||
|
||||
fn add_bench_task_fetch_all_strings_from_doc(
|
||||
bench_group: &mut BenchGroup,
|
||||
bench_index: &BenchIndex,
|
||||
query: RangeQuery,
|
||||
range_low: u64,
|
||||
range_high: u64,
|
||||
) {
|
||||
let task_name = format!(
|
||||
"string_doc_fetch_all_strings_[{}-{}]",
|
||||
range_low, range_high
|
||||
);
|
||||
|
||||
let search_task = FetchAllStringsFromDocTask {
|
||||
searcher: bench_index.searcher.clone(),
|
||||
query,
|
||||
};
|
||||
|
||||
bench_group.register(task_name, move |_| {
|
||||
let result = black_box(search_task.run());
|
||||
result.len()
|
||||
});
|
||||
}
|
||||
|
||||
struct CountSearchTask {
|
||||
searcher: Searcher,
|
||||
query: RangeQuery,
|
||||
}
|
||||
|
||||
impl CountSearchTask {
|
||||
#[inline(never)]
|
||||
pub fn run(&self) -> usize {
|
||||
self.searcher.search(&self.query, &Count).unwrap()
|
||||
}
|
||||
}
|
||||
|
||||
struct DocSetSearchTask {
|
||||
searcher: Searcher,
|
||||
query: RangeQuery,
|
||||
}
|
||||
|
||||
impl DocSetSearchTask {
|
||||
#[inline(never)]
|
||||
pub fn run(&self) -> usize {
|
||||
let result = self.searcher.search(&self.query, &DocSetCollector).unwrap();
|
||||
result.len()
|
||||
}
|
||||
}
|
||||
|
||||
struct FetchAllStringsSearchTask {
|
||||
searcher: Searcher,
|
||||
query: RangeQuery,
|
||||
}
|
||||
|
||||
impl FetchAllStringsSearchTask {
|
||||
#[inline(never)]
|
||||
pub fn run(&self) -> Vec<String> {
|
||||
let doc_addresses = self.searcher.search(&self.query, &DocSetCollector).unwrap();
|
||||
let mut docs = doc_addresses.into_iter().collect::<Vec<_>>();
|
||||
docs.sort();
|
||||
let mut strings = Vec::with_capacity(docs.len());
|
||||
|
||||
for doc_address in docs {
|
||||
let segment_reader = &self.searcher.segment_readers()[doc_address.segment_ord as usize];
|
||||
let str_column_opt = segment_reader.fast_fields().str("str_fast");
|
||||
|
||||
if let Ok(Some(str_column)) = str_column_opt {
|
||||
let doc_id = doc_address.doc_id;
|
||||
let term_ord = str_column.term_ords(doc_id).next().unwrap();
|
||||
let mut str_buffer = String::new();
|
||||
if str_column.ord_to_str(term_ord, &mut str_buffer).is_ok() {
|
||||
strings.push(str_buffer);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
strings
|
||||
}
|
||||
}
|
||||
|
||||
struct FetchAllStringsFromDocTask {
|
||||
searcher: Searcher,
|
||||
query: RangeQuery,
|
||||
}
|
||||
|
||||
impl FetchAllStringsFromDocTask {
|
||||
#[inline(never)]
|
||||
pub fn run(&self) -> Vec<String> {
|
||||
let doc_addresses = self.searcher.search(&self.query, &DocSetCollector).unwrap();
|
||||
let mut docs = doc_addresses.into_iter().collect::<Vec<_>>();
|
||||
docs.sort();
|
||||
let mut strings = Vec::with_capacity(docs.len());
|
||||
|
||||
let str_stored_field = self
|
||||
.searcher
|
||||
.schema()
|
||||
.get_field("str_stored")
|
||||
.expect("str_stored field should exist");
|
||||
|
||||
for doc_address in docs {
|
||||
// Get the document from the doc store (row store access)
|
||||
if let Ok(doc) = self.searcher.doc::<TantivyDocument>(doc_address) {
|
||||
// Extract string values from the stored field
|
||||
if let Some(field_value) = doc.get_first(str_stored_field) {
|
||||
if let Some(text) = field_value.as_value().as_str() {
|
||||
strings.push(text.to_string());
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
strings
|
||||
}
|
||||
}
|
||||
@@ -1,6 +1,6 @@
|
||||
[package]
|
||||
name = "tantivy-bitpacker"
|
||||
version = "0.9.0"
|
||||
version = "0.8.0"
|
||||
edition = "2024"
|
||||
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
|
||||
license = "MIT"
|
||||
@@ -18,5 +18,5 @@ homepage = "https://github.com/quickwit-oss/tantivy"
|
||||
bitpacking = { version = "0.9.2", default-features = false, features = ["bitpacker1x"] }
|
||||
|
||||
[dev-dependencies]
|
||||
rand = "0.9"
|
||||
rand = "0.8"
|
||||
proptest = "1"
|
||||
|
||||
@@ -4,8 +4,8 @@ extern crate test;
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use rand::rng;
|
||||
use rand::seq::IteratorRandom;
|
||||
use rand::thread_rng;
|
||||
use tantivy_bitpacker::{BitPacker, BitUnpacker, BlockedBitpacker};
|
||||
use test::Bencher;
|
||||
|
||||
@@ -27,7 +27,7 @@ mod tests {
|
||||
let num_els = 1_000_000u32;
|
||||
let bit_unpacker = BitUnpacker::new(bit_width);
|
||||
let data = create_bitpacked_data(bit_width, num_els);
|
||||
let idxs: Vec<u32> = (0..num_els).choose_multiple(&mut rng(), 100_000);
|
||||
let idxs: Vec<u32> = (0..num_els).choose_multiple(&mut thread_rng(), 100_000);
|
||||
b.iter(|| {
|
||||
let mut out = 0u64;
|
||||
for &idx in &idxs {
|
||||
|
||||
@@ -1,3 +1,7 @@
|
||||
// `div_ceil` generates code that is not optimal (it must accept the full
|
||||
// range of u32), and perf matters here, so the manual form is kept.
|
||||
#![allow(clippy::manual_div_ceil)]
|
||||
|
||||
use std::io;
|
||||
use std::ops::{Range, RangeInclusive};
|
||||
|
||||
@@ -48,7 +52,7 @@ impl BitPacker {
|
||||
|
||||
pub fn flush<TWrite: io::Write + ?Sized>(&mut self, output: &mut TWrite) -> io::Result<()> {
|
||||
if self.mini_buffer_written > 0 {
|
||||
let num_bytes = self.mini_buffer_written.div_ceil(8);
|
||||
let num_bytes = (self.mini_buffer_written + 7) / 8;
|
||||
let bytes = self.mini_buffer.to_le_bytes();
|
||||
output.write_all(&bytes[..num_bytes])?;
|
||||
self.mini_buffer_written = 0;
|
||||
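// Sketch of the trade-off in the hunk above (illustration, not crate code):
// `div_ceil` stays correct for the full input range, so it cannot be lowered
// to a plain add-and-shift; the manual form can overflow near the top of the
// range but compiles tighter, and the inputs here are small.
fn _bytes_manual(bits: usize) -> usize {
    (bits + 7) / 8 // may overflow when bits > usize::MAX - 7
}
fn _bytes_div_ceil(bits: usize) -> usize {
    bits.div_ceil(8) // overflow-free across the whole range
}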
@@ -138,7 +142,7 @@ impl BitUnpacker {
|
||||
|
||||
// We use `usize` here to avoid overflow issues.
|
||||
let end_bit_read = (end_idx as usize) * self.num_bits;
|
||||
let end_byte_read = end_bit_read.div_ceil(8);
|
||||
let end_byte_read = (end_bit_read + 7) / 8;
|
||||
assert!(
|
||||
end_byte_read <= data.len(),
|
||||
"Requested index is out of bounds."
|
||||
@@ -258,7 +262,7 @@ mod test {
|
||||
bitpacker.write(val, num_bits, &mut data).unwrap();
|
||||
}
|
||||
bitpacker.close(&mut data).unwrap();
|
||||
assert_eq!(data.len(), ((num_bits as usize) * len).div_ceil(8));
|
||||
assert_eq!(data.len(), ((num_bits as usize) * len + 7) / 8);
|
||||
let bitunpacker = BitUnpacker::new(num_bits);
|
||||
(bitunpacker, vals, data)
|
||||
}
|
||||
@@ -304,7 +308,7 @@ mod test {
|
||||
bitpacker.write(val, num_bits, &mut buffer).unwrap();
|
||||
}
|
||||
bitpacker.flush(&mut buffer).unwrap();
|
||||
assert_eq!(buffer.len(), (vals.len() * num_bits as usize).div_ceil(8));
|
||||
assert_eq!(buffer.len(), (vals.len() * num_bits as usize + 7) / 8);
|
||||
let bitunpacker = BitUnpacker::new(num_bits);
|
||||
let max_val = if num_bits == 64 {
|
||||
u64::MAX
|
||||
|
||||
@@ -140,7 +140,6 @@ impl BlockedBitpacker {
|
||||
pub fn iter(&self) -> impl Iterator<Item = u64> + '_ {
|
||||
// todo performance: we could decompress a whole block and cache it instead
|
||||
let bitpacked_elems = self.offset_and_bits.len() * BLOCK_SIZE;
|
||||
|
||||
(0..bitpacked_elems)
|
||||
.map(move |idx| self.get(idx))
|
||||
.chain(self.buffer.iter().cloned())
|
||||
|
||||
@@ -19,7 +19,7 @@ fn u32_to_i32(val: u32) -> i32 {
|
||||
#[inline]
|
||||
unsafe fn u32_to_i32_avx2(vals_u32x8s: DataType) -> DataType {
|
||||
const HIGHEST_BIT_MASK: DataType = from_u32x8([HIGHEST_BIT; NUM_LANES]);
|
||||
unsafe { op_xor(vals_u32x8s, HIGHEST_BIT_MASK) }
|
||||
op_xor(vals_u32x8s, HIGHEST_BIT_MASK)
|
||||
}
|
||||
|
||||
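// Scalar sketch of the XOR trick above (illustration only): flipping the
// sign bit maps unsigned order onto signed order, which lets AVX2's signed
// comparisons implement a u32 range filter.
fn _u32_to_i32_scalar(val: u32) -> i32 {
    (val ^ 0x8000_0000) as i32
}
// _u32_to_i32_scalar(0) < _u32_to_i32_scalar(1) < _u32_to_i32_scalar(u32::MAX)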
pub fn filter_vec_in_place(range: RangeInclusive<u32>, offset: u32, output: &mut Vec<u32>) {
|
||||
@@ -66,19 +66,17 @@ unsafe fn filter_vec_avx2_aux(
|
||||
]);
|
||||
const SHIFT: __m256i = from_u32x8([NUM_LANES as u32; NUM_LANES]);
|
||||
for _ in 0..num_words {
|
||||
unsafe {
|
||||
let word = load_unaligned(input);
|
||||
let word = u32_to_i32_avx2(word);
|
||||
let keeper_bitset = compute_filter_bitset(word, range_simd.clone());
|
||||
let added_len = keeper_bitset.count_ones();
|
||||
let filtered_doc_ids = compact(ids, keeper_bitset);
|
||||
store_unaligned(output_tail as *mut __m256i, filtered_doc_ids);
|
||||
output_tail = output_tail.offset(added_len as isize);
|
||||
ids = op_add(ids, SHIFT);
|
||||
input = input.offset(1);
|
||||
}
|
||||
let word = load_unaligned(input);
|
||||
let word = u32_to_i32_avx2(word);
|
||||
let keeper_bitset = compute_filter_bitset(word, range_simd.clone());
|
||||
let added_len = keeper_bitset.count_ones();
|
||||
let filtered_doc_ids = compact(ids, keeper_bitset);
|
||||
store_unaligned(output_tail as *mut __m256i, filtered_doc_ids);
|
||||
output_tail = output_tail.offset(added_len as isize);
|
||||
ids = op_add(ids, SHIFT);
|
||||
input = input.offset(1);
|
||||
}
|
||||
unsafe { output_tail.offset_from(output) as usize }
|
||||
output_tail.offset_from(output) as usize
|
||||
}
|
||||
|
||||
#[inline]
|
||||
@@ -94,7 +92,8 @@ unsafe fn compute_filter_bitset(val: __m256i, range: std::ops::RangeInclusive<__
|
||||
let too_low = op_greater(*range.start(), val);
|
||||
let too_high = op_greater(val, *range.end());
|
||||
let inside = op_or(too_low, too_high);
|
||||
255 - std::arch::x86_64::_mm256_movemask_ps(_mm256_castsi256_ps(inside)) as u8
|
||||
255 - std::arch::x86_64::_mm256_movemask_ps(std::mem::transmute::<DataType, __m256>(inside))
|
||||
as u8
|
||||
}
|
||||
|
||||
union U8x32 {
|
||||
|
||||
@@ -1,3 +1,5 @@
|
||||
// #[allow(clippy::manual_div_ceil)]
|
||||
|
||||
mod bitpacker;
|
||||
mod blocked_bitpacker;
|
||||
mod filter_vec;
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
[package]
|
||||
name = "tantivy-columnar"
|
||||
version = "0.6.0"
|
||||
version = "0.5.0"
|
||||
edition = "2024"
|
||||
license = "MIT"
|
||||
homepage = "https://github.com/quickwit-oss/tantivy"
|
||||
@@ -12,17 +12,17 @@ categories = ["database-implementations", "data-structures", "compression"]
|
||||
itertools = "0.14.0"
|
||||
fastdivide = "0.4.0"
|
||||
|
||||
stacker = { version= "0.6", path = "../stacker", package="tantivy-stacker"}
|
||||
sstable = { version= "0.6", path = "../sstable", package = "tantivy-sstable" }
|
||||
common = { version= "0.10", path = "../common", package = "tantivy-common" }
|
||||
tantivy-bitpacker = { version= "0.9", path = "../bitpacker/" }
|
||||
stacker = { version= "0.5", path = "../stacker", package="tantivy-stacker"}
|
||||
sstable = { version= "0.5", path = "../sstable", package = "tantivy-sstable" }
|
||||
common = { version= "0.9", path = "../common", package = "tantivy-common" }
|
||||
tantivy-bitpacker = { version= "0.8", path = "../bitpacker/" }
|
||||
serde = "1.0.152"
|
||||
downcast-rs = "2.0.1"
|
||||
|
||||
[dev-dependencies]
|
||||
proptest = "1"
|
||||
more-asserts = "0.3.1"
|
||||
rand = "0.9"
|
||||
rand = "0.8"
|
||||
binggan = "0.14.0"
|
||||
|
||||
[[bench]]
|
||||
@@ -33,29 +33,6 @@ harness = false
|
||||
name = "bench_access"
|
||||
harness = false
|
||||
|
||||
[[bench]]
|
||||
name = "bench_first_vals"
|
||||
harness = false
|
||||
|
||||
[[bench]]
|
||||
name = "bench_values_u64"
|
||||
harness = false
|
||||
|
||||
[[bench]]
|
||||
name = "bench_values_u128"
|
||||
harness = false
|
||||
|
||||
[[bench]]
|
||||
name = "bench_create_column_values"
|
||||
harness = false
|
||||
|
||||
[[bench]]
|
||||
name = "bench_column_values_get"
|
||||
harness = false
|
||||
|
||||
[[bench]]
|
||||
name = "bench_optional_index"
|
||||
harness = false
|
||||
|
||||
[features]
|
||||
unstable = []
|
||||
zstd-compression = ["sstable/zstd-compression"]
|
||||
|
||||
@@ -73,7 +73,7 @@ The crate introduces the following concepts.
|
||||
`Columnar` is an equivalent of a dataframe.
|
||||
It maps `column_key` to `Column`.
|
||||
|
||||
A `Column<T>` associates a `RowId` (u32) to any
|
||||
A `Column<T>` asssociates a `RowId` (u32) to any
|
||||
number of values.
|
||||
|
||||
This is made possible by wrapping a `ColumnIndex` and a `ColumnValue` object.
|
||||
|
||||
@@ -19,7 +19,7 @@ fn main() {
|
||||
|
||||
let mut add_card = |card1: Card| {
|
||||
inputs.push((
|
||||
card1.to_string(),
|
||||
format!("{card1}"),
|
||||
generate_columnar_and_open(card1, NUM_DOCS),
|
||||
));
|
||||
};
|
||||
@@ -50,7 +50,6 @@ fn bench_group(mut runner: InputGroup<Column>) {
|
||||
let mut buffer = vec![None; BLOCK_SIZE];
|
||||
for i in (0..NUM_DOCS).step_by(BLOCK_SIZE) {
|
||||
// fill docs
|
||||
#[allow(clippy::needless_range_loop)]
|
||||
for idx in 0..BLOCK_SIZE {
|
||||
docs[idx] = idx as u32 + i;
|
||||
}
|
||||
|
||||
@@ -1,61 +0,0 @@
|
||||
use std::sync::Arc;
|
||||
|
||||
use binggan::{InputGroup, black_box};
|
||||
use rand::rngs::StdRng;
|
||||
use rand::{Rng, SeedableRng};
|
||||
use tantivy_columnar::ColumnValues;
|
||||
use tantivy_columnar::column_values::{CodecType, serialize_and_load_u64_based_column_values};
|
||||
|
||||
fn get_data() -> Vec<u64> {
|
||||
let mut rng = StdRng::seed_from_u64(2u64);
|
||||
let mut data: Vec<_> = (100..55_000_u64)
|
||||
.map(|num| num + rng.random::<u8>() as u64)
|
||||
.collect();
|
||||
data.push(99_000);
|
||||
data.insert(1000, 2000);
|
||||
data.insert(2000, 100);
|
||||
data.insert(3000, 4100);
|
||||
data.insert(4000, 100);
|
||||
data.insert(5000, 800);
|
||||
data
|
||||
}
|
||||
|
||||
#[inline(never)]
|
||||
fn value_iter() -> impl Iterator<Item = u64> {
|
||||
0..20_000
|
||||
}
|
||||
|
||||
type Col = Arc<dyn ColumnValues<u64>>;
|
||||
|
||||
fn main() {
|
||||
let data = get_data();
|
||||
let inputs: Vec<(String, Col)> = vec![
|
||||
(
|
||||
"bitpacked".to_string(),
|
||||
serialize_and_load_u64_based_column_values(&data.as_slice(), &[CodecType::Bitpacked]),
|
||||
),
|
||||
(
|
||||
"linear".to_string(),
|
||||
serialize_and_load_u64_based_column_values(&data.as_slice(), &[CodecType::Linear]),
|
||||
),
|
||||
(
|
||||
"blockwise_linear".to_string(),
|
||||
serialize_and_load_u64_based_column_values(
|
||||
&data.as_slice(),
|
||||
&[CodecType::BlockwiseLinear],
|
||||
),
|
||||
),
|
||||
];
|
||||
|
||||
let mut group: InputGroup<Col> = InputGroup::new_with_inputs(inputs);
|
||||
|
||||
group.register("fastfield_get", |col: &Col| {
|
||||
let mut sum = 0u64;
|
||||
for pos in value_iter() {
|
||||
sum = sum.wrapping_add(col.get_val(pos as u32));
|
||||
}
|
||||
black_box(sum);
|
||||
});
|
||||
|
||||
group.run();
|
||||
}
|
||||
@@ -1,44 +0,0 @@
|
||||
use binggan::{InputGroup, black_box};
|
||||
use rand::rngs::StdRng;
|
||||
use rand::{Rng, SeedableRng};
|
||||
use tantivy_columnar::column_values::{CodecType, serialize_u64_based_column_values};
|
||||
|
||||
fn get_data() -> Vec<u64> {
|
||||
let mut rng = StdRng::seed_from_u64(2u64);
|
||||
let mut data: Vec<_> = (100..55_000_u64)
|
||||
.map(|num| num + rng.random::<u8>() as u64)
|
||||
.collect();
|
||||
data.push(99_000);
|
||||
data.insert(1000, 2000);
|
||||
data.insert(2000, 100);
|
||||
data.insert(3000, 4100);
|
||||
data.insert(4000, 100);
|
||||
data.insert(5000, 800);
|
||||
data
|
||||
}
|
||||
|
||||
fn main() {
|
||||
let data = get_data();
|
||||
let mut group: InputGroup<(CodecType, Vec<u64>)> = InputGroup::new_with_inputs(vec![
|
||||
(
|
||||
"bitpacked codec".to_string(),
|
||||
(CodecType::Bitpacked, data.clone()),
|
||||
),
|
||||
(
|
||||
"linear codec".to_string(),
|
||||
(CodecType::Linear, data.clone()),
|
||||
),
|
||||
(
|
||||
"blockwise linear codec".to_string(),
|
||||
(CodecType::BlockwiseLinear, data.clone()),
|
||||
),
|
||||
]);
|
||||
|
||||
group.register("serialize column_values", |data| {
|
||||
let mut buffer = Vec::new();
|
||||
serialize_u64_based_column_values(&data.1.as_slice(), &[data.0], &mut buffer).unwrap();
|
||||
black_box(buffer.len());
|
||||
});
|
||||
|
||||
group.run();
|
||||
}
|
||||
@@ -1,9 +1,12 @@
#![feature(test)]
extern crate test;

use std::sync::Arc;

use binggan::{InputGroup, black_box};
use rand::prelude::*;
use tantivy_columnar::column_values::{CodecType, serialize_and_load_u64_based_column_values};
use tantivy_columnar::*;
use test::{Bencher, black_box};

struct Columns {
    pub optional: Column,
@@ -65,38 +68,88 @@ pub fn serialize_and_load(column: &[u64], codec_type: CodecType) -> Arc<dyn Colu
    serialize_and_load_u64_based_column_values(&column, &[codec_type])
}

fn main() {
    let Columns {
        optional,
        full,
        multi,
    } = get_test_columns();

    let inputs = vec![
        ("full".to_string(), full),
        ("optional".to_string(), optional),
        ("multi".to_string(), multi),
    ];

    let mut group = InputGroup::new_with_inputs(inputs);

    group.register("first_full_scan", |column| {
fn run_bench_on_column_full_scan(b: &mut Bencher, column: Column) {
    let num_iter = black_box(NUM_VALUES);
    b.iter(|| {
        let mut sum = 0u64;
        for i in 0..NUM_VALUES as u32 {
        for i in 0..num_iter as u32 {
            let val = column.first(i);
            sum += val.unwrap_or(0);
        }
        black_box(sum);
        sum
    });

    group.register("first_block_single_calls", |column| {
        let mut block: Vec<Option<u64>> = vec![None; 64];
        let fetch_docids = (0..64).collect::<Vec<_>>();
}
fn run_bench_on_column_block_fetch(b: &mut Bencher, column: Column) {
    let mut block: Vec<Option<u64>> = vec![None; 64];
    let fetch_docids = (0..64).collect::<Vec<_>>();
    b.iter(move || {
        column.first_vals(&fetch_docids, &mut block);
        block[0]
    });
}
fn run_bench_on_column_block_single_calls(b: &mut Bencher, column: Column) {
    let mut block: Vec<Option<u64>> = vec![None; 64];
    let fetch_docids = (0..64).collect::<Vec<_>>();
    b.iter(move || {
        for i in 0..fetch_docids.len() {
            block[i] = column.first(fetch_docids[i]);
        }
        black_box(block[0]);
        block[0]
    });

    group.run();
}

/// Column first method
#[bench]
fn bench_get_first_on_full_column_full_scan(b: &mut Bencher) {
    let column = get_test_columns().full;
    run_bench_on_column_full_scan(b, column);
}

#[bench]
fn bench_get_first_on_optional_column_full_scan(b: &mut Bencher) {
    let column = get_test_columns().optional;
    run_bench_on_column_full_scan(b, column);
}

#[bench]
fn bench_get_first_on_multi_column_full_scan(b: &mut Bencher) {
    let column = get_test_columns().multi;
    run_bench_on_column_full_scan(b, column);
}

/// Block fetch column accessor
#[bench]
fn bench_get_block_first_on_optional_column(b: &mut Bencher) {
    let column = get_test_columns().optional;
    run_bench_on_column_block_fetch(b, column);
}

#[bench]
fn bench_get_block_first_on_multi_column(b: &mut Bencher) {
    let column = get_test_columns().multi;
    run_bench_on_column_block_fetch(b, column);
}

#[bench]
fn bench_get_block_first_on_full_column(b: &mut Bencher) {
    let column = get_test_columns().full;
    run_bench_on_column_block_fetch(b, column);
}

#[bench]
fn bench_get_block_first_on_optional_column_single_calls(b: &mut Bencher) {
    let column = get_test_columns().optional;
    run_bench_on_column_block_single_calls(b, column);
}

#[bench]
fn bench_get_block_first_on_multi_column_single_calls(b: &mut Bencher) {
    let column = get_test_columns().multi;
    run_bench_on_column_block_single_calls(b, column);
}

#[bench]
fn bench_get_block_first_on_full_column_single_calls(b: &mut Bencher) {
    let column = get_test_columns().full;
    run_bench_on_column_block_single_calls(b, column);
}
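For context: the hunk above swaps nightly-only `#[bench]` functions for a binggan binary, which runs on stable and benches each registered function against each named input. A minimal sketch of that pattern, assuming only the `InputGroup` API already used in this file:

```rust
use binggan::{InputGroup, black_box};

fn main() {
    // Every registered function runs once per named input.
    let mut group: InputGroup<Vec<u64>> = InputGroup::new_with_inputs(vec![
        ("small".to_string(), (0..1_000u64).collect()),
        ("large".to_string(), (0..1_000_000u64).collect()),
    ]);
    group.register("sum", |data: &Vec<u64>| {
        black_box(data.iter().sum::<u64>());
    });
    group.run();
}
```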
@@ -1,106 +0,0 @@
use binggan::{InputGroup, black_box};
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use tantivy_columnar::column_index::{OptionalIndex, Set};

const TOTAL_NUM_VALUES: u32 = 1_000_000;

fn gen_optional_index(fill_ratio: f64) -> OptionalIndex {
    let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
    let vals: Vec<u32> = (0..TOTAL_NUM_VALUES)
        .map(|_| rng.random_bool(fill_ratio))
        .enumerate()
        .filter(|(_pos, val)| *val)
        .map(|(pos, _)| pos as u32)
        .collect();
    OptionalIndex::for_test(TOTAL_NUM_VALUES, &vals)
}

fn random_range_iterator(
    start: u32,
    end: u32,
    avg_step_size: u32,
    avg_deviation: u32,
) -> impl Iterator<Item = u32> {
    let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
    let mut current = start;
    std::iter::from_fn(move || {
        current += rng.random_range(avg_step_size - avg_deviation..=avg_step_size + avg_deviation);
        if current >= end { None } else { Some(current) }
    })
}

fn n_percent_step_iterator(percent: f32, num_values: u32) -> impl Iterator<Item = u32> {
    let ratio = percent / 100.0;
    let step_size = (1f32 / ratio) as u32;
    let deviation = step_size - 1;
    random_range_iterator(0, num_values, step_size, deviation)
}

fn walk_over_data(codec: &OptionalIndex, avg_step_size: u32) -> Option<u32> {
    walk_over_data_from_positions(
        codec,
        random_range_iterator(0, TOTAL_NUM_VALUES, avg_step_size, 0),
    )
}

fn walk_over_data_from_positions(
    codec: &OptionalIndex,
    positions: impl Iterator<Item = u32>,
) -> Option<u32> {
    let mut dense_idx: Option<u32> = None;
    for idx in positions {
        dense_idx = dense_idx.or(codec.rank_if_exists(idx));
    }
    dense_idx
}

fn main() {
    // Build separate inputs for each fill ratio.
    let inputs: Vec<(String, OptionalIndex)> = vec![
        ("fill=1%".to_string(), gen_optional_index(0.01)),
        ("fill=5%".to_string(), gen_optional_index(0.05)),
        ("fill=10%".to_string(), gen_optional_index(0.10)),
        ("fill=50%".to_string(), gen_optional_index(0.50)),
        ("fill=90%".to_string(), gen_optional_index(0.90)),
    ];

    let mut group: InputGroup<OptionalIndex> = InputGroup::new_with_inputs(inputs);

    // Translate orig->codec (rank_if_exists) with sampling
    group.register("orig_to_codec_10pct_hit", |codec: &OptionalIndex| {
        black_box(walk_over_data(codec, 100));
    });
    group.register("orig_to_codec_1pct_hit", |codec: &OptionalIndex| {
        black_box(walk_over_data(codec, 1000));
    });
    group.register("orig_to_codec_full_scan", |codec: &OptionalIndex| {
        black_box(walk_over_data_from_positions(codec, 0..TOTAL_NUM_VALUES));
    });

    // Translate codec->orig (select/select_batch) on sampled ranks
    fn bench_translate_codec_to_orig_util(codec: &OptionalIndex, percent_hit: f32) {
        let num_non_nulls = codec.num_non_nulls();
        let idxs: Vec<u32> = if percent_hit == 100.0f32 {
            (0..num_non_nulls).collect()
        } else {
            n_percent_step_iterator(percent_hit, num_non_nulls).collect()
        };
        let mut output = vec![0u32; idxs.len()];
        output.copy_from_slice(&idxs[..]);
        codec.select_batch(&mut output);
        black_box(output);
    }

    group.register("codec_to_orig_0.005pct_hit", |codec: &OptionalIndex| {
        bench_translate_codec_to_orig_util(codec, 0.005);
    });
    group.register("codec_to_orig_10pct_hit", |codec: &OptionalIndex| {
        bench_translate_codec_to_orig_util(codec, 10.0);
    });
    group.register("codec_to_orig_full_scan", |codec: &OptionalIndex| {
        bench_translate_codec_to_orig_util(codec, 100.0);
    });

    group.run();
}
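The deleted bench above exercised both directions of an optional index: `rank_if_exists` maps an original position to its dense value index, and `select_batch` maps dense indices back to positions. A hedged round-trip sketch built only from the calls visible above:

```rust
use tantivy_columnar::column_index::OptionalIndex;

/// rank then select should land back on the original doc, if the doc has a value.
fn rank_select_roundtrip(index: &OptionalIndex, doc: u32) -> Option<u32> {
    let rank = index.rank_if_exists(doc)?; // doc id -> dense index
    let mut ranks = vec![rank];
    index.select_batch(&mut ranks); // dense indices -> doc ids, in place
    Some(ranks[0])
}
```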
@@ -1,12 +1,15 @@
#![feature(test)]

use std::ops::RangeInclusive;
use std::sync::Arc;

use binggan::{InputGroup, black_box};
use common::OwnedBytes;
use rand::rngs::StdRng;
use rand::seq::SliceRandom;
use rand::{Rng, SeedableRng, random};
use tantivy_columnar::ColumnValues;
use test::Bencher;
extern crate test;

// TODO does this make sense for IPv6 ?
fn generate_random() -> Vec<u64> {
@@ -39,82 +42,83 @@ fn get_data_50percent_item() -> Vec<u128> {

    let mut data = vec![];
    for _ in 0..300_000 {
        let val = rng.random_range(1..=100);
        let val = rng.gen_range(1..=100);
        data.push(val);
    }
    data.push(SINGLE_ITEM);
    data.shuffle(&mut rng);
    data.iter().map(|el| *el as u128).collect::<Vec<_>>()
    let data = data.iter().map(|el| *el as u128).collect::<Vec<_>>();
    data
}

fn main() {
#[bench]
fn bench_intfastfield_getrange_u128_50percent_hit(b: &mut Bencher) {
    let data = get_data_50percent_item();
    let column_range = get_u128_column_from_data(&data);
    let column_random = get_u128_column_random();
    let column = get_u128_column_from_data(&data);

    struct Inputs {
        data: Vec<u128>,
        column_range: Arc<dyn ColumnValues<u128>>,
        column_random: Arc<dyn ColumnValues<u128>>,
    }

    let inputs = Inputs {
        data,
        column_range,
        column_random,
    };
    let mut group: InputGroup<Inputs> =
        InputGroup::new_with_inputs(vec![("u128 benches".to_string(), inputs)]);

    group.register(
        "intfastfield_getrange_u128_50percent_hit",
        |inp: &Inputs| {
            let mut positions = Vec::new();
            inp.column_range.get_row_ids_for_value_range(
                *FIFTY_PERCENT_RANGE.start() as u128..=*FIFTY_PERCENT_RANGE.end() as u128,
                0..inp.data.len() as u32,
                &mut positions,
            );
            black_box(positions.len());
        },
    );

    group.register("intfastfield_getrange_u128_single_hit", |inp: &Inputs| {
    b.iter(|| {
        let mut positions = Vec::new();
        inp.column_range.get_row_ids_for_value_range(
        column.get_row_ids_for_value_range(
            *FIFTY_PERCENT_RANGE.start() as u128..=*FIFTY_PERCENT_RANGE.end() as u128,
            0..data.len() as u32,
            &mut positions,
        );
        positions
    });
}

#[bench]
fn bench_intfastfield_getrange_u128_single_hit(b: &mut Bencher) {
    let data = get_data_50percent_item();
    let column = get_u128_column_from_data(&data);

    b.iter(|| {
        let mut positions = Vec::new();
        column.get_row_ids_for_value_range(
            *SINGLE_ITEM_RANGE.start() as u128..=*SINGLE_ITEM_RANGE.end() as u128,
            0..inp.data.len() as u32,
            0..data.len() as u32,
            &mut positions,
        );
        black_box(positions.len());
        positions
    });
}

    group.register("intfastfield_getrange_u128_hit_all", |inp: &Inputs| {
#[bench]
fn bench_intfastfield_getrange_u128_hit_all(b: &mut Bencher) {
    let data = get_data_50percent_item();
    let column = get_u128_column_from_data(&data);

    b.iter(|| {
        let mut positions = Vec::new();
        inp.column_range.get_row_ids_for_value_range(
            0..=u128::MAX,
            0..inp.data.len() as u32,
            &mut positions,
        );
        black_box(positions.len());
        column.get_row_ids_for_value_range(0..=u128::MAX, 0..data.len() as u32, &mut positions);
        positions
    });
}
// U128 RANGE END

    group.register("intfastfield_scan_all_fflookup_u128", |inp: &Inputs| {
#[bench]
fn bench_intfastfield_scan_all_fflookup_u128(b: &mut Bencher) {
    let column = get_u128_column_random();

    b.iter(|| {
        let mut a = 0u128;
        for i in 0u64..inp.column_random.num_vals() as u64 {
            a += inp.column_random.get_val(i as u32);
        for i in 0u64..column.num_vals() as u64 {
            a += column.get_val(i as u32);
        }
        black_box(a);
        a
    });
}

    group.register("intfastfield_jumpy_stride5_u128", |inp: &Inputs| {
        let n = inp.column_random.num_vals();
#[bench]
fn bench_intfastfield_jumpy_stride5_u128(b: &mut Bencher) {
    let column = get_u128_column_random();

    b.iter(|| {
        let n = column.num_vals();
        let mut a = 0u128;
        for i in (0..n / 5).map(|val| val * 5) {
            a += inp.column_random.get_val(i);
            a += column.get_val(i);
        }
        black_box(a);
        a
    });

    group.run();
}
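Both variants of this bench bottom out in `get_row_ids_for_value_range`, which appends the row ids of values inside the range to a caller-provided buffer. A minimal sketch, assuming the trait signature used throughout this diff:

```rust
use std::sync::Arc;
use tantivy_columnar::ColumnValues;

fn matching_rows(column: &Arc<dyn ColumnValues<u128>>, lo: u128, hi: u128) -> Vec<u32> {
    let mut positions = Vec::new();
    // Scan the whole row range; matching row ids are pushed into `positions`.
    column.get_row_ids_for_value_range(lo..=hi, 0..column.num_vals(), &mut positions);
    positions
}
```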
@@ -1,10 +1,13 @@
#![feature(test)]
extern crate test;

use std::ops::RangeInclusive;
use std::sync::Arc;

use binggan::{InputGroup, black_box};
use rand::prelude::*;
use tantivy_columnar::column_values::{CodecType, serialize_and_load_u64_based_column_values};
use tantivy_columnar::*;
use test::Bencher;

// Warning: this generates the same permutation at each call
fn generate_permutation() -> Vec<u64> {
@@ -24,138 +27,177 @@ pub fn serialize_and_load(column: &[u64], codec_type: CodecType) -> Arc<dyn Colu
    serialize_and_load_u64_based_column_values(&column, &[codec_type])
}

#[bench]
fn bench_intfastfield_jumpy_veclookup(b: &mut Bencher) {
    let permutation = generate_permutation();
    let n = permutation.len();
    b.iter(|| {
        let mut a = 0u64;
        for _ in 0..n {
            a = permutation[a as usize];
        }
        a
    });
}

#[bench]
fn bench_intfastfield_jumpy_fflookup_bitpacked(b: &mut Bencher) {
    let permutation = generate_permutation();
    let n = permutation.len();
    let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&permutation, CodecType::Bitpacked);
    b.iter(|| {
        let mut a = 0u64;
        for _ in 0..n {
            a = column.get_val(a as u32);
        }
        a
    });
}

const FIFTY_PERCENT_RANGE: RangeInclusive<u64> = 1..=50;
const SINGLE_ITEM: u64 = 90;
const SINGLE_ITEM_RANGE: RangeInclusive<u64> = 90..=90;
const ONE_PERCENT_ITEM_RANGE: RangeInclusive<u64> = 49..=49;

fn get_data_50percent_item() -> Vec<u128> {
    let mut rng = StdRng::from_seed([1u8; 32]);

    let mut data = vec![];
    for _ in 0..300_000 {
        let val = rng.random_range(1..=100);
        let val = rng.gen_range(1..=100);
        data.push(val);
    }
    data.push(SINGLE_ITEM);

    data.shuffle(&mut rng);
    data.iter().map(|el| *el as u128).collect::<Vec<_>>()
    let data = data.iter().map(|el| *el as u128).collect::<Vec<_>>();
    data
}

type VecCol = (Vec<u64>, Arc<dyn ColumnValues<u64>>);
// U64 RANGE START
#[bench]
fn bench_intfastfield_getrange_u64_50percent_hit(b: &mut Bencher) {
    let data = get_data_50percent_item();
    let data = data.iter().map(|el| *el as u64).collect::<Vec<_>>();
    let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&data, CodecType::Bitpacked);
    b.iter(|| {
        let mut positions = Vec::new();
        column.get_row_ids_for_value_range(
            FIFTY_PERCENT_RANGE,
            0..data.len() as u32,
            &mut positions,
        );
        positions
    });
}

fn bench_access() {
#[bench]
fn bench_intfastfield_getrange_u64_1percent_hit(b: &mut Bencher) {
    let data = get_data_50percent_item();
    let data = data.iter().map(|el| *el as u64).collect::<Vec<_>>();
    let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&data, CodecType::Bitpacked);

    b.iter(|| {
        let mut positions = Vec::new();
        column.get_row_ids_for_value_range(
            ONE_PERCENT_ITEM_RANGE,
            0..data.len() as u32,
            &mut positions,
        );
        positions
    });
}

#[bench]
fn bench_intfastfield_getrange_u64_single_hit(b: &mut Bencher) {
    let data = get_data_50percent_item();
    let data = data.iter().map(|el| *el as u64).collect::<Vec<_>>();
    let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&data, CodecType::Bitpacked);

    b.iter(|| {
        let mut positions = Vec::new();
        column.get_row_ids_for_value_range(SINGLE_ITEM_RANGE, 0..data.len() as u32, &mut positions);
        positions
    });
}

#[bench]
fn bench_intfastfield_getrange_u64_hit_all(b: &mut Bencher) {
    let data = get_data_50percent_item();
    let data = data.iter().map(|el| *el as u64).collect::<Vec<_>>();
    let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&data, CodecType::Bitpacked);

    b.iter(|| {
        let mut positions = Vec::new();
        column.get_row_ids_for_value_range(0..=u64::MAX, 0..data.len() as u32, &mut positions);
        positions
    });
}
// U64 RANGE END

#[bench]
fn bench_intfastfield_stride7_vec(b: &mut Bencher) {
    let permutation = generate_permutation();
    let column_perm: Arc<dyn ColumnValues<u64>> =
        serialize_and_load(&permutation, CodecType::Bitpacked);

    let permutation_gcd = generate_permutation_gcd();
    let column_perm_gcd: Arc<dyn ColumnValues<u64>> =
        serialize_and_load(&permutation_gcd, CodecType::Bitpacked);

    let mut group: InputGroup<VecCol> = InputGroup::new_with_inputs(vec![
        (
            "access".to_string(),
            (permutation.clone(), column_perm.clone()),
        ),
        (
            "access_gcd".to_string(),
            (permutation_gcd.clone(), column_perm_gcd.clone()),
        ),
    ]);

    group.register("stride7_vec", |inp: &VecCol| {
        let n = inp.0.len();
    let n = permutation.len();
    b.iter(|| {
        let mut a = 0u64;
        for i in (0..n / 7).map(|val| val * 7) {
            a += inp.0[i];
            a += permutation[i as usize];
        }
        black_box(a);
        a
    });
}

    group.register("fullscan_vec", |inp: &VecCol| {
        let mut a = 0u64;
        for i in 0..inp.0.len() {
            a += inp.0[i];
        }
        black_box(a);
    });

    group.register("stride7_column_values", |inp: &VecCol| {
        let n = inp.1.num_vals() as usize;
        let mut a = 0u64;
#[bench]
fn bench_intfastfield_stride7_fflookup(b: &mut Bencher) {
    let permutation = generate_permutation();
    let n = permutation.len();
    let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&permutation, CodecType::Bitpacked);
    b.iter(|| {
        let mut a = 0;
        for i in (0..n / 7).map(|val| val * 7) {
            a += inp.1.get_val(i as u32);
            a += column.get_val(i as u32);
        }
        black_box(a);
        a
    });
}

    group.register("fullscan_column_values", |inp: &VecCol| {
#[bench]
fn bench_intfastfield_scan_all_fflookup(b: &mut Bencher) {
    let permutation = generate_permutation();
    let n = permutation.len();
    let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&permutation, CodecType::Bitpacked);
    let column_ref = column.as_ref();
    b.iter(|| {
        let mut a = 0u64;
        for i in 0u32..n as u32 {
            a += column_ref.get_val(i);
        }
        a
    });
}

#[bench]
fn bench_intfastfield_scan_all_fflookup_gcd(b: &mut Bencher) {
    let permutation = generate_permutation_gcd();
    let n = permutation.len();
    let column: Arc<dyn ColumnValues<u64>> = serialize_and_load(&permutation, CodecType::Bitpacked);
    b.iter(|| {
        let mut a = 0u64;
        let n = inp.1.num_vals() as usize;
        for i in 0..n {
            a += inp.1.get_val(i as u32);
            a += column.get_val(i as u32);
        }
        black_box(a);
        a
    });

    group.run();
}

fn bench_range() {
    let data_50 = get_data_50percent_item();
    let data_u64 = data_50.iter().map(|el| *el as u64).collect::<Vec<_>>();
    let column_data: Arc<dyn ColumnValues<u64>> =
        serialize_and_load(&data_u64, CodecType::Bitpacked);

    let mut group: InputGroup<Arc<dyn ColumnValues<u64>>> =
        InputGroup::new_with_inputs(vec![("dist_50pct_item".to_string(), column_data.clone())]);

    group.register(
        "fastfield_getrange_u64_50percent_hit",
        |col: &Arc<dyn ColumnValues<u64>>| {
            let mut positions = Vec::new();
            col.get_row_ids_for_value_range(FIFTY_PERCENT_RANGE, 0..col.num_vals(), &mut positions);
            black_box(positions.len());
        },
    );

    group.register(
        "fastfield_getrange_u64_1percent_hit",
        |col: &Arc<dyn ColumnValues<u64>>| {
            let mut positions = Vec::new();
            col.get_row_ids_for_value_range(
                ONE_PERCENT_ITEM_RANGE,
                0..col.num_vals(),
                &mut positions,
            );
            black_box(positions.len());
        },
    );

    group.register(
        "fastfield_getrange_u64_single_hit",
        |col: &Arc<dyn ColumnValues<u64>>| {
            let mut positions = Vec::new();
            col.get_row_ids_for_value_range(SINGLE_ITEM_RANGE, 0..col.num_vals(), &mut positions);
            black_box(positions.len());
        },
    );

    group.register(
        "fastfield_getrange_u64_hit_all",
        |col: &Arc<dyn ColumnValues<u64>>| {
            let mut positions = Vec::new();
            col.get_row_ids_for_value_range(0..=u64::MAX, 0..col.num_vals(), &mut positions);
            black_box(positions.len());
        },
    );

    group.run();
}

fn main() {
    bench_access();
    bench_range();
#[bench]
fn bench_intfastfield_scan_all_vec(b: &mut Bencher) {
    let permutation = generate_permutation();
    b.iter(|| {
        let mut a = 0u64;
        for i in 0..permutation.len() {
            a += permutation[i as usize] as u64;
        }
        a
    });
}
@@ -29,20 +29,12 @@ impl<T: PartialOrd + Copy + std::fmt::Debug + Send + Sync + 'static + Default>
        }
    }
    #[inline]
    pub fn fetch_block_with_missing(
        &mut self,
        docs: &[u32],
        accessor: &Column<T>,
        missing: Option<T>,
    ) {
    pub fn fetch_block_with_missing(&mut self, docs: &[u32], accessor: &Column<T>, missing: T) {
        self.fetch_block(docs, accessor);
        // no missing values
        if accessor.index.get_cardinality().is_full() {
            return;
        }
        let Some(missing) = missing else {
            return;
        };

        // We can compare docid_cache length with docs to find missing docs
        // For multi value columns we can't rely on the length and always need to scan

@@ -85,8 +85,8 @@ impl<T: PartialOrd + Copy + Debug + Send + Sync + 'static> Column<T> {
    }

    #[inline]
    pub fn first(&self, doc_id: DocId) -> Option<T> {
        self.values_for_doc(doc_id).next()
    pub fn first(&self, row_id: RowId) -> Option<T> {
        self.values_for_doc(row_id).next()
    }

    /// Load the first value for each docid in the provided slice.
@@ -131,8 +131,6 @@ impl<T: PartialOrd + Copy + Debug + Send + Sync + 'static> Column<T> {
        self.index.docids_to_rowids(doc_ids, doc_ids_out, row_ids)
    }

    /// Get an iterator over the values for the provided docid.
    #[inline]
    pub fn values_for_doc(&self, doc_id: DocId) -> impl Iterator<Item = T> + '_ {
        self.index
            .value_row_ids(doc_id)
@@ -160,6 +158,15 @@ impl<T: PartialOrd + Copy + Debug + Send + Sync + 'static> Column<T> {
            .select_batch_in_place(selected_docid_range.start, doc_ids);
    }

    /// Fills the output vector with the (possibly multiple) values associated with
    /// `row_id`.
    ///
    /// This method clears the `output` vector.
    pub fn fill_vals(&self, row_id: RowId, output: &mut Vec<T>) {
        output.clear();
        output.extend(self.values_for_doc(row_id));
    }

    pub fn first_or_default_col(self, default_value: T) -> Arc<dyn ColumnValues<T>> {
        Arc::new(FirstValueWithDefault {
            column: self,
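The `fill_vals` helper in the hunk above clears and refills a caller-owned buffer, so a loop over many documents reuses one allocation. A hedged usage sketch (assuming `Column<u64>` is exported at the crate root, as the benches in this diff suggest):

```rust
use tantivy_columnar::Column;

fn sum_all_vals(column: &Column<u64>, num_docs: u32) -> u64 {
    let mut vals: Vec<u64> = Vec::new();
    let mut sum = 0u64;
    for doc in 0..num_docs {
        column.fill_vals(doc, &mut vals); // clears `vals`, then extends it
        sum += vals.iter().sum::<u64>();
    }
    sum
}
```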
@@ -56,7 +56,7 @@ fn get_doc_ids_with_values<'a>(
        ColumnIndex::Full => Box::new(doc_range),
        ColumnIndex::Optional(optional_index) => Box::new(
            optional_index
                .iter_non_null_docs()
                .iter_docs()
                .map(move |row| row + doc_range.start),
        ),
        ColumnIndex::Multivalued(multivalued_index) => match multivalued_index {
@@ -73,7 +73,7 @@ fn get_doc_ids_with_values<'a>(
            MultiValueIndex::MultiValueIndexV2(multivalued_index) => Box::new(
                multivalued_index
                    .optional_index
                    .iter_non_null_docs()
                    .iter_docs()
                    .map(move |row| row + doc_range.start),
            ),
        },
@@ -105,11 +105,10 @@ fn get_num_values_iterator<'a>(
) -> Box<dyn Iterator<Item = u32> + 'a> {
    match column_index {
        ColumnIndex::Empty { .. } => Box::new(std::iter::empty()),
        ColumnIndex::Full => Box::new(std::iter::repeat_n(1u32, num_docs as usize)),
        ColumnIndex::Optional(optional_index) => Box::new(std::iter::repeat_n(
            1u32,
            optional_index.num_non_nulls() as usize,
        )),
        ColumnIndex::Full => Box::new(std::iter::repeat(1u32).take(num_docs as usize)),
        ColumnIndex::Optional(optional_index) => {
            Box::new(std::iter::repeat(1u32).take(optional_index.num_non_nulls() as usize))
        }
        ColumnIndex::Multivalued(multivalued_index) => Box::new(
            multivalued_index
                .get_start_index_column()
@@ -178,7 +177,7 @@ impl<'a> Iterable<RowId> for StackedOptionalIndex<'a> {
            ColumnIndex::Full => Box::new(columnar_row_range),
            ColumnIndex::Optional(optional_index) => Box::new(
                optional_index
                    .iter_non_null_docs()
                    .iter_docs()
                    .map(move |row_id: RowId| columnar_row_range.start + row_id),
            ),
            ColumnIndex::Multivalued(_) => {

@@ -215,32 +215,6 @@ impl MultiValueIndex {
        }
    }

    /// Returns an iterator over document ids that have at least one value.
    pub fn iter_non_null_docs(&self) -> Box<dyn Iterator<Item = DocId> + '_> {
        match self {
            MultiValueIndex::MultiValueIndexV1(idx) => {
                let mut doc: DocId = 0u32;
                let num_docs = idx.num_docs();
                Box::new(std::iter::from_fn(move || {
                    // This is not the most efficient way to do this, but it's legacy code.
                    while doc < num_docs {
                        let cur = doc;
                        doc += 1;
                        let start = idx.start_index_column.get_val(cur);
                        let end = idx.start_index_column.get_val(cur + 1);
                        if end > start {
                            return Some(cur);
                        }
                    }
                    None
                }))
            }
            MultiValueIndex::MultiValueIndexV2(idx) => {
                Box::new(idx.optional_index.iter_non_null_docs())
            }
        }
    }

    /// Converts a list of ranks (row ids of values) in a 1:n index to the corresponding list of
    /// docids. Positions are converted inplace to docids.
    ///
@@ -1,4 +1,4 @@
use std::io;
use std::io::{self, Write};
use std::sync::Arc;

mod set;
@@ -11,7 +11,7 @@ use set_block::{
};

use crate::iterable::Iterable;
use crate::{DocId, RowId};
use crate::{DocId, InvalidData, RowId};

/// The threshold for the number of elements after which we switch to dense block encoding.
///
@@ -88,7 +88,7 @@ pub struct OptionalIndex {

impl Iterable<u32> for &OptionalIndex {
    fn boxed_iter(&self) -> Box<dyn Iterator<Item = u32> + '_> {
        Box::new(self.iter_non_null_docs())
        Box::new(self.iter_docs())
    }
}

@@ -280,9 +280,8 @@ impl OptionalIndex {
        self.num_non_null_docs
    }

    pub fn iter_non_null_docs(&self) -> impl Iterator<Item = RowId> + '_ {
        // TODO optimize. We could iterate over the blocks directly.
        // We use the dense value ids and retrieve the doc ids via select.
    pub fn iter_docs(&self) -> impl Iterator<Item = RowId> + '_ {
        // TODO optimize
        let mut select_batch = self.select_cursor();
        (0..self.num_non_null_docs).map(move |rank| select_batch.select(rank))
    }
@@ -335,6 +334,38 @@ enum Block<'a> {
    Sparse(SparseBlock<'a>),
}

#[derive(Debug, Copy, Clone)]
enum OptionalIndexCodec {
    Dense = 0,
    Sparse = 1,
}

impl OptionalIndexCodec {
    fn to_code(self) -> u8 {
        self as u8
    }

    fn try_from_code(code: u8) -> Result<Self, InvalidData> {
        match code {
            0 => Ok(Self::Dense),
            1 => Ok(Self::Sparse),
            _ => Err(InvalidData),
        }
    }
}

impl BinarySerializable for OptionalIndexCodec {
    fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
        writer.write_all(&[self.to_code()])
    }

    fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
        let optional_codec_code = u8::deserialize(reader)?;
        let optional_codec = Self::try_from_code(optional_codec_code)?;
        Ok(optional_codec)
    }
}
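The codec tag above costs exactly one byte on disk. A standalone round-trip sketch mirroring the impl (not importing it, since `OptionalIndexCodec` is private):

```rust
use std::io::{self, Read, Write};

#[derive(Debug, Copy, Clone)]
enum Codec {
    Dense = 0,
    Sparse = 1,
}

fn write_codec<W: Write>(codec: Codec, w: &mut W) -> io::Result<()> {
    w.write_all(&[codec as u8])
}

fn read_codec<R: Read>(r: &mut R) -> io::Result<Codec> {
    let mut byte = [0u8; 1];
    r.read_exact(&mut byte)?;
    match byte[0] {
        0 => Ok(Codec::Dense),
        1 => Ok(Codec::Sparse),
        _ => Err(io::Error::new(io::ErrorKind::InvalidData, "invalid codec byte")),
    }
}
```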
fn serialize_optional_index_block(block_els: &[u16], out: &mut impl io::Write) -> io::Result<()> {
    let is_sparse = is_sparse(block_els.len() as u32);
    if is_sparse {

@@ -164,11 +164,7 @@ fn test_optional_index_large() {
fn test_optional_index_iter_aux(row_ids: &[RowId], num_rows: RowId) {
    let optional_index = OptionalIndex::for_test(num_rows, row_ids);
    assert_eq!(optional_index.num_docs(), num_rows);
    assert!(
        optional_index
            .iter_non_null_docs()
            .eq(row_ids.iter().copied())
    );
    assert!(optional_index.iter_docs().eq(row_ids.iter().copied()));
}

#[test]
@@ -223,3 +219,170 @@ fn test_optional_index_for_tests() {
    assert!(!optional_index.contains(3));
    assert_eq!(optional_index.num_docs(), 4);
}

#[cfg(all(test, feature = "unstable"))]
mod bench {

    use rand::rngs::StdRng;
    use rand::{Rng, SeedableRng};
    use test::Bencher;

    use super::*;

    const TOTAL_NUM_VALUES: u32 = 1_000_000;
    fn gen_bools(fill_ratio: f64) -> OptionalIndex {
        let mut out = Vec::new();
        let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
        let vals: Vec<RowId> = (0..TOTAL_NUM_VALUES)
            .map(|_| rng.gen_bool(fill_ratio))
            .enumerate()
            .filter(|(_pos, val)| *val)
            .map(|(pos, _)| pos as RowId)
            .collect();
        serialize_optional_index(&&vals[..], TOTAL_NUM_VALUES, &mut out).unwrap();

        open_optional_index(OwnedBytes::new(out)).unwrap()
    }

    fn random_range_iterator(
        start: u32,
        end: u32,
        avg_step_size: u32,
        avg_deviation: u32,
    ) -> impl Iterator<Item = u32> {
        let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
        let mut current = start;
        std::iter::from_fn(move || {
            current += rng.gen_range(avg_step_size - avg_deviation..=avg_step_size + avg_deviation);
            if current >= end { None } else { Some(current) }
        })
    }

    fn n_percent_step_iterator(percent: f32, num_values: u32) -> impl Iterator<Item = u32> {
        let ratio = percent / 100.0;
        let step_size = (1f32 / ratio) as u32;
        let deviation = step_size - 1;
        random_range_iterator(0, num_values, step_size, deviation)
    }

    fn walk_over_data(codec: &OptionalIndex, avg_step_size: u32) -> Option<u32> {
        walk_over_data_from_positions(
            codec,
            random_range_iterator(0, TOTAL_NUM_VALUES, avg_step_size, 0),
        )
    }

    fn walk_over_data_from_positions(
        codec: &OptionalIndex,
        positions: impl Iterator<Item = u32>,
    ) -> Option<u32> {
        let mut dense_idx: Option<u32> = None;
        for idx in positions {
            dense_idx = dense_idx.or(codec.rank_if_exists(idx));
        }
        dense_idx
    }

    #[bench]
    fn bench_translate_orig_to_codec_1percent_filled_10percent_hit(bench: &mut Bencher) {
        let codec = gen_bools(0.01f64);
        bench.iter(|| walk_over_data(&codec, 100));
    }

    #[bench]
    fn bench_translate_orig_to_codec_5percent_filled_10percent_hit(bench: &mut Bencher) {
        let codec = gen_bools(0.05f64);
        bench.iter(|| walk_over_data(&codec, 100));
    }

    #[bench]
    fn bench_translate_orig_to_codec_5percent_filled_1percent_hit(bench: &mut Bencher) {
        let codec = gen_bools(0.05f64);
        bench.iter(|| walk_over_data(&codec, 1000));
    }

    #[bench]
    fn bench_translate_orig_to_codec_full_scan_1percent_filled(bench: &mut Bencher) {
        let codec = gen_bools(0.01f64);
        bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
    }

    #[bench]
    fn bench_translate_orig_to_codec_full_scan_10percent_filled(bench: &mut Bencher) {
        let codec = gen_bools(0.1f64);
        bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
    }

    #[bench]
    fn bench_translate_orig_to_codec_full_scan_90percent_filled(bench: &mut Bencher) {
        let codec = gen_bools(0.9f64);
        bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
    }

    #[bench]
    fn bench_translate_orig_to_codec_10percent_filled_1percent_hit(bench: &mut Bencher) {
        let codec = gen_bools(0.1f64);
        bench.iter(|| walk_over_data(&codec, 100));
    }

    #[bench]
    fn bench_translate_orig_to_codec_50percent_filled_1percent_hit(bench: &mut Bencher) {
        let codec = gen_bools(0.5f64);
        bench.iter(|| walk_over_data(&codec, 100));
    }

    #[bench]
    fn bench_translate_orig_to_codec_90percent_filled_1percent_hit(bench: &mut Bencher) {
        let codec = gen_bools(0.9f64);
        bench.iter(|| walk_over_data(&codec, 100));
    }

    #[bench]
    fn bench_translate_codec_to_orig_1percent_filled_0comma005percent_hit(bench: &mut Bencher) {
        bench_translate_codec_to_orig_util(0.01f64, 0.005f32, bench);
    }

    #[bench]
    fn bench_translate_codec_to_orig_10percent_filled_0comma005percent_hit(bench: &mut Bencher) {
        bench_translate_codec_to_orig_util(0.1f64, 0.005f32, bench);
    }

    #[bench]
    fn bench_translate_codec_to_orig_1percent_filled_10percent_hit(bench: &mut Bencher) {
        bench_translate_codec_to_orig_util(0.01f64, 10f32, bench);
    }

    #[bench]
    fn bench_translate_codec_to_orig_1percent_filled_full_scan(bench: &mut Bencher) {
        bench_translate_codec_to_orig_util(0.01f64, 100f32, bench);
    }

    fn bench_translate_codec_to_orig_util(
        percent_filled: f64,
        percent_hit: f32,
        bench: &mut Bencher,
    ) {
        let codec = gen_bools(percent_filled);
        let num_non_nulls = codec.num_non_nulls();
        let idxs: Vec<u32> = if percent_hit == 100.0f32 {
            (0..num_non_nulls).collect()
        } else {
            n_percent_step_iterator(percent_hit, num_non_nulls).collect()
        };
        let mut output = vec![0u32; idxs.len()];
        bench.iter(|| {
            output.copy_from_slice(&idxs[..]);
            codec.select_batch(&mut output);
        });
    }

    #[bench]
    fn bench_translate_codec_to_orig_90percent_filled_0comma005percent_hit(bench: &mut Bencher) {
        bench_translate_codec_to_orig_util(0.9f64, 0.005, bench);
    }

    #[bench]
    fn bench_translate_codec_to_orig_90percent_filled_full_scan(bench: &mut Bencher) {
        bench_translate_codec_to_orig_util(0.9f64, 100.0f32, bench);
    }
}
columnar/src/column_values/bench.rs (new file, 139 lines)
@@ -0,0 +1,139 @@
use std::sync::Arc;

use common::OwnedBytes;
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use test::{self, Bencher};

use super::*;
use crate::column_values::u64_based::*;

fn get_data() -> Vec<u64> {
    let mut rng = StdRng::seed_from_u64(2u64);
    let mut data: Vec<_> = (100..55000_u64)
        .map(|num| num + rng.r#gen::<u8>() as u64)
        .collect();
    data.push(99_000);
    data.insert(1000, 2000);
    data.insert(2000, 100);
    data.insert(3000, 4100);
    data.insert(4000, 100);
    data.insert(5000, 800);
    data
}

fn compute_stats(vals: impl Iterator<Item = u64>) -> ColumnStats {
    let mut stats_collector = StatsCollector::default();
    for val in vals {
        stats_collector.collect(val);
    }
    stats_collector.stats()
}

#[inline(never)]
fn value_iter() -> impl Iterator<Item = u64> {
    0..20_000
}

fn get_reader_for_bench<Codec: ColumnCodec>(data: &[u64]) -> Codec::ColumnValues {
    let mut bytes = Vec::new();
    let stats = compute_stats(data.iter().cloned());
    let mut codec_serializer = Codec::estimator();
    for val in data {
        codec_serializer.collect(*val);
    }
    codec_serializer
        .serialize(&stats, Box::new(data.iter().copied()).as_mut(), &mut bytes)
        .unwrap();

    Codec::load(OwnedBytes::new(bytes)).unwrap()
}

fn bench_get<Codec: ColumnCodec>(b: &mut Bencher, data: &[u64]) {
    let col = get_reader_for_bench::<Codec>(data);
    b.iter(|| {
        let mut sum = 0u64;
        for pos in value_iter() {
            let val = col.get_val(pos as u32);
            sum = sum.wrapping_add(val);
        }
        sum
    });
}

#[inline(never)]
fn bench_get_dynamic_helper(b: &mut Bencher, col: Arc<dyn ColumnValues>) {
    b.iter(|| {
        let mut sum = 0u64;
        for pos in value_iter() {
            let val = col.get_val(pos as u32);
            sum = sum.wrapping_add(val);
        }
        sum
    });
}

fn bench_get_dynamic<Codec: ColumnCodec>(b: &mut Bencher, data: &[u64]) {
    let col = Arc::new(get_reader_for_bench::<Codec>(data));
    bench_get_dynamic_helper(b, col);
}
fn bench_create<Codec: ColumnCodec>(b: &mut Bencher, data: &[u64]) {
    let stats = compute_stats(data.iter().cloned());

    let mut bytes = Vec::new();
    b.iter(|| {
        bytes.clear();
        let mut codec_serializer = Codec::estimator();
        for val in data.iter().take(1024) {
            codec_serializer.collect(*val);
        }

        codec_serializer.serialize(&stats, Box::new(data.iter().copied()).as_mut(), &mut bytes)
    });
}

#[bench]
fn bench_fastfield_bitpack_create(b: &mut Bencher) {
    let data: Vec<_> = get_data();
    bench_create::<BitpackedCodec>(b, &data);
}
#[bench]
fn bench_fastfield_linearinterpol_create(b: &mut Bencher) {
    let data: Vec<_> = get_data();
    bench_create::<LinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_multilinearinterpol_create(b: &mut Bencher) {
    let data: Vec<_> = get_data();
    bench_create::<BlockwiseLinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_bitpack_get(b: &mut Bencher) {
    let data: Vec<_> = get_data();
    bench_get::<BitpackedCodec>(b, &data);
}
#[bench]
fn bench_fastfield_bitpack_get_dynamic(b: &mut Bencher) {
    let data: Vec<_> = get_data();
    bench_get_dynamic::<BitpackedCodec>(b, &data);
}
#[bench]
fn bench_fastfield_linearinterpol_get(b: &mut Bencher) {
    let data: Vec<_> = get_data();
    bench_get::<LinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_linearinterpol_get_dynamic(b: &mut Bencher) {
    let data: Vec<_> = get_data();
    bench_get_dynamic::<LinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_multilinearinterpol_get(b: &mut Bencher) {
    let data: Vec<_> = get_data();
    bench_get::<BlockwiseLinearCodec>(b, &data);
}
#[bench]
fn bench_fastfield_multilinearinterpol_get_dynamic(b: &mut Bencher) {
    let data: Vec<_> = get_data();
    bench_get_dynamic::<BlockwiseLinearCodec>(b, &data);
}
@@ -31,7 +31,7 @@ pub use u64_based::{
    serialize_and_load_u64_based_column_values, serialize_u64_based_column_values,
};
pub use u128_based::{
    CompactHit, CompactSpaceU64Accessor, open_u128_as_compact_u64, open_u128_mapped,
    CompactSpaceU64Accessor, open_u128_as_compact_u64, open_u128_mapped,
    serialize_column_values_u128,
};
pub use vec_column::VecColumn;
@@ -242,3 +242,6 @@ impl<T: Copy + PartialOrd + Debug + 'static> ColumnValues<T> for Arc<dyn ColumnV
            .get_row_ids_for_value_range(range, doc_id_range, positions)
    }
}

#[cfg(all(test, feature = "unstable"))]
mod bench;
@@ -1,7 +1,7 @@
use std::fmt::Debug;
use std::net::Ipv6Addr;

/// Monotonic maps a value to u128 value space
/// Montonic maps a value to u128 value space
/// Monotonic mapping enables `PartialOrd` on u128 space without conversion to original space.
pub trait MonotonicallyMappableToU128: 'static + PartialOrd + Copy + Debug + Send + Sync {
    /// Converts a value to u128.

@@ -185,10 +185,10 @@ impl CompactSpaceBuilder {
        let mut covered_space = Vec::with_capacity(self.blanks.len());

        // beginning of the blanks
        if let Some(first_blank_start) = self.blanks.first().map(RangeInclusive::start)
            && *first_blank_start != 0
        {
            covered_space.push(0..=first_blank_start - 1);
        if let Some(first_blank_start) = self.blanks.first().map(RangeInclusive::start) {
            if *first_blank_start != 0 {
                covered_space.push(0..=first_blank_start - 1);
            }
        }

        // Between the blanks
@@ -202,10 +202,10 @@ impl CompactSpaceBuilder {
        covered_space.extend(between_blanks);

        // end of the blanks
        if let Some(last_blank_end) = self.blanks.last().map(RangeInclusive::end)
            && *last_blank_end != u128::MAX
        {
            covered_space.push(last_blank_end + 1..=u128::MAX);
        if let Some(last_blank_end) = self.blanks.last().map(RangeInclusive::end) {
            if *last_blank_end != u128::MAX {
                covered_space.push(last_blank_end + 1..=u128::MAX);
            }
        }

        if covered_space.is_empty() {

@@ -292,19 +292,6 @@ impl BinarySerializable for IPCodecParams {
    }
}

/// Represents the result of looking up a u128 value in the compact space.
///
/// If a value is outside the compact space, the next compact value is returned.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum CompactHit {
    /// The value exists in the compact space
    Exact(u32),
    /// The value does not exist in the compact space, but the next higher value does
    Next(u32),
    /// The value is greater than the maximum compact value
    AfterLast,
}

/// Exposes the compact space compressed values as u64.
///
/// This allows faster access to the values, as u64 is faster to work with than u128.
@@ -322,11 +309,6 @@ impl CompactSpaceU64Accessor {
    pub fn compact_to_u128(&self, compact: u32) -> u128 {
        self.0.compact_to_u128(compact)
    }

    /// Finds the next compact space value for a given u128 value.
    pub fn u128_to_next_compact(&self, value: u128) -> CompactHit {
        self.0.u128_to_next_compact(value)
    }
}

impl ColumnValues<u64> for CompactSpaceU64Accessor {
@@ -448,26 +430,6 @@ impl CompactSpaceDecompressor {
        Ok(decompressor)
    }

    /// Finds the next compact space value for a given u128 value
    pub fn u128_to_next_compact(&self, value: u128) -> CompactHit {
        // Try to convert to compact space
        match self.u128_to_compact(value) {
            // Value is in compact space, return its compact representation
            Ok(compact) => CompactHit::Exact(compact),
            // Value is not in compact space
            Err(pos) => {
                if pos >= self.params.compact_space.ranges_mapping.len() {
                    // Value is beyond all ranges, no next value exists
                    CompactHit::AfterLast
                } else {
                    // Get the next range and return its start compact value
                    let next_range = &self.params.compact_space.ranges_mapping[pos];
                    CompactHit::Next(next_range.compact_start)
                }
            }
        }
    }

    /// Converting to compact space for the decompressor is more complex, since we may get values
    /// which are outside the compact space. e.g. if we map
    /// 1000 => 5
@@ -861,41 +823,6 @@ mod tests {
        let _data = test_aux_vals(vals);
    }

    #[test]
    fn test_u128_to_next_compact() {
        let vals = &[100u128, 200u128, 1_000_000_000u128, 1_000_000_100u128];
        let mut data = test_aux_vals(vals);

        let _header = U128Header::deserialize(&mut data);
        let decomp = CompactSpaceDecompressor::open(data).unwrap();

        // Test value that's already in a range
        let compact_100 = decomp.u128_to_compact(100).unwrap();
        assert_eq!(
            decomp.u128_to_next_compact(100),
            CompactHit::Exact(compact_100)
        );

        // Test value between two ranges
        let compact_million = decomp.u128_to_compact(1_000_000_000).unwrap();
        assert_eq!(
            decomp.u128_to_next_compact(250),
            CompactHit::Next(compact_million)
        );

        // Test value before the first range
        assert_eq!(
            decomp.u128_to_next_compact(50),
            CompactHit::Next(compact_100)
        );

        // Test value after the last range
        assert_eq!(
            decomp.u128_to_next_compact(10_000_000_000),
            CompactHit::AfterLast
        );
    }

    use proptest::prelude::*;

    fn num_strategy() -> impl Strategy<Value = u128> {
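For reference, the removed `u128_to_next_compact` gave callers a three-way outcome when mapping an arbitrary u128 into the compact space. A hedged sketch of how a range query could consume it, mirroring the removed enum (no longer importable after this diff):

```rust
// Local copy of the removed enum, for illustration only.
enum CompactHit {
    Exact(u32),
    Next(u32),
    AfterLast,
}

/// Lower bound of a range scan in compact space: exact hits and "snap to next"
/// both yield a usable start; past-the-end means the range is empty.
fn scan_start(hit: CompactHit) -> Option<u32> {
    match hit {
        CompactHit::Exact(compact) => Some(compact),
        CompactHit::Next(compact) => Some(compact),
        CompactHit::AfterLast => None,
    }
}
```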
@@ -7,7 +7,7 @@ mod compact_space;

use common::{BinarySerializable, OwnedBytes, VInt};
pub use compact_space::{
    CompactHit, CompactSpaceCompressor, CompactSpaceDecompressor, CompactSpaceU64Accessor,
    CompactSpaceCompressor, CompactSpaceDecompressor, CompactSpaceU64Accessor,
};

use crate::column_values::monotonic_map_column;

@@ -41,6 +41,12 @@ fn transform_range_before_linear_transformation(
    if range.is_empty() {
        return None;
    }
    if stats.min_value > *range.end() {
        return None;
    }
    if stats.max_value < *range.start() {
        return None;
    }
    let shifted_range =
        range.start().saturating_sub(stats.min_value)..=range.end().saturating_sub(stats.min_value);
    let start_before_gcd_multiplication: u64 = div_ceil(*shifted_range.start(), stats.gcd);
@@ -99,7 +105,7 @@ impl ColumnCodecEstimator for BitpackedCodecEstimator {

    fn estimate(&self, stats: &ColumnStats) -> Option<u64> {
        let num_bits_per_value = num_bits(stats);
        Some(stats.num_bytes() + (stats.num_rows as u64 * (num_bits_per_value as u64)).div_ceil(8))
        Some(stats.num_bytes() + (stats.num_rows as u64 * (num_bits_per_value as u64) + 7) / 8)
    }

    fn serialize(

@@ -8,7 +8,7 @@ use crate::column_values::ColumnValues;
const MID_POINT: u64 = (1u64 << 32) - 1u64;

/// `Line` describes a line function `y: ax + b` using integer
/// arithmetic.
/// arithmetics.
///
/// The slope is in fact a decimal split into a 32 bit integer value,
/// and a 32-bit decimal value.
@@ -94,7 +94,7 @@ impl Line {
    // `(i, ys[])`.
    //
    // The best intercept therefore has the form
    // `y[i] - line.eval(i)` (using wrapping arithmetic).
    // `y[i] - line.eval(i)` (using wrapping arithmetics).
    // In other words, the best intercept is one of the `y - Line::eval(ys[i])`
    // and our task is just to pick the one that minimizes our error.
    //

@@ -117,7 +117,7 @@ impl ColumnCodecEstimator for LinearCodecEstimator {
        Some(
            stats.num_bytes()
                + linear_params.num_bytes()
                + (num_bits as u64 * stats.num_rows as u64).div_ceil(8),
                + (num_bits as u64 * stats.num_rows as u64 + 7) / 8,
        )
    }

@@ -268,7 +268,7 @@ mod tests {

    #[test]
    fn linear_interpol_fast_field_rand() {
        let mut rng = rand::rng();
        let mut rng = rand::thread_rng();
        for _ in 0..50 {
            let mut data = (0..10_000).map(|_| rng.next_u64()).collect::<Vec<_>>();
            create_and_validate::<LinearCodec>(&data, "random");

@@ -52,7 +52,7 @@ pub trait ColumnCodecEstimator<T = u64>: 'static {
    ) -> io::Result<()>;
}

/// A column codec describes a column serialization format.
/// A column codec describes a colunm serialization format.
pub trait ColumnCodec<T: PartialOrd = u64> {
    /// Specialized `ColumnValues` type.
    type ColumnValues: ColumnValues<T> + 'static;

@@ -122,7 +122,7 @@ pub(crate) fn create_and_validate<TColumnCodec: ColumnCodec>(
    assert_eq!(vals, buffer);

    if !vals.is_empty() {
        let test_rand_idx = rand::rng().random_range(0..=vals.len() - 1);
        let test_rand_idx = rand::thread_rng().gen_range(0..=vals.len() - 1);
        let expected_positions: Vec<u32> = vals
            .iter()
            .enumerate()

@@ -367,7 +367,7 @@ fn is_empty_after_merge(
        ColumnIndex::Empty { .. } => true,
        ColumnIndex::Full => alive_bitset.len() == 0,
        ColumnIndex::Optional(optional_index) => {
            for doc in optional_index.iter_non_null_docs() {
            for doc in optional_index.iter_docs() {
                if alive_bitset.contains(doc) {
                    return false;
                }
@@ -1,3 +1,5 @@
#![allow(clippy::manual_div_ceil)]

mod column_type;
mod format_version;
mod merge;

@@ -244,7 +244,7 @@ impl SymbolValue for UnorderedId {

fn compute_num_bytes_for_u64(val: u64) -> usize {
    let msb = (64u32 - val.leading_zeros()) as usize;
    msb.div_ceil(8)
    (msb + 7) / 8
}

fn encode_zig_zag(n: i64) -> u64 {
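`(msb + 7) / 8` above is the manual spelling of `msb.div_ceil(8)`, matching the `clippy::manual_div_ceil` allow added at the top of the module. `encode_zig_zag` is cut off by this hunk; for reference, the standard zig-zag pair interleaves signs so small magnitudes stay small under variable-length encoding. The bodies below are the textbook formulas, assumed rather than taken from this file:

```rust
fn encode_zig_zag(n: i64) -> u64 {
    ((n << 1) ^ (n >> 63)) as u64
}

fn decode_zig_zag(z: u64) -> i64 {
    ((z >> 1) as i64) ^ -((z & 1) as i64)
}

// encode_zig_zag(0) == 0, encode_zig_zag(-1) == 1, encode_zig_zag(1) == 2, ...
```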
@@ -3,8 +3,7 @@ use std::sync::Arc;
use std::{fmt, io};

use common::file_slice::FileSlice;
use common::{ByteCount, DateTime, OwnedBytes};
use serde::{Deserialize, Serialize};
use common::{ByteCount, DateTime, HasLen, OwnedBytes};

use crate::column::{BytesColumn, Column, StrColumn};
use crate::column_values::{StrictlyMonotonicFn, monotonic_map_column};
@@ -318,89 +317,10 @@ impl DynamicColumnHandle {
    }

    pub fn num_bytes(&self) -> ByteCount {
        self.file_slice.num_bytes()
    }

    /// Legacy helper returning the column space usage.
    pub fn column_and_dictionary_num_bytes(&self) -> io::Result<ColumnSpaceUsage> {
        self.space_usage()
    }

    /// Return the space usage of the column, optionally broken down by dictionary and column
    /// values.
    ///
    /// For dictionary encoded columns (strings and bytes), this splits the total footprint into
    /// the dictionary and the remaining column data (including index and values).
    /// For all other column types, the dictionary size is `None` and the column size
    /// equals the total bytes.
    pub fn space_usage(&self) -> io::Result<ColumnSpaceUsage> {
        let total_num_bytes = self.num_bytes();
        let dynamic_column = self.open()?;
        let dictionary_num_bytes = match &dynamic_column {
            DynamicColumn::Bytes(bytes_column) => bytes_column.dictionary().num_bytes(),
            DynamicColumn::Str(str_column) => str_column.dictionary().num_bytes(),
            _ => {
                return Ok(ColumnSpaceUsage::new(self.num_bytes(), None));
            }
        };
        assert!(dictionary_num_bytes <= total_num_bytes);
        let column_num_bytes =
            ByteCount::from(total_num_bytes.get_bytes() - dictionary_num_bytes.get_bytes());
        Ok(ColumnSpaceUsage::new(
            column_num_bytes,
            Some(dictionary_num_bytes),
        ))
        self.file_slice.len().into()
    }

    pub fn column_type(&self) -> ColumnType {
        self.column_type
    }
}

/// Represents space usage of a column.
///
/// `column_num_bytes` tracks the column payload (index, values and footer).
/// For dictionary encoded columns, `dictionary_num_bytes` captures the dictionary footprint.
/// [`ColumnSpaceUsage::total_num_bytes`] returns the sum of both parts.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct ColumnSpaceUsage {
    column_num_bytes: ByteCount,
    dictionary_num_bytes: Option<ByteCount>,
}

impl ColumnSpaceUsage {
    pub(crate) fn new(
        column_num_bytes: ByteCount,
        dictionary_num_bytes: Option<ByteCount>,
    ) -> Self {
        ColumnSpaceUsage {
            column_num_bytes,
            dictionary_num_bytes,
        }
    }

    pub fn column_num_bytes(&self) -> ByteCount {
        self.column_num_bytes
    }

    pub fn dictionary_num_bytes(&self) -> Option<ByteCount> {
        self.dictionary_num_bytes
    }

    pub fn total_num_bytes(&self) -> ByteCount {
        self.column_num_bytes + self.dictionary_num_bytes.unwrap_or_default()
    }

    /// Merge two space usage values by summing their components.
    pub fn merge(&self, other: &ColumnSpaceUsage) -> ColumnSpaceUsage {
        let dictionary_num_bytes = match (self.dictionary_num_bytes, other.dictionary_num_bytes) {
            (Some(lhs), Some(rhs)) => Some(lhs + rhs),
            (Some(val), None) | (None, Some(val)) => Some(val),
            (None, None) => None,
        };
        ColumnSpaceUsage {
            column_num_bytes: self.column_num_bytes + other.column_num_bytes,
            dictionary_num_bytes,
        }
    }
}
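The removed `merge` combined the optional dictionary components with Option-aware addition. A standalone sketch of that rule (mirroring the removed match, with plain `u64` standing in for `ByteCount`):

```rust
/// `None` means "no dictionary"; merging keeps whichever side has one.
fn merge_dict_bytes(lhs: Option<u64>, rhs: Option<u64>) -> Option<u64> {
    match (lhs, rhs) {
        (Some(a), Some(b)) => Some(a + b),
        (Some(v), None) | (None, Some(v)) => Some(v),
        (None, None) => None,
    }
}
```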
@@ -17,10 +17,15 @@
//! column.
//! - [column_values]: Stores the values of a column in a dense format.

// #![cfg_attr(all(feature = "unstable", test), feature(test))]

#[cfg(test)]
#[macro_use]
extern crate more_asserts;

#[cfg(all(test, feature = "unstable"))]
extern crate test;

use std::fmt::Display;
use std::io;

@@ -48,7 +53,7 @@ pub use columnar::{
use sstable::VoidSSTable;
pub use value::{NumericalType, NumericalValue};

pub use self::dynamic_column::{ColumnSpaceUsage, DynamicColumn, DynamicColumnHandle};
pub use self::dynamic_column::{DynamicColumn, DynamicColumnHandle};

pub type RowId = u32;
pub type DocId = u32;
@@ -59,7 +64,7 @@ pub struct RowAddr {
    pub row_id: RowId,
}

pub use sstable::{Dictionary, TermOrdHit};
pub use sstable::Dictionary;
pub type Streamer<'a> = sstable::Streamer<'a, VoidSSTable>;

pub use common::DateTime;
@@ -60,7 +60,7 @@ fn test_dataframe_writer_bool() {
    let DynamicColumn::Bool(bool_col) = dyn_bool_col else {
        panic!();
    };
    let vals: Vec<Option<bool>> = (0..5).map(|doc_id| bool_col.first(doc_id)).collect();
    let vals: Vec<Option<bool>> = (0..5).map(|row_id| bool_col.first(row_id)).collect();
    assert_eq!(&vals, &[None, Some(false), None, Some(true), None,]);
}

@@ -108,7 +108,7 @@ fn test_dataframe_writer_ip_addr() {
    let DynamicColumn::IpAddr(ip_col) = dyn_bool_col else {
        panic!();
    };
    let vals: Vec<Option<Ipv6Addr>> = (0..5).map(|doc_id| ip_col.first(doc_id)).collect();
    let vals: Vec<Option<Ipv6Addr>> = (0..5).map(|row_id| ip_col.first(row_id)).collect();
    assert_eq!(
        &vals,
        &[
@@ -169,7 +169,7 @@ fn test_dictionary_encoded_str() {
    let DynamicColumn::Str(str_col) = col_handles[0].open().unwrap() else {
        panic!();
    };
    let index: Vec<Option<u64>> = (0..5).map(|doc_id| str_col.ords().first(doc_id)).collect();
    let index: Vec<Option<u64>> = (0..5).map(|row_id| str_col.ords().first(row_id)).collect();
    assert_eq!(index, &[None, Some(0), None, Some(2), Some(1)]);
    assert_eq!(str_col.num_rows(), 5);
    let mut term_buffer = String::new();
@@ -204,7 +204,7 @@ fn test_dictionary_encoded_bytes() {
        panic!();
    };
    let index: Vec<Option<u64>> = (0..5)
        .map(|doc_id| bytes_col.ords().first(doc_id))
        .map(|row_id| bytes_col.ords().first(row_id))
        .collect();
    assert_eq!(index, &[None, Some(0), None, Some(2), Some(1)]);
    assert_eq!(bytes_col.num_rows(), 5);
@@ -1,5 +1,3 @@
use std::str::FromStr;

use common::DateTime;

use crate::InvalidData;
@@ -11,23 +9,6 @@ pub enum NumericalValue {
    F64(f64),
}

impl FromStr for NumericalValue {
    type Err = ();

    fn from_str(s: &str) -> Result<Self, ()> {
        if let Ok(val_i64) = s.parse::<i64>() {
            return Ok(val_i64.into());
        }
        if let Ok(val_u64) = s.parse::<u64>() {
            return Ok(val_u64.into());
        }
        if let Ok(val_f64) = s.parse::<f64>() {
            return Ok(NumericalValue::from(val_f64).normalize());
        }
        Err(())
    }
}

impl NumericalValue {
    pub fn numerical_type(&self) -> NumericalType {
        match self {
@@ -45,7 +26,7 @@ impl NumericalValue {
            if val <= i64::MAX as u64 {
                NumericalValue::I64(val as i64)
            } else {
                NumericalValue::U64(val)
                NumericalValue::F64(val as f64)
            }
        }
        NumericalValue::I64(val) => NumericalValue::I64(val),
@@ -160,7 +141,6 @@ impl Coerce for DateTime {
#[cfg(test)]
mod tests {
    use super::NumericalType;
    use crate::NumericalValue;

    #[test]
    fn test_numerical_type_code() {
@@ -173,58 +153,4 @@ mod tests {
        }
        assert_eq!(num_numerical_type, 3);
    }

    #[test]
    fn test_parse_numerical() {
        assert_eq!(
            "123".parse::<NumericalValue>().unwrap(),
            NumericalValue::I64(123)
        );
        assert_eq!(
            "18446744073709551615".parse::<NumericalValue>().unwrap(),
            NumericalValue::U64(18446744073709551615u64)
        );
        assert_eq!(
            "1.0".parse::<NumericalValue>().unwrap(),
            NumericalValue::I64(1i64)
        );
        assert_eq!(
            "1.1".parse::<NumericalValue>().unwrap(),
            NumericalValue::F64(1.1f64)
        );
        assert_eq!(
            "-1.0".parse::<NumericalValue>().unwrap(),
            NumericalValue::I64(-1i64)
        );
    }

    #[test]
    fn test_normalize_numerical() {
        assert_eq!(
            NumericalValue::from(1u64).normalize(),
            NumericalValue::I64(1i64),
        );
        let limit_val = i64::MAX as u64 + 1u64;
        assert_eq!(
            NumericalValue::from(limit_val).normalize(),
            NumericalValue::U64(limit_val),
        );
        assert_eq!(
            NumericalValue::from(-1i64).normalize(),
            NumericalValue::I64(-1i64),
        );
        assert_eq!(
            NumericalValue::from(-2.0f64).normalize(),
            NumericalValue::I64(-2i64),
        );
        assert_eq!(
            NumericalValue::from(-2.1f64).normalize(),
            NumericalValue::F64(-2.1f64),
        );
        let large_float = 2.0f64.powf(70.0f64);
        assert_eq!(
            NumericalValue::from(large_float).normalize(),
            NumericalValue::F64(large_float),
        );
    }
}
@@ -1,6 +1,6 @@
[package]
name = "tantivy-common"
version = "0.10.0"
version = "0.9.0"
authors = ["Paul Masurel <paul@quickwit.io>", "Pascal Seitz <pascal@quickwit.io>"]
license = "MIT"
edition = "2024"
@@ -21,5 +21,5 @@ serde = { version = "1.0.136", features = ["derive"] }
[dev-dependencies]
binggan = "0.14.0"
proptest = "1.0.0"
rand = "0.9"
rand = "0.8.4"

@@ -1,6 +1,6 @@
use binggan::{BenchRunner, black_box};
use rand::rng;
use rand::seq::IteratorRandom;
use rand::thread_rng;
use tantivy_common::{BitSet, TinySet, serialize_vint_u32};

fn bench_vint() {
@@ -17,7 +17,7 @@ fn bench_vint() {
black_box(out);
});

let vals: Vec<u32> = (0..20_000).choose_multiple(&mut rng(), 100_000);
let vals: Vec<u32> = (0..20_000).choose_multiple(&mut thread_rng(), 100_000);
runner.bench_function("bench_vint_rand", move |_| {
let mut out = 0u64;
for val in vals.iter().cloned() {

@@ -9,7 +9,7 @@ use crate::ByteCount;
pub struct TinySet(u64);

impl fmt::Debug for TinySet {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.into_iter().collect::<Vec<u32>>().fmt(f)
}
}
@@ -181,17 +181,10 @@ pub struct BitSet {
len: u64,
max_value: u32,
}
impl std::fmt::Debug for BitSet {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("BitSet")
.field("len", &self.len)
.field("max_value", &self.max_value)
.finish()
}
}

#[inline(always)]
fn num_buckets(max_val: u32) -> u32 {
max_val.div_ceil(64u32)
(max_val + 63u32) / 64u32
}
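
Aside (not part of the diff): the two `num_buckets` formulations agree on every realistic input and only diverge at the very top of the `u32` range, where the manual form overflows. The comment in the lib.rs hunk further down argues the manual form compiles to faster code, which is why the lint is allowed rather than "fixed". A minimal standalone check:

```rust
// Hypothetical check, mirroring the two variants shown in the hunk above.
fn num_buckets_manual(max_val: u32) -> u32 {
    (max_val + 63u32) / 64u32
}

fn main() {
    for v in [0u32, 1, 63, 64, 65, 1_000_000] {
        // identical results across the normal range
        assert_eq!(num_buckets_manual(v), v.div_ceil(64));
    }
    // Near u32::MAX the manual form wraps: (u32::MAX + 63) overflows,
    // panicking in debug builds, while u32::MAX.div_ceil(64) is well defined.
}
```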

impl BitSet {
@@ -416,7 +409,7 @@ mod tests {
use std::collections::HashSet;

use ownedbytes::OwnedBytes;
use rand::distr::Bernoulli;
use rand::distributions::Bernoulli;
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};

@@ -1,4 +1,6 @@
#![allow(clippy::len_without_is_empty)]
// manual divceil actually generates code that is not optimal (to accept the full range of u32) and
// perf matters here.
#![allow(clippy::len_without_is_empty, clippy::manual_div_ceil)]

use std::ops::Deref;

@@ -28,9 +28,7 @@ impl BinarySerializable for VIntU128 {
writer.write_all(&buffer)
}

#[allow(clippy::unbuffered_bytes)]
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
#[allow(clippy::unbuffered_bytes)]
let mut bytes = reader.bytes();
let mut result = 0u128;
let mut shift = 0u64;
@@ -197,9 +195,7 @@ impl BinarySerializable for VInt {
writer.write_all(&buffer[0..num_bytes])
}

#[allow(clippy::unbuffered_bytes)]
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
#[allow(clippy::unbuffered_bytes)]
let mut bytes = reader.bytes();
let mut result = 0u64;
let mut shift = 0u64;
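
For orientation, the two truncated `deserialize` hunks above are the start of a 7-bits-per-byte varint decode. A self-contained sketch of the same shift-accumulate loop (the stop-bit convention and error handling here are assumptions for illustration, not taken from the diff):

```rust
/// Decode a u64 varint: the low 7 bits of each byte carry payload; a set
/// high bit marks the final byte. Returns None on truncated or oversized input.
fn decode_vint_u64(bytes: &[u8]) -> Option<u64> {
    let mut result = 0u64;
    let mut shift = 0u32;
    for &b in bytes {
        if shift >= 64 {
            return None; // malformed: too many continuation bytes
        }
        result |= u64::from(b & 0x7f) << shift;
        if b & 0x80 != 0 {
            return Some(result); // stop bit set: last byte of this integer
        }
        shift += 7;
    }
    None // input ended before the stop bit
}
```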

BIN doc/assets/images/searchbenchmark.png (new file; binary file not shown. After: 653 KiB)
@@ -60,7 +60,7 @@ At indexing, tantivy will try to interpret number and strings as different type
priority order.

Numbers will be interpreted as u64, i64 and f64 in that order.
Strings will be interpreted as rfc3339 dates or simple strings.
Strings will be interpreted as rfc3999 dates or simple strings.

The first working type is picked and is the only term that is emitted for indexing.
Note this interpretation happens on a per-document basis, and there is no effort to try to sniff
@@ -81,7 +81,7 @@ Will be interpreted as
(my_path.my_segment, String, 233) or (my_path.my_segment, u64, 233)
```

Likewise, we need to emit two tokens if the query contains an rfc3339 date.
Likewise, we need to emit two tokens if the query contains an rfc3999 date.
Indeed the date could have been actually a single token inside the text of a document at ingestion time. Generally speaking, we will always at least emit a string token in query parsing, and sometimes more.

If one more json field is defined, things get even more complicated.
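
To make the double emission concrete, here is a small sketch of the behavior described above (assumed usage of tantivy's public API; the field and path names are made up for illustration):

```rust
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, STORED, TEXT};
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    // A single catch-all JSON field, used as the default search field below.
    let attributes = schema_builder.add_json_field("attributes", TEXT | STORED);
    let index = Index::create_in_ram(schema_builder.build());

    let query_parser = QueryParser::for_index(&index, vec![attributes]);
    // "233" is ambiguous: the parser emits both a string term and a u64 term
    // under the json path, combined in a disjunction.
    let query = query_parser.parse_query("attributes.my_segment:233")?;
    println!("{query:?}");
    Ok(())
}
```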

@@ -208,7 +208,7 @@ fn main() -> tantivy::Result<()> {
// is the role of the `TopDocs` collector.

// We can now perform our query.
let top_docs = searcher.search(&query, &TopDocs::with_limit(10).order_by_score())?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;

// The actual documents still need to be
// retrieved from Tantivy's store.
@@ -226,7 +226,7 @@ fn main() -> tantivy::Result<()> {
let query = query_parser.parse_query("title:sea^20 body:whale^70")?;

let (_score, doc_address) = searcher
.search(&query, &TopDocs::with_limit(1).order_by_score())?
.search(&query, &TopDocs::with_limit(1))?
.into_iter()
.next()
.unwrap();

@@ -100,7 +100,7 @@ fn main() -> tantivy::Result<()> {
// here we want to get a hit on the 'ken' in Frankenstein
let query = query_parser.parse_query("ken")?;

let top_docs = searcher.search(&query, &TopDocs::with_limit(10).order_by_score())?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;

for (_, doc_address) in top_docs {
let retrieved_doc: TantivyDocument = searcher.doc(doc_address)?;

@@ -50,14 +50,14 @@ fn main() -> tantivy::Result<()> {
{
// Simple exact search on the date
let query = query_parser.parse_query("occurred_at:\"2022-06-22T12:53:50.53Z\"")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(5).order_by_score())?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(5))?;
assert_eq!(count_docs.len(), 1);
}
{
// Range query on the date field
let query = query_parser
.parse_query(r#"occurred_at:[2022-06-22T12:58:00Z TO 2022-06-23T00:00:00Z}"#)?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(4).order_by_score())?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(4))?;
assert_eq!(count_docs.len(), 1);
for (_score, doc_address) in count_docs {
let retrieved_doc = searcher.doc::<TantivyDocument>(doc_address)?;

@@ -28,7 +28,7 @@ fn extract_doc_given_isbn(
// The second argument is here to tell we don't care about decoding positions,
// or term frequencies.
let term_query = TermQuery::new(isbn_term.clone(), IndexRecordOption::Basic);
let top_docs = searcher.search(&term_query, &TopDocs::with_limit(1).order_by_score())?;
let top_docs = searcher.search(&term_query, &TopDocs::with_limit(1))?;

if let Some((_score, doc_address)) = top_docs.first() {
let doc = searcher.doc(*doc_address)?;

@@ -1,212 +0,0 @@
// # Filter Aggregation Example
//
// This example demonstrates filter aggregations - creating buckets of documents
// matching specific queries, with nested aggregations computed on each bucket.
//
// Filter aggregations are useful for computing metrics on different subsets of
// your data in a single query, like "average price overall + average price for
// electronics + count of in-stock items".

use serde_json::json;
use tantivy::aggregation::agg_req::Aggregations;
use tantivy::aggregation::AggregationCollector;
use tantivy::query::AllQuery;
use tantivy::schema::{Schema, FAST, INDEXED, TEXT};
use tantivy::{doc, Index};

fn main() -> tantivy::Result<()> {
// Create a simple product schema
let mut schema_builder = Schema::builder();
schema_builder.add_text_field("category", TEXT | FAST);
schema_builder.add_text_field("brand", TEXT | FAST);
schema_builder.add_u64_field("price", FAST);
schema_builder.add_f64_field("rating", FAST);
schema_builder.add_bool_field("in_stock", FAST | INDEXED);
let schema = schema_builder.build();

// Create index and add sample products
let index = Index::create_in_ram(schema.clone());
let mut writer = index.writer(50_000_000)?;

writer.add_document(doc!(
schema.get_field("category")? => "electronics",
schema.get_field("brand")? => "apple",
schema.get_field("price")? => 999u64,
schema.get_field("rating")? => 4.5f64,
schema.get_field("in_stock")? => true
))?;
writer.add_document(doc!(
schema.get_field("category")? => "electronics",
schema.get_field("brand")? => "samsung",
schema.get_field("price")? => 799u64,
schema.get_field("rating")? => 4.2f64,
schema.get_field("in_stock")? => true
))?;
writer.add_document(doc!(
schema.get_field("category")? => "clothing",
schema.get_field("brand")? => "nike",
schema.get_field("price")? => 120u64,
schema.get_field("rating")? => 4.1f64,
schema.get_field("in_stock")? => false
))?;
writer.add_document(doc!(
schema.get_field("category")? => "books",
schema.get_field("brand")? => "penguin",
schema.get_field("price")? => 25u64,
schema.get_field("rating")? => 4.8f64,
schema.get_field("in_stock")? => true
))?;

writer.commit()?;

let reader = index.reader()?;
let searcher = reader.searcher();

// Example 1: Basic filter with metric aggregation
println!("=== Example 1: Electronics average price ===");
let agg_req = json!({
"electronics": {
"filter": "category:electronics",
"aggs": {
"avg_price": { "avg": { "field": "price" } }
}
}
});

let agg: Aggregations = serde_json::from_value(agg_req)?;
let collector = AggregationCollector::from_aggs(agg, Default::default());
let result = searcher.search(&AllQuery, &collector)?;

let expected = json!({
"electronics": {
"doc_count": 2,
"avg_price": { "value": 899.0 }
}
});
assert_eq!(serde_json::to_value(&result)?, expected);
println!("{}\n", serde_json::to_string_pretty(&result)?);

// Example 2: Multiple independent filters
println!("=== Example 2: Multiple filters in one query ===");
let agg_req = json!({
"electronics": {
"filter": "category:electronics",
"aggs": { "avg_price": { "avg": { "field": "price" } } }
},
"in_stock": {
"filter": "in_stock:true",
"aggs": { "count": { "value_count": { "field": "brand" } } }
},
"high_rated": {
"filter": "rating:[4.5 TO *]",
"aggs": { "count": { "value_count": { "field": "brand" } } }
}
});

let agg: Aggregations = serde_json::from_value(agg_req)?;
let collector = AggregationCollector::from_aggs(agg, Default::default());
let result = searcher.search(&AllQuery, &collector)?;

let expected = json!({
"electronics": {
"doc_count": 2,
"avg_price": { "value": 899.0 }
},
"in_stock": {
"doc_count": 3,
"count": { "value": 3.0 }
},
"high_rated": {
"doc_count": 2,
"count": { "value": 2.0 }
}
});
assert_eq!(serde_json::to_value(&result)?, expected);
println!("{}\n", serde_json::to_string_pretty(&result)?);

// Example 3: Nested filters - progressive refinement
println!("=== Example 3: Nested filters ===");
let agg_req = json!({
"in_stock": {
"filter": "in_stock:true",
"aggs": {
"electronics": {
"filter": "category:electronics",
"aggs": {
"expensive": {
"filter": "price:[800 TO *]",
"aggs": {
"avg_rating": { "avg": { "field": "rating" } }
}
}
}
}
}
}
});

let agg: Aggregations = serde_json::from_value(agg_req)?;
let collector = AggregationCollector::from_aggs(agg, Default::default());
let result = searcher.search(&AllQuery, &collector)?;

let expected = json!({
"in_stock": {
"doc_count": 3, // apple, samsung, penguin
"electronics": {
"doc_count": 2, // apple, samsung
"expensive": {
"doc_count": 1, // only apple (999)
"avg_rating": { "value": 4.5 }
}
}
}
});
assert_eq!(serde_json::to_value(&result)?, expected);
println!("{}\n", serde_json::to_string_pretty(&result)?);

// Example 4: Filter with sub-aggregation (terms)
println!("=== Example 4: Filter with terms sub-aggregation ===");
let agg_req = json!({
"electronics": {
"filter": "category:electronics",
"aggs": {
"by_brand": {
"terms": { "field": "brand" },
"aggs": {
"avg_price": { "avg": { "field": "price" } }
}
}
}
}
});

let agg: Aggregations = serde_json::from_value(agg_req)?;
let collector = AggregationCollector::from_aggs(agg, Default::default());
let result = searcher.search(&AllQuery, &collector)?;

let expected = json!({
"electronics": {
"doc_count": 2,
"by_brand": {
"buckets": [
{
"key": "samsung",
"doc_count": 1,
"avg_price": { "value": 799.0 }
},
{
"key": "apple",
"doc_count": 1,
"avg_price": { "value": 999.0 }
}
],
"sum_other_doc_count": 0,
"doc_count_error_upper_bound": 0
}
}
});
assert_eq!(serde_json::to_value(&result)?, expected);
println!("{}", serde_json::to_string_pretty(&result)?);

Ok(())
}
@@ -85,6 +85,7 @@ fn main() -> tantivy::Result<()> {
index_writer.add_document(doc!(
title => "The Diary of a Young Girl",
))?;
index_writer.commit()?;

// ### Committing
//
@@ -145,7 +146,7 @@ fn main() -> tantivy::Result<()> {
let query = FuzzyTermQuery::new(term, 2, true);

let (top_docs, count) = searcher
.search(&query, &(TopDocs::with_limit(5).order_by_score(), Count))
.search(&query, &(TopDocs::with_limit(5), Count))
.unwrap();
assert_eq!(count, 3);
assert_eq!(top_docs.len(), 3);

@@ -69,25 +69,25 @@ fn main() -> tantivy::Result<()> {
{
// Inclusive range queries
let query = query_parser.parse_query("ip:[192.168.0.80 TO 192.168.0.100]")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(5).order_by_score())?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(5))?;
assert_eq!(count_docs.len(), 1);
}
{
// Exclusive range queries
let query = query_parser.parse_query("ip:{192.168.0.80 TO 192.168.1.100]")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(count_docs.len(), 0);
}
{
// Find docs with IP addresses smaller equal 192.168.1.100
let query = query_parser.parse_query("ip:[* TO 192.168.1.100]")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(count_docs.len(), 2);
}
{
// Find docs with IP addresses smaller than 192.168.1.100
let query = query_parser.parse_query("ip:[* TO 192.168.1.100}")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(count_docs.len(), 2);
}

@@ -59,12 +59,12 @@ fn main() -> tantivy::Result<()> {
let query_parser = QueryParser::for_index(&index, vec![event_type, attributes]);
{
let query = query_parser.parse_query("target:submit-button")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(count_docs.len(), 2);
}
{
let query = query_parser.parse_query("target:submit")?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let count_docs = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(count_docs.len(), 2);
}
{
@@ -74,33 +74,33 @@ fn main() -> tantivy::Result<()> {
}
{
let query = query_parser.parse_query("click AND cart.product_id:133")?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(hits.len(), 1);
}
{
// The sub-fields in the json field marked as default field still need to be explicitly
// addressed
let query = query_parser.parse_query("click AND 133")?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(hits.len(), 0);
}
{
// Default json fields are ignored if they collide with the schema
let query = query_parser.parse_query("event_type:holiday-sale")?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(hits.len(), 0);
}
// # Query via full attribute path
{
// This only searches in our schema's `event_type` field
let query = query_parser.parse_query("event_type:click")?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(hits.len(), 2);
}
{
// Default json fields can still be accessed by full path
let query = query_parser.parse_query("attributes.event_type:holiday-sale")?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2).order_by_score())?;
let hits = searcher.search(&*query, &TopDocs::with_limit(2))?;
assert_eq!(hits.len(), 1);
}
Ok(())

@@ -63,7 +63,7 @@ fn main() -> Result<()> {
// but not "in the Gulf Stream".
let query = query_parser.parse_query("\"in the su\"*")?;

let top_docs = searcher.search(&query, &TopDocs::with_limit(10).order_by_score())?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;
let mut titles = top_docs
.into_iter()
.map(|(_score, doc_address)| {

@@ -107,8 +107,7 @@ fn main() -> tantivy::Result<()> {
IndexRecordOption::Basic,
);

let (top_docs, count) =
searcher.search(&query, &(TopDocs::with_limit(2).order_by_score(), Count))?;
let (top_docs, count) = searcher.search(&query, &(TopDocs::with_limit(2), Count))?;

assert_eq!(count, 2);

@@ -129,8 +128,7 @@ fn main() -> tantivy::Result<()> {
IndexRecordOption::Basic,
);

let (_top_docs, count) =
searcher.search(&query, &(TopDocs::with_limit(2).order_by_score(), Count))?;
let (_top_docs, count) = searcher.search(&query, &(TopDocs::with_limit(2), Count))?;

assert_eq!(count, 0);

@@ -50,7 +50,7 @@ fn main() -> tantivy::Result<()> {
let query_parser = QueryParser::for_index(&index, vec![title, body]);
let query = query_parser.parse_query("sycamore spring")?;

let top_docs = searcher.search(&query, &TopDocs::with_limit(10).order_by_score())?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;

let snippet_generator = SnippetGenerator::create(&searcher, &*query, body)?;

@@ -102,7 +102,7 @@ fn main() -> tantivy::Result<()> {
// stop words are applied on the query as well.
// The following will be equivalent to `title:frankenstein`
let query = query_parser.parse_query("title:\"the Frankenstein\"")?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10).order_by_score())?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;

for (score, doc_address) in top_docs {
let retrieved_doc: TantivyDocument = searcher.doc(doc_address)?;

@@ -164,7 +164,7 @@ fn main() -> tantivy::Result<()> {
move |doc_id: DocId| Reverse(price[doc_id as usize])
};

let most_expensive_first = TopDocs::with_limit(10).order_by(score_by_price);
let most_expensive_first = TopDocs::with_limit(10).custom_score(score_by_price);

let hits = searcher.search(&query, &most_expensive_first)?;
assert_eq!(

@@ -1,6 +1,6 @@
[package]
name = "tantivy-query-grammar"
version = "0.25.0"
version = "0.24.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT"
categories = ["database-implementations", "data-structures"]
@@ -15,5 +15,3 @@ edition = "2024"
nom = "7"
serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140"
ordered-float = "5.0.0"
fnv = "1.0.7"

@@ -117,22 +117,6 @@ where F: nom::Parser<I, (O, ErrorList), Infallible> {
}
}

pub(crate) fn terminated_infallible<I, O1, O2, F, G>(
mut first: F,
mut second: G,
) -> impl FnMut(I) -> JResult<I, O1>
where
F: nom::Parser<I, (O1, ErrorList), Infallible>,
G: nom::Parser<I, (O2, ErrorList), Infallible>,
{
move |input: I| {
let (input, (o1, mut err)) = first.parse(input)?;
let (input, (_, mut err2)) = second.parse(input)?;
err.append(&mut err2);
Ok((input, (o1, err)))
}
}

pub(crate) fn delimited_infallible<I, O1, O2, O3, F, G, H>(
mut first: F,
mut second: G,

@@ -31,17 +31,7 @@ pub fn parse_query_lenient(query: &str) -> (UserInputAst, Vec<LenientError>) {

#[cfg(test)]
mod tests {
use crate::{UserInputAst, parse_query, parse_query_lenient};

#[test]
fn test_deduplication() {
let ast: UserInputAst = parse_query("a a").unwrap();
let json = serde_json::to_string(&ast).unwrap();
assert_eq!(
json,
r#"{"type":"bool","clauses":[[null,{"type":"literal","field_name":null,"phrase":"a","delimiter":"none","slop":0,"prefix":false}]]}"#
);
}
use crate::{parse_query, parse_query_lenient};

#[test]
fn test_parse_query_serialization() {

@@ -1,7 +1,6 @@
use std::borrow::Cow;
use std::iter::once;

use fnv::FnvHashSet;
use nom::IResult;
use nom::branch::alt;
use nom::bytes::complete::tag;
@@ -37,7 +36,7 @@ fn field_name(inp: &str) -> IResult<&str, String> {
alt((first_char, escape_sequence())),
many0(alt((simple_char, escape_sequence(), char('\\')))),
)),
tuple((multispace0, char(':'), multispace0)),
char(':'),
),
|(first_char, next)| once(first_char).chain(next).collect(),
)(inp)
@@ -69,7 +68,7 @@ fn interpret_escape(source: &str) -> String {

/// Consume a word outside of any context.
// TODO should support escape sequences
fn word(inp: &str) -> IResult<&str, Cow<'_, str>> {
fn word(inp: &str) -> IResult<&str, Cow<str>> {
map_res(
recognize(tuple((
alt((
@@ -367,10 +366,7 @@ fn literal(inp: &str) -> IResult<&str, UserInputAst> {
// something (a field name) got parsed before
alt((
map(
tuple((
opt(field_name),
alt((range, set, exists, regex, term_or_phrase)),
)),
tuple((opt(field_name), alt((range, set, exists, term_or_phrase)))),
|(field_name, leaf): (Option<String>, UserInputLeaf)| leaf.set_field(field_name).into(),
),
term_group,
@@ -392,10 +388,6 @@ fn literal_no_group_infallible(inp: &str) -> JResult<&str, Option<UserInputAst>>
value((), peek(one_of("{[><"))),
map(range_infallible, |(range, errs)| (Some(range), errs)),
),
(
value((), peek(one_of("/"))),
map(regex_infallible, |(regex, errs)| (Some(regex), errs)),
),
),
delimited_infallible(space0_infallible, term_or_phrase_infallible, nothing),
),
@@ -560,7 +552,7 @@ fn range_infallible(inp: &str) -> JResult<&str, UserInputLeaf> {
(
(
value((), tag(">=")),
map(word_infallible(")", false), |(bound, err)| {
map(word_infallible("", false), |(bound, err)| {
(
(
bound
@@ -574,7 +566,7 @@ fn range_infallible(inp: &str) -> JResult<&str, UserInputLeaf> {
),
(
value((), tag("<=")),
map(word_infallible(")", false), |(bound, err)| {
map(word_infallible("", false), |(bound, err)| {
(
(
UserInputBound::Unbounded,
@@ -588,7 +580,7 @@ fn range_infallible(inp: &str) -> JResult<&str, UserInputLeaf> {
),
(
value((), tag(">")),
map(word_infallible(")", false), |(bound, err)| {
map(word_infallible("", false), |(bound, err)| {
(
(
bound
@@ -602,7 +594,7 @@ fn range_infallible(inp: &str) -> JResult<&str, UserInputLeaf> {
),
(
value((), tag("<")),
map(word_infallible(")", false), |(bound, err)| {
map(word_infallible("", false), |(bound, err)| {
(
(
UserInputBound::Unbounded,
@@ -696,69 +688,6 @@ fn set_infallible(mut inp: &str) -> JResult<&str, UserInputLeaf> {
}
}

fn regex(inp: &str) -> IResult<&str, UserInputLeaf> {
map(
terminated(
delimited(
char('/'),
many1(alt((preceded(char('\\'), char('/')), none_of("/")))),
char('/'),
),
peek(alt((
value((), multispace1),
value((), char(')')),
value((), eof),
))),
),
|elements| UserInputLeaf::Regex {
field: None,
pattern: elements.into_iter().collect::<String>(),
},
)(inp)
}

fn regex_infallible(inp: &str) -> JResult<&str, UserInputLeaf> {
match terminated_infallible(
delimited_infallible(
opt_i_err(char('/'), "missing delimiter /"),
opt_i(many1(alt((preceded(char('\\'), char('/')), none_of("/"))))),
opt_i_err(char('/'), "missing delimiter /"),
),
opt_i_err(
peek(alt((
value((), multispace1),
value((), char(')')),
value((), eof),
))),
"expected whitespace, closing parenthesis, or end of input",
),
)(inp)
{
Ok((rest, (elements_part, errors))) => {
let pattern = match elements_part {
Some(elements_part) => elements_part.into_iter().collect(),
None => String::new(),
};
let res = UserInputLeaf::Regex {
field: None,
pattern,
};
Ok((rest, (res, errors)))
}
Err(e) => {
let errs = vec![LenientErrorInternal {
pos: inp.len(),
message: e.to_string(),
}];
let res = UserInputLeaf::Regex {
field: None,
pattern: String::new(),
};
Ok((inp, (res, errs)))
}
}
}

fn negate(expr: UserInputAst) -> UserInputAst {
expr.unary(Occur::MustNot)
}
@@ -766,17 +695,7 @@ fn negate(expr: UserInputAst) -> UserInputAst {
fn leaf(inp: &str) -> IResult<&str, UserInputAst> {
alt((
delimited(char('('), ast, char(')')),
map(
terminated(
char('*'),
peek(alt((
value((), multispace1),
value((), char(')')),
value((), eof),
))),
),
|_| UserInputAst::from(UserInputLeaf::All),
),
map(char('*'), |_| UserInputAst::from(UserInputLeaf::All)),
map(preceded(tuple((tag("NOT"), multispace1)), leaf), negate),
literal,
))(inp)
@@ -797,17 +716,7 @@ fn leaf_infallible(inp: &str) -> JResult<&str, Option<UserInputAst>> {
),
),
(
value(
(),
terminated(
char('*'),
peek(alt((
value((), multispace1),
value((), char(')')),
value((), eof),
))),
),
),
value((), char('*')),
map(nothing, |_| {
(Some(UserInputAst::from(UserInputLeaf::All)), Vec::new())
}),
@@ -843,7 +752,7 @@ fn boosted_leaf(inp: &str) -> IResult<&str, UserInputAst> {
tuple((leaf, fallible(boost))),
|(leaf, boost_opt)| match boost_opt {
Some(boost) if (boost - 1.0).abs() > f64::EPSILON => {
UserInputAst::Boost(Box::new(leaf), boost.into())
UserInputAst::Boost(Box::new(leaf), boost)
}
_ => leaf,
},
@@ -855,7 +764,7 @@ fn boosted_leaf_infallible(inp: &str) -> JResult<&str, Option<UserInputAst>> {
tuple_infallible((leaf_infallible, boost)),
|((leaf, boost_opt), error)| match boost_opt {
Some(boost) if (boost - 1.0).abs() > f64::EPSILON => (
leaf.map(|leaf| UserInputAst::Boost(Box::new(leaf), boost.into())),
leaf.map(|leaf| UserInputAst::Boost(Box::new(leaf), boost)),
error,
),
_ => (leaf, error),
@@ -1106,25 +1015,12 @@ pub fn parse_to_ast_lenient(query_str: &str) -> (UserInputAst, Vec<LenientError>
(rewrite_ast(res), errors)
}

/// Removes unnecessary children clauses in AST
///
/// Motivated by [issue #1433](https://github.com/quickwit-oss/tantivy/issues/1433)
fn rewrite_ast(mut input: UserInputAst) -> UserInputAst {
if let UserInputAst::Clause(sub_clauses) = &mut input {
// call rewrite_ast recursively on children clauses if applicable
let mut new_clauses = Vec::with_capacity(sub_clauses.len());
for (occur, clause) in sub_clauses.drain(..) {
let rewritten_clause = rewrite_ast(clause);
new_clauses.push((occur, rewritten_clause));
}
*sub_clauses = new_clauses;

// remove duplicate child clauses
// e.g. (+a +b) OR (+c +d) OR (+a +b) => (+a +b) OR (+c +d)
let mut seen = FnvHashSet::default();
sub_clauses.retain(|term| seen.insert(term.clone()));

// Removes unnecessary children clauses in AST
//
// Motivated by [issue #1433](https://github.com/quickwit-oss/tantivy/issues/1433)
for term in sub_clauses {
if let UserInputAst::Clause(terms) = &mut input {
for term in terms {
rewrite_ast_clause(term);
}
}
@@ -1331,14 +1227,6 @@ mod test {
test_parse_query_to_ast_helper("<a", "{\"*\" TO \"a\"}");
test_parse_query_to_ast_helper("<=a", "{\"*\" TO \"a\"]");
test_parse_query_to_ast_helper("<=bsd", "{\"*\" TO \"bsd\"]");

test_parse_query_to_ast_helper("(<=42)", "{\"*\" TO \"42\"]");
test_parse_query_to_ast_helper("(<=42 )", "{\"*\" TO \"42\"]");
test_parse_query_to_ast_helper("(age:>5)", "\"age\":{\"5\" TO \"*\"}");
test_parse_query_to_ast_helper(
"(title:bar AND age:>12)",
"(+\"title\":bar +\"age\":{\"12\" TO \"*\"})",
);
}

#[test]
@@ -1394,10 +1282,6 @@ mod test {
super::field_name("~my~field:a"),
Ok(("a", "~my~field".to_string()))
);
assert_eq!(
super::field_name(".my.field.name : a"),
Ok(("a", ".my.field.name".to_string()))
);
for special_char in SPECIAL_CHARS.iter() {
let query = &format!("\\{special_char}my\\{special_char}field:a");
assert_eq!(
@@ -1707,25 +1591,6 @@ mod test {
test_parse_query_to_ast_helper("abc:a b", "(*\"abc\":a *b)");
test_parse_query_to_ast_helper("abc:\"a b\"", "\"abc\":\"a b\"");
test_parse_query_to_ast_helper("foo:[1 TO 5]", "\"foo\":[\"1\" TO \"5\"]");

// Phrase prefixed with *
test_parse_query_to_ast_helper("foo:(*A)", "\"foo\":*A");
test_parse_query_to_ast_helper("*A", "*A");
test_parse_query_to_ast_helper("(*A)", "*A");
test_parse_query_to_ast_helper("foo:(A OR B)", "(?\"foo\":A ?\"foo\":B)");
test_parse_query_to_ast_helper("foo:(A* OR B*)", "(?\"foo\":A* ?\"foo\":B*)");
test_parse_query_to_ast_helper("foo:(*A OR *B)", "(?\"foo\":*A ?\"foo\":*B)");

// Regexes between parentheses
test_parse_query_to_ast_helper("foo:(/A.*/)", "\"foo\":/A.*/");
test_parse_query_to_ast_helper("foo:(/A.*/ OR /B.*/)", "(?\"foo\":/A.*/ ?\"foo\":/B.*/)");
}

#[test]
fn test_parse_query_all() {
test_parse_query_to_ast_helper("*", "*");
test_parse_query_to_ast_helper("(*)", "*");
test_parse_query_to_ast_helper("(* )", "*");
}

#[test]
@@ -1823,72 +1688,4 @@ mod test {
fn test_invalid_field() {
test_is_parse_err(r#"!bc:def"#, "!bc:def");
}

#[test]
fn test_regex_parser() {
let r = parse_to_ast(r#"a:/joh?n(ath[oa]n)/"#);
assert!(r.is_ok(), "Failed to parse custom query: {r:?}");
let (_, input) = r.unwrap();
match input {
UserInputAst::Leaf(leaf) => match leaf.as_ref() {
UserInputLeaf::Regex { field, pattern } => {
assert_eq!(field, &Some("a".to_string()));
assert_eq!(pattern, "joh?n(ath[oa]n)");
}
_ => panic!("Expected a regex leaf, got {leaf:?}"),
},
_ => panic!("Expected a leaf"),
}
let r = parse_to_ast(r#"a:/\\/cgi-bin\\/luci.*/"#);
assert!(r.is_ok(), "Failed to parse custom query: {r:?}");
let (_, input) = r.unwrap();
match input {
UserInputAst::Leaf(leaf) => match leaf.as_ref() {
UserInputLeaf::Regex { field, pattern } => {
assert_eq!(field, &Some("a".to_string()));
assert_eq!(pattern, "\\/cgi-bin\\/luci.*");
}
_ => panic!("Expected a regex leaf, got {leaf:?}"),
},
_ => panic!("Expected a leaf"),
}
}

#[test]
fn test_regex_parser_lenient() {
let literal = |query| literal_infallible(query).unwrap().1;

let (res, errs) = literal(r#"a:/joh?n(ath[oa]n)/"#);
let expected = UserInputLeaf::Regex {
field: Some("a".to_string()),
pattern: "joh?n(ath[oa]n)".to_string(),
}
.into();
assert_eq!(res.unwrap(), expected);
assert!(errs.is_empty(), "Expected no errors, got: {errs:?}");

let (res, errs) = literal("title:/joh?n(ath[oa]n)");
let expected = UserInputLeaf::Regex {
field: Some("title".to_string()),
pattern: "joh?n(ath[oa]n)".to_string(),
}
.into();
assert_eq!(res.unwrap(), expected);
assert_eq!(errs.len(), 1, "Expected 1 error, got: {errs:?}");
assert_eq!(
errs[0].message, "missing delimiter /",
"Unexpected error message",
);
}

#[test]
fn test_space_before_value() {
test_parse_query_to_ast_helper("field : a", r#""field":a"#);
test_parse_query_to_ast_helper("field: a", r#""field":a"#);
test_parse_query_to_ast_helper("field :a", r#""field":a"#);
test_parse_query_to_ast_helper(
"field : 'happy tax payer' AND other_field : 1",
r#"(+"field":'happy tax payer' +"other_field":1)"#,
);
}
}

@@ -5,7 +5,7 @@ use serde::Serialize;

use crate::Occur;

#[derive(PartialEq, Eq, Hash, Clone, Serialize)]
#[derive(PartialEq, Clone, Serialize)]
#[serde(tag = "type")]
#[serde(rename_all = "snake_case")]
pub enum UserInputLeaf {
@@ -23,10 +23,6 @@ pub enum UserInputLeaf {
Exists {
field: String,
},
Regex {
field: Option<String>,
pattern: String,
},
}

impl UserInputLeaf {
@@ -50,7 +46,6 @@ impl UserInputLeaf {
UserInputLeaf::Exists { field: _ } => UserInputLeaf::Exists {
field: field.expect("Exist query without a field isn't allowed"),
},
UserInputLeaf::Regex { field: _, pattern } => UserInputLeaf::Regex { field, pattern },
}
}

@@ -66,7 +61,6 @@ impl UserInputLeaf {
}
UserInputLeaf::Range { field, .. } if field.is_none() => *field = Some(default_field),
UserInputLeaf::Set { field, .. } if field.is_none() => *field = Some(default_field),
UserInputLeaf::Regex { field, .. } if field.is_none() => *field = Some(default_field),
_ => (), // field was already set, do nothing
}
}
@@ -109,19 +103,11 @@ impl Debug for UserInputLeaf {
UserInputLeaf::Exists { field } => {
write!(formatter, "$exists(\"{field}\")")
}
UserInputLeaf::Regex { field, pattern } => {
if let Some(field) = field {
// TODO properly escape field (in case of \")
write!(formatter, "\"{field}\":")?;
}
// TODO properly escape pattern (in case of \")
write!(formatter, "/{pattern}/")
}
}
}
}

#[derive(Copy, Clone, Eq, PartialEq, Hash, Debug, Serialize)]
#[derive(Copy, Clone, Eq, PartialEq, Debug, Serialize)]
#[serde(rename_all = "snake_case")]
pub enum Delimiter {
SingleQuotes,
@@ -129,7 +115,7 @@ pub enum Delimiter {
None,
}

#[derive(PartialEq, Eq, Hash, Clone, Serialize)]
#[derive(PartialEq, Clone, Serialize)]
#[serde(rename_all = "snake_case")]
pub struct UserInputLiteral {
pub field_name: Option<String>,
@@ -168,7 +154,7 @@ impl fmt::Debug for UserInputLiteral {
}
}

#[derive(PartialEq, Eq, Hash, Debug, Clone, Serialize)]
#[derive(PartialEq, Debug, Clone, Serialize)]
#[serde(tag = "type", content = "value")]
#[serde(rename_all = "snake_case")]
pub enum UserInputBound {
@@ -205,11 +191,11 @@ impl UserInputBound {
}
}

#[derive(PartialEq, Eq, Hash, Clone, Serialize)]
#[derive(PartialEq, Clone, Serialize)]
#[serde(into = "UserInputAstSerde")]
pub enum UserInputAst {
Clause(Vec<(Option<Occur>, UserInputAst)>),
Boost(Box<UserInputAst>, ordered_float::OrderedFloat<f64>),
Boost(Box<UserInputAst>, f64),
Leaf(Box<UserInputLeaf>),
}

@@ -231,10 +217,9 @@ impl From<UserInputAst> for UserInputAstSerde {
fn from(ast: UserInputAst) -> Self {
match ast {
UserInputAst::Clause(clause) => UserInputAstSerde::Bool { clauses: clause },
UserInputAst::Boost(underlying, boost) => UserInputAstSerde::Boost {
underlying,
boost: boost.into_inner(),
},
UserInputAst::Boost(underlying, boost) => {
UserInputAstSerde::Boost { underlying, boost }
}
UserInputAst::Leaf(leaf) => UserInputAstSerde::Leaf(leaf),
}
}
@@ -393,7 +378,7 @@ mod tests {
#[test]
fn test_boost_serialization() {
let inner_ast = UserInputAst::Leaf(Box::new(UserInputLeaf::All));
let boost_ast = UserInputAst::Boost(Box::new(inner_ast), 2.5.into());
let boost_ast = UserInputAst::Boost(Box::new(inner_ast), 2.5);
let json = serde_json::to_string(&boost_ast).unwrap();
assert_eq!(
json,
@@ -420,7 +405,7 @@ mod tests {
}))),
),
])),
2.5.into(),
2.5,
);
let json = serde_json::to_string(&boost_ast).unwrap();
assert_eq!(

@@ -1,27 +0,0 @@
[package]
name = "sketches-ddsketch"
version = "0.3.0"
authors = ["Mike Heffner <mikeh@fesnel.com>"]
edition = "2018"
license = "Apache-2.0"
readme = "README.md"
repository = "https://github.com/mheffner/rust-sketches-ddsketch"
homepage = "https://github.com/mheffner/rust-sketches-ddsketch"
description = """
A direct port of the Golang DDSketch implementation.
"""
exclude = [".gitignore"]

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
serde = { package = "serde", version = "1.0", optional = true, features = ["derive", "serde_derive"] }

[dev-dependencies]
approx = "0.5.1"
rand = "0.8.5"
rand_distr = "0.4.3"

[features]
use_serde = ["serde", "serde/derive"]

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
(standard Apache-2.0 license text omitted)

@@ -1,11 +0,0 @@
clean:
cargo clean

test:
cargo test

test_logs:
cargo test -- --nocapture

test_performance:
cargo test --release --jobs 1 test_performance -- --ignored --nocapture

@@ -1,37 +0,0 @@
# sketches-ddsketch

This is a direct port of the [Golang](https://github.com/DataDog/sketches-go)
[DDSketch](https://arxiv.org/pdf/1908.10693.pdf) quantile sketch implementation
to Rust. DDSketch is a fully-mergeable quantile sketch with relative-error
guarantees and is extremely fast.

# DDSketch

* Sketch size automatically grows as needed, starting with 128 bins.
* Extremely fast sample insertion and sketch merges.

## Usage

```rust
use sketches_ddsketch::{Config, DDSketch};

let config = Config::defaults();
let mut sketch = DDSketch::new(config);

sketch.add(1.0);
sketch.add(1.0);
sketch.add(1.0);

// Get p=50%
let quantile = sketch.quantile(0.5).unwrap();
assert_eq!(quantile, Some(1.0));
```

## Performance

No performance tuning has been done with this implementation of the port, so we
would expect similar profiles to the original implementation.

Out of the box we can achieve over 70M sample inserts/sec and 350K sketch
merges/sec. All tests run on a single core Intel i7 processor with 4.2Ghz max
clock.
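
Merging, advertised above as DDSketch's defining property, is the other half of the API; a minimal sketch (method names as exposed by the crate's public API, to the best of my knowledge):

```rust
use sketches_ddsketch::{Config, DDSketch};

fn main() {
    let mut hour_1 = DDSketch::new(Config::defaults());
    let mut hour_2 = DDSketch::new(Config::defaults());
    hour_1.add(10.0);
    hour_2.add(20.0);

    // Fold hour_2 into hour_1; merging requires both sketches to share
    // the same Config, otherwise an error is returned.
    hour_1.merge(&hour_2).expect("configs match");
    let median = hour_1.quantile(0.5).unwrap();
    println!("median over both hours: {median:?}");
}
```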
|
||||
@@ -1,98 +0,0 @@
#[cfg(feature = "use_serde")]
use serde::{Deserialize, Serialize};

const DEFAULT_MAX_BINS: u32 = 2048;
const DEFAULT_ALPHA: f64 = 0.01;
const DEFAULT_MIN_VALUE: f64 = 1.0e-9;

/// The configuration struct for constructing a `DDSketch`
#[derive(Copy, Clone, Debug, PartialEq)]
#[cfg_attr(feature = "use_serde", derive(Serialize, Deserialize))]
pub struct Config {
    pub max_num_bins: u32,
    pub gamma: f64,
    pub(crate) gamma_ln: f64,
    pub(crate) min_value: f64,
    pub offset: i32,
}

fn log_gamma(value: f64, gamma_ln: f64) -> f64 {
    value.ln() / gamma_ln
}

impl Config {
    /// Construct a new `Config` struct with specific parameters. If you are unsure of how to
    /// configure this, the `defaults` method constructs a `Config` with built-in defaults.
    ///
    /// `max_num_bins` is the max number of bins the DDSketch will grow to, in steps of 128 bins.
    pub fn new(alpha: f64, max_num_bins: u32, min_value: f64) -> Self {
        // Aligned with Java's LogarithmicMapping / LogLikeIndexMapping:
        //   gamma = (1 + alpha) / (1 - alpha)   (correctingFactor=1 for LogarithmicMapping)
        //   gamma_ln = gamma.ln()               (not ln_1p, to match Java's Math.log(gamma))
        // See: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/mapping/LogLikeIndexMapping.java (gamma() static method)
        // See: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/mapping/LogarithmicMapping.java (constructor, correctingFactor()=1)
        let gamma = (1.0 + alpha) / (1.0 - alpha);
        let gamma_ln = gamma.ln();

        Config {
            max_num_bins,
            gamma,
            gamma_ln,
            min_value,
            offset: 1 - (log_gamma(min_value, gamma_ln) as i32),
        }
    }

    /// Return a `Config` using built-in default settings
    pub fn defaults() -> Self {
        Self::new(DEFAULT_ALPHA, DEFAULT_MAX_BINS, DEFAULT_MIN_VALUE)
    }

    pub fn key(&self, v: f64) -> i32 {
        // Aligned with Java's LogLikeIndexMapping.index(): floor-based indexing.
        // Java uses `(int) index` / `(int) index - 1` which is equivalent to floor().
        // See: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/mapping/LogLikeIndexMapping.java (index() method)
        self.log_gamma(v).floor() as i32
    }

    pub fn value(&self, key: i32) -> f64 {
        // Aligned with Java's LogLikeIndexMapping.value():
        //   lowerBound(index) * (1 + relativeAccuracy)
        //   = logInverse((index - indexOffset) / multiplier) * (1 + relativeAccuracy)
        //   = gamma^key * 2*gamma/(gamma+1)
        // See: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/mapping/LogLikeIndexMapping.java (value() and lowerBound() methods)
        self.pow_gamma(key) * (2.0 * self.gamma / (1.0 + self.gamma))
    }

    pub fn log_gamma(&self, value: f64) -> f64 {
        log_gamma(value, self.gamma_ln)
    }

    pub fn pow_gamma(&self, key: i32) -> f64 {
        ((key as f64) * self.gamma_ln).exp()
    }

    pub fn min_possible(&self) -> f64 {
        self.min_value
    }

    /// Reconstruct a Config from a gamma value (as decoded from the binary format).
    /// Uses default max_num_bins and min_value.
    /// See Java: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/mapping/LogarithmicMapping.java (LogarithmicMapping(double gamma, double indexOffset) constructor)
    pub(crate) fn from_gamma(gamma: f64) -> Self {
        let gamma_ln = gamma.ln();
        Config {
            max_num_bins: DEFAULT_MAX_BINS,
            gamma,
            gamma_ln,
            min_value: DEFAULT_MIN_VALUE,
            offset: 1 - (log_gamma(DEFAULT_MIN_VALUE, gamma_ln) as i32),
        }
    }
}

impl Default for Config {
    fn default() -> Self {
        Self::new(DEFAULT_ALPHA, DEFAULT_MAX_BINS, DEFAULT_MIN_VALUE)
    }
}
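As a sanity check on the mapping above: sending a value through `key` and back through `value` yields its bucket's representative, which must land within the configured relative accuracy. A small standalone sketch using the public `Config` re-exported by the crate root:

```rust
use sketches_ddsketch::Config;

fn main() {
    let alpha = 0.01;
    let config = Config::new(alpha, 2048, 1.0e-9);
    for v in [0.5_f64, 1.0, 3.14159, 42.0, 1.0e6] {
        // `value(key(v))` differs from v by at most a factor of (1 ± alpha).
        let representative = config.value(config.key(v));
        assert!(((representative - v) / v).abs() <= alpha);
    }
}
```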
@@ -1,385 +0,0 @@
use std::{error, fmt};

#[cfg(feature = "use_serde")]
use serde::{Deserialize, Serialize};

use crate::config::Config;
use crate::store::Store;

type Result<T> = std::result::Result<T, DDSketchError>;

/// General error type for DDSketch, represents either an invalid quantile or an
/// incompatible merge operation.
#[derive(Debug, Clone)]
pub enum DDSketchError {
    Quantile,
    Merge,
}

impl fmt::Display for DDSketchError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            DDSketchError::Quantile => {
                write!(f, "Invalid quantile, must be between 0 and 1 (inclusive)")
            }
            DDSketchError::Merge => write!(f, "Can not merge sketches with different configs"),
        }
    }
}

impl error::Error for DDSketchError {
    fn source(&self) -> Option<&(dyn error::Error + 'static)> {
        // Generic
        None
    }
}

/// This struct represents a [DDSketch](https://arxiv.org/pdf/1908.10693.pdf)
#[derive(Clone)]
#[cfg_attr(feature = "use_serde", derive(Serialize, Deserialize))]
pub struct DDSketch {
    pub(crate) config: Config,
    pub(crate) store: Store,
    pub(crate) negative_store: Store,
    pub(crate) min: f64,
    pub(crate) max: f64,
    pub(crate) sum: f64,
    pub(crate) zero_count: u64,
}

impl Default for DDSketch {
    fn default() -> Self {
        Self::new(Default::default())
    }
}

// XXX: functions should return Option<> in the case of empty
impl DDSketch {
    /// Construct a `DDSketch`. Requires a `Config` specifying the parameters of the sketch
    pub fn new(config: Config) -> Self {
        DDSketch {
            config,
            store: Store::new(config.max_num_bins as usize),
            negative_store: Store::new(config.max_num_bins as usize),
            min: f64::INFINITY,
            max: f64::NEG_INFINITY,
            sum: 0.0,
            zero_count: 0,
        }
    }

    /// Add the sample to the sketch
    pub fn add(&mut self, v: f64) {
        if v > self.config.min_possible() {
            let key = self.config.key(v);
            self.store.add(key);
        } else if v < -self.config.min_possible() {
            let key = self.config.key(-v);
            self.negative_store.add(key);
        } else {
            self.zero_count += 1;
        }

        if v < self.min {
            self.min = v;
        }
        if self.max < v {
            self.max = v;
        }
        self.sum += v;
    }

    /// Return the value at quantile `q`, where `q` must be between 0.0 and 1.0 (inclusive).
    /// Returns `Err(DDSketchError::Quantile)` if the requested quantile is outside that range.
    ///
    /// If the sketch is empty the result is `Ok(None)`, else `Ok(Some(v))` for the quantile value.
    pub fn quantile(&self, q: f64) -> Result<Option<f64>> {
        if !(0.0..=1.0).contains(&q) {
            return Err(DDSketchError::Quantile);
        }

        if self.empty() {
            return Ok(None);
        }

        if q == 0.0 {
            return Ok(Some(self.min));
        } else if q == 1.0 {
            return Ok(Some(self.max));
        }

        let rank = (q * (self.count() as f64 - 1.0)) as u64;
        let quantile = if rank < self.negative_store.count() {
            let reversed_rank = self.negative_store.count() - rank - 1;
            let key = self.negative_store.key_at_rank(reversed_rank);
            -self.config.value(key)
        } else if rank < self.zero_count + self.negative_store.count() {
            0.0
        } else {
            let key = self
                .store
                .key_at_rank(rank - self.zero_count - self.negative_store.count());
            self.config.value(key)
        };

        Ok(Some(quantile))
    }

    /// Returns the minimum value seen, or None if sketch is empty
    pub fn min(&self) -> Option<f64> {
        if self.empty() {
            None
        } else {
            Some(self.min)
        }
    }

    /// Returns the maximum value seen, or None if sketch is empty
    pub fn max(&self) -> Option<f64> {
        if self.empty() {
            None
        } else {
            Some(self.max)
        }
    }

    /// Returns the sum of values seen, or None if sketch is empty
    pub fn sum(&self) -> Option<f64> {
        if self.empty() {
            None
        } else {
            Some(self.sum)
        }
    }

    /// Returns the number of values added to the sketch
    pub fn count(&self) -> usize {
        (self.store.count() + self.zero_count + self.negative_store.count()) as usize
    }

    /// Returns the length of the underlying `Store`. This is mainly useful for understanding
    /// how much the sketch has grown given the inserted values.
    pub fn length(&self) -> usize {
        self.store.length() as usize + self.negative_store.length() as usize
    }

    /// Merge the contents of another sketch into this one. The sketch that is merged into this one
    /// is unchanged after the merge.
    pub fn merge(&mut self, o: &DDSketch) -> Result<()> {
        if self.config != o.config {
            return Err(DDSketchError::Merge);
        }

        let was_empty = self.store.count() == 0;

        // Merge the stores
        self.store.merge(&o.store);
        self.negative_store.merge(&o.negative_store);
        self.zero_count += o.zero_count;

        // Need to ensure we don't override min/max with initializers
        // if either store were empty
        if was_empty {
            self.min = o.min;
            self.max = o.max;
        } else if o.store.count() > 0 {
            if o.min < self.min {
                self.min = o.min
            }
            if o.max > self.max {
                self.max = o.max;
            }
        }
        self.sum += o.sum;

        Ok(())
    }

    fn empty(&self) -> bool {
        self.count() == 0
    }

    /// Encode this sketch into the Java-compatible binary format used by
    /// `com.datadoghq.sketch.ddsketch.DDSketchWithExactSummaryStatistics`.
    pub fn to_java_bytes(&self) -> Vec<u8> {
        crate::encoding::encode_to_java_bytes(self)
    }

    /// Decode a sketch from the Java-compatible binary format.
    /// Accepts bytes produced by Java's `DDSketchWithExactSummaryStatistics.encode()`
    /// with or without the `0x02` version prefix.
    pub fn from_java_bytes(
        bytes: &[u8],
    ) -> std::result::Result<Self, crate::encoding::DecodeError> {
        crate::encoding::decode_from_java_bytes(bytes)
    }
}

#[cfg(test)]
mod tests {
    use approx::assert_relative_eq;

    use crate::{Config, DDSketch};

    #[test]
    fn test_add_zero() {
        let alpha = 0.01;
        let c = Config::new(alpha, 2048, 10e-9);
        let mut dd = DDSketch::new(c);
        dd.add(0.0);
    }

    #[test]
    fn test_quartiles() {
        let alpha = 0.01;
        let c = Config::new(alpha, 2048, 10e-9);
        let mut dd = DDSketch::new(c);

        // Initialize sketch with {1.0, 2.0, 3.0, 4.0}
        for i in 1..5 {
            dd.add(i as f64);
        }

        // We expect the following mappings from quantile to value:
        // [0, 0.33]: 1.0, (0.33, 0.66]: 2.0, (0.66, 0.99]: 3.0, (0.99, 1.0]: 4.0
        let test_cases = vec![
            (0.0, 1.0),
            (0.25, 1.0),
            (0.33, 1.0),
            (0.34, 2.0),
            (0.5, 2.0),
            (0.66, 2.0),
            (0.67, 3.0),
            (0.75, 3.0),
            (0.99, 3.0),
            (1.0, 4.0),
        ];

        for (q, val) in test_cases {
            assert_relative_eq!(dd.quantile(q).unwrap().unwrap(), val, max_relative = alpha);
        }
    }

    #[test]
    fn test_neg_quartiles() {
        let alpha = 0.01;
        let c = Config::new(alpha, 2048, 10e-9);
        let mut dd = DDSketch::new(c);

        // Initialize sketch with {-1.0, -2.0, -3.0, -4.0}
        for i in 1..5 {
            dd.add(-i as f64);
        }

        let test_cases = vec![
            (0.0, -4.0),
            (0.25, -4.0),
            (0.5, -3.0),
            (0.75, -2.0),
            (1.0, -1.0),
        ];

        for (q, val) in test_cases {
            assert_relative_eq!(dd.quantile(q).unwrap().unwrap(), val, max_relative = alpha);
        }
    }

    #[test]
    fn test_simple_quantile() {
        let c = Config::defaults();
        let mut dd = DDSketch::new(c);

        for i in 1..101 {
            dd.add(i as f64);
        }

        assert_eq!(dd.quantile(0.95).unwrap().unwrap().ceil(), 95.0);

        assert!(dd.quantile(-1.01).is_err());
        assert!(dd.quantile(1.01).is_err());
    }

    #[test]
    fn test_empty_sketch() {
        let c = Config::defaults();
        let dd = DDSketch::new(c);

        assert_eq!(dd.quantile(0.98).unwrap(), None);
        assert_eq!(dd.max(), None);
        assert_eq!(dd.min(), None);
        assert_eq!(dd.sum(), None);
        assert_eq!(dd.count(), 0);

        assert!(dd.quantile(1.01).is_err());
    }

    #[test]
    fn test_basic_histogram_data() {
        let values = &[
            0.754225035,
            0.752900282,
            0.752812246,
            0.752602367,
            0.754310155,
            0.753525981,
            0.752981082,
            0.752715536,
            0.751667941,
            0.755079054,
            0.753528150,
            0.755188464,
            0.752508723,
            0.750064549,
            0.753960428,
            0.751139298,
            0.752523560,
            0.753253428,
            0.753498342,
            0.751858358,
            0.752104636,
            0.753841300,
            0.754467374,
            0.753814334,
            0.750881719,
            0.753182556,
            0.752576884,
            0.753945708,
            0.753571911,
            0.752314573,
            0.752586651,
        ];

        let c = Config::defaults();
        let mut dd = DDSketch::new(c);

        for value in values {
            dd.add(*value);
        }

        assert_eq!(dd.max(), Some(0.755188464));
        assert_eq!(dd.min(), Some(0.750064549));
        assert_eq!(dd.count(), 31);
        assert_eq!(dd.sum(), Some(23.343630625000003));

        assert!(dd.quantile(0.25).unwrap().is_some());
        assert!(dd.quantile(0.5).unwrap().is_some());
        assert!(dd.quantile(0.75).unwrap().is_some());
    }

    #[test]
    fn test_length() {
        let mut dd = DDSketch::default();
        assert_eq!(dd.length(), 0);

        dd.add(1.0);
        assert_eq!(dd.length(), 128);
        dd.add(2.0);
        dd.add(3.0);
        assert_eq!(dd.length(), 128);

        dd.add(-1.0);
        assert_eq!(dd.length(), 256);
        dd.add(-2.0);
        dd.add(-3.0);
        assert_eq!(dd.length(), 256);
    }
}
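A small usage sketch of the merge path above (the shard names are hypothetical; the API is the one defined in this file). Two sketches built with the same `Config` combine losslessly with respect to the error guarantee, and the merged-from sketch is left untouched:

```rust
use sketches_ddsketch::{Config, DDSketch};

fn main() {
    let config = Config::defaults(); // `Config` is `Copy`, so it can be reused
    let mut shard_a = DDSketch::new(config);
    let mut shard_b = DDSketch::new(config);

    for v in [1.0, 2.0, 3.0] {
        shard_a.add(v);
    }
    for v in [4.0, 5.0] {
        shard_b.add(v);
    }

    // Merging requires identical configs; `shard_b` is unchanged afterwards.
    shard_a.merge(&shard_b).expect("identical configs always merge");
    assert_eq!(shard_a.count(), 5);
    assert_eq!(shard_a.sum(), Some(15.0));
    assert_eq!(shard_b.count(), 2);
}
```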
@@ -1,813 +0,0 @@
//! Java-compatible binary encoding/decoding for DDSketch.
//!
//! This module implements the binary format used by the Java
//! `com.datadoghq.sketch.ddsketch.DDSketchWithExactSummaryStatistics` class
//! from the DataDog/sketches-java library. It enables cross-language
//! serialization so that sketches produced in Rust can be deserialized
//! and merged by Java consumers.

use std::fmt;

use crate::config::Config;
use crate::ddsketch::DDSketch;
use crate::store::Store;

// ---------------------------------------------------------------------------
// Flag byte layout
//
// Each flag byte packs a 2-bit type ordinal in the low bits and a 6-bit
// subflag in the upper bits: (subflag << 2) | type_ordinal
// See: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/encoding/Flag.java
// ---------------------------------------------------------------------------

/// The 2-bit type field occupying the low bits of every flag byte.
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum FlagType {
    SketchFeatures = 0,
    PositiveStore = 1,
    IndexMapping = 2,
    NegativeStore = 3,
}

impl FlagType {
    fn from_byte(b: u8) -> Option<Self> {
        match b & 0x03 {
            0 => Some(Self::SketchFeatures),
            1 => Some(Self::PositiveStore),
            2 => Some(Self::IndexMapping),
            3 => Some(Self::NegativeStore),
            _ => None,
        }
    }
}

/// Construct a flag byte from a subflag and a type.
const fn flag(subflag: u8, flag_type: FlagType) -> u8 {
    (subflag << 2) | (flag_type as u8)
}

// Pre-computed flag bytes for the sketch features we encode/decode.
const FLAG_INDEX_MAPPING_LOG: u8 = flag(0, FlagType::IndexMapping); // 0x02
const FLAG_ZERO_COUNT: u8 = flag(1, FlagType::SketchFeatures); // 0x04
const FLAG_COUNT: u8 = flag(0x28, FlagType::SketchFeatures); // 0xA0
const FLAG_SUM: u8 = flag(0x21, FlagType::SketchFeatures); // 0x84
const FLAG_MIN: u8 = flag(0x22, FlagType::SketchFeatures); // 0x88
const FLAG_MAX: u8 = flag(0x23, FlagType::SketchFeatures); // 0x8C

/// BinEncodingMode subflags for store flag bytes.
/// See: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/encoding/BinEncodingMode.java
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum BinEncodingMode {
    IndexDeltasAndCounts = 1,
    IndexDeltas = 2,
    ContiguousCounts = 3,
}

impl BinEncodingMode {
    fn from_subflag(subflag: u8) -> Option<Self> {
        match subflag {
            1 => Some(Self::IndexDeltasAndCounts),
            2 => Some(Self::IndexDeltas),
            3 => Some(Self::ContiguousCounts),
            _ => None,
        }
    }
}

const VAR_DOUBLE_ROTATE_DISTANCE: u32 = 6;
const MAX_VAR_LEN_64: usize = 9;

const DEFAULT_MAX_BINS: u32 = 2048;

// ---------------------------------------------------------------------------
// Error type
// ---------------------------------------------------------------------------

#[derive(Debug, Clone)]
pub enum DecodeError {
    UnexpectedEof,
    InvalidFlag(u8),
    InvalidData(String),
}

impl fmt::Display for DecodeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::UnexpectedEof => write!(f, "unexpected end of input"),
            Self::InvalidFlag(b) => write!(f, "invalid flag byte: 0x{b:02X}"),
            Self::InvalidData(msg) => write!(f, "invalid data: {msg}"),
        }
    }
}

impl std::error::Error for DecodeError {}

// ---------------------------------------------------------------------------
// VarEncoding — bit-exact port of Java VarEncodingHelper
// See: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/encoding/VarEncodingHelper.java
// ---------------------------------------------------------------------------

fn encode_unsigned_var_long(out: &mut Vec<u8>, mut value: u64) {
    let length = ((63 - value.leading_zeros() as i32) / 7).clamp(0, 8);
    for _ in 0..length {
        out.push((value as u8) | 0x80);
        value >>= 7;
    }
    out.push(value as u8);
}

fn decode_unsigned_var_long(input: &mut &[u8]) -> Result<u64, DecodeError> {
    let mut value: u64 = 0;
    let mut shift: u32 = 0;
    loop {
        let next = read_byte(input)?;
        if next < 0x80 || shift == 56 {
            return Ok(value | (u64::from(next) << shift));
        }
        value |= (u64::from(next) & 0x7F) << shift;
        shift += 7;
    }
}

/// ZigZag encode then var-long encode.
fn encode_signed_var_long(out: &mut Vec<u8>, value: i64) {
    let encoded = ((value >> 63) ^ (value << 1)) as u64;
    encode_unsigned_var_long(out, encoded);
}

fn decode_signed_var_long(input: &mut &[u8]) -> Result<i64, DecodeError> {
    let encoded = decode_unsigned_var_long(input)?;
    Ok(((encoded >> 1) as i64) ^ -((encoded & 1) as i64))
}

fn double_to_var_bits(value: f64) -> u64 {
    let bits = f64::to_bits(value + 1.0).wrapping_sub(f64::to_bits(1.0));
    bits.rotate_left(VAR_DOUBLE_ROTATE_DISTANCE)
}

fn var_bits_to_double(bits: u64) -> f64 {
    f64::from_bits(
        bits.rotate_right(VAR_DOUBLE_ROTATE_DISTANCE)
            .wrapping_add(f64::to_bits(1.0)),
    ) - 1.0
}

fn encode_var_double(out: &mut Vec<u8>, value: f64) {
    let mut bits = double_to_var_bits(value);
    for _ in 0..MAX_VAR_LEN_64 - 1 {
        let next = (bits >> 57) as u8;
        bits <<= 7;
        if bits == 0 {
            out.push(next);
            return;
        }
        out.push(next | 0x80);
    }
    out.push((bits >> 56) as u8);
}

fn decode_var_double(input: &mut &[u8]) -> Result<f64, DecodeError> {
    let mut bits: u64 = 0;
    let mut shift: i32 = 57; // 8*8 - 7
    loop {
        let next = read_byte(input)?;
        if shift == 1 {
            bits |= u64::from(next);
            break;
        }
        if next < 0x80 {
            bits |= u64::from(next) << shift;
            break;
        }
        bits |= (u64::from(next) & 0x7F) << shift;
        shift -= 7;
    }
    Ok(var_bits_to_double(bits))
}

// ---------------------------------------------------------------------------
// Byte-level helpers
// ---------------------------------------------------------------------------

fn read_byte(input: &mut &[u8]) -> Result<u8, DecodeError> {
    match input.split_first() {
        Some((&byte, rest)) => {
            *input = rest;
            Ok(byte)
        }
        None => Err(DecodeError::UnexpectedEof),
    }
}

fn write_f64_le(out: &mut Vec<u8>, value: f64) {
    out.extend_from_slice(&value.to_le_bytes());
}

fn read_f64_le(input: &mut &[u8]) -> Result<f64, DecodeError> {
    if input.len() < 8 {
        return Err(DecodeError::UnexpectedEof);
    }
    let (bytes, rest) = input.split_at(8);
    *input = rest;
    // bytes is guaranteed to be length 8 by the split_at above.
    let arr = [
        bytes[0], bytes[1], bytes[2], bytes[3], bytes[4], bytes[5], bytes[6], bytes[7],
    ];
    Ok(f64::from_le_bytes(arr))
}

// ---------------------------------------------------------------------------
// Store encoding/decoding
// See: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/store/DenseStore.java (encode/decode methods)
// ---------------------------------------------------------------------------

/// Collect non-zero bins in the store as (absolute_index, count) pairs.
///
/// Allocation is acceptable here: this runs once per encode and the Vec
/// has at most `max_num_bins` entries.
fn collect_non_zero_bins(store: &Store) -> Vec<(i32, u64)> {
    if store.count == 0 {
        return Vec::new();
    }
    let start = (store.min_key - store.offset) as usize;
    let end = ((store.max_key - store.offset + 1) as usize).min(store.bins.len());
    store.bins[start..end]
        .iter()
        .enumerate()
        .filter(|&(_, &count)| count > 0)
        .map(|(i, &count)| (start as i32 + i as i32 + store.offset, count))
        .collect()
}

fn encode_store(out: &mut Vec<u8>, store: &Store, flag_type: FlagType) {
    let bins = collect_non_zero_bins(store);
    if bins.is_empty() {
        return;
    }

    out.push(flag(BinEncodingMode::IndexDeltasAndCounts as u8, flag_type));
    encode_unsigned_var_long(out, bins.len() as u64);

    let mut prev_index: i64 = 0;
    for &(index, count) in &bins {
        encode_signed_var_long(out, i64::from(index) - prev_index);
        encode_var_double(out, count as f64);
        prev_index = i64::from(index);
    }
}

fn decode_store(input: &mut &[u8], subflag: u8, bin_limit: usize) -> Result<Store, DecodeError> {
    let mode = BinEncodingMode::from_subflag(subflag).ok_or_else(|| {
        DecodeError::InvalidData(format!("unknown bin encoding mode subflag: {subflag}"))
    })?;
    let num_bins = decode_unsigned_var_long(input)? as usize;
    let mut store = Store::new(bin_limit);

    match mode {
        BinEncodingMode::IndexDeltasAndCounts => {
            let mut index: i64 = 0;
            for _ in 0..num_bins {
                index += decode_signed_var_long(input)?;
                let count = decode_var_double(input)?;
                store.add_count(index as i32, count as u64);
            }
        }
        BinEncodingMode::IndexDeltas => {
            let mut index: i64 = 0;
            for _ in 0..num_bins {
                index += decode_signed_var_long(input)?;
                store.add_count(index as i32, 1);
            }
        }
        BinEncodingMode::ContiguousCounts => {
            let start_index = decode_signed_var_long(input)?;
            let index_delta = decode_signed_var_long(input)?;
            let mut index = start_index;
            for _ in 0..num_bins {
                let count = decode_var_double(input)?;
                store.add_count(index as i32, count as u64);
                index += index_delta;
            }
        }
    }

    Ok(store)
}

// ---------------------------------------------------------------------------
// Top-level encode / decode
// ---------------------------------------------------------------------------

/// Encode a DDSketch into the Java-compatible binary format.
///
/// The output follows the encoding order of
/// `DDSketchWithExactSummaryStatistics.encode()` then `DDSketch.encode()`:
///
/// 1. Summary statistics: COUNT, MIN, MAX (if count > 0)
/// 2. SUM (if sum != 0)
/// 3. Index mapping (LOG layout): gamma, indexOffset
/// 4. Zero count (if > 0)
/// 5. Positive store bins
/// 6. Negative store bins
pub fn encode_to_java_bytes(sketch: &DDSketch) -> Vec<u8> {
    let mut out = Vec::new();
    let count = sketch.count() as f64;

    // Summary statistics (DDSketchWithExactSummaryStatistics.encode)
    if count != 0.0 {
        out.push(FLAG_COUNT);
        encode_var_double(&mut out, count);
        out.push(FLAG_MIN);
        write_f64_le(&mut out, sketch.min);
        out.push(FLAG_MAX);
        write_f64_le(&mut out, sketch.max);
    }
    if sketch.sum != 0.0 {
        out.push(FLAG_SUM);
        write_f64_le(&mut out, sketch.sum);
    }

    // DDSketch.encode: index mapping + zero count + stores
    out.push(FLAG_INDEX_MAPPING_LOG);
    write_f64_le(&mut out, sketch.config.gamma);
    write_f64_le(&mut out, 0.0_f64);

    if sketch.zero_count != 0 {
        out.push(FLAG_ZERO_COUNT);
        encode_var_double(&mut out, sketch.zero_count as f64);
    }

    encode_store(&mut out, &sketch.store, FlagType::PositiveStore);
    encode_store(&mut out, &sketch.negative_store, FlagType::NegativeStore);

    out
}

/// Decode a DDSketch from the Java-compatible binary format.
///
/// Accepts bytes with or without a `0x02` version prefix.
pub fn decode_from_java_bytes(bytes: &[u8]) -> Result<DDSketch, DecodeError> {
    if bytes.is_empty() {
        return Err(DecodeError::UnexpectedEof);
    }

    let mut input = bytes;

    // Skip optional version prefix (0x02 followed by a valid flag byte).
    if input.len() >= 2 && input[0] == 0x02 && is_valid_flag_byte(input[1]) {
        input = &input[1..];
    }

    let mut gamma: Option<f64> = None;
    let mut zero_count: f64 = 0.0;
    let mut sum: f64 = 0.0;
    let mut min: f64 = f64::INFINITY;
    let mut max: f64 = f64::NEG_INFINITY;
    let mut positive_store: Option<Store> = None;
    let mut negative_store: Option<Store> = None;

    while !input.is_empty() {
        let flag_byte = read_byte(&mut input)?;
        let flag_type =
            FlagType::from_byte(flag_byte).ok_or(DecodeError::InvalidFlag(flag_byte))?;
        let subflag = flag_byte >> 2;

        match flag_type {
            FlagType::IndexMapping => {
                gamma = Some(read_f64_le(&mut input)?);
                let _index_offset = read_f64_le(&mut input)?;
            }
            FlagType::SketchFeatures => match flag_byte {
                FLAG_ZERO_COUNT => zero_count += decode_var_double(&mut input)?,
                FLAG_COUNT => {
                    let _count = decode_var_double(&mut input)?;
                }
                FLAG_SUM => sum = read_f64_le(&mut input)?,
                FLAG_MIN => min = read_f64_le(&mut input)?,
                FLAG_MAX => max = read_f64_le(&mut input)?,
                _ => return Err(DecodeError::InvalidFlag(flag_byte)),
            },
            FlagType::PositiveStore => {
                positive_store = Some(decode_store(
                    &mut input,
                    subflag,
                    DEFAULT_MAX_BINS as usize,
                )?);
            }
            FlagType::NegativeStore => {
                negative_store = Some(decode_store(
                    &mut input,
                    subflag,
                    DEFAULT_MAX_BINS as usize,
                )?);
            }
        }
    }

    let g = gamma.unwrap_or_else(|| Config::defaults().gamma);
    let config = Config::from_gamma(g);
    let store = positive_store.unwrap_or_else(|| Store::new(config.max_num_bins as usize));
    let neg = negative_store.unwrap_or_else(|| Store::new(config.max_num_bins as usize));

    Ok(DDSketch {
        config,
        store,
        negative_store: neg,
        min,
        max,
        sum,
        zero_count: zero_count as u64,
    })
}

/// Check whether a byte is a valid flag byte for the DDSketch binary format.
fn is_valid_flag_byte(b: u8) -> bool {
    // Known sketch-feature flags
    if matches!(
        b,
        FLAG_ZERO_COUNT | FLAG_COUNT | FLAG_SUM | FLAG_MIN | FLAG_MAX | FLAG_INDEX_MAPPING_LOG
    ) {
        return true;
    }
    let Some(flag_type) = FlagType::from_byte(b) else {
        return false;
    };
    let subflag = b >> 2;
    match flag_type {
        FlagType::PositiveStore | FlagType::NegativeStore => (1..=3).contains(&subflag),
        FlagType::IndexMapping => subflag <= 4, // LOG=0, LOG_LINEAR=1 .. LOG_QUARTIC=4
        _ => false,
    }
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;
    use crate::{Config, DDSketch};

    // --- VarEncoding unit tests ---

    #[test]
    fn test_unsigned_var_long_zero() {
        let mut buf = Vec::new();
        encode_unsigned_var_long(&mut buf, 0);
        assert_eq!(buf, [0x00]);

        let mut input = buf.as_slice();
        assert_eq!(decode_unsigned_var_long(&mut input).unwrap(), 0);
        assert!(input.is_empty());
    }

    #[test]
    fn test_unsigned_var_long_small() {
        let mut buf = Vec::new();
        encode_unsigned_var_long(&mut buf, 1);
        assert_eq!(buf, [0x01]);

        let mut input = buf.as_slice();
        assert_eq!(decode_unsigned_var_long(&mut input).unwrap(), 1);
    }

    #[test]
    fn test_unsigned_var_long_128() {
        let mut buf = Vec::new();
        encode_unsigned_var_long(&mut buf, 128);
        assert_eq!(buf, [0x80, 0x01]);

        let mut input = buf.as_slice();
        assert_eq!(decode_unsigned_var_long(&mut input).unwrap(), 128);
    }

    #[test]
    fn test_unsigned_var_long_roundtrip() {
        for v in [0u64, 1, 127, 128, 255, 256, 16383, 16384, u64::MAX] {
            let mut buf = Vec::new();
            encode_unsigned_var_long(&mut buf, v);
            let mut input = buf.as_slice();
            let decoded = decode_unsigned_var_long(&mut input).unwrap();
            assert_eq!(decoded, v, "roundtrip failed for {}", v);
            assert!(input.is_empty());
        }
    }

    #[test]
    fn test_signed_var_long_roundtrip() {
        for v in [0i64, 1, -1, 63, -64, 64, -65, i64::MAX, i64::MIN] {
            let mut buf = Vec::new();
            encode_signed_var_long(&mut buf, v);
            let mut input = buf.as_slice();
            let decoded = decode_signed_var_long(&mut input).unwrap();
            assert_eq!(decoded, v, "roundtrip failed for {}", v);
            assert!(input.is_empty());
        }
    }

    #[test]
    fn test_var_double_roundtrip() {
        for v in [0.0, 1.0, 2.0, 5.0, 15.0, 42.0, 100.0, 1e-9, 1e15, 0.5, 7.77] {
            let mut buf = Vec::new();
            encode_var_double(&mut buf, v);
            let mut input = buf.as_slice();
            let decoded = decode_var_double(&mut input).unwrap();
            assert!(
                (decoded - v).abs() < 1e-15 || decoded == v,
                "roundtrip failed for {}: got {}",
                v,
                decoded,
            );
            assert!(input.is_empty());
        }
    }

    #[test]
    fn test_var_double_small_integers() {
        let mut buf = Vec::new();
        encode_var_double(&mut buf, 1.0);
        assert_eq!(buf.len(), 1, "VarDouble(1.0) should be 1 byte");

        buf.clear();
        encode_var_double(&mut buf, 5.0);
        assert_eq!(buf.len(), 1, "VarDouble(5.0) should be 1 byte");
    }

    // --- DDSketch encode/decode roundtrip tests ---

    #[test]
    fn test_encode_empty_sketch() {
        let sketch = DDSketch::new(Config::defaults());
        let bytes = sketch.to_java_bytes();
        assert!(!bytes.is_empty());

        let decoded = DDSketch::from_java_bytes(&bytes).unwrap();
        assert_eq!(decoded.count(), 0);
        assert_eq!(decoded.min(), None);
        assert_eq!(decoded.max(), None);
        assert_eq!(decoded.sum(), None);
    }

    #[test]
    fn test_encode_simple_sketch() {
        let mut sketch = DDSketch::new(Config::defaults());
        for v in [1.0, 2.0, 3.0, 4.0, 5.0] {
            sketch.add(v);
        }

        let bytes = sketch.to_java_bytes();
        let decoded = DDSketch::from_java_bytes(&bytes).unwrap();

        assert_eq!(decoded.count(), 5);
        assert_eq!(decoded.min(), Some(1.0));
        assert_eq!(decoded.max(), Some(5.0));
        assert_eq!(decoded.sum(), Some(15.0));

        assert_quantiles_match(&sketch, &decoded, &[0.5, 0.9, 0.95, 0.99]);
    }

    #[test]
    fn test_encode_single_value() {
        let mut sketch = DDSketch::new(Config::defaults());
        sketch.add(42.0);

        let bytes = sketch.to_java_bytes();
        let decoded = DDSketch::from_java_bytes(&bytes).unwrap();

        assert_eq!(decoded.count(), 1);
        assert_eq!(decoded.min(), Some(42.0));
        assert_eq!(decoded.max(), Some(42.0));
        assert_eq!(decoded.sum(), Some(42.0));
    }

    #[test]
    fn test_encode_negative_values() {
        let mut sketch = DDSketch::new(Config::defaults());
        for v in [-3.0, -1.0, 2.0, 5.0] {
            sketch.add(v);
        }

        let bytes = sketch.to_java_bytes();
        let decoded = DDSketch::from_java_bytes(&bytes).unwrap();

        assert_eq!(decoded.count(), 4);
        assert_eq!(decoded.min(), Some(-3.0));
        assert_eq!(decoded.max(), Some(5.0));
        assert_eq!(decoded.sum(), Some(3.0));

        assert_quantiles_match(&sketch, &decoded, &[0.0, 0.25, 0.5, 0.75, 1.0]);
    }

    #[test]
    fn test_encode_with_zero_value() {
        let mut sketch = DDSketch::new(Config::defaults());
        for v in [0.0, 1.0, 2.0] {
            sketch.add(v);
        }

        let bytes = sketch.to_java_bytes();
        let decoded = DDSketch::from_java_bytes(&bytes).unwrap();

        assert_eq!(decoded.count(), 3);
        assert_eq!(decoded.min(), Some(0.0));
        assert_eq!(decoded.max(), Some(2.0));
        assert_eq!(decoded.sum(), Some(3.0));
        assert_eq!(decoded.zero_count, 1);
    }

    #[test]
    fn test_encode_large_range() {
        let mut sketch = DDSketch::new(Config::defaults());
        sketch.add(0.001);
        sketch.add(1_000_000.0);

        let bytes = sketch.to_java_bytes();
        let decoded = DDSketch::from_java_bytes(&bytes).unwrap();

        assert_eq!(decoded.count(), 2);
        assert_eq!(decoded.min(), Some(0.001));
        assert_eq!(decoded.max(), Some(1_000_000.0));
    }

    #[test]
    fn test_encode_with_version_prefix() {
        let mut sketch = DDSketch::new(Config::defaults());
        for v in [1.0, 2.0, 3.0] {
            sketch.add(v);
        }

        let bytes = sketch.to_java_bytes();

        // Simulate Java's toByteArrayV2: prepend 0x02
        let mut v2_bytes = vec![0x02];
        v2_bytes.extend_from_slice(&bytes);

        let decoded = DDSketch::from_java_bytes(&v2_bytes).unwrap();
        assert_eq!(decoded.count(), 3);
        assert_eq!(decoded.min(), Some(1.0));
        assert_eq!(decoded.max(), Some(3.0));
    }

    #[test]
    fn test_byte_level_encoding() {
        let mut sketch = DDSketch::new(Config::defaults());
        sketch.add(1.0);

        let bytes = sketch.to_java_bytes();

        assert_eq!(bytes[0], FLAG_COUNT, "first byte should be COUNT flag");
        assert!(
            bytes.contains(&FLAG_INDEX_MAPPING_LOG),
            "should contain index mapping flag"
        );
    }

    // --- Cross-language golden byte tests ---
    //
    // Golden bytes generated by Java's DDSketchWithExactSummaryStatistics.encode()
    // using LogarithmicMapping(0.01) + CollapsingLowestDenseStore(2048).

    const GOLDEN_SIMPLE: &str = "a00588000000000000f03f8c0000000000001440840000000000002e4002fd4a815abf52f03f000000000000000005050002440228021e021602";
    const GOLDEN_SINGLE: &str = "a0028800000000000045408c000000000000454084000000000000454002fd4a815abf52f03f00000000000000000501f40202";
    const GOLDEN_NEGATIVE: &str = "a084408800000000000008c08c000000000000144084000000000000084002fd4a815abf52f03f0000000000000000050244025c02070200026c02";
    const GOLDEN_ZERO: &str = "a0048800000000000000008c000000000000004084000000000000084002fd4a815abf52f03f00000000000000000402050200024402";
    const GOLDEN_EMPTY: &str = "02fd4a815abf52f03f0000000000000000";
    const GOLDEN_MANY: &str = "a08d1488000000000000f03f8c0000000000005940840000000000bab34002fd4a815abf52f03f000000000000000005550002440228021e021602120210020c020c020c0208020a020802060208020602060206020602040206020402040204020402040204020402040204020202040202020402020204020202020204020202020202020402020202020202020202020202020202020202020202020202020202020203020202020202020302020202020302020202020302020203020202030202020302030202020302030203020202030203020302030202";

    fn hex_to_bytes(hex: &str) -> Vec<u8> {
        (0..hex.len())
            .step_by(2)
            .map(|i| u8::from_str_radix(&hex[i..i + 2], 16).unwrap())
            .collect()
    }

    fn bytes_to_hex(bytes: &[u8]) -> String {
        bytes.iter().map(|b| format!("{b:02x}")).collect()
    }

    fn assert_golden(label: &str, sketch: &DDSketch, golden_hex: &str) {
        let bytes = sketch.to_java_bytes();
        let expected = hex_to_bytes(golden_hex);
        assert_eq!(
            bytes,
            expected,
            "Rust encoding doesn't match Java golden bytes for {}.\nRust: {}\nJava: {}",
            label,
            bytes_to_hex(&bytes),
            golden_hex,
        );
    }

    fn assert_quantiles_match(a: &DDSketch, b: &DDSketch, quantiles: &[f64]) {
        for &q in quantiles {
            let va = a.quantile(q).unwrap().unwrap();
            let vb = b.quantile(q).unwrap().unwrap();
            assert!(
                (va - vb).abs() / va.abs().max(1e-15) < 1e-12,
                "quantile({}) mismatch: {} vs {}",
                q,
                va,
                vb,
            );
        }
    }

    #[test]
    fn test_cross_language_simple() {
        let mut sketch = DDSketch::new(Config::defaults());
        for v in [1.0, 2.0, 3.0, 4.0, 5.0] {
            sketch.add(v);
        }
        assert_golden("SIMPLE", &sketch, GOLDEN_SIMPLE);
    }

    #[test]
    fn test_cross_language_single() {
        let mut sketch = DDSketch::new(Config::defaults());
        sketch.add(42.0);
        assert_golden("SINGLE", &sketch, GOLDEN_SINGLE);
    }

    #[test]
    fn test_cross_language_negative() {
        let mut sketch = DDSketch::new(Config::defaults());
        for v in [-3.0, -1.0, 2.0, 5.0] {
            sketch.add(v);
        }
        assert_golden("NEGATIVE", &sketch, GOLDEN_NEGATIVE);
    }

    #[test]
    fn test_cross_language_zero() {
        let mut sketch = DDSketch::new(Config::defaults());
        for v in [0.0, 1.0, 2.0] {
            sketch.add(v);
        }
        assert_golden("ZERO", &sketch, GOLDEN_ZERO);
    }

    #[test]
    fn test_cross_language_empty() {
        let sketch = DDSketch::new(Config::defaults());
        assert_golden("EMPTY", &sketch, GOLDEN_EMPTY);
    }

    #[test]
    fn test_cross_language_many() {
        let mut sketch = DDSketch::new(Config::defaults());
        for i in 1..=100 {
            sketch.add(i as f64);
        }
        assert_golden("MANY", &sketch, GOLDEN_MANY);
    }

    #[test]
    fn test_decode_java_golden_bytes() {
        for (name, hex) in [
            ("SIMPLE", GOLDEN_SIMPLE),
            ("SINGLE", GOLDEN_SINGLE),
            ("NEGATIVE", GOLDEN_NEGATIVE),
            ("ZERO", GOLDEN_ZERO),
            ("EMPTY", GOLDEN_EMPTY),
            ("MANY", GOLDEN_MANY),
        ] {
            let bytes = hex_to_bytes(hex);
            let result = DDSketch::from_java_bytes(&bytes);
            assert!(
                result.is_ok(),
                "failed to decode {}: {:?}",
                name,
                result.err()
            );
        }
    }

    #[test]
    fn test_encode_decode_many_values() {
        let mut sketch = DDSketch::new(Config::defaults());
        for i in 1..=100 {
            sketch.add(i as f64);
        }

        let bytes = sketch.to_java_bytes();
        let decoded = DDSketch::from_java_bytes(&bytes).unwrap();

        assert_eq!(decoded.count(), 100);
        assert_eq!(decoded.min(), Some(1.0));
        assert_eq!(decoded.max(), Some(100.0));
        assert_eq!(decoded.sum(), Some(5050.0));

        let alpha = 0.01;
        let orig_p95 = sketch.quantile(0.95).unwrap().unwrap();
        let dec_p95 = decoded.quantile(0.95).unwrap().unwrap();
        assert!(
            (orig_p95 - dec_p95).abs() / orig_p95 < alpha,
            "p95 mismatch: {} vs {}",
            orig_p95,
            dec_p95,
        );
    }
}
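To make the ZigZag step above concrete, here is a standalone worked example. The helper is re-implemented locally since the real one is private to this module: small magnitudes of either sign map to small unsigned values, which is what lets index deltas var-long encode into single bytes.

```rust
/// Local re-implementation of the ZigZag step from `encode_signed_var_long`.
fn zigzag(value: i64) -> u64 {
    ((value >> 63) ^ (value << 1)) as u64
}

fn main() {
    assert_eq!(zigzag(0), 0);
    assert_eq!(zigzag(-1), 1);
    assert_eq!(zigzag(1), 2);
    assert_eq!(zigzag(-2), 3);
    // Anything that maps below 0x80 fits in one var-long byte:
    assert_eq!(zigzag(63), 126);
    assert_eq!(zigzag(-64), 127);
    assert_eq!(zigzag(64), 128); // first value that needs a second byte
}
```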
@@ -1,52 +0,0 @@
//! This crate provides a direct port of the [Golang](https://github.com/DataDog/sketches-go)
//! [DDSketch](https://arxiv.org/pdf/1908.10693.pdf) implementation to Rust. All efforts
//! have been made to keep this as close to the original implementation as possible, with a few
//! tweaks to get closer to idiomatic Rust.
//!
//! # Usage
//!
//! Add multiple samples to a DDSketch and invoke the `quantile` method to pull any quantile from
//! *0.0* to *1.0*.
//!
//! ```rust
//! use sketches_ddsketch::{Config, DDSketch};
//!
//! let c = Config::defaults();
//! let mut d = DDSketch::new(c);
//!
//! d.add(1.0);
//! d.add(1.0);
//! d.add(1.0);
//!
//! let q = d.quantile(0.50).unwrap();
//!
//! assert!(q < Some(1.02));
//! assert!(q > Some(0.98));
//! ```
//!
//! Sketches can also be merged.
//!
//! ```rust
//! use sketches_ddsketch::{Config, DDSketch};
//!
//! let c = Config::defaults();
//! let mut d1 = DDSketch::new(c);
//! let mut d2 = DDSketch::new(c);
//!
//! d1.add(1.0);
//! d2.add(2.0);
//! d2.add(2.0);
//!
//! d1.merge(&d2).unwrap();
//!
//! assert_eq!(d1.count(), 3);
//! ```

pub use self::config::Config;
pub use self::ddsketch::{DDSketch, DDSketchError};
pub use self::encoding::DecodeError;

mod config;
mod ddsketch;
pub mod encoding;
mod store;
@@ -1,252 +0,0 @@
#[cfg(feature = "use_serde")]
use serde::{Deserialize, Serialize};

const CHUNK_SIZE: i32 = 128;

// Divide the `dividend` by the `divisor`, rounding towards positive infinity.
//
// Similar to the nightly-only `std::i32::div_ceil`.
fn div_ceil(dividend: i32, divisor: i32) -> i32 {
    (dividend + divisor - 1) / divisor
}

/// CollapsingLowestDenseStore
#[derive(Clone, Debug)]
#[cfg_attr(feature = "use_serde", derive(Serialize, Deserialize))]
pub struct Store {
    pub(crate) bins: Vec<u64>,
    pub(crate) count: u64,
    pub(crate) min_key: i32,
    pub(crate) max_key: i32,
    pub(crate) offset: i32,
    pub(crate) bin_limit: usize,
    is_collapsed: bool,
}

impl Store {
    pub fn new(bin_limit: usize) -> Self {
        Store {
            bins: Vec::new(),
            count: 0,
            min_key: i32::MAX,
            max_key: i32::MIN,
            offset: 0,
            bin_limit,
            is_collapsed: false,
        }
    }

    /// Return the number of bins.
    pub fn length(&self) -> i32 {
        self.bins.len() as i32
    }

    pub fn is_empty(&self) -> bool {
        self.bins.is_empty()
    }

    pub fn add(&mut self, key: i32) {
        let idx = self.get_index(key);
        self.bins[idx] += 1;
        self.count += 1;
    }

    /// See Java: https://github.com/DataDog/sketches-java/blob/master/src/main/java/com/datadoghq/sketch/ddsketch/store/DenseStore.java (add(int index, double count) method)
    pub(crate) fn add_count(&mut self, key: i32, count: u64) {
        let idx = self.get_index(key);
        self.bins[idx] += count;
        self.count += count;
    }

    fn get_index(&mut self, key: i32) -> usize {
        if key < self.min_key {
            if self.is_collapsed {
                return 0;
            }

            self.extend_range(key, None);
            if self.is_collapsed {
                return 0;
            }
        } else if key > self.max_key {
            self.extend_range(key, None);
        }

        (key - self.offset) as usize
    }

    fn extend_range(&mut self, key: i32, second_key: Option<i32>) {
        let second_key = second_key.unwrap_or(key);
        let new_min_key = i32::min(key, i32::min(second_key, self.min_key));
        let new_max_key = i32::max(key, i32::max(second_key, self.max_key));

        if self.is_empty() {
            let new_len = self.get_new_length(new_min_key, new_max_key);
            self.bins.resize(new_len, 0);
            self.offset = new_min_key;
            self.adjust(new_min_key, new_max_key);
        } else if new_min_key >= self.min_key && new_max_key < self.offset + self.length() {
            self.min_key = new_min_key;
            self.max_key = new_max_key;
        } else {
            // Grow bins
            let new_length = self.get_new_length(new_min_key, new_max_key);
            if new_length > self.length() as usize {
                self.bins.resize(new_length, 0);
            }
            self.adjust(new_min_key, new_max_key);
        }
    }

    fn get_new_length(&self, new_min_key: i32, new_max_key: i32) -> usize {
        let desired_length = new_max_key - new_min_key + 1;
        usize::min(
            (CHUNK_SIZE * div_ceil(desired_length, CHUNK_SIZE)) as usize,
            self.bin_limit,
        )
    }

    fn adjust(&mut self, new_min_key: i32, new_max_key: i32) {
        if new_max_key - new_min_key + 1 > self.length() {
            let new_min_key = new_max_key - self.length() + 1;

            if new_min_key >= self.max_key {
                // Put everything in the first bin.
                self.offset = new_min_key;
                self.min_key = new_min_key;
                self.bins.fill(0);
                self.bins[0] = self.count;
            } else {
                let shift = self.offset - new_min_key;
                if shift < 0 {
                    let collapse_start_index = (self.min_key - self.offset) as usize;
                    let collapse_end_index = (new_min_key - self.offset) as usize;
                    let collapsed_count: u64 = self.bins[collapse_start_index..collapse_end_index]
                        .iter()
                        .sum();
                    let zero_len = (new_min_key - self.min_key) as usize;
                    self.bins.splice(
                        collapse_start_index..collapse_end_index,
                        std::iter::repeat_n(0, zero_len),
                    );
                    self.bins[collapse_end_index] += collapsed_count;
                }
                self.min_key = new_min_key;
                self.shift_bins(shift);
            }

            self.max_key = new_max_key;
            self.is_collapsed = true;
        } else {
            self.center_bins(new_min_key, new_max_key);
            self.min_key = new_min_key;
            self.max_key = new_max_key;
        }
    }

    fn shift_bins(&mut self, shift: i32) {
        if shift > 0 {
            let shift = shift as usize;
            self.bins.rotate_right(shift);
            for idx in 0..shift {
                self.bins[idx] = 0;
            }
        } else {
            let shift = shift.unsigned_abs() as usize;
            for idx in 0..shift {
                self.bins[idx] = 0;
            }
            self.bins.rotate_left(shift);
        }

        self.offset -= shift;
    }

    fn center_bins(&mut self, new_min_key: i32, new_max_key: i32) {
        let middle_key = new_min_key + (new_max_key - new_min_key + 1) / 2;
        let shift = self.offset + self.length() / 2 - middle_key;
        self.shift_bins(shift)
    }

    pub fn key_at_rank(&self, rank: u64) -> i32 {
        let mut n = 0;
        for (i, bin) in self.bins.iter().enumerate() {
            n += *bin;
            if n > rank {
                return i as i32 + self.offset;
            }
        }

        self.max_key
    }

    pub fn count(&self) -> u64 {
        self.count
    }

    pub fn merge(&mut self, other: &Store) {
        if other.count == 0 {
            return;
        }

        if self.count == 0 {
            self.copy(other);
            return;
        }

        if other.min_key < self.min_key || other.max_key > self.max_key {
            self.extend_range(other.min_key, Some(other.max_key));
        }

        // The collapse indices are positions in `other`'s bin array: they span
        // other's keys that fall below self.min_key and must be collapsed into
        // self's lowest bin.
        let collapse_start_index = other.min_key - other.offset;
        let mut collapse_end_index = i32::min(self.min_key, other.max_key + 1) - other.offset;
        if collapse_end_index > collapse_start_index {
            let collapsed_count: u64 = other.bins
                [collapse_start_index as usize..collapse_end_index as usize]
                .iter()
                .sum();
            self.bins[0] += collapsed_count;
        } else {
            collapse_end_index = collapse_start_index;
        }

        for key in (collapse_end_index + other.offset)..(other.max_key + 1) {
            self.bins[(key - self.offset) as usize] += other.bins[(key - other.offset) as usize]
        }

        self.count += other.count;
    }

    fn copy(&mut self, o: &Store) {
        self.bins = o.bins.clone();
        self.count = o.count;
        self.min_key = o.min_key;
        self.max_key = o.max_key;
        self.offset = o.offset;
        self.bin_limit = o.bin_limit;
        self.is_collapsed = o.is_collapsed;
    }
}

#[cfg(test)]
mod tests {
    use crate::store::Store;

    #[test]
    fn test_simple_store() {
        let mut s = Store::new(2048);

        for i in 0..2048 {
            s.add(i);
        }
    }

    #[test]
    fn test_simple_store_rev() {
        let mut s = Store::new(2048);

        for i in (0..2048).rev() {
            s.add(i);
        }
    }
}
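The sizing rule in `get_new_length` above grows the bin array in whole `CHUNK_SIZE` chunks and caps it at `bin_limit`. A standalone sketch of that arithmetic, with the helpers re-implemented locally for illustration:

```rust
fn div_ceil(dividend: i32, divisor: i32) -> i32 {
    (dividend + divisor - 1) / divisor
}

// Mirrors the body of `Store::get_new_length` for a desired bin span.
fn new_length(desired_bins: i32, bin_limit: usize) -> usize {
    const CHUNK_SIZE: i32 = 128;
    usize::min(
        (CHUNK_SIZE * div_ceil(desired_bins, CHUNK_SIZE)) as usize,
        bin_limit,
    )
}

fn main() {
    assert_eq!(new_length(1, 2048), 128);       // one key still allocates a whole chunk
    assert_eq!(new_length(128, 2048), 128);     // exactly one chunk
    assert_eq!(new_length(129, 2048), 256);     // spills into a second chunk
    assert_eq!(new_length(10_000, 2048), 2048); // capped: the lowest bins collapse
}
```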
@@ -1,88 +0,0 @@
use std::cmp::Ordering;

pub struct Dataset {
    values: Vec<f64>,
    sum: f64,
    sorted: bool,
}

fn cmp_f64(a: &f64, b: &f64) -> Ordering {
    // Total order over the non-NaN values; NaN inputs are a caller error.
    a.partial_cmp(b).expect("NaN values are not supported")
}

impl Dataset {
    pub fn new() -> Self {
        Dataset {
            values: Vec::new(),
            sum: 0.0,
            sorted: false,
        }
    }

    pub fn add(&mut self, value: f64) {
        self.values.push(value);
        self.sum += value;
        self.sorted = false;
    }

    // pub fn quantile(&mut self, q: f64) -> f64 {
    //     self.lower_quantile(q)
    // }

    pub fn lower_quantile(&mut self, q: f64) -> f64 {
        if !(0.0..=1.0).contains(&q) || self.values.is_empty() {
            return f64::NAN;
        }

        self.sort();
        let rank = q * (self.values.len() - 1) as f64;

        self.values[rank.floor() as usize]
    }

    pub fn upper_quantile(&mut self, q: f64) -> f64 {
        if !(0.0..=1.0).contains(&q) || self.values.is_empty() {
            return f64::NAN;
        }

        self.sort();
        let rank = q * (self.values.len() - 1) as f64;
        self.values[rank.ceil() as usize]
    }

    pub fn min(&mut self) -> f64 {
        self.sort();
        self.values[0]
    }

    pub fn max(&mut self) -> f64 {
        self.sort();
        self.values[self.values.len() - 1]
    }

    pub fn sum(&self) -> f64 {
        self.sum
    }

    pub fn count(&self) -> usize {
        self.values.len()
    }

    fn sort(&mut self) {
        if self.sorted {
            return;
        }

        self.values.sort_by(cmp_f64);
        self.sorted = true;
    }
}
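The two reference quantiles above bracket the true value: the rank is q·(n−1), `lower_quantile` takes its floor index and `upper_quantile` its ceiling. A tiny standalone illustration of that rank arithmetic:

```rust
fn main() {
    let values = [10.0_f64, 20.0, 30.0, 40.0, 50.0]; // already sorted, n = 5
    let q = 0.6;
    let rank = q * (values.len() - 1) as f64;  // 0.6 * 4 = 2.4
    let lower = values[rank.floor() as usize]; // index 2 -> 30.0
    let upper = values[rank.ceil() as usize];  // index 3 -> 40.0
    assert_eq!((lower, upper), (30.0, 40.0));
}
```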
@@ -1,100 +0,0 @@
extern crate rand;
extern crate rand_distr;

use rand::prelude::*;

pub trait Generator {
    fn generate(&mut self) -> f64;
}

// Constant generator
//
pub struct Constant {
    value: f64,
}
impl Constant {
    pub fn new(value: f64) -> Self {
        Constant { value }
    }
}
impl Generator for Constant {
    fn generate(&mut self) -> f64 {
        self.value
    }
}

// Linear generator
//
pub struct Linear {
    current_value: f64,
    step: f64,
}
impl Linear {
    pub fn new(start_value: f64, step: f64) -> Self {
        Linear {
            current_value: start_value,
            step,
        }
    }
}
impl Generator for Linear {
    fn generate(&mut self) -> f64 {
        let value = self.current_value;
        self.current_value += self.step;
        value
    }
}

// Normal distribution generator
//
pub struct Normal {
    distr: rand_distr::Normal<f64>,
}
impl Normal {
    pub fn new(mean: f64, stddev: f64) -> Self {
        Normal {
            distr: rand_distr::Normal::new(mean, stddev).unwrap(),
        }
    }
}
impl Generator for Normal {
    fn generate(&mut self) -> f64 {
        self.distr.sample(&mut rand::thread_rng())
    }
}

// Lognormal distribution generator
//
pub struct Lognormal {
    distr: rand_distr::LogNormal<f64>,
}
impl Lognormal {
    pub fn new(mean: f64, stddev: f64) -> Self {
        Lognormal {
            distr: rand_distr::LogNormal::new(mean, stddev).unwrap(),
        }
    }
}
impl Generator for Lognormal {
    fn generate(&mut self) -> f64 {
        self.distr.sample(&mut rand::thread_rng())
    }
}

// Exponential distribution generator
//
pub struct Exponential {
    distr: rand_distr::Exp<f64>,
}
impl Exponential {
    pub fn new(lambda: f64) -> Self {
        Exponential {
            distr: rand_distr::Exp::new(lambda).unwrap(),
        }
    }
}
impl Generator for Exponential {
    fn generate(&mut self) -> f64 {
        self.distr.sample(&mut rand::thread_rng())
    }
}
@@ -1,2 +0,0 @@
pub mod dataset;
pub mod generator;
@@ -1,316 +0,0 @@
mod common;
use std::time::Instant;

use common::dataset::Dataset;
use common::generator;
use common::generator::Generator;
use sketches_ddsketch::{Config, DDSketch};

const TEST_ALPHA: f64 = 0.01;
const TEST_MAX_BINS: u32 = 1024;
const TEST_MIN_VALUE: f64 = 1.0e-9;

// Used for float equality
const TEST_ERROR_THRESH: f64 = 1.0e-9;

const TEST_SIZES: [usize; 5] = [3, 5, 10, 100, 1000];
const TEST_QUANTILES: [f64; 10] = [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99, 0.999, 1.0];

#[test]
fn test_constant() {
    evaluate_sketches(|| Box::new(generator::Constant::new(42.0)));
}

#[test]
fn test_linear() {
    evaluate_sketches(|| Box::new(generator::Linear::new(0.0, 1.0)));
}

#[test]
fn test_normal() {
    evaluate_sketches(|| Box::new(generator::Normal::new(35.0, 1.0)));
}

#[test]
fn test_lognormal() {
    evaluate_sketches(|| Box::new(generator::Lognormal::new(0.0, 2.0)));
}

#[test]
fn test_exponential() {
    evaluate_sketches(|| Box::new(generator::Exponential::new(2.0)));
}

fn evaluate_test_sizes(f: impl Fn(usize)) {
    for sz in &TEST_SIZES {
        f(*sz);
    }
}

fn evaluate_sketches(gen_factory: impl Fn() -> Box<dyn generator::Generator>) {
    evaluate_test_sizes(|sz: usize| {
        let mut generator = gen_factory();
        evaluate_sketch(sz, &mut generator);
    });
}

fn new_config() -> Config {
    Config::new(TEST_ALPHA, TEST_MAX_BINS, TEST_MIN_VALUE)
}

fn assert_float_eq(a: f64, b: f64) {
    assert!((a - b).abs() < TEST_ERROR_THRESH, "{} != {}", a, b);
}

fn evaluate_sketch(count: usize, generator: &mut Box<dyn generator::Generator>) {
    let c = new_config();
    let mut g = DDSketch::new(c);

    let mut d = Dataset::new();

    for _i in 0..count {
        let value = generator.generate();

        g.add(value);
        d.add(value);
    }

    compare_sketches(&mut d, &g);
}

fn compare_sketches(d: &mut Dataset, g: &DDSketch) {
    for q in &TEST_QUANTILES {
        let lower = d.lower_quantile(*q);
        let upper = d.upper_quantile(*q);

        let min_expected = if lower < 0.0 {
            lower * (1.0 + TEST_ALPHA)
        } else {
            lower * (1.0 - TEST_ALPHA)
        };

        let max_expected = if upper > 0.0 {
            upper * (1.0 + TEST_ALPHA)
        } else {
            upper * (1.0 - TEST_ALPHA)
        };

        let quantile = g.quantile(*q).unwrap().unwrap();

        assert!(
            min_expected <= quantile,
            "Lower than min, quantile: {}, wanted {} <= {}",
            *q,
            min_expected,
            quantile
        );
        assert!(
            quantile <= max_expected,
            "Higher than max, quantile: {}, wanted {} <= {}",
            *q,
            quantile,
            max_expected
        );

        // Verify that repeated calls do not modify the result
        // (quantile takes &self, so mutation should not be possible).
        let quantile2 = g.quantile(*q).unwrap().unwrap();
        assert_eq!(quantile, quantile2);
    }

    assert_eq!(g.min().unwrap(), d.min());
    assert_eq!(g.max().unwrap(), d.max());
    assert_float_eq(g.sum().unwrap(), d.sum());
    assert_eq!(g.count(), d.count());
}

#[test]
fn test_merge_normal() {
    evaluate_test_sizes(|sz: usize| {
        let c = new_config();
        let mut d = Dataset::new();
        let mut g1 = DDSketch::new(c);

        let mut generator1 = generator::Normal::new(35.0, 1.0);
        for _ in (0..sz).step_by(3) {
            let value = generator1.generate();
            g1.add(value);
            d.add(value);
        }
        let mut g2 = DDSketch::new(c);
        let mut generator2 = generator::Normal::new(50.0, 2.0);
        for _ in (1..sz).step_by(3) {
            let value = generator2.generate();
            g2.add(value);
            d.add(value);
        }
        g1.merge(&g2).unwrap();

        let mut g3 = DDSketch::new(c);
        let mut generator3 = generator::Normal::new(40.0, 0.5);
        for _ in (2..sz).step_by(3) {
            let value = generator3.generate();
            g3.add(value);
            d.add(value);
        }
        g1.merge(&g3).unwrap();

        compare_sketches(&mut d, &g1);
    });
}

#[test]
fn test_merge_empty() {
    evaluate_test_sizes(|sz: usize| {
        let c = new_config();

        let mut d = Dataset::new();

        let mut g1 = DDSketch::new(c);
        let mut g2 = DDSketch::new(c);
        let mut generator = generator::Exponential::new(5.0);

        for _ in 0..sz {
            let value = generator.generate();
            g2.add(value);
            d.add(value);
        }
        g1.merge(&g2).unwrap();
        compare_sketches(&mut d, &g1);

        let g3 = DDSketch::new(c);
        g2.merge(&g3).unwrap();
        compare_sketches(&mut d, &g2);
    });
}

#[test]
fn test_merge_mixed() {
    evaluate_test_sizes(|sz: usize| {
        let c = new_config();
        let mut d = Dataset::new();
        let mut g1 = DDSketch::new(c);

        let mut generator1 = generator::Normal::new(100.0, 1.0);
        for _ in (0..sz).step_by(3) {
            let value = generator1.generate();
            g1.add(value);
            d.add(value);
        }

        let mut g2 = DDSketch::new(c);
        let mut generator2 = generator::Exponential::new(5.0);
        for _ in (1..sz).step_by(3) {
            let value = generator2.generate();
            g2.add(value);
            d.add(value);
        }
        g1.merge(&g2).unwrap();

        let mut g3 = DDSketch::new(c);
        let mut generator3 = generator::Exponential::new(0.1);
        for _ in (2..sz).step_by(3) {
            let value = generator3.generate();
            g3.add(value);
            d.add(value);
        }
        g1.merge(&g3).unwrap();

        compare_sketches(&mut d, &g1);
    });
}

#[test]
fn test_merge_incompatible() {
    let c1 = Config::new(TEST_ALPHA, TEST_MAX_BINS, TEST_MIN_VALUE);
    let c2 = Config::new(TEST_ALPHA * 2.0, TEST_MAX_BINS, TEST_MIN_VALUE);

    let mut d1 = DDSketch::new(c1);
    let d2 = DDSketch::new(c2);

    assert!(d1.merge(&d2).is_err());

    let c3 = Config::new(TEST_ALPHA, TEST_MAX_BINS, TEST_MIN_VALUE * 10.0);
    let d3 = DDSketch::new(c3);

    assert!(d1.merge(&d3).is_err());

    let c4 = Config::new(TEST_ALPHA, TEST_MAX_BINS * 2, TEST_MIN_VALUE);
    let d4 = DDSketch::new(c4);

    assert!(d1.merge(&d4).is_err());

    // Merging with an identical config should work.
    let c5 = Config::new(TEST_ALPHA, TEST_MAX_BINS, TEST_MIN_VALUE);
    let dsame = DDSketch::new(c5);
    assert!(d1.merge(&dsame).is_ok());
}

#[test]
#[ignore]
fn test_performance_insert() {
    let c = Config::defaults();
    let mut g = DDSketch::new(c);
    let mut gen = generator::Normal::new(1000.0, 500.0);
    let count = 300_000_000;

    let mut values = Vec::new();
    for _ in 0..count {
        values.push(gen.generate());
    }

    let start_time = Instant::now();
    for value in values {
        g.add(value);
    }

    // This simply ensures the operations don't get optimized out as unused.
    let quantile = g.quantile(0.50).unwrap().unwrap();

    let elapsed = start_time.elapsed().as_micros() as f64;
    let elapsed = elapsed / 1_000_000.0;

    println!(
        "RESULT: p50={:.2} => Added {}M samples in {:.2} secs ({:.2}M samples/sec)",
        quantile,
        count / 1_000_000,
        elapsed,
        (count as f64) / 1_000_000.0 / elapsed
    );
}

#[test]
#[ignore]
fn test_performance_merge() {
    let c = Config::defaults();
    let mut gen = generator::Normal::new(1000.0, 500.0);
    let merge_count = 500_000;
    let sample_count = 1_000;
    let mut sketches = Vec::new();

    for _ in 0..merge_count {
        let mut d = DDSketch::new(c);
        for _ in 0..sample_count {
            d.add(gen.generate());
        }
        sketches.push(d);
    }

    let mut base = DDSketch::new(c);

    let start_time = Instant::now();
    for sketch in &sketches {
        base.merge(sketch).unwrap();
    }

    let elapsed = start_time.elapsed().as_micros() as f64;
    let elapsed = elapsed / 1_000_000.0;

    println!(
        "RESULT: Merged {} sketches in {:.2} secs ({:.2} merges/sec)",
        merge_count,
        elapsed,
        (merge_count as f64) / elapsed
    );
}
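The quantile assertions above encode DDSketch's relative-error guarantee: the estimate must fall within a factor of (1 ± α) of the exact quantile, with the bound flipped for negative values. A compact restatement of that check, collapsing the lower/upper bracketing into a single exact value:

```rust
const ALPHA: f64 = 0.01;

/// True when `estimate` satisfies DDSketch's relative-accuracy bound
/// against the `exact` quantile, mirroring the assertions above.
fn within_relative_error(exact: f64, estimate: f64) -> bool {
    let (lo, hi) = if exact >= 0.0 {
        (exact * (1.0 - ALPHA), exact * (1.0 + ALPHA))
    } else {
        (exact * (1.0 + ALPHA), exact * (1.0 - ALPHA))
    };
    lo <= estimate && estimate <= hi
}

fn main() {
    assert!(within_relative_error(100.0, 100.9));
    assert!(!within_relative_error(100.0, 102.0));
}
```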
@@ -20,16 +20,17 @@ Contains all metric aggregations, like average aggregation. Metric aggregations
#### agg_req
agg_req contains the user's aggregation request. Deserialization from JSON is compatible with Elasticsearch aggregation requests.

#### agg_data
agg_data contains the user's aggregation request enriched with fast field accessors etc., which are
#### agg_req_with_accessor
agg_req_with_accessor contains the user's aggregation request enriched with fast field accessors etc., which are
used during collection.

#### segment_agg_result
segment_agg_result contains the aggregation result tree, which is used for collection of a segment.
agg_data is passed during collection.
The tree from agg_req_with_accessor is passed during collection.

#### intermediate_agg_result
intermediate_agg_result contains the aggregation tree for merging with other trees.

#### agg_result
agg_result contains the final aggregation tree.
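For orientation, the request types described above deserialize straight from Elasticsearch-style JSON. A minimal sketch (assuming the `tantivy` and `serde_json` crates; the `score` field name is purely illustrative):

```rust
use tantivy::aggregation::agg_req::Aggregations;

fn main() -> serde_json::Result<()> {
    // Hypothetical request: an average and a stats metric over a fast
    // field named "score".
    let elasticsearch_compatible_json_req = r#"{
        "average_score": { "avg": { "field": "score" } },
        "score_stats": { "stats": { "field": "score" } }
    }"#;
    let agg_req: Aggregations = serde_json::from_str(elasticsearch_compatible_json_req)?;
    // The map keys are the user-defined aggregation names.
    assert!(agg_req.contains_key("average_score"));
    Ok(())
}
```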
@@ -1,115 +0,0 @@
//! This enhances the request tree with access to the fast fields and metadata.

use std::io;

use columnar::{Column, ColumnType};

use crate::aggregation::{f64_to_fastfield_u64, Key};
use crate::index::SegmentReader;

/// Get the missing value as internal u64 representation
///
/// For terms we use u64::MAX as sentinel value
/// For numerical data we convert the value into the representation
/// we would get from the fast field, when we open it as u64_lenient_for_type.
///
/// That way we can use it the same way as if it would come from the fastfield.
pub(crate) fn get_missing_val_as_u64_lenient(
    column_type: ColumnType,
    column_max_value: u64,
    missing: &Key,
    field_name: &str,
) -> crate::Result<Option<u64>> {
    let missing_val = match missing {
        Key::Str(_) if column_type == ColumnType::Str => Some(column_max_value + 1),
        // Allow fallback to number on text fields
        Key::F64(_) if column_type == ColumnType::Str => Some(column_max_value + 1),
        Key::U64(_) if column_type == ColumnType::Str => Some(column_max_value + 1),
        Key::I64(_) if column_type == ColumnType::Str => Some(column_max_value + 1),
        Key::F64(val) if column_type.numerical_type().is_some() => {
            f64_to_fastfield_u64(*val, &column_type)
        }
        // NOTE: We may lose precision of the passed missing value by casting i64 and u64 to f64.
        Key::I64(val) if column_type.numerical_type().is_some() => {
            f64_to_fastfield_u64(*val as f64, &column_type)
        }
        Key::U64(val) if column_type.numerical_type().is_some() => {
            f64_to_fastfield_u64(*val as f64, &column_type)
        }
        _ => {
            return Err(crate::TantivyError::InvalidArgument(format!(
                "Missing value {missing:?} for field {field_name} is not supported for column \
                 type {column_type:?}"
            )));
        }
    };
    Ok(missing_val)
}

pub(crate) fn get_numeric_or_date_column_types() -> &'static [ColumnType] {
    &[
        ColumnType::F64,
        ColumnType::U64,
        ColumnType::I64,
        ColumnType::DateTime,
    ]
}

/// Get fast field reader or empty as default.
pub(crate) fn get_ff_reader(
    reader: &SegmentReader,
    field_name: &str,
    allowed_column_types: Option<&[ColumnType]>,
) -> crate::Result<(columnar::Column<u64>, ColumnType)> {
    let ff_fields = reader.fast_fields();
    let ff_field_with_type = ff_fields
        .u64_lenient_for_type(allowed_column_types, field_name)?
        .unwrap_or_else(|| {
            (
                Column::build_empty_column(reader.num_docs()),
                ColumnType::U64,
            )
        });
    Ok(ff_field_with_type)
}

pub(crate) fn get_dynamic_columns(
    reader: &SegmentReader,
    field_name: &str,
) -> crate::Result<Vec<columnar::DynamicColumn>> {
    let ff_fields = reader.fast_fields().dynamic_column_handles(field_name)?;
    let cols = ff_fields
        .iter()
        .map(|h| h.open())
        .collect::<io::Result<_>>()?;
    assert!(!ff_fields.is_empty(), "field {field_name} not found");
    Ok(cols)
}

/// Get all fast field readers or empty as default.
///
/// Is guaranteed to return at least one column.
pub(crate) fn get_all_ff_reader_or_empty(
    reader: &SegmentReader,
    field_name: &str,
    allowed_column_types: Option<&[ColumnType]>,
    fallback_type: ColumnType,
) -> crate::Result<Vec<(columnar::Column<u64>, ColumnType)>> {
    let mut ff_field_with_type = get_all_ff_readers(reader, field_name, allowed_column_types)?;
    if ff_field_with_type.is_empty() {
        ff_field_with_type.push((Column::build_empty_column(reader.num_docs()), fallback_type));
    }
    Ok(ff_field_with_type)
}

/// Get all fast field readers.
pub(crate) fn get_all_ff_readers(
    reader: &SegmentReader,
    field_name: &str,
    allowed_column_types: Option<&[ColumnType]>,
) -> crate::Result<Vec<(columnar::Column<u64>, ColumnType)>> {
    let ff_fields = reader.fast_fields();
    let ff_field_with_type =
        ff_fields.u64_lenient_for_type_all(allowed_column_types, field_name)?;
    Ok(ff_field_with_type)
}
File diff suppressed because it is too large
@@ -35,7 +35,6 @@ pub struct AggregationLimitsGuard {
    /// Allocated memory with this guard.
    allocated_with_the_guard: u64,
}

impl Clone for AggregationLimitsGuard {
    fn clone(&self) -> Self {
        Self {
@@ -71,7 +70,7 @@ impl AggregationLimitsGuard {
    /// *memory_limit*
    /// memory_limit is defined in bytes.
    /// Aggregation fails when the estimated memory consumption of the aggregation is higher than
    /// memory_limit.
    /// memory_limit.
    /// memory_limit will default to `DEFAULT_MEMORY_LIMIT` (500MB)
    ///
    /// *bucket_limit*
@@ -26,27 +26,24 @@
//! let _agg_req: Aggregations = serde_json::from_str(elasticsearch_compatible_json_req).unwrap();
//! ```

use std::collections::HashSet;
use std::collections::{HashMap, HashSet};

use rustc_hash::FxHashMap;
use serde::{Deserialize, Serialize};

use super::bucket::{
    DateHistogramAggregationReq, FilterAggregation, HistogramAggregation, RangeAggregation,
    TermsAggregation,
    DateHistogramAggregationReq, HistogramAggregation, RangeAggregation, TermsAggregation,
};
use super::metric::{
    AverageAggregation, CardinalityAggregationReq, CountAggregation, ExtendedStatsAggregation,
    MaxAggregation, MinAggregation, PercentilesAggregationReq, StatsAggregation, SumAggregation,
    TopHitsAggregationReq,
};
use crate::aggregation::bucket::CompositeAggregation;

/// The top-level aggregation request structure, which contains [`Aggregation`] and their user
/// defined names. It is also used in buckets aggregations to define sub-aggregations.
///
/// The key is the user defined name of the aggregation.
pub type Aggregations = FxHashMap<String, Aggregation>;
pub type Aggregations = HashMap<String, Aggregation>;

/// Aggregation request.
///
@@ -132,12 +129,6 @@ pub enum AggregationVariants {
    /// Put data into buckets of terms.
    #[serde(rename = "terms")]
    Terms(TermsAggregation),
    /// Filter documents into a single bucket.
    #[serde(rename = "filter")]
    Filter(FilterAggregation),
    /// Put data into multi level paginated buckets.
    #[serde(rename = "composite")]
    Composite(CompositeAggregation),

    // Metric aggregation types
    /// Computes the average of the extracted values.
@@ -183,12 +174,6 @@ impl AggregationVariants {
            AggregationVariants::Range(range) => vec![range.field.as_str()],
            AggregationVariants::Histogram(histogram) => vec![histogram.field.as_str()],
            AggregationVariants::DateHistogram(histogram) => vec![histogram.field.as_str()],
            AggregationVariants::Filter(filter) => filter.get_fast_field_names(),
            AggregationVariants::Composite(composite) => composite
                .sources
                .iter()
                .map(|source_map| source_map.field())
                .collect(),
            AggregationVariants::Average(avg) => vec![avg.field_name()],
            AggregationVariants::Count(count) => vec![count.field_name()],
            AggregationVariants::Max(max) => vec![max.field_name()],
@@ -223,12 +208,13 @@ impl AggregationVariants {
            _ => None,
        }
    }

    pub(crate) fn as_composite(&self) -> Option<&CompositeAggregation> {
    pub(crate) fn as_top_hits(&self) -> Option<&TopHitsAggregationReq> {
        match &self {
            AggregationVariants::Composite(composite) => Some(composite),
            AggregationVariants::TopHits(top_hits) => Some(top_hits),
            _ => None,
        }
    }

    pub(crate) fn as_percentile(&self) -> Option<&PercentilesAggregationReq> {
        match &self {
            AggregationVariants::Percentiles(percentile_req) => Some(percentile_req),
Some files were not shown because too many files have changed in this diff