* Optimizing top K using Adrien Grand's ideas
https://jpountz.github.io/2025/08/28/compiled-vs-vectorized-search-engine-edition.html
* Suffix-sum pruning for multi-term intersection candidates
After scoring each secondary in Phase 2, check whether the remaining
secondaries' block_max scores can still beat the threshold. If they
cannot, skip to the next candidate early, avoiding expensive seeks
into the later secondaries (sketched below).
Improves three-term intersection by ~8% on the balanced benchmark
while keeping two-term performance neutral.
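A minimal sketch of the pruning loop, assuming a precomputed suffix-sum array of block-max scores; all names and signatures here are illustrative rather than tantivy's actual internals:

```rust
use tantivy::query::Scorer;
use tantivy::{DocId, DocSet, Score};

/// Sketch only: `suffix_max[i]` holds the sum of the block-max scores
/// of secondaries `i..`, rebuilt whenever one of their blocks changes.
fn score_candidate(
    doc: DocId,
    primary_score: Score,
    secondaries: &mut [Box<dyn Scorer>],
    suffix_max: &[Score],
    threshold: Score,
) -> Option<Score> {
    let mut score = primary_score;
    for (i, secondary) in secondaries.iter_mut().enumerate() {
        // Upper bound on what the remaining secondaries can still add.
        // If even that cannot beat the threshold, give up on this
        // candidate before paying for the remaining seeks.
        if score + suffix_max[i] <= threshold {
            return None;
        }
        if secondary.seek(doc) != doc {
            return None; // not an intersection hit after all
        }
        score += secondary.score();
    }
    Some(score)
}
```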
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Claude CR comment
* Removed the 16-term scorer limit.
---------
Co-authored-by: Paul Masurel <paul.masurel@datadoghq.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Fix O(2^n) query parser regression for deeply-nested queries
The top-level `ast()` parser used `alt((boolean_expr, single_leaf))` at
every group level. When the group contained a single leaf with no
trailing operand, `boolean_expr` would parse `occur_leaf` (recursing
into the inner group), fail at `multispace1`, backtrack, and then
`single_leaf` would re-parse `occur_leaf` from scratch. Every nesting
level doubled the work, giving O(2^n) time for queries like
`(((((title:test)))))`.
Parse `occur_leaf` once and peek ahead for a trailing operand instead
of backtracking. This keeps parsing O(n) and also avoids the duplicate
parse for simple single-leaf queries.
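A hedged sketch of the shape of the fix: `occur_leaf` and `operand_leaf` are the grammar's own parsers, while `boolean_expr_from_first` and `single_leaf_from` are hypothetical helpers standing in for the real AST construction.

```rust
use nom::character::complete::multispace1;
use nom::sequence::preceded;
use nom::IResult;

fn ast(i: &str) -> IResult<&str, UserInputAst> {
    // Parse the leaf exactly once...
    let (rest, first) = occur_leaf(i)?;
    // ...then peek for a trailing operand instead of backtracking
    // through the whole group and re-parsing the leaf from scratch.
    match preceded(multispace1, operand_leaf)(rest) {
        Ok(_) => boolean_expr_from_first(first, rest),
        // Only a recoverable error falls back to the single-leaf
        // interpretation; Failure/Incomplete propagate, preserving
        // the original `alt` semantics.
        Err(nom::Err::Error(_)) => Ok((rest, single_leaf_from(first))),
        Err(e) => Err(e),
    }
}
```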
Fixes #2498.
Measured on the issue reproducer (release build):
depth    before     after
20       0.87 s     <1 us
25       28.23 s    <1 us
60       (years)    ~5 us
Non-pathological queries are unaffected or slightly faster:
query                   before     after
hello                   650 ns     308 ns
a AND b AND c           1380 ns    1364 ns
title:rust AND (...)    3426 ns    3460 ns
All 53 existing grammar tests and 56 query_parser tests pass. Adds a
regression test at depth 60 that would not complete under the old
parser.
* Add ignored benchmark for nested query parsing at depth 20/21
Matches the depths from issue #2498, which reported 0.87 s / 1.72 s
under the regression. With the fix, these parse in single-digit
microseconds. Runs via:
cargo test -p tantivy-query-grammar --release bench_deeply_nested \
-- --ignored --nocapture
* Propagate Err::Failure and Err::Incomplete from operand parser
`alt((boolean_expr, single_leaf))` only retried on `Err::Error` and
propagated `Err::Failure` and `Err::Incomplete`. The replacement was
catching all three with `Err(_)`, which would silently fall back to
a single leaf if any cut point were ever added to `operand_leaf` or
its descendants. Match specifically on `Err::Error` to preserve the
original `alt` semantics.
* Replace inline bench with binggan bench in benches/
Move the nested-query benchmark out of the query-grammar test module
and into a proper binggan benchmark at benches/query_parser_nested.rs,
registered as a harnessless bench in Cargo.toml. Keeps the correctness
regression test (depth 60) in place.
Run with: cargo bench --bench query_parser_nested
* Fix rustfmt import ordering in query_parser_nested bench
Previously, coupons were computed via murmurhash32 and fed as raw u32
to the HLL sketch, bypassing the sketch's internal hashing and producing
invalid (slot, value) pairs. Switch to Coupon::from_hash from the
datasketches crate, which correctly derives coupons, and drop the
now-unused murmurhash32 dependency.
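A heavily hedged before/after sketch; `Coupon::from_hash` is the constructor named above, while `hash_of` and `update_coupon` are assumptions standing in for the crate's hashing and update entry points.

```rust
/// Illustrative only: exact datasketches paths and signatures are
/// assumptions, not verified against the crate.
fn add_term(sketch: &mut HllSketch, term: &str) {
    // Before (buggy): a raw murmurhash32 value is not a coupon, and
    // feeding it in as one produced invalid (slot, value) pairs:
    //     sketch.update_coupon(murmurhash32::murmurhash3(term.as_bytes()))
    //
    // After: derive a proper coupon from the term's hash, so slot and
    // value are extracted the way the sketch expects.
    let coupon = Coupon::from_hash(hash_of(term));
    sketch.update_coupon(coupon);
}
```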
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When a string cardinality aggregation is nested, it ends up being applied to many different buckets.
Dictionary encoding relies on a different dictionary for each segment.
As a result, during segment collection, we only collect term ordinals in a HashSet, and decode them in the
term dictionary at the end of collection.
Before this PR, this decoding phase was done once for each bucket, causing the same work to be done over and over. This PR introduces a coupon cache: the HLL sketch relies on a hash of the string values, so we populate the cache before bucket collection and get our values from it (see the sketch below).
This PR also renames "caching" to "buffering" in aggregation (it was never caching), and does several cleanups.
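A hedged sketch of the coupon cache; `decode_term`, `hash_term`, and the `Coupon` type (assumed `Copy`) are stand-ins for the term-dictionary lookup, the sketch's hash, and the datasketches coupon.

```rust
/// Resolve each term ordinal to its coupon once, before bucket
/// collection, instead of re-decoding and re-hashing the same term
/// for every bucket. Term ordinals are segment-local and dense, so a
/// Vec indexed by ordinal suffices.
struct CouponCache {
    coupons: Vec<Coupon>, // indexed by term ordinal
}

impl CouponCache {
    fn build(num_terms: u64) -> Self {
        let coupons = (0..num_terms)
            .map(|ord| {
                let term = decode_term(ord);        // dictionary decode, once
                Coupon::from_hash(hash_term(&term)) // hash, once
            })
            .collect();
        CouponCache { coupons }
    }

    /// During bucket collection, a coupon is now a plain array lookup.
    fn get(&self, term_ord: u64) -> Coupon {
        self.coupons[term_ord as usize]
    }
}
```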
Keep use_serde on sketches-ddsketch so DDSketch derives
Serialize/Deserialize, removing the need for custom impls
on PercentilesCollector.
Co-authored-by: Cursor <cursoragent@cursor.com>
Replace the derived Serialize/Deserialize on PercentilesCollector with
custom impls that use DDSketch's Java-compatible binary encoding
(encode_to_java_bytes / decode_from_java_bytes). This removes the need
for the use_serde feature on sketches-ddsketch entirely.
Also restore original float test values and use assert_nearly_equals!
for all float comparisons in percentile tests, since DDSketch quantile
estimates can have minor precision differences across platforms.
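A hedged sketch of the custom impls described above; `encode_to_java_bytes` / `decode_from_java_bytes` are the methods the change names, with their exact signatures assumed.

```rust
use serde::de::Error as _;
use serde::{Deserialize, Deserializer, Serialize, Serializer};

/// Wrapper type is illustrative; the real collector carries more state.
struct PercentilesCollector {
    sketch: DDSketch,
}

impl Serialize for PercentilesCollector {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Serialize through the Java-compatible binary encoding instead
        // of deriving serde field by field (which needed `use_serde`).
        serializer.serialize_bytes(&self.sketch.encode_to_java_bytes())
    }
}

impl<'de> Deserialize<'de> for PercentilesCollector {
    fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
        let bytes = Vec::<u8>::deserialize(deserializer)?;
        let sketch =
            DDSketch::decode_from_java_bytes(&bytes).map_err(D::Error::custom)?;
        Ok(PercentilesCollector { sketch })
    }
}
```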
Co-authored-by: Cursor <cursoragent@cursor.com>
Move the vendored sketches-ddsketch crate (with Java-compatible binary
encoding) to its own repo at quickwit-oss/rust-sketches-ddsketch and
reference it via git+rev in Cargo.toml.
Co-authored-by: Cursor <cursoragent@cursor.com>
Fork sketches-ddsketch as a workspace member to add native Java binary
serialization (to_java_bytes/from_java_bytes) for DDSketch. This enables
pomsky to return raw DDSketch bytes that event-query can deserialize via
DDSketchWithExactSummaryStatistics.decode().
Key changes:
- Vendor sketches-ddsketch crate with encoding.rs implementing VarEncoding,
flag bytes, and INDEX_DELTAS_AND_COUNTS store format
- Align Config::key() to floor-based indexing matching Java's LogarithmicMapping (sketched below)
- Add PercentilesCollector::to_sketch_bytes() for pomsky integration
- Cross-language golden byte tests verified byte-identical with Java output
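The mapping formula isn't spelled out in this log; a hedged sketch of floor-based logarithmic indexing, with gamma derived from the relative accuracy alpha as in DDSketch (any index offset used by Java's LogarithmicMapping is omitted):

```rust
/// For relative accuracy alpha, gamma = (1 + alpha) / (1 - alpha) and
/// a value maps to bucket key floor(log_gamma(value)).
fn key(value: f64, alpha: f64) -> i32 {
    let gamma = (1.0 + alpha) / (1.0 - alpha);
    (value.ln() / gamma.ln()).floor() as i32
}
```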
Co-authored-by: Cursor <cursoragent@cursor.com>
Switch tantivy's cardinality aggregation from the hyperloglogplus crate
(HyperLogLog++ with p=16) to the official Apache DataSketches HLL
implementation (datasketches crate v0.2.0 with lg_k=11, Hll4).
This enables returning raw HLL sketch bytes from pomsky to Datadog's
event query, where they can be properly deserialized and merged using
the same DataSketches library (Java). The previous implementation
required pomsky to fabricate fake HLL sketches from scalar cardinality
estimates, which produced incorrect results when merged.
Changes:
- Cargo.toml: hyperloglogplus 0.4.1 -> datasketches 0.2.0
- CardinalityCollector: HyperLogLogPlus<u64, BuildSaltedHasher> -> HllSketch
- Custom Serde impl using HllSketch binary format (cross-shard compat)
- New to_sketch_bytes() for external consumers (pomsky)
- Salt preserved via (salt, value) tuple hashing for column type disambiguation (sketched below)
- Removed BuildSaltedHasher struct
- Added 4 new unit tests (serde roundtrip, merge, binary compat, salt)
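A minimal sketch of the (salt, value) tuple hashing; `DefaultHasher` stands in for whichever 64-bit hash actually feeds the sketch.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Instead of a salted BuildHasher, fold the per-column salt into the
/// hashed key itself, so identical values in columns of different
/// types still land in different HLL registers.
fn salted_hash(salt: u8, value: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    (salt, value).hash(&mut hasher);
    hasher.finish()
}
```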
* seek_exact + cost based intersection
Adds `seek_exact` and `cost` to `DocSet` for a more efficient intersection.
Unlike `seek`, `seek_exact` does not require the DocSet to advance to the next hit if the target does not exist.
`cost` accounts for the different DocSet types and their cost models,
and is used to determine which DocSet drives the intersection.
E.g. fast field range queries may do a full scan, and phrase queries load positions to check whether we have a hit;
both have a higher cost than their `size_hint` would suggest. A sketch of the resulting loop follows.
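A hedged sketch of how `cost` and `seek_exact` combine; the method signatures here are assumptions (`seek_exact` returning whether the target is a hit, without advancing past it), and tantivy's actual ones may differ.

```rust
use tantivy::{DocId, DocSet, TERMINATED};

/// The cheapest DocSet drives; the others only confirm membership.
fn intersect(docsets: &mut Vec<Box<dyn DocSet>>) -> Vec<DocId> {
    docsets.sort_by_key(|docset| docset.cost());
    let (driver, others) = docsets.split_first_mut().expect("non-empty");
    let mut hits = Vec::new();
    let mut doc = driver.doc();
    while doc != TERMINATED {
        // `seek_exact` never needs to find the next hit past `doc`,
        // which is what makes it cheaper than `seek` here.
        if others.iter_mut().all(|docset| docset.seek_exact(doc)) {
            hits.push(doc);
        }
        doc = driver.advance();
    }
    hits
}
```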
Improves the `size_hint` estimation for intersection and union, using an
estimate based on a random distribution with a co-location factor (sketched below).
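The estimate itself isn't spelled out in this log; a minimal sketch under the stated assumptions (uniform random distribution, multiplicative co-location factor):

```rust
/// Under a uniform random distribution, sets of sizes a and b drawn
/// from max_doc documents intersect in about a * b / max_doc docs on
/// average. Real postings are correlated, so a co-location factor > 1
/// (value illustrative) scales the naive estimate, clamped to the
/// smaller input.
fn intersection_size_hint(a: u64, b: u64, max_doc: u64, co_location: f64) -> u64 {
    let independent = (a as f64) * (b as f64) / (max_doc as f64).max(1.0);
    (independent * co_location).min(a.min(b) as f64) as u64
}
```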
Refactor range query benchmark.
Closes #2531
*Future Work*
Implement `seek_exact` for BufferedUnionScorer and RangeDocSet (fast field range queries)
Evaluate replacing `seek` with `seek_exact` to reduce code complexity
* Apply suggestions from code review
Co-authored-by: Paul Masurel <paul@quickwit.io>
* add API contract verification
* impl seek_exact on union
* rename seek_exact
* add mixed AND OR test, fix buffered_union
* Add a proptest of BooleanQuery. (#2690)
* fix build
* Increase the document count.
* fix merge conflict
* fix debug assert
* Fix compilation errors after rebase
- Remove duplicate proptest_boolean_query module
- Remove duplicate cost() method implementations
- Fix TopDocs API usage (add .order_by_score())
- Remove duplicate imports
- Remove unused variable assignments
---------
Co-authored-by: Paul Masurel <paul@quickwit.io>
Co-authored-by: Pascal Seitz <pascal.seitz@datadoghq.com>
Co-authored-by: Stu Hood <stuhood@gmail.com>
* Refactoring of the score tweaker into `SortKeyComputer`s to unlock two features (see the sketch below).
- Allow lazy evaluation of the sort key: as soon as we identify that a doc won't
reach the top-K threshold, we can stop the evaluation.
- Allow the segment-level sort key and the final sort key to have different types, with a conversion between them.
This PR breaks the public API, but fixing existing code is straightforward.
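A hedged sketch of the trait shape this implies; names and signatures are illustrative, not tantivy's actual API.

```rust
use tantivy::{DocId, Score};

trait SortKeyComputer {
    /// Per-segment sort key; may differ from the final key type.
    type SegmentSortKey: PartialOrd;
    /// Final, cross-segment sort key.
    type SortKey;

    /// Cheap upper bound: if it cannot beat the current top-K
    /// threshold, the full key is never computed for this doc.
    fn upper_bound(&mut self, doc: DocId, score: Score) -> Self::SegmentSortKey;
    fn segment_sort_key(&mut self, doc: DocId, score: Score) -> Self::SegmentSortKey;
    /// Convert a segment-level key into the final key.
    fn convert(&self, key: Self::SegmentSortKey) -> Self::SortKey;
}
```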
* Bumping tantivy version
---------
Co-authored-by: Paul Masurel <paul.masurel@datadoghq.com>
* Initial impl
* Added `Filter` impl in `build_single_agg_segment_collector_with_reader` + Added tests
* Added `Filter(FilterBucketResult)` + Made tests work.
* Fixed type issues.
* Fixed a test.
* 8a7a73a: Pass `segment_reader`
* Added more tests.
* Improved parsing + tests
* refactoring
* Added more tests.
* refactoring: moved parsing code under QueryParser
* Use Tantivy syntax instead of ES
* Added a sanity check test.
* Simplified impl + tests
* Added back tests in a more maintainable way
* nitz.
* nitz
* implemented very simple fast-path
* improved a comment
* implemented fast field support
* Used `BoundsRange`
* Improved fast field impl + tests
* Simplified execution.
* Fixed exports + nitz
* Improved the tests to check to the expected result.
* Improved test by checking the whole result JSON
* Removed brittle perf checks.
* Added efficiency verification tests.
* Added one more efficiency check test.
* Improved the efficiency tests.
* Removed unnecessary parsing code + added direct Query obj
* Fixed tests.
* Improved tests
* Fixed code structure
* Fixed lint issues
* nitz.
* nitz
* nitz.
* nitz.
* nitz.
* Added an example
* Fixed PR comments.
* Applied PR comments + nitz
* nitz.
* Improved the code.
* Fixed a perf issue.
* Added batch processing.
* Made the example more interesting
* Fixed bucket count
* Renamed Direct to CustomQuery
* Fixed lint issues.
* No need for scorer to be an `Option`
* nitz
* Used BitSet
* Added an optimization for AllQuery
* Fixed merge issues.
* Fixed lint issues.
* Added benchmark for FILTER
* Removed the Option wrapper.
* nitz.
* Applied PR comments.
* Fixed the AllQuery optimization
* Applied PR comments.
* feat: used `erased_serde` to allow filter query to be serialized
* further improved a comment
* Added back tests.
* removed an unused method
* removed an unused method
* Added documentation
* nitz.
* Added query builder.
* Fixed a comment.
* Applied PR comments.
* Fixed doctest issues.
* Added ser/de
* Removed bench in test
* Fixed a lint issue.
* Added per-field size details.
This also does a bunch of refactoring:
- Merging field metadata no longer silently assumes that its arguments are sorted.
- Merging does not set `stored`.
- We do not rely on a hashmap to group fields, but instead on the fact that
the term dictionary is sorted (sketched below).
- The inverted-index-level method that exposes field metadata is no longer public.
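A minimal sketch of the sorted-merge grouping, with an illustrative `FieldMetadata`:

```rust
/// Illustrative metadata record; the real struct carries more fields.
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord)]
struct FieldMetadata {
    field_name: String,
    num_bytes: u64,
}

/// Because each segment emits its fields in term-dictionary order,
/// equal field names group adjacently after merging the sorted inputs;
/// no hashmap is needed. A plain re-sort keeps this sketch short
/// (a k-way merge avoids it in practice).
fn merge_field_metadata(per_segment: Vec<Vec<FieldMetadata>>) -> Vec<FieldMetadata> {
    let mut all: Vec<FieldMetadata> = per_segment.into_iter().flatten().collect();
    all.sort();
    let mut merged: Vec<FieldMetadata> = Vec::new();
    for fm in all {
        match merged.last_mut() {
            Some(last) if last.field_name == fm.field_name => {
                last.num_bytes += fm.num_bytes; // aggregate per-field sizes
            }
            _ => merged.push(fm),
        }
    }
    merged
}
```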
* CR comment
---------
Co-authored-by: Paul Masurel <paul.masurel@datadoghq.com>