Mirror of https://github.com/quickwit-oss/tantivy.git (synced 2026-02-05 15:40:36 +00:00)

Compare commits: 1 commit on branch `postings-w`

| Author | SHA1 | Date |
|---|---|---|
| | dd1190f4bc | |
@@ -1,125 +0,0 @@
---
name: rationalize-deps
description: Analyze Cargo.toml dependencies and attempt to remove unused features to reduce compile times and binary size
---

# Rationalize Dependencies

This skill analyzes Cargo.toml dependencies to identify and remove unused features.

## Overview

Many crates enable features by default that may not be needed. This skill:

1. Identifies dependencies with default features enabled
2. Tests if `default-features = false` works
3. Identifies which specific features are actually needed
4. Verifies compilation after changes
## Step 1: Identify the target

Ask the user which crate(s) to analyze:
- A specific crate name (e.g., "tokio", "serde")
- A specific workspace member (e.g., "quickwit-search")
- "all" to scan the entire workspace

## Step 2: Analyze current dependencies

For the workspace Cargo.toml (`quickwit/Cargo.toml`), list dependencies that:
- Do NOT have `default-features = false`
- Have default features that might be unnecessary

Run: `cargo tree -p <crate> -f "{p} {f}" --edges features` to see what features are actually used.
## Step 3: For each candidate dependency

### 3a: Check the crate's default features

Look up the crate on crates.io or check its Cargo.toml to understand:
- What features are enabled by default
- What each feature provides

Use: `cargo metadata --format-version=1 | jq '.packages[] | select(.name == "<crate>") | .features'`

### 3b: Try disabling default features

Modify the dependency in `quickwit/Cargo.toml`:

From:
```toml
some-crate = { version = "1.0" }
```

To:
```toml
some-crate = { version = "1.0", default-features = false }
```
### 3c: Run cargo check

Run: `cargo check --workspace` (or target specific packages for faster feedback)

If compilation fails:
1. Read the error messages to identify which features are needed
2. Add only the required features explicitly:
```toml
some-crate = { version = "1.0", default-features = false, features = ["needed-feature"] }
```
3. Re-run cargo check
### 3d: Binary search for minimal features

If there are many default features, use binary search:
1. Start with no features
2. If it fails, add half the default features
3. Continue until you find the minimal set (see the sketch below)
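The skill describes this loop only in prose. Below is a minimal bash sketch of one way to drive it; the crate name `some-crate`, the candidate feature subsets, and the assumption that the dependency appears as a single `some-crate = ...` line in `quickwit/Cargo.toml` are all illustrative, not part of the skill.

```bash
#!/usr/bin/env bash
# Try progressively larger feature subsets for one dependency until the
# workspace type-checks. Roll the manifest back after every failed attempt.
set -u

crate="some-crate"                      # hypothetical dependency
candidates=(
  ''                                    # no extra features
  'features = ["derive"]'               # half of the (hypothetical) defaults
  'features = ["derive", "std"]'        # all of them
)

for extra in "${candidates[@]}"; do
  line="${crate} = { version = \"1.0\", default-features = false${extra:+, ${extra}} }"
  sed -i.bak "s|^${crate} = .*|${line}|" quickwit/Cargo.toml
  if cargo check --workspace --quiet; then
    echo "compiles with: ${line}"
    break
  fi
  git checkout -- quickwit/Cargo.toml   # restore the manifest, try the next subset
done
```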
## Step 4: Document findings

For each dependency analyzed, report:
- Original configuration
- New configuration (if changed)
- Features that were removed
- Any features that are required

## Step 5: Verify full build

After all changes, run:
```bash
cargo check --workspace --all-targets
cargo test --workspace --no-run
```
## Common Patterns

### Serde
Often only needs `derive` (and `std`):
```toml
serde = { version = "1.0", default-features = false, features = ["derive", "std"] }
```

### Tokio
Identify which runtime features are actually used:
```toml
tokio = { version = "1.0", default-features = false, features = ["rt-multi-thread", "macros", "sync"] }
```

### Reqwest
Often doesn't need all TLS backends:
```toml
reqwest = { version = "0.11", default-features = false, features = ["rustls-tls", "json"] }
```
## Rollback

If changes cause issues:
```bash
git checkout quickwit/Cargo.toml
cargo check --workspace
```

## Tips

- Start with large crates that have many default features (tokio, reqwest, hyper)
- Use `cargo bloat --crates` to identify large dependencies
- Check `cargo tree -d` for duplicate dependencies that might indicate feature conflicts
- Some features are needed only for tests; consider using `[dev-dependencies]` features
@@ -1,60 +0,0 @@
---
name: simple-pr
description: Create a simple PR from staged changes with an auto-generated commit message
disable-model-invocation: true
---

# Simple PR

Follow these steps to create a simple PR from staged changes:

## Step 1: Check workspace state

Run: `git status`

Verify that all changes have been staged (no unstaged changes). If there are unstaged changes, abort and ask the user to stage their changes first with `git add`.

Also verify that we are on the `main` branch. If not, abort and ask the user to switch to main first.
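The two checks above can be expressed as a short shell sketch (the skill itself only describes them in prose):

```bash
# Abort unless the working tree has no unstaged changes and we are on main.
if ! git diff --quiet; then
  echo "Unstaged changes present; run 'git add' first." >&2
  exit 1
fi
if [ "$(git branch --show-current)" != "main" ]; then
  echo "Not on the main branch; switch to main first." >&2
  exit 1
fi
```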
## Step 2: Ensure main is up to date

Run: `git pull origin main`

This ensures we're working from the latest code.

## Step 3: Review staged changes

Run: `git diff --cached`

Review the staged changes to understand what the PR will contain.

## Step 4: Generate commit message

Based on the staged changes, generate a concise commit message (1-2 sentences) that describes the "why" rather than the "what".

Display the proposed commit message to the user and ask for confirmation before proceeding.
## Step 5: Create a new branch

Get the git username: `git config user.name | tr ' ' '-' | tr '[:upper:]' '[:lower:]'`

Create a short, descriptive branch name based on the changes (e.g., `fix-typo-in-readme`, `add-retry-logic`, `update-deps`).

Create and checkout the branch: `git checkout -b {username}/{short-descriptive-name}`
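Put together, the two commands above look like this (the branch slug `fix-typo-in-readme` is just the placeholder example from the list above):

```bash
username="$(git config user.name | tr ' ' '-' | tr '[:upper:]' '[:lower:]')"
git checkout -b "${username}/fix-typo-in-readme"
```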
## Step 6: Commit changes

Commit with the message from step 4:
```
git commit -m "{commit-message}"
```

## Step 7: Push and open a PR

Push the branch and open a PR:
```
git push -u origin {branch-name}
gh pr create --title "{commit-message-title}" --body "{longer-description-if-needed}"
```

Report the PR URL to the user when complete.
11 Cargo.toml

@@ -15,7 +15,7 @@ rust-version = "1.85"
exclude = ["benches/*.json", "benches/*.txt"]

[dependencies]
oneshot = "0.1.13"
oneshot = "0.1.7"
base64 = "0.22.0"
byteorder = "1.4.3"
crc32fast = "1.3.2"

@@ -193,12 +193,3 @@ harness = false
[[bench]]
name = "str_search_and_get"
harness = false

[[bench]]
name = "merge_segments"
harness = false

[[bench]]
name = "regex_all_terms"
harness = false
@@ -1,224 +0,0 @@
|
||||
// Benchmarks segment merging
|
||||
//
|
||||
// Notes:
|
||||
// - Input segments are kept intact (no deletes / no IndexWriter merge).
|
||||
// - Output is written to a `NullDirectory` that discards all files except
|
||||
// fieldnorms (needed for merging).
|
||||
|
||||
use std::collections::HashMap;
|
||||
use std::io::{self, Write};
|
||||
use std::path::{Path, PathBuf};
|
||||
use std::sync::{Arc, RwLock};
|
||||
|
||||
use binggan::{black_box, BenchRunner};
|
||||
use rand::prelude::*;
|
||||
use rand::rngs::StdRng;
|
||||
use rand::SeedableRng;
|
||||
use tantivy::directory::error::{DeleteError, OpenReadError, OpenWriteError};
|
||||
use tantivy::directory::{
|
||||
AntiCallToken, Directory, FileHandle, OwnedBytes, TerminatingWrite, WatchCallback, WatchHandle,
|
||||
WritePtr,
|
||||
};
|
||||
use tantivy::indexer::{merge_filtered_segments, NoMergePolicy};
|
||||
use tantivy::schema::{Schema, TEXT};
|
||||
use tantivy::{doc, HasLen, Index, IndexSettings, Segment};
|
||||
|
||||
#[derive(Clone, Default, Debug)]
|
||||
struct NullDirectory {
|
||||
blobs: Arc<RwLock<HashMap<PathBuf, OwnedBytes>>>,
|
||||
}
|
||||
|
||||
struct NullWriter;
|
||||
|
||||
impl Write for NullWriter {
|
||||
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
|
||||
Ok(buf.len())
|
||||
}
|
||||
|
||||
fn flush(&mut self) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
impl TerminatingWrite for NullWriter {
|
||||
fn terminate_ref(&mut self, _token: AntiCallToken) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
struct InMemoryWriter {
|
||||
path: PathBuf,
|
||||
buffer: Vec<u8>,
|
||||
blobs: Arc<RwLock<HashMap<PathBuf, OwnedBytes>>>,
|
||||
}
|
||||
|
||||
impl Write for InMemoryWriter {
|
||||
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
|
||||
self.buffer.extend_from_slice(buf);
|
||||
Ok(buf.len())
|
||||
}
|
||||
|
||||
fn flush(&mut self) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
impl TerminatingWrite for InMemoryWriter {
|
||||
fn terminate_ref(&mut self, _token: AntiCallToken) -> io::Result<()> {
|
||||
let bytes = OwnedBytes::new(std::mem::take(&mut self.buffer));
|
||||
self.blobs.write().unwrap().insert(self.path.clone(), bytes);
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Default)]
|
||||
struct NullFileHandle;
|
||||
impl HasLen for NullFileHandle {
|
||||
fn len(&self) -> usize {
|
||||
0
|
||||
}
|
||||
}
|
||||
impl FileHandle for NullFileHandle {
|
||||
fn read_bytes(&self, _range: std::ops::Range<usize>) -> io::Result<OwnedBytes> {
|
||||
unimplemented!()
|
||||
}
|
||||
}
|
||||
|
||||
impl Directory for NullDirectory {
|
||||
fn get_file_handle(&self, path: &Path) -> Result<Arc<dyn FileHandle>, OpenReadError> {
|
||||
if let Some(bytes) = self.blobs.read().unwrap().get(path) {
|
||||
return Ok(Arc::new(bytes.clone()));
|
||||
}
|
||||
Ok(Arc::new(NullFileHandle))
|
||||
}
|
||||
|
||||
fn delete(&self, _path: &Path) -> Result<(), DeleteError> {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn exists(&self, _path: &Path) -> Result<bool, OpenReadError> {
|
||||
Ok(true)
|
||||
}
|
||||
|
||||
fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError> {
|
||||
let path_buf = path.to_path_buf();
|
||||
if path.to_string_lossy().ends_with(".fieldnorm") {
|
||||
let writer = InMemoryWriter {
|
||||
path: path_buf,
|
||||
buffer: Vec::new(),
|
||||
blobs: Arc::clone(&self.blobs),
|
||||
};
|
||||
Ok(io::BufWriter::new(Box::new(writer)))
|
||||
} else {
|
||||
Ok(io::BufWriter::new(Box::new(NullWriter)))
|
||||
}
|
||||
}
|
||||
|
||||
fn atomic_read(&self, path: &Path) -> Result<Vec<u8>, OpenReadError> {
|
||||
if let Some(bytes) = self.blobs.read().unwrap().get(path) {
|
||||
return Ok(bytes.as_slice().to_vec());
|
||||
}
|
||||
Err(OpenReadError::FileDoesNotExist(path.to_path_buf()))
|
||||
}
|
||||
|
||||
fn atomic_write(&self, _path: &Path, _data: &[u8]) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn sync_directory(&self) -> io::Result<()> {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn watch(&self, _watch_callback: WatchCallback) -> tantivy::Result<WatchHandle> {
|
||||
Ok(WatchHandle::empty())
|
||||
}
|
||||
}
|
||||
|
||||
struct MergeScenario {
|
||||
#[allow(dead_code)]
|
||||
index: Index,
|
||||
segments: Vec<Segment>,
|
||||
settings: IndexSettings,
|
||||
label: String,
|
||||
}
|
||||
|
||||
fn build_index(
|
||||
num_segments: usize,
|
||||
docs_per_segment: usize,
|
||||
tokens_per_doc: usize,
|
||||
vocab_size: usize,
|
||||
) -> MergeScenario {
|
||||
let mut schema_builder = Schema::builder();
|
||||
let body = schema_builder.add_text_field("body", TEXT);
|
||||
let schema = schema_builder.build();
|
||||
let index = Index::create_in_ram(schema.clone());
|
||||
|
||||
assert!(vocab_size > 0);
|
||||
let total_tokens = num_segments * docs_per_segment * tokens_per_doc;
|
||||
let use_unique_terms = vocab_size >= total_tokens;
|
||||
let mut rng = StdRng::from_seed([7u8; 32]);
|
||||
let mut next_token_id: u64 = 0;
|
||||
|
||||
{
|
||||
let mut writer = index.writer_with_num_threads(1, 256_000_000).unwrap();
|
||||
writer.set_merge_policy(Box::new(NoMergePolicy));
|
||||
for _ in 0..num_segments {
|
||||
for _ in 0..docs_per_segment {
|
||||
let mut tokens = Vec::with_capacity(tokens_per_doc);
|
||||
for _ in 0..tokens_per_doc {
|
||||
let token_id = if use_unique_terms {
|
||||
let id = next_token_id;
|
||||
next_token_id += 1;
|
||||
id
|
||||
} else {
|
||||
rng.random_range(0..vocab_size as u64)
|
||||
};
|
||||
tokens.push(format!("term_{token_id}"));
|
||||
}
|
||||
writer.add_document(doc!(body => tokens.join(" "))).unwrap();
|
||||
}
|
||||
writer.commit().unwrap();
|
||||
}
|
||||
}
|
||||
|
||||
let segments = index.searchable_segments().unwrap();
|
||||
let settings = index.settings().clone();
|
||||
let label = format!(
|
||||
"segments={}, docs/seg={}, tokens/doc={}, vocab={}",
|
||||
num_segments, docs_per_segment, tokens_per_doc, vocab_size
|
||||
);
|
||||
|
||||
MergeScenario {
|
||||
index,
|
||||
segments,
|
||||
settings,
|
||||
label,
|
||||
}
|
||||
}
|
||||
|
||||
fn main() {
|
||||
let scenarios = vec![
|
||||
build_index(8, 50_000, 12, 8),
|
||||
build_index(16, 50_000, 12, 8),
|
||||
build_index(16, 100_000, 12, 8),
|
||||
build_index(8, 50_000, 8, 8 * 50_000 * 8),
|
||||
];
|
||||
|
||||
let mut runner = BenchRunner::new();
|
||||
for scenario in scenarios {
|
||||
let mut group = runner.new_group();
|
||||
group.set_name(format!("merge_segments inv_index — {}", scenario.label));
|
||||
let segments = scenario.segments.clone();
|
||||
let settings = scenario.settings.clone();
|
||||
group.register("merge", move |_| {
|
||||
let output_dir = NullDirectory::default();
|
||||
let filter_doc_ids = vec![None; segments.len()];
|
||||
let merged_index =
|
||||
merge_filtered_segments(&segments, settings.clone(), filter_doc_ids, output_dir)
|
||||
.unwrap();
|
||||
black_box(merged_index);
|
||||
});
|
||||
|
||||
group.run();
|
||||
}
|
||||
}
|
||||
@@ -1,113 +0,0 @@
|
||||
// Benchmarks regex query that matches all terms in a synthetic index.
|
||||
//
|
||||
// Corpus model:
|
||||
// - N unique terms: t000000, t000001, ...
|
||||
// - M docs
|
||||
// - K tokens per doc: doc i gets terms derived from (i, token_index)
|
||||
//
|
||||
// Query:
|
||||
// - Regex "t.*" to match all terms
|
||||
//
|
||||
// Run with:
|
||||
// - cargo bench --bench regex_all_terms
|
||||
//
|
||||
|
||||
use std::fmt::Write;
|
||||
|
||||
use binggan::{black_box, BenchRunner};
|
||||
use tantivy::collector::Count;
|
||||
use tantivy::query::RegexQuery;
|
||||
use tantivy::schema::{Schema, TEXT};
|
||||
use tantivy::{doc, Index, ReloadPolicy};
|
||||
|
||||
const HEAP_SIZE_BYTES: usize = 200_000_000;
|
||||
|
||||
#[derive(Clone, Copy)]
|
||||
struct BenchConfig {
|
||||
num_terms: usize,
|
||||
num_docs: usize,
|
||||
tokens_per_doc: usize,
|
||||
}
|
||||
|
||||
fn main() {
|
||||
let configs = default_configs();
|
||||
|
||||
let mut runner = BenchRunner::new();
|
||||
for config in configs {
|
||||
let (index, text_field) = build_index(config, HEAP_SIZE_BYTES);
|
||||
let reader = index
|
||||
.reader_builder()
|
||||
.reload_policy(ReloadPolicy::Manual)
|
||||
.try_into()
|
||||
.expect("reader");
|
||||
let searcher = reader.searcher();
|
||||
let query = RegexQuery::from_pattern("t.*", text_field).expect("regex query");
|
||||
|
||||
let mut group = runner.new_group();
|
||||
group.set_name(format!(
|
||||
"regex_all_terms_t{}_d{}_k{}",
|
||||
config.num_terms, config.num_docs, config.tokens_per_doc
|
||||
));
|
||||
group.register("regex_count", move |_| {
|
||||
let count = searcher.search(&query, &Count).expect("search");
|
||||
black_box(count);
|
||||
});
|
||||
group.run();
|
||||
}
|
||||
}
|
||||
|
||||
fn default_configs() -> Vec<BenchConfig> {
|
||||
vec![
|
||||
BenchConfig {
|
||||
num_terms: 10_000,
|
||||
num_docs: 100_000,
|
||||
tokens_per_doc: 1,
|
||||
},
|
||||
BenchConfig {
|
||||
num_terms: 10_000,
|
||||
num_docs: 100_000,
|
||||
tokens_per_doc: 8,
|
||||
},
|
||||
BenchConfig {
|
||||
num_terms: 100_000,
|
||||
num_docs: 100_000,
|
||||
tokens_per_doc: 1,
|
||||
},
|
||||
BenchConfig {
|
||||
num_terms: 100_000,
|
||||
num_docs: 100_000,
|
||||
tokens_per_doc: 8,
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
fn build_index(config: BenchConfig, heap_size_bytes: usize) -> (Index, tantivy::schema::Field) {
|
||||
let mut schema_builder = Schema::builder();
|
||||
let text_field = schema_builder.add_text_field("text", TEXT);
|
||||
let schema = schema_builder.build();
|
||||
let index = Index::create_in_ram(schema);
|
||||
|
||||
let term_width = config.num_terms.to_string().len();
|
||||
{
|
||||
let mut writer = index
|
||||
.writer_with_num_threads(1, heap_size_bytes)
|
||||
.expect("writer");
|
||||
let mut buffer = String::new();
|
||||
for doc_id in 0..config.num_docs {
|
||||
buffer.clear();
|
||||
for token_idx in 0..config.tokens_per_doc {
|
||||
if token_idx > 0 {
|
||||
buffer.push(' ');
|
||||
}
|
||||
let term_id = (doc_id * config.tokens_per_doc + token_idx) % config.num_terms;
|
||||
write!(&mut buffer, "t{term_id:0term_width$}").expect("write token");
|
||||
}
|
||||
writer
|
||||
.add_document(doc!(text_field => buffer.as_str()))
|
||||
.expect("add_document");
|
||||
}
|
||||
writer.commit().expect("commit");
|
||||
}
|
||||
|
||||
(index, text_field)
|
||||
}
|
||||
@@ -60,7 +60,7 @@ At indexing, tantivy will try to interpret number and strings as different type
priority order.

Numbers will be interpreted as u64, i64 and f64 in that order.
Strings will be interpreted as rfc3339 dates or simple strings.
Strings will be interpreted as rfc3999 dates or simple strings.

The first working type is picked and is the only term that is emitted for indexing.
Note this interpretation happens on a per-document basis, and there is no effort to try to sniff

@@ -81,7 +81,7 @@ Will be interpreted as
(my_path.my_segment, String, 233) or (my_path.my_segment, u64, 233)
```

Likewise, we need to emit two tokens if the query contains an rfc3339 date.
Likewise, we need to emit two tokens if the query contains an rfc3999 date.
Indeed the date could have been actually a single token inside the text of a document at ingestion time. Generally speaking, we will always at least emit a string token in query parsing, and sometimes more.

If one more json field is defined, things get even more complicated.
@@ -70,7 +70,7 @@ impl Collector for StatsCollector {
|
||||
fn for_segment(
|
||||
&self,
|
||||
_segment_local_id: u32,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &SegmentReader,
|
||||
) -> tantivy::Result<StatsSegmentCollector> {
|
||||
let fast_field_reader = segment_reader.fast_fields().u64(&self.field)?;
|
||||
Ok(StatsSegmentCollector {
|
||||
|
||||
@@ -65,7 +65,7 @@ fn main() -> tantivy::Result<()> {
|
||||
);
|
||||
let top_docs_by_custom_score =
|
||||
// Call TopDocs with a custom tweak score
|
||||
TopDocs::with_limit(2).tweak_score(move |segment_reader: &dyn SegmentReader| {
|
||||
TopDocs::with_limit(2).tweak_score(move |segment_reader: &SegmentReader| {
|
||||
let ingredient_reader = segment_reader.facet_reader("ingredient").unwrap();
|
||||
let facet_dict = ingredient_reader.facet_dict();
|
||||
|
||||
|
||||
@@ -43,7 +43,7 @@ impl DynamicPriceColumn {
|
||||
}
|
||||
}
|
||||
|
||||
pub fn price_for_segment(&self, segment_reader: &dyn SegmentReader) -> Option<Arc<Vec<Price>>> {
|
||||
pub fn price_for_segment(&self, segment_reader: &SegmentReader) -> Option<Arc<Vec<Price>>> {
|
||||
let segment_key = (segment_reader.segment_id(), segment_reader.delete_opstamp());
|
||||
self.price_cache.read().unwrap().get(&segment_key).cloned()
|
||||
}
|
||||
@@ -157,7 +157,7 @@ fn main() -> tantivy::Result<()> {
|
||||
let query = query_parser.parse_query("cooking")?;
|
||||
|
||||
let searcher = reader.searcher();
|
||||
let score_by_price = move |segment_reader: &dyn SegmentReader| {
|
||||
let score_by_price = move |segment_reader: &SegmentReader| {
|
||||
let price = price_dynamic_column
|
||||
.price_for_segment(segment_reader)
|
||||
.unwrap();
|
||||
|
||||
@@ -560,7 +560,7 @@ fn range_infallible(inp: &str) -> JResult<&str, UserInputLeaf> {
|
||||
(
|
||||
(
|
||||
value((), tag(">=")),
|
||||
map(word_infallible(")", false), |(bound, err)| {
|
||||
map(word_infallible("", false), |(bound, err)| {
|
||||
(
|
||||
(
|
||||
bound
|
||||
@@ -574,7 +574,7 @@ fn range_infallible(inp: &str) -> JResult<&str, UserInputLeaf> {
|
||||
),
|
||||
(
|
||||
value((), tag("<=")),
|
||||
map(word_infallible(")", false), |(bound, err)| {
|
||||
map(word_infallible("", false), |(bound, err)| {
|
||||
(
|
||||
(
|
||||
UserInputBound::Unbounded,
|
||||
@@ -588,7 +588,7 @@ fn range_infallible(inp: &str) -> JResult<&str, UserInputLeaf> {
|
||||
),
|
||||
(
|
||||
value((), tag(">")),
|
||||
map(word_infallible(")", false), |(bound, err)| {
|
||||
map(word_infallible("", false), |(bound, err)| {
|
||||
(
|
||||
(
|
||||
bound
|
||||
@@ -602,7 +602,7 @@ fn range_infallible(inp: &str) -> JResult<&str, UserInputLeaf> {
|
||||
),
|
||||
(
|
||||
value((), tag("<")),
|
||||
map(word_infallible(")", false), |(bound, err)| {
|
||||
map(word_infallible("", false), |(bound, err)| {
|
||||
(
|
||||
(
|
||||
UserInputBound::Unbounded,
|
||||
@@ -1323,14 +1323,6 @@ mod test {
|
||||
test_parse_query_to_ast_helper("<a", "{\"*\" TO \"a\"}");
|
||||
test_parse_query_to_ast_helper("<=a", "{\"*\" TO \"a\"]");
|
||||
test_parse_query_to_ast_helper("<=bsd", "{\"*\" TO \"bsd\"]");
|
||||
|
||||
test_parse_query_to_ast_helper("(<=42)", "{\"*\" TO \"42\"]");
|
||||
test_parse_query_to_ast_helper("(<=42 )", "{\"*\" TO \"42\"]");
|
||||
test_parse_query_to_ast_helper("(age:>5)", "\"age\":{\"5\" TO \"*\"}");
|
||||
test_parse_query_to_ast_helper(
|
||||
"(title:bar AND age:>12)",
|
||||
"(+\"title\":bar +\"age\":{\"12\" TO \"*\"})",
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
|
||||
@@ -57,7 +57,7 @@ pub(crate) fn get_numeric_or_date_column_types() -> &'static [ColumnType] {
|
||||
|
||||
/// Get fast field reader or empty as default.
|
||||
pub(crate) fn get_ff_reader(
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
field_name: &str,
|
||||
allowed_column_types: Option<&[ColumnType]>,
|
||||
) -> crate::Result<(columnar::Column<u64>, ColumnType)> {
|
||||
@@ -74,7 +74,7 @@ pub(crate) fn get_ff_reader(
|
||||
}
|
||||
|
||||
pub(crate) fn get_dynamic_columns(
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
field_name: &str,
|
||||
) -> crate::Result<Vec<columnar::DynamicColumn>> {
|
||||
let ff_fields = reader.fast_fields().dynamic_column_handles(field_name)?;
|
||||
@@ -90,7 +90,7 @@ pub(crate) fn get_dynamic_columns(
|
||||
///
|
||||
/// Is guaranteed to return at least one column.
|
||||
pub(crate) fn get_all_ff_reader_or_empty(
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
field_name: &str,
|
||||
allowed_column_types: Option<&[ColumnType]>,
|
||||
fallback_type: ColumnType,
|
||||
|
||||
@@ -469,7 +469,7 @@ impl AggKind {
|
||||
/// Build AggregationsData by walking the request tree.
|
||||
pub(crate) fn build_aggregations_data_from_req(
|
||||
aggs: &Aggregations,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
segment_ordinal: SegmentOrdinal,
|
||||
context: AggContextParams,
|
||||
) -> crate::Result<AggregationsSegmentCtx> {
|
||||
@@ -489,7 +489,7 @@ pub(crate) fn build_aggregations_data_from_req(
|
||||
fn build_nodes(
|
||||
agg_name: &str,
|
||||
req: &Aggregation,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
segment_ordinal: SegmentOrdinal,
|
||||
data: &mut AggregationsSegmentCtx,
|
||||
is_top_level: bool,
|
||||
@@ -728,7 +728,7 @@ fn build_nodes(
|
||||
let idx_in_req_data = data.push_filter_req_data(FilterAggReqData {
|
||||
name: agg_name.to_string(),
|
||||
req: filter_req.clone(),
|
||||
segment_reader: reader.clone_arc(),
|
||||
segment_reader: reader.clone(),
|
||||
evaluator,
|
||||
matching_docs_buffer,
|
||||
is_top_level,
|
||||
@@ -745,7 +745,7 @@ fn build_nodes(
|
||||
|
||||
fn build_children(
|
||||
aggs: &Aggregations,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
segment_ordinal: SegmentOrdinal,
|
||||
data: &mut AggregationsSegmentCtx,
|
||||
) -> crate::Result<Vec<AggRefNode>> {
|
||||
@@ -764,7 +764,7 @@ fn build_children(
|
||||
}
|
||||
|
||||
fn get_term_agg_accessors(
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
field_name: &str,
|
||||
missing: &Option<Key>,
|
||||
) -> crate::Result<Vec<(Column<u64>, ColumnType)>> {
|
||||
@@ -817,7 +817,7 @@ fn build_terms_or_cardinality_nodes(
|
||||
agg_name: &str,
|
||||
field_name: &str,
|
||||
missing: &Option<Key>,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
segment_ordinal: SegmentOrdinal,
|
||||
data: &mut AggregationsSegmentCtx,
|
||||
sub_aggs: &Aggregations,
|
||||
|
||||
@@ -1,5 +1,4 @@
|
||||
use std::fmt::Debug;
|
||||
use std::sync::Arc;
|
||||
|
||||
use common::BitSet;
|
||||
use serde::{Deserialize, Deserializer, Serialize, Serializer};
|
||||
@@ -403,7 +402,7 @@ pub struct FilterAggReqData {
|
||||
/// The filter aggregation
|
||||
pub req: FilterAggregation,
|
||||
/// The segment reader
|
||||
pub segment_reader: Arc<dyn SegmentReader>,
|
||||
pub segment_reader: SegmentReader,
|
||||
/// Document evaluator for the filter query (precomputed BitSet)
|
||||
/// This is built once when the request data is created
|
||||
pub evaluator: DocumentQueryEvaluator,
|
||||
@@ -417,7 +416,7 @@ impl FilterAggReqData {
|
||||
pub(crate) fn get_memory_consumption(&self) -> usize {
|
||||
// Estimate: name + segment reader reference + bitset + buffer capacity
|
||||
self.name.len()
|
||||
+ std::mem::size_of::<Arc<dyn SegmentReader>>()
|
||||
+ std::mem::size_of::<SegmentReader>()
|
||||
+ self.evaluator.bitset.len() / 8 // BitSet memory (bits to bytes)
|
||||
+ self.matching_docs_buffer.capacity() * std::mem::size_of::<DocId>()
|
||||
+ std::mem::size_of::<bool>()
|
||||
@@ -439,7 +438,7 @@ impl DocumentQueryEvaluator {
|
||||
pub(crate) fn new(
|
||||
query: Box<dyn Query>,
|
||||
schema: Schema,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &SegmentReader,
|
||||
) -> crate::Result<Self> {
|
||||
let max_doc = segment_reader.max_doc();
|
||||
|
||||
|
||||
@@ -66,7 +66,7 @@ impl Collector for DistributedAggregationCollector {
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_local_id: crate::SegmentOrdinal,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &crate::SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
AggregationSegmentCollector::from_agg_req_and_reader(
|
||||
&self.agg,
|
||||
@@ -96,7 +96,7 @@ impl Collector for AggregationCollector {
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_local_id: crate::SegmentOrdinal,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &crate::SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
AggregationSegmentCollector::from_agg_req_and_reader(
|
||||
&self.agg,
|
||||
@@ -145,7 +145,7 @@ impl AggregationSegmentCollector {
|
||||
/// reader. Also includes validation, e.g. checking field types and existence.
|
||||
pub fn from_agg_req_and_reader(
|
||||
agg: &Aggregations,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
segment_ordinal: SegmentOrdinal,
|
||||
context: &AggContextParams,
|
||||
) -> crate::Result<Self> {
|
||||
|
||||
@@ -820,7 +820,7 @@ impl IntermediateRangeBucketEntry {
|
||||
};
|
||||
|
||||
// If we have a date type on the histogram buckets, we add the `key_as_string` field as
|
||||
// rfc3339
|
||||
// rfc339
|
||||
if column_type == Some(ColumnType::DateTime) {
|
||||
if let Some(val) = range_bucket_entry.to {
|
||||
let key_as_string = format_date(val as i64)?;
|
||||
|
||||
139 src/codec/mod.rs
@@ -4,18 +4,18 @@ pub mod postings;
|
||||
/// Standard tantivy codec. This is the codec you use by default.
|
||||
pub mod standard;
|
||||
|
||||
use std::sync::Arc;
|
||||
use std::io;
|
||||
|
||||
pub use standard::StandardCodec;
|
||||
|
||||
use crate::codec::postings::PostingsCodec;
|
||||
use crate::directory::Directory;
|
||||
use crate::fastfield::AliveBitSet;
|
||||
use crate::fieldnorm::FieldNormReader;
|
||||
use crate::postings::{Postings, TermInfo};
|
||||
use crate::query::score_combiner::DoNothingCombiner;
|
||||
use crate::query::term_query::TermScorer;
|
||||
use crate::query::{box_scorer, BufferedUnionScorer, Scorer, SumCombiner};
|
||||
use crate::schema::Schema;
|
||||
use crate::{DocId, Score, SegmentMeta, SegmentReader, TantivySegmentReader};
|
||||
use crate::query::{box_scorer, Bm25Weight, BufferedUnionScorer, Scorer, SumCombiner};
|
||||
use crate::schema::IndexRecordOption;
|
||||
use crate::{DocId, InvertedIndexReader, Score};
|
||||
|
||||
/// Codecs describes how data is layed out on disk.
|
||||
///
|
||||
@@ -36,46 +36,58 @@ pub trait Codec: Clone + std::fmt::Debug + Send + Sync + 'static {
|
||||
|
||||
/// Returns the postings codec.
|
||||
fn postings_codec(&self) -> &Self::PostingsCodec;
|
||||
|
||||
/// Loads postings using the codec's concrete postings type.
|
||||
fn load_postings_typed(
|
||||
&self,
|
||||
reader: &dyn crate::index::InvertedIndexReader,
|
||||
term_info: &crate::postings::TermInfo,
|
||||
option: crate::schema::IndexRecordOption,
|
||||
) -> std::io::Result<<Self::PostingsCodec as crate::codec::postings::PostingsCodec>::Postings>
|
||||
{
|
||||
let postings_data = reader.read_raw_postings_data(term_info, option)?;
|
||||
self.postings_codec()
|
||||
.load_postings(term_info.doc_freq, postings_data)
|
||||
}
|
||||
|
||||
/// Opens a segment reader using this codec.
|
||||
///
|
||||
/// Override this if your codec uses a custom segment reader implementation.
|
||||
fn open_segment_reader(
|
||||
&self,
|
||||
directory: &dyn Directory,
|
||||
segment_meta: &SegmentMeta,
|
||||
schema: Schema,
|
||||
custom_bitset: Option<AliveBitSet>,
|
||||
) -> crate::Result<Arc<dyn SegmentReader>> {
|
||||
let codec: Arc<dyn ObjectSafeCodec> = Arc::new(self.clone());
|
||||
let reader = TantivySegmentReader::open_with_custom_alive_set_from_directory(
|
||||
directory,
|
||||
segment_meta,
|
||||
schema,
|
||||
codec,
|
||||
custom_bitset,
|
||||
)?;
|
||||
Ok(Arc::new(reader))
|
||||
}
|
||||
}
|
||||
|
||||
/// Object-safe codec is a Codec that can be used in a trait object.
|
||||
///
|
||||
/// The point of it is to offer a way to use a codec without a proliferation of generics.
|
||||
pub trait ObjectSafeCodec: 'static + Send + Sync {
|
||||
/// Loads a type-erased Postings object for the given term.
|
||||
///
|
||||
/// If the schema used to build the index did not provide enough
|
||||
/// information to match the requested `option`, a Postings is still
|
||||
/// returned in a best-effort manner.
|
||||
fn load_postings_type_erased(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
inverted_index_reader: &InvertedIndexReader,
|
||||
) -> io::Result<Box<dyn Postings>>;
|
||||
|
||||
/// Loads a type-erased TermScorer object for the given term.
|
||||
///
|
||||
/// If the schema used to build the index did not provide enough
|
||||
/// information to match the requested `option`, a TermScorer is still
|
||||
/// returned in a best-effort manner.
|
||||
///
|
||||
/// The point of this contraption is that the return TermScorer is backed,
|
||||
/// not by Box<dyn Postings> but by the codec's concrete Postings type.
|
||||
fn load_term_scorer_type_erased(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
inverted_index_reader: &InvertedIndexReader,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
similarity_weight: Bm25Weight,
|
||||
) -> io::Result<Box<dyn Scorer>>;
|
||||
|
||||
/// Loads a type-erased PhraseScorer object for the given term.
|
||||
///
|
||||
/// If the schema used to build the index did not provide enough
|
||||
/// information to match the requested `option`, a TermScorer is still
|
||||
/// returned in a best-effort manner.
|
||||
///
|
||||
/// The point of this contraption is that the return PhraseScorer is backed,
|
||||
/// not by Box<dyn Postings> but by the codec's concrete Postings type.
|
||||
fn new_phrase_scorer_type_erased(
|
||||
&self,
|
||||
term_infos: &[(usize, TermInfo)],
|
||||
similarity_weight: Option<Bm25Weight>,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
slop: u32,
|
||||
inverted_index_reader: &InvertedIndexReader,
|
||||
) -> io::Result<Box<dyn Scorer>>;
|
||||
|
||||
/// Performs a for_each_pruning operation on the given scorer.
|
||||
///
|
||||
/// The function will go through matching documents and call the callback
|
||||
@@ -104,6 +116,53 @@ pub trait ObjectSafeCodec: 'static + Send + Sync {
|
||||
}
|
||||
|
||||
impl<TCodec: Codec> ObjectSafeCodec for TCodec {
|
||||
fn load_postings_type_erased(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
inverted_index_reader: &InvertedIndexReader,
|
||||
) -> io::Result<Box<dyn Postings>> {
|
||||
let postings = inverted_index_reader
|
||||
.read_postings_from_terminfo_specialized(term_info, option, self)?;
|
||||
Ok(Box::new(postings))
|
||||
}
|
||||
|
||||
fn load_term_scorer_type_erased(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
inverted_index_reader: &InvertedIndexReader,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
similarity_weight: Bm25Weight,
|
||||
) -> io::Result<Box<dyn Scorer>> {
|
||||
let scorer = inverted_index_reader.new_term_scorer_specialized(
|
||||
term_info,
|
||||
option,
|
||||
fieldnorm_reader,
|
||||
similarity_weight,
|
||||
self,
|
||||
)?;
|
||||
Ok(box_scorer(scorer))
|
||||
}
|
||||
|
||||
fn new_phrase_scorer_type_erased(
|
||||
&self,
|
||||
term_infos: &[(usize, TermInfo)],
|
||||
similarity_weight: Option<Bm25Weight>,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
slop: u32,
|
||||
inverted_index_reader: &InvertedIndexReader,
|
||||
) -> io::Result<Box<dyn Scorer>> {
|
||||
let scorer = inverted_index_reader.new_phrase_scorer_type_specialized(
|
||||
term_infos,
|
||||
similarity_weight,
|
||||
fieldnorm_reader,
|
||||
slop,
|
||||
self,
|
||||
)?;
|
||||
Ok(box_scorer(scorer))
|
||||
}
|
||||
|
||||
fn build_union_scorer_with_sum_combiner(
|
||||
&self,
|
||||
scorers: Vec<Box<dyn Scorer>>,
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
/// Block-max WAND algorithm.
|
||||
pub mod block_wand;
|
||||
use std::io;
|
||||
|
||||
/// Block-max WAND algorithm.
|
||||
pub mod block_wand;
|
||||
use common::OwnedBytes;
|
||||
|
||||
use crate::fieldnorm::FieldNormReader;
|
||||
@@ -10,16 +10,38 @@ use crate::query::{Bm25Weight, Scorer};
|
||||
use crate::schema::IndexRecordOption;
|
||||
use crate::{DocId, Score};
|
||||
|
||||
/// Postings codec (read path).
|
||||
/// Postings codec.
|
||||
pub trait PostingsCodec: Send + Sync + 'static {
|
||||
/// Serializer type for the postings codec.
|
||||
type PostingsSerializer: PostingsSerializer;
|
||||
/// Postings type for the postings codec.
|
||||
type Postings: Postings + Clone;
|
||||
/// Creates a new postings serializer.
|
||||
fn new_serializer(
|
||||
&self,
|
||||
avg_fieldnorm: Score,
|
||||
mode: IndexRecordOption,
|
||||
fieldnorm_reader: Option<FieldNormReader>,
|
||||
) -> Self::PostingsSerializer;
|
||||
|
||||
/// Load postings from raw bytes and metadata.
|
||||
/// Loads postings
|
||||
///
|
||||
/// Record option is the option that was passed at indexing time.
|
||||
/// Requested option is the option that is requested.
|
||||
///
|
||||
/// For instance, we may have term_freq in the posting list
|
||||
/// but we can skip decompressing as we read the posting list.
|
||||
///
|
||||
/// If record option does not support the requested option,
|
||||
/// this method does NOT return an error and will in fact restrict
|
||||
/// requested_option to what is available.
|
||||
fn load_postings(
|
||||
&self,
|
||||
doc_freq: u32,
|
||||
postings_data: RawPostingsData,
|
||||
postings_data: OwnedBytes,
|
||||
record_option: IndexRecordOption,
|
||||
requested_option: IndexRecordOption,
|
||||
positions_data: Option<OwnedBytes>,
|
||||
) -> io::Result<Self::Postings>;
|
||||
|
||||
/// If your codec supports different ways to accelerate `for_each_pruning` that's
|
||||
@@ -41,17 +63,43 @@ pub trait PostingsCodec: Send + Sync + 'static {
|
||||
}
|
||||
}
|
||||
|
||||
/// Raw postings bytes and metadata read from storage.
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct RawPostingsData {
|
||||
/// Raw postings bytes for the term.
|
||||
pub postings_data: OwnedBytes,
|
||||
/// Raw positions bytes for the term, if positions are available.
|
||||
pub positions_data: Option<OwnedBytes>,
|
||||
/// Record option of the indexed field.
|
||||
pub record_option: IndexRecordOption,
|
||||
/// Effective record option after downgrading to the indexed field capability.
|
||||
pub effective_option: IndexRecordOption,
|
||||
/// A postings serializer is a listener that is in charge of serializing postings
|
||||
///
|
||||
/// IO is done only once per postings, once all of the data has been received.
|
||||
/// A serializer will therefore contain internal buffers.
|
||||
///
|
||||
/// A serializer is created once and recycled for all postings.
|
||||
///
|
||||
/// Clients should use PostingsSerializer as follows.
|
||||
/// ```rust,no_run
|
||||
/// // First postings list
|
||||
/// serializer.new_term(2, true);
|
||||
/// serializer.write_doc(2, 1);
|
||||
/// serializer.write_doc(6, 2);
|
||||
/// serializer.close_term(3);
|
||||
/// serializer.clear();
|
||||
/// // Second postings list
|
||||
/// serializer.new_term(1, true);
|
||||
/// serializer.write_doc(3, 1);
|
||||
/// serializer.close_term(3);
|
||||
/// ```
|
||||
pub trait PostingsSerializer {
|
||||
/// The term_doc_freq here is the number of documents
|
||||
/// in the postings lists.
|
||||
///
|
||||
/// It can be used to compute the idf that will be used for the
|
||||
/// blockmax parameters.
|
||||
///
|
||||
/// If not available (e.g. if we do not collect `term_frequencies`
|
||||
/// blockwand is disabled), the term_doc_freq passed will be set 0.
|
||||
fn new_term(&mut self, term_doc_freq: u32, record_term_freq: bool);
|
||||
|
||||
/// Records a new document id for the current term.
|
||||
/// The serializer may ignore it.
|
||||
fn write_doc(&mut self, doc_id: DocId, term_freq: u32);
|
||||
|
||||
/// Closes the current term and writes the postings list associated.
|
||||
fn close_term(&mut self, doc_freq: u32, wrt: &mut impl io::Write) -> io::Result<()>;
|
||||
}
|
||||
|
||||
/// A light complement interface to Postings to allow block-max wand acceleration.
|
||||
|
||||
50 src/codec/standard/postings/block.rs Normal file
@@ -0,0 +1,50 @@
|
||||
use crate::postings::compression::COMPRESSION_BLOCK_SIZE;
|
||||
use crate::DocId;
|
||||
|
||||
pub struct Block {
|
||||
doc_ids: [DocId; COMPRESSION_BLOCK_SIZE],
|
||||
term_freqs: [u32; COMPRESSION_BLOCK_SIZE],
|
||||
len: usize,
|
||||
}
|
||||
|
||||
impl Block {
|
||||
pub fn new() -> Self {
|
||||
Block {
|
||||
doc_ids: [0u32; COMPRESSION_BLOCK_SIZE],
|
||||
term_freqs: [0u32; COMPRESSION_BLOCK_SIZE],
|
||||
len: 0,
|
||||
}
|
||||
}
|
||||
|
||||
pub fn doc_ids(&self) -> &[DocId] {
|
||||
&self.doc_ids[..self.len]
|
||||
}
|
||||
|
||||
pub fn term_freqs(&self) -> &[u32] {
|
||||
&self.term_freqs[..self.len]
|
||||
}
|
||||
|
||||
pub fn clear(&mut self) {
|
||||
self.len = 0;
|
||||
}
|
||||
|
||||
pub fn append_doc(&mut self, doc: DocId, term_freq: u32) {
|
||||
let len = self.len;
|
||||
self.doc_ids[len] = doc;
|
||||
self.term_freqs[len] = term_freq;
|
||||
self.len = len + 1;
|
||||
}
|
||||
|
||||
pub fn is_full(&self) -> bool {
|
||||
self.len == COMPRESSION_BLOCK_SIZE
|
||||
}
|
||||
|
||||
pub fn is_empty(&self) -> bool {
|
||||
self.len == 0
|
||||
}
|
||||
|
||||
pub fn last_doc(&self) -> DocId {
|
||||
assert_eq!(self.len, COMPRESSION_BLOCK_SIZE);
|
||||
self.doc_ids[COMPRESSION_BLOCK_SIZE - 1]
|
||||
}
|
||||
}
|
||||
@@ -2,10 +2,10 @@ use std::io;
|
||||
|
||||
use common::{OwnedBytes, VInt};
|
||||
|
||||
use crate::codec::standard::postings::skip::{BlockInfo, SkipReader};
|
||||
use crate::codec::standard::postings::FreqReadingOption;
|
||||
use crate::fieldnorm::FieldNormReader;
|
||||
use crate::postings::compression::{BlockDecoder, VIntDecoder as _, COMPRESSION_BLOCK_SIZE};
|
||||
use crate::postings::skip::{BlockInfo, SkipReader};
|
||||
use crate::query::Bm25Weight;
|
||||
use crate::schema::IndexRecordOption;
|
||||
use crate::{DocId, Score, TERMINATED};
|
||||
@@ -337,17 +337,18 @@ mod tests {
|
||||
use common::OwnedBytes;
|
||||
|
||||
use super::BlockSegmentPostings;
|
||||
use crate::codec::postings::PostingsSerializer;
|
||||
use crate::codec::standard::postings::segment_postings::SegmentPostings;
|
||||
use crate::codec::standard::postings::StandardPostingsSerializer;
|
||||
use crate::docset::{DocSet, TERMINATED};
|
||||
use crate::postings::compression::COMPRESSION_BLOCK_SIZE;
|
||||
use crate::postings::serializer::PostingsSerializer;
|
||||
use crate::schema::IndexRecordOption;
|
||||
|
||||
#[cfg(test)]
|
||||
fn build_block_postings(docs: &[u32]) -> BlockSegmentPostings {
|
||||
let doc_freq = docs.len() as u32;
|
||||
let mut postings_serializer =
|
||||
PostingsSerializer::new(1.0f32, IndexRecordOption::Basic, None);
|
||||
StandardPostingsSerializer::new(1.0f32, IndexRecordOption::Basic, None);
|
||||
postings_serializer.new_term(docs.len() as u32, false);
|
||||
for doc in docs {
|
||||
postings_serializer.write_doc(*doc, 1u32);
|
||||
|
||||
@@ -1,20 +1,24 @@
|
||||
use std::io;
|
||||
|
||||
use common::BitSet;
|
||||
|
||||
use crate::codec::postings::block_wand::{block_wand, block_wand_single_scorer};
|
||||
use crate::codec::postings::{PostingsCodec, RawPostingsData};
|
||||
use crate::codec::postings::PostingsCodec;
|
||||
use crate::codec::standard::postings::block_segment_postings::BlockSegmentPostings;
|
||||
pub use crate::codec::standard::postings::segment_postings::SegmentPostings;
|
||||
use crate::fieldnorm::FieldNormReader;
|
||||
use crate::positions::PositionReader;
|
||||
use crate::query::term_query::TermScorer;
|
||||
use crate::query::{BufferedUnionScorer, Scorer, SumCombiner};
|
||||
use crate::schema::IndexRecordOption;
|
||||
use crate::{DocSet as _, Score, TERMINATED};
|
||||
|
||||
mod block;
|
||||
mod block_segment_postings;
|
||||
mod segment_postings;
|
||||
mod skip;
|
||||
mod standard_postings_serializer;
|
||||
|
||||
pub use segment_postings::SegmentPostings as StandardPostings;
|
||||
pub use standard_postings_serializer::StandardPostingsSerializer;
|
||||
|
||||
/// The default postings codec for tantivy.
|
||||
pub struct StandardPostingsCodec;
|
||||
@@ -28,14 +32,35 @@ pub(crate) enum FreqReadingOption {
|
||||
}
|
||||
|
||||
impl PostingsCodec for StandardPostingsCodec {
|
||||
type PostingsSerializer = StandardPostingsSerializer;
|
||||
type Postings = SegmentPostings;
|
||||
|
||||
fn new_serializer(
|
||||
&self,
|
||||
avg_fieldnorm: Score,
|
||||
mode: IndexRecordOption,
|
||||
fieldnorm_reader: Option<FieldNormReader>,
|
||||
) -> Self::PostingsSerializer {
|
||||
StandardPostingsSerializer::new(avg_fieldnorm, mode, fieldnorm_reader)
|
||||
}
|
||||
|
||||
fn load_postings(
|
||||
&self,
|
||||
doc_freq: u32,
|
||||
postings_data: RawPostingsData,
|
||||
postings_data: common::OwnedBytes,
|
||||
record_option: IndexRecordOption,
|
||||
requested_option: IndexRecordOption,
|
||||
positions_data_opt: Option<common::OwnedBytes>,
|
||||
) -> io::Result<Self::Postings> {
|
||||
load_postings_from_raw_data(doc_freq, postings_data)
|
||||
// Rationalize record_option/requested_option.
|
||||
let requested_option = requested_option.downgrade(record_option);
|
||||
let block_segment_postings =
|
||||
BlockSegmentPostings::open(doc_freq, postings_data, record_option, requested_option)?;
|
||||
let position_reader = positions_data_opt.map(PositionReader::open).transpose()?;
|
||||
Ok(SegmentPostings::from_block_postings(
|
||||
block_segment_postings,
|
||||
position_reader,
|
||||
))
|
||||
}
|
||||
|
||||
fn try_accelerated_for_each_pruning(
|
||||
@@ -51,7 +76,14 @@ impl PostingsCodec for StandardPostingsCodec {
|
||||
Err(scorer) => scorer,
|
||||
};
|
||||
let mut union_scorer =
|
||||
scorer.downcast::<BufferedUnionScorer<TermScorer<Self::Postings>, SumCombiner>>()?;
|
||||
scorer.downcast::<BufferedUnionScorer<Box<dyn Scorer>, SumCombiner>>()?;
|
||||
if !union_scorer
|
||||
.scorers()
|
||||
.iter()
|
||||
.all(|scorer| scorer.is::<TermScorer<Self::Postings>>())
|
||||
{
|
||||
return Err(union_scorer);
|
||||
}
|
||||
let doc = union_scorer.doc();
|
||||
if doc == TERMINATED {
|
||||
return Ok(());
|
||||
@@ -60,69 +92,31 @@ impl PostingsCodec for StandardPostingsCodec {
|
||||
if score > threshold {
|
||||
threshold = callback(doc, score);
|
||||
}
|
||||
let scorers: Vec<TermScorer<Self::Postings>> = union_scorer.into_scorers();
|
||||
let boxed_scorers: Vec<Box<dyn Scorer>> = union_scorer.into_scorers();
|
||||
let scorers: Vec<TermScorer<Self::Postings>> = boxed_scorers
|
||||
.into_iter()
|
||||
.map(|scorer| {
|
||||
*scorer.downcast::<TermScorer<Self::Postings>>().ok().expect(
|
||||
"Downcast failed despite the fact we already checked the type was correct",
|
||||
)
|
||||
})
|
||||
.collect();
|
||||
block_wand(scorers, threshold, callback);
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
pub(crate) fn load_postings_from_raw_data(
|
||||
doc_freq: u32,
|
||||
postings_data: RawPostingsData,
|
||||
) -> io::Result<SegmentPostings> {
|
||||
let RawPostingsData {
|
||||
postings_data,
|
||||
positions_data: positions_data_opt,
|
||||
record_option,
|
||||
effective_option,
|
||||
} = postings_data;
|
||||
let requested_option = effective_option;
|
||||
let block_segment_postings =
|
||||
BlockSegmentPostings::open(doc_freq, postings_data, record_option, requested_option)?;
|
||||
let position_reader = positions_data_opt.map(PositionReader::open).transpose()?;
|
||||
Ok(SegmentPostings::from_block_postings(
|
||||
block_segment_postings,
|
||||
position_reader,
|
||||
))
|
||||
}
|
||||
|
||||
pub(crate) fn fill_bitset_from_raw_data(
|
||||
doc_freq: u32,
|
||||
postings_data: RawPostingsData,
|
||||
doc_bitset: &mut BitSet,
|
||||
) -> io::Result<()> {
|
||||
let RawPostingsData {
|
||||
postings_data,
|
||||
record_option,
|
||||
effective_option,
|
||||
..
|
||||
} = postings_data;
|
||||
let mut block_postings =
|
||||
BlockSegmentPostings::open(doc_freq, postings_data, record_option, effective_option)?;
|
||||
loop {
|
||||
let docs = block_postings.docs();
|
||||
if docs.is_empty() {
|
||||
break;
|
||||
}
|
||||
for &doc in docs {
|
||||
doc_bitset.insert(doc);
|
||||
}
|
||||
block_postings.advance();
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use common::OwnedBytes;
|
||||
|
||||
use super::*;
|
||||
use crate::postings::serializer::PostingsSerializer;
|
||||
use crate::codec::postings::PostingsSerializer as _;
|
||||
use crate::postings::Postings as _;
|
||||
use crate::schema::IndexRecordOption;
|
||||
|
||||
fn test_segment_postings_tf_aux(num_docs: u32, include_term_freq: bool) -> SegmentPostings {
|
||||
let mut postings_serializer =
|
||||
PostingsSerializer::new(1.0f32, IndexRecordOption::WithFreqs, None);
|
||||
StandardPostingsCodec.new_serializer(1.0f32, IndexRecordOption::WithFreqs, None);
|
||||
let mut buffer = Vec::new();
|
||||
postings_serializer.new_term(num_docs, include_term_freq);
|
||||
for i in 0..num_docs {
|
||||
@@ -131,16 +125,15 @@ mod tests {
|
||||
postings_serializer
|
||||
.close_term(num_docs, &mut buffer)
|
||||
.unwrap();
|
||||
load_postings_from_raw_data(
|
||||
num_docs,
|
||||
RawPostingsData {
|
||||
postings_data: OwnedBytes::new(buffer),
|
||||
positions_data: None,
|
||||
record_option: IndexRecordOption::WithFreqs,
|
||||
effective_option: IndexRecordOption::WithFreqs,
|
||||
},
|
||||
)
|
||||
.unwrap()
|
||||
StandardPostingsCodec
|
||||
.load_postings(
|
||||
num_docs,
|
||||
OwnedBytes::new(buffer),
|
||||
IndexRecordOption::WithFreqs,
|
||||
IndexRecordOption::WithFreqs,
|
||||
None,
|
||||
)
|
||||
.unwrap()
|
||||
}
|
||||
|
||||
#[test]
|
||||
|
||||
@@ -47,10 +47,14 @@ impl SegmentPostings {
|
||||
use crate::schema::IndexRecordOption;
|
||||
let mut buffer = Vec::new();
|
||||
{
|
||||
use crate::postings::serializer::PostingsSerializer;
|
||||
use crate::codec::postings::PostingsSerializer;
|
||||
|
||||
let mut postings_serializer =
|
||||
PostingsSerializer::new(0.0, IndexRecordOption::Basic, None);
|
||||
crate::codec::standard::postings::StandardPostingsSerializer::new(
|
||||
0.0,
|
||||
IndexRecordOption::Basic,
|
||||
None,
|
||||
);
|
||||
postings_serializer.new_term(docs.len() as u32, false);
|
||||
for &doc in docs {
|
||||
postings_serializer.write_doc(doc, 1u32);
|
||||
@@ -77,8 +81,9 @@ impl SegmentPostings {
|
||||
) -> SegmentPostings {
|
||||
use common::OwnedBytes;
|
||||
|
||||
use crate::codec::postings::PostingsSerializer as _;
|
||||
use crate::codec::standard::postings::StandardPostingsSerializer;
|
||||
use crate::fieldnorm::FieldNormReader;
|
||||
use crate::postings::serializer::PostingsSerializer;
|
||||
use crate::schema::IndexRecordOption;
|
||||
use crate::Score;
|
||||
let mut buffer: Vec<u8> = Vec::new();
|
||||
@@ -95,7 +100,7 @@ impl SegmentPostings {
|
||||
total_num_tokens as Score / fieldnorms.len() as Score
|
||||
})
|
||||
.unwrap_or(0.0);
|
||||
let mut postings_serializer = PostingsSerializer::new(
|
||||
let mut postings_serializer = StandardPostingsSerializer::new(
|
||||
average_field_norm,
|
||||
IndexRecordOption::WithFreqs,
|
||||
fieldnorm_reader,
|
||||
@@ -264,7 +269,6 @@ impl Postings for SegmentPostings {
|
||||
}
|
||||
|
||||
impl PostingsWithBlockMax for SegmentPostings {
|
||||
#[inline]
|
||||
fn seek_block_max(
|
||||
&mut self,
|
||||
target_doc: crate::DocId,
|
||||
@@ -276,7 +280,6 @@ impl PostingsWithBlockMax for SegmentPostings {
|
||||
.block_max_score(fieldnorm_reader, similarity_weight)
|
||||
}
|
||||
|
||||
#[inline]
|
||||
fn last_doc_in_block(&self) -> crate::DocId {
|
||||
self.block_cursor.skip_reader().last_doc_in_block()
|
||||
}
|
||||
|
||||
184 src/codec/standard/postings/standard_postings_serializer.rs Normal file
@@ -0,0 +1,184 @@
|
||||
use std::cmp::Ordering;
|
||||
use std::io::{self, Write as _};
|
||||
|
||||
use common::{BinarySerializable as _, VInt};
|
||||
|
||||
use crate::codec::postings::PostingsSerializer;
|
||||
use crate::codec::standard::postings::block::Block;
|
||||
use crate::codec::standard::postings::skip::SkipSerializer;
|
||||
use crate::fieldnorm::FieldNormReader;
|
||||
use crate::postings::compression::{BlockEncoder, VIntEncoder as _, COMPRESSION_BLOCK_SIZE};
|
||||
use crate::query::Bm25Weight;
|
||||
use crate::schema::IndexRecordOption;
|
||||
use crate::{DocId, Score};
|
||||
|
||||
/// Serializer object for tantivy's default postings format.
|
||||
pub struct StandardPostingsSerializer {
|
||||
last_doc_id_encoded: u32,
|
||||
|
||||
block_encoder: BlockEncoder,
|
||||
block: Box<Block>,
|
||||
|
||||
postings_write: Vec<u8>,
|
||||
skip_write: SkipSerializer,
|
||||
|
||||
mode: IndexRecordOption,
|
||||
fieldnorm_reader: Option<FieldNormReader>,
|
||||
|
||||
bm25_weight: Option<Bm25Weight>,
|
||||
avg_fieldnorm: Score, /* Average number of term in the field for that segment.
|
||||
* this value is used to compute the block wand information. */
|
||||
term_has_freq: bool,
|
||||
}
|
||||
|
||||
impl StandardPostingsSerializer {
|
||||
pub(crate) fn new(
|
||||
avg_fieldnorm: Score,
|
||||
mode: IndexRecordOption,
|
||||
fieldnorm_reader: Option<FieldNormReader>,
|
||||
) -> StandardPostingsSerializer {
|
||||
Self {
|
||||
last_doc_id_encoded: 0,
|
||||
block_encoder: BlockEncoder::new(),
|
||||
block: Box::new(Block::new()),
|
||||
postings_write: Vec::new(),
|
||||
skip_write: SkipSerializer::new(),
|
||||
mode,
|
||||
fieldnorm_reader,
|
||||
bm25_weight: None,
|
||||
avg_fieldnorm,
|
||||
term_has_freq: false,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl PostingsSerializer for StandardPostingsSerializer {
|
||||
fn new_term(&mut self, term_doc_freq: u32, record_term_freq: bool) {
|
||||
self.clear();
|
||||
|
||||
self.term_has_freq = self.mode.has_freq() && record_term_freq;
|
||||
if !self.term_has_freq {
|
||||
return;
|
||||
}
|
||||
|
||||
let num_docs_in_segment: u64 =
|
||||
if let Some(fieldnorm_reader) = self.fieldnorm_reader.as_ref() {
|
||||
fieldnorm_reader.num_docs() as u64
|
||||
} else {
|
||||
return;
|
||||
};
|
||||
|
||||
if num_docs_in_segment == 0 {
|
||||
return;
|
||||
}
|
||||
|
||||
self.bm25_weight = Some(Bm25Weight::for_one_term_without_explain(
|
||||
term_doc_freq as u64,
|
||||
num_docs_in_segment,
|
||||
self.avg_fieldnorm,
|
||||
));
|
||||
}
|
||||
|
||||
fn write_doc(&mut self, doc_id: DocId, term_freq: u32) {
|
||||
self.block.append_doc(doc_id, term_freq);
|
||||
if self.block.is_full() {
|
||||
self.write_block();
|
||||
}
|
||||
}
|
||||
|
||||
fn close_term(&mut self, doc_freq: u32, output_write: &mut impl io::Write) -> io::Result<()> {
|
||||
if !self.block.is_empty() {
|
||||
// we have doc ids waiting to be written
|
||||
// this happens when the number of doc ids is
|
||||
// not a perfect multiple of our block size.
|
||||
//
|
||||
// In that case, the remaining part is encoded
|
||||
// using variable int encoding.
|
||||
{
|
||||
let block_encoded = self
|
||||
.block_encoder
|
||||
.compress_vint_sorted(self.block.doc_ids(), self.last_doc_id_encoded);
|
||||
self.postings_write.write_all(block_encoded)?;
|
||||
}
|
||||
// ... Idem for term frequencies
|
||||
if self.term_has_freq {
|
||||
let block_encoded = self
|
||||
.block_encoder
|
||||
.compress_vint_unsorted(self.block.term_freqs());
|
||||
self.postings_write.write_all(block_encoded)?;
|
||||
}
|
||||
self.block.clear();
|
||||
}
|
||||
if doc_freq >= COMPRESSION_BLOCK_SIZE as u32 {
|
||||
let skip_data = self.skip_write.data();
|
||||
VInt(skip_data.len() as u64).serialize(output_write)?;
|
||||
output_write.write_all(skip_data)?;
|
||||
}
|
||||
output_write.write_all(&self.postings_write[..])?;
|
||||
self.skip_write.clear();
|
||||
self.postings_write.clear();
|
||||
self.bm25_weight = None;
|
||||
Ok(())
|
||||
}
|
||||
}

impl StandardPostingsSerializer {
    fn clear(&mut self) {
        self.bm25_weight = None;
        self.block.clear();
        self.last_doc_id_encoded = 0;
    }

    fn write_block(&mut self) {
        {
            // encode the doc ids
            let (num_bits, block_encoded): (u8, &[u8]) = self
                .block_encoder
                .compress_block_sorted(self.block.doc_ids(), self.last_doc_id_encoded);
            self.last_doc_id_encoded = self.block.last_doc();
            self.skip_write
                .write_doc(self.last_doc_id_encoded, num_bits);
            // last el block 0, offset block 1,
            self.postings_write.extend(block_encoded);
        }
        if self.term_has_freq {
            let (num_bits, block_encoded): (u8, &[u8]) = self
                .block_encoder
                .compress_block_unsorted(self.block.term_freqs(), true);
            self.postings_write.extend(block_encoded);
            self.skip_write.write_term_freq(num_bits);
            if self.mode.has_positions() {
                // We serialize the sum of term freqs within the skip information
                // in order to navigate through positions.
                let sum_freq = self.block.term_freqs().iter().cloned().sum();
                self.skip_write.write_total_term_freq(sum_freq);
            }
            let mut blockwand_params = (0u8, 0u32);
            if let Some(bm25_weight) = self.bm25_weight.as_ref() {
                if let Some(fieldnorm_reader) = self.fieldnorm_reader.as_ref() {
                    let docs = self.block.doc_ids().iter().cloned();
                    let term_freqs = self.block.term_freqs().iter().cloned();
                    let fieldnorms = docs.map(|doc| fieldnorm_reader.fieldnorm_id(doc));
                    blockwand_params = fieldnorms
                        .zip(term_freqs)
                        .max_by(
                            |(left_fieldnorm_id, left_term_freq),
                             (right_fieldnorm_id, right_term_freq)| {
                                let left_score =
                                    bm25_weight.tf_factor(*left_fieldnorm_id, *left_term_freq);
                                let right_score =
                                    bm25_weight.tf_factor(*right_fieldnorm_id, *right_term_freq);
                                left_score
                                    .partial_cmp(&right_score)
                                    .unwrap_or(Ordering::Equal)
                            },
                        )
                        .unwrap();
                }
            }
            let (fieldnorm_id, term_freq) = blockwand_params;
            self.skip_write.write_blockwand_max(fieldnorm_id, term_freq);
        }
        self.block.clear();
    }
}
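`write_block` also records the Block-Max WAND parameters: the (fieldnorm_id, term_freq) pair within the block that maximizes the BM25 term-frequency factor, so a query can upper-bound the block's score and skip it entirely. A standalone sketch of that selection, with `tf_factor` passed in as a closure instead of living on `Bm25Weight`:

```rust
/// For one block of postings, keep the (fieldnorm, term_freq) pair that
/// maximizes the given tf factor. This is the value stored by
/// `write_blockwand_max` above, sketched without tantivy's types.
fn blockwand_max(
    fieldnorm_ids: &[u8],
    term_freqs: &[u32],
    tf_factor: impl Fn(u8, u32) -> f32,
) -> (u8, u32) {
    fieldnorm_ids
        .iter()
        .copied()
        .zip(term_freqs.iter().copied())
        .max_by(|&(lf, ltf), &(rf, rtf)| {
            tf_factor(lf, ltf)
                .partial_cmp(&tf_factor(rf, rtf))
                .unwrap_or(std::cmp::Ordering::Equal)
        })
        .unwrap_or((0u8, 0u32))
}

fn main() {
    // A toy tf factor: higher term frequency and shorter fields score higher.
    let tf = |fieldnorm_id: u8, term_freq: u32| term_freq as f32 / (1.0 + fieldnorm_id as f32);
    let best = blockwand_max(&[3, 1, 7], &[2, 5, 4], tf);
    println!("block-max params: {best:?}");
}
```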
|
||||
@@ -43,7 +43,7 @@ impl Collector for Count {
    fn for_segment(
        &self,
        _: SegmentOrdinal,
        _: &dyn SegmentReader,
        _: &SegmentReader,
    ) -> crate::Result<SegmentCountCollector> {
        Ok(SegmentCountCollector::default())
    }
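For context, the `Count` collector touched above is typically driven through `Searcher::search`: each segment produces a partial count via `for_segment`, and the fruits are summed. A hedged usage sketch against the usual public tantivy API (schema and memory budget are illustrative):

```rust
use tantivy::collector::Count;
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, TEXT};
use tantivy::{doc, Index};

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT);
    let schema = schema_builder.build();

    let index = Index::create_in_ram(schema);
    let mut writer = index.writer(50_000_000)?;
    writer.add_document(doc!(title => "of mice and men"))?;
    writer.add_document(doc!(title => "frankenstein"))?;
    writer.commit()?;

    let searcher = index.reader()?.searcher();
    let query = QueryParser::for_index(&index, vec![title]).parse_query("mice")?;

    // `Count` implements `Collector`; each segment contributes a partial count
    // through `for_segment`, and the per-segment fruits are summed into a usize.
    let count: usize = searcher.search(&query, &Count)?;
    assert_eq!(count, 1);
    Ok(())
}
```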

@@ -1,7 +1,7 @@
use std::collections::HashSet;

use super::{Collector, SegmentCollector};
use crate::{DocAddress, DocId, Score, SegmentReader};
use crate::{DocAddress, DocId, Score};

/// Collectors that returns the set of DocAddress that matches the query.
///
@@ -15,7 +15,7 @@ impl Collector for DocSetCollector {
    fn for_segment(
        &self,
        segment_local_id: crate::SegmentOrdinal,
        _segment: &dyn SegmentReader,
        _segment: &crate::SegmentReader,
    ) -> crate::Result<Self::Child> {
        Ok(DocSetChildCollector {
            segment_local_id,

@@ -265,7 +265,7 @@ impl Collector for FacetCollector {
    fn for_segment(
        &self,
        _: SegmentOrdinal,
        reader: &dyn SegmentReader,
        reader: &SegmentReader,
    ) -> crate::Result<FacetSegmentCollector> {
        let facet_reader = reader.facet_reader(&self.field_name)?;
        let facet_dict = facet_reader.facet_dict();

@@ -113,7 +113,7 @@ where
    fn for_segment(
        &self,
        segment_local_id: u32,
        segment_reader: &dyn SegmentReader,
        segment_reader: &SegmentReader,
    ) -> crate::Result<Self::Child> {
        let column_opt = segment_reader.fast_fields().column_opt(&self.field)?;

@@ -287,7 +287,7 @@ where
    fn for_segment(
        &self,
        segment_local_id: u32,
        segment_reader: &dyn SegmentReader,
        segment_reader: &SegmentReader,
    ) -> crate::Result<Self::Child> {
        let column_opt = segment_reader.fast_fields().bytes(&self.field)?;


@@ -6,7 +6,7 @@ use fastdivide::DividerU64;
use crate::collector::{Collector, SegmentCollector};
use crate::fastfield::{FastFieldNotAvailableError, FastValue};
use crate::schema::Type;
use crate::{DocId, Score, SegmentReader};
use crate::{DocId, Score};

/// Histogram builds an histogram of the values of a fastfield for the
/// collected DocSet.
@@ -110,7 +110,7 @@ impl Collector for HistogramCollector {
    fn for_segment(
        &self,
        _segment_local_id: crate::SegmentOrdinal,
        segment: &dyn SegmentReader,
        segment: &crate::SegmentReader,
    ) -> crate::Result<Self::Child> {
        let column_opt = segment.fast_fields().u64_lenient(&self.field)?;
        let (column, _column_type) = column_opt.ok_or_else(|| FastFieldNotAvailableError {

@@ -156,7 +156,7 @@ pub trait Collector: Sync + Send {
    fn for_segment(
        &self,
        segment_local_id: SegmentOrdinal,
        segment: &dyn SegmentReader,
        segment: &SegmentReader,
    ) -> crate::Result<Self::Child>;

    /// Returns true iff the collector requires to compute scores for documents.
@@ -174,7 +174,7 @@ pub trait Collector: Sync + Send {
        &self,
        weight: &dyn Weight,
        segment_ord: u32,
        reader: &dyn SegmentReader,
        reader: &SegmentReader,
    ) -> crate::Result<<Self::Child as SegmentCollector>::Fruit> {
        let with_scoring = self.requires_scoring();
        let mut segment_collector = self.for_segment(segment_ord, reader)?;
@@ -186,7 +186,7 @@ pub trait Collector: Sync + Send {
pub(crate) fn default_collect_segment_impl<TSegmentCollector: SegmentCollector>(
    segment_collector: &mut TSegmentCollector,
    weight: &dyn Weight,
    reader: &dyn SegmentReader,
    reader: &SegmentReader,
    with_scoring: bool,
) -> crate::Result<()> {
    match (reader.alive_bitset(), with_scoring) {
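The hunks above switch `Collector::for_segment` from `&dyn SegmentReader` to a concrete `&SegmentReader`. A minimal sketch of a custom collector written against that signature, following tantivy's documented `Collector`/`SegmentCollector` contract (struct names are illustrative):

```rust
use tantivy::collector::{Collector, SegmentCollector};
use tantivy::{DocId, Score, SegmentOrdinal, SegmentReader};

#[derive(Default)]
struct DocCounter;

#[derive(Default)]
struct SegmentDocCounter {
    count: usize,
}

impl SegmentCollector for SegmentDocCounter {
    type Fruit = usize;

    fn collect(&mut self, _doc: DocId, _score: Score) {
        self.count += 1;
    }

    fn harvest(self) -> usize {
        self.count
    }
}

impl Collector for DocCounter {
    type Fruit = usize;
    type Child = SegmentDocCounter;

    fn for_segment(
        &self,
        _segment_local_id: SegmentOrdinal,
        _segment: &SegmentReader,
    ) -> tantivy::Result<SegmentDocCounter> {
        Ok(SegmentDocCounter::default())
    }

    fn requires_scoring(&self) -> bool {
        false
    }

    fn merge_fruits(&self, per_segment: Vec<usize>) -> tantivy::Result<usize> {
        Ok(per_segment.into_iter().sum())
    }
}
```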
|
||||
@@ -255,7 +255,7 @@ impl<TCollector: Collector> Collector for Option<TCollector> {
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_local_id: SegmentOrdinal,
|
||||
segment: &dyn SegmentReader,
|
||||
segment: &SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
Ok(if let Some(inner) = self {
|
||||
let inner_segment_collector = inner.for_segment(segment_local_id, segment)?;
|
||||
@@ -336,7 +336,7 @@ where
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_local_id: u32,
|
||||
segment: &dyn SegmentReader,
|
||||
segment: &SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
let left = self.0.for_segment(segment_local_id, segment)?;
|
||||
let right = self.1.for_segment(segment_local_id, segment)?;
|
||||
@@ -407,7 +407,7 @@ where
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_local_id: u32,
|
||||
segment: &dyn SegmentReader,
|
||||
segment: &SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
let one = self.0.for_segment(segment_local_id, segment)?;
|
||||
let two = self.1.for_segment(segment_local_id, segment)?;
|
||||
@@ -487,7 +487,7 @@ where
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_local_id: u32,
|
||||
segment: &dyn SegmentReader,
|
||||
segment: &SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
let one = self.0.for_segment(segment_local_id, segment)?;
|
||||
let two = self.1.for_segment(segment_local_id, segment)?;
|
||||
|
||||
@@ -24,7 +24,7 @@ impl<TCollector: Collector> Collector for CollectorWrapper<TCollector> {
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_local_id: u32,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
) -> crate::Result<Box<dyn BoxableSegmentCollector>> {
|
||||
let child = self.0.for_segment(segment_local_id, reader)?;
|
||||
Ok(Box::new(SegmentCollectorWrapper(child)))
|
||||
@@ -209,7 +209,7 @@ impl Collector for MultiCollector<'_> {
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_local_id: SegmentOrdinal,
|
||||
segment: &dyn SegmentReader,
|
||||
segment: &SegmentReader,
|
||||
) -> crate::Result<MultiCollectorChild> {
|
||||
let children = self
|
||||
.collector_wrappers
|
||||
|
||||
@@ -5,7 +5,7 @@ use serde::{Deserialize, Serialize};
|
||||
|
||||
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer};
|
||||
use crate::schema::{OwnedValue, Schema};
|
||||
use crate::{DocId, Order, Score, SegmentReader};
|
||||
use crate::{DocId, Order, Score};
|
||||
|
||||
fn compare_owned_value<const NULLS_FIRST: bool>(lhs: &OwnedValue, rhs: &OwnedValue) -> Ordering {
|
||||
match (lhs, rhs) {
|
||||
@@ -430,7 +430,7 @@ where
|
||||
|
||||
fn segment_sort_key_computer(
|
||||
&self,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &crate::SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
let child = self.0.segment_sort_key_computer(segment_reader)?;
|
||||
Ok(SegmentSortKeyComputerWithComparator {
|
||||
@@ -468,7 +468,7 @@ where
|
||||
|
||||
fn segment_sort_key_computer(
|
||||
&self,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &crate::SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
let child = self.0.segment_sort_key_computer(segment_reader)?;
|
||||
Ok(SegmentSortKeyComputerWithComparator {
|
||||
|
||||
@@ -6,7 +6,7 @@ use crate::collector::sort_key::{
|
||||
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer};
|
||||
use crate::fastfield::FastFieldNotAvailableError;
|
||||
use crate::schema::OwnedValue;
|
||||
use crate::{DateTime, DocId, Score, SegmentReader};
|
||||
use crate::{DateTime, DocId, Score};
|
||||
|
||||
/// Sort by the boxed / OwnedValue representation of either a fast field, or of the score.
|
||||
///
|
||||
@@ -86,7 +86,7 @@ impl SortKeyComputer for SortByErasedType {
|
||||
|
||||
fn segment_sort_key_computer(
|
||||
&self,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &crate::SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
let inner: Box<dyn ErasedSegmentSortKeyComputer> = match self {
|
||||
Self::Field(column_name) => {
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
use crate::collector::sort_key::NaturalComparator;
|
||||
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer, TopNComputer};
|
||||
use crate::{DocAddress, DocId, Score, SegmentReader};
|
||||
use crate::{DocAddress, DocId, Score};
|
||||
|
||||
/// Sort by similarity score.
|
||||
#[derive(Clone, Debug, Copy)]
|
||||
@@ -19,7 +19,7 @@ impl SortKeyComputer for SortBySimilarityScore {
|
||||
|
||||
fn segment_sort_key_computer(
|
||||
&self,
|
||||
_segment_reader: &dyn SegmentReader,
|
||||
_segment_reader: &crate::SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
Ok(SortBySimilarityScore)
|
||||
}
|
||||
@@ -29,7 +29,7 @@ impl SortKeyComputer for SortBySimilarityScore {
|
||||
&self,
|
||||
k: usize,
|
||||
weight: &dyn crate::query::Weight,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &crate::SegmentReader,
|
||||
segment_ord: u32,
|
||||
) -> crate::Result<Vec<(Self::SortKey, DocAddress)>> {
|
||||
let mut top_n: TopNComputer<Score, DocId, Self::Comparator> =
|
||||
|
||||
@@ -61,7 +61,7 @@ impl<T: FastValue> SortKeyComputer for SortByStaticFastValue<T> {
|
||||
|
||||
fn segment_sort_key_computer(
|
||||
&self,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
let sort_column_opt = segment_reader.fast_fields().u64_lenient(&self.field)?;
|
||||
let (sort_column, _sort_column_type) =
|
||||
|
||||
@@ -3,7 +3,7 @@ use columnar::StrColumn;
|
||||
use crate::collector::sort_key::NaturalComparator;
|
||||
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer};
|
||||
use crate::termdict::TermOrdinal;
|
||||
use crate::{DocId, Score, SegmentReader};
|
||||
use crate::{DocId, Score};
|
||||
|
||||
/// Sort by the first value of a string column.
|
||||
///
|
||||
@@ -35,7 +35,7 @@ impl SortKeyComputer for SortByString {
|
||||
|
||||
fn segment_sort_key_computer(
|
||||
&self,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &crate::SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
let str_column_opt = segment_reader.fast_fields().str(&self.column_name)?;
|
||||
Ok(ByStringColumnSegmentSortKeyComputer { str_column_opt })
|
||||
|
||||
@@ -119,7 +119,7 @@ pub trait SortKeyComputer: Sync {
|
||||
&self,
|
||||
k: usize,
|
||||
weight: &dyn crate::query::Weight,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &crate::SegmentReader,
|
||||
segment_ord: u32,
|
||||
) -> crate::Result<Vec<(Self::SortKey, DocAddress)>> {
|
||||
let with_scoring = self.requires_scoring();
|
||||
@@ -135,7 +135,7 @@ pub trait SortKeyComputer: Sync {
|
||||
}
|
||||
|
||||
/// Builds a child sort key computer for a specific segment.
|
||||
fn segment_sort_key_computer(&self, segment_reader: &dyn SegmentReader) -> Result<Self::Child>;
|
||||
fn segment_sort_key_computer(&self, segment_reader: &SegmentReader) -> Result<Self::Child>;
|
||||
}
|
||||
|
||||
impl<HeadSortKeyComputer, TailSortKeyComputer> SortKeyComputer
|
||||
@@ -156,7 +156,7 @@ where
|
||||
(self.0.comparator(), self.1.comparator())
|
||||
}
|
||||
|
||||
fn segment_sort_key_computer(&self, segment_reader: &dyn SegmentReader) -> Result<Self::Child> {
|
||||
fn segment_sort_key_computer(&self, segment_reader: &SegmentReader) -> Result<Self::Child> {
|
||||
Ok((
|
||||
self.0.segment_sort_key_computer(segment_reader)?,
|
||||
self.1.segment_sort_key_computer(segment_reader)?,
|
||||
@@ -357,7 +357,7 @@ where
|
||||
)
|
||||
}
|
||||
|
||||
fn segment_sort_key_computer(&self, segment_reader: &dyn SegmentReader) -> Result<Self::Child> {
|
||||
fn segment_sort_key_computer(&self, segment_reader: &SegmentReader) -> Result<Self::Child> {
|
||||
let sort_key_computer1 = self.0.segment_sort_key_computer(segment_reader)?;
|
||||
let sort_key_computer2 = self.1.segment_sort_key_computer(segment_reader)?;
|
||||
let sort_key_computer3 = self.2.segment_sort_key_computer(segment_reader)?;
|
||||
@@ -420,7 +420,7 @@ where
|
||||
SortKeyComputer4::Comparator,
|
||||
);
|
||||
|
||||
fn segment_sort_key_computer(&self, segment_reader: &dyn SegmentReader) -> Result<Self::Child> {
|
||||
fn segment_sort_key_computer(&self, segment_reader: &SegmentReader) -> Result<Self::Child> {
|
||||
let sort_key_computer1 = self.0.segment_sort_key_computer(segment_reader)?;
|
||||
let sort_key_computer2 = self.1.segment_sort_key_computer(segment_reader)?;
|
||||
let sort_key_computer3 = self.2.segment_sort_key_computer(segment_reader)?;
|
||||
@@ -454,7 +454,7 @@ where
|
||||
|
||||
impl<F, SegmentF, TSortKey> SortKeyComputer for F
|
||||
where
|
||||
F: 'static + Send + Sync + Fn(&dyn SegmentReader) -> SegmentF,
|
||||
F: 'static + Send + Sync + Fn(&SegmentReader) -> SegmentF,
|
||||
SegmentF: 'static + FnMut(DocId) -> TSortKey,
|
||||
TSortKey: 'static + PartialOrd + Clone + Send + Sync + std::fmt::Debug,
|
||||
{
|
||||
@@ -462,7 +462,7 @@ where
|
||||
type Child = SegmentF;
|
||||
type Comparator = NaturalComparator;
|
||||
|
||||
fn segment_sort_key_computer(&self, segment_reader: &dyn SegmentReader) -> Result<Self::Child> {
|
||||
fn segment_sort_key_computer(&self, segment_reader: &SegmentReader) -> Result<Self::Child> {
|
||||
Ok((self)(segment_reader))
|
||||
}
|
||||
}
|
||||
@@ -509,10 +509,10 @@ mod tests {
|
||||
|
||||
#[test]
|
||||
fn test_lazy_score_computer() {
|
||||
let score_computer_primary = |_segment_reader: &dyn SegmentReader| |_doc: DocId| 200u32;
|
||||
let score_computer_primary = |_segment_reader: &SegmentReader| |_doc: DocId| 200u32;
|
||||
let call_count = Arc::new(AtomicUsize::new(0));
|
||||
let call_count_clone = call_count.clone();
|
||||
let score_computer_secondary = move |_segment_reader: &dyn SegmentReader| {
|
||||
let score_computer_secondary = move |_segment_reader: &SegmentReader| {
|
||||
let call_count_new_clone = call_count_clone.clone();
|
||||
move |_doc: DocId| {
|
||||
call_count_new_clone.fetch_add(1, AtomicOrdering::SeqCst);
|
||||
@@ -572,10 +572,10 @@ mod tests {
|
||||
|
||||
#[test]
|
||||
fn test_lazy_score_computer_dynamic_ordering() {
|
||||
let score_computer_primary = |_segment_reader: &dyn SegmentReader| |_doc: DocId| 200u32;
|
||||
let score_computer_primary = |_segment_reader: &SegmentReader| |_doc: DocId| 200u32;
|
||||
let call_count = Arc::new(AtomicUsize::new(0));
|
||||
let call_count_clone = call_count.clone();
|
||||
let score_computer_secondary = move |_segment_reader: &dyn SegmentReader| {
|
||||
let score_computer_secondary = move |_segment_reader: &SegmentReader| {
|
||||
let call_count_new_clone = call_count_clone.clone();
|
||||
move |_doc: DocId| {
|
||||
call_count_new_clone.fetch_add(1, AtomicOrdering::SeqCst);
|
||||
|
||||
@@ -32,11 +32,7 @@ where TSortKeyComputer: SortKeyComputer + Send + Sync + 'static
|
||||
self.sort_key_computer.check_schema(schema)
|
||||
}
|
||||
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_ord: u32,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
) -> Result<Self::Child> {
|
||||
fn for_segment(&self, segment_ord: u32, segment_reader: &SegmentReader) -> Result<Self::Child> {
|
||||
let segment_sort_key_computer = self
|
||||
.sort_key_computer
|
||||
.segment_sort_key_computer(segment_reader)?;
|
||||
@@ -67,7 +63,7 @@ where TSortKeyComputer: SortKeyComputer + Send + Sync + 'static
|
||||
&self,
|
||||
weight: &dyn Weight,
|
||||
segment_ord: u32,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
) -> crate::Result<Vec<(TSortKeyComputer::SortKey, DocAddress)>> {
|
||||
let k = self.doc_range.end;
|
||||
let docs = self
|
||||
|
||||
@@ -5,7 +5,7 @@ use crate::query::{AllQuery, QueryParser};
|
||||
use crate::schema::{Schema, FAST, TEXT};
|
||||
use crate::time::format_description::well_known::Rfc3339;
|
||||
use crate::time::OffsetDateTime;
|
||||
use crate::{DateTime, DocAddress, Index, Searcher, SegmentReader, TantivyDocument};
|
||||
use crate::{DateTime, DocAddress, Index, Searcher, TantivyDocument};
|
||||
|
||||
pub const TEST_COLLECTOR_WITH_SCORE: TestCollector = TestCollector {
|
||||
compute_score: true,
|
||||
@@ -109,7 +109,7 @@ impl Collector for TestCollector {
|
||||
fn for_segment(
|
||||
&self,
|
||||
segment_id: SegmentOrdinal,
|
||||
_reader: &dyn SegmentReader,
|
||||
_reader: &SegmentReader,
|
||||
) -> crate::Result<TestSegmentCollector> {
|
||||
Ok(TestSegmentCollector {
|
||||
segment_id,
|
||||
@@ -180,7 +180,7 @@ impl Collector for FastFieldTestCollector {
|
||||
fn for_segment(
|
||||
&self,
|
||||
_: SegmentOrdinal,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &SegmentReader,
|
||||
) -> crate::Result<FastFieldSegmentCollector> {
|
||||
let reader = segment_reader
|
||||
.fast_fields()
|
||||
@@ -243,7 +243,7 @@ impl Collector for BytesFastFieldTestCollector {
|
||||
fn for_segment(
|
||||
&self,
|
||||
_segment_local_id: u32,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &SegmentReader,
|
||||
) -> crate::Result<BytesFastFieldSegmentCollector> {
|
||||
let column_opt = segment_reader.fast_fields().bytes(&self.field)?;
|
||||
Ok(BytesFastFieldSegmentCollector {
|
||||
|
||||
@@ -393,7 +393,7 @@ impl TopDocs {
|
||||
/// // This is where we build our collector with our custom score.
|
||||
/// let top_docs_by_custom_score = TopDocs
|
||||
/// ::with_limit(10)
|
||||
/// .tweak_score(move |segment_reader: &dyn SegmentReader| {
|
||||
/// .tweak_score(move |segment_reader: &SegmentReader| {
|
||||
/// // The argument is a function that returns our scoring
|
||||
/// // function.
|
||||
/// //
|
||||
@@ -442,7 +442,7 @@ pub struct TweakScoreFn<F>(F);
|
||||
|
||||
impl<F, TTweakScoreSortKeyFn, TSortKey> SortKeyComputer for TweakScoreFn<F>
|
||||
where
|
||||
F: 'static + Send + Sync + Fn(&dyn SegmentReader) -> TTweakScoreSortKeyFn,
|
||||
F: 'static + Send + Sync + Fn(&SegmentReader) -> TTweakScoreSortKeyFn,
|
||||
TTweakScoreSortKeyFn: 'static + Fn(DocId, Score) -> TSortKey,
|
||||
TweakScoreSegmentSortKeyComputer<TTweakScoreSortKeyFn>:
|
||||
SegmentSortKeyComputer<SortKey = TSortKey, SegmentSortKey = TSortKey>,
|
||||
@@ -458,7 +458,7 @@ where
|
||||
|
||||
fn segment_sort_key_computer(
|
||||
&self,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &SegmentReader,
|
||||
) -> crate::Result<Self::Child> {
|
||||
Ok({
|
||||
TweakScoreSegmentSortKeyComputer {
|
||||
@@ -1525,7 +1525,7 @@ mod tests {
|
||||
let text_query = query_parser.parse_query("droopy tax")?;
|
||||
let collector = TopDocs::with_limit(2)
|
||||
.and_offset(1)
|
||||
.order_by(move |_segment_reader: &dyn SegmentReader| move |doc: DocId| doc);
|
||||
.order_by(move |_segment_reader: &SegmentReader| move |doc: DocId| doc);
|
||||
let score_docs: Vec<(u32, DocAddress)> =
|
||||
index.reader()?.searcher().search(&text_query, &collector)?;
|
||||
assert_eq!(
|
||||
@@ -1543,7 +1543,7 @@ mod tests {
|
||||
let text_query = query_parser.parse_query("droopy tax").unwrap();
|
||||
let collector = TopDocs::with_limit(2)
|
||||
.and_offset(1)
|
||||
.order_by(move |_segment_reader: &dyn SegmentReader| move |doc: DocId| doc);
|
||||
.order_by(move |_segment_reader: &SegmentReader| move |doc: DocId| doc);
|
||||
let score_docs: Vec<(u32, DocAddress)> = index
|
||||
.reader()
|
||||
.unwrap()
|
||||
|
||||
@@ -36,7 +36,7 @@ pub struct SearcherGeneration {

impl SearcherGeneration {
    pub(crate) fn from_segment_readers(
        segment_readers: &[Arc<dyn SegmentReader>],
        segment_readers: &[SegmentReader],
        generation_id: u64,
    ) -> Self {
        let mut segment_id_to_del_opstamp = BTreeMap::new();
@@ -154,13 +154,13 @@ impl Searcher {
    }

    /// Return the list of segment readers
    pub fn segment_readers(&self) -> &[Arc<dyn SegmentReader>] {
    pub fn segment_readers(&self) -> &[SegmentReader] {
        &self.inner.segment_readers
    }

    /// Returns the segment_reader associated with the given segment_ord
    pub fn segment_reader(&self, segment_ord: u32) -> &dyn SegmentReader {
        self.inner.segment_readers[segment_ord as usize].as_ref()
    pub fn segment_reader(&self, segment_ord: u32) -> &SegmentReader {
        &self.inner.segment_readers[segment_ord as usize]
    }

    /// Runs a query on the segment readers wrapped by the searcher.
@@ -229,11 +229,7 @@ impl Searcher {
        let segment_readers = self.segment_readers();
        let fruits = executor.map(
            |(segment_ord, segment_reader)| {
                collector.collect_segment(
                    weight.as_ref(),
                    segment_ord as u32,
                    segment_reader.as_ref(),
                )
                collector.collect_segment(weight.as_ref(), segment_ord as u32, segment_reader)
            },
            segment_readers.iter().enumerate(),
        )?;
@@ -263,7 +259,7 @@ impl From<Arc<SearcherInner>> for Searcher {
pub(crate) struct SearcherInner {
    schema: Schema,
    index: Index,
    segment_readers: Vec<Arc<dyn SegmentReader>>,
    segment_readers: Vec<SegmentReader>,
    store_readers: Vec<StoreReader>,
    generation: TrackedObject<SearcherGeneration>,
}
@@ -273,7 +269,7 @@ impl SearcherInner {
    pub(crate) fn new(
        schema: Schema,
        index: Index,
        segment_readers: Vec<Arc<dyn SegmentReader>>,
        segment_readers: Vec<SegmentReader>,
        generation: TrackedObject<SearcherGeneration>,
        doc_store_cache_num_blocks: usize,
    ) -> io::Result<SearcherInner> {
@@ -305,7 +301,7 @@ impl fmt::Debug for Searcher {
        let segment_ids = self
            .segment_readers()
            .iter()
            .map(|segment_reader| segment_reader.segment_id())
            .map(SegmentReader::segment_id)
            .collect::<Vec<_>>();
        write!(f, "Searcher({segment_ids:?})")
    }
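With `segment_readers()` now returning `&[SegmentReader]` directly, iterating per-segment statistics needs no `Arc` indirection. A hedged sketch, assuming a populated `Index` and the usual per-segment accessors (`num_docs`, `max_doc`, `segment_id`):

```rust
use tantivy::Index;

fn print_segment_stats(index: &Index) -> tantivy::Result<()> {
    let searcher = index.reader()?.searcher();
    for (ord, segment_reader) in searcher.segment_readers().iter().enumerate() {
        // One line per segment: alive docs versus the total doc id space.
        println!(
            "segment {ord} ({:?}): {} alive docs out of {} total",
            segment_reader.segment_id(),
            segment_reader.num_docs(),
            segment_reader.max_doc(),
        );
    }
    Ok(())
}
```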

@@ -1,4 +1,3 @@
use std::borrow::BorrowMut;
use std::ops::{Deref as _, DerefMut as _};

use common::BitSet;
@@ -266,10 +265,8 @@ impl<TDocSet: DocSet + ?Sized> DocSet for Box<TDocSet> {
        self.deref_mut().seek(target)
    }

    #[inline]
    fn seek_danger(&mut self, target: DocId) -> SeekDangerResult {
        let unboxed: &mut TDocSet = self.borrow_mut();
        unboxed.seek_danger(target)
        self.deref_mut().seek_danger(target)
    }

    #[inline]
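The boxed forwarding above must preserve the ordinary `DocSet` iteration contract. A sketch of that contract using only the long-standing `DocSet` surface (`doc`, `advance`, `TERMINATED`); `seek_danger` is specific to this branch and is not exercised here:

```rust
use tantivy::{DocId, DocSet, TERMINATED};

fn collect_doc_ids<D: DocSet>(mut docset: D) -> Vec<DocId> {
    let mut docs = Vec::new();
    // A freshly created DocSet is already positioned on its first document
    // (or on TERMINATED if it is empty).
    let mut doc = docset.doc();
    while doc != TERMINATED {
        docs.push(doc);
        doc = docset.advance();
    }
    docs
}
```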

@@ -96,7 +96,7 @@ mod tests {
    };
    use crate::time::OffsetDateTime;
    use crate::tokenizer::{LowerCaser, RawTokenizer, TextAnalyzer, TokenizerManager};
    use crate::{Index, IndexWriter};
    use crate::{Index, IndexWriter, SegmentReader};

    pub static SCHEMA: Lazy<Schema> = Lazy::new(|| {
        let mut schema_builder = Schema::builder();
@@ -430,7 +430,7 @@ mod tests {
            .searcher()
            .segment_readers()
            .iter()
            .map(|segment_reader| segment_reader.segment_id())
            .map(SegmentReader::segment_id)
            .collect();
        assert_eq!(segment_ids.len(), 2);
        index_writer.merge(&segment_ids[..]).wait().unwrap();

@@ -26,6 +26,7 @@ use crate::reader::{IndexReader, IndexReaderBuilder};
use crate::schema::document::Document;
use crate::schema::{Field, FieldType, Schema};
use crate::tokenizer::{TextAnalyzer, TokenizerManager};
use crate::SegmentReader;

fn load_metas(
    directory: &dyn Directory,
@@ -578,15 +579,7 @@ impl<Codec: crate::codec::Codec> Index<Codec> {
        let segments = self.searchable_segments()?;
        let fields_metadata: Vec<Vec<FieldMetadata>> = segments
            .into_iter()
            .map(|segment| {
                let segment_reader = segment.index().codec().open_segment_reader(
                    segment.index().directory(),
                    segment.meta(),
                    segment.schema(),
                    None,
                )?;
                segment_reader.fields_metadata()
            })
            .map(|segment| SegmentReader::open(&segment)?.fields_metadata())
            .collect::<Result<_, _>>()?;
        Ok(merge_field_meta_data(fields_metadata))
    }
|
||||
|
||||
@@ -1,11 +1,8 @@
|
||||
#[cfg(feature = "quickwit")]
|
||||
use std::future::Future;
|
||||
use std::io;
|
||||
#[cfg(feature = "quickwit")]
|
||||
use std::pin::Pin;
|
||||
use std::sync::Arc;
|
||||
|
||||
use common::json_path_writer::JSON_END_OF_PATH;
|
||||
use common::{BinarySerializable, BitSet, ByteCount, OwnedBytes};
|
||||
use common::{BinarySerializable, ByteCount, OwnedBytes};
|
||||
#[cfg(feature = "quickwit")]
|
||||
use futures_util::{FutureExt, StreamExt, TryStreamExt};
|
||||
#[cfg(feature = "quickwit")]
|
||||
@@ -13,144 +10,43 @@ use itertools::Itertools;
|
||||
#[cfg(feature = "quickwit")]
|
||||
use tantivy_fst::automaton::{AlwaysMatch, Automaton};
|
||||
|
||||
use crate::codec::postings::RawPostingsData;
|
||||
use crate::codec::standard::postings::{
|
||||
fill_bitset_from_raw_data, load_postings_from_raw_data, SegmentPostings,
|
||||
};
|
||||
use crate::codec::postings::PostingsCodec;
|
||||
use crate::codec::{Codec, ObjectSafeCodec, StandardCodec};
|
||||
use crate::directory::FileSlice;
|
||||
use crate::docset::DocSet;
|
||||
use crate::fieldnorm::FieldNormReader;
|
||||
use crate::postings::{Postings, TermInfo};
|
||||
use crate::query::term_query::TermScorer;
|
||||
use crate::query::{box_scorer, Bm25Weight, PhraseScorer, Scorer};
|
||||
use crate::query::{Bm25Weight, PhraseScorer, Scorer};
|
||||
use crate::schema::{IndexRecordOption, Term, Type};
|
||||
use crate::termdict::TermDictionary;
|
||||
|
||||
/// Trait defining the contract for inverted index readers.
|
||||
pub trait InvertedIndexReader: Send + Sync {
|
||||
/// Returns the term info associated with the term.
|
||||
fn get_term_info(&self, term: &Term) -> io::Result<Option<TermInfo>> {
|
||||
self.terms().get(term.serialized_value_bytes())
|
||||
}
|
||||
|
||||
/// Return the term dictionary datastructure.
|
||||
fn terms(&self) -> &TermDictionary;
|
||||
|
||||
/// Return the fields and types encoded in the dictionary in lexicographic order.
|
||||
/// Only valid on JSON fields.
|
||||
///
|
||||
/// Notice: This requires a full scan and therefore **very expensive**.
|
||||
fn list_encoded_json_fields(&self) -> io::Result<Vec<InvertedIndexFieldSpace>>;
|
||||
|
||||
/// Build a new term scorer.
|
||||
fn new_term_scorer(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
similarity_weight: Bm25Weight,
|
||||
) -> io::Result<Box<dyn Scorer>>;
|
||||
|
||||
/// Returns a posting object given a `term_info`.
|
||||
/// This method is for an advanced usage only.
|
||||
///
|
||||
/// Most users should prefer using [`Self::read_postings()`] instead.
|
||||
fn read_postings_from_terminfo(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
) -> io::Result<Box<dyn Postings>>;
|
||||
|
||||
/// Returns the raw postings bytes and metadata for a term.
|
||||
fn read_raw_postings_data(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
) -> io::Result<RawPostingsData>;
|
||||
|
||||
/// Fills a bitset with documents containing the term.
|
||||
///
|
||||
/// Implementers can override this to avoid boxing postings.
|
||||
fn fill_bitset_for_term(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
doc_bitset: &mut BitSet,
|
||||
) -> io::Result<()> {
|
||||
let mut postings = self.read_postings_from_terminfo(term_info, option)?;
|
||||
postings.fill_bitset(doc_bitset);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Builds a phrase scorer for the given term infos.
|
||||
fn new_phrase_scorer(
|
||||
&self,
|
||||
term_infos: &[(usize, TermInfo)],
|
||||
similarity_weight: Option<Bm25Weight>,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
slop: u32,
|
||||
) -> io::Result<Box<dyn Scorer>>;
|
||||
|
||||
/// Returns the total number of tokens recorded for all documents
|
||||
/// (including deleted documents).
|
||||
fn total_num_tokens(&self) -> u64;
|
||||
|
||||
/// Returns the segment postings associated with the term, and with the given option,
|
||||
/// or `None` if the term has never been encountered and indexed.
|
||||
fn read_postings(
|
||||
&self,
|
||||
term: &Term,
|
||||
option: IndexRecordOption,
|
||||
) -> io::Result<Option<Box<dyn Postings>>> {
|
||||
self.get_term_info(term)?
|
||||
.map(move |term_info| self.read_postings_from_terminfo(&term_info, option))
|
||||
.transpose()
|
||||
}
|
||||
|
||||
/// Returns the number of documents containing the term.
|
||||
fn doc_freq(&self, term: &Term) -> io::Result<u32>;
|
||||
|
||||
/// Returns the number of documents containing the term asynchronously.
|
||||
#[cfg(feature = "quickwit")]
|
||||
fn doc_freq_async<'a>(
|
||||
&'a self,
|
||||
term: &'a Term,
|
||||
) -> Pin<Box<dyn Future<Output = io::Result<u32>> + Send + 'a>>;
|
||||
}
|
||||
|
||||
/// Tantivy's default inverted index reader implementation.
|
||||
///
|
||||
/// The inverted index reader is in charge of accessing
|
||||
/// the inverted index associated with a specific field.
|
||||
///
|
||||
/// # Note
|
||||
///
|
||||
/// It is safe to delete the segment associated with
|
||||
/// an `InvertedIndexReader` implementation. As long as it is open,
|
||||
/// an `InvertedIndexReader`. As long as it is open,
|
||||
/// the [`FileSlice`] it is relying on should
|
||||
/// stay available.
|
||||
///
|
||||
/// `TantivyInvertedIndexReader` instances are created by calling
|
||||
/// `InvertedIndexReader` are created by calling
|
||||
/// [`SegmentReader::inverted_index()`](crate::SegmentReader::inverted_index).
|
||||
pub struct TantivyInvertedIndexReader {
|
||||
pub struct InvertedIndexReader {
|
||||
termdict: TermDictionary,
|
||||
postings_file_slice: FileSlice,
|
||||
positions_file_slice: FileSlice,
|
||||
record_option: IndexRecordOption,
|
||||
total_num_tokens: u64,
|
||||
codec: Arc<dyn ObjectSafeCodec>,
|
||||
}
|
||||
|
||||
/// Object that records the amount of space used by a field in an inverted index.
|
||||
pub struct InvertedIndexFieldSpace {
|
||||
/// Field name as encoded in the term dictionary.
|
||||
pub(crate) struct InvertedIndexFieldSpace {
|
||||
pub field_name: String,
|
||||
/// Value type for the encoded field.
|
||||
pub field_type: Type,
|
||||
/// Total bytes used by postings for this field.
|
||||
pub postings_size: ByteCount,
|
||||
/// Total bytes used by positions for this field.
|
||||
pub positions_size: ByteCount,
|
||||
/// Number of terms in the field.
|
||||
pub num_terms: u64,
|
||||
}
|
||||
|
||||
@@ -172,79 +68,55 @@ impl InvertedIndexFieldSpace {
|
||||
}
|
||||
}
|
||||
|
||||
impl TantivyInvertedIndexReader {
|
||||
pub(crate) fn read_raw_postings_data_inner(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
) -> io::Result<RawPostingsData> {
|
||||
let effective_option = option.downgrade(self.record_option);
|
||||
let postings_data = self
|
||||
.postings_file_slice
|
||||
.slice(term_info.postings_range.clone())
|
||||
.read_bytes()?;
|
||||
let positions_data: Option<OwnedBytes> = if effective_option.has_positions() {
|
||||
let positions_data = self
|
||||
.positions_file_slice
|
||||
.slice(term_info.positions_range.clone())
|
||||
.read_bytes()?;
|
||||
Some(positions_data)
|
||||
} else {
|
||||
None
|
||||
};
|
||||
Ok(RawPostingsData {
|
||||
postings_data,
|
||||
positions_data,
|
||||
record_option: self.record_option,
|
||||
effective_option,
|
||||
})
|
||||
}
|
||||
|
||||
impl InvertedIndexReader {
|
||||
pub(crate) fn new(
|
||||
termdict: TermDictionary,
|
||||
postings_file_slice: FileSlice,
|
||||
positions_file_slice: FileSlice,
|
||||
record_option: IndexRecordOption,
|
||||
) -> io::Result<TantivyInvertedIndexReader> {
|
||||
codec: Arc<dyn ObjectSafeCodec>,
|
||||
) -> io::Result<InvertedIndexReader> {
|
||||
let (total_num_tokens_slice, postings_body) = postings_file_slice.split(8);
|
||||
let total_num_tokens = u64::deserialize(&mut total_num_tokens_slice.read_bytes()?)?;
|
||||
Ok(TantivyInvertedIndexReader {
|
||||
Ok(InvertedIndexReader {
|
||||
termdict,
|
||||
postings_file_slice: postings_body,
|
||||
positions_file_slice,
|
||||
record_option,
|
||||
total_num_tokens,
|
||||
codec,
|
||||
})
|
||||
}
|
||||
|
||||
/// Creates an empty `TantivyInvertedIndexReader` object, which
|
||||
/// Creates an empty `InvertedIndexReader` object, which
|
||||
/// contains no terms at all.
|
||||
pub fn empty(record_option: IndexRecordOption) -> TantivyInvertedIndexReader {
|
||||
TantivyInvertedIndexReader {
|
||||
pub fn empty(record_option: IndexRecordOption) -> InvertedIndexReader {
|
||||
InvertedIndexReader {
|
||||
termdict: TermDictionary::empty(),
|
||||
postings_file_slice: FileSlice::empty(),
|
||||
positions_file_slice: FileSlice::empty(),
|
||||
record_option,
|
||||
total_num_tokens: 0u64,
|
||||
codec: Arc::new(StandardCodec),
|
||||
}
|
||||
}
|
||||
|
||||
fn load_segment_postings(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
) -> io::Result<SegmentPostings> {
|
||||
let postings_data = self.read_raw_postings_data_inner(term_info, option)?;
|
||||
load_postings_from_raw_data(term_info.doc_freq, postings_data)
|
||||
/// Returns the term info associated with the term.
|
||||
pub fn get_term_info(&self, term: &Term) -> io::Result<Option<TermInfo>> {
|
||||
self.termdict.get(term.serialized_value_bytes())
|
||||
}
|
||||
}
|
||||
|
||||
impl InvertedIndexReader for TantivyInvertedIndexReader {
|
||||
fn terms(&self) -> &TermDictionary {
|
||||
/// Return the term dictionary datastructure.
|
||||
pub fn terms(&self) -> &TermDictionary {
|
||||
&self.termdict
|
||||
}
|
||||
|
||||
fn list_encoded_json_fields(&self) -> io::Result<Vec<InvertedIndexFieldSpace>> {
|
||||
/// Return the fields and types encoded in the dictionary in lexicographic order.
|
||||
/// Only valid on JSON fields.
|
||||
///
|
||||
/// Notice: This requires a full scan and therefore **very expensive**.
|
||||
/// TODO: Move to sstable to use the index.
|
||||
pub(crate) fn list_encoded_json_fields(&self) -> io::Result<Vec<InvertedIndexFieldSpace>> {
|
||||
let mut stream = self.termdict.stream()?;
|
||||
let mut fields: Vec<InvertedIndexFieldSpace> = Vec::new();
|
||||
|
||||
@@ -297,73 +169,130 @@ impl InvertedIndexReader for TantivyInvertedIndexReader {
|
||||
Ok(fields)
|
||||
}
|
||||
|
||||
fn new_term_scorer(
|
||||
pub(crate) fn new_term_scorer_specialized<C: Codec>(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
similarity_weight: Bm25Weight,
|
||||
codec: &C,
|
||||
) -> io::Result<TermScorer<<<C as Codec>::PostingsCodec as PostingsCodec>::Postings>> {
|
||||
let postings = self.read_postings_from_terminfo_specialized(term_info, option, codec)?;
|
||||
let term_scorer = TermScorer::new(postings, fieldnorm_reader, similarity_weight);
|
||||
Ok(term_scorer)
|
||||
}
|
||||
|
||||
pub(crate) fn new_phrase_scorer_type_specialized<C: Codec>(
|
||||
&self,
|
||||
term_infos: &[(usize, TermInfo)],
|
||||
similarity_weight_opt: Option<Bm25Weight>,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
slop: u32,
|
||||
codec: &C,
|
||||
) -> io::Result<PhraseScorer<<<C as Codec>::PostingsCodec as PostingsCodec>::Postings>> {
|
||||
let mut offset_and_term_postings: Vec<(
|
||||
usize,
|
||||
<<C as Codec>::PostingsCodec as PostingsCodec>::Postings,
|
||||
)> = Vec::with_capacity(term_infos.len());
|
||||
for (offset, term_info) in term_infos {
|
||||
let postings = self.read_postings_from_terminfo_specialized(
|
||||
term_info,
|
||||
IndexRecordOption::WithFreqsAndPositions,
|
||||
codec,
|
||||
)?;
|
||||
offset_and_term_postings.push((*offset, postings));
|
||||
}
|
||||
let phrase_scorer = PhraseScorer::new(
|
||||
offset_and_term_postings,
|
||||
similarity_weight_opt,
|
||||
fieldnorm_reader,
|
||||
slop,
|
||||
);
|
||||
Ok(phrase_scorer)
|
||||
}
|
||||
|
||||
/// Build a new term scorer.
|
||||
pub fn new_term_scorer(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
similarity_weight: Bm25Weight,
|
||||
) -> io::Result<Box<dyn Scorer>> {
|
||||
let postings = self.load_segment_postings(term_info, option)?;
|
||||
let term_scorer = TermScorer::new(postings, fieldnorm_reader, similarity_weight);
|
||||
Ok(box_scorer(term_scorer))
|
||||
let term_scorer = self.codec.load_term_scorer_type_erased(
|
||||
term_info,
|
||||
option,
|
||||
self,
|
||||
fieldnorm_reader,
|
||||
similarity_weight,
|
||||
)?;
|
||||
Ok(term_scorer)
|
||||
}
|
||||
|
||||
fn read_postings_from_terminfo(
|
||||
/// Returns a postings object specific with a concrete type.
|
||||
///
|
||||
/// This requires you to provied the actual codec.
|
||||
pub fn read_postings_from_terminfo_specialized<C: Codec>(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
codec: &C,
|
||||
) -> io::Result<<<C as Codec>::PostingsCodec as PostingsCodec>::Postings> {
|
||||
let option = option.downgrade(self.record_option);
|
||||
let postings_data = self
|
||||
.postings_file_slice
|
||||
.slice(term_info.postings_range.clone())
|
||||
.read_bytes()?;
|
||||
let positions_data: Option<OwnedBytes> = if option.has_positions() {
|
||||
let positions_data = self
|
||||
.positions_file_slice
|
||||
.slice(term_info.positions_range.clone())
|
||||
.read_bytes()?;
|
||||
Some(positions_data)
|
||||
} else {
|
||||
None
|
||||
};
|
||||
let postings: <<C as Codec>::PostingsCodec as PostingsCodec>::Postings =
|
||||
codec.postings_codec().load_postings(
|
||||
term_info.doc_freq,
|
||||
postings_data,
|
||||
self.record_option,
|
||||
option,
|
||||
positions_data,
|
||||
)?;
|
||||
Ok(postings)
|
||||
}
|
||||
|
||||
/// Returns a posting object given a `term_info`.
|
||||
/// This method is for an advanced usage only.
|
||||
///
|
||||
/// Most users should prefer using [`Self::read_postings()`] instead.
|
||||
pub fn read_postings_from_terminfo(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
) -> io::Result<Box<dyn Postings>> {
|
||||
let postings = self.load_segment_postings(term_info, option)?;
|
||||
Ok(Box::new(postings))
|
||||
self.codec
|
||||
.load_postings_type_erased(term_info, option, self)
|
||||
}
|
||||
|
||||
fn read_raw_postings_data(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
) -> io::Result<RawPostingsData> {
|
||||
self.read_raw_postings_data_inner(term_info, option)
|
||||
}
|
||||
|
||||
fn fill_bitset_for_term(
|
||||
&self,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
doc_bitset: &mut BitSet,
|
||||
) -> io::Result<()> {
|
||||
let postings_data = self.read_raw_postings_data_inner(term_info, option)?;
|
||||
fill_bitset_from_raw_data(term_info.doc_freq, postings_data, doc_bitset)
|
||||
}
|
||||
|
||||
fn new_phrase_scorer(
|
||||
&self,
|
||||
term_infos: &[(usize, TermInfo)],
|
||||
similarity_weight: Option<Bm25Weight>,
|
||||
fieldnorm_reader: FieldNormReader,
|
||||
slop: u32,
|
||||
) -> io::Result<Box<dyn Scorer>> {
|
||||
let mut offset_and_term_postings: Vec<(usize, SegmentPostings)> =
|
||||
Vec::with_capacity(term_infos.len());
|
||||
for (offset, term_info) in term_infos {
|
||||
let postings =
|
||||
self.load_segment_postings(term_info, IndexRecordOption::WithFreqsAndPositions)?;
|
||||
offset_and_term_postings.push((*offset, postings));
|
||||
}
|
||||
let scorer = PhraseScorer::new(
|
||||
offset_and_term_postings,
|
||||
similarity_weight,
|
||||
fieldnorm_reader,
|
||||
slop,
|
||||
);
|
||||
Ok(box_scorer(scorer))
|
||||
}
|
||||
|
||||
fn total_num_tokens(&self) -> u64 {
|
||||
/// Returns the total number of tokens recorded for all documents
|
||||
/// (including deleted documents).
|
||||
pub fn total_num_tokens(&self) -> u64 {
|
||||
self.total_num_tokens
|
||||
}
|
||||
|
||||
fn read_postings(
|
||||
/// Returns the segment postings associated with the term, and with the given option,
|
||||
/// or `None` if the term has never been encountered and indexed.
|
||||
///
|
||||
/// If the field was not indexed with the indexing options that cover
|
||||
/// the requested options, the returned [`SegmentPostings`] the method does not fail
|
||||
/// and returns a `SegmentPostings` with as much information as possible.
|
||||
///
|
||||
/// For instance, requesting [`IndexRecordOption::WithFreqs`] for a
|
||||
/// [`TextOptions`](crate::schema::TextOptions) that does not index position
|
||||
/// will return a [`SegmentPostings`] with `DocId`s and frequencies.
|
||||
pub fn read_postings(
|
||||
&self,
|
||||
term: &Term,
|
||||
option: IndexRecordOption,
|
||||
@@ -373,30 +302,17 @@ impl InvertedIndexReader for TantivyInvertedIndexReader {
|
||||
.transpose()
|
||||
}
|
||||
|
||||
    fn doc_freq(&self, term: &Term) -> io::Result<u32> {
    /// Returns the number of documents containing the term.
    pub fn doc_freq(&self, term: &Term) -> io::Result<u32> {
        Ok(self
            .get_term_info(term)?
            .map(|term_info| term_info.doc_freq)
            .unwrap_or(0u32))
    }
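A hedged sketch of how the `doc_freq` accessor above is reached from a segment, via the usual `SegmentReader::inverted_index` path (the field and term are illustrative; the `io::Error` to `TantivyError` conversion is assumed to follow the current `tantivy::Result` conventions):

```rust
use tantivy::schema::{Field, Term};
use tantivy::SegmentReader;

fn term_doc_freq(segment_reader: &SegmentReader, title: Field) -> tantivy::Result<u32> {
    // Number of documents in this segment containing the term "mice" in `title`.
    let term = Term::from_field_text(title, "mice");
    let inverted_index = segment_reader.inverted_index(title)?;
    Ok(inverted_index.doc_freq(&term)?)
}
```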
|
||||
|
||||
#[cfg(feature = "quickwit")]
|
||||
fn doc_freq_async<'a>(
|
||||
&'a self,
|
||||
term: &'a Term,
|
||||
) -> Pin<Box<dyn Future<Output = io::Result<u32>> + Send + 'a>> {
|
||||
Box::pin(async move {
|
||||
Ok(self
|
||||
.get_term_info_async(term)
|
||||
.await?
|
||||
.map(|term_info| term_info.doc_freq)
|
||||
.unwrap_or(0u32))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(feature = "quickwit")]
|
||||
impl TantivyInvertedIndexReader {
|
||||
impl InvertedIndexReader {
|
||||
pub(crate) async fn get_term_info_async(&self, term: &Term) -> io::Result<Option<TermInfo>> {
|
||||
self.termdict.get_async(term.serialized_value_bytes()).await
|
||||
}
|
||||
@@ -596,4 +512,13 @@ impl TantivyInvertedIndexReader {
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Returns the number of documents containing the term asynchronously.
|
||||
pub async fn doc_freq_async(&self, term: &Term) -> io::Result<u32> {
|
||||
Ok(self
|
||||
.get_term_info_async(term)
|
||||
.await?
|
||||
.map(|term_info| term_info.doc_freq)
|
||||
.unwrap_or(0u32))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -15,10 +15,8 @@ pub use self::codec_configuration::CodecConfiguration;
pub use self::index::{Index, IndexBuilder};
pub(crate) use self::index_meta::SegmentMetaInventory;
pub use self::index_meta::{IndexMeta, IndexSettings, Order, SegmentMeta};
pub use self::inverted_index_reader::{
    InvertedIndexFieldSpace, InvertedIndexReader, TantivyInvertedIndexReader,
};
pub use self::inverted_index_reader::InvertedIndexReader;
pub use self::segment::Segment;
pub use self::segment_component::SegmentComponent;
pub use self::segment_id::SegmentId;
pub use self::segment_reader::{FieldMetadata, SegmentReader, TantivySegmentReader};
pub use self::segment_reader::{FieldMetadata, SegmentReader};

@@ -44,7 +44,7 @@ fn create_uuid() -> Uuid {
}

impl SegmentId {
    /// Generates a new random `SegmentId`.
    #[doc(hidden)]
    pub fn generate_random() -> SegmentId {
        SegmentId(create_uuid())
    }
|
||||
|
||||
@@ -6,99 +6,18 @@ use common::{ByteCount, HasLen};
|
||||
use fnv::FnvHashMap;
|
||||
use itertools::Itertools;
|
||||
|
||||
use crate::codec::{ObjectSafeCodec, SumOrDoNothingCombiner};
|
||||
use crate::directory::{CompositeFile, Directory, FileSlice};
|
||||
use crate::codec::ObjectSafeCodec;
|
||||
use crate::directory::{CompositeFile, FileSlice};
|
||||
use crate::error::DataCorruption;
|
||||
use crate::fastfield::{intersect_alive_bitsets, AliveBitSet, FacetReader, FastFieldReaders};
|
||||
use crate::fieldnorm::{FieldNormReader, FieldNormReaders};
|
||||
use crate::index::{
|
||||
InvertedIndexReader, Segment, SegmentComponent, SegmentId, SegmentMeta,
|
||||
TantivyInvertedIndexReader,
|
||||
};
|
||||
use crate::index::{InvertedIndexReader, Segment, SegmentComponent, SegmentId};
|
||||
use crate::json_utils::json_path_sep_to_dot;
|
||||
use crate::query::Scorer;
|
||||
use crate::schema::{Field, IndexRecordOption, Schema, Type};
|
||||
use crate::space_usage::SegmentSpaceUsage;
|
||||
use crate::store::StoreReader;
|
||||
use crate::termdict::TermDictionary;
|
||||
use crate::{DocId, Opstamp, Score};
|
||||
|
||||
/// Trait defining the contract for a segment reader.
|
||||
pub trait SegmentReader: Send + Sync {
|
||||
/// Returns the highest document id ever attributed in this segment + 1.
|
||||
fn max_doc(&self) -> DocId;
|
||||
|
||||
/// Returns the number of alive documents. Deleted documents are not counted.
|
||||
fn num_docs(&self) -> DocId;
|
||||
|
||||
/// Returns the schema of the index this segment belongs to.
|
||||
fn schema(&self) -> &Schema;
|
||||
|
||||
/// Performs a for_each_pruning operation on the given scorer.
|
||||
fn for_each_pruning(
|
||||
&self,
|
||||
threshold: Score,
|
||||
scorer: Box<dyn Scorer>,
|
||||
callback: &mut dyn FnMut(DocId, Score) -> Score,
|
||||
);
|
||||
|
||||
/// Builds a union scorer possibly specialized if all scorers are term scorers.
|
||||
fn build_union_scorer_with_sum_combiner(
|
||||
&self,
|
||||
scorers: Vec<Box<dyn Scorer>>,
|
||||
num_docs: DocId,
|
||||
score_combiner_type: SumOrDoNothingCombiner,
|
||||
) -> Box<dyn Scorer>;
|
||||
|
||||
/// Return the number of documents that have been deleted in the segment.
|
||||
fn num_deleted_docs(&self) -> DocId;
|
||||
|
||||
/// Returns true if some of the documents of the segment have been deleted.
|
||||
fn has_deletes(&self) -> bool;
|
||||
|
||||
/// Accessor to a segment's fast field reader given a field.
|
||||
fn fast_fields(&self) -> &FastFieldReaders;
|
||||
|
||||
/// Accessor to the `FacetReader` associated with a given `Field`.
|
||||
fn facet_reader(&self, field_name: &str) -> crate::Result<FacetReader>;
|
||||
|
||||
/// Accessor to the segment's `Field norms`'s reader.
|
||||
fn get_fieldnorms_reader(&self, field: Field) -> crate::Result<FieldNormReader>;
|
||||
|
||||
/// Accessor to the segment's field norms readers.
|
||||
#[doc(hidden)]
|
||||
fn fieldnorms_readers(&self) -> &FieldNormReaders;
|
||||
|
||||
/// Accessor to the segment's [`StoreReader`](crate::store::StoreReader).
|
||||
fn get_store_reader(&self, cache_num_blocks: usize) -> io::Result<StoreReader>;
|
||||
|
||||
/// Returns a field reader associated with the field given in argument.
|
||||
fn inverted_index(&self, field: Field) -> crate::Result<Arc<dyn InvertedIndexReader>>;
|
||||
|
||||
/// Returns the list of fields that have been indexed in the segment.
|
||||
fn fields_metadata(&self) -> crate::Result<Vec<FieldMetadata>>;
|
||||
|
||||
/// Returns the segment id.
|
||||
fn segment_id(&self) -> SegmentId;
|
||||
|
||||
/// Returns the delete opstamp.
|
||||
fn delete_opstamp(&self) -> Option<Opstamp>;
|
||||
|
||||
/// Returns the bitset representing the alive `DocId`s.
|
||||
fn alive_bitset(&self) -> Option<&AliveBitSet>;
|
||||
|
||||
/// Returns true if the `doc` is marked as deleted.
|
||||
fn is_deleted(&self, doc: DocId) -> bool;
|
||||
|
||||
/// Returns an iterator that will iterate over the alive document ids.
|
||||
fn doc_ids_alive(&self) -> Box<dyn Iterator<Item = DocId> + Send + '_>;
|
||||
|
||||
/// Summarize total space usage of this segment.
|
||||
fn space_usage(&self) -> io::Result<SegmentSpaceUsage>;
|
||||
|
||||
/// Clones this reader into a shared trait object.
|
||||
fn clone_arc(&self) -> Arc<dyn SegmentReader>;
|
||||
}
|
||||
use crate::{DocId, Opstamp};
|
||||
|
||||
/// Entry point to access all of the datastructures of the `Segment`
|
||||
///
|
||||
@@ -111,8 +30,8 @@ pub trait SegmentReader: Send + Sync {
|
||||
/// The segment reader has a very low memory footprint,
|
||||
/// as close to all of the memory data is mmapped.
|
||||
#[derive(Clone)]
|
||||
pub struct TantivySegmentReader {
|
||||
inv_idx_reader_cache: Arc<RwLock<HashMap<Field, Arc<dyn InvertedIndexReader>>>>,
|
||||
pub struct SegmentReader {
|
||||
inv_idx_reader_cache: Arc<RwLock<HashMap<Field, Arc<InvertedIndexReader>>>>,
|
||||
|
||||
segment_id: SegmentId,
|
||||
delete_opstamp: Option<Opstamp>,
|
||||
@@ -132,159 +51,78 @@ pub struct TantivySegmentReader {
|
||||
codec: Arc<dyn ObjectSafeCodec>,
|
||||
}
|
||||
|
||||
impl TantivySegmentReader {
|
||||
/// Open a new segment for reading.
|
||||
pub fn open<C: crate::codec::Codec>(
|
||||
segment: &Segment<C>,
|
||||
) -> crate::Result<Arc<dyn SegmentReader>> {
|
||||
Self::open_with_custom_alive_set(segment, None)
|
||||
}
|
||||
|
||||
/// Open a new segment for reading.
|
||||
pub fn open_with_custom_alive_set<C: crate::codec::Codec>(
|
||||
segment: &Segment<C>,
|
||||
custom_bitset: Option<AliveBitSet>,
|
||||
) -> crate::Result<Arc<dyn SegmentReader>> {
|
||||
segment.index().codec().open_segment_reader(
|
||||
segment.index().directory(),
|
||||
segment.meta(),
|
||||
segment.schema(),
|
||||
custom_bitset,
|
||||
)
|
||||
}
|
||||
|
||||
pub(crate) fn open_with_custom_alive_set_from_directory(
|
||||
directory: &dyn Directory,
|
||||
segment_meta: &SegmentMeta,
|
||||
schema: Schema,
|
||||
codec: Arc<dyn ObjectSafeCodec>,
|
||||
custom_bitset: Option<AliveBitSet>,
|
||||
) -> crate::Result<TantivySegmentReader> {
|
||||
let termdict_file =
|
||||
directory.open_read(&segment_meta.relative_path(SegmentComponent::Terms))?;
|
||||
let termdict_composite = CompositeFile::open(&termdict_file)?;
|
||||
|
||||
let store_file =
|
||||
directory.open_read(&segment_meta.relative_path(SegmentComponent::Store))?;
|
||||
|
||||
crate::fail_point!("SegmentReader::open#middle");
|
||||
|
||||
let postings_file =
|
||||
directory.open_read(&segment_meta.relative_path(SegmentComponent::Postings))?;
|
||||
let postings_composite = CompositeFile::open(&postings_file)?;
|
||||
|
||||
let positions_composite = {
|
||||
if let Ok(positions_file) =
|
||||
directory.open_read(&segment_meta.relative_path(SegmentComponent::Positions))
|
||||
{
|
||||
CompositeFile::open(&positions_file)?
|
||||
} else {
|
||||
CompositeFile::empty()
|
||||
}
|
||||
};
|
||||
|
||||
let fast_fields_data =
|
||||
directory.open_read(&segment_meta.relative_path(SegmentComponent::FastFields))?;
|
||||
let fast_fields_readers = FastFieldReaders::open(fast_fields_data, schema.clone())?;
|
||||
let fieldnorm_data =
|
||||
directory.open_read(&segment_meta.relative_path(SegmentComponent::FieldNorms))?;
|
||||
let fieldnorm_readers = FieldNormReaders::open(fieldnorm_data)?;
|
||||
|
||||
let original_bitset = if segment_meta.has_deletes() {
|
||||
let alive_doc_file_slice =
|
||||
directory.open_read(&segment_meta.relative_path(SegmentComponent::Delete))?;
|
||||
let alive_doc_data = alive_doc_file_slice.read_bytes()?;
|
||||
Some(AliveBitSet::open(alive_doc_data))
|
||||
} else {
|
||||
None
|
||||
};
|
||||
|
||||
let alive_bitset_opt = intersect_alive_bitset(original_bitset, custom_bitset);
|
||||
|
||||
let max_doc = segment_meta.max_doc();
|
||||
let num_docs = alive_bitset_opt
|
||||
.as_ref()
|
||||
.map(|alive_bitset| alive_bitset.num_alive_docs() as u32)
|
||||
.unwrap_or(max_doc);
|
||||
|
||||
Ok(TantivySegmentReader {
|
||||
inv_idx_reader_cache: Default::default(),
|
||||
num_docs,
|
||||
max_doc,
|
||||
termdict_composite,
|
||||
postings_composite,
|
||||
fast_fields_readers,
|
||||
fieldnorm_readers,
|
||||
segment_id: segment_meta.id(),
|
||||
delete_opstamp: segment_meta.delete_opstamp(),
|
||||
store_file,
|
||||
alive_bitset_opt,
|
||||
positions_composite,
|
||||
schema,
|
||||
codec,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
impl SegmentReader for TantivySegmentReader {
|
||||
fn max_doc(&self) -> DocId {
|
||||
impl SegmentReader {
|
||||
/// Returns the highest document id ever attributed in
|
||||
/// this segment + 1.
|
||||
pub fn max_doc(&self) -> DocId {
|
||||
self.max_doc
|
||||
}
|
||||
|
||||
fn num_docs(&self) -> DocId {
|
||||
/// Returns the number of alive documents.
|
||||
/// Deleted documents are not counted.
|
||||
pub fn num_docs(&self) -> DocId {
|
||||
self.num_docs
|
||||
}
|
||||
|
||||
fn schema(&self) -> &Schema {
|
||||
/// Returns the schema of the index this segment belongs to.
|
||||
pub fn schema(&self) -> &Schema {
|
||||
&self.schema
|
||||
}
|
||||
|
||||
fn for_each_pruning(
|
||||
&self,
|
||||
threshold: Score,
|
||||
scorer: Box<dyn Scorer>,
|
||||
callback: &mut dyn FnMut(DocId, Score) -> Score,
|
||||
) {
|
||||
self.codec.for_each_pruning(threshold, scorer, callback);
|
||||
/// Returns the index codec.
|
||||
pub fn codec(&self) -> &dyn ObjectSafeCodec {
|
||||
&*self.codec
|
||||
}
|
||||
|
||||
fn build_union_scorer_with_sum_combiner(
|
||||
&self,
|
||||
scorers: Vec<Box<dyn Scorer>>,
|
||||
num_docs: DocId,
|
||||
score_combiner_type: SumOrDoNothingCombiner,
|
||||
) -> Box<dyn Scorer> {
|
||||
self.codec
|
||||
.build_union_scorer_with_sum_combiner(scorers, num_docs, score_combiner_type)
|
||||
}
|
||||
|
||||
fn num_deleted_docs(&self) -> DocId {
|
||||
/// Return the number of documents that have been
|
||||
/// deleted in the segment.
|
||||
pub fn num_deleted_docs(&self) -> DocId {
|
||||
self.max_doc - self.num_docs
|
||||
}
|
||||
|
||||
fn has_deletes(&self) -> bool {
|
||||
self.num_docs != self.max_doc
|
||||
/// Returns true if some of the documents of the segment have been deleted.
|
||||
pub fn has_deletes(&self) -> bool {
|
||||
self.num_deleted_docs() > 0
|
||||
}
|
||||
|
||||
fn fast_fields(&self) -> &FastFieldReaders {
|
||||
/// Accessor to a segment's fast field reader given a field.
|
||||
///
|
||||
/// Returns the u64 fast value reader if the field
|
||||
/// is a u64 field indexed as "fast".
|
||||
///
|
||||
/// Return a FastFieldNotAvailableError if the field is not
|
||||
/// declared as a fast field in the schema.
|
||||
///
|
||||
/// # Panics
|
||||
/// May panic if the index is corrupted.
|
||||
pub fn fast_fields(&self) -> &FastFieldReaders {
|
||||
&self.fast_fields_readers
|
||||
}
|
||||
|
||||
fn facet_reader(&self, field_name: &str) -> crate::Result<FacetReader> {
|
||||
let field = self.schema.get_field(field_name)?;
|
||||
let field_entry = self.schema.get_field_entry(field);
|
||||
/// Accessor to the `FacetReader` associated with a given `Field`.
|
||||
pub fn facet_reader(&self, field_name: &str) -> crate::Result<FacetReader> {
|
||||
let schema = self.schema();
|
||||
let field = schema.get_field(field_name)?;
|
||||
let field_entry = schema.get_field_entry(field);
|
||||
if field_entry.field_type().value_type() != Type::Facet {
|
||||
return Err(crate::TantivyError::SchemaError(format!(
|
||||
"`{field_name}` is not a facet field.`"
|
||||
)));
|
||||
}
|
||||
let Some(facet_column) = self.fast_fields_readers.str(field_name)? else {
|
||||
let Some(facet_column) = self.fast_fields().str(field_name)? else {
|
||||
panic!("Facet Field `{field_name}` is missing. This should not happen");
|
||||
};
|
||||
Ok(FacetReader::new(facet_column))
|
||||
}
|
||||
|
||||
fn get_fieldnorms_reader(&self, field: Field) -> crate::Result<FieldNormReader> {
|
||||
/// Accessor to the segment's `Field norms`'s reader.
|
||||
///
|
||||
/// Field norms are the length (in tokens) of the fields.
|
||||
/// It is used in the computation of the [TfIdf](https://fulmicoton.gitbooks.io/tantivy-doc/content/tfidf.html).
|
||||
///
|
||||
/// They are simply stored as a fast field, serialized in
|
||||
/// the `.fieldnorm` file of the segment.
|
||||
pub fn get_fieldnorms_reader(&self, field: Field) -> crate::Result<FieldNormReader> {
|
||||
self.fieldnorm_readers.get_field(field)?.ok_or_else(|| {
|
||||
let field_name = self.schema.get_field_name(field);
|
||||
let err_msg = format!(
|
||||
@@ -295,15 +133,102 @@ impl SegmentReader for TantivySegmentReader {
|
||||
})
|
||||
}
|
||||
|
||||
fn fieldnorms_readers(&self) -> &FieldNormReaders {
|
||||
#[doc(hidden)]
|
||||
pub fn fieldnorms_readers(&self) -> &FieldNormReaders {
|
||||
&self.fieldnorm_readers
|
||||
}
|
||||
|
||||
fn get_store_reader(&self, cache_num_blocks: usize) -> io::Result<StoreReader> {
|
||||
/// Accessor to the segment's [`StoreReader`](crate::store::StoreReader).
|
||||
///
|
||||
/// `cache_num_blocks` sets the number of decompressed blocks to be cached in an LRU.
|
||||
/// The size of blocks is configurable; this should be reflected in the LRU size.
|
||||
pub fn get_store_reader(&self, cache_num_blocks: usize) -> io::Result<StoreReader> {
|
||||
StoreReader::open(self.store_file.clone(), cache_num_blocks)
|
||||
}
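`cache_num_blocks` bounds an LRU of decompressed store blocks. Below is a minimal standard-library sketch of such a cache, keyed by block offset; the real store reader's data layout and eviction details may differ.

```rust
use std::collections::VecDeque;

/// Tiny LRU keyed by block start offset; values are decompressed block bytes.
/// Linear scans are fine here because the cache only holds a handful of blocks.
struct BlockLru {
    capacity: usize,
    blocks: VecDeque<(u64, Vec<u8>)>,
}

impl BlockLru {
    fn new(capacity: usize) -> Self {
        BlockLru { capacity, blocks: VecDeque::new() }
    }

    fn get(&mut self, offset: u64) -> Option<&Vec<u8>> {
        let pos = self.blocks.iter().position(|(off, _)| *off == offset)?;
        // Move the hit to the front so it is evicted last.
        let entry = self.blocks.remove(pos).unwrap();
        self.blocks.push_front(entry);
        self.blocks.front().map(|(_, bytes)| bytes)
    }

    fn put(&mut self, offset: u64, bytes: Vec<u8>) {
        if self.capacity == 0 {
            return;
        }
        if self.blocks.len() == self.capacity {
            self.blocks.pop_back();
        }
        self.blocks.push_front((offset, bytes));
    }
}

fn main() {
    let mut lru = BlockLru::new(2);
    lru.put(0, vec![1]);
    lru.put(4096, vec![2]);
    assert!(lru.get(0).is_some()); // offset 0 becomes most recently used
    lru.put(8192, vec![3]);        // evicts offset 4096
    assert!(lru.get(4096).is_none());
}
```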
|
||||
|
||||
fn inverted_index(&self, field: Field) -> crate::Result<Arc<dyn InvertedIndexReader>> {
|
||||
/// Open a new segment for reading.
|
||||
pub fn open<C: crate::codec::Codec>(segment: &Segment<C>) -> crate::Result<SegmentReader> {
|
||||
Self::open_with_custom_alive_set(segment, None)
|
||||
}
|
||||
|
||||
/// Open a new segment for reading.
|
||||
pub fn open_with_custom_alive_set<C: crate::codec::Codec>(
|
||||
segment: &Segment<C>,
|
||||
custom_bitset: Option<AliveBitSet>,
|
||||
) -> crate::Result<SegmentReader> {
|
||||
let codec: Arc<dyn ObjectSafeCodec> = Arc::new(segment.index().codec().clone());
|
||||
let termdict_file = segment.open_read(SegmentComponent::Terms)?;
|
||||
let termdict_composite = CompositeFile::open(&termdict_file)?;
|
||||
|
||||
let store_file = segment.open_read(SegmentComponent::Store)?;
|
||||
|
||||
crate::fail_point!("SegmentReader::open#middle");
|
||||
|
||||
let postings_file = segment.open_read(SegmentComponent::Postings)?;
|
||||
let postings_composite = CompositeFile::open(&postings_file)?;
|
||||
|
||||
let positions_composite = {
|
||||
if let Ok(positions_file) = segment.open_read(SegmentComponent::Positions) {
|
||||
CompositeFile::open(&positions_file)?
|
||||
} else {
|
||||
CompositeFile::empty()
|
||||
}
|
||||
};
|
||||
|
||||
let schema = segment.schema();
|
||||
|
||||
let fast_fields_data = segment.open_read(SegmentComponent::FastFields)?;
|
||||
let fast_fields_readers = FastFieldReaders::open(fast_fields_data, schema.clone())?;
|
||||
let fieldnorm_data = segment.open_read(SegmentComponent::FieldNorms)?;
|
||||
let fieldnorm_readers = FieldNormReaders::open(fieldnorm_data)?;
|
||||
|
||||
let original_bitset = if segment.meta().has_deletes() {
|
||||
let alive_doc_file_slice = segment.open_read(SegmentComponent::Delete)?;
|
||||
let alive_doc_data = alive_doc_file_slice.read_bytes()?;
|
||||
Some(AliveBitSet::open(alive_doc_data))
|
||||
} else {
|
||||
None
|
||||
};
|
||||
|
||||
let alive_bitset_opt = intersect_alive_bitset(original_bitset, custom_bitset);
|
||||
|
||||
let max_doc = segment.meta().max_doc();
|
||||
let num_docs = alive_bitset_opt
|
||||
.as_ref()
|
||||
.map(|alive_bitset| alive_bitset.num_alive_docs() as u32)
|
||||
.unwrap_or(max_doc);
|
||||
|
||||
Ok(SegmentReader {
|
||||
inv_idx_reader_cache: Default::default(),
|
||||
num_docs,
|
||||
max_doc,
|
||||
termdict_composite,
|
||||
postings_composite,
|
||||
fast_fields_readers,
|
||||
fieldnorm_readers,
|
||||
segment_id: segment.id(),
|
||||
delete_opstamp: segment.meta().delete_opstamp(),
|
||||
store_file,
|
||||
alive_bitset_opt,
|
||||
positions_composite,
|
||||
schema,
|
||||
codec,
|
||||
})
|
||||
}
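The constructor intersects the on-disk delete bitset with the optional caller-provided one, and `num_docs` is recomputed from whichever bitset survives, falling back to `max_doc` when there are no deletes at all. A small sketch of that logic over plain `Vec<bool>` bitsets standing in for `AliveBitSet`:

```rust
/// Intersect two optional alive bitsets: a doc stays alive only if it is alive in both.
/// `None` means "everything alive" on that side.
fn intersect_alive(
    original: Option<Vec<bool>>,
    custom: Option<Vec<bool>>,
) -> Option<Vec<bool>> {
    match (original, custom) {
        (Some(a), Some(b)) => Some(a.iter().zip(b.iter()).map(|(x, y)| *x && *y).collect()),
        (Some(a), None) | (None, Some(a)) => Some(a),
        (None, None) => None,
    }
}

fn main() {
    let max_doc = 4usize;
    let on_disk = Some(vec![true, false, true, true]); // doc 1 deleted on disk
    let custom = Some(vec![true, true, false, true]);  // caller also hides doc 2
    let alive = intersect_alive(on_disk, custom);
    // num_docs falls back to max_doc when there is no bitset at all.
    let num_docs = alive
        .as_ref()
        .map(|bs| bs.iter().filter(|alive| **alive).count())
        .unwrap_or(max_doc);
    assert_eq!(num_docs, 2);
}
```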
|
||||
|
||||
/// Returns a field reader associated with the field given in argument.
|
||||
/// If the field was not present in the index during indexing time,
|
||||
/// the InvertedIndexReader is empty.
|
||||
///
|
||||
/// The field reader is in charge of iterating through the
|
||||
/// term dictionary associated with a specific field,
|
||||
/// and opening the posting list associated with any term.
|
||||
///
|
||||
/// If the field is not marked as indexed, a warning is logged and an empty `InvertedIndexReader`
|
||||
/// is returned.
|
||||
/// Similarly, if the field is marked as indexed but no term has been indexed for the given
|
||||
/// index, an empty `InvertedIndexReader` is returned (but no warning is logged).
|
||||
pub fn inverted_index(&self, field: Field) -> crate::Result<Arc<InvertedIndexReader>> {
|
||||
if let Some(inv_idx_reader) = self
|
||||
.inv_idx_reader_cache
|
||||
.read()
|
||||
@@ -328,9 +253,7 @@ impl SegmentReader for TantivySegmentReader {
|
||||
//
|
||||
// Returns an empty inverted index.
|
||||
let record_option = record_option_opt.unwrap_or(IndexRecordOption::Basic);
|
||||
let inv_idx_reader: Arc<dyn InvertedIndexReader> =
|
||||
Arc::new(TantivyInvertedIndexReader::empty(record_option));
|
||||
return Ok(inv_idx_reader);
|
||||
return Ok(Arc::new(InvertedIndexReader::empty(record_option)));
|
||||
}
|
||||
|
||||
let record_option = record_option_opt.unwrap();
|
||||
@@ -354,13 +277,13 @@ impl SegmentReader for TantivySegmentReader {
|
||||
DataCorruption::comment_only(error_msg)
|
||||
})?;
|
||||
|
||||
let inv_idx_reader: Arc<dyn InvertedIndexReader> =
|
||||
Arc::new(TantivyInvertedIndexReader::new(
|
||||
TermDictionary::open(termdict_file)?,
|
||||
postings_file,
|
||||
positions_file,
|
||||
record_option,
|
||||
)?);
|
||||
let inv_idx_reader = Arc::new(InvertedIndexReader::new(
|
||||
TermDictionary::open(termdict_file)?,
|
||||
postings_file,
|
||||
positions_file,
|
||||
record_option,
|
||||
self.codec.clone(),
|
||||
)?);
|
||||
|
||||
// by releasing the lock in between, we may end up opening the inverted index
|
||||
// twice, but this is fine.
|
||||
@@ -372,10 +295,23 @@ impl SegmentReader for TantivySegmentReader {
|
||||
Ok(inv_idx_reader)
|
||||
}
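The per-field inverted index readers are cached behind a lock, and as the comment notes, releasing the lock between the cache miss and the insert means the reader may occasionally be opened twice. Below is a standard-library sketch of that tolerate-duplicate-work caching pattern, with a `String` standing in for the opened reader:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Sketch of the "check the cache, build outside the lock, then publish" pattern.
/// Two threads can both miss and both open the reader; the second insert simply
/// overwrites the first, which is harmless because both values are equivalent.
struct LazyCache {
    cache: RwLock<HashMap<u32, Arc<String>>>,
}

impl LazyCache {
    fn get_or_open(&self, field: u32) -> Arc<String> {
        // Fast path: shared read lock only.
        {
            let guard = self.cache.read().unwrap();
            if let Some(hit) = guard.get(&field) {
                return Arc::clone(hit);
            }
        } // read lock released here

        // Slow path: do the expensive open without holding any lock...
        let opened = Arc::new(format!("inverted index for field {field}"));

        // ...then take the write lock briefly to publish the result.
        self.cache
            .write()
            .unwrap()
            .insert(field, Arc::clone(&opened));
        opened
    }
}

fn main() {
    let cache = LazyCache { cache: RwLock::new(HashMap::new()) };
    let first = cache.get_or_open(7);
    let second = cache.get_or_open(7);
    assert_eq!(first, second);
}
```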
|
||||
|
||||
fn fields_metadata(&self) -> crate::Result<Vec<FieldMetadata>> {
|
||||
/// Returns the list of fields that have been indexed in the segment.
|
||||
/// The field list includes the field defined in the schema as well as the fields
|
||||
/// that have been indexed as a part of a JSON field.
|
||||
/// The returned field name is the full field name, including the name of the JSON field.
|
||||
///
|
||||
/// The returned field names can be used in queries.
|
||||
///
|
||||
/// Notice: If your data contains JSON fields this is **very expensive**, as it requires
|
||||
/// browsing through the inverted index term dictionary and the columnar field dictionary.
|
||||
///
|
||||
/// Disclaimer: Some fields may not be listed here. For instance, if the schema contains a json
|
||||
/// field that is neither indexed nor a fast field but is stored, it is possible for the field
|
||||
/// to not be listed.
|
||||
pub fn fields_metadata(&self) -> crate::Result<Vec<FieldMetadata>> {
|
||||
let mut indexed_fields: Vec<FieldMetadata> = Vec::new();
|
||||
let mut map_to_canonical = FnvHashMap::default();
|
||||
for (field, field_entry) in self.schema.fields() {
|
||||
for (field, field_entry) in self.schema().fields() {
|
||||
let field_name = field_entry.name().to_string();
|
||||
let is_indexed = field_entry.is_indexed();
|
||||
if is_indexed {
|
||||
@@ -465,7 +401,7 @@ impl SegmentReader for TantivySegmentReader {
|
||||
}
|
||||
}
|
||||
let fast_fields: Vec<FieldMetadata> = self
|
||||
.fast_fields_readers
|
||||
.fast_fields()
|
||||
.columnar()
|
||||
.iter_columns()?
|
||||
.map(|(mut field_name, handle)| {
|
||||
@@ -493,26 +429,31 @@ impl SegmentReader for TantivySegmentReader {
|
||||
Ok(merged_field_metadatas)
|
||||
}
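`fields_metadata` gathers field names from both the inverted index term dictionary and the columnar fast fields, then merges them into one deduplicated list. A trivial sketch of that merge step over plain strings:

```rust
/// Merge two lists of field names (e.g. one from the inverted index, one from the
/// fast-field columns), deduplicating entries that appear in both.
fn merge_field_names(mut indexed: Vec<String>, mut fast: Vec<String>) -> Vec<String> {
    let mut merged = Vec::with_capacity(indexed.len() + fast.len());
    merged.append(&mut indexed);
    merged.append(&mut fast);
    merged.sort();
    merged.dedup();
    merged
}

fn main() {
    let indexed = vec!["title".to_string(), "json.my_arr.my_key".to_string()];
    let fast = vec!["json.my_arr.my_key".to_string(), "timestamp".to_string()];
    let merged = merge_field_names(indexed, fast);
    assert_eq!(merged, vec!["json.my_arr.my_key", "timestamp", "title"]);
}
```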
|
||||
|
||||
fn segment_id(&self) -> SegmentId {
|
||||
/// Returns the segment id
|
||||
pub fn segment_id(&self) -> SegmentId {
|
||||
self.segment_id
|
||||
}
|
||||
|
||||
fn delete_opstamp(&self) -> Option<Opstamp> {
|
||||
/// Returns the delete opstamp
|
||||
pub fn delete_opstamp(&self) -> Option<Opstamp> {
|
||||
self.delete_opstamp
|
||||
}
|
||||
|
||||
fn alive_bitset(&self) -> Option<&AliveBitSet> {
|
||||
/// Returns the bitset representing the alive `DocId`s.
|
||||
pub fn alive_bitset(&self) -> Option<&AliveBitSet> {
|
||||
self.alive_bitset_opt.as_ref()
|
||||
}
|
||||
|
||||
fn is_deleted(&self, doc: DocId) -> bool {
|
||||
self.alive_bitset_opt
|
||||
.as_ref()
|
||||
/// Returns true if the `doc` is marked
|
||||
/// as deleted.
|
||||
pub fn is_deleted(&self, doc: DocId) -> bool {
|
||||
self.alive_bitset()
|
||||
.map(|alive_bitset| alive_bitset.is_deleted(doc))
|
||||
.unwrap_or(false)
|
||||
}
|
||||
|
||||
fn doc_ids_alive(&self) -> Box<dyn Iterator<Item = DocId> + Send + '_> {
|
||||
/// Returns an iterator that will iterate over the alive document ids
|
||||
pub fn doc_ids_alive(&self) -> Box<dyn Iterator<Item = DocId> + Send + '_> {
|
||||
if let Some(alive_bitset) = &self.alive_bitset_opt {
|
||||
Box::new(alive_bitset.iter_alive())
|
||||
} else {
|
||||
@@ -520,25 +461,22 @@ impl SegmentReader for TantivySegmentReader {
|
||||
}
|
||||
}
|
||||
|
||||
fn space_usage(&self) -> io::Result<SegmentSpaceUsage> {
|
||||
/// Summarize total space usage of this segment.
|
||||
pub fn space_usage(&self) -> io::Result<SegmentSpaceUsage> {
|
||||
Ok(SegmentSpaceUsage::new(
|
||||
self.num_docs,
|
||||
self.termdict_composite.space_usage(&self.schema),
|
||||
self.postings_composite.space_usage(&self.schema),
|
||||
self.positions_composite.space_usage(&self.schema),
|
||||
self.num_docs(),
|
||||
self.termdict_composite.space_usage(self.schema()),
|
||||
self.postings_composite.space_usage(self.schema()),
|
||||
self.positions_composite.space_usage(self.schema()),
|
||||
self.fast_fields_readers.space_usage()?,
|
||||
self.fieldnorm_readers.space_usage(&self.schema),
|
||||
StoreReader::open(self.store_file.clone(), 0)?.space_usage(),
|
||||
self.fieldnorm_readers.space_usage(self.schema()),
|
||||
self.get_store_reader(0)?.space_usage(),
|
||||
self.alive_bitset_opt
|
||||
.as_ref()
|
||||
.map(AliveBitSet::space_usage)
|
||||
.unwrap_or_default(),
|
||||
))
|
||||
}
|
||||
|
||||
fn clone_arc(&self) -> Arc<dyn SegmentReader> {
|
||||
Arc::new(self.clone())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
|
||||
@@ -648,7 +586,7 @@ fn intersect_alive_bitset(
|
||||
}
|
||||
}
|
||||
|
||||
impl fmt::Debug for TantivySegmentReader {
|
||||
impl fmt::Debug for SegmentReader {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
write!(f, "SegmentReader({:?})", self.segment_id)
|
||||
}
|
||||
|
||||
@@ -250,15 +250,11 @@ mod tests {
|
||||
|
||||
struct DummyWeight;
|
||||
impl Weight for DummyWeight {
|
||||
fn scorer(
|
||||
&self,
|
||||
_reader: &dyn SegmentReader,
|
||||
_boost: Score,
|
||||
) -> crate::Result<Box<dyn Scorer>> {
|
||||
fn scorer(&self, _reader: &SegmentReader, _boost: Score) -> crate::Result<Box<dyn Scorer>> {
|
||||
Err(crate::TantivyError::InternalError("dummy impl".to_owned()))
|
||||
}
|
||||
|
||||
fn explain(&self, _reader: &dyn SegmentReader, _doc: DocId) -> crate::Result<Explanation> {
|
||||
fn explain(&self, _reader: &SegmentReader, _doc: DocId) -> crate::Result<Explanation> {
|
||||
Err(crate::TantivyError::InternalError("dummy impl".to_owned()))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -95,7 +95,7 @@ pub struct IndexWriter<C: Codec = StandardCodec, D: Document = TantivyDocument>
|
||||
|
||||
fn compute_deleted_bitset(
|
||||
alive_bitset: &mut BitSet,
|
||||
segment_reader: &dyn SegmentReader,
|
||||
segment_reader: &SegmentReader,
|
||||
delete_cursor: &mut DeleteCursor,
|
||||
doc_opstamps: &DocToOpstampMapping,
|
||||
target_opstamp: Opstamp,
|
||||
@@ -144,12 +144,7 @@ pub fn advance_deletes<C: Codec>(
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
let segment_reader = segment.index().codec().open_segment_reader(
|
||||
segment.index().directory(),
|
||||
segment.meta(),
|
||||
segment.schema(),
|
||||
None,
|
||||
)?;
|
||||
let segment_reader = SegmentReader::open(&segment)?;
|
||||
|
||||
let max_doc = segment_reader.max_doc();
|
||||
let mut alive_bitset: BitSet = match segment_entry.alive_bitset() {
|
||||
@@ -161,7 +156,7 @@ pub fn advance_deletes<C: Codec>(
|
||||
|
||||
compute_deleted_bitset(
|
||||
&mut alive_bitset,
|
||||
segment_reader.as_ref(),
|
||||
&segment_reader,
|
||||
segment_entry.delete_cursor(),
|
||||
&DocToOpstampMapping::None,
|
||||
target_opstamp,
|
||||
@@ -249,19 +244,14 @@ fn apply_deletes<C: crate::codec::Codec>(
|
||||
.max()
|
||||
.expect("Empty DocOpstamp is forbidden");
|
||||
|
||||
let segment_reader = segment.index().codec().open_segment_reader(
|
||||
segment.index().directory(),
|
||||
segment.meta(),
|
||||
segment.schema(),
|
||||
None,
|
||||
)?;
|
||||
let segment_reader = SegmentReader::open(segment)?;
|
||||
let doc_to_opstamps = DocToOpstampMapping::WithMap(doc_opstamps);
|
||||
|
||||
let max_doc = segment.meta().max_doc();
|
||||
let mut deleted_bitset = BitSet::with_max_value_and_full(max_doc);
|
||||
let may_have_deletes = compute_deleted_bitset(
|
||||
&mut deleted_bitset,
|
||||
segment_reader.as_ref(),
|
||||
&segment_reader,
|
||||
delete_cursor,
|
||||
&doc_to_opstamps,
|
||||
max_doc_opstamp,
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use crate::codec::StandardCodec;
|
||||
use crate::collector::TopDocs;
|
||||
use crate::fastfield::AliveBitSet;
|
||||
use crate::index::Index;
|
||||
@@ -122,28 +123,22 @@ mod tests {
|
||||
let term_a = Term::from_field_text(my_text_field, "text");
|
||||
let inverted_index = segment_reader.inverted_index(my_text_field).unwrap();
|
||||
let term_info = inverted_index.get_term_info(&term_a).unwrap().unwrap();
|
||||
let typed_postings = crate::codec::Codec::load_postings_typed(
|
||||
index.codec(),
|
||||
inverted_index.as_ref(),
|
||||
&term_info,
|
||||
IndexRecordOption::WithFreqsAndPositions,
|
||||
)
|
||||
.unwrap();
|
||||
let mut postings = inverted_index
|
||||
.read_postings_from_terminfo_specialized(
|
||||
&term_info,
|
||||
IndexRecordOption::WithFreqsAndPositions,
|
||||
&StandardCodec,
|
||||
)
|
||||
.unwrap();
|
||||
assert_eq!(postings.doc_freq(), DocFreq::Exact(2));
|
||||
let fallback_bitset = AliveBitSet::for_test_from_deleted_docs(&[0], 100);
|
||||
assert_eq!(
|
||||
crate::indexer::merger::doc_freq_given_deletes(
|
||||
&typed_postings,
|
||||
&postings,
|
||||
segment_reader.alive_bitset().unwrap_or(&fallback_bitset)
|
||||
),
|
||||
2
|
||||
);
|
||||
let mut postings = inverted_index
|
||||
.read_postings_from_terminfo(&term_info, IndexRecordOption::WithFreqsAndPositions)
|
||||
.unwrap();
|
||||
assert_eq!(postings.doc_freq(), DocFreq::Exact(2));
|
||||
let mut postings = inverted_index
|
||||
.read_postings_from_terminfo(&term_info, IndexRecordOption::WithFreqsAndPositions)
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(postings.term_freq(), 1);
|
||||
let mut output = Vec::new();
|
||||
|
||||
@@ -1,5 +1,3 @@
|
||||
use std::io;
|
||||
use std::marker::PhantomData;
|
||||
use std::sync::Arc;
|
||||
|
||||
use columnar::{
|
||||
@@ -19,8 +17,8 @@ use crate::fieldnorm::{FieldNormReader, FieldNormReaders, FieldNormsSerializer,
|
||||
use crate::index::{Segment, SegmentComponent, SegmentReader};
|
||||
use crate::indexer::doc_id_mapping::{MappingType, SegmentDocIdMapping};
|
||||
use crate::indexer::SegmentSerializer;
|
||||
use crate::postings::{InvertedIndexSerializer, Postings, TermInfo};
|
||||
use crate::schema::{value_type_to_column_type, Field, FieldType, IndexRecordOption, Schema};
|
||||
use crate::postings::{InvertedIndexSerializer, Postings};
|
||||
use crate::schema::{value_type_to_column_type, Field, FieldType, Schema};
|
||||
use crate::store::StoreWriter;
|
||||
use crate::termdict::{TermMerger, TermOrdinal};
|
||||
use crate::{DocAddress, DocId, InvertedIndexReader};
|
||||
@@ -31,7 +29,7 @@ use crate::{DocAddress, DocId, InvertedIndexReader};
|
||||
pub const MAX_DOC_LIMIT: u32 = 1 << 31;
|
||||
|
||||
fn estimate_total_num_tokens_in_single_segment(
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
field: Field,
|
||||
) -> crate::Result<u64> {
|
||||
// There are no deletes. We can simply use the exact value saved into the posting list.
|
||||
@@ -72,23 +70,19 @@ fn estimate_total_num_tokens_in_single_segment(
|
||||
Ok((segment_num_tokens as f64 * ratio) as u64)
|
||||
}
|
||||
|
||||
fn estimate_total_num_tokens(
|
||||
readers: &[Arc<dyn SegmentReader>],
|
||||
field: Field,
|
||||
) -> crate::Result<u64> {
|
||||
fn estimate_total_num_tokens(readers: &[SegmentReader], field: Field) -> crate::Result<u64> {
|
||||
let mut total_num_tokens: u64 = 0;
|
||||
for reader in readers {
|
||||
total_num_tokens += estimate_total_num_tokens_in_single_segment(reader.as_ref(), field)?;
|
||||
total_num_tokens += estimate_total_num_tokens_in_single_segment(reader, field)?;
|
||||
}
|
||||
Ok(total_num_tokens)
|
||||
}
|
||||
|
||||
pub struct IndexMerger<C: Codec = StandardCodec> {
|
||||
schema: Schema,
|
||||
pub(crate) readers: Vec<Arc<dyn SegmentReader>>,
|
||||
pub(crate) readers: Vec<SegmentReader>,
|
||||
max_doc: u32,
|
||||
codec: C,
|
||||
phantom: PhantomData<C>,
|
||||
}
|
||||
|
||||
struct DeltaComputer {
|
||||
@@ -183,12 +177,8 @@ impl<C: Codec> IndexMerger<C> {
|
||||
let mut readers = vec![];
|
||||
for (segment, new_alive_bitset_opt) in segments.iter().zip(alive_bitset_opt) {
|
||||
if segment.meta().num_docs() > 0 {
|
||||
let reader = segment.index().codec().open_segment_reader(
|
||||
segment.index().directory(),
|
||||
segment.meta(),
|
||||
segment.schema(),
|
||||
new_alive_bitset_opt,
|
||||
)?;
|
||||
let reader =
|
||||
SegmentReader::open_with_custom_alive_set(segment, new_alive_bitset_opt)?;
|
||||
readers.push(reader);
|
||||
}
|
||||
}
|
||||
@@ -207,7 +197,6 @@ impl<C: Codec> IndexMerger<C> {
|
||||
readers,
|
||||
max_doc,
|
||||
codec,
|
||||
phantom: PhantomData,
|
||||
})
|
||||
}
|
||||
|
||||
@@ -281,7 +270,7 @@ impl<C: Codec> IndexMerger<C> {
|
||||
}),
|
||||
);
|
||||
|
||||
let has_deletes: bool = self.readers.iter().any(|reader| reader.has_deletes());
|
||||
let has_deletes: bool = self.readers.iter().any(SegmentReader::has_deletes);
|
||||
let mapping_type = if has_deletes {
|
||||
MappingType::StackedWithDeletes
|
||||
} else {
|
||||
@@ -306,7 +295,7 @@ impl<C: Codec> IndexMerger<C> {
|
||||
&self,
|
||||
indexed_field: Field,
|
||||
_field_type: &FieldType,
|
||||
serializer: &mut InvertedIndexSerializer,
|
||||
serializer: &mut InvertedIndexSerializer<C>,
|
||||
fieldnorm_reader: Option<FieldNormReader>,
|
||||
doc_id_mapping: &SegmentDocIdMapping,
|
||||
) -> crate::Result<()> {
|
||||
@@ -316,7 +305,7 @@ impl<C: Codec> IndexMerger<C> {
|
||||
|
||||
let mut max_term_ords: Vec<TermOrdinal> = Vec::new();
|
||||
|
||||
let field_readers: Vec<Arc<dyn InvertedIndexReader>> = self
|
||||
let field_readers: Vec<Arc<InvertedIndexReader>> = self
|
||||
.readers
|
||||
.iter()
|
||||
.map(|reader| reader.inverted_index(indexed_field))
|
||||
@@ -388,14 +377,23 @@ impl<C: Codec> IndexMerger<C> {
|
||||
// Let's compute the list of non-empty posting lists
|
||||
for (segment_ord, term_info) in merged_terms.current_segment_ords_and_term_infos() {
|
||||
let segment_reader = &self.readers[segment_ord];
|
||||
let inverted_index = &field_readers[segment_ord];
|
||||
if let Some((doc_freq, postings)) = postings_for_merge::<C>(
|
||||
inverted_index.as_ref(),
|
||||
&self.codec,
|
||||
let inverted_index: &InvertedIndexReader = &field_readers[segment_ord];
|
||||
let postings = inverted_index.read_postings_from_terminfo_specialized(
|
||||
&term_info,
|
||||
segment_postings_option,
|
||||
segment_reader.alive_bitset(),
|
||||
)? {
|
||||
&self.codec,
|
||||
)?;
|
||||
let alive_bitset_opt = segment_reader.alive_bitset();
|
||||
let doc_freq = if let Some(alive_bitset) = alive_bitset_opt {
|
||||
doc_freq_given_deletes(&postings, alive_bitset)
|
||||
} else {
|
||||
// We do not need an exact document frequency here.
|
||||
match postings.doc_freq() {
|
||||
crate::postings::DocFreq::Approximate(_) => exact_doc_freq(&postings),
|
||||
crate::postings::DocFreq::Exact(doc_freq) => doc_freq,
|
||||
}
|
||||
};
|
||||
if doc_freq > 0u32 {
|
||||
total_doc_freq += doc_freq;
|
||||
segment_postings_containing_the_term.push((segment_ord, postings));
|
||||
}
|
||||
@@ -483,7 +481,7 @@ impl<C: Codec> IndexMerger<C> {
|
||||
|
||||
fn write_postings(
|
||||
&self,
|
||||
serializer: &mut InvertedIndexSerializer,
|
||||
serializer: &mut InvertedIndexSerializer<C>,
|
||||
fieldnorm_readers: FieldNormReaders,
|
||||
doc_id_mapping: &SegmentDocIdMapping,
|
||||
) -> crate::Result<()> {
|
||||
@@ -577,66 +575,32 @@ pub(crate) fn doc_freq_given_deletes<P: Postings + Clone>(
|
||||
postings: &P,
|
||||
alive_bitset: &AliveBitSet,
|
||||
) -> u32 {
|
||||
let mut postings = postings.clone();
|
||||
let mut docset = postings.clone();
|
||||
let mut doc_freq = 0;
|
||||
loop {
|
||||
let doc = postings.doc();
|
||||
let doc = docset.doc();
|
||||
if doc == TERMINATED {
|
||||
return doc_freq;
|
||||
}
|
||||
if alive_bitset.is_alive(doc) {
|
||||
doc_freq += 1u32;
|
||||
}
|
||||
postings.advance();
|
||||
docset.advance();
|
||||
}
|
||||
}
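`doc_freq_given_deletes` simply walks the posting list and counts the documents that are still alive. The same idea over a sorted slice of doc ids and a boolean bitset:

```rust
/// Count how many documents of a (sorted) posting list are still alive,
/// mirroring the scan in `doc_freq_given_deletes` above with plain slices.
fn doc_freq_given_deletes(postings: &[u32], alive: &[bool]) -> u32 {
    postings
        .iter()
        .filter(|&&doc| alive.get(doc as usize).copied().unwrap_or(false))
        .count() as u32
}

fn main() {
    let postings = [0u32, 2, 5, 9];
    let mut alive = vec![true; 10];
    alive[2] = false; // doc 2 was deleted
    assert_eq!(doc_freq_given_deletes(&postings, &alive), 3);
}
```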
|
||||
|
||||
fn read_postings_for_merge<C: Codec>(
|
||||
inverted_index: &dyn InvertedIndexReader,
|
||||
codec: &C,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
) -> io::Result<<C::PostingsCodec as PostingsCodec>::Postings> {
|
||||
codec.load_postings_typed(inverted_index, term_info, option)
|
||||
}
|
||||
|
||||
fn postings_for_merge<C: Codec>(
|
||||
inverted_index: &dyn InvertedIndexReader,
|
||||
codec: &C,
|
||||
term_info: &TermInfo,
|
||||
option: IndexRecordOption,
|
||||
alive_bitset_opt: Option<&AliveBitSet>,
|
||||
) -> io::Result<Option<(u32, <C::PostingsCodec as PostingsCodec>::Postings)>> {
|
||||
let postings = read_postings_for_merge(inverted_index, codec, term_info, option)?;
|
||||
let doc_freq = if let Some(alive_bitset) = alive_bitset_opt {
|
||||
doc_freq_given_deletes(&postings, alive_bitset)
|
||||
} else {
|
||||
// We do not need an exact document frequency here.
|
||||
match postings.doc_freq() {
|
||||
crate::postings::DocFreq::Exact(doc_freq) => doc_freq,
|
||||
crate::postings::DocFreq::Approximate(_) => exact_doc_freq(&postings),
|
||||
}
|
||||
};
|
||||
|
||||
if doc_freq == 0u32 {
|
||||
return Ok(None);
|
||||
}
|
||||
|
||||
Ok(Some((doc_freq, postings)))
|
||||
}
|
||||
|
||||
/// If the postings is not able to inform us of the document frequency,
|
||||
/// we just scan through it.
|
||||
pub(crate) fn exact_doc_freq<P: Postings + Clone>(postings: &P) -> u32 {
|
||||
let mut postings = postings.clone();
|
||||
let mut docset = postings.clone();
|
||||
let mut doc_freq = 0;
|
||||
loop {
|
||||
let doc = postings.doc();
|
||||
let doc = docset.doc();
|
||||
if doc == TERMINATED {
|
||||
return doc_freq;
|
||||
}
|
||||
doc_freq += 1u32;
|
||||
postings.advance();
|
||||
docset.advance();
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1625,7 +1589,7 @@ mod tests {
|
||||
for segment_reader in searcher.segment_readers() {
|
||||
let mut term_scorer = term_query
|
||||
.specialized_weight(EnableScoring::enabled_from_searcher(&searcher))?
|
||||
.term_scorer_for_test(segment_reader.as_ref(), 1.0)
|
||||
.term_scorer_for_test(segment_reader, 1.0)
|
||||
.unwrap();
|
||||
// the difference compared to before is intrinsic to the bm25 formula. no worries
|
||||
// there.
|
||||
@@ -1680,8 +1644,6 @@ mod tests {
|
||||
assert_eq!(super::doc_freq_given_deletes(&docs, &alive_bitset), 2);
|
||||
let all_deleted =
|
||||
AliveBitSet::for_test_from_deleted_docs(&[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], 12);
|
||||
let docs =
|
||||
<StandardPostingsCodec as PostingsCodec>::Postings::create_from_docs(&[0, 2, 10]);
|
||||
assert_eq!(super::doc_freq_given_deletes(&docs, &all_deleted), 0);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -13,7 +13,7 @@ pub struct SegmentSerializer<C: crate::codec::Codec> {
|
||||
pub(crate) store_writer: StoreWriter,
|
||||
fast_field_write: WritePtr,
|
||||
fieldnorms_serializer: Option<FieldNormsSerializer>,
|
||||
postings_serializer: InvertedIndexSerializer,
|
||||
postings_serializer: InvertedIndexSerializer<C>,
|
||||
}
|
||||
|
||||
impl<C: crate::codec::Codec> SegmentSerializer<C> {
|
||||
@@ -55,7 +55,7 @@ impl<C: crate::codec::Codec> SegmentSerializer<C> {
|
||||
}
|
||||
|
||||
/// Accessor to the `PostingsSerializer`.
|
||||
pub fn get_postings_serializer(&mut self) -> &mut InvertedIndexSerializer {
|
||||
pub fn get_postings_serializer(&mut self) -> &mut InvertedIndexSerializer<C> {
|
||||
&mut self.postings_serializer
|
||||
}
|
||||
|
||||
|
||||
@@ -875,7 +875,7 @@ mod tests {
|
||||
let searcher = reader.searcher();
|
||||
let segment_reader = searcher.segment_reader(0u32);
|
||||
|
||||
fn assert_type(reader: &dyn SegmentReader, field: &str, typ: ColumnType) {
|
||||
fn assert_type(reader: &SegmentReader, field: &str, typ: ColumnType) {
|
||||
let cols = reader.fast_fields().dynamic_column_handles(field).unwrap();
|
||||
assert_eq!(cols.len(), 1, "{field}");
|
||||
assert_eq!(cols[0].column_type(), typ, "{field}");
|
||||
@@ -894,7 +894,7 @@ mod tests {
|
||||
assert_type(segment_reader, "json.my_arr", ColumnType::I64);
|
||||
assert_type(segment_reader, "json.my_arr.my_key", ColumnType::Str);
|
||||
|
||||
fn assert_empty(reader: &dyn SegmentReader, field: &str) {
|
||||
fn assert_empty(reader: &SegmentReader, field: &str) {
|
||||
let cols = reader.fast_fields().dynamic_column_handles(field).unwrap();
|
||||
assert_eq!(cols.len(), 0);
|
||||
}
|
||||
|
||||
@@ -228,7 +228,7 @@ pub use crate::core::{json_utils, Executor, Searcher, SearcherGeneration};
|
||||
pub use crate::directory::Directory;
|
||||
pub use crate::index::{
|
||||
Index, IndexBuilder, IndexMeta, IndexSettings, InvertedIndexReader, Order, Segment,
|
||||
SegmentMeta, SegmentReader, TantivyInvertedIndexReader, TantivySegmentReader,
|
||||
SegmentMeta, SegmentReader,
|
||||
};
|
||||
pub use crate::indexer::{IndexWriter, SingleSegmentIndexWriter};
|
||||
pub use crate::schema::{Document, TantivyDocument, Term};
|
||||
@@ -548,7 +548,7 @@ pub mod tests {
|
||||
index_writer.commit()?;
|
||||
let reader = index.reader()?;
|
||||
let searcher = reader.searcher();
|
||||
let segment_reader: &dyn SegmentReader = searcher.segment_reader(0);
|
||||
let segment_reader: &SegmentReader = searcher.segment_reader(0);
|
||||
let fieldnorms_reader = segment_reader.get_fieldnorms_reader(text_field)?;
|
||||
assert_eq!(fieldnorms_reader.fieldnorm(0), 3);
|
||||
assert_eq!(fieldnorms_reader.fieldnorm(1), 0);
|
||||
@@ -556,7 +556,7 @@ pub mod tests {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn advance_undeleted(docset: &mut dyn DocSet, reader: &dyn SegmentReader) -> bool {
|
||||
fn advance_undeleted(docset: &mut dyn DocSet, reader: &SegmentReader) -> bool {
|
||||
let mut doc = docset.advance();
|
||||
while doc != TERMINATED {
|
||||
if !reader.is_deleted(doc) {
|
||||
@@ -1073,7 +1073,7 @@ pub mod tests {
|
||||
}
|
||||
let reader = index.reader()?;
|
||||
let searcher = reader.searcher();
|
||||
let segment_reader: &dyn SegmentReader = searcher.segment_reader(0);
|
||||
let segment_reader: &SegmentReader = searcher.segment_reader(0);
|
||||
{
|
||||
let fast_field_reader_res = segment_reader.fast_fields().u64("text");
|
||||
assert!(fast_field_reader_res.is_err());
|
||||
|
||||
@@ -3,6 +3,7 @@ use std::io;
|
||||
use common::json_path_writer::JSON_END_OF_PATH;
|
||||
use stacker::Addr;
|
||||
|
||||
use crate::codec::Codec;
|
||||
use crate::indexer::indexing_term::IndexingTerm;
|
||||
use crate::indexer::path_to_unordered_id::OrderedPathId;
|
||||
use crate::postings::postings_writer::SpecializedPostingsWriter;
|
||||
@@ -52,12 +53,12 @@ impl<Rec: Recorder> PostingsWriter for JsonPostingsWriter<Rec> {
|
||||
}
|
||||
|
||||
/// The actual serialization format is handled by the `PostingsSerializer`.
|
||||
fn serialize(
|
||||
fn serialize<C: Codec>(
|
||||
&self,
|
||||
ordered_term_addrs: &[(Field, OrderedPathId, &[u8], Addr)],
|
||||
ordered_id_to_path: &[&str],
|
||||
ctx: &IndexingContext,
|
||||
serializer: &mut FieldSerializer,
|
||||
serializer: &mut FieldSerializer<C>,
|
||||
) -> io::Result<()> {
|
||||
let mut term_buffer = JsonTermSerializer(Vec::with_capacity(48));
|
||||
let mut buffer_lender = BufferLender::default();
|
||||
|
||||
@@ -12,8 +12,7 @@ mod per_field_postings_writer;
|
||||
mod postings;
|
||||
mod postings_writer;
|
||||
mod recorder;
|
||||
pub(crate) mod serializer;
|
||||
pub(crate) mod skip;
|
||||
mod serializer;
|
||||
mod term_info;
|
||||
|
||||
pub(crate) use loaded_postings::LoadedPostings;
|
||||
@@ -36,7 +35,7 @@ pub(crate) mod tests {
|
||||
use super::{InvertedIndexSerializer, Postings};
|
||||
use crate::docset::{DocSet, TERMINATED};
|
||||
use crate::fieldnorm::FieldNormReader;
|
||||
use crate::index::{Index, SegmentComponent};
|
||||
use crate::index::{Index, SegmentComponent, SegmentReader};
|
||||
use crate::indexer::operation::AddOperation;
|
||||
use crate::indexer::SegmentWriter;
|
||||
use crate::postings::DocFreq;
|
||||
@@ -249,13 +248,7 @@ pub(crate) mod tests {
|
||||
segment_writer.finalize()?;
|
||||
}
|
||||
{
|
||||
let segment_reader = crate::codec::Codec::open_segment_reader(
|
||||
segment.index().codec(),
|
||||
segment.index().directory(),
|
||||
segment.meta(),
|
||||
segment.schema(),
|
||||
None,
|
||||
)?;
|
||||
let segment_reader = SegmentReader::open(&segment)?;
|
||||
{
|
||||
let fieldnorm_reader = segment_reader.get_fieldnorms_reader(text_field)?;
|
||||
assert_eq!(fieldnorm_reader.fieldnorm(0), 8 + 5);
|
||||
|
||||
@@ -4,6 +4,7 @@ use std::ops::Range;
|
||||
|
||||
use stacker::Addr;
|
||||
|
||||
use crate::codec::Codec;
|
||||
use crate::fieldnorm::FieldNormReaders;
|
||||
use crate::indexer::indexing_term::IndexingTerm;
|
||||
use crate::indexer::path_to_unordered_id::OrderedPathId;
|
||||
@@ -48,12 +49,12 @@ fn make_field_partition(
|
||||
/// Serialize the inverted index.
|
||||
/// It pushes all term, one field at a time, towards the
|
||||
/// postings serializer.
|
||||
pub(crate) fn serialize_postings(
|
||||
pub(crate) fn serialize_postings<C: Codec>(
|
||||
ctx: IndexingContext,
|
||||
schema: Schema,
|
||||
per_field_postings_writers: &PerFieldPostingsWriter,
|
||||
fieldnorm_readers: FieldNormReaders,
|
||||
serializer: &mut InvertedIndexSerializer,
|
||||
serializer: &mut InvertedIndexSerializer<C>,
|
||||
) -> crate::Result<()> {
|
||||
// Replace unordered ids by ordered ids to be able to sort
|
||||
let unordered_id_to_ordered_id: Vec<OrderedPathId> =
|
||||
@@ -166,12 +167,12 @@ impl PostingsWriter for PostingsWriterEnum {
|
||||
}
|
||||
}
|
||||
|
||||
fn serialize(
|
||||
fn serialize<C: Codec>(
|
||||
&self,
|
||||
term_addrs: &[(Field, OrderedPathId, &[u8], Addr)],
|
||||
ordered_id_to_path: &[&str],
|
||||
ctx: &IndexingContext,
|
||||
serializer: &mut FieldSerializer,
|
||||
serializer: &mut FieldSerializer<C>,
|
||||
) -> io::Result<()> {
|
||||
match self {
|
||||
PostingsWriterEnum::DocId(writer) => {
|
||||
@@ -254,12 +255,12 @@ pub(crate) trait PostingsWriter: Send + Sync {
|
||||
|
||||
/// Serializes the postings on disk.
|
||||
/// The actual serialization format is handled by the `PostingsSerializer`.
|
||||
fn serialize(
|
||||
fn serialize<C: Codec>(
|
||||
&self,
|
||||
term_addrs: &[(Field, OrderedPathId, &[u8], Addr)],
|
||||
ordered_id_to_path: &[&str],
|
||||
ctx: &IndexingContext,
|
||||
serializer: &mut FieldSerializer,
|
||||
serializer: &mut FieldSerializer<C>,
|
||||
) -> io::Result<()>;
|
||||
|
||||
/// Tokenize a text and subscribe all of its token.
|
||||
@@ -311,12 +312,12 @@ pub(crate) struct SpecializedPostingsWriter<Rec: Recorder> {
|
||||
|
||||
impl<Rec: Recorder> SpecializedPostingsWriter<Rec> {
|
||||
#[inline]
|
||||
pub(crate) fn serialize_one_term(
|
||||
pub(crate) fn serialize_one_term<C: Codec>(
|
||||
term: &[u8],
|
||||
addr: Addr,
|
||||
buffer_lender: &mut BufferLender,
|
||||
ctx: &IndexingContext,
|
||||
serializer: &mut FieldSerializer,
|
||||
serializer: &mut FieldSerializer<C>,
|
||||
) -> io::Result<()> {
|
||||
let recorder: Rec = ctx.term_index.read(addr);
|
||||
let term_doc_freq = recorder.term_doc_freq().unwrap_or(0u32);
|
||||
@@ -357,12 +358,12 @@ impl<Rec: Recorder> PostingsWriter for SpecializedPostingsWriter<Rec> {
|
||||
});
|
||||
}
|
||||
|
||||
fn serialize(
|
||||
fn serialize<C: Codec>(
|
||||
&self,
|
||||
term_addrs: &[(Field, OrderedPathId, &[u8], Addr)],
|
||||
_ordered_id_to_path: &[&str],
|
||||
ctx: &IndexingContext,
|
||||
serializer: &mut FieldSerializer,
|
||||
serializer: &mut FieldSerializer<C>,
|
||||
) -> io::Result<()> {
|
||||
let mut buffer_lender = BufferLender::default();
|
||||
for (_field, _path_id, term, addr) in term_addrs {
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
use common::read_u32_vint;
|
||||
use stacker::{ExpUnrolledLinkedList, MemoryArena};
|
||||
|
||||
use crate::codec::Codec;
|
||||
use crate::postings::FieldSerializer;
|
||||
use crate::DocId;
|
||||
|
||||
@@ -67,10 +68,10 @@ pub(crate) trait Recorder: Copy + Default + Send + Sync + 'static {
|
||||
/// Close the document. It will help record the term frequency.
|
||||
fn close_doc(&mut self, arena: &mut MemoryArena);
|
||||
/// Pushes the postings information to the serializer.
|
||||
fn serialize(
|
||||
fn serialize<C: Codec>(
|
||||
&self,
|
||||
arena: &MemoryArena,
|
||||
serializer: &mut FieldSerializer,
|
||||
serializer: &mut FieldSerializer<C>,
|
||||
buffer_lender: &mut BufferLender,
|
||||
);
|
||||
/// Returns the number of document containing this term.
|
||||
@@ -110,10 +111,10 @@ impl Recorder for DocIdRecorder {
|
||||
#[inline]
|
||||
fn close_doc(&mut self, _arena: &mut MemoryArena) {}
|
||||
|
||||
fn serialize(
|
||||
fn serialize<C: Codec>(
|
||||
&self,
|
||||
arena: &MemoryArena,
|
||||
serializer: &mut FieldSerializer,
|
||||
serializer: &mut FieldSerializer<C>,
|
||||
buffer_lender: &mut BufferLender,
|
||||
) {
|
||||
let buffer = buffer_lender.lend_u8();
|
||||
@@ -178,10 +179,10 @@ impl Recorder for TermFrequencyRecorder {
|
||||
self.current_tf = 0;
|
||||
}
|
||||
|
||||
fn serialize(
|
||||
fn serialize<C: Codec>(
|
||||
&self,
|
||||
arena: &MemoryArena,
|
||||
serializer: &mut FieldSerializer,
|
||||
serializer: &mut FieldSerializer<C>,
|
||||
buffer_lender: &mut BufferLender,
|
||||
) {
|
||||
let buffer = buffer_lender.lend_u8();
|
||||
@@ -235,10 +236,10 @@ impl Recorder for TfAndPositionRecorder {
|
||||
self.stack.writer(arena).write_u32_vint(POSITION_END);
|
||||
}
|
||||
|
||||
fn serialize(
|
||||
fn serialize<C: Codec>(
|
||||
&self,
|
||||
arena: &MemoryArena,
|
||||
serializer: &mut FieldSerializer,
|
||||
serializer: &mut FieldSerializer<C>,
|
||||
buffer_lender: &mut BufferLender,
|
||||
) {
|
||||
let (buffer_u8, buffer_positions) = buffer_lender.lend_all();
|
||||
|
||||
@@ -1,16 +1,14 @@
|
||||
use std::cmp::Ordering;
|
||||
use std::io::{self, Write};
|
||||
|
||||
use common::{BinarySerializable, CountingWriter, VInt};
|
||||
use common::{BinarySerializable, CountingWriter};
|
||||
|
||||
use super::TermInfo;
|
||||
use crate::codec::postings::PostingsSerializer;
|
||||
use crate::codec::Codec;
|
||||
use crate::directory::{CompositeWrite, WritePtr};
|
||||
use crate::fieldnorm::FieldNormReader;
|
||||
use crate::index::Segment;
|
||||
use crate::positions::PositionSerializer;
|
||||
use crate::postings::compression::{BlockEncoder, VIntEncoder as _, COMPRESSION_BLOCK_SIZE};
|
||||
use crate::postings::skip::SkipSerializer;
|
||||
use crate::query::Bm25Weight;
|
||||
use crate::schema::{Field, FieldEntry, FieldType, IndexRecordOption, Schema};
|
||||
use crate::termdict::TermDictionaryBuilder;
|
||||
use crate::{DocId, Score};
|
||||
@@ -46,24 +44,27 @@ use crate::{DocId, Score};
|
||||
///
|
||||
/// A description of the serialization format is
|
||||
/// [available here](https://fulmicoton.gitbooks.io/tantivy-doc/content/inverted-index.html).
|
||||
pub struct InvertedIndexSerializer {
|
||||
pub struct InvertedIndexSerializer<C: Codec> {
|
||||
terms_write: CompositeWrite<WritePtr>,
|
||||
postings_write: CompositeWrite<WritePtr>,
|
||||
positions_write: CompositeWrite<WritePtr>,
|
||||
schema: Schema,
|
||||
codec: C,
|
||||
}
|
||||
|
||||
impl InvertedIndexSerializer {
|
||||
use crate::codec::postings::PostingsCodec;
|
||||
|
||||
impl<C: Codec> InvertedIndexSerializer<C> {
|
||||
/// Open a new `InvertedIndexSerializer` for the given segment
|
||||
pub fn open<C: crate::codec::Codec>(
|
||||
segment: &mut Segment<C>,
|
||||
) -> crate::Result<InvertedIndexSerializer> {
|
||||
pub fn open(segment: &mut Segment<C>) -> crate::Result<InvertedIndexSerializer<C>> {
|
||||
use crate::index::SegmentComponent::{Positions, Postings, Terms};
|
||||
let codec = segment.index().codec().clone();
|
||||
let inv_index_serializer = InvertedIndexSerializer {
|
||||
terms_write: CompositeWrite::wrap(segment.open_write(Terms)?),
|
||||
postings_write: CompositeWrite::wrap(segment.open_write(Postings)?),
|
||||
positions_write: CompositeWrite::wrap(segment.open_write(Positions)?),
|
||||
schema: segment.schema(),
|
||||
codec,
|
||||
};
|
||||
Ok(inv_index_serializer)
|
||||
}
|
||||
@@ -77,7 +78,7 @@ impl InvertedIndexSerializer {
|
||||
field: Field,
|
||||
total_num_tokens: u64,
|
||||
fieldnorm_reader: Option<FieldNormReader>,
|
||||
) -> io::Result<FieldSerializer<'_>> {
|
||||
) -> io::Result<FieldSerializer<'_, C>> {
|
||||
let field_entry: &FieldEntry = self.schema.get_field_entry(field);
|
||||
let term_dictionary_write = self.terms_write.for_field(field);
|
||||
let postings_write = self.postings_write.for_field(field);
|
||||
@@ -90,6 +91,7 @@ impl InvertedIndexSerializer {
|
||||
postings_write,
|
||||
positions_write,
|
||||
fieldnorm_reader,
|
||||
&self.codec,
|
||||
)
|
||||
}
|
||||
|
||||
@@ -104,9 +106,9 @@ impl InvertedIndexSerializer {
|
||||
|
||||
/// The field serializer is in charge of
|
||||
/// the serialization of a specific field.
|
||||
pub struct FieldSerializer<'a> {
|
||||
pub struct FieldSerializer<'a, C: Codec> {
|
||||
term_dictionary_builder: TermDictionaryBuilder<&'a mut CountingWriter<WritePtr>>,
|
||||
postings_serializer: PostingsSerializer,
|
||||
postings_serializer: <C::PostingsCodec as PostingsCodec>::PostingsSerializer,
|
||||
positions_serializer_opt: Option<PositionSerializer<&'a mut CountingWriter<WritePtr>>>,
|
||||
current_term_info: TermInfo,
|
||||
term_open: bool,
|
||||
@@ -114,7 +116,7 @@ pub struct FieldSerializer<'a> {
|
||||
postings_start_offset: u64,
|
||||
}
|
||||
|
||||
impl<'a> FieldSerializer<'a> {
|
||||
impl<'a, C: Codec> FieldSerializer<'a, C> {
|
||||
fn create(
|
||||
field_type: &FieldType,
|
||||
total_num_tokens: u64,
|
||||
@@ -122,7 +124,8 @@ impl<'a> FieldSerializer<'a> {
|
||||
postings_write: &'a mut CountingWriter<WritePtr>,
|
||||
positions_write: &'a mut CountingWriter<WritePtr>,
|
||||
fieldnorm_reader: Option<FieldNormReader>,
|
||||
) -> io::Result<FieldSerializer<'a>> {
|
||||
codec: &C,
|
||||
) -> io::Result<FieldSerializer<'a, C>> {
|
||||
total_num_tokens.serialize(postings_write)?;
|
||||
let index_record_option = field_type
|
||||
.index_record_option()
|
||||
@@ -132,8 +135,11 @@ impl<'a> FieldSerializer<'a> {
|
||||
.as_ref()
|
||||
.map(|ff_reader| total_num_tokens as Score / ff_reader.num_docs() as Score)
|
||||
.unwrap_or(0.0);
|
||||
let postings_serializer =
|
||||
PostingsSerializer::new(average_fieldnorm, index_record_option, fieldnorm_reader);
|
||||
let postings_serializer = codec.postings_codec().new_serializer(
|
||||
average_fieldnorm,
|
||||
index_record_option,
|
||||
fieldnorm_reader,
|
||||
);
|
||||
let positions_serializer_opt = if index_record_option.has_positions() {
|
||||
Some(PositionSerializer::new(positions_write))
|
||||
} else {
|
||||
@@ -186,7 +192,6 @@ impl<'a> FieldSerializer<'a> {
|
||||
"Called new_term, while the previous term was not closed."
|
||||
);
|
||||
self.term_open = true;
|
||||
self.postings_serializer.clear();
|
||||
self.current_term_info = self.current_term_info();
|
||||
self.term_dictionary_builder.insert_key(term)?;
|
||||
self.postings_serializer
|
||||
@@ -250,223 +255,3 @@ impl<'a> FieldSerializer<'a> {
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
struct Block {
|
||||
doc_ids: [DocId; COMPRESSION_BLOCK_SIZE],
|
||||
term_freqs: [u32; COMPRESSION_BLOCK_SIZE],
|
||||
len: usize,
|
||||
}
|
||||
|
||||
impl Block {
|
||||
fn new() -> Self {
|
||||
Block {
|
||||
doc_ids: [0u32; COMPRESSION_BLOCK_SIZE],
|
||||
term_freqs: [0u32; COMPRESSION_BLOCK_SIZE],
|
||||
len: 0,
|
||||
}
|
||||
}
|
||||
|
||||
fn doc_ids(&self) -> &[DocId] {
|
||||
&self.doc_ids[..self.len]
|
||||
}
|
||||
|
||||
fn term_freqs(&self) -> &[u32] {
|
||||
&self.term_freqs[..self.len]
|
||||
}
|
||||
|
||||
fn clear(&mut self) {
|
||||
self.len = 0;
|
||||
}
|
||||
|
||||
fn append_doc(&mut self, doc: DocId, term_freq: u32) {
|
||||
let len = self.len;
|
||||
self.doc_ids[len] = doc;
|
||||
self.term_freqs[len] = term_freq;
|
||||
self.len = len + 1;
|
||||
}
|
||||
|
||||
fn is_full(&self) -> bool {
|
||||
self.len == COMPRESSION_BLOCK_SIZE
|
||||
}
|
||||
|
||||
fn is_empty(&self) -> bool {
|
||||
self.len == 0
|
||||
}
|
||||
|
||||
fn last_doc(&self) -> DocId {
|
||||
assert_eq!(self.len, COMPRESSION_BLOCK_SIZE);
|
||||
self.doc_ids[COMPRESSION_BLOCK_SIZE - 1]
|
||||
}
|
||||
}
|
||||
|
||||
pub struct PostingsSerializer {
|
||||
last_doc_id_encoded: u32,
|
||||
|
||||
block_encoder: BlockEncoder,
|
||||
block: Box<Block>,
|
||||
|
||||
postings_write: Vec<u8>,
|
||||
skip_write: SkipSerializer,
|
||||
|
||||
mode: IndexRecordOption,
|
||||
fieldnorm_reader: Option<FieldNormReader>,
|
||||
|
||||
bm25_weight: Option<Bm25Weight>,
|
||||
avg_fieldnorm: Score, /* Average number of terms in the field for that segment.
|
||||
* this value is used to compute the block wand information. */
|
||||
term_has_freq: bool,
|
||||
}
|
||||
|
||||
impl PostingsSerializer {
|
||||
pub fn new(
|
||||
avg_fieldnorm: Score,
|
||||
mode: IndexRecordOption,
|
||||
fieldnorm_reader: Option<FieldNormReader>,
|
||||
) -> PostingsSerializer {
|
||||
PostingsSerializer {
|
||||
block_encoder: BlockEncoder::new(),
|
||||
block: Box::new(Block::new()),
|
||||
|
||||
postings_write: Vec::new(),
|
||||
skip_write: SkipSerializer::new(),
|
||||
|
||||
last_doc_id_encoded: 0u32,
|
||||
mode,
|
||||
|
||||
fieldnorm_reader,
|
||||
bm25_weight: None,
|
||||
avg_fieldnorm,
|
||||
term_has_freq: false,
|
||||
}
|
||||
}
|
||||
|
||||
pub fn new_term(&mut self, term_doc_freq: u32, record_term_freq: bool) {
|
||||
self.bm25_weight = None;
|
||||
|
||||
self.term_has_freq = self.mode.has_freq() && record_term_freq;
|
||||
if !self.term_has_freq {
|
||||
return;
|
||||
}
|
||||
|
||||
let num_docs_in_segment: u64 =
|
||||
if let Some(fieldnorm_reader) = self.fieldnorm_reader.as_ref() {
|
||||
fieldnorm_reader.num_docs() as u64
|
||||
} else {
|
||||
return;
|
||||
};
|
||||
|
||||
if num_docs_in_segment == 0 {
|
||||
return;
|
||||
}
|
||||
|
||||
self.bm25_weight = Some(Bm25Weight::for_one_term_without_explain(
|
||||
term_doc_freq as u64,
|
||||
num_docs_in_segment,
|
||||
self.avg_fieldnorm,
|
||||
));
|
||||
}
|
||||
|
||||
fn write_block(&mut self) {
|
||||
{
|
||||
// encode the doc ids
|
||||
let (num_bits, block_encoded): (u8, &[u8]) = self
|
||||
.block_encoder
|
||||
.compress_block_sorted(self.block.doc_ids(), self.last_doc_id_encoded);
|
||||
self.last_doc_id_encoded = self.block.last_doc();
|
||||
self.skip_write
|
||||
.write_doc(self.last_doc_id_encoded, num_bits);
|
||||
// last el block 0, offset block 1,
|
||||
self.postings_write.extend(block_encoded);
|
||||
}
|
||||
if self.term_has_freq {
|
||||
let (num_bits, block_encoded): (u8, &[u8]) = self
|
||||
.block_encoder
|
||||
.compress_block_unsorted(self.block.term_freqs(), true);
|
||||
self.postings_write.extend(block_encoded);
|
||||
self.skip_write.write_term_freq(num_bits);
|
||||
if self.mode.has_positions() {
|
||||
// We serialize the sum of term freqs within the skip information
|
||||
// in order to navigate through positions.
|
||||
let sum_freq = self.block.term_freqs().iter().cloned().sum();
|
||||
self.skip_write.write_total_term_freq(sum_freq);
|
||||
}
|
||||
let mut blockwand_params = (0u8, 0u32);
|
||||
if let Some(bm25_weight) = self.bm25_weight.as_ref() {
|
||||
if let Some(fieldnorm_reader) = self.fieldnorm_reader.as_ref() {
|
||||
let docs = self.block.doc_ids().iter().cloned();
|
||||
let term_freqs = self.block.term_freqs().iter().cloned();
|
||||
let fieldnorms = docs.map(|doc| fieldnorm_reader.fieldnorm_id(doc));
|
||||
blockwand_params = fieldnorms
|
||||
.zip(term_freqs)
|
||||
.max_by(
|
||||
|(left_fieldnorm_id, left_term_freq),
|
||||
(right_fieldnorm_id, right_term_freq)| {
|
||||
let left_score =
|
||||
bm25_weight.tf_factor(*left_fieldnorm_id, *left_term_freq);
|
||||
let right_score =
|
||||
bm25_weight.tf_factor(*right_fieldnorm_id, *right_term_freq);
|
||||
left_score
|
||||
.partial_cmp(&right_score)
|
||||
.unwrap_or(Ordering::Equal)
|
||||
},
|
||||
)
|
||||
.unwrap();
|
||||
}
|
||||
}
|
||||
let (fieldnorm_id, term_freq) = blockwand_params;
|
||||
self.skip_write.write_blockwand_max(fieldnorm_id, term_freq);
|
||||
}
|
||||
self.block.clear();
|
||||
}
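Per block, the serializer records the `(fieldnorm_id, term_freq)` pair that maximizes the BM25 tf factor, which is what lets block-max WAND skip whole blocks at query time. Here is a sketch of that selection with a toy, monotone tf factor standing in for `Bm25Weight::tf_factor`:

```rust
/// Pick the (fieldnorm_id, term_freq) pair of a block that maximizes a tf factor,
/// like the block-wand metadata written per block above. The tf factor below is a
/// toy monotonic stand-in for the real Bm25Weight::tf_factor.
fn blockwand_params(fieldnorm_ids: &[u8], term_freqs: &[u32]) -> (u8, u32) {
    let tf_factor = |fieldnorm_id: u8, term_freq: u32| -> f32 {
        // Higher term frequency and shorter fields score higher.
        term_freq as f32 / (term_freq as f32 + 1.0 + fieldnorm_id as f32)
    };
    fieldnorm_ids
        .iter()
        .copied()
        .zip(term_freqs.iter().copied())
        .max_by(|&(f1, t1), &(f2, t2)| {
            tf_factor(f1, t1)
                .partial_cmp(&tf_factor(f2, t2))
                .unwrap_or(std::cmp::Ordering::Equal)
        })
        .expect("block is never empty when this is called")
}

fn main() {
    // The doc with tf=5 and a small fieldnorm wins over tf=5 with a large fieldnorm.
    let params = blockwand_params(&[3, 40, 3], &[2, 5, 5]);
    assert_eq!(params, (3, 5));
}
```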
|
||||
|
||||
pub fn write_doc(&mut self, doc_id: DocId, term_freq: u32) {
|
||||
self.block.append_doc(doc_id, term_freq);
|
||||
if self.block.is_full() {
|
||||
self.write_block();
|
||||
}
|
||||
}
|
||||
|
||||
pub fn close_term(
|
||||
&mut self,
|
||||
doc_freq: u32,
|
||||
output_write: &mut impl std::io::Write,
|
||||
) -> io::Result<()> {
|
||||
if !self.block.is_empty() {
|
||||
// we have doc ids waiting to be written
|
||||
// this happens when the number of doc ids is
|
||||
// not a perfect multiple of our block size.
|
||||
//
|
||||
// In that case, the remaining part is encoded
|
||||
// using variable int encoding.
|
||||
{
|
||||
let block_encoded = self
|
||||
.block_encoder
|
||||
.compress_vint_sorted(self.block.doc_ids(), self.last_doc_id_encoded);
|
||||
self.postings_write.write_all(block_encoded)?;
|
||||
}
|
||||
// ... Idem for term frequencies
|
||||
if self.term_has_freq {
|
||||
let block_encoded = self
|
||||
.block_encoder
|
||||
.compress_vint_unsorted(self.block.term_freqs());
|
||||
self.postings_write.write_all(block_encoded)?;
|
||||
}
|
||||
self.block.clear();
|
||||
}
|
||||
if doc_freq >= COMPRESSION_BLOCK_SIZE as u32 {
|
||||
let skip_data = self.skip_write.data();
|
||||
VInt(skip_data.len() as u64).serialize(output_write)?;
|
||||
output_write.write_all(skip_data)?;
|
||||
}
|
||||
output_write.write_all(&self.postings_write[..])?;
|
||||
self.skip_write.clear();
|
||||
self.postings_write.clear();
|
||||
self.bm25_weight = None;
|
||||
Ok(())
|
||||
}
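The tail of a term's posting list, when it does not fill a whole compression block, is flushed as delta-encoded variable-length integers instead of a bit-packed block. The sketch below shows delta plus LEB128-style varint encoding that is similar in spirit to that fallback; the exact on-disk byte layout is not asserted here.

```rust
/// Delta-encode a sorted tail of doc ids and write each delta as a LEB128-style
/// varint, similar in spirit to the vint fallback used for the last partial block.
fn vint_encode_sorted(doc_ids: &[u32], previous_doc: u32, out: &mut Vec<u8>) {
    let mut last = previous_doc;
    for &doc in doc_ids {
        let mut delta = doc - last;
        last = doc;
        // 7 bits of payload per byte, high bit set on all but the final byte.
        loop {
            let byte = (delta & 0x7F) as u8;
            delta >>= 7;
            if delta == 0 {
                out.push(byte);
                break;
            }
            out.push(byte | 0x80);
        }
    }
}

fn main() {
    let mut out = Vec::new();
    // Tail of 3 docs after a full block that ended at doc 128.
    vint_encode_sorted(&[130, 131, 300], 128, &mut out);
    // Deltas are 2, 1, 169 -> 169 needs two varint bytes.
    assert_eq!(out, vec![2, 1, 0xA9, 0x01]);
}
```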
|
||||
|
||||
fn clear(&mut self) {
|
||||
self.block.clear();
|
||||
self.last_doc_id_encoded = 0;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -21,7 +21,7 @@ impl Query for AllQuery {
|
||||
pub struct AllWeight;
|
||||
|
||||
impl Weight for AllWeight {
|
||||
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
|
||||
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
|
||||
let all_scorer = AllScorer::new(reader.max_doc());
|
||||
if boost != 1.0 {
|
||||
Ok(box_scorer(BoostScorer::new(all_scorer, boost)))
|
||||
@@ -30,7 +30,7 @@ impl Weight for AllWeight {
|
||||
}
|
||||
}
|
||||
|
||||
fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
|
||||
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
|
||||
if doc >= reader.max_doc() {
|
||||
return Err(does_not_match(doc));
|
||||
}
|
||||
|
||||
@@ -67,7 +67,7 @@ where
|
||||
}
|
||||
|
||||
/// Returns the term infos that match the automaton
|
||||
pub fn get_match_term_infos(&self, reader: &dyn SegmentReader) -> crate::Result<Vec<TermInfo>> {
|
||||
pub fn get_match_term_infos(&self, reader: &SegmentReader) -> crate::Result<Vec<TermInfo>> {
|
||||
let inverted_index = reader.inverted_index(self.field)?;
|
||||
let term_dict = inverted_index.terms();
|
||||
let mut term_stream = self.automaton_stream(term_dict)?;
|
||||
@@ -84,7 +84,7 @@ where
|
||||
A: Automaton + Send + Sync + 'static,
|
||||
A::State: Clone,
|
||||
{
|
||||
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
|
||||
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
|
||||
let max_doc = reader.max_doc();
|
||||
let mut doc_bitset = BitSet::with_max_value(max_doc);
|
||||
let inverted_index = reader.inverted_index(self.field)?;
|
||||
@@ -92,18 +92,16 @@ where
|
||||
let mut term_stream = self.automaton_stream(term_dict)?;
|
||||
while term_stream.advance() {
|
||||
let term_info = term_stream.value();
|
||||
inverted_index.fill_bitset_for_term(
|
||||
term_info,
|
||||
IndexRecordOption::Basic,
|
||||
&mut doc_bitset,
|
||||
)?;
|
||||
let mut block_segment_postings =
|
||||
inverted_index.read_postings_from_terminfo(term_info, IndexRecordOption::Basic)?;
|
||||
block_segment_postings.fill_bitset(&mut doc_bitset);
|
||||
}
|
||||
let doc_bitset = BitSetDocSet::from(doc_bitset);
|
||||
let const_scorer = ConstScorer::new(doc_bitset, boost);
|
||||
Ok(Box::new(const_scorer))
|
||||
}
|
||||
|
||||
fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
|
||||
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
|
||||
let mut scorer = self.scorer(reader, 1.0)?;
|
||||
if scorer.seek(doc) == doc {
|
||||
Ok(Explanation::new("AutomatonScorer", 1.0))
|
||||
|
||||
@@ -1,12 +1,11 @@
|
||||
use std::collections::HashMap;
|
||||
|
||||
use crate::codec::SumOrDoNothingCombiner;
|
||||
use crate::codec::{ObjectSafeCodec, SumOrDoNothingCombiner};
|
||||
use crate::docset::COLLECT_BLOCK_BUFFER_LEN;
|
||||
use crate::index::SegmentReader;
|
||||
use crate::query::disjunction::Disjunction;
|
||||
use crate::query::explanation::does_not_match;
|
||||
use crate::query::score_combiner::{DoNothingCombiner, ScoreCombiner};
|
||||
use crate::query::term_query::TermScorer;
|
||||
use crate::query::weight::for_each_docset_buffered;
|
||||
use crate::query::{
|
||||
box_scorer, intersect_scorers, AllScorer, BufferedUnionScorer, EmptyScorer, Exclude,
|
||||
@@ -39,7 +38,7 @@ fn scorer_union<TScoreCombiner>(
|
||||
scorers: Vec<Box<dyn Scorer>>,
|
||||
score_combiner_fn: impl Fn() -> TScoreCombiner,
|
||||
num_docs: u32,
|
||||
reader: &dyn SegmentReader,
|
||||
codec: &dyn ObjectSafeCodec,
|
||||
) -> Box<dyn Scorer>
|
||||
where
|
||||
TScoreCombiner: ScoreCombiner,
|
||||
@@ -62,7 +61,9 @@ where
|
||||
None
|
||||
};
|
||||
if let Some(combiner) = combiner_opt {
|
||||
reader.build_union_scorer_with_sum_combiner(scorers, num_docs, combiner)
|
||||
let scorer =
|
||||
codec.build_union_scorer_with_sum_combiner(scorers, num_docs, combiner);
|
||||
scorer
|
||||
} else {
|
||||
box_scorer(BufferedUnionScorer::build(
|
||||
scorers,
|
||||
@@ -180,7 +181,7 @@ impl<TScoreCombiner: ScoreCombiner> BooleanWeight<TScoreCombiner> {
|
||||
|
||||
fn per_occur_scorers(
|
||||
&self,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
boost: Score,
|
||||
) -> crate::Result<HashMap<Occur, Vec<Box<dyn Scorer>>>> {
|
||||
let mut per_occur_scorers: HashMap<Occur, Vec<Box<dyn Scorer>>> = HashMap::new();
|
||||
@@ -196,7 +197,7 @@ impl<TScoreCombiner: ScoreCombiner> BooleanWeight<TScoreCombiner> {
|
||||
|
||||
fn complex_scorer<TComplexScoreCombiner: ScoreCombiner>(
|
||||
&self,
|
||||
reader: &dyn SegmentReader,
|
||||
reader: &SegmentReader,
|
||||
boost: Score,
|
||||
score_combiner_fn: impl Fn() -> TComplexScoreCombiner,
|
||||
) -> crate::Result<Box<dyn Scorer>> {
|
||||
@@ -244,13 +245,13 @@ impl<TScoreCombiner: ScoreCombiner> BooleanWeight<TScoreCombiner> {
|
||||
should_scorers,
|
||||
&score_combiner_fn,
|
||||
num_docs,
|
||||
reader,
|
||||
reader.codec(),
|
||||
)),
|
||||
1 => ShouldScorersCombinationMethod::Required(scorer_union(
|
||||
should_scorers,
|
||||
&score_combiner_fn,
|
||||
num_docs,
|
||||
reader,
|
||||
reader.codec(),
|
||||
)),
|
||||
n if num_of_should_scorers == n => {
|
||||
// When num_of_should_scorers equals the number of should clauses,
|
||||
@@ -266,6 +267,18 @@ impl<TScoreCombiner: ScoreCombiner> BooleanWeight<TScoreCombiner> {
|
||||
}
|
||||
};
|
||||
|
||||
let exclude_scorer_opt: Option<Box<dyn Scorer>> = if exclude_scorers.is_empty() {
|
||||
None
|
||||
} else {
|
||||
let exclude_scorers_union: Box<dyn Scorer> = scorer_union(
|
||||
exclude_scorers,
|
||||
DoNothingCombiner::default,
|
||||
num_docs,
|
||||
reader.codec(),
|
||||
);
|
||||
Some(exclude_scorers_union)
|
||||
};
|
||||
|
||||
let include_scorer = match (should_scorers, must_scorers) {
|
||||
(ShouldScorersCombinationMethod::Ignored, must_scorers) => {
|
||||
// No SHOULD clauses (or they were absorbed into MUST).
|
||||
@@ -334,22 +347,11 @@ impl<TScoreCombiner: ScoreCombiner> BooleanWeight<TScoreCombiner> {
|
||||
}
|
||||
}
|
||||
};
|
||||
if exclude_scorers.is_empty() {
|
||||
return Ok(include_scorer);
|
||||
}
|
||||
|
||||
let scorer: Box<dyn Scorer> = if exclude_scorers.len() == 1 {
|
||||
let exclude_scorer = exclude_scorers.pop().unwrap();
|
||||
match exclude_scorer.downcast::<TermScorer>() {
|
||||
// Cast to TermScorer succeeded
|
||||
Ok(exclude_scorer) => Box::new(Exclude::new(include_scorer, *exclude_scorer)),
|
||||
// We get back the original Box<dyn Scorer>
|
||||
Err(exclude_scorer) => Box::new(Exclude::new(include_scorer, exclude_scorer)),
|
||||
}
|
||||
if let Some(exclude_scorer) = exclude_scorer_opt {
|
||||
Ok(box_scorer(Exclude::new(include_scorer, exclude_scorer)))
|
||||
} else {
Box::new(Exclude::new(include_scorer, exclude_scorers))
};
Ok(scorer)
Ok(include_scorer)
}
}
}

@@ -378,7 +380,7 @@ fn remove_and_count_all_and_empty_scorers(
}

impl<TScoreCombiner: ScoreCombiner + Sync> Weight for BooleanWeight<TScoreCombiner> {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
if self.weights.is_empty() {
Ok(Box::new(EmptyScorer))
} else if self.weights.len() == 1 {
@@ -395,7 +397,7 @@ impl<TScoreCombiner: ScoreCombiner + Sync> Weight for BooleanWeight<TScoreCombin
}
}

fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0)?;
if scorer.seek(doc) != doc {
return Err(does_not_match(doc));
@@ -417,7 +419,7 @@ impl<TScoreCombiner: ScoreCombiner + Sync> Weight for BooleanWeight<TScoreCombin

fn for_each(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
callback: &mut dyn FnMut(DocId, Score),
) -> crate::Result<()> {
let mut scorer = self.complex_scorer(reader, 1.0, &self.score_combiner_fn)?;
@@ -427,7 +429,7 @@ impl<TScoreCombiner: ScoreCombiner + Sync> Weight for BooleanWeight<TScoreCombin

fn for_each_no_score(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
callback: &mut dyn FnMut(&[DocId]),
) -> crate::Result<()> {
let mut scorer = self.complex_scorer(reader, 1.0, || DoNothingCombiner)?;
@@ -449,11 +451,11 @@ impl<TScoreCombiner: ScoreCombiner + Sync> Weight for BooleanWeight<TScoreCombin
fn for_each_pruning(
&self,
threshold: Score,
reader: &dyn SegmentReader,
reader: &SegmentReader,
callback: &mut dyn FnMut(DocId, Score) -> Score,
) -> crate::Result<()> {
let scorer = self.complex_scorer(reader, 1.0, &self.score_combiner_fn)?;
reader.for_each_pruning(threshold, scorer, callback);
reader.codec().for_each_pruning(threshold, scorer, callback);
Ok(())
}
}

@@ -67,11 +67,11 @@ impl BoostWeight {
}

impl Weight for BoostWeight {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
self.weight.scorer(reader, boost * self.boost)
}

fn explain(&self, reader: &dyn SegmentReader, doc: u32) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: u32) -> crate::Result<Explanation> {
let underlying_explanation = self.weight.explain(reader, doc)?;
let score = underlying_explanation.value() * self.boost;
let mut explanation =
@@ -80,7 +80,7 @@ impl Weight for BoostWeight {
Ok(explanation)
}

fn count(&self, reader: &dyn SegmentReader) -> crate::Result<u32> {
fn count(&self, reader: &SegmentReader) -> crate::Result<u32> {
self.weight.count(reader)
}
}

@@ -63,7 +63,7 @@ impl ConstWeight {
}

impl Weight for ConstWeight {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
let inner_scorer = self.weight.scorer(reader, boost)?;
Ok(box_scorer(ConstScorer::new(
inner_scorer,
@@ -71,7 +71,7 @@ impl Weight for ConstWeight {
)))
}

fn explain(&self, reader: &dyn SegmentReader, doc: u32) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: u32) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0)?;
if scorer.seek(doc) != doc {
return Err(TantivyError::InvalidArgument(format!(
@@ -84,7 +84,7 @@ impl Weight for ConstWeight {
Ok(explanation)
}

fn count(&self, reader: &dyn SegmentReader) -> crate::Result<u32> {
fn count(&self, reader: &SegmentReader) -> crate::Result<u32> {
self.weight.count(reader)
}
}

@@ -26,11 +26,11 @@ impl Query for EmptyQuery {
/// It is useful for tests and handling edge cases.
pub struct EmptyWeight;
impl Weight for EmptyWeight {
fn scorer(&self, _reader: &dyn SegmentReader, _boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, _reader: &SegmentReader, _boost: Score) -> crate::Result<Box<dyn Scorer>> {
Ok(box_scorer(EmptyScorer))
}

fn explain(&self, _reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
fn explain(&self, _reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
Err(does_not_match(doc))
}
}
@@ -1,71 +1,48 @@
use crate::docset::{DocSet, SeekDangerResult, TERMINATED};
use crate::docset::{DocSet, TERMINATED};
use crate::query::Scorer;
use crate::{DocId, Score};

/// An exclusion set is a set of documents
/// that should be excluded from a given DocSet.
#[inline]
fn is_within<TDocSetExclude: DocSet>(docset: &mut TDocSetExclude, doc: DocId) -> bool {
docset.doc() <= doc && docset.seek(doc) == doc
}

/// Filters a given `DocSet` by removing the docs from a given `DocSet`.
///
/// It can be a single DocSet, or a Vec of DocSets.
pub trait ExclusionSet: Send {
/// Returns `true` if the given `doc` is in the exclusion set.
fn contains(&mut self, doc: DocId) -> bool;
}

impl<TDocSet: DocSet> ExclusionSet for TDocSet {
#[inline]
fn contains(&mut self, doc: DocId) -> bool {
self.seek_danger(doc) == SeekDangerResult::Found
}
}

impl<TDocSet: DocSet> ExclusionSet for Vec<TDocSet> {
#[inline]
fn contains(&mut self, doc: DocId) -> bool {
for docset in self.iter_mut() {
if docset.seek_danger(doc) == SeekDangerResult::Found {
return true;
}
}
false
}
}

/// Filters a given `DocSet` by removing the docs from an exclusion set.
///
/// The excluding docsets have no impact on scoring.
pub struct Exclude<TDocSet, TExclusionSet> {
/// The excluding docset has no impact on scoring.
pub struct Exclude<TDocSet, TDocSetExclude> {
underlying_docset: TDocSet,
exclusion_set: TExclusionSet,
excluding_docset: TDocSetExclude,
}

impl<TDocSet, TExclusionSet> Exclude<TDocSet, TExclusionSet>
impl<TDocSet, TDocSetExclude> Exclude<TDocSet, TDocSetExclude>
where
TDocSet: DocSet,
TExclusionSet: ExclusionSet,
TDocSetExclude: DocSet,
{
/// Creates a new `ExcludeScorer`
pub fn new(
mut underlying_docset: TDocSet,
mut exclusion_set: TExclusionSet,
) -> Exclude<TDocSet, TExclusionSet> {
mut excluding_docset: TDocSetExclude,
) -> Exclude<TDocSet, TDocSetExclude> {
while underlying_docset.doc() != TERMINATED {
let target = underlying_docset.doc();
if !exclusion_set.contains(target) {
if !is_within(&mut excluding_docset, target) {
break;
}
underlying_docset.advance();
}
Exclude {
underlying_docset,
exclusion_set,
excluding_docset,
}
}
}

impl<TDocSet, TExclusionSet> DocSet for Exclude<TDocSet, TExclusionSet>
impl<TDocSet, TDocSetExclude> DocSet for Exclude<TDocSet, TDocSetExclude>
where
TDocSet: DocSet,
TExclusionSet: ExclusionSet,
TDocSetExclude: DocSet,
{
fn advance(&mut self) -> DocId {
loop {
@@ -73,7 +50,7 @@ where
if candidate == TERMINATED {
return TERMINATED;
}
if !self.exclusion_set.contains(candidate) {
if !is_within(&mut self.excluding_docset, candidate) {
return candidate;
}
}
@@ -84,7 +61,7 @@ where
if candidate == TERMINATED {
return TERMINATED;
}
if !self.exclusion_set.contains(candidate) {
if !is_within(&mut self.excluding_docset, candidate) {
return candidate;
}
self.advance()
@@ -102,10 +79,10 @@ where
}
}

impl<TScorer, TExclusionSet> Scorer for Exclude<TScorer, TExclusionSet>
impl<TScorer, TDocSetExclude> Scorer for Exclude<TScorer, TDocSetExclude>
where
TScorer: Scorer,
TExclusionSet: ExclusionSet + 'static,
TDocSetExclude: DocSet + 'static,
{
#[inline]
fn score(&mut self) -> Score {
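As an illustrative aside (not part of the diff above): the `Exclude` docset filters a candidate docset against one or more excluding docsets, and the exclusions never contribute to scoring. A minimal, self-contained sketch of that filtering idea in plain Rust follows; `exclude_sorted` is a hypothetical helper, not tantivy's API.

```rust
// Illustrative sketch only: filter a sorted candidate list against one or
// more sorted exclusion lists, in the spirit of the `Exclude` docset above.
// `exclude_sorted` is a hypothetical helper, not tantivy's API.
fn exclude_sorted(candidates: &[u32], exclusions: &[Vec<u32>]) -> Vec<u32> {
    candidates
        .iter()
        .copied()
        // A doc survives only if no exclusion list contains it.
        .filter(|doc| !exclusions.iter().any(|ex| ex.binary_search(doc).is_ok()))
        .collect()
}

fn main() {
    let candidates = [1u32, 3, 5, 7, 9];
    let exclusions = vec![vec![3u32, 9], vec![5]];
    assert_eq!(exclude_sorted(&candidates, &exclusions), vec![1, 7]);
}
```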
@@ -98,7 +98,7 @@ pub struct ExistsWeight {
}

impl Weight for ExistsWeight {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
let fast_field_reader = reader.fast_fields();
let mut column_handles = fast_field_reader.dynamic_column_handles(&self.field_name)?;
if self.field_type == Type::Json && self.json_subpaths {
@@ -165,7 +165,7 @@ impl Weight for ExistsWeight {
Ok(box_scorer(ConstScorer::new(docset, boost)))
}

fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0)?;
if scorer.seek(doc) != doc {
return Err(does_not_match(doc));

@@ -43,7 +43,7 @@ pub use self::boost_query::{BoostQuery, BoostWeight};
pub use self::const_score_query::{ConstScoreQuery, ConstScorer};
pub use self::disjunction_max_query::DisjunctionMaxQuery;
pub use self::empty_query::{EmptyQuery, EmptyScorer, EmptyWeight};
pub use self::exclude::{Exclude, ExclusionSet};
pub use self::exclude::Exclude;
pub use self::exist_query::ExistsQuery;
pub use self::explanation::Explanation;
#[cfg(test)]
@@ -31,7 +31,7 @@ impl PhrasePrefixWeight {
}
}

fn fieldnorm_reader(&self, reader: &dyn SegmentReader) -> crate::Result<FieldNormReader> {
fn fieldnorm_reader(&self, reader: &SegmentReader) -> crate::Result<FieldNormReader> {
let field = self.phrase_terms[0].1.field();
if self.similarity_weight_opt.is_some() {
if let Some(fieldnorm_reader) = reader.fieldnorms_readers().get_field(field)? {
@@ -43,7 +43,7 @@ impl PhrasePrefixWeight {

pub(crate) fn phrase_scorer(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
boost: Score,
) -> crate::Result<Option<Box<dyn Scorer>>> {
let similarity_weight_opt = self
@@ -113,7 +113,7 @@ impl PhrasePrefixWeight {
}

impl Weight for PhrasePrefixWeight {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
if let Some(scorer) = self.phrase_scorer(reader, boost)? {
Ok(scorer)
} else {

@@ -27,7 +27,7 @@ impl PhraseWeight {
}
}

fn fieldnorm_reader(&self, reader: &dyn SegmentReader) -> crate::Result<FieldNormReader> {
fn fieldnorm_reader(&self, reader: &SegmentReader) -> crate::Result<FieldNormReader> {
let field = self.phrase_terms[0].1.field();
if self.similarity_weight_opt.is_some() {
if let Some(fieldnorm_reader) = reader.fieldnorms_readers().get_field(field)? {
@@ -39,7 +39,7 @@ impl PhraseWeight {

pub(crate) fn phrase_scorer(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
boost: Score,
) -> crate::Result<Option<Box<dyn Scorer>>> {
let similarity_weight_opt = self
@@ -74,11 +74,12 @@ impl PhraseWeight {
term_infos.push((offset, term_info));
}

let scorer = inverted_index_reader.new_phrase_scorer(
let scorer = reader.codec().new_phrase_scorer_type_erased(
&term_infos[..],
similarity_weight_opt,
fieldnorm_reader,
self.slop,
&inverted_index_reader,
)?;

Ok(Some(scorer))
@@ -91,7 +92,7 @@ impl PhraseWeight {
}

impl Weight for PhraseWeight {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
if let Some(scorer) = self.phrase_scorer(reader, boost)? {
Ok(scorer)
} else {
@@ -99,7 +100,7 @@ impl Weight for PhraseWeight {
}
}

fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let scorer_opt = self.phrase_scorer(reader, 1.0)?;
if scorer_opt.is_none() {
return Err(does_not_match(doc));
@@ -47,7 +47,7 @@ impl RegexPhraseWeight {
}
}

fn fieldnorm_reader(&self, reader: &dyn SegmentReader) -> crate::Result<FieldNormReader> {
fn fieldnorm_reader(&self, reader: &SegmentReader) -> crate::Result<FieldNormReader> {
if self.similarity_weight_opt.is_some() {
if let Some(fieldnorm_reader) = reader.fieldnorms_readers().get_field(self.field)? {
return Ok(fieldnorm_reader);
@@ -58,7 +58,7 @@ impl RegexPhraseWeight {

pub(crate) fn phrase_scorer(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
boost: Score,
) -> crate::Result<Option<PhraseScorer<UnionType>>> {
let similarity_weight_opt = self
@@ -86,8 +86,7 @@ impl RegexPhraseWeight {
"Phrase query exceeded max expansions {num_terms}"
)));
}
let union =
Self::get_union_from_term_infos(&term_infos, reader, inverted_index.as_ref())?;
let union = Self::get_union_from_term_infos(&term_infos, reader, &inverted_index)?;

posting_lists.push((offset, union));
}
@@ -102,11 +101,13 @@ impl RegexPhraseWeight {

/// Add all docs of the term to the docset
fn add_to_bitset(
inverted_index: &dyn InvertedIndexReader,
inverted_index: &InvertedIndexReader,
term_info: &TermInfo,
doc_bitset: &mut BitSet,
) -> crate::Result<()> {
inverted_index.fill_bitset_for_term(term_info, IndexRecordOption::Basic, doc_bitset)?;
let mut segment_postings =
inverted_index.read_postings_from_terminfo(term_info, IndexRecordOption::Basic)?;
segment_postings.fill_bitset(doc_bitset);
Ok(())
}

@@ -166,8 +167,8 @@ impl RegexPhraseWeight {
/// Use Roaring Bitmaps for sparse terms. The full bitvec is main memory consumer currently.
pub(crate) fn get_union_from_term_infos(
term_infos: &[TermInfo],
reader: &dyn SegmentReader,
inverted_index: &dyn InvertedIndexReader,
reader: &SegmentReader,
inverted_index: &InvertedIndexReader,
) -> crate::Result<UnionType> {
let max_doc = reader.max_doc();

@@ -261,7 +262,7 @@ impl RegexPhraseWeight {
}

impl Weight for RegexPhraseWeight {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
if let Some(scorer) = self.phrase_scorer(reader, boost)? {
Ok(box_scorer(scorer))
} else {
@@ -269,7 +270,7 @@ impl Weight for RegexPhraseWeight {
}
}

fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let scorer_opt = self.phrase_scorer(reader, 1.0)?;
if scorer_opt.is_none() {
return Err(does_not_match(doc));
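As an illustrative aside (not part of the diff above): the `add_to_bitset` change swaps a dedicated `fill_bitset_for_term` call for reading a posting list and filling a bitset from its doc ids. A minimal sketch of the fill-a-bitset-from-doc-ids idea follows; `SimpleBitSet` is a hypothetical stand-in, not tantivy's `BitSet`.

```rust
// Illustrative sketch only: materialize a posting list (sorted doc ids) into
// a dense bitset, analogous to the read-postings-then-fill_bitset pattern above.
// `SimpleBitSet` is a hypothetical stand-in, not tantivy's `BitSet`.
struct SimpleBitSet {
    words: Vec<u64>,
}

impl SimpleBitSet {
    fn with_max_value(max_value: u32) -> Self {
        SimpleBitSet { words: vec![0u64; (max_value as usize + 63) / 64] }
    }
    fn insert(&mut self, doc: u32) {
        self.words[(doc / 64) as usize] |= 1u64 << (doc % 64);
    }
    fn contains(&self, doc: u32) -> bool {
        self.words[(doc / 64) as usize] & (1u64 << (doc % 64)) != 0
    }
}

// Every doc id emitted by the "posting list" gets its bit set.
fn fill_bitset(postings: &[u32], bitset: &mut SimpleBitSet) {
    for &doc in postings {
        bitset.insert(doc);
    }
}

fn main() {
    let mut bitset = SimpleBitSet::with_max_value(100);
    fill_bitset(&[2, 17, 64, 99], &mut bitset);
    assert!(bitset.contains(64) && !bitset.contains(3));
}
```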
@@ -146,7 +146,7 @@ pub trait Query: QueryClone + Send + Sync + downcast_rs::Downcast + fmt::Debug {
let weight = self.weight(EnableScoring::disabled_from_searcher(searcher))?;
let mut result = 0;
for reader in searcher.segment_readers() {
result += weight.count(reader.as_ref())? as usize;
result += weight.count(reader)? as usize;
}
Ok(result)
}
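As an illustrative aside (not part of the diff above): `Query::count` simply sums a per-segment count over every segment reader. A toy sketch of that summation follows; `count_in_segment` is a hypothetical stand-in for `Weight::count`, and segments are plain slices of matching doc ids.

```rust
// Illustrative sketch only: sum a per-segment count across segments, as
// `Query::count` does above. `count_in_segment` is a hypothetical stand-in
// for `Weight::count`.
fn total_count<F>(segments: &[Vec<u32>], count_in_segment: F) -> usize
where
    F: Fn(&[u32]) -> u32,
{
    segments
        .iter()
        .map(|segment| count_in_segment(segment.as_slice()) as usize)
        .sum()
}

fn main() {
    let segments = vec![vec![1u32, 5, 8], vec![2], vec![]];
    // Here "counting" is just the number of matching doc ids per segment.
    let total = total_count(&segments, |docs| docs.len() as u32);
    assert_eq!(total, 4);
}
```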
@@ -214,7 +214,7 @@ impl InvertedIndexRangeWeight {
}

impl Weight for InvertedIndexRangeWeight {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
let max_doc = reader.max_doc();
let mut doc_bitset = BitSet::with_max_value(max_doc);

@@ -230,17 +230,15 @@ impl Weight for InvertedIndexRangeWeight {
}
processed_count += 1;
let term_info = term_range.value();
inverted_index.fill_bitset_for_term(
term_info,
IndexRecordOption::Basic,
&mut doc_bitset,
)?;
let mut postings =
inverted_index.read_postings_from_terminfo(term_info, IndexRecordOption::Basic)?;
postings.fill_bitset(&mut doc_bitset);
}
let doc_bitset = BitSetDocSet::from(doc_bitset);
Ok(box_scorer(ConstScorer::new(doc_bitset, boost)))
}

fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0)?;
if scorer.seek(doc) != doc {
return Err(does_not_match(doc));
@@ -681,7 +679,7 @@ mod tests {
.weight(EnableScoring::disabled_from_schema(&schema))
.unwrap();
let range_scorer = range_weight
.scorer(searcher.segment_readers()[0].as_ref(), 1.0f32)
.scorer(&searcher.segment_readers()[0], 1.0f32)
.unwrap();
range_scorer
};
@@ -53,7 +53,7 @@ impl FastFieldRangeWeight {
}

impl Weight for FastFieldRangeWeight {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
// Check if both bounds are Bound::Unbounded
if self.bounds.is_unbounded() {
return Ok(box_scorer(AllScorer::new(reader.max_doc())));
@@ -220,7 +220,7 @@ impl Weight for FastFieldRangeWeight {
}
}

fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0)?;
if scorer.seek(doc) != doc {
return Err(TantivyError::InvalidArgument(format!(
@@ -237,7 +237,7 @@ impl Weight for FastFieldRangeWeight {
///
/// Convert into fast field value space and search.
fn search_on_json_numerical_field(
reader: &dyn SegmentReader,
reader: &SegmentReader,
field_name: &str,
typ: Type,
bounds: BoundsRange<ValueBytes<Vec<u8>>>,
@@ -252,9 +252,7 @@ mod tests {
let mut block_max_scores_b = vec![];
let mut docs = vec![];
{
let mut term_scorer = term_weight
.term_scorer_for_test(reader.as_ref(), 1.0)
.unwrap();
let mut term_scorer = term_weight.term_scorer_for_test(reader, 1.0).unwrap();
while term_scorer.doc() != TERMINATED {
let mut score = term_scorer.score();
docs.push(term_scorer.doc());
@@ -268,9 +266,7 @@ mod tests {
}
}
{
let mut term_scorer = term_weight
.term_scorer_for_test(reader.as_ref(), 1.0)
.unwrap();
let mut term_scorer = term_weight.term_scorer_for_test(reader, 1.0).unwrap();
for d in docs {
let block_max_score = term_scorer.seek_block_max(d);
block_max_scores_b.push(block_max_score);
@@ -33,11 +33,11 @@ impl TermOrEmptyOrAllScorer {
}

impl Weight for TermWeight {
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
Ok(self.specialized_scorer(reader, boost)?.into_boxed_scorer())
}

fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
match self.specialized_scorer(reader, 1.0)? {
TermOrEmptyOrAllScorer::TermScorer(mut term_scorer) => {
if term_scorer.doc() > doc || term_scorer.seek(doc) != doc {
@@ -53,7 +53,7 @@ impl Weight for TermWeight {
}
}

fn count(&self, reader: &dyn SegmentReader) -> crate::Result<u32> {
fn count(&self, reader: &SegmentReader) -> crate::Result<u32> {
if let Some(alive_bitset) = reader.alive_bitset() {
Ok(self.scorer(reader, 1.0)?.count(alive_bitset))
} else {
@@ -68,7 +68,7 @@ impl Weight for TermWeight {
/// `DocSet` and push the scored documents to the collector.
fn for_each(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
callback: &mut dyn FnMut(DocId, Score),
) -> crate::Result<()> {
match self.specialized_scorer(reader, 1.0)? {
@@ -87,7 +87,7 @@ impl Weight for TermWeight {
/// `DocSet` and push the scored documents to the collector.
fn for_each_no_score(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
callback: &mut dyn FnMut(&[DocId]),
) -> crate::Result<()> {
match self.specialized_scorer(reader, 1.0)? {
@@ -118,13 +118,15 @@ impl Weight for TermWeight {
fn for_each_pruning(
&self,
threshold: Score,
reader: &dyn SegmentReader,
reader: &SegmentReader,
callback: &mut dyn FnMut(DocId, Score) -> Score,
) -> crate::Result<()> {
let specialized_scorer = self.specialized_scorer(reader, 1.0)?;
match specialized_scorer {
TermOrEmptyOrAllScorer::TermScorer(term_scorer) => {
reader.for_each_pruning(threshold, term_scorer, callback);
reader
.codec()
.for_each_pruning(threshold, term_scorer, callback);
}
TermOrEmptyOrAllScorer::Empty => {}
TermOrEmptyOrAllScorer::AllMatch(_) => {
@@ -162,7 +164,7 @@ impl TermWeight {
#[cfg(test)]
pub(crate) fn term_scorer_for_test(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
boost: Score,
) -> Option<super::TermScorer> {
let scorer = self.specialized_scorer(reader, boost).unwrap();
@@ -177,7 +179,7 @@ impl TermWeight {

fn specialized_scorer(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
boost: Score,
) -> crate::Result<TermOrEmptyOrAllScorer> {
let field = self.term.field();
@@ -207,10 +209,7 @@ impl TermWeight {
Ok(TermOrEmptyOrAllScorer::TermScorer(term_scorer))
}

fn fieldnorm_reader(
&self,
segment_reader: &dyn SegmentReader,
) -> crate::Result<FieldNormReader> {
fn fieldnorm_reader(&self, segment_reader: &SegmentReader) -> crate::Result<FieldNormReader> {
if self.scoring_enabled {
if let Some(field_norm_reader) = segment_reader
.fieldnorms_readers()

@@ -275,8 +275,8 @@ where
let mut is_hit = false;
let mut min_new_target = TERMINATED;

for docset in self.scorers.iter_mut() {
match docset.seek_danger(target) {
for scorer in self.scorers.iter_mut() {
match scorer.seek_danger(target) {
SeekDangerResult::Found => {
is_hit = true;
break;
@@ -32,10 +32,10 @@ pub trait Weight: Send + Sync + 'static {
/// `boost` is a multiplier to apply to the score.
///
/// See [`Query`](crate::query::Query).
fn scorer(&self, reader: &dyn SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>>;
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>>;

/// Returns an [`Explanation`] for the given document.
fn explain(&self, reader: &dyn SegmentReader, doc: DocId) -> crate::Result<Explanation> {
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0)?;
if scorer.doc() > doc || scorer.seek(doc) != doc {
return Err(does_not_match(doc));
@@ -44,7 +44,7 @@ pub trait Weight: Send + Sync + 'static {
}

/// Returns the number documents within the given [`SegmentReader`].
fn count(&self, reader: &dyn SegmentReader) -> crate::Result<u32> {
fn count(&self, reader: &SegmentReader) -> crate::Result<u32> {
let mut scorer = self.scorer(reader, 1.0)?;
if let Some(alive_bitset) = reader.alive_bitset() {
Ok(scorer.count(alive_bitset))
@@ -57,7 +57,7 @@ pub trait Weight: Send + Sync + 'static {
/// `DocSet` and push the scored documents to the collector.
fn for_each(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
callback: &mut dyn FnMut(DocId, Score),
) -> crate::Result<()> {
let mut scorer = self.scorer(reader, 1.0)?;
@@ -69,7 +69,7 @@ pub trait Weight: Send + Sync + 'static {
/// `DocSet` and push the scored documents to the collector.
fn for_each_no_score(
&self,
reader: &dyn SegmentReader,
reader: &SegmentReader,
callback: &mut dyn FnMut(&[DocId]),
) -> crate::Result<()> {
let mut docset = self.scorer(reader, 1.0)?;
@@ -92,7 +92,7 @@ pub trait Weight: Send + Sync + 'static {
fn for_each_pruning(
&self,
threshold: Score,
reader: &dyn SegmentReader,
reader: &SegmentReader,
callback: &mut dyn FnMut(DocId, Score) -> Score,
) -> crate::Result<()> {
let mut scorer = self.scorer(reader, 1.0)?;
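As an illustrative aside (not part of the diff above): `for_each_pruning` reports documents whose score clears a caller-supplied threshold, and the callback returns a new (usually higher) threshold so later documents can be skipped. A small sketch of that contract follows, using plain slices instead of a real scorer; names and values here are hypothetical.

```rust
// Illustrative sketch only: the threshold-driven collection loop that
// `for_each_pruning` describes above. Doc ids and scores are plain tuples;
// the callback returns the new threshold, as in the trait signature.
fn for_each_pruning_sketch(
    mut threshold: f32,
    docs_and_scores: &[(u32, f32)],
    callback: &mut dyn FnMut(u32, f32) -> f32,
) {
    for &(doc, score) in docs_and_scores {
        // Only documents scoring above the current threshold are reported;
        // the callback may raise the threshold to prune more aggressively.
        if score > threshold {
            threshold = callback(doc, score);
        }
    }
}

fn main() {
    let hits = [(1u32, 0.2f32), (4, 0.9), (9, 0.5), (12, 1.3)];
    let mut collected = Vec::new();
    for_each_pruning_sketch(0.4, &hits, &mut |doc, score| {
        collected.push((doc, score));
        // Keep only hits within a margin of the best hit seen so far.
        score - 0.5
    });
    println!("{:?}", collected); // [(4, 0.9), (9, 0.5), (12, 1.3)]
}
```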
@@ -190,26 +190,19 @@ impl<C: Codec> InnerIndexReader<C> {
///
/// This function acquires a lock to prevent GC from removing files
/// as we are opening our index.
fn open_segment_readers(index: &Index<C>) -> crate::Result<Vec<Arc<dyn SegmentReader>>> {
fn open_segment_readers(index: &Index<C>) -> crate::Result<Vec<SegmentReader>> {
// Prevents segment files from getting deleted while we are in the process of opening them
let _meta_lock = index.directory().acquire_lock(&META_LOCK)?;
let searchable_segments = index.searchable_segments()?;
let segment_readers = searchable_segments
.iter()
.map(|segment| {
segment.index().codec().open_segment_reader(
segment.index().directory(),
segment.meta(),
segment.schema(),
None,
)
})
.map(SegmentReader::open)
.collect::<crate::Result<_>>()?;
Ok(segment_readers)
}

fn track_segment_readers_in_inventory(
segment_readers: &[Arc<dyn SegmentReader>],
segment_readers: &[SegmentReader],
searcher_generation_counter: &Arc<AtomicU64>,
searcher_generation_inventory: &Inventory<SearcherGeneration>,
) -> TrackedObject<SearcherGeneration> {