Compare commits

...

23 Commits

Author SHA1 Message Date
Paul Masurel
72fc1c10a6 rebasing, fixing some tests 2019-09-03 10:07:03 +09:00
Paul Masurel
b28654c3fb crate division 2019-09-03 09:53:06 +09:00
Paul Masurel
5196ca41d8 Small code clean up 2019-09-03 09:22:32 +09:00
dependabot-preview[bot]
4959e06151 Update once_cell requirement from 0.2 to 1.0 (#643)
Updates the requirements on [once_cell](https://github.com/matklad/once_cell) to permit the latest version.
- [Release notes](https://github.com/matklad/once_cell/releases)
- [Changelog](https://github.com/matklad/once_cell/blob/master/CHANGELOG.md)
- [Commits](https://github.com/matklad/once_cell/compare/v0.2.0...v1.0.2)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-09-03 07:00:45 +09:00
Paul Masurel
c1635c13f6 RegexQuery performance: make it possible to cache Regexes - remastered by fulmicoton (Closes #639) (#641)
* small docs cleanup

* only compile a regex once per RegexQuery

Building a `Regex` is an expensive operation. Users of `RegexQuery`
need to be able to cache and reuse regexes when searching across multiple fields.

This is the first step towards allowing that: we can store the `Regex`
directly in the `RegexQuery`, instead of the string pattern.

* RegexQuery: account for possible failure in the constructor

When building a regex from a str pattern, we have to account for the
possibility that the pattern is invalid. Before the previous commit, the
failure would happen in the `specialized_weight` method. Now that we
store a compiled `Regex` in `RegexQuery`, `specialized_weight` doesn't
fail anymore, and we can fail early while constructing `RegexQuery` if
the pattern is invalid.

This is a breaking change for users of `RegexQuery::new`.

* add RegexQuery::from_regex method

This builds a `RegexQuery` from an already compiled `Regex`. The
`Into<Arc<Regex>>` bound lets the caller pass either a plain `Regex` or an
`Arc<Regex>`, in case the regex needs to be cached and shared on the
caller's side.

* Using an Arc in AutomatonWeight

Closes #639
2019-08-22 16:14:01 +09:00
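The `Into<Arc<Regex>>` detail above is a small, reusable pattern. Below is a minimal, self-contained sketch of it using a stand-in `CompiledRegex` type rather than tantivy's actual `Regex`/`RegexQuery` (whose exact constructor signatures are not shown in this compare view): a constructor bounded on `Into<Arc<T>>` accepts either an owned value or an `Arc` the caller already shares between several queries.

```rust
use std::sync::Arc;

// Stand-in for an expensive-to-build compiled regex.
struct CompiledRegex(String);

// Stand-in for a query that stores the compiled regex behind an `Arc`.
struct RegexishQuery {
    regex: Arc<CompiledRegex>,
}

impl RegexishQuery {
    // `Into<Arc<CompiledRegex>>` accepts both `CompiledRegex` and
    // `Arc<CompiledRegex>`, so callers choose whether to share it.
    fn from_regex<R: Into<Arc<CompiledRegex>>>(regex: R) -> RegexishQuery {
        RegexishQuery { regex: regex.into() }
    }
}

fn main() {
    // Hand over ownership of a freshly built regex...
    let _single = RegexishQuery::from_regex(CompiledRegex("sea.*whale".into()));

    // ...or compile once and share the same regex between several queries
    // (e.g. the same pattern applied to a "title" and a "body" field).
    let shared = Arc::new(CompiledRegex("sea.*whale".into()));
    let _title_query = RegexishQuery::from_regex(shared.clone());
    let body_query = RegexishQuery::from_regex(shared);
    println!("shared pattern: {}", body_query.regex.0);
}
```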
Paul Masurel
135e0ea2e9 Expose new segment meta from Index (#637) 2019-08-19 10:39:15 +09:00
Paul Masurel
f283bfd7ab Added segmentid_from_string (#636) 2019-08-19 10:37:30 +09:00
Joshua Dutton
9f74786db2 Update import statements in examples, doctests (#633)
Update import statements to edition 2018, including removing
`extern crate` and `#[macro_use]`. Alphabetize the statements.
2019-08-19 07:26:35 +09:00
Joshua Dutton
32e5d7a0c7 Fix trait object in doctest (#635) 2019-08-19 07:25:00 +09:00
Joshua Dutton
84c615cff1 Fixing typos (#634) 2019-08-19 07:24:05 +09:00
Paul Masurel
039c0a0863 Introducing a wrapper struct instead of Boxed<BoxableTokenizer> (#631)
Closes #629
2019-08-15 16:37:04 +09:00
Paul Masurel
b3b0138b82 Change for tantivy-py
Schema.convert_named_doc
Better Debug string for Terms and TermQueries
2019-08-14 17:44:25 +09:00
petr-tik
ea56160cdc Added cargo-fmt to CI runs (#627)
* Added cargo-fmt to CI runs

Closes #625

* Remove fmt from appveyor builds

Windows seems to have issues with installing components through rustup.

Formatting checks should be equally informative regardless of the OS,
so it is best to keep them on Linux on Travis.
2019-08-12 08:25:47 +09:00
petr-tik
028b0a749c Elastic unbounded range query (#624)
* Tidy up

fmt

remove unnecessary -> Result<()> followed by run.unwrap() in a test

* Adding support for elasticsearch-style unbounded queries

Extend `UserInputBound` to include `Unbounded`, so we can reuse the formatting and
the internal query format.

* Still working on elastic-style range queries

Fixes #498

Merge the elastic_range into range

Reformat to make code easier to follow, use optional() macro to return Some

* Fixed bugs

Made the range parser insensitive to whitespace between the ":" and the range.

Removed optional parsing of field.

Added a unit test for the range parser.

Derived PartialEq to compare the results of parsing as structs, instead of
strings. Found a bug with that unit test: "*}" was parsed as a
`UserInputBound::Exclusive` instead of `UserInputBound::Unbounded`. Added an early
detection-and-return for `*` in the original range parser.

* Correct failing test

Assume that we will use "{*" for Unbounded ranges

* Add a note in the changelog

cargo-fmt

* Moved parentheses to a new line to make the nested if-else more visible
2019-08-12 08:24:47 +09:00
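A hedged sketch of the Elastic-style unbounded range syntax introduced above, using the query string from the changelog hunk further down ("title:>hello"); the index and parser setup follow the `basic_search` example in this diff, and whether this exact string parses on a given build is assumed from the changelog entry rather than verified here.

```rust
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, TEXT};
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    // Schema and index setup as in the basic_search example below.
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT);
    let schema = schema_builder.build();
    let index = Index::create_in_ram(schema);

    // Elastic-style, lower-bounded-only range: every "title" term
    // greater than "hello". The syntax comes from the changelog entry.
    let query_parser = QueryParser::for_index(&index, vec![title]);
    let _query = query_parser.parse_query("title:>hello")?;
    Ok(())
}
```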
Paul Masurel
941f06eb9f Added Schema.from_named_doc 2019-08-11 16:50:32 +09:00
Paul Masurel
04832a86eb WTF is this file doing here (#622) 2019-08-08 21:54:10 +09:00
fdb-hiroshima
beb8e990cd fix parsing neg float in range query (#621)
fix #620
2019-08-08 20:41:04 +09:00
Paul Masurel
001af3876f cargo fmt 2019-08-08 18:07:19 +09:00
Paul Masurel
f428f344da Various bugfix in the query parser (#619) 2019-08-08 17:48:21 +09:00
Paul Masurel
143f78eced Trying to fix #609 (#616) 2019-08-06 20:33:30 +09:00
Kornel
754b55eee5 Bump deps (#613)
* Bump crossbeam

* Warnings--

* Remove outdated tempdir
2019-08-05 22:21:22 +09:00
Paul Masurel
280ea1209c Changes required for python binding (#610) 2019-08-01 17:26:21 +09:00
petr-tik
0154dbe477 Replace unwrap with match and proper Error handling (#606)
* Replace unwrap with match and proper Error handling

* Replaced 'magic' values with a documented variable

Didn't like the unexplained 0..3 range; thought it was best as a named variable.

Calculating the Levenshtein distance is expensive, so it is best to explain why we
keep it low.
2019-07-31 08:16:02 +09:00
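As a generic illustration of the unwrap-to-match pattern this commit applies (the actual call sites and error types in the change are not shown here):

```rust
use std::collections::HashMap;

// Before: callers did `scores.get(name).unwrap()` and panicked on a
// missing key. After: the absence is handled and reported explicitly.
fn score_for(scores: &HashMap<String, u32>, name: &str) -> Result<u32, String> {
    match scores.get(name) {
        Some(score) => Ok(*score),
        None => Err(format!("no score recorded for {:?}", name)),
    }
}

fn main() {
    let mut scores = HashMap::new();
    scores.insert("brown".to_string(), 3);
    assert_eq!(score_for(&scores, "brown"), Ok(3));
    assert!(score_for(&scores, "fox").is_err());
}
```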
96 changed files with 1307 additions and 786 deletions

View File

@@ -47,6 +47,7 @@ matrix:
before_install: before_install:
- set -e - set -e
- rustup self update - rustup self update
- rustup component add rustfmt
install: install:
- sh ci/install.sh - sh ci/install.sh
@@ -60,6 +61,7 @@ before_script:
script: script:
- bash ci/script.sh - bash ci/script.sh
- cargo fmt --all -- --check
before_deploy: before_deploy:
- sh ci/before_deploy.sh - sh ci/before_deploy.sh

View File

@@ -2,6 +2,18 @@ Tantivy 0.11.0
===================== =====================
- Added f64 field. Internally reuse u64 code the same way i64 does (@fdb-hiroshima) - Added f64 field. Internally reuse u64 code the same way i64 does (@fdb-hiroshima)
- Various bugfixes in the query parser.
- Better handling of hyphens in query parser. (#609)
- Better handling of whitespaces.
- Closes #498 - add support for Elastic-style unbounded range queries for alphanumeric types eg. "title:>hello", "weight:>=70.5", "height:<200" (@petr-tik)
- API change around `Box<BoxableTokenizer>`. See detail in #629
- Avoid rebuilding Regex automaton whenever a regex query is reused. #630 (@brainlock)
## How to update?
- `Box<dyn BoxableTokenizer>` has been replaced by a `BoxedTokenizer` struct.
- Regex are now compiled when the `RegexQuery` instance is built. As a result, it can now return
an error and handling the `Result` is required.
Tantivy 0.10.1 Tantivy 0.10.1
===================== =====================
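A minimal sketch of the first upgrade note, assuming `BoxedTokenizer` is importable from `tantivy::tokenizer`; the `tokenizer_for_field` signature is taken from the `Index` diff further down.

```rust
use tantivy::schema::{Schema, TEXT};
use tantivy::tokenizer::BoxedTokenizer; // import path assumed, see note above
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT);
    let schema = schema_builder.build();
    let index = Index::create_in_ram(schema);

    // 0.11: a plain wrapper struct behind a `Result`, instead of the
    // old `Box<dyn BoxableTokenizer>` return type.
    let _tokenizer: BoxedTokenizer = index.tokenizer_for_field(title)?;
    Ok(())
}
```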

View File

@@ -1,6 +1,6 @@
[package] [package]
name = "tantivy" name = "tantivy"
version = "0.10.1" version = "0.11.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"] authors = ["Paul Masurel <paul.masurel@gmail.com>"]
license = "MIT" license = "MIT"
categories = ["database-implementations", "data-structures"] categories = ["database-implementations", "data-structures"]
@@ -15,7 +15,7 @@ edition = "2018"
[dependencies] [dependencies]
base64 = "0.10.0" base64 = "0.10.0"
byteorder = "1.0" byteorder = "1.0"
once_cell = "0.2" once_cell = "1.0"
regex = "1.0" regex = "1.0"
tantivy-fst = "0.1" tantivy-fst = "0.1"
memmap = {version = "0.7", optional=true} memmap = {version = "0.7", optional=true}
@@ -25,7 +25,6 @@ atomicwrites = {version="0.2.2", optional=true}
tempfile = "3.0" tempfile = "3.0"
log = "0.4" log = "0.4"
combine = ">=3.6.0,<4.0.0" combine = ">=3.6.0,<4.0.0"
tempdir = "0.3"
serde = "1.0" serde = "1.0"
serde_derive = "1.0" serde_derive = "1.0"
serde_json = "1.0" serde_json = "1.0"
@@ -36,7 +35,7 @@ levenshtein_automata = {version="0.1", features=["fst_automaton"]}
notify = {version="4", optional=true} notify = {version="4", optional=true}
bit-set = "0.5" bit-set = "0.5"
uuid = { version = "0.7.2", features = ["v4", "serde"] } uuid = { version = "0.7.2", features = ["v4", "serde"] }
crossbeam = "0.5" crossbeam = "0.7"
futures = "0.1" futures = "0.1"
futures-cpupool = "0.1" futures-cpupool = "0.1"
owning_ref = "0.4" owning_ref = "0.4"
@@ -55,6 +54,10 @@ murmurhash32 = "0.2"
chrono = "0.4" chrono = "0.4"
smallvec = "0.6" smallvec = "0.6"
tantivy-schema = {path= "./tantivy-schema"}
tantivy-tokenizer = {path= "./tantivy-tokenizer"}
tantivy-common = {path="./tantivy-common"}
[target.'cfg(windows)'.dependencies] [target.'cfg(windows)'.dependencies]
winapi = "0.3" winapi = "0.3"
@@ -87,7 +90,6 @@ travis-ci = { repository = "tantivy-search/tantivy" }
[dev-dependencies.fail] [dev-dependencies.fail]
features = ["failpoints"] features = ["failpoints"]
# Following the "fail" crate best practises, we isolate # Following the "fail" crate best practises, we isolate
# tests that define specific behavior in fail check points # tests that define specific behavior in fail check points
# in a different binary. # in a different binary.
@@ -98,4 +100,8 @@ features = ["failpoints"]
[[test]] [[test]]
name = "failpoints" name = "failpoints"
path = "tests/failpoints/mod.rs" path = "tests/failpoints/mod.rs"
required-features = ["fail/failpoints"] required-features = ["fail/failpoints"]
[workspace]
members = ["tantivy-schema", "tantivy-common", "tantivy-tokenizer"]

View File

@@ -5,26 +5,23 @@
// //
// We will : // We will :
// - define our schema // - define our schema
// = create an index in a directory // - create an index in a directory
// - index few documents in our index // - index a few documents into our index
// - search for the best document matchings "sea whale" // - search for the best document matching a basic query
// - retrieve the best document original content. // - retrieve the best document's original content.
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs; use tantivy::collector::TopDocs;
use tantivy::query::QueryParser; use tantivy::query::QueryParser;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::Index; use tantivy::{doc, Index, ReloadPolicy};
use tantivy::ReloadPolicy; use tempfile::TempDir;
use tempdir::TempDir;
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the // Let's create a temporary directory for the
// sake of this example // sake of this example
let index_path = TempDir::new("tantivy_example_dir")?; let index_path = TempDir::new()?;
// # Defining the schema // # Defining the schema
// //
@@ -33,7 +30,7 @@ fn main() -> tantivy::Result<()> {
// and for each field, its type and "the way it should // and for each field, its type and "the way it should
// be indexed". // be indexed".
// first we need to define a schema ... // First we need to define a schema ...
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
// Our first field is title. // Our first field is title.
@@ -48,7 +45,7 @@ fn main() -> tantivy::Result<()> {
// //
// `STORED` means that the field will also be saved // `STORED` means that the field will also be saved
// in a compressed, row-oriented key-value store. // in a compressed, row-oriented key-value store.
// This store is useful to reconstruct the // This store is useful for reconstructing the
// documents that were selected during the search phase. // documents that were selected during the search phase.
schema_builder.add_text_field("title", TEXT | STORED); schema_builder.add_text_field("title", TEXT | STORED);
@@ -57,8 +54,7 @@ fn main() -> tantivy::Result<()> {
// need to be able to be able to retrieve it // need to be able to be able to retrieve it
// for our application. // for our application.
// //
// We can make our index lighter and // We can make our index lighter by omitting the `STORED` flag.
// by omitting `STORED` flag.
schema_builder.add_text_field("body", TEXT); schema_builder.add_text_field("body", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
@@ -71,7 +67,7 @@ fn main() -> tantivy::Result<()> {
// with our schema in the directory. // with our schema in the directory.
let index = Index::create_in_dir(&index_path, schema.clone())?; let index = Index::create_in_dir(&index_path, schema.clone())?;
// To insert document we need an index writer. // To insert a document we will need an index writer.
// There must be only one writer at a time. // There must be only one writer at a time.
// This single `IndexWriter` is already // This single `IndexWriter` is already
// multithreaded. // multithreaded.
@@ -149,8 +145,8 @@ fn main() -> tantivy::Result<()> {
// At this point our documents are not searchable. // At this point our documents are not searchable.
// //
// //
// We need to call .commit() explicitly to force the // We need to call `.commit()` explicitly to force the
// index_writer to finish processing the documents in the queue, // `index_writer` to finish processing the documents in the queue,
// flush the current index to the disk, and advertise // flush the current index to the disk, and advertise
// the existence of new documents. // the existence of new documents.
// //
@@ -162,14 +158,14 @@ fn main() -> tantivy::Result<()> {
// persistently indexed. // persistently indexed.
// //
// In the scenario of a crash or a power failure, // In the scenario of a crash or a power failure,
// tantivy behaves as if has rolled back to its last // tantivy behaves as if it has rolled back to its last
// commit. // commit.
// # Searching // # Searching
// //
// ### Searcher // ### Searcher
// //
// A reader is required to get search the index. // A reader is required first in order to search an index.
// It acts as a `Searcher` pool that reloads itself, // It acts as a `Searcher` pool that reloads itself,
// depending on a `ReloadPolicy`. // depending on a `ReloadPolicy`.
// //
@@ -185,7 +181,7 @@ fn main() -> tantivy::Result<()> {
// We now need to acquire a searcher. // We now need to acquire a searcher.
// //
// A searcher points to snapshotted, immutable version of the index. // A searcher points to a snapshotted, immutable version of the index.
// //
// Some search experience might require more than // Some search experience might require more than
// one query. Using the same searcher ensures that all of these queries will run on the // one query. Using the same searcher ensures that all of these queries will run on the
@@ -205,7 +201,7 @@ fn main() -> tantivy::Result<()> {
// in both title and body. // in both title and body.
let query_parser = QueryParser::for_index(&index, vec![title, body]); let query_parser = QueryParser::for_index(&index, vec![title, body]);
// QueryParser may fail if the query is not in the right // `QueryParser` may fail if the query is not in the right
// format. For user facing applications, this can be a problem. // format. For user facing applications, this can be a problem.
// A ticket has been opened regarding this problem. // A ticket has been opened regarding this problem.
let query = query_parser.parse_query("sea whale")?; let query = query_parser.parse_query("sea whale")?;
@@ -221,7 +217,7 @@ fn main() -> tantivy::Result<()> {
// //
// We are not interested in all of the documents but // We are not interested in all of the documents but
// only in the top 10. Keeping track of our top 10 best documents // only in the top 10. Keeping track of our top 10 best documents
// is the role of the TopDocs. // is the role of the `TopDocs` collector.
// We can now perform our query. // We can now perform our query.
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?; let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;

View File

@@ -9,15 +9,12 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::{Collector, SegmentCollector}; use tantivy::collector::{Collector, SegmentCollector};
use tantivy::fastfield::FastFieldReader; use tantivy::fastfield::FastFieldReader;
use tantivy::query::QueryParser; use tantivy::query::QueryParser;
use tantivy::schema::Field; use tantivy::schema::Field;
use tantivy::schema::{Schema, FAST, INDEXED, TEXT}; use tantivy::schema::{Schema, FAST, INDEXED, TEXT};
use tantivy::SegmentReader; use tantivy::{doc, Index, SegmentReader, TantivyError};
use tantivy::{Index, TantivyError};
#[derive(Default)] #[derive(Default)]
struct Stats { struct Stats {

View File

@@ -2,14 +2,11 @@
// //
// In this example, we'll see how to define a tokenizer pipeline // In this example, we'll see how to define a tokenizer pipeline
// by aligning a bunch of `TokenFilter`. // by aligning a bunch of `TokenFilter`.
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs; use tantivy::collector::TopDocs;
use tantivy::query::QueryParser; use tantivy::query::QueryParser;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::tokenizer::NgramTokenizer; use tantivy::tokenizer::NgramTokenizer;
use tantivy::Index; use tantivy::{doc, Index};
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// # Defining the schema // # Defining the schema

View File

@@ -8,13 +8,10 @@
// //
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs; use tantivy::collector::TopDocs;
use tantivy::query::TermQuery; use tantivy::query::TermQuery;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::Index; use tantivy::{doc, Index, IndexReader};
use tantivy::IndexReader;
// A simple helper function to fetch a single document // A simple helper function to fetch a single document
// given its id from our index. // given its id from our index.

View File

@@ -12,17 +12,16 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::FacetCollector; use tantivy::collector::FacetCollector;
use tantivy::query::AllQuery; use tantivy::query::AllQuery;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::Index; use tantivy::{doc, Index};
use tempfile::TempDir;
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the // Let's create a temporary directory for the
// sake of this example // sake of this example
let index_path = TempDir::new("tantivy_facet_example_dir")?; let index_path = TempDir::new()?;
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
schema_builder.add_text_field("name", TEXT | STORED); schema_builder.add_text_field("name", TEXT | STORED);
@@ -74,5 +73,3 @@ fn main() -> tantivy::Result<()> {
Ok(()) Ok(())
} }
use tempdir::TempDir;

View File

@@ -2,14 +2,10 @@
// //
// Below is an example of creating an indexed integer field in your schema // Below is an example of creating an indexed integer field in your schema
// You can use RangeQuery to get a Count of all occurrences in a given range. // You can use RangeQuery to get a Count of all occurrences in a given range.
#[macro_use]
extern crate tantivy;
use tantivy::collector::Count; use tantivy::collector::Count;
use tantivy::query::RangeQuery; use tantivy::query::RangeQuery;
use tantivy::schema::{Schema, INDEXED}; use tantivy::schema::{Schema, INDEXED};
use tantivy::Index; use tantivy::{doc, Index, Result};
use tantivy::Result;
fn run() -> Result<()> { fn run() -> Result<()> {
// For the sake of simplicity, this schema will only have 1 field // For the sake of simplicity, this schema will only have 1 field

View File

@@ -9,11 +9,8 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::Index; use tantivy::{doc, DocId, DocSet, Index, Postings};
use tantivy::{DocId, DocSet, Postings};
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// We first create a schema for the sake of the // We first create a schema for the sake of the

View File

@@ -25,14 +25,11 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use std::sync::{Arc, RwLock}; use std::sync::{Arc, RwLock};
use std::thread; use std::thread;
use std::time::Duration; use std::time::Duration;
use tantivy::schema::{Schema, STORED, TEXT}; use tantivy::schema::{Schema, STORED, TEXT};
use tantivy::Opstamp; use tantivy::{doc, Index, IndexWriter, Opstamp};
use tantivy::{Index, IndexWriter};
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// # Defining the schema // # Defining the schema

View File

@@ -7,19 +7,16 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs; use tantivy::collector::TopDocs;
use tantivy::query::QueryParser; use tantivy::query::QueryParser;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::Index; use tantivy::{doc, Index, Snippet, SnippetGenerator};
use tantivy::{Snippet, SnippetGenerator}; use tempfile::TempDir;
use tempdir::TempDir;
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the // Let's create a temporary directory for the
// sake of this example // sake of this example
let index_path = TempDir::new("tantivy_example_dir")?; let index_path = TempDir::new()?;
// # Defining the schema // # Defining the schema
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();

View File

@@ -11,13 +11,11 @@
// --- // ---
// Importing tantivy... // Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs; use tantivy::collector::TopDocs;
use tantivy::query::QueryParser; use tantivy::query::QueryParser;
use tantivy::schema::*; use tantivy::schema::*;
use tantivy::tokenizer::*; use tantivy::tokenizer::*;
use tantivy::Index; use tantivy::{doc, Index};
fn main() -> tantivy::Result<()> { fn main() -> tantivy::Result<()> {
// this example assumes you understand the content in `basic_search` // this example assumes you understand the content in `basic_search`

View File

@@ -10,12 +10,10 @@ use crate::SegmentReader;
/// documents match the query. /// documents match the query.
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{Index, Result};
/// use tantivy::collector::Count; /// use tantivy::collector::Count;
/// use tantivy::query::QueryParser; /// use tantivy::query::QueryParser;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{doc, Index, Result};
/// ///
/// # fn main() { example().unwrap(); } /// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> { /// fn example() -> Result<()> {

View File

@@ -81,12 +81,10 @@ fn facet_depth(facet_bytes: &[u8]) -> usize {
/// ///
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Facet, Schema, TEXT};
/// use tantivy::{Index, Result};
/// use tantivy::collector::FacetCollector; /// use tantivy::collector::FacetCollector;
/// use tantivy::query::AllQuery; /// use tantivy::query::AllQuery;
/// use tantivy::schema::{Facet, Schema, TEXT};
/// use tantivy::{doc, Index, Result};
/// ///
/// # fn main() { example().unwrap(); } /// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> { /// fn example() -> Result<()> {

View File

@@ -35,7 +35,6 @@ The resulting `Fruit` will then be a typed tuple with each collector's original
in their respective position. in their respective position.
```rust ```rust
# extern crate tantivy;
# use tantivy::schema::*; # use tantivy::schema::*;
# use tantivy::*; # use tantivy::*;
# use tantivy::query::*; # use tantivy::query::*;

View File

@@ -105,12 +105,10 @@ impl<TFruit: Fruit> FruitHandle<TFruit> {
/// [Combining several collectors section of the collector documentation](./index.html#combining-several-collectors). /// [Combining several collectors section of the collector documentation](./index.html#combining-several-collectors).
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{Index, Result};
/// use tantivy::collector::{Count, TopDocs, MultiCollector}; /// use tantivy::collector::{Count, TopDocs, MultiCollector};
/// use tantivy::query::QueryParser; /// use tantivy::query::QueryParser;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{doc, Index, Result};
/// ///
/// # fn main() { example().unwrap(); } /// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> { /// fn example() -> Result<()> {

View File

@@ -13,6 +13,7 @@ use crate::Result;
use crate::Score; use crate::Score;
use crate::SegmentLocalId; use crate::SegmentLocalId;
use crate::SegmentReader; use crate::SegmentReader;
use std::fmt;
/// The Top Score Collector keeps track of the K documents /// The Top Score Collector keeps track of the K documents
/// sorted by their score. /// sorted by their score.
@@ -22,13 +23,10 @@ use crate::SegmentReader;
/// is `O(n log K)`. /// is `O(n log K)`.
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::DocAddress;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{Index, Result};
/// use tantivy::collector::TopDocs; /// use tantivy::collector::TopDocs;
/// use tantivy::query::QueryParser; /// use tantivy::query::QueryParser;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{doc, DocAddress, Index, Result};
/// ///
/// # fn main() { example().unwrap(); } /// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> { /// fn example() -> Result<()> {
@@ -68,6 +66,12 @@ use crate::SegmentReader;
/// ``` /// ```
pub struct TopDocs(TopCollector<Score>); pub struct TopDocs(TopCollector<Score>);
impl fmt::Debug for TopDocs {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "TopDocs({})", self.0.limit())
}
}
impl TopDocs { impl TopDocs {
/// Creates a top score collector, with a number of documents equal to "limit". /// Creates a top score collector, with a number of documents equal to "limit".
/// ///
@@ -80,10 +84,8 @@ impl TopDocs {
/// Set top-K to rank documents by a given fast field. /// Set top-K to rank documents by a given fast field.
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// # use tantivy::schema::{Schema, FAST, TEXT}; /// # use tantivy::schema::{Schema, FAST, TEXT};
/// # use tantivy::{Index, Result, DocAddress}; /// # use tantivy::{doc, Index, Result, DocAddress};
/// # use tantivy::query::{Query, QueryParser}; /// # use tantivy::query::{Query, QueryParser};
/// use tantivy::Searcher; /// use tantivy::Searcher;
/// use tantivy::collector::TopDocs; /// use tantivy::collector::TopDocs;
@@ -121,7 +123,7 @@ impl TopDocs {
/// /// /// ///
/// /// `field` is required to be a FAST field. /// /// `field` is required to be a FAST field.
/// fn docs_sorted_by_rating(searcher: &Searcher, /// fn docs_sorted_by_rating(searcher: &Searcher,
/// query: &Query, /// query: &dyn Query,
/// sort_by_field: Field) /// sort_by_field: Field)
/// -> Result<Vec<(u64, DocAddress)>> { /// -> Result<Vec<(u64, DocAddress)>> {
/// ///
@@ -190,10 +192,8 @@ impl TopDocs {
/// learning-to-rank model over various features /// learning-to-rank model over various features
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// # use tantivy::schema::{Schema, FAST, TEXT}; /// # use tantivy::schema::{Schema, FAST, TEXT};
/// # use tantivy::{Index, DocAddress, DocId, Score}; /// # use tantivy::{doc, Index, DocAddress, DocId, Score};
/// # use tantivy::query::QueryParser; /// # use tantivy::query::QueryParser;
/// use tantivy::SegmentReader; /// use tantivy::SegmentReader;
/// use tantivy::collector::TopDocs; /// use tantivy::collector::TopDocs;
@@ -295,10 +295,8 @@ impl TopDocs {
/// # Example /// # Example
/// ///
/// ```rust /// ```rust
/// # #[macro_use]
/// # extern crate tantivy;
/// # use tantivy::schema::{Schema, FAST, TEXT}; /// # use tantivy::schema::{Schema, FAST, TEXT};
/// # use tantivy::{Index, DocAddress, DocId}; /// # use tantivy::{doc, Index, DocAddress, DocId};
/// # use tantivy::query::QueryParser; /// # use tantivy::query::QueryParser;
/// use tantivy::SegmentReader; /// use tantivy::SegmentReader;
/// use tantivy::collector::TopDocs; /// use tantivy::collector::TopDocs;
@@ -584,7 +582,7 @@ mod tests {
query_field: Field, query_field: Field,
schema: Schema, schema: Schema,
mut doc_adder: impl FnMut(&mut IndexWriter) -> (), mut doc_adder: impl FnMut(&mut IndexWriter) -> (),
) -> (Index, Box<Query>) { ) -> (Index, Box<dyn Query>) {
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();

View File

@@ -173,11 +173,11 @@ impl Index {
} }
/// Helper to access the tokenizer associated to a specific field. /// Helper to access the tokenizer associated to a specific field.
pub fn tokenizer_for_field(&self, field: Field) -> Result<Box<dyn BoxedTokenizer>> { pub fn tokenizer_for_field(&self, field: Field) -> Result<BoxedTokenizer> {
let field_entry = self.schema.get_field_entry(field); let field_entry = self.schema.get_field_entry(field);
let field_type = field_entry.field_type(); let field_type = field_entry.field_type();
let tokenizer_manager: &TokenizerManager = self.tokenizers(); let tokenizer_manager: &TokenizerManager = self.tokenizers();
let tokenizer_name_opt: Option<Box<dyn BoxedTokenizer>> = match field_type { let tokenizer_name_opt: Option<BoxedTokenizer> = match field_type {
FieldType::Str(text_options) => text_options FieldType::Str(text_options) => text_options
.get_indexing_options() .get_indexing_options()
.map(|text_indexing_options| text_indexing_options.tokenizer().to_string()) .map(|text_indexing_options| text_indexing_options.tokenizer().to_string())
@@ -216,8 +216,22 @@ impl Index {
Index::open(mmap_directory) Index::open(mmap_directory)
} }
pub(crate) fn inventory(&self) -> &SegmentMetaInventory { /// Returns the list of the segment metas tracked by the index.
&self.inventory ///
/// Such segments can of course be part of the index,
/// but also they could be segments being currently built or in the middle of a merge
/// operation.
pub fn list_all_segment_metas(&self) -> Vec<SegmentMeta> {
self.inventory.all()
}
/// Creates a new segment_meta (Advanced user only).
///
/// As long as the `SegmentMeta` lives, the files associated with the
/// `SegmentMeta` are guaranteed to not be garbage collected, regardless of
/// whether the segment is recorded as part of the index or not.
pub fn new_segment_meta(&self, segment_id: SegmentId, max_doc: u32) -> SegmentMeta {
self.inventory.new_segment_meta(segment_id, max_doc)
} }
/// Open the index using the provided directory /// Open the index using the provided directory
@@ -459,13 +473,13 @@ mod tests {
use super::*; use super::*;
use std::path::PathBuf; use std::path::PathBuf;
use tempdir::TempDir; use tempfile::TempDir;
#[test] #[test]
fn test_index_on_commit_reload_policy_mmap() { fn test_index_on_commit_reload_policy_mmap() {
let schema = throw_away_schema(); let schema = throw_away_schema();
let field = schema.get_field("num_likes").unwrap(); let field = schema.get_field("num_likes").unwrap();
let tempdir = TempDir::new("index").unwrap(); let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path()); let tempdir_path = PathBuf::from(tempdir.path());
let index = Index::create_in_dir(&tempdir_path, schema).unwrap(); let index = Index::create_in_dir(&tempdir_path, schema).unwrap();
let mut writer = index.writer_with_num_threads(1, 3_000_000).unwrap(); let mut writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
@@ -504,7 +518,7 @@ mod tests {
fn test_index_on_commit_reload_policy_different_directories() { fn test_index_on_commit_reload_policy_different_directories() {
let schema = throw_away_schema(); let schema = throw_away_schema();
let field = schema.get_field("num_likes").unwrap(); let field = schema.get_field("num_likes").unwrap();
let tempdir = TempDir::new("index").unwrap(); let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path()); let tempdir_path = PathBuf::from(tempdir.path());
let write_index = Index::create_in_dir(&tempdir_path, schema).unwrap(); let write_index = Index::create_in_dir(&tempdir_path, schema).unwrap();
let read_index = Index::open_in_dir(&tempdir_path).unwrap(); let read_index = Index::open_in_dir(&tempdir_path).unwrap();
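A rough usage sketch of the two methods added above (`list_all_segment_metas` and `new_segment_meta` make the segment inventory reachable from the public API); `SegmentMeta::id()` and `SegmentMeta::list_files()` are assumed to behave as they are used elsewhere in this diff.

```rust
use tantivy::schema::{Schema, TEXT};
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    schema_builder.add_text_field("title", TEXT);
    let index = Index::create_in_ram(schema_builder.build());

    // Every segment the index currently tracks, including segments that
    // are still being built or sitting in the middle of a merge.
    for segment_meta in index.list_all_segment_metas() {
        println!("segment {}:", segment_meta.id().uuid_string());
        for path in segment_meta.list_files() {
            println!("  {:?}", path);
        }
    }
    Ok(())
}
```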

View File

@@ -30,7 +30,6 @@ impl SegmentMetaInventory {
.collect::<Vec<_>>() .collect::<Vec<_>>()
} }
#[doc(hidden)]
pub fn new_segment_meta(&self, segment_id: SegmentId, max_doc: u32) -> SegmentMeta { pub fn new_segment_meta(&self, segment_id: SegmentId, max_doc: u32) -> SegmentMeta {
let inner = InnerSegmentMeta { let inner = InnerSegmentMeta {
segment_id, segment_id,

View File

@@ -4,6 +4,8 @@ use uuid::Uuid;
#[cfg(test)] #[cfg(test)]
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use std::error::Error;
use std::str::FromStr;
#[cfg(test)] #[cfg(test)]
use std::sync::atomic; use std::sync::atomic;
@@ -52,15 +54,51 @@ impl SegmentId {
/// and the rest is random. /// and the rest is random.
/// ///
/// Picking the first 8 chars is ok to identify /// Picking the first 8 chars is ok to identify
/// segments in a display message. /// segments in a display message (e.g. a5c4dfcb).
pub fn short_uuid_string(&self) -> String { pub fn short_uuid_string(&self) -> String {
(&self.0.to_simple_ref().to_string()[..8]).to_string() (&self.0.to_simple_ref().to_string()[..8]).to_string()
} }
/// Returns a segment uuid string. /// Returns a segment uuid string.
///
/// It consists in 32 lowercase hexadecimal chars
/// (e.g. a5c4dfcbdfe645089129e308e26d5523)
pub fn uuid_string(&self) -> String { pub fn uuid_string(&self) -> String {
self.0.to_simple_ref().to_string() self.0.to_simple_ref().to_string()
} }
/// Build a `SegmentId` string from the full uuid string.
///
/// E.g. "a5c4dfcbdfe645089129e308e26d5523"
pub fn from_uuid_string(uuid_string: &str) -> Result<SegmentId, SegmentIdParseError> {
FromStr::from_str(uuid_string)
}
}
/// Error type used when parsing a `SegmentId` from a string fails.
pub struct SegmentIdParseError(uuid::parser::ParseError);
impl Error for SegmentIdParseError {}
impl fmt::Debug for SegmentIdParseError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.0.fmt(f)
}
}
impl fmt::Display for SegmentIdParseError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.0.fmt(f)
}
}
impl FromStr for SegmentId {
type Err = SegmentIdParseError;
fn from_str(uuid_string: &str) -> Result<Self, SegmentIdParseError> {
let uuid = Uuid::parse_str(uuid_string).map_err(SegmentIdParseError)?;
Ok(SegmentId(uuid))
}
} }
impl fmt::Debug for SegmentId { impl fmt::Debug for SegmentId {
@@ -80,3 +118,18 @@ impl Ord for SegmentId {
self.0.as_bytes().cmp(other.0.as_bytes()) self.0.as_bytes().cmp(other.0.as_bytes())
} }
} }
#[cfg(test)]
mod tests {
use super::SegmentId;
#[test]
fn test_to_uuid_string() {
let full_uuid = "a5c4dfcbdfe645089129e308e26d5523";
let segment_id = SegmentId::from_uuid_string(full_uuid).unwrap();
assert_eq!(segment_id.uuid_string(), full_uuid);
assert_eq!(segment_id.short_uuid_string(), "a5c4dfcb");
// one extra char
assert!(SegmentId::from_uuid_string("a5c4dfcbdfe645089129e308e26d5523b").is_err());
}
}

View File

@@ -1,4 +1,3 @@
use crate::common::CompositeFile;
use crate::common::HasLen; use crate::common::HasLen;
use crate::core::InvertedIndexReader; use crate::core::InvertedIndexReader;
use crate::core::Segment; use crate::core::Segment;
@@ -15,6 +14,7 @@ use crate::schema::Schema;
use crate::space_usage::SegmentSpaceUsage; use crate::space_usage::SegmentSpaceUsage;
use crate::store::StoreReader; use crate::store::StoreReader;
use crate::termdict::TermDictionary; use crate::termdict::TermDictionary;
use crate::CompositeFile;
use crate::DocId; use crate::DocId;
use crate::Result; use crate::Result;
use fail::fail_point; use fail::fail_point;

View File

@@ -48,14 +48,14 @@ impl RetryPolicy {
/// ///
/// It is transparently associated to a lock file, that gets deleted /// It is transparently associated to a lock file, that gets deleted
/// on `Drop.` The lock is released automatically on `Drop`. /// on `Drop.` The lock is released automatically on `Drop`.
pub struct DirectoryLock(Box<dyn Drop + Send + Sync + 'static>); pub struct DirectoryLock(Box<dyn Send + Sync + 'static>);
struct DirectoryLockGuard { struct DirectoryLockGuard {
directory: Box<dyn Directory>, directory: Box<dyn Directory>,
path: PathBuf, path: PathBuf,
} }
impl<T: Drop + Send + Sync + 'static> From<Box<T>> for DirectoryLock { impl<T: Send + Sync + 'static> From<Box<T>> for DirectoryLock {
fn from(underlying: Box<T>) -> Self { fn from(underlying: Box<T>) -> Self {
DirectoryLock(underlying) DirectoryLock(underlying)
} }

View File

@@ -263,11 +263,11 @@ mod tests_mmap_specific {
use std::collections::HashSet; use std::collections::HashSet;
use std::io::Write; use std::io::Write;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use tempdir::TempDir; use tempfile::TempDir;
#[test] #[test]
fn test_managed_directory() { fn test_managed_directory() {
let tempdir = TempDir::new("tantivy-test").unwrap(); let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path()); let tempdir_path = PathBuf::from(tempdir.path());
let test_path1: &'static Path = Path::new("some_path_for_test"); let test_path1: &'static Path = Path::new("some_path_for_test");
@@ -304,7 +304,7 @@ mod tests_mmap_specific {
fn test_managed_directory_gc_while_mmapped() { fn test_managed_directory_gc_while_mmapped() {
let test_path1: &'static Path = Path::new("some_path_for_test"); let test_path1: &'static Path = Path::new("some_path_for_test");
let tempdir = TempDir::new("index").unwrap(); let tempdir = TempDir::new().unwrap();
let tempdir_path = PathBuf::from(tempdir.path()); let tempdir_path = PathBuf::from(tempdir.path());
let living_files = HashSet::new(); let living_files = HashSet::new();

View File

@@ -36,7 +36,7 @@ use std::sync::Mutex;
use std::sync::RwLock; use std::sync::RwLock;
use std::sync::Weak; use std::sync::Weak;
use std::thread; use std::thread;
use tempdir::TempDir; use tempfile::TempDir;
/// Create a default io error given a string. /// Create a default io error given a string.
pub(crate) fn make_io_err(msg: String) -> io::Error { pub(crate) fn make_io_err(msg: String) -> io::Error {
@@ -294,7 +294,7 @@ impl MmapDirectory {
/// This is mostly useful to test the MmapDirectory itself. /// This is mostly useful to test the MmapDirectory itself.
/// For your unit tests, prefer the RAMDirectory. /// For your unit tests, prefer the RAMDirectory.
pub fn create_from_tempdir() -> Result<MmapDirectory, OpenDirectoryError> { pub fn create_from_tempdir() -> Result<MmapDirectory, OpenDirectoryError> {
let tempdir = TempDir::new("index").map_err(OpenDirectoryError::IoError)?; let tempdir = TempDir::new().map_err(OpenDirectoryError::IoError)?;
let tempdir_path = PathBuf::from(tempdir.path()); let tempdir_path = PathBuf::from(tempdir.path());
MmapDirectory::new(tempdir_path, Some(tempdir)) MmapDirectory::new(tempdir_path, Some(tempdir))
} }
@@ -539,7 +539,7 @@ impl Directory for MmapDirectory {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
// There are more tests in directory/mod.rs // There are more tests in directory/lib.rs
// The following tests are specific to the MmapDirectory // The following tests are specific to the MmapDirectory
use super::*; use super::*;
@@ -642,7 +642,7 @@ mod tests {
fn test_watch_wrapper() { fn test_watch_wrapper() {
let counter: Arc<AtomicUsize> = Default::default(); let counter: Arc<AtomicUsize> = Default::default();
let counter_clone = counter.clone(); let counter_clone = counter.clone();
let tmp_dir: TempDir = tempdir::TempDir::new("test_watch_wrapper").unwrap(); let tmp_dir = tempfile::TempDir::new().unwrap();
let tmp_dirpath = tmp_dir.path().to_owned(); let tmp_dirpath = tmp_dir.path().to_owned();
let mut watch_wrapper = WatcherWrapper::new(&tmp_dirpath).unwrap(); let mut watch_wrapper = WatcherWrapper::new(&tmp_dirpath).unwrap();
let tmp_file = tmp_dirpath.join("coucou"); let tmp_file = tmp_dirpath.join("coucou");

View File

@@ -177,7 +177,7 @@ impl Directory for RAMDirectory {
fn atomic_write(&mut self, path: &Path, data: &[u8]) -> io::Result<()> { fn atomic_write(&mut self, path: &Path, data: &[u8]) -> io::Result<()> {
fail_point!("RAMDirectory::atomic_write", |msg| Err(io::Error::new( fail_point!("RAMDirectory::atomic_write", |msg| Err(io::Error::new(
io::ErrorKind::Other, io::ErrorKind::Other,
msg.unwrap_or("Undefined".to_string()) msg.unwrap_or_else(|| "Undefined".to_string())
))); )));
let path_buf = PathBuf::from(path); let path_buf = PathBuf::from(path);

View File

@@ -148,13 +148,13 @@ fn value_to_u64(value: &Value) -> u64 {
mod tests { mod tests {
use super::*; use super::*;
use crate::common::CompositeFile;
use crate::directory::{Directory, RAMDirectory, WritePtr}; use crate::directory::{Directory, RAMDirectory, WritePtr};
use crate::fastfield::FastFieldReader; use crate::fastfield::FastFieldReader;
use crate::schema::Document; use crate::schema::Document;
use crate::schema::Field; use crate::schema::Field;
use crate::schema::Schema; use crate::schema::Schema;
use crate::schema::FAST; use crate::schema::FAST;
use crate::CompositeFile;
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use rand::prelude::SliceRandom; use rand::prelude::SliceRandom;
use rand::rngs::StdRng; use rand::rngs::StdRng;

View File

@@ -2,12 +2,12 @@ use super::FastValue;
use crate::common::bitpacker::BitUnpacker; use crate::common::bitpacker::BitUnpacker;
use crate::common::compute_num_bits; use crate::common::compute_num_bits;
use crate::common::BinarySerializable; use crate::common::BinarySerializable;
use crate::common::CompositeFile;
use crate::directory::ReadOnlySource; use crate::directory::ReadOnlySource;
use crate::directory::{Directory, RAMDirectory, WritePtr}; use crate::directory::{Directory, RAMDirectory, WritePtr};
use crate::fastfield::{FastFieldSerializer, FastFieldsWriter}; use crate::fastfield::{FastFieldSerializer, FastFieldsWriter};
use crate::schema::Schema; use crate::schema::Schema;
use crate::schema::FAST; use crate::schema::FAST;
use crate::CompositeFile;
use crate::DocId; use crate::DocId;
use owning_ref::OwningRef; use owning_ref::OwningRef;
use std::collections::HashMap; use std::collections::HashMap;

View File

@@ -1,9 +1,9 @@
use crate::common::CompositeFile;
use crate::fastfield::BytesFastFieldReader; use crate::fastfield::BytesFastFieldReader;
use crate::fastfield::MultiValueIntFastFieldReader; use crate::fastfield::MultiValueIntFastFieldReader;
use crate::fastfield::{FastFieldNotAvailableError, FastFieldReader}; use crate::fastfield::{FastFieldNotAvailableError, FastFieldReader};
use crate::schema::{Cardinality, Field, FieldType, Schema}; use crate::schema::{Cardinality, Field, FieldType, Schema};
use crate::space_usage::PerFieldSpaceUsage; use crate::space_usage::PerFieldSpaceUsage;
use crate::CompositeFile;
use crate::Result; use crate::Result;
use std::collections::HashMap; use std::collections::HashMap;

View File

@@ -1,10 +1,10 @@
use crate::common::bitpacker::BitPacker; use crate::common::bitpacker::BitPacker;
use crate::common::compute_num_bits; use crate::common::compute_num_bits;
use crate::common::BinarySerializable; use crate::common::BinarySerializable;
use crate::common::CompositeWrite;
use crate::common::CountingWriter; use crate::common::CountingWriter;
use crate::directory::WritePtr; use crate::directory::WritePtr;
use crate::schema::Field; use crate::schema::Field;
use crate::CompositeWrite;
use std::io::{self, Write}; use std::io::{self, Write};
/// `FastFieldSerializer` is in charge of serializing /// `FastFieldSerializer` is in charge of serializing

View File

@@ -31,7 +31,9 @@ impl FastFieldsWriter {
_ => 0u64, _ => 0u64,
}; };
match *field_entry.field_type() { match *field_entry.field_type() {
FieldType::I64(ref int_options) | FieldType::U64(ref int_options) | FieldType::F64(ref int_options) => { FieldType::I64(ref int_options)
| FieldType::U64(ref int_options)
| FieldType::F64(ref int_options) => {
match int_options.get_fastfield_cardinality() { match int_options.get_fastfield_cardinality() {
Some(Cardinality::SingleValue) => { Some(Cardinality::SingleValue) => {
let mut fast_field_writer = IntFastFieldWriter::new(field); let mut fast_field_writer = IntFastFieldWriter::new(field);

View File

@@ -1,6 +1,6 @@
use crate::common::CompositeWrite;
use crate::directory::WritePtr; use crate::directory::WritePtr;
use crate::schema::Field; use crate::schema::Field;
use crate::CompositeWrite;
use std::io; use std::io;
use std::io::Write; use std::io::Write;

View File

@@ -209,10 +209,7 @@ fn index_documents(
assert!(num_docs > 0); assert!(num_docs > 0);
let doc_opstamps: Vec<Opstamp> = segment_writer.finalize()?; let doc_opstamps: Vec<Opstamp> = segment_writer.finalize()?;
let segment_meta = segment let segment_meta = segment.index().new_segment_meta(segment_id, num_docs);
.index()
.inventory()
.new_segment_meta(segment_id, num_docs);
let last_docstamp: Opstamp = *(doc_opstamps.last().unwrap()); let last_docstamp: Opstamp = *(doc_opstamps.last().unwrap());
@@ -450,12 +447,10 @@ impl IndexWriter {
/// by clearing and resubmitting necessary documents /// by clearing and resubmitting necessary documents
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::query::QueryParser;
/// use tantivy::collector::TopDocs; /// use tantivy::collector::TopDocs;
/// use tantivy::query::QueryParser;
/// use tantivy::schema::*; /// use tantivy::schema::*;
/// use tantivy::Index; /// use tantivy::{doc, Index};
/// ///
/// fn main() -> tantivy::Result<()> { /// fn main() -> tantivy::Result<()> {
/// let mut schema_builder = Schema::builder(); /// let mut schema_builder = Schema::builder();
@@ -761,7 +756,6 @@ mod tests {
use crate::Index; use crate::Index;
use crate::ReloadPolicy; use crate::ReloadPolicy;
use crate::Term; use crate::Term;
use fail;
#[test] #[test]
fn test_operations_group() { fn test_operations_group() {

View File

@@ -126,9 +126,7 @@ fn perform_merge(
let num_docs = merger.write(segment_serializer)?; let num_docs = merger.write(segment_serializer)?;
let segment_meta = index let segment_meta = index.new_segment_meta(merged_segment.id(), num_docs);
.inventory()
.new_segment_meta(merged_segment.id(), num_docs);
let after_merge_segment_entry = SegmentEntry::new(segment_meta.clone(), delete_cursor, None); let after_merge_segment_entry = SegmentEntry::new(segment_meta.clone(), delete_cursor, None);
Ok(after_merge_segment_entry) Ok(after_merge_segment_entry)
@@ -282,7 +280,7 @@ impl SegmentUpdater {
fn list_files(&self) -> HashSet<PathBuf> { fn list_files(&self) -> HashSet<PathBuf> {
let mut files = HashSet::new(); let mut files = HashSet::new();
files.insert(META_FILEPATH.to_path_buf()); files.insert(META_FILEPATH.to_path_buf());
for segment_meta in self.0.index.inventory().all() { for segment_meta in self.0.index.list_all_segment_metas() {
files.extend(segment_meta.list_files()); files.extend(segment_meta.list_files());
} }
files files

View File

@@ -49,7 +49,7 @@ pub struct SegmentWriter {
fast_field_writers: FastFieldsWriter, fast_field_writers: FastFieldsWriter,
fieldnorms_writer: FieldNormsWriter, fieldnorms_writer: FieldNormsWriter,
doc_opstamps: Vec<Opstamp>, doc_opstamps: Vec<Opstamp>,
tokenizers: Vec<Option<Box<dyn BoxedTokenizer>>>, tokenizers: Vec<Option<BoxedTokenizer>>,
} }
impl SegmentWriter { impl SegmentWriter {

View File

@@ -1,9 +1,9 @@
#![doc(html_logo_url = "http://fulmicoton.com/tantivy-logo/tantivy-logo.png")] #![doc(html_logo_url = "http://fulmicoton.com/tantivy-logo/tantivy-logo.png")]
#![recursion_limit = "100"]
#![cfg_attr(all(feature = "unstable", test), feature(test))] #![cfg_attr(all(feature = "unstable", test), feature(test))]
#![cfg_attr(feature = "cargo-clippy", allow(clippy::module_inception))] #![cfg_attr(feature = "cargo-clippy", allow(clippy::module_inception))]
#![doc(test(attr(allow(unused_variables), deny(warnings))))] #![doc(test(attr(allow(unused_variables), deny(warnings))))]
#![warn(missing_docs)] #![warn(missing_docs)]
#![recursion_limit = "80"]
//! # `tantivy` //! # `tantivy`
//! //!
@@ -11,26 +11,17 @@
//! Think `Lucene`, but in Rust. //! Think `Lucene`, but in Rust.
//! //!
//! ```rust //! ```rust
//! # extern crate tempdir;
//! #
//! #[macro_use]
//! extern crate tantivy;
//!
//! // ...
//!
//! # use std::path::Path; //! # use std::path::Path;
//! # use tempdir::TempDir; //! # use tempfile::TempDir;
//! # use tantivy::Index;
//! # use tantivy::schema::*;
//! # use tantivy::{Score, DocAddress};
//! # use tantivy::collector::TopDocs; //! # use tantivy::collector::TopDocs;
//! # use tantivy::query::QueryParser; //! # use tantivy::query::QueryParser;
//! # use tantivy::schema::*;
//! # use tantivy::{doc, DocAddress, Index, Score};
//! # //! #
//! # fn main() { //! # fn main() {
//! # // Let's create a temporary directory for the //! # // Let's create a temporary directory for the
//! # // sake of this example //! # // sake of this example
//! # if let Ok(dir) = TempDir::new("tantivy_example_dir") { //! # if let Ok(dir) = TempDir::new() {
//! # run_example(dir.path()).unwrap(); //! # run_example(dir.path()).unwrap();
//! # dir.close().unwrap(); //! # dir.close().unwrap();
//! # } //! # }
@@ -111,9 +102,6 @@
#[macro_use] #[macro_use]
extern crate serde_derive; extern crate serde_derive;
#[cfg_attr(test, macro_use)]
extern crate serde_json;
#[macro_use] #[macro_use]
extern crate log; extern crate log;
@@ -130,6 +118,9 @@ mod functional_test;
#[macro_use] #[macro_use]
mod macros; mod macros;
mod composite_file;
pub(crate) use composite_file::{CompositeFile, CompositeWrite};
pub use crate::error::TantivyError; pub use crate::error::TantivyError;
#[deprecated(since = "0.7.0", note = "please use `tantivy::TantivyError` instead")] #[deprecated(since = "0.7.0", note = "please use `tantivy::TantivyError` instead")]
@@ -142,22 +133,22 @@ pub type Result<T> = std::result::Result<T, error::TantivyError>;
/// Tantivy DateTime /// Tantivy DateTime
pub type DateTime = chrono::DateTime<chrono::Utc>; pub type DateTime = chrono::DateTime<chrono::Utc>;
mod common; pub use tantivy_common as common;
pub use tantivy_schema as schema;
pub use tantivy_tokenizer as tokenizer;
mod core; mod core;
mod indexer; mod indexer;
#[allow(unused_doc_comments)]
mod error;
pub mod tokenizer;
pub mod collector; pub mod collector;
pub mod directory; pub mod directory;
#[allow(unused_doc_comments)]
mod error;
pub mod fastfield; pub mod fastfield;
pub mod fieldnorm; pub mod fieldnorm;
pub(crate) mod positions; pub(crate) mod positions;
pub mod postings; pub mod postings;
pub mod query; pub mod query;
pub mod schema;
pub mod space_usage; pub mod space_usage;
pub mod store; pub mod store;
pub mod termdict; pub mod termdict;
@@ -171,16 +162,16 @@ pub use self::snippet::{Snippet, SnippetGenerator};
mod docset; mod docset;
pub use self::docset::{DocSet, SkipResult}; pub use self::docset::{DocSet, SkipResult};
pub use crate::common::{f64_to_u64, i64_to_u64, u64_to_f64, u64_to_i64};
pub use crate::core::SegmentComponent; pub use crate::core::SegmentComponent;
pub use crate::core::{Index, IndexMeta, Searcher, Segment, SegmentId, SegmentMeta}; pub use crate::core::{Index, IndexMeta, Searcher, Segment, SegmentId, SegmentMeta};
pub use crate::core::{InvertedIndexReader, SegmentReader}; pub use crate::core::{InvertedIndexReader, SegmentReader};
pub use crate::directory::Directory; pub use crate::directory::Directory;
pub use crate::indexer::IndexWriter; pub use crate::indexer::IndexWriter;
pub use crate::postings::Postings; pub use crate::postings::Postings;
pub use crate::reader::LeasedItem;
pub use crate::schema::{Document, Term}; pub use crate::schema::{Document, Term};
pub use crate::common::{i64_to_u64, u64_to_i64, f64_to_u64, u64_to_f64};
/// Expose the current version of tantivy, as well /// Expose the current version of tantivy, as well
/// whether it was compiled with the simd compression. /// whether it was compiled with the simd compression.
pub fn version() -> &'static str { pub fn version() -> &'static str {
@@ -261,7 +252,6 @@ mod tests {
use crate::Postings; use crate::Postings;
use crate::ReloadPolicy; use crate::ReloadPolicy;
use rand::distributions::Bernoulli; use rand::distributions::Bernoulli;
use rand::distributions::Uniform;
use rand::rngs::StdRng; use rand::rngs::StdRng;
use rand::{Rng, SeedableRng}; use rand::{Rng, SeedableRng};
@@ -278,14 +268,6 @@ mod tests {
(a - b).abs() < 0.0005 * (a + b).abs() (a - b).abs() < 0.0005 * (a + b).abs()
} }
pub fn generate_nonunique_unsorted(max_value: u32, n_elems: usize) -> Vec<u32> {
let seed: [u8; 32] = [1; 32];
StdRng::from_seed(seed)
.sample_iter(&Uniform::new(0u32, max_value))
.take(n_elems)
.collect::<Vec<u32>>()
}
pub fn sample_with_seed(n: u32, ratio: f64, seed_val: u8) -> Vec<u32> { pub fn sample_with_seed(n: u32, ratio: f64, seed_val: u8) -> Vec<u32> {
StdRng::from_seed([seed_val; 32]) StdRng::from_seed([seed_val; 32])
.sample_iter(&Bernoulli::new(ratio).unwrap()) .sample_iter(&Bernoulli::new(ratio).unwrap())
@@ -295,10 +277,6 @@ mod tests {
.collect() .collect()
} }
pub fn sample(n: u32, ratio: f64) -> Vec<u32> {
sample_with_seed(n, ratio, 4)
}
#[test] #[test]
#[cfg(feature = "mmap")] #[cfg(feature = "mmap")]
fn test_indexing() { fn test_indexing() {
@@ -849,7 +827,8 @@ mod tests {
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 50_000_000).unwrap(); let mut index_writer = index.writer_with_num_threads(1, 50_000_000).unwrap();
{ {
let document = doc!(fast_field_unsigned => 4u64, fast_field_signed=>4i64, fast_field_float=>4f64); let document =
doc!(fast_field_unsigned => 4u64, fast_field_signed=>4i64, fast_field_float=>4f64);
index_writer.add_document(document); index_writer.add_document(document);
index_writer.commit().unwrap(); index_writer.commit().unwrap();
} }

View File

@@ -22,11 +22,9 @@
/// ///
/// # Example /// # Example
/// ///
/// ``` /// ```rust
/// #[macro_use]
/// extern crate tantivy;
///
/// use tantivy::schema::{Schema, TEXT, FAST}; /// use tantivy::schema::{Schema, TEXT, FAST};
/// use tantivy::doc;
/// ///
/// //... /// //...
/// ///

View File

@@ -1,6 +1,6 @@
use super::TermInfo; use super::TermInfo;
use crate::common::CountingWriter;
use crate::common::{BinarySerializable, VInt}; use crate::common::{BinarySerializable, VInt};
use crate::common::{CompositeWrite, CountingWriter};
use crate::core::Segment; use crate::core::Segment;
use crate::directory::WritePtr; use crate::directory::WritePtr;
use crate::positions::PositionSerializer; use crate::positions::PositionSerializer;
@@ -10,6 +10,7 @@ use crate::postings::USE_SKIP_INFO_LIMIT;
use crate::schema::Schema; use crate::schema::Schema;
use crate::schema::{Field, FieldEntry, FieldType}; use crate::schema::{Field, FieldEntry, FieldType};
use crate::termdict::{TermDictionaryBuilder, TermOrdinal}; use crate::termdict::{TermDictionaryBuilder, TermOrdinal};
use crate::CompositeWrite;
use crate::DocId; use crate::DocId;
use crate::Result; use crate::Result;
use std::io::{self, Write}; use std::io::{self, Write};

View File

@@ -45,7 +45,7 @@ impl BinarySerializable for TermInfo {
mod tests { mod tests {
use super::TermInfo; use super::TermInfo;
use crate::common::test::fixed_size_test; use crate::common::fixed_size_test;
#[test] #[test]
fn test_fixed_size() { fn test_fixed_size() {

View File

@@ -8,15 +8,13 @@ use crate::termdict::{TermDictionary, TermStreamer};
use crate::DocId; use crate::DocId;
use crate::TantivyError; use crate::TantivyError;
use crate::{Result, SkipResult}; use crate::{Result, SkipResult};
use std::sync::Arc;
use tantivy_fst::Automaton; use tantivy_fst::Automaton;
/// A weight struct for Fuzzy Term and Regex Queries /// A weight struct for Fuzzy Term and Regex Queries
pub struct AutomatonWeight<A> pub struct AutomatonWeight<A> {
where
A: Automaton + Send + Sync + 'static,
{
field: Field, field: Field,
automaton: A, automaton: Arc<A>,
} }
impl<A> AutomatonWeight<A> impl<A> AutomatonWeight<A>
@@ -24,12 +22,16 @@ where
A: Automaton + Send + Sync + 'static, A: Automaton + Send + Sync + 'static,
{ {
/// Create a new AutomatonWeight /// Create a new AutomatonWeight
pub fn new(field: Field, automaton: A) -> AutomatonWeight<A> { pub fn new<IntoArcA: Into<Arc<A>>>(field: Field, automaton: IntoArcA) -> AutomatonWeight<A> {
AutomatonWeight { field, automaton } AutomatonWeight {
field,
automaton: automaton.into(),
}
} }
fn automaton_stream<'a>(&'a self, term_dict: &'a TermDictionary) -> TermStreamer<'a, &'a A> { fn automaton_stream<'a>(&'a self, term_dict: &'a TermDictionary) -> TermStreamer<'a, &'a A> {
let term_stream_builder = term_dict.search(&self.automaton); let automaton: &A = &*self.automaton;
let term_stream_builder = term_dict.search(automaton);
term_stream_builder.into_stream() term_stream_builder.into_stream()
} }
} }
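
For illustration, a rough sketch of what the `Into<Arc<A>>` constructor buys callers (the helper function below is hypothetical, and it assumes `AutomatonWeight` is reachable from the calling code): an owned automaton and a pre-shared `Arc` can both be passed, so one compiled automaton can back several weights without recompilation.

    use std::sync::Arc;
    use tantivy::query::AutomatonWeight; // assumption: exported to the caller
    use tantivy::schema::Field;
    use tantivy_fst::Regex;

    // Hypothetical helper: build two weights from one compiled Regex.
    fn weights_sharing_one_regex(
        f1: Field,
        f2: Field,
    ) -> (AutomatonWeight<Regex>, AutomatonWeight<Regex>) {
        let shared: Arc<Regex> = Arc::new(Regex::new("jap[ao]n").expect("valid pattern"));
        // `new` accepts anything convertible into Arc<A>, so only the Arc is cloned,
        // never the Regex itself.
        (
            AutomatonWeight::new(f1, shared.clone()),
            AutomatonWeight::new(f2, shared),
        )
    }
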

View File

@@ -216,7 +216,6 @@ mod tests {
assert!(!docset.advance()); assert!(!docset.advance());
} }
} }
} }
#[cfg(all(test, feature = "unstable"))] #[cfg(all(test, feature = "unstable"))]

View File

@@ -1,3 +1,4 @@
use crate::error::TantivyError::InvalidArgument;
use crate::query::{AutomatonWeight, Query, Weight}; use crate::query::{AutomatonWeight, Query, Weight};
use crate::schema::Term; use crate::schema::Term;
use crate::Result; use crate::Result;
@@ -5,11 +6,16 @@ use crate::Searcher;
use levenshtein_automata::{LevenshteinAutomatonBuilder, DFA}; use levenshtein_automata::{LevenshteinAutomatonBuilder, DFA};
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use std::collections::HashMap; use std::collections::HashMap;
use std::ops::Range;
/// The range of Levenshtein distances for which we build DFAs for our terms.
/// The computation is exponential, so it is best kept to low single digits.
const VALID_LEVENSHTEIN_DISTANCE_RANGE: Range<u8> = (0..3);
static LEV_BUILDER: Lazy<HashMap<(u8, bool), LevenshteinAutomatonBuilder>> = Lazy::new(|| { static LEV_BUILDER: Lazy<HashMap<(u8, bool), LevenshteinAutomatonBuilder>> = Lazy::new(|| {
let mut lev_builder_cache = HashMap::new(); let mut lev_builder_cache = HashMap::new();
// TODO make population lazy on a `(distance, val)` basis // TODO make population lazy on a `(distance, val)` basis
for distance in 0..3 { for distance in VALID_LEVENSHTEIN_DISTANCE_RANGE {
for &transposition in &[false, true] { for &transposition in &[false, true] {
let lev_automaton_builder = LevenshteinAutomatonBuilder::new(distance, transposition); let lev_automaton_builder = LevenshteinAutomatonBuilder::new(distance, transposition);
lev_builder_cache.insert((distance, transposition), lev_automaton_builder); lev_builder_cache.insert((distance, transposition), lev_automaton_builder);
@@ -22,12 +28,10 @@ static LEV_BUILDER: Lazy<HashMap<(u8, bool), LevenshteinAutomatonBuilder>> = Laz
/// containing a specific term that is within /// containing a specific term that is within
/// Levenshtein distance /// Levenshtein distance
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{Index, Result, Term};
/// use tantivy::collector::{Count, TopDocs}; /// use tantivy::collector::{Count, TopDocs};
/// use tantivy::query::FuzzyTermQuery; /// use tantivy::query::FuzzyTermQuery;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{doc, Index, Result, Term};
/// ///
/// # fn main() { example().unwrap(); } /// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> { /// fn example() -> Result<()> {
@@ -100,10 +104,18 @@ impl FuzzyTermQuery {
} }
fn specialized_weight(&self) -> Result<AutomatonWeight<DFA>> { fn specialized_weight(&self) -> Result<AutomatonWeight<DFA>> {
let automaton = LEV_BUILDER.get(&(self.distance, false)) // LEV_BUILDER is a HashMap, whose `get` method returns an Option
.unwrap() // TODO return an error match LEV_BUILDER.get(&(self.distance, false)) {
.build_dfa(self.term.text()); // Unwrap the option and build the Ok(AutomatonWeight)
Ok(AutomatonWeight::new(self.term.field(), automaton)) Some(automaton_builder) => {
let automaton = automaton_builder.build_dfa(self.term.text());
Ok(AutomatonWeight::new(self.term.field(), automaton))
}
None => Err(InvalidArgument(format!(
"Levenshtein distance of {} is not allowed. Choose a value in the {:?} range",
self.distance, VALID_LEVENSHTEIN_DISTANCE_RANGE
))),
}
} }
} }
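
The pattern above, in isolation: a `once_cell` `Lazy` map built once for the supported parameter range, with lookups outside that range turned into an error instead of a panic. A minimal self-contained sketch of that caching-plus-fallible-lookup idea (not tantivy's API):

    use once_cell::sync::Lazy;
    use std::collections::HashMap;

    // Expensive objects (here just strings) built once for the supported keys.
    static CACHE: Lazy<HashMap<u8, String>> =
        Lazy::new(|| (0..3u8).map(|d| (d, format!("builder-{}", d))).collect());

    fn lookup(distance: u8) -> Result<&'static String, String> {
        CACHE
            .get(&distance)
            .ok_or_else(|| format!("distance {} is outside the supported 0..3 range", distance))
    }
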

View File

@@ -18,7 +18,6 @@ pub enum LogicalLiteral {
All, All,
} }
#[derive(Clone)]
pub enum LogicalAST { pub enum LogicalAST {
Clause(Vec<(Occur, LogicalAST)>), Clause(Vec<(Occur, LogicalAST)>),
Leaf(Box<LogicalLiteral>), Leaf(Box<LogicalLiteral>),

View File

@@ -1,4 +1,3 @@
use super::query_grammar;
use super::user_input_ast::*; use super::user_input_ast::*;
use crate::query::occur::Occur; use crate::query::occur::Occur;
use crate::query::query_parser::user_input_ast::UserInputBound; use crate::query::query_parser::user_input_ast::UserInputBound;
@@ -13,22 +12,25 @@ parser! {
( (
letter(), letter(),
many(satisfy(|c: char| c.is_alphanumeric() || c == '_')), many(satisfy(|c: char| c.is_alphanumeric() || c == '_')),
).map(|(s1, s2): (char, String)| format!("{}{}", s1, s2)) ).skip(char(':')).map(|(s1, s2): (char, String)| format!("{}{}", s1, s2))
} }
} }
parser! { parser! {
fn word[I]()(I) -> String fn word[I]()(I) -> String
where [I: Stream<Item = char>] { where [I: Stream<Item = char>] {
many1(satisfy(|c: char| c.is_alphanumeric() || c=='.')) (
.and_then(|s: String| { satisfy(|c: char| !c.is_whitespace() && !['-', '`', ':', '{', '}', '"', '[', ']', '(',')'].contains(&c) ),
match s.as_str() { many(satisfy(|c: char| !c.is_whitespace() && ![':', '{', '}', '"', '[', ']', '(',')'].contains(&c)))
"OR" => Err(StreamErrorFor::<I>::unexpected_static_message("OR")), )
"AND" => Err(StreamErrorFor::<I>::unexpected_static_message("AND")), .map(|(s1, s2): (char, String)| format!("{}{}", s1, s2))
"NOT" => Err(StreamErrorFor::<I>::unexpected_static_message("NOT")), .and_then(|s: String|
_ => Ok(s) match s.as_str() {
} "OR" => Err(StreamErrorFor::<I>::unexpected_static_message("OR")),
}) "AND" => Err(StreamErrorFor::<I>::unexpected_static_message("AND")),
"NOT" => Err(StreamErrorFor::<I>::unexpected_static_message("NOT")),
_ => Ok(s)
})
} }
} }
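
The character classes above decide what counts as a single word. As a rough standalone illustration (plain functions, not the combine DSL), a leading '-' still acts as the negation operator while an inner '-' is kept, which is what lets "www-form-encoded" parse as one term in the tests further down:

    // Mirror of the first-character and rest-character sets used by `word` above.
    fn is_word_start(c: char) -> bool {
        !c.is_whitespace() && !['-', '`', ':', '{', '}', '"', '[', ']', '(', ')'].contains(&c)
    }

    fn is_word_rest(c: char) -> bool {
        !c.is_whitespace() && ![':', '{', '}', '"', '[', ']', '(', ')'].contains(&c)
    }

    fn main() {
        assert!(!is_word_start('-')); // a leading '-' is the MustNot operator
        assert!(is_word_rest('-'));   // an inner '-' stays inside the word
    }
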
@@ -37,12 +39,13 @@ parser! {
where [I: Stream<Item = char>] where [I: Stream<Item = char>]
{ {
let term_val = || { let term_val = || {
let phrase = (char('"'), many1(satisfy(|c| c != '"')), char('"')).map(|(_, s, _)| s); let phrase = char('"').with(many1(satisfy(|c| c != '"'))).skip(char('"'));
phrase.or(word()) phrase.or(word())
}; };
let term_val_with_field = negative_number().or(term_val()); let term_val_with_field = negative_number().or(term_val());
let term_query = let term_query =
(field(), char(':'), term_val_with_field).map(|(field_name, _, phrase)| UserInputLiteral { (field(), term_val_with_field)
.map(|(field_name, phrase)| UserInputLiteral {
field_name: Some(field_name), field_name: Some(field_name),
phrase, phrase,
}); });
@@ -60,8 +63,15 @@ parser! {
fn negative_number[I]()(I) -> String fn negative_number[I]()(I) -> String
where [I: Stream<Item = char>] where [I: Stream<Item = char>]
{ {
(char('-'), many1(satisfy(char::is_numeric))) (char('-'), many1(satisfy(char::is_numeric)),
.map(|(s1, s2): (char, String)| format!("{}{}", s1, s2)) optional((char('.'), many1(satisfy(char::is_numeric)))))
.map(|(s1, s2, s3): (char, String, Option<(char, String)>)| {
if let Some(('.', s3)) = s3 {
format!("{}{}.{}", s1, s2, s3)
} else {
format!("{}{}", s1, s2)
}
})
} }
} }
@@ -73,55 +83,93 @@ parser! {
} }
parser! { parser! {
/// Function that parses a range out of a Stream
/// Supports ranges like:
/// [5 TO 10], {5 TO 10}, [* TO 10], [10 TO *], {10 TO *], >5, <=10
/// [a TO *], [a TO c], [abc TO bcd}
fn range[I]()(I) -> UserInputLeaf fn range[I]()(I) -> UserInputLeaf
where [I: Stream<Item = char>] { where [I: Stream<Item = char>] {
let term_val = || { let range_term_val = || {
word().or(negative_number()).or(char('*').map(|_| "*".to_string())) word().or(negative_number()).or(char('*').with(value("*".to_string())))
}; };
let lower_bound = {
let excl = (char('{'), term_val()).map(|(_, w)| UserInputBound::Exclusive(w)); // check for unbounded range in the form of <5, <=10, >5, >=5
let incl = (char('['), term_val()).map(|(_, w)| UserInputBound::Inclusive(w)); let elastic_unbounded_range = (choice([attempt(string(">=")),
attempt(excl).or(incl) attempt(string("<=")),
}; attempt(string("<")),
let upper_bound = { attempt(string(">"))])
let excl = (term_val(), char('}')).map(|(w, _)| UserInputBound::Exclusive(w)); .skip(spaces()),
let incl = (term_val(), char(']')).map(|(w, _)| UserInputBound::Inclusive(w)); range_term_val()).
attempt(excl).or(incl) map(|(comparison_sign, bound): (&str, String)|
}; match comparison_sign {
( ">=" => (UserInputBound::Inclusive(bound), UserInputBound::Unbounded),
optional((field(), char(':')).map(|x| x.0)), "<=" => (UserInputBound::Unbounded, UserInputBound::Inclusive(bound)),
lower_bound, "<" => (UserInputBound::Unbounded, UserInputBound::Exclusive(bound)),
spaces(), ">" => (UserInputBound::Exclusive(bound), UserInputBound::Unbounded),
string("TO"), // default case
spaces(), _ => (UserInputBound::Unbounded, UserInputBound::Unbounded)
upper_bound, });
).map(|(field, lower, _, _, _, upper)| UserInputLeaf::Range { let lower_bound = (one_of("{[".chars()), range_term_val())
field, .map(|(boundary_char, lower_bound): (char, String)|
lower, if lower_bound == "*" {
upper UserInputBound::Unbounded
} else if boundary_char == '{' {
UserInputBound::Exclusive(lower_bound)
} else {
UserInputBound::Inclusive(lower_bound)
});
let upper_bound = (range_term_val(), one_of("}]".chars()))
.map(|(higher_bound, boundary_char): (String, char)|
if higher_bound == "*" {
UserInputBound::Unbounded
} else if boundary_char == '}' {
UserInputBound::Exclusive(higher_bound)
} else {
UserInputBound::Inclusive(higher_bound)
});
// return only lower and upper
let lower_to_upper = (lower_bound.
skip((spaces(),
string("TO"),
spaces())),
upper_bound);
(optional(field()).skip(spaces()),
// try elastic first, if it matches, the range is unbounded
attempt(elastic_unbounded_range).or(lower_to_upper))
.map(|(field, (lower, upper))|
// Construct the leaf from extracted field (optional)
// and bounds
UserInputLeaf::Range {
field,
lower,
upper
}) })
} }
} }
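
To make the sign-to-bounds mapping easier to eyeball, here is the same table as a plain function over an illustrative bound type (mirroring `UserInputBound`, but not tantivy's type):

    #[derive(Debug, PartialEq)]
    enum Bound {
        Inclusive(String),
        Exclusive(String),
        Unbounded,
    }

    // ">=a" keeps everything from "a" upward, "<a" everything strictly below "a", etc.
    fn elastic_bounds(sign: &str, term: &str) -> (Bound, Bound) {
        let t = term.to_string();
        match sign {
            ">=" => (Bound::Inclusive(t), Bound::Unbounded),
            ">" => (Bound::Exclusive(t), Bound::Unbounded),
            "<=" => (Bound::Unbounded, Bound::Inclusive(t)),
            "<" => (Bound::Unbounded, Bound::Exclusive(t)),
            _ => (Bound::Unbounded, Bound::Unbounded),
        }
    }
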
fn negate(expr: UserInputAST) -> UserInputAST {
expr.unary(Occur::MustNot)
}
fn must(expr: UserInputAST) -> UserInputAST {
expr.unary(Occur::Must)
}
parser! { parser! {
fn leaf[I]()(I) -> UserInputAST fn leaf[I]()(I) -> UserInputAST
where [I: Stream<Item = char>] { where [I: Stream<Item = char>] {
(char('-'), leaf()).map(|(_, expr)| expr.unary(Occur::MustNot) ) char('-').with(leaf()).map(negate)
.or((char('+'), leaf()).map(|(_, expr)| expr.unary(Occur::Must) )) .or(char('+').with(leaf()).map(must))
.or((char('('), parse_to_ast(), char(')')).map(|(_, expr, _)| expr)) .or(char('(').with(ast()).skip(char(')')))
.or(char('*').map(|_| UserInputAST::from(UserInputLeaf::All) )) .or(char('*').map(|_| UserInputAST::from(UserInputLeaf::All)))
.or(attempt( .or(attempt(string("NOT").skip(spaces1()).with(leaf()).map(negate)))
(string("NOT"), spaces1(), leaf()).map(|(_, _, expr)| expr.unary(Occur::MustNot)) .or(attempt(range().map(UserInputAST::from)))
) .or(literal().map(UserInputAST::from))
)
.or(attempt(
range().map(UserInputAST::from)
)
)
.or(literal().map(|leaf| UserInputAST::Leaf(Box::new(leaf))))
} }
} }
#[derive(Clone, Copy)]
enum BinaryOperand { enum BinaryOperand {
Or, Or,
And, And,
@@ -129,27 +177,54 @@ enum BinaryOperand {
parser! { parser! {
fn binary_operand[I]()(I) -> BinaryOperand fn binary_operand[I]()(I) -> BinaryOperand
where [I: Stream<Item = char>] { where [I: Stream<Item = char>]
(spaces1(), {
( string("AND").with(value(BinaryOperand::And))
string("AND").map(|_| BinaryOperand::And) .or(string("OR").with(value(BinaryOperand::Or)))
.or(string("OR").map(|_| BinaryOperand::Or))
),
spaces1()).map(|(_, op,_)| op)
} }
} }
enum Element { fn aggregate_binary_expressions(
SingleEl(UserInputAST), left: UserInputAST,
NormalDisjunctive(Vec<Vec<UserInputAST>>), others: Vec<(BinaryOperand, UserInputAST)>,
) -> UserInputAST {
let mut dnf: Vec<Vec<UserInputAST>> = vec![vec![left]];
for (operator, operand_ast) in others {
match operator {
BinaryOperand::And => {
if let Some(last) = dnf.last_mut() {
last.push(operand_ast);
}
}
BinaryOperand::Or => {
dnf.push(vec![operand_ast]);
}
}
}
if dnf.len() == 1 {
UserInputAST::and(dnf.into_iter().next().unwrap()) //< safe
} else {
let conjunctions = dnf.into_iter().map(UserInputAST::and).collect();
UserInputAST::or(conjunctions)
}
} }
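
The folding above replaces the old `Element`/`chainl1` machinery. As a self-contained sketch of the same disjunctive-normal-form grouping (illustrative names, not tantivy's types), "a AND b OR c" ends up grouped as (a AND b) OR c:

    #[derive(Clone, Copy)]
    enum Op {
        And,
        Or,
    }

    fn to_dnf(first: &str, rest: &[(Op, &str)]) -> Vec<Vec<String>> {
        let mut dnf = vec![vec![first.to_string()]];
        for &(op, leaf) in rest {
            match op {
                // AND extends the current conjunction...
                Op::And => dnf.last_mut().unwrap().push(leaf.to_string()),
                // ...while OR starts a new one.
                Op::Or => dnf.push(vec![leaf.to_string()]),
            }
        }
        dnf
    }

    fn main() {
        let grouped = to_dnf("a", &[(Op::And, "b"), (Op::Or, "c")]);
        assert_eq!(
            grouped,
            vec![vec!["a".to_string(), "b".to_string()], vec!["c".to_string()]]
        );
    }
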
impl Element { parser! {
pub fn into_dnf(self) -> Vec<Vec<UserInputAST>> { pub fn ast[I]()(I) -> UserInputAST
match self { where [I: Stream<Item = char>]
Element::NormalDisjunctive(conjunctions) => conjunctions, {
Element::SingleEl(el) => vec![vec![el]], let operand_leaf = (binary_operand().skip(spaces()), leaf().skip(spaces()));
} let boolean_expr = (leaf().skip(spaces().silent()), many1(operand_leaf)).map(
|(left, right)| aggregate_binary_expressions(left,right));
let whitespace_separated_leaves = many1(leaf().skip(spaces().silent()))
.map(|subqueries: Vec<UserInputAST>|
if subqueries.len() == 1 {
subqueries.into_iter().next().unwrap()
} else {
UserInputAST::Clause(subqueries.into_iter().collect())
});
let expr = attempt(boolean_expr).or(whitespace_separated_leaves);
spaces().with(expr).skip(spaces())
} }
} }
@@ -157,56 +232,7 @@ parser! {
pub fn parse_to_ast[I]()(I) -> UserInputAST pub fn parse_to_ast[I]()(I) -> UserInputAST
where [I: Stream<Item = char>] where [I: Stream<Item = char>]
{ {
( spaces().with(optional(ast()).skip(eof())).map(|opt_ast| opt_ast.unwrap_or_else(UserInputAST::empty_query))
attempt(
chainl1(
leaf().map(Element::SingleEl),
binary_operand().map(|op: BinaryOperand|
move |left: Element, right: Element| {
let mut dnf = left.into_dnf();
if let Element::SingleEl(el) = right {
match op {
BinaryOperand::And => {
if let Some(last) = dnf.last_mut() {
last.push(el);
}
}
BinaryOperand::Or => {
dnf.push(vec!(el));
}
}
} else {
unreachable!("Please report.")
}
Element::NormalDisjunctive(dnf)
}
)
)
.map(query_grammar::Element::into_dnf)
.map(|fnd| {
if fnd.len() == 1 {
UserInputAST::and(fnd.into_iter().next().unwrap()) //< safe
} else {
let conjunctions = fnd
.into_iter()
.map(UserInputAST::and)
.collect();
UserInputAST::or(conjunctions)
}
})
)
.or(
sep_by(leaf(), spaces())
.map(|subqueries: Vec<UserInputAST>| {
if subqueries.len() == 1 {
subqueries.into_iter().next().unwrap()
} else {
UserInputAST::Clause(subqueries.into_iter().collect())
}
})
)
)
} }
} }
@@ -225,6 +251,18 @@ mod test {
assert!(parse_to_ast().parse(query).is_err()); assert!(parse_to_ast().parse(query).is_err());
} }
#[test]
fn test_parse_empty_to_ast() {
test_parse_query_to_ast_helper("", "<emptyclause>");
}
#[test]
fn test_parse_query_to_ast_hyphen() {
test_parse_query_to_ast_helper("\"www-form-encoded\"", "\"www-form-encoded\"");
test_parse_query_to_ast_helper("www-form-encoded", "\"www-form-encoded\"");
test_parse_query_to_ast_helper("www-form-encoded", "\"www-form-encoded\"");
}
#[test] #[test]
fn test_parse_query_to_ast_not_op() { fn test_parse_query_to_ast_not_op() {
assert_eq!( assert_eq!(
@@ -259,8 +297,67 @@ mod test {
); );
} }
#[test]
fn test_parse_elastic_query_ranges() {
test_parse_query_to_ast_helper("title: >a", "title:{\"a\" TO \"*\"}");
test_parse_query_to_ast_helper("title:>=a", "title:[\"a\" TO \"*\"}");
test_parse_query_to_ast_helper("title: <a", "title:{\"*\" TO \"a\"}");
test_parse_query_to_ast_helper("title:<=a", "title:{\"*\" TO \"a\"]");
test_parse_query_to_ast_helper("title:<=bsd", "title:{\"*\" TO \"bsd\"]");
test_parse_query_to_ast_helper("weight: >70", "weight:{\"70\" TO \"*\"}");
test_parse_query_to_ast_helper("weight:>=70", "weight:[\"70\" TO \"*\"}");
test_parse_query_to_ast_helper("weight: <70", "weight:{\"*\" TO \"70\"}");
test_parse_query_to_ast_helper("weight:<=70", "weight:{\"*\" TO \"70\"]");
test_parse_query_to_ast_helper("weight: >60.7", "weight:{\"60.7\" TO \"*\"}");
test_parse_query_to_ast_helper("weight: <= 70", "weight:{\"*\" TO \"70\"]");
test_parse_query_to_ast_helper("weight: <= 70.5", "weight:{\"*\" TO \"70.5\"]");
}
#[test]
fn test_range_parser() {
// testing the range() parser separately
let res = range().parse("title: <hello").unwrap().0;
let expected = UserInputLeaf::Range {
field: Some("title".to_string()),
lower: UserInputBound::Unbounded,
upper: UserInputBound::Exclusive("hello".to_string()),
};
let res2 = range().parse("title:{* TO hello}").unwrap().0;
assert_eq!(res, expected);
assert_eq!(res2, expected);
let expected_weight = UserInputLeaf::Range {
field: Some("weight".to_string()),
lower: UserInputBound::Inclusive("71.2".to_string()),
upper: UserInputBound::Unbounded,
};
let res3 = range().parse("weight: >=71.2").unwrap().0;
let res4 = range().parse("weight:[71.2 TO *}").unwrap().0;
assert_eq!(res3, expected_weight);
assert_eq!(res4, expected_weight);
}
#[test]
fn test_parse_query_to_triming_spaces() {
test_parse_query_to_ast_helper(" abc", "\"abc\"");
test_parse_query_to_ast_helper("abc ", "\"abc\"");
test_parse_query_to_ast_helper("( a OR abc)", "(?(\"a\") ?(\"abc\"))");
test_parse_query_to_ast_helper("(a OR abc)", "(?(\"a\") ?(\"abc\"))");
test_parse_query_to_ast_helper("(a OR abc)", "(?(\"a\") ?(\"abc\"))");
test_parse_query_to_ast_helper("a OR abc ", "(?(\"a\") ?(\"abc\"))");
test_parse_query_to_ast_helper("(a OR abc )", "(?(\"a\") ?(\"abc\"))");
test_parse_query_to_ast_helper("(a OR abc) ", "(?(\"a\") ?(\"abc\"))");
}
#[test] #[test]
fn test_parse_query_to_ast() { fn test_parse_query_to_ast() {
test_parse_query_to_ast_helper("abc", "\"abc\"");
test_parse_query_to_ast_helper("a b", "(\"a\" \"b\")");
test_parse_query_to_ast_helper("+(a b)", "+((\"a\" \"b\"))");
test_parse_query_to_ast_helper("+d", "+(\"d\")");
test_parse_query_to_ast_helper("+(a b) +d", "(+((\"a\" \"b\")) +(\"d\"))"); test_parse_query_to_ast_helper("+(a b) +d", "(+((\"a\" \"b\")) +(\"d\"))");
test_parse_query_to_ast_helper("(+a +b) d", "((+(\"a\") +(\"b\")) \"d\")"); test_parse_query_to_ast_helper("(+a +b) d", "((+(\"a\") +(\"b\")) \"d\")");
test_parse_query_to_ast_helper("(+a)", "+(\"a\")"); test_parse_query_to_ast_helper("(+a)", "+(\"a\")");
@@ -276,7 +373,7 @@ mod test {
test_parse_query_to_ast_helper("[1 TO 5]", "[\"1\" TO \"5\"]"); test_parse_query_to_ast_helper("[1 TO 5]", "[\"1\" TO \"5\"]");
test_parse_query_to_ast_helper("foo:{a TO z}", "foo:{\"a\" TO \"z\"}"); test_parse_query_to_ast_helper("foo:{a TO z}", "foo:{\"a\" TO \"z\"}");
test_parse_query_to_ast_helper("foo:[1 TO toto}", "foo:[\"1\" TO \"toto\"}"); test_parse_query_to_ast_helper("foo:[1 TO toto}", "foo:[\"1\" TO \"toto\"}");
test_parse_query_to_ast_helper("foo:[* TO toto}", "foo:[\"*\" TO \"toto\"}"); test_parse_query_to_ast_helper("foo:[* TO toto}", "foo:{\"*\" TO \"toto\"}");
test_parse_query_to_ast_helper("foo:[1 TO *}", "foo:[\"1\" TO \"*\"}"); test_parse_query_to_ast_helper("foo:[1 TO *}", "foo:[\"1\" TO \"*\"}");
test_parse_query_to_ast_helper("foo:[1.1 TO *}", "foo:[\"1.1\" TO \"*\"}"); test_parse_query_to_ast_helper("foo:[1.1 TO *}", "foo:[\"1.1\" TO \"*\"}");
test_is_parse_err("abc + "); test_is_parse_err("abc + ");

View File

@@ -18,42 +18,56 @@ use crate::schema::{FieldType, Term};
use crate::tokenizer::TokenizerManager; use crate::tokenizer::TokenizerManager;
use combine::Parser; use combine::Parser;
use std::borrow::Cow; use std::borrow::Cow;
use std::num::{ParseIntError, ParseFloatError}; use std::num::{ParseFloatError, ParseIntError};
use std::ops::Bound; use std::ops::Bound;
use std::str::FromStr; use std::str::FromStr;
/// Possible error that may happen when parsing a query. /// Possible error that may happen when parsing a query.
#[derive(Debug, PartialEq, Eq)] #[derive(Debug, PartialEq, Eq, Fail)]
pub enum QueryParserError { pub enum QueryParserError {
/// Error in the query syntax /// Error in the query syntax
#[fail(display = "Syntax Error")]
SyntaxError, SyntaxError,
/// `FieldDoesNotExist(field_name: String)` /// `FieldDoesNotExist(field_name: String)`
/// The query references a field that is not in the schema /// The query references a field that is not in the schema
#[fail(display = "File does not exists: '{:?}'", _0)]
FieldDoesNotExist(String), FieldDoesNotExist(String),
/// The query contains a term for a `u64` or `i64`-field, but the value /// The query contains a term for a `u64` or `i64`-field, but the value
/// is neither. /// is neither.
#[fail(display = "Expected a valid integer: '{:?}'", _0)]
ExpectedInt(ParseIntError), ExpectedInt(ParseIntError),
/// The query contains a term for a `f64`-field, but the value /// The query contains a term for a `f64`-field, but the value
/// is not a f64. /// is not a f64.
#[fail(display = "Invalid query: Only excluding terms given")]
ExpectedFloat(ParseFloatError), ExpectedFloat(ParseFloatError),
/// Queries that are only "excluding" are forbidden (e.g. -title:pop). /// Queries that are only "excluding" are forbidden (e.g. -title:pop).
#[fail(display = "Invalid query: Only excluding terms given")]
AllButQueryForbidden, AllButQueryForbidden,
/// If no default field is declared, running a query without any /// If no default field is declared, running a query without any
/// field specified is forbidden. /// field specified is forbidden.
#[fail(display = "No default field declared and no field specified in query")]
NoDefaultFieldDeclared, NoDefaultFieldDeclared,
/// The field searched for is not declared /// The field searched for is not declared
/// as indexed in the schema. /// as indexed in the schema.
#[fail(display = "The field '{:?}' is not declared as indexed", _0)]
FieldNotIndexed(String), FieldNotIndexed(String),
/// A phrase query was requested for a field that does not /// A phrase query was requested for a field that does not
/// have any positions indexed. /// have any positions indexed.
#[fail(display = "The field '{:?}' does not have positions indexed", _0)]
FieldDoesNotHavePositionsIndexed(String), FieldDoesNotHavePositionsIndexed(String),
/// The tokenizer for the given field is unknown /// The tokenizer for the given field is unknown
/// The two argument strings are the name of the field, the name of the tokenizer /// The two argument strings are the name of the field, the name of the tokenizer
#[fail(
display = "The tokenizer '{:?}' for the field '{:?}' is unknown",
_0, _1
)]
UnknownTokenizer(String, String), UnknownTokenizer(String, String),
/// The query contains a range query with a phrase as one of the bounds. /// The query contains a range query with a phrase as one of the bounds.
/// Only terms can be used as bounds. /// Only terms can be used as bounds.
#[fail(display = "A range query cannot have a phrase as one of the bounds")]
RangeMustNotHavePhrase, RangeMustNotHavePhrase,
/// The format for the date field is not RFC 3339 compliant. /// The format for the date field is not RFC 3339 compliant.
#[fail(display = "The date field has an invalid format")]
DateFormatError(chrono::ParseError), DateFormatError(chrono::ParseError),
} }
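
These attributes come from the `failure` crate's derive (failure 0.1): `#[derive(Fail)]` together with `#[fail(display = ...)]` generates the `Display` and `Fail` impls. A minimal sketch on a hypothetical enum, assuming `failure = "0.1"` with its default derive feature:

    use failure::Fail;

    #[derive(Debug, Fail)]
    enum DemoError {
        #[fail(display = "Syntax Error")]
        Syntax,
        #[fail(display = "Field does not exist: '{:?}'", _0)]
        FieldDoesNotExist(String),
    }

    fn main() {
        let err = DemoError::FieldDoesNotExist("body".to_string());
        // Display comes straight from the #[fail(display = ...)] attribute.
        println!("{}", err);
    }
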
@@ -355,6 +369,7 @@ impl QueryParser {
match *bound { match *bound {
UserInputBound::Inclusive(_) => Ok(Bound::Included(term)), UserInputBound::Inclusive(_) => Ok(Bound::Included(term)),
UserInputBound::Exclusive(_) => Ok(Bound::Excluded(term)), UserInputBound::Exclusive(_) => Ok(Bound::Excluded(term)),
UserInputBound::Unbounded => Ok(Bound::Unbounded),
} }
} }
@@ -614,7 +629,7 @@ mod test {
pub fn test_parse_query_untokenized() { pub fn test_parse_query_untokenized() {
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"nottokenized:\"wordone wordtwo\"", "nottokenized:\"wordone wordtwo\"",
"Term([0, 0, 0, 7, 119, 111, 114, 100, 111, 110, \ "Term(field=7,bytes=[119, 111, 114, 100, 111, 110, \
101, 32, 119, 111, 114, 100, 116, 119, 111])", 101, 32, 119, 111, 114, 100, 116, 119, 111])",
false, false,
); );
@@ -658,7 +673,7 @@ mod test {
.is_ok()); .is_ok());
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"unsigned:2324", "unsigned:2324",
"Term([0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 9, 20])", "Term(field=3,bytes=[0, 0, 0, 0, 0, 0, 9, 20])",
false, false,
); );
@@ -676,22 +691,22 @@ mod test {
} }
#[test] #[test]
pub fn test_parse_query_to_ast_disjunction() { pub fn test_parse_query_to_ast_single_term() {
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:toto", "title:toto",
"Term([0, 0, 0, 0, 116, 111, 116, 111])", "Term(field=0,bytes=[116, 111, 116, 111])",
false, false,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"+title:toto", "+title:toto",
"Term([0, 0, 0, 0, 116, 111, 116, 111])", "Term(field=0,bytes=[116, 111, 116, 111])",
false, false,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"+title:toto -titi", "+title:toto -titi",
"(+Term([0, 0, 0, 0, 116, 111, 116, 111]) \ "(+Term(field=0,bytes=[116, 111, 116, 111]) \
-(Term([0, 0, 0, 0, 116, 105, 116, 105]) \ -(Term(field=0,bytes=[116, 105, 116, 105]) \
Term([0, 0, 0, 1, 116, 105, 116, 105])))", Term(field=1,bytes=[116, 105, 116, 105])))",
false, false,
); );
assert_eq!( assert_eq!(
@@ -700,49 +715,67 @@ mod test {
.unwrap(), .unwrap(),
QueryParserError::AllButQueryForbidden QueryParserError::AllButQueryForbidden
); );
}
#[test]
pub fn test_parse_query_to_ast_two_terms() {
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:a b", "title:a b",
"(Term([0, 0, 0, 0, 97]) (Term([0, 0, 0, 0, 98]) \ "(Term(field=0,bytes=[97]) (Term(field=0,bytes=[98]) Term(field=1,bytes=[98])))",
Term([0, 0, 0, 1, 98])))",
false, false,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:\"a b\"", "title:\"a b\"",
"\"[(0, Term([0, 0, 0, 0, 97])), \ "\"[(0, Term(field=0,bytes=[97])), \
(1, Term([0, 0, 0, 0, 98]))]\"", (1, Term(field=0,bytes=[98]))]\"",
false, false,
); );
}
#[test]
pub fn test_parse_query_to_ast_ranges() {
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:[a TO b]", "title:[a TO b]",
"(Included(Term([0, 0, 0, 0, 97])) TO \ "(Included(Term(field=0,bytes=[97])) TO Included(Term(field=0,bytes=[98])))",
Included(Term([0, 0, 0, 0, 98])))",
false, false,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"[a TO b]", "[a TO b]",
"((Included(Term([0, 0, 0, 0, 97])) TO \ "((Included(Term(field=0,bytes=[97])) TO \
Included(Term([0, 0, 0, 0, 98]))) \ Included(Term(field=0,bytes=[98]))) \
(Included(Term([0, 0, 0, 1, 97])) TO \ (Included(Term(field=1,bytes=[97])) TO \
Included(Term([0, 0, 0, 1, 98]))))", Included(Term(field=1,bytes=[98]))))",
false, false,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:{titi TO toto}", "title:{titi TO toto}",
"(Excluded(Term([0, 0, 0, 0, 116, 105, 116, 105])) TO \ "(Excluded(Term(field=0,bytes=[116, 105, 116, 105])) TO \
Excluded(Term([0, 0, 0, 0, 116, 111, 116, 111])))", Excluded(Term(field=0,bytes=[116, 111, 116, 111])))",
false, false,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:{* TO toto}", "title:{* TO toto}",
"(Unbounded TO \ "(Unbounded TO Excluded(Term(field=0,bytes=[116, 111, 116, 111])))",
Excluded(Term([0, 0, 0, 0, 116, 111, 116, 111])))",
false, false,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:{titi TO *}", "title:{titi TO *}",
"(Excluded(Term([0, 0, 0, 0, 116, 105, 116, 105])) TO Unbounded)", "(Excluded(Term(field=0,bytes=[116, 105, 116, 105])) TO Unbounded)",
false, false,
); );
test_parse_query_to_logical_ast_helper(
"signed:{-5 TO 3}",
"(Excluded(Term(field=2,bytes=[127, 255, 255, 255, 255, 255, 255, 251])) TO \
Excluded(Term(field=2,bytes=[128, 0, 0, 0, 0, 0, 0, 3])))",
false,
);
test_parse_query_to_logical_ast_helper(
"float:{-1.5 TO 1.5}",
"(Excluded(Term(field=10,bytes=[64, 7, 255, 255, 255, 255, 255, 255])) TO \
Excluded(Term(field=10,bytes=[191, 248, 0, 0, 0, 0, 0, 0])))",
false,
);
test_parse_query_to_logical_ast_helper("*", "*", false); test_parse_query_to_logical_ast_helper("*", "*", false);
} }
@@ -844,19 +877,19 @@ mod test {
pub fn test_parse_query_to_ast_conjunction() { pub fn test_parse_query_to_ast_conjunction() {
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:toto", "title:toto",
"Term([0, 0, 0, 0, 116, 111, 116, 111])", "Term(field=0,bytes=[116, 111, 116, 111])",
true, true,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"+title:toto", "+title:toto",
"Term([0, 0, 0, 0, 116, 111, 116, 111])", "Term(field=0,bytes=[116, 111, 116, 111])",
true, true,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"+title:toto -titi", "+title:toto -titi",
"(+Term([0, 0, 0, 0, 116, 111, 116, 111]) \ "(+Term(field=0,bytes=[116, 111, 116, 111]) \
-(Term([0, 0, 0, 0, 116, 105, 116, 105]) \ -(Term(field=0,bytes=[116, 105, 116, 105]) \
Term([0, 0, 0, 1, 116, 105, 116, 105])))", Term(field=1,bytes=[116, 105, 116, 105])))",
true, true,
); );
assert_eq!( assert_eq!(
@@ -867,16 +900,25 @@ mod test {
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:a b", "title:a b",
"(+Term([0, 0, 0, 0, 97]) \ "(+Term(field=0,bytes=[97]) \
+(Term([0, 0, 0, 0, 98]) \ +(Term(field=0,bytes=[98]) \
Term([0, 0, 0, 1, 98])))", Term(field=1,bytes=[98])))",
true, true,
); );
test_parse_query_to_logical_ast_helper( test_parse_query_to_logical_ast_helper(
"title:\"a b\"", "title:\"a b\"",
"\"[(0, Term([0, 0, 0, 0, 97])), \ "\"[(0, Term(field=0,bytes=[97])), \
(1, Term([0, 0, 0, 0, 98]))]\"", (1, Term(field=0,bytes=[98]))]\"",
true, true,
); );
} }
#[test]
pub fn test_query_parser_hyphen() {
test_parse_query_to_logical_ast_helper(
"title:www-form-encoded",
"\"[(0, Term(field=0,bytes=[119, 119, 119])), (1, Term(field=0,bytes=[102, 111, 114, 109])), (2, Term(field=0,bytes=[101, 110, 99, 111, 100, 101, 100]))]\"",
false
);
}
} }

View File

@@ -1,44 +0,0 @@
use std::sync::Arc;
use stemmer;
pub struct StemmerTokenStream<TailTokenStream>
where TailTokenStream: TokenStream {
tail: TailTokenStream,
stemmer: Arc<stemmer::Stemmer>,
}
impl<TailTokenStream> TokenStream for StemmerTokenStream<TailTokenStream>
where TailTokenStream: TokenStream {
fn token(&self) -> &Token {
self.tail.token()
}
fn token_mut(&mut self) -> &mut Token {
self.tail.token_mut()
}
fn advance(&mut self) -> bool {
if self.tail.advance() {
// self.tail.token_mut().term.make_ascii_lowercase();
let new_str = self.stemmer.stem_str(&self.token().term);
true
}
else {
false
}
}
}
impl<TailTokenStream> StemmerTokenStream<TailTokenStream>
where TailTokenStream: TokenStream {
fn wrap(stemmer: Arc<stemmer::Stemmer>, tail: TailTokenStream) -> StemmerTokenStream<TailTokenStream> {
StemmerTokenStream {
tail,
stemmer,
}
}
}

View File

@@ -3,6 +3,7 @@ use std::fmt::{Debug, Formatter};
use crate::query::Occur; use crate::query::Occur;
#[derive(PartialEq)]
pub enum UserInputLeaf { pub enum UserInputLeaf {
Literal(UserInputLiteral), Literal(UserInputLiteral),
All, All,
@@ -35,6 +36,7 @@ impl Debug for UserInputLeaf {
} }
} }
#[derive(PartialEq)]
pub struct UserInputLiteral { pub struct UserInputLiteral {
pub field_name: Option<String>, pub field_name: Option<String>,
pub phrase: String, pub phrase: String,
@@ -49,9 +51,11 @@ impl fmt::Debug for UserInputLiteral {
} }
} }
#[derive(PartialEq)]
pub enum UserInputBound { pub enum UserInputBound {
Inclusive(String), Inclusive(String),
Exclusive(String), Exclusive(String),
Unbounded,
} }
impl UserInputBound { impl UserInputBound {
@@ -59,6 +63,7 @@ impl UserInputBound {
match *self { match *self {
UserInputBound::Inclusive(ref word) => write!(formatter, "[\"{}\"", word), UserInputBound::Inclusive(ref word) => write!(formatter, "[\"{}\"", word),
UserInputBound::Exclusive(ref word) => write!(formatter, "{{\"{}\"", word), UserInputBound::Exclusive(ref word) => write!(formatter, "{{\"{}\"", word),
UserInputBound::Unbounded => write!(formatter, "{{\"*\""),
} }
} }
@@ -66,6 +71,7 @@ impl UserInputBound {
match *self { match *self {
UserInputBound::Inclusive(ref word) => write!(formatter, "\"{}\"]", word), UserInputBound::Inclusive(ref word) => write!(formatter, "\"{}\"]", word),
UserInputBound::Exclusive(ref word) => write!(formatter, "\"{}\"}}", word), UserInputBound::Exclusive(ref word) => write!(formatter, "\"{}\"}}", word),
UserInputBound::Unbounded => write!(formatter, "\"*\"}}"),
} }
} }
@@ -73,6 +79,7 @@ impl UserInputBound {
match *self { match *self {
UserInputBound::Inclusive(ref contents) => contents, UserInputBound::Inclusive(ref contents) => contents,
UserInputBound::Exclusive(ref contents) => contents, UserInputBound::Exclusive(ref contents) => contents,
UserInputBound::Unbounded => &"*",
} }
} }
} }
@@ -80,9 +87,6 @@ impl UserInputBound {
pub enum UserInputAST { pub enum UserInputAST {
Clause(Vec<UserInputAST>), Clause(Vec<UserInputAST>),
Unary(Occur, Box<UserInputAST>), Unary(Occur, Box<UserInputAST>),
// Not(Box<UserInputAST>),
// Should(Box<UserInputAST>),
// Must(Box<UserInputAST>),
Leaf(Box<UserInputLeaf>), Leaf(Box<UserInputLeaf>),
} }
@@ -92,7 +96,7 @@ impl UserInputAST {
} }
fn compose(occur: Occur, asts: Vec<UserInputAST>) -> UserInputAST { fn compose(occur: Occur, asts: Vec<UserInputAST>) -> UserInputAST {
assert!(occur != Occur::MustNot); assert_ne!(occur, Occur::MustNot);
assert!(!asts.is_empty()); assert!(!asts.is_empty());
if asts.len() == 1 { if asts.len() == 1 {
asts.into_iter().next().unwrap() //< safe asts.into_iter().next().unwrap() //< safe
@@ -105,6 +109,10 @@ impl UserInputAST {
} }
} }
pub fn empty_query() -> UserInputAST {
UserInputAST::Clause(Vec::default())
}
pub fn and(asts: Vec<UserInputAST>) -> UserInputAST { pub fn and(asts: Vec<UserInputAST>) -> UserInputAST {
UserInputAST::compose(Occur::Must, asts) UserInputAST::compose(Occur::Must, asts)
} }
@@ -114,42 +122,6 @@ impl UserInputAST {
} }
} }
/*
impl UserInputAST {
fn compose_occur(self, occur: Occur) -> UserInputAST {
match self {
UserInputAST::Not(other) => {
let new_occur = compose_occur(Occur::MustNot, occur);
other.simplify()
}
_ => {
self
}
}
}
pub fn simplify(self) -> UserInputAST {
match self {
UserInputAST::Clause(els) => {
if els.len() == 1 {
return els.into_iter().next().unwrap();
} else {
return self;
}
}
UserInputAST::Not(els) => {
if els.len() == 1 {
return els.into_iter().next().unwrap();
} else {
return self;
}
}
}
}
}
*/
impl From<UserInputLiteral> for UserInputLeaf { impl From<UserInputLiteral> for UserInputLeaf {
fn from(literal: UserInputLiteral) -> UserInputLeaf { fn from(literal: UserInputLiteral) -> UserInputLeaf {
UserInputLeaf::Literal(literal) UserInputLeaf::Literal(literal)

View File

@@ -38,14 +38,10 @@ fn map_bound<TFrom, TTo, Transform: Fn(&TFrom) -> TTo>(
/// # Example /// # Example
/// ///
/// ```rust /// ```rust
///
/// # #[macro_use]
/// # extern crate tantivy;
/// # use tantivy::Index;
/// # use tantivy::schema::{Schema, INDEXED};
/// # use tantivy::collector::Count; /// # use tantivy::collector::Count;
/// # use tantivy::Result;
/// # use tantivy::query::RangeQuery; /// # use tantivy::query::RangeQuery;
/// # use tantivy::schema::{Schema, INDEXED};
/// # use tantivy::{doc, Index, Result};
/// # /// #
/// # fn run() -> Result<()> { /// # fn run() -> Result<()> {
/// # let mut schema_builder = Schema::builder(); /// # let mut schema_builder = Schema::builder();
@@ -338,39 +334,33 @@ mod tests {
use crate::collector::Count; use crate::collector::Count;
use crate::schema::{Document, Field, Schema, INDEXED}; use crate::schema::{Document, Field, Schema, INDEXED};
use crate::Index; use crate::Index;
use crate::Result;
use std::collections::Bound; use std::collections::Bound;
#[test] #[test]
fn test_range_query_simple() { fn test_range_query_simple() {
fn run() -> Result<()> { let mut schema_builder = Schema::builder();
let mut schema_builder = Schema::builder(); let year_field = schema_builder.add_u64_field("year", INDEXED);
let year_field = schema_builder.add_u64_field("year", INDEXED); let schema = schema_builder.build();
let schema = schema_builder.build();
let index = Index::create_in_ram(schema); let index = Index::create_in_ram(schema);
{ {
let mut index_writer = index.writer_with_num_threads(1, 6_000_000).unwrap(); let mut index_writer = index.writer_with_num_threads(1, 6_000_000).unwrap();
for year in 1950u64..2017u64 { for year in 1950u64..2017u64 {
let num_docs_within_year = 10 + (year - 1950) * (year - 1950); let num_docs_within_year = 10 + (year - 1950) * (year - 1950);
for _ in 0..num_docs_within_year { for _ in 0..num_docs_within_year {
index_writer.add_document(doc!(year_field => year)); index_writer.add_document(doc!(year_field => year));
}
} }
index_writer.commit().unwrap();
} }
let reader = index.reader().unwrap(); index_writer.commit().unwrap();
let searcher = reader.searcher();
let docs_in_the_sixties = RangeQuery::new_u64(year_field, 1960u64..1970u64);
// ... or `1960..=1969` if inclusive range is enabled.
let count = searcher.search(&docs_in_the_sixties, &Count)?;
assert_eq!(count, 2285);
Ok(())
} }
let reader = index.reader().unwrap();
let searcher = reader.searcher();
run().unwrap(); let docs_in_the_sixties = RangeQuery::new_u64(year_field, 1960u64..1970u64);
// ... or `1960..=1969` if inclusive range is enabled.
let count = searcher.search(&docs_in_the_sixties, &Count).unwrap();
assert_eq!(count, 2285);
} }
#[test] #[test]
@@ -460,7 +450,10 @@ mod tests {
let count_multiples = let count_multiples =
|range_query: RangeQuery| searcher.search(&range_query, &Count).unwrap(); |range_query: RangeQuery| searcher.search(&range_query, &Count).unwrap();
assert_eq!(count_multiples(RangeQuery::new_f64(float_field, 10.0..11.0)), 9); assert_eq!(
count_multiples(RangeQuery::new_f64(float_field, 10.0..11.0)),
9
);
assert_eq!( assert_eq!(
count_multiples(RangeQuery::new_f64_bounds( count_multiples(RangeQuery::new_f64_bounds(
float_field, float_field,

View File

@@ -4,22 +4,18 @@ use crate::schema::Field;
use crate::Result; use crate::Result;
use crate::Searcher; use crate::Searcher;
use std::clone::Clone; use std::clone::Clone;
use std::sync::Arc;
use tantivy_fst::Regex; use tantivy_fst::Regex;
// A Regex Query matches all of the documents /// A Regex Query matches all of the documents
/// containing a specific term that matches /// containing a specific term that matches
/// a regex pattern /// a regex pattern.
/// A Fuzzy Query matches all of the documents
/// containing a specific term that is within
/// Levenshtein distance
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{Index, Result, Term};
/// use tantivy::collector::Count; /// use tantivy::collector::Count;
/// use tantivy::query::RegexQuery; /// use tantivy::query::RegexQuery;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{doc, Index, Result, Term};
/// ///
/// # fn main() { example().unwrap(); } /// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> { /// fn example() -> Result<()> {
@@ -48,7 +44,7 @@ use tantivy_fst::Regex;
/// let searcher = reader.searcher(); /// let searcher = reader.searcher();
/// ///
/// let term = Term::from_field_text(title, "Diary"); /// let term = Term::from_field_text(title, "Diary");
/// let query = RegexQuery::new("d[ai]{2}ry".to_string(), title); /// let query = RegexQuery::from_pattern("d[ai]{2}ry", title)?;
/// let count = searcher.search(&query, &Count)?; /// let count = searcher.search(&query, &Count)?;
/// assert_eq!(count, 3); /// assert_eq!(count, 3);
/// Ok(()) /// Ok(())
@@ -56,30 +52,34 @@ use tantivy_fst::Regex;
/// ``` /// ```
#[derive(Debug, Clone)] #[derive(Debug, Clone)]
pub struct RegexQuery { pub struct RegexQuery {
regex_pattern: String, regex: Arc<Regex>,
field: Field, field: Field,
} }
impl RegexQuery { impl RegexQuery {
/// Creates a new Fuzzy Query /// Creates a new RegexQuery from a given pattern
pub fn new(regex_pattern: String, field: Field) -> RegexQuery { pub fn from_pattern(regex_pattern: &str, field: Field) -> Result<Self> {
let regex = Regex::new(&regex_pattern)
.map_err(|_| TantivyError::InvalidArgument(regex_pattern.to_string()))?;
Ok(RegexQuery::from_regex(regex, field))
}
/// Creates a new RegexQuery from a fully built Regex
pub fn from_regex<T: Into<Arc<Regex>>>(regex: T, field: Field) -> Self {
RegexQuery { RegexQuery {
regex_pattern, regex: regex.into(),
field, field,
} }
} }
fn specialized_weight(&self) -> Result<AutomatonWeight<Regex>> { fn specialized_weight(&self) -> AutomatonWeight<Regex> {
let automaton = Regex::new(&self.regex_pattern) AutomatonWeight::new(self.field, self.regex.clone())
.map_err(|_| TantivyError::InvalidArgument(self.regex_pattern.clone()))?;
Ok(AutomatonWeight::new(self.field, automaton))
} }
} }
impl Query for RegexQuery { impl Query for RegexQuery {
fn weight(&self, _searcher: &Searcher, _scoring_enabled: bool) -> Result<Box<dyn Weight>> { fn weight(&self, _searcher: &Searcher, _scoring_enabled: bool) -> Result<Box<dyn Weight>> {
Ok(Box::new(self.specialized_weight()?)) Ok(Box::new(self.specialized_weight()))
} }
} }
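
With `from_regex` taking `Into<Arc<Regex>>`, a caller can compile the pattern once and reuse it across several fields, which is what the original issue asked for. A hedged sketch (the field handles are assumed to come from the caller's schema; the helper function is illustrative):

    use std::sync::Arc;
    use tantivy::query::RegexQuery;
    use tantivy::schema::Field;
    use tantivy_fst::Regex;

    fn queries_for(title: Field, body: Field) -> (RegexQuery, RegexQuery) {
        // Compiling the Regex is the expensive step, so do it once.
        let pattern = Arc::new(Regex::new("d[ai]{2}ry").expect("valid pattern"));
        (
            RegexQuery::from_regex(pattern.clone(), title),
            RegexQuery::from_regex(pattern, body),
        )
    }
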
@@ -87,13 +87,14 @@ impl Query for RegexQuery {
mod test { mod test {
use super::RegexQuery; use super::RegexQuery;
use crate::collector::TopDocs; use crate::collector::TopDocs;
use crate::schema::Schema;
use crate::schema::TEXT; use crate::schema::TEXT;
use crate::schema::{Field, Schema};
use crate::tests::assert_nearly_equals; use crate::tests::assert_nearly_equals;
use crate::Index; use crate::{Index, IndexReader};
use std::sync::Arc;
use tantivy_fst::Regex;
#[test] fn build_test_index() -> (IndexReader, Field) {
pub fn test_regex_query() {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let country_field = schema_builder.add_text_field("country", TEXT); let country_field = schema_builder.add_text_field("country", TEXT);
let schema = schema_builder.build(); let schema = schema_builder.build();
@@ -109,20 +110,65 @@ mod test {
index_writer.commit().unwrap(); index_writer.commit().unwrap();
} }
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
(reader, country_field)
}
fn verify_regex_query(
query_matching_one: RegexQuery,
query_matching_zero: RegexQuery,
reader: IndexReader,
) {
let searcher = reader.searcher(); let searcher = reader.searcher();
{ {
let regex_query = RegexQuery::new("jap[ao]n".to_string(), country_field);
let scored_docs = searcher let scored_docs = searcher
.search(&regex_query, &TopDocs::with_limit(2)) .search(&query_matching_one, &TopDocs::with_limit(2))
.unwrap(); .unwrap();
assert_eq!(scored_docs.len(), 1, "Expected only 1 document"); assert_eq!(scored_docs.len(), 1, "Expected only 1 document");
let (score, _) = scored_docs[0]; let (score, _) = scored_docs[0];
assert_nearly_equals(1f32, score); assert_nearly_equals(1f32, score);
} }
let regex_query = RegexQuery::new("jap[A-Z]n".to_string(), country_field);
let top_docs = searcher let top_docs = searcher
.search(&regex_query, &TopDocs::with_limit(2)) .search(&query_matching_zero, &TopDocs::with_limit(2))
.unwrap(); .unwrap();
assert!(top_docs.is_empty(), "Expected ZERO document"); assert!(top_docs.is_empty(), "Expected ZERO document");
} }
#[test]
pub fn test_regex_query() {
let (reader, field) = build_test_index();
let matching_one = RegexQuery::from_pattern("jap[ao]n", field).unwrap();
let matching_zero = RegexQuery::from_pattern("jap[A-Z]n", field).unwrap();
verify_regex_query(matching_one, matching_zero, reader);
}
#[test]
pub fn test_construct_from_regex() {
let (reader, field) = build_test_index();
let matching_one = RegexQuery::from_regex(Regex::new("jap[ao]n").unwrap(), field);
let matching_zero = RegexQuery::from_regex(Regex::new("jap[A-Z]n").unwrap(), field);
verify_regex_query(matching_one, matching_zero, reader);
}
#[test]
pub fn test_construct_from_reused_regex() {
let r1 = Arc::new(Regex::new("jap[ao]n").unwrap());
let r2 = Arc::new(Regex::new("jap[A-Z]n").unwrap());
let (reader, field) = build_test_index();
let matching_one = RegexQuery::from_regex(r1.clone(), field);
let matching_zero = RegexQuery::from_regex(r2.clone(), field);
verify_regex_query(matching_one, matching_zero, reader.clone());
let matching_one = RegexQuery::from_regex(r1.clone(), field);
let matching_zero = RegexQuery::from_regex(r2.clone(), field);
verify_regex_query(matching_one, matching_zero, reader.clone());
}
} }

View File

@@ -12,7 +12,7 @@ mod tests {
use crate::collector::TopDocs; use crate::collector::TopDocs;
use crate::docset::DocSet; use crate::docset::DocSet;
use crate::query::{Query, QueryParser, Scorer, TermQuery}; use crate::query::{Query, QueryParser, Scorer, TermQuery};
use crate::schema::{IndexRecordOption, Schema, STRING, TEXT}; use crate::schema::{Field, IndexRecordOption, Schema, STRING, TEXT};
use crate::tests::assert_nearly_equals; use crate::tests::assert_nearly_equals;
use crate::Index; use crate::Index;
use crate::Term; use crate::Term;
@@ -114,4 +114,16 @@ mod tests {
let reader = index.reader().unwrap(); let reader = index.reader().unwrap();
assert_eq!(term_query.count(&*reader.searcher()).unwrap(), 1); assert_eq!(term_query.count(&*reader.searcher()).unwrap(), 1);
} }
#[test]
fn test_term_query_debug() {
let term_query = TermQuery::new(
Term::from_field_text(Field(1), "hello"),
IndexRecordOption::WithFreqs,
);
assert_eq!(
format!("{:?}", term_query),
"TermQuery(Term(field=1,bytes=[104, 101, 108, 108, 111]))"
);
}
} }

View File

@@ -7,6 +7,7 @@ use crate::Result;
use crate::Searcher; use crate::Searcher;
use crate::Term; use crate::Term;
use std::collections::BTreeSet; use std::collections::BTreeSet;
use std::fmt;
/// A Term query matches all of the documents /// A Term query matches all of the documents
/// containing a specific term. /// containing a specific term.
@@ -19,12 +20,10 @@ use std::collections::BTreeSet;
/// * `field norm` - number of tokens in the field. /// * `field norm` - number of tokens in the field.
/// ///
/// ```rust /// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Schema, TEXT, IndexRecordOption};
/// use tantivy::{Index, Result, Term};
/// use tantivy::collector::{Count, TopDocs}; /// use tantivy::collector::{Count, TopDocs};
/// use tantivy::query::TermQuery; /// use tantivy::query::TermQuery;
/// use tantivy::schema::{Schema, TEXT, IndexRecordOption};
/// use tantivy::{doc, Index, Result, Term};
/// ///
/// # fn main() { example().unwrap(); } /// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> { /// fn example() -> Result<()> {
@@ -61,12 +60,18 @@ use std::collections::BTreeSet;
/// Ok(()) /// Ok(())
/// } /// }
/// ``` /// ```
#[derive(Clone, Debug)] #[derive(Clone)]
pub struct TermQuery { pub struct TermQuery {
term: Term, term: Term,
index_record_option: IndexRecordOption, index_record_option: IndexRecordOption,
} }
impl fmt::Debug for TermQuery {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "TermQuery({:?})", self.term)
}
}
impl TermQuery { impl TermQuery {
/// Creates a new term query. /// Creates a new term query.
pub fn new(term: Term, segment_postings_options: IndexRecordOption) -> TermQuery { pub fn new(term: Term, segment_postings_options: IndexRecordOption) -> TermQuery {

View File

@@ -1,6 +1,7 @@
mod pool; mod pool;
use self::pool::{LeasedItem, Pool}; pub use self::pool::LeasedItem;
use self::pool::Pool;
use crate::core::Segment; use crate::core::Segment;
use crate::directory::Directory; use crate::directory::Directory;
use crate::directory::WatchHandle; use crate::directory::WatchHandle;

View File

@@ -123,6 +123,10 @@ impl<T> Pool<T> {
} }
} }
/// A LeasedItem holds an object borrowed from a Pool.
///
/// Upon drop, the object is automatically returned
/// into the pool.
pub struct LeasedItem<T> { pub struct LeasedItem<T> {
gen_item: Option<GenerationItem<T>>, gen_item: Option<GenerationItem<T>>,
recycle_queue: Arc<Queue<GenerationItem<T>>>, recycle_queue: Arc<Queue<GenerationItem<T>>>,
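
The return-on-drop behaviour documented above is the usual object-pool pattern; a generic sketch outside tantivy (illustrative types only, not the crate's `Pool`):

    use std::sync::{Arc, Mutex};

    struct Lease<T> {
        item: Option<T>,
        pool: Arc<Mutex<Vec<T>>>,
    }

    impl<T> Drop for Lease<T> {
        fn drop(&mut self) {
            // Hand the object back to the shared pool instead of dropping it.
            if let Some(item) = self.item.take() {
                self.pool.lock().unwrap().push(item);
            }
        }
    }
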

View File

@@ -63,7 +63,7 @@ impl FragmentCandidate {
fn try_add_token(&mut self, token: &Token, terms: &BTreeMap<String, f32>) { fn try_add_token(&mut self, token: &Token, terms: &BTreeMap<String, f32>) {
self.stop_offset = token.offset_to; self.stop_offset = token.offset_to;
if let Some(score) = terms.get(&token.text.to_lowercase()) { if let Some(&score) = terms.get(&token.text.to_lowercase()) {
self.score += score; self.score += score;
self.highlighted self.highlighted
.push(HighlightSection::new(token.offset_from, token.offset_to)); .push(HighlightSection::new(token.offset_from, token.offset_to));
@@ -142,7 +142,7 @@ impl Snippet {
/// Fragments must be valid in the sense that `&text[fragment.start..fragment.stop]`\ /// Fragments must be valid in the sense that `&text[fragment.start..fragment.stop]`\
/// has to be a valid string. /// has to be a valid string.
fn search_fragments<'a>( fn search_fragments<'a>(
tokenizer: &dyn BoxedTokenizer, tokenizer: &BoxedTokenizer,
text: &'a str, text: &'a str,
terms: &BTreeMap<String, f32>, terms: &BTreeMap<String, f32>,
max_num_chars: usize, max_num_chars: usize,
@@ -150,7 +150,6 @@ fn search_fragments<'a>(
let mut token_stream = tokenizer.token_stream(text); let mut token_stream = tokenizer.token_stream(text);
let mut fragment = FragmentCandidate::new(0); let mut fragment = FragmentCandidate::new(0);
let mut fragments: Vec<FragmentCandidate> = vec![]; let mut fragments: Vec<FragmentCandidate> = vec![];
while let Some(next) = token_stream.next() { while let Some(next) = token_stream.next() {
if (next.offset_to - fragment.start_offset) > max_num_chars { if (next.offset_to - fragment.start_offset) > max_num_chars {
if fragment.score > 0.0 { if fragment.score > 0.0 {
@@ -214,11 +213,9 @@ fn select_best_fragment_combination(fragments: &[FragmentCandidate], text: &str)
/// # Example /// # Example
/// ///
/// ```rust /// ```rust
/// # #[macro_use]
/// # extern crate tantivy;
/// # use tantivy::Index;
/// # use tantivy::schema::{Schema, TEXT};
/// # use tantivy::query::QueryParser; /// # use tantivy::query::QueryParser;
/// # use tantivy::schema::{Schema, TEXT};
/// # use tantivy::{doc, Index};
/// use tantivy::SnippetGenerator; /// use tantivy::SnippetGenerator;
/// ///
/// # fn main() -> tantivy::Result<()> { /// # fn main() -> tantivy::Result<()> {
@@ -254,7 +251,7 @@ fn select_best_fragment_combination(fragments: &[FragmentCandidate], text: &str)
/// ``` /// ```
pub struct SnippetGenerator { pub struct SnippetGenerator {
terms_text: BTreeMap<String, f32>, terms_text: BTreeMap<String, f32>,
tokenizer: Box<dyn BoxedTokenizer>, tokenizer: BoxedTokenizer,
field: Field, field: Field,
max_num_chars: usize, max_num_chars: usize,
} }
@@ -316,12 +313,8 @@ impl SnippetGenerator {
/// Generates a snippet for the given text. /// Generates a snippet for the given text.
pub fn snippet(&self, text: &str) -> Snippet { pub fn snippet(&self, text: &str) -> Snippet {
let fragment_candidates = search_fragments( let fragment_candidates =
&*self.tokenizer, search_fragments(&self.tokenizer, &text, &self.terms_text, self.max_num_chars);
&text,
&self.terms_text,
self.max_num_chars,
);
select_best_fragment_combination(&fragment_candidates[..], &text) select_best_fragment_combination(&fragment_candidates[..], &text)
} }
} }
@@ -331,7 +324,7 @@ mod tests {
use super::{search_fragments, select_best_fragment_combination}; use super::{search_fragments, select_best_fragment_combination};
use crate::query::QueryParser; use crate::query::QueryParser;
use crate::schema::{IndexRecordOption, Schema, TextFieldIndexing, TextOptions, TEXT}; use crate::schema::{IndexRecordOption, Schema, TextFieldIndexing, TextOptions, TEXT};
use crate::tokenizer::{box_tokenizer, SimpleTokenizer}; use crate::tokenizer::SimpleTokenizer;
use crate::Index; use crate::Index;
use crate::SnippetGenerator; use crate::SnippetGenerator;
use maplit::btreemap; use maplit::btreemap;
@@ -355,12 +348,12 @@ Survey in 2016, 2017, and 2018."#;
#[test] #[test]
fn test_snippet() { fn test_snippet() {
let boxed_tokenizer = box_tokenizer(SimpleTokenizer); let boxed_tokenizer = SimpleTokenizer.into();
let terms = btreemap! { let terms = btreemap! {
String::from("rust") => 1.0, String::from("rust") => 1.0,
String::from("language") => 0.9 String::from("language") => 0.9
}; };
let fragments = search_fragments(&*boxed_tokenizer, TEST_TEXT, &terms, 100); let fragments = search_fragments(&boxed_tokenizer, TEST_TEXT, &terms, 100);
assert_eq!(fragments.len(), 7); assert_eq!(fragments.len(), 7);
{ {
let first = &fragments[0]; let first = &fragments[0];
@@ -382,13 +375,13 @@ Survey in 2016, 2017, and 2018."#;
#[test] #[test]
fn test_snippet_scored_fragment() { fn test_snippet_scored_fragment() {
let boxed_tokenizer = box_tokenizer(SimpleTokenizer); let boxed_tokenizer = SimpleTokenizer.into();
{ {
let terms = btreemap! { let terms = btreemap! {
String::from("rust") =>1.0f32, String::from("rust") =>1.0f32,
String::from("language") => 0.9f32 String::from("language") => 0.9f32
}; };
let fragments = search_fragments(&*boxed_tokenizer, TEST_TEXT, &terms, 20); let fragments = search_fragments(&boxed_tokenizer, TEST_TEXT, &terms, 20);
{ {
let first = &fragments[0]; let first = &fragments[0];
assert_eq!(first.score, 1.0); assert_eq!(first.score, 1.0);
@@ -397,13 +390,13 @@ Survey in 2016, 2017, and 2018."#;
let snippet = select_best_fragment_combination(&fragments[..], &TEST_TEXT); let snippet = select_best_fragment_combination(&fragments[..], &TEST_TEXT);
assert_eq!(snippet.to_html(), "<b>Rust</b> is a systems") assert_eq!(snippet.to_html(), "<b>Rust</b> is a systems")
} }
let boxed_tokenizer = box_tokenizer(SimpleTokenizer); let boxed_tokenizer = SimpleTokenizer.into();
{ {
let terms = btreemap! { let terms = btreemap! {
String::from("rust") =>0.9f32, String::from("rust") =>0.9f32,
String::from("language") => 1.0f32 String::from("language") => 1.0f32
}; };
let fragments = search_fragments(&*boxed_tokenizer, TEST_TEXT, &terms, 20); let fragments = search_fragments(&boxed_tokenizer, TEST_TEXT, &terms, 20);
//assert_eq!(fragments.len(), 7); //assert_eq!(fragments.len(), 7);
{ {
let first = &fragments[0]; let first = &fragments[0];
@@ -417,14 +410,14 @@ Survey in 2016, 2017, and 2018."#;
#[test] #[test]
fn test_snippet_in_second_fragment() { fn test_snippet_in_second_fragment() {
let boxed_tokenizer = box_tokenizer(SimpleTokenizer); let boxed_tokenizer = SimpleTokenizer.into();
let text = "a b c d e f g"; let text = "a b c d e f g";
let mut terms = BTreeMap::new(); let mut terms = BTreeMap::new();
terms.insert(String::from("c"), 1.0); terms.insert(String::from("c"), 1.0);
let fragments = search_fragments(&*boxed_tokenizer, &text, &terms, 3); let fragments = search_fragments(&boxed_tokenizer, &text, &terms, 3);
assert_eq!(fragments.len(), 1); assert_eq!(fragments.len(), 1);
{ {
@@ -441,14 +434,14 @@ Survey in 2016, 2017, and 2018."#;
#[test] #[test]
fn test_snippet_with_term_at_the_end_of_fragment() { fn test_snippet_with_term_at_the_end_of_fragment() {
let boxed_tokenizer = box_tokenizer(SimpleTokenizer); let boxed_tokenizer = SimpleTokenizer.into();
let text = "a b c d e f f g"; let text = "a b c d e f f g";
let mut terms = BTreeMap::new(); let mut terms = BTreeMap::new();
terms.insert(String::from("f"), 1.0); terms.insert(String::from("f"), 1.0);
let fragments = search_fragments(&*boxed_tokenizer, &text, &terms, 3); let fragments = search_fragments(&boxed_tokenizer, &text, &terms, 3);
assert_eq!(fragments.len(), 2); assert_eq!(fragments.len(), 2);
{ {
@@ -465,7 +458,7 @@ Survey in 2016, 2017, and 2018."#;
#[test] #[test]
fn test_snippet_with_second_fragment_has_the_highest_score() { fn test_snippet_with_second_fragment_has_the_highest_score() {
let boxed_tokenizer = box_tokenizer(SimpleTokenizer); let boxed_tokenizer = SimpleTokenizer.into();
let text = "a b c d e f g"; let text = "a b c d e f g";
@@ -473,7 +466,7 @@ Survey in 2016, 2017, and 2018."#;
terms.insert(String::from("f"), 1.0); terms.insert(String::from("f"), 1.0);
terms.insert(String::from("a"), 0.9); terms.insert(String::from("a"), 0.9);
let fragments = search_fragments(&*boxed_tokenizer, &text, &terms, 7); let fragments = search_fragments(&boxed_tokenizer, &text, &terms, 7);
assert_eq!(fragments.len(), 2); assert_eq!(fragments.len(), 2);
{ {
@@ -490,14 +483,14 @@ Survey in 2016, 2017, and 2018."#;
#[test] #[test]
fn test_snippet_with_term_not_in_text() { fn test_snippet_with_term_not_in_text() {
let boxed_tokenizer = box_tokenizer(SimpleTokenizer); let boxed_tokenizer = SimpleTokenizer.into();
let text = "a b c d"; let text = "a b c d";
let mut terms = BTreeMap::new(); let mut terms = BTreeMap::new();
terms.insert(String::from("z"), 1.0); terms.insert(String::from("z"), 1.0);
let fragments = search_fragments(&*boxed_tokenizer, &text, &terms, 3); let fragments = search_fragments(&boxed_tokenizer, &text, &terms, 3);
assert_eq!(fragments.len(), 0); assert_eq!(fragments.len(), 0);
@@ -508,12 +501,12 @@ Survey in 2016, 2017, and 2018."#;
#[test] #[test]
fn test_snippet_with_no_terms() { fn test_snippet_with_no_terms() {
let boxed_tokenizer = box_tokenizer(SimpleTokenizer); let boxed_tokenizer = SimpleTokenizer.into();
let text = "a b c d"; let text = "a b c d";
let terms = BTreeMap::new(); let terms = BTreeMap::new();
let fragments = search_fragments(&*boxed_tokenizer, &text, &terms, 3); let fragments = search_fragments(&boxed_tokenizer, &text, &terms, 3);
assert_eq!(fragments.len(), 0); assert_eq!(fragments.len(), 0);
let snippet = select_best_fragment_combination(&fragments[..], &text); let snippet = select_best_fragment_combination(&fragments[..], &text);

View File

@@ -268,7 +268,7 @@ mod tests {
#[test] #[test]
fn test_term_info_block() { fn test_term_info_block() {
common::test::fixed_size_test::<TermInfoBlockMeta>(); common::fixed_size_test::<TermInfoBlockMeta>();
} }
#[test] #[test]

tantivy-common/Cargo.toml Normal file
View File

@@ -0,0 +1,10 @@
[package]
name = "tantivy-common"
version = "0.1.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
edition = "2018"
workspace = ".."
[dependencies]
byteorder = "*"
chrono = "*"

View File

@@ -2,7 +2,7 @@ use byteorder::{ByteOrder, LittleEndian, WriteBytesExt};
use std::io; use std::io;
use std::ops::Deref; use std::ops::Deref;
pub(crate) struct BitPacker { pub struct BitPacker {
mini_buffer: u64, mini_buffer: u64,
mini_buffer_written: usize, mini_buffer_written: usize,
} }

View File

@@ -2,7 +2,7 @@ use std::fmt;
use std::u64; use std::u64;
#[derive(Clone, Copy, Eq, PartialEq)] #[derive(Clone, Copy, Eq, PartialEq)]
pub(crate) struct TinySet(u64); pub struct TinySet(u64);
impl fmt::Debug for TinySet { impl fmt::Debug for TinySet {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
@@ -179,7 +179,7 @@ impl BitSet {
/// ///
/// Reminder: the tiny set with the bucket `bucket` represents the /// Reminder: the tiny set with the bucket `bucket` represents the
/// elements from `bucket * 64` to `(bucket+1) * 64`. /// elements from `bucket * 64` to `(bucket+1) * 64`.
pub(crate) fn first_non_empty_bucket(&self, bucket: u32) -> Option<u32> { pub fn first_non_empty_bucket(&self, bucket: u32) -> Option<u32> {
self.tinysets[bucket as usize..] self.tinysets[bucket as usize..]
.iter() .iter()
.cloned() .cloned()
@@ -194,7 +194,7 @@ impl BitSet {
/// Returns the tiny bitset representing the /// Returns the tiny bitset representing the
/// the set restricted to the number range from /// the set restricted to the number range from
/// `bucket * 64` to `(bucket + 1) * 64`. /// `bucket * 64` to `(bucket + 1) * 64`.
pub(crate) fn tinyset(&self, bucket: u32) -> TinySet { pub fn tinyset(&self, bucket: u32) -> TinySet {
self.tinysets[bucket as usize] self.tinysets[bucket as usize]
} }
} }
@@ -204,12 +204,7 @@ mod tests {
use super::BitSet; use super::BitSet;
use super::TinySet; use super::TinySet;
use crate::docset::DocSet; use std::collections::{BTreeSet, HashSet};
use crate::query::BitSetDocSet;
use crate::tests;
use crate::tests::generate_nonunique_unsorted;
use std::collections::BTreeSet;
use std::collections::HashSet;
#[test] #[test]
fn test_tiny_set() { fn test_tiny_set() {
@@ -264,26 +259,19 @@ mod tests {
test_against_hashset(&[62u32, 63u32], 64); test_against_hashset(&[62u32, 63u32], 64);
} }
#[test] // #[test]
fn test_bitset_large() { // fn test_bitset_clear() {
let arr = generate_nonunique_unsorted(100_000, 5_000); // let mut bitset = BitSet::with_max_value(1_000);
let mut btreeset: BTreeSet<u32> = BTreeSet::new(); // let els = tests::sample(1_000, 0.01f64);
let mut bitset = BitSet::with_max_value(100_000); // for &el in &els {
for el in arr { // bitset.insert(el);
btreeset.insert(el); // }
bitset.insert(el); // assert!(els.iter().all(|el| bitset.contains(*el)));
} // bitset.clear();
for i in 0..100_000 { // for el in 0u32..1000u32 {
assert_eq!(btreeset.contains(&i), bitset.contains(i)); // assert!(!bitset.contains(el));
} // }
assert_eq!(btreeset.len(), bitset.len()); // }
let mut bitset_docset = BitSetDocSet::from(bitset);
for el in btreeset.into_iter() {
bitset_docset.advance();
assert_eq!(bitset_docset.doc(), el);
}
assert!(!bitset_docset.advance());
}
#[test] #[test]
fn test_bitset_num_buckets() { fn test_bitset_num_buckets() {
@@ -339,19 +327,6 @@ mod tests {
assert_eq!(bitset.len(), 3); assert_eq!(bitset.len(), 3);
} }
#[test]
fn test_bitset_clear() {
let mut bitset = BitSet::with_max_value(1_000);
let els = tests::sample(1_000, 0.01f64);
for &el in &els {
bitset.insert(el);
}
assert!(els.iter().all(|el| bitset.contains(*el)));
bitset.clear();
for el in 0u32..1000u32 {
assert!(!bitset.contains(el));
}
}
} }
#[cfg(all(test, feature = "unstable"))] #[cfg(all(test, feature = "unstable"))]
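The `BitSet` above is built out of `TinySet` buckets: bucket `b` covers the values `b * 64 .. (b + 1) * 64`, and `first_non_empty_bucket` / `tinyset` let callers scan those buckets in order. A minimal standalone sketch of the bucket idea (names mirror the diff; the implementation here is illustrative only):

```rust
// One u64 records which of the 64 values inside a single bucket are present.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct TinySet(u64);

impl TinySet {
    fn empty() -> TinySet {
        TinySet(0)
    }
    fn insert(self, el: u32) -> TinySet {
        TinySet(self.0 | (1u64 << el))
    }
    fn contains(self, el: u32) -> bool {
        self.0 & (1u64 << el) != 0
    }
    /// Lowest element present, if any; this is what lets a BitSet iterate a
    /// bucket in order once `first_non_empty_bucket` has located one.
    fn lowest(self) -> Option<u32> {
        if self.0 == 0 {
            None
        } else {
            Some(self.0.trailing_zeros())
        }
    }
}

fn main() {
    // Bucket `b` of a BitSet covers values b * 64 .. (b + 1) * 64, as the doc
    // comments above describe; within a bucket everything is one TinySet.
    let set = TinySet::empty().insert(3).insert(17);
    assert!(set.contains(3) && set.contains(17) && !set.contains(4));
    assert_eq!(set.lowest(), Some(3));
}
```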

View File

@@ -0,0 +1,235 @@
use crate::common::BinarySerializable;
use crate::common::CountingWriter;
use crate::common::VInt;
use crate::directory::ReadOnlySource;
use crate::directory::WritePtr;
use crate::schema::Field;
use crate::space_usage::FieldUsage;
use crate::space_usage::PerFieldSpaceUsage;
use std::collections::HashMap;
use std::io::Write;
use std::io::{self, Read};
#[derive(Eq, PartialEq, Hash, Copy, Ord, PartialOrd, Clone, Debug)]
pub struct FileAddr {
field: Field,
idx: usize,
}
impl FileAddr {
fn new(field: Field, idx: usize) -> FileAddr {
FileAddr { field, idx }
}
}
impl BinarySerializable for FileAddr {
fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
self.field.serialize(writer)?;
VInt(self.idx as u64).serialize(writer)?;
Ok(())
}
fn deserialize<R: Read>(reader: &mut R) -> io::Result<Self> {
let field = Field::deserialize(reader)?;
let idx = VInt::deserialize(reader)?.0 as usize;
Ok(FileAddr { field, idx })
}
}
/// A `CompositeWrite` is used to write a `CompositeFile`.
pub struct CompositeWrite<W = WritePtr> {
write: CountingWriter<W>,
offsets: HashMap<FileAddr, u64>,
}
impl<W: Write> CompositeWrite<W> {
/// Create a new API writer that writes a composite file
/// in a given write.
pub fn wrap(w: W) -> CompositeWrite<W> {
CompositeWrite {
write: CountingWriter::wrap(w),
offsets: HashMap::new(),
}
}
/// Start writing a new field.
pub fn for_field(&mut self, field: Field) -> &mut CountingWriter<W> {
self.for_field_with_idx(field, 0)
}
/// Start writing a new field at the given index `idx`.
pub fn for_field_with_idx(&mut self, field: Field, idx: usize) -> &mut CountingWriter<W> {
let offset = self.write.written_bytes();
let file_addr = FileAddr::new(field, idx);
assert!(!self.offsets.contains_key(&file_addr));
self.offsets.insert(file_addr, offset);
&mut self.write
}
/// Close the composite file
///
/// An index of the different field offsets
/// will be written as a footer.
pub fn close(mut self) -> io::Result<()> {
let footer_offset = self.write.written_bytes();
VInt(self.offsets.len() as u64).serialize(&mut self.write)?;
let mut offset_fields: Vec<_> = self
.offsets
.iter()
.map(|(file_addr, offset)| (*offset, *file_addr))
.collect();
offset_fields.sort();
let mut prev_offset = 0;
for (offset, file_addr) in offset_fields {
VInt((offset - prev_offset) as u64).serialize(&mut self.write)?;
file_addr.serialize(&mut self.write)?;
prev_offset = offset;
}
let footer_len = (self.write.written_bytes() - footer_offset) as u32;
footer_len.serialize(&mut self.write)?;
self.write.flush()?;
Ok(())
}
}
/// A composite file is an abstraction to store a
/// file partitioned by field.
///
/// The file needs to be written field by field.
/// A footer describes the start and stop offsets
/// for each field.
#[derive(Clone)]
pub struct CompositeFile {
data: ReadOnlySource,
offsets_index: HashMap<FileAddr, (usize, usize)>,
}
impl CompositeFile {
/// Opens a composite file stored in a given
/// `ReadOnlySource`.
pub fn open(data: &ReadOnlySource) -> io::Result<CompositeFile> {
let end = data.len();
let footer_len_data = data.slice_from(end - 4);
let footer_len = u32::deserialize(&mut footer_len_data.as_slice())? as usize;
let footer_start = end - 4 - footer_len;
let footer_data = data.slice(footer_start, footer_start + footer_len);
let mut footer_buffer = footer_data.as_slice();
let num_fields = VInt::deserialize(&mut footer_buffer)?.0 as usize;
let mut file_addrs = vec![];
let mut offsets = vec![];
let mut field_index = HashMap::new();
let mut offset = 0;
for _ in 0..num_fields {
offset += VInt::deserialize(&mut footer_buffer)?.0 as usize;
let file_addr = FileAddr::deserialize(&mut footer_buffer)?;
offsets.push(offset);
file_addrs.push(file_addr);
}
offsets.push(footer_start);
for i in 0..num_fields {
let file_addr = file_addrs[i];
let start_offset = offsets[i];
let end_offset = offsets[i + 1];
field_index.insert(file_addr, (start_offset, end_offset));
}
Ok(CompositeFile {
data: data.slice_to(footer_start),
offsets_index: field_index,
})
}
/// Returns a composite file that stores
/// no fields.
pub fn empty() -> CompositeFile {
CompositeFile {
offsets_index: HashMap::new(),
data: ReadOnlySource::empty(),
}
}
/// Returns the `ReadOnlySource` associated
/// with a given `Field` and stored in a `CompositeFile`.
pub fn open_read(&self, field: Field) -> Option<ReadOnlySource> {
self.open_read_with_idx(field, 0)
}
/// Returns the `ReadOnlySource` associated
/// with a given `Field` and stored in a `CompositeFile`.
pub fn open_read_with_idx(&self, field: Field, idx: usize) -> Option<ReadOnlySource> {
self.offsets_index
.get(&FileAddr { field, idx })
.map(|&(from, to)| self.data.slice(from, to))
}
pub fn space_usage(&self) -> PerFieldSpaceUsage {
let mut fields = HashMap::new();
for (&field_addr, &(start, end)) in self.offsets_index.iter() {
fields
.entry(field_addr.field)
.or_insert_with(|| FieldUsage::empty(field_addr.field))
.add_field_idx(field_addr.idx, end - start);
}
PerFieldSpaceUsage::new(fields)
}
}
#[cfg(test)]
mod test {
use super::{CompositeFile, CompositeWrite};
use crate::common::BinarySerializable;
use crate::common::VInt;
use crate::directory::{Directory, RAMDirectory};
use crate::schema::Field;
use std::io::Write;
use std::path::Path;
#[test]
fn test_composite_file() {
let path = Path::new("test_path");
let mut directory = RAMDirectory::create();
{
let w = directory.open_write(path).unwrap();
let mut composite_write = CompositeWrite::wrap(w);
{
let mut write_0 = composite_write.for_field(Field(0u32));
VInt(32431123u64).serialize(&mut write_0).unwrap();
write_0.flush().unwrap();
}
{
let mut write_4 = composite_write.for_field(Field(4u32));
VInt(2).serialize(&mut write_4).unwrap();
write_4.flush().unwrap();
}
composite_write.close().unwrap();
}
{
let r = directory.open_read(path).unwrap();
let composite_file = CompositeFile::open(&r).unwrap();
{
let file0 = composite_file.open_read(Field(0u32)).unwrap();
let mut file0_buf = file0.as_slice();
let payload_0 = VInt::deserialize(&mut file0_buf).unwrap().0;
assert_eq!(file0_buf.len(), 0);
assert_eq!(payload_0, 32431123u64);
}
{
let file4 = composite_file.open_read(Field(4u32)).unwrap();
let mut file4_buf = file4.as_slice();
let payload_4 = VInt::deserialize(&mut file4_buf).unwrap().0;
assert_eq!(file4_buf.len(), 0);
assert_eq!(payload_4, 2u64);
}
}
}
}
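To summarize the on-disk layout that `CompositeWrite::close` and `CompositeFile::open` agree on: field sections are written back to back, a footer then lists one entry per section, and the last four bytes store the footer length so a reader can locate the footer from the end of the data. A simplified, hypothetical sketch of that layout, using plain little-endian integers instead of the real `VInt` deltas and `FileAddr` entries:

```rust
use std::convert::TryInto;
use std::io::{self, Write};

// Write back-to-back sections, then a footer of (start offset, field id) pairs,
// then a trailing u32 footer length.
fn write_composite(sections: &[(u32, &[u8])]) -> io::Result<Vec<u8>> {
    let mut out = Vec::new();
    let mut offsets = Vec::new();
    for (field, payload) in sections {
        offsets.push((out.len() as u64, *field));
        out.write_all(payload)?;
    }
    let footer_start = out.len();
    // Footer: number of entries, then one (offset, field) pair per section.
    out.write_all(&(offsets.len() as u64).to_le_bytes())?;
    for (offset, field) in offsets {
        out.write_all(&offset.to_le_bytes())?;
        out.write_all(&field.to_le_bytes())?;
    }
    // The trailing length lets a reader find the footer from the end,
    // just like `CompositeFile::open` does with `slice_from(end - 4)`.
    let footer_len = (out.len() - footer_start) as u32;
    out.write_all(&footer_len.to_le_bytes())?;
    Ok(out)
}

fn main() -> io::Result<()> {
    let data = write_composite(&[(0u32, &b"abc"[..]), (4u32, &b"xyz"[..])])?;
    let footer_len = u32::from_le_bytes(data[data.len() - 4..].try_into().unwrap());
    assert!((footer_len as usize) < data.len());
    Ok(())
}
```

The real writer stores offset deltas as `VInt`s to keep the footer compact; the fixed-width integers above are only for clarity.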

View File

@@ -1,18 +1,18 @@
pub mod bitpacker; pub mod bitpacker;
mod bitset; mod bitset;
mod composite_file;
mod counting_writer; mod counting_writer;
mod serialize; mod serialize;
mod vint; mod vint;
pub use self::bitset::BitSet; pub use self::bitset::BitSet;
pub(crate) use self::bitset::TinySet; pub use self::bitset::TinySet;
pub(crate) use self::composite_file::{CompositeFile, CompositeWrite};
pub use self::counting_writer::CountingWriter; pub use self::counting_writer::CountingWriter;
pub use self::serialize::{BinarySerializable, FixedSize}; pub use self::serialize::{BinarySerializable, FixedSize};
pub use self::vint::{read_u32_vint, serialize_vint_u32, write_u32_vint, VInt}; pub use self::vint::{read_u32_vint, serialize_vint_u32, write_u32_vint, VInt};
pub use byteorder::LittleEndian as Endianness; pub use byteorder::LittleEndian as Endianness;
pub type DateTime = chrono::DateTime<chrono::Utc>;
/// Segment's max doc must be `< MAX_DOC_LIMIT`. /// Segment's max doc must be `< MAX_DOC_LIMIT`.
/// ///
/// We do not allow segments with more than /// We do not allow segments with more than
@@ -42,7 +42,7 @@ pub const MAX_DOC_LIMIT: u32 = 1 << 31;
/// a very large range of values. Even in this case, it results /// a very large range of values. Even in this case, it results
/// in an extra cost of at most 12% compared to the optimal /// in an extra cost of at most 12% compared to the optimal
/// number of bits. /// number of bits.
pub(crate) fn compute_num_bits(n: u64) -> u8 { pub fn compute_num_bits(n: u64) -> u8 {
let amplitude = (64u32 - n.leading_zeros()) as u8; let amplitude = (64u32 - n.leading_zeros()) as u8;
if amplitude <= 64 - 8 { if amplitude <= 64 - 8 {
amplitude amplitude
@@ -51,7 +51,7 @@ pub(crate) fn compute_num_bits(n: u64) -> u8 {
} }
} }
pub(crate) fn is_power_of_2(n: usize) -> bool { pub fn is_power_of_2(n: usize) -> bool {
(n > 0) && (n & (n - 1) == 0) (n > 0) && (n & (n - 1) == 0)
} }
@@ -124,26 +124,26 @@ pub fn f64_to_u64(val: f64) -> u64 {
/// Reverse the mapping given by [`i64_to_u64`](./fn.i64_to_u64.html). /// Reverse the mapping given by [`i64_to_u64`](./fn.i64_to_u64.html).
#[inline(always)] #[inline(always)]
pub fn u64_to_f64(val: u64) -> f64 { pub fn u64_to_f64(val: u64) -> f64 {
f64::from_bits( f64::from_bits(if val & HIGHEST_BIT != 0 {
if val & HIGHEST_BIT != 0 { val ^ HIGHEST_BIT
val ^ HIGHEST_BIT } else {
} else { !val
!val })
}
)
} }
pub use self::serialize::fixed_size_test;
#[cfg(test)] #[cfg(test)]
pub(crate) mod test { pub(crate) mod test {
pub use super::serialize::test::fixed_size_test; use super::fixed_size_test;
use super::{compute_num_bits, i64_to_u64, u64_to_i64, f64_to_u64, u64_to_f64}; use super::{compute_num_bits, f64_to_u64, i64_to_u64, u64_to_f64, u64_to_i64};
use std::f64; use std::f64;
fn test_i64_converter_helper(val: i64) { fn test_i64_converter_helper(val: i64) {
assert_eq!(u64_to_i64(i64_to_u64(val)), val); assert_eq!(u64_to_i64(i64_to_u64(val)), val);
} }
fn test_f64_converter_helper(val: f64) { fn test_f64_converter_helper(val: f64) {
assert_eq!(u64_to_f64(f64_to_u64(val)), val); assert_eq!(u64_to_f64(f64_to_u64(val)), val);
} }
@@ -172,7 +172,8 @@ pub(crate) mod test {
#[test] #[test]
fn test_f64_order() { fn test_f64_order() {
assert!(!(f64_to_u64(f64::NEG_INFINITY)..f64_to_u64(f64::INFINITY)).contains(&f64_to_u64(f64::NAN))); //nan is not a number assert!(!(f64_to_u64(f64::NEG_INFINITY)..f64_to_u64(f64::INFINITY))
.contains(&f64_to_u64(f64::NAN))); //nan is not a number
assert!(f64_to_u64(1.5) > f64_to_u64(1.0)); //same exponent, different mantissa assert!(f64_to_u64(1.5) > f64_to_u64(1.0)); //same exponent, different mantissa
assert!(f64_to_u64(2.0) > f64_to_u64(1.0)); //same mantissa, different exponent assert!(f64_to_u64(2.0) > f64_to_u64(1.0)); //same mantissa, different exponent
assert!(f64_to_u64(2.0) > f64_to_u64(1.5)); //different exponent and mantissa assert!(f64_to_u64(2.0) > f64_to_u64(1.5)); //different exponent and mantissa
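For reference, the `u64_to_f64` inverse shown above implies the usual order-preserving mapping on the forward side: non-negative floats get their sign bit set, negative floats are bitwise-negated. A minimal sketch, assuming `HIGHEST_BIT` is the IEEE-754 sign bit:

```rust
const HIGHEST_BIT: u64 = 1 << 63;

fn f64_to_u64(val: f64) -> u64 {
    let bits = val.to_bits();
    if bits & HIGHEST_BIT == 0 {
        // Non-negative floats: set the sign bit so they sort above all negatives.
        bits ^ HIGHEST_BIT
    } else {
        // Negative floats: flip every bit so "more negative" sorts lower.
        !bits
    }
}

// Same shape as the inverse shown in the diff above.
fn u64_to_f64(val: u64) -> f64 {
    f64::from_bits(if val & HIGHEST_BIT != 0 {
        val ^ HIGHEST_BIT
    } else {
        !val
    })
}

fn main() {
    assert!(f64_to_u64(1.5) > f64_to_u64(1.0));
    assert!(f64_to_u64(-1.0) < f64_to_u64(1.0));
    assert!(f64_to_u64(-2.0) < f64_to_u64(-1.0));
    assert_eq!(u64_to_f64(f64_to_u64(-3.25)), -3.25);
}
```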

View File

@@ -1,5 +1,5 @@
use crate::common::Endianness; use crate::Endianness;
use crate::common::VInt; use crate::VInt;
use byteorder::{ReadBytesExt, WriteBytesExt}; use byteorder::{ReadBytesExt, WriteBytesExt};
use std::fmt; use std::fmt;
use std::io; use std::io;
@@ -145,17 +145,17 @@ impl BinarySerializable for String {
} }
} }
pub fn fixed_size_test<O: BinarySerializable + FixedSize + Default>() {
let mut buffer = Vec::new();
O::default().serialize(&mut buffer).unwrap();
assert_eq!(buffer.len(), O::SIZE_IN_BYTES);
}
#[cfg(test)] #[cfg(test)]
pub mod test { mod test {
use super::*; use super::*;
use crate::common::VInt; use crate::VInt;
pub fn fixed_size_test<O: BinarySerializable + FixedSize + Default>() {
let mut buffer = Vec::new();
O::default().serialize(&mut buffer).unwrap();
assert_eq!(buffer.len(), O::SIZE_IN_BYTES);
}
fn serialize_test<T: BinarySerializable + Eq>(v: T) -> usize { fn serialize_test<T: BinarySerializable + Eq>(v: T) -> usize {
let mut buffer: Vec<u8> = Vec::new(); let mut buffer: Vec<u8> = Vec::new();
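The relocated `fixed_size_test` helper simply checks that a `FixedSize` type serializes to exactly `SIZE_IN_BYTES` bytes. A self-contained sketch with stand-in traits and a toy `Demo` type (not the tantivy definitions):

```rust
use std::io::{self, Write};

trait BinarySerializable {
    fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()>;
}

trait FixedSize {
    const SIZE_IN_BYTES: usize;
}

#[derive(Default)]
struct Demo(u32);

impl BinarySerializable for Demo {
    fn serialize<W: Write>(&self, writer: &mut W) -> io::Result<()> {
        writer.write_all(&self.0.to_le_bytes())
    }
}

impl FixedSize for Demo {
    const SIZE_IN_BYTES: usize = 4;
}

// The helper: serializing the default value must produce exactly SIZE_IN_BYTES bytes.
fn fixed_size_test<O: BinarySerializable + FixedSize + Default>() {
    let mut buffer = Vec::new();
    O::default().serialize(&mut buffer).unwrap();
    assert_eq!(buffer.len(), O::SIZE_IN_BYTES);
}

fn main() {
    fixed_size_test::<Demo>();
}
```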

View File

@@ -171,7 +171,7 @@ mod tests {
use super::serialize_vint_u32; use super::serialize_vint_u32;
use super::VInt; use super::VInt;
use crate::common::BinarySerializable; use crate::BinarySerializable;
use byteorder::{ByteOrder, LittleEndian}; use byteorder::{ByteOrder, LittleEndian};
fn aux_test_vint(val: u64) { fn aux_test_vint(val: u64) {

tantivy-schema/Cargo.toml Normal file
View File

@@ -0,0 +1,33 @@
[package]
name = "tantivy-schema"
version = "0.1.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
edition = "2018"
workspace = ".."
[dependencies]
base64 = "0.10.0"
byteorder = "1.0"
once_cell = "0.2"
regex = "1.0"
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
num_cpus = "1.2"
itertools = "0.8"
notify = {version="4", optional=true}
crossbeam = "0.7"
owning_ref = "0.4"
stable_deref_trait = "1.0.0"
downcast-rs = { version="1.0" }
census = "0.2"
failure = "0.1"
fail = "0.3"
scoped-pool = "1.0"
tantivy-common = {path="../tantivy-common"}
chrono = "*"
[dev-dependencies]
matches = "0.1.8"

View File

@@ -1,9 +1,10 @@
use super::*; use super::*;
use crate::common::BinarySerializable;
use crate::common::VInt;
use crate::DateTime;
use itertools::Itertools; use itertools::Itertools;
use serde_derive::{Deserialize, Serialize};
use std::io::{self, Read, Write}; use std::io::{self, Read, Write};
use tantivy_common::BinarySerializable;
use tantivy_common::DateTime;
use tantivy_common::VInt;
/// Tantivy's Document is the object that can /// Tantivy's Document is the object that can
/// be indexed and then searched for. /// be indexed and then searched for.
@@ -168,7 +169,7 @@ impl BinarySerializable for Document {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use crate::schema::*; use crate::*;
#[test] #[test]
fn test_doc() { fn test_doc() {

View File

@@ -1,4 +1,3 @@
use crate::common::BinarySerializable;
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use regex::Regex; use regex::Regex;
use serde::{Deserialize, Deserializer, Serialize, Serializer}; use serde::{Deserialize, Deserializer, Serialize, Serializer};
@@ -8,6 +7,7 @@ use std::fmt::{self, Debug, Display, Formatter};
use std::io::{self, Read, Write}; use std::io::{self, Read, Write};
use std::str; use std::str;
use std::string::FromUtf8Error; use std::string::FromUtf8Error;
use tantivy_common::BinarySerializable;
const SLASH_BYTE: u8 = b'/'; const SLASH_BYTE: u8 = b'/';
const ESCAPE_BYTE: u8 = b'\\'; const ESCAPE_BYTE: u8 = b'\\';
@@ -59,7 +59,7 @@ impl Facet {
&self.0 &self.0
} }
pub(crate) fn from_encoded_string(facet_string: String) -> Facet { pub fn from_encoded_string(facet_string: String) -> Facet {
Facet(facet_string) Facet(facet_string)
} }
@@ -104,7 +104,7 @@ impl Facet {
} }
/// Sets the inner buffer of the `Facet` to the given facet string. /// Sets the inner buffer of the `Facet` to the given facet string.
pub(crate) fn set_facet_str(&mut self, facet_str: &str) { pub fn set_facet_str(&mut self, facet_str: &str) {
self.0.clear(); self.0.clear();
self.0.push_str(facet_str); self.0.push_str(facet_str);
} }
@@ -120,9 +120,7 @@ impl Facet {
/// Extract path from the `Facet`. /// Extract path from the `Facet`.
pub fn to_path(&self) -> Vec<&str> { pub fn to_path(&self) -> Vec<&str> {
self.encoded_str() self.encoded_str().split(|c| c == FACET_SEP_CHAR).collect()
.split(|c| c == FACET_SEP_CHAR)
.collect()
} }
} }
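The simplified `to_path` works because a `Facet` stores its path segments joined by a single separator byte (`FACET_SEP_BYTE = 0`), so extracting the path is just a split. A standalone sketch with a stand-in `Facet` newtype and a hypothetical `from_path` helper (not the real constructor):

```rust
const FACET_SEP_CHAR: char = '\u{0}';

struct Facet(String);

impl Facet {
    // Join segments with the separator byte to build the encoded representation.
    fn from_path<'a, I: IntoIterator<Item = &'a str>>(segments: I) -> Facet {
        let mut encoded = String::new();
        for (i, segment) in segments.into_iter().enumerate() {
            if i > 0 {
                encoded.push(FACET_SEP_CHAR);
            }
            encoded.push_str(segment);
        }
        Facet(encoded)
    }

    // Splitting on the separator recovers the original path segments.
    fn to_path(&self) -> Vec<&str> {
        self.0.split(|c| c == FACET_SEP_CHAR).collect()
    }
}

fn main() {
    let facet = Facet::from_path(vec!["category", "electronics", "phones"]);
    assert_eq!(facet.to_path(), vec!["category", "electronics", "phones"]);
}
```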

View File

@@ -1,7 +1,8 @@
use crate::common::BinarySerializable; use serde_derive::{Deserialize, Serialize};
use std::io; use std::io;
use std::io::Read; use std::io::Read;
use std::io::Write; use std::io::Write;
use tantivy_common::BinarySerializable;
/// `Field` is actually a `u8` identifying a `Field` /// `Field` is actually a `u8` identifying a `Field`
/// The schema is in charge of holding mapping between field names /// The schema is in charge of holding mapping between field names

View File

@@ -1,7 +1,8 @@
use crate::schema::IntOptions; use serde_derive::*;
use crate::schema::TextOptions;
use crate::schema::FieldType; use crate::FieldType;
use crate::IntOptions;
use crate::TextOptions;
use serde::de::{self, MapAccess, Visitor}; use serde::de::{self, MapAccess, Visitor};
use serde::ser::SerializeStruct; use serde::ser::SerializeStruct;
use serde::{Deserialize, Deserializer, Serialize, Serializer}; use serde::{Deserialize, Deserializer, Serialize, Serializer};
@@ -108,7 +109,9 @@ impl FieldEntry {
/// Returns true iff the field is an int (signed or unsigned) fast field /// Returns true iff the field is an int (signed or unsigned) fast field
pub fn is_int_fast(&self) -> bool { pub fn is_int_fast(&self) -> bool {
match self.field_type { match self.field_type {
FieldType::U64(ref options) | FieldType::I64(ref options) | FieldType::F64(ref options) => options.is_fast(), FieldType::U64(ref options)
| FieldType::I64(ref options)
| FieldType::F64(ref options) => options.is_fast(),
_ => false, _ => false,
} }
} }
@@ -263,7 +266,7 @@ impl<'de> Deserialize<'de> for FieldEntry {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
use crate::schema::TEXT; use crate::TEXT;
use serde_json; use serde_json;
#[test] #[test]

View File

@@ -1,16 +1,15 @@
use base64::decode; use base64::decode;
use crate::schema::{IntOptions, TextOptions}; use crate::Facet;
use crate::IndexRecordOption;
use crate::schema::Facet; use crate::TextFieldIndexing;
use crate::schema::IndexRecordOption; use crate::Value;
use crate::schema::TextFieldIndexing; use crate::{IntOptions, TextOptions};
use crate::schema::Value;
use serde_json::Value as JsonValue; use serde_json::Value as JsonValue;
/// Possible error that may occur while parsing a field value /// Possible error that may occur while parsing a field value
/// At this point the JSON is known to be valid. /// At this point the JSON is known to be valid.
#[derive(Debug)] #[derive(Debug, PartialEq)]
pub enum ValueParsingError { pub enum ValueParsingError {
/// Encountered a numerical value that overflows or underflows its integer type. /// Encountered a numerical value that overflows or underflows its integer type.
OverflowError(String), OverflowError(String),
@@ -83,9 +82,9 @@ impl FieldType {
pub fn is_indexed(&self) -> bool { pub fn is_indexed(&self) -> bool {
match *self { match *self {
FieldType::Str(ref text_options) => text_options.get_indexing_options().is_some(), FieldType::Str(ref text_options) => text_options.get_indexing_options().is_some(),
FieldType::U64(ref int_options) | FieldType::I64(ref int_options) | FieldType::F64(ref int_options) => { FieldType::U64(ref int_options)
int_options.is_indexed() | FieldType::I64(ref int_options)
} | FieldType::F64(ref int_options) => int_options.is_indexed(),
FieldType::Date(ref date_options) => date_options.is_indexed(), FieldType::Date(ref date_options) => date_options.is_indexed(),
FieldType::HierarchicalFacet => true, FieldType::HierarchicalFacet => true,
FieldType::Bytes => false, FieldType::Bytes => false,
@@ -125,9 +124,12 @@ impl FieldType {
match *json { match *json {
JsonValue::String(ref field_text) => match *self { JsonValue::String(ref field_text) => match *self {
FieldType::Str(_) => Ok(Value::Str(field_text.clone())), FieldType::Str(_) => Ok(Value::Str(field_text.clone())),
FieldType::U64(_) | FieldType::I64(_) | FieldType::F64(_) | FieldType::Date(_) => Err( FieldType::U64(_) | FieldType::I64(_) | FieldType::F64(_) | FieldType::Date(_) => {
ValueParsingError::TypeError(format!("Expected an integer, got {:?}", json)), Err(ValueParsingError::TypeError(format!(
), "Expected an integer, got {:?}",
json
)))
}
FieldType::HierarchicalFacet => Ok(Value::Facet(Facet::from(field_text))), FieldType::HierarchicalFacet => Ok(Value::Facet(Facet::from(field_text))),
FieldType::Bytes => decode(field_text).map(Value::Bytes).map_err(|_| { FieldType::Bytes => decode(field_text).map(Value::Bytes).map_err(|_| {
ValueParsingError::InvalidBase64(format!( ValueParsingError::InvalidBase64(format!(
@@ -152,7 +154,7 @@ impl FieldType {
let msg = format!("Expected a u64 int, got {:?}", json); let msg = format!("Expected a u64 int, got {:?}", json);
Err(ValueParsingError::OverflowError(msg)) Err(ValueParsingError::OverflowError(msg))
} }
}, }
FieldType::F64(_) => { FieldType::F64(_) => {
if let Some(field_val_f64) = field_val_num.as_f64() { if let Some(field_val_f64) = field_val_num.as_f64() {
Ok(Value::F64(field_val_f64)) Ok(Value::F64(field_val_f64))
@@ -180,8 +182,9 @@ impl FieldType {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::FieldType; use super::FieldType;
use crate::schema::field_type::ValueParsingError; use crate::field_type::ValueParsingError;
use crate::schema::Value; use crate::Value;
use serde_json::json;
#[test] #[test]
fn test_bytes_value_from_json() { fn test_bytes_value_from_json() {
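The reformatted arms of `value_from_json` encode a simple policy: a JSON string is accepted for a text field but rejected with a `TypeError` for numeric fields, and `ValueParsingError` now derives `PartialEq` so tests can assert on the exact error. A hedged sketch of that policy with simplified stand-in types, not the tantivy ones:

```rust
#[derive(Debug)]
enum Json {
    Str(String),
    U64(u64),
}

#[derive(Debug, PartialEq)]
enum Value {
    Str(String),
    U64(u64),
}

#[derive(Debug, PartialEq)]
enum ValueParsingError {
    TypeError(String),
}

enum FieldKind {
    Str,
    U64,
}

// Accept the JSON value only if it matches the declared field kind.
fn value_from_json(kind: &FieldKind, json: &Json) -> Result<Value, ValueParsingError> {
    match (kind, json) {
        (FieldKind::Str, Json::Str(text)) => Ok(Value::Str(text.clone())),
        (FieldKind::U64, Json::U64(num)) => Ok(Value::U64(*num)),
        (FieldKind::U64, other) => Err(ValueParsingError::TypeError(format!(
            "Expected an integer, got {:?}",
            other
        ))),
        (FieldKind::Str, other) => Err(ValueParsingError::TypeError(format!(
            "Expected a string, got {:?}",
            other
        ))),
    }
}

fn main() {
    assert_eq!(
        value_from_json(&FieldKind::Str, &Json::Str("hello".into())),
        Ok(Value::Str("hello".into()))
    );
    assert!(value_from_json(&FieldKind::U64, &Json::Str("hello".into())).is_err());
}
```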

View File

@@ -1,9 +1,11 @@
use crate::common::BinarySerializable; use crate::Field;
use crate::schema::Field; use crate::Value;
use crate::schema::Value; //use serde::Deserialize;
use serde_derive::{Deserialize, Serialize};
use std::io; use std::io;
use std::io::Read; use std::io::Read;
use std::io::Write; use std::io::Write;
use tantivy_common::BinarySerializable;
/// `FieldValue` holds together a `Field` and its `Value`. /// `FieldValue` holds together a `Field` and its `Value`.
#[derive(Debug, Clone, Ord, PartialEq, Eq, PartialOrd, Serialize, Deserialize)] #[derive(Debug, Clone, Ord, PartialEq, Eq, PartialOrd, Serialize, Deserialize)]

View File

@@ -1,5 +1,5 @@
use crate::schema::IntOptions; use crate::IntOptions;
use crate::schema::TextOptions; use crate::TextOptions;
use std::ops::BitOr; use std::ops::BitOr;
#[derive(Clone)] #[derive(Clone)]

View File

@@ -1,3 +1,5 @@
use serde_derive::{Deserialize, Serialize};
/// `IndexRecordOption` describes the amount of information associated /// `IndexRecordOption` describes the amount of information associated
/// with a given indexed field. /// with a given indexed field.
/// ///

View File

@@ -1,4 +1,5 @@
use crate::schema::flags::{FastFlag, IndexedFlag, SchemaFlagList, StoredFlag}; use crate::flags::{FastFlag, IndexedFlag, SchemaFlagList, StoredFlag};
use serde_derive::{Deserialize, Serialize};
use std::ops::BitOr; use std::ops::BitOr;
/// Express whether a field is single-value or multi-valued. /// Express whether a field is single-value or multi-valued.

View File

@@ -26,7 +26,7 @@ directory.
### Example ### Example
``` ```
use tantivy::schema::*; use tantivy_schema::*;
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let title_options = TextOptions::default() let title_options = TextOptions::default()
.set_stored() .set_stored()
@@ -59,7 +59,7 @@ when [`searcher.doc(doc_address)`](../struct.Searcher.html#method.doc) is called
### Example ### Example
``` ```
use tantivy::schema::*; use tantivy_schema::*;
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
let num_stars_options = IntOptions::default() let num_stars_options = IntOptions::default()
.set_stored() .set_stored()
@@ -93,7 +93,7 @@ using the `|` operator.
For instance, a schema containing the two fields defined in the example above could be rewritten : For instance, a schema containing the two fields defined in the example above could be rewritten :
``` ```
use tantivy::schema::*; use tantivy_schema::*;
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
schema_builder.add_u64_field("num_stars", INDEXED | STORED); schema_builder.add_u64_field("num_stars", INDEXED | STORED);
schema_builder.add_text_field("title", TEXT | STORED); schema_builder.add_text_field("title", TEXT | STORED);
@@ -126,7 +126,6 @@ pub use self::schema::{Schema, SchemaBuilder};
pub use self::value::Value; pub use self::value::Value;
pub use self::facet::Facet; pub use self::facet::Facet;
pub(crate) use self::facet::FACET_SEP_BYTE;
pub use self::document::Document; pub use self::document::Document;
pub use self::field::Field; pub use self::field::Field;

View File

@@ -1,4 +1,5 @@
use crate::schema::Value; use crate::Value;
use serde_derive::Serialize;
use std::collections::BTreeMap; use std::collections::BTreeMap;
/// Internal representation of a document used for JSON /// Internal representation of a document used for JSON

View File

@@ -1,14 +1,14 @@
use crate::schema::field_type::ValueParsingError;
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::sync::Arc;
use super::*; use super::*;
use crate::schema::field_type::ValueParsingError;
use failure::Fail;
use serde::de::{SeqAccess, Visitor}; use serde::de::{SeqAccess, Visitor};
use serde::ser::SerializeSeq; use serde::ser::SerializeSeq;
use serde::{Deserialize, Deserializer, Serialize, Serializer}; use serde::{Deserialize, Deserializer, Serialize, Serializer};
use serde_json::{self, Map as JsonObject, Value as JsonValue}; use serde_json::{self, Map as JsonObject, Value as JsonValue};
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::fmt; use std::fmt;
use std::sync::Arc;
/// Tantivy has a very strict schema. /// Tantivy has a very strict schema.
/// You need to specify in advance whether a field is indexed or not, /// You need to specify in advance whether a field is indexed or not,
@@ -21,7 +21,7 @@ use std::fmt;
/// # Examples /// # Examples
/// ///
/// ``` /// ```
/// use tantivy::schema::*; /// use tantivy_schema::*;
/// ///
/// let mut schema_builder = Schema::builder(); /// let mut schema_builder = Schema::builder();
/// let id_field = schema_builder.add_text_field("id", STRING); /// let id_field = schema_builder.add_text_field("id", STRING);
@@ -208,7 +208,7 @@ impl Eq for InnerSchema {}
/// # Examples /// # Examples
/// ///
/// ``` /// ```
/// use tantivy::schema::*; /// use tantivy_schema::*;
/// ///
/// let mut schema_builder = Schema::builder(); /// let mut schema_builder = Schema::builder();
/// let id_field = schema_builder.add_text_field("id", STRING); /// let id_field = schema_builder.add_text_field("id", STRING);
@@ -246,6 +246,25 @@ impl Schema {
self.0.fields_map.get(field_name).cloned() self.0.fields_map.get(field_name).cloned()
} }
/// Convert a `NamedFieldDocument` into a `Document`, resolving field names against the schema.
pub fn convert_named_doc(
&self,
named_doc: NamedFieldDocument,
) -> Result<Document, DocParsingError> {
let mut document = Document::new();
for (field_name, values) in named_doc.0 {
if let Some(field) = self.get_field(&field_name) {
for value in values {
let field_value = FieldValue::new(field, value);
document.add(field_value);
}
} else {
return Err(DocParsingError::NoSuchFieldInSchema(field_name));
}
}
Ok(document)
}
/// Create a named document off the doc. /// Create a named document off the doc.
pub fn to_named_doc(&self, doc: &Document) -> NamedFieldDocument { pub fn to_named_doc(&self, doc: &Document) -> NamedFieldDocument {
let mut field_map = BTreeMap::new(); let mut field_map = BTreeMap::new();
@@ -282,28 +301,26 @@ impl Schema {
let mut doc = Document::default(); let mut doc = Document::default();
for (field_name, json_value) in json_obj.iter() { for (field_name, json_value) in json_obj.iter() {
match self.get_field(field_name) { let field = self
Some(field) => { .get_field(field_name)
let field_entry = self.get_field_entry(field); .ok_or_else(|| DocParsingError::NoSuchFieldInSchema(field_name.clone()))?;
let field_type = field_entry.field_type(); let field_entry = self.get_field_entry(field);
match *json_value { let field_type = field_entry.field_type();
JsonValue::Array(ref json_items) => { match *json_value {
for json_item in json_items { JsonValue::Array(ref json_items) => {
let value = field_type.value_from_json(json_item).map_err(|e| { for json_item in json_items {
DocParsingError::ValueError(field_name.clone(), e) let value = field_type
})?; .value_from_json(json_item)
doc.add(FieldValue::new(field, value)); .map_err(|e| DocParsingError::ValueError(field_name.clone(), e))?;
} doc.add(FieldValue::new(field, value));
}
_ => {
let value = field_type
.value_from_json(json_value)
.map_err(|e| DocParsingError::ValueError(field_name.clone(), e))?;
doc.add(FieldValue::new(field, value));
}
} }
} }
None => return Err(DocParsingError::NoSuchFieldInSchema(field_name.clone())), _ => {
let value = field_type
.value_from_json(json_value)
.map_err(|e| DocParsingError::ValueError(field_name.clone(), e))?;
doc.add(FieldValue::new(field, value));
}
} }
} }
Ok(doc) Ok(doc)
@@ -360,13 +377,19 @@ impl<'de> Deserialize<'de> for Schema {
/// Error that may happen when deserializing /// Error that may happen when deserializing
/// a document from JSON. /// a document from JSON.
#[derive(Debug)] #[derive(Debug, Fail, PartialEq)]
pub enum DocParsingError { pub enum DocParsingError {
/// The payload given is not valid JSON. /// The payload given is not valid JSON.
#[fail(display = "The provided string is not valid JSON")]
NotJSON(String), NotJSON(String),
/// One of the value node could not be parsed. /// One of the value node could not be parsed.
#[fail(display = "The field '{:?}' could not be parsed: {:?}", _0, _1)]
ValueError(String, ValueParsingError), ValueError(String, ValueParsingError),
/// The json-document contains a field that is not declared in the schema. /// The json-document contains a field that is not declared in the schema.
#[fail(
display = "The document contains a field that is not declared in the schema: {:?}",
_0
)]
NoSuchFieldInSchema(String), NoSuchFieldInSchema(String),
} }
@@ -378,6 +401,7 @@ mod tests {
use crate::schema::*; use crate::schema::*;
use matches::{assert_matches, matches}; use matches::{assert_matches, matches};
use serde_json; use serde_json;
use std::collections::BTreeMap;
#[test] #[test]
pub fn is_indexed_test() { pub fn is_indexed_test() {
@@ -492,6 +516,54 @@ mod tests {
assert_eq!(doc, doc_serdeser); assert_eq!(doc, doc_serdeser);
} }
#[test]
pub fn test_document_from_nameddoc() {
let mut schema_builder = Schema::builder();
let title = schema_builder.add_text_field("title", TEXT);
let val = schema_builder.add_i64_field("val", INDEXED);
let schema = schema_builder.build();
let mut named_doc_map = BTreeMap::default();
named_doc_map.insert(
"title".to_string(),
vec![Value::from("title1"), Value::from("title2")],
);
named_doc_map.insert(
"val".to_string(),
vec![Value::from(14u64), Value::from(-1i64)],
);
let doc = schema
.convert_named_doc(NamedFieldDocument(named_doc_map))
.unwrap();
assert_eq!(
doc.get_all(title),
vec![
&Value::from("title1".to_string()),
&Value::from("title2".to_string())
]
);
assert_eq!(
doc.get_all(val),
vec![&Value::from(14u64), &Value::from(-1i64)]
);
}
#[test]
pub fn test_document_from_nameddoc_error() {
let schema = Schema::builder().build();
let mut named_doc_map = BTreeMap::default();
named_doc_map.insert(
"title".to_string(),
vec![Value::from("title1"), Value::from("title2")],
);
let err = schema
.convert_named_doc(NamedFieldDocument(named_doc_map))
.unwrap_err();
assert_eq!(
err,
DocParsingError::NoSuchFieldInSchema("title".to_string())
);
}
#[test] #[test]
pub fn test_parse_document() { pub fn test_parse_document() {
let mut schema_builder = Schema::builder(); let mut schema_builder = Schema::builder();
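The rewritten body of the JSON-to-document conversion above flattens the old nested `match` on `Option` into `ok_or_else` + `?`, which keeps the happy path unindented and returns `NoSuchFieldInSchema` early. A small self-contained sketch of that pattern, with a toy schema standing in for the real one:

```rust
#[derive(Debug)]
enum DocParsingError {
    NoSuchFieldInSchema(String),
}

// Stand-in for Schema::get_field: look the name up, None if unknown.
fn get_field(schema: &[&str], field_name: &str) -> Option<usize> {
    schema.iter().position(|name| *name == field_name)
}

fn resolve_field(schema: &[&str], field_name: &str) -> Result<usize, DocParsingError> {
    // Before: match get_field(..) { Some(field) => { .. }, None => return Err(..) }
    let field = get_field(schema, field_name)
        .ok_or_else(|| DocParsingError::NoSuchFieldInSchema(field_name.to_string()))?;
    Ok(field)
}

fn main() {
    let schema = ["title", "body"];
    assert_eq!(resolve_field(&schema, "title").unwrap(), 0);
    assert!(resolve_field(&schema, "missing").is_err());
}
```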

View File

@@ -1,11 +1,11 @@
use std::fmt; use std::fmt;
use super::Field; use crate::Facet;
use crate::common; use crate::Field;
use crate::schema::Facet;
use crate::DateTime;
use byteorder::{BigEndian, ByteOrder}; use byteorder::{BigEndian, ByteOrder};
use std::str; use std::str;
use tantivy_common as common;
use tantivy_common::DateTime;
/// Size (in bytes) of the buffer of a int field. /// Size (in bytes) of the buffer of a int field.
const INT_TERM_LEN: usize = 4 + 8; const INT_TERM_LEN: usize = 4 + 8;
@@ -94,7 +94,7 @@ impl Term {
} }
/// Creates a new Term for a given field. /// Creates a new Term for a given field.
pub(crate) fn for_field(field: Field) -> Term { pub fn for_field(field: Field) -> Term {
let mut term = Term(Vec::with_capacity(100)); let mut term = Term(Vec::with_capacity(100));
term.set_field(field); term.set_field(field);
term term
@@ -134,7 +134,7 @@ impl Term {
self.0.extend(bytes); self.0.extend(bytes);
} }
pub(crate) fn from_field_bytes(field: Field, bytes: &[u8]) -> Term { pub fn from_field_bytes(field: Field, bytes: &[u8]) -> Term {
let mut term = Term::for_field(field); let mut term = Term::for_field(field);
term.set_bytes(bytes); term.set_bytes(bytes);
term term
@@ -224,14 +224,19 @@ where
impl fmt::Debug for Term { impl fmt::Debug for Term {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "Term({:?})", &self.0[..]) write!(
f,
"Term(field={},bytes={:?})",
self.field().0,
self.value_bytes()
)
} }
} }
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use crate::schema::*; use crate::{Schema, Term, STRING};
#[test] #[test]
pub fn test_term() { pub fn test_term() {
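The new `Debug` impl for `Term` prints the field id and the value bytes separately instead of dumping the raw buffer. A standalone sketch of the same formatting, assuming the field id occupies the first four big-endian bytes of the term buffer as in the surrounding code:

```rust
use std::fmt;

struct Term(Vec<u8>);

impl Term {
    fn field(&self) -> u32 {
        // First four bytes hold the field id (big-endian in this sketch).
        u32::from_be_bytes([self.0[0], self.0[1], self.0[2], self.0[3]])
    }
    fn value_bytes(&self) -> &[u8] {
        &self.0[4..]
    }
}

impl fmt::Debug for Term {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "Term(field={},bytes={:?})",
            self.field(),
            self.value_bytes()
        )
    }
}

fn main() {
    let mut buf = 1u32.to_be_bytes().to_vec();
    buf.extend_from_slice(b"rust");
    println!("{:?}", Term(buf)); // Term(field=1,bytes=[114, 117, 115, 116])
}
```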

View File

@@ -1,6 +1,7 @@
use crate::schema::flags::SchemaFlagList; use crate::flags::SchemaFlagList;
use crate::schema::flags::StoredFlag; use crate::flags::StoredFlag;
use crate::schema::IndexRecordOption; use crate::IndexRecordOption;
use serde_derive::{Deserialize, Serialize};
use std::borrow::Cow; use std::borrow::Cow;
use std::ops::BitOr; use std::ops::BitOr;
@@ -151,7 +152,7 @@ where
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use crate::schema::*; use crate::{FieldType, IndexRecordOption, Schema, STORED, TEXT};
#[test] #[test]
fn test_field_options() { fn test_field_options() {

View File

@@ -1,8 +1,10 @@
use crate::schema::Facet; use crate::Facet;
use crate::DateTime; use chrono;
use serde::de::Visitor; use serde::de::Visitor;
use serde::{Deserialize, Deserializer, Serialize, Serializer}; use serde::{Deserialize, Deserializer, Serialize, Serializer};
use std::{fmt, cmp::Ordering}; use std::{cmp::Ordering, fmt};
pub(crate) type DateTime = chrono::DateTime<chrono::Utc>;
/// Value represents the value of any field. /// Value represents the value of any field.
/// It is an enum over all of the possible field types. /// It is an enum over all of the possible field types.
@@ -27,7 +29,7 @@ pub enum Value {
impl Eq for Value {} impl Eq for Value {}
impl Ord for Value { impl Ord for Value {
fn cmp(&self, other: &Self) -> Ordering { fn cmp(&self, other: &Self) -> Ordering {
match (self,other) { match (self, other) {
(Value::Str(l), Value::Str(r)) => l.cmp(r), (Value::Str(l), Value::Str(r)) => l.cmp(r),
(Value::U64(l), Value::U64(r)) => l.cmp(r), (Value::U64(l), Value::U64(r)) => l.cmp(r),
(Value::I64(l), Value::I64(r)) => l.cmp(r), (Value::I64(l), Value::I64(r)) => l.cmp(r),
@@ -35,7 +37,7 @@ impl Ord for Value {
(Value::Facet(l), Value::Facet(r)) => l.cmp(r), (Value::Facet(l), Value::Facet(r)) => l.cmp(r),
(Value::Bytes(l), Value::Bytes(r)) => l.cmp(r), (Value::Bytes(l), Value::Bytes(r)) => l.cmp(r),
(Value::F64(l), Value::F64(r)) => { (Value::F64(l), Value::F64(r)) => {
match (l.is_nan(),r.is_nan()) { match (l.is_nan(), r.is_nan()) {
(false, false) => l.partial_cmp(r).unwrap(), // only fail on NaN (false, false) => l.partial_cmp(r).unwrap(), // only fail on NaN
(true, true) => Ordering::Equal, (true, true) => Ordering::Equal,
(true, false) => Ordering::Less, // we define NaN as less than -∞ (true, false) => Ordering::Less, // we define NaN as less than -∞
@@ -155,7 +157,7 @@ impl Value {
Value::F64(ref value) => *value, Value::F64(ref value) => *value,
_ => panic!("This is not a f64 field."), _ => panic!("This is not a f64 field."),
} }
} }
/// Returns the Date-value, provided the value is of the `Date` type. /// Returns the Date-value, provided the value is of the `Date` type.
/// ///
@@ -218,11 +220,11 @@ impl From<Vec<u8>> for Value {
} }
mod binary_serialize { mod binary_serialize {
use super::Value; use crate::Facet;
use crate::common::{BinarySerializable, f64_to_u64, u64_to_f64}; use crate::Value;
use crate::schema::Facet;
use chrono::{TimeZone, Utc}; use chrono::{TimeZone, Utc};
use std::io::{self, Read, Write}; use std::io::{self, Read, Write};
use tantivy_common::{f64_to_u64, u64_to_f64, BinarySerializable};
const TEXT_CODE: u8 = 0; const TEXT_CODE: u8 = 0;
const U64_CODE: u8 = 1; const U64_CODE: u8 = 1;
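The `Ord` impl above gives `f64` values a total order by special-casing NaN: two NaNs compare equal, and NaN sorts below every other value, including negative infinity. The same rule as a tiny standalone function:

```rust
use std::cmp::Ordering;

fn total_cmp_f64(l: f64, r: f64) -> Ordering {
    match (l.is_nan(), r.is_nan()) {
        (false, false) => l.partial_cmp(&r).unwrap(), // only fails on NaN
        (true, true) => Ordering::Equal,
        (true, false) => Ordering::Less, // NaN is defined as less than -∞
        (false, true) => Ordering::Greater,
    }
}

fn main() {
    assert_eq!(total_cmp_f64(f64::NAN, f64::NEG_INFINITY), Ordering::Less);
    assert_eq!(total_cmp_f64(f64::NAN, f64::NAN), Ordering::Equal);
    assert_eq!(total_cmp_f64(1.0, 2.0), Ordering::Less);
}
```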

View File

@@ -0,0 +1,13 @@
[package]
name = "tantivy-tokenizer"
version = "0.1.0"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
edition = "2018"
workspace = ".."
[dependencies]
fnv = "*"
rust-stemmers = "*"
serde = "*"
serde_derive = "*"
tantivy-schema = {path="../tantivy-schema"}

View File

@@ -1,7 +1,6 @@
//! # Example //! # Example
//! ``` //! ```rust
//! extern crate tantivy; //! use tantivy_tokenizer::*;
//! use tantivy::tokenizer::*;
//! //!
//! # fn main() { //! # fn main() {
//! //!
@@ -65,14 +64,6 @@ impl<TailTokenStream> TokenStream for AlphaNumOnlyFilterStream<TailTokenStream>
where where
TailTokenStream: TokenStream, TailTokenStream: TokenStream,
{ {
fn token(&self) -> &Token {
self.tail.token()
}
fn token_mut(&mut self) -> &mut Token {
self.tail.token_mut()
}
fn advance(&mut self) -> bool { fn advance(&mut self) -> bool {
while self.tail.advance() { while self.tail.advance() {
if self.predicate(self.tail.token()) { if self.predicate(self.tail.token()) {
@@ -82,4 +73,12 @@ where
false false
} }
fn token(&self) -> &Token {
self.tail.token()
}
fn token_mut(&mut self) -> &mut Token {
self.tail.token_mut()
}
} }

View File

@@ -1558,11 +1558,11 @@ fn to_ascii(text: &mut String, output: &mut String) {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::to_ascii; use super::to_ascii;
use crate::tokenizer::AsciiFoldingFilter; use crate::AsciiFoldingFilter;
use crate::tokenizer::RawTokenizer; use crate::RawTokenizer;
use crate::tokenizer::SimpleTokenizer; use crate::SimpleTokenizer;
use crate::tokenizer::TokenStream; use crate::TokenStream;
use crate::tokenizer::Tokenizer; use crate::Tokenizer;
use std::iter; use std::iter;
#[test] #[test]

View File

@@ -1,5 +1,5 @@
use super::{Token, TokenStream, Tokenizer}; use super::{Token, TokenStream, Tokenizer};
use crate::schema::FACET_SEP_BYTE; use crate::FACET_SEP_BYTE;
/// The `FacetTokenizer` processes a `Facet` binary representation /// The `FacetTokenizer` processes a `Facet` binary representation
/// and emits a token for each of its parents. /// and emits a token for each of its parents.
@@ -83,8 +83,8 @@ impl<'a> TokenStream for FacetTokenStream<'a> {
mod tests { mod tests {
use super::FacetTokenizer; use super::FacetTokenizer;
use crate::schema::Facet;
use crate::tokenizer::{Token, TokenStream, Tokenizer}; use crate::tokenizer::{Token, TokenStream, Tokenizer};
use tantivy_schema::Facet;
#[test] #[test]
fn test_facet_tokenizer() { fn test_facet_tokenizer() {

View File

@@ -4,9 +4,8 @@
//! You must define in your schema which tokenizer should be used for //! You must define in your schema which tokenizer should be used for
//! each of your fields : //! each of your fields :
//! //!
//! ``` //! ```rust
//! extern crate tantivy; //! use tantivy_schema::*;
//! use tantivy::schema::*;
//! //!
//! # fn main() { //! # fn main() {
//! let mut schema_builder = Schema::builder(); //! let mut schema_builder = Schema::builder();
@@ -65,9 +64,7 @@
//! For instance, the `en_stem` is defined as follows. //! For instance, the `en_stem` is defined as follows.
//! //!
//! ```rust //! ```rust
//! # extern crate tantivy; //! use tantivy_tokenizer::*;
//!
//! use tantivy::tokenizer::*;
//! //!
//! # fn main() { //! # fn main() {
//! let en_stem = SimpleTokenizer //! let en_stem = SimpleTokenizer
@@ -80,10 +77,9 @@
//! Once your tokenizer is defined, you need to //! Once your tokenizer is defined, you need to
//! register it with a name in your index's [`TokenizerManager`](./struct.TokenizerManager.html). //! register it with a name in your index's [`TokenizerManager`](./struct.TokenizerManager.html).
//! //!
//! ``` //! ```rust
//! # extern crate tantivy; //! # use tantivy_schema::Schema;
//! # use tantivy::schema::Schema; //! # use tantivy_tokenizer::*;
//! # use tantivy::tokenizer::*;
//! # use tantivy::Index; //! # use tantivy::Index;
//! # fn main() { //! # fn main() {
//! # let custom_en_tokenizer = SimpleTokenizer; //! # let custom_en_tokenizer = SimpleTokenizer;
@@ -101,10 +97,9 @@
//! //!
//! # Example //! # Example
//! //!
//! ``` //! ```rust
//! extern crate tantivy; //! use tantivy_schema::{Schema, IndexRecordOption, TextOptions, TextFieldIndexing};
//! use tantivy::schema::{Schema, IndexRecordOption, TextOptions, TextFieldIndexing}; //! use tantivy_tokenizer::*;
//! use tantivy::tokenizer::*;
//! use tantivy::Index; //! use tantivy::Index;
//! //!
//! # fn main() { //! # fn main() {
@@ -155,9 +150,10 @@ pub use self::simple_tokenizer::SimpleTokenizer;
pub use self::stemmer::{Language, Stemmer}; pub use self::stemmer::{Language, Stemmer};
pub use self::stop_word_filter::StopWordFilter; pub use self::stop_word_filter::StopWordFilter;
pub(crate) use self::token_stream_chain::TokenStreamChain; pub(crate) use self::token_stream_chain::TokenStreamChain;
pub(crate) use self::tokenizer::box_tokenizer;
pub use self::tokenizer::BoxedTokenizer; pub use self::tokenizer::BoxedTokenizer;
pub(crate) const FACET_SEP_BYTE: u8 = 0u8;
pub use self::tokenizer::{Token, TokenFilter, TokenStream, Tokenizer}; pub use self::tokenizer::{Token, TokenFilter, TokenStream, Tokenizer};
pub use self::tokenizer_manager::TokenizerManager; pub use self::tokenizer_manager::TokenizerManager;

View File

@@ -72,10 +72,10 @@ where
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use crate::tokenizer::LowerCaser; use crate::LowerCaser;
use crate::tokenizer::SimpleTokenizer; use crate::SimpleTokenizer;
use crate::tokenizer::TokenStream; use crate::TokenStream;
use crate::tokenizer::Tokenizer; use crate::Tokenizer;
#[test] #[test]
fn test_to_lower_case() { fn test_to_lower_case() {

View File

@@ -29,9 +29,8 @@ use super::{Token, TokenStream, Tokenizer};
/// ///
/// # Example /// # Example
/// ///
/// ``` /// ```rust
/// # extern crate tantivy; /// use tantivy_tokenizer::*;
/// use tantivy::tokenizer::*;
/// # fn main() { /// # fn main() {
/// let tokenizer = NgramTokenizer::new(2, 3, false); /// let tokenizer = NgramTokenizer::new(2, 3, false);
/// let mut stream = tokenizer.token_stream("hello"); /// let mut stream = tokenizer.token_stream("hello");
@@ -309,9 +308,9 @@ mod tests {
use super::CodepointFrontiers; use super::CodepointFrontiers;
use super::NgramTokenizer; use super::NgramTokenizer;
use super::StutteringIterator; use super::StutteringIterator;
use crate::tokenizer::tests::assert_token; use crate::tests::assert_token;
use crate::tokenizer::tokenizer::{TokenStream, Tokenizer}; use crate::tokenizer::{TokenStream, Tokenizer};
use crate::tokenizer::Token; use crate::Token;
fn test_helper<T: TokenStream>(mut tokenizer: T) -> Vec<Token> { fn test_helper<T: TokenStream>(mut tokenizer: T) -> Vec<Token> {
let mut tokens: Vec<Token> = vec![]; let mut tokens: Vec<Token> = vec![];

View File

@@ -1,7 +1,6 @@
//! # Example //! # Example
//! ``` //! ```rust
//! extern crate tantivy; //! use tantivy_tokenizer::*;
//! use tantivy::tokenizer::*;
//! //!
//! # fn main() { //! # fn main() {
//! //!

View File

@@ -1,5 +1,6 @@
use super::{Token, TokenFilter, TokenStream}; use super::{Token, TokenFilter, TokenStream};
use rust_stemmers::{self, Algorithm}; use rust_stemmers::{self, Algorithm};
use serde_derive::{Deserialize, Serialize};
/// Available stemmer languages. /// Available stemmer languages.
#[derive(Debug, Serialize, Deserialize, Eq, PartialEq, Copy, Clone)] #[derive(Debug, Serialize, Deserialize, Eq, PartialEq, Copy, Clone)]

View File

@@ -1,7 +1,6 @@
//! # Example //! # Example
//! ``` //! ```rust
//! extern crate tantivy; //! use tantivy_tokenizer::*;
//! use tantivy::tokenizer::*;
//! //!
//! # fn main() { //! # fn main() {
//! let tokenizer = SimpleTokenizer //! let tokenizer = SimpleTokenizer

View File

@@ -1,4 +1,4 @@
use crate::tokenizer::{Token, TokenStream}; use crate::{Token, TokenStream};
const POSITION_GAP: usize = 2; const POSITION_GAP: usize = 2;

View File

@@ -1,4 +1,4 @@
use crate::tokenizer::TokenStreamChain; use crate::TokenStreamChain;
/// The tokenizer module contains all of the tools used to process /// The tokenizer module contains all of the tools used to process
/// text in `tantivy`. /// text in `tantivy`.
use std::borrow::{Borrow, BorrowMut}; use std::borrow::{Borrow, BorrowMut};
@@ -56,9 +56,7 @@ pub trait Tokenizer<'a>: Sized + Clone {
/// # Example /// # Example
/// ///
/// ```rust /// ```rust
/// # extern crate tantivy; /// use tantivy_tokenizer::*;
///
/// use tantivy::tokenizer::*;
/// ///
/// # fn main() { /// # fn main() {
/// let en_stem = SimpleTokenizer /// let en_stem = SimpleTokenizer
@@ -80,7 +78,7 @@ pub trait Tokenizer<'a>: Sized + Clone {
} }
/// A boxed tokenizer /// A boxed tokenizer
pub trait BoxedTokenizer: Send + Sync { trait BoxedTokenizerTrait: Send + Sync {
/// Tokenize a `&str` /// Tokenize a `&str`
fn token_stream<'a>(&self, text: &'a str) -> Box<dyn TokenStream + 'a>; fn token_stream<'a>(&self, text: &'a str) -> Box<dyn TokenStream + 'a>;
@@ -92,7 +90,41 @@ pub trait BoxedTokenizer: Send + Sync {
fn token_stream_texts<'b>(&self, texts: &'b [&'b str]) -> Box<dyn TokenStream + 'b>; fn token_stream_texts<'b>(&self, texts: &'b [&'b str]) -> Box<dyn TokenStream + 'b>;
/// Return a boxed clone of the tokenizer /// Return a boxed clone of the tokenizer
fn boxed_clone(&self) -> Box<dyn BoxedTokenizer>; fn boxed_clone(&self) -> BoxedTokenizer;
}
/// A boxed tokenizer
pub struct BoxedTokenizer(Box<dyn BoxedTokenizerTrait>);
impl<T> From<T> for BoxedTokenizer
where
T: 'static + Send + Sync + for<'a> Tokenizer<'a>,
{
fn from(tokenizer: T) -> BoxedTokenizer {
BoxedTokenizer(Box::new(BoxableTokenizer(tokenizer)))
}
}
impl BoxedTokenizer {
/// Tokenize a `&str`
pub fn token_stream<'a>(&self, text: &'a str) -> Box<dyn TokenStream + 'a> {
self.0.token_stream(text)
}
/// Tokenize an array of `&str`
///
/// The resulting `TokenStream` is equivalent to what would be obtained if the &str were
/// one concatenated `&str`, with an artificial position gap of `2` between the different fields
/// to prevent an accidental `PhraseQuery` from matching across two of the texts.
pub fn token_stream_texts<'b>(&self, texts: &'b [&'b str]) -> Box<dyn TokenStream + 'b> {
self.0.token_stream_texts(texts)
}
}
impl Clone for BoxedTokenizer {
fn clone(&self) -> BoxedTokenizer {
self.0.boxed_clone()
}
} }
#[derive(Clone)] #[derive(Clone)]
@@ -100,7 +132,7 @@ struct BoxableTokenizer<A>(A)
where where
A: for<'a> Tokenizer<'a> + Send + Sync; A: for<'a> Tokenizer<'a> + Send + Sync;
impl<A> BoxedTokenizer for BoxableTokenizer<A> impl<A> BoxedTokenizerTrait for BoxableTokenizer<A>
where where
A: 'static + Send + Sync + for<'a> Tokenizer<'a>, A: 'static + Send + Sync + for<'a> Tokenizer<'a>,
{ {
@@ -125,18 +157,11 @@ where
} }
} }
fn boxed_clone(&self) -> Box<dyn BoxedTokenizer> { fn boxed_clone(&self) -> BoxedTokenizer {
Box::new(self.clone()) self.0.clone().into()
} }
} }
pub(crate) fn box_tokenizer<A>(a: A) -> Box<dyn BoxedTokenizer>
where
A: 'static + Send + Sync + for<'a> Tokenizer<'a>,
{
Box::new(BoxableTokenizer(a))
}
impl<'b> TokenStream for Box<dyn TokenStream + 'b> { impl<'b> TokenStream for Box<dyn TokenStream + 'b> {
fn advance(&mut self) -> bool { fn advance(&mut self) -> bool {
let token_stream: &mut dyn TokenStream = self.borrow_mut(); let token_stream: &mut dyn TokenStream = self.borrow_mut();
@@ -161,8 +186,7 @@ impl<'b> TokenStream for Box<dyn TokenStream + 'b> {
/// # Example /// # Example
/// ///
/// ``` /// ```
/// extern crate tantivy; /// use tantivy_tokenizer::*;
/// use tantivy::tokenizer::*;
/// ///
/// # fn main() { /// # fn main() {
/// let tokenizer = SimpleTokenizer /// let tokenizer = SimpleTokenizer
@@ -203,8 +227,7 @@ pub trait TokenStream {
/// and `.token()`. /// and `.token()`.
/// ///
/// ``` /// ```
/// # extern crate tantivy; /// # use tantivy_tokenizer::*;
/// # use tantivy::tokenizer::*;
/// # /// #
/// # fn main() { /// # fn main() {
/// # let tokenizer = SimpleTokenizer /// # let tokenizer = SimpleTokenizer
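The change above replaces `Box<dyn BoxedTokenizer>` in public APIs with a `BoxedTokenizer` wrapper struct: construction goes through `From`/`Into` (hence `SimpleTokenizer.into()` in the tests) and `Clone` is routed through an object-safe `boxed_clone`. A self-contained sketch of the pattern with simplified stand-in trait and tokenizer types:

```rust
// Stand-in for the object-safe trait hidden behind the wrapper.
trait ObjectSafeTokenizer: Send + Sync {
    fn token_stream(&self, text: &str) -> Vec<String>;
    fn boxed_clone(&self) -> BoxedTokenizer;
}

// The public handle owns the box, so callers never see `Box<dyn ...>` directly.
struct BoxedTokenizer(Box<dyn ObjectSafeTokenizer>);

impl BoxedTokenizer {
    fn token_stream(&self, text: &str) -> Vec<String> {
        self.0.token_stream(text)
    }
}

// `Clone` is routed through the object-safe `boxed_clone`, as in the diff.
impl Clone for BoxedTokenizer {
    fn clone(&self) -> BoxedTokenizer {
        self.0.boxed_clone()
    }
}

// Any concrete, clonable tokenizer converts with `.into()`.
impl<T: ObjectSafeTokenizer + Clone + 'static> From<T> for BoxedTokenizer {
    fn from(tokenizer: T) -> BoxedTokenizer {
        BoxedTokenizer(Box::new(tokenizer))
    }
}

#[derive(Clone)]
struct WhitespaceTokenizer;

impl ObjectSafeTokenizer for WhitespaceTokenizer {
    fn token_stream(&self, text: &str) -> Vec<String> {
        text.split_whitespace().map(str::to_string).collect()
    }
    fn boxed_clone(&self) -> BoxedTokenizer {
        self.clone().into()
    }
}

fn main() {
    let tokenizer: BoxedTokenizer = WhitespaceTokenizer.into();
    let copy = tokenizer.clone();
    assert_eq!(copy.token_stream("hello world"), vec!["hello", "world"]);
}
```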

View File

@@ -1,14 +1,12 @@
use crate::tokenizer::box_tokenizer; use crate::stemmer::Language;
use crate::tokenizer::stemmer::Language; use crate::BoxedTokenizer;
use crate::tokenizer::BoxedTokenizer; use crate::LowerCaser;
use crate::tokenizer::LowerCaser; use crate::RawTokenizer;
use crate::tokenizer::RawTokenizer; use crate::RemoveLongFilter;
use crate::tokenizer::RemoveLongFilter; use crate::SimpleTokenizer;
use crate::tokenizer::SimpleTokenizer; use crate::Stemmer;
use crate::tokenizer::Stemmer; use crate::Tokenizer;
use crate::tokenizer::Tokenizer;
use std::collections::HashMap; use std::collections::HashMap;
use std::ops::Deref;
use std::sync::{Arc, RwLock}; use std::sync::{Arc, RwLock};
/// The tokenizer manager serves as a store for /// The tokenizer manager serves as a store for
@@ -25,30 +23,28 @@ use std::sync::{Arc, RwLock};
/// search engine. /// search engine.
#[derive(Clone)] #[derive(Clone)]
pub struct TokenizerManager { pub struct TokenizerManager {
tokenizers: Arc<RwLock<HashMap<String, Box<dyn BoxedTokenizer>>>>, tokenizers: Arc<RwLock<HashMap<String, BoxedTokenizer>>>,
} }
impl TokenizerManager { impl TokenizerManager {
/// Registers a new tokenizer associated with a given name. /// Registers a new tokenizer associated with a given name.
pub fn register<A>(&self, tokenizer_name: &str, tokenizer: A) pub fn register<A>(&self, tokenizer_name: &str, tokenizer: A)
where where
A: 'static + Send + Sync + for<'a> Tokenizer<'a>, A: Into<BoxedTokenizer>,
{ {
let boxed_tokenizer = box_tokenizer(tokenizer);
self.tokenizers self.tokenizers
.write() .write()
.expect("Acquiring the lock should never fail") .expect("Acquiring the lock should never fail")
.insert(tokenizer_name.to_string(), boxed_tokenizer); .insert(tokenizer_name.to_string(), tokenizer.into());
} }
/// Accessing a tokenizer given its name. /// Accessing a tokenizer given its name.
pub fn get(&self, tokenizer_name: &str) -> Option<Box<dyn BoxedTokenizer>> { pub fn get(&self, tokenizer_name: &str) -> Option<BoxedTokenizer> {
self.tokenizers self.tokenizers
.read() .read()
.expect("Acquiring the lock should never fail") .expect("Acquiring the lock should never fail")
.get(tokenizer_name) .get(tokenizer_name)
.map(Deref::deref) .cloned()
.map(BoxedTokenizer::boxed_clone)
} }
} }
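With `BoxedTokenizer` now a clonable struct, the manager can store it directly: `register` accepts anything `Into<BoxedTokenizer>` and `get` simply clones the stored value. A minimal sketch of that registry shape, with a plain `String` standing in for `BoxedTokenizer` so the example stays runnable:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Stand-in for the real clonable BoxedTokenizer wrapper.
type BoxedTokenizer = String;

#[derive(Clone, Default)]
struct TokenizerManager {
    tokenizers: Arc<RwLock<HashMap<String, BoxedTokenizer>>>,
}

impl TokenizerManager {
    fn register<A: Into<BoxedTokenizer>>(&self, name: &str, tokenizer: A) {
        self.tokenizers
            .write()
            .expect("Acquiring the lock should never fail")
            .insert(name.to_string(), tokenizer.into());
    }

    fn get(&self, name: &str) -> Option<BoxedTokenizer> {
        self.tokenizers
            .read()
            .expect("Acquiring the lock should never fail")
            .get(name)
            .cloned()
    }
}

fn main() {
    let manager = TokenizerManager::default();
    manager.register("simple", "SimpleTokenizer");
    assert_eq!(manager.get("simple"), Some("SimpleTokenizer".to_string()));
    assert_eq!(manager.get("missing"), None);
}
```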

View File

@@ -8,7 +8,7 @@ use tantivy::{Index, Term};
#[test] #[test]
fn test_failpoints_managed_directory_gc_if_delete_fails() { fn test_failpoints_managed_directory_gc_if_delete_fails() {
let scenario = fail::FailScenario::setup(); let _scenario = fail::FailScenario::setup();
let test_path: &'static Path = Path::new("some_path_for_test"); let test_path: &'static Path = Path::new("some_path_for_test");