Compare commits

...

613 Commits

Author SHA1 Message Date
Paul Masurel
e70a45426a 0.8.2 release
Backporting a fix for non-x86_64 platforms
2019-02-14 09:16:27 +09:00
Paul Masurel
176f67a266 Refactoring 2019-01-23 10:06:40 +09:00
Paul Masurel
19babff849 Closes #476 2019-01-23 10:06:39 +09:00
Paul Masurel
bf2576adf9 Added a broken unit test 2019-01-23 10:04:27 +09:00
Paul Masurel
b8241c5603 0.8.0 2018-12-26 10:18:34 +09:00
Paul Masurel
a4745151c0 Version to 0.8 2018-12-26 10:11:06 +09:00
Paul Masurel
e2ce326a8c Merge branch 'issue/457' 2018-12-18 10:35:01 +09:00
Paul Masurel
bb21d12a70 Bumping version 2018-12-18 10:14:12 +09:00
Paul Masurel
4565aba62a Added unit test for exponential search 2018-12-18 09:24:31 +09:00
Paul Masurel
545a7ec8dd Closes #457 2018-12-18 09:18:46 +09:00
Paul Masurel
e68775d71c Format and update murmurhash32 version 2018-12-17 19:12:38 +09:00
Paul Masurel
dcc92d287e Facet remove unsafe (#456)
* Removing some unsafe

* Removing some unsafe (2)

* Remove murmurhash
2018-12-17 19:08:48 +09:00
Paul Masurel
b48f81c051 Removing unsafe from bitpacking code (#455) 2018-12-17 19:06:37 +09:00
Paul Masurel
a3042e956b Facet remove unsafe (#454)
* Removing some unsafe

* Removing some unsafe (2)
2018-12-17 09:31:09 +09:00
dependabot[bot]
1fa10f0a0b Update itertools requirement from 0.7 to 0.8 (#453)
Updates the requirements on [itertools](https://github.com/bluss/rust-itertools) to permit the latest version.
- [Release notes](https://github.com/bluss/rust-itertools/releases)
- [Commits](https://github.com/bluss/rust-itertools/commits/0.8.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-12-17 09:28:36 +09:00
Paul Masurel
279a9eb5e3 Closes #449 (#450)
Clippy working on stable.
Clippy warnings addressed
2018-12-10 12:20:59 +09:00
fdb-hiroshima
21a24672d8 Add accessors for Snippet and HighlightSection (#448)
* Add accessors for Snippet and HighlightSection

And add an example of custom highlighter

* Remove inline(always) and unnecessary empty lines
2018-12-02 18:00:16 +09:00
dependabot[bot]
a3f1fbaae6 Update scoped-pool requirement from 0.1 to 1.0 (#447)
Updates the requirements on [scoped-pool](https://github.com/reem/rust-scoped-pool) to permit the latest version.
- [Release notes](https://github.com/reem/rust-scoped-pool/releases)
- [Commits](https://github.com/reem/rust-scoped-pool/commits/1.0.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-12-01 13:54:59 +09:00
Paul Masurel
a6e767c877 Cargo fmt 2018-11-30 22:52:45 +09:00
Paul Masurel
6af0488dbe Executor made sorted 2018-11-30 22:52:26 +09:00
Paul Masurel
07d87e154b Collector refactoring and multithreaded search (#437)
* Split Collector into an overall Collector and a per-segment SegmentCollector. Precursor to cross-segment parallelism, and as a side benefit cleans up any per-segment fields from being Option<T> to just T.

* Attempt to add MultiCollector back

* working. Chained collector is broken though

* Fix chained collector

* Fix test

* Make Weight Send+Sync for parallelization purposes

* Expose parameters of RangeQuery for external usage

* Removed &mut self

* fixing tests

* Restored TestCollectors

* blop

* multicollector working

* chained collector working

* test broken

* fixing unit test

* blop

* blop

* Blop

* simplifying APi

* blop

* better syntax

* Simplifying top_collector

* refactoring

* blop

* Sync with master

* Added multithread search

* Collector refactoring

* Schema::builder

* CR and rustdoc

* CR comments

* blop

* Added an executor

* Sorted the segment readers in the searcher

* Update searcher.rs

* Fixed unit tests

* changed the place where we have the sort-segment-by-count heuristic

* using crossbeam::channel

* inlining

* Comments about panics propagating

* Added unit test for executor panicking

* Readded default

* Removed Default impl

* Added unit test for executor
2018-11-30 22:46:59 +09:00
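The split described above is the core idea behind the multithreaded search added in this PR: one overall collector spawns a small per-segment collector for each segment, and the per-segment results are merged at the end. A minimal sketch of that shape, with illustrative trait and method names rather than tantivy's actual API:

```rust
type DocId = u32;
type Score = f32;

// Per-segment collector: owns plain per-segment state (no Option<T>).
trait SegmentCollector {
    type Fruit;
    fn collect(&mut self, doc: DocId, score: Score);
    fn harvest(self) -> Self::Fruit;
}

// Overall collector: builds one child per segment and merges their results.
trait Collector {
    type Child: SegmentCollector;
    fn for_segment(&self, segment_ord: u32) -> Self::Child;
    fn merge_fruits(
        &self,
        fruits: Vec<<Self::Child as SegmentCollector>::Fruit>,
    ) -> <Self::Child as SegmentCollector>::Fruit;
}

// A trivial example: counting matching documents.
struct Count;
struct SegmentCount(usize);

impl SegmentCollector for SegmentCount {
    type Fruit = usize;
    fn collect(&mut self, _doc: DocId, _score: Score) {
        self.0 += 1;
    }
    fn harvest(self) -> usize {
        self.0
    }
}

impl Collector for Count {
    type Child = SegmentCount;
    fn for_segment(&self, _segment_ord: u32) -> SegmentCount {
        SegmentCount(0)
    }
    fn merge_fruits(&self, fruits: Vec<usize>) -> usize {
        fruits.into_iter().sum()
    }
}

fn main() {
    // Pretend two segments matched 3 and 2 documents respectively.
    let collector = Count;
    let mut seg0 = collector.for_segment(0);
    for doc in 0..3u32 {
        seg0.collect(doc, 1.0);
    }
    let mut seg1 = collector.for_segment(1);
    for doc in 0..2u32 {
        seg1.collect(doc, 1.0);
    }
    let total = collector.merge_fruits(vec![seg0.harvest(), seg1.harvest()]);
    assert_eq!(total, 5);
}
```

Because each per-segment collector owns only its own state, segments can be searched on separate threads (the executor added in this PR) and only the final merge needs the overall collector.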
Paul Masurel
8b0b0133dd Importing crossbeam_channel from crossbeam reexport. 2018-11-19 09:19:28 +09:00
dependabot[bot]
7b9752f897 Update crossbeam-channel requirement from 0.2 to 0.3 (#436)
* Update crossbeam-channel requirement from 0.2 to 0.3

Updates the requirements on [crossbeam-channel](https://github.com/crossbeam-rs/crossbeam-channel) to permit the latest version.
- [Release notes](https://github.com/crossbeam-rs/crossbeam-channel/releases)
- [Changelog](https://github.com/crossbeam-rs/crossbeam-channel/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crossbeam-rs/crossbeam-channel/commits/v0.3.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* fixing build
2018-11-16 14:26:59 +09:00
dependabot[bot]
c92f41aea8 Update rand requirement from 0.5 to 0.6 (#440)
* Update rand requirement from 0.5 to 0.6

Updates the requirements on [rand](https://github.com/rust-random/rand) to permit the latest version.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* Updating rand.
2018-11-16 12:38:01 +09:00
Do Duy
dea16f1d9d Derive Clone for QueryParser (#442) 2018-11-15 18:45:40 +09:00
dependabot[bot]
236cfbec08 Update crossbeam requirement from 0.4 to 0.5 (#438)
Updates the requirements on [crossbeam](https://github.com/crossbeam-rs/crossbeam) to permit the latest version.
- [Release notes](https://github.com/crossbeam-rs/crossbeam/releases)
- [Changelog](https://github.com/crossbeam-rs/crossbeam/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crossbeam-rs/crossbeam/commits/crossbeam-0.5.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-11-15 06:16:22 +09:00
Paul Masurel
edcafb69bb Fixed benches 2018-11-10 17:04:29 -08:00
Paul Masurel
14908479d5 Release 0.7.1 2018-11-02 17:56:25 +09:00
Dru Sellers
ab4593eeb7 Adds open_or_create method (#428)
* Change the semantic of Index::create_in_dir.

It should return an error if the directory already contains an Index.

* Index::open_or_create is working

* additional test

* Checking that schema matches on open_or_create.

Simplifying unit tests.

* simplifying Eq
2018-10-31 08:36:39 +09:00
Dru Sellers
e75bb1d6a1 Fix NGram processing of non-ascii characters (#430)
* A working version

* optimize the ngram parsing

* Decoding codepoint only once.

* Closes #429

* using leading_zeros to make code less cryptic

* lookup in a table
2018-10-31 08:35:27 +09:00
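One way to read the "leading_zeros" bullet above: the number of leading one bits in the first byte of a UTF-8 sequence gives the codepoint's byte length, so a codepoint boundary can be decoded once instead of being re-derived byte by byte. A hedged sketch of that trick (illustrative, not the code from the PR):

```rust
// Length in bytes of the UTF-8 codepoint starting at `first_byte`.
// Counting the leading one bits of the first byte (via leading_zeros of its
// complement) yields the sequence length directly.
fn utf8_codepoint_len(first_byte: u8) -> usize {
    match (!first_byte).leading_zeros() {
        0 => 1, // 0xxxxxxx: ASCII
        2 => 2, // 110xxxxx
        3 => 3, // 1110xxxx
        4 => 4, // 11110xxx
        _ => 1, // continuation or invalid byte; not a codepoint start
    }
}

fn main() {
    assert_eq!(utf8_codepoint_len("a".as_bytes()[0]), 1);
    assert_eq!(utf8_codepoint_len("é".as_bytes()[0]), 2);
    assert_eq!(utf8_codepoint_len("日".as_bytes()[0]), 3);
}
```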
dependabot[bot]
63b9d62237 Update base64 requirement from 0.9.1 to 0.10.0 (#433)
Updates the requirements on [base64](https://github.com/alicemaz/rust-base64) to permit the latest version.
- [Release notes](https://github.com/alicemaz/rust-base64/releases)
- [Changelog](https://github.com/alicemaz/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/alicemaz/rust-base64/commits/v0.10.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-10-31 08:34:44 +09:00
Jason Wolfe
0098e3d428 Compute space usage of a Searcher / SegmentReader / CompositeFile (#282)
* Compute space usage of a Searcher / SegmentReader / CompositeFile

* Fix typo

* Add serde Serialize/Deserialize for all the SpaceUsage structs

* Fix indexing

* Public methods for consuming space usage information

* #281: Add a space usage method that takes a SegmentComponent to support code that is unaware of particular segment components, and to make it more likely to update methods when a new component type is added.

* Add support for space usage computation of positions skip index file (#281)

* Add some tests for space usage computation (#281)
2018-10-15 09:04:36 +09:00
Konstantin Gribov
69d5e4b9b1 Added proper references for Apache Lucene & Solr (#432)
Also, added links to websites for Lucene, Solr & ElasticSearch
2018-10-12 08:46:07 +09:00
Paul Masurel
e0cdd3114d Fixing README (#427)
Closes #424.
2018-09-17 08:52:29 +09:00
Paul Masurel
f32b4a2ebe Removing release build from ci, disabling lto (#425) 2018-09-17 06:41:40 +09:00
Paul Masurel
6ff60b8ed8 Fixing README (#426) 2018-09-17 06:20:44 +09:00
Paul Masurel
8da28fb6cf Added iml file 2018-09-16 13:26:54 +09:00
Paul Masurel
0df2a221da Bump version pre-release 2018-09-16 13:24:14 +09:00
Paul Masurel
5449ec3c11 Snippet term score (#423) 2018-09-16 10:21:02 +09:00
Paul Masurel
10f6c07c53 Clippy (#422)
* Cargo Format
* Clippy
2018-09-15 20:20:22 +09:00
Paul Masurel
06e7bd18e7 Clippy (#421)
* Cargo Format

* Clippy

* bugfix

* still clippy stuff

* clippy step 2
2018-09-15 14:56:14 +09:00
Paul Masurel
37e4280c0a Cargo Format (#420) 2018-09-15 07:44:22 +09:00
Paul Masurel
0ba1cf93f7 Remove Searcher dereference (#419) 2018-09-14 09:54:26 +09:00
Paul Masurel
21a9940726 Update Changelog with #388 (#418) 2018-09-14 09:31:11 +09:00
pentlander
8600b8ea25 Top collector (#413)
* Make TopCollector generic

Make TopCollector take a generic type instead of only being tied to
score. This will allow for sharing code between a TopCollector that
sorts results by Score and a TopCollector that sorts documents by a fast
field. This commit makes no functional changes to TopCollector.

* Add TopFieldCollector and TopScoreCollector

Create two new collectors that use the refactored TopCollector.
TopFieldCollector has the same functionality that TopCollector
originally had. TopFieldCollector allows for sorting results by a given
fast field. Closes tantivy-search/tantivy#388

* Make TopCollector private

Make TopCollector package private and export TopFieldCollector as
TopCollector to maintain backwards compatibility. Mark TopCollector
as deprecated to encourage use of the non-aliased TopFieldCollector.
Remove Collector implementation for TopCollector since it is no longer
used.
2018-09-14 09:22:17 +09:00
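A hypothetical sketch of what a "TopCollector generic over the ranking value" can look like: a fixed-capacity min-heap keyed by the value, whether that value is a relevance score or a fast-field value. Names and details are illustrative, not the actual tantivy implementation:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

type DocId = u32;

// Generic over the value documents are ranked by. `T: Ord` keeps the sketch
// simple; an f32 score would need an ordering wrapper, since floats are only
// PartialOrd.
struct TopCollector<T: Ord> {
    limit: usize,
    // Min-heap over (value, doc): the worst of the current top-K sits on top,
    // so it can be evicted cheaply when a better candidate arrives.
    heap: BinaryHeap<Reverse<(T, DocId)>>,
}

impl<T: Ord> TopCollector<T> {
    fn with_limit(limit: usize) -> Self {
        TopCollector { limit, heap: BinaryHeap::new() }
    }

    fn collect(&mut self, doc: DocId, value: T) {
        self.heap.push(Reverse((value, doc)));
        if self.heap.len() > self.limit {
            self.heap.pop(); // evict the current worst candidate
        }
    }

    fn into_sorted_vec(self) -> Vec<(T, DocId)> {
        let mut docs: Vec<(T, DocId)> =
            self.heap.into_iter().map(|Reverse(pair)| pair).collect();
        docs.sort_by(|a, b| b.cmp(a)); // best first
        docs
    }
}

fn main() {
    // Rank documents by a u64 fast-field value (e.g. a timestamp) instead of a score.
    let mut top = TopCollector::with_limit(2);
    for (doc, value) in [(0u32, 10u64), (1, 42), (2, 7), (3, 99)] {
        top.collect(doc, value);
    }
    assert_eq!(top.into_sorted_vec(), vec![(99, 3), (42, 1)]);
}
```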
Paul Masurel
30f4f85d48 Closes #414. (#417)
Updating documentation for load_searchers.
2018-09-14 09:11:07 +09:00
Paul Masurel
82d25b8397 Fixing snippet example 2018-09-13 12:39:42 +09:00
Paul Masurel
2104c0277c Updating uuid 2018-09-13 09:13:37 +09:00
Paul Masurel
dd37e109f2 Merge branch 'issue/368b' 2018-09-11 20:16:14 +09:00
Paul Masurel
cc23194c58 Editing document 2018-09-11 20:15:38 +09:00
Paul Masurel
63868733a3 Added SnippetGenerator 2018-09-11 09:45:27 +09:00
Paul Masurel
644d8a3a10 Added snippet generator 2018-09-10 16:39:45 +09:00
Paul Masurel
e32dba1a97 Phrase weight 2018-09-10 09:26:33 +09:00
Paul Masurel
a78aa4c259 updating doc 2018-09-09 17:23:30 +09:00
Paul Masurel
7e5f697d00 Closes #387 2018-09-09 16:23:56 +09:00
Paul Masurel
a78f4cca37 Merge branch 'issue/368' into issue/368b 2018-09-09 16:04:20 +09:00
Paul Masurel
2e44f0f099 blop 2018-09-09 14:23:24 +09:00
Vignesh Sarma K
9ccba9f864 Merge branch 'master' into issue/368 2018-09-07 20:27:38 +05:30
Paul Masurel
9101bf5753 Fragments 2018-09-07 09:57:12 +09:00
Paul Masurel
23e97da9f6 Merge branch 'master' of github.com:tantivy-search/tantivy 2018-09-07 08:44:14 +09:00
Paul Masurel
1d439e96f5 Using sort unstable by key. 2018-09-07 08:43:44 +09:00
Paul Masurel
934933582e Closes #402 (#403) 2018-09-06 10:12:26 +09:00
Paul Masurel
98c7fbdc6f Issue/378 (#392)
* Added failing unit test

* Closes #378. Handling queries that end up empty after going through the analyzer.

* Fixed stop word example
2018-09-06 10:11:54 +09:00
Paul Masurel
cec9956a01 Issue/389 (#405)
* Setting up the dependency.

* Completed README
2018-09-06 10:10:40 +09:00
Paul Masurel
c64972e039 Apply unicode lowercasing. (#408)
Checks if the str is ASCII, and uses a fast track if that is the case.
If not, it falls back to the std's definition of a lowercase character.

Closes #406
2018-09-05 09:43:56 +09:00
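A minimal sketch of the fast-track idea described above, assuming only the usual standard-library calls (illustrative, not tantivy's LowerCaser):

```rust
fn lowercase(text: &str) -> String {
    if text.is_ascii() {
        // Fast path: ASCII lowercasing is a simple byte-wise operation.
        text.to_ascii_lowercase()
    } else {
        // Slow path: the standard library's Unicode-aware lowercasing.
        text.to_lowercase()
    }
}

fn main() {
    assert_eq!(lowercase("Hello"), "hello");
    assert_eq!(lowercase("Österreich"), "österreich");
}
```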
Paul Masurel
b3b2421e8a Issue/367 (#404)
* First stab

* Closes #367
2018-09-04 09:17:00 +09:00
Paul Masurel
f570fe37d4 small changes 2018-08-31 09:03:44 +09:00
Paul Masurel
6704ab6987 Added methods to extract the matching terms. First stab 2018-08-30 09:47:19 +09:00
Paul Masurel
a12d211330 Extracting terms matching query in the document 2018-08-30 09:23:34 +09:00
Paul Masurel
ee681a4dd1 Added say thanks badge 2018-08-29 11:06:04 +09:00
petr-tik
d15efd6635 Closes #235 - adds a new error type (#398)
The error message suggests possible causes

Addressed code review 1 thread + smaller heap size
2018-08-29 08:26:59 +09:00
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
18814ba0c1 add a test for second fragment having higher score 2018-08-28 22:27:56 +05:30
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
f247935bb9 Use HighlightSection::new rather than just directly creating the object 2018-08-28 22:16:22 +05:30
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
6a197e023e ran rustfmt 2018-08-28 20:41:58 +05:30
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
96a313c6dd add more tests 2018-08-28 20:41:58 +05:30
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
fb9b1c1f41 add a test and fix the bug of not calculating first token 2018-08-28 20:41:58 +05:30
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
e1bca6db9d update calculate_score to try_add_token
`try_add_token` will now update the stop_offset as well.
`FragmentCandidate::new` now just takes `start_offset`,
it expects `try_add_token` to be called to add a token.
2018-08-28 20:41:58 +05:30
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
8438eda01a use while let instead of loop and if.
as per CR comment
2018-08-28 20:41:57 +05:30
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
b373f00840 add htmlescape and update to_html fn to use it.
tests and imports also updated.
2018-08-28 20:41:57 +05:30
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
46decdb0ea compare against accumulator rather than init value 2018-08-28 20:41:41 +05:30
Vignesh Sarma K (വിഘ്നേഷ് ശ൪മ കെ)
835cdc2fe8 Initial version of snippet
refer #368
2018-08-28 20:41:41 +05:30
Paul Masurel
19756bb7d6 Getting started on #368 2018-08-28 20:41:41 +05:30
CJP10
57e1f8ed28 Missed a closing bracket (#397) 2018-08-28 23:17:59 +09:00
Paul Masurel
2649c8a715 Issue/246 (#393)
* Moving Range and All to Leaves

* Parsing OR/AND

* Simplify user input ast

* AND and OR supported. Returning an error when mixing syntax

Closes #246

* Added support for NOT

* Updated changelog
2018-08-28 11:03:54 +09:00
Paul Masurel
ede97eded6 Removed use 2018-08-28 09:54:04 +09:00
Paul Masurel
4b7ff78c5a Added fundamentals 2018-08-28 08:09:27 +09:00
Paul Masurel
948758ad78 First commit for the documentation 2018-08-27 09:49:49 +09:00
Paul Masurel
d71fa43ca3 Moving emoticon on the right side of the parenthesis 2018-08-23 08:59:11 +09:00
Paul Masurel
1e5266d4c9 Merge branch 'master' of github.com:tantivy-search/tantivy 2018-08-23 08:55:30 +09:00
Paul Masurel
537fc27231 Added bench line in features 2018-08-23 08:55:13 +09:00
Dru Sellers
af593b1116 Add default EN stopwords to the default analyzer (#381)
* Add a default list of en stopwords

* Add the default en stopword filter to the standard tokenizers

* code review feedback
2018-08-22 10:49:39 +09:00
Paul Masurel
3d73c0c240 Update issue templates 2018-08-21 10:59:08 +09:00
Paul Masurel
3a8e524f77 Added example to show how to access the inverted list directly 2018-08-21 09:36:13 +09:00
Paul Masurel
c0641c2b47 Remove generate html script. It moved to tantivy-search.github.io 2018-08-21 08:26:46 +09:00
Dru Sellers
ef3a16a129 Switch from error-chain to failure crate (#376)
* Switch from error-chain to failure crate

* Added deprecated alias for

* Started editing the changelog
2018-08-20 09:40:45 +09:00
Paul Masurel
a0a284fe91 Added a full-fledged empty query and relying on it in QueryParser, instead of using an empty clause. 2018-08-20 09:21:32 +09:00
dependabot[bot]
0feeef2684 Update owning_ref requirement from 0.3 to 0.4 (#379)
Updates the requirements on [owning_ref](https://github.com/Kimundi/owning-ref-rs) to permit the latest version.
- [Release notes](https://github.com/Kimundi/owning-ref-rs/releases)
- [Commits](https://github.com/Kimundi/owning-ref-rs/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-08-20 09:08:11 +09:00
Dru Sellers
cc50bdb06a Add a basic faceted search example (#383)
* Add a basic faceted search example

* quieting the compiler
2018-08-19 08:07:54 +09:00
Paul Masurel
23c2c3ae7c Building all examples on appveyor + running them on travis 2018-08-17 13:24:37 +09:00
Dru Sellers
674524ba91 Add an example of using the stopwords filter (#377) 2018-08-17 12:52:21 +09:00
Paul Masurel
60a9a7f837 Added example showing how to delete/update documents 2018-08-17 09:43:55 +09:00
Paul Masurel
5b5c706581 Simplified examples 2018-08-16 22:38:39 +09:00
Paul Masurel
3e14a76623 Update regex_query.rs 2018-08-15 16:38:32 +09:00
Paul Masurel
8cde1c81e5 Update README.md 2018-08-13 18:03:30 +09:00
Paul Masurel
8d0a29b137 Added sourcerer wall of fame 2018-08-13 18:02:49 +09:00
Paul Masurel
cbfb2fe19d Avoid building twice when doing code coverage 2018-08-13 10:38:01 +09:00
Vignesh Sarma K
09e00f1d42 add position_length to Token (#337)
* add position_length to Token

refer #291

* Add term offset to `PhraseQuery`

ref #291

* Add new constructor for `PhraseQuery` that allows custom offset

* fix the method name as per pr comment

* Closes #291

Added unit test.
Using offsets from the analyzer in QueryParser.
2018-08-13 10:14:50 +09:00
Paul Masurel
290620fdee Added slashes 2018-08-13 09:13:01 +09:00
petr-tik
f0d1b85bd8 N370 pr fix num searchers (#371)
* Change ordering to Acquire

* set_num_searchers now uses AtomicUsize.store
2018-08-13 08:56:30 +09:00
petr-tik
aaef546f91 Moved NUM_SEARCHERS into a local variable (#369)
* Moved NUM_SEARCHERS into a local variable

dynamically determined as the number of available cpus.

var name in lowercase (not a constant anymore).

updated it in docstring

* lowercased the varnames

* User can set number of logical cores in create_from_metas

* cargo fmt

* Num_searchers as Arc<AtomicUsize>

Retrieving the value with Relaxed ordering

Reverted create_from_metas signature. However, it calls num_cpus and
sets the Arc val
2018-08-12 20:08:14 +09:00
Paul Masurel
811ddf2226 Closes #364 (#365)
* Closes #364

* Trying to raise the recursion limit

* Better unit test and bug fix on token offsets
2018-08-08 11:15:20 +09:00
Paul Masurel
79a339d353 Removing env_logger dependency 2018-08-02 19:29:09 +09:00
Paul Masurel
e45e4c79d9 update crossbeam 2018-08-02 19:24:08 +09:00
Paul Masurel
848bf41bc9 Updating rand to 0.5 (#363) 2018-08-02 19:19:04 +09:00
Paul Masurel
d11cb087a7 Updated to combine-0.3 (#362) 2018-08-02 18:29:58 +09:00
Jacob Brown
2dd7422f42 replace chan with crossbeam-channel (#361)
* replace chan with crossbeam-channel

* Update Cargo.toml
2018-08-02 12:47:22 +09:00
Paul Masurel
e8707c02c0 Issue/333 (#335)
* Add skip information for posting list (skip to doc ids) 
* Separate num bits from data for positions (skip n positions)
* Address in the position using an n-position offset
* Added a long skip structure to allow efficient opening of the position for a given term.
2018-07-31 10:51:53 +09:00
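To illustrate the "skip to doc ids" item above: keeping the last doc id of each block as skip information lets a seek jump over whole blocks and only scan inside the single block that can contain the target. A toy sketch under those assumptions (not tantivy's actual posting format):

```rust
const BLOCK_SIZE: usize = 4; // tiny block size to keep the example readable

struct PostingList {
    doc_ids: Vec<u32>, // sorted doc ids, conceptually stored block by block
    skips: Vec<u32>,   // last doc id of each block: lets us skip whole blocks
}

impl PostingList {
    fn new(doc_ids: Vec<u32>) -> Self {
        let skips = doc_ids
            .chunks(BLOCK_SIZE)
            .map(|block| *block.last().unwrap())
            .collect();
        PostingList { doc_ids, skips }
    }

    /// Returns the first doc id >= target, using the skip info to pick the block.
    fn seek(&self, target: u32) -> Option<u32> {
        // 1. Find the first block whose last doc id reaches the target.
        let block = self.skips.iter().position(|&last| last >= target)?;
        // 2. Linear scan inside that single block only.
        let start = block * BLOCK_SIZE;
        let end = (start + BLOCK_SIZE).min(self.doc_ids.len());
        self.doc_ids[start..end].iter().copied().find(|&doc| doc >= target)
    }
}

fn main() {
    let postings = PostingList::new(vec![1, 4, 9, 12, 20, 31, 44, 58, 70, 71]);
    assert_eq!(postings.seek(30), Some(31)); // jumps over the first block
    assert_eq!(postings.seek(72), None);
}
```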
dependabot[bot]
55928d756a Update rust-stemmers requirement to 1.0.2 (#350)
* Update rust-stemmers requirement to 1.0.2

Updates the requirements on [rust-stemmers](https://github.com/CurrySoftware/rust-stemmers) to permit the latest version.
- [Release notes](https://github.com/CurrySoftware/rust-stemmers/releases)
- [Commits](https://github.com/CurrySoftware/rust-stemmers/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* Update Cargo.toml
2018-07-31 09:32:57 +09:00
dependabot[bot]
a4370bca64 Update owned-read requirement to 0.4 (#352)
Updates the requirements on [owned-read](https://github.com/tantivy-search/owned-read) to permit the latest version.
- [Release notes](https://github.com/tantivy-search/owned-read/releases)
- [Commits](https://github.com/tantivy-search/owned-read/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-07-31 09:32:01 +09:00
dependabot[bot]
5a5c5a8ca5 Update bit-set requirement to 0.5.0 (#351)
* Update bit-set requirement to 0.5.0

Updates the requirements on [bit-set](https://github.com/contain-rs/bit-set) to permit the latest version.
- [Release notes](https://github.com/contain-rs/bit-set/releases)
- [Commits](https://github.com/contain-rs/bit-set/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* Update Cargo.toml

* Update Cargo.toml
2018-07-31 09:31:41 +09:00
dependabot[bot]
1b470dd474 Update log requirement to 0.4.3 (#353)
* Update log requirement to 0.4.3

Updates the requirements on [log](https://github.com/rust-lang/log) to permit the latest version.
- [Release notes](https://github.com/rust-lang/log/releases)
- [Changelog](https://github.com/rust-lang-nursery/log/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/log/commits/env_logger-0.4.3)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* Update Cargo.toml
2018-07-31 09:31:19 +09:00
Paul Masurel
52b4575245 Issue/355 (#358)
* issue with top_k sorting (#356)

* Closes #355
2018-07-31 08:24:55 +09:00
dependabot[bot]
ddd2d5b04c Update lazy_static requirement to 1.0.2 (#349)
* Update lazy_static requirement to 1.0.2

Updates the requirements on [lazy_static](https://github.com/rust-lang-nursery/lazy-static.rs) to permit the latest version.
- [Release notes](https://github.com/rust-lang-nursery/lazy-static.rs/releases)
- [Commits](https://github.com/rust-lang-nursery/lazy-static.rs/commits/v1.0.2)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* Update Cargo.toml
2018-07-30 12:34:06 +09:00
dependabot[bot]
fa22b4041a Update itertools requirement to 0.7.8 (#346)
* Update itertools requirement to 0.7.8

Updates the requirements on [itertools](https://github.com/bluss/rust-itertools) to permit the latest version.
- [Release notes](https://github.com/bluss/rust-itertools/releases)
- [Commits](https://github.com/bluss/rust-itertools/commits/0.7.8)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* Update Cargo.toml
2018-07-30 11:32:12 +09:00
dependabot[bot]
8faee143fa Update regex requirement to 1.0 (#347)
Updates the requirements on [regex](https://github.com/rust-lang/regex) to permit the latest version.
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/commits/1.0.2)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-07-30 09:59:19 +09:00
dependabot[bot]
366ce98f08 Update tempfile requirement to 3.0 (#348)
Updates the requirements on [tempfile](https://github.com/Stebalien/tempfile) to permit the latest version.
- [Release notes](https://github.com/Stebalien/tempfile/releases)
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/NEWS)
- [Commits](https://github.com/Stebalien/tempfile/commits/v3.0.3)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-07-30 09:58:56 +09:00
Paul Masurel
190e60a41c Closes #339. (#340)
As required by the FacetCollector,
facet values need to be sorted before being encoded in the
multivalued field.
2018-07-25 18:21:48 +09:00
Vignesh Sarma K
b9558801a1 Declare and implement separate Clone Traits (#336)
For traits, `Directory` and `MergePolicy`.

refer #306
2018-07-18 12:36:43 +09:00
Paul Masurel
36728215ac Using the codecov badge 2018-07-10 21:19:59 +09:00
Paul Masurel
39551a0418 fix travis 2018-07-10 13:08:22 +09:00
Paul Masurel
39b98b2e76 fix travis 2018-07-10 13:07:15 +09:00
Paul Masurel
616162400d Add missing space 2018-07-10 12:49:32 +09:00
Paul Masurel
694d164db6 fix travis.yml 2018-07-10 09:39:39 +09:00
Paul Masurel
ef442cefb1 codecov 2018-07-10 09:38:59 +09:00
Paul Masurel
14da241f35 Readded cov 2018-07-10 09:25:24 +09:00
Paul Masurel
346a9e4287 Set dev version 2018-07-10 09:20:21 +09:00
Paul Masurel
31655e92d7 Preparing release 0.6.1 2018-07-10 09:12:26 +09:00
Paul Masurel
6b8d76685a Tiny refactoring 2018-07-05 09:11:55 +09:00
Paul Masurel
ce5683fc6a Removed useless counting_writer 2018-07-04 16:13:19 +09:00
Paul Masurel
5205579db6 Merge branch 'master' of github.com:tantivy-search/tantivy 2018-07-04 16:09:59 +09:00
Paul Masurel
d056ae60dc Removed SourceRead. Relying on the new owned-read crate instead (#332) 2018-07-04 16:08:52 +09:00
Paul Masurel
af9280c95f Removed SourceRead. Relying on the new owned-read crate instead 2018-07-04 12:47:25 +09:00
David Hewson
2e538ce6e6 remove extra space in name (#331)
the extra space that appeared breaks using the package
2018-07-02 05:32:19 +09:00
Jason Wolfe
00466d2b08 #328: Support parsing unbounded range queries (#329)
* #328: Support parsing unbounded range queries. Update CHANGELOG.md for query parser changes.

* Set version to 0.7-dev
2018-06-30 13:24:02 +09:00
Paul Masurel
8ebbf6b336 Issue/325 (#330)
* Introducing a SegmentMeta inventory.
* Depending on census=0.1
* Cargo fmt
2018-06-30 13:11:41 +09:00
Paul Masurel
1ce36bb211 Merge branch 'master' of github.com:tantivy-search/tantivy 2018-06-27 16:58:47 +09:00
Jason Wolfe
2ac43bf21b Support parsing RangeQuery and AllQuery in Queryparser (#323)
* (#321) Add support for range query parsing to grammar / parser. Still needs to be wired through the rest of the way.

* (321) Finish wiring RangeQuery parsing through

* (#321) Add logical AST query parser tests for RangeQuery

* (#321) Support parsing AllQuery

* (#321) Update documentation of QueryParser

* (#321) Support negative numbers in range query parsing
2018-06-25 08:29:47 +09:00
Paul Masurel
3fd8c2aa5a Removed one keyword 2018-06-22 14:47:21 +09:00
Paul Masurel
c1022e23d2 Switching to stable rust in AppVeyor. 2018-06-22 14:33:42 +09:00
Paul Masurel
8ccbfdea5d Preparing for release 2018-06-22 14:27:46 +09:00
Paul Masurel
badfce3a23 Preparing for release. 2018-06-22 14:09:14 +09:00
Dru Sellers
e301e0bc87 Add some simple doc tests (#320)
* Add TopCollector doc test

* Add CountCollector Doc Test

* Add Doc Test for MultiCollector

* Add ChainedCollector Doc Test

* Expose Fuzzy Query where it should be

* Add FuzzyTermQuery Doc Test

* Expose RegexQuery

* Regex Query Doc Test

* Add TermQuery Doc Test

* Add doc comments

* fix test 🤦

* Added explanation about the complexity variables

* Fixing unit tests

* Single threads if you check docids
2018-06-19 10:45:20 +09:00
Dru Sellers
317baf4e75 Add in simple regex query support (#319)
* Add fst_regex crate in

* Reduce API surface area

This doesn't need to be public

* better test name

* Pull Automaton weight out so it can be shared

* Implement Regex Query
2018-06-16 14:08:30 +09:00
Paul Masurel
24398d94e4 Exposing the 2018-06-15 21:40:57 +09:00
Dru Sellers
360f4132eb Standardizes the Index::open_* APIs (#318)
* Relocate `from_directory` closer to its usage

* Specific methods come before the generic method

* Rename open methods to follow the lead of the create methods
2018-06-15 12:16:41 +09:00
Dru Sellers
2b8f02764b Standardizes the Index::create_* APIs (#317)
* Pull all creation methods next to each other

The goal here is to make it clear which methods are performing the
same function, and to assist with standardizing the API calls.

* Make `from_directory` private

This seems to be an internal function, so let's make it internal.

* Rename `create` to `create_in_dir`

This lets the name match the `create_in_ram` pattern and opens up
`create` for the generic implementation.

* Implement the generic create function

All of the create methods now delegate to the common create function
and future `create_in_*` functions now have a clear pattern
to follow as well
2018-06-14 11:08:42 +09:00
Paul Masurel
0465876854 Issue/257 (#310)
* Replaced lz4 by a pure rust implementation of snappy.

Closes #257

* snappy is the default compression. One can use lz4 by enabling the lz4 feature flag.

* Removed Compression trait
2018-06-12 19:02:57 +09:00
Dru Sellers
6f7b099370 Add AutomatonWeight to a fuzzy_search module and FuzzyQuery (#300)
* Add AutomatonWeight to a fuzzy_search module

* Hacking around ownership issues

* Working through lifetime issues

* Working through tests

* fix test by lower casing the words (reducing distance)

* code review changes

* Suggestion on how to solve the borrow problem

* clean up
2018-06-11 22:23:03 +09:00
Paul Masurel
84f5cc4388 Added an AUTHORS file. Closes #315 (#316) 2018-06-11 22:21:58 +09:00
Paul Masurel
75aae0d2c2 Update README 2018-06-08 13:05:57 +09:00
Paul Masurel
009a3559be atomicwrites 2.2.0 for ARM compilation 2018-06-06 07:13:09 +09:00
Paul Masurel
7a31669e9d Disabling ARM targets 2018-06-05 12:22:00 +09:00
Paul Masurel
5185eb790b Reduced heap usage in unit test 2018-06-05 10:02:10 +09:00
Paul Masurel
a3dffbf1c6 Added more ARM target. 2018-06-05 09:06:33 +09:00
Paul Masurel
857a5794d8 Updated nix version 2018-06-05 09:02:40 +09:00
Paul Masurel
b0a6fc1448 Reduce RAM usage 2018-06-04 11:20:24 +09:00
Paul Masurel
989d52bea4 Updated atomicwrites version. 2018-06-04 10:00:21 +09:00
Paul Masurel
09661ea7ec Added cross testing on different platforms 2018-06-04 09:47:53 +09:00
Paul Masurel
b59132966f Better heap (#311)
* Changed the heap to a paged memory arena.
* Trying to simplify the indexing term hashmap
* Exploding datastruct
* Removed some complexity in bitpacker
2018-06-04 09:39:18 +09:00
Paul Masurel
863d3411bc Update Cargo.toml 2018-05-31 15:54:34 +09:00
Paul Masurel
8a55d133ab Showing Appveyor CI badge for the master branch
Previously, the last build was shown.
2018-05-28 13:44:53 +09:00
Jason Wolfe
432d49d814 Expose parameters of RangeQuery for external usage (#309) 2018-05-19 14:29:25 +09:00
Jason Wolfe
0cea706f10 Add docs to new Query methods (#307) 2018-05-18 13:53:29 +09:00
Paul Masurel
71d41ca209 Added Google to the license 2018-05-18 10:13:23 +09:00
Paul Masurel
bc69dab822 cargo fmt 2018-05-18 10:08:05 +09:00
Jason Wolfe
72acad0921 Add box_clone() and downcast::Any to Query (#303) 2018-05-18 09:53:11 +09:00
Paul Masurel
c9459f74e8 Update docs about TermDict. 2018-05-18 09:20:39 +09:00
Dru Sellers
08d2cc6c7b Make it possible to stream the terms matching an Automaton (#297)
* rustfmt and some English grammar

* sort cargo.toml crates

* WIP: something to show

* Remove example for now

* Implement desired method

* Resolving Generic Type Arguments

* Resolve Generic Types

* Banging around on the tests

* DANGER! Change unsafe usage based on compiler warnings

* Unscrew up my rebase

* Clean Up Type Spam

Default Types FTW

* typo

* better variable names

* Remove Duplicate Levenshtein crate
2018-05-11 12:41:14 -07:00
Dru Sellers
82d87416c2 Implement StopWords Filter (#292)
* Implement StopWords Filter

- added example doctest for alphanum_only.rs so that I could
drive my own test of the stopword filter

* Style Cop

* Switch HashSet Hasher to FNV for speed

* Update Change Log

* fix missed location renaming
2018-05-09 18:40:41 -07:00
Paul Masurel
96b2c2971e Testing actual doc ids in unit test 2018-05-09 09:14:22 -07:00
Dru Sellers
162afd73f6 Alive docs iterator (#293)
* Add non-deleted DocId iterator to SegmentReader

Closes #287

* Add Todo

* Add Unit Test

* Improving test based on feedback

- found bug and fixed it. :)

* Reestablish changes post rebase for clean merge
2018-05-09 09:03:27 -07:00
Paul Masurel
ddfd87fa59 Merge branch 'master' of github.com:tantivy-search/tantivy 2018-05-08 00:08:17 -07:00
Paul Masurel
24050d0eb5 Remove some unsafe stuff, justified some of it. 2018-05-07 23:57:53 -07:00
Jason Wolfe
89eb209ece #294: Make fieldnorm module public, add documentation (#295) 2018-05-07 20:20:38 -07:00
Paul Masurel
9a0b7f9855 Rustfmt 2018-05-07 19:50:35 -07:00
Jason Wolfe
8e343b1ca3 Add fast field for associating arbitrary bytes to a document (#275)
* Add fast field for associating arbitrary bytes to a document

* Fix unused macro_use warning

* Improvements from code review

* Make BytesFastFieldWriter public

* Fix json parsing validation failure

* Add bytes fast field to CHANGELOG.md

* Fix compile errors from merge

* Support merging

* Address misc code review comments

* Fix comments from CR
2018-05-07 19:30:31 -07:00
Paul Masurel
99c0b84036 Integrating #274, #280, #289 into master (#290)
* Integrating bugfixes into master

Closes #274
Closes #280
Closes #289

* Next version will be 0.6
2018-05-06 09:48:25 -07:00
Dru Sellers
ca74c14647 Simple Implementation of NGram Tokenizer (#278)
* Simple Implementation of NGram Tokenizer

It does not yet support edges
It could probably be better in many "rusty" ways
But the test is passing, so I'll call this a good stopping point for
the day.

* Remove Ngram from manager. Too many variations

* Basic configuration model

Should the extensive tests exist here?

* Add Sample to provide an End to End testing

* Basic Edgegram support

* cleanup

* code feedback

* More code review feedback processed
2018-05-06 09:47:49 -07:00
Dru Sellers
68ee18e4e8 Add Index::open_directory function (#285)
* Add Index::open_directory function

* dry
2018-05-03 00:07:46 -07:00
Paul Masurel
5637657c2f Removed ptr dereference for explicit ptr::read_unaligned 2018-04-25 19:15:32 +09:00
Paul Masurel
2e3c9a8878 Bugfix in murmurhash. 2018-04-25 19:06:31 +09:00
Paul Masurel
78673172d0 Cargo fmt 2018-04-21 20:05:36 +09:00
Paul Masurel
175b76f119 Removed streamdict
Closes #271
2018-04-21 19:55:41 +09:00
Paul Masurel
9b79e21bd7 Returning error when schema is not valid for a given query. 2018-04-19 13:02:30 +09:00
Paul Masurel
5e38ae336f Bump tantivy version and readded win deps 2018-04-17 18:27:57 +09:00
Paul Masurel
8604351f59 Hide some of the API
Added some doc.
2018-04-17 13:31:22 +09:00
Paul Masurel
6a48953d8a Closes #266 (#268)
PhraseQuery panics with a nice error message when the underlying field does not have any positions.
The `QueryParser` fails as well with a dedicated error.
2018-04-17 10:03:15 +09:00
pmasurel
0804b42afa Checking the type of range queries 2018-04-16 14:01:10 +09:00
Paul Masurel
8083bc6eef bench working 2018-04-15 12:25:38 +09:00
Paul Masurel
0156f88265 Compiles in stable rust 2018-04-15 11:03:44 +09:00
Paul Masurel
a1c07bf457 Added iterator for facet collector 2018-04-14 20:22:02 +09:00
Paul Masurel
9de74b68d1 Remove range argument 2018-04-13 18:34:23 +09:00
Paul Masurel
57c7073867 Removed 2018-04-13 09:43:36 +09:00
Paul Masurel
121374b89b Removed the need for AtomicU64 2018-04-12 22:08:15 +09:00
Paul Masurel
e44782bf14 No more 2018-04-12 13:01:11 +09:00
Paul Masurel
dfafb24fa6 Bumped bitpacker's version 2018-04-10 21:21:47 +09:00
jason-wolfe
4c6f9541e9 #263: Make MultiValueIntFastFieldWriter public, expose via FastFieldsWriter (#264) 2018-04-10 12:27:34 +09:00
Paul Masurel
743ae102f1 Using bitpacker@3 2018-04-10 10:05:42 +09:00
Paul Masurel
0107fe886b Removed timer 2018-03-31 15:40:16 +09:00
Paul Masurel
1d9566e73c Making mmap a feature 2018-03-31 13:23:43 +09:00
Paul Masurel
8006f1df11 Added comments 2018-03-28 08:28:49 +09:00
Paul Masurel
ffa03bad71 TermScorer does not handle deletes 2018-03-27 17:35:20 +09:00
Paul Masurel
98cf4ba63a Small refactor of postings's skip method 2018-03-27 16:14:28 +09:00
Paul Masurel
4d65771e04 field norm reader is not an option anymore. 2018-03-26 13:25:29 +09:00
Paul Masurel
9712a75399 Added unit test for intersection score 2018-03-25 12:58:24 +09:00
Paul Masurel
3ae03b91ae PhraseScorer's score aligned with that of Lucene. 2018-03-25 12:44:16 +09:00
Paul Masurel
238b02ce7d Bugfixed 2018-03-23 18:50:57 +09:00
Paul Masurel
3091459777 Fixed main bug. Unit test still not passing because of altered scoring 2018-03-23 13:52:10 +09:00
Paul Masurel
b7f8884246 Closes #245 = BM25. (#260)
* Closes #245 = BM25.

Scores are the same as Lucene.

* Fixing travis conf
2018-03-22 15:06:56 +09:00
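For reference, the textbook BM25 term score with a Lucene-style IDF, which the commit above says tantivy's scores now match; k1 = 1.2 and b = 0.75 are the customary defaults (assumed here, not read from the source):

```rust
// Textbook BM25 term score with a Lucene-style IDF, sketched for illustration.
fn bm25_term_score(
    term_freq: f32,   // occurrences of the term in the document
    doc_len: f32,     // length of the document's field, in tokens
    avg_doc_len: f32, // average field length over the segment
    doc_count: f32,   // number of documents in the segment
    doc_freq: f32,    // number of documents containing the term
) -> f32 {
    let (k1, b) = (1.2, 0.75);
    let idf = (1.0 + (doc_count - doc_freq + 0.5) / (doc_freq + 0.5)).ln();
    let norm = k1 * (1.0 - b + b * doc_len / avg_doc_len);
    idf * term_freq * (k1 + 1.0) / (term_freq + norm)
}

fn main() {
    // A term appearing twice in a slightly-longer-than-average document.
    let score = bm25_term_score(2.0, 120.0, 100.0, 1_000.0, 50.0);
    println!("bm25 term score = {:.4}", score);
}
```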
Paul Masurel
e22f767fda Backmerge 2018-03-21 21:18:46 +09:00
Paul Masurel
3ecfc36e53 Total field norm fixed. 2018-03-21 20:43:02 +09:00
Paul Masurel
1c9450174e Fieldnorm reader working except merge 2018-03-21 17:36:16 +09:00
Paul Masurel
cde4c391cd Added fieldnorm module 2018-03-21 15:41:46 +09:00
Paul Masurel
6d47634616 Added unit tests 2018-03-20 12:11:28 +09:00
Paul Masurel
39b182c24b Simplified phrase queries. Reading several time is ok. 2018-03-20 11:47:48 +09:00
Paul Masurel
baaae3f4ec Making it possible to read positions twice 2018-03-20 11:36:22 +09:00
Paul Masurel
63064601a7 Readded test for reading positions twice 2018-03-20 10:04:36 +09:00
Paul Masurel
07a8023a3a Added 2018-03-19 14:36:43 +09:00
Paul Masurel
59639cd311 In sync with master. Fixed merging 2018-03-19 12:58:42 +09:00
Paul Masurel
b0e5e1f61d Back merged master 2018-03-19 12:19:08 +09:00
Paul Masurel
234a902470 Removed cc from Cargo.toml 2018-03-19 12:09:25 +09:00
Paul Masurel
75d130f1ce Edited CHANGELOG 2018-03-19 12:01:48 +09:00
Paul Masurel
410187dd24 Removed .vimrc 2018-03-19 11:54:10 +09:00
Paul Masurel
88303d4833 Removed script directory 2018-03-19 11:53:15 +09:00
Paul Masurel
a26b0ff4a2 Removed exclude cpp from travis configuration 2018-03-19 11:51:41 +09:00
Paul Masurel
d4ed86f13a Issue/255 (#256)
* Remove cpp compression.

* Pointing to publish bitpacking

* Edited README
2018-03-19 11:48:40 +09:00
Paul Masurel
fc8902353c fieldnorm encoding. test broken 2018-03-10 18:35:16 +09:00
Paul Masurel
a2ee988304 Small change in pop_lowest. 2018-03-10 15:32:30 +09:00
Paul Masurel
97b7984200 Updated CHANGELOG 2018-03-10 14:08:11 +09:00
Paul Masurel
8683718159 Version bump 2018-03-10 14:01:30 +09:00
Paul Masurel
0cf274135b Clippy 2018-03-10 13:07:18 +09:00
Paul Masurel
a3b44773bb Bugfix and rustfmt 2018-03-10 12:21:50 +09:00
Paul Masurel
ec7c582109 NOBUG no-simd compression fix 2018-03-09 14:19:58 +09:00
Ewan Higgs
ee7ab72fb1 Support trailing commas using ',+ ,' trick from Blandy 2017. (#250) 2018-02-27 10:33:39 +09:00
Paul Masurel
2c20759829 removed unsafecell for position computer 2018-02-24 12:07:55 +09:00
Paul Masurel
23387b0ed0 Positions writes to an external Vec 2018-02-24 11:14:45 +09:00
Dylan DPC
e82859f2e6 Update Cargo.toml (#249) 2018-02-24 09:17:33 +09:00
Paul Masurel
be830b03c5 Bugfix in intersection.advance and impl skip_next 2018-02-23 11:55:23 +09:00
Paul Masurel
1b94a3e382 Phrase query optimisation 2018-02-23 00:00:22 +09:00
Paul Masurel
c3fbc4c8fa Simplified a notch TinySet::pop_lowest() 2018-02-22 10:43:06 +09:00
Paul Masurel
4ee2db25a0 Generic on Postings rather than deletes in TermScorer 2018-02-22 08:26:45 +09:00
Paul Masurel
e423784fd0 Added specialized SegmentPostings when there are no DeleteSet 2018-02-21 23:49:20 +09:00
Paul Masurel
fdb9c3c516 Tantivy version 0.5.0 2018-02-21 11:38:26 +09:00
Paul Masurel
6fb114224a Added unit test 2018-02-21 00:13:04 +09:00
Paul Masurel
2c3e33895a Added unit tests 2018-02-21 00:03:41 +09:00
Paul Masurel
d512b53688 Added handling of parenthesis in query parser 2018-02-20 23:18:02 +09:00
Paul Masurel
c8afd2b55d Added unit tests 2018-02-20 17:05:33 +09:00
Paul Masurel
3fd6d7125b Added unit test 2018-02-20 13:12:05 +09:00
Paul Masurel
de6a3987a9 Ignoring functional test 2018-02-20 12:58:06 +09:00
Paul Masurel
3dedc465fa Merge branch 'feature/multivalued-i64-u64' 2018-02-20 12:54:18 +09:00
Paul Masurel
f16cc6367e Refactoring of fastfields 2018-02-20 12:52:30 +09:00
Paul Masurel
4026fc5fb1 Removed redundant compressed_block_size function 2018-02-20 08:28:28 +09:00
Paul Masurel
43742a93ef Multivalue u64 field / i64 field. 2018-02-20 00:16:20 +09:00
Paul Masurel
2a843d86cb Code cleaning 2018-02-19 21:51:39 +09:00
Paul Masurel
9a706c296a Larger union horizon 2018-02-19 21:50:33 +09:00
Paul Masurel
5ff8123b7a Code cleaning 2018-02-19 15:41:19 +09:00
Paul Masurel
6061158506 Added long running test to travis conf 2018-02-19 13:23:04 +09:00
Paul Masurel
4e8b0e89d9 Added unit test 2018-02-19 13:19:18 +09:00
Paul Masurel
0540ebb49e Cargo clippy 2018-02-19 12:36:24 +09:00
Paul Masurel
ef94582203 Rustfmt 2018-02-19 12:12:10 +09:00
Paul Masurel
2f242d5f52 Moving docset around 2018-02-19 12:07:05 +09:00
Paul Masurel
da3d372e6e Faster union counts 2018-02-19 10:17:16 +09:00
Paul Masurel
42fd3fe5c7 Bugfix on TermWeight::count() 2018-02-18 10:59:18 +09:00
Paul Masurel
5dae6e6bbc Downcast TermScorer for intersection when all legs are TermScorers 2018-02-18 10:28:43 +09:00
Paul Masurel
e608e0a1df Removed half baked usage of Any 2018-02-18 10:01:14 +09:00
Paul Masurel
6c8c90d348 Removed lifetime from scorer 2018-02-18 09:12:40 +09:00
Paul Masurel
eb50e92ec4 Removed specialized postings on SegmentPostings 2018-02-18 00:09:15 +09:00
Paul Masurel
20bede9462 Bugfix when requesting no termfreq. 2018-02-17 22:41:12 +09:00
Paul Masurel
4640ab4e65 Merge branch 'master' into issue/query-perf 2018-02-17 17:31:51 +09:00
Paul Masurel
cd51ed0f9f Added comments 2018-02-17 16:59:28 +09:00
Paul Masurel
6676fe5717 Added a count method 2018-02-17 15:02:51 +09:00
Paul Masurel
292bb17346 Disable scoring
- Disabling scoring is an argument of the `.weight()` method
- Collectors declare whether they need scoring
2018-02-17 12:43:16 +09:00
Paul Masurel
0300e7272b Scoring for union. 2018-02-17 11:56:21 +09:00
Paul Masurel
8760899fa2 Stupid implementation of Box<Scorer>::collect 2018-02-16 19:30:50 +09:00
Paul Masurel
c89d570a79 rustfmt 2018-02-16 17:50:05 +09:00
Paul Masurel
1da06d867b Using the same logic when score is enabled. 2018-02-16 17:36:33 +09:00
Paul Masurel
76e8db6ed3 blop 2018-02-16 14:57:08 +09:00
Paul Masurel
31e5580bfa Renaming intersection / exclude 2018-02-16 11:55:56 +09:00
Paul Masurel
930d3db2f7 Integrated reqopt_scorer 2018-02-16 11:43:27 +09:00
Paul Masurel
1593e1dc6f Added reqopt 2018-02-16 11:22:39 +09:00
Paul Masurel
e0189fc9e6 Added exclude query 2018-02-14 18:06:51 +09:00
Paul Masurel
ffdb4ef0a7 Added unit test 2018-02-14 11:58:40 +09:00
Paul Masurel
58845344c2 Unit test + bugfix in union 2018-02-13 14:54:20 +09:00
Paul Masurel
548ec9ecca Added ok unit test 2018-02-12 17:48:41 +09:00
Paul Masurel
86b700fa93 Updated travis.yml 2018-02-12 12:13:36 +09:00
Paul Masurel
e95c49e749 Added unit test to show bug in intersection 2018-02-12 12:06:19 +09:00
Paul Masurel
f3033a8469 Added sudo required to travis conf because of https://github.com/travis-ci/travis-ci/issues/9061 2018-02-12 11:19:12 +09:00
Paul Masurel
c4125bda59 Backmerging master 2018-02-12 11:08:57 +09:00
Paul Masurel
a7ffc0e610 Rustfmt 2018-02-12 10:31:29 +09:00
Paul Masurel
9370427ae2 Terminfo blocks (#244)
* Using u64 key in the store
* Using Option<> for the next element, as opposed to u64
* Code simplification.
* Added TermInfoStoreWriter.
* Added a TermInfoStore
* Added FixedSized for BinarySerialized.
2018-02-12 10:24:58 +09:00
Paul Masurel
1fc7afa90a Issue/range query (#242)
BitSet and RangeQuery
2018-02-05 09:33:25 +09:00
Paul Masurel
6a104e4f69 Cargo fmt 2018-02-03 11:59:34 +09:00
Paul Masurel
920f086e1d Clippy 2018-02-03 11:46:01 +09:00
Paul Masurel
13aaca7e11 Merge branch 'master' into merge-facets 2018-02-03 11:13:02 +09:00
Paul Masurel
df53dc4ceb Format 2018-02-03 00:21:05 +09:00
Paul Masurel
dd028841e8 Added documentation / test and change the contract of .add_facet() 2018-02-03 00:17:51 +09:00
Paul Masurel
eb84b8a60d bugfix 2018-02-02 18:52:07 +09:00
Paul Masurel
c05f46ad0e skip for intersection 2018-02-02 17:22:58 +09:00
Paul Masurel
435ff9d524 Make constructor of RangeQuery public 2018-02-02 16:50:22 +09:00
Paul Masurel
fdd5dd8496 Merge branch 'master' into issue/query-perf 2018-02-02 16:39:28 +09:00
Paul Masurel
fb5476d5de Query optimization: phrase query + union 2018-02-02 16:39:17 +09:00
Paul Masurel
dd8332c327 Added disabling scoring 2018-02-02 12:11:56 +09:00
Paul Masurel
63d201150b issue/range-query Added range query 2018-02-02 00:41:12 +09:00
Paul Masurel
b78efdc59f NOBUG Use the skipping logic of segment postings in 2018-02-01 18:36:55 +09:00
Paul Masurel
5cb08f7996 Method to create bitset from DocSet directly. 2018-02-01 18:25:43 +09:00
Paul Masurel
1947a19700 Added bitset 2018-01-31 23:56:54 +09:00
Paul Masurel
271b019420 added cargo doc 2018-01-30 15:18:19 +09:00
Paul Masurel
340693184f Added comment 2018-01-30 15:15:55 +09:00
Paul Masurel
97782a9511 updated travis-cargo 2018-01-30 13:18:51 +09:00
Paul Masurel
930010aa88 Unit test passing 2018-01-28 00:03:51 +09:00
Paul Masurel
7f5b07d4e7 Fixing unit tests 2018-01-25 14:55:29 +09:00
Paul Masurel
3edb3dce6a Test not passing 2018-01-25 12:46:32 +09:00
Paul Masurel
1edaf7a312 Closes #236. Removes dependency to version. 2018-01-20 12:12:43 +09:00
Paul Masurel
137906ff29 Fixing PhraseQuery, broken due to the reordering of the intersection clauses.
Closes #234
2018-01-12 21:01:28 +09:00
Paul Masurel
143a143cde issue/232 added unit test. (#233) 2018-01-11 23:37:45 +09:00
Paul Masurel
4f5ce12a77 NOBUG removed cpp from patterns 2018-01-05 12:09:42 +09:00
Paul Masurel
813efa4ab3 NOBUG coveralls 2018-01-05 11:03:27 +09:00
Paul Masurel
c3b6c1dc0b NOBUG coveralls 2018-01-05 00:31:57 +09:00
Paul Masurel
6f5e0ef6f4 NOBUG Simplify travis 2018-01-04 20:51:00 +09:00
Paul Masurel
7224f58895 Merge branch 'issue/218'
Conflicts:
	src/directory/mmap_directory.rs
	src/lib.rs
2018-01-04 18:47:10 +09:00
Paul Masurel
49519c3f61 added comments 2018-01-04 12:53:20 +09:00
Paul Masurel
cb11b92505 Added comments 2018-01-04 12:27:14 +09:00
Paul Masurel
7b2dcfbd91 Merge branch 'issue/227' 2018-01-04 12:12:00 +09:00
Paul Masurel
d2e30e6681 Merge branch 'master' of github.com:tantivy-search/tantivy 2018-01-04 12:09:44 +09:00
Paul Masurel
ef109927b3 rustfmt 2018-01-04 12:08:34 +09:00
Paul Masurel
44e5c4dfd3 Added alphanum only token filter 2017-12-31 13:43:10 +09:00
Paul Masurel
6f223253ea Made load_metas public 2017-12-31 08:57:19 +09:00
Paul Masurel
f7b0392bd5 issue/230 Add an optional commit message. (#231)
Closes #230
2017-12-27 12:27:02 +09:00
Paul Masurel
442bc9a1b8 Fixes the computation of the memory size of a hashtable with a key of n bits. (#229)
Closes #228
2017-12-25 13:04:10 +09:00
Paul Masurel
db7d784573 Issue 227 Faster merge when there are no deletes 2017-12-21 22:04:05 +09:00
Paul Masurel
79132e803a NOBUG Switched to 64 bits addr 2017-12-21 11:06:46 +09:00
Paul Masurel
9e132b7dde NOBUG QueryParser does not need to be mut. Code cleanup 2017-12-16 15:43:35 +09:00
Paul Masurel
1e55189db1 NOBUG rustfmt 2017-12-14 19:30:31 +09:00
Paul Masurel
8b1b389a76 NOBUG Clippy 2017-12-14 19:25:12 +09:00
Paul Masurel
46f3ec87a5 Removed packed memory layout. 2017-12-14 18:37:04 +09:00
Paul Masurel
f24e5f405e NOBUG intellij misc lint 2017-12-14 18:23:35 +09:00
Paul Masurel
2589be3984 BUGFIX Serialization of schema got broken after serde's update 2017-12-14 17:37:20 +09:00
Paul Masurel
a02a9294e4 removed doc in travis 2017-11-27 13:53:58 +09:00
Paul Masurel
8023445b63 docs 2017-11-26 11:52:03 +09:00
Paul Masurel
05ce093f97 doc 2017-11-26 11:43:11 +09:00
Paul Masurel
6937e23a56 fixing doctest 2017-11-26 11:06:34 +09:00
Paul Masurel
974c321153 cargo fmt 2017-11-26 11:02:02 +09:00
Paul Masurel
f30ec9b36b Merge branch 'master' of github.com:tantivy-search/tantivy
Conflicts:
	src/analyzer/mod.rs
	src/schema/index_record_option.rs
	src/tokenizer/lower_caser.rs
	src/tokenizer/tokenizer.rs
2017-11-26 10:54:05 +09:00
Paul Masurel
acd7c1ea2d Added comments 2017-11-26 10:44:49 +09:00
Paul Masurel
aaeeda2bc5 Editing rustdoc 2017-11-25 13:23:32 +09:00
Paul Masurel
ac4d433fad Renamed analyzer to tokenizer 2017-11-24 16:50:32 +09:00
Paul Masurel
a298c084e6 Analyzer's Analyzer::token_stream does not need to me &mut self 2017-11-22 20:37:34 +09:00
Paul Masurel
185a72b341 Closes #224. Fixes documentation about STORED in the example. (#225) 2017-11-16 08:22:54 +09:00
Paul Masurel
bb41ae76f9 Closes #224. Fixes documentation about STORED in the example. 2017-11-16 08:16:17 +09:00
Paul Masurel
74d32e522a Stopped using mmap in tantivy. Caching MmapReadOnly.
Closes #218
2017-10-08 17:07:19 +09:00
Jain Jacob
927dd1ee6f Updates crate gcc to cc v1 (#217)
* Bump cc to v1

* Changes gcc::Config to cc::Build. Resolves #216
2017-10-06 16:18:44 +09:00
Paul Masurel
2c9302290f #191 Analyzer 2017-09-20 22:56:55 +09:00
Paul Masurel
426cc436da Test passing 2017-09-10 17:48:41 +09:00
Paul Masurel
68d42c9cf2 Added raw tokenizer, using the right analyzer in query parser. 2017-09-10 16:58:50 +09:00
Paul Masurel
ca49d6130f Test not passing 2017-09-09 17:32:47 +09:00
Paul Masurel
3588ca0561 Integrated with the merge branch 2017-09-09 15:27:19 +09:00
Paul Masurel
7c6cdcd876 Merge branch 'master' of github.com:tantivy-search/tantivy 2017-09-02 16:03:06 +09:00
Paul Masurel
71366b9a56 issue/197 Remove logic that prevents leak from crossbeam MsQueue. (#212)
Closes #197
2017-09-02 15:55:23 +09:00
Paul Masurel
a3247ebcfb issue/197 Remove logic that prevents leak from crossbeam MsQueue. 2017-09-02 15:53:07 +09:00
Paul Masurel
3ec13a8719 Readded fix for non-simd 2017-08-28 23:18:56 +09:00
Paul Masurel
f8593c76d5 Merge branch 'imhotep-new-codec'
Conflicts:
	src/common/bitpacker.rs
	src/compression/pack/compression_pack_nosimd.rs
	src/indexer/log_merge_policy.rs
2017-08-28 19:30:01 +09:00
Paul Masurel
f8710bd4b0 Format 2017-08-28 18:22:41 +09:00
Paul Masurel
8d05b8f7b2 Added comments. Renamed field reader 2017-08-28 17:00:12 +09:00
Paul Masurel
fc25516b7a Added unit test. 2017-08-28 11:15:37 +09:00
Paul Masurel
5b1e71947f Stream working, all test passing 2017-08-27 20:20:38 +09:00
Paul Masurel
69351fb4a5 Toward a new codec 2017-08-27 18:44:37 +09:00
Paul Masurel
3d0082d020 Delta encoded. Range and get are broken 2017-08-26 19:59:51 +09:00
Paul Masurel
8e450c770a Better error handling. Some doc. 2017-08-26 18:40:30 +09:00
Paul Masurel
a757902aed Merge branch 'feature/streamdict-simd' into imhotep 2017-08-22 18:58:57 +09:00
Paul Masurel
b3a8074826 removed println 2017-08-22 18:58:17 +09:00
Paul Masurel
4289625348 Merged with the new codec branch 2017-08-22 18:26:09 +09:00
Paul Masurel
850f10c1fe Exposing Field 2017-08-22 18:21:35 +09:00
raphael claude
d7f9bfdfc5 fix segments sorting in log_merge_policy (#211)
bug: segments were sorted on their indices (first field in the tuples)
fix: sort on the segment size
2017-08-20 08:59:54 +09:00
Paul Masurel
d0d5db4515 Streamdict using SIMD instruction. 2017-08-19 12:03:04 +09:00
Paul Masurel
303fc7e820 Better unit test for termdict. Checking the TermInfo 2017-08-17 12:08:39 +09:00
Paul Masurel
744edb2c5c NOBUG Avoid serializing position offset when useless. Test passing 2017-08-16 14:06:00 +09:00
Paul Masurel
2d70efb7b0 Removed trait boundary on termdict 2017-08-15 14:43:05 +09:00
Paul Masurel
eb5b2ffdcc Cleanups 2017-08-15 13:57:22 +09:00
Paul Masurel
38513014d5 Reenable unit test.
Consuming CompositeWrite on Close.
2017-08-14 23:35:09 +09:00
Paul Masurel
9cb7a0f6e6 Unit tests passing 2017-08-13 19:38:25 +09:00
Paul Masurel
8d466b8a76 half way through removing FastFieldsReader 2017-08-13 18:39:45 +09:00
Paul Masurel
413d0e1719 NOBUG test passing 2017-08-13 17:57:11 +09:00
Paul Masurel
0eb3c872fd Using composite file for all of the inverted index component 2017-08-12 19:34:23 +09:00
Paul Masurel
f9203228be Using composite file in fast field. 2017-08-12 18:45:59 +09:00
Paul Masurel
8f377b92d0 introducing a field serializer 2017-08-11 18:11:32 +09:00
Paul Masurel
1e89f86267 blop 2017-08-08 13:55:09 +09:00
Paul Masurel
d1f61a50c1 issue/207 Lazily decompressing positions. 2017-08-06 20:29:21 +09:00
Dru Sellers
2bb85ed575 Minor Doc Changes (#206)
* Various small documentation tweaks

* walking through the docs

* Update lib.rs

* Update lib.rs

* Update mod.rs
2017-08-06 09:22:03 +09:00
Paul Masurel
236fa74767 Positions almost working. 2017-08-05 23:17:35 +09:00
Paul Masurel
63b35dd87b removing freq handler. 2017-08-05 18:09:19 +09:00
Paul Masurel
efb910f4e8 Added CompressedIntStream 2017-08-05 16:44:01 +09:00
Paul Masurel
aff7e64d4e test 2017-08-04 22:07:14 +09:00
Paul Masurel
92a3f3981f issue/204 trying to fix nosimd branch. test not passing 2017-08-04 21:19:18 +09:00
king6cong
447a9361d8 Remove submodule information in README as subtree is now used 2017-08-03 13:52:16 +09:00
Paul Masurel
5f59139484 NOBUG simplified code. 2017-08-02 20:49:47 +09:00
Paul Masurel
27c373d26d NOBUG Updated changelog and bumped version 2017-07-24 18:52:45 +09:00
Paul Masurel
80ae136646 issue/198 Getting living_file after getting the list of managed files. 2017-07-24 18:46:41 +09:00
Paul Masurel
52b1398702 NOBUG version 0.4.0 -> 0.4.1 2017-07-19 19:07:54 +09:00
Paul Masurel
7b9cd09a6e Closes #199. Unindexed fields are indexed as untokenized 2017-07-19 18:41:22 +09:00
Paul Masurel
4c423ad2ca Merge branch 'master' of github.com:tantivy-search/tantivy 2017-07-19 17:01:32 +09:00
Paul Masurel
9f542d5252 NOBUG Fix spelling of "encountered". (as reported by @dazzag24) 2017-07-19 16:59:50 +09:00
Paul Masurel
77d8e81ae4 issue/17 Slightly more explicit error message 2017-07-19 11:08:42 +09:00
Paul Masurel
76e07b9705 NOBUG Small fixes. 2017-07-14 18:09:54 +09:00
Paul Masurel
ea4e9fdaf1 NOBUG updated README 2017-07-14 14:09:13 +09:00
Paul Masurel
e418bee693 NOBUG Garbage collection after end merge. 2017-07-14 12:09:47 +09:00
Paul Masurel
af4f1a86bc Merge remote-tracking branch 'origin/exp/hash_intable' 2017-07-13 20:50:54 +09:00
Paul Masurel
753b639454 NOBUG splitting the per-thread memory between the table and the heap 2017-07-13 17:11:39 +09:00
Paul Masurel
5907a47547 NOBUG Added whitespaces. 2017-07-13 15:14:12 +09:00
Paul Masurel
586a6e62a2 NOBUG Added Changelog for 4.0 2017-07-13 15:06:09 +09:00
Paul Masurel
fdae0eff5a NOBUG Remove range step_by 2017-07-13 14:05:33 +09:00
Paul Masurel
6eea407f20 Removing usage of step_by 2017-06-23 17:46:39 +09:00
Paul Masurel
1ba51d4dc4 NOBUG removed using range.step_by 2017-06-22 22:10:53 +09:00
Paul Masurel
6e742d5145 NOBUG removing batch add docs 2017-06-22 11:35:22 +09:00
Paul Masurel
1843259e91 NOBUG Simplified addr definitions 2017-06-22 11:27:32 +09:00
Paul Masurel
4ebacb7297 BytesRef is now wrapping an addr 2017-06-21 22:32:05 +09:00
Paul Masurel
fb75e60c6e issue/136 Added hashmaps. 2017-06-21 15:47:55 +09:00
Paul Masurel
04b15c6c11 Merge branch 'master' into exp/hash_intable
Conflicts:
	src/datastruct/stacker/hashmap.rs
2017-06-21 11:40:49 +09:00
Paul Masurel
b05b5f5487 issue/191 Added an analyzer manager. 2017-06-20 10:02:26 +09:00
Paul Masurel
4fe96483bc fill_buffer 2017-06-14 23:32:58 +09:00
Paul Masurel
09e27740e2 Added fill_buffer in DocSet 2017-06-14 18:28:30 +09:00
Paul Masurel
e51feea574 Removed cargo fmt from travis. 2017-06-14 13:45:11 +09:00
Paul Masurel
93e7f28cc0 Added unit test 2017-06-14 10:46:06 +09:00
Paul Masurel
8875b9794a Added API to get range from fastfield 2017-06-13 23:16:50 +09:00
Paul Masurel
f26874557e Remove the concept of pipeline. Made a BoxableAnalyzer 2017-06-10 20:06:00 +09:00
Paul Masurel
a7d10b65ae Added support for Japanese. 2017-06-09 22:25:03 +09:00
Paul Masurel
e120e3b7aa issue/191 Added proper analyzer 2017-06-07 23:21:36 +09:00
Paul Masurel
90fcfb3f43 issue/188 Using murmurhash 2017-06-07 09:30:34 +09:00
Paul Masurel
e547e8abad Closes #184
Resizing the `Vec` was a bad idea, as for some stacker operations,
we may have a living reference to an object in the current heap.
2017-06-06 23:16:28 +09:00
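The reasoning above is the classic reallocation hazard: growing a `Vec` can move its buffer, invalidating any address previously taken into it. A tiny illustrative sketch (not the stacker code itself):

```rust
fn main() {
    let mut heap: Vec<u8> = Vec::with_capacity(4);
    heap.extend_from_slice(b"abcd");
    // Take the address of an object living in the current heap buffer.
    let p: *const u8 = &heap[0];
    // Growing past capacity reallocates: the buffer moves to a new address.
    heap.extend_from_slice(&[0u8; 1024]);
    // `p` may now dangle; dereferencing it would be undefined behaviour.
    // Keeping previously allocated pages alive (a paged arena) avoids this.
    println!("old address: {:p}, new buffer: {:p}", p, heap.as_ptr());
}
```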
Paul Masurel
5aa4565424 Tiny cleaning 2017-06-05 23:40:08 +09:00
Paul Masurel
3637620187 Merge branch 'master' of github.com:tantivy-search/tantivy 2017-06-02 21:03:37 +09:00
Laurentiu Nicola
a94679d74d Use four terms in the intersection bench 2017-05-31 08:31:33 +09:00
Laurentiu Nicola
a35a8638cc Comment nit 2017-05-31 08:31:33 +09:00
Paul Masurel
97a051996f issue 171. Hopefully bugfix? 2017-05-31 08:31:33 +09:00
Laurentiu Nicola
69525cb3c7 Add extra intersection test 2017-05-31 08:31:33 +09:00
Laurentiu Nicola
63867a7150 Fix document generation for posting benchmarks 2017-05-31 08:31:33 +09:00
Paul Masurel
19c073385a Better intersection and added size_hint 2017-05-31 08:31:33 +09:00
Paul Masurel
0521844e56 Format, small changes in VInt 2017-05-31 08:31:20 +09:00
Paul Masurel
8d4778f94d issue/181 BinarySerializable does not return the len + Generics over Read+Write 2017-05-31 08:31:20 +09:00
Paul Masurel
1d5464351d generic read 2017-05-31 08:31:20 +09:00
Paul Masurel
522ebdc674 made ResultExt public 2017-05-31 08:31:20 +09:00
Paul Masurel
4a805733db another hash 2017-05-30 15:36:48 +09:00
Paul Masurel
568d149db8 Merge branch 'master' into exp/hash_intable 2017-05-30 08:27:33 +09:00
Paul Masurel
4cfc9806c0 made ResultExt public 2017-05-30 08:22:17 +09:00
Paul Masurel
37042e3ccb Send and Sync impl now useless 2017-05-29 18:53:49 +09:00
Paul Masurel
b316cd337a Optimization in bitpacker 2017-05-29 18:53:49 +09:00
Paul Masurel
c04991e5ad Removed pointer in fastfield 2017-05-29 18:53:49 +09:00
Paul Masurel
c59b712eeb Added hash info in the table 2017-05-29 18:47:20 +09:00
Ashley Mannix
da61baed3b run fmt 2017-05-29 18:29:39 +09:00
Ashley Mannix
b6140d2962 drop some patch bounds 2017-05-29 18:29:39 +09:00
Ashley Mannix
6a9a71bb1b re-export ErrorKind 2017-05-29 18:29:39 +09:00
Ashley Mannix
e8fc4c77e2 fix delete error msg 2017-05-29 18:29:39 +09:00
Ashley Mannix
80837601ea remove error::* imports 2017-05-29 18:29:39 +09:00
Ashley Mannix
2b2703cf51 run cargo fmt 2017-05-29 18:29:39 +09:00
Ashley Mannix
d79018a7f8 fix build warnings 2017-05-29 18:29:39 +09:00
Ashley Mannix
d8a7c428f7 impl std error for directory errors 2017-05-29 18:29:39 +09:00
Ashley Mannix
45595234cc fix error match 2017-05-29 18:29:39 +09:00
Ashley Mannix
1bcebdd29e initial error-chain 2017-05-29 18:29:39 +09:00
Paul Masurel
ed0333a404 Optimized streamer 2017-05-28 19:58:28 +09:00
Paul Masurel
ac0b1a21eb Term as a wrapper
Small changes

Plastic
2017-05-25 23:49:54 +09:00
Paul Masurel
6bbc789d84 Fmt fix 2017-05-25 23:49:54 +09:00
Paul Masurel
87152daef3 issue/174 Added doc, and made field private 2017-05-25 23:49:54 +09:00
Paul Masurel
e0fce4782a Added documentation 2017-05-25 23:49:54 +09:00
Paul Masurel
a633c2a49a Avoid exposing common. Exposes u64 to i64 conversion instead. 2017-05-25 23:49:54 +09:00
Paul Masurel
51623d593e Avoid exposing schema from segment_reader 2017-05-25 23:49:54 +09:00
Paul Masurel
29bf740ddf Exposing the remaining API 2017-05-25 23:49:54 +09:00
Paul Masurel
511bd25a31 trailing whitespace 2017-05-25 18:17:37 +09:00
Paul Masurel
66e14ac1b1 clippy 2017-05-25 18:17:37 +09:00
Paul Masurel
09e94072ba Cargo fmt 2017-05-25 18:17:37 +09:00
Paul Masurel
6c68136d31 Reorganized code 2017-05-25 18:17:37 +09:00
Paul Masurel
aaf1b2c6b6 Reorganized code and added documentation. 2017-05-25 18:17:37 +09:00
Paul Masurel
8a6af2aefa Added unit test and bugfix 2017-05-25 18:17:37 +09:00
Paul Masurel
7a6e62976b Added stream dictionary code, merge unit test 2017-05-25 18:17:37 +09:00
Paul Masurel
2712930bd6 Added the feature 2017-05-25 18:17:37 +09:00
Paul Masurel
cb05f8c098 Prevent execution of the code in the macro doc 2017-05-22 10:55:45 +09:00
Paul Masurel
c0c9d04ca9 Added extra doc 2017-05-22 10:55:45 +09:00
Paul Masurel
7ea5e740e0 Using the $crate thing to make the macro usable in and outside tantivy 2017-05-22 10:55:45 +09:00
Paul Masurel
2afa6c372a issue/168 Make doc! macro usable outside tantivy 2017-05-22 10:55:45 +09:00
Paul Masurel
c7db8866b5 Merge branch 'facets' 2017-05-21 22:57:01 +09:00
Paul Masurel
02d992324a simplified facets. 2017-05-21 22:56:43 +09:00
Paul Masurel
4ab511ffc6 Merging 2017-05-21 22:15:02 +09:00
Paul Masurel
f318172ea4 Merge branch 'issue/162' 2017-05-21 20:04:03 +09:00
Paul Masurel
581449a824 issue/162 Docs and unit tests 2017-05-21 18:58:04 +09:00
Maciej Dziardziel
272589a381 faceting for fast numerical fields 2017-05-21 12:04:29 +03:00
Laurentiu Nicola
73d54c6379 Inline block_len 2017-05-21 10:44:49 +03:00
Paul Masurel
3e4606de5d Simplifying, and reordering the members 2017-05-21 16:31:52 +09:00
Laurentiu Nicola
020779f61b Make things faster 2017-05-20 20:56:37 +03:00
Laurentiu Nicola
835936585f Don't search whole blocks, but only the remaining part 2017-05-20 18:45:41 +03:00
Paul Masurel
bdd05e97d1 Added bench for segment postings 2017-05-20 23:38:53 +09:00
Paul Masurel
2be5f08cd6 issue/162 Added block iteration API 2017-05-20 11:46:40 +09:00
Paul Masurel
3f49d65a87 issue/162 Create block postings 2017-05-20 00:46:23 +09:00
Paul Masurel
f9baf4bcc8 Merge branch 'issue/155'
Conflicts:
	src/indexer/merger.rs
	src/indexer/segment_writer.rs
2017-05-19 20:14:36 +09:00
Paul Masurel
7ee93fbed5 Cleaning 2017-05-19 20:08:04 +09:00
Paul Masurel
57a5547ae8 Comments and cleaning up API 2017-05-19 11:20:27 +09:00
Paul Masurel
c57ab6a335 Renamed fstmap to termdict 2017-05-19 09:26:18 +09:00
Paul Masurel
02bfa9be52 Moving to termdict 2017-05-19 08:43:52 +09:00
Paul Masurel
b3f62b8acc Better API 2017-05-18 23:35:39 +09:00
Paul Masurel
2a08c247af Clippy 2017-05-18 23:20:41 +09:00
Paul Masurel
d2926b6ee0 Format 2017-05-18 23:09:20 +09:00
Paul Masurel
0272167c2e Code cleaning 2017-05-18 23:06:02 +09:00
Laurentiu Nicola
a9cf0bde16 Format code 2017-05-18 22:07:49 +09:00
Laurentiu Nicola
5a457df45d VInt encode values in IntFastFieldWriter
Closes #131
2017-05-18 22:07:49 +09:00
Paul Masurel
ca76fd5ba0 Uncommenting unit test 2017-05-18 20:41:56 +09:00
Paul Masurel
e79a316e41 Issue 155 - Trying to avoid term lookup when merging terms
+ Adds a proper Streamer interface
2017-05-18 20:12:00 +09:00
Paul Masurel
733f54d80e Making clippy happy. 2017-05-17 19:07:39 +09:00
Paul Masurel
7b2b181652 Merge branch 'master' into issue/136
Conflicts:
	src/datastruct/stacker/hashmap.rs
	src/datastruct/stacker/heap.rs
	src/datastruct/stacker/mod.rs
	src/indexer/index_writer.rs
	src/indexer/merger.rs
	src/indexer/segment_updater.rs
	src/indexer/segment_writer.rs
	src/postings/postings_writer.rs
	src/postings/recorder.rs
	src/schema/term.rs
2017-05-17 18:40:09 +09:00
Laurentiu Nicola
b3f39f2343 Remove unneeded suppressions, make clippy lints explicit 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
a13122d392 use explicit drop instead of suppression 2017-05-17 15:50:07 +09:00
Paul Masurel
113917c521 Making clippy happy.
+ Simplifying bitpacking by adding a 7 byte padding.
+ Bugfix in a unit test.
2017-05-17 15:50:07 +09:00
Laurentiu Nicola
1352b95b07 clippy: fix never_loop warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
c0538dbe9a clippy: fix mut_from_ref warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
0d5ea98132 clippy: fix inline_always warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
0404df3fd5 Fix typo in docstring 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
a67caee141 clippy: fix len_zero warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
f5fb29422a clippy: fix while_let_loop warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
4e48bbf0ea clippy: fix needless_lifetimes warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
6fea510869 clippy: fix redundant_closure warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
39958ec476 clippy: fix single_match warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
36f51e289e clippy: fix match_same_arms warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
5c83153035 clippy: fix or_fun_call warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
8e407bb314 clippy: fix needless_borrow warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
103ba6ba35 clippy: fix match_ref_pats warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
3965b26cd2 clippy: fix useless_let_if_seq warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
1cd0b378fb clippy: fix map_clone warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
92f383fa51 clippy: fix let_unit_value warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
6ae34d2a77 clippy: fix toplevel_ref_arg warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
1af1f7e0d1 clippy: fix if_let_redundant_pattern_matching warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
feec2e2620 clippy: fix needless_bool warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
3e2ad7542d clippy: fix needless_return warnings 2017-05-17 15:50:07 +09:00
Laurentiu Nicola
ac02c76b1e clippy: fix doc_markdown warnings 2017-05-17 15:50:07 +09:00
Paul Masurel
e5c7c0b8b9 Update CHANGELOG.md 2017-05-16 21:13:33 +09:00
Laurentiu Nicola
49dbe4722f Add a test for SegmentPostings::skip_len 2017-05-16 21:12:43 +09:00
Laurentiu Nicola
f64ff77424 Use an exponential search 2017-05-16 21:12:43 +09:00
Laurentiu Nicola
2bf93e9e51 Avoid rebuilding simdcomp when running tests 2017-05-16 08:37:43 +09:00
Laurentiu Nicola
3dde748b25 Make rustfmt happy 2017-05-16 00:49:05 +03:00
Laurentiu Nicola
1dabe26395 Add comment about block_len 2017-05-15 21:26:28 +03:00
Laurentiu Nicola
5590537739 Disable early exit 2017-05-15 21:18:06 +03:00
Laurentiu Nicola
ccf0f9cb2f Merge branch 'master' of github.com:tantivy-search/tantivy into issue/130 2017-05-15 18:54:16 +03:00
Laurentiu Nicola
e21913ecdc Use binary search for SegmentPostings::skip_next 2017-05-15 18:33:43 +03:00
Laurentiu Nicola
2cc826adc7 Add a bench for SegmentPostings::SkipNext 2017-05-15 18:33:43 +03:00
Laurentiu Nicola
4d90d8fc1d Move the random sampling helpers to the tests module 2017-05-15 18:33:43 +03:00
Paul Masurel
0606a8ae73 Bugfix in travis yml 2017-05-16 00:22:11 +09:00
Paul Masurel
03564214e7 Added check for rustfmt in travis 2017-05-15 22:46:43 +09:00
Paul Masurel
4c8f9742f8 format 2017-05-15 22:30:18 +09:00
Paul Masurel
a23b7a1815 Test the size of complete 0..128 block 2017-05-15 19:09:52 +09:00
Paul Masurel
6f89a86b14 Added simple search in travis CI 2017-05-15 12:10:23 +09:00
Laurentiu Nicola
b2beac1203 Check the result of wait_merging_threads 2017-05-15 08:00:25 +09:00
Paul Masurel
8cd5a2d81d Fixed logging deleted files twice 2017-05-15 00:25:49 +09:00
Paul Masurel
b26c22ada0 Merge branch 'issue/148' 2017-05-15 00:02:51 +09:00
Laurentiu Nicola
8a35259300 Avoid clone() call 2017-05-14 23:28:17 +09:00
Paul Masurel
db56167a5d Display backtrace 2017-05-14 23:28:17 +09:00
Paul Masurel
ab66ffed4e Closes #147 2017-05-14 23:28:17 +09:00
Laurentiu Nicola
e04f2f0b08 issue/148 Wait for the index writer threads to shut down in simple_search 2017-05-14 16:35:24 +03:00
Paul Masurel
7a5df33c85 issue/148 Wrapping MsQueue to drop all of its content on Drop 2017-05-14 16:25:33 +03:00
Laurentiu Nicola
ee0873dd07 Avoid clone() call 2017-05-13 16:11:58 +03:00
Paul Masurel
695c8828b8 Display backtrace 2017-05-13 18:51:38 +09:00
Paul Masurel
4ff7dc7a4f Closes #147 2017-05-13 18:46:50 +09:00
Paul Masurel
69832bfd03 NOBUG Disabling running examples in CI as it is not working. 2017-05-12 14:35:50 +09:00
Paul Masurel
ecbdd70c37 Removed the clunky linked list logic of the heap. 2017-05-12 14:01:52 +09:00
Paul Masurel
fb1b2be782 issue/136 Fix following CR 2017-05-12 13:51:09 +09:00
Paul Masurel
9cd7458978 NOBUG Hiding methods making it possible to build an incorrect Term. 2017-05-11 21:12:59 +09:00
Paul Masurel
4c4c28e2c4 Fix broken compile 2017-05-11 20:57:32 +09:00
Paul Masurel
9f9e588905 Merge branch 'master' into issue/136
Conflicts:
	src/postings/postings_writer.rs
2017-05-11 20:50:24 +09:00
Paul Masurel
6fd17e0ead Code cleaning 2017-05-11 20:47:30 +09:00
Paul Masurel
65dc5b0d83 Closes #145 2017-05-11 19:48:06 +09:00
Paul Masurel
15d15c01f8 Running examples in CI
Closes #143
2017-05-11 19:43:36 +09:00
Paul Masurel
106832a66a Make Term::with_capacity crate-public 2017-05-11 19:37:15 +09:00
Paul Masurel
477b9136b9 FIXED inconsistent serialization of Term's field.
Also:

Cleaned up the code to make sure that the logic
is only in one place.
Removed allocate_vec

Closes #141
Closes #139
Closes #142
Closes #138
2017-05-11 19:37:15 +09:00
Paul Masurel
7852d097b8 CHANGELOG 0.3.1 did not include the fix of the Field(u32) 2017-05-11 09:48:37 +09:00
Ashley Mannix
0bd56241bb pretty print meta.json 2017-05-10 20:13:53 +09:00
Paul Masurel
54ab897755 Added comment 2017-05-10 19:30:24 +09:00
Paul Masurel
1369d2d144 Quadratic probing. 2017-05-10 10:38:47 +09:00
Paul Masurel
d3f829dc8a Bugfix 2017-05-10 00:29:37 +09:00
Paul Masurel
e82ccf9627 Merge branch 'master' into issue/indexing-refactoring 2017-05-09 16:43:33 +09:00
Paul Masurel
d3d29f7f54 NOBUG Updated CHANGELOG with the serde change for 0.4.0 2017-05-09 16:42:25 +09:00
Paul Masurel
3566717979 Merge pull request #134 from tantivy-search/chore/serde-rebase
Replace rustc_serialize with serde (updated)
2017-05-09 16:38:42 +09:00
Paul Masurel
90bc3e3773 Added limitation on term dictionary saturation 2017-05-09 14:10:33 +09:00
Paul Masurel
ffb62b6835 working 2017-05-09 10:17:05 +09:00
Ashley Mannix
4f9ce91d6a update underflow test 2017-05-08 14:40:58 +10:00
Laurentiu Nicola
3c3a2fbfe8 Remove old serialization code 2017-05-08 07:36:15 +03:00
Laurentiu Nicola
0508571d1a Use the proper error type on u64 overflow 2017-05-08 07:35:33 +03:00
Laurentiu Nicola
7b733dd34f Fix i64 overflow check and merge NotJSON with NotJSONObject 2017-05-08 07:09:54 +03:00
Ashley Mannix
2c798e3147 Replace rustc_serialize with serde 2017-05-07 20:21:22 +03:00
Paul Masurel
2c13f210bc Bugfix on merging i64 fast fields 2017-05-07 15:57:29 +09:00
Paul Masurel
0dad02791c issues/65 Added comments
Closes #65
Closes #132
2017-05-06 23:09:45 +09:00
Paul Masurel
2947364ae1 issues/65 Phrase queries for untokenized fields are not tokenized. 2017-05-06 22:14:26 +09:00
Paul Masurel
05111599b3 Removed several TODOs 2017-05-05 16:08:09 +08:00
Paul Masurel
83263eabbb issues/65 Updated changelog added some doc. 2017-05-04 17:13:14 +08:00
Paul Masurel
5cb5c9a8f2 issues/65 Added i64 fast fields 2017-05-04 16:46:14 +08:00
Paul Masurel
9ab92b7739 i64 fast field working 2017-05-04 16:46:14 +08:00
Paul Masurel
962bddfbbf Merge with panics. 2017-05-04 16:46:14 +08:00
Paul Masurel
26cfe2909f FastField with different types 2017-05-04 16:46:13 +08:00
Paul Masurel
afdfb1a69b Compiling... fastfield not implemented yet 2017-05-04 16:46:13 +08:00
Paul Masurel
b26ad1d57a Added int options 2017-05-04 16:46:13 +08:00
Paul Masurel
1dbd54edbb Renamed u64options 2017-05-04 16:46:13 +08:00
Paul Masurel
deb04eb090 issue/65 Switching to u64. 2017-05-04 16:46:13 +08:00
Paul Masurel
bed34bf502 Merge branch 'issues/122' 2017-04-23 16:14:40 +08:00
Paul Masurel
95bfb71901 NOBUG Remove 256 num fields limit 2017-04-19 22:37:34 +09:00
Paul Masurel
74e10843a7 issue/120 Disabled SIMD vbyte compression for msvc 2017-04-17 22:36:32 +09:00
Paul Masurel
1b922e6d23 issue 120. Using streamvbyte codec for the vbyte part of the encoding 2017-04-16 18:49:53 +09:00
Paul Masurel
a7c6c31538 Merge commit '9d071c8d4610aa61f4b1f7dd489210415a05cfc0' as 'cpp/streamvbyte' 2017-04-16 15:22:43 +09:00
Paul Masurel
9d071c8d46 Squashed 'cpp/streamvbyte/' content from commit f38aa6b
git-subtree-dir: cpp/streamvbyte
git-subtree-split: f38aa6b6ec4c5cee9d72c94ef305e6a79a108252
2017-04-16 15:22:43 +09:00
Paul Masurel
04074f7bcb Merge pull request #119 from tantivy-search/issue/118
Using u32 for field ids
2017-04-15 13:11:22 +09:00
Paul Masurel
8a28d1643d Using u32 for field ids 2017-04-15 13:04:33 +09:00
287 changed files with 28394 additions and 106858 deletions

19
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View File

@@ -0,0 +1,19 @@
---
name: Bug report
about: Create a report to help us improve
---
**Describe the bug**
- What did you do?
- What happened?
- What was expected?
**Which version of tantivy are you using?**
If "master", ideally give the specific sha1 revision.
**To Reproduce**
If your bug is deterministic, can you give minimal code that reproduces it?
Some bugs are not deterministic. Can you describe precisely the context in which it happened?
If this is possible, can you share your code?

View File

@@ -0,0 +1,14 @@
---
name: Feature request
about: Suggest an idea for this project
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**[Optional] describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

7
.github/ISSUE_TEMPLATE/question.md vendored Normal file
View File

@@ -0,0 +1,7 @@
---
name: Question
about: Ask any question about tantivy's usage...
---
Try to be specific about your use case...

7
.gitignore vendored
View File

@@ -1,3 +1,5 @@
tantivy.iml
*.swp
target
target/debug
.vscode
@@ -5,4 +7,7 @@ target/release
Cargo.lock
benchmark
.DS_Store
cpp/simdcomp/bitpackingbenchmark
cpp/simdcomp/bitpackingbenchmark
*.bk
.idea
trace.dat

View File

@@ -1,16 +1,22 @@
# Based on the "trust" template v0.1.2
# https://github.com/japaric/trust/tree/v0.1.2
dist: trusty
language: rust
rust:
- nightly
services: docker
sudo: required
env:
global:
- CC=gcc-4.8
- CXX=g++-4.8
- CRATE_NAME=tantivy
- TRAVIS_CARGO_NIGHTLY_FEATURE=""
- secure: eC8HjTi1wgRVCsMAeXEXt8Ckr0YBSGOEnQkkW4/Nde/OZ9jJjz2nmP1ELQlDE7+czHub2QvYtDMG0parcHZDx/Kus0yvyn08y3g2rhGIiE7y8OCvQm1Mybu2D/p7enm6shXquQ6Z5KRfRq+18mHy80wy9ABMA/ukEZdvnfQ76/Een8/Lb0eHaDoXDXn3PqLVtByvSfQQ7OhS60dEScu8PWZ6/l1057P5NpdWbMExBE7Ro4zYXNhkJeGZx0nP/Bd4Jjdt1XfPzMEybV6NZ5xsTILUBFTmOOt603IsqKGov089NExqxYu5bD3K+S4MzF1Nd6VhomNPJqLDCfhlymJCUj5n5Ku4yidlhQbM4Ej9nGrBalJnhcjBjPua5tmMF2WCxP9muKn/2tIOu1/+wc0vMf9Yd3wKIkf5+FtUxCgs2O+NslWvmOMAMI/yD25m7hb4t1IwE/4Bk+GVcWJRWXbo0/m6ZUHzRzdjUY2a1qvw7C9udzdhg7gcnXwsKrSWi2NjMiIVw86l+Zim0nLpKIN41sxZHLaFRG63Ki8zQ/481LGn32awJ6i3sizKS0WD+N1DfR2qYMrwYHaMN0uR0OFXYTJkFvTFttAeUY3EKmRKAuMhmO2YRdSr4/j/G5E9HMc1gSGJj6PxgpQU7EpvxRsmoVAEJr0mszmOj9icGHep/FM=
addons:
apt:
sources:
- ubuntu-toolchain-r-test
- kalakris-cmake
packages:
- gcc-4.8
- g++-4.8
@@ -18,18 +24,57 @@ addons:
- libelf-dev
- libdw-dev
- binutils-dev
- cmake
matrix:
include:
# Android
- env: TARGET=aarch64-linux-android DISABLE_TESTS=1
#- env: TARGET=arm-linux-androideabi DISABLE_TESTS=1
#- env: TARGET=armv7-linux-androideabi DISABLE_TESTS=1
#- env: TARGET=i686-linux-android DISABLE_TESTS=1
#- env: TARGET=x86_64-linux-android DISABLE_TESTS=1
# Linux
#- env: TARGET=aarch64-unknown-linux-gnu
#- env: TARGET=i686-unknown-linux-gnu
- env: TARGET=x86_64-unknown-linux-gnu CODECOV=1
# - env: TARGET=x86_64-unknown-linux-musl CODECOV=1
# OSX
- env: TARGET=x86_64-apple-darwin
os: osx
before_install:
- set -e
- rustup self update
install:
- sh ci/install.sh
- source ~/.cargo/env || true
before_script:
- |
pip install 'travis-cargo<0.2' --user &&
export PATH=$HOME/.local/bin:$PATH
- export PATH=$HOME/.cargo/bin:$PATH
- cargo install cargo-update || echo "cargo-update already installed"
- cargo install cargo-travis || echo "cargo-travis already installed"
script:
- |
travis-cargo build &&
travis-cargo test &&
travis-cargo bench &&
travis-cargo doc
after_success:
- bash ./script/build-doc.sh
- travis-cargo doc-upload
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then travis-cargo coveralls --no-sudo --verify; fi
- if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then ./kcov/build/src/kcov --verify --coveralls-id=$TRAVIS_JOB_ID --include-path=`pwd`/src --exclude-path=`pwd`/cpp --exclude-pattern=/.cargo target/kcov target/debug/tantivy-*; fi
- bash ci/script.sh
before_deploy:
- sh ci/before_deploy.sh
cache: cargo
before_cache:
# Travis can't cache files that are not readable by "others"
- chmod -R a+r $HOME/.cargo
#branches:
# only:
# # release tags
# - /^v\d+\.\d+\.\d+.*$/
# - master
notifications:
email:
on_success: never

11
AUTHORS Normal file
View File

@@ -0,0 +1,11 @@
# This is the list of authors of tantivy for copyright purposes.
Paul Masurel
Laurentiu Nicola
Dru Sellers
Ashley Mannix
Michael J. Curry
Jason Wolfe
# As an employee of Google I am required to add Google LLC
# in the list of authors, but this project is not affiliated to Google
# in any other way.
Google LLC

View File

@@ -1,3 +1,148 @@
Tantivy 0.8.2
=====================
Fixing build for non-x86_64 platforms. (#496)
No need to update from 0.8.1 if tantivy
is building on your platform.
Tantivy 0.8.1
=====================
Hotfix of #476.
Merge was reflecting deletes before commit was passed.
Thanks @barrotsteindev for reporting the bug.
Tantivy 0.8.0
=====================
*No change in the index format*
- API Breaking change in the collector API. (@jwolfe, @fulmicoton)
- Multithreaded search (@jwolfe, @fulmicoton)
Tantivy 0.7.1
=====================
*No change in the index format*
- Bugfix: NGramTokenizer panics on non ascii chars
- Added a space usage API
Tantivy 0.7
=====================
- Skip data for doc ids and positions (@fulmicoton),
greatly improving performance
- Tantivy errors now rely on the failure crate (@drusellers)
- Added support for `AND`, `OR`, `NOT` syntax in addition to the `+`,`-` syntax
- Added a snippet generator with highlight (@vigneshsarma, @fulmicoton)
- Added a `TopFieldCollector` (@pentlander)
Tantivy 0.6.1
=========================
- Bugfix #324. GC was removing files that were still in use
- Added support for parsing AllQuery and RangeQuery via QueryParser
- AllQuery: `*`
- RangeQuery:
- Inclusive `field:[startIncl to endIncl]`
- Exclusive `field:{startExcl to endExcl}`
- Mixed `field:[startIncl to endExcl}` and vice versa
- Unbounded `field:[start to *]`, `field:[* to end]`
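As a quick illustration of the range syntax listed above, here is a minimal sketch that feeds those strings through the QueryParser. The `Index`, the indexed `year` field and the helper function are hypothetical, and the exact parser API may vary between versions.
```rust
use tantivy::query::{QueryParser, QueryParserError};
use tantivy::schema::Field;
use tantivy::Index;

// Hypothetical helper: `index` is assumed to contain an indexed u64 field `year`.
fn build_range_queries(index: &Index, year: Field) -> Result<(), QueryParserError> {
    let parser = QueryParser::for_index(index, vec![year]);
    // Inclusive on both ends.
    let _inclusive = parser.parse_query("year:[1990 to 2000]")?;
    // Mixed: inclusive lower bound, exclusive upper bound.
    let _mixed = parser.parse_query("year:[1990 to 2000}")?;
    // Unbounded upper end, and the match-all query.
    let _open_ended = parser.parse_query("year:[1990 to *]")?;
    let _all = parser.parse_query("*")?;
    Ok(())
}
```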
Tantivy 0.6
==========================
Special thanks to @drusellers and @jason-wolfe for their contributions
to this release!
- Removed C code. Tantivy is now pure Rust. (@pmasurel)
- BM25 (@pmasurel)
- Approximate field norms encoded over 1 byte. (@pmasurel)
- Compiles on stable rust (@pmasurel)
- Add &[u8] fastfield for associating arbitrary bytes to each document (@jason-wolfe) (#270)
- Completely uncompressed
- Internally: One u64 fast field for indexes, one fast field for the bytes themselves.
- Add NGram token support (@drusellers)
- Add Stopword Filter support (@drusellers)
- Add a FuzzyTermQuery (@drusellers)
- Add a RegexQuery (@drusellers)
- Various performance improvements (@pmasurel)
Tantivy 0.5.2
===========================
- bugfix #274
- bugfix #280
- bugfix #289
Tantivy 0.5.1
==========================
- bugfix #254 : tantivy failed if no documents in a segment contained a specific field.
Tantivy 0.5
==========================
- Faceting
- RangeQuery
- Configurable tokenization pipeline
- Bugfix in PhraseQuery
- Various query optimisation
- Allowing very large indexes
- 64 bits file address
- Smarter encoding of the `TermInfo` objects
Tantivy 0.4.3
==========================
- Bugfix race condition when deleting files. (#198)
Tantivy 0.4.2
==========================
- Prevent usage of AVX2 instructions (#201)
Tantivy 0.4.1
==========================
- Bugfix for non-indexed fields. (#199)
Tantivy 0.4.0
==========================
- Raise the limit of number of fields (previously 256 fields) (@fulmicoton)
- Removed u32 fields. They are replaced by u64 and i64 fields (#65) (@fulmicoton) (see the mapping sketch after this list)
- Optimized skip in SegmentPostings (#130) (@lnicola)
- Replacing rustc_serialize by serde. Kudos to @KodrAus and @lnicola
- Using error-chain (@KodrAus)
- QueryParser: (@fulmicoton)
- Explicit error returned when searching for a term that is not indexed
- Searching for an int term via the query parser was broken `(age:1)`
- Searching for a non-indexed field returns an explicit Error
- Phrase queries for non-tokenized fields are not tokenized by the query parser.
- Faster/Better indexing (@fulmicoton)
- using murmurhash2
- faster merging
- more memory-efficient fast field writer (@lnicola)
- better handling of collisions
- lesser memory usage
- Added API, most notably to iterate over ranges of terms (@fulmicoton)
- Bugfix for an issue that prevented unmapping segment files on index drop (@fulmicoton)
- Made the doc! macro public (@fulmicoton)
- Added an alternative implementation of the streaming dictionary (@fulmicoton)
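The i64 fast fields mentioned in this list sit on top of the u64 fast-field machinery through an order-preserving conversion. The sketch below illustrates the usual sign-bit-flip trick; the helper names are invented for this example and are not necessarily the ones tantivy exposes.
```rust
/// Order-preserving map from i64 to u64: flipping the sign bit sends
/// i64::MIN to 0 and i64::MAX to u64::MAX, so `<` is preserved.
fn i64_to_u64(val: i64) -> u64 {
    (val as u64) ^ (1u64 << 63)
}

/// Inverse of `i64_to_u64`.
fn u64_to_i64(val: u64) -> i64 {
    (val ^ (1u64 << 63)) as i64
}

fn main() {
    let values = [i64::min_value(), -1, 0, 1, i64::max_value()];
    let mapped: Vec<u64> = values.iter().map(|&v| i64_to_u64(v)).collect();
    // Ordering survives the mapping, so i64 range queries can be answered
    // by range scans over the underlying u64 fast field.
    assert!(mapped.windows(2).all(|w| w[0] < w[1]));
    assert!(values.iter().all(|&v| u64_to_i64(i64_to_u64(v)) == v));
    println!("{:?} -> {:?}", values, mapped);
}
```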
Tantivy 0.3.1
==========================
- Expose a method to trigger files garbage collection
Tantivy 0.3
==========================
@@ -5,7 +150,7 @@ Tantivy 0.3
Special thanks to @Kodraus @lnicola @Ameobea @manuel-woelker @celaus
for their contribution to this release.
Thanks also to everyone in tantivy gitter chat
Thanks also to everyone in tantivy gitter chat
for their advice and company :)
https://gitter.im/tantivy-search/tantivy
@@ -13,12 +158,13 @@ https://gitter.im/tantivy-search/tantivy
Warning:
Tantivy 0.3 is NOT backward compatible with tantivy 0.2
Tantivy 0.3 is NOT backward compatible with tantivy 0.2
code and index format.
You should not expect backward compatibility before
You should not expect backward compatibility before
tantivy 1.0.
New Features
------------
@@ -40,7 +186,7 @@ Thanks to @KodrAus ! (#108)
the natural ordering.
- Building binary targets for tantivy-cli (Thanks to @KodrAus)
- Misc invisible bug fixes, and code cleanup.
- Use
- Use

View File

@@ -1,11 +1,10 @@
[package]
name = "tantivy"
version = "0.3.1"
version = "0.8.2"
authors = ["Paul Masurel <paul.masurel@gmail.com>"]
build = "build.rs"
license = "MIT"
categories = ["database-implementations", "data-structures"]
description = """Tantivy is a search engine library."""
description = """Search engine library"""
documentation = "https://tantivy-search.github.io/tantivy/tantivy/index.html"
homepage = "https://github.com/tantivy-search/tantivy"
repository = "https://github.com/tantivy-search/tantivy"
@@ -13,52 +12,70 @@ readme = "README.md"
keywords = ["search", "information", "retrieval"]
[dependencies]
base64 = "0.10.0"
byteorder = "1.0"
memmap = "0.4"
lazy_static = "0.2.1"
regex = "0.2"
fst = "0.1.37"
atomicwrites = "0.1.3"
tempfile = "2.1"
rustc-serialize = "0.3"
log = "0.3.6"
combine = "2.2"
lazy_static = "1"
regex = "1.0"
fst = {version="0.3", default-features=false}
fst-regex = { version="0.2" }
lz4 = {version="1.20", optional=true}
snap = {version="0.2"}
atomicwrites = {version="0.2.2", optional=true}
tempfile = "3.0"
log = "0.4"
combine = "3"
tempdir = "0.3"
bincode = "0.5"
libc = {version = "0.2.20", optional=true}
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
num_cpus = "1.2"
itertools = "0.5.9"
lz4 = "1.20"
bit-set = "0.4.0"
time = "0.1"
uuid = { version = "0.4", features = ["v4", "rustc-serialize"] }
chan = "0.1"
version = "2"
crossbeam = "0.2"
futures = "0.1.9"
futures-cpupool = "0.1.2"
itertools = "0.8"
levenshtein_automata = {version="0.1", features=["fst_automaton"]}
bit-set = "0.5"
uuid = { version = "0.7", features = ["v4", "serde"] }
crossbeam = "0.5"
futures = "0.1"
futures-cpupool = "0.1"
owning_ref = "0.4"
stable_deref_trait = "1.0.0"
rust-stemmers = "1"
downcast = { version="0.9" }
matches = "0.1"
bitpacking = "0.5"
census = "0.1"
fnv = "1.0.6"
owned-read = "0.4"
failure = "0.1"
htmlescape = "0.3.1"
fail = "0.2"
scoped-pool = "1.0"
murmurhash32 = "0.2"
[target.'cfg(windows)'.dependencies]
winapi = "0.2"
[dev-dependencies]
rand = "0.3"
env_logger = "0.4"
[build-dependencies]
gcc = {version = "0.3", optional=true}
rand = "0.6"
maplit = "1"
[profile.release]
opt-level = 3
debug = false
lto = true
debug-assertions = false
[profile.test]
debug-assertions = true
overflow-checks = true
[features]
default = ["simdcompression"]
simdcompression = ["libc", "gcc"]
# by default no-fail is disabled. We manually enable it when running test.
default = ["mmap", "no_fail"]
mmap = ["fst/mmap", "atomicwrites"]
lz4-compression = ["lz4"]
no_fail = ["fail/no_fail"]
unstable = [] # useful for benches.
[badges]
travis-ci = { repository = "tantivy-search/tantivy" }

View File

@@ -1,4 +1,4 @@
Copyright (c) 2016 Paul Masurel
Copyright (c) 2018 by the project authors, as listed in the AUTHORS file.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

View File

@@ -1,62 +1,90 @@
![Tantivy](https://tantivy-search.github.io/logo/tantivy-logo.png)
[![Build Status](https://travis-ci.org/tantivy-search/tantivy.svg?branch=master)](https://travis-ci.org/tantivy-search/tantivy)
[![Coverage Status](https://coveralls.io/repos/github/tantivy-search/tantivy/badge.svg?branch=master&refresh1)](https://coveralls.io/github/tantivy-search/tantivy?branch=master)
[![codecov](https://codecov.io/gh/tantivy-search/tantivy/branch/master/graph/badge.svg)](https://codecov.io/gh/tantivy-search/tantivy)
[![Join the chat at https://gitter.im/tantivy-search/tantivy](https://badges.gitter.im/tantivy-search/tantivy.svg)](https://gitter.im/tantivy-search/tantivy?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Build status](https://ci.appveyor.com/api/projects/status/r7nb13kj23u8m9pj?svg=true)](https://ci.appveyor.com/project/fulmicoton/tantivy)
![beacon for google analytics](https://ga-beacon.appspot.com/UA-88834340-1/tantivy/README)
[![Build status](https://ci.appveyor.com/api/projects/status/r7nb13kj23u8m9pj/branch/master?svg=true)](https://ci.appveyor.com/project/fulmicoton/tantivy/branch/master)
[![Say Thanks!](https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg)](https://saythanks.io/to/fulmicoton)
![Tantivy](https://tantivy-search.github.io/logo/tantivy-logo.png)
[![](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/images/0)](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/links/0)
[![](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/images/1)](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/links/1)
[![](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/images/2)](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/links/2)
[![](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/images/3)](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/links/3)
[![](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/images/4)](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/links/4)
[![](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/images/5)](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/links/5)
[![](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/images/6)](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/links/6)
[![](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/images/7)](https://sourcerer.io/fame/fulmicoton/tantivy-search/tantivy/links/7)
**Tantivy** is a **full text search engine library** written in rust.
It is strongly inspired by Lucene's design.
It is closer to [Apache Lucene](https://lucene.apache.org/) than to [Elastic Search](https://www.elastic.co/products/elasticsearch) and [Apache Solr](https://lucene.apache.org/solr/) in the sense it is not
an off-the-shelf search engine server, but rather a crate that can be used
to build such a search engine.
Tantivy is, in fact, strongly inspired by Lucene's design.
# Features
- configurable indexing (optional term frequency and position indexing)
- tf-idf scoring
- Basic query language
- Phrase queries
- Full-text search
- Fast (check out the :racehorse: :sparkles: [benchmark](https://tantivy-search.github.io/bench/) :sparkles: :racehorse:)
- Tiny startup time (<10ms), perfect for command line tools
- BM25 scoring (the same as Lucene)
- Natural query language `(michael AND jackson) OR "king of pop"`
- Phrase queries search (`"michael jackson"`)
- Incremental indexing
- Multithreaded indexing (indexing English Wikipedia takes 4 minutes on my desktop)
- mmap based
- optional SIMD integer compression
- u32 fast fields (equivalent of doc values in Lucene)
- Multithreaded indexing (indexing English Wikipedia takes < 3 minutes on my desktop)
- Mmap directory
- SIMD integer compression when the platform/CPU includes the SSE2 instruction set.
- Single valued and multivalued u64 and i64 fast fields (equivalent of doc values in Lucene)
- `&[u8]` fast fields
- LZ4 compressed document store
- Range queries
- Faceted search
- Configurable indexing (optional term frequency and position indexing)
- Cheesy logo with a horse
Tantivy supports Linux, MacOS and Windows.
# Non-features
- Distributed search is out of the scope of tantivy. That being said, tantivy is meant as a
library upon which one could build a distributed search engine. Serializable/mergeable collector state, for instance,
is within the scope of tantivy.
# Supported OS and compiler
Tantivy works on stable rust (>= 1.27) and supports Linux, MacOS and Windows.
# Getting started
- [tantivy's usage example](http://fulmicoton.com/tantivy-examples/simple_search.html)
- [tantivy's simple search example](http://fulmicoton.com/tantivy-examples/simple_search.html)
- [tantivy-cli and its tutorial](https://github.com/tantivy-search/tantivy-cli).
`tantivy-cli` is an actual command line interface that makes it easy for you to create a search engine,
index documents and search via the CLI or a small server with a REST API.
It will walk you through getting a wikipedia search engine up and running in a few minutes.
- [reference doc]
- [For the last released version](https://docs.rs/tantivy/)
- [For the last master branch](https://tantivy-search.github.io/tantivy/tantivy/index.html)
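For readers who do not want to leave this page, the sketch below condenses the simple-search flow into one file. It is written from memory against a 0.8-era API: the schema is invented, and the option and collector names (`TEXT`, `STORED`, `TopDocs`) as well as the `load_searchers`/`searcher` calls are assumptions that may differ from one version to the next.
```rust
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, STORED, TEXT};
use tantivy::{doc, Index};

fn main() -> tantivy::Result<()> {
    // Hypothetical one-field schema: a stored, tokenized text field.
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT | STORED);
    let schema = schema_builder.build();

    // An in-RAM index is enough for a smoke test; the mmap feature provides
    // `Index::create_in_dir` for anything persistent.
    let index = Index::create_in_ram(schema.clone());
    let mut writer = index.writer(50_000_000)?; // indexing budget in bytes
    writer.add_document(doc!(title => "The Old Man and the Sea"));
    writer.add_document(doc!(title => "Of Mice and Men"));
    writer.commit()?;

    // Searcher acquisition and collector names vary between versions (assumption).
    index.load_searchers()?;
    let searcher = index.searcher();
    let query = QueryParser::for_index(&index, vec![title])
        .parse_query("sea")
        .expect("valid query");
    for (_score, addr) in searcher.search(&query, &TopDocs::with_limit(10))? {
        println!("{}", schema.to_json(&searcher.doc(addr)?));
    }
    Ok(())
}
```
On editions before 2018 the `doc!` macro is brought into scope with `#[macro_use] extern crate tantivy;` rather than a `use`.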
# Compiling
# Compiling
Tantivy requires Rust Nightly because it requires the features [`box_syntax`](https://doc.rust-lang.org/stable/book/box-syntax-and-patterns.html), [`optin_builtin_traits`](https://github.com/rust-lang/rfcs/blob/master/text/0019-opt-in-builtin-traits.md), and [`conservative_impl_trait`](https://github.com/rust-lang/rfcs/blob/master/text/1522-conservative-impl-trait.md).
By default, `tantivy` uses a git submodule called `simdcomp`.
After cloning the repository, you will need to initialize and update
the submodules. The project can then be built using `cargo`.
## Development
Tantivy compiles on stable rust but requires `Rust >= 1.27`.
To check out and run tests, you can simply run:
git clone git@github.com:tantivy-search/tantivy.git
cd tantivy
cargo build
## Running tests
Alternatively, if you are trying to compile `tantivy` without simd compression,
you can disable this functionality. In this case, this submodule is not required
and you can compile tantivy by using the `--no-default-features` flag.
cargo build --no-default-features
Some tests will not run with just `cargo test` because of `fail-rs`.
To run the tests exhaustively, run `./run-tests.sh`.
# Contribute
Send me an email (paul.masurel at gmail.com) if you want to contribute to tantivy.
Send me an email (paul.masurel at gmail.com) if you want to contribute to tantivy.

View File

@@ -4,11 +4,8 @@
os: Visual Studio 2015
environment:
matrix:
- channel: nightly
- channel: stable
target: x86_64-pc-windows-msvc
- channel: nightly
target: x86_64-pc-windows-gnu
msys_bits: 64
install:
- appveyor DownloadFile https://win.rustup.rs/ -FileName rustup-init.exe
@@ -21,4 +18,5 @@ install:
build: false
test_script:
- REM SET RUST_LOG=tantivy,test & cargo test --verbose
- REM SET RUST_LOG=tantivy,test & cargo test --verbose --no-default-features --features mmap -- --test-threads 1
- REM SET RUST_BACKTRACE=1 & cargo build --examples

View File

@@ -1,50 +0,0 @@
#[cfg(feature = "simdcompression")]
mod build {
extern crate gcc;
pub fn build() {
let mut config = gcc::Config::new();
config.include("./cpp/simdcomp/include")
.file("cpp/simdcomp/src/avxbitpacking.c")
.file("cpp/simdcomp/src/simdintegratedbitpacking.c")
.file("cpp/simdcomp/src/simdbitpacking.c")
.file("cpp/simdcomp/src/simdpackedsearch.c")
.file("cpp/simdcomp/src/simdcomputil.c")
.file("cpp/simdcomp/src/simdpackedselect.c")
.file("cpp/simdcomp/src/simdfor.c")
.file("cpp/simdcomp_wrapper.c");
if !cfg!(debug_assertions) {
config.opt_level(3);
if cfg!(target_env = "msvc") {
config.define("NDEBUG", None)
.flag("/Gm-")
.flag("/GS-")
.flag("/Gy")
.flag("/Oi")
.flag("/GL");
} else {
config.flag("-msse4.1")
.flag("-march=native");
}
}
config.compile("libsimdcomp.a");
// Workaround for linking static libraries built with /GL
// https://github.com/rust-lang/rust/issues/26003
if !cfg!(debug_assertions) && cfg!(target_env = "msvc") {
println!("cargo:rustc-link-lib=dylib=simdcomp");
}
}
}
#[cfg(not(feature = "simdcompression"))]
mod build {
pub fn build() {}
}
fn main() {
build::build();
}

23
ci/before_deploy.ps1 Normal file
View File

@@ -0,0 +1,23 @@
# This script takes care of packaging the build artifacts that will go in the
# release zipfile
$SRC_DIR = $PWD.Path
$STAGE = [System.Guid]::NewGuid().ToString()
Set-Location $ENV:Temp
New-Item -Type Directory -Name $STAGE
Set-Location $STAGE
$ZIP = "$SRC_DIR\$($Env:CRATE_NAME)-$($Env:APPVEYOR_REPO_TAG_NAME)-$($Env:TARGET).zip"
# TODO Update this to package the right artifacts
Copy-Item "$SRC_DIR\target\$($Env:TARGET)\release\hello.exe" '.\'
7z a "$ZIP" *
Push-AppveyorArtifact "$ZIP"
Remove-Item *.* -Force
Set-Location ..
Remove-Item $STAGE
Set-Location $SRC_DIR

33
ci/before_deploy.sh Normal file
View File

@@ -0,0 +1,33 @@
# This script takes care of building your crate and packaging it for release
set -ex
main() {
local src=$(pwd) \
stage=
case $TRAVIS_OS_NAME in
linux)
stage=$(mktemp -d)
;;
osx)
stage=$(mktemp -d -t tmp)
;;
esac
test -f Cargo.lock || cargo generate-lockfile
# TODO Update this to build the artifacts that matter to you
cross rustc --bin hello --target $TARGET --release -- -C lto
# TODO Update this to package the right artifacts
cp target/$TARGET/release/hello $stage/
cd $stage
tar czf $src/$CRATE_NAME-$TRAVIS_TAG-$TARGET.tar.gz *
cd $src
rm -rf $stage
}
main

47
ci/install.sh Normal file
View File

@@ -0,0 +1,47 @@
set -ex
main() {
local target=
if [ $TRAVIS_OS_NAME = linux ]; then
target=x86_64-unknown-linux-musl
sort=sort
else
target=x86_64-apple-darwin
sort=gsort # for `sort --sort-version`, from brew's coreutils.
fi
# Builds for iOS are done on OSX, but require the specific target to be
# installed.
case $TARGET in
aarch64-apple-ios)
rustup target install aarch64-apple-ios
;;
armv7-apple-ios)
rustup target install armv7-apple-ios
;;
armv7s-apple-ios)
rustup target install armv7s-apple-ios
;;
i386-apple-ios)
rustup target install i386-apple-ios
;;
x86_64-apple-ios)
rustup target install x86_64-apple-ios
;;
esac
# This fetches latest stable release
local tag=$(git ls-remote --tags --refs --exit-code https://github.com/japaric/cross \
| cut -d/ -f3 \
| grep -E '^v[0.1.0-9.]+$' \
| $sort --version-sort \
| tail -n1)
curl -LSfs https://japaric.github.io/trust/install.sh | \
sh -s -- \
--force \
--git japaric/cross \
--tag $tag \
--target $target
}
main

29
ci/script.sh Normal file
View File

@@ -0,0 +1,29 @@
#!/usr/bin/env bash
# This script takes care of testing your crate
set -ex
main() {
if [ ! -z $CODECOV ]; then
echo "Codecov"
cargo build --verbose && cargo coverage --verbose && bash <(curl -s https://codecov.io/bash) -s target/kcov
else
echo "Build"
cross build --target $TARGET
if [ ! -z $DISABLE_TESTS ]; then
return
fi
echo "Test"
cross test --target $TARGET --no-default-features --features mmap -- --test-threads 1
fi
for example in $(ls examples/*.rs)
do
cargo run --example $(basename $example .rs)
done
}
# we don't run the "test phase" when doing deploys
if [ -z $TRAVIS_TAG ]; then
main
fi

View File

@@ -1,9 +0,0 @@
Makefile.in
lib*
unit*
*.o
src/*.lo
src/*.o
src/.deps
src/.dirstamp
src/.libs

View File

@@ -1,11 +0,0 @@
language: c
sudo: false
compiler:
- gcc
- clang
branches:
only:
- master
script: make && ./unit

View File

@@ -1,9 +0,0 @@
Upcoming
- added missing include
- improved portability (MSVC)
- implemented C89 compatibility
Version 0.0.3 (19 May 2014)
- improved documentation
Version 0.0.2 (6 February 2014)
- added go demo
Version 0.0.1 (5 February 2014)

View File

@@ -1,27 +0,0 @@
Copyright (c) 2014--, The authors
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
* Neither the name of the {organization} nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,137 +0,0 @@
The SIMDComp library
====================
[![Build Status](https://travis-ci.org/lemire/simdcomp.png)](https://travis-ci.org/lemire/simdcomp)
A simple C library for compressing lists of integers using binary packing and SIMD instructions.
The assumption is either that you have a list of 32-bit integers where most of them are small, or a list of 32-bit integers where differences between successive integers are small. No software is able to reliably compress an array of 32-bit random numbers.
This library can decode at least 4 billion compressed integers per second on most
desktop or laptop processors. That is, it can decompress data at a rate of 15 GB/s.
This is significantly faster than generic codecs like gzip, LZO, Snappy or LZ4.
On a Skylake Intel processor, it can decode integers at a rate of 0.3 cycles per integer,
which easily translates into more than 8 billion decoded integers per second.
Contributors: Daniel Lemire, Nathan Kurz, Christoph Rupp, Anatol Belski, Nick White and others
What is it for?
-------------
This is a low-level library for fast integer compression. By design it does not define a compressed
format. It is up to the (sophisticated) user to create a compressed format.
Requirements
-------------
- Your processor should support SSE4.1 (It is supported by most Intel and AMD processors released since 2008.)
- It is possible to build the core part of the code if your processor supports SSE2 (Pentium4 or better)
- C99 compliant compiler (GCC is assumed)
- A Linux-like distribution is assumed by the makefile
For a plain C version that does not use SIMD instructions, see https://github.com/lemire/LittleIntPacker
Usage
-------
Compression works over blocks of 128 integers.
For a complete working example, see example.c (you can build it and
run it with "make example; ./example").
1) Lists of integers in random order.
```C
const uint32_t b = maxbits(datain);// computes bit width
simdpackwithoutmask(datain, buffer, b);//compressed to buffer, compressing 128 32-bit integers down to b*32 bytes
simdunpack(buffer, backbuffer, b);//uncompressed to backbuffer
```
While 128 32-bit integers are read, only b 128-bit words are written. Thus, the compression ratio is 32/b.
2) Sorted lists of integers.
We used differential coding: we store the difference between successive integers. For this purpose, we need an initial value (called offset).
```C
uint32_t offset = 0;
uint32_t b1 = simdmaxbitsd1(offset,datain); // bit width
simdpackwithoutmaskd1(offset, datain, buffer, b1);//compressing 128 32-bit integers down to b1*32 bytes
simdunpackd1(offset, buffer, backbuffer, b1);//uncompressed
```
General example for arrays of arbitrary length:
```C
int compress_decompress_demo() {
size_t k, N = 9999;
__m128i * endofbuf;
uint32_t * datain = malloc(N * sizeof(uint32_t));
uint8_t * buffer;
uint32_t * backbuffer = malloc(N * sizeof(uint32_t));
uint32_t b;
for (k = 0; k < N; ++k){ /* start with k=0, not k=1! */
datain[k] = k;
}
b = maxbits_length(datain, N);
buffer = malloc(simdpack_compressedbytes(N,b)); // allocate just enough memory
endofbuf = simdpack_length(datain, N, (__m128i *)buffer, b);
/* compressed data is stored between buffer and endofbuf using (endofbuf-buffer)*sizeof(__m128i) bytes */
/* would be safe to do : buffer = realloc(buffer,(endofbuf-(__m128i *)buffer)*sizeof(__m128i)); */
simdunpack_length((const __m128i *)buffer, N, backbuffer, b);
for (k = 0; k < N; ++k){
if(datain[k] != backbuffer[k]) {
printf("bug\n");
return -1;
}
}
return 0;
}
```
3) Frame-of-Reference
We also have frame-of-reference (FOR) functions (see simdfor.h header). They work like the bit packing
routines, but do not use differential coding so they allow faster search in some cases, at the expense
of compression.
Setup
---------
make
make test
and if you are daring:
make install
Go
--------
If you are a go user, there is a "go" folder where you will find a simple demo.
Other libraries
----------------
* Fast decoder for VByte-compressed integers https://github.com/lemire/MaskedVByte
* Fast integer compression in C using StreamVByte https://github.com/lemire/streamvbyte
* FastPFOR is a C++ research library well suited to compress unsorted arrays: https://github.com/lemire/FastPFor
* SIMDCompressionAndIntersection is a C++ research library well suited for sorted arrays (differential coding)
and computing intersections: https://github.com/lemire/SIMDCompressionAndIntersection
* TurboPFor is a C library that offers lots of interesting optimizations. Well worth checking! (GPL license) https://github.com/powturbo/TurboPFor
* Oroch is a C++ library that offers a usable API (MIT license) https://github.com/ademakov/Oroch
References
------------
* Daniel Lemire, Leonid Boytsov, Nathan Kurz, SIMD Compression and the Intersection of Sorted Integers, Software Practice & Experience 46 (6) 2016. http://arxiv.org/abs/1401.6399
* Daniel Lemire and Leonid Boytsov, Decoding billions of integers per second through vectorization, Software Practice & Experience 45 (1), 2015. http://arxiv.org/abs/1209.2137 http://onlinelibrary.wiley.com/doi/10.1002/spe.2203/abstract
* Jeff Plaisance, Nathan Kurz, Daniel Lemire, Vectorized VByte Decoding, International Symposium on Web Algorithms 2015, 2015. http://arxiv.org/abs/1503.07387
* Wayne Xin Zhao, Xudong Zhang, Daniel Lemire, Dongdong Shan, Jian-Yun Nie, Hongfei Yan, Ji-Rong Wen, A General SIMD-based Approach to Accelerating Compression Algorithms, ACM Transactions on Information Systems 33 (3), 2015. http://arxiv.org/abs/1502.01916
* T. D. Wu, Bitpacking techniques for indexing genomes: I. Hash tables, Algorithms for Molecular Biology 11 (5), 2016. http://almob.biomedcentral.com/articles/10.1186/s13015-016-0069-5

View File

@@ -1,235 +0,0 @@
/**
* This code is released under a BSD License.
*/
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include "simdcomp.h"
#ifdef _MSC_VER
# include <windows.h>
__int64 freq;
typedef __int64 time_snap_t;
static time_snap_t time_snap(void)
{
__int64 now;
QueryPerformanceCounter((LARGE_INTEGER *)&now);
return (__int64)((now*1000000)/freq);
}
# define TIME_SNAP_FMT "%I64d"
#else
# define time_snap clock
# define TIME_SNAP_FMT "%lu"
typedef clock_t time_snap_t;
#endif
void benchmarkSelect() {
uint32_t buffer[128];
uint32_t backbuffer[128];
uint32_t initial = 33;
uint32_t b;
time_snap_t S1, S2, S3;
int i;
printf("benchmarking select \n");
/* this test creates delta encoded buffers with different bits, then
* performs lower bound searches for each key */
for (b = 0; b <= 32; b++) {
uint32_t prev = initial;
uint32_t out[128];
/* initialize the buffer */
for (i = 0; i < 128; i++) {
buffer[i] = ((uint32_t)(1655765 * i )) ;
if(b < 32) buffer[i] %= (1<<b);
}
for (i = 0; i < 128; i++) {
buffer[i] = buffer[i] + prev;
prev = buffer[i];
}
for (i = 1; i < 128; i++) {
if(buffer[i] < buffer[i-1] )
buffer[i] = buffer[i-1];
}
assert(simdmaxbitsd1(initial, buffer)<=b);
for (i = 0; i < 128; i++) {
out[i] = 0; /* memset would do too */
}
/* delta-encode to 'i' bits */
simdpackwithoutmaskd1(initial, buffer, (__m128i *)out, b);
S1 = time_snap();
for (i = 0; i < 128 * 10; i++) {
uint32_t valretrieved = simdselectd1(initial, (__m128i *)out, b, (uint32_t)i % 128);
assert(valretrieved == buffer[i%128]);
}
S2 = time_snap();
for (i = 0; i < 128 * 10; i++) {
simdunpackd1(initial, (__m128i *)out, backbuffer, b);
assert(backbuffer[i % 128] == buffer[i % 128]);
}
S3 = time_snap();
printf("bit width = %d, fast select function time = " TIME_SNAP_FMT ", naive time = " TIME_SNAP_FMT " \n", b, (S2-S1), (S3-S2));
}
}
int uint32_cmp(const void *a, const void *b)
{
const uint32_t *ia = (const uint32_t *)a;
const uint32_t *ib = (const uint32_t *)b;
if(*ia < *ib)
return -1;
else if (*ia > *ib)
return 1;
return 0;
}
/* adapted from wikipedia */
int binary_search(uint32_t * A, uint32_t key, int imin, int imax)
{
int imid;
imax --;
while(imin + 1 < imax) {
imid = imin + ((imax - imin) / 2);
if (A[imid] > key) {
imax = imid;
} else if (A[imid] < key) {
imin = imid;
} else {
return imid;
}
}
return imax;
}
/* adapted from wikipedia */
int lower_bound(uint32_t * A, uint32_t key, int imin, int imax)
{
int imid;
imax --;
while(imin + 1 < imax) {
imid = imin + ((imax - imin) / 2);
if (A[imid] >= key) {
imax = imid;
} else if (A[imid] < key) {
imin = imid;
}
}
if(A[imin] >= key) return imin;
return imax;
}
void benchmarkSearch() {
uint32_t buffer[128];
uint32_t backbuffer[128];
uint32_t out[128];
uint32_t result, initial = 0;
uint32_t b, i;
time_snap_t S1, S2, S3, S4;
printf("benchmarking search \n");
/* this test creates delta encoded buffers with different bits, then
* performs lower bound searches for each key */
for (b = 0; b <= 32; b++) {
uint32_t prev = initial;
/* initialize the buffer */
for (i = 0; i < 128; i++) {
buffer[i] = ((uint32_t)rand()) ;
if(b < 32) buffer[i] %= (1<<b);
}
qsort(buffer,128, sizeof(uint32_t), uint32_cmp);
for (i = 0; i < 128; i++) {
buffer[i] = buffer[i] + prev;
prev = buffer[i];
}
for (i = 1; i < 128; i++) {
if(buffer[i] < buffer[i-1] )
buffer[i] = buffer[i-1];
}
assert(simdmaxbitsd1(initial, buffer)<=b);
for (i = 0; i < 128; i++) {
out[i] = 0; /* memset would do too */
}
/* delta-encode to 'i' bits */
simdpackwithoutmaskd1(initial, buffer, (__m128i *)out, b);
simdunpackd1(initial, (__m128i *)out, backbuffer, b);
for (i = 0; i < 128; i++) {
assert(buffer[i] == backbuffer[i]);
}
S1 = time_snap();
for (i = 0; i < 128 * 10; i++) {
int pos;
uint32_t pseudorandomkey = buffer[i%128];
__m128i vecinitial = _mm_set1_epi32(initial);
pos = simdsearchd1(&vecinitial, (__m128i *)out, b,
pseudorandomkey, &result);
if((result < pseudorandomkey) || (buffer[pos] != result)) {
printf("bug A.\n");
} else if (pos > 0) {
if(buffer[pos-1] >= pseudorandomkey)
printf("bug B.\n");
}
}
S2 = time_snap();
for (i = 0; i < 128 * 10; i++) {
int pos;
uint32_t pseudorandomkey = buffer[i%128];
simdunpackd1(initial, (__m128i *)out, backbuffer, b);
pos = lower_bound(backbuffer, pseudorandomkey, 0, 128);
result = backbuffer[pos];
if((result < pseudorandomkey) || (buffer[pos] != result)) {
printf("bug C.\n");
} else if (pos > 0) {
if(buffer[pos-1] >= pseudorandomkey)
printf("bug D.\n");
}
}
S3 = time_snap();
for (i = 0; i < 128 * 10; i++) {
int pos;
uint32_t pseudorandomkey = buffer[i%128];
pos = simdsearchwithlengthd1(initial, (__m128i *)out, b, 128,
pseudorandomkey, &result);
if((result < pseudorandomkey) || (buffer[pos] != result)) {
printf("bug A.\n");
} else if (pos > 0) {
if(buffer[pos-1] >= pseudorandomkey)
printf("bug B.\n");
}
}
S4 = time_snap();
printf("bit width = %d, fast search function time = " TIME_SNAP_FMT ", naive time = " TIME_SNAP_FMT " , fast with length time = " TIME_SNAP_FMT " \n", b, (S2-S1), (S3-S2), (S4-S3) );
}
}
int main() {
#ifdef _MSC_VER
QueryPerformanceFrequency((LARGE_INTEGER *)&freq);
#endif
benchmarkSearch();
benchmarkSelect();
return 0;
}

View File

@@ -1,205 +0,0 @@
#include <stdio.h>
#include "simdcomp.h"
#define RDTSC_START(cycles) \
do { \
register unsigned cyc_high, cyc_low; \
__asm volatile( \
"cpuid\n\t" \
"rdtsc\n\t" \
"mov %%edx, %0\n\t" \
"mov %%eax, %1\n\t" \
: "=r"(cyc_high), "=r"(cyc_low)::"%rax", "%rbx", "%rcx", "%rdx"); \
(cycles) = ((uint64_t)cyc_high << 32) | cyc_low; \
} while (0)
#define RDTSC_FINAL(cycles) \
do { \
register unsigned cyc_high, cyc_low; \
__asm volatile( \
"rdtscp\n\t" \
"mov %%edx, %0\n\t" \
"mov %%eax, %1\n\t" \
"cpuid\n\t" \
: "=r"(cyc_high), "=r"(cyc_low)::"%rax", "%rbx", "%rcx", "%rdx"); \
(cycles) = ((uint64_t)cyc_high << 32) | cyc_low; \
} while (0)
uint32_t * get_random_array_from_bit_width(uint32_t length, uint32_t bit) {
uint32_t * answer = malloc(sizeof(uint32_t) * length);
uint32_t mask = (uint32_t) ((UINT64_C(1) << bit) - 1);
uint32_t i;
for(i = 0; i < length; ++i) {
answer[i] = rand() & mask;
}
return answer;
}
uint32_t * get_random_array_from_bit_width_d1(uint32_t length, uint32_t bit) {
uint32_t * answer = malloc(sizeof(uint32_t) * length);
uint32_t mask = (uint32_t) ((UINT64_C(1) << bit) - 1);
uint32_t i;
answer[0] = rand() & mask;
for(i = 1; i < length; ++i) {
answer[i] = answer[i-1] + (rand() & mask);
}
return answer;
}
void demo128() {
const uint32_t length = 128;
uint32_t bit;
printf("# --- %s\n", __func__);
printf("# compressing %d integers\n",length);
printf("# format: bit width, pack in cycles per int, unpack in cycles per int\n");
for(bit = 1; bit <= 32; ++bit) {
uint32_t i;
uint32_t * data = get_random_array_from_bit_width(length, bit);
__m128i * buffer = malloc(length * sizeof(uint32_t));
uint32_t * backdata = malloc(length * sizeof(uint32_t));
uint32_t repeat = 500;
uint64_t min_diff;
printf("%d\t",bit);
min_diff = (uint64_t)-1;
for (i = 0; i < repeat; i++) {
uint64_t cycles_start, cycles_final, cycles_diff;
__asm volatile("" ::: /* pretend to clobber */ "memory");
RDTSC_START(cycles_start);
simdpackwithoutmask(data,buffer, bit);
RDTSC_FINAL(cycles_final);
cycles_diff = (cycles_final - cycles_start);
if (cycles_diff < min_diff) min_diff = cycles_diff;
}
printf("%.2f\t",min_diff*1.0/length);
min_diff = (uint64_t)-1;
for (i = 0; i < repeat; i++) {
uint64_t cycles_start, cycles_final, cycles_diff;
__asm volatile("" ::: /* pretend to clobber */ "memory");
RDTSC_START(cycles_start);
simdunpack(buffer, backdata,bit);
RDTSC_FINAL(cycles_final);
cycles_diff = (cycles_final - cycles_start);
if (cycles_diff < min_diff) min_diff = cycles_diff;
}
printf("%.2f\t",min_diff*1.0/length);
free(data);
free(buffer);
free(backdata);
printf("\n");
}
printf("\n\n"); /* two blank lines are required by gnuplot */
}
void demo128_d1() {
const uint32_t length = 128;
uint32_t bit;
printf("# --- %s\n", __func__);
printf("# compressing %d integers\n",length);
printf("# format: bit width, pack in cycles per int, unpack in cycles per int\n");
for(bit = 1; bit <= 32; ++bit) {
uint32_t i;
uint32_t * data = get_random_array_from_bit_width_d1(length, bit);
__m128i * buffer = malloc(length * sizeof(uint32_t));
uint32_t * backdata = malloc(length * sizeof(uint32_t));
uint32_t repeat = 500;
uint64_t min_diff;
printf("%d\t",bit);
min_diff = (uint64_t)-1;
for (i = 0; i < repeat; i++) {
uint64_t cycles_start, cycles_final, cycles_diff;
__asm volatile("" ::: /* pretend to clobber */ "memory");
RDTSC_START(cycles_start);
simdpackwithoutmaskd1(0,data,buffer, bit);
RDTSC_FINAL(cycles_final);
cycles_diff = (cycles_final - cycles_start);
if (cycles_diff < min_diff) min_diff = cycles_diff;
}
printf("%.2f\t",min_diff*1.0/length);
min_diff = (uint64_t)-1;
for (i = 0; i < repeat; i++) {
uint64_t cycles_start, cycles_final, cycles_diff;
__asm volatile("" ::: /* pretend to clobber */ "memory");
RDTSC_START(cycles_start);
simdunpackd1(0,buffer, backdata,bit);
RDTSC_FINAL(cycles_final);
cycles_diff = (cycles_final - cycles_start);
if (cycles_diff < min_diff) min_diff = cycles_diff;
}
printf("%.2f\t",min_diff*1.0/length);
free(data);
free(buffer);
free(backdata);
printf("\n");
}
printf("\n\n"); /* two blank lines are required by gnuplot */
}
#ifdef __AVX2__
void demo256() {
const uint32_t length = 256;
uint32_t bit;
printf("# --- %s\n", __func__);
printf("# compressing %d integers\n",length);
printf("# format: bit width, pack in cycles per int, unpack in cycles per int\n");
for(bit = 1; bit <= 32; ++bit) {
uint32_t i;
uint32_t * data = get_random_array_from_bit_width(length, bit);
__m256i * buffer = malloc(length * sizeof(uint32_t));
uint32_t * backdata = malloc(length * sizeof(uint32_t));
uint32_t repeat = 500;
uint64_t min_diff;
printf("%d\t",bit);
min_diff = (uint64_t)-1;
for (i = 0; i < repeat; i++) {
uint64_t cycles_start, cycles_final, cycles_diff;
__asm volatile("" ::: /* pretend to clobber */ "memory");
RDTSC_START(cycles_start);
avxpackwithoutmask(data,buffer, bit);
RDTSC_FINAL(cycles_final);
cycles_diff = (cycles_final - cycles_start);
if (cycles_diff < min_diff) min_diff = cycles_diff;
}
printf("%.2f\t",min_diff*1.0/length);
min_diff = (uint64_t)-1;
for (i = 0; i < repeat; i++) {
uint64_t cycles_start, cycles_final, cycles_diff;
__asm volatile("" ::: /* pretend to clobber */ "memory");
RDTSC_START(cycles_start);
avxunpack(buffer, backdata,bit);
RDTSC_FINAL(cycles_final);
cycles_diff = (cycles_final - cycles_start);
if (cycles_diff < min_diff) min_diff = cycles_diff;
}
printf("%.2f\t",min_diff*1.0/length);
free(data);
free(buffer);
free(backdata);
printf("\n");
}
printf("\n\n"); /* two blank lines are required by gnuplot */
}
#endif /* avx 2 */
int main() {
demo128();
demo128_d1();
#ifdef __AVX2__
demo256();
#endif
return 0;
}

View File

@@ -1,195 +0,0 @@
/* Type "make example" to build this example program. */
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include "simdcomp.h"
/**
We provide several different code examples.
**/
/* very simple test to illustrate a simple application */
int compress_decompress_demo() {
size_t k, N = 9999;
__m128i * endofbuf;
int howmanybytes;
float compratio;
uint32_t * datain = malloc(N * sizeof(uint32_t));
uint8_t * buffer;
uint32_t * backbuffer = malloc(N * sizeof(uint32_t));
uint32_t b;
printf("== simple test\n");
for (k = 0; k < N; ++k) { /* start with k=0, not k=1! */
datain[k] = k;
}
b = maxbits_length(datain, N);
buffer = malloc(simdpack_compressedbytes(N,b));
endofbuf = simdpack_length(datain, N, (__m128i *)buffer, b);
howmanybytes = (endofbuf-(__m128i *)buffer)*sizeof(__m128i); /* number of compressed bytes */
compratio = N*sizeof(uint32_t) * 1.0 / howmanybytes;
/* endofbuf points to the end of the compressed data */
buffer = realloc(buffer,(endofbuf-(__m128i *)buffer)*sizeof(__m128i)); /* optional but safe. */
printf("Compressed %d integers down to %d bytes (comp. ratio = %f).\n",(int)N,howmanybytes,compratio);
/* in actual applications b must be stored and retrieved: caller is responsible for that. */
simdunpack_length((const __m128i *)buffer, N, backbuffer, b); /* will return a pointer to endofbuf */
for (k = 0; k < N; ++k) {
if(datain[k] != backbuffer[k]) {
printf("bug at %lu \n",(unsigned long)k);
return -1;
}
}
printf("Code works!\n");
free(datain);
free(buffer);
free(backbuffer);
return 0;
}
/* compresses data from datain to buffer, returns how many bytes written
used below in simple_demo */
size_t compress(uint32_t * datain, size_t length, uint8_t * buffer) {
uint32_t offset;
uint8_t * initout;
size_t k;
if(length/SIMDBlockSize*SIMDBlockSize != length) {
printf("Data length should be a multiple of %i \n",SIMDBlockSize);
}
offset = 0;
initout = buffer;
for(k = 0; k < length / SIMDBlockSize; ++k) {
uint32_t b = simdmaxbitsd1(offset,
datain + k * SIMDBlockSize);
*buffer++ = b;
simdpackwithoutmaskd1(offset, datain + k * SIMDBlockSize, (__m128i *) buffer,
b);
offset = datain[k * SIMDBlockSize + SIMDBlockSize - 1];
buffer += b * sizeof(__m128i);
}
return buffer - initout;
}
/* Another illustration ... */
void simple_demo() {
size_t REPEAT = 10, gap;
size_t N = 1000 * SIMDBlockSize;/* SIMDBlockSize is 128 */
uint32_t * datain = malloc(N * sizeof(uint32_t));
size_t compsize;
clock_t start, end;
uint8_t * buffer = malloc(N * sizeof(uint32_t) + N / SIMDBlockSize); /* output buffer */
uint32_t * backbuffer = malloc(SIMDBlockSize * sizeof(uint32_t));
printf("== simple demo\n");
for (gap = 1; gap <= 243; gap *= 3) {
size_t k, repeat;
uint32_t offset = 0;
uint32_t bogus = 0;
double numberofseconds;
printf("\n");
printf(" gap = %lu \n", (unsigned long) gap);
datain[0] = 0;
for (k = 1; k < N; ++k)
datain[k] = datain[k-1] + ( rand() % (gap + 1) );
compsize = compress(datain,N,buffer);
printf("compression ratio = %f \n", (N * sizeof(uint32_t))/ (compsize * 1.0 ));
start = clock();
for(repeat = 0; repeat < REPEAT; ++repeat) {
uint8_t * decbuffer = buffer;
for (k = 0; k * SIMDBlockSize < N; ++k) {
uint8_t b = *decbuffer++;
simdunpackd1(offset, (__m128i *) decbuffer, backbuffer, b);
/* do something here with backbuffer */
bogus += backbuffer[3];
decbuffer += b * sizeof(__m128i);
offset = backbuffer[SIMDBlockSize - 1];
}
}
end = clock();
numberofseconds = (end-start)/(double)CLOCKS_PER_SEC;
printf("decoding speed in million of integers per second %f \n",N*REPEAT/(numberofseconds*1000.0*1000.0));
start = clock();
for(repeat = 0; repeat < REPEAT; ++repeat) {
uint8_t * decbuffer = buffer;
for (k = 0; k * SIMDBlockSize < N; ++k) {
memcpy(backbuffer,decbuffer+k*SIMDBlockSize,SIMDBlockSize*sizeof(uint32_t));
bogus += backbuffer[3] - backbuffer[100];
}
}
end = clock();
numberofseconds = (end-start)/(double)CLOCKS_PER_SEC;
printf("memcpy speed in million of integers per second %f \n",N*REPEAT/(numberofseconds*1000.0*1000.0));
printf("ignore me %i \n",bogus);
printf("All tests are in CPU cache. Avoid out-of-cache decoding in applications.\n");
}
free(buffer);
free(datain);
free(backbuffer);
}
/* Used below in more_sophisticated_demo ... */
size_t varying_bit_width_compress(uint32_t * datain, size_t length, uint8_t * buffer) {
uint8_t * initout;
size_t k;
if(length/SIMDBlockSize*SIMDBlockSize != length) {
printf("Data length should be a multiple of %i \n",SIMDBlockSize);
}
initout = buffer;
for(k = 0; k < length / SIMDBlockSize; ++k) {
uint32_t b = maxbits(datain);
*buffer++ = b;
simdpackwithoutmask(datain, (__m128i *)buffer, b);
datain += SIMDBlockSize;
buffer += b * sizeof(__m128i);
}
return buffer - initout;
}
/* Here we compress the data in blocks of 128 integers with varying bit width */
int varying_bit_width_demo() {
size_t nn = 128 * 2;
uint32_t * datainn = malloc(nn * sizeof(uint32_t));
uint8_t * buffern = malloc(nn * sizeof(uint32_t) + nn / SIMDBlockSize);
uint8_t * initbuffern = buffern;
uint32_t * backbuffern = malloc(nn * sizeof(uint32_t));
size_t k, compsize;
printf("== varying bit-width demo\n");
for(k=0; k<nn; ++k) {
datainn[k] = rand() % (k + 1);
}
compsize = varying_bit_width_compress(datainn,nn,buffern);
printf("encoded size: %u (original size: %u)\n", (unsigned)compsize,
(unsigned)(nn * sizeof(uint32_t)));
for (k = 0; k * SIMDBlockSize < nn; ++k) {
uint32_t b = *buffern;
buffern++;
simdunpack((const __m128i *)buffern, backbuffern + k * SIMDBlockSize, b);
buffern += b * sizeof(__m128i);
}
for (k = 0; k < nn; ++k) {
if(backbuffern[k] != datainn[k]) {
printf("bug\n");
return -1;
}
}
printf("Code works!\n");
free(datainn);
free(initbuffern);
free(backbuffern);
return 0;
}
int main() {
if(compress_decompress_demo() != 0) return -1;
if(varying_bit_width_demo() != 0) return -1;
simple_demo();
return 0;
}

View File

@@ -1,13 +0,0 @@
Simple Go demo
==============
Setup
======
Start by installing the simdcomp library (make && make install).
Then type:
go run test.go

View File

@@ -1,71 +0,0 @@
/////////
// This particular file is in the public domain.
// Author: Daniel Lemire
////////
package main
/*
#cgo LDFLAGS: -lsimdcomp
#include <simdcomp.h>
*/
import "C"
import "fmt"
//////////
// For this demo, we pack and unpack blocks of 128 integers
/////////
func main() {
// I am going to use C types. An alternative might be to use unsafe.Pointer calls, see http://bit.ly/1ndw3W3
// this is our original data
var data [128]C.uint32_t
for i := C.uint32_t(0); i < C.uint32_t(128); i++ {
data[i] = i
}
////////////
// We first pack without differential coding
///////////
// computing how many bits per int. is needed
b := C.maxbits(&data[0])
ratio := 32.0/float64(b)
fmt.Println("Bit width ", b)
fmt.Println(fmt.Sprintf("Compression ratio %f ", ratio))
// we are now going to create a buffer to receive the packed data (each __m128i uses 128 bits)
out := make([] C.__m128i,b)
C.simdpackwithoutmask( &data[0],&out[0],b);
var recovereddata [128]C.uint32_t
C.simdunpack(&out[0],&recovereddata[0],b)
for i := 0; i < 128; i++ {
if data[i] != recovereddata[i] {
fmt.Println("Bug ")
return
}
}
///////////
// Next, we use differential coding
//////////
offset := C.uint32_t(0) // if you pack data from K to K + 128, offset should be the value at K-1. When K = 0, choose a default
b1 := C.simdmaxbitsd1(offset,&data[0])
ratio1 := 32.0/float64(b1)
fmt.Println("Bit width ", b1)
fmt.Println(fmt.Sprintf("Compression ratio %f ", ratio1))
// we are now going to create a buffer to receive the packed data (each __m128i uses 128 bits)
out = make([] C.__m128i,b1)
C.simdpackwithoutmaskd1(offset, &data[0],&out[0],b1);
C.simdunpackd1(offset,&out[0],&recovereddata[0],b1)
for i := 0; i < 128; i++ {
if data[i] != recovereddata[i] {
fmt.Println("Bug ")
return
}
}
fmt.Println("test succesful.")
}

View File

@@ -1,40 +0,0 @@
/**
* This code is released under a BSD License.
*/
#ifndef INCLUDE_AVXBITPACKING_H_
#define INCLUDE_AVXBITPACKING_H_
#ifdef __AVX2__
#include "portability.h"
/* AVX2 is required */
#include <immintrin.h>
/* for memset */
#include <string.h>
#include "simdcomputil.h"
enum{ AVXBlockSize = 256};
/* max integer logarithm over a range of AVXBlockSize integers (256 integers) */
uint32_t avxmaxbits(const uint32_t * begin);
/* reads 256 values from "in", writes "bit" 256-bit vectors to "out" */
void avxpack(const uint32_t * in,__m256i * out, const uint32_t bit);
/* reads 256 values from "in", writes "bit" 256-bit vectors to "out" */
void avxpackwithoutmask(const uint32_t * in,__m256i * out, const uint32_t bit);
/* reads "bit" 256-bit vectors from "in", writes 256 values to "out" */
void avxunpack(const __m256i * in,uint32_t * out, const uint32_t bit);
#endif /* __AVX2__ */
#endif /* INCLUDE_AVXBITPACKING_H_ */
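/*
 * A minimal usage sketch, assuming an AVX2 build and the declarations above;
 * the helper name is illustrative. It packs one block of AVXBlockSize (256)
 * integers and unpacks it again; the caller must remember the bit width b.
 */
#ifdef __AVX2__
static void avx_block_roundtrip_sketch(const uint32_t in[256], uint32_t out[256]) {
    __m256i buffer[32];                 /* enough room for bit widths up to 32 */
    uint32_t b = avxmaxbits(in);        /* bit width needed for this block     */
    avxpackwithoutmask(in, buffer, b);  /* 256 values -> b 256-bit vectors     */
    avxunpack(buffer, out, b);          /* b 256-bit vectors -> 256 values     */
}
#endif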

View File

@@ -1,81 +0,0 @@
/**
* This code is released under a BSD License.
*/
#ifndef SIMDBITCOMPAT_H_
#define SIMDBITCOMPAT_H_
#include <iso646.h> /* mostly for Microsoft compilers */
#include <string.h>
#if SIMDCOMP_DEBUG
# define SIMDCOMP_ALWAYS_INLINE inline
# define SIMDCOMP_NEVER_INLINE
# define SIMDCOMP_PURE
#else
# if defined(__GNUC__)
# if __GNUC__ >= 3
# define SIMDCOMP_ALWAYS_INLINE inline __attribute__((always_inline))
# define SIMDCOMP_NEVER_INLINE __attribute__((noinline))
# define SIMDCOMP_PURE __attribute__((pure))
# else
# define SIMDCOMP_ALWAYS_INLINE inline
# define SIMDCOMP_NEVER_INLINE
# define SIMDCOMP_PURE
# endif
# elif defined(_MSC_VER)
# define SIMDCOMP_ALWAYS_INLINE __forceinline
# define SIMDCOMP_NEVER_INLINE
# define SIMDCOMP_PURE
# else
# if __has_attribute(always_inline)
# define SIMDCOMP_ALWAYS_INLINE inline __attribute__((always_inline))
# else
# define SIMDCOMP_ALWAYS_INLINE inline
# endif
# if __has_attribute(noinline)
# define SIMDCOMP_NEVER_INLINE __attribute__((noinline))
# else
# define SIMDCOMP_NEVER_INLINE
# endif
# if __has_attribute(pure)
# define SIMDCOMP_PURE __attribute__((pure))
# else
# define SIMDCOMP_PURE
# endif
# endif
#endif
#if defined(_MSC_VER) && _MSC_VER < 1600
typedef unsigned int uint32_t;
typedef unsigned char uint8_t;
typedef signed char int8_t;
#else
#include <stdint.h> /* part of Visual Studio 2010 and better, others likely anyway */
#endif
#if defined(_MSC_VER)
#define SIMDCOMP_ALIGNED(x) __declspec(align(x))
#else
#if defined(__GNUC__)
#define SIMDCOMP_ALIGNED(x) __attribute__ ((aligned(x)))
#endif
#endif
#if defined(_MSC_VER)
# include <intrin.h>
/* 64-bit needs extending */
# define SIMDCOMP_CTZ(result, mask) do { \
unsigned long index; \
if (!_BitScanForward(&(index), (mask))) { \
(result) = 32U; \
} else { \
(result) = (uint32_t)(index); \
} \
} while (0)
#else
# define SIMDCOMP_CTZ(result, mask) \
result = __builtin_ctz(mask)
#endif
#endif /* SIMDBITCOMPAT_H_ */
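/*
 * A small sketch of how the SIMDCOMP_CTZ macro above might be used; the helper
 * name is illustrative. The MSVC branch returns 32 for a zero mask, while the
 * __builtin_ctz branch assumes the mask is nonzero.
 */
static uint32_t simdcomp_first_set_bit_sketch(uint32_t mask) {
    uint32_t result;
    SIMDCOMP_CTZ(result, mask); /* index of the lowest set bit, e.g. 0x8 -> 3 */
    return result;
}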

View File

@@ -1,72 +0,0 @@
/**
* This code is released under a BSD License.
*/
#ifndef SIMDBITPACKING_H_
#define SIMDBITPACKING_H_
#include "portability.h"
/* SSE2 is required */
#include <emmintrin.h>
/* for memset */
#include <string.h>
#include "simdcomputil.h"
/***
* Please see example.c for various examples on how to make good use
* of these functions.
*/
/* reads 128 values from "in", writes "bit" 128-bit vectors to "out".
* The input values are masked so that only the least significant "bit" bits are used. */
void simdpack(const uint32_t * in,__m128i * out, const uint32_t bit);
/* reads 128 values from "in", writes "bit" 128-bit vectors to "out".
* The input values are assumed to be less than 1<<bit. */
void simdpackwithoutmask(const uint32_t * in,__m128i * out, const uint32_t bit);
/* reads "bit" 128-bit vectors from "in", writes 128 values to "out" */
void simdunpack(const __m128i * in,uint32_t * out, const uint32_t bit);
/* how many compressed bytes are needed to compress length integers using a bit width of bit with
the simdpack_length function. */
int simdpack_compressedbytes(int length, const uint32_t bit);
/* like simdpack, but supports an undetermined number of inputs.
* This is useful if you need to pack an array of integers whose length is not a multiple of 128.
* Returns a pointer to the (advanced) compressed array. Compressed data is stored in the memory location between
the provided (out) pointer and the returned pointer. */
__m128i * simdpack_length(const uint32_t * in, size_t length, __m128i * out, const uint32_t bit);
/* like simdunpack, but supports an undetermined number of inputs.
* This is useful if you need to unpack an array of integers whose length is not a multiple of 128.
* Returns a pointer to the (advanced) compressed array. The read compressed data is between the provided
(in) pointer and the returned pointer. */
const __m128i * simdunpack_length(const __m128i * in, size_t length, uint32_t * out, const uint32_t bit);
/* like simdpack, but supports an undetermined small number of inputs. This is useful if you need to pack less
than 128 integers.
* Note that this function is much slower.
* Returns a pointer to the (advanced) compressed array. Compressed data is stored in the memory location
between the provided (out) pointer and the returned pointer. */
__m128i * simdpack_shortlength(const uint32_t * in, int length, __m128i * out, const uint32_t bit);
/* like simdunpack, but supports an undetermined small number of inputs. This is useful if you need to unpack less
than 128 integers.
* Note that this function is much slower.
* Returns a pointer to the (advanced) compressed array. The read compressed data is between the provided (in)
pointer and the returned pointer. */
const __m128i * simdunpack_shortlength(const __m128i * in, int length, uint32_t * out, const uint32_t bit);
/* given a block of 128 packed values, this function sets the value at index "index" to "value" */
void simdfastset(__m128i * in128, uint32_t b, uint32_t value, size_t index);
#endif /* SIMDBITPACKING_H_ */
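/*
 * A minimal usage sketch, assuming the declarations above plus maxbits_length
 * from simdcomputil.h and <stdlib.h>; the helper name is illustrative. It packs
 * an array whose length need not be a multiple of 128 and unpacks it again;
 * the caller is responsible for remembering the bit width b.
 */
static void pack_unpack_length_sketch(const uint32_t *in, size_t n, uint32_t *out) {
    uint32_t b = maxbits_length(in, (uint32_t)n);        /* bit width for the whole array */
    uint8_t *buf = malloc(simdpack_compressedbytes((int)n, b));
    simdpack_length(in, n, (__m128i *)buf, b);           /* compress n integers           */
    simdunpack_length((const __m128i *)buf, n, out, b);  /* out now equals in             */
    free(buf);
}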

View File

@@ -1,22 +0,0 @@
/**
* This code is released under a BSD License.
*/
#ifndef SIMDCOMP_H_
#define SIMDCOMP_H_
#ifdef __cplusplus
extern "C" {
#endif
#include "simdbitpacking.h"
#include "simdcomputil.h"
#include "simdfor.h"
#include "simdintegratedbitpacking.h"
#include "avxbitpacking.h"
#ifdef __cplusplus
} // extern "C"
#endif
#endif

View File

@@ -1,54 +0,0 @@
/**
* This code is released under a BSD License.
*/
#ifndef SIMDCOMPUTIL_H_
#define SIMDCOMPUTIL_H_
#include "portability.h"
/* SSE2 is required */
#include <emmintrin.h>
/* returns the integer logarithm of v (bit width) */
uint32_t bits(const uint32_t v);
/* max integer logarithm over a range of SIMDBlockSize integers (128 integers) */
uint32_t maxbits(const uint32_t * begin);
/* same as maxbits, but we specify the number of integers */
uint32_t maxbits_length(const uint32_t * in,uint32_t length);
enum{ SIMDBlockSize = 128};
/* computes (quickly) the minimal value of 128 values */
uint32_t simdmin(const uint32_t * in);
/* computes (quickly) the minimal value of the specified number of values */
uint32_t simdmin_length(const uint32_t * in, uint32_t length);
#ifdef __SSE4_1__
/* computes (quickly) the minimal and maximal value of the specified number of values */
void simdmaxmin_length(const uint32_t * in, uint32_t length, uint32_t * getmin, uint32_t * getmax);
/* computes (quickly) the minimal and maximal value of the 128 values */
void simdmaxmin(const uint32_t * in, uint32_t * getmin, uint32_t * getmax);
#endif
/* like maxbit over 128 integers (SIMDBlockSize) with provided initial value
and using differential coding */
uint32_t simdmaxbitsd1(uint32_t initvalue, const uint32_t * in);
/* like simdmaxbitsd1, but calculates maxbits over |length| integers
with provided initial value. |length| can be any arbitrary value. */
uint32_t simdmaxbitsd1_length(uint32_t initvalue, const uint32_t * in,
uint32_t length);
#endif /* SIMDCOMPUTIL_H_ */
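/*
 * A short sketch contrasting the two bit-width helpers above, assuming
 * <stdio.h>; the helper name is illustrative. maxbits looks at the raw values,
 * simdmaxbitsd1 looks at successive differences, which is why differential
 * coding pays off on sorted data: starting from an initial value of 0, the
 * delta width is never larger than the raw width for sorted input.
 */
static void bit_width_sketch(const uint32_t block[128]) {
    uint32_t raw = maxbits(block);            /* bit width of the largest value */
    uint32_t delta = simdmaxbitsd1(0, block); /* bit width of the largest delta */
    printf("raw: %u bits, differential: %u bits\n", raw, delta);
}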

View File

@@ -1,72 +0,0 @@
/**
* This code is released under a BSD License.
*/
#ifndef INCLUDE_SIMDFOR_H_
#define INCLUDE_SIMDFOR_H_
#include "portability.h"
/* SSE2 is required */
#include <emmintrin.h>
#include "simdcomputil.h"
#include "simdbitpacking.h"
#ifdef __cplusplus
extern "C" {
#endif
/* reads 128 values from "in", writes "bit" 128-bit vectors to "out" */
void simdpackFOR(uint32_t initvalue, const uint32_t * in,__m128i * out, const uint32_t bit);
/* reads "bit" 128-bit vectors from "in", writes 128 values to "out" */
void simdunpackFOR(uint32_t initvalue, const __m128i * in,uint32_t * out, const uint32_t bit);
/* how many compressed bytes are needed to compress length integers using a bit width of bit with
the simdpackFOR_length function. */
int simdpackFOR_compressedbytes(int length, const uint32_t bit);
/* like simdpackFOR, but supports an undetermined number of inputs.
This is useful if you need to pack less than 128 integers. Note that this function is much slower.
Compressed data is stored in the memory location between
the provided (out) pointer and the returned pointer. */
__m128i * simdpackFOR_length(uint32_t initvalue, const uint32_t * in, int length, __m128i * out, const uint32_t bit);
/* like simdunpackFOR, but supports an undetermined number of inputs.
This is useful if you need to unpack less than 128 integers. Note that this function is much slower.
The read compressed data is between the provided
(in) pointer and the returned pointer. */
const __m128i * simdunpackFOR_length(uint32_t initvalue, const __m128i * in, int length, uint32_t * out, const uint32_t bit);
/* returns the value stored at the specified "slot".
* */
uint32_t simdselectFOR(uint32_t initvalue, const __m128i *in, uint32_t bit,
int slot);
/* given a block of 128 packed values, this function sets the value at index "index" to "value" */
void simdfastsetFOR(uint32_t initvalue, __m128i * in, uint32_t bit, uint32_t value, size_t index);
/* searches "bit" 128-bit vectors from "in" (= length<=128 encoded integers) for the first encoded uint32 value
* which is >= |key|, and returns its position. It is assumed that the values
* stored are in sorted order.
* The encoded key is stored in "*presult".
* Only the first |length| decoded integers are considered; the others are ignored.
* If no value is larger than or equal to the key, length is returned.
* Length should be no larger than 128. */
int simdsearchwithlengthFOR(uint32_t initvalue, const __m128i *in, uint32_t bit,
int length, uint32_t key, uint32_t *presult);
#ifdef __cplusplus
} // extern "C"
#endif
#endif /* INCLUDE_SIMDFOR_H_ */
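/*
 * A minimal sketch of packing with a frame of reference (FOR), assuming the
 * declarations above plus simdmaxmin and bits from simdcomputil.h (SSE4.1),
 * with 16-byte aligned in/out as in the unit tests; the helper name is
 * illustrative. The block is stored as offsets from its minimum, so the bit
 * width depends on the spread of the values rather than their magnitude.
 */
#ifdef __SSE4_1__
static void for_block_roundtrip_sketch(const uint32_t in[128], uint32_t out[128]) {
    __m128i buffer[32];                /* enough room for bit widths up to 32 */
    uint32_t lo, hi, b;
    simdmaxmin(in, &lo, &hi);          /* minimum and maximum of the block    */
    b = bits(hi - lo);                 /* bit width of the largest offset     */
    simdpackFOR(lo, in, buffer, b);
    simdunpackFOR(lo, buffer, out, b); /* out now equals in                   */
}
#endif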

View File

@@ -1,98 +0,0 @@
/**
* This code is released under a BSD License.
*/
#ifndef SIMD_INTEGRATED_BITPACKING_H
#define SIMD_INTEGRATED_BITPACKING_H
#include "portability.h"
/* SSE2 is required */
#include <emmintrin.h>
#include "simdcomputil.h"
#include "simdbitpacking.h"
#ifdef __cplusplus
extern "C" {
#endif
/* reads 128 values from "in", writes "bit" 128-bit vectors to "out"
integer values should be in sorted order (for best results).
The differences are masked so that only the least significant "bit" bits are used. */
void simdpackd1(uint32_t initvalue, const uint32_t * in,__m128i * out, const uint32_t bit);
/* reads 128 values from "in", writes "bit" 128-bit vectors to "out"
integer values should be in sorted order (for best results).
The difference values are assumed to be less than 1<<bit. */
void simdpackwithoutmaskd1(uint32_t initvalue, const uint32_t * in,__m128i * out, const uint32_t bit);
/* reads "bit" 128-bit vectors from "in", writes 128 values to "out" */
void simdunpackd1(uint32_t initvalue, const __m128i * in,uint32_t * out, const uint32_t bit);
/* searches "bit" 128-bit vectors from "in" (= 128 encoded integers) for the first encoded uint32 value
* which is >= |key|, and returns its position. It is assumed that the values
* stored are in sorted order.
* The encoded key is stored in "*presult". If no value is larger or equal to the key,
* 128 is returned. The pointer initOffset is a pointer to the last four values decoded
* (when starting out, this can be a zero vector or initialized with _mm_set1_epi32(init)),
* and the vector gets updated.
**/
int
simdsearchd1(__m128i * initOffset, const __m128i *in, uint32_t bit,
uint32_t key, uint32_t *presult);
/* searches "bit" 128-bit vectors from "in" (= length<=128 encoded integers) for the first encoded uint32 value
* which is >= |key|, and returns its position. It is assumed that the values
* stored are in sorted order.
* The encoded key is stored in "*presult".
* Only the first |length| decoded integers are considered; the others are ignored.
* If no value is larger than or equal to the key, length is returned.
* Length should be no larger than 128. */
int simdsearchwithlengthd1(uint32_t initvalue, const __m128i *in, uint32_t bit,
int length, uint32_t key, uint32_t *presult);
/* returns the value stored at the specified "slot".
* */
uint32_t simdselectd1(uint32_t initvalue, const __m128i *in, uint32_t bit,
int slot);
/* given a block of 128 packed values, this function sets the value at index "index" to "value",
* you must somehow know the previous value.
* Because of differential coding, all following values are incremented by the offset between this new
* value and the old value...
* This function is useful if you want to modify the last value.
*/
void simdfastsetd1fromprevious( __m128i * in, uint32_t bit, uint32_t previousvalue, uint32_t value, size_t index);
/* given a block of 128 packed values, this function sets the value at index "index" to "value",
* This function computes the previous value if needed.
* Because of differential coding, all following values are incremented by the offset between this new
* value and the old value...
* This function is useful if you want to modify the last value.
*/
void simdfastsetd1(uint32_t initvalue, __m128i * in, uint32_t bit, uint32_t value, size_t index);
/*Simply scan the data
* The pointer initOffset is a pointer to the last four values decoded
* (when starting out, this can be a zero vector or initialized with _mm_set1_epi32(init);),
* and the vector gets updated.
* */
void
simdscand1(__m128i * initOffset, const __m128i *in, uint32_t bit);
#ifdef __cplusplus
} // extern "C"
#endif
#endif
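/*
 * A minimal sketch of the differential-coding round trip, assuming the
 * declarations above plus simdmaxbitsd1 from simdcomputil.h; the helper name
 * is illustrative. For sorted data, each 128-integer block is packed against
 * "offset", the last value of the previous block (or 0 for the first block).
 */
static void d1_block_roundtrip_sketch(uint32_t offset, const uint32_t in[128], uint32_t out[128]) {
    __m128i buffer[32];                     /* enough room for bit widths up to 32 */
    uint32_t b = simdmaxbitsd1(offset, in); /* bit width of the largest delta      */
    simdpackwithoutmaskd1(offset, in, buffer, b);
    simdunpackd1(offset, buffer, out, b);   /* out now equals in                   */
}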

View File

@@ -1,79 +0,0 @@
# minimalist makefile
.SUFFIXES:
#
.SUFFIXES: .cpp .o .c .h
ifeq ($(DEBUG),1)
CFLAGS = -fPIC -std=c89 -ggdb -msse4.1 -march=native -Wall -Wextra -Wshadow -fsanitize=undefined -fno-omit-frame-pointer -fsanitize=address
else
CFLAGS = -fPIC -std=c89 -O3 -msse4.1 -march=native -Wall -Wextra -Wshadow
endif # debug
LDFLAGS = -shared
LIBNAME=libsimdcomp.so.0.0.3
all: unit unit_chars bitpackingbenchmark $(LIBNAME)
test:
./unit
./unit_chars
install: $(OBJECTS)
cp $(LIBNAME) /usr/local/lib
ln -s /usr/local/lib/$(LIBNAME) /usr/local/lib/libsimdcomp.so
ldconfig
cp $(HEADERS) /usr/local/include
HEADERS=./include/simdbitpacking.h ./include/simdcomputil.h ./include/simdintegratedbitpacking.h ./include/simdcomp.h ./include/simdfor.h ./include/avxbitpacking.h
uninstall:
for h in $(HEADERS) ; do rm /usr/local/$$h; done
rm /usr/local/lib/$(LIBNAME)
rm /usr/local/lib/libsimdcomp.so
ldconfig
OBJECTS= simdbitpacking.o simdintegratedbitpacking.o simdcomputil.o \
simdpackedsearch.o simdpackedselect.o simdfor.o avxbitpacking.o
$(LIBNAME): $(OBJECTS)
$(CC) $(CFLAGS) -o $(LIBNAME) $(OBJECTS) $(LDFLAGS)
avxbitpacking.o: ./src/avxbitpacking.c $(HEADERS)
$(CC) $(CFLAGS) -c ./src/avxbitpacking.c -Iinclude
simdfor.o: ./src/simdfor.c $(HEADERS)
$(CC) $(CFLAGS) -c ./src/simdfor.c -Iinclude
simdcomputil.o: ./src/simdcomputil.c $(HEADERS)
$(CC) $(CFLAGS) -c ./src/simdcomputil.c -Iinclude
simdbitpacking.o: ./src/simdbitpacking.c $(HEADERS)
$(CC) $(CFLAGS) -c ./src/simdbitpacking.c -Iinclude
simdintegratedbitpacking.o: ./src/simdintegratedbitpacking.c $(HEADERS)
$(CC) $(CFLAGS) -c ./src/simdintegratedbitpacking.c -Iinclude
simdpackedsearch.o: ./src/simdpackedsearch.c $(HEADERS)
$(CC) $(CFLAGS) -c ./src/simdpackedsearch.c -Iinclude
simdpackedselect.o: ./src/simdpackedselect.c $(HEADERS)
$(CC) $(CFLAGS) -c ./src/simdpackedselect.c -Iinclude
example: ./example.c $(HEADERS) $(OBJECTS)
$(CC) $(CFLAGS) -o example ./example.c -Iinclude $(OBJECTS)
unit: ./tests/unit.c $(HEADERS) $(OBJECTS)
$(CC) $(CFLAGS) -o unit ./tests/unit.c -Iinclude $(OBJECTS)
bitpackingbenchmark: ./benchmarks/bitpackingbenchmark.c $(HEADERS) $(OBJECTS)
$(CC) $(CFLAGS) -o bitpackingbenchmark ./benchmarks/bitpackingbenchmark.c -Iinclude $(OBJECTS)
benchmark: ./benchmarks/benchmark.c $(HEADERS) $(OBJECTS)
$(CC) $(CFLAGS) -o benchmark ./benchmarks/benchmark.c -Iinclude $(OBJECTS)
dynunit: ./tests/unit.c $(HEADERS) $(LIBNAME)
$(CC) $(CFLAGS) -o dynunit ./tests/unit.c -Iinclude -lsimdcomp
unit_chars: ./tests/unit_chars.c $(HEADERS) $(OBJECTS)
$(CC) $(CFLAGS) -o unit_chars ./tests/unit_chars.c -Iinclude $(OBJECTS)
clean:
rm -f unit *.o $(LIBNAME) example benchmark bitpackingbenchmark dynunit unit_chars

View File

@@ -1,104 +0,0 @@
!IFNDEF MACHINE
!IF "$(PROCESSOR_ARCHITECTURE)"=="AMD64"
MACHINE=x64
!ELSE
MACHINE=x86
!ENDIF
!ENDIF
!IFNDEF DEBUG
DEBUG=no
!ENDIF
!IFNDEF CC
CC=cl.exe
!ENDIF
!IFNDEF AR
AR=lib.exe
!ENDIF
!IFNDEF LINK
LINK=link.exe
!ENDIF
!IFNDEF PGO
PGO=no
!ENDIF
!IFNDEF PGI
PGI=no
!ENDIF
INC = /Iinclude
!IF "$(DEBUG)"=="yes"
CFLAGS = /nologo /MDd /LDd /Od /Zi /D_DEBUG /RTC1 /W3 /GS /Gm
ARFLAGS = /nologo
LDFLAGS = /nologo /debug /nodefaultlib:msvcrt
!ELSE
CFLAGS = /nologo /MD /O2 /Zi /DNDEBUG /W3 /Gm- /GS /Gy /Oi /GL /MP
ARFLAGS = /nologo /LTCG
LDFLAGS = /nologo /LTCG /DYNAMICBASE /incremental:no /debug /opt:ref,icf
!ENDIF
!IF "$(PGI)"=="yes"
LDFLAGS = $(LDFLAGS) /ltcg:pgi
!ENDIF
!IF "$(PGO)"=="yes"
LDFLAGS = $(LDFLAGS) /ltcg:pgo
!ENDIF
LIB_OBJS = simdbitpacking.obj simdintegratedbitpacking.obj simdcomputil.obj \
simdpackedsearch.obj simdpackedselect.obj simdfor.obj
all: lib dll dynunit unit_chars example benchmark
# need some good use case scenario to train the instrumented build
@if "$(PGI)"=="yes" echo Running PGO training
@if "$(PGI)"=="yes" benchmark.exe >nul 2>&1
@if "$(PGI)"=="yes" example.exe >nul 2>&1
$(LIB_OBJS):
$(CC) $(INC) $(CFLAGS) /c src/simdbitpacking.c src/simdintegratedbitpacking.c src/simdcomputil.c \
src/simdpackedsearch.c src/simdpackedselect.c src/simdfor.c
lib: $(LIB_OBJS)
$(AR) $(ARFLAGS) /OUT:simdcomp_a.lib $(LIB_OBJS)
dll: $(LIB_OBJS)
$(LINK) /DLL $(LDFLAGS) /OUT:simdcomp.dll /IMPLIB:simdcomp.lib /DEF:simdcomp.def $(LIB_OBJS)
unit: lib
$(CC) $(INC) $(CFLAGS) /c src/unit.c
$(LINK) $(LDFLAGS) /OUT:unit.exe unit.obj simdcomp_a.lib
dynunit: dll
$(CC) $(INC) $(CFLAGS) /c src/unit.c
$(LINK) $(LDFLAGS) /OUT:unit.exe unit.obj simdcomp.lib
unit_chars: lib
$(CC) $(INC) $(CFLAGS) /c src/unit_chars.c
$(LINK) $(LDFLAGS) /OUT:unit_chars.exe unit_chars.obj simdcomp.lib
example: lib
$(CC) $(INC) $(CFLAGS) /c example.c
$(LINK) $(LDFLAGS) /OUT:example.exe example.obj simdcomp.lib
benchmark: lib
$(CC) $(INC) $(CFLAGS) /c src/benchmark.c
$(LINK) $(LDFLAGS) /OUT:benchmark.exe benchmark.obj simdcomp.lib
clean:
del /Q *.obj
del /Q *.lib
del /Q *.exe
del /Q *.dll
del /Q *.pgc
del /Q *.pgd
del /Q *.pdb

View File

@@ -1,16 +0,0 @@
{
"name": "simdcomp",
"version": "0.0.3",
"repo": "lemire/simdcomp",
"description": "A simple C library for compressing lists of integers",
"license": "BSD-3-Clause",
"src": [
"src/simdbitpacking.c",
"src/simdcomputil.c",
"src/simdintegratedbitpacking.c",
"include/simdbitpacking.h",
"include/simdcomp.h",
"include/simdcomputil.h",
"include/simdintegratedbitpacking.h"
]
}

View File

@@ -1,182 +0,0 @@
#!/usr/bin/env python
import sys
def howmany(bit):
""" how many values are we going to pack? """
return 256
def howmanywords(bit):
return (howmany(bit) * bit + 255)/256
def howmanybytes(bit):
return howmanywords(bit) * 16
print("""
/** code generated by avxpacking.py starts here **/
""")
print("""typedef void (*avxpackblockfnc)(const uint32_t * pin, __m256i * compressed);""")
print("""typedef void (*avxunpackblockfnc)(const __m256i * compressed, uint32_t * pout);""")
def plurial(number):
if(number != 1):
return "s"
else :
return ""
print("")
print("static void avxpackblock0(const uint32_t * pin, __m256i * compressed) {");
print(" (void)compressed;");
print(" (void) pin; /* we consumed {0} 32-bit integer{1} */ ".format(howmany(0),plurial(howmany(0))));
print("}");
print("")
for bit in range(1,33):
print("")
print("/* we are going to pack {0} {1}-bit values, touching {2} 256-bit words, using {3} bytes */ ".format(howmany(bit),bit,howmanywords(bit),howmanybytes(bit)))
print("static void avxpackblock{0}(const uint32_t * pin, __m256i * compressed) {{".format(bit));
print(" const __m256i * in = (const __m256i *) pin;");
print(" /* we are going to touch {0} 256-bit word{1} */ ".format(howmanywords(bit),plurial(howmanywords(bit))));
if(howmanywords(bit) == 1):
print(" __m256i w0;")
else:
print(" __m256i w0, w1;")
if( (bit & (bit-1)) != 0) : print(" __m256i tmp; /* used to store inputs at word boundary */")
oldword = 0
for j in range(howmany(bit)/8):
firstword = j * bit / 32
if(firstword > oldword):
print(" _mm256_storeu_si256(compressed + {0}, w{1});".format(oldword,oldword%2))
oldword = firstword
secondword = (j * bit + bit - 1)/32
firstshift = (j*bit) % 32
if( firstword == secondword):
if(firstshift == 0):
print(" w{0} = _mm256_lddqu_si256 (in + {1});".format(firstword%2,j))
else:
print(" w{0} = _mm256_or_si256(w{0},_mm256_slli_epi32(_mm256_lddqu_si256 (in + {1}) , {2}));".format(firstword%2,j,firstshift))
else:
print(" tmp = _mm256_lddqu_si256 (in + {0});".format(j))
print(" w{0} = _mm256_or_si256(w{0},_mm256_slli_epi32(tmp , {2}));".format(firstword%2,j,firstshift))
secondshift = 32-firstshift
print(" w{0} = _mm256_srli_epi32(tmp,{2});".format(secondword%2,j,secondshift))
print(" _mm256_storeu_si256(compressed + {0}, w{1});".format(secondword,secondword%2))
print("}");
print("")
print("")
print("static void avxpackblockmask0(const uint32_t * pin, __m256i * compressed) {");
print(" (void)compressed;");
print(" (void) pin; /* we consumed {0} 32-bit integer{1} */ ".format(howmany(0),plurial(howmany(0))));
print("}");
print("")
for bit in range(1,33):
print("")
print("/* we are going to pack {0} {1}-bit values, touching {2} 256-bit words, using {3} bytes */ ".format(howmany(bit),bit,howmanywords(bit),howmanybytes(bit)))
print("static void avxpackblockmask{0}(const uint32_t * pin, __m256i * compressed) {{".format(bit));
print(" /* we are going to touch {0} 256-bit word{1} */ ".format(howmanywords(bit),plurial(howmanywords(bit))));
if(howmanywords(bit) == 1):
print(" __m256i w0;")
else:
print(" __m256i w0, w1;")
print(" const __m256i * in = (const __m256i *) pin;");
if(bit < 32): print(" const __m256i mask = _mm256_set1_epi32({0});".format((1<<bit)-1));
def maskfnc(x):
if(bit == 32): return x
return " _mm256_and_si256 ( mask, {0}) ".format(x)
if( (bit & (bit-1)) != 0) : print(" __m256i tmp; /* used to store inputs at word boundary */")
oldword = 0
for j in range(howmany(bit)/8):
firstword = j * bit / 32
if(firstword > oldword):
print(" _mm256_storeu_si256(compressed + {0}, w{1});".format(oldword,oldword%2))
oldword = firstword
secondword = (j * bit + bit - 1)/32
firstshift = (j*bit) % 32
loadstr = maskfnc(" _mm256_lddqu_si256 (in + {0}) ".format(j))
if( firstword == secondword):
if(firstshift == 0):
print(" w{0} = {1};".format(firstword%2,loadstr))
else:
print(" w{0} = _mm256_or_si256(w{0},_mm256_slli_epi32({1} , {2}));".format(firstword%2,loadstr,firstshift))
else:
print(" tmp = {0};".format(loadstr))
print(" w{0} = _mm256_or_si256(w{0},_mm256_slli_epi32(tmp , {2}));".format(firstword%2,j,firstshift))
secondshift = 32-firstshift
print(" w{0} = _mm256_srli_epi32(tmp,{2});".format(secondword%2,j,secondshift))
print(" _mm256_storeu_si256(compressed + {0}, w{1});".format(secondword,secondword%2))
print("}");
print("")
print("static void avxunpackblock0(const __m256i * compressed, uint32_t * pout) {");
print(" (void) compressed;");
print(" memset(pout,0,{0});".format(howmany(0)));
print("}");
print("")
for bit in range(1,33):
print("")
print("/* we packed {0} {1}-bit values, touching {2} 256-bit words, using {3} bytes */ ".format(howmany(bit),bit,howmanywords(bit),howmanybytes(bit)))
print("static void avxunpackblock{0}(const __m256i * compressed, uint32_t * pout) {{".format(bit));
print(" /* we are going to access {0} 256-bit word{1} */ ".format(howmanywords(bit),plurial(howmanywords(bit))));
if(howmanywords(bit) == 1):
print(" __m256i w0;")
else:
print(" __m256i w0, w1;")
print(" __m256i * out = (__m256i *) pout;");
if(bit < 32): print(" const __m256i mask = _mm256_set1_epi32({0});".format((1<<bit)-1));
maskstr = " _mm256_and_si256 ( mask, {0}) "
if (bit == 32) : maskstr = " {0} " # no need
oldword = 0
print(" w0 = _mm256_lddqu_si256 (compressed);")
for j in range(howmany(bit)/8):
firstword = j * bit / 32
secondword = (j * bit + bit - 1)/32
if(secondword > oldword):
print(" w{0} = _mm256_lddqu_si256 (compressed + {1});".format(secondword%2,secondword))
oldword = secondword
firstshift = (j*bit) % 32
firstshiftstr = "_mm256_srli_epi32( w{0} , "+str(firstshift)+") "
if(firstshift == 0):
firstshiftstr =" w{0} " # no need
wfirst = firstshiftstr.format(firstword%2)
if( firstword == secondword):
if(firstshift + bit != 32):
wfirst = maskstr.format(wfirst)
print(" _mm256_storeu_si256(out + {0}, {1});".format(j,wfirst))
else:
secondshift = (32-firstshift)
wsecond = "_mm256_slli_epi32( w{0} , {1} ) ".format((firstword+1)%2,secondshift)
wfirstorsecond = " _mm256_or_si256 ({0},{1}) ".format(wfirst,wsecond)
wfirstorsecond = maskstr.format(wfirstorsecond)
print(" _mm256_storeu_si256(out + {0},\n {1});".format(j,wfirstorsecond))
print("}");
print("")
print("static avxpackblockfnc avxfuncPackArr[] = {")
for bit in range(0,32):
print("&avxpackblock{0},".format(bit))
print("&avxpackblock32")
print("};")
print("static avxpackblockfnc avxfuncPackMaskArr[] = {")
for bit in range(0,32):
print("&avxpackblockmask{0},".format(bit))
print("&avxpackblockmask32")
print("};")
print("static avxunpackblockfnc avxfuncUnpackArr[] = {")
for bit in range(0,32):
print("&avxunpackblock{0},".format(bit))
print("&avxunpackblock32")
print("};")
print("/** code generated by avxpacking.py ends here **/")

View File

@@ -1,152 +0,0 @@
#!/usr/bin/env python3
from math import ceil
print("""
/**
* Frame-of-reference (FOR) bit packing and unpacking kernels (generated code).
*
*/
""");
def mask(bit):
return str((1 << bit) - 1)
for length in [32]:
print("""
static __m128i iunpackFOR0(__m128i initOffset, const __m128i * _in , uint32_t * _out) {
__m128i *out = (__m128i*)(_out);
int i;
(void) _in;
for (i = 0; i < 8; ++i) {
_mm_store_si128(out++, initOffset);
_mm_store_si128(out++, initOffset);
_mm_store_si128(out++, initOffset);
_mm_store_si128(out++, initOffset);
}
return initOffset;
}
""")
print("""
static void ipackFOR0(__m128i initOffset , const uint32_t * _in , __m128i * out ) {
(void) initOffset;
(void) _in;
(void) out;
}
""")
for bit in range(1,33):
offsetVar = " initOffset";
print("""
static void ipackFOR"""+str(bit)+"""(__m128i """+offsetVar+""", const uint32_t * _in, __m128i * out) {
const __m128i *in = (const __m128i*)(_in);
__m128i OutReg;
""");
if (bit != 32):
print(" __m128i CurrIn = _mm_load_si128(in);");
print(" __m128i InReg = _mm_sub_epi32(CurrIn, initOffset);");
else:
print(" __m128i InReg = _mm_load_si128(in);");
print(" (void) initOffset;");
inwordpointer = 0
valuecounter = 0
for k in range(ceil((length * bit) / 32)):
if(valuecounter == length): break
for x in range(inwordpointer,32,bit):
if(x!=0) :
print(" OutReg = _mm_or_si128(OutReg, _mm_slli_epi32(InReg, " + str(x) + "));");
else:
print(" OutReg = InReg; ");
if((x+bit>=32) ):
while(inwordpointer<32):
inwordpointer += bit
print(" _mm_store_si128(out, OutReg);");
print("");
if(valuecounter + 1 < length):
print(" ++out;")
inwordpointer -= 32;
if(inwordpointer>0):
print(" OutReg = _mm_srli_epi32(InReg, " + str(bit) + " - " + str(inwordpointer) + ");");
if(valuecounter + 1 < length):
print(" ++in;")
if (bit != 32):
print(" CurrIn = _mm_load_si128(in);");
print(" InReg = _mm_sub_epi32(CurrIn, initOffset);");
else:
print(" InReg = _mm_load_si128(in);");
print("");
valuecounter = valuecounter + 1
if(valuecounter == length): break
assert(valuecounter == length)
print("\n}\n\n""")
for bit in range(1,32):
offsetVar = " initOffset";
print("""\n
static __m128i iunpackFOR"""+str(bit)+"""(__m128i """+offsetVar+""", const __m128i* in, uint32_t * _out) {
""");
print(""" __m128i* out = (__m128i*)(_out);
__m128i InReg = _mm_load_si128(in);
__m128i OutReg;
__m128i tmp;
const __m128i mask = _mm_set1_epi32((1U<<"""+str(bit)+""")-1);
""");
MainText = "";
MainText += "\n";
inwordpointer = 0
valuecounter = 0
for k in range(ceil((length * bit) / 32)):
for x in range(inwordpointer,32,bit):
if(valuecounter == length): break
if (x > 0):
MainText += " tmp = _mm_srli_epi32(InReg," + str(x) +");\n";
else:
MainText += " tmp = InReg;\n";
if(x+bit<32):
MainText += " OutReg = _mm_and_si128(tmp, mask);\n";
else:
MainText += " OutReg = tmp;\n";
if((x+bit>=32) ):
while(inwordpointer<32):
inwordpointer += bit
if(valuecounter + 1 < length):
MainText += " ++in;"
MainText += " InReg = _mm_load_si128(in);\n";
inwordpointer -= 32;
if(inwordpointer>0):
MainText += " OutReg = _mm_or_si128(OutReg, _mm_and_si128(_mm_slli_epi32(InReg, " + str(bit) + "-" + str(inwordpointer) + "), mask));\n\n";
if (bit != 32):
MainText += " OutReg = _mm_add_epi32(OutReg, initOffset);\n";
MainText += " _mm_store_si128(out++, OutReg);\n\n";
MainText += "";
valuecounter = valuecounter + 1
if(valuecounter == length): break
assert(valuecounter == length)
print(MainText)
print(" return initOffset;");
print("\n}\n\n")
print("""
static __m128i iunpackFOR32(__m128i initvalue , const __m128i* in, uint32_t * _out) {
__m128i * mout = (__m128i *)_out;
__m128i invec;
size_t k;
for(k = 0; k < 128/4; ++k) {
invec = _mm_load_si128(in++);
_mm_store_si128(mout++, invec);
}
return invec;
}
""")

View File

@@ -1,40 +0,0 @@
EXPORTS
simdpack
simdpackwithoutmask
simdunpack
bits
maxbits
maxbits_length
simdmin
simdmin_length
simdmaxmin
simdmaxmin_length
simdmaxbitsd1
simdmaxbitsd1_length
simdpackd1
simdpackwithoutmaskd1
simdunpackd1
simdsearchd1
simdsearchwithlengthd1
simdselectd1
simdpackFOR
simdselectFOR
simdsearchwithlengthFOR
simdunpackFOR
simdmin_length
simdmaxmin
simdmaxmin_length
simdpack_length
simdpackFOR_length
simdunpackFOR_length
simdpack_shortlength
simdfastsetFOR
simdfastset
simdfastsetd1
simdunpack_length
simdunpack_shortlength
simdsearchwithlengthFOR
simdscand1
simdfastsetd1fromprevious
simdfastsetd1

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,234 +0,0 @@
/**
* This code is released under a BSD License.
*/
#include "simdcomputil.h"
#ifdef __SSE4_1__
#include <smmintrin.h>
#endif
#include <assert.h>
#define Delta(curr, prev) \
_mm_sub_epi32(curr, \
_mm_or_si128(_mm_slli_si128(curr, 4), _mm_srli_si128(prev, 12)))
/* returns the integer logarithm of v (bit width) */
uint32_t bits(const uint32_t v) {
#ifdef _MSC_VER
unsigned long answer;
if (v == 0) {
return 0;
}
_BitScanReverse(&answer, v);
return answer + 1;
#else
return v == 0 ? 0 : 32 - __builtin_clz(v); /* assume GCC-like compiler if not microsoft */
#endif
}
static uint32_t maxbitas32int(const __m128i accumulator) {
const __m128i _tmp1 = _mm_or_si128(_mm_srli_si128(accumulator, 8), accumulator); /* (A,B,C,D) or (0,0,A,B) = (A,B, C or A, D or B) */
const __m128i _tmp2 = _mm_or_si128(_mm_srli_si128(_tmp1, 4), _tmp1); /* the lowest lane now holds the OR of all four lanes */
uint32_t ans = _mm_cvtsi128_si32(_tmp2);
return bits(ans);
}
SIMDCOMP_PURE uint32_t maxbits(const uint32_t * begin) {
const __m128i* pin = (const __m128i*)(begin);
__m128i accumulator = _mm_loadu_si128(pin);
uint32_t k = 1;
for(; 4*k < SIMDBlockSize; ++k) {
__m128i newvec = _mm_loadu_si128(pin+k);
accumulator = _mm_or_si128(accumulator,newvec);
}
return maxbitas32int(accumulator);
}
static uint32_t orasint(const __m128i accumulator) {
const __m128i _tmp1 = _mm_or_si128(_mm_srli_si128(accumulator, 8), accumulator); /* (A,B,C,D) or (0,0,A,B) = (A,B, C or A, D or B) */
const __m128i _tmp2 = _mm_or_si128(_mm_srli_si128(_tmp1, 4), _tmp1); /* the lowest lane now holds the OR of all four lanes */
return _mm_cvtsi128_si32(_tmp2);
}
#ifdef __SSE4_1__
static uint32_t minasint(const __m128i accumulator) {
const __m128i _tmp1 = _mm_min_epu32(_mm_srli_si128(accumulator, 8), accumulator); /* pairwise minimum; only the low lanes matter */
const __m128i _tmp2 = _mm_min_epu32(_mm_srli_si128(_tmp1, 4), _tmp1); /* the lowest lane now holds the minimum of all four lanes */
return _mm_cvtsi128_si32(_tmp2);
}
static uint32_t maxasint(const __m128i accumulator) {
const __m128i _tmp1 = _mm_max_epu32(_mm_srli_si128(accumulator, 8), accumulator); /* pairwise maximum; only the low lanes matter */
const __m128i _tmp2 = _mm_max_epu32(_mm_srli_si128(_tmp1, 4), _tmp1); /* the lowest lane now holds the maximum of all four lanes */
return _mm_cvtsi128_si32(_tmp2);
}
uint32_t simdmin(const uint32_t * in) {
const __m128i* pin = (const __m128i*)(in);
__m128i accumulator = _mm_loadu_si128(pin);
uint32_t k = 1;
for(; 4*k < SIMDBlockSize; ++k) {
__m128i newvec = _mm_loadu_si128(pin+k);
accumulator = _mm_min_epu32(accumulator,newvec);
}
return minasint(accumulator);
}
void simdmaxmin(const uint32_t * in, uint32_t * getmin, uint32_t * getmax) {
const __m128i* pin = (const __m128i*)(in);
__m128i minaccumulator = _mm_loadu_si128(pin);
__m128i maxaccumulator = minaccumulator;
uint32_t k = 1;
for(; 4*k < SIMDBlockSize; ++k) {
__m128i newvec = _mm_loadu_si128(pin+k);
minaccumulator = _mm_min_epu32(minaccumulator,newvec);
maxaccumulator = _mm_max_epu32(maxaccumulator,newvec);
}
*getmin = minasint(minaccumulator);
*getmax = maxasint(maxaccumulator);
}
uint32_t simdmin_length(const uint32_t * in, uint32_t length) {
uint32_t currentmin = 0xFFFFFFFF;
uint32_t lengthdividedby4 = length / 4;
uint32_t offset = lengthdividedby4 * 4;
uint32_t k;
if (lengthdividedby4 > 0) {
const __m128i* pin = (const __m128i*)(in);
__m128i accumulator = _mm_loadu_si128(pin);
k = 1;
for(; 4*k < lengthdividedby4 * 4; ++k) {
__m128i newvec = _mm_loadu_si128(pin+k);
accumulator = _mm_min_epu32(accumulator,newvec);
}
currentmin = minasint(accumulator);
}
for (k = offset; k < length; ++k)
if (in[k] < currentmin)
currentmin = in[k];
return currentmin;
}
void simdmaxmin_length(const uint32_t * in, uint32_t length, uint32_t * getmin, uint32_t * getmax) {
uint32_t lengthdividedby4 = length / 4;
uint32_t offset = lengthdividedby4 * 4;
uint32_t k;
*getmin = 0xFFFFFFFF;
*getmax = 0;
if (lengthdividedby4 > 0) {
const __m128i* pin = (const __m128i*)(in);
__m128i minaccumulator = _mm_loadu_si128(pin);
__m128i maxaccumulator = minaccumulator;
k = 1;
for(; 4*k < lengthdividedby4 * 4; ++k) {
__m128i newvec = _mm_loadu_si128(pin+k);
minaccumulator = _mm_min_epu32(minaccumulator,newvec);
maxaccumulator = _mm_max_epu32(maxaccumulator,newvec);
}
*getmin = minasint(minaccumulator);
*getmax = maxasint(maxaccumulator);
}
for (k = offset; k < length; ++k) {
if (in[k] < *getmin)
*getmin = in[k];
if (in[k] > *getmax)
*getmax = in[k];
}
}
#endif
SIMDCOMP_PURE uint32_t maxbits_length(const uint32_t * in,uint32_t length) {
uint32_t k;
uint32_t lengthdividedby4 = length / 4;
uint32_t offset = lengthdividedby4 * 4;
uint32_t bigxor = 0;
if(lengthdividedby4 > 0) {
const __m128i* pin = (const __m128i*)(in);
__m128i accumulator = _mm_loadu_si128(pin);
k = 1;
for(; 4*k < 4*lengthdividedby4; ++k) {
__m128i newvec = _mm_loadu_si128(pin+k);
accumulator = _mm_or_si128(accumulator,newvec);
}
bigxor = orasint(accumulator);
}
for(k = offset; k < length; ++k)
bigxor |= in[k];
return bits(bigxor);
}
/* maxbit over 128 integers (SIMDBlockSize) with provided initial value */
uint32_t simdmaxbitsd1(uint32_t initvalue, const uint32_t * in) {
__m128i initoffset = _mm_set1_epi32 (initvalue);
const __m128i* pin = (const __m128i*)(in);
__m128i newvec = _mm_loadu_si128(pin);
__m128i accumulator = Delta(newvec , initoffset);
__m128i oldvec = newvec;
uint32_t k = 1;
for(; 4*k < SIMDBlockSize; ++k) {
newvec = _mm_loadu_si128(pin+k);
accumulator = _mm_or_si128(accumulator,Delta(newvec , oldvec));
oldvec = newvec;
}
initoffset = oldvec;
return maxbitas32int(accumulator);
}
/* maxbit over |length| integers with provided initial value */
uint32_t simdmaxbitsd1_length(uint32_t initvalue, const uint32_t * in,
uint32_t length) {
__m128i newvec;
__m128i oldvec;
__m128i initoffset;
__m128i accumulator;
const __m128i *pin;
uint32_t tmparray[4];
uint32_t k = 1;
uint32_t acc;
assert(length > 0);
pin = (const __m128i *)(in);
initoffset = _mm_set1_epi32(initvalue);
switch (length) {
case 1:
newvec = _mm_set1_epi32(in[0]);
break;
case 2:
newvec = _mm_setr_epi32(in[0], in[1], in[1], in[1]);
break;
case 3:
newvec = _mm_setr_epi32(in[0], in[1], in[2], in[2]);
break;
default:
newvec = _mm_loadu_si128(pin);
break;
}
accumulator = Delta(newvec, initoffset);
oldvec = newvec;
/* process 4 integers and build an accumulator */
while (k * 4 + 4 <= length) {
newvec = _mm_loadu_si128(pin + k);
accumulator = _mm_or_si128(accumulator, Delta(newvec, oldvec));
oldvec = newvec;
k++;
}
/* extract the accumulator as an integer */
_mm_storeu_si128((__m128i *)(tmparray), accumulator);
acc = tmparray[0] | tmparray[1] | tmparray[2] | tmparray[3];
/* now process the remaining integers */
for (k *= 4; k < length; k++)
acc |= in[k] - (k == 0 ? initvalue : in[k - 1]);
/* return the number of bits */
return bits(acc);
}
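/*
 * A scalar reference, illustrative only, for what simdmaxbitsd1 above computes:
 * OR together all 128 successive unsigned deltas and take the bit width of the
 * result, exactly as the tail loop of simdmaxbitsd1_length does for leftover
 * integers.
 */
static uint32_t scalar_maxbitsd1_sketch(uint32_t initvalue, const uint32_t *in) {
    uint32_t acc = 0;
    uint32_t prev = initvalue;
    uint32_t k;
    for (k = 0; k < SIMDBlockSize; ++k) {
        acc |= in[k] - prev; /* unsigned difference, like the Delta macro */
        prev = in[k];
    }
    return bits(acc);
}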

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,900 +0,0 @@
/**
* This code is released under a BSD License.
*/
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include "simdcomp.h"
int testshortpack() {
int bit;
size_t i;
size_t length;
__m128i * bb;
srand(0);
printf("testshortpack\n");
for (bit = 0; bit < 32; ++bit) {
const size_t N = 128;
uint32_t * data = malloc(N * sizeof(uint32_t));
uint32_t * backdata = malloc(N * sizeof(uint32_t));
uint32_t * buffer = malloc((2 * N + 1024) * sizeof(uint32_t));
for (i = 0; i < N; ++i) {
data[i] = rand() & ((1 << bit) - 1);
}
for (length = 0; length <= N; ++length) {
for (i = 0; i < N; ++i) {
backdata[i] = 0;
}
bb = simdpack_shortlength(data, length, (__m128i *) buffer,
bit);
if((bb - (__m128i *) buffer) * sizeof(__m128i) != (unsigned) simdpack_compressedbytes(length,bit)) {
printf("bug\n");
return -1;
}
simdunpack_shortlength((__m128i *) buffer, length,
backdata, bit);
for (i = 0; i < length; ++i) {
if (data[i] != backdata[i]) {
printf("bug\n");
return -1;
}
}
}
free(data);
free(backdata);
free(buffer);
}
return 0;
}
int testlongpack() {
int bit;
size_t i;
size_t length;
__m128i * bb;
srand(0);
printf("testlongpack\n");
for (bit = 0; bit < 32; ++bit) {
const size_t N = 2048;
uint32_t * data = malloc(N * sizeof(uint32_t));
uint32_t * backdata = malloc(N * sizeof(uint32_t));
uint32_t * buffer = malloc((2 * N + 1024) * sizeof(uint32_t));
for (i = 0; i < N; ++i) {
data[i] = rand() & ((1 << bit) - 1);
}
for (length = 0; length <= N; ++length) {
for (i = 0; i < N; ++i) {
backdata[i] = 0;
}
bb = simdpack_length(data, length, (__m128i *) buffer,
bit);
if((bb - (__m128i *) buffer) * sizeof(__m128i) != (unsigned) simdpack_compressedbytes(length,bit)) {
printf("bug\n");
return -1;
}
simdunpack_length((__m128i *) buffer, length,
backdata, bit);
for (i = 0; i < length; ++i) {
if (data[i] != backdata[i]) {
printf("bug\n");
return -1;
}
}
}
free(data);
free(backdata);
free(buffer);
}
return 0;
}
int testset() {
int bit;
size_t i;
const size_t N = 128;
uint32_t * data = malloc(N * sizeof(uint32_t));
uint32_t * backdata = malloc(N * sizeof(uint32_t));
uint32_t * buffer = malloc((2 * N + 1024) * sizeof(uint32_t));
srand(0);
for (bit = 0; bit < 32; ++bit) {
printf("simple set %d \n",bit);
for (i = 0; i < N; ++i) {
data[i] = rand() & ((1 << bit) - 1);
}
for (i = 0; i < N; ++i) {
backdata[i] = 0;
}
simdpack(data, (__m128i *) buffer, bit);
simdunpack((__m128i *) buffer, backdata, bit);
for (i = 0; i < N; ++i) {
if (data[i] != backdata[i]) {
printf("bug\n");
return -1;
}
}
for(i = N ; i > 0; i--) {
simdfastset((__m128i *) buffer, bit, data[N - i], i - 1);
}
simdunpack((__m128i *) buffer, backdata, bit);
for (i = 0; i < N; ++i) {
if (data[i] != backdata[N - i - 1]) {
printf("bug\n");
return -1;
}
}
simdpack(data, (__m128i *) buffer, bit);
for(i = 1 ; i <= N; i++) {
simdfastset((__m128i *) buffer, bit, data[i - 1], i - 1);
}
simdunpack((__m128i *) buffer, backdata, bit);
for (i = 0; i < N; ++i) {
if (data[i] != backdata[i]) {
printf("bug\n");
return -1;
}
}
}
free(data);
free(backdata);
free(buffer);
return 0;
}
#ifdef __SSE4_1__
int testsetd1() {
int bit;
size_t i;
uint32_t newvalue;
const size_t N = 128;
uint32_t * data = malloc(N * sizeof(uint32_t));
uint32_t * datazeroes = malloc(N * sizeof(uint32_t));
uint32_t * backdata = malloc(N * sizeof(uint32_t));
uint32_t * buffer = malloc((2 * N + 1024) * sizeof(uint32_t));
srand(0);
for (bit = 0; bit < 32; ++bit) {
printf("simple set d1 %d \n",bit);
data[0] = rand() & ((1 << bit) - 1);
datazeroes[0] = 0;
for (i = 1; i < N; ++i) {
data[i] = data[i - 1] + (rand() & ((1 << bit) - 1));
datazeroes[i] = 0;
}
for (i = 0; i < N; ++i) {
backdata[i] = 0;
}
simdpackd1(0,datazeroes, (__m128i *) buffer, bit);
for(i = 1 ; i <= N; i++) {
simdfastsetd1(0,(__m128i *) buffer, bit, data[i - 1], i - 1);
newvalue = simdselectd1(0, (const __m128i *) buffer, bit,i - 1);
if( newvalue != data[i-1] ) {
printf("bad set-select\n");
return -1;
}
}
simdunpackd1(0,(__m128i *) buffer, backdata, bit);
for (i = 0; i < N; ++i) {
if (data[i] != backdata[i])
return -1;
}
}
free(data);
free(backdata);
free(buffer);
free(datazeroes);
return 0;
}
#endif
int testsetFOR() {
int bit;
size_t i;
uint32_t newvalue;
const size_t N = 128;
uint32_t * data = malloc(N * sizeof(uint32_t));
uint32_t * datazeroes = malloc(N * sizeof(uint32_t));
uint32_t * backdata = malloc(N * sizeof(uint32_t));
uint32_t * buffer = malloc((2 * N + 1024) * sizeof(uint32_t));
srand(0);
for (bit = 0; bit < 32; ++bit) {
printf("simple set FOR %d \n",bit);
for (i = 0; i < N; ++i) {
data[i] = (rand() & ((1 << bit) - 1));
datazeroes[i] = 0;
}
for (i = 0; i < N; ++i) {
backdata[i] = 0;
}
simdpackFOR(0,datazeroes, (__m128i *) buffer, bit);
for(i = 1 ; i <= N; i++) {
simdfastsetFOR(0,(__m128i *) buffer, bit, data[i - 1], i - 1);
newvalue = simdselectFOR(0, (const __m128i *) buffer, bit,i - 1);
if( newvalue != data[i-1] ) {
printf("bad set-select\n");
return -1;
}
}
simdunpackFOR(0,(__m128i *) buffer, backdata, bit);
for (i = 0; i < N; ++i) {
if (data[i] != backdata[i])
return -1;
}
}
free(data);
free(backdata);
free(buffer);
free(datazeroes);
return 0;
}
int testshortFORpack() {
int bit;
size_t i;
__m128i * rb;
size_t length;
uint32_t offset = 7;
srand(0);
for (bit = 0; bit < 32; ++bit) {
const size_t N = 128;
uint32_t * data = malloc(N * sizeof(uint32_t));
uint32_t * backdata = malloc(N * sizeof(uint32_t));
uint32_t * buffer = malloc((2 * N + 1024) * sizeof(uint32_t));
for (i = 0; i < N; ++i) {
data[i] = (rand() & ((1 << bit) - 1)) + offset;
}
for (length = 0; length <= N; ++length) {
for (i = 0; i < N; ++i) {
backdata[i] = 0;
}
rb = simdpackFOR_length(offset,data, length, (__m128i *) buffer,
bit);
if(((rb - (__m128i *) buffer)*sizeof(__m128i)) != (unsigned) simdpackFOR_compressedbytes(length,bit)) {
return -1;
}
simdunpackFOR_length(offset,(__m128i *) buffer, length,
backdata, bit);
for (i = 0; i < length; ++i) {
if (data[i] != backdata[i])
return -1;
}
}
free(data);
free(backdata);
free(buffer);
}
return 0;
}
#ifdef __AVX2__
int testbabyavx() {
int bit;
int trial;
unsigned int i,j;
const size_t N = AVXBlockSize;
srand(0);
printf("testbabyavx\n");
printf("bit = ");
for (bit = 0; bit < 32; ++bit) {
printf(" %d ",bit);
fflush(stdout);
for(trial = 0; trial < 100; ++trial) {
uint32_t * data = malloc(N * sizeof(uint32_t)+ 64 * sizeof(uint32_t));
uint32_t * backdata = malloc(N * sizeof(uint32_t) + 64 * sizeof(uint32_t) );
__m256i * buffer = malloc((2 * N + 1024) * sizeof(uint32_t) + 32);
for (i = 0; i < N; ++i) {
data[i] = rand() & ((uint32_t)(1 << bit) - 1);
}
for (i = 0; i < N; ++i) {
backdata[i] = 0;
}
if(avxmaxbits(data) != maxbits_length(data,N)) {
printf("avxmaxbits is buggy\n");
return -1;
}
avxpackwithoutmask(data, buffer, bit);
avxunpack(buffer, backdata, bit);
for (i = 0; i < AVXBlockSize; ++i) {
if (data[i] != backdata[i]) {
printf("bug\n");
for (j = 0; j < N; ++j) {
if (data[j] != backdata[j]) {
printf("data[%d]=%d v.s. backdata[%d]=%d\n",j,data[j],j,backdata[j]);
} else {
printf("data[%d]=%d\n",j,data[j]);
}
}
return -1;
}
}
free(data);
free(backdata);
free(buffer);
}
}
printf("\n");
return 0;
}
int testavx2() {
int N = 5000 * AVXBlockSize, gap;
__m256i * buffer = malloc(AVXBlockSize * sizeof(uint32_t));
uint32_t * datain = malloc(N * sizeof(uint32_t));
uint32_t * backbuffer = malloc(AVXBlockSize * sizeof(uint32_t));
for (gap = 1; gap <= 387420489; gap *= 3) {
int k;
printf(" gap = %u \n", gap);
for (k = 0; k < N; ++k)
datain[k] = k * gap;
for (k = 0; k * AVXBlockSize < N; ++k) {
/*
First part works for general arrays (sorted or unsorted)
*/
int j;
/* we compute the bit width */
const uint32_t b = avxmaxbits(datain + k * AVXBlockSize);
if(avxmaxbits(datain + k * AVXBlockSize) != maxbits_length(datain + k * AVXBlockSize,AVXBlockSize)) {
printf("avxmaxbits is buggy %d %d \n",
avxmaxbits(datain + k * AVXBlockSize),
maxbits_length(datain + k * AVXBlockSize,AVXBlockSize));
return -1;
}
printf("bit width = %d\n",b);
/* we read 256 integers at "datain + k * AVXBlockSize" and
write b 256-bit vectors at "buffer" */
avxpackwithoutmask(datain + k * AVXBlockSize, buffer, b);
/* we read back b 256-bit vectors at "buffer" and write 256 integers at backbuffer */
avxunpack(buffer, backbuffer, b);/* uncompressed */
for (j = 0; j < AVXBlockSize; ++j) {
if (backbuffer[j] != datain[k * AVXBlockSize + j]) {
int i;
printf("bug in avxpack\n");
for(i = 0; i < AVXBlockSize; ++i) {
printf("data[%d]=%d got back %d %s\n",i,
datain[k * AVXBlockSize + i],backbuffer[i],
datain[k * AVXBlockSize + i]!=backbuffer[i]?"bug":"");
}
return -2;
}
}
}
}
free(buffer);
free(datain);
free(backbuffer);
printf("Code looks good.\n");
return 0;
}
#endif /* avx2 */
int test() {
int N = 5000 * SIMDBlockSize, gap;
__m128i * buffer = malloc(SIMDBlockSize * sizeof(uint32_t));
uint32_t * datain = malloc(N * sizeof(uint32_t));
uint32_t * backbuffer = malloc(SIMDBlockSize * sizeof(uint32_t));
for (gap = 1; gap <= 387420489; gap *= 3) {
int k;
printf(" gap = %u \n", gap);
for (k = 0; k < N; ++k)
datain[k] = k * gap;
for (k = 0; k * SIMDBlockSize < N; ++k) {
/*
First part works for general arrays (sorted or unsorted)
*/
int j;
/* we compute the bit width */
const uint32_t b = maxbits(datain + k * SIMDBlockSize);
/* we read 128 integers at "datain + k * SIMDBlockSize" and
write b 128-bit vectors at "buffer" */
simdpackwithoutmask(datain + k * SIMDBlockSize, buffer, b);
/* we read back b1 128-bit vectors at "buffer" and write 128 integers at backbuffer */
simdunpack(buffer, backbuffer, b);/* uncompressed */
for (j = 0; j < SIMDBlockSize; ++j) {
if (backbuffer[j] != datain[k * SIMDBlockSize + j]) {
printf("bug in simdpack\n");
return -2;
}
}
{
/*
next part assumes that the data is sorted (uses differential coding)
*/
uint32_t offset = 0;
/* we compute the bit width */
const uint32_t b1 = simdmaxbitsd1(offset,
datain + k * SIMDBlockSize);
/* we read 128 integers at "datain + k * SIMDBlockSize" and
write b1 128-bit vectors at "buffer" */
simdpackwithoutmaskd1(offset, datain + k * SIMDBlockSize, buffer,
b1);
/* we read back b1 128-bit vectors at "buffer" and write 128 integers at backbuffer */
simdunpackd1(offset, buffer, backbuffer, b1);
for (j = 0; j < SIMDBlockSize; ++j) {
if (backbuffer[j] != datain[k * SIMDBlockSize + j]) {
printf("bug in simdpack d1\n");
return -3;
}
}
offset = datain[k * SIMDBlockSize + SIMDBlockSize - 1];
}
}
}
free(buffer);
free(datain);
free(backbuffer);
printf("Code looks good.\n");
return 0;
}
#ifdef __SSE4_1__
int testFOR() {
int N = 5000 * SIMDBlockSize, gap;
__m128i * buffer = malloc(SIMDBlockSize * sizeof(uint32_t));
uint32_t * datain = malloc(N * sizeof(uint32_t));
uint32_t * backbuffer = malloc(SIMDBlockSize * sizeof(uint32_t));
uint32_t tmax, tmin, tb;
for (gap = 1; gap <= 387420489; gap *= 2) {
int k;
printf(" gap = %u \n", gap);
for (k = 0; k < N; ++k)
datain[k] = k * gap;
for (k = 0; k * SIMDBlockSize < N; ++k) {
int j;
simdmaxmin_length(datain + k * SIMDBlockSize,SIMDBlockSize,&tmin,&tmax);
/* we compute the bit width */
tb = bits(tmax - tmin);
/* we read 128 integers at "datain + k * SIMDBlockSize" and
write b 128-bit vectors at "buffer" */
simdpackFOR(tmin,datain + k * SIMDBlockSize, buffer, tb);
for (j = 0; j < SIMDBlockSize; ++j) {
uint32_t selectedvalue = simdselectFOR(tmin,buffer,tb,j);
if (selectedvalue != datain[k * SIMDBlockSize + j]) {
printf("bug in simdselectFOR\n");
return -3;
}
}
/* we read back b1 128-bit vectors at "buffer" and write 128 integers at backbuffer */
simdunpackFOR(tmin,buffer, backbuffer, tb);/* uncompressed */
for (j = 0; j < SIMDBlockSize; ++j) {
if (backbuffer[j] != datain[k * SIMDBlockSize + j]) {
printf("bug in simdpackFOR\n");
return -2;
}
}
}
}
free(buffer);
free(datain);
free(backbuffer);
printf("Code looks good.\n");
return 0;
}
#endif
#define MAX 300
int test_simdmaxbitsd1_length() {
uint32_t result, buffer[MAX + 1];
int i, j;
memset(&buffer[0], 0xff, sizeof(buffer));
/* this test creates buffers of different length; each buffer is
* initialized to result in the following deltas:
* length 1: 2
* length 2: 1 2
* length 3: 1 1 2
* length 4: 1 1 1 2
* length 5: 1 1 1 1 2
* etc. Each sequence's "maxbits" is 2. */
for (i = 0; i < MAX; i++) {
for (j = 0; j < i; j++)
buffer[j] = j + 1;
buffer[i] = i + 2;
result = simdmaxbitsd1_length(0, &buffer[0], i + 1);
if (result != 2) {
printf("simdmaxbitsd1_length: unexpected result %u in loop %d\n",
result, i);
return -1;
}
}
printf("simdmaxbitsd1_length: ok\n");
return 0;
}
int uint32_cmp(const void *a, const void *b)
{
const uint32_t *ia = (const uint32_t *)a;
const uint32_t *ib = (const uint32_t *)b;
if(*ia < *ib)
return -1;
else if (*ia > *ib)
return 1;
return 0;
}
#ifdef __SSE4_1__
int test_simdpackedsearch() {
uint32_t buffer[128];
uint32_t result = 0;
int b, i;
uint32_t init = 0;
__m128i initial = _mm_set1_epi32(init);
/* initialize the buffer */
for (i = 0; i < 128; i++)
buffer[i] = (uint32_t)(i + 1);
/* this test creates delta encoded buffers with different bits, then
* performs lower bound searches for each key */
for (b = 1; b <= 32; b++) {
uint32_t out[128];
/* delta-encode to 'b' bits */
simdpackwithoutmaskd1(init, buffer, (__m128i *)out, b);
initial = _mm_setzero_si128();
printf("simdsearchd1: %d bits\n", b);
/* now perform the searches */
initial = _mm_set1_epi32(init);
assert(simdsearchd1(&initial, (__m128i *)out, b, 0, &result) == 0);
assert(result > 0);
for (i = 1; i <= 128; i++) {
initial = _mm_set1_epi32(init);
assert(simdsearchd1(&initial, (__m128i *)out, b,
(uint32_t)i, &result) == i - 1);
assert(result == (unsigned)i);
}
initial = _mm_set1_epi32(init);
assert(simdsearchd1(&initial, (__m128i *)out, b, 200, &result)
== 128);
assert(result > 200);
}
printf("simdsearchd1: ok\n");
return 0;
}
int test_simdpackedsearchFOR() {
uint32_t buffer[128];
uint32_t result = 0;
int b;
uint32_t i;
uint32_t maxv, tmin, tmax, tb;
uint32_t out[128];
/* this test creates FOR encoded buffers with different bit widths, then
* performs lower bound searches for each key */
for (b = 1; b <= 32; b++) {
/* initialize the buffer */
maxv = (b == 32)
? 0xFFFFFFFF
: ((1U<<b) - 1);
for (i = 0; i < 128; i++)
buffer[i] = maxv * (i + 1) / 128;
simdmaxmin_length(buffer,SIMDBlockSize,&tmin,&tmax);
/* we compute the bit width */
tb = bits(tmax - tmin);
/* FOR-encode to 'tb' bits */
simdpackFOR(tmin, buffer, (__m128i *)out, tb);
printf("simdsearchd1: %d bits\n", b);
/* now perform the searches */
for (i = 0; i < 128; i++) {
assert(buffer[i] == simdselectFOR(tmin, (__m128i *)out, tb,i));
}
for (i = 0; i < 128; i++) {
int x = simdsearchwithlengthFOR(tmin, (__m128i *)out, tb,
128,buffer[i], &result) ;
assert(simdselectFOR(tmin, (__m128i *)out, tb,x) == buffer[x]);
assert(simdselectFOR(tmin, (__m128i *)out, tb,x) == result);
assert(buffer[x] == result);
assert(result == buffer[i]);
assert(buffer[x] == buffer[i]);
}
}
printf("simdsearchFOR: ok\n");
return 0;
}
int test_simdpackedsearch_advanced() {
uint32_t buffer[128];
uint32_t backbuffer[128];
uint32_t out[128];
uint32_t result = 0;
uint32_t b, i;
uint32_t init = 0;
__m128i initial = _mm_set1_epi32(init);
/* this test creates delta encoded buffers with different bits, then
* performs lower bound searches for each key */
for (b = 0; b <= 32; b++) {
uint32_t prev = init;
/* initialize the buffer */
for (i = 0; i < 128; i++) {
buffer[i] = ((uint32_t)(1431655765 * i + 0xFFFFFFFF)) ;
if(b < 32) buffer[i] %= (1<<b);
}
qsort(buffer,128, sizeof(uint32_t), uint32_cmp);
for (i = 0; i < 128; i++) {
buffer[i] = buffer[i] + prev;
prev = buffer[i];
}
for (i = 1; i < 128; i++) {
if(buffer[i] < buffer[i-1] )
buffer[i] = buffer[i-1];
}
assert(simdmaxbitsd1(init, buffer)<=b);
for (i = 0; i < 128; i++) {
out[i] = 0; /* memset would do too */
}
/* delta-encode to 'b' bits */
simdpackwithoutmaskd1(init, buffer, (__m128i *)out, b);
simdunpackd1(init, (__m128i *)out, backbuffer, b);
for (i = 0; i < 128; i++) {
assert(buffer[i] == backbuffer[i]);
}
printf("advanced simdsearchd1: %d bits\n", b);
for (i = 0; i < 128; i++) {
int pos;
initial = _mm_set1_epi32(init);
pos = simdsearchd1(&initial, (__m128i *)out, b,
buffer[i], &result);
assert(pos == simdsearchwithlengthd1(init, (__m128i *)out, b, 128,
buffer[i], &result));
assert(buffer[pos] == buffer[i]);
if(pos > 0)
assert(buffer[pos - 1] < buffer[i]);
assert(result == buffer[i]);
}
for (i = 0; i < 128; i++) {
int pos;
if(buffer[i] == 0) continue;
initial = _mm_set1_epi32(init);
pos = simdsearchd1(&initial, (__m128i *)out, b,
buffer[i] - 1, &result);
assert(pos == simdsearchwithlengthd1(init, (__m128i *)out, b, 128,
buffer[i] - 1, &result));
assert(buffer[pos] >= buffer[i] - 1);
if(pos > 0)
assert(buffer[pos - 1] < buffer[i] - 1);
assert(result == buffer[pos]);
}
for (i = 0; i < 128; i++) {
int pos;
if (buffer[i] + 1 == 0)
continue;
initial = _mm_set1_epi32(init);
pos = simdsearchd1(&initial, (__m128i *) out, b,
buffer[i] + 1, &result);
assert(pos == simdsearchwithlengthd1(init, (__m128i *)out, b, 128,
buffer[i] + 1, &result));
if(pos == 128) {
assert(buffer[i] == buffer[127]);
} else {
assert(buffer[pos] >= buffer[i] + 1);
if (pos > 0)
assert(buffer[pos - 1] < buffer[i] + 1);
assert(result == buffer[pos]);
}
}
}
printf("advanced simdsearchd1: ok\n");
return 0;
}
int test_simdpackedselect() {
uint32_t buffer[128];
uint32_t initial = 33;
int b, i;
/* initialize the buffer */
for (i = 0; i < 128; i++)
buffer[i] = (uint32_t)(initial + i);
/* this test creates delta encoded buffers with different bits, then
* selects each value by position and checks it */
for (b = 1; b <= 32; b++) {
uint32_t out[128];
/* delta-encode to 'b' bits */
simdpackwithoutmaskd1(initial, buffer, (__m128i *)out, b);
printf("simdselectd1: %d bits\n", b);
/* now perform the searches */
for (i = 0; i < 128; i++) {
assert(simdselectd1(initial, (__m128i *)out, b, (uint32_t)i)
== initial + i);
}
}
printf("simdselectd1: ok\n");
return 0;
}
int test_simdpackedselect_advanced() {
uint32_t buffer[128];
uint32_t initial = 33;
uint32_t b;
int i;
/* this test creates delta encoded buffers with different bits, then
* selects each value by position and checks it */
for (b = 0; b <= 32; b++) {
uint32_t prev = initial;
uint32_t out[128];
/* initialize the buffer */
for (i = 0; i < 128; i++) {
buffer[i] = ((uint32_t)(165576 * i)) ;
if(b < 32) buffer[i] %= (1<<b);
}
for (i = 0; i < 128; i++) {
buffer[i] = buffer[i] + prev;
prev = buffer[i];
}
for (i = 1; i < 128; i++) {
if(buffer[i] < buffer[i-1] )
buffer[i] = buffer[i-1];
}
assert(simdmaxbitsd1(initial, buffer)<=b);
for (i = 0; i < 128; i++) {
out[i] = 0; /* memset would do too */
}
/* delta-encode to 'b' bits */
simdpackwithoutmaskd1(initial, buffer, (__m128i *)out, b);
printf("simdselectd1: %d bits\n", b);
/* now perform the searches */
for (i = 0; i < 128; i++) {
uint32_t valretrieved = simdselectd1(initial, (__m128i *)out, b, (uint32_t)i);
assert(valretrieved == buffer[i]);
}
}
printf("advanced simdselectd1: ok\n");
return 0;
}
#endif
int main() {
int r;
r = testsetFOR();
if (r) {
printf("test failure 1\n");
return r;
}
#ifdef __SSE4_1__
r = testsetd1();
if (r) {
printf("test failure 2\n");
return r;
}
#endif
r = testset();
if (r) {
printf("test failure 3\n");
return r;
}
r = testshortFORpack();
if (r) {
printf("test failure 4\n");
return r;
}
r = testshortpack();
if (r) {
printf("test failure 5\n");
return r;
}
r = testlongpack();
if (r) {
printf("test failure 6\n");
return r;
}
#ifdef __SSE4_1__
r = test_simdpackedsearchFOR();
if (r) {
printf("test failure 7\n");
return r;
}
r = testFOR();
if (r) {
printf("test failure 8\n");
return r;
}
#endif
#ifdef __AVX2__
r = testbabyavx();
if (r) {
printf("test failure baby avx\n");
return r;
}
r = testavx2();
if (r) {
printf("test failure 9 avx\n");
return r;
}
#endif
r = test();
if (r) {
printf("test failure 9\n");
return r;
}
r = test_simdmaxbitsd1_length();
if (r) {
printf("test failure 10\n");
return r;
}
#ifdef __SSE4_1__
r = test_simdpackedsearch();
if (r) {
printf("test failure 11\n");
return r;
}
r = test_simdpackedsearch_advanced();
if (r) {
printf("test failure 12\n");
return r;
}
r = test_simdpackedselect();
if (r) {
printf("test failure 13\n");
return r;
}
r = test_simdpackedselect_advanced();
if (r) {
printf("test failure 14\n");
return r;
}
#endif
printf("All tests OK!\n");
return 0;
}

View File

@@ -1,102 +0,0 @@
/**
* This code is released under a BSD License.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h> /* for memmove */
#include <time.h>
#include "simdcomp.h"
#define get_random_char() ((uint8_t)(rand() % 256))
int main() {
int N = 5000 * SIMDBlockSize, gap;
__m128i * buffer = malloc(SIMDBlockSize * sizeof(uint32_t));
uint32_t * datain = malloc(N * sizeof(uint32_t));
uint32_t * backbuffer = malloc(SIMDBlockSize * sizeof(uint32_t));
srand(time(NULL));
for (gap = 1; gap <= 387420489; gap *= 3) {
int k;
printf(" gap = %u \n", gap);
/* simulate a random character string; we don't care about endianness */
for (k = 0; k < N; ++k) {
uint8_t _tmp[4];
_tmp[0] = get_random_char();
_tmp[1] = get_random_char();
_tmp[2] = get_random_char();
_tmp[3] = get_random_char();
memmove(&datain[k], _tmp, 4);
}
for (k = 0; k * SIMDBlockSize < N; ++k) {
/*
First part works for general arrays (sorted or unsorted)
*/
int j;
/* we compute the bit width */
const uint32_t b = maxbits(datain + k * SIMDBlockSize);
/* we read 128 integers at "datain + k * SIMDBlockSize" and
write b 128-bit vectors at "buffer" */
simdpackwithoutmask(datain + k * SIMDBlockSize, buffer, b);
/* we read back b 128-bit vectors at "buffer" and write 128 integers at backbuffer */
simdunpack(buffer, backbuffer, b);/* uncompressed */
for (j = 0; j < SIMDBlockSize; ++j) {
uint8_t chars_back[4];
uint8_t chars_in[4];
memmove(chars_back, &backbuffer[j], 4);
memmove(chars_in, &datain[k * SIMDBlockSize + j], 4);
if (chars_in[0] != chars_back[0]
|| chars_in[1] != chars_back[1]
|| chars_in[2] != chars_back[2]
|| chars_in[3] != chars_back[3]) {
printf("bug in simdpack\n");
return -2;
}
}
{
/*
next part assumes that the data is sorted (uses differential coding)
*/
uint32_t offset = 0;
/* we compute the bit width */
const uint32_t b1 = simdmaxbitsd1(offset,
datain + k * SIMDBlockSize);
/* we read 128 integers at "datain + k * SIMDBlockSize" and
write b1 128-bit vectors at "buffer" */
simdpackwithoutmaskd1(offset, datain + k * SIMDBlockSize, buffer,
b1);
/* we read back b1 128-bit vectors at "buffer" and write 128 integers at backbuffer */
simdunpackd1(offset, buffer, backbuffer, b1);
for (j = 0; j < SIMDBlockSize; ++j) {
uint8_t chars_back[4];
uint8_t chars_in[4];
memmove(chars_back, &backbuffer[j], 4);
memmove(chars_in, &datain[k * SIMDBlockSize + j], 4);
if (chars_in[0] != chars_back[0]
|| chars_in[1] != chars_back[1]
|| chars_in[2] != chars_back[2]
|| chars_in[3] != chars_back[3]) {
printf("bug in simdpack\n");
return -3;
}
}
offset = datain[k * SIMDBlockSize + SIMDBlockSize - 1];
}
}
}
free(buffer);
free(datain);
free(backbuffer);
printf("Code looks good.\n");
return 0;
}

View File

@@ -1,42 +0,0 @@
#include "simdcomp.h"
#include "simdcomputil.h"
// assumes datain holds 128 uint32 values
// and that output is large enough to host the compressed data.
size_t compress_sorted(
const uint32_t* datain,
uint8_t* output,
const uint32_t offset) {
const uint32_t b = simdmaxbitsd1(offset, datain);
*output++ = b;
simdpackwithoutmaskd1(offset, datain, (__m128i *) output, b);
return 1 + b * sizeof(__m128i);
}
// assumes compressed_data encodes 128 uint32 values
// and that output is large enough to host them.
size_t uncompress_sorted(
const uint8_t* compressed_data,
uint32_t* output,
uint32_t offset) {
const uint32_t b = *compressed_data++;
simdunpackd1(offset, (__m128i *)compressed_data, output, b);
return 1 + b * sizeof(__m128i);
}
size_t compress_unsorted(
const uint32_t* datain,
uint8_t* output) {
const uint32_t b = maxbits(datain);
*output++ = b;
simdpackwithoutmask(datain, (__m128i *) output, b);
return 1 + b * sizeof(__m128i);
}
size_t uncompress_unsorted(
const uint8_t* compressed_data,
uint32_t* output) {
const uint32_t b = *compressed_data++;
simdunpack((__m128i *)compressed_data, output, b);
return 1 + b * sizeof(__m128i);
}

1
doc/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
book

5
doc/book.toml Normal file
View File

@@ -0,0 +1,5 @@
[book]
authors = ["Paul Masurel"]
multilingual = false
src = "src"
title = "Tantivy, the user guide"

15
doc/src/SUMMARY.md Normal file
View File

@@ -0,0 +1,15 @@
# Summary
[Avant Propos](./avant-propos.md)
- [Segments](./basis.md)
- [Defining your schema](./schema.md)
- [Facetting](./facetting.md)
- [Innerworkings](./innerworkings.md)
- [Inverted index](./inverted_index.md)
- [Best practice](./inverted_index.md)
[Frequently Asked Questions](./faq.md)
[Examples](./examples.md)

34
doc/src/avant-propos.md Normal file
View File

@@ -0,0 +1,34 @@
# Foreword: what is the scope of tantivy?
> Tantivy is a **search** engine **library** for Rust.
If you are familiar with Lucene, a good first approximation is to think of tantivy as Lucene for Rust. Tantivy is heavily inspired by Lucene's design, and
they both have the same scope and targeted use cases.
If you are not familiar with Lucene, let's break down our little tagline.
- **Search** here means full-text search: fundamentally, tantivy is here to help you
identify efficiently which documents in your corpus match a given query.
But a modern search UI is so much more: text processing, facetting, autocomplete, fuzzy search, good
relevancy, collapsing, highlighting, spatial search.
While some of these features are not available in tantivy yet, all of them are relevant
feature requests. Tantivy's objective is to offer a solid toolbox to create the best search
experience. But keep in mind this is just a toolbox.
Which brings us to the second keyword...
- **Library** means that you will have to write code. Tantivy is not an *all-in-one* server solution like Elasticsearch, for instance.
Sometimes a functionality will not be available in tantivy because it is too
specific to your use case. By design, tantivy should make it possible to extend
the available set of features using the existing rock-solid datastructures.
Most frequently, this will mean writing your own `Collector`, your own `Scorer` or your own
`TokenFilter`... Some of your requirements may also be related to
something closer to architecture or operations. For instance, you may
want to build a large corpus on Hadoop, fine-tune the merge policy to keep your
index sharded in a time-wise fashion, or you may want to convert an existing
index from a different format.
Tantivy exposes many low-level APIs to make all of these things possible.

77
doc/src/basis.md Normal file
View File

@@ -0,0 +1,77 @@
# Anatomy of an index
## Straight from disk
Tantivy accesses its data through an abstracting trait called `Directory`.
In theory, one can come and override the data access logic. In practice, the
trait somewhat assumes that your data can be mapped to memory: tantivy
is deeply married to using `mmap` for its IO [^1], and the only persisting
directory shipped with tantivy is the `MmapDirectory`.
While this design has some downsides, it greatly simplifies the source code of
tantivy. Caching is also entirely delegated to the OS.
`tantivy` works entirely (or almost) by directly reading the datastructures as they are laid out on disk. As a result, opening an index does not involve loading datastructures from disk into random access memory: starting a process, opening an index, and performing your first query can typically be done in a matter of milliseconds.
This is an interesting property for a command-line search engine, or for a multi-tenant log search engine: spawning a new process for each new query can be a perfectly sensible solution in some use cases.
In later chapters, we will discuss tantivy's inverted index data layout.
One key takeaway is that, to achieve great performance, search indexes are extremely compact.
Of course this is crucial to reduce IO, and to ensure that as much of our index as possible can sit in RAM.
Also, whenever possible, data is accessed sequentially. This is an amazing property when tantivy needs to read your data from a spinning hard disk, but it is also
critical for performance when your data comes from an `SSD` or is already in your page cache.
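To make the "matter of milliseconds" claim concrete, here is a minimal sketch of a program that opens an existing index and runs a single query. It is only an illustration: the `index_path` directory and the `title` field are assumptions about a pre-existing index, not something this guide defines.

```rust
extern crate tantivy;
use std::path::Path;
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::Index;

fn search_once(index_path: &Path) -> tantivy::Result<()> {
    // Opening the index only mmaps the segment files:
    // nothing is eagerly loaded into the process' heap.
    let index = Index::open_in_dir(index_path)?;
    index.load_searchers()?;
    let searcher = index.searcher();
    // `title` is a hypothetical text field of the existing schema.
    let title = index.schema().get_field("title").unwrap();
    let query_parser = QueryParser::for_index(&index, vec![title]);
    let query = query_parser.parse_query("sea whale")?;
    let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;
    println!("{} hits", top_docs.len());
    Ok(())
}
```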
## Segments, and the log method
That kind of compact layout comes at a cost: it prevents our datastructures from being dynamic.
In fact, the `Directory` trait does not even allow you to modify part of a file.
To allow the addition and deletion of documents, and to create the illusion that
your index is dynamic, tantivy uses a common database trick sometimes referred to as the *log method*.
Let's forget about deletes for a moment.
As you add documents, they are processed and stored in a dedicated datastructure, in a `RAM` buffer. This datastructure is not ready for search, but it is designed to receive your data and rearrange it very rapidly.
As you keep adding documents, this buffer eventually reaches its capacity; tantivy then transparently stops adding documents to it and starts converting the datastructure to its final read-only format on disk. Once it is written, a brand new empty buffer is available to resume adding documents.
The resulting chunk of index obtained after this serialization is called a `Segment`.
> A segment is a self-contained, atomic piece of index. It is identified by a UUID, and all of its files follow the naming scheme `<UUID>.*`.
Which brings us to the nature of a tantivy `Index`.
> A tantivy `Index` is a collection of `Segments`.
Physically, this really just means an index is a bunch of segment files in a given `Directory`,
linked together by a `meta.json` file. This transparency can become extremely handy
to make tantivy fit your use case:
*Example 1* You could for instance use Hadoop to build a very large search index in a timely manner, copy all of the resulting segment files into the same directory, and edit the `meta.json` to get a functional index.[^2]
*Example 2* You could also disable your merge policy (see the sketch at the end of the *Merging* section below) and enforce daily segments. Removing data after one week can then be done very efficiently by just editing the `meta.json` and deleting the files associated with segment `D-7`.
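Here is a small, illustrative sketch of the log method seen from the API: each commit that carries documents typically results in one more segment. The field name is made up, and the `segment_readers()` accessor on the searcher is an assumption to be checked against the current API docs.

```rust
#[macro_use]
extern crate tantivy;
use tantivy::schema::{Schema, TEXT};
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT);
    let index = Index::create_in_ram(schema_builder.build());
    let mut index_writer = index.writer(50_000_000)?;

    // First batch of documents: the RAM buffer is serialized
    // into a first segment when we commit.
    index_writer.add_document(doc!(title => "of mice and men"));
    index_writer.commit()?;

    // Second batch: a second segment appears
    // (until a merge compacts them).
    index_writer.add_document(doc!(title => "frankenstein"));
    index_writer.commit()?;

    index.load_searchers()?;
    let searcher = index.searcher();
    println!("number of segments: {}", searcher.segment_readers().len());
    Ok(())
}
```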
## Merging
As you index more and more data, your index will accumulate more and more segments.
Having a lot of small segments is not really optimal. There is a bit of redundancy in keeping
all these term dictionaries around. Also, when searching, we need to do a term lookup in every segment, which can hurt search performance a bit.
That's where merging, or compacting, comes into play. Tantivy will continuously consider merge
opportunities and start merging segments in the background.
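If that background behaviour gets in your way (as in *Example 2* above), the merge policy can be swapped out. A minimal sketch, assuming the `NoMergePolicy` type exposed by tantivy's `merge_policy` module:

```rust
extern crate tantivy;
use tantivy::merge_policy::NoMergePolicy;
use tantivy::{Index, IndexWriter};

fn writer_without_merges(index: &Index) -> tantivy::Result<IndexWriter> {
    let mut index_writer = index.writer(50_000_000)?;
    // Segments are now only created by commits; tantivy will never
    // compact them in the background.
    index_writer.set_merge_policy(Box::new(NoMergePolicy::default()));
    Ok(index_writer)
}
```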
## Indexing throughput, number of indexing threads
[^1]: This may eventually change.
[^2]: Be careful however. By default these files will not be considered as *managed* by tantivy. This means they will never be garbage collected by tantivy, regardless of whether they become obsolete or not.

View File

3
doc/src/examples.md Normal file
View File

@@ -0,0 +1,3 @@
# Examples
- [Basic search](/examples/basic_search.html)

5
doc/src/facetting.md Normal file
View File

@@ -0,0 +1,5 @@
# Facetting
wewew
## weeewe

0
doc/src/faq.md Normal file
View File

1
doc/src/innerworkings.md Normal file
View File

@@ -0,0 +1 @@
# Innerworkings

View File

@@ -0,0 +1 @@
# Inverted index

1
doc/src/schema.md Normal file
View File

@@ -0,0 +1 @@
# Defining your schema

234
examples/basic_search.rs Normal file
View File

@@ -0,0 +1,234 @@
// # Basic Example
//
// This example covers the basic functionalities of
// tantivy.
//
// We will:
// - define our schema
// - create an index in a directory
// - index a few documents in our index
// - search for the best documents matching "sea whale"
// - retrieve the original content of the best documents.
extern crate tempdir;
// ---
// Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::*;
use tantivy::Index;
use tempdir::TempDir;
fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the
// sake of this example
let index_path = TempDir::new("tantivy_example_dir")?;
// # Defining the schema
//
// The Tantivy index requires a very strict schema.
// The schema declares which fields are in the index,
// and for each field, its type and "the way it should
// be indexed".
// first we need to define a schema ...
let mut schema_builder = Schema::builder();
// Our first field is title.
// We want full-text search for it, and we also want
// to be able to retrieve the document after the search.
//
// `TEXT | STORED` is some syntactic sugar to describe
// that.
//
// `TEXT` means the field should be tokenized and indexed,
// along with its term frequency and term positions.
//
// `STORED` means that the field will also be saved
// in a compressed, row-oriented key-value store.
// This store is useful to reconstruct the
// documents that were selected during the search phase.
schema_builder.add_text_field("title", TEXT | STORED);
// Our second field is body.
// We want full-text search for it, but we do not
// need to be able to retrieve it
// for our application.
//
// We can make our index lighter
// by omitting the `STORED` flag.
schema_builder.add_text_field("body", TEXT);
let schema = schema_builder.build();
// # Indexing documents
//
// Let's create a brand new index.
//
// This will actually just save a meta.json
// with our schema in the directory.
let index = Index::create_in_dir(&index_path, schema.clone())?;
// To insert documents we need an index writer.
// There must be only one writer at a time.
// This single `IndexWriter` is already
// multithreaded.
//
// Here we give tantivy a budget of `50MB`.
// Using a bigger heap for the indexer may increase
// throughput, but 50 MB is already plenty.
let mut index_writer = index.writer(50_000_000)?;
// Let's index our documents!
// We first need a handle on the title and the body field.
// ### Adding documents
//
// We can create a document manually, by setting the fields
// one by one in a Document object.
let title = schema.get_field("title").unwrap();
let body = schema.get_field("body").unwrap();
let mut old_man_doc = Document::default();
old_man_doc.add_text(title, "The Old Man and the Sea");
old_man_doc.add_text(
body,
"He was an old man who fished alone in a skiff in the Gulf Stream and \
he had gone eighty-four days now without taking a fish.",
);
// ... and add it to the `IndexWriter`.
index_writer.add_document(old_man_doc);
// For convenience, tantivy also comes with a macro to
// reduce the boilerplate above.
index_writer.add_document(doc!(
title => "Of Mice and Men",
body => "A few miles south of Soledad, the Salinas River drops in close to the hillside \
bank and runs deep and green. The water is warm too, for it has slipped twinkling \
over the yellow sands in the sunlight before reaching the narrow pool. On one \
side of the river the golden foothill slopes curve up to the strong and rocky \
Gabilan Mountains, but on the valley side the water is lined with trees—willows \
fresh and green with every spring, carrying in their lower leaf junctures the \
debris of the winters flooding; and sycamores with mottled, white, recumbent \
limbs and branches that arch over the pool"
));
// Multivalued fields just need to be repeated.
index_writer.add_document(doc!(
title => "Frankenstein",
title => "The Modern Prometheus",
body => "You will rejoice to hear that no disaster has accompanied the commencement of an \
enterprise which you have regarded with such evil forebodings. I arrived here \
yesterday, and my first task is to assure my dear sister of my welfare and \
increasing confidence in the success of my undertaking."
));
// This is an example, so we will only index 3 documents
// here. You can check out tantivy's tutorial to index
// the English wikipedia. Tantivy's indexing is rather fast.
// Indexing 5 million articles of the English wikipedia takes
// around 3 minutes on my computer!
// ### Committing
//
// At this point our documents are not searchable.
//
//
// We need to call .commit() explicitly to force the
// index_writer to finish processing the documents in the queue,
// flush the current index to the disk, and advertise
// the existence of new documents.
//
// This call is blocking.
index_writer.commit()?;
// If `.commit()` returns correctly, then all of the
// documents that have been added are guaranteed to be
// persistently indexed.
//
// In the scenario of a crash or a power failure,
// tantivy behaves as if it had rolled back to its last
// commit.
// # Searching
//
// ### Searcher
//
// Let's search our index. Start by reloading
// searchers in the index. This should be done
// after every `commit()`.
index.load_searchers()?;
// We now need to acquire a searcher.
// Some search experiences might require more than
// one query.
//
// The searcher ensures that we get to work
// with a consistent version of the index.
//
// Acquiring a `searcher` is very cheap.
//
// You should acquire a searcher every time you
// start processing a request,
// and release it right after your query is finished.
let searcher = index.searcher();
// ### Query
// The query parser can interpret human queries.
// Here, if the user does not specify which
// field they want to search, tantivy will search
// in both title and body.
let query_parser = QueryParser::for_index(&index, vec![title, body]);
// QueryParser may fail if the query is not in the right
// format. For user facing applications, this can be a problem.
// A ticket has been opened regarding this problem.
let query = query_parser.parse_query("sea whale")?;
// A query defines a set of documents, as
// well as the way they should be scored.
//
// A query created by the query parser is scored according
// to a metric called Tf-Idf, and will consider
// any document matching at least one of our terms.
// ### Collectors
//
// We are not interested in all of the documents but
// only in the top 10. Keeping track of our top 10 best documents
// is the role of the TopDocs.
// We can now perform our query.
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;
// The actual documents still need to be
// retrieved from Tantivy's store.
//
// Since the body field was not configured as stored,
// the document returned will only contain
// a title.
for (_score, doc_address) in top_docs {
let retrieved_doc = searcher.doc(doc_address)?;
println!("{}", schema.to_json(&retrieved_doc));
}
Ok(())
}

View File

@@ -0,0 +1,187 @@
// # Custom collector example
//
// This example shows how you can implement your own
// collector. As an example, we will compute a collector
// that computes the standard deviation of a given fast field.
//
// Of course, you can have a look at the tantivy's built-in collectors
// such as the `CountCollector` for more examples.
extern crate tempdir;
// ---
// Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::{Collector, SegmentCollector};
use tantivy::fastfield::FastFieldReader;
use tantivy::query::QueryParser;
use tantivy::schema::Field;
use tantivy::schema::{Schema, FAST, INT_INDEXED, TEXT};
use tantivy::Index;
use tantivy::SegmentReader;
#[derive(Default)]
struct Stats {
count: usize,
sum: f64,
squared_sum: f64,
}
impl Stats {
pub fn count(&self) -> usize {
self.count
}
pub fn mean(&self) -> f64 {
self.sum / (self.count as f64)
}
fn square_mean(&self) -> f64 {
self.squared_sum / (self.count as f64)
}
pub fn standard_deviation(&self) -> f64 {
let mean = self.mean();
(self.square_mean() - mean * mean).sqrt()
}
fn non_zero_count(self) -> Option<Stats> {
if self.count == 0 {
None
} else {
Some(self)
}
}
}
struct StatsCollector {
field: Field,
}
impl StatsCollector {
fn with_field(field: Field) -> StatsCollector {
StatsCollector { field }
}
}
impl Collector for StatsCollector {
// `Fruit` is the type of our result:
// the statistics gathered over the collected documents, if any.
type Fruit = Option<Stats>;
type Child = StatsSegmentCollector;
fn for_segment(
&self,
_segment_local_id: u32,
segment: &SegmentReader,
) -> tantivy::Result<StatsSegmentCollector> {
let fast_field_reader = segment.fast_field_reader(self.field)?;
Ok(StatsSegmentCollector {
fast_field_reader,
stats: Stats::default(),
})
}
fn requires_scoring(&self) -> bool {
// this collector does not care about score.
false
}
fn merge_fruits(&self, segment_stats: Vec<Option<Stats>>) -> tantivy::Result<Option<Stats>> {
let mut stats = Stats::default();
for segment_stats_opt in segment_stats {
if let Some(segment_stats) = segment_stats_opt {
stats.count += segment_stats.count;
stats.sum += segment_stats.sum;
stats.squared_sum += segment_stats.squared_sum;
}
}
Ok(stats.non_zero_count())
}
}
struct StatsSegmentCollector {
fast_field_reader: FastFieldReader<u64>,
stats: Stats,
}
impl SegmentCollector for StatsSegmentCollector {
type Fruit = Option<Stats>;
fn collect(&mut self, doc: u32, _score: f32) {
let value = self.fast_field_reader.get(doc) as f64;
self.stats.count += 1;
self.stats.sum += value;
self.stats.squared_sum += value * value;
}
fn harvest(self) -> <Self as SegmentCollector>::Fruit {
self.stats.non_zero_count()
}
}
fn main() -> tantivy::Result<()> {
// # Defining the schema
//
// The Tantivy index requires a very strict schema.
// The schema declares which fields are in the index,
// and for each field, its type and "the way it should
// be indexed".
// first we need to define a schema ...
let mut schema_builder = Schema::builder();
// We'll assume a fictional index containing
// products, and with a name, a description, and a price.
let product_name = schema_builder.add_text_field("name", TEXT);
let product_description = schema_builder.add_text_field("description", TEXT);
let price = schema_builder.add_u64_field("price", INT_INDEXED | FAST);
let schema = schema_builder.build();
// # Indexing documents
//
// Lets index a bunch of fake documents for the sake of
// this example.
let index = Index::create_in_ram(schema.clone());
let mut index_writer = index.writer(50_000_000)?;
index_writer.add_document(doc!(
product_name => "Super Broom 2000",
product_description => "While it is ok for short distance travel, this broom \
was designed quiditch. It will up your game.",
price => 30_200u64
));
index_writer.add_document(doc!(
product_name => "Turbulobroom",
product_description => "You might have heard of this broom before : it is the sponsor of the Wales team.\
You'll enjoy its sharp turns, and rapid acceleration",
price => 29_240u64
));
index_writer.add_document(doc!(
product_name => "Broomio",
product_description => "Great value for the price. This broom is a market favorite",
price => 21_240u64
));
index_writer.add_document(doc!(
product_name => "Whack a Mole",
product_description => "Prime quality bat.",
price => 5_200u64
));
index_writer.commit()?;
index.load_searchers()?;
let searcher = index.searcher();
let query_parser = QueryParser::for_index(&index, vec![product_name, product_description]);
// let's compute the stats over all the products matching "broom"
let query = query_parser.parse_query("broom")?;
if let Some(stats) = searcher.search(&query, &StatsCollector::with_field(price))? {
println!("count: {}", stats.count());
println!("mean: {}", stats.mean());
println!("standard deviation: {}", stats.standard_deviation());
}
Ok(())
}

View File

@@ -0,0 +1,115 @@
// # Defining a tokenizer pipeline
//
// In this example, we'll see how to define a tokenizer pipeline
// by aligning a bunch of `TokenFilter`.
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::*;
use tantivy::tokenizer::NgramTokenizer;
use tantivy::Index;
fn main() -> tantivy::Result<()> {
// # Defining the schema
//
// The Tantivy index requires a very strict schema.
// The schema declares which fields are in the index,
// and for each field, its type and "the way it should
// be indexed".
// first we need to define a schema ...
let mut schema_builder = Schema::builder();
// Our first field is title.
// In this example we want to use NGram searching:
// we will set it to 3 characters, so any three-character
// substring of the title should be findable.
let text_field_indexing = TextFieldIndexing::default()
.set_tokenizer("ngram3")
.set_index_option(IndexRecordOption::WithFreqsAndPositions);
let text_options = TextOptions::default()
.set_indexing_options(text_field_indexing)
.set_stored();
let title = schema_builder.add_text_field("title", text_options);
// Our second field is body.
// We want full-text search for it, but we do not
// need to be able to retrieve it
// for our application.
//
// We can make our index lighter
// by omitting the `STORED` flag.
let body = schema_builder.add_text_field("body", TEXT);
let schema = schema_builder.build();
// # Indexing documents
//
// Let's create a brand new index.
// To simplify we will work entirely in RAM.
// This is not what you want in reality, but it is very useful
// for your unit tests... Or this example.
let index = Index::create_in_ram(schema.clone());
// here we are registering our custom tokenizer
// this will store tokens of 3 characters each
index
.tokenizers()
.register("ngram3", NgramTokenizer::new(3, 3, false));
// To insert documents we need an index writer.
// There must be only one writer at a time.
// This single `IndexWriter` is already
// multithreaded.
//
// Here we use a buffer of 50MB per thread. Using a bigger
// heap for the indexer can increase its throughput.
let mut index_writer = index.writer(50_000_000)?;
index_writer.add_document(doc!(
title => "The Old Man and the Sea",
body => "He was an old man who fished alone in a skiff in the Gulf Stream and \
he had gone eighty-four days now without taking a fish."
));
index_writer.add_document(doc!(
title => "Of Mice and Men",
body => r#"A few miles south of Soledad, the Salinas River drops in close to the hillside
bank and runs deep and green. The water is warm too, for it has slipped twinkling
over the yellow sands in the sunlight before reaching the narrow pool. On one
side of the river the golden foothill slopes curve up to the strong and rocky
Gabilan Mountains, but on the valley side the water is lined with trees—willows
fresh and green with every spring, carrying in their lower leaf junctures the
debris of the winters flooding; and sycamores with mottled, white, recumbent
limbs and branches that arch over the pool"#
));
index_writer.add_document(doc!(
title => "Frankenstein",
body => r#"You will rejoice to hear that no disaster has accompanied the commencement of an
enterprise which you have regarded with such evil forebodings. I arrived here
yesterday, and my first task is to assure my dear sister of my welfare and
increasing confidence in the success of my undertaking."#
));
index_writer.commit()?;
index.load_searchers()?;
let searcher = index.searcher();
// The query parser can interpret human queries.
// Here, if the user does not specify which
// field they want to search, tantivy will search
// in both title and body.
let query_parser = QueryParser::for_index(&index, vec![title, body]);
// here we want to get a hit on the 'ken' in Frankenstein
let query = query_parser.parse_query("ken")?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;
for (_, doc_address) in top_docs {
let retrieved_doc = searcher.doc(doc_address)?;
println!("{}", schema.to_json(&retrieved_doc));
}
Ok(())
}

View File

@@ -0,0 +1,142 @@
// # Deleting and Updating (?) documents
//
// This example explains how to delete and update documents.
// In fact there is actually no such thing as an update in tantivy.
//
// To update a document, you need to delete a document and then reinsert
// its new version.
//
// ---
// Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs;
use tantivy::query::TermQuery;
use tantivy::schema::*;
use tantivy::Index;
// A simple helper function to fetch a single document
// given its id from our index.
// It will be helpful to check our work.
fn extract_doc_given_isbn(index: &Index, isbn_term: &Term) -> tantivy::Result<Option<Document>> {
let searcher = index.searcher();
// This is the simplest query you can think of.
// It matches all of the documents containing a specific term.
//
// The second argument is here to tell we don't care about decoding positions,
// or term frequencies.
let term_query = TermQuery::new(isbn_term.clone(), IndexRecordOption::Basic);
let top_docs = searcher.search(&term_query, &TopDocs::with_limit(1))?;
if let Some((_score, doc_address)) = top_docs.first() {
let doc = searcher.doc(*doc_address)?;
Ok(Some(doc))
} else {
// no doc matching this ID.
Ok(None)
}
}
fn main() -> tantivy::Result<()> {
// # Defining the schema
//
// Check out the *basic_search* example if this makes
// little sense to you.
let mut schema_builder = Schema::builder();
// Tantivy does not really have a notion of primary id.
// This may change in the future.
//
// Still, we can create an `isbn` field and use it as an id. This
// field can be `u64` or a `text`, depending on your use case.
// It just needs to be indexed.
//
// If it is `text`, let's make sure to keep it `raw` and let's avoid
// running any text processing on it.
// This is done by associating this field to the tokenizer named `raw`.
// Rather than building our [`TextOptions`](//docs.rs/tantivy/~0/tantivy/schema/struct.TextOptions.html) manually,
// We use the `STRING` shortcut. `STRING` stands for indexed (without term frequency or positions)
// and untokenized.
//
// Because we also want to be able to see this `id` in our returned documents,
// we also mark the field as stored.
let isbn = schema_builder.add_text_field("isbn", STRING | STORED);
let title = schema_builder.add_text_field("title", TEXT | STORED);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema.clone());
let mut index_writer = index.writer(50_000_000)?;
// Let's add a couple of documents, for the sake of the example.
let mut old_man_doc = Document::default();
old_man_doc.add_text(title, "The Old Man and the Sea");
index_writer.add_document(doc!(
isbn => "978-0099908401",
title => "The old Man and the see"
));
index_writer.add_document(doc!(
isbn => "978-0140177398",
title => "Of Mice and Men",
));
index_writer.add_document(doc!(
title => "Frankentein", //< Oops there is a typo here.
isbn => "978-9176370711",
));
index_writer.commit()?;
index.load_searchers()?;
let frankenstein_isbn = Term::from_field_text(isbn, "978-9176370711");
// Oops, our Frankenstein doc seems misspelled.
let frankenstein_doc_misspelled = extract_doc_given_isbn(&index, &frankenstein_isbn)?.unwrap();
assert_eq!(
schema.to_json(&frankenstein_doc_misspelled),
r#"{"isbn":["978-9176370711"],"title":["Frankentein"]}"#,
);
// # Update = Delete + Insert
//
// Here we will want to update the typo in the `Frankenstein` book.
//
// Tantivy does not handle updates directly, we need to delete
// and reinsert the document.
//
// This can be complicated as it means you need to have access
// to the entire document. It is good practice to integrate tantivy
// with a key-value store for this reason.
//
// To remove one of the document, we just call `delete_term`
// on its id.
//
// Note that `tantivy` does nothing to enforce the idea that
// there is only one document associated with this id.
//
// Also, you might have noticed that we apply the delete before
// having committed. This does not really matter...
index_writer.delete_term(frankenstein_isbn.clone());
// We now need to reinsert our document without the typo.
index_writer.add_document(doc!(
title => "Frankenstein",
isbn => "978-9176370711",
));
// You are guaranteed that your clients will only observe your index in
// the state it was in after a commit.
// In this example, your search engine will at no point be missing the *Frankenstein* document.
// Everything happened as if the document was updated.
index_writer.commit()?;
// We reload our searcher to make our change available to clients.
index.load_searchers()?;
// No more typo!
let frankenstein_new_doc = extract_doc_given_isbn(&index, &frankenstein_isbn)?.unwrap();
assert_eq!(
schema.to_json(&frankenstein_new_doc),
r#"{"isbn":["978-9176370711"],"title":["Frankenstein"]}"#,
);
Ok(())
}

View File

@@ -0,0 +1,80 @@
// # Facet Example
//
// This example covers the facet functionalities of
// tantivy.
//
// We will:
// - define our schema, including a facet field
// - create an index in a directory
// - index a few documents with facets in our index
// - count the documents under each facet with a `FacetCollector`.
extern crate tempdir;
// ---
// Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::FacetCollector;
use tantivy::query::AllQuery;
use tantivy::schema::*;
use tantivy::Index;
use tempdir::TempDir;
fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the
// sake of this example
let index_path = TempDir::new("tantivy_facet_example_dir")?;
let mut schema_builder = Schema::builder();
schema_builder.add_text_field("name", TEXT | STORED);
// this is our faceted field
schema_builder.add_facet_field("tags");
let schema = schema_builder.build();
let index = Index::create_in_dir(&index_path, schema.clone())?;
let mut index_writer = index.writer(50_000_000)?;
let name = schema.get_field("name").unwrap();
let tags = schema.get_field("tags").unwrap();
// The `doc!` macro lets us add documents
// with very little boilerplate.
index_writer.add_document(doc!(
name => "the ditch",
tags => Facet::from("/pools/north")
));
index_writer.add_document(doc!(
name => "little stacey",
tags => Facet::from("/pools/south")
));
index_writer.commit()?;
index.load_searchers()?;
let searcher = index.searcher();
let mut facet_collector = FacetCollector::for_field(tags);
facet_collector.add_facet("/pools");
let facet_counts = searcher.search(&AllQuery, &facet_collector).unwrap();
// This lists all of the facet counts
let facets: Vec<(&Facet, u64)> = facet_counts.get("/pools").collect();
assert_eq!(
facets,
vec![
(&Facet::from("/pools/north"), 1),
(&Facet::from("/pools/south"), 1),
]
);
Ok(())
}

View File

@@ -1,2 +0,0 @@
#!/bin/bash
docco simple_search.rs -o html

View File

@@ -1,518 +0,0 @@
/*--------------------- Typography ----------------------------*/
@font-face {
font-family: 'aller-light';
src: url('public/fonts/aller-light.eot');
src: url('public/fonts/aller-light.eot?#iefix') format('embedded-opentype'),
url('public/fonts/aller-light.woff') format('woff'),
url('public/fonts/aller-light.ttf') format('truetype');
font-weight: normal;
font-style: normal;
}
@font-face {
font-family: 'aller-bold';
src: url('public/fonts/aller-bold.eot');
src: url('public/fonts/aller-bold.eot?#iefix') format('embedded-opentype'),
url('public/fonts/aller-bold.woff') format('woff'),
url('public/fonts/aller-bold.ttf') format('truetype');
font-weight: normal;
font-style: normal;
}
@font-face {
font-family: 'roboto-black';
src: url('public/fonts/roboto-black.eot');
src: url('public/fonts/roboto-black.eot?#iefix') format('embedded-opentype'),
url('public/fonts/roboto-black.woff') format('woff'),
url('public/fonts/roboto-black.ttf') format('truetype');
font-weight: normal;
font-style: normal;
}
/*--------------------- Layout ----------------------------*/
html { height: 100%; }
body {
font-family: "aller-light";
font-size: 14px;
line-height: 18px;
color: #30404f;
margin: 0; padding: 0;
height:100%;
}
#container { min-height: 100%; }
a {
color: #000;
}
b, strong {
font-weight: normal;
font-family: "aller-bold";
}
p {
margin: 15px 0 0px;
}
.annotation ul, .annotation ol {
margin: 25px 0;
}
.annotation ul li, .annotation ol li {
font-size: 14px;
line-height: 18px;
margin: 10px 0;
}
h1, h2, h3, h4, h5, h6 {
color: #112233;
line-height: 1em;
font-weight: normal;
font-family: "roboto-black";
text-transform: uppercase;
margin: 30px 0 15px 0;
}
h1 {
margin-top: 40px;
}
h2 {
font-size: 1.26em;
}
hr {
border: 0;
background: 1px #ddd;
height: 1px;
margin: 20px 0;
}
pre, tt, code {
font-size: 12px; line-height: 16px;
font-family: Menlo, Monaco, Consolas, "Lucida Console", monospace;
margin: 0; padding: 0;
}
.annotation pre {
display: block;
margin: 0;
padding: 7px 10px;
background: #fcfcfc;
-moz-box-shadow: inset 0 0 10px rgba(0,0,0,0.1);
-webkit-box-shadow: inset 0 0 10px rgba(0,0,0,0.1);
box-shadow: inset 0 0 10px rgba(0,0,0,0.1);
overflow-x: auto;
}
.annotation pre code {
border: 0;
padding: 0;
background: transparent;
}
blockquote {
border-left: 5px solid #ccc;
margin: 0;
padding: 1px 0 1px 1em;
}
.sections blockquote p {
font-family: Menlo, Consolas, Monaco, monospace;
font-size: 12px; line-height: 16px;
color: #999;
margin: 10px 0 0;
white-space: pre-wrap;
}
ul.sections {
list-style: none;
padding:0 0 5px 0;;
margin:0;
}
/*
Force border-box so that % widths fit the parent
container without overlap because of margin/padding.
More Info : http://www.quirksmode.org/css/box.html
*/
ul.sections > li > div {
-moz-box-sizing: border-box; /* firefox */
-ms-box-sizing: border-box; /* ie */
-webkit-box-sizing: border-box; /* webkit */
-khtml-box-sizing: border-box; /* konqueror */
box-sizing: border-box; /* css3 */
}
/*---------------------- Jump Page -----------------------------*/
#jump_to, #jump_page {
margin: 0;
background: white;
-webkit-box-shadow: 0 0 25px #777; -moz-box-shadow: 0 0 25px #777;
-webkit-border-bottom-left-radius: 5px; -moz-border-radius-bottomleft: 5px;
font: 16px Arial;
cursor: pointer;
text-align: right;
list-style: none;
}
#jump_to a {
text-decoration: none;
}
#jump_to a.large {
display: none;
}
#jump_to a.small {
font-size: 22px;
font-weight: bold;
color: #676767;
}
#jump_to, #jump_wrapper {
position: fixed;
right: 0; top: 0;
padding: 10px 15px;
margin:0;
}
#jump_wrapper {
display: none;
padding:0;
}
#jump_to:hover #jump_wrapper {
display: block;
}
#jump_page_wrapper{
position: fixed;
right: 0;
top: 0;
bottom: 0;
}
#jump_page {
padding: 5px 0 3px;
margin: 0 0 25px 25px;
max-height: 100%;
overflow: auto;
}
#jump_page .source {
display: block;
padding: 15px;
text-decoration: none;
border-top: 1px solid #eee;
}
#jump_page .source:hover {
background: #f5f5ff;
}
#jump_page .source:first-child {
}
/*---------------------- Low resolutions (> 320px) ---------------------*/
@media only screen and (min-width: 320px) {
.pilwrap { display: none; }
ul.sections > li > div {
display: block;
padding:5px 10px 0 10px;
}
ul.sections > li > div.annotation ul, ul.sections > li > div.annotation ol {
padding-left: 30px;
}
ul.sections > li > div.content {
overflow-x:auto;
-webkit-box-shadow: inset 0 0 5px #e5e5ee;
box-shadow: inset 0 0 5px #e5e5ee;
border: 1px solid #dedede;
margin:5px 10px 5px 10px;
padding-bottom: 5px;
}
ul.sections > li > div.annotation pre {
margin: 7px 0 7px;
padding-left: 15px;
}
ul.sections > li > div.annotation p tt, .annotation code {
background: #f8f8ff;
border: 1px solid #dedede;
font-size: 12px;
padding: 0 0.2em;
}
}
/*---------------------- (> 481px) ---------------------*/
@media only screen and (min-width: 481px) {
#container {
position: relative;
}
body {
background-color: #F5F5FF;
font-size: 15px;
line-height: 21px;
}
pre, tt, code {
line-height: 18px;
}
p, ul, ol {
margin: 0 0 15px;
}
#jump_to {
padding: 5px 10px;
}
#jump_wrapper {
padding: 0;
}
#jump_to, #jump_page {
font: 10px Arial;
text-transform: uppercase;
}
#jump_page .source {
padding: 5px 10px;
}
#jump_to a.large {
display: inline-block;
}
#jump_to a.small {
display: none;
}
#background {
position: absolute;
top: 0; bottom: 0;
width: 350px;
background: #fff;
border-right: 1px solid #e5e5ee;
z-index: -1;
}
ul.sections > li > div.annotation ul, ul.sections > li > div.annotation ol {
padding-left: 40px;
}
ul.sections > li {
white-space: nowrap;
}
ul.sections > li > div {
display: inline-block;
}
ul.sections > li > div.annotation {
max-width: 350px;
min-width: 350px;
min-height: 5px;
padding: 13px;
overflow-x: hidden;
white-space: normal;
vertical-align: top;
text-align: left;
}
ul.sections > li > div.annotation pre {
margin: 15px 0 15px;
padding-left: 15px;
}
ul.sections > li > div.content {
padding: 13px;
vertical-align: top;
border: none;
-webkit-box-shadow: none;
box-shadow: none;
}
.pilwrap {
position: relative;
display: inline;
}
.pilcrow {
font: 12px Arial;
text-decoration: none;
color: #454545;
position: absolute;
top: 3px; left: -20px;
padding: 1px 2px;
opacity: 0;
-webkit-transition: opacity 0.2s linear;
}
.for-h1 .pilcrow {
top: 47px;
}
.for-h2 .pilcrow, .for-h3 .pilcrow, .for-h4 .pilcrow {
top: 35px;
}
ul.sections > li > div.annotation:hover .pilcrow {
opacity: 1;
}
}
/*---------------------- (> 1025px) ---------------------*/
@media only screen and (min-width: 1025px) {
body {
font-size: 16px;
line-height: 24px;
}
#background {
width: 525px;
}
ul.sections > li > div.annotation {
max-width: 525px;
min-width: 525px;
padding: 10px 25px 1px 50px;
}
ul.sections > li > div.content {
padding: 9px 15px 16px 25px;
}
}
/*---------------------- Syntax Highlighting -----------------------------*/
td.linenos { background-color: #f0f0f0; padding-right: 10px; }
span.lineno { background-color: #f0f0f0; padding: 0 5px 0 5px; }
/*
github.com style (c) Vasily Polovnyov <vast@whiteants.net>
*/
pre code {
display: block; padding: 0.5em;
color: #000;
background: #f8f8ff
}
pre .hljs-comment,
pre .hljs-template_comment,
pre .hljs-diff .hljs-header,
pre .hljs-javadoc {
color: #408080;
font-style: italic
}
pre .hljs-keyword,
pre .hljs-assignment,
pre .hljs-literal,
pre .hljs-css .hljs-rule .hljs-keyword,
pre .hljs-winutils,
pre .hljs-javascript .hljs-title,
pre .hljs-lisp .hljs-title,
pre .hljs-subst {
color: #954121;
/*font-weight: bold*/
}
pre .hljs-number,
pre .hljs-hexcolor {
color: #40a070
}
pre .hljs-string,
pre .hljs-tag .hljs-value,
pre .hljs-phpdoc,
pre .hljs-tex .hljs-formula {
color: #219161;
}
pre .hljs-title,
pre .hljs-id {
color: #19469D;
}
pre .hljs-params {
color: #00F;
}
pre .hljs-javascript .hljs-title,
pre .hljs-lisp .hljs-title,
pre .hljs-subst {
font-weight: normal
}
pre .hljs-class .hljs-title,
pre .hljs-haskell .hljs-label,
pre .hljs-tex .hljs-command {
color: #458;
font-weight: bold
}
pre .hljs-tag,
pre .hljs-tag .hljs-title,
pre .hljs-rules .hljs-property,
pre .hljs-django .hljs-tag .hljs-keyword {
color: #000080;
font-weight: normal
}
pre .hljs-attribute,
pre .hljs-variable,
pre .hljs-instancevar,
pre .hljs-lisp .hljs-body {
color: #008080
}
pre .hljs-regexp {
color: #B68
}
pre .hljs-class {
color: #458;
font-weight: bold
}
pre .hljs-symbol,
pre .hljs-ruby .hljs-symbol .hljs-string,
pre .hljs-ruby .hljs-symbol .hljs-keyword,
pre .hljs-ruby .hljs-symbol .hljs-keymethods,
pre .hljs-lisp .hljs-keyword,
pre .hljs-tex .hljs-special,
pre .hljs-input_number {
color: #990073
}
pre .hljs-builtin,
pre .hljs-constructor,
pre .hljs-built_in,
pre .hljs-lisp .hljs-title {
color: #0086b3
}
pre .hljs-preprocessor,
pre .hljs-pi,
pre .hljs-doctype,
pre .hljs-shebang,
pre .hljs-cdata {
color: #999;
font-weight: bold
}
pre .hljs-deletion {
background: #fdd
}
pre .hljs-addition {
background: #dfd
}
pre .hljs-diff .hljs-change {
background: #0086b3
}
pre .hljs-chunk {
color: #aaa
}
pre .hljs-tex .hljs-formula {
opacity: 0.5;
}

Binary file not shown.


View File

@@ -1,375 +0,0 @@
/*! normalize.css v2.0.1 | MIT License | git.io/normalize */
/* ==========================================================================
HTML5 display definitions
========================================================================== */
/*
* Corrects `block` display not defined in IE 8/9.
*/
article,
aside,
details,
figcaption,
figure,
footer,
header,
hgroup,
nav,
section,
summary {
display: block;
}
/*
* Corrects `inline-block` display not defined in IE 8/9.
*/
audio,
canvas,
video {
display: inline-block;
}
/*
* Prevents modern browsers from displaying `audio` without controls.
* Remove excess height in iOS 5 devices.
*/
audio:not([controls]) {
display: none;
height: 0;
}
/*
* Addresses styling for `hidden` attribute not present in IE 8/9.
*/
[hidden] {
display: none;
}
/* ==========================================================================
Base
========================================================================== */
/*
* 1. Sets default font family to sans-serif.
* 2. Prevents iOS text size adjust after orientation change, without disabling
* user zoom.
*/
html {
font-family: sans-serif; /* 1 */
-webkit-text-size-adjust: 100%; /* 2 */
-ms-text-size-adjust: 100%; /* 2 */
}
/*
* Removes default margin.
*/
body {
margin: 0;
}
/* ==========================================================================
Links
========================================================================== */
/*
* Addresses `outline` inconsistency between Chrome and other browsers.
*/
a:focus {
outline: thin dotted;
}
/*
* Improves readability when focused and also mouse hovered in all browsers.
*/
a:active,
a:hover {
outline: 0;
}
/* ==========================================================================
Typography
========================================================================== */
/*
* Addresses `h1` font sizes within `section` and `article` in Firefox 4+,
* Safari 5, and Chrome.
*/
h1 {
font-size: 2em;
}
/*
* Addresses styling not present in IE 8/9, Safari 5, and Chrome.
*/
abbr[title] {
border-bottom: 1px dotted;
}
/*
* Addresses style set to `bolder` in Firefox 4+, Safari 5, and Chrome.
*/
b,
strong {
font-weight: bold;
}
/*
* Addresses styling not present in Safari 5 and Chrome.
*/
dfn {
font-style: italic;
}
/*
* Addresses styling not present in IE 8/9.
*/
mark {
background: #ff0;
color: #000;
}
/*
* Corrects font family set oddly in Safari 5 and Chrome.
*/
code,
kbd,
pre,
samp {
font-family: monospace, serif;
font-size: 1em;
}
/*
* Improves readability of pre-formatted text in all browsers.
*/
pre {
white-space: pre;
white-space: pre-wrap;
word-wrap: break-word;
}
/*
* Sets consistent quote types.
*/
q {
quotes: "\201C" "\201D" "\2018" "\2019";
}
/*
* Addresses inconsistent and variable font size in all browsers.
*/
small {
font-size: 80%;
}
/*
* Prevents `sub` and `sup` affecting `line-height` in all browsers.
*/
sub,
sup {
font-size: 75%;
line-height: 0;
position: relative;
vertical-align: baseline;
}
sup {
top: -0.5em;
}
sub {
bottom: -0.25em;
}
/* ==========================================================================
Embedded content
========================================================================== */
/*
* Removes border when inside `a` element in IE 8/9.
*/
img {
border: 0;
}
/*
* Corrects overflow displayed oddly in IE 9.
*/
svg:not(:root) {
overflow: hidden;
}
/* ==========================================================================
Figures
========================================================================== */
/*
* Addresses margin not present in IE 8/9 and Safari 5.
*/
figure {
margin: 0;
}
/* ==========================================================================
Forms
========================================================================== */
/*
* Define consistent border, margin, and padding.
*/
fieldset {
border: 1px solid #c0c0c0;
margin: 0 2px;
padding: 0.35em 0.625em 0.75em;
}
/*
* 1. Corrects color not being inherited in IE 8/9.
* 2. Remove padding so people aren't caught out if they zero out fieldsets.
*/
legend {
border: 0; /* 1 */
padding: 0; /* 2 */
}
/*
* 1. Corrects font family not being inherited in all browsers.
* 2. Corrects font size not being inherited in all browsers.
* 3. Addresses margins set differently in Firefox 4+, Safari 5, and Chrome
*/
button,
input,
select,
textarea {
font-family: inherit; /* 1 */
font-size: 100%; /* 2 */
margin: 0; /* 3 */
}
/*
* Addresses Firefox 4+ setting `line-height` on `input` using `!important` in
* the UA stylesheet.
*/
button,
input {
line-height: normal;
}
/*
* 1. Avoid the WebKit bug in Android 4.0.* where (2) destroys native `audio`
* and `video` controls.
* 2. Corrects inability to style clickable `input` types in iOS.
* 3. Improves usability and consistency of cursor style between image-type
* `input` and others.
*/
button,
html input[type="button"], /* 1 */
input[type="reset"],
input[type="submit"] {
-webkit-appearance: button; /* 2 */
cursor: pointer; /* 3 */
}
/*
* Re-set default cursor for disabled elements.
*/
button[disabled],
input[disabled] {
cursor: default;
}
/*
* 1. Addresses box sizing set to `content-box` in IE 8/9.
* 2. Removes excess padding in IE 8/9.
*/
input[type="checkbox"],
input[type="radio"] {
box-sizing: border-box; /* 1 */
padding: 0; /* 2 */
}
/*
* 1. Addresses `appearance` set to `searchfield` in Safari 5 and Chrome.
* 2. Addresses `box-sizing` set to `border-box` in Safari 5 and Chrome
* (include `-moz` to future-proof).
*/
input[type="search"] {
-webkit-appearance: textfield; /* 1 */
-moz-box-sizing: content-box;
-webkit-box-sizing: content-box; /* 2 */
box-sizing: content-box;
}
/*
* Removes inner padding and search cancel button in Safari 5 and Chrome
* on OS X.
*/
input[type="search"]::-webkit-search-cancel-button,
input[type="search"]::-webkit-search-decoration {
-webkit-appearance: none;
}
/*
* Removes inner padding and border in Firefox 4+.
*/
button::-moz-focus-inner,
input::-moz-focus-inner {
border: 0;
padding: 0;
}
/*
* 1. Removes default vertical scrollbar in IE 8/9.
* 2. Improves readability and alignment in all browsers.
*/
textarea {
overflow: auto; /* 1 */
vertical-align: top; /* 2 */
}
/* ==========================================================================
Tables
========================================================================== */
/*
* Remove most spacing between table cells.
*/
table {
border-collapse: collapse;
border-spacing: 0;
}


@@ -1,503 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<title>simple_search.rs</title>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, target-densitydpi=160dpi, initial-scale=1.0; maximum-scale=1.0; user-scalable=0;">
<link rel="stylesheet" media="all" href="docco.css" />
</head>
<body>
<div id="container">
<div id="background"></div>
<ul class="sections">
<li id="title">
<div class="annotation">
<h1>simple_search.rs</h1>
</div>
</li>
<li id="section-1">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-1">&#182;</a>
</div>
</div>
<div class="content"><div class='highlight'><pre><span class="hljs-keyword">extern</span> <span class="hljs-keyword">crate</span> rustc_serialize;
<span class="hljs-keyword">extern</span> <span class="hljs-keyword">crate</span> tantivy;
<span class="hljs-keyword">extern</span> <span class="hljs-keyword">crate</span> tempdir;
<span class="hljs-keyword">use</span> std::path::Path;
<span class="hljs-keyword">use</span> tempdir::TempDir;
<span class="hljs-keyword">use</span> tantivy::Index;
<span class="hljs-keyword">use</span> tantivy::schema::*;
<span class="hljs-keyword">use</span> tantivy::collector::TopCollector;
<span class="hljs-keyword">use</span> tantivy::query::QueryParser;
<span class="hljs-function"><span class="hljs-keyword">fn</span> <span class="hljs-title">main</span></span>() {</pre></div></div>
</li>
<li id="section-2">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-2">&#182;</a>
</div>
<p>Lets create a temporary directory for the
sake of this example</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">if</span> <span class="hljs-keyword">let</span> <span class="hljs-literal">Ok</span>(dir) = TempDir::new(<span class="hljs-string">"tantivy_example_dir"</span>) {
run_example(dir.path()).unwrap();
dir.close().unwrap();
}
}
<span class="hljs-function"><span class="hljs-keyword">fn</span> <span class="hljs-title">run_example</span></span>(index_path: &amp;Path) -&gt; tantivy::<span class="hljs-built_in">Result</span>&lt;()&gt; {</pre></div></div>
</li>
<li id="section-3">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-3">&#182;</a>
</div>
<h1 id="defining-the-schema">Defining the schema</h1>
<p>The Tantivy index requires a very strict schema.
The schema declares which fields are in the index,
and for each field, its type and “the way it should
be indexed”.</p>
</div>
</li>
<li id="section-4">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-4">&#182;</a>
</div>
<p>first we need to define a schema …</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut</span> schema_builder = SchemaBuilder::<span class="hljs-keyword">default</span>();</pre></div></div>
</li>
<li id="section-5">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-5">&#182;</a>
</div>
<p>Our first field is title.
We want full-text search for it, and we want to be able
to retrieve the document after the search.</p>
<p>TEXT | STORED is some syntactic sugar to describe
that.</p>
<p><code>TEXT</code> means the field should be tokenized and indexed,
along with its term frequency and term positions.</p>
<p><code>STORED</code> means that the field will also be saved
in a compressed, row-oriented key-value store.
This store is useful to reconstruct the
documents that were selected during the search phase.</p>
</div>
<div class="content"><div class='highlight'><pre> schema_builder.add_text_field(<span class="hljs-string">"title"</span>, TEXT | STORED);</pre></div></div>
</li>
<li id="section-6">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-6">&#182;</a>
</div>
<p>Our first field is body.
We want full-text search for it, and we want to be able
to retrieve the body after the search.</p>
</div>
<div class="content"><div class='highlight'><pre> schema_builder.add_text_field(<span class="hljs-string">"body"</span>, TEXT);
<span class="hljs-keyword">let</span> schema = schema_builder.build();</pre></div></div>
</li>
<li id="section-7">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-7">&#182;</a>
</div>
<h1 id="indexing-documents">Indexing documents</h1>
<p>Lets create a brand new index.</p>
<p>This will actually just save a meta.json
with our schema in the directory.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> index = <span class="hljs-built_in">try!</span>(Index::create(index_path, schema.clone()));</pre></div></div>
</li>
<li id="section-8">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-8">&#182;</a>
</div>
<p>To insert document we need an index writer.
There must be only one writer at a time.
This single <code>IndexWriter</code> is already
multithreaded.</p>
<p>Here we use a buffer of 50MB per thread. Using a bigger
heap for the indexer can increase its throughput.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut</span> index_writer = <span class="hljs-built_in">try!</span>(index.writer(<span class="hljs-number">50_000_000</span>));</pre></div></div>
</li>
<li id="section-9">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-9">&#182;</a>
</div>
<p>Lets index our documents!
We first need a handle on the title and the body field.</p>
</div>
</li>
<li id="section-10">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-10">&#182;</a>
</div>
<h3 id="create-a-document-manually-">Create a document “manually”.</h3>
<p>We can create a document manually, by setting the fields
one by one in a Document object.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> title = schema.get_field(<span class="hljs-string">"title"</span>).unwrap();
<span class="hljs-keyword">let</span> body = schema.get_field(<span class="hljs-string">"body"</span>).unwrap();
<span class="hljs-keyword">let</span> <span class="hljs-keyword">mut</span> old_man_doc = Document::<span class="hljs-keyword">default</span>();
old_man_doc.add_text(title, <span class="hljs-string">"The Old Man and the Sea"</span>);
old_man_doc.add_text(body,
<span class="hljs-string">"He was an old man who fished alone in a skiff in the Gulf Stream and \
he had gone eighty-four days now without taking a fish."</span>);</pre></div></div>
</li>
<li id="section-11">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-11">&#182;</a>
</div>
<p>… and add it to the <code>IndexWriter</code>.</p>
</div>
<div class="content"><div class='highlight'><pre> index_writer.add_document(old_man_doc);</pre></div></div>
</li>
<li id="section-12">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-12">&#182;</a>
</div>
<h3 id="create-a-document-directly-from-json-">Create a document directly from json.</h3>
<p>Alternatively, we can use our schema to parse
a document object directly from json.</p>
</div>
<div class="content"><div class='highlight'><pre>
<span class="hljs-keyword">let</span> mice_and_men_doc = <span class="hljs-built_in">try!</span>(schema.parse_document(r#<span class="hljs-string">"{
"</span>title<span class="hljs-string">": "</span>Of Mice and Men<span class="hljs-string">",
"</span>body<span class="hljs-string">": "</span>few miles south of Soledad, the Salinas River drops <span class="hljs-keyword">in</span> close to the hillside bank and runs deep and green. The water is warm too, <span class="hljs-keyword">for</span> it has slipped twinkling over the yellow sands <span class="hljs-keyword">in</span> the sunlight before reaching the narrow pool. On one side of the river the golden foothill slopes curve up to the strong and rocky Gabilan Mountains, but on the valley side the water is lined with trees—willows fresh and green with every spring, carrying <span class="hljs-keyword">in</span> their lower leaf junctures the debris of the winters flooding; and sycamores with mottled, white,recumbent limbs and branches that arch over the pool<span class="hljs-string">"
}"</span>#));
index_writer.add_document(mice_and_men_doc);</pre></div></div>
</li>
<li id="section-13">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-13">&#182;</a>
</div>
<p>Multi-valued field are allowed, they are
expressed in JSON by an array.
The following document has two titles.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> frankenstein_doc = <span class="hljs-built_in">try!</span>(schema.parse_document(r#<span class="hljs-string">"{
"</span>title<span class="hljs-string">": ["</span>Frankenstein<span class="hljs-string">", "</span>The Modern Promotheus<span class="hljs-string">"],
"</span>body<span class="hljs-string">": "</span>You will rejoice to hear that no disaster has accompanied the commencement of an enterprise which you have regarded with such evil forebodings. I arrived here yesterday, and my first task is to assure my dear sister of my welfare and increasing confidence <span class="hljs-keyword">in</span> the success of my undertaking.<span class="hljs-string">"
}"</span>#));
index_writer.add_document(frankenstein_doc);</pre></div></div>
</li>
<li id="section-14">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-14">&#182;</a>
</div>
<p>This is an example, so we will only index 3 documents
here. You can check out tantivys tutorial to index
the English wikipedia. Tantivys indexing is rather fast.
Indexing 5 million articles of the English wikipedia takes
around 4 minutes on my computer!</p>
</div>
</li>
<li id="section-15">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-15">&#182;</a>
</div>
<h3 id="committing">Committing</h3>
<p>At this point our documents are not searchable.</p>
<p>We need to call .commit() explicitly to force the
index_writer to finish processing the documents in the queue,
flush the current index to the disk, and advertise
the existence of new documents.</p>
<p>This call is blocking.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-built_in">try!</span>(index_writer.commit());</pre></div></div>
</li>
<li id="section-16">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-16">&#182;</a>
</div>
<p>If <code>.commit()</code> returns correctly, then all of the
documents that have been added are guaranteed to be
persistently indexed.</p>
<p>In the scenario of a crash or a power failure,
tantivy behaves as if has rolled back to its last
commit.</p>
</div>
</li>
<li id="section-17">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-17">&#182;</a>
</div>
<h1 id="searching">Searching</h1>
<p>Lets search our index. Start by reloading
searchers in the index. This should be done
after every commit().</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-built_in">try!</span>(index.load_searchers());</pre></div></div>
</li>
<li id="section-18">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-18">&#182;</a>
</div>
<p>Afterwards create one (or more) searchers.</p>
<p>You should create a searcher
every time you start a “search query”.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> searcher = index.searcher();</pre></div></div>
</li>
<li id="section-19">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-19">&#182;</a>
</div>
<p>The query parser can interpret human queries.
Here, if the user does not specify which
field they want to search, tantivy will search
in both title and body.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> query_parser = QueryParser::new(index.schema(), <span class="hljs-built_in">vec!</span>[title, body]);</pre></div></div>
</li>
<li id="section-20">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-20">&#182;</a>
</div>
<p>QueryParser may fail if the query is not in the right
format. For user facing applications, this can be a problem.
A ticket has been opened regarding this problem.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> query = <span class="hljs-built_in">try!</span>(query_parser.parse_query(<span class="hljs-string">"sea whale"</span>));</pre></div></div>
</li>
<li id="section-21">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-21">&#182;</a>
</div>
<p>A query defines a set of documents, as
well as the way they should be scored.</p>
<p>A query created by the query parser is scored according
to a metric called Tf-Idf, and will consider
any document matching at least one of our terms.</p>
</div>
</li>
<li id="section-22">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-22">&#182;</a>
</div>
<h3 id="collectors">Collectors</h3>
<p>We are not interested in all of the documents but
only in the top 10. Keeping track of our top 10 best documents
is the role of the TopCollector.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut</span> top_collector = TopCollector::with_limit(<span class="hljs-number">10</span>);</pre></div></div>
</li>
<li id="section-23">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-23">&#182;</a>
</div>
<p>We can now perform our query.</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-built_in">try!</span>(searcher.search(&amp;*query, &amp;<span class="hljs-keyword">mut</span> top_collector));</pre></div></div>
</li>
<li id="section-24">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-24">&#182;</a>
</div>
<p>Our top collector now contains the 10
most relevant doc ids…</p>
</div>
<div class="content"><div class='highlight'><pre> <span class="hljs-keyword">let</span> doc_addresses = top_collector.docs();</pre></div></div>
</li>
<li id="section-25">
<div class="annotation">
<div class="pilwrap ">
<a class="pilcrow" href="#section-25">&#182;</a>
</div>
<p>The actual documents still need to be
retrieved from Tantivys store.</p>
<p>Since the body field was not configured as stored,
the document returned will only contain
a title.</p>
</div>
<div class="content"><div class='highlight'><pre>
<span class="hljs-keyword">for</span> doc_address <span class="hljs-keyword">in</span> doc_addresses {
<span class="hljs-keyword">let</span> retrieved_doc = <span class="hljs-built_in">try!</span>(searcher.doc(&amp;doc_address));
<span class="hljs-built_in">println!</span>(<span class="hljs-string">"{}"</span>, schema.to_json(&amp;retrieved_doc));
}
<span class="hljs-literal">Ok</span>(())
}</pre></div></div>
</li>
</ul>
</div>
</body>
</html>
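The Tf-Idf metric mentioned in the annotations above, in its common textbook form (tantivy's exact weighting may differ), scores a document d for a query q as

score(q, d) = \sum_{t \in q} \mathrm{tf}(t, d) \cdot \log \frac{N}{\mathrm{df}(t)}

where tf(t, d) is the number of occurrences of term t in d, df(t) is the number of documents containing t, and N is the total number of documents.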


@@ -0,0 +1,133 @@
// # Iterating docs and positions.
//
// At its core, tantivy relies on a data structure
// called an inverted index.
//
// This example shows how to manually iterate through
// the list of documents containing a term, getting
// its term frequency, and accessing its positions.
// ---
// Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::schema::*;
use tantivy::Index;
use tantivy::{DocId, DocSet, Postings};
fn main() -> tantivy::Result<()> {
// We first create a schema for the sake of the
// example. Check the `basic_search` example for more information.
let mut schema_builder = Schema::builder();
// For this example, we need to make sure to index positions for our title
// field. `TEXT` precisely does this.
let title = schema_builder.add_text_field("title", TEXT | STORED);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema.clone());
let mut index_writer = index.writer_with_num_threads(1, 50_000_000)?;
index_writer.add_document(doc!(title => "The Old Man and the Sea"));
index_writer.add_document(doc!(title => "Of Mice and Men"));
index_writer.add_document(doc!(title => "The modern Promotheus"));
index_writer.commit()?;
index.load_searchers()?;
let searcher = index.searcher();
// A tantivy index is actually a collection of segments.
// Similarly, a searcher just wraps a list of `segment_reader`s.
//
// (Because we indexed a very small number of documents over one thread
// there is actually only one segment here, but let's iterate through the list
// anyway)
for segment_reader in searcher.segment_readers() {
// A segment contains several data structures.
// The inverted index is the combination of:
// - the term dictionary
// - the inverted lists associated with each term, and their positions
let inverted_index = segment_reader.inverted_index(title);
// A `Term` is a text token associated with a field.
// Let's go through all docs containing the term `title:the` and access their position
let term_the = Term::from_field_text(title, "the");
// This segment posting object is like a cursor over the documents matching the term.
// The `IndexRecordOption` argument tells tantivy we will be interested in both term frequencies
// and positions.
//
// If you don't need all this information, you may get better performance by decompressing less
// information.
if let Some(mut segment_postings) =
inverted_index.read_postings(&term_the, IndexRecordOption::WithFreqsAndPositions)
{
// this buffer will be used to request positions
let mut positions: Vec<u32> = Vec::with_capacity(100);
while segment_postings.advance() {
// the doc id of the current document in the posting list.
let doc_id: DocId = segment_postings.doc(); //< do not try to access this before calling advance once.
// The posting list MAY contain deleted documents as well.
if segment_reader.is_deleted(doc_id) {
continue;
}
// the number of times the term appears in the document.
let term_freq: u32 = segment_postings.term_freq();
// accessing positions is slightly expensive and lazy; do not request
// them if you don't need them for some documents.
segment_postings.positions(&mut positions);
// By definition we should have `term_freq` positions.
assert_eq!(positions.len(), term_freq as usize);
// This prints:
// ```
// Doc 0: TermFreq 2: [0, 4]
// Doc 2: TermFreq 1: [0]
// ```
println!("Doc {}: TermFreq {}: {:?}", doc_id, term_freq, positions);
}
}
}
// A `Term` is a text token associated with a field.
// Let's go through all docs containing the term `title:the` and access their position
let term_the = Term::from_field_text(title, "the");
// Some other powerful operations (especially `.skip_to`) may be useful to consume these
// posting lists rapidly.
// You can check for them in the [`DocSet`](https://docs.rs/tantivy/~0/tantivy/trait.DocSet.html) trait
// and the [`Postings`](https://docs.rs/tantivy/~0/tantivy/trait.Postings.html) trait
// Also, for some VERY specific high performance use case like an OLAP analysis of logs,
// you can get better performance by accessing directly the blocks of doc ids.
for segment_reader in searcher.segment_readers() {
// A segment contains several data structures.
// The inverted index is the combination of:
// - the term dictionary
// - the inverted lists associated with each term, and their positions
let inverted_index = segment_reader.inverted_index(title);
// This block postings object is like a cursor over blocks of documents matching the term.
// Here the `IndexRecordOption::Basic` argument tells tantivy we are only interested in
// doc ids: no term frequencies and no positions, so less data needs to be decompressed.
if let Some(mut block_segment_postings) =
inverted_index.read_block_postings(&term_the, IndexRecordOption::Basic)
{
while block_segment_postings.advance() {
// Once again, these blocks MAY contain deleted documents as well.
let docs = block_segment_postings.docs();
// Prints `Docs [0, 2].`
println!("Docs {:?}", docs);
}
}
}
Ok(())
}
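As a quick illustration of the cursor API above, here is a minimal sketch, reusing only the calls and imports already shown in this example (read_postings, advance, doc, is_deleted and term_freq), that sums the term frequency of `title:the` over all live documents:

let term_the = Term::from_field_text(title, "the");
let mut total_occurrences: u64 = 0;
for segment_reader in searcher.segment_readers() {
    let inverted_index = segment_reader.inverted_index(title);
    // `WithFreqs` is enough here: we want term frequencies but no positions.
    if let Some(mut postings) =
        inverted_index.read_postings(&term_the, IndexRecordOption::WithFreqs)
    {
        while postings.advance() {
            // skip documents that have been deleted from the segment.
            if segment_reader.is_deleted(postings.doc()) {
                continue;
            }
            total_occurrences += u64::from(postings.term_freq());
        }
    }
}
println!("title:the appears {} times in total", total_occurrences);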


@@ -1,209 +0,0 @@
extern crate rustc_serialize;
extern crate tantivy;
extern crate tempdir;
use std::path::Path;
use tempdir::TempDir;
use tantivy::Index;
use tantivy::schema::*;
use tantivy::collector::TopCollector;
use tantivy::query::QueryParser;
fn main() {
// Let's create a temporary directory for the
// sake of this example
if let Ok(dir) = TempDir::new("tantivy_example_dir") {
run_example(dir.path()).unwrap();
dir.close().unwrap();
}
}
fn run_example(index_path: &Path) -> tantivy::Result<()> {
// # Defining the schema
//
// The Tantivy index requires a very strict schema.
// The schema declares which fields are in the index,
// and for each field, its type and "the way it should
// be indexed".
// first we need to define a schema ...
let mut schema_builder = SchemaBuilder::default();
// Our first field is title.
// We want full-text search for it, and we want to be able
// to retrieve the document after the search.
//
// TEXT | STORED is some syntactic sugar to describe
// that.
//
// `TEXT` means the field should be tokenized and indexed,
// along with its term frequency and term positions.
//
// `STORED` means that the field will also be saved
// in a compressed, row-oriented key-value store.
// This store is useful to reconstruct the
// documents that were selected during the search phase.
schema_builder.add_text_field("title", TEXT | STORED);
// Our first field is body.
// We want full-text search for it, and we want to be able
// to retrieve the body after the search.
schema_builder.add_text_field("body", TEXT);
let schema = schema_builder.build();
// # Indexing documents
//
// Let's create a brand new index.
//
// This will actually just save a meta.json
// with our schema in the directory.
let index = try!(Index::create(index_path, schema.clone()));
// To insert document we need an index writer.
// There must be only one writer at a time.
// This single `IndexWriter` is already
// multithreaded.
//
// Here we use a buffer of 50MB per thread. Using a bigger
// heap for the indexer can increase its throughput.
let mut index_writer = try!(index.writer(50_000_000));
// Let's index our documents!
// We first need a handle on the title and the body field.
// ### Create a document "manually".
//
// We can create a document manually, by setting the fields
// one by one in a Document object.
let title = schema.get_field("title").unwrap();
let body = schema.get_field("body").unwrap();
let mut old_man_doc = Document::default();
old_man_doc.add_text(title, "The Old Man and the Sea");
old_man_doc.add_text(body,
"He was an old man who fished alone in a skiff in the Gulf Stream and \
he had gone eighty-four days now without taking a fish.");
// ... and add it to the `IndexWriter`.
index_writer.add_document(old_man_doc);
// ### Create a document directly from json.
//
// Alternatively, we can use our schema to parse
// a document object directly from json.
let mice_and_men_doc = try!(schema.parse_document(r#"{
"title": "Of Mice and Men",
"body": "few miles south of Soledad, the Salinas River drops in close to the hillside bank and runs deep and green. The water is warm too, for it has slipped twinkling over the yellow sands in the sunlight before reaching the narrow pool. On one side of the river the golden foothill slopes curve up to the strong and rocky Gabilan Mountains, but on the valley side the water is lined with trees—willows fresh and green with every spring, carrying in their lower leaf junctures the debris of the winters flooding; and sycamores with mottled, white,recumbent limbs and branches that arch over the pool"
}"#));
index_writer.add_document(mice_and_men_doc);
// Multi-valued field are allowed, they are
// expressed in JSON by an array.
// The following document has two titles.
let frankenstein_doc = try!(schema.parse_document(r#"{
"title": ["Frankenstein", "The Modern Promotheus"],
"body": "You will rejoice to hear that no disaster has accompanied the commencement of an enterprise which you have regarded with such evil forebodings. I arrived here yesterday, and my first task is to assure my dear sister of my welfare and increasing confidence in the success of my undertaking."
}"#));
index_writer.add_document(frankenstein_doc);
// This is an example, so we will only index 3 documents
// here. You can check out tantivy's tutorial to index
// the English wikipedia. Tantivy's indexing is rather fast.
// Indexing 5 million articles of the English wikipedia takes
// around 4 minutes on my computer!
// ### Committing
//
// At this point our documents are not searchable.
//
//
// We need to call .commit() explicitly to force the
// index_writer to finish processing the documents in the queue,
// flush the current index to the disk, and advertise
// the existence of new documents.
//
// This call is blocking.
try!(index_writer.commit());
// If `.commit()` returns correctly, then all of the
// documents that have been added are guaranteed to be
// persistently indexed.
//
// In the scenario of a crash or a power failure,
// tantivy behaves as if has rolled back to its last
// commit.
// # Searching
//
// Let's search our index. Start by reloading
// searchers in the index. This should be done
// after every commit().
try!(index.load_searchers());
// Afterwards create one (or more) searchers.
//
// You should create a searcher
// every time you start a "search query".
let searcher = index.searcher();
// The query parser can interpret human queries.
// Here, if the user does not specify which
// field they want to search, tantivy will search
// in both title and body.
let query_parser = QueryParser::new(index.schema(), vec![title, body]);
// QueryParser may fail if the query is not in the right
// format. For user facing applications, this can be a problem.
// A ticket has been opened regarding this problem.
let query = try!(query_parser.parse_query("sea whale"));
// A query defines a set of documents, as
// well as the way they should be scored.
//
// A query created by the query parser is scored according
// to a metric called Tf-Idf, and will consider
// any document matching at least one of our terms.
// ### Collectors
//
// We are not interested in all of the documents but
// only in the top 10. Keeping track of our top 10 best documents
// is the role of the TopCollector.
let mut top_collector = TopCollector::with_limit(10);
// We can now perform our query.
try!(searcher.search(&*query, &mut top_collector));
// Our top collector now contains the 10
// most relevant doc ids...
let doc_addresses = top_collector.docs();
// The actual documents still need to be
// retrieved from Tantivy's store.
//
// Since the body field was not configured as stored,
// the document returned will only contain
// a title.
for doc_address in doc_addresses {
let retrieved_doc = try!(searcher.doc(&doc_address));
println!("{}", schema.to_json(&retrieved_doc));
}
Ok(())
}

examples/snippet.rs Normal file (87 lines)

@@ -0,0 +1,87 @@
// # Snippet example
//
// This example shows how to return a representative snippet of
// your hit result.
// A snippet is an extract of a target document, returned in HTML format.
// The keywords searched by the user are highlighted with a `<b>` tag.
extern crate tempdir;
// ---
// Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::*;
use tantivy::Index;
use tantivy::{Snippet, SnippetGenerator};
use tempdir::TempDir;
fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the
// sake of this example
let index_path = TempDir::new("tantivy_example_dir")?;
// # Defining the schema
let mut schema_builder = Schema::builder();
let title = schema_builder.add_text_field("title", TEXT | STORED);
let body = schema_builder.add_text_field("body", TEXT | STORED);
let schema = schema_builder.build();
// # Indexing documents
let index = Index::create_in_dir(&index_path, schema.clone())?;
let mut index_writer = index.writer(50_000_000)?;
// we'll only need one doc for this example.
index_writer.add_document(doc!(
title => "Of Mice and Men",
body => "A few miles south of Soledad, the Salinas River drops in close to the hillside \
bank and runs deep and green. The water is warm too, for it has slipped twinkling \
over the yellow sands in the sunlight before reaching the narrow pool. On one \
side of the river the golden foothill slopes curve up to the strong and rocky \
Gabilan Mountains, but on the valley side the water is lined with trees—willows \
fresh and green with every spring, carrying in their lower leaf junctures the \
debris of the winters flooding; and sycamores with mottled, white, recumbent \
limbs and branches that arch over the pool"
));
// ...
index_writer.commit()?;
index.load_searchers()?;
let searcher = index.searcher();
let query_parser = QueryParser::for_index(&index, vec![title, body]);
let query = query_parser.parse_query("sycamore spring")?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;
let snippet_generator = SnippetGenerator::create(&searcher, &*query, body)?;
for (score, doc_address) in top_docs {
let doc = searcher.doc(doc_address)?;
let snippet = snippet_generator.snippet_from_doc(&doc);
println!("Document score {}:", score);
println!("title: {}", doc.get_first(title).unwrap().text().unwrap());
println!("snippet: {}", snippet.to_html());
println!("custom highlighting: {}", highlight(snippet));
}
Ok(())
}
fn highlight(snippet: Snippet) -> String {
let mut result = String::new();
let mut start_from = 0;
for (start, end) in snippet.highlighted().iter().map(|h| h.bounds()) {
result.push_str(&snippet.fragments()[start_from..start]);
result.push_str(" --> ");
result.push_str(&snippet.fragments()[start..end]);
result.push_str(" <-- ");
start_from = end;
}
result.push_str(&snippet.fragments()[start_from..]);
result
}
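The `highlight` helper above marks matches with plain-text arrows. A minimal variation, relying on the same `highlighted()`, `bounds()` and `fragments()` accessors, could emit `<mark>` tags instead (note that, unlike `to_html()`, neither version escapes the fragment text):

fn highlight_with_mark(snippet: Snippet) -> String {
    let mut result = String::new();
    let mut start_from = 0;
    for (start, end) in snippet.highlighted().iter().map(|h| h.bounds()) {
        // copy the text before the highlighted section, then the section wrapped in <mark>.
        result.push_str(&snippet.fragments()[start_from..start]);
        result.push_str("<mark>");
        result.push_str(&snippet.fragments()[start..end]);
        result.push_str("</mark>");
        start_from = end;
    }
    result.push_str(&snippet.fragments()[start_from..]);
    result
}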

examples/stop_words.rs Normal file (117 lines)

@@ -0,0 +1,117 @@
// # Stop Words Example
//
// This example covers the basic usage of stop words
// with tantivy
//
// We will:
// - define our schema
// - create an index in a directory
// - add a few stop words
// - index a few documents in our index
extern crate tempdir;
// ---
// Importing tantivy...
#[macro_use]
extern crate tantivy;
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::*;
use tantivy::tokenizer::*;
use tantivy::Index;
fn main() -> tantivy::Result<()> {
// this example assumes you understand the content in `basic_search`
let mut schema_builder = Schema::builder();
// This configures your custom options for how tantivy will
// store and process your content in the index; the key thing
// to note is that we are setting the tokenizer to `stoppy`,
// which will be defined and registered below.
let text_field_indexing = TextFieldIndexing::default()
.set_tokenizer("stoppy")
.set_index_option(IndexRecordOption::WithFreqsAndPositions);
let text_options = TextOptions::default()
.set_indexing_options(text_field_indexing)
.set_stored();
// Our first field is title.
schema_builder.add_text_field("title", text_options);
// Our second field is body.
let text_field_indexing = TextFieldIndexing::default()
.set_tokenizer("stoppy")
.set_index_option(IndexRecordOption::WithFreqsAndPositions);
let text_options = TextOptions::default()
.set_indexing_options(text_field_indexing)
.set_stored();
schema_builder.add_text_field("body", text_options);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema.clone());
// This tokenizer lowercases all of the text (to help with stop word matching)
// then removes all instances of `the` and `and` from the corpus
let tokenizer = SimpleTokenizer
.filter(LowerCaser)
.filter(StopWordFilter::remove(vec![
"the".to_string(),
"and".to_string(),
]));
index.tokenizers().register("stoppy", tokenizer);
let mut index_writer = index.writer(50_000_000)?;
let title = schema.get_field("title").unwrap();
let body = schema.get_field("body").unwrap();
index_writer.add_document(doc!(
title => "The Old Man and the Sea",
body => "He was an old man who fished alone in a skiff in the Gulf Stream and \
he had gone eighty-four days now without taking a fish."
));
index_writer.add_document(doc!(
title => "Of Mice and Men",
body => "A few miles south of Soledad, the Salinas River drops in close to the hillside \
bank and runs deep and green. The water is warm too, for it has slipped twinkling \
over the yellow sands in the sunlight before reaching the narrow pool. On one \
side of the river the golden foothill slopes curve up to the strong and rocky \
Gabilan Mountains, but on the valley side the water is lined with trees—willows \
fresh and green with every spring, carrying in their lower leaf junctures the \
debris of the winters flooding; and sycamores with mottled, white, recumbent \
limbs and branches that arch over the pool"
));
index_writer.add_document(doc!(
title => "Frankenstein",
body => "You will rejoice to hear that no disaster has accompanied the commencement of an \
enterprise which you have regarded with such evil forebodings. I arrived here \
yesterday, and my first task is to assure my dear sister of my welfare and \
increasing confidence in the success of my undertaking."
));
index_writer.commit()?;
index.load_searchers()?;
let searcher = index.searcher();
let query_parser = QueryParser::for_index(&index, vec![title, body]);
// stop words are applied on the query as well.
// The following will be equivalent to `title:frankenstein`
let query = query_parser.parse_query("title:\"the Frankenstein\"")?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;
for (score, doc_address) in top_docs {
let retrieved_doc = searcher.doc(doc_address)?;
println!("\n==\nDocument score {}:", score);
println!("{}", schema.to_json(&retrieved_doc));
}
Ok(())
}
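The stop-word list above is built by calling `.to_string()` on each word. A small sketch of an equivalent way to build it from string slices, using the same `StopWordFilter::remove` API (the extra words are just an illustration of a slightly larger list):

let stop_words: Vec<String> = ["the", "and", "a", "an", "of"]
    .iter()
    .map(|s| s.to_string())
    .collect();
let tokenizer = SimpleTokenizer
    .filter(LowerCaser)
    .filter(StopWordFilter::remove(stop_words));
index.tokenizers().register("stoppy", tokenizer);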


@@ -0,0 +1,41 @@
extern crate tantivy;
use tantivy::schema::*;
// # Document from json
//
// For convenience, `Document` can be parsed directly from json.
fn main() -> tantivy::Result<()> {
// Let's first define a schema and an index.
// Check out the basic example if this is confusing to you.
//
// first we need to define a schema ...
let mut schema_builder = Schema::builder();
schema_builder.add_text_field("title", TEXT | STORED);
schema_builder.add_text_field("body", TEXT);
schema_builder.add_u64_field("year", INT_INDEXED);
let schema = schema_builder.build();
// Let's assume we have a json-serialized document.
let mice_and_men_doc_json = r#"{
"title": "Of Mice and Men",
"year": 1937
}"#;
// We can parse our document
let _mice_and_men_doc = schema.parse_document(&mice_and_men_doc_json)?;
// Multi-valued fields are allowed; they are
// expressed in JSON as an array.
// The following document has two titles.
let frankenstein_json = r#"{
"title": ["Frankenstein", "The Modern Prometheus"],
"year": 1818
}"#;
let _frankenstein_doc = schema.parse_document(&frankenstein_json)?;
// Note that the schema is saved in your index directory.
//
// As a result, indexes are aware of their schema, and you can use this feature
// just by opening an existing `Index` and calling `index.schema().parse_document(json)`.
Ok(())
}
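Because `parse_document` returns a `Result`, documents that do not match the schema can be handled explicitly. A hypothetical sketch (the ill-typed `year` value below is made up for illustration):

// A document whose "year" value is a string rather than a u64.
let bad_doc_json = r#"{
    "title": "Of Mice and Men",
    "year": "nineteen thirty-seven"
}"#;
match schema.parse_document(bad_doc_json) {
    Ok(_doc) => println!("document parsed"),
    Err(err) => println!("could not parse document: {:?}", err),
}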

run-tests.sh Executable file (2 lines)

@@ -0,0 +1,2 @@
#!/bin/bash
cargo test --no-default-features --features mmap -- --test-threads 1

rustfmt.toml Normal file (1 line)

@@ -0,0 +1 @@
use_try_shorthand = true
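`use_try_shorthand = true` asks rustfmt to rewrite `try!(...)` invocations into the `?` operator, e.g.:

// before: let index = try!(Index::create(index_path, schema.clone()));
// after:  let index = Index::create(index_path, schema.clone())?;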


@@ -1,10 +0,0 @@
#!/bin/bash
DEST=target/doc/tantivy/docs/
mkdir -p $DEST
for f in $(ls docs/*.md)
do
rustdoc $f -o $DEST --markdown-css ../../rustdoc.css --markdown-css style.css
done
cp docs/*.css $DEST


@@ -1,5 +0,0 @@
#/bin/bash
valgrind --tool=cachegrind target/release/tantivy-bench -i /data/wiki-index -q ./queries.txt -n 3
valgrind --tool=callgrind target/release/tantivy-bench -i /data/wiki-index -q ./queries.txt -n 3


@@ -1,86 +0,0 @@
extern crate regex;
use std::str::Chars;
use std::ascii::AsciiExt;
pub struct TokenIter<'a> {
chars: Chars<'a>,
term_buffer: String,
}
fn append_char_lowercase(c: char, term_buffer: &mut String) {
term_buffer.push(c.to_ascii_lowercase());
}
pub trait StreamingIterator<'a, T> {
fn next(&'a mut self) -> Option<T>;
}
impl<'a, 'b> TokenIter<'b> {
fn consume_token(&'a mut self) -> Option<&'a str> {
for c in &mut self.chars {
if c.is_alphanumeric() {
append_char_lowercase(c, &mut self.term_buffer);
}
else {
break;
}
}
Some(&self.term_buffer)
}
}
impl<'a, 'b> StreamingIterator<'a, &'a str> for TokenIter<'b> {
#[inline]
fn next(&'a mut self,) -> Option<&'a str> {
self.term_buffer.clear();
// skipping non-letter characters.
loop {
match self.chars.next() {
Some(c) => {
if c.is_alphanumeric() {
append_char_lowercase(c, &mut self.term_buffer);
return self.consume_token();
}
}
None => { return None; }
}
}
}
}
pub struct SimpleTokenizer;
impl SimpleTokenizer {
pub fn tokenize<'a>(&self, text: &'a str) -> TokenIter<'a> {
TokenIter {
term_buffer: String::new(),
chars: text.chars(),
}
}
}
#[test]
fn test_tokenizer() {
let simple_tokenizer = SimpleTokenizer;
let mut term_reader = simple_tokenizer.tokenize("hello, happy tax payer!");
assert_eq!(term_reader.next().unwrap(), "hello");
assert_eq!(term_reader.next().unwrap(), "happy");
assert_eq!(term_reader.next().unwrap(), "tax");
assert_eq!(term_reader.next().unwrap(), "payer");
assert_eq!(term_reader.next(), None);
}
#[test]
fn test_tokenizer_empty() {
let simple_tokenizer = SimpleTokenizer;
let mut term_reader = simple_tokenizer.tokenize("");
assert_eq!(term_reader.next(), None);
}


@@ -1,83 +0,0 @@
use Result;
use collector::Collector;
use SegmentLocalId;
use SegmentReader;
use DocId;
use Score;
/// Collector that does nothing.
/// This is used in the chain Collector and will hopefully
/// be optimized away by the compiler.
pub struct DoNothingCollector;
impl Collector for DoNothingCollector {
#[inline]
fn set_segment(&mut self, _: SegmentLocalId, _: &SegmentReader) -> Result<()> {
Ok(())
}
#[inline]
fn collect(&mut self, _doc: DocId, _score: Score) {}
}
/// Zero-cost abstraction used to collect on multiple collectors.
/// This contraption is only usable if the type of your collectors
/// are known at compile time.
pub struct ChainedCollector<Left: Collector, Right: Collector> {
left: Left,
right: Right
}
impl<Left: Collector, Right: Collector> ChainedCollector<Left, Right> {
/// Adds a collector
pub fn push<C: Collector>(self, new_collector: &mut C) -> ChainedCollector<Self, &mut C> {
ChainedCollector {
left: self,
right: new_collector,
}
}
}
impl<Left: Collector, Right: Collector> Collector for ChainedCollector<Left, Right> {
fn set_segment(&mut self, segment_local_id: SegmentLocalId, segment: &SegmentReader) -> Result<()> {
try!(self.left.set_segment(segment_local_id, segment));
try!(self.right.set_segment(segment_local_id, segment));
Ok(())
}
fn collect(&mut self, doc: DocId, score: Score) {
self.left.collect(doc, score);
self.right.collect(doc, score);
}
}
/// Creates a `ChainedCollector`
pub fn chain() -> ChainedCollector<DoNothingCollector, DoNothingCollector> {
ChainedCollector {
left: DoNothingCollector,
right: DoNothingCollector,
}
}
#[cfg(test)]
mod tests {
use super::*;
use collector::{Collector, CountCollector, TopCollector};
#[test]
fn test_chained_collector() {
let mut top_collector = TopCollector::with_limit(2);
let mut count_collector = CountCollector::default();
{
let mut collectors = chain()
.push(&mut top_collector)
.push(&mut count_collector);
collectors.collect(1, 0.2);
collectors.collect(2, 0.1);
collectors.collect(3, 0.5);
}
assert_eq!(count_collector.count(), 3);
assert!(top_collector.at_capacity());
}
}


@@ -1,57 +1,129 @@
use super::Collector;
use collector::SegmentCollector;
use DocId;
use Result;
use Score;
use SegmentLocalId;
use SegmentReader;
/// `CountCollector` collector only counts how many
/// documents match the query.
///
/// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{Index, Result};
/// use tantivy::collector::Count;
/// use tantivy::query::QueryParser;
///
/// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> {
/// let mut schema_builder = Schema::builder();
/// let title = schema_builder.add_text_field("title", TEXT);
/// let schema = schema_builder.build();
/// let index = Index::create_in_ram(schema);
/// {
/// let mut index_writer = index.writer(3_000_000)?;
/// index_writer.add_document(doc!(
/// title => "The Name of the Wind",
/// ));
/// index_writer.add_document(doc!(
/// title => "The Diary of Muadib",
/// ));
/// index_writer.add_document(doc!(
/// title => "A Dairy Cow",
/// ));
/// index_writer.add_document(doc!(
/// title => "The Diary of a Young Girl",
/// ));
/// index_writer.commit().unwrap();
/// }
///
/// index.load_searchers()?;
/// let searcher = index.searcher();
///
/// {
/// let query_parser = QueryParser::for_index(&index, vec![title]);
/// let query = query_parser.parse_query("diary")?;
/// let count = searcher.search(&query, &Count).unwrap();
///
/// assert_eq!(count, 2);
/// }
///
/// Ok(())
/// }
/// ```
pub struct Count;
impl Collector for Count {
type Fruit = usize;
type Child = SegmentCountCollector;
fn for_segment(&self, _: SegmentLocalId, _: &SegmentReader) -> Result<SegmentCountCollector> {
Ok(SegmentCountCollector::default())
}
fn requires_scoring(&self) -> bool {
false
}
fn merge_fruits(&self, segment_counts: Vec<usize>) -> Result<usize> {
Ok(segment_counts.into_iter().sum())
}
}
#[derive(Default)]
pub struct SegmentCountCollector {
count: usize,
}
impl CountCollector {
/// Returns the count of documents that were
/// collected.
pub fn count(&self,) -> usize {
self.count
}
}
impl Default for CountCollector {
fn default() -> CountCollector {
CountCollector {count: 0,
}
}
}
impl Collector for CountCollector {
fn set_segment(&mut self, _: SegmentLocalId, _: &SegmentReader) -> Result<()> {
Ok(())
}
impl SegmentCollector for SegmentCountCollector {
type Fruit = usize;
fn collect(&mut self, _: DocId, _: Score) {
self.count += 1;
}
fn harvest(self) -> usize {
self.count
}
}
#[cfg(test)]
mod tests {
use super::*;
use test::Bencher;
use super::{Count, SegmentCountCollector};
use collector::Collector;
use collector::SegmentCollector;
#[bench]
fn build_collector(b: &mut Bencher) {
b.iter(|| {
let mut count_collector = CountCollector::default();
for doc in 0..1_000_000 {
count_collector.collect(doc, 1f32);
}
count_collector.count()
});
#[test]
fn test_count_collect_does_not_requires_scoring() {
assert!(!Count.requires_scoring());
}
#[test]
fn test_segment_count_collector() {
{
let count_collector = SegmentCountCollector::default();
assert_eq!(count_collector.harvest(), 0);
}
{
let mut count_collector = SegmentCountCollector::default();
count_collector.collect(0u32, 1f32);
assert_eq!(count_collector.harvest(), 1);
}
{
let mut count_collector = SegmentCountCollector::default();
count_collector.collect(0u32, 1f32);
assert_eq!(count_collector.harvest(), 1);
}
{
let mut count_collector = SegmentCountCollector::default();
count_collector.collect(0u32, 1f32);
count_collector.collect(1u32, 1f32);
assert_eq!(count_collector.harvest(), 2);
}
}
}
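The diff above shows the collector API split into a `Collector` that builds one `SegmentCollector` per segment and merges the per-segment results ("fruits") at the end. A minimal, hypothetical sketch of a custom collector following the same shape (a scored variant of `Count` that only counts documents above a score threshold; it assumes the same imports as the count collector above):

pub struct CountAboveScore(pub Score);

pub struct SegmentCountAboveScore {
    threshold: Score,
    count: usize,
}

impl Collector for CountAboveScore {
    type Fruit = usize;
    type Child = SegmentCountAboveScore;
    fn for_segment(&self, _: SegmentLocalId, _: &SegmentReader) -> Result<SegmentCountAboveScore> {
        Ok(SegmentCountAboveScore { threshold: self.0, count: 0 })
    }
    fn requires_scoring(&self) -> bool {
        // unlike `Count`, this collector actually reads the score.
        true
    }
    fn merge_fruits(&self, counts: Vec<usize>) -> Result<usize> {
        Ok(counts.into_iter().sum())
    }
}

impl SegmentCollector for SegmentCountAboveScore {
    type Fruit = usize;
    fn collect(&mut self, _doc: DocId, score: Score) {
        if score >= self.threshold {
            self.count += 1;
        }
    }
    fn harvest(self) -> usize {
        self.count
    }
}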


@@ -0,0 +1,646 @@
use collector::Collector;
use collector::SegmentCollector;
use docset::SkipResult;
use fastfield::FacetReader;
use schema::Facet;
use schema::Field;
use std::cmp::Ordering;
use std::collections::btree_map;
use std::collections::BTreeMap;
use std::collections::BTreeSet;
use std::collections::BinaryHeap;
use std::collections::Bound;
use std::iter::Peekable;
use std::{u64, usize};
use DocId;
use Result;
use Score;
use SegmentLocalId;
use SegmentReader;
struct Hit<'a> {
count: u64,
facet: &'a Facet,
}
impl<'a> Eq for Hit<'a> {}
impl<'a> PartialEq<Hit<'a>> for Hit<'a> {
fn eq(&self, other: &Hit) -> bool {
self.count == other.count
}
}
impl<'a> PartialOrd<Hit<'a>> for Hit<'a> {
fn partial_cmp(&self, other: &Hit) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl<'a> Ord for Hit<'a> {
fn cmp(&self, other: &Self) -> Ordering {
other.count.cmp(&self.count)
}
}
fn facet_depth(facet_bytes: &[u8]) -> usize {
if facet_bytes.is_empty() {
0
} else {
facet_bytes.iter().cloned().filter(|b| *b == 0u8).count() + 1
}
}
/// Collector for faceting
///
/// The collector collects all facets. You need to configure it
/// beforehand with the facet you want to extract.
///
/// This is done by calling `.add_facet(...)` with the root of the
/// facet you want to extract as argument.
///
/// Facet counts will only be computed for the facet that are direct children
/// of such a root facet.
///
/// For instance, if your index represents books, your hierarchy of facets
/// may contain `category`, `language`.
///
/// The category facet may include `subcategories`. For instance, a book
/// could belong to `/category/fiction/fantasy`.
///
/// If you request the facet counts for `/category`, the result will be
/// the breakdown of counts for the direct children of `/category`
/// (e.g. `/category/fiction`, `/category/biography`, `/category/personal_development`).
///
/// Once collection is finished, you can harvest its results in the form
/// of a `FacetCounts` object, and extract your facet counts from it.
///
/// This implementation assumes the number of facets you are working with
/// is hundreds of times lower than your number of documents.
///
///
/// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Facet, Schema, TEXT};
/// use tantivy::{Index, Result};
/// use tantivy::collector::FacetCollector;
/// use tantivy::query::AllQuery;
///
/// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> {
/// let mut schema_builder = Schema::builder();
///
/// // Facets have their own specific type.
/// // It is not a bad practice to put all of your
/// // facet information in the same field.
/// let facet = schema_builder.add_facet_field("facet");
/// let title = schema_builder.add_text_field("title", TEXT);
/// let schema = schema_builder.build();
/// let index = Index::create_in_ram(schema);
/// {
/// let mut index_writer = index.writer(3_000_000)?;
/// // a document can be associated to any number of facets
/// index_writer.add_document(doc!(
/// title => "The Name of the Wind",
/// facet => Facet::from("/lang/en"),
/// facet => Facet::from("/category/fiction/fantasy")
/// ));
/// index_writer.add_document(doc!(
/// title => "Dune",
/// facet => Facet::from("/lang/en"),
/// facet => Facet::from("/category/fiction/sci-fi")
/// ));
/// index_writer.add_document(doc!(
/// title => "La Vénus d'Ille",
/// facet => Facet::from("/lang/fr"),
/// facet => Facet::from("/category/fiction/fantasy"),
/// facet => Facet::from("/category/fiction/horror")
/// ));
/// index_writer.add_document(doc!(
/// title => "The Diary of a Young Girl",
/// facet => Facet::from("/lang/en"),
/// facet => Facet::from("/category/biography")
/// ));
/// index_writer.commit().unwrap();
/// }
///
/// index.load_searchers()?;
/// let searcher = index.searcher();
///
/// {
/// let mut facet_collector = FacetCollector::for_field(facet);
/// facet_collector.add_facet("/lang");
/// facet_collector.add_facet("/category");
/// let facet_counts = searcher.search(&AllQuery, &facet_collector).unwrap();
///
/// // This lists all of the facet counts
/// let facets: Vec<(&Facet, u64)> = facet_counts
/// .get("/category")
/// .collect();
/// assert_eq!(facets, vec![
/// (&Facet::from("/category/biography"), 1),
/// (&Facet::from("/category/fiction"), 3)
/// ]);
/// }
///
/// {
/// let mut facet_collector = FacetCollector::for_field(facet);
/// facet_collector.add_facet("/category/fiction");
/// let facet_counts = searcher.search(&AllQuery, &facet_collector).unwrap();
///
/// // This lists all of the facet counts
/// let facets: Vec<(&Facet, u64)> = facet_counts
/// .get("/category/fiction")
/// .collect();
/// assert_eq!(facets, vec![
/// (&Facet::from("/category/fiction/fantasy"), 2),
/// (&Facet::from("/category/fiction/horror"), 1),
/// (&Facet::from("/category/fiction/sci-fi"), 1)
/// ]);
/// }
///
/// {
/// let mut facet_collector = FacetCollector::for_field(facet);
/// facet_collector.add_facet("/category/fiction");
/// let facet_counts = searcher.search(&AllQuery, &facet_collector).unwrap();
///
/// // This lists all of the facet counts
/// let facets: Vec<(&Facet, u64)> = facet_counts.top_k("/category/fiction", 1);
/// assert_eq!(facets, vec![
/// (&Facet::from("/category/fiction/fantasy"), 2)
/// ]);
/// }
///
/// Ok(())
/// }
/// ```
pub struct FacetCollector {
field: Field,
facets: BTreeSet<Facet>,
}
pub struct FacetSegmentCollector {
reader: FacetReader,
facet_ords_buf: Vec<u64>,
// facet_ord -> collapse facet_id
collapse_mapping: Vec<usize>,
// collapse facet_id -> count
counts: Vec<u64>,
// collapse facet_id -> facet_ord
collapse_facet_ords: Vec<u64>,
}
fn skip<'a, I: Iterator<Item = &'a Facet>>(
target: &[u8],
collapse_it: &mut Peekable<I>,
) -> SkipResult {
loop {
match collapse_it.peek() {
Some(facet_bytes) => match facet_bytes.encoded_str().as_bytes().cmp(target) {
Ordering::Less => {}
Ordering::Greater => {
return SkipResult::OverStep;
}
Ordering::Equal => {
return SkipResult::Reached;
}
},
None => {
return SkipResult::End;
}
}
collapse_it.next();
}
}
impl FacetCollector {
/// Create a facet collector to collect the facets
/// from a specific facet `Field`.
///
/// This function does not check whether the field
/// is of the proper type.
pub fn for_field(field: Field) -> FacetCollector {
FacetCollector {
field,
facets: BTreeSet::default(),
}
}
/// Adds a facet that we want to record counts
///
/// Adding facet `Facet::from("/country")` for instance,
/// will record the counts of all of the direct children of the facet country
/// (e.g. `/country/FR`, `/country/UK`).
///
/// Adding two facets where one is a prefix of the other is forbidden.
/// If you need the correct number of unique documents for two such facets,
/// just add them to separate `FacetCollector`s.
pub fn add_facet<T>(&mut self, facet_from: T)
where
Facet: From<T>,
{
let facet = Facet::from(facet_from);
for old_facet in &self.facets {
assert!(
!old_facet.is_prefix_of(&facet),
"Tried to add a facet which is a descendant of an already added facet."
);
assert!(
!facet.is_prefix_of(old_facet),
"Tried to add a facet which is an ancestor of an already added facet."
);
}
self.facets.insert(facet);
}
}
impl Collector for FacetCollector {
type Fruit = FacetCounts;
type Child = FacetSegmentCollector;
fn for_segment(
&self,
_: SegmentLocalId,
reader: &SegmentReader,
) -> Result<FacetSegmentCollector> {
let facet_reader = reader.facet_reader(self.field)?;
let mut collapse_mapping = Vec::new();
let mut counts = Vec::new();
let mut collapse_facet_ords = Vec::new();
let mut collapse_facet_it = self.facets.iter().peekable();
collapse_facet_ords.push(0);
{
let mut facet_streamer = facet_reader.facet_dict().range().into_stream();
if facet_streamer.advance() {
'outer: loop {
// at the beginning of this loop, facet_streamer
// is positioned on a term that has not been processed yet.
let skip_result = skip(facet_streamer.key(), &mut collapse_facet_it);
match skip_result {
SkipResult::Reached => {
// we reach a facet we decided to collapse.
let collapse_depth = facet_depth(facet_streamer.key());
let mut collapsed_id = 0;
collapse_mapping.push(0);
while facet_streamer.advance() {
let depth = facet_depth(facet_streamer.key());
if depth <= collapse_depth {
continue 'outer;
}
if depth == collapse_depth + 1 {
collapsed_id = collapse_facet_ords.len();
collapse_facet_ords.push(facet_streamer.term_ord());
collapse_mapping.push(collapsed_id);
} else {
collapse_mapping.push(collapsed_id);
}
}
break;
}
SkipResult::End | SkipResult::OverStep => {
collapse_mapping.push(0);
if !facet_streamer.advance() {
break;
}
}
}
}
}
}
counts.resize(collapse_facet_ords.len(), 0);
Ok(FacetSegmentCollector {
reader: facet_reader,
facet_ords_buf: Vec::with_capacity(255),
collapse_mapping,
counts,
collapse_facet_ords,
})
}
fn requires_scoring(&self) -> bool {
false
}
fn merge_fruits(&self, segments_facet_counts: Vec<FacetCounts>) -> Result<FacetCounts> {
let mut facet_counts: BTreeMap<Facet, u64> = BTreeMap::new();
for segment_facet_counts in segments_facet_counts {
for (facet, count) in segment_facet_counts.facet_counts {
*(facet_counts.entry(facet).or_insert(0)) += count;
}
}
Ok(FacetCounts { facet_counts })
}
}
impl SegmentCollector for FacetSegmentCollector {
type Fruit = FacetCounts;
fn collect(&mut self, doc: DocId, _: Score) {
self.reader.facet_ords(doc, &mut self.facet_ords_buf);
let mut previous_collapsed_ord: usize = usize::MAX;
for &facet_ord in &self.facet_ords_buf {
let collapsed_ord = self.collapse_mapping[facet_ord as usize];
self.counts[collapsed_ord] += if collapsed_ord == previous_collapsed_ord {
0
} else {
1
};
previous_collapsed_ord = collapsed_ord;
}
}
/// Returns the results of the collection.
///
/// This method does not just return the counters,
/// it also translates the facet ordinals of the last segment.
fn harvest(self) -> FacetCounts {
let mut facet_counts = BTreeMap::new();
let facet_dict = self.reader.facet_dict();
for (collapsed_facet_ord, count) in self.counts.iter().cloned().enumerate() {
if count == 0 {
continue;
}
let mut facet = vec![];
let facet_ord = self.collapse_facet_ords[collapsed_facet_ord];
facet_dict.ord_to_term(facet_ord as u64, &mut facet);
// TODO
facet_counts.insert(Facet::from_encoded(facet).unwrap(), count);
}
FacetCounts { facet_counts }
}
}
/// Intermediary result of the `FacetCollector` that stores
/// the facet counts for all the segments.
pub struct FacetCounts {
facet_counts: BTreeMap<Facet, u64>,
}
pub struct FacetChildIterator<'a> {
underlying: btree_map::Range<'a, Facet, u64>,
}
impl<'a> Iterator for FacetChildIterator<'a> {
type Item = (&'a Facet, u64);
fn next(&mut self) -> Option<Self::Item> {
self.underlying.next().map(|(facet, count)| (facet, *count))
}
}
impl FacetCounts {
pub fn get<T>(&self, facet_from: T) -> FacetChildIterator
where
Facet: From<T>,
{
let facet = Facet::from(facet_from);
let left_bound = Bound::Excluded(facet.clone());
let right_bound = if facet.is_root() {
Bound::Unbounded
} else {
let mut facet_after_bytes: String = facet.encoded_str().to_owned();
facet_after_bytes.push('\u{1}');
let facet_after = Facet::from_encoded_string(facet_after_bytes);
Bound::Excluded(facet_after)
};
let underlying: btree_map::Range<_, _> = self.facet_counts.range((left_bound, right_bound));
FacetChildIterator { underlying }
}
pub fn top_k<T>(&self, facet: T, k: usize) -> Vec<(&Facet, u64)>
where
Facet: From<T>,
{
let mut heap = BinaryHeap::with_capacity(k);
let mut it = self.get(facet);
// push the first k elements to bring the heap
// to capacity
for (facet, count) in (&mut it).take(k) {
heap.push(Hit { count, facet });
}
let mut lowest_count: u64 = heap.peek().map(|hit| hit.count).unwrap_or(u64::MIN); //< the `unwrap_or` case may be triggered but the value
// is never used in that case.
for (facet, count) in it {
if count > lowest_count {
if let Some(mut head) = heap.peek_mut() {
*head = Hit { count, facet };
}
// the heap gets reconstructed at this point
if let Some(head) = heap.peek() {
lowest_count = head.count;
}
}
}
heap.into_sorted_vec()
.into_iter()
.map(|hit| (hit.facet, hit.count))
.collect::<Vec<_>>()
}
}
#[cfg(test)]
mod tests {
use super::{FacetCollector, FacetCounts};
use core::Index;
use query::AllQuery;
use rand::distributions::Uniform;
use rand::prelude::SliceRandom;
use rand::{thread_rng, Rng};
use schema::{Document, Facet, Field, Schema};
use std::iter;
#[test]
fn test_facet_collector_drilldown() {
let mut schema_builder = Schema::builder();
let facet_field = schema_builder.add_facet_field("facet");
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
let num_facets: usize = 3 * 4 * 5;
let facets: Vec<Facet> = (0..num_facets)
.map(|mut n| {
let top = n % 3;
n /= 3;
let mid = n % 4;
n /= 4;
let leaf = n % 5;
Facet::from(&format!("/top{}/mid{}/leaf{}", top, mid, leaf))
})
.collect();
for i in 0..num_facets * 10 {
let mut doc = Document::new();
doc.add_facet(facet_field, facets[i % num_facets].clone());
index_writer.add_document(doc);
}
index_writer.commit().unwrap();
index.load_searchers().unwrap();
let searcher = index.searcher();
let mut facet_collector = FacetCollector::for_field(facet_field);
facet_collector.add_facet(Facet::from("/top1"));
let counts = searcher.search(&AllQuery, &facet_collector).unwrap();
{
let facets: Vec<(String, u64)> = counts
.get("/top1")
.map(|(facet, count)| (facet.to_string(), count))
.collect();
assert_eq!(
facets,
[
("/top1/mid0", 50),
("/top1/mid1", 50),
("/top1/mid2", 50),
("/top1/mid3", 50),
]
.iter()
.map(|&(facet_str, count)| (String::from(facet_str), count))
.collect::<Vec<_>>()
);
}
}
#[test]
#[should_panic(expected = "Tried to add a facet which is a descendant of \
an already added facet.")]
fn test_misused_facet_collector() {
let mut facet_collector = FacetCollector::for_field(Field(0));
facet_collector.add_facet(Facet::from("/country"));
facet_collector.add_facet(Facet::from("/country/europe"));
}
#[test]
fn test_doc_unsorted_multifacet() {
let mut schema_builder = Schema::builder();
let facet_field = schema_builder.add_facet_field("facets");
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer.add_document(doc!(
facet_field => Facet::from_text(&"/subjects/A/a"),
facet_field => Facet::from_text(&"/subjects/B/a"),
facet_field => Facet::from_text(&"/subjects/A/b"),
facet_field => Facet::from_text(&"/subjects/B/b"),
));
index_writer.commit().unwrap();
index.load_searchers().unwrap();
let searcher = index.searcher();
assert_eq!(searcher.num_docs(), 1);
let mut facet_collector = FacetCollector::for_field(facet_field);
facet_collector.add_facet("/subjects");
let counts = searcher.search(&AllQuery, &facet_collector).unwrap();
let facets: Vec<(&Facet, u64)> = counts.get("/subjects").collect();
assert_eq!(facets[0].1, 1);
}
#[test]
fn test_non_used_facet_collector() {
let mut facet_collector = FacetCollector::for_field(Field(0));
facet_collector.add_facet(Facet::from("/country"));
facet_collector.add_facet(Facet::from("/countryeurope"));
}
#[test]
fn test_facet_collector_topk() {
let mut schema_builder = Schema::builder();
let facet_field = schema_builder.add_facet_field("facet");
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let uniform = Uniform::new_inclusive(1, 100_000);
let mut docs: Vec<Document> = vec![("a", 10), ("b", 100), ("c", 7), ("d", 12), ("e", 21)]
.into_iter()
.flat_map(|(c, count)| {
let facet = Facet::from(&format!("/facet/{}", c));
let doc = doc!(facet_field => facet);
iter::repeat(doc).take(count)
})
.map(|mut doc| {
doc.add_facet(
facet_field,
&format!("/facet/{}", thread_rng().sample(&uniform)),
);
doc
})
.collect();
docs[..].shuffle(&mut thread_rng());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
for doc in docs {
index_writer.add_document(doc);
}
index_writer.commit().unwrap();
index.load_searchers().unwrap();
let searcher = index.searcher();
let mut facet_collector = FacetCollector::for_field(facet_field);
facet_collector.add_facet("/facet");
let counts: FacetCounts = searcher.search(&AllQuery, &facet_collector).unwrap();
{
let facets: Vec<(&Facet, u64)> = counts.top_k("/facet", 3);
assert_eq!(
facets,
vec![
(&Facet::from("/facet/b"), 100),
(&Facet::from("/facet/e"), 21),
(&Facet::from("/facet/d"), 12),
]
);
}
}
}
#[cfg(all(test, feature = "unstable"))]
mod bench {
use collector::FacetCollector;
use query::AllQuery;
use rand::{thread_rng, Rng};
use schema::Facet;
use schema::Schema;
use test::Bencher;
use Index;
#[bench]
fn bench_facet_collector(b: &mut Bencher) {
let mut schema_builder = Schema::builder();
let facet_field = schema_builder.add_facet_field("facet");
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut docs = vec![];
for val in 0..50 {
let facet = Facet::from(&format!("/facet_{}", val));
for _ in 0..val * val {
docs.push(doc!(facet_field=>facet.clone()));
}
}
// 40425 docs
thread_rng().shuffle(&mut docs[..]);
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
for doc in docs {
index_writer.add_document(doc);
}
index_writer.commit().unwrap();
index.load_searchers().unwrap();
b.iter(|| {
let searcher = index.searcher();
let facet_collector = FacetCollector::for_field(facet_field);
searcher.search(&AllQuery, &facet_collector).unwrap();
});
}
}
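Taken together, the collector and its tests above suggest the following end-to-end usage. This is a minimal sketch against the 0.8-era API exercised above (`FacetCollector::for_field`, `FacetCounts::get`/`top_k`, `Index::create_in_ram`, `load_searchers`); the `facet_demo` function name and the `/lang/*` facets are illustrative only, not part of the crate.
use tantivy::collector::FacetCollector;
use tantivy::query::AllQuery;
use tantivy::schema::{Document, Facet, Schema};
use tantivy::Index;

fn facet_demo() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let facet_field = schema_builder.add_facet_field("facet");
    let index = Index::create_in_ram(schema_builder.build());
    let mut index_writer = index.writer_with_num_threads(1, 3_000_000)?;
    for path in &["/lang/rust", "/lang/rust", "/lang/java"] {
        let mut doc = Document::new();
        doc.add_facet(facet_field, Facet::from(*path));
        index_writer.add_document(doc);
    }
    index_writer.commit()?;
    index.load_searchers()?;
    let searcher = index.searcher();

    // Count documents under "/lang", one bucket per direct child facet.
    let mut facet_collector = FacetCollector::for_field(facet_field);
    facet_collector.add_facet("/lang");
    let counts = searcher.search(&AllQuery, &facet_collector)?;

    let children: Vec<(&Facet, u64)> = counts.get("/lang").collect();
    assert_eq!(children.len(), 2);
    let top = counts.top_k("/lang", 1);
    assert_eq!(top[0], (&Facet::from("/lang/rust"), 2));
    Ok(())
}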


@@ -0,0 +1,123 @@
use std::cmp::Eq;
use std::collections::HashMap;
use std::hash::Hash;
use collector::Collector;
use fastfield::FastFieldReader;
use schema::Field;
use DocId;
use Result;
use Score;
use SegmentReader;
use SegmentLocalId;
/// Facet collector for i64/u64 fast field
pub struct IntFacetCollector<T>
where
T: FastFieldReader,
T::ValueType: Eq + Hash,
{
counters: HashMap<T::ValueType, u64>,
field: Field,
ff_reader: Option<T>,
}
impl<T> IntFacetCollector<T>
where
T: FastFieldReader,
T::ValueType: Eq + Hash,
{
/// Creates a new facet collector for aggregating a given field.
pub fn new(field: Field) -> IntFacetCollector<T> {
IntFacetCollector {
counters: HashMap::new(),
field: field,
ff_reader: None,
}
}
}
impl<T> Collector for IntFacetCollector<T>
where
T: FastFieldReader,
T::ValueType: Eq + Hash,
{
fn set_segment(&mut self, _: SegmentLocalId, reader: &SegmentReader) -> Result<()> {
self.ff_reader = Some(reader.get_fast_field_reader(self.field)?);
Ok(())
}
fn collect(&mut self, doc: DocId, _: Score) {
let val = self.ff_reader
.as_ref()
.expect(
"collect() was called before set_segment. \
This should never happen.",
)
.get(doc);
*(self.counters.entry(val).or_insert(0)) += 1;
}
}
#[cfg(test)]
mod tests {
use collector::{chain, IntFacetCollector};
use query::QueryParser;
use fastfield::{I64FastFieldReader, U64FastFieldReader};
use schema::{self, FAST, STRING};
use Index;
#[test]
// create 10 documents, set num field value to 0 or 1 for even/odd ones
// make sure we have facet counters correctly filled
fn test_facet_collector_results() {
let mut schema_builder = schema::Schema::builder();
let num_field_i64 = schema_builder.add_i64_field("num_i64", FAST);
let num_field_u64 = schema_builder.add_u64_field("num_u64", FAST);
let text_field = schema_builder.add_text_field("text", STRING);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema.clone());
{
let mut index_writer = index.writer_with_num_threads(1, 40_000_000).unwrap();
{
for i in 0u64..10u64 {
index_writer.add_document(doc!(
num_field_i64 => ((i as i64) % 3i64) as i64,
num_field_u64 => (i % 2u64) as u64,
text_field => "text"
));
}
}
assert_eq!(index_writer.commit().unwrap(), 10u64);
}
index.load_searchers().unwrap();
let searcher = index.searcher();
let mut ffvf_i64: IntFacetCollector<I64FastFieldReader> = IntFacetCollector::new(num_field_i64);
let mut ffvf_u64: IntFacetCollector<U64FastFieldReader> = IntFacetCollector::new(num_field_u64);
{
// perform the query
let mut facet_collectors = chain().push(&mut ffvf_i64).push(&mut ffvf_u64);
let mut query_parser = QueryParser::for_index(index, vec![text_field]);
let query = query_parser.parse_query("text:text").unwrap();
query.search(&searcher, &mut facet_collectors).unwrap();
}
assert_eq!(ffvf_u64.counters[&0], 5);
assert_eq!(ffvf_u64.counters[&1], 5);
assert_eq!(ffvf_i64.counters[&0], 4);
assert_eq!(ffvf_i64.counters[&1], 3);
}
}


@@ -1,171 +1,367 @@
/*!
# Collectors
Collectors define the information you want to extract from the documents matching the queries.
In tantivy jargon, we call this information your search "fruit".
Your fruit could for instance be:
- [the count of matching documents](./struct.Count.html)
- [the top 10 documents, by relevancy or by a fast field](./struct.TopDocs.html)
- [facet counts](./struct.FacetCollector.html)
At one point in your code, you will trigger the actual search operation by calling
[the `search(...)` method of your `Searcher` object](../struct.Searcher.html#method.search).
This call will look like this.
```verbatim
let fruit = searcher.search(&query, &collector)?;
```
Here the type of fruit is actually determined as an associated type of the collector (`Collector::Fruit`).
# Combining several collectors
A rich search experience often requires running several collectors on your search query.
For instance,
- selecting the top-K products matching your query
- counting the matching documents
- computing several facets
- computing statistics about the matching product prices
A simple and efficient way to do that is to pass your collectors as one tuple.
The resulting `Fruit` will then be a typed tuple with each collector's original fruits
in their respective position.
```rust
# extern crate tantivy;
# use tantivy::schema::*;
# use tantivy::*;
# use tantivy::query::*;
use tantivy::collector::{Count, TopDocs};
#
# fn main() -> tantivy::Result<()> {
# let mut schema_builder = Schema::builder();
# let title = schema_builder.add_text_field("title", TEXT);
# let schema = schema_builder.build();
# let index = Index::create_in_ram(schema);
# let mut index_writer = index.writer(3_000_000)?;
# index_writer.add_document(doc!(
# title => "The Name of the Wind",
# ));
# index_writer.add_document(doc!(
# title => "The Diary of Muadib",
# ));
# index_writer.commit().unwrap();
# index.load_searchers()?;
# let searcher = index.searcher();
# let query_parser = QueryParser::for_index(&index, vec![title]);
# let query = query_parser.parse_query("diary")?;
let (doc_count, top_docs): (usize, Vec<(Score, DocAddress)>) =
searcher.search(&query, &(Count, TopDocs::with_limit(2)))?;
# Ok(())
# }
```
The `Collector` trait is implemented for up to 4 collectors.
If you have more than 4 collectors, you can either group them into
tuples of tuples `(a,(b,(c,d)))`, or rely on a `MultiCollector`.
# Combining several collectors dynamically
Combining collectors into a tuple is a zero-cost abstraction: everything
happens as if you had manually implemented a single collector
combining all of our features.
Unfortunately, it requires you to know your collector types at compile time.
If, on the other hand, the collectors depend on some query parameter,
you can rely on a `MultiCollector`.
# Implementing your own collectors.
See the `custom_collector` example.
*/
use downcast;
use DocId;
use Score;
use Result;
use SegmentLocalId;
use SegmentReader;
mod count_collector;
pub use self::count_collector::CountCollector;
pub use self::count_collector::Count;
mod multi_collector;
pub use self::multi_collector::MultiCollector;
mod top_collector;
pub use self::top_collector::TopCollector;
mod chained_collector;
pub use self::chained_collector::chain;
mod top_score_collector;
pub use self::top_score_collector::TopDocs;
mod top_field_collector;
pub use self::top_field_collector::TopDocsByField;
mod facet_collector;
pub use self::facet_collector::FacetCollector;
/// `Fruit` is the type for the result of our collection.
/// e.g. `usize` for the `Count` collector.
pub trait Fruit: Send + downcast::Any {}
impl<T> Fruit for T where T: Send + downcast::Any {}
/// Collectors are in charge of collecting and retaining relevant
/// information from the documents found and scored by the query.
///
/// For instance,
///
/// - keeping track of the top 10 best documents
/// - computing a breakdown over a fast field
/// - computing the number of documents matching the query
///
/// Queries are in charge of pushing the `DocSet` to the collector.
/// Our search index is in fact a collection of segments, so
/// the `Collector` trait is actually more of a factory that instantiates
/// a `SegmentCollector` for each segment.
///
/// As they work on multiple segments, they first inform
/// the collector of a change in a segment and then
/// call the `collect` method to push the document to the collector.
///
/// Temporally, our collector will receive calls
/// - `.set_segment(0, segment_reader_0)`
/// - `.collect(doc0_of_segment_0)`
/// - `.collect(...)`
/// - `.collect(last_doc_of_segment_0)`
/// - `.set_segment(1, segment_reader_1)`
/// - `.collect(doc0_of_segment_1)`
/// - `.collect(...)`
/// - `.collect(last_doc_of_segment_1)`
/// - `...`
/// - `.collect(last_doc_of_last_segment)`
/// The collection logic itself is in the `SegmentCollector`.
///
/// Segments are not guaranteed to be visited in any specific order.
pub trait Collector: Sync {
/// `Fruit` is the type for the result of our collection.
/// e.g. `usize` for the `Count` collector.
type Fruit: Fruit;
/// Type of the `SegmentCollector` associated with this collector.
type Child: SegmentCollector<Fruit = Self::Fruit>;
/// `for_segment` is called before beginning to enumerate
/// on this segment.
fn for_segment(
&self,
segment_local_id: SegmentLocalId,
segment: &SegmentReader,
) -> Result<Self::Child>;
/// Returns true iff the collector requires to compute scores for documents.
fn requires_scoring(&self) -> bool;
/// Combines the fruit associated with the collection of each segment
/// into one fruit.
fn merge_fruits(&self, segment_fruits: Vec<Self::Fruit>) -> Result<Self::Fruit>;
}
/// The `SegmentCollector` is the trait in charge of defining the
/// collect operation at the scale of the segment.
///
/// `.collect(doc, score)` will be called for every document
/// matching the query.
pub trait SegmentCollector: 'static {
/// `Fruit` is the type for the result of our collection.
/// e.g. `usize` for the `Count` collector.
type Fruit: Fruit;
/// The query pushes the scored document to the collector via this method.
fn collect(&mut self, doc: DocId, score: Score);
/// Extract the fruit of the collection from the `SegmentCollector`.
fn harvest(self) -> Self::Fruit;
}
// -----------------------------------------------
// Tuple implementations.
impl<Left, Right> Collector for (Left, Right)
where
Left: Collector,
Right: Collector,
{
type Fruit = (Left::Fruit, Right::Fruit);
type Child = (Left::Child, Right::Child);
fn for_segment(&self, segment_local_id: u32, segment: &SegmentReader) -> Result<Self::Child> {
let left = self.0.for_segment(segment_local_id, segment)?;
let right = self.1.for_segment(segment_local_id, segment)?;
Ok((left, right))
}
fn requires_scoring(&self) -> bool {
self.0.requires_scoring() || self.1.requires_scoring()
}
fn merge_fruits(
&self,
children: Vec<(Left::Fruit, Right::Fruit)>,
) -> Result<(Left::Fruit, Right::Fruit)> {
let mut left_fruits = vec![];
let mut right_fruits = vec![];
for (left_fruit, right_fruit) in children {
left_fruits.push(left_fruit);
right_fruits.push(right_fruit);
}
Ok((
self.0.merge_fruits(left_fruits)?,
self.1.merge_fruits(right_fruits)?,
))
}
}
impl<Left, Right> SegmentCollector for (Left, Right)
where
Left: SegmentCollector,
Right: SegmentCollector,
{
type Fruit = (Left::Fruit, Right::Fruit);
fn collect(&mut self, doc: DocId, score: Score) {
self.0.collect(doc, score);
self.1.collect(doc, score);
}
fn harvest(self) -> <Self as SegmentCollector>::Fruit {
(self.0.harvest(), self.1.harvest())
}
}
// 3-Tuple
impl<One, Two, Three> Collector for (One, Two, Three)
where
One: Collector,
Two: Collector,
Three: Collector,
{
type Fruit = (One::Fruit, Two::Fruit, Three::Fruit);
type Child = (One::Child, Two::Child, Three::Child);
fn for_segment(&self, segment_local_id: u32, segment: &SegmentReader) -> Result<Self::Child> {
let one = self.0.for_segment(segment_local_id, segment)?;
let two = self.1.for_segment(segment_local_id, segment)?;
let three = self.2.for_segment(segment_local_id, segment)?;
Ok((one, two, three))
}
fn requires_scoring(&self) -> bool {
self.0.requires_scoring() || self.1.requires_scoring() || self.2.requires_scoring()
}
fn merge_fruits(&self, children: Vec<Self::Fruit>) -> Result<Self::Fruit> {
let mut one_fruits = vec![];
let mut two_fruits = vec![];
let mut three_fruits = vec![];
for (one_fruit, two_fruit, three_fruit) in children {
one_fruits.push(one_fruit);
two_fruits.push(two_fruit);
three_fruits.push(three_fruit);
}
Ok((
self.0.merge_fruits(one_fruits)?,
self.1.merge_fruits(two_fruits)?,
self.2.merge_fruits(three_fruits)?,
))
}
}
impl<One, Two, Three> SegmentCollector for (One, Two, Three)
where
One: SegmentCollector,
Two: SegmentCollector,
Three: SegmentCollector,
{
type Fruit = (One::Fruit, Two::Fruit, Three::Fruit);
fn collect(&mut self, doc: DocId, score: Score) {
self.0.collect(doc, score);
self.1.collect(doc, score);
self.2.collect(doc, score);
}
fn harvest(self) -> <Self as SegmentCollector>::Fruit {
(self.0.harvest(), self.1.harvest(), self.2.harvest())
}
}
// 4-Tuple
impl<One, Two, Three, Four> Collector for (One, Two, Three, Four)
where
One: Collector,
Two: Collector,
Three: Collector,
Four: Collector,
{
type Fruit = (One::Fruit, Two::Fruit, Three::Fruit, Four::Fruit);
type Child = (One::Child, Two::Child, Three::Child, Four::Child);
fn for_segment(&self, segment_local_id: u32, segment: &SegmentReader) -> Result<Self::Child> {
let one = self.0.for_segment(segment_local_id, segment)?;
let two = self.1.for_segment(segment_local_id, segment)?;
let three = self.2.for_segment(segment_local_id, segment)?;
let four = self.3.for_segment(segment_local_id, segment)?;
Ok((one, two, three, four))
}
fn requires_scoring(&self) -> bool {
self.0.requires_scoring()
|| self.1.requires_scoring()
|| self.2.requires_scoring()
|| self.3.requires_scoring()
}
fn merge_fruits(&self, children: Vec<Self::Fruit>) -> Result<Self::Fruit> {
let mut one_fruits = vec![];
let mut two_fruits = vec![];
let mut three_fruits = vec![];
let mut four_fruits = vec![];
for (one_fruit, two_fruit, three_fruit, four_fruit) in children {
one_fruits.push(one_fruit);
two_fruits.push(two_fruit);
three_fruits.push(three_fruit);
four_fruits.push(four_fruit);
}
Ok((
self.0.merge_fruits(one_fruits)?,
self.1.merge_fruits(two_fruits)?,
self.2.merge_fruits(three_fruits)?,
self.3.merge_fruits(four_fruits)?,
))
}
}
impl<One, Two, Three, Four> SegmentCollector for (One, Two, Three, Four)
where
One: SegmentCollector,
Two: SegmentCollector,
Three: SegmentCollector,
Four: SegmentCollector,
{
type Fruit = (One::Fruit, Two::Fruit, Three::Fruit, Four::Fruit);
fn collect(&mut self, doc: DocId, score: Score) {
self.0.collect(doc, score);
self.1.collect(doc, score);
self.2.collect(doc, score);
self.3.collect(doc, score);
}
fn harvest(self) -> <Self as SegmentCollector>::Fruit {
(
self.0.harvest(),
self.1.harvest(),
self.2.harvest(),
self.3.harvest(),
)
}
}
#[allow(missing_docs)]
mod downcast_impl {
downcast!(super::Fruit);
}
#[cfg(test)]
pub mod tests {
use super::*;
use test::Bencher;
use DocId;
use Score;
use core::SegmentReader;
use SegmentLocalId;
use fastfield::U32FastFieldReader;
use schema::Field;
/// Stores all of the doc ids.
/// This collector is only used for tests.
/// It is unusable in practice, as it does not store
/// the segment ordinals
pub struct TestCollector {
offset: DocId,
segment_max_doc: DocId,
docs: Vec<DocId>,
}
impl TestCollector {
/// Return the exhaustive list of documents.
pub fn docs(self,) -> Vec<DocId> {
self.docs
}
}
impl Default for TestCollector {
fn default() -> TestCollector {
TestCollector {
docs: Vec::new(),
offset: 0,
segment_max_doc: 0,
}
}
}
impl Collector for TestCollector {
fn set_segment(&mut self, _: SegmentLocalId, reader: &SegmentReader) -> Result<()> {
self.offset += self.segment_max_doc;
self.segment_max_doc = reader.max_doc();
Ok(())
}
fn collect(&mut self, doc: DocId, _score: Score) {
self.docs.push(doc + self.offset);
}
}
/// Collects in order all of the fast fields for all of the
/// doc in the `DocSet`
///
/// This collector is mainly useful for tests.
pub struct FastFieldTestCollector {
vals: Vec<u32>,
field: Field,
ff_reader: Option<U32FastFieldReader>,
}
impl FastFieldTestCollector {
pub fn for_field(field: Field) -> FastFieldTestCollector {
FastFieldTestCollector {
vals: Vec::new(),
field: field,
ff_reader: None,
}
}
pub fn vals(self,) -> Vec<u32> {
self.vals
}
}
impl Collector for FastFieldTestCollector {
fn set_segment(&mut self, _: SegmentLocalId, reader: &SegmentReader) -> Result<()> {
self.ff_reader = reader.get_fast_field_reader(self.field);
Ok(())
}
fn collect(&mut self, doc: DocId, _score: Score) {
let val = self.ff_reader.as_ref().unwrap().get(doc);
self.vals.push(val);
}
}
#[bench]
fn build_collector(b: &mut Bencher) {
b.iter(|| {
let mut count_collector = CountCollector::default();
let docs: Vec<u32> = (0..1_000_000).collect();
for doc in docs {
count_collector.collect(doc, 1f32);
}
count_collector.count()
});
}
}
pub mod tests;
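To make the factory relationship above concrete, here is a minimal sketch of a user-defined collector written against the refactored `Collector`/`SegmentCollector` traits shown in this file. It assumes the crate-root re-exports used above (`DocId`, `Score`, `SegmentLocalId`, `SegmentReader`, `Result`); `DocCounter` is a hypothetical name, and the built-in `Count` collector plays the same role in the crate itself.
use tantivy::collector::{Collector, SegmentCollector};
use tantivy::{DocId, Result, Score, SegmentLocalId, SegmentReader};

/// Counts matching documents; the per-segment work happens in `DocCounterSegment`.
struct DocCounter;
struct DocCounterSegment {
    count: usize,
}

impl Collector for DocCounter {
    // `usize` is a valid `Fruit` thanks to the blanket impl above.
    type Fruit = usize;
    type Child = DocCounterSegment;

    fn for_segment(
        &self,
        _segment_local_id: SegmentLocalId,
        _reader: &SegmentReader,
    ) -> Result<DocCounterSegment> {
        Ok(DocCounterSegment { count: 0 })
    }

    fn requires_scoring(&self) -> bool {
        false
    }

    fn merge_fruits(&self, segment_counts: Vec<usize>) -> Result<usize> {
        Ok(segment_counts.into_iter().sum())
    }
}

impl SegmentCollector for DocCounterSegment {
    type Fruit = usize;

    fn collect(&mut self, _doc: DocId, _score: Score) {
        self.count += 1;
    }

    fn harvest(self) -> usize {
        self.count
    }
}
A call such as `searcher.search(&query, &DocCounter)` would then return the merged `usize` across all segments.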


@@ -1,63 +1,289 @@
use super::Collector;
use super::SegmentCollector;
use collector::Fruit;
use downcast::Downcast;
use std::marker::PhantomData;
use DocId;
use Result;
use Score;
use SegmentLocalId;
use SegmentReader;
use TantivyError;
pub struct MultiFruit {
sub_fruits: Vec<Option<Box<Fruit>>>,
}
pub struct CollectorWrapper<TCollector: Collector>(TCollector);
impl<TCollector: Collector> Collector for CollectorWrapper<TCollector> {
type Fruit = Box<Fruit>;
type Child = Box<BoxableSegmentCollector>;
fn for_segment(
&self,
segment_local_id: u32,
reader: &SegmentReader,
) -> Result<Box<BoxableSegmentCollector>> {
let child = self.0.for_segment(segment_local_id, reader)?;
Ok(Box::new(SegmentCollectorWrapper(child)))
}
fn requires_scoring(&self) -> bool {
self.0.requires_scoring()
}
fn merge_fruits(&self, children: Vec<<Self as Collector>::Fruit>) -> Result<Box<Fruit>> {
let typed_fruit: Vec<TCollector::Fruit> = children
.into_iter()
.map(|untyped_fruit| {
Downcast::<TCollector::Fruit>::downcast(untyped_fruit)
.map(|boxed_but_typed| *boxed_but_typed)
.map_err(|e| {
let err_msg = format!("Failed to cast child collector fruit. {:?}", e);
TantivyError::InvalidArgument(err_msg)
})
})
.collect::<Result<_>>()?;
let merged_fruit = self.0.merge_fruits(typed_fruit)?;
Ok(Box::new(merged_fruit))
}
}
impl SegmentCollector for Box<BoxableSegmentCollector> {
type Fruit = Box<Fruit>;
fn collect(&mut self, doc: u32, score: f32) {
self.as_mut().collect(doc, score);
}
fn harvest(self) -> Box<Fruit> {
BoxableSegmentCollector::harvest_from_box(self)
}
}
pub trait BoxableSegmentCollector {
fn collect(&mut self, doc: u32, score: f32);
fn harvest_from_box(self: Box<Self>) -> Box<Fruit>;
}
pub struct SegmentCollectorWrapper<TSegmentCollector: SegmentCollector>(TSegmentCollector);
impl<TSegmentCollector: SegmentCollector> BoxableSegmentCollector
for SegmentCollectorWrapper<TSegmentCollector>
{
fn collect(&mut self, doc: u32, score: f32) {
self.0.collect(doc, score);
}
fn harvest_from_box(self: Box<Self>) -> Box<Fruit> {
Box::new(self.0.harvest())
}
}
pub struct FruitHandle<TFruit: Fruit> {
pos: usize,
_phantom: PhantomData<TFruit>,
}
impl<TFruit: Fruit> FruitHandle<TFruit> {
pub fn extract(self, fruits: &mut MultiFruit) -> TFruit {
let boxed_fruit = fruits.sub_fruits[self.pos].take().expect("");
*Downcast::<TFruit>::downcast(boxed_fruit).expect("Failed")
}
}
/// `MultiCollector` makes it possible to collect on more than one collector.
/// It should only be used for use cases where the collector types are unknown
/// at compile time.
/// If the type of the collectors is known, you should prefer to use `ChainedCollector`.
///
/// ```rust
/// #[macro_use]
/// extern crate tantivy;
/// use tantivy::schema::{Schema, TEXT};
/// use tantivy::{Index, Result};
/// use tantivy::collector::{Count, TopDocs, MultiCollector};
/// use tantivy::query::QueryParser;
///
/// # fn main() { example().unwrap(); }
/// fn example() -> Result<()> {
/// let mut schema_builder = Schema::builder();
/// let title = schema_builder.add_text_field("title", TEXT);
/// let schema = schema_builder.build();
/// let index = Index::create_in_ram(schema);
/// {
/// let mut index_writer = index.writer(3_000_000)?;
/// index_writer.add_document(doc!(
/// title => "The Name of the Wind",
/// ));
/// index_writer.add_document(doc!(
/// title => "The Diary of Muadib",
/// ));
/// index_writer.add_document(doc!(
/// title => "A Dairy Cow",
/// ));
/// index_writer.add_document(doc!(
/// title => "The Diary of a Young Girl",
/// ));
/// index_writer.commit().unwrap();
/// }
///
/// index.load_searchers()?;
/// let searcher = index.searcher();
///
/// let mut collectors = MultiCollector::new();
/// let top_docs_handle = collectors.add_collector(TopDocs::with_limit(2));
/// let count_handle = collectors.add_collector(Count);
/// let query_parser = QueryParser::for_index(&index, vec![title]);
/// let query = query_parser.parse_query("diary")?;
/// let mut multi_fruit = searcher.search(&query, &collectors)?;
///
/// let count = count_handle.extract(&mut multi_fruit);
/// let top_docs = top_docs_handle.extract(&mut multi_fruit);
///
/// # assert_eq!(count, 2);
/// # assert_eq!(top_docs.len(), 2);
///
/// Ok(())
/// }
/// ```
#[allow(clippy::type_complexity)]
#[derive(Default)]
pub struct MultiCollector<'a> {
collector_wrappers:
Vec<Box<Collector<Child = Box<BoxableSegmentCollector>, Fruit = Box<Fruit>> + 'a>>,
}
impl<'a> MultiCollector<'a> {
/// Create a new `MultiCollector`
pub fn new() -> Self {
Default::default()
}
/// Add a new collector to our `MultiCollector`.
pub fn add_collector<'b: 'a, TCollector: Collector + 'b>(
&mut self,
collector: TCollector,
) -> FruitHandle<TCollector::Fruit> {
let pos = self.collector_wrappers.len();
self.collector_wrappers
.push(Box::new(CollectorWrapper(collector)));
FruitHandle {
pos,
_phantom: PhantomData,
}
}
}
impl<'a> Collector for MultiCollector<'a> {
type Fruit = MultiFruit;
type Child = MultiCollectorChild;
fn for_segment(
&self,
segment_local_id: SegmentLocalId,
segment: &SegmentReader,
) -> Result<MultiCollectorChild> {
let children = self
.collector_wrappers
.iter()
.map(|collector_wrapper| collector_wrapper.for_segment(segment_local_id, segment))
.collect::<Result<Vec<_>>>()?;
Ok(MultiCollectorChild { children })
}
fn requires_scoring(&self) -> bool {
self.collector_wrappers.iter().any(|c| c.requires_scoring())
}
fn merge_fruits(&self, segments_multifruits: Vec<MultiFruit>) -> Result<MultiFruit> {
let mut segment_fruits_list: Vec<Vec<Box<Fruit>>> = (0..self.collector_wrappers.len())
.map(|_| Vec::with_capacity(segments_multifruits.len()))
.collect::<Vec<_>>();
for segment_multifruit in segments_multifruits {
for (idx, segment_fruit_opt) in segment_multifruit.sub_fruits.into_iter().enumerate() {
if let Some(segment_fruit) = segment_fruit_opt {
segment_fruits_list[idx].push(segment_fruit);
}
}
}
let sub_fruits = self
.collector_wrappers
.iter()
.zip(segment_fruits_list)
.map(|(child_collector, segment_fruits)| {
Ok(Some(child_collector.merge_fruits(segment_fruits)?))
})
.collect::<Result<_>>()?;
Ok(MultiFruit { sub_fruits })
}
}
pub struct MultiCollectorChild {
children: Vec<Box<BoxableSegmentCollector>>,
}
impl SegmentCollector for MultiCollectorChild {
type Fruit = MultiFruit;
fn collect(&mut self, doc: DocId, score: Score) {
for child in &mut self.children {
child.collect(doc, score);
}
}
fn harvest(self) -> MultiFruit {
MultiFruit {
sub_fruits: self
.children
.into_iter()
.map(|child| Some(child.harvest()))
.collect(),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use collector::{Count, TopDocs};
use query::TermQuery;
use schema::IndexRecordOption;
use schema::{Schema, TEXT};
use Index;
use Term;
#[test]
fn test_multi_collector() {
let mut schema_builder = Schema::builder();
let text = schema_builder.add_text_field("text", TEXT);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
{
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
index_writer.add_document(doc!(text=>"abc"));
index_writer.add_document(doc!(text=>"abc abc abc"));
index_writer.add_document(doc!(text=>"abc abc"));
index_writer.commit().unwrap();
index_writer.add_document(doc!(text=>""));
index_writer.add_document(doc!(text=>"abc abc abc abc"));
index_writer.add_document(doc!(text=>"abc"));
index_writer.commit().unwrap();
}
index.load_searchers().unwrap();
let searcher = index.searcher();
let term = Term::from_field_text(text, "abc");
let query = TermQuery::new(term, IndexRecordOption::Basic);
let mut collectors = MultiCollector::new();
let topdocs_handler = collectors.add_collector(TopDocs::with_limit(2));
let count_handler = collectors.add_collector(Count);
let mut multifruits = searcher.search(&query, &mut collectors).unwrap();
assert_eq!(count_handler.extract(&mut multifruits), 5);
assert_eq!(topdocs_handler.extract(&mut multifruits).len(), 2);
}
}
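The wrapper types above all hinge on one pattern: segment fruits are type-erased into boxed trait objects, and the typed `FruitHandle` recovers the concrete type at extraction time. Below is a toy sketch of that round trip using only `std::any` (the crate itself goes through the `downcast` crate); it illustrates the design and is not code from this file.
use std::any::Any;

fn main() {
    // "harvest": a concrete fruit is boxed into a type-erased trait object,
    // as `BoxableSegmentCollector::harvest_from_box` does with `Box<Fruit>`.
    let erased: Box<dyn Any> = Box::new(42usize);

    // "extract": the handle downcasts it back to the concrete fruit type,
    // as `FruitHandle::extract` does for its `TFruit` parameter.
    let count: usize = *erased.downcast::<usize>().expect("fruit type mismatch");
    assert_eq!(count, 42);
}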

src/collector/tests.rs

@@ -0,0 +1,201 @@
use super::*;
use core::SegmentReader;
use fastfield::BytesFastFieldReader;
use fastfield::FastFieldReader;
use schema::Field;
use DocAddress;
use DocId;
use Score;
use SegmentLocalId;
/// Stores all of the doc ids.
/// This collector is only used for tests.
/// It is unusable in practice, as it does not store
/// the segment ordinals
pub struct TestCollector;
pub struct TestSegmentCollector {
segment_id: SegmentLocalId,
fruit: TestFruit,
}
#[derive(Default)]
pub struct TestFruit {
docs: Vec<DocAddress>,
scores: Vec<Score>,
}
impl TestFruit {
/// Return the list of matching documents exhaustively.
pub fn docs(&self) -> &[DocAddress] {
&self.docs[..]
}
pub fn scores(&self) -> &[Score] {
&self.scores[..]
}
}
impl Collector for TestCollector {
type Fruit = TestFruit;
type Child = TestSegmentCollector;
fn for_segment(
&self,
segment_id: SegmentLocalId,
_reader: &SegmentReader,
) -> Result<TestSegmentCollector> {
Ok(TestSegmentCollector {
segment_id,
fruit: TestFruit::default(),
})
}
fn requires_scoring(&self) -> bool {
true
}
fn merge_fruits(&self, mut children: Vec<TestFruit>) -> Result<TestFruit> {
children.sort_by_key(|fruit| {
if fruit.docs().is_empty() {
0
} else {
fruit.docs()[0].segment_ord()
}
});
let mut docs = vec![];
let mut scores = vec![];
for child in children {
docs.extend(child.docs());
scores.extend(child.scores);
}
Ok(TestFruit { docs, scores })
}
}
impl SegmentCollector for TestSegmentCollector {
type Fruit = TestFruit;
fn collect(&mut self, doc: DocId, score: Score) {
self.fruit.docs.push(DocAddress(self.segment_id, doc));
self.fruit.scores.push(score);
}
fn harvest(self) -> <Self as SegmentCollector>::Fruit {
self.fruit
}
}
/// Collects in order all of the fast fields for all of the
/// doc in the `DocSet`
///
/// This collector is mainly useful for tests.
pub struct FastFieldTestCollector {
field: Field,
}
pub struct FastFieldSegmentCollector {
vals: Vec<u64>,
reader: FastFieldReader<u64>,
}
impl FastFieldTestCollector {
pub fn for_field(field: Field) -> FastFieldTestCollector {
FastFieldTestCollector { field }
}
}
impl Collector for FastFieldTestCollector {
type Fruit = Vec<u64>;
type Child = FastFieldSegmentCollector;
fn for_segment(
&self,
_: SegmentLocalId,
reader: &SegmentReader,
) -> Result<FastFieldSegmentCollector> {
Ok(FastFieldSegmentCollector {
vals: Vec::new(),
reader: reader.fast_field_reader(self.field)?,
})
}
fn requires_scoring(&self) -> bool {
false
}
fn merge_fruits(&self, children: Vec<Vec<u64>>) -> Result<Vec<u64>> {
Ok(children.into_iter().flat_map(|v| v.into_iter()).collect())
}
}
impl SegmentCollector for FastFieldSegmentCollector {
type Fruit = Vec<u64>;
fn collect(&mut self, doc: DocId, _score: Score) {
let val = self.reader.get(doc);
self.vals.push(val);
}
fn harvest(self) -> Vec<u64> {
self.vals
}
}
/// Collects in order all of the fast field bytes for all of the
/// docs in the `DocSet`
///
/// This collector is mainly useful for tests.
pub struct BytesFastFieldTestCollector {
field: Field,
}
pub struct BytesFastFieldSegmentCollector {
vals: Vec<u8>,
reader: BytesFastFieldReader,
}
impl BytesFastFieldTestCollector {
pub fn for_field(field: Field) -> BytesFastFieldTestCollector {
BytesFastFieldTestCollector { field }
}
}
impl Collector for BytesFastFieldTestCollector {
type Fruit = Vec<u8>;
type Child = BytesFastFieldSegmentCollector;
fn for_segment(
&self,
_segment_local_id: u32,
segment: &SegmentReader,
) -> Result<BytesFastFieldSegmentCollector> {
Ok(BytesFastFieldSegmentCollector {
vals: Vec::new(),
reader: segment.bytes_fast_field_reader(self.field)?,
})
}
fn requires_scoring(&self) -> bool {
false
}
fn merge_fruits(&self, children: Vec<Vec<u8>>) -> Result<Vec<u8>> {
Ok(children.into_iter().flat_map(|c| c.into_iter()).collect())
}
}
impl SegmentCollector for BytesFastFieldSegmentCollector {
type Fruit = Vec<u8>;
fn collect(&mut self, doc: u32, _score: f32) {
let data = self.reader.get_val(doc);
self.vals.extend(data);
}
fn harvest(self) -> <Self as SegmentCollector>::Fruit {
self.vals
}
}

Some files were not shown because too many files have changed in this diff.