Change Footer version handling, Make compression dynamic
Change Footer version handling
Simplify version handling by switching to JSON instead of binary serialization.
fixes #1058
Make compression dynamic
Instead of choosing the compression at compile time via a feature flag, multiple compression algorithms can now be enabled at once, and the one to use is selected at runtime via IndexSettings. Changing the compression algorithm on an existing index is also supported. Which algorithm was used for the doc store is recorded in the DocStoreFooter. The default is the lz4 block format.
fixes #904
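The runtime dispatch described above can be pictured as a small enum stored in the settings and consulted at write time, with the chosen variant recorded so readers can pick the matching decompressor. This is an illustrative sketch with stand-in names and a tag byte in place of the real DocStoreFooter, not tantivy's actual API:

```rust
// Illustrative sketch: a compressor chosen at runtime via settings.
// Names and the tag-byte "footer" are stand-ins, not tantivy's real API.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Compressor {
    Lz4Block,
    Snappy,
}

struct IndexSettings {
    docstore_compression: Compressor,
}

impl Default for IndexSettings {
    fn default() -> Self {
        // Mirrors the change above: lz4 block format is the default.
        IndexSettings { docstore_compression: Compressor::Lz4Block }
    }
}

impl Compressor {
    // Dispatch happens here at runtime instead of behind a feature flag.
    fn compress(&self, bytes: &[u8]) -> Vec<u8> {
        let tag: u8 = match self {
            Compressor::Lz4Block => 0,
            Compressor::Snappy => 1,
        };
        let mut out = vec![tag]; // record which algorithm was used
        out.extend_from_slice(bytes); // stand-in: real code would compress
        out
    }
    // A reader recovers the algorithm from the recorded tag.
    fn from_footer_tag(tag: u8) -> Option<Compressor> {
        match tag {
            0 => Some(Compressor::Lz4Block),
            1 => Some(Compressor::Snappy),
            _ => None,
        }
    }
}
```

This shape also makes "merging segments written with different compressors" natural: each block carries enough information to pick its own decompressor.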
Handle merging of different compressors
Fix feature flag names
Add doc store test for all compressors
* add iterator over documents in docstore
When profiling, I saw that around 8% of the time in a merge was spent in look-ups into the skip index. Since the documents in the merge case are read continuously, we can replace the random access with an iterator over the documents.
Merge Time on Sorted Index Before/After:
24s / 19s
Merge Time on Unsorted Index Before/After:
15s / 13.5s
So we can expect 10-20% faster merges.
This iterator is also important if we add sorting based on a field in the documents.
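The win comes from replacing a per-document skip-index lookup with a single forward scan over the blocks. A schematic version of the two access patterns, using a toy block-addressed store rather than the real skip index:

```rust
// Toy block-addressed store: documents live in blocks, and a random get()
// must first locate the block (the ~8% cost the profile showed), while
// iter() walks each block exactly once, in order.
struct BlockStore {
    blocks: Vec<Vec<String>>, // each inner Vec is one decompressed block
}

impl BlockStore {
    // Random access: a block lookup on every call.
    fn get(&self, doc_id: usize) -> Option<&String> {
        let mut remaining = doc_id;
        for block in &self.blocks {
            if remaining < block.len() {
                return Some(&block[remaining]);
            }
            remaining -= block.len();
        }
        None
    }

    // Sequential access: no per-document lookup, ideal for merges where
    // documents are read continuously.
    fn iter(&self) -> impl Iterator<Item = &String> + '_ {
        self.blocks.iter().flat_map(|b| b.iter())
    }
}
```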
* Update reader.rs
Co-authored-by: Paul Masurel <paul@quickwit.io>
* sort index by field
add sort info to IndexSettings
generate docid mapping for sorted field (only fastfield)
remap singlevalue fastfield
* support docid mapping in multivalue fastfield
move docid mapping to serialization step (less intermediate data for mapping)
add support for docid mapping in multivalue fastfield
* handle docid map in bytes fastfield
* forward docid mapping, remap postings
* fix merge conflicts
* move test to index_sorter
* add docid index mapping old->new
add docid mapping for both directions old->new (used in postings) and new->old (used in fast field)
handle mapping in postings recorder
warn instead of info for MAX_TOKEN_LEN
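Both directions of the mapping can be derived from the sort key in one pass each. A self-contained sketch (the real code sorts by fast field values):

```rust
// new->old: position i holds the old doc id that lands at new doc id i
//   (used by fast field serialization, which emits new ids in order).
// old->new: position i holds the new doc id of old doc id i
//   (used by the postings recorder to remap stored doc ids).
fn doc_id_mappings(sort_values: &[u64]) -> (Vec<u32>, Vec<u32>) {
    let mut new_to_old: Vec<u32> = (0..sort_values.len() as u32).collect();
    // Stable sort, matching the "remove unstable sort" change below.
    new_to_old.sort_by_key(|&old| sort_values[old as usize]);
    let mut old_to_new = vec![0u32; sort_values.len()];
    for (new, &old) in new_to_old.iter().enumerate() {
        old_to_new[old as usize] = new as u32;
    }
    (new_to_old, old_to_new)
}
```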
* remap docid in fieldnorm
* resort docids in recorder, more extensive tests
* handle index sorting in docstore
handle index sort in docstore by saving all the docs in a temp docstore file (SegmentComponent::TempStore). On serialization, the docid mapping is used to create a docstore in the correct order by reading the old docstore.
add docstore sort tests
refactor tests
* refactor
rename docid doc_id
rename docid_map doc_id_map
rename DocidMapping DocIdMapping
fix typo
* u32 to DocId
* better doc_id_map creation
remove unstable sort
* add non mut method to FastFieldWriters
add _mut prefix to &mut methods
* remove sort_index
* fix clippy issues
* fix SegmentComponent iterator
use std::mem::replace
* fix test
* fmt
* handle indexsettings deserialize
* add reading, writing bytes to doc store
get bytes of document in doc store
add store_bytes method to doc writer to accept a serialized document
add serialization index settings test
* rename index_sorter to doc_id_mapping
use BufferLender in recorder
* fix compile issue, make sort_by_field optional
* fix test compile
* validate index settings on merge
validate index settings on merge
forward merge info to SegmentSerializer (for TempStore)
* fix doctest
* add itertools, use kmerge
add itertools, use kmerge
push because rustfmt fails
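itertools::kmerge merges k already-sorted per-segment streams into one sorted stream. The same idea can be sketched with only the standard library, using a min-heap keyed on the next value of each stream (this stdlib version stands in for the itertools call):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Stdlib equivalent of itertools::kmerge: merge k sorted streams.
// Each heap entry is (next value, stream ordinal, index within stream),
// wrapped in Reverse so BinaryHeap pops the smallest value first.
fn kmerge(streams: Vec<Vec<u64>>) -> Vec<u64> {
    let mut heap = BinaryHeap::new();
    for (stream_ord, stream) in streams.iter().enumerate() {
        if let Some(&first) = stream.first() {
            heap.push(Reverse((first, stream_ord, 0usize)));
        }
    }
    let mut merged = Vec::new();
    while let Some(Reverse((value, stream_ord, idx))) = heap.pop() {
        merged.push(value);
        if let Some(&next) = streams[stream_ord].get(idx + 1) {
            heap.push(Reverse((next, stream_ord, idx + 1)));
        }
    }
    merged
}
```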
* implement/test merge for fastfield
implement/test merge for fastfield
rename len to num_deleted in DeleteBitSet
* Use precalculated docid mapping in merger
Use precalculated docid mapping in merger for sorted indices instead of on the fly calculation
Add index creation macro benchmark, but commented out for now, since it is not really usable due to long runtimes and extreme fluctuations. It may be better suited to criterion or an external bench binary.
* fix fast field reader docs
fix fast field reader docs, Error instead of None returned
add u64s_lenient to fastreader
add create docid mapping benchmark
* add test for multifast field merge
refactor test
add test for multifast field merge
* add num_bytes to BytesFastFieldReader
equivalent to num_vals in MultiValuedFastFieldReader
* add MultiValueLength trait
add MultiValueLength trait in order to unify index creation for BytesFastFieldReader and MultiValuedFastFieldReader in merger
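The unification can be sketched as a small trait over "how many values does doc X hold", which is all the merger needs to rebuild the offset index. The trait name follows the commit; the method names and the stand-in reader are assumptions:

```rust
// Sketch: a trait both BytesFastFieldReader and MultiValuedFastFieldReader
// could implement, so the merger rebuilds offset indexes generically.
// Method names here are illustrative, not necessarily tantivy's.
trait MultiValueLength {
    fn get_len(&self, doc: u32) -> u64;
    fn get_total_len(&self) -> u64;
}

// Stand-in reader storing an offset index, as both real readers do:
// offsets[doc]..offsets[doc + 1] is doc's value range.
struct OffsetReader {
    offsets: Vec<u64>,
}

impl MultiValueLength for OffsetReader {
    fn get_len(&self, doc: u32) -> u64 {
        self.offsets[doc as usize + 1] - self.offsets[doc as usize]
    }
    fn get_total_len(&self) -> u64 {
        *self.offsets.last().unwrap()
    }
}
```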
* Add ReaderWithOrdinal, fix
Add ReaderWithOrdinal to associate data to a reader in merger
Fix bytes offset index creation in merger
* add test for merging bytes with sorted docids
* Merge fieldnorm for sorted index
* handle posting list in merge in sorted index
handle posting list in merge in sorted index by using doc id mapping for sorting
reuse SegmentOrdinal type
* handle doc store order in merge in sorted index
* fix typo, cleanup
* make IndexSettings non-optional
* fix type, rename test file
fix type
rename test file
add type
* remove SegmentReaderWithOrdinal accessors
* cargo fmt
* add index sort & merge test to include deletes
* Fix posting list merge issue
Fix posting list merge issue - ensure serializer always gets monotonically increasing doc ids
handle sorting and merging for facets field
* performance: cache field readers, use bytes for doc store merge
* change facet merge test to cover index sorting
* add RawDocument abstraction to access bytes in doc store
* fix deserialization, update changelog
fix deserialization
update changelog
forward error on merge failed
* cache store readers to utilize lru cache (4x performance)
cache store readers to utilize the LRU cache (4x faster, due to fewer decompress calls on the blocks)
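The fix amounts to keeping one store reader per segment alive for the whole merge, so its internal block cache survives across documents instead of being thrown away with each freshly opened reader. A minimal sketch of that caching shape (stand-in types, not tantivy's):

```rust
use std::collections::HashMap;

// Stand-in for the real store reader, which owns an LRU of decompressed
// blocks; opening a new one discards that cache.
struct StoreReader;

// Sketch of the fix: readers are created once per segment and reused.
struct ReaderCache {
    readers: HashMap<usize, StoreReader>, // keyed by segment ordinal
    opened: u32,                          // how many readers were created
}

impl ReaderCache {
    fn new() -> ReaderCache {
        ReaderCache { readers: HashMap::new(), opened: 0 }
    }

    fn reader_for(&mut self, segment_ord: usize) -> &StoreReader {
        if !self.readers.contains_key(&segment_ord) {
            self.opened += 1; // expensive: loses any warm block cache
            self.readers.insert(segment_ord, StoreReader);
        }
        &self.readers[&segment_ord]
    }
}
```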
* add include_temp_doc_store flag in InnerSegmentMeta
unset flag on deserialization and after finalize of a segment
set flag when creating new instances
add lz4 block compressor using lz4_flex, add lz4-block-compression feature flag
add snappy-compression feature flag for snap compressor, make snap crate optional
set lz4-block-compression as default feature flag
Tantivy used to assume that all files could be somehow memory mapped. After this change, Directory returns a `FileSlice` that can be reduced and eventually read into an `OwnedBytes` object. Long and blocking IO operations are still required, but they do not span the entire file.
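The abstraction can be pictured as a byte range over shared data that can be narrowed for free and materialized only for the slice actually needed. A simplified model mirroring the names above, not tantivy's implementation:

```rust
use std::sync::Arc;

// Simplified model: a FileSlice is a (shared bytes, range) pair.
// slice() narrows the range without any IO; read_bytes() materializes
// only the covered range into an OwnedBytes.
#[derive(Clone)]
struct FileSlice {
    data: Arc<Vec<u8>>, // stands in for the underlying file
    start: usize,
    end: usize,
}

struct OwnedBytes(Vec<u8>);

impl FileSlice {
    fn new(data: Vec<u8>) -> FileSlice {
        let end = data.len();
        FileSlice { data: Arc::new(data), start: 0, end }
    }

    // Reduce the slice: relative range, no reading involved.
    fn slice(&self, from: usize, to: usize) -> FileSlice {
        assert!(self.start + to <= self.end);
        FileSlice {
            data: Arc::clone(&self.data),
            start: self.start + from,
            end: self.start + to,
        }
    }

    // Only here does "IO" happen, spanning the slice, not the whole file.
    fn read_bytes(&self) -> OwnedBytes {
        OwnedBytes(self.data[self.start..self.end].to_vec())
    }
}
```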
* WIP implemented is_compatible
hide Footer::from_bytes from public consumption - only Footer::extract is
used outside the module
Add a new error type for IncompatibleIndex
add a prototypical call to footer.is_compatible() in ManagedDirectory::open_read
to make sure we error before reading it further
* Make error handling more ergonomic
Add an error subtype for OpenReadError and converters to TantivyError
* Remove an unnecessary assert
it's followed by the same check, which errors instead of panicking
* Correct the compatibility check logic
Leave a defensive versioned footer check to make sure we add new handling logic
when new footer versions are introduced
Restricted VersionedFooter::from_bytes to be used inside the crate only
remove a half-baked test
* WIP.
* Return an error if index incompatible - closes #662
Enrich the error type with incompatibility
Change return type to Result<bool, TantivyError>, instead of bool
Add an Incompatibility enum that enriches the IncompatibleIndex error variant
with information, which then allows us to generate a developer-friendly hint on how
to upgrade library version or switch feature flags for a different compression
algorithm
Updated changelog
Change the signature of is_compatible
Added documentation to the Incompatibility
Added a conditional test on a Footer with lz4 erroring
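The enriched error can be sketched as an enum carrying exactly what mismatched, so the caller can print an actionable hint. The variant and field names below are assumptions about the shape, not tantivy's actual definitions:

```rust
// Sketch: instead of a bare bool, the footer check reports which
// incompatibility was found. Names are illustrative.
#[derive(Debug, PartialEq)]
enum Incompatibility {
    // Index written by a different (incompatible) library version.
    IndexMismatch { library_version: u32, index_version: u32 },
    // Index written with a compression codec this build doesn't enable;
    // hint: switch the matching compression feature flag on.
    CompressionMismatch { library_codec: String, index_codec: String },
}

const LIBRARY_INDEX_VERSION: u32 = 3;
const LIBRARY_CODEC: &str = "lz4";

fn is_compatible(index_version: u32, index_codec: &str) -> Result<(), Incompatibility> {
    if index_version != LIBRARY_INDEX_VERSION {
        return Err(Incompatibility::IndexMismatch {
            library_version: LIBRARY_INDEX_VERSION,
            index_version,
        });
    }
    if index_codec != LIBRARY_CODEC {
        return Err(Incompatibility::CompressionMismatch {
            library_codec: LIBRARY_CODEC.to_string(),
            index_codec: index_codec.to_string(),
        });
    }
    Ok(())
}
```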
* add checksum check in ManagedDirectory
fix #400
* flush after writing checksum
* don't checksum atomic file access and clone managed_paths
* implement a footer storing metadata about a file
this is more of a PoC; it requires some refactoring into multiple files
`terminate(self)` is implemented, but not used anywhere yet
* address comments and simplify things with new contract
use BitOrder for integer to raw byte conversion
consider that atomic write implies atomic read, which might not actually be true
use some indirection to have a boxable terminating writer
* implement TerminatingWrite and make terminate() be called where it should
add dependency on drop_bomb to help find where terminate() should be called
implement TerminatingWrite for wrapper writers
make tests pass
/!\ some tests seem to pass where they shouldn't
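The contract being introduced is that a writer must be explicitly terminated, so "finished cleanly and wrote its footer" is distinguishable from "dropped mid-write". Because `terminate(self)` consumes the writer, the type system forbids writing after termination. A minimal sketch (the checksum is a trivial stand-in for the real footer):

```rust
use std::io::{self, Write};

// Sketch of the contract: terminate(self) consumes the writer, so no
// further writes are possible once the footer has been appended.
trait TerminatingWrite: Write {
    fn terminate(self) -> io::Result<()>;
}

struct ChecksumWriter {
    buf: Vec<u8>, // stands in for the underlying file
}

impl Write for ChecksumWriter {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        self.buf.extend_from_slice(data);
        Ok(data.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

impl TerminatingWrite for ChecksumWriter {
    fn terminate(mut self) -> io::Result<()> {
        // Append a trivial stand-in checksum (a real footer would hold a
        // CRC plus metadata), then flush before the writer is dropped.
        let checksum: u64 = self.buf.iter().map(|&b| b as u64).sum();
        self.buf.extend_from_slice(&checksum.to_le_bytes());
        self.flush()
    }
}
```

Wrapper writers (buffered, counting, etc.) implement the trait by terminating their inner writer, which is why the commit mentions implementing it "for wrapper writers".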
* remove usage of drop_bomb
* fmt
* add test for checksum
* address some review comments
* update changelog
* fmt
* Refactor deletes
* Removing generation from SegmentUpdater. These have been obsolete for a long time
* Number literal clippy
* Removed clippy useless allow statement
* Enables clearing the index
Closes #510
* Adds an example to clear and rebuild an index
* Addressing code review
Moved the example from examples/ to docstring above `clear`
* Corrected minor typos and missed/duplicate words
* Added stamper.revert method to be used for rollback
Added type alias for Opstamp
Moved to AtomicU64 on stable rust (since 1.34)
* Change the method name and doc-string
* Remove rollback from delete_all_documents
test_add_then_delete_all_documents fails with --test-threads 2
* Passes all the tests with any number of test-threads
(ran locally 5 times)
* Addressed code review
Deleted comments with debug info
changed ReloadPolicy to Manual
* Removing useless garbage_collect call and updated CHANGELOG
* Split Collector into an overall Collector and a per-segment SegmentCollector. Precursor to cross-segment parallelism; as a side benefit, cleans up per-segment fields from being Option<T> to just T.
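The split can be sketched as two traits: the overall collector hands out one fresh segment collector per segment (owned state, no `Option<T>`), then merges the per-segment results. Trait and method names below are illustrative of the design, not necessarily tantivy's exact signatures:

```rust
// Per-segment collection: owns its state outright.
trait SegmentCollector {
    type Fruit;
    fn collect(&mut self, doc: u32);
    fn harvest(self) -> Self::Fruit;
}

// Overall collection: creates one child per segment, merges the fruits.
// Children are independent, which is what enables cross-segment parallelism.
trait Collector {
    type Child: SegmentCollector;
    fn for_segment(&self, segment_ord: u32) -> Self::Child;
    fn merge_fruits(
        &self,
        fruits: Vec<<Self::Child as SegmentCollector>::Fruit>,
    ) -> <Self::Child as SegmentCollector>::Fruit;
}

// Minimal example: counting matching documents.
struct CountCollector;
struct SegmentCountCollector {
    count: usize, // plain usize, not Option<usize>
}

impl SegmentCollector for SegmentCountCollector {
    type Fruit = usize;
    fn collect(&mut self, _doc: u32) {
        self.count += 1;
    }
    fn harvest(self) -> usize {
        self.count
    }
}

impl Collector for CountCollector {
    type Child = SegmentCountCollector;
    fn for_segment(&self, _segment_ord: u32) -> SegmentCountCollector {
        SegmentCountCollector { count: 0 }
    }
    fn merge_fruits(&self, fruits: Vec<usize>) -> usize {
        fruits.into_iter().sum()
    }
}
```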
* Attempt to add MultiCollector back
* working. Chained collector is broken though
* Fix chained collector
* Fix test
* Make Weight Send+Sync for parallelization purposes
* Expose parameters of RangeQuery for external usage
* Removed &mut self
* fixing tests
* Restored TestCollectors
* blop
* multicollector working
* chained collector working
* test broken
* fixing unit test
* blop
* blop
* Blop
* simplifying API
* blop
* better syntax
* Simplifying top_collector
* refactoring
* blop
* Sync with master
* Added multithread search
* Collector refactoring
* Schema::builder
* CR and rustdoc
* CR comments
* blop
* Added an executor
* Sorted the segment readers in the searcher
* Update searcher.rs
* Fixed unit tests
* changed the place where we have the sort-segment-by-count heuristic
* using crossbeam::channel
* inlining
* Comments about panics propagating
* Added unit test for executor panicking
* Readded default
* Removed Default impl
* Added unit test for executor
* Compute space usage of a Searcher / SegmentReader / CompositeFile
* Fix typo
* Add serde Serialize/Deserialize for all the SpaceUsage structs
* Fix indexing
* Public methods for consuming space usage information
* #281: Add a space usage method that takes a SegmentComponent to support code that is unaware of particular segment components, and to make it more likely to update methods when a new component type is added.
* Add support for space usage computation of positions skip index file (#281)
* Add some tests for space usage computation (#281)
* Add skip information for posting list (skip to doc ids)
* Separate num bits from data for positions (skip n positions)
* Address into the positions using an n-position offset
* Added a long skip structure to allow efficient opening of the positions for a given term.
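The long-skip idea can be pictured as cumulative addresses checkpointed at a fixed interval of positions: to open the positions at an n-position offset, seek to the nearest checkpoint and only skip the remainder linearly. A toy model (interval and field names are assumptions):

```rust
// Toy long-skip structure. checkpoints[i] holds the stored address of
// the block containing position i * CHECKPOINT_INTERVAL, so a term's
// n-position offset is reached without scanning from the start.
const CHECKPOINT_INTERVAL: u64 = 1024;

struct LongSkip {
    checkpoints: Vec<u64>,
}

impl LongSkip {
    // Returns (address to seek to, positions left to skip linearly).
    fn seek(&self, n_position_offset: u64) -> (u64, u64) {
        let idx = (n_position_offset / CHECKPOINT_INTERVAL) as usize;
        (self.checkpoints[idx], n_position_offset % CHECKPOINT_INTERVAL)
    }
}
```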