## Summary
Adds a Rust expression builder API as a type-safe alternative to SQL
strings for query filters.
## Motivation
Filtering with raw SQL strings can be awkward when using variables and
special types.
Closes #3038
---------
Co-authored-by: Will Jones <willjones127@gmail.com>
When input data is sufficiently large, we automatically split it into
parallel writes using a round-robin exchange operator. We sample the
first batch to estimate the data width, and target a partition size of
1 million rows or 2 GB, whichever is smaller.
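As a rough sketch of the sizing heuristic (the real logic lives in the
Rust write pipeline; the constant and function names below are made up):
```python
# Hypothetical sketch of the partition-sizing heuristic described above.
MAX_ROWS_PER_WRITE = 1_000_000
MAX_BYTES_PER_WRITE = 2 * 1024**3  # 2 GB


def rows_per_partition(sampled_batch_rows: int, sampled_batch_bytes: int) -> int:
    """Estimate how many rows each parallel writer should receive."""
    bytes_per_row = max(sampled_batch_bytes // max(sampled_batch_rows, 1), 1)
    # Target 1 million rows or 2 GB per write, whichever is smaller.
    return min(MAX_ROWS_PER_WRITE, MAX_BYTES_PER_WRITE // bytes_per_row)
```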
This hooks up a new writer implementation for the `add()` method. The
main immediate benefit is that it allows streaming requests to remote
tables while still supporting retries for most inputs.
In NodeJS, we always convert the data to `Vec<RecordBatch>`, so it's
always retryable.
For Python, all input types are retryable except `Iterator` and
`pa.RecordBatchReader`, which can only be consumed once. Some, like
`pa.dataset.Dataset`, are retryable *and* streaming.
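For example (a minimal sketch with a made-up path), a `pyarrow` dataset
can hand out a fresh reader for every attempt, which is what makes it
both streaming and retryable:
```python
import pyarrow.dataset as ds

# Illustrative only: a dataset can be re-scanned, so a failed write to a
# remote table can be retried without buffering everything in memory.
dataset = ds.dataset("data/events/", format="parquet")  # made-up path


def fresh_reader():
    # Each retry can open a new stream of record batches from the dataset.
    return dataset.scanner().to_reader()

# By contrast, a bare pa.RecordBatchReader (or a plain Python iterator of
# batches) can only be consumed once, so it cannot be replayed after a failure.
```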
A lot of the changes here are to make the new DataFusion write pipeline
maintain the same behavior as the existing Python-based preprocessing,
such as:
* casting input data to target schema
* rejecting NaN values if `on_bad_vectors="error"`
* applying embedding functions.
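As a loose sketch of the first two behaviors (illustrative only; the real
pipeline operates on Arrow batches inside DataFusion, and the helper name
is made up):
```python
import math

import pyarrow as pa


def preprocess(table: pa.Table, target_schema: pa.Schema,
               on_bad_vectors: str = "error") -> pa.Table:
    """Made-up helper mirroring the preprocessing steps listed above."""
    # Cast the input data to the target schema.
    table = table.cast(target_schema)
    # Reject NaN values when on_bad_vectors="error" (assumes a "vector" column).
    if on_bad_vectors == "error":
        for vector in table["vector"].to_pylist():
            if vector is not None and any(math.isnan(v) for v in vector):
                raise ValueError("vector contains NaN values")
    return table
```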
In future PRs, we'll enhance these by moving the embedding calls into
DataFusion and making sure we parallelize them. See:
https://github.com/lancedb/lancedb/issues/3048
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Upgrades `@napi-rs/cli` from v2 to v3 and the `napi`/`napi-derive` Rust
crates to 3.x
- Fixes a bug
([napi-rs#1170](https://github.com/napi-rs/napi-rs/issues/1170)) where
the CLI failed to locate the built `.node` binary when a custom Cargo
target directory is set (via `config.toml`)
## Changes
**package.json / CLI**:
- `napi.name` → `napi.binaryName`, `napi.triples` → `napi.targets`
- Removed `--no-const-enum` flag and fixed output dir arg
- `napi universal` → `napi universalize`
**Rust API migration**:
- `#[napi::module_init]` → `#[napi_derive::module_init]`
- `napi::JsObject` → `Object`, `.get::<_, T>()` → `.get::<T>()`
- `ErrorStrategy` removed; `ThreadsafeFunction` now takes an explicit
`Return` type with `CalleeHandled = false` const generic
- `JsFunction` + `create_threadsafe_function` replaced by typed
`Function<Args, Return>` + `build_threadsafe_function().build()`
- `RerankerCallbacks` struct removed (`Function<'env,...>` can't be
stored in structs); `VectorQuery::rerank` now accepts the function
directly
- `ClassInstance::clone()` now returns `ClassInstance`; fixed with an
explicit deref
- `Vec<u8>` in `#[napi(object)]` now maps to `Array<number>` in v3;
changed to `Buffer` to preserve the TypeScript `Buffer` type
**TypeScript**:
- `inner.rerank({ rerankHybrid: async (_, args) => ... })` →
`inner.rerank(async (args) => ...)`
- Header provider callback wrapped in `async` to match stricter typed
constructor signature
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary
`DatasetConsistencyWrapper::update()` only stored datasets with a
strictly newer
version. This caused `migrate_manifest_paths_v2` to silently drop its
update since
the migration renames files without bumping the dataset version. The
subsequent
`uses_v2_manifest_paths()` call would then return the stale cached
dataset.
Changed the version check from `>` to `>=` so same-version updates are
accepted.
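A minimal sketch of the comparison (the real check is in the Rust
`DatasetConsistencyWrapper::update()`; this only illustrates the logic):
```python
def should_store(cached_version: int, incoming_version: int) -> bool:
    """Illustrative only: decide whether update() keeps the incoming dataset."""
    # Before the fix this was a strict ">", which silently dropped
    # same-version updates such as the manifest-path migration.
    return incoming_version >= cached_version
```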
## Test plan
- [x] Existing `test_create_table_v2_manifest_paths_async` Python test
should pass
- [x] Existing `should be able to migrate tables to the V2 manifest
paths` NodeJS test should pass
- [x] All dataset wrapper unit tests pass locally
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
This updates `DatasetConsistencyWrapper` to block less:
1. `DatasetConsistencyWrapper::get()` just returns `Arc<Dataset>` now,
instead of a guard that blocks writes.
`DatasetConsistencyWrapper::get_mut()` is gone; now write methods just
use `get()` and then later call `update()` with the new version. This
means a given table handle can do concurrent reads **and** writes.
2. In weak consistency mode, we now check for dataset updates in the
background instead of blocking calls to `get()`.
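A rough illustration of the new pattern (a Python stand-in only; the real
wrapper is Rust and hands out `Arc<Dataset>` snapshots):
```python
import threading


class ConsistencyWrapperSketch:
    """Illustrative stand-in for the Rust DatasetConsistencyWrapper."""

    def __init__(self, dataset):
        self._lock = threading.Lock()
        self._dataset = dataset

    def get(self):
        # Readers get a cheap snapshot; the lock is only held while copying
        # the reference, so reads never block behind an in-progress write.
        with self._lock:
            return self._dataset

    def update(self, new_dataset):
        # Write methods call get(), do their work, then publish the result.
        with self._lock:
            self._dataset = new_dataset
```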
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
## Summary
Fixes #1679
This PR prevents the OpenAI embedding function from retrying when
receiving a 401 Unauthorized error. Authentication errors are permanent
failures that won't be fixed by retrying, yet the current implementation
retries all exceptions up to 7 times by default.
## Changes
- Modified `retry_with_exponential_backoff` in `utils.py` to check for
non-retryable errors before retrying
- Added `_is_non_retryable_error` helper function that detects:
- Exceptions with name `AuthenticationError` (OpenAI's 401 error)
- Exceptions with `status_code` attribute of 401 or 403
- Enhanced OpenAI embeddings to explicitly catch and re-raise
`AuthenticationError` with better logging
- Added unit test `test_openai_no_retry_on_401` to verify authentication
errors don't trigger retries
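A simplified sketch of the check (the names follow the description above;
the exact implementation in `utils.py` may differ):
```python
def _is_non_retryable_error(exc: Exception) -> bool:
    """Return True for errors that retrying cannot fix (sketch)."""
    # OpenAI raises AuthenticationError for 401s; match by class name so we
    # don't need to import the openai package here.
    if type(exc).__name__ == "AuthenticationError":
        return True
    # Any exception carrying an HTTP status of 401/403 is a permanent failure.
    return getattr(exc, "status_code", None) in (401, 403)


def retry_with_exponential_backoff(func, max_retries: int = 7):
    """Sketch of the wrapper: bail out immediately on non-retryable errors."""
    def wrapper(*args, **kwargs):
        for attempt in range(max_retries + 1):
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                if _is_non_retryable_error(exc) or attempt == max_retries:
                    raise
                # ... sleep with exponential backoff, then retry ...
    return wrapper
```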
## Test Plan
- Added test that verifies:
1. A function raising `AuthenticationError` is only called once
2. No retry delays occur (sleep is never called)
- Existing tests continue to pass
- Formatting applied via `make format`
## Example Behavior
**Before**: With an invalid API key, users would see 7 retry attempts
over ~2 minutes:
```
WARNING:root:Error occurred: Error code: 401 - {'error': {'message': 'Incorrect API key provided...'}}
Retrying in 3.97 seconds (retry 1 of 7)
WARNING:root:Error occurred: Error code: 401...
Retrying in 7.94 seconds (retry 2 of 7)
...
```
**After**: With an invalid API key, the error is raised immediately:
```
ERROR:root:Authentication failed: Invalid API key provided
AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided...'}}
```
This provides better UX and prevents unnecessary API calls that would
fail anyway.
---------
Co-authored-by: Will Jones <willjones127@gmail.com>
There are old and outdated files in our embedding registry that can
confuse coding agents. This PR deprecates the following files, which now
have more modern replacements for generating such embeddings.
- Deprecate `embeddings/siglip.py`
- Deprecate `embeddings/gte.py`
## Why this change?
Per a discussion with @AyushExel, the
[embedding registry directory](1840aa7edc/python/python/lancedb/embeddings)
in the LanceDB repo has a number of outdated files that need to be
deprecated.
See https://github.com/lancedb/docs/issues/85 for the docs gaps that
identified this.
- Add note in `openclip` docs that it can be used for SigLip embeddings,
which it now supports
- Add note in the `sentence-transformers` page that ALL text embedding
models on Hugging Face can be used
There were two issues:
1. The Python code needs access to the underlying Rust table to set up
the permutation reader, and the attributes involved in this differ
between the Python local table and remote table objects.
~~2. The remote table was sending projection dictionaries as arrays of
tuples and (on LanceDB cloud at least) it does not appear this is how
REST servers are set up to receive them.~~ (this is now fixed by #3023)
~~Leaving as draft as this is built on
https://github.com/lancedb/lancedb/pull/3016~~
## Problem
When applying hard filters that result in zero matches, hybrid search
crashes with `IndexError: list index out of range` during reranking.
This happens because empty result tables are passed through the full
reranker pipeline, which expects at least one result.
Traceback from the issue:
```
lancedb/query.py: in _combine_hybrid_results
results = reranker.rerank_hybrid(fts_query, vector_results, fts_results)
lancedb/rerankers/answerdotai.py: in rerank_hybrid
combined_results = self._rerank(combined_results, query)
...
IndexError: list index out of range
```
## Fix
Added an early return in `_combine_hybrid_results` when both vector and
FTS results are empty. Instead of passing empty tables through
normalization, reranking, and score restoration (which can fail in
various ways), we now build a properly-typed empty result table with the
`_relevance_score` column and return it directly.
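Roughly, the early return looks like this (a sketch only; the real code in
`query.py` reuses the existing result schema):
```python
import pyarrow as pa


def _combine_hybrid_results_sketch(
    vector_results: pa.Table, fts_results: pa.Table, with_row_ids: bool = False
) -> pa.Table:
    """Illustrative early return when both result sets are empty."""
    if len(vector_results) == 0 and len(fts_results) == 0:
        fields = list(vector_results.schema)
        fields.append(pa.field("_relevance_score", pa.float32()))
        if with_row_ids and "_rowid" not in vector_results.schema.names:
            fields.append(pa.field("_rowid", pa.uint64()))
        return pa.Table.from_pylist([], schema=pa.schema(fields))
    # Otherwise: normalize, rerank, and restore scores as before.
    ...
```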
## Test
Added `test_empty_hybrid_result_reranker` that exercises
`_combine_hybrid_results` directly with empty vector and FTS tables,
verifying:
- Returns empty table with correct schema
- Includes `_relevance_score` column
- Respects `with_row_ids` flag
Closes #2425
Completes the **merge_insert.rs** checklist item from #2949.
## Changes
- Moved `MergeResult` struct from `table.rs` to `table/merge.rs`
- Moved the `NativeTable::merge_insert` implementation into
`merge::execute_merge_insert()`, with the trait impl now delegating to
it (same pattern as `delete.rs`)
- Moved `test_merge_insert` and `test_merge_insert_use_index` tests into
`table/merge.rs`
- Improved the moved tests to use `memory://` URIs instead of temporary
directories
- Cleaned up unused imports from `table.rs` (`FutureExt`,
`TryFutureExt`, `Either`, `WhenMatched`, `WhenNotMatchedBySource`,
`LanceMergeInsertBuilder`)
- `MergeResult` is re-exported from `table.rs` so the public API is
unchanged
## Testing
`cargo build -p lancedb` compiles cleanly with no warnings.
References #2949. Moved query logic and helpers from `table.rs` to
`query.rs`. Refactored the tests per the guidelines and added coverage
for multi-vector plan structure.
When a table has a read consistency interval, queries within the
interval skip the version check. Once the interval expires, a list call
checks for new versions. If the version hasn't changed, the timer should
reset so the next interval begins, but it didn't. The timer stayed
expired, so every query after that triggered a list call, even though
nothing changed.
This affects all read operations (queries, schema lookups, searches) on
tables with `read_consistency_interval` set. Each operation adds a
`list("_versions/")` call to object storage, adding latency proportional
to the store's list performance. For high-QPS workloads, this can
saturate object store list throughput and significantly degrade query
latency.
Bug flow:
1. Every read operation (query, schema, search) calls `ensure_up_to_date()`.
2. `ensure_up_to_date()` calls `is_up_to_date()`, which compares
`last_consistency_check.elapsed()` against `read_consistency_interval`.
3. If the interval has expired, it calls `reload()`.
4. `reload()` calls `need_reload()`, which calls `latest_version_id()` — this
is the list IOP (`list("_versions/")`).
5. If no new version, `reload()` returns early without resetting
`last_consistency_check`.
6. On the next query, step 2 sees the stale timer again → step 3 → step 4 →
another list IOP.
7. This repeats on every query forever.
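A sketch of the fix (illustrative Python; the real code is Rust, and the
only change that matters is resetting the timer on the early return):
```python
import time


class ReloadSketch:
    """Illustrative only: reset the consistency timer even when nothing changed."""

    def __init__(self, read_consistency_interval: float):
        self.read_consistency_interval = read_consistency_interval
        self.last_consistency_check = time.monotonic()

    def is_up_to_date(self) -> bool:
        elapsed = time.monotonic() - self.last_consistency_check
        return elapsed < self.read_consistency_interval

    def reload(self, current_version: int, latest_version: int) -> None:
        if latest_version == current_version:
            # The fix: reset the timer here too, so the next interval starts
            # and we stop issuing a list("_versions/") call on every query.
            self.last_consistency_check = time.monotonic()
            return
        # ... otherwise check out the newer version ...
        self.last_consistency_check = time.monotonic()
```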
Caches the schema of remote tables and invalidates the cache:
1. After a 30-second TTL
2. When we do an operation that changes the schema (e.g. `add_columns`) or
checks out a different version (e.g. `checkout_version`)
3. When we get a 400, 404, or 500 response
If the schema is retrieved close to the TTL, we optimistically fetch the
schema in the background. This means a continuous stream of queries will
never have the schema fetch on the critical path.
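Sketch of the caching policy (illustrative only; apart from the 30 s TTL,
the thresholds and names below are assumptions):
```python
import threading
import time


class SchemaCacheSketch:
    """Illustrative schema cache with TTL and optimistic background refresh."""

    TTL = 30.0             # seconds, per the description above
    REFRESH_MARGIN = 5.0   # made-up "close to the TTL" threshold

    def __init__(self, fetch_schema):
        self._fetch_schema = fetch_schema
        self._schema = None
        self._fetched_at = 0.0

    def invalidate(self):
        # Called after schema-changing ops, checkout_version, or 400/404/500s.
        self._schema = None

    def get(self):
        age = time.monotonic() - self._fetched_at
        if self._schema is None or age > self.TTL:
            self._schema = self._fetch_schema()
            self._fetched_at = time.monotonic()
        elif age > self.TTL - self.REFRESH_MARGIN:
            # Near the TTL: kick off a background refresh so steady query
            # traffic never waits on a schema fetch.
            self._refresh_in_background()
        return self._schema

    def _refresh_in_background(self):
        def _refresh():
            self._schema = self._fetch_schema()
            self._fetched_at = time.monotonic()

        threading.Thread(target=_refresh, daemon=True).start()
```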
Closes #3014
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
BREAKING CHANGE: Arbitrary `impl RecordBatchReader` is no longer
accepted; it must be converted into a `Box<dyn RecordBatchReader>`.
This PR replaces `IntoArrow` with a new trait `Scannable` to define
input row data. This provides the following advantages:
1. **We can implement `Scannable` for more types than `IntoArrow`, such
as `RecordBatch` and `Vec<RecordBatch>`.** The `IntoArrow` trait was
implemented for arbitrary `T: RecordBatchReader`, and the Rust compiler
would prevent us from implementing it for foreign types like
`RecordBatch` because (theoretically) those types might implement
`RecordBatchReader` in the future. That's why we implement `Scannable`
for `Box<dyn RecordBatchReader>` instead; since it's a concrete type it
doesn't block implementing for other foreign types.
2. **We can potentially replay `Scannable` values**. Previously, we had
to choose between buffering all data in memory and supporting retries of
writes. But because `Scannable` things can optionally support
re-scanning, we now have a way of supporting retries while also
streaming.
3. **`Scannable` can provide hints like `num_rows`, which can be used to
schedule parallel writers.** Without knowing the total number of rows,
it's difficult to know whether it's worth writing multiple files in
parallel.
We don't yet fully take advantage of (2) and (3), but will in future
PRs. For (2), in order to be ready to leverage this, we need to hook the
`Scannable` implementation up to Python and NodeJS bindings. Right now
they always pass down a stream, but we want to make sure they support
retries when possible. And for (3), this will need to be hooked up to
#2939 and to a pipeline for running pre-processing steps (like embedding
generation).
## Other changes
* Moved `create_table` and `add_data` into their own modules. I've
created a follow up issue to split up `table.rs` further, as it's by far
the largest file: https://github.com/lancedb/lancedb/issues/2949
* Eliminated the `HAS_DATA` generic for `CreateTableBuilder`. I didn't
see any public-facing places where we differentiated methods, which is
why I felt this simplification was okay.
* Added an `Error::External` variant and integrated some conversions to
allow certain errors to pass through transparently. This will fully work
once we upgrade Lance and get to take advantage of changes in
https://github.com/lance-format/lance/pull/5606
* Added LZ4 compression support for write requests to remote endpoints.
I checked and this has been supported on the server for > 1 year.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
This changes the output format of `Permutation` in some breaking ways,
but I think the API is still new enough to be considered experimental.
1. To align with both Hugging Face datasets' and PyTorch's expectations,
the default output format is now a list of dicts (row-major) instead of
a dict of lists (column-major). I've added a `python_col` option which
will return the dict of lists.
2. To align with PyTorch's expectations, the `torch` format is now a list
of tensors (row-major) instead of a 2D tensor (column-major). I've added
a `torch_col` option which will return the 2D tensor instead.
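For illustration (made-up data; option names as described above):
```python
# Default (row-major): a list of dicts, matching Hugging Face / torch loaders.
rows = [
    {"id": 1, "vector": [0.1, 0.2]},
    {"id": 2, "vector": [0.3, 0.4]},
]

# With the python_col option (column-major): a dict of lists, the old default.
columns = {
    "id": [1, 2],
    "vector": [[0.1, 0.2], [0.3, 0.4]],
}

# Similarly, the torch format is now a list of per-row tensors, while
# torch_col returns a single 2D tensor instead.
```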
Added tests for torch integration with Permutation
~~Leaving draft until https://github.com/lancedb/lancedb/pull/3013
merges as this is built on top of that~~
Closes #3000
The hybrid search `explain_plan` now shows the reranker as the top-level
node with
the vector and FTS sub-plans indented underneath, instead of just
listing them
separately with no reranker context.
**Before:**
```
Vector Search Plan:
ProjectionExec: ...
FTS Search Plan:
ProjectionExec: ...
```
**After:**
```
RRFReranker(K=60)
Vector Search Plan:
ProjectionExec: ...
FTS Search Plan:
ProjectionExec: ...
```
Other rerankers display similarly; e.g.
`LinearCombinationReranker(weight=0.7, fill=1.0)`,
`MRRReranker(weight_vector=0.5, weight_fts=0.5)`,
`CohereReranker(model_name=name)`.
---------
Signed-off-by: dask-58 <googldhruv@gmail.com>
Co-authored-by: Will Jones <willjones127@gmail.com>
Fixes #2999
The error message previously said `"field_names must be a string when
use_tantivy=False"`, implying users should switch to the soon-to-be-deprecated
Tantivy backend (#2998).
Updated the error message and docstring to instead guide users to create
a separate FTS index for each field.
Signed-off-by: dask-58 <googldhruv@gmail.com>
## Summary
Continues the modularization effort of table operations as outlined in
#2949.
- Extracts optimization operations (`OptimizeAction`, `OptimizeStats`,
`execute_optimize`, `compact_files_impl`, `cleanup_old_versions`,
`optimize_indices`) from
`table.rs` into `table/optimize.rs`
- Public API remains unchanged via re-exports
- Adds comprehensive tests including error cases with message assertions
## Test plan
- [x] All new optimization tests pass
- [x] All existing tests pass
- [x] `cargo clippy` passes with no warnings
- [x] `cargo fmt --check` passes
---------
Co-authored-by: Will Jones <willjones127@gmail.com>
## Summary
- Added `repository` field to all nodejs package.json files (main
package + 7 platform-specific packages)
- This fixes the npm publish E422 error where sigstore provenance
verification fails because the repository.url was empty
## Root Cause
Failing CI:
https://github.com/lancedb/lancedb/actions/runs/21770794768/job/62821570260
npm's sigstore provenance verification requires the `repository.url`
field in package.json to match the GitHub repository URL from the
provenance bundle. The platform-specific packages
(`@lancedb/lancedb-darwin-arm64`, etc.) were missing this field
entirely, causing the publish to fail with:
```
npm error 422 Unprocessable Entity - Error verifying sigstore provenance bundle:
Failed to validate repository information: package.json: "repository.url" is "",
expected to match "https://github.com/lancedb/lancedb" from provenance
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>