Compare commits

...

44 Commits

Author SHA1 Message Date
Lance Release
7a15337e03 Bump version: 0.24.2-beta.0 → 0.24.2-beta.1 2025-07-22 15:40:17 +00:00
BubbleCal
96c66fd087 feat: support multivector for JS SDK (#2527)
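
A minimal usage sketch (database path, table, and data are hypothetical): a multivector query passes an array of vectors — the new `MultiVector = IntoVector[]` type — instead of a single vector:

```ts
import { connect } from "@lancedb/lancedb";

// Hypothetical setup: a table whose vector column is a multivector
// (a list of vectors per row).
const db = await connect("/tmp/lancedb-example");
const table = await db.openTable("docs");

// MultiVector = IntoVector[]: search with several query vectors at once.
const results = await table
  .query()
  .nearestTo([
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
  ])
  .limit(5)
  .toArray();
```
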
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-07-22 21:19:34 +08:00
Will Jones
0579303602 feat: allow setting custom Session on ListingDatabase (#2526)
## Summary

Add support for providing a custom `Session` when connecting to a
`ListingDatabase`. This allows users to configure object store
registries, caching, and other session-related settings while
maintaining full backward compatibility.

## Usage Example

```rust
use std::sync::Arc;
use lancedb::connect;

let custom_session = Arc::new(lance::session::Session::default());

let db = connect("/path/to/database")
    .session(custom_session)
    .execute()
    .await?;
```

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-21 16:28:39 -07:00
Jack Ye
75edb8756c feat(java): integrate lance-namespace to lancedb Java SDK (#2524) 2025-07-21 14:21:21 -07:00
Will Jones
88283110f4 fix: handle input with missing columns when using embedding functions (#2516)
## Summary

Fixes #2515 by implementing comprehensive support for missing columns in
Arrow table inputs when using embedding functions.

### Problem
Previously, when an Arrow table was passed to `fromDataToBuffer` with
missing columns and a schema containing embedding functions, the system
would fail because `applyEmbeddingsFromMetadata` expected all columns to
be present in the table.
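
A minimal sketch of the fixed behavior (table and column names are hypothetical): rows may now omit columns declared in the schema, and the missing columns are null-filled before embedding functions run:

```ts
import { connect } from "@lancedb/lancedb";
import { Field, Schema, Utf8 } from "apache-arrow";

// Hypothetical schema with three columns.
const schema = new Schema([
  new Field("domain", new Utf8(), true),
  new Field("name", new Utf8(), true),
  new Field("description", new Utf8(), true),
]);

const db = await connect("/tmp/lancedb-example");
const table = await db.createEmptyTable("sites", schema);

// "description" is absent from the input; it is filled with nulls
// instead of causing a failure during embedding application.
await table.add([
  { domain: "google.com", name: "Google" },
  { domain: "facebook.com", name: "Facebook" },
]);
```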

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-18 15:54:25 -07:00
Lance Release
b3a637fdeb Bump version: 0.21.1 → 0.21.2-beta.0 2025-07-18 16:03:28 +00:00
Lance Release
ce24457531 Bump version: 0.24.1 → 0.24.2-beta.0 2025-07-18 16:02:37 +00:00
BubbleCal
087fe6343d test: fix random data may break test case (#2514)
This test adds a new vector and then performs a vector search with a
distance range; it may fail if the new vector becomes the closest one to
the query vector.

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-07-18 16:15:06 +08:00
Wyatt Alt
ab8cbe62dd fix: excessive object storage handle creation in create_table (#2505)
This fixes two bugs with create_table storage handle reuse. The first
issue is that the database object did not previously carry a session that
create_table operations could reuse.

The second issue is that the inheritance logic for create_table and
open_table was sending empty storage options (i.e., Some({})) instead of
None. Lance handles these differently:

* When None is sent, the object store held in the session's storage
registry that was created at "connect" is used. This value stays in the
cache long-term (probably as long as the db reference is held).
* When Some({}) is sent, LanceDB will create a new connection and cache
it under an empty key. However, that cached value remains valid only
as long as the client holds a reference to the table. After that, the
cache is poisoned and the next create_table with the same key will
create a new connection. This defeats reuse if, e.g., Python gc's the
table object before another table is created.

My feeling is that the second path, if intentional, is probably meant to
serve cases where tables override settings and the cached connection is
assumed not to be generally applicable. The bug is that we were engaging
that mechanism for all tables.
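
A hedged sketch of the pattern this fix enables (bucket and options are hypothetical): pass storage options once at connect time and omit them per table, so create_table inherits None and reuses the session's cached object store:

```ts
import { connect } from "@lancedb/lancedb";

// Hypothetical connection-level storage options.
const db = await connect("s3://example-bucket/db", {
  storageOptions: { region: "us-east-1" },
});

// No per-table storageOptions: with this fix None is inherited (not an
// empty Some({})), so the object store created at connect is reused.
const table = await db.createTable("events", [{ id: 1 }]);
```
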
2025-07-17 16:27:23 -07:00
Ayush Chaurasia
f076bb41f4 feat: add support for returning all scores with rerankers (#2509)
Previously `return_score="all"` was supported only for the default
reranker (RRF) and not the model-based rerankers.
This adds support for keeping all scores in the base reranker so that
all model-based rerankers can use it. It's a slower path than keeping
just the relevance score, but it can be useful for debugging.
2025-07-15 21:03:03 +05:30
BubbleCal
902fb83d54 fix: set_lance_version may miss features when upgrading lance (#2510)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-07-15 20:11:10 +08:00
BubbleCal
779118339f chore: upgrade lance to 0.31.2-beta.3 (#2508)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-07-15 17:08:11 +08:00
BubbleCal
03b62599d7 feat: support ngram tokenizer (#2507)
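
A minimal sketch (data is hypothetical) of the new `ngram` base tokenizer, which lets substring queries such as `lan` match indexed text; `ngramMinLength`/`ngramMaxLength` and `prefixOnly` are the related options:

```ts
import { connect, Index } from "@lancedb/lancedb";

const db = await connect("/tmp/lancedb-example");
const table = await db.createTable("docs", [
  { text: "lance database" },
  { text: "lance is cool" },
]);

// Index "text" with the ngram base tokenizer so substrings match.
await table.createIndex("text", {
  config: Index.fts({ baseTokenizer: "ngram", ngramMinLength: 3 }),
});

const hits = await table.search("lan").toArray(); // matches both rows
```
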
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-07-15 16:36:08 +08:00
Benjamin Schmidt
4c999fb651 chore: fix cleanupOlderThan docs (#2504)
Thanks for all your work.

The docstring for `OptimizeOptions` seems to reference a non-existent
method on `Table`. I believe this is the correct example for
`cleanupOlderThan`.

This also appears in the generated docs, but I assume those live
downstream from this code?
2025-07-15 16:23:10 +08:00
Lance Release
6d23d32ab5 Bump version: 0.21.1-beta.2 → 0.21.1 2025-07-10 21:36:59 +00:00
Lance Release
704cec34e1 Bump version: 0.21.1-beta.1 → 0.21.1-beta.2 2025-07-10 21:36:26 +00:00
Lance Release
a300a238db Bump version: 0.24.1-beta.2 → 0.24.1 2025-07-10 21:36:02 +00:00
Lance Release
a41ff1df0a Bump version: 0.24.1-beta.1 → 0.24.1-beta.2 2025-07-10 21:36:02 +00:00
Weston Pace
77b005d849 feat: update lance to 0.31.1 (#2501)
This is preparation for a stable release
2025-07-10 14:35:29 -07:00
CyrusAttoun
167fccc427 fix: change 'return' to 'raise' for unimplemented remote table function (#2484)
Just noticed that we're doing a 'return' instead of a 'raise' while
trying to get remote functionality working for my project. I went ahead
and implemented tests for both of the unimplemented functions (to_pandas
and to_arrow) while I was in there.

---------

Co-authored-by: Cyrus Attoun <jattoun1@gmail.com>
2025-07-09 14:27:08 -07:00
Lance Release
2bffbcefa5 Bump version: 0.21.1-beta.0 → 0.21.1-beta.1 2025-07-09 05:54:20 +00:00
Lance Release
905552f993 Bump version: 0.24.1-beta.0 → 0.24.1-beta.1 2025-07-09 05:53:28 +00:00
BubbleCal
e4898c9313 chore: sync node package-lock (#2491)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-07-09 12:34:03 +08:00
BubbleCal
cab36d94b2 feat: support to specify num_partitions and num_bits (#2488) 2025-07-09 11:36:09 +08:00
Weston Pace
b64252d4fd chore: don't require exact version of half (#2489)
I can't find any reason for pinning this dependency, and the fact that it
is pinned can be annoying to use downstream (e.g., datafusion currently
requires >= 2.6).
2025-07-08 08:36:04 -07:00
Lance Release
6fc006072c Bump version: 0.21.0 → 0.21.1-beta.0 2025-07-07 21:01:30 +00:00
Lance Release
d4bb59b542 Bump version: 0.24.0 → 0.24.1-beta.0 2025-07-07 21:00:38 +00:00
Wyatt Alt
6b2dd6de51 chore: update lance to 31.1-beta.2 (#2487) 2025-07-07 12:53:16 -07:00
BubbleCal
dbccd9e4f1 chore: upgrade lance to 0.31.1-beta.1 (#2486)
This also upgrades:
- datafusion 47.0 -> 48.0
- half 2.5.0 -> 2.6.0

to be consistent with lance.

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-07-07 22:16:43 +08:00
Will Jones
b12ebfed4c fix: only monotonically update dataset (#2479)
Make sure we only update the latest version if it's actually newer. This
is important if there are concurrent queries, as they can take different
amounts of time.
2025-07-01 08:29:37 -07:00
Weston Pace
1dadb2aefa feat: upgrade to lance 0.31.0-beta.1 (#2469)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Chores**
* Updated dependencies to newer versions for improved compatibility and
stability.

* **Refactor**
* Improved internal handling of data ranges and stream lifetimes for
enhanced performance and reliability.
* Simplified code style for Python query object conversions without
affecting functionality.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-30 11:10:53 -07:00
Haoyu Weng
eb9784d7f2 feat(python): batch Ollama embed calls (#2453)
Other embedding integrations such as Cohere and OpenAI already send
requests in batches. We should do the same for Ollama to improve
throughput.

The Ollama [`.embed`
API](63ca747622/ollama/_client.py (L359-L378))
was added in version 0.3.0 (almost a year ago) so I updated the version
requirement in pyproject.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved compatibility with newer versions of the "ollama" package by
requiring version 0.3.0 or higher.
- Enhanced embedding generation to process batches of texts more
efficiently and reliably.
- **Refactor**
	- Improved type consistency and clarity for embedding-related methods.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-30 08:28:14 -07:00
Kilerd Chan
ba755626cc fix: expose parsing error coming from invalid object store uri (#2475)
This PR exposes the error from `ListingCatalog::open_path`, which
previously unwrapped the Result coming from `ObjectStore::from_uri`,
to avoid a panic.
Keming
7760799cb8 docs: fix multivector notebook markdown style (#2447)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Improved formatting and clarity in instructional text within the
Multivector on LanceDB notebook.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-27 15:34:01 -07:00
Will Jones
4beb2d2877 fix(python): make sure explain_plan works with FTS queries (#2466)
## Summary

Fixes issue #2465 where FTS explain plans only showed basic `LanceScan`
instead of detailed execution plans with FTS query details, limits, and
offsets.

## Root Cause

The `FTSQuery::explain_plan()` and `analyze_plan()` methods were missing
the `.full_text_search()` call before calling explain/analyze plan,
causing them to operate on the base query without FTS context.

## Changes

- **Fixed** `explain_plan()` and `analyze_plan()` in `src/query.rs` to
call `.full_text_search()`
- **Added comprehensive test coverage** for FTS explain plans with
limits, offsets, and filters
- **Updated existing tests** to expect correct behavior instead of buggy
behavior

## Before/After

**Before (broken):**
```
LanceScan: uri=..., projection=[...], row_id=false, row_addr=false, ordered=true
```

**After (fixed):**
```
ProjectionExec: expr=[id@2 as id, text@3 as text, _score@1 as _score]
  Take: columns="_rowid, _score, (id), (text)"
    CoalesceBatchesExec: target_batch_size=1024
      GlobalLimitExec: skip=2, fetch=4
        MatchQuery: query=test
```

## Test Plan

- [x] All new FTS explain plan tests pass 
- [x] Existing tests continue to pass
- [x] FTS queries now show proper execution plans with MatchQuery,
limits, filters

Closes #2465

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Tests**
* Added new test cases to verify explain plan output for full-text
search, vector queries with pagination, and queries with filters.

* **Bug Fixes**
* Improved the accuracy of explain plan and analysis output for
full-text search queries, ensuring the correct query details are
reflected.

* **Refactor**
* Enhanced the formatting and hierarchical structure of execution plans
for hybrid queries, providing clearer and more detailed plan
representations.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-06-26 23:35:14 -07:00
Lance Release
a00b8595d1 Bump version: 0.21.0-beta.0 → 0.21.0 2025-06-20 05:47:06 +00:00
Lance Release
9c8314b4fd Bump version: 0.20.1-beta.2 → 0.21.0-beta.0 2025-06-20 05:46:27 +00:00
Lance Release
c625b6f2b2 Bump version: 0.24.0-beta.0 → 0.24.0 2025-06-20 05:46:05 +00:00
Lance Release
bec8fe6547 Bump version: 0.23.1-beta.2 → 0.24.0-beta.0 2025-06-20 05:46:04 +00:00
BubbleCal
dc1150c011 chore: upgrade lance to 0.30.0 (#2451)
lance [release
details](https://github.com/lancedb/lance/releases/tag/v0.30.0)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated dependency specifications to use exact version numbers instead
of referencing a git repository and tag.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-06-20 11:27:20 +08:00
Will Jones
afaefc6264 ci: fix package lock again (#2449)
We are able to push commits over here:
cb7293e073/.github/workflows/make-release-commit.yml (L88-L95)

So I think it's safe to assume this will work.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated workflow configuration to improve authentication and branch
targeting for automated release processes.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-19 08:51:48 -07:00
BubbleCal
cb70ff8cee feat!: switch default FTS to native lance FTS (#2428)
This switches the default FTS to native lance FTS for the Python sync
table API; the other APIs have already switched to the native
implementation.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- The default behavior for creating a full-text search index now uses
the new implementation rather than the legacy one.
- **Bug Fixes**
- Improved handling and error messages for phrase queries in full-text
search.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-06-19 10:38:34 +08:00
BubbleCal
cbb5a841b1 feat: support prefix matching and must_not clause (#2441) 2025-06-19 10:32:32 +08:00
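
For example (a hypothetical sketch over an FTS-indexed table): `prefixLength` keeps the first characters of a fuzzy match fixed, and `Occur.MustNot` excludes matching documents from a boolean query:

```ts
import { BooleanQuery, connect, MatchQuery, Occur } from "@lancedb/lancedb";

const db = await connect("/tmp/lancedb-example");
const table = await db.openTable("docs"); // hypothetical FTS-indexed table

// Fuzzy match where the first 3 characters must match exactly.
const fuzzy = await table
  .search(new MatchQuery("foo", "text", { fuzziness: 3, prefixLength: 3 }))
  .toArray();

// Must contain "cat" but must not contain "dog".
const filtered = await table
  .search(
    new BooleanQuery([
      [Occur.Must, new MatchQuery("cat", "text")],
      [Occur.MustNot, new MatchQuery("dog", "text")],
    ]),
  )
  .toArray();
```
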
Lance Release
c72f6770fd Bump version: 0.20.1-beta.1 → 0.20.1-beta.2 2025-06-18 23:33:57 +00:00
81 changed files with 2324 additions and 567 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.20.1-beta.1"
current_version = "0.21.2-beta.0"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

View File

@@ -550,6 +550,9 @@ jobs:
bash ci/update_lockfiles.sh
- name: Push new commit
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.LANCEDB_RELEASE_TOKEN }}
branch: main
- name: Notify Slack Action
uses: ravsamhq/notify-slack-action@2.3.0
if: ${{ always() }}

Cargo.lock (generated, 399 lines changed)

File diff suppressed because it is too large.

View File

@@ -21,14 +21,16 @@ categories = ["database-implementations"]
rust-version = "1.78.0"
[workspace.dependencies]
lance = { "version" = "=0.30.0", "features" = ["dynamodb"], tag = "v0.30.0-beta.1", git="https://github.com/lancedb/lance.git" }
lance-io = { version = "=0.30.0", tag = "v0.30.0-beta.1", git="https://github.com/lancedb/lance.git" }
lance-index = { version = "=0.30.0", tag = "v0.30.0-beta.1", git="https://github.com/lancedb/lance.git" }
lance-linalg = { version = "=0.30.0", tag = "v0.30.0-beta.1", git="https://github.com/lancedb/lance.git" }
lance-table = { version = "=0.30.0", tag = "v0.30.0-beta.1", git="https://github.com/lancedb/lance.git" }
lance-testing = { version = "=0.30.0", tag = "v0.30.0-beta.1", git="https://github.com/lancedb/lance.git" }
lance-datafusion = { version = "=0.30.0", tag = "v0.30.0-beta.1", git="https://github.com/lancedb/lance.git" }
lance-encoding = { version = "=0.30.0", tag = "v0.30.0-beta.1", git="https://github.com/lancedb/lance.git" }
lance = { "version" = "=0.31.2", "features" = [
"dynamodb",
], "tag" = "v0.31.2-beta.3", "git" = "https://github.com/lancedb/lance.git" }
lance-io = { "version" = "=0.31.2", "tag" = "v0.31.2-beta.3", "git" = "https://github.com/lancedb/lance.git" }
lance-index = { "version" = "=0.31.2", "tag" = "v0.31.2-beta.3", "git" = "https://github.com/lancedb/lance.git" }
lance-linalg = { "version" = "=0.31.2", "tag" = "v0.31.2-beta.3", "git" = "https://github.com/lancedb/lance.git" }
lance-table = { "version" = "=0.31.2", "tag" = "v0.31.2-beta.3", "git" = "https://github.com/lancedb/lance.git" }
lance-testing = { "version" = "=0.31.2", "tag" = "v0.31.2-beta.3", "git" = "https://github.com/lancedb/lance.git" }
lance-datafusion = { "version" = "=0.31.2", "tag" = "v0.31.2-beta.3", "git" = "https://github.com/lancedb/lance.git" }
lance-encoding = { "version" = "=0.31.2", "tag" = "v0.31.2-beta.3", "git" = "https://github.com/lancedb/lance.git" }
# Note that this one does not include pyarrow
arrow = { version = "55.1", optional = false }
arrow-array = "55.1"
@@ -39,20 +41,20 @@ arrow-schema = "55.1"
arrow-arith = "55.1"
arrow-cast = "55.1"
async-trait = "0"
datafusion = { version = "47.0", default-features = false }
datafusion-catalog = "47.0"
datafusion-common = { version = "47.0", default-features = false }
datafusion-execution = "47.0"
datafusion-expr = "47.0"
datafusion-physical-plan = "47.0"
datafusion = { version = "48.0", default-features = false }
datafusion-catalog = "48.0"
datafusion-common = { version = "48.0", default-features = false }
datafusion-execution = "48.0"
datafusion-expr = "48.0"
datafusion-physical-plan = "48.0"
env_logger = "0.11"
half = { "version" = "=2.5.0", default-features = false, features = [
half = { "version" = "2.6.0", default-features = false, features = [
"num-traits",
] }
futures = "0"
log = "0.4"
moka = { version = "0.12", features = ["future"] }
object_store = "0.11.0"
object_store = "0.12.0"
pin-project = "1.0.7"
snafu = "0.8"
url = "2"

View File

@@ -47,10 +47,10 @@ def extract_features(line: str) -> list:
"""
import re
match = re.search(r'"features"\s*=\s*\[(.*?)\]', line)
match = re.search(r'"features"\s*=\s*\[\s*(.*?)\s*\]', line, re.DOTALL)
if match:
features_str = match.group(1)
return [f.strip('"') for f in features_str.split(",")]
return [f.strip('"') for f in features_str.split(",") if len(f) > 0]
return []
@@ -63,10 +63,24 @@ def update_cargo_toml(line_updater):
lines = f.readlines()
new_lines = []
lance_line = ""
is_parsing_lance_line = False
for line in lines:
if line.startswith("lance"):
# Update the line using the provided function
new_lines.append(line_updater(line))
if line.strip().endswith("}"):
new_lines.append(line_updater(line))
else:
lance_line = line
is_parsing_lance_line = True
elif is_parsing_lance_line:
lance_line += line
if line.strip().endswith("}"):
new_lines.append(line_updater(lance_line))
lance_line = ""
is_parsing_lance_line = False
else:
print("doesn't end with }:", line)
else:
# Keep the line unchanged
new_lines.append(line)

docs/package-lock.json (generated, 12 lines changed)
View File

@@ -19,7 +19,7 @@
},
"../node": {
"name": "vectordb",
"version": "0.12.0",
"version": "0.21.2-beta.0",
"cpu": [
"x64",
"arm64"
@@ -65,11 +65,11 @@
"uuid": "^9.0.0"
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.12.0",
"@lancedb/vectordb-darwin-x64": "0.12.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.12.0",
"@lancedb/vectordb-linux-x64-gnu": "0.12.0",
"@lancedb/vectordb-win32-x64-msvc": "0.12.0"
"@lancedb/vectordb-darwin-arm64": "0.21.2-beta.0",
"@lancedb/vectordb-darwin-x64": "0.21.2-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.21.2-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.21.2-beta.0",
"@lancedb/vectordb-win32-x64-msvc": "0.21.2-beta.0"
},
"peerDependencies": {
"@apache-arrow/ts": "^14.0.2",

View File

@@ -1,7 +1,9 @@
# SQL Querying
You can use DuckDB and Apache Datafusion to query your LanceDB tables using SQL.
This guide will show how to query Lance tables using both.
We will re-use the dataset [created previously](./pandas_and_pyarrow.md):
We will re-use the dataset [created previously](./tables.md):
```python
import lancedb
@@ -27,15 +29,10 @@ arrow_table = table.to_lance()
duckdb.query("SELECT * FROM arrow_table")
```
```
┌─────────────┬─────────┬────────┐
│ vector │ item │ price │
│ float[] │ varchar │ double │
├─────────────┼─────────┼────────┤
│ [3.1, 4.1] │ foo │ 10.0 │
│ [5.9, 26.5] │ bar │ 20.0 │
└─────────────┴─────────┴────────┘
```
| vector | item | price |
| ----------- | ---- | ----- |
| [3.1, 4.1] | foo | 10.0 |
| [5.9, 26.5] | bar | 20.0 |
## Querying a LanceDB Table with Apache Datafusion
@@ -57,12 +54,7 @@ Register the table created with the Datafusion session context.
--8<-- "python/python/tests/docs/test_guide_tables.py:lance_sql_basic"
```
```
┌─────────────┬─────────┬────────┐
│ vector │ item │ price │
│ float[] │ varchar │ double │
├─────────────┼─────────┼────────┤
│ [3.1, 4.1] │ foo │ 10.0 │
│ [5.9, 26.5] │ bar │ 20.0 │
└─────────────┴─────────┴────────┘
```
| vector | item | price |
| ----------- | ---- | ----- |
| [3.1, 4.1] | foo | 10.0 |
| [5.9, 26.5] | bar | 20.0 |

View File

@@ -41,6 +41,7 @@ Creates an instance of MatchQuery.
- `fuzziness`: The fuzziness level for the query (default is 0).
- `maxExpansions`: The maximum number of terms to consider for fuzzy matching (default is 50).
- `operator`: The logical operator to use for combining terms in the query (default is "OR").
- `prefixLength`: The number of initial characters that must remain unchanged for fuzzy matching.
* **options.boost?**: `number`
@@ -50,6 +51,8 @@ Creates an instance of MatchQuery.
* **options.operator?**: [`Operator`](../enumerations/Operator.md)
* **options.prefixLength?**: `number`
#### Returns
[`MatchQuery`](MatchQuery.md)

View File

@@ -612,7 +612,7 @@ of the given query
#### Parameters
* **query**: `string` \| [`IntoVector`](../type-aliases/IntoVector.md) \| [`FullTextQuery`](../interfaces/FullTextQuery.md)
* **query**: `string` \| [`IntoVector`](../type-aliases/IntoVector.md) \| [`MultiVector`](../type-aliases/MultiVector.md) \| [`FullTextQuery`](../interfaces/FullTextQuery.md)
the query, a vector or string
* **queryType?**: `string`
@@ -799,7 +799,7 @@ by `query`.
#### Parameters
* **vector**: [`IntoVector`](../type-aliases/IntoVector.md)
* **vector**: [`IntoVector`](../type-aliases/IntoVector.md) \| [`MultiVector`](../type-aliases/MultiVector.md)
#### Returns

View File

@@ -386,6 +386,53 @@ called then every valid row from the table will be returned.
***
### maximumNprobes()
```ts
maximumNprobes(maximumNprobes): VectorQuery
```
Set the maximum number of probes used.
This controls the maximum number of partitions that will be searched. If this
number is greater than minimumNprobes then the excess partitions will _only_ be
searched if we have not found enough results. This can be useful when there is
a narrow filter to allow these queries to spend more time searching and avoid
potential false negatives.
#### Parameters
* **maximumNprobes**: `number`
#### Returns
[`VectorQuery`](VectorQuery.md)
***
### minimumNprobes()
```ts
minimumNprobes(minimumNprobes): VectorQuery
```
Set the minimum number of probes used.
This controls the minimum number of partitions that will be searched. This
parameter will impact every query against a vector index, regardless of the
filter. See `nprobes` for more details. Higher values will increase recall
but will also increase latency.
#### Parameters
* **minimumNprobes**: `number`
#### Returns
[`VectorQuery`](VectorQuery.md)
***
### nprobes()
```ts
@@ -413,6 +460,10 @@ For best results we recommend tuning this parameter with a benchmark against
your actual data to find the smallest possible value that will still give
you the desired recall.
For more fine grained control over behavior when you have a very narrow filter
you can use `minimumNprobes` and `maximumNprobes`. This method sets both
the minimum and maximum to the same value.
#### Parameters
* **nprobes**: `number`
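
For instance (a hypothetical sketch), a query with a narrow filter can keep the common case fast while allowing extra partitions to be probed when too few results are found:

```ts
import { connect } from "@lancedb/lancedb";

const db = await connect("/tmp/lancedb-example");
const table = await db.openTable("docs"); // hypothetical indexed table

const results = await table
  .query()
  .nearestTo([0.1, 0.2, 0.3])
  .where("category = 'rare'") // hypothetical narrow filter
  .minimumNprobes(20) // always search at least 20 partitions
  .maximumNprobes(100) // search more only if results are scarce
  .limit(10)
  .toArray();
```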

View File

@@ -10,6 +10,7 @@ Enum representing the occurrence of terms in full-text queries.
- `Must`: The term must be present in the document.
- `Should`: The term should contribute to the document score, but is not required.
- `MustNot`: The term must not be present in the document.
## Enumeration Members
@@ -21,6 +22,14 @@ Must: "MUST";
***
### MustNot
```ts
MustNot: "MUST_NOT";
```
***
### Should
```ts

View File

@@ -84,6 +84,7 @@
- [FieldLike](type-aliases/FieldLike.md)
- [IntoSql](type-aliases/IntoSql.md)
- [IntoVector](type-aliases/IntoVector.md)
- [MultiVector](type-aliases/MultiVector.md)
- [RecordBatchLike](type-aliases/RecordBatchLike.md)
- [SchemaLike](type-aliases/SchemaLike.md)
- [TableLike](type-aliases/TableLike.md)

View File

@@ -23,7 +23,7 @@ whether to remove punctuation
### baseTokenizer?
```ts
optional baseTokenizer: "raw" | "simple" | "whitespace";
optional baseTokenizer: "raw" | "simple" | "whitespace" | "ngram";
```
The tokenizer to use when building the index.
@@ -71,6 +71,36 @@ tokens longer than this length will be ignored
***
### ngramMaxLength?
```ts
optional ngramMaxLength: number;
```
ngram max length
***
### ngramMinLength?
```ts
optional ngramMinLength: number;
```
ngram min length
***
### prefixOnly?
```ts
optional prefixOnly: boolean;
```
whether to only index the prefix of the token for ngram tokenizer
***
### removeStopWords?
```ts

View File

@@ -24,10 +24,10 @@ The default is 7 days
// Delete all versions older than 1 day
const olderThan = new Date();
olderThan.setDate(olderThan.getDate() - 1);
tbl.cleanupOlderVersions(olderThan);
tbl.optimize({cleanupOlderThan: olderThan});
// Delete all versions except the current version
tbl.cleanupOlderVersions(new Date());
tbl.optimize({cleanupOlderThan: new Date()});
```
***

View File

@@ -0,0 +1,11 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / MultiVector
# Type Alias: MultiVector
```ts
type MultiVector: IntoVector[];
```

View File

@@ -428,7 +428,7 @@
"\n",
"**Why?** \n",
"Embedding the UFO dataset and ingesting it into LanceDB takes **~2 hours on a T4 GPU**. To save time: \n",
"- **Use the pre-prepared table with index created ** (provided below) to proceed directly to step7: search. \n",
"- **Use the pre-prepared table with index created** (provided below) to proceed directly to **Step 7**: search. \n",
"- **Step 5a** contains the full ingestion code for reference (run it only if necessary). \n",
"- **Step 6** contains the details on creating the index on the multivector column"
]

View File

@@ -30,7 +30,8 @@ excluded_globs = [
"../src/rag/advanced_techniques/*.md",
"../src/guides/scalar_index.md",
"../src/guides/storage.md",
"../src/search.md"
"../src/search.md",
"../src/guides/sql_querying.md",
]
python_prefix = "py"

View File

@@ -0,0 +1,19 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
wrapperVersion=3.3.2
distributionType=only-script
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.9.9/apache-maven-3.9.9-bin.zip

java/README.md (new file, 37 lines)
View File

@@ -0,0 +1,37 @@
# LanceDB Java SDK
## Configuration and Initialization
### LanceDB Cloud
For LanceDB Cloud, use the simplified builder API:
```java
import com.lancedb.lance.namespace.LanceRestNamespace;
// If your DB url is db://example-db, then your database here is example-db
LanceRestNamespace namespace = LanceDbRestNamespaces.builder()
.apiKey("your_lancedb_cloud_api_key")
.database("your_database_name")
.build();
```
### LanceDB Enterprise
For Enterprise deployments, use your VPC endpoint:
```java
LanceRestNamespace namespace = LanceDbRestNamespaces.builder()
.apiKey("your_lancedb_enterprise_api_key")
.database("your-top-dir") // Your top level folder under your cloud bucket, e.g. s3://your-bucket/your-top-dir/
.hostOverride("http://<vpc_endpoint_dns_name>:80")
.build();
```
## Development
Build:
```shell
./mvnw install
```

View File

@@ -8,18 +8,24 @@
<parent>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.20.1-beta.1</version>
<version>0.21.2-beta.0</version>
<relativePath>../pom.xml</relativePath>
</parent>
<artifactId>lancedb-core</artifactId>
<name>LanceDB Core</name>
<name>${project.artifactId}</name>
<description>LanceDB Core</description>
<packaging>jar</packaging>
<properties>
<rust.release.build>false</rust.release.build>
</properties>
<dependencies>
<dependency>
<groupId>com.lancedb</groupId>
<artifactId>lance-namespace-core</artifactId>
<version>0.0.1</version>
</dependency>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-vector</artifactId>

View File

@@ -0,0 +1,26 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.21.2-beta.0</version>
<relativePath>../pom.xml</relativePath>
</parent>
<artifactId>lancedb-lance-namespace</artifactId>
<name>${project.artifactId}</name>
<description>LanceDB Java Integration with Lance Namespace</description>
<packaging>jar</packaging>
<dependencies>
<dependency>
<groupId>com.lancedb</groupId>
<artifactId>lance-namespace-core</artifactId>
</dependency>
</dependencies>
</project>

View File

@@ -0,0 +1,146 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.lancedb.lancedb;
import com.lancedb.lance.namespace.LanceRestNamespace;
import com.lancedb.lance.namespace.client.apache.ApiClient;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
/** Util class to help construct a {@link LanceRestNamespace} for LanceDB. */
public class LanceDbRestNamespaces {
private static final String DEFAULT_REGION = "us-east-1";
private static final String CLOUD_URL_PATTERN = "https://%s.%s.api.lancedb.com";
private String apiKey;
private String database;
private Optional<String> hostOverride = Optional.empty();
private Optional<String> region = Optional.empty();
private Map<String, String> additionalConfig = new HashMap<>();
private LanceDbRestNamespaces() {}
/**
* Create a new builder instance.
*
* @return A new LanceRestNamespaceBuilder
*/
public static LanceDbRestNamespaces builder() {
return new LanceDbRestNamespaces();
}
/**
* Set the API key (required).
*
* @param apiKey The LanceDB API key
* @return This builder
*/
public LanceDbRestNamespaces apiKey(String apiKey) {
if (apiKey == null || apiKey.trim().isEmpty()) {
throw new IllegalArgumentException("API key cannot be null or empty");
}
this.apiKey = apiKey;
return this;
}
/**
* Set the database name (required).
*
* @param database The database name
* @return This builder
*/
public LanceDbRestNamespaces database(String database) {
if (database == null || database.trim().isEmpty()) {
throw new IllegalArgumentException("Database cannot be null or empty");
}
this.database = database;
return this;
}
/**
* Set a custom host override (optional). When set, this overrides the default LanceDB Cloud URL
* construction. Use this for LanceDB Enterprise deployments.
*
* @param hostOverride The complete base URL (e.g., "http://your-vpc-endpoint:80")
* @return This builder
*/
public LanceDbRestNamespaces hostOverride(String hostOverride) {
this.hostOverride = Optional.ofNullable(hostOverride);
return this;
}
/**
* Set the region for LanceDB Cloud (optional). Defaults to "us-east-1" if not specified. This is
* ignored when hostOverride is set.
*
* @param region The AWS region (e.g., "us-east-1", "eu-west-1")
* @return This builder
*/
public LanceDbRestNamespaces region(String region) {
this.region = Optional.ofNullable(region);
return this;
}
/**
* Add additional configuration parameters.
*
* @param key The configuration key
* @param value The configuration value
* @return This builder
*/
public LanceDbRestNamespaces config(String key, String value) {
this.additionalConfig.put(key, value);
return this;
}
/**
* Build the LanceRestNamespace instance.
*
* @return A configured LanceRestNamespace
* @throws IllegalStateException if required parameters are missing
*/
public LanceRestNamespace build() {
// Validate required fields
if (apiKey == null) {
throw new IllegalStateException("API key is required");
}
if (database == null) {
throw new IllegalStateException("Database is required");
}
// Build configuration map
Map<String, String> config = new HashMap<>(additionalConfig);
config.put("headers.x-lancedb-database", database);
config.put("headers.x-api-key", apiKey);
// Determine base URL
String baseUrl;
if (hostOverride.isPresent()) {
baseUrl = hostOverride.get();
config.put("host_override", hostOverride.get());
} else {
String effectiveRegion = region.orElse(DEFAULT_REGION);
baseUrl = String.format(CLOUD_URL_PATTERN, database, effectiveRegion);
config.put("region", effectiveRegion);
}
// Create and configure ApiClient
ApiClient apiClient = new ApiClient();
apiClient.setBasePath(baseUrl);
return new LanceRestNamespace(apiClient, config);
}
}

java/mvnw (vendored executable file, 259 lines)
View File

@@ -0,0 +1,259 @@
#!/bin/sh
# ----------------------------------------------------------------------------
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
# Apache Maven Wrapper startup batch script, version 3.3.2
#
# Optional ENV vars
# -----------------
# JAVA_HOME - location of a JDK home dir, required when download maven via java source
# MVNW_REPOURL - repo url base for downloading maven distribution
# MVNW_USERNAME/MVNW_PASSWORD - user and password for downloading maven
# MVNW_VERBOSE - true: enable verbose log; debug: trace the mvnw script; others: silence the output
# ----------------------------------------------------------------------------
set -euf
[ "${MVNW_VERBOSE-}" != debug ] || set -x
# OS specific support.
native_path() { printf %s\\n "$1"; }
case "$(uname)" in
CYGWIN* | MINGW*)
[ -z "${JAVA_HOME-}" ] || JAVA_HOME="$(cygpath --unix "$JAVA_HOME")"
native_path() { cygpath --path --windows "$1"; }
;;
esac
# set JAVACMD and JAVACCMD
set_java_home() {
# For Cygwin and MinGW, ensure paths are in Unix format before anything is touched
if [ -n "${JAVA_HOME-}" ]; then
if [ -x "$JAVA_HOME/jre/sh/java" ]; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
JAVACCMD="$JAVA_HOME/jre/sh/javac"
else
JAVACMD="$JAVA_HOME/bin/java"
JAVACCMD="$JAVA_HOME/bin/javac"
if [ ! -x "$JAVACMD" ] || [ ! -x "$JAVACCMD" ]; then
echo "The JAVA_HOME environment variable is not defined correctly, so mvnw cannot run." >&2
echo "JAVA_HOME is set to \"$JAVA_HOME\", but \"\$JAVA_HOME/bin/java\" or \"\$JAVA_HOME/bin/javac\" does not exist." >&2
return 1
fi
fi
else
JAVACMD="$(
'set' +e
'unset' -f command 2>/dev/null
'command' -v java
)" || :
JAVACCMD="$(
'set' +e
'unset' -f command 2>/dev/null
'command' -v javac
)" || :
if [ ! -x "${JAVACMD-}" ] || [ ! -x "${JAVACCMD-}" ]; then
echo "The java/javac command does not exist in PATH nor is JAVA_HOME set, so mvnw cannot run." >&2
return 1
fi
fi
}
# hash string like Java String::hashCode
hash_string() {
str="${1:-}" h=0
while [ -n "$str" ]; do
char="${str%"${str#?}"}"
h=$(((h * 31 + $(LC_CTYPE=C printf %d "'$char")) % 4294967296))
str="${str#?}"
done
printf %x\\n $h
}
verbose() { :; }
[ "${MVNW_VERBOSE-}" != true ] || verbose() { printf %s\\n "${1-}"; }
die() {
printf %s\\n "$1" >&2
exit 1
}
trim() {
# MWRAPPER-139:
# Trims trailing and leading whitespace, carriage returns, tabs, and linefeeds.
# Needed for removing poorly interpreted newline sequences when running in more
# exotic environments such as mingw bash on Windows.
printf "%s" "${1}" | tr -d '[:space:]'
}
# parse distributionUrl and optional distributionSha256Sum, requires .mvn/wrapper/maven-wrapper.properties
while IFS="=" read -r key value; do
case "${key-}" in
distributionUrl) distributionUrl=$(trim "${value-}") ;;
distributionSha256Sum) distributionSha256Sum=$(trim "${value-}") ;;
esac
done <"${0%/*}/.mvn/wrapper/maven-wrapper.properties"
[ -n "${distributionUrl-}" ] || die "cannot read distributionUrl property in ${0%/*}/.mvn/wrapper/maven-wrapper.properties"
case "${distributionUrl##*/}" in
maven-mvnd-*bin.*)
MVN_CMD=mvnd.sh _MVNW_REPO_PATTERN=/maven/mvnd/
case "${PROCESSOR_ARCHITECTURE-}${PROCESSOR_ARCHITEW6432-}:$(uname -a)" in
*AMD64:CYGWIN* | *AMD64:MINGW*) distributionPlatform=windows-amd64 ;;
:Darwin*x86_64) distributionPlatform=darwin-amd64 ;;
:Darwin*arm64) distributionPlatform=darwin-aarch64 ;;
:Linux*x86_64*) distributionPlatform=linux-amd64 ;;
*)
echo "Cannot detect native platform for mvnd on $(uname)-$(uname -m), use pure java version" >&2
distributionPlatform=linux-amd64
;;
esac
distributionUrl="${distributionUrl%-bin.*}-$distributionPlatform.zip"
;;
maven-mvnd-*) MVN_CMD=mvnd.sh _MVNW_REPO_PATTERN=/maven/mvnd/ ;;
*) MVN_CMD="mvn${0##*/mvnw}" _MVNW_REPO_PATTERN=/org/apache/maven/ ;;
esac
# apply MVNW_REPOURL and calculate MAVEN_HOME
# maven home pattern: ~/.m2/wrapper/dists/{apache-maven-<version>,maven-mvnd-<version>-<platform>}/<hash>
[ -z "${MVNW_REPOURL-}" ] || distributionUrl="$MVNW_REPOURL$_MVNW_REPO_PATTERN${distributionUrl#*"$_MVNW_REPO_PATTERN"}"
distributionUrlName="${distributionUrl##*/}"
distributionUrlNameMain="${distributionUrlName%.*}"
distributionUrlNameMain="${distributionUrlNameMain%-bin}"
MAVEN_USER_HOME="${MAVEN_USER_HOME:-${HOME}/.m2}"
MAVEN_HOME="${MAVEN_USER_HOME}/wrapper/dists/${distributionUrlNameMain-}/$(hash_string "$distributionUrl")"
exec_maven() {
unset MVNW_VERBOSE MVNW_USERNAME MVNW_PASSWORD MVNW_REPOURL || :
exec "$MAVEN_HOME/bin/$MVN_CMD" "$@" || die "cannot exec $MAVEN_HOME/bin/$MVN_CMD"
}
if [ -d "$MAVEN_HOME" ]; then
verbose "found existing MAVEN_HOME at $MAVEN_HOME"
exec_maven "$@"
fi
case "${distributionUrl-}" in
*?-bin.zip | *?maven-mvnd-?*-?*.zip) ;;
*) die "distributionUrl is not valid, must match *-bin.zip or maven-mvnd-*.zip, but found '${distributionUrl-}'" ;;
esac
# prepare tmp dir
if TMP_DOWNLOAD_DIR="$(mktemp -d)" && [ -d "$TMP_DOWNLOAD_DIR" ]; then
clean() { rm -rf -- "$TMP_DOWNLOAD_DIR"; }
trap clean HUP INT TERM EXIT
else
die "cannot create temp dir"
fi
mkdir -p -- "${MAVEN_HOME%/*}"
# Download and Install Apache Maven
verbose "Couldn't find MAVEN_HOME, downloading and installing it ..."
verbose "Downloading from: $distributionUrl"
verbose "Downloading to: $TMP_DOWNLOAD_DIR/$distributionUrlName"
# select .zip or .tar.gz
if ! command -v unzip >/dev/null; then
distributionUrl="${distributionUrl%.zip}.tar.gz"
distributionUrlName="${distributionUrl##*/}"
fi
# verbose opt
__MVNW_QUIET_WGET=--quiet __MVNW_QUIET_CURL=--silent __MVNW_QUIET_UNZIP=-q __MVNW_QUIET_TAR=''
[ "${MVNW_VERBOSE-}" != true ] || __MVNW_QUIET_WGET='' __MVNW_QUIET_CURL='' __MVNW_QUIET_UNZIP='' __MVNW_QUIET_TAR=v
# normalize http auth
case "${MVNW_PASSWORD:+has-password}" in
'') MVNW_USERNAME='' MVNW_PASSWORD='' ;;
has-password) [ -n "${MVNW_USERNAME-}" ] || MVNW_USERNAME='' MVNW_PASSWORD='' ;;
esac
if [ -z "${MVNW_USERNAME-}" ] && command -v wget >/dev/null; then
verbose "Found wget ... using wget"
wget ${__MVNW_QUIET_WGET:+"$__MVNW_QUIET_WGET"} "$distributionUrl" -O "$TMP_DOWNLOAD_DIR/$distributionUrlName" || die "wget: Failed to fetch $distributionUrl"
elif [ -z "${MVNW_USERNAME-}" ] && command -v curl >/dev/null; then
verbose "Found curl ... using curl"
curl ${__MVNW_QUIET_CURL:+"$__MVNW_QUIET_CURL"} -f -L -o "$TMP_DOWNLOAD_DIR/$distributionUrlName" "$distributionUrl" || die "curl: Failed to fetch $distributionUrl"
elif set_java_home; then
verbose "Falling back to use Java to download"
javaSource="$TMP_DOWNLOAD_DIR/Downloader.java"
targetZip="$TMP_DOWNLOAD_DIR/$distributionUrlName"
cat >"$javaSource" <<-END
public class Downloader extends java.net.Authenticator
{
protected java.net.PasswordAuthentication getPasswordAuthentication()
{
return new java.net.PasswordAuthentication( System.getenv( "MVNW_USERNAME" ), System.getenv( "MVNW_PASSWORD" ).toCharArray() );
}
public static void main( String[] args ) throws Exception
{
setDefault( new Downloader() );
java.nio.file.Files.copy( java.net.URI.create( args[0] ).toURL().openStream(), java.nio.file.Paths.get( args[1] ).toAbsolutePath().normalize() );
}
}
END
# For Cygwin/MinGW, switch paths to Windows format before running javac and java
verbose " - Compiling Downloader.java ..."
"$(native_path "$JAVACCMD")" "$(native_path "$javaSource")" || die "Failed to compile Downloader.java"
verbose " - Running Downloader.java ..."
"$(native_path "$JAVACMD")" -cp "$(native_path "$TMP_DOWNLOAD_DIR")" Downloader "$distributionUrl" "$(native_path "$targetZip")"
fi
# If specified, validate the SHA-256 sum of the Maven distribution zip file
if [ -n "${distributionSha256Sum-}" ]; then
distributionSha256Result=false
if [ "$MVN_CMD" = mvnd.sh ]; then
echo "Checksum validation is not supported for maven-mvnd." >&2
echo "Please disable validation by removing 'distributionSha256Sum' from your maven-wrapper.properties." >&2
exit 1
elif command -v sha256sum >/dev/null; then
if echo "$distributionSha256Sum $TMP_DOWNLOAD_DIR/$distributionUrlName" | sha256sum -c >/dev/null 2>&1; then
distributionSha256Result=true
fi
elif command -v shasum >/dev/null; then
if echo "$distributionSha256Sum $TMP_DOWNLOAD_DIR/$distributionUrlName" | shasum -a 256 -c >/dev/null 2>&1; then
distributionSha256Result=true
fi
else
echo "Checksum validation was requested but neither 'sha256sum' or 'shasum' are available." >&2
echo "Please install either command, or disable validation by removing 'distributionSha256Sum' from your maven-wrapper.properties." >&2
exit 1
fi
if [ $distributionSha256Result = false ]; then
echo "Error: Failed to validate Maven distribution SHA-256, your Maven distribution might be compromised." >&2
echo "If you updated your Maven version, you need to update the specified distributionSha256Sum property." >&2
exit 1
fi
fi
# unzip and move
if command -v unzip >/dev/null; then
unzip ${__MVNW_QUIET_UNZIP:+"$__MVNW_QUIET_UNZIP"} "$TMP_DOWNLOAD_DIR/$distributionUrlName" -d "$TMP_DOWNLOAD_DIR" || die "failed to unzip"
else
tar xzf${__MVNW_QUIET_TAR:+"$__MVNW_QUIET_TAR"} "$TMP_DOWNLOAD_DIR/$distributionUrlName" -C "$TMP_DOWNLOAD_DIR" || die "failed to untar"
fi
printf %s\\n "$distributionUrl" >"$TMP_DOWNLOAD_DIR/$distributionUrlNameMain/mvnw.url"
mv -- "$TMP_DOWNLOAD_DIR/$distributionUrlNameMain" "$MAVEN_HOME" || [ -d "$MAVEN_HOME" ] || die "fail to move MAVEN_HOME"
clean || :
exec_maven "$@"

View File

@@ -6,11 +6,10 @@
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.20.1-beta.1</version>
<version>0.21.2-beta.0</version>
<packaging>pom</packaging>
<name>LanceDB Parent</name>
<description>LanceDB vector database Java API</description>
<name>${project.artifactId}</name>
<description>LanceDB Java SDK Parent POM</description>
<url>http://lancedb.com/</url>
<developers>
@@ -29,6 +28,7 @@
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<arrow.version>15.0.0</arrow.version>
<lance-namespace.version>0.0.1</lance-namespace.version>
<spotless.skip>false</spotless.skip>
<spotless.version>2.30.0</spotless.version>
<spotless.java.googlejavaformat.version>1.7</spotless.java.googlejavaformat.version>
@@ -52,6 +52,7 @@
<modules>
<module>core</module>
<module>lance-namespace</module>
</modules>
<scm>
@@ -62,6 +63,11 @@
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.lancedb</groupId>
<artifactId>lance-namespace-core</artifactId>
<version>${lance-namespace.version}</version>
</dependency>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-vector</artifactId>

node/package-lock.json (generated, 49 lines changed)
View File

@@ -1,12 +1,12 @@
{
"name": "vectordb",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "vectordb",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"cpu": [
"x64",
"arm64"
@@ -52,11 +52,11 @@
"uuid": "^9.0.0"
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.20.1-beta.1",
"@lancedb/vectordb-darwin-x64": "0.20.1-beta.1",
"@lancedb/vectordb-linux-arm64-gnu": "0.20.1-beta.1",
"@lancedb/vectordb-linux-x64-gnu": "0.20.1-beta.1",
"@lancedb/vectordb-win32-x64-msvc": "0.20.1-beta.1"
"@lancedb/vectordb-darwin-arm64": "0.21.2-beta.0",
"@lancedb/vectordb-darwin-x64": "0.21.2-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.21.2-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.21.2-beta.0",
"@lancedb/vectordb-win32-x64-msvc": "0.21.2-beta.0"
},
"peerDependencies": {
"@apache-arrow/ts": "^14.0.2",
@@ -327,60 +327,65 @@
}
},
"node_modules/@lancedb/vectordb-darwin-arm64": {
"version": "0.20.1-beta.1",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.20.1-beta.1.tgz",
"integrity": "sha512-DPD8gwFQz5aENYYbTFS/l3YX/rqzS6Kj2B4IZERccVFULQsdR5YwtaAfFwTMp7NSnsjWKwJAknohiMZlJr4njQ==",
"version": "0.21.2-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.21.2-beta.0.tgz",
"integrity": "sha512-RiYqpKuq9v8A4wFuHt1iPNFYjWJ1KgGFLJwQO4ajp9Hee84sDHq8mP0ATgMcc24hiaOUQ1lRRTULjGbHn4NIYw==",
"cpu": [
"arm64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"darwin"
]
},
"node_modules/@lancedb/vectordb-darwin-x64": {
"version": "0.20.1-beta.1",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.20.1-beta.1.tgz",
"integrity": "sha512-lTPtlRSTC08UgQW5Bv8WYhdbogAgUJ+9ejg+UE+fwP9gEsgEKXL/SHBm+9gmAlTo7LbrxJjg0CtCde/mW68UTw==",
"version": "0.21.2-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.21.2-beta.0.tgz",
"integrity": "sha512-togdP0YIjMYg/hBRMMxW434i5VB789JWU5o3hWrodbX8olEc0Txqw5Dg9CgIOldBIiCti6uTSQiTo6uldZon1w==",
"cpu": [
"x64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"darwin"
]
},
"node_modules/@lancedb/vectordb-linux-arm64-gnu": {
"version": "0.20.1-beta.1",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.20.1-beta.1.tgz",
"integrity": "sha512-w/3O9FvwQiGegYsM21yZ0FezfOFVsW7HttYwwPzZMZaCpK3/i+LvZVSqwO4qXHHJBtHgKevonINyvVlg5487aQ==",
"version": "0.21.2-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.21.2-beta.0.tgz",
"integrity": "sha512-ErS4IQDQVTYVATPeOj/dZXQR34eZQ5rAXm3vJdQi5K6X4zCDaIjOhpmnwzPBGT9W1idaBAoDJhtNfsFaJ6/PQQ==",
"cpu": [
"arm64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-linux-x64-gnu": {
"version": "0.20.1-beta.1",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.20.1-beta.1.tgz",
"integrity": "sha512-rq7Q6Lq9kJmBcgwplYQVJmRbyeP+xPVmXyyQfAO3IjekqeSsyjj1HoCZYqZIfBZyN5ELiSvIJB0731aKf9pr1A==",
"version": "0.21.2-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.21.2-beta.0.tgz",
"integrity": "sha512-ycDpyBGbfxtnGGa/RQo5+So6dHALiem1pbYc/LDKKluUJpadtXtEwC61o6hZTcejoYjhEE8ET7vA3OCEJfMFaw==",
"cpu": [
"x64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-win32-x64-msvc": {
"version": "0.20.1-beta.1",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.20.1-beta.1.tgz",
"integrity": "sha512-kHra0SEXeMKdgqi5h0igsqHcBr73hKBhEVJBa8VTv1DUv6Jvazwl4B4ueqllcyD4k3vvOTb2XzZomm7dhQ9QnA==",
"version": "0.21.2-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.21.2-beta.0.tgz",
"integrity": "sha512-IgVkAP/LiNIQD5P6n/9x3bgQOt5pGJarjtSF8r+ialD95QHmo6tcxrwTy/DlA+H1uI6B6h+sbN0c1KXTh1rYcg==",
"cpu": [
"x64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"win32"

View File

@@ -1,6 +1,6 @@
{
"name": "vectordb",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"description": " Serverless, low-latency vector database for AI applications",
"private": false,
"main": "dist/index.js",
@@ -89,10 +89,10 @@
}
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-x64": "0.20.1-beta.1",
"@lancedb/vectordb-darwin-arm64": "0.20.1-beta.1",
"@lancedb/vectordb-linux-x64-gnu": "0.20.1-beta.1",
"@lancedb/vectordb-linux-arm64-gnu": "0.20.1-beta.1",
"@lancedb/vectordb-win32-x64-msvc": "0.20.1-beta.1"
"@lancedb/vectordb-darwin-x64": "0.21.2-beta.0",
"@lancedb/vectordb-darwin-arm64": "0.21.2-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.21.2-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.21.2-beta.0",
"@lancedb/vectordb-win32-x64-msvc": "0.21.2-beta.0"
}
}

View File

@@ -1,7 +1,7 @@
[package]
name = "lancedb-nodejs"
edition.workspace = true
version = "0.20.1-beta.1"
version = "0.21.2-beta.0"
license.workspace = true
description.workspace = true
repository.workspace = true

View File

@@ -1,7 +1,7 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { Schema } from "apache-arrow";
import { Bool, Field, Int32, List, Schema, Struct, Utf8 } from "apache-arrow";
import * as arrow15 from "apache-arrow-15";
import * as arrow16 from "apache-arrow-16";
@@ -11,10 +11,12 @@ import * as arrow18 from "apache-arrow-18";
import {
convertToTable,
fromBufferToRecordBatch,
fromDataToBuffer,
fromRecordBatchToBuffer,
fromTableToBuffer,
makeArrowTable,
makeEmptyTable,
tableFromIPC,
} from "../lancedb/arrow";
import {
EmbeddingFunction,
@@ -375,8 +377,221 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
expect(table2.schema).toEqual(schema);
});
it("will handle missing columns in schema alignment when using embeddings", async function () {
const schema = new Schema(
[
new Field("domain", new Utf8(), true),
new Field("name", new Utf8(), true),
new Field("description", new Utf8(), true),
],
new Map([["embedding_functions", JSON.stringify([])]]),
);
const data = [
{ domain: "google.com", name: "Google" },
{ domain: "facebook.com", name: "Facebook" },
];
const table = await convertToTable(data, undefined, { schema });
expect(table.numCols).toBe(3);
expect(table.numRows).toBe(2);
const descriptionColumn = table.getChild("description");
expect(descriptionColumn).toBeDefined();
expect(descriptionColumn?.nullCount).toBe(2);
expect(descriptionColumn?.toArray()).toEqual([null, null]);
expect(table.getChild("domain")?.toArray()).toEqual([
"google.com",
"facebook.com",
]);
expect(table.getChild("name")?.toArray()).toEqual([
"Google",
"Facebook",
]);
});
it("will handle completely missing nested struct columns", async function () {
const schema = new Schema(
[
new Field("id", new Utf8(), true),
new Field("name", new Utf8(), true),
new Field(
"metadata",
new Struct([
new Field("version", new Int32(), true),
new Field("author", new Utf8(), true),
new Field(
"tags",
new List(new Field("item", new Utf8(), true)),
true,
),
]),
true,
),
],
new Map([["embedding_functions", JSON.stringify([])]]),
);
const data = [
{ id: "doc1", name: "Document 1" },
{ id: "doc2", name: "Document 2" },
];
const table = await convertToTable(data, undefined, { schema });
expect(table.numCols).toBe(3);
expect(table.numRows).toBe(2);
const buf = await fromTableToBuffer(table);
const retrievedTable = tableFromIPC(buf);
const rows = [];
for (let i = 0; i < retrievedTable.numRows; i++) {
rows.push(retrievedTable.get(i));
}
expect(rows[0].metadata.version).toBe(null);
expect(rows[0].metadata.author).toBe(null);
expect(rows[0].metadata.tags).toBe(null);
expect(rows[0].id).toBe("doc1");
expect(rows[0].name).toBe("Document 1");
});
it("will handle partially missing nested struct fields", async function () {
const schema = new Schema(
[
new Field("id", new Utf8(), true),
new Field(
"metadata",
new Struct([
new Field("version", new Int32(), true),
new Field("author", new Utf8(), true),
new Field("created_at", new Utf8(), true),
]),
true,
),
],
new Map([["embedding_functions", JSON.stringify([])]]),
);
const data = [
{ id: "doc1", metadata: { version: 1, author: "Alice" } },
{ id: "doc2", metadata: { version: 2 } },
];
const table = await convertToTable(data, undefined, { schema });
expect(table.numCols).toBe(2);
expect(table.numRows).toBe(2);
const metadataColumn = table.getChild("metadata");
expect(metadataColumn).toBeDefined();
expect(metadataColumn?.type.toString()).toBe(
"Struct<{version:Int32, author:Utf8, created_at:Utf8}>",
);
});
it("will handle multiple levels of nested structures", async function () {
const schema = new Schema(
[
new Field("id", new Utf8(), true),
new Field(
"config",
new Struct([
new Field("database", new Utf8(), true),
new Field(
"connection",
new Struct([
new Field("host", new Utf8(), true),
new Field("port", new Int32(), true),
new Field(
"ssl",
new Struct([
new Field("enabled", new Bool(), true),
new Field("cert_path", new Utf8(), true),
]),
true,
),
]),
true,
),
]),
true,
),
],
new Map([["embedding_functions", JSON.stringify([])]]),
);
const data = [
{
id: "config1",
config: {
database: "postgres",
connection: { host: "localhost" },
},
},
{
id: "config2",
config: { database: "mysql" },
},
{
id: "config3",
},
];
const table = await convertToTable(data, undefined, { schema });
expect(table.numCols).toBe(2);
expect(table.numRows).toBe(3);
const configColumn = table.getChild("config");
expect(configColumn).toBeDefined();
expect(configColumn?.type.toString()).toBe(
"Struct<{database:Utf8, connection:Struct<{host:Utf8, port:Int32, ssl:Struct<{enabled:Bool, cert_path:Utf8}>}>}>",
);
});
it("will handle missing columns in Arrow table input when using embeddings", async function () {
const incompleteTable = makeArrowTable([
{ domain: "google.com", name: "Google" },
{ domain: "facebook.com", name: "Facebook" },
]);
const schema = new Schema(
[
new Field("domain", new Utf8(), true),
new Field("name", new Utf8(), true),
new Field("description", new Utf8(), true),
],
new Map([["embedding_functions", JSON.stringify([])]]),
);
const buf = await fromDataToBuffer(incompleteTable, undefined, schema);
expect(buf.byteLength).toBeGreaterThan(0);
const retrievedTable = tableFromIPC(buf);
expect(retrievedTable.numCols).toBe(3);
expect(retrievedTable.numRows).toBe(2);
const descriptionColumn = retrievedTable.getChild("description");
expect(descriptionColumn).toBeDefined();
expect(descriptionColumn?.nullCount).toBe(2);
expect(descriptionColumn?.toArray()).toEqual([null, null]);
expect(retrievedTable.getChild("domain")?.toArray()).toEqual([
"google.com",
"facebook.com",
]);
expect(retrievedTable.getChild("name")?.toArray()).toEqual([
"Google",
"Facebook",
]);
});
it("should correctly retain values in nested struct fields", async function () {
// Define test data with nested struct
const testData = [
{
id: "doc1",
@@ -400,10 +615,8 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
},
];
// Create Arrow table from the data
const table = makeArrowTable(testData);
// Verify schema has the nested struct fields
const metadataField = table.schema.fields.find(
(f) => f.name === "metadata",
);
@@ -417,23 +630,17 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
"text",
]);
// Convert to buffer and back (simulating storage and retrieval)
const buf = await fromTableToBuffer(table);
const retrievedTable = tableFromIPC(buf);
// Verify the retrieved table has the same structure
const rows = [];
for (let i = 0; i < retrievedTable.numRows; i++) {
rows.push(retrievedTable.get(i));
}
// Check values in the first row
const firstRow = rows[0];
expect(firstRow.id).toBe("doc1");
expect(firstRow.vector.toJSON()).toEqual([1, 2, 3]);
// Verify metadata values are preserved (this is where the bug is)
expect(firstRow.metadata).toBeDefined();
expect(firstRow.metadata.filePath).toBe("/path/to/file1.ts");
expect(firstRow.metadata.startLine).toBe(10);
expect(firstRow.metadata.endLine).toBe(20);

View File

@@ -368,9 +368,9 @@ describe("merge insert", () => {
{ a: 4, b: "z" },
];
const result = (await table.toArrow()).toArray().sort((a, b) => a.a - b.a);
expect(result.map((row) => ({ ...row }))).toEqual(expected);
});
test("conditional update", async () => {
const newData = [
@@ -1650,13 +1650,25 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
expect(resultSet.has("fob")).toBe(true);
expect(resultSet.has("fo")).toBe(true);
expect(resultSet.has("food")).toBe(true);
const prefixResults = await table
.search(
new MatchQuery("foo", "text", { fuzziness: 3, prefixLength: 3 }),
)
.toArray();
expect(prefixResults.length).toBe(2);
const resultSet2 = new Set(prefixResults.map((r) => r.text));
expect(resultSet2.has("foo")).toBe(true);
expect(resultSet2.has("food")).toBe(true);
});
test("full text search boolean query", async () => {
const db = await connect(tmpDir.name);
const data = [
{ text: "hello world", vector: [0.1, 0.2, 0.3] },
{ text: "goodbye world", vector: [0.4, 0.5, 0.6] },
{ text: "The cat and dog are playing" },
{ text: "The cat is sleeping" },
{ text: "The dog is barking" },
{ text: "The dog chases the cat" },
];
const table = await db.createTable("test", data);
await table.createIndex("text", {
@@ -1666,22 +1678,86 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
const shouldResults = await table
.search(
new BooleanQuery([
[Occur.Should, new MatchQuery("hello", "text")],
[Occur.Should, new MatchQuery("goodbye", "text")],
[Occur.Should, new MatchQuery("cat", "text")],
[Occur.Should, new MatchQuery("dog", "text")],
]),
)
.toArray();
expect(shouldResults.length).toBe(4);
const mustResults = await table
.search(
new BooleanQuery([
[Occur.Must, new MatchQuery("hello", "text")],
[Occur.Must, new MatchQuery("world", "text")],
[Occur.Must, new MatchQuery("cat", "text")],
[Occur.Must, new MatchQuery("dog", "text")],
]),
)
.toArray();
expect(mustResults.length).toBe(2);
const mustNotResults = await table
.search(
new BooleanQuery([
[Occur.Must, new MatchQuery("cat", "text")],
[Occur.MustNot, new MatchQuery("dog", "text")],
]),
)
.toArray();
expect(mustNotResults.length).toBe(1);
});
test("full text search ngram", async () => {
const db = await connect(tmpDir.name);
const data = [
{ text: "hello world", vector: [0.1, 0.2, 0.3] },
{ text: "lance database", vector: [0.4, 0.5, 0.6] },
{ text: "lance is cool", vector: [0.7, 0.8, 0.9] },
];
const table = await db.createTable("test", data);
await table.createIndex("text", {
config: Index.fts({ baseTokenizer: "ngram" }),
});
const results = await table.search("lan").toArray();
expect(results.length).toBe(2);
const resultSet = new Set(results.map((r) => r.text));
expect(resultSet.has("lance database")).toBe(true);
expect(resultSet.has("lance is cool")).toBe(true);
const results2 = await table.search("nce").toArray(); // spellchecker:disable-line
expect(results2.length).toBe(2);
const resultSet2 = new Set(results2.map((r) => r.text));
expect(resultSet2.has("lance database")).toBe(true);
expect(resultSet2.has("lance is cool")).toBe(true);
// the default min_ngram_length is 3, so "la" should not match
const results3 = await table.search("la").toArray();
expect(results3.length).toBe(0);
// test setting min_ngram_length and prefix_only
await table.createIndex("text", {
config: Index.fts({
baseTokenizer: "ngram",
ngramMinLength: 2,
prefixOnly: true,
}),
replace: true,
});
const results4 = await table.search("lan").toArray();
expect(results4.length).toBe(2);
const resultSet4 = new Set(results4.map((r) => r.text));
expect(resultSet4.has("lance database")).toBe(true);
expect(resultSet4.has("lance is cool")).toBe(true);
const results5 = await table.search("nce").toArray(); // spellchecker:disable-line
expect(results5.length).toBe(0);
const results6 = await table.search("la").toArray();
expect(results6.length).toBe(2);
const resultSet6 = new Set(results6.map((r) => r.text));
expect(resultSet6.has("lance database")).toBe(true);
expect(resultSet6.has("lance is cool")).toBe(true);
});
test.each([
@@ -1787,4 +1863,43 @@ describe("column name options", () => {
expect(results[0].query_index).toBe(0);
expect(results[1].query_index).toBe(1);
});
test("index and search multivectors", async () => {
const db = await connect(tmpDir.name);
const data = [];
// generate 256 rows, each with a multivector of 10 two-dimensional vectors
for (let i = 0; i < 256; i++) {
data.push({
multivector: Array.from({ length: 10 }, () =>
Array(2).fill(Math.random()),
),
});
}
const table = await db.createTable("multivectors", data, {
schema: new Schema([
new Field(
"multivector",
new List(
new Field(
"item",
new FixedSizeList(2, new Field("item", new Float32())),
),
),
),
]),
});
const results = await table.search(data[0].multivector).limit(10).toArray();
expect(results.length).toBe(10);
await table.createIndex("multivector", {
config: Index.ivfPq({ numPartitions: 2, distanceType: "cosine" }),
});
const results2 = await table
.search(data[0].multivector)
.limit(10)
.toArray();
expect(results2.length).toBe(10);
});
});

View File

@@ -107,6 +107,20 @@ export type IntoVector =
| number[]
| Promise<Float32Array | Float64Array | number[]>;
export type MultiVector = IntoVector[];
export function isMultiVector(value: unknown): value is MultiVector {
return Array.isArray(value) && isIntoVector(value[0]);
}
export function isIntoVector(value: unknown): value is IntoVector {
return (
value instanceof Float32Array ||
value instanceof Float64Array ||
(Array.isArray(value) && !Array.isArray(value[0]))
);
}
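// Illustrative only (not part of the module): how these guards classify inputs.
//   isIntoVector([0.1, 0.2, 0.3])            // true  - a flat array of numbers
//   isMultiVector([[0.1, 0.2], [0.3, 0.4]])  // true  - an array of vectors
//   isMultiVector([0.1, 0.2, 0.3])           // false - first element is not a vector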
export function isArrowTable(value: object): value is TableLike {
if (value instanceof ArrowTable) return true;
return "schema" in value && "batches" in value;
@@ -839,6 +853,15 @@ async function applyEmbeddingsFromMetadata(
const vector = makeVector(vectors, destType);
columns[destColumn] = vector;
}
// Add any missing columns from the schema as null vectors
for (const field of schema.fields) {
if (!(field.name in columns)) {
const nullValues = new Array(table.numRows).fill(null);
columns[field.name] = makeVector(nullValues, field.type);
}
}
const newTable = new ArrowTable(columns);
return alignTable(newTable, schema);
}
@@ -987,7 +1010,21 @@ export async function convertToTable(
embeddings?: EmbeddingFunctionConfig,
makeTableOptions?: Partial<MakeArrowTableOptions>,
): Promise<ArrowTable> {
let processedData = data;
// If we have a schema with embedding metadata, we need to preprocess the data
// to ensure all nested fields are present
if (
makeTableOptions?.schema &&
makeTableOptions.schema.metadata?.has("embedding_functions")
) {
processedData = ensureNestedFieldsExist(
data,
makeTableOptions.schema as Schema,
);
}
const table = makeArrowTable(processedData, makeTableOptions);
return await applyEmbeddings(table, embeddings, makeTableOptions?.schema);
}
@@ -1080,7 +1117,16 @@ export async function fromDataToBuffer(
schema = sanitizeSchema(schema);
}
if (isArrowTable(data)) {
const table = sanitizeTable(data);
// If we have a schema with embedding functions, we need to ensure all columns exist
// before applying embeddings, since applyEmbeddingsFromMetadata expects all columns
// to be present in the table
if (schema && schema.metadata?.has("embedding_functions")) {
const alignedTable = alignTableToSchema(table, schema);
return fromTableToBuffer(alignedTable, embeddings, schema);
} else {
return fromTableToBuffer(table, embeddings, schema);
}
} else {
const table = await convertToTable(data, embeddings, { schema });
return fromTableToBuffer(table);
@@ -1149,7 +1195,7 @@ function alignBatch(batch: RecordBatch, schema: Schema): RecordBatch {
type: new Struct(schema.fields),
length: batch.numRows,
nullCount: batch.nullCount,
children: alignedChildren as unknown as ArrowData<DataType>[],
});
return new RecordBatch(schema, newData);
}
@@ -1221,6 +1267,79 @@ function validateSchemaEmbeddings(
return new Schema(fields, schema.metadata);
}
/**
* Ensures that all nested fields defined in the schema exist in the data,
* filling missing fields with null values.
*/
export function ensureNestedFieldsExist(
data: Array<Record<string, unknown>>,
schema: Schema,
): Array<Record<string, unknown>> {
return data.map((row) => {
const completeRow: Record<string, unknown> = {};
for (const field of schema.fields) {
if (field.name in row) {
if (
field.type.constructor.name === "Struct" &&
row[field.name] !== null &&
row[field.name] !== undefined
) {
// Handle nested struct
const nestedValue = row[field.name] as Record<string, unknown>;
completeRow[field.name] = ensureStructFieldsExist(
nestedValue,
field.type,
);
} else {
// Non-struct field or null struct value
completeRow[field.name] = row[field.name];
}
} else {
// Field is missing from the data - set to null
completeRow[field.name] = null;
}
}
return completeRow;
});
}
/**
* Recursively ensures that all fields in a struct type exist in the data,
* filling missing fields with null values.
*/
function ensureStructFieldsExist(
data: Record<string, unknown>,
structType: Struct,
): Record<string, unknown> {
const completeStruct: Record<string, unknown> = {};
for (const childField of structType.children) {
if (childField.name in data) {
if (
childField.type.constructor.name === "Struct" &&
data[childField.name] !== null &&
data[childField.name] !== undefined
) {
// Recursively handle nested struct
completeStruct[childField.name] = ensureStructFieldsExist(
data[childField.name] as Record<string, unknown>,
childField.type,
);
} else {
// Non-struct field or null struct value
completeStruct[childField.name] = data[childField.name];
}
} else {
// Field is missing - set to null
completeStruct[childField.name] = null;
}
}
return completeStruct;
}
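// Illustrative only: given a schema whose "metadata" field is a Struct of
// {version, author}, a hypothetical call fills the missing child with null:
//   ensureNestedFieldsExist([{ id: "a", metadata: { version: 1 } }], schema)
//   // => [{ id: "a", metadata: { version: 1, author: null } }]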
interface JsonDataType {
type: string;
fields?: JsonField[];
@@ -1354,3 +1473,64 @@ function fieldToJson(field: Field): JsonField {
metadata: field.metadata,
};
}
function alignTableToSchema(
table: ArrowTable,
targetSchema: Schema,
): ArrowTable {
const existingColumns = new Map<string, Vector>();
// Map existing columns
for (const field of table.schema.fields) {
existingColumns.set(field.name, table.getChild(field.name)!);
}
// Create vectors for all fields in target schema
const alignedColumns: Record<string, Vector> = {};
for (const field of targetSchema.fields) {
if (existingColumns.has(field.name)) {
// Column exists, use it
alignedColumns[field.name] = existingColumns.get(field.name)!;
} else {
// Column missing, create null vector
alignedColumns[field.name] = createNullVector(field, table.numRows);
}
}
// Create new table with aligned schema and columns
return new ArrowTable(targetSchema, alignedColumns);
}
function createNullVector(field: Field, numRows: number): Vector {
if (field.type.constructor.name === "Struct") {
// For struct types, create a struct with null fields
const structType = field.type as Struct;
const childVectors = structType.children.map((childField) =>
createNullVector(childField, numRows),
);
// Create struct data
const structData = makeData({
type: structType,
length: numRows,
nullCount: 0,
children: childVectors.map((v) => v.data[0]),
});
return arrowMakeVector(structData);
} else {
// For other types, create a vector of nulls
const nullBitmap = new Uint8Array(Math.ceil(numRows / 8));
// All bits are 0, meaning all values are null
const data = makeData({
type: field.type,
length: numRows,
nullCount: numRows,
nullBitmap,
});
return arrowMakeVector(data);
}
}
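Together, `alignTableToSchema` and `createNullVector` let `fromDataToBuffer` accept Arrow tables that omit columns declared in a schema carrying embedding-function metadata. A minimal sketch of the public behavior, mirroring the "missing columns" test above (the schema and data here are placeholders; the missing `description` column round-trips as all nulls):

```ts
const schema = new Schema(
  [
    new Field("domain", new Utf8(), true),
    new Field("description", new Utf8(), true),
  ],
  new Map([["embedding_functions", JSON.stringify([])]]),
);
const partial = makeArrowTable([{ domain: "example.com" }]);
const buf = await fromDataToBuffer(partial, undefined, schema);
// the realigned table now carries a "description" column of nulls
console.log(tableFromIPC(buf).getChild("description")?.toArray()); // [null]
```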

View File

@@ -100,6 +100,7 @@ export {
RecordBatchLike,
DataLike,
IntoVector,
MultiVector,
} from "./arrow";
export { IntoSql, packBits } from "./util";

View File

@@ -439,7 +439,7 @@ export interface FtsOptions {
*
* "raw" - Raw tokenizer. This tokenizer does not split the text into tokens and indexes the entire text as a single token.
*/
baseTokenizer?: "simple" | "whitespace" | "raw";
baseTokenizer?: "simple" | "whitespace" | "raw" | "ngram";
/**
* language for stemming and stop words
@@ -472,6 +472,21 @@ export interface FtsOptions {
* whether to fold accented characters into their ASCII equivalents (e.g. "café" → "cafe")
*/
asciiFolding?: boolean;
/**
* ngram min length
*/
ngramMinLength?: number;
/**
* ngram max length
*/
ngramMaxLength?: number;
/**
* whether to only index the prefix of the token for ngram tokenizer
*/
prefixOnly?: boolean;
}
export class Index {
@@ -608,6 +623,9 @@ export class Index {
options?.stem,
options?.removeStopWords,
options?.asciiFolding,
options?.ngramMinLength,
options?.ngramMaxLength,
options?.prefixOnly,
),
);
}
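A hedged usage sketch of the new ngram options (the table and column names are placeholders; the option values mirror the tests above):

```ts
// index the "text" column with an ngram tokenizer; with prefixOnly set,
// only token prefixes are indexed, so infix fragments like "nce" stop matching
await table.createIndex("text", {
  config: Index.fts({
    baseTokenizer: "ngram",
    ngramMinLength: 2,
    ngramMaxLength: 3,
    prefixOnly: true,
  }),
  replace: true,
});
```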

View File

@@ -812,10 +812,12 @@ export enum Operator {
*
* - `Must`: The term must be present in the document.
* - `Should`: The term should contribute to the document score, but is not required.
* - `MustNot`: The term must not be present in the document.
*/
export enum Occur {
Must = "MUST",
Should = "SHOULD",
Must = "MUST",
MustNot = "MUST_NOT",
}
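// Illustrative only: Occur values pair with sub-queries in a BooleanQuery, e.g.
//   new BooleanQuery([
//     [Occur.Must, new MatchQuery("cat", "text")],
//     [Occur.MustNot, new MatchQuery("dog", "text")],
//   ])
// matches documents that contain "cat" but not "dog".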
/**
@@ -856,6 +858,7 @@ export class MatchQuery implements FullTextQuery {
* - `fuzziness`: The fuzziness level for the query (default is 0).
* - `maxExpansions`: The maximum number of terms to consider for fuzzy matching (default is 50).
* - `operator`: The logical operator to use for combining terms in the query (default is "OR").
* - `prefixLength`: The number of beginning characters left unchanged by fuzzy matching.
*/
constructor(
query: string,
@@ -865,6 +868,7 @@ export class MatchQuery implements FullTextQuery {
fuzziness?: number;
maxExpansions?: number;
operator?: Operator;
prefixLength?: number;
},
) {
let fuzziness = options?.fuzziness;
@@ -878,6 +882,7 @@ export class MatchQuery implements FullTextQuery {
fuzziness,
options?.maxExpansions ?? 50,
options?.operator ?? Operator.Or,
options?.prefixLength ?? 0,
);
}
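A short sketch of `prefixLength` in practice (table and column names are placeholders): with `fuzziness: 1`, "foo" would also match "fob", but pinning the first three characters restricts matches to terms that keep the exact "foo" prefix.

```ts
const hits = await table
  .search(new MatchQuery("foo", "text", { fuzziness: 1, prefixLength: 3 }))
  .toArray();
// e.g. "foo" and "food" match; "fob" no longer does
```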

View File

@@ -6,9 +6,11 @@ import {
Data,
DataType,
IntoVector,
MultiVector,
Schema,
dataTypeToJson,
fromDataToBuffer,
isMultiVector,
tableFromIPC,
} from "./arrow";
@@ -75,10 +77,10 @@ export interface OptimizeOptions {
* // Delete all versions older than 1 day
* const olderThan = new Date();
* olderThan.setDate(olderThan.getDate() - 1);
* tbl.optimize({cleanupOlderThan: olderThan});
*
* // Delete all versions except the current version
* tbl.optimize({cleanupOlderThan: new Date()});
*/
cleanupOlderThan: Date;
deleteUnverified: boolean;
@@ -346,7 +348,7 @@ export abstract class Table {
* if the query is a string and no embedding function is defined, it will be treated as a full text search query
*/
abstract search(
query: string | IntoVector | MultiVector | FullTextQuery,
queryType?: string,
ftsColumns?: string | string[],
): VectorQuery | Query;
@@ -357,7 +359,7 @@ export abstract class Table {
* is the same thing as calling `nearestTo` on the builder returned
* by `query`. @see {@link Query#nearestTo} for more details.
*/
abstract vectorSearch(vector: IntoVector | MultiVector): VectorQuery;
/**
* Add new columns with defined values.
* @param {AddColumnsSql[]} newColumnTransforms pairs of column names and
@@ -668,7 +670,7 @@ export class LocalTable extends Table {
}
search(
query: string | IntoVector | MultiVector | FullTextQuery,
queryType: string = "auto",
ftsColumns?: string | string[],
): VectorQuery | Query {
@@ -715,7 +717,15 @@ export class LocalTable extends Table {
return this.query().nearestTo(queryPromise);
}
vectorSearch(vector: IntoVector | MultiVector): VectorQuery {
if (isMultiVector(vector)) {
const query = this.query().nearestTo(vector[0]);
for (const v of vector.slice(1)) {
query.addQueryVector(v);
}
return query;
}
return this.query().nearestTo(vector);
}
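A hedged sketch of the multivector path (dimensions and values are placeholders): each inner array becomes one query vector on the same builder, the first via `nearestTo` and the rest via `addQueryVector`.

```ts
const results = await table
  .vectorSearch([
    [0.1, 0.2],
    [0.3, 0.4],
  ])
  .limit(10)
  .toArray();
```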

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-arm64",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"os": ["darwin"],
"cpu": ["arm64"],
"main": "lancedb.darwin-arm64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-x64",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"os": ["darwin"],
"cpu": ["x64"],
"main": "lancedb.darwin-x64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-gnu",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-gnu.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-musl",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-musl.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-gnu",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-gnu.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-musl",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-musl.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-arm64-msvc",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"os": [
"win32"
],

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-x64-msvc",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"os": ["win32"],
"cpu": ["x64"],
"main": "lancedb.win32-x64-msvc.node",

View File

@@ -1,12 +1,12 @@
{
"name": "@lancedb/lancedb",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@lancedb/lancedb",
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"cpu": [
"x64",
"arm64"

View File

@@ -11,7 +11,7 @@
"ann"
],
"private": false,
"version": "0.20.1-beta.1",
"version": "0.21.2-beta.0",
"main": "dist/index.js",
"exports": {
".": "./dist/index.js",

View File

@@ -123,6 +123,9 @@ impl Index {
stem: Option<bool>,
remove_stop_words: Option<bool>,
ascii_folding: Option<bool>,
ngram_min_length: Option<u32>,
ngram_max_length: Option<u32>,
prefix_only: Option<bool>,
) -> Self {
let mut opts = FtsIndexBuilder::default();
if let Some(with_position) = with_position {
@@ -149,6 +152,15 @@ impl Index {
if let Some(ascii_folding) = ascii_folding {
opts = opts.ascii_folding(ascii_folding);
}
if let Some(ngram_min_length) = ngram_min_length {
opts = opts.ngram_min_length(ngram_min_length);
}
if let Some(ngram_max_length) = ngram_max_length {
opts = opts.ngram_max_length(ngram_max_length);
}
if let Some(prefix_only) = prefix_only {
opts = opts.ngram_prefix_only(prefix_only);
}
Self {
inner: Mutex::new(Some(LanceDbIndex::FTS(opts))),

View File

@@ -335,6 +335,7 @@ impl JsFullTextQuery {
fuzziness: Option<u32>,
max_expansions: u32,
operator: String,
prefix_length: u32,
) -> napi::Result<Self> {
Ok(Self {
inner: MatchQuery::new(query)
@@ -347,6 +348,7 @@ impl JsFullTextQuery {
napi::Error::from_reason(format!("Invalid operator: {}", e))
})?,
)
.with_prefix_length(prefix_length)
.into(),
})
}

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.23.1-beta.2"
current_version = "0.24.2-beta.1"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-python"
version = "0.23.1-beta.2"
version = "0.24.2-beta.1"
edition.workspace = true
description = "Python bindings for LanceDB"
license.workspace = true

View File

@@ -85,7 +85,7 @@ embeddings = [
"boto3>=1.28.57",
"awscli>=1.29.57",
"botocore>=1.31.57",
"ollama",
"ollama>=0.3.0",
"ibm-watsonx-ai>=1.1.2",
]
azure = ["adlfs>=2024.2.0"]

View File

@@ -2,14 +2,15 @@
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
from functools import cached_property
from typing import TYPE_CHECKING, List, Optional, Sequence, Union
import numpy as np
from ..util import attempt_import_or_raise
from .base import TextEmbeddingFunction
from .registry import register
if TYPE_CHECKING:
import numpy as np
import ollama
@@ -28,23 +29,21 @@ class OllamaEmbeddings(TextEmbeddingFunction):
keep_alive: Optional[Union[float, str]] = None
ollama_client_kwargs: Optional[dict] = {}
def ndims(self) -> int:
return len(self.generate_embeddings(["foo"])[0])
def _compute_embedding(self, text) -> Union["np.array", None]:
return (
self._ollama_client.embeddings(
model=self.name,
prompt=text,
options=self.options,
keep_alive=self.keep_alive,
)["embedding"]
or None
def _compute_embedding(self, text: Sequence[str]) -> Sequence[Sequence[float]]:
response = self._ollama_client.embed(
model=self.name,
input=text,
options=self.options,
keep_alive=self.keep_alive,
)
return response.embeddings
def generate_embeddings(
self, texts: Union[List[str], "np.ndarray"]
) -> list[Union["np.array", None]]:
self, texts: Union[List[str], np.ndarray]
) -> list[Union[np.array, None]]:
"""
Get the embeddings for the given texts
@@ -54,8 +53,8 @@ class OllamaEmbeddings(TextEmbeddingFunction):
The texts to embed
"""
# TODO retry, rate limit, token limit
embeddings = self._compute_embedding(texts)
return list(embeddings)
@cached_property
def _ollama_client(self) -> "ollama.Client":

View File

@@ -137,6 +137,9 @@ class FTS:
stem: bool = True
remove_stop_words: bool = True
ascii_folding: bool = True
ngram_min_length: int = 3
ngram_max_length: int = 3
prefix_only: bool = False
@dataclass

View File

@@ -101,8 +101,9 @@ class FullTextOperator(str, Enum):
class Occur(str, Enum):
MUST = "MUST"
SHOULD = "SHOULD"
MUST = "MUST"
MUST_NOT = "MUST_NOT"
@pydantic.dataclasses.dataclass
@@ -181,6 +182,9 @@ class MatchQuery(FullTextQuery):
Can be either `AND` or `OR`.
If `AND`, all terms in the query must match.
If `OR`, at least one term in the query must match.
prefix_length : int, optional
The number of beginning characters left unchanged by fuzzy matching.
This is useful for prefix matching.
"""
query: str
@@ -189,6 +193,7 @@ class MatchQuery(FullTextQuery):
fuzziness: int = pydantic.Field(0, kw_only=True)
max_expansions: int = pydantic.Field(50, kw_only=True)
operator: FullTextOperator = pydantic.Field(FullTextOperator.OR, kw_only=True)
prefix_length: int = pydantic.Field(0, kw_only=True)
def query_type(self) -> FullTextQueryType:
return FullTextQueryType.MATCH
@@ -1369,6 +1374,8 @@ class LanceVectorQueryBuilder(LanceQueryBuilder):
if query_string is not None and not isinstance(query_string, str):
raise ValueError("Reranking currently only supports string queries")
self._str_query = query_string if query_string is not None else self._str_query
if reranker.score == "all":
self.with_row_id(True)
return self
def bypass_vector_index(self) -> LanceVectorQueryBuilder:
@@ -1446,10 +1453,13 @@ class LanceFtsQueryBuilder(LanceQueryBuilder):
query = self._query
if self._phrase_query:
if isinstance(query, str):
if not query.startswith('"') or not query.endswith('"'):
query = f'"{query}"'
elif isinstance(query, FullTextQuery) and not isinstance(
query, PhraseQuery
):
raise TypeError("Please use PhraseQuery for phrase queries.")
query = self.to_query_object()
results = self._table._execute_query(query, timeout=timeout)
results = results.read_all()
@@ -1561,6 +1571,8 @@ class LanceFtsQueryBuilder(LanceQueryBuilder):
The LanceQueryBuilder object.
"""
self._reranker = reranker
if reranker.score == "all":
self.with_row_id(True)
return self
@@ -1837,6 +1849,8 @@ class LanceHybridQueryBuilder(LanceQueryBuilder):
self._norm = normalize
self._reranker = reranker
if reranker.score == "all":
self.with_row_id(True)
return self
@@ -3034,15 +3048,21 @@ class AsyncHybridQuery(AsyncQueryBase, AsyncVectorQueryBase):
>>> asyncio.run(doctest_example()) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
Vector Search Plan:
ProjectionExec: expr=[vector@0 as vector, text@3 as text, _distance@2 as _distance]
Take: columns="vector, _rowid, _distance, (text)"
CoalesceBatchesExec: target_batch_size=1024
GlobalLimitExec: skip=0, fetch=10
FilterExec: _distance@2 IS NOT NULL
SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], preserve_partitioning=[false]
KNNVectorDistance: metric=l2
LanceScan: uri=..., projection=[vector], row_id=true, row_addr=false, ordered=false
Take: columns="vector, _rowid, _distance, (text)"
CoalesceBatchesExec: target_batch_size=1024
GlobalLimitExec: skip=0, fetch=10
FilterExec: _distance@2 IS NOT NULL
SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], preserve_partitioning=[false]
KNNVectorDistance: metric=l2
LanceScan: uri=..., projection=[vector], row_id=true, row_addr=false, ordered=false
<BLANKLINE>
FTS Search Plan:
ProjectionExec: expr=[vector@2 as vector, text@3 as text, _score@1 as _score]
Take: columns="_rowid, _score, (vector), (text)"
CoalesceBatchesExec: target_batch_size=1024
GlobalLimitExec: skip=0, fetch=10
MatchQuery: query=hello
<BLANKLINE>
Parameters
----------

View File

@@ -18,7 +18,7 @@ from lancedb._lancedb import (
UpdateResult,
)
from lancedb.embeddings.base import EmbeddingFunctionConfig
from lancedb.index import FTS, BTree, Bitmap, HnswSq, IvfFlat, IvfPq, LabelList
from lancedb.remote.db import LOOP
import pyarrow as pa
@@ -89,7 +89,7 @@ class RemoteTable(Table):
def to_pandas(self):
"""to_pandas() is not yet supported on LanceDB cloud."""
raise NotImplementedError("to_pandas() is not yet supported on LanceDB cloud.")
def checkout(self, version: Union[int, str]):
return LOOP.run(self._table.checkout(version))
@@ -158,6 +158,9 @@ class RemoteTable(Table):
stem: bool = True,
remove_stop_words: bool = True,
ascii_folding: bool = True,
ngram_min_length: int = 3,
ngram_max_length: int = 3,
prefix_only: bool = False,
):
config = FTS(
with_position=with_position,
@@ -168,6 +171,9 @@ class RemoteTable(Table):
stem=stem,
remove_stop_words=remove_stop_words,
ascii_folding=ascii_folding,
ngram_min_length=ngram_min_length,
ngram_max_length=ngram_max_length,
prefix_only=prefix_only,
)
LOOP.run(
self._table.create_index(
@@ -186,6 +192,8 @@ class RemoteTable(Table):
accelerator: Optional[str] = None,
index_type="vector",
wait_timeout: Optional[timedelta] = None,
*,
num_bits: int = 8,
):
"""Create an index on the table.
Currently, the only parameters that matter are
@@ -220,11 +228,6 @@ class RemoteTable(Table):
>>> table.create_index("l2", "vector") # doctest: +SKIP
"""
if num_sub_vectors is not None:
logging.warning(
"num_sub_vectors is not supported on LanceDB cloud."
@@ -244,13 +247,21 @@ class RemoteTable(Table):
index_type = index_type.upper()
if index_type == "VECTOR" or index_type == "IVF_PQ":
config = IvfPq(
distance_type=metric,
num_partitions=num_partitions,
num_sub_vectors=num_sub_vectors,
num_bits=num_bits,
)
elif index_type == "IVF_HNSW_PQ":
raise ValueError(
"IVF_HNSW_PQ is not supported on LanceDB cloud."
"Please use IVF_HNSW_SQ instead."
)
elif index_type == "IVF_HNSW_SQ":
config = HnswSq(distance_type=metric, num_partitions=num_partitions)
elif index_type == "IVF_FLAT":
config = IvfFlat(distance_type=metric, num_partitions=num_partitions)
else:
raise ValueError(
f"Unknown vector index type: {index_type}. Valid options are"

View File

@@ -74,9 +74,7 @@ class AnswerdotaiRerankers(Reranker):
if self.score == "relevance":
combined_results = self._keep_relevance_score(combined_results)
elif self.score == "all":
combined_results = self._merge_and_keep_scores(vector_results, fts_results)
combined_results = combined_results.sort_by(
[("_relevance_score", "descending")]
)

View File

@@ -232,6 +232,39 @@ class Reranker(ABC):
return deduped_table
def _merge_and_keep_scores(self, vector_results: pa.Table, fts_results: pa.Table):
"""
Merge the results from the vector and FTS search and keep the scores.
This is slower than keeping only the relevance score, but it can be useful
for debugging.
"""
# add nulls to fts results for _distance
if "_distance" not in fts_results.column_names:
fts_results = fts_results.append_column(
"_distance",
pa.array([None] * len(fts_results), type=pa.float32()),
)
# add nulls to vector results for _score
if "_score" not in vector_results.column_names:
vector_results = vector_results.append_column(
"_score",
pa.array([None] * len(vector_results), type=pa.float32()),
)
# combine them and fill the scores
vector_results_dict = {row["_rowid"]: row for row in vector_results.to_pylist()}
fts_results_dict = {row["_rowid"]: row for row in fts_results.to_pylist()}
# merge them into vector_results
for key, value in fts_results_dict.items():
if key in vector_results_dict:
vector_results_dict[key]["_score"] = value["_score"]
else:
vector_results_dict[key] = value
combined = pa.Table.from_pylist(list(vector_results_dict.values()))
return combined
def _keep_relevance_score(self, combined_results: pa.Table):
if self.score == "relevance":
if "_score" in combined_results.column_names:

View File

@@ -92,14 +92,14 @@ class CohereReranker(Reranker):
vector_results: pa.Table,
fts_results: pa.Table,
):
if self.score == "all":
combined_results = self._merge_and_keep_scores(vector_results, fts_results)
else:
combined_results = self.merge_results(vector_results, fts_results)
combined_results = self._rerank(combined_results, query)
if self.score == "relevance":
combined_results = self._keep_relevance_score(combined_results)
elif self.score == "all":
raise NotImplementedError(
"return_score='all' not implemented for cohere reranker"
)
return combined_results
def rerank_vector(self, query: str, vector_results: pa.Table):

View File

@@ -81,15 +81,15 @@ class CrossEncoderReranker(Reranker):
vector_results: pa.Table,
fts_results: pa.Table,
):
if self.score == "all":
combined_results = self._merge_and_keep_scores(vector_results, fts_results)
else:
combined_results = self.merge_results(vector_results, fts_results)
combined_results = self._rerank(combined_results, query)
# sort the results by _score
if self.score == "relevance":
combined_results = self._keep_relevance_score(combined_results)
elif self.score == "all":
raise NotImplementedError(
"return_score='all' not implemented for CrossEncoderReranker"
)
combined_results = combined_results.sort_by(
[("_relevance_score", "descending")]
)

View File

@@ -97,14 +97,14 @@ class JinaReranker(Reranker):
vector_results: pa.Table,
fts_results: pa.Table,
):
if self.score == "all":
combined_results = self._merge_and_keep_scores(vector_results, fts_results)
else:
combined_results = self.merge_results(vector_results, fts_results)
combined_results = self._rerank(combined_results, query)
if self.score == "relevance":
combined_results = self._keep_relevance_score(combined_results)
elif self.score == "all":
raise NotImplementedError(
"return_score='all' not implemented for JinaReranker"
)
return combined_results
def rerank_vector(self, query: str, vector_results: pa.Table):

View File

@@ -88,14 +88,13 @@ class OpenaiReranker(Reranker):
vector_results: pa.Table,
fts_results: pa.Table,
):
if self.score == "all":
combined_results = self._merge_and_keep_scores(vector_results, fts_results)
else:
combined_results = self.merge_results(vector_results, fts_results)
combined_results = self._rerank(combined_results, query)
if self.score == "relevance":
combined_results = self._keep_relevance_score(combined_results)
elif self.score == "all":
raise NotImplementedError(
"OpenAI Reranker does not support score='all' yet"
)
combined_results = combined_results.sort_by(
[("_relevance_score", "descending")]

View File

@@ -94,14 +94,14 @@ class VoyageAIReranker(Reranker):
vector_results: pa.Table,
fts_results: pa.Table,
):
if self.score == "all":
combined_results = self._merge_and_keep_scores(vector_results, fts_results)
else:
combined_results = self.merge_results(vector_results, fts_results)
combined_results = self._rerank(combined_results, query)
if self.score == "relevance":
combined_results = self._keep_relevance_score(combined_results)
elif self.score == "all":
raise NotImplementedError(
"return_score='all' not implemented for voyageai reranker"
)
return combined_results
def rerank_vector(self, query: str, vector_results: pa.Table):

View File

@@ -827,7 +827,7 @@ class Table(ABC):
ordering_field_names: Optional[Union[str, List[str]]] = None,
replace: bool = False,
writer_heap_size: Optional[int] = 1024 * 1024 * 1024,
use_tantivy: bool = False,
tokenizer_name: Optional[str] = None,
with_position: bool = False,
# tokenizer configs:
@@ -838,6 +838,9 @@ class Table(ABC):
stem: bool = True,
remove_stop_words: bool = True,
ascii_folding: bool = True,
ngram_min_length: int = 3,
ngram_max_length: int = 3,
prefix_only: bool = False,
wait_timeout: Optional[timedelta] = None,
):
"""Create a full-text search index on the table.
@@ -864,7 +867,7 @@ class Table(ABC):
The tokenizer to use for the index. Can be "raw", "default" or the 2 letter
language code followed by "_stem". So for english it would be "en_stem".
For available languages see: https://docs.rs/tantivy/latest/tantivy/tokenizer/enum.Language.html
use_tantivy: bool, default False
If True, use the legacy full-text search implementation based on tantivy.
If False, use the new full-text search implementation based on lance-index.
with_position: bool, default False
@@ -877,6 +880,7 @@ class Table(ABC):
- "simple": Splits text by whitespace and punctuation.
- "whitespace": Split text by whitespace, but not punctuation.
- "raw": No tokenization. The entire text is treated as a single token.
- "ngram": N-Gram tokenizer.
language : str, default "English"
The language to use for tokenization.
max_token_length : int, default 40
@@ -894,6 +898,12 @@ class Table(ABC):
ascii_folding : bool, default True
Whether to fold ASCII characters. This converts accented characters to
their ASCII equivalent. For example, "café" would be converted to "cafe".
ngram_min_length: int, default 3
The minimum length of an n-gram.
ngram_max_length: int, default 3
The maximum length of an n-gram.
prefix_only: bool, default False
Whether to only index the prefix of the token for ngram tokenizer.
wait_timeout: timedelta, optional
The timeout to wait if indexing is asynchronous.
"""
@@ -1970,7 +1980,7 @@ class LanceTable(Table):
ordering_field_names: Optional[Union[str, List[str]]] = None,
replace: bool = False,
writer_heap_size: Optional[int] = 1024 * 1024 * 1024,
use_tantivy: bool = False,
tokenizer_name: Optional[str] = None,
with_position: bool = False,
# tokenizer configs:
@@ -1981,6 +1991,9 @@ class LanceTable(Table):
stem: bool = True,
remove_stop_words: bool = True,
ascii_folding: bool = True,
ngram_min_length: int = 3,
ngram_max_length: int = 3,
prefix_only: bool = False,
):
if not use_tantivy:
if not isinstance(field_names, str):
@@ -1996,6 +2009,9 @@ class LanceTable(Table):
"stem": stem,
"remove_stop_words": remove_stop_words,
"ascii_folding": ascii_folding,
"ngram_min_length": ngram_min_length,
"ngram_max_length": ngram_max_length,
"prefix_only": prefix_only,
}
else:
tokenizer_configs = self.infer_tokenizer_configs(tokenizer_name)
@@ -2065,6 +2081,9 @@ class LanceTable(Table):
"stem": False,
"remove_stop_words": False,
"ascii_folding": False,
"ngram_min_length": 3,
"ngram_max_length": 3,
"prefix_only": False,
}
elif tokenizer_name == "raw":
return {
@@ -2075,6 +2094,9 @@ class LanceTable(Table):
"stem": False,
"remove_stop_words": False,
"ascii_folding": False,
"ngram_min_length": 3,
"ngram_max_length": 3,
"prefix_only": False,
}
elif tokenizer_name == "whitespace":
return {
@@ -2085,6 +2107,9 @@ class LanceTable(Table):
"stem": False,
"remove_stop_words": False,
"ascii_folding": False,
"ngram_min_length": 3,
"ngram_max_length": 3,
"prefix_only": False,
}
# or it's with language stemming with pattern like "en_stem"
@@ -2103,6 +2128,9 @@ class LanceTable(Table):
"stem": True,
"remove_stop_words": False,
"ascii_folding": False,
"ngram_min_length": 3,
"ngram_max_length": 3,
"prefix_only": False,
}
def add(

View File

@@ -25,4 +25,4 @@ IndexType = Literal[
]
# Tokenizer literals
BaseTokenizerType = Literal["simple", "raw", "whitespace"]
BaseTokenizerType = Literal["simple", "raw", "whitespace", "ngram"]

View File

@@ -6,7 +6,7 @@ import lancedb
# --8<-- [end:import-lancedb]
# --8<-- [start:import-numpy]
from lancedb.query import BooleanQuery, BoostQuery, MatchQuery, Occur
import numpy as np
import pyarrow as pa
@@ -191,6 +191,15 @@ def test_fts_fuzzy_query():
"food", # 1 insertion
}
results = table.search(
MatchQuery("foo", "text", fuzziness=1, prefix_length=3)
).to_pandas()
assert len(results) == 2
assert set(results["text"].to_list()) == {
"foo",
"food",
}
@pytest.mark.skipif(
os.name == "nt", reason="Need to fix https://github.com/lancedb/lance/issues/3905"
@@ -240,6 +249,60 @@ def test_fts_boost_query():
)
@pytest.mark.skipif(
os.name == "nt", reason="Need to fix https://github.com/lancedb/lance/issues/3905"
)
def test_fts_boolean_query(tmp_path):
uri = tmp_path / "boolean-example"
db = lancedb.connect(uri)
table = db.create_table(
"my_table_fts_boolean",
data=[
{"text": "The cat and dog are playing"},
{"text": "The cat is sleeping"},
{"text": "The dog is barking"},
{"text": "The dog chases the cat"},
],
mode="overwrite",
)
table.create_fts_index("text", use_tantivy=False, replace=True)
# SHOULD
results = table.search(
MatchQuery("cat", "text") | MatchQuery("dog", "text")
).to_pandas()
assert len(results) == 4
assert set(results["text"].to_list()) == {
"The cat and dog are playing",
"The cat is sleeping",
"The dog is barking",
"The dog chases the cat",
}
# MUST
results = table.search(
MatchQuery("cat", "text") & MatchQuery("dog", "text")
).to_pandas()
assert len(results) == 2
assert set(results["text"].to_list()) == {
"The cat and dog are playing",
"The dog chases the cat",
}
# MUST NOT
results = table.search(
BooleanQuery(
[
(Occur.MUST, MatchQuery("cat", "text")),
(Occur.MUST_NOT, MatchQuery("dog", "text")),
]
)
).to_pandas()
assert len(results) == 1
assert set(results["text"].to_list()) == {
"The cat is sleeping",
}
@pytest.mark.skipif(
os.name == "nt", reason="Need to fix https://github.com/lancedb/lance/issues/3905"
)

View File

@@ -669,3 +669,46 @@ def test_fts_on_list(mem_db: DBConnection):
res = table.search(PhraseQuery("lance database", "text")).limit(5).to_list()
assert len(res) == 2
def test_fts_ngram(mem_db: DBConnection):
data = pa.table({"text": ["hello world", "lance database", "lance is cool"]})
table = mem_db.create_table("test", data=data)
table.create_fts_index("text", use_tantivy=False, base_tokenizer="ngram")
results = table.search("lan", query_type="fts").limit(10).to_list()
assert len(results) == 2
assert set(r["text"] for r in results) == {"lance database", "lance is cool"}
results = (
table.search("nce", query_type="fts").limit(10).to_list()
) # spellchecker:disable-line
assert len(results) == 2
assert set(r["text"] for r in results) == {"lance database", "lance is cool"}
# the default min_ngram_length is 3, so "la" should not match
results = table.search("la", query_type="fts").limit(10).to_list()
assert len(results) == 0
# test setting min_ngram_length and prefix_only
table.create_fts_index(
"text",
use_tantivy=False,
base_tokenizer="ngram",
replace=True,
ngram_min_length=2,
prefix_only=True,
)
results = table.search("lan", query_type="fts").limit(10).to_list()
assert len(results) == 2
assert set(r["text"] for r in results) == {"lance database", "lance is cool"}
results = (
table.search("nce", query_type="fts").limit(10).to_list()
) # spellchecker:disable-line
assert len(results) == 0
results = table.search("la", query_type="fts").limit(10).to_list()
assert len(results) == 2
assert set(r["text"] for r in results) == {"lance database", "lance is cool"}

View File

@@ -272,7 +272,9 @@ async def test_distance_range_with_new_rows_async():
# append more rows so that execution plan would be mixed with ANN & Flat KNN
new_data = pa.table(
{
"vector": pa.FixedShapeTensorArray.from_numpy_ndarray(np.random.rand(4, 2)),
"vector": pa.FixedShapeTensorArray.from_numpy_ndarray(
np.random.rand(4, 2) + 1
),
}
)
await table.add(new_data)
@@ -775,6 +777,82 @@ async def test_explain_plan_async(table_async: AsyncTable):
assert "KNN" in plan
@pytest.mark.asyncio
async def test_explain_plan_fts(table_async: AsyncTable):
"""Test explain plan for FTS queries"""
# Create FTS index
from lancedb.index import FTS
await table_async.create_index("text", config=FTS())
# Test pure FTS query
query = await table_async.search("dog", query_type="fts", fts_columns="text")
plan = await query.explain_plan()
# Should show FTS details (issue #2465 is now fixed)
assert "MatchQuery: query=dog" in plan
assert "GlobalLimitExec" in plan # Default limit
# Test FTS query with limit
query_with_limit = await table_async.search(
"dog", query_type="fts", fts_columns="text"
)
plan_with_limit = await query_with_limit.limit(1).explain_plan()
assert "MatchQuery: query=dog" in plan_with_limit
assert "GlobalLimitExec: skip=0, fetch=1" in plan_with_limit
# Test FTS query with offset and limit
query_with_offset = await table_async.search(
"dog", query_type="fts", fts_columns="text"
)
plan_with_offset = await query_with_offset.offset(1).limit(1).explain_plan()
assert "MatchQuery: query=dog" in plan_with_offset
assert "GlobalLimitExec: skip=1, fetch=1" in plan_with_offset
@pytest.mark.asyncio
async def test_explain_plan_vector_with_limit_offset(table_async: AsyncTable):
"""Test explain plan for vector queries with limit and offset"""
# Test vector query with limit
plan_with_limit = await (
table_async.query().nearest_to(pa.array([1, 2])).limit(1).explain_plan()
)
assert "KNN" in plan_with_limit
assert "GlobalLimitExec: skip=0, fetch=1" in plan_with_limit
# Test vector query with offset and limit
plan_with_offset = await (
table_async.query()
.nearest_to(pa.array([1, 2]))
.offset(1)
.limit(1)
.explain_plan()
)
assert "KNN" in plan_with_offset
assert "GlobalLimitExec: skip=1, fetch=1" in plan_with_offset
@pytest.mark.asyncio
async def test_explain_plan_with_filters(table_async: AsyncTable):
"""Test explain plan for queries with filters"""
# Test vector query with filter
plan_with_filter = await (
table_async.query().nearest_to(pa.array([1, 2])).where("id = 1").explain_plan()
)
assert "KNN" in plan_with_filter
assert "FilterExec" in plan_with_filter
# Test FTS query with filter
from lancedb.index import FTS
await table_async.create_index("text", config=FTS())
query_fts_filter = await table_async.search(
"dog", query_type="fts", fts_columns="text"
)
plan_fts_filter = await query_fts_filter.where("id = 1").explain_plan()
assert "MatchQuery: query=dog" in plan_fts_filter
assert "FilterExec: id@" in plan_fts_filter # Should show filter details
@pytest.mark.asyncio
async def test_query_camelcase_async(tmp_path):
db = await lancedb.connect_async(tmp_path)

View File

@@ -210,6 +210,25 @@ async def test_retry_error():
assert cause.status_code == 429
def test_table_unimplemented_functions():
def handler(request):
if request.path == "/v1/table/test/create/?mode=create":
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.end_headers()
request.wfile.write(b"{}")
else:
request.send_response(404)
request.end_headers()
with mock_lancedb_connection(handler) as db:
table = db.create_table("test", [{"id": 1}])
with pytest.raises(NotImplementedError):
table.to_arrow()
with pytest.raises(NotImplementedError):
table.to_pandas()
def test_table_add_in_threadpool():
def handler(request):
if request.path == "/v1/table/test/insert/":

View File

@@ -499,3 +499,19 @@ def test_empty_result_reranker():
.rerank(reranker)
.to_arrow()
)
@pytest.mark.parametrize("use_tantivy", [True, False])
def test_cross_encoder_reranker_return_all(tmp_path, use_tantivy):
pytest.importorskip("sentence_transformers")
reranker = CrossEncoderReranker(return_score="all")
table, schema = get_test_table(tmp_path, use_tantivy)
query = "single player experience"
result = (
table.search(query, query_type="hybrid", vector_column_name="vector")
.rerank(reranker=reranker)
.to_arrow()
)
assert "_relevance_score" in result.column_names
assert "_score" in result.column_names
assert "_distance" in result.column_names

View File

@@ -245,7 +245,7 @@ def test_s3_dynamodb_sync(s3_bucket: str, commit_table: str, monkeypatch):
NotImplementedError,
match="Full-text search is only supported on the local filesystem",
):
table.create_fts_index("x")
table.create_fts_index("x", use_tantivy=True)
# make sure list tables still works
assert db.table_names() == ["test_ddb_sync"]

View File

@@ -47,7 +47,10 @@ pub fn extract_index_params(source: &Option<Bound<'_, PyAny>>) -> PyResult<Lance
.max_token_length(params.max_token_length)
.remove_stop_words(params.remove_stop_words)
.stem(params.stem)
.ascii_folding(params.ascii_folding)
.ngram_min_length(params.ngram_min_length)
.ngram_max_length(params.ngram_max_length)
.ngram_prefix_only(params.prefix_only);
Ok(LanceDbIndex::FTS(inner_opts))
},
"IvfFlat" => {
@@ -130,6 +133,9 @@ struct FtsParams {
stem: bool,
remove_stop_words: bool,
ascii_folding: bool,
ngram_min_length: u32,
ngram_max_length: u32,
prefix_only: bool,
}
#[derive(FromPyObject)]

View File

@@ -50,8 +50,9 @@ impl FromPyObject<'_> for PyLanceDB<FtsQuery> {
let fuzziness = ob.getattr("fuzziness")?.extract()?;
let max_expansions = ob.getattr("max_expansions")?.extract()?;
let operator = ob.getattr("operator")?.extract::<String>()?;
let prefix_length = ob.getattr("prefix_length")?.extract()?;
Ok(Self(
MatchQuery::new(query)
.with_column(Some(column))
.with_boost(boost)
@@ -60,6 +61,7 @@ impl FromPyObject<'_> for PyLanceDB<FtsQuery> {
.with_operator(Operator::try_from(operator.as_str()).map_err(|e| {
PyValueError::new_err(format!("Invalid operator: {}", e))
})?)
.with_prefix_length(prefix_length)
.into(),
))
}
@@ -68,7 +70,7 @@ impl FromPyObject<'_> for PyLanceDB<FtsQuery> {
let column = ob.getattr("column")?.extract()?;
let slop = ob.getattr("slop")?.extract()?;
Ok(Self(
PhraseQuery::new(query)
.with_column(Some(column))
.with_slop(slop)
@@ -76,10 +78,10 @@ impl FromPyObject<'_> for PyLanceDB<FtsQuery> {
))
}
"BoostQuery" => {
let positive: Self = ob.getattr("positive")?.extract()?;
let negative: Self = ob.getattr("negative")?.extract()?;
let negative_boost = ob.getattr("negative_boost")?.extract()?;
Ok(Self(
BoostQuery::new(positive.0, negative.0, negative_boost).into(),
))
}
@@ -101,18 +103,17 @@ impl FromPyObject<'_> for PyLanceDB<FtsQuery> {
let op = Operator::try_from(operator.as_str())
.map_err(|e| PyValueError::new_err(format!("Invalid operator: {}", e)))?;
Ok(Self(q.with_operator(op).into()))
}
"BooleanQuery" => {
let queries: Vec<(String, Self)> = ob.getattr("queries")?.extract()?;
let mut sub_queries = Vec::with_capacity(queries.len());
for (occur, q) in queries {
let occur = Occur::try_from(occur.as_str())
.map_err(|e| PyValueError::new_err(e.to_string()))?;
sub_queries.push((occur, q.0));
}
Ok(Self(BooleanQuery::new(sub_queries).into()))
}
name => Err(PyValueError::new_err(format!(
"Unsupported FTS query type: {}",
@@ -139,7 +140,8 @@ impl<'py> IntoPyObject<'py> for PyLanceDB<FtsQuery> {
kwargs.set_item("boost", query.boost)?;
kwargs.set_item("fuzziness", query.fuzziness)?;
kwargs.set_item("max_expansions", query.max_expansions)?;
kwargs.set_item("operator", operator_to_str(query.operator))?;
kwargs.set_item::<_, &str>("operator", query.operator.into())?;
kwargs.set_item("prefix_length", query.prefix_length)?;
namespace
.getattr(intern!(py, "MatchQuery"))?
.call((query.terms, query.column.unwrap()), Some(&kwargs))
@@ -152,8 +154,8 @@ impl<'py> IntoPyObject<'py> for PyLanceDB<FtsQuery> {
.call((query.terms, query.column.unwrap()), Some(&kwargs))
}
FtsQuery::Boost(query) => {
let positive = Self(query.positive.as_ref().clone()).into_pyobject(py)?;
let negative = Self(query.negative.as_ref().clone()).into_pyobject(py)?;
let kwargs = PyDict::new(py);
kwargs.set_item("negative_boost", query.negative_boost)?;
namespace
@@ -169,19 +171,25 @@ impl<'py> IntoPyObject<'py> for PyLanceDB<FtsQuery> {
.unzip();
let kwargs = PyDict::new(py);
kwargs.set_item("boosts", boosts)?;
kwargs.set_item("operator", operator_to_str(first.operator))?;
kwargs.set_item::<_, &str>("operator", first.operator.into())?;
namespace
.getattr(intern!(py, "MultiMatchQuery"))?
.call((first.terms.clone(), columns), Some(&kwargs))
}
FtsQuery::Boolean(query) => {
let mut queries: Vec<(&str, Bound<'py, PyAny>)> = Vec::with_capacity(
query.should.len() + query.must.len() + query.must_not.len(),
);
for q in query.should {
queries.push((Occur::Should.into(), Self(q).into_pyobject(py)?));
}
for q in query.must {
queries.push((Occur::Must.into(), Self(q).into_pyobject(py)?));
}
for q in query.must_not {
queries.push((Occur::MustNot.into(), Self(q).into_pyobject(py)?));
}
namespace
.getattr(intern!(py, "BooleanQuery"))?
.call1((queries,))
@@ -190,21 +198,6 @@ impl<'py> IntoPyObject<'py> for PyLanceDB<FtsQuery> {
}
}
// Python representation of query vector(s)
#[derive(Clone)]
pub struct PyQueryVectors(Vec<Arc<dyn Array>>);
@@ -569,7 +562,10 @@ impl FTSQuery {
}
pub fn explain_plan(self_: PyRef<'_, Self>, verbose: bool) -> PyResult<Bound<'_, PyAny>> {
let inner = self_
.inner
.clone()
.full_text_search(self_.fts_query.clone());
future_into_py(self_.py(), async move {
inner
.explain_plan(verbose)
@@ -579,7 +575,10 @@ impl FTSQuery {
}
pub fn analyze_plan(self_: PyRef<'_, Self>) -> PyResult<Bound<'_, PyAny>> {
let inner = self_
.inner
.clone()
.full_text_search(self_.fts_query.clone());
future_into_py(self_.py(), async move {
inner
.analyze_plan()

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-node"
version = "0.20.1-beta.1"
version = "0.21.2-beta.0"
description = "Serverless, low-latency vector database for AI applications"
license.workspace = true
edition.workspace = true

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb"
version = "0.20.1-beta.1"
version = "0.21.2-beta.0"
edition.workspace = true
description = "LanceDB: A serverless, low-latency vector database for AI applications"
license.workspace = true

View File

@@ -105,7 +105,7 @@ impl ListingCatalog {
}
async fn open_path(path: &str) -> Result<Self> {
let (object_store, base_path) = ObjectStore::from_uri(path).await?;
if object_store.is_local() {
Self::try_create_dir(path).context(CreateDirSnafu { path })?;
}
@@ -216,6 +216,7 @@ impl Catalog for ListingCatalog {
client_config: Default::default(),
read_consistency_interval: None,
options: Default::default(),
session: None,
};
// Add the db options to the connect request
@@ -243,6 +244,7 @@ impl Catalog for ListingCatalog {
client_config: Default::default(),
read_consistency_interval: None,
options: Default::default(),
session: None,
};
// Add the db options to the connect request
@@ -312,6 +314,7 @@ mod tests {
client_config: Default::default(),
options: Default::default(),
read_consistency_interval: None,
session: None,
};
let catalog = ListingCatalog::connect(&request).await.unwrap();
@@ -573,6 +576,7 @@ mod tests {
client_config: Default::default(),
options: Default::default(),
read_consistency_interval: None,
session: None,
};
let catalog = ListingCatalog::connect(&request).await.unwrap();
@@ -592,6 +596,7 @@ mod tests {
client_config: Default::default(),
options: Default::default(),
read_consistency_interval: None,
session: None,
};
let catalog = ListingCatalog::connect(&request).await.unwrap();
@@ -608,6 +613,7 @@ mod tests {
client_config: Default::default(),
options: Default::default(),
read_consistency_interval: None,
session: None,
};
let result = ListingCatalog::connect(&request).await;

View File

@@ -627,6 +627,12 @@ pub struct ConnectRequest {
/// consistency only applies to read operations. Write operations are
/// always consistent.
pub read_consistency_interval: Option<std::time::Duration>,
/// Optional session for object stores and caching
///
/// If provided, this session will be used instead of creating a default one.
/// This allows for custom configuration of object store registries, caching, etc.
pub session: Option<Arc<lance::session::Session>>,
}
#[derive(Debug)]
@@ -645,6 +651,7 @@ impl ConnectBuilder {
client_config: Default::default(),
read_consistency_interval: None,
options: HashMap::new(),
session: None,
},
embedding_registry: None,
}
@@ -802,6 +809,20 @@ impl ConnectBuilder {
self
}
/// Set a custom session for object stores and caching.
///
/// By default, a new session with default configuration will be created.
/// This method allows you to provide a custom session with your own
/// configuration for object store registries, caching, etc.
///
/// # Arguments
///
/// * `session` - A custom session to use for this connection
pub fn session(mut self, session: Arc<lance::session::Session>) -> Self {
self.request.session = Some(session);
self
}
#[cfg(feature = "remote")]
fn execute_remote(self) -> Result<Connection> {
use crate::remote::db::RemoteDatabaseOptions;
@@ -884,6 +905,7 @@ impl CatalogConnectBuilder {
client_config: Default::default(),
read_consistency_interval: None,
options: HashMap::new(),
session: None,
},
}
}

View File

@@ -8,7 +8,7 @@ use std::path::Path;
use std::{collections::HashMap, sync::Arc};
use lance::dataset::{ReadParams, WriteMode};
use lance::io::{ObjectStore, ObjectStoreParams, WrappingObjectStore};
use lance_datafusion::utils::StreamingWriteSource;
use lance_encoding::version::LanceFileVersion;
use lance_table::io::commit::commit_handler_from_url;
@@ -217,6 +217,9 @@ pub struct ListingDatabase {
// Options for tables created by this connection
new_table_config: NewTableConfig,
// Session for object stores and caching
session: Arc<lance::session::Session>,
}
impl std::fmt::Display for ListingDatabase {
@@ -262,6 +265,7 @@ impl ListingDatabase {
uri,
request.read_consistency_interval,
options.new_table_config,
request.session.clone(),
)
.await
}
@@ -313,13 +317,20 @@ impl ListingDatabase {
let plain_uri = url.to_string();
let session = request
.session
.clone()
.unwrap_or_else(|| Arc::new(lance::session::Session::default()));
let os_params = ObjectStoreParams {
storage_options: Some(options.storage_options.clone()),
..Default::default()
};
-let (object_store, base_path) =
-    ObjectStore::from_uri_and_params(registry, &plain_uri, &os_params).await?;
+let (object_store, base_path) = ObjectStore::from_uri_and_params(
+    session.store_registry(),
+    &plain_uri,
+    &os_params,
+)
+.await?;
if object_store.is_local() {
Self::try_create_dir(&plain_uri).context(CreateDirSnafu { path: plain_uri })?;
}
@@ -342,6 +353,7 @@ impl ListingDatabase {
read_consistency_interval: request.read_consistency_interval,
storage_options: options.storage_options,
new_table_config: options.new_table_config,
session,
})
}
Err(_) => {
@@ -349,6 +361,7 @@ impl ListingDatabase {
uri,
request.read_consistency_interval,
options.new_table_config,
request.session.clone(),
)
.await
}
@@ -359,8 +372,15 @@ impl ListingDatabase {
path: &str,
read_consistency_interval: Option<std::time::Duration>,
new_table_config: NewTableConfig,
session: Option<Arc<lance::session::Session>>,
) -> Result<Self> {
-let (object_store, base_path) = ObjectStore::from_uri(path).await?;
+let session = session.unwrap_or_else(|| Arc::new(lance::session::Session::default()));
+let (object_store, base_path) = ObjectStore::from_uri_and_params(
+    session.store_registry(),
+    path,
+    &ObjectStoreParams::default(),
+)
+.await?;
if object_store.is_local() {
Self::try_create_dir(path).context(CreateDirSnafu { path })?;
}
@@ -374,6 +394,7 @@ impl ListingDatabase {
read_consistency_interval,
storage_options: HashMap::new(),
new_table_config,
session,
})
}
@@ -441,6 +462,128 @@ impl ListingDatabase {
}
Ok(())
}
/// Inherit storage options from the connection into the target map
fn inherit_storage_options(&self, target: &mut HashMap<String, String>) {
for (key, value) in self.storage_options.iter() {
if !target.contains_key(key) {
target.insert(key.clone(), value.clone());
}
}
}
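The rule above is insert-if-absent, so a table-level value always wins over the inherited connection value. A standalone sketch of the same semantics, using key names borrowed from the encryption integration test further down:

```rust
use std::collections::HashMap;

fn main() {
    let mut table_opts = HashMap::from([
        ("aws_sse_kms_key_id".to_string(), "key2".to_string()),
    ]);
    let conn_opts = HashMap::from([
        ("aws_sse_kms_key_id".to_string(), "key1".to_string()),
        ("aws_server_side_encryption".to_string(), "aws:kms".to_string()),
    ]);
    // Same rule as inherit_storage_options: only fill in missing keys.
    for (key, value) in &conn_opts {
        table_opts.entry(key.clone()).or_insert_with(|| value.clone());
    }
    assert_eq!(table_opts["aws_sse_kms_key_id"], "key2"); // table override wins
    assert_eq!(table_opts["aws_server_side_encryption"], "aws:kms"); // inherited
}
```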
/// Extract storage option overrides from the request
fn extract_storage_overrides(
&self,
request: &CreateTableRequest,
) -> Result<(Option<LanceFileVersion>, Option<bool>)> {
let storage_options = request
.write_options
.lance_write_params
.as_ref()
.and_then(|p| p.store_params.as_ref())
.and_then(|sp| sp.storage_options.as_ref());
let storage_version_override = storage_options
.and_then(|opts| opts.get(OPT_NEW_TABLE_STORAGE_VERSION))
.map(|s| s.parse::<LanceFileVersion>())
.transpose()?;
let v2_manifest_override = storage_options
.and_then(|opts| opts.get(OPT_NEW_TABLE_V2_MANIFEST_PATHS))
.map(|s| s.parse::<bool>())
.transpose()
.map_err(|_| Error::InvalidInput {
message: "enable_v2_manifest_paths must be a boolean".to_string(),
})?;
Ok((storage_version_override, v2_manifest_override))
}
/// Prepare write parameters for table creation
fn prepare_write_params(
&self,
request: &CreateTableRequest,
storage_version_override: Option<LanceFileVersion>,
v2_manifest_override: Option<bool>,
) -> lance::dataset::WriteParams {
let mut write_params = request
.write_options
.lance_write_params
.clone()
.unwrap_or_default();
// Only modify the storage options if we actually have something to
// inherit. There is a difference between storage_options=None and
// storage_options=Some({}). Using storage_options=None will cause the
// connection's session store registry to be used. Supplying Some({})
// will cause a new connection to be created, and that connection will
// be dropped from the cache when python GCs the table object, which
// confounds reuse across tables.
if !self.storage_options.is_empty() {
let storage_options = write_params
.store_params
.get_or_insert_with(Default::default)
.storage_options
.get_or_insert_with(Default::default);
self.inherit_storage_options(storage_options);
}
write_params.data_storage_version = self
.new_table_config
.data_storage_version
.or(storage_version_override);
if let Some(enable_v2_manifest_paths) = self
.new_table_config
.enable_v2_manifest_paths
.or(v2_manifest_override)
{
write_params.enable_v2_manifest_paths = enable_v2_manifest_paths;
}
if matches!(&request.mode, CreateTableMode::Overwrite) {
write_params.mode = WriteMode::Overwrite;
}
write_params.session = Some(self.session.clone());
write_params
}
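Note the precedence encoded above: a knob set on the connection's `NewTableConfig` beats the legacy storage-option override returned by `extract_storage_overrides`. A tiny sketch of just that rule (the function name is illustrative):

```rust
use lance_encoding::version::LanceFileVersion;

// `Option::or` gives the connection-level setting priority; the value
// parsed from storage options is only a backwards-compatibility fallback.
fn effective_version(
    from_new_table_config: Option<LanceFileVersion>,
    from_storage_options: Option<LanceFileVersion>,
) -> Option<LanceFileVersion> {
    from_new_table_config.or(from_storage_options)
}
```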
/// Handle the case where table already exists based on the create mode
async fn handle_table_exists(
&self,
table_name: &str,
mode: CreateTableMode,
data_schema: &arrow_schema::Schema,
) -> Result<Arc<dyn BaseTable>> {
match mode {
CreateTableMode::Create => Err(Error::TableAlreadyExists {
name: table_name.to_string(),
}),
CreateTableMode::ExistOk(callback) => {
let req = OpenTableRequest {
name: table_name.to_string(),
index_cache_size: None,
lance_read_params: None,
};
let req = (callback)(req);
let table = self.open_table(req).await?;
let table_schema = table.schema().await?;
if table_schema.as_ref() != data_schema {
return Err(Error::Schema {
message: "Provided schema does not match existing table schema".to_string(),
});
}
Ok(table)
}
CreateTableMode::Overwrite => unreachable!(),
}
}
}
#[async_trait::async_trait]
@@ -475,50 +618,14 @@ impl Database for ListingDatabase {
Ok(f)
}
-async fn create_table(&self, mut request: CreateTableRequest) -> Result<Arc<dyn BaseTable>> {
+async fn create_table(&self, request: CreateTableRequest) -> Result<Arc<dyn BaseTable>> {
let table_uri = self.table_uri(&request.name)?;
// Inherit storage options from the connection
let storage_options = request
.write_options
.lance_write_params
.get_or_insert_with(Default::default)
.store_params
.get_or_insert_with(Default::default)
.storage_options
.get_or_insert_with(Default::default);
for (key, value) in self.storage_options.iter() {
if !storage_options.contains_key(key) {
storage_options.insert(key.clone(), value.clone());
}
}
let storage_options = storage_options.clone();
let (storage_version_override, v2_manifest_override) =
self.extract_storage_overrides(&request)?;
let mut write_params = request.write_options.lance_write_params.unwrap_or_default();
if let Some(storage_version) = &self.new_table_config.data_storage_version {
write_params.data_storage_version = Some(*storage_version);
} else {
// Allow the user to override the storage version via storage options (backwards compatibility)
if let Some(data_storage_version) = storage_options.get(OPT_NEW_TABLE_STORAGE_VERSION) {
write_params.data_storage_version = Some(data_storage_version.parse()?);
}
}
if let Some(enable_v2_manifest_paths) = self.new_table_config.enable_v2_manifest_paths {
write_params.enable_v2_manifest_paths = enable_v2_manifest_paths;
} else {
// Allow the user to override the storage version via storage options (backwards compatibility)
if let Some(enable_v2_manifest_paths) = storage_options
.get(OPT_NEW_TABLE_V2_MANIFEST_PATHS)
.map(|s| s.parse::<bool>().unwrap())
{
write_params.enable_v2_manifest_paths = enable_v2_manifest_paths;
}
}
if matches!(&request.mode, CreateTableMode::Overwrite) {
write_params.mode = WriteMode::Overwrite;
}
let write_params =
self.prepare_write_params(&request, storage_version_override, v2_manifest_override);
let data_schema = request.data.arrow_schema();
@@ -533,30 +640,10 @@ impl Database for ListingDatabase {
.await
{
Ok(table) => Ok(Arc::new(table)),
-Err(Error::TableAlreadyExists { name }) => match request.mode {
-    CreateTableMode::Create => Err(Error::TableAlreadyExists { name }),
-    CreateTableMode::ExistOk(callback) => {
-        let req = OpenTableRequest {
-            name: request.name.clone(),
-            index_cache_size: None,
-            lance_read_params: None,
-        };
-        let req = (callback)(req);
-        let table = self.open_table(req).await?;
-        let table_schema = table.schema().await?;
-        if table_schema != data_schema {
-            return Err(Error::Schema {
-                message: "Provided schema does not match existing table schema"
-                    .to_string(),
-            });
-        }
-        Ok(table)
-    }
-    CreateTableMode::Overwrite => unreachable!(),
-},
+Err(Error::TableAlreadyExists { .. }) => {
+    self.handle_table_exists(&request.name, request.mode, &data_schema)
+        .await
+}
Err(err) => Err(err),
}
}
@@ -564,18 +651,22 @@ impl Database for ListingDatabase {
async fn open_table(&self, mut request: OpenTableRequest) -> Result<Arc<dyn BaseTable>> {
let table_uri = self.table_uri(&request.name)?;
// Inherit storage options from the connection
-let storage_options = request
-    .lance_read_params
-    .get_or_insert_with(Default::default)
-    .store_options
-    .get_or_insert_with(Default::default)
-    .storage_options
-    .get_or_insert_with(Default::default);
-for (key, value) in self.storage_options.iter() {
-    if !storage_options.contains_key(key) {
-        storage_options.insert(key.clone(), value.clone());
-    }
+// Only modify the storage options if we actually have something to
+// inherit. There is a difference between storage_options=None and
+// storage_options=Some({}). Using storage_options=None will cause the
+// connection's session store registry to be used. Supplying Some({})
+// will cause a new connection to be created, and that connection will
+// be dropped from the cache when python GCs the table object, which
+// confounds reuse across tables.
+if !self.storage_options.is_empty() {
+    let storage_options = request
+        .lance_read_params
+        .get_or_insert_with(Default::default)
+        .store_options
+        .get_or_insert_with(Default::default)
+        .storage_options
+        .get_or_insert_with(Default::default);
+    self.inherit_storage_options(storage_options);
}
// Some ReadParams are exposed in the OpenTableBuilder, but we also
@@ -584,13 +675,14 @@ impl Database for ListingDatabase {
// If we have a user provided ReadParams use that
// If we don't then start with the default ReadParams and customize it with
// the options from the OpenTableBuilder
-let read_params = request.lance_read_params.unwrap_or_else(|| {
+let mut read_params = request.lance_read_params.unwrap_or_else(|| {
let mut default_params = ReadParams::default();
if let Some(index_cache_size) = request.index_cache_size {
default_params.index_cache_size = index_cache_size as usize;
}
default_params
});
read_params.session(self.session.clone());
let native_table = Arc::new(
NativeTable::open_with_params(

View File

@@ -107,7 +107,7 @@ impl ObjectStore for MirroringObjectStore {
self.primary.delete(location).await
}
-fn list(&self, prefix: Option<&Path>) -> BoxStream<'_, Result<ObjectMeta>> {
+fn list(&self, prefix: Option<&Path>) -> BoxStream<'static, Result<ObjectMeta>> {
self.primary.list(prefix)
}

View File

@@ -119,7 +119,7 @@ impl ObjectStore for IoTrackingStore {
let result = self.target.get(location).await;
if let Ok(result) = &result {
let num_bytes = result.range.end - result.range.start;
-self.record_read(num_bytes as u64);
+self.record_read(num_bytes);
}
result
}
@@ -128,12 +128,12 @@ impl ObjectStore for IoTrackingStore {
let result = self.target.get_opts(location, options).await;
if let Ok(result) = &result {
let num_bytes = result.range.end - result.range.start;
-self.record_read(num_bytes as u64);
+self.record_read(num_bytes);
}
result
}
-async fn get_range(&self, location: &Path, range: std::ops::Range<usize>) -> OSResult<Bytes> {
+async fn get_range(&self, location: &Path, range: std::ops::Range<u64>) -> OSResult<Bytes> {
let result = self.target.get_range(location, range).await;
if let Ok(result) = &result {
self.record_read(result.len() as u64);
@@ -144,7 +144,7 @@ impl ObjectStore for IoTrackingStore {
async fn get_ranges(
&self,
location: &Path,
-ranges: &[std::ops::Range<usize>],
+ranges: &[std::ops::Range<u64>],
) -> OSResult<Vec<Bytes>> {
let result = self.target.get_ranges(location, ranges).await;
if let Ok(result) = &result {
@@ -170,7 +170,7 @@ impl ObjectStore for IoTrackingStore {
self.target.delete_stream(locations)
}
-fn list(&self, prefix: Option<&Path>) -> BoxStream<'_, OSResult<ObjectMeta>> {
+fn list(&self, prefix: Option<&Path>) -> BoxStream<'static, OSResult<ObjectMeta>> {
self.record_read(0);
self.target.list(prefix)
}
@@ -179,7 +179,7 @@ impl ObjectStore for IoTrackingStore {
&self,
prefix: Option<&Path>,
offset: &Path,
-) -> BoxStream<'_, OSResult<ObjectMeta>> {
+) -> BoxStream<'static, OSResult<ObjectMeta>> {
self.record_read(0);
self.target.list_with_offset(prefix, offset)
}
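The `'static` stream lifetimes and `u64` byte ranges in these two stores appear to track an upstream `object_store` API update. The visible payoff in the tracking code is that byte accounting stays in `u64` end to end, with no `as u64` casts — a trivial sketch:

```rust
use std::ops::Range;

// With ranges already expressed in u64, accounting is a plain subtraction.
fn bytes_in(range: &Range<u64>) -> u64 {
    range.end - range.start
}

fn main() {
    assert_eq!(bytes_in(&(10..42)), 32);
}
```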

View File

@@ -57,6 +57,8 @@ use crate::{
};
const REQUEST_TIMEOUT_HEADER: HeaderName = HeaderName::from_static("x-request-timeout-ms");
const METRIC_TYPE_KEY: &str = "metric_type";
const INDEX_TYPE_KEY: &str = "index_type";
pub struct RemoteTags<'a, S: HttpSend = Sender> {
inner: &'a RemoteTable<S>,
@@ -997,23 +999,53 @@ impl<S: HttpSend> BaseTable for RemoteTable<S> {
"column": column
});
-let (index_type, distance_type) = match index.index {
+match index.index {
// TODO: Should we pass the actual index parameters? SaaS does not
// yet support them.
Index::IvfFlat(index) => ("IVF_FLAT", Some(index.distance_type)),
Index::IvfPq(index) => ("IVF_PQ", Some(index.distance_type)),
Index::IvfHnswSq(index) => ("IVF_HNSW_SQ", Some(index.distance_type)),
Index::BTree(_) => ("BTREE", None),
Index::Bitmap(_) => ("BITMAP", None),
Index::LabelList(_) => ("LABEL_LIST", None),
Index::IvfFlat(index) => {
body[INDEX_TYPE_KEY] = serde_json::Value::String("IVF_FLAT".to_string());
body[METRIC_TYPE_KEY] =
serde_json::Value::String(index.distance_type.to_string().to_lowercase());
if let Some(num_partitions) = index.num_partitions {
body["num_partitions"] = serde_json::Value::Number(num_partitions.into());
}
}
Index::IvfPq(index) => {
body[INDEX_TYPE_KEY] = serde_json::Value::String("IVF_PQ".to_string());
body[METRIC_TYPE_KEY] =
serde_json::Value::String(index.distance_type.to_string().to_lowercase());
if let Some(num_partitions) = index.num_partitions {
body["num_partitions"] = serde_json::Value::Number(num_partitions.into());
}
if let Some(num_bits) = index.num_bits {
body["num_bits"] = serde_json::Value::Number(num_bits.into());
}
}
Index::IvfHnswSq(index) => {
body[INDEX_TYPE_KEY] = serde_json::Value::String("IVF_HNSW_SQ".to_string());
body[METRIC_TYPE_KEY] =
serde_json::Value::String(index.distance_type.to_string().to_lowercase());
if let Some(num_partitions) = index.num_partitions {
body["num_partitions"] = serde_json::Value::Number(num_partitions.into());
}
}
Index::BTree(_) => {
body[INDEX_TYPE_KEY] = serde_json::Value::String("BTREE".to_string());
}
Index::Bitmap(_) => {
body[INDEX_TYPE_KEY] = serde_json::Value::String("BITMAP".to_string());
}
Index::LabelList(_) => {
body[INDEX_TYPE_KEY] = serde_json::Value::String("LABEL_LIST".to_string());
}
Index::FTS(fts) => {
body[INDEX_TYPE_KEY] = serde_json::Value::String("FTS".to_string());
let params = serde_json::to_value(&fts).map_err(|e| Error::InvalidInput {
message: format!("failed to serialize FTS index params {:?}", e),
})?;
for (key, value) in params.as_object().unwrap() {
body[key] = value.clone();
}
("FTS", None)
}
Index::Auto => {
let schema = self.schema().await?;
@@ -1023,9 +1055,11 @@ impl<S: HttpSend> BaseTable for RemoteTable<S> {
message: format!("Column {} not found in schema", column),
})?;
if supported_vector_data_type(field.data_type()) {
("IVF_PQ", Some(DistanceType::L2))
body[INDEX_TYPE_KEY] = serde_json::Value::String("IVF_PQ".to_string());
body[METRIC_TYPE_KEY] =
serde_json::Value::String(DistanceType::L2.to_string().to_lowercase());
} else if supported_btree_data_type(field.data_type()) {
("BTREE", None)
body[INDEX_TYPE_KEY] = serde_json::Value::String("BTREE".to_string());
} else {
return Err(Error::NotSupported {
message: format!(
@@ -1042,12 +1076,6 @@ impl<S: HttpSend> BaseTable for RemoteTable<S> {
})
}
};
body["index_type"] = serde_json::Value::String(index_type.into());
if let Some(distance_type) = distance_type {
// Phalanx expects this to be lowercase right now.
body["metric_type"] =
serde_json::Value::String(distance_type.to_string().to_lowercase());
}
let request = request.json(&body);
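For reference, the body assembled above for a fully parameterized IVF_PQ index comes out shaped like the following (mirroring the expectations in the updated test further down, where "a" is the indexed column):

```rust
use serde_json::json;

// Column name "a" and parameter values are taken from the test case below.
let body = json!({
    "column": "a",
    "index_type": "IVF_PQ",
    "metric_type": "cosine",
    "num_partitions": 128,
    "num_bits": 4,
});
```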
@@ -1429,11 +1457,12 @@ mod tests {
use chrono::{DateTime, Utc};
use futures::{future::BoxFuture, StreamExt, TryFutureExt};
use lance_index::scalar::inverted::query::MatchQuery;
-use lance_index::scalar::FullTextSearchQuery;
+use lance_index::scalar::{FullTextSearchQuery, InvertedIndexParams};
use reqwest::Body;
use rstest::rstest;
use serde_json::json;
-use crate::index::vector::IvfFlatIndexBuilder;
+use crate::index::vector::{IvfFlatIndexBuilder, IvfHnswSqIndexBuilder};
use crate::remote::db::DEFAULT_SERVER_VERSION;
use crate::remote::JSON_CONTENT_TYPE;
use crate::{
@@ -2433,29 +2462,79 @@ mod tests {
let cases = [
(
"IVF_FLAT",
Some("hamming"),
json!({
"metric_type": "hamming",
}),
Index::IvfFlat(IvfFlatIndexBuilder::default().distance_type(DistanceType::Hamming)),
),
("IVF_PQ", Some("l2"), Index::IvfPq(Default::default())),
(
"IVF_FLAT",
json!({
"metric_type": "hamming",
"num_partitions": 128,
}),
Index::IvfFlat(
IvfFlatIndexBuilder::default()
.distance_type(DistanceType::Hamming)
.num_partitions(128),
),
),
(
"IVF_PQ",
Some("cosine"),
Index::IvfPq(IvfPqIndexBuilder::default().distance_type(DistanceType::Cosine)),
json!({
"metric_type": "l2",
}),
Index::IvfPq(Default::default()),
),
+(
+    "IVF_PQ",
+    json!({
+        "metric_type": "cosine",
+        "num_partitions": 128,
+        "num_bits": 4,
+    }),
+    Index::IvfPq(
+        IvfPqIndexBuilder::default()
+            .distance_type(DistanceType::Cosine)
+            .num_partitions(128)
+            .num_bits(4),
+    ),
+),
(
"IVF_HNSW_SQ",
Some("l2"),
json!({
"metric_type": "l2",
}),
Index::IvfHnswSq(Default::default()),
),
+(
+    "IVF_HNSW_SQ",
+    json!({
+        "metric_type": "l2",
+        "num_partitions": 128,
+    }),
+    Index::IvfHnswSq(
+        IvfHnswSqIndexBuilder::default()
+            .distance_type(DistanceType::L2)
+            .num_partitions(128),
+    ),
+),
// HNSW_PQ isn't yet supported on SaaS
("BTREE", None, Index::BTree(Default::default())),
("BITMAP", None, Index::Bitmap(Default::default())),
("LABEL_LIST", None, Index::LabelList(Default::default())),
("FTS", None, Index::FTS(Default::default())),
("BTREE", json!({}), Index::BTree(Default::default())),
("BITMAP", json!({}), Index::Bitmap(Default::default())),
(
"LABEL_LIST",
json!({}),
Index::LabelList(Default::default()),
),
(
"FTS",
serde_json::to_value(InvertedIndexParams::default()).unwrap(),
Index::FTS(Default::default()),
),
];
-for (index_type, distance_type, index) in cases {
-    let params = index.clone();
+for (index_type, expected_body, index) in cases {
let table = Table::new_with_handler("my_table", move |request| {
assert_eq!(request.method(), "POST");
assert_eq!(request.url().path(), "/v1/table/my_table/create_index/");
@@ -2465,19 +2544,9 @@ mod tests {
);
let body = request.body().unwrap().as_bytes().unwrap();
let body: serde_json::Value = serde_json::from_slice(body).unwrap();
-let mut expected_body = serde_json::json!({
-    "column": "a",
-    "index_type": index_type,
-});
-if let Some(distance_type) = distance_type {
-    expected_body["metric_type"] = distance_type.to_lowercase().into();
-}
-if let Index::FTS(fts) = &params {
-    let params = serde_json::to_value(fts).unwrap();
-    for (key, value) in params.as_object().unwrap() {
-        expected_body[key] = value.clone();
-    }
-}
+let mut expected_body = expected_body.clone();
+expected_body["column"] = "a".into();
+expected_body[INDEX_TYPE_KEY] = index_type.into();
assert_eq!(body, expected_body);

View File

@@ -392,9 +392,18 @@ pub mod tests {
} else {
expected_line.trim()
};
-assert_eq!(&actual_trimmed[..expected_trimmed.len()], expected_trimmed);
+assert_eq!(
+    &actual_trimmed[..expected_trimmed.len()],
+    expected_trimmed,
+    "\nactual:\n{physical_plan}\nexpected:\n{expected}"
+);
}
-assert_eq!(lines_checked, expected.lines().count());
+assert_eq!(
+    lines_checked,
+    expected.lines().count(),
+    "\nlines_checked:\n{lines_checked}\nexpected:\n{}",
+    expected.lines().count()
+);
}
}
@@ -477,9 +486,9 @@ pub mod tests {
TestFixture::check_plan(
plan,
"MetadataEraserExec
RepartitionExec:...
CoalesceBatchesExec:...
FilterExec: i@0 >= 5
RepartitionExec:...
ProjectionExec:...
LanceScan:...",
)

View File

@@ -129,7 +129,9 @@ impl DatasetRef {
dataset: ref mut ds,
..
} => {
-*ds = dataset;
+if dataset.manifest().version > ds.manifest().version {
+    *ds = dataset;
+}
}
_ => unreachable!("Dataset should be in latest mode at this point"),
}
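This makes the cache refresh monotonic: a checkout that races and completes late can no longer replace the cached dataset with an older manifest. The guard in isolation — a sketch, assuming manifest versions are plain integers:

```rust
// Only advance, never roll back, the cached dataset's version.
fn should_replace(cached_version: u64, incoming_version: u64) -> bool {
    incoming_version > cached_version
}
```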

View File

@@ -281,6 +281,46 @@ async fn test_encryption() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn test_table_storage_options_override() -> Result<()> {
// Test that table-level storage options override connection-level options
let bucket = S3Bucket::new("test-override").await;
let key1 = KMSKey::new().await;
let key2 = KMSKey::new().await;
let uri = format!("s3://{}", bucket.0);
// Create connection with key1 encryption
let db = lancedb::connect(&uri)
.storage_options(CONFIG.iter().cloned())
.storage_option("aws_server_side_encryption", "aws:kms")
.storage_option("aws_sse_kms_key_id", &key1.0)
.execute()
.await?;
// Create table overriding with key2 encryption
let data = test_data();
let data = RecordBatchIterator::new(vec![Ok(data.clone())], data.schema());
let _table = db
.create_table("test_override", data)
.storage_option("aws_sse_kms_key_id", &key2.0)
.execute()
.await?;
// Verify objects are encrypted with key2, not key1
validate_objects_encrypted(&bucket.0, "test_override", &key2.0).await;
// Also test that a table created without override uses connection settings
let data = test_data();
let data = RecordBatchIterator::new(vec![Ok(data.clone())], data.schema());
let _table2 = db.create_table("test_inherit", data).execute().await?;
// Verify this table uses key1 from connection
validate_objects_encrypted(&bucket.0, "test_inherit", &key1.0).await;
Ok(())
}
struct DynamoDBCommitTable(String);
impl DynamoDBCommitTable {