Compare commits

..

95 Commits

Author SHA1 Message Date
Lance Release
a3b45a4d00 Bump version: 0.21.1-beta.0 → 0.21.1 2025-03-11 13:14:30 +00:00
Lance Release
c316c2f532 Bump version: 0.21.0 → 0.21.1-beta.0 2025-03-11 13:14:29 +00:00
Weston Pace
3966b16b63 fix: restore pylance as mandatory dependency (#2204)
We attempted to make pylance optional in
https://github.com/lancedb/lancedb/pull/2156 but it appears this did not
quite work. Users are unable to use lancedb from a fresh install. This
reverts the optional-ness so we can get back to a working state while we
fix the issue.
2025-03-11 06:13:52 -07:00
Lance Release
5661cc15ac Updating package-lock.json 2025-03-10 23:53:56 +00:00
Lance Release
4e7220400f Updating package-lock.json 2025-03-10 23:13:52 +00:00
Lance Release
ae4928fe77 Updating package-lock.json 2025-03-10 23:13:36 +00:00
Lance Release
e80a405dee Bump version: 0.18.0-beta.1 → 0.18.0 2025-03-10 23:13:18 +00:00
Lance Release
a53e19e386 Bump version: 0.18.0-beta.0 → 0.18.0-beta.1 2025-03-10 23:13:13 +00:00
Lance Release
c0097c5f0a Bump version: 0.21.0-beta.2 → 0.21.0 2025-03-10 23:12:56 +00:00
Lance Release
c199708e64 Bump version: 0.21.0-beta.1 → 0.21.0-beta.2 2025-03-10 23:12:56 +00:00
Weston Pace
4a47150ae7 feat: upgrade to lance 0.24.1 (#2199) 2025-03-10 15:18:37 -07:00
Wyatt Alt
f86b20a564 fix: delete tables from DDB on drop_all_tables (#2194)
Prior to this commit, issuing drop_all_tables on a listing database with
an external manifest store would delete physical tables but leave
references behind in the manifest store. The table drop would succeed,
but subsequent creation of a table with the same name would fail with a
conflict.

With this patch, the external manifest store is updated to account for
the dropped tables so that dropped table names can be reused.
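As a sketch of the repaired flow (the `s3+ddb://` URI form for an
external DynamoDB manifest store is assumed from Lance's conventions):

```python
import lancedb

# Listing database whose manifests live in an external DynamoDB table.
db = lancedb.connect("s3+ddb://my-bucket/db?ddbTableName=lance-manifests")
db.create_table("events", [{"id": 1}])

# Previously the manifest-store entries survived the drop, so the
# re-creation below failed with a conflict; now the name can be reused.
db.drop_all_tables()
db.create_table("events", [{"id": 1}])
```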
2025-03-10 15:00:53 -07:00
msu-reevo
cc81f3e1a5 fix(python): typing (#2167)
@wjones127 is there a standard way you guys set up your virtualenv? I can
either relist all the dependencies in the pyright pre-commit section,
specify a venv, or require the user to be in the virtual environment when
they run git commit. If the venv location were standardized, or a Python
manager like `uv` were used, it would be easier to avoid duplicating the
pyright dependency list.

Per your suggestion, in `pyproject.toml` I added all the passing files to
the `includes` section.

For ruff I upgraded the version and removed "TCH", which doesn't exist as
an option.

I added a `pyright_report.csv` which contains a list of all files sorted
by pyright errors ascending as a todo list to work on.

I fixed about 30 issues in `table.py` stemming from plain `str` values
being passed into methods that require one of a set of string `Literal`s,
by extracting those `Literal` types into `types.py`.

Can you verify in the rust bridge that the schema should be a property
and not a method here? If it's a method, then there's another place in
the code where `inner.schema` should be `inner.schema()`
``` python
class RecordBatchStream:
    @property
    def schema(self) -> pa.Schema: ...
```

Also, unless the `_lancedb.pyi` file is wrong, there is no `__anext__`
on `_inner` when it's not an `AsyncGenerator`; only `next` is defined:
``` python
    async def __anext__(self) -> pa.RecordBatch:
        return await self._inner.__anext__()
        if isinstance(self._inner, AsyncGenerator):
            batch = await self._inner.__anext__()
        else:
            batch = await self._inner.next()
        if batch is None:
            raise StopAsyncIteration
        return batch
```
In the `else` branch, `_inner` is a `RecordBatchStream`:
```python
class RecordBatchStream:
    @property
    def schema(self) -> pa.Schema: ...
    async def next(self) -> Optional[pa.RecordBatch]: ...
```

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2025-03-10 09:01:23 -07:00
Weston Pace
bc49c4db82 feat: respect datafusion's batch size when running as a table provider (#2187)
Datafusion makes the batch size available as part of the `SessionState`.
We should use that to set the `max_batch_length` property in the
`QueryExecutionOptions`.
2025-03-07 05:53:36 -08:00
Weston Pace
d2eec46f17 feat: add support for streaming input to create_table (#2175)
This PR makes it possible to create a table using an asynchronous stream
of input data; previously only a synchronous iterator was supported (see
the sketch after the list below). There are a number of follow-ups not
yet tackled:

* Support for embedding functions (the embedding functions wrapper needs
to be re-written to be async, should be an easy lift)
* Support for async input into the remote table (the make_ipc_batch
needs to change to accept async input, leaving undone for now because I
think we want to support actual streaming uploads into the remote table
soon)
* Support for async input into the add function (pretty essential, but
it is a fairly distinct code path, so saving for a different PR)
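A rough sketch of what this enables from the Python async API, assuming
an async generator of record batches is accepted as the data argument:

```python
import asyncio
import pyarrow as pa
import lancedb

async def batches():
    # An asynchronous producer of Arrow record batches.
    for i in range(3):
        yield pa.RecordBatch.from_pylist([{"id": i}])

async def main():
    db = await lancedb.connect_async("data/sample-lancedb")
    await db.create_table("streamed", batches())

asyncio.run(main())
```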
2025-03-06 11:55:00 -08:00
Lance Release
51437bc228 Bump version: 0.21.0-beta.0 → 0.21.0-beta.1 2025-03-06 19:23:06 +00:00
Bert
fa53cfcfd2 feat: support modifying field metadata in lancedb python (#2178) 2025-03-04 16:58:46 -05:00
vinoyang
374fe0ad95 feat(rust): introduce Catalog trait and implement ListingCatalog (#2148)
Co-authored-by: Weston Pace <weston.pace@gmail.com>
2025-03-03 20:22:24 -08:00
BubbleCal
35e5b84ba9 chore: upgrade lance to 0.24.0-beta.1 (#2171)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-03-03 12:32:12 +08:00
Lei Xu
7c12d497b0 ci: bump python to 3.12 in GHA (#2169) 2025-03-01 17:24:02 -08:00
ayao227
dfe4ba8dad chore: add reo integration (#2149)
This PR adds reo integration to the lancedb documentation website.
2025-02-28 07:51:34 -08:00
Weston Pace
fa1b9ad5bd fix: don't use with_schema to remove schema metadata (#2162)
It seems that `RecordBatch::with_schema` is unable to remove schema
metadata from a batch. It fails with the error `target schema is not
superset of current schema`.

I'm not sure how the `test_metadata_erased` test is passing. Strangely,
the metadata was not present by the time the batch arrived at the
metadata eraser. I think maybe the schema metadata is only present in
the batch if there is a filter.

I've created a new unit test that makes sure the metadata is erased even
when a filter is present.
2025-02-27 10:24:00 -08:00
BubbleCal
8877eb020d feat: record the server version for remote table (#2147)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-02-27 15:55:59 +08:00
Will Jones
01e4291d21 feat(python): drop hard dependency on pylance (#2156)
Closes #1793
2025-02-26 15:53:45 -08:00
Lance Release
ab3ea76ad1 Updating package-lock.json 2025-02-26 21:23:39 +00:00
Lance Release
728ef8657d Updating package-lock.json 2025-02-26 20:11:37 +00:00
Lance Release
0b13901a16 Updating package-lock.json 2025-02-26 20:11:22 +00:00
Lance Release
84b110e0ef Bump version: 0.17.0 → 0.18.0-beta.0 2025-02-26 20:11:07 +00:00
Lance Release
e1836e54e3 Bump version: 0.20.0 → 0.21.0-beta.0 2025-02-26 20:10:54 +00:00
Weston Pace
4ba5326880 feat: reapply upgrade lance to v0.23.3-beta.1 (#2157)
This reverts commit 2f0c5baea2.

---------

Co-authored-by: Lu Qiu <luqiujob@gmail.com>
2025-02-26 11:44:11 -08:00
Lance Release
b036a69300 Updating package-lock.json 2025-02-26 19:32:22 +00:00
Will Jones
5b12a47119 feat!: revert query limit to be unbounded for scans (#2151)
In earlier PRs (#1886, #1191) we made the default limit 10 regardless of
the query type. This was confusing for users and in many cases a
breaking change. Users would have queries that used to return all
results, but instead only returned the first 10, causing silent bugs.

Part of the cause was consistency: the Python sync API seems to have
always had a limit of 10, while newer APIs (Python async and Nodejs)
didn't.

This PR sets the default limit only for searches (vector search, FTS),
while letting scans (even with filters) be unbounded. It does this
consistently for all SDKs.
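Concretely, with the Python sync API the behavior is now along these
lines (a sketch; `my_table` and its columns are hypothetical):

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")
tbl = db.open_table("my_table")

# Vector search: still defaults to the top 10 results.
hits = tbl.search([0.1, 0.2]).to_list()

# Plain scan, even with a filter: no implicit limit anymore.
rows = tbl.search().where("price > 10").to_list()
```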

Fixes #1983
Fixes #1852
Fixes #2141
2025-02-26 10:32:14 -08:00
Lance Release
769d483e50 Updating package-lock.json 2025-02-26 18:16:59 +00:00
Lance Release
9ecb11fe5a Updating package-lock.json 2025-02-26 18:16:42 +00:00
Lance Release
22bd8329f3 Bump version: 0.17.0-beta.0 → 0.17.0 2025-02-26 18:16:07 +00:00
Lance Release
a736fad149 Bump version: 0.16.1-beta.3 → 0.17.0-beta.0 2025-02-26 18:16:01 +00:00
Lance Release
072adc41aa Bump version: 0.20.0-beta.0 → 0.20.0 2025-02-26 18:15:23 +00:00
Lance Release
c6f25ef1f0 Bump version: 0.19.1-beta.3 → 0.20.0-beta.0 2025-02-26 18:15:23 +00:00
Weston Pace
2f0c5baea2 Revert "chore: upgrade lance to v0.23.3-beta.1 (#2153)"
This reverts commit a63dd66d41.
2025-02-26 10:14:29 -08:00
BubbleCal
a63dd66d41 chore: upgrade lance to v0.23.3-beta.1 (#2153)
This fixes a bug in SQ; see https://github.com/lancedb/lance/pull/3476
for more details.

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
Co-authored-by: Lu Qiu <luqiujob@gmail.com>
2025-02-26 09:52:28 -08:00
Weston Pace
d6b3ccb37b feat: upgrade lance to 0.23.2 (#2152)
This also changes the pylance pin from `==0.23.2` to `~=0.23.2` which
should allow the pylance dependency to float a little. The pylance
dependency is actually not used for much anymore and so it should be
tolerant of patch changes.
2025-02-26 09:02:51 -08:00
Weston Pace
c4f99e82e5 feat: push filters down into DF table provider (#2128) 2025-02-25 14:46:28 -08:00
andrew-pienso
979a2d3d9d docs: fixes is_open docstring on AsyncTable (#2150) 2025-02-25 09:11:25 -08:00
Will Jones
7ac5f74c80 feat!: add variable store to embeddings registry (#2112)
BREAKING CHANGE: embedding function implementations in Node now need to
call `resolveVariables()` in their constructors and should **not**
implement `toJSON()`.

This tries to address the handling of secrets. In Node, they are
currently lost. In Python, they are currently leaked into the table
schema metadata.

This PR introduces an in-memory variable store on the function registry.
It also allows embedding function definitions to label certain config
values as "sensitive", and the preprocessing logic will raise an error
if users try to pass in hard-coded values.
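For illustration, a minimal Python sketch of the intended workflow,
assuming the registry exposes `set_var()` and that the registered
`openai` function accepts an `api_key` option:

```python
import os
from lancedb.embeddings import get_registry

registry = get_registry()
# The key lives only in the in-memory variable store, not in table metadata.
registry.set_var("openai_api_key", os.environ["OPENAI_API_KEY"])

# A hard-coded value for a sensitive option would be rejected;
# a "$var:" reference is resolved at runtime instead.
func = registry.get("openai").create(api_key="$var:openai_api_key")
```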

Closes #2110
Closes #521

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2025-02-24 15:52:19 -08:00
Will Jones
ecdee4d2b1 feat(python): add search() method to async API (#2049)
Reviving #1966.

Closes #1938

The `search()` method can apply embeddings for the user. This simplifies
hybrid search, so instead of writing:

```python
vector_query = embeddings.compute_query_embeddings("flower moon")[0]
await (
    async_tbl.query()
    .nearest_to(vector_query)
    .nearest_to_text("flower moon")
    .to_pandas()
)
```

You can write:

```python
await (await async_tbl.search("flower moon", query_type="hybrid")).to_pandas()
```

Unfortunately, we had to do a double-await here because `search()` needs
to be async. This is because it often needs to do IO to retrieve and run
an embedding function.
2025-02-24 14:19:25 -08:00
BubbleCal
f391ed828a fix: remote table doesn't apply the prefilter flag for FTS (#2145) 2025-02-24 21:37:43 +08:00
BubbleCal
a99a450f2b fix: flat FTS panic with prefilter and update lance (#2144)
This is fixed in lance, so upgrade lance to 0.23.2-beta1.
2025-02-24 14:34:00 +08:00
Lei Xu
6fa1f37506 docs: improve pydantic integration docs (#2136)
Address usage mistakes in
https://github.com/lancedb/lancedb/issues/2135.

* Add an example of how to use `LanceModel` and the `Vector` type
(sketched below)
* Add a test for the pydantic doc
* Fix the example to use `LanceModel` directly instead of calling
`MyModel.to_arrow_schema()`
* Add cross-reference link to pydantic doc site
* Configure mkdocs to watch code changes in python directory.
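The added example is along these lines (a minimal sketch):

```python
import lancedb
from lancedb.pydantic import LanceModel, Vector

class Words(LanceModel):
    text: str
    vector: Vector(2)  # fixed-size float32 vector of dimension 2

db = lancedb.connect("data/sample-lancedb")
table = db.create_table("words", schema=Words)
table.add([Words(text="hello", vector=[0.1, 0.2])])
```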
2025-02-21 12:48:37 -08:00
BubbleCal
544382df5e fix: handle batch queries in single request (#2139) 2025-02-21 13:23:39 +08:00
BubbleCal
784f00ef6d chore: update Cargo.lock (#2137) 2025-02-21 12:27:10 +08:00
Lance Release
96d7446f70 Updating package-lock.json 2025-02-20 04:51:26 +00:00
Lance Release
99ea78fb55 Updating package-lock.json 2025-02-20 03:38:44 +00:00
Lance Release
8eef4cdc28 Updating package-lock.json 2025-02-20 03:38:27 +00:00
Lance Release
0f102f02c3 Bump version: 0.16.1-beta.2 → 0.16.1-beta.3 2025-02-20 03:38:01 +00:00
Lance Release
a33a0670f6 Bump version: 0.19.1-beta.2 → 0.19.1-beta.3 2025-02-20 03:37:27 +00:00
BubbleCal
14c9ff46d1 feat: support multivector on remote table (#2045)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-02-20 11:34:51 +08:00
Lei Xu
1865f7decf fix: support optional nested pydantic model (#2130)
Closes #2129
2025-02-17 20:43:13 -08:00
BubbleCal
a608621476 test: query with dist range and new rows (#2126)
We found a bug where the flat KNN plan node's stats were not in the same
order as the fields in the schema, which would cause an error when
querying with a distance range and new unindexed rows.

We've fixed this in lance, so this test verifies that it works.

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-02-17 12:57:45 +08:00
BubbleCal
00514999ff feat: upgrade lance to 0.23.1-beta.4 (#2121)
This also upgrades object_store to 0.11.0 and snafu to 0.8.

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-02-16 14:53:26 +08:00
Lance Release
b3b597fef6 Updating package-lock.json 2025-02-13 04:40:10 +00:00
Lance Release
bf17144591 Updating package-lock.json 2025-02-13 04:39:54 +00:00
Lance Release
09e110525f Bump version: 0.16.1-beta.1 → 0.16.1-beta.2 2025-02-13 04:39:38 +00:00
Lance Release
40f0dbb64d Bump version: 0.19.1-beta.1 → 0.19.1-beta.2 2025-02-13 04:39:19 +00:00
BubbleCal
3b19e96ae7 fix: panic when field id doesn't equal to field index (#2116)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-02-13 12:38:35 +08:00
Will Jones
78a17ad54c chore: improve dev instructions for Python (#2088)
Closes #2042
2025-02-12 14:08:52 -08:00
Lance Release
a8e6b491e2 Updating package-lock.json 2025-02-11 22:05:54 +00:00
Lance Release
cea541ca46 Updating package-lock.json 2025-02-11 20:56:22 +00:00
Lance Release
873ffc1042 Updating package-lock.json 2025-02-11 20:56:05 +00:00
Lance Release
83273ad997 Bump version: 0.16.1-beta.0 → 0.16.1-beta.1 2025-02-11 20:55:43 +00:00
Lance Release
d18d63c69d Bump version: 0.19.1-beta.0 → 0.19.1-beta.1 2025-02-11 20:55:23 +00:00
LuQQiu
c3e865e8d0 fix: fix index out of bound in load indices (#2108)
panicked at 'index out of bounds: the len is 24 but the index is
25': Lancedb/rust/lancedb/src/index/vector.rs:26

Calling load_indices() on the old manifest while using the newer manifest
to get column names could result in an index out of bounds if some
columns were removed in the new version. This change reduces the
possibility of an out-of-bounds access but does not fully remove it. It
would be better if lance could directly provide the column name info, so
no extra calls are needed to get the column names, but that would require
modifying the public APIs.
Weston Pace
a7755cb313 docs: standardize node example prints (#2080)
Minor cleanup to help debug future CI failures
2025-02-11 08:26:29 -08:00
BubbleCal
3490f3456f chore: upgrade lance to 0.23.1-beta.2 (#2109) 2025-02-11 23:57:56 +08:00
Lance Release
0a1d0693e1 Updating package-lock.json 2025-02-07 20:06:22 +00:00
Lance Release
fd330b4b4b Updating package-lock.json 2025-02-07 19:28:01 +00:00
Lance Release
d4e9fc08e0 Updating package-lock.json 2025-02-07 19:27:44 +00:00
Lance Release
3626f2f5e1 Bump version: 0.16.0 → 0.16.1-beta.0 2025-02-07 19:27:26 +00:00
Lance Release
e64712cfa5 Bump version: 0.19.0 → 0.19.1-beta.0 2025-02-07 19:27:07 +00:00
Wyatt Alt
3e3118f85c feat: update lance dependency to 0.23.1-beta.1 (#2102) 2025-02-07 10:56:01 -08:00
Lance Release
592598a333 Updating package-lock.json 2025-02-07 18:50:53 +00:00
Lance Release
5ad21341c9 Updating package-lock.json 2025-02-07 17:34:04 +00:00
Lance Release
6e08caa091 Updating package-lock.json 2025-02-07 17:33:48 +00:00
Lance Release
7e259d8b0f Bump version: 0.16.0-beta.0 → 0.16.0 2025-02-07 17:33:13 +00:00
Lance Release
e84f747464 Bump version: 0.15.1-beta.3 → 0.16.0-beta.0 2025-02-07 17:33:08 +00:00
Lance Release
998cd43fe6 Bump version: 0.19.0-beta.0 → 0.19.0 2025-02-07 17:32:26 +00:00
Lance Release
4bc7eebe61 Bump version: 0.18.1-beta.4 → 0.19.0-beta.0 2025-02-07 17:32:26 +00:00
Will Jones
2e3b34e79b feat(node): support inserting and upserting subschemas (#2100)
Fixes #2095
Closes #1832
2025-02-07 09:30:18 -08:00
Will Jones
e7574698eb feat: upgrade Lance to 0.23.0 (#2101)
Upstream changelog:
https://github.com/lancedb/lance/releases/tag/v0.23.0
2025-02-07 07:58:07 -08:00
Will Jones
801a9e5f6f feat(python): streaming larger-than-memory writes (#2094)
Makes our preprocessing pipeline do transforms in streaming fashion, so
users can do larger-than-memory writes.
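For example, a sketch of a streaming write with the sync API (a schema
is required when passing an iterator of batches):

```python
import pyarrow as pa
import lancedb

def batch_iter():
    # Yield batches one at a time so the full dataset never sits in memory.
    for i in range(1_000):
        yield pa.RecordBatch.from_pylist([{"id": i}])

schema = pa.schema([pa.field("id", pa.int64())])
db = lancedb.connect("data/sample-lancedb")
tbl = db.create_table("big", data=batch_iter(), schema=schema)
```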

Closes #2082
2025-02-06 16:37:30 -08:00
Weston Pace
4e5fbe6c99 fix: ensure metadata erased from schema call in table provider (#2099)
This also adds a basic unit test for the table provider
2025-02-06 15:30:20 -08:00
Weston Pace
1a449fa49e refactor: rename drop_db / drop_database to drop_all_tables, expose database from connection (#2098)
If we start supporting external catalogs then "drop database" may be
misleading (and not possible). We should be more clear that this is a
utility method to drop all tables. This is also a nice chance for some
consistency cleanup as it was `drop_db` in rust, `drop_database` in
python, and non-existent in typescript.

This PR also adds a public accessor to get the database trait from a
connection.
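In the Python sync API the rename looks like this (a sketch, assuming
the connection gained `drop_all_tables` as described):

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")
db.create_table("t1", [{"id": 1}])

# New, more explicit name:
db.drop_all_tables()

# Old spelling still works but is deprecated:
# db.drop_database()
```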

BREAKING CHANGE: the `drop_database` / `drop_db` methods are now
deprecated.
2025-02-06 13:22:28 -08:00
Weston Pace
6bf742c759 feat: expose table trait (#2097)
Similar to
c269524b2f
this PR reworks and exposes an internal trait (this time
`TableInternal`) to be a public trait. These two PRs together should
make it possible for others to integrate LanceDB on top of other
catalogs.

This PR also adds a basic `TableProvider` implementation for tables,
although some work still needs to be done here (pushdown not yet
enabled).
2025-02-05 18:13:51 -08:00
Ryan Green
ef3093bc23 feat: drop_index() remote implementation (#2093)
Support drop_index operation in remote table.
2025-02-05 10:06:19 -03:30
Will Jones
16851389ea feat: extra headers parameter in client options (#2091)
Closes #1106

Unfortunately, these need to be set at the connection level. I
investigated whether, if we let users provide a callback, they could use
`AsyncLocalStorage` to access their context. However, it doesn't seem
like NAPI supports this right now. I filed an issue:
https://github.com/napi-rs/napi-rs/issues/2456
2025-02-04 17:26:45 -08:00
Weston Pace
c269524b2f feat!: refactor ConnectionInternal into a Database trait (#2067)
This opens up the door for more custom database implementations than the
two we have today. The biggest change should be invisible:
`ConnectionInternal` has been renamed to `Database`, made public, and
refactored.

However, there are a few breaking changes. `data_storage_version` and
`enable_v2_manifest_paths` have been moved from options on
`create_table` to options for the database, which are now set via
`storage_options`.

Before:
```python
db = connect(uri)
tbl = db.create_table("my_table", data, data_storage_version="legacy", enable_v2_manifest_paths=True)
```

After:
```python
db = connect(uri, storage_options={
  "new_table_enable_v2_manifest_paths": "true",
  "new_table_data_storage_version": "legacy"
})
tbl = db.create_table("my_table", data)
```

BREAKING CHANGE: the data_storage_version and enable_v2_manifest_paths
options have moved from create_table options to storage_options.
BREAKING CHANGE: the use_legacy_format option has been removed;
data_storage_version has replaced it for some time now.
2025-02-04 14:35:14 -08:00
130 changed files with 6660 additions and 2740 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.15.1-beta.3"
current_version = "0.18.0"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

View File

@@ -33,13 +33,14 @@ jobs:
python-version: "3.12"
- name: Install ruff
run: |
pip install ruff==0.8.4
pip install ruff==0.9.9
- name: Format check
run: ruff format --check .
- name: Lint
run: ruff check .
doctest:
name: "Doctest"
type-check:
name: "Type Check"
timeout-minutes: 30
runs-on: "ubuntu-22.04"
defaults:
@@ -54,7 +55,36 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
python-version: "3.12"
- name: Install protobuf compiler
run: |
sudo apt update
sudo apt install -y protobuf-compiler
pip install toml
- name: Install dependencies
run: |
python ../ci/parse_requirements.py pyproject.toml --extras dev,tests,embeddings > requirements.txt
pip install -r requirements.txt
- name: Run pyright
run: pyright
doctest:
name: "Doctest"
timeout-minutes: 30
runs-on: "ubuntu-24.04"
defaults:
run:
shell: bash
working-directory: python
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
lfs: true
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
cache: "pip"
- name: Install protobuf
run: |
@@ -75,8 +105,8 @@ jobs:
timeout-minutes: 30
strategy:
matrix:
python-minor-version: ["9", "11"]
runs-on: "ubuntu-22.04"
python-minor-version: ["9", "12"]
runs-on: "ubuntu-24.04"
defaults:
run:
shell: bash
@@ -127,7 +157,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
python-version: "3.12"
- uses: Swatinem/rust-cache@v2
with:
workspaces: python
@@ -157,7 +187,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
python-version: "3.12"
- uses: Swatinem/rust-cache@v2
with:
workspaces: python
@@ -168,7 +198,7 @@ jobs:
run: rm -rf target/wheels
pydantic1x:
timeout-minutes: 30
runs-on: "ubuntu-22.04"
runs-on: "ubuntu-24.04"
defaults:
run:
shell: bash

View File

@@ -61,7 +61,12 @@ jobs:
CXX: clang++
steps:
- uses: actions/checkout@v4
# Remote cargo.lock to force a fresh build
# Building without a lock file often requires the latest Rust version since downstream
# dependencies may have updated their minimum Rust version.
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: "stable"
# Remove cargo.lock to force a fresh build
- name: Remove Cargo.lock
run: rm -f Cargo.lock
- uses: rui314/setup-mold@v1
@@ -179,15 +184,17 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install dependencies
- name: Install dependencies (part 1)
run: |
set -e
apk add protobuf-dev curl clang lld llvm19 grep npm bash msitools sed
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y
source $HOME/.cargo/env
rustup target add aarch64-pc-windows-msvc
- name: Install rust
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
target: aarch64-pc-windows-msvc
- name: Install dependencies (part 2)
run: |
set -e
mkdir -p sysroot
cd sysroot
sh ../ci/sysroot-aarch64-pc-windows-msvc.sh
@@ -259,7 +266,7 @@ jobs:
- name: Install Rust
run: |
Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
.\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
.\rustup-init.exe -y --default-host aarch64-pc-windows-msvc --default-toolchain 1.83.0
shell: powershell
- name: Add Rust to PATH
run: |

View File

@@ -1,21 +1,27 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/astral-sh/ruff-pre-commit
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.2.2
rev: v0.9.9
hooks:
- id: ruff
- repo: local
hooks:
- id: local-biome-check
name: biome check
entry: npx @biomejs/biome@1.8.3 check --config-path nodejs/biome.json nodejs/
language: system
types: [text]
files: "nodejs/.*"
exclude: nodejs/lancedb/native.d.ts|nodejs/dist/.*|nodejs/examples/.*
- id: ruff
# - repo: https://github.com/RobertCraigie/pyright-python
# rev: v1.1.395
# hooks:
# - id: pyright
# args: ["--project", "python"]
# additional_dependencies: [pyarrow-stubs]
- repo: local
hooks:
- id: local-biome-check
name: biome check
entry: npx @biomejs/biome@1.8.3 check --config-path nodejs/biome.json nodejs/
language: system
types: [text]
files: "nodejs/.*"
exclude: nodejs/lancedb/native.d.ts|nodejs/dist/.*|nodejs/examples/.*

Cargo.lock (generated, 1216 lines changed)

File diff suppressed because it is too large

View File

@@ -21,44 +21,52 @@ categories = ["database-implementations"]
rust-version = "1.78.0"
[workspace.dependencies]
lance = { "version" = "=0.23.0", "features" = [
"dynamodb",
], git = "https://github.com/lancedb/lance.git", tag = "v0.23.0-beta.5" }
lance-io = { version = "=0.23.0", git = "https://github.com/lancedb/lance.git", tag = "v0.23.0-beta.5" }
lance-index = { version = "=0.23.0", git = "https://github.com/lancedb/lance.git", tag = "v0.23.0-beta.5" }
lance-linalg = { version = "=0.23.0", git = "https://github.com/lancedb/lance.git", tag = "v0.23.0-beta.5" }
lance-table = { version = "=0.23.0", git = "https://github.com/lancedb/lance.git", tag = "v0.23.0-beta.5" }
lance-testing = { version = "=0.23.0", git = "https://github.com/lancedb/lance.git", tag = "v0.23.0-beta.5" }
lance-datafusion = { version = "=0.23.0", git = "https://github.com/lancedb/lance.git", tag = "v0.23.0-beta.5" }
lance-encoding = { version = "=0.23.0", git = "https://github.com/lancedb/lance.git", tag = "v0.23.0-beta.5" }
lance = { "version" = "=0.24.1", "features" = ["dynamodb"] }
lance-io = { version = "=0.24.1" }
lance-index = { version = "=0.24.1" }
lance-linalg = { version = "=0.24.1" }
lance-table = { version = "=0.24.1" }
lance-testing = { version = "=0.24.1" }
lance-datafusion = { version = "=0.24.1" }
lance-encoding = { version = "=0.24.1" }
# Note that this one does not include pyarrow
arrow = { version = "53.2", optional = false }
arrow-array = "53.2"
arrow-data = "53.2"
arrow-ipc = "53.2"
arrow-ord = "53.2"
arrow-schema = "53.2"
arrow-arith = "53.2"
arrow-cast = "53.2"
arrow = { version = "54.1", optional = false }
arrow-array = "54.1"
arrow-data = "54.1"
arrow-ipc = "54.1"
arrow-ord = "54.1"
arrow-schema = "54.1"
arrow-arith = "54.1"
arrow-cast = "54.1"
async-trait = "0"
chrono = "0.4.35"
datafusion-common = "44.0"
datafusion-physical-plan = "44.0"
env_logger = "0.10"
datafusion = { version = "45.0", default-features = false }
datafusion-catalog = "45.0"
datafusion-common = { version = "45.0", default-features = false }
datafusion-execution = "45.0"
datafusion-expr = "45.0"
datafusion-physical-plan = "45.0"
env_logger = "0.11"
half = { "version" = "=2.4.1", default-features = false, features = [
"num-traits",
] }
futures = "0"
log = "0.4"
moka = { version = "0.11", features = ["future"] }
object_store = "0.10.2"
moka = { version = "0.12", features = ["future"] }
object_store = "0.11.0"
pin-project = "1.0.7"
snafu = "0.7.4"
snafu = "0.8"
url = "2"
num-traits = "0.2"
rand = "0.8"
regex = "1.10"
lazy_static = "1"
semver = "1.0.25"
# Temporary pins to work around downstream issues
# https://github.com/apache/arrow-rs/commit/2fddf85afcd20110ce783ed5b4cdeb82293da30b
chrono = "=0.4.39"
# https://github.com/RustCrypto/formats/issues/1684
base64ct = "=1.6.0"
# Workaround for: https://github.com/eira-fransham/crunchy/issues/13
crunchy = "=0.2.2"

ci/parse_requirements.py (new file, 41 lines)
View File

@@ -0,0 +1,41 @@
import argparse

import toml


def parse_dependencies(pyproject_path, extras=None):
    with open(pyproject_path, "r") as file:
        pyproject = toml.load(file)

    # Print the core project dependencies, one per line.
    dependencies = pyproject.get("project", {}).get("dependencies", [])
    for dependency in dependencies:
        print(dependency)

    # Print the dependencies of each requested extra as well.
    optional_dependencies = pyproject.get("project", {}).get(
        "optional-dependencies", {}
    )
    if extras:
        for extra in extras.split(","):
            for dep in optional_dependencies.get(extra, []):
                print(dep)


def main():
    parser = argparse.ArgumentParser(
        description="Generate requirements.txt from pyproject.toml"
    )
    parser.add_argument("path", type=str, help="Path to pyproject.toml")
    parser.add_argument(
        "--extras",
        type=str,
        help="Comma-separated list of extras to include",
        default="",
    )
    args = parser.parse_args()

    parse_dependencies(args.path, args.extras)


if __name__ == "__main__":
    main()

View File

@@ -4,6 +4,9 @@ repo_url: https://github.com/lancedb/lancedb
edit_uri: https://github.com/lancedb/lancedb/tree/main/docs/src
repo_name: lancedb/lancedb
docs_dir: src
watch:
- src
- ../python/python
theme:
name: "material"
@@ -63,6 +66,7 @@ plugins:
- https://arrow.apache.org/docs/objects.inv
- https://pandas.pydata.org/docs/objects.inv
- https://lancedb.github.io/lance/objects.inv
- https://docs.pydantic.dev/latest/objects.inv
- mkdocs-jupyter
- render_swagger:
allow_arbitrary_locations: true
@@ -105,8 +109,8 @@ nav:
- 📚 Concepts:
- Vector search: concepts/vector_search.md
- Indexing:
- IVFPQ: concepts/index_ivfpq.md
- HNSW: concepts/index_hnsw.md
- IVFPQ: concepts/index_ivfpq.md
- HNSW: concepts/index_hnsw.md
- Storage: concepts/storage.md
- Data management: concepts/data_management.md
- 🔨 Guides:
@@ -130,8 +134,8 @@ nav:
- Adaptive RAG: rag/adaptive_rag.md
- SFR RAG: rag/sfr_rag.md
- Advanced Techniques:
- HyDE: rag/advanced_techniques/hyde.md
- FLARE: rag/advanced_techniques/flare.md
- HyDE: rag/advanced_techniques/hyde.md
- FLARE: rag/advanced_techniques/flare.md
- Reranking:
- Quickstart: reranking/index.md
- Cohere Reranker: reranking/cohere.md
@@ -146,7 +150,7 @@ nav:
- Building Custom Rerankers: reranking/custom_reranker.md
- Example: notebooks/lancedb_reranking.ipynb
- Filtering: sql.md
- Versioning & Reproducibility:
- Versioning & Reproducibility:
- sync API: notebooks/reproducibility.ipynb
- async API: notebooks/reproducibility_async.ipynb
- Configuring Storage: guides/storage.md
@@ -178,6 +182,7 @@ nav:
- Imagebind embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md
- Jina Embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md
- User-defined embedding functions: embeddings/custom_embedding_function.md
- Variables and secrets: embeddings/variables_and_secrets.md
- "Example: Multi-lingual semantic search": notebooks/multi_lingual_example.ipynb
- "Example: MultiModal CLIP Embeddings": notebooks/DisappearingEmbeddingFunction.ipynb
- 🔌 Integrations:
@@ -240,8 +245,8 @@ nav:
- Concepts:
- Vector search: concepts/vector_search.md
- Indexing:
- IVFPQ: concepts/index_ivfpq.md
- HNSW: concepts/index_hnsw.md
- IVFPQ: concepts/index_ivfpq.md
- HNSW: concepts/index_hnsw.md
- Storage: concepts/storage.md
- Data management: concepts/data_management.md
- Guides:
@@ -265,8 +270,8 @@ nav:
- Adaptive RAG: rag/adaptive_rag.md
- SFR RAG: rag/sfr_rag.md
- Advanced Techniques:
- HyDE: rag/advanced_techniques/hyde.md
- FLARE: rag/advanced_techniques/flare.md
- HyDE: rag/advanced_techniques/hyde.md
- FLARE: rag/advanced_techniques/flare.md
- Reranking:
- Quickstart: reranking/index.md
- Cohere Reranker: reranking/cohere.md
@@ -280,7 +285,7 @@ nav:
- Building Custom Rerankers: reranking/custom_reranker.md
- Example: notebooks/lancedb_reranking.ipynb
- Filtering: sql.md
- Versioning & Reproducibility:
- Versioning & Reproducibility:
- sync API: notebooks/reproducibility.ipynb
- async API: notebooks/reproducibility_async.ipynb
- Configuring Storage: guides/storage.md
@@ -311,6 +316,7 @@ nav:
- Imagebind embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md
- Jina Embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md
- User-defined embedding functions: embeddings/custom_embedding_function.md
- Variables and secrets: embeddings/variables_and_secrets.md
- "Example: Multi-lingual semantic search": notebooks/multi_lingual_example.ipynb
- "Example: MultiModal CLIP Embeddings": notebooks/DisappearingEmbeddingFunction.ipynb
- Integrations:
@@ -349,8 +355,8 @@ nav:
- 🦀 Rust:
- Overview: examples/examples_rust.md
- Studies:
- studies/overview.md
- ↗Improve retrievers with hybrid search and reranking: https://blog.lancedb.com/hybrid-search-and-reranking-report/
- studies/overview.md
- ↗Improve retrievers with hybrid search and reranking: https://blog.lancedb.com/hybrid-search-and-reranking-report/
- API reference:
- Overview: api_reference.md
- Python: python/python.md
@@ -371,6 +377,7 @@ extra_css:
extra_javascript:
- "extra_js/init_ask_ai_widget.js"
- "extra_js/reo.js"
extra:
analytics:

View File

@@ -38,6 +38,13 @@ components:
required: true
schema:
type: string
index_name:
name: index_name
in: path
description: name of the index
required: true
schema:
type: string
responses:
invalid_request:
description: Invalid request
@@ -485,3 +492,22 @@ paths:
$ref: "#/components/responses/unauthorized"
"404":
$ref: "#/components/responses/not_found"
/v1/table/{name}/index/{index_name}/drop/:
post:
description: Drop an index from the table
tags:
- Tables
summary: Drop an index from the table
operationId: dropIndex
parameters:
- $ref: "#/components/parameters/table_name"
- $ref: "#/components/parameters/index_name"
responses:
"200":
description: Index successfully dropped
"400":
$ref: "#/components/responses/invalid_request"
"401":
$ref: "#/components/responses/unauthorized"
"404":
$ref: "#/components/responses/not_found"

View File

@@ -3,6 +3,7 @@ import * as vectordb from "vectordb";
// --8<-- [end:import]
(async () => {
console.log("ann_indexes.ts: start");
// --8<-- [start:ingest]
const db = await vectordb.connect("data/sample-lancedb");
@@ -49,5 +50,5 @@ import * as vectordb from "vectordb";
.execute();
// --8<-- [end:search3]
console.log("Ann indexes: done");
console.log("ann_indexes.ts: done");
})();

View File

@@ -107,7 +107,6 @@ const example = async () => {
// --8<-- [start:search]
const query = await tbl.search([100, 100]).limit(2).execute();
// --8<-- [end:search]
console.log(query);
// --8<-- [start:delete]
await tbl.delete('item = "fizz"');
@@ -119,8 +118,9 @@ const example = async () => {
};
async function main() {
console.log("basic_legacy.ts: start");
await example();
console.log("Basic example: done");
console.log("basic_legacy.ts: done");
}
main();

View File

@@ -55,6 +55,14 @@ Let's implement `SentenceTransformerEmbeddings` class. All you need to do is imp
This is a stripped down version of our implementation of `SentenceTransformerEmbeddings` that removes certain optimizations and default settings.
!!! danger "Use sensitive keys to prevent leaking secrets"
To prevent leaking secrets, such as API keys, you should add any sensitive
parameters of an embedding function to the output of the
[sensitive_keys()][lancedb.embeddings.base.EmbeddingFunction.sensitive_keys] /
[getSensitiveKeys()](../../js/namespaces/embedding/classes/EmbeddingFunction/#getsensitivekeys)
method. This prevents users from accidentally instantiating the embedding
function with hard-coded secrets.
Now you can use this embedding function to create your table schema, and that's it! You can then ingest data and run queries without manually vectorizing the inputs.
=== "Python"

View File

@@ -0,0 +1,53 @@
# Variables and Secrets

Most embedding configuration options are saved in the table's metadata. However,
this isn't always appropriate. For example, API keys should never be stored in the
metadata. Additionally, other configuration options might be best set at runtime,
such as the `device` configuration that controls whether to use GPU or CPU for
inference. If you hardcoded this to GPU, you wouldn't be able to run the code on
a server without one.

To handle these cases, you can set variables on the embedding registry and
reference them in the embedding configuration. These variables will be available
during the runtime of your program, but not saved in the table's metadata. When
the table is loaded from a different process, the variables must be set again.

To set a variable, use the `set_var()` / `setVar()` method on the embedding registry.
To reference a variable, use the syntax `$var:VARIABLE_NAME`. If there is a default
value, you can use the syntax `$var:VARIABLE_NAME:DEFAULT_VALUE`.

## Using variables to set secrets

Sensitive configuration, such as API keys, must either be set as environment
variables or using variables on the embedding registry. If you pass in a hardcoded
value, LanceDB will raise an error. Instead, if you want to set an API key via
configuration, use a variable:

=== "Python"

    ```python
    --8<-- "python/python/tests/docs/test_embeddings_optional.py:register_secret"
    ```

=== "Typescript"

    ```typescript
    --8<-- "nodejs/examples/embedding.test.ts:register_secret"
    ```

## Using variables to set the device parameter

Many embedding functions that run locally have a `device` parameter that controls
whether to use GPU or CPU for inference. Because not all computers have a GPU,
it's helpful to be able to set the `device` parameter at runtime, rather than
have it hard coded in the embedding configuration. To make it work even if the
variable isn't set, you could provide a default value of `cpu` in the embedding
configuration.

Some embedding libraries even have a method to detect which devices are available,
which could be used to dynamically set the device at runtime. For example, in Python
you can check if a CUDA GPU is available using `torch.cuda.is_available()`.

```python
--8<-- "python/python/tests/docs/test_embeddings_optional.py:register_device"
```

docs/src/extra_js/reo.js (new file, 1 line)
View File

@@ -0,0 +1 @@
!function(){var e,t,n;e="9627b71b382d201",t=function(){Reo.init({clientID:"9627b71b382d201"})},(n=document.createElement("script")).src="https://static.reo.dev/"+e+"/reo.js",n.defer=!0,n.onload=t,document.head.appendChild(n)}();

View File

@@ -131,6 +131,20 @@ Return a brief description of the connection
***
### dropAllTables()
```ts
abstract dropAllTables(): Promise<void>
```
Drop all tables in the database.
#### Returns
`Promise`&lt;`void`&gt;
***
### dropTable()
```ts

View File

@@ -22,8 +22,6 @@ when creating a table or adding data to it)
This function converts an array of Record<String, any> (row-major JS objects)
to an Arrow Table (a columnar structure)
Note that it currently does not support nulls.
If a schema is provided then it will be used to determine the resulting array
types. Fields will also be reordered to fit the order defined by the schema.
@@ -31,6 +29,9 @@ If a schema is not provided then the types will be inferred and the field order
will be controlled by the order of properties in the first record. If a type
is inferred it will always be nullable.
If not all fields are found in the data, then a subset of the schema will be
returned.
If the input is empty then a schema must be provided to create an empty table.
When a schema is not specified then data types will be inferred. The inference
@@ -38,6 +39,7 @@ rules are as follows:
- boolean => Bool
- number => Float64
- bigint => Int64
- String => Utf8
- Buffer => Binary
- Record<String, any> => Struct

View File

@@ -8,6 +8,14 @@
## Properties
### extraHeaders?
```ts
optional extraHeaders: Record<string, string>;
```
***
### retryConfig?
```ts

View File

@@ -8,7 +8,7 @@
## Properties
### dataStorageVersion?
### ~~dataStorageVersion?~~
```ts
optional dataStorageVersion: string;
@@ -19,6 +19,10 @@ The version of the data storage format to use.
The default is `stable`.
Set to "legacy" to use the old format.
#### Deprecated
Pass `new_table_data_storage_version` to storageOptions instead.
***
### embeddingFunction?
@@ -29,7 +33,7 @@ optional embeddingFunction: EmbeddingFunctionConfig;
***
### enableV2ManifestPaths?
### ~~enableV2ManifestPaths?~~
```ts
optional enableV2ManifestPaths: boolean;
@@ -41,6 +45,10 @@ turning this on will make the dataset unreadable for older versions
of LanceDB (prior to 0.10.0). To migrate an existing dataset, instead
use the LocalTable#migrateManifestPathsV2 method.
#### Deprecated
Pass `new_table_enable_v2_manifest_paths` to storageOptions instead.
***
### existOk
@@ -90,17 +98,3 @@ Options already set on the connection will be inherited by the table,
but can be overridden here.
The available options are described at https://lancedb.github.io/lancedb/guides/storage/
***
### useLegacyFormat?
```ts
optional useLegacyFormat: boolean;
```
If true then data files will be written with the legacy format
The default is false.
Deprecated. Use data storage version instead.

View File

@@ -8,6 +8,23 @@
An embedding function that automatically creates vector representation for a given column.
It's important subclasses pass the **original** options to the super constructor
and then pass those options to `resolveVariables` to resolve any variables before
using them.
## Example
```ts
class MyEmbeddingFunction extends EmbeddingFunction {
  constructor(optionsRaw: {model: string, timeout: number}) {
    super(optionsRaw);
    const options = this.resolveVariables(optionsRaw);
    this.model = options.model;
    this.timeout = options.timeout;
  }
}
```
## Extended by
- [`TextEmbeddingFunction`](TextEmbeddingFunction.md)
@@ -82,12 +99,33 @@ The datatype of the embeddings
***
### getSensitiveKeys()
```ts
protected getSensitiveKeys(): string[]
```
Provide a list of keys in the function options that should be treated as
sensitive. If users pass raw values for these keys, they will be rejected.
#### Returns
`string`[]
***
### init()?
```ts
optional init(): Promise<void>
```
Optionally load any resources needed for the embedding function.
This method is called after the embedding function has been initialized
but before any embeddings are computed. It is useful for loading local models
or other resources that are needed for the embedding function to work.
#### Returns
`Promise`&lt;`void`&gt;
@@ -108,6 +146,24 @@ The number of dimensions of the embeddings
***
### resolveVariables()
```ts
protected resolveVariables(config): Partial<M>
```
Apply variables to the config.
#### Parameters
* **config**: `Partial`&lt;`M`&gt;
#### Returns
`Partial`&lt;`M`&gt;
***
### sourceField()
```ts
@@ -134,37 +190,15 @@ sourceField is used in combination with `LanceSchema` to provide a declarative d
### toJSON()
```ts
abstract toJSON(): Partial<M>
toJSON(): Record<string, any>
```
Convert the embedding function to a JSON object
It is used to serialize the embedding function to the schema
It's important that any object returned by this method contains all the necessary
information to recreate the embedding function
It should return the same object that was passed to the constructor
If it does not, the embedding function will not be able to be recreated, or could be recreated incorrectly
Get the original arguments to the constructor, to serialize them so they
can be used to recreate the embedding function later.
#### Returns
`Partial`&lt;`M`&gt;
#### Example
```ts
class MyEmbeddingFunction extends EmbeddingFunction {
constructor(options: {model: string, timeout: number}) {
super();
this.model = options.model;
this.timeout = options.timeout;
}
toJSON() {
return {
model: this.model,
timeout: this.timeout,
};
}
```
`Record`&lt;`string`, `any`&gt;
***

View File

@@ -80,6 +80,28 @@ getTableMetadata(functions): Map<string, string>
***
### getVar()
```ts
getVar(name): undefined | string
```
Get a variable.
#### Parameters
* **name**: `string`
#### Returns
`undefined` \| `string`
#### See
[setVar](EmbeddingFunctionRegistry.md#setvar)
***
### length()
```ts
@@ -145,3 +167,31 @@ reset the registry to the initial state
#### Returns
`void`
***
### setVar()
```ts
setVar(name, value): void
```
Set a variable. These can be accessed in the embedding function
configuration using the syntax `$var:variable_name`. If they are not
set, an error will be thrown letting you know which key is unset. If you
want to supply a default value, you can add an additional part in the
configuration like so: `$var:variable_name:default_value`. Default values
can be used for runtime configurations that are not sensitive, such as
whether to use a GPU for inference.
The name must not contain colons. The default value can contain colons.
#### Parameters
* **name**: `string`
* **value**: `string`
#### Returns
`void`

View File

@@ -114,12 +114,37 @@ abstract generateEmbeddings(texts, ...args): Promise<number[][] | Float32Array[]
***
### getSensitiveKeys()
```ts
protected getSensitiveKeys(): string[]
```
Provide a list of keys in the function options that should be treated as
sensitive. If users pass raw values for these keys, they will be rejected.
#### Returns
`string`[]
#### Inherited from
[`EmbeddingFunction`](EmbeddingFunction.md).[`getSensitiveKeys`](EmbeddingFunction.md#getsensitivekeys)
***
### init()?
```ts
optional init(): Promise<void>
```
Optionally load any resources needed for the embedding function.
This method is called after the embedding function has been initialized
but before any embeddings are computed. It is useful for loading local models
or other resources that are needed for the embedding function to work.
#### Returns
`Promise`&lt;`void`&gt;
@@ -148,6 +173,28 @@ The number of dimensions of the embeddings
***
### resolveVariables()
```ts
protected resolveVariables(config): Partial<M>
```
Apply variables to the config.
#### Parameters
* **config**: `Partial`&lt;`M`&gt;
#### Returns
`Partial`&lt;`M`&gt;
#### Inherited from
[`EmbeddingFunction`](EmbeddingFunction.md).[`resolveVariables`](EmbeddingFunction.md#resolvevariables)
***
### sourceField()
```ts
@@ -173,37 +220,15 @@ sourceField is used in combination with `LanceSchema` to provide a declarative d
### toJSON()
```ts
abstract toJSON(): Partial<M>
toJSON(): Record<string, any>
```
Convert the embedding function to a JSON object
It is used to serialize the embedding function to the schema
It's important that any object returned by this method contains all the necessary
information to recreate the embedding function
It should return the same object that was passed to the constructor
If it does not, the embedding function will not be able to be recreated, or could be recreated incorrectly
Get the original arguments to the constructor, to serialize them so they
can be used to recreate the embedding function later.
#### Returns
`Partial`&lt;`M`&gt;
#### Example
```ts
class MyEmbeddingFunction extends EmbeddingFunction {
constructor(options: {model: string, timeout: number}) {
super();
this.model = options.model;
this.timeout = options.timeout;
}
toJSON() {
return {
model: this.model,
timeout: this.timeout,
};
}
```
`Record`&lt;`string`, `any`&gt;
#### Inherited from

View File

@@ -9,23 +9,50 @@ LanceDB supports [Polars](https://github.com/pola-rs/polars), a blazingly fast D
First, we connect to a LanceDB database.
=== "Sync API"
```py
--8<-- "python/python/tests/docs/test_python.py:import-lancedb"
--8<-- "python/python/tests/docs/test_python.py:connect_to_lancedb"
```
=== "Async API"
```py
--8<-- "python/python/tests/docs/test_python.py:import-lancedb"
--8<-- "python/python/tests/docs/test_python.py:connect_to_lancedb_async"
```
```py
--8<-- "python/python/tests/docs/test_python.py:import-lancedb"
--8<-- "python/python/tests/docs/test_python.py:connect_to_lancedb"
```
We can load a Polars `DataFrame` to LanceDB directly.
```py
--8<-- "python/python/tests/docs/test_python.py:import-polars"
--8<-- "python/python/tests/docs/test_python.py:create_table_polars"
```
=== "Sync API"
```py
--8<-- "python/python/tests/docs/test_python.py:import-polars"
--8<-- "python/python/tests/docs/test_python.py:create_table_polars"
```
=== "Async API"
```py
--8<-- "python/python/tests/docs/test_python.py:import-polars"
--8<-- "python/python/tests/docs/test_python.py:create_table_polars_async"
```
We can now perform similarity search via the LanceDB Python API.
```py
--8<-- "python/python/tests/docs/test_python.py:vector_search_polars"
```
=== "Sync API"
```py
--8<-- "python/python/tests/docs/test_python.py:vector_search_polars"
```
=== "Async API"
```py
--8<-- "python/python/tests/docs/test_python.py:vector_search_polars_async"
```
In addition to the selected columns, LanceDB also returns a vector
and also the `_distance` column which is the distance between the query
@@ -112,4 +139,3 @@ The reason it's beneficial to not convert the LanceDB Table
to a DataFrame is because the table can potentially be way larger
than memory, and Polars LazyFrames allow us to work with such
larger-than-memory datasets by not loading it into memory all at once.

View File

@@ -2,14 +2,19 @@
[Pydantic](https://docs.pydantic.dev/latest/) is a data validation library in Python.
LanceDB integrates with Pydantic for schema inference, data ingestion, and query result casting.
Using [LanceModel][lancedb.pydantic.LanceModel], users can seamlessly
integrate Pydantic with the rest of the LanceDB APIs.
## Schema
```python
LanceDB supports creating an Apache Arrow Schema from a
[Pydantic BaseModel](https://docs.pydantic.dev/latest/api/main/#pydantic.main.BaseModel)
via the [pydantic_to_schema()](python.md#lancedb.pydantic.pydantic_to_schema) method.
--8<-- "python/python/tests/docs/test_pydantic_integration.py:imports"
--8<-- "python/python/tests/docs/test_pydantic_integration.py:base_model"
--8<-- "python/python/tests/docs/test_pydantic_integration.py:set_url"
--8<-- "python/python/tests/docs/test_pydantic_integration.py:base_example"
```
::: lancedb.pydantic.pydantic_to_schema
## Vector Field
@@ -34,3 +39,9 @@ Current supported type conversions:
| `list` | `pyarrow.List` |
| `BaseModel` | `pyarrow.Struct` |
| `Vector(n)` | `pyarrow.FixedSizeList(float32, n)` |
LanceDB supports creating an Apache Arrow Schema from a
[Pydantic BaseModel][pydantic.BaseModel]
via the [pydantic_to_schema()](python.md#lancedb.pydantic.pydantic_to_schema) method.
::: lancedb.pydantic.pydantic_to_schema

View File

@@ -20,6 +20,7 @@ async function setup() {
}
async () => {
console.log("search_legacy.ts: start");
await setup();
// --8<-- [start:search1]
@@ -37,5 +38,5 @@ async () => {
.execute();
// --8<-- [end:search2]
console.log("search: done");
console.log("search_legacy.ts: done");
};

View File

@@ -1,6 +1,7 @@
import * as vectordb from "vectordb";
(async () => {
console.log("sql_legacy.ts: start");
const db = await vectordb.connect("data/sample-lancedb");
let data = [];
@@ -34,5 +35,5 @@ import * as vectordb from "vectordb";
await tbl.filter("id = 10").limit(10).execute();
// --8<-- [end:sql_search]
console.log("SQL search: done");
console.log("sql_legacy.ts: done");
})();

View File

@@ -15,6 +15,7 @@ excluded_globs = [
"../src/python/duckdb.md",
"../src/python/pandas_and_pyarrow.md",
"../src/python/polars_arrow.md",
"../src/python/pydantic.md",
"../src/embeddings/*.md",
"../src/concepts/*.md",
"../src/ann_indexes.md",

View File

@@ -8,7 +8,7 @@
<parent>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.15.1-beta.3</version>
<version>0.18.0-final.0</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@@ -6,7 +6,7 @@
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.15.1-beta.3</version>
<version>0.18.0-final.0</version>
<packaging>pom</packaging>
<name>LanceDB Parent</name>

node/package-lock.json (generated, 68 lines changed)
View File

@@ -1,12 +1,12 @@
{
"name": "vectordb",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "vectordb",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"cpu": [
"x64",
"arm64"
@@ -52,14 +52,14 @@
"uuid": "^9.0.0"
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.15.1-beta.3",
"@lancedb/vectordb-darwin-x64": "0.15.1-beta.3",
"@lancedb/vectordb-linux-arm64-gnu": "0.15.1-beta.3",
"@lancedb/vectordb-linux-arm64-musl": "0.15.1-beta.3",
"@lancedb/vectordb-linux-x64-gnu": "0.15.1-beta.3",
"@lancedb/vectordb-linux-x64-musl": "0.15.1-beta.3",
"@lancedb/vectordb-win32-arm64-msvc": "0.15.1-beta.3",
"@lancedb/vectordb-win32-x64-msvc": "0.15.1-beta.3"
"@lancedb/vectordb-darwin-arm64": "0.18.0",
"@lancedb/vectordb-darwin-x64": "0.18.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.18.0",
"@lancedb/vectordb-linux-arm64-musl": "0.18.0",
"@lancedb/vectordb-linux-x64-gnu": "0.18.0",
"@lancedb/vectordb-linux-x64-musl": "0.18.0",
"@lancedb/vectordb-win32-arm64-msvc": "0.18.0",
"@lancedb/vectordb-win32-x64-msvc": "0.18.0"
},
"peerDependencies": {
"@apache-arrow/ts": "^14.0.2",
@@ -330,9 +330,9 @@
}
},
"node_modules/@lancedb/vectordb-darwin-arm64": {
"version": "0.15.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.15.1-beta.3.tgz",
"integrity": "sha512-2GinbODdSsUc+zJQ4BFZPsdraPWHJpDpGf7CsZIqfokwxIRnzVzFfQy+SZhmNhKzFkmtW21yWw6wrJ4FgS7Qtw==",
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.18.0.tgz",
"integrity": "sha512-ormNCmny1j64aSZRrZeUQ1Zs8cOFKrW14NgTmW3AehDuru+Ep+8AriHA5Pmyi6raBOZfNzDSiZs/LTzzyVaa7g==",
"cpu": [
"arm64"
],
@@ -343,9 +343,9 @@
]
},
"node_modules/@lancedb/vectordb-darwin-x64": {
"version": "0.15.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.15.1-beta.3.tgz",
"integrity": "sha512-nRp5eN6yvx5kvfDEQuh3EHCmwjVNCIm7dXoV6BasepFkOoaHHmjKSIUFW7HjtJOfdFbb+r8UjBJx4cN6Jh2iFg==",
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.18.0.tgz",
"integrity": "sha512-S4skQ1RXXQJciq40s84Kyy7v3YC+nao8pX4xUyxDcKRx+90Qg9eH+tehs6XLN1IjrQT/9CWKaE5wxZmv6Oys4g==",
"cpu": [
"x64"
],
@@ -356,9 +356,9 @@
]
},
"node_modules/@lancedb/vectordb-linux-arm64-gnu": {
"version": "0.15.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.15.1-beta.3.tgz",
"integrity": "sha512-JOyD7Nt3RSfHGWNQjHbZMHsIw1cVWPySxbtDmDqk5QH5IfgDNZLiz/sNbROuQkNvc5SsC6wUmhBUwWBETzW7/g==",
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.18.0.tgz",
"integrity": "sha512-1txr4tasVdxy321/4Fw8GJPjzrf84F02L9ffN8JebHmmR0S8uk2MKf2WsyLaSVRPd4YHIvvf3qmG0RGaUsb2sw==",
"cpu": [
"arm64"
],
@@ -369,9 +369,9 @@
]
},
"node_modules/@lancedb/vectordb-linux-arm64-musl": {
"version": "0.15.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-musl/-/vectordb-linux-arm64-musl-0.15.1-beta.3.tgz",
"integrity": "sha512-4jTHl1i/4e7wP2U7RMjHr87/gsGJ9tfRJ4ljQIfV+LkA7ROMd/TA5XSnvPesQCDjPNRI4wAyb/BmK18V96VqBg==",
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-musl/-/vectordb-linux-arm64-musl-0.18.0.tgz",
"integrity": "sha512-8xS1xaoJeFDx6WmDBcfueWvIbdNX/ptQXfoC7hYICwNHizjlyt4O3Nxz8uG9URMF1y9saUYUditIHLzLVZc76g==",
"cpu": [
"arm64"
],
@@ -382,9 +382,9 @@
]
},
"node_modules/@lancedb/vectordb-linux-x64-gnu": {
"version": "0.15.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.15.1-beta.3.tgz",
"integrity": "sha512-odrNqB/bGL+sweZi6ed9sKft/H5/bca/tDVG/Y39xCJ6swPWxXQK2Zpn7EjqbccI2p2zkrhKcOUBO/bEkOqQng==",
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.18.0.tgz",
"integrity": "sha512-8XUc2UnEV3awv0DGJS5gRA7yTkicX6oPN7GudXXxycCKL33FJ2ah7hkeDia9Bhk9MmvTonvsEDvUSqnglcpqfA==",
"cpu": [
"x64"
],
@@ -395,9 +395,9 @@
]
},
"node_modules/@lancedb/vectordb-linux-x64-musl": {
"version": "0.15.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-musl/-/vectordb-linux-x64-musl-0.15.1-beta.3.tgz",
"integrity": "sha512-Zml4KgQWzkkMBHZiD30Gs3N56BT5xO01efwO/Q2qB7JKw5Vy9pa6SgFf9woBvKFQRY73fiKqafy+BmGHTgozNg==",
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-musl/-/vectordb-linux-x64-musl-0.18.0.tgz",
"integrity": "sha512-LV7TuWgLcL82Wdq+EH2Xs3+apqeLohwYLlVIauVAwKEHvdwyNxTOW9TaNLvHXcbylIh7agl2xXvASCNhYZAyzA==",
"cpu": [
"x64"
],
@@ -408,9 +408,9 @@
]
},
"node_modules/@lancedb/vectordb-win32-arm64-msvc": {
"version": "0.15.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-arm64-msvc/-/vectordb-win32-arm64-msvc-0.15.1-beta.3.tgz",
"integrity": "sha512-3BWkK+8JP+js/KoTad7bm26NTR5pq2tvXJkrFB0eaFfsIuUXebS+LIBF22f39He2WMpq3YojT0bMnYxp8qvRkQ==",
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-arm64-msvc/-/vectordb-win32-arm64-msvc-0.18.0.tgz",
"integrity": "sha512-kxdCnKfvnuDKoKZRUBbreMBpimHb+k9/pFR48GN6JSrIuzUDx5G1VjHKBmaFhbveZCOBjjtYlg/8ohnWQHZfeA==",
"cpu": [
"arm64"
],
@@ -421,9 +421,9 @@
]
},
"node_modules/@lancedb/vectordb-win32-x64-msvc": {
"version": "0.15.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.15.1-beta.3.tgz",
"integrity": "sha512-jr8SEisYAX7pQHIbxIDJPkANmxWh5Yohm8ELbMgu76IvLI7bsS7sB9ID+kcj1SiS5m4V6OG2BO1FrEYbPLZ6Dg==",
"version": "0.18.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.18.0.tgz",
"integrity": "sha512-uAE80q50cAp4gHoGvclxJqZGqj3/9oN9kz8iXgNIxiPngqnN01kVyaj4ulm4Qk/nauWUhHJ3tjTh/+CpkhSc2Q==",
"cpu": [
"x64"
],


@@ -1,6 +1,6 @@
{
"name": "vectordb",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"description": " Serverless, low-latency vector database for AI applications",
"private": false,
"main": "dist/index.js",
@@ -92,13 +92,13 @@
}
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-x64": "0.15.1-beta.3",
"@lancedb/vectordb-darwin-arm64": "0.15.1-beta.3",
"@lancedb/vectordb-linux-x64-gnu": "0.15.1-beta.3",
"@lancedb/vectordb-linux-arm64-gnu": "0.15.1-beta.3",
"@lancedb/vectordb-linux-x64-musl": "0.15.1-beta.3",
"@lancedb/vectordb-linux-arm64-musl": "0.15.1-beta.3",
"@lancedb/vectordb-win32-x64-msvc": "0.15.1-beta.3",
"@lancedb/vectordb-win32-arm64-msvc": "0.15.1-beta.3"
"@lancedb/vectordb-darwin-x64": "0.18.0",
"@lancedb/vectordb-darwin-arm64": "0.18.0",
"@lancedb/vectordb-linux-x64-gnu": "0.18.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.18.0",
"@lancedb/vectordb-linux-x64-musl": "0.18.0",
"@lancedb/vectordb-linux-arm64-musl": "0.18.0",
"@lancedb/vectordb-win32-x64-msvc": "0.18.0",
"@lancedb/vectordb-win32-arm64-msvc": "0.18.0"
}
}


@@ -47,7 +47,8 @@ const {
tableSchema,
tableAddColumns,
tableAlterColumns,
tableDropColumns
tableDropColumns,
tableDropIndex
// eslint-disable-next-line @typescript-eslint/no-var-requires
} = require("../native.js");
@@ -604,6 +605,13 @@ export interface Table<T = number[]> {
*/
dropColumns(columnNames: string[]): Promise<void>
/**
* Drop an index from the table
*
* @param indexName The name of the index to drop
*/
dropIndex(indexName: string): Promise<void>
/**
* Instrument the behavior of this Table with middleware.
*
@@ -1206,6 +1214,10 @@ export class LocalTable<T = number[]> implements Table<T> {
return tableDropColumns.call(this._tbl, columnNames);
}
async dropIndex(indexName: string): Promise<void> {
return tableDropIndex.call(this._tbl, indexName);
}
withMiddleware(middleware: HttpMiddleware): Table<T> {
return this;
}
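For context, a minimal usage sketch of the new `dropIndex` method (the URI, table name, and index name `vector_idx` are illustrative assumptions, mirroring the disabled test further down):

```ts
import * as lancedb from "vectordb";

async function dropVectorIndex(uri: string): Promise<void> {
  const con = await lancedb.connect(uri);
  const table = await con.openTable("vectors"); // assumed table name
  // Lance derives a vector index name from its column, e.g. "vector_idx".
  await table.dropIndex("vector_idx");
}
```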


@@ -471,6 +471,18 @@ export class RemoteTable<T = number[]> implements Table<T> {
)
}
}
async dropIndex (index_name: string): Promise<void> {
const res = await this._client.post(
`/v1/table/${encodeURIComponent(this._name)}/index/${encodeURIComponent(index_name)}/drop/`
)
if (res.status !== 200) {
throw new Error(
`Server Error, status: ${res.status}, ` +
// eslint-disable-next-line @typescript-eslint/restrict-template-expressions
`message: ${res.statusText}: ${await res.body()}`
)
}
}
async countRows (filter?: string): Promise<number> {
const result = await this._client.post(`/v1/table/${encodeURIComponent(this._name)}/count_rows/`, {


@@ -894,6 +894,27 @@ describe("LanceDB client", function () {
expect(stats.distanceType).to.equal("l2");
expect(stats.numIndices).to.equal(1);
}).timeout(50_000);
// not yet implemented
// it("can drop index", async function () {
// const uri = await createTestDB(32, 300);
// const con = await lancedb.connect(uri);
// const table = await con.openTable("vectors");
// await table.createIndex({
// type: "ivf_pq",
// column: "vector",
// num_partitions: 2,
// max_iters: 2,
// num_sub_vectors: 2
// });
//
// const indices = await table.listIndices();
// expect(indices).to.have.lengthOf(1);
// expect(indices[0].name).to.equal("vector_idx");
//
// await table.dropIndex("vector_idx");
// expect(await table.listIndices()).to.have.lengthOf(0);
// }).timeout(50_000);
});
describe("when using a custom embedding function", function () {


@@ -1,7 +1,7 @@
[package]
name = "lancedb-nodejs"
edition.workspace = true
version = "0.15.1-beta.3"
version = "0.18.0"
license.workspace = true
description.workspace = true
repository.workspace = true


@@ -55,6 +55,7 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
Float64,
Struct,
List,
Int16,
Int32,
Int64,
Float,
@@ -108,13 +109,16 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
false,
),
]);
const table = (await tableCreationMethod(
records,
recordsReversed,
schema,
// biome-ignore lint/suspicious/noExplicitAny: <explanation>
)) as any;
// We expect deterministic ordering of the fields
expect(table.schema.names).toEqual(schema.names);
schema.fields.forEach(
(
// biome-ignore lint/suspicious/noExplicitAny: <explanation>
@@ -141,13 +145,13 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
describe("The function makeArrowTable", function () {
it("will use data types from a provided schema instead of inference", async function () {
const schema = new Schema([
new Field("a", new Int32()),
new Field("b", new Float32()),
new Field("a", new Int32(), false),
new Field("b", new Float32(), true),
new Field(
"c",
new FixedSizeList(3, new Field("item", new Float16())),
),
new Field("d", new Int64()),
new Field("d", new Int64(), true),
]);
const table = makeArrowTable(
[
@@ -165,12 +169,15 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
expect(actual.numRows).toBe(3);
const actualSchema = actual.schema;
expect(actualSchema).toEqual(schema);
expect(table.getChild("a")?.toJSON()).toEqual([1, 4, 7]);
expect(table.getChild("b")?.toJSON()).toEqual([2, 5, 8]);
expect(table.getChild("d")?.toJSON()).toEqual([9n, 10n, null]);
});
it("will assume the column `vector` is FixedSizeList<Float32> by default", async function () {
const schema = new Schema([
new Field("a", new Float(Precision.DOUBLE), true),
new Field("b", new Float(Precision.DOUBLE), true),
new Field("b", new Int64(), true),
new Field(
"vector",
new FixedSizeList(
@@ -181,9 +188,9 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
),
]);
const table = makeArrowTable([
{ a: 1, b: 2, vector: [1, 2, 3] },
{ a: 4, b: 5, vector: [4, 5, 6] },
{ a: 7, b: 8, vector: [7, 8, 9] },
{ a: 1, b: 2n, vector: [1, 2, 3] },
{ a: 4, b: 5n, vector: [4, 5, 6] },
{ a: 7, b: 8n, vector: [7, 8, 9] },
]);
const buf = await fromTableToBuffer(table);
@@ -193,6 +200,19 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
expect(actual.numRows).toBe(3);
const actualSchema = actual.schema;
expect(actualSchema).toEqual(schema);
expect(table.getChild("a")?.toJSON()).toEqual([1, 4, 7]);
expect(table.getChild("b")?.toJSON()).toEqual([2n, 5n, 8n]);
expect(
table
.getChild("vector")
?.toJSON()
.map((v) => v.toJSON()),
).toEqual([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
]);
});
it("can support multiple vector columns", async function () {
@@ -206,7 +226,7 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
),
new Field(
"vec2",
new FixedSizeList(3, new Field("item", new Float16(), true)),
new FixedSizeList(3, new Field("item", new Float64(), true)),
true,
),
]);
@@ -219,7 +239,7 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
{
vectorColumns: {
vec1: { type: new Float16() },
vec2: { type: new Float16() },
vec2: { type: new Float64() },
},
},
);
@@ -307,6 +327,53 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
false,
);
});
it("will allow subsets of columns if nullable", async function () {
const schema = new Schema([
new Field("a", new Int64(), true),
new Field(
"s",
new Struct([
new Field("x", new Int32(), true),
new Field("y", new Int32(), true),
]),
true,
),
new Field("d", new Int16(), true),
]);
const table = makeArrowTable([{ a: 1n }], { schema });
expect(table.numCols).toBe(1);
expect(table.numRows).toBe(1);
const table2 = makeArrowTable([{ a: 1n, d: 2 }], { schema });
expect(table2.numCols).toBe(2);
const table3 = makeArrowTable([{ s: { y: 3 } }], { schema });
expect(table3.numCols).toBe(1);
const expectedSchema = new Schema([
new Field("s", new Struct([new Field("y", new Int32(), true)]), true),
]);
expect(table3.schema).toEqual(expectedSchema);
});
it("will work even if columns are sparsely provided", async function () {
const sparseRecords = [{ a: 1n }, { b: 2n }, { c: 3n }, { d: 4n }];
const table = makeArrowTable(sparseRecords);
expect(table.numCols).toBe(4);
expect(table.numRows).toBe(4);
const schema = new Schema([
new Field("a", new Int64(), true),
new Field("b", new Int32(), true),
new Field("c", new Int64(), true),
new Field("d", new Int16(), true),
]);
const table2 = makeArrowTable(sparseRecords, { schema });
expect(table2.numCols).toBe(4);
expect(table2.numRows).toBe(4);
expect(table2.schema).toEqual(schema);
});
});
class DummyEmbedding extends EmbeddingFunction<string> {


@@ -17,14 +17,14 @@ describe("when connecting", () => {
it("should connect", async () => {
const db = await connect(tmpDir.name);
expect(db.display()).toBe(
`NativeDatabase(uri=${tmpDir.name}, read_consistency_interval=None)`,
`ListingDatabase(uri=${tmpDir.name}, read_consistency_interval=None)`,
);
});
it("should allow read consistency interval to be specified", async () => {
const db = await connect(tmpDir.name, { readConsistencyInterval: 5 });
expect(db.display()).toBe(
`NativeDatabase(uri=${tmpDir.name}, read_consistency_interval=5s)`,
`ListingDatabase(uri=${tmpDir.name}, read_consistency_interval=5s)`,
);
});
});
@@ -61,6 +61,26 @@ describe("given a connection", () => {
await expect(tbl.countRows()).resolves.toBe(1);
});
it("should be able to drop tables`", async () => {
await db.createTable("test", [{ id: 1 }, { id: 2 }]);
await db.createTable("test2", [{ id: 1 }, { id: 2 }]);
await db.createTable("test3", [{ id: 1 }, { id: 2 }]);
await expect(db.tableNames()).resolves.toEqual(["test", "test2", "test3"]);
await db.dropTable("test2");
await expect(db.tableNames()).resolves.toEqual(["test", "test3"]);
await db.dropAllTables();
await expect(db.tableNames()).resolves.toEqual([]);
// Make sure we can still create more tables after dropping all
await db.createTable("test4", [{ id: 1 }, { id: 2 }]);
});
it("should fail if creating table twice, unless overwrite is true", async () => {
let tbl = await db.createTable("test", [{ id: 1 }, { id: 2 }]);
await expect(tbl.countRows()).resolves.toBe(2);
@@ -96,14 +116,15 @@ describe("given a connection", () => {
const data = [...Array(10000).keys()].map((i) => ({ id: i }));
// Create in v1 mode
let table = await db.createTable("test", data, { useLegacyFormat: true });
let table = await db.createTable("test", data, {
storageOptions: { newTableDataStorageVersion: "legacy" },
});
const isV2 = async (table: Table) => {
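// Heuristic (an assumption inferred from this test's intent): the stable/v2
// writer coalesces the 10k rows into a few large batches, while the legacy
// v1 writer emits many small ones, so a low batch count implies v2.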
const data = await table
.query()
.limit(10000)
.toArrow({ maxBatchLength: 100000 });
console.log(data.batches.length);
return data.batches.length < 5;
};
@@ -122,7 +143,7 @@ describe("given a connection", () => {
const schema = new Schema([new Field("id", new Float64(), true)]);
table = await db.createEmptyTable("test_v2_empty", schema, {
useLegacyFormat: false,
storageOptions: { newTableDataStorageVersion: "stable" },
});
await table.add(data);


@@ -17,6 +17,8 @@ import {
import { EmbeddingFunction, LanceSchema } from "../lancedb/embedding";
import { getRegistry, register } from "../lancedb/embedding/registry";
const testOpenAIInteg = process.env.OPENAI_API_KEY == null ? test.skip : test;
describe("embedding functions", () => {
let tmpDir: tmp.DirResult;
beforeEach(() => {
@@ -29,9 +31,6 @@ describe("embedding functions", () => {
it("should be able to create a table with an embedding function", async () => {
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
@@ -75,9 +74,6 @@ describe("embedding functions", () => {
it("should be able to append and upsert using embedding function", async () => {
@register()
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
@@ -143,9 +139,6 @@ describe("embedding functions", () => {
it("should be able to create an empty table with an embedding function", async () => {
@register()
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
@@ -194,9 +187,6 @@ describe("embedding functions", () => {
it("should error when appending to a table with an unregistered embedding function", async () => {
@register("mock")
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
@@ -241,13 +231,35 @@ describe("embedding functions", () => {
`Function "mock" not found in registry`,
);
});
testOpenAIInteg("propagates variables through all methods", async () => {
delete process.env.OPENAI_API_KEY;
const registry = getRegistry();
registry.setVar("openai_api_key", "sk-...");
const func = registry.get("openai")?.create({
model: "text-embedding-ada-002",
apiKey: "$var:openai_api_key",
}) as EmbeddingFunction;
const db = await connect("memory://");
const wordsSchema = LanceSchema({
text: func.sourceField(new Utf8()),
vector: func.vectorField(),
});
const tbl = await db.createEmptyTable("words", wordsSchema, {
mode: "overwrite",
});
await tbl.add([{ text: "hello world" }, { text: "goodbye world" }]);
const query = "greetings";
const actual = (await tbl.search(query).limit(1).toArray())[0];
expect(actual).toHaveProperty("text");
});
test.each([new Float16(), new Float32(), new Float64()])(
"should be able to provide manual embeddings with multiple float datatype",
async (floatType) => {
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
@@ -292,10 +304,6 @@ describe("embedding functions", () => {
async (floatType) => {
@register("test1")
class MockEmbeddingFunctionWithoutNDims extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
embeddingDataType(): Float {
return floatType;
}
@@ -310,9 +318,6 @@ describe("embedding functions", () => {
}
@register("test")
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}


@@ -11,7 +11,11 @@ import * as arrow18 from "apache-arrow-18";
import * as tmp from "tmp";
import { connect } from "../lancedb";
import { EmbeddingFunction, LanceSchema } from "../lancedb/embedding";
import {
EmbeddingFunction,
FunctionOptions,
LanceSchema,
} from "../lancedb/embedding";
import { getRegistry, register } from "../lancedb/embedding/registry";
describe.each([arrow15, arrow16, arrow17, arrow18])("LanceSchema", (arrow) => {
@@ -39,11 +43,6 @@ describe.each([arrow15, arrow16, arrow17, arrow18])("Registry", (arrow) => {
it("should register a new item to the registry", async () => {
@register("mock-embedding")
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {
someText: "hello",
};
}
constructor() {
super();
}
@@ -89,11 +88,6 @@ describe.each([arrow15, arrow16, arrow17, arrow18])("Registry", (arrow) => {
});
test("should error if registering with the same name", async () => {
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {
someText: "hello",
};
}
constructor() {
super();
}
@@ -114,13 +108,9 @@ describe.each([arrow15, arrow16, arrow17, arrow18])("Registry", (arrow) => {
});
test("schema should contain correct metadata", async () => {
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {
someText: "hello",
};
}
constructor() {
constructor(args: FunctionOptions = {}) {
super();
this.resolveVariables(args);
}
ndims() {
return 3;
@@ -132,7 +122,7 @@ describe.each([arrow15, arrow16, arrow17, arrow18])("Registry", (arrow) => {
return data.map(() => [1, 2, 3]);
}
}
const func = new MockEmbeddingFunction();
const func = new MockEmbeddingFunction({ someText: "hello" });
const schema = LanceSchema({
id: new arrow.Int32(),
@@ -155,3 +145,79 @@ describe.each([arrow15, arrow16, arrow17, arrow18])("Registry", (arrow) => {
expect(schema.metadata).toEqual(expectedMetadata);
});
});
describe("Registry.setVar", () => {
const registry = getRegistry();
beforeEach(() => {
@register("mock-embedding")
// biome-ignore lint/correctness/noUnusedVariables :
class MockEmbeddingFunction extends EmbeddingFunction<string> {
constructor(optionsRaw: FunctionOptions = {}) {
super();
const options = this.resolveVariables(optionsRaw);
expect(optionsRaw["someKey"].startsWith("$var:someName")).toBe(true);
expect(options["someKey"]).toBe("someValue");
if (options["secretKey"]) {
expect(optionsRaw["secretKey"]).toBe("$var:secretKey");
expect(options["secretKey"]).toBe("mySecret");
}
}
async computeSourceEmbeddings(data: string[]) {
return data.map(() => [1, 2, 3]);
}
embeddingDataType() {
return new arrow18.Float32() as apiArrow.Float;
}
protected getSensitiveKeys() {
return ["secretKey"];
}
}
});
afterEach(() => {
registry.reset();
});
it("Should error if the variable is not set", () => {
console.log(registry.get("mock-embedding"));
expect(() =>
registry.get("mock-embedding")!.create({ someKey: "$var:someName" }),
).toThrow('Variable "someName" not found');
});
it("should use default values if not set", () => {
registry
.get("mock-embedding")!
.create({ someKey: "$var:someName:someValue" });
});
it("should set a variable that the embedding function understand", () => {
registry.setVar("someName", "someValue");
registry.get("mock-embedding")!.create({ someKey: "$var:someName" });
});
it("should reject secrets that aren't passed as variables", () => {
registry.setVar("someName", "someValue");
expect(() =>
registry
.get("mock-embedding")!
.create({ secretKey: "someValue", someKey: "$var:someName" }),
).toThrow(
'The key "secretKey" is sensitive and cannot be set directly. Please use the $var: syntax to set it.',
);
});
it("should not serialize secrets", () => {
registry.setVar("someName", "someValue");
registry.setVar("secretKey", "mySecret");
const func = registry
.get("mock-embedding")!
.create({ secretKey: "$var:secretKey", someKey: "$var:someName" });
expect(func.toJSON()).toEqual({
secretKey: "$var:secretKey",
someKey: "$var:someName",
});
});
});


@@ -104,4 +104,26 @@ describe("remote connection", () => {
},
);
});
it("should pass on requested extra headers", async () => {
await withMockDatabase(
(req, res) => {
expect(req.headers["x-my-header"]).toEqual("my-value");
const body = JSON.stringify({ tables: [] });
res.writeHead(200, { "Content-Type": "application/json" }).end(body);
},
async (db) => {
const tableNames = await db.tableNames();
expect(tableNames).toEqual([]);
},
{
clientConfig: {
extraHeaders: {
"x-my-header": "my-value",
},
},
},
);
});
});


@@ -175,6 +175,8 @@ maybeDescribe("storage_options", () => {
tableNames = await db.tableNames();
expect(tableNames).toEqual([]);
await db.dropAllTables();
});
it("can configure encryption at connection and table level", async () => {
@@ -210,6 +212,8 @@ maybeDescribe("storage_options", () => {
await table.add([{ a: 2, b: 3 }]);
await bucket.assertAllEncrypted("test/table2.lance", kmsKey.keyId);
await db.dropAllTables();
});
});
@@ -298,5 +302,32 @@ maybeDescribe("DynamoDB Lock", () => {
const rowCount = await table.countRows();
expect(rowCount).toBe(6);
await db.dropAllTables();
});
it("clears dynamodb state after dropping all tables", async () => {
const uri = `s3+ddb://${bucket.name}/test?ddbTableName=${commitTable.name}`;
const db = await connect(uri, {
storageOptions: CONFIG,
readConsistencyInterval: 0,
});
await db.createTable("foo", [{ a: 1, b: 2 }]);
await db.createTable("bar", [{ a: 1, b: 2 }]);
let tableNames = await db.tableNames();
expect(tableNames).toEqual(["bar", "foo"]);
await db.dropAllTables();
tableNames = await db.tableNames();
expect(tableNames).toEqual([]);
// We can create a new table with the same name as the one we dropped.
await db.createTable("foo", [{ a: 1, b: 2 }]);
tableNames = await db.tableNames();
expect(tableNames).toEqual(["foo"]);
await db.dropAllTables();
});
});


@@ -253,6 +253,31 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
const arrowTbl = await table.toArrow();
expect(arrowTbl).toBeInstanceOf(ArrowTable);
});
it("should be able to handle missing fields", async () => {
const schema = new arrow.Schema([
new arrow.Field("id", new arrow.Int32(), true),
new arrow.Field("y", new arrow.Int32(), true),
new arrow.Field("z", new arrow.Int64(), true),
]);
const db = await connect(tmpDir.name);
const table = await db.createEmptyTable("testNull", schema);
await table.add([{ id: 1, y: 2 }]);
await table.add([{ id: 2 }]);
await table
.mergeInsert("id")
.whenNotMatchedInsertAll()
.execute([
{ id: 3, z: 3 },
{ id: 4, z: 5 },
]);
const res = await table.query().toArrow();
expect(res.getChild("id")?.toJSON()).toEqual([1, 2, 3, 4]);
expect(res.getChild("y")?.toJSON()).toEqual([2, null, null, null]);
expect(res.getChild("z")?.toJSON()).toEqual([null, null, 3n, 5n]);
});
},
);
@@ -641,11 +666,11 @@ describe("When creating an index", () => {
expect(fs.readdirSync(indexDir)).toHaveLength(1);
for await (const r of tbl.query().where("id > 1").select(["id"])) {
expect(r.numRows).toBe(10);
expect(r.numRows).toBe(298);
}
// should also work with 'filter' alias
for await (const r of tbl.query().filter("id > 1").select(["id"])) {
expect(r.numRows).toBe(10);
expect(r.numRows).toBe(298);
}
});
@@ -1013,9 +1038,6 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
test("can search using a string", async () => {
@register()
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 1;
}


@@ -43,12 +43,17 @@ test("custom embedding function", async () => {
@register("my_embedding")
class MyEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
constructor(optionsRaw = {}) {
super();
const options = this.resolveVariables(optionsRaw);
// Initialize using options
}
ndims() {
return 3;
}
protected getSensitiveKeys(): string[] {
return [];
}
embeddingDataType(): Float {
return new Float32();
}
@@ -94,3 +99,14 @@ test("custom embedding function", async () => {
expect(await table2.countRows()).toBe(2);
});
});
test("embedding function api_key", async () => {
// --8<-- [start:register_secret]
const registry = getRegistry();
registry.setVar("api_key", "sk-...");
const func = registry.get("openai")!.create({
apiKey: "$var:api_key",
});
// --8<-- [end:register_secret]
});


@@ -42,4 +42,4 @@ test("full text search", async () => {
expect(result.length).toBe(10);
// --8<-- [end:full_text_search]
});
});
}, 10_000);


@@ -2,31 +2,37 @@
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import {
Data as ArrowData,
Table as ArrowTable,
Binary,
Bool,
BufferType,
DataType,
Dictionary,
Field,
FixedSizeBinary,
FixedSizeList,
Float,
Float32,
Float64,
Int,
Int32,
Int64,
LargeBinary,
List,
Null,
RecordBatch,
RecordBatchFileReader,
RecordBatchFileWriter,
RecordBatchReader,
RecordBatchStreamWriter,
Schema,
Struct,
Utf8,
Vector,
makeVector as arrowMakeVector,
makeBuilder,
makeData,
type makeTable,
makeTable,
vectorFromArray,
} from "apache-arrow";
import { Buffers } from "apache-arrow/data";
@@ -236,8 +242,6 @@ export class MakeArrowTableOptions {
* This function converts an array of Record<String, any> (row-major JS objects)
* to an Arrow Table (a columnar structure)
*
* Note that it currently does not support nulls.
*
* If a schema is provided then it will be used to determine the resulting array
* types. Fields will also be reordered to fit the order defined by the schema.
*
@@ -245,6 +249,9 @@ export class MakeArrowTableOptions {
* will be controlled by the order of properties in the first record. If a type
* is inferred it will always be nullable.
*
* If not all fields are found in the data, then a subset of the schema will be
* returned.
*
* If the input is empty then a schema must be provided to create an empty table.
*
* When a schema is not specified then data types will be inferred. The inference
@@ -252,6 +259,7 @@ export class MakeArrowTableOptions {
*
* - boolean => Bool
* - number => Float64
* - bigint => Int64
* - String => Utf8
* - Buffer => Binary
* - Record<String, any> => Struct
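To make the inference mapping concrete, a small hedged sketch (field names and values are illustrative):

```ts
import { makeArrowTable } from "@lancedb/lancedb";

// boolean => Bool, number => Float64, bigint => Int64, string => Utf8,
// nested object => Struct; a float array in a column named "vector" becomes
// FixedSizeList<Float32> via the embedding-column detection shown later.
const table = makeArrowTable([
  { id: 1n, score: 0.5, name: "a", meta: { hot: true }, vector: [1, 2, 3] },
  { id: 2n, score: 0.7, name: "b", meta: { hot: false }, vector: [4, 5, 6] },
]);
console.log(table.schema.toString());
```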
@@ -322,126 +330,316 @@ export function makeArrowTable(
options?: Partial<MakeArrowTableOptions>,
metadata?: Map<string, string>,
): ArrowTable {
const opt = new MakeArrowTableOptions(options !== undefined ? options : {});
let schema: Schema | undefined = undefined;
if (opt.schema !== undefined && opt.schema !== null) {
schema = sanitizeSchema(opt.schema);
schema = validateSchemaEmbeddings(
schema as Schema,
data,
options?.embeddingFunction,
);
}
let schemaMetadata = schema?.metadata || new Map<string, string>();
if (metadata !== undefined) {
schemaMetadata = new Map([...schemaMetadata, ...metadata]);
}
if (
data.length === 0 &&
(options?.schema === undefined || options?.schema === null)
) {
throw new Error("At least one record or a schema needs to be provided");
}
const opt = new MakeArrowTableOptions(options !== undefined ? options : {});
if (opt.schema !== undefined && opt.schema !== null) {
opt.schema = sanitizeSchema(opt.schema);
opt.schema = validateSchemaEmbeddings(
opt.schema as Schema,
data,
options?.embeddingFunction,
);
}
const columns: Record<string, Vector> = {};
// TODO: sample dataset to find missing columns
// Prefer the field ordering of the schema, if present
const columnNames =
opt.schema != null ? (opt.schema.names as string[]) : Object.keys(data[0]);
for (const colName of columnNames) {
if (
data.length !== 0 &&
!Object.prototype.hasOwnProperty.call(data[0], colName)
) {
// The field is present in the schema, but not in the data, skip it
continue;
}
// Extract a single column from the records (transpose from row-major to col-major)
let values = data.map((datum) => datum[colName]);
// By default (type === undefined) arrow will infer the type from the JS type
let type;
if (opt.schema !== undefined) {
// If there is a schema provided, then use that for the type instead
type = opt.schema?.fields.filter((f) => f.name === colName)[0]?.type;
if (DataType.isInt(type) && type.bitWidth === 64) {
// wrap in BigInt to avoid bug: https://github.com/apache/arrow/issues/40051
values = values.map((v) => {
if (v === null) {
return v;
}
if (typeof v === "bigint") {
return v;
}
if (typeof v === "number") {
return BigInt(v);
}
throw new Error(
`Expected BigInt or number for column ${colName}, got ${typeof v}`,
);
});
}
} else if (data.length === 0) {
if (schema === undefined) {
throw new Error("A schema must be provided if data is empty");
} else {
// Otherwise, check to see if this column is one of the vector columns
// defined by opt.vectorColumns and, if so, use the fixed size list type
const vectorColumnOptions = opt.vectorColumns[colName];
if (vectorColumnOptions !== undefined) {
const firstNonNullValue = values.find((v) => v !== null);
if (Array.isArray(firstNonNullValue)) {
type = newVectorType(
firstNonNullValue.length,
vectorColumnOptions.type,
);
schema = new Schema(schema.fields, schemaMetadata);
return new ArrowTable(schema);
}
}
let inferredSchema = inferSchema(data, schema, opt);
inferredSchema = new Schema(inferredSchema.fields, schemaMetadata);
const finalColumns: Record<string, Vector> = {};
for (const field of inferredSchema.fields) {
finalColumns[field.name] = transposeData(data, field);
}
return new ArrowTable(inferredSchema, finalColumns);
}
function inferSchema(
data: Array<Record<string, unknown>>,
schema: Schema | undefined,
opts: MakeArrowTableOptions,
): Schema {
// We will collect all fields we see in the data.
const pathTree = new PathTree<DataType>();
for (const [rowI, row] of data.entries()) {
for (const [path, value] of rowPathsAndValues(row)) {
if (!pathTree.has(path)) {
// First time seeing this field.
if (schema !== undefined) {
const field = getFieldForPath(schema, path);
if (field === undefined) {
throw new Error(
`Found field not in schema: ${path.join(".")} at row ${rowI}`,
);
} else {
pathTree.set(path, field.type);
}
} else {
throw new Error(
`Column ${colName} is expected to be a vector column but first non-null value is not an array. Could not determine size of vector column`,
);
const inferredType = inferType(value, path, opts);
if (inferredType === undefined) {
throw new Error(`Failed to infer data type for field ${path.join(".")} at row ${rowI}. \
Consider providing an explicit schema.`);
}
pathTree.set(path, inferredType);
}
} else if (schema === undefined) {
const currentType = pathTree.get(path);
const newType = inferType(value, path, opts);
// DataType instances are freshly constructed, so compare type ids, not references.
if (currentType?.typeId !== newType?.typeId) {
throw new Error(`Failed to infer schema for data. Previously inferred type \
${currentType} but found ${newType} at row ${rowI}. Consider \
providing an explicit schema.`);
}
}
}
try {
// Convert an Array of JS values to an arrow vector
columns[colName] = makeVector(values, type, opt.dictionaryEncodeStrings);
} catch (error: unknown) {
// eslint-disable-next-line @typescript-eslint/restrict-template-expressions
throw Error(`Could not convert column "${colName}" to Arrow: ${error}`);
}
}
if (opt.schema != null) {
// `new ArrowTable(columns)` infers a schema which may sometimes have
// incorrect nullability (it assumes nullable=true always)
//
// `new ArrowTable(schema, columns)` will also fail because it will create a
// batch with an inferred schema and then complain that the batch schema
// does not match the provided schema.
//
// To work around this we first create a table with the wrong schema and
// then patch the schema of the batches so we can use
// `new ArrowTable(schema, batches)` which does not do any schema inference
const firstTable = new ArrowTable(columns);
const batchesFixed = firstTable.batches.map(
(batch) => new RecordBatch(opt.schema as Schema, batch.data),
);
let schema: Schema;
if (metadata !== undefined) {
let schemaMetadata = opt.schema.metadata;
if (schemaMetadata.size === 0) {
schemaMetadata = metadata;
} else {
for (const [key, entry] of schemaMetadata.entries()) {
schemaMetadata.set(key, entry);
if (schema === undefined) {
function fieldsFromPathTree(pathTree: PathTree<DataType>): Field[] {
const fields = [];
for (const [name, value] of pathTree.map.entries()) {
if (value instanceof PathTree) {
const children = fieldsFromPathTree(value);
fields.push(new Field(name, new Struct(children), true));
} else {
fields.push(new Field(name, value, true));
}
}
return fields;
}
const fields = fieldsFromPathTree(pathTree);
return new Schema(fields);
} else {
function takeMatchingFields(
fields: Field[],
pathTree: PathTree<DataType>,
): Field[] {
const outFields = [];
for (const field of fields) {
if (pathTree.map.has(field.name)) {
const value = pathTree.get([field.name]);
if (value instanceof PathTree) {
const struct = field.type as Struct;
const children = takeMatchingFields(struct.children, value);
outFields.push(
new Field(field.name, new Struct(children), field.nullable),
);
} else {
outFields.push(
new Field(field.name, value as DataType, field.nullable),
);
}
}
}
return outFields;
}
const fields = takeMatchingFields(schema.fields, pathTree);
return new Schema(fields);
}
}
schema = new Schema(opt.schema.fields as Field[], schemaMetadata);
function* rowPathsAndValues(
row: Record<string, unknown>,
basePath: string[] = [],
): Generator<[string[], unknown]> {
for (const [key, value] of Object.entries(row)) {
if (isObject(value)) {
yield* rowPathsAndValues(value, [...basePath, key]);
} else {
schema = opt.schema as Schema;
yield [[...basePath, key], value];
}
return new ArrowTable(schema, batchesFixed);
}
const tbl = new ArrowTable(columns);
if (metadata !== undefined) {
// biome-ignore lint/suspicious/noExplicitAny: <explanation>
(<any>tbl.schema).metadata = metadata;
}
function isObject(value: unknown): value is Record<string, unknown> {
return (
typeof value === "object" &&
value !== null &&
!Array.isArray(value) &&
!(value instanceof RegExp) &&
!(value instanceof Date) &&
!(value instanceof Set) &&
!(value instanceof Map) &&
!(value instanceof Buffer)
);
}
function getFieldForPath(schema: Schema, path: string[]): Field | undefined {
let current: Field | Schema = schema;
for (const key of path) {
if (current instanceof Schema) {
const field: Field | undefined = current.fields.find(
(f) => f.name === key,
);
if (field === undefined) {
return undefined;
}
current = field;
} else if (current instanceof Field && DataType.isStruct(current.type)) {
const struct: Struct = current.type;
const field = struct.children.find((f) => f.name === key);
if (field === undefined) {
return undefined;
}
current = field;
} else {
return undefined;
}
}
if (current instanceof Field) {
return current;
} else {
return undefined;
}
}
/**
* Try to infer which Arrow type to use for a given value.
*
* May return undefined if the type cannot be inferred.
*/
function inferType(
value: unknown,
path: string[],
opts: MakeArrowTableOptions,
): DataType | undefined {
if (typeof value === "bigint") {
return new Int64();
} else if (typeof value === "number") {
// Even if it's an integer, it's safer to assume Float64. Users can
// always provide an explicit schema or use BigInt if they mean integer.
return new Float64();
} else if (typeof value === "string") {
if (opts.dictionaryEncodeStrings) {
return new Dictionary(new Utf8(), new Int32());
} else {
return new Utf8();
}
} else if (typeof value === "boolean") {
return new Bool();
} else if (value instanceof Buffer) {
return new Binary();
} else if (Array.isArray(value)) {
if (value.length === 0) {
return undefined; // Without any values we can't infer the type
}
if (path.length === 1 && Object.hasOwn(opts.vectorColumns, path[0])) {
const floatType = sanitizeType(opts.vectorColumns[path[0]].type);
return new FixedSizeList(
value.length,
new Field("item", floatType, true),
);
}
const valueType = inferType(value[0], path, opts);
if (valueType === undefined) {
return undefined;
}
// Try to automatically detect embedding columns.
if (valueType instanceof Float && path[path.length - 1] === "vector") {
// We default to Float32 for vectors.
const child = new Field("item", new Float32(), true);
return new FixedSizeList(value.length, child);
} else {
const child = new Field("item", valueType, true);
return new List(child);
}
} else {
// TODO: timestamp
return undefined;
}
}
class PathTree<V> {
map: Map<string, V | PathTree<V>>;
constructor(entries?: [string[], V][]) {
this.map = new Map();
if (entries !== undefined) {
for (const [path, value] of entries) {
this.set(path, value);
}
}
}
has(path: string[]): boolean {
let ref: PathTree<V> = this;
for (const part of path) {
if (!(ref instanceof PathTree) || !ref.map.has(part)) {
return false;
}
ref = ref.map.get(part) as PathTree<V>;
}
return true;
}
get(path: string[]): V | undefined {
let ref: PathTree<V> = this;
for (const part of path) {
if (!(ref instanceof PathTree) || !ref.map.has(part)) {
return undefined;
}
ref = ref.map.get(part) as PathTree<V>;
}
return ref as V;
}
set(path: string[], value: V): void {
let ref: PathTree<V> = this;
for (const part of path.slice(0, path.length - 1)) {
if (!ref.map.has(part)) {
ref.map.set(part, new PathTree<V>());
}
ref = ref.map.get(part) as PathTree<V>;
}
ref.map.set(path[path.length - 1], value);
}
}
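An illustrative sketch of the `PathTree` helper above (it is module-internal, so this is exposition only, not public API):

```ts
// Paths mirror nested record fields: ["s", "x"] corresponds to row.s.x.
const tree = new PathTree<string>();
tree.set(["a"], "Int64");
tree.set(["s", "x"], "Int32");
console.log(tree.has(["s", "x"])); // true
console.log(tree.get(["s", "x"])); // "Int32"
console.log(tree.has(["s", "y"])); // false
```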
function transposeData(
data: Record<string, unknown>[],
field: Field,
path: string[] = [],
): Vector {
if (field.type instanceof Struct) {
const childFields = field.type.children;
const childVectors = childFields.map((child) => {
return transposeData(data, child, [...path, child.name]);
});
const structData = makeData({
type: field.type,
children: childVectors as unknown as ArrowData<DataType>[],
});
return arrowMakeVector(structData);
} else {
const valuesPath = [...path, field.name];
const values = data.map((datum) => {
let current: unknown = datum;
for (const key of valuesPath) {
if (isObject(current) && Object.hasOwn(current, key)) {
current = current[key];
} else {
return null;
}
}
return current;
});
return makeVector(values, field.type);
}
return tbl;
}
/**
@@ -491,6 +689,31 @@ function makeVector(
): Vector<any> {
if (type !== undefined) {
// No need for inference, let Arrow create it
if (type instanceof Int) {
if (DataType.isInt(type) && type.bitWidth === 64) {
// wrap in BigInt to avoid bug: https://github.com/apache/arrow/issues/40051
values = values.map((v) => {
if (v === null) {
return v;
} else if (typeof v === "bigint") {
return v;
} else if (typeof v === "number") {
return BigInt(v);
} else {
return v;
}
});
} else {
// Similarly, bigint isn't supported for 16 or 32-bit ints.
values = values.map((v) => {
if (typeof v === "bigint") {
return Number(v);
} else {
return v;
}
});
}
}
return vectorFromArray(values, type);
}
if (values.length === 0) {
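The coercion above works around apache/arrow#40051: 64-bit integer vectors only accept BigInt values, while narrower integer vectors only accept plain numbers. A hedged sketch of the behavior being normalized:

```ts
import { Int32, Int64, vectorFromArray } from "apache-arrow";

// 64-bit columns: plain numbers must be widened to BigInt first.
const big = vectorFromArray([1n, BigInt(2), 3n], new Int64());
// 16/32-bit columns: bigints must be narrowed to plain numbers.
const small = vectorFromArray([1, Number(2n), 3], new Int32());
console.log(big.length, small.length); // 3 3
```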
@@ -902,7 +1125,7 @@ function validateSchemaEmbeddings(
schema: Schema,
data: Array<Record<string, unknown>>,
embeddings: EmbeddingFunctionConfig | undefined,
) {
): Schema {
const fields = [];
const missingEmbeddingFields = [];


@@ -52,6 +52,8 @@ export interface CreateTableOptions {
*
* The default is `stable`.
* Set to "legacy" to use the old format.
*
* @deprecated Pass `new_table_data_storage_version` to storageOptions instead.
*/
dataStorageVersion?: string;
@@ -61,17 +63,11 @@ export interface CreateTableOptions {
* turning this on will make the dataset unreadable for older versions
* of LanceDB (prior to 0.10.0). To migrate an existing dataset, instead
* use the {@link LocalTable#migrateManifestPathsV2} method.
*
* @deprecated Pass `new_table_enable_v2_manifest_paths` to storageOptions instead.
*/
enableV2ManifestPaths?: boolean;
/**
* If true then data files will be written with the legacy format
*
* The default is false.
*
* Deprecated. Use data storage version instead.
*/
useLegacyFormat?: boolean;
schema?: SchemaLike;
embeddingFunction?: EmbeddingFunctionConfig;
}
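Since both flags are now deprecated, the same settings route through `storageOptions`; a hedged migration sketch (the path and data are placeholders):

```ts
import { connect } from "@lancedb/lancedb";

const db = await connect("/tmp/example-db"); // illustrative path
const data = [{ id: 1 }];
// Deprecated: { dataStorageVersion: "legacy" } / { enableV2ManifestPaths: true }
// Preferred equivalents, matching the keys getStorageOptions maps to below:
await db.createTable("t1", data, {
  storageOptions: { newTableDataStorageVersion: "legacy" },
});
await db.createTable("t2", data, {
  storageOptions: { newTableEnableV2ManifestPaths: "true" },
});
```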
@@ -215,6 +211,11 @@ export abstract class Connection {
* @param {string} name The name of the table to drop.
*/
abstract dropTable(name: string): Promise<void>;
/**
* Drop all tables in the database.
*/
abstract dropAllTables(): Promise<void>;
}
/** @hideconstructor */
@@ -256,6 +257,28 @@ export class LocalConnection extends Connection {
return new LocalTable(innerTable);
}
private getStorageOptions(
options?: Partial<CreateTableOptions>,
): Record<string, string> | undefined {
if (options?.dataStorageVersion !== undefined) {
if (options.storageOptions === undefined) {
options.storageOptions = {};
}
options.storageOptions["newTableDataStorageVersion"] =
options.dataStorageVersion;
}
if (options?.enableV2ManifestPaths !== undefined) {
if (options.storageOptions === undefined) {
options.storageOptions = {};
}
options.storageOptions["newTableEnableV2ManifestPaths"] =
options.enableV2ManifestPaths ? "true" : "false";
}
return cleanseStorageOptions(options?.storageOptions);
}
async createTable(
nameOrOptions:
| string
@@ -272,20 +295,14 @@ export class LocalConnection extends Connection {
throw new Error("data is required");
}
const { buf, mode } = await parseTableData(data, options);
let dataStorageVersion = "stable";
if (options?.dataStorageVersion !== undefined) {
dataStorageVersion = options.dataStorageVersion;
} else if (options?.useLegacyFormat !== undefined) {
dataStorageVersion = options.useLegacyFormat ? "legacy" : "stable";
}
const storageOptions = this.getStorageOptions(options);
const innerTable = await this.inner.createTable(
nameOrOptions,
buf,
mode,
cleanseStorageOptions(options?.storageOptions),
dataStorageVersion,
options?.enableV2ManifestPaths,
storageOptions,
);
return new LocalTable(innerTable);
@@ -309,22 +326,14 @@ export class LocalConnection extends Connection {
metadata = registry.getTableMetadata([embeddingFunction]);
}
let dataStorageVersion = "stable";
if (options?.dataStorageVersion !== undefined) {
dataStorageVersion = options.dataStorageVersion;
} else if (options?.useLegacyFormat !== undefined) {
dataStorageVersion = options.useLegacyFormat ? "legacy" : "stable";
}
const storageOptions = this.getStorageOptions(options);
const table = makeEmptyTable(schema, metadata);
const buf = await fromTableToBuffer(table);
const innerTable = await this.inner.createEmptyTable(
name,
buf,
mode,
cleanseStorageOptions(options?.storageOptions),
dataStorageVersion,
options?.enableV2ManifestPaths,
storageOptions,
);
return new LocalTable(innerTable);
}
@@ -332,6 +341,10 @@ export class LocalConnection extends Connection {
async dropTable(name: string): Promise<void> {
return this.inner.dropTable(name);
}
async dropAllTables(): Promise<void> {
return this.inner.dropAllTables();
}
}
/**


@@ -15,6 +15,7 @@ import {
newVectorType,
} from "../arrow";
import { sanitizeType } from "../sanitize";
import { getRegistry } from "./registry";
/**
* Options for a given embedding function
@@ -32,6 +33,22 @@ export interface EmbeddingFunctionConstructor<
/**
* An embedding function that automatically creates vector representation for a given column.
*
* It's important that subclasses pass the **original** options to
* `resolveVariables`, which resolves any `$var:` references before the options
* are used (and records them so `toJSON` can serialize the function).
*
* @example
* ```ts
* class MyEmbeddingFunction extends EmbeddingFunction {
*   constructor(optionsRaw: {model: string, timeout: number}) {
*     super();
*     const options = this.resolveVariables(optionsRaw);
* this.model = options.model;
* this.timeout = options.timeout;
* }
* }
* ```
*/
export abstract class EmbeddingFunction<
// biome-ignore lint/suspicious/noExplicitAny: we don't know what the implementor will do
@@ -44,33 +61,74 @@ export abstract class EmbeddingFunction<
*/
// biome-ignore lint/style/useNamingConvention: we want to keep the name as it is
readonly TOptions!: M;
/**
* Convert the embedding function to a JSON object
* It is used to serialize the embedding function to the schema
* It's important that any object returned by this method contains all the necessary
* information to recreate the embedding function
*
* It should return the same object that was passed to the constructor
* If it does not, the embedding function will not be able to be recreated, or could be recreated incorrectly
*
* @example
* ```ts
* class MyEmbeddingFunction extends EmbeddingFunction {
* constructor(options: {model: string, timeout: number}) {
* super();
* this.model = options.model;
* this.timeout = options.timeout;
* }
* toJSON() {
* return {
* model: this.model,
* timeout: this.timeout,
* };
* }
* ```
*/
abstract toJSON(): Partial<M>;
#config: Partial<M>;
/**
* Get the original arguments to the constructor, to serialize them so they
* can be used to recreate the embedding function later.
*/
// biome-ignore lint/suspicious/noExplicitAny :
toJSON(): Record<string, any> {
return JSON.parse(JSON.stringify(this.#config));
}
constructor() {
this.#config = {};
}
/**
* Provide a list of keys in the function options that should be treated as
* sensitive. If users pass raw values for these keys, they will be rejected.
*/
protected getSensitiveKeys(): string[] {
return [];
}
/**
* Apply variables to the config.
*/
protected resolveVariables(config: Partial<M>): Partial<M> {
this.#config = config;
const registry = getRegistry();
const newConfig = { ...config };
for (const [key_, value] of Object.entries(newConfig)) {
if (
this.getSensitiveKeys().includes(key_) &&
(typeof value !== "string" || !value.startsWith("$var:"))
) {
throw new Error(
`The key "${key_}" is sensitive and cannot be set directly. Please use the $var: syntax to set it.`,
);
}
// Makes TS happy (https://stackoverflow.com/a/78391854)
const key = key_ as keyof M;
if (typeof value === "string" && value.startsWith("$var:")) {
const [name, defaultValue] = value.slice(5).split(":", 2);
const variableValue = registry.getVar(name);
if (!variableValue) {
if (defaultValue) {
// biome-ignore lint/suspicious/noExplicitAny:
newConfig[key] = defaultValue as any;
} else {
throw new Error(`Variable "${name}" not found`);
}
} else {
// biome-ignore lint/suspicious/noExplicitAny:
newConfig[key] = variableValue as any;
}
}
}
return newConfig;
}
/**
* Optionally load any resources needed for the embedding function.
*
* This method is called after the embedding function has been initialized
* but before any embeddings are computed. It is useful for loading local models
* or other resources that are needed for the embedding function to work.
*/
async init?(): Promise<void>;
/**

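Taken together, a hedged sketch of a subclass wiring `resolveVariables` and `getSensitiveKeys` (the class, option names, and import path are illustrative assumptions):

```ts
import { EmbeddingFunction } from "@lancedb/lancedb/embedding";
import { Float32, type Float } from "apache-arrow";

type MyOptions = { apiKey: string; model: string };

class MyFunction extends EmbeddingFunction<string, MyOptions> {
  #model: string;
  constructor(optionsRaw: Partial<MyOptions> = {}) {
    super();
    // Throws if apiKey is passed raw; resolves "$var:..." references otherwise.
    const options = this.resolveVariables(optionsRaw);
    this.#model = options.model ?? "default-model"; // assumed default
  }
  protected getSensitiveKeys(): string[] {
    return ["apiKey"];
  }
  ndims() {
    return 3;
  }
  embeddingDataType(): Float {
    return new Float32();
  }
  async computeSourceEmbeddings(data: string[]): Promise<number[][]> {
    return data.map(() => [1, 2, 3]); // stub embeddings
  }
}
```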

@@ -21,11 +21,13 @@ export class OpenAIEmbeddingFunction extends EmbeddingFunction<
#modelName: OpenAIOptions["model"];
constructor(
options: Partial<OpenAIOptions> = {
optionsRaw: Partial<OpenAIOptions> = {
model: "text-embedding-ada-002",
},
) {
super();
const options = this.resolveVariables(optionsRaw);
const openAIKey = options?.apiKey ?? process.env.OPENAI_API_KEY;
if (!openAIKey) {
throw new Error("OpenAI API key is required");
@@ -52,10 +54,8 @@ export class OpenAIEmbeddingFunction extends EmbeddingFunction<
this.#modelName = modelName;
}
toJSON() {
return {
model: this.#modelName,
};
protected getSensitiveKeys(): string[] {
return ["apiKey"];
}
ndims(): number {


@@ -23,6 +23,7 @@ export interface EmbeddingFunctionCreate<T extends EmbeddingFunction> {
*/
export class EmbeddingFunctionRegistry {
#functions = new Map<string, EmbeddingFunctionConstructor>();
#variables = new Map<string, string>();
/**
* Get the number of registered functions
@@ -82,10 +83,7 @@ export class EmbeddingFunctionRegistry {
};
} else {
// biome-ignore lint/suspicious/noExplicitAny: <explanation>
create = function (options?: any) {
const instance = new factory(options);
return instance;
};
create = (options?: any) => new factory(options);
}
return {
@@ -164,6 +162,37 @@ export class EmbeddingFunctionRegistry {
return metadata;
}
/**
* Set a variable. These can be accessed in the embedding function
* configuration using the syntax `$var:variable_name`. If they are not
* set, an error will be thrown letting you know which variable is unset. If you
* want to supply a default value, you can add an additional part in the
* configuration like so: `$var:variable_name:default_value`. Default values
* can be used for runtime configurations that are not sensitive, such as
* whether to use a GPU for inference.
*
* The name must not contain colons. The default value can contain colons.
*
* @param name
* @param value
*/
setVar(name: string, value: string): void {
if (name.includes(":")) {
throw new Error("Variable names cannot contain colons");
}
this.#variables.set(name, value);
}
/**
* Get a variable.
* @param name
* @returns
* @see {@link setVar}
*/
getVar(name: string): string | undefined {
return this.#variables.get(name);
}
}
const _REGISTRY = new EmbeddingFunctionRegistry();
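A short sketch of the variable mechanics described above, including a defaulted variable (names and values are placeholders; compare the register_secret doc example earlier):

```ts
import { getRegistry } from "@lancedb/lancedb/embedding";

const registry = getRegistry();
registry.setVar("api_key", "sk-..."); // secret supplied once, at runtime
// "$var:api_key" resolves to the value above; a reference like
// "$var:device:cpu" would fall back to "cpu" if "device" were never set.
const func = registry.get("openai")?.create({ apiKey: "$var:api_key" });
```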


@@ -44,11 +44,12 @@ export class TransformersEmbeddingFunction extends EmbeddingFunction<
#ndims?: number;
constructor(
options: Partial<XenovaTransformerOptions> = {
optionsRaw: Partial<XenovaTransformerOptions> = {
model: "Xenova/all-MiniLM-L6-v2",
},
) {
super();
const options = this.resolveVariables(optionsRaw);
const modelName = options?.model ?? "Xenova/all-MiniLM-L6-v2";
this.#tokenizerOptions = {
@@ -59,22 +60,6 @@ export class TransformersEmbeddingFunction extends EmbeddingFunction<
this.#ndims = options.ndims;
this.#modelName = modelName;
}
toJSON() {
// biome-ignore lint/suspicious/noExplicitAny: <explanation>
const obj: Record<string, any> = {
model: this.#modelName,
};
if (this.#ndims) {
obj["ndims"] = this.#ndims;
}
if (this.#tokenizerOptions) {
obj["tokenizerOptions"] = this.#tokenizerOptions;
}
if (this.#tokenizer) {
obj["tokenizer"] = this.#tokenizer.name;
}
return obj;
}
async init() {
let transformers;


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-arm64",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"os": ["darwin"],
"cpu": ["arm64"],
"main": "lancedb.darwin-arm64.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-x64",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"os": ["darwin"],
"cpu": ["x64"],
"main": "lancedb.darwin-x64.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-gnu",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-gnu.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-musl",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-musl.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-gnu",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-gnu.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-musl",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-musl.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-arm64-msvc",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"os": [
"win32"
],


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-x64-msvc",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"os": ["win32"],
"cpu": ["x64"],
"main": "lancedb.win32-x64-msvc.node",


@@ -1,12 +1,12 @@
{
"name": "@lancedb/lancedb",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@lancedb/lancedb",
"version": "0.15.1-beta.3",
"version": "0.18.0",
"cpu": [
"x64",
"arm64"


@@ -11,7 +11,7 @@
"ann"
],
"private": false,
"version": "0.15.1-beta.3",
"version": "0.18.0",
"main": "dist/index.js",
"exports": {
".": "./dist/index.js",


@@ -2,17 +2,15 @@
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
use std::collections::HashMap;
use std::str::FromStr;
use lancedb::database::CreateTableMode;
use napi::bindgen_prelude::*;
use napi_derive::*;
use crate::error::{convert_error, NapiErrorExt};
use crate::error::NapiErrorExt;
use crate::table::Table;
use crate::ConnectionOptions;
use lancedb::connection::{
ConnectBuilder, Connection as LanceDBConnection, CreateTableMode, LanceFileVersion,
};
use lancedb::connection::{ConnectBuilder, Connection as LanceDBConnection};
use lancedb::ipc::{ipc_file_to_batches, ipc_file_to_schema};
#[napi]
@@ -124,8 +122,6 @@ impl Connection {
buf: Buffer,
mode: String,
storage_options: Option<HashMap<String, String>>,
data_storage_options: Option<String>,
enable_v2_manifest_paths: Option<bool>,
) -> napi::Result<Table> {
let batches = ipc_file_to_batches(buf.to_vec())
.map_err(|e| napi::Error::from_reason(format!("Failed to read IPC file: {}", e)))?;
@@ -137,14 +133,6 @@ impl Connection {
builder = builder.storage_option(key, value);
}
}
if let Some(data_storage_option) = data_storage_options.as_ref() {
builder = builder.data_storage_version(
LanceFileVersion::from_str(data_storage_option).map_err(|e| convert_error(&e))?,
);
}
if let Some(enable_v2_manifest_paths) = enable_v2_manifest_paths {
builder = builder.enable_v2_manifest_paths(enable_v2_manifest_paths);
}
let tbl = builder.execute().await.default_error()?;
Ok(Table::new(tbl))
}
@@ -156,8 +144,6 @@ impl Connection {
schema_buf: Buffer,
mode: String,
storage_options: Option<HashMap<String, String>>,
data_storage_options: Option<String>,
enable_v2_manifest_paths: Option<bool>,
) -> napi::Result<Table> {
let schema = ipc_file_to_schema(schema_buf.to_vec()).map_err(|e| {
napi::Error::from_reason(format!("Failed to marshal schema from JS to Rust: {}", e))
@@ -172,14 +158,6 @@ impl Connection {
builder = builder.storage_option(key, value);
}
}
if let Some(data_storage_option) = data_storage_options.as_ref() {
builder = builder.data_storage_version(
LanceFileVersion::from_str(data_storage_option).map_err(|e| convert_error(&e))?,
);
}
if let Some(enable_v2_manifest_paths) = enable_v2_manifest_paths {
builder = builder.enable_v2_manifest_paths(enable_v2_manifest_paths);
}
let tbl = builder.execute().await.default_error()?;
Ok(Table::new(tbl))
}
@@ -209,4 +187,9 @@ impl Connection {
pub async fn drop_table(&self, name: String) -> napi::Result<()> {
self.get_inner()?.drop_table(&name).await.default_error()
}
#[napi(catch_unwind)]
pub async fn drop_all_tables(&self) -> napi::Result<()> {
self.get_inner()?.drop_all_tables().await.default_error()
}
}


@@ -1,6 +1,8 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
use std::collections::HashMap;
use napi_derive::*;
/// Timeout configuration for remote HTTP client.
@@ -67,6 +69,7 @@ pub struct ClientConfig {
pub user_agent: Option<String>,
pub retry_config: Option<RetryConfig>,
pub timeout_config: Option<TimeoutConfig>,
pub extra_headers: Option<HashMap<String, String>>,
}
impl From<TimeoutConfig> for lancedb::remote::TimeoutConfig {
@@ -104,6 +107,7 @@ impl From<ClientConfig> for lancedb::remote::ClientConfig {
.unwrap_or(concat!("LanceDB-Node-Client/", env!("CARGO_PKG_VERSION")).to_string()),
retry_config: config.retry_config.map(Into::into).unwrap_or_default(),
timeout_config: config.timeout_config.map(Into::into).unwrap_or_default(),
extra_headers: config.extra_headers.unwrap_or_default(),
}
}
}
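On the TypeScript side this surfaces as `clientConfig.extraHeaders`; a hedged connection sketch (URI and key are placeholders):

```ts
import { connect } from "@lancedb/lancedb";

// Every request made by the remote client carries the extra header,
// as exercised by the mock-server test earlier in this diff.
const db = await connect("db://my-database", {
  apiKey: "sk-...",
  clientConfig: {
    extraHeaders: { "x-my-header": "my-value" },
  },
});
```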

pyright_report.csv

@@ -0,0 +1,56 @@
file,errors,warnings,total_issues
python/python/lancedb/arrow.py,0,0,0
python/python/lancedb/background_loop.py,0,0,0
python/python/lancedb/embeddings/__init__.py,0,0,0
python/python/lancedb/exceptions.py,0,0,0
python/python/lancedb/index.py,0,0,0
python/python/lancedb/integrations/__init__.py,0,0,0
python/python/lancedb/remote/__init__.py,0,0,0
python/python/lancedb/remote/errors.py,0,0,0
python/python/lancedb/rerankers/__init__.py,0,0,0
python/python/lancedb/rerankers/answerdotai.py,0,0,0
python/python/lancedb/rerankers/cohere.py,0,0,0
python/python/lancedb/rerankers/colbert.py,0,0,0
python/python/lancedb/rerankers/cross_encoder.py,0,0,0
python/python/lancedb/rerankers/openai.py,0,0,0
python/python/lancedb/rerankers/util.py,0,0,0
python/python/lancedb/rerankers/voyageai.py,0,0,0
python/python/lancedb/schema.py,0,0,0
python/python/lancedb/types.py,0,0,0
python/python/lancedb/__init__.py,0,1,1
python/python/lancedb/conftest.py,1,0,1
python/python/lancedb/embeddings/bedrock.py,1,0,1
python/python/lancedb/merge.py,1,0,1
python/python/lancedb/rerankers/base.py,1,0,1
python/python/lancedb/rerankers/jinaai.py,0,1,1
python/python/lancedb/rerankers/linear_combination.py,1,0,1
python/python/lancedb/embeddings/instructor.py,2,0,2
python/python/lancedb/embeddings/openai.py,2,0,2
python/python/lancedb/embeddings/watsonx.py,2,0,2
python/python/lancedb/embeddings/registry.py,3,0,3
python/python/lancedb/embeddings/sentence_transformers.py,3,0,3
python/python/lancedb/integrations/pyarrow.py,3,0,3
python/python/lancedb/rerankers/rrf.py,3,0,3
python/python/lancedb/dependencies.py,4,0,4
python/python/lancedb/embeddings/gemini_text.py,4,0,4
python/python/lancedb/embeddings/gte.py,4,0,4
python/python/lancedb/embeddings/gte_mlx_model.py,4,0,4
python/python/lancedb/embeddings/ollama.py,4,0,4
python/python/lancedb/embeddings/transformers.py,4,0,4
python/python/lancedb/remote/db.py,5,0,5
python/python/lancedb/context.py,6,0,6
python/python/lancedb/embeddings/cohere.py,6,0,6
python/python/lancedb/fts.py,6,0,6
python/python/lancedb/db.py,9,0,9
python/python/lancedb/embeddings/utils.py,9,0,9
python/python/lancedb/common.py,11,0,11
python/python/lancedb/util.py,13,0,13
python/python/lancedb/embeddings/imagebind.py,14,0,14
python/python/lancedb/embeddings/voyageai.py,15,0,15
python/python/lancedb/embeddings/open_clip.py,16,0,16
python/python/lancedb/pydantic.py,16,0,16
python/python/lancedb/embeddings/base.py,17,0,17
python/python/lancedb/embeddings/jinaai.py,18,1,19
python/python/lancedb/remote/table.py,23,0,23
python/python/lancedb/query.py,47,1,48
python/python/lancedb/table.py,61,0,61

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.18.1-beta.4"
current_version = "0.21.1"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

python/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
# Test data created by some example tests
data/

View File

@@ -8,9 +8,9 @@ For general contribution guidelines, see [CONTRIBUTING.md](../CONTRIBUTING.md).
The Python package is a wrapper around the Rust library, `lancedb`. We use
[pyo3](https://pyo3.rs/) to create the bindings between Rust and Python.
* `src/`: Rust bindings source code
* `python/lancedb`: Python package source code
* `python/tests`: Unit tests
- `src/`: Rust bindings source code
- `python/lancedb`: Python package source code
- `python/tests`: Unit tests
## Development environment
@@ -61,6 +61,12 @@ make test
make doctest
```
Run type checking:
```shell
make typecheck
```
To run a single test, you can use the `pytest` command directly. Provide the path
to the test file, and optionally the test name after `::`.

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-python"
version = "0.18.1-beta.4"
version = "0.21.1"
edition.workspace = true
description = "Python bindings for LanceDB"
license.workspace = true
@@ -14,21 +14,20 @@ name = "_lancedb"
crate-type = ["cdylib"]
[dependencies]
arrow = { version = "53.2", features = ["pyarrow"] }
arrow = { version = "54.1", features = ["pyarrow"] }
lancedb = { path = "../rust/lancedb", default-features = false }
env_logger.workspace = true
pyo3 = { version = "0.22.2", features = [
"extension-module",
"abi3-py39",
"gil-refs"
pyo3 = { version = "0.23", features = ["extension-module", "abi3-py39"] }
pyo3-async-runtimes = { version = "0.23", features = [
"attributes",
"tokio-runtime",
] }
pyo3-async-runtimes = { version = "0.22", features = ["attributes", "tokio-runtime"] }
pin-project = "1.1.5"
futures.workspace = true
tokio = { version = "1.40", features = ["sync"] }
[build-dependencies]
pyo3-build-config = { version = "0.20.3", features = [
pyo3-build-config = { version = "0.23", features = [
"extension-module",
"abi3-py39",
] }

View File

@@ -23,10 +23,18 @@ check: ## Check formatting and lints.
fix: ## Fix python lints
ruff check python --fix
.PHONY: typecheck
typecheck: ## Run type checking with pyright.
pyright
.PHONY: doctest
doctest: ## Run documentation tests.
pytest --doctest-modules python/lancedb
.PHONY: test
test: ## Run tests.
pytest python/tests -vv --durations=10 -m "not slow"
pytest python/tests -vv --durations=10 -m "not slow and not s3_test"
.PHONY: clean
clean:
rm -rf data

View File

@@ -4,11 +4,12 @@ name = "lancedb"
dynamic = ["version"]
dependencies = [
"deprecation",
"pylance==0.23.0b5",
"tqdm>=4.27.0",
"pyarrow>=14",
"pydantic>=1.10",
"packaging",
"overrides>=0.7",
"pylance>=0.23.2",
]
description = "lancedb"
authors = [{ name = "LanceDB Devs", email = "dev@lancedb.com" }]
@@ -55,7 +56,12 @@ tests = [
"tantivy",
"pyarrow-stubs",
]
dev = ["ruff", "pre-commit", "pyright", 'typing-extensions>=4.0.0; python_version < "3.11"']
dev = [
"ruff",
"pre-commit",
"pyright",
'typing-extensions>=4.0.0; python_version < "3.11"',
]
docs = ["mkdocs", "mkdocs-jupyter", "mkdocs-material", "mkdocstrings[python]"]
clip = ["torch", "pillow", "open-clip"]
embeddings = [
@@ -86,7 +92,7 @@ requires = ["maturin>=1.4"]
build-backend = "maturin"
[tool.ruff.lint]
select = ["F", "E", "W", "G", "TCH", "PERF"]
select = ["F", "E", "W", "G", "PERF"]
[tool.pytest.ini_options]
addopts = "--strict-markers --ignore-glob=lancedb/embeddings/*.py"
@@ -97,5 +103,28 @@ markers = [
]
[tool.pyright]
include = ["python/lancedb/table.py"]
include = [
"python/lancedb/index.py",
"python/lancedb/rerankers/util.py",
"python/lancedb/rerankers/__init__.py",
"python/lancedb/rerankers/voyageai.py",
"python/lancedb/rerankers/jinaai.py",
"python/lancedb/rerankers/openai.py",
"python/lancedb/rerankers/cross_encoder.py",
"python/lancedb/rerankers/colbert.py",
"python/lancedb/rerankers/answerdotai.py",
"python/lancedb/rerankers/cohere.py",
"python/lancedb/arrow.py",
"python/lancedb/__init__.py",
"python/lancedb/types.py",
"python/lancedb/integrations/__init__.py",
"python/lancedb/exceptions.py",
"python/lancedb/background_loop.py",
"python/lancedb/schema.py",
"python/lancedb/remote/__init__.py",
"python/lancedb/remote/errors.py",
"python/lancedb/embeddings/__init__.py",
"python/lancedb/_lancedb.pyi",
]
exclude = ["python/tests/"]
pythonVersion = "3.12"

View File

@@ -14,6 +14,7 @@ from ._lancedb import connect as lancedb_connect
from .common import URI, sanitize_uri
from .db import AsyncConnection, DBConnection, LanceDBConnection
from .remote import ClientConfig
from .remote.db import RemoteDBConnection
from .schema import vector
from .table import AsyncTable
@@ -86,8 +87,6 @@ def connect(
conn : DBConnection
A connection to a LanceDB database.
"""
from .remote.db import RemoteDBConnection
if isinstance(uri, str) and uri.startswith("db://"):
if api_key is None:
api_key = os.environ.get("LANCEDB_API_KEY")

View File

@@ -3,6 +3,7 @@ from typing import Dict, List, Optional, Tuple, Any, Union, Literal
import pyarrow as pa
from .index import BTree, IvfFlat, IvfPq, Bitmap, LabelList, HnswPq, HnswSq, FTS
from .remote import ClientConfig
class Connection(object):
uri: str
@@ -15,8 +16,6 @@ class Connection(object):
mode: str,
data: pa.RecordBatchReader,
storage_options: Optional[Dict[str, str]] = None,
data_storage_version: Optional[str] = None,
enable_v2_manifest_paths: Optional[bool] = None,
) -> Table: ...
async def create_empty_table(
self,
@@ -24,8 +23,6 @@ class Connection(object):
mode: str,
schema: pa.Schema,
storage_options: Optional[Dict[str, str]] = None,
data_storage_version: Optional[str] = None,
enable_v2_manifest_paths: Optional[bool] = None,
) -> Table: ...
async def rename_table(self, old_name: str, new_name: str) -> None: ...
async def drop_table(self, name: str) -> None: ...
@@ -75,11 +72,15 @@ async def connect(
region: Optional[str],
host_override: Optional[str],
read_consistency_interval: Optional[float],
client_config: Optional[Union[ClientConfig, Dict[str, Any]]],
storage_options: Optional[Dict[str, str]],
) -> Connection: ...
class RecordBatchStream:
@property
def schema(self) -> pa.Schema: ...
async def next(self) -> Optional[pa.RecordBatch]: ...
def __aiter__(self) -> "RecordBatchStream": ...
async def __anext__(self) -> pa.RecordBatch: ...
class Query:
def where(self, filter: str): ...
@@ -146,6 +147,10 @@ class CompactionStats:
files_removed: int
files_added: int
class CleanupStats:
bytes_removed: int
old_versions: int
class RemovalStats:
bytes_removed: int
old_versions_removed: int
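The stub above makes `schema` a property (not a method) and adds `__aiter__`/`__anext__`. A minimal sketch of consuming such a stream; the `collect_batches` helper is hypothetical:

``` python
import pyarrow as pa

async def collect_batches(stream) -> pa.Table:
    # `schema` is a property in the stub above, so no parentheses
    schema = stream.schema
    # async iteration is now supported via __aiter__/__anext__
    batches = [batch async for batch in stream]
    return pa.Table.from_batches(batches, schema=schema)
```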

View File

@@ -1,7 +1,7 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
from typing import List, Optional, Union
from typing import List, Optional, Tuple, Union
import pyarrow as pa
@@ -66,3 +66,17 @@ class AsyncRecordBatchReader:
batches = table.to_batches(max_chunksize=max_batch_length)
for batch in batches:
yield batch
def peek_reader(
reader: pa.RecordBatchReader,
) -> Tuple[pa.RecordBatch, pa.RecordBatchReader]:
if not isinstance(reader, pa.RecordBatchReader):
raise TypeError("reader must be a RecordBatchReader")
batch = reader.read_next_batch()
def all_batches():
yield batch
yield from reader
return batch, pa.RecordBatchReader.from_batches(batch.schema, all_batches())
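A usage sketch of the new `peek_reader` helper: inspect the first batch without consuming the stream. The import path is assumed from this diff (the pyright report lists `python/lancedb/arrow.py`):

``` python
import pyarrow as pa
from lancedb.arrow import peek_reader  # module path assumed

schema = pa.schema([pa.field("x", pa.int64())])
batches = [pa.record_batch([pa.array([1, 2, 3])], names=["x"])]
reader = pa.RecordBatchReader.from_batches(schema, batches)

first, reader = peek_reader(reader)
print(first.num_rows)     # 3 -- the peeked batch
print(reader.read_all())  # the full stream, first batch included
```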

View File

@@ -14,6 +14,7 @@ from overrides import EnforceOverrides, override # type: ignore
from lancedb.common import data_to_reader, sanitize_uri, validate_schema
from lancedb.background_loop import LOOP
from . import __version__
from ._lancedb import connect as lancedb_connect # type: ignore
from .table import (
AsyncTable,
@@ -26,6 +27,8 @@ from .util import (
validate_table_name,
)
import deprecation
if TYPE_CHECKING:
import pyarrow as pa
from .pydantic import LanceModel
@@ -119,19 +122,11 @@ class DBConnection(EnforceOverrides):
See available options at
<https://lancedb.github.io/lancedb/guides/storage/>
data_storage_version: optional, str, default "stable"
The version of the data storage format to use. Newer versions are more
efficient but require newer versions of lance to read. The default is
"stable" which will use the legacy v2 version. See the user guide
for more details.
enable_v2_manifest_paths: bool, optional, default False
Use the new V2 manifest paths. These paths provide more efficient
opening of datasets with many versions on object stores. WARNING:
turning this on will make the dataset unreadable for older versions
of LanceDB (prior to 0.13.0). To migrate an existing dataset, instead
use the
[Table.migrate_manifest_paths_v2][lancedb.table.Table.migrate_v2_manifest_paths]
method.
Deprecated. Set `storage_options` when connecting to the database and set
`new_table_data_storage_version` in the options.
enable_v2_manifest_paths: optional, bool, default False
Deprecated. Set `storage_options` when connecting to the database and set
`new_table_enable_v2_manifest_paths` in the options.
Returns
-------
LanceTable
@@ -302,6 +297,12 @@ class DBConnection(EnforceOverrides):
"""
raise NotImplementedError
def drop_all_tables(self):
"""
Drop all tables from the database
"""
raise NotImplementedError
@property
def uri(self) -> str:
return self._uri
@@ -452,8 +453,6 @@ class LanceDBConnection(DBConnection):
fill_value=fill_value,
embedding_functions=embedding_functions,
storage_options=storage_options,
data_storage_version=data_storage_version,
enable_v2_manifest_paths=enable_v2_manifest_paths,
)
return tbl
@@ -496,9 +495,19 @@ class LanceDBConnection(DBConnection):
"""
LOOP.run(self._conn.drop_table(name, ignore_missing=ignore_missing))
@override
def drop_all_tables(self):
LOOP.run(self._conn.drop_all_tables())
@deprecation.deprecated(
deprecated_in="0.15.1",
removed_in="0.17",
current_version=__version__,
details="Use drop_all_tables() instead",
)
@override
def drop_database(self):
LOOP.run(self._conn.drop_database())
LOOP.run(self._conn.drop_all_tables())
class AsyncConnection(object):
@@ -595,9 +604,6 @@ class AsyncConnection(object):
storage_options: Optional[Dict[str, str]] = None,
*,
embedding_functions: Optional[List[EmbeddingFunctionConfig]] = None,
data_storage_version: Optional[str] = None,
use_legacy_format: Optional[bool] = None,
enable_v2_manifest_paths: Optional[bool] = None,
) -> AsyncTable:
"""Create an [AsyncTable][lancedb.table.AsyncTable] in the database.
@@ -640,23 +646,6 @@ class AsyncConnection(object):
connection will be inherited by the table, but can be overridden here.
See available options at
<https://lancedb.github.io/lancedb/guides/storage/>
data_storage_version: optional, str, default "stable"
The version of the data storage format to use. Newer versions are more
efficient but require newer versions of lance to read. The default is
"stable" which will use the legacy v2 version. See the user guide
for more details.
use_legacy_format: bool, optional, default False. (Deprecated)
If True, use the legacy format for the table. If False, use the new format.
This method is deprecated, use `data_storage_version` instead.
enable_v2_manifest_paths: bool, optional, default False
Use the new V2 manifest paths. These paths provide more efficient
opening of datasets with many versions on object stores. WARNING:
turning this on will make the dataset unreadable for older versions
of LanceDB (prior to 0.13.0). To migrate an existing dataset, instead
use the
[AsyncTable.migrate_manifest_paths_v2][lancedb.table.AsyncTable.migrate_manifest_paths_v2]
method.
Returns
-------
@@ -795,17 +784,12 @@ class AsyncConnection(object):
if mode == "create" and exist_ok:
mode = "exist_ok"
if not data_storage_version:
data_storage_version = "legacy" if use_legacy_format else "stable"
if data is None:
new_table = await self._inner.create_empty_table(
name,
mode,
schema,
storage_options=storage_options,
data_storage_version=data_storage_version,
enable_v2_manifest_paths=enable_v2_manifest_paths,
)
else:
data = data_to_reader(data, schema)
@@ -814,8 +798,6 @@ class AsyncConnection(object):
mode,
data,
storage_options=storage_options,
data_storage_version=data_storage_version,
enable_v2_manifest_paths=enable_v2_manifest_paths,
)
return AsyncTable(new_table)
@@ -885,9 +867,19 @@ class AsyncConnection(object):
if f"Table '{name}' was not found" not in str(e):
raise e
async def drop_all_tables(self):
"""Drop all tables from the database."""
await self._inner.drop_all_tables()
@deprecation.deprecated(
deprecated_in="0.15.1",
removed_in="0.17",
current_version=__version__,
details="Use drop_all_tables() instead",
)
async def drop_database(self):
"""
Drop database
This is the same thing as dropping all the tables
"""
await self._inner.drop_db()
await self._inner.drop_all_tables()
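A quick sketch of the replacement call on both APIs (paths are placeholders):

``` python
import lancedb

db = lancedb.connect("data/sample-lancedb")
db.drop_all_tables()              # replaces the deprecated drop_database()
assert db.table_names() == []

async def drop_everything():
    adb = await lancedb.connect_async("data/sample-lancedb")
    await adb.drop_all_tables()   # async variant added in this diff
```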

View File

@@ -2,8 +2,10 @@
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
from abc import ABC, abstractmethod
import copy
from typing import List, Union
from lancedb.util import add_note
import numpy as np
import pyarrow as pa
from pydantic import BaseModel, Field, PrivateAttr
@@ -28,13 +30,67 @@ class EmbeddingFunction(BaseModel, ABC):
7 # Setting 0 disables retries. Maybe this should not be enabled by default,
)
_ndims: int = PrivateAttr()
_original_args: dict = PrivateAttr()
@classmethod
def create(cls, **kwargs):
"""
Create an instance of the embedding function
"""
return cls(**kwargs)
resolved_kwargs = cls.__resolveVariables(kwargs)
instance = cls(**resolved_kwargs)
instance._original_args = kwargs
return instance
@classmethod
def __resolveVariables(cls, args: dict) -> dict:
"""
Resolve variables in the args
"""
from .registry import EmbeddingFunctionRegistry
new_args = copy.deepcopy(args)
registry = EmbeddingFunctionRegistry.get_instance()
sensitive_keys = cls.sensitive_keys()
for k, v in new_args.items():
if isinstance(v, str) and not v.startswith("$var:") and k in sensitive_keys:
exc = ValueError(
f"Sensitive key '{k}' cannot be set to a hardcoded value"
)
add_note(exc, "Help: Use $var: to set sensitive keys to variables")
raise exc
if isinstance(v, str) and v.startswith("$var:"):
parts = v[5:].split(":", maxsplit=1)
if len(parts) == 1:
try:
new_args[k] = registry.get_var(parts[0])
except KeyError:
exc = ValueError(
"Variable '{}' not found in registry".format(parts[0])
)
add_note(
exc,
"Help: Variables are reset in new Python sessions. "
"Use `registry.set_var` to set variables.",
)
raise exc
else:
name, default = parts
try:
new_args[k] = registry.get_var(name)
except KeyError:
new_args[k] = default
return new_args
@staticmethod
def sensitive_keys() -> List[str]:
"""
Return a list of keys that are sensitive and should not be allowed
to be set to hardcoded values in the config. For example, API keys.
"""
return []
@abstractmethod
def compute_query_embeddings(self, *args, **kwargs) -> list[Union[np.array, None]]:
@@ -103,20 +159,14 @@ class EmbeddingFunction(BaseModel, ABC):
return texts
def safe_model_dump(self):
from ..pydantic import PYDANTIC_VERSION
if PYDANTIC_VERSION.major < 2:
return {k: v for k, v in self.__dict__.items() if not k.startswith("_")}
return self.model_dump(
exclude={
field_name
for field_name in self.model_fields
if field_name.startswith("_")
}
)
if not hasattr(self, "_original_args"):
raise ValueError(
"EmbeddingFunction was not created with EmbeddingFunction.create()"
)
return self._original_args
@abstractmethod
def ndims(self):
def ndims(self) -> int:
"""
Return the dimensions of the vector column
"""

View File

@@ -57,6 +57,10 @@ class JinaEmbeddings(EmbeddingFunction):
# TODO: fix hardcoding
return 768
@staticmethod
def sensitive_keys() -> List[str]:
return ["api_key"]
def sanitize_input(
self, inputs: Union[TEXT, IMAGES]
) -> Union[List[Any], np.ndarray]:

View File

@@ -54,6 +54,10 @@ class OpenAIEmbeddings(TextEmbeddingFunction):
def ndims(self):
return self._ndims
@staticmethod
def sensitive_keys():
return ["api_key"]
@staticmethod
def model_names():
return [

View File

@@ -41,6 +41,7 @@ class EmbeddingFunctionRegistry:
def __init__(self):
self._functions = {}
self._variables = {}
def register(self, alias: str = None):
"""
@@ -156,6 +157,28 @@ class EmbeddingFunctionRegistry:
metadata = json.dumps(json_data, indent=2).encode("utf-8")
return {"embedding_functions": metadata}
def set_var(self, name: str, value: str) -> None:
"""
Set a variable. These can be accessed in embedding configuration using
the syntax `$var:variable_name`. If they are not set, an error will be
thrown letting you know which variable is missing. If you want to supply
a default value, you can add an additional part in the configuration
like so: `$var:variable_name:default_value`. Default values can be
used for runtime configurations that are not sensitive, such as
whether to use a GPU for inference.
The name must not contain a colon. Default values can contain colons.
"""
if ":" in name:
raise ValueError("Variable names cannot contain colons")
self._variables[name] = value
def get_var(self, name: str) -> str:
"""
Get a variable.
"""
return self._variables[name]
# Global instance
__REGISTRY__ = EmbeddingFunctionRegistry()
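Putting `set_var`/`get_var` together with the `$var:` syntax, a sketch based on the tests later in this diff (the `openai` and `huggingface` aliases and their parameters come from those tests):

``` python
from lancedb.embeddings.registry import get_registry

registry = get_registry()

# Sensitive keys must be routed through a variable:
registry.set_var("api_key", "sk-...")
func = registry.get("openai").create(api_key="$var:api_key")

# Non-sensitive settings can carry a default after a second colon:
func2 = registry.get("huggingface").create(device="$var:device:cpu")
```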

View File

@@ -40,6 +40,10 @@ class WatsonxEmbeddings(TextEmbeddingFunction):
url: Optional[str] = None
params: Optional[Dict] = None
@staticmethod
def sensitive_keys():
return ["api_key"]
@staticmethod
def model_names():
return [

View File

@@ -199,18 +199,29 @@ else:
]
def _pydantic_type_to_arrow_type(tp: Any, field: FieldInfo) -> pa.DataType:
if inspect.isclass(tp):
if issubclass(tp, pydantic.BaseModel):
# Struct
fields = _pydantic_model_to_fields(tp)
return pa.struct(fields)
if issubclass(tp, FixedSizeListMixin):
return pa.list_(tp.value_arrow_type(), tp.dim())
return _py_type_to_arrow_type(tp, field)
def _pydantic_to_arrow_type(field: FieldInfo) -> pa.DataType:
"""Convert a Pydantic FieldInfo to Arrow DataType"""
if isinstance(field.annotation, (_GenericAlias, GenericAlias)):
origin = field.annotation.__origin__
args = field.annotation.__args__
if origin is list:
child = args[0]
return pa.list_(_py_type_to_arrow_type(child, field))
elif origin == Union:
if len(args) == 2 and args[1] is type(None):
return _py_type_to_arrow_type(args[0], field)
return _pydantic_type_to_arrow_type(args[0], field)
elif sys.version_info >= (3, 10) and isinstance(field.annotation, types.UnionType):
args = field.annotation.__args__
if len(args) == 2:
@@ -218,14 +229,7 @@ def _pydantic_to_arrow_type(field: FieldInfo) -> pa.DataType:
if typ is type(None):
continue
return _py_type_to_arrow_type(typ, field)
elif inspect.isclass(field.annotation):
if issubclass(field.annotation, pydantic.BaseModel):
# Struct
fields = _pydantic_model_to_fields(field.annotation)
return pa.struct(fields)
elif issubclass(field.annotation, FixedSizeListMixin):
return pa.list_(field.annotation.value_arrow_type(), field.annotation.dim())
return _py_type_to_arrow_type(field.annotation, field)
return _pydantic_type_to_arrow_type(field.annotation, field)
def is_nullable(field: FieldInfo) -> bool:
@@ -255,7 +259,8 @@ def _pydantic_to_field(name: str, field: FieldInfo) -> pa.Field:
def pydantic_to_schema(model: Type[pydantic.BaseModel]) -> pa.Schema:
"""Convert a Pydantic model to a PyArrow Schema.
"""Convert a [Pydantic Model][pydantic.BaseModel] to a
[PyArrow Schema][pyarrow.Schema].
Parameters
----------
@@ -265,24 +270,25 @@ def pydantic_to_schema(model: Type[pydantic.BaseModel]) -> pa.Schema:
Returns
-------
pyarrow.Schema
The Arrow Schema
Examples
--------
>>> from typing import List, Optional
>>> import pydantic
>>> from lancedb.pydantic import pydantic_to_schema
>>> from lancedb.pydantic import pydantic_to_schema, Vector
>>> class FooModel(pydantic.BaseModel):
... id: int
... s: str
... vec: List[float]
... vec: Vector(1536) # fixed_size_list<item: float32>[1536]
... li: List[int]
...
>>> schema = pydantic_to_schema(FooModel)
>>> assert schema == pa.schema([
... pa.field("id", pa.int64(), False),
... pa.field("s", pa.utf8(), False),
... pa.field("vec", pa.list_(pa.float64()), False),
... pa.field("vec", pa.list_(pa.float32(), 1536)),
... pa.field("li", pa.list_(pa.int64()), False),
... ])
"""
@@ -304,7 +310,7 @@ class LanceModel(pydantic.BaseModel):
... vector: Vector(2)
...
>>> db = lancedb.connect("./example")
>>> table = db.create_table("test", schema=TestModel.to_arrow_schema())
>>> table = db.create_table("test", schema=TestModel)
>>> table.add([
... TestModel(name="test", vector=[1.0, 2.0])
... ])

View File

@@ -110,7 +110,7 @@ class Query(pydantic.BaseModel):
full_text_query: Optional[Union[str, dict]] = None
# top k results to return
k: int
k: Optional[int] = None
# # metrics
metric: str = "L2"
@@ -257,7 +257,7 @@ class LanceQueryBuilder(ABC):
def __init__(self, table: "Table"):
self._table = table
self._limit = 10
self._limit = None
self._offset = 0
self._columns = None
self._where = None
@@ -370,8 +370,7 @@ class LanceQueryBuilder(ABC):
The maximum number of results to return.
The default query limit is 10 results.
For ANN/KNN queries, you must specify a limit.
Entering 0, a negative number, or None will reset
the limit to the default value of 10.
For plain searches, all records are returned if the limit is not set.
*WARNING* if you have a large dataset, setting
the limit to a large number, e.g. the table size,
can potentially result in reading a
@@ -595,6 +594,8 @@ class LanceVectorQueryBuilder(LanceQueryBuilder):
fast_search: bool = False,
):
super().__init__(table)
if self._limit is None:
self._limit = 10
self._query = query
self._distance_type = "L2"
self._nprobes = 20
@@ -888,6 +889,8 @@ class LanceFtsQueryBuilder(LanceQueryBuilder):
fts_columns: Union[str, List[str]] = [],
):
super().__init__(table)
if self._limit is None:
self._limit = 10
self._query = query
self._phrase_query = False
self.ordering_field_name = ordering_field_name
@@ -1055,7 +1058,7 @@ class LanceEmptyQueryBuilder(LanceQueryBuilder):
query = Query(
columns=self._columns,
filter=self._where,
k=self._limit or 10,
k=self._limit,
with_row_id=self._with_row_id,
vector=[],
# not actually respected in remote query
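In short, only the vector and FTS builders still default to 10; a plain scan now returns everything unless a limit is set. A sketch, assuming a table with a 2-d vector column and an FTS index (names are placeholders):

``` python
import lancedb

db = lancedb.connect("data/sample-lancedb")
tbl = db.open_table("my_table")

tbl.search().to_list()            # plain scan: all rows, no implicit limit
tbl.search([1.0, 2.0]).to_list()  # vector query: still defaults to 10
tbl.search("puppy").to_list()     # FTS query: still defaults to 10
```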

View File

@@ -109,6 +109,7 @@ class ClientConfig:
user_agent: str = f"LanceDB-Python-Client/{__version__}"
retry_config: RetryConfig = field(default_factory=RetryConfig)
timeout_config: Optional[TimeoutConfig] = field(default_factory=TimeoutConfig)
extra_headers: Optional[dict] = None
def __post_init__(self):
if isinstance(self.retry_config, dict):
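A sketch of the new field, following the `test_pass_through_headers` test later in this diff (header name and value are placeholders):

``` python
import lancedb

db = lancedb.connect(
    "db://dev",
    api_key="fake",
    client_config={"extra_headers": {"x-my-header": "value"}},
)
```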

View File

@@ -9,7 +9,8 @@ from typing import Any, Dict, Iterable, List, Optional, Union
from urllib.parse import urlparse
import warnings
from lancedb import connect_async
# Remove this import to fix circular dependency
# from lancedb import connect_async
from lancedb.remote import ClientConfig
import pyarrow as pa
from overrides import override
@@ -78,6 +79,9 @@ class RemoteDBConnection(DBConnection):
self.client_config = client_config
# Import connect_async here to avoid circular import
from lancedb import connect_async
self._conn = LOOP.run(
connect_async(
db_url,

View File

@@ -526,6 +526,9 @@ class RemoteTable(Table):
def drop_columns(self, columns: Iterable[str]):
return LOOP.run(self._table.drop_columns(columns))
def drop_index(self, index_name: str):
return LOOP.run(self._table.drop_index(index_name))
def uses_v2_manifest_paths(self) -> bool:
raise NotImplementedError(
"uses_v2_manifest_paths() is not supported on the LanceDB Cloud"

File diff suppressed because it is too large

View File

@@ -0,0 +1,28 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
from typing import Literal
# Query type literals
QueryType = Literal["vector", "fts", "hybrid", "auto"]
# Distance type literals
DistanceType = Literal["l2", "cosine", "dot"]
DistanceTypeWithHamming = Literal["l2", "cosine", "dot", "hamming"]
# Vector handling literals
OnBadVectorsType = Literal["error", "drop", "fill", "null"]
# Mode literals
AddMode = Literal["append", "overwrite"]
CreateMode = Literal["create", "overwrite"]
# Index type literals
VectorIndexType = Literal["IVF_FLAT", "IVF_PQ", "IVF_HNSW_SQ", "IVF_HNSW_PQ"]
ScalarIndexType = Literal["BTREE", "BITMAP", "LABEL_LIST"]
IndexType = Literal[
"IVF_PQ", "IVF_HNSW_PQ", "IVF_HNSW_SQ", "FTS", "BTREE", "BITMAP", "LABEL_LIST"
]
# Tokenizer literals
BaseTokenizerType = Literal["simple", "raw", "whitespace"]
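A sketch of how these aliases tighten signatures under pyright (the function is illustrative):

``` python
from lancedb.types import DistanceType, QueryType

def run_query(query_type: QueryType = "auto", distance: DistanceType = "l2") -> None:
    ...

run_query(query_type="hybrid")  # OK
run_query(query_type="knn")     # flagged by pyright: not a valid QueryType
```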

View File

@@ -75,6 +75,6 @@ async def test_binary_vector_async():
query = np.random.randint(0, 2, size=256)
packed_query = np.packbits(query)
await tbl.query().nearest_to(packed_query).distance_type("hamming").to_arrow()
await (await tbl.search(packed_query)).distance_type("hamming").to_arrow()
# --8<-- [end:async_binary_vector]
await db.drop_table("my_binary_vectors")

View File

@@ -53,13 +53,13 @@ async def test_binary_vector_async():
query = np.random.random(256)
# Search for the vectors within the range of [0.1, 0.5)
await tbl.query().nearest_to(query).distance_range(0.1, 0.5).to_arrow()
await (await tbl.search(query)).distance_range(0.1, 0.5).to_arrow()
# Search for the vectors with the distance less than 0.5
await tbl.query().nearest_to(query).distance_range(upper_bound=0.5).to_arrow()
await (await tbl.search(query)).distance_range(upper_bound=0.5).to_arrow()
# Search for the vectors with the distance greater or equal to 0.1
await tbl.query().nearest_to(query).distance_range(lower_bound=0.1).to_arrow()
await (await tbl.search(query)).distance_range(lower_bound=0.1).to_arrow()
# --8<-- [end:async_distance_range]
await db.drop_table("my_table")

View File

@@ -28,3 +28,49 @@ def test_embeddings_openai():
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
# --8<-- [end:openai_embeddings]
@pytest.mark.slow
@pytest.mark.asyncio
async def test_embeddings_openai_async():
uri = "memory://"
# --8<-- [start:async_openai_embeddings]
db = await lancedb.connect_async(uri)
func = get_registry().get("openai").create(name="text-embedding-ada-002")
class Words(LanceModel):
text: str = func.SourceField()
vector: Vector(func.ndims()) = func.VectorField()
table = await db.create_table("words", schema=Words, mode="overwrite")
await table.add([{"text": "hello world"}, {"text": "goodbye world"}])
query = "greetings"
actual = (await (await table.search(query)).limit(1).to_pydantic(Words))[0]
print(actual.text)
# --8<-- [end:async_openai_embeddings]
def test_embeddings_secret():
# --8<-- [start:register_secret]
registry = get_registry()
registry.set_var("api_key", "sk-...")
func = registry.get("openai").create(api_key="$var:api_key")
# --8<-- [end:register_secret]
try:
import torch
except ImportError:
pytest.skip("torch not installed")
# --8<-- [start:register_device]
import torch
registry = get_registry()
if torch.cuda.is_available():
registry.set_var("device", "cuda")
func = registry.get("huggingface").create(device="$var:device:cpu")
# --8<-- [end:register_device]
assert func.device == ("cuda" if torch.cuda.is_available() else "cpu")

View File

@@ -72,8 +72,7 @@ async def test_ann_index_async():
# --8<-- [end:create_ann_index_async]
# --8<-- [start:vector_search_async]
await (
async_tbl.query()
.nearest_to(np.random.random((32)))
(await async_tbl.search(np.random.random((32))))
.limit(2)
.nprobes(20)
.refine_factor(10)
@@ -82,18 +81,14 @@ async def test_ann_index_async():
# --8<-- [end:vector_search_async]
# --8<-- [start:vector_search_async_with_filter]
await (
async_tbl.query()
.nearest_to(np.random.random((32)))
(await async_tbl.search(np.random.random((32))))
.where("item != 'item 1141'")
.to_pandas()
)
# --8<-- [end:vector_search_async_with_filter]
# --8<-- [start:vector_search_async_with_select]
await (
async_tbl.query()
.nearest_to(np.random.random((32)))
.select(["vector"])
.to_pandas()
(await async_tbl.search(np.random.random((32)))).select(["vector"]).to_pandas()
)
# --8<-- [end:vector_search_async_with_select]
@@ -164,7 +159,7 @@ async def test_scalar_index_async():
{"book_id": 3, "vector": [5.0, 6]},
]
async_tbl = await async_db.create_table("book_with_embeddings_async", data)
(await async_tbl.query().where("book_id != 3").nearest_to([1, 2]).to_pandas())
(await (await async_tbl.search([1, 2])).where("book_id != 3").to_pandas())
# --8<-- [end:vector_search_with_scalar_index_async]
# --8<-- [start:update_scalar_index_async]
await async_tbl.add([{"vector": [7, 8], "book_id": 4}])

View File

@@ -0,0 +1,36 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
# --8<-- [start:imports]
import lancedb
from lancedb.pydantic import Vector, LanceModel
# --8<-- [end:imports]
def test_pydantic_model(tmp_path):
# --8<-- [start:base_model]
class PersonModel(LanceModel):
name: str
age: int
vector: Vector(2)
# --8<-- [end:base_model]
# --8<-- [start:set_url]
url = "./example"
# --8<-- [end:set_url]
url = tmp_path
# --8<-- [start:base_example]
db = lancedb.connect(url)
table = db.create_table("person", schema=PersonModel)
table.add(
[
PersonModel(name="bob", age=1, vector=[1.0, 2.0]),
PersonModel(name="alice", age=2, vector=[3.0, 4.0]),
]
)
assert table.count_rows() == 2
person = table.search([0.0, 0.0]).limit(1).to_pydantic(PersonModel)
assert person[0].name == "bob"
# --8<-- [end:base_example]

View File

@@ -126,19 +126,17 @@ async def test_pandas_and_pyarrow_async():
query_vector = [100, 100]
# Pandas DataFrame
df = await async_tbl.query().nearest_to(query_vector).limit(1).to_pandas()
df = await (await async_tbl.search(query_vector)).limit(1).to_pandas()
print(df)
# --8<-- [end:vector_search_async]
# --8<-- [start:vector_search_with_filter_async]
# Apply the filter via LanceDB
results = (
await async_tbl.query().nearest_to([100, 100]).where("price < 15").to_pandas()
)
results = await (await async_tbl.search([100, 100])).where("price < 15").to_pandas()
assert len(results) == 1
assert results["item"].iloc[0] == "foo"
# Apply the filter via Pandas
df = results = await async_tbl.query().nearest_to([100, 100]).to_pandas()
df = results = await (await async_tbl.search([100, 100])).to_pandas()
results = df[df.price < 15]
assert len(results) == 1
assert results["item"].iloc[0] == "foo"
@@ -188,3 +186,26 @@ def test_polars():
# --8<-- [start:print_table_lazyform]
print(ldf.first().collect())
# --8<-- [end:print_table_lazyform]
@pytest.mark.asyncio
async def test_polars_async():
uri = "data/sample-lancedb"
db = await lancedb.connect_async(uri)
# --8<-- [start:create_table_polars_async]
data = pl.DataFrame(
{
"vector": [[3.1, 4.1], [5.9, 26.5]],
"item": ["foo", "bar"],
"price": [10.0, 20.0],
}
)
table = await db.create_table("pl_table_async", data=data)
# --8<-- [end:create_table_polars_async]
# --8<-- [start:vector_search_polars_async]
query = [3.0, 4.0]
result = await (await table.search(query)).limit(1).to_polars()
print(result)
print(type(result))
# --8<-- [end:vector_search_polars_async]

View File

@@ -117,12 +117,11 @@ async def test_vector_search_async():
for i, row in enumerate(np.random.random((10_000, 1536)).astype("float32"))
]
async_tbl = await async_db.create_table("vector_search_async", data=data)
(await async_tbl.query().nearest_to(np.random.random((1536))).limit(10).to_list())
(await (await async_tbl.search(np.random.random((1536)))).limit(10).to_list())
# --8<-- [end:exhaustive_search_async]
# --8<-- [start:exhaustive_search_async_cosine]
(
await async_tbl.query()
.nearest_to(np.random.random((1536)))
await (await async_tbl.search(np.random.random((1536))))
.distance_type("cosine")
.limit(10)
.to_list()
@@ -145,13 +144,13 @@ async def test_vector_search_async():
async_tbl = await async_db.create_table("documents_async", data=data)
# --8<-- [end:create_table_async_with_nested_schema]
# --8<-- [start:search_result_async_as_pyarrow]
await async_tbl.query().nearest_to(np.random.randn(1536)).to_arrow()
await (await async_tbl.search(np.random.randn(1536))).to_arrow()
# --8<-- [end:search_result_async_as_pyarrow]
# --8<-- [start:search_result_async_as_pandas]
await async_tbl.query().nearest_to(np.random.randn(1536)).to_pandas()
await (await async_tbl.search(np.random.randn(1536))).to_pandas()
# --8<-- [end:search_result_async_as_pandas]
# --8<-- [start:search_result_async_as_list]
await async_tbl.query().nearest_to(np.random.randn(1536)).to_list()
await (await async_tbl.search(np.random.randn(1536))).to_list()
# --8<-- [end:search_result_async_as_list]
@@ -219,9 +218,7 @@ async def test_fts_native_async():
# async API uses our native FTS algorithm
await async_tbl.create_index("text", config=FTS())
await (
async_tbl.query().nearest_to_text("puppy").select(["text"]).limit(10).to_list()
)
await (await async_tbl.search("puppy")).select(["text"]).limit(10).to_list()
# [{'text': 'Frodo was a happy puppy', '_score': 0.6931471824645996}]
# ...
# --8<-- [end:basic_fts_async]
@@ -235,18 +232,11 @@ async def test_fts_native_async():
)
# --8<-- [end:fts_config_folding_async]
# --8<-- [start:fts_prefiltering_async]
await (
async_tbl.query()
.nearest_to_text("puppy")
.limit(10)
.where("text='foo'")
.to_list()
)
await (await async_tbl.search("puppy")).limit(10).where("text='foo'").to_list()
# --8<-- [end:fts_prefiltering_async]
# --8<-- [start:fts_postfiltering_async]
await (
async_tbl.query()
.nearest_to_text("puppy")
(await async_tbl.search("puppy"))
.limit(10)
.where("text='foo'")
.postfilter()
@@ -347,14 +337,8 @@ async def test_hybrid_search_async():
# Create a fts index before the hybrid search
await async_tbl.create_index("text", config=FTS())
text_query = "flower moon"
vector_query = embeddings.compute_query_embeddings(text_query)[0]
# hybrid search with default re-ranker
await (
async_tbl.query()
.nearest_to(vector_query)
.nearest_to_text(text_query)
.to_pandas()
)
await (await async_tbl.search("flower moon", query_type="hybrid")).to_pandas()
# --8<-- [end:basic_hybrid_search_async]
# --8<-- [start:hybrid_search_pass_vector_text_async]
vector_query = [0.1, 0.2, 0.3, 0.4, 0.5]

View File

@@ -299,12 +299,12 @@ def test_create_exist_ok(tmp_db: lancedb.DBConnection):
@pytest.mark.asyncio
async def test_connect(tmp_path):
db = await lancedb.connect_async(tmp_path)
assert str(db) == f"NativeDatabase(uri={tmp_path}, read_consistency_interval=None)"
assert str(db) == f"ListingDatabase(uri={tmp_path}, read_consistency_interval=None)"
db = await lancedb.connect_async(
tmp_path, read_consistency_interval=timedelta(seconds=5)
)
assert str(db) == f"NativeDatabase(uri={tmp_path}, read_consistency_interval=5s)"
assert str(db) == f"ListingDatabase(uri={tmp_path}, read_consistency_interval=5s)"
@pytest.mark.asyncio
@@ -396,13 +396,16 @@ async def test_create_exist_ok_async(tmp_db_async: lancedb.AsyncConnection):
@pytest.mark.asyncio
async def test_create_table_v2_manifest_paths_async(tmp_path):
db = await lancedb.connect_async(tmp_path)
db_with_v2_paths = await lancedb.connect_async(
tmp_path, storage_options={"new_table_enable_v2_manifest_paths": "true"}
)
db_no_v2_paths = await lancedb.connect_async(
tmp_path, storage_options={"new_table_enable_v2_manifest_paths": "false"}
)
# Create table in v2 mode with v2 manifest paths enabled
tbl = await db.create_table(
tbl = await db_with_v2_paths.create_table(
"test_v2_manifest_paths",
data=[{"id": 0}],
use_legacy_format=False,
enable_v2_manifest_paths=True,
)
assert await tbl.uses_v2_manifest_paths()
manifests_dir = tmp_path / "test_v2_manifest_paths.lance" / "_versions"
@@ -410,11 +413,9 @@ async def test_create_table_v2_manifest_paths_async(tmp_path):
assert re.match(r"\d{20}\.manifest", manifest)
# Start a table in V1 mode then migrate
tbl = await db.create_table(
tbl = await db_no_v2_paths.create_table(
"test_v2_migration",
data=[{"id": 0}],
use_legacy_format=False,
enable_v2_manifest_paths=False,
)
assert not await tbl.uses_v2_manifest_paths()
manifests_dir = tmp_path / "test_v2_migration.lance" / "_versions"
@@ -498,6 +499,10 @@ def test_delete_table(tmp_db: lancedb.DBConnection):
# if ignore_missing=True
tmp_db.drop_table("does_not_exist", ignore_missing=True)
tmp_db.drop_all_tables()
assert tmp_db.table_names() == []
@pytest.mark.asyncio
async def test_delete_table_async(tmp_db: lancedb.DBConnection):
@@ -583,7 +588,7 @@ def test_empty_or_nonexistent_table(mem_db: lancedb.DBConnection):
@pytest.mark.asyncio
async def test_create_in_v2_mode(mem_db_async: lancedb.AsyncConnection):
async def test_create_in_v2_mode():
def make_data():
for i in range(10):
yield pa.record_batch([pa.array([x for x in range(1024)])], names=["x"])
@@ -594,10 +599,13 @@ async def test_create_in_v2_mode(mem_db_async: lancedb.AsyncConnection):
schema = pa.schema([pa.field("x", pa.int64())])
# Create table in v1 mode
tbl = await mem_db_async.create_table(
"test", data=make_data(), schema=schema, data_storage_version="legacy"
v1_db = await lancedb.connect_async(
"memory://", storage_options={"new_table_data_storage_version": "legacy"}
)
tbl = await v1_db.create_table("test", data=make_data(), schema=schema)
async def is_in_v2_mode(tbl):
batches = (
await tbl.query().limit(10 * 1024).to_batches(max_batch_length=1024 * 10)
@@ -610,10 +618,12 @@ async def test_create_in_v2_mode(mem_db_async: lancedb.AsyncConnection):
assert not await is_in_v2_mode(tbl)
# Create table in v2 mode
tbl = await mem_db_async.create_table(
"test_v2", data=make_data(), schema=schema, use_legacy_format=False
v2_db = await lancedb.connect_async(
"memory://", storage_options={"new_table_data_storage_version": "stable"}
)
tbl = await v2_db.create_table("test_v2", data=make_data(), schema=schema)
assert await is_in_v2_mode(tbl)
# Add data (should remain in v2 mode)
@@ -622,20 +632,18 @@ async def test_create_in_v2_mode(mem_db_async: lancedb.AsyncConnection):
assert await is_in_v2_mode(tbl)
# Create empty table in v2 mode and add data
tbl = await mem_db_async.create_table(
"test_empty_v2", data=None, schema=schema, use_legacy_format=False
)
tbl = await v2_db.create_table("test_empty_v2", data=None, schema=schema)
await tbl.add(make_table())
assert await is_in_v2_mode(tbl)
# Create empty table uses v1 mode by default
tbl = await mem_db_async.create_table(
"test_empty_v2_default", data=None, schema=schema, data_storage_version="legacy"
)
# Db uses v2 mode by default
db = await lancedb.connect_async("memory://")
tbl = await db.create_table("test_empty_v2_default", data=None, schema=schema)
await tbl.add(make_table())
assert not await is_in_v2_mode(tbl)
assert await is_in_v2_mode(tbl)
def test_replace_index(mem_db: lancedb.DBConnection):

View File

@@ -1,7 +1,8 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
from typing import List, Union
import os
from typing import List, Optional, Union
from unittest.mock import MagicMock, patch
import lance
@@ -56,7 +57,7 @@ def test_embedding_function(tmp_path):
conf = EmbeddingFunctionConfig(
source_column="text",
vector_column="vector",
function=MockTextEmbeddingFunction(),
function=MockTextEmbeddingFunction.create(),
)
metadata = registry.get_table_metadata([conf])
table = table.replace_schema_metadata(metadata)
@@ -80,6 +81,57 @@ def test_embedding_function(tmp_path):
assert np.allclose(actual, expected)
def test_embedding_function_variables():
@register("variable-testing")
class VariableTestingFunction(TextEmbeddingFunction):
key1: str
secret_key: Optional[str] = None
@staticmethod
def sensitive_keys():
return ["secret_key"]
def ndims(self):
pass
def generate_embeddings(self, _texts):
pass
registry = EmbeddingFunctionRegistry.get_instance()
# Should error if variable is not set
with pytest.raises(ValueError, match="Variable 'test' not found"):
registry.get("variable-testing").create(
key1="$var:test",
)
# Should use default values if not set
func = registry.get("variable-testing").create(key1="$var:test:some_value")
assert func.key1 == "some_value"
# Should set a variable that the embedding function understands
registry.set_var("test", "some_value")
func = registry.get("variable-testing").create(key1="$var:test")
assert func.key1 == "some_value"
# Should reject secrets that aren't passed in as variables
with pytest.raises(
ValueError,
match="Sensitive key 'secret_key' cannot be set to a hardcoded value",
):
registry.get("variable-testing").create(
key1="whatever", secret_key="some_value"
)
# Should not serialize secrets.
registry.set_var("secret", "secret_value")
func = registry.get("variable-testing").create(
key1="whatever", secret_key="$var:secret"
)
assert func.secret_key == "secret_value"
assert func.safe_model_dump()["secret_key"] == "$var:secret"
def test_embedding_with_bad_results(tmp_path):
@register("null-embedding")
class NullEmbeddingFunction(TextEmbeddingFunction):
@@ -91,9 +143,11 @@ def test_embedding_with_bad_results(tmp_path):
) -> list[Union[np.array, None]]:
# Return None, which is bad if field is non-nullable
a = [
np.full(self.ndims(), np.nan)
if i % 2 == 0
else np.random.randn(self.ndims())
(
np.full(self.ndims(), np.nan)
if i % 2 == 0
else np.random.randn(self.ndims())
)
for i in range(len(texts))
]
return a
@@ -107,7 +161,7 @@ def test_embedding_with_bad_results(tmp_path):
vector: Vector(model.ndims()) = model.VectorField()
table = db.create_table("test", schema=Schema, mode="overwrite")
with pytest.raises(ValueError):
with pytest.raises(RuntimeError):
# Default on_bad_vectors is "error"
table.add([{"text": "hello world"}])
@@ -341,6 +395,7 @@ def test_add_optional_vector(tmp_path):
assert not (np.abs(tbl.to_pandas()["vector"][0]) < 1e-6).all()
@pytest.mark.slow
@pytest.mark.parametrize(
"embedding_type",
[
@@ -358,23 +413,23 @@ def test_embedding_function_safe_model_dump(embedding_type):
# Note: Some embedding types might require specific parameters
try:
model = registry.get(embedding_type).create()
model = registry.get(embedding_type).create(max_retries=1)
except Exception as e:
pytest.skip(f"Skipping {embedding_type} due to error: {str(e)}")
dumped_model = model.safe_model_dump()
assert all(
not k.startswith("_") for k in dumped_model.keys()
), f"{embedding_type}: Dumped model contains keys starting with underscore"
assert all(not k.startswith("_") for k in dumped_model.keys()), (
f"{embedding_type}: Dumped model contains keys starting with underscore"
)
assert (
"max_retries" in dumped_model
), f"{embedding_type}: Essential field 'max_retries' is missing from dumped model"
assert "max_retries" in dumped_model, (
f"{embedding_type}: Essential field 'max_retries' is missing from dumped model"
)
assert isinstance(
dumped_model, dict
), f"{embedding_type}: Dumped model is not a dictionary"
assert isinstance(dumped_model, dict), (
f"{embedding_type}: Dumped model is not a dictionary"
)
for key in model.__dict__:
if key.startswith("_"):
@@ -391,3 +446,33 @@ def test_retry(mock_sleep):
result = test_function()
assert mock_sleep.call_count == 9
assert result == "result"
@pytest.mark.skipif(
os.environ.get("OPENAI_API_KEY") is None, reason="OpenAI API key not set"
)
def test_openai_propagates_api_key(monkeypatch):
# Make sure that if we set it as a variable, the API key is propagated
api_key = os.environ["OPENAI_API_KEY"]
monkeypatch.delenv("OPENAI_API_KEY")
uri = "memory://"
registry = get_registry()
registry.set_var("open_api_key", api_key)
func = registry.get("openai").create(
name="text-embedding-ada-002",
max_retries=0,
api_key="$var:open_api_key",
)
class Words(LanceModel):
text: str = func.SourceField()
vector: Vector(func.ndims()) = func.VectorField()
db = lancedb.connect(uri)
table = db.create_table("words", schema=Words, mode="overwrite")
table.add([{"text": "hello world"}, {"text": "goodbye world"}])
query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
assert len(actual.text) > 0

View File

@@ -174,6 +174,10 @@ def test_search_fts(table, use_tantivy):
assert len(results) == 5
assert len(results[0]) == 3 # id, text, _score
# Default limit of 10
results = table.search("puppy").select(["id", "text"]).to_list()
assert len(results) == 10
@pytest.mark.asyncio
async def test_fts_select_async(async_table):

View File

@@ -129,6 +129,6 @@ def test_normalize_scores():
if invert:
expected = pc.subtract(1.0, expected)
assert pc.equal(
result, expected
), f"Expected {expected} but got {result} for invert={invert}"
assert pc.equal(result, expected), (
f"Expected {expected} but got {result} for invert={invert}"
)

View File

@@ -10,6 +10,7 @@ import pyarrow as pa
import pydantic
import pytest
from lancedb.pydantic import PYDANTIC_VERSION, LanceModel, Vector, pydantic_to_schema
from pydantic import BaseModel
from pydantic import Field
@@ -252,3 +253,104 @@ def test_lance_model():
t = TestModel()
assert t == TestModel(vec=[0.0] * 16, li=[1, 2, 3])
def test_optional_nested_model():
class WAMedia(BaseModel):
url: str
mimetype: str
filename: Optional[str]
error: Optional[str]
data: bytes
class WALocation(BaseModel):
description: Optional[str]
latitude: str
longitude: str
class ReplyToMessage(BaseModel):
id: str
participant: str
body: str
class Message(BaseModel):
id: str
timestamp: int
from_: str
fromMe: bool
to: str
body: str
hasMedia: Optional[bool]
media: WAMedia
mediaUrl: Optional[str]
ack: Optional[int]
ackName: Optional[str]
author: Optional[str]
location: Optional[WALocation]
vCards: Optional[List[str]]
replyTo: Optional[ReplyToMessage]
class AnyEvent(LanceModel):
id: str
session: str
metadata: Optional[str] = None
engine: str
event: str
class MessageEvent(AnyEvent):
payload: Message
schema = pydantic_to_schema(MessageEvent)
payload = schema.field("payload")
assert payload.type == pa.struct(
[
pa.field("id", pa.utf8(), False),
pa.field("timestamp", pa.int64(), False),
pa.field("from_", pa.utf8(), False),
pa.field("fromMe", pa.bool_(), False),
pa.field("to", pa.utf8(), False),
pa.field("body", pa.utf8(), False),
pa.field("hasMedia", pa.bool_(), True),
pa.field(
"media",
pa.struct(
[
pa.field("url", pa.utf8(), False),
pa.field("mimetype", pa.utf8(), False),
pa.field("filename", pa.utf8(), True),
pa.field("error", pa.utf8(), True),
pa.field("data", pa.binary(), False),
]
),
False,
),
pa.field("mediaUrl", pa.utf8(), True),
pa.field("ack", pa.int64(), True),
pa.field("ackName", pa.utf8(), True),
pa.field("author", pa.utf8(), True),
pa.field(
"location",
pa.struct(
[
pa.field("description", pa.utf8(), True),
pa.field("latitude", pa.utf8(), False),
pa.field("longitude", pa.utf8(), False),
]
),
True, # Optional
),
pa.field("vCards", pa.list_(pa.utf8()), True),
pa.field(
"replyTo",
pa.struct(
[
pa.field("id", pa.utf8(), False),
pa.field("participant", pa.utf8(), False),
pa.field("body", pa.utf8(), False),
]
),
True,
),
]
)

View File

@@ -1,25 +1,35 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
from typing import List, Union
import unittest.mock as mock
from datetime import timedelta
from pathlib import Path
import lancedb
from lancedb.index import IvfPq, FTS
from lancedb.rerankers.cross_encoder import CrossEncoderReranker
from lancedb.db import AsyncConnection
from lancedb.embeddings.base import TextEmbeddingFunction
from lancedb.embeddings.registry import get_registry, register
from lancedb.index import FTS, IvfPq
import lancedb.pydantic
import numpy as np
import pandas.testing as tm
import pyarrow as pa
import pyarrow.compute as pc
import pytest
import pytest_asyncio
from lancedb.pydantic import LanceModel, Vector
from lancedb.query import (
AsyncFTSQuery,
AsyncHybridQuery,
AsyncQueryBase,
AsyncVectorQuery,
LanceVectorQueryBuilder,
Query,
)
from lancedb.rerankers.cross_encoder import CrossEncoderReranker
from lancedb.table import AsyncTable, LanceTable
from utils import exception_output
@pytest.fixture(scope="module")
@@ -232,6 +242,71 @@ async def test_distance_range_async(table_async: AsyncTable):
assert res["_distance"].to_pylist() == [min_dist, max_dist]
@pytest.mark.asyncio
async def test_distance_range_with_new_rows_async():
conn = await lancedb.connect_async(
"memory://", read_consistency_interval=timedelta(seconds=0)
)
data = pa.table(
{
"vector": pa.FixedShapeTensorArray.from_numpy_ndarray(
np.random.rand(256, 2)
),
}
)
table = await conn.create_table("test", data)
await table.create_index("vector", config=IvfPq(num_partitions=1, num_sub_vectors=2))
q = [0, 0]
rs = await table.query().nearest_to(q).to_arrow()
dists = rs["_distance"].to_pylist()
min_dist = dists[0]
max_dist = dists[-1]
# append more rows so that the execution plan mixes ANN and flat KNN
new_data = pa.table(
{
"vector": pa.FixedShapeTensorArray.from_numpy_ndarray(np.random.rand(4, 2)),
}
)
await table.add(new_data)
res = (
await table.query()
.nearest_to(q)
.distance_range(upper_bound=min_dist)
.to_arrow()
)
assert len(res) == 0
res = (
await table.query()
.nearest_to(q)
.distance_range(lower_bound=max_dist)
.to_arrow()
)
for dist in res["_distance"].to_pylist():
assert dist >= max_dist
res = (
await table.query()
.nearest_to(q)
.distance_range(upper_bound=max_dist)
.to_arrow()
)
for dist in res["_distance"].to_pylist():
assert dist < max_dist
res = (
await table.query()
.nearest_to(q)
.distance_range(lower_bound=min_dist)
.to_arrow()
)
for dist in res["_distance"].to_pylist():
assert dist >= min_dist
@pytest.mark.parametrize(
"multivec_table", [pa.float16(), pa.float32(), pa.float64()], indirect=True
)
@@ -651,3 +726,100 @@ async def test_query_with_f16(tmp_path: Path):
tbl = await db.create_table("test", df)
results = await tbl.vector_search([np.float16(1), np.float16(2)]).to_pandas()
assert len(results) == 2
@pytest.mark.asyncio
async def test_query_search_auto(mem_db_async: AsyncConnection):
nrows = 1000
data = pa.table(
{
"text": [str(i) for i in range(nrows)],
}
)
@register("test2")
class TestEmbedding(TextEmbeddingFunction):
def ndims(self):
return 4
def generate_embeddings(
self, texts: Union[List[str], np.ndarray]
) -> List[np.array]:
embeddings = []
for text in texts:
vec = np.array([float(text) / 1000] * self.ndims())
embeddings.append(vec)
return embeddings
registry = get_registry()
func = registry.get("test2").create()
class TestModel(LanceModel):
text: str = func.SourceField()
vector: Vector(func.ndims()) = func.VectorField()
tbl = await mem_db_async.create_table("test", data, schema=TestModel)
funcs = await tbl.embedding_functions()
assert len(funcs) == 1
# No FTS or vector index
# Search for vector -> vector query
q = [0.1] * 4
query = await tbl.search(q)
assert isinstance(query, AsyncVectorQuery)
# Search for string -> vector query
query = await tbl.search("0.1")
assert isinstance(query, AsyncVectorQuery)
await tbl.create_index("text", config=FTS())
query = await tbl.search("0.1")
assert isinstance(query, AsyncHybridQuery)
data_with_vecs = await tbl.to_arrow()
data_with_vecs = data_with_vecs.replace_schema_metadata(None)
tbl2 = await mem_db_async.create_table("test2", data_with_vecs)
with pytest.raises(
Exception,
match=(
"Cannot perform full text search unless an INVERTED index has been created"
),
):
query = await (await tbl2.search("0.1")).to_arrow()
@pytest.mark.asyncio
async def test_query_search_specified(mem_db_async: AsyncConnection):
nrows, ndims = 1000, 16
data = pa.table(
{
"text": [str(i) for i in range(nrows)],
"vector": pa.FixedSizeListArray.from_arrays(
pc.random(nrows * ndims).cast(pa.float32()), ndims
),
}
)
table = await mem_db_async.create_table("test", data)
await table.create_index("text", config=FTS())
# Validate that specifying fts, vector or hybrid gets the right query.
q = [0.1] * ndims
query = await table.search(q, query_type="vector")
assert isinstance(query, AsyncVectorQuery)
query = await table.search("0.1", query_type="fts")
assert isinstance(query, AsyncFTSQuery)
with pytest.raises(ValueError, match="Unknown query type: 'foo'"):
await table.search("0.1", query_type="foo")
with pytest.raises(
ValueError, match="Column 'vector' has no registered embedding function"
) as e:
await table.search("0.1", query_type="vector")
assert "No embedding functions are registered for any columns" in exception_output(
e
)

View File

@@ -9,6 +9,7 @@ import json
import threading
from unittest.mock import MagicMock
import uuid
from packaging.version import Version
import lancedb
from lancedb.conftest import MockTextEmbeddingFunction
@@ -32,15 +33,16 @@ def make_mock_http_handler(handler):
@contextlib.contextmanager
def mock_lancedb_connection(handler):
with http.server.HTTPServer(
("localhost", 8080), make_mock_http_handler(handler)
("localhost", 0), make_mock_http_handler(handler)
) as server:
port = server.server_address[1]
handle = threading.Thread(target=server.serve_forever)
handle.start()
db = lancedb.connect(
"db://dev",
api_key="fake",
host_override="http://localhost:8080",
host_override=f"http://localhost:{port}",
client_config={
"retry_config": {"retries": 2},
"timeout_config": {
@@ -57,22 +59,24 @@ def mock_lancedb_connection(handler):
@contextlib.asynccontextmanager
async def mock_lancedb_connection_async(handler):
async def mock_lancedb_connection_async(handler, **client_config):
with http.server.HTTPServer(
("localhost", 8080), make_mock_http_handler(handler)
("localhost", 0), make_mock_http_handler(handler)
) as server:
port = server.server_address[1]
handle = threading.Thread(target=server.serve_forever)
handle.start()
db = await lancedb.connect_async(
"db://dev",
api_key="fake",
host_override="http://localhost:8080",
host_override=f"http://localhost:{port}",
client_config={
"retry_config": {"retries": 2},
"timeout_config": {
"connect_timeout": 1,
},
**client_config,
},
)
@@ -254,6 +258,9 @@ def test_table_create_indices():
)
)
request.wfile.write(payload.encode())
elif "/drop/" in request.path:
request.send_response(200)
request.end_headers()
else:
request.send_response(404)
request.end_headers()
@@ -265,14 +272,18 @@ def test_table_create_indices():
table.create_scalar_index("id")
table.create_fts_index("text")
table.create_scalar_index("vector")
table.drop_index("vector_idx")
table.drop_index("id_idx")
table.drop_index("text_idx")
@contextlib.contextmanager
def query_test_table(query_handler):
def query_test_table(query_handler, *, server_version=Version("0.1.0")):
def handler(request):
if request.path == "/v1/table/test/describe/":
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.send_header("phalanx-version", str(server_version))
request.end_headers()
request.wfile.write(b"{}")
elif request.path == "/v1/table/test/query/":
@@ -329,6 +340,7 @@ def test_query_sync_empty_query():
"filter": "true",
"vector": [],
"columns": ["id"],
"prefilter": False,
"version": None,
}
@@ -378,11 +390,25 @@ def test_query_sync_maximal():
)
def test_query_sync_multiple_vectors():
def handler(_body):
return pa.table({"id": [1]})
@pytest.mark.parametrize("server_version", [Version("0.1.0"), Version("0.2.0")])
def test_query_sync_batch_queries(server_version):
def handler(body):
# TODO: we will add the ability to get the server version,
# so that we can decide how to perform batch queries.
vectors = body["vector"]
if server_version >= Version(
"0.2.0"
): # we can handle batch queries in single request since 0.2.0
assert len(vectors) == 2
res = []
for i, vector in enumerate(vectors):
res.append({"id": 1, "query_index": i})
return pa.Table.from_pylist(res)
else:
assert len(vectors) == 3 # matching dim
return pa.table({"id": [1]})
with query_test_table(handler) as table:
with query_test_table(handler, server_version=server_version) as table:
results = table.search([[1, 2, 3], [4, 5, 6]]).limit(1).to_list()
assert len(results) == 2
results.sort(key=lambda x: x["query_index"])
@@ -397,6 +423,7 @@ def test_query_sync_fts():
"columns": [],
},
"k": 10,
"prefilter": True,
"vector": [],
"version": None,
}
@@ -414,6 +441,7 @@ def test_query_sync_fts():
},
"k": 42,
"vector": [],
"prefilter": True,
"with_row_id": True,
"version": None,
}
@@ -440,6 +468,7 @@ def test_query_sync_hybrid():
},
"k": 42,
"vector": [],
"prefilter": True,
"with_row_id": True,
"version": None,
}
@@ -522,3 +551,19 @@ def test_create_client():
with pytest.warns(DeprecationWarning):
lancedb.connect(**mandatory_args, request_thread_pool=10)
@pytest.mark.asyncio
async def test_pass_through_headers():
def handler(request):
assert request.headers["foo"] == "bar"
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.end_headers()
request.wfile.write(b'{"tables": []}')
async with mock_lancedb_connection_async(
handler, extra_headers={"foo": "bar"}
) as db:
table_names = await db.table_names()
assert table_names == []
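Since the async fixture forwards `**client_config` into the `client_config` dict, the test above is equivalent to a caller passing `extra_headers` directly; those headers then ride along on every request the client makes. A sketch under that assumption (port and header values illustrative):

```python
import asyncio
import lancedb

async def main(port: int) -> None:
    db = await lancedb.connect_async(
        "db://dev",
        api_key="fake",
        host_override=f"http://localhost:{port}",  # a running server
        client_config={"extra_headers": {"foo": "bar"}},
    )
    print(await db.table_names())

# asyncio.run(main(8080))  # assumes a server is listening on 8080
```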


@@ -32,8 +32,8 @@ pytest.importorskip("lancedb.fts")
def get_test_table(tmp_path, use_tantivy):
db = lancedb.connect(tmp_path)
# Create a LanceDB table schema with a vector and a text column
emb = EmbeddingFunctionRegistry.get_instance().get("test")()
meta_emb = EmbeddingFunctionRegistry.get_instance().get("test")()
emb = EmbeddingFunctionRegistry.get_instance().get("test").create()
meta_emb = EmbeddingFunctionRegistry.get_instance().get("test").create()
class MyTable(LanceModel):
text: str = emb.SourceField()
@@ -131,9 +131,9 @@ def _run_test_reranker(reranker, table, query, query_vector, schema):
"represents the relevance of the result to the query & should "
"be descending."
)
assert np.all(
np.diff(result.column("_relevance_score").to_numpy()) <= 0
), ascending_relevance_err
assert np.all(np.diff(result.column("_relevance_score").to_numpy()) <= 0), (
ascending_relevance_err
)
# Vector search setting
result = (
@@ -143,9 +143,9 @@ def _run_test_reranker(reranker, table, query, query_vector, schema):
.to_arrow()
)
assert len(result) == 30
assert np.all(
np.diff(result.column("_relevance_score").to_numpy()) <= 0
), ascending_relevance_err
assert np.all(np.diff(result.column("_relevance_score").to_numpy()) <= 0), (
ascending_relevance_err
)
result_explicit = (
table.search(query_vector, vector_column_name="vector")
.rerank(reranker=reranker, query_string=query)
@@ -168,9 +168,9 @@ def _run_test_reranker(reranker, table, query, query_vector, schema):
.to_arrow()
)
assert len(result) > 0
assert np.all(
np.diff(result.column("_relevance_score").to_numpy()) <= 0
), ascending_relevance_err
assert np.all(np.diff(result.column("_relevance_score").to_numpy()) <= 0), (
ascending_relevance_err
)
# empty FTS results
query = "abcxyz" * 100
@@ -185,9 +185,9 @@ def _run_test_reranker(reranker, table, query, query_vector, schema):
# should return _relevance_score column
assert "_relevance_score" in result.column_names
assert np.all(
np.diff(result.column("_relevance_score").to_numpy()) <= 0
), ascending_relevance_err
assert np.all(np.diff(result.column("_relevance_score").to_numpy()) <= 0), (
ascending_relevance_err
)
# Multi-vector search setting
rs1 = table.search(query, vector_column_name="vector").limit(10).with_row_id(True)
@@ -262,9 +262,9 @@ def _run_test_hybrid_reranker(reranker, tmp_path, use_tantivy):
"represents the relevance of the result to the query & should "
"be descending."
)
assert np.all(
np.diff(result.column("_relevance_score").to_numpy()) <= 0
), ascending_relevance_err
assert np.all(np.diff(result.column("_relevance_score").to_numpy()) <= 0), (
ascending_relevance_err
)
# Test with empty FTS results
query = "abcxyz" * 100
@@ -278,9 +278,9 @@ def _run_test_hybrid_reranker(reranker, tmp_path, use_tantivy):
)
# should return _relevance_score column
assert "_relevance_score" in result.column_names
assert np.all(
np.diff(result.column("_relevance_score").to_numpy()) <= 0
), ascending_relevance_err
assert np.all(np.diff(result.column("_relevance_score").to_numpy()) <= 0), (
ascending_relevance_err
)
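All of these reformatted assertions encode one invariant: `_relevance_score` must be non-increasing, which `np.diff(scores) <= 0` checks pairwise. A tiny illustration:

```python
import numpy as np

scores = np.array([0.9, 0.7, 0.7, 0.2])
# np.diff yields consecutive differences; all <= 0 means non-increasing.
assert np.all(np.diff(scores) <= 0)
```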
@pytest.mark.parametrize("use_tantivy", [True, False])
@@ -405,7 +405,9 @@ def test_answerdotai_reranker(tmp_path, use_tantivy):
@pytest.mark.skipif(
os.environ.get("OPENAI_API_KEY") is None, reason="OPENAI_API_KEY not set"
os.environ.get("OPENAI_API_KEY") is None
or os.environ.get("OPENAI_BASE_URL") is not None,
reason="OPENAI_API_KEY not set",
)
@pytest.mark.parametrize("use_tantivy", [True, False])
def test_openai_reranker(tmp_path, use_tantivy):


@@ -252,3 +252,27 @@ def test_s3_dynamodb_sync(s3_bucket: str, commit_table: str, monkeypatch):
db.drop_table("test_ddb_sync")
assert db.table_names() == []
db.drop_database()
@pytest.mark.s3_test
def test_s3_dynamodb_drop_all_tables(s3_bucket: str, commit_table: str, monkeypatch):
for key, value in CONFIG.items():
monkeypatch.setenv(key.upper(), value)
uri = f"s3+ddb://{s3_bucket}/test2?ddbTableName={commit_table}"
db = lancedb.connect(uri, read_consistency_interval=timedelta(0))
data = pa.table({"x": ["a", "b", "c"]})
db.create_table("foo", data)
db.create_table("bar", data)
assert db.table_names() == ["bar", "foo"]
# dropping all tables should clear multiple tables
db.drop_all_tables()
assert db.table_names() == []
# create a new table with the same name to ensure DDB is clean
db.create_table("foo", data)
assert db.table_names() == ["foo"]
db.drop_all_tables()
