Compare commits

..

96 Commits

Author SHA1 Message Date
Lance Release
26dab93f2a Bump version: 0.21.3-beta.0 → 0.22.0-beta.0 2025-03-30 18:04:14 +00:00
LuQQiu
b9bdb8d937 fix: fix remote restore api to always checkout latest version (#2291)
Fix the remote restore API to always check out the latest version,
following the local restore implementation:

a1d1833a40/rust/lancedb/src/table.rs (L1910)

Otherwise:
db.create_table -> version 1
table.add -> version 2
table.checkout(1), table.restore() -> the version remains at 1 (restore
should call checkout_latest internally to move to the latest version and
allow write operations)
table.checkout_latest() -> version is 3, and write operations succeed
2025-03-29 22:46:57 -07:00
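A minimal sketch of the sequence above, assuming the Python sync API (the URI, table name, and data are illustrative):

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")
tbl = db.create_table("t", [{"vector": [0.1, 0.2]}])  # version 1
tbl.add([{"vector": [0.3, 0.4]}])                     # version 2

tbl.checkout(1)  # pin the table to version 1 (read-only)
tbl.restore()    # re-publish version 1's contents as a new latest version
tbl.add([{"vector": [0.5, 0.6]}])  # works: restore moved us to the latest version
```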
LuQQiu
a1d1833a40 feat: add analyze_plan api (#2280)
Add an analyze_plan API that executes a query and reports runtime
metrics, which helps identify query I/O overhead and diagnose query
slowness.
2025-03-28 14:28:52 -07:00
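A sketch of how the new API might be used from the Python async client (the URI, table name, and query vector are illustrative):

```python
import asyncio

import lancedb


async def main():
    db = await lancedb.connect_async("data/sample-lancedb")
    tbl = await db.open_table("my_table")
    # analyze_plan executes the query and returns the execution plan
    # annotated with runtime metrics (rows produced, I/O, elapsed time).
    report = await tbl.query().nearest_to([0.1, 0.2]).analyze_plan()
    print(report)


asyncio.run(main())
```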
Will Jones
a547c523c2 feat!: change default read_consistency_interval=5s (#2281)
Previously, when we loaded the next version of the table, we would block
all reads with a write lock. Now, we only do that if
`read_consistency_interval=0`. Otherwise, we load the next version
asynchronously in the background. This should mean that
`read_consistency_interval > 0` won't have a meaningful impact on
latency.

Along with this change, I felt it was safe to change the default
consistency interval to 5 seconds. The current default is `None`, which
means we will **never** check for a new version by default. I think that
default is contrary to most users' expectations.
2025-03-28 11:04:31 -07:00
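For reference, the interval can still be set explicitly at connect time; a sketch using the Python API (the URI is illustrative):

```python
from datetime import timedelta

import lancedb

# Check for a new table version at most every 5 seconds (the new default).
# timedelta(0) gives strong consistency; None never re-checks in the
# background.
db = lancedb.connect(
    "data/sample-lancedb",
    read_consistency_interval=timedelta(seconds=5),
)
```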
Lance Release
dc8b75feab Updating package-lock.json 2025-03-28 17:15:17 +00:00
Lance Release
c1600cdc06 Updating package-lock.json 2025-03-28 16:04:01 +00:00
Lance Release
f5dee46970 Updating package-lock.json 2025-03-28 16:03:46 +00:00
Lance Release
346cbf8bf7 Bump version: 0.18.2-beta.0 → 0.18.3-beta.0 2025-03-28 16:03:31 +00:00
Lance Release
3c7dfe9f28 Bump version: 0.21.2-beta.0 → 0.21.3-beta.0 2025-03-28 16:03:17 +00:00
Lei Xu
f52d05d3fa feat: add columns using pyarrow schema (#2284) 2025-03-28 08:51:50 -07:00
vinoyang
c321cccc12 chore(java): make rust release to be a switch option (#2277) 2025-03-28 11:26:24 +08:00
LuQQiu
cba14a5743 feat: add restore remote api (#2282) 2025-03-27 16:33:52 -07:00
vinoyang
72057b743d chore(java): introduce spotless plugin (#2278) 2025-03-27 10:38:39 +08:00
LuQQiu
698f329598 feat: add explain plan remote api (#2263)
Add the explain plan remote API.
2025-03-26 11:22:40 -07:00
BubbleCal
79fa745130 feat: upgrade lance to v0.25.1-beta.3 (#2276)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-03-26 23:14:27 +08:00
vinoyang
2ad71bdeca fix(java): make test work for jdk8 (#2269) 2025-03-25 10:57:49 -07:00
vinoyang
7c13615096 fix(java): add .gitignore file (#2270) 2025-03-25 10:56:08 -07:00
Wyatt Alt
f882f5b69a fix: update Query pydoc (#2273)
Removes reference of nonexistent method.
2025-03-25 08:50:23 -07:00
Benjamin Clavié
a68311a893 fix: answerdotai rerankers argument passing (#2117)
This fixes an issue for people wishing to use different kinds of
rerankers in LanceDB via the AnswerDotAI rerankers package. Currently,
the arguments are passed positionally, but they don't match the
[Reranker class
implementation](d604a8c47d/rerankers/reranker.py (L179)):
the second argument is expected to be an optional "lang" for default
models, while model_type should be passed explicitly.

The one-line change in this PR fixes it and enables the use of other
methods (e.g. LLMs-as-rerankers)
2025-03-24 12:31:59 +05:30
Ayush Chaurasia
846a5cea33 fix: handle light and dark mode logo (#2265) 2025-03-22 10:21:05 -07:00
QianZhu
e3dec647b5 docs: replace banner as an image (#2262) 2025-03-21 18:35:35 -07:00
QianZhu
c58104cecc docs: add banner for LanceDB Cloud in public beta (#2261) 2025-03-21 17:54:34 -07:00
QianZhu
b3b5362632 docs: replace Lancedb Cloud link (#2259)
* Direct users to cloud.lancedb.com, since LanceDB Cloud is in public
beta
* Removed `cast vector dimension` from alter columns, as we don't
support it
2025-03-21 17:43:00 -07:00
Will Jones
abe06fee3d feat(python): warn on fork (#2258)
Closes #768
2025-03-21 17:18:10 -07:00
Will Jones
93a82fd371 ci: allow dry run on PR to Python release (#2245)
This just makes it easier to test in the future.
2025-03-21 16:14:32 -07:00
Will Jones
0d379e6ffa ci(node): setup URL so auth token is picked up (#2257)
Should fix failure seen here:
https://github.com/lancedb/lancedb/actions/runs/13999958170/job/39207039825
2025-03-21 16:14:24 -07:00
Lance Release
e1388bdfdd Updating package-lock.json 2025-03-21 20:46:53 +00:00
Lance Release
315a24c2bc Updating package-lock.json 2025-03-21 20:03:43 +00:00
Lance Release
6dd4cf6038 Updating package-lock.json 2025-03-21 20:03:27 +00:00
Lance Release
f97e751b3c Bump version: 0.18.1 → 0.18.2-beta.0 2025-03-21 20:02:59 +00:00
Lance Release
e803a626a1 Bump version: 0.21.1 → 0.21.2-beta.0 2025-03-21 20:02:25 +00:00
Weston Pace
9403254442 feat: add to_query_object method (#2239)
This PR adds a `to_query_object` method to the various query builders
(hybrid queries are not covered yet). This makes it possible to inspect
the query that has been built.

In addition, this PR does some normalization between the sync and async
query paths. A few custom defaults were removed in favor of None (with
the default getting set once, in Rust).

Also, the synchronous to_batches method now actually streams results.

Also, the remote API now defaults to prefiltering.
2025-03-21 13:01:51 -07:00
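A sketch of what inspecting a built query might look like with the Python sync API (the URI, table name, and filter are illustrative; hybrid queries are not covered by this PR):

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")
tbl = db.open_table("my_table")

builder = tbl.search([0.1, 0.2]).where("id > 5").limit(10)
# Returns a plain object describing the query (vector, filter, limit, ...)
# instead of executing it.
query = builder.to_query_object()
print(query)
```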
Will Jones
b2a38ac366 fix: make pylance optional again (#2209)
The two remaining blockers were:

* A method `with_embeddings` that was deprecated a year ago
* A typecheck for `LanceDataset`
2025-03-21 11:26:32 -07:00
BubbleCal
bdb6c09c3b feat: support binary vector and IVF_FLAT in TypeScript (#2221)
resolve #2218

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-03-21 10:57:08 -07:00
Will Jones
2bfdef2624 ci: refactor node releases (#2223)
This PR fixes build issues associated with `aws-lc-rs`, while
simplifying the build process. Previously, we used custom scripts for
the musl and Windows ARM builds. These were complicated and prone to
breaking. This PR switches to a setup that mirrors
https://github.com/napi-rs/package-template/blob/main/.github/workflows/CI.yml.

* Linux glibc and musl builds now use the Docker images provided by the
napi project.
* The Windows ARM build now just cross-compiles from Windows x64, which
turns out to work quite well.
2025-03-21 10:56:29 -07:00
Samuel Colvin
7982d5c082 fix: correct rust install docs (#2253)
I'm pretty sure you mean `cargo add lancedb` here; `cargo install
lancedb` fails right now.
2025-03-21 10:12:53 -07:00
BubbleCal
7ff6ec7fe3 feat: upgrade to lance v0.25.0-beta.5 (#2248)
- adds `loss` to the index stats for vector indices
- `optimize` can now retrain the vector index

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-03-21 10:12:23 -07:00
Ayush Chaurasia
ba1ded933a fix: add better check for empty results in hybrid search (#2252)
fixes: https://github.com/lancedb/lancedb/issues/2249
2025-03-21 13:05:05 +05:30
Will Jones
b595d8a579 fix(nodejs): workaround for apache-arrow null vector issue (#2244)
Fixes #2240
2025-03-20 08:07:10 -07:00
Will Jones
2a1d6d8abf ci: simplify windows builds (#2243)
We soon won't rely on cross-compiling from Linux to Windows, so we can
remove this check. Instead, check that we can cross-compile from Windows
between architectures.
2025-03-20 08:06:56 -07:00
Will Jones
440a466a13 ci: remove OpenSSL as dependency in favor of rustls (#2242)
`object_store` already hard codes `rustls` as the TLS implementation, so
we have been shipping a mix of `rustls` and `openssl`. For simplicity of
builds, we should consolidate to one, and that has to be `rustls`.
2025-03-20 08:06:45 -07:00
Ayush Chaurasia
b9afd9c860 docs: add late interaction, multi-vector guide & link example (#2231)
1/2 docs update for this week. Addresses issues from this docs epic:
https://github.com/lancedb/lancedb/issues/1476
2025-03-20 20:29:32 +05:30
Will Jones
a6b6f6a806 ci: drop vectordb support for musl, windows ARM (#2241)
vectordb is deprecated, and these platforms are particularly difficult
to maintain. Removing now to prevent further headaches.

We will keep these platforms supported on `@lancedb/lancedb`.
2025-03-19 12:23:46 -07:00
Ayush Chaurasia
ae1548b507 docs: add cloud & enterprise cta (#2235)
2/2 docs update this week
- Add cloud & enterprise CTA
- remove outdated projects/examples from landing page
2025-03-19 10:55:05 -07:00
Weston Pace
4e03ee82bc refactor: rework catalog/database options (#2213)
The `ConnectRequest` has a set of properties that only make sense for
listing databases / catalogs and a set of properties that only make
sense for remote databases.

This PR reduces all options to a single `HashMap<String, String>`. This
makes it easier to add new database / catalog implementations and makes
it clearer to users which options are applicable in which situations.

I don't believe there are any breaking changes here. The closest thing
is that I placed the `ConnectBuilder` methods `api_key`, `region`, and
`host_override` behind a `remote` feature gate. This is not strictly
needed and I could remove the feature gate but it seemed appropriate.
Since using these methods without the remote feature would have been
meaningless I don't feel this counts as a breaking change.

We could look at removing these methods entirely from the
`ConnectBuilder` (and encouraging users to use `RemoteDatabaseOptions`
instead) but I'm not sure how I feel about that.

Another approach we could take is to move these methods into a
`RemoteConnectBuilderExt` trait (and there could be a similar
`ListingConnectBuilderExt` trait to add methods for the listing database
/ catalog).

For now though my main goal is to simplify `ConnectRequest` as much as
possible (I see this being part of the key public API for database /
catalog integrations, similar to the `BaseTable`, `Catalog`, and
`Database` traits and I'd like it to be simple).
2025-03-18 10:13:59 -07:00
Weston Pace
46a6846d07 refactor: remove dataset reference from base table (#2226) 2025-03-17 06:27:33 -07:00
Will Jones
a207213358 fix: insert structs in non-alphabetical order (#2222)
Closes #2114

Starting in #1965, we no longer pass the table schema into
`pa.Table.from_pylist()`. This means PyArrow is choosing the order of
the struct subfields, and apparently it does them in alphabetical order.
This is fine in theory, since in Lance we support providing fields in
any order. However, before we pass it to Lance, we call
`pa.Table.cast()` to align column types to the table types.
`pa.Table.cast()` is strict about field order, so we need to create a
cast target schema that aligns with the input data. We were doing this
at the top-level fields, but weren't doing this in nested fields. This
PR adds support to do this for nested ones.
2025-03-13 14:46:05 -07:00
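The nested-field issue can be reproduced in plain PyArrow; a sketch (field names and types are illustrative) of building a cast target whose nested field order follows the input while taking types from the table schema:

```python
import pyarrow as pa

# PyArrow chooses the struct subfield order when from_pylist gets no schema.
data = [{"point": {"y": 1.0, "x": 2.0}}]
tbl = pa.Table.from_pylist(data)

# The table's (target) types, possibly in a different nested order.
target_point = pa.struct([pa.field("x", pa.float32()),
                          pa.field("y", pa.float32())])

# pa.Table.cast() is strict about field order, so reorder the target's
# nested fields to match the input's layout before casting.
input_point = tbl.schema.field("point").type
aligned_fields = [
    target_point.field(target_point.get_field_index(input_point.field(i).name))
    for i in range(input_point.num_fields)
]
aligned = pa.schema([pa.field("point", pa.struct(aligned_fields))])
print(tbl.cast(aligned).schema)
```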
BubbleCal
6c321c694a feat: upgrade lance to 0.25.0-beta2 (#2220)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-03-13 14:12:54 -07:00
Bob Liu
5c00b2904c feat: add get dataset method on NativeTable (#2021)
I want to expose the dataset method from NativeTable, so that I can use
more Lance methods, like order_by, that are not exposed in the lancedb
crate.
2025-03-13 11:15:28 -07:00
Gagan Bhullar
14677d7c18 fix: metric type inconsistency (#2122)
PR fixes #2113

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2025-03-12 10:28:37 -07:00
Martin Schorfmann
dd22a379b2 fix: use Self return type annotation for abstract query builder (#2127)
Hello LanceDB team,

While developing with `lancedb` as a library, I encountered a typing
problem affecting IDE hints and completions during development.

---

## Current Situation

Currently, the abstract base class `lancedb.query:LanceQueryBuilder`
uses method chaining to build up the search parameters, where the
methods have `LanceQueryBuilder` as a return type hint.

This leads to two issues:
1. Implementing subclasses of `LanceQueryBuilder` need to override
methods to narrow the return type hint, even when they don't need to
change the implementation, just to ensure adequate IDE hints and
completions.
2. When using method chaining, the first method inherited directly from
the abstract `LanceQueryBuilder` causes the inferred type to switch back
to `LanceQueryBuilder`. So even when the chain starts from
`lancedb.table:LanceTable.search(query_type="vector", ...)` and is
therefore correctly inferred as `LanceVectorQueryBuilder`, after calling
e.g. `LanceVectorQueryBuilder.limit(...)` it is seen as the abstract
`LanceQueryBuilder` from that point on.

### Example of current situation


![image](https://github.com/user-attachments/assets/09678727-8722-43bd-a8a2-67d9b5fc0db5)

## Proposed changes

I propose to change the return type hints of the corresponding methods
(including classmethod `create()`) in the abstract base class
`LanceQueryBuilder` from `LanceQueryBuilder` to `Self`.
`Self` is already imported in the module:

```py
    if sys.version_info >= (3, 11):
        from typing import Self
    else:
        from typing_extensions import Self
```

### Further possible changes

Additionally, the implementing subclasses could also change the return
type hints to `Self` to potentially allow for further inheritance
easily.
> [!NOTE]
> **However this is not part of this pull request as of writing.**

### Example after proposed changes


![image](https://github.com/user-attachments/assets/a9aea636-e426-477a-86ee-2dad3af2876f)

---

Best regards
Martin
2025-03-12 10:08:25 -07:00
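A minimal sketch of the proposed `Self` annotation (the builder classes here are illustrative, not the actual `lancedb` code):

```python
import sys

if sys.version_info >= (3, 11):
    from typing import Self
else:
    from typing_extensions import Self


class QueryBuilder:
    def limit(self, n: int) -> Self:  # Self instead of QueryBuilder
        self._limit = n
        return self


class VectorQueryBuilder(QueryBuilder):
    def nprobes(self, n: int) -> Self:
        self._nprobes = n
        return self


# With Self, chaining keeps the subclass type: the inferred type of q is
# VectorQueryBuilder, even though limit() is defined on the base class.
q = VectorQueryBuilder().limit(10).nprobes(20)
```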
Will Jones
7747c9bcbf feat(node): parse arrow types in alterColumns() (#2208)
Previously, users could only specify new data types in `alterColumns` as
strings:

```ts
await tbl.alterColumns([
  { path: "price", dataType: "float" },
]);
```

But this has some problems:

1. It wasn't clear what the valid types were.
2. It was impossible to specify nested types, like lists and vector
columns.

This PR changes it to take an Arrow data type, similar to how the Python
API works. This allows casting vector types:

```ts
await tbl.alterColumns([
  {
    path: "vector",
    dataType: new arrow.FixedSizeList(
      2,
      new arrow.Field("item", new arrow.Float16(), false),
    ),
  },
]);
```

Closes #2185
2025-03-12 09:57:36 -07:00
QianZhu
c9d6fc43a6 docs: use bypass_vector_index() instead of use_index=false (#2115) 2025-03-12 09:31:09 -07:00
Martin Schorfmann
581bcfbb88 docs: fix docstring of EmbeddingFunction (#2118)
Hello LanceDB team,

---

I have fixed a discrepancy in the class docstring of
`lancedb.embeddings.base:EmbeddingFunction` and made consistency
alignments to that docstring.

### Changes made

1. The docstring referred to the abstract method
`get_source_embeddings()`. This method does not exist in the current
state of the repository, so I changed the mention to refer to the actual
abstract method `compute_source_embeddings()`.
2. I also made the ordered list describing the methods to be implemented
by concrete embedding functions internally consistent.

---

Thank you for developing this useful library. 👍

Best regards
Martin
2025-03-12 09:30:01 -07:00
vinoyang
3750639b5f feat(rust): add connect_catalog method to support connect catalog via url (#2177) 2025-03-12 05:19:03 -07:00
Lance Release
e744d54460 Updating package-lock.json 2025-03-11 14:00:55 +00:00
Lance Release
9d1ce4b5a5 Updating package-lock.json 2025-03-11 13:15:18 +00:00
Lance Release
729ce5e542 Updating package-lock.json 2025-03-11 13:15:03 +00:00
Lance Release
de6739e7ec Bump version: 0.18.1-beta.0 → 0.18.1 2025-03-11 13:14:49 +00:00
Lance Release
495216efdb Bump version: 0.18.0 → 0.18.1-beta.0 2025-03-11 13:14:44 +00:00
Lance Release
a3b45a4d00 Bump version: 0.21.1-beta.0 → 0.21.1 2025-03-11 13:14:30 +00:00
Lance Release
c316c2f532 Bump version: 0.21.0 → 0.21.1-beta.0 2025-03-11 13:14:29 +00:00
Weston Pace
3966b16b63 fix: restore pylance as mandatory dependency (#2204)
We attempted to make pylance optional in
https://github.com/lancedb/lancedb/pull/2156 but it appears this did not
quite work. Users are unable to use lancedb from a fresh install. This
reverts the optional-ness so we can get back to a working state while we
fix the issue.
2025-03-11 06:13:52 -07:00
Lance Release
5661cc15ac Updating package-lock.json 2025-03-10 23:53:56 +00:00
Lance Release
4e7220400f Updating package-lock.json 2025-03-10 23:13:52 +00:00
Lance Release
ae4928fe77 Updating package-lock.json 2025-03-10 23:13:36 +00:00
Lance Release
e80a405dee Bump version: 0.18.0-beta.1 → 0.18.0 2025-03-10 23:13:18 +00:00
Lance Release
a53e19e386 Bump version: 0.18.0-beta.0 → 0.18.0-beta.1 2025-03-10 23:13:13 +00:00
Lance Release
c0097c5f0a Bump version: 0.21.0-beta.2 → 0.21.0 2025-03-10 23:12:56 +00:00
Lance Release
c199708e64 Bump version: 0.21.0-beta.1 → 0.21.0-beta.2 2025-03-10 23:12:56 +00:00
Weston Pace
4a47150ae7 feat: upgrade to lance 0.24.1 (#2199) 2025-03-10 15:18:37 -07:00
Wyatt Alt
f86b20a564 fix: delete tables from DDB on drop_all_tables (#2194)
Prior to this commit, issuing drop_all_tables on a listing database with
an external manifest store would delete physical tables but leave
references behind in the manifest store. The table drop would succeed,
but subsequent creation of a table with the same name would fail with a
conflict.

With this patch, the external manifest store is updated to account for
the dropped tables so that dropped table names can be reused.
2025-03-10 15:00:53 -07:00
msu-reevo
cc81f3e1a5 fix(python): typing (#2167)
@wjones127 is there a standard way you guys set up your virtualenv? I
can either relist all the dependencies in the pyright pre-commit
section, specify a venv, or the user has to be in the virtual
environment when they run git commit. If the venv location were
standardized, or a Python manager like `uv` were used, it would be
easier to avoid duplicating the pyright dependency list.

Per your suggestion, in `pyproject.toml` I added all the passing files
to the `includes` section.

For ruff I upgraded the version and removed "TCH", which doesn't exist
as an option.

I added a `pyright_report.csv`, which contains a list of all files
sorted by pyright error count (ascending), as a todo list to work on.

I fixed about 30 issues in `table.py` stemming from `str`s being passed
into methods that require a string from a set of string `Literal`s, by
extracting those literals into `types.py`.

Can you verify in the rust bridge that the schema should be a property
and not a method here? If it's a method, then there's another place in
the code where `inner.schema` should be `inner.schema()`
``` python
class RecordBatchStream:
    @property
    def schema(self) -> pa.Schema: ...
```

Also, unless the `_lancedb.pyi` file is wrong, there is no `__anext__`
on `_inner` when it's not an `AsyncGenerator`; only `next` is defined:
``` python
    async def __anext__(self) -> pa.RecordBatch:
        return await self._inner.__anext__()
        if isinstance(self._inner, AsyncGenerator):
            batch = await self._inner.__anext__()
        else:
            batch = await self._inner.next()
        if batch is None:
            raise StopAsyncIteration
        return batch
```
in the else statement, `_inner` is a `RecordBatchStream`
```python
class RecordBatchStream:
    @property
    def schema(self) -> pa.Schema: ...
    async def next(self) -> Optional[pa.RecordBatch]: ...
```

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2025-03-10 09:01:23 -07:00
Weston Pace
bc49c4db82 feat: respect datafusion's batch size when running as a table provider (#2187)
Datafusion makes the batch size available as part of the `SessionState`.
We should use that to set the `max_batch_length` property in the
`QueryExecutionOptions`.
2025-03-07 05:53:36 -08:00
Weston Pace
d2eec46f17 feat: add support for streaming input to create_table (#2175)
This PR makes it possible to create a table using an asynchronous stream
of input data. Currently only a synchronous iterator is supported. There
are a number of follow-ups not yet tackled:

* Support for embedding functions (the embedding functions wrapper needs
to be re-written to be async, should be an easy lift)
* Support for async input into the remote table (the make_ipc_batch
needs to change to accept async input, leaving undone for now because I
think we want to support actual streaming uploads into the remote table
soon)
* Support for async input into the add function (pretty essential, but
it is a fairly distinct code path, so saving for a different PR)
2025-03-06 11:55:00 -08:00
Lance Release
51437bc228 Bump version: 0.21.0-beta.0 → 0.21.0-beta.1 2025-03-06 19:23:06 +00:00
Bert
fa53cfcfd2 feat: support modifying field metadata in lancedb python (#2178) 2025-03-04 16:58:46 -05:00
vinoyang
374fe0ad95 feat(rust): introduce Catalog trait and implement ListingCatalog (#2148)
Co-authored-by: Weston Pace <weston.pace@gmail.com>
2025-03-03 20:22:24 -08:00
BubbleCal
35e5b84ba9 chore: upgrade lance to 0.24.0-beta.1 (#2171)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-03-03 12:32:12 +08:00
Lei Xu
7c12d497b0 ci: bump python to 3.12 in GHA (#2169) 2025-03-01 17:24:02 -08:00
ayao227
dfe4ba8dad chore: add reo integration (#2149)
This PR adds reo integration to the lancedb documentation website.
2025-02-28 07:51:34 -08:00
Weston Pace
fa1b9ad5bd fix: don't use with_schema to remove schema metadata (#2162)
It seems that `RecordBatch::with_schema` is unable to remove schema
metadata from a batch. It fails with the error `target schema is not
superset of current schema`.

I'm not sure how the `test_metadata_erased` test was passing. Strangely,
the metadata was not present by the time the batch arrived at the
metadata eraser. I think the schema metadata may only be present in the
batch when there is a filter.

I've created a new unit test that makes sure the metadata is erased when
a filter is present as well.
2025-02-27 10:24:00 -08:00
BubbleCal
8877eb020d feat: record the server version for remote table (#2147)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-02-27 15:55:59 +08:00
Will Jones
01e4291d21 feat(python): drop hard dependency on pylance (#2156)
Closes #1793
2025-02-26 15:53:45 -08:00
Lance Release
ab3ea76ad1 Updating package-lock.json 2025-02-26 21:23:39 +00:00
Lance Release
728ef8657d Updating package-lock.json 2025-02-26 20:11:37 +00:00
Lance Release
0b13901a16 Updating package-lock.json 2025-02-26 20:11:22 +00:00
Lance Release
84b110e0ef Bump version: 0.17.0 → 0.18.0-beta.0 2025-02-26 20:11:07 +00:00
Lance Release
e1836e54e3 Bump version: 0.20.0 → 0.21.0-beta.0 2025-02-26 20:10:54 +00:00
Weston Pace
4ba5326880 feat: reapply upgrade lance to v0.23.3-beta.1 (#2157)
This reverts commit 2f0c5baea2.

---------

Co-authored-by: Lu Qiu <luqiujob@gmail.com>
2025-02-26 11:44:11 -08:00
Lance Release
b036a69300 Updating package-lock.json 2025-02-26 19:32:22 +00:00
Will Jones
5b12a47119 feat!: revert query limit to be unbounded for scans (#2151)
In earlier PRs (#1886, #1191) we made the default limit 10 regardless of
the query type. This was confusing for users and in many cases a
breaking change. Users would have queries that used to return all
results, but instead only returned the first 10, causing silent bugs.

Part of the cause was consistency: the Python sync API seems to have
always had a limit of 10, while newer APIs (Python async and Nodejs)
didn't.

This PR sets the default limit only for searches (vector search, FTS),
while letting scans (even with filters) be unbounded. It does this
consistently for all SDKs.

Fixes #1983
Fixes #1852
Fixes #2141
2025-02-26 10:32:14 -08:00
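A sketch of the resulting defaults in the Python sync API (the URI, table name, and filter are illustrative):

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")
tbl = db.open_table("my_table")

# Scans, even with filters, are unbounded by default again:
all_matching = tbl.search().where("price > 10").to_list()

# Searches (vector search, FTS) keep the default limit of 10:
top10 = tbl.search([0.1, 0.2]).to_list()
# ...and the limit can still be set explicitly:
top100 = tbl.search([0.1, 0.2]).limit(100).to_list()
```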
Lance Release
769d483e50 Updating package-lock.json 2025-02-26 18:16:59 +00:00
Lance Release
9ecb11fe5a Updating package-lock.json 2025-02-26 18:16:42 +00:00
Lance Release
22bd8329f3 Bump version: 0.17.0-beta.0 → 0.17.0 2025-02-26 18:16:07 +00:00
Lance Release
a736fad149 Bump version: 0.16.1-beta.3 → 0.17.0-beta.0 2025-02-26 18:16:01 +00:00
156 changed files with 7739 additions and 2623 deletions


@@ -1,5 +1,5 @@
 [tool.bumpversion]
-current_version = "0.16.1-beta.3"
+current_version = "0.18.3-beta.0"
 parse = """(?x)
 (?P<major>0|[1-9]\\d*)\\.
 (?P<minor>0|[1-9]\\d*)\\.
@@ -87,26 +87,11 @@ glob = "node/package.json"
 replace = "\"@lancedb/vectordb-linux-x64-gnu\": \"{new_version}\""
 search = "\"@lancedb/vectordb-linux-x64-gnu\": \"{current_version}\""
-[[tool.bumpversion.files]]
-glob = "node/package.json"
-replace = "\"@lancedb/vectordb-linux-arm64-musl\": \"{new_version}\""
-search = "\"@lancedb/vectordb-linux-arm64-musl\": \"{current_version}\""
-[[tool.bumpversion.files]]
-glob = "node/package.json"
-replace = "\"@lancedb/vectordb-linux-x64-musl\": \"{new_version}\""
-search = "\"@lancedb/vectordb-linux-x64-musl\": \"{current_version}\""
 [[tool.bumpversion.files]]
 glob = "node/package.json"
 replace = "\"@lancedb/vectordb-win32-x64-msvc\": \"{new_version}\""
 search = "\"@lancedb/vectordb-win32-x64-msvc\": \"{current_version}\""
-[[tool.bumpversion.files]]
-glob = "node/package.json"
-replace = "\"@lancedb/vectordb-win32-arm64-msvc\": \"{new_version}\""
-search = "\"@lancedb/vectordb-win32-arm64-msvc\": \"{current_version}\""
 # Cargo files
 # ------------
 [[tool.bumpversion.files]]


@@ -34,6 +34,10 @@ rustflags = ["-C", "target-cpu=haswell", "-C", "target-feature=+avx2,+fma,+f16c"
 [target.x86_64-unknown-linux-musl]
 rustflags = ["-C", "target-cpu=haswell", "-C", "target-feature=-crt-static,+avx2,+fma,+f16c"]
+[target.aarch64-unknown-linux-musl]
+linker = "aarch64-linux-musl-gcc"
+rustflags = ["-C", "target-feature=-crt-static"]
 [target.aarch64-apple-darwin]
 rustflags = ["-C", "target-cpu=apple-m1", "-C", "target-feature=+neon,+fp16,+fhm,+dotprod"]
@@ -44,4 +48,4 @@ rustflags = ["-Ctarget-feature=+crt-static"]
 # Experimental target for Arm64 Windows
 [target.aarch64-pc-windows-msvc]
 rustflags = ["-Ctarget-feature=+crt-static"]


@@ -36,8 +36,7 @@ runs:
 args: ${{ inputs.args }}
 before-script-linux: |
 set -e
-yum install -y openssl-devel \
-&& curl -L https://github.com/protocolbuffers/protobuf/releases/download/v24.4/protoc-24.4-linux-$(uname -m).zip > /tmp/protoc.zip \
+curl -L https://github.com/protocolbuffers/protobuf/releases/download/v24.4/protoc-24.4-linux-$(uname -m).zip > /tmp/protoc.zip \
 && unzip /tmp/protoc.zip -d /usr/local \
 && rm /tmp/protoc.zip
 - name: Build Arm Manylinux Wheel
@@ -52,7 +51,7 @@ runs:
 args: ${{ inputs.args }}
 before-script-linux: |
 set -e
-yum install -y openssl-devel clang \
+yum install -y clang \
 && curl -L https://github.com/protocolbuffers/protobuf/releases/download/v24.4/protoc-24.4-linux-aarch_64.zip > /tmp/protoc.zip \
 && unzip /tmp/protoc.zip -d /usr/local \
 && rm /tmp/protoc.zip


@@ -43,7 +43,7 @@ jobs:
 - uses: Swatinem/rust-cache@v2
 - uses: actions-rust-lang/setup-rust-toolchain@v1
 with:
-toolchain: "1.79.0"
+toolchain: "1.81.0"
 cache-workspaces: "./java/core/lancedb-jni"
 # Disable full debug symbol generation to speed up CI build and keep memory down
 # "1" means line tables only, which is useful for panic tracebacks.
@@ -97,7 +97,7 @@ jobs:
 - name: Dry run
 if: github.event_name == 'pull_request'
 run: |
-mvn --batch-mode -DskipTests package
+mvn --batch-mode -DskipTests -Drust.release.build=true package
 - name: Set github
 run: |
 git config --global user.email "LanceDB Github Runner"
@@ -108,7 +108,7 @@ jobs:
echo "use-agent" >> ~/.gnupg/gpg.conf echo "use-agent" >> ~/.gnupg/gpg.conf
echo "pinentry-mode loopback" >> ~/.gnupg/gpg.conf echo "pinentry-mode loopback" >> ~/.gnupg/gpg.conf
export GPG_TTY=$(tty) export GPG_TTY=$(tty)
mvn --batch-mode -DskipTests -DpushChanges=false -Dgpg.passphrase=${{ secrets.GPG_PASSPHRASE }} deploy -P deploy-to-ossrh mvn --batch-mode -DskipTests -Drust.release.build=true -DpushChanges=false -Dgpg.passphrase=${{ secrets.GPG_PASSPHRASE }} deploy -P deploy-to-ossrh
env: env:
SONATYPE_USER: ${{ secrets.SONATYPE_USER }} SONATYPE_USER: ${{ secrets.SONATYPE_USER }}
SONATYPE_TOKEN: ${{ secrets.SONATYPE_TOKEN }} SONATYPE_TOKEN: ${{ secrets.SONATYPE_TOKEN }}

File diff suppressed because it is too large.


@@ -4,6 +4,10 @@ on:
 push:
 tags:
 - 'python-v*'
+pull_request:
+# This should trigger a dry run (we skip the final publish step)
+paths:
+- .github/workflows/pypi-publish.yml
 jobs:
 linux:
@@ -46,6 +50,7 @@ jobs:
 arm-build: ${{ matrix.config.platform == 'aarch64' }}
 manylinux: ${{ matrix.config.manylinux }}
 - uses: ./.github/workflows/upload_wheel
+if: startsWith(github.ref, 'refs/tags/python-v')
 with:
 pypi_token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
 fury_token: ${{ secrets.FURY_TOKEN }}
@@ -75,6 +80,7 @@ jobs:
 python-minor-version: 8
 args: "--release --strip --target ${{ matrix.config.target }} --features fp16kernels"
 - uses: ./.github/workflows/upload_wheel
+if: startsWith(github.ref, 'refs/tags/python-v')
 with:
 pypi_token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
 fury_token: ${{ secrets.FURY_TOKEN }}
@@ -96,10 +102,12 @@ jobs:
args: "--release --strip" args: "--release --strip"
vcpkg_token: ${{ secrets.VCPKG_GITHUB_PACKAGES }} vcpkg_token: ${{ secrets.VCPKG_GITHUB_PACKAGES }}
- uses: ./.github/workflows/upload_wheel - uses: ./.github/workflows/upload_wheel
if: startsWith(github.ref, 'refs/tags/python-v')
with: with:
pypi_token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }} pypi_token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
fury_token: ${{ secrets.FURY_TOKEN }} fury_token: ${{ secrets.FURY_TOKEN }}
gh-release: gh-release:
if: startsWith(github.ref, 'refs/tags/python-v')
runs-on: ubuntu-latest runs-on: ubuntu-latest
permissions: permissions:
contents: write contents: write


@@ -13,6 +13,11 @@ concurrency:
 group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
 cancel-in-progress: true
+env:
+# Color output for pytest is off by default.
+PYTEST_ADDOPTS: "--color=yes"
+FORCE_COLOR: "1"
 jobs:
 lint:
 name: "Lint"
@@ -33,13 +38,14 @@ jobs:
python-version: "3.12" python-version: "3.12"
- name: Install ruff - name: Install ruff
run: | run: |
pip install ruff==0.8.4 pip install ruff==0.9.9
- name: Format check - name: Format check
run: ruff format --check . run: ruff format --check .
- name: Lint - name: Lint
run: ruff check . run: ruff check .
doctest:
name: "Doctest" type-check:
name: "Type Check"
timeout-minutes: 30 timeout-minutes: 30
runs-on: "ubuntu-22.04" runs-on: "ubuntu-22.04"
defaults: defaults:
@@ -54,7 +60,36 @@ jobs:
 - name: Set up Python
 uses: actions/setup-python@v5
 with:
-python-version: "3.11"
+python-version: "3.12"
+- name: Install protobuf compiler
+run: |
+sudo apt update
+sudo apt install -y protobuf-compiler
+pip install toml
+- name: Install dependencies
+run: |
+python ../ci/parse_requirements.py pyproject.toml --extras dev,tests,embeddings > requirements.txt
+pip install -r requirements.txt
+- name: Run pyright
+run: pyright
+doctest:
+name: "Doctest"
+timeout-minutes: 30
+runs-on: "ubuntu-24.04"
+defaults:
+run:
+shell: bash
+working-directory: python
+steps:
+- uses: actions/checkout@v4
+with:
+fetch-depth: 0
+lfs: true
+- name: Set up Python
+uses: actions/setup-python@v5
+with:
+python-version: "3.12"
 cache: "pip"
 - name: Install protobuf
 run: |
@@ -75,8 +110,8 @@ jobs:
 timeout-minutes: 30
 strategy:
 matrix:
-python-minor-version: ["9", "11"]
+python-minor-version: ["9", "12"]
-runs-on: "ubuntu-22.04"
+runs-on: "ubuntu-24.04"
 defaults:
 run:
 shell: bash
@@ -101,6 +136,10 @@ jobs:
 - uses: ./.github/workflows/run_tests
 with:
 integration: true
+- name: Test without pylance
+run: |
+pip uninstall -y pylance
+pytest -vv python/tests/test_table.py
 # Make sure wheels are not included in the Rust cache
 - name: Delete wheels
 run: rm -rf target/wheels
@@ -127,7 +166,7 @@ jobs:
 - name: Set up Python
 uses: actions/setup-python@v5
 with:
-python-version: "3.11"
+python-version: "3.12"
 - uses: Swatinem/rust-cache@v2
 with:
 workspaces: python
@@ -157,7 +196,7 @@ jobs:
 - name: Set up Python
 uses: actions/setup-python@v5
 with:
-python-version: "3.11"
+python-version: "3.12"
 - uses: Swatinem/rust-cache@v2
 with:
 workspaces: python
@@ -168,7 +207,7 @@ jobs:
 run: rm -rf target/wheels
 pydantic1x:
 timeout-minutes: 30
-runs-on: "ubuntu-22.04"
+runs-on: "ubuntu-24.04"
 defaults:
 run:
 shell: bash


@@ -157,151 +157,33 @@ jobs:
 windows:
 runs-on: windows-2022
+strategy:
+matrix:
+target:
+- x86_64-pc-windows-msvc
+- aarch64-pc-windows-msvc
+defaults:
+run:
+working-directory: rust/lancedb
 steps:
 - uses: actions/checkout@v4
 - uses: Swatinem/rust-cache@v2
 with:
 workspaces: rust
 - name: Install Protoc v21.12
-working-directory: C:\
-run: |
-New-Item -Path 'C:\protoc' -ItemType Directory
-Set-Location C:\protoc
-Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
-7z x protoc.zip
-Add-Content $env:GITHUB_PATH "C:\protoc\bin"
-shell: powershell
+run: choco install --no-progress protoc
+- name: Build
+run: |
+rustup target add ${{ matrix.target }}
+$env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT
+cargo build --features remote --tests --locked --target ${{ matrix.target }}
 - name: Run tests
+# Can only run tests when target matches host
+if: ${{ matrix.target == 'x86_64-pc-windows-msvc' }}
 run: |
 $env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT
 cargo test --features remote --locked
-windows-arm64-cross:
-# We cross compile in Node releases, so we want to make sure
-# this can run successfully.
-runs-on: ubuntu-latest
-container: alpine:edge
-steps:
-- name: Checkout
-uses: actions/checkout@v4
-- name: Install dependencies
-run: |
-set -e
-apk add protobuf-dev curl clang lld llvm19 grep npm bash msitools sed
-curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y
-source $HOME/.cargo/env
-rustup target add aarch64-pc-windows-msvc
-mkdir -p sysroot
-cd sysroot
-sh ../ci/sysroot-aarch64-pc-windows-msvc.sh
-- name: Check
-env:
-CC: clang
-AR: llvm-ar
-C_INCLUDE_PATH: /usr/aarch64-pc-windows-msvc/usr/include
-CARGO_BUILD_TARGET: aarch64-pc-windows-msvc
-RUSTFLAGS: -Ctarget-feature=+crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=lld -Clink-arg=/LIBPATH:/usr/aarch64-pc-windows-msvc/usr/lib -Clink-arg=arm64rt.lib
-run: |
-source $HOME/.cargo/env
-cargo check --features remote --locked
-windows-arm64:
-runs-on: windows-4x-arm
-steps:
-- name: Install Git
-run: |
-Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
-Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
-shell: powershell
-- name: Add Git to PATH
-run: |
-Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
-$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
-shell: powershell
-- name: Configure Git symlinks
-run: git config --global core.symlinks true
-- uses: actions/checkout@v4
-- uses: actions/setup-python@v5
-with:
-python-version: "3.13"
-- name: Install Visual Studio Build Tools
-run: |
-Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
-Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
-"--installPath", "C:\BuildTools", `
-"--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
-"--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
-"--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
-"--add", "Microsoft.VisualStudio.Component.VC.ATL", `
-"--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
-"--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
-shell: powershell
-- name: Add Visual Studio Build Tools to PATH
-run: |
-$vsPath = "C:\BuildTools\VC\Tools\MSVC"
-$latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
-Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
-Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
-Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
-Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
-Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
-# Add MSVC runtime libraries to LIB
-$env:LIB = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\lib\arm64;" +
-"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;" +
-"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
-Add-Content $env:GITHUB_ENV "LIB=$env:LIB"
-# Add INCLUDE paths
-$env:INCLUDE = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\include;" +
-"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt;" +
-"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\um;" +
-"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\shared"
-Add-Content $env:GITHUB_ENV "INCLUDE=$env:INCLUDE"
-shell: powershell
-- name: Install Rust
-run: |
-Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
-.\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
-shell: powershell
-- name: Add Rust to PATH
-run: |
-Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
-shell: powershell
-- uses: Swatinem/rust-cache@v2
-with:
-workspaces: rust
-- name: Install 7-Zip ARM
-run: |
-New-Item -Path 'C:\7zip' -ItemType Directory
-Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
-Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
-shell: powershell
-- name: Add 7-Zip to PATH
-run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
-shell: powershell
-- name: Install Protoc v21.12
-working-directory: C:\
-run: |
-if (Test-Path 'C:\protoc') {
-Write-Host "Protoc directory exists, skipping installation"
-return
-}
-New-Item -Path 'C:\protoc' -ItemType Directory
-Set-Location C:\protoc
-Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
-& 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
-shell: powershell
-- name: Add Protoc to PATH
-run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
-shell: powershell
-- name: Run tests
-run: |
-$env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT
-cargo test --target aarch64-pc-windows-msvc --features remote --locked
 msrv:
 # Check the minimum supported Rust version
 name: MSRV Check - Rust v${{ matrix.msrv }}


@@ -1,21 +1,27 @@
 repos:
 - repo: https://github.com/pre-commit/pre-commit-hooks
 rev: v3.2.0
 hooks:
 - id: check-yaml
 - id: end-of-file-fixer
 - id: trailing-whitespace
 - repo: https://github.com/astral-sh/ruff-pre-commit
 # Ruff version.
-rev: v0.8.4
+rev: v0.9.9
 hooks:
 - id: ruff
+# - repo: https://github.com/RobertCraigie/pyright-python
+#   rev: v1.1.395
+#   hooks:
+#   - id: pyright
+#     args: ["--project", "python"]
+#     additional_dependencies: [pyarrow-stubs]
 - repo: local
 hooks:
 - id: local-biome-check
 name: biome check
 entry: npx @biomejs/biome@1.8.3 check --config-path nodejs/biome.json nodejs/
 language: system
 types: [text]
 files: "nodejs/.*"
 exclude: nodejs/lancedb/native.d.ts|nodejs/dist/.*|nodejs/examples/.*

Cargo.lock (generated, 1649 changed lines): diff suppressed because it is too large.


@@ -21,30 +21,32 @@ categories = ["database-implementations"]
rust-version = "1.78.0" rust-version = "1.78.0"
[workspace.dependencies] [workspace.dependencies]
lance = { "version" = "=0.23.2", "features" = ["dynamodb"] } lance = { "version" = "=0.25.1", "features" = [
lance-io = { version = "=0.23.2" } "dynamodb",
lance-index = { version = "=0.23.2" } ], tag = "v0.25.1-beta.3", git = "https://github.com/lancedb/lance.git" }
lance-linalg = { version = "=0.23.2" } lance-io = { version = "=0.25.1", tag = "v0.25.1-beta.3", git = "https://github.com/lancedb/lance.git" }
lance-table = { version = "=0.23.2" } lance-index = { version = "=0.25.1", tag = "v0.25.1-beta.3", git = "https://github.com/lancedb/lance.git" }
lance-testing = { version = "=0.23.2" } lance-linalg = { version = "=0.25.1", tag = "v0.25.1-beta.3", git = "https://github.com/lancedb/lance.git" }
lance-datafusion = { version = "=0.23.2" } lance-table = { version = "=0.25.1", tag = "v0.25.1-beta.3", git = "https://github.com/lancedb/lance.git" }
lance-encoding = { version = "=0.23.2" } lance-testing = { version = "=0.25.1", tag = "v0.25.1-beta.3", git = "https://github.com/lancedb/lance.git" }
lance-datafusion = { version = "=0.25.1", tag = "v0.25.1-beta.3", git = "https://github.com/lancedb/lance.git" }
lance-encoding = { version = "=0.25.1", tag = "v0.25.1-beta.3", git = "https://github.com/lancedb/lance.git" }
# Note that this one does not include pyarrow # Note that this one does not include pyarrow
arrow = { version = "53.2", optional = false } arrow = { version = "54.1", optional = false }
arrow-array = "53.2" arrow-array = "54.1"
arrow-data = "53.2" arrow-data = "54.1"
arrow-ipc = "53.2" arrow-ipc = "54.1"
arrow-ord = "53.2" arrow-ord = "54.1"
arrow-schema = "53.2" arrow-schema = "54.1"
arrow-arith = "53.2" arrow-arith = "54.1"
arrow-cast = "53.2" arrow-cast = "54.1"
async-trait = "0" async-trait = "0"
datafusion = { version = "44.0", default-features = false } datafusion = { version = "45.0", default-features = false }
datafusion-catalog = "44.0" datafusion-catalog = "45.0"
datafusion-common = { version = "44.0", default-features = false } datafusion-common = { version = "45.0", default-features = false }
datafusion-execution = "44.0" datafusion-execution = "45.0"
datafusion-expr = "44.0" datafusion-expr = "45.0"
datafusion-physical-plan = "44.0" datafusion-physical-plan = "45.0"
env_logger = "0.11" env_logger = "0.11"
half = { "version" = "=2.4.1", default-features = false, features = [ half = { "version" = "=2.4.1", default-features = false, features = [
"num-traits", "num-traits",
@@ -60,6 +62,7 @@ num-traits = "0.2"
rand = "0.8" rand = "0.8"
regex = "1.10" regex = "1.10"
lazy_static = "1" lazy_static = "1"
semver = "1.0.25"
# Temporary pins to work around downstream issues # Temporary pins to work around downstream issues
# https://github.com/apache/arrow-rs/commit/2fddf85afcd20110ce783ed5b4cdeb82293da30b # https://github.com/apache/arrow-rs/commit/2fddf85afcd20110ce783ed5b4cdeb82293da30b
@@ -69,3 +72,6 @@ base64ct = "=1.6.0"
 # Workaround for: https://github.com/eira-fransham/crunchy/issues/13
 crunchy = "=0.2.2"
+# Workaround for: https://github.com/Lokathor/bytemuck/issues/306
+bytemuck_derive = ">=1.8.1, <1.9.0"


@@ -1,9 +1,17 @@
<a href="https://cloud.lancedb.com" target="_blank">
<img src="https://github.com/user-attachments/assets/92dad0a2-2a37-4ce1-b783-0d1b4f30a00c" alt="LanceDB Cloud Public Beta" width="100%" style="max-width: 100%;">
</a>
<div align="center"> <div align="center">
<p align="center"> <p align="center">
<img width="275" alt="LanceDB Logo" src="https://github.com/lancedb/lancedb/assets/5846846/37d7c7ad-c2fd-4f56-9f16-fffb0d17c73a"> <picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/user-attachments/assets/ac270358-333e-4bea-a132-acefaa94040e">
<source media="(prefers-color-scheme: light)" srcset="https://github.com/user-attachments/assets/b864d814-0d29-4784-8fd9-807297c758c0">
<img alt="LanceDB Logo" src="https://github.com/user-attachments/assets/b864d814-0d29-4784-8fd9-807297c758c0" width=300>
</picture>
**Developer-friendly, database for multimodal AI** **Search More, Manage Less**
<a href='https://github.com/lancedb/vectordb-recipes/tree/main' target="_blank"><img alt='LanceDB' src='https://img.shields.io/badge/VectorDB_Recipes-100000?style=for-the-badge&logo=LanceDB&logoColor=white&labelColor=645cfb&color=645cfb'/></a> <a href='https://github.com/lancedb/vectordb-recipes/tree/main' target="_blank"><img alt='LanceDB' src='https://img.shields.io/badge/VectorDB_Recipes-100000?style=for-the-badge&logo=LanceDB&logoColor=white&labelColor=645cfb&color=645cfb'/></a>
<a href='https://lancedb.github.io/lancedb/' target="_blank"><img alt='lancdb' src='https://img.shields.io/badge/DOCS-100000?style=for-the-badge&logo=lancdb&logoColor=white&labelColor=645cfb&color=645cfb'/></a> <a href='https://lancedb.github.io/lancedb/' target="_blank"><img alt='lancdb' src='https://img.shields.io/badge/DOCS-100000?style=for-the-badge&logo=lancdb&logoColor=white&labelColor=645cfb&color=645cfb'/></a>


@@ -1,21 +0,0 @@
#!/bin/bash
set -e
ARCH=${1:-x86_64}
# We pass down the current user so that when we later mount the local files
# into the container, the files are accessible by the current user.
pushd ci/manylinux_node
docker build \
-t lancedb-node-manylinux-$ARCH \
--build-arg="ARCH=$ARCH" \
--build-arg="DOCKER_USER=$(id -u)" \
--progress=plain \
.
popd
# We turn on memory swap to avoid OOM killer
docker run \
-v $(pwd):/io -w /io \
--memory-swap=-1 \
lancedb-node-manylinux-$ARCH \
bash ci/manylinux_node/build_lancedb.sh $ARCH


@@ -1,34 +0,0 @@
# Builds the macOS artifacts (nodejs binaries).
# Usage: ./ci/build_macos_artifacts_nodejs.sh [target]
# Targets supported: x86_64-apple-darwin aarch64-apple-darwin
set -e
prebuild_rust() {
# Building here for the sake of easier debugging.
pushd rust/lancedb
echo "Building rust library for $1"
export RUST_BACKTRACE=1
cargo build --release --target $1
popd
}
build_node_binaries() {
pushd nodejs
echo "Building nodejs library for $1"
export RUST_TARGET=$1
npm run build-release
popd
}
if [ -n "$1" ]; then
targets=$1
else
targets="x86_64-apple-darwin aarch64-apple-darwin"
fi
echo "Building artifacts for targets: $targets"
for target in $targets
do
prebuild_rust $target
build_node_binaries $target
done


@@ -1,5 +1,5 @@
 # Many linux dockerfile with Rust, Node, and Lance dependencies installed.
 # This container allows building the node modules native libraries in an
 # environment with a very old glibc, so that we are compatible with a wide
 # range of linux distributions.
 ARG ARCH=x86_64
@@ -9,10 +9,6 @@ FROM quay.io/pypa/manylinux_2_28_${ARCH}
 ARG ARCH=x86_64
 ARG DOCKER_USER=default_user
-# Install static openssl
-COPY install_openssl.sh install_openssl.sh
-RUN ./install_openssl.sh ${ARCH} > /dev/null
 # Protobuf is also installed as root.
 COPY install_protobuf.sh install_protobuf.sh
 RUN ./install_protobuf.sh ${ARCH}
@@ -21,7 +17,7 @@ ENV DOCKER_USER=${DOCKER_USER}
 # Create a group and user, but only if it doesn't exist
 RUN echo ${ARCH} && id -u ${DOCKER_USER} >/dev/null 2>&1 || adduser --user-group --create-home --uid ${DOCKER_USER} build_user
 # We switch to the user to install Rust and Node, since those like to be
 # installed at the user level.
 USER ${DOCKER_USER}


@@ -1,19 +0,0 @@
#!/bin/bash
# Builds the nodejs module for manylinux. Invoked by ci/build_linux_artifacts_nodejs.sh.
set -e
ARCH=${1:-x86_64}
if [ "$ARCH" = "x86_64" ]; then
export OPENSSL_LIB_DIR=/usr/local/lib64/
else
export OPENSSL_LIB_DIR=/usr/local/lib/
fi
export OPENSSL_STATIC=1
export OPENSSL_INCLUDE_DIR=/usr/local/include/openssl
#Alpine doesn't have .bashrc
FILE=$HOME/.bashrc && test -f $FILE && source $FILE
cd nodejs
npm ci
npm run build-release


@@ -4,14 +4,6 @@ set -e
 ARCH=${1:-x86_64}
 TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}
-if [ "$ARCH" = "x86_64" ]; then
-export OPENSSL_LIB_DIR=/usr/local/lib64/
-else
-export OPENSSL_LIB_DIR=/usr/local/lib/
-fi
-export OPENSSL_STATIC=1
-export OPENSSL_INCLUDE_DIR=/usr/local/include/openssl
 #Alpine doesn't have .bashrc
 FILE=$HOME/.bashrc && test -f $FILE && source $FILE


@@ -1,26 +0,0 @@
#!/bin/bash
# Builds openssl from source so we can statically link to it
# this is to avoid the error we get with the system installation:
# /usr/bin/ld: <library>: version node not found for symbol SSLeay@@OPENSSL_1.0.1
# /usr/bin/ld: failed to set dynamic section sizes: Bad value
set -e
git clone -b OpenSSL_1_1_1v \
--single-branch \
https://github.com/openssl/openssl.git
pushd openssl
if [[ $1 == x86_64* ]]; then
ARCH=linux-x86_64
else
# gnu target
ARCH=linux-aarch64
fi
./Configure no-shared $ARCH
make
make install

ci/parse_requirements.py (new file, 41 lines):

@@ -0,0 +1,41 @@
import argparse
import toml
def parse_dependencies(pyproject_path, extras=None):
with open(pyproject_path, "r") as file:
pyproject = toml.load(file)
dependencies = pyproject.get("project", {}).get("dependencies", [])
for dependency in dependencies:
print(dependency)
optional_dependencies = pyproject.get("project", {}).get(
"optional-dependencies", {}
)
if extras:
for extra in extras.split(","):
for dep in optional_dependencies.get(extra, []):
print(dep)
def main():
parser = argparse.ArgumentParser(
description="Generate requirements.txt from pyproject.toml"
)
parser.add_argument("path", type=str, help="Path to pyproject.toml")
parser.add_argument(
"--extras",
type=str,
help="Comma-separated list of extras to include",
default="",
)
args = parser.parse_args()
parse_dependencies(args.path, args.extras)
if __name__ == "__main__":
main()


@@ -124,6 +124,9 @@ nav:
 - Overview: hybrid_search/hybrid_search.md
 - Comparing Rerankers: hybrid_search/eval.md
 - Airbnb financial data example: notebooks/hybrid_search.ipynb
+- Late interaction with MultiVector search:
+- Overview: guides/multi-vector.md
+- Example: notebooks/Multivector_on_LanceDB.ipynb
 - RAG:
 - Vanilla RAG: rag/vanilla_rag.md
 - Multi-head RAG: rag/multi_head_rag.md
@@ -233,13 +236,6 @@ nav:
 - 👾 JavaScript (vectordb): javascript/modules.md
 - 👾 JavaScript (lancedb): js/globals.md
 - 🦀 Rust: https://docs.rs/lancedb/latest/lancedb/
-- ☁️ LanceDB Cloud:
-- Overview: cloud/index.md
-- API reference:
-- 🐍 Python: python/saas-python.md
-- 👾 JavaScript: javascript/modules.md
-- REST API: cloud/rest.md
-- FAQs: cloud/cloud_faq.md
 - Quick start: basic.md
 - Concepts:
@@ -260,6 +256,9 @@ nav:
 - Overview: hybrid_search/hybrid_search.md
 - Comparing Rerankers: hybrid_search/eval.md
 - Airbnb financial data example: notebooks/hybrid_search.ipynb
+- Late interaction with MultiVector search:
+- Overview: guides/multi-vector.md
+- Document search Example: notebooks/Multivector_on_LanceDB.ipynb
 - RAG:
 - Vanilla RAG: rag/vanilla_rag.md
 - Multi-head RAG: rag/multi_head_rag.md
@@ -363,13 +362,6 @@ nav:
 - Javascript (vectordb): javascript/modules.md
 - Javascript (lancedb): js/globals.md
 - Rust: https://docs.rs/lancedb/latest/lancedb/index.html
-- LanceDB Cloud:
-- Overview: cloud/index.md
-- API reference:
-- 🐍 Python: python/saas-python.md
-- 👾 JavaScript: javascript/modules.md
-- REST API: cloud/rest.md
-- FAQs: cloud/cloud_faq.md
 extra_css:
 - styles/global.css
@@ -377,6 +369,7 @@ extra_css:
 extra_javascript:
 - "extra_js/init_ask_ai_widget.js"
+- "extra_js/reo.js"
 extra:
 analytics:


@@ -171,7 +171,7 @@ paths:
 distance_type:
 type: string
 description: |
-The distance metric to use for search. L2, Cosine, Dot and Hamming are supported. Default is L2.
+The distance metric to use for search. l2, Cosine, Dot and Hamming are supported. Default is l2.
 bypass_vector_index:
 type: boolean
 description: |
@@ -450,7 +450,7 @@ paths:
 type: string
 nullable: false
 description: |
-The metric type to use for the index. L2, Cosine, Dot are supported.
+The metric type to use for the index. l2, Cosine, Dot are supported.
 index_type:
 type: string
 responses:


@@ -69,7 +69,7 @@ Lance supports `IVF_PQ` index type by default.
The following IVF_PQ parameters can be specified: The following IVF_PQ parameters can be specified:
- **distance_type**: The distance metric to use. By default it uses euclidean distance "`L2`". - **distance_type**: The distance metric to use. By default it uses euclidean distance "`l2`".
We also support "cosine" and "dot" distance as well. We also support "cosine" and "dot" distance as well.
- **num_partitions**: The number of partitions in the index. The default is the square root - **num_partitions**: The number of partitions in the index. The default is the square root
of the number of rows. of the number of rows.


@@ -2,7 +2,7 @@
LanceDB Cloud is a SaaS (software-as-a-service) solution that runs serverless in the cloud, clearly separating storage from compute. It's designed to be highly scalable without breaking the bank. LanceDB Cloud is currently in private beta with general availability coming soon, but you can apply for early access with the private beta release by signing up below. LanceDB Cloud is a SaaS (software-as-a-service) solution that runs serverless in the cloud, clearly separating storage from compute. It's designed to be highly scalable without breaking the bank. LanceDB Cloud is currently in private beta with general availability coming soon, but you can apply for early access with the private beta release by signing up below.
[Try out LanceDB Cloud](https://noteforms.com/forms/lancedb-mailing-list-cloud-kty1o5?notionforms=1&utm_source=notionforms){ .md-button .md-button--primary } [Try out LanceDB Cloud (Public Beta)](https://cloud.lancedb.com){ .md-button .md-button--primary }
## Architecture ## Architecture


@@ -59,7 +59,7 @@ Then the greedy search routine operates as follows:
There are three key parameters to set when constructing an HNSW index: There are three key parameters to set when constructing an HNSW index:
* `metric`: Use an `L2` euclidean distance metric. We also support `dot` and `cosine` distance. * `metric`: Use an `l2` euclidean distance metric. We also support `dot` and `cosine` distance.
* `m`: The number of neighbors to select for each vector in the HNSW graph. * `m`: The number of neighbors to select for each vector in the HNSW graph.
* `ef_construction`: The number of candidates to evaluate during the construction of the HNSW graph. * `ef_construction`: The number of candidates to evaluate during the construction of the HNSW graph.


@@ -47,7 +47,7 @@ We can combine the above concepts to understand how to build and query an IVF-PQ
There are three key parameters to set when constructing an IVF-PQ index: There are three key parameters to set when constructing an IVF-PQ index:
* `metric`: Use an `L2` euclidean distance metric. We also support `dot` and `cosine` distance. * `metric`: Use an `l2` euclidean distance metric. We also support `dot` and `cosine` distance.
* `num_partitions`: The number of partitions in the IVF portion of the index. * `num_partitions`: The number of partitions in the IVF portion of the index.
* `num_sub_vectors`: The number of sub-vectors that will be created during Product Quantization (PQ). * `num_sub_vectors`: The number of sub-vectors that will be created during Product Quantization (PQ).
@@ -56,7 +56,7 @@ In Python, the index can be created as follows:
```python ```python
# Create and train the index for a 1536-dimensional vector # Create and train the index for a 1536-dimensional vector
# Make sure you have enough data in the table for an effective training step # Make sure you have enough data in the table for an effective training step
tbl.create_index(metric="L2", num_partitions=256, num_sub_vectors=96) tbl.create_index(metric="l2", num_partitions=256, num_sub_vectors=96)
``` ```
!!! note !!! note
`num_partitions`=256 and `num_sub_vectors`=96 do not work for every dataset. Those values need to be adjusted for your particular dataset. `num_partitions`=256 and `num_sub_vectors`=96 do not work for every dataset. Those values need to be adjusted for your particular dataset.
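As a hedged sizing sketch (reusing the `tbl` handle from the snippet above; the requirement that the dimension divide evenly by `num_sub_vectors` is an assumption about PQ, not stated here):
```python
import math

num_rows = 1_000_000
dim = 1536

# Partitions default to roughly the square root of the row count
num_partitions = round(math.sqrt(num_rows))  # ~1000

# PQ splits each vector into equal-sized sub-vectors, so dim should divide evenly
num_sub_vectors = dim // 16  # 96 sub-vectors of 16 dimensions each

tbl.create_index(
    metric="l2",
    num_partitions=num_partitions,
    num_sub_vectors=num_sub_vectors,
)
```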


@@ -54,7 +54,7 @@ As mentioned, after creating embedding, each data point is represented as a vect
Points that are close to each other in vector space are considered similar (or appear in similar contexts), and points that are far away are considered dissimilar. To quantify this closeness, we use distance as a metric which can be measured in the following way - Points that are close to each other in vector space are considered similar (or appear in similar contexts), and points that are far away are considered dissimilar. To quantify this closeness, we use distance as a metric which can be measured in the following way -
1. **Euclidean Distance (L2)**: It calculates the straight-line distance between two points (vectors) in a multidimensional space. 1. **Euclidean Distance (l2)**: It calculates the straight-line distance between two points (vectors) in a multidimensional space.
2. **Cosine Similarity**: It measures the cosine of the angle between two vectors, providing a normalized measure of similarity based on their direction. 2. **Cosine Similarity**: It measures the cosine of the angle between two vectors, providing a normalized measure of similarity based on their direction.
3. **Dot product**: It is calculated as the sum of the products of their corresponding components. To measure relatedness it considers both the magnitude and direction of the vectors. 3. **Dot product**: It is calculated as the sum of the products of their corresponding components. To measure relatedness it considers both the magnitude and direction of the vectors.


@@ -8,15 +8,5 @@ LanceDB provides language APIs, allowing you to embed a database in your languag
* 👾 [JavaScript](examples_js.md) examples * 👾 [JavaScript](examples_js.md) examples
* 🦀 Rust examples (coming soon) * 🦀 Rust examples (coming soon)
## Python Applications powered by LanceDB !!! tip "Hosted LanceDB"
If you want S3 cost-efficiency and local performance via a simple serverless API, check out **LanceDB Cloud**. For private deployments, high performance at extreme scale, or if you have strict security requirements, talk to us about **LanceDB Enterprise**. [Learn more](https://docs.lancedb.com/)
| Project Name | Description |
| --- | --- |
| **Ultralytics Explorer 🚀**<br>[![Ultralytics](https://img.shields.io/badge/Ultralytics-Docs-green?labelColor=0f3bc4&style=flat-square&logo=https://cdn.prod.website-files.com/646dd1f1a3703e451ba81ecc/64994922cf2a6385a4bf4489_UltralyticsYOLO_mark_blue.svg&link=https://docs.ultralytics.com/datasets/explorer/)](https://docs.ultralytics.com/datasets/explorer/)<br>[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/docs/en/datasets/explorer/explorer.ipynb) | - 🔍 **Explore CV Datasets**: Semantic search, SQL queries, vector similarity, natural language.<br>- 🖥️ **GUI & Python API**: Seamless dataset interaction.<br>- ⚡ **Efficient & Scalable**: Leverages LanceDB for large datasets.<br>- 📊 **Detailed Analysis**: Easily analyze data patterns.<br>- 🌐 **Browser GUI Demo**: Create embeddings, search images, run queries. |
| **Website Chatbot🤖**<br>[![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/lancedb/lancedb-vercel-chatbot)<br>[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Flancedb%2Flancedb-vercel-chatbot&amp;env=OPENAI_API_KEY&amp;envDescription=OpenAI%20API%20Key%20for%20chat%20completion.&amp;project-name=lancedb-vercel-chatbot&amp;repository-name=lancedb-vercel-chatbot&amp;demo-title=LanceDB%20Chatbot%20Demo&amp;demo-description=Demo%20website%20chatbot%20with%20LanceDB.&amp;demo-url=https%3A%2F%2Flancedb.vercel.app&amp;demo-image=https%3A%2F%2Fi.imgur.com%2FazVJtvr.png) | - 🌐 **Chatbot from Sitemap/Docs**: Create a chatbot using site or document context.<br>- 🚀 **Embed LanceDB in Next.js**: Lightweight, on-prem storage.<br>- 🧠 **AI-Powered Context Retrieval**: Efficiently access relevant data.<br>- 🔧 **Serverless & Native JS**: Seamless integration with Next.js.<br>- ⚡ **One-Click Deploy on Vercel**: Quick and easy setup.. |
## Nodejs Applications powered by LanceDB
| Project Name | Description |
| --- | --- |
| **Langchain Writing Assistant✍ **<br>[![Github](../assets/github.svg)](https://github.com/lancedb/vectordb-recipes/tree/main/applications/node/lanchain_writing_assistant) | - **📂 Data Source Integration**: Use your own data by specifying data source file, and the app instantly processes it to provide insights. <br>- **🧠 Intelligent Suggestions**: Powered by LangChain.js and LanceDB, it improves writing productivity and accuracy. <br>- **💡 Enhanced Writing Experience**: It delivers real-time contextual insights and factual suggestions while the user writes. |

docs/src/extra_js/reo.js (new file)

@@ -0,0 +1 @@
!function(){var e,t,n;e="9627b71b382d201",t=function(){Reo.init({clientID:"9627b71b382d201"})},(n=document.createElement("script")).src="https://static.reo.dev/"+e+"/reo.js",n.defer=!0,n.onload=t,document.head.appendChild(n)}();


@@ -0,0 +1,85 @@
# Late interaction & MultiVector embedding type
Late interaction is a technique used in retrieval that calculates the relevance of a query to a document by comparing their multi-vector representations. The key differences between late interaction and other popular methods are illustrated below:
![late interaction vs other methods](https://raw.githubusercontent.com/lancedb/assets/b035a0ceb2c237734e0d393054c146d289792339/docs/assets/integration/colbert-blog-interaction.svg)
[Illustration from https://jina.ai/news/what-is-colbert-and-late-interaction-and-why-they-matter-in-search/]
<b>No interaction:</b> The query and document are embedded independently, and the resulting vectors are compared to calculate similarity without any interaction between them. This is the approach used in typical vector search.
<b>Partial interaction:</b> Similarity is computed primarily between whole query and document vectors, without extensive interaction between the individual components of each. Dual-encoder models built on BERT are an example.
<b>Early full interaction:</b> Techniques like cross-encoders process the query and documents in pairs, with full interaction across the stages of encoding. This is powerful but relatively slow: because queries and documents must be processed in pairs, document embeddings can't be pre-computed for fast retrieval. This is why cross-encoders are typically used as reranking models combined with vector search. Learn more about [LanceDB Reranking support](https://lancedb.github.io/lancedb/reranking/).
<b>Late interaction:</b> Document and query similarities are computed from independently encoded representations, and the interaction happens during retrieval. This is the approach used in retrieval models like ColBERT. Unlike early interaction, it speeds up retrieval without compromising the depth of semantic analysis.
## Internals of ColBERT
Let's take a look at the steps involved in performing late interaction based retrieval using ColBERT:
• ColBERT employs BERT-based encoders for both queries (`fQ`) and documents (`fD`).
• A single BERT model is shared between the query and document encoders; special tokens distinguish input types: `[Q]` for queries and `[D]` for documents.
**Query Encoder (fQ):**
• The query `q` is tokenized into WordPiece tokens `q1, q2, ..., ql`. The `[Q]` token is prepended right after BERT's `[CLS]` token.
• If the query length is less than `Nq`, it is padded with `[MASK]` tokens up to `Nq`.
• The padded sequence goes through BERT's transformer architecture.
• Final embeddings are L2-normalized.
**Document Encoder (fD):**
• The document `d` is tokenized into tokens `d1, d2, ..., dm`. The `[D]` token is prepended after the `[CLS]` token.
• Unlike queries, documents are NOT padded with `[MASK]` tokens.
• Document tokens are processed through BERT and the same linear layer.
**Late Interaction:**
Late interaction estimates the relevance score `S(q,d)` using the embeddings `Eq` and `Ed`, after both have been encoded independently.
• For each query embedding, the maximum similarity is computed against all document embeddings.
• The similarity measure can be cosine similarity or squared L2 distance.
**MaxSim Calculation:**
```
S(q,d) := Σ_{i ∈ |Eq|} max_{j ∈ |Ed|} (Eq_i · Ed_j^T)
```
This finds the best-matching document embedding for each query embedding, capturing relevance from the strongest local matches between contextual embeddings.
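To make the MaxSim scoring concrete, here is a minimal NumPy sketch (not LanceDB code; the shapes and dot-product similarity are illustrative assumptions):
```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    # (num_query_tokens, num_doc_tokens) similarity matrix
    sims = query_emb @ doc_emb.T
    # For each query token, take its best-matching document token, then sum
    return float(sims.max(axis=1).sum())

# Hypothetical shapes: 4 query tokens, 9 document tokens, 128-dim embeddings
q = np.random.randn(4, 128)
d = np.random.randn(9, 128)
print(maxsim_score(q, d))
```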
## LanceDB MultiVector type
LanceDB supports a multivector type, which is useful when you have multiple vectors for a single item (e.g. with ColBERT and ColPali).
You can index a column with the multivector type and search on it; the query can be a single vector or multiple vectors. For now, only the cosine metric is supported for multivector search. The vector value type can be float16, float32, or float64. LanceDB integrates [ConteXtualized Token Retriever (XTR)](https://arxiv.org/abs/2304.01982), which introduces a simple yet novel objective function that encourages the model to retrieve the most important document tokens first.
```python
import lancedb
import numpy as np
import pyarrow as pa
db = lancedb.connect("data/multivector_demo")
schema = pa.schema(
[
pa.field("id", pa.int64()),
# float16, float32, and float64 are supported
pa.field("vector", pa.list_(pa.list_(pa.float32(), 256))),
]
)
data = [
{
"id": i,
"vector": np.random.random(size=(2, 256)).tolist(),
}
for i in range(1024)
]
tbl = db.create_table("my_table", data=data, schema=schema)
# only cosine similarity is supported for multi-vectors
tbl.create_index(metric="cosine")
# query with single vector
query = np.random.random(256).astype(np.float32)  # match the table's float32 vector type
tbl.search(query).to_arrow()
# query with multiple vectors
query = np.random.random(size=(2, 256))
tbl.search(query).to_arrow()
```
Find more about vector search in LanceDB [here](https://lancedb.github.io/lancedb/search/#multivector-type).


@@ -1001,9 +1001,11 @@ In LanceDB OSS, users can set the `read_consistency_interval` parameter on conne
There are three possible settings for `read_consistency_interval`: There are three possible settings for `read_consistency_interval`:
1. **Unset (default)**: The database does not check for updates to tables made by other processes. This provides the best query performance, but means that clients may not see the most up-to-date data. This setting is suitable for applications where the data does not change during the lifetime of the table reference. 1. **Unset**: The database does not check for updates to tables made by other processes. This setting is suitable for applications where the data does not change during the lifetime of the table reference.
2. **Zero seconds (Strong consistency)**: The database checks for updates on every read. This provides the strongest consistency guarantees, ensuring that all clients see the latest committed data. However, it has the most overhead. This setting is suitable when consistency matters more than having high QPS. 2. **Zero seconds (Strong consistency)**: The database checks for updates on every read. This provides the strongest consistency guarantees, ensuring that all clients see the latest committed data. However, it has the most overhead. This setting is suitable when consistency matters more than having high QPS. For best performance, combine this setting with the storage option `new_table_enable_v2_manifest_paths` set to `true`.
3. **Custom interval (Eventual consistency)**: The database checks for updates at a custom interval, such as every 5 seconds. This provides eventual consistency, allowing for some lag between write and read operations. Performance wise, this is a middle ground between strong consistency and no consistency check. This setting is suitable for applications where immediate consistency is not critical, but clients should see updated data eventually. 3. **Custom interval (Eventual consistency, the default)**: The database checks for updates at a custom interval. By default, this is every 5 seconds. This provides eventual consistency, allowing for some lag between write and read operations. Performance wise, this is a middle ground between strong consistency and no consistency check. This setting is suitable for applications where immediate consistency is not critical, but clients should see updated data eventually.
You can always force a synchronization by calling `checkout_latest()` / `checkoutLatest()` on a table.
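A hedged sync-API sketch of these settings (database path and table name are illustrative):
```python
from datetime import timedelta

import lancedb

# Eventual consistency: pick up other writers' commits within ~5 seconds
db = lancedb.connect(
    "./.lancedb", read_consistency_interval=timedelta(seconds=5)
)
tbl = db.open_table("my_table")

# Force an immediate refresh to the newest version at any time
tbl.checkout_latest()
```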
!!! tip "Consistency in LanceDB Cloud" !!! tip "Consistency in LanceDB Cloud"
@@ -1041,7 +1043,21 @@ There are three possible settings for `read_consistency_interval`:
--8<-- "python/python/tests/docs/test_guide_tables.py:table_async_eventual_consistency" --8<-- "python/python/tests/docs/test_guide_tables.py:table_async_eventual_consistency"
``` ```
By default, a `Table` will never check for updates from other writers. To manually check for updates you can use `checkout_latest`: For no consistency, use `None`:
=== "Sync API"
```python
--8<-- "python/python/tests/docs/test_guide_tables.py:table_no_consistency"
```
=== "Async API"
```python
--8<-- "python/python/tests/docs/test_guide_tables.py:table_async_no_consistency"
```
To manually check for updates you can use `checkout_latest`:
=== "Sync API" === "Sync API"
@@ -1059,15 +1075,25 @@ There are three possible settings for `read_consistency_interval`:
To set strong consistency, use `0`: To set strong consistency, use `0`:
```ts ```ts
const db = await lancedb.connect({ uri: "./.lancedb", readConsistencyInterval: 0 }); --8<-- "nodejs/examples/basic.test.ts:table_strong_consistency"
const tbl = await db.openTable("my_table");
``` ```
For eventual consistency, specify the update interval as seconds: For eventual consistency, specify the update interval as seconds:
```ts ```ts
const db = await lancedb.connect({ uri: "./.lancedb", readConsistencyInterval: 5 }); --8<-- "nodejs/examples/basic.test.ts:table_eventual_consistency"
const tbl = await db.openTable("my_table"); ```
For no consistency, use `null`:
```ts
--8<-- "nodejs/examples/basic.test.ts:table_no_consistency"
```
To manually check for updates you can use `checkoutLatest`:
```ts
--8<-- "nodejs/examples/basic.test.ts:table_checkout_latest"
``` ```
<!-- Node doesn't yet support the version time travel: https://github.com/lancedb/lancedb/issues/1007 <!-- Node doesn't yet support the version time travel: https://github.com/lancedb/lancedb/issues/1007


@@ -4,6 +4,9 @@ LanceDB is an open-source vector database for AI that's designed to store, manag
Both the database and the underlying data format are designed from the ground up to be **easy-to-use**, **scalable** and **cost-effective**. Both the database and the underlying data format are designed from the ground up to be **easy-to-use**, **scalable** and **cost-effective**.
!!! tip "Hosted LanceDB"
If you want S3 cost-efficiency and local performance via a simple serverless API, check out **LanceDB Cloud**. For private deployments, high performance at extreme scale, or if you have strict security requirements, talk to us about **LanceDB Enterprise**. [Learn more](https://docs.lancedb.com/)
![](assets/lancedb_and_lance.png) ![](assets/lancedb_and_lance.png)
## Truly multi-modal ## Truly multi-modal
@@ -20,7 +23,7 @@ LanceDB **OSS** is an **open-source**, batteries-included embedded vector databa
LanceDB **Cloud** is a SaaS (software-as-a-service) solution that runs serverless in the cloud, making the storage clearly separated from compute. It's designed to be cost-effective and highly scalable without breaking the bank. LanceDB Cloud is currently in private beta with general availability coming soon, but you can apply for early access with the private beta release by signing up below. LanceDB **Cloud** is a SaaS (software-as-a-service) solution that runs serverless in the cloud, making the storage clearly separated from compute. It's designed to be cost-effective and highly scalable without breaking the bank. LanceDB Cloud is currently in private beta with general availability coming soon, but you can apply for early access with the private beta release by signing up below.
[Try out LanceDB Cloud](https://noteforms.com/forms/lancedb-mailing-list-cloud-kty1o5?notionforms=1&utm_source=notionforms){ .md-button .md-button--primary } [Try out LanceDB Cloud (Public Beta) Now](https://cloud.lancedb.com){ .md-button .md-button--primary }
## Why use LanceDB? ## Why use LanceDB?


@@ -108,7 +108,7 @@ This method creates a scalar(for non-vector cols) or a vector index on a table.
|:---|:---|:---|:---| |:---|:---|:---|:---|
|`vector_col`|`Optional[str]`| Provide if you want to create index on a vector column. |`None`| |`vector_col`|`Optional[str]`| Provide if you want to create index on a vector column. |`None`|
|`col_name`|`Optional[str]`| Provide if you want to create index on a non-vector column. |`None`| |`col_name`|`Optional[str]`| Provide if you want to create index on a non-vector column. |`None`|
|`metric`|`Optional[str]` |Provide the metric to use for vector index. choice of metrics: 'L2', 'dot', 'cosine'. |`L2`| |`metric`|`Optional[str]` |Provide the metric to use for vector index. choice of metrics: 'l2', 'dot', 'cosine'. |`l2`|
|`num_partitions`|`Optional[int]`|Number of partitions to use for the index.|`256`| |`num_partitions`|`Optional[int]`|Number of partitions to use for the index.|`256`|
|`num_sub_vectors`|`Optional[int]` |Number of sub-vectors to use for the index.|`96`| |`num_sub_vectors`|`Optional[int]` |Number of sub-vectors to use for the index.|`96`|
|`index_cache_size`|`Optional[int]` |Size of the index cache.|`None`| |`index_cache_size`|`Optional[int]` |Size of the index cache.|`None`|


@@ -125,7 +125,7 @@ The exhaustive list of parameters for `LanceDBVectorStore` vector store are :
``` ```
- **_table_exists(self, tbl_name: `Optional[str]` = `None`) -> `bool`** : Returns `True` if `tbl_name` exists in database. - **_table_exists(self, tbl_name: `Optional[str]` = `None`) -> `bool`** : Returns `True` if `tbl_name` exists in database.
- __create_index( - __create_index(
self, scalar: `Optional[bool]` = False, col_name: `Optional[str]` = None, num_partitions: `Optional[int]` = 256, num_sub_vectors: `Optional[int]` = 96, index_cache_size: `Optional[int]` = None, metric: `Optional[str]` = "L2", self, scalar: `Optional[bool]` = False, col_name: `Optional[str]` = None, num_partitions: `Optional[int]` = 256, num_sub_vectors: `Optional[int]` = 96, index_cache_size: `Optional[int]` = None, metric: `Optional[str]` = "l2",
) -> `None`__ : Creates a scalar(for non-vector cols) or a vector index on a table. ) -> `None`__ : Creates a scalar(for non-vector cols) or a vector index on a table.
Make sure your vector column has enough data before creating an index on it. Make sure your vector column has enough data before creating an index on it.


@@ -10,7 +10,7 @@ Distance metrics type.
- [Cosine](MetricType.md#cosine) - [Cosine](MetricType.md#cosine)
- [Dot](MetricType.md#dot) - [Dot](MetricType.md#dot)
- [L2](MetricType.md#l2) - [l2](MetricType.md#l2)
## Enumeration Members ## Enumeration Members


@@ -85,7 +85,7 @@ ___
`Optional` **metric\_type**: [`MetricType`](../enums/MetricType.md) `Optional` **metric\_type**: [`MetricType`](../enums/MetricType.md)
Metric type, L2 or Cosine Metric type, l2 or Cosine
#### Defined in #### Defined in


@@ -15,11 +15,9 @@ npm install @lancedb/lancedb
This will download the appropriate native library for your platform. We currently This will download the appropriate native library for your platform. We currently
support: support:
- Linux (x86_64 and aarch64) - Linux (x86_64 and aarch64 on glibc and musl)
- MacOS (Intel and ARM/M1/M2) - MacOS (Intel and ARM/M1/M2)
- Windows (x86_64 only) - Windows (x86_64 and aarch64)
We do not yet support musl-based Linux (such as Alpine Linux) or aarch64 Windows.
## Usage ## Usage


@@ -126,6 +126,37 @@ the vectors.
*** ***
### ivfFlat()
```ts
static ivfFlat(options?): Index
```
Create an IvfFlat index
This index groups vectors into partitions of similar vectors. Each partition keeps track of
a centroid which is the average value of all vectors in the group.
During a query the centroids are compared with the query vector to find the closest
partitions. The vectors in these partitions are then searched to find
the closest vectors.
The partitioning process is called IVF and the `num_partitions` parameter controls how
many groups to create.
Note that training an IVF FLAT index on a large dataset is currently a slow and memory-intensive operation.
#### Parameters
* **options?**: `Partial`&lt;[`IvfFlatOptions`](../interfaces/IvfFlatOptions.md)&gt;
#### Returns
[`Index`](Index.md)
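#### Example
A minimal sketch (database path, table, and column names are illustrative):
```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("./.lancedb");
const tbl = await db.openTable("my_table");

// Build an IVF_FLAT index on the "vector" column
await tbl.createIndex("vector", {
  config: lancedb.Index.ivfFlat({ distanceType: "cosine", numPartitions: 128 }),
});
```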
***
### ivfPq() ### ivfPq()
```ts ```ts


@@ -30,6 +30,53 @@ protected inner: Query | Promise<Query>;
## Methods ## Methods
### analyzePlan()
```ts
analyzePlan(): Promise<string>
```
Executes the query and returns the physical query plan annotated with runtime metrics.
This is useful for debugging and performance analysis, as it shows how the query was executed
and includes metrics such as elapsed time, rows processed, and I/O statistics.
#### Returns
`Promise`&lt;`string`&gt;
A query execution plan with runtime metrics for each step.
#### Example
```ts
import * as lancedb from "@lancedb/lancedb"
const db = await lancedb.connect("./.lancedb");
const table = await db.createTable("my_table", [
{ vector: [1.1, 0.9], id: "1" },
]);
const plan = await table.query().nearestTo([0.5, 0.2]).analyzePlan();
Example output (with runtime metrics inlined):
AnalyzeExec verbose=true, metrics=[]
ProjectionExec: expr=[id@3 as id, vector@0 as vector, _distance@2 as _distance], metrics=[output_rows=1, elapsed_compute=3.292µs]
Take: columns="vector, _rowid, _distance, (id)", metrics=[output_rows=1, elapsed_compute=66.001µs, batches_processed=1, bytes_read=8, iops=1, requests=1]
CoalesceBatchesExec: target_batch_size=1024, metrics=[output_rows=1, elapsed_compute=3.333µs]
GlobalLimitExec: skip=0, fetch=10, metrics=[output_rows=1, elapsed_compute=167ns]
FilterExec: _distance@2 IS NOT NULL, metrics=[output_rows=1, elapsed_compute=8.542µs]
SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], metrics=[output_rows=1, elapsed_compute=63.25µs, row_replacements=1]
KNNVectorDistance: metric=l2, metrics=[output_rows=1, elapsed_compute=114.333µs, output_batches=1]
LanceScan: uri=/path/to/data, projection=[vector], row_id=true, row_addr=false, ordered=false, metrics=[output_rows=1, elapsed_compute=103.626µs, bytes_read=549, iops=2, requests=2]
```
#### Inherited from
[`QueryBase`](QueryBase.md).[`analyzePlan`](QueryBase.md#analyzeplan)
***
### execute() ### execute()
```ts ```ts


@@ -36,6 +36,49 @@ protected inner: NativeQueryType | Promise<NativeQueryType>;
## Methods ## Methods
### analyzePlan()
```ts
analyzePlan(): Promise<string>
```
Executes the query and returns the physical query plan annotated with runtime metrics.
This is useful for debugging and performance analysis, as it shows how the query was executed
and includes metrics such as elapsed time, rows processed, and I/O statistics.
#### Returns
`Promise`&lt;`string`&gt;
A query execution plan with runtime metrics for each step.
#### Example
```ts
import * as lancedb from "@lancedb/lancedb"
const db = await lancedb.connect("./.lancedb");
const table = await db.createTable("my_table", [
{ vector: [1.1, 0.9], id: "1" },
]);
const plan = await table.query().nearestTo([0.5, 0.2]).analyzePlan();
Example output (with runtime metrics inlined):
AnalyzeExec verbose=true, metrics=[]
ProjectionExec: expr=[id@3 as id, vector@0 as vector, _distance@2 as _distance], metrics=[output_rows=1, elapsed_compute=3.292µs]
Take: columns="vector, _rowid, _distance, (id)", metrics=[output_rows=1, elapsed_compute=66.001µs, batches_processed=1, bytes_read=8, iops=1, requests=1]
CoalesceBatchesExec: target_batch_size=1024, metrics=[output_rows=1, elapsed_compute=3.333µs]
GlobalLimitExec: skip=0, fetch=10, metrics=[output_rows=1, elapsed_compute=167ns]
FilterExec: _distance@2 IS NOT NULL, metrics=[output_rows=1, elapsed_compute=8.542µs]
SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], metrics=[output_rows=1, elapsed_compute=63.25µs, row_replacements=1]
KNNVectorDistance: metric=l2, metrics=[output_rows=1, elapsed_compute=114.333µs, output_batches=1]
LanceScan: uri=/path/to/data, projection=[vector], row_id=true, row_addr=false, ordered=false, metrics=[output_rows=1, elapsed_compute=103.626µs, bytes_read=549, iops=2, requests=2]
```
***
### execute() ### execute()
```ts ```ts


@@ -48,6 +48,53 @@ addQueryVector(vector): VectorQuery
*** ***
### analyzePlan()
```ts
analyzePlan(): Promise<string>
```
Executes the query and returns the physical query plan annotated with runtime metrics.
This is useful for debugging and performance analysis, as it shows how the query was executed
and includes metrics such as elapsed time, rows processed, and I/O statistics.
#### Returns
`Promise`&lt;`string`&gt;
A query execution plan with runtime metrics for each step.
#### Example
```ts
import * as lancedb from "@lancedb/lancedb"
const db = await lancedb.connect("./.lancedb");
const table = await db.createTable("my_table", [
{ vector: [1.1, 0.9], id: "1" },
]);
const plan = await table.query().nearestTo([0.5, 0.2]).analyzePlan();
Example output (with runtime metrics inlined):
AnalyzeExec verbose=true, metrics=[]
ProjectionExec: expr=[id@3 as id, vector@0 as vector, _distance@2 as _distance], metrics=[output_rows=1, elapsed_compute=3.292µs]
Take: columns="vector, _rowid, _distance, (id)", metrics=[output_rows=1, elapsed_compute=66.001µs, batches_processed=1, bytes_read=8, iops=1, requests=1]
CoalesceBatchesExec: target_batch_size=1024, metrics=[output_rows=1, elapsed_compute=3.333µs]
GlobalLimitExec: skip=0, fetch=10, metrics=[output_rows=1, elapsed_compute=167ns]
FilterExec: _distance@2 IS NOT NULL, metrics=[output_rows=1, elapsed_compute=8.542µs]
SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], metrics=[output_rows=1, elapsed_compute=63.25µs, row_replacements=1]
KNNVectorDistance: metric=l2, metrics=[output_rows=1, elapsed_compute=114.333µs, output_batches=1]
LanceScan: uri=/path/to/data, projection=[vector], row_id=true, row_addr=false, ordered=false, metrics=[output_rows=1, elapsed_compute=103.626µs, bytes_read=549, iops=2, requests=2]
```
#### Inherited from
[`QueryBase`](QueryBase.md).[`analyzePlan`](QueryBase.md#analyzeplan)
***
### bypassVectorIndex() ### bypassVectorIndex()
```ts ```ts


@@ -0,0 +1,19 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / packBits
# Function: packBits()
```ts
function packBits(data): number[]
```
## Parameters
* **data**: `number`[]
## Returns
`number`[]
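## Example
Assuming `packBits` packs an array of 0/1 values into bytes, eight bits per output number (its use alongside the binary-vector helpers suggests this), a usage sketch:
```ts
import { packBits } from "@lancedb/lancedb";

// Pack eight bits into a single byte value
const bits = [1, 0, 1, 1, 0, 0, 1, 0];
const packed = packBits(bits);
console.log(packed); // one number encoding the eight input bits
```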


@@ -39,6 +39,7 @@
- [IndexConfig](interfaces/IndexConfig.md) - [IndexConfig](interfaces/IndexConfig.md)
- [IndexOptions](interfaces/IndexOptions.md) - [IndexOptions](interfaces/IndexOptions.md)
- [IndexStatistics](interfaces/IndexStatistics.md) - [IndexStatistics](interfaces/IndexStatistics.md)
- [IvfFlatOptions](interfaces/IvfFlatOptions.md)
- [IvfPqOptions](interfaces/IvfPqOptions.md) - [IvfPqOptions](interfaces/IvfPqOptions.md)
- [OpenTableOptions](interfaces/OpenTableOptions.md) - [OpenTableOptions](interfaces/OpenTableOptions.md)
- [OptimizeOptions](interfaces/OptimizeOptions.md) - [OptimizeOptions](interfaces/OptimizeOptions.md)
@@ -66,3 +67,4 @@
- [connect](functions/connect.md) - [connect](functions/connect.md)
- [makeArrowTable](functions/makeArrowTable.md) - [makeArrowTable](functions/makeArrowTable.md)
- [packBits](functions/packBits.md)


@@ -16,7 +16,7 @@ must be provided.
### dataType? ### dataType?
```ts ```ts
optional dataType: string; optional dataType: string | DataType<Type, any>;
``` ```
A new data type for the column. If not provided then the data type will not be changed. A new data type for the column. If not provided then the data type will not be changed.


@@ -44,7 +44,7 @@ for testing purposes.
### readConsistencyInterval? ### readConsistencyInterval?
```ts ```ts
optional readConsistencyInterval: number; optional readConsistencyInterval: null | number;
``` ```
(For LanceDB OSS only): The interval, in seconds, at which to check for (For LanceDB OSS only): The interval, in seconds, at which to check for


@@ -24,18 +24,18 @@ The following distance types are available:
"l2" - Euclidean distance. This is a very common distance metric that "l2" - Euclidean distance. This is a very common distance metric that
accounts for both magnitude and direction when determining the distance accounts for both magnitude and direction when determining the distance
between vectors. L2 distance has a range of [0, ∞). between vectors. l2 distance has a range of [0, ∞).
"cosine" - Cosine distance. Cosine distance is a distance metric "cosine" - Cosine distance. Cosine distance is a distance metric
calculated from the cosine similarity between two vectors. Cosine calculated from the cosine similarity between two vectors. Cosine
similarity is a measure of similarity between two non-zero vectors of an similarity is a measure of similarity between two non-zero vectors of an
inner product space. It is defined to equal the cosine of the angle inner product space. It is defined to equal the cosine of the angle
between them. Unlike L2, the cosine distance is not affected by the between them. Unlike l2, the cosine distance is not affected by the
magnitude of the vectors. Cosine distance has a range of [0, 2]. magnitude of the vectors. Cosine distance has a range of [0, 2].
"dot" - Dot product. Dot distance is the dot product of two vectors. Dot "dot" - Dot product. Dot distance is the dot product of two vectors. Dot
distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their
L2 norm is 1), then dot distance is equivalent to the cosine distance. l2 norm is 1), then dot distance is equivalent to the cosine distance.
*** ***


@@ -24,18 +24,18 @@ The following distance types are available:
"l2" - Euclidean distance. This is a very common distance metric that "l2" - Euclidean distance. This is a very common distance metric that
accounts for both magnitude and direction when determining the distance accounts for both magnitude and direction when determining the distance
between vectors. L2 distance has a range of [0, ∞). between vectors. l2 distance has a range of [0, ∞).
"cosine" - Cosine distance. Cosine distance is a distance metric "cosine" - Cosine distance. Cosine distance is a distance metric
calculated from the cosine similarity between two vectors. Cosine calculated from the cosine similarity between two vectors. Cosine
similarity is a measure of similarity between two non-zero vectors of an similarity is a measure of similarity between two non-zero vectors of an
inner product space. It is defined to equal the cosine of the angle inner product space. It is defined to equal the cosine of the angle
between them. Unlike L2, the cosine distance is not affected by the between them. Unlike l2, the cosine distance is not affected by the
magnitude of the vectors. Cosine distance has a range of [0, 2]. magnitude of the vectors. Cosine distance has a range of [0, 2].
"dot" - Dot product. Dot distance is the dot product of two vectors. Dot "dot" - Dot product. Dot distance is the dot product of two vectors. Dot
distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their
L2 norm is 1), then dot distance is equivalent to the cosine distance. l2 norm is 1), then dot distance is equivalent to the cosine distance.
*** ***


@@ -30,6 +30,17 @@ The type of the index
*** ***
### loss?
```ts
optional loss: number;
```
The KMeans loss value of the index,
it is only present for vector indices.
***
### numIndexedRows ### numIndexedRows
```ts ```ts


@@ -0,0 +1,112 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / IvfFlatOptions
# Interface: IvfFlatOptions
Options to create an `IVF_FLAT` index
## Properties
### distanceType?
```ts
optional distanceType: "l2" | "cosine" | "dot" | "hamming";
```
Distance type to use to build the index.
Default value is "l2".
This is used when training the index to calculate the IVF partitions
(vectors are grouped in partitions with similar vectors according to this
distance type).
The distance type used to train an index MUST match the distance type used
to search the index. Failure to do so will yield inaccurate results.
The following distance types are available:
"l2" - Euclidean distance. This is a very common distance metric that
accounts for both magnitude and direction when determining the distance
between vectors. l2 distance has a range of [0, ∞).
"cosine" - Cosine distance. Cosine distance is a distance metric
calculated from the cosine similarity between two vectors. Cosine
similarity is a measure of similarity between two non-zero vectors of an
inner product space. It is defined to equal the cosine of the angle
between them. Unlike l2, the cosine distance is not affected by the
magnitude of the vectors. Cosine distance has a range of [0, 2].
Note: the cosine distance is undefined when one (or both) of the vectors
are all zeros (there is no direction). These vectors are invalid and may
never be returned from a vector search.
"dot" - Dot product. Dot distance is the dot product of two vectors. Dot
distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their
l2 norm is 1), then dot distance is equivalent to the cosine distance.
"hamming" - Hamming distance. Hamming distance is a distance metric
calculated from the number of bits that are different between two vectors.
Hamming distance has a range of [0, dimension]. Note that the hamming distance
is only valid for binary vectors.
***
### maxIterations?
```ts
optional maxIterations: number;
```
Max iterations to train IVF kmeans.
When training an IVF FLAT index we use kmeans to calculate the partitions. This parameter
controls how many iterations of kmeans to run.
Increasing this might improve the quality of the index but in most cases these extra
iterations have diminishing returns.
The default value is 50.
***
### numPartitions?
```ts
optional numPartitions: number;
```
The number of IVF partitions to create.
This value should generally scale with the number of rows in the dataset.
By default the number of partitions is the square root of the number of
rows.
If this value is too large then the first part of the search (picking the
right partition) will be slow. If this value is too small then the second
part of the search (searching within a partition) will be slow.
***
### sampleRate?
```ts
optional sampleRate: number;
```
The number of vectors, per partition, to sample when training IVF kmeans.
When an IVF FLAT index is trained, we need to calculate partitions. These are groups
of vectors that are similar to each other. To do this we use an algorithm called kmeans.
Running kmeans on a large dataset can be slow. To speed this up we run kmeans on a
random sample of the data. This parameter controls the size of the sample. The total
number of vectors used to train the index is `sample_rate * num_partitions`.
Increasing this value might improve the quality of the index but in most cases the
default should be sufficient.
The default value is 256.
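A hedged end-to-end sketch keeping the train-time and search-time distance types in sync, as required above (connection URI, table, and query vector are illustrative):
```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("./.lancedb");
const tbl = await db.openTable("my_table");

// Train the IVF_FLAT index with cosine distance...
await tbl.createIndex("vector", {
  config: lancedb.Index.ivfFlat({ distanceType: "cosine" }),
});

// ...and search with the same distance type
const results = await tbl
  .query()
  .nearestTo([0.1, 0.2, 0.3])
  .distanceType("cosine")
  .limit(10)
  .toArray();
```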


@@ -31,13 +31,13 @@ The following distance types are available:
"l2" - Euclidean distance. This is a very common distance metric that "l2" - Euclidean distance. This is a very common distance metric that
accounts for both magnitude and direction when determining the distance accounts for both magnitude and direction when determining the distance
between vectors. L2 distance has a range of [0, ∞). between vectors. l2 distance has a range of [0, ∞).
"cosine" - Cosine distance. Cosine distance is a distance metric "cosine" - Cosine distance. Cosine distance is a distance metric
calculated from the cosine similarity between two vectors. Cosine calculated from the cosine similarity between two vectors. Cosine
similarity is a measure of similarity between two non-zero vectors of an similarity is a measure of similarity between two non-zero vectors of an
inner product space. It is defined to equal the cosine of the angle inner product space. It is defined to equal the cosine of the angle
between them. Unlike L2, the cosine distance is not affected by the between them. Unlike l2, the cosine distance is not affected by the
magnitude of the vectors. Cosine distance has a range of [0, 2]. magnitude of the vectors. Cosine distance has a range of [0, 2].
Note: the cosine distance is undefined when one (or both) of the vectors Note: the cosine distance is undefined when one (or both) of the vectors
@@ -46,7 +46,7 @@ never be returned from a vector search.
"dot" - Dot product. Dot distance is the dot product of two vectors. Dot "dot" - Dot product. Dot distance is the dot product of two vectors. Dot
distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their
L2 norm is 1), then dot distance is equivalent to the cosine distance. l2 norm is 1), then dot distance is equivalent to the cosine distance.
*** ***

File diff suppressed because one or more lines are too long


@@ -59,8 +59,6 @@ is also an [asynchronous API client](#connections-asynchronous).
::: lancedb.embeddings.open_clip.OpenClipEmbeddings ::: lancedb.embeddings.open_clip.OpenClipEmbeddings
::: lancedb.embeddings.utils.with_embeddings
## Context ## Context
::: lancedb.context.contextualize ::: lancedb.context.contextualize


@@ -15,7 +15,7 @@ Currently, LanceDB supports the following metrics:
| Metric | Description | | Metric | Description |
| --------- | --------------------------------------------------------------------------- | | --------- | --------------------------------------------------------------------------- |
| `l2` | [Euclidean / L2 distance](https://en.wikipedia.org/wiki/Euclidean_distance) | | `l2` | [Euclidean / l2 distance](https://en.wikipedia.org/wiki/Euclidean_distance) |
| `cosine` | [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity) | | `cosine` | [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity) |
| `dot` | [Dot Product](https://en.wikipedia.org/wiki/Dot_product) | | `dot` | [Dot Product](https://en.wikipedia.org/wiki/Dot_product) |
| `hamming` | [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) | | `hamming` | [Hamming Distance](https://en.wikipedia.org/wiki/Hamming_distance) |
@@ -138,6 +138,19 @@ LanceDB supports binary vectors as a data type, and has the ability to search bi
--8<-- "python/python/tests/docs/test_binary_vector.py:async_binary_vector" --8<-- "python/python/tests/docs/test_binary_vector.py:async_binary_vector"
``` ```
=== "TypeScript"
```ts
--8<-- "nodejs/examples/search.test.ts:import"
--8<-- "nodejs/examples/search.test.ts:import_bin_util"
--8<-- "nodejs/examples/search.test.ts:ingest_binary_data"
--8<-- "nodejs/examples/search.test.ts:search_binary_data"
```
## Multivector type ## Multivector type
LanceDB supports a multivector type, which is useful when you have multiple vectors for a single item (e.g. with ColBERT and ColPali). LanceDB supports a multivector type, which is useful when you have multiple vectors for a single item (e.g. with ColBERT and ColPali).


@@ -7,7 +7,7 @@ performed on the top-k results returned by the vector search. However, pre-filte
option that performs the filter prior to vector search. This can be useful to narrow down option that performs the filter prior to vector search. This can be useful to narrow down
the search space of a very large dataset to reduce query latency. the search space of a very large dataset to reduce query latency.
Note that both pre-filtering and post-filtering can yield false positives. For pre-filtering, if the filter is too selective, it might eliminate relevant items that the vector search would have otherwise identified as a good match. In this case, increasing `nprobes` parameter will help reduce such false positives. It is recommended to set `use_index=false` if you know that the filter is highly selective. Note that both pre-filtering and post-filtering can yield false positives. For pre-filtering, if the filter is too selective, it might eliminate relevant items that the vector search would have otherwise identified as a good match. In this case, increasing `nprobes` parameter will help reduce such false positives. It is recommended to call `bypass_vector_index()` if you know that the filter is highly selective.
Similarly, a highly selective post-filter can lead to false positives. Increasing both `nprobes` and `refine_factor` can mitigate this issue. When deciding between pre-filtering and post-filtering, pre-filtering is generally the safer choice if you're uncertain. Similarly, a highly selective post-filter can lead to false positives. Increasing both `nprobes` and `refine_factor` can mitigate this issue. When deciding between pre-filtering and post-filtering, pre-filtering is generally the safer choice if you're uncertain.
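A hedged sync-API sketch of a highly selective pre-filter that bypasses the index, per the recommendation above (table, query vector, and filter are illustrative):
```python
import lancedb

db = lancedb.connect("./.lancedb")
tbl = db.open_table("my_table")

results = (
    tbl.search([0.5, 0.2])
    .where("category = 'news'", prefilter=True)  # filter before the vector search
    .bypass_vector_index()  # brute-force scan; exact results for selective filters
    .limit(10)
    .to_list()
)
```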


@@ -8,6 +8,11 @@ For trouble shooting, the best place to ask is in our Discord, under the relevan
language channel. By asking in the language-specific channel, it makes it more language channel. By asking in the language-specific channel, it makes it more
likely that someone who knows the answer will see your question. likely that someone who knows the answer will see your question.
## Common issues
* Multiprocessing with `fork` is not supported. You should use `spawn` instead; see the sketch after this list.
* Data returned by queries may not reflect the most recent writes, depending on configuration. LanceDB uses eventual consistency by default. See [consistency](/docs/src/guides/tables.md#consistency) for more information.
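A hedged sketch of the `spawn` workaround (worker body and URI are illustrative):
```python
import multiprocessing as mp

def worker(uri: str) -> None:
    import lancedb  # import inside the worker keeps spawn startup clean
    db = lancedb.connect(uri)
    print(db.table_names())

if __name__ == "__main__":
    # Use the spawn start method rather than fork
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=worker, args=("./.lancedb",))
    p.start()
    p.join()
```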
## Enabling logging ## Enabling logging
To provide more information, especially for LanceDB Cloud related issues, enable To provide more information, especially for LanceDB Cloud related issues, enable
@@ -31,3 +36,9 @@ print the resolved query plan. You can use the `explain_plan` method to do this:
* Python Sync: [LanceQueryBuilder.explain_plan][lancedb.query.LanceQueryBuilder.explain_plan] * Python Sync: [LanceQueryBuilder.explain_plan][lancedb.query.LanceQueryBuilder.explain_plan]
* Python Async: [AsyncQueryBase.explain_plan][lancedb.query.AsyncQueryBase.explain_plan] * Python Async: [AsyncQueryBase.explain_plan][lancedb.query.AsyncQueryBase.explain_plan]
* Node @lancedb/lancedb: [LanceQueryBuilder.explainPlan](/lancedb/js/classes/QueryBase/#explainplan) * Node @lancedb/lancedb: [LanceQueryBuilder.explainPlan](/lancedb/js/classes/QueryBase/#explainplan)
To understand how a query was actually executed—including metrics like execution time, number of rows processed, I/O stats, and more—use the analyze_plan method. This executes the query and returns a physical execution plan annotated with runtime metrics, making it especially helpful for performance tuning and debugging.
* Python Sync: [LanceQueryBuilder.analyze_plan][lancedb.query.LanceQueryBuilder.analyze_plan]
* Python Async: [AsyncQueryBase.analyze_plan][lancedb.query.AsyncQueryBase.analyze_plan]
* Node @lancedb/lancedb: [LanceQueryBuilder.analyzePlan](/lancedb/js/classes/QueryBase/#analyzePlan)
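A hedged sync-API sketch (table and query vector are illustrative):
```python
import lancedb

db = lancedb.connect("./.lancedb")
tbl = db.open_table("my_table")

# Executes the query and returns the physical plan annotated with runtime metrics
print(tbl.search([0.5, 0.2]).analyze_plan())
```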

java/.gitignore (new file)

@@ -0,0 +1,3 @@
*.iml
.java-version


@@ -8,13 +8,16 @@
<parent> <parent>
<groupId>com.lancedb</groupId> <groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId> <artifactId>lancedb-parent</artifactId>
<version>0.16.1-beta.3</version> <version>0.18.3-beta.0</version>
<relativePath>../pom.xml</relativePath> <relativePath>../pom.xml</relativePath>
</parent> </parent>
<artifactId>lancedb-core</artifactId> <artifactId>lancedb-core</artifactId>
<name>LanceDB Core</name> <name>LanceDB Core</name>
<packaging>jar</packaging> <packaging>jar</packaging>
<properties>
<rust.release.build>false</rust.release.build>
</properties>
<dependencies> <dependencies>
<dependency> <dependency>
@@ -68,7 +71,7 @@
</goals> </goals>
<configuration> <configuration>
<path>lancedb-jni</path> <path>lancedb-jni</path>
<release>true</release> <release>${rust.release.build}</release>
<!-- Copy native libraries to target/classes for runtime access --> <!-- Copy native libraries to target/classes for runtime access -->
<copyTo>${project.build.directory}/classes/nativelib</copyTo> <copyTo>${project.build.directory}/classes/nativelib</copyTo>
<copyWithPlatformDir>true</copyWithPlatformDir> <copyWithPlatformDir>true</copyWithPlatformDir>


@@ -1,16 +1,25 @@
// SPDX-License-Identifier: Apache-2.0 /*
// SPDX-FileCopyrightText: Copyright The LanceDB Authors * Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.lancedb.lancedb; package com.lancedb.lancedb;
import io.questdb.jar.jni.JarJniLoader; import io.questdb.jar.jni.JarJniLoader;
import java.io.Closeable; import java.io.Closeable;
import java.util.List; import java.util.List;
import java.util.Optional; import java.util.Optional;
/** /** Represents LanceDB database. */
* Represents LanceDB database.
*/
public class Connection implements Closeable { public class Connection implements Closeable {
static { static {
JarJniLoader.loadLib(Connection.class, "/nativelib", "lancedb_jni"); JarJniLoader.loadLib(Connection.class, "/nativelib", "lancedb_jni");
@@ -18,14 +27,11 @@ public class Connection implements Closeable {
private long nativeConnectionHandle; private long nativeConnectionHandle;
/** /** Connect to a LanceDB instance. */
* Connect to a LanceDB instance.
*/
public static native Connection connect(String uri); public static native Connection connect(String uri);
/** /**
* Get the names of all tables in the database. The names are sorted in * Get the names of all tables in the database. The names are sorted in ascending order.
* ascending order.
* *
* @return the table names * @return the table names
*/ */
@@ -34,8 +40,7 @@ public class Connection implements Closeable {
} }
/** /**
* Get the names of filtered tables in the database. The names are sorted in * Get the names of filtered tables in the database. The names are sorted in ascending order.
* ascending order.
* *
* @param limit The number of results to return. * @param limit The number of results to return.
* @return the table names * @return the table names
@@ -45,12 +50,11 @@ public class Connection implements Closeable {
} }
/** /**
* Get the names of filtered tables in the database. The names are sorted in * Get the names of filtered tables in the database. The names are sorted in ascending order.
* ascending order.
* *
* @param startAfter If present, only return names that come lexicographically after the supplied * @param startAfter If present, only return names that come lexicographically after the supplied
* value. This can be combined with limit to implement pagination * value. This can be combined with limit to implement pagination by setting this to the last
* by setting this to the last table name from the previous page. * table name from the previous page.
* @return the table names * @return the table names
*/ */
public List<String> tableNames(String startAfter) { public List<String> tableNames(String startAfter) {
@@ -58,12 +62,11 @@ public class Connection implements Closeable {
} }
/** /**
* Get the names of filtered tables in the database. The names are sorted in * Get the names of filtered tables in the database. The names are sorted in ascending order.
* ascending order.
* *
* @param startAfter If present, only return names that come lexicographically after the supplied * @param startAfter If present, only return names that come lexicographically after the supplied
* value. This can be combined with limit to implement pagination * value. This can be combined with limit to implement pagination by setting this to the last
* by setting this to the last table name from the previous page. * table name from the previous page.
* @param limit The number of results to return. * @param limit The number of results to return.
* @return the table names * @return the table names
*/ */
@@ -72,22 +75,19 @@ public class Connection implements Closeable {
} }
/** /**
* Get the names of filtered tables in the database. The names are sorted in * Get the names of filtered tables in the database. The names are sorted in ascending order.
* ascending order.
* *
* @param startAfter If present, only return names that come lexicographically after the supplied * @param startAfter If present, only return names that come lexicographically after the supplied
* value. This can be combined with limit to implement pagination * value. This can be combined with limit to implement pagination by setting this to the last
* by setting this to the last table name from the previous page. * table name from the previous page.
* @param limit The number of results to return. * @param limit The number of results to return.
* @return the table names * @return the table names
*/ */
public native List<String> tableNames( public native List<String> tableNames(Optional<String> startAfter, Optional<Integer> limit);
Optional<String> startAfter, Optional<Integer> limit);
/** /**
* Closes this connection and releases any system resources associated with it. If * Closes this connection and releases any system resources associated with it. If the connection
* the connection is * is already closed, then invoking this method has no effect.
* already closed, then invoking this method has no effect.
*/ */
@Override @Override
public void close() { public void close() {
@@ -98,8 +98,7 @@ public class Connection implements Closeable {
} }
/** /**
* Native method to release the Lance connection resources associated with the * Native method to release the Lance connection resources associated with the given handle.
* given handle.
* *
* @param handle The native handle to the connection resource. * @param handle The native handle to the connection resource.
*/ */


@@ -1,27 +1,35 @@
// SPDX-License-Identifier: Apache-2.0 /*
// SPDX-FileCopyrightText: Copyright The LanceDB Authors * Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.lancedb.lancedb; package com.lancedb.lancedb;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import java.nio.file.Path;
import java.util.List;
import java.net.URL;
import org.junit.jupiter.api.BeforeAll; import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir; import org.junit.jupiter.api.io.TempDir;
import java.net.URL;
import java.nio.file.Path;
import java.util.List;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
public class ConnectionTest { public class ConnectionTest {
private static final String[] TABLE_NAMES = { private static final String[] TABLE_NAMES = {
"dataset_version", "dataset_version", "new_empty_dataset", "test", "write_stream"
"new_empty_dataset",
"test",
"write_stream"
}; };
@TempDir @TempDir static Path tempDir; // Temporary directory for the tests
static Path tempDir; // Temporary directory for the tests
private static URL lanceDbURL; private static URL lanceDbURL;
@BeforeAll @BeforeAll
@@ -53,18 +61,21 @@ public class ConnectionTest {
@Test @Test
void tableNamesStartAfter() { void tableNamesStartAfter() {
try (Connection conn = Connection.connect(lanceDbURL.toString())) { try (Connection conn = Connection.connect(lanceDbURL.toString())) {
assertTableNamesStartAfter(conn, TABLE_NAMES[0], 3, TABLE_NAMES[1], TABLE_NAMES[2], TABLE_NAMES[3]); assertTableNamesStartAfter(
conn, TABLE_NAMES[0], 3, TABLE_NAMES[1], TABLE_NAMES[2], TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, TABLE_NAMES[1], 2, TABLE_NAMES[2], TABLE_NAMES[3]); assertTableNamesStartAfter(conn, TABLE_NAMES[1], 2, TABLE_NAMES[2], TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, TABLE_NAMES[2], 1, TABLE_NAMES[3]); assertTableNamesStartAfter(conn, TABLE_NAMES[2], 1, TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, TABLE_NAMES[3], 0); assertTableNamesStartAfter(conn, TABLE_NAMES[3], 0);
assertTableNamesStartAfter(conn, "a_dataset", 4, TABLE_NAMES[0], TABLE_NAMES[1], TABLE_NAMES[2], TABLE_NAMES[3]); assertTableNamesStartAfter(
conn, "a_dataset", 4, TABLE_NAMES[0], TABLE_NAMES[1], TABLE_NAMES[2], TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, "o_dataset", 2, TABLE_NAMES[2], TABLE_NAMES[3]); assertTableNamesStartAfter(conn, "o_dataset", 2, TABLE_NAMES[2], TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, "v_dataset", 1, TABLE_NAMES[3]); assertTableNamesStartAfter(conn, "v_dataset", 1, TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, "z_dataset", 0); assertTableNamesStartAfter(conn, "z_dataset", 0);
} }
} }
private void assertTableNamesStartAfter(Connection conn, String startAfter, int expectedSize, String... expectedNames) { private void assertTableNamesStartAfter(
Connection conn, String startAfter, int expectedSize, String... expectedNames) {
List<String> tableNames = conn.tableNames(startAfter); List<String> tableNames = conn.tableNames(startAfter);
assertEquals(expectedSize, tableNames.size()); assertEquals(expectedSize, tableNames.size());
for (int i = 0; i < expectedNames.length; i++) { for (int i = 0; i < expectedNames.length; i++) {
@@ -74,7 +85,7 @@ public class ConnectionTest {
@Test @Test
void tableNamesLimit() { void tableNamesLimit() {
try (Connection conn = Connection.connect(lanceDbURL.toString())) { try (Connection conn = Connection.connect(lanceDbURL.toString())) {
for (int i = 0; i <= TABLE_NAMES.length; i++) { for (int i = 0; i <= TABLE_NAMES.length; i++) {
List<String> tableNames = conn.tableNames(i); List<String> tableNames = conn.tableNames(i);
assertEquals(i, tableNames.size()); assertEquals(i, tableNames.size());


@@ -6,7 +6,7 @@
<groupId>com.lancedb</groupId> <groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId> <artifactId>lancedb-parent</artifactId>
<version>0.16.1-beta.3</version> <version>0.18.3-beta.0</version>
<packaging>pom</packaging> <packaging>pom</packaging>
<name>LanceDB Parent</name> <name>LanceDB Parent</name>
@@ -29,6 +29,25 @@
<properties> <properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<arrow.version>15.0.0</arrow.version> <arrow.version>15.0.0</arrow.version>
<spotless.skip>false</spotless.skip>
<spotless.version>2.30.0</spotless.version>
<spotless.java.googlejavaformat.version>1.7</spotless.java.googlejavaformat.version>
<spotless.delimiter>package</spotless.delimiter>
<spotless.license.header>
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
</spotless.license.header>
</properties> </properties>
<modules> <modules>
@@ -127,7 +146,8 @@
<configuration> <configuration>
<configLocation>google_checks.xml</configLocation> <configLocation>google_checks.xml</configLocation>
<consoleOutput>true</consoleOutput> <consoleOutput>true</consoleOutput>
<failsOnError>true</failsOnError> <failsOnError>false</failsOnError>
<failOnViolation>false</failOnViolation>
<violationSeverity>warning</violationSeverity> <violationSeverity>warning</violationSeverity>
<linkXRef>false</linkXRef> <linkXRef>false</linkXRef>
</configuration> </configuration>
@@ -141,6 +161,10 @@
</execution> </execution>
</executions> </executions>
</plugin> </plugin>
<plugin>
<groupId>com.diffplug.spotless</groupId>
<artifactId>spotless-maven-plugin</artifactId>
</plugin>
</plugins> </plugins>
<pluginManagement> <pluginManagement>
<plugins> <plugins>
@@ -166,7 +190,6 @@
<artifactId>maven-surefire-plugin</artifactId> <artifactId>maven-surefire-plugin</artifactId>
<version>3.2.5</version> <version>3.2.5</version>
<configuration> <configuration>
<argLine>--add-opens=java.base/java.nio=ALL-UNNAMED</argLine>
<forkNode <forkNode
implementation="org.apache.maven.plugin.surefire.extensions.SurefireForkNodeFactory" /> implementation="org.apache.maven.plugin.surefire.extensions.SurefireForkNodeFactory" />
<useSystemClassLoader>false</useSystemClassLoader> <useSystemClassLoader>false</useSystemClassLoader>
@@ -180,6 +203,54 @@
<artifactId>maven-install-plugin</artifactId> <artifactId>maven-install-plugin</artifactId>
<version>2.5.2</version> <version>2.5.2</version>
</plugin> </plugin>
<plugin>
<groupId>com.diffplug.spotless</groupId>
<artifactId>spotless-maven-plugin</artifactId>
<version>${spotless.version}</version>
<configuration>
<skip>${spotless.skip}</skip>
<upToDateChecking>
<enabled>true</enabled>
</upToDateChecking>
<java>
<includes>
<include>src/main/java/**/*.java</include>
<include>src/test/java/**/*.java</include>
</includes>
<googleJavaFormat>
<version>${spotless.java.googlejavaformat.version}</version>
<style>GOOGLE</style>
</googleJavaFormat>
<importOrder>
<order>com.lancedb.lance,,javax,java,\#</order>
</importOrder>
<removeUnusedImports />
</java>
<scala>
<includes>
<include>src/main/scala/**/*.scala</include>
<include>src/main/scala-*/**/*.scala</include>
<include>src/test/scala/**/*.scala</include>
<include>src/test/scala-*/**/*.scala</include>
</includes>
</scala>
<licenseHeader>
<content>${spotless.license.header}</content>
<delimiter>${spotless.delimiter}</delimiter>
</licenseHeader>
</configuration>
<executions>
<execution>
<id>spotless-check</id>
<phase>validate</phase>
<goals>
<goal>apply</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins> </plugins>
</pluginManagement> </pluginManagement>
</build> </build>
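Note on the wiring above: the `spotless-check` execution is bound to the `validate` phase but runs the `apply` goal, so an ordinary `mvn` build reformats sources in place rather than merely failing on violations; `mvn spotless:apply` also works standalone, and `-Dspotless.skip=true` disables formatting through the `spotless.skip` property.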

node/package-lock.json (generated; 86 changed lines)

@@ -1,12 +1,12 @@
{ {
"name": "vectordb", "name": "vectordb",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"lockfileVersion": 3, "lockfileVersion": 3,
"requires": true, "requires": true,
"packages": { "packages": {
"": { "": {
"name": "vectordb", "name": "vectordb",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"cpu": [ "cpu": [
"x64", "x64",
"arm64" "arm64"
@@ -52,14 +52,11 @@
"uuid": "^9.0.0" "uuid": "^9.0.0"
}, },
"optionalDependencies": { "optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.16.1-beta.3", "@lancedb/vectordb-darwin-arm64": "0.18.3-beta.0",
"@lancedb/vectordb-darwin-x64": "0.16.1-beta.3", "@lancedb/vectordb-darwin-x64": "0.18.3-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.16.1-beta.3", "@lancedb/vectordb-linux-arm64-gnu": "0.18.3-beta.0",
"@lancedb/vectordb-linux-arm64-musl": "0.16.1-beta.3", "@lancedb/vectordb-linux-x64-gnu": "0.18.3-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.16.1-beta.3", "@lancedb/vectordb-win32-x64-msvc": "0.18.3-beta.0"
"@lancedb/vectordb-linux-x64-musl": "0.16.1-beta.3",
"@lancedb/vectordb-win32-arm64-msvc": "0.16.1-beta.3",
"@lancedb/vectordb-win32-x64-msvc": "0.16.1-beta.3"
}, },
"peerDependencies": { "peerDependencies": {
"@apache-arrow/ts": "^14.0.2", "@apache-arrow/ts": "^14.0.2",
@@ -330,9 +327,9 @@
} }
}, },
"node_modules/@lancedb/vectordb-darwin-arm64": { "node_modules/@lancedb/vectordb-darwin-arm64": {
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.16.1-beta.3.tgz", "resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.18.3-beta.0.tgz",
"integrity": "sha512-k2dfDNvoFjZuF8RCkFX9yFkLIg292mFg+o6IUeXndlikhABi8F+NbRODGUxJf3QUioks2tGF831KFoV5oQyeEA==", "integrity": "sha512-dhJ5VlXV2N/L67mIpTSePhb8krX0FyQgpuz3I+4T4vYuU5JEF3cmedQ5TF5+3cGJhZim4PHRYLkfgCyTlxcqUg==",
"cpu": [ "cpu": [
"arm64" "arm64"
], ],
@@ -343,9 +340,9 @@
] ]
}, },
"node_modules/@lancedb/vectordb-darwin-x64": { "node_modules/@lancedb/vectordb-darwin-x64": {
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.16.1-beta.3.tgz", "resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.18.3-beta.0.tgz",
"integrity": "sha512-pYvwcAXBB3MXxa2kvK8PxMoEsaE+EFld5pky6dDo6qJQVepUz9pi/e1FTLxW6m0mgwtRj52P6xe55sj1Yln9Qw==", "integrity": "sha512-SHqPkuyfe87d5skf9GERzdeu6AKvVIbXMUwl5N+dVrE7HH6qiuP2HvOmiyHS2lJFgo0Ph8jSBVzPDxxtjF36Dg==",
"cpu": [ "cpu": [
"x64" "x64"
], ],
@@ -356,22 +353,9 @@
] ]
}, },
"node_modules/@lancedb/vectordb-linux-arm64-gnu": { "node_modules/@lancedb/vectordb-linux-arm64-gnu": {
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.16.1-beta.3.tgz", "resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.18.3-beta.0.tgz",
"integrity": "sha512-BS4rnBtKGJlEdbYgOe85mGhviQaSfEXl8qw0fh0ml8E0qbi5RuLtwfTFMe3yAKSOnNAvaJISqXQyUN7hzkYkUQ==", "integrity": "sha512-ohnWsV1n9cxL5ik/GGL4FdQ04Ff9REELcNb1zgmJYyEfwyc6TH9m5HdySO/1ACPZJiLbML4gSvZ10J0Zyb+2SA==",
"cpu": [
"arm64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-linux-arm64-musl": {
"version": "0.16.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-musl/-/vectordb-linux-arm64-musl-0.16.1-beta.3.tgz",
"integrity": "sha512-/F1mzpgSipfXjeaXJx5c0zLPOipPKnSPIpYviSdLU2Ahm1aHLweW1UsoiUoRkBkvEcVrZfHxL64vasey2I0P7Q==",
"cpu": [ "cpu": [
"arm64" "arm64"
], ],
@@ -382,9 +366,9 @@
] ]
}, },
"node_modules/@lancedb/vectordb-linux-x64-gnu": { "node_modules/@lancedb/vectordb-linux-x64-gnu": {
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.16.1-beta.3.tgz", "resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.18.3-beta.0.tgz",
"integrity": "sha512-zGn2Oby8GAQYG7+dqFVi2DDzli2/GAAY7lwPoYbPlyVytcdTlXRsxea1XiT1jzZmyKIlrxA/XXSRsmRq4n1j1w==", "integrity": "sha512-nhbW2CKaBSUesiYCPBd9fAsDYIJLadlGsrb2gfjODlFy+2Lpnbz6T9SuV7dNqj6KBw+KHhaRhLqta7tyMZm/EA==",
"cpu": [ "cpu": [
"x64" "x64"
], ],
@@ -394,36 +378,10 @@
"linux" "linux"
] ]
}, },
"node_modules/@lancedb/vectordb-linux-x64-musl": {
"version": "0.16.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-musl/-/vectordb-linux-x64-musl-0.16.1-beta.3.tgz",
"integrity": "sha512-MXYvI7dL+0QtWGDuliUUaEp/XQN+hSndtDc8wlAMyI0lOzmTvC7/C3OZQcMKf6JISZuNS71OVzVTYDYSab9aXw==",
"cpu": [
"x64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-win32-arm64-msvc": {
"version": "0.16.1-beta.3",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-arm64-msvc/-/vectordb-win32-arm64-msvc-0.16.1-beta.3.tgz",
"integrity": "sha512-1dbUSg+Mi+0W8JAUXqNWC+uCr0RUqVHhxFVGLSlprqZ8qFJYQ61jFSZr4onOYj9Ta1n6tUb3Nc4acxf3vXXPmw==",
"cpu": [
"arm64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"win32"
]
},
"node_modules/@lancedb/vectordb-win32-x64-msvc": { "node_modules/@lancedb/vectordb-win32-x64-msvc": {
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.16.1-beta.3.tgz", "resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.18.3-beta.0.tgz",
"integrity": "sha512-K9oT47zKnFoCEB/JjVKG+w+L0GOMDsPPln+B2TvefAXAWrvweCN2H4LUdsBYCTnntzy80OJCwwH3OwX07M1Y3g==", "integrity": "sha512-VE4TvMdZ7DIrTC8VYylGxEcH4h2UEejSwGX4PxRzrN9QsCQ4m4pOh3L/UguSO3g+Y1QEaGE20iWQoX6wgSEUhA==",
"cpu": [ "cpu": [
"x64" "x64"
], ],


@@ -1,6 +1,6 @@
{ {
"name": "vectordb", "name": "vectordb",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"description": " Serverless, low-latency vector database for AI applications", "description": " Serverless, low-latency vector database for AI applications",
"private": false, "private": false,
"main": "dist/index.js", "main": "dist/index.js",
@@ -85,20 +85,14 @@
"aarch64-apple-darwin": "@lancedb/vectordb-darwin-arm64", "aarch64-apple-darwin": "@lancedb/vectordb-darwin-arm64",
"x86_64-unknown-linux-gnu": "@lancedb/vectordb-linux-x64-gnu", "x86_64-unknown-linux-gnu": "@lancedb/vectordb-linux-x64-gnu",
"aarch64-unknown-linux-gnu": "@lancedb/vectordb-linux-arm64-gnu", "aarch64-unknown-linux-gnu": "@lancedb/vectordb-linux-arm64-gnu",
"x86_64-unknown-linux-musl": "@lancedb/vectordb-linux-x64-musl", "x86_64-pc-windows-msvc": "@lancedb/vectordb-win32-x64-msvc"
"aarch64-unknown-linux-musl": "@lancedb/vectordb-linux-arm64-musl",
"x86_64-pc-windows-msvc": "@lancedb/vectordb-win32-x64-msvc",
"aarch64-pc-windows-msvc": "@lancedb/vectordb-win32-arm64-msvc"
} }
}, },
"optionalDependencies": { "optionalDependencies": {
"@lancedb/vectordb-darwin-x64": "0.16.1-beta.3", "@lancedb/vectordb-darwin-x64": "0.18.3-beta.0",
"@lancedb/vectordb-darwin-arm64": "0.16.1-beta.3", "@lancedb/vectordb-darwin-arm64": "0.18.3-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.16.1-beta.3", "@lancedb/vectordb-linux-x64-gnu": "0.18.3-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.16.1-beta.3", "@lancedb/vectordb-linux-arm64-gnu": "0.18.3-beta.0",
"@lancedb/vectordb-linux-x64-musl": "0.16.1-beta.3", "@lancedb/vectordb-win32-x64-msvc": "0.18.3-beta.0"
"@lancedb/vectordb-linux-arm64-musl": "0.16.1-beta.3",
"@lancedb/vectordb-win32-x64-msvc": "0.16.1-beta.3",
"@lancedb/vectordb-win32-arm64-msvc": "0.16.1-beta.3"
} }
} }


@@ -1299,7 +1299,7 @@ export interface IvfPQIndexConfig {
index_name?: string index_name?: string
/** /**
* Metric type, L2 or Cosine * Metric type, l2 or Cosine
*/ */
metric_type?: MetricType metric_type?: MetricType


@@ -110,7 +110,7 @@ describe('LanceDB Mirrored Store Integration test', function () {
fs.readdir(path.join(mirroredPath, 'data'), { withFileTypes: true }, (err, files) => { fs.readdir(path.join(mirroredPath, 'data'), { withFileTypes: true }, (err, files) => {
if (err != null) throw err if (err != null) throw err
assert.equal(files.length, 1) assert.equal(files.length, 1, `Found files: ${files.map(f => f.name)}`)
assert.isTrue(files[0].name.endsWith('.lance')) assert.isTrue(files[0].name.endsWith('.lance'))
}) })


@@ -22,3 +22,4 @@ build.rs
jest.config.js jest.config.js
tsconfig.json tsconfig.json
typedoc.json typedoc.json
typedoc_post_process.js


@@ -1,7 +1,7 @@
[package] [package]
name = "lancedb-nodejs" name = "lancedb-nodejs"
edition.workspace = true edition.workspace = true
version = "0.16.1-beta.3" version = "0.18.3-beta.0"
license.workspace = true license.workspace = true
description.workspace = true description.workspace = true
repository.workspace = true repository.workspace = true
@@ -18,7 +18,7 @@ arrow-array.workspace = true
arrow-schema.workspace = true arrow-schema.workspace = true
env_logger.workspace = true env_logger.workspace = true
futures.workspace = true futures.workspace = true
lancedb = { path = "../rust/lancedb", features = ["remote"] } lancedb = { path = "../rust/lancedb" }
napi = { version = "2.16.8", default-features = false, features = [ napi = { version = "2.16.8", default-features = false, features = [
"napi9", "napi9",
"async" "async"
@@ -30,3 +30,8 @@ log.workspace = true
[build-dependencies] [build-dependencies]
napi-build = "2.1" napi-build = "2.1"
[features]
default = ["remote"]
fp16kernels = ["lancedb/fp16kernels"]
remote = ["lancedb/remote"]
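With `remote` now an ordinary default feature of the bindings crate (instead of being forced on the `lancedb` dependency), source builds that don't need LanceDB Cloud connectivity can presumably opt out via Cargo's `--no-default-features`, while `fp16kernels` remains opt-in.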


@@ -11,11 +11,9 @@ npm install @lancedb/lancedb
This will download the appropriate native library for your platform. We currently This will download the appropriate native library for your platform. We currently
support: support:
- Linux (x86_64 and aarch64) - Linux (x86_64 and aarch64 on glibc and musl)
- MacOS (Intel and ARM/M1/M2) - MacOS (Intel and ARM/M1/M2)
- Windows (x86_64 only) - Windows (x86_64 and aarch64)
We do not yet support musl-based Linux (such as Alpine Linux) or aarch64 Windows.
## Usage ## Usage


@@ -17,7 +17,7 @@ describe("when connecting", () => {
it("should connect", async () => { it("should connect", async () => {
const db = await connect(tmpDir.name); const db = await connect(tmpDir.name);
expect(db.display()).toBe( expect(db.display()).toBe(
`ListingDatabase(uri=${tmpDir.name}, read_consistency_interval=None)`, `ListingDatabase(uri=${tmpDir.name}, read_consistency_interval=5s)`,
); );
}); });


@@ -175,6 +175,8 @@ maybeDescribe("storage_options", () => {
tableNames = await db.tableNames(); tableNames = await db.tableNames();
expect(tableNames).toEqual([]); expect(tableNames).toEqual([]);
await db.dropAllTables();
}); });
it("can configure encryption at connection and table level", async () => { it("can configure encryption at connection and table level", async () => {
@@ -210,6 +212,8 @@ maybeDescribe("storage_options", () => {
await table.add([{ a: 2, b: 3 }]); await table.add([{ a: 2, b: 3 }]);
await bucket.assertAllEncrypted("test/table2.lance", kmsKey.keyId); await bucket.assertAllEncrypted("test/table2.lance", kmsKey.keyId);
await db.dropAllTables();
}); });
}); });
@@ -298,5 +302,32 @@ maybeDescribe("DynamoDB Lock", () => {
const rowCount = await table.countRows(); const rowCount = await table.countRows();
expect(rowCount).toBe(6); expect(rowCount).toBe(6);
await db.dropAllTables();
});
it("clears dynamodb state after dropping all tables", async () => {
const uri = `s3+ddb://${bucket.name}/test?ddbTableName=${commitTable.name}`;
const db = await connect(uri, {
storageOptions: CONFIG,
readConsistencyInterval: 0,
});
await db.createTable("foo", [{ a: 1, b: 2 }]);
await db.createTable("bar", [{ a: 1, b: 2 }]);
let tableNames = await db.tableNames();
expect(tableNames).toEqual(["bar", "foo"]);
await db.dropAllTables();
tableNames = await db.tableNames();
expect(tableNames).toEqual([]);
// We can create a new table with the same name as the one we dropped.
await db.createTable("foo", [{ a: 1, b: 2 }]);
tableNames = await db.tableNames();
expect(tableNames).toEqual(["foo"]);
await db.dropAllTables();
}); });
}); });


@@ -21,9 +21,11 @@ import {
Int64, Int64,
List, List,
Schema, Schema,
Uint8,
Utf8, Utf8,
makeArrowTable, makeArrowTable,
} from "../lancedb/arrow"; } from "../lancedb/arrow";
import * as arrow from "../lancedb/arrow";
import { import {
EmbeddingFunction, EmbeddingFunction,
LanceSchema, LanceSchema,
@@ -56,7 +58,7 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
it("be displayable", async () => { it("be displayable", async () => {
expect(table.display()).toMatch( expect(table.display()).toMatch(
/NativeTable\(some_table, uri=.*, read_consistency_interval=None\)/, /NativeTable\(some_table, uri=.*, read_consistency_interval=5s\)/,
); );
table.close(); table.close();
expect(table.display()).toBe("ClosedTable(some_table)"); expect(table.display()).toBe("ClosedTable(some_table)");
@@ -278,6 +280,15 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
expect(res.getChild("y")?.toJSON()).toEqual([2, null, null, null]); expect(res.getChild("y")?.toJSON()).toEqual([2, null, null, null]);
expect(res.getChild("z")?.toJSON()).toEqual([null, null, 3n, 5n]); expect(res.getChild("z")?.toJSON()).toEqual([null, null, 3n, 5n]);
}); });
it("should handle null vectors at end of data", async () => {
// https://github.com/lancedb/lancedb/issues/2240
const data = [{ vector: [1, 2, 3] }, { vector: null }];
const db = await connect("memory://");
const table = await db.createTable("my_table", data);
expect(await table.countRows()).toEqual(2);
});
}, },
); );
@@ -460,6 +471,8 @@ describe("When creating an index", () => {
indexType: "IvfPq", indexType: "IvfPq",
columns: ["vec"], columns: ["vec"],
}); });
const stats = await tbl.indexStats("vec_idx");
expect(stats?.loss).toBeDefined();
// Search without specifying the column // Search without specifying the column
let rst = await tbl let rst = await tbl
@@ -620,6 +633,23 @@ describe("When creating an index", () => {
expect(plan2).not.toMatch("LanceScan"); expect(plan2).not.toMatch("LanceScan");
}); });
it("should be able to run analyze plan", async () => {
await tbl.createIndex("vec");
await tbl.add([
{
id: 300,
vec: Array(32)
.fill(1)
.map(() => Math.random()),
tags: [],
},
]);
const plan = await tbl.query().nearestTo(queryVec).analyzePlan();
expect(plan).toMatch("AnalyzeExec");
expect(plan).toMatch("metrics=");
});
it("should be able to query with row id", async () => { it("should be able to query with row id", async () => {
const results = await tbl const results = await tbl
.query() .query()
@@ -666,11 +696,11 @@ describe("When creating an index", () => {
expect(fs.readdirSync(indexDir)).toHaveLength(1); expect(fs.readdirSync(indexDir)).toHaveLength(1);
for await (const r of tbl.query().where("id > 1").select(["id"])) { for await (const r of tbl.query().where("id > 1").select(["id"])) {
expect(r.numRows).toBe(10); expect(r.numRows).toBe(298);
} }
// should also work with 'filter' alias // should also work with 'filter' alias
for await (const r of tbl.query().filter("id > 1").select(["id"])) { for await (const r of tbl.query().filter("id > 1").select(["id"])) {
expect(r.numRows).toBe(10); expect(r.numRows).toBe(298);
} }
}); });
@@ -720,6 +750,7 @@ describe("When creating an index", () => {
expect(stats?.distanceType).toBeUndefined(); expect(stats?.distanceType).toBeUndefined();
expect(stats?.indexType).toEqual("BTREE"); expect(stats?.indexType).toEqual("BTREE");
expect(stats?.numIndices).toEqual(1); expect(stats?.numIndices).toEqual(1);
expect(stats?.loss).toBeUndefined();
}); });
test("when getting stats on non-existent index", async () => { test("when getting stats on non-existent index", async () => {
@@ -727,6 +758,38 @@ describe("When creating an index", () => {
expect(stats).toBeUndefined(); expect(stats).toBeUndefined();
}); });
test("create ivf_flat with binary vectors", async () => {
const db = await connect(tmpDir.name);
const binarySchema = new Schema([
new Field("id", new Int32(), true),
new Field("vec", new FixedSizeList(32, new Field("item", new Uint8()))),
]);
const tbl = await db.createTable(
"binary",
makeArrowTable(
Array(300)
.fill(1)
.map((_, i) => ({
id: i,
vec: Array(32)
.fill(1)
.map(() => Math.floor(Math.random() * 255)),
})),
{ schema: binarySchema },
),
);
await tbl.createIndex("vec", {
config: Index.ivfFlat({ numPartitions: 10, distanceType: "hamming" }),
});
// query with binary vectors
const queryVec = Array(32)
.fill(1)
.map(() => Math.floor(Math.random() * 255));
const rst = await tbl.query().limit(5).nearestTo(queryVec).toArrow();
expect(rst.numRows).toBe(5);
});
// TODO: Move this test to the query API test (making sure we can reject queries // TODO: Move this test to the query API test (making sure we can reject queries
// when the dimension is incorrect) // when the dimension is incorrect)
test("two columns with different dimensions", async () => { test("two columns with different dimensions", async () => {
@@ -920,6 +983,93 @@ describe("schema evolution", function () {
new Field("price", new Float64(), true), new Field("price", new Float64(), true),
]); ]);
expect(await table.schema()).toEqual(expectedSchema2); expect(await table.schema()).toEqual(expectedSchema2);
await table.alterColumns([
{
path: "vector",
dataType: new FixedSizeList(2, new Field("item", new Float64(), true)),
},
]);
const expectedSchema3 = new Schema([
new Field("new_id", new Int32(), true),
new Field(
"vector",
new FixedSizeList(2, new Field("item", new Float64(), true)),
true,
),
new Field("price", new Float64(), true),
]);
expect(await table.schema()).toEqual(expectedSchema3);
});
it("can cast to various types", async function () {
const con = await connect(tmpDir.name);
// integers
const intTypes = [
new arrow.Int8(),
new arrow.Int16(),
new arrow.Int32(),
new arrow.Int64(),
new arrow.Uint8(),
new arrow.Uint16(),
new arrow.Uint32(),
new arrow.Uint64(),
];
const tableInts = await con.createTable("ints", [{ id: 1n }], {
schema: new Schema([new Field("id", new Int64(), true)]),
});
for (const intType of intTypes) {
await tableInts.alterColumns([{ path: "id", dataType: intType }]);
const schema = new Schema([new Field("id", intType, true)]);
expect(await tableInts.schema()).toEqual(schema);
}
// floats
const floatTypes = [
new arrow.Float16(),
new arrow.Float32(),
new arrow.Float64(),
];
const tableFloats = await con.createTable("floats", [{ val: 2.1 }], {
schema: new Schema([new Field("val", new Float32(), true)]),
});
for (const floatType of floatTypes) {
await tableFloats.alterColumns([{ path: "val", dataType: floatType }]);
const schema = new Schema([new Field("val", floatType, true)]);
expect(await tableFloats.schema()).toEqual(schema);
}
// Lists of floats
const listTypes = [
new arrow.List(new arrow.Field("item", new arrow.Float32(), true)),
new arrow.FixedSizeList(
2,
new arrow.Field("item", new arrow.Float64(), true),
),
new arrow.FixedSizeList(
2,
new arrow.Field("item", new arrow.Float16(), true),
),
new arrow.FixedSizeList(
2,
new arrow.Field("item", new arrow.Float32(), true),
),
];
const tableLists = await con.createTable("lists", [{ val: [2.1, 3.2] }], {
schema: new Schema([
new Field(
"val",
new FixedSizeList(2, new arrow.Field("item", new Float32())),
true,
),
]),
});
for (const listType of listTypes) {
await tableLists.alterColumns([{ path: "val", dataType: listType }]);
const schema = new Schema([new Field("val", listType, true)]);
expect(await tableLists.schema()).toEqual(schema);
}
}); });
it("can drop a column from the schema", async function () { it("can drop a column from the schema", async function () {
@@ -1213,6 +1363,30 @@ describe("when calling explainPlan", () => {
}); });
}); });
describe("when calling analyzePlan", () => {
let tmpDir: tmp.DirResult;
let table: Table;
let queryVec: number[];
beforeEach(async () => {
tmpDir = tmp.dirSync({ unsafeCleanup: true });
const con = await connect(tmpDir.name);
table = await con.createTable("vectors", [{ id: 1, vector: [1.1, 0.9] }]);
});
afterEach(() => {
tmpDir.removeCallback();
});
it("retrieves runtime metrics", async () => {
queryVec = Array(2)
.fill(1)
.map(() => Math.random());
const plan = await table.query().nearestTo(queryVec).analyzePlan();
console.log("Query Plan:\n", plan); // <--- Print the plan
expect(plan).toMatch("AnalyzeExec");
});
});
describe("column name options", () => { describe("column name options", () => {
let tmpDir: tmp.DirResult; let tmpDir: tmp.DirResult;
let table: Table; let table: Table;


@@ -132,6 +132,17 @@ test("basic table examples", async () => {
}, },
]); ]);
// --8<-- [end:alter_columns] // --8<-- [end:alter_columns]
// --8<-- [start:alter_columns_vector]
await tbl.alterColumns([
{
path: "vector",
dataType: new arrow.FixedSizeList(
2,
new arrow.Field("item", new arrow.Float16(), false),
),
},
]);
// --8<-- [end:alter_columns_vector]
// --8<-- [start:drop_columns] // --8<-- [start:drop_columns]
await tbl.dropColumns(["dbl_price"]); await tbl.dropColumns(["dbl_price"]);
// --8<-- [end:drop_columns] // --8<-- [end:drop_columns]
@@ -191,5 +202,35 @@ test("basic table examples", async () => {
// --8<-- [end:create_f16_table] // --8<-- [end:create_f16_table]
await db.dropTable("f16_tbl"); await db.dropTable("f16_tbl");
} }
const uri = databaseDir;
await db.createTable("my_table", [{ id: 1 }, { id: 2 }]);
{
// --8<-- [start:table_strong_consistency]
const db = await lancedb.connect({ uri, readConsistencyInterval: 0 });
const tbl = await db.openTable("my_table");
// --8<-- [end:table_strong_consistency]
}
{
// --8<-- [start:table_eventual_consistency]
const db = await lancedb.connect({ uri, readConsistencyInterval: 5 });
const tbl = await db.openTable("my_table");
// --8<-- [end:table_eventual_consistency]
}
{
// --8<-- [start:table_no_consistency]
const db = await lancedb.connect({ uri, readConsistencyInterval: null });
const tbl = await db.openTable("my_table");
// --8<-- [end:table_no_consistency]
}
{
// --8<-- [start:table_checkout_latest]
const tbl = await db.openTable("my_table");
// (Other writes happen to my_table from another process)
// Check for updates
await tbl.checkoutLatest();
// --8<-- [end:table_checkout_latest]
}
}); });
}); });


@@ -4,9 +4,12 @@ import { expect, test } from "@jest/globals";
// --8<-- [start:import] // --8<-- [start:import]
import * as lancedb from "@lancedb/lancedb"; import * as lancedb from "@lancedb/lancedb";
// --8<-- [end:import] // --8<-- [end:import]
// --8<-- [start:import_bin_util]
import { Field, FixedSizeList, Int32, Schema, Uint8 } from "apache-arrow";
// --8<-- [end:import_bin_util]
import { withTempDirectory } from "./util.ts"; import { withTempDirectory } from "./util.ts";
test("full text search", async () => { test("vector search", async () => {
await withTempDirectory(async (databaseDir) => { await withTempDirectory(async (databaseDir) => {
{ {
const db = await lancedb.connect(databaseDir); const db = await lancedb.connect(databaseDir);
@@ -14,8 +17,6 @@ test("full text search", async () => {
const data = Array.from({ length: 10_000 }, (_, i) => ({ const data = Array.from({ length: 10_000 }, (_, i) => ({
vector: Array(128).fill(i), vector: Array(128).fill(i),
id: `${i}`, id: `${i}`,
content: "",
longId: `${i}`,
})); }));
await db.createTable("my_vectors", data); await db.createTable("my_vectors", data);
@@ -52,5 +53,41 @@ test("full text search", async () => {
expect(r.distance).toBeGreaterThanOrEqual(0.1); expect(r.distance).toBeGreaterThanOrEqual(0.1);
expect(r.distance).toBeLessThan(0.2); expect(r.distance).toBeLessThan(0.2);
} }
{
// --8<-- [start:ingest_binary_data]
const schema = new Schema([
new Field("id", new Int32(), true),
new Field("vec", new FixedSizeList(32, new Field("item", new Uint8()))),
]);
const data = lancedb.makeArrowTable(
Array(1_000)
.fill(0)
.map((_, i) => ({
// the 256 bits are stored in 32 bytes;
// if your data is already in this format, you can skip the packBits step
id: i,
vec: lancedb.packBits(Array(256).fill(i % 2)),
})),
{ schema: schema },
);
const tbl = await db.createTable("binary_table", data);
await tbl.createIndex("vec", {
config: lancedb.Index.ivfFlat({
numPartitions: 10,
distanceType: "hamming",
}),
});
// --8<-- [end:ingest_binary_data]
// --8<-- [start:search_binary_data]
const query = Array(32)
.fill(1)
.map(() => Math.floor(Math.random() * 255));
const results = await tbl.query().nearestTo(query).limit(10).toArrow();
// --8<-- [end:search_binary_data]
expect(results.numRows).toBe(10);
}
}); });
}); });


@@ -8,7 +8,11 @@ import {
Bool, Bool,
BufferType, BufferType,
DataType, DataType,
DateUnit,
Date_,
Decimal,
Dictionary, Dictionary,
Duration,
Field, Field,
FixedSizeBinary, FixedSizeBinary,
FixedSizeList, FixedSizeList,
@@ -21,19 +25,22 @@ import {
LargeBinary, LargeBinary,
List, List,
Null, Null,
Precision,
RecordBatch, RecordBatch,
RecordBatchFileReader, RecordBatchFileReader,
RecordBatchFileWriter, RecordBatchFileWriter,
RecordBatchStreamWriter, RecordBatchStreamWriter,
Schema, Schema,
Struct, Struct,
Timestamp,
Type,
Utf8, Utf8,
Vector, Vector,
makeVector as arrowMakeVector, makeVector as arrowMakeVector,
vectorFromArray as badVectorFromArray,
makeBuilder, makeBuilder,
makeData, makeData,
makeTable, makeTable,
vectorFromArray,
} from "apache-arrow"; } from "apache-arrow";
import { Buffers } from "apache-arrow/data"; import { Buffers } from "apache-arrow/data";
import { type EmbeddingFunction } from "./embedding/embedding_function"; import { type EmbeddingFunction } from "./embedding/embedding_function";
@@ -179,6 +186,21 @@ export class VectorColumnOptions {
} }
} }
// biome-ignore lint/suspicious/noExplicitAny: skip
function vectorFromArray(data: any, type?: DataType) {
// Workaround for: https://github.com/apache/arrow/issues/45862
// For FixedSizeList-of-float types, pad with one extra zero vector and
// slice it back off, dodging the builder bug in the linked issue.
if (DataType.isFixedSizeList(type) && DataType.isFloat(type.valueType)) {
const extendedData = [...data, new Array(type.listSize).fill(0.0)];
const array = badVectorFromArray(extendedData, type);
return array.slice(0, data.length);
} else if (type === undefined) {
return badVectorFromArray(data);
} else {
return badVectorFromArray(data, type);
}
}
/** Options to control the makeArrowTable call. */ /** Options to control the makeArrowTable call. */
export class MakeArrowTableOptions { export class MakeArrowTableOptions {
/* /*
@@ -1170,3 +1192,137 @@ function validateSchemaEmbeddings(
return new Schema(fields, schema.metadata); return new Schema(fields, schema.metadata);
} }
interface JsonDataType {
type: string;
fields?: JsonField[];
length?: number;
}
interface JsonField {
name: string;
type: JsonDataType;
nullable: boolean;
metadata: Map<string, string>;
}
// Matches format of https://github.com/lancedb/lance/blob/main/rust/lance/src/arrow/json.rs
export function dataTypeToJson(dataType: DataType): JsonDataType {
switch (dataType.typeId) {
// For primitives, matches https://github.com/lancedb/lance/blob/e12bb9eff2a52f753668d4b62c52e4d72b10d294/rust/lance-core/src/datatypes.rs#L185
case Type.Null:
return { type: "null" };
case Type.Bool:
return { type: "bool" };
case Type.Int8:
return { type: "int8" };
case Type.Int16:
return { type: "int16" };
case Type.Int32:
return { type: "int32" };
case Type.Int64:
return { type: "int64" };
case Type.Uint8:
return { type: "uint8" };
case Type.Uint16:
return { type: "uint16" };
case Type.Uint32:
return { type: "uint32" };
case Type.Uint64:
return { type: "uint64" };
case Type.Int: {
const bitWidth = (dataType as Int).bitWidth;
const signed = (dataType as Int).isSigned;
const prefix = signed ? "" : "u";
return { type: `${prefix}int${bitWidth}` };
}
case Type.Float: {
switch ((dataType as Float).precision) {
case Precision.HALF:
return { type: "halffloat" };
case Precision.SINGLE:
return { type: "float" };
case Precision.DOUBLE:
return { type: "double" };
}
throw Error("Unsupported float precision");
}
case Type.Float16:
return { type: "halffloat" };
case Type.Float32:
return { type: "float" };
case Type.Float64:
return { type: "double" };
case Type.Utf8:
return { type: "string" };
case Type.Binary:
return { type: "binary" };
case Type.LargeUtf8:
return { type: "large_string" };
case Type.LargeBinary:
return { type: "large_binary" };
case Type.List:
return {
type: "list",
fields: [fieldToJson((dataType as List).children[0])],
};
case Type.FixedSizeList: {
const fixedSizeList = dataType as FixedSizeList;
return {
type: "fixed_size_list",
fields: [fieldToJson(fixedSizeList.children[0])],
length: fixedSizeList.listSize,
};
}
case Type.Struct:
return {
type: "struct",
fields: (dataType as Struct).children.map(fieldToJson),
};
case Type.Date: {
const unit = (dataType as Date_).unit;
return {
type: unit === DateUnit.DAY ? "date32:day" : "date64:ms",
};
}
case Type.Timestamp: {
const timestamp = dataType as Timestamp;
const timezone = timestamp.timezone || "-";
return {
type: `timestamp:${timestamp.unit}:${timezone}`,
};
}
case Type.Decimal: {
const decimal = dataType as Decimal;
return {
type: `decimal:${decimal.bitWidth}:${decimal.precision}:${decimal.scale}`,
};
}
case Type.Duration: {
const duration = dataType as Duration;
return { type: `duration:${duration.unit}` };
}
case Type.FixedSizeBinary: {
const byteWidth = (dataType as FixedSizeBinary).byteWidth;
return { type: `fixed_size_binary:${byteWidth}` };
}
case Type.Dictionary: {
const dict = dataType as Dictionary;
const indexType = dataTypeToJson(dict.indices);
const valueType = dataTypeToJson(dict.valueType);
return {
type: `dict:${valueType.type}:${indexType.type}:false`,
};
}
}
throw new Error("Unsupported data type");
}
function fieldToJson(field: Field): JsonField {
return {
name: field.name,
type: dataTypeToJson(field.type),
nullable: field.nullable,
metadata: field.metadata,
};
}
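As an illustration (not part of the change itself), this is the JSON shape the helpers above produce for a 2-dimensional float vector column:

import { Field, FixedSizeList, Float32 } from "apache-arrow";
import { dataTypeToJson } from "./arrow"; // as exported above

const vecType = new FixedSizeList(2, new Field("item", new Float32(), true));
console.log(dataTypeToJson(vecType));
// { type: "fixed_size_list",
//   fields: [{ name: "item", type: { type: "float" }, nullable: true, metadata: Map {} }],
//   length: 2 }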


@@ -14,7 +14,6 @@ import {
export { export {
AddColumnsSql, AddColumnsSql,
ColumnAlteration,
ConnectionOptions, ConnectionOptions,
IndexStatistics, IndexStatistics,
IndexConfig, IndexConfig,
@@ -54,6 +53,7 @@ export {
Index, Index,
IndexOptions, IndexOptions,
IvfPqOptions, IvfPqOptions,
IvfFlatOptions,
HnswPqOptions, HnswPqOptions,
HnswSqOptions, HnswSqOptions,
FtsOptions, FtsOptions,
@@ -65,6 +65,7 @@ export {
UpdateOptions, UpdateOptions,
OptimizeOptions, OptimizeOptions,
Version, Version,
ColumnAlteration,
} from "./table"; } from "./table";
export { MergeInsertBuilder } from "./merge"; export { MergeInsertBuilder } from "./merge";
@@ -79,7 +80,7 @@ export {
DataLike, DataLike,
IntoVector, IntoVector,
} from "./arrow"; } from "./arrow";
export { IntoSql } from "./util"; export { IntoSql, packBits } from "./util";
/** /**
* Connect to a LanceDB instance at the given URI. * Connect to a LanceDB instance at the given URI.


@@ -62,13 +62,13 @@ export interface IvfPqOptions {
* *
* "l2" - Euclidean distance. This is a very common distance metric that * "l2" - Euclidean distance. This is a very common distance metric that
* accounts for both magnitude and direction when determining the distance * accounts for both magnitude and direction when determining the distance
* between vectors. L2 distance has a range of [0, ∞). * between vectors. l2 distance has a range of [0, ∞).
* *
* "cosine" - Cosine distance. Cosine distance is a distance metric * "cosine" - Cosine distance. Cosine distance is a distance metric
* calculated from the cosine similarity between two vectors. Cosine * calculated from the cosine similarity between two vectors. Cosine
* similarity is a measure of similarity between two non-zero vectors of an * similarity is a measure of similarity between two non-zero vectors of an
* inner product space. It is defined to equal the cosine of the angle * inner product space. It is defined to equal the cosine of the angle
* between them. Unlike L2, the cosine distance is not affected by the * between them. Unlike l2, the cosine distance is not affected by the
* magnitude of the vectors. Cosine distance has a range of [0, 2]. * magnitude of the vectors. Cosine distance has a range of [0, 2].
* *
* Note: the cosine distance is undefined when one (or both) of the vectors * Note: the cosine distance is undefined when one (or both) of the vectors
@@ -77,7 +77,7 @@ export interface IvfPqOptions {
* *
* "dot" - Dot product. Dot distance is the dot product of two vectors. Dot * "dot" - Dot product. Dot distance is the dot product of two vectors. Dot
* distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their * distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their
* L2 norm is 1), then dot distance is equivalent to the cosine distance. * l2 norm is 1), then dot distance is equivalent to the cosine distance.
*/ */
distanceType?: "l2" | "cosine" | "dot"; distanceType?: "l2" | "cosine" | "dot";
@@ -125,18 +125,18 @@ export interface HnswPqOptions {
* *
* "l2" - Euclidean distance. This is a very common distance metric that * "l2" - Euclidean distance. This is a very common distance metric that
* accounts for both magnitude and direction when determining the distance * accounts for both magnitude and direction when determining the distance
* between vectors. L2 distance has a range of [0, ∞). * between vectors. l2 distance has a range of [0, ∞).
* *
* "cosine" - Cosine distance. Cosine distance is a distance metric * "cosine" - Cosine distance. Cosine distance is a distance metric
* calculated from the cosine similarity between two vectors. Cosine * calculated from the cosine similarity between two vectors. Cosine
* similarity is a measure of similarity between two non-zero vectors of an * similarity is a measure of similarity between two non-zero vectors of an
* inner product space. It is defined to equal the cosine of the angle * inner product space. It is defined to equal the cosine of the angle
* between them. Unlike L2, the cosine distance is not affected by the * between them. Unlike l2, the cosine distance is not affected by the
* magnitude of the vectors. Cosine distance has a range of [0, 2]. * magnitude of the vectors. Cosine distance has a range of [0, 2].
* *
* "dot" - Dot product. Dot distance is the dot product of two vectors. Dot * "dot" - Dot product. Dot distance is the dot product of two vectors. Dot
* distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their * distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their
* L2 norm is 1), then dot distance is equivalent to the cosine distance. * l2 norm is 1), then dot distance is equivalent to the cosine distance.
*/ */
distanceType?: "l2" | "cosine" | "dot"; distanceType?: "l2" | "cosine" | "dot";
@@ -241,18 +241,18 @@ export interface HnswSqOptions {
* *
* "l2" - Euclidean distance. This is a very common distance metric that * "l2" - Euclidean distance. This is a very common distance metric that
* accounts for both magnitude and direction when determining the distance * accounts for both magnitude and direction when determining the distance
* between vectors. L2 distance has a range of [0, ∞). * between vectors. l2 distance has a range of [0, ∞).
* *
* "cosine" - Cosine distance. Cosine distance is a distance metric * "cosine" - Cosine distance. Cosine distance is a distance metric
* calculated from the cosine similarity between two vectors. Cosine * calculated from the cosine similarity between two vectors. Cosine
* similarity is a measure of similarity between two non-zero vectors of an * similarity is a measure of similarity between two non-zero vectors of an
* inner product space. It is defined to equal the cosine of the angle * inner product space. It is defined to equal the cosine of the angle
* between them. Unlike L2, the cosine distance is not affected by the * between them. Unlike l2, the cosine distance is not affected by the
* magnitude of the vectors. Cosine distance has a range of [0, 2]. * magnitude of the vectors. Cosine distance has a range of [0, 2].
* *
* "dot" - Dot product. Dot distance is the dot product of two vectors. Dot * "dot" - Dot product. Dot distance is the dot product of two vectors. Dot
* distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their * distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their
* L2 norm is 1), then dot distance is equivalent to the cosine distance. * l2 norm is 1), then dot distance is equivalent to the cosine distance.
*/ */
distanceType?: "l2" | "cosine" | "dot"; distanceType?: "l2" | "cosine" | "dot";
@@ -327,6 +327,94 @@ export interface HnswSqOptions {
efConstruction?: number; efConstruction?: number;
} }
/**
* Options to create an `IVF_FLAT` index
*/
export interface IvfFlatOptions {
/**
* The number of IVF partitions to create.
*
* This value should generally scale with the number of rows in the dataset.
* By default the number of partitions is the square root of the number of
* rows.
*
* If this value is too large then the first part of the search (picking the
* right partition) will be slow. If this value is too small then the second
* part of the search (searching within a partition) will be slow.
*/
numPartitions?: number;
/**
* Distance type to use to build the index.
*
* Default value is "l2".
*
* This is used when training the index to calculate the IVF partitions
* (vectors are grouped in partitions with similar vectors according to this
* distance type).
*
* The distance type used to train an index MUST match the distance type used
* to search the index. Failure to do so will yield inaccurate results.
*
* The following distance types are available:
*
* "l2" - Euclidean distance. This is a very common distance metric that
* accounts for both magnitude and direction when determining the distance
* between vectors. l2 distance has a range of [0, ∞).
*
* "cosine" - Cosine distance. Cosine distance is a distance metric
* calculated from the cosine similarity between two vectors. Cosine
* similarity is a measure of similarity between two non-zero vectors of an
* inner product space. It is defined to equal the cosine of the angle
* between them. Unlike l2, the cosine distance is not affected by the
* magnitude of the vectors. Cosine distance has a range of [0, 2].
*
* Note: the cosine distance is undefined when one (or both) of the vectors
* are all zeros (there is no direction). These vectors are invalid and may
* never be returned from a vector search.
*
* "dot" - Dot product. Dot distance is the dot product of two vectors. Dot
* distance has a range of (-∞, ∞). If the vectors are normalized (i.e. their
* l2 norm is 1), then dot distance is equivalent to the cosine distance.
*
* "hamming" - Hamming distance. Hamming distance is a distance metric
* calculated from the number of bits that are different between two vectors.
* Hamming distance has a range of [0, dimension]. Note that the hamming distance
* is only valid for binary vectors.
*/
distanceType?: "l2" | "cosine" | "dot" | "hamming";
/**
* Max iteration to train IVF kmeans.
*
* When training an IVF FLAT index we use kmeans to calculate the partitions. This parameter
* controls how many iterations of kmeans to run.
*
* Increasing this might improve the quality of the index but in most cases these extra
* iterations have diminishing returns.
*
* The default value is 50.
*/
maxIterations?: number;
/**
* The number of vectors, per partition, to sample when training IVF kmeans.
*
* When an IVF FLAT index is trained, we need to calculate partitions. These are groups
* of vectors that are similar to each other. To do this we use an algorithm called kmeans.
*
* Running kmeans on a large dataset can be slow. To speed this up we run kmeans on a
* random sample of the data. This parameter controls the size of the sample. The total
* number of vectors used to train the index is `sample_rate * num_partitions`.
*
* Increasing this value might improve the quality of the index but in most cases the
* default should be sufficient.
*
* The default value is 256.
*/
sampleRate?: number;
}
/** /**
* Options to create a full text search index * Options to create a full text search index
*/ */
@@ -426,6 +514,33 @@ export class Index {
); );
} }
/**
* Create an IvfFlat index
*
* This index groups vectors into partitions of similar vectors. Each partition keeps track of
* a centroid which is the average value of all vectors in the group.
*
* During a query the centroids are compared with the query vector to find the closest
* partitions. The vectors in these partitions are then searched to find
* the closest vectors.
*
* The partitioning process is called IVF and the `num_partitions` parameter controls how
* many groups to create.
*
* Note that training an IVF FLAT index on a large dataset is a slow operation and
* currently is also a memory intensive operation.
*/
static ivfFlat(options?: Partial<IvfFlatOptions>) {
return new Index(
LanceDbIndex.ivfFlat(
options?.distanceType,
options?.numPartitions,
options?.maxIterations,
options?.sampleRate,
),
);
}
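A condensed usage sketch, mirroring the binary-vector test earlier in this change (table and column names are illustrative):

import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("data/sample-lancedb");
const tbl = await db.openTable("binary_table"); // assumed to hold packed uint8 vectors
await tbl.createIndex("vec", {
  config: lancedb.Index.ivfFlat({ numPartitions: 10, distanceType: "hamming" }),
});
// Query with a 32-byte (256-bit) packed vector.
const query = Array.from({ length: 32 }, () => Math.floor(Math.random() * 255));
const hits = await tbl.query().nearestTo(query).limit(5).toArrow();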
/** /**
* Create a btree index * Create a btree index
* *


@@ -348,6 +348,43 @@ export class QueryBase<NativeQueryType extends NativeQuery | NativeVectorQuery>
return this.inner.explainPlan(verbose); return this.inner.explainPlan(verbose);
} }
} }
/**
* Executes the query and returns the physical query plan annotated with runtime metrics.
*
* This is useful for debugging and performance analysis, as it shows how the query was executed
* and includes metrics such as elapsed time, rows processed, and I/O statistics.
*
* @example
* import * as lancedb from "@lancedb/lancedb"
*
* const db = await lancedb.connect("./.lancedb");
* const table = await db.createTable("my_table", [
* { vector: [1.1, 0.9], id: "1" },
* ]);
*
* const plan = await table.query().nearestTo([0.5, 0.2]).analyzePlan();
*
* Example output (with runtime metrics inlined):
* AnalyzeExec verbose=true, metrics=[]
* ProjectionExec: expr=[id@3 as id, vector@0 as vector, _distance@2 as _distance], metrics=[output_rows=1, elapsed_compute=3.292µs]
* Take: columns="vector, _rowid, _distance, (id)", metrics=[output_rows=1, elapsed_compute=66.001µs, batches_processed=1, bytes_read=8, iops=1, requests=1]
* CoalesceBatchesExec: target_batch_size=1024, metrics=[output_rows=1, elapsed_compute=3.333µs]
* GlobalLimitExec: skip=0, fetch=10, metrics=[output_rows=1, elapsed_compute=167ns]
* FilterExec: _distance@2 IS NOT NULL, metrics=[output_rows=1, elapsed_compute=8.542µs]
* SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], metrics=[output_rows=1, elapsed_compute=63.25µs, row_replacements=1]
* KNNVectorDistance: metric=l2, metrics=[output_rows=1, elapsed_compute=114.333µs, output_batches=1]
* LanceScan: uri=/path/to/data, projection=[vector], row_id=true, row_addr=false, ordered=false, metrics=[output_rows=1, elapsed_compute=103.626µs, bytes_read=549, iops=2, requests=2]
*
* @returns A query execution plan with runtime metrics for each step.
*/
async analyzePlan(): Promise<string> {
if (this.inner instanceof Promise) {
return this.inner.then((inner) => inner.analyzePlan());
} else {
return this.inner.analyzePlan();
}
}
} }
/** /**


@@ -4,8 +4,10 @@
import { import {
Table as ArrowTable, Table as ArrowTable,
Data, Data,
DataType,
IntoVector, IntoVector,
Schema, Schema,
dataTypeToJson,
fromDataToBuffer, fromDataToBuffer,
tableFromIPC, tableFromIPC,
} from "./arrow"; } from "./arrow";
@@ -15,13 +17,13 @@ import { IndexOptions } from "./indices";
import { MergeInsertBuilder } from "./merge"; import { MergeInsertBuilder } from "./merge";
import { import {
AddColumnsSql, AddColumnsSql,
ColumnAlteration,
IndexConfig, IndexConfig,
IndexStatistics, IndexStatistics,
OptimizeStats, OptimizeStats,
Table as _NativeTable, Table as _NativeTable,
} from "./native"; } from "./native";
import { Query, VectorQuery } from "./query"; import { Query, VectorQuery } from "./query";
import { sanitizeType } from "./sanitize";
import { IntoSql, toSQL } from "./util"; import { IntoSql, toSQL } from "./util";
export { IndexConfig } from "./native"; export { IndexConfig } from "./native";
@@ -618,7 +620,27 @@ export class LocalTable extends Table {
} }
async alterColumns(columnAlterations: ColumnAlteration[]): Promise<void> { async alterColumns(columnAlterations: ColumnAlteration[]): Promise<void> {
await this.inner.alterColumns(columnAlterations); const processedAlterations = columnAlterations.map((alteration) => {
if (typeof alteration.dataType === "string") {
return {
...alteration,
dataType: JSON.stringify({ type: alteration.dataType }),
};
} else if (alteration.dataType === undefined) {
return {
...alteration,
dataType: undefined,
};
} else {
const dataType = sanitizeType(alteration.dataType);
return {
...alteration,
dataType: JSON.stringify(dataTypeToJson(dataType)),
};
}
});
await this.inner.alterColumns(processedAlterations);
} }
async dropColumns(columnNames: string[]): Promise<void> { async dropColumns(columnNames: string[]): Promise<void> {
@@ -711,3 +733,38 @@ export class LocalTable extends Table {
await this.inner.migrateManifestPathsV2(); await this.inner.migrateManifestPathsV2();
} }
} }
/**
 * A definition of a column alteration. The alteration changes the column at
 * `path` to have the new name `rename`, to be nullable if `nullable` is true,
 * and to have the data type `dataType`. At least one of `rename`, `dataType`,
 * or `nullable` must be provided.
*/
export interface ColumnAlteration {
/**
* The path to the column to alter. This is a dot-separated path to the column.
* If it is a top-level column then it is just the name of the column. If it is
* a nested column then it is the path to the column, e.g. "a.b.c" for a column
* `c` nested inside a column `b` nested inside a column `a`.
*/
path: string;
/**
* The new name of the column. If not provided then the name will not be changed.
* This must be distinct from the names of all other columns in the table.
*/
rename?: string;
/**
* A new data type for the column. If not provided then the data type will not be changed.
* Changing data types is limited to casting to the same general type. For example, these
* changes are valid:
* * `int32` -> `int64` (integers)
* * `double` -> `float` (floats)
* * `string` -> `large_string` (strings)
* But these changes are not:
* * `int32` -> `double` (mix integers and floats)
* * `string` -> `int32` (mix strings and integers)
*/
dataType?: string | DataType;
/** Set the new nullability. Note that a nullable column cannot be made non-nullable. */
nullable?: boolean;
}
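A sketch of the two accepted `dataType` forms (plain type name vs. Arrow `DataType`), following the processing logic in `alterColumns` above; `table` is assumed to be an open table:

import { Field, FixedSizeList, Float16 } from "apache-arrow";

await table.alterColumns([
  { path: "id", rename: "new_id" }, // rename only
  { path: "price", dataType: "float" }, // cast by name, serialized as {"type":"float"}
  {
    // cast by Arrow type, serialized through dataTypeToJson
    path: "vector",
    dataType: new FixedSizeList(2, new Field("item", new Float16(), false)),
  },
]);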


@@ -35,6 +35,16 @@ export function toSQL(value: IntoSql): string {
} }
} }
export function packBits(data: Array<number>): Array<number> {
const packed = Array(data.length >> 3).fill(0);
for (let i = 0; i < data.length; i++) {
const byte = i >> 3;
const bit = i & 7;
packed[byte] |= data[i] << bit;
}
return packed;
}
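Usage sketch: packing a 256-bit binary vector into 32 bytes before ingestion (bits are packed LSB-first within each byte, per the shift above):

import { packBits } from "@lancedb/lancedb";

const bits = Array.from({ length: 256 }, (_, i) => i % 2); // one 0/1 per dimension
const packed = packBits(bits);
console.log(packed.length); // 32 bytes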
export class TTLCache { export class TTLCache {
// biome-ignore lint/suspicious/noExplicitAny: <explanation> // biome-ignore lint/suspicious/noExplicitAny: <explanation>
private readonly cache: Map<string, { value: any; expires: number }>; private readonly cache: Map<string, { value: any; expires: number }>;


@@ -1,6 +1,6 @@
{ {
"name": "@lancedb/lancedb-darwin-arm64", "name": "@lancedb/lancedb-darwin-arm64",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"os": ["darwin"], "os": ["darwin"],
"cpu": ["arm64"], "cpu": ["arm64"],
"main": "lancedb.darwin-arm64.node", "main": "lancedb.darwin-arm64.node",


@@ -1,6 +1,6 @@
{ {
"name": "@lancedb/lancedb-darwin-x64", "name": "@lancedb/lancedb-darwin-x64",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"os": ["darwin"], "os": ["darwin"],
"cpu": ["x64"], "cpu": ["x64"],
"main": "lancedb.darwin-x64.node", "main": "lancedb.darwin-x64.node",


@@ -1,6 +1,6 @@
{ {
"name": "@lancedb/lancedb-linux-arm64-gnu", "name": "@lancedb/lancedb-linux-arm64-gnu",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"os": ["linux"], "os": ["linux"],
"cpu": ["arm64"], "cpu": ["arm64"],
"main": "lancedb.linux-arm64-gnu.node", "main": "lancedb.linux-arm64-gnu.node",


@@ -1,6 +1,6 @@
{ {
"name": "@lancedb/lancedb-linux-arm64-musl", "name": "@lancedb/lancedb-linux-arm64-musl",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"os": ["linux"], "os": ["linux"],
"cpu": ["arm64"], "cpu": ["arm64"],
"main": "lancedb.linux-arm64-musl.node", "main": "lancedb.linux-arm64-musl.node",


@@ -1,6 +1,6 @@
{ {
"name": "@lancedb/lancedb-linux-x64-gnu", "name": "@lancedb/lancedb-linux-x64-gnu",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"os": ["linux"], "os": ["linux"],
"cpu": ["x64"], "cpu": ["x64"],
"main": "lancedb.linux-x64-gnu.node", "main": "lancedb.linux-x64-gnu.node",


@@ -1,6 +1,6 @@
{ {
"name": "@lancedb/lancedb-linux-x64-musl", "name": "@lancedb/lancedb-linux-x64-musl",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"os": ["linux"], "os": ["linux"],
"cpu": ["x64"], "cpu": ["x64"],
"main": "lancedb.linux-x64-musl.node", "main": "lancedb.linux-x64-musl.node",


@@ -1,6 +1,6 @@
{ {
"name": "@lancedb/lancedb-win32-arm64-msvc", "name": "@lancedb/lancedb-win32-arm64-msvc",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"os": [ "os": [
"win32" "win32"
], ],


@@ -1,6 +1,6 @@
{ {
"name": "@lancedb/lancedb-win32-x64-msvc", "name": "@lancedb/lancedb-win32-x64-msvc",
"version": "0.16.1-beta.3", "version": "0.18.3-beta.0",
"os": ["win32"], "os": ["win32"],
"cpu": ["x64"], "cpu": ["x64"],
"main": "lancedb.win32-x64-msvc.node", "main": "lancedb.win32-x64-msvc.node",


@@ -1,12 +1,12 @@
 {
   "name": "@lancedb/lancedb",
-  "version": "0.16.1-beta.3",
+  "version": "0.18.3-beta.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "@lancedb/lancedb",
-      "version": "0.16.1-beta.3",
+      "version": "0.18.3-beta.0",
       "cpu": [
         "x64",
         "arm64"


@@ -11,7 +11,7 @@
     "ann"
   ],
   "private": false,
-  "version": "0.16.1-beta.3",
+  "version": "0.18.3-beta.0",
   "main": "dist/index.js",
   "exports": {
     ".": "./dist/index.js",
@@ -29,7 +29,6 @@
     "aarch64-apple-darwin",
     "x86_64-unknown-linux-gnu",
     "aarch64-unknown-linux-gnu",
-    "x86_64-unknown-linux-musl",
     "aarch64-unknown-linux-musl",
     "x86_64-pc-windows-msvc",
     "aarch64-pc-windows-msvc"
@@ -74,8 +73,10 @@
     "artifacts": "napi artifacts",
     "build:debug": "napi build --platform --no-const-enum --dts ../lancedb/native.d.ts --js ../lancedb/native.js lancedb",
     "build:release": "napi build --platform --no-const-enum --release --dts ../lancedb/native.d.ts --js ../lancedb/native.js dist/",
-    "build": "npm run build:debug && tsc -b && shx cp lancedb/native.d.ts dist/native.d.ts && shx cp lancedb/*.node dist/",
-    "build-release": "npm run build:release && tsc -b && shx cp lancedb/native.d.ts dist/native.d.ts",
+    "build": "npm run build:debug && npm run tsc && shx cp lancedb/*.node dist/",
+    "build-release": "npm run build:release && npm run tsc",
+    "tsc": "tsc -b",
+    "posttsc": "shx cp lancedb/native.d.ts dist/native.d.ts",
     "lint-ci": "biome ci .",
     "docs": "typedoc --plugin typedoc-plugin-markdown --treatWarningsAsErrors --out ../docs/src/js lancedb/index.ts",
     "postdocs": "node typedoc_post_process.js",


@@ -48,8 +48,16 @@ impl Connection {
     pub async fn new(uri: String, options: ConnectionOptions) -> napi::Result<Self> {
         let mut builder = ConnectBuilder::new(&uri);
         if let Some(interval) = options.read_consistency_interval {
-            builder =
-                builder.read_consistency_interval(std::time::Duration::from_secs_f64(interval));
+            match interval {
+                Either::A(seconds) => {
+                    builder = builder.read_consistency_interval(Some(
+                        std::time::Duration::from_secs_f64(seconds),
+                    ));
+                }
+                Either::B(_) => {
+                    builder = builder.read_consistency_interval(None);
+                }
+            }
         }
         if let Some(storage_options) = options.storage_options {
             for (key, value) in storage_options {


@@ -4,7 +4,9 @@
 use std::sync::Mutex;
 
 use lancedb::index::scalar::{BTreeIndexBuilder, FtsIndexBuilder};
-use lancedb::index::vector::{IvfHnswPqIndexBuilder, IvfHnswSqIndexBuilder, IvfPqIndexBuilder};
+use lancedb::index::vector::{
+    IvfFlatIndexBuilder, IvfHnswPqIndexBuilder, IvfHnswSqIndexBuilder, IvfPqIndexBuilder,
+};
 use lancedb::index::Index as LanceDbIndex;
 use napi_derive::napi;
@@ -63,6 +65,32 @@ impl Index {
         })
     }
 
+    #[napi(factory)]
+    pub fn ivf_flat(
+        distance_type: Option<String>,
+        num_partitions: Option<u32>,
+        max_iterations: Option<u32>,
+        sample_rate: Option<u32>,
+    ) -> napi::Result<Self> {
+        let mut ivf_flat_builder = IvfFlatIndexBuilder::default();
+        if let Some(distance_type) = distance_type {
+            let distance_type = parse_distance_type(distance_type)?;
+            ivf_flat_builder = ivf_flat_builder.distance_type(distance_type);
+        }
+        if let Some(num_partitions) = num_partitions {
+            ivf_flat_builder = ivf_flat_builder.num_partitions(num_partitions);
+        }
+        if let Some(max_iterations) = max_iterations {
+            ivf_flat_builder = ivf_flat_builder.max_iterations(max_iterations);
+        }
+        if let Some(sample_rate) = sample_rate {
+            ivf_flat_builder = ivf_flat_builder.sample_rate(sample_rate);
+        }
+        Ok(Self {
+            inner: Mutex::new(Some(LanceDbIndex::IvfFlat(ivf_flat_builder))),
+        })
+    }
+
     #[napi(factory)]
     pub fn btree() -> Self {
         Self {
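Assuming this factory surfaces on the TypeScript `Index` class as `ivfFlat` with camel-cased options (mirroring the existing `ivfPq` pattern), usage might look like this; the table, column, and option values are hypothetical:

```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("/tmp/example-db");
const tbl = await db.openTable("my_table"); // hypothetical table

// Build an IVF_FLAT index on a `vector` column; options left unset fall
// back to the builder defaults shown above.
await tbl.createIndex("vector", {
  config: lancedb.Index.ivfFlat({ distanceType: "cosine", numPartitions: 128 }),
});
```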


@@ -4,6 +4,7 @@
 use std::collections::HashMap;
 
 use env_logger::Env;
+use napi::{bindgen_prelude::Null, Either};
 use napi_derive::*;
 
 mod connection;
@@ -18,7 +19,6 @@ mod table;
 mod util;
 
 #[napi(object)]
-#[derive(Debug)]
 pub struct ConnectionOptions {
     /// (For LanceDB OSS only): The interval, in seconds, at which to check for
     /// updates to the table from other processes. If None, then consistency is not
@@ -29,7 +29,7 @@ pub struct ConnectionOptions {
     /// has passed since the last check, then the table will be checked for updates.
     /// Note: this consistency only applies to read operations. Write operations are
     /// always consistent.
     pub read_consistency_interval: Option<f64>,
-    pub read_consistency_interval: Option<f64>,
+    pub read_consistency_interval: Option<Either<f64, Null>>,
     /// (For LanceDB OSS only): configuration for object storage.
     ///
     /// The available options are described at https://lancedb.github.io/lancedb/guides/storage/
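On the TypeScript side, the `Either<f64, Null>` field should surface as `number | null`; a sketch of the two modes, assuming the generated binding types it that way:

```ts
import * as lancedb from "@lancedb/lancedb";

// Eventual consistency: look for newer table versions at most every 5 s.
const dbEventual = await lancedb.connect("/tmp/example-db", {
  readConsistencyInterval: 5,
});

// Explicit opt-out: never check for updates from other processes
// (writes remain consistent either way, per the doc comment above).
const dbUnchecked = await lancedb.connect("/tmp/example-db", {
  readConsistencyInterval: null,
});
```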


@@ -114,6 +114,16 @@ impl Query {
             ))
         })
     }
+
+    #[napi(catch_unwind)]
+    pub async fn analyze_plan(&self) -> napi::Result<String> {
+        self.inner.analyze_plan().await.map_err(|e| {
+            napi::Error::from_reason(format!(
+                "Failed to execute analyze plan: {}",
+                convert_error(&e)
+            ))
+        })
+    }
 }
 
 #[napi]
@@ -259,4 +269,14 @@ impl VectorQuery {
             ))
         })
     }
+
+    #[napi(catch_unwind)]
+    pub async fn analyze_plan(&self) -> napi::Result<String> {
+        self.inner.analyze_plan().await.map_err(|e| {
+            napi::Error::from_reason(format!(
+                "Failed to execute analyze plan: {}",
+                convert_error(&e)
+            ))
+        })
+    }
 }
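In the JavaScript API this binding should surface as `analyzePlan()` on queries. A hypothetical round trip (table and vector values are placeholders):

```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("/tmp/example-db");
const tbl = await db.openTable("my_table"); // hypothetical table

// Unlike explainPlan(), analyzePlan() actually executes the query and
// reports per-node runtime metrics (rows produced, I/O, elapsed time).
const report = await tbl
  .query()
  .nearestTo([0.1, 0.2, 0.3])
  .limit(10)
  .analyzePlan();
console.log(report);
```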


@@ -498,6 +498,9 @@ pub struct IndexStatistics {
     pub distance_type: Option<String>,
     /// The number of parts this index is split into.
     pub num_indices: Option<u32>,
+    /// The KMeans loss value of the index,
+    /// it is only present for vector indices.
+    pub loss: Option<f64>,
 }
 
 impl From<lancedb::index::IndexStatistics> for IndexStatistics {
     fn from(value: lancedb::index::IndexStatistics) -> Self {
@@ -507,6 +510,7 @@ impl From<lancedb::index::IndexStatistics> for IndexStatistics {
             index_type: value.index_type.to_string(),
             distance_type: value.distance_type.map(|d| d.to_string()),
             num_indices: value.num_indices,
+            loss: value.loss,
         }
     }
 }
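The new field should then be readable from index statistics in the SDKs; a sketch, assuming an existing vector index named `vector_idx` and a `Table.indexStats` accessor:

```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("/tmp/example-db");
const tbl = await db.openTable("my_table"); // hypothetical table

// `loss` is the final k-means training loss; it is undefined for
// non-vector (scalar) indices.
const stats = await tbl.indexStats("vector_idx"); // hypothetical index name
console.log(stats?.loss);
```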

pyright_report.csv (new file)

@@ -0,0 +1,56 @@
file,errors,warnings,total_issues
python/python/lancedb/arrow.py,0,0,0
python/python/lancedb/background_loop.py,0,0,0
python/python/lancedb/embeddings/__init__.py,0,0,0
python/python/lancedb/exceptions.py,0,0,0
python/python/lancedb/index.py,0,0,0
python/python/lancedb/integrations/__init__.py,0,0,0
python/python/lancedb/remote/__init__.py,0,0,0
python/python/lancedb/remote/errors.py,0,0,0
python/python/lancedb/rerankers/__init__.py,0,0,0
python/python/lancedb/rerankers/answerdotai.py,0,0,0
python/python/lancedb/rerankers/cohere.py,0,0,0
python/python/lancedb/rerankers/colbert.py,0,0,0
python/python/lancedb/rerankers/cross_encoder.py,0,0,0
python/python/lancedb/rerankers/openai.py,0,0,0
python/python/lancedb/rerankers/util.py,0,0,0
python/python/lancedb/rerankers/voyageai.py,0,0,0
python/python/lancedb/schema.py,0,0,0
python/python/lancedb/types.py,0,0,0
python/python/lancedb/__init__.py,0,1,1
python/python/lancedb/conftest.py,1,0,1
python/python/lancedb/embeddings/bedrock.py,1,0,1
python/python/lancedb/merge.py,1,0,1
python/python/lancedb/rerankers/base.py,1,0,1
python/python/lancedb/rerankers/jinaai.py,0,1,1
python/python/lancedb/rerankers/linear_combination.py,1,0,1
python/python/lancedb/embeddings/instructor.py,2,0,2
python/python/lancedb/embeddings/openai.py,2,0,2
python/python/lancedb/embeddings/watsonx.py,2,0,2
python/python/lancedb/embeddings/registry.py,3,0,3
python/python/lancedb/embeddings/sentence_transformers.py,3,0,3
python/python/lancedb/integrations/pyarrow.py,3,0,3
python/python/lancedb/rerankers/rrf.py,3,0,3
python/python/lancedb/dependencies.py,4,0,4
python/python/lancedb/embeddings/gemini_text.py,4,0,4
python/python/lancedb/embeddings/gte.py,4,0,4
python/python/lancedb/embeddings/gte_mlx_model.py,4,0,4
python/python/lancedb/embeddings/ollama.py,4,0,4
python/python/lancedb/embeddings/transformers.py,4,0,4
python/python/lancedb/remote/db.py,5,0,5
python/python/lancedb/context.py,6,0,6
python/python/lancedb/embeddings/cohere.py,6,0,6
python/python/lancedb/fts.py,6,0,6
python/python/lancedb/db.py,9,0,9
python/python/lancedb/embeddings/utils.py,9,0,9
python/python/lancedb/common.py,11,0,11
python/python/lancedb/util.py,13,0,13
python/python/lancedb/embeddings/imagebind.py,14,0,14
python/python/lancedb/embeddings/voyageai.py,15,0,15
python/python/lancedb/embeddings/open_clip.py,16,0,16
python/python/lancedb/pydantic.py,16,0,16
python/python/lancedb/embeddings/base.py,17,0,17
python/python/lancedb/embeddings/jinaai.py,18,1,19
python/python/lancedb/remote/table.py,23,0,23
python/python/lancedb/query.py,47,1,48
python/python/lancedb/table.py,61,0,61


@@ -1,5 +1,5 @@
 [tool.bumpversion]
-current_version = "0.20.0"
+current_version = "0.22.0-beta.0"
 parse = """(?x)
     (?P<major>0|[1-9]\\d*)\\.
     (?P<minor>0|[1-9]\\d*)\\.


@@ -8,9 +8,9 @@ For general contribution guidelines, see [CONTRIBUTING.md](../CONTRIBUTING.md).
 The Python package is a wrapper around the Rust library, `lancedb`. We use
 [pyo3](https://pyo3.rs/) to create the bindings between Rust and Python.
 
-* `src/`: Rust bindings source code
-* `python/lancedb`: Python package source code
-* `python/tests`: Unit tests
+- `src/`: Rust bindings source code
+- `python/lancedb`: Python package source code
+- `python/tests`: Unit tests
 
 ## Development environment
@@ -61,6 +61,12 @@ make test
 make doctest
 ```
 
+Run type checking:
+
+```shell
+make typecheck
+```
+
 To run a single test, you can use the `pytest` command directly. Provide the path
 to the test file, and optionally the test name after `::`.


@@ -1,6 +1,6 @@
 [package]
 name = "lancedb-python"
-version = "0.20.0"
+version = "0.22.0-beta.0"
 edition.workspace = true
 description = "Python bindings for LanceDB"
 license.workspace = true
@@ -14,30 +14,25 @@ name = "_lancedb"
 crate-type = ["cdylib"]
 
 [dependencies]
-arrow = { version = "53.2", features = ["pyarrow"] }
+arrow = { version = "54.1", features = ["pyarrow"] }
 lancedb = { path = "../rust/lancedb", default-features = false }
 env_logger.workspace = true
-pyo3 = { version = "0.22.2", features = [
-    "extension-module",
-    "abi3-py39",
-    "gil-refs"
-] }
-pyo3-async-runtimes = { version = "0.22", features = ["attributes", "tokio-runtime"] }
+pyo3 = { version = "0.23", features = ["extension-module", "abi3-py39"] }
+pyo3-async-runtimes = { version = "0.23", features = [
+    "attributes",
+    "tokio-runtime",
+] }
 pin-project = "1.1.5"
 futures.workspace = true
 tokio = { version = "1.40", features = ["sync"] }
 
 [build-dependencies]
-pyo3-build-config = { version = "0.20.3", features = [
+pyo3-build-config = { version = "0.23", features = [
     "extension-module",
     "abi3-py39",
 ] }
 
 [features]
-default = ["default-tls", "remote"]
+default = ["remote"]
 fp16kernels = ["lancedb/fp16kernels"]
 remote = ["lancedb/remote"]
-
-# TLS
-default-tls = ["lancedb/default-tls"]
-native-tls = ["lancedb/native-tls"]
-rustls-tls = ["lancedb/rustls-tls"]


@@ -23,6 +23,10 @@ check: ## Check formatting and lints.
 fix: ## Fix python lints
 	ruff check python --fix
 
+.PHONY: typecheck
+typecheck: ## Run type checking with pyright.
+	pyright
+
 .PHONY: doctest
 doctest: ## Run documentation tests.
 	pytest --doctest-modules python/lancedb
@@ -30,3 +34,7 @@ doctest: ## Run documentation tests.
 .PHONY: test
 test: ## Run tests.
 	pytest python/tests -vv --durations=10 -m "not slow and not s3_test"
+
+.PHONY: clean
+clean:
+	rm -rf data


@@ -4,8 +4,8 @@ name = "lancedb"
 dynamic = ["version"]
 dependencies = [
     "deprecation",
-    "pylance~=0.23.2",
     "tqdm>=4.27.0",
+    "pyarrow>=14",
     "pydantic>=1.10",
     "packaging",
     "overrides>=0.7",
@@ -54,6 +54,7 @@ tests = [
     "polars>=0.19, <=1.3.0",
     "tantivy",
     "pyarrow-stubs",
+    "pylance>=0.23.2",
 ]
 dev = [
     "ruff",
@@ -62,7 +63,7 @@ dev = [
     'typing-extensions>=4.0.0; python_version < "3.11"',
 ]
 
 docs = ["mkdocs", "mkdocs-jupyter", "mkdocs-material", "mkdocstrings[python]"]
-clip = ["torch", "pillow", "open-clip"]
+clip = ["torch", "pillow", "open-clip-torch"]
 embeddings = [
     "requests>=2.31.0",
     "openai>=1.6.1",
@@ -91,7 +92,7 @@ requires = ["maturin>=1.4"]
 build-backend = "maturin"
 
 [tool.ruff.lint]
-select = ["F", "E", "W", "G", "TCH", "PERF"]
+select = ["F", "E", "W", "G", "PERF"]
 
 [tool.pytest.ini_options]
 addopts = "--strict-markers --ignore-glob=lancedb/embeddings/*.py"
@@ -102,5 +103,28 @@ markers = [
 ]
 
 [tool.pyright]
-include = ["python/lancedb/table.py"]
+include = [
+    "python/lancedb/index.py",
+    "python/lancedb/rerankers/util.py",
+    "python/lancedb/rerankers/__init__.py",
+    "python/lancedb/rerankers/voyageai.py",
+    "python/lancedb/rerankers/jinaai.py",
+    "python/lancedb/rerankers/openai.py",
+    "python/lancedb/rerankers/cross_encoder.py",
+    "python/lancedb/rerankers/colbert.py",
+    "python/lancedb/rerankers/answerdotai.py",
+    "python/lancedb/rerankers/cohere.py",
+    "python/lancedb/arrow.py",
+    "python/lancedb/__init__.py",
+    "python/lancedb/types.py",
+    "python/lancedb/integrations/__init__.py",
+    "python/lancedb/exceptions.py",
+    "python/lancedb/background_loop.py",
+    "python/lancedb/schema.py",
+    "python/lancedb/remote/__init__.py",
+    "python/lancedb/remote/errors.py",
+    "python/lancedb/embeddings/__init__.py",
+    "python/lancedb/_lancedb.pyi",
+]
+exclude = ["python/tests/"]
 pythonVersion = "3.12"


@@ -7,6 +7,7 @@ import os
 from concurrent.futures import ThreadPoolExecutor
 from datetime import timedelta
 from typing import Dict, Optional, Union, Any
+import warnings
 
 __version__ = importlib.metadata.version("lancedb")
@@ -14,6 +15,7 @@ from ._lancedb import connect as lancedb_connect
 from .common import URI, sanitize_uri
 from .db import AsyncConnection, DBConnection, LanceDBConnection
 from .remote import ClientConfig
+from .remote.db import RemoteDBConnection
 from .schema import vector
 from .table import AsyncTable
@@ -24,7 +26,7 @@ def connect(
     api_key: Optional[str] = None,
     region: str = "us-east-1",
     host_override: Optional[str] = None,
-    read_consistency_interval: Optional[timedelta] = None,
+    read_consistency_interval: Optional[timedelta] = timedelta(seconds=5),
     request_thread_pool: Optional[Union[int, ThreadPoolExecutor]] = None,
     client_config: Union[ClientConfig, Dict[str, Any], None] = None,
     storage_options: Optional[Dict[str, str]] = None,
@@ -47,9 +49,8 @@ def connect(
     read_consistency_interval: timedelta, default None
         (For LanceDB OSS only)
         The interval at which to check for updates to the table from other
-        processes. If None, then consistency is not checked. For performance
-        reasons, this is the default. For strong consistency, set this to
-        zero seconds. Then every read will check for updates from other
+        processes. If None, then consistency is not checked. For strong consistency,
+        set this to zero seconds. Then every read will check for updates from other
         processes. As a compromise, you can set this to a non-zero timedelta
         for eventual consistency. If more than that interval has passed since
         the last check, then the table will be checked for updates. Note: this
@@ -86,8 +87,6 @@ def connect(
     conn : DBConnection
         A connection to a LanceDB database.
     """
-    from .remote.db import RemoteDBConnection
-
     if isinstance(uri, str) and uri.startswith("db://"):
         if api_key is None:
             api_key = os.environ.get("LANCEDB_API_KEY")
@@ -122,7 +121,7 @@ async def connect_async(
     api_key: Optional[str] = None,
     region: str = "us-east-1",
     host_override: Optional[str] = None,
-    read_consistency_interval: Optional[timedelta] = None,
+    read_consistency_interval: Optional[timedelta] = timedelta(seconds=5),
     client_config: Optional[Union[ClientConfig, Dict[str, Any]]] = None,
     storage_options: Optional[Dict[str, str]] = None,
 ) -> AsyncConnection:
@@ -143,9 +142,8 @@ async def connect_async(
     read_consistency_interval: timedelta, default None
         (For LanceDB OSS only)
         The interval at which to check for updates to the table from other
-        processes. If None, then consistency is not checked. For performance
-        reasons, this is the default. For strong consistency, set this to
-        zero seconds. Then every read will check for updates from other
+        processes. If None, then consistency is not checked. For strong consistency,
+        set this to zero seconds. Then every read will check for updates from other
         processes. As a compromise, you can set this to a non-zero timedelta
         for eventual consistency. If more than that interval has passed since
         the last check, then the table will be checked for updates. Note: this
@@ -214,3 +212,13 @@ __all__ = [
     "RemoteDBConnection",
     "__version__",
 ]
+
+
+def __warn_on_fork():
+    warnings.warn(
+        "lance is not fork-safe. If you are using multiprocessing, use spawn instead.",
+    )
+
+
+if hasattr(os, "register_at_fork"):
+    os.register_at_fork(before=__warn_on_fork)


@@ -3,6 +3,7 @@ from typing import Dict, List, Optional, Tuple, Any, Union, Literal
 import pyarrow as pa
 
 from .index import BTree, IvfFlat, IvfPq, Bitmap, LabelList, HnswPq, HnswSq, FTS
+from .remote import ClientConfig
 
 class Connection(object):
     uri: str
@@ -47,10 +48,11 @@ class Table:
     async def version(self) -> int: ...
     async def checkout(self, version: int): ...
     async def checkout_latest(self): ...
-    async def restore(self): ...
+    async def restore(self, version: Optional[int] = None): ...
     async def list_indices(self) -> list[IndexConfig]: ...
     async def delete(self, filter: str): ...
     async def add_columns(self, columns: list[tuple[str, str]]) -> None: ...
+    async def add_columns_with_schema(self, schema: pa.Schema) -> None: ...
     async def alter_columns(self, columns: list[dict[str, Any]]) -> None: ...
     async def optimize(
         self,
@@ -71,11 +73,15 @@ async def connect(
     region: Optional[str],
     host_override: Optional[str],
     read_consistency_interval: Optional[float],
+    client_config: Optional[Union[ClientConfig, Dict[str, Any]]],
+    storage_options: Optional[Dict[str, str]],
 ) -> Connection: ...
 
 class RecordBatchStream:
+    @property
     def schema(self) -> pa.Schema: ...
-    async def next(self) -> Optional[pa.RecordBatch]: ...
+    def __aiter__(self) -> "RecordBatchStream": ...
+    async def __anext__(self) -> pa.RecordBatch: ...
 
 class Query:
     def where(self, filter: str): ...
@@ -89,6 +95,9 @@ class Query:
     def nearest_to(self, query_vec: pa.Array) -> VectorQuery: ...
     def nearest_to_text(self, query: dict) -> FTSQuery: ...
     async def execute(self, max_batch_length: Optional[int]) -> RecordBatchStream: ...
+    async def explain_plan(self, verbose: Optional[bool]) -> str: ...
+    async def analyze_plan(self) -> str: ...
+    def to_query_request(self) -> PyQueryRequest: ...
 
 class FTSQuery:
     def where(self, filter: str): ...
@@ -102,7 +111,7 @@ class FTSQuery:
     def add_query_vector(self, query_vec: pa.Array) -> None: ...
     def nearest_to(self, query_vec: pa.Array) -> HybridQuery: ...
     async def execute(self, max_batch_length: Optional[int]) -> RecordBatchStream: ...
-    async def explain_plan(self) -> str: ...
+    def to_query_request(self) -> PyQueryRequest: ...
 
 class VectorQuery:
     async def execute(self) -> RecordBatchStream: ...
@@ -118,6 +127,7 @@ class VectorQuery:
     def nprobes(self, nprobes: int): ...
     def bypass_vector_index(self): ...
     def nearest_to_text(self, query: dict) -> HybridQuery: ...
+    def to_query_request(self) -> PyQueryRequest: ...
 
 class HybridQuery:
     def where(self, filter: str): ...
@@ -135,6 +145,33 @@ class HybridQuery:
     def to_fts_query(self) -> FTSQuery: ...
     def get_limit(self) -> int: ...
     def get_with_row_id(self) -> bool: ...
+    def to_query_request(self) -> PyQueryRequest: ...
+
+class PyFullTextSearchQuery:
+    columns: Optional[List[str]]
+    query: str
+    limit: Optional[int]
+    wand_factor: Optional[float]
+
+class PyQueryRequest:
+    limit: Optional[int]
+    offset: Optional[int]
+    filter: Optional[Union[str, bytes]]
+    full_text_search: Optional[PyFullTextSearchQuery]
+    select: Optional[Union[str, List[str]]]
+    fast_search: Optional[bool]
+    with_row_id: Optional[bool]
+    column: Optional[str]
+    query_vector: Optional[List[pa.Array]]
+    nprobes: Optional[int]
+    lower_bound: Optional[float]
+    upper_bound: Optional[float]
+    ef: Optional[int]
+    refine_factor: Optional[int]
+    distance_type: Optional[str]
+    bypass_vector_index: Optional[bool]
+    postfilter: Optional[bool]
+    norm: Optional[str]
+
 class CompactionStats:
     fragments_removed: int
@@ -142,6 +179,10 @@ class CompactionStats:
     files_removed: int
     files_added: int
 
+class CleanupStats:
+    bytes_removed: int
+    old_versions: int
+
 class RemovalStats:
     bytes_removed: int
     old_versions_removed: int

Some files were not shown because too many files have changed in this diff.