Compare commits

..

149 Commits

Author SHA1 Message Date
Lance Release
4f7b24d1a9 Bump version: 0.25.3-beta.6 → 0.25.3 2025-11-07 04:57:55 +00:00
Lance Release
f9540724b7 Bump version: 0.25.3-beta.5 → 0.25.3-beta.6 2025-11-07 04:57:54 +00:00
Weston Pace
aeac9c7644 feat: add python Permutation class to mimic hugging face dataset and provide pytorch dataloader (#2725) 2025-11-06 16:15:33 -08:00
Mark
6ddd271627 fix: relax bytemuck and crunchy version pins (#2768)
Closes #2767
2025-11-05 14:07:35 -08:00
LanceDB Robot
f0d7520bdf chore: update lance dependency to v0.39.0 (#2766)
## Summary
- bump Lance crates to v0.39.0 with ci/set_lance_version.py and refresh
Cargo.lock
- keep namespace feature set intact while moving off git dependencies
- verified cargo clippy --workspace --tests --all-features -- -D
warnings
- ran cargo fmt --all

## References
- https://github.com/lancedb/lance/releases/tag/v0.39.0
2025-11-05 21:25:05 +08:00
Will Jones
7ef8bafd51 feat: add source to TableNotFound errors (#2765)
This will make it easier to see if there are underlying problems. We
should see the actual object store HTTP request error within the error
chain after this.
2025-11-04 15:31:45 -08:00
Lance Release
aed4a7c98e Bump version: 0.22.3-beta.4 → 0.22.3-beta.5 2025-10-31 17:08:56 +00:00
Lance Release
273ba18426 Bump version: 0.25.3-beta.4 → 0.25.3-beta.5 2025-10-31 17:07:31 +00:00
LuQQiu
8b94308cf2 feat: add fts udtf in sql (#2755)
Support FTS feature parity in SQL to match current Python API
capability.
Add `.to_json()` method to FTS query classes to enable usage with SQL
`fts()` UDTF.
Related: https://github.com/lancedb/blog-lancedb/pull/147

```python
query = MatchQuery("puppy", "text", fuzziness=2)
result = client.execute(f"SELECT * FROM fts('table', '{query.to_json()}')")
```

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-31 10:06:19 -07:00
Lance Release
0b7b27481e Bump version: 0.22.3-beta.3 → 0.22.3-beta.4 2025-10-31 01:14:39 +00:00
Lance Release
e1f9b011f8 Bump version: 0.25.3-beta.3 → 0.25.3-beta.4 2025-10-31 01:13:18 +00:00
Wyatt Alt
d664b8739f chore: update lance to 0.38.3 stable (#2757) 2025-10-30 16:44:10 -07:00
S.A.N
20bec61ecb refactor(node): async generator for RecordBatchIterator (#2744)
Uses a native JS async generator: more efficient asynchronous iteration, fewer
synthetic promises, and the ability to handle a `catch` or `break` of the
parent loop in a `finally` block.
2025-10-30 14:36:24 -07:00
Will Jones
45255be42c ci: add agents and add reviewing instructions (#2754) 2025-10-29 17:28:26 -07:00
fzowl
93c2cf2f59 feat(voyageai): update voyage integration (#2713)
Adds a multimodal usage guide.
VoyageAI integration changes:
 - Adding voyage-3.5 and voyage-3.5-lite models
 - Adding voyage-context-3 model
 - Adding rerank-2.5 and rerank-2.5-lite models
2025-10-29 16:49:07 +05:30
Oz Katz
9d29c83f81 docs: remove DynamoDB commit store section (#2715)
This PR removes the section about needing the DynamoDB Commit Store.
Reasoning:

* S3 now supports [conditional
writes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/conditional-writes.html)
* Upstream lance was updated to use this capability in
https://github.com/lancedb/lance/issues/2793
* LanceDB itself was updated to include this (see @wjones127's comment
[here](https://github.com/lancedb/lancedb/issues/1614#issuecomment-2725687260))
2025-10-29 02:12:50 +08:00
Lance Release
2a6143b5bd Bump version: 0.22.3-beta.2 → 0.22.3-beta.3 2025-10-28 02:12:20 +00:00
Lance Release
b2242886e0 Bump version: 0.25.3-beta.2 → 0.25.3-beta.3 2025-10-28 02:11:17 +00:00
LuQQiu
199904ab35 chore: update lance dependency to v0.38.3-beta.11 (#2749)
## Summary

- Updated all Lance dependencies from v0.38.3-beta.9 to v0.38.3-beta.11
- Migrated `lance-namespace-impls` to use new granular cloud provider
features (`dir-aws`, `dir-gcp`, `dir-azure`, `dir-oss`) instead of
deprecated `dir` feature
- Updated namespace connection API to use `ConnectBuilder` instead of
deprecated `connect()` function

## API Changes

The Lance team refactored the `lance-namespace-impls` package in
v0.38.3-beta.11:

1. **Feature flags**: The single `dir` feature was split into cloud
provider-specific features:
   - `dir-aws` for AWS S3 support
   - `dir-gcp` for Google Cloud Storage support
   - `dir-azure` for Azure Blob Storage support
   - `dir-oss` for Alibaba Cloud OSS support

2. **Connection API**: The `connect()` function was replaced with a
`ConnectBuilder` pattern for more flexibility

## Testing

- Ran `cargo clippy --workspace --tests --all-features -- -D warnings`: no warnings
- Ran `cargo fmt --all`: code formatted
- All changes verified and committed

## Related

This update was triggered by the Lance release:
https://github.com/lancedb/lance/releases/tag/v0.38.3-beta.11

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-27 19:10:26 -07:00
Lance Release
1fa888615f Bump version: 0.22.3-beta.1 → 0.22.3-beta.2 2025-10-21 20:14:20 +00:00
Lance Release
40967f3baa Bump version: 0.25.3-beta.1 → 0.25.3-beta.2 2025-10-21 20:13:10 +00:00
Jack Ye
0bfc7de32c feat: expose storage options in table (#2736)
Pending https://github.com/lancedb/lance/pull/5016
2025-10-21 16:10:40 -04:00
LanceDB Robot
d43880a585 ci: polish codex prompt for better behavior (#2739) 2025-10-22 03:49:25 +08:00
LanceDB Robot
59a886958b ci: make sure GH_TOKEN included in codex env (#2738) 2025-10-21 17:51:41 +08:00
github-actions[bot]
c36f6746d1 chore: update lance dependency to v0.38.3-beta.8 (#2737)
## Summary
- bump Lance dependencies to v0.38.3-beta.8
- ran `cargo clippy --workspace --tests --all-features -- -D warnings`
- ran `cargo fmt --all`

## Links
- https://github.com/lancedb/lance/releases/tag/v0.38.3-beta.8

Co-authored-by: lancedb automation <robot@lancedb.com>
2025-10-21 17:29:08 +08:00
LanceDB Robot
25ce6d311f ci: add instruct for codex to use gh with token (#2734) 2025-10-21 17:12:15 +08:00
github-actions[bot]
92a4e46f9f chore: update lance dependency to v0.38.3-beta.7 (#2735)
## Summary
- bump Lance dependencies to v0.38.3-beta.7
- ran cargo clippy --workspace --tests --all-features -- -D warnings
- ran cargo fmt --all

Triggered by tag
[v0.38.3-beta.7](https://github.com/lancedb/lance/releases/tag/v0.38.3-beta.7).

---------

Co-authored-by: LanceDB Robot <robot@lancedb.com>
2025-10-21 17:04:57 +08:00
LanceDB Robot
845641c480 ci: use robot token instead of github's own token (#2732) 2025-10-21 02:38:14 +08:00
Lance Release
d96404c635 Bump version: 0.22.3-beta.0 → 0.22.3-beta.1 2025-10-19 23:41:46 +00:00
Lance Release
02d31ee412 Bump version: 0.25.3-beta.0 → 0.25.3-beta.1 2025-10-19 23:40:45 +00:00
github-actions[bot]
308623577d chore: update lance dependency to v0.38.3-beta.6 (#2731)
## Summary
- bump Lance dependencies across the workspace to v0.38.3-beta.6
- verified the workspace with cargo clippy --workspace --tests
--all-features -D warnings
- formatted the workspace with cargo fmt --all

## Reference
- https://github.com/lancedb/lance/releases/tag/v0.38.3-beta.6

Co-authored-by: lancedb automation <automation@lancedb.com>
2025-10-19 14:26:20 -07:00
Jack Ye
8ee3ae378f chore: use lance-namespace in lance main repo (#2729)
This fully fixes the duplicated lance version issue without the need for a
patch section in Cargo.
2025-10-17 22:01:20 -07:00
github-actions[bot]
3372a2aae0 chore: update lance dependency to v0.38.3-beta.5 (#2726)
## Summary
- update Lance dependencies to v0.38.3-beta.4 via
ci/set_lance_version.py
- refresh Cargo.lock for the preview release

## Testing
- cargo clippy --workspace --tests --all-features -- -D warnings
- cargo fmt --all

Triggered by tag:
[v0.38.3-beta.4](https://github.com/lancedb/lance/releases/tag/v0.38.3-beta.4)

Co-authored-by: Jack Ye <yezhaoqin@gmail.com>
2025-10-17 15:17:16 -07:00
Weston Pace
4cfcd95320 feat: add a permutation reader that can read a permutation view (#2712)
This adds a rust permutation builder. In the next PR I will have python
bindings and integration with pytorch.
2025-10-17 05:00:23 -07:00
Xuanwo
a70ff04bc9 ci: polish prompt to make codex happy work (#2724)
Changes the prompts a bit to make codex happy.

Signed-off-by: Xuanwo <github@xuanwo.io>
2025-10-17 17:54:19 +08:00
Xuanwo
a9daa18be9 feat: using codex to auto upgrade lance (#2723)
This PR adds an action that allows codex to auto-upgrade lance.

---

**This PR was primarily authored with Codex using GPT-5-Codex and then
hand-reviewed by me. I AM responsible for every change made in this PR.
I aimed to keep it aligned with our goals, though I may have missed
minor issues. Please flag anything that feels off, I'll fix it
quickly.**

Signed-off-by: Xuanwo <github@xuanwo.io>
2025-10-17 17:21:16 +08:00
Ayush Chaurasia
3f2e3986e9 feat: expand support for multivector colpali models and enhancements (#2719) 2025-10-17 14:36:32 +05:30
Rudi Floren
bf55feb9b6 feat: remove dynamodb default dependency (#2720)
`dynamodb` pulls in aws-* crates even if not used.

You can enable the `dynamodb` feature for lancedb to enable it for
lance.

Closes #2718
2025-10-16 10:54:06 -07:00
Weston Pace
8f8e06a2da feat: add output_schema method to queries (#2717)
This is a helper utility I need for some of my data loader work. It
makes it easy to see the output schema even when a `select` has been
applied.
2025-10-14 05:13:28 -07:00
Lance Release
03eab0f091 Bump version: 0.22.2 → 0.22.3-beta.0 2025-10-14 02:25:58 +00:00
Lance Release
143184c0ae Bump version: 0.25.2 → 0.25.3-beta.0 2025-10-14 02:25:16 +00:00
Jack Ye
dadb042978 feat: bump lance to 0.38.3-beta.2 and rust to 1.90.0 (#2714) 2025-10-10 14:02:41 -07:00
Weston Pace
5a19cf15a6 feat: a utility for creating "permutation views" (#2552)
I'm working on a lancedb version of pytorch data loading (and hopefully
addressing https://github.com/lancedb/lance/issues/3727).

However, rather than rely on pytorch for everything I'm moving some of
the things that pytorch does into rust. This gives us more control over
data loading (e.g. using shards or a hash-based split) and it allows
permutations to be persistent. In particular I hope to be able to:

* Create a persistent permutation
* This permutation can handle splits, filtering, shuffling, and sharding
* Create a rust data loader that can read a permutation (one or more
splits), or a subset of a permutation (for DDP)
* Create a python data loader that delegates to the rust data loader

Eventually create integrations for other data loading libraries,
including rust & node
2025-10-09 18:07:31 -07:00
Will Jones
3dcec724b7 chore: loosen pin on chrono (#2710)
Fixes #2709
2025-10-09 14:23:56 -07:00
LuQQiu
86a6bb9fcb chore: supports limit push down through MetadataEraserExec (#2679)
For limit to successfully push down to FilteredReadExec:
https://github.com/lancedb/lance/pull/4795/
2025-10-09 09:33:38 -07:00
BubbleCal
b59d1007d3 feat(index): add IVF_RQ index type (#2687)
This exposes the IVF_RQ (RabitQ quantization) index type to lancedb.

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-10-09 15:46:18 +08:00
Lance Release
56a16b1728 Bump version: 0.22.2-beta.3 → 0.22.2 2025-10-08 18:13:08 +00:00
Lance Release
b7afed9beb Bump version: 0.22.2-beta.2 → 0.22.2-beta.3 2025-10-08 18:12:23 +00:00
Lance Release
5cbbaa2e4a Bump version: 0.25.2-beta.3 → 0.25.2 2025-10-08 18:11:45 +00:00
Lance Release
1b6bd2498e Bump version: 0.25.2-beta.2 → 0.25.2-beta.3 2025-10-08 18:11:45 +00:00
Jack Ye
285da9db1d feat: upgrade lance to 0.38.2 (#2705) 2025-10-08 09:59:28 -07:00
Ayush Chaurasia
ad8306c96b docs: add custom redirect for storage page (#2706)
Expand the custom redirection links list to include storage page
2025-10-08 21:35:48 +05:30
Wyatt Alt
3594538509 fix: add name to index config and fix create_index typing (#2660)
Co-authored-by: Mark McCaskey <markm@harvey.ai>
2025-10-08 04:41:30 -07:00
Tom LaMarre
917aabd077 fix(node): support specifying arrow field types by name (#2704)
The [`FieldLike` type in
arrow.ts](5ec12c9971/nodejs/lancedb/arrow.ts (L71-L78))
can have a `type: string` property, but before this change, actually
trying to create a table that has a schema that specifies field types by
name results in an error:

```
Error: Expected a Type but object was null/undefined
```

This change adds support for mapping some type name strings to arrow
`DataType`s, so that passing `FieldLike`s with a `type: string` property
to `sanitizeField` does not throw an error.

The type names that can be passed are upper/lowercase variations of the
keys of the `constructorsByTypeName` object. This does not support
mapping types that need parameters, such as timestamps which need
timezones.

With this, it is possible to create empty tables from `SchemaLike`
objects without instantiating arrow types, e.g.:

```
    import { SchemaLike } from "../lancedb/arrow"
    // ...
    const schemaLike = {
      fields: [
        {
          name: "id",
          type: "int64",
          nullable: true,
        },
        {
          name: "vector",
          type: "float64",
          nullable: true,
        },
      ],
    // ...
    } satisfies SchemaLike;
    const table = await con.createEmptyTable("test", schemaLike);
 ```

This change also makes `FieldLike.nullable` required since the `sanitizeField` function throws if it is undefined.
2025-10-08 04:40:06 -07:00
Jack Ye
5ec12c9971 fix: federated database should not pass namespace to listing database (#2702)
Fixes an error where, when converting a federated database operation to a
listing database operation, the namespace parameter is no longer correct
and should be dropped.

Note that with the testing infra we have today, we don't have a good way
to test these changes. I will do a quick follow up on
https://github.com/lancedb/lancedb/issues/2701 but would be great to get
this in first to resolve the related issues.
2025-10-06 14:12:41 -07:00
Ed Rogers
d0ce489b21 fix: use stdlib override when possible (#2699)
## Description of changes

Fixes #2698  

This PR uses
[`typing.override`](https://docs.python.org/3/library/typing.html#typing.override)
in favor of the [`overrides`](https://pypi.org/project/overrides/)
dependency when possible. As of Python 3.12, the standard library offers
`typing.override` to perform a static check on overridden methods.

### Motivation

Currently, `overrides` is incompatible with Python 3.14. As a result,
any package that attempts to import `overrides` using Python 3.14+ will
raise an `AttributeError`. An
[issue](https://github.com/mkorpela/overrides/issues/127) has been
raised and a [pull
request](https://github.com/mkorpela/overrides/pull/133) has been
submitted to the GitHub repo for the `overrides` project. But the
maintainer has been unresponsive.

To ensure readiness for Python 3.14, this package (and any other package
directly depending on `overrides`) should consider using
`typing.override` instead.

### Impact

The standard library added `typing.override` as of 3.12. As a result,
this change will affect only users of Python 3.12+. Previous versions
will continue to rely on `overrides`. Notably, the standard library
implementation is slightly different than that of `overrides`. A
thorough discussion of those differences is shown in [PEP
698](https://peps.python.org/pep-0698/), and it is also summarized
nicely by the maintainer of `overrides`
[here](https://github.com/mkorpela/overrides/issues/126#issuecomment-2401327116).

There are 2 main ways that switching from `overrides` to
`typing.override` will have an impact on developers of this repo:
1. `typing.override` does not implement any runtime checking. Instead,
it provides information to type checkers.
2. The stdlib does not provide a mixin class to enforce override
decorators on child classes. (Their reasoning for this is explained in
[the PEP](https://peps.python.org/pep-0698/).) This PR disables that
behavior entirely by replacing `EnforceOverrides`; a sketch of the
conditional import follows below.
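
A minimal sketch (not from the PR) of the version-conditional import this implies; the classes are illustrative:

```python
import sys

if sys.version_info >= (3, 12):
    from typing import override  # stdlib: static (type-checker) check only
else:
    from overrides import override  # third-party: also checks at runtime


class Base:
    def ping(self) -> str:
        return "base"


class Child(Base):
    @override  # flags a mismatch (e.g. a typo'd method name) to type checkers
    def ping(self) -> str:
        return "child"
```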
2025-10-06 11:23:20 -07:00
Lance Release
d7e02c8181 Bump version: 0.22.2-beta.1 → 0.22.2-beta.2 2025-10-06 18:10:40 +00:00
Lance Release
70958f6366 Bump version: 0.25.2-beta.1 → 0.25.2-beta.2 2025-10-06 18:09:24 +00:00
Will Jones
1ac745eb18 ci: fix Python and Node CI on main (#2700)
Example failure:
https://github.com/lancedb/lancedb/actions/runs/18237024283/job/51932651993
2025-10-06 09:40:08 -07:00
Will Jones
1357fe8aa1 ci: run remote tests on PRs only if they aren't a fork (#2697) 2025-10-03 17:38:40 -07:00
LuQQiu
0d78929893 feat: upgrade lance to 0.38.0 (#2695)
https://github.com/lancedb/lance/releases/tag/v0.38.0

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2025-10-03 16:47:05 -07:00
Neha Prasad
9e2a68541e fix(node): allow undefined/omitted values for nullable vector fields (#2656)
**Problem**: When a vector field is marked as nullable, users should be
able to omit it or pass `undefined`, but this was throwing an error:
"Table has embeddings: 'vector', but no embedding function was provided"

fixes: #2646

**Solution**: Modified `validateSchemaEmbeddings` to check
`field.nullable` before treating `undefined` values as missing embedding
fields.

**Changes**:
- Fixed validation logic in `nodejs/lancedb/arrow.ts`
- Enabled previously skipped test for nullable fields
- Added reproduction test case

**Behavior**:
- `{ vector: undefined }` now works for nullable fields
- `{}` (omitted field) now works for nullable fields
- `{ vector: null }` still works (unchanged)
- Non-nullable fields still properly throw errors (unchanged)

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
Co-authored-by: neha <neha@posthog.com>
2025-10-02 10:53:05 -07:00
Will Jones
1aa0fd16e7 ci: automatic issue creation for failed publish workflows (#2694)
## Summary
- Created custom GitHub Action that creates issues when workflow jobs
fail
- Added report-failure jobs to cargo-publish.yml, java-publish.yml,
npm-publish.yml, and pypi-publish.yml
- Issues are created automatically with workflow name, failed job names,
and run URL

## Test plan
- Workflows will only create issues on actual release or
workflow_dispatch events
- Can be tested by triggering workflow_dispatch on a publish workflow

Based on lancedb/lance#4873

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-02 08:24:16 -07:00
Lance Release
fec2a05629 Bump version: 0.22.2-beta.0 → 0.22.2-beta.1 2025-09-30 19:31:44 +00:00
Lance Release
79a1cd60ee Bump version: 0.25.2-beta.0 → 0.25.2-beta.1 2025-09-30 19:30:39 +00:00
Colin Patrick McCabe
88807a59a4 fix: have CI download from ci-support-binaries (#2692)
Have CI download from ci-support-binaries to fix the build.
2025-09-30 11:54:43 -07:00
Jack Ye
e0e7e01ea8 fix: inflated release size due to lance-namespace transitive dependency (#2691)
Fixed the issue on lance-namespace side to avoid pinning to a specific
lance version. This should fix the issue of the increased release
artifact size and build time.
2025-09-30 11:18:32 -07:00
Ayush Chaurasia
a416ebc11d fix: use correct nodejs path for ci (#2689) 2025-09-30 14:18:42 +05:30
Ayush Chaurasia
f941054baf docs: fix doc deployment and remove recipes workflow trigger (#2688) 2025-09-30 13:10:39 +05:30
Ayush Chaurasia
1a81c46505 docs: transition to new docs (#2681) 2025-09-29 11:37:08 +05:30
Colin Patrick McCabe
82b25a71e9 feat: add support for test_remote_connections (#2666)
Add a new test feature which allows for running the lancedb tests
against a remote server. Convert over a few tests in src/connection.rs
as a proof of concept.

To make local development easier, the remote tests can be run locally
from a Makefile. This file can also be used to run the feature tests,
with a single invocation of 'make'. (The feature tests require bringing
up a docker compose environment.)
2025-09-26 11:24:43 -07:00
Jack Ye
13c613d45f chore: upgrade lance to v0.37.1-beta.1 (#2682) 2025-09-25 23:12:09 -07:00
Weston Pace
e07389a36c feat: allow bitmap indexes on large-string, binary, large-binary, and bitmap (#2678)
The underlying `pylance` already supported this, it was just blocked out
by an over-eager validation function

Closes #1981
2025-09-25 09:46:42 -07:00
Lance Release
e7e9e80b1d Bump version: 0.22.1 → 0.22.2-beta.0 2025-09-24 22:54:54 +00:00
Lance Release
247fb58400 Bump version: 0.25.1 → 0.25.2-beta.0 2025-09-24 22:54:09 +00:00
Jack Ye
504bdc471c feat(rust): support namespace backed database (#2664)
This PR adds support for namespace-backed databases through
lance-namespace integration, enabling centralized table management
through namespace APIs.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-24 15:33:31 -07:00
Will Jones
d617cdef4a feat: add use_index parameter to merge insert operations (#2674)
## Summary

Exposes `use_index` Merge Insert parameter, which was created upstream
in https://github.com/lancedb/lance/pull/4688.

## API Examples

### Python
```python
# Force table scan
table.merge_insert(["id"]) \
    .when_not_matched_insert_all() \
    .use_index(False) \
    .execute(data)
```

### Node.js/TypeScript
```typescript
// Force table scan  
await table.mergeInsert("id")
    .whenNotMatchedInsertAll()
    .useIndex(false)
    .execute(data);
```

### Rust
```rust
// Force table scan
let mut builder = table.merge_insert(&["id"]);
builder.when_not_matched_insert_all()
       .use_index(false);
builder.execute(data).await?;
```

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-24 12:50:21 -07:00
Will Jones
356d7046fd ci: fix test failure on main (#2677)
Test was in wrong position.
2025-09-24 09:46:04 -07:00
Will Jones
48e5caabda ci(nodejs): lint for unused imports (#2673) 2025-09-23 18:49:42 -07:00
Lance Release
d6cc68f671 Bump version: 0.22.1-beta.4 → 0.22.1 2025-09-23 22:07:31 +00:00
Lance Release
55eacfa685 Bump version: 0.22.1-beta.3 → 0.22.1-beta.4 2025-09-23 22:06:45 +00:00
Lance Release
222e3264ab Bump version: 0.25.1-beta.4 → 0.25.1 2025-09-23 22:06:08 +00:00
Lance Release
13505026cb Bump version: 0.25.1-beta.3 → 0.25.1-beta.4 2025-09-23 22:06:08 +00:00
Neha Prasad
b0800b4b71 fix: undefined values should become null in nullable fields (#2658)
### Bug Fix: Undefined Values in Nullable Fields

**Issue**: When inserting data with `undefined` values into nullable
fields, LanceDB was incorrectly coercing them to default values (`false`
for booleans, `NaN` for numbers, `""` for strings) instead of `null`.

**Fix**: Modified the `makeVector()` function in `arrow.ts` to properly
convert `undefined` values to `null` for nullable fields before passing
data to Apache Arrow.

fixes: #2645

**Result**: Now `{ text: undefined, number: undefined, bool: undefined
}` correctly becomes `{ text: null, number: null, bool: null }` when
fields are marked as nullable in the schema.

**Files Changed**: 
- `nodejs/lancedb/arrow.ts` (core fix)
- `nodejs/__test__/arrow.test.ts` (test coverage)

- This ensures proper null handling for nullable fields as expected by
users.

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2025-09-23 14:29:52 -07:00
Neha Prasad
1befebf614 fix(node): handle null values in nullable boolean fields (#2657)
### Solution
Added special handling in `makeVector` function for boolean arrays where
all values are null. The fix creates a proper null bitmap using
`makeData` and `arrowMakeVector` instead of relying on Apache Arrow's
`vectorFromArray` which doesn't handle this edge case correctly.

fixes: #2644

### Changes
- Added null value detection for boolean types in `makeVector` function
- Creates proper Arrow data structure with null bitmap when all boolean
values are null
- Preserves existing behavior for non-null boolean values and other data
types

- Fixes the boolean null value bug while maintaining backward
compatibility.

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2025-09-23 14:07:00 -07:00
Will Jones
1ab60fae7f feat: upgrade Lance to v0.37.0 (#2672)
Change logs:

* https://github.com/lancedb/lance/releases/tag/v0.37.0
* https://github.com/lancedb/lance/releases/tag/v0.36.0
2025-09-23 13:41:47 -07:00
Ayush Chaurasia
e921c90c1b feat: support mean reciprocal rank reranker (#2671)
The basic idea of MRR is this -
https://www.evidentlyai.com/ranking-metrics/mean-reciprocal-rank-mrr
I've implemented a weighted version that allows the user to set the
weighting between vector and fts.

The gist is something like this:

### Scenario A: Document at rank 1 in one set, absent from another

```
# Assuming equal weights: weight_vector = 0.5, weight_fts = 0.5
vector_rr = 1.0  # rank 1 → 1/1 = 1.0
fts_rr = 0.0     # absent → 0.0

weighted_mrr = 0.5 × 1.0 + 0.5 × 0.0 = 0.5
```
### Scenario B: Document at rank 1 in one set, rank 2 in another
```
# Same weights: weight_vector = 0.5, weight_fts = 0.5
vector_rr = 1.0  # rank 1 → 1/1 = 1.0
fts_rr = 0.5     # rank 2 → 1/2 = 0.5

weighted_mrr = 0.5 × 1.0 + 0.5 × 0.5 = 0.5 + 0.25 = 0.75
```
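
A minimal sketch of the weighted combination described above (not from the PR; names are illustrative):

```python
def weighted_mrr(weights, ranks):
    """Combine per-source reciprocal ranks for one document.

    weights: e.g. {"vector": 0.5, "fts": 0.5}
    ranks:   e.g. {"vector": 1, "fts": None}  # None = absent from that result set
    """
    score = 0.0
    for source, weight in weights.items():
        rank = ranks.get(source)
        rr = 1.0 / rank if rank else 0.0  # reciprocal rank, 0 when absent
        score += weight * rr
    return score

# Scenario A: rank 1 in vector results, absent from FTS -> 0.5
assert weighted_mrr({"vector": 0.5, "fts": 0.5}, {"vector": 1, "fts": None}) == 0.5
# Scenario B: rank 1 in vector results, rank 2 in FTS -> 0.75
assert weighted_mrr({"vector": 0.5, "fts": 0.5}, {"vector": 1, "fts": 2}) == 0.75
```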

And so with `return_score="all"` the result looks something like this
(this is from the reranker tests). Because this is a weighted,
rank-based reranker, some results might have the same score:
```
                                                 text                                             vector     _distance      _rowid     _score  _relevance_score
0                                    I am your father  [-0.010703234, 0.069315575, 0.030076642, 0.002...  8.149148e-13  8589934598  10.978719          1.000000
1                          the ground beneath my feet  [-0.09500901, 0.00092102867, 0.0755851, 0.0372...  1.376896e+00  8589934604        NaN          0.250000
2                I find your lack of faith disturbing  [0.07525753, -0.0100010475, 0.09990541, 0.0209...           NaN  8589934595   3.483394          0.250000
3                               but I don't wanna die  [0.033476487, -0.011235877, -0.057625435, -0.0...  1.538222e+00  8589934610   1.130355          0.238095
4   if you strike me down I shall become more powe...  [0.00432201, 0.030120496, 5.3317923e-05, 0.033...  1.381086e+00  8589934594   0.715157          0.216667
5           I see a salty message written in the eves  [-0.04213107, 0.0016004723, 0.061052393, -0.02...  1.638301e+00  8589934603   1.043785          0.133333
6                              but his son was mortal  [0.012462767, 0.049041674, -0.057339743, -0.04...  1.421566e+00  8589934620        NaN          0.125000
7                   I've got a bad feeling about this  [-0.06973199, -0.029960092, 0.02641632, -0.031...           NaN  8589934596   1.043785          0.125000
8    now that's a name I haven't heard in a long time  [-0.014374257, -0.013588792, -0.07487557, 0.03...  1.597573e+00  8589934593   0.848772          0.118056
9                                        he was a god  [-0.0258895, 0.11925236, -0.029397793, 0.05888...  1.423147e+00  8589934618        NaN          0.100000
10                 I wish they would make another one  [-0.14737535, -0.015304729, 0.04318139, -0.061...           NaN  8589934622   1.043785          0.100000
11                                   Kratos had a son  [-0.057455737, 0.13734367, -0.03537109, -0.000...  1.488075e+00  8589934617        NaN          0.083333
12                       I don't wanna live like this  [-0.0028891307, 0.015214227, 0.025183653, 0.08...           NaN  8589934609   1.043785          0.071429
13             I see a mansard roof through the trees  [0.052383978, 0.087759204, 0.014739997, 0.0239...           NaN  8589934602   1.043785          0.062500
14                          great kid don't get cocky  [-0.047043696, 0.054648954, -0.008509666, -0.0...  1.618125e+00  8589934592        NaN          0.055556
```
2025-09-23 18:25:18 +05:30
Lance Release
05a4ea646a Bump version: 0.22.1-beta.2 → 0.22.1-beta.3 2025-09-22 04:49:00 +00:00
Lance Release
ebbeeff4e0 Bump version: 0.25.1-beta.2 → 0.25.1-beta.3 2025-09-22 04:47:42 +00:00
Jack Ye
407ca53f92 chore: increase pypi publish timeout and use warp runner for arm64 (#2670)
Fix failures like:
https://github.com/lancedb/lancedb/actions/runs/17840462235/job/50748940233

ARM64 build cannot succeed within 1 hour, x86-64 build sometimes cannot
succeed within 1 hour.
2025-09-21 21:42:44 -07:00
Jack Ye
ff71d7e552 feat: support shallow clone (#2653)
Support shallow cloning a dataset at a specific location to create a new
dataset, using the shallow_clone feature in Lance. Also introduces a remote
`clone` API for remote tables.
2025-09-21 21:28:40 -07:00
Neha Prasad
2261eb95a0 fix(node): handle undefined vector fields with embedding functions (#2655)
- Fixes issue where passing `{ vector: undefined }` with an embedding
function threw "Found field not in schema" error instead of calling the
embedding function like `null` or omitted fields.

**Changes:**
- Modified `rowPathsAndValues` to skip undefined values during schema
inference
- Added test case verifying undefined, null, and omitted vector fields
all work correctly

**Before:** `{ vector: undefined }` → Error
**After:** `{ vector: undefined }` → Calls embedding function

Closes #2647
2025-09-19 09:17:28 -07:00
Jack Ye
5b397e410b chore: fix out of date tests with new namespace validation (#2663)
Failure:
https://github.com/lancedb/lancedb/actions/runs/17820044478/job/50660516344
2025-09-18 13:29:47 -07:00
Lance Release
b5a39bffec Bump version: 0.22.1-beta.1 → 0.22.1-beta.2 2025-09-18 20:22:35 +00:00
Lance Release
5e1e9add07 Bump version: 0.25.1-beta.1 → 0.25.1-beta.2 2025-09-18 20:21:33 +00:00
Jack Ye
97e9938dfe fix: add missing validations to namespace operations (#2659) 2025-09-17 23:27:04 -07:00
Weston Pace
1d4b92e01e refactor: remove catalog implementation now that we have namespaces in database (#2662)
We had previously prototyped a `Catalog` trait anticipating a
three-tiered Catalog-Database-Table structure. Now that we have
namespaces in the `Database` we can support any tiering scheme and the
`Catalog` trait is no longer needed.
2025-09-17 08:40:20 -07:00
Le Duc Manh
4c9fc3044b fix: use create to resolve variables (#2640)
# What
- Use `create` to resolve variables values

# Reference
Fixes #2181

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2025-09-12 13:07:32 -07:00
Jack Ye
0ebc8d45a8 chore: fix no lock build warnings and CI timeouts (#2650)
Example CI failures:
- publish build timeout:
https://github.com/lancedb/lancedb/actions/runs/17626482881/job/50084552906
- doc test build timeout:
https://github.com/lancedb/lancedb/actions/runs/17627058590/job/50086456818
2025-09-11 15:30:35 -07:00
BubbleCal
f7d78c3420 feat: add 'target_partition_size' param (#2642)
This exposes the `target_partition_size` param from lance.

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-09-11 22:56:16 +08:00
Lance Release
6ea6884260 Bump version: 0.22.1-beta.0 → 0.22.1-beta.1 2025-09-10 20:49:43 +00:00
Lance Release
b1d791a299 Bump version: 0.25.1-beta.0 → 0.25.1-beta.1 2025-09-10 20:48:56 +00:00
Jack Ye
8da74dcb37 feat: support per-request header override (#2631)
## Summary

This PR introduces a `HeaderProvider`, which is called on every remote
HTTP request to get the latest headers to inject. This is useful for
features like refreshing auth tokens: the header provider can
auto-refresh tokens internally so that each request always sends the
refreshed token.
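
For intuition, a conceptual Python sketch of the refresh-per-request idea (illustration only; this class is not a lancedb API, and the PR's `HeaderProvider` lives in the Rust client):

```python
import time


class TokenHeaderProvider:
    """Illustrative: returns fresh headers for every outgoing request."""

    def __init__(self, fetch_token, ttl_seconds=300):
        self._fetch_token = fetch_token  # callable that obtains a new token
        self._ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0

    def headers(self):
        # Called once per request: refresh lazily so the token is never stale.
        if time.monotonic() >= self._expires_at:
            self._token = self._fetch_token()
            self._expires_at = time.monotonic() + self._ttl
        return {"Authorization": f"Bearer {self._token}"}
```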

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-10 13:44:00 -07:00
Lance Release
3c7419b392 Bump version: 0.22.0 → 0.22.1-beta.0 2025-09-10 14:24:58 +00:00
Lance Release
e612686fdb Bump version: 0.25.0 → 0.25.1-beta.0 2025-09-10 14:24:07 +00:00
Wyatt Alt
e77d57a5b6 chore: update lance to 0.35.0-beta4 (#2639)
Updates lance to 0.35.0-beta4, which also incurs a datafusion update.
This brings in a fix for a memory leak in index caching, resulting from
a cyclical reference.
2025-09-10 06:19:35 -07:00
Jack Ye
9391ad1450 feat: support mTLS for remote database (#2638)
This PR adds mTLS (mutual TLS) configuration support for the LanceDB
remote HTTP client, allowing users to authenticate with client
certificates and configure custom CA certificates for server
verification.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-09 21:04:46 -07:00
LuQQiu
79960b254e fix: add partition statistics to MetadataEraser (#2637)
Some of the DataFusion optimizers optimize based on data statistics
(e.g. total bytes, number of rows). If those statistics are not
supplied, the optimizers cannot build on them. One example is the anti
hash join, which can be converted from LeftAnti (left: big table,
right: small table) to RightAnti (left: small table, right: big table).
LeftAnti requires reading both the big and the small table in full,
while RightAnti only requires reading the whole left (small) table and
supports limit push down, so only part of the big table is read.
2025-09-09 09:13:22 -07:00
Xuanwo
d19c64e29b chore: bump version for JSON support (#2633)
Bump version of lance to latest beta for JSON support.

Signed-off-by: Xuanwo <github@xuanwo.io>
2025-09-05 12:26:28 -07:00
Lance Release
06d5612443 Bump version: 0.22.0-beta.2 → 0.22.0 2025-09-04 08:33:40 +00:00
Lance Release
45f96f4151 Bump version: 0.22.0-beta.1 → 0.22.0-beta.2 2025-09-04 08:33:09 +00:00
Lance Release
f744b785f8 Bump version: 0.25.0-beta.2 → 0.25.0 2025-09-04 08:32:44 +00:00
Lance Release
2e3f745820 Bump version: 0.25.0-beta.1 → 0.25.0-beta.2 2025-09-04 08:32:43 +00:00
Jack Ye
683aaed716 chore: upgrade lance to 0.35.0 (#2625) 2025-09-04 01:31:13 -07:00
Lance Release
48f7b20daa Bump version: 0.22.0-beta.0 → 0.22.0-beta.1 2025-09-03 17:51:36 +00:00
Lance Release
4dd399ca29 Bump version: 0.25.0-beta.0 → 0.25.0-beta.1 2025-09-03 17:50:41 +00:00
Jack Ye
e6f1da31dc chore: upgrade lance to 0.34.0-beta.4 (#2621) 2025-09-02 21:33:55 -07:00
Wyatt Alt
a9ea785b15 fix: remote python sdk namespace typing (#2620)
This changes the default values for some namespace parameters in the
remote python SDK from None to [], to match the underlying code it
calls.

Prior to this commit, failing to supply "namespace" with the remote SDK
would cause an error because the underlying code it dispatches to does
not consider None to be valid input.
2025-09-02 16:32:32 -07:00
Colin Patrick McCabe
cc38453391 fix!: fix doctest in query.py (#2622)
Fix doctest in query.py to include cumulative_cpu, now that lance
includes that.
2025-09-02 15:47:32 -07:00
Lance Release
47747287b6 Bump version: 0.21.4-beta.1 → 0.22.0-beta.0 2025-08-29 21:20:57 +00:00
Lance Release
0847e666a0 Bump version: 0.24.4-beta.1 → 0.25.0-beta.0 2025-08-29 21:19:51 +00:00
Wyatt Alt
981f8427e6 chore: update lance (#2610)
Adds storage_options to object_store wrap() to adhere to upstream lance
change.
2025-08-29 13:41:02 -07:00
Will Jones
f6846004ca feat: add name parameter to remaining Python create index calls (#2617)
## Summary
This PR adds the missing `name` parameter to `create_scalar_index` and
`create_fts_index` methods in the Python SDK, which was inadvertently
omitted when it was added to `create_index` in PR #2586.

## Changes
- Add `name: Optional[str] = None` parameter to abstract
`Table.create_scalar_index` and `Table.create_fts_index` methods
- Update `LanceTable` implementation to accept and pass the `name`
parameter to the underlying Rust layer
- Update `RemoteTable` implementation to accept and pass the `name`
parameter
- Enhanced tests to verify custom index names work correctly for both
scalar and FTS indices
- When `name` is not provided, default names are generated (e.g.,
`{column}_idx`)

## Test plan
- [x] Added test cases for custom names in scalar index creation
- [x] Added test cases for custom names in FTS index creation  
- [x] Verified existing tests continue to pass
- [x] Code formatting and linting checks pass

This ensures API consistency across all index creation methods in the
LanceDB Python SDK.
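
A hedged sketch of the resulting calls (assuming the signatures match the description above; the default-name behavior is quoted from the changes list):

```python
# Assumed usage based on this PR's description; exact signatures may differ.
table.create_scalar_index("user_id", name="user_id_btree")
table.create_fts_index("text", name="text_fts")
table.create_scalar_index("category")  # no name given: defaults to "category_idx"
```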

Fixes #2616

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-27 14:02:48 -07:00
Jack Ye
faf8973624 feat!: support multi-level namespace (#2603)
This PR adds support for multi-level namespaces in a LanceDB database,
according to the Lance Namespace spec.

This allows users to create namespaces inside a database connection and
perform create, drop, list, and list_tables in a namespace (other
operations like update and describe will be in a follow-up PR).

The 3 types of database connections behave as follows:
1. Local database connections will continue to have just a flat list of
tables for backwards compatibility.
2. Remote database connections will make REST API calls according to the
APIs in the Lance Namespace spec.
3. Lance Namespace connections will invoke the corresponding operations
against the specific namespace implementation, which could have different
behaviors for these APIs.

All the table APIs now take an identifier instead of a name, for example
`/v1/table/{name}/create` is now `/v1/table/{id}/create`. If a table is
directly in the root namespace, the API call is identical. If the table
is in a namespace, then the full table ID should be used, with `$` as
the default delimiter (`.` is a special character and creates issues
with URL parsing, so `$` is used), for example
`/v1/table/ns1$table1/create`. If a different delimiter needs to be
passed in, the user can configure the `id_delimiter` in the client
config and it becomes a query parameter, for example
`/v1/table/ns1__table1/create?delimiter=__`
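
For illustration, composing a table identifier as described (a hypothetical helper, not a lancedb API):

```python
def table_path(namespace: list[str], table: str, delimiter: str = "$") -> str:
    # Default delimiter is "$"; a custom one is passed as a query parameter.
    table_id = delimiter.join([*namespace, table])
    path = f"/v1/table/{table_id}/create"
    if delimiter != "$":
        path += f"?delimiter={delimiter}"
    return path

assert table_path([], "t1") == "/v1/table/t1/create"
assert table_path(["ns1"], "table1") == "/v1/table/ns1$table1/create"
assert table_path(["ns1"], "table1", "__") == "/v1/table/ns1__table1/create?delimiter=__"
```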

The Python and Typescript APIs are kept backwards compatible, but the
following Rust APIs are not:
1. `Connection::drop_table(&self, name: impl AsRef<str>) -> Result<()>`
is now `Connection::drop_table(&self, name: impl AsRef<str>, namespace:
&[String]) -> Result<()>`
2. `Connection::drop_all_tables(&self) -> Result<()>` is now
`Connection::drop_all_tables(&self, name: impl AsRef<str>) ->
Result<()>`
2025-08-27 12:07:55 -07:00
Weston Pace
fabe37274f feat: add __getitems__ method impl for torch integration (#2596)
This allows a lancedb Table to act as a torch dataset.
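
A hedged sketch of what this enables (assumes the table also exposes `__len__`; the DataLoader settings are illustrative):

```python
from torch.utils.data import DataLoader

# `__getitems__` lets the DataLoader fetch a whole batch of rows in one
# call instead of one row at a time.
loader = DataLoader(
    table,                          # lancedb Table acting as a map-style dataset
    batch_size=64,
    shuffle=True,
    collate_fn=lambda rows: rows,   # keep rows as returned by the table
)
for batch in loader:
    ...  # train on the batch
```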
2025-08-25 13:23:22 -07:00
Lance Release
6839ac3509 Bump version: 0.21.4-beta.0 → 0.21.4-beta.1 2025-08-22 03:55:22 +00:00
Lance Release
b88422e515 Bump version: 0.24.4-beta.0 → 0.24.4-beta.1 2025-08-22 03:54:34 +00:00
BubbleCal
8d60685ede chore: upgrade lance to 0.33.0-beta.4 (#2604)
details:
https://github.com/lancedb/lance/releases/tag/untagged-5191abd48c1fbe76f746

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-08-21 21:18:48 +08:00
Jack Ye
04285a4a4e feat(python): integrate with lance namespace (#2599)
This PR integrates `lancedb` with `lance-namespace` so that users can
use the LanceDB client to access Lance tables in any catalog service. In
general, we expect most of the logic to be delegated to the existing
`LanceDBConnection` and `LanceTable`, but the namespace implementation
will control how a table is created and dropped, and describe where the
table is stored along with any related storage options like access
credentials.

The implementation currently only supports a 1-level namespace that
directly contains tables. We will introduce nested namespace support in
a separate PR.

Users are expected to use it in the following way:

```python
>>> import lancedb
>>> import pyarrow as pa
>>> # Connect using GlueNamespace
>>> db = lancedb.connect_namespace("glue", {"catalog_id": "123456789012"})
>>> # Create a table with schema
>>> schema = pa.schema([
...     pa.field("id", pa.int64()),
...     pa.field("vector", pa.list_(pa.float32(), 2))
... ])
>>> table = db.create_table("my_table", schema=schema)
>>> # List tables
>>> db.table_names()
['my_table']
```
2025-08-20 15:46:16 -07:00
Lance Release
d4a41b5663 Bump version: 0.21.3 → 0.21.4-beta.0 2025-08-19 22:56:52 +00:00
Lance Release
adc3daa462 Bump version: 0.24.3 → 0.24.4-beta.0 2025-08-19 22:56:05 +00:00
Will Jones
acbfa6c012 feat: upgrade lance to 0.33.0-beta.3 (#2598)
Change logs:
*
[v0.33.0-beta.3](https://github.com/lancedb/lance/releases/tag/v0.33.0-beta.3)
*
[v0.33.0-beta.2](https://github.com/lancedb/lance/releases/tag/v0.33.0-beta.2)
*
[v0.33.0-beta.1](https://github.com/lancedb/lance/releases/tag/v0.33.0-beta.1)

Important changes:

* Row-level conflict resolution for delete operations
* Fixes #2593
* Fix for keeping tombstone fields around, preventing cleanup of
dropped columns.
2025-08-19 13:45:15 -07:00
Vitali Lovich
d602e9f98c fix: make cloud features optional (#2567) (#2568)
This shrinks the size of a local embedded build that can disable all the
default features. When combined with
https://github.com/lancedb/lance/pull/4362 and the dependencies are
updated to point to the fix, this resolves #2567 fully.

Verified by patching the workspace to redirect to my clone of lance with
the PR applied.
```
cargo tree -p lancedb -e no-build -e no-dev --no-default-features -i aws-config | less
```

The reason that lance itself needs to change too is that many
dependencies within that project depend on lance-io/default, and lancedb
depends on them, which transitively ends up enabling the cloud features
regardless. The PR in lance removes the dependency on lance-io/default
from all sibling crates.

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2025-08-15 16:46:52 -07:00
Will Jones
ad09234d59 feat: allow setting train=False and name on indices (#2586)
Enables two new parameters when building indices:

* `name`: Allows explicitly setting a name on the index. Default is
`{col_name}_idx`.
* `train` (default `True`): When set to `False`, an empty index will be
immediately created.
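
A hedged usage sketch (parameter names are assumed to mirror the description above; the exact Python signature may differ):

```python
# Assumed Python usage based on this PR's description, not a verified signature.
table.create_index(
    vector_column_name="vector",   # assumed parameter name
    name="vector_idx_custom",      # explicit name instead of the default "vector_idx"
    train=False,                   # create an empty index immediately, train later
)
```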

The upgrade of Lance means there are also additional behaviors from
cd76a993b8:

* When a scalar index is created on a Table, it will be kept around even
if all rows are deleted or updated.
* Scalar indices can be created on empty tables. They will default to
`train=False` if the table is empty.

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2025-08-15 14:00:26 -07:00
Lance Release
0c34ffb252 Bump version: 0.21.3-beta.0 → 0.21.3 2025-08-15 18:03:26 +00:00
Lance Release
d9f333d828 Bump version: 0.21.2 → 0.21.3-beta.0 2025-08-15 18:02:43 +00:00
Lance Release
bb809abd4b Bump version: 0.24.3-beta.0 → 0.24.3 2025-08-15 18:02:04 +00:00
Lance Release
c87530f7a3 Bump version: 0.24.2 → 0.24.3-beta.0 2025-08-15 18:02:04 +00:00
Will Jones
1eb1beecd6 ci: remove more mentions of node (#2595)
I promise this time I tested it locally :)
2025-08-15 11:01:02 -07:00
Yuval Lifshitz
ce550e6c45 feat: add missing rust examples (#2583)
All 3 examples now run with:
```
cargo run --example simple
cargo run --example full_text_search
cargo run --example ivf_pq
```

Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
Co-authored-by: Weston Pace <weston.pace@gmail.com>
2025-08-15 10:38:58 -07:00
Will Jones
d3bae1f3a3 ci: drop old node mention (#2594)
This broke release here:
https://github.com/lancedb/lancedb/actions/runs/16993824504/job/48179542912
2025-08-15 09:51:19 -07:00
Will Jones
dcf53c4506 fix: limit and offset support paginating through FTS and vector search results (#2592)
Adds tests to ensure that users can paginate through simple scan, FTS,
and vector search results using `limit` and `offset`.

Tests upstream work: https://github.com/lancedb/lance/pull/4318

Closes #2459
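
A minimal sketch of the pagination pattern these tests cover (table and query values are illustrative):

```python
page_size = 10
for page in range(3):
    rows = (
        table.search([0.1, 0.2, 0.3])    # vector search; FTS and plain scans work too
        .limit(page_size)                # page size
        .offset(page * page_size)        # skip earlier pages
        .to_list()
    )
```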
2025-08-15 08:55:12 -07:00
Ryan Green
941eada703 docs: update indexing and compaction docs (#2362)

## Summary by CodeRabbit

- **Documentation**
- Clarified and expanded explanations of data management concepts in
LanceDB.
- Added notes on automatic background fragment compaction and
incremental reindexing support in LanceDB Cloud/Enterprise.
- Updated details on disabling interim exhaustive kNN search during
background reindexing.
  - Improved formatting and removed outdated FTS reindexing subsection.


---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2025-08-15 12:41:47 -02:30
Weston Pace
ed640a76d9 feat: add take_offsets and take_row_ids (#2584)
These operations have existed in lance for a long while and many users
need to drop down to lance for this capability. This PR adds the API and
implements it using filters (e.g. `_rowid IN (...)`) so that in doesn't
currently add any load to `BaseTable`. I'm not sure that is sustainable
as base table implementations may want to specialize how they handle
this method. However, I figure it is a good starting point.

In addition, unlike Lance, this API does not currently guarantee
anything about the order of the take results. This is necessary for the
fallback filter approach to work (SQL filters cannot guarantee result
order)
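
A hedged sketch of the added calls (method names from the PR title; the binding and exact signatures are assumptions):

```python
# Take rows by position in the table, and by stable row id.
by_offset = table.take_offsets([0, 42, 100])
by_row_id = table.take_row_ids([8589934592, 8589934593])
# Note: unlike Lance, result order is not guaranteed (filter-based fallback).
```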
2025-08-15 06:48:24 -07:00
Will Jones
296205ef96 feat: upgrade lance to v0.33.0 (#2591)
https://github.com/lancedb/lance/releases/tag/v0.33.0
2025-08-14 12:11:19 -07:00
Weston Pace
16beaaa656 ci: fix broken CI checks (#2585) 2025-08-13 10:05:57 -07:00
Tomoko Uchida
4ff87b1f4a feat: add hybrid search example in Rust (#2579)
Hello!

I'm new to lancedb and interested in the Rust SDK.
I couldn't find a good hybrid search example in Rust, so I created one.

## Usage

```bash
$ cargo run --quiet --example hybrid_search --features=sentence-transformers
Result: Python is a popular programming language.
Result: Mount Everest is the highest mountain in the world.
Result: The first computer programmer was Ada Lovelace.
Result: Coffee is one of the most popular beverages in the world.
Result: Basketball is a sport played with a ball and a hoop.
```
2025-08-12 08:22:19 -07:00
Shawn
0532ef2358 chore(deps): update crunchy to 0.2.4 (#2581)
Hi,

I'm trying to build goose (which relies on lancedb) for android/termux
and found that some dependencies need updating.

https://github.com/block/goose/pull/3890

0.2.4 update:
- nmathewson: Fix cross-compilation between windows and non-windows.

https://github.com/shawn111/lancedb/actions/runs/16871317860
Windows and Linux builds passed.

https://github.com/shawn111/lancedb/actions/runs/16871859398

Signed-off-by: Shawn Wang <shawn111@gmail.com>
2025-08-11 18:00:00 -07:00
BubbleCal
dcf7334c1f chore: upgrade lance to v0.32.2-beta.1 (#2580)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2025-08-08 17:00:54 +08:00
206 changed files with 24985 additions and 3362 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.21.2"
current_version = "0.22.3-beta.5"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

View File

@@ -0,0 +1,45 @@
name: Create Failure Issue
description: Creates a GitHub issue if any jobs in the workflow failed
inputs:
  job-results:
    description: 'JSON string of job results from needs context'
    required: true
  workflow-name:
    description: 'Name of the workflow'
    required: true
runs:
  using: composite
  steps:
    - name: Check for failures and create issue
      shell: bash
      env:
        JOB_RESULTS: ${{ inputs.job-results }}
        WORKFLOW_NAME: ${{ inputs.workflow-name }}
        RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
        GH_TOKEN: ${{ github.token }}
      run: |
        # Check if any job failed
        if echo "$JOB_RESULTS" | jq -e 'to_entries | any(.value.result == "failure")' > /dev/null; then
          echo "Detected job failures, creating issue..."
          # Extract failed job names
          FAILED_JOBS=$(echo "$JOB_RESULTS" | jq -r 'to_entries | map(select(.value.result == "failure")) | map(.key) | join(", ")')
          # Create issue with workflow name, failed jobs, and run URL
          gh issue create \
            --title "$WORKFLOW_NAME Failed ($FAILED_JOBS)" \
            --body "The workflow **$WORKFLOW_NAME** failed during execution.
        **Failed jobs:** $FAILED_JOBS
        **Run URL:** $RUN_URL
        Please investigate the failed jobs and address any issues." \
            --label "ci"
          echo "Issue created successfully"
        else
          echo "No job failures detected, skipping issue creation"
        fi

View File

@@ -38,3 +38,17 @@ jobs:
- name: Publish the package
run: |
cargo publish -p lancedb --all-features --token ${{ steps.auth.outputs.token }}
report-failure:
name: Report Workflow Failure
runs-on: ubuntu-latest
needs: [build]
if: always() && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
permissions:
contents: read
issues: write
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/create-failure-issue
with:
job-results: ${{ toJSON(needs) }}
workflow-name: ${{ github.workflow }}

View File

@@ -0,0 +1,107 @@
name: Codex Update Lance Dependency
on:
  workflow_call:
    inputs:
      tag:
        description: "Tag name from Lance"
        required: true
        type: string
  workflow_dispatch:
    inputs:
      tag:
        description: "Tag name from Lance"
        required: true
        type: string
permissions:
  contents: write
  pull-requests: write
  actions: read
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - name: Show inputs
        run: |
          echo "tag = ${{ inputs.tag }}"
      - name: Checkout Repo LanceDB
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          persist-credentials: true
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install Codex CLI
        run: npm install -g @openai/codex
      - name: Install Rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          toolchain: stable
          components: clippy, rustfmt
      - name: Install system dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y protobuf-compiler libssl-dev
      - name: Install cargo-info
        run: cargo install cargo-info
      - name: Install Python dependencies
        run: python3 -m pip install --upgrade pip packaging
      - name: Configure git user
        run: |
          git config user.name "lancedb automation"
          git config user.email "robot@lancedb.com"
      - name: Configure Codex authentication
        env:
          CODEX_TOKEN_B64: ${{ secrets.CODEX_TOKEN }}
        run: |
          if [ -z "${CODEX_TOKEN_B64}" ]; then
            echo "Repository secret CODEX_TOKEN is not defined; skipping Codex execution."
            exit 1
          fi
          mkdir -p ~/.codex
          echo "${CODEX_TOKEN_B64}" | base64 --decode > ~/.codex/auth.json
      - name: Run Codex to update Lance dependency
        env:
          TAG: ${{ inputs.tag }}
          GITHUB_TOKEN: ${{ secrets.ROBOT_TOKEN }}
          GH_TOKEN: ${{ secrets.ROBOT_TOKEN }}
        run: |
          set -euo pipefail
          VERSION="${TAG#refs/tags/}"
          VERSION="${VERSION#v}"
          BRANCH_NAME="codex/update-lance-${VERSION//[^a-zA-Z0-9]/-}"
          cat <<EOF >/tmp/codex-prompt.txt
          You are running inside the lancedb repository on a GitHub Actions runner. Update the Lance dependency to version ${VERSION} and prepare a pull request for maintainers to review.
          Follow these steps exactly:
          1. Use script "ci/set_lance_version.py" to update Lance dependencies. The script already refreshes Cargo metadata, so allow it to finish even if it takes time.
          2. Run "cargo clippy --workspace --tests --all-features -- -D warnings". If diagnostics appear, fix them yourself and rerun clippy until it exits cleanly. Do not skip any warnings.
          3. After clippy succeeds, run "cargo fmt --all" to format the workspace.
          4. Ensure the repository is clean except for intentional changes. Inspect "git status --short" and "git diff" to confirm the dependency update and any required fixes.
          5. Create and switch to a new branch named "${BRANCH_NAME}" (replace any duplicated hyphens if necessary).
          6. Stage all relevant files with "git add -A". Commit using the message "chore: update lance dependency to v${VERSION}".
          7. Push the branch to origin. If the branch already exists, force-push your changes.
          8. env "GH_TOKEN" is available, use "gh" tools for github related operations like creating pull request.
          9. Create a pull request targeting "main" with title "chore: update lance dependency to v${VERSION}". In the body, summarize the dependency bump, clippy/fmt verification, and link the triggering tag (${TAG}).
          10. After creating the PR, display the PR URL, "git status --short", and a concise summary of the commands run and their results.
          Constraints:
          - Use bash commands; avoid modifying GitHub workflow files other than through the scripted task above.
          - Do not merge the PR.
          - If any command fails, diagnose and fix the issue instead of aborting.
          EOF
          codex --config shell_environment_policy.ignore_default_excludes=true exec --dangerously-bypass-approvals-and-sandbox "$(cat /tmp/codex-prompt.txt)"

View File

@@ -56,22 +56,12 @@ jobs:
with:
node-version: 20
cache: 'npm'
cache-dependency-path: node/package-lock.json
cache-dependency-path: docs/package-lock.json
- name: Install node dependencies
working-directory: node
working-directory: nodejs
run: |
sudo apt update
sudo apt install -y protobuf-compiler libssl-dev
- name: Build node
working-directory: node
run: |
npm ci
npm run build
npm run tsc
- name: Create markdown files
working-directory: node
run: |
npx typedoc --plugin typedoc-plugin-markdown --out ../docs/src/javascript src/index.ts
- name: Build docs
working-directory: docs
run: |

View File

@@ -24,7 +24,8 @@ env:
jobs:
test-python:
name: Test doc python code
runs-on: ubuntu-24.04
runs-on: warp-ubuntu-2204-x64-8x
timeout-minutes: 60
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -58,51 +59,3 @@ jobs:
run: |
cd docs/test/python
for d in *; do cd "$d"; echo "$d".py; python "$d".py; cd ..; done
test-node:
name: Test doc nodejs code
runs-on: ubuntu-24.04
timeout-minutes: 60
strategy:
fail-fast: false
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
lfs: true
- name: Print CPU capabilities
run: cat /proc/cpuinfo
- name: Set up Node
uses: actions/setup-node@v4
with:
node-version: 20
- name: Install protobuf
run: |
sudo apt update
sudo apt install -y protobuf-compiler
- name: Install dependecies needed for ubuntu
run: |
sudo apt install -y libssl-dev
rustup update && rustup default
- name: Rust cache
uses: swatinem/rust-cache@v2
- name: Install node dependencies
run: |
sudo swapoff -a
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo swapon --show
cd node
npm ci
npm run build-release
cd ../docs
npm install
- name: Test
env:
LANCEDB_URI: ${{ secrets.LANCEDB_URI }}
LANCEDB_DEV_API_KEY: ${{ secrets.LANCEDB_DEV_API_KEY }}
run: |
cd docs
npm t

View File

@@ -43,7 +43,6 @@ jobs:
- uses: Swatinem/rust-cache@v2
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: "1.81.0"
cache-workspaces: "./java/core/lancedb-jni"
# Disable full debug symbol generation to speed up CI build and keep memory down
# "1" means line tables only, which is useful for panic tracebacks.
@@ -112,3 +111,17 @@ jobs:
env:
SONATYPE_USER: ${{ secrets.SONATYPE_USER }}
SONATYPE_TOKEN: ${{ secrets.SONATYPE_TOKEN }}
report-failure:
name: Report Workflow Failure
runs-on: ubuntu-latest
needs: [linux-arm64, linux-x86, macos-arm64]
if: always() && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
permissions:
contents: read
issues: write
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/create-failure-issue
with:
job-results: ${{ toJSON(needs) }}
workflow-name: ${{ github.workflow }}

View File

@@ -6,6 +6,7 @@ on:
- main
pull_request:
paths:
- Cargo.toml
- nodejs/**
- .github/workflows/nodejs.yml
- docker-compose.yml
@@ -79,7 +80,7 @@ jobs:
with:
node-version: ${{ matrix.node-version }}
cache: 'npm'
cache-dependency-path: node/package-lock.json
cache-dependency-path: nodejs/package-lock.json
- uses: Swatinem/rust-cache@v2
- name: Install dependencies
run: |
@@ -116,7 +117,7 @@ jobs:
set -e
npm ci
npm run docs
if ! git diff --exit-code -- . ':(exclude)Cargo.lock'; then
if ! git diff --exit-code -- ../ ':(exclude)Cargo.lock'; then
echo "Docs need to be updated"
echo "Run 'npm run docs', fix any warnings, and commit the changes."
exit 1
@@ -137,7 +138,7 @@ jobs:
with:
node-version: 20
cache: 'npm'
cache-dependency-path: node/package-lock.json
cache-dependency-path: nodejs/package-lock.json
- uses: Swatinem/rust-cache@v2
- name: Install dependencies
run: |

View File

@@ -365,3 +365,17 @@ jobs:
ARGS="$ARGS --tag preview"
fi
npm publish $ARGS
report-failure:
name: Report Workflow Failure
runs-on: ubuntu-latest
needs: [build-lancedb, test-lancedb, publish]
if: always() && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
permissions:
contents: read
issues: write
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/create-failure-issue
with:
job-results: ${{ toJSON(needs) }}
workflow-name: ${{ github.workflow }}

View File

@@ -56,7 +56,7 @@ jobs:
pypi_token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
fury_token: ${{ secrets.FURY_TOKEN }}
mac:
timeout-minutes: 60
timeout-minutes: 90
runs-on: ${{ matrix.config.runner }}
strategy:
matrix:
@@ -64,7 +64,7 @@ jobs:
- target: x86_64-apple-darwin
runner: macos-13
- target: aarch64-apple-darwin
runner: macos-14
runner: warp-macos-14-arm64-6x
env:
MACOSX_DEPLOYMENT_TARGET: 10.15
steps:
@@ -173,3 +173,17 @@ jobs:
generate_release_notes: false
name: Python LanceDB v${{ steps.extract_version.outputs.version }}
body: ${{ steps.python_release_notes.outputs.changelog }}
report-failure:
name: Report Workflow Failure
runs-on: ubuntu-latest
needs: [linux, mac, windows]
permissions:
contents: read
issues: write
if: always() && (github.event_name == 'release' || github.event_name == 'workflow_dispatch')
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/create-failure-issue
with:
job-results: ${{ toJSON(needs) }}
workflow-name: ${{ github.workflow }}

View File

@@ -6,6 +6,7 @@ on:
- main
pull_request:
paths:
- Cargo.toml
- python/**
- .github/workflows/python.yml

View File

@@ -96,6 +96,7 @@ jobs:
# Need up-to-date compilers for kernels
CC: clang-18
CXX: clang++-18
GH_TOKEN: ${{ secrets.SOPHON_READ_TOKEN }}
steps:
- uses: actions/checkout@v4
with:
@@ -117,15 +118,17 @@ jobs:
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
- name: Start S3 integration test environment
working-directory: .
run: docker compose up --detach --wait
- name: Build
run: cargo build --all-features --tests --locked --examples
- name: Run tests
run: cargo test --all-features --locked
- name: Run feature tests
run: make -C ./lancedb feature-tests
- name: Run examples
run: cargo run --example simple --locked
- name: Run remote tests
# Running this requires access to secrets, so skip if this is
# a PR from a fork.
if: github.event_name != 'pull_request' || !github.event.pull_request.head.repo.fork
run: make -C ./lancedb remote-tests
macos:
timeout-minutes: 30

View File

@@ -1,26 +0,0 @@
name: Trigger vectordb-recipers workflow
on:
push:
branches: [ main ]
pull_request:
paths:
- .github/workflows/trigger-vectordb-recipes.yml
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Trigger vectordb-recipes workflow
uses: actions/github-script@v6
with:
github-token: ${{ secrets.VECTORDB_RECIPES_ACTION_TOKEN }}
script: |
const result = await github.rest.actions.createWorkflowDispatch({
owner: 'lancedb',
repo: 'vectordb-recipes',
workflow_id: 'examples-test.yml',
ref: 'main'
});
console.log(result);

.gitignore vendored
View File

@@ -31,9 +31,6 @@ python/dist
*.node
**/node_modules
**/.DS_Store
node/dist
node/examples/**/package-lock.json
node/examples/**/dist
nodejs/lancedb/native*
dist

AGENTS.md Normal file
View File

@@ -0,0 +1,101 @@
LanceDB is a database designed for retrieval, including vector, full-text, and hybrid search.
It is a wrapper around Lance. There are two backends: local (in-process like SQLite) and
remote (against LanceDB Cloud).
The core of LanceDB is written in Rust. There are bindings in Python, TypeScript, and Java.
Project layout:
* `rust/lancedb`: The LanceDB core Rust implementation.
* `python`: The Python bindings, using PyO3.
* `nodejs`: The TypeScript bindings, using napi-rs.
* `java`: The Java bindings
Common commands:
* Check for compiler errors: `cargo check --quiet --features remote --tests --examples`
* Run tests: `cargo test --quiet --features remote --tests`
* Run specific test: `cargo test --quiet --features remote -p <package_name> --test <test_name>`
* Lint: `cargo clippy --quiet --features remote --tests --examples`
* Format: `cargo fmt --all`
Before committing changes, run formatting.
## Coding tips
* When writing Rust doctests for things that require a connection or table reference,
write them as a function instead of a fully executable test. This allows type checking
to run but avoids needing a full test environment. For example:
```rust
/// ```
/// use lance_index::scalar::FullTextSearchQuery;
/// use lancedb::query::{QueryBase, ExecutableQuery};
///
/// # use lancedb::Table;
/// # async fn query(table: &Table) -> Result<(), Box<dyn std::error::Error>> {
/// let results = table.query()
/// .full_text_search(FullTextSearchQuery::new("hello world".into()))
/// .execute()
/// .await?;
/// # Ok(())
/// # }
/// ```
```
## Example plan: adding a new method on Table
Adding a new method involves first adding it to the Rust core, then exposing it
in the Python and TypeScript bindings. There are both local and remote tables.
Remote tables are implemented via a HTTP API and require the `remote` cargo
feature flag to be enabled. Python has both sync and async methods.
Rust core changes:
1. Add method on `Table` struct in `rust/lancedb/src/table.rs` (calls `BaseTable` trait).
2. Add method to `BaseTable` trait in `rust/lancedb/src/table.rs`.
3. Implement new trait method on `NativeTable` in `rust/lancedb/src/table.rs`.
* Test with unit test in `rust/lancedb/src/table.rs`.
4. Implement new trait method on `RemoteTable` in `rust/lancedb/src/remote/table.rs`.
* Test with unit test in `rust/lancedb/src/remote/table.rs` against mocked endpoint.
Python bindings changes:
1. Add PyO3 method binding in `python/src/table.rs`. Run `make develop` to compile bindings.
2. Add types for PyO3 method in `python/python/lancedb/_lancedb.pyi`.
3. Add method to `AsyncTable` class in `python/python/lancedb/table.py`.
4. Add abstract method to `Table` abstract base class in `python/python/lancedb/table.py`.
5. Add concrete sync method to `LanceTable` class in `python/python/lancedb/table.py`.
* Should use `LOOP.run()` to call the corresponding `AsyncTable` method (see the sketch after this list).
6. Add concrete sync method to `RemoteTable` class in `python/python/lancedb/remote/table.py`.
7. Add unit test in `python/tests/test_table.py`.
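For step 5, a minimal sketch of the sync wrapper, assuming a hypothetical `do_thing` method and that the sync table keeps its async counterpart in an attribute like `self._async_table` (the real attribute name in `table.py` may differ):
```python
from lancedb.background_loop import LOOP  # assumption: LOOP lives in this helper module

class LanceTable(Table):
    def do_thing(self, arg: str) -> None:
        """Sync facade: drive the AsyncTable coroutine on the background loop."""
        # `do_thing` and `_async_table` are illustrative names, not real API.
        return LOOP.run(self._async_table.do_thing(arg))
```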
TypeScript bindings changes:
1. Add napi-rs method binding on `Table` in `nodejs/src/table.rs`.
2. Run `npm run build` to generate TypeScript definitions.
3. Add typescript method on abstract class `Table` in `nodejs/src/table.ts`.
4. Add concrete method on `LocalTable` class in `nodejs/src/native_table.ts`.
* Note: despite the name, this class is also used for remote tables.
5. Add test in `nodejs/__test__/table.test.ts`.
6. Run `npm run docs` to generate TypeScript documentation.
## Review Guidelines
Please consider the following when reviewing code contributions.
### Rust API design
* Design public APIs so they can be evolved easily in the future without breaking
changes. Often this means using builder patterns or options structs instead of
long argument lists.
* For public APIs, prefer inputs that use `Into<T>` or `AsRef<T>` traits to allow
more flexible inputs. For example, use `name: Into<String>` instead of `name: String`,
so we don't have to write `func("my_string".to_string())`.
### Testing
* Ensure all new public APIs have documentation and examples.
* Ensure that all bugfixes and features have corresponding tests. **We do not merge
code without tests.**
### Documentation
* New features must include updates to the rust documentation comments. Link to
relevant structs and methods to increase the value of documentation.

View File

@@ -1,24 +0,0 @@
LanceDB is a database designed for retrieval, including vector, full-text, and hybrid search.
It is a wrapper around Lance. There are two backends: local (in-process like SQLite) and
remote (against LanceDB Cloud).
The core of LanceDB is written in Rust. There are bindings in Python, Typescript, and Java.
Project layout:
* `rust/lancedb`: The LanceDB core Rust implementation.
* `python`: The Python bindings, using PyO3.
* `nodejs`: The Typescript bindings, using napi-rs
* `java`: The Java bindings
(`rust/ffi` and `node/` are for a deprecated package. You can ignore them.)
Common commands:
* Check for compiler errors: `cargo check --features remote --tests --examples`
* Run tests: `cargo test --features remote --tests`
* Run specific test: `cargo test --features remote -p <package_name> --test <test_name>`
* Lint: `cargo clippy --features remote --tests --examples`
* Format: `cargo fmt --all`
Before committing changes, run formatting.

CLAUDE.md Symbolic link
View File

@@ -0,0 +1 @@
AGENTS.md

Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -1,10 +1,5 @@
[workspace]
members = [
"rust/lancedb",
"nodejs",
"python",
"java/core/lancedb-jni",
]
members = ["rust/lancedb", "nodejs", "python", "java/core/lancedb-jni"]
# Python package needs to be built by maturin.
exclude = ["python"]
resolver = "2"
@@ -20,32 +15,37 @@ categories = ["database-implementations"]
rust-version = "1.78.0"
[workspace.dependencies]
lance = { "version" = "=0.32.1", "features" = [
"dynamodb",
], "tag" = "v0.32.1-beta.2", "git" = "https://github.com/lancedb/lance.git" }
lance-io = { "version" = "=0.32.1", "tag" = "v0.32.1-beta.2", "git" = "https://github.com/lancedb/lance.git" }
lance-index = { "version" = "=0.32.1", "tag" = "v0.32.1-beta.2", "git" = "https://github.com/lancedb/lance.git" }
lance-linalg = { "version" = "=0.32.1", "tag" = "v0.32.1-beta.2", "git" = "https://github.com/lancedb/lance.git" }
lance-table = { "version" = "=0.32.1", "tag" = "v0.32.1-beta.2", "git" = "https://github.com/lancedb/lance.git" }
lance-testing = { "version" = "=0.32.1", "tag" = "v0.32.1-beta.2", "git" = "https://github.com/lancedb/lance.git" }
lance-datafusion = { "version" = "=0.32.1", "tag" = "v0.32.1-beta.2", "git" = "https://github.com/lancedb/lance.git" }
lance-encoding = { "version" = "=0.32.1", "tag" = "v0.32.1-beta.2", "git" = "https://github.com/lancedb/lance.git" }
lance = { "version" = "=0.39.0", default-features = false }
lance-core = "=0.39.0"
lance-datagen = "=0.39.0"
lance-file = "=0.39.0"
lance-io = { "version" = "=0.39.0", default-features = false }
lance-index = "=0.39.0"
lance-linalg = "=0.39.0"
lance-namespace = "=0.39.0"
lance-namespace-impls = { "version" = "=0.39.0", "features" = ["dir-aws", "dir-gcp", "dir-azure", "dir-oss", "rest"] }
lance-table = "=0.39.0"
lance-testing = "=0.39.0"
lance-datafusion = "=0.39.0"
lance-encoding = "=0.39.0"
lance-arrow = "=0.39.0"
ahash = "0.8"
# Note that this one does not include pyarrow
arrow = { version = "55.1", optional = false }
arrow-array = "55.1"
arrow-data = "55.1"
arrow-ipc = "55.1"
arrow-ord = "55.1"
arrow-schema = "55.1"
arrow-arith = "55.1"
arrow-cast = "55.1"
arrow = { version = "56.2", optional = false }
arrow-array = "56.2"
arrow-data = "56.2"
arrow-ipc = "56.2"
arrow-ord = "56.2"
arrow-schema = "56.2"
arrow-select = "56.2"
arrow-cast = "56.2"
async-trait = "0"
datafusion = { version = "48.0", default-features = false }
datafusion-catalog = "48.0"
datafusion-common = { version = "48.0", default-features = false }
datafusion-execution = "48.0"
datafusion-expr = "48.0"
datafusion-physical-plan = "48.0"
datafusion = { version = "50.1", default-features = false }
datafusion-catalog = "50.1"
datafusion-common = { version = "50.1", default-features = false }
datafusion-execution = "50.1"
datafusion-expr = "50.1"
datafusion-physical-plan = "50.1"
env_logger = "0.11"
half = { "version" = "2.6.0", default-features = false, features = [
"num-traits",
@@ -55,19 +55,11 @@ log = "0.4"
moka = { version = "0.12", features = ["future"] }
object_store = "0.12.0"
pin-project = "1.0.7"
rand = "0.9"
snafu = "0.8"
url = "2"
num-traits = "0.2"
rand = "0.9"
regex = "1.10"
lazy_static = "1"
semver = "1.0.25"
# Temporary pins to work around downstream issues
# https://github.com/apache/arrow-rs/commit/2fddf85afcd20110ce783ed5b4cdeb82293da30b
chrono = "=0.4.41"
# https://github.com/RustCrypto/formats/issues/1684
base64ct = "=1.6.0"
# Workaround for: https://github.com/eira-fransham/crunchy/issues/13
crunchy = "=0.2.2"
# Workaround for: https://github.com/Lokathor/bytemuck/issues/306
bytemuck_derive = ">=1.8.1, <1.9.0"
chrono = "0.4"

View File

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
export RUST_LOG=info
exec ./lancedb server --port 0 --sql-port 0 --data-dir "${1}"

ci/run_with_docker_compose.sh Executable file
View File

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
#
# A script for running the given command together with a docker compose environment.
#
# Bring down the docker setup once the command is done running.
tear_down() {
docker compose -p fixture down
}
trap tear_down EXIT
set +xe
# Clean up any existing docker setup and bring up a new one.
docker compose -p fixture up --detach --wait || exit 1
"${@}"

ci/run_with_test_connection.sh Executable file
View File

@@ -0,0 +1,68 @@
#!/usr/bin/env bash
#
# A script for running the given command together with the lancedb cli.
#
die() {
echo "${1}"
exit 1
}
check_command_exists() {
command="${1}"
which ${command} &> /dev/null || \
die "Unable to locate command: ${command}. Did you install it?"
}
if [[ ! -e ./lancedb ]]; then
if [[ -v SOPHON_READ_TOKEN ]]; then
INPUT="lancedb-linux-x64"
gh release \
--repo lancedb/lancedb \
download ci-support-binaries \
--pattern "${INPUT}" \
|| die "failed to fetch cli."
check_command_exists openssl
openssl enc -aes-256-cbc \
-d -pbkdf2 \
-pass "env:SOPHON_READ_TOKEN" \
-in "${INPUT}" \
-out ./lancedb-linux-x64.tar.gz \
|| die "openssl failed"
TARGET="${INPUT}.tar.gz"
else
ARCH="x64"
# Detect the machine architecture once so both the darwin and linux branches can use it.
UNAME=$(uname -m)
if [[ $OSTYPE == 'darwin'* ]]; then
if [[ $UNAME == 'arm64' ]]; then
ARCH='arm64'
fi
OSTYPE="macos"
elif [[ $OSTYPE == 'linux'* ]]; then
if [[ $UNAME == 'aarch64' ]]; then
ARCH='arm64'
fi
OSTYPE="linux"
else
die "unknown OSTYPE: $OSTYPE"
fi
check_command_exists gh
TARGET="lancedb-${OSTYPE}-${ARCH}.tar.gz"
gh release \
--repo lancedb/sophon \
download lancedb-cli-v0.0.3 \
--pattern "${TARGET}" \
|| die "failed to fetch cli."
fi
check_command_exists tar
tar xvf "${TARGET}" || die "tar failed."
[[ -e ./lancedb ]] || die "failed to extract lancedb."
fi
SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
export CREATE_LANCEDB_TEST_CONNECTION_SCRIPT="${SCRIPT_DIR}/create_lancedb_test_connection.sh"
"${@}"

View File

@@ -1,4 +1,5 @@
import argparse
import re
import sys
import json
@@ -18,8 +19,12 @@ def run_command(command: str) -> str:
def get_latest_stable_version() -> str:
version_line = run_command("cargo info lance | grep '^version:'")
version = version_line.split(" ")[1].strip()
return version
# Example output: "version: 0.35.0 (latest 0.37.0)"
match = re.search(r'\(latest ([0-9.]+)\)', version_line)
if match:
return match.group(1)
# Fallback: use the first version after 'version:'
return version_line.split("version:")[1].split()[0].strip()
def get_latest_preview_version() -> str:
@@ -50,10 +55,56 @@ def extract_features(line: str) -> list:
match = re.search(r'"features"\s*=\s*\[\s*(.*?)\s*\]', line, re.DOTALL)
if match:
features_str = match.group(1)
return [f.strip('"') for f in features_str.split(",") if len(f) > 0]
return [f.strip().strip('"') for f in features_str.split(",") if f.strip()]
return []
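# Illustration (not part of the original script):
#   extract_features('lance = { "version" = "=0.39.0", "features" = ["dynamodb", "rest"] }')
#   returns ["dynamodb", "rest"]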
def extract_default_features(line: str) -> bool:
"""
Checks if default-features = false is present in a line in Cargo.toml.
Example: 'lance = { "version" = "=0.29.0", default-features = false, "features" = ["dynamodb"] }'
Returns: True if default-features = false is present, False otherwise
"""
import re
match = re.search(r'default-features\s*=\s*false', line)
return match is not None
def dict_to_toml_line(package_name: str, config: dict) -> str:
"""
Converts a configuration dictionary to a TOML dependency line.
Dictionary insertion order is preserved (Python 3.7+), so the caller
controls the order of fields in the output.
Args:
package_name: The name of the package (e.g., "lance", "lance-io")
config: Dictionary with keys like "version", "path", "git", "tag", "features", "default-features"
The order of keys in this dict determines the order in the output.
Returns:
A properly formatted TOML line with a trailing newline
"""
# If only version is specified, use simple format
if len(config) == 1 and "version" in config:
return f'{package_name} = "{config["version"]}"\n'
# Otherwise, use inline table format
parts = []
for key, value in config.items():
if key == "default-features" and not value:
parts.append("default-features = false")
elif key == "features":
parts.append(f'"features" = {json.dumps(value)}')
elif isinstance(value, str):
parts.append(f'"{key}" = "{value}"')
else:
# This shouldn't happen with our current usage
parts.append(f'"{key}" = {json.dumps(value)}')
return f'{package_name} = {{ {", ".join(parts)} }}\n'
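# Illustration (not part of the original script): dict insertion order is
# preserved, so
#   dict_to_toml_line("lance-io", {"version": "=0.39.0", "default-features": False})
# returns:
#   lance-io = { "version" = "=0.39.0", default-features = false }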
def update_cargo_toml(line_updater):
"""
Updates the Cargo.toml file by applying the line_updater function to each line.
@@ -67,20 +118,27 @@ def update_cargo_toml(line_updater):
is_parsing_lance_line = False
for line in lines:
if line.startswith("lance"):
# Update the line using the provided function
if line.strip().endswith("}"):
# Check if this is a single-line or multi-line entry
# Single-line entries either:
# 1. End with } (complete inline table)
# 2. End with " (simple version string)
# Multi-line entries start with { but don't end with }
if line.strip().endswith("}") or line.strip().endswith('"'):
# Single-line entry - process immediately
new_lines.append(line_updater(line))
else:
elif "{" in line and not line.strip().endswith("}"):
# Multi-line entry - start accumulating
lance_line = line
is_parsing_lance_line = True
else:
# Single-line entry without quotes or braces (shouldn't happen but handle it)
new_lines.append(line_updater(line))
elif is_parsing_lance_line:
lance_line += line
if line.strip().endswith("}"):
new_lines.append(line_updater(lance_line))
lance_line = ""
is_parsing_lance_line = False
else:
print("doesn't end with }:", line)
else:
# Keep the line unchanged
new_lines.append(line)
@@ -92,18 +150,25 @@ def update_cargo_toml(line_updater):
def set_stable_version(version: str):
"""
Sets lines to
lance = { "version" = "=0.29.0", "features" = ["dynamodb"] }
lance-io = "=0.29.0"
lance = { "version" = "=0.29.0", default-features = false, "features" = ["dynamodb"] }
lance-io = { "version" = "=0.29.0", default-features = false }
...
"""
def line_updater(line: str) -> str:
package_name = line.split("=", maxsplit=1)[0].strip()
# Build config in desired order: version, default-features, features
config = {"version": f"={version}"}
if extract_default_features(line):
config["default-features"] = False
features = extract_features(line)
if features:
return f'{package_name} = {{ "version" = "={version}", "features" = {json.dumps(features)} }}\n'
else:
return f'{package_name} = "={version}"\n'
config["features"] = features
return dict_to_toml_line(package_name, config)
update_cargo_toml(line_updater)
@@ -111,19 +176,27 @@ def set_stable_version(version: str):
def set_preview_version(version: str):
"""
Sets lines to
lance = { "version" = "=0.29.0", "features" = ["dynamodb"], tag = "v0.29.0-beta.2", git="https://github.com/lancedb/lance.git" }
lance-io = { version = "=0.29.0", tag = "v0.29.0-beta.2", git="https://github.com/lancedb/lance.git" }
lance = { "version" = "=0.29.0", default-features = false, "features" = ["dynamodb"], "tag" = "v0.29.0-beta.2", "git" = "https://github.com/lancedb/lance.git" }
lance-io = { "version" = "=0.29.0", default-features = false, "tag" = "v0.29.0-beta.2", "git" = "https://github.com/lancedb/lance.git" }
...
"""
def line_updater(line: str) -> str:
package_name = line.split("=", maxsplit=1)[0].strip()
# Build config in desired order: version, default-features, features, tag, git
config = {"version": f"={version}"}
if extract_default_features(line):
config["default-features"] = False
features = extract_features(line)
base_version = version.split("-")[0] # Get the base version without beta suffix
if features:
return f'{package_name} = {{ "version" = "={base_version}", "features" = {json.dumps(features)}, "tag" = "v{version}", "git" = "https://github.com/lancedb/lance.git" }}\n'
else:
return f'{package_name} = {{ "version" = "={base_version}", "tag" = "v{version}", "git" = "https://github.com/lancedb/lance.git" }}\n'
config["features"] = features
config["tag"] = f"v{version}"
config["git"] = "https://github.com/lancedb/lance.git"
return dict_to_toml_line(package_name, config)
update_cargo_toml(line_updater)
@@ -131,18 +204,25 @@ def set_preview_version(version: str):
def set_local_version():
"""
Sets lines to
lance = { path = "../lance/rust/lance", features = ["dynamodb"] }
lance-io = { path = "../lance/rust/lance-io" }
lance = { "path" = "../lance/rust/lance", default-features = false, "features" = ["dynamodb"] }
lance-io = { "path" = "../lance/rust/lance-io", default-features = false }
...
"""
def line_updater(line: str) -> str:
package_name = line.split("=", maxsplit=1)[0].strip()
# Build config in desired order: path, default-features, features
config = {"path": f"../lance/rust/{package_name}"}
if extract_default_features(line):
config["default-features"] = False
features = extract_features(line)
if features:
return f'{package_name} = {{ "path" = "../lance/rust/{package_name}", "features" = {json.dumps(features)} }}\n'
else:
return f'{package_name} = {{ "path" = "../lance/rust/{package_name}" }}\n'
config["features"] = features
return dict_to_toml_line(package_name, config)
update_cargo_toml(line_updater)

View File

@@ -15,16 +15,13 @@ cargo metadata --quiet > /dev/null
pushd nodejs || exit 1
npm install --package-lock-only --silent
popd
pushd node || exit 1
npm install --package-lock-only --silent
popd
if git diff --quiet --exit-code; then
echo "No lockfile changes to commit; skipping amend."
elif $AMEND; then
git add Cargo.lock nodejs/package-lock.json node/package-lock.json
git add Cargo.lock nodejs/package-lock.json
git commit --amend --no-edit
else
git add Cargo.lock nodejs/package-lock.json node/package-lock.json
git add Cargo.lock nodejs/package-lock.json
git commit -m "Update lockfiles"
fi

View File

@@ -70,6 +70,23 @@ plugins:
- mkdocs-jupyter
- render_swagger:
allow_arbitrary_locations: true
- redirects:
redirect_maps:
# Redirect the home page and other top-level markdown files. This enables maximum SEO benefit;
# other sub-pages are handled by the injected js in overrides/partials/header.html
'index.md': 'https://lancedb.com/docs/'
'guides/tables.md': 'https://lancedb.com/docs/tables/'
'ann_indexes.md': 'https://lancedb.com/docs/indexing/'
'basic.md': 'https://lancedb.com/docs/quickstart/'
'faq.md': 'https://lancedb.com/docs/faq/'
'embeddings/understanding_embeddings.md': 'https://lancedb.com/docs/embedding/'
'integrations.md': 'https://lancedb.com/docs/integrations/'
'examples.md': 'https://lancedb.com/docs/tutorials/'
'concepts/vector_search.md': 'https://lancedb.com/docs/search/vector-search/'
'troubleshooting.md': 'https://lancedb.com/docs/troubleshooting/'
'guides/storage.md': 'https://lancedb.com/docs/storage/integrations'
markdown_extensions:
- admonition
@@ -103,6 +120,264 @@ markdown_extensions:
permalink: ""
nav:
- Home:
- LanceDB: index.md
- 🏃🏼‍♂️ Quick start: basic.md
- 📚 Concepts:
- Vector search: concepts/vector_search.md
- Indexing:
- IVFPQ: concepts/index_ivfpq.md
- HNSW: concepts/index_hnsw.md
- Storage: concepts/storage.md
- Data management: concepts/data_management.md
- 🔨 Guides:
- Working with tables: guides/tables.md
- Building a vector index: ann_indexes.md
- Vector Search: search.md
- Full-text search (native): fts.md
- Full-text search (tantivy-based): fts_tantivy.md
- Building a scalar index: guides/scalar_index.md
- Hybrid search:
- Overview: hybrid_search/hybrid_search.md
- Comparing Rerankers: hybrid_search/eval.md
- Airbnb financial data example: notebooks/hybrid_search.ipynb
- Late interaction with MultiVector search:
- Overview: guides/multi-vector.md
- Example: notebooks/Multivector_on_LanceDB.ipynb
- RAG:
- Vanilla RAG: rag/vanilla_rag.md
- Multi-head RAG: rag/multi_head_rag.md
- Corrective RAG: rag/corrective_rag.md
- Agentic RAG: rag/agentic_rag.md
- Graph RAG: rag/graph_rag.md
- Self RAG: rag/self_rag.md
- Adaptive RAG: rag/adaptive_rag.md
- SFR RAG: rag/sfr_rag.md
- Advanced Techniques:
- HyDE: rag/advanced_techniques/hyde.md
- FLARE: rag/advanced_techniques/flare.md
- Reranking:
- Quickstart: reranking/index.md
- Cohere Reranker: reranking/cohere.md
- Linear Combination Reranker: reranking/linear_combination.md
- Reciprocal Rank Fusion Reranker: reranking/rrf.md
- Cross Encoder Reranker: reranking/cross_encoder.md
- ColBERT Reranker: reranking/colbert.md
- Jina Reranker: reranking/jina.md
- OpenAI Reranker: reranking/openai.md
- AnswerDotAi Rerankers: reranking/answerdotai.md
- Voyage AI Rerankers: reranking/voyageai.md
- Building Custom Rerankers: reranking/custom_reranker.md
- Example: notebooks/lancedb_reranking.ipynb
- Filtering: sql.md
- Versioning & Reproducibility:
- sync API: notebooks/reproducibility.ipynb
- async API: notebooks/reproducibility_async.ipynb
- Configuring Storage: guides/storage.md
- Migration Guide: migration.md
- Tuning retrieval performance:
- Choosing right query type: guides/tuning_retrievers/1_query_types.md
- Reranking: guides/tuning_retrievers/2_reranking.md
- Embedding fine-tuning: guides/tuning_retrievers/3_embed_tuning.md
- 🧬 Managing embeddings:
- Understand Embeddings: embeddings/understanding_embeddings.md
- Get Started: embeddings/index.md
- Embedding functions: embeddings/embedding_functions.md
- Available models:
- Overview: embeddings/default_embedding_functions.md
- Text Embedding Functions:
- Sentence Transformers: embeddings/available_embedding_models/text_embedding_functions/sentence_transformers.md
- Huggingface Embedding Models: embeddings/available_embedding_models/text_embedding_functions/huggingface_embedding.md
- Ollama Embeddings: embeddings/available_embedding_models/text_embedding_functions/ollama_embedding.md
- OpenAI Embeddings: embeddings/available_embedding_models/text_embedding_functions/openai_embedding.md
- Instructor Embeddings: embeddings/available_embedding_models/text_embedding_functions/instructor_embedding.md
- Gemini Embeddings: embeddings/available_embedding_models/text_embedding_functions/gemini_embedding.md
- Cohere Embeddings: embeddings/available_embedding_models/text_embedding_functions/cohere_embedding.md
- Jina Embeddings: embeddings/available_embedding_models/text_embedding_functions/jina_embedding.md
- AWS Bedrock Text Embedding Functions: embeddings/available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md
- IBM watsonx.ai Embeddings: embeddings/available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md
- Voyage AI Embeddings: embeddings/available_embedding_models/text_embedding_functions/voyageai_embedding.md
- Multimodal Embedding Functions:
- OpenClip embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/openclip_embedding.md
- Imagebind embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md
- Jina Embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md
- User-defined embedding functions: embeddings/custom_embedding_function.md
- Variables and secrets: embeddings/variables_and_secrets.md
- "Example: Multi-lingual semantic search": notebooks/multi_lingual_example.ipynb
- "Example: MultiModal CLIP Embeddings": notebooks/DisappearingEmbeddingFunction.ipynb
- 🔌 Integrations:
- Tools and data formats: integrations/index.md
- Pandas and PyArrow: python/pandas_and_pyarrow.md
- Polars: python/polars_arrow.md
- DuckDB: python/duckdb.md
- Datafusion: python/datafusion.md
- LangChain:
- LangChain 🔗: integrations/langchain.md
- LangChain demo: notebooks/langchain_demo.ipynb
- LangChain JS/TS 🔗: https://js.langchain.com/docs/integrations/vectorstores/lancedb
- LlamaIndex 🦙:
- LlamaIndex docs: integrations/llamaIndex.md
- LlamaIndex demo: notebooks/llamaIndex_demo.ipynb
- Pydantic: python/pydantic.md
- Voxel51: integrations/voxel51.md
- PromptTools: integrations/prompttools.md
- dlt: integrations/dlt.md
- phidata: integrations/phidata.md
- Genkit: integrations/genkit.md
- 🎯 Examples:
- Overview: examples/index.md
- 🐍 Python:
- Overview: examples/examples_python.md
- Build From Scratch: examples/python_examples/build_from_scratch.md
- Multimodal: examples/python_examples/multimodal.md
- Rag: examples/python_examples/rag.md
- Vector Search: examples/python_examples/vector_search.md
- Chatbot: examples/python_examples/chatbot.md
- Evaluation: examples/python_examples/evaluations.md
- AI Agent: examples/python_examples/aiagent.md
- Recommender System: examples/python_examples/recommendersystem.md
- Miscellaneous:
- Serverless QA Bot with S3 and Lambda: examples/serverless_lancedb_with_s3_and_lambda.md
- Serverless QA Bot with Modal: examples/serverless_qa_bot_with_modal_and_langchain.md
- 👾 JavaScript:
- Overview: examples/examples_js.md
- Serverless Website Chatbot: examples/serverless_website_chatbot.md
- YouTube Transcript Search: examples/youtube_transcript_bot_with_nodejs.md
- TransformersJS Embedding Search: examples/transformerjs_embedding_search_nodejs.md
- 🦀 Rust:
- Overview: examples/examples_rust.md
- 📓 Studies:
- ↗Improve retrievers with hybrid search and reranking: https://blog.lancedb.com/hybrid-search-and-reranking-report/
- 💭 FAQs: faq.md
- 🔍 Troubleshooting: troubleshooting.md
- ⚙️ API reference:
- 🐍 Python: python/python.md
- 👾 JavaScript (vectordb): javascript/modules.md
- 👾 JavaScript (lancedb): js/globals.md
- 🦀 Rust: https://docs.rs/lancedb/latest/lancedb/
- Quick start: basic.md
- Concepts:
- Vector search: concepts/vector_search.md
- Indexing:
- IVFPQ: concepts/index_ivfpq.md
- HNSW: concepts/index_hnsw.md
- Storage: concepts/storage.md
- Data management: concepts/data_management.md
- Guides:
- Working with tables: guides/tables.md
- Working with SQL: guides/sql_querying.md
- Building an ANN index: ann_indexes.md
- Vector Search: search.md
- Full-text search (native): fts.md
- Full-text search (tantivy-based): fts_tantivy.md
- Building a scalar index: guides/scalar_index.md
- Hybrid search:
- Overview: hybrid_search/hybrid_search.md
- Comparing Rerankers: hybrid_search/eval.md
- Airbnb financial data example: notebooks/hybrid_search.ipynb
- Late interaction with MultiVector search:
- Overview: guides/multi-vector.md
- Document search Example: notebooks/Multivector_on_LanceDB.ipynb
- RAG:
- Vanilla RAG: rag/vanilla_rag.md
- Multi-head RAG: rag/multi_head_rag.md
- Corrective RAG: rag/corrective_rag.md
- Agentic RAG: rag/agentic_rag.md
- Graph RAG: rag/graph_rag.md
- Self RAG: rag/self_rag.md
- Adaptive RAG: rag/adaptive_rag.md
- SFR RAG: rag/sfr_rag.md
- Advanced Techniques:
- HyDE: rag/advanced_techniques/hyde.md
- FLARE: rag/advanced_techniques/flare.md
- Reranking:
- Quickstart: reranking/index.md
- Cohere Reranker: reranking/cohere.md
- Linear Combination Reranker: reranking/linear_combination.md
- Reciprocal Rank Fusion Reranker: reranking/rrf.md
- Cross Encoder Reranker: reranking/cross_encoder.md
- ColBERT Reranker: reranking/colbert.md
- Jina Reranker: reranking/jina.md
- OpenAI Reranker: reranking/openai.md
- AnswerDotAi Rerankers: reranking/answerdotai.md
- Building Custom Rerankers: reranking/custom_reranker.md
- Example: notebooks/lancedb_reranking.ipynb
- Filtering: sql.md
- Versioning & Reproducibility:
- sync API: notebooks/reproducibility.ipynb
- async API: notebooks/reproducibility_async.ipynb
- Configuring Storage: guides/storage.md
- Migration Guide: migration.md
- Tuning retrieval performance:
- Choosing right query type: guides/tuning_retrievers/1_query_types.md
- Reranking: guides/tuning_retrievers/2_reranking.md
- Embedding fine-tuning: guides/tuning_retrievers/3_embed_tuning.md
- Managing Embeddings:
- Understand Embeddings: embeddings/understanding_embeddings.md
- Get Started: embeddings/index.md
- Embedding functions: embeddings/embedding_functions.md
- Available models:
- Overview: embeddings/default_embedding_functions.md
- Text Embedding Functions:
- Sentence Transformers: embeddings/available_embedding_models/text_embedding_functions/sentence_transformers.md
- Huggingface Embedding Models: embeddings/available_embedding_models/text_embedding_functions/huggingface_embedding.md
- Ollama Embeddings: embeddings/available_embedding_models/text_embedding_functions/ollama_embedding.md
- OpenAI Embeddings: embeddings/available_embedding_models/text_embedding_functions/openai_embedding.md
- Instructor Embeddings: embeddings/available_embedding_models/text_embedding_functions/instructor_embedding.md
- Gemini Embeddings: embeddings/available_embedding_models/text_embedding_functions/gemini_embedding.md
- Cohere Embeddings: embeddings/available_embedding_models/text_embedding_functions/cohere_embedding.md
- Jina Embeddings: embeddings/available_embedding_models/text_embedding_functions/jina_embedding.md
- AWS Bedrock Text Embedding Functions: embeddings/available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md
- IBM watsonx.ai Embeddings: embeddings/available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md
- Multimodal Embedding Functions:
- OpenClip embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/openclip_embedding.md
- Imagebind embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md
- Jina Embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md
- User-defined embedding functions: embeddings/custom_embedding_function.md
- Variables and secrets: embeddings/variables_and_secrets.md
- "Example: Multi-lingual semantic search": notebooks/multi_lingual_example.ipynb
- "Example: MultiModal CLIP Embeddings": notebooks/DisappearingEmbeddingFunction.ipynb
- Integrations:
- Overview: integrations/index.md
- Pandas and PyArrow: python/pandas_and_pyarrow.md
- Polars: python/polars_arrow.md
- DuckDB: python/duckdb.md
- Datafusion: python/datafusion.md
- LangChain 🦜️🔗↗: integrations/langchain.md
- LangChain.js 🦜️🔗↗: https://js.langchain.com/docs/integrations/vectorstores/lancedb
- LlamaIndex 🦙↗: integrations/llamaIndex.md
- Pydantic: python/pydantic.md
- Voxel51: integrations/voxel51.md
- PromptTools: integrations/prompttools.md
- dlt: integrations/dlt.md
- phidata: integrations/phidata.md
- Genkit: integrations/genkit.md
- Examples:
- examples/index.md
- 🐍 Python:
- Overview: examples/examples_python.md
- Build From Scratch: examples/python_examples/build_from_scratch.md
- Multimodal: examples/python_examples/multimodal.md
- Rag: examples/python_examples/rag.md
- Vector Search: examples/python_examples/vector_search.md
- Chatbot: examples/python_examples/chatbot.md
- Evaluation: examples/python_examples/evaluations.md
- AI Agent: examples/python_examples/aiagent.md
- Recommender System: examples/python_examples/recommendersystem.md
- Miscellaneous:
- Serverless QA Bot with S3 and Lambda: examples/serverless_lancedb_with_s3_and_lambda.md
- Serverless QA Bot with Modal: examples/serverless_qa_bot_with_modal_and_langchain.md
- 👾 JavaScript:
- Overview: examples/examples_js.md
- Serverless Website Chatbot: examples/serverless_website_chatbot.md
- YouTube Transcript Search: examples/youtube_transcript_bot_with_nodejs.md
- TransformersJS Embedding Search: examples/transformerjs_embedding_search_nodejs.md
- 🦀 Rust:
- Overview: examples/examples_rust.md
- Studies:
- studies/overview.md
- ↗Improve retrievers with hybrid search and reranking: https://blog.lancedb.com/hybrid-search-and-reranking-report/
- API reference:
- Overview: api_reference.md
- Python: python/python.md

View File

@@ -19,7 +19,13 @@
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
-->
<div id="deprecation-banner" style="background-color: #f8d7da; color: #721c24; padding: 1em; text-align: center;">
<p style="margin: 0; font-size: 1.1em;">
<strong>This documentation site is deprecated.</strong>
Please visit our new documentation site at <a href="https://lancedb.com/docs" style="color: #721c24; text-decoration: underline;">
lancedb.com/docs</a> for the latest information.
</p>
</div>
{% set class = "md-header" %}
{% if "navigation.tabs.sticky" in features %}
{% set class = class ~ " md-header--shadow md-header--lifted" %}
@@ -150,9 +156,9 @@
<div style="margin-left: 10px; margin-right: 5px;">
<a href="https://discord.com/invite/zMM32dvNtd" target="_blank" rel="noopener noreferrer">
<svg fill="#FFFFFF" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 50 50" width="25px" height="25px"><path d="M 41.625 10.769531 C 37.644531 7.566406 31.347656 7.023438 31.078125 7.003906 C 30.660156 6.96875 30.261719 7.203125 30.089844 7.589844 C 30.074219 7.613281 29.9375 7.929688 29.785156 8.421875 C 32.417969 8.867188 35.652344 9.761719 38.578125 11.578125 C 39.046875 11.867188 39.191406 12.484375 38.902344 12.953125 C 38.710938 13.261719 38.386719 13.429688 38.050781 13.429688 C 37.871094 13.429688 37.6875 13.378906 37.523438 13.277344 C 32.492188 10.15625 26.210938 10 25 10 C 23.789063 10 17.503906 10.15625 12.476563 13.277344 C 12.007813 13.570313 11.390625 13.425781 11.101563 12.957031 C 10.808594 12.484375 10.953125 11.871094 11.421875 11.578125 C 14.347656 9.765625 17.582031 8.867188 20.214844 8.425781 C 20.0625 7.929688 19.925781 7.617188 19.914063 7.589844 C 19.738281 7.203125 19.34375 6.960938 18.921875 7.003906 C 18.652344 7.023438 12.355469 7.566406 8.320313 10.8125 C 6.214844 12.761719 2 24.152344 2 34 C 2 34.175781 2.046875 34.34375 2.132813 34.496094 C 5.039063 39.605469 12.972656 40.941406 14.78125 41 C 14.789063 41 14.800781 41 14.8125 41 C 15.132813 41 15.433594 40.847656 15.621094 40.589844 L 17.449219 38.074219 C 12.515625 36.800781 9.996094 34.636719 9.851563 34.507813 C 9.4375 34.144531 9.398438 33.511719 9.765625 33.097656 C 10.128906 32.683594 10.761719 32.644531 11.175781 33.007813 C 11.234375 33.0625 15.875 37 25 37 C 34.140625 37 38.78125 33.046875 38.828125 33.007813 C 39.242188 32.648438 39.871094 32.683594 40.238281 33.101563 C 40.601563 33.515625 40.5625 34.144531 40.148438 34.507813 C 40.003906 34.636719 37.484375 36.800781 32.550781 38.074219 L 34.378906 40.589844 C 34.566406 40.847656 34.867188 41 35.1875 41 C 35.199219 41 35.210938 41 35.21875 41 C 37.027344 40.941406 44.960938 39.605469 47.867188 34.496094 C 47.953125 34.34375 48 34.175781 48 34 C 48 24.152344 43.785156 12.761719 41.625 10.769531 Z M 18.5 30 C 16.566406 30 15 28.210938 15 26 C 15 23.789063 16.566406 22 18.5 22 C 20.433594 22 22 23.789063 22 26 C 22 28.210938 20.433594 30 18.5 30 Z M 31.5 30 C 29.566406 30 28 28.210938 28 26 C 28 23.789063 29.566406 22 31.5 22 C 33.433594 22 35 23.789063 35 26 C 35 28.210938 33.433594 30 31.5 30 Z"/></svg>
</a>
</div>
<svg fill="#FFFFFF" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 50 50" width="25px" height="25px"><path d="M 41.625 10.769531 C 37.644531 7.566406 31.347656 7.023438 31.078125 7.003906 C 30.660156 6.96875 30.261719 7.203125 30.089844 7.589844 C 30.074219 7.613281 29.9375 7.929688 29.785156 8.421875 C 32.417969 8.867188 35.652344 9.761719 38.578125 11.578125 C 39.046875 11.867188 39.191406 12.484375 38.902344 12.953125 C 38.710938 13.261719 38.386719 13.429688 38.050781 13.429688 C 37.871094 13.429688 37.6875 13.378906 37.523438 13.277344 C 32.492188 10.15625 26.210938 10 25 10 C 23.789063 10 17.503906 10.15625 12.476563 13.277344 C 12.007813 13.570313 11.390625 13.425781 11.101563 12.957031 C 10.808594 12.484375 10.953125 11.871094 11.421875 11.578125 C 14.347656 9.765625 17.582031 8.867188 20.214844 8.425781 C 20.0625 7.929688 19.925781 7.617188 19.914063 7.589844 C 19.738281 7.203125 19.34375 6.960938 18.921875 7.003906 C 18.652344 7.023438 12.355469 7.566406 8.320313 10.8125 C 6.214844 12.761719 2 24.152344 2 34 C 2 34.175781 2.046875 34.34375 2.132813 34.496094 C 5.039063 39.605469 12.972656 40.941406 14.78125 41 C 14.789063 41 14.800781 41 14.8125 41 C 15.132813 41 15.433594 40.847656 15.621094 40.589844 L 17.449219 38.074219 C 12.515625 36.800781 9.996094 34.636719 9.851563 34.507813 C 9.4375 34.144531 9.398438 33.511719 9.765625 33.097656 C 10.128906 32.683594 10.761719 32.644531 11.175781 33.007813 C 11.234375 33.0625 15.875 37 25 37 C 34.140625 37 38.78125 33.046875 38.828125 33.007813 C 39.242188 32.648438 39.871094 32.683594 40.238281 33.101563 C 40.601563 33.515625 40.5625 34.144531 40.148438 34.507813 C 40.003906 34.636719 37.484375 36.800781 32.550781 38.074219 L 34.378906 40.589844 C 34.566406 40.847656 34.867188 41 35.1875 41 C 35.199219 41 35.210938 41 35.21875 41 C 37.027344 40.941406 44.960938 39.605469 47.867188 34.496094 C 47.953125 34.34375 48 34.175781 48 34 C 48 24.152344 43.785156 12.761719 41.625 10.769531 Z M 18.5 30 C 16.566406 30 15 28.210938 15 26 C 15 23.789063 16.566406 22 18.5 22 C 20.433594 22 22 23.789063 22 26 C 22 28.210938 20.433594 30 18.5 30 Z M 31.5 30 C 29.566406 30 28 28.210938 28 26 C 28 23.789063 29.566406 22 31.5 22 C 33.433594 22 35 23.789063 35 26 C 35 28.210938 33.433594 30 31.5 30 Z"/></svg>
</a>
</div>
<div style="margin-left: 5px; margin-right: 5px;">
<a href="https://twitter.com/lancedb" target="_blank" rel="noopener noreferrer">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0,0,256,256" width="25px" height="25px" fill-rule="nonzero"><g fill-opacity="0" fill="#ffffff" fill-rule="nonzero" stroke="none" stroke-width="1" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="10" stroke-dasharray="" stroke-dashoffset="0" font-family="none" font-weight="none" font-size="none" text-anchor="none" style="mix-blend-mode: normal"><path d="M0,256v-256h256v256z" id="bgRectangle"></path></g><g fill="#ffffff" fill-rule="nonzero" stroke="none" stroke-width="1" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="10" stroke-dasharray="" stroke-dashoffset="0" font-family="none" font-weight="none" font-size="none" text-anchor="none" style="mix-blend-mode: normal"><g transform="scale(4,4)"><path d="M57,17.114c-1.32,1.973 -2.991,3.707 -4.916,5.097c0.018,0.423 0.028,0.847 0.028,1.274c0,13.013 -9.902,28.018 -28.016,28.018c-5.562,0 -12.81,-1.948 -15.095,-4.423c0.772,0.092 1.556,0.138 2.35,0.138c4.615,0 8.861,-1.575 12.23,-4.216c-4.309,-0.079 -7.946,-2.928 -9.199,-6.84c1.96,0.308 4.447,-0.17 4.447,-0.17c0,0 -7.7,-1.322 -7.899,-9.779c2.226,1.291 4.46,1.231 4.46,1.231c0,0 -4.441,-2.734 -4.379,-8.195c0.037,-3.221 1.331,-4.953 1.331,-4.953c8.414,10.361 20.298,10.29 20.298,10.29c0,0 -0.255,-1.471 -0.255,-2.243c0,-5.437 4.408,-9.847 9.847,-9.847c2.832,0 5.391,1.196 7.187,3.111c2.245,-0.443 4.353,-1.263 6.255,-2.391c-0.859,3.44 -4.329,5.448 -4.329,5.448c0,0 2.969,-0.329 5.655,-1.55z"></path></g></g></svg>
@@ -173,4 +179,77 @@
{% include "partials/tabs.html" %}
{% endif %}
{% endif %}
</header>
<script>
(function() {
function checkPathAndRedirect() {
var banner = document.getElementById('deprecation-banner');
if (document.querySelector('meta[http-equiv="refresh"]')) {
return; // The redirects plugin is already handling this page.
}
var currentPath = window.location.pathname;
var cleanPath = currentPath.endsWith('/') && currentPath.length > 1
? currentPath.slice(0, -1)
: currentPath;
// These are the ONLY paths that should remain on the old site
var apiPaths = [
'/lancedb/python',
'/lancedb/javascript',
'/lancedb/js',
'/lancedb/api_reference'
];
var isApiPage = apiPaths.some(function(apiPath) {
return cleanPath.startsWith(apiPath);
});
if (isApiPage) {
if (banner) {
banner.style.display = 'none';
}
} else {
if (banner) {
banner.style.display = 'block';
}
// Add a noindex meta tag to prevent search engines from indexing the old docs
var noindexMeta = document.createElement('meta');
noindexMeta.setAttribute('name', 'robots');
noindexMeta.setAttribute('content', 'noindex, follow');
document.head.appendChild(noindexMeta);
// Add a canonical link pointing to the new docs so SEO credit flows to the new site
var canonicalLink = document.createElement('link');
canonicalLink.setAttribute('rel', 'canonical');
canonicalLink.setAttribute('href', 'https://lancedb.com/docs');
document.head.appendChild(canonicalLink);
window.location.replace('https://lancedb.com/docs');
}
}
// Run the check once the document is ready. This makes sure we catch the
// initial load and redirect.
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', checkPathAndRedirect);
} else {
checkPathAndRedirect();
}
// Use an interval to handle subsequent navigation clicks.
var lastPath = window.location.pathname;
setInterval(function() {
if (window.location.pathname !== lastPath) {
lastPath = window.location.pathname;
checkPathAndRedirect();
}
}, 2000); // keep it at 2 seconds so the user has time to understand
// what's happening
})();
</script>

View File

@@ -5,3 +5,4 @@ mkdocstrings[python]==0.25.2
griffe
mkdocs-render-swagger-plugin
pydantic
mkdocs-redirects

View File

@@ -13,7 +13,7 @@ The following concepts are important to keep in mind:
- Data is versioned, with each insert operation creating a new version of the dataset and an update to the manifest that tracks versions via metadata
!!! note
1. First, each version contains metadata and just the new/updated data in your transaction. So if you have 100 versions, they aren't 100 duplicates of the same data. However, they do have 100x the metadata overhead of a single version, which can result in slower queries.
2. Second, these versions exist to keep LanceDB scalable and consistent. We do not immediately blow away old versions when creating new ones because other clients might be in the middle of querying the old version. It's important to retain older versions for as long as they might be queried.
## What are fragments?
@@ -37,6 +37,10 @@ Depending on the use case and dataset, optimal compaction will have different re
- It's always better to use *batch* inserts rather than adding 1 row at a time (to avoid creating fragments that are too small). If single-row inserts are unavoidable, run compaction on a regular basis to merge them into larger fragments.
- Keep the number of fragments under 100, which is suitable for most use cases (for *really* large datasets of >500M rows, more fragments might be needed)
!!! note
LanceDB Cloud/Enterprise supports [auto-compaction](https://docs.lancedb.com/enterprise/architecture/architecture#write-path) which automatically optimizes fragments in the background as data changes.
## Deletion
Although Lance allows you to delete rows from a dataset, it does not actually delete the data immediately. It simply marks the row as deleted in the `DataFile` that represents a fragment. For a given version of the dataset, each fragment can have up to one deletion file (if no rows were ever deleted from that fragment, it will not have a deletion file). This is important to keep in mind because it means that the data is still there, and can be recovered if needed, as long as that version still exists based on your backup policy.
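A minimal sketch of this behavior (placeholder table name and filter; sync API):
```python
import lancedb

db = lancedb.connect("./.lancedb")
table = db.open_table("my_table")
v_before = table.version
table.delete("id = 1")   # rows are only marked deleted in a deletion file
table.restore(v_before)  # the old version still exists, so the rows come back
```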
@@ -50,13 +54,9 @@ Reindexing is the process of updating the index to account for new data, keeping
Both LanceDB OSS and Cloud support reindexing, but the process (at least for now) is different for each, depending on the type of index.
When a reindex job is triggered in the background, the entire dataset is reindexed, but in the interim, as new queries come in, LanceDB will combine results from the existing index with exhaustive kNN search on the new data. This ensures that you're still searching all of your data, but it comes at a performance cost: the more data you add without reindexing, the more noticeable the latency impact of the exhaustive search becomes.
In LanceDB OSS, re-indexing happens synchronously when you call either `create_index` or `optimize` on a table. In LanceDB Cloud, re-indexing happens asynchronously as you add and update data in your table.
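For example, a minimal OSS sketch (placeholder table name):
```python
import lancedb

db = lancedb.connect("./.lancedb")
table = db.open_table("my_table")
# Runs synchronously in LanceDB OSS: brings indices up to date with
# newly added data (and compacts small fragments).
table.optimize()
```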
### Vector reindex
By default, queries will search new data even if it has yet to be indexed. This is done using brute-force methods, such as kNN for vector search, and combined with the fast index search results. This is done to ensure that you're always searching over all your data, but it does come at a performance cost. Without reindexing, adding more data to a table will make queries slower and more expensive. This behavior can be disabled by setting the [fast_search](https://lancedb.github.io/lancedb/python/python/#lancedb.query.AsyncQuery.fast_search) parameter, which instructs the query to ignore un-indexed data.
* LanceDB Cloud supports incremental reindexing, where a background process will trigger a new index build for you automatically when new data is added to a dataset
* LanceDB Cloud/Enterprise supports [automatic incremental reindexing](https://docs.lancedb.com/core#vector-index) for vector, scalar, and FTS indices, where a background process will trigger a new index build for you automatically when new data is added or modified in a dataset
* LanceDB OSS requires you to manually trigger a reindex operation -- we are working on adding incremental reindexing to LanceDB OSS as well
### FTS reindex
FTS reindexing is supported in both LanceDB OSS and Cloud, but it must be rebuilt manually once a significant amount of new data needs to be reindexed. We [updated](https://github.com/lancedb/lancedb/pull/762) Tantivy's default heap size from 128MB to 1GB in LanceDB, making reindexing up to 10x faster than the default settings.
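A minimal sketch of a manual rebuild (placeholder table and column names; `replace=True` overwrites the existing index):
```python
import lancedb

db = lancedb.connect("./.lancedb")
table = db.open_table("my_table")
# Rebuild the full-text index over the `text` column so newly added rows
# become searchable.
table.create_fts_index("text", replace=True)
```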

View File

@@ -0,0 +1,97 @@
# VoyageAI Embeddings: Multimodal
VoyageAI embeddings can also be used to embed both text and image data. Only some of the models support image data; you can check the list
under [https://docs.voyageai.com/docs/multimodal-embeddings](https://docs.voyageai.com/docs/multimodal-embeddings)
Supported parameters (to be passed to the `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|-------------------------|-------------------------------------------|
| `name` | `str` | `"voyage-multimodal-3"` | The model ID of the VoyageAI model to use |
Usage Example:
```python
import base64
import os
from io import BytesIO
import requests
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
import pandas as pd
os.environ['VOYAGE_API_KEY'] = 'YOUR_VOYAGE_API_KEY'
db = lancedb.connect(".lancedb")
func = get_registry().get("voyageai").create(name="voyage-multimodal-3")
def image_to_base64(image_bytes: bytes):
    buffered = BytesIO(image_bytes)
    img_str = base64.b64encode(buffered.getvalue())
    return img_str.decode("utf-8")

class Images(LanceModel):
    label: str
    image_uri: str = func.SourceField()  # image uri as the source
    image_bytes: str = func.SourceField()  # image bytes, base64 encoded, as the source
    vector: Vector(func.ndims()) = func.VectorField()  # vector column
    vec_from_bytes: Vector(func.ndims()) = func.VectorField()  # another vector column

if "images" in db.table_names():
    db.drop_table("images")
table = db.create_table("images", schema=Images)
labels = ["cat", "cat", "dog", "dog", "horse", "horse"]
uris = [
"http://farm1.staticflickr.com/53/167798175_7c7845bbbd_z.jpg",
"http://farm1.staticflickr.com/134/332220238_da527d8140_z.jpg",
"http://farm9.staticflickr.com/8387/8602747737_2e5c2a45d4_z.jpg",
"http://farm5.staticflickr.com/4092/5017326486_1f46057f5f_z.jpg",
"http://farm9.staticflickr.com/8216/8434969557_d37882c42d_z.jpg",
"http://farm6.staticflickr.com/5142/5835678453_4f3a4edb45_z.jpg",
]
# get each uri as bytes
images_bytes = [image_to_base64(requests.get(uri).content) for uri in uris]
table.add(
pd.DataFrame({"label": labels, "image_uri": uris, "image_bytes": images_bytes})
)
```
Now we can run a text search against both the default vector column and the custom vector column:
```python
# text search using the default vector column
actual = table.search("man's best friend").limit(1).to_pydantic(Images)[0]
print(actual.label)  # prints "dog"
# text search using the custom vector column
frombytes = (
    table.search("man's best friend", vector_column_name="vec_from_bytes")
    .limit(1)
    .to_pydantic(Images)[0]
)
print(frombytes.label)
```
Because we're using a multimodal embedding function, we can also search using images:
```python
from PIL import Image  # needed to open the query image

query_image_uri = "http://farm1.staticflickr.com/200/467715466_ed4a31801f_z.jpg"
image_bytes = requests.get(query_image_uri).content
query_image = Image.open(BytesIO(image_bytes))
# image search using the default vector column
actual = table.search(query_image).limit(1).to_pydantic(Images)[0]
print(actual.label == "dog")
# image search using a custom vector column
other = (
    table.search(query_image, vector_column_name="vec_from_bytes")
    .limit(1)
    .to_pydantic(Images)[0]
)
print(other.label)
```

View File

@@ -397,117 +397,6 @@ For **read-only access**, LanceDB will need a policy such as:
}
```
#### DynamoDB Commit Store for concurrent writes
By default, S3 does not support concurrent writes. Having two or more processes
writing to the same table at the same time can lead to data corruption. This is
because S3, unlike other object stores, does not have any atomic put or copy
operation.
To enable concurrent writes, you can configure LanceDB to use a DynamoDB table
as a commit store. This table will be used to coordinate writes between
different processes. To enable this feature, you must modify your connection
URI to use the `s3+ddb` scheme and add a query parameter `ddbTableName` with the
name of the table to use.
=== "Python"
=== "Sync API"
```python
import lancedb
db = lancedb.connect(
"s3+ddb://bucket/path?ddbTableName=my-dynamodb-table",
)
```
=== "Async API"
```python
import lancedb
async_db = await lancedb.connect_async(
"s3+ddb://bucket/path?ddbTableName=my-dynamodb-table",
)
```
=== "JavaScript"
```javascript
const lancedb = require("lancedb");
const db = await lancedb.connect(
"s3+ddb://bucket/path?ddbTableName=my-dynamodb-table",
);
```
The DynamoDB table must be created with the following schema:
- Hash key: `base_uri` (string)
- Range key: `version` (number)
You can create this programmatically with:
=== "Python"
<!-- skip-test -->
```python
import boto3
dynamodb = boto3.client("dynamodb")
table = dynamodb.create_table(
TableName=table_name,
KeySchema=[
{"AttributeName": "base_uri", "KeyType": "HASH"},
{"AttributeName": "version", "KeyType": "RANGE"},
],
AttributeDefinitions=[
{"AttributeName": "base_uri", "AttributeType": "S"},
{"AttributeName": "version", "AttributeType": "N"},
],
ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
)
```
=== "JavaScript"
<!-- skip-test -->
```javascript
import {
CreateTableCommand,
DynamoDBClient,
} from "@aws-sdk/client-dynamodb";
const dynamodb = new DynamoDBClient({
region: CONFIG.awsRegion,
credentials: {
accessKeyId: CONFIG.awsAccessKeyId,
secretAccessKey: CONFIG.awsSecretAccessKey,
},
endpoint: CONFIG.awsEndpoint,
});
const command = new CreateTableCommand({
TableName: table_name,
AttributeDefinitions: [
{
AttributeName: "base_uri",
AttributeType: "S",
},
{
AttributeName: "version",
AttributeType: "N",
},
],
KeySchema: [
{ AttributeName: "base_uri", KeyType: "HASH" },
{ AttributeName: "version", KeyType: "RANGE" },
],
ProvisionedThroughput: {
ReadCapacityUnits: 1,
WriteCapacityUnits: 1,
},
});
await client.send(command);
```
#### S3-compatible stores

View File

@@ -25,6 +25,51 @@ the underlying connection has been closed.
## Methods
### cloneTable()
```ts
abstract cloneTable(
targetTableName,
sourceUri,
options?): Promise<Table>
```
Clone a table from a source table.
A shallow clone creates a new table that shares the underlying data files
with the source table but has its own independent manifest. This allows
both the source and cloned tables to evolve independently while initially
sharing the same data, deletion, and index files.
#### Parameters
* **targetTableName**: `string`
The name of the target table to create.
* **sourceUri**: `string`
The URI of the source table to clone from.
* **options?**
Clone options.
* **options.isShallow?**: `boolean`
Whether to perform a shallow clone (defaults to true).
* **options.sourceTag?**: `string`
The tag of the source table to clone.
* **options.sourceVersion?**: `number`
The version of the source table to clone.
* **options.targetNamespace?**: `string`[]
The namespace for the target table (defaults to root namespace).
#### Returns
`Promise`&lt;[`Table`](Table.md)&gt;
***
### close()
```ts
@@ -45,6 +90,8 @@ Any attempt to use the connection after it is closed will result in an error.
### createEmptyTable()
#### createEmptyTable(name, schema, options)
```ts
abstract createEmptyTable(
name,
@@ -54,7 +101,7 @@ abstract createEmptyTable(
Creates a new empty Table
#### Parameters
##### Parameters
* **name**: `string`
The name of the table.
@@ -63,8 +110,39 @@ Creates a new empty Table
The schema of the table
* **options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt;
Additional options (backwards compatibility)
#### Returns
##### Returns
`Promise`&lt;[`Table`](Table.md)&gt;
#### createEmptyTable(name, schema, namespace, options)
```ts
abstract createEmptyTable(
name,
schema,
namespace?,
options?): Promise<Table>
```
Creates a new empty Table
##### Parameters
* **name**: `string`
The name of the table.
* **schema**: [`SchemaLike`](../type-aliases/SchemaLike.md)
The schema of the table
* **namespace?**: `string`[]
The namespace to create the table in (defaults to root namespace)
* **options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt;
Additional options
##### Returns
`Promise`&lt;[`Table`](Table.md)&gt;
@@ -72,10 +150,10 @@ Creates a new empty Table
### createTable()
#### createTable(options)
#### createTable(options, namespace)
```ts
abstract createTable(options): Promise<Table>
abstract createTable(options, namespace?): Promise<Table>
```
Creates a new Table and initializes it with new data.
@@ -85,6 +163,9 @@ Creates a new Table and initialize it with new data.
* **options**: `object` & `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt;
The options object.
* **namespace?**: `string`[]
The namespace to create the table in (defaults to root namespace)
##### Returns
`Promise`&lt;[`Table`](Table.md)&gt;
@@ -110,6 +191,38 @@ Creates a new Table and initialize it with new data.
to be inserted into the table
* **options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt;
Additional options (backwards compatibility)
##### Returns
`Promise`&lt;[`Table`](Table.md)&gt;
#### createTable(name, data, namespace, options)
```ts
abstract createTable(
name,
data,
namespace?,
options?): Promise<Table>
```
Creates a new Table and initializes it with new data.
##### Parameters
* **name**: `string`
The name of the table.
* **data**: [`TableLike`](../type-aliases/TableLike.md) \| `Record`&lt;`string`, `unknown`&gt;[]
Non-empty Array of Records
to be inserted into the table
* **namespace?**: `string`[]
The namespace to create the table in (defaults to root namespace)
* **options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt;
Additional options
##### Returns
@@ -134,11 +247,16 @@ Return a brief description of the connection
### dropAllTables()
```ts
abstract dropAllTables(): Promise<void>
abstract dropAllTables(namespace?): Promise<void>
```
Drop all tables in the database.
#### Parameters
* **namespace?**: `string`[]
The namespace to drop tables from (defaults to root namespace).
#### Returns
`Promise`&lt;`void`&gt;
@@ -148,7 +266,7 @@ Drop all tables in the database.
### dropTable()
```ts
abstract dropTable(name): Promise<void>
abstract dropTable(name, namespace?): Promise<void>
```
Drop an existing table.
@@ -158,6 +276,9 @@ Drop an existing table.
* **name**: `string`
The name of the table to drop.
* **namespace?**: `string`[]
The namespace of the table (defaults to root namespace).
#### Returns
`Promise`&lt;`void`&gt;
@@ -181,7 +302,10 @@ Return true if the connection has not been closed
### openTable()
```ts
abstract openTable(name, options?): Promise<Table>
abstract openTable(
name,
namespace?,
options?): Promise<Table>
```
Open a table in the database.
@@ -191,7 +315,11 @@ Open a table in the database.
* **name**: `string`
The name of the table
* **namespace?**: `string`[]
The namespace of the table (defaults to root namespace)
* **options?**: `Partial`&lt;[`OpenTableOptions`](../interfaces/OpenTableOptions.md)&gt;
Additional options
#### Returns
@@ -201,6 +329,8 @@ Open a table in the database.
### tableNames()
#### tableNames(options)
```ts
abstract tableNames(options?): Promise<string[]>
```
@@ -209,12 +339,35 @@ List all the table names in this database.
Tables will be returned in lexicographical order.
#### Parameters
##### Parameters
* **options?**: `Partial`&lt;[`TableNamesOptions`](../interfaces/TableNamesOptions.md)&gt;
options to control the
paging / start point (backwards compatibility)
##### Returns
`Promise`&lt;`string`[]&gt;
#### tableNames(namespace, options)
```ts
abstract tableNames(namespace?, options?): Promise<string[]>
```
List all the table names in this database.
Tables will be returned in lexicographical order.
##### Parameters
* **namespace?**: `string`[]
The namespace to list tables from (defaults to root namespace)
* **options?**: `Partial`&lt;[`TableNamesOptions`](../interfaces/TableNamesOptions.md)&gt;
options to control the
paging / start point
#### Returns
##### Returns
`Promise`&lt;`string`[]&gt;
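##### Example
A sketch of the namespace-aware variants introduced above; the `["prod"]` namespace is an illustrative assumption:
```ts
const names = await db.tableNames(["prod"]);
const tbl = await db.openTable("my_table", ["prod"]);
await db.dropTable("old_table", ["prod"]);
```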

View File

@@ -0,0 +1,85 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / HeaderProvider
# Class: `abstract` HeaderProvider
Abstract base class for providing custom headers for each request.
Users can implement this interface to provide dynamic headers for various purposes
such as authentication (OAuth tokens, API keys), request tracking (correlation IDs),
custom metadata, or any other header-based requirements. The provider is called
before each request to ensure fresh header values are always used.
## Examples
Simple JWT token provider:
```typescript
class JWTProvider extends HeaderProvider {
constructor(private token: string) {
super();
}
getHeaders(): Record<string, string> {
return { authorization: `Bearer ${this.token}` };
}
}
```
Provider with request tracking:
```typescript
class RequestTrackingProvider extends HeaderProvider {
constructor(private sessionId: string) {
super();
}
getHeaders(): Record<string, string> {
return {
"X-Session-Id": this.sessionId,
"X-Request-Id": `req-${Date.now()}`
};
}
}
```
## Extended by
- [`StaticHeaderProvider`](StaticHeaderProvider.md)
- [`OAuthHeaderProvider`](OAuthHeaderProvider.md)
## Constructors
### new HeaderProvider()
```ts
new HeaderProvider(): HeaderProvider
```
#### Returns
[`HeaderProvider`](HeaderProvider.md)
## Methods
### getHeaders()
```ts
abstract getHeaders(): Record<string, string>
```
Get the latest headers to be added to requests.
This method is called before each request to the remote LanceDB server.
Implementations should return headers that will be merged with existing headers.
#### Returns
`Record`&lt;`string`, `string`&gt;
Dictionary of header names to values to add to the request.
#### Throws
If unable to fetch headers, the exception will be propagated and the request will fail.

View File

@@ -194,6 +194,37 @@ currently is also a memory intensive operation.
***
### ivfRq()
```ts
static ivfRq(options?): Index
```
Create an IvfRq index
IVF-RQ compresses vectors using RabitQ quantization
and organizes them into IVF partitions.
In RabitQ quantization, each dimension is quantized into a small number of bits.
The `num_bits` parameter controls this compression, providing a tradeoff
between index size (and thus search speed) and index accuracy.
The partitioning process is called IVF and the `num_partitions` parameter controls how
many groups to create.
Note that training an IVF-RQ index on a large dataset is a slow operation and
currently is also a memory-intensive operation.
#### Parameters
* **options?**: `Partial`&lt;[`IvfRqOptions`](../interfaces/IvfRqOptions.md)&gt;
#### Returns
[`Index`](Index.md)
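#### Example
A sketch of creating an IVF-RQ index, assuming the same `createIndex` pattern used for the other vector index types; the parameter values are illustrative:
```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("./.lancedb");
const table = await db.openTable("my_table");
await table.createIndex("vector", {
  config: lancedb.Index.ivfRq({ numPartitions: 256, numBits: 1 }),
});
```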
***
### labelList()
```ts

View File

@@ -52,6 +52,30 @@ the merge result
***
### useIndex()
```ts
useIndex(useIndex): MergeInsertBuilder
```
Controls whether to use indexes for the merge operation.
When set to `true` (the default), the operation will use an index if available
on the join key for improved performance. When set to `false`, it forces a full
table scan even if an index exists. This can be useful for benchmarking or when
the query optimizer chooses a suboptimal path.
#### Parameters
* **useIndex**: `boolean`
Whether to use indices for the merge operation. Defaults to `true`.
#### Returns
[`MergeInsertBuilder`](MergeInsertBuilder.md)
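#### Example
A sketch that forces a full table scan during the merge, assuming a table keyed on an `id` column:
```ts
await table
  .mergeInsert("id")
  .useIndex(false) // bypass the index, e.g. for benchmarking
  .whenMatchedUpdateAll()
  .whenNotMatchedInsertAll()
  .execute([{ id: 1, text: "updated" }]);
```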
***
### whenMatchedUpdateAll()
```ts

View File

@@ -0,0 +1,29 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / NativeJsHeaderProvider
# Class: NativeJsHeaderProvider
JavaScript HeaderProvider implementation that wraps a JavaScript callback.
This is the only native header provider: all header provider implementations
should provide a JavaScript function that returns headers.
## Constructors
### new NativeJsHeaderProvider()
```ts
new NativeJsHeaderProvider(getHeadersCallback): NativeJsHeaderProvider
```
Create a new NativeJsHeaderProvider from a JavaScript callback
#### Parameters
* **getHeadersCallback**
#### Returns
[`NativeJsHeaderProvider`](NativeJsHeaderProvider.md)
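#### Example
A sketch of wrapping a plain callback; the environment variable used for the token is an illustrative placeholder:
```ts
const provider = new NativeJsHeaderProvider(() => ({
  authorization: `Bearer ${process.env.MY_TOKEN ?? ""}`,
}));
```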

View File

@@ -0,0 +1,108 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / OAuthHeaderProvider
# Class: OAuthHeaderProvider
Example implementation: OAuth token provider with automatic refresh.
This is an example implementation showing how to manage OAuth tokens
with automatic refresh when they expire.
## Example
```typescript
async function fetchToken(): Promise<TokenResponse> {
const response = await fetch("https://oauth.example.com/token", {
method: "POST",
body: JSON.stringify({
grant_type: "client_credentials",
client_id: "your-client-id",
client_secret: "your-client-secret"
}),
headers: { "Content-Type": "application/json" }
});
const data = await response.json();
return {
accessToken: data.access_token,
expiresIn: data.expires_in
};
}
const provider = new OAuthHeaderProvider(fetchToken);
await provider.refreshToken(); // fetch the initial token
const headers = provider.getHeaders();
// Returns: {"authorization": "Bearer <your-token>"}
```
## Extends
- [`HeaderProvider`](HeaderProvider.md)
## Constructors
### new OAuthHeaderProvider()
```ts
new OAuthHeaderProvider(tokenFetcher, refreshBufferSeconds): OAuthHeaderProvider
```
Initialize the OAuth provider.
#### Parameters
* **tokenFetcher**
Function to fetch new tokens. Should return an object with `accessToken` and optionally `expiresIn`.
* **refreshBufferSeconds**: `number` = `300`
Seconds before expiry to refresh token. Default 300 (5 minutes).
#### Returns
[`OAuthHeaderProvider`](OAuthHeaderProvider.md)
#### Overrides
[`HeaderProvider`](HeaderProvider.md).[`constructor`](HeaderProvider.md#constructors)
## Methods
### getHeaders()
```ts
getHeaders(): Record<string, string>
```
Get OAuth headers, refreshing token if needed.
Note: this method is synchronous for now, as the Rust implementation expects a
synchronous callback. Call `refreshToken()` beforehand to ensure a token is available.
#### Returns
`Record`&lt;`string`, `string`&gt;
Headers with Bearer token authorization.
#### Throws
If unable to fetch or refresh token.
#### Overrides
[`HeaderProvider`](HeaderProvider.md).[`getHeaders`](HeaderProvider.md#getheaders)
***
### refreshToken()
```ts
refreshToken(): Promise<void>
```
Manually refresh the token.
Call this before using getHeaders() to ensure token is available.
#### Returns
`Promise`&lt;`void`&gt;

View File

@@ -0,0 +1,250 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / PermutationBuilder
# Class: PermutationBuilder
A PermutationBuilder for creating data permutations with splits, shuffling, and filtering.
This class provides a TypeScript wrapper around the native Rust PermutationBuilder,
offering methods to configure data splits, shuffling, and filtering before executing
the permutation to create a new table.
## Methods
### execute()
```ts
execute(): Promise<Table>
```
Execute the permutation and create the destination table.
#### Returns
`Promise`&lt;[`Table`](Table.md)&gt;
A Promise that resolves to the new Table instance
#### Example
```ts
const permutationTable = await builder.execute();
console.log(`Created table: ${permutationTable.name}`);
```
***
### filter()
```ts
filter(filter): PermutationBuilder
```
Configure filtering for the permutation.
#### Parameters
* **filter**: `string`
SQL filter expression
#### Returns
[`PermutationBuilder`](PermutationBuilder.md)
A new PermutationBuilder instance
#### Example
```ts
builder.filter("age > 18 AND status = 'active'");
```
***
### persist()
```ts
persist(connection, tableName): PermutationBuilder
```
Configure the permutation to be persisted.
#### Parameters
* **connection**: [`Connection`](Connection.md)
The connection to persist the permutation to
* **tableName**: `string`
The name of the table to create
#### Returns
[`PermutationBuilder`](PermutationBuilder.md)
A new PermutationBuilder instance
#### Example
```ts
builder.persist(connection, "permutation_table");
```
***
### shuffle()
```ts
shuffle(options): PermutationBuilder
```
Configure shuffling for the permutation.
#### Parameters
* **options**: [`ShuffleOptions`](../interfaces/ShuffleOptions.md)
Configuration for shuffling
#### Returns
[`PermutationBuilder`](PermutationBuilder.md)
A new PermutationBuilder instance
#### Example
```ts
// Basic shuffle
builder.shuffle({ seed: 42 });
// Shuffle with clump size
builder.shuffle({ seed: 42, clumpSize: 10 });
```
***
### splitCalculated()
```ts
splitCalculated(options): PermutationBuilder
```
Configure calculated splits for the permutation.
#### Parameters
* **options**: [`SplitCalculatedOptions`](../interfaces/SplitCalculatedOptions.md)
Configuration for calculated splitting
#### Returns
[`PermutationBuilder`](PermutationBuilder.md)
A new PermutationBuilder instance
#### Example
```ts
builder.splitCalculated({ calculation: "user_id % 3" });
```
***
### splitHash()
```ts
splitHash(options): PermutationBuilder
```
Configure hash-based splits for the permutation.
#### Parameters
* **options**: [`SplitHashOptions`](../interfaces/SplitHashOptions.md)
Configuration for hash-based splitting
#### Returns
[`PermutationBuilder`](PermutationBuilder.md)
A new PermutationBuilder instance
#### Example
```ts
builder.splitHash({
columns: ["user_id"],
splitWeights: [70, 30],
discardWeight: 0
});
```
***
### splitRandom()
```ts
splitRandom(options): PermutationBuilder
```
Configure random splits for the permutation.
#### Parameters
* **options**: [`SplitRandomOptions`](../interfaces/SplitRandomOptions.md)
Configuration for random splitting
#### Returns
[`PermutationBuilder`](PermutationBuilder.md)
A new PermutationBuilder instance
#### Example
```ts
// Split by ratios
builder.splitRandom({ ratios: [0.7, 0.3], seed: 42 });
// Split by counts
builder.splitRandom({ counts: [1000, 500], seed: 42 });
// Split with fixed size
builder.splitRandom({ fixed: 100, seed: 42 });
```
***
### splitSequential()
```ts
splitSequential(options): PermutationBuilder
```
Configure sequential splits for the permutation.
#### Parameters
* **options**: [`SplitSequentialOptions`](../interfaces/SplitSequentialOptions.md)
Configuration for sequential splitting
#### Returns
[`PermutationBuilder`](PermutationBuilder.md)
A new PermutationBuilder instance
#### Example
```ts
// Split by ratios
builder.splitSequential({ ratios: [0.8, 0.2] });
// Split by counts
builder.splitSequential({ counts: [800, 200] });
// Split with fixed size
builder.splitSequential({ fixed: 1000 });
```

View File

@@ -14,7 +14,7 @@ A builder for LanceDB queries.
## Extends
- [`QueryBase`](QueryBase.md)&lt;`NativeQuery`&gt;
- `StandardQueryBase`&lt;`NativeQuery`&gt;
## Properties
@@ -26,7 +26,7 @@ protected inner: Query | Promise<Query>;
#### Inherited from
[`QueryBase`](QueryBase.md).[`inner`](QueryBase.md#inner)
`StandardQueryBase.inner`
## Methods
@@ -73,14 +73,14 @@ AnalyzeExec verbose=true, metrics=[]
#### Inherited from
[`QueryBase`](QueryBase.md).[`analyzePlan`](QueryBase.md#analyzeplan)
`StandardQueryBase.analyzePlan`
***
### execute()
```ts
protected execute(options?): RecordBatchIterator
protected execute(options?): AsyncGenerator<RecordBatch<any>, void, unknown>
```
Execute the query and return the results as an
@@ -91,7 +91,7 @@ Execute the query and return the results as an
#### Returns
[`RecordBatchIterator`](RecordBatchIterator.md)
`AsyncGenerator`&lt;`RecordBatch`&lt;`any`&gt;, `void`, `unknown`&gt;
#### See
@@ -107,7 +107,7 @@ single query)
#### Inherited from
[`QueryBase`](QueryBase.md).[`execute`](QueryBase.md#execute)
`StandardQueryBase.execute`
***
@@ -143,7 +143,7 @@ const plan = await table.query().nearestTo([0.5, 0.2]).explainPlan();
#### Inherited from
[`QueryBase`](QueryBase.md).[`explainPlan`](QueryBase.md#explainplan)
`StandardQueryBase.explainPlan`
***
@@ -164,7 +164,7 @@ Use [Table#optimize](Table.md#optimize) to index all un-indexed data.
#### Inherited from
[`QueryBase`](QueryBase.md).[`fastSearch`](QueryBase.md#fastsearch)
`StandardQueryBase.fastSearch`
***
@@ -194,7 +194,7 @@ Use `where` instead
#### Inherited from
[`QueryBase`](QueryBase.md).[`filter`](QueryBase.md#filter)
`StandardQueryBase.filter`
***
@@ -216,7 +216,7 @@ fullTextSearch(query, options?): this
#### Inherited from
[`QueryBase`](QueryBase.md).[`fullTextSearch`](QueryBase.md#fulltextsearch)
`StandardQueryBase.fullTextSearch`
***
@@ -241,7 +241,7 @@ called then every valid row from the table will be returned.
#### Inherited from
[`QueryBase`](QueryBase.md).[`limit`](QueryBase.md#limit)
`StandardQueryBase.limit`
***
@@ -325,6 +325,10 @@ nearestToText(query, columns?): Query
offset(offset): this
```
Set the number of rows to skip before returning results.
This is useful for pagination.
#### Parameters
* **offset**: `number`
@@ -335,7 +339,30 @@ offset(offset): this
#### Inherited from
[`QueryBase`](QueryBase.md).[`offset`](QueryBase.md#offset)
`StandardQueryBase.offset`
***
### outputSchema()
```ts
outputSchema(): Promise<Schema<any>>
```
Returns the schema of the output that will be returned by this query.
This can be used to inspect the types and names of the columns that will be
returned by the query before executing it.
#### Returns
`Promise`&lt;`Schema`&lt;`any`&gt;&gt;
An Arrow Schema describing the output columns.
#### Inherited from
`StandardQueryBase.outputSchema`
***
@@ -388,7 +415,7 @@ object insertion order is easy to get wrong and `Map` is more foolproof.
#### Inherited from
[`QueryBase`](QueryBase.md).[`select`](QueryBase.md#select)
`StandardQueryBase.select`
***
@@ -410,7 +437,7 @@ Collect the results as an array of objects.
#### Inherited from
[`QueryBase`](QueryBase.md).[`toArray`](QueryBase.md#toarray)
`StandardQueryBase.toArray`
***
@@ -436,7 +463,7 @@ ArrowTable.
#### Inherited from
[`QueryBase`](QueryBase.md).[`toArrow`](QueryBase.md#toarrow)
`StandardQueryBase.toArrow`
***
@@ -471,7 +498,7 @@ on the filter column(s).
#### Inherited from
[`QueryBase`](QueryBase.md).[`where`](QueryBase.md#where)
`StandardQueryBase.where`
***
@@ -493,4 +520,4 @@ order to perform hybrid search.
#### Inherited from
[`QueryBase`](QueryBase.md).[`withRowId`](QueryBase.md#withrowid)
`StandardQueryBase.withRowId`

View File

@@ -15,12 +15,11 @@ Common methods supported by all query types
## Extended by
- [`Query`](Query.md)
- [`VectorQuery`](VectorQuery.md)
- [`TakeQuery`](TakeQuery.md)
## Type Parameters
**NativeQueryType** *extends* `NativeQuery` \| `NativeVectorQuery`
**NativeQueryType** *extends* `NativeQuery` \| `NativeVectorQuery` \| `NativeTakeQuery`
## Implements
@@ -82,7 +81,7 @@ AnalyzeExec verbose=true, metrics=[]
### execute()
```ts
protected execute(options?): RecordBatchIterator
protected execute(options?): AsyncGenerator<RecordBatch<any>, void, unknown>
```
Execute the query and return the results as an
@@ -93,7 +92,7 @@ Execute the query and return the results as an
#### Returns
[`RecordBatchIterator`](RecordBatchIterator.md)
`AsyncGenerator`&lt;`RecordBatch`&lt;`any`&gt;, `void`, `unknown`&gt;
#### See
@@ -141,101 +140,22 @@ const plan = await table.query().nearestTo([0.5, 0.2]).explainPlan();
***
### fastSearch()
### outputSchema()
```ts
fastSearch(): this
outputSchema(): Promise<Schema<any>>
```
Skip searching un-indexed data. This can make search faster, but will miss
any data that is not yet indexed.
Returns the schema of the output that will be returned by this query.
Use [Table#optimize](Table.md#optimize) to index all un-indexed data.
This can be used to inspect the types and names of the columns that will be
returned by the query before executing it.
#### Returns
`this`
`Promise`&lt;`Schema`&lt;`any`&gt;&gt;
***
### ~~filter()~~
```ts
filter(predicate): this
```
A filter statement to be applied to this query.
#### Parameters
* **predicate**: `string`
#### Returns
`this`
#### See
where
#### Deprecated
Use `where` instead
***
### fullTextSearch()
```ts
fullTextSearch(query, options?): this
```
#### Parameters
* **query**: `string` \| [`FullTextQuery`](../interfaces/FullTextQuery.md)
* **options?**: `Partial`&lt;[`FullTextSearchOptions`](../interfaces/FullTextSearchOptions.md)&gt;
#### Returns
`this`
***
### limit()
```ts
limit(limit): this
```
Set the maximum number of results to return.
By default, a plain search has no limit. If this method is not
called then every valid row from the table will be returned.
#### Parameters
* **limit**: `number`
#### Returns
`this`
***
### offset()
```ts
offset(offset): this
```
#### Parameters
* **offset**: `number`
#### Returns
`this`
An Arrow Schema describing the output columns.
***
@@ -328,37 +248,6 @@ ArrowTable.
***
### where()
```ts
where(predicate): this
```
A filter statement to be applied to this query.
The filter should be supplied as an SQL query string. For example:
#### Parameters
* **predicate**: `string`
#### Returns
`this`
#### Example
```ts
x > 10
y > 0 AND y < 100
x > 5 OR y = 'test'
Filtering performance can often be improved by creating a scalar index
on the filter column(s).
```
***
### withRowId()
```ts

View File

@@ -1,43 +0,0 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / RecordBatchIterator
# Class: RecordBatchIterator
## Implements
- `AsyncIterator`&lt;`RecordBatch`&gt;
## Constructors
### new RecordBatchIterator()
```ts
new RecordBatchIterator(promise?): RecordBatchIterator
```
#### Parameters
* **promise?**: `Promise`&lt;`RecordBatchIterator`&gt;
#### Returns
[`RecordBatchIterator`](RecordBatchIterator.md)
## Methods
### next()
```ts
next(): Promise<IteratorResult<RecordBatch<any>, any>>
```
#### Returns
`Promise`&lt;`IteratorResult`&lt;`RecordBatch`&lt;`any`&gt;, `any`&gt;&gt;
#### Implementation of
`AsyncIterator.next`

View File

@@ -9,7 +9,8 @@
A session for managing caches and object stores across LanceDB operations.
Sessions allow you to configure cache sizes for index and metadata caches,
which can significantly impact performance for large datasets.
which can significantly impact memory use and performance. They can
also be re-used across multiple connections to share the same cache state.
## Constructors
@@ -24,8 +25,11 @@ Create a new session with custom cache sizes.
# Parameters
- `index_cache_size_bytes`: The size of the index cache in bytes.
Index data is stored in memory in this cache to speed up queries.
Defaults to 6GB if not specified.
- `metadata_cache_size_bytes`: The size of the metadata cache in bytes.
The metadata cache stores file metadata and schema information in memory.
This cache improves scan and write performance.
Defaults to 1GB if not specified.
#### Parameters

View File

@@ -0,0 +1,70 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / StaticHeaderProvider
# Class: StaticHeaderProvider
Example implementation: A simple header provider that returns static headers.
This is an example implementation showing how to create a HeaderProvider
for cases where headers don't change during the session.
## Example
```typescript
const provider = new StaticHeaderProvider({
authorization: "Bearer my-token",
"X-Custom-Header": "custom-value"
});
const headers = provider.getHeaders();
// Returns: {authorization: 'Bearer my-token', 'X-Custom-Header': 'custom-value'}
```
## Extends
- [`HeaderProvider`](HeaderProvider.md)
## Constructors
### new StaticHeaderProvider()
```ts
new StaticHeaderProvider(headers): StaticHeaderProvider
```
Initialize with static headers.
#### Parameters
* **headers**: `Record`&lt;`string`, `string`&gt;
Headers to return for every request.
#### Returns
[`StaticHeaderProvider`](StaticHeaderProvider.md)
#### Overrides
[`HeaderProvider`](HeaderProvider.md).[`constructor`](HeaderProvider.md#constructors)
## Methods
### getHeaders()
```ts
getHeaders(): Record<string, string>
```
Return the static headers.
#### Returns
`Record`&lt;`string`, `string`&gt;
Copy of the static headers.
#### Overrides
[`HeaderProvider`](HeaderProvider.md).[`getHeaders`](HeaderProvider.md#getheaders)

View File

@@ -674,6 +674,48 @@ console.log(tags); // { "v1": { version: 1, manifestSize: ... } }
***
### takeOffsets()
```ts
abstract takeOffsets(offsets): TakeQuery
```
Create a query that returns a subset of the rows in the table.
#### Parameters
* **offsets**: `number`[]
The offsets of the rows to return.
#### Returns
[`TakeQuery`](TakeQuery.md)
A builder that can be used to parameterize the query.
***
### takeRowIds()
```ts
abstract takeRowIds(rowIds): TakeQuery
```
Create a query that returns a subset of the rows in the table.
#### Parameters
* **rowIds**: `number`[]
The row ids of the rows to return.
#### Returns
[`TakeQuery`](TakeQuery.md)
A builder that can be used to parameterize the query.
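#### Example
A sketch of fetching specific rows and collecting them; the offsets, row ids, and column names are illustrative:
```ts
const byOffset = await table.takeOffsets([0, 2, 4]).toArray();
const byRowId = await table
  .takeRowIds([10, 42])
  .select(["id", "text"])
  .toArray();
```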
***
### toArrow()
```ts

View File

@@ -0,0 +1,288 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / TakeQuery
# Class: TakeQuery
A query that returns a subset of the rows in the table.
## Extends
- [`QueryBase`](QueryBase.md)&lt;`NativeTakeQuery`&gt;
## Properties
### inner
```ts
protected inner: TakeQuery | Promise<TakeQuery>;
```
#### Inherited from
[`QueryBase`](QueryBase.md).[`inner`](QueryBase.md#inner)
## Methods
### analyzePlan()
```ts
analyzePlan(): Promise<string>
```
Executes the query and returns the physical query plan annotated with runtime metrics.
This is useful for debugging and performance analysis, as it shows how the query was executed
and includes metrics such as elapsed time, rows processed, and I/O statistics.
#### Returns
`Promise`&lt;`string`&gt;
A query execution plan with runtime metrics for each step.
#### Example
```ts
import * as lancedb from "@lancedb/lancedb"
const db = await lancedb.connect("./.lancedb");
const table = await db.createTable("my_table", [
{ vector: [1.1, 0.9], id: "1" },
]);
const plan = await table.query().nearestTo([0.5, 0.2]).analyzePlan();
```
Example output (with runtime metrics inlined):
```
AnalyzeExec verbose=true, metrics=[]
ProjectionExec: expr=[id@3 as id, vector@0 as vector, _distance@2 as _distance], metrics=[output_rows=1, elapsed_compute=3.292µs]
Take: columns="vector, _rowid, _distance, (id)", metrics=[output_rows=1, elapsed_compute=66.001µs, batches_processed=1, bytes_read=8, iops=1, requests=1]
CoalesceBatchesExec: target_batch_size=1024, metrics=[output_rows=1, elapsed_compute=3.333µs]
GlobalLimitExec: skip=0, fetch=10, metrics=[output_rows=1, elapsed_compute=167ns]
FilterExec: _distance@2 IS NOT NULL, metrics=[output_rows=1, elapsed_compute=8.542µs]
SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], metrics=[output_rows=1, elapsed_compute=63.25µs, row_replacements=1]
KNNVectorDistance: metric=l2, metrics=[output_rows=1, elapsed_compute=114.333µs, output_batches=1]
LanceScan: uri=/path/to/data, projection=[vector], row_id=true, row_addr=false, ordered=false, metrics=[output_rows=1, elapsed_compute=103.626µs, bytes_read=549, iops=2, requests=2]
```
#### Inherited from
[`QueryBase`](QueryBase.md).[`analyzePlan`](QueryBase.md#analyzeplan)
***
### execute()
```ts
protected execute(options?): AsyncGenerator<RecordBatch<any>, void, unknown>
```
Execute the query and return the results as an AsyncIterator of RecordBatch.
#### Parameters
* **options?**: `Partial`&lt;[`QueryExecutionOptions`](../interfaces/QueryExecutionOptions.md)&gt;
#### Returns
`AsyncGenerator`&lt;`RecordBatch`&lt;`any`&gt;, `void`, `unknown`&gt;
#### See
- AsyncIterator
- RecordBatch
By default, LanceDB will use many threads to calculate results and, when
the result set is large, multiple batches will be processed at one time.
This readahead is limited, however, and backpressure will be applied if this
stream is consumed slowly (this constrains the maximum memory used by a
single query).
#### Inherited from
[`QueryBase`](QueryBase.md).[`execute`](QueryBase.md#execute)
***
### explainPlan()
```ts
explainPlan(verbose): Promise<string>
```
Generates an explanation of the query execution plan.
#### Parameters
* **verbose**: `boolean` = `false`
If true, provides a more detailed explanation. Defaults to false.
#### Returns
`Promise`&lt;`string`&gt;
A Promise that resolves to a string containing the query execution plan explanation.
#### Example
```ts
import * as lancedb from "@lancedb/lancedb"
const db = await lancedb.connect("./.lancedb");
const table = await db.createTable("my_table", [
{ vector: [1.1, 0.9], id: "1" },
]);
const plan = await table.query().nearestTo([0.5, 0.2]).explainPlan();
```
#### Inherited from
[`QueryBase`](QueryBase.md).[`explainPlan`](QueryBase.md#explainplan)
***
### outputSchema()
```ts
outputSchema(): Promise<Schema<any>>
```
Returns the schema of the output that will be returned by this query.
This can be used to inspect the types and names of the columns that will be
returned by the query before executing it.
#### Returns
`Promise`&lt;`Schema`&lt;`any`&gt;&gt;
An Arrow Schema describing the output columns.
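#### Example
A sketch of inspecting the output columns before execution; `table` is assumed to be an open table:
```ts
const schema = await table.takeOffsets([0, 1]).outputSchema();
console.log(schema.names);
```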
#### Inherited from
[`QueryBase`](QueryBase.md).[`outputSchema`](QueryBase.md#outputschema)
***
### select()
```ts
select(columns): this
```
Return only the specified columns.
By default, a query will return all columns from the table. However, this can have
a very significant impact on latency. LanceDB stores data in a columnar fashion. This
means we can finely tune our I/O to select exactly the columns we need.
As a best practice you should always limit queries to the columns that you need. If you
pass in an array of column names then only those columns will be returned.
You can also use this method to create new "dynamic" columns based on your existing columns.
For example, you may not care about "a" or "b" but instead simply want "a + b". This is often
seen in the SELECT clause of an SQL query (e.g. `SELECT a+b FROM my_table`).
To create dynamic columns you can pass in a Map<string, string>. A column will be returned
for each entry in the map. The key provides the name of the column. The value is
an SQL string used to specify how the column is calculated.
For example, an SQL query might state `SELECT a + b AS combined, c`. The equivalent
input to this method would be:
#### Parameters
* **columns**: `string` \| `string`[] \| `Record`&lt;`string`, `string`&gt; \| `Map`&lt;`string`, `string`&gt;
#### Returns
`this`
#### Example
```ts
new Map([["combined", "a + b"], ["c", "c"]])
```
Columns will always be returned in the order given, even if that order is different than
the order used when adding the data.
Note that you can pass in a `Record<string, string>` (e.g. an object literal). This method
uses `Object.entries` which should preserve the insertion order of the object. However,
object insertion order is easy to get wrong and `Map` is more foolproof.
#### Inherited from
[`QueryBase`](QueryBase.md).[`select`](QueryBase.md#select)
***
### toArray()
```ts
toArray(options?): Promise<any[]>
```
Collect the results as an array of objects.
#### Parameters
* **options?**: `Partial`&lt;[`QueryExecutionOptions`](../interfaces/QueryExecutionOptions.md)&gt;
#### Returns
`Promise`&lt;`any`[]&gt;
#### Inherited from
[`QueryBase`](QueryBase.md).[`toArray`](QueryBase.md#toarray)
***
### toArrow()
```ts
toArrow(options?): Promise<Table<any>>
```
Collect the results as an Arrow Table.
#### Parameters
* **options?**: `Partial`&lt;[`QueryExecutionOptions`](../interfaces/QueryExecutionOptions.md)&gt;
#### Returns
`Promise`&lt;`Table`&lt;`any`&gt;&gt;
#### See
ArrowTable.
#### Inherited from
[`QueryBase`](QueryBase.md).[`toArrow`](QueryBase.md#toarrow)
***
### withRowId()
```ts
withRowId(): this
```
Whether to return the row id in the results.
This column can be used to match results between different queries. For
example, to match results from a full text search and a vector search in
order to perform hybrid search.
#### Returns
`this`
#### Inherited from
[`QueryBase`](QueryBase.md).[`withRowId`](QueryBase.md#withrowid)

View File

@@ -16,7 +16,7 @@ This builder can be reused to execute the query many times.
## Extends
- [`QueryBase`](QueryBase.md)&lt;`NativeVectorQuery`&gt;
- `StandardQueryBase`&lt;`NativeVectorQuery`&gt;
## Properties
@@ -28,7 +28,7 @@ protected inner: VectorQuery | Promise<VectorQuery>;
#### Inherited from
[`QueryBase`](QueryBase.md).[`inner`](QueryBase.md#inner)
`StandardQueryBase.inner`
## Methods
@@ -91,7 +91,7 @@ AnalyzeExec verbose=true, metrics=[]
#### Inherited from
[`QueryBase`](QueryBase.md).[`analyzePlan`](QueryBase.md#analyzeplan)
`StandardQueryBase.analyzePlan`
***
@@ -221,7 +221,7 @@ also increase the latency of your query. The default value is 1.5*limit.
### execute()
```ts
protected execute(options?): RecordBatchIterator
protected execute(options?): AsyncGenerator<RecordBatch<any>, void, unknown>
```
Execute the query and return the results as an
@@ -232,7 +232,7 @@ Execute the query and return the results as an
#### Returns
[`RecordBatchIterator`](RecordBatchIterator.md)
`AsyncGenerator`&lt;`RecordBatch`&lt;`any`&gt;, `void`, `unknown`&gt;
#### See
@@ -248,7 +248,7 @@ single query)
#### Inherited from
[`QueryBase`](QueryBase.md).[`execute`](QueryBase.md#execute)
`StandardQueryBase.execute`
***
@@ -284,7 +284,7 @@ const plan = await table.query().nearestTo([0.5, 0.2]).explainPlan();
#### Inherited from
[`QueryBase`](QueryBase.md).[`explainPlan`](QueryBase.md#explainplan)
`StandardQueryBase.explainPlan`
***
@@ -305,7 +305,7 @@ Use [Table#optimize](Table.md#optimize) to index all un-indexed data.
#### Inherited from
[`QueryBase`](QueryBase.md).[`fastSearch`](QueryBase.md#fastsearch)
`StandardQueryBase.fastSearch`
***
@@ -335,7 +335,7 @@ Use `where` instead
#### Inherited from
[`QueryBase`](QueryBase.md).[`filter`](QueryBase.md#filter)
`StandardQueryBase.filter`
***
@@ -357,7 +357,7 @@ fullTextSearch(query, options?): this
#### Inherited from
[`QueryBase`](QueryBase.md).[`fullTextSearch`](QueryBase.md#fulltextsearch)
`StandardQueryBase.fullTextSearch`
***
@@ -382,7 +382,7 @@ called then every valid row from the table will be returned.
#### Inherited from
[`QueryBase`](QueryBase.md).[`limit`](QueryBase.md#limit)
`StandardQueryBase.limit`
***
@@ -480,6 +480,10 @@ the minimum and maximum to the same value.
offset(offset): this
```
Set the number of rows to skip before returning results.
This is useful for pagination.
#### Parameters
* **offset**: `number`
@@ -490,7 +494,30 @@ offset(offset): this
#### Inherited from
[`QueryBase`](QueryBase.md).[`offset`](QueryBase.md#offset)
`StandardQueryBase.offset`
***
### outputSchema()
```ts
outputSchema(): Promise<Schema<any>>
```
Returns the schema of the output that will be returned by this query.
This can be used to inspect the types and names of the columns that will be
returned by the query before executing it.
#### Returns
`Promise`&lt;`Schema`&lt;`any`&gt;&gt;
An Arrow Schema describing the output columns.
#### Inherited from
`StandardQueryBase.outputSchema`
***
@@ -637,7 +664,7 @@ object insertion order is easy to get wrong and `Map` is more foolproof.
#### Inherited from
[`QueryBase`](QueryBase.md).[`select`](QueryBase.md#select)
`StandardQueryBase.select`
***
@@ -659,7 +686,7 @@ Collect the results as an array of objects.
#### Inherited from
[`QueryBase`](QueryBase.md).[`toArray`](QueryBase.md#toarray)
`StandardQueryBase.toArray`
***
@@ -685,7 +712,7 @@ ArrowTable.
#### Inherited from
[`QueryBase`](QueryBase.md).[`toArrow`](QueryBase.md#toarrow)
`StandardQueryBase.toArrow`
***
@@ -720,7 +747,7 @@ on the filter column(s).
#### Inherited from
[`QueryBase`](QueryBase.md).[`where`](QueryBase.md#where)
`StandardQueryBase.where`
***
@@ -742,4 +769,4 @@ order to perform hybrid search.
#### Inherited from
[`QueryBase`](QueryBase.md).[`withRowId`](QueryBase.md#withrowid)
`StandardQueryBase.withRowId`

View File

@@ -0,0 +1,19 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / RecordBatchIterator
# Function: RecordBatchIterator()
```ts
function RecordBatchIterator(promisedInner): AsyncGenerator<RecordBatch<any>, void, unknown>
```
## Parameters
* **promisedInner**: `Promise`&lt;`RecordBatchIterator`&gt;
## Returns
`AsyncGenerator`&lt;`RecordBatch`&lt;`any`&gt;, `void`, `unknown`&gt;

View File

@@ -6,13 +6,14 @@
# Function: connect()
## connect(uri, options, session)
## connect(uri, options, session, headerProvider)
```ts
function connect(
uri,
options?,
session?): Promise<Connection>
session?,
headerProvider?): Promise<Connection>
```
Connect to a LanceDB instance at the given URI.
@@ -34,6 +35,8 @@ Accepted formats:
* **session?**: [`Session`](../classes/Session.md)
* **headerProvider?**: [`HeaderProvider`](../classes/HeaderProvider.md) \| () => `Record`&lt;`string`, `string`&gt; \| () => `Promise`&lt;`Record`&lt;`string`, `string`&gt;&gt;
### Returns
`Promise`&lt;[`Connection`](../classes/Connection.md)&gt;
@@ -55,6 +58,18 @@ const conn = await connect(
});
```
Using with a header provider for per-request authentication:
```ts
const provider = new StaticHeaderProvider({
"X-API-Key": "my-key"
});
const conn = await connect(
  "db://host:port",
  options,
  undefined, // no custom session
  provider
);
```
## connect(options)
```ts

View File

@@ -13,7 +13,7 @@ function makeArrowTable(
metadata?): ArrowTable
```
An enhanced version of the makeTable function from Apache Arrow
An enhanced version of the apache-arrow makeTable function
that supports nested fields and embeddings columns.
(typically you do not need to call this function. It will be called automatically

View File

@@ -0,0 +1,34 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / permutationBuilder
# Function: permutationBuilder()
```ts
function permutationBuilder(table): PermutationBuilder
```
Create a permutation builder for the given table.
## Parameters
* **table**: [`Table`](../classes/Table.md)
The source table to create a permutation from
## Returns
[`PermutationBuilder`](../classes/PermutationBuilder.md)
A PermutationBuilder instance
## Example
```ts
const builder = permutationBuilder(sourceTable)
  .splitRandom({ ratios: [0.8, 0.2], seed: 42 })
  .shuffle({ seed: 123 });
const trainingTable = await builder.execute();
```

View File

@@ -20,19 +20,24 @@
- [BooleanQuery](classes/BooleanQuery.md)
- [BoostQuery](classes/BoostQuery.md)
- [Connection](classes/Connection.md)
- [HeaderProvider](classes/HeaderProvider.md)
- [Index](classes/Index.md)
- [MakeArrowTableOptions](classes/MakeArrowTableOptions.md)
- [MatchQuery](classes/MatchQuery.md)
- [MergeInsertBuilder](classes/MergeInsertBuilder.md)
- [MultiMatchQuery](classes/MultiMatchQuery.md)
- [NativeJsHeaderProvider](classes/NativeJsHeaderProvider.md)
- [OAuthHeaderProvider](classes/OAuthHeaderProvider.md)
- [PermutationBuilder](classes/PermutationBuilder.md)
- [PhraseQuery](classes/PhraseQuery.md)
- [Query](classes/Query.md)
- [QueryBase](classes/QueryBase.md)
- [RecordBatchIterator](classes/RecordBatchIterator.md)
- [Session](classes/Session.md)
- [StaticHeaderProvider](classes/StaticHeaderProvider.md)
- [Table](classes/Table.md)
- [TagContents](classes/TagContents.md)
- [Tags](classes/Tags.md)
- [TakeQuery](classes/TakeQuery.md)
- [VectorColumnOptions](classes/VectorColumnOptions.md)
- [VectorQuery](classes/VectorQuery.md)
@@ -63,6 +68,7 @@
- [IndexStatistics](interfaces/IndexStatistics.md)
- [IvfFlatOptions](interfaces/IvfFlatOptions.md)
- [IvfPqOptions](interfaces/IvfPqOptions.md)
- [IvfRqOptions](interfaces/IvfRqOptions.md)
- [MergeResult](interfaces/MergeResult.md)
- [OpenTableOptions](interfaces/OpenTableOptions.md)
- [OptimizeOptions](interfaces/OptimizeOptions.md)
@@ -70,9 +76,16 @@
- [QueryExecutionOptions](interfaces/QueryExecutionOptions.md)
- [RemovalStats](interfaces/RemovalStats.md)
- [RetryConfig](interfaces/RetryConfig.md)
- [ShuffleOptions](interfaces/ShuffleOptions.md)
- [SplitCalculatedOptions](interfaces/SplitCalculatedOptions.md)
- [SplitHashOptions](interfaces/SplitHashOptions.md)
- [SplitRandomOptions](interfaces/SplitRandomOptions.md)
- [SplitSequentialOptions](interfaces/SplitSequentialOptions.md)
- [TableNamesOptions](interfaces/TableNamesOptions.md)
- [TableStatistics](interfaces/TableStatistics.md)
- [TimeoutConfig](interfaces/TimeoutConfig.md)
- [TlsConfig](interfaces/TlsConfig.md)
- [TokenResponse](interfaces/TokenResponse.md)
- [UpdateOptions](interfaces/UpdateOptions.md)
- [UpdateResult](interfaces/UpdateResult.md)
- [Version](interfaces/Version.md)
@@ -92,6 +105,8 @@
## Functions
- [RecordBatchIterator](functions/RecordBatchIterator.md)
- [connect](functions/connect.md)
- [makeArrowTable](functions/makeArrowTable.md)
- [packBits](functions/packBits.md)
- [permutationBuilder](functions/permutationBuilder.md)

View File

@@ -16,6 +16,14 @@ optional extraHeaders: Record<string, string>;
***
### idDelimiter?
```ts
optional idDelimiter: string;
```
***
### retryConfig?
```ts
@@ -32,6 +40,14 @@ optional timeoutConfig: TimeoutConfig;
***
### tlsConfig?
```ts
optional tlsConfig: TlsConfig;
```
***
### userAgent?
```ts

View File

@@ -26,6 +26,18 @@ will be used to determine the most useful kind of index to create.
***
### name?
```ts
optional name: string;
```
Optional custom name for the index.
If not provided, a default name will be generated based on the column name.
***
### replace?
```ts
@@ -42,8 +54,27 @@ The default is true
***
### train?
```ts
optional train: boolean;
```
Whether to train the index with existing data.
If true (default), the index will be trained with existing data in the table.
If false, the index will be created empty and populated as new data is added.
Note: This option is only supported for scalar indices. Vector indices always train.
***
### waitTimeoutSeconds?
```ts
optional waitTimeoutSeconds: number;
```
Timeout in seconds to wait for index creation to complete.
If not specified, the method will return immediately after starting the index creation.
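For example, a sketch of creating a scalar index without training and waiting for it to complete; the column name, index type, and timeout value are illustrative:
```ts
import * as lancedb from "@lancedb/lancedb";

await table.createIndex("category", {
  config: lancedb.Index.bitmap(),
  train: false, // create empty; populate as data is added (scalar indices only)
  waitTimeoutSeconds: 600,
});
```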

View File

@@ -0,0 +1,101 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / IvfRqOptions
# Interface: IvfRqOptions
## Properties
### distanceType?
```ts
optional distanceType: "l2" | "cosine" | "dot";
```
Distance type to use to build the index.
Default value is "l2".
This is used when training the index to calculate the IVF partitions
(vectors are grouped in partitions with similar vectors according to this
distance type) and during quantization.
The distance type used to train an index MUST match the distance type used
to search the index. Failure to do so will yield inaccurate results.
The following distance types are available:
"l2" - Euclidean distance.
"cosine" - Cosine distance.
"dot" - Dot product.
***
### maxIterations?
```ts
optional maxIterations: number;
```
Max iterations to train IVF kmeans.
When training an IVF index we use kmeans to calculate the partitions. This parameter
controls how many iterations of kmeans to run.
The default value is 50.
***
### numBits?
```ts
optional numBits: number;
```
Number of bits per dimension for residual quantization.
This value controls how much each residual component is compressed. The more
bits, the more accurate the index will be, but the slower the search. Typical values
are small integers; the default is 1 bit per dimension.
***
### numPartitions?
```ts
optional numPartitions: number;
```
The number of IVF partitions to create.
This value should generally scale with the number of rows in the dataset.
By default the number of partitions is the square root of the number of
rows.
If this value is too large then the first part of the search (picking the
right partition) will be slow. If this value is too small then the second
part of the search (searching within a partition) will be slow.
***
### sampleRate?
```ts
optional sampleRate: number;
```
The number of vectors, per partition, to sample when training IVF kmeans.
When an IVF index is trained, we need to calculate partitions. These are groups
of vectors that are similar to each other. To do this we use an algorithm called kmeans.
Running kmeans on a large dataset can be slow. To speed this up we run kmeans on a
random sample of the data. This parameter controls the size of the sample. The total
number of vectors used to train the index is `sample_rate * num_partitions`.
Increasing this value might improve the quality of the index but in most cases the
default should be sufficient.
The default value is 256.

View File

@@ -0,0 +1,23 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / ShuffleOptions
# Interface: ShuffleOptions
## Properties
### clumpSize?
```ts
optional clumpSize: number;
```
***
### seed?
```ts
optional seed: number;
```

View File

@@ -0,0 +1,23 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / SplitCalculatedOptions
# Interface: SplitCalculatedOptions
## Properties
### calculation
```ts
calculation: string;
```
***
### splitNames?
```ts
optional splitNames: string[];
```

View File

@@ -0,0 +1,39 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / SplitHashOptions
# Interface: SplitHashOptions
## Properties
### columns
```ts
columns: string[];
```
***
### discardWeight?
```ts
optional discardWeight: number;
```
***
### splitNames?
```ts
optional splitNames: string[];
```
***
### splitWeights
```ts
splitWeights: number[];
```

View File

@@ -0,0 +1,47 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / SplitRandomOptions
# Interface: SplitRandomOptions
## Properties
### counts?
```ts
optional counts: number[];
```
***
### fixed?
```ts
optional fixed: number;
```
***
### ratios?
```ts
optional ratios: number[];
```
***
### seed?
```ts
optional seed: number;
```
***
### splitNames?
```ts
optional splitNames: string[];
```

View File

@@ -0,0 +1,39 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / SplitSequentialOptions
# Interface: SplitSequentialOptions
## Properties
### counts?
```ts
optional counts: number[];
```
***
### fixed?
```ts
optional fixed: number;
```
***
### ratios?
```ts
optional ratios: number[];
```
***
### splitNames?
```ts
optional splitNames: string[];
```

View File

@@ -44,3 +44,17 @@ optional readTimeout: number;
The timeout for reading data from the server in seconds. Default is 300
seconds (5 minutes). This can also be set via the environment variable
`LANCE_CLIENT_READ_TIMEOUT`, as an integer number of seconds.
***
### timeout?
```ts
optional timeout: number;
```
The overall timeout for the entire request in seconds. This includes
connection, send, and read time. If the entire request doesn't complete
within this time, it will fail. Default is None (no overall timeout).
This can also be set via the environment variable `LANCE_CLIENT_TIMEOUT`,
as an integer number of seconds.
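For example, a sketch of capping each request at 30 seconds overall, assuming the config is passed through the connection's `clientConfig` option:
```ts
const conn = await lancedb.connect("db://my-db", {
  clientConfig: { timeoutConfig: { timeout: 30 } },
});
```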

View File

@@ -0,0 +1,49 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / TlsConfig
# Interface: TlsConfig
TLS/mTLS configuration for the remote HTTP client.
## Properties
### assertHostname?
```ts
optional assertHostname: boolean;
```
Whether to verify the hostname in the server's certificate.
***
### certFile?
```ts
optional certFile: string;
```
Path to the client certificate file (PEM format) for mTLS authentication.
***
### keyFile?
```ts
optional keyFile: string;
```
Path to the client private key file (PEM format) for mTLS authentication.
***
### sslCaCert?
```ts
optional sslCaCert: string;
```
Path to the CA certificate file (PEM format) for server verification.
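## Example
A sketch of configuring mTLS, assuming the config is passed through the connection's `clientConfig` option; the file paths are illustrative:
```ts
const conn = await lancedb.connect("db://my-db", {
  clientConfig: {
    tlsConfig: {
      sslCaCert: "/etc/ssl/ca.pem",
      certFile: "/etc/ssl/client.pem",
      keyFile: "/etc/ssl/client.key",
    },
  },
});
```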

View File

@@ -0,0 +1,25 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / TokenResponse
# Interface: TokenResponse
Token response from OAuth provider.
## Properties
### accessToken
```ts
accessToken: string;
```
***
### expiresIn?
```ts
optional expiresIn: number;
```

View File

@@ -15,7 +15,7 @@ publish = false
crate-type = ["cdylib"]
[dependencies]
lancedb = { path = "../../../rust/lancedb" }
lancedb = { path = "../../../rust/lancedb", default-features = false }
lance = { workspace = true }
arrow = { workspace = true, features = ["ffi"] }
arrow-schema.workspace = true
@@ -25,3 +25,6 @@ snafu.workspace = true
lazy_static.workspace = true
serde = { version = "^1" }
serde_json = { version = "1" }
[features]
default = ["lancedb/default"]

View File

@@ -51,8 +51,11 @@ pub enum Error {
DatasetAlreadyExists { uri: String, location: Location },
#[snafu(display("Table '{name}' already exists"))]
TableAlreadyExists { name: String },
#[snafu(display("Table '{name}' was not found"))]
TableNotFound { name: String },
#[snafu(display("Table '{name}' was not found: {source}"))]
TableNotFound {
name: String,
source: Box<dyn std::error::Error + Send + Sync>,
},
#[snafu(display("Invalid table name '{name}': {reason}"))]
InvalidTableName { name: String, reason: String },
#[snafu(display("Embedding function '{name}' was not found: {reason}, {location}"))]
@@ -191,7 +194,7 @@ impl From<lancedb::Error> for Error {
message,
location: std::panic::Location::caller().to_snafu_location(),
},
lancedb::Error::TableNotFound { name } => Self::TableNotFound { name },
lancedb::Error::TableNotFound { name, source } => Self::TableNotFound { name, source },
lancedb::Error::TableAlreadyExists { name } => Self::TableAlreadyExists { name },
lancedb::Error::EmbeddingFunctionNotFound { name, reason } => {
Self::EmbeddingFunctionNotFound {

View File

@@ -16,6 +16,7 @@ pub trait JNIEnvExt {
fn get_integers(&mut self, obj: &JObject) -> Result<Vec<i32>>;
/// Get strings from Java List<String> object.
#[allow(dead_code)]
fn get_strings(&mut self, obj: &JObject) -> Result<Vec<String>>;
/// Get strings from Java String[] object.

View File

@@ -6,6 +6,7 @@ use jni::JNIEnv;
use crate::Result;
#[allow(dead_code)]
pub trait FromJObject<T> {
fn extract(&self) -> Result<T>;
}
@@ -39,6 +40,7 @@ impl FromJObject<f64> for JObject<'_> {
}
}
#[allow(dead_code)]
pub trait FromJString {
fn extract(&self, env: &mut JNIEnv) -> Result<String>;
}
@@ -66,6 +68,7 @@ pub trait JMapExt {
fn get_f64(&self, env: &mut JNIEnv, key: &str) -> Result<Option<f64>>;
}
#[allow(dead_code)]
fn get_map_value<T>(env: &mut JNIEnv, map: &JMap, key: &str) -> Result<Option<T>>
where
for<'a> JObject<'a>: FromJObject<T>,

View File

@@ -8,7 +8,7 @@
<parent>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.21.2-final.0</version>
<version>0.22.3-beta.5</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@@ -8,7 +8,7 @@
<parent>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.21.2-final.0</version>
<version>0.22.3-beta.5</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@@ -6,7 +6,7 @@
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.21.2-final.0</version>
<version>0.22.3-beta.5</version>
<packaging>pom</packaging>
<name>${project.artifactId}</name>
<description>LanceDB Java SDK Parent POM</description>

13
nodejs/AGENTS.md Normal file
View File

@@ -0,0 +1,13 @@
These are the typescript bindings of LanceDB.
The core Rust library is in the `../rust/lancedb` directory, the rust binding
code is in the `src/` directory and the typescript bindings are in
the `lancedb/` directory.
Whenever you change the Rust code, you will need to recompile: `npm run build`.
Common commands:
* Build: `npm run build`
* Lint: `npm run lint`
* Fix lints: `npm run lint-fix`
* Test: `npm test`
* Run single test file: `npm test __test__/arrow.test.ts`

View File

@@ -1,13 +0,0 @@
These are the typescript bindings of LanceDB.
The core Rust library is in the `../rust/lancedb` directory, the rust binding
code is in the `src/` directory and the typescript bindings are in
the `lancedb/` directory.
Whenever you change the Rust code, you will need to recompile: `npm run build`.
Common commands:
* Build: `npm run build`
* Lint: `npm run lint`
* Fix lints: `npm run lint-fix`
* Test: `npm test`
* Run single test file: `npm test __test__/arrow.test.ts`

1
nodejs/CLAUDE.md Symbolic link
View File

@@ -0,0 +1 @@
AGENTS.md

View File

@@ -1,7 +1,7 @@
[package]
name = "lancedb-nodejs"
edition.workspace = true
version = "0.21.2"
version = "0.22.3-beta.5"
license.workspace = true
description.workspace = true
repository.workspace = true
@@ -18,7 +18,7 @@ arrow-array.workspace = true
arrow-schema.workspace = true
env_logger.workspace = true
futures.workspace = true
lancedb = { path = "../rust/lancedb" }
lancedb = { path = "../rust/lancedb", default-features = false }
napi = { version = "2.16.8", default-features = false, features = [
"napi9",
"async"
@@ -36,6 +36,6 @@ aws-lc-rs = "=1.13.0"
napi-build = "2.1"
[features]
default = ["remote"]
default = ["remote", "lancedb/default"]
fp16kernels = ["lancedb/fp16kernels"]
remote = ["lancedb/remote"]

View File

@@ -1,17 +1,5 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import {
Bool,
Field,
Int32,
List,
Schema,
Struct,
Uint8,
Utf8,
} from "apache-arrow";
import * as arrow15 from "apache-arrow-15";
import * as arrow16 from "apache-arrow-16";
import * as arrow17 from "apache-arrow-17";
@@ -25,11 +13,9 @@ import {
fromTableToBuffer,
makeArrowTable,
makeEmptyTable,
tableFromIPC,
} from "../lancedb/arrow";
import {
EmbeddingFunction,
FieldOptions,
FunctionOptions,
} from "../lancedb/embedding/embedding_function";
import { EmbeddingFunctionConfig } from "../lancedb/embedding/registry";
@@ -1008,5 +994,64 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
expect(result).toEqual(null);
});
});
describe("boolean null handling", function () {
it("should handle null values in nullable boolean fields", () => {
const { makeArrowTable } = require("../lancedb/arrow");
const schema = new Schema([new Field("test", new arrow.Bool(), true)]);
// Test with all null values
const data = [{ test: null }];
const table = makeArrowTable(data, { schema });
expect(table.numRows).toBe(1);
expect(table.schema.names).toEqual(["test"]);
expect(table.getChild("test")!.get(0)).toBeNull();
});
it("should handle mixed null and non-null boolean values", () => {
const { makeArrowTable } = require("../lancedb/arrow");
const schema = new Schema([new Field("test", new Bool(), true)]);
// Test with mixed values
const data = [{ test: true }, { test: null }, { test: false }];
const table = makeArrowTable(data, { schema });
expect(table.numRows).toBe(3);
expect(table.getChild("test")!.get(0)).toBe(true);
expect(table.getChild("test")!.get(1)).toBeNull();
expect(table.getChild("test")!.get(2)).toBe(false);
});
});
// Test for the undefined values bug fix
describe("undefined values handling", () => {
it("should handle mixed undefined and actual values", () => {
const schema = new Schema([
new Field("text", new Utf8(), true), // nullable
new Field("number", new Int32(), true), // nullable
new Field("bool", new Bool(), true), // nullable
]);
const data = [
{ text: undefined, number: 42, bool: true },
{ text: "hello", number: undefined, bool: false },
{ text: "world", number: 123, bool: undefined },
];
const table = makeArrowTable(data, { schema });
const result = table.toArray();
expect(result).toHaveLength(3);
expect(result[0].text).toBe(null);
expect(result[0].number).toBe(42);
expect(result[0].bool).toBe(true);
expect(result[1].text).toBe("hello");
expect(result[1].number).toBe(null);
expect(result[1].bool).toBe(false);
expect(result[2].text).toBe("world");
expect(result[2].number).toBe(123);
expect(result[2].bool).toBe(null);
});
});
},
);

View File

@@ -203,3 +203,106 @@ describe("given a connection", () => {
});
});
});
describe("clone table functionality", () => {
let tmpDir: tmp.DirResult;
let db: Connection;
beforeEach(async () => {
tmpDir = tmp.dirSync({ unsafeCleanup: true });
db = await connect(tmpDir.name);
});
afterEach(() => tmpDir.removeCallback());
it("should clone a table with latest version (default behavior)", async () => {
// Create source table with some data
const data = [
{ id: 1, text: "hello", vector: [1.0, 2.0] },
{ id: 2, text: "world", vector: [3.0, 4.0] },
];
const sourceTable = await db.createTable("source", data);
// Add more data to create a new version
const moreData = [{ id: 3, text: "test", vector: [5.0, 6.0] }];
await sourceTable.add(moreData);
// Clone the table (should get latest version with 3 rows)
const sourceUri = `${tmpDir.name}/source.lance`;
const clonedTable = await db.cloneTable("cloned", sourceUri);
// Verify cloned table has all 3 rows
expect(await clonedTable.countRows()).toBe(3);
expect((await db.tableNames()).includes("cloned")).toBe(true);
});
it("should clone a table from a specific version", async () => {
// Create source table with initial data
const data = [
{ id: 1, text: "hello", vector: [1.0, 2.0] },
{ id: 2, text: "world", vector: [3.0, 4.0] },
];
const sourceTable = await db.createTable("source", data);
// Get the initial version
const initialVersion = await sourceTable.version();
// Add more data to create a new version
const moreData = [{ id: 3, text: "test", vector: [5.0, 6.0] }];
await sourceTable.add(moreData);
// Verify source now has 3 rows
expect(await sourceTable.countRows()).toBe(3);
// Clone from the initial version (should have only 2 rows)
const sourceUri = `${tmpDir.name}/source.lance`;
const clonedTable = await db.cloneTable("cloned", sourceUri, {
sourceVersion: initialVersion,
});
// Verify cloned table has only the initial 2 rows
expect(await clonedTable.countRows()).toBe(2);
});
it("should clone a table from a tagged version", async () => {
// Create source table with initial data
const data = [
{ id: 1, text: "hello", vector: [1.0, 2.0] },
{ id: 2, text: "world", vector: [3.0, 4.0] },
];
const sourceTable = await db.createTable("source", data);
// Create a tag for the current version
const tags = await sourceTable.tags();
await tags.create("v1.0", await sourceTable.version());
// Add more data after the tag
const moreData = [{ id: 3, text: "test", vector: [5.0, 6.0] }];
await sourceTable.add(moreData);
// Verify source now has 3 rows
expect(await sourceTable.countRows()).toBe(3);
// Clone from the tagged version (should have only 2 rows)
const sourceUri = `${tmpDir.name}/source.lance`;
const clonedTable = await db.cloneTable("cloned", sourceUri, {
sourceTag: "v1.0",
});
// Verify cloned table has only the tagged version's 2 rows
expect(await clonedTable.countRows()).toBe(2);
});
it("should fail when attempting deep clone", async () => {
// Create source table with some data
const data = [
{ id: 1, text: "hello", vector: [1.0, 2.0] },
{ id: 2, text: "world", vector: [3.0, 4.0] },
];
await db.createTable("source", data);
// Try to create a deep clone (should fail)
const sourceUri = `${tmpDir.name}/source.lance`;
await expect(
db.cloneTable("cloned", sourceUri, { isShallow: false }),
).rejects.toThrow("Deep clone is not yet implemented");
});
});
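
For reference, the clone API exercised above condenses to a few calls. A usage sketch based only on these tests (the database path and table names are placeholders):

```ts
import { connect } from "../lancedb";

const db = await connect("/tmp/demo-db");
// Shallow clone of the latest version: a new manifest that shares data files.
const latest = await db.cloneTable("copy", "/tmp/demo-db/source.lance");
// Pin the clone to an earlier state, by version number or by tag.
const byVersion = await db.cloneTable("copy_v1", "/tmp/demo-db/source.lance", {
  sourceVersion: 1,
});
const byTag = await db.cloneTable("copy_tagged", "/tmp/demo-db/source.lance", {
  sourceTag: "v1.0",
});
// Deep clones are rejected for now: { isShallow: false } throws.
```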

View File

@@ -256,6 +256,60 @@ describe("embedding functions", () => {
expect(actual).toHaveProperty("text");
});
it("should handle undefined vector field with embedding function correctly", async () => {
@register("undefined_test")
class MockEmbeddingFunction extends EmbeddingFunction<string> {
ndims() {
return 3;
}
embeddingDataType(): Float {
return new Float32();
}
async computeQueryEmbeddings(_data: string) {
return [1, 2, 3];
}
async computeSourceEmbeddings(data: string[]) {
return Array.from({ length: data.length }).fill([
1, 2, 3,
]) as number[][];
}
}
const func = getRegistry()
.get<MockEmbeddingFunction>("undefined_test")!
.create();
const schema = new Schema([
new Field("text", new Utf8(), true),
new Field(
"vector",
new FixedSizeList(3, new Field("item", new Float32(), true)),
true,
),
]);
const db = await connect(tmpDir.name);
const table = await db.createEmptyTable("test_undefined", schema, {
embeddingFunction: {
function: func,
sourceColumn: "text",
vectorColumn: "vector",
},
});
// Test that undefined, null, and omitted vector fields all work
await table.add([{ text: "test1", vector: undefined }]);
await table.add([{ text: "test2", vector: null }]);
await table.add([{ text: "test3" }]);
const rows = await table.query().toArray();
expect(rows.length).toBe(3);
// All rows should have vectors computed by the embedding function
for (const row of rows) {
expect(row.vector).toBeDefined();
expect(JSON.parse(JSON.stringify(row.vector))).toEqual([1, 2, 3]);
}
});
test.each([new Float16(), new Float32(), new Float64()])(
"should be able to provide manual embeddings with multiple float datatype",
async (floatType) => {

View File

@@ -0,0 +1,371 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import * as tmp from "tmp";
import { Table, connect, permutationBuilder } from "../lancedb";
import { makeArrowTable } from "../lancedb/arrow";
describe("PermutationBuilder", () => {
let tmpDir: tmp.DirResult;
let table: Table;
beforeEach(async () => {
tmpDir = tmp.dirSync({ unsafeCleanup: true });
const db = await connect(tmpDir.name);
// Create test data
const data = makeArrowTable(
[
{ id: 1, value: 10 },
{ id: 2, value: 20 },
{ id: 3, value: 30 },
{ id: 4, value: 40 },
{ id: 5, value: 50 },
{ id: 6, value: 60 },
{ id: 7, value: 70 },
{ id: 8, value: 80 },
{ id: 9, value: 90 },
{ id: 10, value: 100 },
],
{ vectorColumns: {} },
);
table = await db.createTable("test_table", data);
});
afterEach(() => {
tmpDir.removeCallback();
});
test("should create permutation builder", () => {
const builder = permutationBuilder(table);
expect(builder).toBeDefined();
});
test("should execute basic permutation", async () => {
const builder = permutationBuilder(table);
const permutationTable = await builder.execute();
expect(permutationTable).toBeDefined();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
});
test("should create permutation with random splits", async () => {
const builder = permutationBuilder(table).splitRandom({
ratios: [1.0],
seed: 42,
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
});
test("should create permutation with percentage splits", async () => {
const builder = permutationBuilder(table).splitRandom({
ratios: [0.3, 0.7],
seed: 42,
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
// Check split distribution
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBeGreaterThan(0);
expect(split1Count).toBeGreaterThan(0);
expect(split0Count + split1Count).toBe(10);
});
test("should create permutation with count splits", async () => {
const builder = permutationBuilder(table).splitRandom({
counts: [3, 7],
seed: 42,
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
// Check split distribution
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBe(3);
expect(split1Count).toBe(7);
});
test("should create permutation with hash splits", async () => {
const builder = permutationBuilder(table).splitHash({
columns: ["id"],
splitWeights: [50, 50],
discardWeight: 0,
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
// Check that splits exist
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBeGreaterThan(0);
expect(split1Count).toBeGreaterThan(0);
expect(split0Count + split1Count).toBe(10);
});
test("should create permutation with sequential splits", async () => {
const builder = permutationBuilder(table).splitSequential({
ratios: [0.5, 0.5],
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
// Check split distribution - sequential should give exactly 5 and 5
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBe(5);
expect(split1Count).toBe(5);
});
test("should create permutation with calculated splits", async () => {
const builder = permutationBuilder(table).splitCalculated({
calculation: "id % 2",
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
// Check split distribution
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBeGreaterThan(0);
expect(split1Count).toBeGreaterThan(0);
expect(split0Count + split1Count).toBe(10);
});
test("should create permutation with shuffle", async () => {
const builder = permutationBuilder(table).shuffle({
seed: 42,
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
});
test("should create permutation with shuffle and clump size", async () => {
const builder = permutationBuilder(table).shuffle({
seed: 42,
clumpSize: 2,
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
});
test("should create permutation with filter", async () => {
const builder = permutationBuilder(table).filter("value > 50");
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(5); // Values 60, 70, 80, 90, 100
});
test("should chain multiple operations", async () => {
const builder = permutationBuilder(table)
.filter("value <= 80")
.splitRandom({ ratios: [0.5, 0.5], seed: 42 })
.shuffle({ seed: 123 });
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(8); // Values 10, 20, 30, 40, 50, 60, 70, 80
// Check split distribution
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBeGreaterThan(0);
expect(split1Count).toBeGreaterThan(0);
expect(split0Count + split1Count).toBe(8);
});
test("should throw error for invalid split arguments", () => {
const builder = permutationBuilder(table);
// Test no arguments provided
expect(() => builder.splitRandom({})).toThrow(
"Exactly one of 'ratios', 'counts', or 'fixed' must be provided",
);
// Test multiple arguments provided
expect(() =>
builder.splitRandom({ ratios: [0.5, 0.5], counts: [3, 7], seed: 42 }),
).toThrow("Exactly one of 'ratios', 'counts', or 'fixed' must be provided");
});
test("should throw error when builder is consumed", async () => {
const builder = permutationBuilder(table);
// Execute once
await builder.execute();
// Should throw error on second execution
await expect(builder.execute()).rejects.toThrow("Builder already consumed");
});
test("should accept custom split names with random splits", async () => {
const builder = permutationBuilder(table).splitRandom({
ratios: [0.3, 0.7],
seed: 42,
splitNames: ["train", "test"],
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
// Split names are provided but split_id is still numeric (0, 1, etc.)
// The names are metadata that can be used by higher-level APIs
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBeGreaterThan(0);
expect(split1Count).toBeGreaterThan(0);
expect(split0Count + split1Count).toBe(10);
});
test("should accept custom split names with hash splits", async () => {
const builder = permutationBuilder(table).splitHash({
columns: ["id"],
splitWeights: [50, 50],
discardWeight: 0,
splitNames: ["set_a", "set_b"],
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
// Split names are provided but split_id is still numeric
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBeGreaterThan(0);
expect(split1Count).toBeGreaterThan(0);
expect(split0Count + split1Count).toBe(10);
});
test("should accept custom split names with sequential splits", async () => {
const builder = permutationBuilder(table).splitSequential({
ratios: [0.5, 0.5],
splitNames: ["first", "second"],
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
// Split names are provided but split_id is still numeric
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBe(5);
expect(split1Count).toBe(5);
});
test("should accept custom split names with calculated splits", async () => {
const builder = permutationBuilder(table).splitCalculated({
calculation: "id % 2",
splitNames: ["even", "odd"],
});
const permutationTable = await builder.execute();
const rowCount = await permutationTable.countRows();
expect(rowCount).toBe(10);
// Split names are provided but split_id is still numeric
const split0Count = await permutationTable.countRows("split_id = 0");
const split1Count = await permutationTable.countRows("split_id = 1");
expect(split0Count).toBeGreaterThan(0);
expect(split1Count).toBeGreaterThan(0);
expect(split0Count + split1Count).toBe(10);
});
test("should persist permutation to a new table", async () => {
const db = await connect(tmpDir.name);
const builder = permutationBuilder(table)
.splitRandom({
ratios: [0.7, 0.3],
seed: 42,
splitNames: ["train", "validation"],
})
.persist(db, "my_permutation");
// Execute the builder which will persist the table
const permutationTable = await builder.execute();
// Verify the persisted table exists and can be opened
const persistedTable = await db.openTable("my_permutation");
expect(persistedTable).toBeDefined();
// Verify the persisted table has the correct number of rows
const rowCount = await persistedTable.countRows();
expect(rowCount).toBe(10);
// Verify splits exist (numeric split_id values)
const split0Count = await persistedTable.countRows("split_id = 0");
const split1Count = await persistedTable.countRows("split_id = 1");
expect(split0Count).toBeGreaterThan(0);
expect(split1Count).toBeGreaterThan(0);
expect(split0Count + split1Count).toBe(10);
// Verify the table returned by execute is the same as the persisted one
const executedRowCount = await permutationTable.countRows();
expect(executedRowCount).toBe(10);
});
test("should persist permutation with multiple operations", async () => {
const db = await connect(tmpDir.name);
const builder = permutationBuilder(table)
.filter("value > 30")
.splitRandom({ ratios: [0.5, 0.5], seed: 123, splitNames: ["a", "b"] })
.shuffle({ seed: 456 })
.persist(db, "filtered_permutation");
// Execute the builder
const permutationTable = await builder.execute();
// Verify the persisted table
const persistedTable = await db.openTable("filtered_permutation");
const rowCount = await persistedTable.countRows();
expect(rowCount).toBe(7); // Values 40, 50, 60, 70, 80, 90, 100
// Verify splits exist (numeric split_id values)
const split0Count = await persistedTable.countRows("split_id = 0");
const split1Count = await persistedTable.countRows("split_id = 1");
expect(split0Count).toBeGreaterThan(0);
expect(split1Count).toBeGreaterThan(0);
expect(split0Count + split1Count).toBe(7);
// Verify the executed table matches
const executedRowCount = await permutationTable.countRows();
expect(executedRowCount).toBe(7);
});
});
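
Putting the pieces together, a typical train/validation workflow from these tests looks like this (a sketch; the column and table names are illustrative). Note that a builder is single-use: calling `execute()` twice throws "Builder already consumed".

```ts
import { connect, permutationBuilder } from "../lancedb";

const db = await connect("/tmp/demo-db");
const table = await db.openTable("training_data");

// Filter, split 70/30 with a fixed seed, shuffle, and persist the result.
const permutation = await permutationBuilder(table)
  .filter("label IS NOT NULL")
  .splitRandom({ ratios: [0.7, 0.3], seed: 42, splitNames: ["train", "val"] })
  .shuffle({ seed: 123 })
  .persist(db, "training_permutation")
  .execute();

// Rows carry a numeric split_id (0 = first split, 1 = second, ...).
const trainRows = await permutation.countRows("split_id = 0");
```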

View File

@@ -0,0 +1,111 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import * as tmp from "tmp";
import { type Table, connect } from "../lancedb";
import {
Field,
FixedSizeList,
Float32,
Int64,
Schema,
Utf8,
makeArrowTable,
} from "../lancedb/arrow";
import { Index } from "../lancedb/indices";
describe("Query outputSchema", () => {
let tmpDir: tmp.DirResult;
let table: Table;
beforeEach(async () => {
tmpDir = tmp.dirSync({ unsafeCleanup: true });
const db = await connect(tmpDir.name);
// Create table with explicit schema to ensure proper types
const schema = new Schema([
new Field("a", new Int64(), true),
new Field("text", new Utf8(), true),
new Field(
"vec",
new FixedSizeList(2, new Field("item", new Float32())),
true,
),
]);
const data = makeArrowTable(
[
{ a: 1n, text: "foo", vec: [1, 2] },
{ a: 2n, text: "bar", vec: [3, 4] },
{ a: 3n, text: "baz", vec: [5, 6] },
],
{ schema },
);
table = await db.createTable("test", data);
});
afterEach(() => {
tmpDir.removeCallback();
});
it("should return schema for plain query", async () => {
const schema = await table.query().outputSchema();
expect(schema.fields.length).toBe(3);
expect(schema.fields.map((f) => f.name)).toEqual(["a", "text", "vec"]);
expect(schema.fields[0].type.toString()).toBe("Int64");
expect(schema.fields[1].type.toString()).toBe("Utf8");
});
it("should return schema with dynamic projection", async () => {
const schema = await table.query().select({ bl: "a * 2" }).outputSchema();
expect(schema.fields.length).toBe(1);
expect(schema.fields[0].name).toBe("bl");
expect(schema.fields[0].type.toString()).toBe("Int64");
});
it("should return schema for vector search with _distance column", async () => {
const schema = await table
.vectorSearch([1, 2])
.select(["a"])
.outputSchema();
expect(schema.fields.length).toBe(2);
expect(schema.fields.map((f) => f.name)).toEqual(["a", "_distance"]);
expect(schema.fields[0].type.toString()).toBe("Int64");
expect(schema.fields[1].type.toString()).toBe("Float32");
});
it("should return schema for FTS search", async () => {
await table.createIndex("text", { config: Index.fts() });
const schema = await table
.search("foo", "fts")
.select(["a"])
.outputSchema();
// FTS search includes _score column in addition to selected columns
expect(schema.fields.length).toBe(2);
expect(schema.fields.map((f) => f.name)).toContain("a");
expect(schema.fields.map((f) => f.name)).toContain("_score");
const aField = schema.fields.find((f) => f.name === "a");
expect(aField?.type.toString()).toBe("Int64");
});
it("should return schema for take query", async () => {
const schema = await table.takeOffsets([0]).select(["text"]).outputSchema();
expect(schema.fields.length).toBe(1);
expect(schema.fields[0].name).toBe("text");
expect(schema.fields[0].type.toString()).toBe("Utf8");
});
it("should return full schema when no select is specified", async () => {
const schema = await table.query().outputSchema();
// Should return all columns
expect(schema.fields.length).toBe(3);
});
});
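
The pattern these tests establish: `outputSchema()` resolves the query's result schema, including synthetic columns like `_distance` and `_score`, without executing the query. A minimal sketch:

```ts
// Inspect what a vector search will return before running it.
const schema = await table.vectorSearch([1, 2]).select(["a"]).outputSchema();
for (const field of schema.fields) {
  console.log(`${field.name}: ${field.type}`); // e.g. "a: Int64", "_distance: Float32"
}
```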

View File

@@ -3,7 +3,49 @@
import * as http from "http";
import { RequestListener } from "http";
import { Connection, ConnectionOptions, connect } from "../lancedb";
import {
ClientConfig,
Connection,
ConnectionOptions,
TlsConfig,
connect,
} from "../lancedb";
import {
HeaderProvider,
OAuthHeaderProvider,
StaticHeaderProvider,
} from "../lancedb/header";
// Test-only header providers
class CustomProvider extends HeaderProvider {
getHeaders(): Record<string, string> {
return { "X-Custom": "custom-value" };
}
}
class ErrorProvider extends HeaderProvider {
private errorMessage: string;
public callCount: number = 0;
constructor(errorMessage: string = "Test error") {
super();
this.errorMessage = errorMessage;
}
getHeaders(): Record<string, string> {
this.callCount++;
throw new Error(this.errorMessage);
}
}
class ConcurrentProvider extends HeaderProvider {
private counter: number = 0;
getHeaders(): Record<string, string> {
this.counter++;
return { "X-Request-Id": String(this.counter) };
}
}
async function withMockDatabase(
listener: RequestListener,
@@ -148,4 +190,431 @@ describe("remote connection", () => {
},
);
});
describe("TlsConfig", () => {
it("should create TlsConfig with all fields", () => {
const tlsConfig: TlsConfig = {
certFile: "/path/to/cert.pem",
keyFile: "/path/to/key.pem",
sslCaCert: "/path/to/ca.pem",
assertHostname: false,
};
expect(tlsConfig.certFile).toBe("/path/to/cert.pem");
expect(tlsConfig.keyFile).toBe("/path/to/key.pem");
expect(tlsConfig.sslCaCert).toBe("/path/to/ca.pem");
expect(tlsConfig.assertHostname).toBe(false);
});
it("should create TlsConfig with partial fields", () => {
const tlsConfig: TlsConfig = {
certFile: "/path/to/cert.pem",
keyFile: "/path/to/key.pem",
};
expect(tlsConfig.certFile).toBe("/path/to/cert.pem");
expect(tlsConfig.keyFile).toBe("/path/to/key.pem");
expect(tlsConfig.sslCaCert).toBeUndefined();
expect(tlsConfig.assertHostname).toBeUndefined();
});
it("should create ClientConfig with TlsConfig", () => {
const tlsConfig: TlsConfig = {
certFile: "/path/to/cert.pem",
keyFile: "/path/to/key.pem",
sslCaCert: "/path/to/ca.pem",
assertHostname: true,
};
const clientConfig: ClientConfig = {
userAgent: "test-agent",
tlsConfig: tlsConfig,
};
expect(clientConfig.userAgent).toBe("test-agent");
expect(clientConfig.tlsConfig).toBeDefined();
expect(clientConfig.tlsConfig?.certFile).toBe("/path/to/cert.pem");
expect(clientConfig.tlsConfig?.keyFile).toBe("/path/to/key.pem");
expect(clientConfig.tlsConfig?.sslCaCert).toBe("/path/to/ca.pem");
expect(clientConfig.tlsConfig?.assertHostname).toBe(true);
});
it("should handle empty TlsConfig", () => {
const tlsConfig: TlsConfig = {};
expect(tlsConfig.certFile).toBeUndefined();
expect(tlsConfig.keyFile).toBeUndefined();
expect(tlsConfig.sslCaCert).toBeUndefined();
expect(tlsConfig.assertHostname).toBeUndefined();
});
it("should accept TlsConfig in connection options", () => {
const tlsConfig: TlsConfig = {
certFile: "/path/to/cert.pem",
keyFile: "/path/to/key.pem",
sslCaCert: "/path/to/ca.pem",
assertHostname: false,
};
// Just verify that the ClientConfig accepts the TlsConfig
const clientConfig: ClientConfig = {
tlsConfig: tlsConfig,
};
const connectionOptions: ConnectionOptions = {
apiKey: "fake",
clientConfig: clientConfig,
};
// Verify the configuration structure is correct
expect(connectionOptions.clientConfig).toBeDefined();
expect(connectionOptions.clientConfig?.tlsConfig).toBeDefined();
expect(connectionOptions.clientConfig?.tlsConfig?.certFile).toBe(
"/path/to/cert.pem",
);
});
});
describe("header providers", () => {
it("should work with StaticHeaderProvider", async () => {
const provider = new StaticHeaderProvider({
authorization: "Bearer test-token",
"X-Custom": "value",
});
const headers = provider.getHeaders();
expect(headers).toEqual({
authorization: "Bearer test-token",
"X-Custom": "value",
});
// Test that it returns a copy
headers["X-Modified"] = "modified";
const headers2 = provider.getHeaders();
expect(headers2).not.toHaveProperty("X-Modified");
});
it("should pass headers from StaticHeaderProvider to requests", async () => {
const provider = new StaticHeaderProvider({
"X-Custom-Auth": "secret-token",
"X-Request-Source": "test-suite",
});
await withMockDatabase(
(req, res) => {
expect(req.headers["x-custom-auth"]).toEqual("secret-token");
expect(req.headers["x-request-source"]).toEqual("test-suite");
const body = JSON.stringify({ tables: [] });
res.writeHead(200, { "Content-Type": "application/json" }).end(body);
},
async () => {
// Use actual header provider mechanism instead of extraHeaders
const conn = await connect(
"db://dev",
{
apiKey: "fake",
hostOverride: "http://localhost:8000",
},
undefined, // session
provider, // headerProvider
);
const tableNames = await conn.tableNames();
expect(tableNames).toEqual([]);
},
);
});
it("should work with CustomProvider", () => {
const provider = new CustomProvider();
const headers = provider.getHeaders();
expect(headers).toEqual({ "X-Custom": "custom-value" });
});
it("should handle ErrorProvider errors", () => {
const provider = new ErrorProvider("Authentication failed");
expect(() => provider.getHeaders()).toThrow("Authentication failed");
expect(provider.callCount).toBe(1);
// Test that error is thrown each time
expect(() => provider.getHeaders()).toThrow("Authentication failed");
expect(provider.callCount).toBe(2);
});
it("should work with ConcurrentProvider", () => {
const provider = new ConcurrentProvider();
const headers1 = provider.getHeaders();
const headers2 = provider.getHeaders();
const headers3 = provider.getHeaders();
expect(headers1).toEqual({ "X-Request-Id": "1" });
expect(headers2).toEqual({ "X-Request-Id": "2" });
expect(headers3).toEqual({ "X-Request-Id": "3" });
});
describe("OAuthHeaderProvider", () => {
it("should initialize correctly", () => {
const fetcher = () => ({
accessToken: "token123",
expiresIn: 3600,
});
const provider = new OAuthHeaderProvider(fetcher);
expect(provider).toBeInstanceOf(HeaderProvider);
});
it("should fetch token on first use", async () => {
let callCount = 0;
const fetcher = () => {
callCount++;
return {
accessToken: "token123",
expiresIn: 3600,
};
};
const provider = new OAuthHeaderProvider(fetcher);
// Need to manually refresh first due to sync limitation
await provider.refreshToken();
const headers = provider.getHeaders();
expect(headers).toEqual({ authorization: "Bearer token123" });
expect(callCount).toBe(1);
// Second call should not fetch again
const headers2 = provider.getHeaders();
expect(headers2).toEqual({ authorization: "Bearer token123" });
expect(callCount).toBe(1);
});
it("should handle tokens without expiry", async () => {
const fetcher = () => ({
accessToken: "permanent_token",
});
const provider = new OAuthHeaderProvider(fetcher);
await provider.refreshToken();
const headers = provider.getHeaders();
expect(headers).toEqual({ authorization: "Bearer permanent_token" });
});
it("should throw error when access_token is missing", async () => {
const fetcher = () =>
({
expiresIn: 3600,
}) as { accessToken?: string; expiresIn?: number };
const provider = new OAuthHeaderProvider(
fetcher as () => {
accessToken: string;
expiresIn?: number;
},
);
await expect(provider.refreshToken()).rejects.toThrow(
"Token fetcher did not return 'accessToken'",
);
});
it("should handle async token fetchers", async () => {
const fetcher = async () => {
// Simulate async operation
await new Promise((resolve) => setTimeout(resolve, 10));
return {
accessToken: "async_token",
expiresIn: 3600,
};
};
const provider = new OAuthHeaderProvider(fetcher);
await provider.refreshToken();
const headers = provider.getHeaders();
expect(headers).toEqual({ authorization: "Bearer async_token" });
});
});
it("should merge header provider headers with extra headers", async () => {
const provider = new StaticHeaderProvider({
"X-From-Provider": "provider-value",
});
await withMockDatabase(
(req, res) => {
expect(req.headers["x-from-provider"]).toEqual("provider-value");
expect(req.headers["x-extra-header"]).toEqual("extra-value");
const body = JSON.stringify({ tables: [] });
res.writeHead(200, { "Content-Type": "application/json" }).end(body);
},
async () => {
// Use header provider with additional extraHeaders
const conn = await connect(
"db://dev",
{
apiKey: "fake",
hostOverride: "http://localhost:8000",
clientConfig: {
extraHeaders: {
"X-Extra-Header": "extra-value",
},
},
},
undefined, // session
provider, // headerProvider
);
const tableNames = await conn.tableNames();
expect(tableNames).toEqual([]);
},
);
});
});
describe("header provider integration", () => {
it("should work with TypeScript StaticHeaderProvider", async () => {
let requestCount = 0;
await withMockDatabase(
(req, res) => {
requestCount++;
// Check headers are present on each request
expect(req.headers["authorization"]).toEqual("Bearer test-token-123");
expect(req.headers["x-custom"]).toEqual("custom-value");
// Return different responses based on the endpoint
if (req.url === "/v1/table/test_table/describe/") {
const body = JSON.stringify({
name: "test_table",
schema: { fields: [] },
});
res
.writeHead(200, { "Content-Type": "application/json" })
.end(body);
} else {
const body = JSON.stringify({ tables: ["test_table"] });
res
.writeHead(200, { "Content-Type": "application/json" })
.end(body);
}
},
async () => {
// Create provider with static headers
const provider = new StaticHeaderProvider({
authorization: "Bearer test-token-123",
"X-Custom": "custom-value",
});
// Connect with the provider
const conn = await connect(
"db://dev",
{
apiKey: "fake",
hostOverride: "http://localhost:8000",
},
undefined, // session
provider, // headerProvider
);
// Make multiple requests to verify headers are sent each time
const tables1 = await conn.tableNames();
expect(tables1).toEqual(["test_table"]);
const tables2 = await conn.tableNames();
expect(tables2).toEqual(["test_table"]);
// Verify headers were sent with each request
expect(requestCount).toBeGreaterThanOrEqual(2);
},
);
});
it("should work with JavaScript function provider", async () => {
let requestId = 0;
await withMockDatabase(
(req, res) => {
// Check dynamic header is present
expect(req.headers["x-request-id"]).toBeDefined();
expect(req.headers["x-request-id"]).toMatch(/^req-\d+$/);
const body = JSON.stringify({ tables: [] });
res.writeHead(200, { "Content-Type": "application/json" }).end(body);
},
async () => {
// Create a JavaScript function that returns dynamic headers
const getHeaders = async () => {
requestId++;
return {
"X-Request-Id": `req-${requestId}`,
"X-Timestamp": new Date().toISOString(),
};
};
// Connect with the function directly
const conn = await connect(
"db://dev",
{
apiKey: "fake",
hostOverride: "http://localhost:8000",
},
undefined, // session
getHeaders, // headerProvider
);
// Make requests - each should have different headers
const tables = await conn.tableNames();
expect(tables).toEqual([]);
},
);
});
it("should support OAuth-like token refresh pattern", async () => {
let tokenVersion = 0;
await withMockDatabase(
(req, res) => {
// Verify authorization header
const authHeader = req.headers["authorization"];
expect(authHeader).toBeDefined();
expect(authHeader).toMatch(/^Bearer token-v\d+$/);
const body = JSON.stringify({ tables: [] });
res.writeHead(200, { "Content-Type": "application/json" }).end(body);
},
async () => {
// Simulate OAuth token fetcher
const fetchToken = async () => {
tokenVersion++;
return {
authorization: `Bearer token-v${tokenVersion}`,
};
};
// Connect with the function directly
const conn = await connect(
"db://dev",
{
apiKey: "fake",
hostOverride: "http://localhost:8000",
},
undefined, // session
fetchToken, // headerProvider
);
// Each request will fetch a new token
await conn.tableNames();
// Token should be different on next request
await conn.tableNames();
},
);
});
});
});
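
The shape of the header-provider API, as exercised above: subclass `HeaderProvider` (or pass a plain async function) and hand it to `connect` as the fourth argument. A sketch with an assumed trace-id header; the host and project names are placeholders:

```ts
import { randomUUID } from "node:crypto";
import { connect } from "../lancedb";
import { HeaderProvider } from "../lancedb/header";

class TraceIdProvider extends HeaderProvider {
  // Called for every request; returns the headers to attach.
  getHeaders(): Record<string, string> {
    return { "X-Trace-Id": randomUUID() };
  }
}

const conn = await connect(
  "db://my-project",
  { apiKey: "...", hostOverride: "https://my-gateway.example.com" },
  undefined, // session
  new TraceIdProvider(), // headerProvider
);
```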

View File

@@ -0,0 +1,184 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import * as arrow from "../lancedb/arrow";
import { sanitizeField, sanitizeType } from "../lancedb/sanitize";
describe("sanitize", function () {
describe("sanitizeType function", function () {
it("should handle type objects", function () {
const type = new arrow.Int32();
const result = sanitizeType(type);
expect(result.typeId).toBe(arrow.Type.Int);
expect((result as arrow.Int).bitWidth).toBe(32);
expect((result as arrow.Int).isSigned).toBe(true);
const floatType = {
typeId: 3, // Type.Float = 3
precision: 2,
toString: () => "Float",
isFloat: true,
isFixedWidth: true,
};
const floatResult = sanitizeType(floatType);
expect(floatResult).toBeInstanceOf(arrow.DataType);
expect(floatResult.typeId).toBe(arrow.Type.Float);
const floatResult2 = sanitizeType({ ...floatType, typeId: () => 3 });
expect(floatResult2).toBeInstanceOf(arrow.DataType);
expect(floatResult2.typeId).toBe(arrow.Type.Float);
});
const allTypeNameTestCases = [
["null", new arrow.Null()],
["binary", new arrow.Binary()],
["utf8", new arrow.Utf8()],
["bool", new arrow.Bool()],
["int8", new arrow.Int8()],
["int16", new arrow.Int16()],
["int32", new arrow.Int32()],
["int64", new arrow.Int64()],
["uint8", new arrow.Uint8()],
["uint16", new arrow.Uint16()],
["uint32", new arrow.Uint32()],
["uint64", new arrow.Uint64()],
["float16", new arrow.Float16()],
["float32", new arrow.Float32()],
["float64", new arrow.Float64()],
["datemillisecond", new arrow.DateMillisecond()],
["dateday", new arrow.DateDay()],
["timenanosecond", new arrow.TimeNanosecond()],
["timemicrosecond", new arrow.TimeMicrosecond()],
["timemillisecond", new arrow.TimeMillisecond()],
["timesecond", new arrow.TimeSecond()],
["intervaldaytime", new arrow.IntervalDayTime()],
["intervalyearmonth", new arrow.IntervalYearMonth()],
["durationnanosecond", new arrow.DurationNanosecond()],
["durationmicrosecond", new arrow.DurationMicrosecond()],
["durationmillisecond", new arrow.DurationMillisecond()],
["durationsecond", new arrow.DurationSecond()],
] as const;
it.each(allTypeNameTestCases)(
'should map type name "%s" to %s',
function (name, expected) {
const result = sanitizeType(name);
expect(result).toBeInstanceOf(expected.constructor);
},
);
const caseVariationTestCases = [
["NULL", new arrow.Null()],
["Utf8", new arrow.Utf8()],
["FLOAT32", new arrow.Float32()],
["DaTedAy", new arrow.DateDay()],
] as const;
it.each(caseVariationTestCases)(
'should be case insensitive for type name "%s" mapped to %s',
function (name, expected) {
const result = sanitizeType(name);
expect(result).toBeInstanceOf(expected.constructor);
},
);
it("should throw error for unrecognized type name", function () {
expect(() => sanitizeType("invalid_type")).toThrow(
"Unrecognized type name in schema: invalid_type",
);
});
});
describe("sanitizeField function", function () {
it("should handle field with string type name", function () {
const field = sanitizeField({
name: "string_field",
type: "utf8",
nullable: true,
metadata: new Map([["key", "value"]]),
});
expect(field).toBeInstanceOf(arrow.Field);
expect(field.name).toBe("string_field");
expect(field.type).toBeInstanceOf(arrow.Utf8);
expect(field.nullable).toBe(true);
expect(field.metadata?.get("key")).toBe("value");
});
it("should handle field with type object", function () {
const floatType = {
typeId: 3, // Float
precision: 32,
};
const field = sanitizeField({
name: "float_field",
type: floatType,
nullable: false,
});
expect(field).toBeInstanceOf(arrow.Field);
expect(field.name).toBe("float_field");
expect(field.type).toBeInstanceOf(arrow.DataType);
expect(field.type.typeId).toBe(arrow.Type.Float);
expect((field.type as arrow.Float64).precision).toBe(32);
expect(field.nullable).toBe(false);
});
it("should handle field with direct Type instance", function () {
const field = sanitizeField({
name: "bool_field",
type: new arrow.Bool(),
nullable: true,
});
expect(field).toBeInstanceOf(arrow.Field);
expect(field.name).toBe("bool_field");
expect(field.type).toBeInstanceOf(arrow.Bool);
expect(field.nullable).toBe(true);
});
it("should throw error for invalid field object", function () {
expect(() =>
sanitizeField({
type: "int32",
nullable: true,
}),
).toThrow(
"The field passed in is missing a `type`/`name`/`nullable` property",
);
// Invalid type
expect(() =>
sanitizeField({
name: "invalid",
type: { invalid: true },
nullable: true,
}),
).toThrow("Expected a Type to have a typeId property");
// Invalid nullable
expect(() =>
sanitizeField({
name: "invalid_nullable",
type: "int32",
nullable: "not a boolean",
}),
).toThrow("The field passed in had a non-boolean `nullable` property");
});
it("should report error for invalid type name", function () {
expect(() =>
sanitizeField({
name: "invalid_field",
type: "invalid_type",
nullable: true,
}),
).toThrow(
"Unable to sanitize type for field: invalid_field due to error: Error: Unrecognized type name in schema: invalid_type",
);
});
});
});
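
These sanitizers are what allow schemas to be written with plain string type names: a `SchemaLike` object (mirroring the `createEmptyTable` test later in this diff) is normalized through `sanitizeField`/`sanitizeType` before use.

```ts
import { connect } from "../lancedb";

const db = await connect("/tmp/demo-db");
// String type names are case-insensitive and resolved by sanitizeType.
const table = await db.createEmptyTable("scores", {
  fields: [
    { name: "id", type: "int64", nullable: true },
    { name: "score", type: "float32", nullable: true },
  ],
  metadata: new Map<string, string>(),
  names: ["id", "score"],
});
```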

View File

@@ -10,7 +10,13 @@ import * as arrow16 from "apache-arrow-16";
import * as arrow17 from "apache-arrow-17";
import * as arrow18 from "apache-arrow-18";
import { MatchQuery, PhraseQuery, Table, connect } from "../lancedb";
import {
Connection,
MatchQuery,
PhraseQuery,
Table,
connect,
} from "../lancedb";
import {
Table as ArrowTable,
Field,
@@ -21,6 +27,8 @@ import {
Int64,
List,
Schema,
SchemaLike,
Type,
Uint8,
Utf8,
makeArrowTable,
@@ -39,7 +47,6 @@ import {
Operator,
instanceOfFullTextQuery,
} from "../lancedb/query";
import exp = require("constants");
describe.each([arrow15, arrow16, arrow17, arrow18])(
"Given a table",
@@ -212,8 +219,7 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
},
);
// TODO: https://github.com/lancedb/lancedb/issues/1832
it.skip("should be able to omit nullable fields", async () => {
it("should be able to omit nullable fields", async () => {
const db = await connect(tmpDir.name);
const schema = new arrow.Schema([
new arrow.Field(
@@ -237,23 +243,36 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
await table.add([data3]);
let res = await table.query().limit(10).toArray();
const resVector = res.map((r) => r.get("vector").toArray());
const resVector = res.map((r) =>
r.vector ? Array.from(r.vector) : null,
);
expect(resVector).toEqual([null, data2.vector, data3.vector]);
const resItem = res.map((r) => r.get("item").toArray());
const resItem = res.map((r) => r.item);
expect(resItem).toEqual(["foo", null, "bar"]);
const resPrice = res.map((r) => r.get("price").toArray());
const resPrice = res.map((r) => r.price);
expect(resPrice).toEqual([10.0, 2.0, 3.0]);
const data4 = { item: "foo" };
// We can't omit a column if it's not nullable
await expect(table.add([data4])).rejects.toThrow("Invalid user input");
await expect(table.add([data4])).rejects.toThrow(
"Append with different schema",
);
// But we can alter columns to make them nullable
await table.alterColumns([{ path: "price", nullable: true }]);
await table.add([data4]);
res = (await table.query().limit(10).toArray()).map((r) => r.toJSON());
expect(res).toEqual([data1, data2, data3, data4]);
res = (await table.query().limit(10).toArray()).map((r) => ({
...r.toJSON(),
vector: r.vector ? Array.from(r.vector) : null,
}));
// Rust fills missing nullable fields with null
expect(res).toEqual([
{ ...data1, vector: null },
{ ...data2, item: null },
data3,
{ ...data4, price: null, vector: null },
]);
});
it("should be able to insert nullable data for non-nullable fields", async () => {
@@ -287,6 +306,12 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
expect(res2[1].id).toEqual(data2.id);
});
it("should support take queries", async () => {
await table.add([{ id: 1 }, { id: 2 }, { id: 3 }]);
const res = await table.takeOffsets([1, 2]).toArrow();
expect(res.getChild("id")?.toJSON()).toEqual([2, 3]);
});
it("should return the table as an instance of an arrow table", async () => {
const arrowTbl = await table.toArrow();
expect(arrowTbl).toBeInstanceOf(ArrowTable);
@@ -325,6 +350,43 @@ describe.each([arrow15, arrow16, arrow17, arrow18])(
const table = await db.createTable("my_table", data);
expect(await table.countRows()).toEqual(2);
});
it("should allow undefined and omitted nullable vector fields", async () => {
// Regression test: previously you could not pass undefined for, or omit, a nullable vector column
const db = await connect("memory://");
const schema = new arrow.Schema([
new arrow.Field("id", new arrow.Int32(), true),
new arrow.Field(
"vector",
new arrow.FixedSizeList(
32,
new arrow.Field("item", new arrow.Float32(), true),
),
true, // nullable = true
),
]);
const table = await db.createEmptyTable("test_table", schema);
// Should not throw error for undefined value
await table.add([{ id: 0, vector: undefined }]);
// Should not throw error for omitted field
await table.add([{ id: 1 }]);
// Should still work for null
await table.add([{ id: 2, vector: null }]);
// Should still work for actual vector
const testVector = new Array(32).fill(0.5);
await table.add([{ id: 3, vector: testVector }]);
expect(await table.countRows()).toEqual(4);
const res = await table.query().limit(10).toArray();
const resVector = res.map((r) =>
r.vector ? Array.from(r.vector) : null,
);
expect(resVector).toEqual([null, null, null, testVector]);
});
},
);
@@ -482,6 +544,32 @@ describe("merge insert", () => {
.execute(newData, { timeoutMs: 0 }),
).rejects.toThrow("merge insert timed out");
});
test("useIndex", async () => {
const newData = [
{ a: 2, b: "x" },
{ a: 4, b: "z" },
];
// Test with useIndex(true) - should work fine
const result1 = await table
.mergeInsert("a")
.whenNotMatchedInsertAll()
.useIndex(true)
.execute(newData);
expect(result1.numInsertedRows).toBe(1); // Only a=4 should be inserted
// Test with useIndex(false) - should also work fine
const newData2 = [{ a: 5, b: "w" }];
const result2 = await table
.mergeInsert("a")
.whenNotMatchedInsertAll()
.useIndex(false)
.execute(newData2);
expect(result2.numInsertedRows).toBe(1); // a=5 should be inserted
});
});
describe("When creating an index", () => {
@@ -557,7 +645,7 @@ describe("When creating an index", () => {
// test offset
rst = await tbl.query().limit(2).offset(1).nearestTo(queryVec).toArrow();
expect(rst.numRows).toBe(1);
expect(rst.numRows).toBe(2);
// test nprobes
rst = await tbl.query().nearestTo(queryVec).limit(2).nprobes(50).toArrow();
@@ -696,7 +784,7 @@ describe("When creating an index", () => {
// test offset
rst = await tbl.query().limit(2).offset(1).nearestTo(queryVec).toArrow();
expect(rst.numRows).toBe(1);
expect(rst.numRows).toBe(2);
// test ef
rst = await tbl.query().limit(2).nearestTo(queryVec).ef(100).toArrow();
@@ -773,6 +861,15 @@ describe("When creating an index", () => {
});
});
it("should be able to create IVF_RQ", async () => {
await tbl.createIndex("vec", {
config: Index.ivfRq({
numPartitions: 10,
numBits: 1,
}),
});
});
it("should allow me to replace (or not) an existing index", async () => {
await tbl.createIndex("id");
// Default is replace=true
@@ -851,6 +948,40 @@ describe("When creating an index", () => {
expect(stats).toBeUndefined();
});
test("should support name and train parameters", async () => {
// Test with custom name
await tbl.createIndex("vec", {
config: Index.ivfPq({ numPartitions: 4 }),
name: "my_custom_vector_index",
});
const indices = await tbl.listIndices();
expect(indices).toHaveLength(1);
expect(indices[0].name).toBe("my_custom_vector_index");
// Test scalar index with train=false
await tbl.createIndex("id", {
config: Index.btree(),
name: "btree_empty",
train: false,
});
const allIndices = await tbl.listIndices();
expect(allIndices).toHaveLength(2);
expect(allIndices.some((idx) => idx.name === "btree_empty")).toBe(true);
// Test with both name and train=true (use tags column)
await tbl.createIndex("tags", {
config: Index.labelList(),
name: "tags_trained",
train: true,
});
const finalIndices = await tbl.listIndices();
expect(finalIndices).toHaveLength(3);
expect(finalIndices.some((idx) => idx.name === "tags_trained")).toBe(true);
});
test("create ivf_flat with binary vectors", async () => {
const db = await connect(tmpDir.name);
const binarySchema = new Schema([
@@ -1389,7 +1520,9 @@ describe("when optimizing a dataset", () => {
it("delete unverified", async () => {
const version = await table.version();
const versionFile = `${tmpDir.name}/${table.name}.lance/_versions/${version - 1}.manifest`;
const versionFile = `${tmpDir.name}/${table.name}.lance/_versions/${
version - 1
}.manifest`;
fs.rmSync(versionFile);
let stats = await table.optimize({ deleteUnverified: false });
@@ -1903,3 +2036,52 @@ describe("column name options", () => {
expect(results2.length).toBe(10);
});
});
describe("when creating an empty table", () => {
let con: Connection;
beforeEach(async () => {
const tmpDir = tmp.dirSync({ unsafeCleanup: true });
con = await connect(tmpDir.name);
});
afterEach(() => {
con.close();
});
it("can create an empty table from an arrow Schema", async () => {
const schema = new Schema([
new Field("id", new Int64()),
new Field("vector", new Float64()),
]);
const table = await con.createEmptyTable("test", schema);
const actualSchema = await table.schema();
expect(actualSchema.fields[0].type.typeId).toBe(Type.Int);
expect((actualSchema.fields[0].type as Int64).bitWidth).toBe(64);
expect(actualSchema.fields[1].type.typeId).toBe(Type.Float);
expect((actualSchema.fields[1].type as Float64).precision).toBe(2);
});
it("can create an empty table from schema that specifies field types by name", async () => {
const schemaLike = {
fields: [
{
name: "id",
type: "int64",
nullable: true,
},
{
name: "vector",
type: "float64",
nullable: true,
},
],
metadata: new Map(),
names: ["id", "vector"],
} satisfies SchemaLike;
const table = await con.createEmptyTable("test", schemaLike);
const actualSchema = await table.schema();
expect(actualSchema.fields[0].type.typeId).toBe(Type.Int);
expect((actualSchema.fields[0].type as Int64).bitWidth).toBe(64);
expect(actualSchema.fields[1].type.typeId).toBe(Type.Float);
expect((actualSchema.fields[1].type as Float64).precision).toBe(2);
});
});

View File

@@ -48,6 +48,7 @@
"noUnreachableSuper": "error",
"noUnsafeFinally": "error",
"noUnsafeOptionalChaining": "error",
"noUnusedImports": "error",
"noUnusedLabels": "error",
"noUnusedVariables": "warn",
"useIsNan": "error",

View File

@@ -12,7 +12,7 @@ test("ann index examples", async () => {
// --8<-- [start:ingest]
const db = await lancedb.connect(databaseDir);
const data = Array.from({ length: 5_000 }, (_, i) => ({
const data = Array.from({ length: 1_000 }, (_, i) => ({
vector: Array(128).fill(i),
id: `${i}`,
content: "",
@@ -24,8 +24,8 @@ test("ann index examples", async () => {
});
await table.createIndex("vector", {
config: lancedb.Index.ivfPq({
numPartitions: 10,
numSubVectors: 16,
numPartitions: 30,
numSubVectors: 8,
}),
});
// --8<-- [end:ingest]
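
The revised numbers track the usual IVF_PQ sizing heuristics (stated here as a rule of thumb, not an official formula): partitions on the order of the square root of the row count, and a sub-vector count that evenly divides the vector dimension:

```ts
// Rule-of-thumb sizing for IVF_PQ (assumption, not an API contract).
const rowCount = 1_000;
const dim = 128;
const numPartitions = Math.round(Math.sqrt(rowCount)); // ~32, close to the 30 used above
const numSubVectors = dim / 16; // 8; divides dim evenly (128 % 8 === 0)
```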

View File

@@ -41,7 +41,6 @@ import {
vectorFromArray as badVectorFromArray,
makeBuilder,
makeData,
makeTable,
} from "apache-arrow";
import { Buffers } from "apache-arrow/data";
import { type EmbeddingFunction } from "./embedding/embedding_function";
@@ -74,7 +73,7 @@ export type FieldLike =
| {
type: string;
name: string;
nullable?: boolean;
nullable: boolean;
metadata?: Map<string, string>;
};
@@ -279,7 +278,7 @@ export class MakeArrowTableOptions {
}
/**
* An enhanced version of the {@link makeTable} function from Apache Arrow
* An enhanced version of the apache-arrow makeTable function
* that supports nested fields and embeddings columns.
*
* (typically you do not need to call this function. It will be called automatically
@@ -512,7 +511,11 @@ function* rowPathsAndValues(
if (isObject(value)) {
yield* rowPathsAndValues(value, [...basePath, key]);
} else {
yield [[...basePath, key], value];
// Skip undefined values - they should be treated the same as missing fields
// for embedding function purposes
if (value !== undefined) {
yield [[...basePath, key], value];
}
}
}
}
@@ -701,7 +704,7 @@ function transposeData(
}
return current;
});
return makeVector(values, field.type);
return makeVector(values, field.type, undefined, field.nullable);
}
}
@@ -748,9 +751,30 @@ function makeVector(
values: unknown[],
type?: DataType,
stringAsDictionary?: boolean,
nullable?: boolean,
// biome-ignore lint/suspicious/noExplicitAny: skip
): Vector<any> {
if (type !== undefined) {
// Convert undefined values to null for nullable fields
if (nullable) {
values = values.map((v) => (v === undefined ? null : v));
}
// workaround for: https://github.com/apache/arrow-js/issues/68
if (DataType.isBool(type)) {
const hasNonNullValue = values.some((v) => v !== null && v !== undefined);
if (!hasNonNullValue) {
const nullBitmap = new Uint8Array(Math.ceil(values.length / 8));
const data = makeData({
type: type,
length: values.length,
nullCount: values.length,
nullBitmap,
});
return arrowMakeVector(data);
}
}
// No need for inference, let Arrow create it
if (type instanceof Int) {
if (DataType.isInt(type) && type.bitWidth === 64) {
@@ -875,7 +899,12 @@ async function applyEmbeddingsFromMetadata(
for (const field of schema.fields) {
if (!(field.name in columns)) {
const nullValues = new Array(table.numRows).fill(null);
columns[field.name] = makeVector(nullValues, field.type);
columns[field.name] = makeVector(
nullValues,
field.type,
undefined,
field.nullable,
);
}
}
@@ -939,7 +968,12 @@ async function applyEmbeddings<T>(
} else if (schema != null) {
const destField = schema.fields.find((f) => f.name === destColumn);
if (destField != null) {
newColumns[destColumn] = makeVector([], destField.type);
newColumns[destColumn] = makeVector(
[],
destField.type,
undefined,
destField.nullable,
);
} else {
throw new Error(
`Attempt to apply embeddings to an empty table failed because schema was missing embedding column '${destColumn}'`,
@@ -1251,19 +1285,36 @@ function validateSchemaEmbeddings(
if (isFixedSizeList(field.type)) {
field = sanitizeField(field);
if (data.length !== 0 && data?.[0]?.[field.name] === undefined) {
// Check if there's an embedding function registered for this field
let hasEmbeddingFunction = false;
// Check schema metadata for embedding functions
if (schema.metadata.has("embedding_functions")) {
const embeddings = JSON.parse(
schema.metadata.get("embedding_functions")!,
);
if (
// biome-ignore lint/suspicious/noExplicitAny: we don't know the type of `f`
embeddings.find((f: any) => f["vectorColumn"] === field.name) ===
undefined
) {
// biome-ignore lint/suspicious/noExplicitAny: we don't know the type of `f`
if (embeddings.find((f: any) => f["vectorColumn"] === field.name)) {
hasEmbeddingFunction = true;
}
}
// Check passed embedding function parameter
if (embeddings && embeddings.vectorColumn === field.name) {
hasEmbeddingFunction = true;
}
// If the field is nullable AND there's no embedding function, allow undefined/omitted values
if (field.nullable && !hasEmbeddingFunction) {
fields.push(field);
} else {
// Either not nullable OR has embedding function - require explicit values
if (hasEmbeddingFunction) {
// Don't add to missingEmbeddingFields since this is expected to be filled by embedding function
fields.push(field);
} else {
missingEmbeddingFields.push(field);
}
} else {
missingEmbeddingFields.push(field);
}
} else {
fields.push(field);
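
The all-null Bool branch in `makeVector` above works around apache-arrow's handling of fully-null boolean columns (https://github.com/apache/arrow-js/issues/68) by building the validity bitmap by hand. Standalone, the same construction looks like this (a sketch using apache-arrow directly):

```ts
import { Bool, makeData, makeVector } from "apache-arrow";

const length = 3;
// A zeroed bitmap marks every slot invalid, i.e. null.
const nullBitmap = new Uint8Array(Math.ceil(length / 8));
const allNullBools = makeVector(
  makeData({ type: new Bool(), length, nullCount: length, nullBitmap }),
);
console.log(allNullBools.get(0)); // null
```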

View File

@@ -3,7 +3,6 @@
import {
Data,
Schema,
SchemaLike,
TableLike,
fromTableToStreamBuffer,
@@ -159,17 +158,33 @@ export abstract class Connection {
*
* Tables will be returned in lexicographical order.
* @param {Partial<TableNamesOptions>} options - options to control the
* paging / start point
* paging / start point (backwards compatibility)
*
*/
abstract tableNames(options?: Partial<TableNamesOptions>): Promise<string[]>;
/**
* List all the table names in this database.
*
* Tables will be returned in lexicographical order.
* @param {string[]} namespace - The namespace to list tables from (defaults to root namespace)
* @param {Partial<TableNamesOptions>} options - options to control the
* paging / start point
*
*/
abstract tableNames(
namespace?: string[],
options?: Partial<TableNamesOptions>,
): Promise<string[]>;
/**
* Open a table in the database.
* @param {string} name - The name of the table
* @param {string[]} namespace - The namespace of the table (defaults to root namespace)
* @param {Partial<OpenTableOptions>} options - Additional options
*/
abstract openTable(
name: string,
namespace?: string[],
options?: Partial<OpenTableOptions>,
): Promise<Table>;
@@ -178,6 +193,7 @@ export abstract class Connection {
* @param {object} options - The options object.
* @param {string} options.name - The name of the table.
* @param {Data} options.data - Non-empty Array of Records to be inserted into the table
* @param {string[]} namespace - The namespace to create the table in (defaults to root namespace)
*
*/
abstract createTable(
@@ -185,40 +201,99 @@ export abstract class Connection {
name: string;
data: Data;
} & Partial<CreateTableOptions>,
namespace?: string[],
): Promise<Table>;
/**
* Creates a new Table and initialize it with new data.
* @param {string} name - The name of the table.
* @param {Record<string, unknown>[] | TableLike} data - Non-empty Array of Records
* to be inserted into the table
* @param {Partial<CreateTableOptions>} options - Additional options (backwards compatibility)
*/
abstract createTable(
name: string,
data: Record<string, unknown>[] | TableLike,
options?: Partial<CreateTableOptions>,
): Promise<Table>;
/**
* Creates a new Table and initialize it with new data.
* @param {string} name - The name of the table.
* @param {Record<string, unknown>[] | TableLike} data - Non-empty Array of Records
* to be inserted into the table
* @param {string[]} namespace - The namespace to create the table in (defaults to root namespace)
* @param {Partial<CreateTableOptions>} options - Additional options
*/
abstract createTable(
name: string,
data: Record<string, unknown>[] | TableLike,
namespace?: string[],
options?: Partial<CreateTableOptions>,
): Promise<Table>;
/**
* Creates a new empty Table
* @param {string} name - The name of the table.
* @param {Schema} schema - The schema of the table
* @param {Partial<CreateTableOptions>} options - Additional options (backwards compatibility)
*/
abstract createEmptyTable(
name: string,
schema: import("./arrow").SchemaLike,
options?: Partial<CreateTableOptions>,
): Promise<Table>;
/**
* Creates a new empty Table
* @param {string} name - The name of the table.
* @param {Schema} schema - The schema of the table
* @param {string[]} namespace - The namespace to create the table in (defaults to root namespace)
* @param {Partial<CreateTableOptions>} options - Additional options
*/
abstract createEmptyTable(
name: string,
schema: import("./arrow").SchemaLike,
namespace?: string[],
options?: Partial<CreateTableOptions>,
): Promise<Table>;
/**
* Drop an existing table.
* @param {string} name The name of the table to drop.
* @param {string[]} namespace The namespace of the table (defaults to root namespace).
*/
abstract dropTable(name: string): Promise<void>;
abstract dropTable(name: string, namespace?: string[]): Promise<void>;
/**
* Drop all tables in the database.
* @param {string[]} namespace The namespace to drop tables from (defaults to root namespace).
*/
abstract dropAllTables(): Promise<void>;
abstract dropAllTables(namespace?: string[]): Promise<void>;
/**
* Clone a table from a source table.
*
* A shallow clone creates a new table that shares the underlying data files
* with the source table but has its own independent manifest. This allows
* both the source and cloned tables to evolve independently while initially
* sharing the same data, deletion, and index files.
*
* @param {string} targetTableName - The name of the target table to create.
* @param {string} sourceUri - The URI of the source table to clone from.
* @param {object} options - Clone options.
* @param {string[]} options.targetNamespace - The namespace for the target table (defaults to root namespace).
* @param {number} options.sourceVersion - The version of the source table to clone.
* @param {string} options.sourceTag - The tag of the source table to clone.
* @param {boolean} options.isShallow - Whether to perform a shallow clone (defaults to true).
*/
abstract cloneTable(
targetTableName: string,
sourceUri: string,
options?: {
targetNamespace?: string[];
sourceVersion?: number;
sourceTag?: string;
isShallow?: boolean;
},
): Promise<Table>;
}
/** @hideconstructor */
@@ -243,16 +318,39 @@ export class LocalConnection extends Connection {
return this.inner.display();
}
async tableNames(options?: Partial<TableNamesOptions>): Promise<string[]> {
return this.inner.tableNames(options?.startAfter, options?.limit);
async tableNames(
namespaceOrOptions?: string[] | Partial<TableNamesOptions>,
options?: Partial<TableNamesOptions>,
): Promise<string[]> {
// Detect if first argument is namespace array or options object
let namespace: string[] | undefined;
let tableNamesOptions: Partial<TableNamesOptions> | undefined;
if (Array.isArray(namespaceOrOptions)) {
// First argument is namespace array
namespace = namespaceOrOptions;
tableNamesOptions = options;
} else {
// First argument is options object (backwards compatibility)
namespace = undefined;
tableNamesOptions = namespaceOrOptions;
}
return this.inner.tableNames(
namespace ?? [],
tableNamesOptions?.startAfter,
tableNamesOptions?.limit,
);
}
async openTable(
name: string,
namespace?: string[],
options?: Partial<OpenTableOptions>,
): Promise<Table> {
const innerTable = await this.inner.openTable(
name,
namespace ?? [],
cleanseStorageOptions(options?.storageOptions),
options?.indexCacheSize,
);
@@ -260,6 +358,28 @@ export class LocalConnection extends Connection {
return new LocalTable(innerTable);
}
async cloneTable(
targetTableName: string,
sourceUri: string,
options?: {
targetNamespace?: string[];
sourceVersion?: number;
sourceTag?: string;
isShallow?: boolean;
},
): Promise<Table> {
const innerTable = await this.inner.cloneTable(
targetTableName,
sourceUri,
options?.targetNamespace ?? [],
options?.sourceVersion ?? null,
options?.sourceTag ?? null,
options?.isShallow ?? true,
);
return new LocalTable(innerTable);
}
private getStorageOptions(
options?: Partial<CreateTableOptions>,
): Record<string, string> | undefined {
@@ -286,14 +406,44 @@ export class LocalConnection extends Connection {
nameOrOptions:
| string
| ({ name: string; data: Data } & Partial<CreateTableOptions>),
data?: Record<string, unknown>[] | TableLike,
dataOrNamespace?: Record<string, unknown>[] | TableLike | string[],
namespaceOrOptions?: string[] | Partial<CreateTableOptions>,
options?: Partial<CreateTableOptions>,
): Promise<Table> {
if (typeof nameOrOptions !== "string" && "name" in nameOrOptions) {
const { name, data, ...options } = nameOrOptions;
return this.createTable(name, data, options);
// First overload: createTable(options, namespace?)
const { name, data, ...createOptions } = nameOrOptions;
const namespace = dataOrNamespace as string[] | undefined;
return this._createTableImpl(name, data, namespace, createOptions);
}
// Second overload: createTable(name, data, namespace?, options?)
const name = nameOrOptions;
const data = dataOrNamespace as Record<string, unknown>[] | TableLike;
// Detect if third argument is namespace array or options object
let namespace: string[] | undefined;
let createOptions: Partial<CreateTableOptions> | undefined;
if (Array.isArray(namespaceOrOptions)) {
// Third argument is namespace array
namespace = namespaceOrOptions;
createOptions = options;
} else {
// Third argument is options object (backwards compatibility)
namespace = undefined;
createOptions = namespaceOrOptions;
}
return this._createTableImpl(name, data, namespace, createOptions);
}
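A sketch of the two `createTable` call shapes the overload detection handles (table and namespace names are hypothetical):

```ts
const rows = [{ id: 1, vector: [0.1, 0.2] }];

// Third argument as options (backwards compatible, root namespace).
await db.createTable("items", rows, { mode: "overwrite" });

// Third argument as a namespace array; options move to the fourth slot.
await db.createTable("items", rows, ["prod"], { mode: "overwrite" });
```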
private async _createTableImpl(
name: string,
data: Data,
namespace?: string[],
options?: Partial<CreateTableOptions>,
): Promise<Table> {
if (data === undefined) {
throw new Error("data is required");
}
@@ -302,9 +452,10 @@ export class LocalConnection extends Connection {
const storageOptions = this.getStorageOptions(options);
const innerTable = await this.inner.createTable(
nameOrOptions,
name,
buf,
mode,
namespace ?? [],
storageOptions,
);
@@ -314,39 +465,55 @@ export class LocalConnection extends Connection {
async createEmptyTable(
name: string,
schema: import("./arrow").SchemaLike,
namespaceOrOptions?: string[] | Partial<CreateTableOptions>,
options?: Partial<CreateTableOptions>,
): Promise<Table> {
let mode: string = options?.mode ?? "create";
const existOk = options?.existOk ?? false;
// Detect if third argument is namespace array or options object
let namespace: string[] | undefined;
let createOptions: Partial<CreateTableOptions> | undefined;
if (Array.isArray(namespaceOrOptions)) {
// Third argument is namespace array
namespace = namespaceOrOptions;
createOptions = options;
} else {
// Third argument is options object (backwards compatibility)
namespace = undefined;
createOptions = namespaceOrOptions;
}
let mode: string = createOptions?.mode ?? "create";
const existOk = createOptions?.existOk ?? false;
if (mode === "create" && existOk) {
mode = "exist_ok";
}
let metadata: Map<string, string> | undefined = undefined;
if (options?.embeddingFunction !== undefined) {
const embeddingFunction = options.embeddingFunction;
if (createOptions?.embeddingFunction !== undefined) {
const embeddingFunction = createOptions.embeddingFunction;
const registry = getRegistry();
metadata = registry.getTableMetadata([embeddingFunction]);
}
const storageOptions = this.getStorageOptions(options);
const storageOptions = this.getStorageOptions(createOptions);
const table = makeEmptyTable(schema, metadata);
const buf = await fromTableToBuffer(table);
const innerTable = await this.inner.createEmptyTable(
name,
buf,
mode,
namespace ?? [],
storageOptions,
);
return new LocalTable(innerTable);
}
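And the equivalent for `createEmptyTable`, sketched with an Arrow schema (the `"staging"` namespace is hypothetical):

```ts
import { Field, FixedSizeList, Float32, Schema, Utf8 } from "apache-arrow";

const schema = new Schema([
  new Field("id", new Utf8(), false),
  new Field(
    "vector",
    new FixedSizeList(2, new Field("item", new Float32())),
    false,
  ),
]);

// Create an empty table inside the "staging" namespace.
await db.createEmptyTable("docs", schema, ["staging"], { existOk: true });
```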
async dropTable(name: string): Promise<void> {
return this.inner.dropTable(name);
async dropTable(name: string, namespace?: string[]): Promise<void> {
return this.inner.dropTable(name, namespace ?? []);
}
async dropAllTables(): Promise<void> {
return this.inner.dropAllTables();
async dropAllTables(namespace?: string[]): Promise<void> {
return this.inner.dropAllTables(namespace ?? []);
}
}

nodejs/lancedb/header.ts (new file)

@@ -0,0 +1,253 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
/**
* Header providers for LanceDB remote connections.
*
* This module provides a flexible header management framework for LanceDB remote
* connections, allowing users to implement custom header strategies for
* authentication, request tracking, custom metadata, or any other header-based
* requirements.
*
* @module header
*/
/**
* Abstract base class for providing custom headers for each request.
*
* Users can implement this interface to provide dynamic headers for various purposes
* such as authentication (OAuth tokens, API keys), request tracking (correlation IDs),
* custom metadata, or any other header-based requirements. The provider is called
* before each request to ensure fresh header values are always used.
*
* @example
* Simple JWT token provider:
* ```typescript
* class JWTProvider extends HeaderProvider {
* constructor(private token: string) {
* super();
* }
*
* getHeaders(): Record<string, string> {
* return { authorization: `Bearer ${this.token}` };
* }
* }
* ```
*
* @example
* Provider with request tracking:
* ```typescript
* class RequestTrackingProvider extends HeaderProvider {
* constructor(private sessionId: string) {
* super();
* }
*
* getHeaders(): Record<string, string> {
* return {
* "X-Session-Id": this.sessionId,
* "X-Request-Id": `req-${Date.now()}`
* };
* }
* }
* ```
*/
export abstract class HeaderProvider {
/**
* Get the latest headers to be added to requests.
*
* This method is called before each request to the remote LanceDB server.
* Implementations should return headers that will be merged with existing headers.
*
* @returns Dictionary of header names to values to add to the request.
* @throws If unable to fetch headers, the exception will be propagated and the request will fail.
*/
abstract getHeaders(): Record<string, string>;
}
/**
* Example implementation: A simple header provider that returns static headers.
*
* This is an example implementation showing how to create a HeaderProvider
* for cases where headers don't change during the session.
*
* @example
* ```typescript
* const provider = new StaticHeaderProvider({
* authorization: "Bearer my-token",
* "X-Custom-Header": "custom-value"
* });
* const headers = provider.getHeaders();
* // Returns: {authorization: 'Bearer my-token', 'X-Custom-Header': 'custom-value'}
* ```
*/
export class StaticHeaderProvider extends HeaderProvider {
private _headers: Record<string, string>;
/**
* Initialize with static headers.
* @param headers - Headers to return for every request.
*/
constructor(headers: Record<string, string>) {
super();
this._headers = { ...headers };
}
/**
* Return the static headers.
* @returns Copy of the static headers.
*/
getHeaders(): Record<string, string> {
return { ...this._headers };
}
}
/**
* Token response from OAuth provider.
* @public
*/
export interface TokenResponse {
accessToken: string;
expiresIn?: number;
}
/**
* Example implementation: OAuth token provider with automatic refresh.
*
* This is an example implementation showing how to manage OAuth tokens
* with automatic refresh when they expire.
*
* @example
* ```typescript
* async function fetchToken(): Promise<TokenResponse> {
* const response = await fetch("https://oauth.example.com/token", {
* method: "POST",
* body: JSON.stringify({
* grant_type: "client_credentials",
* client_id: "your-client-id",
* client_secret: "your-client-secret"
* }),
* headers: { "Content-Type": "application/json" }
* });
* const data = await response.json();
* return {
* accessToken: data.access_token,
* expiresIn: data.expires_in
* };
* }
*
* const provider = new OAuthHeaderProvider(fetchToken);
* const headers = provider.getHeaders();
* // Returns: {"authorization": "Bearer <your-token>"}
* ```
*/
export class OAuthHeaderProvider extends HeaderProvider {
private _tokenFetcher: () => Promise<TokenResponse> | TokenResponse;
private _refreshBufferSeconds: number;
private _currentToken: string | null = null;
private _tokenExpiresAt: number | null = null;
private _refreshPromise: Promise<void> | null = null;
/**
* Initialize the OAuth provider.
* @param tokenFetcher - Function to fetch new tokens. Should return object with 'accessToken' and optionally 'expiresIn'.
* @param refreshBufferSeconds - Seconds before expiry to refresh token. Default 300 (5 minutes).
*/
constructor(
tokenFetcher: () => Promise<TokenResponse> | TokenResponse,
refreshBufferSeconds: number = 300,
) {
super();
this._tokenFetcher = tokenFetcher;
this._refreshBufferSeconds = refreshBufferSeconds;
}
/**
* Check if token needs refresh.
*/
private _needsRefresh(): boolean {
if (this._currentToken === null) {
return true;
}
if (this._tokenExpiresAt === null) {
// No expiration info, assume token is valid
return false;
}
// Refresh if we're within the buffer time of expiration
const now = Date.now() / 1000;
return now >= this._tokenExpiresAt - this._refreshBufferSeconds;
}
/**
* Refresh the token if it's expired or close to expiring.
*/
private async _refreshTokenIfNeeded(): Promise<void> {
if (!this._needsRefresh()) {
return;
}
// If refresh is already in progress, wait for it
if (this._refreshPromise) {
await this._refreshPromise;
return;
}
// Start refresh
this._refreshPromise = (async () => {
try {
const tokenData = await this._tokenFetcher();
this._currentToken = tokenData.accessToken;
if (!this._currentToken) {
throw new Error("Token fetcher did not return 'accessToken'");
}
// Set expiration if provided
if (tokenData.expiresIn) {
this._tokenExpiresAt = Date.now() / 1000 + tokenData.expiresIn;
} else {
// Token doesn't expire or expiration unknown
this._tokenExpiresAt = null;
}
} finally {
this._refreshPromise = null;
}
})();
await this._refreshPromise;
}
/**
 * Get OAuth headers, refreshing the token if needed.
 * Note: this method is synchronous because the native (Rust) layer currently
 * expects synchronous header callbacks, so the token must already have been
 * fetched (see {@link refreshToken}).
* @returns Headers with Bearer token authorization.
* @throws If unable to fetch or refresh token.
*/
getHeaders(): Record<string, string> {
    // getHeaders() must stay synchronous, so the token has to be fetched
    // ahead of time via refreshToken().
if (!this._currentToken && !this._refreshPromise) {
      // A refresh cannot be triggered synchronously; the caller must initialize the token first.
throw new Error(
"Token not initialized. Call refreshToken() first or use async initialization.",
);
}
if (!this._currentToken) {
throw new Error("Failed to obtain OAuth token");
}
return { authorization: `Bearer ${this._currentToken}` };
}
/**
* Manually refresh the token.
* Call this before using getHeaders() to ensure token is available.
*/
async refreshToken(): Promise<void> {
this._currentToken = null; // Force refresh
await this._refreshTokenIfNeeded();
}
}
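Because `getHeaders()` must stay synchronous, the token has to be primed before the provider is used. A sketch, assuming a hypothetical token endpoint:

```ts
const provider = new OAuthHeaderProvider(async () => {
  // Hypothetical OAuth server; replace with your own.
  const res = await fetch("https://auth.example.com/token", { method: "POST" });
  const body = await res.json();
  return { accessToken: body.access_token, expiresIn: body.expires_in };
});

// Prime the token up front; getHeaders() cannot await a refresh itself.
await provider.refreshToken();
const headers = provider.getHeaders(); // { authorization: "Bearer ..." }
```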


@@ -10,9 +10,15 @@ import {
import {
ConnectionOptions,
Connection as LanceDbConnection,
JsHeaderProvider as NativeJsHeaderProvider,
Session,
} from "./native.js";
import { HeaderProvider } from "./header";
// Re-export native header provider for use with connectWithHeaderProvider
export { JsHeaderProvider as NativeJsHeaderProvider } from "./native.js";
export {
AddColumnsSql,
ConnectionOptions,
@@ -21,6 +27,7 @@ export {
ClientConfig,
TimeoutConfig,
RetryConfig,
TlsConfig,
OptimizeStats,
CompactionStats,
RemovalStats,
@@ -36,6 +43,11 @@ export {
DeleteResult,
DropColumnsResult,
UpdateResult,
SplitCalculatedOptions,
SplitRandomOptions,
SplitHashOptions,
SplitSequentialOptions,
ShuffleOptions,
} from "./native.js";
export {
@@ -59,6 +71,7 @@ export {
Query,
QueryBase,
VectorQuery,
TakeQuery,
QueryExecutionOptions,
FullTextSearchOptions,
RecordBatchIterator,
@@ -77,6 +90,7 @@ export {
Index,
IndexOptions,
IvfPqOptions,
IvfRqOptions,
IvfFlatOptions,
HnswPqOptions,
HnswSqOptions,
@@ -92,9 +106,17 @@ export {
ColumnAlteration,
} from "./table";
export {
HeaderProvider,
StaticHeaderProvider,
OAuthHeaderProvider,
TokenResponse,
} from "./header";
export { MergeInsertBuilder, WriteExecutionOptions } from "./merge";
export * as embedding from "./embedding";
export { permutationBuilder, PermutationBuilder } from "./permutation";
export * as rerankers from "./rerankers";
export {
SchemaLike,
@@ -130,11 +152,27 @@ export { IntoSql, packBits } from "./util";
* {storageOptions: {timeout: "60s"}
* });
* ```
* @example
* Using with a header provider for per-request authentication:
* ```ts
* const provider = new StaticHeaderProvider({
* "X-API-Key": "my-key"
* });
* const conn = await connectWithHeaderProvider(
* "db://host:port",
* options,
* provider
* );
* ```
*/
export async function connect(
uri: string,
options?: Partial<ConnectionOptions>,
session?: Session,
headerProvider?:
| HeaderProvider
| (() => Record<string, string>)
| (() => Promise<Record<string, string>>),
): Promise<Connection>;
/**
* Connect to a LanceDB instance at the given URI.
@@ -168,18 +206,58 @@ export async function connect(
): Promise<Connection>;
export async function connect(
uriOrOptions: string | (Partial<ConnectionOptions> & { uri: string }),
options?: Partial<ConnectionOptions>,
optionsOrSession?: Partial<ConnectionOptions> | Session,
sessionOrHeaderProvider?:
| Session
| HeaderProvider
| (() => Record<string, string>)
| (() => Promise<Record<string, string>>),
headerProvider?:
| HeaderProvider
| (() => Record<string, string>)
| (() => Promise<Record<string, string>>),
): Promise<Connection> {
let uri: string | undefined;
let finalOptions: Partial<ConnectionOptions> = {};
let finalHeaderProvider:
| HeaderProvider
| (() => Record<string, string>)
| (() => Promise<Record<string, string>>)
| undefined;
if (typeof uriOrOptions !== "string") {
// First overload: connect(options)
const { uri: uri_, ...opts } = uriOrOptions;
uri = uri_;
finalOptions = opts;
} else {
// Second overload: connect(uri, options?, session?, headerProvider?)
uri = uriOrOptions;
finalOptions = options || {};
// Handle optionsOrSession parameter
if (optionsOrSession && "inner" in optionsOrSession) {
// Second param is session, so no options provided
finalOptions = {};
} else {
// Second param is options
finalOptions = (optionsOrSession as Partial<ConnectionOptions>) || {};
}
// Handle sessionOrHeaderProvider parameter
if (
sessionOrHeaderProvider &&
(typeof sessionOrHeaderProvider === "function" ||
"getHeaders" in sessionOrHeaderProvider)
) {
// Third param is header provider
finalHeaderProvider = sessionOrHeaderProvider as
| HeaderProvider
| (() => Record<string, string>)
| (() => Promise<Record<string, string>>);
} else {
// Third param is session, header provider is fourth param
finalHeaderProvider = headerProvider;
}
}
if (!uri) {
@@ -190,6 +268,26 @@ export async function connect(
(<ConnectionOptions>finalOptions).storageOptions = cleanseStorageOptions(
(<ConnectionOptions>finalOptions).storageOptions,
);
const nativeConn = await LanceDbConnection.new(uri, finalOptions);
// Create native header provider if one was provided
let nativeProvider: NativeJsHeaderProvider | undefined;
if (finalHeaderProvider) {
if (typeof finalHeaderProvider === "function") {
nativeProvider = new NativeJsHeaderProvider(finalHeaderProvider);
} else if (
finalHeaderProvider &&
typeof finalHeaderProvider.getHeaders === "function"
) {
nativeProvider = new NativeJsHeaderProvider(async () =>
finalHeaderProvider.getHeaders(),
);
}
}
const nativeConn = await LanceDbConnection.new(
uri,
finalOptions,
nativeProvider,
);
return new LocalConnection(nativeConn);
}
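A sketch of passing a plain function as the header provider, which the dispatch above wraps in a native `JsHeaderProvider`; the remote URI is hypothetical, and the provider is passed in the fourth slot to match the declared overload:

```ts
import { connect } from "@lancedb/lancedb";

const conn = await connect("db://my-project", {}, undefined, () => ({
  "X-Request-Id": `req-${Date.now()}`, // fresh header on every request
}));
```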


@@ -112,6 +112,77 @@ export interface IvfPqOptions {
sampleRate?: number;
}
export interface IvfRqOptions {
/**
* The number of IVF partitions to create.
*
* This value should generally scale with the number of rows in the dataset.
* By default the number of partitions is the square root of the number of
* rows.
*
* If this value is too large then the first part of the search (picking the
* right partition) will be slow. If this value is too small then the second
* part of the search (searching within a partition) will be slow.
*/
numPartitions?: number;
/**
* Number of bits per dimension for residual quantization.
*
   * This value controls how much each residual component is compressed. More
   * bits make the index more accurate but searches slower. Typical values
   * are small integers; the default is 1 bit per dimension.
*/
numBits?: number;
/**
* Distance type to use to build the index.
*
* Default value is "l2".
*
* This is used when training the index to calculate the IVF partitions
* (vectors are grouped in partitions with similar vectors according to this
* distance type) and during quantization.
*
* The distance type used to train an index MUST match the distance type used
* to search the index. Failure to do so will yield inaccurate results.
*
* The following distance types are available:
*
* "l2" - Euclidean distance.
* "cosine" - Cosine distance.
* "dot" - Dot product.
*/
distanceType?: "l2" | "cosine" | "dot";
/**
* Max iterations to train IVF kmeans.
*
* When training an IVF index we use kmeans to calculate the partitions. This parameter
* controls how many iterations of kmeans to run.
*
* The default value is 50.
*/
maxIterations?: number;
/**
* The number of vectors, per partition, to sample when training IVF kmeans.
*
* When an IVF index is trained, we need to calculate partitions. These are groups
* of vectors that are similar to each other. To do this we use an algorithm called kmeans.
*
* Running kmeans on a large dataset can be slow. To speed this up we run kmeans on a
* random sample of the data. This parameter controls the size of the sample. The total
* number of vectors used to train the index is `sample_rate * num_partitions`.
*
* Increasing this value might improve the quality of the index but in most cases the
* default should be sufficient.
*
* The default value is 256.
*/
sampleRate?: number;
}
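For reference, a sketch of creating an IVF-RQ index with these options via the `Index.ivfRq` factory added further down in this diff; parameter values are illustrative, and `table` is assumed to be an open table:

```ts
import { Index } from "@lancedb/lancedb";

// Train an IVF-RQ index on the "vector" column.
await table.createIndex("vector", {
  config: Index.ivfRq({
    numPartitions: 256,
    numBits: 1,
    distanceType: "cosine",
  }),
});
```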
/**
* Options to create an `HNSW_PQ` index
*/
@@ -523,6 +594,35 @@ export class Index {
options?.distanceType,
options?.numPartitions,
options?.numSubVectors,
options?.numBits,
options?.maxIterations,
options?.sampleRate,
),
);
}
/**
* Create an IvfRq index
*
* IVF-RQ (RabitQ Quantization) compresses vectors using RabitQ quantization
* and organizes them into IVF partitions.
*
* The compression scheme is called RabitQ quantization. Each dimension is quantized into a small number of bits.
* The parameters `num_bits` and `num_partitions` control this process, providing a tradeoff
* between index size (and thus search speed) and index accuracy.
*
* The partitioning process is called IVF and the `num_partitions` parameter controls how
* many groups to create.
*
* Note that training an IVF RQ index on a large dataset is a slow operation and
* currently is also a memory intensive operation.
*/
static ivfRq(options?: Partial<IvfRqOptions>) {
return new Index(
LanceDbIndex.ivfRq(
options?.distanceType,
options?.numPartitions,
options?.numBits,
options?.maxIterations,
options?.sampleRate,
),
@@ -700,5 +800,27 @@ export interface IndexOptions {
*/
replace?: boolean;
/**
* Timeout in seconds to wait for index creation to complete.
*
* If not specified, the method will return immediately after starting the index creation.
*/
waitTimeoutSeconds?: number;
/**
* Optional custom name for the index.
*
* If not provided, a default name will be generated based on the column name.
*/
name?: string;
/**
* Whether to train the index with existing data.
*
* If true (default), the index will be trained with existing data in the table.
* If false, the index will be created empty and populated as new data is added.
*
* Note: This option is only supported for scalar indices. Vector indices always train.
*/
train?: boolean;
}
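A sketch combining the new `name` and `train` options on a scalar index; `train: false` only applies to scalar indices, and the column name is hypothetical:

```ts
await table.createIndex("category", {
  config: Index.btree(),
  name: "category_btree",
  train: false, // start empty; populate as new data arrives
  waitTimeoutSeconds: 60,
});
```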


@@ -70,6 +70,23 @@ export class MergeInsertBuilder {
this.#schema,
);
}
/**
* Controls whether to use indexes for the merge operation.
*
* When set to `true` (the default), the operation will use an index if available
* on the join key for improved performance. When set to `false`, it forces a full
* table scan even if an index exists. This can be useful for benchmarking or when
* the query optimizer chooses a suboptimal path.
*
* @param useIndex - Whether to use indices for the merge operation. Defaults to `true`.
*/
useIndex(useIndex: boolean): MergeInsertBuilder {
return new MergeInsertBuilder(
this.#native.useIndex(useIndex),
this.#schema,
);
}
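A sketch of forcing the full-scan path, for example when benchmarking against the indexed plan; `newRows` is assumption data:

```ts
const newRows = [{ id: 1, status: "active" }];

await table
  .mergeInsert("id")
  .whenMatchedUpdateAll()
  .whenNotMatchedInsertAll()
  .useIndex(false) // ignore any index on "id" and scan the table
  .execute(newRows);
```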
/**
* Executes the merge insert operation
*


@@ -0,0 +1,202 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { Connection, LocalConnection } from "./connection.js";
import {
PermutationBuilder as NativePermutationBuilder,
Table as NativeTable,
ShuffleOptions,
SplitCalculatedOptions,
SplitHashOptions,
SplitRandomOptions,
SplitSequentialOptions,
permutationBuilder as nativePermutationBuilder,
} from "./native.js";
import { LocalTable, Table } from "./table";
/**
* A PermutationBuilder for creating data permutations with splits, shuffling, and filtering.
*
* This class provides a TypeScript wrapper around the native Rust PermutationBuilder,
* offering methods to configure data splits, shuffling, and filtering before executing
* the permutation to create a new table.
*/
export class PermutationBuilder {
private inner: NativePermutationBuilder;
/**
* @hidden
*/
constructor(inner: NativePermutationBuilder) {
this.inner = inner;
}
/**
* Configure the permutation to be persisted.
*
* @param connection - The connection to persist the permutation to
* @param tableName - The name of the table to create
* @returns A new PermutationBuilder instance
* @example
* ```ts
* builder.persist(connection, "permutation_table");
* ```
*/
persist(connection: Connection, tableName: string): PermutationBuilder {
const localConnection = connection as LocalConnection;
const newInner = this.inner.persist(localConnection.inner, tableName);
return new PermutationBuilder(newInner);
}
/**
* Configure random splits for the permutation.
*
* @param options - Configuration for random splitting
* @returns A new PermutationBuilder instance
* @example
* ```ts
* // Split by ratios
* builder.splitRandom({ ratios: [0.7, 0.3], seed: 42 });
*
* // Split by counts
* builder.splitRandom({ counts: [1000, 500], seed: 42 });
*
* // Split with fixed size
* builder.splitRandom({ fixed: 100, seed: 42 });
* ```
*/
splitRandom(options: SplitRandomOptions): PermutationBuilder {
const newInner = this.inner.splitRandom(options);
return new PermutationBuilder(newInner);
}
/**
* Configure hash-based splits for the permutation.
*
* @param options - Configuration for hash-based splitting
* @returns A new PermutationBuilder instance
* @example
* ```ts
* builder.splitHash({
* columns: ["user_id"],
* splitWeights: [70, 30],
* discardWeight: 0
* });
* ```
*/
splitHash(options: SplitHashOptions): PermutationBuilder {
const newInner = this.inner.splitHash(options);
return new PermutationBuilder(newInner);
}
/**
* Configure sequential splits for the permutation.
*
* @param options - Configuration for sequential splitting
* @returns A new PermutationBuilder instance
* @example
* ```ts
* // Split by ratios
* builder.splitSequential({ ratios: [0.8, 0.2] });
*
* // Split by counts
* builder.splitSequential({ counts: [800, 200] });
*
* // Split with fixed size
* builder.splitSequential({ fixed: 1000 });
* ```
*/
splitSequential(options: SplitSequentialOptions): PermutationBuilder {
const newInner = this.inner.splitSequential(options);
return new PermutationBuilder(newInner);
}
/**
* Configure calculated splits for the permutation.
*
* @param options - Configuration for calculated splitting
* @returns A new PermutationBuilder instance
* @example
* ```ts
* builder.splitCalculated("user_id % 3");
* ```
*/
splitCalculated(options: SplitCalculatedOptions): PermutationBuilder {
const newInner = this.inner.splitCalculated(options);
return new PermutationBuilder(newInner);
}
/**
* Configure shuffling for the permutation.
*
* @param options - Configuration for shuffling
* @returns A new PermutationBuilder instance
* @example
* ```ts
* // Basic shuffle
* builder.shuffle({ seed: 42 });
*
* // Shuffle with clump size
* builder.shuffle({ seed: 42, clumpSize: 10 });
* ```
*/
shuffle(options: ShuffleOptions): PermutationBuilder {
const newInner = this.inner.shuffle(options);
return new PermutationBuilder(newInner);
}
/**
* Configure filtering for the permutation.
*
* @param filter - SQL filter expression
* @returns A new PermutationBuilder instance
* @example
* ```ts
* builder.filter("age > 18 AND status = 'active'");
* ```
*/
filter(filter: string): PermutationBuilder {
const newInner = this.inner.filter(filter);
return new PermutationBuilder(newInner);
}
/**
* Execute the permutation and create the destination table.
*
* @returns A Promise that resolves to the new Table instance
* @example
* ```ts
* const permutationTable = await builder.execute();
* console.log(`Created table: ${permutationTable.name}`);
* ```
*/
async execute(): Promise<Table> {
const nativeTable: NativeTable = await this.inner.execute();
return new LocalTable(nativeTable);
}
}
/**
* Create a permutation builder for the given table.
*
* @param table - The source table to create a permutation from
* @returns A PermutationBuilder instance
* @example
* ```ts
 * const builder = permutationBuilder(sourceTable)
* .splitRandom({ ratios: [0.8, 0.2], seed: 42 })
* .shuffle({ seed: 123 });
*
* const trainingTable = await builder.execute();
* ```
*/
export function permutationBuilder(table: Table): PermutationBuilder {
// Extract the inner native table from the TypeScript wrapper
const localTable = table as LocalTable;
// Access inner through type assertion since it's private
const nativeBuilder = nativePermutationBuilder(
// biome-ignore lint/suspicious/noExplicitAny: need access to private variable
(localTable as any).inner,
);
return new PermutationBuilder(nativeBuilder);
}
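An end-to-end sketch tying the builder methods together; the database path and table names are hypothetical:

```ts
import { connect, permutationBuilder } from "@lancedb/lancedb";

const db = await connect("./data/db");
const source = await db.openTable("examples");

// 80/20 random split, clumped shuffle, persisted as a new table.
const permuted = await permutationBuilder(source)
  .splitRandom({ ratios: [0.8, 0.2], seed: 42 })
  .shuffle({ seed: 123, clumpSize: 10 })
  .persist(db, "examples_permutation")
  .execute();
```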


@@ -15,42 +15,33 @@ import {
RecordBatchIterator as NativeBatchIterator,
Query as NativeQuery,
Table as NativeTable,
TakeQuery as NativeTakeQuery,
VectorQuery as NativeVectorQuery,
} from "./native";
import { Reranker } from "./rerankers";
export class RecordBatchIterator implements AsyncIterator<RecordBatch> {
private promisedInner?: Promise<NativeBatchIterator>;
private inner?: NativeBatchIterator;
export async function* RecordBatchIterator(
promisedInner: Promise<NativeBatchIterator>,
) {
const inner = await promisedInner;
constructor(promise?: Promise<NativeBatchIterator>) {
// TODO: check promise reliably so we dont need to pass two arguments.
this.promisedInner = promise;
if (inner === undefined) {
throw new Error("Invalid iterator state");
}
// biome-ignore lint/suspicious/noExplicitAny: skip
async next(): Promise<IteratorResult<RecordBatch<any>>> {
if (this.inner === undefined) {
this.inner = await this.promisedInner;
}
if (this.inner === undefined) {
throw new Error("Invalid iterator state state");
}
const n = await this.inner.next();
if (n == null) {
return Promise.resolve({ done: true, value: null });
}
const tbl = tableFromIPC(n);
if (tbl.batches.length != 1) {
for (let buffer = await inner.next(); buffer; buffer = await inner.next()) {
const { batches } = tableFromIPC(buffer);
if (batches.length !== 1) {
throw new Error("Expected only one batch");
}
return Promise.resolve({ done: false, value: tbl.batches[0] });
yield batches[0];
}
}
/* eslint-enable */
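Since queries are async-iterable, the generator form composes directly with `for await`, and a `break` in the consumer now reaches the generator's cleanup. A sketch, assuming an open table `table`:

```ts
for await (const batch of table.query().limit(100)) {
  console.log(batch.numRows);
  if (batch.numRows === 0) break; // releases the underlying stream
}
```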
class RecordBatchIterable<
NativeQueryType extends NativeQuery | NativeVectorQuery,
NativeQueryType extends NativeQuery | NativeVectorQuery | NativeTakeQuery,
> implements AsyncIterable<RecordBatch>
{
private inner: NativeQueryType;
@@ -63,7 +54,7 @@ class RecordBatchIterable<
// biome-ignore lint/suspicious/noExplicitAny: skip
[Symbol.asyncIterator](): AsyncIterator<RecordBatch<any>, any, undefined> {
return new RecordBatchIterator(
return RecordBatchIterator(
this.inner.execute(this.options?.maxBatchLength, this.options?.timeoutMs),
);
}
@@ -107,8 +98,9 @@ export interface FullTextSearchOptions {
*
* @hideconstructor
*/
export class QueryBase<NativeQueryType extends NativeQuery | NativeVectorQuery>
implements AsyncIterable<RecordBatch>
export class QueryBase<
NativeQueryType extends NativeQuery | NativeVectorQuery | NativeTakeQuery,
> implements AsyncIterable<RecordBatch>
{
/**
* @hidden
@@ -133,56 +125,6 @@ export class QueryBase<NativeQueryType extends NativeQuery | NativeVectorQuery>
fn(this.inner);
}
}
/**
* A filter statement to be applied to this query.
*
* The filter should be supplied as an SQL query string. For example:
* @example
* x > 10
* y > 0 AND y < 100
* x > 5 OR y = 'test'
*
* Filtering performance can often be improved by creating a scalar index
* on the filter column(s).
*/
where(predicate: string): this {
this.doCall((inner: NativeQueryType) => inner.onlyIf(predicate));
return this;
}
/**
* A filter statement to be applied to this query.
* @see where
* @deprecated Use `where` instead
*/
filter(predicate: string): this {
return this.where(predicate);
}
fullTextSearch(
query: string | FullTextQuery,
options?: Partial<FullTextSearchOptions>,
): this {
let columns: string[] | null = null;
if (options) {
if (typeof options.columns === "string") {
columns = [options.columns];
} else if (Array.isArray(options.columns)) {
columns = options.columns;
}
}
this.doCall((inner: NativeQueryType) => {
if (typeof query === "string") {
inner.fullTextSearch({
query: query,
columns: columns,
});
} else {
inner.fullTextSearch({ query: query.inner });
}
});
return this;
}
/**
* Return only the specified columns.
@@ -241,33 +183,6 @@ export class QueryBase<NativeQueryType extends NativeQuery | NativeVectorQuery>
return this;
}
/**
* Set the maximum number of results to return.
*
* By default, a plain search has no limit. If this method is not
* called then every valid row from the table will be returned.
*/
limit(limit: number): this {
this.doCall((inner: NativeQueryType) => inner.limit(limit));
return this;
}
offset(offset: number): this {
this.doCall((inner: NativeQueryType) => inner.offset(offset));
return this;
}
/**
* Skip searching un-indexed data. This can make search faster, but will miss
* any data that is not yet indexed.
*
* Use {@link Table#optimize} to index all un-indexed data.
*/
fastSearch(): this {
this.doCall((inner: NativeQueryType) => inner.fastSearch());
return this;
}
/**
* Whether to return the row id in the results.
*
@@ -306,10 +221,8 @@ export class QueryBase<NativeQueryType extends NativeQuery | NativeVectorQuery>
* single query)
*
*/
protected execute(
options?: Partial<QueryExecutionOptions>,
): RecordBatchIterator {
return new RecordBatchIterator(this.nativeExecute(options));
protected execute(options?: Partial<QueryExecutionOptions>) {
return RecordBatchIterator(this.nativeExecute(options));
}
/**
@@ -317,8 +230,7 @@ export class QueryBase<NativeQueryType extends NativeQuery | NativeVectorQuery>
*/
// biome-ignore lint/suspicious/noExplicitAny: skip
[Symbol.asyncIterator](): AsyncIterator<RecordBatch<any>> {
const promise = this.nativeExecute();
return new RecordBatchIterator(promise);
return RecordBatchIterator(this.nativeExecute());
}
/** Collect the results as an Arrow @see {@link ArrowTable}. */
@@ -401,6 +313,119 @@ export class QueryBase<NativeQueryType extends NativeQuery | NativeVectorQuery>
return this.inner.analyzePlan();
}
}
/**
* Returns the schema of the output that will be returned by this query.
*
* This can be used to inspect the types and names of the columns that will be
* returned by the query before executing it.
*
* @returns An Arrow Schema describing the output columns.
*/
async outputSchema(): Promise<import("./arrow").Schema> {
let schemaBuffer: Buffer;
if (this.inner instanceof Promise) {
schemaBuffer = await this.inner.then((inner) => inner.outputSchema());
} else {
schemaBuffer = await this.inner.outputSchema();
}
const schema = tableFromIPC(schemaBuffer).schema;
return schema;
}
}
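A sketch of inspecting the projection before execution, assuming an open table `table`:

```ts
const schema = await table.query().select(["id", "vector"]).outputSchema();
console.log(schema.fields.map((f) => f.name)); // ["id", "vector"]
```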
export class StandardQueryBase<
NativeQueryType extends NativeQuery | NativeVectorQuery,
>
extends QueryBase<NativeQueryType>
implements ExecutableQuery
{
constructor(inner: NativeQueryType | Promise<NativeQueryType>) {
super(inner);
}
/**
* A filter statement to be applied to this query.
*
* The filter should be supplied as an SQL query string. For example:
* @example
* x > 10
* y > 0 AND y < 100
* x > 5 OR y = 'test'
*
* Filtering performance can often be improved by creating a scalar index
* on the filter column(s).
*/
where(predicate: string): this {
this.doCall((inner: NativeQueryType) => inner.onlyIf(predicate));
return this;
}
/**
* A filter statement to be applied to this query.
* @see where
* @deprecated Use `where` instead
*/
filter(predicate: string): this {
return this.where(predicate);
}
fullTextSearch(
query: string | FullTextQuery,
options?: Partial<FullTextSearchOptions>,
): this {
let columns: string[] | null = null;
if (options) {
if (typeof options.columns === "string") {
columns = [options.columns];
} else if (Array.isArray(options.columns)) {
columns = options.columns;
}
}
this.doCall((inner: NativeQueryType) => {
if (typeof query === "string") {
inner.fullTextSearch({
query: query,
columns: columns,
});
} else {
inner.fullTextSearch({ query: query.inner });
}
});
return this;
}
/**
* Set the maximum number of results to return.
*
* By default, a plain search has no limit. If this method is not
* called then every valid row from the table will be returned.
*/
limit(limit: number): this {
this.doCall((inner: NativeQueryType) => inner.limit(limit));
return this;
}
/**
* Set the number of rows to skip before returning results.
*
* This is useful for pagination.
*/
offset(offset: number): this {
this.doCall((inner: NativeQueryType) => inner.offset(offset));
return this;
}
/**
* Skip searching un-indexed data. This can make search faster, but will miss
* any data that is not yet indexed.
*
* Use {@link Table#optimize} to index all un-indexed data.
*/
fastSearch(): this {
this.doCall((inner: NativeQueryType) => inner.fastSearch());
return this;
}
}
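The filter and pagination methods now live on `StandardQueryBase`; a paging sketch (the column and values are hypothetical):

```ts
// Page 3 of filtered results, 20 rows per page.
const rows = await table
  .query()
  .where("status = 'active'")
  .limit(20)
  .offset(40)
  .toArray();
```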
/**
@@ -419,7 +444,7 @@ export interface ExecutableQuery {}
*
* @hideconstructor
*/
export class VectorQuery extends QueryBase<NativeVectorQuery> {
export class VectorQuery extends StandardQueryBase<NativeVectorQuery> {
/**
* @hidden
*/
@@ -679,13 +704,24 @@ export class VectorQuery extends QueryBase<NativeVectorQuery> {
}
}
/**
* A query that returns a subset of the rows in the table.
*
* @hideconstructor
*/
export class TakeQuery extends QueryBase<NativeTakeQuery> {
constructor(inner: NativeTakeQuery) {
super(inner);
}
}
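A sketch of the two take-style entry points on `Table` (added further down in this diff) that produce a `TakeQuery`; offsets and row ids are illustrative:

```ts
const byOffset = await table.takeOffsets([0, 5, 9]).toArrow();
const byRowId = await table.takeRowIds([12, 42]).toArrow();
```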
/** A builder for LanceDB queries.
*
* @see {@link Table#query}, {@link Table#search}
*
* @hideconstructor
*/
export class Query extends QueryBase<NativeQuery> {
export class Query extends StandardQueryBase<NativeQuery> {
/**
* @hidden
*/


@@ -326,6 +326,9 @@ export function sanitizeDictionary(typeLike: object) {
// biome-ignore lint/suspicious/noExplicitAny: skip
export function sanitizeType(typeLike: unknown): DataType<any> {
if (typeof typeLike === "string") {
return dataTypeFromName(typeLike);
}
if (typeof typeLike !== "object" || typeLike === null) {
throw Error("Expected a Type but object was null/undefined");
}
@@ -447,7 +450,7 @@ export function sanitizeType(typeLike: unknown): DataType<any> {
case Type.DurationSecond:
return new DurationSecond();
default:
throw new Error("Unrecoginized type id in schema: " + typeId);
throw new Error("Unrecognized type id in schema: " + typeId);
}
}
@@ -467,7 +470,15 @@ export function sanitizeField(fieldLike: unknown): Field {
"The field passed in is missing a `type`/`name`/`nullable` property",
);
}
const type = sanitizeType(fieldLike.type);
let type: DataType;
try {
type = sanitizeType(fieldLike.type);
} catch (error: unknown) {
throw Error(
`Unable to sanitize type for field: ${fieldLike.name} due to error: ${error}`,
{ cause: error },
);
}
const name = fieldLike.name;
if (!(typeof name === "string")) {
throw Error("The field passed in had a non-string `name` property");
@@ -581,3 +592,46 @@ function sanitizeData(
},
);
}
const constructorsByTypeName = {
null: () => new Null(),
binary: () => new Binary(),
utf8: () => new Utf8(),
bool: () => new Bool(),
int8: () => new Int8(),
int16: () => new Int16(),
int32: () => new Int32(),
int64: () => new Int64(),
uint8: () => new Uint8(),
uint16: () => new Uint16(),
uint32: () => new Uint32(),
uint64: () => new Uint64(),
float16: () => new Float16(),
float32: () => new Float32(),
float64: () => new Float64(),
datemillisecond: () => new DateMillisecond(),
dateday: () => new DateDay(),
timenanosecond: () => new TimeNanosecond(),
timemicrosecond: () => new TimeMicrosecond(),
timemillisecond: () => new TimeMillisecond(),
timesecond: () => new TimeSecond(),
intervaldaytime: () => new IntervalDayTime(),
intervalyearmonth: () => new IntervalYearMonth(),
durationnanosecond: () => new DurationNanosecond(),
durationmicrosecond: () => new DurationMicrosecond(),
durationmillisecond: () => new DurationMillisecond(),
durationsecond: () => new DurationSecond(),
} as const;
type MappableTypeName = keyof typeof constructorsByTypeName;
export function dataTypeFromName(typeName: string): DataType {
const normalizedTypeName = typeName.toLowerCase() as MappableTypeName;
const _constructor = constructorsByTypeName[normalizedTypeName];
if (!_constructor) {
throw new Error("Unrecognized type name in schema: " + typeName);
}
return _constructor();
}
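With this mapping, string type names are accepted wherever a `DataType` is expected. A sketch using the sanitize helpers directly; in practice they run internally when schema-like objects are passed in:

```ts
const field = sanitizeField({ name: "score", type: "float32", nullable: true });
console.log(field.type.toString()); // Float32
```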


@@ -35,6 +35,7 @@ import {
import {
FullTextQuery,
Query,
TakeQuery,
VectorQuery,
instanceOfFullTextQuery,
} from "./query";
@@ -336,6 +337,20 @@ export abstract class Table {
*/
abstract query(): Query;
/**
* Create a query that returns a subset of the rows in the table.
* @param offsets The offsets of the rows to return.
* @returns A builder that can be used to parameterize the query.
*/
abstract takeOffsets(offsets: number[]): TakeQuery;
/**
* Create a query that returns a subset of the rows in the table.
* @param rowIds The row ids of the rows to return.
* @returns A builder that can be used to parameterize the query.
*/
abstract takeRowIds(rowIds: number[]): TakeQuery;
/**
* Create a search query to find the nearest neighbors
* of the given query
@@ -647,6 +662,8 @@ export class LocalTable extends Table {
column,
options?.replace,
options?.waitTimeoutSeconds,
options?.name,
options?.train,
);
}
@@ -665,6 +682,14 @@ export class LocalTable extends Table {
await this.inner.waitForIndex(indexNames, timeoutSeconds);
}
takeOffsets(offsets: number[]): TakeQuery {
return new TakeQuery(this.inner.takeOffsets(offsets));
}
takeRowIds(rowIds: number[]): TakeQuery {
return new TakeQuery(this.inner.takeRowIds(rowIds));
}
query(): Query {
return new Query(this.inner);
}


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-arm64",
"version": "0.21.2",
"version": "0.22.3-beta.5",
"os": ["darwin"],
"cpu": ["arm64"],
"main": "lancedb.darwin-arm64.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-x64",
"version": "0.21.2",
"version": "0.22.3-beta.5",
"os": ["darwin"],
"cpu": ["x64"],
"main": "lancedb.darwin-x64.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-gnu",
"version": "0.21.2",
"version": "0.22.3-beta.5",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-gnu.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-musl",
"version": "0.21.2",
"version": "0.22.3-beta.5",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-musl.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-gnu",
"version": "0.21.2",
"version": "0.22.3-beta.5",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-gnu.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-musl",
"version": "0.21.2",
"version": "0.22.3-beta.5",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-musl.node",


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-arm64-msvc",
"version": "0.21.2",
"version": "0.22.3-beta.5",
"os": [
"win32"
],


@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-x64-msvc",
"version": "0.21.2",
"version": "0.22.3-beta.5",
"os": ["win32"],
"cpu": ["x64"],
"main": "lancedb.win32-x64-msvc.node",

Some files were not shown because too many files have changed in this diff.