Compare commits

...

28 Commits

Author SHA1 Message Date
qzhu
b7c816c919 add index_stats to python api 2024-03-12 16:28:15 -07:00
qzhu
34dd548bc8 init commit for test 2024-03-11 13:28:24 -07:00
Ivan Leo
553dae1607 Update default_embedding_functions.md (#1073)
Added a small bit of documentation for the `dim` feature which is
provided by the new `text-embedding-3` model series that allows users to
shorten an embedding.

Happy to discuss a bit on the phrasing but I struggled quite a bit with
getting it to work so wanted to help others who might want to use the
newer model too
2024-03-11 21:30:07 +05:30
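A minimal sketch of the documented feature, assuming an OpenAI API key is configured; the `dim` parameter is the one added in this change, and the model name `text-embedding-3-small` is illustrative:

```python
import lancedb
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector

# `dim` asks the text-embedding-3 models for a shortened embedding
func = get_registry().get("openai").create(name="text-embedding-3-small", dim=256)

class Words(LanceModel):
    text: str = func.SourceField()                      # column embeddings are computed from
    vector: Vector(func.ndims()) = func.VectorField()   # 256-dim vector column

db = lancedb.connect("/tmp/lancedb")
tbl = db.create_table("words", schema=Words)
tbl.add([{"text": "hello world"}])
```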
Weston Pace
9c7e00eec3 Remove remote integration workflow (#1076) 2024-03-07 12:00:04 -08:00
Will Jones
a7d66032aa fix: Allow converting from NativeTable to Table (#1069) 2024-03-07 08:33:46 -08:00
Lance Release
7fb8a732a5 Updating package-lock.json 2024-03-07 01:05:09 +00:00
Lance Release
f393ac3b0d Updating package-lock.json 2024-03-06 23:26:48 +00:00
Lance Release
ca83354780 Bump version: 0.4.11 → 0.4.12 2024-03-06 23:26:38 +00:00
Lance Release
272cbcad7a [python] Bump version: 0.6.1 → 0.6.2 2024-03-06 16:28:50 +00:00
Will Jones
722fe1836c fix: make checkout_latest force a reload (#1064)
#1002 accidentally changed `checkout_latest` to do nothing if the table
was already in latest mode. This PR makes sure it forces a reload of the
table (if there is a newer version).
2024-03-05 11:51:47 -08:00
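For context, a sketch of the behavior this fixes, using the sync Python API (table name and versions are illustrative):

```python
import lancedb

db = lancedb.connect("/tmp/lancedb")
tbl = db.create_table("t", [{"id": 1}])
tbl.add([{"id": 2}])     # each write creates a new table version

tbl.checkout(1)          # pin the table to an older version
tbl.checkout_latest()    # after this fix, this forces a reload of the table
                         # even if it already thought it was in "latest" mode
```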
Lei Xu
d1983602c2 chore: bump lance to 0.10.2 (#1061) 2024-03-05 10:16:07 -08:00
Weston Pace
9148cd6d47 feat: page_token / limit to native table_names function. Use async table_names function from sync table_names function (#1059)
The synchronous table_names function in python lancedb relies on arrow's
filesystem which behaves slightly differently than object_store. As a
result, the function would not work properly in GCS.

However, the async table_names function uses object_store directly and
thus is accurate. In most cases we can fall back to using the async
table_names function, and so this PR does so. The one case we cannot is
if the user is already in an async context (we can't start a new async
event loop). Soon, we can just redirect those users to use the async API
instead of the sync API, and so that case will eventually go away. For
now, we fall back to the old behavior.
2024-03-05 08:38:18 -08:00
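A sketch of the two call paths discussed above, assuming `lancedb.connect_async` as the async entry point; the native paging parameters (`page_token` / `limit`) from the title are not spelled out here since the Python signature isn't shown in the log:

```python
import asyncio
import lancedb

# sync API: now falls back to the async implementation internally,
# so listing works correctly on GCS as well (bucket name illustrative)
db = lancedb.connect("gs://my-bucket/db")
print(db.table_names())

async def main():
    adb = await lancedb.connect_async("gs://my-bucket/db")
    print(await adb.table_names())  # uses object_store directly

asyncio.run(main())
```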
Will Jones
47dbb988bf feat: more accessible errors (#1025)
The fact that we convert errors to strings makes them really hard to
work with. For example, in SaaS we want to know whether the underlying
`lance::Error` was the `InvalidInput` variant, so we can return a 400
instead of a 500.
2024-03-05 07:57:11 -08:00
Chang She
6821536d44 doc(python): document the method in fts (#982)
Co-authored-by: prrao87 <prrao87@gmail.com>
Co-authored-by: Prashanth Rao <35005448+prrao87@users.noreply.github.com>
2024-03-04 16:42:24 -08:00
Ayush Chaurasia
d6f0663671 fix(python): Few fts patches (#1039)
1. filtering with fts mutated the schema, which caused schema mismatch
problems with hybrid search as it combines fts and vector search tables.
2. fts with filter failed with `with_row_id`. This was because row_id
was calculated before filtering, which caused a size mismatch when
attaching it afterwards.
3. The fix for 1 meant that row_id is now attached before filtering, but
passing a filter to `to_lance` on a dataset that already contains
`_rowid` raises a panic from lance. So temporarily, in the case where fts
is used with a filter AND `with_row_id`, we force the user onto the
duckdb pathway.

---------

Co-authored-by: Chang She <759245+changhiskhan@users.noreply.github.com>
2024-03-04 16:41:59 -08:00
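A sketch of the query shape these patches address, assuming a table with `text` and `meta` columns already exists:

```python
import lancedb

db = lancedb.connect("/tmp/lancedb")
tbl = db.open_table("docs")
tbl.create_fts_index("text")  # tantivy-backed full-text index

# fts + filter + row ids together: the combination patched above
hits = (
    tbl.search("puppy")
    .where("meta = 'foo'")
    .with_row_id(True)
    .limit(10)
    .to_list()
)
```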
Weston Pace
ea33b68c6c fix: sanitize foreign schemas (#1058)
Arrow-js uses brittle `instanceof` checks throughout the code base.
These fail unless the library instance that produced the object is
exactly the same instance that vectordb is using. At a minimum, this
means that a user using arrow version 15 (or any version that doesn't
match exactly the version that vectordb is using) will get strange
errors when they try to use vectordb.

However, there are even cases where the versions can be perfectly
identical, and the instanceof check still fails. One such example is
when using `vite` (e.g. https://github.com/vitejs/vite/issues/3910)

This PR solves the problem in a rather brute force, but workable,
fashion. If we encounter a schema that does not pass the `instanceof`
check then we will attempt to sanitize that schema by traversing the
object and, if it has all the correct properties, constructing an
appropriate `Schema` instance via deep cloning.
2024-03-04 13:06:36 -08:00
Weston Pace
1453bf4e7a feat: reconfigure typescript linter / formatter for nodejs (#1042)
The eslint rules specify some formatting requirements that are rather
strict and conflict with vscode's default formatter. I was unable to get
auto-formatting to set up correctly. Also, eslint has quite recently
[given up on
formatting](https://eslint.org/blog/2023/10/deprecating-formatting-rules/)
and recommends using a 3rd party formatter.

This PR adds prettier as the formatter. It restores the eslint rules to
their defaults. This does mean we now have the "no explicit any" check
back on. I know that rule is pedantic but it did help me catch a few
corner cases in type testing that weren't covered in the current code.
Leaving in draft as this is dependent on other PRs.
2024-03-04 10:49:08 -08:00
Weston Pace
abaf315baf feat: add support for add to async python API (#1037)
In order to add support for `add` we needed to migrate the rust `Table`
trait to a `Table` struct and `TableInternal` trait (similar to the way
the connection is designed).

While doing this we also cleaned up some inconsistencies between the
SDKs:

* Python and Node are garbage collected languages and it can be
difficult to trigger something to be freed. The convention for these
languages is to have some kind of close method. I added a close method
to both the table and connection which will drop the underlying rust
object.
* We made significant improvements to table creation in
cc5f2136a6
for the `node` SDK. I copied these changes to the `nodejs` SDK.
* The nodejs tables were using fs to create tmp directories and these
were not getting cleaned up. This is mostly harmless but annoying and so
I changed it up a bit to ensure we clean up tmp directories.
* ~~countRows in the node SDK was returning `bigint`. I changed it to
return `number`~~ (this actually happened in a previous PR)
* Tables and connections now implement `std::fmt::Display` which is
hooked into python's `__repr__`. Node has no concept of a regular "to
string" function and so I added a `display` method.
* Python method signatures are changing so that optional parameters are
always `Optional[foo] = None` instead of something like `foo = False`.
This is because we want those defaults to be in rust whenever possible
(though we still need to mention the default in documentation).
* I changed the python `AsyncConnection/AsyncTable` classes from
abstract classes with a single implementation to just classes because we
no longer have the remote implementation in python.

Note: this does NOT add the `add` function to the remote table. This PR
was already large enough, and the remote implementation is unique
enough, that I am going to do all the remote stuff at a later date (we
should have the structure in place and correct so there shouldn't be any
refactor concerns)

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-03-04 09:27:41 -08:00
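A sketch of the async surface described above, assuming the entry point is `lancedb.connect_async`; the `close()` calls are the explicit-release methods introduced in this change:

```python
import asyncio
import lancedb

async def main():
    db = await lancedb.connect_async("/tmp/lancedb")
    tbl = await db.create_table("t", [{"vector": [1.0, 2.0], "id": 1}])
    await tbl.add([{"vector": [3.0, 4.0], "id": 2}])  # the new async `add`
    print(tbl)   # Rust Display impl, surfaced through __repr__
    tbl.close()  # explicitly drop the underlying Rust object
    db.close()

asyncio.run(main())
```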
Chang She
14b9277ac1 chore(rust): update rust version (#810) 2024-03-03 18:51:58 -08:00
Chang She
d621826b79 feat(python): allow user to override api url (#1054) 2024-03-03 18:29:47 -08:00
Chang She
08c0803ae1 chore(python): use pypi tantivy to speed up CI (#987) 2024-03-03 16:57:55 -08:00
Chang She
62632cb90b doc: fix docs deployment GHA (#1055) 2024-03-03 16:04:45 -08:00
Prashanth Rao
14566df213 [docs]: Fix issues with Rust code snippets in "quick start" (#1047)
The renaming of `vectordb` to `lancedb` broke the [quick start
docs](https://lancedb.github.io/lancedb/basic/#__tabbed_5_3) (it's
pointing to a non-existent directory). This PR fixes the code snippets
and the paths in the docs page.

Additionally, more fixes related to indexing docs below 👇🏽.
2024-03-03 15:59:57 -08:00
Louis Guitton
acfdf1b9cb Fix default_embedding_functions.md (#1043)
typo and broken table
2024-03-03 15:22:53 -08:00
Chang She
f95402af7c doc: fix langchain link (#1053) 2024-03-03 15:20:48 -08:00
Chang She
d14c9b6d9e feat(python): add model_names() method to openai embedding function (#1049)
small QoL improvement
2024-03-03 12:33:00 -08:00
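Presumably usable along these lines; the commit only states that `model_names()` was added to the OpenAI embedding function, so whether it lives on the class or on a created instance is an assumption here:

```python
from lancedb.embeddings import get_registry

openai_func = get_registry().get("openai").create()  # default model
print(openai_func.model_names())  # lists the supported OpenAI embedding models
```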
QianZhu
c1af53b787 Add create scalar index to sdk (#1033) 2024-02-29 13:32:01 -08:00
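The analogous call in the Python API is `create_scalar_index`; a brief sketch (table and column names illustrative):

```python
import lancedb

db = lancedb.connect("/tmp/lancedb")
tbl = db.open_table("t")
tbl.create_scalar_index("id", replace=True)  # BTree-style index to speed up filters on `id`
```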
Weston Pace
2a02d1394b feat: port create_table to the async python API and the remote rust API (#1031)
I've also started `ASYNC_MIGRATION.MD` to keep track of the breaking
changes from sync to async python.
2024-02-29 13:29:29 -08:00
102 changed files with 6468 additions and 1551 deletions

View File

@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 0.4.11
+current_version = 0.4.12
 commit = True
 message = Bump version: {current_version} → {new_version}
 tag = True

View File

@@ -24,10 +24,14 @@ jobs:
     environment:
       name: github-pages
       url: ${{ steps.deployment.outputs.page_url }}
-    runs-on: ubuntu-22.04
+    runs-on: buildjet-8vcpu-ubuntu-2204
     steps:
       - name: Checkout
         uses: actions/checkout@v4
+      - name: Install dependecies needed for ubuntu
+        run: |
+          sudo apt install -y protobuf-compiler libssl-dev
+          rustup update && rustup default
       - name: Set up Python
         uses: actions/setup-python@v5
         with:

View File

@@ -24,16 +24,22 @@ env:
 jobs:
   test-python:
     name: Test doc python code
-    runs-on: "ubuntu-latest"
+    runs-on: "buildjet-8vcpu-ubuntu-2204"
     steps:
       - name: Checkout
         uses: actions/checkout@v4
+      - name: Install dependecies needed for ubuntu
+        run: |
+          sudo apt install -y protobuf-compiler libssl-dev
+          rustup update && rustup default
       - name: Set up Python
         uses: actions/setup-python@v5
         with:
           python-version: 3.11
           cache: "pip"
           cache-dependency-path: "docs/test/requirements.txt"
+      - name: Rust cache
+        uses: swatinem/rust-cache@v2
       - name: Build Python
         working-directory: docs/test
         run:
@@ -48,8 +54,8 @@ jobs:
       for d in *; do cd "$d"; echo "$d".py; python "$d".py; cd ..; done
   test-node:
     name: Test doc nodejs code
-    runs-on: "ubuntu-latest"
-    timeout-minutes: 45
+    runs-on: "buildjet-8vcpu-ubuntu-2204"
+    timeout-minutes: 60
     strategy:
       fail-fast: false
     steps:
@@ -65,6 +71,7 @@ jobs:
       - name: Install dependecies needed for ubuntu
         run: |
           sudo apt install -y protobuf-compiler libssl-dev
+          rustup update && rustup default
       - name: Rust cache
         uses: swatinem/rust-cache@v2
       - name: Install node dependencies

View File

@@ -24,27 +24,6 @@ env:
   RUST_BACKTRACE: "1"
 jobs:
-  lint:
-    name: Lint
-    runs-on: ubuntu-22.04
-    defaults:
-      run:
-        shell: bash
-        working-directory: node
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-          lfs: true
-      - uses: actions/setup-node@v3
-        with:
-          node-version: 20
-          cache: 'npm'
-          cache-dependency-path: node/package-lock.json
-      - name: Lint
-        run: |
-          npm ci
-          npm run lint
   linux:
     name: Linux (Node ${{ matrix.node-version }})
     timeout-minutes: 30

View File

@@ -49,6 +49,7 @@ jobs:
       cargo clippy --all --all-features -- -D warnings
       npm ci
       npm run lint
+      npm run chkformat
   linux:
     name: Linux (NodeJS ${{ matrix.node-version }})
     timeout-minutes: 30
@@ -111,4 +112,3 @@ jobs:
       - name: Test
         run: |
           npm run test
-

View File

@@ -66,7 +66,7 @@ jobs:
       - name: Install
         run: |
           pip install -e .[tests,dev,embeddings]
-          pip install tantivy@git+https://github.com/quickwit-oss/tantivy-py#164adc87e1a033117001cf70e38c82a53014d985
+          pip install tantivy
           pip install mlx
       - name: Doctest
         run: pytest --doctest-modules python/lancedb
@@ -188,6 +188,6 @@ jobs:
         run: |
           pip install "pydantic<2"
           pip install -e .[tests]
-          pip install tantivy@git+https://github.com/quickwit-oss/tantivy-py#164adc87e1a033117001cf70e38c82a53014d985
+          pip install tantivy
       - name: Run tests
         run: pytest -m "not slow" -x -v --durations=30 python/tests

View File

@@ -1,37 +0,0 @@
name: LanceDb Cloud Integration Test
on:
workflow_run:
workflows: [Rust]
types:
- completed
env:
LANCEDB_PROJECT: ${{ secrets.LANCEDB_PROJECT }}
LANCEDB_API_KEY: ${{ secrets.LANCEDB_API_KEY }}
LANCEDB_REGION: ${{ secrets.LANCEDB_REGION }}
jobs:
test:
timeout-minutes: 30
runs-on: ubuntu-22.04
defaults:
run:
shell: bash
working-directory: rust
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
lfs: true
- uses: Swatinem/rust-cache@v2
with:
workspaces: rust
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y protobuf-compiler libssl-dev
- name: Build
run: cargo build --all-features
- name: Run Integration test
run: cargo test --tests -- --ignored

View File

@@ -10,3 +10,9 @@ repos:
     rev: v0.2.2
     hooks:
       - id: ruff
+  - repo: https://github.com/pre-commit/mirrors-prettier
+    rev: v3.1.0
+    hooks:
+      - id: prettier
+        files: "nodejs/.*"
+        exclude: nodejs/lancedb/native.d.ts|nodejs/dist/.*

View File

@@ -14,10 +14,10 @@ keywords = ["lancedb", "lance", "database", "vector", "search"]
 categories = ["database-implementations"]
 [workspace.dependencies]
-lance = { "version" = "=0.10.1", "features" = ["dynamodb"] }
-lance-index = { "version" = "=0.10.1" }
-lance-linalg = { "version" = "=0.10.1" }
-lance-testing = { "version" = "=0.10.1" }
+lance = { "version" = "=0.10.2", "features" = ["dynamodb"] }
+lance-index = { "version" = "=0.10.2" }
+lance-linalg = { "version" = "=0.10.2" }
+lance-testing = { "version" = "=0.10.2" }
 # Note that this one does not include pyarrow
 arrow = { version = "50.0", optional = false }
 arrow-array = "50.0"

View File

@@ -7,20 +7,11 @@ for brute-force scanning of the entire vector space.
 A vector index is faster but less accurate than exhaustive search (kNN or flat search).
 LanceDB provides many parameters to fine-tune the index's size, the speed of queries, and the accuracy of results.
-Currently, LanceDB does _not_ automatically create the ANN index.
-LanceDB has optimized code for kNN as well. For many use-cases, datasets under 100K vectors won't require index creation at all.
-If you can live with <100ms latency, skipping index creation is a simpler workflow while guaranteeing 100% recall.
-In the future we will look to automatically create and configure the ANN index as data comes in.
-## Types of Index
-Lance can support multiple index types, the most widely used one is `IVF_PQ`.
-- `IVF_PQ`: use **Inverted File Index (IVF)** to first divide the dataset into `N` partitions,
-and then use **Product Quantization** to compress vectors in each partition.
-- `DiskANN` (**Experimental**): organize the vector as a on-disk graph, where the vertices approximately
-represent the nearest neighbors of each vector.
+## Disk-based Index
+Lance provides an `IVF_PQ` disk-based index. It uses **Inverted File Index (IVF)** to first divide
+the dataset into `N` partitions, and then applies **Product Quantization** to compress vectors in each partition.
+See the [indexing](concepts/index_ivfpq.md) concepts guide for more information on how this works.
 ## Creating an IVF_PQ Index
@@ -88,7 +79,7 @@ You can specify the GPU device to train IVF partitions via
     )
     ```
-=== "Macos"
+=== "MacOS"
     <!-- skip-test -->
    ```python
@@ -100,7 +91,7 @@ You can specify the GPU device to train IVF partitions via
     )
     ```
-Trouble shootings:
+Troubleshooting:
 If you see `AssertionError: Torch not compiled with CUDA enabled`, you need to [install
 PyTorch with CUDA support](https://pytorch.org/get-started/locally/).
@@ -187,13 +178,21 @@ You can select the columns returned by the query using a select clause.
 ## FAQ
+### Why do I need to manually create an index?
+Currently, LanceDB does _not_ automatically create the ANN index.
+LanceDB is well-optimized for kNN (exhaustive search) via a disk-based index. For many use-cases,
+datasets of the order of ~100K vectors don't require index creation. If you can live with up to
+100ms latency, skipping index creation is a simpler workflow while guaranteeing 100% recall.
 ### When is it necessary to create an ANN vector index?
-`LanceDB` has manually-tuned SIMD code for computing vector distances.
-In our benchmarks, computing 100K pairs of 1K dimension vectors takes **less than 20ms**.
-For small datasets (< 100K rows) or applications that can accept 100ms latency, vector indices are usually not necessary.
-For large-scale or higher dimension vectors, it is beneficial to create vector index.
+`LanceDB` comes out-of-the-box with highly optimized SIMD code for computing vector similarity.
+In our benchmarks, computing distances for 100K pairs of 1K dimension vectors takes **less than 20ms**.
+We observe that for small datasets (~100K rows) or for applications that can accept 100ms latency,
+vector indices are usually not necessary.
+For large-scale or higher dimension vectors, it can beneficial to create vector index for performance.
 ### How big is my index, and how many memory will it take?

View File

@@ -46,7 +46,7 @@
 !!! info "Please also make sure you're using the same version of Arrow as in the [vectordb crate](https://github.com/lancedb/lancedb/blob/main/Cargo.toml)"
-## How to connect to a database
+## Connect to a database
 === "Python"
@@ -69,17 +69,22 @@
    ```rust
    #[tokio::main]
    async fn main() -> Result<()> {
-    --8<-- "rust/vectordb/examples/simple.rs:connect"
+    --8<-- "rust/lancedb/examples/simple.rs:connect"
    }
    ```
-    !!! info "See [examples/simple.rs](https://github.com/lancedb/lancedb/tree/main/rust/vectordb/examples/simple.rs) for a full working example."
+    !!! info "See [examples/simple.rs](https://github.com/lancedb/lancedb/tree/main/rust/lancedb/examples/simple.rs) for a full working example."
 LanceDB will create the directory if it doesn't exist (including parent directories).
 If you need a reminder of the uri, you can call `db.uri()`.
-## How to create a table
+## Create a table
+### Directly insert data to a new table
+If you have data to insert into the table at creation time, you can simultaneously create a
+table and insert the data to it.
 === "Python"
@@ -118,17 +123,18 @@ If you need a reminder of the uri, you can call `db.uri()`.
    use arrow_schema::{DataType, Schema, Field};
    use arrow_array::{RecordBatch, RecordBatchIterator};
-    --8<-- "rust/vectordb/examples/simple.rs:create_table"
+    --8<-- "rust/lancedb/examples/simple.rs:create_table"
    ```
 If the table already exists, LanceDB will raise an error by default.
-!!! info "Under the hood, LanceDB is converting the input data into an Apache Arrow table and persisting it to disk in [Lance format](https://www.github.com/lancedb/lance)."
+!!! info "Under the hood, LanceDB converts the input data into an Apache Arrow table and persists it to disk using the [Lance format](https://www.github.com/lancedb/lance)."
-### Creating an empty table
+### Create an empty table
 Sometimes you may not have the data to insert into the table at creation time.
-In this case, you can create an empty table and specify the schema.
+In this case, you can create an empty table and specify the schema, so that you can add
+data to the table at a later time (such that it conforms to the schema).
 === "Python"
@@ -147,12 +153,12 @@ In this case, you can create an empty table and specify the schema.
 === "Rust"
    ```rust
-    --8<-- "rust/vectordb/examples/simple.rs:create_empty_table"
+    --8<-- "rust/lancedb/examples/simple.rs:create_empty_table"
    ```
-## How to open an existing table
+## Open an existing table
-Once created, you can open a table using the following code:
+Once created, you can open a table as follows:
 === "Python"
@@ -169,7 +175,7 @@ Once created, you can open a table using the following code:
 === "Rust"
    ```rust
-    --8<-- "rust/vectordb/examples/simple.rs:open_with_existing_file"
+    --8<-- "rust/lancedb/examples/simple.rs:open_with_existing_file"
    ```
 If you forget the name of your table, you can always get a listing of all table names:
@@ -189,12 +195,12 @@ If you forget the name of your table, you can always get a listing of all table
 === "Rust"
    ```rust
-    --8<-- "rust/vectordb/examples/simple.rs:list_names"
+    --8<-- "rust/lancedb/examples/simple.rs:list_names"
    ```
-## How to add data to a table
+## Add data to a table
-After a table has been created, you can always add more data to it using
+After a table has been created, you can always add more data to it as follows:
 === "Python"
@@ -219,12 +225,12 @@ After a table has been created, you can always add more data to it using
 === "Rust"
    ```rust
-    --8<-- "rust/vectordb/examples/simple.rs:add"
+    --8<-- "rust/lancedb/examples/simple.rs:add"
    ```
-## How to search for (approximate) nearest neighbors
+## Search for nearest neighbors
-Once you've embedded the query, you can find its nearest neighbors using the following code:
+Once you've embedded the query, you can find its nearest neighbors as follows:
 === "Python"
@@ -245,11 +251,12 @@ Once you've embedded the query, you can find its nearest neighbors using the fol
    ```rust
    use futures::TryStreamExt;
-    --8<-- "rust/vectordb/examples/simple.rs:search"
+    --8<-- "rust/lancedb/examples/simple.rs:search"
    ```
 By default, LanceDB runs a brute-force scan over dataset to find the K nearest neighbours (KNN).
 For tables with more than 50K vectors, creating an ANN index is recommended to speed up search performance.
+LanceDB allows you to create an ANN index on a table as follows:
 === "Python"
@@ -266,12 +273,17 @@ For tables with more than 50K vectors, creating an ANN index is recommended to s
 === "Rust"
    ```rust
-    --8<-- "rust/vectordb/examples/simple.rs:create_index"
+    --8<-- "rust/lancedb/examples/simple.rs:create_index"
    ```
-Check [Approximate Nearest Neighbor (ANN) Indexes](/ann_indices.md) section for more details.
+!!! note "Why do I need to create an index manually?"
+    LanceDB does not automatically create the ANN index, for two reasons. The first is that it's optimized
+    for really fast retrievals via a disk-based index, and the second is that data and query workloads can
+    be very diverse, so there's no one-size-fits-all index configuration. LanceDB provides many parameters
+    to fine-tune index size, query latency and accuracy. See the section on
+    [ANN indexes](ann_indexes.md) for more details.
-## How to delete rows from a table
+## Delete rows from a table
 Use the `delete()` method on tables to delete rows from a table. To choose
 which rows to delete, provide a filter that matches on the metadata columns.
@@ -292,7 +304,7 @@ This can delete any number of rows that match the filter.
 === "Rust"
    ```rust
-    --8<-- "rust/vectordb/examples/simple.rs:delete"
+    --8<-- "rust/lancedb/examples/simple.rs:delete"
    ```
 The deletion predicate is a SQL expression that supports the same expressions
@@ -307,7 +319,7 @@ To see what expressions are supported, see the [SQL filters](sql.md) section.
 Read more: [vectordb.Table.delete](javascript/interfaces/Table.md#delete)
-## How to remove a table
+## Drop a table
 Use the `drop_table()` method on the database to remove a table.
@@ -333,7 +345,7 @@ Use the `drop_table()` method on the database to remove a table.
 === "Rust"
    ```rust
-    --8<-- "rust/vectordb/examples/simple.rs:drop_table"
+    --8<-- "rust/lancedb/examples/simple.rs:drop_table"
    ```
 !!! note "Bundling `vectordb` apps with Webpack"

View File

@@ -81,24 +81,4 @@ The above query will perform a search on the table `tbl` using the given query v
 * `to_pandas()`: Convert the results to a pandas DataFrame
 And there you have it! You now understand what an IVF-PQ index is, and how to create and query it in LanceDB.
-## FAQ
-### When is it necessary to create a vector index?
-LanceDB has manually-tuned SIMD code for computing vector distances. In our benchmarks, computing 100K pairs of 1K dimension vectors takes **<20ms**. For small datasets (<100K rows) or applications that can accept up to 100ms latency, vector indices are usually not necessary.
-For large-scale or higher dimension vectors, it is beneficial to create vector index.
-### How big is my index, and how much memory will it take?
-In LanceDB, all vector indices are disk-based, meaning that when responding to a vector query, only the relevant pages from the index file are loaded from disk and cached in memory. Additionally, each sub-vector is usually encoded into 1 byte PQ code.
-For example, with 1024-dimension vectors, if we choose `num_sub_vectors = 64`, each sub-vector has `1024 / 64 = 16` float32 numbers. Product quantization can lead to approximately `16 * sizeof(float32) / 1 = 64` times of space reduction.
-### How to choose `num_partitions` and `num_sub_vectors` for IVF_PQ index?
-`num_partitions` is used to decide how many partitions the first level IVF index uses. Higher number of partitions could lead to more efficient I/O during queries and better accuracy, but it takes much more time to train. On SIFT-1M dataset, our benchmark shows that keeping each partition 1K-4K rows lead to a good latency/recall.
-`num_sub_vectors` specifies how many PQ short codes to generate on each vector. Because PQ is a lossy compression of the original vector, a higher `num_sub_vectors` usually results in less space distortion, and thus yields better accuracy. However, a higher `num_sub_vectors` also causes heavier I/O and more PQ computation, and thus, higher latency. `dimension / num_sub_vectors` should be a multiple of 8 for optimum SIMD efficiency.
+To see how to create an IVF-PQ index in LanceDB, take a look at the [ANN indexes](../ann_indexes.md) section.

View File

@@ -47,6 +47,7 @@ LanceDB registers the OpenAI embeddings function in the registry by default, as
 | Parameter | Type | Default Value | Description |
 |---|---|---|---|
 | `name` | `str` | `"text-embedding-ada-002"` | The name of the model. |
+| `dim` | `int` | Model default | For OpenAI's newer text-embedding-3 model, we can specify a dimensionality that is smaller than the 1536 size. This feature supports it |
 ```python
@@ -175,7 +176,8 @@ Supported Embedding modelIDs are:
 * `cohere.embed-english-v3`
 * `cohere.embed-multilingual-v3`
-Supported paramters (to be passed in `create` method) are:
+Supported parameters (to be passed in `create` method) are:
 | Parameter | Type | Default Value | Description |
 |---|---|---|---|
 | **name** | str | "amazon.titan-embed-text-v1" | The model ID of the bedrock model to use. Supported base models for Text Embeddings: amazon.titan-embed-text-v1, cohere.embed-english-v3, cohere.embed-multilingual-v3 |

View File

@@ -43,7 +43,7 @@ pip install lancedb
 We also need to install a specific commit of `tantivy`, a dependency of the LanceDB full text search engine we will use later in this guide:
 ```
-pip install tantivy@git+https://github.com/quickwit-oss/tantivy-py#164adc87e1a033117001cf70e38c82a53014d985
+pip install tantivy
 ```
 Create a new Python file and add the following code:

View File

@@ -40,7 +40,7 @@ LanceDB and its underlying data format, Lance, are built to scale to really larg
 No. LanceDB is blazing fast (due to its disk-based index) for even brute force kNN search, within reason. In our benchmarks, computing 100K pairs of 1000-dimension vectors takes less than 20ms. For small datasets of ~100K records or applications that can accept ~100ms latency, an ANN index is usually not necessary.
-For large-scale (>1M) or higher dimension vectors, it is beneficial to create an ANN index.
+For large-scale (>1M) or higher dimension vectors, it is beneficial to create an ANN index. See the [ANN indexes](ann_indexes.md) section for more details.
 ### Does LanceDB support full-text search?

View File

@@ -75,21 +75,40 @@ applied on top of the full text search results. This can be invoked via the fami
 table.search("puppy").limit(10).where("meta='foo'").to_list()
 ```
-## Syntax
-For full-text search you can perform either a phrase query like "the old man and the sea",
-or a structured search query like "(Old AND Man) AND Sea".
-Double quotes are used to disambiguate.
-For example:
-If you intended "they could have been dogs OR cats" as a phrase query, this actually
-raises a syntax error since `OR` is a recognized operator. If you make `or` lower case,
-this avoids the syntax error. However, it is cumbersome to have to remember what will
-conflict with the query syntax. Instead, if you search using
-`table.search('"they could have been dogs OR cats"')`, then the syntax checker avoids
-checking inside the quotes.
+## Phrase queries vs. terms queries
+For full-text search you can specify either a **phrase** query like `"the old man and the sea"`,
+or a **terms** search query like `"(Old AND Man) AND Sea"`. For more details on the terms
+query syntax, see Tantivy's [query parser rules](https://docs.rs/tantivy/latest/tantivy/query/struct.QueryParser.html).
+!!! tip "Note"
+    The query parser will raise an exception on queries that are ambiguous. For example, in the query `they could have been dogs OR cats`, `OR` is capitalized so it's considered a keyword query operator. But it's ambiguous how the left part should be treated. So if you submit this search query as is, you'll get `Syntax Error: they could have been dogs OR cats`.
+```py
+# This raises a syntax error
+table.search("they could have been dogs OR cats")
+```
+On the other hand, lowercasing `OR` to `or` will work, because there are no capitalized logical operators and
+the query is treated as a phrase query.
+```py
+# This works!
+table.search("they could have been dogs or cats")
+```
+It can be cumbersome to have to remember what will cause a syntax error depending on the type of
+query you want to perform. To make this simpler, when you want to perform a phrase query, you can
+enforce it in one of two ways:
+1. Place the double-quoted query inside single quotes. For example, `table.search('"they could have been dogs OR cats"')` is treated as
+a phrase query.
+2. Explicitly declare the `phrase_query()` method. This is useful when you have a phrase query that
+itself contains double quotes. For example, `table.search('the cats OR dogs were not really "pets" at all').phrase_query()`
+is treated as a phrase query.
+In general, a query that's declared as a phrase query will be wrapped in double quotes during parsing, with nested
+double quotes replaced by single quotes.
 ## Configurations

View File

@@ -13,7 +13,7 @@ Get started using these examples and quick links.
 | Integrations | |
 |---|---:|
 | <h3> LlamaIndex </h3>LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models. Llama index integrates with LanceDB as the serverless VectorDB. <h3>[Lean More](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/LanceDBIndexDemo.html) </h3> |<img src="../assets/llama-index.jpg" alt="image" width="150" height="auto">|
-| <h3>Langchain</h3>Langchain allows building applications with LLMs through composability <h3>[Lean More](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lancedb.html) | <img src="../assets/langchain.png" alt="image" width="150" height="auto">|
+| <h3>Langchain</h3>Langchain allows building applications with LLMs through composability <h3>[Lean More](https://python.langchain.com/docs/integrations/vectorstores/lancedb) | <img src="../assets/langchain.png" alt="image" width="150" height="auto">|
 | <h3>Langchain TS</h3> Javascript bindings for Langchain. It integrates with LanceDB's serverless vectordb allowing you to build powerful AI applications through composibility using only serverless functions. <h3>[Learn More]( https://js.langchain.com/docs/modules/data_connection/vectorstores/integrations/lancedb) | <img src="../assets/langchain.png" alt="image" width="150" height="auto">|
 | <h3>Voxel51</h3> It is an open source toolkit that enables you to build better computer vision workflows by improving the quality of your datasets and delivering insights about your models.<h3>[Learn More](./voxel51.md) | <img src="../assets/voxel.gif" alt="image" width="150" height="auto">|
 | <h3>PromptTools</h3> Offers a set of free, open-source tools for testing and experimenting with models, prompts, and configurations. The core idea is to enable developers to evaluate prompts using familiar interfaces like code and notebooks. You can use it to experiment with different configurations of LanceDB, and test how LanceDB integrates with the LLM of your choice.<h3>[Learn More](./prompttools.md) | <img src="../assets/prompttools.jpeg" alt="image" width="150" height="auto">|

View File

@@ -24,6 +24,12 @@ pip install lancedb
 ::: lancedb.query.LanceQueryBuilder
+::: lancedb.query.LanceVectorQueryBuilder
+::: lancedb.query.LanceFtsQueryBuilder
+::: lancedb.query.LanceHybridQueryBuilder
 ## Embeddings
 ::: lancedb.embeddings.registry.EmbeddingFunctionRegistry
@@ -62,10 +68,22 @@ pip install lancedb
 ## Integrations
-### Pydantic
+## Pydantic
 ::: lancedb.pydantic.pydantic_to_schema
 ::: lancedb.pydantic.vector
 ::: lancedb.pydantic.LanceModel
+## Reranking
+::: lancedb.rerankers.linear_combination.LinearCombinationReranker
+::: lancedb.rerankers.cohere.CohereReranker
+::: lancedb.rerankers.colbert.ColbertReranker
+::: lancedb.rerankers.cross_encoder.CrossEncoderReranker
+::: lancedb.rerankers.openai.OpenaiReranker

View File

@@ -13,5 +13,10 @@ module.exports = {
   },
   rules: {
     "@typescript-eslint/method-signature-style": "off",
+    "@typescript-eslint/quotes": "off",
+    "@typescript-eslint/semi": "off",
+    "@typescript-eslint/explicit-function-return-type": "off",
+    "@typescript-eslint/space-before-function-paren": "off",
+    "@typescript-eslint/indent": "off",
   }
 }

node/package-lock.json (generated, 87 lines)
View File

@@ -1,12 +1,12 @@
 {
   "name": "vectordb",
-  "version": "0.4.11",
+  "version": "0.4.12",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "vectordb",
-      "version": "0.4.11",
+      "version": "0.4.12",
       "cpu": [
         "x64",
         "arm64"
@@ -18,9 +18,7 @@
         "win32"
       ],
       "dependencies": {
-        "@apache-arrow/ts": "^14.0.2",
         "@neon-rs/load": "^0.0.74",
-        "apache-arrow": "^14.0.2",
         "axios": "^1.4.0"
       },
       "devDependencies": {
@@ -33,6 +31,7 @@
         "@types/temp": "^0.9.1",
         "@types/uuid": "^9.0.3",
         "@typescript-eslint/eslint-plugin": "^5.59.1",
+        "apache-arrow-old": "npm:apache-arrow@13.0.0",
         "cargo-cp-artifact": "^0.1",
         "chai": "^4.3.7",
         "chai-as-promised": "^7.1.1",
@@ -53,11 +52,15 @@
         "uuid": "^9.0.0"
       },
       "optionalDependencies": {
-        "@lancedb/vectordb-darwin-arm64": "0.4.11",
-        "@lancedb/vectordb-darwin-x64": "0.4.11",
-        "@lancedb/vectordb-linux-arm64-gnu": "0.4.11",
-        "@lancedb/vectordb-linux-x64-gnu": "0.4.11",
-        "@lancedb/vectordb-win32-x64-msvc": "0.4.11"
+        "@lancedb/vectordb-darwin-arm64": "0.4.12",
+        "@lancedb/vectordb-darwin-x64": "0.4.12",
+        "@lancedb/vectordb-linux-arm64-gnu": "0.4.12",
+        "@lancedb/vectordb-linux-x64-gnu": "0.4.12",
+        "@lancedb/vectordb-win32-x64-msvc": "0.4.12"
+      },
+      "peerDependencies": {
+        "@apache-arrow/ts": "^14.0.2",
+        "apache-arrow": "^14.0.2"
       }
     },
     "node_modules/@75lb/deep-merge": {
@@ -93,6 +96,7 @@
       "version": "14.0.2",
       "resolved": "https://registry.npmjs.org/@apache-arrow/ts/-/ts-14.0.2.tgz",
       "integrity": "sha512-CtwAvLkK0CZv7xsYeCo91ml6PvlfzAmAJZkRYuz2GNBwfYufj5SVi0iuSMwIMkcU/szVwvLdzORSLa5PlF/2ug==",
+      "peer": true,
       "dependencies": {
         "@types/command-line-args": "5.2.0",
         "@types/command-line-usage": "5.0.2",
@@ -109,7 +113,8 @@
     "node_modules/@apache-arrow/ts/node_modules/@types/node": {
       "version": "20.3.0",
       "resolved": "https://registry.npmjs.org/@types/node/-/node-20.3.0.tgz",
-      "integrity": "sha512-cumHmIAf6On83X7yP+LrsEyUOf/YlociZelmpRYaGFydoaPdxdt80MAbu6vWerQT2COCp2nPvHdsbD7tHn/YlQ=="
+      "integrity": "sha512-cumHmIAf6On83X7yP+LrsEyUOf/YlociZelmpRYaGFydoaPdxdt80MAbu6vWerQT2COCp2nPvHdsbD7tHn/YlQ==",
+      "peer": true
     },
     "node_modules/@cargo-messages/android-arm-eabi": {
       "version": "0.0.160",
@@ -329,9 +334,9 @@
       }
     },
     "node_modules/@lancedb/vectordb-darwin-arm64": {
-      "version": "0.4.11",
-      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.4.11.tgz",
-      "integrity": "sha512-JDOKmFnuJPFkA7ZmrzBJolROwSjWr7yMvAbi40uLBc25YbbVezodd30u2EFtIwWwtk1GqNYRZ49FZOElKYeC/Q==",
+      "version": "0.4.12",
+      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.4.12.tgz",
+      "integrity": "sha512-38/rkJRlWXkPWXuj9onzvbrhnIWcIUQjgEp5G9v5ixPosBowm7A4j8e2Q8CJMsVSNcVX2JLqwWVldiWegZFuYw==",
       "cpu": [
         "arm64"
       ],
@@ -341,9 +346,9 @@
       ]
     },
     "node_modules/@lancedb/vectordb-darwin-x64": {
-      "version": "0.4.11",
-      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.4.11.tgz",
-      "integrity": "sha512-iy6r+8tp2v1EFgJV52jusXtxgO6NY6SkpOdX41xPqN2mQWMkfUAR9Xtks1mgknjPOIKH4MRc8ZS0jcW/UWmilQ==",
+      "version": "0.4.12",
+      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.4.12.tgz",
+      "integrity": "sha512-psE48dztyO450hXWdv9Rl9aayM2HQ1uF9wErfC0gKmDUh1N0NdVq2viDuFpZxnmCis/nvGwKlYiYT9OnYNCJ9g==",
      "cpu": [
        "x64"
      ],
@@ -353,9 +358,9 @@
      ]
    },
    "node_modules/@lancedb/vectordb-linux-arm64-gnu": {
-      "version": "0.4.11",
-      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.4.11.tgz",
-      "integrity": "sha512-5K6IVcTMuH0SZBjlqB5Gg39WC889FpTwIWKufxzQMMXrzxo5J3lKUHVoR28RRlNhDF2d9kZXBEyCpIfDFsV9iQ==",
+      "version": "0.4.12",
+      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.4.12.tgz",
+      "integrity": "sha512-xwkgF6MiF5aAdG9JG8v4ke652YxUJrhs9z4OrsEfrENnvsIQd2C5UyKMepVLdvij4BI/XPFRFWXdjPvP7S9rTA==",
      "cpu": [
        "arm64"
      ],
@@ -365,9 +370,9 @@
      ]
    },
    "node_modules/@lancedb/vectordb-linux-x64-gnu": {
-      "version": "0.4.11",
-      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.4.11.tgz",
-      "integrity": "sha512-hF9ZChsdqKqqnivOzd9mE7lC3PmhZadXtwThi2RrsPiOLoEaGDfmr6Ni3amVQnB3bR8YEJtTxdQxe0NC4uW/8g==",
+      "version": "0.4.12",
+      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.4.12.tgz",
+      "integrity": "sha512-gJqYR0aymrS+C60xc4EQPzmQ5/69XfeFv2ofBvAj7qW+c6BcnoAcfVl+7s1IrcWeGz251sm5cD5Lx4AzJd89dA==",
      "cpu": [
        "x64"
      ],
@@ -377,9 +382,9 @@
      ]
    },
    "node_modules/@lancedb/vectordb-win32-x64-msvc": {
-      "version": "0.4.11",
-      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.4.11.tgz",
-      "integrity": "sha512-0+9ut1ccKoqIyGxsVixwx3771Z+DXpl5WfSmOeA8kf3v3jlOg2H+0YUahiXLDid2ju+yeLPrAUYm7A1gKHVhew==",
+      "version": "0.4.12",
+      "resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.4.12.tgz",
+      "integrity": "sha512-LhCzpyEeBUyO6L2fuVqeP3mW8kYDryyU9PNqcM01m88sZB1Do6AlwiM+GjPRQ0SpzD0LK9oxQqSmJrdcNGqjbw==",
      "cpu": [
        "x64"
      ],
@@ -948,6 +953,7 @@
      "version": "14.0.2",
      "resolved": "https://registry.npmjs.org/apache-arrow/-/apache-arrow-14.0.2.tgz",
      "integrity": "sha512-EBO2xJN36/XoY81nhLcwCJgFwkboDZeyNQ+OPsG7bCoQjc2BT0aTyH/MR6SrL+LirSNz+cYqjGRlupMMlP1aEg==",
+      "peer": true,
      "dependencies": {
        "@types/command-line-args": "5.2.0",
        "@types/command-line-usage": "5.0.2",
@@ -964,10 +970,39 @@
        "arrow2csv": "bin/arrow2csv.js"
      }
    },
+    "node_modules/apache-arrow-old": {
+      "name": "apache-arrow",
+      "version": "13.0.0",
+      "resolved": "https://registry.npmjs.org/apache-arrow/-/apache-arrow-13.0.0.tgz",
+      "integrity": "sha512-3gvCX0GDawWz6KFNC28p65U+zGh/LZ6ZNKWNu74N6CQlKzxeoWHpi4CgEQsgRSEMuyrIIXi1Ea2syja7dwcHvw==",
+      "dev": true,
+      "dependencies": {
+        "@types/command-line-args": "5.2.0",
+        "@types/command-line-usage": "5.0.2",
+        "@types/node": "20.3.0",
+        "@types/pad-left": "2.1.1",
+        "command-line-args": "5.2.1",
+        "command-line-usage": "7.0.1",
+        "flatbuffers": "23.5.26",
+        "json-bignum": "^0.0.3",
+        "pad-left": "^2.1.0",
+        "tslib": "^2.5.3"
+      },
+      "bin": {
+        "arrow2csv": "bin/arrow2csv.js"
+      }
+    },
+    "node_modules/apache-arrow-old/node_modules/@types/node": {
+      "version": "20.3.0",
+      "resolved": "https://registry.npmjs.org/@types/node/-/node-20.3.0.tgz",
+      "integrity": "sha512-cumHmIAf6On83X7yP+LrsEyUOf/YlociZelmpRYaGFydoaPdxdt80MAbu6vWerQT2COCp2nPvHdsbD7tHn/YlQ==",
+      "dev": true
+    },
    "node_modules/apache-arrow/node_modules/@types/node": {
      "version": "20.3.0",
      "resolved": "https://registry.npmjs.org/@types/node/-/node-20.3.0.tgz",
-      "integrity": "sha512-cumHmIAf6On83X7yP+LrsEyUOf/YlociZelmpRYaGFydoaPdxdt80MAbu6vWerQT2COCp2nPvHdsbD7tHn/YlQ=="
+      "integrity": "sha512-cumHmIAf6On83X7yP+LrsEyUOf/YlociZelmpRYaGFydoaPdxdt80MAbu6vWerQT2COCp2nPvHdsbD7tHn/YlQ==",
+      "peer": true
    },
    "node_modules/arg": {
      "version": "4.1.3",

View File

@@ -1,6 +1,6 @@
 {
   "name": "vectordb",
-  "version": "0.4.11",
+  "version": "0.4.12",
   "description": " Serverless, low-latency vector database for AI applications",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
@@ -41,6 +41,7 @@
     "@types/temp": "^0.9.1",
     "@types/uuid": "^9.0.3",
     "@typescript-eslint/eslint-plugin": "^5.59.1",
+    "apache-arrow-old": "npm:apache-arrow@13.0.0",
     "cargo-cp-artifact": "^0.1",
     "chai": "^4.3.7",
     "chai-as-promised": "^7.1.1",
@@ -87,10 +88,10 @@
     }
   },
   "optionalDependencies": {
-    "@lancedb/vectordb-darwin-arm64": "0.4.11",
-    "@lancedb/vectordb-darwin-x64": "0.4.11",
-    "@lancedb/vectordb-linux-arm64-gnu": "0.4.11",
-    "@lancedb/vectordb-linux-x64-gnu": "0.4.11",
-    "@lancedb/vectordb-win32-x64-msvc": "0.4.11"
+    "@lancedb/vectordb-darwin-arm64": "0.4.12",
+    "@lancedb/vectordb-darwin-x64": "0.4.12",
+    "@lancedb/vectordb-linux-arm64-gnu": "0.4.12",
+    "@lancedb/vectordb-linux-x64-gnu": "0.4.12",
+    "@lancedb/vectordb-win32-x64-msvc": "0.4.12"
   }
 }

View File

@@ -20,19 +20,20 @@ import {
   type Vector,
   FixedSizeList,
   vectorFromArray,
-  type Schema,
+  Schema,
   Table as ArrowTable,
   RecordBatchStreamWriter,
   List,
   RecordBatch,
   makeData,
   Struct,
-  type Float,
+  Float,
   DataType,
   Binary,
   Float32
 } from 'apache-arrow'
 import { type EmbeddingFunction } from './index'
+import { sanitizeSchema } from './sanitize'
 /*
  * Options to control how a column should be converted to a vector array
@@ -201,10 +202,13 @@ export function makeArrowTable (
   }
   const opt = new MakeArrowTableOptions(options !== undefined ? options : {})
+  if (opt.schema !== undefined && opt.schema !== null) {
+    opt.schema = sanitizeSchema(opt.schema)
+  }
   const columns: Record<string, Vector> = {}
   // TODO: sample dataset to find missing columns
   // Prefer the field ordering of the schema, if present
-  const columnNames = ((options?.schema) != null) ? (options?.schema?.names as string[]) : Object.keys(data[0])
+  const columnNames = ((opt.schema) != null) ? (opt.schema.names as string[]) : Object.keys(data[0])
   for (const colName of columnNames) {
     if (data.length !== 0 && !Object.prototype.hasOwnProperty.call(data[0], colName)) {
       // The field is present in the schema, but not in the data, skip it
@@ -329,6 +333,9 @@ async function applyEmbeddings<T> (table: ArrowTable, embeddings?: EmbeddingFunc
   if (embeddings == null) {
     return table
   }
+  if (schema !== undefined && schema !== null) {
+    schema = sanitizeSchema(schema)
+  }
   // Convert from ArrowTable to Record<String, Vector>
   const colEntries = [...Array(table.numCols).keys()].map((_, idx) => {
@@ -439,6 +446,9 @@ export async function fromRecordsToBuffer<T> (
   embeddings?: EmbeddingFunction<T>,
   schema?: Schema
 ): Promise<Buffer> {
+  if (schema !== undefined && schema !== null) {
+    schema = sanitizeSchema(schema)
+  }
   const table = await convertToTable(data, embeddings, { schema })
   const writer = RecordBatchFileWriter.writeAll(table)
   return Buffer.from(await writer.toUint8Array())
@@ -456,6 +466,9 @@ export async function fromRecordsToStreamBuffer<T> (
   embeddings?: EmbeddingFunction<T>,
   schema?: Schema
 ): Promise<Buffer> {
+  if (schema !== null && schema !== undefined) {
+    schema = sanitizeSchema(schema)
+  }
   const table = await convertToTable(data, embeddings, { schema })
   const writer = RecordBatchStreamWriter.writeAll(table)
   return Buffer.from(await writer.toUint8Array())
@@ -474,6 +487,9 @@ export async function fromTableToBuffer<T> (
   embeddings?: EmbeddingFunction<T>,
   schema?: Schema
 ): Promise<Buffer> {
+  if (schema !== null && schema !== undefined) {
+    schema = sanitizeSchema(schema)
+  }
   const tableWithEmbeddings = await applyEmbeddings(table, embeddings, schema)
   const writer = RecordBatchFileWriter.writeAll(tableWithEmbeddings)
   return Buffer.from(await writer.toUint8Array())
@@ -492,6 +508,9 @@ export async function fromTableToStreamBuffer<T> (
   embeddings?: EmbeddingFunction<T>,
   schema?: Schema
 ): Promise<Buffer> {
+  if (schema !== null && schema !== undefined) {
+    schema = sanitizeSchema(schema)
+  }
   const tableWithEmbeddings = await applyEmbeddings(table, embeddings, schema)
   const writer = RecordBatchStreamWriter.writeAll(tableWithEmbeddings)
   return Buffer.from(await writer.toUint8Array())
@@ -528,5 +547,5 @@ function alignTable (table: ArrowTable, schema: Schema): ArrowTable {
 // Creates an empty Arrow Table
 export function createEmptyTable (schema: Schema): ArrowTable {
-  return new ArrowTable(schema)
+  return new ArrowTable(sanitizeSchema(schema))
 }

View File

@@ -341,6 +341,7 @@ export interface Table<T = number[]> {
 *
 * @param column The column to index
 * @param replace If false, fail if an index already exists on the column
 *                it is always set to true for remote connections
 *
 * Scalar indices, like vector indices, can be used to speed up scans. A scalar
 * index can speed up scans that contain filter expressions on the indexed column.
@@ -384,7 +385,7 @@ export interface Table<T = number[]> {
 * await table.createScalarIndex('my_col')
 * ```
 */
- createScalarIndex: (column: string, replace: boolean) => Promise<void>
createScalarIndex: (column: string, replace?: boolean) => Promise<void>
/**
 * Returns the number of rows in this table.
@@ -914,7 +915,10 @@ export class LocalTable<T = number[]> implements Table<T> {
  })
}
- async createScalarIndex (column: string, replace: boolean): Promise<void> {
async createScalarIndex (column: string, replace?: boolean): Promise<void> {
  if (replace === undefined) {
    replace = true
  }
  return tableCreateScalarIndex.call(this._tbl, column, replace)
}
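With `replace` now optional and defaulting to true on local tables, rebuilding a scalar index no longer requires the flag. A small sketch, assuming an open `table` handle as in the doc example above:

```ts
// Omitting `replace` behaves like `replace: true`, so re-running the build
// after new data arrives is safe.
await table.createScalarIndex("my_col");

// Explicit form: pass false to fail if an index already exists on the column.
await table.createScalarIndex("my_col", false);
```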


@@ -397,7 +397,7 @@ export class RemoteTable<T = number[]> implements Table<T> {
  }
  const column = indexParams.column ?? 'vector'
- const indexType = 'vector' // only vector index is supported for remote connections
  const indexType = 'vector'
  const metricType = indexParams.metric_type ?? 'L2'
  const indexCacheSize = indexParams.index_cache_size ?? null
@@ -420,8 +420,25 @@ export class RemoteTable<T = number[]> implements Table<T> {
    }
  }
- async createScalarIndex (column: string, replace: boolean): Promise<void> {
-   throw new Error('Not implemented')
  async createScalarIndex (column: string): Promise<void> {
    const indexType = 'scalar'
    const data = {
      column,
      index_type: indexType,
      replace: true
    }
    const res = await this._client.post(
      `/v1/table/${this._name}/create_scalar_index/`,
      data
    )
    if (res.status !== 200) {
      throw new Error(
        `Server Error, status: ${res.status}, ` +
        // eslint-disable-next-line @typescript-eslint/restrict-template-expressions
        `message: ${res.statusText}: ${res.data}`
      )
    }
  }
  async countRows (): Promise<number> {
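On remote tables the same method now issues a REST call instead of throwing. A sketch of the request it produces, with the endpoint and payload shape read off the code above (`remoteTable` is a placeholder handle):

```ts
// Issues: POST /v1/table/<name>/create_scalar_index/
// with body { column: "category", index_type: "scalar", replace: true }.
// Note `replace` is hard-coded to true, matching the interface doc:
// remote connections always replace an existing index.
await remoteTable.createScalarIndex("category");
```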

node/src/sanitize.ts (new file, 501 lines)

@@ -0,0 +1,501 @@
// Copyright 2023 LanceDB Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// The utilities in this file help sanitize data from the user's arrow
// library into the types expected by vectordb's arrow library. Node
// generally allows for multiple versions of the same library (and sometimes
// even multiple copies of the same version) to be installed at the same
// time. However, arrow-js uses instanceof, which expects that the input
// comes from the exact same library instance. This is not always the case,
// and so we must sanitize the input to ensure that it is compatible.
import {
Field,
Utf8,
FixedSizeBinary,
FixedSizeList,
Schema,
List,
Struct,
Float,
Bool,
Date_,
Decimal,
DataType,
Dictionary,
Binary,
Float32,
Interval,
Map_,
Duration,
Union,
Time,
Timestamp,
Type,
Null,
Int,
type Precision,
type DateUnit,
Int8,
Int16,
Int32,
Int64,
Uint8,
Uint16,
Uint32,
Uint64,
Float16,
Float64,
DateDay,
DateMillisecond,
DenseUnion,
SparseUnion,
TimeNanosecond,
TimeMicrosecond,
TimeMillisecond,
TimeSecond,
TimestampNanosecond,
TimestampMicrosecond,
TimestampMillisecond,
TimestampSecond,
IntervalDayTime,
IntervalYearMonth,
DurationNanosecond,
DurationMicrosecond,
DurationMillisecond,
DurationSecond,
} from "apache-arrow";
import type { IntBitWidth, TimeBitWidth } from "apache-arrow/type";
function sanitizeMetadata(
metadataLike?: unknown
): Map<string, string> | undefined {
if (metadataLike === undefined || metadataLike === null) {
return undefined;
}
if (!(metadataLike instanceof Map)) {
throw Error("Expected metadata, if present, to be a Map<string, string>");
}
for (const item of metadataLike) {
    if (typeof item[0] !== "string" || typeof item[1] !== "string") {
throw Error(
"Expected metadata, if present, to be a Map<string, string> but it had non-string keys or values"
);
}
}
return metadataLike as Map<string, string>;
}
function sanitizeInt(typeLike: object) {
if (
!("bitWidth" in typeLike) ||
typeof typeLike.bitWidth !== "number" ||
!("isSigned" in typeLike) ||
typeof typeLike.isSigned !== "boolean"
) {
throw Error(
"Expected an Int Type to have a `bitWidth` and `isSigned` property"
);
}
return new Int(typeLike.isSigned, typeLike.bitWidth as IntBitWidth);
}
function sanitizeFloat(typeLike: object) {
if (!("precision" in typeLike) || typeof typeLike.precision !== "number") {
throw Error("Expected a Float Type to have a `precision` property");
}
return new Float(typeLike.precision as Precision);
}
function sanitizeDecimal(typeLike: object) {
if (
!("scale" in typeLike) ||
typeof typeLike.scale !== "number" ||
!("precision" in typeLike) ||
typeof typeLike.precision !== "number" ||
!("bitWidth" in typeLike) ||
typeof typeLike.bitWidth !== "number"
) {
throw Error(
"Expected a Decimal Type to have `scale`, `precision`, and `bitWidth` properties"
);
}
return new Decimal(typeLike.scale, typeLike.precision, typeLike.bitWidth);
}
function sanitizeDate(typeLike: object) {
if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
throw Error("Expected a Date type to have a `unit` property");
}
return new Date_(typeLike.unit as DateUnit);
}
function sanitizeTime(typeLike: object) {
if (
!("unit" in typeLike) ||
typeof typeLike.unit !== "number" ||
!("bitWidth" in typeLike) ||
typeof typeLike.bitWidth !== "number"
) {
throw Error(
"Expected a Time type to have `unit` and `bitWidth` properties"
);
}
return new Time(typeLike.unit, typeLike.bitWidth as TimeBitWidth);
}
function sanitizeTimestamp(typeLike: object) {
if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
throw Error("Expected a Timestamp type to have a `unit` property");
}
let timezone = null;
if ("timezone" in typeLike && typeof typeLike.timezone === "string") {
timezone = typeLike.timezone;
}
return new Timestamp(typeLike.unit, timezone);
}
function sanitizeTypedTimestamp(
typeLike: object,
Datatype:
| typeof TimestampNanosecond
| typeof TimestampMicrosecond
| typeof TimestampMillisecond
| typeof TimestampSecond
) {
let timezone = null;
if ("timezone" in typeLike && typeof typeLike.timezone === "string") {
timezone = typeLike.timezone;
}
return new Datatype(timezone);
}
function sanitizeInterval(typeLike: object) {
if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
throw Error("Expected an Interval type to have a `unit` property");
}
return new Interval(typeLike.unit);
}
function sanitizeList(typeLike: object) {
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a List type to have an array-like `children` property"
);
}
if (typeLike.children.length !== 1) {
throw Error("Expected a List type to have exactly one child");
}
return new List(sanitizeField(typeLike.children[0]));
}
function sanitizeStruct(typeLike: object) {
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a Struct type to have an array-like `children` property"
);
}
return new Struct(typeLike.children.map((child) => sanitizeField(child)));
}
function sanitizeUnion(typeLike: object) {
if (
!("typeIds" in typeLike) ||
!("mode" in typeLike) ||
typeof typeLike.mode !== "number"
) {
throw Error(
"Expected a Union type to have `typeIds` and `mode` properties"
);
}
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a Union type to have an array-like `children` property"
);
}
return new Union(
typeLike.mode,
typeLike.typeIds as any,
typeLike.children.map((child) => sanitizeField(child))
);
}
function sanitizeTypedUnion(
typeLike: object,
UnionType: typeof DenseUnion | typeof SparseUnion
) {
if (!("typeIds" in typeLike)) {
throw Error(
"Expected a DenseUnion/SparseUnion type to have a `typeIds` property"
);
}
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a DenseUnion/SparseUnion type to have an array-like `children` property"
);
}
return new UnionType(
typeLike.typeIds as any,
typeLike.children.map((child) => sanitizeField(child))
);
}
function sanitizeFixedSizeBinary(typeLike: object) {
if (!("byteWidth" in typeLike) || typeof typeLike.byteWidth !== "number") {
throw Error(
"Expected a FixedSizeBinary type to have a `byteWidth` property"
);
}
return new FixedSizeBinary(typeLike.byteWidth);
}
function sanitizeFixedSizeList(typeLike: object) {
if (!("listSize" in typeLike) || typeof typeLike.listSize !== "number") {
throw Error("Expected a FixedSizeList type to have a `listSize` property");
}
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a FixedSizeList type to have an array-like `children` property"
);
}
if (typeLike.children.length !== 1) {
throw Error("Expected a FixedSizeList type to have exactly one child");
}
return new FixedSizeList(
typeLike.listSize,
sanitizeField(typeLike.children[0])
);
}
function sanitizeMap(typeLike: object) {
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a Map type to have an array-like `children` property"
);
}
if (!("keysSorted" in typeLike) || typeof typeLike.keysSorted !== "boolean") {
throw Error("Expected a Map type to have a `keysSorted` property");
}
return new Map_(
typeLike.children.map((field) => sanitizeField(field)) as any,
typeLike.keysSorted
);
}
function sanitizeDuration(typeLike: object) {
if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
throw Error("Expected a Duration type to have a `unit` property");
}
return new Duration(typeLike.unit);
}
function sanitizeDictionary(typeLike: object) {
if (!("id" in typeLike) || typeof typeLike.id !== "number") {
throw Error("Expected a Dictionary type to have an `id` property");
}
if (!("indices" in typeLike) || typeof typeLike.indices !== "object") {
throw Error("Expected a Dictionary type to have an `indices` property");
}
if (!("dictionary" in typeLike) || typeof typeLike.dictionary !== "object") {
throw Error("Expected a Dictionary type to have an `dictionary` property");
}
if (!("isOrdered" in typeLike) || typeof typeLike.isOrdered !== "boolean") {
throw Error("Expected a Dictionary type to have an `isOrdered` property");
}
return new Dictionary(
sanitizeType(typeLike.dictionary),
sanitizeType(typeLike.indices) as any,
typeLike.id,
typeLike.isOrdered
);
}
function sanitizeType(typeLike: unknown): DataType<any> {
if (typeof typeLike !== "object" || typeLike === null) {
throw Error("Expected a Type but object was null/undefined");
}
if (!("typeId" in typeLike) || !(typeof typeLike.typeId !== "function")) {
throw Error("Expected a Type to have a typeId function");
}
let typeId: Type;
if (typeof typeLike.typeId === "function") {
typeId = (typeLike.typeId as () => unknown)() as Type;
} else if (typeof typeLike.typeId === "number") {
typeId = typeLike.typeId as Type;
} else {
throw Error("Type's typeId property was not a function or number");
}
switch (typeId) {
case Type.NONE:
throw Error("Received a Type with a typeId of NONE");
case Type.Null:
return new Null();
case Type.Int:
return sanitizeInt(typeLike);
case Type.Float:
return sanitizeFloat(typeLike);
case Type.Binary:
return new Binary();
case Type.Utf8:
return new Utf8();
case Type.Bool:
return new Bool();
case Type.Decimal:
return sanitizeDecimal(typeLike);
case Type.Date:
return sanitizeDate(typeLike);
case Type.Time:
return sanitizeTime(typeLike);
case Type.Timestamp:
return sanitizeTimestamp(typeLike);
case Type.Interval:
return sanitizeInterval(typeLike);
case Type.List:
return sanitizeList(typeLike);
case Type.Struct:
return sanitizeStruct(typeLike);
case Type.Union:
return sanitizeUnion(typeLike);
case Type.FixedSizeBinary:
return sanitizeFixedSizeBinary(typeLike);
case Type.FixedSizeList:
return sanitizeFixedSizeList(typeLike);
case Type.Map:
return sanitizeMap(typeLike);
case Type.Duration:
return sanitizeDuration(typeLike);
case Type.Dictionary:
return sanitizeDictionary(typeLike);
case Type.Int8:
return new Int8();
case Type.Int16:
return new Int16();
case Type.Int32:
return new Int32();
case Type.Int64:
return new Int64();
case Type.Uint8:
return new Uint8();
case Type.Uint16:
return new Uint16();
case Type.Uint32:
return new Uint32();
case Type.Uint64:
return new Uint64();
case Type.Float16:
return new Float16();
case Type.Float32:
return new Float32();
case Type.Float64:
return new Float64();
case Type.DateMillisecond:
return new DateMillisecond();
case Type.DateDay:
return new DateDay();
case Type.TimeNanosecond:
return new TimeNanosecond();
case Type.TimeMicrosecond:
return new TimeMicrosecond();
case Type.TimeMillisecond:
return new TimeMillisecond();
case Type.TimeSecond:
return new TimeSecond();
case Type.TimestampNanosecond:
return sanitizeTypedTimestamp(typeLike, TimestampNanosecond);
case Type.TimestampMicrosecond:
return sanitizeTypedTimestamp(typeLike, TimestampMicrosecond);
case Type.TimestampMillisecond:
return sanitizeTypedTimestamp(typeLike, TimestampMillisecond);
case Type.TimestampSecond:
return sanitizeTypedTimestamp(typeLike, TimestampSecond);
case Type.DenseUnion:
return sanitizeTypedUnion(typeLike, DenseUnion);
case Type.SparseUnion:
return sanitizeTypedUnion(typeLike, SparseUnion);
case Type.IntervalDayTime:
return new IntervalDayTime();
case Type.IntervalYearMonth:
return new IntervalYearMonth();
case Type.DurationNanosecond:
return new DurationNanosecond();
case Type.DurationMicrosecond:
return new DurationMicrosecond();
case Type.DurationMillisecond:
return new DurationMillisecond();
case Type.DurationSecond:
return new DurationSecond();
}
}
function sanitizeField(fieldLike: unknown): Field {
if (fieldLike instanceof Field) {
return fieldLike;
}
if (typeof fieldLike !== "object" || fieldLike === null) {
throw Error("Expected a Field but object was null/undefined");
}
if (
!("type" in fieldLike) ||
!("name" in fieldLike) ||
!("nullable" in fieldLike)
) {
throw Error(
"The field passed in is missing a `type`/`name`/`nullable` property"
);
}
const type = sanitizeType(fieldLike.type);
const name = fieldLike.name;
if (!(typeof name === "string")) {
throw Error("The field passed in had a non-string `name` property");
}
const nullable = fieldLike.nullable;
if (!(typeof nullable === "boolean")) {
throw Error("The field passed in had a non-boolean `nullable` property");
}
let metadata;
if ("metadata" in fieldLike) {
metadata = sanitizeMetadata(fieldLike.metadata);
}
return new Field(name, type, nullable, metadata);
}
export function sanitizeSchema(schemaLike: unknown): Schema {
if (schemaLike instanceof Schema) {
return schemaLike;
}
if (typeof schemaLike !== "object" || schemaLike === null) {
throw Error("Expected a Schema but object was null/undefined");
}
if (!("fields" in schemaLike)) {
throw Error(
"The schema passed in does not appear to be a schema (no 'fields' property)"
);
}
let metadata;
if ("metadata" in schemaLike) {
metadata = sanitizeMetadata(schemaLike.metadata);
}
if (!Array.isArray(schemaLike.fields)) {
throw Error(
"The schema passed in had a 'fields' property but it was not an array"
);
}
const sanitizedFields = schemaLike.fields.map((field) =>
sanitizeField(field)
);
return new Schema(sanitizedFields, metadata);
}
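Since `sanitizeSchema` duck-types its input rather than relying on `instanceof`, it accepts a `Schema` from any copy of apache-arrow, or even a schema-shaped plain object. A quick sketch of both paths (the plain-object form is purely illustrative):

```ts
import { Schema, Field, Int32 } from "apache-arrow";
import { sanitizeSchema } from "./sanitize";

// A Schema from this library instance is returned unchanged.
const ours = new Schema([new Field("id", new Int32(), true)]);
console.log(sanitizeSchema(ours) === ours); // true

// A schema-shaped object (e.g. from a foreign apache-arrow copy) is rebuilt
// field by field into this instance's types, so instanceof checks succeed.
const foreign = {
  fields: [{ name: "id", type: new Int32(), nullable: true }],
  metadata: new Map<string, string>(),
};
console.log(sanitizeSchema(foreign) instanceof Schema); // true
```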


@@ -34,8 +34,20 @@ import {
  List,
  DataType,
  Dictionary,
- Int64
  Int64,
  MetadataVersion
} from 'apache-arrow'
import {
  Dictionary as OldDictionary,
  Field as OldField,
  FixedSizeList as OldFixedSizeList,
  Float32 as OldFloat32,
  Int32 as OldInt32,
  Struct as OldStruct,
  Schema as OldSchema,
  TimestampNanosecond as OldTimestampNanosecond,
  Utf8 as OldUtf8
} from 'apache-arrow-old'
import { type EmbeddingFunction } from '../embedding/embedding_function'
chaiUse(chaiAsPromised)
@@ -318,3 +330,31 @@ describe('makeEmptyTable', function () {
  await checkTableCreation(async (_, __, schema) => makeEmptyTable(schema))
})
})
describe('when using two versions of arrow', function () {
it('can still import data', async function() {
const schema = new OldSchema([
new OldField('id', new OldInt32()),
new OldField('vector', new OldFixedSizeList(1024, new OldField("item", new OldFloat32(), true))),
new OldField('struct', new OldStruct([
new OldField('nested', new OldDictionary(new OldUtf8(), new OldInt32(), 1, true)),
new OldField('ts_with_tz', new OldTimestampNanosecond("some_tz")),
new OldField('ts_no_tz', new OldTimestampNanosecond(null))
]))
]) as any
// We use arrow version 13 to emulate a "foreign arrow" and this version doesn't have metadataVersion
// In theory, this wouldn't matter. We don't rely on that property. However, it causes deepEqual to
// fail so we patch it back in
schema.metadataVersion = MetadataVersion.V5
const table = makeArrowTable(
[],
{ schema }
)
const buf = await fromTableToBuffer(table)
assert.isAbove(buf.byteLength, 0)
const actual = tableFromIPC(buf)
const actualSchema = actual.schema
assert.deepEqual(actualSchema, schema)
})
})

nodejs/.eslintignore (new file, 3 lines)

@@ -0,0 +1,3 @@
**/dist/**/*
**/native.js
**/native.d.ts


@@ -1,22 +0,0 @@
module.exports = {
env: {
browser: true,
es2021: true,
},
extends: [
"eslint:recommended",
"plugin:@typescript-eslint/recommended-type-checked",
"plugin:@typescript-eslint/stylistic-type-checked",
],
overrides: [],
parserOptions: {
project: "./tsconfig.json",
ecmaVersion: "latest",
sourceType: "module",
},
rules: {
"@typescript-eslint/method-signature-style": "off",
"@typescript-eslint/no-explicit-any": "off",
},
ignorePatterns: ["node_modules/", "dist/", "build/", "lancedb/native.*"],
};

nodejs/.prettierignore (new symbolic link)

@@ -0,0 +1 @@
.eslintignore


@@ -2,7 +2,6 @@
It will replace the NodeJS SDK when it is ready.

## Development

```sh
@@ -10,9 +9,35 @@ npm run build
npm t
```

- Generating docs
### Running lint / format

LanceDB uses eslint for linting. VS Code does not need any plugins to use eslint. However, it
may need some additional configuration. Make sure that `eslint.experimental.useFlatConfig` is
set to true. Also, if your VS Code root folder is the repo root then you will need to set
`eslint.workingDirectories` to `["nodejs"]`. To manually lint your code you can run:

```sh
npm run lint
```

LanceDB uses prettier for formatting. If you are using VS Code you will need to install the
"Prettier - Code formatter" extension. You should then configure it to be the default formatter
for TypeScript and you should enable format on save. To manually check your code's format you
can run:

```sh
npm run chkformat
```

If you need to manually format your code you can run:

```sh
npx prettier --write .
```

### Generating docs

```sh
npm run docs
cd ../docs
```


@@ -12,9 +12,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.

- import { makeArrowTable, toBuffer } from "../lancedb/arrow";
- import {
-   Int64,
import {
  convertToTable,
  fromTableToBuffer,
  makeArrowTable,
  makeEmptyTable,
} from "../dist/arrow";
import {
  Field,
  FixedSizeList,
  Float16,
@@ -23,98 +27,444 @@ import {
  tableFromIPC,
  Schema,
  Float64,
  type Table,
  Binary,
  Bool,
  Utf8,
  Struct,
  List,
  DataType,
  Dictionary,
  Int64,
  Float,
  Precision,
  MetadataVersion,
} from "apache-arrow";
import {
  Dictionary as OldDictionary,
  Field as OldField,
  FixedSizeList as OldFixedSizeList,
  Float32 as OldFloat32,
  Int32 as OldInt32,
  Struct as OldStruct,
  Schema as OldSchema,
  TimestampNanosecond as OldTimestampNanosecond,
  Utf8 as OldUtf8,
} from "apache-arrow-old";
import { type EmbeddingFunction } from "../dist/embedding/embedding_function";

- test("customized schema", function () {
-   const schema = new Schema([
-     new Field("a", new Int32(), true),
-     new Field("b", new Float32(), true),
-     new Field(
-       "c",
-       new FixedSizeList(3, new Field("item", new Float16())),
-       true
-     ),
-   ]);
-   const table = makeArrowTable(
-     [
-       { a: 1, b: 2, c: [1, 2, 3] },
-       { a: 4, b: 5, c: [4, 5, 6] },
-       { a: 7, b: 8, c: [7, 8, 9] },
-     ],
-     { schema }
-   );
-   expect(table.schema.toString()).toEqual(schema.toString());
-   const buf = toBuffer(table);
-   expect(buf.byteLength).toBeGreaterThan(0);
-   const actual = tableFromIPC(buf);
-   expect(actual.numRows).toBe(3);
-   const actualSchema = actual.schema;
-   expect(actualSchema.toString()).toStrictEqual(schema.toString());
- });
- test("default vector column", function () {
-   const schema = new Schema([
-     new Field("a", new Float64(), true),
-     new Field("b", new Float64(), true),
-     new Field("vector", new FixedSizeList(3, new Field("item", new Float32()))),
-   ]);
-   const table = makeArrowTable([
-     { a: 1, b: 2, vector: [1, 2, 3] },
-     { a: 4, b: 5, vector: [4, 5, 6] },
-     { a: 7, b: 8, vector: [7, 8, 9] },
-   ]);
-   const buf = toBuffer(table);
-   expect(buf.byteLength).toBeGreaterThan(0);
-   const actual = tableFromIPC(buf);
-   expect(actual.numRows).toBe(3);
-   const actualSchema = actual.schema;
-   expect(actualSchema.toString()).toEqual(actualSchema.toString());
- });
- test("2 vector columns", function () {
-   const schema = new Schema([
-     new Field("a", new Float64()),
-     new Field("b", new Float64()),
-     new Field("vec1", new FixedSizeList(3, new Field("item", new Float16()))),
-     new Field("vec2", new FixedSizeList(3, new Field("item", new Float16()))),
-   ]);
-   const table = makeArrowTable(
-     [
-       { a: 1, b: 2, vec1: [1, 2, 3], vec2: [2, 4, 6] },
-       { a: 4, b: 5, vec1: [4, 5, 6], vec2: [8, 10, 12] },
-       { a: 7, b: 8, vec1: [7, 8, 9], vec2: [14, 16, 18] },
-     ],
-     {
-       vectorColumns: {
-         vec1: { type: new Float16() },
-         vec2: { type: new Float16() },
-       },
-     }
-   );
-   const buf = toBuffer(table);
-   expect(buf.byteLength).toBeGreaterThan(0);
-   const actual = tableFromIPC(buf);
-   expect(actual.numRows).toBe(3);
-   const actualSchema = actual.schema;
-   expect(actualSchema.toString()).toEqual(schema.toString());
- });
- test("handles int64", function () {
-   // https://github.com/lancedb/lancedb/issues/960
-   const schema = new Schema([
-     new Field("x", new Int64(), true)
-   ]);
-   const table = makeArrowTable([
-     { x: 1 },
-     { x: 2 },
-     { x: 3 }
-   ], { schema });
-   expect(table.schema).toEqual(schema);
- })

// eslint-disable-next-line @typescript-eslint/no-explicit-any
function sampleRecords(): Array<Record<string, any>> {
  return [
    {
      binary: Buffer.alloc(5),
      boolean: false,
      number: 7,
      string: "hello",
      struct: { x: 0, y: 0 },
      list: ["anime", "action", "comedy"],
    },
  ];
}

// Helper method to verify various ways to create a table
async function checkTableCreation(
  tableCreationMethod: (
    records: Record<string, unknown>[],
    recordsReversed: Record<string, unknown>[],
    schema: Schema,
  ) => Promise<Table>,
  infersTypes: boolean,
): Promise<void> {
  const records = sampleRecords();
  const recordsReversed = [
    {
      list: ["anime", "action", "comedy"],
      struct: { x: 0, y: 0 },
      string: "hello",
      number: 7,
      boolean: false,
      binary: Buffer.alloc(5),
    },
  ];
  const schema = new Schema([
    new Field("binary", new Binary(), false),
    new Field("boolean", new Bool(), false),
    new Field("number", new Float64(), false),
    new Field("string", new Utf8(), false),
    new Field(
      "struct",
      new Struct([
        new Field("x", new Float64(), false),
        new Field("y", new Float64(), false),
      ]),
    ),
    new Field("list", new List(new Field("item", new Utf8(), false)), false),
  ]);
  const table = await tableCreationMethod(records, recordsReversed, schema);
  schema.fields.forEach((field, idx) => {
    const actualField = table.schema.fields[idx];
    // Type inference always assumes nullable=true
    if (infersTypes) {
      expect(actualField.nullable).toBe(true);
    } else {
      expect(actualField.nullable).toBe(false);
    }
    expect(table.getChild(field.name)?.type.toString()).toEqual(
      field.type.toString(),
    );
    expect(table.getChildAt(idx)?.type.toString()).toEqual(
      field.type.toString(),
    );
  });
}

describe("The function makeArrowTable", function () {
  it("will use data types from a provided schema instead of inference", async function () {
    const schema = new Schema([
      new Field("a", new Int32()),
      new Field("b", new Float32()),
      new Field("c", new FixedSizeList(3, new Field("item", new Float16()))),
      new Field("d", new Int64()),
    ]);
    const table = makeArrowTable(
      [
        { a: 1, b: 2, c: [1, 2, 3], d: 9 },
        { a: 4, b: 5, c: [4, 5, 6], d: 10 },
        { a: 7, b: 8, c: [7, 8, 9], d: null },
      ],
      { schema },
    );

    const buf = await fromTableToBuffer(table);
    expect(buf.byteLength).toBeGreaterThan(0);

    const actual = tableFromIPC(buf);
    expect(actual.numRows).toBe(3);
    const actualSchema = actual.schema;
    expect(actualSchema).toEqual(schema);
  });
it("will assume the column `vector` is FixedSizeList<Float32> by default", async function () {
const schema = new Schema([
new Field("a", new Float(Precision.DOUBLE), true),
new Field("b", new Float(Precision.DOUBLE), true),
new Field(
"vector",
new FixedSizeList(
3,
new Field("item", new Float(Precision.SINGLE), true),
),
true,
),
]);
const table = makeArrowTable([
{ a: 1, b: 2, vector: [1, 2, 3] },
{ a: 4, b: 5, vector: [4, 5, 6] },
{ a: 7, b: 8, vector: [7, 8, 9] },
]);
const buf = await fromTableToBuffer(table);
expect(buf.byteLength).toBeGreaterThan(0);
const actual = tableFromIPC(buf);
expect(actual.numRows).toBe(3);
const actualSchema = actual.schema;
expect(actualSchema).toEqual(schema);
});
it("can support multiple vector columns", async function () {
const schema = new Schema([
new Field("a", new Float(Precision.DOUBLE), true),
new Field("b", new Float(Precision.DOUBLE), true),
new Field(
"vec1",
new FixedSizeList(3, new Field("item", new Float16(), true)),
true,
),
new Field(
"vec2",
new FixedSizeList(3, new Field("item", new Float16(), true)),
true,
),
]);
const table = makeArrowTable(
[
{ a: 1, b: 2, vec1: [1, 2, 3], vec2: [2, 4, 6] },
{ a: 4, b: 5, vec1: [4, 5, 6], vec2: [8, 10, 12] },
{ a: 7, b: 8, vec1: [7, 8, 9], vec2: [14, 16, 18] },
],
{
vectorColumns: {
vec1: { type: new Float16() },
vec2: { type: new Float16() },
},
},
);
const buf = await fromTableToBuffer(table);
expect(buf.byteLength).toBeGreaterThan(0);
const actual = tableFromIPC(buf);
expect(actual.numRows).toBe(3);
const actualSchema = actual.schema;
expect(actualSchema).toEqual(schema);
});
it("will allow different vector column types", async function () {
const table = makeArrowTable([{ fp16: [1], fp32: [1], fp64: [1] }], {
vectorColumns: {
fp16: { type: new Float16() },
fp32: { type: new Float32() },
fp64: { type: new Float64() },
},
});
expect(table.getChild("fp16")?.type.children[0].type.toString()).toEqual(
new Float16().toString(),
);
expect(table.getChild("fp32")?.type.children[0].type.toString()).toEqual(
new Float32().toString(),
);
expect(table.getChild("fp64")?.type.children[0].type.toString()).toEqual(
new Float64().toString(),
);
});
it("will use dictionary encoded strings if asked", async function () {
const table = makeArrowTable([{ str: "hello" }]);
expect(DataType.isUtf8(table.getChild("str")?.type)).toBe(true);
const tableWithDict = makeArrowTable([{ str: "hello" }], {
dictionaryEncodeStrings: true,
});
expect(DataType.isDictionary(tableWithDict.getChild("str")?.type)).toBe(
true,
);
const schema = new Schema([
new Field("str", new Dictionary(new Utf8(), new Int32())),
]);
const tableWithDict2 = makeArrowTable([{ str: "hello" }], { schema });
expect(DataType.isDictionary(tableWithDict2.getChild("str")?.type)).toBe(
true,
);
});
it("will infer data types correctly", async function () {
await checkTableCreation(async (records) => makeArrowTable(records), true);
});
it("will allow a schema to be provided", async function () {
await checkTableCreation(
async (records, _, schema) => makeArrowTable(records, { schema }),
false,
);
});
it("will use the field order of any provided schema", async function () {
await checkTableCreation(
async (_, recordsReversed, schema) =>
makeArrowTable(recordsReversed, { schema }),
false,
);
});
it("will make an empty table", async function () {
await checkTableCreation(
async (_, __, schema) => makeArrowTable([], { schema }),
false,
);
});
});

class DummyEmbedding implements EmbeddingFunction<string> {
  public readonly sourceColumn = "string";
  public readonly embeddingDimension = 2;
  public readonly embeddingDataType = new Float16();

  async embed(data: string[]): Promise<number[][]> {
    return data.map(() => [0.0, 0.0]);
  }
}

class DummyEmbeddingWithNoDimension implements EmbeddingFunction<string> {
  public readonly sourceColumn = "string";

  async embed(data: string[]): Promise<number[][]> {
    return data.map(() => [0.0, 0.0]);
  }
}
describe("convertToTable", function () {
it("will infer data types correctly", async function () {
await checkTableCreation(
async (records) => await convertToTable(records),
true,
);
});
it("will allow a schema to be provided", async function () {
await checkTableCreation(
async (records, _, schema) =>
await convertToTable(records, undefined, { schema }),
false,
);
});
it("will use the field order of any provided schema", async function () {
await checkTableCreation(
async (_, recordsReversed, schema) =>
await convertToTable(recordsReversed, undefined, { schema }),
false,
);
});
it("will make an empty table", async function () {
await checkTableCreation(
async (_, __, schema) => await convertToTable([], undefined, { schema }),
false,
);
});
it("will apply embeddings", async function () {
const records = sampleRecords();
const table = await convertToTable(records, new DummyEmbedding());
expect(DataType.isFixedSizeList(table.getChild("vector")?.type)).toBe(true);
expect(table.getChild("vector")?.type.children[0].type.toString()).toEqual(
new Float16().toString(),
);
});
it("will fail if missing the embedding source column", async function () {
await expect(
convertToTable([{ id: 1 }], new DummyEmbedding()),
).rejects.toThrow("'string' was not present");
});
it("use embeddingDimension if embedding missing from table", async function () {
const schema = new Schema([new Field("string", new Utf8(), false)]);
// Simulate getting an empty Arrow table (minus embedding) from some other source
// In other words, we aren't starting with records
const table = makeEmptyTable(schema);
// If the embedding specifies the dimension we are fine
await fromTableToBuffer(table, new DummyEmbedding());
// We can also supply a schema and should be ok
const schemaWithEmbedding = new Schema([
new Field("string", new Utf8(), false),
new Field(
"vector",
new FixedSizeList(2, new Field("item", new Float16(), false)),
false,
),
]);
await fromTableToBuffer(
table,
new DummyEmbeddingWithNoDimension(),
schemaWithEmbedding,
);
// Otherwise we will get an error
await expect(
fromTableToBuffer(table, new DummyEmbeddingWithNoDimension()),
).rejects.toThrow("does not specify `embeddingDimension`");
});
it("will apply embeddings to an empty table", async function () {
const schema = new Schema([
new Field("string", new Utf8(), false),
new Field(
"vector",
new FixedSizeList(2, new Field("item", new Float16(), false)),
false,
),
]);
const table = await convertToTable([], new DummyEmbedding(), { schema });
expect(DataType.isFixedSizeList(table.getChild("vector")?.type)).toBe(true);
expect(table.getChild("vector")?.type.children[0].type.toString()).toEqual(
new Float16().toString(),
);
});
it("will complain if embeddings present but schema missing embedding column", async function () {
const schema = new Schema([new Field("string", new Utf8(), false)]);
await expect(
convertToTable([], new DummyEmbedding(), { schema }),
).rejects.toThrow("column vector was missing");
});
it("will provide a nice error if run twice", async function () {
const records = sampleRecords();
const table = await convertToTable(records, new DummyEmbedding());
// fromTableToBuffer will try and apply the embeddings again
await expect(
fromTableToBuffer(table, new DummyEmbedding()),
).rejects.toThrow("already existed");
});
});
describe("makeEmptyTable", function () {
it("will make an empty table", async function () {
await checkTableCreation(
async (_, __, schema) => makeEmptyTable(schema),
false,
);
});
});
describe("when using two versions of arrow", function () {
it("can still import data", async function () {
const schema = new OldSchema([
new OldField("id", new OldInt32()),
new OldField(
"vector",
new OldFixedSizeList(
1024,
new OldField("item", new OldFloat32(), true),
),
),
new OldField(
"struct",
new OldStruct([
new OldField(
"nested",
new OldDictionary(new OldUtf8(), new OldInt32(), 1, true),
),
new OldField("ts_with_tz", new OldTimestampNanosecond("some_tz")),
new OldField("ts_no_tz", new OldTimestampNanosecond(null)),
]),
),
// eslint-disable-next-line @typescript-eslint/no-explicit-any
]) as any;
schema.metadataVersion = MetadataVersion.V5;
const table = makeArrowTable([], { schema });
const buf = await fromTableToBuffer(table);
expect(buf.byteLength).toBeGreaterThan(0);
const actual = tableFromIPC(buf);
const actualSchema = actual.schema;
expect(actualSchema.fields.length).toBe(3);
// Deep equality gets hung up on some very minor unimportant differences
// between arrow version 13 and 15 which isn't really what we're testing for
// and so we do our own comparison that just checks name/type/nullability
function compareFields(lhs: Field, rhs: Field) {
expect(lhs.name).toEqual(rhs.name);
expect(lhs.nullable).toEqual(rhs.nullable);
expect(lhs.typeId).toEqual(rhs.typeId);
if ("children" in lhs.type && lhs.type.children !== null) {
const lhsChildren = lhs.type.children as Field[];
lhsChildren.forEach((child: Field, idx) => {
compareFields(child, rhs.type.children[idx]);
});
}
}
actualSchema.fields.forEach((field, idx) => {
      compareFields(field, schema.fields[idx]);
});
});
});


@@ -0,0 +1,88 @@
// Copyright 2024 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import * as tmp from "tmp";
import { Connection, connect } from "../dist/index.js";
describe("when connecting", () => {
let tmpDir: tmp.DirResult;
beforeEach(() => (tmpDir = tmp.dirSync({ unsafeCleanup: true })));
afterEach(() => tmpDir.removeCallback());
it("should connect", async () => {
const db = await connect(tmpDir.name);
expect(db.display()).toBe(
`NativeDatabase(uri=${tmpDir.name}, read_consistency_interval=None)`,
);
});
it("should allow read consistency interval to be specified", async () => {
const db = await connect(tmpDir.name, { readConsistencyInterval: 5 });
expect(db.display()).toBe(
`NativeDatabase(uri=${tmpDir.name}, read_consistency_interval=5s)`,
);
});
});
describe("given a connection", () => {
let tmpDir: tmp.DirResult;
let db: Connection;
beforeEach(async () => {
tmpDir = tmp.dirSync({ unsafeCleanup: true });
db = await connect(tmpDir.name);
});
afterEach(() => tmpDir.removeCallback());
it("should raise an error if opening a non-existent table", async () => {
await expect(db.openTable("non-existent")).rejects.toThrow("was not found");
});
it("should raise an error if any operation is tried after it is closed", async () => {
expect(db.isOpen()).toBe(true);
await db.close();
expect(db.isOpen()).toBe(false);
await expect(db.tableNames()).rejects.toThrow("Connection is closed");
});
it("should fail if creating table twice, unless overwrite is true", async () => {
let tbl = await db.createTable("test", [{ id: 1 }, { id: 2 }]);
await expect(tbl.countRows()).resolves.toBe(2);
await expect(
db.createTable("test", [{ id: 1 }, { id: 2 }]),
).rejects.toThrow();
tbl = await db.createTable("test", [{ id: 3 }], { mode: "overwrite" });
await expect(tbl.countRows()).resolves.toBe(1);
});
it("should respect limit and page token when listing tables", async () => {
const db = await connect(tmpDir.name);
await db.createTable("b", [{ id: 1 }]);
await db.createTable("a", [{ id: 1 }]);
await db.createTable("c", [{ id: 1 }]);
let tables = await db.tableNames();
expect(tables).toEqual(["a", "b", "c"]);
tables = await db.tableNames({ limit: 1 });
expect(tables).toEqual(["a"]);
tables = await db.tableNames({ limit: 1, startAfter: "a" });
expect(tables).toEqual(["b"]);
tables = await db.tableNames({ startAfter: "a" });
expect(tables).toEqual(["b", "c"]);
});
});


@@ -1,34 +0,0 @@
// Copyright 2024 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import * as os from "os";
import * as path from "path";
import * as fs from "fs";
import { Schema, Field, Float64 } from "apache-arrow";
import { connect } from "../dist/index.js";
test("open database", async () => {
const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "test-open"));
const db = await connect(tmpDir);
let tableNames = await db.tableNames();
expect(tableNames).toStrictEqual([]);
const tbl = await db.createTable("test", [{ id: 1 }, { id: 2 }]);
expect(await db.tableNames()).toStrictEqual(["test"]);
const schema = await tbl.schema();
expect(schema).toEqual(new Schema([new Field("id", new Float64(), true)]));
});


@@ -12,27 +12,75 @@
// See the License for the specific language governing permissions and
// limitations under the License.

- import * as os from "os";
- import * as path from "path";
import * as fs from "fs";
import * as path from "path";
import * as tmp from "tmp";
- import { connect } from "../dist";
import { Table, connect } from "../dist";
- import { Schema, Field, Float32, Int32, FixedSizeList, Int64, Float64 } from "apache-arrow";
import {
  Schema,
  Field,
  Float32,
  Int32,
  FixedSizeList,
  Int64,
  Float64,
} from "apache-arrow";
import { makeArrowTable } from "../dist/arrow";

describe("Given a table", () => {
  let tmpDir: tmp.DirResult;
  let table: Table;
  const schema = new Schema([new Field("id", new Float64(), true)]);

  beforeEach(async () => {
    tmpDir = tmp.dirSync({ unsafeCleanup: true });
    const conn = await connect(tmpDir.name);
    table = await conn.createEmptyTable("some_table", schema);
  });
  afterEach(() => tmpDir.removeCallback());

  it("be displayable", async () => {
    expect(table.display()).toMatch(
      /NativeTable\(some_table, uri=.*, read_consistency_interval=None\)/,
    );
    table.close();
    expect(table.display()).toBe("ClosedTable(some_table)");
  });
  it("should let me add data", async () => {
    await table.add([{ id: 1 }, { id: 2 }]);
    await table.add([{ id: 1 }]);
    await expect(table.countRows()).resolves.toBe(3);
  });
  it("should overwrite data if asked", async () => {
    await table.add([{ id: 1 }, { id: 2 }]);
    await table.add([{ id: 1 }], { mode: "overwrite" });
    await expect(table.countRows()).resolves.toBe(1);
  });
  it("should let me close the table", async () => {
    expect(table.isOpen()).toBe(true);
    table.close();
    expect(table.isOpen()).toBe(false);
    expect(table.countRows()).rejects.toThrow("Table some_table is closed");
  });
});

describe("Test creating index", () => {
- let tmpDir: string;
  let tmpDir: tmp.DirResult;
  const schema = new Schema([
    new Field("id", new Int32(), true),
    new Field("vec", new FixedSizeList(32, new Field("item", new Float32()))),
  ]);

  beforeEach(() => {
-   tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "index-"));
    tmpDir = tmp.dirSync({ unsafeCleanup: true });
  });
  afterEach(() => tmpDir.removeCallback());

  test("create vector index with no column", async () => {
-   const db = await connect(tmpDir);
    const db = await connect(tmpDir.name);
    const data = makeArrowTable(
      Array(300)
        .fill(1)
@@ -44,42 +92,42 @@ describe("Test creating index", () => {
      })),
      {
        schema,
      },
    );
    const tbl = await db.createTable("test", data);
    await tbl.createIndex().build();

    // check index directory
-   const indexDir = path.join(tmpDir, "test.lance", "_indices");
    const indexDir = path.join(tmpDir.name, "test.lance", "_indices");
    expect(fs.readdirSync(indexDir)).toHaveLength(1);
    // TODO: check index type.

    // Search without specifying the column
-   let query_vector = data.toArray()[5].vec.toJSON();
-   let rst = await tbl.query().nearestTo(query_vector).limit(2).toArrow();
    const queryVector = data.toArray()[5].vec.toJSON();
    const rst = await tbl.query().nearestTo(queryVector).limit(2).toArrow();
    expect(rst.numRows).toBe(2);

    // Search with specifying the column
-   let rst2 = await tbl.search(query_vector, "vec").limit(2).toArrow();
    const rst2 = await tbl.search(queryVector, "vec").limit(2).toArrow();
    expect(rst2.numRows).toBe(2);
    expect(rst.toString()).toEqual(rst2.toString());
  });

  test("no vector column available", async () => {
-   const db = await connect(tmpDir);
    const db = await connect(tmpDir.name);
    const tbl = await db.createTable(
      "no_vec",
      makeArrowTable([
        { id: 1, val: 2 },
        { id: 2, val: 3 },
      ]),
    );
    await expect(tbl.createIndex().build()).rejects.toThrow(
      "No vector column found",
    );
    await tbl.createIndex("val").build();
-   const indexDir = path.join(tmpDir, "no_vec.lance", "_indices");
    const indexDir = path.join(tmpDir.name, "no_vec.lance", "_indices");
    expect(fs.readdirSync(indexDir)).toHaveLength(1);

    for await (const r of tbl.query().filter("id > 1").select(["id"])) {
@@ -88,13 +136,13 @@ describe("Test creating index", () => {
  });

  test("two columns with different dimensions", async () => {
-   const db = await connect(tmpDir);
    const db = await connect(tmpDir.name);
    const schema = new Schema([
      new Field("id", new Int32(), true),
      new Field("vec", new FixedSizeList(32, new Field("item", new Float32()))),
      new Field(
        "vec2",
        new FixedSizeList(64, new Field("item", new Float32())),
      ),
    ]);
    const tbl = await db.createTable(
@@ -111,16 +159,17 @@ describe("Test creating index", () => {
          .fill(1)
          .map(() => Math.random()),
      })),
      { schema },
      ),
    );

    // Only build index over v1
    await expect(tbl.createIndex().build()).rejects.toThrow(
      /.*More than one vector columns found.*/,
    );
    tbl
      .createIndex("vec")
      // eslint-disable-next-line @typescript-eslint/naming-convention
      .ivf_pq({ num_partitions: 2, num_sub_vectors: 2 })
      .build();
@@ -129,7 +178,7 @@ describe("Test creating index", () => {
      .nearestTo(
        Array(32)
          .fill(1)
          .map(() => Math.random()),
      )
      .limit(2)
      .toArrow();
@@ -142,23 +191,23 @@ describe("Test creating index", () => {
        Array(64)
          .fill(1)
          .map(() => Math.random()),
        "vec",
      )
        .limit(2)
        .toArrow(),
    ).rejects.toThrow(/.*does not match the dimension.*/);

    const query64 = Array(64)
      .fill(1)
      .map(() => Math.random());
-   const rst64_1 = await tbl.query().nearestTo(query64).limit(2).toArrow();
-   const rst64_2 = await tbl.search(query64, "vec2").limit(2).toArrow();
-   expect(rst64_1.toString()).toEqual(rst64_2.toString());
-   expect(rst64_1.numRows).toBe(2);
    const rst64Query = await tbl.query().nearestTo(query64).limit(2).toArrow();
    const rst64Search = await tbl.search(query64, "vec2").limit(2).toArrow();
    expect(rst64Query.toString()).toEqual(rst64Search.toString());
    expect(rst64Query.numRows).toBe(2);
  });

  test("create scalar index", async () => {
-   const db = await connect(tmpDir);
    const db = await connect(tmpDir.name);
    const data = makeArrowTable(
      Array(300)
        .fill(1)
@@ -170,113 +219,132 @@ describe("Test creating index", () => {
      })),
      {
        schema,
      },
    );
    const tbl = await db.createTable("test", data);
    await tbl.createIndex("id").build();

    // check index directory
-   const indexDir = path.join(tmpDir, "test.lance", "_indices");
    const indexDir = path.join(tmpDir.name, "test.lance", "_indices");
    expect(fs.readdirSync(indexDir)).toHaveLength(1);
    // TODO: check index type.
  });
});

describe("Read consistency interval", () => {
- let tmpDir: string;
  let tmpDir: tmp.DirResult;
  beforeEach(() => {
-   tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "read-consistency-"));
    tmpDir = tmp.dirSync({ unsafeCleanup: true });
  });
  afterEach(() => tmpDir.removeCallback());

  // const intervals = [undefined, 0, 0.1];
  const intervals = [0];
  test.each(intervals)("read consistency interval %p", async (interval) => {
-   const db = await connect({ uri: tmpDir });
    const db = await connect(tmpDir.name);
    const table = await db.createTable("my_table", [{ id: 1 }]);
-   const db2 = await connect({ uri: tmpDir, readConsistencyInterval: interval });
    const db2 = await connect(tmpDir.name, {
      readConsistencyInterval: interval,
    });
    const table2 = await db2.openTable("my_table");
    expect(await table2.countRows()).toEqual(await table.countRows());

    await table.add([{ id: 2 }]);
    if (interval === undefined) {
-     expect(await table2.countRows()).toEqual(1n);
      expect(await table2.countRows()).toEqual(1);
      // TODO: once we implement time travel we can uncomment this part of the test.
      // await table2.checkout_latest();
      // expect(await table2.countRows()).toEqual(2);
    } else if (interval === 0) {
-     expect(await table2.countRows()).toEqual(2n);
      expect(await table2.countRows()).toEqual(2);
    } else {
      // interval == 0.1
-     expect(await table2.countRows()).toEqual(1n);
      expect(await table2.countRows()).toEqual(1);
-     await new Promise(r => setTimeout(r, 100));
      await new Promise((r) => setTimeout(r, 100));
-     expect(await table2.countRows()).toEqual(2n);
      expect(await table2.countRows()).toEqual(2);
    }
  });
});

describe("schema evolution", function () {
  let tmpDir: tmp.DirResult;
  beforeEach(() => {
    tmpDir = tmp.dirSync({ unsafeCleanup: true });
  });
  afterEach(() => {
    tmpDir.removeCallback();
  });

  // Create a new sample table
  it("can add a new column to the schema", async function () {
    const con = await connect(tmpDir.name);
    const table = await con.createTable("vectors", [
      { id: 1n, vector: [0.1, 0.2] },
    ]);
    await table.addColumns([
      { name: "price", valueSql: "cast(10.0 as float)" },
    ]);
    const expectedSchema = new Schema([
      new Field("id", new Int64(), true),
      new Field(
        "vector",
        new FixedSizeList(2, new Field("item", new Float32(), true)),
        true,
      ),
      new Field("price", new Float32(), false),
    ]);
    expect(await table.schema()).toEqual(expectedSchema);
  });

  it("can alter the columns in the schema", async function () {
    const con = await connect(tmpDir.name);
    const schema = new Schema([
      new Field("id", new Int64(), true),
      new Field(
        "vector",
        new FixedSizeList(2, new Field("item", new Float32(), true)),
        true,
      ),
      new Field("price", new Float64(), false),
    ]);
    const table = await con.createTable("vectors", [
      { id: 1n, vector: [0.1, 0.2] },
    ]);
    // Can create a non-nullable column only through addColumns at the moment.
    await table.addColumns([
      { name: "price", valueSql: "cast(10.0 as double)" },
    ]);
    expect(await table.schema()).toEqual(schema);

    await table.alterColumns([
      { path: "id", rename: "new_id" },
      { path: "price", nullable: true },
    ]);
    const expectedSchema = new Schema([
      new Field("new_id", new Int64(), true),
      new Field(
        "vector",
        new FixedSizeList(2, new Field("item", new Float32(), true)),
        true,
      ),
      new Field("price", new Float64(), true),
    ]);
    expect(await table.schema()).toEqual(expectedSchema);
  });

  it("can drop a column from the schema", async function () {
    const con = await connect(tmpDir.name);
    const table = await con.createTable("vectors", [
      { id: 1n, vector: [0.1, 0.2] },
    ]);
    await table.dropColumns(["vector"]);
    const expectedSchema = new Schema([new Field("id", new Int64(), true)]);
    expect(await table.schema()).toEqual(expectedSchema);
  });
});


@@ -0,0 +1,10 @@
{
"extends": "../tsconfig.json",
"compilerOptions": {
"outDir": "./dist/spec",
"module": "commonjs",
"target": "es2022",
"types": ["jest", "node"]
},
"include": ["**/*"]
}

nodejs/eslint.config.js (new file, 17 lines)

@@ -0,0 +1,17 @@
/* eslint-disable @typescript-eslint/naming-convention */
// @ts-check
const eslint = require("@eslint/js");
const tseslint = require("typescript-eslint");
const eslintConfigPrettier = require("eslint-config-prettier");
module.exports = tseslint.config(
eslint.configs.recommended,
eslintConfigPrettier,
...tseslint.configs.recommended,
{
rules: {
"@typescript-eslint/naming-convention": "error",
},
},
);


@@ -1,7 +1,7 @@
/** @type {import('ts-jest').JestConfigWithTsJest} */
module.exports = {
- preset: 'ts-jest',
- testEnvironment: 'node',
  preset: "ts-jest",
  testEnvironment: "node",
  moduleDirectories: ["node_modules", "./dist"],
  moduleFileExtensions: ["js", "ts"],
};


@@ -1,4 +1,4 @@
- // Copyright 2024 Lance Developers.
// Copyright 2023 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -13,23 +13,34 @@
// limitations under the License.

- import {
-   Int64,
-   Field,
-   FixedSizeList,
-   Float,
-   Float32,
-   Schema,
-   Table as ArrowTable,
-   Table,
-   Vector,
-   vectorFromArray,
-   tableToIPC,
-   DataType,
- } from "apache-arrow";
import {
  Field,
  makeBuilder,
  RecordBatchFileWriter,
  Utf8,
  type Vector,
  FixedSizeList,
  vectorFromArray,
  type Schema,
  Table as ArrowTable,
  RecordBatchStreamWriter,
  List,
  RecordBatch,
  makeData,
  Struct,
  type Float,
  DataType,
  Binary,
  Float32,
} from "apache-arrow";
import { type EmbeddingFunction } from "./embedding/embedding_function";
import { sanitizeSchema } from "./sanitize";

/** Data type accepted by NodeJS SDK */
export type Data = Record<string, unknown>[] | ArrowTable;
/*
 * Options to control how a column should be converted to a vector array
 */
export class VectorColumnOptions {
  /** Vector column type. */
  type: Float = new Float32();
@@ -41,14 +52,50 @@
/** Options to control the makeArrowTable call. */
export class MakeArrowTableOptions {
- /** Provided schema. */
  /*
   * Schema of the data.
   *
   * If this is not provided then the data type will be inferred from the
   * JS type. Integer numbers will become int64, floating point numbers
   * will become float64 and arrays will become variable sized lists with
   * the data type inferred from the first element in the array.
   *
   * The schema must be specified if there are no records (e.g. to make
   * an empty table)
   */
  schema?: Schema;

- /** Vector columns */
  /*
   * Mapping from vector column name to expected type
   *
   * Lance expects vector columns to be fixed size list arrays (i.e. tensors)
   * However, `makeArrowTable` will not infer this by default (it creates
   * variable size list arrays). This field can be used to indicate that a column
   * should be treated as a vector column and converted to a fixed size list.
   *
   * The keys should be the names of the vector columns. The value specifies the
   * expected data type of the vector columns.
   *
   * If `schema` is provided then this field is ignored.
   *
   * By default, the column named "vector" will be assumed to be a float32
   * vector column.
   */
  vectorColumns: Record<string, VectorColumnOptions> = {
    vector: new VectorColumnOptions(),
  };

  /**
   * If true then string columns will be encoded with dictionary encoding
   *
   * Set this to true if your string columns tend to repeat the same values
   * often. For more precise control use the `schema` property to specify the
   * data type for individual columns.
   *
   * If `schema` is provided then this property is ignored.
   */
  dictionaryEncodeStrings: boolean = false;

  constructor(values?: Partial<MakeArrowTableOptions>) {
    Object.assign(this, values);
  }
}
@@ -58,8 +105,30 @@ export class MakeArrowTableOptions {
 * An enhanced version of the {@link makeTable} function from Apache Arrow
 * that supports nested fields and embeddings columns.
 *
 * This function converts an array of Record<String, any> (row-major JS objects)
 * to an Arrow Table (a columnar structure)
 *
 * Note that it currently does not support nulls.
 *
 * If a schema is provided then it will be used to determine the resulting array
 * types. Fields will also be reordered to fit the order defined by the schema.
 *
 * If a schema is not provided then the types will be inferred and the field order
 * will be controlled by the order of properties in the first record. If a type
 * is inferred it will always be nullable.
 *
 * If the input is empty then a schema must be provided to create an empty table.
 *
 * When a schema is not specified then data types will be inferred. The inference
 * rules are as follows:
 *
 *  - boolean => Bool
 *  - number => Float64
 *  - String => Utf8
 *  - Buffer => Binary
 *  - Record<String, any> => Struct
 *  - Array<any> => List
 *
 * @param data input data
 * @param options options to control the makeArrowTable call.
 *
@@ -82,25 +151,27 @@ export class MakeArrowTableOptions {
 * ], { schema });
 * ```
 *
 * By default it assumes that the column named `vector` is a vector column
 * and it will be converted into a fixed size list array of type float32.
 * The `vectorColumns` option can be used to support other vector column
 * names and data types.
 *
 * ```ts
 *
 * const schema = new Schema([
 *   new Field("a", new Float64()),
 *   new Field("b", new Float64()),
 *   new Field(
 *     "vector",
 *     new FixedSizeList(3, new Field("item", new Float32()))
 *   ),
 * ]);
 * const table = makeArrowTable([
 *   { a: 1, b: 2, vector: [1, 2, 3] },
 *   { a: 4, b: 5, vector: [4, 5, 6] },
 *   { a: 7, b: 8, vector: [7, 8, 9] },
 * ]);
 * assert.deepEqual(table.schema, schema);
 * ```
 *
 * You can specify the vector column types and names using the options as well
@@ -108,81 +179,456 @@ export class MakeArrowTableOptions {
 * ```typescript
 *
 * const schema = new Schema([
 *   new Field('a', new Float64()),
 *   new Field('b', new Float64()),
 *   new Field('vec1', new FixedSizeList(3, new Field('item', new Float16()))),
 *   new Field('vec2', new FixedSizeList(3, new Field('item', new Float16())))
 * ]);
 * const table = makeArrowTable([
 *   { a: 1, b: 2, vec1: [1, 2, 3], vec2: [2, 4, 6] },
 *   { a: 4, b: 5, vec1: [4, 5, 6], vec2: [8, 10, 12] },
 *   { a: 7, b: 8, vec1: [7, 8, 9], vec2: [14, 16, 18] }
 * ], {
 *   vectorColumns: {
 *     vec1: { type: new Float16() },
 *     vec2: { type: new Float16() }
 *   }
 * })
 * assert.deepEqual(table.schema, schema)
 * ```
 */
export function makeArrowTable(
  data: Array<Record<string, unknown>>,
  options?: Partial<MakeArrowTableOptions>,
): ArrowTable {
  if (
    data.length === 0 &&
    (options?.schema === undefined || options?.schema === null)
  ) {
    throw new Error("At least one record or a schema needs to be provided");
  }

  const opt = new MakeArrowTableOptions(options !== undefined ? options : {});
  if (opt.schema !== undefined && opt.schema !== null) {
    opt.schema = sanitizeSchema(opt.schema);
  }
  const columns: Record<string, Vector> = {};
  // TODO: sample dataset to find missing columns
  // Prefer the field ordering of the schema, if present
  const columnNames =
    opt.schema != null ? (opt.schema.names as string[]) : Object.keys(data[0]);
  for (const colName of columnNames) {
    if (
      data.length !== 0 &&
      !Object.prototype.hasOwnProperty.call(data[0], colName)
    ) {
      // The field is present in the schema, but not in the data, skip it
      continue;
    }
    // Extract a single column from the records (transpose from row-major to col-major)
    let values = data.map((datum) => datum[colName]);

    // By default (type === undefined) arrow will infer the type from the JS type
    let type;
    if (opt.schema !== undefined) {
      // If there is a schema provided, then use that for the type instead
      type = opt.schema?.fields.filter((f) => f.name === colName)[0]?.type;
      if (DataType.isInt(type) && type.bitWidth === 64) {
        // wrap in BigInt to avoid bug: https://github.com/apache/arrow/issues/40051
        values = values.map((v) => {
          if (v === null) {
            return v;
          }
          if (typeof v === "bigint") {
            return v;
          }
          if (typeof v === "number") {
            return BigInt(v);
          }
          throw new Error(
            `Expected BigInt or number for column ${colName}, got ${typeof v}`,
          );
        });
      }
    } else {
      // Otherwise, check to see if this column is one of the vector columns
      // defined by opt.vectorColumns and, if so, use the fixed size list type
      const vectorColumnOptions = opt.vectorColumns[colName];
      if (vectorColumnOptions !== undefined) {
        const firstNonNullValue = values.find((v) => v !== null);
        if (Array.isArray(firstNonNullValue)) {
          type = newVectorType(
            firstNonNullValue.length,
            vectorColumnOptions.type,
          );
        } else {
          throw new Error(
            `Column ${colName} is expected to be a vector column but first non-null value is not an array. Could not determine size of vector column`,
          );
        }
      }
    }

    try {
      // Convert an Array of JS values to an arrow vector
      columns[colName] = makeVector(values, type, opt.dictionaryEncodeStrings);
    } catch (error: unknown) {
      // eslint-disable-next-line @typescript-eslint/restrict-template-expressions
      throw Error(`Could not convert column "${colName}" to Arrow: ${error}`);
    }
  }

  if (opt.schema != null) {
    // `new ArrowTable(columns)` infers a schema which may sometimes have
    // incorrect nullability (it assumes nullable=true always)
    //
    // `new ArrowTable(schema, columns)` will also fail because it will create a
    // batch with an inferred schema and then complain that the batch schema
    // does not match the provided schema.
    //
    // To work around this we first create a table with the wrong schema and
    // then patch the schema of the batches so we can use
    // `new ArrowTable(schema, batches)` which does not do any schema inference
    const firstTable = new ArrowTable(columns);
    // eslint-disable-next-line @typescript-eslint/no-non-null-assertion
    const batchesFixed = firstTable.batches.map(
      (batch) => new RecordBatch(opt.schema!, batch.data),
    );
    return new ArrowTable(opt.schema, batchesFixed);
  } else {
    return new ArrowTable(columns);
  }
}
/**
 * Create an empty Arrow table with the provided schema
 */
export function makeEmptyTable(schema: Schema): ArrowTable {
  return makeArrowTable([], { schema });
}

// Helper function to convert Array<Array<any>> to a variable sized list array
// @ts-expect-error (Vector<unknown> is not assignable to Vector<any>)
function makeListVector(lists: unknown[][]): Vector<unknown> {
  if (lists.length === 0 || lists[0].length === 0) {
    throw Error("Cannot infer list vector from empty array or empty list");
  }
  const sampleList = lists[0];
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  let inferredType: any;
  try {
    const sampleVector = makeVector(sampleList);
    inferredType = sampleVector.type;
  } catch (error: unknown) {
    // eslint-disable-next-line @typescript-eslint/restrict-template-expressions
    throw Error(`Cannot infer list vector. Cannot infer inner type: ${error}`);
  }

  const listBuilder = makeBuilder({
    type: new List(new Field("item", inferredType, true)),
  });
  for (const list of lists) {
    listBuilder.append(list);
  }
  return listBuilder.finish().toVector();
}
// Helper function to convert an Array of JS values to an Arrow Vector
function makeVector(
values: unknown[],
type?: DataType,
stringAsDictionary?: boolean,
// eslint-disable-next-line @typescript-eslint/no-explicit-any
): Vector<any> {
if (type !== undefined) {
// No need for inference, let Arrow create it
return vectorFromArray(values, type);
}
if (values.length === 0) {
throw Error(
"makeVector requires at least one value or the type must be specfied",
);
}
const sampleValue = values.find((val) => val !== null && val !== undefined);
if (sampleValue === undefined) {
throw Error(
"makeVector cannot infer the type if all values are null or undefined",
);
}
if (Array.isArray(sampleValue)) {
// Default Arrow inference doesn't handle list types
return makeListVector(values as unknown[][]);
} else if (Buffer.isBuffer(sampleValue)) {
// Default Arrow inference doesn't handle Buffer
return vectorFromArray(values, new Binary());
} else if (
!(stringAsDictionary ?? false) &&
(typeof sampleValue === "string" || sampleValue instanceof String)
) {
// If the type is string then don't use Arrow's default inference unless dictionaries are requested
// because it will always use dictionary encoding for strings
return vectorFromArray(values, new Utf8());
} else {
// Convert a JS array of values to an arrow vector
return vectorFromArray(values);
}
}
async function applyEmbeddings<T>(
table: ArrowTable,
embeddings?: EmbeddingFunction<T>,
schema?: Schema,
): Promise<ArrowTable> {
if (embeddings == null) {
return table;
}
if (schema !== undefined && schema !== null) {
schema = sanitizeSchema(schema);
}
// Convert from ArrowTable to Record<String, Vector>
const colEntries = [...Array(table.numCols).keys()].map((_, idx) => {
const name = table.schema.fields[idx].name;
// eslint-disable-next-line @typescript-eslint/no-non-null-assertion
const vec = table.getChildAt(idx)!;
return [name, vec];
});
const newColumns = Object.fromEntries(colEntries);
const sourceColumn = newColumns[embeddings.sourceColumn];
const destColumn = embeddings.destColumn ?? "vector";
const innerDestType = embeddings.embeddingDataType ?? new Float32();
if (sourceColumn === undefined) {
throw new Error(
`Cannot apply embedding function because the source column '${embeddings.sourceColumn}' was not present in the data`,
);
}
if (table.numRows === 0) {
if (Object.prototype.hasOwnProperty.call(newColumns, destColumn)) {
// We have an empty table and it already has the embedding column so no work needs to be done
// Note: we don't return an error like we did below because this is a common occurrence. For example,
// if we call convertToTable with 0 records and a schema that includes the embedding
return table;
}
if (embeddings.embeddingDimension !== undefined) {
const destType = newVectorType(
embeddings.embeddingDimension,
innerDestType,
);
newColumns[destColumn] = makeVector([], destType);
} else if (schema != null) {
const destField = schema.fields.find((f) => f.name === destColumn);
if (destField != null) {
newColumns[destColumn] = makeVector([], destField.type);
} else {
throw new Error(
`Attempt to apply embeddings to an empty table failed because schema was missing embedding column '${destColumn}'`,
);
}
} else {
throw new Error(
"Attempt to apply embeddings to an empty table when the embeddings function does not specify `embeddingDimension`",
);
}
} else {
if (Object.prototype.hasOwnProperty.call(newColumns, destColumn)) {
throw new Error(
`Attempt to apply embeddings to table failed because column ${destColumn} already existed`,
);
}
if (table.batches.length > 1) {
throw new Error(
"Internal error: `makeArrowTable` unexpectedly created a table with more than one batch",
);
}
const values = sourceColumn.toArray();
const vectors = await embeddings.embed(values as T[]);
if (vectors.length !== values.length) {
throw new Error(
"Embedding function did not return an embedding for each input element",
);
}
const destType = newVectorType(vectors[0].length, innerDestType);
newColumns[destColumn] = makeVector(vectors, destType);
}
const newTable = new ArrowTable(newColumns);
if (schema != null) {
if (schema.fields.find((f) => f.name === destColumn) === undefined) {
throw new Error(
`When using embedding functions and specifying a schema the schema should include the embedding column but the column ${destColumn} was missing`,
);
}
return alignTable(newTable, schema);
}
return newTable;
}
/*
* Convert an Array of records into an Arrow Table, optionally applying an
* embeddings function to it.
*
* This function calls `makeArrowTable` first to create the Arrow Table.
* Any provided `makeTableOptions` (e.g. a schema) will be passed on to
* that call.
*
* The embedding function will be passed a column of values (based on the
* `sourceColumn` of the embedding function) and expects to receive back
* number[][] which will be converted into a fixed size list column. By
* default this will be a fixed size list of Float32 but that can be
* customized by the `embeddingDataType` property of the embedding function.
*
* If a schema is provided in `makeTableOptions` then it should include the
 * embedding columns. If no schema is provided then embedding columns will
* be placed at the end of the table, after all of the input columns.
*/
export async function convertToTable<T>(
data: Array<Record<string, unknown>>,
embeddings?: EmbeddingFunction<T>,
makeTableOptions?: Partial<MakeArrowTableOptions>,
): Promise<ArrowTable> {
const table = makeArrowTable(data, makeTableOptions);
return await applyEmbeddings(table, embeddings, makeTableOptions?.schema);
}
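
// A minimal sketch of convertToTable with an embedding function (editorially
// hedged: the inline embedding below is a hypothetical stand-in for a real
// model, and the column names are illustrative).
async function exampleConvertToTable(): Promise<ArrowTable> {
  const embeddings: EmbeddingFunction<string> = {
    sourceColumn: "text",
    // Each input string becomes a 2-dimensional vector (illustrative only).
    embed: async (data: string[]) => data.map((s) => [s.length, 1.0]),
  };
  // The result has the "text" column plus a "vector" fixed size list column
  // computed by the embedding function.
  return await convertToTable([{ text: "hello" }, { text: "world" }], embeddings);
}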
// Creates the Arrow Type for a Vector column with dimension `dim`
function newVectorType<T extends Float>(
dim: number,
innerType: T,
): FixedSizeList<T> {
// in Lance we always default to have the elements nullable, so we need to set it to true
// otherwise we often get schema mismatches because the stored data always has schema with nullable elements
const children = new Field<T>("item", innerType, true);
return new FixedSizeList(dim, children);
}
/**
* Serialize an Array of records into a buffer using the Arrow IPC File serialization
*
* This function will call `convertToTable` and pass on `embeddings` and `schema`
*
* `schema` is required if data is empty
*/
export async function fromRecordsToBuffer<T>(
data: Array<Record<string, unknown>>,
embeddings?: EmbeddingFunction<T>,
schema?: Schema,
): Promise<Buffer> {
if (schema !== undefined && schema !== null) {
schema = sanitizeSchema(schema);
}
const table = await convertToTable(data, embeddings, { schema });
const writer = RecordBatchFileWriter.writeAll(table);
return Buffer.from(await writer.toUint8Array());
}
/**
* Serialize an Array of records into a buffer using the Arrow IPC Stream serialization
*
* This function will call `convertToTable` and pass on `embeddings` and `schema`
*
* `schema` is required if data is empty
*/
export async function fromRecordsToStreamBuffer<T>(
data: Array<Record<string, unknown>>,
embeddings?: EmbeddingFunction<T>,
schema?: Schema,
): Promise<Buffer> {
if (schema !== undefined && schema !== null) {
schema = sanitizeSchema(schema);
}
const table = await convertToTable(data, embeddings, { schema });
const writer = RecordBatchStreamWriter.writeAll(table);
return Buffer.from(await writer.toUint8Array());
}
/**
* Serialize an Arrow Table into a buffer using the Arrow IPC File serialization
*
* This function will apply `embeddings` to the table in a manner similar to
* `convertToTable`.
*
* `schema` is required if the table is empty
*/
export async function fromTableToBuffer<T>(
table: ArrowTable,
embeddings?: EmbeddingFunction<T>,
schema?: Schema,
): Promise<Buffer> {
if (schema !== undefined && schema !== null) {
schema = sanitizeSchema(schema);
}
const tableWithEmbeddings = await applyEmbeddings(table, embeddings, schema);
const writer = RecordBatchFileWriter.writeAll(tableWithEmbeddings);
return Buffer.from(await writer.toUint8Array());
}
export async function fromDataToBuffer<T>(
data: Data,
embeddings?: EmbeddingFunction<T>,
schema?: Schema,
): Promise<Buffer> {
if (schema !== undefined && schema !== null) {
schema = sanitizeSchema(schema);
}
if (data instanceof ArrowTable) {
return fromTableToBuffer(data, embeddings, schema);
} else {
const table = await convertToTable(data);
return fromTableToBuffer(table, embeddings, schema);
}
}
/**
* Serialize an Arrow Table into a buffer using the Arrow IPC Stream serialization
*
* This function will apply `embeddings` to the table in a manner similar to
* `convertToTable`.
*
* `schema` is required if the table is empty
*/
export async function fromTableToStreamBuffer<T>(
table: ArrowTable,
embeddings?: EmbeddingFunction<T>,
schema?: Schema,
): Promise<Buffer> {
const tableWithEmbeddings = await applyEmbeddings(table, embeddings, schema);
const writer = RecordBatchStreamWriter.writeAll(tableWithEmbeddings);
return Buffer.from(await writer.toUint8Array());
}
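
// A minimal sketch of the IPC round trip, assuming apache-arrow's tableFromIPC
// (imported here only for this example).
import { tableFromIPC } from "apache-arrow";

async function exampleIpcRoundTrip(): Promise<void> {
  const table = makeArrowTable([{ a: 1.5, vector: [0.1, 0.2] }]);
  // Serialize with the Arrow IPC file format...
  const buf = await fromTableToBuffer(table);
  // ...and read it back; schema and rows are preserved.
  const roundTripped = tableFromIPC(buf);
  console.assert(roundTripped.numRows === 1);
}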
function alignBatch(batch: RecordBatch, schema: Schema): RecordBatch {
const alignedChildren = [];
for (const field of schema.fields) {
const indexInBatch = batch.schema.fields?.findIndex(
(f) => f.name === field.name,
);
if (indexInBatch < 0) {
throw new Error(
`The column ${field.name} was not found in the Arrow Table`,
);
}
alignedChildren.push(batch.data.children[indexInBatch]);
}
const newData = makeData({
type: new Struct(schema.fields),
length: batch.numRows,
nullCount: batch.nullCount,
children: alignedChildren,
});
return new RecordBatch(schema, newData);
}
function alignTable(table: ArrowTable, schema: Schema): ArrowTable {
const alignedBatches = table.batches.map((batch) =>
alignBatch(batch, schema),
);
return new ArrowTable(schema, alignedBatches);
}
// Creates an empty Arrow Table
export function createEmptyTable(schema: Schema): ArrowTable {
return new ArrowTable(sanitizeSchema(schema));
}
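
// A standalone usage sketch of the helpers above, written as its own module;
// the column names and values are illustrative assumptions.
import { Field, FixedSizeList, Float16, Float32, Schema, Utf8 } from "apache-arrow";
import { makeArrowTable, makeEmptyTable } from "./arrow";

// Inference: "vector" becomes a FixedSizeList<Float32> sized from the first row.
const inferred = makeArrowTable([
  { id: 1, text: "hello", vector: [0.1, 0.2] },
  { id: 2, text: "world", vector: [0.3, 0.4] },
]);

// Override vector column types without providing a full schema.
const halfPrecision = makeArrowTable([{ vec: [1, 2, 3] }], {
  vectorColumns: { vec: { type: new Float16() } },
});

// Opt in to dictionary encoding for repetitive string columns.
const dictEncoded = makeArrowTable([{ tag: "a" }, { tag: "a" }, { tag: "b" }], {
  dictionaryEncodeStrings: true,
});

// A schema is required to build an empty table.
const empty = makeEmptyTable(
  new Schema([
    new Field("text", new Utf8(), true),
    new Field(
      "vector",
      new FixedSizeList(2, new Field("item", new Float32(), true)),
      true,
    ),
  ]),
);
console.log(inferred.numRows, halfPrecision.numRows, dictEncoded.numRows, empty.numRows);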

View File

@@ -12,26 +12,95 @@
// See the License for the specific language governing permissions and
// limitations under the License.

import { fromTableToBuffer, makeArrowTable, makeEmptyTable } from "./arrow";
import { Connection as LanceDbConnection } from "./native";
import { Table } from "./table";
import { Table as ArrowTable, Schema } from "apache-arrow";
export interface CreateTableOptions {
/**
* The mode to use when creating the table.
*
* If this is set to "create" and the table already exists then either
* an error will be thrown or, if existOk is true, then nothing will
* happen. Any provided data will be ignored.
*
* If this is set to "overwrite" then any existing table will be replaced.
*/
mode: "create" | "overwrite";
/**
* If this is true and the table already exists and the mode is "create"
* then no error will be raised.
*/
existOk: boolean;
}
export interface TableNamesOptions {
/**
* If present, only return names that come lexicographically after the
* supplied value.
*
* This can be combined with limit to implement pagination by setting this to
* the last table name from the previous page.
*/
startAfter?: string;
/** An optional limit to the number of results to return. */
limit?: number;
}
/**
 * A LanceDB Connection that allows you to open tables and create new ones.
 *
 * Connection could be local against filesystem or remote against a server.
 *
 * A Connection is intended to be a long lived object and may hold open
 * resources such as HTTP connection pools. This is generally fine and
 * a single connection should be shared if it is going to be used many
 * times. However, if you are finished with a connection, you may call
 * close to eagerly free these resources. Any call to a Connection
 * method after it has been closed will result in an error.
 *
 * Closing a connection is optional. Connections will automatically
 * be closed when they are garbage collected.
 *
 * Any created tables are independent and will continue to work even if
 * the underlying connection has been closed.
 */
export class Connection {
  readonly inner: LanceDbConnection;

  constructor(inner: LanceDbConnection) {
    this.inner = inner;
  }

  /** Return true if the connection has not been closed */
  isOpen(): boolean {
    return this.inner.isOpen();
  }

  /** Close the connection, releasing any underlying resources.
   *
   * It is safe to call this method multiple times.
   *
   * Any attempt to use the connection after it is closed will result in an error.
   */
  close(): void {
    this.inner.close();
  }

  /** Return a brief description of the connection */
  display(): string {
    return this.inner.display();
  }

  /** List all the table names in this database.
   *
   * Tables will be returned in lexicographical order.
   *
   * @param options Optional parameters to control the listing.
   */
  async tableNames(options?: Partial<TableNamesOptions>): Promise<string[]> {
    return this.inner.tableNames(options?.startAfter, options?.limit);
  }
  /**
@@ -53,10 +122,48 @@ export class Connection {
   */
  async createTable(
    name: string,
    data: Record<string, unknown>[] | ArrowTable,
    options?: Partial<CreateTableOptions>,
  ): Promise<Table> {
    let mode: string = options?.mode ?? "create";
    const existOk = options?.existOk ?? false;

    if (mode === "create" && existOk) {
      mode = "exist_ok";
    }

    let table: ArrowTable;
    if (data instanceof ArrowTable) {
      table = data;
    } else {
      table = makeArrowTable(data);
    }
    const buf = await fromTableToBuffer(table);
    const innerTable = await this.inner.createTable(name, buf, mode);
    return new Table(innerTable);
  }

  /**
   * Creates a new empty Table
   *
   * @param {string} name - The name of the table.
   * @param schema - The schema of the table
   */
  async createEmptyTable(
    name: string,
    schema: Schema,
    options?: Partial<CreateTableOptions>,
  ): Promise<Table> {
    let mode: string = options?.mode ?? "create";
    const existOk = options?.existOk ?? false;

    if (mode === "create" && existOk) {
      mode = "exist_ok";
    }
    const table = makeEmptyTable(schema);
    const buf = await fromTableToBuffer(table);
    const innerTable = await this.inner.createEmptyTable(name, buf, mode);
    return new Table(innerTable);
  }
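
// A short sketch of the options above; `db` is assumed to be an open
// Connection, and the table name and data are placeholders.
async function exampleConnectionUsage(db: Connection): Promise<void> {
  // mode "create" with existOk=true maps to the native "exist_ok" mode, so
  // re-running this call is a no-op when the table already exists.
  const tbl = await db.createTable("vectors", [{ id: 1, vector: [0.1, 0.2] }], {
    mode: "create",
    existOk: true,
  });
  console.log(tbl.display());

  // Names come back in lexicographical order, so paginate with startAfter set
  // to the last name of the previous page.
  const firstPage = await db.tableNames({ limit: 10 });
  const nextPage = await db.tableNames({
    startAfter: firstPage[firstPage.length - 1],
    limit: 10,
  });
  console.log(nextPage);
}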

View File

@@ -0,0 +1,77 @@
// Copyright 2023 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { type Float } from "apache-arrow";
/**
* An embedding function that automatically creates vector representation for a given column.
*/
export interface EmbeddingFunction<T> {
/**
* The name of the column that will be used as input for the Embedding Function.
*/
sourceColumn: string;
/**
* The data type of the embedding
*
* The embedding function should return `number`. This will be converted into
* an Arrow float array. By default this will be Float32 but this property can
* be used to control the conversion.
*/
embeddingDataType?: Float;
/**
* The dimension of the embedding
*
* This is optional, normally this can be determined by looking at the results of
* `embed`. If this is not specified, and there is an attempt to apply the embedding
* to an empty table, then that process will fail.
*/
embeddingDimension?: number;
/**
* The name of the column that will contain the embedding
*
* By default this is "vector"
*/
destColumn?: string;
/**
* Should the source column be excluded from the resulting table
*
* By default the source column is included. Set this to true and
* only the embedding will be stored.
*/
excludeSource?: boolean;
/**
* Creates a vector representation for the given values.
*/
embed: (data: T[]) => Promise<number[][]>;
}
export function isEmbeddingFunction<T>(
value: unknown,
): value is EmbeddingFunction<T> {
if (typeof value !== "object" || value === null) {
return false;
}
if (!("sourceColumn" in value) || !("embed" in value)) {
return false;
}
return (
typeof value.sourceColumn === "string" && typeof value.embed === "function"
);
}
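
// A toy implementation of the interface above; the character-code "embedding"
// is a hypothetical stand-in for a real model and is purely illustrative.
const toyEmbedding: EmbeddingFunction<string> = {
  sourceColumn: "text",
  embeddingDimension: 4,
  embed: async (data: string[]): Promise<number[][]> => {
    // Fold each string's character codes into a fixed-size vector.
    return data.map((text) => {
      const vec = [0, 0, 0, 0];
      for (let i = 0; i < text.length; i++) {
        vec[i % 4] += text.charCodeAt(i) / 255;
      }
      return vec;
    });
  },
};

// The type guard narrows unknown values at runtime.
console.assert(isEmbeddingFunction(toyEmbedding));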

View File

@@ -0,0 +1,62 @@
// Copyright 2023 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { type EmbeddingFunction } from "./embedding_function";
import type OpenAI from "openai";
export class OpenAIEmbeddingFunction implements EmbeddingFunction<string> {
private readonly _openai: OpenAI;
private readonly _modelName: string;
constructor(
sourceColumn: string,
openAIKey: string,
modelName: string = "text-embedding-ada-002",
) {
/**
* @type {import("openai").default}
*/
// eslint-disable-next-line @typescript-eslint/naming-convention
let Openai;
try {
// eslint-disable-next-line @typescript-eslint/no-var-requires
Openai = require("openai");
} catch {
throw new Error("please install openai@^4.24.1 using npm install openai");
}
this.sourceColumn = sourceColumn;
const configuration = {
apiKey: openAIKey,
};
this._openai = new Openai(configuration);
this._modelName = modelName;
}
async embed(data: string[]): Promise<number[][]> {
const response = await this._openai.embeddings.create({
model: this._modelName,
input: data,
});
const embeddings: number[][] = [];
for (let i = 0; i < response.data.length; i++) {
embeddings.push(response.data[i].embedding);
}
return embeddings;
}
sourceColumn: string;
}
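
// A usage sketch; the environment variable and input strings are placeholders,
// and a valid OpenAI API key is required for the call to succeed.
async function exampleOpenAIEmbedding(): Promise<void> {
  const embedFn = new OpenAIEmbeddingFunction(
    "text", // column holding the source strings
    process.env.OPENAI_API_KEY ?? "",
    "text-embedding-ada-002",
  );
  const vectors = await embedFn.embed(["hello", "world"]);
  console.log(vectors.length); // one number[] embedding per input string
}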

View File

@@ -13,7 +13,10 @@
// limitations under the License.

import { Connection } from "./connection";
import {
  Connection as LanceDbConnection,
  ConnectionOptions,
} from "./native.js";

export {
  ConnectionOptions,
@@ -23,7 +26,6 @@ export {
} from "./native.js";
export { Connection } from "./connection";
export { Table } from "./table";
export { IvfPQOptions, IndexBuilder } from "./indexer";

/**
@@ -39,26 +41,11 @@ export { IvfPQOptions, IndexBuilder } from "./indexer";
 *
 * @see {@link ConnectionOptions} for more details on the URI format.
 */
export async function connect(
  uri: string,
  opts?: Partial<ConnectionOptions>,
): Promise<Connection> {
  opts = opts ?? {};
  const nativeConn = await LanceDbConnection.new(uri, opts);
  return new Connection(nativeConn);
}
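
// A sketch of the single connect signature above (URI first, options second);
// the URIs and environment variable are illustrative.
async function exampleConnect(): Promise<void> {
  // Local, filesystem-backed database.
  const localDb = await connect("data/sample-lancedb");
  console.log(localDb.display());

  // Remote database; apiKey and hostOverride come from ConnectionOptions.
  const remoteDb = await connect("db://my-database", {
    apiKey: process.env.LANCEDB_API_KEY,
    hostOverride: "https://example.host",
  });
  console.log(remoteDb.display());
}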

View File

@@ -12,6 +12,9 @@
// See the License for the specific language governing permissions and
// limitations under the License.

// TODO: Re-enable this as part of https://github.com/lancedb/lancedb/pull/1052
/* eslint-disable @typescript-eslint/naming-convention */

import {
  MetricType,
  IndexBuilder as NativeBuilder,
@@ -66,7 +69,7 @@ export class IndexBuilder {
      options?.num_sub_vectors,
      options?.num_bits,
      options?.max_iterations,
      options?.sample_rate,
    );
    return this;
  }

View File

@@ -45,7 +45,6 @@ export interface AddColumnsSql {
  valueSql: string
}

export interface ConnectionOptions {
  apiKey?: string
  hostOverride?: string
  /**
@@ -71,12 +70,15 @@ export const enum WriteMode {
export interface WriteOptions {
  mode?: WriteMode
}
export function connect(uri: string, options: ConnectionOptions): Promise<Connection>
export class Connection {
  /** Create a new Connection instance from the given URI. */
  static new(uri: string, options: ConnectionOptions): Promise<Connection>
  display(): string
  isOpen(): boolean
  close(): void
  /** List all tables in the dataset. */
  tableNames(startAfter?: string | undefined | null, limit?: number | undefined | null): Promise<Array<string>>
  /**
@@ -85,7 +87,8 @@ export class Connection {
   * Create table from an Apache Arrow IPC (file) buffer.
   *
   * - buf: The buffer containing the IPC file.
   *
   */
  createTable(name: string, buf: Buffer, mode: string): Promise<Table>
  createEmptyTable(name: string, schemaBuf: Buffer, mode: string): Promise<Table>
  openTable(name: string): Promise<Table>
  /** Drop the table with the given name, or raise an error if it does not exist. */
  dropTable(name: string): Promise<void>
@@ -114,10 +117,13 @@ export class Query {
  executeStream(): Promise<RecordBatchIterator>
}
export class Table {
  display(): string
  isOpen(): boolean
  close(): void
  /** Return Schema as empty Arrow IPC file. */
  schema(): Promise<Buffer>
  add(buf: Buffer, mode: string): Promise<void>
  countRows(filter?: string | undefined | null): Promise<number>
  delete(predicate: string): Promise<void>
  createIndex(): IndexBuilder
  query(): Query

View File

@@ -20,21 +20,22 @@ import {
} from "./native"; } from "./native";
class RecordBatchIterator implements AsyncIterator<RecordBatch> { class RecordBatchIterator implements AsyncIterator<RecordBatch> {
private promised_inner?: Promise<NativeBatchIterator>; private promisedInner?: Promise<NativeBatchIterator>;
private inner?: NativeBatchIterator; private inner?: NativeBatchIterator;
constructor( constructor(
inner?: NativeBatchIterator, inner?: NativeBatchIterator,
promise?: Promise<NativeBatchIterator> promise?: Promise<NativeBatchIterator>,
) { ) {
// TODO: check promise reliably so we dont need to pass two arguments. // TODO: check promise reliably so we dont need to pass two arguments.
this.inner = inner; this.inner = inner;
this.promised_inner = promise; this.promisedInner = promise;
} }
async next(): Promise<IteratorResult<RecordBatch<any>, any>> { // eslint-disable-next-line @typescript-eslint/no-explicit-any
async next(): Promise<IteratorResult<RecordBatch<any>>> {
if (this.inner === undefined) { if (this.inner === undefined) {
this.inner = await this.promised_inner; this.inner = await this.promisedInner;
} }
if (this.inner === undefined) { if (this.inner === undefined) {
throw new Error("Invalid iterator state state"); throw new Error("Invalid iterator state state");
@@ -114,8 +115,8 @@ export class Query implements AsyncIterable<RecordBatch> {
/** /**
* Set the refine factor for the query. * Set the refine factor for the query.
*/ */
refineFactor(refine_factor: number): Query { refineFactor(refineFactor: number): Query {
this.inner.refineFactor(refine_factor); this.inner.refineFactor(refineFactor);
return this; return this;
} }
@@ -139,12 +140,13 @@ export class Query implements AsyncIterable<RecordBatch> {
/** Returns a JSON Array of All results. /** Returns a JSON Array of All results.
* *
*/ */
async toArray(): Promise<any[]> { async toArray(): Promise<unknown[]> {
const tbl = await this.toArrow(); const tbl = await this.toArrow();
// eslint-disable-next-line @typescript-eslint/no-unsafe-return // eslint-disable-next-line @typescript-eslint/no-unsafe-return
return tbl.toArray(); return tbl.toArray();
} }
// eslint-disable-next-line @typescript-eslint/no-explicit-any
[Symbol.asyncIterator](): AsyncIterator<RecordBatch<any>> { [Symbol.asyncIterator](): AsyncIterator<RecordBatch<any>> {
const promise = this.inner.executeStream(); const promise = this.inner.executeStream();
return new RecordBatchIterator(undefined, promise); return new RecordBatchIterator(undefined, promise);
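
// A sketch of consuming results, assuming `query` is a Query obtained from a
// Table in this package; batches stream through the async iterator.
async function exampleQueryIteration(query: Query): Promise<void> {
  // Stream RecordBatches without materializing the full result set.
  for await (const batch of query) {
    console.log(batch.numRows);
  }
  // Or collect everything into plain JS objects.
  const rows = await query.toArray();
  console.log(rows.length);
}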

nodejs/lancedb/sanitize.ts Normal file
View File

@@ -0,0 +1,509 @@
// Copyright 2023 LanceDB Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// The utilities in this file help sanitize data from the user's arrow
// library into the types expected by vectordb's arrow library. Node
// generally allows for multiple versions of the same library (and sometimes
// even multiple copies of the same version) to be installed at the same
// time. However, arrow-js uses instanceof checks which expect that the input
// comes from the exact same library instance. This is not always the case
// and so we must sanitize the input to ensure that it is compatible.
import {
Field,
Utf8,
FixedSizeBinary,
FixedSizeList,
Schema,
List,
Struct,
Float,
Bool,
Date_,
Decimal,
DataType,
Dictionary,
Binary,
Float32,
Interval,
Map_,
Duration,
Union,
Time,
Timestamp,
Type,
Null,
Int,
type Precision,
type DateUnit,
Int8,
Int16,
Int32,
Int64,
Uint8,
Uint16,
Uint32,
Uint64,
Float16,
Float64,
DateDay,
DateMillisecond,
DenseUnion,
SparseUnion,
TimeNanosecond,
TimeMicrosecond,
TimeMillisecond,
TimeSecond,
TimestampNanosecond,
TimestampMicrosecond,
TimestampMillisecond,
TimestampSecond,
IntervalDayTime,
IntervalYearMonth,
DurationNanosecond,
DurationMicrosecond,
DurationMillisecond,
DurationSecond,
} from "apache-arrow";
import type { IntBitWidth, TKeys, TimeBitWidth } from "apache-arrow/type";
function sanitizeMetadata(
metadataLike?: unknown,
): Map<string, string> | undefined {
if (metadataLike === undefined || metadataLike === null) {
return undefined;
}
if (!(metadataLike instanceof Map)) {
throw Error("Expected metadata, if present, to be a Map<string, string>");
}
for (const item of metadataLike) {
if (typeof item[0] !== "string" || typeof item[1] !== "string") {
throw Error(
"Expected metadata, if present, to be a Map<string, string> but it had non-string keys or values",
);
}
}
return metadataLike as Map<string, string>;
}
function sanitizeInt(typeLike: object) {
if (
!("bitWidth" in typeLike) ||
typeof typeLike.bitWidth !== "number" ||
!("isSigned" in typeLike) ||
typeof typeLike.isSigned !== "boolean"
) {
throw Error(
"Expected an Int Type to have a `bitWidth` and `isSigned` property",
);
}
return new Int(typeLike.isSigned, typeLike.bitWidth as IntBitWidth);
}
function sanitizeFloat(typeLike: object) {
if (!("precision" in typeLike) || typeof typeLike.precision !== "number") {
throw Error("Expected a Float Type to have a `precision` property");
}
return new Float(typeLike.precision as Precision);
}
function sanitizeDecimal(typeLike: object) {
if (
!("scale" in typeLike) ||
typeof typeLike.scale !== "number" ||
!("precision" in typeLike) ||
typeof typeLike.precision !== "number" ||
!("bitWidth" in typeLike) ||
typeof typeLike.bitWidth !== "number"
) {
throw Error(
"Expected a Decimal Type to have `scale`, `precision`, and `bitWidth` properties",
);
}
return new Decimal(typeLike.scale, typeLike.precision, typeLike.bitWidth);
}
function sanitizeDate(typeLike: object) {
if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
throw Error("Expected a Date type to have a `unit` property");
}
return new Date_(typeLike.unit as DateUnit);
}
function sanitizeTime(typeLike: object) {
if (
!("unit" in typeLike) ||
typeof typeLike.unit !== "number" ||
!("bitWidth" in typeLike) ||
typeof typeLike.bitWidth !== "number"
) {
throw Error(
"Expected a Time type to have `unit` and `bitWidth` properties",
);
}
return new Time(typeLike.unit, typeLike.bitWidth as TimeBitWidth);
}
function sanitizeTimestamp(typeLike: object) {
if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
throw Error("Expected a Timestamp type to have a `unit` property");
}
let timezone = null;
if ("timezone" in typeLike && typeof typeLike.timezone === "string") {
timezone = typeLike.timezone;
}
return new Timestamp(typeLike.unit, timezone);
}
function sanitizeTypedTimestamp(
typeLike: object,
// eslint-disable-next-line @typescript-eslint/naming-convention
Datatype:
| typeof TimestampNanosecond
| typeof TimestampMicrosecond
| typeof TimestampMillisecond
| typeof TimestampSecond,
) {
let timezone = null;
if ("timezone" in typeLike && typeof typeLike.timezone === "string") {
timezone = typeLike.timezone;
}
return new Datatype(timezone);
}
function sanitizeInterval(typeLike: object) {
if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
throw Error("Expected an Interval type to have a `unit` property");
}
return new Interval(typeLike.unit);
}
function sanitizeList(typeLike: object) {
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a List type to have an array-like `children` property",
);
}
if (typeLike.children.length !== 1) {
throw Error("Expected a List type to have exactly one child");
}
return new List(sanitizeField(typeLike.children[0]));
}
function sanitizeStruct(typeLike: object) {
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a Struct type to have an array-like `children` property",
);
}
return new Struct(typeLike.children.map((child) => sanitizeField(child)));
}
function sanitizeUnion(typeLike: object) {
if (
!("typeIds" in typeLike) ||
!("mode" in typeLike) ||
typeof typeLike.mode !== "number"
) {
throw Error(
"Expected a Union type to have `typeIds` and `mode` properties",
);
}
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a Union type to have an array-like `children` property",
);
}
return new Union(
typeLike.mode,
// eslint-disable-next-line @typescript-eslint/no-explicit-any
typeLike.typeIds as any,
typeLike.children.map((child) => sanitizeField(child)),
);
}
function sanitizeTypedUnion(
typeLike: object,
// eslint-disable-next-line @typescript-eslint/naming-convention
UnionType: typeof DenseUnion | typeof SparseUnion,
) {
if (!("typeIds" in typeLike)) {
throw Error(
"Expected a DenseUnion/SparseUnion type to have a `typeIds` property",
);
}
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a DenseUnion/SparseUnion type to have an array-like `children` property",
);
}
return new UnionType(
typeLike.typeIds as Int32Array | number[],
typeLike.children.map((child) => sanitizeField(child)),
);
}
function sanitizeFixedSizeBinary(typeLike: object) {
if (!("byteWidth" in typeLike) || typeof typeLike.byteWidth !== "number") {
throw Error(
"Expected a FixedSizeBinary type to have a `byteWidth` property",
);
}
return new FixedSizeBinary(typeLike.byteWidth);
}
function sanitizeFixedSizeList(typeLike: object) {
if (!("listSize" in typeLike) || typeof typeLike.listSize !== "number") {
throw Error("Expected a FixedSizeList type to have a `listSize` property");
}
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a FixedSizeList type to have an array-like `children` property",
);
}
if (typeLike.children.length !== 1) {
throw Error("Expected a FixedSizeList type to have exactly one child");
}
return new FixedSizeList(
typeLike.listSize,
sanitizeField(typeLike.children[0]),
);
}
function sanitizeMap(typeLike: object) {
if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
throw Error(
"Expected a Map type to have an array-like `children` property",
);
}
if (!("keysSorted" in typeLike) || typeof typeLike.keysSorted !== "boolean") {
throw Error("Expected a Map type to have a `keysSorted` property");
}
return new Map_(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
typeLike.children.map((field) => sanitizeField(field)) as any,
typeLike.keysSorted,
);
}
function sanitizeDuration(typeLike: object) {
if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
throw Error("Expected a Duration type to have a `unit` property");
}
return new Duration(typeLike.unit);
}
function sanitizeDictionary(typeLike: object) {
if (!("id" in typeLike) || typeof typeLike.id !== "number") {
throw Error("Expected a Dictionary type to have an `id` property");
}
if (!("indices" in typeLike) || typeof typeLike.indices !== "object") {
throw Error("Expected a Dictionary type to have an `indices` property");
}
if (!("dictionary" in typeLike) || typeof typeLike.dictionary !== "object") {
throw Error("Expected a Dictionary type to have an `dictionary` property");
}
if (!("isOrdered" in typeLike) || typeof typeLike.isOrdered !== "boolean") {
throw Error("Expected a Dictionary type to have an `isOrdered` property");
}
return new Dictionary(
sanitizeType(typeLike.dictionary),
sanitizeType(typeLike.indices) as TKeys,
typeLike.id,
typeLike.isOrdered,
);
}
// eslint-disable-next-line @typescript-eslint/no-explicit-any
function sanitizeType(typeLike: unknown): DataType<any> {
if (typeof typeLike !== "object" || typeLike === null) {
throw Error("Expected a Type but object was null/undefined");
}
if (!("typeId" in typeLike) || !(typeof typeLike.typeId !== "function")) {
throw Error("Expected a Type to have a typeId function");
}
let typeId: Type;
if (typeof typeLike.typeId === "function") {
typeId = (typeLike.typeId as () => unknown)() as Type;
} else if (typeof typeLike.typeId === "number") {
typeId = typeLike.typeId as Type;
} else {
throw Error("Type's typeId property was not a function or number");
}
switch (typeId) {
case Type.NONE:
throw Error("Received a Type with a typeId of NONE");
case Type.Null:
return new Null();
case Type.Int:
return sanitizeInt(typeLike);
case Type.Float:
return sanitizeFloat(typeLike);
case Type.Binary:
return new Binary();
case Type.Utf8:
return new Utf8();
case Type.Bool:
return new Bool();
case Type.Decimal:
return sanitizeDecimal(typeLike);
case Type.Date:
return sanitizeDate(typeLike);
case Type.Time:
return sanitizeTime(typeLike);
case Type.Timestamp:
return sanitizeTimestamp(typeLike);
case Type.Interval:
return sanitizeInterval(typeLike);
case Type.List:
return sanitizeList(typeLike);
case Type.Struct:
return sanitizeStruct(typeLike);
case Type.Union:
return sanitizeUnion(typeLike);
case Type.FixedSizeBinary:
return sanitizeFixedSizeBinary(typeLike);
case Type.FixedSizeList:
return sanitizeFixedSizeList(typeLike);
case Type.Map:
return sanitizeMap(typeLike);
case Type.Duration:
return sanitizeDuration(typeLike);
case Type.Dictionary:
return sanitizeDictionary(typeLike);
case Type.Int8:
return new Int8();
case Type.Int16:
return new Int16();
case Type.Int32:
return new Int32();
case Type.Int64:
return new Int64();
case Type.Uint8:
return new Uint8();
case Type.Uint16:
return new Uint16();
case Type.Uint32:
return new Uint32();
case Type.Uint64:
return new Uint64();
case Type.Float16:
return new Float16();
case Type.Float32:
return new Float32();
case Type.Float64:
return new Float64();
case Type.DateMillisecond:
return new DateMillisecond();
case Type.DateDay:
return new DateDay();
case Type.TimeNanosecond:
return new TimeNanosecond();
case Type.TimeMicrosecond:
return new TimeMicrosecond();
case Type.TimeMillisecond:
return new TimeMillisecond();
case Type.TimeSecond:
return new TimeSecond();
case Type.TimestampNanosecond:
return sanitizeTypedTimestamp(typeLike, TimestampNanosecond);
case Type.TimestampMicrosecond:
return sanitizeTypedTimestamp(typeLike, TimestampMicrosecond);
case Type.TimestampMillisecond:
return sanitizeTypedTimestamp(typeLike, TimestampMillisecond);
case Type.TimestampSecond:
return sanitizeTypedTimestamp(typeLike, TimestampSecond);
case Type.DenseUnion:
return sanitizeTypedUnion(typeLike, DenseUnion);
case Type.SparseUnion:
return sanitizeTypedUnion(typeLike, SparseUnion);
case Type.IntervalDayTime:
return new IntervalDayTime();
case Type.IntervalYearMonth:
return new IntervalYearMonth();
case Type.DurationNanosecond:
return new DurationNanosecond();
case Type.DurationMicrosecond:
return new DurationMicrosecond();
case Type.DurationMillisecond:
return new DurationMillisecond();
case Type.DurationSecond:
return new DurationSecond();
default:
throw new Error("Unrecoginized type id in schema: " + typeId);
}
}
function sanitizeField(fieldLike: unknown): Field {
if (fieldLike instanceof Field) {
return fieldLike;
}
if (typeof fieldLike !== "object" || fieldLike === null) {
throw Error("Expected a Field but object was null/undefined");
}
if (
!("type" in fieldLike) ||
!("name" in fieldLike) ||
!("nullable" in fieldLike)
) {
throw Error(
"The field passed in is missing a `type`/`name`/`nullable` property",
);
}
const type = sanitizeType(fieldLike.type);
const name = fieldLike.name;
if (!(typeof name === "string")) {
throw Error("The field passed in had a non-string `name` property");
}
const nullable = fieldLike.nullable;
if (!(typeof nullable === "boolean")) {
throw Error("The field passed in had a non-boolean `nullable` property");
}
let metadata;
if ("metadata" in fieldLike) {
metadata = sanitizeMetadata(fieldLike.metadata);
}
return new Field(name, type, nullable, metadata);
}
export function sanitizeSchema(schemaLike: unknown): Schema {
if (schemaLike instanceof Schema) {
return schemaLike;
}
if (typeof schemaLike !== "object" || schemaLike === null) {
throw Error("Expected a Schema but object was null/undefined");
}
if (!("fields" in schemaLike)) {
throw Error(
"The schema passed in does not appear to be a schema (no 'fields' property)",
);
}
let metadata;
if ("metadata" in schemaLike) {
metadata = sanitizeMetadata(schemaLike.metadata);
}
if (!Array.isArray(schemaLike.fields)) {
throw Error(
"The schema passed in had a 'fields' property but it was not an array",
);
}
const sanitizedFields = schemaLike.fields.map((field) =>
sanitizeField(field),
);
return new Schema(sanitizedFields, metadata);
}
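
// A sketch of why this matters, using the "apache-arrow-old" npm alias added
// as a devDependency in the package.json diff below; a schema built with a
// different copy of arrow is rebuilt with this package's classes.
import { Field as OldField, Schema as OldSchema, Utf8 as OldUtf8 } from "apache-arrow-old";

// Built with another arrow copy, so `foreign instanceof Schema` is false here.
const foreign = new OldSchema([new OldField("text", new OldUtf8(), true)]);
// sanitizeSchema walks the fields and types structurally and returns an
// equivalent Schema constructed from this package's apache-arrow classes.
const sanitized = sanitizeSchema(foreign);
console.assert(sanitized instanceof Schema);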

View File

@@ -13,15 +13,37 @@
// limitations under the License.

import { Schema, tableFromIPC } from "apache-arrow";
import {
  AddColumnsSql,
  ColumnAlteration,
  Table as _NativeTable,
} from "./native";
import { Query } from "./query";
import { IndexBuilder } from "./indexer";
import { Data, fromDataToBuffer } from "./arrow";

/**
 * Options for adding data to a table.
 */
export interface AddDataOptions {
  /** If "append" (the default) then the new data will be added to the table
   *
   * If "overwrite" then the new data will replace the existing data in the table.
   */
  mode: "append" | "overwrite";
}

/**
 * A Table is a collection of Records in a LanceDB Database.
 *
 * A Table object is expected to be long lived and reused for multiple operations.
 * Table objects will cache a certain amount of index data in memory. This cache
 * will be freed when the Table is garbage collected. To eagerly free the cache you
 * can call the `close` method. Once the Table is closed, it cannot be used for any
 * further operations.
 *
 * Closing a table is optional. If not closed, it will be closed when it is garbage
 * collected.
 */
export class Table {
  private readonly inner: _NativeTable;
@@ -31,6 +53,26 @@ export class Table {
    this.inner = inner;
  }

  /** Return true if the table has not been closed */
  isOpen(): boolean {
    return this.inner.isOpen();
  }

  /** Close the table, releasing any underlying resources.
   *
   * It is safe to call this method multiple times.
   *
   * Any attempt to use the table after it is closed will result in an error.
   */
  close(): void {
    this.inner.close();
  }

  /** Return a brief description of the table */
  display(): string {
    return this.inner.display();
  }

  /** Get the schema of the table. */
  async schema(): Promise<Schema> {
    const schemaBuf = await this.inner.schema();
@@ -44,13 +86,15 @@ export class Table {
   * @param {Data} data Records to be inserted into the Table
   * @return The number of rows added to the table
   */
  async add(data: Data, options?: Partial<AddDataOptions>): Promise<void> {
    const mode = options?.mode ?? "append";

    const buffer = await fromDataToBuffer(data);
    await this.inner.add(buffer, mode);
  }

  /** Count the total number of rows in the dataset. */
  async countRows(filter?: string): Promise<number> {
    return await this.inner.countRows(filter);
  }
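
// A sketch of the add modes and the new countRows return type; `tbl` is
// assumed to be an open Table and `rows` placeholder data.
async function exampleAdd(tbl: Table, rows: Record<string, unknown>[]): Promise<void> {
  // The default mode appends new rows.
  await tbl.add(rows);
  // "overwrite" replaces the existing data with the given rows.
  await tbl.add(rows, { mode: "overwrite" });
  // countRows now resolves to a plain number (previously bigint).
  const n = await tbl.countRows();
  console.log(`table has ${n} rows`);
}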

View File

@@ -15,4 +15,4 @@
"engines": { "engines": {
"node": ">= 18" "node": ">= 18"
} }
} }

View File

@@ -15,4 +15,4 @@
"engines": { "engines": {
"node": ">= 18" "node": ">= 18"
} }
} }

View File

@@ -18,4 +18,4 @@
"libc": [ "libc": [
"glibc" "glibc"
] ]
} }

View File

@@ -18,4 +18,4 @@
"libc": [ "libc": [
"glibc" "glibc"
] ]
} }

nodejs/package-lock.json generated

File diff suppressed because it is too large

View File

@@ -19,14 +19,20 @@
"devDependencies": { "devDependencies": {
"@napi-rs/cli": "^2.18.0", "@napi-rs/cli": "^2.18.0",
"@types/jest": "^29.1.2", "@types/jest": "^29.1.2",
"@types/tmp": "^0.2.6",
"@typescript-eslint/eslint-plugin": "^6.19.0", "@typescript-eslint/eslint-plugin": "^6.19.0",
"@typescript-eslint/parser": "^6.19.0", "@typescript-eslint/parser": "^6.19.0",
"eslint": "^8.56.0", "apache-arrow-old": "npm:apache-arrow@13.0.0",
"eslint": "^8.57.0",
"eslint-config-prettier": "^9.1.0",
"jest": "^29.7.0", "jest": "^29.7.0",
"prettier": "^3.1.0",
"tmp": "^0.2.3",
"ts-jest": "^29.1.2", "ts-jest": "^29.1.2",
"typedoc": "^0.25.7", "typedoc": "^0.25.7",
"typedoc-plugin-markdown": "^3.17.1", "typedoc-plugin-markdown": "^3.17.1",
"typescript": "^5.3.3" "typescript": "^5.3.3",
"typescript-eslint": "^7.1.0"
}, },
"ava": { "ava": {
"timeout": "3m" "timeout": "3m"
@@ -48,11 +54,11 @@
"build:native": "napi build --platform --release --js lancedb/native.js --dts lancedb/native.d.ts dist/", "build:native": "napi build --platform --release --js lancedb/native.js --dts lancedb/native.d.ts dist/",
"build:debug": "napi build --platform --dts ../lancedb/native.d.ts --js ../lancedb/native.js dist/", "build:debug": "napi build --platform --dts ../lancedb/native.d.ts --js ../lancedb/native.js dist/",
"build": "npm run build:debug && tsc -b", "build": "npm run build:debug && tsc -b",
"chkformat": "prettier . --check",
"docs": "typedoc --plugin typedoc-plugin-markdown lancedb/index.ts", "docs": "typedoc --plugin typedoc-plugin-markdown lancedb/index.ts",
"lint": "eslint lancedb --ext .js,.ts", "lint": "eslint lancedb && eslint __test__",
"prepublishOnly": "napi prepublish -t npm", "prepublishOnly": "napi prepublish -t npm",
"//": "maxWorkers=1 is workaround for bigint issue in jest: https://github.com/jestjs/jest/issues/11617#issuecomment-1068732414", "test": "npm run build && jest --verbose",
"test": "npm run build && jest --maxWorkers=1",
"universal": "napi universal", "universal": "napi universal",
"version": "napi version" "version": "napi version"
}, },
@@ -60,7 +66,8 @@
"lancedb-darwin-arm64": "0.4.3", "lancedb-darwin-arm64": "0.4.3",
"lancedb-darwin-x64": "0.4.3", "lancedb-darwin-x64": "0.4.3",
"lancedb-linux-arm64-gnu": "0.4.3", "lancedb-linux-arm64-gnu": "0.4.3",
"lancedb-linux-x64-gnu": "0.4.3" "lancedb-linux-x64-gnu": "0.4.3",
"openai": "^4.28.4"
}, },
"peerDependencies": { "peerDependencies": {
"apache-arrow": "^15.0.0" "apache-arrow": "^15.0.0"


@@ -17,20 +17,43 @@ use napi_derive::*;
use crate::table::Table;
use crate::ConnectionOptions;
-use lancedb::connection::{ConnectBuilder, Connection as LanceDBConnection};
-use lancedb::ipc::ipc_file_to_batches;
+use lancedb::connection::{ConnectBuilder, Connection as LanceDBConnection, CreateTableMode};
+use lancedb::ipc::{ipc_file_to_batches, ipc_file_to_schema};

#[napi]
pub struct Connection {
-    conn: LanceDBConnection,
+    inner: Option<LanceDBConnection>,
}

impl Connection {
    pub(crate) fn inner_new(inner: LanceDBConnection) -> Self {
        Self { inner: Some(inner) }
    }

    fn get_inner(&self) -> napi::Result<&LanceDBConnection> {
        self.inner
            .as_ref()
            .ok_or_else(|| napi::Error::from_reason("Connection is closed"))
    }
}

impl Connection {
    fn parse_create_mode_str(mode: &str) -> napi::Result<CreateTableMode> {
        match mode {
            "create" => Ok(CreateTableMode::Create),
            "overwrite" => Ok(CreateTableMode::Overwrite),
            "exist_ok" => Ok(CreateTableMode::exist_ok(|builder| builder)),
            _ => Err(napi::Error::from_reason(format!("Invalid mode {}", mode))),
        }
    }
}

#[napi]
impl Connection {
    /// Create a new Connection instance from the given URI.
    #[napi(factory)]
-    pub async fn new(options: ConnectionOptions) -> napi::Result<Self> {
-        let mut builder = ConnectBuilder::new(&options.uri);
+    pub async fn new(uri: String, options: ConnectionOptions) -> napi::Result<Self> {
+        let mut builder = ConnectBuilder::new(&uri);
        if let Some(api_key) = options.api_key {
            builder = builder.api_key(&api_key);
        }
@@ -41,19 +64,44 @@ impl Connection {
            builder =
                builder.read_consistency_interval(std::time::Duration::from_secs_f64(interval));
        }
-        Ok(Self {
-            conn: builder
-                .execute()
-                .await
-                .map_err(|e| napi::Error::from_reason(format!("{}", e)))?,
-        })
+        Ok(Self::inner_new(
+            builder
+                .execute()
+                .await
+                .map_err(|e| napi::Error::from_reason(format!("{}", e)))?,
+        ))
    }

    #[napi]
    pub fn display(&self) -> napi::Result<String> {
        Ok(self.get_inner()?.to_string())
    }

    #[napi]
    pub fn is_open(&self) -> bool {
        self.inner.is_some()
    }

    #[napi]
    pub fn close(&mut self) {
        self.inner.take();
    }

    /// List all tables in the dataset.
    #[napi]
-    pub async fn table_names(&self) -> napi::Result<Vec<String>> {
-        self.conn
-            .table_names()
+    pub async fn table_names(
+        &self,
+        start_after: Option<String>,
+        limit: Option<u32>,
+    ) -> napi::Result<Vec<String>> {
+        let mut op = self.get_inner()?.table_names();
+        if let Some(start_after) = start_after {
+            op = op.start_after(start_after);
+        }
+        if let Some(limit) = limit {
+            op = op.limit(limit);
+        }
+        op.execute()
            .await
            .map_err(|e| napi::Error::from_reason(format!("{}", e)))
    }
@@ -65,12 +113,40 @@ impl Connection {
    /// - buf: The buffer containing the IPC file.
    ///
    #[napi]
-    pub async fn create_table(&self, name: String, buf: Buffer) -> napi::Result<Table> {
+    pub async fn create_table(
+        &self,
+        name: String,
+        buf: Buffer,
+        mode: String,
+    ) -> napi::Result<Table> {
        let batches = ipc_file_to_batches(buf.to_vec())
            .map_err(|e| napi::Error::from_reason(format!("Failed to read IPC file: {}", e)))?;
+        let mode = Self::parse_create_mode_str(&mode)?;
        let tbl = self
-            .conn
+            .get_inner()?
            .create_table(&name, Box::new(batches))
+            .mode(mode)
+            .execute()
+            .await
+            .map_err(|e| napi::Error::from_reason(format!("{}", e)))?;
+        Ok(Table::new(tbl))
+    }

+    #[napi]
+    pub async fn create_empty_table(
+        &self,
+        name: String,
+        schema_buf: Buffer,
+        mode: String,
+    ) -> napi::Result<Table> {
+        let schema = ipc_file_to_schema(schema_buf.to_vec()).map_err(|e| {
+            napi::Error::from_reason(format!("Failed to marshal schema from JS to Rust: {}", e))
+        })?;
+        let mode = Self::parse_create_mode_str(&mode)?;
+        let tbl = self
+            .get_inner()?
+            .create_empty_table(&name, schema)
+            .mode(mode)
            .execute()
            .await
            .map_err(|e| napi::Error::from_reason(format!("{}", e)))?;
@@ -80,7 +156,7 @@ impl Connection {
    #[napi]
    pub async fn open_table(&self, name: String) -> napi::Result<Table> {
        let tbl = self
-            .conn
+            .get_inner()?
            .open_table(&name)
            .execute()
            .await
@@ -91,7 +167,7 @@ impl Connection {
    /// Drop table with the name. Or raise an error if the table does not exist.
    #[napi]
    pub async fn drop_table(&self, name: String) -> napi::Result<()> {
-        self.conn
+        self.get_inner()?
            .drop_table(&name)
            .await
            .map_err(|e| napi::Error::from_reason(format!("{}", e)))


@@ -12,7 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.

+use std::sync::Mutex;
+
use lance_linalg::distance::MetricType as LanceMetricType;
+use lancedb::index::IndexBuilder as LanceDbIndexBuilder;
+use lancedb::Table as LanceDbTable;
use napi_derive::napi;

#[napi]
@@ -40,58 +44,93 @@ impl From<MetricType> for LanceMetricType {
#[napi]
pub struct IndexBuilder {
-    inner: lancedb::index::IndexBuilder,
+    inner: Mutex<Option<LanceDbIndexBuilder>>,
}

impl IndexBuilder {
    fn modify(
        &self,
        mod_fn: impl Fn(LanceDbIndexBuilder) -> LanceDbIndexBuilder,
    ) -> napi::Result<()> {
        let mut inner = self.inner.lock().unwrap();
        let inner_builder = inner.take().ok_or_else(|| {
            napi::Error::from_reason("IndexBuilder has already been consumed".to_string())
        })?;
        let inner_builder = mod_fn(inner_builder);
        inner.replace(inner_builder);
        Ok(())
    }
}

#[napi]
impl IndexBuilder {
-    pub fn new(tbl: &dyn lancedb::Table) -> Self {
+    pub fn new(tbl: &LanceDbTable) -> Self {
        let inner = tbl.create_index(&[]);
-        Self { inner }
+        Self {
+            inner: Mutex::new(Some(inner)),
+        }
    }

    #[napi]
-    pub unsafe fn replace(&mut self, v: bool) {
-        self.inner.replace(v);
+    pub fn replace(&self, v: bool) -> napi::Result<()> {
+        self.modify(|b| b.replace(v))
    }

    #[napi]
-    pub unsafe fn column(&mut self, c: String) {
-        self.inner.columns(&[c.as_str()]);
+    pub fn column(&self, c: String) -> napi::Result<()> {
+        self.modify(|b| b.columns(&[c.as_str()]))
    }

    #[napi]
-    pub unsafe fn name(&mut self, name: String) {
-        self.inner.name(name.as_str());
+    pub fn name(&self, name: String) -> napi::Result<()> {
+        self.modify(|b| b.name(name.as_str()))
    }

    #[napi]
-    pub unsafe fn ivf_pq(
-        &mut self,
+    pub fn ivf_pq(
+        &self,
        metric_type: Option<MetricType>,
        num_partitions: Option<u32>,
        num_sub_vectors: Option<u32>,
        num_bits: Option<u32>,
        max_iterations: Option<u32>,
        sample_rate: Option<u32>,
-    ) {
-        self.inner.ivf_pq();
-        metric_type.map(|m| self.inner.metric_type(m.into()));
-        num_partitions.map(|p| self.inner.num_partitions(p));
-        num_sub_vectors.map(|s| self.inner.num_sub_vectors(s));
-        num_bits.map(|b| self.inner.num_bits(b));
-        max_iterations.map(|i| self.inner.max_iterations(i));
-        sample_rate.map(|s| self.inner.sample_rate(s));
+    ) -> napi::Result<()> {
+        self.modify(|b| {
+            let mut b = b.ivf_pq();
+            if let Some(metric_type) = metric_type {
+                b = b.metric_type(metric_type.into());
+            }
+            if let Some(num_partitions) = num_partitions {
+                b = b.num_partitions(num_partitions);
+            }
+            if let Some(num_sub_vectors) = num_sub_vectors {
+                b = b.num_sub_vectors(num_sub_vectors);
+            }
+            if let Some(num_bits) = num_bits {
+                b = b.num_bits(num_bits);
+            }
+            if let Some(max_iterations) = max_iterations {
+                b = b.max_iterations(max_iterations);
+            }
+            if let Some(sample_rate) = sample_rate {
+                b = b.sample_rate(sample_rate);
+            }
+            b
+        })
    }

    #[napi]
-    pub unsafe fn scalar(&mut self) {
-        self.inner.scalar();
+    pub fn scalar(&self) -> napi::Result<()> {
+        self.modify(|b| b.scalar())
    }

    #[napi]
    pub async fn build(&self) -> napi::Result<()> {
-        self.inner
+        let inner = self.inner.lock().unwrap().take().ok_or_else(|| {
+            napi::Error::from_reason("IndexBuilder has already been consumed".to_string())
+        })?;
+        inner
            .build()
            .await
            .map_err(|e| napi::Error::from_reason(format!("Failed to build index: {}", e)))?;


@@ -24,7 +24,6 @@ mod table;
#[napi(object)]
#[derive(Debug)]
pub struct ConnectionOptions {
-    pub uri: String,
    pub api_key: Option<String>,
    pub host_override: Option<String>,
    /// (For LanceDB OSS only): The interval, in seconds, at which to check for
@@ -54,6 +53,6 @@ pub struct WriteOptions {
}

#[napi]
-pub async fn connect(options: ConnectionOptions) -> napi::Result<Connection> {
-    Connection::new(options).await
+pub async fn connect(uri: String, options: ConnectionOptions) -> napi::Result<Connection> {
+    Connection::new(uri, options).await
}


@@ -16,7 +16,7 @@ use lancedb::query::Query as LanceDBQuery;
use napi::bindgen_prelude::*;
use napi_derive::napi;

-use crate::{iterator::RecordBatchIterator, table::Table};
+use crate::iterator::RecordBatchIterator;

#[napi]
pub struct Query {
@@ -25,10 +25,8 @@ pub struct Query {
#[napi]
impl Query {
-    pub fn new(table: &Table) -> Self {
-        Self {
-            inner: table.table.query(),
-        }
+    pub fn new(query: LanceDBQuery) -> Self {
+        Self { inner: query }
    }

    #[napi]


@@ -14,10 +14,8 @@
use arrow_ipc::writer::FileWriter;
use lance::dataset::ColumnAlteration as LanceColumnAlteration;
-use lancedb::{
-    ipc::ipc_file_to_batches,
-    table::{AddDataOptions, TableRef},
-};
+use lancedb::ipc::ipc_file_to_batches;
+use lancedb::table::{AddDataMode, Table as LanceDbTable};
use napi::bindgen_prelude::*;
use napi_derive::napi;
@@ -26,20 +24,52 @@ use crate::query::Query;

#[napi]
pub struct Table {
-    pub(crate) table: TableRef,
+    // We keep a duplicate of the table name so we can use it for error
+    // messages even if the table has been closed
+    name: String,
+    pub(crate) inner: Option<LanceDbTable>,
}

impl Table {
    fn inner_ref(&self) -> napi::Result<&LanceDbTable> {
        self.inner
            .as_ref()
            .ok_or_else(|| napi::Error::from_reason(format!("Table {} is closed", self.name)))
    }
}

#[napi]
impl Table {
-    pub(crate) fn new(table: TableRef) -> Self {
-        Self { table }
+    pub(crate) fn new(table: LanceDbTable) -> Self {
+        Self {
+            name: table.name().to_string(),
+            inner: Some(table),
+        }
    }

    #[napi]
    pub fn display(&self) -> String {
        match &self.inner {
            None => format!("ClosedTable({})", self.name),
            Some(inner) => inner.to_string(),
        }
    }

    #[napi]
    pub fn is_open(&self) -> bool {
        self.inner.is_some()
    }

    #[napi]
    pub fn close(&mut self) {
        self.inner.take();
    }

    /// Return Schema as empty Arrow IPC file.
    #[napi]
    pub async fn schema(&self) -> napi::Result<Buffer> {
        let schema =
-            self.table.schema().await.map_err(|e| {
+            self.inner_ref()?.schema().await.map_err(|e| {
                napi::Error::from_reason(format!("Failed to create IPC file: {}", e))
            })?;
        let mut writer = FileWriter::try_new(vec![], &schema)
@@ -53,48 +83,59 @@ impl Table {
    }

    #[napi]
-    pub async fn add(&self, buf: Buffer) -> napi::Result<()> {
+    pub async fn add(&self, buf: Buffer, mode: String) -> napi::Result<()> {
        let batches = ipc_file_to_batches(buf.to_vec())
            .map_err(|e| napi::Error::from_reason(format!("Failed to read IPC file: {}", e)))?;
-        self.table
-            .add(Box::new(batches), AddDataOptions::default())
-            .await
-            .map_err(|e| {
-                napi::Error::from_reason(format!(
-                    "Failed to add batches to table {}: {}",
-                    self.table, e
-                ))
-            })
-    }
-
-    #[napi]
-    pub async fn count_rows(&self, filter: Option<String>) -> napi::Result<usize> {
-        self.table.count_rows(filter).await.map_err(|e| {
-            napi::Error::from_reason(format!(
-                "Failed to count rows in table {}: {}",
-                self.table, e
-            ))
-        })
-    }
+        let mut op = self.inner_ref()?.add(Box::new(batches));
+        op = if mode == "append" {
+            op.mode(AddDataMode::Append)
+        } else if mode == "overwrite" {
+            op.mode(AddDataMode::Overwrite)
+        } else {
+            return Err(napi::Error::from_reason(format!("Invalid mode: {}", mode)));
+        };
+        op.execute().await.map_err(|e| {
+            napi::Error::from_reason(format!(
+                "Failed to add batches to table {}: {}",
+                self.name, e
+            ))
+        })
+    }
+
+    #[napi]
+    pub async fn count_rows(&self, filter: Option<String>) -> napi::Result<i64> {
+        self.inner_ref()?
+            .count_rows(filter)
+            .await
+            .map(|val| val as i64)
+            .map_err(|e| {
+                napi::Error::from_reason(format!(
+                    "Failed to count rows in table {}: {}",
+                    self.name, e
+                ))
+            })
+    }

    #[napi]
    pub async fn delete(&self, predicate: String) -> napi::Result<()> {
-        self.table.delete(&predicate).await.map_err(|e| {
+        self.inner_ref()?.delete(&predicate).await.map_err(|e| {
            napi::Error::from_reason(format!(
                "Failed to delete rows in table {}: predicate={}",
-                self.table, e
+                self.name, e
            ))
        })
    }

    #[napi]
-    pub fn create_index(&self) -> IndexBuilder {
-        IndexBuilder::new(self.table.as_ref())
+    pub fn create_index(&self) -> napi::Result<IndexBuilder> {
+        Ok(IndexBuilder::new(self.inner_ref()?))
    }

    #[napi]
-    pub fn query(&self) -> Query {
-        Query::new(self)
+    pub fn query(&self) -> napi::Result<Query> {
+        Ok(Query::new(self.inner_ref()?.query()))
    }

    #[napi]
@@ -104,13 +145,13 @@ impl Table {
            .map(|sql| (sql.name, sql.value_sql))
            .collect::<Vec<_>>();
        let transforms = lance::dataset::NewColumnTransform::SqlExpressions(transforms);
-        self.table
+        self.inner_ref()?
            .add_columns(transforms, None)
            .await
            .map_err(|err| {
                napi::Error::from_reason(format!(
                    "Failed to add columns to table {}: {}",
-                    self.table, err
+                    self.name, err
                ))
            })?;
        Ok(())
@@ -130,13 +171,13 @@ impl Table {
            .map(LanceColumnAlteration::from)
            .collect::<Vec<_>>();
-        self.table
+        self.inner_ref()?
            .alter_columns(&alterations)
            .await
            .map_err(|err| {
                napi::Error::from_reason(format!(
                    "Failed to alter columns in table {}: {}",
-                    self.table, err
+                    self.name, err
                ))
            })?;
        Ok(())
@@ -145,12 +186,15 @@ impl Table {
    #[napi]
    pub async fn drop_columns(&self, columns: Vec<String>) -> napi::Result<()> {
        let col_refs = columns.iter().map(String::as_str).collect::<Vec<_>>();
-        self.table.drop_columns(&col_refs).await.map_err(|err| {
-            napi::Error::from_reason(format!(
-                "Failed to drop columns from table {}: {}",
-                self.table, err
-            ))
-        })?;
+        self.inner_ref()?
+            .drop_columns(&col_refs)
+            .await
+            .map_err(|err| {
+                napi::Error::from_reason(format!(
+                    "Failed to drop columns from table {}: {}",
+                    self.name, err
+                ))
+            })?;
        Ok(())
    }
}


@@ -1,9 +1,5 @@
{
-  "include": [
-    "lancedb/*.ts",
-    "lancedb/**/*.ts",
-    "lancedb/*.js",
-  ],
+  "include": ["lancedb/*.ts", "lancedb/**/*.ts", "lancedb/*.js"],
  "compilerOptions": {
    "target": "es2022",
    "module": "commonjs",
@@ -11,21 +7,17 @@
    "outDir": "./dist",
    "strict": true,
    "allowJs": true,
-    "resolveJsonModule": true,
+    "resolveJsonModule": true
  },
-  "exclude": [
-    "./dist/*",
-  ],
+  "exclude": ["./dist/*"],
  "typedocOptions": {
-    "entryPoints": [
-      "lancedb/index.ts"
-    ],
+    "entryPoints": ["lancedb/index.ts"],
    "out": "../docs/src/javascript/",
    "visibilityFilters": {
      "protected": false,
      "private": false,
      "inherited": true,
-      "external": false,
+      "external": false
    }
  }
}


@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 0.6.1
+current_version = 0.6.2
commit = True
message = [python] Bump version: {current_version} → {new_version}
tag = True

python/ASYNC_MIGRATION.md (new file, 38 lines)

@@ -0,0 +1,38 @@
# Migration from Sync to Async API
A new asynchronous API has been added to LanceDB. This API is built
on top of the Rust lancedb crate (instead of on top of pylance),
which will help keep the various language bindings in sync.
There are some slight changes between the synchronous and the asynchronous
APIs. This document will help you migrate. These changes relate mostly
to the Connection and Table classes.
## Almost all functions are async
The most important change is that almost all functions are now async.
This means the functions now return `asyncio` coroutines. You will
need to use `await` to call these functions.
## Connection
* The connection now has a `close` method. You can call this when
you are done with the connection to eagerly free resources. Currently
this is limited to freeing/closing the HTTP connection for remote
connections. In the future we may add caching or other resources to
native connections, so closing is probably a good practice even if
you aren't using remote connections.
In addition, the connection can be used as a context manager which may
be a more convenient way to ensure the connection is closed.
It is not mandatory to call the `close` method. If you don't call it,
the connection will be closed when the object is garbage collected.
## Table
* The table now has a `close` method, similar to the connection. This
can be used to eagerly free the cache used by a Table object. Similar
to the connection, it can be used as a context manager and it is not
mandatory to call the `close` method.
* Previously `Table.schema` was a property. Now it is an async method.
* The method `Table.__len__` was removed and `len(table)` will no longer
work. Use `Table.count_rows` instead.
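
## Example

Here is a minimal before/after sketch of the migration (it assumes a database at `/tmp/my_dataset` that already contains a table named `my_table`; `connect_async` is the async entry point added in this release):

```python
import asyncio

import lancedb


def sync_usage():
    db = lancedb.connect("/tmp/my_dataset")
    table = db.open_table("my_table")
    print(table.schema)        # schema is a property
    print(table.count_rows())  # replaces len(table)


async def async_usage():
    db = await lancedb.connect_async("/tmp/my_dataset")
    table = await db.open_table("my_table")
    print(await table.schema())      # schema is now an async method
    print(await table.count_rows())  # len(table) no longer works
    db.close()  # optional: eagerly free resources


asyncio.run(async_usage())
```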


@@ -7,13 +7,14 @@ license.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
+rust-version = "1.75.0"

[lib]
name = "_lancedb"
crate-type = ["cdylib"]

[dependencies]
+arrow = { version = "50.0.0", features = ["pyarrow"] }
lancedb = { path = "../rust/lancedb" }
env_logger = "0.10"
pyo3 = { version = "0.20", features = ["extension-module", "abi3-py38"] }
@@ -23,4 +24,7 @@ pyo3-asyncio = { version = "0.20", features = ["attributes", "tokio-runtime"] }
lzma-sys = { version = "*", features = ["static"] }

[build-dependencies]
-pyo3-build-config = { version = "0.20.3", features = ["extension-module", "abi3-py38"] }
+pyo3-build-config = { version = "0.20.3", features = [
+    "extension-module",
+    "abi3-py38",
+] }


@@ -1,9 +1,9 @@
[project]
name = "lancedb"
-version = "0.6.1"
+version = "0.6.2"
dependencies = [
    "deprecation",
-    "pylance==0.10.1",
+    "pylance==0.10.2",
    "ratelimiter~=1.0",
    "retry>=0.9.2",
    "tqdm>=4.27.0",


@@ -21,7 +21,7 @@ __version__ = importlib.metadata.version("lancedb")
from ._lancedb import connect as lancedb_connect
from .common import URI, sanitize_uri
-from .db import AsyncConnection, AsyncLanceDBConnection, DBConnection, LanceDBConnection
+from .db import AsyncConnection, DBConnection, LanceDBConnection
from .remote.db import RemoteDBConnection
from .schema import vector  # noqa: F401
from .utils import sentry_log  # noqa: F401
@@ -168,8 +168,17 @@ async def connect_async(
    conn : DBConnection
        A connection to a LanceDB database.
    """
-    return AsyncLanceDBConnection(
+    if read_consistency_interval is not None:
+        read_consistency_interval_secs = read_consistency_interval.total_seconds()
+    else:
+        read_consistency_interval_secs = None
+
+    return AsyncConnection(
        await lancedb_connect(
-            sanitize_uri(uri), api_key, region, host_override, read_consistency_interval
+            sanitize_uri(uri),
+            api_key,
+            region,
+            host_override,
+            read_consistency_interval_secs,
        )
    )


@@ -1,7 +1,22 @@
from typing import Optional

+import pyarrow as pa
+
class Connection(object):
-    async def table_names(self) -> list[str]: ...
+    async def table_names(
+        self, start_after: Optional[str], limit: Optional[int]
+    ) -> list[str]: ...
+    async def create_table(
+        self, name: str, mode: str, data: pa.RecordBatchReader
+    ) -> Table: ...
+    async def create_empty_table(
+        self, name: str, mode: str, schema: pa.Schema
+    ) -> Table: ...
+
+class Table(object):
+    def name(self) -> str: ...
+    def __repr__(self) -> str: ...
+    async def schema(self) -> pa.Schema: ...

async def connect(
    uri: str,


@@ -11,7 +11,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from pathlib import Path
-from typing import Iterable, List, Union
+from typing import Iterable, List, Optional, Union

import numpy as np
import pyarrow as pa
@@ -38,3 +38,99 @@ class Credential(str):
def sanitize_uri(uri: URI) -> str:
    return str(uri)


def _casting_recordbatch_iter(
    input_iter: Iterable[pa.RecordBatch], schema: pa.Schema
) -> Iterable[pa.RecordBatch]:
    """
    Wrapper around an iterator of record batches. If the batches don't match the
    schema, try to cast them to the schema. If that fails, raise an error.

    This is helpful for users who might have written the iterator with default
    data types in PyArrow, but specified more specific types in the schema. For
    example, PyArrow defaults to float64 for floating point types, but Lance
    uses float32 for vectors.
    """
    for batch in input_iter:
        if not isinstance(batch, pa.RecordBatch):
            raise TypeError(f"Expected RecordBatch, got {type(batch)}")
        if batch.schema != schema:
            try:
                # RecordBatch doesn't have a cast method, but table does.
                batch = pa.Table.from_batches([batch]).cast(schema).to_batches()[0]
            except pa.lib.ArrowInvalid:
                raise ValueError(
                    f"Input RecordBatch iterator yielded a batch with schema that "
                    f"does not match the expected schema.\nExpected:\n{schema}\n"
                    f"Got:\n{batch.schema}"
                )
        yield batch


def data_to_reader(
    data: DATA, schema: Optional[pa.Schema] = None
) -> pa.RecordBatchReader:
    """Convert various types of input into a RecordBatchReader"""
    if pd is not None and isinstance(data, pd.DataFrame):
        return pa.Table.from_pandas(data, schema=schema).to_reader()
    elif isinstance(data, pa.Table):
        return data.to_reader()
    elif isinstance(data, pa.RecordBatch):
        return pa.Table.from_batches([data]).to_reader()
    # elif isinstance(data, LanceDataset):
    #     return data_obj.scanner().to_reader()
    elif isinstance(data, pa.dataset.Dataset):
        return pa.dataset.Scanner.from_dataset(data).to_reader()
    elif isinstance(data, pa.dataset.Scanner):
        return data.to_reader()
    elif isinstance(data, pa.RecordBatchReader):
        return data
    elif (
        type(data).__module__.startswith("polars")
        and data.__class__.__name__ == "DataFrame"
    ):
        return data.to_arrow().to_reader()
    # for other iterables, assume they are of type Iterable[RecordBatch]
    elif isinstance(data, Iterable):
        if schema is not None:
            data = _casting_recordbatch_iter(data, schema)
            return pa.RecordBatchReader.from_batches(schema, data)
        else:
            raise ValueError(
                "Must provide schema to write dataset from RecordBatch iterable"
            )
    else:
        raise TypeError(
            f"Unknown data type {type(data)}. "
            "Please check "
            "https://lancedb.github.io/lance/read_and_write.html "
            "to see supported types."
        )


def validate_schema(schema: pa.Schema):
    """
    Make sure the metadata is valid utf8
    """
    if schema.metadata is not None:
        _validate_metadata(schema.metadata)


def _validate_metadata(metadata: dict):
    """
    Make sure the metadata values are valid utf8 (can be nested)

    Raises ValueError if not valid utf8
    """
    for k, v in metadata.items():
        if isinstance(v, bytes):
            try:
                v.decode("utf8")
            except UnicodeDecodeError:
                raise ValueError(
                    f"Metadata key {k} is not valid utf8. "
                    "Consider base64 encode for generic binary metadata."
                )
        elif isinstance(v, dict):
            _validate_metadata(v)
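
The casting in `_casting_recordbatch_iter` matters because PyArrow infers float64 for plain Python floats, while Lance vector columns are typically float32. A minimal, self-contained illustration of the same round-trip (the column name `x` is hypothetical):

```python
import pyarrow as pa

# PyArrow infers float64 ("double") for Python floats...
batch = pa.RecordBatch.from_pylist([{"x": 1.0}, {"x": 2.0}])
print(batch.schema.field("x").type)  # double

# ...so the batch is round-tripped through a Table to cast it to the
# narrower schema, exactly as the helper above does.
schema = pa.schema([pa.field("x", pa.float32())])
cast = pa.Table.from_batches([batch]).cast(schema).to_batches()[0]
print(cast.schema.field("x").type)  # float
```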


@@ -13,16 +13,24 @@
from __future__ import annotations

+import asyncio
+import inspect
import os
from abc import abstractmethod
from pathlib import Path
-from typing import TYPE_CHECKING, Iterable, List, Optional, Union
+from typing import TYPE_CHECKING, Iterable, List, Literal, Optional, Union

import pyarrow as pa
from overrides import EnforceOverrides, override
from pyarrow import fs

-from .table import LanceTable, Table
+from lancedb.common import data_to_reader, validate_schema
+from lancedb.embeddings.registry import EmbeddingFunctionRegistry
+from lancedb.utils.events import register_event
+
+from ._lancedb import connect as lancedb_connect
+from .pydantic import LanceModel
+from .table import AsyncTable, LanceTable, Table, _sanitize_data
from .util import fs_from_uri, get_uri_location, get_uri_scheme, join_uri

if TYPE_CHECKING:
@@ -31,7 +39,6 @@ if TYPE_CHECKING:
    from ._lancedb import Connection as LanceDbConnection
    from .common import DATA, URI
    from .embeddings import EmbeddingFunctionConfig
-    from .pydantic import LanceModel


class DBConnection(EnforceOverrides):
@@ -312,6 +319,10 @@ class LanceDBConnection(DBConnection):
    def uri(self) -> str:
        return self._uri

+    async def _async_get_table_names(self, start_after: Optional[str], limit: int):
+        conn = AsyncConnection(await lancedb_connect(self.uri))
+        return await conn.table_names(start_after=start_after, limit=limit)
+
    @override
    def table_names(
        self, page_token: Optional[str] = None, limit: int = 10
@@ -324,23 +335,31 @@ class LanceDBConnection(DBConnection):
            A list of table names.
        """
        try:
-            filesystem = fs_from_uri(self.uri)[0]
-        except pa.ArrowInvalid:
-            raise NotImplementedError("Unsupported scheme: " + self.uri)
-
-        try:
-            loc = get_uri_location(self.uri)
-            paths = filesystem.get_file_info(fs.FileSelector(loc))
-        except FileNotFoundError:
-            # It is ok if the file does not exist since it will be created
-            paths = []
-        tables = [
-            os.path.splitext(file_info.base_name)[0]
-            for file_info in paths
-            if file_info.extension == "lance"
-        ]
-        tables.sort()
-        return tables
+            asyncio.get_running_loop()
+            # User application is async. Soon we will just tell them to use the
+            # async version. Until then, fall back to the old sync implementation.
+            try:
+                filesystem = fs_from_uri(self.uri)[0]
+            except pa.ArrowInvalid:
+                raise NotImplementedError("Unsupported scheme: " + self.uri)
+
+            try:
+                loc = get_uri_location(self.uri)
+                paths = filesystem.get_file_info(fs.FileSelector(loc))
+            except FileNotFoundError:
+                # It is ok if the file does not exist since it will be created
+                paths = []
+            tables = [
+                os.path.splitext(file_info.base_name)[0]
+                for file_info in paths
+                if file_info.extension == "lance"
+            ]
+            tables.sort()
+            return tables
+        except RuntimeError:
+            # User application is sync. It is safe to use the async implementation
+            # under the hood.
+            return asyncio.run(self._async_get_table_names(page_token, limit))

    def __len__(self) -> int:
        return len(self.table_names())
@@ -422,41 +441,93 @@ class LanceDBConnection(DBConnection):
        filesystem.delete_dir(path)


-class AsyncConnection(EnforceOverrides):
-    """An active LanceDB connection interface."""
+class AsyncConnection(object):
+    """An active LanceDB connection
+
+    To obtain a connection you can use the [connect_async] function.
+
+    This could be a native connection (using lance) or a remote connection (e.g. for
+    connecting to LanceDB Cloud).
+
+    Local connections do not currently hold any open resources, but they may do so in
+    the future (for example, for a shared cache or connections to catalog services).
+    Remote connections represent an open connection to the remote server. The [close]
+    method can be used to release any underlying resources eagerly.
+
+    Connections can be shared on multiple threads and are expected to be long lived.
+    A connection can also be used as a context manager; however, in many cases a
+    single connection can be used for the lifetime of the application, so this is
+    often not needed. Closing a connection is optional. If it is not closed, it will
+    be closed automatically when the connection object is garbage collected.
+
+    Examples
+    --------
+    >>> import asyncio
+    >>> import lancedb
+    >>> async def my_connect():
+    ...     with await lancedb.connect_async("/tmp/my_dataset") as conn:
+    ...         # do something with the connection
+    ...         pass
+    ...     # conn is closed here
+    """
+
+    def __init__(self, connection: LanceDbConnection):
+        self._inner = connection
+
+    def __repr__(self):
+        return self._inner.__repr__()
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, *_):
+        self.close()
+
+    def is_open(self):
+        """Return True if the connection is open."""
+        return self._inner.is_open()
+
+    def close(self):
+        """Close the connection, releasing any underlying resources.
+
+        It is safe to call this method multiple times.
+
+        Any attempt to use the connection after it is closed will result in an
+        error."""
+        self._inner.close()

-    @abstractmethod
    async def table_names(
-        self, *, page_token: Optional[str] = None, limit: int = 10
+        self, *, start_after: Optional[str] = None, limit: Optional[int] = None
    ) -> Iterable[str]:
        """List all tables in this database, in sorted order

        Parameters
        ----------
-        page_token: str, optional
-            The token to use for pagination. If not present, start from the beginning.
-            Typically, this token is last table name from the previous page.
-            Only supported by LanceDb Cloud.
+        start_after: str, optional
+            If present, only return names that come lexicographically after the
+            supplied value.
+
+            This can be combined with limit to implement pagination by setting this
+            to the last table name from the previous page.
-        limit: int, default 10
-            The size of the page to return.
-            Only supported by LanceDb Cloud.
+        limit: int, optional
+            The number of results to return.

        Returns
        -------
        Iterable of str
        """
-        pass
+        return await self._inner.table_names(start_after=start_after, limit=limit)
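
A sketch of the pagination pattern this enables (it assumes a local database URI; `connect_async` is the entry point added in this PR):

```python
import asyncio

import lancedb


async def list_all_tables(uri: str) -> list[str]:
    db = await lancedb.connect_async(uri)
    names: list[str] = []
    last = None
    while True:
        # Each page starts lexicographically after the last name seen.
        page = await db.table_names(start_after=last, limit=10)
        if not page:
            return names
        names.extend(page)
        last = page[-1]


print(asyncio.run(list_all_tables("./.lancedb")))
```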
-    @abstractmethod
    async def create_table(
        self,
        name: str,
        data: Optional[DATA] = None,
        schema: Optional[Union[pa.Schema, LanceModel]] = None,
-        mode: str = "create",
-        exist_ok: bool = False,
-        on_bad_vectors: str = "error",
-        fill_value: float = 0.0,
+        mode: Optional[Literal["create", "overwrite"]] = None,
+        exist_ok: Optional[bool] = None,
+        on_bad_vectors: Optional[str] = None,
+        fill_value: Optional[float] = None,
        embedding_functions: Optional[List[EmbeddingFunctionConfig]] = None,
    ) -> Table:
        """Create a [Table][lancedb.table.Table] in the database.
@@ -480,7 +551,7 @@ class AsyncConnection(EnforceOverrides):
            - pyarrow.Schema

            - [LanceModel][lancedb.pydantic.LanceModel]
-        mode: str; default "create"
+        mode: Literal["create", "overwrite"]; default "create"
            The mode to use when creating the table.
            Can be either "create" or "overwrite".
            By default, if the table already exists, an exception is raised.
@@ -596,7 +667,74 @@ class AsyncConnection(EnforceOverrides):
        LanceTable(connection=..., name="table4")
        """
-        raise NotImplementedError
+        if inspect.isclass(schema) and issubclass(schema, LanceModel):
+            # convert LanceModel to pyarrow schema
+            # note that it's possible this contains
+            # embedding function metadata already
+            schema = schema.to_arrow_schema()
+
+        metadata = None
+        if embedding_functions is not None:
+            # If we passed in embedding functions explicitly
+            # then we'll override any schema metadata that
+            # may have been implicitly specified by the LanceModel schema
+            registry = EmbeddingFunctionRegistry.get_instance()
+            metadata = registry.get_table_metadata(embedding_functions)
+
+        # Defining defaults here and not in function prototype. In the future
+        # these defaults will move into rust so better to keep them as None.
+        if on_bad_vectors is None:
+            on_bad_vectors = "error"
+
+        if fill_value is None:
+            fill_value = 0.0
+
+        if data is not None:
+            data = _sanitize_data(
+                data,
+                schema,
+                metadata=metadata,
+                on_bad_vectors=on_bad_vectors,
+                fill_value=fill_value,
+            )
+
+        if schema is None:
+            if data is None:
+                raise ValueError("Either data or schema must be provided")
+            elif hasattr(data, "schema"):
+                schema = data.schema
+            elif isinstance(data, Iterable):
+                if metadata:
+                    raise TypeError(
+                        (
+                            "Persistent embedding functions not yet "
+                            "supported for generator data input"
+                        )
+                    )
+
+        if metadata:
+            schema = schema.with_metadata(metadata)
+        validate_schema(schema)
+
+        if exist_ok is None:
+            exist_ok = False
+        if mode is None:
+            mode = "create"
+        if mode == "create" and exist_ok:
+            mode = "exist_ok"
+
+        if data is None:
+            new_table = await self._inner.create_empty_table(name, mode, schema)
+        else:
+            data = data_to_reader(data, schema)
+            new_table = await self._inner.create_table(
+                name,
+                mode,
+                data,
+            )
+
+        register_event("create_table")
+        return AsyncTable(new_table)
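
The three creation modes in practice (a sketch; it assumes the async connection above and a hypothetical table name):

```python
import asyncio

import lancedb


async def main():
    db = await lancedb.connect_async("./.lancedb")
    data = [{"vector": [1.0, 2.0], "item": "foo"}]

    tbl = await db.create_table("my_table", data)  # raises if it already exists
    tbl = await db.create_table("my_table", data, mode="overwrite")  # replace it
    tbl = await db.create_table("my_table", data, exist_ok=True)  # open if present


asyncio.run(main())
```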
    async def open_table(self, name: str) -> Table:
        """Open a Lance Table in the database.
@@ -610,7 +748,9 @@ class AsyncConnection(EnforceOverrides):
        -------
        A LanceTable object representing the table.
        """
-        raise NotImplementedError
+        table = await self._inner.open_table(name)
+        register_event("open_table")
+        return AsyncTable(table)

    async def drop_table(self, name: str):
        """Drop a table from the database.
@@ -628,46 +768,3 @@ class AsyncConnection(EnforceOverrides):
        This is the same thing as dropping all the tables
        """
        raise NotImplementedError
-
-
-class AsyncLanceDBConnection(AsyncConnection):
-    def __init__(self, connection: LanceDbConnection):
-        self._inner = connection
-
-    async def __repr__(self) -> str:
-        pass
-
-    @override
-    async def table_names(
-        self,
-        *,
-        page_token=None,
-        limit=None,
-    ) -> Iterable[str]:
-        return await self._inner.table_names()
-
-    @override
-    async def create_table(
-        self,
-        name: str,
-        data: Optional[DATA] = None,
-        schema: Optional[Union[pa.Schema, LanceModel]] = None,
-        mode: str = "create",
-        exist_ok: bool = False,
-        on_bad_vectors: str = "error",
-        fill_value: float = 0.0,
-        embedding_functions: Optional[List[EmbeddingFunctionConfig]] = None,
-    ) -> LanceTable:
-        raise NotImplementedError
-
-    @override
-    async def open_table(self, name: str) -> LanceTable:
-        raise NotImplementedError
-
-    @override
-    async def drop_table(self, name: str, ignore_missing: bool = False):
-        raise NotImplementedError
-
-    @override
-    async def drop_database(self):
-        raise NotImplementedError


@@ -10,16 +10,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-import os
from functools import cached_property
-from typing import List, Optional, Union
+from typing import TYPE_CHECKING, List, Optional, Union
-
-import numpy as np

from ..util import attempt_import_or_raise
from .base import TextEmbeddingFunction
from .registry import register
-from .utils import api_key_not_found_help
+
+if TYPE_CHECKING:
+    import numpy as np


@register("openai")
@@ -28,14 +27,46 @@ class OpenAIEmbeddings(TextEmbeddingFunction):
    An embedding function that uses the OpenAI API

    https://platform.openai.com/docs/guides/embeddings

    This can also be used for open source models that
    are compatible with the OpenAI API.

    Notes
    -----
    If you're running an Ollama server locally,
    you can just override the `base_url` parameter
    and provide the Ollama embedding model you want
    to use (https://ollama.com/library):

    ```python
    from lancedb.embeddings import get_registry

    openai = get_registry().get("openai")
    embedding_function = openai.create(
        name="<ollama-embedding-model-name>",
        base_url="http://localhost:11434",
    )
    ```
    """

    name: str = "text-embedding-ada-002"
    dim: Optional[int] = None
+    base_url: Optional[str] = None
+    default_headers: Optional[dict] = None
+    organization: Optional[str] = None
+    api_key: Optional[str] = None

    def ndims(self):
        return self._ndims

+    @staticmethod
+    def model_names():
+        return [
+            "text-embedding-ada-002",
+            "text-embedding-3-large",
+            "text-embedding-3-small",
+        ]
+
    @cached_property
    def _ndims(self):
        if self.name == "text-embedding-ada-002":
@@ -48,8 +79,8 @@ class OpenAIEmbeddings(TextEmbeddingFunction):
            raise ValueError(f"Unknown model name {self.name}")

    def generate_embeddings(
-        self, texts: Union[List[str], np.ndarray]
-    ) -> List[np.array]:
+        self, texts: Union[List[str], "np.ndarray"]
+    ) -> List["np.array"]:
        """
        Get the embeddings for the given texts
@@ -62,15 +93,25 @@ class OpenAIEmbeddings(TextEmbeddingFunction):
        if self.name == "text-embedding-ada-002":
            rs = self._openai_client.embeddings.create(input=texts, model=self.name)
        else:
-            rs = self._openai_client.embeddings.create(
-                input=texts, model=self.name, dimensions=self.ndims()
-            )
+            kwargs = {
+                "input": texts,
+                "model": self.name,
+            }
+            if self.dim:
+                kwargs["dimensions"] = self.dim
+            rs = self._openai_client.embeddings.create(**kwargs)
        return [v.embedding for v in rs.data]

    @cached_property
    def _openai_client(self):
        openai = attempt_import_or_raise("openai")
-        if not os.environ.get("OPENAI_API_KEY"):
-            api_key_not_found_help("openai")
-        return openai.OpenAI()
+        kwargs = {}
+        if self.base_url:
+            kwargs["base_url"] = self.base_url
+        if self.default_headers:
+            kwargs["default_headers"] = self.default_headers
+        if self.organization:
+            kwargs["organization"] = self.organization
+        if self.api_key:
+            kwargs["api_key"] = self.api_key
+        return openai.OpenAI(**kwargs)
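
With these new fields, requesting a shortened `text-embedding-3` vector looks like this (a sketch; it requires `OPENAI_API_KEY` or the `api_key` field, and assumes the registry API shown in the docstring above):

```python
from lancedb.embeddings import get_registry

func = get_registry().get("openai").create(
    name="text-embedding-3-large",
    dim=256,  # passed through as the `dimensions` argument to the OpenAI API
)
embeddings = func.generate_embeddings(["hello world"])
print(len(embeddings[0]))  # 256
```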


@@ -22,7 +22,7 @@ try:
    import tantivy
except ImportError:
    raise ImportError(
-        "Please install tantivy-py `pip install tantivy@git+https://github.com/quickwit-oss/tantivy-py#164adc87e1a033117001cf70e38c82a53014d985` to use the full text search feature."  # noqa: E501
+        "Please install tantivy-py `pip install tantivy` to use the full text search feature."  # noqa: E501
    )

from .table import LanceTable


@@ -106,8 +106,8 @@ class Query(pydantic.BaseModel):

class LanceQueryBuilder(ABC):
-    """Build LanceDB query based on specific query type:
-    vector or full text search.
+    """An abstract query builder. Subclasses are defined for vector search,
+    full text search, hybrid, and plain SQL filtering.
    """

    @classmethod
@@ -118,6 +118,22 @@ class LanceQueryBuilder(ABC):
        query_type: str,
        vector_column_name: str,
    ) -> LanceQueryBuilder:
+        """
+        Create a query builder based on the given query and query type.
+
+        Parameters
+        ----------
+        table: Table
+            The table to query.
+        query: Optional[Union[np.ndarray, str, "PIL.Image.Image", Tuple]]
+            The query to use. If None, an empty query builder is returned
+            which performs simple SQL filtering.
+        query_type: str
+            The type of query to perform. One of "vector", "fts", "hybrid", or "auto".
+            If "auto", the query type is inferred based on the query.
+        vector_column_name: str
+            The name of the vector column to use for vector search.
+        """
        if query is None:
            return LanceEmptyQueryBuilder(table)
@@ -559,7 +575,7 @@ class LanceFtsQueryBuilder(LanceQueryBuilder):
            import tantivy
        except ImportError:
            raise ImportError(
-                "Please install tantivy-py `pip install tantivy@git+https://github.com/quickwit-oss/tantivy-py#164adc87e1a033117001cf70e38c82a53014d985` to use the full text search feature."  # noqa: E501
+                "Please install tantivy-py `pip install tantivy` to use the full text search feature."  # noqa: E501
            )

        from .fts import search_index
@@ -587,19 +603,26 @@ class LanceFtsQueryBuilder(LanceQueryBuilder):
        scores = pa.array(scores)
        output_tbl = self._table.to_lance().take(row_ids, columns=self._columns)
        output_tbl = output_tbl.append_column("score", scores)
+        # this needs to match vector search results which are uint64
+        row_ids = pa.array(row_ids, type=pa.uint64())

        if self._where is not None:
+            tmp_name = "__lancedb__duckdb__indexer__"
+            output_tbl = output_tbl.append_column(
+                tmp_name, pa.array(range(len(output_tbl)))
+            )
            try:
                # TODO would be great to have Substrait generate pyarrow compute
                # expressions or conversely have pyarrow support SQL expressions
                # using Substrait
                import duckdb

-                output_tbl = (
-                    duckdb.sql("SELECT * FROM output_tbl")
-                    .filter(self._where)
-                    .to_arrow_table()
-                )
+                indexer = duckdb.sql(
+                    f"SELECT {tmp_name} FROM output_tbl WHERE {self._where}"
+                ).to_arrow_table()[tmp_name]
+
+                output_tbl = output_tbl.take(indexer).drop([tmp_name])
+                row_ids = row_ids.take(indexer)
            except ImportError:
                import tempfile
@@ -609,10 +632,11 @@ class LanceFtsQueryBuilder(LanceQueryBuilder):
                with tempfile.TemporaryDirectory() as tmp:
                    ds = lance.write_dataset(output_tbl, tmp)
                    output_tbl = ds.to_table(filter=self._where)
+                    indexer = output_tbl[tmp_name]
+                    row_ids = row_ids.take(indexer)
+                    output_tbl = output_tbl.drop([tmp_name])

        if self._with_row_id:
-            # Need to set this to uint explicitly as vector results are in uint64
-            row_ids = pa.array(row_ids, type=pa.uint64())
            output_tbl = output_tbl.append_column("_rowid", row_ids)
        return output_tbl
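
The alignment trick here, sketched in isolation (hypothetical data and filter; `score > 0.6` stands in for `self._where`): tagging each row with its position lets both the result table and the row-id array be subset by the same indexer, so they stay in sync after filtering.

```python
import duckdb
import pyarrow as pa

output_tbl = pa.table({"text": ["a", "b", "c"], "score": [0.9, 0.5, 0.7]})
row_ids = pa.array([10, 11, 12], type=pa.uint64())

tmp_name = "__indexer__"
output_tbl = output_tbl.append_column(tmp_name, pa.array(range(len(output_tbl))))
indexer = duckdb.sql(
    f"SELECT {tmp_name} FROM output_tbl WHERE score > 0.6"
).to_arrow_table()[tmp_name]

output_tbl = output_tbl.take(indexer).drop([tmp_name])
row_ids = row_ids.take(indexer)  # still aligned with output_tbl
```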
@@ -628,6 +652,16 @@ class LanceEmptyQueryBuilder(LanceQueryBuilder):

class LanceHybridQueryBuilder(LanceQueryBuilder):
+    """
+    A query builder that performs hybrid vector and full text search.
+    Results are combined and reranked based on the specified reranker.
+    By default, the results are reranked using the LinearCombinationReranker.
+
+    To make the vector and fts results comparable, the scores are normalized.
+    Instead of normalizing scores, the `normalize` parameter can be set to "rank"
+    in the `rerank` method to convert the scores to ranks and then normalize them.
+    """
    def __init__(self, table: "Table", query: str, vector_column: str):
        super().__init__(table)
        self._validate_fts_index()


@@ -66,12 +66,40 @@ class RemoteTable(Table):
        """to_pandas() is not yet supported on LanceDB cloud."""
        return NotImplementedError("to_pandas() is not yet supported on LanceDB cloud.")

-    def create_scalar_index(self, *args, **kwargs):
-        """Creates a scalar index"""
-        return NotImplementedError(
-            "create_scalar_index() is not yet supported on LanceDB cloud."
-        )
+    def list_indices(self):
+        """List all the indices on the table"""
+        resp = self._conn._client.post(f"/v1/table/{self._name}/index/list/")
+        return resp
+
+    def index_stats(self, index_uuid: str):
+        """Return statistics for the index with the given uuid"""
+        resp = self._conn._client.post(
+            f"/v1/table/{self._name}/index/{index_uuid}/stats/"
+        )
+        return resp
+
+    def create_scalar_index(
+        self,
+        column: str,
+    ):
+        """Creates a scalar index
+
+        Parameters
+        ----------
+        column : str
+            The column to be indexed. Must be a boolean, integer, float,
+            or string column.
+        """
+        index_type = "scalar"
+        data = {
+            "column": column,
+            "index_type": index_type,
+            "replace": True,
+        }
+        resp = self._conn._client.post(
+            f"/v1/table/{self._name}/create_scalar_index/", data=data
+        )
+        return resp

    def create_index(
        self,
        metric="L2",


@@ -19,7 +19,17 @@ from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import timedelta
from functools import cached_property
-from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple, Union
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Dict,
+    Iterable,
+    List,
+    Literal,
+    Optional,
+    Tuple,
+    Union,
+)

import lance
import numpy as np
@@ -48,6 +58,7 @@ if TYPE_CHECKING:
    import PIL
    from lance.dataset import CleanupStats, ReaderLike

+    from ._lancedb import Table as LanceDBTable
    from .db import LanceDBConnection
@@ -1780,3 +1791,635 @@ def _sanitize_nans(data, fill_value, on_bad_vectors, vec_arr, vector_column_name
    is_full = np.any(~is_value_nan.reshape(-1, vec_arr.type.list_size), axis=1)
    data = data.filter(is_full)
    return data


class AsyncTable:
    """
    An AsyncTable is a collection of Records in a LanceDB Database.

    An AsyncTable can be obtained from the
    [AsyncConnection.create_table][lancedb.AsyncConnection.create_table] and
    [AsyncConnection.open_table][lancedb.AsyncConnection.open_table] methods.

    An AsyncTable object is expected to be long lived and reused for multiple
    operations. AsyncTable objects will cache a certain amount of index data in memory.
    This cache will be freed when the Table is garbage collected. To eagerly free the
    cache you can call the [close][AsyncTable.close] method. Once the AsyncTable is
    closed, it cannot be used for any further operations.

    An AsyncTable can also be used as a context manager, and will automatically close
    when the context is exited. Closing a table is optional. If you do not close the
    table, it will be closed when the AsyncTable object is garbage collected.

    Examples
    --------

    Create using [DBConnection.create_table][lancedb.DBConnection.create_table]
    (more examples in that method's documentation).

    >>> import lancedb
    >>> db = lancedb.connect("./.lancedb")
    >>> table = db.create_table("my_table", data=[{"vector": [1.1, 1.2], "b": 2}])
    >>> table.head()
    pyarrow.Table
    vector: fixed_size_list<item: float>[2]
      child 0, item: float
    b: int64
    ----
    vector: [[[1.1,1.2]]]
    b: [[2]]

    Can append new data with [Table.add()][lancedb.table.Table.add].

    >>> table.add([{"vector": [0.5, 1.3], "b": 4}])

    Can query the table with [Table.search][lancedb.table.Table.search].

    >>> table.search([0.4, 0.4]).select(["b", "vector"]).to_pandas()
       b      vector  _distance
    0  4  [0.5, 1.3]       0.82
    1  2  [1.1, 1.2]       1.13

    Search queries are much faster when an index is created. See
    [Table.create_index][lancedb.table.Table.create_index].
    """

    def __init__(self, table: LanceDBTable):
        """Create a new Table object.

        You should not create Table objects directly.

        Use [AsyncConnection.create_table][lancedb.AsyncConnection.create_table] and
        [AsyncConnection.open_table][lancedb.AsyncConnection.open_table] to obtain
        Table objects."""
        self._inner = table

    def __repr__(self):
        return self._inner.__repr__()

    def __enter__(self):
        return self

    def __exit__(self, *_):
        self.close()

    def is_open(self) -> bool:
        """Return True if the table is open."""
        return self._inner.is_open()

    def close(self):
        """Close the table and free any resources associated with it.

        It is safe to call this method multiple times.

        Any attempt to use the table after it has been closed will raise an error."""
        return self._inner.close()

    @property
    def name(self) -> str:
        """The name of the table."""
        return self._inner.name()

    async def schema(self) -> pa.Schema:
        """The [Arrow Schema](https://arrow.apache.org/docs/python/api/datatypes.html#)
        of this Table
        """
        return await self._inner.schema()

    async def count_rows(self, filter: Optional[str] = None) -> int:
        """
        Count the number of rows in the table.

        Parameters
        ----------
        filter: str, optional
            A SQL where clause to filter the rows to count.
        """
        return await self._inner.count_rows(filter)
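
For example (a sketch; the table and column names are hypothetical):

```python
import asyncio

import lancedb


async def main():
    db = await lancedb.connect_async("./.lancedb")
    table = await db.open_table("my_table")
    print(await table.count_rows())              # every row
    print(await table.count_rows("price < 10"))  # rows matching a SQL filter


asyncio.run(main())
```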
async def to_pandas(self) -> "pd.DataFrame":
"""Return the table as a pandas DataFrame.
Returns
-------
pd.DataFrame
"""
return self.to_arrow().to_pandas()
async def to_arrow(self) -> pa.Table:
"""Return the table as a pyarrow Table.
Returns
-------
pa.Table
"""
raise NotImplementedError
async def create_index(
self,
metric="L2",
num_partitions=256,
num_sub_vectors=96,
vector_column_name: str = VECTOR_COLUMN_NAME,
replace: bool = True,
accelerator: Optional[str] = None,
index_cache_size: Optional[int] = None,
):
"""Create an index on the table.
Parameters
----------
metric: str, default "L2"
The distance metric to use when creating the index.
Valid values are "L2", "cosine", or "dot".
L2 is euclidean distance.
num_partitions: int, default 256
The number of IVF partitions to use when creating the index.
Default is 256.
num_sub_vectors: int, default 96
The number of PQ sub-vectors to use when creating the index.
Default is 96.
vector_column_name: str, default "vector"
The vector column name to create the index.
replace: bool, default True
- If True, replace the existing index if it exists.
- If False, raise an error if duplicate index exists.
accelerator: str, default None
If set, use the given accelerator to create the index.
Only support "cuda" for now.
index_cache_size : int, optional
The size of the index cache in number of entries. Default value is 256.
"""
raise NotImplementedError
async def create_scalar_index(
self,
column: str,
*,
replace: bool = True,
):
"""Create a scalar index on a column.
Scalar indices, like vector indices, can be used to speed up scans. A scalar
index can speed up scans that contain filter expressions on the indexed column.
For example, the following scan will be faster if the column ``my_col`` has
a scalar index:
.. code-block:: python
import lancedb
db = lancedb.connect("/data/lance")
img_table = db.open_table("images")
my_df = img_table.search().where("my_col = 7", prefilter=True).to_pandas()
Scalar indices can also speed up scans containing a vector search and a
prefilter:
.. code-block::python
import lancedb
db = lancedb.connect("/data/lance")
img_table = db.open_table("images")
img_table.search([1, 2, 3, 4], vector_column_name="vector")
.where("my_col != 7", prefilter=True)
.to_pandas()
Scalar indices can only speed up scans for basic filters using
equality, comparison, range (e.g. ``my_col BETWEEN 0 AND 100``), and set
membership (e.g. `my_col IN (0, 1, 2)`)
Scalar indices can be used if the filter contains multiple indexed columns and
the filter criteria are AND'd or OR'd together
(e.g. ``my_col < 0 AND other_col> 100``)
Scalar indices may be used if the filter contains non-indexed columns but,
depending on the structure of the filter, they may not be usable. For example,
if the column ``not_indexed`` does not have a scalar index then the filter
``my_col = 0 OR not_indexed = 1`` will not be able to use any scalar index on
``my_col``.
**Experimental API**
Parameters
----------
column : str
The column to be indexed. Must be a boolean, integer, float,
or string column.
replace : bool, default True
Replace the existing index if it exists.
Examples
--------
.. code-block:: python
import lance
dataset = lance.dataset("./images.lance")
dataset.create_scalar_index("category")
"""
raise NotImplementedError
async def add(
self,
data: DATA,
*,
mode: Optional[Literal["append", "overwrite"]] = "append",
on_bad_vectors: Optional[str] = None,
fill_value: Optional[float] = None,
):
"""Add more data to the [Table](Table).
Parameters
----------
data: DATA
The data to insert into the table. Acceptable types are:
- dict or list-of-dict
- pandas.DataFrame
- pyarrow.Table or pyarrow.RecordBatch
mode: str
The mode to use when writing the data. Valid values are
"append" and "overwrite".
on_bad_vectors: str, default "error"
What to do if any of the vectors are not the same size or contain NaNs.
One of "error", "drop", "fill".
fill_value: float, default 0.0
The value to use when filling vectors. Only used if on_bad_vectors="fill".
"""
schema = await self.schema()
if on_bad_vectors is None:
on_bad_vectors = "error"
if fill_value is None:
fill_value = 0.0
data = _sanitize_data(
data,
schema,
metadata=schema.metadata,
on_bad_vectors=on_bad_vectors,
fill_value=fill_value,
)
await self._inner.add(data, mode)
register_event("add")
def merge_insert(self, on: Union[str, Iterable[str]]) -> LanceMergeInsertBuilder:
"""
Returns a [`LanceMergeInsertBuilder`][lancedb.merge.LanceMergeInsertBuilder]
that can be used to create a "merge insert" operation
This operation can add rows, update rows, and remove rows all in a single
transaction. It is a very generic tool that can be used to create
behaviors like "insert if not exists", "update or insert (i.e. upsert)",
or even replace a portion of existing data with new data (e.g. replace
all data where month="january")
The merge insert operation works by combining new data from a
**source table** with existing data in a **target table** by using a
join. There are three categories of records.
"Matched" records are records that exist in both the source table and
the target table. "Not matched" records exist only in the source table
(e.g. these are new data) "Not matched by source" records exist only
in the target table (this is old data)
The builder returned by this method can be used to customize what
should happen for each category of data.
Please note that the data may appear to be reordered as part of this
operation. This is because updated rows will be deleted from the
dataset and then reinserted at the end with the new values.
Parameters
----------
on: Union[str, Iterable[str]]
A column (or columns) to join on. This is how records from the
source table and target table are matched. Typically this is some
kind of key or id column.
Examples
--------
>>> import lancedb
>>> import pyarrow as pa
>>> data = pa.table({"a": [2, 1, 3], "b": ["a", "b", "c"]})
>>> db = lancedb.connect("./.lancedb")
>>> table = db.create_table("my_table", data)
>>> new_data = pa.table({"a": [2, 3, 4], "b": ["x", "y", "z"]})
>>> # Perform an "upsert" operation
>>> table.merge_insert("a") \\
... .when_matched_update_all() \\
... .when_not_matched_insert_all() \\
... .execute(new_data)
>>> # The order of new rows is non-deterministic since we use
>>> # a hash-join as part of this operation and so we sort here
>>> table.to_arrow().sort_by("a").to_pandas()
a b
0 1 b
1 2 x
2 3 y
3 4 z
"""
on = [on] if isinstance(on, str) else list(on)
return LanceMergeInsertBuilder(self, on)
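# Sketch of the "replace a portion of existing data" pattern described
# above; `when_not_matched_by_source_delete` is assumed from the
# LanceMergeInsertBuilder docs rather than shown in this diff.
#
#     table.merge_insert("id") \
#         .when_matched_update_all() \
#         .when_not_matched_insert_all() \
#         .when_not_matched_by_source_delete("month = 'january'") \
#         .execute(new_january_data)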
async def search(
self,
query: Optional[Union[VEC, str, "PIL.Image.Image", Tuple]] = None,
vector_column_name: Optional[str] = None,
query_type: str = "auto",
) -> LanceQueryBuilder:
"""Create a search query to find the nearest neighbors
of the given query vector. We currently support [vector search][search]
and [full-text search][experimental-full-text-search].
All query options are defined in [Query][lancedb.query.Query].
Examples
--------
>>> import lancedb
>>> db = lancedb.connect("./.lancedb")
>>> data = [
... {"original_width": 100, "caption": "bar", "vector": [0.1, 2.3, 4.5]},
... {"original_width": 2000, "caption": "foo", "vector": [0.5, 3.4, 1.3]},
... {"original_width": 3000, "caption": "test", "vector": [0.3, 6.2, 2.6]}
... ]
>>> table = db.create_table("my_table", data)
>>> query = [0.4, 1.4, 2.4]
>>> (table.search(query)
... .where("original_width > 1000", prefilter=True)
... .select(["caption", "original_width", "vector"])
... .limit(2)
... .to_pandas())
caption original_width vector _distance
0 foo 2000 [0.5, 3.4, 1.3] 5.220000
1 test 3000 [0.3, 6.2, 2.6] 23.089996
Parameters
----------
query: list/np.ndarray/str/PIL.Image.Image, default None
The targeted vector to search for.
- Acceptable types are: list, np.ndarray, PIL.Image.Image
- If None then the select/where/limit clauses are applied to filter
the table
vector_column_name: str, optional
The name of the vector column to search.
The vector column needs to be a pyarrow fixed size list type
- If not specified then the vector column is inferred from
the table schema
- If the table has multiple vector columns then the *vector_column_name*
needs to be specified. Otherwise, an error is raised.
query_type: str, default "auto"
Acceptable values are: "vector", "fts", "hybrid", or "auto"
- If "auto" then the query type is inferred from the query:
- If `query` is a list/np.ndarray then the query type is
"vector";
- If `query` is a PIL.Image.Image then either do vector search,
or raise an error if no corresponding embedding function is found;
- If `query` is a string, then the query type is "vector" if the
table has embedding functions, else the query type is "fts"
Returns
-------
LanceQueryBuilder
A query builder object representing the query.
Once executed, the query returns
- selected columns
- the vector
- and also the "_distance" column which is the distance between the query
vector and the returned vector.
"""
raise NotImplementedError
async def _execute_query(self, query: Query) -> pa.Table:
pass
async def _do_merge(
self,
merge: LanceMergeInsertBuilder,
new_data: DATA,
on_bad_vectors: str,
fill_value: float,
):
pass
async def delete(self, where: str):
"""Delete rows from the table.
This can be used to delete a single row, many rows, all rows, or
sometimes no rows (if your predicate matches nothing).
Parameters
----------
where: str
The SQL where clause to use when deleting rows.
- For example, 'x = 2' or 'x IN (1, 2, 3)'.
The filter must not be empty, or it will error.
Examples
--------
>>> import lancedb
>>> data = [
... {"x": 1, "vector": [1, 2]},
... {"x": 2, "vector": [3, 4]},
... {"x": 3, "vector": [5, 6]}
... ]
>>> db = lancedb.connect("./.lancedb")
>>> table = db.create_table("my_table", data)
>>> table.to_pandas()
x vector
0 1 [1.0, 2.0]
1 2 [3.0, 4.0]
2 3 [5.0, 6.0]
>>> table.delete("x = 2")
>>> table.to_pandas()
x vector
0 1 [1.0, 2.0]
1 3 [5.0, 6.0]
If you have a list of values to delete, you can combine them into a
stringified list and use the `IN` operator:
>>> to_remove = [1, 5]
>>> to_remove = ", ".join([str(v) for v in to_remove])
>>> to_remove
'1, 5'
>>> table.delete(f"x IN ({to_remove})")
>>> table.to_pandas()
x vector
0 3 [5.0, 6.0]
"""
raise NotImplementedError
async def update(
self,
where: Optional[str] = None,
values: Optional[dict] = None,
*,
values_sql: Optional[Dict[str, str]] = None,
):
"""
This can be used to update anywhere from zero rows to all rows,
depending on how many rows match the where clause. If no where
clause is provided, then all rows will be updated.
Either `values` or `values_sql` must be provided. You cannot provide
both.
Parameters
----------
where: str, optional
The SQL where clause to use when updating rows. For example, 'x = 2'
or 'x IN (1, 2, 3)'. The filter must not be empty, or it will error.
values: dict, optional
The values to update. The keys are the column names and the values
are the values to set.
values_sql: dict, optional
The values to update, expressed as SQL expression strings. These can
reference existing columns. For example, {"x": "x + 1"} will increment
the x column by 1.
Examples
--------
>>> import lancedb
>>> import pandas as pd
>>> data = pd.DataFrame({"x": [1, 2, 3], "vector": [[1, 2], [3, 4], [5, 6]]})
>>> db = lancedb.connect("./.lancedb")
>>> table = db.create_table("my_table", data)
>>> table.to_pandas()
x vector
0 1 [1.0, 2.0]
1 2 [3.0, 4.0]
2 3 [5.0, 6.0]
>>> table.update(where="x = 2", values={"vector": [10, 10]})
>>> table.to_pandas()
x vector
0 1 [1.0, 2.0]
1 3 [5.0, 6.0]
2 2 [10.0, 10.0]
>>> table.update(values_sql={"x": "x + 1"})
>>> table.to_pandas()
x vector
0 2 [1.0, 2.0]
1 4 [5.0, 6.0]
2 3 [10.0, 10.0]
"""
raise NotImplementedError
async def cleanup_old_versions(
self,
older_than: Optional[timedelta] = None,
*,
delete_unverified: bool = False,
) -> CleanupStats:
"""
Clean up old versions of the table, freeing disk space.
Note: This function is not available in LanceDB Cloud (since LanceDB
Cloud manages cleanup for you automatically)
Parameters
----------
older_than: timedelta, default None
The minimum age of the version to delete. If None, then this defaults
to two weeks.
delete_unverified: bool, default False
Because they may be part of an in-progress transaction, files newer
than 7 days old are not deleted by default. If you are sure that
there are no in-progress transactions, then you can set this to True
to delete all files older than `older_than`.
Returns
-------
CleanupStats
The stats of the cleanup operation, including how many bytes were
freed.
"""
raise NotImplementedError
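# Usage sketch (not part of this diff): reclaim space from versions
# older than one day; delete_unverified=True is only safe when no
# writes are in flight.
#
#     from datetime import timedelta
#     stats = await tbl.cleanup_old_versions(
#         older_than=timedelta(days=1), delete_unverified=True
#     )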
async def compact_files(self, *args, **kwargs):
"""
Run the compaction process on the table.
Note: This function is not available in LanceDB Cloud (since LanceDB
Cloud manages compaction for you automatically)
This can be run after making several small appends to optimize the table
for faster reads.
Arguments are passed onto :meth:`lance.dataset.DatasetOptimizer.compact_files`.
For most cases, the default should be fine.
"""
raise NotImplementedError
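# Usage sketch (not part of this diff): a bare call uses the defaults;
# keyword arguments are forwarded to lance's compact_files.
#
#     await tbl.compact_files()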
async def add_columns(self, transforms: Dict[str, str]):
"""
Add new columns with defined values.
This is not yet available in LanceDB Cloud.
Parameters
----------
transforms: Dict[str, str]
A map of column name to a SQL expression to use to calculate the
value of the new column. These expressions will be evaluated for
each row in the table, and can reference existing columns.
"""
raise NotImplementedError
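# Usage sketch (not part of this diff): one derived column and one
# constant column, both expressed as SQL over existing columns.
#
#     await tbl.add_columns({"double_price": "price * 2", "tag": "'v1'"})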
async def alter_columns(self, alterations: Iterable[Dict[str, Any]]):
"""
Alter column names and nullability.
This is not yet available in LanceDB Cloud.
Parameters
----------
alterations : Iterable[Dict[str, Any]]
A sequence of dictionaries, each with the following keys:
- "path": str
The column path to alter. For a top-level column, this is the name.
For a nested column, this is the dot-separated path, e.g. "a.b.c".
- "name": str, optional
The new name of the column. If not specified, the column name is
not changed.
- "nullable": bool, optional
Whether the column should be nullable. If not specified, the column
nullability is not changed. Only non-nullable columns can be changed
to nullable. Currently, you cannot change a nullable column to
non-nullable.
"""
raise NotImplementedError
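# Usage sketch (not part of this diff): rename a top-level column and
# relax nullability on a nested one, using hypothetical column paths.
#
#     await tbl.alter_columns([
#         {"path": "price", "name": "unit_price"},
#         {"path": "meta.source", "nullable": True},
#     ])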
async def drop_columns(self, columns: Iterable[str]):
"""
Drop columns from the table.
This is not yet available in LanceDB Cloud.
Parameters
----------
columns : Iterable[str]
The names of the columns to drop.
"""
raise NotImplementedError
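# Usage sketch (not part of this diff): drop two hypothetical columns
# in one call.
#
#     await tbl.drop_columns(["embedding_debug", "tmp_score"])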

View File

@@ -11,6 +11,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from datetime import timedelta
import lancedb
import numpy as np
import pandas as pd
@@ -182,6 +185,10 @@ async def test_table_names_async(tmp_path):
db = await lancedb.connect_async(tmp_path)
assert await db.table_names() == ["test1", "test2", "test3"]
assert await db.table_names(limit=1) == ["test1"]
assert await db.table_names(start_after="test1", limit=1) == ["test2"]
assert await db.table_names(start_after="test1") == ["test2", "test3"]
def test_create_mode(tmp_path):
db = lancedb.connect(tmp_path)
@@ -250,6 +257,133 @@ def test_create_exist_ok(tmp_path):
db.create_table("test", schema=bad_schema, exist_ok=True) db.create_table("test", schema=bad_schema, exist_ok=True)
@pytest.mark.asyncio
async def test_connect(tmp_path):
db = await lancedb.connect_async(tmp_path)
assert str(db) == f"NativeDatabase(uri={tmp_path}, read_consistency_interval=None)"
db = await lancedb.connect_async(
tmp_path, read_consistency_interval=timedelta(seconds=5)
)
assert str(db) == f"NativeDatabase(uri={tmp_path}, read_consistency_interval=5s)"
@pytest.mark.asyncio
async def test_close(tmp_path):
db = await lancedb.connect_async(tmp_path)
assert db.is_open()
db.close()
assert not db.is_open()
with pytest.raises(RuntimeError, match="is closed"):
await db.table_names()
@pytest.mark.asyncio
async def test_create_mode_async(tmp_path):
db = await lancedb.connect_async(tmp_path)
data = pd.DataFrame(
{
"vector": [[3.1, 4.1], [5.9, 26.5]],
"item": ["foo", "bar"],
"price": [10.0, 20.0],
}
)
await db.create_table("test", data=data)
with pytest.raises(RuntimeError):
await db.create_table("test", data=data)
new_data = pd.DataFrame(
{
"vector": [[3.1, 4.1], [5.9, 26.5]],
"item": ["fizz", "buzz"],
"price": [10.0, 20.0],
}
)
_tbl = await db.create_table("test", data=new_data, mode="overwrite")
# MIGRATION: to_pandas() is not available in async
# assert tbl.to_pandas().item.tolist() == ["fizz", "buzz"]
@pytest.mark.asyncio
async def test_create_exist_ok_async(tmp_path):
db = await lancedb.connect_async(tmp_path)
data = pd.DataFrame(
{
"vector": [[3.1, 4.1], [5.9, 26.5]],
"item": ["foo", "bar"],
"price": [10.0, 20.0],
}
)
tbl = await db.create_table("test", data=data)
with pytest.raises(RuntimeError):
await db.create_table("test", data=data)
# open the table but don't add more rows
tbl2 = await db.create_table("test", data=data, exist_ok=True)
assert tbl.name == tbl2.name
assert await tbl.schema() == await tbl2.schema()
schema = pa.schema(
[
pa.field("vector", pa.list_(pa.float32(), list_size=2)),
pa.field("item", pa.utf8()),
pa.field("price", pa.float64()),
]
)
tbl3 = await db.create_table("test", schema=schema, exist_ok=True)
assert await tbl3.schema() == schema
# Migration: When creating a table, but the table already exists, but
# the schema is different, it should raise an error.
# bad_schema = pa.schema(
# [
# pa.field("vector", pa.list_(pa.float32(), list_size=2)),
# pa.field("item", pa.utf8()),
# pa.field("price", pa.float64()),
# pa.field("extra", pa.float32()),
# ]
# )
# with pytest.raises(ValueError):
# await db.create_table("test", schema=bad_schema, exist_ok=True)
@pytest.mark.asyncio
async def test_open_table(tmp_path):
db = await lancedb.connect_async(tmp_path)
data = pd.DataFrame(
{
"vector": [[3.1, 4.1], [5.9, 26.5]],
"item": ["foo", "bar"],
"price": [10.0, 20.0],
}
)
await db.create_table("test", data=data)
tbl = await db.open_table("test")
assert tbl.name == "test"
assert (
re.search(
r"NativeTable\(test, uri=.*test\.lance, read_consistency_interval=None\)",
str(tbl),
)
is not None
)
assert await tbl.schema() == pa.schema(
{
"vector": pa.list_(pa.float32(), list_size=2),
"item": pa.utf8(),
"price": pa.float64(),
}
)
with pytest.raises(ValueError, match="was not found"):
await db.open_table("does_not_exist")
def test_delete_table(tmp_path):
db = lancedb.connect(tmp_path)
data = pd.DataFrame(

View File

@@ -137,7 +137,11 @@ def test_search_index_with_filter(table):
# no duckdb
with mock.patch("builtins.__import__", side_effect=import_mock):
- rs = table.search("puppy").where("id=1").limit(10).to_list()
rs = table.search("puppy").where("id=1").limit(10)
# test schema
assert rs.to_arrow().drop("score").schema.equals(table.schema)
rs = rs.to_list()
for r in rs:
assert r["id"] == 1
@@ -147,6 +151,10 @@ def test_search_index_with_filter(table):
assert r["id"] == 1 assert r["id"] == 1
assert rs == rs2 assert rs == rs2
rs = table.search("puppy").where("id=1").with_row_id(True).limit(10).to_list()
for r in rs:
assert r["id"] == 1
assert r["_rowid"] is not None
def test_null_input(table):
@@ -169,10 +177,18 @@ def test_syntax(table):
table.create_fts_index("text")
with pytest.raises(ValueError, match="Syntax Error"):
table.search("they could have been dogs OR cats").limit(10).to_list()
# these should work
# terms queries
table.search('"they could have been dogs" OR cats').limit(10).to_list()
table.search("(they AND could) OR (have AND been AND dogs) OR cats").limit(
10
).to_list()
# phrase queries
table.search("they could have been dogs OR cats").phrase_query().limit(10).to_list() table.search("they could have been dogs OR cats").phrase_query().limit(10).to_list()
# this should work
table.search('"they could have been dogs OR cats"').limit(10).to_list() table.search('"they could have been dogs OR cats"').limit(10).to_list()
# this should work too
table.search('''"the cats OR dogs were not really 'pets' at all"''').limit( table.search('''"the cats OR dogs were not really 'pets' at all"''').limit(
10 10
).to_list() ).to_list()

View File

@@ -26,8 +26,9 @@ import pandas as pd
import polars as pl
import pyarrow as pa
import pytest
import pytest_asyncio
from lancedb.conftest import MockTextEmbeddingFunction
- from lancedb.db import LanceDBConnection
from lancedb.db import AsyncConnection, LanceDBConnection
from lancedb.embeddings import EmbeddingFunctionConfig, EmbeddingFunctionRegistry
from lancedb.pydantic import LanceModel, Vector
from lancedb.table import LanceTable
@@ -49,6 +50,13 @@ def db(tmp_path) -> MockDB:
return MockDB(tmp_path)
@pytest_asyncio.fixture
async def db_async(tmp_path) -> AsyncConnection:
return await lancedb.connect_async(
tmp_path, read_consistency_interval=timedelta(seconds=0)
)
def test_basic(db):
ds = LanceTable.create(
db,
@@ -65,6 +73,18 @@ def test_basic(db):
assert table.to_lance().to_table() == ds.to_table()
@pytest.mark.asyncio
async def test_close(db_async: AsyncConnection):
table = await db_async.create_table("some_table", data=[{"id": 0}])
assert table.is_open()
table.close()
assert not table.is_open()
with pytest.raises(Exception, match="Table some_table is closed"):
await table.count_rows()
assert str(table) == "ClosedTable(some_table)"
def test_create_table(db):
schema = pa.schema(
[
@@ -186,6 +206,25 @@ def test_add_pydantic_model(db):
assert len(really_flattened.columns) == 7
@pytest.mark.asyncio
async def test_add_async(db_async: AsyncConnection):
table = await db_async.create_table(
"test",
data=[
{"vector": [3.1, 4.1], "item": "foo", "price": 10.0},
{"vector": [5.9, 26.5], "item": "bar", "price": 20.0},
],
)
assert await table.count_rows() == 2
await table.add(
data=[
{"vector": [10.0, 11.0], "item": "baz", "price": 30.0},
],
)
table = await db_async.open_table("test")
assert await table.count_rows() == 3
def test_polars(db):
data = {
"vector": [[3.1, 4.1], [5.9, 26.5]],
@@ -854,8 +893,17 @@ def test_hybrid_search(db, tmp_path):
result3 = table.search(
"Our father who art in heaven", query_type="hybrid"
).to_pydantic(MyTable)
assert result1 == result3
# with post filters
result = (
table.search("Arrrrggghhhhhhh", query_type="hybrid")
.where("text='Arrrrggghhhhhhh'")
.to_list()
)
assert len(result) == 1
@pytest.mark.parametrize(
"consistency_interval", [None, timedelta(seconds=0), timedelta(seconds=0.1)]

View File

@@ -12,25 +12,129 @@
// See the License for the specific language governing permissions and
// limitations under the License.
- use std::time::Duration;
use std::{sync::Arc, time::Duration};
- use lancedb::connection::Connection as LanceConnection;
use arrow::{datatypes::Schema, ffi_stream::ArrowArrayStreamReader, pyarrow::FromPyArrow};
- use pyo3::{pyclass, pyfunction, pymethods, PyAny, PyRef, PyResult, Python};
use lancedb::connection::{Connection as LanceConnection, CreateTableMode};
use pyo3::{
exceptions::{PyRuntimeError, PyValueError},
pyclass, pyfunction, pymethods, PyAny, PyRef, PyResult, Python,
};
use pyo3_asyncio::tokio::future_into_py;
- use crate::error::PythonErrorExt;
use crate::{error::PythonErrorExt, table::Table};
#[pyclass]
pub struct Connection {
- inner: LanceConnection,
inner: Option<LanceConnection>,
}
impl Connection {
pub(crate) fn new(inner: LanceConnection) -> Self {
Self { inner: Some(inner) }
}
fn get_inner(&self) -> PyResult<&LanceConnection> {
self.inner
.as_ref()
.ok_or_else(|| PyRuntimeError::new_err("Connection is closed"))
}
}
impl Connection {
fn parse_create_mode_str(mode: &str) -> PyResult<CreateTableMode> {
match mode {
"create" => Ok(CreateTableMode::Create),
"overwrite" => Ok(CreateTableMode::Overwrite),
"exist_ok" => Ok(CreateTableMode::exist_ok(|builder| builder)),
_ => Err(PyValueError::new_err(format!("Invalid mode {}", mode))),
}
}
}
#[pymethods]
impl Connection {
- pub fn table_names(self_: PyRef<'_, Self>) -> PyResult<&PyAny> {
- let inner = self_.inner.clone();
fn __repr__(&self) -> String {
match &self.inner {
Some(inner) => inner.to_string(),
None => "ClosedConnection".to_string(),
}
}
fn is_open(&self) -> bool {
self.inner.is_some()
}
fn close(&mut self) {
self.inner.take();
}
pub fn table_names(
self_: PyRef<'_, Self>,
start_after: Option<String>,
limit: Option<u32>,
) -> PyResult<&PyAny> {
let inner = self_.get_inner()?.clone();
let mut op = inner.table_names();
if let Some(start_after) = start_after {
op = op.start_after(start_after);
}
if let Some(limit) = limit {
op = op.limit(limit);
}
future_into_py(self_.py(), async move { op.execute().await.infer_error() })
}
pub fn create_table<'a>(
self_: PyRef<'a, Self>,
name: String,
mode: &str,
data: &PyAny,
) -> PyResult<&'a PyAny> {
let inner = self_.get_inner()?.clone();
let mode = Self::parse_create_mode_str(mode)?;
let batches = Box::new(ArrowArrayStreamReader::from_pyarrow(data)?);
future_into_py(self_.py(), async move {
- inner.table_names().await.infer_error()
let table = inner
.create_table(name, batches)
.mode(mode)
.execute()
.await
.infer_error()?;
Ok(Table::new(table))
})
}
pub fn create_empty_table<'a>(
self_: PyRef<'a, Self>,
name: String,
mode: &str,
schema: &PyAny,
) -> PyResult<&'a PyAny> {
let inner = self_.get_inner()?.clone();
let mode = Self::parse_create_mode_str(mode)?;
let schema = Schema::from_pyarrow(schema)?;
future_into_py(self_.py(), async move {
let table = inner
.create_empty_table(name, Arc::new(schema))
.mode(mode)
.execute()
.await
.infer_error()?;
Ok(Table::new(table))
})
}
pub fn open_table(self_: PyRef<'_, Self>, name: String) -> PyResult<&PyAny> {
let inner = self_.get_inner()?.clone();
future_into_py(self_.py(), async move {
let table = inner.open_table(&name).execute().await.infer_error()?;
Ok(Table::new(table))
})
}
}
@@ -59,8 +163,6 @@ pub fn connect(
let read_consistency_interval = Duration::from_secs_f64(read_consistency_interval);
builder = builder.read_consistency_interval(read_consistency_interval);
}
- Ok(Connection {
- inner: builder.execute().await.infer_error()?,
- })
Ok(Connection::new(builder.execute().await.infer_error()?))
})
}

View File

@@ -13,7 +13,7 @@
// limitations under the License.
use pyo3::{
- exceptions::{PyOSError, PyRuntimeError, PyValueError},
exceptions::{PyIOError, PyNotImplementedError, PyOSError, PyRuntimeError, PyValueError},
PyResult,
};
@@ -41,10 +41,15 @@ impl<T> PythonErrorExt<T> for std::result::Result<T, LanceError> {
LanceError::Schema { .. } => self.value_error(),
LanceError::CreateDir { .. } => self.os_error(),
LanceError::TableAlreadyExists { .. } => self.runtime_error(),
- LanceError::Store { .. } => self.runtime_error(),
LanceError::ObjectStore { .. } => Err(PyIOError::new_err(err.to_string())),
LanceError::Lance { .. } => self.runtime_error(),
LanceError::Runtime { .. } => self.runtime_error(),
LanceError::Http { .. } => self.runtime_error(),
LanceError::Arrow { .. } => self.runtime_error(),
LanceError::NotSupported { .. } => {
Err(PyNotImplementedError::new_err(err.to_string()))
}
LanceError::Other { .. } => self.runtime_error(),
},
}
}

View File

@@ -17,7 +17,8 @@ use env_logger::Env;
use pyo3::{pymodule, types::PyModule, wrap_pyfunction, PyResult, Python};
pub mod connection;
- pub(crate) mod error;
pub mod error;
pub mod table;
#[pymodule]
pub fn _lancedb(_py: Python, m: &PyModule) -> PyResult<()> {

python/src/table.rs (new file, 90 lines)
View File

@@ -0,0 +1,90 @@
use arrow::{
ffi_stream::ArrowArrayStreamReader,
pyarrow::{FromPyArrow, ToPyArrow},
};
use lancedb::table::{AddDataMode, Table as LanceDbTable};
use pyo3::{
exceptions::{PyRuntimeError, PyValueError},
pyclass, pymethods, PyAny, PyRef, PyResult, Python,
};
use pyo3_asyncio::tokio::future_into_py;
use crate::error::PythonErrorExt;
#[pyclass]
pub struct Table {
// We keep a copy of the name to use if the inner table is dropped
name: String,
inner: Option<LanceDbTable>,
}
impl Table {
pub(crate) fn new(inner: LanceDbTable) -> Self {
Self {
name: inner.name().to_string(),
inner: Some(inner),
}
}
}
impl Table {
fn inner_ref(&self) -> PyResult<&LanceDbTable> {
self.inner
.as_ref()
.ok_or_else(|| PyRuntimeError::new_err(format!("Table {} is closed", self.name)))
}
}
#[pymethods]
impl Table {
pub fn name(&self) -> String {
self.name.clone()
}
pub fn is_open(&self) -> bool {
self.inner.is_some()
}
pub fn close(&mut self) {
self.inner.take();
}
pub fn schema(self_: PyRef<'_, Self>) -> PyResult<&PyAny> {
let inner = self_.inner_ref()?.clone();
future_into_py(self_.py(), async move {
let schema = inner.schema().await.infer_error()?;
Python::with_gil(|py| schema.to_pyarrow(py))
})
}
pub fn add<'a>(self_: PyRef<'a, Self>, data: &PyAny, mode: String) -> PyResult<&'a PyAny> {
let batches = Box::new(ArrowArrayStreamReader::from_pyarrow(data)?);
let mut op = self_.inner_ref()?.add(batches);
if mode == "append" {
op = op.mode(AddDataMode::Append);
} else if mode == "overwrite" {
op = op.mode(AddDataMode::Overwrite);
} else {
return Err(PyValueError::new_err(format!("Invalid mode: {}", mode)));
}
future_into_py(self_.py(), async move {
op.execute().await.infer_error()?;
Ok(())
})
}
pub fn count_rows(self_: PyRef<'_, Self>, filter: Option<String>) -> PyResult<&PyAny> {
let inner = self_.inner_ref()?.clone();
future_into_py(self_.py(), async move {
inner.count_rows(filter).await.infer_error()
})
}
pub fn __repr__(&self) -> String {
match &self.inner {
None => format!("ClosedTable({})", self.name),
Some(inner) => inner.to_string(),
}
}
}

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-node"
- version = "0.4.11"
version = "0.4.12"
description = "Serverless, low-latency vector database for AI applications"
license.workspace = true
edition.workspace = true
@@ -8,6 +8,7 @@ repository.workspace = true
keywords.workspace = true
categories.workspace = true
exclude = ["index.node"]
rust-version = "1.75"
[lib]
crate-type = ["cdylib"]

View File

@@ -19,7 +19,6 @@ use neon::{
};
use crate::{error::ResultExt, runtime, table::JsTable};
- use lancedb::Table;
pub fn table_create_scalar_index(mut cx: FunctionContext) -> JsResult<JsPromise> {
let js_table = cx.this().downcast_or_throw::<JsBox<JsTable>, _>(&mut cx)?;
@@ -34,8 +33,6 @@ pub fn table_create_scalar_index(mut cx: FunctionContext) -> JsResult<JsPromise>
rt.spawn(async move {
let idx_result = table
- .as_native()
- .unwrap()
.create_index(&[&column])
.replace(replace)
.build()

View File

@@ -40,8 +40,9 @@ pub fn table_create_vector_index(mut cx: FunctionContext) -> JsResult<JsPromise>
.unwrap_or("vector".to_string()); // Backward compatibility .unwrap_or("vector".to_string()); // Backward compatibility
let tbl = table.clone(); let tbl = table.clone();
let mut index_builder = tbl.create_index(&[&column_name]); let index_builder = tbl.create_index(&[&column_name]);
get_index_params_builder(&mut cx, index_params, &mut index_builder).or_throw(&mut cx)?; let index_builder =
get_index_params_builder(&mut cx, index_params, index_builder).or_throw(&mut cx)?;
rt.spawn(async move { rt.spawn(async move {
let idx_result = index_builder.build().await; let idx_result = index_builder.build().await;
@@ -56,9 +57,9 @@ pub fn table_create_vector_index(mut cx: FunctionContext) -> JsResult<JsPromise>
fn get_index_params_builder(
cx: &mut FunctionContext,
obj: Handle<JsObject>,
- builder: &mut IndexBuilder,
- ) -> crate::error::Result<()> {
- match obj.get::<JsString, _, _>(cx, "type")?.value(cx).as_str() {
builder: IndexBuilder,
) -> crate::error::Result<IndexBuilder> {
let mut builder = match obj.get::<JsString, _, _>(cx, "type")?.value(cx).as_str() {
"ivf_pq" => builder.ivf_pq(),
_ => {
return Err(InvalidIndexType {
@@ -67,28 +68,29 @@ fn get_index_params_builder(
}
};
- obj.get_opt::<JsString, _, _>(cx, "index_name")?
- .map(|s| builder.name(s.value(cx).as_str()));
if let Some(index_name) = obj.get_opt::<JsString, _, _>(cx, "index_name")? {
builder = builder.name(index_name.value(cx).as_str());
}
if let Some(metric_type) = obj.get_opt::<JsString, _, _>(cx, "metric_type")? {
let metric_type = MetricType::try_from(metric_type.value(cx).as_str())?;
- builder.metric_type(metric_type);
builder = builder.metric_type(metric_type);
}
if let Some(np) = obj.get_opt_u32(cx, "num_partitions")? {
- builder.num_partitions(np);
builder = builder.num_partitions(np);
}
if let Some(ns) = obj.get_opt_u32(cx, "num_sub_vectors")? {
- builder.num_sub_vectors(ns);
builder = builder.num_sub_vectors(ns);
}
if let Some(max_iters) = obj.get_opt_u32(cx, "max_iters")? {
- builder.max_iterations(max_iters);
builder = builder.max_iterations(max_iters);
}
if let Some(num_bits) = obj.get_opt_u32(cx, "num_bits")? {
- builder.num_bits(num_bits);
builder = builder.num_bits(num_bits);
}
if let Some(replace) = obj.get_opt::<JsBoolean, _, _>(cx, "replace")? {
- builder.replace(replace.value(cx));
builder = builder.replace(replace.value(cx));
}
- Ok(())
Ok(builder)
}

View File

@@ -132,7 +132,7 @@ fn database_table_names(mut cx: FunctionContext) -> JsResult<JsPromise> {
let database = db.database.clone();
rt.spawn(async move {
- let tables_rst = database.table_names().await;
let tables_rst = database.table_names().execute().await;
deferred.settle_with(&channel, move |mut cx| {
let tables = tables_rst.or_throw(&mut cx)?;

View File

@@ -18,10 +18,10 @@ use arrow_array::{RecordBatch, RecordBatchIterator};
use lance::dataset::optimize::CompactionOptions;
use lance::dataset::{ColumnAlteration, NewColumnTransform, WriteMode, WriteParams};
use lance::io::ObjectStoreParams;
- use lancedb::table::{AddDataOptions, OptimizeAction, WriteOptions};
use lancedb::table::{OptimizeAction, WriteOptions};
use crate::arrow::{arrow_buffer_to_record_batch, record_batch_to_buffer};
- use lancedb::TableRef;
use lancedb::table::Table as LanceDbTable;
use neon::prelude::*;
use neon::types::buffer::TypedArray;
@@ -29,13 +29,13 @@ use crate::error::ResultExt;
use crate::{convert, get_aws_credential_provider, get_aws_region, runtime, JsDatabase};
pub struct JsTable {
- pub table: TableRef,
pub table: LanceDbTable,
}
impl Finalize for JsTable {}
- impl From<TableRef> for JsTable {
- fn from(table: TableRef) -> Self {
impl From<LanceDbTable> for JsTable {
fn from(table: LanceDbTable) -> Self {
Self { table }
}
}
@@ -125,13 +125,13 @@ impl JsTable {
rt.spawn(async move {
let batch_reader = RecordBatchIterator::new(batches.into_iter().map(Ok), schema);
- let opts = AddDataOptions {
- write_options: WriteOptions {
- lance_write_params: Some(params),
- },
- ..Default::default()
- };
- let add_result = table.add(Box::new(batch_reader), opts).await;
let add_result = table
.add(Box::new(batch_reader))
.write_options(WriteOptions {
lance_write_params: Some(params),
})
.execute()
.await;
deferred.settle_with(&channel, move |mut cx| {
add_result.or_throw(&mut cx)?;
@@ -323,7 +323,7 @@ impl JsTable {
.and_then(|val| val.downcast::<JsNumber, _>(&mut cx).ok())
.map(|val| val.value(&mut cx) as i64)
.unwrap_or_else(|| 2 * 7 * 24 * 60); // 2 weeks
- let older_than = chrono::Duration::minutes(older_than);
let older_than = chrono::Duration::try_minutes(older_than).unwrap();
let delete_unverified: Option<bool> = Some(
cx.argument_opt(1)
.and_then(|val| val.downcast::<JsBoolean, _>(&mut cx).ok())

View File

@@ -1,12 +1,13 @@
[package]
name = "lancedb"
- version = "0.4.11"
version = "0.4.12"
edition.workspace = true
description = "LanceDB: A serverless, low-latency vector database for AI applications"
license.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
rust-version = "1.75"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
@@ -42,6 +43,7 @@ reqwest = { version = "0.11.24", features = ["gzip", "json"], optional = true }
[dev-dependencies]
tempfile = "3.5.0"
rand = { version = "0.8.3", features = ["small_rng"] }
uuid = { version = "1.7.0", features = ["v4"] }
walkdir = "2"
[features]

View File

@@ -20,8 +20,7 @@ use arrow_schema::{DataType, Field, Schema};
use futures::TryStreamExt;
use lancedb::connection::Connection;
- use lancedb::table::AddDataOptions;
- use lancedb::{connect, Result, Table, TableRef};
use lancedb::{connect, Result, Table as LanceDbTable};
#[tokio::main]
async fn main() -> Result<()> {
@@ -34,11 +33,11 @@ async fn main() -> Result<()> {
// --8<-- [end:connect]
// --8<-- [start:list_names]
- println!("{:?}", db.table_names().await?);
println!("{:?}", db.table_names().execute().await?);
// --8<-- [end:list_names]
let tbl = create_table(&db).await?;
- create_index(tbl.as_ref()).await?;
- let batches = search(tbl.as_ref()).await?;
create_index(&tbl).await?;
let batches = search(&tbl).await?;
println!("{:?}", batches);
create_empty_table(&db).await.unwrap();
@@ -63,7 +62,7 @@ async fn open_with_existing_tbl() -> Result<()> {
Ok(())
}
- async fn create_table(db: &Connection) -> Result<TableRef> {
async fn create_table(db: &Connection) -> Result<LanceDbTable> {
// --8<-- [start:create_table]
const TOTAL: usize = 1000;
const DIM: usize = 128;
@@ -125,15 +124,13 @@ async fn create_table(db: &Connection) -> Result<TableRef> {
schema.clone(),
);
// --8<-- [start:add]
- tbl.add(Box::new(new_batches), AddDataOptions::default())
- .await
- .unwrap();
tbl.add(Box::new(new_batches)).execute().await.unwrap();
// --8<-- [end:add]
Ok(tbl)
}
- async fn create_empty_table(db: &Connection) -> Result<TableRef> {
async fn create_empty_table(db: &Connection) -> Result<LanceDbTable> {
// --8<-- [start:create_empty_table]
let schema = Arc::new(Schema::new(vec![
Field::new("id", DataType::Int32, false),
@@ -143,7 +140,7 @@ async fn create_empty_table(db: &Connection) -> Result<TableRef> {
// --8<-- [end:create_empty_table]
}
- async fn create_index(table: &dyn Table) -> Result<()> {
async fn create_index(table: &LanceDbTable) -> Result<()> {
// --8<-- [start:create_index]
table
.create_index(&["vector"])
@@ -154,7 +151,7 @@ async fn create_index(table: &dyn Table) -> Result<()> {
// --8<-- [end:create_index]
}
- async fn search(table: &dyn Table) -> Result<Vec<RecordBatch>> {
async fn search(table: &LanceDbTable) -> Result<Vec<RecordBatch>> {
// --8<-- [start:search]
Ok(table
.search(&[1.0; 128])

View File

@@ -29,7 +29,8 @@ use snafu::prelude::*;
use crate::error::{CreateDirSnafu, Error, InvalidTableNameSnafu, Result};
use crate::io::object_store::MirroringObjectStoreWrapper;
- use crate::table::{NativeTable, TableRef, WriteOptions};
use crate::table::{NativeTable, WriteOptions};
use crate::Table;
pub const LANCE_FILE_EXTENSION: &str = "lance";
@@ -77,14 +78,52 @@ enum BadVectorHandling {
Fill(f32),
}
/// A builder for configuring a [`Connection::table_names`] operation
pub struct TableNamesBuilder {
parent: Arc<dyn ConnectionInternal>,
pub(crate) start_after: Option<String>,
pub(crate) limit: Option<u32>,
}
impl TableNamesBuilder {
fn new(parent: Arc<dyn ConnectionInternal>) -> Self {
Self {
parent,
start_after: None,
limit: None,
}
}
/// If present, only return names that come lexicographically after the supplied
/// value.
///
/// This can be combined with limit to implement pagination by setting this to
/// the last table name from the previous page.
pub fn start_after(mut self, start_after: String) -> Self {
self.start_after = Some(start_after);
self
}
/// The maximum number of table names to return
pub fn limit(mut self, limit: u32) -> Self {
self.limit = Some(limit);
self
}
/// Execute the table names operation
pub async fn execute(self) -> Result<Vec<String>> {
self.parent.clone().table_names(self).await
}
}
/// A builder for configuring a [`Connection::create_table`] operation
pub struct CreateTableBuilder<const HAS_DATA: bool> {
parent: Arc<dyn ConnectionInternal>,
- name: String,
- data: Option<Box<dyn RecordBatchReader + Send>>,
- schema: Option<SchemaRef>,
- mode: CreateTableMode,
- write_options: WriteOptions,
pub(crate) name: String,
pub(crate) data: Option<Box<dyn RecordBatchReader + Send>>,
pub(crate) schema: Option<SchemaRef>,
pub(crate) mode: CreateTableMode,
pub(crate) write_options: WriteOptions,
}
// Builder methods that only apply when we have initial data
@@ -111,7 +150,7 @@ impl CreateTableBuilder<true> {
}
/// Execute the create table operation
- pub async fn execute(self) -> Result<TableRef> {
pub async fn execute(self) -> Result<Table> {
self.parent.clone().do_create_table(self).await
}
}
@@ -130,7 +169,7 @@ impl CreateTableBuilder<false> {
}
/// Execute the create table operation
- pub async fn execute(self) -> Result<TableRef> {
pub async fn execute(self) -> Result<Table> {
self.parent.clone().do_create_empty_table(self).await
}
}
@@ -188,20 +227,22 @@ impl OpenTableBuilder {
}
/// Open the table
- pub async fn execute(self) -> Result<TableRef> {
pub async fn execute(self) -> Result<Table> {
self.parent.clone().do_open_table(self).await
}
}
#[async_trait::async_trait]
- pub(crate) trait ConnectionInternal: Send + Sync + std::fmt::Debug + 'static {
- async fn table_names(&self) -> Result<Vec<String>>;
- async fn do_create_table(&self, options: CreateTableBuilder<true>) -> Result<TableRef>;
- async fn do_open_table(&self, options: OpenTableBuilder) -> Result<TableRef>;
pub(crate) trait ConnectionInternal:
Send + Sync + std::fmt::Debug + std::fmt::Display + 'static
{
async fn table_names(&self, options: TableNamesBuilder) -> Result<Vec<String>>;
async fn do_create_table(&self, options: CreateTableBuilder<true>) -> Result<Table>;
async fn do_open_table(&self, options: OpenTableBuilder) -> Result<Table>;
async fn drop_table(&self, name: &str) -> Result<()>;
async fn drop_db(&self) -> Result<()>;
- async fn do_create_empty_table(&self, options: CreateTableBuilder<false>) -> Result<TableRef> {
async fn do_create_empty_table(&self, options: CreateTableBuilder<false>) -> Result<Table> {
let batches = RecordBatchIterator::new(vec![], options.schema.unwrap());
let opts = CreateTableBuilder::<true>::new(options.parent, options.name, Box::new(batches))
.mode(options.mode)
@@ -217,15 +258,25 @@ pub struct Connection {
internal: Arc<dyn ConnectionInternal>,
}
impl std::fmt::Display for Connection {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.internal)
}
}
impl Connection {
/// Get the URI of the connection
pub fn uri(&self) -> &str {
self.uri.as_str()
}
- /// Get the names of all tables in the database.
- pub async fn table_names(&self) -> Result<Vec<String>> {
- self.internal.table_names().await
/// Get the names of all tables in the database
///
/// The names will be returned in lexicographical order (ascending)
///
/// The parameters `start_after` and `limit` can be used to paginate the results
pub fn table_names(&self) -> TableNamesBuilder {
TableNamesBuilder::new(self.internal.clone())
}
/// Create a new table from data
@@ -431,6 +482,24 @@ struct Database {
read_consistency_interval: Option<std::time::Duration>,
}
impl std::fmt::Display for Database {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"NativeDatabase(uri={}, read_consistency_interval={})",
self.uri,
match self.read_consistency_interval {
None => {
"None".to_string()
}
Some(duration) => {
format!("{}s", duration.as_secs_f64())
}
}
)
}
}
const LANCE_EXTENSION: &str = "lance";
const ENGINE: &str = "engine";
const MIRRORED_STORE: &str = "mirroredStore";
@@ -459,7 +528,7 @@ impl Database {
engine = Some(value.to_string());
} else if key == MIRRORED_STORE {
if cfg!(windows) {
- return Err(Error::Lance {
return Err(Error::NotSupported {
message: "mirrored store is not supported on windows".into(),
});
}
@@ -586,7 +655,7 @@ impl Database {
#[async_trait::async_trait]
impl ConnectionInternal for Database {
- async fn table_names(&self) -> Result<Vec<String>> {
async fn table_names(&self, options: TableNamesBuilder) -> Result<Vec<String>> {
let mut f = self
.object_store
.read_dir(self.base_path.clone())
@@ -603,10 +672,20 @@ impl ConnectionInternal for Database {
.filter_map(|p| p.file_stem().and_then(|s| s.to_str().map(String::from)))
.collect::<Vec<String>>();
f.sort();
if let Some(start_after) = options.start_after {
let index = f
.iter()
.position(|name| name.as_str() > start_after.as_str())
.unwrap_or(f.len());
f.drain(0..index);
}
if let Some(limit) = options.limit {
f.truncate(limit as usize);
}
Ok(f)
}
- async fn do_create_table(&self, options: CreateTableBuilder<true>) -> Result<TableRef> {
async fn do_create_table(&self, options: CreateTableBuilder<true>) -> Result<Table> {
let table_uri = self.table_uri(&options.name)?;
let mut write_params = options.write_options.lance_write_params.unwrap_or_default();
@@ -624,7 +703,7 @@ impl ConnectionInternal for Database {
)
.await
{
- Ok(table) => Ok(Arc::new(table)),
Ok(table) => Ok(Table::new(Arc::new(table))),
Err(Error::TableAlreadyExists { name }) => match options.mode {
CreateTableMode::Create => Err(Error::TableAlreadyExists { name }),
CreateTableMode::ExistOk(callback) => {
@@ -638,9 +717,9 @@ impl ConnectionInternal for Database {
}
}
- async fn do_open_table(&self, options: OpenTableBuilder) -> Result<TableRef> {
async fn do_open_table(&self, options: OpenTableBuilder) -> Result<Table> {
let table_uri = self.table_uri(&options.name)?;
- Ok(Arc::new(
let native_table = Arc::new(
NativeTable::open_with_params(
&table_uri,
&options.name,
@@ -649,7 +728,8 @@ impl ConnectionInternal for Database {
self.read_consistency_interval,
)
.await?,
- ))
);
Ok(Table::new(native_table))
}
async fn drop_table(&self, name: &str) -> Result<()> {
@@ -714,16 +794,43 @@ mod tests {
#[tokio::test]
async fn test_table_names() {
let tmp_dir = tempdir().unwrap();
- create_dir_all(tmp_dir.path().join("table1.lance")).unwrap();
- create_dir_all(tmp_dir.path().join("table2.lance")).unwrap();
- create_dir_all(tmp_dir.path().join("invalidlance")).unwrap();
let mut names = Vec::with_capacity(100);
for _ in 0..100 {
let mut name = uuid::Uuid::new_v4().to_string();
names.push(name.clone());
name.push_str(".lance");
create_dir_all(tmp_dir.path().join(&name)).unwrap();
}
names.sort();
let uri = tmp_dir.path().to_str().unwrap();
let db = connect(uri).execute().await.unwrap();
- let tables = db.table_names().await.unwrap();
- assert_eq!(tables.len(), 2);
- assert!(tables[0].eq(&String::from("table1")));
- assert!(tables[1].eq(&String::from("table2")));
let tables = db.table_names().execute().await.unwrap();
assert_eq!(tables, names);
let tables = db
.table_names()
.start_after(names[30].clone())
.execute()
.await
.unwrap();
assert_eq!(tables, names[31..]);
let tables = db
.table_names()
.start_after(names[30].clone())
.limit(7)
.execute()
.await
.unwrap();
assert_eq!(tables, names[31..38]);
let tables = db.table_names().limit(7).execute().await.unwrap();
assert_eq!(tables, names[..7]);
}
#[tokio::test]
@@ -738,14 +845,14 @@ mod tests {
let uri = tmp_dir.path().to_str().unwrap();
let db = connect(uri).execute().await.unwrap();
- assert_eq!(db.table_names().await.unwrap().len(), 0);
assert_eq!(db.table_names().execute().await.unwrap().len(), 0);
// open non-exist table
assert!(matches!(
db.open_table("invalid_table").execute().await,
Err(crate::Error::TableNotFound { .. })
));
- assert_eq!(db.table_names().await.unwrap().len(), 0);
assert_eq!(db.table_names().execute().await.unwrap().len(), 0);
let schema = Arc::new(Schema::new(vec![Field::new("x", DataType::Int32, false)]));
db.create_empty_table("table1", schema)
@@ -753,7 +860,7 @@ mod tests {
.await
.unwrap();
db.open_table("table1").execute().await.unwrap();
- let tables = db.table_names().await.unwrap();
let tables = db.table_names().execute().await.unwrap();
assert_eq!(tables, vec!["table1".to_owned()]);
}
@@ -773,7 +880,7 @@ mod tests {
create_dir_all(tmp_dir.path().join("table1.lance")).unwrap();
db.drop_table("table1").await.unwrap();
- let tables = db.table_names().await.unwrap();
let tables = db.table_names().execute().await.unwrap();
assert_eq!(tables.len(), 0);
}

View File

@@ -20,61 +20,69 @@ use snafu::Snafu;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub(crate)))]
pub enum Error {
- #[snafu(display("LanceDBError: Invalid table name: {name}"))]
#[snafu(display("Invalid table name: {name}"))]
InvalidTableName { name: String },
- #[snafu(display("LanceDBError: Invalid input, {message}"))]
#[snafu(display("Invalid input, {message}"))]
InvalidInput { message: String },
- #[snafu(display("LanceDBError: Table '{name}' was not found"))]
#[snafu(display("Table '{name}' was not found"))]
TableNotFound { name: String },
- #[snafu(display("LanceDBError: Table '{name}' already exists"))]
#[snafu(display("Table '{name}' already exists"))]
TableAlreadyExists { name: String },
- #[snafu(display("LanceDBError: Unable to created lance dataset at {path}: {source}"))]
#[snafu(display("Unable to created lance dataset at {path}: {source}"))]
CreateDir {
path: String,
source: std::io::Error,
},
- #[snafu(display("LanceDBError: Http error: {message}"))]
- Http { message: String },
- #[snafu(display("LanceDBError: {message}"))]
- Store { message: String },
- #[snafu(display("LanceDBError: {message}"))]
- Lance { message: String },
- #[snafu(display("LanceDB Schema Error: {message}"))]
#[snafu(display("Schema Error: {message}"))]
Schema { message: String },
#[snafu(display("Runtime error: {message}"))]
Runtime { message: String },
// 3rd party / external errors
#[snafu(display("object_store error: {source}"))]
ObjectStore { source: object_store::Error },
#[snafu(display("lance error: {source}"))]
Lance { source: lance::Error },
#[snafu(display("Http error: {message}"))]
Http { message: String },
#[snafu(display("Arrow error: {source}"))]
Arrow { source: ArrowError },
#[snafu(display("LanceDBError: not supported: {message}"))]
NotSupported { message: String },
#[snafu(whatever, display("{message}"))]
Other {
message: String,
#[snafu(source(from(Box<dyn std::error::Error + Send + Sync>, Some)))]
source: Option<Box<dyn std::error::Error + Send + Sync>>,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl From<ArrowError> for Error {
- fn from(e: ArrowError) -> Self {
- Self::Lance {
- message: e.to_string(),
- }
fn from(source: ArrowError) -> Self {
Self::Arrow { source }
}
}
impl From<lance::Error> for Error {
- fn from(e: lance::Error) -> Self {
- Self::Lance {
- message: e.to_string(),
- }
fn from(source: lance::Error) -> Self {
// TODO: Once Lance is changed to preserve ObjectStore, DataFusion, and Arrow errors, we can
// pass those variants through here as well.
Self::Lance { source }
}
}
impl From<object_store::Error> for Error {
- fn from(e: object_store::Error) -> Self {
- Self::Store {
- message: e.to_string(),
- }
fn from(source: object_store::Error) -> Self {
Self::ObjectStore { source }
}
}
impl From<object_store::path::Error> for Error {
- fn from(e: object_store::path::Error) -> Self {
- Self::Store {
- message: e.to_string(),
- }
fn from(source: object_store::path::Error) -> Self {
Self::ObjectStore {
source: object_store::Error::InvalidPath { source },
}
}
}

View File

@@ -14,13 +14,12 @@
use std::{cmp::max, sync::Arc};
- use lance::index::scalar::ScalarIndexParams;
- use lance_index::{DatasetIndexExt, IndexType};
use lance_index::IndexType;
pub use lance_linalg::distance::MetricType;
pub mod vector;
- use crate::{utils::default_vector_column, Error, Result, Table};
use crate::{table::TableInternal, Result};
/// Index Parameters.
pub enum IndexParams {
@@ -41,36 +40,36 @@ pub enum IndexParams {
/// Builder for Index Parameters. /// Builder for Index Parameters.
pub struct IndexBuilder { pub struct IndexBuilder {
table: Arc<dyn Table>, parent: Arc<dyn TableInternal>,
columns: Vec<String>, pub(crate) columns: Vec<String>,
// General parameters // General parameters
/// Index name. /// Index name.
name: Option<String>, pub(crate) name: Option<String>,
/// Replace the existing index. /// Replace the existing index.
replace: bool, pub(crate) replace: bool,
index_type: IndexType, pub(crate) index_type: IndexType,
// Scalar index parameters // Scalar index parameters
// Nothing to set here. // Nothing to set here.
// IVF_PQ parameters // IVF_PQ parameters
metric_type: MetricType, pub(crate) metric_type: MetricType,
num_partitions: Option<u32>, pub(crate) num_partitions: Option<u32>,
// PQ related // PQ related
num_sub_vectors: Option<u32>, pub(crate) num_sub_vectors: Option<u32>,
num_bits: u32, pub(crate) num_bits: u32,
/// The rate to find samples to train kmeans. /// The rate to find samples to train kmeans.
sample_rate: u32, pub(crate) sample_rate: u32,
/// Max iteration to train kmeans. /// Max iteration to train kmeans.
max_iterations: u32, pub(crate) max_iterations: u32,
} }
impl IndexBuilder { impl IndexBuilder {
pub(crate) fn new(table: Arc<dyn Table>, columns: &[&str]) -> Self { pub(crate) fn new(parent: Arc<dyn TableInternal>, columns: &[&str]) -> Self {
Self { Self {
table, parent,
columns: columns.iter().map(|c| c.to_string()).collect(), columns: columns.iter().map(|c| c.to_string()).collect(),
name: None, name: None,
replace: true, replace: true,
@@ -89,7 +88,7 @@ impl IndexBuilder {
/// Accepted parameters: /// Accepted parameters:
/// - `replace`: Replace the existing index. /// - `replace`: Replace the existing index.
/// - `name`: Index name. Default: `None` /// - `name`: Index name. Default: `None`
pub fn scalar(&mut self) -> &mut Self { pub fn scalar(mut self) -> Self {
self.index_type = IndexType::Scalar; self.index_type = IndexType::Scalar;
self self
} }
@@ -105,25 +104,25 @@ impl IndexBuilder {
/// - `num_bits`: Number of bits used for PQ centroids. /// - `num_bits`: Number of bits used for PQ centroids.
/// - `sample_rate`: The rate to find samples to train kmeans. /// - `sample_rate`: The rate to find samples to train kmeans.
/// - `max_iterations`: Max iteration to train kmeans. /// - `max_iterations`: Max iteration to train kmeans.
pub fn ivf_pq(&mut self) -> &mut Self { pub fn ivf_pq(mut self) -> Self {
self.index_type = IndexType::Vector; self.index_type = IndexType::Vector;
self self
} }
/// The columns to build index on. /// The columns to build index on.
pub fn columns(&mut self, cols: &[&str]) -> &mut Self { pub fn columns(mut self, cols: &[&str]) -> Self {
self.columns = cols.iter().map(|s| s.to_string()).collect(); self.columns = cols.iter().map(|s| s.to_string()).collect();
self self
} }
/// Whether to replace the existing index, default is `true`. /// Whether to replace the existing index, default is `true`.
pub fn replace(&mut self, v: bool) -> &mut Self { pub fn replace(mut self, v: bool) -> Self {
self.replace = v; self.replace = v;
self self
} }
/// Set the index name. /// Set the index name.
pub fn name(&mut self, name: &str) -> &mut Self { pub fn name(mut self, name: &str) -> Self {
self.name = Some(name.to_string()); self.name = Some(name.to_string());
self self
} }
@@ -131,156 +130,53 @@ impl IndexBuilder {
/// [MetricType] to use to build Vector Index. /// [MetricType] to use to build Vector Index.
/// ///
/// Default value is [MetricType::L2]. /// Default value is [MetricType::L2].
pub fn metric_type(&mut self, metric_type: MetricType) -> &mut Self { pub fn metric_type(mut self, metric_type: MetricType) -> Self {
self.metric_type = metric_type; self.metric_type = metric_type;
self self
} }
/// Number of IVF partitions. /// Number of IVF partitions.
pub fn num_partitions(&mut self, num_partitions: u32) -> &mut Self { pub fn num_partitions(mut self, num_partitions: u32) -> Self {
self.num_partitions = Some(num_partitions); self.num_partitions = Some(num_partitions);
self self
} }
/// Number of sub-vectors of PQ. /// Number of sub-vectors of PQ.
pub fn num_sub_vectors(&mut self, num_sub_vectors: u32) -> &mut Self { pub fn num_sub_vectors(mut self, num_sub_vectors: u32) -> Self {
self.num_sub_vectors = Some(num_sub_vectors); self.num_sub_vectors = Some(num_sub_vectors);
self self
} }
/// Number of bits used for PQ centroids. /// Number of bits used for PQ centroids.
pub fn num_bits(&mut self, num_bits: u32) -> &mut Self { pub fn num_bits(mut self, num_bits: u32) -> Self {
self.num_bits = num_bits; self.num_bits = num_bits;
self self
} }
/// The rate to find samples to train kmeans. /// The rate to find samples to train kmeans.
pub fn sample_rate(&mut self, sample_rate: u32) -> &mut Self { pub fn sample_rate(mut self, sample_rate: u32) -> Self {
self.sample_rate = sample_rate; self.sample_rate = sample_rate;
self self
} }
/// Max iteration to train kmeans. /// Max iteration to train kmeans.
pub fn max_iterations(&mut self, max_iterations: u32) -> &mut Self { pub fn max_iterations(mut self, max_iterations: u32) -> Self {
self.max_iterations = max_iterations; self.max_iterations = max_iterations;
self self
} }
/// Build the parameters. /// Build the parameters.
pub async fn build(&self) -> Result<()> { pub async fn build(self) -> Result<()> {
let schema = self.table.schema().await?; self.parent.clone().do_create_index(self).await
// TODO: simplify this after GH lance#1864.
let mut index_type = &self.index_type;
let columns = if self.columns.is_empty() {
// By default we create vector index.
index_type = &IndexType::Vector;
vec![default_vector_column(&schema, None)?]
} else {
self.columns.clone()
};
if columns.len() != 1 {
return Err(Error::Schema {
message: "Only one column is supported for index".to_string(),
});
}
let column = &columns[0];
let field = schema.field_with_name(column)?;
let params = match index_type {
IndexType::Scalar => IndexParams::Scalar {
replace: self.replace,
},
IndexType::Vector => {
let num_partitions = if let Some(n) = self.num_partitions {
n
} else {
suggested_num_partitions(self.table.count_rows(None).await?)
};
let num_sub_vectors: u32 = if let Some(n) = self.num_sub_vectors {
n
} else {
match field.data_type() {
arrow_schema::DataType::FixedSizeList(_, n) => {
Ok::<u32, Error>(suggested_num_sub_vectors(*n as u32))
}
_ => Err(Error::Schema {
message: format!(
"Column '{}' is not a FixedSizeList",
&self.columns[0]
),
}),
}?
};
IndexParams::IvfPq {
replace: self.replace,
metric_type: self.metric_type,
num_partitions: num_partitions as u64,
num_sub_vectors,
num_bits: self.num_bits,
sample_rate: self.sample_rate,
max_iterations: self.max_iterations,
}
}
};
let tbl = self
.table
.as_native()
.expect("Only native table is supported here");
let mut dataset = tbl.dataset.get_mut().await?;
match params {
IndexParams::Scalar { replace } => {
dataset
.create_index(
&[&column],
IndexType::Scalar,
None,
&ScalarIndexParams::default(),
replace,
)
.await?
}
IndexParams::IvfPq {
replace,
metric_type,
num_partitions,
num_sub_vectors,
num_bits,
max_iterations,
..
} => {
let lance_idx_params = lance::index::vector::VectorIndexParams::ivf_pq(
num_partitions as usize,
num_bits as u8,
num_sub_vectors as usize,
false,
metric_type,
max_iterations as usize,
);
dataset
.create_index(
&[column],
IndexType::Vector,
None,
&lance_idx_params,
replace,
)
.await?;
}
}
Ok(())
} }
} }
fn suggested_num_partitions(rows: usize) -> u32 { pub(crate) fn suggested_num_partitions(rows: usize) -> u32 {
let num_partitions = (rows as f64).sqrt() as u32; let num_partitions = (rows as f64).sqrt() as u32;
max(1, num_partitions) max(1, num_partitions)
} }
fn suggested_num_sub_vectors(dim: u32) -> u32 { pub(crate) fn suggested_num_sub_vectors(dim: u32) -> u32 {
if dim % 16 == 0 { if dim % 16 == 0 {
// Should be more aggressive than this default. // Should be more aggressive than this default.
dim / 16 dim / 16
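With the setters now taking `mut self` and returning `Self`, an index can be configured in a single owned chain and handed to `build()`, which simply forwards the whole builder to `do_create_index` on the parent table. A sketch of the resulting call shape; the entry point (a hypothetical `table.create_index(&["vector"])`) is assumed rather than shown in this hunk:

    // Method names below are exactly the setters defined above; only the
    // entry point is an assumption.
    table
        .create_index(&["vector"])
        .ivf_pq()
        .metric_type(MetricType::Cosine)
        .num_partitions(256)
        .num_sub_vectors(96)
        .build()
        .await?;

When `num_partitions` or `num_sub_vectors` is omitted, the (now `pub(crate)`) helpers above supply defaults: square-root-of-row-count partitions (1,000,000 rows gives 1,000 partitions) and, for dimensions divisible by 16, `dim / 16` sub-vectors (768 gives 48).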

View File

@@ -14,25 +14,27 @@
 //! IPC support
 
-use std::io::Cursor;
+use std::{io::Cursor, sync::Arc};
 
 use arrow_array::{RecordBatch, RecordBatchReader};
-use arrow_ipc::{reader::StreamReader, writer::FileWriter};
+use arrow_ipc::{reader::FileReader, writer::FileWriter};
+use arrow_schema::Schema;
 
 use crate::{Error, Result};
 
 /// Convert a Arrow IPC file to a batch reader
 pub fn ipc_file_to_batches(buf: Vec<u8>) -> Result<impl RecordBatchReader> {
     let buf_reader = Cursor::new(buf);
-    let reader = StreamReader::try_new(buf_reader, None)?;
+    let reader = FileReader::try_new(buf_reader, None)?;
     Ok(reader)
 }
 
 /// Convert record batches to Arrow IPC file
 pub fn batches_to_ipc_file(batches: &[RecordBatch]) -> Result<Vec<u8>> {
     if batches.is_empty() {
-        return Err(Error::Store {
+        return Err(Error::Other {
             message: "No batches to write".to_string(),
+            source: None,
         });
     }
     let schema = batches[0].schema();
@@ -44,12 +46,25 @@ pub fn batches_to_ipc_file(batches: &[RecordBatch]) -> Result<Vec<u8>> {
     Ok(writer.into_inner()?)
 }
 
+/// Convert a schema to an Arrow IPC file with 0 batches
+pub fn schema_to_ipc_file(schema: &Schema) -> Result<Vec<u8>> {
+    let mut writer = FileWriter::try_new(vec![], schema)?;
+    writer.finish()?;
+    Ok(writer.into_inner()?)
+}
+
+/// Retrieve the schema from an Arrow IPC file
+pub fn ipc_file_to_schema(buf: Vec<u8>) -> Result<Arc<Schema>> {
+    let buf_reader = Cursor::new(buf);
+    let reader = FileReader::try_new(buf_reader, None)?;
+    Ok(reader.schema())
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
     use arrow_array::{Float32Array, Int64Array, RecordBatch};
-    use arrow_ipc::writer::StreamWriter;
     use arrow_schema::{DataType, Field, Schema};
     use std::sync::Arc;
@@ -71,7 +86,7 @@ mod tests {
     fn test_ipc_file_to_batches() -> Result<()> {
         let batch = create_record_batch()?;
-        let mut writer = StreamWriter::try_new(vec![], &batch.schema())?;
+        let mut writer = FileWriter::try_new(vec![], &batch.schema())?;
         writer.write(&batch)?;
         writer.finish()?;
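Note the reader side moved from `StreamReader` to `FileReader`: these helpers now speak the Arrow IPC file format rather than the stream format, which is also why the test writer switched to `FileWriter`. The two new helpers let a schema travel over the same wire with zero batches. A round-trip sketch using only the functions defined above (inside this crate):

    use arrow_schema::{DataType, Field, Schema};

    // A schema survives the zero-batch round trip.
    fn schema_round_trip() -> Result<()> {
        let schema = Schema::new(vec![Field::new("id", DataType::Int64, false)]);
        let bytes = schema_to_ipc_file(&schema)?;
        let recovered = ipc_file_to_schema(bytes)?;
        assert_eq!(recovered.as_ref(), &schema);
        Ok(())
    }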

View File

@@ -194,7 +194,7 @@ pub mod table;
 pub mod utils;
 
 pub use error::{Error, Result};
-pub use table::{Table, TableRef};
+pub use table::Table;
 
 /// Connect to a database
 pub use connection::connect;
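With `TableRef` gone, `Table` is the single public handle. A minimal sketch of the resulting surface, assuming the crate is consumed as `lancedb` and that the connection exposes a builder-style `open_table` (suggested by the `OpenTableBuilder` used elsewhere in this diff, but not itself shown here):

    use lancedb::{connect, Result, Table};

    // Hypothetical helper; `open_table(...).execute()` is an assumption.
    async fn open(uri: &str, name: &str) -> Result<Table> {
        let conn = connect(uri).execute().await?;
        conn.open_table(name).execute().await
    }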

View File

@@ -12,17 +12,16 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
+use std::sync::Arc;
+
 use arrow_array::Float32Array;
-use arrow_schema::Schema;
-use lance::dataset::scanner::{DatasetRecordBatchStream, Scanner};
+use lance::dataset::scanner::DatasetRecordBatchStream;
 use lance_linalg::distance::MetricType;
 
 use crate::error::Result;
-use crate::table::dataset::DatasetConsistencyWrapper;
-use crate::utils::default_vector_column;
-use crate::Error;
+use crate::table::TableInternal;
 
-const DEFAULT_TOP_K: usize = 10;
+pub(crate) const DEFAULT_TOP_K: usize = 10;
 
 #[derive(Debug, Clone)]
 pub enum Select {
@@ -34,29 +33,29 @@ pub enum Select {
 /// A builder for nearest neighbor queries for LanceDB.
 #[derive(Clone)]
 pub struct Query {
-    dataset: DatasetConsistencyWrapper,
+    parent: Arc<dyn TableInternal>,
 
     // The column to run the query on. If not specified, we will attempt to guess
     // the column based on the dataset's schema.
-    column: Option<String>,
+    pub(crate) column: Option<String>,
 
     // IVF PQ - ANN search.
-    query_vector: Option<Float32Array>,
-    nprobes: usize,
-    refine_factor: Option<u32>,
-    metric_type: Option<MetricType>,
+    pub(crate) query_vector: Option<Float32Array>,
+    pub(crate) nprobes: usize,
+    pub(crate) refine_factor: Option<u32>,
+    pub(crate) metric_type: Option<MetricType>,
 
     /// limit the number of rows to return.
-    limit: Option<usize>,
+    pub(crate) limit: Option<usize>,
     /// Apply filter to the returned rows.
-    filter: Option<String>,
+    pub(crate) filter: Option<String>,
     /// Select column projection.
-    select: Select,
+    pub(crate) select: Select,
     /// Default is true. Set to false to enforce a brute force search.
-    use_index: bool,
+    pub(crate) use_index: bool,
     /// Apply filter before ANN search/
-    prefilter: bool,
+    pub(crate) prefilter: bool,
 }
 
 impl Query {
@@ -64,11 +63,11 @@ impl Query {
     ///
     /// # Arguments
     ///
-    /// * `dataset` - Lance dataset.
+    /// * `parent` - the table to run the query on.
     ///
-    pub(crate) fn new(dataset: DatasetConsistencyWrapper) -> Self {
+    pub(crate) fn new(parent: Arc<dyn TableInternal>) -> Self {
         Self {
-            dataset,
+            parent,
             query_vector: None,
             column: None,
             limit: None,
@@ -88,54 +87,7 @@ impl Query {
     ///
     /// * A [DatasetRecordBatchStream] with the query's results.
     pub async fn execute_stream(&self) -> Result<DatasetRecordBatchStream> {
-        let ds_ref = self.dataset.get().await?;
-        let mut scanner: Scanner = ds_ref.scan();
-
-        if let Some(query) = self.query_vector.as_ref() {
-            // If there is a vector query, default to limit=10 if unspecified
-            let column = if let Some(col) = self.column.as_ref() {
-                col.clone()
-            } else {
-                // Infer a vector column with the same dimension of the query vector.
-                let arrow_schema = Schema::from(ds_ref.schema());
-                default_vector_column(&arrow_schema, Some(query.len() as i32))?
-            };
-            let field = ds_ref.schema().field(&column).ok_or(Error::Store {
-                message: format!("Column {} not found in dataset schema", column),
-            })?;
-            if !matches!(field.data_type(), arrow_schema::DataType::FixedSizeList(f, dim) if f.data_type().is_floating() && dim == query.len() as i32)
-            {
-                return Err(Error::Store {
-                    message: format!(
-                        "Vector column '{}' does not match the dimension of the query vector: dim={}",
-                        column,
-                        query.len(),
-                    ),
-                });
-            }
-            scanner.nearest(&column, query, self.limit.unwrap_or(DEFAULT_TOP_K))?;
-        } else {
-            // If there is no vector query, it's ok to not have a limit
-            scanner.limit(self.limit.map(|limit| limit as i64), None)?;
-        }
-        scanner.nprobs(self.nprobes);
-        scanner.use_index(self.use_index);
-        scanner.prefilter(self.prefilter);
-
-        match &self.select {
-            Select::Simple(select) => {
-                scanner.project(select.as_slice())?;
-            }
-            Select::Projection(select_with_transform) => {
-                scanner.project_with_transform(select_with_transform.as_slice())?;
-            }
-            Select::All => { /* Do nothing */ }
-        }
-
-        self.filter.as_ref().map(|f| scanner.filter(f));
-        self.refine_factor.map(|rf| scanner.refine(rf));
-        self.metric_type.map(|mt| scanner.distance_metric(mt));
-        Ok(scanner.try_into_stream().await?)
+        self.parent.clone().do_query(self).await
     }
 
     /// Set the column to query
@@ -259,22 +211,29 @@
     };
     use arrow_schema::{DataType, Field as ArrowField, Schema as ArrowSchema};
     use futures::{StreamExt, TryStreamExt};
-    use lance::dataset::Dataset;
     use lance_testing::datagen::{BatchGenerator, IncrementingInt32, RandomVector};
     use tempfile::tempdir;
 
-    use crate::query::Query;
-    use crate::table::{NativeTable, Table};
+    use crate::connect;
 
     #[tokio::test]
     async fn test_setters_getters() {
-        let batches = make_test_batches();
-        let ds = Dataset::write(batches, "memory://foo", None).await.unwrap();
+        // TODO: Switch back to memory://foo after https://github.com/lancedb/lancedb/issues/1051
+        // is fixed
+        let tmp_dir = tempdir().unwrap();
+        let dataset_path = tmp_dir.path().join("test.lance");
+        let uri = dataset_path.to_str().unwrap();
 
-        let ds = DatasetConsistencyWrapper::new_latest(ds, None);
+        let batches = make_test_batches();
+        let conn = connect(uri).execute().await.unwrap();
+        let table = conn
+            .create_table("my_table", Box::new(batches))
+            .execute()
+            .await
+            .unwrap();
 
         let vector = Some(Float32Array::from_iter_values([0.1, 0.2]));
-        let query = Query::new(ds).nearest_to(&[0.1, 0.2]);
+        let query = table.query().nearest_to(&[0.1, 0.2]);
         assert_eq!(query.query_vector, vector);
 
         let new_vector = Float32Array::from_iter_values([9.8, 8.7]);
@@ -297,12 +256,21 @@
     #[tokio::test]
     async fn test_execute() {
+        // TODO: Switch back to memory://foo after https://github.com/lancedb/lancedb/issues/1051
+        // is fixed
+        let tmp_dir = tempdir().unwrap();
+        let dataset_path = tmp_dir.path().join("test.lance");
+        let uri = dataset_path.to_str().unwrap();
+
         let batches = make_non_empty_batches();
-        let ds = Dataset::write(batches, "memory://foo", None).await.unwrap();
+        let conn = connect(uri).execute().await.unwrap();
+        let table = conn
+            .create_table("my_table", Box::new(batches))
+            .execute()
+            .await
+            .unwrap();
 
-        let ds = DatasetConsistencyWrapper::new_latest(ds, None);
-        let query = Query::new(ds.clone()).nearest_to(&[0.1; 4]);
+        let query = table.query().nearest_to(&[0.1; 4]);
         let result = query.limit(10).filter("id % 2 == 0").execute_stream().await;
         let mut stream = result.expect("should have result");
         // should only have one batch
@@ -311,7 +279,7 @@
             assert!(batch.expect("should be Ok").num_rows() < 10);
         }
 
-        let query = Query::new(ds).nearest_to(&[0.1; 4]);
+        let query = table.query().nearest_to(&[0.1; 4]);
         let result = query
             .limit(10)
             .filter(String::from("id % 2 == 0")) // Work with String too
@@ -328,12 +296,22 @@
     #[tokio::test]
     async fn test_select_with_transform() {
+        // TODO: Switch back to memory://foo after https://github.com/lancedb/lancedb/issues/1051
+        // is fixed
+        let tmp_dir = tempdir().unwrap();
+        let dataset_path = tmp_dir.path().join("test.lance");
+        let uri = dataset_path.to_str().unwrap();
+
         let batches = make_non_empty_batches();
-        let ds = Dataset::write(batches, "memory://foo", None).await.unwrap();
+        let conn = connect(uri).execute().await.unwrap();
+        let table = conn
+            .create_table("my_table", Box::new(batches))
+            .execute()
+            .await
+            .unwrap();
 
-        let ds = DatasetConsistencyWrapper::new_latest(ds, None);
-
-        let query = Query::new(ds)
+        let query = table
+            .query()
             .limit(10)
             .select_with_projection(&[("id2", "id * 2"), ("id", "id")]);
         let result = query.execute_stream().await;
@@ -360,13 +338,22 @@
     #[tokio::test]
     async fn test_execute_no_vector() {
+        // TODO: Switch back to memory://foo after https://github.com/lancedb/lancedb/issues/1051
+        // is fixed
+        let tmp_dir = tempdir().unwrap();
+        let dataset_path = tmp_dir.path().join("test.lance");
+        let uri = dataset_path.to_str().unwrap();
+
         // test that it's ok to not specify a query vector (just filter / limit)
         let batches = make_non_empty_batches();
-        let ds = Dataset::write(batches, "memory://foo", None).await.unwrap();
+        let conn = connect(uri).execute().await.unwrap();
+        let table = conn
+            .create_table("my_table", Box::new(batches))
+            .execute()
+            .await
+            .unwrap();
 
-        let ds = DatasetConsistencyWrapper::new_latest(ds, None);
-        let query = Query::new(ds);
+        let query = table.query();
         let result = query.filter("id % 2 == 0").execute_stream().await;
         let mut stream = result.expect("should have result");
         // should only have one batch
@@ -413,12 +400,13 @@
         let uri = dataset_path.to_str().unwrap();
 
         let batches = make_test_batches();
-        Dataset::write(batches, dataset_path.to_str().unwrap(), None)
+        let conn = connect(uri).execute().await.unwrap();
+        let table = conn
+            .create_table("my_table", Box::new(batches))
+            .execute()
             .await
             .unwrap();
-        let table = NativeTable::open(uri).await.unwrap();
 
         let query = table.search(&[0.1, 0.2]);
         assert_eq!(&[0.1, 0.2], query.query_vector.unwrap().values());
     }
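Condensed from the updated tests above, the pattern after this change: a `Table` comes from a connection, and queries come from the table rather than from a raw dataset wrapper (`uri` and `batches` as in those tests):

    let conn = connect(uri).execute().await.unwrap();
    let table = conn
        .create_table("my_table", Box::new(batches))
        .execute()
        .await
        .unwrap();

    let mut stream = table
        .query()
        .nearest_to(&[0.1, 0.2])
        .limit(10)
        .filter("id % 2 == 0")
        .execute_stream()
        .await
        .unwrap();

Because `execute_stream` now just delegates to `TableInternal::do_query`, the same chain works whether the table is native or remote.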

View File

@@ -19,3 +19,5 @@
 pub mod client;
 pub mod db;
+pub mod table;
+pub mod util;

View File

@@ -21,13 +21,17 @@ use reqwest::{
 use crate::error::{Error, Result};
 
-#[derive(Debug)]
+#[derive(Clone, Debug)]
 pub struct RestfulLanceDbClient {
     client: reqwest::Client,
     host: String,
 }
 
 impl RestfulLanceDbClient {
+    pub fn host(&self) -> &str {
+        &self.host
+    }
+
     fn default_headers(
         api_key: &str,
         region: &str,
@@ -97,6 +101,11 @@ impl RestfulLanceDbClient {
         self.client.get(full_uri)
     }
 
+    pub fn post(&self, uri: &str) -> RequestBuilder {
+        let full_uri = format!("{}{}", self.host, uri);
+        self.client.post(full_uri)
+    }
+
     async fn rsp_to_str(response: Response) -> String {
         let status = response.status();
         response.text().await.unwrap_or_else(|_| status.to_string())
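`Clone` and the `host()` accessor exist for the remote table work later in this diff: each `RemoteTable` keeps its own copy of the client (cheap, since `reqwest::Client` is internally reference-counted), and `Display` impls can report the host without touching private fields. A sketch of a call site, using only methods defined above (the path format matches the one used in db.rs below):

    use reqwest::RequestBuilder;

    // post() mirrors get(): both prefix the configured host, so callers
    // pass only the API path.
    fn create_table_request(client: &RestfulLanceDbClient, name: &str) -> RequestBuilder {
        client.post(&format!("/v1/table/{}/create", name))
    }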

View File

@@ -12,14 +12,24 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-use async_trait::async_trait;
-use serde::Deserialize;
+use std::sync::Arc;
 
-use crate::connection::{ConnectionInternal, CreateTableBuilder, OpenTableBuilder};
+use async_trait::async_trait;
+use reqwest::header::CONTENT_TYPE;
+use serde::Deserialize;
+use tokio::task::spawn_blocking;
+
+use crate::connection::{
+    ConnectionInternal, CreateTableBuilder, OpenTableBuilder, TableNamesBuilder,
+};
 use crate::error::Result;
-use crate::TableRef;
+use crate::Table;
 
 use super::client::RestfulLanceDbClient;
+use super::table::RemoteTable;
+use super::util::batches_to_ipc_bytes;
+
+const ARROW_STREAM_CONTENT_TYPE: &str = "application/vnd.apache.arrow.stream";
 
 #[derive(Deserialize)]
 struct ListTablesResponse {
@@ -43,25 +53,52 @@ impl RemoteDatabase {
     }
 }
 
+impl std::fmt::Display for RemoteDatabase {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "RemoteDatabase(host={})", self.client.host())
+    }
+}
+
 #[async_trait]
 impl ConnectionInternal for RemoteDatabase {
-    async fn table_names(&self) -> Result<Vec<String>> {
-        let rsp = self
-            .client
-            .get("/v1/table/")
-            .query(&[("limit", 10)])
-            .query(&[("page_token", "")])
-            .send()
-            .await?;
+    async fn table_names(&self, options: TableNamesBuilder) -> Result<Vec<String>> {
+        let mut req = self.client.get("/v1/table/");
+        if let Some(limit) = options.limit {
+            req = req.query(&[("limit", limit)]);
+        }
+        if let Some(start_after) = options.start_after {
+            req = req.query(&[("page_token", start_after)]);
+        }
+        let rsp = req.send().await?;
         let rsp = self.client.check_response(rsp).await?;
         Ok(rsp.json::<ListTablesResponse>().await?.tables)
     }
 
-    async fn do_create_table(&self, _options: CreateTableBuilder<true>) -> Result<TableRef> {
-        todo!()
+    async fn do_create_table(&self, options: CreateTableBuilder<true>) -> Result<Table> {
+        let data = options.data.unwrap();
+        // TODO: https://github.com/lancedb/lancedb/issues/1026
+        // We should accept data from an async source. In the meantime, spawn this as blocking
+        // to make sure we don't block the tokio runtime if the source is slow.
+        let data_buffer = spawn_blocking(move || batches_to_ipc_bytes(data))
+            .await
+            .unwrap()?;
+
+        self.client
+            .post(&format!("/v1/table/{}/create", options.name))
+            .body(data_buffer)
+            .header(CONTENT_TYPE, ARROW_STREAM_CONTENT_TYPE)
+            // This is currently expected by LanceDb cloud but will be removed soon.
+            .header("x-request-id", "na")
+            .send()
+            .await?;
+
+        Ok(Table::new(Arc::new(RemoteTable::new(
+            self.client.clone(),
+            options.name,
+        ))))
    }
 
-    async fn do_open_table(&self, _options: OpenTableBuilder) -> Result<TableRef> {
+    async fn do_open_table(&self, _options: OpenTableBuilder) -> Result<Table> {
         todo!()
     }
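`table_names` now forwards caller-supplied paging options instead of the old hard-coded `limit=10` and empty `page_token`. A hypothetical paging loop, assuming the public `TableNamesBuilder` exposes `limit` and `start_after` setters matching the fields read above, plus an `execute()` that reaches this method (none of those public signatures appear in this diff):

    let mut all_names: Vec<String> = Vec::new();
    let mut start_after: Option<String> = None;
    loop {
        let mut builder = conn.table_names().limit(100);
        if let Some(token) = &start_after {
            builder = builder.start_after(token);
        }
        let page = builder.execute().await?;
        if page.is_empty() {
            break;
        }
        // The last name of one page becomes the page_token of the next.
        start_after = page.last().cloned();
        all_names.extend(page);
    }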

View File

@@ -0,0 +1,89 @@
+use arrow_array::RecordBatchReader;
+use arrow_schema::SchemaRef;
+use async_trait::async_trait;
+use lance::dataset::{scanner::DatasetRecordBatchStream, ColumnAlteration, NewColumnTransform};
+
+use crate::{
+    error::Result,
+    index::IndexBuilder,
+    query::Query,
+    table::{
+        merge::MergeInsertBuilder, AddDataBuilder, NativeTable, OptimizeAction, OptimizeStats,
+        TableInternal,
+    },
+};
+
+use super::client::RestfulLanceDbClient;
+
+#[derive(Debug)]
+pub struct RemoteTable {
+    #[allow(dead_code)]
+    client: RestfulLanceDbClient,
+    name: String,
+}
+
+impl RemoteTable {
+    pub fn new(client: RestfulLanceDbClient, name: String) -> Self {
+        Self { client, name }
+    }
+}
+
+impl std::fmt::Display for RemoteTable {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "RemoteTable({})", self.name)
+    }
+}
+
+#[async_trait]
+impl TableInternal for RemoteTable {
+    fn as_any(&self) -> &dyn std::any::Any {
+        self
+    }
+    fn as_native(&self) -> Option<&NativeTable> {
+        None
+    }
+    fn name(&self) -> &str {
+        &self.name
+    }
+    async fn schema(&self) -> Result<SchemaRef> {
+        todo!()
+    }
+    async fn count_rows(&self, _filter: Option<String>) -> Result<usize> {
+        todo!()
+    }
+    async fn do_add(&self, _add: AddDataBuilder) -> Result<()> {
+        todo!()
+    }
+    async fn do_query(&self, _query: &Query) -> Result<DatasetRecordBatchStream> {
+        todo!()
+    }
+    async fn delete(&self, _predicate: &str) -> Result<()> {
+        todo!()
+    }
+    async fn do_create_index(&self, _index: IndexBuilder) -> Result<()> {
+        todo!()
+    }
+    async fn do_merge_insert(
+        &self,
+        _params: MergeInsertBuilder,
+        _new_data: Box<dyn RecordBatchReader + Send>,
+    ) -> Result<()> {
+        todo!()
+    }
+    async fn optimize(&self, _action: OptimizeAction) -> Result<OptimizeStats> {
+        todo!()
+    }
+    async fn add_columns(
+        &self,
+        _transforms: NewColumnTransform,
+        _read_columns: Option<Vec<String>>,
+    ) -> Result<()> {
+        todo!()
+    }
+    async fn alter_columns(&self, _alterations: &[ColumnAlteration]) -> Result<()> {
+        todo!()
+    }
+    async fn drop_columns(&self, _columns: &[&str]) -> Result<()> {
+        todo!()
+    }
+}
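Every method body is still `todo!()`, but this new file establishes the shape: remote tables satisfy the same `TableInternal` trait as native ones, and `as_native()` returning `None` is how callers discover they cannot take a local-only code path. A small sketch against only the implemented accessors above:

    // Works for both backends; uses just the name and the native/remote
    // probe, both implemented above.
    fn describe(table: &dyn TableInternal) -> String {
        match table.as_native() {
            Some(_) => format!("'{}' is a local NativeTable", table.name()),
            None => format!("'{}' is served remotely", table.name()),
        }
    }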

View File

@@ -0,0 +1,21 @@
+use std::io::Cursor;
+
+use arrow_array::RecordBatchReader;
+
+use crate::Result;
+
+pub fn batches_to_ipc_bytes(batches: impl RecordBatchReader) -> Result<Vec<u8>> {
+    const WRITE_BUF_SIZE: usize = 4096;
+    let buf = Vec::with_capacity(WRITE_BUF_SIZE);
+    let mut buf = Cursor::new(buf);
+    {
+        let mut writer = arrow_ipc::writer::FileWriter::try_new(&mut buf, &batches.schema())?;
+        for batch in batches {
+            let batch = batch?;
+            writer.write(&batch)?;
+        }
+        writer.finish()?;
+    }
+    Ok(buf.into_inner())
+}
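A usage sketch for the helper above, feeding it arrow-rs's stock `RecordBatchIterator` (any `RecordBatchReader` works; this one is convenient for in-memory batches). The `?` on `try_new` relies on the `From<ArrowError>` conversion added in error.rs earlier in this diff:

    use std::sync::Arc;
    use arrow_array::{ArrayRef, Int64Array, RecordBatch, RecordBatchIterator};
    use arrow_schema::{DataType, Field, Schema};

    // Build one in-memory batch and encode it; the resulting bytes are
    // what do_create_table posts as the request body.
    fn encode_example() -> crate::Result<Vec<u8>> {
        let schema = Arc::new(Schema::new(vec![Field::new("id", DataType::Int64, false)]));
        let ids: ArrayRef = Arc::new(Int64Array::from(vec![1, 2, 3]));
        let batch = RecordBatch::try_new(schema.clone(), vec![ids])?;
        let reader = RecordBatchIterator::new(vec![Ok(batch)], schema);
        batches_to_ipc_bytes(reader)
    }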

File diff suppressed because it is too large.

View File

@@ -156,7 +156,7 @@ impl DatasetConsistencyWrapper {
         self.0.write().await.set_latest(dataset);
     }
 
-    async fn reload(&self) -> Result<()> {
+    pub async fn reload(&self) -> Result<()> {
         self.0.write().await.reload().await
     }

View File

@@ -15,24 +15,16 @@
 use std::sync::Arc;
 
 use arrow_array::RecordBatchReader;
-use async_trait::async_trait;
 
 use crate::Result;
 
-#[async_trait]
-pub(super) trait MergeInsert: Send + Sync {
-    async fn do_merge_insert(
-        &self,
-        params: MergeInsertBuilder,
-        new_data: Box<dyn RecordBatchReader + Send>,
-    ) -> Result<()>;
-}
+use super::TableInternal;
 
 /// A builder used to create and run a merge insert operation
 ///
 /// See [`super::Table::merge_insert`] for more context
 pub struct MergeInsertBuilder {
-    table: Arc<dyn MergeInsert>,
+    table: Arc<dyn TableInternal>,
     pub(super) on: Vec<String>,
     pub(super) when_matched_update_all: bool,
     pub(super) when_matched_update_all_filt: Option<String>,
@@ -42,7 +34,7 @@ pub struct MergeInsertBuilder {
 }
 
 impl MergeInsertBuilder {
-    pub(super) fn new(table: Arc<dyn MergeInsert>, on: Vec<String>) -> Self {
+    pub(super) fn new(table: Arc<dyn TableInternal>, on: Vec<String>) -> Self {
         Self {
             table,
             on,
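The diff view is cut off here, but the direction matches the other files: the dedicated `MergeInsert` trait is gone, and the builder now targets the shared `TableInternal` trait so merge-insert dispatches to native or remote tables alike. A hypothetical call shape, assuming `Table::merge_insert` (referenced in the doc comment above) returns this builder and that its setters and `execute` mirror the fields above and the trait's `do_merge_insert(params, new_data)`; the method names are assumptions, not shown in this diff:

    // `new_data` is any RecordBatchReader, matching do_merge_insert's
    // `Box<dyn RecordBatchReader + Send>` parameter.
    table
        .merge_insert(&["id"])
        .when_matched_update_all()
        .when_not_matched_insert_all()
        .execute(Box::new(new_data))
        .await?;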

Some files were not shown because too many files have changed in this diff.