Compare commits

...

45 Commits

Author SHA1 Message Date
Lance Release
4231925476 Bump version: 0.17.0-beta.1 → 0.17.0-beta.2 2024-11-29 22:45:55 +00:00
Lance Release
84a6693294 Bump version: 0.17.0-beta.0 → 0.17.0-beta.1 2024-11-29 18:16:02 +00:00
Ryan Green
6c2d4c10a4 feat: support remote options for remote lancedb connection (#1895)
* Support subset of storage options as remote options
* Send Azure storage account name via HTTP header
2024-11-29 14:08:13 -03:30
Ryan Green
d914722f79 Revert "feat: support remote options for remote lancedb connection. Send Azure storage account name via HTTP header."
This reverts commit a6e4034dba.
2024-11-29 11:06:18 -03:30
Ryan Green
a6e4034dba feat: support remote options for remote lancedb connection. Send Azure storage account name via HTTP header. 2024-11-29 11:05:04 -03:30
QianZhu
2616a50502 fix: test errors after setting default limit (#1891) 2024-11-26 16:03:16 -08:00
LuQQiu
7b5e9d824a fix: dynamodb external manifest drop table (#1866)
second pr of https://github.com/lancedb/lancedb/issues/1812
2024-11-26 13:20:48 -08:00
QianZhu
3b173e7cb9 fix: default limit for remote nodejs client (#1886)
https://github.com/lancedb/lancedb/issues/1804
2024-11-26 11:01:25 -08:00
Mr. Doge
d496ab13a0 ci: linux: specify target triple for neon pack-build (vectordb) (#1889)
Fixes the issue where all `neon pack-build` packages are named
`vectordb-linux-x64-musl-*.tgz` even when cross-compiling.

Adds a 2nd parameter:
`TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}`
`npm run pack-build -- -t $TARGET_TRIPLE`
2024-11-26 10:57:17 -08:00
Will Jones
69d9beebc7 docs: improve style and introduction to Python API docs (#1885)
I found the signatures difficult to read and the parameter section not
very space efficient.
2024-11-26 09:17:35 -08:00
Bert
d32360b99d feat: support overwrite and exist_ok mode for remote create_table (#1883)
Support passing modes "overwrite" and "exist_ok" when creating a remote
table.
2024-11-26 11:38:36 -05:00
Will Jones
9fa08bfa93 ci: use correct runner for vectordb (#1881)
We already do this for `gnu` builds; we should do it for `musl`
builds as well.
2024-11-25 16:17:10 -08:00
LuQQiu
d6d9cb7415 feat: bump lance to 0.20.0b3 (#1882)
Bump lance version.
Upstream change log:
https://github.com/lancedb/lance/releases/tag/v0.20.0-beta.3
2024-11-25 16:15:44 -08:00
Lance Release
990d93f553 Updating package-lock.json 2024-11-25 22:06:39 +00:00
Lance Release
0832cba3c6 Bump version: 0.13.1-beta.0 → 0.14.0-beta.0 2024-11-25 22:06:14 +00:00
Lance Release
38b0d91848 Bump version: 0.16.1-beta.0 → 0.17.0-beta.0 2024-11-25 22:05:49 +00:00
Will Jones
6826039575 fix(python): run remote SDK futures in background thread (#1856)
Users who call the remote SDK from code that uses futures (either
`ThreadPoolExecutor` or `asyncio`) can get odd errors like:

```
Traceback (most recent call last):
  File "/usr/lib/python3.12/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
RuntimeError: cannot enter context: <_contextvars.Context object at 0x7cfe94cdc900> is already entered
```

This PR fixes that by executing all LanceDB futures in a dedicated
thread pool running on a background thread. That way, it doesn't
interact with their threadpool.
2024-11-25 13:12:47 -08:00
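For context, a minimal sketch of the usage pattern this commit fixes (the URI and API key below are hypothetical placeholders):

```python
# Sketch: calling the sync remote SDK from asyncio, which previously could
# fail with "cannot enter context: ... is already entered".
import asyncio
import lancedb

def search(table, query):
    return table.search(query).limit(5).to_list()

async def main():
    # "db://my-db" and the key are placeholders, not real credentials.
    db = lancedb.connect("db://my-db", api_key="sk-...", region="us-east-1")
    table = db.open_table("my_table")
    loop = asyncio.get_running_loop()
    # With the fix, LanceDB's futures run on a dedicated background thread
    # pool instead of re-entering the caller's contextvars Context.
    results = await loop.run_in_executor(None, search, table, [0.1, 0.2, 0.3])
    print(results)

asyncio.run(main())
```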
QianZhu
3e9321fc40 docs: improve scalar index and filtering (#1874)
Improved the docs on building a scalar index and pre-/post-filtering.

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-11-25 11:30:57 -08:00
Lei Xu
2ded17452b fix(python)!: handle bad openai embeddings gracefully (#1873)
BREAKING-CHANGE: change Pydantic Vector field to be nullable by default.
Closes #1577
2024-11-23 13:33:52 -08:00
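A minimal sketch of the new default and how to opt out of it (assuming `lancedb.pydantic`):

```python
# Sketch: with this change, Pydantic Vector fields are nullable by default.
import pyarrow as pa
from lancedb.pydantic import LanceModel, Vector

class Document(LanceModel):
    embedding: Vector(768)  # nullable by default after this change
    strict_embedding: Vector(768, value_type=pa.float32(), nullable=False)

schema = Document.to_arrow_schema()
assert schema.field("embedding").nullable
assert not schema.field("strict_embedding").nullable
```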
Mr. Doge
dfd9d2ac99 ci: musl missing node/package.json targets (#1870)
I missed targets when manually merging the draft PR onto the updated main.
I was copying from:
https://github.com/lancedb/lancedb/pull/1816/files#diff-d6e19f28e97cfeda63a9bd9426f10f1d2454eeed375ee1235e8ba842ceeb46a0

fixes:
error: Rust target x86_64-unknown-linux-musl not found in package.json.
2024-11-22 10:40:59 -08:00
Lance Release
162880140e Updating package-lock.json 2024-11-21 21:53:25 +00:00
Lance Release
99d9ced6d5 Bump version: 0.13.0 → 0.13.1-beta.0 2024-11-21 21:53:01 +00:00
Lance Release
96933d7df8 Bump version: 0.16.0 → 0.16.1-beta.0 2024-11-21 21:52:39 +00:00
Lei Xu
d369233b3d feat: bump lance to 0.20.0b2 (#1865)
Bump lance version.
Upstream change log:
https://github.com/lancedb/lance/releases/tag/v0.20.0-beta.2
2024-11-21 13:16:59 -08:00
QianZhu
43a670ed4b fix: limit docstring change (#1860) 2024-11-21 10:50:50 -08:00
Bert
cb9a00a28d feat: add list_versions to typescript, rust and remote python sdks (#1850)
Requires an update to the lance dependency to bring in the change that
makes the version serializable:
https://github.com/lancedb/lance/pull/3143
2024-11-21 13:35:14 -05:00
Max Epstein
72af977a73 fix(CohereReranker): updated default model_name param to newest v3 (#1862) 2024-11-21 09:02:49 -08:00
Bert
7cecb71df0 feat: support for checkout and checkout_latest in remote sdks (#1863) 2024-11-21 11:28:46 -05:00
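For reference, the corresponding calls already in the local Python SDK (a sketch covering this commit together with `list_versions` from #1850 above):

```python
# Sketch: inspecting and moving between table versions.
versions = table.list_versions()        # [{"version": ..., "timestamp": ..., "metadata": ...}, ...]
table.checkout(versions[0]["version"])  # time-travel to an older, read-only view
table.checkout_latest()                 # return to the newest version
```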
QianZhu
285071e5c8 docs: full-text search doc update (#1861)
Co-authored-by: BubbleCal <bubble-cal@outlook.com>
2024-11-20 21:07:30 -08:00
QianZhu
114866fbcf docs: OSS doc improvement (#1859)
OSS doc improvement - HNSW index parameter explanation and others.

---------

Co-authored-by: BubbleCal <bubble-cal@outlook.com>
2024-11-20 17:51:11 -08:00
Frank Liu
5387c0e243 docs: add Voyage models to sidebar (#1858) 2024-11-20 14:20:14 -08:00
Mr. Doge
53d1535de1 ci: musl x64,arm64 (#1853)
4 untested artifacts at:
https://github.com/FuPeiJiang/lancedb/actions/runs/11926579058
node-native-linux-aarch64-musl 22.6 MB
node-native-linux-x86_64-musl 23.6 MB
nodejs-native-linux-aarch64-musl 26.7 MB
nodejs-native-linux-x86_64-musl 27 MB

this follows the same process as:
https://github.com/lancedb/lancedb/pull/1816#issuecomment-2484816669

Closes #1388
Closes #1107

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-11-20 10:53:19 -08:00
BubbleCal
b2f88f0b29 feat: support specifying the ef search param (#1844)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-19 23:12:25 +08:00
fzowl
f2e3989831 docs: voyageai embedding in the index (#1813)
The code to support VoyageAI embedding and rerank models was added in
the https://github.com/lancedb/lancedb/pull/1799 PR.
Some of the documentation changes were also made; this one adds the
VoyageAI embedding doc link to the index page.

These are my first PRs in lancedb, and while I checked the
documentation/code structure, I might have missed something important. Please
let me know if any changes are required!
2024-11-18 14:34:16 -08:00
Emmanuel Ferdman
83ae52938a docs: update migration reference (#1837)
# PR Summary
PR fixes the `migration.md` reference in `docs/src/guides/tables.md`. On
the way, it also fixes some typos found in that document.

Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2024-11-18 14:33:32 -08:00
Lei Xu
267aa83bf8 feat(python): check vector query is not None (#1847)
Fix the type hints of the `nearest_to` method, and raise a `ValueError` when
the input is None.
2024-11-18 14:15:22 -08:00
Will Jones
cc72050206 chore: update package locks (#1845)
Also ran `npm audit`.
2024-11-18 13:44:06 -08:00
Will Jones
72543c8b9d test(python): test with_row_id in sync query (#1835)
Also remove weird `MockTable` fixture.
2024-11-18 11:32:52 -08:00
Will Jones
97d6210c33 ci: remove invalid references (#1834)
Fix release job
2024-11-18 11:32:44 -08:00
Ho Kim
a3d0c27b0a feat: add support for rustls (#1842)
Hello, this is a simple PR that adds support for the `rustls-tls` feature.

`reqwest`'s default TLS backend (`default-tls`) remains enabled by default to
avoid side effects.

The user can use `rustls-tls` like this:

```toml
lancedb = { version = "*", default-features = false, features = ["rustls-tls"] }
```
2024-11-18 10:36:20 -08:00
BubbleCal
b23d8abcdd docs: introduce incremental indexing for FTS (#1789)
Don't merge it before https://github.com/lancedb/lancedb/pull/1769 is merged.

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-18 20:21:28 +08:00
Rob Meng
e3ea5cf9b9 chore: bump lance to 0.19.3 (#1839) 2024-11-16 14:57:52 -05:00
Lance Release
4f8b086175 Updating package-lock.json 2024-11-15 20:18:16 +00:00
Lance Release
72330fb759 Bump version: 0.13.0-beta.3 → 0.13.0 2024-11-15 20:17:59 +00:00
Lance Release
e3b2c5f438 Bump version: 0.13.0-beta.2 → 0.13.0-beta.3 2024-11-15 20:17:55 +00:00
72 changed files with 1614 additions and 308 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.13.0-beta.2"
current_version = "0.14.0-beta.0"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.
@@ -87,6 +87,16 @@ glob = "node/package.json"
replace = "\"@lancedb/vectordb-linux-x64-gnu\": \"{new_version}\""
search = "\"@lancedb/vectordb-linux-x64-gnu\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-linux-arm64-musl\": \"{new_version}\""
search = "\"@lancedb/vectordb-linux-arm64-musl\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-linux-x64-musl\": \"{new_version}\""
search = "\"@lancedb/vectordb-linux-x64-musl\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-win32-x64-msvc\": \"{new_version}\""

View File

@@ -31,6 +31,9 @@ rustflags = [
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "target-cpu=haswell", "-C", "target-feature=+avx2,+fma,+f16c"]
[target.x86_64-unknown-linux-musl]
rustflags = ["-C", "target-cpu=haswell", "-C", "target-feature=-crt-static,+avx2,+fma,+f16c"]
[target.aarch64-apple-darwin]
rustflags = ["-C", "target-cpu=apple-m1", "-C", "target-feature=+neon,+fp16,+fhm,+dotprod"]

View File

@@ -101,7 +101,7 @@ jobs:
path: |
nodejs/dist/*.node
node-linux:
node-linux-gnu:
name: vectordb (${{ matrix.config.arch}}-unknown-linux-gnu)
runs-on: ${{ matrix.config.runner }}
# Only runs on tags that matches the make-release action
@@ -133,15 +133,70 @@ jobs:
free -h
- name: Build Linux Artifacts
run: |
bash ci/build_linux_artifacts.sh ${{ matrix.config.arch }}
bash ci/build_linux_artifacts.sh ${{ matrix.config.arch }} ${{ matrix.config.arch }}-unknown-linux-gnu
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: node-native-linux-${{ matrix.config.arch }}
name: node-native-linux-${{ matrix.config.arch }}-gnu
path: |
node/dist/lancedb-vectordb-linux*.tgz
nodejs-linux:
node-linux-musl:
name: vectordb (${{ matrix.config.arch}}-unknown-linux-musl)
runs-on: ${{ matrix.config.runner }}
container: alpine:edge
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
strategy:
fail-fast: false
matrix:
config:
- arch: x86_64
runner: ubuntu-latest
- arch: aarch64
# For successful fat LTO builds, we need a large runner to avoid OOM errors.
runner: buildjet-16vcpu-ubuntu-2204-arm
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install common dependencies
run: |
apk add protobuf-dev curl clang mold grep npm bash
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=-crt-static,+avx2,+fma,+f16c -Clinker=clang -Clink-arg=-fuse-ld=mold'" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
source "$HOME/.cargo/env"
rustup target add aarch64-unknown-linux-musl --toolchain 1.80.0
crt=$(realpath $(dirname $(rustup which rustc))/../lib/rustlib/aarch64-unknown-linux-musl/lib/self-contained)
sysroot_lib=/usr/aarch64-unknown-linux-musl/usr/lib
apk_url=https://dl-cdn.alpinelinux.org/alpine/latest-stable/main/aarch64/
curl -sSf $apk_url > apk_list
for pkg in gcc libgcc musl; do curl -sSf $apk_url$(cat apk_list | grep -oP '(?<=")'$pkg'-\d.*?(?=")') | tar zxf -; done
mkdir -p $sysroot_lib
echo 'GROUP ( libgcc_s.so.1 -lgcc )' > $sysroot_lib/libgcc_s.so
cp usr/lib/libgcc_s.so.1 $sysroot_lib
cp usr/lib/gcc/aarch64-alpine-linux-musl/*/libgcc.a $sysroot_lib
cp lib/ld-musl-aarch64.so.1 $sysroot_lib/libc.so
echo '!<arch>' > $sysroot_lib/libdl.a
(cd $crt && cp crti.o crtbeginS.o crtendS.o crtn.o -t $sysroot_lib)
echo "export CARGO_BUILD_TARGET=aarch64-unknown-linux-musl" >> saved_env
echo "export RUSTFLAGS='-Ctarget-cpu=apple-m1 -Ctarget-feature=-crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=clang -Clink-arg=-fuse-ld=mold -Clink-arg=--target=aarch64-unknown-linux-musl -Clink-arg=--sysroot=/usr/aarch64-unknown-linux-musl -Clink-arg=-lc'" >> saved_env
- name: Build Linux Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_vectordb.sh ${{ matrix.config.arch }} ${{ matrix.config.arch }}-unknown-linux-musl
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: node-native-linux-${{ matrix.config.arch }}-musl
path: |
node/dist/lancedb-vectordb-linux*.tgz
nodejs-linux-gnu:
name: lancedb (${{ matrix.config.arch}}-unknown-linux-gnu)
runs-on: ${{ matrix.config.runner }}
# Only runs on tags that matches the make-release action
@@ -178,7 +233,7 @@ jobs:
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: nodejs-native-linux-${{ matrix.config.arch }}
name: nodejs-native-linux-${{ matrix.config.arch }}-gnu
path: |
nodejs/dist/*.node
# The generic files are the same in all distros so we just pick
@@ -192,6 +247,65 @@ jobs:
nodejs/dist/*
!nodejs/dist/*.node
nodejs-linux-musl:
name: lancedb (${{ matrix.config.arch}}-unknown-linux-musl)
runs-on: ${{ matrix.config.runner }}
container: alpine:edge
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
strategy:
fail-fast: false
matrix:
config:
- arch: x86_64
runner: ubuntu-latest
- arch: aarch64
# For successful fat LTO builds, we need a large runner to avoid OOM errors.
runner: buildjet-16vcpu-ubuntu-2204-arm
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install common dependencies
run: |
apk add protobuf-dev curl clang mold grep npm bash openssl-dev openssl-libs-static
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=-crt-static,+avx2,+fma,+f16c -Clinker=clang -Clink-arg=-fuse-ld=mold'" >> saved_env
echo "export X86_64_UNKNOWN_LINUX_MUSL_OPENSSL_INCLUDE_DIR=/usr/include" >> saved_env
echo "export X86_64_UNKNOWN_LINUX_MUSL_OPENSSL_LIB_DIR=/usr/lib" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
source "$HOME/.cargo/env"
rustup target add aarch64-unknown-linux-musl --toolchain 1.80.0
crt=$(realpath $(dirname $(rustup which rustc))/../lib/rustlib/aarch64-unknown-linux-musl/lib/self-contained)
sysroot_lib=/usr/aarch64-unknown-linux-musl/usr/lib
apk_url=https://dl-cdn.alpinelinux.org/alpine/latest-stable/main/aarch64/
curl -sSf $apk_url > apk_list
for pkg in gcc libgcc musl openssl-dev openssl-libs-static; do curl -sSf $apk_url$(cat apk_list | grep -oP '(?<=")'$pkg'-\d.*?(?=")') | tar zxf -; done
mkdir -p $sysroot_lib
echo 'GROUP ( libgcc_s.so.1 -lgcc )' > $sysroot_lib/libgcc_s.so
cp usr/lib/libgcc_s.so.1 $sysroot_lib
cp usr/lib/gcc/aarch64-alpine-linux-musl/*/libgcc.a $sysroot_lib
cp lib/ld-musl-aarch64.so.1 $sysroot_lib/libc.so
echo '!<arch>' > $sysroot_lib/libdl.a
(cd $crt && cp crti.o crtbeginS.o crtendS.o crtn.o -t $sysroot_lib)
echo "export CARGO_BUILD_TARGET=aarch64-unknown-linux-musl" >> saved_env
echo "export RUSTFLAGS='-Ctarget-feature=-crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=clang -Clink-arg=-fuse-ld=mold -Clink-arg=--target=aarch64-unknown-linux-musl -Clink-arg=--sysroot=/usr/aarch64-unknown-linux-musl -Clink-arg=-lc'" >> saved_env
echo "export AARCH64_UNKNOWN_LINUX_MUSL_OPENSSL_INCLUDE_DIR=$(realpath usr/include)" >> saved_env
echo "export AARCH64_UNKNOWN_LINUX_MUSL_OPENSSL_LIB_DIR=$(realpath usr/lib)" >> saved_env
- name: Build Linux Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_lancedb.sh ${{ matrix.config.arch }}
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: nodejs-native-linux-${{ matrix.config.arch }}-musl
path: |
nodejs/dist/*.node
node-windows:
name: vectordb ${{ matrix.target }}
runs-on: windows-2022
@@ -460,7 +574,7 @@ jobs:
release:
name: vectordb NPM Publish
needs: [node, node-macos, node-linux, node-windows, node-windows-arm64]
needs: [node, node-macos, node-linux-gnu, node-linux-musl, node-windows]
runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
@@ -500,7 +614,7 @@ jobs:
release-nodejs:
name: lancedb NPM Publish
needs: [nodejs-macos, nodejs-linux, nodejs-windows, nodejs-windows-arm64]
needs: [nodejs-macos, nodejs-linux-gnu, nodejs-linux-musl, nodejs-windows]
runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')

View File

@@ -18,18 +18,19 @@ repository = "https://github.com/lancedb/lancedb"
description = "Serverless, low-latency vector database for AI applications"
keywords = ["lancedb", "lance", "database", "vector", "search"]
categories = ["database-implementations"]
rust-version = "1.80.0" # TODO: lower this once we upgrade Lance again.
rust-version = "1.80.0" # TODO: lower this once we upgrade Lance again.
[workspace.dependencies]
lance = { "version" = "=0.19.2", "features" = [
lance = { "version" = "=0.20.0", "features" = [
"dynamodb",
]}
lance-index = "=0.19.2"
lance-linalg = "=0.19.2"
lance-table = "=0.19.2"
lance-testing = "=0.19.2"
lance-datafusion = "=0.19.2"
lance-encoding = "=0.19.2"
], git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-io = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-index = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-linalg = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-table = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-testing = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-datafusion = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-encoding = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
# Note that this one does not include pyarrow
arrow = { version = "52.2", optional = false }
arrow-array = "52.2"

View File

@@ -1,8 +1,9 @@
#!/bin/bash
set -e
ARCH=${1:-x86_64}
TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}
# We pass down the current user so that when we later mount the local files
# We pass down the current user so that when we later mount the local files
# into the container, the files are accessible by the current user.
pushd ci/manylinux_node
docker build \
@@ -18,4 +19,4 @@ docker run \
-v $(pwd):/io -w /io \
--memory-swap=-1 \
lancedb-node-manylinux \
bash ci/manylinux_node/build_vectordb.sh $ARCH
bash ci/manylinux_node/build_vectordb.sh $ARCH $TARGET_TRIPLE

View File

@@ -11,7 +11,8 @@ fi
export OPENSSL_STATIC=1
export OPENSSL_INCLUDE_DIR=/usr/local/include/openssl
source $HOME/.bashrc
#Alpine doesn't have .bashrc
FILE=$HOME/.bashrc && test -f $FILE && source $FILE
cd nodejs
npm ci

View File

@@ -2,18 +2,20 @@
# Builds the node module for manylinux. Invoked by ci/build_linux_artifacts.sh.
set -e
ARCH=${1:-x86_64}
TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}
if [ "$ARCH" = "x86_64" ]; then
export OPENSSL_LIB_DIR=/usr/local/lib64/
else
else
export OPENSSL_LIB_DIR=/usr/local/lib/
fi
export OPENSSL_STATIC=1
export OPENSSL_INCLUDE_DIR=/usr/local/include/openssl
source $HOME/.bashrc
#Alpine doesn't have .bashrc
FILE=$HOME/.bashrc && test -f $FILE && source $FILE
cd node
npm ci
npm run build-release
npm run pack-build
npm run pack-build -t $TARGET_TRIPLE

View File

@@ -55,6 +55,9 @@ plugins:
show_signature_annotations: true
show_root_heading: true
members_order: source
docstring_section_style: list
signature_crossrefs: true
separate_signature: true
import:
# for cross references
- https://arrow.apache.org/docs/objects.inv
@@ -138,6 +141,7 @@ nav:
- Jina Reranker: reranking/jina.md
- OpenAI Reranker: reranking/openai.md
- AnswerDotAi Rerankers: reranking/answerdotai.md
- Voyage AI Rerankers: reranking/voyageai.md
- Building Custom Rerankers: reranking/custom_reranker.md
- Example: notebooks/lancedb_reranking.ipynb
- Filtering: sql.md
@@ -165,6 +169,7 @@ nav:
- Jina Embeddings: embeddings/available_embedding_models/text_embedding_functions/jina_embedding.md
- AWS Bedrock Text Embedding Functions: embeddings/available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md
- IBM watsonx.ai Embeddings: embeddings/available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md
- Voyage AI Embeddings: embeddings/available_embedding_models/text_embedding_functions/voyageai_embedding.md
- Multimodal Embedding Functions:
- OpenClip embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/openclip_embedding.md
- Imagebind embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md

docs/package-lock.json (generated, 21 changed lines)
View File

@@ -19,7 +19,7 @@
},
"../node": {
"name": "vectordb",
"version": "0.4.6",
"version": "0.12.0",
"cpu": [
"x64",
"arm64"
@@ -31,9 +31,7 @@
"win32"
],
"dependencies": {
"@apache-arrow/ts": "^14.0.2",
"@neon-rs/load": "^0.0.74",
"apache-arrow": "^14.0.2",
"axios": "^1.4.0"
},
"devDependencies": {
@@ -46,6 +44,7 @@
"@types/temp": "^0.9.1",
"@types/uuid": "^9.0.3",
"@typescript-eslint/eslint-plugin": "^5.59.1",
"apache-arrow-old": "npm:apache-arrow@13.0.0",
"cargo-cp-artifact": "^0.1",
"chai": "^4.3.7",
"chai-as-promised": "^7.1.1",
@@ -62,15 +61,19 @@
"ts-node-dev": "^2.0.0",
"typedoc": "^0.24.7",
"typedoc-plugin-markdown": "^3.15.3",
"typescript": "*",
"typescript": "^5.1.0",
"uuid": "^9.0.0"
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.4.6",
"@lancedb/vectordb-darwin-x64": "0.4.6",
"@lancedb/vectordb-linux-arm64-gnu": "0.4.6",
"@lancedb/vectordb-linux-x64-gnu": "0.4.6",
"@lancedb/vectordb-win32-x64-msvc": "0.4.6"
"@lancedb/vectordb-darwin-arm64": "0.12.0",
"@lancedb/vectordb-darwin-x64": "0.12.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.12.0",
"@lancedb/vectordb-linux-x64-gnu": "0.12.0",
"@lancedb/vectordb-win32-x64-msvc": "0.12.0"
},
"peerDependencies": {
"@apache-arrow/ts": "^14.0.2",
"apache-arrow": "^14.0.2"
}
},
"../node/node_modules/apache-arrow": {

View File

@@ -277,7 +277,15 @@ Product quantization can lead to approximately `16 * sizeof(float32) / 1 = 64` t
A higher number of partitions can lead to more efficient I/O during queries and better accuracy, but it takes much more time to train.
On the `SIFT-1M` dataset, our benchmark shows that keeping each partition at 1K-4K rows leads to a good latency / recall trade-off.
`num_sub_vectors` specifies how many Product Quantization (PQ) short codes to generate on each vector. Because
`num_sub_vectors` specifies how many Product Quantization (PQ) short codes to generate on each vector. The number should be a factor of the vector dimension. Because
PQ is a lossy compression of the original vector, a higher `num_sub_vectors` usually results in
less space distortion, and thus yields better accuracy. However, a higher `num_sub_vectors` also causes heavier I/O and
more PQ computation, and thus, higher latency. `dimension / num_sub_vectors` should be a multiple of 8 for optimum SIMD efficiency.
less space distortion, and thus yields better accuracy. However, a higher `num_sub_vectors` also causes heavier I/O and more PQ computation, and thus, higher latency. `dimension / num_sub_vectors` should be a multiple of 8 for optimum SIMD efficiency.
!!! note
If `num_sub_vectors` is set to be greater than the vector dimension, you will see errors like `attempt to divide by zero`.
### How to choose `m` and `ef_construction` for `IVF_HNSW_*` index?
`m` determines the number of connections a new node establishes with its closest neighbors upon entering the graph. Typically, `m` falls within the range of 5 to 48. Lower `m` values are suitable for low-dimensional data or scenarios where recall is less critical. Conversely, higher `m` values are beneficial for high-dimensional data or when high recall is required. In essence, a larger `m` results in a denser graph with increased connectivity, but at the expense of higher memory consumption.
`ef_construction` balances build speed and accuracy. Higher values increase accuracy but slow down the build process. A typical range is 150 to 300. For good search results, a minimum value of 100 is recommended. In most cases, setting this value above 500 offers no additional benefit. Ensure that `ef_construction` is always set to a value equal to or greater than `ef` in the search phase.
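Putting the IVF_PQ guidelines together, a minimal sketch for a table `tbl` of roughly one million 768-dimensional vectors:

```python
# Sketch: 1M rows / 256 partitions ≈ 3.9K rows per partition (within 1K-4K),
# and num_sub_vectors=96 divides the dimension: 768 / 96 = 8, a multiple of 8
# for SIMD efficiency.
tbl.create_index(
    metric="L2",
    num_partitions=256,
    num_sub_vectors=96,
)
```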

View File

@@ -57,6 +57,13 @@ Then the greedy search routine operates as follows:
## Usage
There are three key parameters to set when constructing an HNSW index:
* `metric`: Use an `L2` euclidean distance metric. We also support `dot` and `cosine` distance.
* `m`: The number of neighbors to select for each vector in the HNSW graph.
* `ef_construction`: The number of candidates to evaluate during the construction of the HNSW graph.
We can combine the above concepts to understand how to build and query an HNSW index in LanceDB.
### Construct index
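A minimal sketch of constructing such an index, assuming the sync Python API's `index_type`, `m`, and `ef_construction` parameters:

```python
# Sketch: build an IVF_HNSW_SQ index with explicit HNSW parameters.
tbl.create_index(
    metric="cosine",           # "L2" and "dot" are also supported
    index_type="IVF_HNSW_SQ",  # HNSW graph over IVF partitions, SQ-compressed
    m=20,                      # neighbors per node in the graph
    ef_construction=300,       # candidates evaluated while building the graph
)
```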

View File

@@ -58,8 +58,10 @@ In Python, the index can be created as follows:
# Make sure you have enough data in the table for an effective training step
tbl.create_index(metric="L2", num_partitions=256, num_sub_vectors=96)
```
!!! note
`num_partitions`=256 and `num_sub_vectors`=96 does not work for every dataset. Those values needs to be adjusted for your particular dataset.
The `num_partitions` is usually chosen to target a particular number of vectors per partition. `num_sub_vectors` is typically chosen based on the desired recall and the dimensionality of the vector. See the [FAQs](#faq) below for best practices on choosing these parameters.
The `num_partitions` is usually chosen to target a particular number of vectors per partition. `num_sub_vectors` is typically chosen based on the desired recall and the dimensionality of the vector. See [here](../ann_indexes.md/#how-to-choose-num_partitions-and-num_sub_vectors-for-ivf_pq-index) for best practices on choosing these parameters.
### Query the index

View File

@@ -20,7 +20,7 @@ Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|--------|---------|
| `name` | `str` | `"voyage-3"` | The model ID of the model to use. Supported base models for Text Embeddings: voyage-3, voyage-3-lite, voyage-finance-2, voyage-multilingual-2, voyage-law-2, voyage-code-2 |
| `name` | `str` | `None` | The model ID of the model to use. Supported base models for Text Embeddings: voyage-3, voyage-3-lite, voyage-finance-2, voyage-multilingual-2, voyage-law-2, voyage-code-2 |
| `input_type` | `str` | `None` | Type of the input text. Default to None. Other options: query, document. |
| `truncation` | `bool` | `True` | Whether to truncate the input texts to fit within the context length. |
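A minimal sketch of using this embedding function through the registry (the `"voyageai"` registry key is shown below; the `voyage-3` model name and an installed `voyageai` package are assumptions):

```python
# Sketch: embed text with Voyage AI at ingest time (requires VOYAGE_API_KEY).
import lancedb
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector

func = get_registry().get("voyageai").create(name="voyage-3")

class Doc(LanceModel):
    text: str = func.SourceField()
    vector: Vector(func.ndims()) = func.VectorField()

db = lancedb.connect("/tmp/voyage-demo")
tbl = db.create_table("docs", schema=Doc)
tbl.add([{"text": "hello world"}])  # embedded automatically on add
```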

View File

@@ -53,6 +53,7 @@ These functions are registered by default to handle text embeddings.
| [**Jina Embeddings**](available_embedding_models/text_embedding_functions/jina_embedding.md "jina") | 🔗 World-class embedding models to improve your search and RAG systems. You will need **jina api key**. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/jina.png" alt="Jina Icon" width="90" height="35">](available_embedding_models/text_embedding_functions/jina_embedding.md) |
| [ **AWS Bedrock Functions**](available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md "bedrock-text") | ☁️ AWS Bedrock supports multiple base models for generating text embeddings. You need to setup the AWS credentials to use this embedding function. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/aws_bedrock.png" alt="AWS Bedrock Icon" width="120" height="35">](available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md) |
| [**IBM Watsonx.ai**](available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md "watsonx") | 💡 Generate text embeddings using IBM's watsonx.ai platform. **Note**: watsonx.ai library is an optional dependency. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/watsonx.png" alt="Watsonx Icon" width="140" height="35">](available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md) |
| [**VoyageAI Embeddings**](available_embedding_models/text_embedding_functions/voyageai_embedding.md "voyageai") | 🌕 Voyage AI provides cutting-edge embedding and rerankers. This will help you get started with **VoyageAI** embedding models using LanceDB. Using voyageai API requires voyageai package. Install it via `pip`. | [<img src="https://www.voyageai.com/logo.svg" alt="VoyageAI Icon" width="140" height="35">](available_embedding_models/text_embedding_functions/voyageai_embedding.md) |
@@ -66,6 +67,7 @@ These functions are registered by default to handle text embeddings.
[jina-key]: "jina"
[aws-key]: "bedrock-text"
[watsonx-key]: "watsonx"
[voyageai-key]: "voyageai"
## Multi-modal Embedding Functions🖼

View File

@@ -114,12 +114,45 @@ table.create_fts_index("text",
LanceDB full text search supports to filter the search results by a condition, both pre-filtering and post-filtering are supported.
This can be invoked via the familiar `where` syntax:
This can be invoked via the familiar `where` syntax.
With pre-filtering:
=== "Python"
```python
table.search("puppy").limit(10).where("meta='foo'").to_list()
table.search("puppy").limit(10).where("meta='foo'", prefilte=True).to_list()
```
=== "TypeScript"
```typescript
await tbl
.search("puppy")
.select(["id", "doc"])
.limit(10)
.where("meta='foo'")
.prefilter(true)
.toArray();
```
=== "Rust"
```rust
table
.query()
.full_text_search(FullTextSearchQuery::new("puppy".to_owned()))
.select(lancedb::query::Select::Columns(vec!["doc".to_owned()]))
.limit(10)
.only_if("meta='foo'")
.execute()
.await?;
```
With post-filtering:
=== "Python"
```python
table.search("puppy").limit(10).where("meta='foo'", prefilte=False).to_list()
```
=== "TypeScript"
@@ -130,6 +163,7 @@ This can be invoked via the familiar `where` syntax:
.select(["id", "doc"])
.limit(10)
.where("meta='foo'")
.prefilter(false)
.toArray();
```
@@ -140,6 +174,7 @@ This can be invoked via the familiar `where` syntax:
.query()
.full_text_search(FullTextSearchQuery::new(words[0].to_owned()))
.select(lancedb::query::Select::Columns(vec!["doc".to_owned()]))
.postfilter()
.limit(10)
.only_if("meta='foo'")
.execute()
@@ -160,3 +195,35 @@ To search for a phrase, the index must be created with `with_position=True`:
table.create_fts_index("text", use_tantivy=False, with_position=True)
```
This will allow you to search for phrases, but it will also significantly increase the index size and indexing time.
## Incremental indexing
LanceDB supports incremental indexing, which means you can add new records to the table without reindexing the entire table.
This keeps queries efficient, especially when the table is large and the new records are relatively small.
=== "Python"
```python
table.add([{"vector": [3.1, 4.1], "text": "Frodo was a happy puppy"}])
table.optimize()
```
=== "TypeScript"
```typescript
await tbl.add([{ vector: [3.1, 4.1], text: "Frodo was a happy puppy" }]);
await tbl.optimize();
```
=== "Rust"
```rust
let more_data: Box<dyn RecordBatchReader + Send> = create_some_records()?;
tbl.add(more_data).execute().await?;
tbl.optimize(OptimizeAction::All).execute().await?;
```
!!! note
New data added after creating the FTS index will appear in search results while the incremental indexing is still in progress, but with increased latency due to a flat search on the unindexed portion. LanceDB Cloud automates this merging process, minimizing the impact on search speed.

View File

@@ -153,9 +153,7 @@ table.create_fts_index(["title", "content"], use_tantivy=True, writer_heap_size=
## Current limitations
1. Currently we do not yet support incremental writes.
If you add data after FTS index creation, it won't be reflected
in search results until you do a full reindex.
1. New data added after creating the FTS index will appear in search results, but with increased latency due to a flat search on the unindexed portion. Re-indexing with `create_fts_index` will reduce latency. LanceDB Cloud automates this merging process, minimizing the impact on search speed.
2. We currently only support local filesystem paths for the FTS index.
This is a tantivy limitation. We've implemented an object store plugin

View File

@@ -1,23 +1,35 @@
# Building Scalar Index
# Building a Scalar Index
Similar to many SQL databases, LanceDB supports several types of Scalar indices to accelerate search
Scalar indices organize data by scalar attributes (e.g. numbers, categorical values), enabling fast filtering of vector data. In vector databases, scalar indices accelerate the retrieval of scalar data associated with vectors, thus enhancing the query performance when searching for vectors that meet certain scalar criteria.
Similar to many SQL databases, LanceDB supports several types of scalar indices to accelerate search
over scalar columns.
- `BTREE`: The most common type is BTREE. This index is inspired by the btree data structure
although only the first few layers of the btree are cached in memory.
It will perform well on columns with a large number of unique values and few rows per value.
- `BITMAP`: this index stores a bitmap for each unique value in the column.
This index is useful for columns with a finite number of unique values and many rows per value.
For example, columns that represent "categories", "labels", or "tags"
- `LABEL_LIST`: a special index that is used to index list columns whose values have a finite set of possibilities.
- `BTREE`: The most common type is BTREE. The index stores a copy of the
column in sorted order. This sorted copy allows a binary search to be used to
satisfy queries.
- `BITMAP`: this index stores a bitmap for each unique value in the column. It
uses a series of bits to indicate whether a value is present in a row of a table
- `LABEL_LIST`: a special index that can be used on `List<T>` columns to
support queries with `array_contains_all` and `array_contains_any`
using an underlying bitmap index.
For example, a column that contains lists of tags (e.g. `["tag1", "tag2", "tag3"]`) can be indexed with a `LABEL_LIST` index.
!!! tips "How to choose the right scalar index type"
`BTREE`: This index is good for scalar columns with mostly distinct values and does best when the query is highly selective.
`BITMAP`: This index works best for low-cardinality numeric or string columns, where the number of unique values is small (i.e., less than a few thousands).
`LABEL_LIST`: This index should be used for columns containing list-type data.
| Data Type | Filter | Index Type |
| --------------------------------------------------------------- | ----------------------------------------- | ------------ |
| Numeric, String, Temporal | `<`, `=`, `>`, `in`, `between`, `is null` | `BTREE` |
| Boolean, numbers or strings with fewer than 1,000 unique values | `<`, `=`, `>`, `in`, `between`, `is null` | `BITMAP` |
| List of low cardinality of numbers or strings | `array_has_any`, `array_has_all` | `LABEL_LIST` |
### Create a scalar index
=== "Python"
```python
@@ -46,7 +58,7 @@ over scalar columns.
await tlb.create_index("publisher", { config: lancedb.Index.bitmap() })
```
For example, the following scan will be faster if the column `my_col` has a scalar index:
The following scan will be faster if the column `book_id` has a scalar index:
=== "Python"
@@ -106,3 +118,30 @@ Scalar indices can also speed up scans containing a vector search or full text s
.limit(10)
.toArray();
```
### Update a scalar index
Updating the table data (adding, deleting, or modifying records) requires that you also update the scalar index. This can be done by calling `optimize`, which will trigger an update to the existing scalar index.
=== "Python"
```python
table.add([{"vector": [7, 8], "book_id": 4}])
table.optimize()
```
=== "TypeScript"
```typescript
await tbl.add([{ vector: [7, 8], book_id: 4 }]);
await tbl.optimize();
```
=== "Rust"
```rust
let more_data: Box<dyn RecordBatchReader + Send> = create_some_records()?;
tbl.add(more_data).execute().await?;
tbl.optimize(OptimizeAction::All).execute().await?;
```
!!! note
New data added after creating the scalar index will still appear in search results if optimize is not used, but with increased latency due to a flat search on the unindexed portion. LanceDB Cloud automates the optimize process, minimizing the impact on search speed.

View File

@@ -274,7 +274,7 @@ table = db.create_table(table_name, schema=Content)
Sometimes your data model may contain nested objects.
For example, you may want to store the document string
and the document soure name as a nested Document object:
and the document source name as a nested Document object:
```python
class Document(BaseModel):
@@ -466,7 +466,7 @@ You can create an empty table for scenarios where you want to add data to the ta
## Adding to a table
After a table has been created, you can always add more data to it usind the `add` method
After a table has been created, you can always add more data to it using the `add` method
=== "Python"
You can add any of the valid data structures accepted by LanceDB table, i.e, `dict`, `list[dict]`, `pd.DataFrame`, or `Iterator[pa.RecordBatch]`. Below are some examples.
@@ -535,7 +535,7 @@ After a table has been created, you can always add more data to it usind the `ad
```
??? "Ingesting Pydantic models with LanceDB embedding API"
When using LanceDB's embedding API, you can add Pydantic models directly to the table. LanceDB will automatically convert the `vector` field to a vector before adding it to the table. You need to specify the default value of `vector` feild as None to allow LanceDB to automatically vectorize the data.
When using LanceDB's embedding API, you can add Pydantic models directly to the table. LanceDB will automatically convert the `vector` field to a vector before adding it to the table. You need to specify the default value of `vector` field as None to allow LanceDB to automatically vectorize the data.
```python
import lancedb
@@ -880,4 +880,4 @@ There are three possible settings for `read_consistency_interval`:
Learn the best practices on creating an ANN index and getting the most out of it.
[^1]: The `vectordb` package is a legacy package that is deprecated in favor of `@lancedb/lancedb`. The `vectordb` package will continue to receive bug fixes and security updates until September 2024. We recommend all new projects use `@lancedb/lancedb`. See the [migration guide](migration.md) for more information.
[^1]: The `vectordb` package is a legacy package that is deprecated in favor of `@lancedb/lancedb`. The `vectordb` package will continue to receive bug fixes and security updates until September 2024. We recommend all new projects use `@lancedb/lancedb`. See the [migration guide](../migration.md) for more information.

View File

@@ -1,6 +1,16 @@
# Python API Reference
This section contains the API reference for the OSS Python API.
This section contains the API reference for the Python API. There is a
synchronous and an asynchronous API client.
The general flow of using the API is:
1. Use [lancedb.connect][] or [lancedb.connect_async][] to connect to a database.
2. Use the returned [lancedb.DBConnection][] or [lancedb.AsyncConnection][] to
create or open tables.
3. Use the returned [lancedb.table.Table][] or [lancedb.AsyncTable][] to query
or modify tables.
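
Put together, a minimal sketch of that flow with the synchronous client:

```python
# Sketch: connect -> create/open a table -> query it.
import lancedb

db = lancedb.connect("/tmp/lancedb")                 # step 1: connect
tbl = db.create_table("words", data=[
    {"vector": [3.1, 4.1], "item": "foo"},
    {"vector": [5.9, 26.5], "item": "bar"},
])                                                   # step 2: create a table
results = tbl.search([3.0, 4.0]).limit(1).to_list()  # step 3: query it
print(results)
```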
## Installation

View File

@@ -6,6 +6,9 @@ This re-ranker uses the [Cohere](https://cohere.ai/) API to rerank the search re
!!! note
Supported Query Types: Hybrid, Vector, FTS
```shell
pip install cohere
```
```python
import numpy
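# (Editorial sketch continuing the example: assumes a table `tbl` with an FTS
# index and an embedding function configured for hybrid search, and the
# COHERE_API_KEY environment variable set.)
from lancedb.rerankers import CohereReranker

reranker = CohereReranker()  # default model_name is Cohere's latest v3 reranker
results = (
    tbl.search("vector databases", query_type="hybrid")
    .rerank(reranker=reranker)
    .limit(5)
    .to_list()
)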

View File

@@ -9,6 +9,7 @@ LanceDB comes with some built-in rerankers. Some of the rerankers that are avail
| `CrossEncoderReranker` | Uses a cross-encoder model to rerank search results | Vector, FTS, Hybrid |
| `ColbertReranker` | Uses a colbert model to rerank search results | Vector, FTS, Hybrid |
| `OpenaiReranker`(Experimental) | Uses OpenAI's chat model to rerank search results | Vector, FTS, Hybrid |
| `VoyageAIReranker` | Uses voyageai Reranker API to rerank results | Vector, FTS, Hybrid |
## Using a Reranker
@@ -73,6 +74,7 @@ LanceDB comes with some built-in rerankers. Here are some of the rerankers that
- [Jina Reranker](./jina.md)
- [AnswerDotAI Rerankers](./answerdotai.md)
- [Reciprocal Rank Fusion Reranker](./rrf.md)
- [VoyageAI Reranker](./voyageai.md)
## Creating Custom Rerankers

View File

@@ -7,6 +7,10 @@ performed on the top-k results returned by the vector search. However, pre-filte
option that performs the filter prior to vector search. This can be useful to narrow down on
the search space on a very large dataset to reduce query latency.
Note that both pre-filtering and post-filtering can yield false positives. For pre-filtering, if the filter is too selective, it might eliminate relevant items that the vector search would have otherwise identified as a good match. In this case, increasing the `nprobes` parameter will help reduce such false positives. It is recommended to set `use_index=false` if you know that the filter is highly selective.
Similarly, a highly selective post-filter can lead to false positives. Increasing both `nprobes` and `refine_factor` can mitigate this issue. When deciding between pre-filtering and post-filtering, pre-filtering is generally the safer choice if you're uncertain.
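To make the trade-off concrete, a minimal sketch with the sync Python API (assuming a table `tbl` with a vector index and a `label` column):

```python
# Pre-filter: filter before the ANN search; widen the search with nprobes
# if a selective filter starts dropping good matches.
results = (
    tbl.search([0.1, 0.3])
    .where("label = 'cat'", prefilter=True)
    .nprobes(50)
    .limit(10)
    .to_list()
)

# Post-filter: filter the top-k after the ANN search; refine_factor fetches
# extra candidates so the filter has more rows left to keep.
results = (
    tbl.search([0.1, 0.3])
    .where("label = 'cat'", prefilter=False)
    .refine_factor(10)
    .limit(10)
    .to_list()
)
```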
<!-- Setup Code
```python
import lancedb
@@ -57,6 +61,9 @@ const tbl = await db.createTable('myVectors', data)
```ts
--8<-- "docs/src/sql_legacy.ts:search"
```
!!! note
Creating a [scalar index](guides/scalar_index.md) accelerates filtering
## SQL filters

View File

@@ -8,7 +8,7 @@
<parent>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.13.0-beta.2</version>
<version>0.14.0-beta.0</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@@ -6,7 +6,7 @@
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.13.0-beta.2</version>
<version>0.14.0-beta.0</version>
<packaging>pom</packaging>
<name>LanceDB Parent</name>

node/package-lock.json (generated, 24 changed lines)
View File

@@ -1,12 +1,12 @@
{
"name": "vectordb",
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "vectordb",
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"cpu": [
"x64",
"arm64"
@@ -52,12 +52,14 @@
"uuid": "^9.0.0"
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.13.0-beta.2",
"@lancedb/vectordb-darwin-x64": "0.13.0-beta.2",
"@lancedb/vectordb-linux-arm64-gnu": "0.13.0-beta.2",
"@lancedb/vectordb-linux-x64-gnu": "0.13.0-beta.2",
"@lancedb/vectordb-win32-arm64-msvc": "0.13.0-beta.2",
"@lancedb/vectordb-win32-x64-msvc": "0.13.0-beta.2"
"@lancedb/vectordb-darwin-arm64": "0.14.0-beta.0",
"@lancedb/vectordb-darwin-x64": "0.14.0-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.14.0-beta.0",
"@lancedb/vectordb-linux-arm64-musl": "0.14.0-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.14.0-beta.0",
"@lancedb/vectordb-linux-x64-musl": "0.14.0-beta.0",
"@lancedb/vectordb-win32-arm64-msvc": "0.14.0-beta.0",
"@lancedb/vectordb-win32-x64-msvc": "0.14.0-beta.0"
},
"peerDependencies": {
"@apache-arrow/ts": "^14.0.2",
@@ -1441,9 +1443,9 @@
"dev": true
},
"node_modules/cross-spawn": {
"version": "7.0.3",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz",
"integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==",
"version": "7.0.6",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz",
"integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==",
"dev": true,
"dependencies": {
"path-key": "^3.1.0",

View File

@@ -1,6 +1,6 @@
{
"name": "vectordb",
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"description": " Serverless, low-latency vector database for AI applications",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -84,16 +84,20 @@
"aarch64-apple-darwin": "@lancedb/vectordb-darwin-arm64",
"x86_64-unknown-linux-gnu": "@lancedb/vectordb-linux-x64-gnu",
"aarch64-unknown-linux-gnu": "@lancedb/vectordb-linux-arm64-gnu",
"x86_64-unknown-linux-musl": "@lancedb/vectordb-linux-x64-musl",
"aarch64-unknown-linux-musl": "@lancedb/vectordb-linux-arm64-musl",
"x86_64-pc-windows-msvc": "@lancedb/vectordb-win32-x64-msvc",
"aarch64-pc-windows-msvc": "@lancedb/vectordb-win32-arm64-msvc"
}
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.13.0-beta.2",
"@lancedb/vectordb-darwin-x64": "0.13.0-beta.2",
"@lancedb/vectordb-linux-arm64-gnu": "0.13.0-beta.2",
"@lancedb/vectordb-linux-x64-gnu": "0.13.0-beta.2",
"@lancedb/vectordb-win32-x64-msvc": "0.13.0-beta.2",
"@lancedb/vectordb-win32-arm64-msvc": "0.13.0-beta.2"
"@lancedb/vectordb-darwin-x64": "0.14.0-beta.0",
"@lancedb/vectordb-darwin-arm64": "0.14.0-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.14.0-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.14.0-beta.0",
"@lancedb/vectordb-linux-x64-musl": "0.14.0-beta.0",
"@lancedb/vectordb-linux-arm64-musl": "0.14.0-beta.0",
"@lancedb/vectordb-win32-x64-msvc": "0.14.0-beta.0",
"@lancedb/vectordb-win32-arm64-msvc": "0.14.0-beta.0"
}
}

View File

@@ -1,7 +1,7 @@
[package]
name = "lancedb-nodejs"
edition.workspace = true
version = "0.13.0-beta.2"
version = "0.14.0-beta.0"
license.workspace = true
description.workspace = true
repository.workspace = true

View File

@@ -110,7 +110,10 @@ describe("given a connection", () => {
let table = await db.createTable("test", data, { useLegacyFormat: true });
const isV2 = async (table: Table) => {
const data = await table.query().toArrow({ maxBatchLength: 100000 });
const data = await table
.query()
.limit(10000)
.toArrow({ maxBatchLength: 100000 });
console.log(data.batches.length);
return data.batches.length < 5;
};

View File

@@ -477,6 +477,54 @@ describe("When creating an index", () => {
expect(rst.numRows).toBe(1);
});
it("should create and search IVF_HNSW indices", async () => {
await tbl.createIndex("vec", {
config: Index.hnswSq(),
});
// check index directory
const indexDir = path.join(tmpDir.name, "test.lance", "_indices");
expect(fs.readdirSync(indexDir)).toHaveLength(1);
const indices = await tbl.listIndices();
expect(indices.length).toBe(1);
expect(indices[0]).toEqual({
name: "vec_idx",
indexType: "IvfHnswSq",
columns: ["vec"],
});
// Search without specifying the column
let rst = await tbl
.query()
.limit(2)
.nearestTo(queryVec)
.distanceType("dot")
.toArrow();
expect(rst.numRows).toBe(2);
// Search using `vectorSearch`
rst = await tbl.vectorSearch(queryVec).limit(2).toArrow();
expect(rst.numRows).toBe(2);
// Search with specifying the column
const rst2 = await tbl
.query()
.limit(2)
.nearestTo(queryVec)
.column("vec")
.toArrow();
expect(rst2.numRows).toBe(2);
expect(rst.toString()).toEqual(rst2.toString());
// test offset
rst = await tbl.query().limit(2).offset(1).nearestTo(queryVec).toArrow();
expect(rst.numRows).toBe(1);
// test ef
rst = await tbl.query().limit(2).nearestTo(queryVec).ef(100).toArrow();
expect(rst.numRows).toBe(2);
});
it("should be able to query unindexed data", async () => {
await tbl.createIndex("vec");
await tbl.add([
@@ -537,11 +585,11 @@ describe("When creating an index", () => {
expect(fs.readdirSync(indexDir)).toHaveLength(1);
for await (const r of tbl.query().where("id > 1").select(["id"])) {
expect(r.numRows).toBe(298);
expect(r.numRows).toBe(10);
}
// should also work with 'filter' alias
for await (const r of tbl.query().filter("id > 1").select(["id"])) {
expect(r.numRows).toBe(298);
expect(r.numRows).toBe(10);
}
});

View File

@@ -385,6 +385,20 @@ export class VectorQuery extends QueryBase<NativeVectorQuery> {
return this;
}
/**
* Set the number of candidates to consider during the search
*
* This argument is only used when the vector column has an HNSW index.
* If there is no index then this value is ignored.
*
* Increasing this value will increase the recall of your query but will
* also increase the latency of your query. The default value is 1.5*limit.
*/
ef(ef: number): VectorQuery {
super.doCall((inner) => inner.ef(ef));
return this;
}
/**
* Set the vector column to query
*

View File

@@ -87,6 +87,12 @@ export interface OptimizeOptions {
deleteUnverified: boolean;
}
export interface Version {
version: number;
timestamp: Date;
metadata: Record<string, string>;
}
/**
* A Table is a collection of Records in a LanceDB Database.
*
@@ -360,6 +366,11 @@ export abstract class Table {
*/
abstract checkoutLatest(): Promise<void>;
/**
* List all the versions of the table
*/
abstract listVersions(): Promise<Version[]>;
/**
* Restore the table to the currently checked out version
*
@@ -659,6 +670,14 @@ export class LocalTable extends Table {
await this.inner.checkoutLatest();
}
async listVersions(): Promise<Version[]> {
return (await this.inner.listVersions()).map((version) => ({
version: version.version,
timestamp: new Date(version.timestamp / 1000),
metadata: version.metadata,
}));
}
async restore(): Promise<void> {
await this.inner.restore();
}

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-arm64",
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"os": ["darwin"],
"cpu": ["arm64"],
"main": "lancedb.darwin-arm64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-x64",
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"os": ["darwin"],
"cpu": ["x64"],
"main": "lancedb.darwin-x64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-gnu",
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-gnu.node",

View File

@@ -0,0 +1,3 @@
# `@lancedb/lancedb-linux-arm64-musl`
This is the **aarch64-unknown-linux-musl** binary for `@lancedb/lancedb`

View File

@@ -0,0 +1,13 @@
{
"name": "@lancedb/lancedb-linux-arm64-musl",
"version": "0.14.0-beta.0",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-musl.node",
"files": ["lancedb.linux-arm64-musl.node"],
"license": "Apache 2.0",
"engines": {
"node": ">= 18"
},
"libc": ["musl"]
}

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-gnu",
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-gnu.node",

View File

@@ -0,0 +1,3 @@
# `@lancedb/lancedb-linux-x64-musl`
This is the **x86_64-unknown-linux-musl** binary for `@lancedb/lancedb`

View File

@@ -0,0 +1,13 @@
{
"name": "@lancedb/lancedb-linux-x64-musl",
"version": "0.14.0-beta.0",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-musl.node",
"files": ["lancedb.linux-x64-musl.node"],
"license": "Apache 2.0",
"engines": {
"node": ">= 18"
},
"libc": ["musl"]
}

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-arm64-msvc",
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"os": [
"win32"
],

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-x64-msvc",
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"os": ["win32"],
"cpu": ["x64"],
"main": "lancedb.win32-x64-msvc.node",

View File

@@ -1,12 +1,12 @@
{
"name": "@lancedb/lancedb",
"version": "0.13.0-beta.1",
"version": "0.13.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@lancedb/lancedb",
"version": "0.13.0-beta.1",
"version": "0.13.0",
"cpu": [
"x64",
"arm64"
@@ -6052,9 +6052,9 @@
}
},
"node_modules/cross-spawn": {
"version": "7.0.3",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz",
"integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==",
"version": "7.0.6",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz",
"integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==",
"devOptional": true,
"dependencies": {
"path-key": "^3.1.0",

View File

@@ -10,7 +10,7 @@
"vector database",
"ann"
],
"version": "0.13.0-beta.2",
"version": "0.14.0-beta.0",
"main": "dist/index.js",
"exports": {
".": "./dist/index.js",
@@ -24,10 +24,12 @@
"triples": {
"defaults": false,
"additional": [
"aarch64-apple-darwin",
"aarch64-unknown-linux-gnu",
"x86_64-apple-darwin",
"aarch64-apple-darwin",
"x86_64-unknown-linux-gnu",
"aarch64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
"aarch64-unknown-linux-musl",
"x86_64-pc-windows-msvc"
]
}

View File

@@ -167,6 +167,11 @@ impl VectorQuery {
self.inner = self.inner.clone().nprobes(nprobe as usize);
}
#[napi]
pub fn ef(&mut self, ef: u32) {
self.inner = self.inner.clone().ef(ef as usize);
}
#[napi]
pub fn bypass_vector_index(&mut self) {
self.inner = self.inner.clone().bypass_vector_index()

View File

@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use arrow_ipc::writer::FileWriter;
use lancedb::ipc::ipc_file_to_batches;
use lancedb::table::{
@@ -226,6 +228,28 @@ impl Table {
self.inner_ref()?.checkout_latest().await.default_error()
}
#[napi(catch_unwind)]
pub async fn list_versions(&self) -> napi::Result<Vec<Version>> {
self.inner_ref()?
.list_versions()
.await
.map(|versions| {
versions
.iter()
.map(|version| Version {
version: version.version as i64,
timestamp: version.timestamp.timestamp_micros(),
metadata: version
.metadata
.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect(),
})
.collect()
})
.default_error()
}
#[napi(catch_unwind)]
pub async fn restore(&self) -> napi::Result<()> {
self.inner_ref()?.restore().await.default_error()
@@ -466,3 +490,10 @@ impl From<lancedb::index::IndexStatistics> for IndexStatistics {
}
}
}
#[napi(object)]
pub struct Version {
pub version: i64,
pub timestamp: i64,
pub metadata: HashMap<String, String>,
}

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.16.0"
current_version = "0.17.0-beta.2"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-python"
version = "0.16.0"
version = "0.17.0-beta.2"
edition.workspace = true
description = "Python bindings for LanceDB"
license.workspace = true
@@ -15,13 +15,19 @@ crate-type = ["cdylib"]
[dependencies]
arrow = { version = "52.1", features = ["pyarrow"] }
lancedb = { path = "../rust/lancedb" }
lancedb = { path = "../rust/lancedb", default-features = false }
env_logger.workspace = true
pyo3 = { version = "0.21", features = ["extension-module", "abi3-py38", "gil-refs"] }
pyo3 = { version = "0.21", features = [
"extension-module",
"abi3-py39",
"gil-refs"
] }
# Using this fork for now: https://github.com/awestlake87/pyo3-asyncio/issues/119
# pyo3-asyncio = { version = "0.20", features = ["attributes", "tokio-runtime"] }
pyo3-asyncio-0-21 = { version = "0.21.0", features = ["attributes", "tokio-runtime"] }
pyo3-asyncio-0-21 = { version = "0.21.0", features = [
"attributes",
"tokio-runtime"
] }
pin-project = "1.1.5"
futures.workspace = true
tokio = { version = "1.36.0", features = ["sync"] }
@@ -29,10 +35,14 @@ tokio = { version = "1.36.0", features = ["sync"] }
[build-dependencies]
pyo3-build-config = { version = "0.20.3", features = [
"extension-module",
"abi3-py38",
"abi3-py39",
] }
[features]
default = ["remote"]
default = ["default-tls", "remote"]
fp16kernels = ["lancedb/fp16kernels"]
remote = ["lancedb/remote"]
# TLS
default-tls = ["lancedb/default-tls"]
native-tls = ["lancedb/native-tls"]
rustls-tls = ["lancedb/rustls-tls"]

View File

@@ -3,8 +3,7 @@ name = "lancedb"
# version in Cargo.toml
dependencies = [
"deprecation",
"nest-asyncio~=1.0",
"pylance==0.19.2",
"pylance==0.20.0b3",
"tqdm>=4.27.0",
"pydantic>=1.10",
"packaging",
@@ -31,7 +30,6 @@ classifiers = [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",

View File

@@ -83,25 +83,33 @@ class OpenAIEmbeddings(TextEmbeddingFunction):
"""
openai = attempt_import_or_raise("openai")
valid_texts = []
valid_indices = []
for idx, text in enumerate(texts):
if text:
valid_texts.append(text)
valid_indices.append(idx)
# TODO retry, rate limit, token limit
try:
if self.name == "text-embedding-ada-002":
rs = self._openai_client.embeddings.create(input=texts, model=self.name)
else:
kwargs = {
"input": texts,
"model": self.name,
}
if self.dim:
kwargs["dimensions"] = self.dim
rs = self._openai_client.embeddings.create(**kwargs)
kwargs = {
"input": valid_texts,
"model": self.name,
}
if self.name != "text-embedding-ada-002":
kwargs["dimensions"] = self.dim
rs = self._openai_client.embeddings.create(**kwargs)
valid_embeddings = {
idx: v.embedding for v, idx in zip(rs.data, valid_indices)
}
except openai.BadRequestError:
logging.exception("Bad request: %s", texts)
return [None] * len(texts)
except Exception:
logging.exception("OpenAI embeddings error")
raise
return [v.embedding for v in rs.data]
return [valid_embeddings.get(idx, None) for idx in range(len(texts))]
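A minimal sketch of the new behavior (assumes a valid `OPENAI_API_KEY`): empty strings are filtered out before the API call, and their positions come back as `None` instead of failing the whole batch.

```python
from lancedb.embeddings import get_registry

func = get_registry().get("openai").create(max_retries=0)
embeddings = func.generate_embeddings(["hello world", "", "another doc"])

assert len(embeddings) == 3   # positions are preserved
assert embeddings[1] is None  # empty input -> None, not an exception
```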
@cached_property
def _openai_client(self):

View File

@@ -1,15 +1,5 @@
# Copyright 2023 LanceDB Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
"""Pydantic (v1 / v2) adapter for LanceDB"""
@@ -30,6 +20,7 @@ from typing import (
Type,
Union,
_GenericAlias,
GenericAlias,
)
import numpy as np
@@ -75,7 +66,7 @@ def vector(dim: int, value_type: pa.DataType = pa.float32()):
def Vector(
dim: int, value_type: pa.DataType = pa.float32()
dim: int, value_type: pa.DataType = pa.float32(), nullable: bool = True
) -> Type[FixedSizeListMixin]:
"""Pydantic Vector Type.
@@ -88,6 +79,8 @@ def Vector(
The dimension of the vector.
value_type : pyarrow.DataType, optional
The value type of the vector, by default pa.float32()
nullable : bool, optional
Whether the vector is nullable. Defaults to True.
Examples
--------
@@ -103,7 +96,7 @@ def Vector(
>>> assert schema == pa.schema([
... pa.field("id", pa.int64(), False),
... pa.field("url", pa.utf8(), False),
... pa.field("embeddings", pa.list_(pa.float32(), 768), False)
... pa.field("embeddings", pa.list_(pa.float32(), 768))
... ])
"""
@@ -112,6 +105,10 @@ def Vector(
def __repr__(self):
return f"FixedSizeList(dim={dim})"
@staticmethod
def nullable() -> bool:
return nullable
@staticmethod
def dim() -> int:
return dim
@@ -205,9 +202,7 @@ else:
def _pydantic_to_arrow_type(field: FieldInfo) -> pa.DataType:
"""Convert a Pydantic FieldInfo to Arrow DataType"""
if isinstance(field.annotation, _GenericAlias) or (
sys.version_info > (3, 9) and isinstance(field.annotation, types.GenericAlias)
):
if isinstance(field.annotation, (_GenericAlias, GenericAlias)):
origin = field.annotation.__origin__
args = field.annotation.__args__
if origin is list:
@@ -235,7 +230,7 @@ def _pydantic_to_arrow_type(field: FieldInfo) -> pa.DataType:
def is_nullable(field: FieldInfo) -> bool:
"""Check if a Pydantic FieldInfo is nullable."""
if isinstance(field.annotation, _GenericAlias):
if isinstance(field.annotation, (_GenericAlias, GenericAlias)):
origin = field.annotation.__origin__
args = field.annotation.__args__
if origin == Union:
@@ -246,6 +241,10 @@ def is_nullable(field: FieldInfo) -> bool:
for typ in args:
if typ is type(None):
return True
elif inspect.isclass(field.annotation) and issubclass(
field.annotation, FixedSizeListMixin
):
return field.annotation.nullable()
return False
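A hedged sketch of the new `nullable` flag: vectors are nullable by default now, and `nullable=False` restores the old non-nullable schema.

```python
import pyarrow as pa
import pydantic

from lancedb.pydantic import Vector, pydantic_to_schema

class Doc(pydantic.BaseModel):
    # nullable defaults to True; pass nullable=False for the old behavior
    embedding: Vector(16, nullable=False)

schema = pydantic_to_schema(Doc)
assert schema == pa.schema(
    [pa.field("embedding", pa.list_(pa.float32(), 16), False)]
)
```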

View File

@@ -131,6 +131,8 @@ class Query(pydantic.BaseModel):
fast_search: bool = False
ef: Optional[int] = None
class LanceQueryBuilder(ABC):
"""An abstract query builder. Subclasses are defined for vector search,
@@ -257,6 +259,7 @@ class LanceQueryBuilder(ABC):
self._with_row_id = False
self._vector = None
self._text = None
self._ef = None
@deprecation.deprecated(
deprecated_in="0.3.1",
@@ -367,11 +370,13 @@ class LanceQueryBuilder(ABC):
----------
limit: int
The maximum number of results to return.
By default the query is limited to the first 10.
Call this method and pass 0, a negative value,
or None to remove the limit.
*WARNING* if you have a large dataset, removing
the limit can potentially result in reading a
The default query limit is 10 results.
ANN/KNN queries always run with a limit;
passing 0, a negative number, or None resets
the limit to the default value of 10.
*WARNING* if you have a large dataset, setting
the limit to a large number, e.g. the table size,
can potentially result in reading a
large amount of data into memory and cause
out of memory issues.
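A minimal sketch of the new default (connection path and table are hypothetical): searches now return at most 10 rows unless `limit` is called explicitly.

```python
import lancedb

db = lancedb.connect("/tmp/lancedb-demo")  # hypothetical path
tbl = db.create_table(
    "vectors", [{"vector": [float(i), float(i)]} for i in range(50)]
)

assert len(tbl.search([0.0, 0.0]).to_list()) == 10          # default limit
assert len(tbl.search([0.0, 0.0]).limit(25).to_list()) == 25
```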
@@ -638,6 +643,28 @@ class LanceVectorQueryBuilder(LanceQueryBuilder):
self._nprobes = nprobes
return self
def ef(self, ef: int) -> LanceVectorQueryBuilder:
"""Set the number of candidates to consider during search.
Higher values will yield better recall (more likely to find vectors if
they exist) at the expense of latency.
This only applies to HNSW-based indices.
The default value is 1.5 * limit.
Parameters
----------
ef: int
The number of candidates to consider during search.
Returns
-------
LanceVectorQueryBuilder
The LanceQueryBuilder object.
"""
self._ef = ef
return self
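Continuing the hypothetical `tbl` from the sketch above: `ef` is only consulted when the column has an HNSW-based index (e.g. IVF_HNSW_SQ) and is silently ignored otherwise.

```python
results = (
    tbl.search([0.0, 0.0])
    .limit(10)
    .ef(40)  # consider more candidates: better recall, higher latency
    .to_list()
)
```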
def refine_factor(self, refine_factor: int) -> LanceVectorQueryBuilder:
"""Set the refine factor to use, increasing the number of vectors sampled.
@@ -700,6 +727,7 @@ class LanceVectorQueryBuilder(LanceQueryBuilder):
with_row_id=self._with_row_id,
offset=self._offset,
fast_search=self._fast_search,
ef=self._ef,
)
result_set = self._table._execute_query(query, batch_size)
if self._reranker is not None:
@@ -1071,6 +1099,8 @@ class LanceHybridQueryBuilder(LanceQueryBuilder):
self._vector_query.nprobes(self._nprobes)
if self._refine_factor:
self._vector_query.refine_factor(self._refine_factor)
if self._ef:
self._vector_query.ef(self._ef)
with ThreadPoolExecutor() as executor:
fts_future = executor.submit(self._fts_query.with_row_id(True).to_arrow)
@@ -1197,6 +1227,29 @@ class LanceHybridQueryBuilder(LanceQueryBuilder):
self._nprobes = nprobes
return self
def ef(self, ef: int) -> LanceHybridQueryBuilder:
"""
Set the number of candidates to consider during search.
Higher values will yield better recall (more likely to find vectors if
they exist) at the expense of latency.
This only applies to HNSW-based indices.
The default value is 1.5 * limit.
Parameters
----------
ef: int
The number of candidates to consider during search.
Returns
-------
LanceHybridQueryBuilder
The LanceHybridQueryBuilder object.
"""
self._ef = ef
return self
def metric(self, metric: Literal["L2", "cosine", "dot"]) -> LanceHybridQueryBuilder:
"""Set the distance metric to use.
@@ -1449,10 +1502,11 @@ class AsyncQueryBase(object):
... print(plan)
>>> asyncio.run(doctest_example()) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
ProjectionExec: expr=[vector@0 as vector, _distance@2 as _distance]
FilterExec: _distance@2 IS NOT NULL
SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], preserve_partitioning=[false]
KNNVectorDistance: metric=l2
LanceScan: uri=..., projection=[vector], row_id=true, row_addr=false, ordered=false
GlobalLimitExec: skip=0, fetch=10
FilterExec: _distance@2 IS NOT NULL
SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], preserve_partitioning=[false]
KNNVectorDistance: metric=l2
LanceScan: uri=..., projection=[vector], row_id=true, row_addr=false, ordered=false
Parameters
----------
@@ -1495,7 +1549,8 @@ class AsyncQuery(AsyncQueryBase):
return pa.array(vec)
def nearest_to(
self, query_vector: Optional[Union[VEC, Tuple, List[VEC]]] = None
self,
query_vector: Union[VEC, Tuple, List[VEC]],
) -> AsyncVectorQuery:
"""
Find the nearest vectors to the given query vector.
@@ -1542,6 +1597,9 @@ class AsyncQuery(AsyncQueryBase):
will be added to the results. This column will contain the index of the
query vector that the result is nearest to.
"""
if query_vector is None:
raise ValueError("query_vector can not be None")
if (
isinstance(query_vector, list)
and len(query_vector) > 0
@@ -1618,7 +1676,7 @@ class AsyncVectorQuery(AsyncQueryBase):
"""
Set the number of partitions to search (probe)
This argument is only used when the vector column has an IVF PQ index.
This argument is only used when the vector column has an IVF-based index.
If there is no index then this value is ignored.
The IVF stage of IVF PQ divides the input into partitions (clusters) of
@@ -1640,6 +1698,21 @@ class AsyncVectorQuery(AsyncQueryBase):
self._inner.nprobes(nprobes)
return self
def ef(self, ef: int) -> AsyncVectorQuery:
"""
Set the number of candidates to consider during search
This argument is only used when the vector column has an HNSW index.
If there is no index then this value is ignored.
Increasing this value will increase the recall of your query but will also
increase the latency of your query. The default value is 1.5 * limit. This
default is good for many cases but the best value to use will depend on your
data and the recall that you need to achieve.
"""
self._inner.ef(ef)
return self
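The async builder gains the same knob. A hedged sketch, assuming the hypothetical path and table from the earlier examples:

```python
import asyncio

import lancedb

async def main():
    db = await lancedb.connect_async("/tmp/lancedb-demo")  # hypothetical path
    tbl = await db.open_table("vectors")
    # ef is ignored unless an HNSW-based index exists on the column
    return await tbl.query().nearest_to([0.0, 0.0]).ef(64).limit(10).to_list()

asyncio.run(main())
```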
def refine_factor(self, refine_factor: int) -> AsyncVectorQuery:
"""
A multiplier to control how many additional rows are taken during the refine

View File

@@ -0,0 +1,25 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
import asyncio
import threading
class BackgroundEventLoop:
"""
A background event loop that can run futures.
Used to bridge sync and async code without interfering with users' event loops.
"""
def __init__(self):
self.loop = asyncio.new_event_loop()
self.thread = threading.Thread(
target=self.loop.run_forever,
name="LanceDBBackgroundEventLoop",
daemon=True,
)
self.thread.start()
def run(self, future):
return asyncio.run_coroutine_threadsafe(future, self.loop).result()
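A minimal usage sketch of the class above: `run` blocks the calling sync thread, but the coroutine executes on the dedicated background loop, never on the caller's own event loop.

```python
import asyncio

async def fetch_value() -> int:
    await asyncio.sleep(0.01)
    return 42

# assumes the BackgroundEventLoop class defined above is in scope
loop = BackgroundEventLoop()
assert loop.run(fetch_value()) == 42
```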

View File

@@ -11,7 +11,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import asyncio
from datetime import timedelta
import logging
from concurrent.futures import ThreadPoolExecutor
@@ -21,6 +20,7 @@ import warnings
from lancedb import connect_async
from lancedb.remote import ClientConfig
from lancedb.remote.background_loop import BackgroundEventLoop
import pyarrow as pa
from overrides import override
@@ -31,6 +31,8 @@ from ..pydantic import LanceModel
from ..table import Table
from ..util import validate_table_name
LOOP = BackgroundEventLoop()
class RemoteDBConnection(DBConnection):
"""A connection to a remote LanceDB database."""
@@ -86,18 +88,9 @@ class RemoteDBConnection(DBConnection):
raise ValueError(f"Invalid scheme: {parsed.scheme}, only accepts db://")
self.db_name = parsed.netloc
import nest_asyncio
nest_asyncio.apply()
try:
self._loop = asyncio.get_running_loop()
except RuntimeError:
self._loop = asyncio.new_event_loop()
asyncio.set_event_loop(self._loop)
self.client_config = client_config
self._conn = self._loop.run_until_complete(
self._conn = LOOP.run(
connect_async(
db_url,
api_key=api_key,
@@ -127,9 +120,7 @@ class RemoteDBConnection(DBConnection):
-------
An iterator of table names.
"""
return self._loop.run_until_complete(
self._conn.table_names(start_after=page_token, limit=limit)
)
return LOOP.run(self._conn.table_names(start_after=page_token, limit=limit))
@override
def open_table(self, name: str, *, index_cache_size: Optional[int] = None) -> Table:
@@ -152,8 +143,8 @@ class RemoteDBConnection(DBConnection):
" (there is no local cache to configure)"
)
table = self._loop.run_until_complete(self._conn.open_table(name))
return RemoteTable(table, self.db_name, self._loop)
table = LOOP.run(self._conn.open_table(name))
return RemoteTable(table, self.db_name)
@override
def create_table(
@@ -268,7 +259,7 @@ class RemoteDBConnection(DBConnection):
from .table import RemoteTable
table = self._loop.run_until_complete(
table = LOOP.run(
self._conn.create_table(
name,
data,
@@ -278,7 +269,7 @@ class RemoteDBConnection(DBConnection):
fill_value=fill_value,
)
)
return RemoteTable(table, self.db_name, self._loop)
return RemoteTable(table, self.db_name)
@override
def drop_table(self, name: str):
@@ -289,7 +280,7 @@ class RemoteDBConnection(DBConnection):
name: str
The name of the table.
"""
self._loop.run_until_complete(self._conn.drop_table(name))
LOOP.run(self._conn.drop_table(name))
@override
def rename_table(self, cur_name: str, new_name: str):
@@ -302,7 +293,7 @@ class RemoteDBConnection(DBConnection):
new_name: str
The new name of the table.
"""
self._loop.run_until_complete(self._conn.rename_table(cur_name, new_name))
LOOP.run(self._conn.rename_table(cur_name, new_name))
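With every sync call routed through `LOOP.run(...)`, the remote SDK can now be driven from worker threads or coroutine-based code without touching the caller's event loop. A hedged sketch with hypothetical credentials:

```python
from concurrent.futures import ThreadPoolExecutor

import lancedb

db = lancedb.connect("db://my-db", api_key="sk-...", region="us-east-1")
tbl = db.open_table("vectors")

with ThreadPoolExecutor(4) as pool:
    futures = [pool.submit(tbl.count_rows) for _ in range(8)]
    counts = [f.result() for f in futures]
```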
async def close(self):
"""Close the connection to the database."""

View File

@@ -12,12 +12,12 @@
# limitations under the License.
from datetime import timedelta
import asyncio
import logging
from functools import cached_property
from typing import Dict, Iterable, List, Optional, Union, Literal
from lancedb.index import FTS, BTree, Bitmap, HnswPq, HnswSq, IvfPq, LabelList
from lancedb.remote.db import LOOP
import pyarrow as pa
from lancedb.common import DATA, VEC, VECTOR_COLUMN_NAME
@@ -33,9 +33,7 @@ class RemoteTable(Table):
self,
table: AsyncTable,
db_name: str,
loop: Optional[asyncio.AbstractEventLoop] = None,
):
self._loop = loop
self._table = table
self.db_name = db_name
@@ -56,12 +54,12 @@ class RemoteTable(Table):
of this Table
"""
return self._loop.run_until_complete(self._table.schema())
return LOOP.run(self._table.schema())
@property
def version(self) -> int:
"""Get the current version of the table"""
return self._loop.run_until_complete(self._table.version())
return LOOP.run(self._table.version())
@cached_property
def embedding_functions(self) -> dict:
@@ -78,6 +76,10 @@ class RemoteTable(Table):
self.schema.metadata
)
def list_versions(self):
"""List all versions of the table"""
return LOOP.run(self._table.list_versions())
def to_arrow(self) -> pa.Table:
"""to_arrow() is not yet supported on LanceDB cloud."""
raise NotImplementedError("to_arrow() is not yet supported on LanceDB cloud.")
@@ -86,13 +88,19 @@ class RemoteTable(Table):
"""to_pandas() is not yet supported on LanceDB cloud."""
return NotImplementedError("to_pandas() is not yet supported on LanceDB cloud.")
def checkout(self, version):
return LOOP.run(self._table.checkout(version))
def checkout_latest(self):
return LOOP.run(self._table.checkout_latest())
def list_indices(self):
"""List all the indices on the table"""
return self._loop.run_until_complete(self._table.list_indices())
return LOOP.run(self._table.list_indices())
def index_stats(self, index_uuid: str):
"""List all the stats of a specified index"""
return self._loop.run_until_complete(self._table.index_stats(index_uuid))
return LOOP.run(self._table.index_stats(index_uuid))
def create_scalar_index(
self,
@@ -122,9 +130,7 @@ class RemoteTable(Table):
else:
raise ValueError(f"Unknown index type: {index_type}")
self._loop.run_until_complete(
self._table.create_index(column, config=config, replace=replace)
)
LOOP.run(self._table.create_index(column, config=config, replace=replace))
def create_fts_index(
self,
@@ -134,9 +140,7 @@ class RemoteTable(Table):
with_position: bool = True,
):
config = FTS(with_position=with_position)
self._loop.run_until_complete(
self._table.create_index(column, config=config, replace=replace)
)
LOOP.run(self._table.create_index(column, config=config, replace=replace))
def create_index(
self,
@@ -217,9 +221,7 @@ class RemoteTable(Table):
" 'IVF_PQ', 'IVF_HNSW_PQ', 'IVF_HNSW_SQ'"
)
self._loop.run_until_complete(
self._table.create_index(vector_column_name, config=config)
)
LOOP.run(self._table.create_index(vector_column_name, config=config))
def add(
self,
@@ -251,7 +253,7 @@ class RemoteTable(Table):
The value to use when filling vectors. Only used if on_bad_vectors="fill".
"""
self._loop.run_until_complete(
LOOP.run(
self._table.add(
data, mode=mode, on_bad_vectors=on_bad_vectors, fill_value=fill_value
)
@@ -339,9 +341,7 @@ class RemoteTable(Table):
def _execute_query(
self, query: Query, batch_size: Optional[int] = None
) -> pa.RecordBatchReader:
return self._loop.run_until_complete(
self._table._execute_query(query, batch_size=batch_size)
)
return LOOP.run(self._table._execute_query(query, batch_size=batch_size))
def merge_insert(self, on: Union[str, Iterable[str]]) -> LanceMergeInsertBuilder:
"""Returns a [`LanceMergeInsertBuilder`][lancedb.merge.LanceMergeInsertBuilder]
@@ -358,9 +358,7 @@ class RemoteTable(Table):
on_bad_vectors: str,
fill_value: float,
):
self._loop.run_until_complete(
self._table._do_merge(merge, new_data, on_bad_vectors, fill_value)
)
LOOP.run(self._table._do_merge(merge, new_data, on_bad_vectors, fill_value))
def delete(self, predicate: str):
"""Delete rows from the table.
@@ -409,7 +407,7 @@ class RemoteTable(Table):
x vector _distance # doctest: +SKIP
0 2 [3.0, 4.0] 85.0 # doctest: +SKIP
"""
self._loop.run_until_complete(self._table.delete(predicate))
LOOP.run(self._table.delete(predicate))
def update(
self,
@@ -459,7 +457,7 @@ class RemoteTable(Table):
2 2 [10.0, 10.0] # doctest: +SKIP
"""
self._loop.run_until_complete(
LOOP.run(
self._table.update(where=where, updates=values, updates_sql=values_sql)
)
@@ -489,7 +487,7 @@ class RemoteTable(Table):
)
def count_rows(self, filter: Optional[str] = None) -> int:
return self._loop.run_until_complete(self._table.count_rows(filter))
return LOOP.run(self._table.count_rows(filter))
def add_columns(self, transforms: Dict[str, str]):
raise NotImplementedError(

View File

@@ -41,7 +41,7 @@ class CohereReranker(Reranker):
def __init__(
self,
model_name: str = "rerank-english-v2.0",
model_name: str = "rerank-english-v3.0",
column: str = "text",
top_n: Union[int, None] = None,
return_score="relevance",

View File

@@ -8,7 +8,7 @@ import inspect
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import timedelta
from datetime import datetime, timedelta
from functools import cached_property
from typing import (
TYPE_CHECKING,
@@ -1012,6 +1012,39 @@ class Table(ABC):
The names of the columns to drop.
"""
@abstractmethod
def checkout(self):
"""
Checks out a specific version of the Table
Any read operation on the table will now access the data at the checked out
version. As a consequence, calling this method will disable any read consistency
interval that was previously set.
This is a read-only operation that turns the table into a sort of "view"
or "detached head". Other table instances will not be affected. To make the
change permanent you can use the `[Self::restore]` method.
Any operation that modifies the table will fail while the table is in a checked
out state.
To return the table to a normal state use `[Self::checkout_latest]`
"""
@abstractmethod
def checkout_latest(self):
"""
Ensures the table is pointing at the latest version
This can be used to manually update a table when the read_consistency_interval
is None.
It can also be used to undo a `[Self::checkout]` operation.
"""
@abstractmethod
def list_versions(self):
"""List all versions of the table"""
@cached_property
def _dataset_uri(self) -> str:
return _table_uri(self._conn.uri, self.name)
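A short sketch of the time-travel workflow these abstract methods describe (local path is hypothetical; assumes each write produces one version):

```python
import lancedb

db = lancedb.connect("/tmp/lancedb-versions")  # hypothetical path
tbl = db.create_table("t", [{"id": 1}])
tbl.add([{"id": 2}])

versions = tbl.list_versions()        # one entry per commit
tbl.checkout(versions[0]["version"])  # read-only "detached head"
print(tbl.count_rows())               # row count as of that version
tbl.checkout_latest()                 # back to the current, writable view
assert tbl.count_rows() == 2
```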
@@ -1959,6 +1992,7 @@ class LanceTable(Table):
"metric": query.metric,
"nprobes": query.nprobes,
"refine_factor": query.refine_factor,
"ef": query.ef,
}
return ds.scanner(
columns=query.columns,
@@ -2697,7 +2731,7 @@ class AsyncTable:
def vector_search(
self,
query_vector: Optional[Union[VEC, Tuple]] = None,
query_vector: Union[VEC, Tuple],
) -> AsyncVectorQuery:
"""
Search the table with a given query vector.
@@ -2736,6 +2770,8 @@ class AsyncTable:
async_query = async_query.refine_factor(query.refine_factor)
if query.vector_column:
async_query = async_query.column(query.vector_column)
if query.ef:
async_query = async_query.ef(query.ef)
if not query.prefilter:
async_query = async_query.postfilter()
@@ -2899,6 +2935,19 @@ class AsyncTable:
"""
return await self._inner.version()
async def list_versions(self):
"""
List all versions of the table
"""
versions = await self._inner.list_versions()
for v in versions:
ts_nanos = v["timestamp"]
v["timestamp"] = datetime.fromtimestamp(ts_nanos // 1e9) + timedelta(
microseconds=(ts_nanos % 1e9) // 1e3
)
return versions
async def checkout(self, version):
"""
Checks out a specific version of the Table

View File

@@ -599,7 +599,9 @@ async def test_create_in_v2_mode(tmp_path):
)
async def is_in_v2_mode(tbl):
batches = await tbl.query().to_batches(max_batch_length=1024 * 10)
batches = (
await tbl.query().limit(10 * 1024).to_batches(max_batch_length=1024 * 10)
)
num_batches = 0
async for batch in batches:
num_batches += 1

View File

@@ -90,10 +90,13 @@ def test_embedding_with_bad_results(tmp_path):
self, texts: Union[List[str], np.ndarray]
) -> list[Union[np.array, None]]:
# Return None, which is bad if field is non-nullable
return [
None if i % 2 == 0 else np.random.randn(self.ndims())
a = [
np.full(self.ndims(), np.nan)
if i % 2 == 0
else np.random.randn(self.ndims())
for i in range(len(texts))
]
return a
db = lancedb.connect(tmp_path)
registry = EmbeddingFunctionRegistry.get_instance()

View File

@@ -1,15 +1,6 @@
# Copyright (c) 2023. LanceDB Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
import importlib
import io
import os
@@ -17,6 +8,7 @@ import os
import lancedb
import numpy as np
import pandas as pd
import pyarrow as pa
import pytest
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector
@@ -444,6 +436,30 @@ def test_watsonx_embedding(tmp_path):
assert tbl.search("hello").limit(1).to_pandas()["text"][0] == "hello world"
@pytest.mark.slow
@pytest.mark.skipif(
os.environ.get("OPENAI_API_KEY") is None, reason="OPENAI_API_KEY not set"
)
def test_openai_with_empty_strs(tmp_path):
model = get_registry().get("openai").create(max_retries=0)
class TextModel(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
df = pd.DataFrame({"text": ["hello world", ""]})
db = lancedb.connect(tmp_path)
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(df, on_bad_vectors="skip")
tb = tbl.to_arrow()
assert tb.schema.field_by_name("vector").type == pa.list_(
pa.float32(), model.ndims()
)
assert len(tb) == 2
assert tb["vector"].is_null().to_pylist() == [False, True]
@pytest.mark.slow
@pytest.mark.skipif(
importlib.util.find_spec("ollama") is None, reason="Ollama not installed"

View File

@@ -1,16 +1,5 @@
# Copyright 2023 LanceDB Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
import json
import sys
@@ -172,6 +161,26 @@ def test_pydantic_to_arrow_py38():
assert schema == expect_schema
def test_nullable_vector():
class NullableModel(pydantic.BaseModel):
vec: Vector(16, nullable=False)
schema = pydantic_to_schema(NullableModel)
assert schema == pa.schema([pa.field("vec", pa.list_(pa.float32(), 16), False)])
class DefaultModel(pydantic.BaseModel):
vec: Vector(16)
schema = pydantic_to_schema(DefaultModel)
assert schema == pa.schema([pa.field("vec", pa.list_(pa.float32(), 16), True)])
class NotNullableModel(pydantic.BaseModel):
vec: Vector(16)
schema = pydantic_to_schema(NotNullableModel)
assert schema == pa.schema([pa.field("vec", pa.list_(pa.float32(), 16), True)])
def test_fixed_size_list_field():
class TestModel(pydantic.BaseModel):
vec: Vector(16)
@@ -192,7 +201,7 @@ def test_fixed_size_list_field():
schema = pydantic_to_schema(TestModel)
assert schema == pa.schema(
[
pa.field("vec", pa.list_(pa.float32(), 16), False),
pa.field("vec", pa.list_(pa.float32(), 16)),
pa.field("li", pa.list_(pa.int64()), False),
]
)

View File

@@ -1,21 +1,9 @@
# Copyright 2023 LanceDB Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
import unittest.mock as mock
from datetime import timedelta
from typing import Optional
import lance
import lancedb
from lancedb.index import IvfPq
import numpy as np
@@ -23,41 +11,15 @@ import pandas.testing as tm
import pyarrow as pa
import pytest
import pytest_asyncio
from lancedb.db import LanceDBConnection
from lancedb.pydantic import LanceModel, Vector
from lancedb.query import AsyncQueryBase, LanceVectorQueryBuilder, Query
from lancedb.table import AsyncTable, LanceTable
class MockTable:
def __init__(self, tmp_path):
self.uri = tmp_path
self._conn = LanceDBConnection(self.uri)
def to_lance(self):
return lance.dataset(self.uri)
def _execute_query(self, query, batch_size: Optional[int] = None):
ds = self.to_lance()
return ds.scanner(
columns=query.columns,
filter=query.filter,
prefilter=query.prefilter,
nearest={
"column": query.vector_column,
"q": query.vector,
"k": query.k,
"metric": query.metric,
"nprobes": query.nprobes,
"refine_factor": query.refine_factor,
},
batch_size=batch_size,
offset=query.offset,
).to_reader()
@pytest.fixture
def table(tmp_path) -> MockTable:
@pytest.fixture(scope="module")
def table(tmpdir_factory) -> lancedb.table.Table:
tmp_path = str(tmpdir_factory.mktemp("data"))
db = lancedb.connect(tmp_path)
df = pa.table(
{
"vector": pa.array(
@@ -68,8 +30,7 @@ def table(tmp_path) -> MockTable:
"float_field": pa.array([1.0, 2.0]),
}
)
lance.write_dataset(df, tmp_path)
return MockTable(tmp_path)
return db.create_table("test", df)
@pytest_asyncio.fixture
@@ -126,6 +87,12 @@ def test_query_builder(table):
assert all(np.array(rs[0]["vector"]) == [1, 2])
def test_with_row_id(table: lancedb.table.Table):
rs = table.search().with_row_id(True).to_arrow()
assert "_rowid" in rs.column_names
assert rs["_rowid"].to_pylist() == [0, 1]
def test_vector_query_with_no_limit(table):
with pytest.raises(ValueError):
LanceVectorQueryBuilder(table, [0, 0], "vector").limit(0).select(
@@ -365,6 +332,12 @@ async def test_query_to_pandas_async(table_async: AsyncTable):
assert df.shape == (0, 4)
@pytest.mark.asyncio
async def test_none_query(table_async: AsyncTable):
with pytest.raises(ValueError):
await table_async.query().nearest_to(None).to_arrow()
@pytest.mark.asyncio
async def test_fast_search_async(tmp_path):
db = await lancedb.connect_async(tmp_path)

View File

@@ -1,6 +1,7 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
from concurrent.futures import ThreadPoolExecutor
import contextlib
from datetime import timedelta
import http.server
@@ -103,6 +104,47 @@ async def test_async_remote_db():
assert table_names == []
@pytest.mark.asyncio
async def test_async_checkout():
def handler(request):
if request.path == "/v1/table/test/describe/":
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.end_headers()
response = json.dumps({"version": 42, "schema": {"fields": []}})
request.wfile.write(response.encode())
return
content_len = int(request.headers.get("Content-Length"))
body = request.rfile.read(content_len)
body = json.loads(body)
print("body is", body)
count = 0
if body["version"] == 1:
count = 100
elif body["version"] == 2:
count = 200
elif body["version"] is None:
count = 300
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.end_headers()
request.wfile.write(json.dumps(count).encode())
async with mock_lancedb_connection_async(handler) as db:
table = await db.open_table("test")
assert await table.count_rows() == 300
await table.checkout(1)
assert await table.count_rows() == 100
await table.checkout(2)
assert await table.count_rows() == 200
await table.checkout_latest()
assert await table.count_rows() == 300
@pytest.mark.asyncio
async def test_http_error():
request_id_holder = {"request_id": None}
@@ -146,6 +188,47 @@ async def test_retry_error():
assert cause.status_code == 429
def test_table_add_in_threadpool():
def handler(request):
if request.path == "/v1/table/test/insert/":
request.send_response(200)
request.end_headers()
elif request.path == "/v1/table/test/create/":
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.end_headers()
request.wfile.write(b"{}")
elif request.path == "/v1/table/test/describe/":
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.end_headers()
payload = json.dumps(
dict(
version=1,
schema=dict(
fields=[
dict(name="id", type={"type": "int64"}, nullable=False),
]
),
)
)
request.wfile.write(payload.encode())
else:
request.send_response(404)
request.end_headers()
with mock_lancedb_connection(handler) as db:
table = db.create_table("test", [{"id": 1}])
with ThreadPoolExecutor(3) as executor:
futures = []
for _ in range(10):
future = executor.submit(table.add, [{"id": 1}])
futures.append(future)
for future in futures:
future.result()
@contextlib.contextmanager
def query_test_table(query_handler):
def handler(request):
@@ -185,8 +268,10 @@ def test_query_sync_minimal():
"k": 10,
"prefilter": False,
"refine_factor": None,
"ef": None,
"vector": [1.0, 2.0, 3.0],
"nprobes": 20,
"version": None,
}
return pa.table({"id": [1, 2, 3]})
@@ -204,6 +289,7 @@ def test_query_sync_empty_query():
"filter": "true",
"vector": [],
"columns": ["id"],
"version": None,
}
return pa.table({"id": [1, 2, 3]})
@@ -223,11 +309,13 @@ def test_query_sync_maximal():
"refine_factor": 10,
"vector": [1.0, 2.0, 3.0],
"nprobes": 5,
"ef": None,
"filter": "id > 0",
"columns": ["id", "name"],
"vector_column": "vector2",
"fast_search": True,
"with_row_id": True,
"version": None,
}
return pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"]})
@@ -266,6 +354,7 @@ def test_query_sync_fts():
},
"k": 10,
"vector": [],
"version": None,
}
return pa.table({"id": [1, 2, 3]})
@@ -282,6 +371,7 @@ def test_query_sync_fts():
"k": 42,
"vector": [],
"with_row_id": True,
"version": None,
}
return pa.table({"id": [1, 2, 3]})
@@ -307,6 +397,7 @@ def test_query_sync_hybrid():
"k": 42,
"vector": [],
"with_row_id": True,
"version": None,
}
return pa.table({"_rowid": [1, 2, 3], "_score": [0.1, 0.2, 0.3]})
else:
@@ -318,7 +409,9 @@ def test_query_sync_hybrid():
"refine_factor": None,
"vector": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
"nprobes": 20,
"ef": None,
"with_row_id": True,
"version": None,
}
return pa.table({"_rowid": [1, 2, 3], "_distance": [0.1, 0.2, 0.3]})

View File

@@ -195,6 +195,10 @@ impl VectorQuery {
self.inner = self.inner.clone().nprobes(nprobe as usize);
}
pub fn ef(&mut self, ef: u32) {
self.inner = self.inner.clone().ef(ef as usize);
}
pub fn bypass_vector_index(&mut self) {
self.inner = self.inner.clone().bypass_vector_index()
}

View File

@@ -8,7 +8,7 @@ use lancedb::table::{
use pyo3::{
exceptions::{PyRuntimeError, PyValueError},
pyclass, pymethods,
types::{PyDict, PyDictMethods, PyString},
types::{IntoPyDict, PyDict, PyDictMethods, PyString},
Bound, FromPyObject, PyAny, PyRef, PyResult, Python, ToPyObject,
};
use pyo3_asyncio_0_21::tokio::future_into_py;
@@ -246,6 +246,33 @@ impl Table {
)
}
pub fn list_versions(self_: PyRef<'_, Self>) -> PyResult<Bound<'_, PyAny>> {
let inner = self_.inner_ref()?.clone();
future_into_py(self_.py(), async move {
let versions = inner.list_versions().await.infer_error()?;
let versions_as_dict = Python::with_gil(|py| {
versions
.iter()
.map(|v| {
let dict = PyDict::new_bound(py);
dict.set_item("version", v.version).unwrap();
dict.set_item(
"timestamp",
v.timestamp.timestamp_nanos_opt().unwrap_or_default(),
)
.unwrap();
let tup: Vec<(&String, &String)> = v.metadata.iter().collect();
dict.set_item("metadata", tup.into_py_dict(py)).unwrap();
dict.to_object(py)
})
.collect::<Vec<_>>()
});
Ok(versions_as_dict)
})
}
pub fn checkout(self_: PyRef<'_, Self>, version: u64) -> PyResult<Bound<'_, PyAny>> {
let inner = self_.inner_ref()?.clone();
future_into_py(self_.py(), async move {

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-node"
version = "0.13.0-beta.2"
version = "0.14.0-beta.0"
description = "Serverless, low-latency vector database for AI applications"
license.workspace = true
edition.workspace = true

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb"
version = "0.13.0-beta.2"
version = "0.14.0-beta.0"
edition.workspace = true
description = "LanceDB: A serverless, low-latency vector database for AI applications"
license.workspace = true
@@ -27,6 +27,7 @@ half = { workspace = true }
lazy_static.workspace = true
lance = { workspace = true }
lance-datafusion.workspace = true
lance-io = { workspace = true }
lance-index = { workspace = true }
lance-table = { workspace = true }
lance-linalg = { workspace = true }
@@ -48,9 +49,16 @@ async-openai = { version = "0.20.0", optional = true }
serde_with = { version = "3.8.1" }
aws-sdk-bedrockruntime = { version = "1.27.0", optional = true }
# For remote feature
reqwest = { version = "0.12.0", features = ["gzip", "json", "stream"], optional = true }
rand = { version = "0.8.3", features = ["small_rng"], optional = true}
http = { version = "1", optional = true } # Matching what is in reqwest
reqwest = { version = "0.12.0", default-features = false, features = [
"charset",
"gzip",
"http2",
"json",
"macos-system-configuration",
"stream",
], optional = true }
rand = { version = "0.8.3", features = ["small_rng"], optional = true }
http = { version = "1", optional = true } # Matching what is in reqwest
uuid = { version = "1.7.0", features = ["v4"], optional = true }
polars-arrow = { version = ">=0.37,<0.40.0", optional = true }
polars = { version = ">=0.37,<0.40.0", optional = true }
@@ -75,7 +83,7 @@ http-body = "1" # Matching reqwest
[features]
default = []
default = ["default-tls"]
remote = ["dep:reqwest", "dep:http", "dep:rand", "dep:uuid"]
fp16kernels = ["lance-linalg/fp16kernels"]
s3-test = []
@@ -90,6 +98,11 @@ sentence-transformers = [
"dep:tokenizers"
]
# TLS
default-tls = ["reqwest?/default-tls"]
native-tls = ["reqwest?/native-tls"]
rustls-tls = ["reqwest?/rustls-tls"]
[[example]]
name = "openai"
required-features = ["openai"]

View File

@@ -38,6 +38,9 @@ use crate::table::{NativeTable, TableDefinition, WriteOptions};
use crate::utils::validate_table_name;
use crate::Table;
pub use lance_encoding::version::LanceFileVersion;
#[cfg(feature = "remote")]
use lance_io::object_store::StorageOptions;
use lance_table::io::commit::commit_handler_from_url;
pub const LANCE_FILE_EXTENSION: &str = "lance";
@@ -133,7 +136,7 @@ impl IntoArrow for NoData {
/// A builder for configuring a [`Connection::create_table`] operation
pub struct CreateTableBuilder<const HAS_DATA: bool, T: IntoArrow> {
parent: Arc<dyn ConnectionInternal>,
pub(crate) parent: Arc<dyn ConnectionInternal>,
pub(crate) name: String,
pub(crate) data: Option<T>,
pub(crate) mode: CreateTableMode,
@@ -341,7 +344,7 @@ pub struct OpenTableBuilder {
}
impl OpenTableBuilder {
fn new(parent: Arc<dyn ConnectionInternal>, name: String) -> Self {
pub(crate) fn new(parent: Arc<dyn ConnectionInternal>, name: String) -> Self {
Self {
parent,
name,
@@ -717,12 +720,14 @@ impl ConnectBuilder {
message: "An api_key is required when connecting to LanceDb Cloud".to_string(),
})?;
let storage_options = StorageOptions(self.storage_options.clone());
let internal = Arc::new(crate::remote::db::RemoteDatabase::try_new(
&self.uri,
&api_key,
&region,
self.host_override,
self.client_config,
storage_options.into(),
)?);
Ok(Connection {
internal,
@@ -855,7 +860,7 @@ impl Database {
let table_base_uri = if let Some(store) = engine {
static WARN_ONCE: std::sync::Once = std::sync::Once::new();
WARN_ONCE.call_once(|| {
log::warn!("Specifing engine is not a publicly supported feature in lancedb yet. THE API WILL CHANGE");
log::warn!("Specifying engine is not a publicly supported feature in lancedb yet. THE API WILL CHANGE");
});
let old_scheme = url.scheme().to_string();
let new_scheme = format!("{}+{}", old_scheme, store);
@@ -1036,6 +1041,7 @@ impl ConnectionInternal for Database {
};
let mut write_params = options.write_options.lance_write_params.unwrap_or_default();
if matches!(&options.mode, CreateTableMode::Overwrite) {
write_params.mode = WriteMode::Overwrite;
}
@@ -1122,7 +1128,7 @@ impl ConnectionInternal for Database {
let dir_name = format!("{}.{}", name, LANCE_EXTENSION);
let full_path = self.base_path.child(dir_name.clone());
self.object_store
.remove_dir_all(full_path)
.remove_dir_all(full_path.clone())
.await
.map_err(|err| match err {
// this error is not lance::Error::DatasetNotFound,
@@ -1132,6 +1138,19 @@ impl ConnectionInternal for Database {
},
_ => Error::from(err),
})?;
let object_store_params = ObjectStoreParams {
storage_options: Some(self.storage_options.clone()),
..Default::default()
};
let mut uri = self.uri.clone();
if let Some(query_string) = &self.query_string {
uri.push_str(&format!("?{}", query_string));
}
let commit_handler = commit_handler_from_url(&uri, &Some(object_store_params))
.await
.unwrap();
commit_handler.delete(&full_path).await.unwrap();
Ok(())
}
@@ -1169,6 +1188,7 @@ mod tests {
use lance_testing::datagen::{BatchGenerator, IncrementingInt32};
use tempfile::tempdir;
use crate::query::QueryBase;
use crate::query::{ExecutableQuery, QueryExecutionOptions};
use super::*;
@@ -1296,6 +1316,7 @@ mod tests {
// In v1 the row group size will trump max_batch_length
let batches = tbl
.query()
.limit(20000)
.execute_with_options(QueryExecutionOptions {
max_batch_length: 50000,
..Default::default()

View File

@@ -596,7 +596,7 @@ impl Query {
pub(crate) fn new(parent: Arc<dyn TableInternal>) -> Self {
Self {
parent,
limit: None,
limit: Some(DEFAULT_TOP_K),
offset: None,
filter: None,
full_text_search: None,
@@ -704,6 +704,9 @@ pub struct VectorQuery {
// IVF PQ - ANN search.
pub(crate) query_vector: Vec<Arc<dyn Array>>,
pub(crate) nprobes: usize,
// The number of candidates to return during the refine step for HNSW,
// defaults to 1.5 * limit.
pub(crate) ef: Option<usize>,
pub(crate) refine_factor: Option<u32>,
pub(crate) distance_type: Option<DistanceType>,
/// Default is true. Set to false to enforce a brute force search.
@@ -717,6 +720,7 @@ impl VectorQuery {
column: None,
query_vector: Vec::new(),
nprobes: 20,
ef: None,
refine_factor: None,
distance_type: None,
use_index: true,
@@ -776,6 +780,18 @@ impl VectorQuery {
self
}
/// Set the number of candidates to return during the refine step for HNSW
///
/// This argument is only used when the vector column has an HNSW index.
/// If there is no index then this value is ignored.
///
/// Increasing this value will increase the recall of your query but will
/// also increase the latency of your query. The default value is 1.5*limit.
pub fn ef(mut self, ef: usize) -> Self {
self.ef = Some(ef);
self
}
/// A multiplier to control how many additional rows are taken during the refine step
///
/// This argument is only used when the vector column has an IVF PQ index.

View File

@@ -21,6 +21,7 @@ use reqwest::{
};
use crate::error::{Error, Result};
use crate::remote::db::RemoteOptions;
const REQUEST_ID_HEADER: &str = "x-request-id";
@@ -215,6 +216,7 @@ impl RestfulLanceDbClient<Sender> {
region: &str,
host_override: Option<String>,
client_config: ClientConfig,
options: &RemoteOptions,
) -> Result<Self> {
let parsed_url = url::Url::parse(db_url).map_err(|err| Error::InvalidInput {
message: format!("db_url is not a valid URL. '{db_url}'. Error: {err}"),
@@ -255,6 +257,7 @@ impl RestfulLanceDbClient<Sender> {
region,
db_name,
host_override.is_some(),
options,
)?)
.user_agent(client_config.user_agent)
.build()
@@ -262,6 +265,7 @@ impl RestfulLanceDbClient<Sender> {
message: "Failed to build HTTP client".into(),
source: Some(Box::new(err)),
})?;
let host = match host_override {
Some(host_override) => host_override,
None => format!("https://{}.{}.api.lancedb.com", db_name, region),
@@ -287,6 +291,7 @@ impl<S: HttpSend> RestfulLanceDbClient<S> {
region: &str,
db_name: &str,
has_host_override: bool,
options: &RemoteOptions,
) -> Result<HeaderMap> {
let mut headers = HeaderMap::new();
headers.insert(
@@ -313,6 +318,23 @@ impl<S: HttpSend> RestfulLanceDbClient<S> {
);
}
if let Some(v) = options.0.get("account_name") {
headers.insert(
"x-azure-storage-account-name",
HeaderValue::from_str(v).map_err(|_| Error::InvalidInput {
message: format!("non-ascii storage account name '{}' provided", v),
})?,
);
}
if let Some(v) = options.0.get("azure_storage_account_name") {
headers.insert(
"x-azure-storage-account-name",
HeaderValue::from_str(v).map_err(|_| Error::InvalidInput {
message: format!("non-ascii storage account name '{}' provided", v),
})?,
);
}
Ok(headers)
}

View File

@@ -12,18 +12,21 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::sync::Arc;
use arrow_array::RecordBatchReader;
use async_trait::async_trait;
use http::StatusCode;
use lance_io::object_store::StorageOptions;
use moka::future::Cache;
use reqwest::header::CONTENT_TYPE;
use serde::Deserialize;
use tokio::task::spawn_blocking;
use crate::connection::{
ConnectionInternal, CreateTableBuilder, NoData, OpenTableBuilder, TableNamesBuilder,
ConnectionInternal, CreateTableBuilder, CreateTableMode, NoData, OpenTableBuilder,
TableNamesBuilder,
};
use crate::embeddings::EmbeddingRegistry;
use crate::error::Result;
@@ -52,9 +55,16 @@ impl RemoteDatabase {
region: &str,
host_override: Option<String>,
client_config: ClientConfig,
options: RemoteOptions,
) -> Result<Self> {
let client =
RestfulLanceDbClient::try_new(uri, api_key, region, host_override, client_config)?;
let client = RestfulLanceDbClient::try_new(
uri,
api_key,
region,
host_override,
client_config,
&options,
)?;
let table_cache = Cache::builder()
.time_to_live(std::time::Duration::from_secs(300))
@@ -95,6 +105,16 @@ impl<S: HttpSend> std::fmt::Display for RemoteDatabase<S> {
}
}
impl From<&CreateTableMode> for &'static str {
fn from(val: &CreateTableMode) -> Self {
match val {
CreateTableMode::Create => "create",
CreateTableMode::Overwrite => "overwrite",
CreateTableMode::ExistOk(_) => "exist_ok",
}
}
}
#[async_trait]
impl<S: HttpSend> ConnectionInternal for RemoteDatabase<S> {
async fn table_names(&self, options: TableNamesBuilder) -> Result<Vec<String>> {
@@ -133,14 +153,40 @@ impl<S: HttpSend> ConnectionInternal for RemoteDatabase<S> {
let req = self
.client
.post(&format!("/v1/table/{}/create/", options.name))
.query(&[("mode", Into::<&str>::into(&options.mode))])
.body(data_buffer)
.header(CONTENT_TYPE, ARROW_STREAM_CONTENT_TYPE);
let (request_id, rsp) = self.client.send(req, false).await?;
if rsp.status() == StatusCode::BAD_REQUEST {
let body = rsp.text().await.err_to_http(request_id.clone())?;
if body.contains("already exists") {
return Err(crate::Error::TableAlreadyExists { name: options.name });
return match options.mode {
CreateTableMode::Create => {
Err(crate::Error::TableAlreadyExists { name: options.name })
}
CreateTableMode::ExistOk(callback) => {
let builder = OpenTableBuilder::new(options.parent, options.name);
let builder = (callback)(builder);
builder.execute().await
}
// This should not happen, as we explicitly set the mode to overwrite and the server
// shouldn't return an error if the table already exists.
//
// However, if the server is an older version that doesn't support the mode
// parameter, we'll get a 400 response.
CreateTableMode::Overwrite => Err(crate::Error::Http {
source: format!(
"unexpected response from server for create mode overwrite: {}",
body
)
.into(),
request_id,
status_code: Some(StatusCode::BAD_REQUEST),
}),
};
} else {
return Err(crate::Error::InvalidInput { message: body });
}
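On the Python side this surfaces as the usual `mode` and `exist_ok` arguments, now honored for remote connections too; a hedged sketch with hypothetical credentials (the server receives the mode as a `?mode=...` query parameter):

```python
import lancedb

db = lancedb.connect("db://my-db", api_key="sk-...", region="us-east-1")
data = [{"id": 1}]

tbl = db.create_table("events", data, mode="overwrite")  # ?mode=overwrite
tbl = db.create_table("events", data, exist_ok=True)     # ?mode=exist_ok
```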
@@ -206,6 +252,29 @@ impl<S: HttpSend> ConnectionInternal for RemoteDatabase<S> {
}
}
/// RemoteOptions contains a subset of StorageOptions that are compatible with Remote LanceDB connections
#[derive(Clone, Debug, Default)]
pub struct RemoteOptions(pub HashMap<String, String>);
impl RemoteOptions {
pub fn new(options: HashMap<String, String>) -> Self {
Self(options)
}
}
impl From<StorageOptions> for RemoteOptions {
fn from(options: StorageOptions) -> Self {
let supported_opts = vec!["account_name", "azure_storage_account_name"];
let mut filtered = HashMap::new();
for opt in supported_opts {
if let Some(v) = options.0.get(opt) {
filtered.insert(opt.to_string(), v.to_string());
}
}
RemoteOptions::new(filtered)
}
}
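A hedged client-side sketch, assuming the Python `connect` forwards `storage_options` to the remote client as this change expects: only the two account-name keys survive the filter and are sent as the `x-azure-storage-account-name` header.

```python
import lancedb

db = lancedb.connect(
    "db://my-container",  # hypothetical remote database
    api_key="sk-...",
    region="us-east-1",
    storage_options={"azure_storage_account_name": "my-storage-account"},
)
```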
#[cfg(test)]
mod tests {
use std::sync::{Arc, OnceLock};
@@ -213,7 +282,9 @@ mod tests {
use arrow_array::{Int32Array, RecordBatch, RecordBatchIterator};
use arrow_schema::{DataType, Field, Schema};
use crate::connection::ConnectBuilder;
use crate::{
connection::CreateTableMode,
remote::{ARROW_STREAM_CONTENT_TYPE, JSON_CONTENT_TYPE},
Connection, Error,
};
@@ -382,6 +453,73 @@ mod tests {
);
}
#[tokio::test]
async fn test_create_table_modes() {
let test_cases = [
(None, "mode=create"),
(Some(CreateTableMode::Create), "mode=create"),
(Some(CreateTableMode::Overwrite), "mode=overwrite"),
(
Some(CreateTableMode::ExistOk(Box::new(|b| b))),
"mode=exist_ok",
),
];
for (mode, expected_query_string) in test_cases {
let conn = Connection::new_with_handler(move |request| {
assert_eq!(request.method(), &reqwest::Method::POST);
assert_eq!(request.url().path(), "/v1/table/table1/create/");
assert_eq!(request.url().query(), Some(expected_query_string));
http::Response::builder().status(200).body("").unwrap()
});
let data = RecordBatch::try_new(
Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)])),
vec![Arc::new(Int32Array::from(vec![1, 2, 3]))],
)
.unwrap();
let reader = RecordBatchIterator::new([Ok(data.clone())], data.schema());
let mut builder = conn.create_table("table1", reader);
if let Some(mode) = mode {
builder = builder.mode(mode);
}
builder.execute().await.unwrap();
}
// check that the open table callback is called with exist_ok
let conn = Connection::new_with_handler(|request| match request.url().path() {
"/v1/table/table1/create/" => http::Response::builder()
.status(400)
.body("Table table1 already exists")
.unwrap(),
"/v1/table/table1/describe/" => http::Response::builder().status(200).body("").unwrap(),
_ => {
panic!("unexpected path: {:?}", request.url().path());
}
});
let data = RecordBatch::try_new(
Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)])),
vec![Arc::new(Int32Array::from(vec![1, 2, 3]))],
)
.unwrap();
let called: Arc<OnceLock<bool>> = Arc::new(OnceLock::new());
let reader = RecordBatchIterator::new([Ok(data.clone())], data.schema());
let called_in_cb = called.clone();
conn.create_table("table1", reader)
.mode(CreateTableMode::ExistOk(Box::new(move |b| {
called_in_cb.clone().set(true).unwrap();
b
})))
.execute()
.await
.unwrap();
let called = *called.get().unwrap_or(&false);
assert!(called);
}
#[tokio::test]
async fn test_create_table_empty() {
let conn = Connection::new_with_handler(|request| {
@@ -436,4 +574,16 @@ mod tests {
});
conn.rename_table("table1", "table2").await.unwrap();
}
#[tokio::test]
async fn test_connect_remote_options() {
let db_uri = "db://my-container/my-prefix";
let _ = ConnectBuilder::new(db_uri)
.region("us-east-1")
.api_key("my-api-key")
.storage_options(vec![("azure_storage_account_name", "my-storage-account")])
.execute()
.await
.unwrap();
}
}

View File

@@ -19,9 +19,10 @@ use http::header::CONTENT_TYPE;
use http::StatusCode;
use lance::arrow::json::JsonSchema;
use lance::dataset::scanner::DatasetRecordBatchStream;
use lance::dataset::{ColumnAlteration, NewColumnTransform};
use lance::dataset::{ColumnAlteration, NewColumnTransform, Version};
use lance_datafusion::exec::OneShotExec;
use serde::{Deserialize, Serialize};
use tokio::sync::RwLock;
use crate::{
connection::NoData,
@@ -43,17 +44,32 @@ pub struct RemoteTable<S: HttpSend = Sender> {
#[allow(dead_code)]
client: RestfulLanceDbClient<S>,
name: String,
version: RwLock<Option<u64>>,
}
impl<S: HttpSend> RemoteTable<S> {
pub fn new(client: RestfulLanceDbClient<S>, name: String) -> Self {
Self { client, name }
Self {
client,
name,
version: RwLock::new(None),
}
}
async fn describe(&self) -> Result<TableDescription> {
let request = self
let version = self.current_version().await;
self.describe_version(version).await
}
async fn describe_version(&self, version: Option<u64>) -> Result<TableDescription> {
let mut request = self
.client
.post(&format!("/v1/table/{}/describe/", self.name));
let body = serde_json::json!({ "version": version });
request = request.json(&body);
let (request_id, response) = self.client.send(request, true).await?;
let response = self.check_table_response(&request_id, response).await?;
@@ -196,6 +212,7 @@ impl<S: HttpSend> RemoteTable<S> {
body["prefilter"] = query.base.prefilter.into();
body["distance_type"] = serde_json::json!(query.distance_type.unwrap_or_default());
body["nprobes"] = query.nprobes.into();
body["ef"] = query.ef.into();
body["refine_factor"] = query.refine_factor.into();
if let Some(vector_column) = query.column.as_ref() {
body["vector_column"] = serde_json::Value::String(vector_column.clone());
@@ -250,6 +267,24 @@ impl<S: HttpSend> RemoteTable<S> {
}
}
}
async fn check_mutable(&self) -> Result<()> {
let read_guard = self.version.read().await;
match *read_guard {
None => Ok(()),
Some(version) => Err(Error::NotSupported {
message: format!(
"Cannot mutate table reference fixed at version {}. Call checkout_latest() to get a mutable table reference.",
version
)
})
}
}
async fn current_version(&self) -> Option<u64> {
let read_guard = self.version.read().await;
*read_guard
}
}
#[derive(Deserialize)]
@@ -277,7 +312,11 @@ mod test_utils {
T: Into<reqwest::Body>,
{
let client = client_with_handler(handler);
Self { client, name }
Self {
client,
name,
version: RwLock::new(None),
}
}
}
}
@@ -296,21 +335,62 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
async fn version(&self) -> Result<u64> {
self.describe().await.map(|desc| desc.version)
}
async fn checkout(&self, _version: u64) -> Result<()> {
Err(Error::NotSupported {
message: "checkout is not supported on LanceDB cloud.".into(),
})
async fn checkout(&self, version: u64) -> Result<()> {
// check that the version exists
self.describe_version(Some(version))
.await
.map_err(|e| match e {
// try to map the error to a more user-friendly error telling them
// specifically that the version does not exist
Error::TableNotFound { name } => Error::TableNotFound {
name: format!("{} (version: {})", name, version),
},
e => e,
})?;
let mut write_guard = self.version.write().await;
*write_guard = Some(version);
Ok(())
}
async fn checkout_latest(&self) -> Result<()> {
Err(Error::NotSupported {
message: "checkout is not supported on LanceDB cloud.".into(),
})
let mut write_guard = self.version.write().await;
*write_guard = None;
Ok(())
}
async fn restore(&self) -> Result<()> {
self.check_mutable().await?;
Err(Error::NotSupported {
message: "restore is not supported on LanceDB cloud.".into(),
})
}
async fn list_versions(&self) -> Result<Vec<Version>> {
let request = self
.client
.post(&format!("/v1/table/{}/version/list/", self.name));
let (request_id, response) = self.client.send(request, true).await?;
let response = self.check_table_response(&request_id, response).await?;
#[derive(Deserialize)]
struct ListVersionsResponse {
versions: Vec<Version>,
}
let body = response.text().await.err_to_http(request_id.clone())?;
let body: ListVersionsResponse =
serde_json::from_str(&body).map_err(|err| Error::Http {
source: format!(
"Failed to parse list_versions response: {}, body: {}",
err, body
)
.into(),
request_id,
status_code: None,
})?;
Ok(body.versions)
}
async fn schema(&self) -> Result<SchemaRef> {
let schema = self.describe().await?.schema;
Ok(Arc::new(schema.try_into()?))
@@ -320,10 +400,13 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
.client
.post(&format!("/v1/table/{}/count_rows/", self.name));
let version = self.current_version().await;
if let Some(filter) = filter {
request = request.json(&serde_json::json!({ "predicate": filter }));
request = request.json(&serde_json::json!({ "predicate": filter, "version": version }));
} else {
request = request.json(&serde_json::json!({}));
let body = serde_json::json!({ "version": version });
request = request.json(&body);
}
let (request_id, response) = self.client.send(request, true).await?;
@@ -343,6 +426,7 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
add: AddDataBuilder<NoData>,
data: Box<dyn RecordBatchReader + Send>,
) -> Result<()> {
self.check_mutable().await?;
let body = Self::reader_as_body(data)?;
let mut request = self
.client
@@ -371,7 +455,8 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
) -> Result<Arc<dyn ExecutionPlan>> {
let request = self.client.post(&format!("/v1/table/{}/query/", self.name));
let body = serde_json::Value::Object(Default::default());
let version = self.current_version().await;
let body = serde_json::json!({ "version": version });
let bodies = Self::apply_vector_query_params(body, query)?;
let mut futures = Vec::with_capacity(bodies.len());
@@ -406,7 +491,8 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
.post(&format!("/v1/table/{}/query/", self.name))
.header(CONTENT_TYPE, JSON_CONTENT_TYPE);
let mut body = serde_json::Value::Object(Default::default());
let version = self.current_version().await;
let mut body = serde_json::json!({ "version": version });
Self::apply_query_params(&mut body, query)?;
// Empty vector can be passed if no vector search is performed.
body["vector"] = serde_json::Value::Array(Vec::new());
@@ -420,6 +506,7 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
Ok(DatasetRecordBatchStream::new(stream))
}
async fn update(&self, update: UpdateBuilder) -> Result<u64> {
self.check_mutable().await?;
let request = self
.client
.post(&format!("/v1/table/{}/update/", self.name));
@@ -441,6 +528,7 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
Ok(0) // TODO: support returning number of modified rows once supported in SaaS.
}
async fn delete(&self, predicate: &str) -> Result<()> {
self.check_mutable().await?;
let body = serde_json::json!({ "predicate": predicate });
let request = self
.client
@@ -452,6 +540,7 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
}
async fn create_index(&self, mut index: IndexBuilder) -> Result<()> {
self.check_mutable().await?;
let request = self
.client
.post(&format!("/v1/table/{}/create_index/", self.name));
@@ -530,6 +619,7 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
params: MergeInsertBuilder,
new_data: Box<dyn RecordBatchReader + Send>,
) -> Result<()> {
self.check_mutable().await?;
let query = MergeInsertRequest::try_from(params)?;
let body = Self::reader_as_body(new_data)?;
let request = self
@@ -546,6 +636,7 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
Ok(())
}
async fn optimize(&self, _action: OptimizeAction) -> Result<OptimizeStats> {
self.check_mutable().await?;
Err(Error::NotSupported {
message: "optimize is not supported on LanceDB cloud.".into(),
})
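// A minimal sketch of `check_mutable`, inferred from the tests further down:
// once a table is checked out to a fixed version, every mutating call fails
// fast with `Error::NotSupported` before any request is sent.
async fn check_mutable(&self) -> Result<()> {
    if self.current_version().await.is_some() {
        return Err(Error::NotSupported {
            message: "table is checked out to a historical version; \
                      call checkout_latest() to restore write access"
                .into(),
        });
    }
    Ok(())
}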
@@ -555,16 +646,19 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
_transforms: NewColumnTransform,
_read_columns: Option<Vec<String>>,
) -> Result<()> {
self.check_mutable().await?;
Err(Error::NotSupported {
message: "add_columns is not yet supported.".into(),
})
}
async fn alter_columns(&self, _alterations: &[ColumnAlteration]) -> Result<()> {
self.check_mutable().await?;
Err(Error::NotSupported {
message: "alter_columns is not yet supported.".into(),
})
}
async fn drop_columns(&self, _columns: &[&str]) -> Result<()> {
self.check_mutable().await?;
Err(Error::NotSupported {
message: "drop_columns is not yet supported.".into(),
})
@@ -572,9 +666,13 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
async fn list_indices(&self) -> Result<Vec<IndexConfig>> {
// Make request to list the indices
- let request = self
+ let mut request = self
.client
.post(&format!("/v1/table/{}/index/list/", self.name));
+ let version = self.current_version().await;
+ let body = serde_json::json!({ "version": version });
+ request = request.json(&body);
let (request_id, response) = self.client.send(request, true).await?;
let response = self.check_table_response(&request_id, response).await?;
@@ -624,10 +722,14 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
}
async fn index_stats(&self, index_name: &str) -> Result<Option<IndexStatistics>> {
- let request = self.client.post(&format!(
+ let mut request = self.client.post(&format!(
"/v1/table/{}/index/{}/stats/",
self.name, index_name
));
+ let version = self.current_version().await;
+ let body = serde_json::json!({ "version": version });
+ request = request.json(&body);
let (request_id, response) = self.client.send(request, true).await?;
if response.status() == StatusCode::NOT_FOUND {
@@ -701,6 +803,7 @@ mod tests {
use arrow::{array::AsArray, compute::concat_batches, datatypes::Int32Type};
use arrow_array::{Int32Array, RecordBatch, RecordBatchIterator};
use arrow_schema::{DataType, Field, Schema};
use chrono::{DateTime, Utc};
use futures::{future::BoxFuture, StreamExt, TryFutureExt};
use lance_index::scalar::FullTextSearchQuery;
use reqwest::Body;
@@ -805,7 +908,10 @@ mod tests {
request.headers().get("Content-Type").unwrap(),
JSON_CONTENT_TYPE
);
- assert_eq!(request.body().unwrap().as_bytes().unwrap(), br#"{}"#);
+ assert_eq!(
+     request.body().unwrap().as_bytes().unwrap(),
+     br#"{"version":null}"#
+ );
http::Response::builder().status(200).body("42").unwrap()
});
@@ -822,7 +928,7 @@ mod tests {
);
assert_eq!(
request.body().unwrap().as_bytes().unwrap(),
br#"{"predicate":"a > 10"}"#
br#"{"predicate":"a > 10","version":null}"#
);
http::Response::builder().status(200).body("42").unwrap()
@@ -1121,7 +1227,10 @@ mod tests {
"prefilter": true,
"distance_type": "l2",
"nprobes": 20,
"k": 10,
"ef": Option::<usize>::None,
"refine_factor": null,
"version": null,
});
// Pass vector separately to make sure it matches f32 precision.
expected_body["vector"] = vec![0.1f32, 0.2, 0.3].into();
@@ -1166,7 +1275,9 @@ mod tests {
"bypass_vector_index": true,
"columns": ["a", "b"],
"nprobes": 12,
"ef": Option::<usize>::None,
"refine_factor": 2,
"version": null,
});
// Pass vector separately to make sure it matches f32 precision.
expected_body["vector"] = vec![0.1f32, 0.2, 0.3].into();
@@ -1222,6 +1333,7 @@ mod tests {
"k": 10,
"vector": [],
"with_row_id": true,
"version": null
});
assert_eq!(body, expected_body);
@@ -1407,6 +1519,51 @@ mod tests {
assert_eq!(indices, expected);
}
#[tokio::test]
async fn test_list_versions() {
let table = Table::new_with_handler("my_table", |request| {
assert_eq!(request.method(), "POST");
assert_eq!(request.url().path(), "/v1/table/my_table/version/list/");
let version1 = lance::dataset::Version {
version: 1,
timestamp: "2024-01-01T00:00:00Z".parse().unwrap(),
metadata: Default::default(),
};
let version2 = lance::dataset::Version {
version: 2,
timestamp: "2024-02-01T00:00:00Z".parse().unwrap(),
metadata: Default::default(),
};
let response_body = serde_json::json!({
"versions": [
version1,
version2,
]
});
let response_body = serde_json::to_string(&response_body).unwrap();
http::Response::builder()
.status(200)
.body(response_body)
.unwrap()
});
let versions = table.list_versions().await.unwrap();
assert_eq!(versions.len(), 2);
assert_eq!(versions[0].version, 1);
assert_eq!(
versions[0].timestamp,
"2024-01-01T00:00:00Z".parse::<DateTime<Utc>>().unwrap()
);
assert_eq!(versions[1].version, 2);
assert_eq!(
versions[1].timestamp,
"2024-02-01T00:00:00Z".parse::<DateTime<Utc>>().unwrap()
);
}
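// For reference, the JSON body the handler above returns, assuming lance's
// `Version` keeps its RFC 3339 timestamp serialization:
//
// {"versions": [
//   {"version": 1, "timestamp": "2024-01-01T00:00:00Z", "metadata": {}},
//   {"version": 2, "timestamp": "2024-02-01T00:00:00Z", "metadata": {}}
// ]}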
#[tokio::test]
async fn test_index_stats() {
let table = Table::new_with_handler("my_table", |request| {
@@ -1451,4 +1608,195 @@ mod tests {
let indices = table.index_stats("my_index").await.unwrap();
assert!(indices.is_none());
}
#[tokio::test]
async fn test_passes_version() {
let table = Table::new_with_handler("my_table", |request| {
let body = request.body().unwrap().as_bytes().unwrap();
let body: serde_json::Value = serde_json::from_slice(body).unwrap();
let version = body
.as_object()
.unwrap()
.get("version")
.unwrap()
.as_u64()
.unwrap();
assert_eq!(version, 42);
let response_body = match request.url().path() {
"/v1/table/my_table/describe/" => {
serde_json::json!({
"version": 42,
"schema": { "fields": [] }
})
}
"/v1/table/my_table/index/list/" => {
serde_json::json!({
"indexes": []
})
}
"/v1/table/my_table/index/my_idx/stats/" => {
serde_json::json!({
"num_indexed_rows": 100000,
"num_unindexed_rows": 0,
"index_type": "IVF_PQ",
"distance_type": "l2"
})
}
"/v1/table/my_table/count_rows/" => {
serde_json::json!(1000)
}
"/v1/table/my_table/query/" => {
let expected_data = RecordBatch::try_new(
Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)])),
vec![Arc::new(Int32Array::from(vec![1, 2, 3]))],
)
.unwrap();
let expected_data_ref = expected_data.clone();
let response_body = write_ipc_file(&expected_data_ref);
return http::Response::builder()
.status(200)
.header(CONTENT_TYPE, ARROW_FILE_CONTENT_TYPE)
.body(response_body)
.unwrap();
}
path => panic!("Unexpected path: {}", path),
};
http::Response::builder()
.status(200)
.body(
serde_json::to_string(&response_body)
.unwrap()
.as_bytes()
.to_vec(),
)
.unwrap()
});
table.checkout(42).await.unwrap();
// ensure that version is passed to the /describe endpoint
let version = table.version().await.unwrap();
assert_eq!(version, 42);
// ensure it's passed to other read API calls
table.list_indices().await.unwrap();
table.index_stats("my_idx").await.unwrap();
table.count_rows(None).await.unwrap();
table
.query()
.nearest_to(vec![0.1, 0.2, 0.3])
.unwrap()
.execute()
.await
.unwrap();
}
#[tokio::test]
async fn test_fails_if_checkout_version_doesnt_exist() {
let table = Table::new_with_handler("my_table", |request| {
let body = request.body().unwrap().as_bytes().unwrap();
let body: serde_json::Value = serde_json::from_slice(body).unwrap();
let version = body
.as_object()
.unwrap()
.get("version")
.unwrap()
.as_u64()
.unwrap();
if version != 42 {
return http::Response::builder()
.status(404)
.body(format!("Table my_table (version: {}) not found", version))
.unwrap();
}
let response_body = match request.url().path() {
"/v1/table/my_table/describe/" => {
serde_json::json!({
"version": 42,
"schema": { "fields": [] }
})
}
_ => panic!("Unexpected path"),
};
http::Response::builder()
.status(200)
.body(serde_json::to_string(&response_body).unwrap())
.unwrap()
});
let res = table.checkout(43).await;
println!("{:?}", res);
assert!(
matches!(res, Err(Error::TableNotFound { name }) if name == "my_table (version: 43)")
);
}
#[tokio::test]
async fn test_timetravel_immutable() {
let table = Table::new_with_handler::<String>("my_table", |request| {
let response_body = match request.url().path() {
"/v1/table/my_table/describe/" => {
serde_json::json!({
"version": 42,
"schema": { "fields": [] }
})
}
_ => panic!("Should not have made a request: {:?}", request),
};
http::Response::builder()
.status(200)
.body(serde_json::to_string(&response_body).unwrap())
.unwrap()
});
table.checkout(42).await.unwrap();
// Ensure that all mutable operations fail.
let res = table
.update()
.column("a", "a + 1")
.column("b", "b - 1")
.only_if("b > 10")
.execute()
.await;
assert!(matches!(res, Err(Error::NotSupported { .. })));
let batch = RecordBatch::try_new(
Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)])),
vec![Arc::new(Int32Array::from(vec![1, 2, 3]))],
)
.unwrap();
let data = Box::new(RecordBatchIterator::new(
[Ok(batch.clone())],
batch.schema(),
));
let res = table.merge_insert(&["some_col"]).execute(data).await;
assert!(matches!(res, Err(Error::NotSupported { .. })));
let res = table.delete("id in (1, 2, 3)").await;
assert!(matches!(res, Err(Error::NotSupported { .. })));
let data = RecordBatch::try_new(
Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)])),
vec![Arc::new(Int32Array::from(vec![1, 2, 3]))],
)
.unwrap();
let res = table
.add(RecordBatchIterator::new([Ok(data.clone())], data.schema()))
.execute()
.await;
assert!(matches!(res, Err(Error::NotSupported { .. })));
let res = table
.create_index(&["a"], Index::IvfPq(Default::default()))
.execute()
.await;
assert!(matches!(res, Err(Error::NotSupported { .. })));
}
}


@@ -37,7 +37,7 @@ pub use lance::dataset::ColumnAlteration;
pub use lance::dataset::NewColumnTransform;
pub use lance::dataset::ReadParams;
use lance::dataset::{
- Dataset, UpdateBuilder as LanceUpdateBuilder, WhenMatched, WriteMode, WriteParams,
+ Dataset, UpdateBuilder as LanceUpdateBuilder, Version, WhenMatched, WriteMode, WriteParams,
};
use lance::dataset::{MergeInsertBuilder as LanceMergeInsertBuilder, WhenNotMatchedBySource};
use lance::io::WrappingObjectStore;
@@ -426,6 +426,7 @@ pub(crate) trait TableInternal: std::fmt::Display + std::fmt::Debug + Send + Syn
async fn checkout(&self, version: u64) -> Result<()>;
async fn checkout_latest(&self) -> Result<()>;
async fn restore(&self) -> Result<()>;
async fn list_versions(&self) -> Result<Vec<Version>>;
async fn table_definition(&self) -> Result<TableDefinition>;
fn dataset_uri(&self) -> &str;
}
@@ -955,6 +956,11 @@ impl Table {
self.inner.restore().await
}
/// List all the versions of the table
pub async fn list_versions(&self) -> Result<Vec<Version>> {
self.inner.list_versions().await
}
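// An illustrative usage example of the time-travel surface wired up here
// (connection setup elided; `table` is an already-open `Table`):
//
// let versions = table.list_versions().await?;
// table.checkout(versions[0].version).await?; // table is now read-only
// let n = table.count_rows(None).await?;      // counted at that version
// table.checkout_latest().await?;             // back to latest, writable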
/// List all indices that have been created with [`Self::create_index`]
pub async fn list_indices(&self) -> Result<Vec<IndexConfig>> {
self.inner.list_indices().await
@@ -1319,7 +1325,7 @@ impl NativeTable {
let (indices, mf) = futures::try_join!(dataset.load_indices(), dataset.latest_manifest())?;
Ok(indices
.iter()
- .map(|i| VectorIndex::new_from_format(&mf, i))
+ .map(|i| VectorIndex::new_from_format(&(mf.0), i))
.collect())
}
@@ -1707,6 +1713,10 @@ impl TableInternal for NativeTable {
self.dataset.reload().await
}
async fn list_versions(&self) -> Result<Vec<Version>> {
Ok(self.dataset.get().await?.versions().await?)
}
async fn restore(&self) -> Result<()> {
let version =
self.dataset
@@ -1904,6 +1914,9 @@ impl TableInternal for NativeTable {
query.base.offset.map(|offset| offset as i64),
)?;
scanner.nprobs(query.nprobes);
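// `ef` is the HNSW search-list size: larger values improve recall at the
// cost of latency; it is only forwarded when the query sets it.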
if let Some(ef) = query.ef {
scanner.ef(ef);
}
scanner.use_index(query.use_index);
scanner.prefilter(query.base.prefilter);
match query.base.select {