Compare commits

..

26 Commits

Author SHA1 Message Date
Lance Release
66a881b33a Bump version: 0.16.0-beta.2 → 0.16.0 2024-11-15 20:17:34 +00:00
Lance Release
a7515d6ee2 Bump version: 0.16.0-beta.1 → 0.16.0-beta.2 2024-11-15 20:17:34 +00:00
Will Jones
587c0824af feat: flexible null handling and insert subschemas in Python (#1827)
* Test that we can insert subschemas (omit nullable columns) in Python.
* More work is needed to support this in Node. See:
https://github.com/lancedb/lancedb/issues/1832
* Test that we can insert data that has a nullable schema but contains no
nulls into a table whose schema is non-nullable.
* Add a `"null"` option for `on_bad_vectors`, which fills the vector with
null when it is bad (see the sketch below).
* Null values are no longer considered bad if the field itself is nullable.
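For illustration, a minimal sketch of the new option in the Python sync API (table name and data are made up; a nullable vector column is assumed):

```python
import lancedb

db = lancedb.connect("./demo")
# With on_bad_vectors="null", a bad vector (here: one containing NaN) is
# stored as null instead of raising an error, since the column is nullable.
table = db.create_table(
    "items",
    [
        {"id": 1, "vector": [1.0, float("nan")]},
        {"id": 2, "vector": [3.0, 4.0]},
    ],
    on_bad_vectors="null",
)
print(table.to_pandas())
```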
2024-11-15 11:33:00 -08:00
Will Jones
b38a4269d0 fix(node): make openai and huggingface optional dependencies (#1809)
BREAKING CHANGE: openai and huggingface now have separate entrypoints.

Closes [#1624](https://github.com/lancedb/lancedb/issues/1624)
2024-11-14 15:04:35 -08:00
Will Jones
119d88b9db ci: disable Windows Arm64 until the release builds work (#1833)
Started to actually fix this, but it was taking too long:
https://github.com/lancedb/lancedb/pull/1831
2024-11-14 15:04:23 -08:00
StevenSu
74f660d223 feat: add new feature, add amazon bedrock embedding function (#1788)
Add an Amazon Bedrock embedding function to the Rust SDK.

1. Add `BedrockEmbeddingModel` (`lancedb/src/embeddings/bedrock.rs`)
2. Add example `lancedb/examples/bedrock.rs`
2024-11-14 11:04:59 -08:00
Lance Release
b2b0979b90 Updating package-lock.json 2024-11-14 04:42:38 +00:00
Lance Release
ee2a40b182 Bump version: 0.13.0-beta.1 → 0.13.0-beta.2 2024-11-14 04:42:19 +00:00
Lance Release
4ca0b15354 Bump version: 0.16.0-beta.0 → 0.16.0-beta.1 2024-11-14 04:41:56 +00:00
Rob Meng
d8c217b47d chore: bump lance to 0.19.2 (#1829) 2024-11-13 23:23:02 -05:00
Rob Meng
b724b1a01f feat: support remote empty query (#1828)
Support sending empty query types to remote LanceDB. Also include offset
and limit, which were previously omitted.
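A hedged sketch of the kind of query this enables, assuming the async Python client (exact builder method names are taken from the async query API and should be treated as assumptions):

```python
# Plain scan against a remote table: no vector or FTS query attached,
# just offset and limit.
rows = await table.query().offset(5).limit(10).to_list()
```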
2024-11-13 23:04:52 -05:00
Will Jones
abd75e0ead feat: search multiple query vectors as one query (#1811)
Allows users to pass multiple query vectors as part of a single query
plan. This just runs the queries in parallel without any further
optimization. It's mostly a convenience.

Previously, I think this was only handled by the sync Python remote API.
This makes it common across all SDKs.

Closes https://github.com/lancedb/lancedb/issues/1803

```python
>>> import lancedb
>>> import asyncio
>>> 
>>> async def main():
...     db = await lancedb.connect_async("./demo")
...     table = await db.create_table("demo", [{"id": 1, "vector": [1, 2, 3]}, {"id": 2, "vector": [4, 5, 6]}], mode="overwrite")
...     return await table.query().nearest_to([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [4.0, 5.0, 6.0]]).limit(1).to_pandas()
... 
>>> asyncio.run(main())
   query_index  id           vector  _distance
0            2   2  [4.0, 5.0, 6.0]        0.0
1            1   2  [4.0, 5.0, 6.0]        0.0
2            0   1  [1.0, 2.0, 3.0]        0.0
```
2024-11-13 16:05:16 -08:00
Will Jones
0fd8a50bd7 ci(node): run examples in CI (#1796)
This is done as setup for a PR that will fix the OpenAI dependency
issue.

 * [x] FTS examples
 * [x] Setup mock openai
 * [x] Ran `npm audit fix`
 * [x] sentences embeddings test
 * [x] Double check formatting of docs examples
2024-11-13 11:10:56 -08:00
Umut Hope YILDIRIM
9f228feb0e ci: remove cache to fix build issues on windows arm runner (#1820) 2024-11-13 09:27:10 -08:00
Ayush Chaurasia
90e9c52d0a docs: update hybrid search example to latest langchain (#1824)
Co-authored-by: qzhu <qian@lancedb.com>
2024-11-12 20:06:25 -08:00
Will Jones
68974a4e06 ci: add index URL to fix failing docs build (#1823) 2024-11-12 16:54:22 -08:00
Lei Xu
4c9bab0d92 fix: use pandas with pydantic embedding column (#1818)
* Make Pandas `DataFrame` work with an embedding function applied to a
subset of columns
* Make `lancedb.create_table()` work with an embedding function
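A minimal sketch of the fixed behavior, reusing the `Words` model pattern from the embedding docs (names are illustrative): the DataFrame supplies only the source column and the embedding function fills in the vector.

```python
import pandas as pd
import lancedb
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector

func = get_registry().get("openai").create()

class Words(LanceModel):
    text: str = func.SourceField()
    vector: Vector(func.ndims()) = func.VectorField()

db = lancedb.connect("./demo")
# The DataFrame has only the "text" column; the vector column is computed
# by the embedding function declared on the schema.
df = pd.DataFrame({"text": ["hello world", "goodbye world"]})
table = db.create_table("words", data=df, schema=Words)
```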
2024-11-11 14:48:56 -08:00
QianZhu
5117aecc38 docs: search param explanation for OSS doc (#1815)
![Screenshot 2024-11-09 at 11.09.14 AM](https://github.com/user-attachments/assets/2aeba016-aeff-4658-85c6-8640285ba0c9)
2024-11-11 11:57:17 -08:00
Umut Hope YILDIRIM
729718cb09 fix: arm64 runner proto already installed bug (#1810)
https://github.com/lancedb/lancedb/actions/runs/11748512661/job/32732745458
2024-11-08 14:49:37 -08:00
Umut Hope YILDIRIM
b1c84e0bda feat: added lancedb and vectordb release ci for win32-arm64-msvc npmjs only (#1805) 2024-11-08 11:40:57 -08:00
fzowl
cbbc07d0f5 feat: voyageai support (#1799)
Adding VoyageAI embedding and rerank support
2024-11-09 00:51:20 +05:30
Kursat Aktas
21021f94ca docs: introducing LanceDB Guru on Gurubase.io (#1797)
Hello team,

I'm the maintainer of [Anteon](https://github.com/getanteon/anteon). We
have created Gurubase.io with the mission of building a centralized,
open-source tool-focused knowledge base. Essentially, each "guru" is
equipped with custom knowledge to answer user questions based on
collected data related to that tool.

I wanted to update you that I've manually added the [LanceDB
Guru](https://gurubase.io/g/lancedb) to Gurubase. LanceDB Guru uses the
data from this repo and data from the
[docs](https://lancedb.github.io/lancedb/) to answer questions by
leveraging the LLM.

In this PR, I showcased the "LanceDB Guru", which highlights that
LanceDB now has an AI assistant available to help users with their
questions. Please let me know your thoughts on this contribution.

Additionally, if you want me to disable LanceDB Guru in Gurubase, just
let me know that's totally fine.

Signed-off-by: Kursat Aktas <kursat.ce@gmail.com>
2024-11-08 10:55:22 -08:00
BubbleCal
0ed77fa990 chore: impl Debug & Clone for Index params (#1808)
We don't really need these traits in lancedb, but all fields in `Index`
implement the two traits, so derive them to make it possible to use `Index`
elsewhere.

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-09 01:07:43 +08:00
BubbleCal
4372c231cd feat: support optimize indices in sync API (#1769)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
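A minimal sketch, assuming the sync `Table` now exposes an `optimize()` mirroring the async API (the method name is taken from the async side; treat it as an assumption):

```python
table = db.open_table("test")
# Compact small fragments and bring indices up to date with new data.
table.optimize()
```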
2024-11-08 08:48:07 -08:00
Umut Hope YILDIRIM
fa9ca8f7a6 ci: arm64 windows build support (#1770)
Adds support for 'aarch64-pc-windows-msvc'.
2024-11-06 15:34:23 -08:00
Lance Release
2a35d24ee6 Updating package-lock.json 2024-11-06 17:26:36 +00:00
95 changed files with 9556 additions and 2265 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.13.0-beta.1"
current_version = "0.13.0-beta.2"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.
@@ -92,6 +92,11 @@ glob = "node/package.json"
replace = "\"@lancedb/vectordb-win32-x64-msvc\": \"{new_version}\""
search = "\"@lancedb/vectordb-win32-x64-msvc\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-win32-arm64-msvc\": \"{new_version}\""
search = "\"@lancedb/vectordb-win32-arm64-msvc\": \"{current_version}\""
# Cargo files
# ------------
[[tool.bumpversion.files]]

View File

@@ -38,3 +38,7 @@ rustflags = ["-C", "target-cpu=apple-m1", "-C", "target-feature=+neon,+fp16,+fhm
# not found errors on systems that are missing it.
[target.x86_64-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"]
# Experimental target for Arm64 Windows
[target.aarch64-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"]

View File

@@ -31,7 +31,7 @@ jobs:
- name: Install dependecies needed for ubuntu
run: |
sudo apt install -y protobuf-compiler libssl-dev
rustup update && rustup default
rustup update && rustup default
- name: Set up Python
uses: actions/setup-python@v5
with:
@@ -41,8 +41,8 @@ jobs:
- name: Build Python
working-directory: python
run: |
python -m pip install -e .
python -m pip install -r ../docs/requirements.txt
python -m pip install --extra-index-url https://pypi.fury.io/lancedb/ -e .
python -m pip install --extra-index-url https://pypi.fury.io/lancedb/ -r ../docs/requirements.txt
- name: Set up node
uses: actions/setup-node@v3
with:

View File

@@ -49,7 +49,7 @@ jobs:
- name: Build Python
working-directory: docs/test
run:
python -m pip install -r requirements.txt
python -m pip install --extra-index-url https://pypi.fury.io/lancedb/ -r requirements.txt
- name: Create test files
run: |
cd docs/test

View File

@@ -53,6 +53,9 @@ jobs:
cargo clippy --all --all-features -- -D warnings
npm ci
npm run lint-ci
- name: Lint examples
working-directory: nodejs/examples
run: npm ci && npm run lint-ci
linux:
name: Linux (NodeJS ${{ matrix.node-version }})
timeout-minutes: 30
@@ -91,6 +94,18 @@ jobs:
env:
S3_TEST: "1"
run: npm run test
- name: Setup examples
working-directory: nodejs/examples
run: npm ci
- name: Test examples
working-directory: ./
env:
OPENAI_API_KEY: test
OPENAI_BASE_URL: http://0.0.0.0:8000
run: |
python ci/mock_openai.py &
cd nodejs/examples
npm test
macos:
timeout-minutes: 30
runs-on: "macos-14"

View File

@@ -226,6 +226,110 @@ jobs:
path: |
node/dist/lancedb-vectordb-win32*.tgz
# TODO: re-enable once working https://github.com/lancedb/lancedb/pull/1831
# node-windows-arm64:
# name: vectordb win32-arm64-msvc
# runs-on: windows-4x-arm
# if: startsWith(github.ref, 'refs/tags/v')
# steps:
# - uses: actions/checkout@v4
# - name: Install Git
# run: |
# Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
# Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
# shell: powershell
# - name: Add Git to PATH
# run: |
# Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
# $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# shell: powershell
# - name: Configure Git symlinks
# run: git config --global core.symlinks true
# - uses: actions/checkout@v4
# - uses: actions/setup-python@v5
# with:
# python-version: "3.13"
# - name: Install Visual Studio Build Tools
# run: |
# Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
# Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
# "--installPath", "C:\BuildTools", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
# "--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATL", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
# "--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
# shell: powershell
# - name: Add Visual Studio Build Tools to PATH
# run: |
# $vsPath = "C:\BuildTools\VC\Tools\MSVC"
# $latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
# # Add MSVC runtime libraries to LIB
# $env:LIB = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\lib\arm64;" +
# "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;" +
# "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
# Add-Content $env:GITHUB_ENV "LIB=$env:LIB"
# # Add INCLUDE paths
# $env:INCLUDE = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\include;" +
# "C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt;" +
# "C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\um;" +
# "C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\shared"
# Add-Content $env:GITHUB_ENV "INCLUDE=$env:INCLUDE"
# shell: powershell
# - name: Install Rust
# run: |
# Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
# .\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
# shell: powershell
# - name: Add Rust to PATH
# run: |
# Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
# shell: powershell
# - uses: Swatinem/rust-cache@v2
# with:
# workspaces: rust
# - name: Install 7-Zip ARM
# run: |
# New-Item -Path 'C:\7zip' -ItemType Directory
# Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
# Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
# shell: powershell
# - name: Add 7-Zip to PATH
# run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
# shell: powershell
# - name: Install Protoc v21.12
# working-directory: C:\
# run: |
# if (Test-Path 'C:\protoc') {
# Write-Host "Protoc directory exists, skipping installation"
# return
# }
# New-Item -Path 'C:\protoc' -ItemType Directory
# Set-Location C:\protoc
# Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
# & 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
# shell: powershell
# - name: Add Protoc to PATH
# run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
# shell: powershell
# - name: Build Windows native node modules
# run: .\ci\build_windows_artifacts.ps1 aarch64-pc-windows-msvc
# - name: Upload Windows ARM64 Artifacts
# uses: actions/upload-artifact@v4
# with:
# name: node-native-windows-arm64
# path: |
# node/dist/*.node
nodejs-windows:
name: lancedb ${{ matrix.target }}
runs-on: windows-2022
@@ -260,9 +364,103 @@ jobs:
path: |
nodejs/dist/*.node
# TODO: re-enable once working https://github.com/lancedb/lancedb/pull/1831
# nodejs-windows-arm64:
# name: lancedb win32-arm64-msvc
# runs-on: windows-4x-arm
# if: startsWith(github.ref, 'refs/tags/v')
# steps:
# - uses: actions/checkout@v4
# - name: Install Git
# run: |
# Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
# Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
# shell: powershell
# - name: Add Git to PATH
# run: |
# Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
# $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# shell: powershell
# - name: Configure Git symlinks
# run: git config --global core.symlinks true
# - uses: actions/checkout@v4
# - uses: actions/setup-python@v5
# with:
# python-version: "3.13"
# - name: Install Visual Studio Build Tools
# run: |
# Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
# Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
# "--installPath", "C:\BuildTools", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
# "--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATL", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
# "--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
# shell: powershell
# - name: Add Visual Studio Build Tools to PATH
# run: |
# $vsPath = "C:\BuildTools\VC\Tools\MSVC"
# $latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
# $env:LIB = ""
# Add-Content $env:GITHUB_ENV "LIB=C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
# shell: powershell
# - name: Install Rust
# run: |
# Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
# .\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
# shell: powershell
# - name: Add Rust to PATH
# run: |
# Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
# shell: powershell
# - uses: Swatinem/rust-cache@v2
# with:
# workspaces: rust
# - name: Install 7-Zip ARM
# run: |
# New-Item -Path 'C:\7zip' -ItemType Directory
# Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
# Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
# shell: powershell
# - name: Add 7-Zip to PATH
# run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
# shell: powershell
# - name: Install Protoc v21.12
# working-directory: C:\
# run: |
# if (Test-Path 'C:\protoc') {
# Write-Host "Protoc directory exists, skipping installation"
# return
# }
# New-Item -Path 'C:\protoc' -ItemType Directory
# Set-Location C:\protoc
# Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
# & 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
# shell: powershell
# - name: Add Protoc to PATH
# run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
# shell: powershell
# - name: Build Windows native node modules
# run: .\ci\build_windows_artifacts_nodejs.ps1 aarch64-pc-windows-msvc
# - name: Upload Windows ARM64 Artifacts
# uses: actions/upload-artifact@v4
# with:
# name: nodejs-native-windows-arm64
# path: |
# nodejs/dist/*.node
release:
name: vectordb NPM Publish
needs: [node, node-macos, node-linux, node-windows]
needs: [node, node-macos, node-linux, node-windows, node-windows-arm64]
runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
@@ -280,7 +478,7 @@ jobs:
env:
NODE_AUTH_TOKEN: ${{ secrets.LANCEDB_NPM_REGISTRY_TOKEN }}
run: |
# Tag beta as "preview" instead of default "latest". See lancedb
# Tag beta as "preview" instead of default "latest". See lancedb
# npm publish step for more info.
if [[ $GITHUB_REF =~ refs/tags/v(.*)-beta.* ]]; then
PUBLISH_ARGS="--tag preview"
@@ -302,7 +500,7 @@ jobs:
release-nodejs:
name: lancedb NPM Publish
needs: [nodejs-macos, nodejs-linux, nodejs-windows]
needs: [nodejs-macos, nodejs-linux, nodejs-windows, nodejs-windows-arm64]
runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')

View File

@@ -138,7 +138,7 @@ jobs:
run: rm -rf target/wheels
windows:
name: "Windows: ${{ matrix.config.name }}"
timeout-minutes: 30
timeout-minutes: 60
strategy:
matrix:
config:

View File

@@ -35,21 +35,22 @@ jobs:
CC: clang-18
CXX: clang++-18
steps:
- uses: actions/checkout@v4
with:
- uses: actions/checkout@v4
with:
fetch-depth: 0
lfs: true
- uses: Swatinem/rust-cache@v2
with:
workspaces: rust
- name: Install dependencies
run: |
- uses: Swatinem/rust-cache@v2
with:
workspaces: rust
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y protobuf-compiler libssl-dev
- name: Run format
run: cargo fmt --all -- --check
- name: Run clippy
run: cargo clippy --workspace --tests --all-features -- -D warnings
- name: Run format
run: cargo fmt --all -- --check
- name: Run clippy
run: cargo clippy --workspace --tests --all-features -- -D warnings
linux:
timeout-minutes: 30
# To build all features, we need more disk space than is available
@@ -65,37 +66,38 @@ jobs:
CC: clang-18
CXX: clang++-18
steps:
- uses: actions/checkout@v4
with:
- uses: actions/checkout@v4
with:
fetch-depth: 0
lfs: true
- uses: Swatinem/rust-cache@v2
with:
- uses: Swatinem/rust-cache@v2
with:
workspaces: rust
- name: Install dependencies
run: |
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y protobuf-compiler libssl-dev
- name: Make Swap
run: |
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
- name: Start S3 integration test environment
working-directory: .
run: docker compose up --detach --wait
- name: Build
run: cargo build --all-features
- name: Run tests
run: cargo test --all-features
- name: Run examples
run: cargo run --example simple
- name: Make Swap
run: |
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
- name: Start S3 integration test environment
working-directory: .
run: docker compose up --detach --wait
- name: Build
run: cargo build --all-features
- name: Run tests
run: cargo test --all-features
- name: Run examples
run: cargo run --example simple
macos:
timeout-minutes: 30
strategy:
matrix:
mac-runner: [ "macos-13", "macos-14" ]
mac-runner: ["macos-13", "macos-14"]
runs-on: "${{ matrix.mac-runner }}"
defaults:
run:
@@ -104,8 +106,8 @@ jobs:
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
lfs: true
fetch-depth: 0
lfs: true
- name: CPU features
run: sysctl -a | grep cpu
- uses: Swatinem/rust-cache@v2
@@ -118,6 +120,7 @@ jobs:
- name: Run tests
# Run with everything except the integration tests.
run: cargo test --features remote,fp16kernels
windows:
runs-on: windows-2022
steps:
@@ -139,3 +142,99 @@ jobs:
$env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT
cargo build
cargo test
windows-arm64:
runs-on: windows-4x-arm
steps:
- name: Install Git
run: |
Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
shell: powershell
- name: Add Git to PATH
run: |
Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
shell: powershell
- name: Configure Git symlinks
run: git config --global core.symlinks true
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.13"
- name: Install Visual Studio Build Tools
run: |
Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
"--installPath", "C:\BuildTools", `
"--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
"--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
"--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
"--add", "Microsoft.VisualStudio.Component.VC.ATL", `
"--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
"--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
shell: powershell
- name: Add Visual Studio Build Tools to PATH
run: |
$vsPath = "C:\BuildTools\VC\Tools\MSVC"
$latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
# Add MSVC runtime libraries to LIB
$env:LIB = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\lib\arm64;" +
"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;" +
"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
Add-Content $env:GITHUB_ENV "LIB=$env:LIB"
# Add INCLUDE paths
$env:INCLUDE = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\include;" +
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt;" +
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\um;" +
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\shared"
Add-Content $env:GITHUB_ENV "INCLUDE=$env:INCLUDE"
shell: powershell
- name: Install Rust
run: |
Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
.\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
shell: powershell
- name: Add Rust to PATH
run: |
Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
shell: powershell
- uses: Swatinem/rust-cache@v2
with:
workspaces: rust
- name: Install 7-Zip ARM
run: |
New-Item -Path 'C:\7zip' -ItemType Directory
Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
shell: powershell
- name: Add 7-Zip to PATH
run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
shell: powershell
- name: Install Protoc v21.12
working-directory: C:\
run: |
if (Test-Path 'C:\protoc') {
Write-Host "Protoc directory exists, skipping installation"
return
}
New-Item -Path 'C:\protoc' -ItemType Directory
Set-Location C:\protoc
Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
& 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
shell: powershell
- name: Add Protoc to PATH
run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
shell: powershell
- name: Run tests
run: |
$env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT
cargo build --target aarch64-pc-windows-msvc
cargo test --target aarch64-pc-windows-msvc

View File

@@ -23,13 +23,13 @@ rust-version = "1.80.0" # TODO: lower this once we upgrade Lance again.
[workspace.dependencies]
lance = { "version" = "=0.19.2", "features" = [
"dynamodb",
], git = "https://github.com/lancedb/lance.git", tag = "v0.19.2-beta.3" }
lance-index = { "version" = "=0.19.2", git = "https://github.com/lancedb/lance.git", tag = "v0.19.2-beta.3" }
lance-linalg = { "version" = "=0.19.2", git = "https://github.com/lancedb/lance.git", tag = "v0.19.2-beta.3" }
lance-table = { "version" = "=0.19.2", git = "https://github.com/lancedb/lance.git", tag = "v0.19.2-beta.3" }
lance-testing = { "version" = "=0.19.2", git = "https://github.com/lancedb/lance.git", tag = "v0.19.2-beta.3" }
lance-datafusion = { "version" = "=0.19.2", git = "https://github.com/lancedb/lance.git", tag = "v0.19.2-beta.3" }
lance-encoding = { "version" = "=0.19.2", git = "https://github.com/lancedb/lance.git", tag = "v0.19.2-beta.3" }
]}
lance-index = "=0.19.2"
lance-linalg = "=0.19.2"
lance-table = "=0.19.2"
lance-testing = "=0.19.2"
lance-datafusion = "=0.19.2"
lance-encoding = "=0.19.2"
# Note that this one does not include pyarrow
arrow = { version = "52.2", optional = false }
arrow-array = "52.2"

View File

@@ -10,6 +10,7 @@
[![Blog](https://img.shields.io/badge/Blog-12100E?style=for-the-badge&logoColor=white)](https://blog.lancedb.com/)
[![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/zMM32dvNtd)
[![Twitter](https://img.shields.io/badge/Twitter-%231DA1F2.svg?style=for-the-badge&logo=Twitter&logoColor=white)](https://twitter.com/lancedb)
[![Gurubase](https://img.shields.io/badge/Gurubase-Ask%20LanceDB%20Guru-006BFF?style=for-the-badge)](https://gurubase.io/g/lancedb)
</p>

View File

@@ -3,6 +3,7 @@
# Targets supported:
# - x86_64-pc-windows-msvc
# - i686-pc-windows-msvc
# - aarch64-pc-windows-msvc
function Prebuild-Rust {
param (
@@ -31,7 +32,7 @@ function Build-NodeBinaries {
$targets = $args[0]
if (-not $targets) {
$targets = "x86_64-pc-windows-msvc"
$targets = "x86_64-pc-windows-msvc", "aarch64-pc-windows-msvc"
}
Write-Host "Building artifacts for targets: $targets"

View File

@@ -3,6 +3,7 @@
# Targets supported:
# - x86_64-pc-windows-msvc
# - i686-pc-windows-msvc
# - aarch64-pc-windows-msvc
function Prebuild-Rust {
param (
@@ -31,7 +32,7 @@ function Build-NodeBinaries {
$targets = $args[0]
if (-not $targets) {
$targets = "x86_64-pc-windows-msvc"
$targets = "x86_64-pc-windows-msvc", "aarch64-pc-windows-msvc"
}
Write-Host "Building artifacts for targets: $targets"

ci/mock_openai.py (new file, 57 lines)
View File

@@ -0,0 +1,57 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
"""A zero-dependency mock OpenAI embeddings API endpoint for testing purposes."""
import argparse
import json
import http.server


class MockOpenAIRequestHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        content_length = int(self.headers["Content-Length"])
        post_data = self.rfile.read(content_length)
        post_data = json.loads(post_data.decode("utf-8"))

        # See: https://platform.openai.com/docs/api-reference/embeddings/create
        if isinstance(post_data["input"], str):
            num_inputs = 1
        else:
            num_inputs = len(post_data["input"])
        model = post_data.get("model", "text-embedding-ada-002")

        data = []
        for i in range(num_inputs):
            data.append({
                "object": "embedding",
                "embedding": [0.1] * 1536,
                "index": i,
            })
        response = {
            "object": "list",
            "data": data,
            "model": model,
            "usage": {
                "prompt_tokens": 0,
                "total_tokens": 0,
            },
        }

        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(response).encode("utf-8"))


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Mock OpenAI embeddings API endpoint")
    parser.add_argument("--port", type=int, default=8000, help="Port to listen on")
    args = parser.parse_args()
    port = args.port
    print(f"server started on port {port}. Press Ctrl-C to stop.")
    print(f"To use, set OPENAI_BASE_URL=http://localhost:{port} in your environment.")
    with http.server.HTTPServer(("0.0.0.0", port), MockOpenAIRequestHandler) as server:
        server.serve_forever()
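A hedged sketch of exercising the mock from Python with the official `openai` client (the wiring below is an assumption, not part of this diff; the mock ignores the request path and model and always returns 1536-dimensional vectors):

```python
from openai import OpenAI

# Point the client at a locally running `python ci/mock_openai.py`.
client = OpenAI(api_key="test", base_url="http://localhost:8000")
resp = client.embeddings.create(model="text-embedding-ada-002", input="hello")
print(len(resp.data[0].embedding))  # 1536
```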

View File

@@ -45,9 +45,9 @@ Lance supports `IVF_PQ` index type by default.
Creating indexes is done via the [lancedb.Table.createIndex](../js/classes/Table.md/#createIndex) method.
```typescript
--8<--- "nodejs/examples/ann_indexes.ts:import"
--8<--- "nodejs/examples/ann_indexes.test.ts:import"
--8<-- "nodejs/examples/ann_indexes.ts:ingest"
--8<-- "nodejs/examples/ann_indexes.test.ts:ingest"
```
=== "vectordb (deprecated)"
@@ -140,13 +140,15 @@ There are a couple of parameters that can be used to fine-tune the search:
- **limit** (default: 10): The amount of results that will be returned
- **nprobes** (default: 20): The number of probes used. A higher number makes search more accurate but also slower.<br/>
Most of the time, setting nprobes to cover 5-10% of the dataset should achieve high recall with low latency.<br/>
e.g., for 1M vectors divided up into 256 partitions, nprobes should be set to ~20-40.<br/>
Note: nprobes is only applicable if an ANN index is present. If specified on a table without an ANN index, it is ignored.
Most of the time, setting nprobes to cover 5-15% of the dataset should achieve high recall with low latency.<br/>
- _For example_, for a dataset of 1 million vectors divided into 256 partitions, `nprobes` should be set to ~20-40. This value can be adjusted to achieve the optimal balance between search latency and search quality. <br/>
- **refine_factor** (default: None): Refine the results by reading extra elements and re-ranking them in memory.<br/>
A higher number makes search more accurate but also slower. If you find the recall is less than ideal, try refine_factor=10 to start.<br/>
e.g., for 1M vectors divided into 256 partitions, if you're looking for top 20, then refine_factor=200 reranks the whole partition.<br/>
Note: refine_factor is only applicable if an ANN index is present. If specified on a table without an ANN index, it is ignored.
- _For example_, for a dataset of 1 million vectors divided into 256 partitions, setting the `refine_factor` to 200 will initially retrieve the top 4,000 candidates (top k * refine_factor) from all searched partitions. These candidates are then reranked to determine the final top 20 results.<br/>
!!! note
Both `nprobes` and `refine_factor` are only applicable if an ANN index is present. If specified on a table without an ANN index, those parameters are ignored.
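For illustration, a minimal sketch of both knobs in the Python sync API (an IVF_PQ index on `tbl` is assumed; without it these parameters are ignored):

```python
# nprobes trades latency for recall; refine_factor re-ranks extra
# candidates in memory for higher accuracy.
results = (
    tbl.search([0.1] * 128)
    .limit(20)
    .nprobes(40)
    .refine_factor(10)
    .to_pandas()
)
```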
=== "Python"
@@ -169,7 +171,7 @@ There are a couple of parameters that can be used to fine-tune the search:
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/ann_indexes.ts:search1"
--8<-- "nodejs/examples/ann_indexes.test.ts:search1"
```
=== "vectordb (deprecated)"
@@ -203,7 +205,7 @@ You can further filter the elements returned by a search using a where clause.
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/ann_indexes.ts:search2"
--8<-- "nodejs/examples/ann_indexes.test.ts:search2"
```
=== "vectordb (deprecated)"
@@ -235,7 +237,7 @@ You can select the columns returned by the query using a select clause.
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/ann_indexes.ts:search3"
--8<-- "nodejs/examples/ann_indexes.test.ts:search3"
```
=== "vectordb (deprecated)"

View File

@@ -157,7 +157,7 @@ recommend switching to stable releases.
import * as lancedb from "@lancedb/lancedb";
import * as arrow from "apache-arrow";
--8<-- "nodejs/examples/basic.ts:connect"
--8<-- "nodejs/examples/basic.test.ts:connect"
```
=== "vectordb (deprecated)"
@@ -212,7 +212,7 @@ table.
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:create_table"
--8<-- "nodejs/examples/basic.test.ts:create_table"
```
=== "vectordb (deprecated)"
@@ -268,7 +268,7 @@ similar to a `CREATE TABLE` statement in SQL.
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:create_empty_table"
--8<-- "nodejs/examples/basic.test.ts:create_empty_table"
```
=== "vectordb (deprecated)"
@@ -298,7 +298,7 @@ Once created, you can open a table as follows:
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:open_table"
--8<-- "nodejs/examples/basic.test.ts:open_table"
```
=== "vectordb (deprecated)"
@@ -327,7 +327,7 @@ If you forget the name of your table, you can always get a listing of all table
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:table_names"
--8<-- "nodejs/examples/basic.test.ts:table_names"
```
=== "vectordb (deprecated)"
@@ -357,7 +357,7 @@ After a table has been created, you can always add more data to it as follows:
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:add_data"
--8<-- "nodejs/examples/basic.test.ts:add_data"
```
=== "vectordb (deprecated)"
@@ -389,7 +389,7 @@ Once you've embedded the query, you can find its nearest neighbors as follows:
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:vector_search"
--8<-- "nodejs/examples/basic.test.ts:vector_search"
```
=== "vectordb (deprecated)"
@@ -429,7 +429,7 @@ LanceDB allows you to create an ANN index on a table as follows:
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:create_index"
--8<-- "nodejs/examples/basic.test.ts:create_index"
```
=== "vectordb (deprecated)"
@@ -469,7 +469,7 @@ This can delete any number of rows that match the filter.
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:delete_rows"
--8<-- "nodejs/examples/basic.test.ts:delete_rows"
```
=== "vectordb (deprecated)"
@@ -527,7 +527,7 @@ Use the `drop_table()` method on the database to remove a table.
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:drop_table"
--8<-- "nodejs/examples/basic.test.ts:drop_table"
```
=== "vectordb (deprecated)"
@@ -561,8 +561,8 @@ You can use the embedding API when working with embedding models. It automatical
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/embedding.ts:imports"
--8<-- "nodejs/examples/embedding.ts:openai_embeddings"
--8<-- "nodejs/examples/embedding.test.ts:imports"
--8<-- "nodejs/examples/embedding.test.ts:openai_embeddings"
```
=== "Rust"

View File

@@ -0,0 +1,51 @@
# VoyageAI Embeddings
Voyage AI provides cutting-edge embedding models and rerankers.
Using the VoyageAI API requires the `voyageai` package, which can be installed with `pip install voyageai`. Voyage AI embeddings generate embeddings for text data, which can be used for tasks like semantic search, clustering, and classification.
You also need to set the `VOYAGE_API_KEY` environment variable to use the VoyageAI API.
Supported models are:
- voyage-3
- voyage-3-lite
- voyage-finance-2
- voyage-multilingual-2
- voyage-law-2
- voyage-code-2
Supported parameters (to be passed to the `create` method) are:

| Parameter | Type | Default Value | Description |
|---|---|--------|---------|
| `name` | `str` | `"voyage-3"` | The model ID of the model to use. Supported base models for text embeddings: voyage-3, voyage-3-lite, voyage-finance-2, voyage-multilingual-2, voyage-law-2, voyage-code-2 |
| `input_type` | `str` | `None` | Type of the input text. Defaults to None. Other options: query, document. |
| `truncation` | `bool` | `True` | Whether to truncate the input texts to fit within the context length. |
Usage Example:
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry
voyageai = (
    EmbeddingFunctionRegistry.get_instance()
    .get("voyageai")
    .create(name="voyage-3")
)


class TextModel(LanceModel):
    text: str = voyageai.SourceField()
    vector: Vector(voyageai.ndims()) = voyageai.VectorField()


data = [
    {"text": "hello world"},
    {"text": "goodbye world"},
]

db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(data)
```

View File

@@ -47,9 +47,9 @@ Let's implement `SentenceTransformerEmbeddings` class. All you need to do is imp
=== "TypeScript"
```ts
--8<--- "nodejs/examples/custom_embedding_function.ts:imports"
--8<--- "nodejs/examples/custom_embedding_function.test.ts:imports"
--8<--- "nodejs/examples/custom_embedding_function.ts:embedding_impl"
--8<--- "nodejs/examples/custom_embedding_function.test.ts:embedding_impl"
```
@@ -78,7 +78,7 @@ Now you can use this embedding function to create your table schema and that's i
=== "TypeScript"
```ts
--8<--- "nodejs/examples/custom_embedding_function.ts:call_custom_function"
--8<--- "nodejs/examples/custom_embedding_function.test.ts:call_custom_function"
```
!!! note

View File

@@ -94,8 +94,8 @@ the embeddings at all:
=== "@lancedb/lancedb"
```ts
--8<-- "nodejs/examples/embedding.ts:imports"
--8<-- "nodejs/examples/embedding.ts:embedding_function"
--8<-- "nodejs/examples/embedding.test.ts:imports"
--8<-- "nodejs/examples/embedding.test.ts:embedding_function"
```
=== "vectordb (deprecated)"
@@ -150,7 +150,7 @@ need to worry about it when you query the table:
.toArray()
```
=== "vectordb (deprecated)
=== "vectordb (deprecated)"
```ts
const results = await table

View File

@@ -51,8 +51,8 @@ LanceDB registers the OpenAI embeddings function in the registry as `openai`. Yo
=== "TypeScript"
```typescript
--8<--- "nodejs/examples/embedding.ts:imports"
--8<--- "nodejs/examples/embedding.ts:openai_embeddings"
--8<--- "nodejs/examples/embedding.test.ts:imports"
--8<--- "nodejs/examples/embedding.test.ts:openai_embeddings"
```
=== "Rust"
@@ -121,12 +121,10 @@ class Words(LanceModel):
vector: Vector(func.ndims()) = func.VectorField()
table = db.create_table("words", schema=Words)
table.add(
[
{"text": "hello world"},
{"text": "goodbye world"}
]
)
table.add([
{"text": "hello world"},
{"text": "goodbye world"}
])
query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]

View File

@@ -85,13 +85,13 @@ Initialize a LanceDB connection and create a table
```ts
--8<-- "nodejs/examples/basic.ts:create_table"
--8<-- "nodejs/examples/basic.test.ts:create_table"
```
This will infer the schema from the provided data. If you want to explicitly provide a schema, you can use `apache-arrow` to declare a schema
```ts
--8<-- "nodejs/examples/basic.ts:create_table_with_schema"
--8<-- "nodejs/examples/basic.test.ts:create_table_with_schema"
```
!!! info "Note"
@@ -100,14 +100,14 @@ Initialize a LanceDB connection and create a table
passed in will NOT be appended to the table in that case.
```ts
--8<-- "nodejs/examples/basic.ts:create_table_exists_ok"
--8<-- "nodejs/examples/basic.test.ts:create_table_exists_ok"
```
Sometimes you want to make sure that you start fresh. If you want to
overwrite the table, you can pass in mode: "overwrite" to the createTable function.
```ts
--8<-- "nodejs/examples/basic.ts:create_table_overwrite"
--8<-- "nodejs/examples/basic.test.ts:create_table_overwrite"
```
=== "vectordb (deprecated)"
@@ -227,7 +227,7 @@ LanceDB supports float16 data type!
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:create_f16_table"
--8<-- "nodejs/examples/basic.test.ts:create_f16_table"
```
=== "vectordb (deprecated)"
@@ -455,7 +455,7 @@ You can create an empty table for scenarios where you want to add data to the ta
=== "@lancedb/lancedb"
```typescript
--8<-- "nodejs/examples/basic.ts:create_empty_table"
--8<-- "nodejs/examples/basic.test.ts:create_empty_table"
```
=== "vectordb (deprecated)"
@@ -790,6 +790,27 @@ Use the `drop_table()` method on the database to remove a table.
This permanently removes the table and is not recoverable, unlike deleting rows.
If the table does not exist an exception is raised.
## Handling bad vectors
In LanceDB Python, you can use the `on_bad_vectors` parameter to choose how
invalid vector values are handled. A vector is invalid if:
1. It has the wrong dimension
2. It contains NaN values
3. It is null but the field is non-nullable
By default, LanceDB will raise an error if it encounters a bad vector. You can
also choose one of the following options:
* `drop`: Ignore rows with bad vectors
* `fill`: Replace bad values (NaNs) or missing values (too few dimensions) with
the fill value specified in the `fill_value` parameter. An input like
`[1.0, NaN, 3.0]` will be replaced with `[1.0, 0.0, 3.0]` if `fill_value=0.0`
(see the sketch below).
* `null`: Replace bad vectors with null (only works if the column is nullable).
A bad vector `[1.0, NaN, 3.0]` will be replaced with `null` if the column is
nullable. If the vector column is non-nullable, bad vectors will cause an
error.
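A minimal sketch of the `fill` option on `add` (the table and row are illustrative; `on_bad_vectors` is also accepted by `create_table`):

```python
# The NaN is replaced with 0.0 instead of raising an error.
tbl.add(
    [{"vector": [1.0, float("nan"), 3.0]}],
    on_bad_vectors="fill",
    fill_value=0.0,
)
```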
## Consistency

File diff suppressed because it is too large.

View File

@@ -0,0 +1,77 @@
# Voyage AI Reranker
Voyage AI provides cutting-edge embedding models and rerankers.
This re-ranker uses the [VoyageAI](https://docs.voyageai.com/docs/) API to rerank the search results. You can use this re-ranker by passing `VoyageAIReranker()` to the `rerank()` method. Note that you'll either need to set the `VOYAGE_API_KEY` environment variable or pass the `api_key` argument to use this re-ranker.
!!! note
Supported Query Types: Hybrid, Vector, FTS
```python
import numpy
import lancedb
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector
from lancedb.rerankers import VoyageAIReranker
embedder = get_registry().get("sentence-transformers").create()
db = lancedb.connect("~/.lancedb")
class Schema(LanceModel):
    text: str = embedder.SourceField()
    vector: Vector(embedder.ndims()) = embedder.VectorField()


data = [
    {"text": "hello world"},
    {"text": "goodbye world"},
]
tbl = db.create_table("test", schema=Schema, mode="overwrite")
tbl.add(data)
reranker = VoyageAIReranker(model_name="rerank-2")
# Run vector search with a reranker
result = tbl.search("hello").rerank(reranker=reranker).to_list()
# Run FTS search with a reranker
result = tbl.search("hello", query_type="fts").rerank(reranker=reranker).to_list()
# Run hybrid search with a reranker
tbl.create_fts_index("text", replace=True)
result = tbl.search("hello", query_type="hybrid").rerank(reranker=reranker).to_list()
```
Accepted Arguments
----------------
| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| `model_name` | `str` | `None` | The name of the reranker model to use. Available models are: rerank-2, rerank-2-lite |
| `column` | `str` | `"text"` | The name of the column to use as input to the cross encoder model. |
| `top_n` | `int` | `None` | The number of results to return. If None, will return all results. |
| `api_key` | `str` | `None` | The API key for the Voyage AI API. If not provided, the `VOYAGE_API_KEY` environment variable is used. |
| `return_score` | `str` | `"relevance"` | Options are "relevance" or "all". The type of score to return. If "relevance", only the `_relevance_score` column is returned. If "all", the relevance score is returned along with the vector and/or FTS scores, depending on the query type. |
| `truncation` | `bool` | `None` | Whether to truncate the input to satisfy the "context length limit" on the query and the documents. |
## Supported Scores for each query type
You can specify the type of scores you want the reranker to return. The following are the supported scores for each query type:
### Hybrid Search
|`return_score`| Status | Description |
| --- | --- | --- |
| `relevance` | ✅ Supported | Returns only the `_relevance_score` column |
| `all` | ❌ Not Supported | Would return the vector (`_distance`) and FTS (`score`) scores along with the hybrid search score (`_relevance_score`) |
### Vector Search
|`return_score`| Status | Description |
| --- | --- | --- |
| `relevance` | ✅ Supported | Returns only the `_relevance_score` column |
| `all` | ✅ Supported | Returns the vector score (`_distance`) along with the relevance score (`_relevance_score`) |
### FTS Search
|`return_score`| Status | Description |
| --- | --- | --- |
| `relevance` | ✅ Supported | Returns only the `_relevance_score` column |
| `all` | ✅ Supported | Returns the FTS score (`score`) along with the relevance score (`_relevance_score`) |

View File

@@ -58,9 +58,9 @@ db.create_table("my_vectors", data=data)
=== "@lancedb/lancedb"
```ts
--8<-- "nodejs/examples/search.ts:import"
--8<-- "nodejs/examples/search.test.ts:import"
--8<-- "nodejs/examples/search.ts:search1"
--8<-- "nodejs/examples/search.test.ts:search1"
```
@@ -89,7 +89,7 @@ By default, `l2` will be used as metric type. You can specify the metric type as
=== "@lancedb/lancedb"
```ts
--8<-- "nodejs/examples/search.ts:search2"
--8<-- "nodejs/examples/search.test.ts:search2"
```
=== "vectordb (deprecated)"

View File

@@ -49,7 +49,7 @@ const tbl = await db.createTable('myVectors', data)
=== "@lancedb/lancedb"
```ts
--8<-- "nodejs/examples/filtering.ts:search"
--8<-- "nodejs/examples/filtering.test.ts:search"
```
=== "vectordb (deprecated)"
@@ -91,7 +91,7 @@ For example, the following filter string is acceptable:
=== "@lancedb/lancedb"
```ts
--8<-- "nodejs/examples/filtering.ts:vec_search"
--8<-- "nodejs/examples/filtering.test.ts:vec_search"
```
=== "vectordb (deprecated)"
@@ -169,7 +169,7 @@ You can also filter your data without search.
=== "@lancedb/lancedb"
```ts
--8<-- "nodejs/examples/filtering.ts:sql_search"
--8<-- "nodejs/examples/filtering.test.ts:sql_search"
```
=== "vectordb (deprecated)"

View File

@@ -8,7 +8,7 @@
<parent>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.13.0-beta.1</version>
<version>0.13.0-beta.2</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@@ -6,7 +6,7 @@
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.13.0-beta.1</version>
<version>0.13.0-beta.2</version>
<packaging>pom</packaging>
<name>LanceDB Parent</name>

node/package-lock.json (generated)
View File

@@ -1,12 +1,12 @@
{
"name": "vectordb",
"version": "0.13.0-beta.0",
"version": "0.13.0-beta.2",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "vectordb",
"version": "0.13.0-beta.0",
"version": "0.13.0-beta.2",
"cpu": [
"x64",
"arm64"
@@ -52,11 +52,12 @@
"uuid": "^9.0.0"
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.13.0-beta.0",
"@lancedb/vectordb-darwin-x64": "0.13.0-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.13.0-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.13.0-beta.0",
"@lancedb/vectordb-win32-x64-msvc": "0.13.0-beta.0"
"@lancedb/vectordb-darwin-arm64": "0.13.0-beta.2",
"@lancedb/vectordb-darwin-x64": "0.13.0-beta.2",
"@lancedb/vectordb-linux-arm64-gnu": "0.13.0-beta.2",
"@lancedb/vectordb-linux-x64-gnu": "0.13.0-beta.2",
"@lancedb/vectordb-win32-arm64-msvc": "0.13.0-beta.2",
"@lancedb/vectordb-win32-x64-msvc": "0.13.0-beta.2"
},
"peerDependencies": {
"@apache-arrow/ts": "^14.0.2",

View File

@@ -1,6 +1,6 @@
{
"name": "vectordb",
"version": "0.13.0-beta.1",
"version": "0.13.0-beta.2",
"description": " Serverless, low-latency vector database for AI applications",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -84,14 +84,16 @@
"aarch64-apple-darwin": "@lancedb/vectordb-darwin-arm64",
"x86_64-unknown-linux-gnu": "@lancedb/vectordb-linux-x64-gnu",
"aarch64-unknown-linux-gnu": "@lancedb/vectordb-linux-arm64-gnu",
"x86_64-pc-windows-msvc": "@lancedb/vectordb-win32-x64-msvc"
"x86_64-pc-windows-msvc": "@lancedb/vectordb-win32-x64-msvc",
"aarch64-pc-windows-msvc": "@lancedb/vectordb-win32-arm64-msvc"
}
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.13.0-beta.1",
"@lancedb/vectordb-darwin-x64": "0.13.0-beta.1",
"@lancedb/vectordb-linux-arm64-gnu": "0.13.0-beta.1",
"@lancedb/vectordb-linux-x64-gnu": "0.13.0-beta.1",
"@lancedb/vectordb-win32-x64-msvc": "0.13.0-beta.1"
"@lancedb/vectordb-darwin-arm64": "0.13.0-beta.2",
"@lancedb/vectordb-darwin-x64": "0.13.0-beta.2",
"@lancedb/vectordb-linux-arm64-gnu": "0.13.0-beta.2",
"@lancedb/vectordb-linux-x64-gnu": "0.13.0-beta.2",
"@lancedb/vectordb-win32-x64-msvc": "0.13.0-beta.2",
"@lancedb/vectordb-win32-arm64-msvc": "0.13.0-beta.2"
}
}

View File

@@ -1,7 +1,7 @@
[package]
name = "lancedb-nodejs"
edition.workspace = true
version = "0.13.0-beta.1"
version = "0.13.0-beta.2"
license.workspace = true
description.workspace = true
repository.workspace = true
@@ -18,7 +18,7 @@ futures.workspace = true
lancedb = { path = "../rust/lancedb", features = ["remote"] }
napi = { version = "2.16.8", default-features = false, features = [
"napi9",
"async",
"async"
] }
napi-derive = "2.16.4"
# Prevent dynamic linking of lzma, which comes from datafusion

View File

@@ -187,6 +187,81 @@ describe.each([arrow13, arrow14, arrow15, arrow16, arrow17])(
},
);
// TODO: https://github.com/lancedb/lancedb/issues/1832
it.skip("should be able to omit nullable fields", async () => {
const db = await connect(tmpDir.name);
const schema = new arrow.Schema([
new arrow.Field(
"vector",
new arrow.FixedSizeList(
2,
new arrow.Field("item", new arrow.Float64()),
),
true,
),
new arrow.Field("item", new arrow.Utf8(), true),
new arrow.Field("price", new arrow.Float64(), false),
]);
const table = await db.createEmptyTable("test", schema);
const data1 = { item: "foo", price: 10.0 };
await table.add([data1]);
const data2 = { vector: [3.1, 4.1], price: 2.0 };
await table.add([data2]);
const data3 = { vector: [5.9, 26.5], item: "bar", price: 3.0 };
await table.add([data3]);
let res = await table.query().limit(10).toArray();
const resVector = res.map((r) => r.get("vector").toArray());
expect(resVector).toEqual([null, data2.vector, data3.vector]);
const resItem = res.map((r) => r.get("item").toArray());
expect(resItem).toEqual(["foo", null, "bar"]);
const resPrice = res.map((r) => r.get("price").toArray());
expect(resPrice).toEqual([10.0, 2.0, 3.0]);
const data4 = { item: "foo" };
// We can't omit a column if it's not nullable
await expect(table.add([data4])).rejects.toThrow("Invalid user input");
// But we can alter columns to make them nullable
await table.alterColumns([{ path: "price", nullable: true }]);
await table.add([data4]);
res = (await table.query().limit(10).toArray()).map((r) => r.toJSON());
expect(res).toEqual([data1, data2, data3, data4]);
});
it("should be able to insert nullable data for non-nullable fields", async () => {
const db = await connect(tmpDir.name);
const schema = new arrow.Schema([
new arrow.Field("x", new arrow.Float64(), false),
new arrow.Field("id", new arrow.Utf8(), false),
]);
const table = await db.createEmptyTable("test", schema);
const data1 = { x: 4.1, id: "foo" };
await table.add([data1]);
const res = (await table.query().toArray())[0];
expect(res.x).toEqual(data1.x);
expect(res.id).toEqual(data1.id);
const data2 = { x: null, id: "bar" };
await expect(table.add([data2])).rejects.toThrow(
"declared as non-nullable but contains null values",
);
// But we can alter columns to make them nullable
await table.alterColumns([{ path: "x", nullable: true }]);
await table.add([data2]);
const res2 = await table.query().toArray();
expect(res2.length).toBe(2);
expect(res2[0].x).toEqual(data1.x);
expect(res2[0].id).toEqual(data1.id);
expect(res2[1].x).toBeNull();
expect(res2[1].id).toEqual(data2.id);
});
it("should return the table as an instance of an arrow table", async () => {
const arrowTbl = await table.toArrow();
expect(arrowTbl).toBeInstanceOf(ArrowTable);
@@ -998,4 +1073,18 @@ describe("column name options", () => {
const results = await table.query().where("`camelCase` = 1").toArray();
expect(results[0].camelCase).toBe(1);
});
test("can make multiple vector queries in one go", async () => {
const results = await table
.query()
.nearestTo([0.1, 0.2])
.addQueryVector([0.1, 0.2])
.limit(1)
.toArray();
console.log(results);
expect(results.length).toBe(2);
results.sort((a, b) => a.query_index - b.query_index);
expect(results[0].query_index).toBe(0);
expect(results[1].query_index).toBe(1);
});
});

View File

@@ -9,7 +9,8 @@
"**/native.js",
"**/native.d.ts",
"**/npm/**/*",
"**/.vscode/**"
"**/.vscode/**",
"./examples/*"
]
},
"formatter": {

View File

@@ -0,0 +1,57 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { expect, test } from "@jest/globals";
// --8<-- [start:import]
import * as lancedb from "@lancedb/lancedb";
import { VectorQuery } from "@lancedb/lancedb";
// --8<-- [end:import]
import { withTempDirectory } from "./util.ts";
test("ann index examples", async () => {
await withTempDirectory(async (databaseDir) => {
// --8<-- [start:ingest]
const db = await lancedb.connect(databaseDir);
const data = Array.from({ length: 5_000 }, (_, i) => ({
vector: Array(128).fill(i),
id: `${i}`,
content: "",
longId: `${i}`,
}));
const table = await db.createTable("my_vectors", data, {
mode: "overwrite",
});
await table.createIndex("vector", {
config: lancedb.Index.ivfPq({
numPartitions: 10,
numSubVectors: 16,
}),
});
// --8<-- [end:ingest]
// --8<-- [start:search1]
const search = table.search(Array(128).fill(1.2)).limit(2) as VectorQuery;
const results1 = await search.nprobes(20).refineFactor(10).toArray();
// --8<-- [end:search1]
expect(results1.length).toBe(2);
// --8<-- [start:search2]
const results2 = await table
.search(Array(128).fill(1.2))
.where("id != '1141'")
.limit(2)
.toArray();
// --8<-- [end:search2]
expect(results2.length).toBe(2);
// --8<-- [start:search3]
const results3 = await table
.search(Array(128).fill(1.2))
.select(["id"])
.limit(2)
.toArray();
// --8<-- [end:search3]
expect(results3.length).toBe(2);
});
}, 100_000);

View File

@@ -1,49 +0,0 @@
// --8<-- [start:import]
import * as lancedb from "@lancedb/lancedb";
// --8<-- [end:import]
// --8<-- [start:ingest]
const db = await lancedb.connect("/tmp/lancedb/");
const data = Array.from({ length: 10_000 }, (_, i) => ({
vector: Array(1536).fill(i),
id: `${i}`,
content: "",
longId: `${i}`,
}));
const table = await db.createTable("my_vectors", data, { mode: "overwrite" });
await table.createIndex("vector", {
config: lancedb.Index.ivfPq({
numPartitions: 16,
numSubVectors: 48,
}),
});
// --8<-- [end:ingest]
// --8<-- [start:search1]
const _results1 = await table
.search(Array(1536).fill(1.2))
.limit(2)
.nprobes(20)
.refineFactor(10)
.toArray();
// --8<-- [end:search1]
// --8<-- [start:search2]
const _results2 = await table
.search(Array(1536).fill(1.2))
.where("id != '1141'")
.limit(2)
.toArray();
// --8<-- [end:search2]
// --8<-- [start:search3]
const _results3 = await table
.search(Array(1536).fill(1.2))
.select(["id"])
.limit(2)
.toArray();
// --8<-- [end:search3]
console.log("Ann indexes: done");

View File

@@ -0,0 +1,175 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { expect, test } from "@jest/globals";
// --8<-- [start:imports]
import * as lancedb from "@lancedb/lancedb";
import * as arrow from "apache-arrow";
import {
Field,
FixedSizeList,
Float16,
Int32,
Schema,
Utf8,
} from "apache-arrow";
// --8<-- [end:imports]
import { withTempDirectory } from "./util.ts";
test("basic table examples", async () => {
await withTempDirectory(async (databaseDir) => {
// --8<-- [start:connect]
const db = await lancedb.connect(databaseDir);
// --8<-- [end:connect]
{
// --8<-- [start:create_table]
const _tbl = await db.createTable(
"myTable",
[
{ vector: [3.1, 4.1], item: "foo", price: 10.0 },
{ vector: [5.9, 26.5], item: "bar", price: 20.0 },
],
{ mode: "overwrite" },
);
// --8<-- [end:create_table]
const data = [
{ vector: [3.1, 4.1], item: "foo", price: 10.0 },
{ vector: [5.9, 26.5], item: "bar", price: 20.0 },
];
{
// --8<-- [start:create_table_exists_ok]
const tbl = await db.createTable("myTable", data, {
existOk: true,
});
// --8<-- [end:create_table_exists_ok]
expect(await tbl.countRows()).toBe(2);
}
{
// --8<-- [start:create_table_overwrite]
const tbl = await db.createTable("myTable", data, {
mode: "overwrite",
});
// --8<-- [end:create_table_overwrite]
expect(await tbl.countRows()).toBe(2);
}
}
await db.dropTable("myTable");
{
// --8<-- [start:create_table_with_schema]
const schema = new arrow.Schema([
new arrow.Field(
"vector",
new arrow.FixedSizeList(
2,
new arrow.Field("item", new arrow.Float32(), true),
),
),
new arrow.Field("item", new arrow.Utf8(), true),
new arrow.Field("price", new arrow.Float32(), true),
]);
const data = [
{ vector: [3.1, 4.1], item: "foo", price: 10.0 },
{ vector: [5.9, 26.5], item: "bar", price: 20.0 },
];
const tbl = await db.createTable("myTable", data, {
schema,
});
// --8<-- [end:create_table_with_schema]
expect(await tbl.countRows()).toBe(2);
}
{
// --8<-- [start:create_empty_table]
const schema = new arrow.Schema([
new arrow.Field("id", new arrow.Int32()),
new arrow.Field("name", new arrow.Utf8()),
]);
const emptyTbl = await db.createEmptyTable("empty_table", schema);
// --8<-- [end:create_empty_table]
expect(await emptyTbl.countRows()).toBe(0);
}
{
// --8<-- [start:open_table]
const _tbl = await db.openTable("myTable");
// --8<-- [end:open_table]
}
{
// --8<-- [start:table_names]
const tableNames = await db.tableNames();
// --8<-- [end:table_names]
expect(tableNames).toEqual(["empty_table", "myTable"]);
}
const tbl = await db.openTable("myTable");
{
// --8<-- [start:add_data]
const data = [
{ vector: [1.3, 1.4], item: "fizz", price: 100.0 },
{ vector: [9.5, 56.2], item: "buzz", price: 200.0 },
];
await tbl.add(data);
// --8<-- [end:add_data]
}
{
// --8<-- [start:vector_search]
const res = await tbl.search([100, 100]).limit(2).toArray();
// --8<-- [end:vector_search]
expect(res.length).toBe(2);
}
{
const data = Array.from({ length: 1000 })
.fill(null)
.map(() => ({
vector: [Math.random(), Math.random()],
item: "autogen",
price: Math.round(Math.random() * 100),
}));
await tbl.add(data);
}
// --8<-- [start:create_index]
await tbl.createIndex("vector");
// --8<-- [end:create_index]
// --8<-- [start:delete_rows]
await tbl.delete('item = "fizz"');
// --8<-- [end:delete_rows]
// --8<-- [start:drop_table]
await db.dropTable("myTable");
// --8<-- [end:drop_table]
await db.dropTable("empty_table");
{
// --8<-- [start:create_f16_table]
const db = await lancedb.connect(databaseDir);
const dim = 16;
const total = 10;
const f16Schema = new Schema([
new Field("id", new Int32()),
new Field(
"vector",
new FixedSizeList(dim, new Field("item", new Float16(), true)),
false,
),
]);
const data = lancedb.makeArrowTable(
Array.from(Array(total), (_, i) => ({
id: i,
vector: Array.from(Array(dim), Math.random),
})),
{ schema: f16Schema },
);
const _table = await db.createTable("f16_tbl", data);
// --8<-- [end:create_f16_table]
await db.dropTable("f16_tbl");
}
});
});

View File

@@ -1,162 +0,0 @@
// --8<-- [start:imports]
import * as lancedb from "@lancedb/lancedb";
import * as arrow from "apache-arrow";
import {
Field,
FixedSizeList,
Float16,
Int32,
Schema,
Utf8,
} from "apache-arrow";
// --8<-- [end:imports]
// --8<-- [start:connect]
const uri = "/tmp/lancedb/";
const db = await lancedb.connect(uri);
// --8<-- [end:connect]
{
// --8<-- [start:create_table]
const tbl = await db.createTable(
"myTable",
[
{ vector: [3.1, 4.1], item: "foo", price: 10.0 },
{ vector: [5.9, 26.5], item: "bar", price: 20.0 },
],
{ mode: "overwrite" },
);
// --8<-- [end:create_table]
const data = [
{ vector: [3.1, 4.1], item: "foo", price: 10.0 },
{ vector: [5.9, 26.5], item: "bar", price: 20.0 },
];
{
// --8<-- [start:create_table_exists_ok]
const tbl = await db.createTable("myTable", data, {
existsOk: true,
});
// --8<-- [end:create_table_exists_ok]
}
{
// --8<-- [start:create_table_overwrite]
const _tbl = await db.createTable("myTable", data, {
mode: "overwrite",
});
// --8<-- [end:create_table_overwrite]
}
}
{
// --8<-- [start:create_table_with_schema]
const schema = new arrow.Schema([
new arrow.Field(
"vector",
new arrow.FixedSizeList(
2,
new arrow.Field("item", new arrow.Float32(), true),
),
),
new arrow.Field("item", new arrow.Utf8(), true),
new arrow.Field("price", new arrow.Float32(), true),
]);
const data = [
{ vector: [3.1, 4.1], item: "foo", price: 10.0 },
{ vector: [5.9, 26.5], item: "bar", price: 20.0 },
];
const _tbl = await db.createTable("myTable", data, {
schema,
});
// --8<-- [end:create_table_with_schema]
}
{
// --8<-- [start:create_empty_table]
const schema = new arrow.Schema([
new arrow.Field("id", new arrow.Int32()),
new arrow.Field("name", new arrow.Utf8()),
]);
const empty_tbl = await db.createEmptyTable("empty_table", schema);
// --8<-- [end:create_empty_table]
}
{
// --8<-- [start:open_table]
const _tbl = await db.openTable("myTable");
// --8<-- [end:open_table]
}
{
// --8<-- [start:table_names]
const tableNames = await db.tableNames();
console.log(tableNames);
// --8<-- [end:table_names]
}
const tbl = await db.openTable("myTable");
{
// --8<-- [start:add_data]
const data = [
{ vector: [1.3, 1.4], item: "fizz", price: 100.0 },
{ vector: [9.5, 56.2], item: "buzz", price: 200.0 },
];
await tbl.add(data);
// --8<-- [end:add_data]
}
{
// --8<-- [start:vector_search]
const _res = tbl.search([100, 100]).limit(2).toArray();
// --8<-- [end:vector_search]
}
{
const data = Array.from({ length: 1000 })
.fill(null)
.map(() => ({
vector: [Math.random(), Math.random()],
item: "autogen",
price: Math.round(Math.random() * 100),
}));
await tbl.add(data);
}
// --8<-- [start:create_index]
await tbl.createIndex("vector");
// --8<-- [end:create_index]
// --8<-- [start:delete_rows]
await tbl.delete('item = "fizz"');
// --8<-- [end:delete_rows]
// --8<-- [start:drop_table]
await db.dropTable("myTable");
// --8<-- [end:drop_table]
await db.dropTable("empty_table");
{
// --8<-- [start:create_f16_table]
const db = await lancedb.connect("/tmp/lancedb");
const dim = 16;
const total = 10;
const f16Schema = new Schema([
new Field("id", new Int32()),
new Field(
"vector",
new FixedSizeList(dim, new Field("item", new Float16(), true)),
false,
),
]);
const data = lancedb.makeArrowTable(
Array.from(Array(total), (_, i) => ({
id: i,
vector: Array.from(Array(dim), Math.random),
})),
{ schema: f16Schema },
);
const _table = await db.createTable("f16_tbl", data);
// --8<-- [end:create_f16_table]
await db.dropTable("f16_tbl");
}

View File

@@ -0,0 +1,76 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { FeatureExtractionPipeline, pipeline } from "@huggingface/transformers";
import { expect, test } from "@jest/globals";
// --8<-- [start:imports]
import * as lancedb from "@lancedb/lancedb";
import {
LanceSchema,
TextEmbeddingFunction,
getRegistry,
register,
} from "@lancedb/lancedb/embedding";
// --8<-- [end:imports]
import { withTempDirectory } from "./util.ts";
// --8<-- [start:embedding_impl]
@register("sentence-transformers")
class SentenceTransformersEmbeddings extends TextEmbeddingFunction {
name = "Xenova/all-miniLM-L6-v2";
#ndims!: number;
extractor!: FeatureExtractionPipeline;
async init() {
this.extractor = await pipeline("feature-extraction", this.name, {
dtype: "fp32",
});
this.#ndims = await this.generateEmbeddings(["hello"]).then(
(e) => e[0].length,
);
}
ndims() {
return this.#ndims;
}
toJSON() {
return {
name: this.name,
};
}
async generateEmbeddings(texts: string[]) {
const output = await this.extractor(texts, {
pooling: "mean",
normalize: true,
});
return output.tolist();
}
}
// --8<-- [end:embedding_impl]
test("Registry examples", async () => {
await withTempDirectory(async (databaseDir) => {
// --8<-- [start:call_custom_function]
const registry = getRegistry();
const sentenceTransformer = await registry
.get<SentenceTransformersEmbeddings>("sentence-transformers")!
.create();
const schema = LanceSchema({
vector: sentenceTransformer.vectorField(),
text: sentenceTransformer.sourceField(),
});
const db = await lancedb.connect(databaseDir);
const table = await db.createEmptyTable("table", schema, {
mode: "overwrite",
});
await table.add([{ text: "hello" }, { text: "world" }]);
const results = await table.search("greeting").limit(1).toArray();
// --8<-- [end:call_custom_function]
expect(results.length).toBe(1);
});
}, 100_000);

View File

@@ -1,64 +0,0 @@
// --8<-- [start:imports]
import * as lancedb from "@lancedb/lancedb";
import {
LanceSchema,
TextEmbeddingFunction,
getRegistry,
register,
} from "@lancedb/lancedb/embedding";
import { pipeline } from "@xenova/transformers";
// --8<-- [end:imports]
// --8<-- [start:embedding_impl]
@register("sentence-transformers")
class SentenceTransformersEmbeddings extends TextEmbeddingFunction {
name = "Xenova/all-miniLM-L6-v2";
#ndims!: number;
extractor: any;
async init() {
this.extractor = await pipeline("feature-extraction", this.name);
this.#ndims = await this.generateEmbeddings(["hello"]).then(
(e) => e[0].length,
);
}
ndims() {
return this.#ndims;
}
toJSON() {
return {
name: this.name,
};
}
async generateEmbeddings(texts: string[]) {
const output = await this.extractor(texts, {
pooling: "mean",
normalize: true,
});
return output.tolist();
}
}
// --8<-- [end:embedding_impl]
// --8<-- [start:call_custom_function]
const registry = getRegistry();
const sentenceTransformer = await registry
.get<SentenceTransformersEmbeddings>("sentence-transformers")!
.create();
const schema = LanceSchema({
vector: sentenceTransformer.vectorField(),
text: sentenceTransformer.sourceField(),
});
const db = await lancedb.connect("/tmp/db");
const table = await db.createEmptyTable("table", schema, { mode: "overwrite" });
await table.add([{ text: "hello" }, { text: "world" }]);
const results = await table.search("greeting").limit(1).toArray();
console.log(results[0].text);
// --8<-- [end:call_custom_function]

View File

@@ -0,0 +1,96 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { expect, test } from "@jest/globals";
// --8<-- [start:imports]
import * as lancedb from "@lancedb/lancedb";
import "@lancedb/lancedb/embedding/openai";
import { LanceSchema, getRegistry, register } from "@lancedb/lancedb/embedding";
import { EmbeddingFunction } from "@lancedb/lancedb/embedding";
import { type Float, Float32, Utf8 } from "apache-arrow";
// --8<-- [end:imports]
import { withTempDirectory } from "./util.ts";
const openAiTest = process.env.OPENAI_API_KEY == null ? test.skip : test;
openAiTest("openai embeddings", async () => {
await withTempDirectory(async (databaseDir) => {
// --8<-- [start:openai_embeddings]
const db = await lancedb.connect(databaseDir);
const func = getRegistry()
.get("openai")
?.create({ model: "text-embedding-ada-002" }) as EmbeddingFunction;
const wordsSchema = LanceSchema({
text: func.sourceField(new Utf8()),
vector: func.vectorField(),
});
const tbl = await db.createEmptyTable("words", wordsSchema, {
mode: "overwrite",
});
await tbl.add([{ text: "hello world" }, { text: "goodbye world" }]);
const query = "greetings";
const actual = (await tbl.search(query).limit(1).toArray())[0];
// --8<-- [end:openai_embeddings]
expect(actual).toHaveProperty("text");
});
});
test("custom embedding function", async () => {
await withTempDirectory(async (databaseDir) => {
// --8<-- [start:embedding_function]
const db = await lancedb.connect(databaseDir);
@register("my_embedding")
class MyEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
embeddingDataType(): Float {
return new Float32();
}
async computeQueryEmbeddings(_data: string) {
// This is a placeholder for a real embedding function
return [1, 2, 3];
}
async computeSourceEmbeddings(data: string[]) {
// This is a placeholder for a real embedding function
return Array.from({ length: data.length }).fill([
1, 2, 3,
]) as number[][];
}
}
const func = new MyEmbeddingFunction();
const data = [{ text: "pepperoni" }, { text: "pineapple" }];
// Option 1: manually specify the embedding function
const table = await db.createTable("vectors", data, {
embeddingFunction: {
function: func,
sourceColumn: "text",
vectorColumn: "vector",
},
mode: "overwrite",
});
// Option 2: provide the embedding function through a schema
const schema = LanceSchema({
text: func.sourceField(new Utf8()),
vector: func.vectorField(),
});
const table2 = await db.createTable("vectors2", data, {
schema,
mode: "overwrite",
});
// --8<-- [end:embedding_function]
expect(await table.countRows()).toBe(2);
expect(await table2.countRows()).toBe(2);
});
});

View File

@@ -1,83 +0,0 @@
// --8<-- [start:imports]
import * as lancedb from "@lancedb/lancedb";
import { LanceSchema, getRegistry, register } from "@lancedb/lancedb/embedding";
import { EmbeddingFunction } from "@lancedb/lancedb/embedding";
import { type Float, Float32, Utf8 } from "apache-arrow";
// --8<-- [end:imports]
{
// --8<-- [start:openai_embeddings]
const db = await lancedb.connect("/tmp/db");
const func = getRegistry()
.get("openai")
?.create({ model: "text-embedding-ada-002" }) as EmbeddingFunction;
const wordsSchema = LanceSchema({
text: func.sourceField(new Utf8()),
vector: func.vectorField(),
});
const tbl = await db.createEmptyTable("words", wordsSchema, {
mode: "overwrite",
});
await tbl.add([{ text: "hello world" }, { text: "goodbye world" }]);
const query = "greetings";
const actual = (await (await tbl.search(query)).limit(1).toArray())[0];
// --8<-- [end:openai_embeddings]
console.log("result = ", actual.text);
}
{
// --8<-- [start:embedding_function]
const db = await lancedb.connect("/tmp/db");
@register("my_embedding")
class MyEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
embeddingDataType(): Float {
return new Float32();
}
async computeQueryEmbeddings(_data: string) {
// This is a placeholder for a real embedding function
return [1, 2, 3];
}
async computeSourceEmbeddings(data: string[]) {
// This is a placeholder for a real embedding function
return Array.from({ length: data.length }).fill([1, 2, 3]) as number[][];
}
}
const func = new MyEmbeddingFunction();
const data = [{ text: "pepperoni" }, { text: "pineapple" }];
// Option 1: manually specify the embedding function
const table = await db.createTable("vectors", data, {
embeddingFunction: {
function: func,
sourceColumn: "text",
vectorColumn: "vector",
},
mode: "overwrite",
});
// Option 2: provide the embedding function through a schema
const schema = LanceSchema({
text: func.sourceField(new Utf8()),
vector: func.vectorField(),
});
const table2 = await db.createTable("vectors2", data, {
schema,
mode: "overwrite",
});
// --8<-- [end:embedding_function]
}

View File

@@ -0,0 +1,42 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { expect, test } from "@jest/globals";
import * as lancedb from "@lancedb/lancedb";
import { withTempDirectory } from "./util.ts";
test("filtering examples", async () => {
await withTempDirectory(async (databaseDir) => {
const db = await lancedb.connect(databaseDir);
const data = Array.from({ length: 10_000 }, (_, i) => ({
vector: Array(1536).fill(i),
id: i,
item: `item ${i}`,
strId: `${i}`,
}));
const tbl = await db.createTable("myVectors", data, { mode: "overwrite" });
// --8<-- [start:search]
const _result = await tbl
.search(Array(1536).fill(0.5))
.limit(1)
.where("id = 10")
.toArray();
// --8<-- [end:search]
// --8<-- [start:vec_search]
const result = await (
tbl.search(Array(1536).fill(0)) as lancedb.VectorQuery
)
.where("(item IN ('item 0', 'item 2')) AND (id > 10)")
.postfilter()
.toArray();
// --8<-- [end:vec_search]
expect(result.length).toBe(0);
// --8<-- [start:sql_search]
await tbl.query().where("id = 10").limit(10).toArray();
// --8<-- [end:sql_search]
});
});

View File

@@ -1,34 +0,0 @@
import * as lancedb from "@lancedb/lancedb";
const db = await lancedb.connect("data/sample-lancedb");
const data = Array.from({ length: 10_000 }, (_, i) => ({
vector: Array(1536).fill(i),
id: i,
item: `item ${i}`,
strId: `${i}`,
}));
const tbl = await db.createTable("myVectors", data, { mode: "overwrite" });
// --8<-- [start:search]
const _result = await tbl
.search(Array(1536).fill(0.5))
.limit(1)
.where("id = 10")
.toArray();
// --8<-- [end:search]
// --8<-- [start:vec_search]
await tbl
.search(Array(1536).fill(0))
.where("(item IN ('item 0', 'item 2')) AND (id > 10)")
.postfilter()
.toArray();
// --8<-- [end:vec_search]
// --8<-- [start:sql_search]
await tbl.query().where("id = 10").limit(10).toArray();
// --8<-- [end:sql_search]
console.log("SQL search: done");

View File

@@ -0,0 +1,45 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { expect, test } from "@jest/globals";
import * as lancedb from "@lancedb/lancedb";
import { withTempDirectory } from "./util.ts";
test("full text search", async () => {
await withTempDirectory(async (databaseDir) => {
const db = await lancedb.connect(databaseDir);
const words = [
"apple",
"banana",
"cherry",
"date",
"elderberry",
"fig",
"grape",
];
const data = Array.from({ length: 10_000 }, (_, i) => ({
vector: Array(1536).fill(i),
id: i,
item: `item ${i}`,
strId: `${i}`,
doc: words[i % words.length],
}));
const tbl = await db.createTable("myVectors", data, { mode: "overwrite" });
await tbl.createIndex("doc", {
config: lancedb.Index.fts(),
});
// --8<-- [start:full_text_search]
const result = await tbl
.query()
.nearestToText("apple")
.select(["id", "doc"])
.limit(10)
.toArray();
expect(result.length).toBe(10);
// --8<-- [end:full_text_search]
});
});

View File

@@ -1,52 +0,0 @@
// Copyright 2024 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import * as lancedb from "@lancedb/lancedb";
const db = await lancedb.connect("data/sample-lancedb");
const words = [
"apple",
"banana",
"cherry",
"date",
"elderberry",
"fig",
"grape",
];
const data = Array.from({ length: 10_000 }, (_, i) => ({
vector: Array(1536).fill(i),
id: i,
item: `item ${i}`,
strId: `${i}`,
doc: words[i % words.length],
}));
const tbl = await db.createTable("myVectors", data, { mode: "overwrite" });
await tbl.createIndex("doc", {
config: lancedb.Index.fts(),
});
// --8<-- [start:full_text_search]
let result = await tbl
.search("apple")
.select(["id", "doc"])
.limit(10)
.toArray();
console.log(result);
// --8<-- [end:full_text_search]
console.log("SQL search: done");

View File

@@ -0,0 +1,6 @@
/** @type {import('ts-jest').JestConfigWithTsJest} */
module.exports = {
preset: "ts-jest",
testEnvironment: "node",
testPathIgnorePatterns: ["./dist"],
};

View File

@@ -1,27 +0,0 @@
{
"compilerOptions": {
// Enable latest features
"lib": ["ESNext", "DOM"],
"target": "ESNext",
"module": "ESNext",
"moduleDetection": "force",
"jsx": "react-jsx",
"allowJs": true,
// Bundler mode
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"verbatimModuleSyntax": true,
"noEmit": true,
// Best practices
"strict": true,
"skipLibCheck": true,
"noFallthroughCasesInSwitch": true,
// Some stricter flags (disabled by default)
"noUnusedLocals": false,
"noUnusedParameters": false,
"noPropertyAccessFromIndexSignature": false
}
}

File diff suppressed because it is too large

View File

@@ -5,24 +5,29 @@
"main": "index.js",
"type": "module",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
"//1": "--experimental-vm-modules is needed to run jest with sentence-transformers",
"//2": "--testEnvironment is needed to run jest with sentence-transformers",
"//3": "See: https://github.com/huggingface/transformers.js/issues/57",
"test": "node --experimental-vm-modules node_modules/.bin/jest --testEnvironment jest-environment-node-single-context --verbose",
"lint": "biome check *.ts && biome format *.ts",
"lint-ci": "biome ci .",
"lint-fix": "biome check --write *.ts && npm run format",
"format": "biome format --write *.ts"
},
"author": "Lance Devs",
"license": "Apache-2.0",
"dependencies": {
"@lancedb/lancedb": "file:../",
"@xenova/transformers": "^2.17.2"
"@huggingface/transformers": "^3.0.2",
"@lancedb/lancedb": "file:../dist",
"openai": "^4.29.2",
"sharp": "^0.33.5"
},
"devDependencies": {
"@biomejs/biome": "^1.7.3",
"@jest/globals": "^29.7.0",
"jest": "^29.7.0",
"jest-environment-node-single-context": "^29.4.0",
"ts-jest": "^29.2.5",
"typescript": "^5.5.4"
},
"compilerOptions": {
"target": "ESNext",
"module": "ESNext",
"moduleResolution": "Node",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
}
}

View File

@@ -0,0 +1,42 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { expect, test } from "@jest/globals";
// --8<-- [start:import]
import * as lancedb from "@lancedb/lancedb";
// --8<-- [end:import]
import { withTempDirectory } from "./util.ts";
test("full text search", async () => {
await withTempDirectory(async (databaseDir) => {
{
const db = await lancedb.connect(databaseDir);
const data = Array.from({ length: 10_000 }, (_, i) => ({
vector: Array(128).fill(i),
id: `${i}`,
content: "",
longId: `${i}`,
}));
await db.createTable("my_vectors", data);
}
// --8<-- [start:search1]
const db = await lancedb.connect(databaseDir);
const tbl = await db.openTable("my_vectors");
const results1 = await tbl.search(Array(128).fill(1.2)).limit(10).toArray();
// --8<-- [end:search1]
expect(results1.length).toBe(10);
// --8<-- [start:search2]
const results2 = await (
tbl.search(Array(128).fill(1.2)) as lancedb.VectorQuery
)
.distanceType("cosine")
.limit(10)
.toArray();
// --8<-- [end:search2]
expect(results2.length).toBe(10);
});
});

View File

@@ -1,38 +0,0 @@
// --8<-- [end:import]
import * as fs from "node:fs";
// --8<-- [start:import]
import * as lancedb from "@lancedb/lancedb";
async function setup() {
fs.rmSync("data/sample-lancedb", { recursive: true, force: true });
const db = await lancedb.connect("data/sample-lancedb");
const data = Array.from({ length: 10_000 }, (_, i) => ({
vector: Array(1536).fill(i),
id: `${i}`,
content: "",
longId: `${i}`,
}));
await db.createTable("my_vectors", data);
}
await setup();
// --8<-- [start:search1]
const db = await lancedb.connect("data/sample-lancedb");
const tbl = await db.openTable("my_vectors");
const _results1 = await tbl.search(Array(1536).fill(1.2)).limit(10).toArray();
// --8<-- [end:search1]
// --8<-- [start:search2]
const _results2 = await tbl
.search(Array(1536).fill(1.2))
.distanceType("cosine")
.limit(10)
.toArray();
console.log(_results2);
// --8<-- [end:search2]
console.log("search: done");

View File

@@ -1,50 +0,0 @@
import * as lancedb from "@lancedb/lancedb";
import { LanceSchema, getRegistry } from "@lancedb/lancedb/embedding";
import { Utf8 } from "apache-arrow";
const db = await lancedb.connect("/tmp/db");
const func = await getRegistry().get("huggingface").create();
const facts = [
"Albert Einstein was a theoretical physicist.",
"The capital of France is Paris.",
"The Great Wall of China is one of the Seven Wonders of the World.",
"Python is a popular programming language.",
"Mount Everest is the highest mountain in the world.",
"Leonardo da Vinci painted the Mona Lisa.",
"Shakespeare wrote Hamlet.",
"The human body has 206 bones.",
"The speed of light is approximately 299,792 kilometers per second.",
"Water boils at 100 degrees Celsius.",
"The Earth orbits the Sun.",
"The Pyramids of Giza are located in Egypt.",
"Coffee is one of the most popular beverages in the world.",
"Tokyo is the capital city of Japan.",
"Photosynthesis is the process by which plants make their food.",
"The Pacific Ocean is the largest ocean on Earth.",
"Mozart was a prolific composer of classical music.",
"The Internet is a global network of computers.",
"Basketball is a sport played with a ball and a hoop.",
"The first computer virus was created in 1983.",
"Artificial neural networks are inspired by the human brain.",
"Deep learning is a subset of machine learning.",
"IBM's Watson won Jeopardy! in 2011.",
"The first computer programmer was Ada Lovelace.",
"The first chatbot was ELIZA, created in the 1960s.",
].map((text) => ({ text }));
const factsSchema = LanceSchema({
text: func.sourceField(new Utf8()),
vector: func.vectorField(),
});
const tbl = await db.createTable("facts", facts, {
mode: "overwrite",
schema: factsSchema,
});
const query = "How many bones are in the human body?";
const actual = await tbl.search(query).limit(1).toArray();
console.log("Answer: ", actual[0]["text"]);

View File

@@ -0,0 +1,63 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import { expect, test } from "@jest/globals";
import { withTempDirectory } from "./util.ts";
import * as lancedb from "@lancedb/lancedb";
import "@lancedb/lancedb/embedding/transformers";
import { LanceSchema, getRegistry } from "@lancedb/lancedb/embedding";
import { EmbeddingFunction } from "@lancedb/lancedb/embedding";
import { Utf8 } from "apache-arrow";
test("full text search", async () => {
await withTempDirectory(async (databaseDir) => {
const db = await lancedb.connect(databaseDir);
console.log(getRegistry());
const func = (await getRegistry()
.get("huggingface")
?.create()) as EmbeddingFunction;
const facts = [
"Albert Einstein was a theoretical physicist.",
"The capital of France is Paris.",
"The Great Wall of China is one of the Seven Wonders of the World.",
"Python is a popular programming language.",
"Mount Everest is the highest mountain in the world.",
"Leonardo da Vinci painted the Mona Lisa.",
"Shakespeare wrote Hamlet.",
"The human body has 206 bones.",
"The speed of light is approximately 299,792 kilometers per second.",
"Water boils at 100 degrees Celsius.",
"The Earth orbits the Sun.",
"The Pyramids of Giza are located in Egypt.",
"Coffee is one of the most popular beverages in the world.",
"Tokyo is the capital city of Japan.",
"Photosynthesis is the process by which plants make their food.",
"The Pacific Ocean is the largest ocean on Earth.",
"Mozart was a prolific composer of classical music.",
"The Internet is a global network of computers.",
"Basketball is a sport played with a ball and a hoop.",
"The first computer virus was created in 1983.",
"Artificial neural networks are inspired by the human brain.",
"Deep learning is a subset of machine learning.",
"IBM's Watson won Jeopardy! in 2011.",
"The first computer programmer was Ada Lovelace.",
"The first chatbot was ELIZA, created in the 1960s.",
].map((text) => ({ text }));
const factsSchema = LanceSchema({
text: func.sourceField(new Utf8()),
vector: func.vectorField(),
});
const tbl = await db.createTable("facts", facts, {
mode: "overwrite",
schema: factsSchema,
});
const query = "How many bones are in the human body?";
const actual = await tbl.search(query).limit(1).toArray();
expect(actual[0]["text"]).toBe("The human body has 206 bones.");
});
}, 100_000);

View File

@@ -0,0 +1,17 @@
{
"include": ["*.test.ts"],
"compilerOptions": {
"target": "es2022",
"module": "NodeNext",
"declaration": true,
"outDir": "./dist",
"strict": true,
"allowJs": true,
"resolveJsonModule": true,
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"moduleResolution": "NodeNext",
"allowImportingTsExtensions": true,
"emitDeclarationOnly": true
}
}

nodejs/examples/util.ts Normal file
View File

@@ -0,0 +1,16 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
import * as fs from "fs";
import { tmpdir } from "os";
import * as path from "path";
export async function withTempDirectory(
fn: (tempDir: string) => Promise<void>,
) {
const tmpDirPath = fs.mkdtempSync(path.join(tmpdir(), "temp-dir-"));
try {
await fn(tmpDirPath);
} finally {
fs.rmSync(tmpDirPath, { recursive: true });
}
}

View File

@@ -4,4 +4,5 @@ module.exports = {
testEnvironment: "node",
moduleDirectories: ["node_modules", "./dist"],
moduleFileExtensions: ["js", "ts"],
modulePathIgnorePatterns: ["<rootDir>/examples/"],
};

View File

@@ -19,9 +19,6 @@ import { EmbeddingFunctionConfig, getRegistry } from "./registry";
export { EmbeddingFunction, TextEmbeddingFunction } from "./embedding_function";
// We need to explicitly export '*' so that the `register` decorator actually registers the class.
export * from "./openai";
export * from "./transformers";
export * from "./registry";
/**

View File

@@ -17,8 +17,6 @@ import {
type EmbeddingFunctionConstructor,
} from "./embedding_function";
import "reflect-metadata";
import { OpenAIEmbeddingFunction } from "./openai";
import { TransformersEmbeddingFunction } from "./transformers";
type CreateReturnType<T> = T extends { init: () => Promise<void> }
? Promise<T>
@@ -73,10 +71,6 @@ export class EmbeddingFunctionRegistry {
};
}
get(name: "openai"): EmbeddingFunctionCreate<OpenAIEmbeddingFunction>;
get(
name: "huggingface",
): EmbeddingFunctionCreate<TransformersEmbeddingFunction>;
get<T extends EmbeddingFunction<unknown>>(
name: string,
): EmbeddingFunctionCreate<T> | undefined;

View File

@@ -47,8 +47,8 @@ export class TransformersEmbeddingFunction extends EmbeddingFunction<
string,
Partial<XenovaTransformerOptions>
> {
#model?: import("@xenova/transformers").PreTrainedModel;
#tokenizer?: import("@xenova/transformers").PreTrainedTokenizer;
#model?: import("@huggingface/transformers").PreTrainedModel;
#tokenizer?: import("@huggingface/transformers").PreTrainedTokenizer;
#modelName: XenovaTransformerOptions["model"];
#initialized = false;
#tokenizerOptions: XenovaTransformerOptions["tokenizerOptions"];
@@ -92,18 +92,19 @@ export class TransformersEmbeddingFunction extends EmbeddingFunction<
try {
// SAFETY:
// since typescript transpiles `import` to `require`, we need to do this in an unsafe way
// We can't use `require` because `@xenova/transformers` is an ESM module
// We can't use `require` because `@huggingface/transformers` is an ESM module
// and we can't use `import` directly because typescript will transpile it to `require`.
// and we want to remain compatible with both ESM and CJS modules
// so we use `eval` to bypass typescript for this specific import.
transformers = await eval('import("@xenova/transformers")');
transformers = await eval('import("@huggingface/transformers")');
} catch (e) {
throw new Error(`error loading @xenova/transformers\nReason: ${e}`);
throw new Error(`error loading @huggingface/transformers\nReason: ${e}`);
}
try {
this.#model = await transformers.AutoModel.from_pretrained(
this.#modelName,
{ dtype: "fp32" },
);
} catch (e) {
throw new Error(
@@ -128,7 +129,8 @@ export class TransformersEmbeddingFunction extends EmbeddingFunction<
} else {
const config = this.#model!.config;
const ndims = config["hidden_size"];
// biome-ignore lint/style/useNamingConvention: we don't control this name.
const ndims = (config as unknown as { hidden_size: number }).hidden_size;
if (!ndims) {
throw new Error(
"hidden_size not found in model config, you may need to manually specify the embedding dimensions. ",
@@ -183,7 +185,7 @@ export class TransformersEmbeddingFunction extends EmbeddingFunction<
}
const tensorDiv = (
src: import("@xenova/transformers").Tensor,
src: import("@huggingface/transformers").Tensor,
divBy: number,
) => {
for (let i = 0; i < src.data.length; ++i) {

View File

@@ -492,6 +492,42 @@ export class VectorQuery extends QueryBase<NativeVectorQuery> {
super.doCall((inner) => inner.bypassVectorIndex());
return this;
}
/**
* Add a query vector to the search
*
* This method can be called multiple times to add multiple query vectors
* to the search. If multiple query vectors are added, then they will be searched
* in parallel, and the results will be concatenated. A column called `query_index`
* will be added to indicate the index of the query vector that produced the result.
*
* Performance-wise, this is equivalent to running multiple queries concurrently.
*/
addQueryVector(vector: IntoVector): VectorQuery {
if (vector instanceof Promise) {
const res = (async () => {
try {
const v = await vector;
const arr = Float32Array.from(v);
// biome-ignore lint/suspicious/noExplicitAny: we need to get the `inner`, but js has no package scoping
const value: any = this.addQueryVector(arr);
const inner = value.inner as
| NativeVectorQuery
| Promise<NativeVectorQuery>;
return inner;
} catch (e) {
return Promise.reject(e);
}
})();
return new VectorQuery(res);
} else {
super.doCall((inner) => {
inner.addQueryVector(Float32Array.from(vector));
});
return this;
}
}
}
/** A builder for LanceDB queries. */
@@ -571,4 +607,9 @@ export class Query extends QueryBase<NativeQuery> {
return new VectorQuery(vectorQuery);
}
}
nearestToText(query: string, columns?: string[]): Query {
this.doCall((inner) => inner.fullTextSearch(query, columns));
return this;
}
}

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-arm64",
"version": "0.13.0-beta.1",
"version": "0.13.0-beta.2",
"os": ["darwin"],
"cpu": ["arm64"],
"main": "lancedb.darwin-arm64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-x64",
"version": "0.13.0-beta.1",
"version": "0.13.0-beta.2",
"os": ["darwin"],
"cpu": ["x64"],
"main": "lancedb.darwin-x64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-gnu",
"version": "0.13.0-beta.1",
"version": "0.13.0-beta.2",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-gnu.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-gnu",
"version": "0.13.0-beta.1",
"version": "0.13.0-beta.2",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-gnu.node",

View File

@@ -0,0 +1,3 @@
# `@lancedb/lancedb-win32-arm64-msvc`
This is the **aarch64-pc-windows-msvc** binary for `@lancedb/lancedb`

View File

@@ -0,0 +1,18 @@
{
"name": "@lancedb/lancedb-win32-arm64-msvc",
"version": "0.13.0-beta.2",
"os": [
"win32"
],
"cpu": [
"arm64"
],
"main": "lancedb.win32-arm64-msvc.node",
"files": [
"lancedb.win32-arm64-msvc.node"
],
"license": "Apache 2.0",
"engines": {
"node": ">= 18"
}
}

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-x64-msvc",
"version": "0.13.0-beta.1",
"version": "0.13.0-beta.2",
"os": ["win32"],
"cpu": ["x64"],
"main": "lancedb.win32-x64-msvc.node",

nodejs/package-lock.json generated

File diff suppressed because it is too large

View File

@@ -10,11 +10,13 @@
"vector database",
"ann"
],
"version": "0.13.0-beta.1",
"version": "0.13.0-beta.2",
"main": "dist/index.js",
"exports": {
".": "./dist/index.js",
"./embedding": "./dist/embedding/index.js"
"./embedding": "./dist/embedding/index.js",
"./embedding/openai": "./dist/embedding/openai.js",
"./embedding/transformers": "./dist/embedding/transformers.js"
},
"types": "dist/index.d.ts",
"napi": {
@@ -85,7 +87,7 @@
"reflect-metadata": "^0.2.2"
},
"optionalDependencies": {
"@xenova/transformers": ">=2.17 < 3",
"@huggingface/transformers": "^3.0.2",
"openai": "^4.29.2"
},
"peerDependencies": {

View File

@@ -135,6 +135,16 @@ impl VectorQuery {
self.inner = self.inner.clone().column(&column);
}
#[napi]
pub fn add_query_vector(&mut self, vector: Float32Array) -> Result<()> {
self.inner = self
.inner
.clone()
.add_query_vector(vector.as_ref())
.default_error()?;
Ok(())
}
#[napi]
pub fn distance_type(&mut self, distance_type: String) -> napi::Result<()> {
let distance_type = parse_distance_type(distance_type)?;

View File

@@ -12,7 +12,7 @@
"experimentalDecorators": true,
"moduleResolution": "Node"
},
"exclude": ["./dist/*"],
"exclude": ["./dist/*", "./examples/*"],
"typedocOptions": {
"entryPoints": ["lancedb/index.ts"],
"out": "../docs/src/javascript/",

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.16.0-beta.0"
current_version = "0.16.0"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-python"
version = "0.16.0-beta.0"
version = "0.16.0"
edition.workspace = true
description = "Python bindings for LanceDB"
license.workspace = true

View File

@@ -4,7 +4,7 @@ name = "lancedb"
dependencies = [
"deprecation",
"nest-asyncio~=1.0",
"pylance==0.19.2-beta.3",
"pylance==0.19.2",
"tqdm>=4.27.0",
"pydantic>=1.10",
"packaging",

View File

@@ -27,3 +27,4 @@ from .imagebind import ImageBindEmbeddings
from .utils import with_embeddings
from .jinaai import JinaEmbeddings
from .watsonx import WatsonxEmbeddings
from .voyageai import VoyageAIEmbeddingFunction

View File

@@ -1,15 +1,6 @@
# Copyright (c) 2023. LanceDB Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
import json
from typing import Dict, Optional
@@ -170,7 +161,7 @@ def register(name):
return __REGISTRY__.get_instance().register(name)
def get_registry():
def get_registry() -> EmbeddingFunctionRegistry:
"""
Utility function to get the global instance of the registry

View File

@@ -0,0 +1,127 @@
# Copyright (c) 2023. LanceDB Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from typing import ClassVar, List, Union
import numpy as np
from ..util import attempt_import_or_raise
from .base import TextEmbeddingFunction
from .registry import register
from .utils import api_key_not_found_help, TEXT
@register("voyageai")
class VoyageAIEmbeddingFunction(TextEmbeddingFunction):
"""
An embedding function that uses the VoyageAI API
https://docs.voyageai.com/docs/embeddings
Parameters
----------
name: str
The name of the model to use. List of acceptable models:
* voyage-3
* voyage-3-lite
* voyage-finance-2
* voyage-multilingual-2
* voyage-law-2
* voyage-code-2
Examples
--------
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry
voyageai = (
EmbeddingFunctionRegistry.get_instance().get("voyageai").create(name="voyage-3")
)
class TextModel(LanceModel):
text: str = voyageai.SourceField()
vector: Vector(voyageai.ndims()) = voyageai.VectorField()
data = [{"text": "hello world"}, {"text": "goodbye world"}]
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(data)
"""
name: str
client: ClassVar = None
def ndims(self):
if self.name == "voyage-3-lite":
return 512
elif self.name == "voyage-code-2":
return 1536
elif self.name in [
"voyage-3",
"voyage-finance-2",
"voyage-multilingual-2",
"voyage-law-2",
]:
return 1024
else:
raise ValueError(f"Model {self.name} not supported")
def compute_query_embeddings(self, query: str, *args, **kwargs) -> List[np.array]:
return self.compute_source_embeddings(query, input_type="query")
def compute_source_embeddings(self, texts: TEXT, *args, **kwargs) -> List[np.array]:
texts = self.sanitize_input(texts)
input_type = (
kwargs.get("input_type") or "document"
) # assume source input type if not passed by `compute_query_embeddings`
return self.generate_embeddings(texts, input_type=input_type)
def generate_embeddings(
self, texts: Union[List[str], np.ndarray], *args, **kwargs
) -> List[np.array]:
"""
Get the embeddings for the given texts
Parameters
----------
texts: list[str] or np.ndarray (of str)
The texts to embed
input_type: Optional[str]
Either "query" or "document"; defaults to "document" for source embeddings.
truncation: Optional[bool]
Whether to truncate inputs that exceed the model's context length.
"""
VoyageAIEmbeddingFunction._init_client()
rs = VoyageAIEmbeddingFunction.client.embed(
texts=texts, model=self.name, **kwargs
)
return [emb for emb in rs.embeddings]
@staticmethod
def _init_client():
if VoyageAIEmbeddingFunction.client is None:
voyageai = attempt_import_or_raise("voyageai")
if os.environ.get("VOYAGE_API_KEY") is None:
api_key_not_found_help("voyageai")
VoyageAIEmbeddingFunction.client = voyageai.Client(
os.environ["VOYAGE_API_KEY"]
)
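Reviewer note: a minimal sketch of using the new `voyageai` registry entry end to end (path, table name, and data are illustrative; `VOYAGE_API_KEY` must be set):

```python
import lancedb
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector

# Create the embedding function registered by this commit.
func = get_registry().get("voyageai").create(name="voyage-3-lite")  # 512 dims

class Doc(LanceModel):
    text: str = func.SourceField()  # embedded automatically on add()
    vector: Vector(func.ndims()) = func.VectorField()

db = lancedb.connect("data/voyage-demo")  # illustrative path
tbl = db.create_table("docs", schema=Doc, mode="overwrite")
tbl.add([
    {"text": "The Eiffel Tower is in Paris."},
    {"text": "Koalas eat eucalyptus leaves."},
])
# search() embeds the query string with the same function.
print(tbl.search("Where is the Eiffel Tower?").limit(1).to_pandas()["text"][0])
```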

View File

@@ -943,12 +943,16 @@ class LanceFtsQueryBuilder(LanceQueryBuilder):
class LanceEmptyQueryBuilder(LanceQueryBuilder):
def to_arrow(self) -> pa.Table:
ds = self._table.to_lance()
return ds.to_table(
query = Query(
columns=self._columns,
filter=self._where,
limit=self._limit,
k=self._limit or 10,
with_row_id=self._with_row_id,
vector=[],
# not actually respected in remote query
offset=self._offset or 0,
)
return self._table._execute_query(query).read_all()
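If I read the new builder correctly, this enables filter-only scans through the same query path; a hedged sketch (path and data are illustrative):

```python
import lancedb

db = lancedb.connect("data/empty-query-demo")  # illustrative path
tbl = db.create_table(
    "items", [{"id": i, "vector": [float(i), float(i)]} for i in range(20)]
)
# No vector and no text: search() falls through to the empty query builder,
# which now also works against remote tables.
rows = tbl.search().where("id >= 15").limit(10).to_arrow()
assert rows.num_rows == 5
```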
def rerank(self, reranker: Reranker) -> LanceEmptyQueryBuilder:
"""Rerank the results using the specified reranker.
@@ -1491,7 +1495,7 @@ class AsyncQuery(AsyncQueryBase):
return pa.array(vec)
def nearest_to(
self, query_vector: Optional[Union[VEC, Tuple]] = None
self, query_vector: Optional[Union[VEC, Tuple, List[VEC]]] = None
) -> AsyncVectorQuery:
"""
Find the nearest vectors to the given query vector.
@@ -1529,10 +1533,30 @@ class AsyncQuery(AsyncQueryBase):
Vector searches always have a [limit][]. If `limit` has not been called then
a default `limit` of 10 will be used.
Typically, a single vector is passed in as the query. However, you can also
pass in multiple vectors. This can be useful if you want to find the nearest
vectors to multiple query vectors. This is not expected to be faster than
making multiple queries concurrently; it is just a convenience method.
If multiple vectors are passed in then an additional column `query_index`
will be added to the results. This column will contain the index of the
query vector that the result is nearest to.
"""
return AsyncVectorQuery(
self._inner.nearest_to(AsyncQuery._query_vec_to_array(query_vector))
)
if (
isinstance(query_vector, list)
and len(query_vector) > 0
and not isinstance(query_vector[0], (float, int))
):
# multiple have been passed
query_vectors = [AsyncQuery._query_vec_to_array(v) for v in query_vector]
new_self = self._inner.nearest_to(query_vectors[0])
for v in query_vectors[1:]:
new_self.add_query_vector(v)
return AsyncVectorQuery(new_self)
else:
return AsyncVectorQuery(
self._inner.nearest_to(AsyncQuery._query_vec_to_array(query_vector))
)
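A short sketch of the multi-vector path described in the docstring above (path and data are illustrative):

```python
import asyncio
import lancedb

async def main():
    db = await lancedb.connect_async("data/multi-query")  # illustrative path
    tbl = await db.create_table(
        "points",
        [{"id": i, "vector": [float(i), float(i)]} for i in range(100)],
        mode="overwrite",
    )
    # A list of vectors fans out into parallel searches; `query_index`
    # marks which query vector produced each result row.
    df = await tbl.query().nearest_to([[0.0, 0.0], [99.0, 99.0]]).limit(2).to_pandas()
    print(df.sort_values("query_index")[["query_index", "id", "_distance"]])

asyncio.run(main())
```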
def nearest_to_text(
self, query: str, columns: Union[str, List[str]] = []

View File

@@ -11,6 +11,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from datetime import timedelta
import asyncio
import logging
from functools import cached_property
@@ -326,10 +327,6 @@ class RemoteTable(Table):
- and also the "_distance" column which is the distance between the query
vector and the returned vector.
"""
# empty query builder is not supported in saas, raise error
if query is None and query_type != "hybrid":
raise ValueError("Empty query is not supported")
return LanceQueryBuilder.create(
self,
query,
@@ -478,6 +475,19 @@ class RemoteTable(Table):
"compact_files() is not supported on the LanceDB cloud"
)
def optimize(
self,
*,
cleanup_older_than: Optional[timedelta] = None,
delete_unverified: bool = False,
):
"""optimize() is not supported on the LanceDB cloud.
Indices are optimized automatically."""
raise NotImplementedError(
"optimize() is not supported on the LanceDB cloud. "
"Indices are optimized automatically."
)
def count_rows(self, filter: Optional[str] = None) -> int:
return self._loop.run_until_complete(self._table.count_rows(filter))

View File

@@ -7,6 +7,7 @@ from .openai import OpenaiReranker
from .jinaai import JinaReranker
from .rrf import RRFReranker
from .answerdotai import AnswerdotaiRerankers
from .voyageai import VoyageAIReranker
__all__ = [
"Reranker",
@@ -18,4 +19,5 @@ __all__ = [
"JinaReranker",
"RRFReranker",
"AnswerdotaiRerankers",
"VoyageAIReranker",
]

View File

@@ -0,0 +1,133 @@
# Copyright (c) 2023. LanceDB Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from functools import cached_property
from typing import Optional
import pyarrow as pa
from ..util import attempt_import_or_raise
from .base import Reranker
class VoyageAIReranker(Reranker):
"""
Reranks the results using the VoyageAI Rerank API.
https://docs.voyageai.com/docs/reranker
Parameters
----------
model_name : str
The name of the reranker model to use. Available voyageai models are:
- rerank-2
- rerank-2-lite
column : str, default "text"
The name of the column to use as input to the cross encoder model.
top_n : int, default None
The number of results to return. If None, will return all results.
return_score : str, default "relevance"
options are "relevance" or "all". Only "relevance" is supported for now.
api_key : str, default None
The API key to use. If None, will use the VOYAGE_API_KEY environment variable.
truncation : Optional[bool], default True
Whether inputs longer than the model's context length should be truncated.
"""
def __init__(
self,
model_name: str,
column: str = "text",
top_n: Optional[int] = None,
return_score="relevance",
api_key: Optional[str] = None,
truncation: Optional[bool] = True,
):
super().__init__(return_score)
self.model_name = model_name
self.column = column
self.top_n = top_n
self.api_key = api_key
self.truncation = truncation
@cached_property
def _client(self):
voyageai = attempt_import_or_raise("voyageai")
if os.environ.get("VOYAGE_API_KEY") is None and self.api_key is None:
raise ValueError(
"VOYAGE_API_KEY not set. Either set it in your environment or \
pass it as `api_key` argument to the VoyageAIReranker."
)
return voyageai.Client(
api_key=os.environ.get("VOYAGE_API_KEY") or self.api_key,
)
def _rerank(self, result_set: pa.Table, query: str):
docs = result_set[self.column].to_pylist()
response = self._client.rerank(
query=query,
documents=docs,
top_k=self.top_n,
model=self.model_name,
truncation=self.truncation,
)
results = (
response.results
) # each result has index and relevance_score attributes, sorted descending by score
indices, scores = list(
zip(*[(result.index, result.relevance_score) for result in results])
) # tuples
result_set = result_set.take(list(indices))
# add the scores
result_set = result_set.append_column(
"_relevance_score", pa.array(scores, type=pa.float32())
)
return result_set
def rerank_hybrid(
self,
query: str,
vector_results: pa.Table,
fts_results: pa.Table,
):
combined_results = self.merge_results(vector_results, fts_results)
combined_results = self._rerank(combined_results, query)
if self.score == "relevance":
combined_results = self._keep_relevance_score(combined_results)
elif self.score == "all":
raise NotImplementedError(
"return_score='all' not implemented for voyageai reranker"
)
return combined_results
def rerank_vector(
self,
query: str,
vector_results: pa.Table,
):
result_set = self._rerank(vector_results, query)
if self.score == "relevance":
result_set = result_set.drop_columns(["_distance"])
return result_set
def rerank_fts(
self,
query: str,
fts_results: pa.Table,
):
result_set = self._rerank(fts_results, query)
if self.score == "relevance":
result_set = result_set.drop_columns(["_score"])
return result_set
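Reviewer note: a sketch of plugging the reranker into a vector search, assuming the sync builder's `rerank(reranker, query_string=...)` hook (model, data, and path are illustrative; requires `VOYAGE_API_KEY`):

```python
import lancedb
from lancedb.rerankers import VoyageAIReranker

db = lancedb.connect("data/rerank-demo")  # illustrative path
tbl = db.create_table(
    "snippets",
    [
        {"text": "LanceDB stores vectors on disk.", "vector": [0.1, 0.9]},
        {"text": "Bananas are yellow.", "vector": [0.9, 0.1]},
    ],
    mode="overwrite",
)
reranker = VoyageAIReranker(model_name="rerank-2-lite", column="text", top_n=1)
# rerank_vector() drops `_distance` and appends `_relevance_score`.
hits = (
    tbl.search([0.1, 0.9])
    .rerank(reranker=reranker, query_string="How does LanceDB store data?")
    .to_pandas()
)
print(hits[["text", "_relevance_score"]])
```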

View File

@@ -3,6 +3,7 @@
from __future__ import annotations
import asyncio
import inspect
import time
from abc import ABC, abstractmethod
@@ -32,7 +33,7 @@ import pyarrow.fs as pa_fs
from lance import LanceDataset
from lance.dependencies import _check_for_hugging_face
from .common import DATA, VEC, VECTOR_COLUMN_NAME
from .common import DATA, VEC, VECTOR_COLUMN_NAME, sanitize_uri
from .embeddings import EmbeddingFunctionConfig, EmbeddingFunctionRegistry
from .merge import LanceMergeInsertBuilder
from .pydantic import LanceModel, model_to_dict
@@ -57,6 +58,8 @@ from .util import (
)
from .index import lang_mapping
from ._lancedb import connect as lancedb_connect
if TYPE_CHECKING:
import PIL
from lance.dataset import CleanupStats, ReaderLike
@@ -70,6 +73,21 @@ pl = safe_import_polars()
QueryType = Literal["vector", "fts", "hybrid", "auto"]
def _pd_schema_without_embedding_funcs(
schema: Optional[pa.Schema], columns: List[str]
) -> Optional[pa.Schema]:
"""Return a schema without any embedding function columns"""
if schema is None:
return None
embedding_functions = EmbeddingFunctionRegistry.get_instance().parse_functions(
schema.metadata
)
if not embedding_functions:
return schema
columns = set(columns)
return pa.schema([field for field in schema if field.name in columns])
def _coerce_to_table(data, schema: Optional[pa.Schema] = None) -> pa.Table:
if _check_for_hugging_face(data):
# Huggingface datasets
@@ -100,10 +118,10 @@ def _coerce_to_table(data, schema: Optional[pa.Schema] = None) -> pa.Table:
elif isinstance(data[0], pa.RecordBatch):
return pa.Table.from_batches(data, schema=schema)
else:
return pa.Table.from_pylist(data)
return pa.Table.from_pylist(data, schema=schema)
elif _check_for_pandas(data) and isinstance(data, pd.DataFrame):
# Do not add schema here, since schema may contains the vector column
table = pa.Table.from_pandas(data, preserve_index=False)
raw_schema = _pd_schema_without_embedding_funcs(schema, data.columns.to_list())
table = pa.Table.from_pandas(data, preserve_index=False, schema=raw_schema)
# Do not serialize Pandas metadata
meta = table.schema.metadata if table.schema.metadata is not None else {}
meta = {k: v for k, v in meta.items() if k != b"pandas"}
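If the new `from_pylist(..., schema=schema)` path works as intended, nullable columns can now be omitted on insert; a hedged sketch (path and names are illustrative):

```python
import lancedb
import pyarrow as pa

schema = pa.schema([
    pa.field("id", pa.int64(), nullable=False),
    pa.field("note", pa.utf8(), nullable=True),
])
db = lancedb.connect("data/subschema-demo")  # illustrative path
tbl = db.create_table("rows", schema=schema)
tbl.add([{"id": 1, "note": "hello"}])
tbl.add([{"id": 2}])  # nullable `note` omitted; stored as null
```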
@@ -169,6 +187,8 @@ def sanitize_create_table(
schema = schema.to_arrow_schema()
if data is not None:
if metadata is None and schema is not None:
metadata = schema.metadata
data, schema = _sanitize_data(
data,
schema,
@@ -893,6 +913,55 @@ class Table(ABC):
For most cases, the default should be fine.
"""
@abstractmethod
def optimize(
self,
*,
cleanup_older_than: Optional[timedelta] = None,
delete_unverified: bool = False,
):
"""
Optimize the on-disk data and indices for better performance.
Modeled after ``VACUUM`` in PostgreSQL.
Optimization covers three operations:
* Compaction: Merges small files into larger ones
* Prune: Removes old versions of the dataset
* Index: Optimizes the indices, adding new data to existing indices
Parameters
----------
cleanup_older_than: timedelta, optional default 7 days
All files belonging to versions older than this will be removed. Set
to 0 days to remove all versions except the latest. The latest version
is never removed.
delete_unverified: bool, default False
Files leftover from a failed transaction may appear to be part of an
in-progress operation (e.g. appending new data) and these files will not
be deleted unless they are at least 7 days old. If delete_unverified is True
then these files will be deleted regardless of their age.
Experimental API
----------------
The optimization process is undergoing active development and may change.
Our goal with these changes is to improve the performance of optimization and
reduce the complexity.
That being said, it is essential today to run optimize if you want the best
performance. It should be stable and safe to use in production, but it is our
hope that the API may be simplified (or not even need to be called) in the
future.
The frequency with which an application should call optimize depends on the frequency of
data modifications. If data is frequently added, deleted, or updated then
optimize should be run frequently. A good rule of thumb is to run optimize if
you have added or modified 100,000 or more records or run more than 20 data
modification operations.
"""
@abstractmethod
def add_columns(self, transforms: Dict[str, str]):
"""
@@ -1498,7 +1567,7 @@ class LanceTable(Table):
"append" and "overwrite".
on_bad_vectors: str, default "error"
What to do if any of the vectors are not the same size or contain NaNs.
One of "error", "drop", "fill".
One of "error", "drop", "fill", "null".
fill_value: float, default 0.
The value to use when filling vectors. Only used if on_bad_vectors="fill".
@@ -1782,7 +1851,7 @@ class LanceTable(Table):
data but will validate against any schema that's specified.
on_bad_vectors: str, default "error"
What to do if any of the vectors are not the same size or contain NaNs.
One of "error", "drop", "fill".
One of "error", "drop", "fill", "null".
fill_value: float, default 0.
The value to use when filling vectors. Only used if on_bad_vectors="fill".
embedding_functions: list of EmbeddingFunctionModel, default None
@@ -1971,6 +2040,83 @@ class LanceTable(Table):
"""
return self.to_lance().optimize.compact_files(*args, **kwargs)
def optimize(
self,
*,
cleanup_older_than: Optional[timedelta] = None,
delete_unverified: bool = False,
):
"""
Optimize the on-disk data and indices for better performance.
Modeled after ``VACUUM`` in PostgreSQL.
Optimization covers three operations:
* Compaction: Merges small files into larger ones
* Prune: Removes old versions of the dataset
* Index: Optimizes the indices, adding new data to existing indices
Parameters
----------
cleanup_older_than: timedelta, optional default 7 days
All files belonging to versions older than this will be removed. Set
to 0 days to remove all versions except the latest. The latest version
is never removed.
delete_unverified: bool, default False
Files leftover from a failed transaction may appear to be part of an
in-progress operation (e.g. appending new data) and these files will not
be deleted unless they are at least 7 days old. If delete_unverified is True
then these files will be deleted regardless of their age.
Experimental API
----------------
The optimization process is undergoing active development and may change.
Our goal with these changes is to improve the performance of optimization and
reduce the complexity.
That being said, it is essential today to run optimize if you want the best
performance. It should be stable and safe to use in production, but it is our
hope that the API may be simplified (or not even need to be called) in the
future.
The frequency with which an application should call optimize depends on the frequency of
data modifications. If data is frequently added, deleted, or updated then
optimize should be run frequently. A good rule of thumb is to run optimize if
you have added or modified 100,000 or more records or run more than 20 data
modification operations.
"""
try:
asyncio.get_running_loop()
raise AssertionError(
"Synchronous method called in asynchronous context. "
"If you are writing an asynchronous application "
"then please use the asynchronous APIs"
)
except RuntimeError:
asyncio.run(
self._async_optimize(
cleanup_older_than=cleanup_older_than,
delete_unverified=delete_unverified,
)
)
self.checkout_latest()
async def _async_optimize(
self,
cleanup_older_than: Optional[timedelta] = None,
delete_unverified: bool = False,
):
conn = await lancedb_connect(
sanitize_uri(self._conn.uri),
)
table = AsyncTable(await conn.open_table(self.name))
return await table.optimize(
cleanup_older_than=cleanup_older_than, delete_unverified=delete_unverified
)
def add_columns(self, transforms: Dict[str, str]):
self._dataset_mut.add_columns(transforms)
@@ -2005,13 +2151,11 @@ def _sanitize_schema(
vector column to fixed_size_list(float32) if necessary.
on_bad_vectors: str, default "error"
What to do if any of the vectors are not the same size or contain NaNs.
One of "error", "drop", "fill".
One of "error", "drop", "fill", "null".
fill_value: float, default 0.
The value to use when filling vectors. Only used if on_bad_vectors="fill".
"""
if schema is not None:
if data.schema == schema:
return data
# cast the columns to the expected types
data = data.combine_chunks()
for field in schema:
@@ -2031,6 +2175,7 @@ def _sanitize_schema(
vector_column_name=field.name,
on_bad_vectors=on_bad_vectors,
fill_value=fill_value,
table_schema=schema,
)
return pa.Table.from_arrays(
[data[name] for name in schema.names], schema=schema
@@ -2051,6 +2196,7 @@ def _sanitize_schema(
def _sanitize_vector_column(
data: pa.Table,
vector_column_name: str,
table_schema: Optional[pa.Schema] = None,
on_bad_vectors: str = "error",
fill_value: float = 0.0,
) -> pa.Table:
@@ -2065,12 +2211,16 @@ def _sanitize_vector_column(
The name of the vector column.
on_bad_vectors: str, default "error"
What to do if any of the vectors are not the same size or contain NaNs.
One of "error", "drop", "fill".
One of "error", "drop", "fill", "null".
fill_value: float, default 0.0
The value to use when filling vectors. Only used if on_bad_vectors="fill".
"""
# ChunkedArray is annoying to work with, so we combine chunks here
vec_arr = data[vector_column_name].combine_chunks()
if table_schema is not None:
field = table_schema.field(vector_column_name)
else:
field = None
typ = data[vector_column_name].type
if pa.types.is_list(typ) or pa.types.is_large_list(typ):
# if it's a variable size list array,
@@ -2097,7 +2247,11 @@ def _sanitize_vector_column(
data, fill_value, on_bad_vectors, vec_arr, vector_column_name
)
else:
if pc.any(pc.is_null(vec_arr.values, nan_is_null=True)).as_py():
if (
field is not None
and not field.nullable
and pc.any(pc.is_null(vec_arr.values)).as_py()
) or (pc.any(pc.is_nan(vec_arr.values)).as_py()):
data = _sanitize_nans(
data, fill_value, on_bad_vectors, vec_arr, vector_column_name
)
@@ -2141,6 +2295,12 @@ def _sanitize_jagged(data, fill_value, on_bad_vectors, vec_arr, vector_column_na
)
elif on_bad_vectors == "drop":
data = data.filter(correct_ndims)
elif on_bad_vectors == "null":
data = data.set_column(
data.column_names.index(vector_column_name),
vector_column_name,
pc.if_else(correct_ndims, vec_arr, pa.scalar(None)),
)
return data
@@ -2157,7 +2317,8 @@ def _sanitize_nans(
raise ValueError(
f"Vector column {vector_column_name} has NaNs. "
"Set on_bad_vectors='drop' to remove them, or "
"set on_bad_vectors='fill' and fill_value=<value> to replace them."
"set on_bad_vectors='fill' and fill_value=<value> to replace them. "
"Or set on_bad_vectors='null' to replace them with null."
)
elif on_bad_vectors == "fill":
if fill_value is None:
@@ -2177,6 +2338,17 @@ def _sanitize_nans(
np_arr = np_arr.reshape(-1, vec_arr.type.list_size)
has_nans = np.any(np_arr, axis=1)
data = data.filter(~has_nans)
elif on_bad_vectors == "null":
np_arr = np.isnan(vec_arr.values.to_numpy(zero_copy_only=False))
np_arr = np_arr.reshape(-1, vec_arr.type.list_size)
# True for rows whose vector contains at least one NaN
has_nans = np.any(np_arr, axis=1)
data = data.set_column(
data.column_names.index(vector_column_name),
vector_column_name,
pc.if_else(~has_nans, vec_arr, pa.scalar(None)),
)
return data
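As a standalone illustration of the masking used in `_sanitize_nans`, here is a minimal sketch (hypothetical data) of nulling out NaN-containing rows of a fixed-size-list array with `pyarrow.compute.if_else`:

```python
import numpy as np
import pyarrow as pa
import pyarrow.compute as pc

vec_arr = pa.array([[1.0, 2.0], [np.nan, 4.0]], type=pa.list_(pa.float32(), 2))
flat = vec_arr.values.to_numpy(zero_copy_only=False)
has_nans = np.isnan(flat).reshape(-1, 2).any(axis=1)

# Keep clean vectors, replace NaN-containing ones with null.
cleaned = pc.if_else(~has_nans, vec_arr, pa.scalar(None))
assert cleaned.null_count == 1
```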
@@ -2442,7 +2614,7 @@ class AsyncTable:
"append" and "overwrite".
on_bad_vectors: str, default "error"
What to do if any of the vectors are not the same size or contain NaNs.
One of "error", "drop", "fill".
One of "error", "drop", "fill", "null".
fill_value: float, default 0.
The value to use when filling vectors. Only used if on_bad_vectors="fill".

View File

@@ -1,15 +1,6 @@
# Copyright 2023 LanceDB Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
from typing import List, Union
from unittest.mock import MagicMock, patch
@@ -18,6 +9,7 @@ import lancedb
import numpy as np
import pyarrow as pa
import pytest
import pandas as pd
from lancedb.conftest import MockTextEmbeddingFunction
from lancedb.embeddings import (
EmbeddingFunctionConfig,
@@ -89,14 +81,15 @@ def test_embedding_function(tmp_path):
def test_embedding_with_bad_results(tmp_path):
@register("mock-embedding")
class MockEmbeddingFunction(TextEmbeddingFunction):
@register("null-embedding")
class NullEmbeddingFunction(TextEmbeddingFunction):
def ndims(self):
return 128
def generate_embeddings(
self, texts: Union[List[str], np.ndarray]
) -> list[Union[np.array, None]]:
# Return None, which is bad if field is non-nullable
return [
None if i % 2 == 0 else np.random.randn(self.ndims())
for i in range(len(texts))
@@ -104,13 +97,17 @@ def test_embedding_with_bad_results(tmp_path):
db = lancedb.connect(tmp_path)
registry = EmbeddingFunctionRegistry.get_instance()
model = registry.get("mock-embedding").create()
model = registry.get("null-embedding").create()
class Schema(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
table = db.create_table("test", schema=Schema, mode="overwrite")
with pytest.raises(ValueError):
# Default on_bad_vectors is "error"
table.add([{"text": "hello world"}])
table.add(
[{"text": "hello world"}, {"text": "bar"}],
on_bad_vectors="drop",
@@ -120,13 +117,169 @@ def test_embedding_with_bad_results(tmp_path):
assert len(table) == 1
assert df.iloc[0]["text"] == "bar"
@register("nan-embedding")
class NanEmbeddingFunction(TextEmbeddingFunction):
def ndims(self):
return 128
def generate_embeddings(
self, texts: Union[List[str], np.ndarray]
) -> list[Union[np.array, None]]:
# Return NaN to produce bad vectors
return [
[np.NAN] * 128 if i % 2 == 0 else np.random.randn(self.ndims())
for i in range(len(texts))
]
db = lancedb.connect(tmp_path)
registry = EmbeddingFunctionRegistry.get_instance()
model = registry.get("nan-embedding").create()
table = db.create_table("test2", schema=Schema, mode="overwrite")
table.alter_columns(dict(path="vector", nullable=True))
table.add(
[{"text": "hello world"}, {"text": "bar"}],
on_bad_vectors="null",
)
assert len(table) == 2
tbl = table.to_arrow()
assert tbl["vector"].null_count == 1
def test_with_existing_vectors(tmp_path):
@register("mock-embedding")
class MockEmbeddingFunction(TextEmbeddingFunction):
def ndims(self):
return 128
def generate_embeddings(
self, texts: Union[List[str], np.ndarray]
) -> List[np.array]:
return [np.random.randn(self.ndims()).tolist() for _ in range(len(texts))]
registry = get_registry()
model = registry.get("mock-embedding").create()
class Schema(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
db = lancedb.connect(tmp_path)
tbl = db.create_table("test", schema=Schema, mode="overwrite")
tbl.add([{"text": "hello world", "vector": np.zeros(128).tolist()}])
embeddings = tbl.to_arrow()["vector"].to_pylist()
assert not np.any(embeddings), "all zeros"
def test_embedding_function_with_pandas(tmp_path):
@register("mock-embedding")
class _MockEmbeddingFunction(TextEmbeddingFunction):
def ndims(self):
return 128
def generate_embeddings(
self, texts: Union[List[str], np.ndarray]
) -> List[np.array]:
return [np.random.randn(self.ndims()).tolist() for _ in range(len(texts))]
registry = get_registry()
func = registry.get("mock-embedding").create()
class TestSchema(LanceModel):
text: str = func.SourceField()
val: int
vector: Vector(func.ndims()) = func.VectorField()
df = pd.DataFrame(
{
"text": ["hello world", "goodbye world"],
"val": [1, 2],
"not-used": ["s1", "s3"],
}
)
db = lancedb.connect(tmp_path)
tbl = db.create_table("test", schema=TestSchema, mode="overwrite", data=df)
schema = tbl.schema
assert schema.field("text").type == pa.string()
assert schema.field("val").type == pa.int64()
assert schema.field("vector").type == pa.list_(pa.float32(), 128)
df = pd.DataFrame(
{
"text": ["extra", "more"],
"val": [4, 5],
"misc-col": ["s1", "s3"],
}
)
tbl.add(df)
assert tbl.count_rows() == 4
embeddings = tbl.to_arrow()["vector"]
assert embeddings.null_count == 0
df = pd.DataFrame(
{
"text": ["with", "embeddings"],
"val": [6, 7],
"vector": [np.zeros(128).tolist(), np.zeros(128).tolist()],
}
)
tbl.add(df)
embeddings = tbl.search().where("val > 5").to_arrow()["vector"].to_pylist()
assert not np.any(embeddings), "all zeros"
def test_multiple_embeddings_for_pandas(tmp_path):
@register("mock-embedding")
class MockFunc1(TextEmbeddingFunction):
def ndims(self):
return 128
def generate_embeddings(
self, texts: Union[List[str], np.ndarray]
) -> List[np.array]:
return [np.random.randn(self.ndims()).tolist() for _ in range(len(texts))]
@register("mock-embedding2")
class MockFunc2(TextEmbeddingFunction):
def ndims(self):
return 512
def generate_embeddings(
self, texts: Union[List[str], np.ndarray]
) -> List[np.array]:
return [np.random.randn(self.ndims()).tolist() for _ in range(len(texts))]
registry = get_registry()
func1 = registry.get("mock-embedding").create()
func2 = registry.get("mock-embedding2").create()
class TestSchema(LanceModel):
text: str = func1.SourceField()
val: int
vec1: Vector(func1.ndims()) = func1.VectorField()
prompt: str = func2.SourceField()
vec2: Vector(func2.ndims()) = func2.VectorField()
df = pd.DataFrame(
{
"text": ["hello world", "goodbye world"],
"val": [1, 2],
"prompt": ["hello", "goodbye"],
}
)
db = lancedb.connect(tmp_path)
tbl = db.create_table("test", schema=TestSchema, mode="overwrite", data=df)
schema = tbl.schema
assert schema.field("text").type == pa.string()
assert schema.field("val").type == pa.int64()
assert schema.field("vec1").type == pa.list_(pa.float32(), 128)
assert schema.field("prompt").type == pa.string()
assert schema.field("vec2").type == pa.list_(pa.float32(), 512)
assert tbl.count_rows() == 2
@pytest.mark.slow
@@ -196,6 +349,7 @@ def test_add_optional_vector(tmp_path):
"ollama",
"cohere",
"instructor",
"voyageai",
],
)
def test_embedding_function_safe_model_dump(embedding_type):

View File

@@ -481,3 +481,22 @@ def test_ollama_embedding(tmp_path):
json.dumps(dumped_model)
except TypeError:
pytest.fail("Failed to JSON serialize the dumped model")
@pytest.mark.slow
@pytest.mark.skipif(
os.environ.get("VOYAGE_API_KEY") is None, reason="VOYAGE_API_KEY not set"
)
def test_voyageai_embedding_function():
voyageai = get_registry().get("voyageai").create(name="voyage-3", max_retries=0)
class TextModel(LanceModel):
text: str = voyageai.SourceField()
vector: Vector(voyageai.ndims()) = voyageai.VectorField()
df = pd.DataFrame({"text": ["hello world", "goodbye world"]})
db = lancedb.connect("~/lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(df)
assert len(tbl.to_pandas()["vector"][0]) == voyageai.ndims()

View File

@@ -197,6 +197,23 @@ def test_query_sync_minimal():
assert data == expected
def test_query_sync_empty_query():
def handler(body):
assert body == {
"k": 10,
"filter": "true",
"vector": [],
"columns": ["id"],
}
return pa.table({"id": [1, 2, 3]})
with query_test_table(handler) as table:
data = table.search(None).where("true").select(["id"]).limit(10).to_list()
expected = [{"id": 1}, {"id": 2}, {"id": 3}]
assert data == expected
def test_query_sync_maximal():
def handler(body):
assert body == {
@@ -229,6 +246,17 @@ def test_query_sync_maximal():
)
def test_query_sync_multiple_vectors():
def handler(_body):
return pa.table({"id": [1]})
with query_test_table(handler) as table:
results = table.search([[1, 2, 3], [4, 5, 6]]).limit(1).to_list()
assert len(results) == 2
results.sort(key=lambda x: x["query_index"])
assert results == [{"id": 1, "query_index": 0}, {"id": 1, "query_index": 1}]
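Because a multi-vector search returns one combined result set tagged with `query_index` (as the test above asserts), callers can split it back out per query; a minimal pandas sketch over a hypothetical result frame:

```python
import pandas as pd

# Hypothetical combined result of a two-vector search.
results = pd.DataFrame(
    {"query_index": [0, 1, 1], "id": [7, 3, 9], "_distance": [0.1, 0.0, 0.2]}
)
per_query = {
    idx: group.drop(columns="query_index")
    for idx, group in results.groupby("query_index")
}
assert set(per_query) == {0, 1}
```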
def test_query_sync_fts():
def handler(body):
assert body == {

View File

@@ -16,6 +16,7 @@ from lancedb.rerankers import (
OpenaiReranker,
JinaReranker,
AnswerdotaiRerankers,
VoyageAIReranker,
)
from lancedb.table import LanceTable
@@ -344,3 +345,14 @@ def test_jina_reranker(tmp_path, use_tantivy):
table, schema = get_test_table(tmp_path, use_tantivy)
reranker = JinaReranker()
_run_test_reranker(reranker, table, "single player experience", None, schema)
@pytest.mark.skipif(
os.environ.get("VOYAGE_API_KEY") is None, reason="VOYAGE_API_KEY not set"
)
@pytest.mark.parametrize("use_tantivy", [True, False])
def test_voyageai_reranker(tmp_path, use_tantivy):
pytest.importorskip("voyageai")
reranker = VoyageAIReranker(model_name="rerank-2")
table, schema = get_test_table(tmp_path, use_tantivy)
_run_test_reranker(reranker, table, "single player experience", None, schema)

View File

@@ -240,6 +240,121 @@ def test_add(db):
_add(table, schema)
def test_add_subschema(tmp_path):
db = lancedb.connect(tmp_path)
schema = pa.schema(
[
pa.field("vector", pa.list_(pa.float32(), 2), nullable=True),
pa.field("item", pa.string(), nullable=True),
pa.field("price", pa.float64(), nullable=False),
]
)
table = db.create_table("test", schema=schema)
data = {"price": 10.0, "item": "foo"}
table.add([data])
data = {"price": 2.0, "vector": [3.1, 4.1]}
table.add([data])
data = {"price": 3.0, "vector": [5.9, 26.5], "item": "bar"}
table.add([data])
expected = pa.table(
{
"vector": [None, [3.1, 4.1], [5.9, 26.5]],
"item": ["foo", None, "bar"],
"price": [10.0, 2.0, 3.0],
},
schema=schema,
)
assert table.to_arrow() == expected
data = {"item": "foo"}
# We can't omit a column if it's not nullable
with pytest.raises(OSError, match="Invalid user input"):
table.add([data])
# We can add it if we make the column nullable
table.alter_columns(dict(path="price", nullable=True))
table.add([data])
expected_schema = pa.schema(
[
pa.field("vector", pa.list_(pa.float32(), 2), nullable=True),
pa.field("item", pa.string(), nullable=True),
pa.field("price", pa.float64(), nullable=True),
]
)
expected = pa.table(
{
"vector": [None, [3.1, 4.1], [5.9, 26.5], None],
"item": ["foo", None, "bar", "foo"],
"price": [10.0, 2.0, 3.0, None],
},
schema=expected_schema,
)
assert table.to_arrow() == expected
def test_add_nullability(tmp_path):
db = lancedb.connect(tmp_path)
schema = pa.schema(
[
pa.field("vector", pa.list_(pa.float32(), 2), nullable=False),
pa.field("id", pa.string(), nullable=False),
]
)
table = db.create_table("test", schema=schema)
nullable_schema = pa.schema(
[
pa.field("vector", pa.list_(pa.float32(), 2), nullable=True),
pa.field("id", pa.string(), nullable=True),
]
)
data = pa.table(
{
"vector": [[3.1, 4.1], [5.9, 26.5]],
"id": ["foo", "bar"],
},
schema=nullable_schema,
)
# We can add nullable schema if it doesn't actually contain nulls
table.add(data)
expected = data.cast(schema)
assert table.to_arrow() == expected
data = pa.table(
{
"vector": [None],
"id": ["baz"],
},
schema=nullable_schema,
)
# We can't add nullable schema if it contains nulls
with pytest.raises(Exception, match="Vector column vector has NaNs"):
table.add(data)
# But we can make it nullable
table.alter_columns(dict(path="vector", nullable=True))
table.add(data)
expected_schema = pa.schema(
[
pa.field("vector", pa.list_(pa.float32(), 2), nullable=True),
pa.field("id", pa.string(), nullable=False),
]
)
expected = pa.table(
{
"vector": [[3.1, 4.1], [5.9, 26.5], None],
"id": ["foo", "bar", "baz"],
},
schema=expected_schema,
)
assert table.to_arrow() == expected
def test_add_pydantic_model(db):
# https://github.com/lancedb/lancedb/issues/562
@@ -892,10 +1007,15 @@ def test_empty_query(db):
table = LanceTable.create(db, "my_table2", data=[{"id": i} for i in range(100)])
df = table.search().select(["id"]).to_pandas()
assert len(df) == 10
# None is the same as default
df = table.search().select(["id"]).limit(None).to_pandas()
assert len(df) == 100
assert len(df) == 10
# an invalid limit is the same as None, which is the same as default
df = table.search().select(["id"]).limit(-1).to_pandas()
assert len(df) == 100
assert len(df) == 10
# valid limit should work
df = table.search().select(["id"]).limit(42).to_pandas()
assert len(df) == 42
def test_search_with_schema_inf_single_vector(db):
@@ -1223,6 +1343,54 @@ async def test_time_travel(db_async: AsyncConnection):
await table.restore()
def test_sync_optimize(db):
table = LanceTable.create(
db,
"test",
data=[
{"vector": [3.1, 4.1], "item": "foo", "price": 10.0},
{"vector": [5.9, 26.5], "item": "bar", "price": 20.0},
],
)
table.create_scalar_index("price", index_type="BTREE")
stats = table.to_lance().stats.index_stats("price_idx")
assert stats["num_indexed_rows"] == 2
table.add([{"vector": [2.0, 2.0], "item": "baz", "price": 30.0}])
assert table.count_rows() == 3
table.optimize()
stats = table.to_lance().stats.index_stats("price_idx")
assert stats["num_indexed_rows"] == 3
@pytest.mark.asyncio
async def test_sync_optimize_in_async(db):
table = LanceTable.create(
db,
"test",
data=[
{"vector": [3.1, 4.1], "item": "foo", "price": 10.0},
{"vector": [5.9, 26.5], "item": "bar", "price": 20.0},
],
)
table.create_scalar_index("price", index_type="BTREE")
stats = table.to_lance().stats.index_stats("price_idx")
assert stats["num_indexed_rows"] == 2
table.add([{"vector": [2.0, 2.0], "item": "baz", "price": 30.0}])
assert table.count_rows() == 3
try:
table.optimize()
except Exception as e:
assert (
"Synchronous method called in asynchronous context. "
"If you are writing an asynchronous application "
"then please use the asynchronous APIs" in str(e)
)
@pytest.mark.asyncio
async def test_optimize(db_async: AsyncConnection):
table = await db_async.create_table(

View File

@@ -142,6 +142,13 @@ impl VectorQuery {
self.inner = self.inner.clone().only_if(predicate);
}
pub fn add_query_vector(&mut self, vector: Bound<'_, PyAny>) -> PyResult<()> {
let data: ArrayData = ArrayData::from_pyarrow_bound(&vector)?;
let array = make_array(data);
self.inner = self.inner.clone().add_query_vector(array).infer_error()?;
Ok(())
}
pub fn select(&mut self, columns: Vec<(String, String)>) {
self.inner = self.inner.clone().select(Select::dynamic(&columns));
}

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-node"
version = "0.13.0-beta.1"
version = "0.13.0-beta.2"
description = "Serverless, low-latency vector database for AI applications"
license.workspace = true
edition.workspace = true

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb"
version = "0.13.0-beta.1"
version = "0.13.0-beta.2"
edition.workspace = true
description = "LanceDB: A serverless, low-latency vector database for AI applications"
license.workspace = true
@@ -46,6 +46,7 @@ serde = { version = "^1" }
serde_json = { version = "1" }
async-openai = { version = "0.20.0", optional = true }
serde_with = { version = "3.8.1" }
aws-sdk-bedrockruntime = { version = "1.27.0", optional = true }
# For remote feature
reqwest = { version = "0.12.0", features = ["gzip", "json", "stream"], optional = true }
rand = { version = "0.8.3", features = ["small_rng"], optional = true}
@@ -72,11 +73,13 @@ aws-config = { version = "1.0" }
aws-smithy-runtime = { version = "1.3" }
http-body = "1" # Matching reqwest
[features]
default = []
remote = ["dep:reqwest", "dep:http", "dep:rand", "dep:uuid"]
fp16kernels = ["lance-linalg/fp16kernels"]
s3-test = []
bedrock = ["dep:aws-sdk-bedrockruntime"]
openai = ["dep:async-openai", "dep:reqwest"]
polars = ["dep:polars-arrow", "dep:polars"]
sentence-transformers = [
@@ -94,3 +97,7 @@ required-features = ["openai"]
[[example]]
name = "sentence_transformers"
required-features = ["sentence-transformers"]
[[example]]
name = "bedrock"
required-features = ["bedrock"]

View File

@@ -0,0 +1,89 @@
use std::{iter::once, sync::Arc};
use arrow_array::{Float64Array, Int32Array, RecordBatch, RecordBatchIterator, StringArray};
use arrow_schema::{DataType, Field, Schema};
use aws_config::Region;
use aws_sdk_bedrockruntime::Client;
use futures::StreamExt;
use lancedb::{
arrow::IntoArrow,
connect,
embeddings::{bedrock::BedrockEmbeddingFunction, EmbeddingDefinition, EmbeddingFunction},
query::{ExecutableQuery, QueryBase},
Result,
};
#[tokio::main]
async fn main() -> Result<()> {
let tempdir = tempfile::tempdir().unwrap();
let tempdir = tempdir.path().to_str().unwrap();
// create Bedrock embedding function
let region: String = "us-east-1".to_string();
let config = aws_config::defaults(aws_config::BehaviorVersion::latest())
.region(Region::new(region))
.load()
.await;
let embedding = Arc::new(BedrockEmbeddingFunction::new(
Client::new(&config), // Bedrock runtime client using the region configured above
));
let db = connect(tempdir).execute().await?;
db.embedding_registry()
.register("bedrock", embedding.clone())?;
let table = db
.create_table("vectors", make_data())
.add_embedding(EmbeddingDefinition::new(
"text",
"bedrock",
Some("embeddings"),
))?
.execute()
.await?;
// execute vector search
let query = Arc::new(StringArray::from_iter_values(once("something warm")));
let query_vector = embedding.compute_query_embeddings(query)?;
let mut results = table
.vector_search(query_vector)?
.limit(1)
.execute()
.await?;
let rb = results.next().await.unwrap()?;
let out = rb
.column_by_name("text")
.unwrap()
.as_any()
.downcast_ref::<StringArray>()
.unwrap();
let text = out.iter().next().unwrap().unwrap();
println!("Closest match: {}", text);
Ok(())
}
fn make_data() -> impl IntoArrow {
let schema = Schema::new(vec![
Field::new("id", DataType::Int32, true),
Field::new("text", DataType::Utf8, false),
Field::new("price", DataType::Float64, false),
]);
let id = Int32Array::from(vec![1, 2, 3, 4]);
let text = StringArray::from_iter_values(vec![
"Black T-Shirt",
"Leather Jacket",
"Winter Parka",
"Hooded Sweatshirt",
]);
let price = Float64Array::from(vec![10.0, 50.0, 100.0, 30.0]);
let schema = Arc::new(schema);
let rb = RecordBatch::try_new(
schema.clone(),
vec![Arc::new(id), Arc::new(text), Arc::new(price)],
)
.unwrap();
Box::new(RecordBatchIterator::new(vec![Ok(rb)], schema))
}

View File

@@ -17,6 +17,9 @@ pub mod openai;
#[cfg(feature = "sentence-transformers")]
pub mod sentence_transformers;
#[cfg(feature = "bedrock")]
pub mod bedrock;
use lance::arrow::RecordBatchExt;
use std::{
borrow::Cow,

View File

@@ -0,0 +1,210 @@
use aws_sdk_bedrockruntime::Client as BedrockClient;
use std::{borrow::Cow, fmt::Formatter, str::FromStr, sync::Arc};
use arrow::array::{AsArray, Float32Builder};
use arrow_array::{Array, ArrayRef, FixedSizeListArray, Float32Array};
use arrow_data::ArrayData;
use arrow_schema::DataType;
use serde_json::{json, Value};
use super::EmbeddingFunction;
use crate::{Error, Result};
use tokio::runtime::Handle;
use tokio::task::block_in_place;
#[derive(Debug)]
pub enum BedrockEmbeddingModel {
TitanEmbedding,
CohereLarge,
}
impl BedrockEmbeddingModel {
fn ndims(&self) -> usize {
match self {
Self::TitanEmbedding => 1536,
Self::CohereLarge => 1024,
}
}
fn model_id(&self) -> &str {
match self {
Self::TitanEmbedding => "amazon.titan-embed-text-v1",
Self::CohereLarge => "cohere.embed-english-v3",
}
}
}
impl FromStr for BedrockEmbeddingModel {
type Err = Error;
fn from_str(s: &str) -> std::result::Result<Self, Self::Err> {
match s {
"titan-embed-text-v1" => Ok(Self::TitanEmbedding),
"cohere-embed-english-v3" => Ok(Self::CohereLarge),
_ => Err(Error::InvalidInput {
message: "Invalid model. Available models are: 'titan-embed-text-v1', 'cohere-embed-english-v3'".to_string()
}),
}
}
}
pub struct BedrockEmbeddingFunction {
model: BedrockEmbeddingModel,
client: BedrockClient,
}
impl BedrockEmbeddingFunction {
pub fn new(client: BedrockClient) -> Self {
Self {
model: BedrockEmbeddingModel::TitanEmbedding,
client,
}
}
pub fn with_model(client: BedrockClient, model: BedrockEmbeddingModel) -> Self {
Self { model, client }
}
}
impl EmbeddingFunction for BedrockEmbeddingFunction {
fn name(&self) -> &str {
"bedrock"
}
fn source_type(&self) -> Result<Cow<DataType>> {
Ok(Cow::Owned(DataType::Utf8))
}
fn dest_type(&self) -> Result<Cow<DataType>> {
let n_dims = self.model.ndims();
Ok(Cow::Owned(DataType::new_fixed_size_list(
DataType::Float32,
n_dims as i32,
false,
)))
}
fn compute_source_embeddings(&self, source: ArrayRef) -> Result<ArrayRef> {
let len = source.len();
let n_dims = self.model.ndims();
let inner = self.compute_inner(source)?;
let fsl = DataType::new_fixed_size_list(DataType::Float32, n_dims as i32, false);
let array_data = ArrayData::builder(fsl)
.len(len)
.add_child_data(inner.into_data())
.build()?;
Ok(Arc::new(FixedSizeListArray::from(array_data)))
}
fn compute_query_embeddings(&self, input: Arc<dyn Array>) -> Result<Arc<dyn Array>> {
let arr = self.compute_inner(input)?;
Ok(Arc::new(arr))
}
}
impl std::fmt::Debug for BedrockEmbeddingFunction {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("BedrockEmbeddingFunction")
.field("model", &self.model)
// Skip client field as it doesn't implement Debug
.finish()
}
}
impl BedrockEmbeddingFunction {
fn compute_inner(&self, source: Arc<dyn Array>) -> Result<Float32Array> {
if source.is_nullable() {
return Err(Error::InvalidInput {
message: "Expected non-nullable data type".to_string(),
});
}
if !matches!(source.data_type(), DataType::Utf8 | DataType::LargeUtf8) {
return Err(Error::InvalidInput {
message: "Expected Utf8 data type".to_string(),
});
}
let mut builder = Float32Builder::new();
let texts = match source.data_type() {
DataType::Utf8 => source
.as_string::<i32>()
.into_iter()
.map(|s| s.expect("array is non-nullable").to_string())
.collect::<Vec<String>>(),
DataType::LargeUtf8 => source
.as_string::<i64>()
.into_iter()
.map(|s| s.expect("array is non-nullable").to_string())
.collect::<Vec<String>>(),
_ => unreachable!(),
};
for text in texts {
let request_body = match self.model {
BedrockEmbeddingModel::TitanEmbedding => {
json!({
"inputText": text
})
}
BedrockEmbeddingModel::CohereLarge => {
json!({
"texts": [text],
"input_type": "search_document"
})
}
};
let client = self.client.clone();
let model_id = self.model.model_id().to_string();
let request_body = request_body.clone();
let response = block_in_place(move || {
Handle::current().block_on(async move {
client
.invoke_model()
.model_id(model_id)
.body(aws_sdk_bedrockruntime::primitives::Blob::new(
serde_json::to_vec(&request_body).unwrap(),
))
.send()
.await
})
})
.unwrap();
let response_json: Value =
serde_json::from_slice(response.body.as_ref()).map_err(|e| Error::Runtime {
message: format!("Failed to parse response: {}", e),
})?;
let embedding = match self.model {
BedrockEmbeddingModel::TitanEmbedding => response_json["embedding"]
.as_array()
.ok_or_else(|| Error::Runtime {
message: "Missing embedding in response".to_string(),
})?
.iter()
.map(|v| v.as_f64().unwrap() as f32)
.collect::<Vec<f32>>(),
BedrockEmbeddingModel::CohereLarge => response_json["embeddings"][0]
.as_array()
.ok_or_else(|| Error::Runtime {
message: "Missing embeddings in response".to_string(),
})?
.iter()
.map(|v| v.as_f64().unwrap() as f32)
.collect::<Vec<f32>>(),
};
builder.append_slice(&embedding);
}
Ok(builder.finish())
}
}

View File

@@ -29,6 +29,7 @@ pub mod scalar;
pub mod vector;
/// Supported index types.
#[derive(Debug, Clone)]
pub enum Index {
Auto,
/// A `BTree` index is a sorted index on scalar columns.

View File

@@ -475,6 +475,7 @@ impl<T: HasQuery> QueryBase for T {
/// Options for controlling the execution of a query
#[non_exhaustive]
#[derive(Debug, Clone)]
pub struct QueryExecutionOptions {
/// The maximum number of rows that will be contained in a single
/// `RecordBatch` delivered by the query.
@@ -650,7 +651,7 @@ impl Query {
pub fn nearest_to(self, vector: impl IntoQueryVector) -> Result<VectorQuery> {
let mut vector_query = self.into_vector();
let query_vector = vector.to_query_vector(&DataType::Float32, "default")?;
vector_query.query_vector = Some(query_vector);
vector_query.query_vector.push(query_vector);
Ok(vector_query)
}
}
@@ -701,7 +702,7 @@ pub struct VectorQuery {
// the column based on the dataset's schema.
pub(crate) column: Option<String>,
// IVF PQ - ANN search.
pub(crate) query_vector: Option<Arc<dyn Array>>,
pub(crate) query_vector: Vec<Arc<dyn Array>>,
pub(crate) nprobes: usize,
pub(crate) refine_factor: Option<u32>,
pub(crate) distance_type: Option<DistanceType>,
@@ -714,7 +715,7 @@ impl VectorQuery {
Self {
base,
column: None,
query_vector: None,
query_vector: Vec::new(),
nprobes: 20,
refine_factor: None,
distance_type: None,
@@ -734,6 +735,22 @@ impl VectorQuery {
self
}
/// Add another query vector to the search.
///
/// Multiple searches will be dispatched as part of the query.
/// This is a convenience method for adding multiple query vectors
/// to the search. It is not expected to be faster than issuing
/// multiple queries concurrently.
///
/// The output data will contain an additional column `query_index` which
/// will contain the index of the query vector that was used to generate
/// each result.
pub fn add_query_vector(mut self, vector: impl IntoQueryVector) -> Result<Self> {
let query_vector = vector.to_query_vector(&DataType::Float32, "default")?;
self.query_vector.push(query_vector);
Ok(self)
}
/// Set the number of partitions to search (probe)
///
/// This argument is only used when the vector column has an IVF PQ index.
@@ -854,6 +871,7 @@ mod tests {
use std::sync::Arc;
use super::*;
use arrow::{compute::concat_batches, datatypes::Int32Type};
use arrow_array::{
cast::AsArray, Float32Array, Int32Array, RecordBatch, RecordBatchIterator,
RecordBatchReader,
@@ -883,7 +901,10 @@ mod tests {
let vector = Float32Array::from_iter_values([0.1, 0.2]);
let query = table.query().nearest_to(&[0.1, 0.2]).unwrap();
assert_eq!(*query.query_vector.unwrap().as_ref().as_primitive(), vector);
assert_eq!(
*query.query_vector.first().unwrap().as_ref().as_primitive(),
vector
);
let new_vector = Float32Array::from_iter_values([9.8, 8.7]);
@@ -899,7 +920,7 @@ mod tests {
.refine_factor(999);
assert_eq!(
*query.query_vector.unwrap().as_ref().as_primitive(),
*query.query_vector.first().unwrap().as_ref().as_primitive(),
new_vector
);
assert_eq!(query.base.limit.unwrap(), 100);
@@ -1197,4 +1218,34 @@ mod tests {
assert!(batch.column_by_name("_rowid").is_some());
}
}
#[tokio::test]
async fn test_multiple_query_vectors() {
let tmp_dir = tempdir().unwrap();
let table = make_test_table(&tmp_dir).await;
let query = table
.query()
.nearest_to(&[0.1, 0.2, 0.3, 0.4])
.unwrap()
.add_query_vector(&[0.5, 0.6, 0.7, 0.8])
.unwrap()
.limit(1);
let plan = query.explain_plan(true).await.unwrap();
assert!(plan.contains("UnionExec"));
let results = query
.execute()
.await
.unwrap()
.try_collect::<Vec<_>>()
.await
.unwrap();
let results = concat_batches(&results[0].schema(), &results).unwrap();
assert_eq!(results.num_rows(), 2); // One result for each query vector.
let query_index = results["query_index"].as_primitive::<Int32Type>();
// We don't guarantee order.
assert!(query_index.values().contains(&0));
assert!(query_index.values().contains(&1));
}
}

View File

@@ -6,7 +6,7 @@ use crate::index::IndexStatistics;
use crate::query::Select;
use crate::table::AddDataMode;
use crate::utils::{supported_btree_data_type, supported_vector_data_type};
use crate::Error;
use crate::{Error, Table};
use arrow_array::RecordBatchReader;
use arrow_ipc::reader::FileReader;
use arrow_schema::{DataType, SchemaRef};
@@ -185,6 +185,71 @@ impl<S: HttpSend> RemoteTable<S> {
Ok(())
}
fn apply_vector_query_params(
mut body: serde_json::Value,
query: &VectorQuery,
) -> Result<Vec<serde_json::Value>> {
Self::apply_query_params(&mut body, &query.base)?;
// Apply general parameters before dispatching based on the number of query vectors.
body["prefilter"] = query.base.prefilter.into();
body["distance_type"] = serde_json::json!(query.distance_type.unwrap_or_default());
body["nprobes"] = query.nprobes.into();
body["refine_factor"] = query.refine_factor.into();
if let Some(vector_column) = query.column.as_ref() {
body["vector_column"] = serde_json::Value::String(vector_column.clone());
}
if !query.use_index {
body["bypass_vector_index"] = serde_json::Value::Bool(true);
}
fn vector_to_json(vector: &arrow_array::ArrayRef) -> Result<serde_json::Value> {
match vector.data_type() {
DataType::Float32 => {
let array = vector
.as_any()
.downcast_ref::<arrow_array::Float32Array>()
.unwrap();
Ok(serde_json::Value::Array(
array
.values()
.iter()
.map(|v| {
serde_json::Value::Number(
serde_json::Number::from_f64(*v as f64).unwrap(),
)
})
.collect(),
))
}
_ => Err(Error::InvalidInput {
message: "VectorQuery vector must be of type Float32".into(),
}),
}
}
match query.query_vector.len() {
0 => {
// Server takes empty vector, not null or undefined.
body["vector"] = serde_json::Value::Array(Vec::new());
Ok(vec![body])
}
1 => {
body["vector"] = vector_to_json(&query.query_vector[0])?;
Ok(vec![body])
}
_ => {
let mut bodies = Vec::with_capacity(query.query_vector.len());
for vector in &query.query_vector {
let mut body = body.clone();
body["vector"] = vector_to_json(vector)?;
bodies.push(body);
}
Ok(bodies)
}
}
}
}
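The fan-out above produces one JSON request body per query vector (or a single body with an empty vector when there is none). A rough Python sketch of the same dispatch, with field names mirroring the body built above:

```python
import copy


def vector_query_bodies(base_body: dict, query_vectors: list) -> list:
    """One request body per query vector, mirroring the Rust fan-out."""
    if not query_vectors:
        # The server expects an empty vector, not null or a missing key.
        return [{**base_body, "vector": []}]
    bodies = []
    for vec in query_vectors:
        body = copy.deepcopy(base_body)
        body["vector"] = vec
        bodies.append(body)
    return bodies


assert vector_query_bodies({"k": 10}, []) == [{"k": 10, "vector": []}]
assert len(vector_query_bodies({"k": 10}, [[0.1], [0.2]])) == 2
```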
#[derive(Deserialize)]
@@ -306,51 +371,29 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
) -> Result<Arc<dyn ExecutionPlan>> {
let request = self.client.post(&format!("/v1/table/{}/query/", self.name));
let mut body = serde_json::Value::Object(Default::default());
Self::apply_query_params(&mut body, &query.base)?;
let body = serde_json::Value::Object(Default::default());
let bodies = Self::apply_vector_query_params(body, query)?;
body["prefilter"] = query.base.prefilter.into();
body["distance_type"] = serde_json::json!(query.distance_type.unwrap_or_default());
body["nprobes"] = query.nprobes.into();
body["refine_factor"] = query.refine_factor.into();
let vector: Vec<f32> = if let Some(vector) = query.query_vector.as_ref() {
match vector.data_type() {
DataType::Float32 => vector
.as_any()
.downcast_ref::<arrow_array::Float32Array>()
.unwrap()
.values()
.iter()
.cloned()
.collect(),
_ => {
return Err(Error::InvalidInput {
message: "VectorQuery vector must be of type Float32".into(),
})
}
}
let mut futures = Vec::with_capacity(bodies.len());
for body in bodies {
let request = request.try_clone().unwrap().json(&body);
let future = async move {
let (request_id, response) = self.client.send(request, true).await?;
self.read_arrow_stream(&request_id, response).await
};
futures.push(future);
}
let streams = futures::future::try_join_all(futures).await?;
if streams.len() == 1 {
let stream = streams.into_iter().next().unwrap();
Ok(Arc::new(OneShotExec::new(stream)))
} else {
// Server takes empty vector, not null or undefined.
Vec::new()
};
body["vector"] = serde_json::json!(vector);
if let Some(vector_column) = query.column.as_ref() {
body["vector_column"] = serde_json::Value::String(vector_column.clone());
let stream_execs = streams
.into_iter()
.map(|stream| Arc::new(OneShotExec::new(stream)) as Arc<dyn ExecutionPlan>)
.collect();
Table::multi_vector_plan(stream_execs)
}
if !query.use_index {
body["bypass_vector_index"] = serde_json::Value::Bool(true);
}
let request = request.json(&body);
let (request_id, response) = self.client.send(request, true).await?;
let stream = self.read_arrow_stream(&request_id, response).await?;
Ok(Arc::new(OneShotExec::new(stream)))
}
async fn plain_query(
@@ -655,6 +698,7 @@ mod tests {
use super::*;
use arrow::{array::AsArray, compute::concat_batches, datatypes::Int32Type};
use arrow_array::{Int32Array, RecordBatch, RecordBatchIterator};
use arrow_schema::{DataType, Field, Schema};
use futures::{future::BoxFuture, StreamExt, TryFutureExt};
@@ -1207,6 +1251,52 @@ mod tests {
.unwrap();
}
#[tokio::test]
async fn test_query_multiple_vectors() {
let table = Table::new_with_handler("my_table", |request| {
assert_eq!(request.method(), "POST");
assert_eq!(request.url().path(), "/v1/table/my_table/query/");
assert_eq!(
request.headers().get("Content-Type").unwrap(),
JSON_CONTENT_TYPE
);
let data = RecordBatch::try_new(
Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)])),
vec![Arc::new(Int32Array::from(vec![1, 2, 3]))],
)
.unwrap();
let response_body = write_ipc_file(&data);
http::Response::builder()
.status(200)
.header(CONTENT_TYPE, ARROW_FILE_CONTENT_TYPE)
.body(response_body)
.unwrap()
});
let query = table
.query()
.nearest_to(vec![0.1, 0.2, 0.3])
.unwrap()
.add_query_vector(vec![0.4, 0.5, 0.6])
.unwrap();
let plan = query.explain_plan(true).await.unwrap();
assert!(plan.contains("UnionExec"), "Plan: {}", plan);
let results = query
.execute()
.await
.unwrap()
.try_collect::<Vec<_>>()
.await
.unwrap();
let results = concat_batches(&results[0].schema(), &results).unwrap();
let query_index = results["query_index"].as_primitive::<Int32Type>();
// We don't guarantee order.
assert!(query_index.values().contains(&0));
assert!(query_index.values().contains(&1));
}
#[tokio::test]
async fn test_create_index() {
let cases = [

View File

@@ -24,6 +24,9 @@ use arrow_array::{RecordBatchIterator, RecordBatchReader};
use arrow_schema::{Field, Schema, SchemaRef};
use async_trait::async_trait;
use datafusion_physical_plan::display::DisplayableExecutionPlan;
use datafusion_physical_plan::projection::ProjectionExec;
use datafusion_physical_plan::repartition::RepartitionExec;
use datafusion_physical_plan::union::UnionExec;
use datafusion_physical_plan::ExecutionPlan;
use futures::{StreamExt, TryStreamExt};
use lance::dataset::builder::DatasetBuilder;
@@ -972,6 +975,57 @@ impl Table {
) -> Result<Option<IndexStatistics>> {
self.inner.index_stats(index_name.as_ref()).await
}
// Take many execution plans and map them into a single plan that adds
// a query_index column and unions them.
pub(crate) fn multi_vector_plan(
plans: Vec<Arc<dyn ExecutionPlan>>,
) -> Result<Arc<dyn ExecutionPlan>> {
if plans.is_empty() {
return Err(Error::InvalidInput {
message: "No plans provided".to_string(),
});
}
// Projection keeping all existing columns
let first_plan = plans[0].clone();
let project_all_columns = first_plan
.schema()
.fields()
.iter()
.enumerate()
.map(|(i, field)| {
let expr =
datafusion_physical_plan::expressions::Column::new(field.name().as_str(), i);
let expr = Arc::new(expr) as Arc<dyn datafusion_physical_plan::PhysicalExpr>;
(expr, field.name().clone())
})
.collect::<Vec<_>>();
let projected_plans = plans
.into_iter()
.enumerate()
.map(|(plan_i, plan)| {
let query_index = datafusion_common::ScalarValue::Int32(Some(plan_i as i32));
let query_index_expr =
datafusion_physical_plan::expressions::Literal::new(query_index);
let query_index_expr =
Arc::new(query_index_expr) as Arc<dyn datafusion_physical_plan::PhysicalExpr>;
let mut projections = vec![(query_index_expr, "query_index".to_string())];
projections.extend_from_slice(&project_all_columns);
let projection = ProjectionExec::try_new(projections, plan).unwrap();
Arc::new(projection) as Arc<dyn datafusion_physical_plan::ExecutionPlan>
})
.collect::<Vec<_>>();
let unioned = Arc::new(UnionExec::new(projected_plans));
// We require 1 partition in the final output
let repartitioned = RepartitionExec::try_new(
unioned,
datafusion_physical_plan::Partitioning::RoundRobinBatch(1),
)
.unwrap();
Ok(Arc::new(repartitioned))
}
}
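Conceptually, `multi_vector_plan` tags each sub-result with its query index and unions them into one stream; the same effect expressed with pyarrow tables (hypothetical inputs):

```python
import pyarrow as pa


def union_with_query_index(tables: list) -> pa.Table:
    """Prepend a query_index column to each sub-result and concatenate."""
    tagged = [
        t.add_column(0, "query_index", pa.array([i] * len(t), pa.int32()))
        for i, t in enumerate(tables)
    ]
    return pa.concat_tables(tagged)


t = union_with_query_index([pa.table({"id": [1]}), pa.table({"id": [2]})])
assert t.column("query_index").to_pylist() == [0, 1]
```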
impl From<NativeTable> for Table {
@@ -1784,9 +1838,25 @@ impl TableInternal for NativeTable {
) -> Result<Arc<dyn ExecutionPlan>> {
let ds_ref = self.dataset.get().await?;
if query.query_vector.len() > 1 {
// If there are multiple query vectors, create a plan for each of them and union them.
let query_vecs = query.query_vector.clone();
let plan_futures = query_vecs
.into_iter()
.map(|query_vector| {
let mut sub_query = query.clone();
sub_query.query_vector = vec![query_vector];
let options_ref = options.clone();
async move { self.create_plan(&sub_query, options_ref).await }
})
.collect::<Vec<_>>();
let plans = futures::future::try_join_all(plan_futures).await?;
return Table::multi_vector_plan(plans);
}
let mut scanner: Scanner = ds_ref.scan();
if let Some(query_vector) = query.query_vector.as_ref() {
if let Some(query_vector) = query.query_vector.first() {
// If there is a vector query, default to limit=10 if unspecified
let column = if let Some(col) = query.column.as_ref() {
col.clone()
@@ -1828,18 +1898,11 @@ impl TableInternal for NativeTable {
query_vector,
query.base.limit.unwrap_or(DEFAULT_TOP_K),
)?;
scanner.limit(
query.base.limit.map(|limit| limit as i64),
query.base.offset.map(|offset| offset as i64),
)?;
} else {
// If there is no vector query, it's ok to not have a limit
scanner.limit(
query.base.limit.map(|limit| limit as i64),
query.base.offset.map(|offset| offset as i64),
)?;
}
scanner.limit(
query.base.limit.map(|limit| limit as i64),
query.base.offset.map(|offset| offset as i64),
)?;
scanner.nprobs(query.nprobes);
scanner.use_index(query.use_index);
scanner.prefilter(query.base.prefilter);