Compare commits


33 Commits

Author SHA1 Message Date
Lance Release
6ef20b85ca Bump version: 0.17.0 → 0.17.1-beta.0 2024-12-09 04:01:19 +00:00
LuQQiu
35bacdd57e feat: support azure account name storage options in sync db.connect (#1926)
`db.connect` with an Azure storage account name was supported by the async connect
but not the sync connect.
This change adds that functionality.

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-08 20:00:23 -08:00
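A minimal sketch of the sync usage this enables; the container, path, and the `azure_storage_account_name` key below are illustrative assumptions, not taken from this compare:

```python
import lancedb

# Sync connect with an Azure account name passed via storage_options.
# The option key is assumed from common object-store naming; see the
# storage guide for the exact supported keys.
db = lancedb.connect(
    "az://my-container/my-db",
    storage_options={"azure_storage_account_name": "myaccount"},
)
```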
Will Jones
a5ebe5a6c4 fix: create_scalar_index in cloud (#1922)
Fixes #1920
2024-12-07 19:48:40 -08:00
Will Jones
bf03ad1b4a ci: fix release (#1919)
* Set `private: false` so we can publish new binary packages
* Add missing windows binary reference
2024-12-06 12:51:48 -08:00
Bert
2a9e3e2084 feat(python): support hybrid search in async sdk (#1915)
fixes: https://github.com/lancedb/lancedb/issues/1765

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-06 13:53:15 -05:00
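A hedged sketch of async hybrid search, shaped after the `FTSQuery`/`HybridQuery` stubs added in this compare; the table, column, and vector are made up, and the public wrapper may expose the chain slightly differently:

```python
import asyncio
import lancedb

async def main() -> None:
    db = await lancedb.connect_async("data/sample-lancedb")
    tbl = await db.open_table("docs")  # assumes a table with an FTS index
    # Vector leg first, then a text leg: per the stubs, nearest_to() yields a
    # VectorQuery and nearest_to_text() upgrades it to a hybrid query.
    results = await (
        tbl.query()
        .nearest_to([0.1, 0.2, 0.3, 0.4])
        .nearest_to_text("puppy")
        .limit(10)
        .to_arrow()
    )
    print(results.num_rows)

asyncio.run(main())
```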
Lance Release
f298f15360 Updating package-lock.json 2024-12-06 17:13:37 +00:00
Lance Release
679b031b99 Bump version: 0.14.0-beta.3 → 0.14.0 2024-12-06 17:13:15 +00:00
Lance Release
f50b5d532b Bump version: 0.14.0-beta.2 → 0.14.0-beta.3 2024-12-06 17:13:10 +00:00
Lance Release
fe655a15f0 Bump version: 0.17.0-beta.4 → 0.17.0 2024-12-06 17:12:43 +00:00
Lance Release
9d0af794d0 Bump version: 0.17.0-beta.3 → 0.17.0-beta.4 2024-12-06 17:12:43 +00:00
Will Jones
048a2d10f8 fix: data type parsing (#1918)
Fixes failing test on main
2024-12-06 08:56:07 -08:00
Lei Xu
c78a9849b4 ci: upgrade version of upload-pages-artifact and deploy-pages (#1917)
For
https://github.blog/changelog/2024-12-05-deprecation-notice-github-pages-actions-to-require-artifacts-actions-v4-on-github-com/
2024-12-06 10:45:24 -05:00
BubbleCal
c663085203 feat: support FTS options on RemoteTable (#1807) 2024-12-06 21:49:03 +08:00
Will Jones
8b628854d5 ci: fix nodejs release jobs (#1912)
* Clean up old commented out jobs
* Fix runner issue that caused these failures:
https://github.com/lancedb/lancedb/actions/runs/12186754094
2024-12-05 14:45:10 -08:00
Will Jones
a8d8c17b2a docs(rust): fix doctests (#1913)
* One doctest was running for > 60 seconds in CI, since it was
(unsuccessfully) trying to connect to LanceDB Cloud.
* Fixed the example for `Query::full_text_query()`, which was incorrect.
2024-12-05 14:44:59 -08:00
Will Jones
3c487e5fc7 perf: re-use table instance during write (#1909)
Previously, whenever `Table.add()` was called, we would write and
re-open the underlying dataset. This was bad for performance, as it
reset the table cache and initiated a lot of IO. It also could be the
source of bugs, since we didn't necessarily pass all the necessary
connection options down when re-opening the table.

Closes #1655
2024-12-05 14:44:50 -08:00
Will Jones
d6219d687c chore: simplify arrow json conversion (#1910)
Taking care of a small TODO
2024-12-05 13:14:43 -08:00
Bert
239f725b32 feat(python)!: async-sync feature parity on Connections (#1905)
Closes #1791
Closes #1764
Closes #1897 (Makes this unnecessary)

BREAKING CHANGE: when using an Azure connection string `az://...`, the call
to connect will fail if the Azure storage credentials are not set. This
breaks from the previous behaviour, where the call would only fail after
connect, when the user invoked methods on the connection.
2024-12-05 14:54:39 -05:00
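Illustrative only; the container name is made up and the exact exception type is not stated in this compare:

```python
import lancedb

# With this change, missing Azure credentials now fail at connect time
# rather than later, on the first method call against the connection.
try:
    db = lancedb.connect("az://my-container/my-db")  # no credentials set
except Exception as err:  # exception type not specified in this compare
    print("connect failed fast:", err)
```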
Will Jones
5f261cf2d8 feat: upgrade to Lance v0.20.0 (#1908)
Upstream change log:
https://github.com/lancedb/lance/releases/tag/v0.20.0
2024-12-05 10:53:59 -08:00
Will Jones
79eaa52184 feat: schema evolution APIs in all SDKs (#1851)
* Support `add_columns`, `alter_columns`, `drop_columns` in Remote SDK
and async Python
* Add `data_type` parameter to node
* Docs updates
2024-12-04 14:47:50 -08:00
Lei Xu
bd82e1f66d feat(python): add support for the Azure OpenAI SDK (#1906)
Closes #1699
2024-12-04 13:09:38 -08:00
Lance Release
ba34c3bee1 Updating package-lock.json 2024-12-04 01:14:24 +00:00
Lance Release
d4d0873e2b Bump version: 0.14.0-beta.1 → 0.14.0-beta.2 2024-12-04 01:13:55 +00:00
Lance Release
12c7bd18a5 Bump version: 0.17.0-beta.2 → 0.17.0-beta.3 2024-12-04 01:13:18 +00:00
LuQQiu
c6bf6a25d6 feat: add remote db uri path with folder prefix (#1901)
Adds support for a folder prefix in remote database URIs,
e.g. `db://bucket/path/to/folder/`.
2024-12-03 16:51:18 -08:00
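Illustrative sketch; the bucket, folder, and `api_key` value are placeholders:

```python
import lancedb

# Remote URIs can now carry a folder prefix after the bucket.
db = lancedb.connect(
    "db://bucket/path/to/folder/",  # previously only "db://bucket"-style URIs
    api_key="sk-...",               # assumed LanceDB Cloud credential
)
```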
Weston Pace
c998a47e17 feat: add a pyarrow dataset adapater for LanceDB tables (#1902)
This currently only works for local tables (remote tables cannot be
queried).
This is also exclusive to the sync interface. However, since the pyarrow
dataset interface is synchronous, I am not sure there is much value in
making an async-wrapping variant.

In addition, I added a `to_batches` method to the base query in the sync
API. This already exists in the async API. In the sync API this PR only
adds support for vector queries and scalar queries, not for hybrid or
FTS queries.
2024-12-03 15:42:54 -08:00
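A sketch of the new sync `to_batches`; the adapter's exact entry point for a pyarrow `Dataset` is not shown in this compare, so only the `to_batches` addition named above is illustrated (table name and vector are made up):

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")
tbl = db.open_table("docs")

# Stream results batch-by-batch instead of materializing a single table.
# Per the commit message, this covers vector and scalar queries only.
for batch in tbl.search([0.1, 0.2]).limit(1000).to_batches():
    print(batch.num_rows)
```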
Frank Liu
d8c758513c feat: add multimodal capabilities for Voyage embedder (#1878)
Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-03 10:25:48 -08:00
Will Jones
3795e02ee3 chore: fix ci on main (#1899) 2024-12-02 15:21:18 -08:00
Mr. Doge
c7d424b2f3 ci: aarch64-pc-windows-msvc (#1890)
`npm run pack-build -- -t $TARGET_TRIPLE`
was needed instead of
`npm run pack-build -t $TARGET_TRIPLE`
(the `--` is needed so npm forwards the flag to the script instead of consuming it itself)
https://github.com/lancedb/lancedb/pull/1889

some documentation about `*-pc-windows-msvc` cross-compilation (from
alpine):
https://github.com/lancedb/lancedb/pull/1831#issuecomment-2497156918

only `arm64` in the `matrix` config is used,
since `x86_64` built by `runs-on: windows-2022` is working
2024-12-02 11:17:37 -08:00
Bert
1efb9914ee ci: fix failing python release (#1896)
Fix the failing Python release for Windows:
https://github.com/lancedb/lancedb/actions/runs/12019637086/job/33506642964

Also updates `pkginfo` to fix the twine build, as suggested here:
https://github.com/pypi/warehouse/issues/15611
failing release:
https://github.com/lancedb/lancedb/actions/runs/12091344173/job/33719622146
2024-12-02 11:05:29 -08:00
Lance Release
83e26a231e Updating package-lock.json 2024-11-29 22:46:45 +00:00
Lance Release
72a17b2de4 Bump version: 0.14.0-beta.0 → 0.14.0-beta.1 2024-11-29 22:46:20 +00:00
Lance Release
4231925476 Bump version: 0.17.0-beta.1 → 0.17.0-beta.2 2024-11-29 22:45:55 +00:00
68 changed files with 2264 additions and 514 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.14.0-beta.0"
current_version = "0.14.0"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

View File

@@ -72,9 +72,9 @@ jobs:
- name: Setup Pages
uses: actions/configure-pages@v2
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
uses: actions/upload-pages-artifact@v3
with:
path: "docs/site"
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v1
uses: actions/deploy-pages@v4

View File

@@ -143,7 +143,7 @@ jobs:
node-linux-musl:
name: vectordb (${{ matrix.config.arch}}-unknown-linux-musl)
runs-on: ${{ matrix.config.runner }}
runs-on: ubuntu-latest
container: alpine:edge
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
@@ -152,10 +152,7 @@ jobs:
matrix:
config:
- arch: x86_64
runner: ubuntu-latest
- arch: aarch64
# For successful fat LTO builds, we need a large runner to avoid OOM errors.
runner: buildjet-16vcpu-ubuntu-2204-arm
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -249,7 +246,7 @@ jobs:
nodejs-linux-musl:
name: lancedb (${{ matrix.config.arch}}-unknown-linux-musl
runs-on: ${{ matrix.config.runner }}
runs-on: ubuntu-latest
container: alpine:edge
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
@@ -258,10 +255,7 @@ jobs:
matrix:
config:
- arch: x86_64
runner: ubuntu-latest
- arch: aarch64
# For successful fat LTO builds, we need a large runner to avoid OOM errors.
runner: buildjet-16vcpu-ubuntu-2204-arm
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -340,109 +334,50 @@ jobs:
path: |
node/dist/lancedb-vectordb-win32*.tgz
# TODO: re-enable once working https://github.com/lancedb/lancedb/pull/1831
# node-windows-arm64:
# name: vectordb win32-arm64-msvc
# runs-on: windows-4x-arm
# if: startsWith(github.ref, 'refs/tags/v')
# steps:
# - uses: actions/checkout@v4
# - name: Install Git
# run: |
# Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
# Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
# shell: powershell
# - name: Add Git to PATH
# run: |
# Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
# $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# shell: powershell
# - name: Configure Git symlinks
# run: git config --global core.symlinks true
# - uses: actions/checkout@v4
# - uses: actions/setup-python@v5
# with:
# python-version: "3.13"
# - name: Install Visual Studio Build Tools
# run: |
# Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
# Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
# "--installPath", "C:\BuildTools", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
# "--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATL", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
# "--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
# shell: powershell
# - name: Add Visual Studio Build Tools to PATH
# run: |
# $vsPath = "C:\BuildTools\VC\Tools\MSVC"
# $latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
# # Add MSVC runtime libraries to LIB
# $env:LIB = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\lib\arm64;" +
# "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;" +
# "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
# Add-Content $env:GITHUB_ENV "LIB=$env:LIB"
# # Add INCLUDE paths
# $env:INCLUDE = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\include;" +
# "C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt;" +
# "C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\um;" +
# "C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\shared"
# Add-Content $env:GITHUB_ENV "INCLUDE=$env:INCLUDE"
# shell: powershell
# - name: Install Rust
# run: |
# Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
# .\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
# shell: powershell
# - name: Add Rust to PATH
# run: |
# Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
# shell: powershell
# - uses: Swatinem/rust-cache@v2
# with:
# workspaces: rust
# - name: Install 7-Zip ARM
# run: |
# New-Item -Path 'C:\7zip' -ItemType Directory
# Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
# Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
# shell: powershell
# - name: Add 7-Zip to PATH
# run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
# shell: powershell
# - name: Install Protoc v21.12
# working-directory: C:\
# run: |
# if (Test-Path 'C:\protoc') {
# Write-Host "Protoc directory exists, skipping installation"
# return
# }
# New-Item -Path 'C:\protoc' -ItemType Directory
# Set-Location C:\protoc
# Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
# & 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
# shell: powershell
# - name: Add Protoc to PATH
# run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
# shell: powershell
# - name: Build Windows native node modules
# run: .\ci\build_windows_artifacts.ps1 aarch64-pc-windows-msvc
# - name: Upload Windows ARM64 Artifacts
# uses: actions/upload-artifact@v4
# with:
# name: node-native-windows-arm64
# path: |
# node/dist/*.node
node-windows-arm64:
name: vectordb ${{ matrix.config.arch }}-pc-windows-msvc
if: startsWith(github.ref, 'refs/tags/v')
runs-on: ubuntu-latest
container: alpine:edge
strategy:
fail-fast: false
matrix:
config:
# - arch: x86_64
- arch: aarch64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install dependencies
run: |
apk add protobuf-dev curl clang lld llvm19 grep npm bash msitools sed
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export AR=llvm-ar" >> saved_env
source "$HOME/.cargo/env"
rustup target add ${{ matrix.config.arch }}-pc-windows-msvc --toolchain 1.80.0
(mkdir -p sysroot && cd sysroot && sh ../ci/sysroot-${{ matrix.config.arch }}-pc-windows-msvc.sh)
echo "export C_INCLUDE_PATH=/usr/${{ matrix.config.arch }}-pc-windows-msvc/usr/include" >> saved_env
echo "export CARGO_BUILD_TARGET=${{ matrix.config.arch }}-pc-windows-msvc" >> saved_env
- name: Configure x86_64 build
if: ${{ matrix.config.arch == 'x86_64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=+crt-static,+avx2,+fma,+f16c -Clinker=lld -Clink-arg=/LIBPATH:/usr/x86_64-pc-windows-msvc/usr/lib'" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-feature=+crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=lld -Clink-arg=/LIBPATH:/usr/aarch64-pc-windows-msvc/usr/lib -Clink-arg=arm64rt.lib'" >> saved_env
- name: Build Windows Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_vectordb.sh ${{ matrix.config.arch }} ${{ matrix.config.arch }}-pc-windows-msvc
- name: Upload Windows Artifacts
uses: actions/upload-artifact@v4
with:
name: node-native-windows-${{ matrix.config.arch }}
path: |
node/dist/lancedb-vectordb-win32*.tgz
nodejs-windows:
name: lancedb ${{ matrix.target }}
@@ -478,103 +413,57 @@ jobs:
path: |
nodejs/dist/*.node
# TODO: re-enable once working https://github.com/lancedb/lancedb/pull/1831
# nodejs-windows-arm64:
# name: lancedb win32-arm64-msvc
# runs-on: windows-4x-arm
# if: startsWith(github.ref, 'refs/tags/v')
# steps:
# - uses: actions/checkout@v4
# - name: Install Git
# run: |
# Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
# Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
# shell: powershell
# - name: Add Git to PATH
# run: |
# Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
# $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# shell: powershell
# - name: Configure Git symlinks
# run: git config --global core.symlinks true
# - uses: actions/checkout@v4
# - uses: actions/setup-python@v5
# with:
# python-version: "3.13"
# - name: Install Visual Studio Build Tools
# run: |
# Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
# Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
# "--installPath", "C:\BuildTools", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
# "--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATL", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
# "--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
# shell: powershell
# - name: Add Visual Studio Build Tools to PATH
# run: |
# $vsPath = "C:\BuildTools\VC\Tools\MSVC"
# $latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
# $env:LIB = ""
# Add-Content $env:GITHUB_ENV "LIB=C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
# shell: powershell
# - name: Install Rust
# run: |
# Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
# .\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
# shell: powershell
# - name: Add Rust to PATH
# run: |
# Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
# shell: powershell
# - uses: Swatinem/rust-cache@v2
# with:
# workspaces: rust
# - name: Install 7-Zip ARM
# run: |
# New-Item -Path 'C:\7zip' -ItemType Directory
# Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
# Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
# shell: powershell
# - name: Add 7-Zip to PATH
# run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
# shell: powershell
# - name: Install Protoc v21.12
# working-directory: C:\
# run: |
# if (Test-Path 'C:\protoc') {
# Write-Host "Protoc directory exists, skipping installation"
# return
# }
# New-Item -Path 'C:\protoc' -ItemType Directory
# Set-Location C:\protoc
# Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
# & 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
# shell: powershell
# - name: Add Protoc to PATH
# run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
# shell: powershell
# - name: Build Windows native node modules
# run: .\ci\build_windows_artifacts_nodejs.ps1 aarch64-pc-windows-msvc
# - name: Upload Windows ARM64 Artifacts
# uses: actions/upload-artifact@v4
# with:
# name: nodejs-native-windows-arm64
# path: |
# nodejs/dist/*.node
nodejs-windows-arm64:
name: lancedb ${{ matrix.config.arch }}-pc-windows-msvc
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
runs-on: ubuntu-latest
container: alpine:edge
strategy:
fail-fast: false
matrix:
config:
# - arch: x86_64
- arch: aarch64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install dependencies
run: |
apk add protobuf-dev curl clang lld llvm19 grep npm bash msitools sed
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export AR=llvm-ar" >> saved_env
source "$HOME/.cargo/env"
rustup target add ${{ matrix.config.arch }}-pc-windows-msvc --toolchain 1.80.0
(mkdir -p sysroot && cd sysroot && sh ../ci/sysroot-${{ matrix.config.arch }}-pc-windows-msvc.sh)
echo "export C_INCLUDE_PATH=/usr/${{ matrix.config.arch }}-pc-windows-msvc/usr/include" >> saved_env
echo "export CARGO_BUILD_TARGET=${{ matrix.config.arch }}-pc-windows-msvc" >> saved_env
printf '#!/bin/sh\ncargo "$@"' > $HOME/.cargo/bin/cargo-xwin
chmod u+x $HOME/.cargo/bin/cargo-xwin
- name: Configure x86_64 build
if: ${{ matrix.config.arch == 'x86_64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=+crt-static,+avx2,+fma,+f16c -Clinker=lld -Clink-arg=/LIBPATH:/usr/x86_64-pc-windows-msvc/usr/lib'" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-feature=+crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=lld -Clink-arg=/LIBPATH:/usr/aarch64-pc-windows-msvc/usr/lib -Clink-arg=arm64rt.lib'" >> saved_env
- name: Build Windows Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_lancedb.sh ${{ matrix.config.arch }}
- name: Upload Windows Artifacts
uses: actions/upload-artifact@v4
with:
name: nodejs-native-windows-${{ matrix.config.arch }}
path: |
nodejs/dist/*.node
release:
name: vectordb NPM Publish
needs: [node, node-macos, node-linux-gnu, node-linux-musl, node-windows]
needs: [node, node-macos, node-linux-gnu, node-linux-musl, node-windows, node-windows-arm64]
runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
@@ -614,7 +503,7 @@ jobs:
release-nodejs:
name: lancedb NPM Publish
needs: [nodejs-macos, nodejs-linux-gnu, nodejs-linux-musl, nodejs-windows]
needs: [nodejs-macos, nodejs-linux-gnu, nodejs-linux-musl, nodejs-windows, nodejs-windows-arm64]
runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
@@ -672,6 +561,7 @@ jobs:
SLACK_WEBHOOK_URL: ${{ secrets.ACTION_MONITORING_SLACK }}
update-package-lock:
if: startsWith(github.ref, 'refs/tags/v')
needs: [release]
runs-on: ubuntu-latest
permissions:
@@ -689,6 +579,7 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}
update-package-lock-nodejs:
if: startsWith(github.ref, 'refs/tags/v')
needs: [release-nodejs]
runs-on: ubuntu-latest
permissions:
@@ -706,6 +597,7 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}
gh-release:
if: startsWith(github.ref, 'refs/tags/v')
runs-on: ubuntu-latest
permissions:
contents: write

View File

@@ -83,7 +83,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.8
python-version: 3.12
- uses: ./.github/workflows/build_windows_wheel
with:
python-minor-version: 8

View File

@@ -17,6 +17,7 @@ runs:
run: |
python -m pip install --upgrade pip
pip install twine
python3 -m pip install --upgrade pkginfo
- name: Choose repo
shell: bash
id: choose_repo

View File

@@ -23,27 +23,27 @@ rust-version = "1.80.0" # TO
[workspace.dependencies]
lance = { "version" = "=0.20.0", "features" = [
"dynamodb",
], git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-io = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-index = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-linalg = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-table = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-testing = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-datafusion = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-encoding = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
] }
lance-io = "0.20.0"
lance-index = "0.20.0"
lance-linalg = "0.20.0"
lance-table = "0.20.0"
lance-testing = "0.20.0"
lance-datafusion = "0.20.0"
lance-encoding = "0.20.0"
# Note that this one does not include pyarrow
arrow = { version = "52.2", optional = false }
arrow-array = "52.2"
arrow-data = "52.2"
arrow-ipc = "52.2"
arrow-ord = "52.2"
arrow-schema = "52.2"
arrow-arith = "52.2"
arrow-cast = "52.2"
arrow = { version = "53.2", optional = false }
arrow-array = "53.2"
arrow-data = "53.2"
arrow-ipc = "53.2"
arrow-ord = "53.2"
arrow-schema = "53.2"
arrow-arith = "53.2"
arrow-cast = "53.2"
async-trait = "0"
chrono = "0.4.35"
datafusion-common = "41.0"
datafusion-physical-plan = "41.0"
datafusion-common = "42.0"
datafusion-physical-plan = "42.0"
env_logger = "0.10"
half = { "version" = "=2.4.1", default-features = false, features = [
"num-traits",

View File

@@ -18,4 +18,4 @@ FILE=$HOME/.bashrc && test -f $FILE && source $FILE
cd node
npm ci
npm run build-release
npm run pack-build -t $TARGET_TRIPLE
npm run pack-build -- -t $TARGET_TRIPLE

View File

@@ -0,0 +1,105 @@
#!/bin/sh
# https://github.com/mstorsjo/msvc-wine/blob/master/vsdownload.py
# https://github.com/mozilla/gecko-dev/blob/6027d1d91f2d3204a3992633b3ef730ff005fc64/build/vs/vs2022-car.yaml
# function dl() {
# curl -O https://download.visualstudio.microsoft.com/download/pr/$1
# }
# [[.h]]
# "id": "Win11SDK_10.0.26100"
# "version": "10.0.26100.7"
# libucrt.lib
# example: <assert.h>
# dir: ucrt/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ee3a5fc6e9fc832af7295b138e93839/universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b1aa09b90fe314aceb090f6ec7626624/16ab2ea2187acffa6435e334796c8c89.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/400609bb0ff5804e36dbe6dcd42a7f01/6ee7bbee8435130a869cf971694fd9e2.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ac327317abb865a0e3f56b2faefa918/78fa3c824c2c48bd4a49ab5969adaaf7.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/f034bc0b2680f67dccd4bfeea3d0f932/7afc7b670accd8e3cc94cfffd516f5cb.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/7ed5e12f9d50f80825a8b27838cf4c7f/96076045170fe5db6d5dcf14b6f6688e.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/764edc185a696bda9e07df8891dddbbb/a1e2a83aa8a71c48c742eeaff6e71928.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/66854bedc6dbd5ccb5dd82c8e2412231/b2f03f34ff83ec013b9e45c7cd8e8a73.cab
# example: <windows.h>
# dir: um/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b286efac4d83a54fc49190bddef1edc9/windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/e0dc3811d92ab96fcb72bf63d6c08d71/766c0ffd568bbb31bf7fb6793383e24a.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/613503da4b5628768497822826aed39f/8125ee239710f33ea485965f76fae646.cab
# example: <winapifamily.h>
# dir: /shared
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/122979f0348d3a2a36b6aa1a111d5d0c/windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/766e04beecdfccff39e91dd9eb32834a/e89e3dcbb016928c7e426238337d69eb.cab
# "id": "Microsoft.VisualC.14.16.CRT.Headers"
# "version": "14.16.27045"
# example: <vcruntime.h>
# dir: MSVC/
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/87bbe41e09a2f83711e72696f49681429327eb7a4b90618c35667a6ba2e2880e/Microsoft.VisualC.14.16.CRT.Headers.vsix
# [[.lib]]
# advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib runtimeobject.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/944c4153b849a1f7d0c0404a4f1c05ea/windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5306aed3e1a38d1e8bef5934edeb2a9b/05047a45609f311645eebcac2739fc4c.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/13c8a73a0f5a6474040b26d016a26fab/13d68b8a7b6678a368e2d13ff4027521.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/149578fb3b621cdb61ee1813b9b3e791/463ad1b0783ebda908fd6c16a4abfe93.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5c986c4f393c6b09d5aec3b539e9fb4a/5a22e5cde814b041749fb271547f4dd5.cab
# fwpuclnt.lib arm64rt.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/7a332420d812f7c1d41da865ae5a7c52/windows%20sdk%20desktop%20libs%20arm64-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/19de98ed4a79938d0045d19c047936b3/3e2f7be479e3679d700ce0782e4cc318.cab
# libcmt.lib libvcruntime.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/227f40682a88dc5fa0ccb9cadc9ad30af99ad1f1a75db63407587d079f60d035/Microsoft.VisualC.14.16.CRT.ARM64.Desktop.vsix
msiextract universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
msiextract windows%20sdk%20desktop%20libs%20arm64-x86_en-us.msi
unzip -o Microsoft.VisualC.14.16.CRT.Headers.vsix
unzip -o Microsoft.VisualC.14.16.CRT.ARM64.Desktop.vsix
mkdir -p /usr/aarch64-pc-windows-msvc/usr/include
mkdir -p /usr/aarch64-pc-windows-msvc/usr/lib
# lowercase folder/file names
echo "$(find . -regex ".*/[^/]*[A-Z][^/]*")" | xargs -I{} sh -c 'mv "$(echo "{}" | sed -E '"'"'s/(.*\/)/\L\1/'"'"')" "$(echo "{}" | tr [A-Z] [a-z])"'
# .h
(cd 'program files/windows kits/10/include/10.0.26100.0' && cp -r ucrt/* um/* shared/* -t /usr/aarch64-pc-windows-msvc/usr/include)
cp -r contents/vc/tools/msvc/14.16.27023/include/* /usr/aarch64-pc-windows-msvc/usr/include
# lowercase #include "" and #include <>
find /usr/aarch64-pc-windows-msvc/usr/include -type f -exec sed -i -E 's/(#include <[^<>]*?[A-Z][^<>]*?>)|(#include "[^"]*?[A-Z][^"]*?")/\L\1\2/' "{}" ';'
# ARM intrinsics
# original dir: MSVC/
# '__n128x4' redefined in arm_neon.h
# "arm64_neon.h" included from intrin.h
(cd /usr/lib/llvm19/lib/clang/19/include && cp arm_neon.h intrin.h -t /usr/aarch64-pc-windows-msvc/usr/include)
# .lib
# _Interlocked intrinsics
# must always link with arm64rt.lib
# reason: https://developercommunity.visualstudio.com/t/libucrtlibstreamobj-error-lnk2001-unresolved-exter/1544787#T-ND1599818
# I don't understand the 'correct' fix for this; arm64rt.lib is supposed to be the workaround
(cd 'program files/windows kits/10/lib/10.0.26100.0/um/arm64' && cp advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib runtimeobject.lib fwpuclnt.lib arm64rt.lib -t /usr/aarch64-pc-windows-msvc/usr/lib)
(cd 'contents/vc/tools/msvc/14.16.27023/lib/arm64' && cp libcmt.lib libvcruntime.lib -t /usr/aarch64-pc-windows-msvc/usr/lib)
cp 'program files/windows kits/10/lib/10.0.26100.0/ucrt/arm64/libucrt.lib' /usr/aarch64-pc-windows-msvc/usr/lib

View File

@@ -0,0 +1,105 @@
#!/bin/sh
# https://github.com/mstorsjo/msvc-wine/blob/master/vsdownload.py
# https://github.com/mozilla/gecko-dev/blob/6027d1d91f2d3204a3992633b3ef730ff005fc64/build/vs/vs2022-car.yaml
# function dl() {
# curl -O https://download.visualstudio.microsoft.com/download/pr/$1
# }
# [[.h]]
# "id": "Win11SDK_10.0.26100"
# "version": "10.0.26100.7"
# libucrt.lib
# example: <assert.h>
# dir: ucrt/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ee3a5fc6e9fc832af7295b138e93839/universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b1aa09b90fe314aceb090f6ec7626624/16ab2ea2187acffa6435e334796c8c89.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/400609bb0ff5804e36dbe6dcd42a7f01/6ee7bbee8435130a869cf971694fd9e2.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ac327317abb865a0e3f56b2faefa918/78fa3c824c2c48bd4a49ab5969adaaf7.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/f034bc0b2680f67dccd4bfeea3d0f932/7afc7b670accd8e3cc94cfffd516f5cb.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/7ed5e12f9d50f80825a8b27838cf4c7f/96076045170fe5db6d5dcf14b6f6688e.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/764edc185a696bda9e07df8891dddbbb/a1e2a83aa8a71c48c742eeaff6e71928.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/66854bedc6dbd5ccb5dd82c8e2412231/b2f03f34ff83ec013b9e45c7cd8e8a73.cab
# example: <windows.h>
# dir: um/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b286efac4d83a54fc49190bddef1edc9/windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/e0dc3811d92ab96fcb72bf63d6c08d71/766c0ffd568bbb31bf7fb6793383e24a.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/613503da4b5628768497822826aed39f/8125ee239710f33ea485965f76fae646.cab
# example: <winapifamily.h>
# dir: /shared
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/122979f0348d3a2a36b6aa1a111d5d0c/windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/766e04beecdfccff39e91dd9eb32834a/e89e3dcbb016928c7e426238337d69eb.cab
# "id": "Microsoft.VisualC.14.16.CRT.Headers"
# "version": "14.16.27045"
# example: <vcruntime.h>
# dir: MSVC/
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/87bbe41e09a2f83711e72696f49681429327eb7a4b90618c35667a6ba2e2880e/Microsoft.VisualC.14.16.CRT.Headers.vsix
# [[.lib]]
# advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/944c4153b849a1f7d0c0404a4f1c05ea/windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5306aed3e1a38d1e8bef5934edeb2a9b/05047a45609f311645eebcac2739fc4c.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/13c8a73a0f5a6474040b26d016a26fab/13d68b8a7b6678a368e2d13ff4027521.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/149578fb3b621cdb61ee1813b9b3e791/463ad1b0783ebda908fd6c16a4abfe93.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5c986c4f393c6b09d5aec3b539e9fb4a/5a22e5cde814b041749fb271547f4dd5.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/bfc3904a0195453419ae4dfea7abd6fb/e10768bb6e9d0ea730280336b697da66.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/637f9f3be880c71f9e3ca07b4d67345c/f9b24c8280986c0683fbceca5326d806.cab
# dbghelp.lib fwpuclnt.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/9f51690d5aa804b1340ce12d1ec80f89/windows%20sdk%20desktop%20libs%20x64-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/d3a7df4ca3303a698640a29e558a5e5b/58314d0646d7e1a25e97c902166c3155.cab
# libcmt.lib libvcruntime.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/8728f21ae09940f1f4b4ee47b4a596be2509e2a47d2f0c83bbec0ea37d69644b/Microsoft.VisualC.14.16.CRT.x64.Desktop.vsix
msiextract universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
msiextract windows%20sdk%20desktop%20libs%20x64-x86_en-us.msi
unzip -o Microsoft.VisualC.14.16.CRT.Headers.vsix
unzip -o Microsoft.VisualC.14.16.CRT.x64.Desktop.vsix
mkdir -p /usr/x86_64-pc-windows-msvc/usr/include
mkdir -p /usr/x86_64-pc-windows-msvc/usr/lib
# lowercase folder/file names
echo "$(find . -regex ".*/[^/]*[A-Z][^/]*")" | xargs -I{} sh -c 'mv "$(echo "{}" | sed -E '"'"'s/(.*\/)/\L\1/'"'"')" "$(echo "{}" | tr [A-Z] [a-z])"'
# .h
(cd 'program files/windows kits/10/include/10.0.26100.0' && cp -r ucrt/* um/* shared/* -t /usr/x86_64-pc-windows-msvc/usr/include)
cp -r contents/vc/tools/msvc/14.16.27023/include/* /usr/x86_64-pc-windows-msvc/usr/include
# lowercase #include "" and #include <>
find /usr/x86_64-pc-windows-msvc/usr/include -type f -exec sed -i -E 's/(#include <[^<>]*?[A-Z][^<>]*?>)|(#include "[^"]*?[A-Z][^"]*?")/\L\1\2/' "{}" ';'
# x86 intrinsics
# original dir: MSVC/
# '_mm_movemask_epi8' defined in emmintrin.h
# '__v4sf' defined in xmmintrin.h
# '__v2si' defined in mmintrin.h
# '__m128d' redefined in immintrin.h
# '__m128i' redefined in intrin.h
# '_mm_comlt_epu8' defined in ammintrin.h
(cd /usr/lib/llvm19/lib/clang/19/include && cp emmintrin.h xmmintrin.h mmintrin.h immintrin.h intrin.h ammintrin.h -t /usr/x86_64-pc-windows-msvc/usr/include)
# .lib
(cd 'program files/windows kits/10/lib/10.0.26100.0/um/x64' && cp advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib dbghelp.lib fwpuclnt.lib -t /usr/x86_64-pc-windows-msvc/usr/lib)
(cd 'contents/vc/tools/msvc/14.16.27023/lib/x64' && cp libcmt.lib libvcruntime.lib -t /usr/x86_64-pc-windows-msvc/usr/lib)
cp 'program files/windows kits/10/lib/10.0.26100.0/ucrt/x64/libucrt.lib' /usr/x86_64-pc-windows-msvc/usr/lib

View File

@@ -6,6 +6,7 @@ LanceDB registers the OpenAI embeddings function in the registry by default, as
|---|---|---|---|
| `name` | `str` | `"text-embedding-ada-002"` | The name of the model. |
| `dim` | `int` | Model default | For OpenAI's newer text-embedding-3 models, a dimensionality smaller than the default 1536 can be specified; this parameter supports that. |
| `use_azure` | `bool` | `False` | Set to `True` to use the Azure OpenAI SDK |
```python

View File

@@ -27,10 +27,13 @@ LanceDB OSS supports object stores such as AWS S3 (and compatible stores), Azure
Azure Blob Storage:
<!-- skip-test -->
```python
import lancedb
db = lancedb.connect("az://bucket/path")
```
Note that for Azure, storage credentials must be configured. See [below](#azure-blob-storage) for more details.
=== "TypeScript"
@@ -87,11 +90,6 @@ In most cases, when running in the respective cloud and permissions are set up c
export TIMEOUT=60s
```
!!! note "`storage_options` availability"
The `storage_options` parameter is only available in the Python *async* API and the JavaScript API.
It is not yet supported in the Python synchronous API.
If you only want this to apply to one particular connection, you can pass the `storage_options` argument when opening the connection:
=== "Python"

View File

@@ -790,6 +790,101 @@ Use the `drop_table()` method on the database to remove a table.
This permanently removes the table and is not recoverable, unlike deleting rows.
If the table does not exist, an exception is raised.
## Changing schemas
While tables must have a schema specified when they are created, you can
change the schema over time. There are three methods to alter the schema of
a table:
* `add_columns`: Add new columns to the table
* `alter_columns`: Alter the name, nullability, or data type of a column
* `drop_columns`: Drop columns from the table
### Adding new columns
You can add new columns to the table with the `add_columns` method. New columns
are filled with values based on a SQL expression. For example, you can add a new
column `double_price` to the table and fill it with the value of `price * 2`.
=== "Python"
```python
table.add_columns({"double_price": "price * 2"})
```
**API Reference:** [lancedb.table.Table.add_columns][]
=== "Typescript"
```typescript
--8<-- "nodejs/examples/basic.test.ts:add_columns"
```
**API Reference:** [lancedb.Table.addColumns](../js/classes/Table.md/#addcolumns)
If you want to fill it with null, you can use `cast(NULL as <data_type>)` as
the SQL expression to fill the column with nulls, while controlling the data
type of the column. Available data types are based on the
[DataFusion data types](https://datafusion.apache.org/user-guide/sql/data_types.html).
You can use any of the SQL types, such as `BIGINT`:
```sql
cast(NULL as BIGINT)
```
Using Arrow data types and the `arrow_typeof` function is not yet supported.
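Putting those pieces together, a null-filled `BIGINT` column could be added like this (sketch; `y` is an arbitrary column name):

```python
table.add_columns({"y": "cast(NULL as BIGINT)"})
```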
<!-- TODO: we could provide a better formula for filling with nulls:
https://github.com/lancedb/lance/issues/3175
-->
### Altering existing columns
You can alter the name, nullability, or data type of a column with the `alter_columns`
method.
Changing the name or nullability of a column just updates the metadata. Because
of this, it's a fast operation. Changing the data type of a column requires
rewriting the column, which can be a heavy operation.
=== "Python"
```python
import pyarrow as pa
table.alter_column({"path": "double_price", "rename": "dbl_price",
"data_type": pa.float32(), "nullable": False})
```
**API Reference:** [lancedb.table.Table.alter_columns][]
=== "Typescript"
```typescript
--8<-- "nodejs/examples/basic.test.ts:alter_columns"
```
**API Reference:** [lancedb.Table.alterColumns](../js/classes/Table.md/#altercolumns)
### Dropping columns
You can drop columns from the table with the `drop_columns` method. This will
remove the column from the schema.
<!-- TODO: Provide guidance on how to reduce disk usage once optimize helps here
waiting on: https://github.com/lancedb/lance/issues/3177
-->
=== "Python"
```python
table.drop_columns(["dbl_price"])
```
**API Reference:** [lancedb.table.Table.drop_columns][]
=== "Typescript"
```typescript
--8<-- "nodejs/examples/basic.test.ts:drop_columns"
```
**API Reference:** [lancedb.Table.dropColumns](../js/classes/Table.md/#dropcolumns)
## Handling bad vectors
In LanceDB Python, you can use the `on_bad_vectors` parameter to choose how

View File

@@ -8,7 +8,7 @@
<parent>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.14.0-beta.0</version>
<version>0.14.0-final.0</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@@ -6,7 +6,7 @@
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.14.0-beta.0</version>
<version>0.14.0-final.0</version>
<packaging>pom</packaging>
<name>LanceDB Parent</name>

node/package-lock.json (generated; 116 lines changed)
View File

@@ -1,12 +1,12 @@
{
"name": "vectordb",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "vectordb",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"cpu": [
"x64",
"arm64"
@@ -52,14 +52,14 @@
"uuid": "^9.0.0"
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.14.0-beta.0",
"@lancedb/vectordb-darwin-x64": "0.14.0-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.14.0-beta.0",
"@lancedb/vectordb-linux-arm64-musl": "0.14.0-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.14.0-beta.0",
"@lancedb/vectordb-linux-x64-musl": "0.14.0-beta.0",
"@lancedb/vectordb-win32-arm64-msvc": "0.14.0-beta.0",
"@lancedb/vectordb-win32-x64-msvc": "0.14.0-beta.0"
"@lancedb/vectordb-darwin-arm64": "0.14.0",
"@lancedb/vectordb-darwin-x64": "0.14.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.14.0",
"@lancedb/vectordb-linux-arm64-musl": "0.14.0",
"@lancedb/vectordb-linux-x64-gnu": "0.14.0",
"@lancedb/vectordb-linux-x64-musl": "0.14.0",
"@lancedb/vectordb-win32-arm64-msvc": "0.14.0",
"@lancedb/vectordb-win32-x64-msvc": "0.14.0"
},
"peerDependencies": {
"@apache-arrow/ts": "^14.0.2",
@@ -329,6 +329,102 @@
"@jridgewell/sourcemap-codec": "^1.4.10"
}
},
"node_modules/@lancedb/vectordb-darwin-arm64": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.14.0.tgz",
"integrity": "sha512-C8wp+eJQY3RMLIRfxDnOm8bYg458OI3Cz7Jh7ws6ibquBdJDCiTdwFfcUXrkoaQ9Wv4nHZOEqupj3FBMsks1hw==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"darwin"
]
},
"node_modules/@lancedb/vectordb-darwin-x64": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.14.0.tgz",
"integrity": "sha512-5jkQuEVGaPViFb4dOjncUqVCbvEiT8XYFZoprE0yv7HUUCdt5v15GTNxey72yw+aaX2mdb2CeFIs+4ySZqy/MA==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"darwin"
]
},
"node_modules/@lancedb/vectordb-linux-arm64-gnu": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.14.0.tgz",
"integrity": "sha512-YLboFJLQyFzsYWi2iW1nr2SGaZTaj4gERIufyTSnX+VXlEYKHke3cMFLF+EamH8eejv2HwXdJpidPaP6aSzujw==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-linux-arm64-musl": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-musl/-/vectordb-linux-arm64-musl-0.14.0.tgz",
"integrity": "sha512-rel/SaxGRtx5GdAkFH1IknBr0V/tbrN4jYT6FixmSvgc9kgxrMGlBUHSRAO5atdRXZ8jT7XWuOqW1QdgsmPi0g==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-linux-x64-gnu": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.14.0.tgz",
"integrity": "sha512-N29n8OO2JqSPaSVd5gmyh6r4x6LX0qpcCHrhkEaRoKKIXYdHQ8sAHOqHNt3xhMDLwDJfjGmzAwd977cOYM5MBw==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-linux-x64-musl": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-musl/-/vectordb-linux-x64-musl-0.14.0.tgz",
"integrity": "sha512-36Ewl9M6IsYgxBIaThgqaSlQ++8YsSnZB85DOnuIds+sRbBfNkknvwBRFO1/FGN8RSBydFPy1irNFmCOnrlTZg==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-win32-arm64-msvc": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-arm64-msvc/-/vectordb-win32-arm64-msvc-0.14.0.tgz",
"integrity": "sha512-4qsna5yI7umGEA868/ifr1Np66d0dhFAOIGaJKS5Z+Zm4Zplr42BjVZiNWtwwKhndtsiPJnFCYVYRKfjTLZWdg==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"win32"
]
},
"node_modules/@lancedb/vectordb-win32-x64-msvc": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.14.0.tgz",
"integrity": "sha512-1u+J5WFClNc6mzgF5otevMnOxW3pj8yOHrPoIiZe9SrL8O2oVtdYfWJZYG/OST21cS0Mc4Z0upX86G0sA4kEfA==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"win32"
]
},
"node_modules/@neon-rs/cli": {
"version": "0.0.160",
"resolved": "https://registry.npmjs.org/@neon-rs/cli/-/cli-0.0.160.tgz",

View File

@@ -1,7 +1,8 @@
{
"name": "vectordb",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"description": " Serverless, low-latency vector database for AI applications",
"private": false,
"main": "dist/index.js",
"types": "dist/index.d.ts",
"scripts": {
@@ -91,13 +92,13 @@
}
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-x64": "0.14.0-beta.0",
"@lancedb/vectordb-darwin-arm64": "0.14.0-beta.0",
"@lancedb/vectordb-linux-x64-gnu": "0.14.0-beta.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.14.0-beta.0",
"@lancedb/vectordb-linux-x64-musl": "0.14.0-beta.0",
"@lancedb/vectordb-linux-arm64-musl": "0.14.0-beta.0",
"@lancedb/vectordb-win32-x64-msvc": "0.14.0-beta.0",
"@lancedb/vectordb-win32-arm64-msvc": "0.14.0-beta.0"
"@lancedb/vectordb-darwin-x64": "0.14.0",
"@lancedb/vectordb-darwin-arm64": "0.14.0",
"@lancedb/vectordb-linux-x64-gnu": "0.14.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.14.0",
"@lancedb/vectordb-linux-x64-musl": "0.14.0",
"@lancedb/vectordb-linux-arm64-musl": "0.14.0",
"@lancedb/vectordb-win32-x64-msvc": "0.14.0",
"@lancedb/vectordb-win32-arm64-msvc": "0.14.0"
}
}

View File

@@ -1,7 +1,7 @@
[package]
name = "lancedb-nodejs"
edition.workspace = true
version = "0.14.0-beta.0"
version = "0.14.0"
license.workspace = true
description.workspace = true
repository.workspace = true

View File

@@ -825,6 +825,18 @@ describe("schema evolution", function () {
new Field("price", new Float64(), true),
]);
expect(await table.schema()).toEqual(expectedSchema);
await table.alterColumns([{ path: "new_id", dataType: "int32" }]);
const expectedSchema2 = new Schema([
new Field("new_id", new Int32(), true),
new Field(
"vector",
new FixedSizeList(2, new Field("item", new Float32(), true)),
true,
),
new Field("price", new Float64(), true),
]);
expect(await table.schema()).toEqual(expectedSchema2);
});
it("can drop a column from the schema", async function () {

View File

@@ -116,6 +116,26 @@ test("basic table examples", async () => {
await tbl.add(data);
// --8<-- [end:add_data]
}
{
// --8<-- [start:add_columns]
await tbl.addColumns([{ name: "double_price", valueSql: "price * 2" }]);
// --8<-- [end:add_columns]
// --8<-- [start:alter_columns]
await tbl.alterColumns([
{
path: "double_price",
rename: "dbl_price",
dataType: "float",
nullable: true,
},
]);
// --8<-- [end:alter_columns]
// --8<-- [start:drop_columns]
await tbl.dropColumns(["dbl_price"]);
// --8<-- [end:drop_columns]
}
{
// --8<-- [start:vector_search]
const res = await tbl.search([100, 100]).limit(2).toArray();

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-arm64",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"os": ["darwin"],
"cpu": ["arm64"],
"main": "lancedb.darwin-arm64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-x64",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"os": ["darwin"],
"cpu": ["x64"],
"main": "lancedb.darwin-x64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-gnu",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-gnu.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-musl",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-musl.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-gnu",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-gnu.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-musl",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-musl.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-arm64-msvc",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"os": [
"win32"
],

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-x64-msvc",
"version": "0.14.0-beta.0",
"version": "0.14.0",
"os": ["win32"],
"cpu": ["x64"],
"main": "lancedb.win32-x64-msvc.node",

View File

@@ -1,12 +1,12 @@
{
"name": "@lancedb/lancedb",
"version": "0.13.0",
"version": "0.14.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@lancedb/lancedb",
"version": "0.13.0",
"version": "0.14.0",
"cpu": [
"x64",
"arm64"

View File

@@ -10,7 +10,8 @@
"vector database",
"ann"
],
"version": "0.14.0-beta.0",
"private": false,
"version": "0.14.0",
"main": "dist/index.js",
"exports": {
".": "./dist/index.js",
@@ -30,7 +31,8 @@
"aarch64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
"aarch64-unknown-linux-musl",
"x86_64-pc-windows-msvc"
"x86_64-pc-windows-msvc",
"aarch64-pc-windows-msvc"
]
}
},

View File

@@ -178,16 +178,20 @@ impl Table {
#[napi(catch_unwind)]
pub async fn alter_columns(&self, alterations: Vec<ColumnAlteration>) -> napi::Result<()> {
for alteration in &alterations {
if alteration.rename.is_none() && alteration.nullable.is_none() {
if alteration.rename.is_none()
&& alteration.nullable.is_none()
&& alteration.data_type.is_none()
{
return Err(napi::Error::from_reason(
"Alteration must have a 'rename' or 'nullable' field.",
"Alteration must have a 'rename', 'dataType', or 'nullable' field.",
));
}
}
let alterations = alterations
.into_iter()
.map(LanceColumnAlteration::from)
.collect::<Vec<_>>();
.map(LanceColumnAlteration::try_from)
.collect::<std::result::Result<Vec<_>, String>>()
.map_err(napi::Error::from_reason)?;
self.inner_ref()?
.alter_columns(&alterations)
@@ -433,24 +437,43 @@ pub struct ColumnAlteration {
/// The new name of the column. If not provided then the name will not be changed.
/// This must be distinct from the names of all other columns in the table.
pub rename: Option<String>,
/// A new data type for the column. If not provided then the data type will not be changed.
/// Changing data types is limited to casting to the same general type. For example, these
/// changes are valid:
/// * `int32` -> `int64` (integers)
/// * `double` -> `float` (floats)
/// * `string` -> `large_string` (strings)
/// But these changes are not:
/// * `int32` -> `double` (mix integers and floats)
/// * `string` -> `int32` (mix strings and integers)
pub data_type: Option<String>,
/// Set the new nullability. Note that a nullable column cannot be made non-nullable.
pub nullable: Option<bool>,
}
impl From<ColumnAlteration> for LanceColumnAlteration {
fn from(js: ColumnAlteration) -> Self {
impl TryFrom<ColumnAlteration> for LanceColumnAlteration {
type Error = String;
fn try_from(js: ColumnAlteration) -> std::result::Result<Self, Self::Error> {
let ColumnAlteration {
path,
rename,
nullable,
data_type,
} = js;
Self {
let data_type = if let Some(data_type) = data_type {
Some(
lancedb::utils::string_to_datatype(&data_type)
.ok_or_else(|| format!("Invalid data type: {}", data_type))?,
)
} else {
None
};
Ok(Self {
path,
rename,
nullable,
// TODO: wire up this field
data_type: None,
}
data_type,
})
}
}

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.17.0-beta.1"
current_version = "0.17.1-beta.0"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-python"
version = "0.17.0-beta.1"
version = "0.17.1-beta.0"
edition.workspace = true
description = "Python bindings for LanceDB"
license.workspace = true
@@ -14,23 +14,18 @@ name = "_lancedb"
crate-type = ["cdylib"]
[dependencies]
arrow = { version = "52.1", features = ["pyarrow"] }
arrow = { version = "53.2", features = ["pyarrow"] }
lancedb = { path = "../rust/lancedb", default-features = false }
env_logger.workspace = true
pyo3 = { version = "0.21", features = [
pyo3 = { version = "0.22.2", features = [
"extension-module",
"abi3-py39",
"gil-refs"
] }
# Using this fork for now: https://github.com/awestlake87/pyo3-asyncio/issues/119
# pyo3-asyncio = { version = "0.20", features = ["attributes", "tokio-runtime"] }
pyo3-asyncio-0-21 = { version = "0.21.0", features = [
"attributes",
"tokio-runtime"
] }
pyo3-async-runtimes = { version = "0.22", features = ["attributes", "tokio-runtime"] }
pin-project = "1.1.5"
futures.workspace = true
tokio = { version = "1.36.0", features = ["sync"] }
tokio = { version = "1.40", features = ["sync"] }
[build-dependencies]
pyo3-build-config = { version = "0.20.3", features = [

View File

@@ -3,7 +3,7 @@ name = "lancedb"
# version in Cargo.toml
dependencies = [
"deprecation",
"pylance==0.20.0b3",
"pylance==0.20.0",
"tqdm>=4.27.0",
"pydantic>=1.10",
"packaging",

View File

@@ -36,6 +36,7 @@ def connect(
read_consistency_interval: Optional[timedelta] = None,
request_thread_pool: Optional[Union[int, ThreadPoolExecutor]] = None,
client_config: Union[ClientConfig, Dict[str, Any], None] = None,
storage_options: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> DBConnection:
"""Connect to a LanceDB database.
@@ -67,6 +68,9 @@ def connect(
Configuration options for the LanceDB Cloud HTTP client. If a dict, then
the keys are the attributes of the ClientConfig class. If None, then the
default configuration is used.
storage_options: dict, optional
Additional options for the storage backend. See available options at
https://lancedb.github.io/lancedb/guides/storage/
Examples
--------
@@ -106,12 +110,17 @@ def connect(
# TODO: remove this (deprecation warning downstream)
request_thread_pool=request_thread_pool,
client_config=client_config,
storage_options=storage_options,
**kwargs,
)
if kwargs:
raise ValueError(f"Unknown keyword arguments: {kwargs}")
return LanceDBConnection(uri, read_consistency_interval=read_consistency_interval)
return LanceDBConnection(
uri,
read_consistency_interval=read_consistency_interval,
storage_options=storage_options,
)
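A sketch of what the new parameter enables on the sync path: the same `storage_options` dict accepted by `connect_async` can now be passed to `connect`. Key names follow the object-store options in the storage guide; the values below are placeholders, not working credentials.

```python
import lancedb

db = lancedb.connect(
    "az://my-container/my-db",
    storage_options={
        "account_name": "some-account",
        "account_key": "...",  # placeholder credential
    },
)
```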
async def connect_async(

View File

@@ -79,9 +79,21 @@ class Query:
def limit(self, limit: int): ...
def offset(self, offset: int): ...
def nearest_to(self, query_vec: pa.Array) -> VectorQuery: ...
def nearest_to_text(self, query: dict) -> Query: ...
def nearest_to_text(self, query: dict) -> FTSQuery: ...
async def execute(self, max_batch_length: Optional[int]) -> RecordBatchStream: ...
class FTSQuery:
def where(self, filter: str): ...
def select(self, columns: List[str]): ...
def limit(self, limit: int): ...
def offset(self, offset: int): ...
def fast_search(self): ...
def with_row_id(self): ...
def postfilter(self): ...
def nearest_to(self, query_vec: pa.Array) -> HybridQuery: ...
async def execute(self, max_batch_length: Optional[int]) -> RecordBatchStream: ...
async def explain_plan(self) -> str: ...
class VectorQuery:
async def execute(self) -> RecordBatchStream: ...
def where(self, filter: str): ...
@@ -95,6 +107,24 @@ class VectorQuery:
def refine_factor(self, refine_factor: int): ...
def nprobes(self, nprobes: int): ...
def bypass_vector_index(self): ...
def nearest_to_text(self, query: dict) -> HybridQuery: ...
class HybridQuery:
def where(self, filter: str): ...
def select(self, columns: List[str]): ...
def limit(self, limit: int): ...
def offset(self, offset: int): ...
def fast_search(self): ...
def with_row_id(self): ...
def postfilter(self): ...
def distance_type(self, distance_type: str): ...
def refine_factor(self, refine_factor: int): ...
def nprobes(self, nprobes: int): ...
def bypass_vector_index(self): ...
def to_vector_query(self) -> VectorQuery: ...
def to_fts_query(self) -> FTSQuery: ...
def get_limit(self) -> Optional[int]: ...
def get_with_row_id(self) -> bool: ...
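These stubs encode a builder state machine: `nearest_to_text` turns a `Query` into an `FTSQuery`, and `nearest_to` turns an `FTSQuery` into a `HybridQuery` (and likewise `VectorQuery.nearest_to_text` produces a `HybridQuery`). A sketch of the intended chaining through the public async API, assuming a hypothetical `./db` path and a table `t` that already has an FTS index on a text column, as in the test fixture later in this diff:

```python
import asyncio
import lancedb

async def main():
    db = await lancedb.connect_async("./db")  # hypothetical path
    table = await db.open_table("t")
    # Query -> FTSQuery -> HybridQuery, mirroring the stub return types.
    hybrid = table.query().nearest_to_text("hello").nearest_to([0.1, 0.2])
    print(await hybrid.to_arrow())

asyncio.run(main())
```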
class CompactionStats:
fragments_removed: int

View File

@@ -13,34 +13,29 @@
from __future__ import annotations
import asyncio
import os
from abc import abstractmethod
from pathlib import Path
from typing import TYPE_CHECKING, Dict, Iterable, List, Literal, Optional, Union
import pyarrow as pa
from overrides import EnforceOverrides, override
from pyarrow import fs
from lancedb.common import data_to_reader, validate_schema
from lancedb.common import data_to_reader, sanitize_uri, validate_schema
from lancedb.background_loop import BackgroundEventLoop
from ._lancedb import connect as lancedb_connect
from .table import (
AsyncTable,
LanceTable,
Table,
_table_path,
sanitize_create_table,
)
from .util import (
fs_from_uri,
get_uri_location,
get_uri_scheme,
validate_table_name,
)
if TYPE_CHECKING:
import pyarrow as pa
from .pydantic import LanceModel
from datetime import timedelta
@@ -48,6 +43,8 @@ if TYPE_CHECKING:
from .common import DATA, URI
from .embeddings import EmbeddingFunctionConfig
LOOP = BackgroundEventLoop()
class DBConnection(EnforceOverrides):
"""An active LanceDB connection interface."""
@@ -180,6 +177,7 @@ class DBConnection(EnforceOverrides):
control over how data is saved, either provide the PyArrow schema to
convert to or else provide a [PyArrow Table](pyarrow.Table) directly.
>>> import pyarrow as pa
>>> custom_schema = pa.schema([
... pa.field("vector", pa.list_(pa.float32(), 2)),
... pa.field("lat", pa.float32()),
@@ -327,7 +325,11 @@ class LanceDBConnection(DBConnection):
"""
def __init__(
self, uri: URI, *, read_consistency_interval: Optional[timedelta] = None
self,
uri: URI,
*,
read_consistency_interval: Optional[timedelta] = None,
storage_options: Optional[Dict[str, str]] = None,
):
if not isinstance(uri, Path):
scheme = get_uri_scheme(uri)
@@ -338,9 +340,27 @@ class LanceDBConnection(DBConnection):
uri = uri.expanduser().absolute()
Path(uri).mkdir(parents=True, exist_ok=True)
self._uri = str(uri)
self._entered = False
self.read_consistency_interval = read_consistency_interval
self.storage_options = storage_options
if read_consistency_interval is not None:
read_consistency_interval_secs = read_consistency_interval.total_seconds()
else:
read_consistency_interval_secs = None
async def do_connect():
return await lancedb_connect(
sanitize_uri(uri),
None,
None,
None,
read_consistency_interval_secs,
None,
storage_options,
)
self._conn = AsyncConnection(LOOP.run(do_connect()))
def __repr__(self) -> str:
val = f"{self.__class__.__name__}({self._uri}"
@@ -364,32 +384,7 @@ class LanceDBConnection(DBConnection):
Iterator of str.
A list of table names.
"""
try:
asyncio.get_running_loop()
# User application is async. Soon we will just tell them to use the
# async version. Until then fallback to the old sync implementation.
try:
filesystem = fs_from_uri(self.uri)[0]
except pa.ArrowInvalid:
raise NotImplementedError("Unsupported scheme: " + self.uri)
try:
loc = get_uri_location(self.uri)
paths = filesystem.get_file_info(fs.FileSelector(loc))
except FileNotFoundError:
# It is ok if the file does not exist since it will be created
paths = []
tables = [
os.path.splitext(file_info.base_name)[0]
for file_info in paths
if file_info.extension == "lance"
]
tables.sort()
return tables
except RuntimeError:
# User application is sync. It is safe to use the async implementation
# under the hood.
return asyncio.run(self._async_get_table_names(page_token, limit))
return LOOP.run(self._conn.table_names(start_after=page_token, limit=limit))
def __len__(self) -> int:
return len(self.table_names())
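The sync connection now delegates to the async connection by running coroutines on a shared background event loop (`LOOP.run(...)`) instead of calling `asyncio.run` per method. A minimal stand-in sketch of that pattern, not the actual `BackgroundEventLoop` implementation:

```python
import asyncio
import threading

class TinyBackgroundLoop:
    """One loop runs forever on a daemon thread; sync callers block on
    run_coroutine_threadsafe. A sketch of the delegation pattern only."""

    def __init__(self):
        self.loop = asyncio.new_event_loop()
        threading.Thread(target=self.loop.run_forever, daemon=True).start()

    def run(self, coro):
        return asyncio.run_coroutine_threadsafe(coro, self.loop).result()
```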
@@ -461,19 +456,16 @@ class LanceDBConnection(DBConnection):
If True, ignore if the table does not exist.
"""
try:
table_uri = _table_path(self.uri, name)
filesystem, path = fs_from_uri(table_uri)
filesystem.delete_dir(path)
except FileNotFoundError:
LOOP.run(self._conn.drop_table(name))
except ValueError as e:
if not ignore_missing:
raise
raise e
if f"Table '{name}' was not found" not in str(e):
raise e
@override
def drop_database(self):
dummy_table_uri = _table_path(self.uri, "dummy")
uri = dummy_table_uri.removesuffix("dummy.lance")
filesystem, path = fs_from_uri(uri)
filesystem.delete_dir(path)
LOOP.run(self._conn.drop_database())
class AsyncConnection(object):
@@ -689,6 +681,7 @@ class AsyncConnection(object):
control over how data is saved, either provide the PyArrow schema to
convert to or else provide a [PyArrow Table](pyarrow.Table) directly.
>>> import pyarrow as pa
>>> custom_schema = pa.schema([
... pa.field("vector", pa.list_(pa.float32(), 2)),
... pa.field("lat", pa.float32()),

View File

@@ -48,6 +48,9 @@ class OpenAIEmbeddings(TextEmbeddingFunction):
organization: Optional[str] = None
api_key: Optional[str] = None
# Set true to use Azure OpenAI API
use_azure: bool = False
def ndims(self):
return self._ndims
@@ -123,4 +126,8 @@ class OpenAIEmbeddings(TextEmbeddingFunction):
kwargs["organization"] = self.organization
if self.api_key:
kwargs["api_key"] = self.api_key
return openai.OpenAI(**kwargs)
if self.use_azure:
return openai.AzureOpenAI(**kwargs)
else:
return openai.OpenAI(**kwargs)
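A hedged usage sketch for the new flag, assuming the standard Azure OpenAI environment variables (`AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, `OPENAI_API_VERSION`) are set so that `openai.AzureOpenAI` can pick them up when they are not passed explicitly:

```python
from lancedb.embeddings import get_registry

# Assumes the Azure OpenAI environment variables above are configured.
func = get_registry().get("openai").create(
    name="text-embedding-3-small",
    use_azure=True,
)
```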

View File

@@ -12,18 +12,22 @@
# limitations under the License.
import os
from typing import ClassVar, List, Union
from typing import ClassVar, TYPE_CHECKING, List, Union
import numpy as np
import pyarrow as pa
from ..util import attempt_import_or_raise
from .base import TextEmbeddingFunction
from .base import EmbeddingFunction
from .registry import register
from .utils import api_key_not_found_help, TEXT
from .utils import api_key_not_found_help, IMAGES
if TYPE_CHECKING:
import PIL
@register("voyageai")
class VoyageAIEmbeddingFunction(TextEmbeddingFunction):
class VoyageAIEmbeddingFunction(EmbeddingFunction):
"""
An embedding function that uses the VoyageAI API
@@ -36,6 +40,7 @@ class VoyageAIEmbeddingFunction(TextEmbeddingFunction):
* voyage-3
* voyage-3-lite
* voyage-multimodal-3
* voyage-finance-2
* voyage-multilingual-2
* voyage-law-2
@@ -54,7 +59,7 @@ class VoyageAIEmbeddingFunction(TextEmbeddingFunction):
.create(name="voyage-3")
class TextModel(LanceModel):
text: str = voyageai.SourceField()
data: str = voyageai.SourceField()
vector: Vector(voyageai.ndims()) = voyageai.VectorField()
data = [ { "text": "hello world" },
@@ -77,6 +82,7 @@ class VoyageAIEmbeddingFunction(TextEmbeddingFunction):
return 1536
elif self.name in [
"voyage-3",
"voyage-multimodal-3",
"voyage-finance-2",
"voyage-multilingual-2",
"voyage-law-2",
@@ -85,19 +91,19 @@ class VoyageAIEmbeddingFunction(TextEmbeddingFunction):
else:
raise ValueError(f"Model {self.name} not supported")
def compute_query_embeddings(self, query: str, *args, **kwargs) -> List[np.array]:
return self.compute_source_embeddings(query, input_type="query")
def sanitize_input(self, images: IMAGES) -> Union[List[bytes], np.ndarray]:
"""
Sanitize the input to the embedding function.
"""
if isinstance(images, (str, bytes)):
images = [images]
elif isinstance(images, pa.Array):
images = images.to_pylist()
elif isinstance(images, pa.ChunkedArray):
images = images.combine_chunks().to_pylist()
return images
def compute_source_embeddings(self, texts: TEXT, *args, **kwargs) -> List[np.array]:
texts = self.sanitize_input(texts)
input_type = (
kwargs.get("input_type") or "document"
) # assume source input type if not passed by `compute_query_embeddings`
return self.generate_embeddings(texts, input_type=input_type)
def generate_embeddings(
self, texts: Union[List[str], np.ndarray], *args, **kwargs
) -> List[np.array]:
def generate_text_embeddings(self, text: str, **kwargs) -> np.ndarray:
"""
Get the embedding for the given text
@@ -109,15 +115,55 @@ class VoyageAIEmbeddingFunction(TextEmbeddingFunction):
truncation: Optional[bool]
"""
VoyageAIEmbeddingFunction._init_client()
rs = VoyageAIEmbeddingFunction.client.embed(
texts=texts, model=self.name, **kwargs
)
if self.name in ["voyage-multimodal-3"]:
rs = VoyageAIEmbeddingFunction._get_client().multimodal_embed(
inputs=[[text]], model=self.name, **kwargs
)
else:
rs = VoyageAIEmbeddingFunction._get_client().embed(
texts=[text], model=self.name, **kwargs
)
return [emb for emb in rs.embeddings]
return rs.embeddings[0]
def generate_image_embedding(
self, image: "PIL.Image.Image", **kwargs
) -> np.ndarray:
rs = VoyageAIEmbeddingFunction._get_client().multimodal_embed(
inputs=[[image]], model=self.name, **kwargs
)
return rs.embeddings[0]
def compute_query_embeddings(
self, query: Union[str, "PIL.Image.Image"], *args, **kwargs
) -> List[np.ndarray]:
"""
Compute the embeddings for a given user query
Parameters
----------
query : Union[str, PIL.Image.Image]
The query to embed. A query can be either text or an image.
"""
if isinstance(query, str):
return [self.generate_text_embeddings(query, input_type="query")]
else:
PIL = attempt_import_or_raise("PIL", "pillow")
if isinstance(query, PIL.Image.Image):
return [self.generate_image_embedding(query, input_type="query")]
else:
raise TypeError("Only text or PIL images are supported as query")
def compute_source_embeddings(
self, images: IMAGES, *args, **kwargs
) -> List[np.array]:
images = self.sanitize_input(images)
return [
self.generate_image_embedding(img, input_type="document") for img in images
]
@staticmethod
def _init_client():
def _get_client():
if VoyageAIEmbeddingFunction.client is None:
voyageai = attempt_import_or_raise("voyageai")
if os.environ.get("VOYAGE_API_KEY") is None:
@@ -125,3 +171,4 @@ class VoyageAIEmbeddingFunction(TextEmbeddingFunction):
VoyageAIEmbeddingFunction.client = voyageai.Client(
os.environ["VOYAGE_API_KEY"]
)
return VoyageAIEmbeddingFunction.client
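With this refactor, text and image queries share one entry point. A sketch, assuming `VOYAGE_API_KEY` is set, pillow is installed, and `cat.png` is a hypothetical local file:

```python
from PIL import Image
from lancedb.embeddings import get_registry

func = get_registry().get("voyageai").create(name="voyage-multimodal-3")

# Text queries and image queries now go through the same method:
text_vec = func.compute_query_embeddings("a photo of a cat")[0]
img_vec = func.compute_query_embeddings(Image.open("cat.png"))[0]
```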

View File

@@ -110,7 +110,16 @@ class FTS:
remove_stop_words: bool = False,
ascii_folding: bool = False,
):
self._inner = LanceDbIndex.fts(with_position=with_position)
self._inner = LanceDbIndex.fts(
with_position=with_position,
base_tokenizer=base_tokenizer,
language=language,
max_token_length=max_token_length,
lower_case=lower_case,
stem=stem,
remove_stop_words=remove_stop_words,
ascii_folding=ascii_folding,
)
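A sketch showing the full set of tokenizer options now forwarded to the native index; the values are illustrative, not recommendations:

```python
from lancedb.index import FTS

config = FTS(
    with_position=False,
    base_tokenizer="simple",
    language="English",
    max_token_length=40,
    lower_case=True,
    stem=True,
    remove_stop_words=True,
    ascii_folding=True,
)
# e.g. await table.create_index("text", config=config)
```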
class HnswPq:

View File

@@ -0,0 +1,248 @@
import logging
from typing import Any, List, Optional, Tuple, Union, Literal
import pyarrow as pa
from ..table import Table
Filter = Union[str, pa.compute.Expression]
Keys = Union[str, List[str]]
JoinType = Literal[
"left semi",
"right semi",
"left anti",
"right anti",
"inner",
"left outer",
"right outer",
"full outer",
]
class PyarrowScannerAdapter(pa.dataset.Scanner):
def __init__(
self,
table: Table,
columns: Optional[List[str]] = None,
filter: Optional[Filter] = None,
batch_size: Optional[int] = None,
batch_readahead: Optional[int] = None,
fragment_readahead: Optional[int] = None,
fragment_scan_options: Optional[Any] = None,
use_threads: bool = True,
memory_pool: Optional[Any] = None,
):
self.table = table
self.columns = columns
self.filter = filter
self.batch_size = batch_size
if batch_readahead is not None:
logging.debug("ignoring batch_readahead which has no lance equivalent")
if fragment_readahead is not None:
logging.debug("ignoring fragment_readahead which has no lance equivalent")
if fragment_scan_options is not None:
raise NotImplementedError("fragment_scan_options not supported")
if use_threads is False:
raise NotImplementedError("use_threads=False not supported")
if memory_pool is not None:
raise NotImplementedError("memory_pool not supported")
def count_rows(self):
return self.table.count_rows(self.filter)
def from_batches(self, **kwargs):
raise NotImplementedError
def from_dataset(self, **kwargs):
raise NotImplementedError
def from_fragment(self, **kwargs):
raise NotImplementedError
def head(self, num_rows: int):
return self.to_reader(limit=num_rows).read_all()
@property
def projected_schema(self):
return self.head(1).schema
def scan_batches(self):
return self.to_reader()
def take(self, indices: List[int]):
raise NotImplementedError
def to_batches(self):
return self.to_reader()
def to_table(self):
return self.to_reader().read_all()
def to_reader(self, *, limit: Optional[int] = None):
query = self.table.search()
# Disable the builtin limit
if limit is None:
num_rows = self.count_rows()
query.limit(num_rows)
elif limit <= 0:
raise ValueError("limit must be positive")
else:
query.limit(limit)
if self.columns is not None:
query = query.select(self.columns)
if self.filter is not None:
query = query.where(self.filter, prefilter=True)
return query.to_batches(batch_size=self.batch_size)
class PyarrowDatasetAdapter(pa.dataset.Dataset):
def __init__(self, table: Table):
self.table = table
def count_rows(self, filter: Optional[Filter] = None):
return self.table.count_rows(filter)
def get_fragments(self, filter: Optional[Filter] = None):
raise NotImplementedError
def head(
self,
num_rows: int,
columns: Optional[List[str]] = None,
filter: Optional[Filter] = None,
batch_size: Optional[int] = None,
batch_readahead: Optional[int] = None,
fragment_readahead: Optional[int] = None,
fragment_scan_options: Optional[Any] = None,
use_threads: bool = True,
memory_pool: Optional[Any] = None,
):
return self.scanner(
columns,
filter,
batch_size,
batch_readahead,
fragment_readahead,
fragment_scan_options,
use_threads,
memory_pool,
).head(num_rows)
def join(
self,
right_dataset: Any,
keys: Keys,
right_keys: Optional[Keys] = None,
join_type: Optional[JoinType] = None,
left_suffix: Optional[str] = None,
right_suffix: Optional[str] = None,
coalesce_keys: bool = True,
use_threads: bool = True,
):
raise NotImplementedError
def join_asof(
self,
right_dataset: Any,
on: str,
by: Keys,
tolerance: int,
right_on: Optional[str] = None,
right_by: Optional[Keys] = None,
):
raise NotImplementedError
@property
def partition_expression(self):
raise NotImplementedError
def replace_schema(self, schema: pa.Schema):
raise NotImplementedError
def scanner(
self,
columns: Optional[List[str]] = None,
filter: Optional[Filter] = None,
batch_size: Optional[int] = None,
batch_readahead: Optional[int] = None,
fragment_readahead: Optional[int] = None,
fragment_scan_options: Optional[Any] = None,
use_threads: bool = True,
memory_pool: Optional[Any] = None,
):
return PyarrowScannerAdapter(
self.table,
columns,
filter,
batch_size,
batch_readahead,
fragment_readahead,
fragment_scan_options,
use_threads,
memory_pool,
)
@property
def schema(self):
return self.table.schema
def sort_by(self, sorting: Union[str, List[Tuple[str, bool]]]):
raise NotImplementedError
def take(
self,
indices: List[int],
columns: Optional[List[str]] = None,
filter: Optional[Filter] = None,
batch_size: Optional[int] = None,
batch_readahead: Optional[int] = None,
fragment_readahead: Optional[int] = None,
fragment_scan_options: Optional[Any] = None,
use_threads: bool = True,
memory_pool: Optional[Any] = None,
):
raise NotImplementedError
def to_batches(
self,
columns: Optional[List[str]] = None,
filter: Optional[Filter] = None,
batch_size: Optional[int] = None,
batch_readahead: Optional[int] = None,
fragment_readahead: Optional[int] = None,
fragment_scan_options: Optional[Any] = None,
use_threads: bool = True,
memory_pool: Optional[Any] = None,
):
return self.scanner(
columns,
filter,
batch_size,
batch_readahead,
fragment_readahead,
fragment_scan_options,
use_threads,
memory_pool,
).to_batches()
def to_table(
self,
columns: Optional[List[str]] = None,
filter: Optional[Filter] = None,
batch_size: Optional[int] = None,
batch_readahead: Optional[int] = None,
fragment_readahead: Optional[int] = None,
fragment_scan_options: Optional[Any] = None,
use_threads: bool = True,
memory_pool: Optional[Any] = None,
):
return self.scanner(
columns,
filter,
batch_size,
batch_readahead,
fragment_readahead,
fragment_scan_options,
use_threads,
memory_pool,
).to_table()
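A brief sketch of the adapter in use: it exposes a LanceDB table through the `pyarrow.dataset` interface, pushing column selection and string filters down into LanceDB queries (the `./db` path is hypothetical):

```python
import pyarrow as pa
import lancedb
from lancedb.integrations.pyarrow import PyarrowDatasetAdapter

conn = lancedb.connect("./db")
tbl = conn.create_table("points", pa.table({"x": [1, 2, 3], "y": [4, 5, 6]}))
ds = PyarrowDatasetAdapter(tbl)
print(ds.scanner(columns=["x"], filter="x > 1").to_table())
```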

View File

@@ -26,6 +26,7 @@ from typing import (
Union,
)
import asyncio
import deprecation
import numpy as np
import pyarrow as pa
@@ -44,6 +45,8 @@ if TYPE_CHECKING:
import polars as pl
from ._lancedb import Query as LanceQuery
from ._lancedb import FTSQuery as LanceFTSQuery
from ._lancedb import HybridQuery as LanceHybridQuery
from ._lancedb import VectorQuery as LanceVectorQuery
from .common import VEC
from .pydantic import LanceModel
@@ -325,6 +328,14 @@ class LanceQueryBuilder(ABC):
"""
raise NotImplementedError
@abstractmethod
def to_batches(self, /, batch_size: Optional[int] = None) -> pa.RecordBatchReader:
"""
Execute the query and return the results as a pyarrow
[RecordBatchReader](https://arrow.apache.org/docs/python/generated/pyarrow.RecordBatchReader.html)
"""
raise NotImplementedError
def to_list(self) -> List[dict]:
"""
Execute the query and return the results as a list of dictionaries.
@@ -869,6 +880,9 @@ class LanceFtsQueryBuilder(LanceQueryBuilder):
check_reranker_result(results)
return results
def to_batches(self, /, batch_size: Optional[int] = None):
raise NotImplementedError("to_batches is not yet supported on an FTS query")
def tantivy_to_arrow(self) -> pa.Table:
try:
import tantivy
@@ -971,6 +985,9 @@ class LanceFtsQueryBuilder(LanceQueryBuilder):
class LanceEmptyQueryBuilder(LanceQueryBuilder):
def to_arrow(self) -> pa.Table:
return self.to_batches().read_all()
def to_batches(self, /, batch_size: Optional[int] = None) -> pa.RecordBatchReader:
query = Query(
columns=self._columns,
filter=self._where,
@@ -980,7 +997,7 @@ class LanceEmptyQueryBuilder(LanceQueryBuilder):
# not actually respected in remote query
offset=self._offset or 0,
)
return self._table._execute_query(query).read_all()
return self._table._execute_query(query)
def rerank(self, reranker: Reranker) -> LanceEmptyQueryBuilder:
"""Rerank the results using the specified reranker.
@@ -1110,32 +1127,55 @@ class LanceHybridQueryBuilder(LanceQueryBuilder):
fts_results = fts_future.result()
vector_results = vector_future.result()
# convert to ranks first if needed
if self._norm == "rank":
vector_results = self._rank(vector_results, "_distance")
fts_results = self._rank(fts_results, "_score")
return self._combine_hybrid_results(
fts_results=fts_results,
vector_results=vector_results,
norm=self._norm,
fts_query=self._fts_query._query,
reranker=self._reranker,
limit=self._limit,
with_row_ids=self._with_row_id,
)
@staticmethod
def _combine_hybrid_results(
fts_results: pa.Table,
vector_results: pa.Table,
norm: str,
fts_query: str,
reranker,
limit: int,
with_row_ids: bool,
) -> pa.Table:
if norm == "rank":
vector_results = LanceHybridQueryBuilder._rank(vector_results, "_distance")
fts_results = LanceHybridQueryBuilder._rank(fts_results, "_score")
# normalize the scores to be between 0 and 1, 0 being most relevant
vector_results = self._normalize_scores(vector_results, "_distance")
vector_results = LanceHybridQueryBuilder._normalize_scores(
vector_results, "_distance"
)
# In fts higher scores represent relevance. Not inverting them here as
# rerankers might need to preserve this score to support `return_score="all"`
fts_results = self._normalize_scores(fts_results, "_score")
fts_results = LanceHybridQueryBuilder._normalize_scores(fts_results, "_score")
results = self._reranker.rerank_hybrid(
self._fts_query._query, vector_results, fts_results
)
results = reranker.rerank_hybrid(fts_query, vector_results, fts_results)
check_reranker_result(results)
# apply limit after reranking
results = results.slice(length=self._limit)
results = results.slice(length=limit)
if not self._with_row_id:
if not with_row_ids:
results = results.drop(["_rowid"])
return results
def _rank(self, results: pa.Table, column: str, ascending: bool = True):
def to_batches(self):
raise NotImplementedError("to_batches not yet supported on a hybrid query")
@staticmethod
def _rank(results: pa.Table, column: str, ascending: bool = True):
if len(results) == 0:
return results
# Get the _score column from results
@@ -1152,7 +1192,8 @@ class LanceHybridQueryBuilder(LanceQueryBuilder):
)
return results
def _normalize_scores(self, results: pa.Table, column: str, invert=False):
@staticmethod
def _normalize_scores(results: pa.Table, column: str, invert=False):
if len(results) == 0:
return results
# Get the _score column from results
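Making `_rank` and `_normalize_scores` static lets the async path reuse them. Per the comments, normalization maps scores into [0, 1]; a sketch of min-max normalization follows, where the exact formula used by `_normalize_scores` is an assumption:

```python
import pyarrow as pa
import pyarrow.compute as pc

def normalize(results: pa.Table, column: str) -> pa.Table:
    # Hypothetical min-max normalization into [0, 1].
    if len(results) == 0:
        return results
    scores = results[column]
    lo, hi = pc.min(scores).as_py(), pc.max(scores).as_py()
    rng = (hi - lo) or 1.0  # avoid dividing by zero when all scores tie
    normed = pc.divide(pc.subtract(scores, lo), rng)
    return results.set_column(
        results.schema.get_field_index(column), column, normed
    )
```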
@@ -1618,7 +1659,7 @@ class AsyncQuery(AsyncQueryBase):
def nearest_to_text(
self, query: str, columns: Union[str, List[str]] = []
) -> AsyncQuery:
) -> AsyncFTSQuery:
"""
Find the documents that are most relevant to the given text query.
@@ -1641,8 +1682,90 @@ class AsyncQuery(AsyncQueryBase):
"""
if isinstance(columns, str):
columns = [columns]
self._inner.nearest_to_text({"query": query, "columns": columns})
return self
return AsyncFTSQuery(
self._inner.nearest_to_text({"query": query, "columns": columns})
)
class AsyncFTSQuery(AsyncQueryBase):
"""A query for full text search for LanceDB."""
def __init__(self, inner: LanceFTSQuery):
super().__init__(inner)
self._inner = inner
def get_query(self):
return self._inner.get_query()
def nearest_to(
self,
query_vector: Union[VEC, Tuple, List[VEC]],
) -> AsyncHybridQuery:
"""
In addition to doing text search on the LanceDB Table, also
find the nearest vectors to the given query vector.
This converts the query from a FTS Query to a Hybrid query. Results
from the vector search will be combined with results from the FTS query.
This method will attempt to convert the input to the query vector
expected by the embedding model. If the input cannot be converted
then an error will be thrown.
By default, there is no embedding model, and the input should be
something that can be converted to a pyarrow array of floats. This
includes lists, numpy arrays, and tuples.
If there is only one vector column (a column whose data type is a
fixed size list of floats) then the column does not need to be specified.
If there is more than one vector column you must use
[AsyncVectorQuery.column][lancedb.query.AsyncVectorQuery.column] to specify
which column you would like to compare with.
If no index has been created on the vector column then a vector query
will perform a distance comparison between the query vector and every
vector in the database and then sort the results. This is sometimes
called a "flat search".
For small databases, with tens of thousands of vectors or less, this can
be reasonably fast. In larger databases you should create a vector index
on the column. If there is a vector index then an "approximate" nearest
neighbor search (frequently called an ANN search) will be performed. This
search is much faster, but the results will be approximate.
The query can be further parameterized using the returned builder. There
are various ANN search parameters that will let you fine tune your recall
accuracy vs search latency.
Hybrid searches always have a [limit][]. If `limit` has not been called then
a default `limit` of 10 will be used.
Typically, a single vector is passed in as the query. However, you can also
pass in multiple vectors. This can be useful if you want to find the nearest
vectors to multiple query vectors. This is not expected to be faster than
making multiple queries concurrently; it is just a convenience method.
If multiple vectors are passed in then an additional column `query_index`
will be added to the results. This column will contain the index of the
query vector that the result is nearest to.
"""
if query_vector is None:
raise ValueError("query_vector can not be None")
if (
isinstance(query_vector, list)
and len(query_vector) > 0
and not isinstance(query_vector[0], (float, int))
):
# multiple query vectors have been passed
query_vectors = [AsyncQuery._query_vec_to_array(v) for v in query_vector]
new_self = self._inner.nearest_to(query_vectors[0])
for v in query_vectors[1:]:
new_self.add_query_vector(v)
return AsyncHybridQuery(new_self)
else:
return AsyncHybridQuery(
self._inner.nearest_to(AsyncQuery._query_vec_to_array(query_vector))
)
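A sketch of the multi-vector case the docstring describes, assuming `table` is an open `AsyncTable`; whether `query_index` propagates through the hybrid combine step is as documented above, not separately verified here:

```python
async def hybrid_multi(table):  # table: an open AsyncTable (assumed)
    # Per the docstring, multiple query vectors add a query_index column
    # identifying which vector each result row is nearest to.
    q = table.query().nearest_to_text("puppy").nearest_to(
        [[0.1, 0.2], [0.9, 0.8]]
    )
    results = await q.to_arrow()
    return results["query_index"]
```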
class AsyncVectorQuery(AsyncQueryBase):
@@ -1779,3 +1902,160 @@ class AsyncVectorQuery(AsyncQueryBase):
"""
self._inner.bypass_vector_index()
return self
def nearest_to_text(
self, query: str, columns: Union[str, List[str]] = []
) -> AsyncHybridQuery:
"""
Find the documents that are most relevant to the given text query,
in addition to vector search.
This converts the vector query into a hybrid query.
This search will perform a full text search on the table and return
the most relevant documents, combined with the vector query results.
The text relevance is determined by BM25.
The columns to search must have a native FTS index (the Tantivy-based
index does not work with this method).
By default, all indexed columns are searched; for now, only one column
can be searched at a time.
Parameters
----------
query: str
The text query to search for.
columns: str or list of str, default None
The columns to search in. If None, all indexed columns are searched.
For now only one column can be searched at a time.
"""
if isinstance(columns, str):
columns = [columns]
return AsyncHybridQuery(
self._inner.nearest_to_text({"query": query, "columns": columns})
)
class AsyncHybridQuery(AsyncQueryBase):
"""
A query builder that performs hybrid vector and full text search.
Results are combined and reranked based on the specified reranker.
By default, the results are reranked using the RRFReranker, which
uses reciprocal rank fusion score for reranking.
To make the vector and fts results comparable, the scores are normalized.
Instead of normalizing scores, the `normalize` parameter can be set to "rank"
in the `rerank` method to convert the scores to ranks and then normalize them.
"""
def __init__(self, inner: LanceHybridQuery):
super().__init__(inner)
self._inner = inner
self._norm = "score"
self._reranker = RRFReranker()
def rerank(
self, reranker: Reranker = RRFReranker(), normalize: str = "score"
) -> AsyncHybridQuery:
"""
Rerank the hybrid search results using the specified reranker. The reranker
must be an instance of Reranker class.
Parameters
----------
reranker: Reranker, default RRFReranker()
The reranker to use. Must be an instance of Reranker class.
normalize: str, default "score"
The method to normalize the scores. Can be "rank" or "score". If "rank",
the scores are converted to ranks and then normalized. If "score", the
scores are normalized directly.
Returns
-------
AsyncHybridQuery
The AsyncHybridQuery object.
"""
if normalize not in ["rank", "score"]:
raise ValueError("normalize must be 'rank' or 'score'.")
if reranker and not isinstance(reranker, Reranker):
raise ValueError("reranker must be an instance of Reranker class.")
self._norm = normalize
self._reranker = reranker
return self
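A short usage sketch of the reranking knobs described above, assuming `table` is an open `AsyncTable`:

```python
from lancedb.rerankers import RRFReranker

async def rerank_example(table):  # table: an open AsyncTable (assumed)
    q = table.query().nearest_to([0.1, 0.2]).nearest_to_text("puppy")
    # Convert scores to ranks before normalizing, then fuse with RRF.
    q = q.rerank(RRFReranker(), normalize="rank")
    return await q.to_arrow()
```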
async def to_batches(self):
raise NotImplementedError("to_batches not yet supported on a hybrid query")
async def to_arrow(self) -> pa.Table:
fts_query = AsyncFTSQuery(self._inner.to_fts_query())
vec_query = AsyncVectorQuery(self._inner.to_vector_query())
# save the row ID choice that was made on the query builder and force it
# to actually fetch the row ids because we need this for reranking
with_row_ids = self._inner.get_with_row_id()
fts_query.with_row_id()
vec_query.with_row_id()
fts_results, vector_results = await asyncio.gather(
fts_query.to_arrow(),
vec_query.to_arrow(),
)
return LanceHybridQueryBuilder._combine_hybrid_results(
fts_results=fts_results,
vector_results=vector_results,
norm=self._norm,
fts_query=fts_query.get_query(),
reranker=self._reranker,
limit=self._inner.get_limit(),
with_row_ids=with_row_ids,
)
async def explain_plan(self, verbose: Optional[bool] = False):
"""Return the execution plan for this query.
The output includes both the vector and FTS search plans.
Examples
--------
>>> import asyncio
>>> from lancedb import connect_async
>>> from lancedb.index import FTS
>>> async def doctest_example():
... conn = await connect_async("./.lancedb")
... table = await conn.create_table("my_table", [{"vector": [99, 99], "text": "hello world"}])
... await table.create_index("text", config=FTS(with_position=False))
... query = [100, 100]
... plan = await table.query().nearest_to([1, 2]).nearest_to_text("hello").explain_plan(True)
... print(plan)
>>> asyncio.run(doctest_example()) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
Vector Search Plan:
ProjectionExec: expr=[vector@0 as vector, text@3 as text, _distance@2 as _distance]
Take: columns="vector, _rowid, _distance, (text)"
CoalesceBatchesExec: target_batch_size=1024
GlobalLimitExec: skip=0, fetch=10
FilterExec: _distance@2 IS NOT NULL
SortExec: TopK(fetch=10), expr=[_distance@2 ASC NULLS LAST], preserve_partitioning=[false]
KNNVectorDistance: metric=l2
LanceScan: uri=..., projection=[vector], row_id=true, row_addr=false, ordered=false
FTS Search Plan:
LanceScan: uri=..., projection=[vector, text], row_id=false, row_addr=false, ordered=true
Parameters
----------
verbose : bool, default False
Use a verbose output format.
Returns
-------
plan
""" # noqa: E501
results = ["Vector Search Plan:"]
results.append(await self._inner.to_vector_query().explain_plan(verbose))
results.append("FTS Search Plan:")
results.append(await self._inner.to_fts_query().explain_plan(verbose))
return "\n".join(results)

View File

@@ -20,19 +20,16 @@ import warnings
from lancedb import connect_async
from lancedb.remote import ClientConfig
from lancedb.remote.background_loop import BackgroundEventLoop
import pyarrow as pa
from overrides import override
from ..common import DATA
from ..db import DBConnection
from ..db import DBConnection, LOOP
from ..embeddings import EmbeddingFunctionConfig
from ..pydantic import LanceModel
from ..table import Table
from ..util import validate_table_name
LOOP = BackgroundEventLoop()
class RemoteDBConnection(DBConnection):
"""A connection to a remote LanceDB database."""
@@ -47,9 +44,9 @@ class RemoteDBConnection(DBConnection):
client_config: Union[ClientConfig, Dict[str, Any], None] = None,
connection_timeout: Optional[float] = None,
read_timeout: Optional[float] = None,
storage_options: Optional[Dict[str, str]] = None,
):
"""Connect to a remote LanceDB database."""
if isinstance(client_config, dict):
client_config = ClientConfig(**client_config)
elif client_config is None:
@@ -97,6 +94,7 @@ class RemoteDBConnection(DBConnection):
region=region,
host_override=host_override,
client_config=client_config,
storage_options=storage_options,
)
)

View File

@@ -78,7 +78,7 @@ class RemoteTable(Table):
def list_versions(self):
"""List all versions of the table"""
return self._loop.run_until_complete(self._table.list_versions())
return LOOP.run(self._table.list_versions())
def to_arrow(self) -> pa.Table:
"""to_arrow() is not yet supported on LanceDB cloud."""
@@ -89,10 +89,10 @@ class RemoteTable(Table):
raise NotImplementedError("to_pandas() is not yet supported on LanceDB cloud.")
def checkout(self, version):
return self._loop.run_until_complete(self._table.checkout(version))
return LOOP.run(self._table.checkout(version))
def checkout_latest(self):
return self._loop.run_until_complete(self._table.checkout_latest())
return LOOP.run(self._table.checkout_latest())
def list_indices(self):
"""List all the indices on the table"""
@@ -138,8 +138,25 @@ class RemoteTable(Table):
*,
replace: bool = False,
with_position: bool = True,
# tokenizer configs:
base_tokenizer: str = "simple",
language: str = "English",
max_token_length: Optional[int] = 40,
lower_case: bool = True,
stem: bool = False,
remove_stop_words: bool = False,
ascii_folding: bool = False,
):
config = FTS(with_position=with_position)
config = FTS(
with_position=with_position,
base_tokenizer=base_tokenizer,
language=language,
max_token_length=max_token_length,
lower_case=lower_case,
stem=stem,
remove_stop_words=remove_stop_words,
ascii_folding=ascii_folding,
)
LOOP.run(self._table.create_index(column, config=config, replace=replace))
def create_index(
@@ -490,19 +507,13 @@ class RemoteTable(Table):
return LOOP.run(self._table.count_rows(filter))
def add_columns(self, transforms: Dict[str, str]):
raise NotImplementedError(
"add_columns() is not yet supported on the LanceDB cloud"
)
return LOOP.run(self._table.add_columns(transforms))
def alter_columns(self, alterations: Iterable[Dict[str, str]]):
raise NotImplementedError(
"alter_columns() is not yet supported on the LanceDB cloud"
)
def alter_columns(self, *alterations: Iterable[Dict[str, str]]):
return LOOP.run(self._table.alter_columns(*alterations))
def drop_columns(self, columns: Iterable[str]):
raise NotImplementedError(
"drop_columns() is not yet supported on the LanceDB cloud"
)
return LOOP.run(self._table.drop_columns(columns))
def add_index(tbl: pa.Table, i: int) -> pa.Table:

View File

@@ -967,8 +967,6 @@ class Table(ABC):
"""
Add new columns with defined values.
This is not yet available in LanceDB Cloud.
Parameters
----------
transforms: Dict[str, str]
@@ -978,20 +976,21 @@ class Table(ABC):
"""
@abstractmethod
def alter_columns(self, alterations: Iterable[Dict[str, str]]):
def alter_columns(self, *alterations: Iterable[Dict[str, str]]):
"""
Alter column names and nullability.
This is not yet available in LanceDB Cloud.
alterations : Iterable[Dict[str, Any]]
A sequence of dictionaries, each with the following keys:
- "path": str
The column path to alter. For a top-level column, this is the name.
For a nested column, this is the dot-separated path, e.g. "a.b.c".
- "name": str, optional
- "rename": str, optional
The new name of the column. If not specified, the column name is
not changed.
- "data_type": pyarrow.DataType, optional
The new data type of the column. Existing values will be casted
to this type. If not specified, the column data type is not changed.
- "nullable": bool, optional
Whether the column should be nullable. If not specified, the column
nullability is not changed. Only non-nullable columns can be changed
@@ -1004,8 +1003,6 @@ class Table(ABC):
"""
Drop columns from the table.
This is not yet available in LanceDB Cloud.
Parameters
----------
columns : Iterable[str]
@@ -1080,13 +1077,16 @@ class _LanceLatestDatasetRef(_LanceDatasetRef):
index_cache_size: Optional[int] = None
read_consistency_interval: Optional[timedelta] = None
last_consistency_check: Optional[float] = None
storage_options: Optional[Dict[str, str]] = None
_dataset: Optional[LanceDataset] = None
@property
def dataset(self) -> LanceDataset:
if not self._dataset:
self._dataset = lance.dataset(
self.uri, index_cache_size=self.index_cache_size
self.uri,
index_cache_size=self.index_cache_size,
storage_options=self.storage_options,
)
self.last_consistency_check = time.monotonic()
elif self.read_consistency_interval is not None:
@@ -1117,13 +1117,17 @@ class _LanceTimeTravelRef(_LanceDatasetRef):
uri: str
version: int
index_cache_size: Optional[int] = None
storage_options: Optional[Dict[str, str]] = None
_dataset: Optional[LanceDataset] = None
@property
def dataset(self) -> LanceDataset:
if not self._dataset:
self._dataset = lance.dataset(
self.uri, version=self.version, index_cache_size=self.index_cache_size
self.uri,
version=self.version,
index_cache_size=self.index_cache_size,
storage_options=self.storage_options,
)
return self._dataset
@@ -1172,24 +1176,27 @@ class LanceTable(Table):
uri=self._dataset_uri,
version=version,
index_cache_size=index_cache_size,
storage_options=connection.storage_options,
)
else:
self._ref = _LanceLatestDatasetRef(
uri=self._dataset_uri,
read_consistency_interval=connection.read_consistency_interval,
index_cache_size=index_cache_size,
storage_options=connection.storage_options,
)
@classmethod
def open(cls, db, name, **kwargs):
tbl = cls(db, name, **kwargs)
fs, path = fs_from_uri(tbl._dataset_path)
file_info = fs.get_file_info(path)
if file_info.type != pa.fs.FileType.Directory:
raise FileNotFoundError(
f"Table {name} does not exist."
f"Please first call db.create_table({name}, data)"
)
# check the dataset exists
try:
tbl.version
except ValueError as e:
if "Not found:" in str(e):
raise FileNotFoundError(f"Table {name} does not exist")
raise e
return tbl
@@ -1617,11 +1624,7 @@ class LanceTable(Table):
on_bad_vectors=on_bad_vectors,
fill_value=fill_value,
)
# Access the dataset_mut property to ensure that the dataset is mutable.
self._ref.dataset_mut
self._ref.dataset = lance.write_dataset(
data, self._dataset_uri, schema=self.schema, mode=mode
)
self._ref.dataset_mut.insert(data, mode=mode, schema=self.schema)
def merge(
self,
@@ -1905,7 +1908,13 @@ class LanceTable(Table):
empty = pa.Table.from_batches([], schema=schema)
try:
lance.write_dataset(empty, tbl._dataset_uri, schema=schema, mode=mode)
lance.write_dataset(
empty,
tbl._dataset_uri,
schema=schema,
mode=mode,
storage_options=db.storage_options,
)
except OSError as err:
if "Dataset already exists" in str(err) and exist_ok:
if tbl.schema != schema:
@@ -2923,6 +2932,53 @@ class AsyncTable:
return await self._inner.update(updates_sql, where)
async def add_columns(self, transforms: Dict[str, str]):
"""
Add new columns with defined values.
Parameters
----------
transforms: Dict[str, str]
A map of column name to a SQL expression to use to calculate the
value of the new column. These expressions will be evaluated for
each row in the table, and can reference existing columns.
"""
await self._inner.add_columns(list(transforms.items()))
async def alter_columns(self, *alterations: Iterable[Dict[str, str]]):
"""
Alter column names and nullability.
alterations : Iterable[Dict[str, Any]]
A sequence of dictionaries, each with the following keys:
- "path": str
The column path to alter. For a top-level column, this is the name.
For a nested column, this is the dot-separated path, e.g. "a.b.c".
- "rename": str, optional
The new name of the column. If not specified, the column name is
not changed.
- "data_type": pyarrow.DataType, optional
The new data type of the column. Existing values will be casted
to this type. If not specified, the column data type is not changed.
- "nullable": bool, optional
Whether the column should be nullable. If not specified, the column
nullability is not changed. Only non-nullable columns can be changed
to nullable. Currently, you cannot change a nullable column to
non-nullable.
"""
await self._inner.alter_columns(alterations)
async def drop_columns(self, columns: Iterable[str]):
"""
Drop columns from the table.
Parameters
----------
columns : Iterable[str]
The names of the columns to drop.
"""
await self._inner.drop_columns(columns)
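A sketch tying the three schema-evolution methods together, against a table with an int64 `id` column; it mirrors the async tests further below in this diff:

```python
import pyarrow as pa

async def evolve_schema(table):  # table: an open AsyncTable (assumed)
    await table.add_columns({"double_id": "id * 2"})
    await table.alter_columns({"path": "double_id", "rename": "id2"})
    await table.alter_columns(
        {"path": "id2", "data_type": pa.int16(), "nullable": True}
    )
    await table.drop_columns(["id2"])
```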
async def version(self) -> int:
"""
Retrieve the version of the table

View File

@@ -0,0 +1,21 @@
import duckdb
import pyarrow as pa
import lancedb
from lancedb.integrations.pyarrow import PyarrowDatasetAdapter
def test_basic_query(tmp_path):
data = pa.table({"x": [1, 2, 3, 4], "y": [5, 6, 7, 8]})
conn = lancedb.connect(tmp_path)
tbl = conn.create_table("test", data)
adapter = PyarrowDatasetAdapter(tbl) # noqa: F841
duck_conn = duckdb.connect()
results = duck_conn.sql("SELECT SUM(x) FROM adapter").fetchall()
assert results[0][0] == 10
results = duck_conn.sql("SELECT SUM(y) FROM adapter").fetchall()
assert results[0][0] == 26

View File

@@ -0,0 +1,111 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
import lancedb
import pyarrow as pa
import pytest
import pytest_asyncio
from lancedb.index import FTS
from lancedb.table import AsyncTable
@pytest_asyncio.fixture
async def table(tmpdir_factory) -> AsyncTable:
tmp_path = str(tmpdir_factory.mktemp("data"))
db = await lancedb.connect_async(tmp_path)
data = pa.table(
{
"text": pa.array(["a", "b", "cat", "dog"]),
"vector": pa.array(
[[0.1, 0.1], [2, 2], [-0.1, -0.1], [0.5, -0.5]],
type=pa.list_(pa.float32(), list_size=2),
),
}
)
table = await db.create_table("test", data)
await table.create_index("text", config=FTS(with_position=False))
return table
@pytest.mark.asyncio
async def test_async_hybrid_query(table: AsyncTable):
result = await (
table.query().nearest_to([0.0, 0.4]).nearest_to_text("dog").limit(2).to_arrow()
)
assert len(result) == 2
# ensure we get results that would match well for text and vector
assert result["text"].to_pylist() == ["a", "dog"]
# ensure there is no rowid by default
assert "_rowid" not in result
@pytest.mark.asyncio
async def test_async_hybrid_query_with_row_ids(table: AsyncTable):
result = await (
table.query()
.nearest_to([0.0, 0.4])
.nearest_to_text("dog")
.limit(2)
.with_row_id()
.to_arrow()
)
assert len(result) == 2
# ensure we get results that would match well for text and vector
assert result["text"].to_pylist() == ["a", "dog"]
assert result["_rowid"].to_pylist() == [0, 3]
@pytest.mark.asyncio
async def test_async_hybrid_query_filters(table: AsyncTable):
# test that query params are passed down from the regular builder to
# child vector/fts builders
result = await (
table.query()
.where("text not in ('a', 'dog')")
.nearest_to([0.3, 0.3])
.nearest_to_text("*a*")
.limit(2)
.to_arrow()
)
assert len(result) == 2
# ensure we get results that would match well for text and vector
assert result["text"].to_pylist() == ["cat", "b"]
@pytest.mark.asyncio
async def test_async_hybrid_query_default_limit(table: AsyncTable):
# add 100 new rows
new_rows = []
for i in range(100):
if i < 2:
new_rows.append({"text": "close_vec", "vector": [0.1, 0.1]})
else:
new_rows.append({"text": "far_vec", "vector": [5 * i, 5 * i]})
await table.add(new_rows)
result = await (
table.query().nearest_to_text("dog").nearest_to([0.1, 0.1]).to_arrow()
)
# assert we got the default limit of 10
assert len(result) == 10
# assert we got the closest vectors and the text searched for
texts = result["text"].to_pylist()
assert texts.count("close_vec") == 2
assert texts.count("dog") == 1
assert texts.count("a") == 1
@pytest.mark.asyncio
async def test_explain_plan(table: AsyncTable):
plan = await (
table.query().nearest_to_text("dog").nearest_to([0.1, 0.1]).explain_plan(True)
)
assert "Vector Search Plan" in plan
assert "KNNVectorDistance" in plan
assert "FTS Search Plan" in plan
assert "LanceScan" in plan

View File

@@ -0,0 +1,47 @@
import pyarrow as pa
import lancedb
from lancedb.integrations.pyarrow import PyarrowDatasetAdapter
def test_dataset_adapter(tmp_path):
data = pa.table({"x": [1, 2, 3, 4], "y": [5, 6, 7, 8]})
conn = lancedb.connect(tmp_path)
tbl = conn.create_table("test", data)
adapter = PyarrowDatasetAdapter(tbl)
assert adapter.count_rows() == 4
assert adapter.count_rows("x > 2") == 2
assert adapter.schema == data.schema
assert adapter.head(2) == data.slice(0, 2)
assert adapter.to_table() == data
assert adapter.to_batches().read_all() == data
assert adapter.scanner().to_table() == data
assert adapter.scanner().to_batches().read_all() == data
assert adapter.scanner().projected_schema == data.schema
assert adapter.scanner(columns=["x"]).projected_schema == pa.schema(
[data.schema.field("x")]
)
assert adapter.scanner(columns=["x"]).to_table() == pa.table({"x": [1, 2, 3, 4]})
# Make sure we bypass the limit
data = pa.table({"x": range(100)})
tbl = conn.create_table("test2", data)
adapter = PyarrowDatasetAdapter(tbl)
assert adapter.count_rows() == 100
assert adapter.to_table().num_rows == 100
assert adapter.head(10).num_rows == 10
# Empty table
tbl = conn.create_table("test3", None, schema=pa.schema({"x": pa.int64()}))
adapter = PyarrowDatasetAdapter(tbl)
assert adapter.count_rows() == 0
assert adapter.to_table().num_rows == 0
assert adapter.head(10).num_rows == 0
assert adapter.scanner().projected_schema == pa.schema({"x": pa.int64()})

View File

@@ -193,7 +193,7 @@ def test_table_add_in_threadpool():
if request.path == "/v1/table/test/insert/":
request.send_response(200)
request.end_headers()
elif request.path == "/v1/table/test/create/":
elif request.path == "/v1/table/test/create/?mode=create":
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.end_headers()
@@ -229,6 +229,44 @@ def test_table_add_in_threadpool():
future.result()
def test_table_create_indices():
def handler(request):
if request.path == "/v1/table/test/create_index/":
request.send_response(200)
request.end_headers()
elif request.path == "/v1/table/test/create/?mode=create":
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.end_headers()
request.wfile.write(b"{}")
elif request.path == "/v1/table/test/describe/":
request.send_response(200)
request.send_header("Content-Type", "application/json")
request.end_headers()
payload = json.dumps(
dict(
version=1,
schema=dict(
fields=[
dict(name="id", type={"type": "int64"}, nullable=False),
]
),
)
)
request.wfile.write(payload.encode())
else:
request.send_response(404)
request.end_headers()
with mock_lancedb_connection(handler) as db:
# Parameters are well-tested through local and async tests.
# This is a smoke-test.
table = db.create_table("test", [{"id": 1}])
table.create_scalar_index("id")
table.create_fts_index("text")
table.create_scalar_index("vector")
@contextlib.contextmanager
def query_test_table(query_handler):
def handler(request):

View File

@@ -30,6 +30,7 @@ class MockDB:
def __init__(self, uri: Path):
self.uri = str(uri)
self.read_consistency_interval = None
self.storage_options = None
@functools.cached_property
def is_managed_remote(self) -> bool:
@@ -1292,6 +1293,19 @@ def test_add_columns(tmp_path):
assert table.to_arrow().column_names == ["id", "new_col"]
assert table.to_arrow()["new_col"].to_pylist() == [2, 3]
table.add_columns({"null_int": "cast(null as bigint)"})
assert table.schema.field("null_int").type == pa.int64()
@pytest.mark.asyncio
async def test_add_columns_async(db_async: AsyncConnection):
data = pa.table({"id": [0, 1]})
table = await db_async.create_table("my_table", data=data)
await table.add_columns({"new_col": "id + 2"})
data = await table.to_arrow()
assert data.column_names == ["id", "new_col"]
assert data["new_col"].to_pylist() == [2, 3]
def test_alter_columns(tmp_path):
db = lancedb.connect(tmp_path)
@@ -1301,6 +1315,18 @@ def test_alter_columns(tmp_path):
assert table.to_arrow().column_names == ["new_id"]
@pytest.mark.asyncio
async def test_alter_columns_async(db_async: AsyncConnection):
data = pa.table({"id": [0, 1]})
table = await db_async.create_table("my_table", data=data)
await table.alter_columns({"path": "id", "rename": "new_id"})
assert (await table.to_arrow()).column_names == ["new_id"]
await table.alter_columns(dict(path="new_id", data_type=pa.int16(), nullable=True))
data = await table.to_arrow()
assert data.column(0).type == pa.int16()
assert data.schema.field(0).nullable
def test_drop_columns(tmp_path):
db = lancedb.connect(tmp_path)
data = pa.table({"id": [0, 1], "category": ["a", "b"]})
@@ -1309,6 +1335,14 @@ def test_drop_columns(tmp_path):
assert table.to_arrow().column_names == ["id"]
@pytest.mark.asyncio
async def test_drop_columns_async(db_async: AsyncConnection):
data = pa.table({"id": [0, 1], "category": ["a", "b"]})
table = await db_async.create_table("my_table", data=data)
await table.drop_columns(["category"])
assert (await table.to_arrow()).column_names == ["id"]
@pytest.mark.asyncio
async def test_time_travel(db_async: AsyncConnection):
# Setup

View File

@@ -10,7 +10,7 @@ use arrow::{
use futures::stream::StreamExt;
use lancedb::arrow::SendableRecordBatchStream;
use pyo3::{pyclass, pymethods, Bound, PyAny, PyObject, PyRef, PyResult, Python};
use pyo3_asyncio_0_21::tokio::future_into_py;
use pyo3_async_runtimes::tokio::future_into_py;
use crate::error::PythonErrorExt;

View File

@@ -9,7 +9,7 @@ use pyo3::{
exceptions::{PyRuntimeError, PyValueError},
pyclass, pyfunction, pymethods, Bound, FromPyObject, PyAny, PyRef, PyResult, Python,
};
use pyo3_asyncio_0_21::tokio::future_into_py;
use pyo3_async_runtimes::tokio::future_into_py;
use crate::{error::PythonErrorExt, table::Table};
@@ -58,6 +58,7 @@ impl Connection {
self.inner.take();
}
#[pyo3(signature = (start_after=None, limit=None))]
pub fn table_names(
self_: PyRef<'_, Self>,
start_after: Option<String>,
@@ -74,6 +75,7 @@ impl Connection {
future_into_py(self_.py(), async move { op.execute().await.infer_error() })
}
#[pyo3(signature = (name, mode, data, storage_options=None, data_storage_version=None, enable_v2_manifest_paths=None))]
pub fn create_table<'a>(
self_: PyRef<'a, Self>,
name: String,
@@ -111,6 +113,7 @@ impl Connection {
})
}
#[pyo3(signature = (name, mode, schema, storage_options=None, data_storage_version=None, enable_v2_manifest_paths=None))]
pub fn create_empty_table<'a>(
self_: PyRef<'a, Self>,
name: String,
@@ -198,6 +201,7 @@ impl Connection {
}
#[pyfunction]
#[pyo3(signature = (uri, api_key=None, region=None, host_override=None, read_consistency_interval=None, client_config=None, storage_options=None))]
#[allow(clippy::too_many_arguments)]
pub fn connect(
py: Python,

View File

@@ -138,7 +138,9 @@ fn http_from_rust_error(
status_code: Option<u16>,
) -> PyResult<PyErr> {
let message = err.to_string();
let http_err_cls = py.import("lancedb.remote.errors")?.getattr("HttpError")?;
let http_err_cls = py
.import_bound("lancedb.remote.errors")?
.getattr("HttpError")?;
let py_err = http_err_cls.call1((message, request_id, status_code))?;
// Reset the traceback since it doesn't provide additional information.
@@ -149,5 +151,5 @@ fn http_from_rust_error(
py_err.setattr(intern!(py, "__cause__"), cause_err)?;
}
Ok(PyErr::from_value(py_err))
Ok(PyErr::from_value_bound(py_err))
}

View File

@@ -47,6 +47,7 @@ impl Index {
#[pymethods]
impl Index {
#[pyo3(signature = (distance_type=None, num_partitions=None, num_sub_vectors=None, max_iterations=None, sample_rate=None))]
#[staticmethod]
pub fn ivf_pq(
distance_type: Option<String>,
@@ -106,6 +107,7 @@ impl Index {
})
}
#[pyo3(signature = (with_position=None, base_tokenizer=None, language=None, max_token_length=None, lower_case=None, stem=None, remove_stop_words=None, ascii_folding=None))]
#[allow(clippy::too_many_arguments)]
#[staticmethod]
pub fn fts(
@@ -146,6 +148,7 @@ impl Index {
}
}
#[pyo3(signature = (distance_type=None, num_partitions=None, num_sub_vectors=None, max_iterations=None, sample_rate=None, m=None, ef_construction=None))]
#[staticmethod]
pub fn hnsw_pq(
distance_type: Option<String>,
@@ -184,6 +187,7 @@ impl Index {
})
}
#[pyo3(signature = (distance_type=None, num_partitions=None, max_iterations=None, sample_rate=None, m=None, ef_construction=None))]
#[staticmethod]
pub fn hnsw_sq(
distance_type: Option<String>,

View File

@@ -16,7 +16,11 @@ use arrow::RecordBatchStream;
use connection::{connect, Connection};
use env_logger::Env;
use index::{Index, IndexConfig};
use pyo3::{pymodule, types::PyModule, wrap_pyfunction, PyResult, Python};
use pyo3::{
pymodule,
types::{PyModule, PyModuleMethods},
wrap_pyfunction, Bound, PyResult, Python,
};
use query::{Query, VectorQuery};
use table::Table;
@@ -29,7 +33,7 @@ pub mod table;
pub mod util;
#[pymodule]
pub fn _lancedb(_py: Python, m: &PyModule) -> PyResult<()> {
pub fn _lancedb(_py: Python, m: &Bound<'_, PyModule>) -> PyResult<()> {
let env = Env::new()
.filter_or("LANCEDB_LOG", "warn")
.write_style("LANCEDB_LOG_STYLE");

View File

@@ -18,7 +18,8 @@ use arrow::pyarrow::FromPyArrow;
use lancedb::index::scalar::FullTextSearchQuery;
use lancedb::query::QueryExecutionOptions;
use lancedb::query::{
ExecutableQuery, Query as LanceDbQuery, QueryBase, Select, VectorQuery as LanceDbVectorQuery,
ExecutableQuery, HasQuery, Query as LanceDbQuery, QueryBase, Select,
VectorQuery as LanceDbVectorQuery,
};
use pyo3::exceptions::PyRuntimeError;
use pyo3::prelude::{PyAnyMethods, PyDictMethods};
@@ -29,7 +30,7 @@ use pyo3::PyAny;
use pyo3::PyRef;
use pyo3::PyResult;
use pyo3::{pyclass, PyErr};
use pyo3_asyncio_0_21::tokio::future_into_py;
use pyo3_async_runtimes::tokio::future_into_py;
use crate::arrow::RecordBatchStream;
use crate::error::PythonErrorExt;
@@ -87,7 +88,7 @@ impl Query {
Ok(VectorQuery { inner })
}
pub fn nearest_to_text(&mut self, query: Bound<'_, PyDict>) -> PyResult<()> {
pub fn nearest_to_text(&mut self, query: Bound<'_, PyDict>) -> PyResult<FTSQuery> {
let query_text = query
.get_item("query")?
.ok_or(PyErr::new::<PyRuntimeError, _>(
@@ -100,11 +101,14 @@ impl Query {
.transpose()?;
let fts_query = FullTextSearchQuery::new(query_text).columns(columns);
self.inner = self.inner.clone().full_text_search(fts_query);
Ok(())
Ok(FTSQuery {
fts_query,
inner: self.inner.clone(),
})
}
#[pyo3(signature = (max_batch_length=None))]
pub fn execute(
self_: PyRef<'_, Self>,
max_batch_length: Option<u32>,
@@ -132,6 +136,87 @@ impl Query {
}
#[pyclass]
#[derive(Clone)]
pub struct FTSQuery {
inner: LanceDbQuery,
fts_query: FullTextSearchQuery,
}
#[pymethods]
impl FTSQuery {
pub fn r#where(&mut self, predicate: String) {
self.inner = self.inner.clone().only_if(predicate);
}
pub fn select(&mut self, columns: Vec<(String, String)>) {
self.inner = self.inner.clone().select(Select::dynamic(&columns));
}
pub fn limit(&mut self, limit: u32) {
self.inner = self.inner.clone().limit(limit as usize);
}
pub fn offset(&mut self, offset: u32) {
self.inner = self.inner.clone().offset(offset as usize);
}
pub fn fast_search(&mut self) {
self.inner = self.inner.clone().fast_search();
}
pub fn with_row_id(&mut self) {
self.inner = self.inner.clone().with_row_id();
}
pub fn postfilter(&mut self) {
self.inner = self.inner.clone().postfilter();
}
#[pyo3(signature = (max_batch_length=None))]
pub fn execute(
self_: PyRef<'_, Self>,
max_batch_length: Option<u32>,
) -> PyResult<Bound<'_, PyAny>> {
let inner = self_
.inner
.clone()
.full_text_search(self_.fts_query.clone());
future_into_py(self_.py(), async move {
let mut opts = QueryExecutionOptions::default();
if let Some(max_batch_length) = max_batch_length {
opts.max_batch_length = max_batch_length;
}
let inner_stream = inner.execute_with_options(opts).await.infer_error()?;
Ok(RecordBatchStream::new(inner_stream))
})
}
pub fn nearest_to(&mut self, vector: Bound<'_, PyAny>) -> PyResult<HybridQuery> {
let vector_query = Query::new(self.inner.clone()).nearest_to(vector)?;
Ok(HybridQuery {
inner_fts: self.clone(),
inner_vec: vector_query,
})
}
pub fn explain_plan(self_: PyRef<'_, Self>, verbose: bool) -> PyResult<Bound<'_, PyAny>> {
let inner = self_.inner.clone();
future_into_py(self_.py(), async move {
inner
.explain_plan(verbose)
.await
.map_err(|e| PyRuntimeError::new_err(e.to_string()))
})
}
pub fn get_query(&self) -> String {
self.fts_query.query.clone()
}
}
#[pyclass]
#[derive(Clone)]
pub struct VectorQuery {
inner: LanceDbVectorQuery,
}
@@ -203,6 +288,7 @@ impl VectorQuery {
self.inner = self.inner.clone().bypass_vector_index()
}
#[pyo3(signature = (max_batch_length=None))]
pub fn execute(
self_: PyRef<'_, Self>,
max_batch_length: Option<u32>,
@@ -227,4 +313,105 @@ impl VectorQuery {
.map_err(|e| PyRuntimeError::new_err(e.to_string()))
})
}
pub fn nearest_to_text(&mut self, query: Bound<'_, PyDict>) -> PyResult<HybridQuery> {
let fts_query = Query::new(self.inner.mut_query().clone()).nearest_to_text(query)?;
Ok(HybridQuery {
inner_vec: self.clone(),
inner_fts: fts_query,
})
}
}
#[pyclass]
pub struct HybridQuery {
inner_vec: VectorQuery,
inner_fts: FTSQuery,
}
#[pymethods]
impl HybridQuery {
pub fn r#where(&mut self, predicate: String) {
self.inner_vec.r#where(predicate.clone());
self.inner_fts.r#where(predicate);
}
pub fn select(&mut self, columns: Vec<(String, String)>) {
self.inner_vec.select(columns.clone());
self.inner_fts.select(columns);
}
pub fn limit(&mut self, limit: u32) {
self.inner_vec.limit(limit);
self.inner_fts.limit(limit);
}
pub fn offset(&mut self, offset: u32) {
self.inner_vec.offset(offset);
self.inner_fts.offset(offset);
}
pub fn fast_search(&mut self) {
self.inner_vec.fast_search();
self.inner_fts.fast_search();
}
pub fn with_row_id(&mut self) {
self.inner_fts.with_row_id();
self.inner_vec.with_row_id();
}
pub fn postfilter(&mut self) {
self.inner_vec.postfilter();
self.inner_fts.postfilter();
}
pub fn add_query_vector(&mut self, vector: Bound<'_, PyAny>) -> PyResult<()> {
self.inner_vec.add_query_vector(vector)
}
pub fn column(&mut self, column: String) {
self.inner_vec.column(column);
}
pub fn distance_type(&mut self, distance_type: String) -> PyResult<()> {
self.inner_vec.distance_type(distance_type)
}
pub fn refine_factor(&mut self, refine_factor: u32) {
self.inner_vec.refine_factor(refine_factor);
}
pub fn nprobes(&mut self, nprobe: u32) {
self.inner_vec.nprobes(nprobe);
}
pub fn ef(&mut self, ef: u32) {
self.inner_vec.ef(ef);
}
pub fn bypass_vector_index(&mut self) {
self.inner_vec.bypass_vector_index();
}
pub fn to_vector_query(&mut self) -> PyResult<VectorQuery> {
Ok(VectorQuery {
inner: self.inner_vec.inner.clone(),
})
}
pub fn to_fts_query(&mut self) -> PyResult<FTSQuery> {
Ok(FTSQuery {
inner: self.inner_fts.inner.clone(),
fts_query: self.inner_fts.fts_query.clone(),
})
}
pub fn get_limit(&mut self) -> Option<u32> {
self.inner_fts.inner.limit.map(|i| i as u32)
}
pub fn get_with_row_id(&mut self) -> bool {
self.inner_fts.inner.with_row_id
}
}
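For context, the new FTSQuery and HybridQuery bindings are thin wrappers over the lancedb Rust query builder. A minimal sketch of the underlying Rust call chain these bindings delegate to (the filter, limit, and table contents are hypothetical; the call surface mirrors the doctest added to query.rs further below):

```rust
use futures::TryStreamExt;
use lance_index::scalar::FullTextSearchQuery;
use lancedb::query::{ExecutableQuery, QueryBase};
use lancedb::Table;

// Minimal sketch of the Rust query that the FTSQuery binding wraps;
// the filter, limit, and table contents here are hypothetical.
async fn fts_search(table: &Table) -> Result<(), Box<dyn std::error::Error>> {
    let batches: Vec<_> = table
        .query()
        .full_text_search(FullTextSearchQuery::new("hello world".into()))
        .only_if("category = 'news'".to_string())
        .limit(10)
        .execute()
        .await?
        .try_collect()
        .await?;
    println!("matched {} batches", batches.len());
    Ok(())
}
```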

View File

@@ -1,17 +1,21 @@
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors
use arrow::{
datatypes::DataType,
ffi_stream::ArrowArrayStreamReader,
pyarrow::{FromPyArrow, ToPyArrow},
};
use lancedb::table::{
AddDataMode, Duration, OptimizeAction, OptimizeOptions, Table as LanceDbTable,
AddDataMode, ColumnAlteration, Duration, NewColumnTransform, OptimizeAction, OptimizeOptions,
Table as LanceDbTable,
};
use pyo3::{
exceptions::{PyRuntimeError, PyValueError},
pyclass, pymethods,
types::{IntoPyDict, PyDict, PyDictMethods, PyString},
types::{IntoPyDict, PyAnyMethods, PyDict, PyDictMethods},
Bound, FromPyObject, PyAny, PyRef, PyResult, Python, ToPyObject,
};
use pyo3_asyncio_0_21::tokio::future_into_py;
use pyo3_async_runtimes::tokio::future_into_py;
use crate::{
error::PythonErrorExt,
@@ -137,9 +141,10 @@ impl Table {
})
}
#[pyo3(signature = (updates, r#where=None))]
pub fn update<'a>(
self_: PyRef<'a, Self>,
updates: &PyDict,
updates: &Bound<'_, PyDict>,
r#where: Option<String>,
) -> PyResult<Bound<'a, PyAny>> {
let mut op = self_.inner_ref()?.update();
@@ -147,10 +152,8 @@ impl Table {
op = op.only_if(only_if);
}
for (column_name, value) in updates.into_iter() {
let column_name: &PyString = column_name.downcast()?;
let column_name = column_name.to_str()?.to_string();
let value: &PyString = value.downcast()?;
let value = value.to_str()?.to_string();
let column_name: String = column_name.extract()?;
let value: String = value.extract()?;
op = op.column(column_name, value);
}
future_into_py(self_.py(), async move {
@@ -159,6 +162,7 @@ impl Table {
})
}
#[pyo3(signature = (filter=None))]
pub fn count_rows(
self_: PyRef<'_, Self>,
filter: Option<String>,
@@ -169,6 +173,7 @@ impl Table {
})
}
#[pyo3(signature = (column, index=None, replace=None))]
pub fn create_index<'a>(
self_: PyRef<'a, Self>,
column: String,
@@ -263,7 +268,8 @@ impl Table {
.unwrap();
let tup: Vec<(&String, &String)> = v.metadata.iter().collect();
dict.set_item("metadata", tup.into_py_dict(py)).unwrap();
dict.set_item("metadata", tup.into_py_dict_bound(py))
.unwrap();
dict.to_object(py)
})
.collect::<Vec<_>>()
@@ -299,6 +305,7 @@ impl Table {
Query::new(self.inner_ref().unwrap().query())
}
#[pyo3(signature = (cleanup_since_ms=None, delete_unverified=None))]
pub fn optimize(
self_: PyRef<'_, Self>,
cleanup_since_ms: Option<u64>,
@@ -406,6 +413,72 @@ impl Table {
.infer_error()
})
}
pub fn add_columns(
self_: PyRef<'_, Self>,
definitions: Vec<(String, String)>,
) -> PyResult<Bound<'_, PyAny>> {
let definitions = NewColumnTransform::SqlExpressions(definitions);
let inner = self_.inner_ref()?.clone();
future_into_py(self_.py(), async move {
inner.add_columns(definitions, None).await.infer_error()?;
Ok(())
})
}
pub fn alter_columns<'a>(
self_: PyRef<'a, Self>,
alterations: Vec<Bound<PyDict>>,
) -> PyResult<Bound<'a, PyAny>> {
let alterations = alterations
.iter()
.map(|alteration| {
let path = alteration
.get_item("path")?
.ok_or_else(|| PyValueError::new_err("Missing path"))?
.extract()?;
let rename = {
// We prefer rename, but support name for backwards compatibility
let rename = if let Ok(Some(rename)) = alteration.get_item("rename") {
Some(rename)
} else {
alteration.get_item("name")?
};
rename.map(|name| name.extract()).transpose()?
};
let nullable = alteration
.get_item("nullable")?
.map(|val| val.extract())
.transpose()?;
let data_type = alteration
.get_item("data_type")?
.map(|val| DataType::from_pyarrow_bound(&val))
.transpose()?;
Ok(ColumnAlteration {
path,
rename,
nullable,
data_type,
})
})
.collect::<PyResult<Vec<_>>>()?;
let inner = self_.inner_ref()?.clone();
future_into_py(self_.py(), async move {
inner.alter_columns(&alterations).await.infer_error()?;
Ok(())
})
}
pub fn drop_columns(self_: PyRef<Self>, columns: Vec<String>) -> PyResult<Bound<PyAny>> {
let inner = self_.inner_ref()?.clone();
future_into_py(self_.py(), async move {
let column_refs = columns.iter().map(String::as_str).collect::<Vec<&str>>();
inner.drop_columns(&column_refs).await.infer_error()?;
Ok(())
})
}
}
#[derive(FromPyObject)]

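Each Python alteration dict ({"path": ..., "rename"/"name": ..., "nullable": ..., "data_type": ...}) is parsed into a lancedb::table::ColumnAlteration. A rough Rust equivalent of one such dict, with hypothetical column names:

```rust
use arrow_schema::DataType;
use lancedb::table::ColumnAlteration;

fn main() {
    // Rust equivalent of the Python dict
    // {"path": "x", "rename": "y", "nullable": True, "data_type": pa.int32()};
    // the column names are hypothetical.
    let alteration = ColumnAlteration::new("x".into())
        .rename("y".into())
        .set_nullable(true)
        .cast_to(DataType::Int32);
    println!("altering column {}", alteration.path);
}
```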
View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-node"
version = "0.14.0-beta.0"
version = "0.14.0"
description = "Serverless, low-latency vector database for AI applications"
license.workspace = true
edition.workspace = true

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb"
version = "0.14.0-beta.0"
version = "0.14.0"
edition.workspace = true
description = "LanceDB: A serverless, low-latency vector database for AI applications"
license.workspace = true

View File

@@ -625,7 +625,7 @@ impl ConnectBuilder {
/// Set the LanceDB Cloud client configuration.
///
/// ```
/// ```no_run
/// # use lancedb::connect;
/// # use lancedb::remote::*;
/// connect("db://my_database")

View File

@@ -30,7 +30,7 @@
//!
//! LanceDB runs in process. To use it in your Rust project, add it to your `Cargo.toml` by running:
//!
//! ```ignore
//! ```shell
//! cargo add lancedb
//! ```
//!

View File

@@ -348,7 +348,7 @@ pub trait QueryBase {
///
/// The filter should be supplied as an SQL query string. For example:
///
/// ```ignore
/// ```sql
/// x > 10
/// y > 0 AND y < 100
/// x > 5 OR y = 'test'
@@ -364,8 +364,18 @@ pub trait QueryBase {
///
/// This method is only valid on tables that have a full text search index.
///
/// ```ignore
/// query.full_text_search(FullTextSearchQuery::new("hello world"))
/// ```
/// use lance_index::scalar::FullTextSearchQuery;
/// use lancedb::query::{QueryBase, ExecutableQuery};
///
/// # use lancedb::Table;
/// # async fn query(table: &Table) -> Result<(), Box<dyn std::error::Error>> {
/// let results = table.query()
/// .full_text_search(FullTextSearchQuery::new("hello world".into()))
/// .execute()
/// .await?;
/// # Ok(())
/// # }
/// ```
fn full_text_search(self, query: FullTextSearchQuery) -> Self;
@@ -563,7 +573,7 @@ pub struct Query {
parent: Arc<dyn TableInternal>,
/// limit the number of rows to return.
pub(crate) limit: Option<usize>,
pub limit: Option<usize>,
/// Offset of the query.
pub(crate) offset: Option<usize>,
@@ -586,7 +596,7 @@ pub struct Query {
/// If set to true, the query will return the `_rowid` meta column.
///
/// By default, this is false.
pub(crate) with_row_id: bool,
pub with_row_id: bool,
/// If set to false, the filter will be applied after the vector search.
pub(crate) prefilter: bool,

View File

@@ -228,6 +228,14 @@ impl RestfulLanceDbClient<Sender> {
});
}
let db_name = parsed_url.host_str().unwrap();
let db_prefix = {
let prefix = parsed_url.path().trim_start_matches('/');
if prefix.is_empty() {
None
} else {
Some(prefix)
}
};
// Get the timeouts
let connect_timeout = Self::get_timeout(
@@ -258,6 +266,7 @@ impl RestfulLanceDbClient<Sender> {
db_name,
host_override.is_some(),
options,
db_prefix,
)?)
.user_agent(client_config.user_agent)
.build()
@@ -292,6 +301,7 @@ impl<S: HttpSend> RestfulLanceDbClient<S> {
db_name: &str,
has_host_override: bool,
options: &RemoteOptions,
db_prefix: Option<&str>,
) -> Result<HeaderMap> {
let mut headers = HeaderMap::new();
headers.insert(
@@ -317,6 +327,17 @@ impl<S: HttpSend> RestfulLanceDbClient<S> {
})?,
);
}
if let Some(db_prefix) = db_prefix {
headers.insert(
"x-lancedb-database-prefix",
HeaderValue::from_str(db_prefix).map_err(|_| Error::InvalidInput {
message: format!("non-ascii database prefix '{}' provided", db_prefix),
})?,
);
}
if let Some(v) = options.0.get("account_name") {
headers.insert(

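With this change, a path component on a LanceDB Cloud URI is forwarded to the server as the x-lancedb-database-prefix header. A minimal sketch, assuming a URI of the form db://&lt;database&gt;/&lt;prefix&gt; and a placeholder API key (the builder methods here follow the public ConnectBuilder/Connection API):

```rust
use lancedb::connect;

async fn connect_with_prefix() -> Result<(), Box<dyn std::error::Error>> {
    // "my_prefix" below is sent to the server as the
    // x-lancedb-database-prefix header; the API key is a placeholder.
    let db = connect("db://my_database/my_prefix")
        .api_key("sk_placeholder")
        .execute()
        .await?;
    println!("{:?}", db.table_names().execute().await?);
    Ok(())
}
```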
View File

@@ -271,7 +271,7 @@ impl From<StorageOptions> for RemoteOptions {
filtered.insert(opt.to_string(), v.to_string());
}
}
RemoteOptions::new(filtered)
Self::new(filtered)
}
}

View File

@@ -17,7 +17,7 @@ use datafusion_physical_plan::{ExecutionPlan, SendableRecordBatchStream};
use futures::TryStreamExt;
use http::header::CONTENT_TYPE;
use http::StatusCode;
use lance::arrow::json::JsonSchema;
use lance::arrow::json::{JsonDataType, JsonSchema};
use lance::dataset::scanner::DatasetRecordBatchStream;
use lance::dataset::{ColumnAlteration, NewColumnTransform, Version};
use lance_datafusion::exec::OneShotExec;
@@ -643,25 +643,80 @@ impl<S: HttpSend> TableInternal for RemoteTable<S> {
}
async fn add_columns(
&self,
_transforms: NewColumnTransform,
transforms: NewColumnTransform,
_read_columns: Option<Vec<String>>,
) -> Result<()> {
self.check_mutable().await?;
Err(Error::NotSupported {
message: "add_columns is not yet supported.".into(),
})
match transforms {
NewColumnTransform::SqlExpressions(expressions) => {
let body = expressions
.into_iter()
.map(|(name, expression)| {
serde_json::json!({
"name": name,
"expression": expression,
})
})
.collect::<Vec<_>>();
let body = serde_json::json!({ "new_columns": body });
let request = self
.client
.post(&format!("/v1/table/{}/add_columns/", self.name))
.json(&body);
let (request_id, response) = self.client.send(request, false).await?;
self.check_table_response(&request_id, response).await?;
Ok(())
}
_ => {
return Err(Error::NotSupported {
message: "Only SQL expressions are supported for adding columns".into(),
});
}
}
}
async fn alter_columns(&self, _alterations: &[ColumnAlteration]) -> Result<()> {
async fn alter_columns(&self, alterations: &[ColumnAlteration]) -> Result<()> {
self.check_mutable().await?;
Err(Error::NotSupported {
message: "alter_columns is not yet supported.".into(),
})
let body = alterations
.iter()
.map(|alteration| {
let mut value = serde_json::json!({
"path": alteration.path,
});
if let Some(rename) = &alteration.rename {
value["rename"] = serde_json::Value::String(rename.clone());
}
if let Some(data_type) = &alteration.data_type {
let json_data_type = JsonDataType::try_from(data_type).unwrap();
let json_data_type = serde_json::to_value(&json_data_type).unwrap();
value["data_type"] = json_data_type;
}
if let Some(nullable) = &alteration.nullable {
value["nullable"] = serde_json::Value::Bool(*nullable);
}
value
})
.collect::<Vec<_>>();
let body = serde_json::json!({ "alterations": body });
let request = self
.client
.post(&format!("/v1/table/{}/alter_columns/", self.name))
.json(&body);
let (request_id, response) = self.client.send(request, false).await?;
self.check_table_response(&request_id, response).await?;
Ok(())
}
async fn drop_columns(&self, _columns: &[&str]) -> Result<()> {
async fn drop_columns(&self, columns: &[&str]) -> Result<()> {
self.check_mutable().await?;
Err(Error::NotSupported {
message: "drop_columns is not yet supported.".into(),
})
let body = serde_json::json!({ "columns": columns });
let request = self
.client
.post(&format!("/v1/table/{}/drop_columns/", self.name))
.json(&body);
let (request_id, response) = self.client.send(request, false).await?;
self.check_table_response(&request_id, response).await?;
Ok(())
}
async fn list_indices(&self) -> Result<Vec<IndexConfig>> {
@@ -844,7 +899,17 @@ mod tests {
Box::pin(table.update().column("a", "a + 1").execute().map_ok(|_| ())),
Box::pin(table.add(example_data()).execute().map_ok(|_| ())),
Box::pin(table.merge_insert(&["test"]).execute(example_data())),
Box::pin(table.delete("false")), // TODO: other endpoints.
Box::pin(table.delete("false")),
Box::pin(table.add_columns(
NewColumnTransform::SqlExpressions(vec![("x".into(), "y".into())]),
None,
)),
Box::pin(async {
let alterations = vec![ColumnAlteration::new("x".into()).rename("y".into())];
table.alter_columns(&alterations).await
}),
Box::pin(table.drop_columns(&["a"])),
// TODO: other endpoints.
];
for result in results {
@@ -1799,4 +1864,114 @@ mod tests {
.await;
assert!(matches!(res, Err(Error::NotSupported { .. })));
}
#[tokio::test]
async fn test_add_columns() {
let table = Table::new_with_handler("my_table", |request| {
assert_eq!(request.method(), "POST");
assert_eq!(request.url().path(), "/v1/table/my_table/add_columns/");
assert_eq!(
request.headers().get("Content-Type").unwrap(),
JSON_CONTENT_TYPE
);
let body = request.body().unwrap().as_bytes().unwrap();
let body = std::str::from_utf8(body).unwrap();
let value: serde_json::Value = serde_json::from_str(body).unwrap();
let new_columns = value.get("new_columns").unwrap().as_array().unwrap();
assert!(new_columns.len() == 2);
let col_name = new_columns[0]["name"].as_str().unwrap();
let expression = new_columns[0]["expression"].as_str().unwrap();
assert_eq!(col_name, "b");
assert_eq!(expression, "a + 1");
let col_name = new_columns[1]["name"].as_str().unwrap();
let expression = new_columns[1]["expression"].as_str().unwrap();
assert_eq!(col_name, "x");
assert_eq!(expression, "cast(NULL as int32)");
http::Response::builder().status(200).body("{}").unwrap()
});
table
.add_columns(
NewColumnTransform::SqlExpressions(vec![
("b".into(), "a + 1".into()),
("x".into(), "cast(NULL as int32)".into()),
]),
None,
)
.await
.unwrap();
}
#[tokio::test]
async fn test_alter_columns() {
let table = Table::new_with_handler("my_table", |request| {
assert_eq!(request.method(), "POST");
assert_eq!(request.url().path(), "/v1/table/my_table/alter_columns/");
assert_eq!(
request.headers().get("Content-Type").unwrap(),
JSON_CONTENT_TYPE
);
let body = request.body().unwrap().as_bytes().unwrap();
let body = std::str::from_utf8(body).unwrap();
let value: serde_json::Value = serde_json::from_str(body).unwrap();
let alterations = value.get("alterations").unwrap().as_array().unwrap();
assert!(alterations.len() == 2);
let path = alterations[0]["path"].as_str().unwrap();
let data_type = alterations[0]["data_type"]["type"].as_str().unwrap();
assert_eq!(path, "b.c");
assert_eq!(data_type, "int32");
let path = alterations[1]["path"].as_str().unwrap();
let nullable = alterations[1]["nullable"].as_bool().unwrap();
let rename = alterations[1]["rename"].as_str().unwrap();
assert_eq!(path, "x");
assert!(nullable);
assert_eq!(rename, "y");
http::Response::builder().status(200).body("{}").unwrap()
});
table
.alter_columns(&[
ColumnAlteration::new("b.c".into()).cast_to(DataType::Int32),
ColumnAlteration::new("x".into())
.rename("y".into())
.set_nullable(true),
])
.await
.unwrap();
}
#[tokio::test]
async fn test_drop_columns() {
let table = Table::new_with_handler("my_table", |request| {
assert_eq!(request.method(), "POST");
assert_eq!(request.url().path(), "/v1/table/my_table/drop_columns/");
assert_eq!(
request.headers().get("Content-Type").unwrap(),
JSON_CONTENT_TYPE
);
let body = request.body().unwrap().as_bytes().unwrap();
let body = std::str::from_utf8(body).unwrap();
let value: serde_json::Value = serde_json::from_str(body).unwrap();
let columns = value.get("columns").unwrap().as_array().unwrap();
assert!(columns.len() == 2);
let col1 = columns[0].as_str().unwrap();
let col2 = columns[1].as_str().unwrap();
assert_eq!(col1, "a");
assert_eq!(col2, "b");
http::Response::builder().status(200).body("{}").unwrap()
});
table.drop_columns(&["a", "b"]).await.unwrap();
}
}
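Taken together, the remote table now supports the same schema-evolution surface as the native one. A hedged end-to-end sketch of the three calls, with hypothetical column names and expressions:

```rust
use arrow_schema::DataType;
use lancedb::table::{ColumnAlteration, NewColumnTransform};
use lancedb::Table;

// Sketch of the three newly supported remote endpoints; each call maps
// to a POST against /v1/table/{name}/{add,alter,drop}_columns/.
async fn evolve_schema(table: &Table) -> Result<(), Box<dyn std::error::Error>> {
    // add_columns: backfill a new column from a SQL expression.
    table
        .add_columns(
            NewColumnTransform::SqlExpressions(vec![("b".into(), "a + 1".into())]),
            None,
        )
        .await?;
    // alter_columns: rename, cast, and relax nullability in one request.
    table
        .alter_columns(&[ColumnAlteration::new("b".into())
            .rename("c".into())
            .cast_to(DataType::Int64)
            .set_nullable(true)])
        .await?;
    // drop_columns: remove the column again.
    table.drop_columns(&["c"]).await?;
    Ok(())
}
```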

View File

@@ -14,7 +14,6 @@
//! LanceDB Table APIs
use std::collections::HashMap;
use std::path::Path;
use std::sync::Arc;
@@ -37,7 +36,8 @@ pub use lance::dataset::ColumnAlteration;
pub use lance::dataset::NewColumnTransform;
pub use lance::dataset::ReadParams;
use lance::dataset::{
Dataset, UpdateBuilder as LanceUpdateBuilder, Version, WhenMatched, WriteMode, WriteParams,
Dataset, InsertBuilder, UpdateBuilder as LanceUpdateBuilder, Version, WhenMatched, WriteMode,
WriteParams,
};
use lance::dataset::{MergeInsertBuilder as LanceMergeInsertBuilder, WhenNotMatchedBySource};
use lance::io::WrappingObjectStore;
@@ -1046,12 +1046,6 @@ pub struct NativeTable {
name: String,
uri: String,
pub(crate) dataset: dataset::DatasetConsistencyWrapper,
// the object store wrapper to use on write path
store_wrapper: Option<Arc<dyn WrappingObjectStore>>,
storage_options: HashMap<String, String>,
// This comes from the connection options. We store here so we can pass down
// to the dataset when we recreate it (for example, in checkout_latest).
read_consistency_interval: Option<std::time::Duration>,
@@ -1117,13 +1111,6 @@ impl NativeTable {
None => params,
};
let storage_options = params
.store_options
.clone()
.unwrap_or_default()
.storage_options
.unwrap_or_default();
let dataset = DatasetBuilder::from_uri(uri)
.with_read_params(params)
.load()
@@ -1141,8 +1128,6 @@ impl NativeTable {
name: name.to_string(),
uri: uri.to_string(),
dataset,
store_wrapper: write_store_wrapper,
storage_options,
read_consistency_interval,
})
}
@@ -1191,12 +1176,6 @@ impl NativeTable {
Some(wrapper) => params.patch_with_store_wrapper(wrapper)?,
None => params,
};
let storage_options = params
.store_params
.clone()
.unwrap_or_default()
.storage_options
.unwrap_or_default();
let dataset = Dataset::write(batches, uri, Some(params))
.await
@@ -1210,8 +1189,6 @@ impl NativeTable {
name: name.to_string(),
uri: uri.to_string(),
dataset: DatasetConsistencyWrapper::new_latest(dataset, read_consistency_interval),
store_wrapper: write_store_wrapper,
storage_options,
read_consistency_interval,
})
}
@@ -1758,10 +1735,13 @@ impl TableInternal for NativeTable {
add: AddDataBuilder<NoData>,
data: Box<dyn RecordBatchReader + Send>,
) -> Result<()> {
let data =
MaybeEmbedded::try_new(data, self.table_definition().await?, add.embedding_registry)?;
let data = Box::new(MaybeEmbedded::try_new(
data,
self.table_definition().await?,
add.embedding_registry,
)?) as Box<dyn RecordBatchReader + Send>;
let mut lance_params = add.write_options.lance_write_params.unwrap_or(WriteParams {
let lance_params = add.write_options.lance_write_params.unwrap_or(WriteParams {
mode: match add.mode {
AddDataMode::Append => WriteMode::Append,
AddDataMode::Overwrite => WriteMode::Overwrite,
@@ -1769,27 +1749,15 @@ impl TableInternal for NativeTable {
..Default::default()
});
// Bring storage options from table
let storage_options = lance_params
.store_params
.get_or_insert(Default::default())
.storage_options
.get_or_insert(Default::default());
for (key, value) in self.storage_options.iter() {
if !storage_options.contains_key(key) {
storage_options.insert(key.clone(), value.clone());
}
}
// patch the params if we have a write store wrapper
let lance_params = match self.store_wrapper.clone() {
Some(wrapper) => lance_params.patch_with_store_wrapper(wrapper)?,
None => lance_params,
let dataset = {
// Scope the mutable borrow of self.dataset narrowly so it is released before set_latest, avoiding a deadlock.
let ds = self.dataset.get_mut().await?;
InsertBuilder::new(Arc::new(ds.clone()))
.with_params(&lance_params)
.execute_stream(data)
.await?
};
self.dataset.ensure_mutable().await?;
let dataset = Dataset::write(data, &self.uri, Some(lance_params)).await?;
self.dataset.set_latest(dataset).await;
Ok(())
}
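The essence of the reworked write path: instead of re-opening the dataset by URI through Dataset::write, the add path now commits against the already-open Dataset handle via Lance's InsertBuilder. A simplified sketch of the pattern (not the exact NativeTable internals):

```rust
use std::sync::Arc;

use arrow_array::RecordBatchReader;
use lance::dataset::{Dataset, InsertBuilder, WriteParams};

// Simplified sketch: append to an already-open Dataset without
// re-opening the table by URI, returning the new dataset version.
async fn append(
    dataset: &Dataset,
    data: Box<dyn RecordBatchReader + Send>,
    params: WriteParams,
) -> Result<Dataset, Box<dyn std::error::Error>> {
    let updated = InsertBuilder::new(Arc::new(dataset.clone()))
        .with_params(&params)
        .execute_stream(data)
        .await?;
    Ok(updated)
}
```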

View File

@@ -15,6 +15,7 @@
use std::sync::Arc;
use arrow_schema::{DataType, Schema};
use lance::arrow::json::JsonDataType;
use lance::dataset::{ReadParams, WriteParams};
use lance::io::{ObjectStoreParams, WrappingObjectStore};
use lazy_static::lazy_static;
@@ -175,6 +176,15 @@ pub fn supported_vector_data_type(dtype: &DataType) -> bool {
}
}
/// Note: this is temporary until we get a proper datatype conversion in Lance.
pub fn string_to_datatype(s: &str) -> Option<DataType> {
let data_type = serde_json::Value::String(s.to_string());
let json_type =
serde_json::Value::Object([("type".to_string(), data_type)].iter().cloned().collect());
let json_type: JsonDataType = serde_json::from_value(json_type).ok()?;
(&json_type).try_into().ok()
}
#[cfg(test)]
mod tests {
use super::*;
@@ -239,4 +249,11 @@ mod tests {
assert!(validate_table_name("my@table").is_err());
assert!(validate_table_name("name with space").is_err());
}
#[test]
fn test_string_to_datatype() {
let string = "int32";
let expected = DataType::Int32;
assert_eq!(string_to_datatype(string), Some(expected));
}
}
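The helper simply round-trips the type name through Lance's JSON schema representation. An equivalent standalone sketch:

```rust
use arrow_schema::DataType;
use lance::arrow::json::JsonDataType;

// Equivalent of string_to_datatype: wrap the name as {"type": "<name>"}
// and let JsonDataType deserialize it; unknown names yield None.
fn parse_datatype(name: &str) -> Option<DataType> {
    let json = serde_json::json!({ "type": name });
    let json_type: JsonDataType = serde_json::from_value(json).ok()?;
    (&json_type).try_into().ok()
}

fn main() {
    assert_eq!(parse_datatype("int32"), Some(DataType::Int32));
    assert_eq!(parse_datatype("not_a_type"), None);
}
```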