Compare commits

..

256 Commits

Author SHA1 Message Date
Lance Release
27404c8623 Bump version: 0.17.1-beta.7 → 0.17.1 2024-12-24 18:37:28 +00:00
Lance Release
f181c7e77f Bump version: 0.17.1-beta.6 → 0.17.1-beta.7 2024-12-24 18:37:27 +00:00
BubbleCal
e70fd4fecc feat: support IVF_FLAT, binary vectors and hamming distance (#1955)
Binary vectors and Hamming distance only work with IVF_FLAT, so this PR
introduces all of them together.

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-12-24 10:36:20 -08:00
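A rough sketch of how the new options described above might be exercised from the Python sync API. The parameter spellings here (`index_type="IVF_FLAT"`, `metric="hamming"`) are assumptions based on this PR's summary, not confirmed signatures:

```python
import numpy as np
import pyarrow as pa
import lancedb

db = lancedb.connect("./binary-demo")

# Binary vectors are stored packed into uint8 values (8 bits per element).
dim_bytes = 32  # 256-bit binary vectors
schema = pa.schema([
    pa.field("id", pa.int64()),
    pa.field("vector", pa.list_(pa.uint8(), dim_bytes)),
])
data = [
    {"id": i, "vector": np.random.randint(0, 256, dim_bytes, dtype=np.uint8).tolist()}
    for i in range(1000)
]
tbl = db.create_table("binary_vectors", data, schema=schema, mode="overwrite")

# Assumed spelling of the new options: an IVF_FLAT index with Hamming distance.
tbl.create_index(metric="hamming", index_type="IVF_FLAT", num_partitions=2)

query = np.random.randint(0, 256, dim_bytes, dtype=np.uint8).tolist()
# The per-query distance type is also assumed to accept "hamming".
print(tbl.search(query).metric("hamming").limit(5).to_pandas())
```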
verma nakul
ac0068b80e feat(python): add ignore_missing to the async drop_table() method (#1953)
- feat(db): add `ignore_missing` to async `drop_table` method

Fixes #1951

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-24 10:33:47 -08:00
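A minimal sketch of the new flag in the async API (the keyword name comes from the PR title; everything else is illustrative):

```python
import asyncio
import lancedb

async def main():
    db = await lancedb.connect_async("./demo")
    # With ignore_missing=True, dropping a non-existent table is a no-op
    # instead of raising an error.
    await db.drop_table("does_not_exist", ignore_missing=True)

asyncio.run(main())
```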
Hezi Zisman
ebac960571 feat(python): add bypass_vector_index to sync api (#1947)
Hi lancedb team,

This PR adds the `bypass_vector_index` logic to the sync API, as
described in [Issue
#535](https://github.com/lancedb/lancedb/issues/535). (Closes #535).

I've implemented it only for the regular vector search. If you think it
should also be supported for FTS, Hybrid, or Empty queries and for the
cloud solution, please let me know, and I'll be happy to extend it.

Since there’s no `CONTRIBUTING.md` or contribution guidelines, I opted
for the simplest implementation to get this started.

Looking forward to your feedback!

Thanks!

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-24 10:33:26 -08:00
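A short sketch of the sync usage this enables, assuming the builder method carries the same `bypass_vector_index` name described above:

```python
import lancedb

db = lancedb.connect("./demo")
tbl = db.open_table("my_table")

# Brute-force (exact) search: skip the ANN index entirely.
results = (
    tbl.search([0.1, 0.2, 0.3, 0.4])
    .bypass_vector_index()
    .limit(10)
    .to_pandas()
)
print(results)
```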
Lance Release
59b57055e7 Updating package-lock.json 2024-12-19 19:40:28 +00:00
Lance Release
591c8de8fc Updating package-lock.json 2024-12-19 19:40:13 +00:00
Lance Release
f835ff310f Bump version: 0.14.1-beta.5 → 0.14.1-beta.6 2024-12-19 19:39:41 +00:00
Lance Release
cf8c2edaf4 Bump version: 0.17.1-beta.5 → 0.17.1-beta.6 2024-12-19 19:39:08 +00:00
Will Jones
61a714a459 docs: improve optimization docs (#1957)
* Add `See Also` sections to `cleanup_old_files` and `compact_files` so
readers know they are linked to `optimize`.
* Fixes link to `compact_files` arguments
* Improves formatting of note.
2024-12-19 10:55:11 -08:00
Will Jones
5ddd84cec0 feat: upgrade lance to 0.21.0-beta.5 (#1961) 2024-12-19 10:54:59 -08:00
Will Jones
27ef0bb0a2 ci(rust): check MSRV and upgrade toolchain (#1960)
* Upgrades our toolchain file to v1.83.0, since many dependencies now
have MSRV of 1.81.0
* Reverts Rust changes from #1946 that were working around this in a
dumb way
* Adding an MSRV check
* Reduce MSRV back to 1.78.0
2024-12-19 08:43:25 -08:00
Will Jones
25402ba6ec chore: update lockfiles (#1946) 2024-12-18 08:43:33 -08:00
Lance Release
37c359ed40 Updating package-lock.json 2024-12-13 22:38:04 +00:00
Lance Release
06cdf00987 Bump version: 0.14.1-beta.4 → 0.14.1-beta.5 2024-12-13 22:37:41 +00:00
Lance Release
144b7f5d54 Bump version: 0.17.1-beta.4 → 0.17.1-beta.5 2024-12-13 22:37:13 +00:00
LuQQiu
edc9b9adec chore: bump Lance version to v0.21.0-beta.4 (#1939) 2024-12-13 14:36:13 -08:00
Will Jones
d11b2a6975 ci: fix python beta release to publish to fury (#1937)
We have been publishing all releases--even preview ones--to PyPI. This
was because of a faulty bash if statement. This PR fixes that
conditional.
2024-12-13 14:19:14 -08:00
Will Jones
980aa70e2d feat(python): async-sync feature parity on Table (#1914)
### Changes to sync API
* Updated `LanceTable` and `LanceDBConnection` reprs
* Add `storage_options`, `data_storage_version`, and
`enable_v2_manifest_paths` to sync create table API.
* Add `storage_options` to `open_table` in sync API.
* Add `list_indices()` and `index_stats()` to sync API
* `create_table()` will now create only 1 version when data is passed.
Previously it would always create two versions: 1 to create an empty
table and 1 to add data to it.

### Changes to async API
* Add `embedding_functions` to async `create_table()` API.
* Added `head()` to async API

### Refactors
* Refactor index parameters into dataclasses so they are easier to use
from Python
* Moved most tests to use an in-memory DB so we don't need to create so
many temp directories

Closes #1792
Closes #1932

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-12-13 12:56:44 -08:00
Lance Release
d83e5a0208 Updating package-lock.json 2024-12-13 05:34:30 +00:00
Lance Release
16a6b9ce8f Bump version: 0.14.1-beta.3 → 0.14.1-beta.4 2024-12-13 05:34:01 +00:00
Lance Release
e3c6213333 Bump version: 0.17.1-beta.3 → 0.17.1-beta.4 2024-12-13 05:33:34 +00:00
Weston Pace
00552439d9 feat: upgrade lance to 0.21.0b3 (#1936) 2024-12-12 21:32:59 -08:00
QianZhu
c0ee370f83 docs: improve schema evolution api examples (#1929) 2024-12-12 10:52:06 -08:00
QianZhu
17e4022045 docs: add faq to cloud doc (#1907)
Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-12 10:07:03 -08:00
BubbleCal
c3ebac1a92 feat(node): support FTS options in nodejs (#1934)
Closes #1790

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-12-12 08:19:04 -08:00
Lance Release
10f919a0a9 Updating package-lock.json 2024-12-11 19:18:36 +00:00
Lance Release
8af5476395 Bump version: 0.14.1-beta.2 → 0.14.1-beta.3 2024-12-11 19:18:17 +00:00
Lance Release
bcbbeb7a00 Bump version: 0.17.1-beta.2 → 0.17.1-beta.3 2024-12-11 19:17:54 +00:00
Weston Pace
d6c0f75078 feat: upgrade to lance prerelease 0.21.0b2 (#1933) 2024-12-11 11:17:10 -08:00
Lance Release
e820e356a0 Updating package-lock.json 2024-12-11 17:58:05 +00:00
Lance Release
509286492f Bump version: 0.14.1-beta.1 → 0.14.1-beta.2 2024-12-11 17:57:41 +00:00
Lance Release
f9789ec962 Bump version: 0.17.1-beta.1 → 0.17.1-beta.2 2024-12-11 17:57:18 +00:00
Lei Xu
347515aa51 fix: support list of numpy f16 floats as query vector (#1931)
A user reported on Discord that when using
`table.vector_search([np.float16(1.0), np.float16(2.0), ...])`, it
yields `TypeError: 'numpy.float16' object is not iterable`.
2024-12-10 16:17:28 -08:00
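The failing call from the report, which should work after this fix; the table and dimensionality are illustrative, and the standard `search()` entry point is used here:

```python
import numpy as np
import lancedb

db = lancedb.connect("./demo")
tbl = db.open_table("my_table")

# Previously raised: TypeError: 'numpy.float16' object is not iterable
query = [np.float16(1.0), np.float16(2.0), np.float16(3.0), np.float16(4.0)]
print(tbl.search(query).limit(5).to_pandas())
```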
BubbleCal
3324e7d525 feat: support 4bit PQ (#1916) 2024-12-10 10:36:03 +08:00
Will Jones
ab5316b4fa feat: support offset in remote client (#1923)
Closes https://github.com/lancedb/lancedb/issues/1876
2024-12-09 17:04:18 -08:00
Will Jones
db125013fc docs: better formatting for Node API docs (#1892)
* Sets `"useCodeBlocks": true`
* Adds a post-processing script `nodejs/typedoc_post_process.js` that
puts the parameter description on the same line as the parameter name,
like it is in our Python docs. This makes the text hierarchy clearer in
those sections and also makes the sections shorter.
2024-12-09 17:04:09 -08:00
Will Jones
a43193c99b fix(nodejs): upgrade arrow versions (#1924)
Closes #1626
2024-12-09 15:37:11 -08:00
Lance Release
b70513ca72 Updating package-lock.json 2024-12-09 08:41:09 +00:00
Lance Release
78165801c6 Bump version: 0.14.1-beta.0 → 0.14.1-beta.1 2024-12-09 08:40:55 +00:00
Lance Release
6e5927ce6d Bump version: 0.17.1-beta.0 → 0.17.1-beta.1 2024-12-09 08:40:35 +00:00
BubbleCal
6c1f32ac11 fix: index params are ignored by RemoteTable (#1928)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-12-09 16:37:01 +08:00
Lance Release
4fdf084777 Updating package-lock.json 2024-12-09 04:01:51 +00:00
Lance Release
1fad24fcd8 Bump version: 0.14.0 → 0.14.1-beta.0 2024-12-09 04:01:35 +00:00
Lance Release
6ef20b85ca Bump version: 0.17.0 → 0.17.1-beta.0 2024-12-09 04:01:19 +00:00
LuQQiu
35bacdd57e feat: support azure account name storage options in sync db.connect (#1926)
`db.connect` with an Azure storage account name was supported by the async
connect but not the sync connect.
This adds that functionality.

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-08 20:00:23 -08:00
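A hedged sketch of the sync connect call this enables. The storage-option key shown (`azure_storage_account_name`) is an assumption based on common object-store option names, not quoted from the PR:

```python
import lancedb

db = lancedb.connect(
    "az://my-container/my-db",
    storage_options={
        # Assumed key name; credentials (account key, SAS token, etc.)
        # would be supplied via additional options or the environment.
        "azure_storage_account_name": "myaccount",
    },
)
print(db.table_names())
```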
Will Jones
a5ebe5a6c4 fix: create_scalar_index in cloud (#1922)
Fixes #1920
2024-12-07 19:48:40 -08:00
Will Jones
bf03ad1b4a ci: fix release (#1919)
* Set `private: false` so we can publish new binary packages
* Add missing windows binary reference
2024-12-06 12:51:48 -08:00
Bert
2a9e3e2084 feat(python): support hybrid search in async sdk (#1915)
fixes: https://github.com/lancedb/lancedb/issues/1765

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-06 13:53:15 -05:00
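A rough sketch of a hybrid query through the async SDK. The builder methods shown (`nearest_to` plus `nearest_to_text`) are assumptions about the async query API, and an FTS index on the text column would be required:

```python
import asyncio
import lancedb

async def main():
    db = await lancedb.connect_async("./demo")
    tbl = await db.open_table("docs")

    # Hybrid search: combine a vector query with a full-text query,
    # then let the reranker merge the two result sets.
    results = await (
        tbl.query()
        .nearest_to([0.1, 0.2, 0.3, 0.4])
        .nearest_to_text("space exploration")
        .limit(10)
        .to_pandas()
    )
    print(results)

asyncio.run(main())
```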
Lance Release
f298f15360 Updating package-lock.json 2024-12-06 17:13:37 +00:00
Lance Release
679b031b99 Bump version: 0.14.0-beta.3 → 0.14.0 2024-12-06 17:13:15 +00:00
Lance Release
f50b5d532b Bump version: 0.14.0-beta.2 → 0.14.0-beta.3 2024-12-06 17:13:10 +00:00
Lance Release
fe655a15f0 Bump version: 0.17.0-beta.4 → 0.17.0 2024-12-06 17:12:43 +00:00
Lance Release
9d0af794d0 Bump version: 0.17.0-beta.3 → 0.17.0-beta.4 2024-12-06 17:12:43 +00:00
Will Jones
048a2d10f8 fix: data type parsing (#1918)
Fixes failing test on main
2024-12-06 08:56:07 -08:00
Lei Xu
c78a9849b4 ci: upgrade version of upload-pages-artifact and deploy-pages (#1917)
For
https://github.blog/changelog/2024-12-05-deprecation-notice-github-pages-actions-to-require-artifacts-actions-v4-on-github-com/
2024-12-06 10:45:24 -05:00
BubbleCal
c663085203 feat: support FTS options on RemoteTable (#1807) 2024-12-06 21:49:03 +08:00
Will Jones
8b628854d5 ci: fix nodejs release jobs (#1912)
* Clean up old commented out jobs
* Fix runner issue that caused these failures:
https://github.com/lancedb/lancedb/actions/runs/12186754094
2024-12-05 14:45:10 -08:00
Will Jones
a8d8c17b2a docs(rust): fix doctests (#1913)
* One doctest was running for > 60 seconds in CI, since it was
(unsuccessfully) trying to connect to LanceDB Cloud.
* Fixed the example for `Query::full_text_query()`, which was incorrect.
2024-12-05 14:44:59 -08:00
Will Jones
3c487e5fc7 perf: re-use table instance during write (#1909)
Previously, whenever `Table.add()` was called, we would write and
re-open the underlying dataset. This was bad for performance, as it
reset the table cache and initiated a lot of IO. It also could be the
source of bugs, since we didn't necessarily pass all the necessary
connection options down when re-opening the table.

Closes #1655
2024-12-05 14:44:50 -08:00
Will Jones
d6219d687c chore: simplify arrow json conversion (#1910)
Taking care of a small TODO
2024-12-05 13:14:43 -08:00
Bert
239f725b32 feat(python)!: async-sync feature parity on Connections (#1905)
Closes #1791
Closes #1764
Closes #1897 (Makes this unnecessary)

BREAKING CHANGE: when using an Azure connection string `az://...`, the call
to connect will fail if the Azure storage credentials are not set. This is a
change from the previous behaviour, where the call would fail after connect,
when the user invoked methods on the connection.
2024-12-05 14:54:39 -05:00
Will Jones
5f261cf2d8 feat: upgrade to Lance v0.20.0 (#1908)
Upstream change log:
https://github.com/lancedb/lance/releases/tag/v0.20.0
2024-12-05 10:53:59 -08:00
Will Jones
79eaa52184 feat: schema evolution APIs in all SDKs (#1851)
* Support `add_columns`, `alter_columns`, `drop_columns` in Remote SDK
and async Python
* Add `data_type` parameter to node
* Docs updates
2024-12-04 14:47:50 -08:00
Lei Xu
bd82e1f66d feat(python): add support for Azure OpenAPI SDK (#1906)
Closes #1699
2024-12-04 13:09:38 -08:00
Lance Release
ba34c3bee1 Updating package-lock.json 2024-12-04 01:14:24 +00:00
Lance Release
d4d0873e2b Bump version: 0.14.0-beta.1 → 0.14.0-beta.2 2024-12-04 01:13:55 +00:00
Lance Release
12c7bd18a5 Bump version: 0.17.0-beta.2 → 0.17.0-beta.3 2024-12-04 01:13:18 +00:00
LuQQiu
c6bf6a25d6 feat: add remote db uri path with folder prefix (#1901)
Add support for a folder prefix in the remote database URI, e.g.
db://bucket/path/to/folder/
2024-12-03 16:51:18 -08:00
Weston Pace
c998a47e17 feat: add a pyarrow dataset adapter for LanceDB tables (#1902)
This currently only works for local tables (remote tables cannot be
queried).
This is also exclusive to the sync interface. However, since the pyarrow
dataset interface is synchronous, I am not sure if there is much value in
making an async-wrapping variant.

In addition, I added a `to_batches` method to the base query in the sync
API. This already exists in the async API. In the sync API this PR only
adds support for vector queries and scalar queries and not for hybrid or
FTS queries.
2024-12-03 15:42:54 -08:00
Frank Liu
d8c758513c feat: add multimodal capabilities for Voyage embedder (#1878)
Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-03 10:25:48 -08:00
Will Jones
3795e02ee3 chore: fix ci on main (#1899) 2024-12-02 15:21:18 -08:00
Mr. Doge
c7d424b2f3 ci: aarch64-pc-windows-msvc (#1890)
`npm run pack-build -- -t $TARGET_TRIPLE`
was needed instead of
`npm run pack-build -t $TARGET_TRIPLE`
https://github.com/lancedb/lancedb/pull/1889

some documentation about `*-pc-windows-msvc` cross-compilation (from
alpine):
https://github.com/lancedb/lancedb/pull/1831#issuecomment-2497156918

Only `arm64` in the `matrix` config is used, since the `x86_64` build on
`runs-on: windows-2022` is already working.
2024-12-02 11:17:37 -08:00
Bert
1efb9914ee ci: fix failing python release (#1896)
Fix failing python release for windows:
https://github.com/lancedb/lancedb/actions/runs/12019637086/job/33506642964

Also updates pkginfo to fix twine build as suggested here:
https://github.com/pypi/warehouse/issues/15611
failing release:
https://github.com/lancedb/lancedb/actions/runs/12091344173/job/33719622146
2024-12-02 11:05:29 -08:00
Lance Release
83e26a231e Updating package-lock.json 2024-11-29 22:46:45 +00:00
Lance Release
72a17b2de4 Bump version: 0.14.0-beta.0 → 0.14.0-beta.1 2024-11-29 22:46:20 +00:00
Lance Release
4231925476 Bump version: 0.17.0-beta.1 → 0.17.0-beta.2 2024-11-29 22:45:55 +00:00
Lance Release
84a6693294 Bump version: 0.17.0-beta.0 → 0.17.0-beta.1 2024-11-29 18:16:02 +00:00
Ryan Green
6c2d4c10a4 feat: support remote options for remote lancedb connection (#1895)
* Support subset of storage options as remote options
* Send Azure storage account name via HTTP header
2024-11-29 14:08:13 -03:30
Ryan Green
d914722f79 Revert "feat: support remote options for remote lancedb connection. Send Azure storage account name via HTTP header."
This reverts commit a6e4034dba.
2024-11-29 11:06:18 -03:30
Ryan Green
a6e4034dba feat: support remote options for remote lancedb connection. Send Azure storage account name via HTTP header. 2024-11-29 11:05:04 -03:30
QianZhu
2616a50502 fix: test errors after setting default limit (#1891) 2024-11-26 16:03:16 -08:00
LuQQiu
7b5e9d824a fix: dynamodb external manifest drop table (#1866)
second pr of https://github.com/lancedb/lancedb/issues/1812
2024-11-26 13:20:48 -08:00
QianZhu
3b173e7cb9 fix: default limit for remote nodejs client (#1886)
https://github.com/lancedb/lancedb/issues/1804
2024-11-26 11:01:25 -08:00
Mr. Doge
d496ab13a0 ci: linux: specify target triple for neon pack-build (vectordb) (#1889)
Fixes the issue where all `neon pack-build` packages are named
`vectordb-linux-x64-musl-*.tgz` even when cross-compiling.

Adds a second parameter:
`TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}`
`npm run pack-build -- -t $TARGET_TRIPLE`
2024-11-26 10:57:17 -08:00
Will Jones
69d9beebc7 docs: improve style and introduction to Python API docs (#1885)
I found the signatures difficult to read and the parameter section not
very space efficient.
2024-11-26 09:17:35 -08:00
Bert
d32360b99d feat: support overwrite and exist_ok mode for remote create_table (#1883)
Support passing modes "overwrite" and "exist_ok" when creating a remote
table.
2024-11-26 11:38:36 -05:00
Will Jones
9fa08bfa93 ci: use correct runner for vectordb (#1881)
We already do this for `gnu` builds; we should also do it for `musl`
builds.
2024-11-25 16:17:10 -08:00
LuQQiu
d6d9cb7415 feat: bump lance to 0.20.0b3 (#1882)
Bump lance version.
Upstream change log:
https://github.com/lancedb/lance/releases/tag/v0.20.0-beta.3
2024-11-25 16:15:44 -08:00
Lance Release
990d93f553 Updating package-lock.json 2024-11-25 22:06:39 +00:00
Lance Release
0832cba3c6 Bump version: 0.13.1-beta.0 → 0.14.0-beta.0 2024-11-25 22:06:14 +00:00
Lance Release
38b0d91848 Bump version: 0.16.1-beta.0 → 0.17.0-beta.0 2024-11-25 22:05:49 +00:00
Will Jones
6826039575 fix(python): run remote SDK futures in background thread (#1856)
Users who call the remote SDK from code that uses futures (either
`ThreadPoolExecutor` or `asyncio`) can get odd errors like:

```
Traceback (most recent call last):
  File "/usr/lib/python3.12/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
RuntimeError: cannot enter context: <_contextvars.Context object at 0x7cfe94cdc900> is already entered
```

This PR fixes that by executing all LanceDB futures in a dedicated
thread pool running on a background thread. That way, it doesn't
interact with their threadpool.
2024-11-25 13:12:47 -08:00
QianZhu
3e9321fc40 docs: improve scalar index and filtering (#1874)
Improved the docs on building a scalar index and on pre-/post-filtering.

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-11-25 11:30:57 -08:00
Lei Xu
2ded17452b fix(python)!: handle bad openai embeddings gracefully (#1873)
BREAKING-CHANGE: change Pydantic Vector field to be nullable by default.
Closes #1577
2024-11-23 13:33:52 -08:00
Mr. Doge
dfd9d2ac99 ci: musl missing node/package.json targets (#1870)
I missed targets when manually merging the draft PR onto the updated main.
I was copying from:
https://github.com/lancedb/lancedb/pull/1816/files#diff-d6e19f28e97cfeda63a9bd9426f10f1d2454eeed375ee1235e8ba842ceeb46a0

fixes:
error: Rust target x86_64-unknown-linux-musl not found in package.json.
2024-11-22 10:40:59 -08:00
Lance Release
162880140e Updating package-lock.json 2024-11-21 21:53:25 +00:00
Lance Release
99d9ced6d5 Bump version: 0.13.0 → 0.13.1-beta.0 2024-11-21 21:53:01 +00:00
Lance Release
96933d7df8 Bump version: 0.16.0 → 0.16.1-beta.0 2024-11-21 21:52:39 +00:00
Lei Xu
d369233b3d feat: bump lance to 0.20.0b2 (#1865)
Bump lance version.
Upstream change log:
https://github.com/lancedb/lance/releases/tag/v0.20.0-beta.2
2024-11-21 13:16:59 -08:00
QianZhu
43a670ed4b fix: limit docstring change (#1860) 2024-11-21 10:50:50 -08:00
Bert
cb9a00a28d feat: add list_versions to typescript, rust and remote python sdks (#1850)
Will require an update to the lance dependency to bring in this change,
which makes the version serializable:
https://github.com/lancedb/lance/pull/3143
2024-11-21 13:35:14 -05:00
Max Epstein
72af977a73 fix(CohereReranker): updated default model_name param to newest v3 (#1862) 2024-11-21 09:02:49 -08:00
Bert
7cecb71df0 feat: support for checkout and checkout_latest in remote sdks (#1863) 2024-11-21 11:28:46 -05:00
QianZhu
285071e5c8 docs: full-text search doc update (#1861)
Co-authored-by: BubbleCal <bubble-cal@outlook.com>
2024-11-20 21:07:30 -08:00
QianZhu
114866fbcf docs: OSS doc improvement (#1859)
OSS doc improvement - HNSW index parameter explanation and others.

---------

Co-authored-by: BubbleCal <bubble-cal@outlook.com>
2024-11-20 17:51:11 -08:00
Frank Liu
5387c0e243 docs: add Voyage models to sidebar (#1858) 2024-11-20 14:20:14 -08:00
Mr. Doge
53d1535de1 ci: musl x64,arm64 (#1853)
4 untested artifacts at:
https://github.com/FuPeiJiang/lancedb/actions/runs/11926579058
node-native-linux-aarch64-musl 22.6 MB
node-native-linux-x86_64-musl 23.6 MB
nodejs-native-linux-aarch64-musl 26.7 MB
nodejs-native-linux-x86_64-musl 27 MB

this follows the same process as:
https://github.com/lancedb/lancedb/pull/1816#issuecomment-2484816669

Closes #1388
Closes #1107

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-11-20 10:53:19 -08:00
BubbleCal
b2f88f0b29 feat: support to specify ef search param (#1844)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-19 23:12:25 +08:00
fzowl
f2e3989831 docs: voyageai embedding in the index (#1813)
The code to support VoyageAI embedding and rerank models was added in
the https://github.com/lancedb/lancedb/pull/1799 PR.
Some of the documentation changes were also made there; this PR adds the
VoyageAI embedding doc link to the index page.

These are my first PRs in lancedb, and while I checked the
documentation/code structure, I might have missed something important. Please
let me know if any changes are required!
2024-11-18 14:34:16 -08:00
Emmanuel Ferdman
83ae52938a docs: update migration reference (#1837)
# PR Summary
PR fixes the `migration.md` reference in `docs/src/guides/tables.md`. On
the way, it also fixes some typos found in that document.

Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2024-11-18 14:33:32 -08:00
Lei Xu
267aa83bf8 feat(python): check vector query is not None (#1847)
Fix the type hints of `nearest_to` method, and raise `ValueError` when
the input is None
2024-11-18 14:15:22 -08:00
Will Jones
cc72050206 chore: update package locks (#1845)
Also ran `npm audit`.
2024-11-18 13:44:06 -08:00
Will Jones
72543c8b9d test(python): test with_row_id in sync query (#1835)
Also remove weird `MockTable` fixture.
2024-11-18 11:32:52 -08:00
Will Jones
97d6210c33 ci: remove invalid references (#1834)
Fix release job
2024-11-18 11:32:44 -08:00
Ho Kim
a3d0c27b0a feat: add support for rustls (#1842)
Hello, this is a simple PR that adds support for the `rustls-tls` feature.

`reqwest`'s default TLS backend, `default-tls`, remains enabled by default,
to avoid any side effects.

The user can use `rustls-tls` like this:

```toml
lancedb = { version = "*", default-features = false, features = ["rustls-tls"] }
```
2024-11-18 10:36:20 -08:00
BubbleCal
b23d8abcdd docs: introduce incremental indexing for FTS (#1789)
Don't merge this before https://github.com/lancedb/lancedb/pull/1769 is
merged.

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-18 20:21:28 +08:00
Rob Meng
e3ea5cf9b9 chore: bump lance to 0.19.3 (#1839) 2024-11-16 14:57:52 -05:00
Lance Release
4f8b086175 Updating package-lock.json 2024-11-15 20:18:16 +00:00
Lance Release
72330fb759 Bump version: 0.13.0-beta.3 → 0.13.0 2024-11-15 20:17:59 +00:00
Lance Release
e3b2c5f438 Bump version: 0.13.0-beta.2 → 0.13.0-beta.3 2024-11-15 20:17:55 +00:00
Lance Release
66a881b33a Bump version: 0.16.0-beta.2 → 0.16.0 2024-11-15 20:17:34 +00:00
Lance Release
a7515d6ee2 Bump version: 0.16.0-beta.1 → 0.16.0-beta.2 2024-11-15 20:17:34 +00:00
Will Jones
587c0824af feat: flexible null handling and insert subschemas in Python (#1827)
* Test that we can insert subschemas (omit nullable columns) in Python.
* More work is needed to support this in Node. See:
https://github.com/lancedb/lancedb/issues/1832
* Test that we can insert data with nullable schema but no nulls in
non-nullable schema.
* Add `"null"` option for `on_bad_vectors` where we fill with null if
the vector is bad.
* Make null values not considered bad if the field itself is nullable.
2024-11-15 11:33:00 -08:00
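A small sketch of the new `"null"` option for `on_bad_vectors` described above; the rest of the call is illustrative and assumes the vector field is nullable:

```python
import lancedb

db = lancedb.connect("./demo")

data = [
    {"id": 1, "vector": [1.0, 2.0, 3.0, 4.0]},
    {"id": 2, "vector": [1.0, float("nan"), 3.0, 4.0]},  # a "bad" vector
]

# Instead of raising or filling with a constant, store the bad vector as null.
tbl = db.create_table("vectors", data, on_bad_vectors="null", mode="overwrite")
print(tbl.to_pandas())
```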
Will Jones
b38a4269d0 fix(node): make openai and huggingface optional dependencies (#1809)
BREAKING CHANGE: openai and huggingface now have separate entrypoints.

Closes [#1624](https://github.com/lancedb/lancedb/issues/1624)
2024-11-14 15:04:35 -08:00
Will Jones
119d88b9db ci: disable Windows Arm64 until the release builds work (#1833)
Started to actually fix this, but it was taking too long
https://github.com/lancedb/lancedb/pull/1831
2024-11-14 15:04:23 -08:00
StevenSu
74f660d223 feat: add new feature, add amazon bedrock embedding function (#1788)
Add amazon bedrock embedding function to rust sdk.

1. Add BedrockEmbeddingModel (lancedb/src/embeddings/bedrock.rs)
2. Add example lancedb/examples/bedrock.rs
2024-11-14 11:04:59 -08:00
Lance Release
b2b0979b90 Updating package-lock.json 2024-11-14 04:42:38 +00:00
Lance Release
ee2a40b182 Bump version: 0.13.0-beta.1 → 0.13.0-beta.2 2024-11-14 04:42:19 +00:00
Lance Release
4ca0b15354 Bump version: 0.16.0-beta.0 → 0.16.0-beta.1 2024-11-14 04:41:56 +00:00
Rob Meng
d8c217b47d chore: bump lance to 0.19.2 (#1829) 2024-11-13 23:23:02 -05:00
Rob Meng
b724b1a01f feat: support remote empty query (#1828)
Support sending empty query types to remote LanceDB. Also include offset
and limit, which were previously omitted.
2024-11-13 23:04:52 -05:00
Will Jones
abd75e0ead feat: search multiple query vectors as one query (#1811)
Allows users to pass multiple query vectors as part of a single query
plan. This just runs the queries in parallel without any further
optimization. It's mostly a convenience.

Previously, I think this was only handled by the sync Python remote API.
This makes it common across all SDKs.

Closes https://github.com/lancedb/lancedb/issues/1803

```python
>>> import lancedb
>>> import asyncio
>>> 
>>> async def main():
...     db = await lancedb.connect_async("./demo")
...     table = await db.create_table("demo", [{"id": 1, "vector": [1, 2, 3]}, {"id": 2, "vector": [4, 5, 6]}], mode="overwrite")
...     return await table.query().nearest_to([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [4.0, 5.0, 6.0]]).limit(1).to_pandas()
... 
>>> asyncio.run(main())
   query_index  id           vector  _distance
0            2   2  [4.0, 5.0, 6.0]        0.0
1            1   2  [4.0, 5.0, 6.0]        0.0
2            0   1  [1.0, 2.0, 3.0]        0.0
```
2024-11-13 16:05:16 -08:00
Will Jones
0fd8a50bd7 ci(node): run examples in CI (#1796)
This is done as setup for a PR that will fix the OpenAI dependency
issue.

 * [x] FTS examples
 * [x] Setup mock openai
 * [x] Ran `npm audit fix`
 * [x] sentences embeddings test
 * [x] Double check formatting of docs examples
2024-11-13 11:10:56 -08:00
Umut Hope YILDIRIM
9f228feb0e ci: remove cache to fix build issues on windows arm runner (#1820) 2024-11-13 09:27:10 -08:00
Ayush Chaurasia
90e9c52d0a docs: update hybrid search example to latest langchain (#1824)
Co-authored-by: qzhu <qian@lancedb.com>
2024-11-12 20:06:25 -08:00
Will Jones
68974a4e06 ci: add index URL to fix failing docs build (#1823) 2024-11-12 16:54:22 -08:00
Lei Xu
4c9bab0d92 fix: use pandas with pydantic embedding column (#1818)
* Make Pandas `DataFrame` work with embedding function + subset of
columns
* Make `lancedb.create_table()` work with embedding function
2024-11-11 14:48:56 -08:00
QianZhu
5117aecc38 docs: search param explanation for OSS doc (#1815)
![Screenshot 2024-11-09 at 11 09
14 AM](https://github.com/user-attachments/assets/2aeba016-aeff-4658-85c6-8640285ba0c9)
2024-11-11 11:57:17 -08:00
Umut Hope YILDIRIM
729718cb09 fix: arm64 runner proto already installed bug (#1810)
https://github.com/lancedb/lancedb/actions/runs/11748512661/job/32732745458
2024-11-08 14:49:37 -08:00
Umut Hope YILDIRIM
b1c84e0bda feat: added lancedb and vectordb release ci for win32-arm64-msvc npmjs only (#1805) 2024-11-08 11:40:57 -08:00
fzowl
cbbc07d0f5 feat: voyageai support (#1799)
Adding VoyageAI embedding and rerank support
2024-11-09 00:51:20 +05:30
Kursat Aktas
21021f94ca docs: introducing LanceDB Guru on Gurubase.io (#1797)
Hello team,

I'm the maintainer of [Anteon](https://github.com/getanteon/anteon). We
have created Gurubase.io with the mission of building a centralized,
open-source tool-focused knowledge base. Essentially, each "guru" is
equipped with custom knowledge to answer user questions based on
collected data related to that tool.

I wanted to update you that I've manually added the [LanceDB
Guru](https://gurubase.io/g/lancedb) to Gurubase. LanceDB Guru uses the
data from this repo and data from the
[docs](https://lancedb.github.io/lancedb/) to answer questions by
leveraging the LLM.

In this PR, I showcased the "LanceDB Guru", which highlights that
LanceDB now has an AI assistant available to help users with their
questions. Please let me know your thoughts on this contribution.

Additionally, if you want me to disable LanceDB Guru in Gurubase, just
let me know; that's totally fine.

Signed-off-by: Kursat Aktas <kursat.ce@gmail.com>
2024-11-08 10:55:22 -08:00
BubbleCal
0ed77fa990 chore: impl Debug & Clone for Index params (#1808)
We don't really need these traits in lancedb, but all fields in `Index`
implement the two traits, so implement them to keep open the possibility
of using `Index` elsewhere.

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-09 01:07:43 +08:00
BubbleCal
4372c231cd feat: support optimize indices in sync API (#1769)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-08 08:48:07 -08:00
Umut Hope YILDIRIM
fa9ca8f7a6 ci: arm64 windows build support (#1770)
Adds support for 'aarch64-pc-windows-msvc'.
2024-11-06 15:34:23 -08:00
Lance Release
2a35d24ee6 Updating package-lock.json 2024-11-06 17:26:36 +00:00
Lance Release
dd9ce337e2 Bump version: 0.13.0-beta.0 → 0.13.0-beta.1 2024-11-06 17:26:17 +00:00
Will Jones
b9921d56cc fix(node): update default log level to warn (#1801)
🤦
2024-11-06 09:13:53 -08:00
Lance Release
0cfd9ed18e Updating package-lock.json 2024-11-05 23:21:50 +00:00
Lance Release
975398c3a8 Bump version: 0.12.0 → 0.13.0-beta.0 2024-11-05 23:21:32 +00:00
Lance Release
08d5f93f34 Bump version: 0.15.0 → 0.16.0-beta.0 2024-11-05 23:21:13 +00:00
Will Jones
91cab3b556 feat(python): transition Python remote sdk to use Rust implementation (#1701)
* Replaces Python implementation of Remote SDK with Rust one.
* Drops dependency on `attrs` and `cachetools`. Makes `requests` an
optional dependency used only for embeddings feature.
* Adds dependency on `nest-asyncio`. This was required to get hybrid
search working.
* Deprecate `request_thread_pool` parameter. We now use the tokio
threadpool.
* Stop caching the `schema` on a remote table. Schema is mutable and
there's no mechanism in place to invalidate the cache.
* Removed the client-side resolution of the vector column. We should
already be resolving this server-side.
2024-11-05 13:44:39 -08:00
Will Jones
c61bfc3af8 chore: update package locks (#1798) 2024-11-05 13:28:59 -08:00
Bert
4e8c7b0adf fix: serialize vectordb client errors as json (#1795) 2024-11-05 14:16:25 -05:00
Weston Pace
26f4a80e10 feat: upgrade to lance 0.19.2-beta.3 (#1794) 2024-11-05 06:43:41 -08:00
Will Jones
3604d20ad3 feat(python,node): support with_row_id in Python and remote (#1784)
Needed to support hybrid search in Remote SDK.
2024-11-04 11:25:45 -08:00
Gagan Bhullar
9708d829a9 fix: explain plan options (#1776)
PR fixes #1768
2024-11-04 10:25:34 -08:00
Will Jones
059c9794b5 fix(rust): fix update, open_table, fts search in remote client (#1785)
* `open_table` uses `POST` not `GET`
* `update` uses `predicate` key not `only_if`
* For FTS search, vector cannot be omitted. It must be passed as empty.
* Added logging of JSON request bodies to debug level logging.
2024-11-04 08:27:55 -08:00
Will Jones
15ed7f75a0 feat(python): support post filter on FTS (#1783) 2024-11-01 10:05:05 -07:00
Will Jones
96181ab421 feat: fast_search in Python and Node (#1623)
Sometimes it is acceptable to users to only search indexed data and skip
any new un-indexed data. For example, if un-indexed data will be shortly
indexed and they don't mind the delay. In these cases, we can save a lot
of CPU time in search, and provide better latency. Users can activate
this on queries using `fast_search()`.
2024-11-01 09:29:09 -07:00
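A hedged sketch of activating this from the Python sync API, using the `fast_search()` method named above:

```python
import lancedb

db = lancedb.connect("./demo")
tbl = db.open_table("my_table")

# Only search data covered by the vector index; skip un-indexed rows for
# lower latency, at the cost of possibly missing recent inserts.
results = tbl.search([0.1, 0.2, 0.3, 0.4]).fast_search().limit(10).to_list()
print(results)
```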
Will Jones
f3fc339ef6 fix(rust): fix delete, update, query in remote SDK (#1782)
Fixes several minor issues with Rust remote SDK:

* Delete uses `predicate` not `filter` as parameter
* Update does not return the row value in remote SDK
* Update takes tuples
* Content type returned by query node is wrong, so we shouldn't validate
it. https://github.com/lancedb/sophon/issues/2742
* Data returned by query endpoint is actually an Arrow IPC file, not IPC
stream.
2024-10-31 15:22:09 -07:00
Will Jones
113cd6995b fix: index_stats works for FTS indices (#1780)
When running `index_stats()` for an FTS index, users would get the
deserialization error:

```
InvalidInput { message: "error deserializing index statistics: unknown variant `Inverted`, expected one of `IvfPq`, `IvfHnswPq`, `IvfHnswSq`, `BTree`, `Bitmap`, `LabelList`, `FTS` at line 1 column 24" }
```
2024-10-30 11:33:49 -07:00
Lance Release
02535bdc88 Updating package-lock.json 2024-10-29 22:16:51 +00:00
Lance Release
facc7d61c0 Bump version: 0.12.0-beta.0 → 0.12.0 2024-10-29 22:16:32 +00:00
Lance Release
f947259f16 Bump version: 0.11.1-beta.1 → 0.12.0-beta.0 2024-10-29 22:16:27 +00:00
Lance Release
e291212ecf Bump version: 0.15.0-beta.0 → 0.15.0 2024-10-29 22:16:05 +00:00
Lance Release
edc6445f6f Bump version: 0.14.1-beta.1 → 0.15.0-beta.0 2024-10-29 22:16:05 +00:00
Will Jones
a324f4ad7a feat(node): enable logging and show full errors (#1775)
This exposes the `LANCEDB_LOG` environment variable in node, so that
users can now turn on logging.

In addition, fixes a bug where only the top-level error from Rust was
being shown. This PR makes sure the full error chain is included in the
error message. In the future, we will improve this so the error chain is
set on the [cause](https://nodejs.org/api/errors.html#errorcause)
property of JS errors https://github.com/lancedb/lancedb/issues/1779

Fixes #1774
2024-10-29 15:13:34 -07:00
Weston Pace
55104c5bae feat: allow distance type (metric) to be specified during hybrid search (#1777) 2024-10-29 13:51:18 -07:00
Rithik Kumar
d71df4572e docs: revamp langchain integration page (#1773)
Before - 
<img width="1030" alt="Screenshot 2024-10-28 132932"
src="https://github.com/user-attachments/assets/63f78bfa-949e-473e-ab22-0c692577fa3e">


After - 
<img width="1037" alt="Screenshot 2024-10-28 132727"
src="https://github.com/user-attachments/assets/85a12f6c-74f0-49ba-9f1a-fe77ad125704">
2024-10-29 22:55:50 +05:30
Rithik Kumar
aa269199ad docs: fix archived examples links (#1751) 2024-10-29 22:55:27 +05:30
BubbleCal
32fdcf97db feat!: upgrade lance to 0.19.1 (#1762)
BREAKING CHANGE: default tokenizer no longer does stemming or stop-word
removal. Users should explicitly turn that option on in the future.

- upgrade lance to 0.19.1
- update the FTS docs
- update the FTS API

Upstream change notes:
https://github.com/lancedb/lance/releases/tag/v0.19.1

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
Co-authored-by: Will Jones <willjones127@gmail.com>
2024-10-29 09:03:52 -07:00
Ryan Green
b9802a0d23 Revert "fix: error during deserialization of "INVERTED" index type"
This reverts commit 2ea5939f85.
2024-10-25 14:46:47 -02:30
Ryan Green
2ea5939f85 fix: error during deserialization of "INVERTED" index type 2024-10-25 14:40:14 -02:30
Lance Release
04e1f1ee4c Updating package-lock.json 2024-10-23 00:34:22 +00:00
Lance Release
bbc588e27d Bump version: 0.11.1-beta.0 → 0.11.1-beta.1 2024-10-23 00:34:01 +00:00
Lance Release
5517e102c3 Bump version: 0.14.1-beta.0 → 0.14.1-beta.1 2024-10-23 00:33:40 +00:00
Will Jones
82197c54e4 perf: eliminate iop in refresh (#1760)
Closes #1741

If we checkout a version, we need to make a `HEAD` request to get the
size of the manifest. The new `checkout_latest()` code path can skip
this IOP. This makes the refresh slightly faster.
2024-10-18 13:40:24 -07:00
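A brief sketch of the two code paths being compared, using the async table API (`checkout` / `checkout_latest` as named above; the rest is illustrative):

```python
import asyncio
import lancedb

async def main():
    db = await lancedb.connect_async("./demo")
    tbl = await db.open_table("my_table")

    # Pinning an explicit version needs a HEAD request for the manifest size.
    await tbl.checkout(3)

    # Returning to the latest version can now skip that extra IOP.
    await tbl.checkout_latest()
    print(await tbl.version())

asyncio.run(main())
```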
Will Jones
48f46d4751 docs(node): update indexStats signature and regenerate docs (#1742)
`indexStats` still referenced UUID even though in
https://github.com/lancedb/lancedb/pull/1702 we changed it to take name
instead.
2024-10-18 10:53:28 -07:00
Lance Release
437316cbbc Updating package-lock.json 2024-10-17 18:59:18 +00:00
Lance Release
d406eab2c8 Bump version: 0.11.0 → 0.11.1-beta.0 2024-10-17 18:59:01 +00:00
Lance Release
1f41101897 Bump version: 0.14.0 → 0.14.1-beta.0 2024-10-17 18:58:45 +00:00
Will Jones
99e4db0d6a feat(rust): allow add_embedding on create_empty_table (#1754)
Fixes https://github.com/lancedb/lancedb/issues/1750
2024-10-17 11:58:15 -07:00
Will Jones
46486d4d22 fix: list_indices can handle fts indexes (#1753)
Fixes #1752
2024-10-16 10:39:40 -07:00
Weston Pace
f43cb8bba1 feat: upgrade lance to 0.18.3 (#1748) 2024-10-16 00:48:31 -07:00
James Wu
38eb05f297 fix(python): remove dependency on retry package (#1749)
## user story

fixes https://github.com/lancedb/lancedb/issues/1480

https://github.com/invl/retry has not had an update in 8 years; one of
its sub-dependencies via requirements.txt
(https://github.com/pytest-dev/py) is no longer maintained and has a
high severity vulnerability (CVE-2022-42969).

retry is only used for a single function in the python codebase for a
deprecated helper function `with_embeddings`, which was created for an
older tutorial (https://github.com/lancedb/lancedb/pull/12) [but is now
deprecated](https://lancedb.github.io/lancedb/embeddings/legacy/).

## changes

I backported a limited range of the functionality of the `@retry()`
decorator directly into lancedb so that we no longer have a dependency
on the `retry` package.

## tests

```
/Users/james/src/lancedb/python $ ruff check .
All checks passed!
/Users/james/src/lancedb/python $ pytest python/tests/test_embeddings.py
python/tests/test_embeddings.py .......s....                                                                                                                        [100%]
================================================================ 11 passed, 1 skipped, 2 warnings in 7.08s ================================================================
```
2024-10-15 15:13:57 -07:00
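Not the backported code itself, but a minimal sketch of the kind of `@retry()` decorator being described; names and defaults are illustrative:

```python
import functools
import logging
import random
import time


def retry(tries=5, delay=0.1, max_delay=2.0, backoff=2.0, jitter=0.1):
    """Retry the wrapped function with exponential backoff."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            attempt, wait = 0, delay
            while True:
                try:
                    return func(*args, **kwargs)
                except Exception as err:
                    attempt += 1
                    if attempt >= tries:
                        raise
                    logging.warning("retrying %s after error: %s", func.__name__, err)
                    time.sleep(wait + random.uniform(0, jitter))
                    wait = min(wait * backoff, max_delay)

        return wrapper

    return decorator


@retry(tries=3)
def flaky_call():
    ...
```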
Ryan Green
679a70231e feat: allow fast_search on python remote table (#1747)
Add `fast_search` parameter to query builder and remote table to support
skipping flat search in remote search
2024-10-14 14:39:54 -06:00
Dominik Weckmüller
e7b56b7b2a docs: add permanent link chain icon to headings without impacting SEO (#1746)
I noticed that there are no permanent links in the docs. Adapted the
current best solution from
https://github.com/squidfunk/mkdocs-material/discussions/3535. It adds a
GitHub-like chain icon to the left of each heading (right on mobile) and
does not impact SEO, unlike the default solution with the pilcrow char `¶`,
which might show up in Google search results.

<img alt="image"
src="https://user-images.githubusercontent.com/182589/153004627-6df3f8e9-c747-4f43-bd62-a8dabaa96c3f.gif">
2024-10-14 11:58:23 -07:00
Olzhas Alexandrov
5ccd0edec2 docs: clarify infrastructure requirements for S3 Express One Zone (#1745) 2024-10-11 14:06:28 -06:00
Will Jones
9c74c435e0 ci: update package lock (#1740) 2024-10-09 15:14:08 -06:00
Lance Release
6de53ce393 Updating package-lock.json 2024-10-09 18:54:29 +00:00
Lance Release
9f42fbba96 Bump version: 0.11.0-beta.2 → 0.11.0 2024-10-09 18:54:09 +00:00
Lance Release
d892f7a622 Bump version: 0.11.0-beta.1 → 0.11.0-beta.2 2024-10-09 18:54:04 +00:00
Lance Release
515ab5f417 Bump version: 0.14.0-beta.1 → 0.14.0 2024-10-09 18:53:35 +00:00
Lance Release
8d0055fe6b Bump version: 0.14.0-beta.0 → 0.14.0-beta.1 2024-10-09 18:53:34 +00:00
Will Jones
5f9d8509b3 feat: upgrade Lance to v0.18.2 (#1737)
Includes changes from v0.18.1 and v0.18.2:

* [v0.18.1 change
log](https://github.com/lancedb/lance/releases/tag/v0.18.1)
* [v0.18.2 change
log](https://github.com/lancedb/lance/releases/tag/v0.18.2)

Closes #1656
Closes #1615
Closes #1661
2024-10-09 11:46:46 -06:00
Will Jones
f3b6a1f55b feat(node): bind remote SDK to rust implementation (#1730)
Closes [#2509](https://github.com/lancedb/sophon/issues/2509)

This is the Node.js analogue of #1700
2024-10-09 11:46:27 -06:00
Will Jones
aff25e3bf9 fix(node): add native packages to bump version (#1738)
We weren't bumping the version, so when users downloaded our package
from npm, they were getting the old binaries.
2024-10-08 23:03:53 -06:00
Will Jones
8509f73221 feat: better errors for remote SDK (#1722)
* Adds nicer errors to remote SDK, that expose useful properties like
`request_id` and `status_code`.
* Makes sure the Python tracebacks print nicely by mapping the `source`
field from a Rust error to the `__cause__` field.
2024-10-08 22:21:13 -06:00
Will Jones
607476788e feat(rust): list_indices in remote SDK (#1726)
Implements `list_indices`.

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-10-08 21:45:21 -06:00
Gagan Bhullar
4d458d5829 feat(python): drop support for dictionary in Table.add (#1725)
PR closes #1706
2024-10-08 20:41:08 -06:00
Will Jones
e61ba7f4e2 fix(rust): remote SDK bugs (#1723)
A few bugs uncovered by integration tests:

* We didn't prepend `/v1` to the Table endpoint URLs
* `/create_index` takes `metric_type` not `distance_type`. (This is also
an error in the OpenAPI docs.)
* `/create_index` expects the `metric_type` parameter to always be
lowercase.
* We were writing an IPC file message when we were supposed to send an
IPC stream message.
2024-10-04 08:43:07 -07:00
Prashant Dixit
408bc96a44 fix: broken notebook link fix (#1721) 2024-10-03 16:15:27 +05:30
Rithik Kumar
6ceaf8b06e docs: add langchainjs writing assistant (#1719) 2024-10-03 00:55:00 +05:30
Prashant Dixit
e2ca8daee1 docs: salesforce's sfr rag (#1717)
This PR adds Salesforce's newly released SFR RAG
2024-10-02 21:15:24 +05:30
Will Jones
f305f34d9b feat(python): bind python async remote client to rust client (#1700)
Closes [#1638](https://github.com/lancedb/lancedb/issues/1638)

This just binds the Python Async client to the Rust remote client.
2024-10-01 15:46:59 -07:00
Will Jones
a416925ca1 feat(rust): client configuration for remote client (#1696)
This PR ports over advanced client configuration present in the Python
`RestfulLanceDBClient` to the Rust one. The goal is to have feature
parity so we can replace the implementation.

* [x] Request timeout
* [x] Retries with backoff
* [x] Request id generation
* [x] User agent (with default tied to library version)
* [x] Table existence cache
* [ ] Deferred: ~Request id customization (should this just pick up OTEL
trace ids?)~

Fixes #1684
2024-10-01 10:22:53 -07:00
Will Jones
2c4b07eb17 feat(python): merge_insert in async Python (#1707)
Fixes #1401
2024-10-01 10:06:52 -07:00
Will Jones
33b402c861 fix: list_indices returns correct index type (#1715)
Fixes https://github.com/lancedb/lancedb/issues/1711

Doesn't address this https://github.com/lancedb/lance/issues/2039

Instead we load the index statistics, which seems to contain the index
type. However, this involves more IO than previously. I'm not sure
whether we care that much. If we do, we can fix that upstream Lance
issue.
2024-10-01 09:16:18 -07:00
Rithik Kumar
7b2cdd2269 docs: revamp Voxel51 v1 (#1714)
Revamp Voxel51

![image](https://github.com/user-attachments/assets/7ac34457-74ec-4654-b1d1-556e3d7357f5)
2024-10-01 11:59:03 +05:30
Akash Saravanan
d6b5054778 feat(python): add support for trust_remote_code in hf embeddings (#1712)
Resolves #1709. Adds `trust_remote_code` as a parameter to the
`TransformersEmbeddingFunction` class with a default of False. Updated
relevant documentation with the same.
2024-10-01 01:06:28 +05:30
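A hedged example of passing the new flag through the embedding registry. The registry key `"huggingface"` and the model name are assumptions used for illustration:

```python
from lancedb.embeddings import get_registry

# trust_remote_code defaults to False; opt in only for models whose
# custom code you trust.
model = (
    get_registry()
    .get("huggingface")
    .create(name="jinaai/jina-embeddings-v2-small-en", trust_remote_code=True)
)
print(model.ndims())
```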
Lei Xu
f0e7f5f665 ci: change to use github runner (#1708)
Use github runner
2024-09-27 17:53:05 -07:00
Will Jones
f958f4d2e8 feat: remote index stats (#1702)
BREAKING CHANGE: the return value of `index_stats` method has changed
and all `index_stats` APIs now take index name instead of UUID. Also
several deprecated index statistics methods were removed.

* Removes deprecated methods for individual index statistics
* Aligns public `IndexStatistics` struct with API response from LanceDB
Cloud.
* Implements `index_stats` for remote Rust SDK and Python async API.
2024-09-27 12:10:00 -07:00
Will Jones
c1d9d6f70b feat(rust): remote rename table (#1703)
Adds rename to remote table. Pre-requisite for
https://github.com/lancedb/lancedb/pull/1701
2024-09-27 09:37:54 -07:00
Will Jones
1778219ea9 feat(rust): remote client query and create_index endpoints (#1663)
Support for `query` and `create_index`.

Closes [#2519](https://github.com/lancedb/sophon/issues/2519)
2024-09-27 09:00:22 -07:00
Rob Meng
ee6c18f207 feat: expose underlying dataset uri of the table (#1704) 2024-09-27 10:20:02 -04:00
rjrobben
e606a455df fix(EmbeddingFunction): modify safe_model_dump to explicitly exclude class fields with underscore (#1688)
Resolve issue #1681

---------

Co-authored-by: rjrobben <rjrobben123@gmail.com>
2024-09-25 11:53:49 -07:00
Gagan Bhullar
8f0eb34109 fix: hnsw default partitions (#1667)
PR fixes #1662

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-25 09:16:03 -07:00
Ayush Chaurasia
2f2721e242 feat(python): allow explicit hybrid search query pattern in SaaS (feat parity) (#1698)
-  fixes https://github.com/lancedb/lancedb/issues/1697.
- unifies vector column inference logic for remote and local table to
prevent future disparities.
- Updates docstring in RemoteTable to specify empty queries are not
supported
2024-09-25 21:04:00 +05:30
QianZhu
f00b21c98c fix: metric type for python/node search api (#1689) 2024-09-24 16:10:29 -07:00
Lance Release
962b3afd17 Updating package-lock.json 2024-09-24 16:51:37 +00:00
Lance Release
b72ac073ab Bump version: 0.11.0-beta.0 → 0.11.0-beta.1 2024-09-24 16:51:16 +00:00
Bert
3152ccd13c fix: re-add hostOverride arg to ConnectionOptions (#1694)
Fixes an issue where hostOverride was no longer passed through to
RemoteConnection
2024-09-24 13:29:26 -03:00
Bert
d5021356b4 feat: add fast_search to vectordb (#1693) 2024-09-24 13:28:54 -03:00
Will Jones
e82f63b40a fix(node): pass no const enum (#1690)
Apparently this is a no-no for libraries.
https://ncjamieson.com/dont-export-const-enums/

Fixes [#1664](https://github.com/lancedb/lancedb/issues/1664)
2024-09-24 07:41:42 -07:00
Ayush Chaurasia
f81ce68e41 fix(python): force deduce vector column name if running explicit hybrid query (#1692)
Right now, when passing vector and query explicitly for hybrid search,
vector_column_name is not deduced
(https://lancedb.github.io/lancedb/hybrid_search/hybrid_search/#hybrid-search-in-lancedb).
This is because vector and query can both be None when initialising the
QueryBuilder in this case. This PR forces the deduction when the query
type is set to "hybrid".
2024-09-24 19:02:56 +05:30
Will Jones
f5c25b6fff ci: run clippy on tests (#1659) 2024-09-23 07:33:47 -07:00
Ayush Chaurasia
86978e7588 feat!: enforce all rerankers always return relevance score & deprecate linear combination fixes (#1687)
- Enforce that all rerankers always return _relevance_score. This was already
loosely done in tests before, but based on user feedback it's better to
always have _relevance_score present in all reranked results
- Deprecate LinearCombinationReranker in docs. Also fix a case where
it would not return _relevance_score if one result set was missing
2024-09-23 12:12:02 +05:30
Lei Xu
7c314d61cc chore: add error handling for openai embedding generation (#1680) 2024-09-23 12:10:56 +05:30
Lei Xu
7a8d2f37c4 feat(rust): add with_row_id to rust SDK (#1683) 2024-09-21 21:26:19 -07:00
Rithik Kumar
11072b9edc docs: phidata integration page (#1678)
Added a new integration page for phidata:

![image](https://github.com/user-attachments/assets/8cd9b420-f249-4eac-ac13-ae53983822be)
2024-09-21 00:40:47 +05:30
Lei Xu
915d828cee feat!: set embeddings to Null if embedding function return invalid results (#1674) 2024-09-19 23:16:20 -07:00
Lance Release
d9a72adc58 Updating package-lock.json 2024-09-19 17:53:19 +00:00
Lance Release
d6cf2dafc6 Bump version: 0.10.0 → 0.11.0-beta.0 2024-09-19 17:53:00 +00:00
Lance Release
38f0031d0b Bump version: 0.13.0 → 0.14.0-beta.0 2024-09-19 17:52:38 +00:00
LuQQiu
e118c37228 ci: enable java auto release (#1602)
Enable bumping Java pom.xml versions.
Enable automatic Java releases when a stable GitHub release is detected.
2024-09-19 10:51:03 -07:00
LuQQiu
abeaae3d80 feat!: upgrade Lance to 0.18.0 (#1657)
BREAKING CHANGE: default file format changed to Lance v2.0.

Upgrade Lance to 0.18.0

Change notes: https://github.com/lancedb/lance/releases/tag/v0.18.0
2024-09-19 10:50:26 -07:00
Gagan Bhullar
b3c0227065 docs: hnsw documentation (#1640)
PR closes #1627

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-19 10:32:46 -07:00
Will Jones
521e665f57 feat(rust): remote client write data endpoint (#1645)
* Implements:
  * Add
  * Update
  * Delete
  * Merge-Insert

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-09-18 15:02:56 -07:00
Will Jones
ffb28dd4fc feat(rust): remote endpoints for schema, version, count_rows (#1644)
A handful of additional endpoints.
2024-09-16 08:19:25 -07:00
Lei Xu
32af962c0c feat: fix creating empty table and creating table by a list of RecordBatch for remote python sdk (#1650)
Closes #1637
2024-09-14 11:33:34 -07:00
Ayush Chaurasia
18484d0b6c fix: allow pass optional args in colbert reranker (#1649)
Fixes https://github.com/lancedb/lancedb/issues/1641
2024-09-14 11:18:09 -07:00
Lei Xu
c02ee3c80c chore: make remote client a context manager (#1648)
Allow `RemoteLanceDBClient` to be used as context manager
2024-09-13 22:08:48 -07:00
Rithik Kumar
dcd5f51036 docs: add understand embeddings v1 (#1643)
Before getting started with **managing embeddings**, let's **understand
embeddings** (the LanceDB way).

![Screenshot 2024-09-14
012144](https://github.com/user-attachments/assets/7c5435dc-5316-47e9-8d7d-9994ab13b93d)
2024-09-14 02:07:00 +05:30
Sayandip Dutta
9b8472850e fix: unterminated string literal on table update (#1573)
resolves #1429 
(python)

```python
-    return f"'{value}'"
+    return f'"{value}"'
```

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-13 12:32:59 -07:00
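For context, a sketch of the kind of update that previously produced an unterminated string literal; table contents are illustrative:

```python
import lancedb

db = lancedb.connect("./demo")
tbl = db.create_table(
    "people", [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}], mode="overwrite"
)

# Values containing a single quote used to break the generated update
# expression; with the fix they are quoted with double quotes instead.
tbl.update(where="id = 2", values={"name": "O'Brien"})
print(tbl.to_pandas())
```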
Sayandip Dutta
36d05ea641 fix: add appropriate QueryBuilder overloads to LanceTable.search (#1558)
- Add overloads to Table.search, to preserve the return information
of the different types of QueryBuilder objects for LanceTable
- Fix the fts_column type annotation by making it `Optional`

resolves #1550

---------

Co-authored-by: sayandip-dutta <sayandip.dutta@nevaehtech.com>
Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-13 12:32:30 -07:00
LuQQiu
7ed86cadfb feat(node): let NODE API region default to us-east-1 (#1631)
Fixes #1622.
To sync with the Python API.
2024-09-13 11:48:57 -07:00
Will Jones
1c123b58d8 feat: implement Remote connection for LanceDB Rust (#1639)
* Adding a simple test facility, which allows you to mock a single
endpoint at a time with a closure.
* Implementing all the database-level endpoints

Table-level APIs will be done in a follow up PR.

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-09-13 10:53:27 -07:00
BubbleCal
bf7d2d6fb0 docs: update FTS docs for JS SDK (#1634)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-09-13 05:48:29 -07:00
LuQQiu
c7732585bf fix: support pyarrow input types (#1628)
fixes #1625 
Support PyArrow.RecordBatch, pa.dataset.Dataset, pa.dataset.Scanner,
paRecordBatchReader
2024-09-12 10:59:18 -07:00
Prashant Dixit
b3bf6386c3 docs: rag section in guide (#1619)
This PR adds the RAG section to the Guides. It includes all the RAG
variants with code snippets, plus some advanced techniques that improve RAG.
2024-09-11 21:13:55 +05:30
BubbleCal
4b79db72bf docs: improve the docs and API param name (#1629)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-09-11 10:18:29 +08:00
Lance Release
622a2922e2 Updating package-lock.json 2024-09-10 20:12:54 +00:00
Lance Release
c91221d710 Bump version: 0.10.0-beta.2 → 0.10.0 2024-09-10 20:12:41 +00:00
Lance Release
56da5ebd13 Bump version: 0.10.0-beta.1 → 0.10.0-beta.2 2024-09-10 20:12:40 +00:00
321 changed files with 26900 additions and 8387 deletions

View File

@@ -1,5 +1,5 @@
 [tool.bumpversion]
-current_version = "0.10.0-beta.1"
+current_version = "0.14.1-beta.6"
 parse = """(?x)
 (?P<major>0|[1-9]\\d*)\\.
 (?P<minor>0|[1-9]\\d*)\\.
@@ -24,34 +24,102 @@ commit = true
 message = "Bump version: {current_version} → {new_version}"
 commit_args = ""
+# Java maven files
+pre_commit_hooks = [
+"""
+NEW_VERSION="${BVHOOK_NEW_MAJOR}.${BVHOOK_NEW_MINOR}.${BVHOOK_NEW_PATCH}"
+if [ ! -z "$BVHOOK_NEW_PRE_L" ] && [ ! -z "$BVHOOK_NEW_PRE_N" ]; then
+NEW_VERSION="${NEW_VERSION}-${BVHOOK_NEW_PRE_L}.${BVHOOK_NEW_PRE_N}"
+fi
+echo "Constructed new version: $NEW_VERSION"
+cd java && mvn versions:set -DnewVersion=$NEW_VERSION && mvn versions:commit
+# Check for any modified but unstaged pom.xml files
+MODIFIED_POMS=$(git ls-files -m | grep pom.xml)
+if [ ! -z "$MODIFIED_POMS" ]; then
+echo "The following pom.xml files were modified but not staged. Adding them now:"
+echo "$MODIFIED_POMS" | while read -r file; do
+git add "$file"
+echo "Added: $file"
+done
+fi
+""",
+]
 [tool.bumpversion.parts.pre_l]
-values = ["beta", "final"]
 optional_value = "final"
+values = ["beta", "final"]
 [[tool.bumpversion.files]]
 filename = "node/package.json"
-search = "\"version\": \"{current_version}\","
 replace = "\"version\": \"{new_version}\","
+search = "\"version\": \"{current_version}\","
 [[tool.bumpversion.files]]
 filename = "nodejs/package.json"
-search = "\"version\": \"{current_version}\","
 replace = "\"version\": \"{new_version}\","
+search = "\"version\": \"{current_version}\","
 # nodejs binary packages
 [[tool.bumpversion.files]]
 glob = "nodejs/npm/*/package.json"
-search = "\"version\": \"{current_version}\","
 replace = "\"version\": \"{new_version}\","
+search = "\"version\": \"{current_version}\","
+# vectodb node binary packages
+[[tool.bumpversion.files]]
+glob = "node/package.json"
+replace = "\"@lancedb/vectordb-darwin-arm64\": \"{new_version}\""
+search = "\"@lancedb/vectordb-darwin-arm64\": \"{current_version}\""
+[[tool.bumpversion.files]]
+glob = "node/package.json"
+replace = "\"@lancedb/vectordb-darwin-x64\": \"{new_version}\""
+search = "\"@lancedb/vectordb-darwin-x64\": \"{current_version}\""
+[[tool.bumpversion.files]]
+glob = "node/package.json"
+replace = "\"@lancedb/vectordb-linux-arm64-gnu\": \"{new_version}\""
+search = "\"@lancedb/vectordb-linux-arm64-gnu\": \"{current_version}\""
+[[tool.bumpversion.files]]
+glob = "node/package.json"
+replace = "\"@lancedb/vectordb-linux-x64-gnu\": \"{new_version}\""
+search = "\"@lancedb/vectordb-linux-x64-gnu\": \"{current_version}\""
+[[tool.bumpversion.files]]
+glob = "node/package.json"
+replace = "\"@lancedb/vectordb-linux-arm64-musl\": \"{new_version}\""
+search = "\"@lancedb/vectordb-linux-arm64-musl\": \"{current_version}\""
+[[tool.bumpversion.files]]
+glob = "node/package.json"
+replace = "\"@lancedb/vectordb-linux-x64-musl\": \"{new_version}\""
+search = "\"@lancedb/vectordb-linux-x64-musl\": \"{current_version}\""
+[[tool.bumpversion.files]]
+glob = "node/package.json"
+replace = "\"@lancedb/vectordb-win32-x64-msvc\": \"{new_version}\""
+search = "\"@lancedb/vectordb-win32-x64-msvc\": \"{current_version}\""
+[[tool.bumpversion.files]]
+glob = "node/package.json"
+replace = "\"@lancedb/vectordb-win32-arm64-msvc\": \"{new_version}\""
+search = "\"@lancedb/vectordb-win32-arm64-msvc\": \"{current_version}\""
 # Cargo files
 # ------------
 [[tool.bumpversion.files]]
 filename = "rust/ffi/node/Cargo.toml"
-search = "\nversion = \"{current_version}\""
 replace = "\nversion = \"{new_version}\""
+search = "\nversion = \"{current_version}\""
 [[tool.bumpversion.files]]
 filename = "rust/lancedb/Cargo.toml"
-search = "\nversion = \"{current_version}\""
 replace = "\nversion = \"{new_version}\""
+search = "\nversion = \"{current_version}\""
+[[tool.bumpversion.files]]
+filename = "nodejs/Cargo.toml"
+replace = "\nversion = \"{new_version}\""
+search = "\nversion = \"{current_version}\""

View File

@@ -31,6 +31,9 @@ rustflags = [
 [target.x86_64-unknown-linux-gnu]
 rustflags = ["-C", "target-cpu=haswell", "-C", "target-feature=+avx2,+fma,+f16c"]
+[target.x86_64-unknown-linux-musl]
+rustflags = ["-C", "target-cpu=haswell", "-C", "target-feature=-crt-static,+avx2,+fma,+f16c"]
 [target.aarch64-apple-darwin]
 rustflags = ["-C", "target-cpu=apple-m1", "-C", "target-feature=+neon,+fp16,+fhm,+dotprod"]
@@ -38,3 +41,7 @@ rustflags = ["-C", "target-cpu=apple-m1", "-C", "target-feature=+neon,+fp16,+fhm
 # not found errors on systems that are missing it.
 [target.x86_64-pc-windows-msvc]
 rustflags = ["-Ctarget-feature=+crt-static"]
+# Experimental target for Arm64 Windows
+[target.aarch64-pc-windows-msvc]
+rustflags = ["-Ctarget-feature=+crt-static"]

View File

@@ -41,8 +41,8 @@ jobs:
- name: Build Python
working-directory: python
run: |
python -m pip install -e .
python -m pip install --extra-index-url https://pypi.fury.io/lancedb/ -e .
python -m pip install -r ../docs/requirements.txt
python -m pip install --extra-index-url https://pypi.fury.io/lancedb/ -r ../docs/requirements.txt
- name: Set up node
uses: actions/setup-node@v3
with:
@@ -72,9 +72,9 @@ jobs:
- name: Setup Pages
uses: actions/configure-pages@v2
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
uses: actions/upload-pages-artifact@v3
with:
path: "docs/site"
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v1
uses: actions/deploy-pages@v4

View File

@@ -24,7 +24,7 @@ env:
jobs:
test-python:
name: Test doc python code
runs-on: "warp-ubuntu-latest-x64-4x"
runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -49,7 +49,7 @@ jobs:
- name: Build Python
working-directory: docs/test
run:
python -m pip install -r requirements.txt
python -m pip install --extra-index-url https://pypi.fury.io/lancedb/ -r requirements.txt
- name: Create test files
run: |
cd docs/test
@@ -60,7 +60,7 @@ jobs:
for d in *; do cd "$d"; echo "$d".py; python "$d".py; cd ..; done
test-node:
name: Test doc nodejs code
runs-on: "warp-ubuntu-latest-x64-4x"
runs-on: ubuntu-24.04
timeout-minutes: 60
strategy:
fail-fast: false

View File

@@ -94,11 +94,16 @@ jobs:
mkdir -p ./core/target/classes/nativelib/darwin-aarch64 ./core/target/classes/nativelib/linux-aarch64
cp ../liblancedb_jni_darwin_aarch64.zip/liblancedb_jni.dylib ./core/target/classes/nativelib/darwin-aarch64/liblancedb_jni.dylib
cp ../liblancedb_jni_linux_aarch64.zip/liblancedb_jni.so ./core/target/classes/nativelib/linux-aarch64/liblancedb_jni.so
- name: Dry run
if: github.event_name == 'pull_request'
run: |
mvn --batch-mode -DskipTests package
- name: Set github
run: |
git config --global user.email "LanceDB Github Runner"
git config --global user.name "dev+gha@lancedb.com"
- name: Publish with Java 8
if: github.event_name == 'release'
run: |
echo "use-agent" >> ~/.gnupg/gpg.conf
echo "pinentry-mode loopback" >> ~/.gnupg/gpg.conf

View File

@@ -30,7 +30,7 @@ on:
default: true
type: boolean
other:
description: 'Make a Node/Rust release'
description: 'Make a Node/Rust/Java release'
required: true
default: true
type: boolean
@@ -97,3 +97,7 @@ jobs:
if: ${{ !inputs.dry_run && inputs.other }}
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
- uses: ./.github/workflows/update_package_lock_nodejs
if: ${{ !inputs.dry_run && inputs.other }}
with:
github_token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -53,6 +53,9 @@ jobs:
cargo clippy --all --all-features -- -D warnings
npm ci
npm run lint-ci
- name: Lint examples
working-directory: nodejs/examples
run: npm ci && npm run lint-ci
linux:
name: Linux (NodeJS ${{ matrix.node-version }})
timeout-minutes: 30
@@ -91,6 +94,18 @@ jobs:
env:
S3_TEST: "1"
run: npm run test
- name: Setup examples
working-directory: nodejs/examples
run: npm ci
- name: Test examples
working-directory: ./
env:
OPENAI_API_KEY: test
OPENAI_BASE_URL: http://0.0.0.0:8000
run: |
python ci/mock_openai.py &
cd nodejs/examples
npm test
macos:
timeout-minutes: 30
runs-on: "macos-14"

View File

@@ -101,7 +101,7 @@ jobs:
path: |
nodejs/dist/*.node
node-linux:
node-linux-gnu:
name: vectordb (${{ matrix.config.arch}}-unknown-linux-gnu)
runs-on: ${{ matrix.config.runner }}
# Only runs on tags that matches the make-release action
@@ -133,15 +133,67 @@ jobs:
free -h
- name: Build Linux Artifacts
run: |
bash ci/build_linux_artifacts.sh ${{ matrix.config.arch }}
bash ci/build_linux_artifacts.sh ${{ matrix.config.arch }} ${{ matrix.config.arch }}-unknown-linux-gnu
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: node-native-linux-${{ matrix.config.arch }}
name: node-native-linux-${{ matrix.config.arch }}-gnu
path: |
node/dist/lancedb-vectordb-linux*.tgz
nodejs-linux:
node-linux-musl:
name: vectordb (${{ matrix.config.arch}}-unknown-linux-musl)
runs-on: ubuntu-latest
container: alpine:edge
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
strategy:
fail-fast: false
matrix:
config:
- arch: x86_64
- arch: aarch64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install common dependencies
run: |
apk add protobuf-dev curl clang mold grep npm bash
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=-crt-static,+avx2,+fma,+f16c -Clinker=clang -Clink-arg=-fuse-ld=mold'" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
source "$HOME/.cargo/env"
rustup target add aarch64-unknown-linux-musl --toolchain 1.80.0
crt=$(realpath $(dirname $(rustup which rustc))/../lib/rustlib/aarch64-unknown-linux-musl/lib/self-contained)
sysroot_lib=/usr/aarch64-unknown-linux-musl/usr/lib
apk_url=https://dl-cdn.alpinelinux.org/alpine/latest-stable/main/aarch64/
curl -sSf $apk_url > apk_list
for pkg in gcc libgcc musl; do curl -sSf $apk_url$(cat apk_list | grep -oP '(?<=")'$pkg'-\d.*?(?=")') | tar zxf -; done
mkdir -p $sysroot_lib
echo 'GROUP ( libgcc_s.so.1 -lgcc )' > $sysroot_lib/libgcc_s.so
cp usr/lib/libgcc_s.so.1 $sysroot_lib
cp usr/lib/gcc/aarch64-alpine-linux-musl/*/libgcc.a $sysroot_lib
cp lib/ld-musl-aarch64.so.1 $sysroot_lib/libc.so
echo '!<arch>' > $sysroot_lib/libdl.a
(cd $crt && cp crti.o crtbeginS.o crtendS.o crtn.o -t $sysroot_lib)
echo "export CARGO_BUILD_TARGET=aarch64-unknown-linux-musl" >> saved_env
echo "export RUSTFLAGS='-Ctarget-cpu=apple-m1 -Ctarget-feature=-crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=clang -Clink-arg=-fuse-ld=mold -Clink-arg=--target=aarch64-unknown-linux-musl -Clink-arg=--sysroot=/usr/aarch64-unknown-linux-musl -Clink-arg=-lc'" >> saved_env
- name: Build Linux Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_vectordb.sh ${{ matrix.config.arch }} ${{ matrix.config.arch }}-unknown-linux-musl
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: node-native-linux-${{ matrix.config.arch }}-musl
path: |
node/dist/lancedb-vectordb-linux*.tgz
nodejs-linux-gnu:
name: lancedb (${{ matrix.config.arch}}-unknown-linux-gnu
runs-on: ${{ matrix.config.runner }}
# Only runs on tags that matches the make-release action
@@ -178,7 +230,7 @@ jobs:
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: nodejs-native-linux-${{ matrix.config.arch }}
name: nodejs-native-linux-${{ matrix.config.arch }}-gnu
path: |
nodejs/dist/*.node
# The generic files are the same in all distros so we just pick
@@ -192,6 +244,62 @@ jobs:
nodejs/dist/*
!nodejs/dist/*.node
nodejs-linux-musl:
name: lancedb (${{ matrix.config.arch}}-unknown-linux-musl
runs-on: ubuntu-latest
container: alpine:edge
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
strategy:
fail-fast: false
matrix:
config:
- arch: x86_64
- arch: aarch64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install common dependencies
run: |
apk add protobuf-dev curl clang mold grep npm bash openssl-dev openssl-libs-static
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=-crt-static,+avx2,+fma,+f16c -Clinker=clang -Clink-arg=-fuse-ld=mold'" >> saved_env
echo "export X86_64_UNKNOWN_LINUX_MUSL_OPENSSL_INCLUDE_DIR=/usr/include" >> saved_env
echo "export X86_64_UNKNOWN_LINUX_MUSL_OPENSSL_LIB_DIR=/usr/lib" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
source "$HOME/.cargo/env"
rustup target add aarch64-unknown-linux-musl --toolchain 1.80.0
crt=$(realpath $(dirname $(rustup which rustc))/../lib/rustlib/aarch64-unknown-linux-musl/lib/self-contained)
sysroot_lib=/usr/aarch64-unknown-linux-musl/usr/lib
apk_url=https://dl-cdn.alpinelinux.org/alpine/latest-stable/main/aarch64/
curl -sSf $apk_url > apk_list
for pkg in gcc libgcc musl openssl-dev openssl-libs-static; do curl -sSf $apk_url$(cat apk_list | grep -oP '(?<=")'$pkg'-\d.*?(?=")') | tar zxf -; done
mkdir -p $sysroot_lib
echo 'GROUP ( libgcc_s.so.1 -lgcc )' > $sysroot_lib/libgcc_s.so
cp usr/lib/libgcc_s.so.1 $sysroot_lib
cp usr/lib/gcc/aarch64-alpine-linux-musl/*/libgcc.a $sysroot_lib
cp lib/ld-musl-aarch64.so.1 $sysroot_lib/libc.so
echo '!<arch>' > $sysroot_lib/libdl.a
(cd $crt && cp crti.o crtbeginS.o crtendS.o crtn.o -t $sysroot_lib)
echo "export CARGO_BUILD_TARGET=aarch64-unknown-linux-musl" >> saved_env
echo "export RUSTFLAGS='-Ctarget-feature=-crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=clang -Clink-arg=-fuse-ld=mold -Clink-arg=--target=aarch64-unknown-linux-musl -Clink-arg=--sysroot=/usr/aarch64-unknown-linux-musl -Clink-arg=-lc'" >> saved_env
echo "export AARCH64_UNKNOWN_LINUX_MUSL_OPENSSL_INCLUDE_DIR=$(realpath usr/include)" >> saved_env
echo "export AARCH64_UNKNOWN_LINUX_MUSL_OPENSSL_LIB_DIR=$(realpath usr/lib)" >> saved_env
- name: Build Linux Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_lancedb.sh ${{ matrix.config.arch }}
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: nodejs-native-linux-${{ matrix.config.arch }}-musl
path: |
nodejs/dist/*.node
node-windows:
name: vectordb ${{ matrix.target }}
runs-on: windows-2022
@@ -226,6 +334,51 @@ jobs:
path: |
node/dist/lancedb-vectordb-win32*.tgz
node-windows-arm64:
name: vectordb ${{ matrix.config.arch }}-pc-windows-msvc
if: startsWith(github.ref, 'refs/tags/v')
runs-on: ubuntu-latest
container: alpine:edge
strategy:
fail-fast: false
matrix:
config:
# - arch: x86_64
- arch: aarch64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install dependencies
run: |
apk add protobuf-dev curl clang lld llvm19 grep npm bash msitools sed
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export AR=llvm-ar" >> saved_env
source "$HOME/.cargo/env"
rustup target add ${{ matrix.config.arch }}-pc-windows-msvc --toolchain 1.80.0
(mkdir -p sysroot && cd sysroot && sh ../ci/sysroot-${{ matrix.config.arch }}-pc-windows-msvc.sh)
echo "export C_INCLUDE_PATH=/usr/${{ matrix.config.arch }}-pc-windows-msvc/usr/include" >> saved_env
echo "export CARGO_BUILD_TARGET=${{ matrix.config.arch }}-pc-windows-msvc" >> saved_env
- name: Configure x86_64 build
if: ${{ matrix.config.arch == 'x86_64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=+crt-static,+avx2,+fma,+f16c -Clinker=lld -Clink-arg=/LIBPATH:/usr/x86_64-pc-windows-msvc/usr/lib'" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-feature=+crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=lld -Clink-arg=/LIBPATH:/usr/aarch64-pc-windows-msvc/usr/lib -Clink-arg=arm64rt.lib'" >> saved_env
- name: Build Windows Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_vectordb.sh ${{ matrix.config.arch }} ${{ matrix.config.arch }}-pc-windows-msvc
- name: Upload Windows Artifacts
uses: actions/upload-artifact@v4
with:
name: node-native-windows-${{ matrix.config.arch }}
path: |
node/dist/lancedb-vectordb-win32*.tgz
nodejs-windows:
name: lancedb ${{ matrix.target }}
runs-on: windows-2022
@@ -260,9 +413,57 @@ jobs:
path: |
nodejs/dist/*.node
nodejs-windows-arm64:
name: lancedb ${{ matrix.config.arch }}-pc-windows-msvc
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
runs-on: ubuntu-latest
container: alpine:edge
strategy:
fail-fast: false
matrix:
config:
# - arch: x86_64
- arch: aarch64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install dependencies
run: |
apk add protobuf-dev curl clang lld llvm19 grep npm bash msitools sed
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export AR=llvm-ar" >> saved_env
source "$HOME/.cargo/env"
rustup target add ${{ matrix.config.arch }}-pc-windows-msvc --toolchain 1.80.0
(mkdir -p sysroot && cd sysroot && sh ../ci/sysroot-${{ matrix.config.arch }}-pc-windows-msvc.sh)
echo "export C_INCLUDE_PATH=/usr/${{ matrix.config.arch }}-pc-windows-msvc/usr/include" >> saved_env
echo "export CARGO_BUILD_TARGET=${{ matrix.config.arch }}-pc-windows-msvc" >> saved_env
printf '#!/bin/sh\ncargo "$@"' > $HOME/.cargo/bin/cargo-xwin
chmod u+x $HOME/.cargo/bin/cargo-xwin
- name: Configure x86_64 build
if: ${{ matrix.config.arch == 'x86_64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=+crt-static,+avx2,+fma,+f16c -Clinker=lld -Clink-arg=/LIBPATH:/usr/x86_64-pc-windows-msvc/usr/lib'" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-feature=+crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=lld -Clink-arg=/LIBPATH:/usr/aarch64-pc-windows-msvc/usr/lib -Clink-arg=arm64rt.lib'" >> saved_env
- name: Build Windows Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_lancedb.sh ${{ matrix.config.arch }}
- name: Upload Windows Artifacts
uses: actions/upload-artifact@v4
with:
name: nodejs-native-windows-${{ matrix.config.arch }}
path: |
nodejs/dist/*.node
release:
name: vectordb NPM Publish
needs: [node, node-macos, node-linux, node-windows]
needs: [node, node-macos, node-linux-gnu, node-linux-musl, node-windows, node-windows-arm64]
runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
@@ -302,7 +503,7 @@ jobs:
release-nodejs:
name: lancedb NPM Publish
needs: [nodejs-macos, nodejs-linux, nodejs-windows]
needs: [nodejs-macos, nodejs-linux-gnu, nodejs-linux-musl, nodejs-windows, nodejs-windows-arm64]
runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
@@ -360,6 +561,7 @@ jobs:
SLACK_WEBHOOK_URL: ${{ secrets.ACTION_MONITORING_SLACK }}
update-package-lock:
if: startsWith(github.ref, 'refs/tags/v')
needs: [release]
runs-on: ubuntu-latest
permissions:
@@ -369,7 +571,7 @@ jobs:
uses: actions/checkout@v4
with:
ref: main
persist-credentials: false
token: ${{ secrets.LANCEDB_RELEASE_TOKEN }}
fetch-depth: 0
lfs: true
- uses: ./.github/workflows/update_package_lock
@@ -377,6 +579,7 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}
update-package-lock-nodejs:
if: startsWith(github.ref, 'refs/tags/v')
needs: [release-nodejs]
runs-on: ubuntu-latest
permissions:
@@ -386,7 +589,7 @@ jobs:
uses: actions/checkout@v4
with:
ref: main
persist-credentials: false
token: ${{ secrets.LANCEDB_RELEASE_TOKEN }}
fetch-depth: 0
lfs: true
- uses: ./.github/workflows/update_package_lock_nodejs
@@ -394,6 +597,7 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}
gh-release:
if: startsWith(github.ref, 'refs/tags/v')
runs-on: ubuntu-latest
permissions:
contents: write

View File

@@ -83,7 +83,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.8
python-version: 3.12
- uses: ./.github/workflows/build_windows_wheel
with:
python-minor-version: 8

View File

@@ -138,7 +138,7 @@ jobs:
run: rm -rf target/wheels
windows:
name: "Windows: ${{ matrix.config.name }}"
timeout-minutes: 30
timeout-minutes: 60
strategy:
matrix:
config:

View File

@@ -26,15 +26,14 @@ env:
jobs:
lint:
timeout-minutes: 30
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
defaults:
run:
shell: bash
working-directory: rust
env:
# Need up-to-date compilers for kernels
CC: gcc-12
CC: clang-18
CXX: g++-12
CXX: clang++-18
steps:
- uses: actions/checkout@v4
with:
@@ -50,21 +49,22 @@ jobs:
- name: Run format
run: cargo fmt --all -- --check
- name: Run clippy
run: cargo clippy --all --all-features -- -D warnings
run: cargo clippy --workspace --tests --all-features -- -D warnings
linux:
timeout-minutes: 30
# To build all features, we need more disk space than is available
# on the GitHub-provided runner. This is mostly due to the the
# on the free OSS github runner. This is mostly due to the the
# sentence-transformers feature.
runs-on: warp-ubuntu-latest-x64-4x
runs-on: ubuntu-2404-4x-x64
defaults:
run:
shell: bash
working-directory: rust
env:
# Need up-to-date compilers for kernels
CC: gcc-12
CC: clang-18
CXX: g++-12
CXX: clang++-18
steps:
- uses: actions/checkout@v4
with:
@@ -77,6 +77,12 @@ jobs:
run: |
sudo apt update
sudo apt install -y protobuf-compiler libssl-dev
- name: Make Swap
run: |
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
- name: Start S3 integration test environment
working-directory: .
run: docker compose up --detach --wait
@@ -86,6 +92,7 @@ jobs:
run: cargo test --all-features
- name: Run examples
run: cargo run --example simple
macos:
timeout-minutes: 30
strategy:
@@ -113,6 +120,7 @@ jobs:
- name: Run tests
# Run with everything except the integration tests.
run: cargo test --features remote,fp16kernels
windows:
runs-on: windows-2022
steps:
@@ -134,3 +142,137 @@ jobs:
$env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT
cargo build
cargo test
windows-arm64:
runs-on: windows-4x-arm
steps:
- name: Install Git
run: |
Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
shell: powershell
- name: Add Git to PATH
run: |
Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
shell: powershell
- name: Configure Git symlinks
run: git config --global core.symlinks true
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.13"
- name: Install Visual Studio Build Tools
run: |
Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
"--installPath", "C:\BuildTools", `
"--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
"--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
"--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
"--add", "Microsoft.VisualStudio.Component.VC.ATL", `
"--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
"--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
shell: powershell
- name: Add Visual Studio Build Tools to PATH
run: |
$vsPath = "C:\BuildTools\VC\Tools\MSVC"
$latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
# Add MSVC runtime libraries to LIB
$env:LIB = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\lib\arm64;" +
"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;" +
"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
Add-Content $env:GITHUB_ENV "LIB=$env:LIB"
# Add INCLUDE paths
$env:INCLUDE = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\include;" +
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt;" +
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\um;" +
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\shared"
Add-Content $env:GITHUB_ENV "INCLUDE=$env:INCLUDE"
shell: powershell
- name: Install Rust
run: |
Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
.\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
shell: powershell
- name: Add Rust to PATH
run: |
Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
shell: powershell
- uses: Swatinem/rust-cache@v2
with:
workspaces: rust
- name: Install 7-Zip ARM
run: |
New-Item -Path 'C:\7zip' -ItemType Directory
Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
shell: powershell
- name: Add 7-Zip to PATH
run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
shell: powershell
- name: Install Protoc v21.12
working-directory: C:\
run: |
if (Test-Path 'C:\protoc') {
Write-Host "Protoc directory exists, skipping installation"
return
}
New-Item -Path 'C:\protoc' -ItemType Directory
Set-Location C:\protoc
Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
& 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
shell: powershell
- name: Add Protoc to PATH
run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
shell: powershell
- name: Run tests
run: |
$env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT
cargo build --target aarch64-pc-windows-msvc
cargo test --target aarch64-pc-windows-msvc
msrv:
# Check the minimum supported Rust version
name: MSRV Check - Rust v${{ matrix.msrv }}
runs-on: ubuntu-24.04
strategy:
matrix:
msrv: ["1.78.0"] # This should match up with rust-version in Cargo.toml
env:
# Need up-to-date compilers for kernels
CC: clang-18
CXX: clang++-18
steps:
- uses: actions/checkout@v4
with:
submodules: true
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y protobuf-compiler libssl-dev
- name: Install ${{ matrix.msrv }}
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ matrix.msrv }}
- name: Downgrade dependencies
# These packages have newer requirements for MSRV
run: |
cargo update -p aws-sdk-bedrockruntime --precise 1.64.0
cargo update -p aws-sdk-dynamodb --precise 1.55.0
cargo update -p aws-config --precise 1.5.10
cargo update -p aws-sdk-kms --precise 1.51.0
cargo update -p aws-sdk-s3 --precise 1.65.0
cargo update -p aws-sdk-sso --precise 1.50.0
cargo update -p aws-sdk-ssooidc --precise 1.51.0
cargo update -p aws-sdk-sts --precise 1.51.0
cargo update -p home --precise 0.5.9
- name: cargo +${{ matrix.msrv }} check
run: cargo check --workspace --tests --benches --all-features

View File

@@ -17,11 +17,12 @@ runs:
run: |
python -m pip install --upgrade pip
pip install twine
python3 -m pip install --upgrade pkginfo
- name: Choose repo
shell: bash
id: choose_repo
run: |
if [ ${{ github.ref }} == "*beta*" ]; then
if [[ ${{ github.ref }} == *beta* ]]; then
echo "repo=fury" >> $GITHUB_OUTPUT
else
echo "repo=pypi" >> $GITHUB_OUTPUT
@@ -32,7 +33,7 @@ runs:
FURY_TOKEN: ${{ inputs.fury_token }}
PYPI_TOKEN: ${{ inputs.pypi_token }}
run: |
if [ ${{ steps.choose_repo.outputs.repo }} == "fury" ]; then
if [[ ${{ steps.choose_repo.outputs.repo }} == fury ]]; then
WHEEL=$(ls target/wheels/lancedb-*.whl 2> /dev/null | head -n 1)
echo "Uploading $WHEEL to Fury"
curl -f -F package=@$WHEEL https://$FURY_TOKEN@push.fury.io/lancedb/

View File

@@ -18,36 +18,44 @@ repository = "https://github.com/lancedb/lancedb"
description = "Serverless, low-latency vector database for AI applications" description = "Serverless, low-latency vector database for AI applications"
keywords = ["lancedb", "lance", "database", "vector", "search"] keywords = ["lancedb", "lance", "database", "vector", "search"]
categories = ["database-implementations"] categories = ["database-implementations"]
rust-version = "1.78.0"
[workspace.dependencies] [workspace.dependencies]
lance = { "version" = "=0.17.0", "features" = ["dynamodb"] } lance = { "version" = "=0.21.0", "features" = [
lance-index = { "version" = "=0.17.0" } "dynamodb",
lance-linalg = { "version" = "=0.17.0" } ], git = "https://github.com/lancedb/lance.git", tag = "v0.21.0-beta.5" }
lance-table = { "version" = "=0.17.0" } lance-io = { version = "=0.21.0", git = "https://github.com/lancedb/lance.git", tag = "v0.21.0-beta.5" }
lance-testing = { "version" = "=0.17.0" } lance-index = { version = "=0.21.0", git = "https://github.com/lancedb/lance.git", tag = "v0.21.0-beta.5" }
lance-datafusion = { "version" = "=0.17.0" } lance-linalg = { version = "=0.21.0", git = "https://github.com/lancedb/lance.git", tag = "v0.21.0-beta.5" }
lance-encoding = { "version" = "=0.17.0" } lance-table = { version = "=0.21.0", git = "https://github.com/lancedb/lance.git", tag = "v0.21.0-beta.5" }
lance-testing = { version = "=0.21.0", git = "https://github.com/lancedb/lance.git", tag = "v0.21.0-beta.5" }
lance-datafusion = { version = "=0.21.0", git = "https://github.com/lancedb/lance.git", tag = "v0.21.0-beta.5" }
lance-encoding = { version = "=0.21.0", git = "https://github.com/lancedb/lance.git", tag = "v0.21.0-beta.5" }
# Note that this one does not include pyarrow # Note that this one does not include pyarrow
arrow = { version = "52.2", optional = false } arrow = { version = "53.2", optional = false }
arrow-array = "52.2" arrow-array = "53.2"
arrow-data = "52.2" arrow-data = "53.2"
arrow-ipc = "52.2" arrow-ipc = "53.2"
arrow-ord = "52.2" arrow-ord = "53.2"
arrow-schema = "52.2" arrow-schema = "53.2"
arrow-arith = "52.2" arrow-arith = "53.2"
arrow-cast = "52.2" arrow-cast = "53.2"
async-trait = "0" async-trait = "0"
chrono = "0.4.35" chrono = "0.4.35"
datafusion-physical-plan = "40.0" datafusion-common = "42.0"
datafusion-physical-plan = "42.0"
env_logger = "0.10"
half = { "version" = "=2.4.1", default-features = false, features = [ half = { "version" = "=2.4.1", default-features = false, features = [
"num-traits", "num-traits",
] } ] }
futures = "0" futures = "0"
log = "0.4" log = "0.4"
moka = { version = "0.11", features = ["future"] }
object_store = "0.10.2" object_store = "0.10.2"
pin-project = "1.0.7" pin-project = "1.0.7"
snafu = "0.7.4" snafu = "0.7.4"
url = "2" url = "2"
num-traits = "0.2" num-traits = "0.2"
rand = "0.8"
regex = "1.10" regex = "1.10"
lazy_static = "1" lazy_static = "1"

View File

@@ -10,6 +10,7 @@
[![Blog](https://img.shields.io/badge/Blog-12100E?style=for-the-badge&logoColor=white)](https://blog.lancedb.com/)
[![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/zMM32dvNtd)
[![Twitter](https://img.shields.io/badge/Twitter-%231DA1F2.svg?style=for-the-badge&logo=Twitter&logoColor=white)](https://twitter.com/lancedb)
[![Gurubase](https://img.shields.io/badge/Gurubase-Ask%20LanceDB%20Guru-006BFF?style=for-the-badge)](https://gurubase.io/g/lancedb)
</p>
@@ -82,4 +83,4 @@ result = table.search([100, 100]).limit(2).to_pandas()
## Blogs, Tutorials & Videos
* 📈 <a href="https://blog.lancedb.com/benchmarking-random-access-in-lance/">2000x better performance with Lance over Parquet</a>
* 🤖 <a href="https://github.com/lancedb/lancedb/blob/main/docs/src/notebooks/youtube_transcript_search.ipynb">Build a question and answer bot with LanceDB</a>
* 🤖 <a href="https://github.com/lancedb/vectordb-recipes/tree/main/examples/Youtube-Search-QA-Bot">Build a question and answer bot with LanceDB</a>

View File

@@ -1,6 +1,7 @@
#!/bin/bash
set -e
ARCH=${1:-x86_64}
TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}
# We pass down the current user so that when we later mount the local files
# into the container, the files are accessible by the current user.
@@ -18,4 +19,4 @@ docker run \
-v $(pwd):/io -w /io \
--memory-swap=-1 \
lancedb-node-manylinux \
bash ci/manylinux_node/build_vectordb.sh $ARCH
bash ci/manylinux_node/build_vectordb.sh $ARCH $TARGET_TRIPLE

View File

@@ -3,6 +3,7 @@
# Targets supported:
# - x86_64-pc-windows-msvc
# - i686-pc-windows-msvc
# - aarch64-pc-windows-msvc
function Prebuild-Rust {
param (
@@ -31,7 +32,7 @@ function Build-NodeBinaries {
$targets = $args[0]
if (-not $targets) {
$targets = "x86_64-pc-windows-msvc"
$targets = "x86_64-pc-windows-msvc", "aarch64-pc-windows-msvc"
}
Write-Host "Building artifacts for targets: $targets"

View File

@@ -3,6 +3,7 @@
# Targets supported:
# - x86_64-pc-windows-msvc
# - i686-pc-windows-msvc
# - aarch64-pc-windows-msvc
function Prebuild-Rust {
param (
@@ -31,7 +32,7 @@ function Build-NodeBinaries {
$targets = $args[0]
if (-not $targets) {
$targets = "x86_64-pc-windows-msvc"
$targets = "x86_64-pc-windows-msvc", "aarch64-pc-windows-msvc"
}
Write-Host "Building artifacts for targets: $targets"

View File

@@ -11,7 +11,8 @@ fi
export OPENSSL_STATIC=1
export OPENSSL_INCLUDE_DIR=/usr/local/include/openssl
source $HOME/.bashrc
#Alpine doesn't have .bashrc
FILE=$HOME/.bashrc && test -f $FILE && source $FILE
cd nodejs
npm ci

View File

@@ -2,6 +2,7 @@
# Builds the node module for manylinux. Invoked by ci/build_linux_artifacts.sh.
set -e
ARCH=${1:-x86_64}
TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}
if [ "$ARCH" = "x86_64" ]; then if [ "$ARCH" = "x86_64" ]; then
export OPENSSL_LIB_DIR=/usr/local/lib64/ export OPENSSL_LIB_DIR=/usr/local/lib64/
@@ -11,9 +12,10 @@ fi
export OPENSSL_STATIC=1
export OPENSSL_INCLUDE_DIR=/usr/local/include/openssl
source $HOME/.bashrc
#Alpine doesn't have .bashrc
FILE=$HOME/.bashrc && test -f $FILE && source $FILE
cd node
npm ci
npm run build-release
npm run pack-build
npm run pack-build -- -t $TARGET_TRIPLE

ci/mock_openai.py (new file, 57 lines)
View File

@@ -0,0 +1,57 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
"""A zero-dependency mock OpenAI embeddings API endpoint for testing purposes."""

import argparse
import json
import http.server


class MockOpenAIRequestHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        content_length = int(self.headers["Content-Length"])
        post_data = self.rfile.read(content_length)
        post_data = json.loads(post_data.decode("utf-8"))

        # See: https://platform.openai.com/docs/api-reference/embeddings/create
        if isinstance(post_data["input"], str):
            num_inputs = 1
        else:
            num_inputs = len(post_data["input"])
        model = post_data.get("model", "text-embedding-ada-002")

        data = []
        for i in range(num_inputs):
            data.append({
                "object": "embedding",
                "embedding": [0.1] * 1536,
                "index": i,
            })

        response = {
            "object": "list",
            "data": data,
            "model": model,
            "usage": {
                "prompt_tokens": 0,
                "total_tokens": 0,
            }
        }

        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(response).encode("utf-8"))


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Mock OpenAI embeddings API endpoint")
    parser.add_argument("--port", type=int, default=8000, help="Port to listen on")
    args = parser.parse_args()
    port = args.port

    print(f"server started on port {port}. Press Ctrl-C to stop.")
    print(f"To use, set OPENAI_BASE_URL=http://localhost:{port} in your environment.")
    with http.server.HTTPServer(("0.0.0.0", port), MockOpenAIRequestHandler) as server:
        server.serve_forever()

View File

@@ -0,0 +1,105 @@
#!/bin/sh
# https://github.com/mstorsjo/msvc-wine/blob/master/vsdownload.py
# https://github.com/mozilla/gecko-dev/blob/6027d1d91f2d3204a3992633b3ef730ff005fc64/build/vs/vs2022-car.yaml
# function dl() {
# curl -O https://download.visualstudio.microsoft.com/download/pr/$1
# }
# [[.h]]
# "id": "Win11SDK_10.0.26100"
# "version": "10.0.26100.7"
# libucrt.lib
# example: <assert.h>
# dir: ucrt/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ee3a5fc6e9fc832af7295b138e93839/universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b1aa09b90fe314aceb090f6ec7626624/16ab2ea2187acffa6435e334796c8c89.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/400609bb0ff5804e36dbe6dcd42a7f01/6ee7bbee8435130a869cf971694fd9e2.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ac327317abb865a0e3f56b2faefa918/78fa3c824c2c48bd4a49ab5969adaaf7.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/f034bc0b2680f67dccd4bfeea3d0f932/7afc7b670accd8e3cc94cfffd516f5cb.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/7ed5e12f9d50f80825a8b27838cf4c7f/96076045170fe5db6d5dcf14b6f6688e.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/764edc185a696bda9e07df8891dddbbb/a1e2a83aa8a71c48c742eeaff6e71928.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/66854bedc6dbd5ccb5dd82c8e2412231/b2f03f34ff83ec013b9e45c7cd8e8a73.cab
# example: <windows.h>
# dir: um/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b286efac4d83a54fc49190bddef1edc9/windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/e0dc3811d92ab96fcb72bf63d6c08d71/766c0ffd568bbb31bf7fb6793383e24a.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/613503da4b5628768497822826aed39f/8125ee239710f33ea485965f76fae646.cab
# example: <winapifamily.h>
# dir: /shared
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/122979f0348d3a2a36b6aa1a111d5d0c/windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/766e04beecdfccff39e91dd9eb32834a/e89e3dcbb016928c7e426238337d69eb.cab
# "id": "Microsoft.VisualC.14.16.CRT.Headers"
# "version": "14.16.27045"
# example: <vcruntime.h>
# dir: MSVC/
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/87bbe41e09a2f83711e72696f49681429327eb7a4b90618c35667a6ba2e2880e/Microsoft.VisualC.14.16.CRT.Headers.vsix
# [[.lib]]
# advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib runtimeobject.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/944c4153b849a1f7d0c0404a4f1c05ea/windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5306aed3e1a38d1e8bef5934edeb2a9b/05047a45609f311645eebcac2739fc4c.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/13c8a73a0f5a6474040b26d016a26fab/13d68b8a7b6678a368e2d13ff4027521.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/149578fb3b621cdb61ee1813b9b3e791/463ad1b0783ebda908fd6c16a4abfe93.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5c986c4f393c6b09d5aec3b539e9fb4a/5a22e5cde814b041749fb271547f4dd5.cab
# fwpuclnt.lib arm64rt.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/7a332420d812f7c1d41da865ae5a7c52/windows%20sdk%20desktop%20libs%20arm64-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/19de98ed4a79938d0045d19c047936b3/3e2f7be479e3679d700ce0782e4cc318.cab
# libcmt.lib libvcruntime.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/227f40682a88dc5fa0ccb9cadc9ad30af99ad1f1a75db63407587d079f60d035/Microsoft.VisualC.14.16.CRT.ARM64.Desktop.vsix
msiextract universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
msiextract windows%20sdk%20desktop%20libs%20arm64-x86_en-us.msi
unzip -o Microsoft.VisualC.14.16.CRT.Headers.vsix
unzip -o Microsoft.VisualC.14.16.CRT.ARM64.Desktop.vsix
mkdir -p /usr/aarch64-pc-windows-msvc/usr/include
mkdir -p /usr/aarch64-pc-windows-msvc/usr/lib
# lowercase folder/file names
echo "$(find . -regex ".*/[^/]*[A-Z][^/]*")" | xargs -I{} sh -c 'mv "$(echo "{}" | sed -E '"'"'s/(.*\/)/\L\1/'"'"')" "$(echo "{}" | tr [A-Z] [a-z])"'
# .h
(cd 'program files/windows kits/10/include/10.0.26100.0' && cp -r ucrt/* um/* shared/* -t /usr/aarch64-pc-windows-msvc/usr/include)
cp -r contents/vc/tools/msvc/14.16.27023/include/* /usr/aarch64-pc-windows-msvc/usr/include
# lowercase #include "" and #include <>
find /usr/aarch64-pc-windows-msvc/usr/include -type f -exec sed -i -E 's/(#include <[^<>]*?[A-Z][^<>]*?>)|(#include "[^"]*?[A-Z][^"]*?")/\L\1\2/' "{}" ';'
# ARM intrinsics
# original dir: MSVC/
# '__n128x4' redefined in arm_neon.h
# "arm64_neon.h" included from intrin.h
(cd /usr/lib/llvm19/lib/clang/19/include && cp arm_neon.h intrin.h -t /usr/aarch64-pc-windows-msvc/usr/include)
# .lib
# _Interlocked intrinsics
# must always link with arm64rt.lib
# reason: https://developercommunity.visualstudio.com/t/libucrtlibstreamobj-error-lnk2001-unresolved-exter/1544787#T-ND1599818
# I don't understand the 'correct' fix for this, arm64rt.lib is supposed to be the workaround
(cd 'program files/windows kits/10/lib/10.0.26100.0/um/arm64' && cp advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib runtimeobject.lib fwpuclnt.lib arm64rt.lib -t /usr/aarch64-pc-windows-msvc/usr/lib)
(cd 'contents/vc/tools/msvc/14.16.27023/lib/arm64' && cp libcmt.lib libvcruntime.lib -t /usr/aarch64-pc-windows-msvc/usr/lib)
cp 'program files/windows kits/10/lib/10.0.26100.0/ucrt/arm64/libucrt.lib' /usr/aarch64-pc-windows-msvc/usr/lib

View File

@@ -0,0 +1,105 @@
#!/bin/sh
# https://github.com/mstorsjo/msvc-wine/blob/master/vsdownload.py
# https://github.com/mozilla/gecko-dev/blob/6027d1d91f2d3204a3992633b3ef730ff005fc64/build/vs/vs2022-car.yaml
# function dl() {
# curl -O https://download.visualstudio.microsoft.com/download/pr/$1
# }
# [[.h]]
# "id": "Win11SDK_10.0.26100"
# "version": "10.0.26100.7"
# libucrt.lib
# example: <assert.h>
# dir: ucrt/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ee3a5fc6e9fc832af7295b138e93839/universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b1aa09b90fe314aceb090f6ec7626624/16ab2ea2187acffa6435e334796c8c89.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/400609bb0ff5804e36dbe6dcd42a7f01/6ee7bbee8435130a869cf971694fd9e2.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ac327317abb865a0e3f56b2faefa918/78fa3c824c2c48bd4a49ab5969adaaf7.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/f034bc0b2680f67dccd4bfeea3d0f932/7afc7b670accd8e3cc94cfffd516f5cb.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/7ed5e12f9d50f80825a8b27838cf4c7f/96076045170fe5db6d5dcf14b6f6688e.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/764edc185a696bda9e07df8891dddbbb/a1e2a83aa8a71c48c742eeaff6e71928.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/66854bedc6dbd5ccb5dd82c8e2412231/b2f03f34ff83ec013b9e45c7cd8e8a73.cab
# example: <windows.h>
# dir: um/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b286efac4d83a54fc49190bddef1edc9/windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/e0dc3811d92ab96fcb72bf63d6c08d71/766c0ffd568bbb31bf7fb6793383e24a.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/613503da4b5628768497822826aed39f/8125ee239710f33ea485965f76fae646.cab
# example: <winapifamily.h>
# dir: /shared
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/122979f0348d3a2a36b6aa1a111d5d0c/windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/766e04beecdfccff39e91dd9eb32834a/e89e3dcbb016928c7e426238337d69eb.cab
# "id": "Microsoft.VisualC.14.16.CRT.Headers"
# "version": "14.16.27045"
# example: <vcruntime.h>
# dir: MSVC/
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/87bbe41e09a2f83711e72696f49681429327eb7a4b90618c35667a6ba2e2880e/Microsoft.VisualC.14.16.CRT.Headers.vsix
# [[.lib]]
# advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/944c4153b849a1f7d0c0404a4f1c05ea/windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5306aed3e1a38d1e8bef5934edeb2a9b/05047a45609f311645eebcac2739fc4c.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/13c8a73a0f5a6474040b26d016a26fab/13d68b8a7b6678a368e2d13ff4027521.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/149578fb3b621cdb61ee1813b9b3e791/463ad1b0783ebda908fd6c16a4abfe93.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5c986c4f393c6b09d5aec3b539e9fb4a/5a22e5cde814b041749fb271547f4dd5.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/bfc3904a0195453419ae4dfea7abd6fb/e10768bb6e9d0ea730280336b697da66.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/637f9f3be880c71f9e3ca07b4d67345c/f9b24c8280986c0683fbceca5326d806.cab
# dbghelp.lib fwpuclnt.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/9f51690d5aa804b1340ce12d1ec80f89/windows%20sdk%20desktop%20libs%20x64-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/d3a7df4ca3303a698640a29e558a5e5b/58314d0646d7e1a25e97c902166c3155.cab
# libcmt.lib libvcruntime.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/8728f21ae09940f1f4b4ee47b4a596be2509e2a47d2f0c83bbec0ea37d69644b/Microsoft.VisualC.14.16.CRT.x64.Desktop.vsix
msiextract universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
msiextract windows%20sdk%20desktop%20libs%20x64-x86_en-us.msi
unzip -o Microsoft.VisualC.14.16.CRT.Headers.vsix
unzip -o Microsoft.VisualC.14.16.CRT.x64.Desktop.vsix
mkdir -p /usr/x86_64-pc-windows-msvc/usr/include
mkdir -p /usr/x86_64-pc-windows-msvc/usr/lib
# lowercase folder/file names
echo "$(find . -regex ".*/[^/]*[A-Z][^/]*")" | xargs -I{} sh -c 'mv "$(echo "{}" | sed -E '"'"'s/(.*\/)/\L\1/'"'"')" "$(echo "{}" | tr [A-Z] [a-z])"'
# .h
(cd 'program files/windows kits/10/include/10.0.26100.0' && cp -r ucrt/* um/* shared/* -t /usr/x86_64-pc-windows-msvc/usr/include)
cp -r contents/vc/tools/msvc/14.16.27023/include/* /usr/x86_64-pc-windows-msvc/usr/include
# lowercase #include "" and #include <>
find /usr/x86_64-pc-windows-msvc/usr/include -type f -exec sed -i -E 's/(#include <[^<>]*?[A-Z][^<>]*?>)|(#include "[^"]*?[A-Z][^"]*?")/\L\1\2/' "{}" ';'
# x86 intrinsics
# original dir: MSVC/
# '_mm_movemask_epi8' defined in emmintrin.h
# '__v4sf' defined in xmmintrin.h
# '__v2si' defined in mmintrin.h
# '__m128d' redefined in immintrin.h
# '__m128i' redefined in intrin.h
# '_mm_comlt_epu8' defined in ammintrin.h
(cd /usr/lib/llvm19/lib/clang/19/include && cp emmintrin.h xmmintrin.h mmintrin.h immintrin.h intrin.h ammintrin.h -t /usr/x86_64-pc-windows-msvc/usr/include)
# .lib
(cd 'program files/windows kits/10/lib/10.0.26100.0/um/x64' && cp advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib dbghelp.lib fwpuclnt.lib -t /usr/x86_64-pc-windows-msvc/usr/lib)
(cd 'contents/vc/tools/msvc/14.16.27023/lib/x64' && cp libcmt.lib libvcruntime.lib -t /usr/x86_64-pc-windows-msvc/usr/lib)
cp 'program files/windows kits/10/lib/10.0.26100.0/ucrt/x64/libucrt.lib' /usr/x86_64-pc-windows-msvc/usr/lib

View File

@@ -34,6 +34,7 @@ theme:
- navigation.footer
- navigation.tracking
- navigation.instant
- content.footnote.tooltips
icon:
repo: fontawesome/brands/github
annotation: material/arrow-right-circle
@@ -54,10 +55,14 @@ plugins:
show_signature_annotations: true
show_root_heading: true
members_order: source
docstring_section_style: list
signature_crossrefs: true
separate_signature: true
import:
# for cross references
- https://arrow.apache.org/docs/objects.inv
- https://pandas.pydata.org/docs/objects.inv
- https://lancedb.github.io/lance/objects.inv
- mkdocs-jupyter
- render_swagger:
allow_arbitrary_locations: true
@@ -65,6 +70,11 @@ plugins:
markdown_extensions:
- admonition
- footnotes
- pymdownx.critic
- pymdownx.caret
- pymdownx.keys
- pymdownx.mark
- pymdownx.tilde
- pymdownx.details
- pymdownx.highlight:
anchor_linenums: true
@@ -84,6 +94,9 @@ markdown_extensions:
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
- markdown.extensions.toc:
baselevel: 1
permalink: ""
nav:
- Home:
@@ -100,12 +113,25 @@ nav:
- Working with tables: guides/tables.md
- Building a vector index: ann_indexes.md
- Vector Search: search.md
- Full-text search: fts.md
- Full-text search (native): fts.md
- Full-text search (tantivy-based): fts_tantivy.md
- Building a scalar index: guides/scalar_index.md
- Hybrid search:
- Overview: hybrid_search/hybrid_search.md
- Comparing Rerankers: hybrid_search/eval.md
- Airbnb financial data example: notebooks/hybrid_search.ipynb
- RAG:
- Vanilla RAG: rag/vanilla_rag.md
- Multi-head RAG: rag/multi_head_rag.md
- Corrective RAG: rag/corrective_rag.md
- Agentic RAG: rag/agentic_rag.md
- Graph RAG: rag/graph_rag.md
- Self RAG: rag/self_rag.md
- Adaptive RAG: rag/adaptive_rag.md
- SFR RAG: rag/sfr_rag.md
- Advanced Techniques:
- HyDE: rag/advanced_techniques/hyde.md
- FLARE: rag/advanced_techniques/flare.md
- Reranking:
- Quickstart: reranking/index.md
- Cohere Reranker: reranking/cohere.md
@@ -116,6 +142,7 @@ nav:
- Jina Reranker: reranking/jina.md
- OpenAI Reranker: reranking/openai.md
- AnswerDotAi Rerankers: reranking/answerdotai.md
- Voyage AI Rerankers: reranking/voyageai.md
- Building Custom Rerankers: reranking/custom_reranker.md
- Example: notebooks/lancedb_reranking.ipynb
- Filtering: sql.md
@@ -127,7 +154,8 @@ nav:
- Reranking: guides/tuning_retrievers/2_reranking.md
- Embedding fine-tuning: guides/tuning_retrievers/3_embed_tuning.md
- 🧬 Managing embeddings:
- Overview: embeddings/index.md
- Understand Embeddings: embeddings/understanding_embeddings.md
- Get Started: embeddings/index.md
- Embedding functions: embeddings/embedding_functions.md
- Available models:
- Overview: embeddings/default_embedding_functions.md
@@ -142,6 +170,7 @@ nav:
- Jina Embeddings: embeddings/available_embedding_models/text_embedding_functions/jina_embedding.md
- AWS Bedrock Text Embedding Functions: embeddings/available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md
- IBM watsonx.ai Embeddings: embeddings/available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md
- Voyage AI Embeddings: embeddings/available_embedding_models/text_embedding_functions/voyageai_embedding.md
- Multimodal Embedding Functions: - Multimodal Embedding Functions:
- OpenClip embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/openclip_embedding.md - OpenClip embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/openclip_embedding.md
- Imagebind embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md - Imagebind embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md
@@ -165,6 +194,7 @@ nav:
- Voxel51: integrations/voxel51.md - Voxel51: integrations/voxel51.md
- PromptTools: integrations/prompttools.md - PromptTools: integrations/prompttools.md
- dlt: integrations/dlt.md - dlt: integrations/dlt.md
- phidata: integrations/phidata.md
- 🎯 Examples: - 🎯 Examples:
- Overview: examples/index.md - Overview: examples/index.md
- 🐍 Python: - 🐍 Python:
@@ -187,9 +217,10 @@ nav:
- TransformersJS Embedding Search: examples/transformerjs_embedding_search_nodejs.md - TransformersJS Embedding Search: examples/transformerjs_embedding_search_nodejs.md
- 🦀 Rust: - 🦀 Rust:
- Overview: examples/examples_rust.md - Overview: examples/examples_rust.md
- Studies: - 📓 Studies:
- ↗Improve retrievers with hybrid search and reranking: https://blog.lancedb.com/hybrid-search-and-reranking-report/ - ↗Improve retrievers with hybrid search and reranking: https://blog.lancedb.com/hybrid-search-and-reranking-report/
- 💭 FAQs: faq.md - 💭 FAQs: faq.md
- 🔍 Troubleshooting: troubleshooting.md
- ⚙️ API reference: - ⚙️ API reference:
- 🐍 Python: python/python.md - 🐍 Python: python/python.md
- 👾 JavaScript (vectordb): javascript/modules.md - 👾 JavaScript (vectordb): javascript/modules.md
@@ -201,6 +232,7 @@ nav:
- 🐍 Python: python/saas-python.md - 🐍 Python: python/saas-python.md
- 👾 JavaScript: javascript/modules.md - 👾 JavaScript: javascript/modules.md
- REST API: cloud/rest.md - REST API: cloud/rest.md
- FAQs: cloud/cloud_faq.md
- Quick start: basic.md - Quick start: basic.md
- Concepts: - Concepts:
@@ -214,12 +246,25 @@ nav:
- Working with tables: guides/tables.md - Working with tables: guides/tables.md
- Building an ANN index: ann_indexes.md - Building an ANN index: ann_indexes.md
- Vector Search: search.md - Vector Search: search.md
- Full-text search: fts.md - Full-text search (native): fts.md
- Full-text search (tantivy-based): fts_tantivy.md
- Building a scalar index: guides/scalar_index.md - Building a scalar index: guides/scalar_index.md
- Hybrid search: - Hybrid search:
- Overview: hybrid_search/hybrid_search.md - Overview: hybrid_search/hybrid_search.md
- Comparing Rerankers: hybrid_search/eval.md - Comparing Rerankers: hybrid_search/eval.md
- Airbnb financial data example: notebooks/hybrid_search.ipynb - Airbnb financial data example: notebooks/hybrid_search.ipynb
- RAG:
- Vanilla RAG: rag/vanilla_rag.md
- Multi-head RAG: rag/multi_head_rag.md
- Corrective RAG: rag/corrective_rag.md
- Agentic RAG: rag/agentic_rag.md
- Graph RAG: rag/graph_rag.md
- Self RAG: rag/self_rag.md
- Adaptive RAG: rag/adaptive_rag.md
- SFR RAG: rag/sfr_rag.md
- Advanced Techniques:
- HyDE: rag/advanced_techniques/hyde.md
- FLARE: rag/advanced_techniques/flare.md
- Reranking: - Reranking:
- Quickstart: reranking/index.md - Quickstart: reranking/index.md
- Cohere Reranker: reranking/cohere.md - Cohere Reranker: reranking/cohere.md
@@ -241,7 +286,8 @@ nav:
- Reranking: guides/tuning_retrievers/2_reranking.md - Reranking: guides/tuning_retrievers/2_reranking.md
- Embedding fine-tuning: guides/tuning_retrievers/3_embed_tuning.md - Embedding fine-tuning: guides/tuning_retrievers/3_embed_tuning.md
- Managing Embeddings: - Managing Embeddings:
- Overview: embeddings/index.md - Understand Embeddings: embeddings/understanding_embeddings.md
- Get Started: embeddings/index.md
- Embedding functions: embeddings/embedding_functions.md - Embedding functions: embeddings/embedding_functions.md
- Available models: - Available models:
- Overview: embeddings/default_embedding_functions.md - Overview: embeddings/default_embedding_functions.md
@@ -275,6 +321,7 @@ nav:
- Voxel51: integrations/voxel51.md - Voxel51: integrations/voxel51.md
- PromptTools: integrations/prompttools.md - PromptTools: integrations/prompttools.md
- dlt: integrations/dlt.md - dlt: integrations/dlt.md
- phidata: integrations/phidata.md
- Examples: - Examples:
- examples/index.md - examples/index.md
- 🐍 Python: - 🐍 Python:
@@ -312,6 +359,7 @@ nav:
- 🐍 Python: python/saas-python.md - 🐍 Python: python/saas-python.md
- 👾 JavaScript: javascript/modules.md - 👾 JavaScript: javascript/modules.md
- REST API: cloud/rest.md - REST API: cloud/rest.md
- FAQs: cloud/cloud_faq.md
extra_css: extra_css:
- styles/global.css - styles/global.css

docs/package-lock.json (generated)

@@ -19,7 +19,7 @@
}, },
"../node": { "../node": {
"name": "vectordb", "name": "vectordb",
"version": "0.4.6", "version": "0.12.0",
"cpu": [ "cpu": [
"x64", "x64",
"arm64" "arm64"
@@ -31,9 +31,7 @@
"win32" "win32"
], ],
"dependencies": { "dependencies": {
"@apache-arrow/ts": "^14.0.2",
"@neon-rs/load": "^0.0.74", "@neon-rs/load": "^0.0.74",
"apache-arrow": "^14.0.2",
"axios": "^1.4.0" "axios": "^1.4.0"
}, },
"devDependencies": { "devDependencies": {
@@ -46,6 +44,7 @@
"@types/temp": "^0.9.1", "@types/temp": "^0.9.1",
"@types/uuid": "^9.0.3", "@types/uuid": "^9.0.3",
"@typescript-eslint/eslint-plugin": "^5.59.1", "@typescript-eslint/eslint-plugin": "^5.59.1",
"apache-arrow-old": "npm:apache-arrow@13.0.0",
"cargo-cp-artifact": "^0.1", "cargo-cp-artifact": "^0.1",
"chai": "^4.3.7", "chai": "^4.3.7",
"chai-as-promised": "^7.1.1", "chai-as-promised": "^7.1.1",
@@ -62,15 +61,19 @@
"ts-node-dev": "^2.0.0", "ts-node-dev": "^2.0.0",
"typedoc": "^0.24.7", "typedoc": "^0.24.7",
"typedoc-plugin-markdown": "^3.15.3", "typedoc-plugin-markdown": "^3.15.3",
"typescript": "*", "typescript": "^5.1.0",
"uuid": "^9.0.0" "uuid": "^9.0.0"
}, },
"optionalDependencies": { "optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.4.6", "@lancedb/vectordb-darwin-arm64": "0.12.0",
"@lancedb/vectordb-darwin-x64": "0.4.6", "@lancedb/vectordb-darwin-x64": "0.12.0",
"@lancedb/vectordb-linux-arm64-gnu": "0.4.6", "@lancedb/vectordb-linux-arm64-gnu": "0.12.0",
"@lancedb/vectordb-linux-x64-gnu": "0.4.6", "@lancedb/vectordb-linux-x64-gnu": "0.12.0",
"@lancedb/vectordb-win32-x64-msvc": "0.4.6" "@lancedb/vectordb-win32-x64-msvc": "0.12.0"
},
"peerDependencies": {
"@apache-arrow/ts": "^14.0.2",
"apache-arrow": "^14.0.2"
} }
}, },
"../node/node_modules/apache-arrow": { "../node/node_modules/apache-arrow": {


@@ -45,9 +45,9 @@ Lance supports `IVF_PQ` index type by default.
Creating indexes is done via the [lancedb.Table.createIndex](../js/classes/Table.md/#createIndex) method. Creating indexes is done via the [lancedb.Table.createIndex](../js/classes/Table.md/#createIndex) method.
```typescript ```typescript
--8<--- "nodejs/examples/ann_indexes.ts:import" --8<--- "nodejs/examples/ann_indexes.test.ts:import"
--8<-- "nodejs/examples/ann_indexes.ts:ingest" --8<-- "nodejs/examples/ann_indexes.test.ts:ingest"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -83,6 +83,7 @@ The following IVF_PQ parameters can be specified:
- **num_sub_vectors**: The number of sub-vectors (M) that will be created during Product Quantization (PQ). - **num_sub_vectors**: The number of sub-vectors (M) that will be created during Product Quantization (PQ).
For D dimensional vector, it will be divided into `M` subvectors with dimension `D/M`, each of which is replaced by For D dimensional vector, it will be divided into `M` subvectors with dimension `D/M`, each of which is replaced by
a single PQ code. The default is the dimension of the vector divided by 16. a single PQ code. The default is the dimension of the vector divided by 16.
- **num_bits**: The number of bits used to encode each sub-vector. Only 4 and 8 are supported. The higher the number of bits, the higher the accuracy of the index, but also the slower the search. The default is 8.
!!! note !!! note
@@ -140,13 +141,15 @@ There are a couple of parameters that can be used to fine-tune the search:
- **limit** (default: 10): The amount of results that will be returned - **limit** (default: 10): The amount of results that will be returned
- **nprobes** (default: 20): The number of probes used. A higher number makes search more accurate but also slower.<br/> - **nprobes** (default: 20): The number of probes used. A higher number makes search more accurate but also slower.<br/>
Most of the time, setting nprobes to cover 5-10% of the dataset should achieve high recall with low latency.<br/> Most of the time, setting nprobes to cover 5-15% of the dataset should achieve high recall with low latency.<br/>
e.g., for 1M vectors divided up into 256 partitions, nprobes should be set to ~20-40.<br/> - _For example_, for a dataset of 1 million vectors divided into 256 partitions, `nprobes` should be set to ~20-40. This value can be adjusted to achieve the optimal balance between search latency and search quality. <br/>
Note: nprobes is only applicable if an ANN index is present. If specified on a table without an ANN index, it is ignored.
- **refine_factor** (default: None): Refine the results by reading extra elements and re-ranking them in memory.<br/> - **refine_factor** (default: None): Refine the results by reading extra elements and re-ranking them in memory.<br/>
A higher number makes search more accurate but also slower. If you find the recall is less than ideal, try refine_factor=10 to start.<br/> A higher number makes search more accurate but also slower. If you find the recall is less than ideal, try refine_factor=10 to start.<br/>
e.g., for 1M vectors divided into 256 partitions, if you're looking for top 20, then refine_factor=200 reranks the whole partition.<br/> - _For example_, for a dataset of 1 million vectors divided into 256 partitions, setting the `refine_factor` to 200 will initially retrieve the top 4,000 candidates (top k * refine_factor) from all searched partitions. These candidates are then reranked to determine the final top 20 results.<br/>
Note: refine_factor is only applicable if an ANN index is present. If specified on a table without an ANN index, it is ignored. !!! note
Both `nprobes` and `refine_factor` are only applicable if an ANN index is present. If specified on a table without an ANN index, those parameters are ignored.
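Putting the two knobs together, a minimal Python sketch might look like the following; the database path, table name, and vector dimension below are hypothetical, not part of the original example.

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")  # hypothetical database path
tbl = db.open_table("my_vectors")            # hypothetical table name

# Probe more partitions and re-rank extra candidates to trade latency for recall.
results = (
    tbl.search([0.1] * 1536)  # query vector; the dimension here is illustrative
    .nprobes(20)
    .refine_factor(10)
    .limit(10)
    .to_pandas()
)
```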
=== "Python" === "Python"
@@ -169,7 +172,7 @@ There are a couple of parameters that can be used to fine-tune the search:
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/ann_indexes.ts:search1" --8<-- "nodejs/examples/ann_indexes.test.ts:search1"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -203,7 +206,7 @@ You can further filter the elements returned by a search using a where clause.
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/ann_indexes.ts:search2" --8<-- "nodejs/examples/ann_indexes.test.ts:search2"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -235,7 +238,7 @@ You can select the columns returned by the query using a select clause.
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/ann_indexes.ts:search3" --8<-- "nodejs/examples/ann_indexes.test.ts:search3"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -275,7 +278,15 @@ Product quantization can lead to approximately `16 * sizeof(float32) / 1 = 64` t
Higher number of partitions could lead to more efficient I/O during queries and better accuracy, but it takes much more time to train. Higher number of partitions could lead to more efficient I/O during queries and better accuracy, but it takes much more time to train.
On `SIFT-1M` dataset, our benchmark shows that keeping each partition 1K-4K rows lead to a good latency / recall. On `SIFT-1M` dataset, our benchmark shows that keeping each partition 1K-4K rows lead to a good latency / recall.
`num_sub_vectors` specifies how many Product Quantization (PQ) short codes to generate on each vector. Because `num_sub_vectors` specifies how many Product Quantization (PQ) short codes to generate on each vector. The number should be a factor of the vector dimension. Because
PQ is a lossy compression of the original vector, a higher `num_sub_vectors` usually results in PQ is a lossy compression of the original vector, a higher `num_sub_vectors` usually results in
less space distortion, and thus yields better accuracy. However, a higher `num_sub_vectors` also causes heavier I/O and less space distortion, and thus yields better accuracy. However, a higher `num_sub_vectors` also causes heavier I/O and more PQ computation, and thus, higher latency. `dimension / num_sub_vectors` should be a multiple of 8 for optimum SIMD efficiency.
more PQ computation, and thus, higher latency. `dimension / num_sub_vectors` should be a multiple of 8 for optimum SIMD efficiency.
!!! note
If `num_sub_vectors` is set to be greater than the vector dimension, you will see errors like `attempt to divide by zero`.
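To make these heuristics concrete, here is a rough sketch of how one might derive starting values; the dataset size and dimension are hypothetical, and `tbl` is assumed to be an already-opened table.

```python
num_vectors = 1_000_000  # hypothetical dataset size
dim = 1536               # hypothetical vector dimension

# Target roughly 1K-4K rows per partition (here ~2K), rounded to a power of two.
num_partitions = 512

# Start from dimension / 16; 1536 / 96 = 16, which is a multiple of 8 for SIMD efficiency.
num_sub_vectors = dim // 16  # -> 96

tbl.create_index(
    metric="L2",
    num_partitions=num_partitions,
    num_sub_vectors=num_sub_vectors,
)
```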
### How to choose `m` and `ef_construction` for `IVF_HNSW_*` index?
`m` determines the number of connections a new node establishes with its closest neighbors upon entering the graph. Typically, `m` falls within the range of 5 to 48. Lower `m` values are suitable for low-dimensional data or scenarios where recall is less critical. Conversely, higher `m` values are beneficial for high-dimensional data or when high recall is required. In essence, a larger `m` results in a denser graph with increased connectivity, but at the expense of higher memory consumption.
`ef_construction` balances build speed and accuracy. Higher values increase accuracy but slow down the build process. A typical range is 150 to 300. For good search results, a minimum value of 100 is recommended. In most cases, setting this value above 500 offers no additional benefit. Ensure that `ef_construction` is always set to a value equal to or greater than `ef` in the search phase.
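As a rough illustration, building an HNSW-based index with these parameters might look like the sketch below, assuming the sync `create_index` call accepts `index_type`, `m`, and `ef_construction` keyword arguments.

```python
tbl.create_index(
    metric="cosine",
    index_type="IVF_HNSW_SQ",
    m=20,                 # more connections per node: better recall, more memory
    ef_construction=300,  # more candidates evaluated at build time: better accuracy, slower build
)
```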


@@ -141,14 +141,6 @@ recommend switching to stable releases.
--8<-- "python/python/tests/docs/test_basic.py:connect_async" --8<-- "python/python/tests/docs/test_basic.py:connect_async"
``` ```
!!! note "Asynchronous Python API"
The asynchronous Python API is new and has some slight differences compared
to the synchronous API. Feel free to start using the asynchronous version.
Once all features have migrated we will start to move the synchronous API to
use the same syntax as the asynchronous API. To help with this migration we
have created a [migration guide](migration.md) detailing the differences.
=== "Typescript[^1]" === "Typescript[^1]"
=== "@lancedb/lancedb" === "@lancedb/lancedb"
@@ -157,7 +149,7 @@ recommend switching to stable releases.
import * as lancedb from "@lancedb/lancedb"; import * as lancedb from "@lancedb/lancedb";
import * as arrow from "apache-arrow"; import * as arrow from "apache-arrow";
--8<-- "nodejs/examples/basic.ts:connect" --8<-- "nodejs/examples/basic.test.ts:connect"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -212,7 +204,7 @@ table.
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:create_table" --8<-- "nodejs/examples/basic.test.ts:create_table"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -268,7 +260,7 @@ similar to a `CREATE TABLE` statement in SQL.
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:create_empty_table" --8<-- "nodejs/examples/basic.test.ts:create_empty_table"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -298,7 +290,7 @@ Once created, you can open a table as follows:
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:open_table" --8<-- "nodejs/examples/basic.test.ts:open_table"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -327,7 +319,7 @@ If you forget the name of your table, you can always get a listing of all table
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:table_names" --8<-- "nodejs/examples/basic.test.ts:table_names"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -357,7 +349,7 @@ After a table has been created, you can always add more data to it as follows:
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:add_data" --8<-- "nodejs/examples/basic.test.ts:add_data"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -389,7 +381,7 @@ Once you've embedded the query, you can find its nearest neighbors as follows:
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:vector_search" --8<-- "nodejs/examples/basic.test.ts:vector_search"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -429,7 +421,7 @@ LanceDB allows you to create an ANN index on a table as follows:
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:create_index" --8<-- "nodejs/examples/basic.test.ts:create_index"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -469,7 +461,7 @@ This can delete any number of rows that match the filter.
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:delete_rows" --8<-- "nodejs/examples/basic.test.ts:delete_rows"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -527,7 +519,7 @@ Use the `drop_table()` method on the database to remove a table.
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:drop_table" --8<-- "nodejs/examples/basic.test.ts:drop_table"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -561,8 +553,8 @@ You can use the embedding API when working with embedding models. It automatical
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/embedding.ts:imports" --8<-- "nodejs/examples/embedding.test.ts:imports"
--8<-- "nodejs/examples/embedding.ts:openai_embeddings" --8<-- "nodejs/examples/embedding.test.ts:openai_embeddings"
``` ```
=== "Rust" === "Rust"


@@ -0,0 +1,34 @@
This section provides answers to the most common questions asked about LanceDB Cloud. By following these guidelines, you can ensure a smooth, performant experience with LanceDB Cloud.
### Should I reuse the database connection?
Yes! It is recommended to establish a single database connection and maintain it throughout your interaction with the tables within.
LanceDB uses HTTP connections to communicate with the servers. By re-using the Connection object, you avoid the overhead of repeatedly establishing HTTP connections, significantly improving efficiency.
### Should I re-use the `Table` object?
`table = db.open_table()` should be called once and used for all subsequent table operations. If there are changes to the opened table, `table` always reflects the **latest version** of the data.
### What should I do if I need to search for rows by `id`?
LanceDB Cloud currently does not support an ID or primary key column. We recommend adding a
user-defined ID column. To significantly improve query performance when filtering with SQL clauses, create a scalar BITMAP/BTREE index on this column.
### What are the vector indexing types supported by LanceDB Cloud?
We support `IVF_PQ` and `IVF_HNSW_SQ` as the `index_type` which is passed to `create_index`. LanceDB Cloud tunes the indexing parameters automatically to achieve the best tradeoff between query latency and query quality.
### When I add new rows to a table, do I need to manually update the index?
No! LanceDB Cloud triggers an asynchronous background job to index the new vectors.
Even though indexing is asynchronous, your vectors will still be immediately searchable. LanceDB uses brute-force search over unindexed rows. This means your new data is immediately available, but it does temporarily increase latency. To disable the brute-force part of the search, set the `fast_search` flag in your query to `true`.
### Do I need to reindex the whole dataset if only a small portion of the data is deleted or updated?
No! Similar to adding data to the table, LanceDB Cloud triggers an asynchronous background job to update the existing indices. Therefore, no action is needed from users and there is absolutely no
downtime expected.
### How do I know whether an index has been created?
While index creation in LanceDB Cloud is generally fast, querying immediately after a `create_index` call may result in errors. It's recommended to use `list_indices` to verify index creation before querying.
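For example, a simple polling loop (assuming `list_indices()` returns an initially empty collection of index descriptors) might look like:

```python
import time

# Wait until the asynchronously created index is visible before querying.
while not list(tbl.list_indices()):
    time.sleep(5)
print("Index is ready; safe to run queries.")
```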
### Why is my query latency higher than expected?
Multiple factors can impact query latency. To reduce query latency, consider the following:
- Send pre-warm queries: send a few queries to warm up the cache before an actual user query.
- Check network latency: LanceDB Cloud is hosted in AWS `us-east-1` region. It is recommended to run queries from an EC2 instance that is in the same region.
- Create scalar indices: If you are filtering on metadata, it is recommended to create scalar indices on those columns. This will speed up searches with metadata filtering. See [here](../guides/scalar_index.md) for more details on creating a scalar index, and the sketch below for a minimal example.
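A small sketch of the scalar-index recommendation above; the column name and query vector are hypothetical, and it assumes `create_scalar_index` accepts an `index_type` keyword.

```python
# Index a metadata column used in filters (BTREE shown; BITMAP suits low-cardinality columns).
tbl.create_scalar_index("user_id", index_type="BTREE")

# Filtered vector searches on that column can then benefit from the scalar index.
hits = (
    tbl.search([0.1] * 1536)     # illustrative query vector
    .where("user_id = 'abc123'")
    .limit(10)
    .to_pandas()
)
```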


@@ -57,6 +57,13 @@ Then the greedy search routine operates as follows:
## Usage ## Usage
There are three key parameters to set when constructing an HNSW index:
* `metric`: Use the `L2` (Euclidean) distance metric. We also support `dot` and `cosine` distance.
* `m`: The number of neighbors to select for each vector in the HNSW graph.
* `ef_construction`: The number of candidates to evaluate during the construction of the HNSW graph.
We can combine the above concepts to understand how to build and query an HNSW index in LanceDB. We can combine the above concepts to understand how to build and query an HNSW index in LanceDB.
### Construct index ### Construct index


@@ -58,8 +58,10 @@ In Python, the index can be created as follows:
# Make sure you have enough data in the table for an effective training step # Make sure you have enough data in the table for an effective training step
tbl.create_index(metric="L2", num_partitions=256, num_sub_vectors=96) tbl.create_index(metric="L2", num_partitions=256, num_sub_vectors=96)
``` ```
!!! note
`num_partitions`=256 and `num_sub_vectors`=96 do not work for every dataset. Those values need to be adjusted for your particular dataset.
The `num_partitions` is usually chosen to target a particular number of vectors per partition. `num_sub_vectors` is typically chosen based on the desired recall and the dimensionality of the vector. See the [FAQs](#faq) below for best practices on choosing these parameters. The `num_partitions` is usually chosen to target a particular number of vectors per partition. `num_sub_vectors` is typically chosen based on the desired recall and the dimensionality of the vector. See [here](../ann_indexes.md/#how-to-choose-num_partitions-and-num_sub_vectors-for-ivf_pq-index) for best practices on choosing these parameters.
### Query the index ### Query the index


@@ -1,5 +1,5 @@
# Huggingface embedding models # Huggingface embedding models
We offer support for all huggingface models (which can be loaded via [transformers](https://huggingface.co/docs/transformers/en/index) library). The default model is `colbert-ir/colbertv2.0` which also has its own special callout - `registry.get("colbert")` We offer support for all Hugging Face models (which can be loaded via [transformers](https://huggingface.co/docs/transformers/en/index) library). The default model is `colbert-ir/colbertv2.0` which also has its own special callout - `registry.get("colbert")`. Some Hugging Face models might require custom models defined on the HuggingFace Hub in their own modeling files. You may enable this by setting `trust_remote_code=True`. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
Example usage - Example usage -
```python ```python


@@ -6,6 +6,7 @@ LanceDB registers the OpenAI embeddings function in the registry by default, as
|---|---|---|---| |---|---|---|---|
| `name` | `str` | `"text-embedding-ada-002"` | The name of the model. | | `name` | `str` | `"text-embedding-ada-002"` | The name of the model. |
| `dim` | `int` | Model default | For OpenAI's newer text-embedding-3 model, we can specify a dimensionality that is smaller than the 1536 size. This feature supports it | | `dim` | `int` | Model default | For OpenAI's newer text-embedding-3 model, we can specify a dimensionality that is smaller than the 1536 size. This feature supports it |
| `use_azure` | bool | `False` | Set to `true` to use the Azure OpenAI SDK |
```python ```python


@@ -0,0 +1,51 @@
# VoyageAI Embeddings
Voyage AI provides cutting-edge embedding models and rerankers.
Using the Voyage AI API requires the `voyageai` package, which can be installed with `pip install voyageai`. Voyage AI embeddings are used to generate embeddings for text data. The embeddings can be used for various tasks like semantic search, clustering, and classification.
You also need to set the `VOYAGE_API_KEY` environment variable to use the VoyageAI API.
Supported models are:
- voyage-3
- voyage-3-lite
- voyage-finance-2
- voyage-multilingual-2
- voyage-law-2
- voyage-code-2
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|--------|---------|
| `name` | `str` | `None` | The model ID of the model to use. Supported base models for Text Embeddings: voyage-3, voyage-3-lite, voyage-finance-2, voyage-multilingual-2, voyage-law-2, voyage-code-2 |
| `input_type` | `str` | `None` | Type of the input text. Defaults to `None`. Other options: `query`, `document`. |
| `truncation` | `bool` | `True` | Whether to truncate the input texts to fit within the context length. |
Usage Example:
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry
voyageai = (
    EmbeddingFunctionRegistry
    .get_instance()
    .get("voyageai")
    .create(name="voyage-3")
)
class TextModel(LanceModel):
text: str = voyageai.SourceField()
vector: Vector(voyageai.ndims()) = voyageai.VectorField()
data = [
    {"text": "hello world"},
    {"text": "goodbye world"},
]
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(data)
```
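Because the Voyage AI function is attached through `SourceField` and `VectorField`, a plain-text query is embedded automatically at search time, for example:

```python
result = tbl.search("greetings").limit(1).to_pandas()
print(result["text"][0])
```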


@@ -47,9 +47,9 @@ Let's implement `SentenceTransformerEmbeddings` class. All you need to do is imp
=== "TypeScript" === "TypeScript"
```ts ```ts
--8<--- "nodejs/examples/custom_embedding_function.ts:imports" --8<--- "nodejs/examples/custom_embedding_function.test.ts:imports"
--8<--- "nodejs/examples/custom_embedding_function.ts:embedding_impl" --8<--- "nodejs/examples/custom_embedding_function.test.ts:embedding_impl"
``` ```
@@ -78,7 +78,7 @@ Now you can use this embedding function to create your table schema and that's i
=== "TypeScript" === "TypeScript"
```ts ```ts
--8<--- "nodejs/examples/custom_embedding_function.ts:call_custom_function" --8<--- "nodejs/examples/custom_embedding_function.test.ts:call_custom_function"
``` ```
!!! note !!! note


@@ -53,6 +53,7 @@ These functions are registered by default to handle text embeddings.
| [**Jina Embeddings**](available_embedding_models/text_embedding_functions/jina_embedding.md "jina") | 🔗 World-class embedding models to improve your search and RAG systems. You will need **jina api key**. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/jina.png" alt="Jina Icon" width="90" height="35">](available_embedding_models/text_embedding_functions/jina_embedding.md) | | [**Jina Embeddings**](available_embedding_models/text_embedding_functions/jina_embedding.md "jina") | 🔗 World-class embedding models to improve your search and RAG systems. You will need **jina api key**. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/jina.png" alt="Jina Icon" width="90" height="35">](available_embedding_models/text_embedding_functions/jina_embedding.md) |
| [ **AWS Bedrock Functions**](available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md "bedrock-text") | ☁️ AWS Bedrock supports multiple base models for generating text embeddings. You need to setup the AWS credentials to use this embedding function. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/aws_bedrock.png" alt="AWS Bedrock Icon" width="120" height="35">](available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md) | | [ **AWS Bedrock Functions**](available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md "bedrock-text") | ☁️ AWS Bedrock supports multiple base models for generating text embeddings. You need to setup the AWS credentials to use this embedding function. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/aws_bedrock.png" alt="AWS Bedrock Icon" width="120" height="35">](available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md) |
| [**IBM Watsonx.ai**](available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md "watsonx") | 💡 Generate text embeddings using IBM's watsonx.ai platform. **Note**: watsonx.ai library is an optional dependency. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/watsonx.png" alt="Watsonx Icon" width="140" height="35">](available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md) | | [**IBM Watsonx.ai**](available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md "watsonx") | 💡 Generate text embeddings using IBM's watsonx.ai platform. **Note**: watsonx.ai library is an optional dependency. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/watsonx.png" alt="Watsonx Icon" width="140" height="35">](available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md) |
| [**VoyageAI Embeddings**](available_embedding_models/text_embedding_functions/voyageai_embedding.md "voyageai") | 🌕 Voyage AI provides cutting-edge embedding and rerankers. This will help you get started with **VoyageAI** embedding models using LanceDB. Using voyageai API requires voyageai package. Install it via `pip`. | [<img src="https://www.voyageai.com/logo.svg" alt="VoyageAI Icon" width="140" height="35">](available_embedding_models/text_embedding_functions/voyageai_embedding.md) |
@@ -66,6 +67,7 @@ These functions are registered by default to handle text embeddings.
[jina-key]: "jina" [jina-key]: "jina"
[aws-key]: "bedrock-text" [aws-key]: "bedrock-text"
[watsonx-key]: "watsonx" [watsonx-key]: "watsonx"
[voyageai-key]: "voyageai"
## Multi-modal Embedding Functions🖼 ## Multi-modal Embedding Functions🖼


@@ -94,8 +94,8 @@ the embeddings at all:
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```ts ```ts
--8<-- "nodejs/examples/embedding.ts:imports" --8<-- "nodejs/examples/embedding.test.ts:imports"
--8<-- "nodejs/examples/embedding.ts:embedding_function" --8<-- "nodejs/examples/embedding.test.ts:embedding_function"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -150,7 +150,7 @@ need to worry about it when you query the table:
.toArray() .toArray()
``` ```
=== "vectordb (deprecated) === "vectordb (deprecated)"
```ts ```ts
const results = await table const results = await table


@@ -51,8 +51,8 @@ LanceDB registers the OpenAI embeddings function in the registry as `openai`. Yo
=== "TypeScript" === "TypeScript"
```typescript ```typescript
--8<--- "nodejs/examples/embedding.ts:imports" --8<--- "nodejs/examples/embedding.test.ts:imports"
--8<--- "nodejs/examples/embedding.ts:openai_embeddings" --8<--- "nodejs/examples/embedding.test.ts:openai_embeddings"
``` ```
=== "Rust" === "Rust"
@@ -121,12 +121,10 @@ class Words(LanceModel):
vector: Vector(func.ndims()) = func.VectorField() vector: Vector(func.ndims()) = func.VectorField()
table = db.create_table("words", schema=Words) table = db.create_table("words", schema=Words)
table.add( table.add([
[
{"text": "hello world"}, {"text": "hello world"},
{"text": "goodbye world"} {"text": "goodbye world"}
] ])
)
query = "greetings" query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0] actual = table.search(query).limit(1).to_pydantic(Words)[0]


@@ -0,0 +1,133 @@
# Understand Embeddings
The term **dimension** is a synonym for the number of elements in a feature vector. Each feature can be thought of as a different axis in a geometric space.
High-dimensional data means there are many features (or attributes) in the data.
!!! example
1. An image is a data point and it might have thousands of dimensions because each pixel could be considered as a feature.
2. Text data, when represented by each word or character, can also lead to high dimensions, especially when considering all possible words in a language.
An embedding captures **meaning and relationships** within data by mapping high-dimensional data into a lower-dimensional space. It does so by placing inputs that are more **similar in meaning** closer together in the **embedding space**.
## What are Vector Embeddings?
Vector embeddings are a way to convert complex data, like text, images, or audio, into numerical coordinates (called vectors) that can be plotted in an n-dimensional space (the embedding space).
The closer these data points are related in the real world, the closer their corresponding numerical coordinates (vectors) will be to each other in the embedding space. This proximity in the embedding space reflects their semantic similarities, allowing machines to intuitively understand and process the data in a way that mirrors human perception of relationships and meaning.
In a way, it captures the most important aspects of the data while ignoring the less important ones. As a result, tasks like searching for related content or identifying patterns become more efficient and accurate, as the embeddings make it possible to quantify how **closely related** different **data points** are and **reduce** the **computational complexity**.
??? question "Are vectors and embeddings the same thing?"
When we say “vectors” we mean a **list of numbers** that **represents the data**.
When we say “embeddings” we mean a **list of numbers** that **captures important details and relationships**.
Although the terms are often used interchangeably, “embeddings” highlight how the data is represented with meaning and structure, while “vector” simply refers to the numerical form of that representation.
## Embedding vs Indexing
We already saw that creating **embeddings** on data is a method of creating **vectors** in an **n-dimensional embedding space** that captures the meaning and relationships inherent in the data.
Once we have these **vectors**, indexing comes into play. Indexing is a method of organizing these vector embeddings that allows us to quickly and efficiently locate and retrieve them from the entire dataset of vector embeddings.
## What types of data/objects can be embedded?
The following are common types of data that can be embedded:
1. **Text**: Text data includes sentences, paragraphs, documents, or any written content.
2. **Images**: Image data encompasses photographs, illustrations, or any visual content.
3. **Audio**: Audio data includes sounds, music, speech, or any auditory content.
4. **Video**: Video data consists of moving images and sound, which can convey complex information.
Large datasets of multi-modal data (text, audio, images, etc.) can be converted into embeddings with the appropriate model.
!!! tip "LanceDB vs Other traditional Vector DBs"
While many vector databases primarily focus on the storage and retrieval of vector embeddings, **LanceDB** uses the **Lance file format** (which operates on a disk-based architecture), allowing for the storage and management of not just embeddings but also **raw file data (bytes)**. This capability means that users can integrate various types of data, including images and text, alongside their vector embeddings in a unified system.
With the ability to store both vectors and associated file data, LanceDB enhances the querying process. Users can perform semantic searches that not only retrieve similar embeddings but also access related files and metadata, thus streamlining the workflow.
## How do embeddings work?
As mentioned, after creating embeddings, each data point is represented as a vector in an n-dimensional space (embedding space). The dimensionality of this space can vary depending on the complexity of the data and the specific embedding technique used.
Points that are close to each other in vector space are considered similar (or appear in similar contexts), and points that are far away are considered dissimilar. To quantify this closeness, we use distance as a metric, which can be measured in the following ways (a small sketch follows this list) -
1. **Euclidean Distance (L2)**: It calculates the straight-line distance between two points (vectors) in a multidimensional space.
2. **Cosine Similarity**: It measures the cosine of the angle between two vectors, providing a normalized measure of similarity based on their direction.
3. **Dot product**: It is calculated as the sum of the products of their corresponding components. To measure relatedness it considers both the magnitude and direction of the vectors.
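As a small illustration of these three measures, using NumPy and two toy vectors:

```python
import numpy as np

a = np.array([0.1, 0.2, 0.3])  # toy embedding vectors
b = np.array([0.2, 0.1, 0.4])

l2 = np.linalg.norm(a - b)                                       # Euclidean (L2) distance
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))  # cosine similarity
dot = np.dot(a, b)                                               # dot product
print(l2, cosine, dot)
```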
## How do you create and store vector embeddings for your data?
1. **Creating embeddings**: Choose an embedding model, it can be a pre-trained model (open-source or commercial) or you can train a custom embedding model for your scenario. Then feed your preprocessed data into the chosen model to obtain embeddings.
??? question "Popular choices for embedding models"
For text data, popular choices include OpenAI's text-embedding models, Google Gemini text-embedding models, Cohere's Embed models, and SentenceTransformers.
For image data, popular choices include CLIP (Contrastive Language-Image Pretraining), Imagebind embeddings by Meta (supports audio, video, and image), and Jina multi-modal embeddings.
2. **Storing vector embeddings**: This effectively requires **specialized databases** that can handle the complexity of vector data, as traditional databases often struggle with this task. Vector databases are designed specifically for storing and querying vector embeddings. They optimize for efficient nearest-neighbor searches and provide built-in indexing mechanisms.
!!! tip "Why LanceDB"
LanceDB **automates** the entire process of creating and storing embeddings for your data. LanceDB allows you to define and use **embedding functions**, which can be **pre-trained models** or **custom models**.
This enables you to **generate** embeddings tailored to the nature of your data (e.g., text, images) and **store** both the **original data** and **embeddings** in a **structured schema** thus providing efficient querying capabilities for similarity searches.
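A minimal sketch of that workflow using the embedding function registry; the model name, table name, and sample data below are illustrative.

```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry

# Pick a registered embedding function; "sentence-transformers" is one example.
func = get_registry().get("sentence-transformers").create()

class Docs(LanceModel):
    text: str = func.SourceField()                     # raw data stored alongside the vector
    vector: Vector(func.ndims()) = func.VectorField()  # embeddings generated automatically

db = lancedb.connect("~/.lancedb")
tbl = db.create_table("docs", schema=Docs, mode="overwrite")
tbl.add([{"text": "a photo of a cat"}, {"text": "quarterly revenue report"}])

# The same function embeds the query, so semantic search works out of the box.
print(tbl.search("pets").limit(1).to_pandas())
```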
Let's quickly [get started](./index.md) and learn how to manage embeddings in LanceDB.
## Bonus: As a developer, what can you create using embeddings?
As a developer, you can create a variety of innovative applications using vector embeddings. Check out the following -
<div class="grid cards" markdown>
- __Chatbots__
---
Develop chatbots that utilize embeddings to retrieve relevant context and generate coherent, contextually aware responses to user queries.
[:octicons-arrow-right-24: Check out examples](../examples/python_examples/chatbot.md)
- __Recommendation Systems__
---
Develop systems that recommend content (such as articles, movies, or products) based on the similarity of keywords and descriptions, enhancing user experience.
[:octicons-arrow-right-24: Check out examples](../examples/python_examples/recommendersystem.md)
- __Vector Search__
---
Build powerful applications that harness the full potential of semantic search, enabling them to retrieve relevant data quickly and effectively.
[:octicons-arrow-right-24: Check out examples](../examples/python_examples/vector_search.md)
- __RAG Applications__
---
Combine the strengths of large language models (LLMs) with retrieval-based approaches to create more useful applications.
[:octicons-arrow-right-24: Check out examples](../examples/python_examples/rag.md)
- __Many more examples__
---
Explore applied examples available as Colab notebooks or Python scripts to integrate into your applications.
[:octicons-arrow-right-24: More](../examples/examples_python.md)
</div>


@@ -8,9 +8,15 @@ LanceDB provides language APIs, allowing you to embed a database in your languag
* 👾 [JavaScript](examples_js.md) examples * 👾 [JavaScript](examples_js.md) examples
* 🦀 Rust examples (coming soon) * 🦀 Rust examples (coming soon)
## Applications powered by LanceDB ## Python Applications powered by LanceDB
| Project Name | Description | | Project Name | Description |
| --- | --- | | --- | --- |
| **Ultralytics Explorer 🚀**<br>[![Ultralytics](https://img.shields.io/badge/Ultralytics-Docs-green?labelColor=0f3bc4&style=flat-square&logo=https://cdn.prod.website-files.com/646dd1f1a3703e451ba81ecc/64994922cf2a6385a4bf4489_UltralyticsYOLO_mark_blue.svg&link=https://docs.ultralytics.com/datasets/explorer/)](https://docs.ultralytics.com/datasets/explorer/)<br>[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/docs/en/datasets/explorer/explorer.ipynb) | - 🔍 **Explore CV Datasets**: Semantic search, SQL queries, vector similarity, natural language.<br>- 🖥️ **GUI & Python API**: Seamless dataset interaction.<br>- ⚡ **Efficient & Scalable**: Leverages LanceDB for large datasets.<br>- 📊 **Detailed Analysis**: Easily analyze data patterns.<br>- 🌐 **Browser GUI Demo**: Create embeddings, search images, run queries. | | **Ultralytics Explorer 🚀**<br>[![Ultralytics](https://img.shields.io/badge/Ultralytics-Docs-green?labelColor=0f3bc4&style=flat-square&logo=https://cdn.prod.website-files.com/646dd1f1a3703e451ba81ecc/64994922cf2a6385a4bf4489_UltralyticsYOLO_mark_blue.svg&link=https://docs.ultralytics.com/datasets/explorer/)](https://docs.ultralytics.com/datasets/explorer/)<br>[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/docs/en/datasets/explorer/explorer.ipynb) | - 🔍 **Explore CV Datasets**: Semantic search, SQL queries, vector similarity, natural language.<br>- 🖥️ **GUI & Python API**: Seamless dataset interaction.<br>- ⚡ **Efficient & Scalable**: Leverages LanceDB for large datasets.<br>- 📊 **Detailed Analysis**: Easily analyze data patterns.<br>- 🌐 **Browser GUI Demo**: Create embeddings, search images, run queries. |
| **Website Chatbot🤖**<br>[![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/lancedb/lancedb-vercel-chatbot)<br>[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Flancedb%2Flancedb-vercel-chatbot&amp;env=OPENAI_API_KEY&amp;envDescription=OpenAI%20API%20Key%20for%20chat%20completion.&amp;project-name=lancedb-vercel-chatbot&amp;repository-name=lancedb-vercel-chatbot&amp;demo-title=LanceDB%20Chatbot%20Demo&amp;demo-description=Demo%20website%20chatbot%20with%20LanceDB.&amp;demo-url=https%3A%2F%2Flancedb.vercel.app&amp;demo-image=https%3A%2F%2Fi.imgur.com%2FazVJtvr.png) | - 🌐 **Chatbot from Sitemap/Docs**: Create a chatbot using site or document context.<br>- 🚀 **Embed LanceDB in Next.js**: Lightweight, on-prem storage.<br>- 🧠 **AI-Powered Context Retrieval**: Efficiently access relevant data.<br>- 🔧 **Serverless & Native JS**: Seamless integration with Next.js.<br>- ⚡ **One-Click Deploy on Vercel**: Quick and easy setup.. | | **Website Chatbot🤖**<br>[![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/lancedb/lancedb-vercel-chatbot)<br>[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Flancedb%2Flancedb-vercel-chatbot&amp;env=OPENAI_API_KEY&amp;envDescription=OpenAI%20API%20Key%20for%20chat%20completion.&amp;project-name=lancedb-vercel-chatbot&amp;repository-name=lancedb-vercel-chatbot&amp;demo-title=LanceDB%20Chatbot%20Demo&amp;demo-description=Demo%20website%20chatbot%20with%20LanceDB.&amp;demo-url=https%3A%2F%2Flancedb.vercel.app&amp;demo-image=https%3A%2F%2Fi.imgur.com%2FazVJtvr.png) | - 🌐 **Chatbot from Sitemap/Docs**: Create a chatbot using site or document context.<br>- 🚀 **Embed LanceDB in Next.js**: Lightweight, on-prem storage.<br>- 🧠 **AI-Powered Context Retrieval**: Efficiently access relevant data.<br>- 🔧 **Serverless & Native JS**: Seamless integration with Next.js.<br>- ⚡ **One-Click Deploy on Vercel**: Quick and easy setup.. |
## Nodejs Applications powered by LanceDB
| Project Name | Description |
| --- | --- |
| **Langchain Writing Assistant✍ **<br>[![Github](../assets/github.svg)](https://github.com/lancedb/vectordb-recipes/tree/main/applications/node/lanchain_writing_assistant) | - **📂 Data Source Integration**: Use your own data by specifying data source file, and the app instantly processes it to provide insights. <br>- **🧠 Intelligent Suggestions**: Powered by LangChain.js and LanceDB, it improves writing productivity and accuracy. <br>- **💡 Enhanced Writing Experience**: It delivers real-time contextual insights and factual suggestions while the user writes. |


@@ -36,6 +36,6 @@
[aware_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/chatbot_using_Llama2_&_lanceDB/main.ipynb [aware_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/chatbot_using_Llama2_&_lanceDB/main.ipynb
[aware_ghost]: https://blog.lancedb.com/context-aware-chatbot-using-llama-2-lancedb-as-vector-database-4d771d95c755 [aware_ghost]: https://blog.lancedb.com/context-aware-chatbot-using-llama-2-lancedb-as-vector-database-4d771d95c755
[csv_github]: https://github.com/lancedb/vectordb-recipes/blob/main/tutorials/Chat_with_csv_file [csv_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/Chat_with_csv_file
[csv_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/Chat_with_csv_file/main.ipynb [csv_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/Chat_with_csv_file/main.ipynb
[csv_ghost]: https://blog.lancedb.com/p/d8c71df4-e55f-479a-819e-cde13354a6a3/ [csv_ghost]: https://blog.lancedb.com/p/d8c71df4-e55f-479a-819e-cde13354a6a3/


@@ -12,7 +12,7 @@ LanceDB supports multimodal search by indexing and querying vector representatio
|:----------------|:-----------------|:-----------| |:----------------|:-----------------|:-----------|
| **Multimodal CLIP: DiffusionDB 🌐💥** | Multi-Modal Search with **CLIP** and **LanceDB** Using **DiffusionDB** Data for Combined Text and Image Understanding ! 🔓 | [![GitHub](../../assets/github.svg)][Clip_diffusionDB_github] <br>[![Open In Collab](../../assets/colab.svg)][Clip_diffusionDB_colab] <br>[![Python](../../assets/python.svg)][Clip_diffusionDB_python] <br>[![Ghost](../../assets/ghost.svg)][Clip_diffusionDB_ghost] | | **Multimodal CLIP: DiffusionDB 🌐💥** | Multi-Modal Search with **CLIP** and **LanceDB** Using **DiffusionDB** Data for Combined Text and Image Understanding ! 🔓 | [![GitHub](../../assets/github.svg)][Clip_diffusionDB_github] <br>[![Open In Collab](../../assets/colab.svg)][Clip_diffusionDB_colab] <br>[![Python](../../assets/python.svg)][Clip_diffusionDB_python] <br>[![Ghost](../../assets/ghost.svg)][Clip_diffusionDB_ghost] |
| **Multimodal CLIP: Youtube Videos 📹👀** | Search **Youtube videos** using Multimodal CLIP, finding relevant content with ease and accuracy! 🎯 | [![Github](../../assets/github.svg)][Clip_youtube_github] <br>[![Open In Collab](../../assets/colab.svg)][Clip_youtube_colab] <br> [![Python](../../assets/python.svg)][Clip_youtube_python] <br>[![Ghost](../../assets/ghost.svg)][Clip_youtube_python] | | **Multimodal CLIP: Youtube Videos 📹👀** | Search **Youtube videos** using Multimodal CLIP, finding relevant content with ease and accuracy! 🎯 | [![Github](../../assets/github.svg)][Clip_youtube_github] <br>[![Open In Collab](../../assets/colab.svg)][Clip_youtube_colab] <br> [![Python](../../assets/python.svg)][Clip_youtube_python] <br>[![Ghost](../../assets/ghost.svg)][Clip_youtube_python] |
| **Multimodal Image + Text Search 📸🔍** | Find **relevant documents** and **images** with a single query using **LanceDB's** multimodal search capabilities, to seamlessly integrate text and visuals ! 🌉 | [![GitHub](../../assets/github.svg)](https://github.com/lancedb/vectordb-recipes/blob/main/examples/multimodal_search) <br>[![Open In Collab](../../assets/colab.svg)](https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/multimodal_search/main.ipynb) <br> [![Python](../../assets/python.svg)](https://github.com/lancedb/vectordb-recipes/blob/main/examples/multimodal_search/main.py)<br> [![Ghost](../../assets/ghost.svg)](https://blog.lancedb.com/multi-modal-ai-made-easy-with-lancedb-clip-5aaf8801c939/) | | **Multimodal Image + Text Search 📸🔍** | Find **relevant documents** and **images** with a single query using **LanceDB's** multimodal search capabilities, to seamlessly integrate text and visuals ! 🌉 | [![GitHub](../../assets/github.svg)](https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/multimodal_search) <br>[![Open In Collab](../../assets/colab.svg)](https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/multimodal_search/main.ipynb) <br> [![Python](../../assets/python.svg)](https://github.com/lancedb/vectordb-recipes/blob/main/examples/multimodal_search/main.py)<br> [![Ghost](../../assets/ghost.svg)](https://blog.lancedb.com/multi-modal-ai-made-easy-with-lancedb-clip-5aaf8801c939/) |
| **Cambrian-1: Vision-Centric Image Exploration 🔍👀** | Learn how **Cambrian-1** works, using an example of **Vision-Centric** exploration on images found through vector search ! Work on **Flickr-8k** dataset 🔎 | [![Kaggle](https://img.shields.io/badge/Kaggle-035a7d?style=for-the-badge&logo=kaggle&logoColor=white)](https://www.kaggle.com/code/prasantdixit/cambrian-1-vision-centric-exploration-of-images/)<br> [![Ghost](../../assets/ghost.svg)](https://blog.lancedb.com/cambrian-1-vision-centric-exploration/) | | **Cambrian-1: Vision-Centric Image Exploration 🔍👀** | Learn how **Cambrian-1** works, using an example of **Vision-Centric** exploration on images found through vector search ! Work on **Flickr-8k** dataset 🔎 | [![Kaggle](https://img.shields.io/badge/Kaggle-035a7d?style=for-the-badge&logo=kaggle&logoColor=white)](https://www.kaggle.com/code/prasantdixit/cambrian-1-vision-centric-exploration-of-images/)<br> [![Ghost](../../assets/ghost.svg)](https://blog.lancedb.com/cambrian-1-vision-centric-exploration/) |


@@ -70,12 +70,12 @@ Build RAG (Retrieval-Augmented Generation) with LanceDB, a powerful solution fo
[flare_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/better-rag-FLAIR/main.ipynb [flare_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/better-rag-FLAIR/main.ipynb
[flare_ghost]: https://blog.lancedb.com/better-rag-with-active-retrieval-augmented-generation-flare-3b66646e2a9f/ [flare_ghost]: https://blog.lancedb.com/better-rag-with-active-retrieval-augmented-generation-flare-3b66646e2a9f/
[query_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/QueryExpansion&Reranker [query_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/QueryExpansion%26Reranker
[query_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/QueryExpansion&Reranker/main.ipynb [query_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/QueryExpansion&Reranker/main.ipynb
[fusion_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/RAG_Fusion [fusion_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/RAG_Fusion
[fusion_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/RAG_Fusion/main.ipynb [fusion_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/RAG_Fusion/main.ipynb
[agentic_github]: https://github.com/lancedb/vectordb-recipes/blob/main/tutorials/Agentic_RAG [agentic_github]: https://github.com/lancedb/vectordb-recipes/blob/main/tutorials/Agentic_RAG
[agentic_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/Agentic_RAG/main.ipynb [agentic_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/Agentic_RAG/main.ipynb


@@ -19,8 +19,8 @@ Deliver personalized experiences with Recommender Systems. 🎁
[movie_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/movie-recommender/main.py [movie_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/movie-recommender/main.py
[genre_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/movie-recommendation-with-genres [genre_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/movie-recommendation-with-genres
[genre_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/movie-recommendation-with-genres/movie_recommendation_with_doc2vec_and_lancedb.ipynb [genre_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/movie-recommendation-with-genres/movie_recommendation_with_doc2vec_and_lancedb.ipynb
[genre_ghost]: https://blog.lancedb.com/movie-recommendation-system-using-lancedb-and-doc2vec/ [genre_ghost]: https://blog.lancedb.com/movie-recommendation-system-using-lancedb-and-doc2vec/
[product_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/product-recommender [product_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/product-recommender
@@ -33,5 +33,5 @@ Deliver personalized experiences with Recommender Systems. 🎁
[arxiv_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/arxiv-recommender/main.py [arxiv_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/arxiv-recommender/main.py
[food_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Food_recommendation [food_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/Food_recommendation
[food_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Food_recommendation/main.ipynb [food_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/Food_recommendation/main.ipynb


@@ -37,16 +37,16 @@ LanceDB implements vector search algorithms for efficient document retrieval and
[NER_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/NER-powered-Semantic-Search/NER_powered_Semantic_Search_with_LanceDB.ipynb [NER_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/NER-powered-Semantic-Search/NER_powered_Semantic_Search_with_LanceDB.ipynb
[NER_ghost]: https://blog.lancedb.com/ner-powered-semantic-search-using-lancedb-51051dc3e493 [NER_ghost]: https://blog.lancedb.com/ner-powered-semantic-search-using-lancedb-51051dc3e493
[audio_search_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/audio_search [audio_search_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/audio_search
[audio_search_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/audio_search/main.ipynb [audio_search_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/audio_search/main.ipynb
[audio_search_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/audio_search/main.py [audio_search_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/archived_examples/audio_search/main.py
[mls_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/multi-lingual-wiki-qa [mls_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/multi-lingual-wiki-qa
[mls_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/multi-lingual-wiki-qa/main.ipynb [mls_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/multi-lingual-wiki-qa/main.ipynb
[mls_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/multi-lingual-wiki-qa/main.py [mls_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/archived_examples/multi-lingual-wiki-qa/main.py
[fr_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/facial_recognition [fr_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/facial_recognition
[fr_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/facial_recognition/main.ipynb [fr_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/facial_recognition/main.ipynb
[sentiment_analysis_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Sentiment-Analysis-Analyse-Hotel-Reviews [sentiment_analysis_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Sentiment-Analysis-Analyse-Hotel-Reviews
[sentiment_analysis_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Sentiment-Analysis-Analyse-Hotel-Reviews/Sentiment_Analysis_using_LanceDB.ipynb [sentiment_analysis_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Sentiment-Analysis-Analyse-Hotel-Reviews/Sentiment_Analysis_using_LanceDB.ipynb
@@ -70,8 +70,8 @@ LanceDB implements vector search algorithms for efficient document retrieval and
[openvino_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Accelerate-Vector-Search-Applications-Using-OpenVINO/clip_text_image_search.ipynb [openvino_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Accelerate-Vector-Search-Applications-Using-OpenVINO/clip_text_image_search.ipynb
[openvino_ghost]: https://blog.lancedb.com/accelerate-vector-search-applications-using-openvino-lancedb/ [openvino_ghost]: https://blog.lancedb.com/accelerate-vector-search-applications-using-openvino-lancedb/
[zsic_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/zero-shot-image-classification [zsic_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/zero-shot-image-classification
[zsic_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/zero-shot-image-classification/main.ipynb [zsic_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/zero-shot-image-classification/main.ipynb
[zsic_ghost]: https://blog.lancedb.com/zero-shot-image-classification-with-vector-search/ [zsic_ghost]: https://blog.lancedb.com/zero-shot-image-classification-with-vector-search/

View File

@@ -1,21 +1,9 @@
# Full-text search # Full-text search (Native FTS)
LanceDB provides support for full-text search via Lance (before via [Tantivy](https://github.com/quickwit-oss/tantivy) (Python only)), allowing you to incorporate keyword-based search (based on BM25) in your retrieval solutions. LanceDB provides support for full-text search via Lance, allowing you to incorporate keyword-based search (based on BM25) in your retrieval solutions.
Currently, the Lance full text search is missing some features that are in the Tantivy full text search. This includes phrase queries, re-ranking, and customizing the tokenizer. Thus, in Python, Tantivy is still the default way to do full text search and many of the instructions below apply just to Tantivy-based indices.
## Installation (Only for Tantivy-based FTS)
!!! note !!! note
No need to install the tantivy dependency if using native FTS The Python SDK uses tantivy-based FTS by default; pass `use_tantivy=False` to use native FTS.
To use full-text search, install the dependency [`tantivy-py`](https://github.com/quickwit-oss/tantivy-py):
```sh
# Say you want to use tantivy==0.20.1
pip install tantivy==0.20.1
```
## Example ## Example
@@ -39,7 +27,7 @@ Consider that we have a LanceDB table named `my_table`, whose string column `tex
# passing `use_tantivy=False` to use lance FTS index # passing `use_tantivy=False` to use lance FTS index
# `use_tantivy=True` by default # `use_tantivy=True` by default
table.create_fts_index("text") table.create_fts_index("text", use_tantivy=False)
table.search("puppy").limit(10).select(["text"]).to_list() table.search("puppy").limit(10).select(["text"]).to_list()
# [{'text': 'Frodo was a happy puppy', '_score': 0.6931471824645996}] # [{'text': 'Frodo was a happy puppy', '_score': 0.6931471824645996}]
# ... # ...
@@ -93,56 +81,78 @@ Consider that we have a LanceDB table named `my_table`, whose string column `tex
``` ```
It would search on all indexed columns by default, so it's useful when there are multiple indexed columns. It would search on all indexed columns by default, so it's useful when there are multiple indexed columns.
For now, this is supported in tantivy way only.
Passing `fts_columns="text"` if you want to specify the columns to search, but it's not available for Tantivy-based full text search. Pass `fts_columns="text"` if you want to specify the columns to search.
!!! note !!! note
LanceDB automatically searches on the existing FTS index if the input to the search is of type `str`. If you provide a vector as input, LanceDB will search the ANN index instead. LanceDB automatically searches on the existing FTS index if the input to the search is of type `str`. If you provide a vector as input, LanceDB will search the ANN index instead.
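For example, a minimal sketch of a column-restricted search (assuming the `fts_columns` keyword on the sync `search` API, as described above):
```python
# Restrict the keyword search to the indexed "text" column (a sketch;
# `fts_columns` on `search` is assumed from the description above).
results = (
    table.search("puppy", fts_columns="text")
    .limit(10)
    .select(["text"])
    .to_list()
)
```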
## Tokenization ## Tokenization
By default the text is tokenized by splitting on punctuation and whitespaces and then removing tokens that are longer than 40 chars. For more language specific tokenization then provide the argument tokenizer_name with the 2 letter language code followed by "_stem". So for english it would be "en_stem". By default, the text is tokenized by splitting on punctuation and whitespace, filtering out words longer than 40 characters, and lowercasing all words.
For now, only the Tantivy-based FTS index supports to specify the tokenizer, so it's only available in Python with `use_tantivy=True`. Stemming is useful for improving search results by reducing words to their root form, e.g. "running" to "run". LanceDB supports stemming for multiple languages; you can specify the tokenizer name to enable stemming using the pattern `tokenizer_name="{language_code}_stem"`, e.g. `en_stem` for English.
=== "use_tantivy=True"
For example, to enable stemming for English:
```python ```python
table.create_fts_index("text", use_tantivy=True, tokenizer_name="en_stem") table.create_fts_index("text", use_tantivy=True, tokenizer_name="en_stem")
``` ```
=== "use_tantivy=False"
[**Not supported yet**](https://github.com/lancedb/lance/issues/1195)
the following [languages](https://docs.rs/tantivy/latest/tantivy/tokenizer/enum.Language.html) are currently supported. the following [languages](https://docs.rs/tantivy/latest/tantivy/tokenizer/enum.Language.html) are currently supported.
## Index multiple columns The tokenizer is customizable: you can specify how it splits the text, how it filters out words, and so on.
If you have multiple string columns to index, there's no need to combine them manually -- simply pass them all as a list to `create_fts_index`:
=== "use_tantivy=True"
For example, for languages with accents, you can configure the tokenizer with `ascii_folding` to remove accents, e.g. 'é' to 'e':
```python ```python
table.create_fts_index(["text1", "text2"]) table.create_fts_index("text",
use_tantivy=False,
language="French",
stem=True,
ascii_folding=True)
``` ```
=== "use_tantivy=False"
[**Not supported yet**](https://github.com/lancedb/lance/issues/1195)
Note that the search API call does not change - you can search over all indexed columns at once.
## Filtering ## Filtering
Currently the LanceDB full text search feature supports *post-filtering*, meaning filters are LanceDB full text search supports filtering the search results by a condition; both pre-filtering and post-filtering are supported.
applied on top of the full text search results. This can be invoked via the familiar
`where` syntax:
This can be invoked via the familiar `where` syntax.
With pre-filtering:
=== "Python" === "Python"
```python ```python
table.search("puppy").limit(10).where("meta='foo'").to_list() table.search("puppy").limit(10).where("meta='foo'", prefilte=True).to_list()
```
=== "TypeScript"
```typescript
await tbl
.search("puppy")
.select(["id", "doc"])
.limit(10)
.where("meta='foo'")
.prefilter(true)
.toArray();
```
=== "Rust"
```rust
table
.query()
.full_text_search(FullTextSearchQuery::new("puppy".to_owned()))
.select(lancedb::query::Select::Columns(vec!["doc".to_owned()]))
.limit(10)
.only_if("meta='foo'")
.execute()
.await?;
```
With post-filtering:
=== "Python"
```python
table.search("puppy").limit(10).where("meta='foo'", prefilte=False).to_list()
``` ```
=== "TypeScript" === "TypeScript"
@@ -153,6 +163,7 @@ applied on top of the full text search results. This can be invoked via the fami
.select(["id", "doc"]) .select(["id", "doc"])
.limit(10) .limit(10)
.where("meta='foo'") .where("meta='foo'")
.prefilter(false)
.toArray(); .toArray();
``` ```
@@ -163,104 +174,56 @@ applied on top of the full text search results. This can be invoked via the fami
.query() .query()
.full_text_search(FullTextSearchQuery::new(words[0].to_owned())) .full_text_search(FullTextSearchQuery::new(words[0].to_owned()))
.select(lancedb::query::Select::Columns(vec!["doc".to_owned()])) .select(lancedb::query::Select::Columns(vec!["doc".to_owned()]))
.postfilter()
.limit(10) .limit(10)
.only_if("meta='foo'") .only_if("meta='foo'")
.execute() .execute()
.await?; .await?;
``` ```
## Sorting
!!! warning "Warn"
Sorting is available for only Tantivy-based FTS
You can pre-sort the documents by specifying `ordering_field_names` when
creating the full-text search index. Once pre-sorted, you can then specify
`ordering_field_name` while searching to return results sorted by the given
field. For example,
```python
table.create_fts_index(["text_field"], use_tantivy=True, ordering_field_names=["sort_by_field"])
(table.search("terms", ordering_field_name="sort_by_field")
.limit(20)
.to_list())
```
!!! note
If you wish to specify an ordering field at query time, you must also
have specified it during indexing time. Otherwise at query time, an
error will be raised that looks like `ValueError: The field does not exist: xxx`
!!! note
The fields to sort on must be of typed unsigned integer, or else you will see
an error during indexing that looks like
`TypeError: argument 'value': 'float' object cannot be interpreted as an integer`.
!!! note
You can specify multiple fields for ordering at indexing time.
But at query time only one ordering field is supported.
## Phrase queries vs. terms queries ## Phrase queries vs. terms queries
!!! warning "Warn" !!! warning "Warn"
Lance-based FTS doesn't support queries combining by boolean operators `OR`, `AND`. Lance-based FTS doesn't support queries using boolean operators `OR`, `AND`.
For full-text search you can specify either a **phrase** query like `"the old man and the sea"`, For full-text search you can specify either a **phrase** query like `"the old man and the sea"`,
or a **terms** search query like `"(Old AND Man) AND Sea"`. For more details on the terms or a **terms** search query like `old man sea`. For more details on the terms
query syntax, see Tantivy's [query parser rules](https://docs.rs/tantivy/latest/tantivy/query/struct.QueryParser.html). query syntax, see Tantivy's [query parser rules](https://docs.rs/tantivy/latest/tantivy/query/struct.QueryParser.html).
!!! tip "Note" To search for a phrase, the index must be created with `with_position=True`:
The query parser will raise an exception on queries that are ambiguous. For example, in the query `they could have been dogs OR cats`, `OR` is capitalized so it's considered a keyword query operator. But it's ambiguous how the left part should be treated. So if you submit this search query as is, you'll get `Syntax Error: they could have been dogs OR cats`. ```python
table.create_fts_index("text", use_tantivy=False, with_position=True)
```py
# This raises a syntax error
table.search("they could have been dogs OR cats")
``` ```
This will allow you to search for phrases, but it will also significantly increase the index size and indexing time.
On the other hand, lowercasing `OR` to `or` will work, because there are no capitalized logical operators and
the query is treated as a phrase query.
```py
# This works!
table.search("they could have been dogs or cats")
```
It can be cumbersome to have to remember what will cause a syntax error depending on the type of
query you want to perform. To make this simpler, when you want to perform a phrase query, you can
enforce it in one of two ways:
1. Place the double-quoted query inside single quotes. For example, `table.search('"they could have been dogs OR cats"')` is treated as
a phrase query.
1. Explicitly declare the `phrase_query()` method. This is useful when you have a phrase query that
itself contains double quotes. For example, `table.search('the cats OR dogs were not really "pets" at all').phrase_query()`
is treated as a phrase query.
In general, a query that's declared as a phrase query will be wrapped in double quotes during parsing, with nested
double quotes replaced by single quotes.
## Configurations (Only for Tantivy-based FTS) ## Incremental indexing
By default, LanceDB configures a 1GB heap size limit for creating the index. You can LanceDB supports incremental indexing, which means you can add new records to the table without reindexing the entire table.
reduce this if running on a smaller node, or increase this for faster performance while
indexing a larger corpus. This can make the query more efficient, especially when the table is large and the new records are relatively small.
=== "Python"
```python ```python
# configure a 512MB heap size table.add([{"vector": [3.1, 4.1], "text": "Frodo was a happy puppy"}])
heap = 1024 * 1024 * 512 table.optimize()
table.create_fts_index(["text1", "text2"], writer_heap_size=heap, replace=True)
``` ```
## Current limitations === "TypeScript"
For that Tantivy-based FTS: ```typescript
await tbl.add([{ vector: [3.1, 4.1], text: "Frodo was a happy puppy" }]);
await tbl.optimize();
```
1. Currently we do not yet support incremental writes. === "Rust"
If you add data after FTS index creation, it won't be reflected
in search results until you do a full reindex.
2. We currently only support local filesystem paths for the FTS index. ```rust
This is a tantivy limitation. We've implemented an object store plugin let more_data: Box<dyn RecordBatchReader + Send> = create_some_records()?;
but there's no way in tantivy-py to specify to use it. tbl.add(more_data).execute().await?;
tbl.optimize(OptimizeAction::All).execute().await?;
```
!!! note
New data added after creating the FTS index will appear in search results while the incremental index is still in progress, but with increased latency due to a flat search on the unindexed portion. LanceDB Cloud automates this merging process, minimizing the impact on search speed.

160
docs/src/fts_tantivy.md Normal file
View File

@@ -0,0 +1,160 @@
# Full-text search (Tantivy-based FTS)
LanceDB also provides support for full-text search via [Tantivy](https://github.com/quickwit-oss/tantivy), allowing you to incorporate keyword-based search (based on BM25) in your retrieval solutions.
The tantivy-based FTS is only available in Python and does not support building indexes on object storage or incremental indexing. If you need these features, try the [native FTS](fts.md).
## Installation
To use full-text search, install the dependency [`tantivy-py`](https://github.com/quickwit-oss/tantivy-py):
```sh
# Say you want to use tantivy==0.20.1
pip install tantivy==0.20.1
```
## Example
Consider that we have a LanceDB table named `my_table`, whose string column `content` we want to index and query via keyword search. The FTS index must be created before you can search via keywords.
```python
import lancedb
uri = "data/sample-lancedb"
db = lancedb.connect(uri)
table = db.create_table(
"my_table",
data=[
{"id": 1, "vector": [3.1, 4.1], "title": "happy puppy", "content": "Frodo was a happy puppy", "meta": "foo"},
{"id": 2, "vector": [5.9, 26.5], "title": "playing kittens", "content": "There are several kittens playing around the puppy", "meta": "bar"},
],
)
# `use_tantivy=True` is the default and creates a Tantivy-based index
# pass `use_tantivy=False` to use the native Lance FTS index instead
table.create_fts_index("content", use_tantivy=True)
table.search("puppy").limit(10).select(["content"]).to_list()
# [{'content': 'Frodo was a happy puppy', '_score': 0.6931471824645996}]
# ...
```
It searches all indexed columns by default, which is useful when there are multiple indexed columns.
!!! note
LanceDB automatically searches on the existing FTS index if the input to the search is of type `str`. If you provide a vector as input, LanceDB will search the ANN index instead.
## Tokenization
By default the text is tokenized by splitting on punctuation and whitespace and then removing tokens that are longer than 40 characters. For more language-specific tokenization, provide the argument `tokenizer_name` with the two-letter language code followed by "_stem". For English, this would be "en_stem".
```python
table.create_fts_index("content", use_tantivy=True, tokenizer_name="en_stem", replace=True)
```
the following [languages](https://docs.rs/tantivy/latest/tantivy/tokenizer/enum.Language.html) are currently supported.
## Index multiple columns
If you have multiple string columns to index, there's no need to combine them manually -- simply pass them all as a list to `create_fts_index`:
```python
table.create_fts_index(["title", "content"], use_tantivy=True, replace=True)
```
Note that the search API call does not change - you can search over all indexed columns at once.
## Filtering
Currently the LanceDB full text search feature supports *post-filtering*, meaning filters are
applied on top of the full text search results (see [native FTS](fts.md) if you need pre-filtering). This can be invoked via the familiar
`where` syntax:
```python
table.search("puppy").limit(10).where("meta='foo'").to_list()
```
## Sorting
You can pre-sort the documents by specifying `ordering_field_names` when
creating the full-text search index. Once pre-sorted, you can then specify
`ordering_field_name` while searching to return results sorted by the given
field. For example,
```python
table.create_fts_index(["content"], use_tantivy=True, ordering_field_names=["id"], replace=True)
(table.search("puppy", ordering_field_name="id")
.limit(20)
.to_list())
```
!!! note
If you wish to specify an ordering field at query time, you must also
have specified it during indexing time. Otherwise at query time, an
error will be raised that looks like `ValueError: The field does not exist: xxx`
!!! note
The fields to sort on must be of unsigned integer type, or else you will see
an error during indexing that looks like
`TypeError: argument 'value': 'float' object cannot be interpreted as an integer`.
!!! note
You can specify multiple fields for ordering at indexing time.
But at query time only one ordering field is supported.
## Phrase queries vs. terms queries
For full-text search you can specify either a **phrase** query like `"the old man and the sea"`,
or a **terms** search query like `"(Old AND Man) AND Sea"`. For more details on the terms
query syntax, see Tantivy's [query parser rules](https://docs.rs/tantivy/latest/tantivy/query/struct.QueryParser.html).
!!! tip "Note"
The query parser will raise an exception on queries that are ambiguous. For example, in the query `they could have been dogs OR cats`, `OR` is capitalized so it's considered a keyword query operator. But it's ambiguous how the left part should be treated. So if you submit this search query as is, you'll get `Syntax Error: they could have been dogs OR cats`.
```py
# This raises a syntax error
table.search("they could have been dogs OR cats")
```
On the other hand, lowercasing `OR` to `or` will work, because there are no capitalized logical operators and
the query is treated as a phrase query.
```py
# This works!
table.search("they could have been dogs or cats")
```
It can be cumbersome to have to remember what will cause a syntax error depending on the type of
query you want to perform. To make this simpler, when you want to perform a phrase query, you can
enforce it in one of two ways:
1. Place the double-quoted query inside single quotes. For example, `table.search('"they could have been dogs OR cats"')` is treated as
a phrase query.
1. Explicitly declare the `phrase_query()` method. This is useful when you have a phrase query that
itself contains double quotes. For example, `table.search('the cats OR dogs were not really "pets" at all').phrase_query()`
is treated as a phrase query.
In general, a query that's declared as a phrase query will be wrapped in double quotes during parsing, with nested
double quotes replaced by single quotes.
## Configurations
By default, LanceDB configures a 1GB heap size limit for creating the index. You can
reduce this if running on a smaller node, or increase this for faster performance while
indexing a larger corpus.
```python
# configure a 512MB heap size
heap = 1024 * 1024 * 512
table.create_fts_index(["title", "content"], use_tantivy=True, writer_heap_size=heap, replace=True)
```
## Current limitations
1. New data added after creating the FTS index will appear in search results, but with increased latency due to a flat search on the unindexed portion. Re-indexing with `create_fts_index` will reduce latency. LanceDB Cloud automates this merging process, minimizing the impact on search speed.
2. We currently only support local filesystem paths for the FTS index.
This is a tantivy limitation. We've implemented an object store plugin
but there's no way in tantivy-py to specify to use it.

View File

@@ -1,23 +1,35 @@
# Building Scalar Index # Building a Scalar Index
Similar to many SQL databases, LanceDB supports several types of Scalar indices to accelerate search Scalar indices organize data by scalar attributes (e.g. numbers, categorical values), enabling fast filtering of vector data. In vector databases, scalar indices accelerate the retrieval of scalar data associated with vectors, thus enhancing the query performance when searching for vectors that meet certain scalar criteria.
Similar to many SQL databases, LanceDB supports several types of scalar indices to accelerate search
over scalar columns. over scalar columns.
- `BTREE`: The most common type is BTREE. This index is inspired by the btree data structure - `BTREE`: The most common type is BTREE. The index stores a copy of the
although only the first few layers of the btree are cached in memory. column in sorted order. This sorted copy allows a binary search to be used to
It will perform well on columns with a large number of unique values and few rows per value. satisfy queries.
- `BITMAP`: this index stores a bitmap for each unique value in the column. - `BITMAP`: this index stores a bitmap for each unique value in the column. It
This index is useful for columns with a finite number of unique values and many rows per value. uses a series of bits to indicate whether a value is present in a row of a table
For example, columns that represent "categories", "labels", or "tags" - `LABEL_LIST`: a special index that can be used on `List<T>` columns to
- `LABEL_LIST`: a special index that is used to index list columns whose values have a finite set of possibilities. support queries with `array_contains_all` and `array_contains_any`
using an underlying bitmap index.
For example, a column that contains lists of tags (e.g. `["tag1", "tag2", "tag3"]`) can be indexed with a `LABEL_LIST` index. For example, a column that contains lists of tags (e.g. `["tag1", "tag2", "tag3"]`) can be indexed with a `LABEL_LIST` index.
!!! tips "How to choose the right scalar index type"
`BTREE`: This index is good for scalar columns with mostly distinct values and does best when the query is highly selective.
`BITMAP`: This index works best for low-cardinality numeric or string columns, where the number of unique values is small (i.e., less than a few thousands).
`LABEL_LIST`: This index should be used for columns containing list-type data.
| Data Type | Filter | Index Type | | Data Type | Filter | Index Type |
| --------------------------------------------------------------- | ----------------------------------------- | ------------ | | --------------------------------------------------------------- | ----------------------------------------- | ------------ |
| Numeric, String, Temporal | `<`, `=`, `>`, `in`, `between`, `is null` | `BTREE` | | Numeric, String, Temporal | `<`, `=`, `>`, `in`, `between`, `is null` | `BTREE` |
| Boolean, numbers or strings with fewer than 1,000 unique values | `<`, `=`, `>`, `in`, `between`, `is null` | `BITMAP` | | Boolean, numbers or strings with fewer than 1,000 unique values | `<`, `=`, `>`, `in`, `between`, `is null` | `BITMAP` |
| List of low cardinality of numbers or strings | `array_has_any`, `array_has_all` | `LABEL_LIST` | | List of low cardinality of numbers or strings | `array_has_any`, `array_has_all` | `LABEL_LIST` |
### Create a scalar index
=== "Python" === "Python"
```python ```python
@@ -46,7 +58,7 @@ over scalar columns.
await tlb.create_index("publisher", { config: lancedb.Index.bitmap() }) await tlb.create_index("publisher", { config: lancedb.Index.bitmap() })
``` ```
For example, the following scan will be faster if the column `my_col` has a scalar index: The following scan will be faster if the column `book_id` has a scalar index:
=== "Python" === "Python"
@@ -106,3 +118,30 @@ Scalar indices can also speed up scans containing a vector search or full text s
.limit(10) .limit(10)
.toArray(); .toArray();
``` ```
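As a Python counterpart to the snippet above, a minimal sketch of a pre-filtered vector search that can take advantage of a scalar index on `book_id` (the column name and query vector are illustrative):
```python
# The filter on `book_id` can be satisfied by the scalar index before the
# vector search runs (a sketch; assumes a 2-dimensional vector column).
results = (
    table.search([0.1, 0.3])
    .where("book_id = 2", prefilter=True)
    .limit(10)
    .to_list()
)
```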
### Update a scalar index
Updating the table data (adding, deleting, or modifying records) requires that you also update the scalar index. This can be done by calling `optimize`, which will trigger an update to the existing scalar index.
=== "Python"
```python
table.add([{"vector": [7, 8], "book_id": 4}])
table.optimize()
```
=== "TypeScript"
```typescript
await tbl.add([{ vector: [7, 8], book_id: 4 }]);
await tbl.optimize();
```
=== "Rust"
```rust
let more_data: Box<dyn RecordBatchReader + Send> = create_some_records()?;
tbl.add(more_data).execute().await?;
tbl.optimize(OptimizeAction::All).execute().await?;
```
!!! note
New data added after creating the scalar index will still appear in search results if optimize is not used, but with increased latency due to a flat search on the unindexed portion. LanceDB Cloud automates the optimize process, minimizing the impact on search speed.

View File

@@ -27,10 +27,13 @@ LanceDB OSS supports object stores such as AWS S3 (and compatible stores), Azure
Azure Blob Storage: Azure Blob Storage:
<!-- skip-test -->
```python ```python
import lancedb import lancedb
db = lancedb.connect("az://bucket/path") db = lancedb.connect("az://bucket/path")
``` ```
Note that for Azure, storage credentials must be configured. See [below](#azure-blob-storage) for more details.
=== "TypeScript" === "TypeScript"
@@ -87,11 +90,6 @@ In most cases, when running in the respective cloud and permissions are set up c
export TIMEOUT=60s export TIMEOUT=60s
``` ```
!!! note "`storage_options` availability"
The `storage_options` parameter is only available in Python *async* API and JavaScript API.
It is not yet supported in the Python synchronous API.
If you only want this to apply to one particular connection, you can pass the `storage_options` argument when opening the connection: If you only want this to apply to one particular connection, you can pass the `storage_options` argument when opening the connection:
=== "Python" === "Python"
@@ -498,7 +496,7 @@ This can also be done with the ``AWS_ENDPOINT`` and ``AWS_DEFAULT_REGION`` envir
#### S3 Express #### S3 Express
LanceDB supports [S3 Express One Zone](https://aws.amazon.com/s3/storage-classes/express-one-zone/) endpoints, but requires additional configuration. Also, S3 Express endpoints only support connecting from an EC2 instance within the same region. LanceDB supports [S3 Express One Zone](https://aws.amazon.com/s3/storage-classes/express-one-zone/) endpoints, but requires additional infrastructure configuration for the compute service, such as EC2 or Lambda. Please refer to [Networking requirements for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-networking.html).
To configure LanceDB to use an S3 Express endpoint, you must set the storage option `s3_express`. The bucket name in your table URI should **include the suffix**. To configure LanceDB to use an S3 Express endpoint, you must set the storage option `s3_express`. The bucket name in your table URI should **include the suffix**.
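For example, a minimal Python sketch (the bucket name, availability-zone suffix, and path are placeholders; assumes the `storage_options` keyword on `connect`):
```python
import lancedb

# S3 Express One Zone bucket names must include the availability-zone suffix.
db = lancedb.connect(
    "s3://my-bucket--use1-az4--x-s3/path",
    storage_options={"s3_express": "true"},
)
```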

View File

@@ -85,13 +85,13 @@ Initialize a LanceDB connection and create a table
```ts ```ts
--8<-- "nodejs/examples/basic.ts:create_table" --8<-- "nodejs/examples/basic.test.ts:create_table"
``` ```
This will infer the schema from the provided data. If you want to explicitly provide a schema, you can use `apache-arrow` to declare a schema This will infer the schema from the provided data. If you want to explicitly provide a schema, you can use `apache-arrow` to declare a schema
```ts ```ts
--8<-- "nodejs/examples/basic.ts:create_table_with_schema" --8<-- "nodejs/examples/basic.test.ts:create_table_with_schema"
``` ```
!!! info "Note" !!! info "Note"
@@ -100,14 +100,14 @@ Initialize a LanceDB connection and create a table
passed in will NOT be appended to the table in that case. passed in will NOT be appended to the table in that case.
```ts ```ts
--8<-- "nodejs/examples/basic.ts:create_table_exists_ok" --8<-- "nodejs/examples/basic.test.ts:create_table_exists_ok"
``` ```
Sometimes you want to make sure that you start fresh. If you want to Sometimes you want to make sure that you start fresh. If you want to
overwrite the table, you can pass in mode: "overwrite" to the createTable function. overwrite the table, you can pass in mode: "overwrite" to the createTable function.
```ts ```ts
--8<-- "nodejs/examples/basic.ts:create_table_overwrite" --8<-- "nodejs/examples/basic.test.ts:create_table_overwrite"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -227,7 +227,7 @@ LanceDB supports float16 data type!
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:create_f16_table" --8<-- "nodejs/examples/basic.test.ts:create_f16_table"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -274,7 +274,7 @@ table = db.create_table(table_name, schema=Content)
Sometimes your data model may contain nested objects. Sometimes your data model may contain nested objects.
For example, you may want to store the document string For example, you may want to store the document string
and the document soure name as a nested Document object: and the document source name as a nested Document object:
```python ```python
class Document(BaseModel): class Document(BaseModel):
@@ -455,7 +455,7 @@ You can create an empty table for scenarios where you want to add data to the ta
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:create_empty_table" --8<-- "nodejs/examples/basic.test.ts:create_empty_table"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -466,7 +466,7 @@ You can create an empty table for scenarios where you want to add data to the ta
## Adding to a table ## Adding to a table
After a table has been created, you can always add more data to it usind the `add` method After a table has been created, you can always add more data to it using the `add` method
=== "Python" === "Python"
You can add any of the valid data structures accepted by LanceDB table, i.e, `dict`, `list[dict]`, `pd.DataFrame`, or `Iterator[pa.RecordBatch]`. Below are some examples. You can add any of the valid data structures accepted by LanceDB table, i.e, `dict`, `list[dict]`, `pd.DataFrame`, or `Iterator[pa.RecordBatch]`. Below are some examples.
@@ -535,7 +535,7 @@ After a table has been created, you can always add more data to it usind the `ad
``` ```
??? "Ingesting Pydantic models with LanceDB embedding API" ??? "Ingesting Pydantic models with LanceDB embedding API"
When using LanceDB's embedding API, you can add Pydantic models directly to the table. LanceDB will automatically convert the `vector` field to a vector before adding it to the table. You need to specify the default value of `vector` feild as None to allow LanceDB to automatically vectorize the data. When using LanceDB's embedding API, you can add Pydantic models directly to the table. LanceDB will automatically convert the `vector` field to a vector before adding it to the table. You need to specify the default value of `vector` field as None to allow LanceDB to automatically vectorize the data.
```python ```python
import lancedb import lancedb
@@ -790,6 +790,122 @@ Use the `drop_table()` method on the database to remove a table.
This permanently removes the table and is not recoverable, unlike deleting rows. This permanently removes the table and is not recoverable, unlike deleting rows.
If the table does not exist an exception is raised. If the table does not exist an exception is raised.
## Changing schemas
While tables must have a schema specified when they are created, you can
change the schema over time. There's three methods to alter the schema of
a table:
* `add_columns`: Add new columns to the table
* `alter_columns`: Alter the name, nullability, or data type of a column
* `drop_columns`: Drop columns from the table
### Adding new columns
You can add new columns to the table with the `add_columns` method. New columns
are filled with values based on a SQL expression. For example, you can add a new
column `y` to the table, fill it with the value of `x * 2` and set the expected
data type for it.
=== "Python"
```python
--8<-- "python/python/tests/docs/test_basic.py:add_columns"
```
**API Reference:** [lancedb.table.Table.add_columns][]
=== "Typescript"
```typescript
--8<-- "nodejs/examples/basic.test.ts:add_columns"
```
**API Reference:** [lancedb.Table.addColumns](../js/classes/Table.md/#addcolumns)
If you want to fill it with null, you can use `cast(NULL as <data_type>)` as
the SQL expression to fill the column with nulls, while controlling the data
type of the column. Available data types are based on the
[DataFusion data types](https://datafusion.apache.org/user-guide/sql/data_types.html).
You can use any of the SQL types, such as `BIGINT`:
```sql
cast(NULL as BIGINT)
```
Using Arrow data types and the `arrow_typeof` function is not yet supported.
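As a concrete illustration of the `y = x * 2` example above, a minimal Python sketch (assuming `add_columns` accepts a mapping of new column name to SQL expression, and a table with an existing integer column `x`):
```python
# Add a computed column `y` and a BIGINT column `z` filled with nulls.
table.add_columns({
    "y": "cast(x * 2 AS BIGINT)",
    "z": "cast(NULL AS BIGINT)",
})
```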
<!-- TODO: we could provide a better formula for filling with nulls:
https://github.com/lancedb/lance/issues/3175
-->
### Altering existing columns
You can alter the name, nullability, or data type of a column with the `alter_columns`
method.
Changing the name or nullability of a column just updates the metadata. Because
of this, it's a fast operation. Changing the data type of a column requires
rewriting the column, which can be a heavy operation.
=== "Python"
```python
import pyarrow as pa
--8<-- "python/python/tests/docs/test_basic.py:alter_columns"
```
**API Reference:** [lancedb.table.Table.alter_columns][]
=== "Typescript"
```typescript
--8<-- "nodejs/examples/basic.test.ts:alter_columns"
```
**API Reference:** [lancedb.Table.alterColumns](../js/classes/Table.md/#altercolumns)
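Outside the tabbed snippets, a minimal Python sketch might look like the following; the alteration keys (`path`, `name`, `nullable`, `data_type`) are assumptions based on the PyArrow-style description above:
```python
import pyarrow as pa

# Rename column `y` to `double_x`, make it nullable, and widen it to int64
# (the keys of the alteration dict are assumed, not confirmed by this page).
table.alter_columns({
    "path": "y",
    "name": "double_x",
    "nullable": True,
    "data_type": pa.int64(),
})
```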
### Dropping columns
You can drop columns from the table with the `drop_columns` method. This will remove the columns from the schema.
<!-- TODO: Provide guidance on how to reduce disk usage once optimize helps here
waiting on: https://github.com/lancedb/lance/issues/3177
-->
=== "Python"
```python
--8<-- "python/python/tests/docs/test_basic.py:drop_columns"
```
**API Reference:** [lancedb.table.Table.drop_columns][]
=== "Typescript"
```typescript
--8<-- "nodejs/examples/basic.test.ts:drop_columns"
```
**API Reference:** [lancedb.Table.dropColumns](../js/classes/Table.md/#dropcolumns)
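A minimal Python sketch of the call itself (the column names are placeholders):
```python
# Remove the listed columns from the table schema.
table.drop_columns(["y", "z"])
```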
## Handling bad vectors
In LanceDB Python, you can use the `on_bad_vectors` parameter to choose how
invalid vector values are handled. Vectors are considered invalid when:
1. They are the wrong dimension
2. They contain NaN values
3. They are null but are on a non-nullable field
By default, LanceDB will raise an error if it encounters a bad vector. You can
also choose one of the following options:
* `drop`: Ignore rows with bad vectors
* `fill`: Replace bad values (NaNs) or missing values (too few dimensions) with
the fill value specified in the `fill_value` parameter. An input like
`[1.0, NaN, 3.0]` will be replaced with `[1.0, 0.0, 3.0]` if `fill_value=0.0`.
* `null`: Replace bad vectors with null (only works if the column is nullable).
A bad vector `[1.0, NaN, 3.0]` will be replaced with `null` if the column is
nullable. If the vector column is non-nullable, then bad vectors will cause an
error.
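For example, a minimal sketch (assuming `on_bad_vectors` and `fill_value` are accepted by `create_table`, as described above):
```python
import lancedb

db = lancedb.connect("data/sample-lancedb")

# The NaN in the second row is replaced with 0.0 instead of raising an error.
table = db.create_table(
    "my_vectors",
    data=[
        {"id": 1, "vector": [1.0, 2.0]},
        {"id": 2, "vector": [1.0, float("nan")]},
    ],
    on_bad_vectors="fill",
    fill_value=0.0,
)
```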
## Consistency ## Consistency
@@ -859,4 +975,4 @@ There are three possible settings for `read_consistency_interval`:
Learn the best practices on creating an ANN index and getting the most out of it. Learn the best practices on creating an ANN index and getting the most out of it.
[^1]: The `vectordb` package is a legacy package that is deprecated in favor of `@lancedb/lancedb`. The `vectordb` package will continue to receive bug fixes and security updates until September 2024. We recommend all new projects use `@lancedb/lancedb`. See the [migration guide](migration.md) for more information. [^1]: The `vectordb` package is a legacy package that is deprecated in favor of `@lancedb/lancedb`. The `vectordb` package will continue to receive bug fixes and security updates until September 2024. We recommend all new projects use `@lancedb/lancedb`. See the [migration guide](../migration.md) for more information.

View File

@@ -49,7 +49,8 @@ The following pages go deeper into the internal of LanceDB and how to use it.
* [Working with tables](guides/tables.md): Learn how to work with tables and their associated functions * [Working with tables](guides/tables.md): Learn how to work with tables and their associated functions
* [Indexing](ann_indexes.md): Understand how to create indexes * [Indexing](ann_indexes.md): Understand how to create indexes
* [Vector search](search.md): Learn how to perform vector similarity search * [Vector search](search.md): Learn how to perform vector similarity search
* [Full-text search](fts.md): Learn how to perform full-text search * [Full-text search (native)](fts.md): Learn how to perform full-text search
* [Full-text search (tantivy-based)](fts_tantivy.md): Learn how to perform full-text search using Tantivy
* [Managing embeddings](embeddings/index.md): Managing embeddings and the embedding functions API in LanceDB * [Managing embeddings](embeddings/index.md): Managing embeddings and the embedding functions API in LanceDB
* [Ecosystem Integrations](integrations/index.md): Integrate LanceDB with other tools in the data ecosystem * [Ecosystem Integrations](integrations/index.md): Integrate LanceDB with other tools in the data ecosystem
* [Python API Reference](python/python.md): Python OSS and Cloud API references * [Python API Reference](python/python.md): Python OSS and Cloud API references

View File

@@ -1,5 +1,10 @@
# Langchain **LangChain** is a framework designed for building applications with large language models (LLMs) by chaining together various components. It supports a range of functionalities including memory, agents, and chat models, enabling developers to create context-aware applications.
![Illustration](../assets/langchain.png)
![Illustration](https://raw.githubusercontent.com/lancedb/assets/refs/heads/main/docs/assets/integration/langchain_rag.png)
LangChain streamlines these stages (shown in the figure above) by providing pre-built components and tools for integration, memory management, and deployment, allowing developers to focus on application logic rather than underlying complexities.
Integration of **LangChain** with **LanceDB** enables applications to retrieve the most relevant data by comparing query vectors against stored vectors, facilitating effective information retrieval. This results in better, context-aware replies and actions by the LLMs.
## Quick Start ## Quick Start
You can load your document data using langchain's loaders, for this example we are using `TextLoader` and `OpenAIEmbeddings` as the embedding model. Checkout Complete example here - [LangChain demo](../notebooks/langchain_example.ipynb) You can load your document data using langchain's loaders, for this example we are using `TextLoader` and `OpenAIEmbeddings` as the embedding model. Checkout Complete example here - [LangChain demo](../notebooks/langchain_example.ipynb)
@@ -26,20 +31,28 @@ print(docs[0].page_content)
## Documentation ## Documentation
In the above example `LanceDB` vector store class object is created using `from_documents()` method which is a `classmethod` and returns the initialized class object. In the above example `LanceDB` vector store class object is created using `from_documents()` method which is a `classmethod` and returns the initialized class object.
You can also use `LanceDB.from_texts(texts: List[str],embedding: Embeddings)` class method. You can also use `LanceDB.from_texts(texts: List[str],embedding: Embeddings)` class method.
The exhaustive list of parameters for `LanceDB` vector store are : The exhaustive list of parameters for `LanceDB` vector store are :
- `connection`: (Optional) `lancedb.db.LanceDBConnection` connection object to use. If not provided, a new connection will be created.
- `embedding`: Langchain embedding model. |Name|type|Purpose|default|
- `vector_key`: (Optional) Column name to use for vector's in the table. Defaults to `'vector'`. |:----|:----|:----|:----|
- `id_key`: (Optional) Column name to use for id's in the table. Defaults to `'id'`. |`connection`| (Optional) `Any` |`lancedb.db.LanceDBConnection` connection object to use. If not provided, a new connection will be created.|`None`|
- `text_key`: (Optional) Column name to use for text in the table. Defaults to `'text'`. |`embedding`| (Optional) `Embeddings` | Langchain embedding model.|Provided by user.|
- `table_name`: (Optional) Name of your table in the database. Defaults to `'vectorstore'`. |`uri`| (Optional) `str` |It specifies the directory location of **LanceDB database** and establishes a connection that can be used to interact with the database. |`/tmp/lancedb`|
- `api_key`: (Optional) API key to use for LanceDB cloud database. Defaults to `None`. |`vector_key` |(Optional) `str`| Column name to use for vector's in the table.|`'vector'`|
- `region`: (Optional) Region to use for LanceDB cloud database. Only for LanceDB Cloud, defaults to `None`. |`id_key` |(Optional) `str`| Column name to use for id's in the table.|`'id'`|
- `mode`: (Optional) Mode to use for adding data to the table. Defaults to `'overwrite'`. |`text_key` |(Optional) `str` |Column name to use for text in the table.|`'text'`|
- `reranker`: (Optional) The reranker to use for LanceDB. |`table_name` |(Optional) `str`| Name of your table in the database.|`'vectorstore'`|
- `relevance_score_fn`: (Optional[Callable[[float], float]]) Langchain relevance score function to be used. Defaults to `None`. |`api_key` |(Optional) `str` |API key to use for LanceDB cloud database.|`None`|
|`region` |(Optional) `str`| Region to use for LanceDB cloud database. Only for LanceDB Cloud.|`None`|
|`mode` |(Optional) `str` |Mode to use for adding data to the table. Valid values are "append" and "overwrite".|`'overwrite'`|
|`table`| (Optional) `Any`|You can connect to an existing table of LanceDB, created outside of langchain, and utilize it.|`None`|
|`distance`|(Optional) `str`|The choice of distance metric used to calculate the similarity between vectors.|`'l2'`|
|`reranker` |(Optional) `Any`|The reranker to use for LanceDB.|`None`|
|`relevance_score_fn` |(Optional) `Callable[[float], float]` | Langchain relevance score function to be used.|`None`|
|`limit`|`int`|Set the maximum number of results to return.|`DEFAULT_K` (it is 4)|
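Alongside the cloud example below, a minimal local-setup sketch (the `langchain_community` and `langchain_openai` imports, the path, and the table name are illustrative):
```python
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import LanceDB

embeddings = OpenAIEmbeddings()

# Local LanceDB directory; the table is created if it does not exist.
vector_store = LanceDB(
    uri="/tmp/lancedb",
    embedding=embeddings,
    table_name="vectorstore",
)
```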
```python ```python
db_url = "db://lang_test" # url of db you created db_url = "db://lang_test" # url of db you created
@@ -58,12 +71,17 @@ vector_store = LanceDB(
### Methods ### Methods
##### add_texts() ##### add_texts()
- `texts`: `Iterable` of strings to add to the vectorstore.
- `metadatas`: Optional `list[dict()]` of metadatas associated with the texts.
- `ids`: Optional `list` of ids to associate with the texts.
- `kwargs`: `Any`
This method adds texts and stores respective embeddings automatically. This method turns texts into embeddings and adds them to the database.
|Name|Purpose|defaults|
|:---|:---|:---|
|`texts`|`Iterable` of strings to add to the vectorstore.|Provided by user|
|`metadatas`|Optional `list[dict()]` of metadatas associated with the texts.|`None`|
|`ids`|Optional `list` of ids to associate with the texts.|`None`|
|`kwargs`| Other keyword arguments provided by the user. |-|
It returns a list of ids of the added texts.
```python ```python
vector_store.add_texts(texts = ['test_123'], metadatas =[{'source' :'wiki'}]) vector_store.add_texts(texts = ['test_123'], metadatas =[{'source' :'wiki'}])
@@ -78,14 +96,25 @@ pd_df.to_csv("docsearch.csv", index=False)
# you can also create a new vector store object using an older connection object: # you can also create a new vector store object using an older connection object:
vector_store = LanceDB(connection=tbl, embedding=embeddings) vector_store = LanceDB(connection=tbl, embedding=embeddings)
``` ```
##### create_index()
- `col_name`: `Optional[str] = None`
- `vector_col`: `Optional[str] = None`
- `num_partitions`: `Optional[int] = 256`
- `num_sub_vectors`: `Optional[int] = 96`
- `index_cache_size`: `Optional[int] = None`
This method creates an index for the vector store. For index creation make sure your table has enough data in it. An ANN index is ususally not needed for datasets ~100K vectors. For large-scale (>1M) or higher dimension vectors, it is beneficial to create an ANN index. ------
##### create_index()
This method creates a scalar index (for non-vector columns) or a vector index on a table.
|Name|type|Purpose|defaults|
|:---|:---|:---|:---|
|`vector_col`|`Optional[str]`| Provide if you want to create index on a vector column. |`None`|
|`col_name`|`Optional[str]`| Provide if you want to create index on a non-vector column. |`None`|
|`metric`|`Optional[str]` |Provide the metric to use for vector index. choice of metrics: 'L2', 'dot', 'cosine'. |`L2`|
|`num_partitions`|`Optional[int]`|Number of partitions to use for the index.|`256`|
|`num_sub_vectors`|`Optional[int]` |Number of sub-vectors to use for the index.|`96`|
|`index_cache_size`|`Optional[int]` |Size of the index cache.|`None`|
|`name`|`Optional[str]` |Name of the table to create index on.|`None`|
For index creation, make sure your table has enough data in it. An ANN index is usually not needed for datasets of ~100K vectors. For large-scale (>1M) or higher-dimension vectors, it is beneficial to create an ANN index.
```python ```python
# for creating vector index # for creating vector index
@@ -96,42 +125,63 @@ vector_store.create_index(col_name='text')
``` ```
##### similarity_search() ------
- `query`: `str`
- `k`: `Optional[int] = None`
- `filter`: `Optional[Dict[str, str]] = None`
- `fts`: `Optional[bool] = False`
- `name`: `Optional[str] = None`
- `kwargs`: `Any`
Return documents most similar to the query without relevance scores ##### similarity_search()
This method performs similarity search based on **text query**.
| Name | Type | Purpose | Default |
|---------|----------------------|---------|---------|
| `query` | `str` | A `str` representing the text query that you want to search for in the vector store. | N/A |
| `k` | `Optional[int]` | It specifies the number of documents to return. | `None` |
| `filter` | `Optional[Dict[str, str]]`| It is used to filter the search results by specific metadata criteria. | `None` |
| `fts` | `Optional[bool]` | It indicates whether to perform a full-text search (FTS). | `False` |
| `name` | `Optional[str]` | It is used for specifying the name of the table to query. If not provided, it uses the default table set during the initialization of the LanceDB instance. | `None` |
| `kwargs` | `Any` | Other keyword arguments provided by the user. | N/A |
Return documents most similar to the query **without relevance scores**.
```python ```python
docs = docsearch.similarity_search(query) docs = docsearch.similarity_search(query)
print(docs[0].page_content) print(docs[0].page_content)
``` ```
##### similarity_search_by_vector() ------
- `embedding`: `List[float]`
- `k`: `Optional[int] = None`
- `filter`: `Optional[Dict[str, str]] = None`
- `name`: `Optional[str] = None`
- `kwargs`: `Any`
Returns documents most similar to the query vector. ##### similarity_search_by_vector()
The method returns documents that are most similar to the specified **embedding (query) vector**.
| Name | Type | Purpose | Default |
|-------------|---------------------------|---------|---------|
| `embedding` | `List[float]` | The embedding vector you want to use to search for similar documents in the vector store. | N/A |
| `k` | `Optional[int]` | It specifies the number of documents to return. | `None` |
| `filter` | `Optional[Dict[str, str]]`| It is used to filter the search results by specific metadata criteria. | `None` |
| `name` | `Optional[str]` | It is used for specifying the name of the table to query. If not provided, it uses the default table set during the initialization of the LanceDB instance. | `None` |
| `kwargs` | `Any` | Other keyword arguments provided by the user. | N/A |
**It does not provide relevance scores.**
```python ```python
docs = docsearch.similarity_search_by_vector(query) docs = docsearch.similarity_search_by_vector(query)
print(docs[0].page_content) print(docs[0].page_content)
``` ```
##### similarity_search_with_score() ------
- `query`: `str`
- `k`: `Optional[int] = None`
- `filter`: `Optional[Dict[str, str]] = None`
- `kwargs`: `Any`
Returns documents most similar to the query string with relevance scores, gets called by base class's `similarity_search_with_relevance_scores` which selects relevance score based on our `_select_relevance_score_fn`. ##### similarity_search_with_score()
Returns documents most similar to the **query string** along with their relevance scores.
| Name | Type | Purpose | Default |
|----------|---------------------------|---------|---------|
| `query` | `str` |A `str` representing the text query you want to search for in the vector store. This query will be converted into an embedding using the specified embedding function. | N/A |
| `k` | `Optional[int]` | It specifies the number of documents to return. | `None` |
| `filter` | `Optional[Dict[str, str]]`| It is used to filter the search results by specific metadata criteria. This allows you to narrow down the search results based on certain metadata attributes associated with the documents. | `None` |
| `kwargs` | `Any` | Other keyword arguments provided by the user. | N/A |
It gets called by base class's `similarity_search_with_relevance_scores` which selects relevance score based on our `_select_relevance_score_fn`.
```python ```python
docs = docsearch.similarity_search_with_relevance_scores(query) docs = docsearch.similarity_search_with_relevance_scores(query)
@@ -139,15 +189,21 @@ print("relevance score - ", docs[0][1])
print("text- ", docs[0][0].page_content[:1000]) print("text- ", docs[0][0].page_content[:1000])
``` ```
##### similarity_search_by_vector_with_relevance_scores() ------
- `embedding`: `List[float]`
- `k`: `Optional[int] = None`
- `filter`: `Optional[Dict[str, str]] = None`
- `name`: `Optional[str] = None`
- `kwargs`: `Any`
Return documents most similar to the query vector with relevance scores. ##### similarity_search_by_vector_with_relevance_scores()
Similarity search using a **query vector**, returning relevance scores.
| Name | Type | Purpose | Default |
|-------------|---------------------------|---------|---------|
| `embedding` | `List[float]` | The embedding vector you want to use to search for similar documents in the vector store. | N/A |
| `k` | `Optional[int]` | It specifies the number of documents to return. | `None` |
| `filter` | `Optional[Dict[str, str]]`| It is used to filter the search results by specific metadata criteria. | `None` |
| `name` | `Optional[str]` | It is used for specifying the name of the table to query. | `None` |
| `kwargs` | `Any` | Other keyword arguments provided by the user. | N/A |
The method returns documents most similar to the specified embedding (query) vector, along with their relevance scores.
```python ```python
docs = docsearch.similarity_search_by_vector_with_relevance_scores(query_embedding) docs = docsearch.similarity_search_by_vector_with_relevance_scores(query_embedding)
@@ -155,20 +211,22 @@ print("relevance score - ", docs[0][1])
print("text- ", docs[0][0].page_content[:1000]) print("text- ", docs[0][0].page_content[:1000])
``` ```
##### max_marginal_relevance_search() ------
- `query`: `str`
- `k`: `Optional[int] = None`
- `fetch_k` : Number of Documents to fetch to pass to MMR algorithm, `Optional[int] = None`
- `lambda_mult`: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5. `float = 0.5`
- `filter`: `Optional[Dict[str, str]] = None`
- `kwargs`: `Any`
Returns docs selected using the maximal marginal relevance(MMR). ##### max_marginal_relevance_search()
This method returns docs selected using maximal marginal relevance (MMR).
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
| Name | Type | Purpose | Default |
|---------------|-----------------|-----------|---------|
| `query` | `str` | Text to look up documents similar to. | N/A |
| `k` | `Optional[int]` | Number of Documents to return.| `4` |
| `fetch_k`| `Optional[int]`| Number of Documents to fetch to pass to MMR algorithm.| `None` |
| `lambda_mult` | `float` | Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. | `0.5` |
| `filter`| `Optional[Dict[str, str]]`| Filter by metadata. | `None` |
|`kwargs`| `Any` | Other keyword arguments provided by the user. | - |
Similarly, `max_marginal_relevance_search_by_vector()` function returns docs most similar to the embedding passed to the function using MMR. Instead of a string query you need to pass the embedding to be searched for. Similarly, `max_marginal_relevance_search_by_vector()` function returns docs most similar to the embedding passed to the function using MMR. Instead of a string query you need to pass the embedding to be searched for.
```python ```python
@@ -186,12 +244,19 @@ result_texts = [doc.page_content for doc in result]
print(result_texts) print(result_texts)
``` ```
------
##### add_images()
This method adds images by automatically creating their embeddings and adds them to the vectorstore.
| Name | Type | Purpose | Default |
|------------|-------------------------------|--------------------------------|---------|
| `uris` | `List[str]` | File path to the image | N/A |
| `metadatas`| `Optional[List[dict]]` | Optional list of metadatas | `None` |
| `ids` | `Optional[List[str]]` | Optional list of IDs | `None` |
It returns a list of IDs of the added images.
```python
vec_store.add_images(uris=image_uris)
```
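A slightly fuller sketch, assuming `vec_store` is the same image-capable vector store and that the file paths below are placeholders:
```python
image_uris = ["./images/cat.jpg", "./images/dog.jpg"]  # placeholder paths
metadatas = [{"label": "cat"}, {"label": "dog"}]        # optional per-image metadata

ids = vec_store.add_images(uris=image_uris, metadatas=metadatas)
print(ids)  # list of IDs of the added images
```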
**phidata** is a framework for building **AI Assistants** with long-term memory, contextual knowledge, and the ability to take actions using function calling. It helps turn general-purpose LLMs into specialized assistants tailored to your use case by extending its capabilities using **memory**, **knowledge**, and **tools**.
- **Memory**: Stores chat history in a **database** and enables LLMs to have long-term conversations.
- **Knowledge**: Stores information in a **vector database** and provides LLMs with business context. (Here we will use LanceDB)
- **Tools**: Enable LLMs to take actions like pulling data from an **API**, **sending emails** or **querying a database**, etc.
![example](https://raw.githubusercontent.com/lancedb/assets/refs/heads/main/docs/assets/integration/phidata_assistant.png)
Memory & knowledge make LLMs smarter while tools make them autonomous.
LanceDB is a vector database and its integration into phidata makes it easy for us to provide a **knowledge base** to LLMs. It enables us to store information as [embeddings](../embeddings/understanding_embeddings.md) and search for **results** similar to our **query**.
??? Question "What is Knowledge Base?"
Knowledge Base is a database of information that the Assistant can search to improve its responses. This information is stored in a vector database and provides LLMs with business context, which makes them respond in a context-aware manner.
While any type of storage can act as a knowledge base, vector databases offer the best solution for retrieving relevant results from dense information quickly.
Let's see how using LanceDB inside phidata helps make the LLM more useful:
## Prerequisites: install and import necessary dependencies
**Create a virtual environment**
1. Install the virtualenv package
```python
pip install virtualenv
```
2. Create a directory for your project, go into it, and create a virtual environment inside it.
```python
mkdir phi
```
```python
cd phi
```
```python
python -m venv phidata_
```
**Activating virtual environment**
1. From inside the project directory, run the following command to activate the virtual environment (this is the Windows path; on macOS/Linux use `source phidata_/bin/activate`).
```python
phidata_/Scripts/activate
```
**Install the following packages in the virtual environment**
```python
pip install lancedb phidata youtube_transcript_api openai ollama numpy pandas
```
**Create python files and import necessary libraries**
You need to create two files - `transcript.py` and either `ollama_assistant.py` or `openai_assistant.py`
=== "openai_assistant.py"
```python
import os, openai
from rich.prompt import Prompt
from phi.assistant import Assistant
from phi.knowledge.text import TextKnowledgeBase
from phi.vectordb.lancedb import LanceDb
from phi.llm.openai import OpenAIChat
from phi.embedder.openai import OpenAIEmbedder
from transcript import extract_transcript

if "OPENAI_API_KEY" not in os.environ:
    # OR set the key here as a variable
    openai.api_key = "sk-..."

# The code below creates a file "transcript.txt" in the directory, the txt file will be used below
youtube_url = "https://www.youtube.com/watch?v=Xs33-Gzl8Mo"
segment_duration = 20
transcript_text, dict_transcript = extract_transcript(youtube_url, segment_duration)
```
=== "ollama_assistant.py"
```python
from rich.prompt import Prompt
from phi.assistant import Assistant
from phi.knowledge.text import TextKnowledgeBase
from phi.vectordb.lancedb import LanceDb
from phi.llm.ollama import Ollama
from phi.embedder.ollama import OllamaEmbedder
from transcript import extract_transcript
# The code below creates a file "transcript.txt" in the directory, the txt file will be used below
youtube_url = "https://www.youtube.com/watch?v=Xs33-Gzl8Mo"
segment_duration = 20
transcript_text,dict_transcript = extract_transcript(youtube_url,segment_duration)
```
=== "transcript.py"
``` python
from youtube_transcript_api import YouTubeTranscriptApi
import re

def smodify(seconds):
    hours, remainder = divmod(seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{int(hours):02}:{int(minutes):02}:{int(seconds):02}"

def extract_transcript(youtube_url, segment_duration):
    # Extract video ID from the URL
    video_id = re.search(r'(?<=v=)[\w-]+', youtube_url)
    if not video_id:
        video_id = re.search(r'(?<=be/)[\w-]+', youtube_url)
    if not video_id:
        return None
    video_id = video_id.group(0)

    # Attempt to fetch the transcript
    try:
        # Try to get the official transcript
        transcript = YouTubeTranscriptApi.get_transcript(video_id, languages=['en'])
    except Exception:
        # If no official transcript is found, try to get an auto-generated transcript
        try:
            transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
            for transcript in transcript_list:
                transcript = transcript.translate('en').fetch()
        except Exception:
            return None

    # Format the transcript into segment_duration-second chunks
    transcript_text, dict_transcript = format_transcript(transcript, segment_duration)

    # Open the file in write mode, which creates it if it doesn't exist
    with open("transcript.txt", "w", encoding="utf-8") as file:
        file.write(transcript_text)
    return transcript_text, dict_transcript

def format_transcript(transcript, segment_duration):
    chunked_transcript = []
    chunk_dict = []
    current_chunk = []
    current_time = 0
    start_time_chunk = 0  # To track the start time of the current chunk

    for segment in transcript:
        start_time = segment['start']
        end_time_x = start_time + segment['duration']
        text = segment['text']

        # Add text to the current chunk
        current_chunk.append(text)

        # Update the elapsed time of the current chunk,
        # given by segment['start'] - start_time_chunk
        if current_chunk:
            current_time = start_time - start_time_chunk

        # If the current chunk duration reaches or exceeds segment_duration, save the chunk
        if current_time >= segment_duration:
            # Use the start time of the first segment in the current chunk as the timestamp
            chunked_transcript.append(f"[{smodify(start_time_chunk)} to {smodify(end_time_x)}] " + " ".join(current_chunk))
            current_chunk = re.sub(r'[\xa0\n]', lambda x: '' if x.group() == '\xa0' else ' ', "\n".join(current_chunk))
            chunk_dict.append({"timestamp": f"[{smodify(start_time_chunk)} to {smodify(end_time_x)}]", "text": "".join(current_chunk)})
            current_chunk = []  # Reset the chunk
            start_time_chunk = start_time + segment['duration']  # Update the start time for the next chunk
            current_time = 0  # Reset current time

    # Add any remaining text in the last chunk
    if current_chunk:
        chunked_transcript.append(f"[{smodify(start_time_chunk)} to {smodify(end_time_x)}] " + " ".join(current_chunk))
        current_chunk = re.sub(r'[\xa0\n]', lambda x: '' if x.group() == '\xa0' else ' ', "\n".join(current_chunk))
        chunk_dict.append({"timestamp": f"[{smodify(start_time_chunk)} to {smodify(end_time_x)}]", "text": "".join(current_chunk)})

    return "\n\n".join(chunked_transcript), chunk_dict
```
!!! warning
If creating Ollama assistant, download and install Ollama [from here](https://ollama.com/) and then run the Ollama instance in the background. Also, download the required models using `ollama pull <model-name>`. Check out the models [here](https://ollama.com/library)
**Run the following command to deactivate the virtual environment if needed**
```python
deactivate
```
## **Step 1** - Create a Knowledge Base for AI Assistant using LanceDB
=== "openai_assistant.py"
```python
# Create Knowledge Base with OpenAIEmbedder in LanceDB
knowledge_base = TextKnowledgeBase(
    path="transcript.txt",
    vector_db=LanceDb(
        embedder=OpenAIEmbedder(api_key=openai.api_key),
        table_name="transcript_documents",
        uri="./t3mp/.lancedb",
    ),
    num_documents=10,
)
```
=== "ollama_assistant.py"
```python
# Create Knowledge Base with OllamaEmbedder in LanceDB
knowledge_base = TextKnowledgeBase(
    path="transcript.txt",
    vector_db=LanceDb(
        embedder=OllamaEmbedder(model="nomic-embed-text", dimensions=768),
        table_name="transcript_documents",
        uri="./t2mp/.lancedb",
    ),
    num_documents=10,
)
```
Check out the list of **embedders** supported by **phidata** and their usage [here](https://docs.phidata.com/embedder/introduction).
Here we have used `TextKnowledgeBase`, which loads text/docx files to the knowledge base.
Let's see all the parameters that `TextKnowledgeBase` takes -
| Name| Type | Purpose | Default |
|:----|:-----|:--------|:--------|
|`path`|`Union[str, Path]`| Path to text file(s). It can point to a single text file or a directory of text files.| provided by user |
|`formats`|`List[str]`| File formats accepted by this knowledge base. |`[".txt"]`|
|`vector_db`|`VectorDb`| Vector Database for the Knowledge Base. phidata provides a wrapper around many vector DBs, you can import it like this - `from phi.vectordb.lancedb import LanceDb` | provided by user |
|`num_documents`|`int`| Number of results (documents/vectors) that vector search should return. |`5`|
|`reader`|`TextReader`| phidata provides many types of reader objects which read data, clean it and create chunks of data, encapsulate each chunk inside an object of the `Document` class, and return **`List[Document]`**. | `TextReader()` |
|`optimize_on`|`int`| The number of documents after which the vector database should be optimized, typically by creating an index. |`1000`|
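For illustration, a variant of the Step 1 snippet that spells these parameters out explicitly might look like the sketch below; the directory path is a placeholder and the remaining values are the defaults listed in the table.
```python
docs_knowledge_base = TextKnowledgeBase(
    path="./transcripts/",   # placeholder: a directory of text files
    formats=[".txt"],        # file formats accepted by this knowledge base
    vector_db=LanceDb(
        embedder=OpenAIEmbedder(api_key=openai.api_key),
        table_name="transcript_documents",
        uri="./t3mp/.lancedb",
    ),
    num_documents=5,         # results returned per vector search
    optimize_on=1000,        # optimize (index) after this many documents
)
```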
??? Tip "Wonder! What is `Document` class?"
We know that, before storing the data in vectorDB, we need to split the data into smaller chunks upon which embeddings will be created and these embeddings along with the chunks will be stored in vectorDB. When the user queries over the vectorDB, some of these embeddings will be returned as the result based on the semantic similarity with the query.
When the user queries over vectorDB, the queries are converted into embeddings, and a nearest neighbor search is performed over these query embeddings which returns the embeddings that correspond to most semantically similar chunks(parts of our data) present in vectorDB.
Here, `Document` is a class in phidata. When you let phidata create and manage embeddings, it splits our data into smaller chunks (as expected), but it does not create embeddings on the raw chunks directly. Instead, it takes each chunk and encapsulates it inside an object of the `Document` class, along with various other metadata related to the chunk. Embeddings are then created on these `Document` objects and stored in the vector DB.
```python
class Document(BaseModel):
"""Model for managing a document"""
content: str # <--- here data of chunk is stored
id: Optional[str] = None
name: Optional[str] = None
meta_data: Dict[str, Any] = {}
embedder: Optional[Embedder] = None
embedding: Optional[List[float]] = None
usage: Optional[Dict[str, Any]] = None
```
However, using phidata you can load many other types of data into the knowledge base (other than text). Check out [phidata Knowledge Base](https://docs.phidata.com/knowledge/introduction) for more information.
Let's dig deeper into the `vector_db` parameter and see what parameters `LanceDb` takes -
| Name| Type | Purpose | Default |
|:----|:-----|:--------|:--------|
|`embedder`|`Embedder`| phidata provides many Embedders that abstract the interaction with embedding APIs and utilize it to generate embeddings. Check out other embedders [here](https://docs.phidata.com/embedder/introduction) | `OpenAIEmbedder` |
|`distance`|`Distance`| The choice of distance metric used to calculate the similarity between vectors, which directly impacts search results and performance in vector databases. |`Distance.cosine`|
|`connection`|`lancedb.db.LanceTable`| LanceTable can be accessed through `.connection`. You can connect to an existing table of LanceDB, created outside of phidata, and utilize it. If not provided, it creates a new table using `table_name` parameter and adds it to `connection`. |`None`|
|`uri`|`str`| It specifies the directory location of **LanceDB database** and establishes a connection that can be used to interact with the database. | `"/tmp/lancedb"` |
|`table_name`|`str`| If `connection` is not provided, it initializes and connects to a new **LanceDB table** with a specified(or default) name in the database present at `uri`. |`"phi"`|
|`nprobes`|`int`| It refers to the number of partitions that the search algorithm examines to find the nearest neighbors of a given query vector. Higher values will yield better recall (more likely to find vectors if they exist) at the expense of latency. |`20`|
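For example, here is a sketch of reusing a LanceDB table that was created outside of phidata via the `connection` parameter; the database path and table name are placeholders, and the resulting `vector_db` can then be passed to a knowledge base as before.
```python
import lancedb

# Connect to an existing LanceDB database and open a table created outside of phidata
db = lancedb.connect("./t2mp/.lancedb")
existing_table = db.open_table("transcript_documents")

vector_db = LanceDb(
    connection=existing_table,  # reuse the existing LanceTable
    embedder=OllamaEmbedder(model="nomic-embed-text", dimensions=768),
)
```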
!!! note
Since we have just initialized the Knowledge Base, the vector DB table that corresponds to it is not yet populated with our data. It will be populated in **Step 3**, once we perform the `load` operation.
You can check the state of the LanceDB table using - `knowledge_base.vector_db.connection.to_pandas()`
Now that the Knowledge Base is initialized, we can go to **Step 2**.
## **Step 2** - Create an assistant with our choice of LLM and reference to the knowledge base.
=== "openai_assistant.py"
```python
# define an assistant with gpt-4o-mini llm and reference to the knowledge base created above
assistant = Assistant(
    llm=OpenAIChat(model="gpt-4o-mini", max_tokens=1000, temperature=0.3, api_key=openai.api_key),
    description="""You are an Expert in explaining youtube video transcripts. You are a bot that takes transcript of a video and answer the question based on it.
    This is transcript for the above timestamp: {relevant_document}
    The user input is: {user_input}
    generate highlights only when asked.
    When asked to generate highlights from the video, understand the context for each timestamp and create key highlight points, answer in following way -
    [timestamp] - highlight 1
    [timestamp] - highlight 2
    ... so on
    Your task is to understand the user question, and provide an answer using the provided contexts. Your answers are correct, high-quality, and written by a domain expert. If the provided context does not contain the answer, simply state, 'The provided context does not have the answer.'""",
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,
)
```
=== "ollama_assistant.py"
```python
# define an assistant with llama3.1 llm and reference to the knowledge base created above
assistant = Assistant(
    llm=Ollama(model="llama3.1"),
    description="""You are an Expert in explaining youtube video transcripts. You are a bot that takes transcript of a video and answer the question based on it.
    This is transcript for the above timestamp: {relevant_document}
    The user input is: {user_input}
    generate highlights only when asked.
    When asked to generate highlights from the video, understand the context for each timestamp and create key highlight points, answer in following way -
    [timestamp] - highlight 1
    [timestamp] - highlight 2
    ... so on
    Your task is to understand the user question, and provide an answer using the provided contexts. Your answers are correct, high-quality, and written by a domain expert. If the provided context does not contain the answer, simply state, 'The provided context does not have the answer.'""",
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,
)
```
Assistants add **memory**, **knowledge**, and **tools** to LLMs. Here we will add only **knowledge** in this example.
Whenever we give a query to the LLM, the assistant retrieves relevant information from our **Knowledge Base** (a table in LanceDB) and passes it to the LLM along with the user query in a structured way.
- Setting `add_references_to_prompt=True` always adds information from the knowledge base to the prompt, regardless of whether it is relevant to the question.
To learn more about creating an assistant in phidata, check out the [phidata docs](https://docs.phidata.com/assistants/introduction).
## **Step 3** - Load data to Knowledge Base.
```python
# load our data into the knowledge_base (populating the LanceTable)
assistant.knowledge_base.load(recreate=False)
```
The above code loads the data into the Knowledge Base (LanceDB table), and it is now ready to be used by the assistant. The `load()` method takes the following parameters -
| Name| Type | Purpose | Default |
|:----|:-----|:--------|:--------|
|`recreate`|`bool`| If True, it drops the existing table and recreates the table in the vectorDB. |`False`|
|`upsert`|`bool`| If True and the vectorDB supports upsert, it will upsert documents to the vector db. | `False` |
|`skip_existing`|`bool`| If True, skips documents that already exist in the vectorDB when inserting. |`True`|
??? tip "What is upsert?"
Upsert is a database operation that combines "update" and "insert". It updates existing records if a document with the same identifier does exist, or inserts new records if no matching record exists. This is useful for maintaining the most current information without manually checking for existence.
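For instance, a minimal sketch of a re-load that upserts changed documents instead of skipping them; the flag values are illustrative and map directly onto the table above.
```python
# Re-load the knowledge base after transcript.txt changes
assistant.knowledge_base.load(
    recreate=False,       # keep the existing table
    upsert=True,          # update matching documents, insert new ones
    skip_existing=False,  # re-process documents that already exist
)
```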
During the Load operation, phidata directly interacts with the LanceDB library and performs the loading of the table with our data in the following steps -
1. **Creates** and **initializes** the table if it does not exist.
2. Then it **splits** our data into smaller **chunks**.
??? question "How do they create chunks?"
**phidata** provides many types of **Knowledge Bases** based on the type of data. Most of them :material-information-outline:{ title="except LlamaIndexKnowledgeBase and LangChainKnowledgeBase"} have a property method called `document_lists` of type `Iterator[List[Document]]`. During the load operation, this property method is invoked. It traverses the data we provide (in this case, a text file or files) using `reader`. Then it **reads**, **creates chunks**, and **encapsulates** each chunk inside a `Document` object and yields **lists of `Document` objects** that contain our data.
3. Then **embeddings** are created on these chunks and they are **inserted** into the LanceDB Table
??? question "How do they insert your data as different rows in LanceDB Table?"
The chunks of your data are in the form - **lists of `Document` objects**. It was yielded in the step above.
For each `Document` in `List[Document]`, it does the following operations:
- Creates an embedding for the `Document`.
- Cleans the **content** attribute (which holds the chunk of our data) of the `Document`.
- Prepares data by creating `id` and loading `payload` with the metadata related to this chunk. (1)
{ .annotate }
1. Three columns will be added to the table - `"id"`, `"vector"`, and `"payload"` (payload contains various metadata including **`content`**)
- Then add this data to LanceTable.
4. Now the internal state of `knowledge_base` is changed (embeddings are created and loaded into the table) and it is **ready to be used by the assistant**.
## **Step 4** - Start a cli chatbot with access to the Knowledge base
```python
# start cli chatbot with knowledge base
assistant.print_response("Ask me about something from the knowledge base")

while True:
    message = Prompt.ask(f"[bold] :sunglasses: User [/bold]")
    if message in ("exit", "bye"):
        break
    assistant.print_response(message, markdown=True)
```
For more information and amazing cookbooks for phidata, read the [phidata documentation](https://docs.phidata.com/introduction) and also visit the [LanceDB x phidata documentation](https://docs.phidata.com/vectordb/lancedb).
# FiftyOne
FiftyOne is an open source toolkit that enables users to curate better data and build better models. It includes tools for data exploration, visualization, and management, as well as features for collaboration and sharing.
Developers, data scientists, and researchers who work with computer vision and machine learning can use FiftyOne to improve the quality of their datasets and deliver insights about their models.
![example](../assets/voxel.gif)
**FiftyOne** provides an API to create LanceDB tables and run similarity queries, both **programmatically in Python** and via **point-and-click in the App**.
Let's get started and see how to use **LanceDB** to create a **similarity index** on your FiftyOne datasets.
## Overview
**[Embeddings](../embeddings/understanding_embeddings.md)** are foundational to all of the **vector search** features. In FiftyOne, embeddings are managed by the [**FiftyOne Brain**](https://docs.voxel51.com/user_guide/brain.html) that provides powerful machine learning techniques designed to transform how you curate your data from an art into a measurable science.
!!!question "Have you ever wanted to find the images most similar to an image in your dataset?"
The **FiftyOne Brain** makes computing **visual similarity** really easy. You can compute the similarity of samples in your dataset using an embedding model and store the results in the **brain key**.
You can then sort your samples by similarity or use this information to find potential duplicate images.
Here we will be doing the following:
1. **Create Index** - In order to run similarity queries against our media, we need to **index** the data. We can do this via the `compute_similarity()` function.
- In the function, specify the **model** you want to use to generate the embedding vectors, and what **vector search engine** you want to use on the **backend** (here LanceDB).
!!!tip
You can also give the similarity index a name (`brain_key`), which is useful if you want to run vector searches against multiple indexes.
2. **Query** - Once you have generated your similarity index, you can query your dataset with `sort_by_similarity()`. The query can be any of the following (see the sketch after this list):
- An ID (sample or patch)
- A query vector of same dimension as the index
- A list of IDs (samples or patches)
- A text prompt (search semantically)
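For instance, once a dataset is loaded and the index has been created (as in the example later on), a semantic text-prompt query might look like this sketch; the prompt text, `brain_key`, and `k` are placeholders.
```python
# Query the dataset semantically with a text prompt
# (requires an index built with a model that supports prompts, e.g. CLIP)
view = dataset.sort_by_similarity(
    "kites flying over the beach",  # placeholder text prompt
    brain_key="lancedb_index",
    k=25,
)
```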
## Prerequisites: install necessary dependencies
1. **Create and activate a virtual environment**
Install virtualenv package and run the following command in your project directory.
```python
python -m venv fiftyone_
```
From inside the project directory run the following to activate the virtual environment.
=== "Windows"
```python
fiftyone_/Scripts/activate
```
=== "macOS/Linux"
```python
source fiftyone_/bin/activate
```
2. **Install the following packages in the virtual environment**
To install FiftyOne, ensure you have activated any virtual environment that you are using, then run
```python
pip install fiftyone
```
## Understand basic workflow
The basic workflow shown below uses LanceDB to create a similarity index on your FiftyOne datasets:
1. Load a dataset into FiftyOne.
2. Compute embeddings for the samples in the dataset.
3. Create a similarity index backed by LanceDB.
4. Query your data with `sort_by_similarity()`.
5. If desired, delete the table.
## Quick Example
Let's jump into a quick example that demonstrates this workflow.
```python
import fiftyone.brain as fob
import fiftyone.zoo as foz

# Step 1: Load your data into FiftyOne
dataset = foz.load_zoo_dataset("quickstart")
```
Make sure you install torch ([guide here](https://pytorch.org/get-started/locally/)) before proceeding.
```python
# Steps 2 and 3: Compute embeddings and create a similarity index
lancedb_index = fob.compute_similarity(
    dataset,
    model="clip-vit-base32-torch",
    brain_key="lancedb_index",
    backend="lancedb",
)
```
!!! note
Running the code above will download the CLIP model (~2.6 GB).
Once the similarity index has been generated, we can query our data in FiftyOne by specifying the `brain_key`:
```python
# Step 4: Query your data
query = dataset.first().id  # query by a sample ID
view = dataset.sort_by_similarity(
    query,
    brain_key="lancedb_index",
    k=10,  # limit to 10 most similar samples
)
```
The returned results are of type `DatasetView`.
!!! note
`DatasetView` does not hold its contents in-memory. Views simply store the rule(s) that are applied to extract the content of interest from the underlying Dataset when the view is iterated/aggregated on.
This means, for example, that the contents of a `DatasetView` may change as the underlying Dataset is modified.
??? question "Can you query a view instead of dataset?"
Yes, you can also query a view.
Performing a similarity search on a `DatasetView` will only return results from the view; if the view contains samples that were not included in the index, they will never be included in the result.
This means that you can index an entire Dataset once and then perform searches on subsets of the dataset by constructing views that contain the images of interest.
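A minimal sketch of that pattern, assuming the `dataset` and `lancedb_index` from the example above; the `take(100)` subset and the query ID are placeholders.
```python
# Build a view containing a subset of the indexed dataset
subset = dataset.take(100)  # 100 random samples as a DatasetView

# Similarity search restricted to the view; only samples in the view are returned
subset_results = subset.sort_by_similarity(
    dataset.first().id,         # query by a sample ID
    brain_key="lancedb_index",  # reuse the index built on the full dataset
    k=10,
)
```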
```python
# Step 5 (optional): Cleanup
# Delete the LanceDB table
lancedb_index.cleanup()
dataset.delete_brain_run("lancedb_index")
```
## Using LanceDB backend
By default, calling `compute_similarity()` or `sort_by_similarity()` will use the sklearn backend.
To use the LanceDB backend, simply set the optional `backend` parameter of `compute_similarity()` to `"lancedb"`:
```python
import fiftyone.brain as fob
#... rest of the code
fob.compute_similarity(..., backend="lancedb", ...)
```
Alternatively, you can configure FiftyOne to use the LanceDB backend by setting the following environment variable.
In your terminal, set the environment variable using:
=== "Windows"
```python
# PowerShell
$Env:FIFTYONE_BRAIN_DEFAULT_SIMILARITY_BACKEND="lancedb"
# Command Prompt (cmd)
set FIFTYONE_BRAIN_DEFAULT_SIMILARITY_BACKEND=lancedb
```
=== "macOS/Linux"
```python
export FIFTYONE_BRAIN_DEFAULT_SIMILARITY_BACKEND=lancedb
```
!!! note
This setting only lasts for the current terminal session. Once the terminal is closed, the environment variable is cleared.
Alternatively, you can **permanently** configure FiftyOne to use the LanceDB backend by creating a `brain_config.json` at `~/.fiftyone/brain_config.json`. The JSON file may contain any desired subset of config fields that you wish to customize.
```json
{
"default_similarity_backend": "lancedb"
}
```
This will override the default `brain_config` and set it according to your customization. You can check the configuration by running the following code:
```python
import fiftyone.brain as fob
# Print your current brain config
print(fob.brain_config)
```
## LanceDB config parameters
The LanceDB backend supports query parameters that can be used to customize your similarity queries. These parameters include:
| Name| Purpose | Default |
|:----|:--------|:--------|
|**table_name**|The name of the LanceDB table to use. If none is provided, a new table will be created|`None`|
|**metric**|The embedding distance metric to use when creating a new table. The supported values are ("cosine", "euclidean")|`"cosine"`|
|**uri**| The database URI to use. Tables will be created under this database URI. |`"/tmp/lancedb"`|
There are two ways to specify/customize the parameters:
1. **Using `brain_config.json` file**
```json
{
"similarity_backends": {
"lancedb": {
"table_name": "your-table",
"metric": "euclidean",
"uri": "/tmp/lancedb"
}
}
}
```
2. **Directly passing to `compute_similarity()` to configure a specific new index** :
```python
lancedb_index = fob.compute_similarity(
...
backend="lancedb",
brain_key="lancedb_index",
table_name="your-table",
metric="euclidean",
uri="/tmp/lancedb",
)
```
For a much more in-depth walkthrough of the integration, visit the LanceDB x Voxel51 [docs page](https://docs.voxel51.com/integrations/lancedb.html).
To build everything fresh:
```bash
npm install
npm run build
```
Then you should be able to run the tests with:
```bash
npm test
```
### Fix lints
To run the linter and have it automatically fix all errors
A [WriteMode](../enums/WriteMode.md) to use on this operation
#### Defined in
[index.ts:1359](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1359)
- [dropTable](LocalConnection.md#droptable)
- [openTable](LocalConnection.md#opentable)
- [tableNames](LocalConnection.md#tablenames)
- [withMiddleware](LocalConnection.md#withmiddleware)
## Constructors
#### Defined in
[index.ts:739](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L739)
## Properties
#### Defined in
[index.ts:737](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L737)
___
#### Defined in
[index.ts:736](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L736)
## Accessors
#### Defined in
[index.ts:744](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L744)
## Methods
Creates a new Table, optionally initializing it with new data.
| Name | Type |
| :------ | :------ |
| `name` | `string` \| [`CreateTableOptions`](../interfaces/CreateTableOptions.md)\<`T`\> |
| `data?` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] |
| `optsOrEmbedding?` | [`WriteOptions`](../interfaces/WriteOptions.md) \| [`EmbeddingFunction`](../interfaces/EmbeddingFunction.md)\<`T`\> |
| `opt?` | [`WriteOptions`](../interfaces/WriteOptions.md) |
#### Defined in
[index.ts:788](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L788)
___
#### Defined in
[index.ts:822](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L822)
___
#### Defined in
[index.ts:876](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L876)
___
#### Defined in
[index.ts:760](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L760)
**openTable**\<`T`\>(`name`, `embeddings`): `Promise`\<[`Table`](../interfaces/Table.md)\<`T`\>\>
#### Defined in
[index.ts:768](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L768)
**openTable**\<`T`\>(`name`, `embeddings?`): `Promise`\<[`Table`](../interfaces/Table.md)\<`T`\>\>
#### Defined in
[index.ts:772](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L772)
___
#### Defined in
[index.ts:751](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L751)
___
### withMiddleware
**withMiddleware**(`middleware`): [`Connection`](../interfaces/Connection.md)
Instrument the behavior of this Connection with middleware.
The middleware will be called in the order they are added.
Currently this functionality is only supported for remote Connections.
#### Parameters
| Name | Type |
| :------ | :------ |
| `middleware` | `HttpMiddleware` |
#### Returns
[`Connection`](../interfaces/Connection.md)
- this Connection instrumented by the passed middleware
#### Implementation of
[Connection](../interfaces/Connection.md).[withMiddleware](../interfaces/Connection.md#withmiddleware)
#### Defined in
[index.ts:880](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L880)
### Methods
- [add](LocalTable.md#add)
- [addColumns](LocalTable.md#addcolumns)
- [alterColumns](LocalTable.md#altercolumns)
- [checkElectron](LocalTable.md#checkelectron)
- [cleanupOldVersions](LocalTable.md#cleanupoldversions)
- [compactFiles](LocalTable.md#compactfiles)
- [createIndex](LocalTable.md#createindex)
- [createScalarIndex](LocalTable.md#createscalarindex)
- [delete](LocalTable.md#delete)
- [dropColumns](LocalTable.md#dropcolumns)
- [filter](LocalTable.md#filter)
- [getSchema](LocalTable.md#getschema)
- [indexStats](LocalTable.md#indexstats)
- [listIndices](LocalTable.md#listindices)
- [mergeInsert](LocalTable.md#mergeinsert)
- [overwrite](LocalTable.md#overwrite)
- [search](LocalTable.md#search)
- [update](LocalTable.md#update)
- [withMiddleware](LocalTable.md#withmiddleware)
## Constructors
#### Defined in
[index.ts:892](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L892)
**new LocalTable**\<`T`\>(`tbl`, `name`, `options`, `embeddings`)
#### Defined in
[index.ts:899](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L899)
## Properties
#### Defined in
[index.ts:889](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L889)
___
#### Defined in
[index.ts:888](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L888)
___
#### Defined in
[index.ts:887](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L887)
___
#### Defined in
[index.ts:890](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L890)
___
#### Defined in
[index.ts:886](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L886)
___
#### Defined in
[index.ts:938](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L938)
## Accessors
#### Defined in
[index.ts:918](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L918)
___
#### Defined in
[index.ts:1171](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1171)
## Methods
Insert records into this Table.
| Name | Type | Description |
| :------ | :------ | :------ |
| `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table |
#### Returns
The number of rows added to the table
#### Defined in
[index.ts:946](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L946)
___
### addColumns
**addColumns**(`newColumnTransforms`): `Promise`\<`void`\>
Add new columns with defined values.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `newColumnTransforms` | \{ `name`: `string` ; `valueSql`: `string` }[] | pairs of column names and the SQL expression to use to calculate the value of the new column. These expressions will be evaluated for each row in the table, and can reference existing columns in the table. |
#### Returns
`Promise`\<`void`\>
#### Implementation of
[Table](../interfaces/Table.md).[addColumns](../interfaces/Table.md#addcolumns)
#### Defined in
[index.ts:1195](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1195)
___
### alterColumns
**alterColumns**(`columnAlterations`): `Promise`\<`void`\>
Alter the name or nullability of columns.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `columnAlterations` | [`ColumnAlteration`](../interfaces/ColumnAlteration.md)[] | One or more alterations to apply to columns. |
#### Returns
`Promise`\<`void`\>
#### Implementation of
[Table](../interfaces/Table.md).[alterColumns](../interfaces/Table.md#altercolumns)
#### Defined in
[index.ts:1201](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1201)
___
#### Defined in
[index.ts:1183](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1183)
___
#### Defined in
[index.ts:1130](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1130)
___
#### Defined in
[index.ts:1153](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1153)
___
### countRows
**countRows**(`filter?`): `Promise`\<`number`\>
Returns the number of rows in this table.
#### Parameters
| Name | Type |
| :------ | :------ |
| `filter?` | `string` |
#### Returns
`Promise`\<`number`\>
#### Defined in
[index.ts:1021](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1021)
___
#### Defined in
[index.ts:1003](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1003)
___
### createScalarIndex
**createScalarIndex**(`column`, `replace?`): `Promise`\<`void`\>
Create a scalar index on this Table for the given column
| Name | Type | Description |
| :------ | :------ | :------ |
| `column` | `string` | The column to index |
| `replace?` | `boolean` | If false, fail if an index already exists on the column. It is always set to true for remote connections. Scalar indices, like vector indices, can be used to speed up scans. A scalar index can speed up scans that contain filter expressions on the indexed column. For example, the following scan will be faster if the column `my_col` has a scalar index: ```ts const con = await lancedb.connect('./.lancedb'); const table = await con.openTable('images'); const results = await table.where('my_col = 7').execute(); ``` Scalar indices can also speed up scans containing a vector search and a prefilter: ```ts const con = await lancedb.connect('././lancedb'); const table = await con.openTable('images'); const results = await table.search([1.0, 2.0]).where('my_col != 7').prefilter(true); ``` Scalar indices can only speed up scans for basic filters using equality, comparison, range (e.g. `my_col BETWEEN 0 AND 100`), and set membership (e.g. `my_col IN (0, 1, 2)`) Scalar indices can be used if the filter contains multiple indexed columns and the filter criteria are AND'd or OR'd together (e.g. `my_col < 0 AND other_col> 100`) Scalar indices may be used if the filter contains non-indexed columns but, depending on the structure of the filter, they may not be usable. For example, if the column `not_indexed` does not have a scalar index then the filter `my_col = 0 OR not_indexed = 1` will not be able to use any scalar index on `my_col`. |
#### Returns
#### Defined in
[index.ts:1011](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1011)
___
#### Defined in
[index.ts:1030](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1030)
___
### dropColumns
▸ **dropColumns**(`columnNames`): `Promise`\<`void`\>
Drop one or more columns from the dataset
This is a metadata-only operation and does not remove the data from the
underlying storage. In order to remove the data, you must subsequently
call ``compact_files`` to rewrite the data without the removed columns and
then call ``cleanup_files`` to remove the old files.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `columnNames` | `string`[] | The names of the columns to drop. These can be nested column references (e.g. "a.b.c") or top-level column names (e.g. "a"). |
#### Returns
`Promise`\<`void`\>
#### Implementation of
[Table](../interfaces/Table.md).[dropColumns](../interfaces/Table.md#dropcolumns)
#### Defined in
[index.ts:1205](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1205)
___
[`Query`](Query.md)\<`T`\>
#### Implementation of
[Table](../interfaces/Table.md).[filter](../interfaces/Table.md#filter)
#### Defined in
[index.ts:934](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L934)
___
#### Defined in
[index.ts:1176](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1176)
___
### indexStats
▸ **indexStats**(`indexName`): `Promise`\<[`IndexStats`](../interfaces/IndexStats.md)\>
Get statistics about an index.
| Name | Type |
| :------ | :------ |
| `indexName` | `string` |
#### Returns
#### Defined in
[index.ts:1167](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1167)
___
#### Defined in
[index.ts:1163](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1163)
___
### mergeInsert
▸ **mergeInsert**(`on`, `data`, `args`): `Promise`\<`void`\>
Runs a "merge insert" operation on the table
This operation can add rows, update rows, and remove rows all in a single
transaction. It is a very generic tool that can be used to create
behaviors like "insert if not exists", "update or insert (i.e. upsert)",
or even replace a portion of existing data with new data (e.g. replace
all data where month="january")
The merge insert operation works by combining new data from a
**source table** with existing data in a **target table** by using a
join. There are three categories of records.
"Matched" records are records that exist in both the source table and
the target table. "Not matched" records exist only in the source table
(e.g. these are new data) "Not matched by source" records exist only
in the target table (this is old data)
The MergeInsertArgs can be used to customize what should happen for
each category of data.
Please note that the data may appear to be reordered as part of this
operation. This is because updated rows will be deleted from the
dataset and then reinserted at the end with the new values.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `on` | `string` | a column to join on. This is how records from the source table and target table are matched. |
| `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | the new data to insert |
| `args` | [`MergeInsertArgs`](../interfaces/MergeInsertArgs.md) | parameters controlling how the operation should behave |
#### Returns
`Promise`\<`void`\>
#### Implementation of
[Table](../interfaces/Table.md).[mergeInsert](../interfaces/Table.md#mergeinsert)
#### Defined in
[index.ts:1065](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1065)
___
Insert records into this Table, replacing its contents.
| Name | Type | Description |
| :------ | :------ | :------ |
| `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table |
#### Returns
The number of rows added to the table
#### Defined in
[index.ts:977](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L977)
___
#### Defined in
[index.ts:926](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L926)
___
#### Defined in
[index.ts:1043](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1043)
___
### withMiddleware
▸ **withMiddleware**(`middleware`): [`Table`](../interfaces/Table.md)\<`T`\>
Instrument the behavior of this Table with middleware.
The middleware will be called in the order they are added.
Currently this functionality is only supported for remote tables.
#### Parameters
| Name | Type |
| :------ | :------ |
| `middleware` | `HttpMiddleware` |
#### Returns
[`Table`](../interfaces/Table.md)\<`T`\>
- this Table instrumented by the passed middleware
#### Implementation of
[Table](../interfaces/Table.md).[withMiddleware](../interfaces/Table.md#withmiddleware)
#### Defined in
[index.ts:1209](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1209)
[vectordb](../README.md) / [Exports](../modules.md) / MakeArrowTableOptions
# Class: MakeArrowTableOptions
Options to control the makeArrowTable call.
## Table of contents
### Constructors
- [constructor](MakeArrowTableOptions.md#constructor)
### Properties
- [dictionaryEncodeStrings](MakeArrowTableOptions.md#dictionaryencodestrings)
- [embeddings](MakeArrowTableOptions.md#embeddings)
- [schema](MakeArrowTableOptions.md#schema)
- [vectorColumns](MakeArrowTableOptions.md#vectorcolumns)
## Constructors
### constructor
**new MakeArrowTableOptions**(`values?`)
#### Parameters
| Name | Type |
| :------ | :------ |
| `values?` | `Partial`\<[`MakeArrowTableOptions`](MakeArrowTableOptions.md)\> |
#### Defined in
[arrow.ts:98](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L98)
## Properties
### dictionaryEncodeStrings
**dictionaryEncodeStrings**: `boolean` = `false`
If true then string columns will be encoded with dictionary encoding
Set this to true if your string columns tend to repeat the same values
often. For more precise control use the `schema` property to specify the
data type for individual columns.
If `schema` is provided then this property is ignored.
#### Defined in
[arrow.ts:96](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L96)
___
### embeddings
`Optional` **embeddings**: [`EmbeddingFunction`](../interfaces/EmbeddingFunction.md)\<`any`\>
#### Defined in
[arrow.ts:85](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L85)
___
### schema
`Optional` **schema**: `Schema`\<`any`\>
#### Defined in
[arrow.ts:63](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L63)
___
### vectorColumns
**vectorColumns**: `Record`\<`string`, `VectorColumnOptions`\>
#### Defined in
[arrow.ts:81](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L81)
#### Defined in
[embedding/openai.ts:22](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L22)
## Properties
#### Defined in
[embedding/openai.ts:20](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L20)
___
### \_openai
`Private` `Readonly` **\_openai**: `OpenAI`
#### Defined in
[embedding/openai.ts:19](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L19)
___
#### Defined in
[embedding/openai.ts:56](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L56)
## Methods
#### Defined in
[embedding/openai.ts:43](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L43)

View File

@@ -19,6 +19,7 @@ A builder for nearest neighbor queries for LanceDB.
### Properties ### Properties
- [\_embeddings](Query.md#_embeddings) - [\_embeddings](Query.md#_embeddings)
- [\_fastSearch](Query.md#_fastsearch)
- [\_filter](Query.md#_filter) - [\_filter](Query.md#_filter)
- [\_limit](Query.md#_limit) - [\_limit](Query.md#_limit)
- [\_metricType](Query.md#_metrictype) - [\_metricType](Query.md#_metrictype)
@@ -34,6 +35,7 @@ A builder for nearest neighbor queries for LanceDB.
### Methods ### Methods
- [execute](Query.md#execute) - [execute](Query.md#execute)
- [fastSearch](Query.md#fastsearch)
- [filter](Query.md#filter) - [filter](Query.md#filter)
- [isElectron](Query.md#iselectron) - [isElectron](Query.md#iselectron)
- [limit](Query.md#limit) - [limit](Query.md#limit)
@@ -65,7 +67,7 @@ A builder for nearest neighbor queries for LanceDB.
#### Defined in #### Defined in
[query.ts:38](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L38) [query.ts:39](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L39)
## Properties ## Properties
@@ -75,7 +77,17 @@ A builder for nearest neighbor queries for LanceDB.
#### Defined in #### Defined in
[query.ts:36](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L36) [query.ts:37](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L37)
___
### \_fastSearch
`Private` **\_fastSearch**: `boolean`
#### Defined in
[query.ts:36](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L36)
___ ___
@@ -85,7 +97,7 @@ ___
#### Defined in #### Defined in
[query.ts:33](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L33) [query.ts:33](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L33)
___ ___
@@ -95,7 +107,7 @@ ___
#### Defined in #### Defined in
[query.ts:29](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L29) [query.ts:29](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L29)
___ ___
@@ -105,7 +117,7 @@ ___
#### Defined in #### Defined in
[query.ts:34](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L34) [query.ts:34](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L34)
___ ___
@@ -115,7 +127,7 @@ ___
#### Defined in #### Defined in
[query.ts:31](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L31) [query.ts:31](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L31)
___ ___
@@ -125,7 +137,7 @@ ___
#### Defined in #### Defined in
[query.ts:35](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L35) [query.ts:35](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L35)
___ ___
@@ -135,7 +147,7 @@ ___
#### Defined in #### Defined in
[query.ts:26](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L26) [query.ts:26](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L26)
___ ___
@@ -145,7 +157,7 @@ ___
#### Defined in #### Defined in
[query.ts:28](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L28) [query.ts:28](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L28)
___ ___
@@ -155,7 +167,7 @@ ___
#### Defined in #### Defined in
[query.ts:30](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L30) [query.ts:30](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L30)
___ ___
@@ -165,7 +177,7 @@ ___
#### Defined in #### Defined in
[query.ts:32](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L32) [query.ts:32](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L32)
___ ___
@@ -175,7 +187,7 @@ ___
#### Defined in #### Defined in
[query.ts:27](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L27) [query.ts:27](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L27)
___ ___
@@ -201,7 +213,7 @@ A filter statement to be applied to this query.
#### Defined in #### Defined in
[query.ts:87](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L87) [query.ts:90](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L90)
## Methods ## Methods
@@ -223,7 +235,30 @@ Execute the query and return the results as an Array of Objects
#### Defined in #### Defined in
[query.ts:115](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L115) [query.ts:127](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L127)
___
### fastSearch
**fastSearch**(`value`): [`Query`](Query.md)\<`T`\>
Skip searching un-indexed data. This can make search faster, but will miss
any data that is not yet indexed.
#### Parameters
| Name | Type |
| :------ | :------ |
| `value` | `boolean` |
#### Returns
[`Query`](Query.md)\<`T`\>
#### Defined in
[query.ts:119](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L119)
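
As a hedged illustration of the new builder method (the database path, table name, and query vector are placeholders), a search that trades completeness on freshly added rows for speed:

```ts
import { connect } from 'vectordb'

const db = await connect('./.lancedb')    // hypothetical local database
const tbl = await db.openTable('images')  // hypothetical table

// Only consult indexed data; rows added since the last index build are skipped.
const results = await tbl
  .search([1.0, 2.0])
  .fastSearch(true)
  .limit(10) // 10 is also the documented default
  .execute()
```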
___ ___
@@ -245,7 +280,7 @@ A filter statement to be applied to this query.
#### Defined in #### Defined in
[query.ts:82](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L82) [query.ts:85](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L85)
___ ___
@@ -259,7 +294,7 @@ ___
#### Defined in #### Defined in
[query.ts:142](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L142) [query.ts:155](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L155)
___ ___
@@ -268,6 +303,7 @@ ___
**limit**(`value`): [`Query`](Query.md)\<`T`\> **limit**(`value`): [`Query`](Query.md)\<`T`\>
Sets the number of results that will be returned Sets the number of results that will be returned
default value is 10
#### Parameters #### Parameters
@@ -281,7 +317,7 @@ Sets the number of results that will be returned
#### Defined in #### Defined in
[query.ts:55](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L55) [query.ts:58](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L58)
___ ___
@@ -307,7 +343,7 @@ MetricType for the different options
#### Defined in #### Defined in
[query.ts:102](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L102) [query.ts:105](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L105)
___ ___
@@ -329,7 +365,7 @@ The number of probes used. A higher number makes search more accurate but also s
#### Defined in #### Defined in
[query.ts:73](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L73) [query.ts:76](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L76)
___ ___
@@ -349,7 +385,7 @@ ___
#### Defined in #### Defined in
[query.ts:107](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L107) [query.ts:110](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L110)
___ ___
@@ -371,7 +407,7 @@ Refine the results by reading extra elements and re-ranking them in memory.
#### Defined in #### Defined in
[query.ts:64](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L64) [query.ts:67](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L67)
___ ___
@@ -393,4 +429,4 @@ Return only the specified columns.
#### Defined in #### Defined in
[query.ts:93](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L93) [query.ts:96](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L96)

View File

@@ -0,0 +1,52 @@
[vectordb](../README.md) / [Exports](../modules.md) / IndexStatus
# Enumeration: IndexStatus
## Table of contents
### Enumeration Members
- [Done](IndexStatus.md#done)
- [Failed](IndexStatus.md#failed)
- [Indexing](IndexStatus.md#indexing)
- [Pending](IndexStatus.md#pending)
## Enumeration Members
### Done
**Done** = ``"done"``
#### Defined in
[index.ts:713](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L713)
___
### Failed
• **Failed** = ``"failed"``
#### Defined in
[index.ts:714](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L714)
___
### Indexing
• **Indexing** = ``"indexing"``
#### Defined in
[index.ts:712](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L712)
___
### Pending
• **Pending** = ``"pending"``
#### Defined in
[index.ts:711](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L711)

View File

@@ -22,7 +22,7 @@ Cosine distance
#### Defined in #### Defined in
[index.ts:1041](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1041) [index.ts:1381](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1381)
___ ___
@@ -34,7 +34,7 @@ Dot product
#### Defined in #### Defined in
[index.ts:1046](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1046) [index.ts:1386](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1386)
___ ___
@@ -46,4 +46,4 @@ Euclidean distance
#### Defined in #### Defined in
[index.ts:1036](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1036) [index.ts:1376](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1376)

View File

@@ -22,7 +22,7 @@ Append new data to the table.
#### Defined in #### Defined in
[index.ts:1007](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1007) [index.ts:1347](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1347)
___ ___
@@ -34,7 +34,7 @@ Create a new [Table](../interfaces/Table.md).
#### Defined in #### Defined in
[index.ts:1003](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1003) [index.ts:1343](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1343)
___ ___
@@ -46,4 +46,4 @@ Overwrite the existing [Table](../interfaces/Table.md) if presented.
#### Defined in #### Defined in
[index.ts:1005](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1005) [index.ts:1345](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1345)

View File

@@ -18,7 +18,7 @@
#### Defined in #### Defined in
[index.ts:54](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L54) [index.ts:68](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L68)
___ ___
@@ -28,7 +28,7 @@ ___
#### Defined in #### Defined in
[index.ts:56](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L56) [index.ts:70](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L70)
___ ___
@@ -38,4 +38,4 @@ ___
#### Defined in #### Defined in
[index.ts:58](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L58) [index.ts:72](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L72)

View File

@@ -19,7 +19,7 @@ The number of bytes removed from disk.
#### Defined in #### Defined in
[index.ts:878](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L878) [index.ts:1218](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1218)
___ ___
@@ -31,4 +31,4 @@ The number of old table versions removed.
#### Defined in #### Defined in
[index.ts:882](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L882) [index.ts:1222](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1222)

View File

@@ -0,0 +1,53 @@
[vectordb](../README.md) / [Exports](../modules.md) / ColumnAlteration
# Interface: ColumnAlteration
A definition of a column alteration. The alteration changes the column at
`path` to have the new name `name`, to be nullable if `nullable` is true,
and to have the data type `data_type`. At least one of `rename` or `nullable`
must be provided.
## Table of contents
### Properties
- [nullable](ColumnAlteration.md#nullable)
- [path](ColumnAlteration.md#path)
- [rename](ColumnAlteration.md#rename)
## Properties
### nullable
`Optional` **nullable**: `boolean`
Set the new nullability. Note that a nullable column cannot be made non-nullable.
#### Defined in
[index.ts:638](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L638)
___
### path
**path**: `string`
The path to the column to alter. This is a dot-separated path to the column.
If it is a top-level column then it is just the name of the column. If it is
a nested column then it is the path to the column, e.g. "a.b.c" for a column
`c` nested inside a column `b` nested inside a column `a`.
#### Defined in
[index.ts:633](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L633)
___
### rename
`Optional` **rename**: `string`
#### Defined in
[index.ts:634](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L634)
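
A minimal sketch of how `ColumnAlteration` objects might be passed to `Table.alterColumns` (documented later in this diff), assuming `tbl` is an open table as in the earlier `fastSearch` sketch; the column paths are hypothetical:

```ts
// Rename a top-level column and relax nullability on a nested one.
await tbl.alterColumns([
  { path: 'title', rename: 'headline' },
  { path: 'metadata.source', nullable: true }
])
```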

View File

@@ -22,7 +22,7 @@ fragments added.
#### Defined in #### Defined in
[index.ts:933](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L933) [index.ts:1273](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1273)
___ ___
@@ -35,7 +35,7 @@ file.
#### Defined in #### Defined in
[index.ts:928](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L928) [index.ts:1268](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1268)
___ ___
@@ -47,7 +47,7 @@ The number of new fragments that were created.
#### Defined in #### Defined in
[index.ts:923](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L923) [index.ts:1263](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1263)
___ ___
@@ -59,4 +59,4 @@ The number of fragments that were removed.
#### Defined in #### Defined in
[index.ts:919](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L919) [index.ts:1259](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1259)

View File

@@ -24,7 +24,7 @@ Default is true.
#### Defined in #### Defined in
[index.ts:901](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L901) [index.ts:1241](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1241)
___ ___
@@ -38,7 +38,7 @@ the deleted rows. Default is 10%.
#### Defined in #### Defined in
[index.ts:907](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L907) [index.ts:1247](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1247)
___ ___
@@ -46,11 +46,11 @@ ___
`Optional` **maxRowsPerGroup**: `number` `Optional` **maxRowsPerGroup**: `number`
The maximum number of rows per group. Defaults to 1024. The maximum number of rows per group. Defaults to 1024.
#### Defined in #### Defined in
[index.ts:895](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L895) [index.ts:1235](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1235)
___ ___
@@ -63,7 +63,7 @@ the number of cores on the machine.
#### Defined in #### Defined in
[index.ts:912](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L912) [index.ts:1252](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1252)
___ ___
@@ -77,4 +77,4 @@ Defaults to 1024 * 1024.
#### Defined in #### Defined in
[index.ts:891](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L891) [index.ts:1231](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1231)

View File

@@ -22,6 +22,7 @@ Connection could be local against filesystem or remote against a server.
- [dropTable](Connection.md#droptable) - [dropTable](Connection.md#droptable)
- [openTable](Connection.md#opentable) - [openTable](Connection.md#opentable)
- [tableNames](Connection.md#tablenames) - [tableNames](Connection.md#tablenames)
- [withMiddleware](Connection.md#withmiddleware)
## Properties ## Properties
@@ -31,7 +32,7 @@ Connection could be local against filesystem or remote against a server.
#### Defined in #### Defined in
[index.ts:183](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L183) [index.ts:261](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L261)
## Methods ## Methods
@@ -59,7 +60,7 @@ Creates a new Table, optionally initializing it with new data.
#### Defined in #### Defined in
[index.ts:207](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L207) [index.ts:285](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L285)
**createTable**(`name`, `data`): `Promise`\<[`Table`](Table.md)\<`number`[]\>\> **createTable**(`name`, `data`): `Promise`\<[`Table`](Table.md)\<`number`[]\>\>
@@ -70,7 +71,7 @@ Creates a new Table and initialize it with new data.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `name` | `string` | The name of the table. | | `name` | `string` | The name of the table. |
| `data` | `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table |
#### Returns #### Returns
@@ -78,7 +79,7 @@ Creates a new Table and initialize it with new data.
#### Defined in #### Defined in
[index.ts:221](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L221) [index.ts:299](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L299)
**createTable**(`name`, `data`, `options`): `Promise`\<[`Table`](Table.md)\<`number`[]\>\> **createTable**(`name`, `data`, `options`): `Promise`\<[`Table`](Table.md)\<`number`[]\>\>
@@ -89,7 +90,7 @@ Creates a new Table and initialize it with new data.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `name` | `string` | The name of the table. | | `name` | `string` | The name of the table. |
| `data` | `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table |
| `options` | [`WriteOptions`](WriteOptions.md) | The write options to use when creating the table. | | `options` | [`WriteOptions`](WriteOptions.md) | The write options to use when creating the table. |
#### Returns #### Returns
@@ -98,7 +99,7 @@ Creates a new Table and initialize it with new data.
#### Defined in #### Defined in
[index.ts:233](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L233) [index.ts:311](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L311)
**createTable**\<`T`\>(`name`, `data`, `embeddings`): `Promise`\<[`Table`](Table.md)\<`T`\>\> **createTable**\<`T`\>(`name`, `data`, `embeddings`): `Promise`\<[`Table`](Table.md)\<`T`\>\>
@@ -115,7 +116,7 @@ Creates a new Table and initialize it with new data.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `name` | `string` | The name of the table. | | `name` | `string` | The name of the table. |
| `data` | `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table |
| `embeddings` | [`EmbeddingFunction`](EmbeddingFunction.md)\<`T`\> | An embedding function to use on this table | | `embeddings` | [`EmbeddingFunction`](EmbeddingFunction.md)\<`T`\> | An embedding function to use on this table |
#### Returns #### Returns
@@ -124,7 +125,7 @@ Creates a new Table and initialize it with new data.
#### Defined in #### Defined in
[index.ts:246](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L246) [index.ts:324](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L324)
**createTable**\<`T`\>(`name`, `data`, `embeddings`, `options`): `Promise`\<[`Table`](Table.md)\<`T`\>\> **createTable**\<`T`\>(`name`, `data`, `embeddings`, `options`): `Promise`\<[`Table`](Table.md)\<`T`\>\>
@@ -141,7 +142,7 @@ Creates a new Table and initialize it with new data.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `name` | `string` | The name of the table. | | `name` | `string` | The name of the table. |
| `data` | `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table |
| `embeddings` | [`EmbeddingFunction`](EmbeddingFunction.md)\<`T`\> | An embedding function to use on this table | | `embeddings` | [`EmbeddingFunction`](EmbeddingFunction.md)\<`T`\> | An embedding function to use on this table |
| `options` | [`WriteOptions`](WriteOptions.md) | The write options to use when creating the table. | | `options` | [`WriteOptions`](WriteOptions.md) | The write options to use when creating the table. |
@@ -151,7 +152,7 @@ Creates a new Table and initialize it with new data.
#### Defined in #### Defined in
[index.ts:259](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L259) [index.ts:337](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L337)
___ ___
@@ -173,7 +174,7 @@ Drop an existing table.
#### Defined in #### Defined in
[index.ts:270](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L270) [index.ts:348](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L348)
___ ___
@@ -202,7 +203,7 @@ Open a table in the database.
#### Defined in #### Defined in
[index.ts:193](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L193) [index.ts:271](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L271)
___ ___
@@ -216,4 +217,32 @@ ___
#### Defined in #### Defined in
[index.ts:185](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L185) [index.ts:263](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L263)
___
### withMiddleware
**withMiddleware**(`middleware`): [`Connection`](Connection.md)
Instrument the behavior of this Connection with middleware.
The middleware will be called in the order they are added.
Currently this functionality is only supported for remote Connections.
#### Parameters
| Name | Type |
| :------ | :------ |
| `middleware` | `HttpMiddleware` |
#### Returns
[`Connection`](Connection.md)
- this Connection instrumented by the passed middleware
#### Defined in
[index.ts:360](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L360)

View File

@@ -10,7 +10,10 @@
- [awsCredentials](ConnectionOptions.md#awscredentials) - [awsCredentials](ConnectionOptions.md#awscredentials)
- [awsRegion](ConnectionOptions.md#awsregion) - [awsRegion](ConnectionOptions.md#awsregion)
- [hostOverride](ConnectionOptions.md#hostoverride) - [hostOverride](ConnectionOptions.md#hostoverride)
- [readConsistencyInterval](ConnectionOptions.md#readconsistencyinterval)
- [region](ConnectionOptions.md#region) - [region](ConnectionOptions.md#region)
- [storageOptions](ConnectionOptions.md#storageoptions)
- [timeout](ConnectionOptions.md#timeout)
- [uri](ConnectionOptions.md#uri) - [uri](ConnectionOptions.md#uri)
## Properties ## Properties
@@ -19,9 +22,13 @@
`Optional` **apiKey**: `string` `Optional` **apiKey**: `string`
API key for the remote connections
Can also be passed by setting environment variable `LANCEDB_API_KEY`
#### Defined in #### Defined in
[index.ts:81](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L81) [index.ts:112](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L112)
___ ___
@@ -33,9 +40,14 @@ User provided AWS credentials.
If not provided, LanceDB will use the default credentials provider chain. If not provided, LanceDB will use the default credentials provider chain.
**`Deprecated`**
Pass `aws_access_key_id`, `aws_secret_access_key`, and `aws_session_token`
through `storageOptions` instead.
#### Defined in #### Defined in
[index.ts:75](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L75) [index.ts:92](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L92)
___ ___
@@ -43,11 +55,15 @@ ___
`Optional` **awsRegion**: `string` `Optional` **awsRegion**: `string`
AWS region to connect to. Default is defaultAwsRegion. AWS region to connect to. Default is defaultAwsRegion
**`Deprecated`**
Pass `region` through `storageOptions` instead.
#### Defined in #### Defined in
[index.ts:78](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L78) [index.ts:98](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L98)
___ ___
@@ -55,13 +71,33 @@ ___
`Optional` **hostOverride**: `string` `Optional` **hostOverride**: `string`
Override the host URL for the remote connections. Override the host URL for the remote connection.
This is useful for local testing. This is useful for local testing.
#### Defined in #### Defined in
[index.ts:91](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L91) [index.ts:122](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L122)
___
### readConsistencyInterval
`Optional` **readConsistencyInterval**: `number`
(For LanceDB OSS only): The interval, in seconds, at which to check for
updates to the table from other processes. If None, then consistency is not
checked. For performance reasons, this is the default. For strong
consistency, set this to zero seconds. Then every read will check for
updates from other processes. As a compromise, you can set this to a
non-zero value for eventual consistency. If more than that interval
has passed since the last check, then the table will be checked for updates.
Note: this consistency only applies to read operations. Write operations are
always consistent.
#### Defined in
[index.ts:140](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L140)
___ ___
@@ -69,11 +105,37 @@ ___
`Optional` **region**: `string` `Optional` **region**: `string`
Region to connect Region to connect. Default is 'us-east-1'
#### Defined in #### Defined in
[index.ts:84](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L84) [index.ts:115](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L115)
___
### storageOptions
`Optional` **storageOptions**: `Record`\<`string`, `string`\>
User provided options for object storage. For example, S3 credentials or request timeouts.
The various options are described at https://lancedb.github.io/lancedb/guides/storage/
#### Defined in
[index.ts:105](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L105)
___
### timeout
`Optional` **timeout**: `number`
Duration in milliseconds for request timeout. Default = 10,000 (10 seconds)
#### Defined in
[index.ts:127](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L127)
___ ___
@@ -85,8 +147,8 @@ LanceDB database URI.
- `/path/to/database` - local database - `/path/to/database` - local database
- `s3://bucket/path/to/database` or `gs://bucket/path/to/database` - database on cloud storage - `s3://bucket/path/to/database` or `gs://bucket/path/to/database` - database on cloud storage
- `db://host:port` - remote database (SaaS) - `db://host:port` - remote database (LanceDB cloud)
#### Defined in #### Defined in
[index.ts:69](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L69) [index.ts:83](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L83)
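
Putting the new connection options together, a hedged sketch; the URIs, bucket, and key handling are placeholders, not a prescribed setup:

```ts
import { connect } from 'vectordb'

// Remote (LanceDB Cloud) connection.
const remote = await connect({
  uri: 'db://my-project',               // placeholder cloud URI
  apiKey: process.env.LANCEDB_API_KEY,  // or rely on the env var alone
  timeout: 30_000                       // request timeout in milliseconds
})

// Local / object-store connection using the OSS-only options.
const local = await connect({
  uri: 's3://my-bucket/my-db',             // placeholder bucket
  storageOptions: { region: 'us-east-1' }, // replaces the deprecated awsRegion
  readConsistencyInterval: 5               // re-check for external writes every 5 s
})
```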

View File

@@ -26,7 +26,7 @@
#### Defined in #### Defined in
[index.ts:116](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L116) [index.ts:163](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L163)
___ ___
@@ -36,7 +36,7 @@ ___
#### Defined in #### Defined in
[index.ts:122](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L122) [index.ts:169](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L169)
___ ___
@@ -46,7 +46,7 @@ ___
#### Defined in #### Defined in
[index.ts:113](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L113) [index.ts:160](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L160)
___ ___
@@ -56,7 +56,7 @@ ___
#### Defined in #### Defined in
[index.ts:119](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L119) [index.ts:166](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L166)
___ ___
@@ -66,4 +66,4 @@ ___
#### Defined in #### Defined in
[index.ts:125](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L125) [index.ts:172](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L172)

View File

@@ -18,11 +18,29 @@ An embedding function that automatically creates vector representation for a giv
### Properties ### Properties
- [destColumn](EmbeddingFunction.md#destcolumn)
- [embed](EmbeddingFunction.md#embed) - [embed](EmbeddingFunction.md#embed)
- [embeddingDataType](EmbeddingFunction.md#embeddingdatatype)
- [embeddingDimension](EmbeddingFunction.md#embeddingdimension)
- [excludeSource](EmbeddingFunction.md#excludesource)
- [sourceColumn](EmbeddingFunction.md#sourcecolumn) - [sourceColumn](EmbeddingFunction.md#sourcecolumn)
## Properties ## Properties
### destColumn
`Optional` **destColumn**: `string`
The name of the column that will contain the embedding
By default this is "vector"
#### Defined in
[embedding/embedding_function.ts:49](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/embedding_function.ts#L49)
___
### embed ### embed
**embed**: (`data`: `T`[]) => `Promise`\<`number`[][]\> **embed**: (`data`: `T`[]) => `Promise`\<`number`[][]\>
@@ -45,7 +63,54 @@ Creates a vector representation for the given values.
#### Defined in #### Defined in
[embedding/embedding_function.ts:27](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/embedding/embedding_function.ts#L27) [embedding/embedding_function.ts:62](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/embedding_function.ts#L62)
___
### embeddingDataType
`Optional` **embeddingDataType**: `Float`\<`Floats`\>
The data type of the embedding
The embedding function should return `number`. This will be converted into
an Arrow float array. By default this will be Float32 but this property can
be used to control the conversion.
#### Defined in
[embedding/embedding_function.ts:33](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/embedding_function.ts#L33)
___
### embeddingDimension
`Optional` **embeddingDimension**: `number`
The dimension of the embedding
This is optional, normally this can be determined by looking at the results of
`embed`. If this is not specified, and there is an attempt to apply the embedding
to an empty table, then that process will fail.
#### Defined in
[embedding/embedding_function.ts:42](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/embedding_function.ts#L42)
___
### excludeSource
`Optional` **excludeSource**: `boolean`
Should the source column be excluded from the resulting table
By default the source column is included. Set this to true and
only the embedding will be stored.
#### Defined in
[embedding/embedding_function.ts:57](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/embedding_function.ts#L57)
___ ___
@@ -57,4 +122,4 @@ The name of the column that will be used as input for the Embedding Function.
#### Defined in #### Defined in
[embedding/embedding_function.ts:22](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/embedding/embedding_function.ts#L22) [embedding/embedding_function.ts:24](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/embedding_function.ts#L24)
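
To show how the new properties fit together, a hedged sketch of a hand-rolled `EmbeddingFunction<string>`; the fixed 4-dimensional output is a stand-in for a real model call, and only the interface shape comes from the docs above:

```ts
import { EmbeddingFunction } from 'vectordb'

const myEmbedder: EmbeddingFunction<string> = {
  sourceColumn: 'text',       // input column
  destColumn: 'text_vector',  // defaults to "vector" when omitted
  excludeSource: false,       // keep the source text in the table
  embeddingDimension: 4,      // lets the embedding be applied to an empty table
  async embed (texts: string[]): Promise<number[][]> {
    // Stand-in for a real model: one fixed-size vector per input string.
    return texts.map(() => [0, 0, 0, 0])
  }
}
```

Such an object could then be passed as the `embeddings` argument of `createTable` or via `MakeArrowTableOptions.embeddings`.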

View File

@@ -6,18 +6,51 @@
### Properties ### Properties
- [distanceType](IndexStats.md#distancetype)
- [indexType](IndexStats.md#indextype)
- [numIndexedRows](IndexStats.md#numindexedrows) - [numIndexedRows](IndexStats.md#numindexedrows)
- [numIndices](IndexStats.md#numindices)
- [numUnindexedRows](IndexStats.md#numunindexedrows) - [numUnindexedRows](IndexStats.md#numunindexedrows)
## Properties ## Properties
### distanceType
`Optional` **distanceType**: `string`
#### Defined in
[index.ts:728](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L728)
___
### indexType
**indexType**: `string`
#### Defined in
[index.ts:727](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L727)
___
### numIndexedRows ### numIndexedRows
**numIndexedRows**: ``null`` \| `number` **numIndexedRows**: ``null`` \| `number`
#### Defined in #### Defined in
[index.ts:478](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L478) [index.ts:725](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L725)
___
### numIndices
• `Optional` **numIndices**: `number`
#### Defined in
[index.ts:729](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L729)
___ ___
@@ -27,4 +60,4 @@ ___
#### Defined in #### Defined in
[index.ts:479](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L479) [index.ts:726](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L726)
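
A small hedged sketch of reading the expanded statistics, assuming `tbl` is an open table with at least one index (note that `indexStats` now takes the index name rather than its UUID, per the `Table` changes below):

```ts
for (const idx of await tbl.listIndices()) {
  const stats = await tbl.indexStats(idx.name)
  console.log(idx.name, stats.indexType, stats.distanceType,
    stats.numIndexedRows, stats.numUnindexedRows)
}
```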

View File

@@ -29,7 +29,7 @@ The column to be indexed
#### Defined in #### Defined in
[index.ts:942](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L942) [index.ts:1282](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1282)
___ ___
@@ -41,7 +41,7 @@ Cache size of the index
#### Defined in #### Defined in
[index.ts:991](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L991) [index.ts:1331](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1331)
___ ___
@@ -53,7 +53,7 @@ A unique name for the index
#### Defined in #### Defined in
[index.ts:947](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L947) [index.ts:1287](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1287)
___ ___
@@ -65,7 +65,7 @@ The max number of iterations for kmeans training.
#### Defined in #### Defined in
[index.ts:962](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L962) [index.ts:1302](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1302)
___ ___
@@ -77,7 +77,7 @@ Max number of iterations to train OPQ, if `use_opq` is true.
#### Defined in #### Defined in
[index.ts:981](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L981) [index.ts:1321](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1321)
___ ___
@@ -89,7 +89,7 @@ Metric type, L2 or Cosine
#### Defined in #### Defined in
[index.ts:952](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L952) [index.ts:1292](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1292)
___ ___
@@ -101,7 +101,7 @@ The number of bits to present one PQ centroid.
#### Defined in #### Defined in
[index.ts:976](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L976) [index.ts:1316](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1316)
___ ___
@@ -113,7 +113,7 @@ The number of partitions this index
#### Defined in #### Defined in
[index.ts:957](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L957) [index.ts:1297](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1297)
___ ___
@@ -125,7 +125,7 @@ Number of subvectors to build PQ code
#### Defined in #### Defined in
[index.ts:972](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L972) [index.ts:1312](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1312)
___ ___
@@ -137,7 +137,7 @@ Replace an existing index with the same name if it exists.
#### Defined in #### Defined in
[index.ts:986](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L986) [index.ts:1326](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1326)
___ ___
@@ -147,7 +147,7 @@ ___
#### Defined in #### Defined in
[index.ts:993](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L993) [index.ts:1333](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1333)
___ ___
@@ -159,4 +159,4 @@ Train as optimized product quantization.
#### Defined in #### Defined in
[index.ts:967](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L967) [index.ts:1307](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1307)

View File

@@ -0,0 +1,73 @@
[vectordb](../README.md) / [Exports](../modules.md) / MergeInsertArgs
# Interface: MergeInsertArgs
## Table of contents
### Properties
- [whenMatchedUpdateAll](MergeInsertArgs.md#whenmatchedupdateall)
- [whenNotMatchedBySourceDelete](MergeInsertArgs.md#whennotmatchedbysourcedelete)
- [whenNotMatchedInsertAll](MergeInsertArgs.md#whennotmatchedinsertall)
## Properties
### whenMatchedUpdateAll
`Optional` **whenMatchedUpdateAll**: `string` \| `boolean`
If true then rows that exist in both the source table (new data) and
the target table (old data) will be updated, replacing the old row
with the corresponding matching row.
If there are multiple matches then the behavior is undefined.
Currently this causes multiple copies of the row to be created
but that behavior is subject to change.
Optionally, a filter can be specified. This should be an SQL
filter where fields with the prefix "target." refer to fields
in the target table (old data) and fields with the prefix
"source." refer to fields in the source table (new data). For
example, the filter "target.lastUpdated < source.lastUpdated" will
only update matched rows when the incoming `lastUpdated` value is
newer.
Rows that do not match the filter will not be updated. Rows that
do not match the filter do become "not matched" rows.
#### Defined in
[index.ts:690](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L690)
___
### whenNotMatchedBySourceDelete
`Optional` **whenNotMatchedBySourceDelete**: `string` \| `boolean`
If true then rows that exist only in the target table (old data)
will be deleted.
If this is a string then it will be treated as an SQL filter and
only rows that both do not match any row in the source table and
match the given filter will be deleted.
This can be used to replace a selection of existing data with
new data.
#### Defined in
[index.ts:707](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L707)
___
### whenNotMatchedInsertAll
`Optional` **whenNotMatchedInsertAll**: `boolean`
If true then rows that exist only in the source table (new data)
will be inserted into the target table.
#### Defined in
[index.ts:695](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L695)
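
A hedged upsert sketch built on these arguments; `mergeInsert` itself is documented on the `Table` interface further down, and the join column and rows here are hypothetical:

```ts
import { connect } from 'vectordb'

const db = await connect('./.lancedb')
const tbl = await db.openTable('items') // hypothetical table keyed by "id"

const newRows = [
  { id: 1, text: 'updated row', vector: [0.1, 0.2] },
  { id: 3, text: 'brand new row', vector: [0.5, 0.6] }
]

// Classic upsert: update matching rows, insert the rest, leave other rows alone.
await tbl.mergeInsert('id', newRows, {
  whenMatchedUpdateAll: true,
  whenNotMatchedInsertAll: true
})
```

Passing an SQL string instead of `true` (e.g. `whenMatchedUpdateAll: 'target.lastUpdated < source.lastUpdated'`) applies the conditional behaviour described above.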

View File

@@ -25,17 +25,26 @@ A LanceDB Table is the collection of Records. Each Record has one or more vector
- [delete](Table.md#delete) - [delete](Table.md#delete)
- [indexStats](Table.md#indexstats) - [indexStats](Table.md#indexstats)
- [listIndices](Table.md#listindices) - [listIndices](Table.md#listindices)
- [mergeInsert](Table.md#mergeinsert)
- [name](Table.md#name) - [name](Table.md#name)
- [overwrite](Table.md#overwrite) - [overwrite](Table.md#overwrite)
- [schema](Table.md#schema) - [schema](Table.md#schema)
- [search](Table.md#search) - [search](Table.md#search)
- [update](Table.md#update) - [update](Table.md#update)
### Methods
- [addColumns](Table.md#addcolumns)
- [alterColumns](Table.md#altercolumns)
- [dropColumns](Table.md#dropcolumns)
- [filter](Table.md#filter)
- [withMiddleware](Table.md#withmiddleware)
## Properties ## Properties
### add ### add
**add**: (`data`: `Record`\<`string`, `unknown`\>[]) => `Promise`\<`number`\> **add**: (`data`: `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[]) => `Promise`\<`number`\>
#### Type declaration #### Type declaration
@@ -47,7 +56,7 @@ Insert records into this Table.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `data` | `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table |
##### Returns ##### Returns
@@ -57,27 +66,33 @@ The number of rows added to the table
#### Defined in #### Defined in
[index.ts:291](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L291) [index.ts:381](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L381)
___ ___
### countRows ### countRows
**countRows**: () => `Promise`\<`number`\> **countRows**: (`filter?`: `string`) => `Promise`\<`number`\>
#### Type declaration #### Type declaration
▸ (): `Promise`\<`number`\> ▸ (`filter?`): `Promise`\<`number`\>
Returns the number of rows in this table. Returns the number of rows in this table.
##### Parameters
| Name | Type |
| :------ | :------ |
| `filter?` | `string` |
##### Returns ##### Returns
`Promise`\<`number`\> `Promise`\<`number`\>
#### Defined in #### Defined in
[index.ts:361](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L361) [index.ts:454](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L454)
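
With the new optional filter, counts can be restricted without pulling rows back; a tiny hedged sketch (column name hypothetical, `tbl` an open table as in the earlier sketches):

```ts
const total = await tbl.countRows()             // all rows, as before
const cheap = await tbl.countRows('price < 10') // rows matching an SQL filter
```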
___ ___
@@ -107,17 +122,17 @@ VectorIndexParams.
#### Defined in #### Defined in
[index.ts:306](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L306) [index.ts:398](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L398)
___ ___
### createScalarIndex ### createScalarIndex
**createScalarIndex**: (`column`: `string`, `replace`: `boolean`) => `Promise`\<`void`\> **createScalarIndex**: (`column`: `string`, `replace?`: `boolean`) => `Promise`\<`void`\>
#### Type declaration #### Type declaration
▸ (`column`, `replace`): `Promise`\<`void`\> ▸ (`column`, `replace?`): `Promise`\<`void`\>
Create a scalar index on this Table for the given column Create a scalar index on this Table for the given column
@@ -126,7 +141,7 @@ Create a scalar index on this Table for the given column
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `column` | `string` | The column to index | | `column` | `string` | The column to index |
| `replace` | `boolean` | If false, fail if an index already exists on the column Scalar indices, like vector indices, can be used to speed up scans. A scalar index can speed up scans that contain filter expressions on the indexed column. For example, the following scan will be faster if the column `my_col` has a scalar index: ```ts const con = await lancedb.connect('./.lancedb'); const table = await con.openTable('images'); const results = await table.where('my_col = 7').execute(); ``` Scalar indices can also speed up scans containing a vector search and a prefilter: ```ts const con = await lancedb.connect('././lancedb'); const table = await con.openTable('images'); const results = await table.search([1.0, 2.0]).where('my_col != 7').prefilter(true); ``` Scalar indices can only speed up scans for basic filters using equality, comparison, range (e.g. `my_col BETWEEN 0 AND 100`), and set membership (e.g. `my_col IN (0, 1, 2)`) Scalar indices can be used if the filter contains multiple indexed columns and the filter criteria are AND'd or OR'd together (e.g. `my_col < 0 AND other_col> 100`) Scalar indices may be used if the filter contains non-indexed columns but, depending on the structure of the filter, they may not be usable. For example, if the column `not_indexed` does not have a scalar index then the filter `my_col = 0 OR not_indexed = 1` will not be able to use any scalar index on `my_col`. | | `replace?` | `boolean` | If false, fail if an index already exists on the column it is always set to true for remote connections Scalar indices, like vector indices, can be used to speed up scans. A scalar index can speed up scans that contain filter expressions on the indexed column. For example, the following scan will be faster if the column `my_col` has a scalar index: ```ts const con = await lancedb.connect('./.lancedb'); const table = await con.openTable('images'); const results = await table.where('my_col = 7').execute(); ``` Scalar indices can also speed up scans containing a vector search and a prefilter: ```ts const con = await lancedb.connect('././lancedb'); const table = await con.openTable('images'); const results = await table.search([1.0, 2.0]).where('my_col != 7').prefilter(true); ``` Scalar indices can only speed up scans for basic filters using equality, comparison, range (e.g. `my_col BETWEEN 0 AND 100`), and set membership (e.g. `my_col IN (0, 1, 2)`) Scalar indices can be used if the filter contains multiple indexed columns and the filter criteria are AND'd or OR'd together (e.g. `my_col < 0 AND other_col> 100`) Scalar indices may be used if the filter contains non-indexed columns but, depending on the structure of the filter, they may not be usable. For example, if the column `not_indexed` does not have a scalar index then the filter `my_col = 0 OR not_indexed = 1` will not be able to use any scalar index on `my_col`. |
##### Returns ##### Returns
@@ -142,7 +157,7 @@ await table.createScalarIndex('my_col')
#### Defined in #### Defined in
[index.ts:356](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L356) [index.ts:449](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L449)
___ ___
@@ -194,17 +209,17 @@ await tbl.countRows() // Returns 1
#### Defined in #### Defined in
[index.ts:395](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L395) [index.ts:488](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L488)
___ ___
### indexStats ### indexStats
• **indexStats**: (`indexUuid`: `string`) => `Promise`\<[`IndexStats`](IndexStats.md)\> • **indexStats**: (`indexName`: `string`) => `Promise`\<[`IndexStats`](IndexStats.md)\>
#### Type declaration #### Type declaration
▸ (`indexUuid`): `Promise`\<[`IndexStats`](IndexStats.md)\> ▸ (`indexName`): `Promise`\<[`IndexStats`](IndexStats.md)\>
Get statistics about an index. Get statistics about an index.
@@ -212,7 +227,7 @@ Get statistics about an index.
| Name | Type | | Name | Type |
| :------ | :------ | | :------ | :------ |
| `indexUuid` | `string` | | `indexName` | `string` |
##### Returns ##### Returns
@@ -220,7 +235,7 @@ Get statistics about an index.
#### Defined in #### Defined in
[index.ts:438](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L438) [index.ts:567](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L567)
___ ___
@@ -240,7 +255,57 @@ List the indices on this table.
#### Defined in #### Defined in
[index.ts:433](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L433) [index.ts:562](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L562)
___
### mergeInsert
• **mergeInsert**: (`on`: `string`, `data`: `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[], `args`: [`MergeInsertArgs`](MergeInsertArgs.md)) => `Promise`\<`void`\>
#### Type declaration
▸ (`on`, `data`, `args`): `Promise`\<`void`\>
Runs a "merge insert" operation on the table
This operation can add rows, update rows, and remove rows all in a single
transaction. It is a very generic tool that can be used to create
behaviors like "insert if not exists", "update or insert (i.e. upsert)",
or even replace a portion of existing data with new data (e.g. replace
all data where month="january")
The merge insert operation works by combining new data from a
**source table** with existing data in a **target table** by using a
join. There are three categories of records.
"Matched" records are records that exist in both the source table and
the target table. "Not matched" records exist only in the source table
(e.g. these are new data) "Not matched by source" records exist only
in the target table (this is old data)
The MergeInsertArgs can be used to customize what should happen for
each category of data.
Please note that the data may appear to be reordered as part of this
operation. This is because updated rows will be deleted from the
dataset and then reinserted at the end with the new values.
##### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `on` | `string` | a column to join on. This is how records from the source table and target table are matched. |
| `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | the new data to insert |
| `args` | [`MergeInsertArgs`](MergeInsertArgs.md) | parameters controlling how the operation should behave |
##### Returns
`Promise`\<`void`\>
#### Defined in
[index.ts:553](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L553)
___ ___
@@ -250,13 +315,13 @@ ___
#### Defined in #### Defined in
[index.ts:277](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L277) [index.ts:367](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L367)
___ ___
### overwrite ### overwrite
• **overwrite**: (`data`: `Record`\<`string`, `unknown`\>[]) => `Promise`\<`number`\> • **overwrite**: (`data`: `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[]) => `Promise`\<`number`\>
#### Type declaration #### Type declaration
@@ -268,7 +333,7 @@ Insert records into this Table, replacing its contents.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `data` | `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table |
##### Returns ##### Returns
@@ -278,7 +343,7 @@ The number of rows added to the table
#### Defined in #### Defined in
[index.ts:299](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L299) [index.ts:389](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L389)
___ ___
@@ -288,7 +353,7 @@ ___
#### Defined in #### Defined in
[index.ts:440](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L440) [index.ts:571](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L571)
___ ___
@@ -314,7 +379,7 @@ Creates a search query to find the nearest neighbors of the given search term
#### Defined in #### Defined in
[index.ts:283](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L283) [index.ts:373](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L373)
___ ___
@@ -365,4 +430,123 @@ let results = await tbl.search([1, 1]).execute();
#### Defined in #### Defined in
[index.ts:428](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L428) [index.ts:521](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L521)
## Methods
### addColumns
▸ **addColumns**(`newColumnTransforms`): `Promise`\<`void`\>
Add new columns with defined values.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `newColumnTransforms` | \{ `name`: `string` ; `valueSql`: `string` }[] | pairs of column names and the SQL expression to use to calculate the value of the new column. These expressions will be evaluated for each row in the table, and can reference existing columns in the table. |
#### Returns
`Promise`\<`void`\>
#### Defined in
[index.ts:582](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L582)
___
### alterColumns
▸ **alterColumns**(`columnAlterations`): `Promise`\<`void`\>
Alter the name or nullability of columns.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `columnAlterations` | [`ColumnAlteration`](ColumnAlteration.md)[] | One or more alterations to apply to columns. |
#### Returns
`Promise`\<`void`\>
#### Defined in
[index.ts:591](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L591)
___
### dropColumns
▸ **dropColumns**(`columnNames`): `Promise`\<`void`\>
Drop one or more columns from the dataset
This is a metadata-only operation and does not remove the data from the
underlying storage. In order to remove the data, you must subsequently
call ``compact_files`` to rewrite the data without the removed columns and
then call ``cleanup_files`` to remove the old files.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `columnNames` | `string`[] | The names of the columns to drop. These can be nested column references (e.g. "a.b.c") or top-level column names (e.g. "a"). |
#### Returns
`Promise`\<`void`\>
#### Defined in
[index.ts:605](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L605)
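
A hedged sketch combining the new schema-evolution methods (column names and SQL expressions are hypothetical); as noted above, `dropColumns` is metadata-only and does not by itself reclaim storage:

```ts
// Add a derived column computed per row from existing columns.
await tbl.addColumns([
  { name: 'price_with_tax', valueSql: 'price * 1.08' }
])

// Later, drop the original column from the schema.
await tbl.dropColumns(['price'])
```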
___
### filter
▸ **filter**(`value`): [`Query`](../classes/Query.md)\<`T`\>
#### Parameters
| Name | Type |
| :------ | :------ |
| `value` | `string` |
#### Returns
[`Query`](../classes/Query.md)\<`T`\>
#### Defined in
[index.ts:569](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L569)
___
### withMiddleware
▸ **withMiddleware**(`middleware`): [`Table`](Table.md)\<`T`\>
Instrument the behavior of this Table with middleware.
The middleware will be called in the order they are added.
Currently this functionality is only supported for remote tables.
#### Parameters
| Name | Type |
| :------ | :------ |
| `middleware` | `HttpMiddleware` |
#### Returns
[`Table`](Table.md)\<`T`\>
- this Table instrumented by the passed middleware
#### Defined in
[index.ts:617](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L617)

View File

@@ -20,7 +20,7 @@ new values to set
#### Defined in #### Defined in
[index.ts:454](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L454) [index.ts:652](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L652)
___ ___
@@ -33,4 +33,4 @@ in which case all rows will be updated.
#### Defined in #### Defined in
[index.ts:448](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L448) [index.ts:646](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L646)

View File

@@ -20,7 +20,7 @@ new values to set as SQL expressions.
#### Defined in #### Defined in
[index.ts:468](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L468) [index.ts:666](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L666)
___ ___
@@ -33,4 +33,4 @@ in which case all rows will be updated.
#### Defined in #### Defined in
[index.ts:462](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L462) [index.ts:660](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L660)

View File

@@ -8,6 +8,7 @@
- [columns](VectorIndex.md#columns) - [columns](VectorIndex.md#columns)
- [name](VectorIndex.md#name) - [name](VectorIndex.md#name)
- [status](VectorIndex.md#status)
- [uuid](VectorIndex.md#uuid) - [uuid](VectorIndex.md#uuid)
## Properties ## Properties
@@ -18,7 +19,7 @@
#### Defined in #### Defined in
[index.ts:472](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L472) [index.ts:718](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L718)
___ ___
@@ -28,7 +29,17 @@ ___
#### Defined in #### Defined in
[index.ts:473](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L473) [index.ts:719](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L719)
___
### status
**status**: [`IndexStatus`](../enums/IndexStatus.md)
#### Defined in
[index.ts:721](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L721)
___ ___
@@ -38,4 +49,4 @@ ___
#### Defined in #### Defined in
[index.ts:474](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L474) [index.ts:720](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L720)

View File

@@ -24,4 +24,4 @@ A [WriteMode](../enums/WriteMode.md) to use on this operation
#### Defined in #### Defined in
[index.ts:1015](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1015) [index.ts:1355](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1355)

View File

@@ -6,6 +6,7 @@
### Enumerations ### Enumerations
- [IndexStatus](enums/IndexStatus.md)
- [MetricType](enums/MetricType.md) - [MetricType](enums/MetricType.md)
- [WriteMode](enums/WriteMode.md) - [WriteMode](enums/WriteMode.md)
@@ -14,6 +15,7 @@
- [DefaultWriteOptions](classes/DefaultWriteOptions.md) - [DefaultWriteOptions](classes/DefaultWriteOptions.md)
- [LocalConnection](classes/LocalConnection.md) - [LocalConnection](classes/LocalConnection.md)
- [LocalTable](classes/LocalTable.md) - [LocalTable](classes/LocalTable.md)
- [MakeArrowTableOptions](classes/MakeArrowTableOptions.md)
- [OpenAIEmbeddingFunction](classes/OpenAIEmbeddingFunction.md) - [OpenAIEmbeddingFunction](classes/OpenAIEmbeddingFunction.md)
- [Query](classes/Query.md) - [Query](classes/Query.md)
@@ -21,6 +23,7 @@
- [AwsCredentials](interfaces/AwsCredentials.md) - [AwsCredentials](interfaces/AwsCredentials.md)
- [CleanupStats](interfaces/CleanupStats.md) - [CleanupStats](interfaces/CleanupStats.md)
- [ColumnAlteration](interfaces/ColumnAlteration.md)
- [CompactionMetrics](interfaces/CompactionMetrics.md) - [CompactionMetrics](interfaces/CompactionMetrics.md)
- [CompactionOptions](interfaces/CompactionOptions.md) - [CompactionOptions](interfaces/CompactionOptions.md)
- [Connection](interfaces/Connection.md) - [Connection](interfaces/Connection.md)
@@ -29,6 +32,7 @@
- [EmbeddingFunction](interfaces/EmbeddingFunction.md) - [EmbeddingFunction](interfaces/EmbeddingFunction.md)
- [IndexStats](interfaces/IndexStats.md) - [IndexStats](interfaces/IndexStats.md)
- [IvfPQIndexConfig](interfaces/IvfPQIndexConfig.md) - [IvfPQIndexConfig](interfaces/IvfPQIndexConfig.md)
- [MergeInsertArgs](interfaces/MergeInsertArgs.md)
- [Table](interfaces/Table.md) - [Table](interfaces/Table.md)
- [UpdateArgs](interfaces/UpdateArgs.md) - [UpdateArgs](interfaces/UpdateArgs.md)
- [UpdateSqlArgs](interfaces/UpdateSqlArgs.md) - [UpdateSqlArgs](interfaces/UpdateSqlArgs.md)
@@ -42,7 +46,9 @@
### Functions ### Functions
- [connect](modules.md#connect) - [connect](modules.md#connect)
- [convertToTable](modules.md#converttotable)
- [isWriteOptions](modules.md#iswriteoptions) - [isWriteOptions](modules.md#iswriteoptions)
- [makeArrowTable](modules.md#makearrowtable)
## Type Aliases ## Type Aliases
@@ -52,7 +58,7 @@
#### Defined in #### Defined in
[index.ts:996](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L996) [index.ts:1336](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1336)
## Functions ## Functions
@@ -62,11 +68,11 @@
Connect to a LanceDB instance at the given URI. Connect to a LanceDB instance at the given URI.
Accpeted formats: Accepted formats:
- `/path/to/database` - local database - `/path/to/database` - local database
- `s3://bucket/path/to/database` or `gs://bucket/path/to/database` - database on cloud storage - `s3://bucket/path/to/database` or `gs://bucket/path/to/database` - database on cloud storage
- `db://host:port` - remote database (SaaS) - `db://host:port` - remote database (LanceDB cloud)
#### Parameters #### Parameters
@@ -84,7 +90,7 @@ Accpeted formats:
#### Defined in #### Defined in
[index.ts:141](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L141) [index.ts:188](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L188)
**connect**(`opts`): `Promise`\<[`Connection`](interfaces/Connection.md)\> **connect**(`opts`): `Promise`\<[`Connection`](interfaces/Connection.md)\>
@@ -102,7 +108,35 @@ Connect to a LanceDB instance with connection options.
#### Defined in #### Defined in
[index.ts:147](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L147) [index.ts:194](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L194)
___
### convertToTable
**convertToTable**\<`T`\>(`data`, `embeddings?`, `makeTableOptions?`): `Promise`\<`ArrowTable`\>
#### Type parameters
| Name |
| :------ |
| `T` |
#### Parameters
| Name | Type |
| :------ | :------ |
| `data` | `Record`\<`string`, `unknown`\>[] |
| `embeddings?` | [`EmbeddingFunction`](interfaces/EmbeddingFunction.md)\<`T`\> |
| `makeTableOptions?` | `Partial`\<[`MakeArrowTableOptions`](classes/MakeArrowTableOptions.md)\> |
#### Returns
`Promise`\<`ArrowTable`\>
#### Defined in
[arrow.ts:465](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L465)
___ ___
@@ -122,4 +156,116 @@ value is WriteOptions
#### Defined in #### Defined in
[index.ts:1022](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1022) [index.ts:1362](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1362)
___
### makeArrowTable
**makeArrowTable**(`data`, `options?`): `ArrowTable`
An enhanced version of the makeTable function from Apache Arrow
that supports nested fields and embeddings columns.
This function converts an array of Record<String, any> (row-major JS objects)
to an Arrow Table (a columnar structure)
Note that it currently does not support nulls.
If a schema is provided then it will be used to determine the resulting array
types. Fields will also be reordered to fit the order defined by the schema.
If a schema is not provided then the types will be inferred and the field order
will be controlled by the order of properties in the first record.
If the input is empty then a schema must be provided to create an empty table.
When a schema is not specified then data types will be inferred. The inference
rules are as follows:
- boolean => Bool
- number => Float64
- String => Utf8
- Buffer => Binary
- Record<String, any> => Struct
- Array<any> => List
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `data` | `Record`\<`string`, `any`\>[] | input data |
| `options?` | `Partial`\<[`MakeArrowTableOptions`](classes/MakeArrowTableOptions.md)\> | options to control the makeArrowTable call. |
#### Returns
`ArrowTable`
**`Example`**
```ts
import { fromTableToBuffer, makeArrowTable } from "../arrow";
import { Field, FixedSizeList, Float16, Float32, Int32, Schema } from "apache-arrow";
const schema = new Schema([
new Field("a", new Int32()),
new Field("b", new Float32()),
new Field("c", new FixedSizeList(3, new Field("item", new Float16()))),
]);
const table = makeArrowTable([
{ a: 1, b: 2, c: [1, 2, 3] },
{ a: 4, b: 5, c: [4, 5, 6] },
{ a: 7, b: 8, c: [7, 8, 9] },
], { schema });
```
By default it assumes that the column named `vector` is a vector column
and it will be converted into a fixed size list array of type float32.
The `vectorColumns` option can be used to support other vector column
names and data types.
```ts
const schema = new Schema([
new Field("a", new Float64()),
new Field("b", new Float64()),
new Field(
"vector",
new FixedSizeList(3, new Field("item", new Float32()))
),
]);
const table = makeArrowTable([
{ a: 1, b: 2, vector: [1, 2, 3] },
{ a: 4, b: 5, vector: [4, 5, 6] },
{ a: 7, b: 8, vector: [7, 8, 9] },
]);
assert.deepEqual(table.schema, schema);
```
You can specify the vector column types and names using the options as well
```typescript
const schema = new Schema([
new Field('a', new Float64()),
new Field('b', new Float64()),
new Field('vec1', new FixedSizeList(3, new Field('item', new Float16()))),
new Field('vec2', new FixedSizeList(3, new Field('item', new Float16())))
]);
const table = makeArrowTable([
{ a: 1, b: 2, vec1: [1, 2, 3], vec2: [2, 4, 6] },
{ a: 4, b: 5, vec1: [4, 5, 6], vec2: [8, 10, 12] },
{ a: 7, b: 8, vec1: [7, 8, 9], vec2: [14, 16, 18] }
], {
vectorColumns: {
vec1: { type: new Float16() },
vec2: { type: new Float16() }
}
})
assert.deepEqual(table.schema, schema)
```
#### Defined in
[arrow.ts:198](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L198)

View File

@@ -1 +0,0 @@
TypeDoc added this file to prevent GitHub Pages from using Jekyll. You can turn off this behavior by setting the `githubPages` option to false.

View File

@@ -27,7 +27,9 @@ the underlying connection has been closed.
### new Connection() ### new Connection()
> **new Connection**(): [`Connection`](Connection.md) ```ts
new Connection(): Connection
```
#### Returns #### Returns
@@ -37,7 +39,9 @@ the underlying connection has been closed.
### close() ### close()
> `abstract` **close**(): `void` ```ts
abstract close(): void
```
Close the connection, releasing any underlying resources. Close the connection, releasing any underlying resources.
@@ -53,21 +57,24 @@ Any attempt to use the connection after it is closed will result in an error.
### createEmptyTable() ### createEmptyTable()
> `abstract` **createEmptyTable**(`name`, `schema`, `options`?): `Promise`&lt;[`Table`](Table.md)&gt; ```ts
abstract createEmptyTable(
name,
schema,
options?): Promise<Table>
```
Creates a new empty Table Creates a new empty Table
#### Parameters #### Parameters
**name**: `string` * **name**: `string`
The name of the table. The name of the table.
**schema**: `SchemaLike` * **schema**: `SchemaLike`
The schema of the table The schema of the table
**options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt; * **options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt;
#### Returns #### Returns
@@ -79,14 +86,15 @@ The schema of the table
#### createTable(options) #### createTable(options)
> `abstract` **createTable**(`options`): `Promise`&lt;[`Table`](Table.md)&gt; ```ts
abstract createTable(options): Promise<Table>
```
Creates a new Table and initialize it with new data. Creates a new Table and initialize it with new data.
##### Parameters ##### Parameters
**options**: `object` & `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt; * **options**: `object` & `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt;
The options object. The options object.
##### Returns ##### Returns
@@ -95,22 +103,25 @@ The options object.
#### createTable(name, data, options) #### createTable(name, data, options)
> `abstract` **createTable**(`name`, `data`, `options`?): `Promise`&lt;[`Table`](Table.md)&gt; ```ts
abstract createTable(
name,
data,
options?): Promise<Table>
```
Creates a new Table and initialize it with new data. Creates a new Table and initialize it with new data.
##### Parameters ##### Parameters
**name**: `string` * **name**: `string`
The name of the table. The name of the table.
**data**: `TableLike` \| `Record`&lt;`string`, `unknown`&gt;[] * **data**: `TableLike` \| `Record`&lt;`string`, `unknown`&gt;[]
Non-empty Array of Records Non-empty Array of Records
to be inserted into the table to be inserted into the table
**options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt; * **options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt;
##### Returns ##### Returns
@@ -120,7 +131,9 @@ to be inserted into the table
### display() ### display()
> `abstract` **display**(): `string` ```ts
abstract display(): string
```
Return a brief description of the connection Return a brief description of the connection
@@ -132,14 +145,15 @@ Return a brief description of the connection
### dropTable() ### dropTable()
> `abstract` **dropTable**(`name`): `Promise`&lt;`void`&gt; ```ts
abstract dropTable(name): Promise<void>
```
Drop an existing table. Drop an existing table.
#### Parameters #### Parameters
**name**: `string` * **name**: `string`
The name of the table to drop. The name of the table to drop.
#### Returns #### Returns
@@ -150,7 +164,9 @@ The name of the table to drop.
### isOpen() ### isOpen()
> `abstract` **isOpen**(): `boolean` ```ts
abstract isOpen(): boolean
```
Return true if the connection has not been closed Return true if the connection has not been closed
@@ -162,17 +178,18 @@ Return true if the connection has not been closed
### openTable() ### openTable()
> `abstract` **openTable**(`name`, `options`?): `Promise`&lt;[`Table`](Table.md)&gt; ```ts
abstract openTable(name, options?): Promise<Table>
```
Open a table in the database. Open a table in the database.
#### Parameters #### Parameters
**name**: `string` * **name**: `string`
The name of the table The name of the table
**options?**: `Partial`&lt;`OpenTableOptions`&gt; * **options?**: `Partial`&lt;`OpenTableOptions`&gt;
#### Returns #### Returns
@@ -182,7 +199,9 @@ The name of the table
### tableNames() ### tableNames()
> `abstract` **tableNames**(`options`?): `Promise`&lt;`string`[]&gt; ```ts
abstract tableNames(options?): Promise<string[]>
```
List all the table names in this database. List all the table names in this database.
@@ -190,8 +209,7 @@ Tables will be returned in lexicographical order.
#### Parameters #### Parameters
**options?**: `Partial`&lt;[`TableNamesOptions`](../interfaces/TableNamesOptions.md)&gt; * **options?**: `Partial`&lt;[`TableNamesOptions`](../interfaces/TableNamesOptions.md)&gt;
options to control the options to control the
paging / start point paging / start point
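The `Connection` methods documented above (`createTable`, `openTable`, `dropTable`, `tableNames`) are abstract; a minimal sketch of how they are typically exercised through the top-level `connect` function. The package name, paths, and sample records are assumptions for illustration, not taken from this diff.

```ts
import * as lancedb from "@lancedb/lancedb";

async function main() {
  // Connect to a local database directory (any accepted URI form works here).
  const db = await lancedb.connect("data/sample-lancedb");

  // Create a table from plain records; the schema is inferred from the data.
  const table = await db.createTable("words", [
    { id: 1, text: "hello", vector: [0.1, 0.2, 0.3] },
    { id: 2, text: "world", vector: [0.4, 0.5, 0.6] },
  ]);

  // Open it again later and list what exists.
  const reopened = await db.openTable("words");
  console.log(await db.tableNames()); // ["words"]
  console.log(reopened.name);

  // Drop it when done.
  await db.dropTable("words");
}

main().catch(console.error);
```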

View File

@@ -8,9 +8,30 @@
## Methods ## Methods
### bitmap()
```ts
static bitmap(): Index
```
Create a bitmap index.
A `Bitmap` index stores, for each distinct value in the column, a bitmap of the rows that contain it.
This index works best for low-cardinality columns, where the number of unique values
is small (i.e., less than a few hundred).
#### Returns
[`Index`](Index.md)
***
### btree() ### btree()
> `static` **btree**(): [`Index`](Index.md) ```ts
static btree(): Index
```
Create a btree index Create a btree index
@@ -36,9 +57,82 @@ block size may be added in the future.
*** ***
### fts()
```ts
static fts(options?): Index
```
Create a full text search index
A full text search index is an index on a string column, so that you can conduct full
text searches on the column.
The results of a full text search are ordered by relevance measured by BM25.
You can combine filters with full text search.
For now, the full text search index only supports English, and doesn't support phrase search.
#### Parameters
* **options?**: `Partial`&lt;`FtsOptions`&gt;
#### Returns
[`Index`](Index.md)
***
### hnswPq()
```ts
static hnswPq(options?): Index
```
Create a hnswPq index
HNSW-PQ stands for Hierarchical Navigable Small World - Product Quantization.
It is a variant of the HNSW algorithm that uses product quantization to compress
the vectors.
#### Parameters
* **options?**: `Partial`&lt;`HnswPqOptions`&gt;
#### Returns
[`Index`](Index.md)
***
### hnswSq()
```ts
static hnswSq(options?): Index
```
Create a hnswSq index
HNSW-SQ stands for Hierarchical Navigable Small World - Scalar Quantization.
It is a variant of the HNSW algorithm that uses scalar quantization to compress
the vectors.
#### Parameters
* **options?**: `Partial`&lt;`HnswSqOptions`&gt;
#### Returns
[`Index`](Index.md)
***
### ivfPq() ### ivfPq()
> `static` **ivfPq**(`options`?): [`Index`](Index.md) ```ts
static ivfPq(options?): Index
```
Create an IvfPq index Create an IvfPq index
@@ -63,7 +157,25 @@ currently is also a memory intensive operation.
#### Parameters #### Parameters
**options?**: `Partial`&lt;[`IvfPqOptions`](../interfaces/IvfPqOptions.md)&gt; * **options?**: `Partial`&lt;[`IvfPqOptions`](../interfaces/IvfPqOptions.md)&gt;
#### Returns
[`Index`](Index.md)
***
### labelList()
```ts
static labelList(): Index
```
Create a label list index.
LabelList index is a scalar index that can be used on `List<T>` columns to
support queries with `array_contains_all` and `array_contains_any`
using an underlying bitmap index.
#### Returns #### Returns
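The static constructors above (`bitmap`, `btree`, `fts`, `hnswPq`, `hnswSq`, `ivfPq`, `labelList`) are passed to `Table.createIndex` rather than used directly. A sketch under the assumption that the index goes in the `config` field of `IndexOptions` and that the `IvfPqOptions` parameter names shown are correct; treat the numbers as illustrative.

```ts
import { Index, type Table } from "@lancedb/lancedb";

async function buildIndexes(table: Table) {
  // Scalar bitmap index for a low-cardinality string column.
  await table.createIndex("category", { config: Index.bitmap() });

  // Full text search index on a text column.
  await table.createIndex("text", { config: Index.fts() });

  // Vector index with product quantization; parameter values are illustrative.
  await table.createIndex("vector", {
    config: Index.ivfPq({ numPartitions: 128, numSubVectors: 16 }),
  });
}
```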

View File

@@ -12,11 +12,13 @@ Options to control the makeArrowTable call.
### new MakeArrowTableOptions() ### new MakeArrowTableOptions()
> **new MakeArrowTableOptions**(`values`?): [`MakeArrowTableOptions`](MakeArrowTableOptions.md) ```ts
new MakeArrowTableOptions(values?): MakeArrowTableOptions
```
#### Parameters #### Parameters
**values?**: `Partial`&lt;[`MakeArrowTableOptions`](MakeArrowTableOptions.md)&gt; * **values?**: `Partial`&lt;[`MakeArrowTableOptions`](MakeArrowTableOptions.md)&gt;
#### Returns #### Returns
@@ -26,7 +28,9 @@ Options to control the makeArrowTable call.
### dictionaryEncodeStrings ### dictionaryEncodeStrings
> **dictionaryEncodeStrings**: `boolean` = `false` ```ts
dictionaryEncodeStrings: boolean = false;
```
If true then string columns will be encoded with dictionary encoding If true then string columns will be encoded with dictionary encoding
@@ -40,22 +44,30 @@ If `schema` is provided then this property is ignored.
### embeddingFunction? ### embeddingFunction?
> `optional` **embeddingFunction**: [`EmbeddingFunctionConfig`](../namespaces/embedding/interfaces/EmbeddingFunctionConfig.md) ```ts
optional embeddingFunction: EmbeddingFunctionConfig;
```
*** ***
### embeddings? ### embeddings?
> `optional` **embeddings**: [`EmbeddingFunction`](../namespaces/embedding/classes/EmbeddingFunction.md)&lt;`unknown`, `FunctionOptions`&gt; ```ts
optional embeddings: EmbeddingFunction<unknown, FunctionOptions>;
```
*** ***
### schema? ### schema?
> `optional` **schema**: `SchemaLike` ```ts
optional schema: SchemaLike;
```
*** ***
### vectorColumns ### vectorColumns
> **vectorColumns**: `Record`&lt;`string`, [`VectorColumnOptions`](VectorColumnOptions.md)&gt; ```ts
vectorColumns: Record<string, VectorColumnOptions>;
```

View File

@@ -16,11 +16,13 @@ A builder for LanceDB queries.
### new Query() ### new Query()
> **new Query**(`tbl`): [`Query`](Query.md) ```ts
new Query(tbl): Query
```
#### Parameters #### Parameters
**tbl**: `Table` * **tbl**: `Table`
#### Returns #### Returns
@@ -34,7 +36,9 @@ A builder for LanceDB queries.
### inner ### inner
> `protected` **inner**: `Query` \| `Promise`&lt;`Query`&gt; ```ts
protected inner: Query | Promise<Query>;
```
#### Inherited from #### Inherited from
@@ -44,7 +48,9 @@ A builder for LanceDB queries.
### \[asyncIterator\]() ### \[asyncIterator\]()
> **\[asyncIterator\]**(): `AsyncIterator`&lt;`RecordBatch`&lt;`any`&gt;, `any`, `undefined`&gt; ```ts
asyncIterator: AsyncIterator<RecordBatch<any>, any, undefined>
```
#### Returns #### Returns
@@ -58,11 +64,13 @@ A builder for LanceDB queries.
### doCall() ### doCall()
> `protected` **doCall**(`fn`): `void` ```ts
protected doCall(fn): void
```
#### Parameters #### Parameters
**fn** * **fn**
#### Returns #### Returns
@@ -76,13 +84,15 @@ A builder for LanceDB queries.
### execute() ### execute()
> `protected` **execute**(`options`?): [`RecordBatchIterator`](RecordBatchIterator.md) ```ts
protected execute(options?): RecordBatchIterator
```
Execute the query and return the results as an Execute the query and return the results as an
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -108,14 +118,15 @@ single query)
### explainPlan() ### explainPlan()
> **explainPlan**(`verbose`): `Promise`&lt;`string`&gt; ```ts
explainPlan(verbose): Promise<string>
```
Generates an explanation of the query execution plan. Generates an explanation of the query execution plan.
#### Parameters #### Parameters
**verbose**: `boolean` = `false` * **verbose**: `boolean` = `false`
If true, provides a more detailed explanation. Defaults to false. If true, provides a more detailed explanation. Defaults to false.
#### Returns #### Returns
@@ -141,15 +152,38 @@ const plan = await table.query().nearestTo([0.5, 0.2]).explainPlan();
*** ***
### fastSearch()
```ts
fastSearch(): this
```
Skip searching un-indexed data. This can make search faster, but will miss
any data that is not yet indexed.
Use lancedb.Table#optimize to index all un-indexed data.
#### Returns
`this`
#### Inherited from
[`QueryBase`](QueryBase.md).[`fastSearch`](QueryBase.md#fastsearch)
***
### ~~filter()~~ ### ~~filter()~~
> **filter**(`predicate`): `this` ```ts
filter(predicate): this
```
A filter statement to be applied to this query. A filter statement to be applied to this query.
#### Parameters #### Parameters
**predicate**: `string` * **predicate**: `string`
#### Returns #### Returns
@@ -169,9 +203,33 @@ Use `where` instead
*** ***
### fullTextSearch()
```ts
fullTextSearch(query, options?): this
```
#### Parameters
* **query**: `string`
* **options?**: `Partial`&lt;`FullTextSearchOptions`&gt;
#### Returns
`this`
#### Inherited from
[`QueryBase`](QueryBase.md).[`fullTextSearch`](QueryBase.md#fulltextsearch)
***
### limit() ### limit()
> **limit**(`limit`): `this` ```ts
limit(limit): this
```
Set the maximum number of results to return. Set the maximum number of results to return.
@@ -180,7 +238,7 @@ called then every valid row from the table will be returned.
#### Parameters #### Parameters
**limit**: `number` * **limit**: `number`
#### Returns #### Returns
@@ -194,11 +252,13 @@ called then every valid row from the table will be returned.
### nativeExecute() ### nativeExecute()
> `protected` **nativeExecute**(`options`?): `Promise`&lt;`RecordBatchIterator`&gt; ```ts
protected nativeExecute(options?): Promise<RecordBatchIterator>
```
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -212,7 +272,9 @@ called then every valid row from the table will be returned.
### nearestTo() ### nearestTo()
> **nearestTo**(`vector`): [`VectorQuery`](VectorQuery.md) ```ts
nearestTo(vector): VectorQuery
```
Find the nearest vectors to the given query vector. Find the nearest vectors to the given query vector.
@@ -232,7 +294,7 @@ If there is more than one vector column you must use
#### Parameters #### Parameters
**vector**: `IntoVector` * **vector**: `IntoVector`
#### Returns #### Returns
@@ -264,9 +326,49 @@ a default `limit` of 10 will be used.
*** ***
### nearestToText()
```ts
nearestToText(query, columns?): Query
```
#### Parameters
* **query**: `string`
* **columns?**: `string`[]
#### Returns
[`Query`](Query.md)
***
### offset()
```ts
offset(offset): this
```
#### Parameters
* **offset**: `number`
#### Returns
`this`
#### Inherited from
[`QueryBase`](QueryBase.md).[`offset`](QueryBase.md#offset)
***
### select() ### select()
> **select**(`columns`): `this` ```ts
select(columns): this
```
Return only the specified columns. Return only the specified columns.
@@ -290,7 +392,7 @@ input to this method would be:
#### Parameters #### Parameters
**columns**: `string` \| `string`[] \| `Record`&lt;`string`, `string`&gt; \| `Map`&lt;`string`, `string`&gt; * **columns**: `string` \| `string`[] \| `Record`&lt;`string`, `string`&gt; \| `Map`&lt;`string`, `string`&gt;
#### Returns #### Returns
@@ -317,13 +419,15 @@ object insertion order is easy to get wrong and `Map` is more foolproof.
### toArray() ### toArray()
> **toArray**(`options`?): `Promise`&lt;`any`[]&gt; ```ts
toArray(options?): Promise<any[]>
```
Collect the results as an array of objects. Collect the results as an array of objects.
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -337,13 +441,15 @@ Collect the results as an array of objects.
### toArrow() ### toArrow()
> **toArrow**(`options`?): `Promise`&lt;`Table`&lt;`any`&gt;&gt; ```ts
toArrow(options?): Promise<Table<any>>
```
Collect the results as an Arrow Collect the results as an Arrow
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -361,7 +467,9 @@ ArrowTable.
### where() ### where()
> **where**(`predicate`): `this` ```ts
where(predicate): this
```
A filter statement to be applied to this query. A filter statement to be applied to this query.
@@ -369,7 +477,7 @@ The filter should be supplied as an SQL query string. For example:
#### Parameters #### Parameters
**predicate**: `string` * **predicate**: `string`
#### Returns #### Returns
@@ -389,3 +497,25 @@ on the filter column(s).
#### Inherited from #### Inherited from
[`QueryBase`](QueryBase.md).[`where`](QueryBase.md#where) [`QueryBase`](QueryBase.md).[`where`](QueryBase.md#where)
***
### withRowId()
```ts
withRowId(): this
```
Whether to return the row id in the results.
This column can be used to match results between different queries. For
example, to match results from a full text search and a vector search in
order to perform hybrid search.
#### Returns
`this`
#### Inherited from
[`QueryBase`](QueryBase.md).[`withRowId`](QueryBase.md#withrowid)
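`fastSearch`, `fullTextSearch`, `offset`, and `withRowId` are the query-builder additions documented above. A small sketch of matching full text and vector results by row id, as the `withRowId` description suggests; the `_rowid` result column name is an assumption, not taken from this diff.

```ts
import type { Table } from "@lancedb/lancedb";

async function hybridMatch(table: Table) {
  // Full text search, keeping the row id so results can be matched later.
  const ftsHits = await table
    .query()
    .fullTextSearch("puppy")
    .withRowId()
    .limit(20)
    .toArray();

  // Vector search over the same table, also keeping the row id.
  const vectorHits = await table
    .query()
    .nearestTo([0.1, 0.2, 0.3])
    .withRowId()
    .limit(20)
    .toArray();

  // Intersect on the row id column (assumed to be "_rowid") to keep rows
  // that scored in both searches.
  const ftsIds = new Set(ftsHits.map((r) => r._rowid));
  return vectorHits.filter((r) => ftsIds.has(r._rowid));
}
```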

View File

@@ -25,11 +25,13 @@ Common methods supported by all query types
### new QueryBase() ### new QueryBase()
> `protected` **new QueryBase**&lt;`NativeQueryType`&gt;(`inner`): [`QueryBase`](QueryBase.md)&lt;`NativeQueryType`&gt; ```ts
protected new QueryBase<NativeQueryType>(inner): QueryBase<NativeQueryType>
```
#### Parameters #### Parameters
**inner**: `NativeQueryType` \| `Promise`&lt;`NativeQueryType`&gt; * **inner**: `NativeQueryType` \| `Promise`&lt;`NativeQueryType`&gt;
#### Returns #### Returns
@@ -39,13 +41,17 @@ Common methods supported by all query types
### inner ### inner
> `protected` **inner**: `NativeQueryType` \| `Promise`&lt;`NativeQueryType`&gt; ```ts
protected inner: NativeQueryType | Promise<NativeQueryType>;
```
## Methods ## Methods
### \[asyncIterator\]() ### \[asyncIterator\]()
> **\[asyncIterator\]**(): `AsyncIterator`&lt;`RecordBatch`&lt;`any`&gt;, `any`, `undefined`&gt; ```ts
asyncIterator: AsyncIterator<RecordBatch<any>, any, undefined>
```
#### Returns #### Returns
@@ -59,11 +65,13 @@ Common methods supported by all query types
### doCall() ### doCall()
> `protected` **doCall**(`fn`): `void` ```ts
protected doCall(fn): void
```
#### Parameters #### Parameters
**fn** * **fn**
#### Returns #### Returns
@@ -73,13 +81,15 @@ Common methods supported by all query types
### execute() ### execute()
> `protected` **execute**(`options`?): [`RecordBatchIterator`](RecordBatchIterator.md) ```ts
protected execute(options?): RecordBatchIterator
```
Execute the query and return the results as an Execute the query and return the results as an
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -101,14 +111,15 @@ single query)
### explainPlan() ### explainPlan()
> **explainPlan**(`verbose`): `Promise`&lt;`string`&gt; ```ts
explainPlan(verbose): Promise<string>
```
Generates an explanation of the query execution plan. Generates an explanation of the query execution plan.
#### Parameters #### Parameters
**verbose**: `boolean` = `false` * **verbose**: `boolean` = `false`
If true, provides a more detailed explanation. Defaults to false. If true, provides a more detailed explanation. Defaults to false.
#### Returns #### Returns
@@ -130,15 +141,34 @@ const plan = await table.query().nearestTo([0.5, 0.2]).explainPlan();
*** ***
### fastSearch()
```ts
fastSearch(): this
```
Skip searching un-indexed data. This can make search faster, but will miss
any data that is not yet indexed.
Use lancedb.Table#optimize to index all un-indexed data.
#### Returns
`this`
***
### ~~filter()~~ ### ~~filter()~~
> **filter**(`predicate`): `this` ```ts
filter(predicate): this
```
A filter statement to be applied to this query. A filter statement to be applied to this query.
#### Parameters #### Parameters
**predicate**: `string` * **predicate**: `string`
#### Returns #### Returns
@@ -154,9 +184,29 @@ Use `where` instead
*** ***
### fullTextSearch()
```ts
fullTextSearch(query, options?): this
```
#### Parameters
* **query**: `string`
* **options?**: `Partial`&lt;`FullTextSearchOptions`&gt;
#### Returns
`this`
***
### limit() ### limit()
> **limit**(`limit`): `this` ```ts
limit(limit): this
```
Set the maximum number of results to return. Set the maximum number of results to return.
@@ -165,7 +215,7 @@ called then every valid row from the table will be returned.
#### Parameters #### Parameters
**limit**: `number` * **limit**: `number`
#### Returns #### Returns
@@ -175,11 +225,13 @@ called then every valid row from the table will be returned.
### nativeExecute() ### nativeExecute()
> `protected` **nativeExecute**(`options`?): `Promise`&lt;`RecordBatchIterator`&gt; ```ts
protected nativeExecute(options?): Promise<RecordBatchIterator>
```
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -187,9 +239,27 @@ called then every valid row from the table will be returned.
*** ***
### offset()
```ts
offset(offset): this
```
#### Parameters
* **offset**: `number`
#### Returns
`this`
***
### select() ### select()
> **select**(`columns`): `this` ```ts
select(columns): this
```
Return only the specified columns. Return only the specified columns.
@@ -213,7 +283,7 @@ input to this method would be:
#### Parameters #### Parameters
**columns**: `string` \| `string`[] \| `Record`&lt;`string`, `string`&gt; \| `Map`&lt;`string`, `string`&gt; * **columns**: `string` \| `string`[] \| `Record`&lt;`string`, `string`&gt; \| `Map`&lt;`string`, `string`&gt;
#### Returns #### Returns
@@ -236,13 +306,15 @@ object insertion order is easy to get wrong and `Map` is more foolproof.
### toArray() ### toArray()
> **toArray**(`options`?): `Promise`&lt;`any`[]&gt; ```ts
toArray(options?): Promise<any[]>
```
Collect the results as an array of objects. Collect the results as an array of objects.
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -252,13 +324,15 @@ Collect the results as an array of objects.
### toArrow() ### toArrow()
> **toArrow**(`options`?): `Promise`&lt;`Table`&lt;`any`&gt;&gt; ```ts
toArrow(options?): Promise<Table<any>>
```
Collect the results as an Arrow Collect the results as an Arrow
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -272,7 +346,9 @@ ArrowTable.
### where() ### where()
> **where**(`predicate`): `this` ```ts
where(predicate): this
```
A filter statement to be applied to this query. A filter statement to be applied to this query.
@@ -280,7 +356,7 @@ The filter should be supplied as an SQL query string. For example:
#### Parameters #### Parameters
**predicate**: `string` * **predicate**: `string`
#### Returns #### Returns
@@ -296,3 +372,21 @@ x > 5 OR y = 'test'
Filtering performance can often be improved by creating a scalar index Filtering performance can often be improved by creating a scalar index
on the filter column(s). on the filter column(s).
``` ```
***
### withRowId()
```ts
withRowId(): this
```
Whether to return the row id in the results.
This column can be used to match results between different queries. For
example, to match results from a full text search and a vector search in
order to perform hybrid search.
#### Returns
`this`
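`offset()` above carries no prose, so here is a hedged sketch of paging through a table with `offset` and `limit`. The page size is arbitrary, and without an explicit ordering this is only best-effort pagination.

```ts
import type { Table } from "@lancedb/lancedb";

async function* pages(table: Table, pageSize = 100) {
  // Walk the table page by page using offset/limit.
  const total = await table.countRows();
  for (let start = 0; start < total; start += pageSize) {
    yield await table.query().offset(start).limit(pageSize).toArray();
  }
}
```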

View File

@@ -14,11 +14,13 @@
### new RecordBatchIterator() ### new RecordBatchIterator()
> **new RecordBatchIterator**(`promise`?): [`RecordBatchIterator`](RecordBatchIterator.md) ```ts
new RecordBatchIterator(promise?): RecordBatchIterator
```
#### Parameters #### Parameters
**promise?**: `Promise`&lt;`RecordBatchIterator`&gt; * **promise?**: `Promise`&lt;`RecordBatchIterator`&gt;
#### Returns #### Returns
@@ -28,7 +30,9 @@
### next() ### next()
> **next**(): `Promise`&lt;`IteratorResult`&lt;`RecordBatch`&lt;`any`&gt;, `any`&gt;&gt; ```ts
next(): Promise<IteratorResult<RecordBatch<any>, any>>
```
#### Returns #### Returns
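Queries are async-iterable and yield Arrow `RecordBatch` objects through this iterator. A minimal sketch of streaming results batch by batch instead of collecting everything with `toArray`:

```ts
import type { Table } from "@lancedb/lancedb";

async function streamRows(table: Table) {
  let count = 0;
  // Each iteration yields one Arrow RecordBatch of results.
  for await (const batch of table.query()) {
    count += batch.numRows;
  }
  return count;
}
```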

View File

@@ -21,7 +21,9 @@ collected.
### new Table() ### new Table()
> **new Table**(): [`Table`](Table.md) ```ts
new Table(): Table
```
#### Returns #### Returns
@@ -31,7 +33,9 @@ collected.
### name ### name
> `get` `abstract` **name**(): `string` ```ts
get abstract name(): string
```
Returns the name of the table Returns the name of the table
@@ -43,17 +47,18 @@ Returns the name of the table
### add() ### add()
> `abstract` **add**(`data`, `options`?): `Promise`&lt;`void`&gt; ```ts
abstract add(data, options?): Promise<void>
```
Insert records into this Table. Insert records into this Table.
#### Parameters #### Parameters
**data**: [`Data`](../type-aliases/Data.md) * **data**: [`Data`](../type-aliases/Data.md)
Records to be inserted into the Table Records to be inserted into the Table
**options?**: `Partial`&lt;[`AddDataOptions`](../interfaces/AddDataOptions.md)&gt; * **options?**: `Partial`&lt;[`AddDataOptions`](../interfaces/AddDataOptions.md)&gt;
#### Returns #### Returns
@@ -63,14 +68,15 @@ Records to be inserted into the Table
### addColumns() ### addColumns()
> `abstract` **addColumns**(`newColumnTransforms`): `Promise`&lt;`void`&gt; ```ts
abstract addColumns(newColumnTransforms): Promise<void>
```
Add new columns with defined values. Add new columns with defined values.
#### Parameters #### Parameters
**newColumnTransforms**: [`AddColumnsSql`](../interfaces/AddColumnsSql.md)[] * **newColumnTransforms**: [`AddColumnsSql`](../interfaces/AddColumnsSql.md)[]
pairs of column names and pairs of column names and
the SQL expression to use to calculate the value of the new column. These the SQL expression to use to calculate the value of the new column. These
expressions will be evaluated for each row in the table, and can expressions will be evaluated for each row in the table, and can
@@ -84,14 +90,15 @@ reference existing columns in the table.
### alterColumns() ### alterColumns()
> `abstract` **alterColumns**(`columnAlterations`): `Promise`&lt;`void`&gt; ```ts
abstract alterColumns(columnAlterations): Promise<void>
```
Alter the name or nullability of columns. Alter the name or nullability of columns.
#### Parameters #### Parameters
**columnAlterations**: [`ColumnAlteration`](../interfaces/ColumnAlteration.md)[] * **columnAlterations**: [`ColumnAlteration`](../interfaces/ColumnAlteration.md)[]
One or more alterations to One or more alterations to
apply to columns. apply to columns.
@@ -103,7 +110,9 @@ apply to columns.
### checkout() ### checkout()
> `abstract` **checkout**(`version`): `Promise`&lt;`void`&gt; ```ts
abstract checkout(version): Promise<void>
```
Checks out a specific version of the table _This is an in-place operation._ Checks out a specific version of the table _This is an in-place operation._
@@ -116,8 +125,7 @@ wish to return to standard mode, call `checkoutLatest`.
#### Parameters #### Parameters
**version**: `number` * **version**: `number`
The version to checkout The version to checkout
#### Returns #### Returns
@@ -144,7 +152,9 @@ console.log(await table.version()); // 2
### checkoutLatest() ### checkoutLatest()
> `abstract` **checkoutLatest**(): `Promise`&lt;`void`&gt; ```ts
abstract checkoutLatest(): Promise<void>
```
Checkout the latest version of the table. _This is an in-place operation._ Checkout the latest version of the table. _This is an in-place operation._
@@ -159,7 +169,9 @@ version of the table.
### close() ### close()
> `abstract` **close**(): `void` ```ts
abstract close(): void
```
Close the table, releasing any underlying resources. Close the table, releasing any underlying resources.
@@ -175,13 +187,15 @@ Any attempt to use the table after it is closed will result in an error.
### countRows() ### countRows()
> `abstract` **countRows**(`filter`?): `Promise`&lt;`number`&gt; ```ts
abstract countRows(filter?): Promise<number>
```
Count the total number of rows in the dataset. Count the total number of rows in the dataset.
#### Parameters #### Parameters
**filter?**: `string` * **filter?**: `string`
#### Returns #### Returns
@@ -191,7 +205,9 @@ Count the total number of rows in the dataset.
### createIndex() ### createIndex()
> `abstract` **createIndex**(`column`, `options`?): `Promise`&lt;`void`&gt; ```ts
abstract createIndex(column, options?): Promise<void>
```
Create an index to speed up queries. Create an index to speed up queries.
@@ -202,9 +218,9 @@ vector and non-vector searches)
#### Parameters #### Parameters
**column**: `string` * **column**: `string`
**options?**: `Partial`&lt;[`IndexOptions`](../interfaces/IndexOptions.md)&gt; * **options?**: `Partial`&lt;[`IndexOptions`](../interfaces/IndexOptions.md)&gt;
#### Returns #### Returns
@@ -245,13 +261,15 @@ await table.createIndex("my_float_col");
### delete() ### delete()
> `abstract` **delete**(`predicate`): `Promise`&lt;`void`&gt; ```ts
abstract delete(predicate): Promise<void>
```
Delete the rows that satisfy the predicate. Delete the rows that satisfy the predicate.
#### Parameters #### Parameters
**predicate**: `string` * **predicate**: `string`
#### Returns #### Returns
@@ -261,7 +279,9 @@ Delete the rows that satisfy the predicate.
### display() ### display()
> `abstract` **display**(): `string` ```ts
abstract display(): string
```
Return a brief description of the table Return a brief description of the table
@@ -273,7 +293,9 @@ Return a brief description of the table
### dropColumns() ### dropColumns()
> `abstract` **dropColumns**(`columnNames`): `Promise`&lt;`void`&gt; ```ts
abstract dropColumns(columnNames): Promise<void>
```
Drop one or more columns from the dataset Drop one or more columns from the dataset
@@ -284,8 +306,7 @@ then call ``cleanup_files`` to remove the old files.
#### Parameters #### Parameters
**columnNames**: `string`[] * **columnNames**: `string`[]
The names of the columns to drop. These can The names of the columns to drop. These can
be nested column references (e.g. "a.b.c") or top-level column names be nested column references (e.g. "a.b.c") or top-level column names
(e.g. "a"). (e.g. "a").
@@ -298,14 +319,15 @@ be nested column references (e.g. "a.b.c") or top-level column names
### indexStats() ### indexStats()
> `abstract` **indexStats**(`name`): `Promise`&lt;`undefined` \| [`IndexStatistics`](../interfaces/IndexStatistics.md)&gt; ```ts
abstract indexStats(name): Promise<undefined | IndexStatistics>
```
List all the stats of a specified index List all the stats of a specified index
#### Parameters #### Parameters
**name**: `string` * **name**: `string`
The name of the index. The name of the index.
#### Returns #### Returns
@@ -318,7 +340,9 @@ The stats of the index. If the index does not exist, it will return undefined
### isOpen() ### isOpen()
> `abstract` **isOpen**(): `boolean` ```ts
abstract isOpen(): boolean
```
Return true if the table has not been closed Return true if the table has not been closed
@@ -330,7 +354,9 @@ Return true if the table has not been closed
### listIndices() ### listIndices()
> `abstract` **listIndices**(): `Promise`&lt;[`IndexConfig`](../interfaces/IndexConfig.md)[]&gt; ```ts
abstract listIndices(): Promise<IndexConfig[]>
```
List all indices that have been created with [Table.createIndex](Table.md#createindex) List all indices that have been created with [Table.createIndex](Table.md#createindex)
@@ -340,13 +366,29 @@ List all indices that have been created with [Table.createIndex](Table.md#create
*** ***
### listVersions()
```ts
abstract listVersions(): Promise<Version[]>
```
List all the versions of the table
#### Returns
`Promise`&lt;`Version`[]&gt;
***
### mergeInsert() ### mergeInsert()
> `abstract` **mergeInsert**(`on`): `MergeInsertBuilder` ```ts
abstract mergeInsert(on): MergeInsertBuilder
```
#### Parameters #### Parameters
**on**: `string` \| `string`[] * **on**: `string` \| `string`[]
#### Returns #### Returns
@@ -356,7 +398,9 @@ List all indices that have been created with [Table.createIndex](Table.md#create
### optimize() ### optimize()
> `abstract` **optimize**(`options`?): `Promise`&lt;`OptimizeStats`&gt; ```ts
abstract optimize(options?): Promise<OptimizeStats>
```
Optimize the on-disk data and indices for better performance. Optimize the on-disk data and indices for better performance.
@@ -388,7 +432,7 @@ Modeled after ``VACUUM`` in PostgreSQL.
#### Parameters #### Parameters
**options?**: `Partial`&lt;`OptimizeOptions`&gt; * **options?**: `Partial`&lt;[`OptimizeOptions`](../interfaces/OptimizeOptions.md)&gt;
#### Returns #### Returns
@@ -398,7 +442,9 @@ Modeled after ``VACUUM`` in PostgreSQL.
### query() ### query()
> `abstract` **query**(): [`Query`](Query.md) ```ts
abstract query(): Query
```
Create a [Query](Query.md) Builder. Create a [Query](Query.md) Builder.
@@ -466,7 +512,9 @@ for await (const batch of table.query()) {
### restore() ### restore()
> `abstract` **restore**(): `Promise`&lt;`void`&gt; ```ts
abstract restore(): Promise<void>
```
Restore the table to the currently checked out version Restore the table to the currently checked out version
@@ -487,7 +535,9 @@ out state and the read_consistency_interval, if any, will apply.
### schema() ### schema()
> `abstract` **schema**(): `Promise`&lt;`Schema`&lt;`any`&gt;&gt; ```ts
abstract schema(): Promise<Schema<any>>
```
Get the schema of the table. Get the schema of the table.
@@ -499,49 +549,41 @@ Get the schema of the table.
### search() ### search()
#### search(query) ```ts
abstract search(
> `abstract` **search**(`query`): [`VectorQuery`](VectorQuery.md) query,
queryType?,
ftsColumns?): VectorQuery | Query
```
Create a search query to find the nearest neighbors Create a search query to find the nearest neighbors
of the given query vector of the given query
##### Parameters #### Parameters
**query**: `string` * **query**: `string` \| `IntoVector`
the query, a vector or string
the query. This will be converted to a vector using the table's provided embedding function * **queryType?**: `string`
the type of the query, "vector", "fts", or "auto"
##### Returns * **ftsColumns?**: `string` \| `string`[]
the columns to search in for full text search
for now, only one column can be searched at a time.
when "auto" is used, if the query is a string and an embedding function is defined, it will be treated as a vector query
if the query is a string and no embedding function is defined, it will be treated as a full text search query
[`VectorQuery`](VectorQuery.md) #### Returns
##### Note [`VectorQuery`](VectorQuery.md) \| [`Query`](Query.md)
If no embedding functions are defined in the table, this will error when collecting the results.
#### search(query)
> `abstract` **search**(`query`): [`VectorQuery`](VectorQuery.md)
Create a search query to find the nearest neighbors
of the given query vector
##### Parameters
• **query**: `IntoVector`
the query vector
##### Returns
[`VectorQuery`](VectorQuery.md)
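The reworked `search()` above accepts either a vector or a string, with `queryType` set to `"vector"`, `"fts"`, or `"auto"`. A sketch of the three forms; as the docs note, the behaviour of `"auto"` for string queries depends on whether an embedding function is registered on the table.

```ts
import type { Table } from "@lancedb/lancedb";

async function searchExamples(table: Table) {
  // Vector query: nearest neighbours to an explicit query vector.
  const byVector = await table.search([0.1, 0.2, 0.3]).limit(5).toArray();

  // Full text search on a string, forced with queryType "fts".
  const byText = await table.search("puppy", "fts").limit(5).toArray();

  // "auto": treated as a vector query if an embedding function is configured
  // on the table, otherwise as a full text search query.
  const auto = await table.search("puppy", "auto").limit(5).toArray();

  return { byVector, byText, auto };
}
```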
*** ***
### toArrow() ### toArrow()
> `abstract` **toArrow**(): `Promise`&lt;`Table`&lt;`any`&gt;&gt; ```ts
abstract toArrow(): Promise<Table<any>>
```
Return the table as an arrow table Return the table as an arrow table
@@ -555,13 +597,15 @@ Return the table as an arrow table
#### update(opts) #### update(opts)
> `abstract` **update**(`opts`): `Promise`&lt;`void`&gt; ```ts
abstract update(opts): Promise<void>
```
Update existing records in the Table Update existing records in the Table
##### Parameters ##### Parameters
**opts**: `object` & `Partial`&lt;[`UpdateOptions`](../interfaces/UpdateOptions.md)&gt; * **opts**: `object` & `Partial`&lt;[`UpdateOptions`](../interfaces/UpdateOptions.md)&gt;
##### Returns ##### Returns
@@ -575,13 +619,15 @@ table.update({where:"x = 2", values:{"vector": [10, 10]}})
#### update(opts) #### update(opts)
> `abstract` **update**(`opts`): `Promise`&lt;`void`&gt; ```ts
abstract update(opts): Promise<void>
```
Update existing records in the Table Update existing records in the Table
##### Parameters ##### Parameters
**opts**: `object` & `Partial`&lt;[`UpdateOptions`](../interfaces/UpdateOptions.md)&gt; * **opts**: `object` & `Partial`&lt;[`UpdateOptions`](../interfaces/UpdateOptions.md)&gt;
##### Returns ##### Returns
@@ -595,7 +641,9 @@ table.update({where:"x = 2", valuesSql:{"x": "x + 1"}})
#### update(updates, options) #### update(updates, options)
> `abstract` **update**(`updates`, `options`?): `Promise`&lt;`void`&gt; ```ts
abstract update(updates, options?): Promise<void>
```
Update existing records in the Table Update existing records in the Table
@@ -614,18 +662,15 @@ repeatedly calling this method.
##### Parameters ##### Parameters
**updates**: `Record`&lt;`string`, `string`&gt; \| `Map`&lt;`string`, `string`&gt; * **updates**: `Record`&lt;`string`, `string`&gt; \| `Map`&lt;`string`, `string`&gt;
the the
columns to update columns to update
Keys in the map should specify the name of the column to update. Keys in the map should specify the name of the column to update.
Values in the map provide the new value of the column. These can Values in the map provide the new value of the column. These can
be SQL literal strings (e.g. "7" or "'foo'") or they can be expressions be SQL literal strings (e.g. "7" or "'foo'") or they can be expressions
based on the row being updated (e.g. "my_col + 1") based on the row being updated (e.g. "my_col + 1")
**options?**: `Partial`&lt;[`UpdateOptions`](../interfaces/UpdateOptions.md)&gt; * **options?**: `Partial`&lt;[`UpdateOptions`](../interfaces/UpdateOptions.md)&gt;
additional options to control additional options to control
the update behavior the update behavior
@@ -637,7 +682,9 @@ the update behavior
### vectorSearch() ### vectorSearch()
> `abstract` **vectorSearch**(`vector`): [`VectorQuery`](VectorQuery.md) ```ts
abstract vectorSearch(vector): VectorQuery
```
Search the table with a given query vector. Search the table with a given query vector.
@@ -647,7 +694,7 @@ by `query`.
#### Parameters #### Parameters
**vector**: `IntoVector` * **vector**: `IntoVector`
#### Returns #### Returns
@@ -661,7 +708,9 @@ by `query`.
### version() ### version()
> `abstract` **version**(): `Promise`&lt;`number`&gt; ```ts
abstract version(): Promise<number>
```
Retrieve the version of the table Retrieve the version of the table
@@ -673,15 +722,20 @@ Retrieve the version of the table
### parseTableData() ### parseTableData()
> `static` **parseTableData**(`data`, `options`?, `streaming`?): `Promise`&lt;`object`&gt; ```ts
static parseTableData(
data,
options?,
streaming?): Promise<object>
```
#### Parameters #### Parameters
**data**: `TableLike` \| `Record`&lt;`string`, `unknown`&gt;[] * **data**: `TableLike` \| `Record`&lt;`string`, `unknown`&gt;[]
**options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt; * **options?**: `Partial`&lt;[`CreateTableOptions`](../interfaces/CreateTableOptions.md)&gt;
**streaming?**: `boolean` = `false` * **streaming?**: `boolean` = `false`
#### Returns #### Returns
@@ -689,8 +743,12 @@ Retrieve the version of the table
##### buf ##### buf
> **buf**: `Buffer` ```ts
buf: Buffer;
```
##### mode ##### mode
> **mode**: `string` ```ts
mode: string;
```
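The `Table` time-travel and maintenance methods documented above (`version`, `listVersions`, `checkout`, `checkoutLatest`, `restore`, `optimize`) fit together as sketched below; the version arithmetic is illustrative and assumes at least two versions exist.

```ts
import type { Table } from "@lancedb/lancedb";

async function timeTravel(table: Table) {
  const current = await table.version();
  console.log(await table.listVersions());

  // Check out an older version to inspect it (read-only until restored).
  await table.checkout(current - 1);
  console.log(await table.countRows()); // row count as of that version

  // Return to tracking the latest version.
  await table.checkoutLatest();

  // Compact data files and bring indices up to date after heavy writes.
  console.log(await table.optimize());
}
```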

View File

@@ -10,11 +10,13 @@
### new VectorColumnOptions() ### new VectorColumnOptions()
> **new VectorColumnOptions**(`values`?): [`VectorColumnOptions`](VectorColumnOptions.md) ```ts
new VectorColumnOptions(values?): VectorColumnOptions
```
#### Parameters #### Parameters
**values?**: `Partial`&lt;[`VectorColumnOptions`](VectorColumnOptions.md)&gt; * **values?**: `Partial`&lt;[`VectorColumnOptions`](VectorColumnOptions.md)&gt;
#### Returns #### Returns
@@ -24,6 +26,8 @@
### type ### type
> **type**: `Float`&lt;`Floats`&gt; ```ts
type: Float<Floats>;
```
Vector column type. Vector column type.

View File

@@ -18,11 +18,13 @@ This builder can be reused to execute the query many times.
### new VectorQuery() ### new VectorQuery()
> **new VectorQuery**(`inner`): [`VectorQuery`](VectorQuery.md) ```ts
new VectorQuery(inner): VectorQuery
```
#### Parameters #### Parameters
**inner**: `VectorQuery` \| `Promise`&lt;`VectorQuery`&gt; * **inner**: `VectorQuery` \| `Promise`&lt;`VectorQuery`&gt;
#### Returns #### Returns
@@ -36,7 +38,9 @@ This builder can be reused to execute the query many times.
### inner ### inner
> `protected` **inner**: `VectorQuery` \| `Promise`&lt;`VectorQuery`&gt; ```ts
protected inner: VectorQuery | Promise<VectorQuery>;
```
#### Inherited from #### Inherited from
@@ -46,7 +50,9 @@ This builder can be reused to execute the query many times.
### \[asyncIterator\]() ### \[asyncIterator\]()
> **\[asyncIterator\]**(): `AsyncIterator`&lt;`RecordBatch`&lt;`any`&gt;, `any`, `undefined`&gt; ```ts
asyncIterator: AsyncIterator<RecordBatch<any>, any, undefined>
```
#### Returns #### Returns
@@ -58,9 +64,27 @@ This builder can be reused to execute the query many times.
*** ***
### addQueryVector()
```ts
addQueryVector(vector): VectorQuery
```
#### Parameters
* **vector**: `IntoVector`
#### Returns
[`VectorQuery`](VectorQuery.md)
***
### bypassVectorIndex() ### bypassVectorIndex()
> **bypassVectorIndex**(): [`VectorQuery`](VectorQuery.md) ```ts
bypassVectorIndex(): VectorQuery
```
If this is called then any vector index is skipped If this is called then any vector index is skipped
@@ -78,7 +102,9 @@ calculate your recall to select an appropriate value for nprobes.
### column() ### column()
> **column**(`column`): [`VectorQuery`](VectorQuery.md) ```ts
column(column): VectorQuery
```
Set the vector column to query Set the vector column to query
@@ -87,7 +113,7 @@ the call to
#### Parameters #### Parameters
**column**: `string` * **column**: `string`
#### Returns #### Returns
@@ -104,7 +130,9 @@ whose data type is a fixed-size-list of floats.
### distanceType() ### distanceType()
> **distanceType**(`distanceType`): [`VectorQuery`](VectorQuery.md) ```ts
distanceType(distanceType): VectorQuery
```
Set the distance metric to use Set the distance metric to use
@@ -114,7 +142,7 @@ use. See
#### Parameters #### Parameters
**distanceType**: `"l2"` \| `"cosine"` \| `"dot"` * **distanceType**: `"l2"` \| `"cosine"` \| `"dot"`
#### Returns #### Returns
@@ -135,11 +163,13 @@ By default "l2" is used.
### doCall() ### doCall()
> `protected` **doCall**(`fn`): `void` ```ts
protected doCall(fn): void
```
#### Parameters #### Parameters
**fn** * **fn**
#### Returns #### Returns
@@ -151,15 +181,41 @@ By default "l2" is used.
*** ***
### ef()
```ts
ef(ef): VectorQuery
```
Set the number of candidates to consider during the search
This argument is only used when the vector column has an HNSW index.
If there is no index then this value is ignored.
Increasing this value will increase the recall of your query but will
also increase the latency of your query. The default value is 1.5*limit.
#### Parameters
* **ef**: `number`
#### Returns
[`VectorQuery`](VectorQuery.md)
***
### execute() ### execute()
> `protected` **execute**(`options`?): [`RecordBatchIterator`](RecordBatchIterator.md) ```ts
protected execute(options?): RecordBatchIterator
```
Execute the query and return the results as an Execute the query and return the results as an
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -185,14 +241,15 @@ single query)
### explainPlan() ### explainPlan()
> **explainPlan**(`verbose`): `Promise`&lt;`string`&gt; ```ts
explainPlan(verbose): Promise<string>
```
Generates an explanation of the query execution plan. Generates an explanation of the query execution plan.
#### Parameters #### Parameters
**verbose**: `boolean` = `false` * **verbose**: `boolean` = `false`
If true, provides a more detailed explanation. Defaults to false. If true, provides a more detailed explanation. Defaults to false.
#### Returns #### Returns
@@ -218,15 +275,38 @@ const plan = await table.query().nearestTo([0.5, 0.2]).explainPlan();
*** ***
### fastSearch()
```ts
fastSearch(): this
```
Skip searching un-indexed data. This can make search faster, but will miss
any data that is not yet indexed.
Use lancedb.Table#optimize to index all un-indexed data.
#### Returns
`this`
#### Inherited from
[`QueryBase`](QueryBase.md).[`fastSearch`](QueryBase.md#fastsearch)
***
### ~~filter()~~ ### ~~filter()~~
> **filter**(`predicate`): `this` ```ts
filter(predicate): this
```
A filter statement to be applied to this query. A filter statement to be applied to this query.
#### Parameters #### Parameters
**predicate**: `string` * **predicate**: `string`
#### Returns #### Returns
@@ -246,9 +326,33 @@ Use `where` instead
*** ***
### fullTextSearch()
```ts
fullTextSearch(query, options?): this
```
#### Parameters
* **query**: `string`
* **options?**: `Partial`&lt;`FullTextSearchOptions`&gt;
#### Returns
`this`
#### Inherited from
[`QueryBase`](QueryBase.md).[`fullTextSearch`](QueryBase.md#fulltextsearch)
***
### limit() ### limit()
> **limit**(`limit`): `this` ```ts
limit(limit): this
```
Set the maximum number of results to return. Set the maximum number of results to return.
@@ -257,7 +361,7 @@ called then every valid row from the table will be returned.
#### Parameters #### Parameters
**limit**: `number` * **limit**: `number`
#### Returns #### Returns
@@ -271,11 +375,13 @@ called then every valid row from the table will be returned.
### nativeExecute() ### nativeExecute()
> `protected` **nativeExecute**(`options`?): `Promise`&lt;`RecordBatchIterator`&gt; ```ts
protected nativeExecute(options?): Promise<RecordBatchIterator>
```
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -289,7 +395,9 @@ called then every valid row from the table will be returned.
### nprobes() ### nprobes()
> **nprobes**(`nprobes`): [`VectorQuery`](VectorQuery.md) ```ts
nprobes(nprobes): VectorQuery
```
Set the number of partitions to search (probe) Set the number of partitions to search (probe)
@@ -314,7 +422,7 @@ you the desired recall.
#### Parameters #### Parameters
**nprobes**: `number` * **nprobes**: `number`
#### Returns #### Returns
@@ -322,9 +430,31 @@ you the desired recall.
*** ***
### offset()
```ts
offset(offset): this
```
#### Parameters
* **offset**: `number`
#### Returns
`this`
#### Inherited from
[`QueryBase`](QueryBase.md).[`offset`](QueryBase.md#offset)
***
### postfilter() ### postfilter()
> **postfilter**(): [`VectorQuery`](VectorQuery.md) ```ts
postfilter(): VectorQuery
```
If this is called then filtering will happen after the vector search instead of If this is called then filtering will happen after the vector search instead of
before. before.
@@ -356,7 +486,9 @@ factor can often help restore some of the results lost by post filtering.
### refineFactor() ### refineFactor()
> **refineFactor**(`refineFactor`): [`VectorQuery`](VectorQuery.md) ```ts
refineFactor(refineFactor): VectorQuery
```
A multiplier to control how many additional rows are taken during the refine step A multiplier to control how many additional rows are taken during the refine step
@@ -388,7 +520,7 @@ distance between the query vector and the actual uncompressed vector.
#### Parameters #### Parameters
**refineFactor**: `number` * **refineFactor**: `number`
#### Returns #### Returns
@@ -398,7 +530,9 @@ distance between the query vector and the actual uncompressed vector.
### select() ### select()
> **select**(`columns`): `this` ```ts
select(columns): this
```
Return only the specified columns. Return only the specified columns.
@@ -422,7 +556,7 @@ input to this method would be:
#### Parameters #### Parameters
**columns**: `string` \| `string`[] \| `Record`&lt;`string`, `string`&gt; \| `Map`&lt;`string`, `string`&gt; * **columns**: `string` \| `string`[] \| `Record`&lt;`string`, `string`&gt; \| `Map`&lt;`string`, `string`&gt;
#### Returns #### Returns
@@ -449,13 +583,15 @@ object insertion order is easy to get wrong and `Map` is more foolproof.
### toArray() ### toArray()
> **toArray**(`options`?): `Promise`&lt;`any`[]&gt; ```ts
toArray(options?): Promise<any[]>
```
Collect the results as an array of objects. Collect the results as an array of objects.
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -469,13 +605,15 @@ Collect the results as an array of objects.
### toArrow() ### toArrow()
> **toArrow**(`options`?): `Promise`&lt;`Table`&lt;`any`&gt;&gt; ```ts
toArrow(options?): Promise<Table<any>>
```
Collect the results as an Arrow Collect the results as an Arrow
#### Parameters #### Parameters
**options?**: `Partial`&lt;`QueryExecutionOptions`&gt; * **options?**: `Partial`&lt;`QueryExecutionOptions`&gt;
#### Returns #### Returns
@@ -493,7 +631,9 @@ ArrowTable.
### where() ### where()
> **where**(`predicate`): `this` ```ts
where(predicate): this
```
A filter statement to be applied to this query. A filter statement to be applied to this query.
@@ -501,7 +641,7 @@ The filter should be supplied as an SQL query string. For example:
#### Parameters #### Parameters
**predicate**: `string` * **predicate**: `string`
#### Returns #### Returns
@@ -521,3 +661,25 @@ on the filter column(s).
#### Inherited from #### Inherited from
[`QueryBase`](QueryBase.md).[`where`](QueryBase.md#where) [`QueryBase`](QueryBase.md).[`where`](QueryBase.md#where)
***
### withRowId()
```ts
withRowId(): this
```
Whether to return the row id in the results.
This column can be used to match results between different queries. For
example, to match results from a full text search and a vector search in
order to perform hybrid search.
#### Returns
`this`
#### Inherited from
[`QueryBase`](QueryBase.md).[`withRowId`](QueryBase.md#withrowid)
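The `VectorQuery` knobs documented above (`column`, `distanceType`, `nprobes`, `refineFactor`, `postfilter`, `ef`, `bypassVectorIndex`) chain onto a nearest-neighbour query. A sketch with illustrative values, assuming an IVF-PQ index on a `vector` column:

```ts
import type { Table } from "@lancedb/lancedb";

async function tunedVectorSearch(table: Table, queryVector: number[]) {
  return await table
    .vectorSearch(queryVector)
    .column("vector") // which fixed-size-list column to search
    .distanceType("cosine") // "l2" (default), "cosine", or "dot"
    .nprobes(20) // more partitions probed: better recall, higher latency
    .refineFactor(5) // re-rank 5x the limit using uncompressed vectors
    .where("category = 'articles'")
    .select(["id", "text"])
    .limit(10)
    .toArray();
}
```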

View File

@@ -12,16 +12,22 @@ Write mode for writing a table.
### Append ### Append
> **Append**: `"Append"` ```ts
Append: "Append";
```
*** ***
### Create ### Create
> **Create**: `"Create"` ```ts
Create: "Create";
```
*** ***
### Overwrite ### Overwrite
> **Overwrite**: `"Overwrite"` ```ts
Overwrite: "Overwrite";
```

View File

@@ -8,7 +8,9 @@
## connect(uri, opts) ## connect(uri, opts)
> **connect**(`uri`, `opts`?): `Promise`&lt;[`Connection`](../classes/Connection.md)&gt; ```ts
function connect(uri, opts?): Promise<Connection>
```
Connect to a LanceDB instance at the given URI. Connect to a LanceDB instance at the given URI.
@@ -20,12 +22,11 @@ Accepted formats:
### Parameters
* **uri**: `string`
The uri of the database. If the database uri starts
with `db://` then it connects to a remote database.
* **opts?**: `Partial`&lt;[`ConnectionOptions`](../interfaces/ConnectionOptions.md)&gt;
### Returns
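For example (the database name and API key below are placeholders):
```ts
import * as lancedb from "@lancedb/lancedb";

// Local directory or object-store path.
const db = await lancedb.connect("data/sample-lancedb");

// LanceDB Cloud via a db:// URI.
const cloud = await lancedb.connect("db://my_database", {
  apiKey: process.env.LANCEDB_API_KEY,
});
```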
@@ -50,7 +51,9 @@ const conn = await connect(
## connect(opts)
```ts
function connect(opts): Promise<Connection>
```
Connect to a LanceDB instance at the given URI.
@@ -62,7 +65,7 @@ Accepted formats:
### Parameters
* **opts**: `Partial`&lt;[`ConnectionOptions`](../interfaces/ConnectionOptions.md)&gt; & `object`
### Returns
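A sketch of the options-object overload, assuming the URI is passed as a `uri` property alongside the other connection options:
```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect({
  uri: "db://my_database", // assumed property name for this overload
  apiKey: process.env.LANCEDB_API_KEY,
  region: "us-east-1",
});
```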


@@ -6,7 +6,12 @@
# Function: makeArrowTable()
```ts
function makeArrowTable(
data,
options?,
metadata?): ArrowTable
```
An enhanced version of the makeTable function from Apache Arrow
that supports nested fields and embeddings columns.
@@ -40,11 +45,11 @@ rules are as follows:
## Parameters
* **data**: `Record`&lt;`string`, `unknown`&gt;[]
* **options?**: `Partial`&lt;[`MakeArrowTableOptions`](../classes/MakeArrowTableOptions.md)&gt;
* **metadata?**: `Map`&lt;`string`, `string`&gt;
## Returns
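A minimal sketch of inferring a schema from plain records (by default a column named `vector` should become a fixed-size-list column suitable for indexing, though the exact behavior is governed by `MakeArrowTableOptions`):
```ts
import { makeArrowTable } from "@lancedb/lancedb";

const data = [
  { id: 1, text: "hello", vector: [0.1, 0.2, 0.3] },
  { id: 2, text: "world", vector: [0.4, 0.5, 0.6] },
];
const arrowTable = makeArrowTable(data);
console.log(arrowTable.schema.toString());
```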


@@ -28,16 +28,19 @@
- [AddColumnsSql](interfaces/AddColumnsSql.md)
- [AddDataOptions](interfaces/AddDataOptions.md)
- [ClientConfig](interfaces/ClientConfig.md)
- [ColumnAlteration](interfaces/ColumnAlteration.md)
- [ConnectionOptions](interfaces/ConnectionOptions.md)
- [CreateTableOptions](interfaces/CreateTableOptions.md)
- [ExecutableQuery](interfaces/ExecutableQuery.md)
- [IndexConfig](interfaces/IndexConfig.md)
- [IndexMetadata](interfaces/IndexMetadata.md)
- [IndexOptions](interfaces/IndexOptions.md)
- [IndexStatistics](interfaces/IndexStatistics.md)
- [IvfPqOptions](interfaces/IvfPqOptions.md)
- [OptimizeOptions](interfaces/OptimizeOptions.md)
- [RetryConfig](interfaces/RetryConfig.md)
- [TableNamesOptions](interfaces/TableNamesOptions.md)
- [TimeoutConfig](interfaces/TimeoutConfig.md)
- [UpdateOptions](interfaces/UpdateOptions.md)
- [WriteOptions](interfaces/WriteOptions.md)


@@ -12,7 +12,9 @@ A definition of a new column to add to a table.
### name
```ts
name: string;
```
The name of the new column.
@@ -20,7 +22,9 @@ The name of the new column.
### valueSql
```ts
valueSql: string;
```
The values to populate the new column with, as a SQL expression.
The expression can reference other columns in the table.
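A sketch of how this interface might be used with the table's `addColumns` method (the table and column names are invented):
```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("data/sample-lancedb");
const table = await db.openTable("orders"); // hypothetical table

await table.addColumns([
  { name: "total_price", valueSql: "price * quantity" },
  { name: "source", valueSql: "'imported'" }, // a constant SQL expression
]);
```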


@@ -12,7 +12,9 @@ Options for adding data to a table.
### mode
```ts
mode: "append" | "overwrite";
```
If "append" (the default) then the new data will be added to the table


@@ -0,0 +1,31 @@
[**@lancedb/lancedb**](../README.md) • **Docs**
***
[@lancedb/lancedb](../globals.md) / ClientConfig
# Interface: ClientConfig
## Properties
### retryConfig?
```ts
optional retryConfig: RetryConfig;
```
***
### timeoutConfig?
```ts
optional timeoutConfig: TimeoutConfig;
```
***
### userAgent?
```ts
optional userAgent: string;
```
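A sketch of supplying a client configuration when connecting to LanceDB Cloud (the database name and key are placeholders; `retryConfig` and `timeoutConfig` would be filled in the same way):
```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("db://my_database", {
  apiKey: process.env.LANCEDB_API_KEY,
  clientConfig: {
    userAgent: "my-app/1.0",
    // retryConfig: { ... }, timeoutConfig: { ... }
  },
});
```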


@@ -13,9 +13,29 @@ must be provided.
## Properties
### dataType?
```ts
optional dataType: string;
```
A new data type for the column. If not provided then the data type will not be changed.
Changing data types is limited to casting to the same general type. For example, these
changes are valid:
* `int32` -> `int64` (integers)
* `double` -> `float` (floats)
* `string` -> `large_string` (strings)
But these changes are not:
* `int32` -> `double` (mix integers and floats)
* `string` -> `int32` (mix strings and integers)
***
### nullable?
```ts
optional nullable: boolean;
```
Set the new nullability. Note that a nullable column cannot be made non-nullable.
@@ -23,7 +43,9 @@ Set the new nullability. Note that a nullable column cannot be made non-nullable
### path
```ts
path: string;
```
The path to the column to alter. This is a dot-separated path to the column.
If it is a top-level column then it is just the name of the column. If it is
@@ -34,7 +56,9 @@ a nested column then it is the path to the column, e.g. "a.b.c" for a column
### rename?
```ts
optional rename: string;
```
The new name of the column. If not provided then the name will not be changed.
This must be distinct from the names of all other columns in the table.
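A sketch of using these fields with the table's `alterColumns` method (the paths and new names are invented):
```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("data/sample-lancedb");
const table = await db.openTable("docs"); // hypothetical table

await table.alterColumns([
  { path: "title", rename: "headline" },      // rename a top-level column
  { path: "id", dataType: "int64" },          // widen int32 -> int64
  { path: "metadata.notes", nullable: true }, // make a nested field nullable
]);
```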


@@ -8,9 +8,44 @@
## Properties
### apiKey?
```ts
optional apiKey: string;
```
(For LanceDB cloud only): the API key to use with LanceDB Cloud.
Can also be set via the environment variable `LANCEDB_API_KEY`.
***
### clientConfig?
```ts
optional clientConfig: ClientConfig;
```
(For LanceDB cloud only): configuration for the remote HTTP client.
***
### hostOverride?
```ts
optional hostOverride: string;
```
(For LanceDB cloud only): the host to use for LanceDB cloud. Used
for testing purposes.
***
### readConsistencyInterval?
```ts
optional readConsistencyInterval: number;
```
(For LanceDB OSS only): The interval, in seconds, at which to check for
updates to the table from other processes. If None, then consistency is not
@@ -24,9 +59,22 @@ always consistent.
***
### region?
```ts
optional region: string;
```
(For LanceDB cloud only): the region to use for LanceDB cloud.
Defaults to 'us-east-1'.
***
### storageOptions?
```ts
optional storageOptions: Record<string, string>;
```
(For LanceDB OSS only): configuration for object storage.
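Two sketches of these options in practice (bucket names, keys, and regions are placeholders, and the valid `storageOptions` keys depend on the object store in use):
```ts
import * as lancedb from "@lancedb/lancedb";

// LanceDB Cloud.
const cloud = await lancedb.connect("db://my_database", {
  apiKey: process.env.LANCEDB_API_KEY,
  region: "us-east-1",
});

// LanceDB OSS on S3, checking for external updates every 5 seconds.
const oss = await lancedb.connect("s3://my-bucket/lancedb", {
  storageOptions: { region: "us-west-2" },
  readConsistencyInterval: 5,
});
```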


@@ -8,15 +8,46 @@
## Properties
### dataStorageVersion?
```ts
optional dataStorageVersion: string;
```
The version of the data storage format to use.
The default is `stable`.
Set to "legacy" to use the old format.
***
### embeddingFunction?
```ts
optional embeddingFunction: EmbeddingFunctionConfig;
```
***
### enableV2ManifestPaths?
```ts
optional enableV2ManifestPaths: boolean;
```
Use the new V2 manifest paths. These paths provide more efficient
opening of datasets with many versions on object stores. WARNING:
turning this on will make the dataset unreadable for older versions
of LanceDB (prior to 0.10.0). To migrate an existing dataset, instead
use the LocalTable#migrateManifestPathsV2 method.
***
### existOk
```ts
existOk: boolean;
```
If this is true and the table already exists and the mode is "create"
then no error will be raised.
@@ -25,7 +56,9 @@ then no error will be raised.
### mode
```ts
mode: "overwrite" | "create";
```
The mode to use when creating the table.
@@ -39,13 +72,17 @@ If this is set to "overwrite" then any existing table will be replaced.
### schema?
```ts
optional schema: SchemaLike;
```
***
### storageOptions?
```ts
optional storageOptions: Record<string, string>;
```
Configuration for object storage.
@@ -58,8 +95,12 @@ The available options are described at https://lancedb.github.io/lancedb/guides/
### useLegacyFormat?
```ts
optional useLegacyFormat: boolean;
```
If true then data files will be written with the legacy format
The default is false.
Deprecated. Use data storage version instead.
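A sketch of passing a subset of these options to `createTable` (the table name and rows are invented):
```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("data/sample-lancedb");

const table = await db.createTable(
  "docs",
  [{ id: 1, text: "hello", vector: [0.1, 0.2, 0.3] }],
  {
    mode: "create",
    existOk: true,                // don't fail if "docs" already exists
    dataStorageVersion: "stable", // the default storage format
    enableV2ManifestPaths: true,  // faster opens on object stores
  },
);
```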

Some files were not shown because too many files have changed in this diff.