Compare commits

...

286 Commits

Author SHA1 Message Date
Lance Release
12c7bd18a5 Bump version: 0.17.0-beta.2 → 0.17.0-beta.3 2024-12-04 01:13:18 +00:00
LuQQiu
c6bf6a25d6 feat: add remote db uri path with folder prefix (#1901)
Add remote database folder prefix
support db://bucket/path/to/folder/
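
A minimal sketch of what this enables, assuming the Python sync client (the API key and region values are placeholders):

```python
import lancedb

# The remote URI can now include a folder prefix after the bucket (per this PR)
db = lancedb.connect(
    "db://bucket/path/to/folder/",
    api_key="sk-...",    # placeholder
    region="us-east-1",  # placeholder
)
```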
2024-12-03 16:51:18 -08:00
Weston Pace
c998a47e17 feat: add a pyarrow dataset adapater for LanceDB tables (#1902)
This currently only works for local tables (remote tables cannot be
queried).
This is also exclusive to the sync interface. However, since the pyarrow
dataset interface is synchronous, I am not sure if there is much value in
making an async-wrapping variant.

In addition, I added a `to_batches` method to the base query in the sync
API. This already exists in the async API. In the sync API this PR only
adds support for vector queries and scalar queries and not for hybrid or
FTS queries.
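
A rough sketch of the new sync `to_batches` usage (database path, table name, and query vector are placeholders):

```python
import lancedb

db = lancedb.connect("./demo")
table = db.open_table("my_table")

# Stream vector-search results as Arrow record batches instead of one big result
for batch in table.search([1.0, 2.0, 3.0]).to_batches():
    print(batch.num_rows)
```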
2024-12-03 15:42:54 -08:00
Frank Liu
d8c758513c feat: add multimodal capabilities for Voyage embedder (#1878)
Co-authored-by: Will Jones <willjones127@gmail.com>
2024-12-03 10:25:48 -08:00
Will Jones
3795e02ee3 chore: fix ci on main (#1899) 2024-12-02 15:21:18 -08:00
Mr. Doge
c7d424b2f3 ci: aarch64-pc-windows-msvc (#1890)
`npm run pack-build -- -t $TARGET_TRIPLE`
was needed instead of
`npm run pack-build -t $TARGET_TRIPLE`
https://github.com/lancedb/lancedb/pull/1889

some documentation about `*-pc-windows-msvc` cross-compilation (from
alpine):
https://github.com/lancedb/lancedb/pull/1831#issuecomment-2497156918

only `arm64` in `matrix` config is used
since `x86_64` built by `runs-on: windows-2022` is working
2024-12-02 11:17:37 -08:00
Bert
1efb9914ee ci: fix failing python release (#1896)
Fix failing python release for windows:
https://github.com/lancedb/lancedb/actions/runs/12019637086/job/33506642964

Also updates pkginfo to fix twine build as suggested here:
https://github.com/pypi/warehouse/issues/15611
failing release:
https://github.com/lancedb/lancedb/actions/runs/12091344173/job/33719622146
2024-12-02 11:05:29 -08:00
Lance Release
83e26a231e Updating package-lock.json 2024-11-29 22:46:45 +00:00
Lance Release
72a17b2de4 Bump version: 0.14.0-beta.0 → 0.14.0-beta.1 2024-11-29 22:46:20 +00:00
Lance Release
4231925476 Bump version: 0.17.0-beta.1 → 0.17.0-beta.2 2024-11-29 22:45:55 +00:00
Lance Release
84a6693294 Bump version: 0.17.0-beta.0 → 0.17.0-beta.1 2024-11-29 18:16:02 +00:00
Ryan Green
6c2d4c10a4 feat: support remote options for remote lancedb connection (#1895)
* Support a subset of storage options as remote options
* Send Azure storage account name via HTTP header
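
A sketch of how this might look from the Python client, assuming the remote options ride along with the usual `storage_options` argument (URI, key name, and values here are placeholders/assumptions):

```python
import lancedb

db = lancedb.connect(
    "db://my-project",   # placeholder remote URI
    api_key="sk-...",    # placeholder
    storage_options={
        # Assumed option key; forwarded to the server as an HTTP header per this PR
        "azure_storage_account_name": "myaccount",
    },
)
```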
2024-11-29 14:08:13 -03:30
Ryan Green
d914722f79 Revert "feat: support remote options for remote lancedb connection. Send Azure storage account name via HTTP header."
This reverts commit a6e4034dba.
2024-11-29 11:06:18 -03:30
Ryan Green
a6e4034dba feat: support remote options for remote lancedb connection. Send Azure storage account name via HTTP header. 2024-11-29 11:05:04 -03:30
QianZhu
2616a50502 fix: test errors after setting default limit (#1891) 2024-11-26 16:03:16 -08:00
LuQQiu
7b5e9d824a fix: dynamodb external manifest drop table (#1866)
second pr of https://github.com/lancedb/lancedb/issues/1812
2024-11-26 13:20:48 -08:00
QianZhu
3b173e7cb9 fix: default limit for remote nodejs client (#1886)
https://github.com/lancedb/lancedb/issues/1804
2024-11-26 11:01:25 -08:00
Mr. Doge
d496ab13a0 ci: linux: specify target triple for neon pack-build (vectordb) (#1889)
fixes that all `neon pack-build` packs are named
`vectordb-linux-x64-musl-*.tgz` even when cross-compiling

adds 2nd param:
`TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}`
`npm run pack-build -- -t $TARGET_TRIPLE`
2024-11-26 10:57:17 -08:00
Will Jones
69d9beebc7 docs: improve style and introduction to Python API docs (#1885)
I found the signatures difficult to read and the parameter section not
very space efficient.
2024-11-26 09:17:35 -08:00
Bert
d32360b99d feat: support overwrite and exist_ok mode for remote create_table (#1883)
Support passing modes "overwrite" and "exist_ok" when creating a remote
table.
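
A small sketch of the two modes from the Python sync client (URI, API key, and data are placeholders; the exact client surface may differ slightly):

```python
import lancedb

db = lancedb.connect("db://my-project", api_key="sk-...")  # placeholders
data = [{"id": 1, "vector": [0.1, 0.2]}]

db.create_table("items", data, mode="overwrite")  # replace the table if it exists
db.create_table("items", data, exist_ok=True)     # open the existing table instead of erroring
```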
2024-11-26 11:38:36 -05:00
Will Jones
9fa08bfa93 ci: use correct runner for vectordb (#1881)
We already do this for `gnu` builds; we should also do it for `musl`
builds.
2024-11-25 16:17:10 -08:00
LuQQiu
d6d9cb7415 feat: bump lance to 0.20.0b3 (#1882)
Bump lance version.
Upstream change log:
https://github.com/lancedb/lance/releases/tag/v0.20.0-beta.3
2024-11-25 16:15:44 -08:00
Lance Release
990d93f553 Updating package-lock.json 2024-11-25 22:06:39 +00:00
Lance Release
0832cba3c6 Bump version: 0.13.1-beta.0 → 0.14.0-beta.0 2024-11-25 22:06:14 +00:00
Lance Release
38b0d91848 Bump version: 0.16.1-beta.0 → 0.17.0-beta.0 2024-11-25 22:05:49 +00:00
Will Jones
6826039575 fix(python): run remote SDK futures in background thread (#1856)
Users who call the remote SDK from code that uses futures (either
`ThreadPoolExecutor` or `asyncio`) can get odd errors like:

```
Traceback (most recent call last):
  File "/usr/lib/python3.12/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
RuntimeError: cannot enter context: <_contextvars.Context object at 0x7cfe94cdc900> is already entered
```

This PR fixes that by executing all LanceDB futures in a dedicated
thread pool running on a background thread. That way, it doesn't
interact with their threadpool.
2024-11-25 13:12:47 -08:00
QianZhu
3e9321fc40 docs: improve scalar index and filtering (#1874)
improved the docs on building a scalar index and pre-/post-filtering

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-11-25 11:30:57 -08:00
Lei Xu
2ded17452b fix(python)!: handle bad openai embeddings gracefully (#1873)
BREAKING-CHANGE: change Pydantic Vector field to be nullable by default.
Closes #1577
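
A quick sketch of what the breaking change means for a Pydantic model (dimension and field names are placeholders):

```python
from lancedb.pydantic import LanceModel, Vector

class Document(LanceModel):
    text: str
    # After this change the Vector field is nullable by default, so rows whose
    # embedding generation fails can be stored with a null vector.
    vector: Vector(1536)
```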
2024-11-23 13:33:52 -08:00
Mr. Doge
dfd9d2ac99 ci: musl missing node/package.json targets (#1870)
I missed targets when manually merging the draft PR onto the updated main.
I was copying from:
https://github.com/lancedb/lancedb/pull/1816/files#diff-d6e19f28e97cfeda63a9bd9426f10f1d2454eeed375ee1235e8ba842ceeb46a0

fixes:
error: Rust target x86_64-unknown-linux-musl not found in package.json.
2024-11-22 10:40:59 -08:00
Lance Release
162880140e Updating package-lock.json 2024-11-21 21:53:25 +00:00
Lance Release
99d9ced6d5 Bump version: 0.13.0 → 0.13.1-beta.0 2024-11-21 21:53:01 +00:00
Lance Release
96933d7df8 Bump version: 0.16.0 → 0.16.1-beta.0 2024-11-21 21:52:39 +00:00
Lei Xu
d369233b3d feat: bump lance to 0.20.0b2 (#1865)
Bump lance version.
Upstream change log:
https://github.com/lancedb/lance/releases/tag/v0.20.0-beta.2
2024-11-21 13:16:59 -08:00
QianZhu
43a670ed4b fix: limit docstring change (#1860) 2024-11-21 10:50:50 -08:00
Bert
cb9a00a28d feat: add list_versions to typescript, rust and remote python sdks (#1850)
Will require an update to the lance dependency to bring in this change, which
makes the version serializable:
https://github.com/lancedb/lance/pull/3143
2024-11-21 13:35:14 -05:00
Max Epstein
72af977a73 fix(CohereReranker): updated default model_name param to newest v3 (#1862) 2024-11-21 09:02:49 -08:00
Bert
7cecb71df0 feat: support for checkout and checkout_latest in remote sdks (#1863) 2024-11-21 11:28:46 -05:00
QianZhu
285071e5c8 docs: full-text search doc update (#1861)
Co-authored-by: BubbleCal <bubble-cal@outlook.com>
2024-11-20 21:07:30 -08:00
QianZhu
114866fbcf docs: OSS doc improvement (#1859)
OSS doc improvement - HNSW index parameter explanation and others.

---------

Co-authored-by: BubbleCal <bubble-cal@outlook.com>
2024-11-20 17:51:11 -08:00
Frank Liu
5387c0e243 docs: add Voyage models to sidebar (#1858) 2024-11-20 14:20:14 -08:00
Mr. Doge
53d1535de1 ci: musl x64,arm64 (#1853)
untested 4 artifacts at:
https://github.com/FuPeiJiang/lancedb/actions/runs/11926579058
node-native-linux-aarch64-musl 22.6 MB
node-native-linux-x86_64-musl 23.6 MB
nodejs-native-linux-aarch64-musl 26.7 MB
nodejs-native-linux-x86_64-musl 27 MB

this follows the same process as:
https://github.com/lancedb/lancedb/pull/1816#issuecomment-2484816669

Closes #1388
Closes #1107

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-11-20 10:53:19 -08:00
BubbleCal
b2f88f0b29 feat: support to specify ef search param (#1844)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-19 23:12:25 +08:00
fzowl
f2e3989831 docs: voyageai embedding in the index (#1813)
The code to support VoyageAI embedding and rerank models was added in
the https://github.com/lancedb/lancedb/pull/1799 PR.
Some of the documentation changes were also made there; this adds the
VoyageAI embedding doc link to the index page.

These are my first PRs in lancedb, and while I checked the
documentation/code structure, I might have missed something important. Please
let me know if any changes are required!
2024-11-18 14:34:16 -08:00
Emmanuel Ferdman
83ae52938a docs: update migration reference (#1837)
# PR Summary
PR fixes the `migration.md` reference in `docs/src/guides/tables.md`. On
the way, it also fixes some typos found in that document.

Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2024-11-18 14:33:32 -08:00
Lei Xu
267aa83bf8 feat(python): check vector query is not None (#1847)
Fix the type hints of `nearest_to` method, and raise `ValueError` when
the input is None
2024-11-18 14:15:22 -08:00
Will Jones
cc72050206 chore: update package locks (#1845)
Also ran `npm audit`.
2024-11-18 13:44:06 -08:00
Will Jones
72543c8b9d test(python): test with_row_id in sync query (#1835)
Also remove weird `MockTable` fixture.
2024-11-18 11:32:52 -08:00
Will Jones
97d6210c33 ci: remove invalid references (#1834)
Fix release job
2024-11-18 11:32:44 -08:00
Ho Kim
a3d0c27b0a feat: add support for rustls (#1842)
Hello, this is a simple PR that adds support for the `rustls-tls` feature.

`reqwest`'s default TLS backend (`default-tls`) remains enabled by default, to
avoid side effects.

The user can use `rustls-tls` like this:

```toml
lancedb = { version = "*", default-features = false, features = ["rustls-tls"] }
```
2024-11-18 10:36:20 -08:00
BubbleCal
b23d8abcdd docs: introduce incremental indexing for FTS (#1789)
don't merge it before https://github.com/lancedb/lancedb/pull/1769
merged

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-18 20:21:28 +08:00
Rob Meng
e3ea5cf9b9 chore: bump lance to 0.19.3 (#1839) 2024-11-16 14:57:52 -05:00
Lance Release
4f8b086175 Updating package-lock.json 2024-11-15 20:18:16 +00:00
Lance Release
72330fb759 Bump version: 0.13.0-beta.3 → 0.13.0 2024-11-15 20:17:59 +00:00
Lance Release
e3b2c5f438 Bump version: 0.13.0-beta.2 → 0.13.0-beta.3 2024-11-15 20:17:55 +00:00
Lance Release
66a881b33a Bump version: 0.16.0-beta.2 → 0.16.0 2024-11-15 20:17:34 +00:00
Lance Release
a7515d6ee2 Bump version: 0.16.0-beta.1 → 0.16.0-beta.2 2024-11-15 20:17:34 +00:00
Will Jones
587c0824af feat: flexible null handling and insert subschemas in Python (#1827)
* Test that we can insert subschemas (omit nullable columns) in Python.
* More work is needed to support this in Node. See:
https://github.com/lancedb/lancedb/issues/1832
* Test that we can insert data with nullable schema but no nulls in
non-nullable schema.
* Add `"null"` option for `on_bad_vectors` where we fill with null if
the vector is bad.
* Make null values not considered bad if the field itself is nullable.
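
A minimal sketch of the new `"null"` option (database path, table name, and data are placeholders; per the last point the field must be nullable):

```python
import lancedb

db = lancedb.connect("./demo")   # placeholder path
table = db.open_table("docs")    # placeholder table

rows = [
    {"id": 1, "vector": [0.1, 0.2]},
    {"id": 2, "vector": [0.1]},  # wrong dimension -> a "bad" vector
]

# Instead of raising, bad vectors are replaced with null
table.add(rows, on_bad_vectors="null")
```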
2024-11-15 11:33:00 -08:00
Will Jones
b38a4269d0 fix(node): make openai and huggingface optional dependencies (#1809)
BREAKING CHANGE: openai and huggingface now have separate entrypoints.

Closes [#1624](https://github.com/lancedb/lancedb/issues/1624)
2024-11-14 15:04:35 -08:00
Will Jones
119d88b9db ci: disable Windows Arm64 until the release builds work (#1833)
Started to actually fix this, but it was taking too long
https://github.com/lancedb/lancedb/pull/1831
2024-11-14 15:04:23 -08:00
StevenSu
74f660d223 feat: add new feature, add amazon bedrock embedding function (#1788)
Add amazon bedrock embedding function to rust sdk.

1. Add BedrockEmbeddingModel (lancedb/src/embeddings/bedrock.rs)
2. Add example lancedb/examples/bedrock.rs
2024-11-14 11:04:59 -08:00
Lance Release
b2b0979b90 Updating package-lock.json 2024-11-14 04:42:38 +00:00
Lance Release
ee2a40b182 Bump version: 0.13.0-beta.1 → 0.13.0-beta.2 2024-11-14 04:42:19 +00:00
Lance Release
4ca0b15354 Bump version: 0.16.0-beta.0 → 0.16.0-beta.1 2024-11-14 04:41:56 +00:00
Rob Meng
d8c217b47d chore: bump lance to 0.19.2 (#1829) 2024-11-13 23:23:02 -05:00
Rob Meng
b724b1a01f feat: support remote empty query (#1828)
Support sending empty query types to remote lancedb. Also include offset
and limit, which were previously omitted.
2024-11-13 23:04:52 -05:00
Will Jones
abd75e0ead feat: search multiple query vectors as one query (#1811)
Allows users to pass multiple query vectors as part of a single query
plan. This just runs the queries in parallel without any further
optimization. It's mostly a convenience.

Previously, I think this was only handled by the sync Python remote API.
This makes it common across all SDKs.

Closes https://github.com/lancedb/lancedb/issues/1803

```python
>>> import lancedb
>>> import asyncio
>>> 
>>> async def main():
...     db = await lancedb.connect_async("./demo")
...     table = await db.create_table("demo", [{"id": 1, "vector": [1, 2, 3]}, {"id": 2, "vector": [4, 5, 6]}], mode="overwrite")
...     return await table.query().nearest_to([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [4.0, 5.0, 6.0]]).limit(1).to_pandas()
... 
>>> asyncio.run(main())
   query_index  id           vector  _distance
0            2   2  [4.0, 5.0, 6.0]        0.0
1            1   2  [4.0, 5.0, 6.0]        0.0
2            0   1  [1.0, 2.0, 3.0]        0.0
```
2024-11-13 16:05:16 -08:00
Will Jones
0fd8a50bd7 ci(node): run examples in CI (#1796)
This is done as setup for a PR that will fix the OpenAI dependency
issue.

 * [x] FTS examples
 * [x] Setup mock openai
 * [x] Ran `npm audit fix`
 * [x] sentences embeddings test
 * [x] Double check formatting of docs examples
2024-11-13 11:10:56 -08:00
Umut Hope YILDIRIM
9f228feb0e ci: remove cache to fix build issues on windows arm runner (#1820) 2024-11-13 09:27:10 -08:00
Ayush Chaurasia
90e9c52d0a docs: update hybrid search example to latest langchain (#1824)
Co-authored-by: qzhu <qian@lancedb.com>
2024-11-12 20:06:25 -08:00
Will Jones
68974a4e06 ci: add index URL to fix failing docs build (#1823) 2024-11-12 16:54:22 -08:00
Lei Xu
4c9bab0d92 fix: use pandas with pydantic embedding column (#1818)
* Make Pandas `DataFrame` work with an embedding function + a subset of
columns
* Make `lancedb.create_table()` work with embedding function
2024-11-11 14:48:56 -08:00
QianZhu
5117aecc38 docs: search param explanation for OSS doc (#1815)
![Screenshot 2024-11-09 at 11 09
14 AM](https://github.com/user-attachments/assets/2aeba016-aeff-4658-85c6-8640285ba0c9)
2024-11-11 11:57:17 -08:00
Umut Hope YILDIRIM
729718cb09 fix: arm64 runner proto already installed bug (#1810)
https://github.com/lancedb/lancedb/actions/runs/11748512661/job/32732745458
2024-11-08 14:49:37 -08:00
Umut Hope YILDIRIM
b1c84e0bda feat: added lancedb and vectordb release ci for win32-arm64-msvc npmjs only (#1805) 2024-11-08 11:40:57 -08:00
fzowl
cbbc07d0f5 feat: voyageai support (#1799)
Adding VoyageAI embedding and rerank support
2024-11-09 00:51:20 +05:30
Kursat Aktas
21021f94ca docs: introducing LanceDB Guru on Gurubase.io (#1797)
Hello team,

I'm the maintainer of [Anteon](https://github.com/getanteon/anteon). We
have created Gurubase.io with the mission of building a centralized,
open-source tool-focused knowledge base. Essentially, each "guru" is
equipped with custom knowledge to answer user questions based on
collected data related to that tool.

I wanted to update you that I've manually added the [LanceDB
Guru](https://gurubase.io/g/lancedb) to Gurubase. LanceDB Guru uses the
data from this repo and data from the
[docs](https://lancedb.github.io/lancedb/) to answer questions by
leveraging the LLM.

In this PR, I showcased the "LanceDB Guru", which highlights that
LanceDB now has an AI assistant available to help users with their
questions. Please let me know your thoughts on this contribution.

Additionally, if you want me to disable LanceDB Guru in Gurubase, just
let me know that's totally fine.

Signed-off-by: Kursat Aktas <kursat.ce@gmail.com>
2024-11-08 10:55:22 -08:00
BubbleCal
0ed77fa990 chore: impl Debug & Clone for Index params (#1808)
We don't really need these traits in lancedb, but all fields in `Index`
implement the two traits, so derive them to make it possible to use `Index`
elsewhere.

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-09 01:07:43 +08:00
BubbleCal
4372c231cd feat: support optimize indices in sync API (#1769)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-11-08 08:48:07 -08:00
Umut Hope YILDIRIM
fa9ca8f7a6 ci: arm64 windows build support (#1770)
Adds support for 'aarch64-pc-windows-msvc'.
2024-11-06 15:34:23 -08:00
Lance Release
2a35d24ee6 Updating package-lock.json 2024-11-06 17:26:36 +00:00
Lance Release
dd9ce337e2 Bump version: 0.13.0-beta.0 → 0.13.0-beta.1 2024-11-06 17:26:17 +00:00
Will Jones
b9921d56cc fix(node): update default log level to warn (#1801)
🤦
2024-11-06 09:13:53 -08:00
Lance Release
0cfd9ed18e Updating package-lock.json 2024-11-05 23:21:50 +00:00
Lance Release
975398c3a8 Bump version: 0.12.0 → 0.13.0-beta.0 2024-11-05 23:21:32 +00:00
Lance Release
08d5f93f34 Bump version: 0.15.0 → 0.16.0-beta.0 2024-11-05 23:21:13 +00:00
Will Jones
91cab3b556 feat(python): transition Python remote sdk to use Rust implementation (#1701)
* Replaces Python implementation of Remote SDK with Rust one.
* Drops dependency on `attrs` and `cachetools`. Makes `requests` an
optional dependency used only for embeddings feature.
* Adds dependency on `nest-asyncio`. This was required to get hybrid
search working.
* Deprecate `request_thread_pool` parameter. We now use the tokio
threadpool.
* Stop caching the `schema` on a remote table. Schema is mutable and
there's no mechanism in place to invalidate the cache.
* Removed the client-side resolution of the vector column. We should
already be resolving this server-side.
2024-11-05 13:44:39 -08:00
Will Jones
c61bfc3af8 chore: update package locks (#1798) 2024-11-05 13:28:59 -08:00
Bert
4e8c7b0adf fix: serialize vectordb client errors as json (#1795) 2024-11-05 14:16:25 -05:00
Weston Pace
26f4a80e10 feat: upgrade to lance 0.19.2-beta.3 (#1794) 2024-11-05 06:43:41 -08:00
Will Jones
3604d20ad3 feat(python,node): support with_row_id in Python and remote (#1784)
Needed to support hybrid search in Remote SDK.
2024-11-04 11:25:45 -08:00
Gagan Bhullar
9708d829a9 fix: explain plan options (#1776)
PR fixes #1768
2024-11-04 10:25:34 -08:00
Will Jones
059c9794b5 fix(rust): fix update, open_table, fts search in remote client (#1785)
* `open_table` uses `POST` not `GET`
* `update` uses `predicate` key not `only_if`
* For FTS search, vector cannot be omitted. It must be passed as empty.
* Added logging of JSON request bodies to debug level logging.
2024-11-04 08:27:55 -08:00
Will Jones
15ed7f75a0 feat(python): support post filter on FTS (#1783) 2024-11-01 10:05:05 -07:00
Will Jones
96181ab421 feat: fast_search in Python and Node (#1623)
Sometimes it is acceptable for users to search only indexed data and skip
any new un-indexed data, for example if the un-indexed data will shortly be
indexed and they don't mind the delay. In these cases, we can save a lot
of CPU time in search and provide better latency. Users can activate
this on queries using `fast_search()`.
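
A minimal sketch of turning this on for a single query, based on the `fast_search()` call named above (database path, table name, query vector, and limit are placeholders):

```python
import lancedb

table = lancedb.connect("./demo").open_table("docs")  # placeholders

results = (
    table.search([0.1, 0.2, 0.3])  # placeholder query vector
    .fast_search()                 # only search indexed data; skip un-indexed rows
    .limit(10)
    .to_pandas()
)
```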
2024-11-01 09:29:09 -07:00
Will Jones
f3fc339ef6 fix(rust): fix delete, update, query in remote SDK (#1782)
Fixes several minor issues with Rust remote SDK:

* Delete uses `predicate` not `filter` as parameter
* Update does not return the row value in remote SDK
* Update takes tuples
* Content type returned by query node is wrong, so we shouldn't validate
it. https://github.com/lancedb/sophon/issues/2742
* Data returned by query endpoint is actually an Arrow IPC file, not IPC
stream.
2024-10-31 15:22:09 -07:00
Will Jones
113cd6995b fix: index_stats works for FTS indices (#1780)
When running `index_stats()` for an FTS index, users would get the
deserialization error:

```
InvalidInput { message: "error deserializing index statistics: unknown variant `Inverted`, expected one of `IvfPq`, `IvfHnswPq`, `IvfHnswSq`, `BTree`, `Bitmap`, `LabelList`, `FTS` at line 1 column 24" }
```
2024-10-30 11:33:49 -07:00
Lance Release
02535bdc88 Updating package-lock.json 2024-10-29 22:16:51 +00:00
Lance Release
facc7d61c0 Bump version: 0.12.0-beta.0 → 0.12.0 2024-10-29 22:16:32 +00:00
Lance Release
f947259f16 Bump version: 0.11.1-beta.1 → 0.12.0-beta.0 2024-10-29 22:16:27 +00:00
Lance Release
e291212ecf Bump version: 0.15.0-beta.0 → 0.15.0 2024-10-29 22:16:05 +00:00
Lance Release
edc6445f6f Bump version: 0.14.1-beta.1 → 0.15.0-beta.0 2024-10-29 22:16:05 +00:00
Will Jones
a324f4ad7a feat(node): enable logging and show full errors (#1775)
This exposes the `LANCEDB_LOG` environment variable in node, so that
users can now turn on logging.

In addition, fixes a bug where only the top-level error from Rust was
being shown. This PR makes sure the full error chain is included in the
error message. In the future, will improve this so the error chain is
set on the [cause](https://nodejs.org/api/errors.html#errorcause)
property of JS errors https://github.com/lancedb/lancedb/issues/1779

Fixes #1774
2024-10-29 15:13:34 -07:00
Weston Pace
55104c5bae feat: allow distance type (metric) to be specified during hybrid search (#1777) 2024-10-29 13:51:18 -07:00
Rithik Kumar
d71df4572e docs: revamp langchain integration page (#1773)
Before - 
<img width="1030" alt="Screenshot 2024-10-28 132932"
src="https://github.com/user-attachments/assets/63f78bfa-949e-473e-ab22-0c692577fa3e">


After - 
<img width="1037" alt="Screenshot 2024-10-28 132727"
src="https://github.com/user-attachments/assets/85a12f6c-74f0-49ba-9f1a-fe77ad125704">
2024-10-29 22:55:50 +05:30
Rithik Kumar
aa269199ad docs: fix archived examples links (#1751) 2024-10-29 22:55:27 +05:30
BubbleCal
32fdcf97db feat!: upgrade lance to 0.19.1 (#1762)
BREAKING CHANGE: default tokenizer no longer does stemming or stop-word
removal. Users should explicitly turn that option on in the future.

- upgrade lance to 0.19.1
- update the FTS docs
- update the FTS API

Upstream change notes:
https://github.com/lancedb/lance/releases/tag/v0.19.1

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
Co-authored-by: Will Jones <willjones127@gmail.com>
2024-10-29 09:03:52 -07:00
Ryan Green
b9802a0d23 Revert "fix: error during deserialization of "INVERTED" index type"
This reverts commit 2ea5939f85.
2024-10-25 14:46:47 -02:30
Ryan Green
2ea5939f85 fix: error during deserialization of "INVERTED" index type 2024-10-25 14:40:14 -02:30
Lance Release
04e1f1ee4c Updating package-lock.json 2024-10-23 00:34:22 +00:00
Lance Release
bbc588e27d Bump version: 0.11.1-beta.0 → 0.11.1-beta.1 2024-10-23 00:34:01 +00:00
Lance Release
5517e102c3 Bump version: 0.14.1-beta.0 → 0.14.1-beta.1 2024-10-23 00:33:40 +00:00
Will Jones
82197c54e4 perf: eliminate iop in refresh (#1760)
Closes #1741

If we checkout a version, we need to make a `HEAD` request to get the
size of the manifest. The new `checkout_latest()` code path can skip
this IOP. This makes the refresh slightly faster.
2024-10-18 13:40:24 -07:00
Will Jones
48f46d4751 docs(node): update indexStats signature and regenerate docs (#1742)
`indexStats` still referenced UUID even though in
https://github.com/lancedb/lancedb/pull/1702 we changed it to take name
instead.
2024-10-18 10:53:28 -07:00
Lance Release
437316cbbc Updating package-lock.json 2024-10-17 18:59:18 +00:00
Lance Release
d406eab2c8 Bump version: 0.11.0 → 0.11.1-beta.0 2024-10-17 18:59:01 +00:00
Lance Release
1f41101897 Bump version: 0.14.0 → 0.14.1-beta.0 2024-10-17 18:58:45 +00:00
Will Jones
99e4db0d6a feat(rust): allow add_embedding on create_empty_table (#1754)
Fixes https://github.com/lancedb/lancedb/issues/1750
2024-10-17 11:58:15 -07:00
Will Jones
46486d4d22 fix: list_indices can handle fts indexes (#1753)
Fixes #1752
2024-10-16 10:39:40 -07:00
Weston Pace
f43cb8bba1 feat: upgrade lance to 0.18.3 (#1748) 2024-10-16 00:48:31 -07:00
James Wu
38eb05f297 fix(python): remove dependency on retry package (#1749)
## user story

fixes https://github.com/lancedb/lancedb/issues/1480

https://github.com/invl/retry has not had an update in 8 years, and one of
its sub-dependencies via requirements.txt
(https://github.com/pytest-dev/py) is no longer maintained and has a
high severity vulnerability (CVE-2022-42969).

retry is only used for a single function in the python codebase for a
deprecated helper function `with_embeddings`, which was created for an
older tutorial (https://github.com/lancedb/lancedb/pull/12) [but is now
deprecated](https://lancedb.github.io/lancedb/embeddings/legacy/).

## changes

i backported a limited range of functionality of the `@retry()`
decorator directly into lancedb so that we no longer have a dependency
on the `retry` package.

## tests

```
/Users/james/src/lancedb/python $ ruff check .
All checks passed!
/Users/james/src/lancedb/python $ pytest python/tests/test_embeddings.py
python/tests/test_embeddings.py .......s....                                                                                                                        [100%]
================================================================ 11 passed, 1 skipped, 2 warnings in 7.08s ================================================================
```
2024-10-15 15:13:57 -07:00
Ryan Green
679a70231e feat: allow fast_search on python remote table (#1747)
Add `fast_search` parameter to query builder and remote table to support
skipping flat search in remote search
2024-10-14 14:39:54 -06:00
Dominik Weckmüller
e7b56b7b2a docs: add permanent link chain icon to headings without impacting SEO (#1746)
I noticed that there are no permanent links in the docs. Adapted the
current best solution from
https://github.com/squidfunk/mkdocs-material/discussions/3535. It adds a
GitHub-like chain icon to the left of each heading (right on mobile) and
does not impact SEO, unlike the default solution with the pilcrow char `¶`
that might show up in Google search results.

<img alt="image"
src="https://user-images.githubusercontent.com/182589/153004627-6df3f8e9-c747-4f43-bd62-a8dabaa96c3f.gif">
2024-10-14 11:58:23 -07:00
Olzhas Alexandrov
5ccd0edec2 docs: clarify infrastructure requirements for S3 Express One Zone (#1745) 2024-10-11 14:06:28 -06:00
Will Jones
9c74c435e0 ci: update package lock (#1740) 2024-10-09 15:14:08 -06:00
Lance Release
6de53ce393 Updating package-lock.json 2024-10-09 18:54:29 +00:00
Lance Release
9f42fbba96 Bump version: 0.11.0-beta.2 → 0.11.0 2024-10-09 18:54:09 +00:00
Lance Release
d892f7a622 Bump version: 0.11.0-beta.1 → 0.11.0-beta.2 2024-10-09 18:54:04 +00:00
Lance Release
515ab5f417 Bump version: 0.14.0-beta.1 → 0.14.0 2024-10-09 18:53:35 +00:00
Lance Release
8d0055fe6b Bump version: 0.14.0-beta.0 → 0.14.0-beta.1 2024-10-09 18:53:34 +00:00
Will Jones
5f9d8509b3 feat: upgrade Lance to v0.18.2 (#1737)
Includes changes from v0.18.1 and v0.18.2:

* [v0.18.1 change
log](https://github.com/lancedb/lance/releases/tag/v0.18.1)
* [v0.18.2 change
log](https://github.com/lancedb/lance/releases/tag/v0.18.2)

Closes #1656
Closes #1615
Closes #1661
2024-10-09 11:46:46 -06:00
Will Jones
f3b6a1f55b feat(node): bind remote SDK to rust implementation (#1730)
Closes [#2509](https://github.com/lancedb/sophon/issues/2509)

This is the Node.js analogue of #1700
2024-10-09 11:46:27 -06:00
Will Jones
aff25e3bf9 fix(node): add native packages to bump version (#1738)
We weren't bumping the version, so when users downloaded our package
from npm, they were getting the old binaries.
2024-10-08 23:03:53 -06:00
Will Jones
8509f73221 feat: better errors for remote SDK (#1722)
* Adds nicer errors to remote SDK, that expose useful properties like
`request_id` and `status_code`.
* Makes sure the Python tracebacks print nicely by mapping the `source`
field from a Rust error to the `__cause__` field.
2024-10-08 22:21:13 -06:00
Will Jones
607476788e feat(rust): list_indices in remote SDK (#1726)
Implements `list_indices`.

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-10-08 21:45:21 -06:00
Gagan Bhullar
4d458d5829 feat(python): drop support for dictionary in Table.add (#1725)
PR closes #1706
2024-10-08 20:41:08 -06:00
Will Jones
e61ba7f4e2 fix(rust): remote SDK bugs (#1723)
A few bugs uncovered by integration tests:

* We didn't prepend `/v1` to the Table endpoint URLs
* `/create_index` takes `metric_type` not `distance_type`. (This is also
an error in the OpenAPI docs.)
* `/create_index` expects the `metric_type` parameter to always be
lowercase.
* We were writing an IPC file message when we were supposed to send an
IPC stream message.
2024-10-04 08:43:07 -07:00
Prashant Dixit
408bc96a44 fix: broken notebook link fix (#1721) 2024-10-03 16:15:27 +05:30
Rithik Kumar
6ceaf8b06e docs: add langchainjs writing assistant (#1719) 2024-10-03 00:55:00 +05:30
Prashant Dixit
e2ca8daee1 docs: saleforce's sfr rag (#1717)
This PR adds Salesforce's newly released SFR RAG
2024-10-02 21:15:24 +05:30
Will Jones
f305f34d9b feat(python): bind python async remote client to rust client (#1700)
Closes [#1638](https://github.com/lancedb/lancedb/issues/1638)

This just binds the Python Async client to the Rust remote client.
2024-10-01 15:46:59 -07:00
Will Jones
a416925ca1 feat(rust): client configuration for remote client (#1696)
This PR ports over advanced client configuration present in the Python
`RestfulLanceDBClient` to the Rust one. The goal is to have feature
parity so we can replace the implementation.

* [x] Request timeout
* [x] Retries with backoff
* [x] Request id generation
* [x] User agent (with default tied to library version)
* [x] Table existence cache
* [ ] Deferred: ~Request id customization (should this just pick up OTEL
trace ids?)~

Fixes #1684
2024-10-01 10:22:53 -07:00
Will Jones
2c4b07eb17 feat(python): merge_insert in async Python (#1707)
Fixes #1401
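
A rough sketch of an async upsert, assuming the async builder mirrors the sync `merge_insert` API (database path, table name, key column, and data are placeholders):

```python
import asyncio
import lancedb

async def upsert(rows):
    db = await lancedb.connect_async("./demo")  # placeholder path
    table = await db.open_table("items")        # placeholder table
    await (
        table.merge_insert("id")                # match on the "id" column
        .when_matched_update_all()
        .when_not_matched_insert_all()
        .execute(rows)
    )

asyncio.run(upsert([{"id": 1, "vector": [0.1, 0.2]}]))
```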
2024-10-01 10:06:52 -07:00
Will Jones
33b402c861 fix: list_indices returns correct index type (#1715)
Fixes https://github.com/lancedb/lancedb/issues/1711

Doesn't address this https://github.com/lancedb/lance/issues/2039

Instead we load the index statistics, which seems to contain the index
type. However, this involves more IO than previously. I'm not sure
whether we care that much. If we do, we can fix that upstream Lance
issue.
2024-10-01 09:16:18 -07:00
Rithik Kumar
7b2cdd2269 docs: revamp Voxel51 v1 (#1714)
Revamp Voxel51

![image](https://github.com/user-attachments/assets/7ac34457-74ec-4654-b1d1-556e3d7357f5)
2024-10-01 11:59:03 +05:30
Akash Saravanan
d6b5054778 feat(python): add support for trust_remote_code in hf embeddings (#1712)
Resolves #1709. Adds `trust_remote_code` as a parameter to the
`TransformersEmbeddingFunction` class with a default of False. Updated the
relevant documentation with the same.
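
A small sketch of passing the new flag through the embedding registry (the "huggingface" registry key and model name are assumptions for illustration; the parameter itself is named in the PR):

```python
from lancedb.embeddings import get_registry

# Registry key and model name are assumptions; trust_remote_code defaults to False
model = get_registry().get("huggingface").create(
    name="some-org/custom-model",
    trust_remote_code=True,
)
```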
2024-10-01 01:06:28 +05:30
Lei Xu
f0e7f5f665 ci: change to use github runner (#1708)
Use github runner
2024-09-27 17:53:05 -07:00
Will Jones
f958f4d2e8 feat: remote index stats (#1702)
BREAKING CHANGE: the return value of `index_stats` method has changed
and all `index_stats` APIs now take index name instead of UUID. Also
several deprecated index statistics methods were removed.

* Removes deprecated methods for individual index statistics
* Aligns public `IndexStatistics` struct with API response from LanceDB
Cloud.
* Implements `index_stats` for remote Rust SDK and Python async API.
2024-09-27 12:10:00 -07:00
Will Jones
c1d9d6f70b feat(rust): remote rename table (#1703)
Adds rename to remote table. Pre-requisite for
https://github.com/lancedb/lancedb/pull/1701
2024-09-27 09:37:54 -07:00
Will Jones
1778219ea9 feat(rust): remote client query and create_index endpoints (#1663)
Support for `query` and `create_index`.

Closes [#2519](https://github.com/lancedb/sophon/issues/2519)
2024-09-27 09:00:22 -07:00
Rob Meng
ee6c18f207 feat: expose underlying dataset uri of the table (#1704) 2024-09-27 10:20:02 -04:00
rjrobben
e606a455df fix(EmbeddingFunction): modify safe_model_dump to explicitly exclude class fields with underscore (#1688)
Resolve issue #1681

---------

Co-authored-by: rjrobben <rjrobben123@gmail.com>
2024-09-25 11:53:49 -07:00
Gagan Bhullar
8f0eb34109 fix: hnsw default partitions (#1667)
PR fixes #1662

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-25 09:16:03 -07:00
Ayush Chaurasia
2f2721e242 feat(python): allow explicit hybrid search query pattern in SaaS (feat parity) (#1698)
-  fixes https://github.com/lancedb/lancedb/issues/1697.
- unifies vector column inference logic for remote and local table to
prevent future disparities.
- Updates docstring in RemoteTable to specify empty queries are not
supported
2024-09-25 21:04:00 +05:30
QianZhu
f00b21c98c fix: metric type for python/node search api (#1689) 2024-09-24 16:10:29 -07:00
Lance Release
962b3afd17 Updating package-lock.json 2024-09-24 16:51:37 +00:00
Lance Release
b72ac073ab Bump version: 0.11.0-beta.0 → 0.11.0-beta.1 2024-09-24 16:51:16 +00:00
Bert
3152ccd13c fix: re-add hostOverride arg to ConnectionOptions (#1694)
Fixes an issue where hostOverride was no longer passed through to
RemoteConnection
2024-09-24 13:29:26 -03:00
Bert
d5021356b4 feat: add fast_search to vectordb (#1693) 2024-09-24 13:28:54 -03:00
Will Jones
e82f63b40a fix(node): pass no const enum (#1690)
Apparently this is a no-no for libraries.
https://ncjamieson.com/dont-export-const-enums/

Fixes [#1664](https://github.com/lancedb/lancedb/issues/1664)
2024-09-24 07:41:42 -07:00
Ayush Chaurasia
f81ce68e41 fix(python): force deduce vector column name if running explicit hybrid query (#1692)
Right now, when passing vector and query explicitly for hybrid search,
vector_column_name is not deduced
(https://lancedb.github.io/lancedb/hybrid_search/hybrid_search/#hybrid-search-in-lancedb),
because vector and query can both be None when initialising the
QueryBuilder in this case. This PR forces deduction of the vector column
name if the query type is set to "hybrid"
2024-09-24 19:02:56 +05:30
Will Jones
f5c25b6fff ci: run clippy on tests (#1659) 2024-09-23 07:33:47 -07:00
Ayush Chaurasia
86978e7588 feat!: enforce all rerankers always return relevance score & deprecate linear combination fixes (#1687)
- Enforce that all rerankers always return _relevance_score. This was already
loosely done in tests before, but based on user feedback it's better to
always have _relevance_score present in all reranked results
- Deprecate LinearCombinationReranker in docs. Also fix a case where
it would not return _relevance_score if one result set was missing
2024-09-23 12:12:02 +05:30
Lei Xu
7c314d61cc chore: add error handling for openai embedding generation (#1680) 2024-09-23 12:10:56 +05:30
Lei Xu
7a8d2f37c4 feat(rust): add with_row_id to rust SDK (#1683) 2024-09-21 21:26:19 -07:00
Rithik Kumar
11072b9edc docs: phidata integration page (#1678)
Added new integration page for phidata :

![image](https://github.com/user-attachments/assets/8cd9b420-f249-4eac-ac13-ae53983822be)
2024-09-21 00:40:47 +05:30
Lei Xu
915d828cee feat!: set embeddings to Null if embedding function return invalid results (#1674) 2024-09-19 23:16:20 -07:00
Lance Release
d9a72adc58 Updating package-lock.json 2024-09-19 17:53:19 +00:00
Lance Release
d6cf2dafc6 Bump version: 0.10.0 → 0.11.0-beta.0 2024-09-19 17:53:00 +00:00
Lance Release
38f0031d0b Bump version: 0.13.0 → 0.14.0-beta.0 2024-09-19 17:52:38 +00:00
LuQQiu
e118c37228 ci: enable java auto release (#1602)
Enable bumping java pom.xml versions
Enable automatic java release when a stable github release is detected
2024-09-19 10:51:03 -07:00
LuQQiu
abeaae3d80 feat!: upgrade Lance to 0.18.0 (#1657)
BREAKING CHANGE: default file format changed to Lance v2.0.

Upgrade Lance to 0.18.0

Change notes: https://github.com/lancedb/lance/releases/tag/v0.18.0
2024-09-19 10:50:26 -07:00
Gagan Bhullar
b3c0227065 docs: hnsw documentation (#1640)
PR closes #1627

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-19 10:32:46 -07:00
Will Jones
521e665f57 feat(rust): remote client write data endpoint (#1645)
* Implements:
  * Add
  * Update
  * Delete
  * Merge-Insert

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-09-18 15:02:56 -07:00
Will Jones
ffb28dd4fc feat(rust): remote endpoints for schema, version, count_rows (#1644)
A handful of additional endpoints.
2024-09-16 08:19:25 -07:00
Lei Xu
32af962c0c feat: fix creating empty table and creating table by a list of RecordBatch for remote python sdk (#1650)
Closes #1637
2024-09-14 11:33:34 -07:00
Ayush Chaurasia
18484d0b6c fix: allow pass optional args in colbert reranker (#1649)
Fixes https://github.com/lancedb/lancedb/issues/1641
2024-09-14 11:18:09 -07:00
Lei Xu
c02ee3c80c chore: make remote client a context manager (#1648)
Allow `RemoteLanceDBClient` to be used as context manager
2024-09-13 22:08:48 -07:00
Rithik Kumar
dcd5f51036 docs: add understand embeddings v1 (#1643)
Before getting started with **managing embeddings**, let's **understand
embeddings** (the LanceDB way)

![Screenshot 2024-09-14
012144](https://github.com/user-attachments/assets/7c5435dc-5316-47e9-8d7d-9994ab13b93d)
2024-09-14 02:07:00 +05:30
Sayandip Dutta
9b8472850e fix: unterminated string literal on table update (#1573)
resolves #1429 
(python)

```python
-    return f"'{value}'"
+    return f'"{value}"'
```

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-13 12:32:59 -07:00
Sayandip Dutta
36d05ea641 fix: add appropriate QueryBuilder overloads to LanceTable.search (#1558)
- Add overloads to Table.search to preserve the return information
of different types of QueryBuilder objects for LanceTable
- Fix the fts_column type annotation by making it `Optional`

resolves #1550

---------

Co-authored-by: sayandip-dutta <sayandip.dutta@nevaehtech.com>
Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-13 12:32:30 -07:00
LuQQiu
7ed86cadfb feat(node): let NODE API region default to us-east-1 (#1631)
Fixes #1622 
To sync with python API
2024-09-13 11:48:57 -07:00
Will Jones
1c123b58d8 feat: implement Remote connection for LanceDB Rust (#1639)
* Adding a simple test facility, which allows you to mock a single
endpoint at a time with a closure.
* Implementing all the database-level endpoints

Table-level APIs will be done in a follow up PR.

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-09-13 10:53:27 -07:00
BubbleCal
bf7d2d6fb0 docs: update FTS docs for JS SDK (#1634)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-09-13 05:48:29 -07:00
LuQQiu
c7732585bf fix: support pyarrow input types (#1628)
fixes #1625 
Support pa.RecordBatch, pa.dataset.Dataset, pa.dataset.Scanner, and
pa.RecordBatchReader
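
A short sketch of passing a `pa.RecordBatchReader` straight to `create_table` (database path and table name are placeholders):

```python
import pyarrow as pa
import lancedb

db = lancedb.connect("./demo")  # placeholder path

schema = pa.schema([("id", pa.int64())])
batches = [pa.record_batch([pa.array([1, 2, 3])], schema=schema)]
reader = pa.RecordBatchReader.from_batches(schema, batches)

# Readers (and pa.dataset.Dataset / pa.dataset.Scanner objects) can now be passed directly
db.create_table("ids", reader)
```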
2024-09-12 10:59:18 -07:00
Prashant Dixit
b3bf6386c3 docs: rag section in guide (#1619)
This PR adds the RAG section to the Guides. It includes all the RAGs
with code snippets and some advanced techniques which improve RAG.
2024-09-11 21:13:55 +05:30
BubbleCal
4b79db72bf docs: improve the docs and API param name (#1629)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-09-11 10:18:29 +08:00
Lance Release
622a2922e2 Updating package-lock.json 2024-09-10 20:12:54 +00:00
Lance Release
c91221d710 Bump version: 0.10.0-beta.2 → 0.10.0 2024-09-10 20:12:41 +00:00
Lance Release
56da5ebd13 Bump version: 0.10.0-beta.1 → 0.10.0-beta.2 2024-09-10 20:12:40 +00:00
Lance Release
64eb43229d Bump version: 0.13.0-beta.2 → 0.13.0 2024-09-10 20:12:35 +00:00
Lance Release
c31c92122f Bump version: 0.13.0-beta.1 → 0.13.0-beta.2 2024-09-10 20:12:35 +00:00
Gagan Bhullar
205fc530cf feat: expose hnsw indices (#1595)
PR closes #1522

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-10 11:08:13 -07:00
BubbleCal
2bde5401eb feat: support to build FTS without positions (#1621) 2024-09-10 22:51:32 +08:00
Antonio Molner Domenech
a405847f9b fix(python): remove unmaintained ratelimiter dependency (#1603)
The `ratelimiter` package hasn't been updated in ages and is no longer
maintained. This PR removes the dependency on `ratelimiter` and replaces
it with a custom rate limiter implementation.

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-09-09 12:35:53 -07:00
Gagan Bhullar
bcc19665ce feat(nodejs): expose offset (#1620)
PR closes #1555
2024-09-09 11:54:40 -07:00
Will Jones
2a6586d6fb feat: add flag to enable faster manifest paths (#1612)
The new V2 manifest path scheme makes discovering the latest version of
a table constant time on object stores, regardless of the number of
versions in the table. See benchmarks in the PR here:
https://github.com/lancedb/lance/pull/2798

Closes #1583
2024-09-09 11:34:36 -07:00
James Wu
029b01bbbf feat: enable phrase_query(bool) for hybrid search queries (#1578)
first off, apologies for any folly since i'm new to contributing to
lancedb. this PR is the continuation of [a discord
thread](https://discord.com/channels/1030247538198061086/1030247538667827251/1278844345713299599):

## user story

here's the lance db search query i'd like to run:

```
def search(phrase):
    logger.info(f'Searching for phrase: {phrase}')
    phrase_embedding = get_embedding(phrase)
    df = (table.search((phrase_embedding, phrase), query_type='hybrid')
        .limit(10).to_list())
    logger.info(f'Success search with row count: {len(df)}')

search('howdy (howdy)')
search('howdy(howdy)')
```

the second search fails due to `ValueError: Syntax Error: howdy(howdy)`

i saw on the
[docs](https://lancedb.github.io/lancedb/fts/#phrase-queries-vs-terms-queries)
that i can use `phrase_query()` to [enable a
flag](https://github.com/lancedb/lancedb/blob/main/python/python/lancedb/query.py#L790-L792)
to wrap the query in double quotes (as well as sanitize single quotes)
prior to sending the query to search. this works for [normal
FTS](https://lancedb.github.io/lancedb/fts/), but the command is
unavailable on [hybrid
search](https://lancedb.github.io/lancedb/hybrid_search/hybrid_search/).

## changes

i added a `phrase_query()` function to `LanceHybridQueryBuilder` by
propagating the call down to its `self._fts_query` object. i'm not too
familiar with the codebase and am not sure if this is the best way to
implement the functionality. feel free to riff on this PR or discard


## tests

```
(lancedb) JamesMPB:python james$ pwd
/Users/james/src/lancedb/python
(lancedb) JamesMPB:python james$ pytest python/tests/test_table.py 
python/tests/test_table.py .......................................                                                                   [100%]
====================================================== 39 passed, 1 warning in 2.23s =======================================================
```
2024-09-07 08:58:05 +05:30
Will Jones
cd32944e54 feat: upgrade lance to v0.17.0 (#1608)
Changelog: https://github.com/lancedb/lance/releases/tag/v0.17.0

Highlights:

* You can do "phrase queries" by adding double quotes around phrases
(multiple tokens) in FTS.

Added follow ups in: https://github.com/lancedb/lancedb/issues/1611
2024-09-06 14:10:02 -07:00
Jon X
7eb3b52297 docs: added a blank line between a paragraph and a list block (#1604)
Though the markdown renders well on GitHub (GFM style?), it seems that a
blank line is required between a paragraph and a list block to make it
render well with `mkdocs`.

see also the web page:
https://lancedb.github.io/lancedb/concepts/index_hnsw/
2024-09-06 09:38:19 +05:30
BubbleCal
8dcd328dce feat: support to create table from record batch iterator (#1593) 2024-09-06 10:41:38 +08:00
Philip Zeyliger
1d61717d0e docs: fix get_registry() usage (#1601)
Docs used `get_registry.get(...)` whereas what works is
`get_registry().get(...)`. Fixing the two instances I found. I tested
the open clip version by trying it locally in a Jupyter notebook.
2024-09-06 01:48:24 +05:30
Lei Xu
4ee7225e91 ci: public java package (#1485)
Co-authored-by: Lu Qiu <luqiujob@gmail.com>
2024-09-05 11:48:48 -07:00
Rithik Kumar
2bc7dca3ca docs: add changes to Embeddings-> Available models-> overview page (#1596)
adding features and improvements to - Manage Embeddings page

Before:
![Screenshot 2024-09-04
223743](https://github.com/user-attachments/assets/f1e116b5-6ebb-4d59-9d29-b20084998cd0)

After:



![Screenshot 2024-09-05
214214](https://github.com/user-attachments/assets/8c94318e-68af-447e-97e1-8153860a2914)

![Screenshot 2024-09-05
213623](https://github.com/user-attachments/assets/55c82770-6df9-4bab-9c5c-1ea1552138de)

![Screenshot 2024-09-05
215931](https://github.com/user-attachments/assets/9bfac7d4-16a6-454e-801e-50789ff75261)
2024-09-05 22:19:08 +05:30
Gagan Bhullar
b24810a011 feat(python, rust): expose offset in query (#1556)
PR is part of #1555
2024-09-05 08:33:07 -07:00
Jon X
2b8e872be0 docs: removed the unnecessary fence code tag (#1599) 2024-09-05 14:40:38 +05:30
Ayush Chaurasia
03ef1dc081 feat: update default reranker to RRF (#1580)
- Both LinearCombination (the current default) and RRF are pretty fast
compared to model based rerankers. RRF is slightly faster.
- In our tests RRF has also been slightly more accurate.

This PR:
- Makes RRF the default reranker
- Removed duplicate docs for rerankers
2024-09-03 14:00:13 +05:30
Rithik Kumar
fde636ca2e docs: fix links - quick start to embedding (#1591) 2024-09-02 21:55:35 +05:30
Ayush Chaurasia
51966a84f5 docs: add multi-vector reranking, answerdotai and studies section (#1579) 2024-08-31 04:09:14 +05:30
Rithik Kumar
38015ffa7c docs: improve overall language on all example pages (#1582)
Refine and improve the language clarity and quality across all example
pages in the documentation to ensure better understanding and
readability.

---------

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
2024-08-31 03:48:11 +05:30
Ayush Chaurasia
dc72ece847 feat!: better api for manual hybrid queries (#1575)
Currently, the only documented way of performing hybrid search is by
using the embedding API and passing string queries that get automatically
embedded. There are use cases where users might like to pass vectors and
text manually instead.
This ticket contains more information and historical context -
https://github.com/lancedb/lancedb/issues/937

This breaks an undocumented pathway that allowed passing (vector, text)
tuple queries, which was intended to be temporary, so this is marked as a
breaking change. For all practical purposes, this should not really
impact most users.

### usage
```python
results = (
    table.search(query_type="hybrid")
    .vector(vector_query)
    .text(text_query)
    .limit(5)
    .to_pandas()
)
```
2024-08-30 17:37:58 +05:30
BubbleCal
1521435193 fix: specify column to search for FTS (#1572)
Before this, we ignored the `fts_columns` parameter. For now we support
searching on only one column, which could lead to an error if we have
multiple indexed columns for FTS

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-08-29 23:43:46 +08:00
Ayush Chaurasia
bfe8fccfab docs: add hnsw docs (#1570) 2024-08-29 15:16:27 +05:30
Rithik Kumar
6f6eb170a9 docs: revamp Python example: Overview page and remove redundant examples and notebooks (#1574)
before:
![Screenshot 2024-08-29
131656](https://github.com/user-attachments/assets/81cb5d70-5dff-4e57-8bbe-3461327aed7d)

After:
![Screenshot 2024-08-29
131715](https://github.com/user-attachments/assets/62109a37-7f66-4fd4-90ed-906a85472117)

---------

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
2024-08-29 13:48:10 +05:30
Rithik Kumar
dd1c16bbaf docs: fix links, convert backslash to forward slash in mkdocs.yml (#1571)
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
2024-08-28 16:07:57 +05:30
Gagan Bhullar
a76186ee83 fix(node): read consistency level fix (#1567)
PR fixes #1565
2024-08-27 17:03:42 -07:00
Rithik Kumar
ae85008714 docs: revamp embedding models (#1568)
before:
![Screenshot 2024-08-27
151525](https://github.com/user-attachments/assets/d4f8f2b9-37e6-4a31-b144-01b804019e11)

After:
![Screenshot 2024-08-27
151550](https://github.com/user-attachments/assets/79fe7d27-8f14-4d80-9b41-a1e91f8c708f)

---------

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
2024-08-27 17:14:35 +05:30
Gagan Bhullar
a85f039352 fix(bug): limit fix (#1548)
PR fixes #1151
2024-08-26 14:25:14 -07:00
Bill Chambers
9c25998110 docs: update serverless_lancedb_with_s3_and_lambda.md (#1559) 2024-08-26 14:55:28 +05:30
Ayush Chaurasia
549ca51a8a feat: add answerdotai rerankers support and minor improvements (#1560)
This PR:
- Adds missing license headers
- Integrates with answerdotai Rerankers package
- Updates ColbertReranker to subclass the answerdotai package. This is done
to keep backwards compatibility, as some users might be used to importing
ColbertReranker directly
- Set `trust_remote_code` to `True` by default in CrossEncoder and
sentence-transformer based rerankers
2024-08-26 13:25:10 +05:30
Rithik Kumar
632007d0e2 docs: add recommender system example (#1561)
before:
![Screenshot 2024-08-24
230216](https://github.com/user-attachments/assets/cc8a810a-b032-45d7-b086-b2ef0720dc16)

After:
![Screenshot 2024-08-24
230228](https://github.com/user-attachments/assets/eaa1dc31-ac7f-4b81-aa79-b4cf94f0cbd5)

---------

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
2024-08-25 12:30:30 +05:30
Lance Release
02d85a4ea4 Updating package-lock.json 2024-08-23 13:56:54 +00:00
Lance Release
a9d0625e2b Bump version: 0.10.0-beta.0 → 0.10.0-beta.1 2024-08-23 13:56:34 +00:00
Lance Release
89bcc1b2e7 Bump version: 0.13.0-beta.0 → 0.13.0-beta.1 2024-08-23 13:56:30 +00:00
rahuljo
6ad5553eca docs: add dlt-lancedb integration page (#1551)
Co-authored-by: Akela Drissner-Schmid <32450038+akelad@users.noreply.github.com>
2024-08-22 15:18:49 +05:30
Gagan Bhullar
6eb7ccfdee fix: rerank attribute unknown (#1554)
PR fixes #1550
2024-08-22 11:46:36 +05:30
Rithik Kumar
758c82858f docs: add AI agent example (#1553)
before:
![Screenshot 2024-08-21
225014](https://github.com/user-attachments/assets/e5b05586-87c5-4739-a4df-2d6cd0704ba5)

After:
![Screenshot 2024-08-21
225029](https://github.com/user-attachments/assets/504959db-f560-49b2-9492-557e9846a793)

---------

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
2024-08-22 00:54:05 +05:30
Rithik Kumar
0cbc9cd551 docs: add evaluation example (#1552)
before:
![Screenshot 2024-08-21
194228](https://github.com/user-attachments/assets/68d96658-7579-4934-85af-e8c898b64660)

After:
![Screenshot 2024-08-21
195258](https://github.com/user-attachments/assets/81ddb9cd-cb93-47fc-a121-ff82701fd11f)

---------

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
2024-08-21 20:37:04 +05:30
Ayush Chaurasia
7d65dd97cf chore(python): update Colbert architecture and minor improvements (#1547)
- Update ColBertReranker architecture: The current implementation
doesn't use the right arch. This PR uses the implementation in Rerankers
library. Fixes https://github.com/lancedb/lancedb/issues/1546
Benchmark diff (hit rate):
Hybrid - 91 vs 87
reranked vector - 85 vs 80

- Reranking in FTS is basically disabled in main after last week's FTS
updates. I think there's no blocker in supporting that?
- Allow overriding accelerators: most transformer-based rerankers and
embedding functions automatically select a device. This PR allows overriding
those settings by passing `device`. Fixes:
https://github.com/lancedb/lancedb/issues/1487

---------

Co-authored-by: BubbleCal <bubble-cal@outlook.com>
2024-08-21 12:26:52 +05:30
Ayush Chaurasia
85bb7e54e4 docs: missing griffe dependency for mkdocs deployment (#1545) 2024-08-19 07:48:23 +05:30
Rithik Kumar
21014cab45 docs: add chatbot example and improve quality of other examples (#1544) 2024-08-17 12:35:33 +05:30
Lei Xu
5857cb4c6e docs: add a section to describe scalar index (#1495) 2024-08-16 18:48:29 -07:00
Rithik Kumar
09ce6c5bb5 docs: add vector search example (#1543) 2024-08-16 21:30:45 +05:30
BubbleCal
0fa50775d6 feat: support to query/index FTS on RemoteTable/AsyncTable (#1537)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-08-16 12:01:05 +08:00
Gagan Bhullar
20faa4424b feat(python): add delete unverified parameter (#1542)
PR fixes #1527
2024-08-15 09:01:32 -07:00
BubbleCal
b624fc59eb docs: add create_fts_index doc in Python API Reference (#1533)
resolve #1313

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-08-15 11:35:16 +08:00
Gagan Bhullar
d2caa5e202 feat(nodejs): add delete unverified (#1530)
PR fixes part of #1527
2024-08-14 08:53:53 -07:00
BubbleCal
501817cfac chore: bump the required python version to 3.9 (#1541)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-08-14 08:44:31 -07:00
Ryan Green
b3daa25f46 feat: allow new scalar index types to be created in remote table (#1538) 2024-08-13 16:05:42 -02:30
Matt Basta
6008a8257b fix: remove native.d.ts from .npmignore (#1531)
This removes the type definitions for a number of important TypeScript
interfaces from `.npmignore` so that the package is not incorrectly
typed `any` in a number of places.

---

Presently the `opts` argument to `lancedb.connect` is typed `any`, even
though it shouldn't be.

<img width="560" alt="image"
src="https://github.com/user-attachments/assets/5c974ce8-5a59-44a1-935d-cbb808f0ea24">

Clicking into the type definitions for the published package, it has the
correct type signature:

<img width="831" alt="image"
src="https://github.com/user-attachments/assets/6e39a519-13ff-4ca8-95ae-85538ac59d5d">

However, `ConnectionOptions` is imported from `native.js` (along with a
number of other imports a bit further down):

<img width="384" alt="image"
src="https://github.com/user-attachments/assets/10c1b055-ae78-4088-922e-2816af64c23c">

This is not otherwise an issue, except that the type definitions for
`native.js` are not included in the published package:

<img width="217" alt="image"
src="https://github.com/user-attachments/assets/f15cd3b6-a8de-4011-9fa2-391858da20ec">

I haven't compiled the Rust code and run the build script, but I
strongly suspect that excluding the type definitions via `.npmignore`
is ultimately the root cause here.
2024-08-13 10:06:15 -07:00
Lance Release
aaff43d304 Updating package-lock.json 2024-08-12 19:48:18 +00:00
Lance Release
d4c3a8ca87 Bump version: 0.9.0 → 0.10.0-beta.0 2024-08-12 19:48:02 +00:00
Lance Release
ff5bbfdd4c Bump version: 0.12.0 → 0.13.0-beta.0 2024-08-12 19:47:57 +00:00
Lei Xu
694ca30c7c feat(nodejs): add bitmap and label list index types in nodejs (#1532) 2024-08-11 12:06:02 -07:00
Lei Xu
b2317c904d feat: create bitmap and label list scalar index using python async api (#1529)
* Expose `bitmap` and `LabelList` scalar index types via the Rust and async
Python APIs
* Add documentation
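
A brief sketch of the async Python usage, assuming the index config classes live in `lancedb.index` (database path, table, and column names are placeholders):

```python
import asyncio
import lancedb
from lancedb.index import Bitmap, LabelList  # assumed import location

async def build_indexes():
    db = await lancedb.connect_async("./demo")  # placeholder path
    table = await db.open_table("docs")         # placeholder table
    await table.create_index("category", config=Bitmap())  # low-cardinality column
    await table.create_index("tags", config=LabelList())   # list-of-labels column

asyncio.run(build_indexes())
```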
2024-08-11 09:16:11 -07:00
BubbleCal
613f3063b9 chore: upgrade lance to 0.16.1 (#1524)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-08-09 19:18:05 +08:00
BubbleCal
5d2cd7fb2e chore: upgrade object_store to 0.10.2 (#1523)
To use the same version with lance

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-08-09 12:03:46 +08:00
Ayush Chaurasia
a88e9bb134 docs: add lancedb embedding fcn on cloud docs (#1521) 2024-08-09 07:21:04 +05:30
Gagan Bhullar
9c1adff426 feat(python): add to_list to async api (#1520)
PR fixes #1517
2024-08-08 11:45:20 -07:00
BubbleCal
f9d5fa88a1 feat!: migrate FTS from tantivy to lance-index (#1483)
Lance now supports FTS, so add it into lancedb Python, TypeScript and
Rust SDKs.

For Python, we still use tantivy-based FTS by default because the lance
FTS index currently lacks some features of tantivy.

For Python:
- Support creating a lance-based FTS index
- Support specifying columns for full text search (only available for the
lance-based FTS index)
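
A tiny sketch of opting into the lance-based index from the sync Python API (database path, table, and column names are placeholders; the `use_tantivy` flag is an assumption about how the choice is exposed):

```python
import lancedb

table = lancedb.connect("./demo").open_table("docs")  # placeholders

# Default (no flag) keeps the tantivy-based index; passing use_tantivy=False
# (flag name assumed) opts into the native lance FTS index, which is the
# variant that supports specifying columns for full text search.
table.create_fts_index("text", use_tantivy=False)
```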

For TypeScript:
- Change the search method so that it can accept both string and vector
- Support full text search

For Rust
- Support full text search

The others:
- Update the FTS doc

BREAKING CHANGE: 
- for Python, this renames the attached score column of FTS from "score"
to "_score"; this could be a breaking change for users that rely on the
scores

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-08-08 15:33:15 +08:00
Lance Release
4db554eea5 Updating package-lock.json 2024-08-07 20:56:12 +00:00
Lance Release
101066788d Bump version: 0.9.0-beta.0 → 0.9.0 2024-08-07 20:55:53 +00:00
Lance Release
c4135d9d30 Bump version: 0.8.0 → 0.9.0-beta.0 2024-08-07 20:55:52 +00:00
Lance Release
ec39d98571 Bump version: 0.12.0-beta.0 → 0.12.0 2024-08-07 20:55:40 +00:00
Lance Release
0cb37f0e5e Bump version: 0.11.0 → 0.12.0-beta.0 2024-08-07 20:55:39 +00:00
Gagan Bhullar
24e3507ee2 fix(node): export optimize options (#1518)
PR fixes #1514
2024-08-07 13:15:51 -07:00
Lei Xu
2bdf0a02f9 feat!: upgrade lance to 0.16 (#1519) 2024-08-07 13:15:22 -07:00
Gagan Bhullar
32123713fd feat(python): optimize stats repr method (#1510)
PR fixes #1507
2024-08-07 08:47:52 -07:00
Gagan Bhullar
d5a01ffe7b feat(python): index config repr method (#1509)
PR fixes #1506
2024-08-07 08:46:46 -07:00
Ayush Chaurasia
e01045692c feat(python): support embedding functions in remote table (#1405) 2024-08-07 20:22:43 +05:30
Rithik Kumar
a62f661d90 docs: revamp example docs (#1512)
Before: 
![Screenshot 2024-08-07
015834](https://github.com/user-attachments/assets/b817f846-78b3-4d6f-b4a0-dfa3f4d6be87)

After:
![Screenshot 2024-08-07
015852](https://github.com/user-attachments/assets/53370301-8c40-45f8-abe3-32f9d051597e)
![Screenshot 2024-08-07
015934](https://github.com/user-attachments/assets/63cdd038-32bb-4b3e-b9c4-1389d2754014)
![Screenshot 2024-08-07
015941](https://github.com/user-attachments/assets/70388680-9c2b-49ef-ba00-2bb015988214)
![Screenshot 2024-08-07
015949](https://github.com/user-attachments/assets/76335a33-bb6f-473c-896f-447320abcc25)

---------

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
2024-08-07 03:56:59 +05:30
Ayush Chaurasia
4769d8eb76 feat(python): multi-vector reranking support (#1481)
Currently targeting the following usage:
```
from lancedb.rerankers import CrossEncoderReranker

reranker = CrossEncoderReranker()

query = "hello"

res1 = table.search(query, vector_column_name="vector").limit(3)
res2 = table.search(query, vector_column_name="text_vector").limit(3)
res3 = table.search(query, vector_column_name="meta_vector").limit(3)

reranked = reranker.rerank_multivector(
    [res1, res2, res3],
    deduplicate=True,
    query=query,  # some reranker models need the query
)
```
- This implements the rerank_multivector function in the base reranker so
that all rerankers that implement rerank_vector automatically gain
multi-vector reranking support.
- A special case for the RRF reranker reuses its existing rerank_hybrid
function for multi-vector reranking.

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-08-07 01:45:46 +05:30
Ayush Chaurasia
d07d7a5980 chore: update polars version range (#1508) 2024-08-06 23:43:15 +05:30
Robby
8d2ff7b210 feat(python): add watsonx embeddings to registry (#1486)
Related issue: https://github.com/lancedb/lancedb/issues/1412

---------

Co-authored-by: Robby <h0rv@users.noreply.github.com>
2024-08-06 10:58:33 +05:30
Will Jones
61c05b51a0 fix(nodejs): address import issues in lancedb npm module (#1503)
Fixes [#1496](https://github.com/lancedb/lancedb/issues/1496)
2024-08-05 16:30:27 -07:00
Will Jones
7801ab9b8b ci: fix release by upgrading to Node 18 (#1494)
Building with Node 16 produced this error:

```
npm ERR! code ENOENT
npm ERR! syscall chmod
npm ERR! path /io/nodejs/node_modules/apache-arrow-15/bin/arrow2csv.cjs
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, chmod '/io/nodejs/node_modules/apache-arrow-15/bin/arrow2csv.cjs'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent 
```

[CI
Failure](https://github.com/lancedb/lancedb/actions/runs/10117131772/job/27981475770).
This looks like https://github.com/apache/arrow/issues/43341.

Upgrading to Node 18 makes this go away. Since Node 18 requires glibc
>= 2.28, we had to upgrade the manylinux version we are using. This is
fine since we already state a minimum Node version of 18.

This also upgrades the OpenSSL version we bundle and consolidates the
build files.
2024-08-05 14:08:42 -07:00
Rithik Kumar
d297da5a7e docs: update examples docs (#1488)
Testing Workflow with my first PR.
Before:
![Screenshot 2024-08-01
183326](https://github.com/user-attachments/assets/83d22101-8bbf-4b18-81e4-f740e605727a)

After:
![Screenshot 2024-08-01
183333](https://github.com/user-attachments/assets/a5e4cd2c-c524-4009-81d5-75b2b0361f83)
2024-08-01 18:54:45 +05:30
Ryan Green
6af69b57ad fix: return LanceMergeInsertBuilder in overridden merge_insert method on remote table (#1484) 2024-07-31 12:25:16 -02:30
Cory Grinstead
a062a92f6b docs: custom embedding function for ts (#1479) 2024-07-30 18:19:55 -05:00
Gagan Bhullar
277b753fd8 fix: run java stages in parallel (#1472)
This PR is for issue - https://github.com/lancedb/lancedb/issues/1331
2024-07-27 12:04:32 -07:00
Lance Release
f78b7863f6 Updating package-lock.json 2024-07-26 20:18:55 +00:00
Lance Release
e7d824af2b Bump version: 0.8.0-beta.0 → 0.8.0 2024-07-26 20:18:37 +00:00
Lance Release
02f1ec775f Bump version: 0.7.2 → 0.8.0-beta.0 2024-07-26 20:18:36 +00:00
Lance Release
7b6d3f943b Bump version: 0.11.0-beta.0 → 0.11.0 2024-07-26 20:18:31 +00:00
Lance Release
676876f4d5 Bump version: 0.10.2 → 0.11.0-beta.0 2024-07-26 20:18:30 +00:00
Cory Grinstead
fbfe2444a8 feat(nodejs): huggingface compatible transformers (#1462) 2024-07-26 12:54:15 -07:00
Will Jones
9555efacf9 feat: upgrade lance to 0.15.0 (#1477)
Changelog: https://github.com/lancedb/lance/releases/tag/v0.15.0

* Fixes #1466
* Closes #1475
* Fixes #1446
2024-07-26 09:13:49 -07:00
Ayush Chaurasia
513926960d docs: add rrf docs and update reranking notebook with Jina reranker results (#1474)
- RRF reranker
- Jina Reranker results

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-07-25 22:29:46 +05:30
inn-0
cc507ca766 docs: add missing whitespace before markdown table to fix rendering issue (#1471)
### Fix markdown table rendering issue

This PR adds a missing blank line before a markdown table in the
documentation. Without it, the table does not render properly in
mkdocs, although it does render properly in GitHub's markdown viewer.

#### Change Details:
- Added a single line of whitespace before the markdown table to ensure
proper rendering in mkdocs.

#### Note:
- I wasn't able to test this fix in the mkdocs environment, but it
should be safe as it only involves adding whitespace which won't break
anything.


---


Cohere supports the following input types:

| Input Type               | Description                                                                 |
|--------------------------|-----------------------------------------------------------------------------|
| "`search_document`"      | Used for embeddings stored in a vector database for search use-cases.       |
| "`search_query`"         | Used for embeddings of search queries run against a vector DB.              |
| "`semantic_similarity`"  | Specifies the given text will be used for Semantic Textual Similarity (STS).|
| "`classification`"       | Used for embeddings passed through a text classifier.                       |
| "`clustering`"           | Used for embeddings run through a clustering algorithm.                     |

Usage Example:
2024-07-24 22:26:28 +05:30
Cory Grinstead
492d0328fe chore: update readme to point to lancedb package (#1470) 2024-07-23 13:46:32 -07:00
Chang She
374c1e7aba fix: infer schema from huggingface dataset (#1444)
Closes #1383

When creating a table from a HuggingFace dataset, infer the arrow schema
directly
2024-07-23 13:12:34 -07:00
Gagan Bhullar
30047a5566 fix: remove source .ts code from published npm package (#1467)
This PR is for issue - https://github.com/lancedb/lancedb/issues/1358
2024-07-23 13:11:54 -07:00
Bert
85ccf9e22b feat!: correct timeout argument lancedb nodejs sdk (#1468)
Correct the timeout argument to `connect` in the @lancedb/lancedb node SDK.
`RemoteConnectionOptions` specified two fields, `connectionTimeout` and
`readTimeout`, probably to be consistent with the Python SDK, but only
`connectionTimeout` was being used, and it was passed to axios in such a
way that it covered the entire remote request (connect + read). This
change adds a single parameter, `timeout`, which makes the args to
`connect` consistent with the legacy vectordb SDK.

BREAKING CHANGE: This is a breaking change because users who previously
passed `connectionTimeout` are now expected to pass `timeout`.
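
A minimal sketch of the new call shape, assuming a remote `db://` URI, that `timeout` is expressed in milliseconds, and that the other option names shown are illustrative:

```typescript
import * as lancedb from "@lancedb/lancedb";

// `timeout` now bounds the whole remote request (connect + read),
// replacing the previous `connectionTimeout`/`readTimeout` pair.
const db = await lancedb.connect("db://my-database", {
  apiKey: process.env.LANCEDB_API_KEY ?? "", // illustrative option name
  region: "us-east-1",                       // illustrative option name
  timeout: 30_000,                           // assumed to be milliseconds
});
```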
2024-07-23 14:02:46 -03:00
Ayush Chaurasia
0255221086 feat: add reciprocal rank fusion reranker (#1456)
Implements https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf

Refactors the hybrid-search-only rerankers test to avoid repetition.
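
For context, the fusion rule from that paper combines per-ranker ranks rather than raw scores, with a smoothing constant k (commonly 60):

$$\mathrm{RRFscore}(d) = \sum_{r \in R} \frac{1}{k + \mathrm{rank}_r(d)}$$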
2024-07-23 21:37:17 +05:30
Lance Release
4ee229490c Updating package-lock.json 2024-07-23 13:49:13 +00:00
Lance Release
93e24f23af Bump version: 0.7.2-beta.0 → 0.7.2 2024-07-23 13:48:58 +00:00
Lance Release
8f141e1e33 Bump version: 0.7.1 → 0.7.2-beta.0 2024-07-23 13:48:58 +00:00
322 changed files with 29055 additions and 8859 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion] [tool.bumpversion]
current_version = "0.7.1" current_version = "0.14.0-beta.1"
parse = """(?x) parse = """(?x)
(?P<major>0|[1-9]\\d*)\\. (?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\. (?P<minor>0|[1-9]\\d*)\\.
@@ -24,34 +24,102 @@ commit = true
message = "Bump version: {current_version} → {new_version}" message = "Bump version: {current_version} → {new_version}"
commit_args = "" commit_args = ""
# Java maven files
pre_commit_hooks = [
"""
NEW_VERSION="${BVHOOK_NEW_MAJOR}.${BVHOOK_NEW_MINOR}.${BVHOOK_NEW_PATCH}"
if [ ! -z "$BVHOOK_NEW_PRE_L" ] && [ ! -z "$BVHOOK_NEW_PRE_N" ]; then
NEW_VERSION="${NEW_VERSION}-${BVHOOK_NEW_PRE_L}.${BVHOOK_NEW_PRE_N}"
fi
echo "Constructed new version: $NEW_VERSION"
cd java && mvn versions:set -DnewVersion=$NEW_VERSION && mvn versions:commit
# Check for any modified but unstaged pom.xml files
MODIFIED_POMS=$(git ls-files -m | grep pom.xml)
if [ ! -z "$MODIFIED_POMS" ]; then
echo "The following pom.xml files were modified but not staged. Adding them now:"
echo "$MODIFIED_POMS" | while read -r file; do
git add "$file"
echo "Added: $file"
done
fi
""",
]
[tool.bumpversion.parts.pre_l] [tool.bumpversion.parts.pre_l]
values = ["beta", "final"]
optional_value = "final" optional_value = "final"
values = ["beta", "final"]
[[tool.bumpversion.files]] [[tool.bumpversion.files]]
filename = "node/package.json" filename = "node/package.json"
search = "\"version\": \"{current_version}\","
replace = "\"version\": \"{new_version}\"," replace = "\"version\": \"{new_version}\","
search = "\"version\": \"{current_version}\","
[[tool.bumpversion.files]] [[tool.bumpversion.files]]
filename = "nodejs/package.json" filename = "nodejs/package.json"
search = "\"version\": \"{current_version}\","
replace = "\"version\": \"{new_version}\"," replace = "\"version\": \"{new_version}\","
search = "\"version\": \"{current_version}\","
# nodejs binary packages # nodejs binary packages
[[tool.bumpversion.files]] [[tool.bumpversion.files]]
glob = "nodejs/npm/*/package.json" glob = "nodejs/npm/*/package.json"
search = "\"version\": \"{current_version}\","
replace = "\"version\": \"{new_version}\"," replace = "\"version\": \"{new_version}\","
search = "\"version\": \"{current_version}\","
# vectodb node binary packages
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-darwin-arm64\": \"{new_version}\""
search = "\"@lancedb/vectordb-darwin-arm64\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-darwin-x64\": \"{new_version}\""
search = "\"@lancedb/vectordb-darwin-x64\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-linux-arm64-gnu\": \"{new_version}\""
search = "\"@lancedb/vectordb-linux-arm64-gnu\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-linux-x64-gnu\": \"{new_version}\""
search = "\"@lancedb/vectordb-linux-x64-gnu\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-linux-arm64-musl\": \"{new_version}\""
search = "\"@lancedb/vectordb-linux-arm64-musl\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-linux-x64-musl\": \"{new_version}\""
search = "\"@lancedb/vectordb-linux-x64-musl\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-win32-x64-msvc\": \"{new_version}\""
search = "\"@lancedb/vectordb-win32-x64-msvc\": \"{current_version}\""
[[tool.bumpversion.files]]
glob = "node/package.json"
replace = "\"@lancedb/vectordb-win32-arm64-msvc\": \"{new_version}\""
search = "\"@lancedb/vectordb-win32-arm64-msvc\": \"{current_version}\""
# Cargo files # Cargo files
# ------------ # ------------
[[tool.bumpversion.files]] [[tool.bumpversion.files]]
filename = "rust/ffi/node/Cargo.toml" filename = "rust/ffi/node/Cargo.toml"
search = "\nversion = \"{current_version}\""
replace = "\nversion = \"{new_version}\"" replace = "\nversion = \"{new_version}\""
search = "\nversion = \"{current_version}\""
[[tool.bumpversion.files]] [[tool.bumpversion.files]]
filename = "rust/lancedb/Cargo.toml" filename = "rust/lancedb/Cargo.toml"
search = "\nversion = \"{current_version}\""
replace = "\nversion = \"{new_version}\"" replace = "\nversion = \"{new_version}\""
search = "\nversion = \"{current_version}\""
[[tool.bumpversion.files]]
filename = "nodejs/Cargo.toml"
replace = "\nversion = \"{new_version}\""
search = "\nversion = \"{current_version}\""

View File

@@ -31,6 +31,9 @@ rustflags = [
[target.x86_64-unknown-linux-gnu] [target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "target-cpu=haswell", "-C", "target-feature=+avx2,+fma,+f16c"] rustflags = ["-C", "target-cpu=haswell", "-C", "target-feature=+avx2,+fma,+f16c"]
[target.x86_64-unknown-linux-musl]
rustflags = ["-C", "target-cpu=haswell", "-C", "target-feature=-crt-static,+avx2,+fma,+f16c"]
[target.aarch64-apple-darwin] [target.aarch64-apple-darwin]
rustflags = ["-C", "target-cpu=apple-m1", "-C", "target-feature=+neon,+fp16,+fhm,+dotprod"] rustflags = ["-C", "target-cpu=apple-m1", "-C", "target-feature=+neon,+fp16,+fhm,+dotprod"]
@@ -38,3 +41,7 @@ rustflags = ["-C", "target-cpu=apple-m1", "-C", "target-feature=+neon,+fp16,+fhm
# not found errors on systems that are missing it. # not found errors on systems that are missing it.
[target.x86_64-pc-windows-msvc] [target.x86_64-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"] rustflags = ["-Ctarget-feature=+crt-static"]
# Experimental target for Arm64 Windows
[target.aarch64-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"]

View File

@@ -41,8 +41,8 @@ jobs:
- name: Build Python - name: Build Python
working-directory: python working-directory: python
run: | run: |
python -m pip install -e . python -m pip install --extra-index-url https://pypi.fury.io/lancedb/ -e .
python -m pip install -r ../docs/requirements.txt python -m pip install --extra-index-url https://pypi.fury.io/lancedb/ -r ../docs/requirements.txt
- name: Set up node - name: Set up node
uses: actions/setup-node@v3 uses: actions/setup-node@v3
with: with:

View File

@@ -24,15 +24,19 @@ env:
jobs: jobs:
test-python: test-python:
name: Test doc python code name: Test doc python code
runs-on: "warp-ubuntu-latest-x64-4x" runs-on: ubuntu-24.04
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v4 uses: actions/checkout@v4
- name: Print CPU capabilities - name: Print CPU capabilities
run: cat /proc/cpuinfo run: cat /proc/cpuinfo
- name: Install protobuf
run: |
sudo apt update
sudo apt install -y protobuf-compiler
- name: Install dependecies needed for ubuntu - name: Install dependecies needed for ubuntu
run: | run: |
sudo apt install -y protobuf-compiler libssl-dev sudo apt install -y libssl-dev
rustup update && rustup default rustup update && rustup default
- name: Set up Python - name: Set up Python
uses: actions/setup-python@v5 uses: actions/setup-python@v5
@@ -45,7 +49,7 @@ jobs:
- name: Build Python - name: Build Python
working-directory: docs/test working-directory: docs/test
run: run:
python -m pip install -r requirements.txt python -m pip install --extra-index-url https://pypi.fury.io/lancedb/ -r requirements.txt
- name: Create test files - name: Create test files
run: | run: |
cd docs/test cd docs/test
@@ -56,7 +60,7 @@ jobs:
for d in *; do cd "$d"; echo "$d".py; python "$d".py; cd ..; done for d in *; do cd "$d"; echo "$d".py; python "$d".py; cd ..; done
test-node: test-node:
name: Test doc nodejs code name: Test doc nodejs code
runs-on: "warp-ubuntu-latest-x64-4x" runs-on: ubuntu-24.04
timeout-minutes: 60 timeout-minutes: 60
strategy: strategy:
fail-fast: false fail-fast: false
@@ -72,9 +76,13 @@ jobs:
uses: actions/setup-node@v4 uses: actions/setup-node@v4
with: with:
node-version: 20 node-version: 20
- name: Install protobuf
run: |
sudo apt update
sudo apt install -y protobuf-compiler
- name: Install dependecies needed for ubuntu - name: Install dependecies needed for ubuntu
run: | run: |
sudo apt install -y protobuf-compiler libssl-dev sudo apt install -y libssl-dev
rustup update && rustup default rustup update && rustup default
- name: Rust cache - name: Rust cache
uses: swatinem/rust-cache@v2 uses: swatinem/rust-cache@v2

.github/workflows/java-publish.yml (new file)
View File

@@ -0,0 +1,114 @@
name: Build and publish Java packages
on:
release:
types: [released]
pull_request:
paths:
- .github/workflows/java-publish.yml
jobs:
macos-arm64:
name: Build on MacOS Arm64
runs-on: macos-14
timeout-minutes: 45
defaults:
run:
working-directory: ./java/core/lancedb-jni
steps:
- name: Checkout repository
uses: actions/checkout@v4
- uses: Swatinem/rust-cache@v2
- name: Install dependencies
run: |
brew install protobuf
- name: Build release
run: |
cargo build --release
- uses: actions/upload-artifact@v4
with:
name: liblancedb_jni_darwin_aarch64.zip
path: target/release/liblancedb_jni.dylib
retention-days: 1
if-no-files-found: error
linux-arm64:
name: Build on Linux Arm64
runs-on: warp-ubuntu-2204-arm64-8x
timeout-minutes: 45
defaults:
run:
working-directory: ./java/core/lancedb-jni
steps:
- name: Checkout repository
uses: actions/checkout@v4
- uses: Swatinem/rust-cache@v2
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: "1.79.0"
cache-workspaces: "./java/core/lancedb-jni"
# Disable full debug symbol generation to speed up CI build and keep memory down
# "1" means line tables only, which is useful for panic tracebacks.
rustflags: "-C debuginfo=1"
- name: Install dependencies
run: |
sudo apt -y -qq update
sudo apt install -y protobuf-compiler libssl-dev pkg-config
- name: Build release
run: |
cargo build --release
- uses: actions/upload-artifact@v4
with:
name: liblancedb_jni_linux_aarch64.zip
path: target/release/liblancedb_jni.so
retention-days: 1
if-no-files-found: error
linux-x86:
runs-on: warp-ubuntu-2204-x64-8x
timeout-minutes: 30
needs: [macos-arm64, linux-arm64]
defaults:
run:
working-directory: ./java
steps:
- name: Checkout repository
uses: actions/checkout@v4
- uses: Swatinem/rust-cache@v2
- name: Set up Java 8
uses: actions/setup-java@v4
with:
distribution: temurin
java-version: 8
cache: "maven"
server-id: ossrh
server-username: SONATYPE_USER
server-password: SONATYPE_TOKEN
gpg-private-key: ${{ secrets.GPG_PRIVATE_KEY }}
gpg-passphrase: ${{ secrets.GPG_PASSPHRASE }}
- name: Install dependencies
run: |
sudo apt -y -qq update
sudo apt install -y protobuf-compiler libssl-dev pkg-config
- name: Download artifact
uses: actions/download-artifact@v4
- name: Copy native libs
run: |
mkdir -p ./core/target/classes/nativelib/darwin-aarch64 ./core/target/classes/nativelib/linux-aarch64
cp ../liblancedb_jni_darwin_aarch64.zip/liblancedb_jni.dylib ./core/target/classes/nativelib/darwin-aarch64/liblancedb_jni.dylib
cp ../liblancedb_jni_linux_aarch64.zip/liblancedb_jni.so ./core/target/classes/nativelib/linux-aarch64/liblancedb_jni.so
- name: Dry run
if: github.event_name == 'pull_request'
run: |
mvn --batch-mode -DskipTests package
- name: Set github
run: |
git config --global user.email "LanceDB Github Runner"
git config --global user.name "dev+gha@lancedb.com"
- name: Publish with Java 8
if: github.event_name == 'release'
run: |
echo "use-agent" >> ~/.gnupg/gpg.conf
echo "pinentry-mode loopback" >> ~/.gnupg/gpg.conf
export GPG_TTY=$(tty)
mvn --batch-mode -DskipTests -DpushChanges=false -Dgpg.passphrase=${{ secrets.GPG_PASSPHRASE }} deploy -P deploy-to-ossrh
env:
SONATYPE_USER: ${{ secrets.SONATYPE_USER }}
SONATYPE_TOKEN: ${{ secrets.SONATYPE_TOKEN }}

View File

@@ -3,6 +3,8 @@ on:
push: push:
branches: branches:
- main - main
paths:
- java/**
pull_request: pull_request:
paths: paths:
- java/** - java/**
@@ -21,9 +23,42 @@ env:
CARGO_INCREMENTAL: "0" CARGO_INCREMENTAL: "0"
CARGO_BUILD_JOBS: "1" CARGO_BUILD_JOBS: "1"
jobs: jobs:
linux-build: linux-build-java-11:
runs-on: ubuntu-22.04 runs-on: ubuntu-22.04
name: ubuntu-22.04 + Java 11 & 17 name: ubuntu-22.04 + Java 11
defaults:
run:
working-directory: ./java
steps:
- name: Checkout repository
uses: actions/checkout@v4
- uses: Swatinem/rust-cache@v2
with:
workspaces: java/core/lancedb-jni
- name: Run cargo fmt
run: cargo fmt --check
working-directory: ./java/core/lancedb-jni
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y protobuf-compiler libssl-dev
- name: Install Java 11
uses: actions/setup-java@v4
with:
distribution: temurin
java-version: 11
cache: "maven"
- name: Java Style Check
run: mvn checkstyle:check
# Disable because of issues in lancedb rust core code
# - name: Rust Clippy
# working-directory: java/core/lancedb-jni
# run: cargo clippy --all-targets -- -D warnings
- name: Running tests with Java 11
run: mvn clean test
linux-build-java-17:
runs-on: ubuntu-22.04
name: ubuntu-22.04 + Java 17
defaults: defaults:
run: run:
working-directory: ./java working-directory: ./java
@@ -47,20 +82,12 @@ jobs:
java-version: 17 java-version: 17
cache: "maven" cache: "maven"
- run: echo "JAVA_17=$JAVA_HOME" >> $GITHUB_ENV - run: echo "JAVA_17=$JAVA_HOME" >> $GITHUB_ENV
- name: Install Java 11
uses: actions/setup-java@v4
with:
distribution: temurin
java-version: 11
cache: "maven"
- name: Java Style Check - name: Java Style Check
run: mvn checkstyle:check run: mvn checkstyle:check
# Disable because of issues in lancedb rust core code # Disable because of issues in lancedb rust core code
# - name: Rust Clippy # - name: Rust Clippy
# working-directory: java/core/lancedb-jni # working-directory: java/core/lancedb-jni
# run: cargo clippy --all-targets -- -D warnings # run: cargo clippy --all-targets -- -D warnings
- name: Running tests with Java 11
run: mvn clean test
- name: Running tests with Java 17 - name: Running tests with Java 17
run: | run: |
export JAVA_TOOL_OPTIONS="$JAVA_TOOL_OPTIONS \ export JAVA_TOOL_OPTIONS="$JAVA_TOOL_OPTIONS \
@@ -83,3 +110,4 @@ jobs:
-Djdk.reflect.useDirectMethodHandle=false \ -Djdk.reflect.useDirectMethodHandle=false \
-Dio.netty.tryReflectionSetAccessible=true" -Dio.netty.tryReflectionSetAccessible=true"
JAVA_HOME=$JAVA_17 mvn clean test JAVA_HOME=$JAVA_17 mvn clean test

View File

@@ -30,7 +30,7 @@ on:
default: true default: true
type: boolean type: boolean
other: other:
description: 'Make a Node/Rust release' description: 'Make a Node/Rust/Java release'
required: true required: true
default: true default: true
type: boolean type: boolean

View File

@@ -53,6 +53,9 @@ jobs:
cargo clippy --all --all-features -- -D warnings cargo clippy --all --all-features -- -D warnings
npm ci npm ci
npm run lint-ci npm run lint-ci
- name: Lint examples
working-directory: nodejs/examples
run: npm ci && npm run lint-ci
linux: linux:
name: Linux (NodeJS ${{ matrix.node-version }}) name: Linux (NodeJS ${{ matrix.node-version }})
timeout-minutes: 30 timeout-minutes: 30
@@ -91,6 +94,18 @@ jobs:
env: env:
S3_TEST: "1" S3_TEST: "1"
run: npm run test run: npm run test
- name: Setup examples
working-directory: nodejs/examples
run: npm ci
- name: Test examples
working-directory: ./
env:
OPENAI_API_KEY: test
OPENAI_BASE_URL: http://0.0.0.0:8000
run: |
python ci/mock_openai.py &
cd nodejs/examples
npm test
macos: macos:
timeout-minutes: 30 timeout-minutes: 30
runs-on: "macos-14" runs-on: "macos-14"

View File

@@ -101,7 +101,7 @@ jobs:
path: | path: |
nodejs/dist/*.node nodejs/dist/*.node
node-linux: node-linux-gnu:
name: vectordb (${{ matrix.config.arch}}-unknown-linux-gnu) name: vectordb (${{ matrix.config.arch}}-unknown-linux-gnu)
runs-on: ${{ matrix.config.runner }} runs-on: ${{ matrix.config.runner }}
# Only runs on tags that matches the make-release action # Only runs on tags that matches the make-release action
@@ -133,15 +133,70 @@ jobs:
free -h free -h
- name: Build Linux Artifacts - name: Build Linux Artifacts
run: | run: |
bash ci/build_linux_artifacts.sh ${{ matrix.config.arch }} bash ci/build_linux_artifacts.sh ${{ matrix.config.arch }} ${{ matrix.config.arch }}-unknown-linux-gnu
- name: Upload Linux Artifacts - name: Upload Linux Artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v4
with: with:
name: node-native-linux-${{ matrix.config.arch }} name: node-native-linux-${{ matrix.config.arch }}-gnu
path: | path: |
node/dist/lancedb-vectordb-linux*.tgz node/dist/lancedb-vectordb-linux*.tgz
nodejs-linux: node-linux-musl:
name: vectordb (${{ matrix.config.arch}}-unknown-linux-musl)
runs-on: ${{ matrix.config.runner }}
container: alpine:edge
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
strategy:
fail-fast: false
matrix:
config:
- arch: x86_64
runner: ubuntu-latest
- arch: aarch64
# For successful fat LTO builds, we need a large runner to avoid OOM errors.
runner: buildjet-16vcpu-ubuntu-2204-arm
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install common dependencies
run: |
apk add protobuf-dev curl clang mold grep npm bash
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=-crt-static,+avx2,+fma,+f16c -Clinker=clang -Clink-arg=-fuse-ld=mold'" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
source "$HOME/.cargo/env"
rustup target add aarch64-unknown-linux-musl --toolchain 1.80.0
crt=$(realpath $(dirname $(rustup which rustc))/../lib/rustlib/aarch64-unknown-linux-musl/lib/self-contained)
sysroot_lib=/usr/aarch64-unknown-linux-musl/usr/lib
apk_url=https://dl-cdn.alpinelinux.org/alpine/latest-stable/main/aarch64/
curl -sSf $apk_url > apk_list
for pkg in gcc libgcc musl; do curl -sSf $apk_url$(cat apk_list | grep -oP '(?<=")'$pkg'-\d.*?(?=")') | tar zxf -; done
mkdir -p $sysroot_lib
echo 'GROUP ( libgcc_s.so.1 -lgcc )' > $sysroot_lib/libgcc_s.so
cp usr/lib/libgcc_s.so.1 $sysroot_lib
cp usr/lib/gcc/aarch64-alpine-linux-musl/*/libgcc.a $sysroot_lib
cp lib/ld-musl-aarch64.so.1 $sysroot_lib/libc.so
echo '!<arch>' > $sysroot_lib/libdl.a
(cd $crt && cp crti.o crtbeginS.o crtendS.o crtn.o -t $sysroot_lib)
echo "export CARGO_BUILD_TARGET=aarch64-unknown-linux-musl" >> saved_env
echo "export RUSTFLAGS='-Ctarget-cpu=apple-m1 -Ctarget-feature=-crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=clang -Clink-arg=-fuse-ld=mold -Clink-arg=--target=aarch64-unknown-linux-musl -Clink-arg=--sysroot=/usr/aarch64-unknown-linux-musl -Clink-arg=-lc'" >> saved_env
- name: Build Linux Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_vectordb.sh ${{ matrix.config.arch }} ${{ matrix.config.arch }}-unknown-linux-musl
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: node-native-linux-${{ matrix.config.arch }}-musl
path: |
node/dist/lancedb-vectordb-linux*.tgz
nodejs-linux-gnu:
name: lancedb (${{ matrix.config.arch}}-unknown-linux-gnu name: lancedb (${{ matrix.config.arch}}-unknown-linux-gnu
runs-on: ${{ matrix.config.runner }} runs-on: ${{ matrix.config.runner }}
# Only runs on tags that matches the make-release action # Only runs on tags that matches the make-release action
@@ -178,7 +233,7 @@ jobs:
- name: Upload Linux Artifacts - name: Upload Linux Artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v4
with: with:
name: nodejs-native-linux-${{ matrix.config.arch }} name: nodejs-native-linux-${{ matrix.config.arch }}-gnu
path: | path: |
nodejs/dist/*.node nodejs/dist/*.node
# The generic files are the same in all distros so we just pick # The generic files are the same in all distros so we just pick
@@ -192,6 +247,65 @@ jobs:
nodejs/dist/* nodejs/dist/*
!nodejs/dist/*.node !nodejs/dist/*.node
nodejs-linux-musl:
name: lancedb (${{ matrix.config.arch}}-unknown-linux-musl
runs-on: ${{ matrix.config.runner }}
container: alpine:edge
# Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v')
strategy:
fail-fast: false
matrix:
config:
- arch: x86_64
runner: ubuntu-latest
- arch: aarch64
# For successful fat LTO builds, we need a large runner to avoid OOM errors.
runner: buildjet-16vcpu-ubuntu-2204-arm
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install common dependencies
run: |
apk add protobuf-dev curl clang mold grep npm bash openssl-dev openssl-libs-static
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=-crt-static,+avx2,+fma,+f16c -Clinker=clang -Clink-arg=-fuse-ld=mold'" >> saved_env
echo "export X86_64_UNKNOWN_LINUX_MUSL_OPENSSL_INCLUDE_DIR=/usr/include" >> saved_env
echo "export X86_64_UNKNOWN_LINUX_MUSL_OPENSSL_LIB_DIR=/usr/lib" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
source "$HOME/.cargo/env"
rustup target add aarch64-unknown-linux-musl --toolchain 1.80.0
crt=$(realpath $(dirname $(rustup which rustc))/../lib/rustlib/aarch64-unknown-linux-musl/lib/self-contained)
sysroot_lib=/usr/aarch64-unknown-linux-musl/usr/lib
apk_url=https://dl-cdn.alpinelinux.org/alpine/latest-stable/main/aarch64/
curl -sSf $apk_url > apk_list
for pkg in gcc libgcc musl openssl-dev openssl-libs-static; do curl -sSf $apk_url$(cat apk_list | grep -oP '(?<=")'$pkg'-\d.*?(?=")') | tar zxf -; done
mkdir -p $sysroot_lib
echo 'GROUP ( libgcc_s.so.1 -lgcc )' > $sysroot_lib/libgcc_s.so
cp usr/lib/libgcc_s.so.1 $sysroot_lib
cp usr/lib/gcc/aarch64-alpine-linux-musl/*/libgcc.a $sysroot_lib
cp lib/ld-musl-aarch64.so.1 $sysroot_lib/libc.so
echo '!<arch>' > $sysroot_lib/libdl.a
(cd $crt && cp crti.o crtbeginS.o crtendS.o crtn.o -t $sysroot_lib)
echo "export CARGO_BUILD_TARGET=aarch64-unknown-linux-musl" >> saved_env
echo "export RUSTFLAGS='-Ctarget-feature=-crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=clang -Clink-arg=-fuse-ld=mold -Clink-arg=--target=aarch64-unknown-linux-musl -Clink-arg=--sysroot=/usr/aarch64-unknown-linux-musl -Clink-arg=-lc'" >> saved_env
echo "export AARCH64_UNKNOWN_LINUX_MUSL_OPENSSL_INCLUDE_DIR=$(realpath usr/include)" >> saved_env
echo "export AARCH64_UNKNOWN_LINUX_MUSL_OPENSSL_LIB_DIR=$(realpath usr/lib)" >> saved_env
- name: Build Linux Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_lancedb.sh ${{ matrix.config.arch }}
- name: Upload Linux Artifacts
uses: actions/upload-artifact@v4
with:
name: nodejs-native-linux-${{ matrix.config.arch }}-musl
path: |
nodejs/dist/*.node
node-windows: node-windows:
name: vectordb ${{ matrix.target }} name: vectordb ${{ matrix.target }}
runs-on: windows-2022 runs-on: windows-2022
@@ -226,6 +340,154 @@ jobs:
path: | path: |
node/dist/lancedb-vectordb-win32*.tgz node/dist/lancedb-vectordb-win32*.tgz
node-windows-arm64:
name: vectordb ${{ matrix.config.arch }}-pc-windows-msvc
runs-on: ubuntu-latest
container: alpine:edge
strategy:
fail-fast: false
matrix:
config:
# - arch: x86_64
- arch: aarch64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install dependencies
run: |
apk add protobuf-dev curl clang lld llvm19 grep npm bash msitools sed
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export AR=llvm-ar" >> saved_env
source "$HOME/.cargo/env"
rustup target add ${{ matrix.config.arch }}-pc-windows-msvc --toolchain 1.80.0
(mkdir -p sysroot && cd sysroot && sh ../ci/sysroot-${{ matrix.config.arch }}-pc-windows-msvc.sh)
echo "export C_INCLUDE_PATH=/usr/${{ matrix.config.arch }}-pc-windows-msvc/usr/include" >> saved_env
echo "export CARGO_BUILD_TARGET=${{ matrix.config.arch }}-pc-windows-msvc" >> saved_env
- name: Configure x86_64 build
if: ${{ matrix.config.arch == 'x86_64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=+crt-static,+avx2,+fma,+f16c -Clinker=lld -Clink-arg=/LIBPATH:/usr/x86_64-pc-windows-msvc/usr/lib'" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-feature=+crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=lld -Clink-arg=/LIBPATH:/usr/aarch64-pc-windows-msvc/usr/lib -Clink-arg=arm64rt.lib'" >> saved_env
- name: Build Windows Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_vectordb.sh ${{ matrix.config.arch }} ${{ matrix.config.arch }}-pc-windows-msvc
- name: Upload Windows Artifacts
uses: actions/upload-artifact@v4
with:
name: node-native-windows-${{ matrix.config.arch }}
path: |
node/dist/lancedb-vectordb-win32*.tgz
# TODO: re-enable once working https://github.com/lancedb/lancedb/pull/1831
# node-windows-arm64:
# name: vectordb win32-arm64-msvc
# runs-on: windows-4x-arm
# if: startsWith(github.ref, 'refs/tags/v')
# steps:
# - uses: actions/checkout@v4
# - name: Install Git
# run: |
# Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
# Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
# shell: powershell
# - name: Add Git to PATH
# run: |
# Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
# $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# shell: powershell
# - name: Configure Git symlinks
# run: git config --global core.symlinks true
# - uses: actions/checkout@v4
# - uses: actions/setup-python@v5
# with:
# python-version: "3.13"
# - name: Install Visual Studio Build Tools
# run: |
# Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
# Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
# "--installPath", "C:\BuildTools", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
# "--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATL", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
# "--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
# shell: powershell
# - name: Add Visual Studio Build Tools to PATH
# run: |
# $vsPath = "C:\BuildTools\VC\Tools\MSVC"
# $latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
# # Add MSVC runtime libraries to LIB
# $env:LIB = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\lib\arm64;" +
# "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;" +
# "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
# Add-Content $env:GITHUB_ENV "LIB=$env:LIB"
# # Add INCLUDE paths
# $env:INCLUDE = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\include;" +
# "C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt;" +
# "C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\um;" +
# "C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\shared"
# Add-Content $env:GITHUB_ENV "INCLUDE=$env:INCLUDE"
# shell: powershell
# - name: Install Rust
# run: |
# Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
# .\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
# shell: powershell
# - name: Add Rust to PATH
# run: |
# Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
# shell: powershell
# - uses: Swatinem/rust-cache@v2
# with:
# workspaces: rust
# - name: Install 7-Zip ARM
# run: |
# New-Item -Path 'C:\7zip' -ItemType Directory
# Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
# Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
# shell: powershell
# - name: Add 7-Zip to PATH
# run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
# shell: powershell
# - name: Install Protoc v21.12
# working-directory: C:\
# run: |
# if (Test-Path 'C:\protoc') {
# Write-Host "Protoc directory exists, skipping installation"
# return
# }
# New-Item -Path 'C:\protoc' -ItemType Directory
# Set-Location C:\protoc
# Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
# & 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
# shell: powershell
# - name: Add Protoc to PATH
# run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
# shell: powershell
# - name: Build Windows native node modules
# run: .\ci\build_windows_artifacts.ps1 aarch64-pc-windows-msvc
# - name: Upload Windows ARM64 Artifacts
# uses: actions/upload-artifact@v4
# with:
# name: node-native-windows-arm64
# path: |
# node/dist/*.node
nodejs-windows: nodejs-windows:
name: lancedb ${{ matrix.target }} name: lancedb ${{ matrix.target }}
runs-on: windows-2022 runs-on: windows-2022
@@ -260,9 +522,149 @@ jobs:
path: | path: |
nodejs/dist/*.node nodejs/dist/*.node
nodejs-windows-arm64:
name: lancedb ${{ matrix.config.arch }}-pc-windows-msvc
runs-on: ubuntu-latest
container: alpine:edge
strategy:
fail-fast: false
matrix:
config:
# - arch: x86_64
- arch: aarch64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install dependencies
run: |
apk add protobuf-dev curl clang lld llvm19 grep npm bash msitools sed
curl --proto '=https' --tlsv1.3 -sSf https://raw.githubusercontent.com/rust-lang/rustup/refs/heads/master/rustup-init.sh | sh -s -- -y --default-toolchain 1.80.0
echo "source $HOME/.cargo/env" >> saved_env
echo "export CC=clang" >> saved_env
echo "export AR=llvm-ar" >> saved_env
source "$HOME/.cargo/env"
rustup target add ${{ matrix.config.arch }}-pc-windows-msvc --toolchain 1.80.0
(mkdir -p sysroot && cd sysroot && sh ../ci/sysroot-${{ matrix.config.arch }}-pc-windows-msvc.sh)
echo "export C_INCLUDE_PATH=/usr/${{ matrix.config.arch }}-pc-windows-msvc/usr/include" >> saved_env
echo "export CARGO_BUILD_TARGET=${{ matrix.config.arch }}-pc-windows-msvc" >> saved_env
printf '#!/bin/sh\ncargo "$@"' > $HOME/.cargo/bin/cargo-xwin
chmod u+x $HOME/.cargo/bin/cargo-xwin
- name: Configure x86_64 build
if: ${{ matrix.config.arch == 'x86_64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-cpu=haswell -Ctarget-feature=+crt-static,+avx2,+fma,+f16c -Clinker=lld -Clink-arg=/LIBPATH:/usr/x86_64-pc-windows-msvc/usr/lib'" >> saved_env
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
echo "export RUSTFLAGS='-Ctarget-feature=+crt-static,+neon,+fp16,+fhm,+dotprod -Clinker=lld -Clink-arg=/LIBPATH:/usr/aarch64-pc-windows-msvc/usr/lib -Clink-arg=arm64rt.lib'" >> saved_env
- name: Build Windows Artifacts
run: |
source ./saved_env
bash ci/manylinux_node/build_lancedb.sh ${{ matrix.config.arch }}
- name: Upload Windows Artifacts
uses: actions/upload-artifact@v4
with:
name: nodejs-native-windows-${{ matrix.config.arch }}
path: |
nodejs/dist/*.node
# TODO: re-enable once working https://github.com/lancedb/lancedb/pull/1831
# nodejs-windows-arm64:
# name: lancedb win32-arm64-msvc
# runs-on: windows-4x-arm
# if: startsWith(github.ref, 'refs/tags/v')
# steps:
# - uses: actions/checkout@v4
# - name: Install Git
# run: |
# Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
# Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
# shell: powershell
# - name: Add Git to PATH
# run: |
# Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
# $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# shell: powershell
# - name: Configure Git symlinks
# run: git config --global core.symlinks true
# - uses: actions/checkout@v4
# - uses: actions/setup-python@v5
# with:
# python-version: "3.13"
# - name: Install Visual Studio Build Tools
# run: |
# Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
# Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
# "--installPath", "C:\BuildTools", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
# "--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
# "--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATL", `
# "--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
# "--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
# shell: powershell
# - name: Add Visual Studio Build Tools to PATH
# run: |
# $vsPath = "C:\BuildTools\VC\Tools\MSVC"
# $latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
# Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
# Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
# $env:LIB = ""
# Add-Content $env:GITHUB_ENV "LIB=C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
# shell: powershell
# - name: Install Rust
# run: |
# Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
# .\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
# shell: powershell
# - name: Add Rust to PATH
# run: |
# Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
# shell: powershell
# - uses: Swatinem/rust-cache@v2
# with:
# workspaces: rust
# - name: Install 7-Zip ARM
# run: |
# New-Item -Path 'C:\7zip' -ItemType Directory
# Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
# Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
# shell: powershell
# - name: Add 7-Zip to PATH
# run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
# shell: powershell
# - name: Install Protoc v21.12
# working-directory: C:\
# run: |
# if (Test-Path 'C:\protoc') {
# Write-Host "Protoc directory exists, skipping installation"
# return
# }
# New-Item -Path 'C:\protoc' -ItemType Directory
# Set-Location C:\protoc
# Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
# & 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
# shell: powershell
# - name: Add Protoc to PATH
# run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
# shell: powershell
# - name: Build Windows native node modules
# run: .\ci\build_windows_artifacts_nodejs.ps1 aarch64-pc-windows-msvc
# - name: Upload Windows ARM64 Artifacts
# uses: actions/upload-artifact@v4
# with:
# name: nodejs-native-windows-arm64
# path: |
# nodejs/dist/*.node
release: release:
name: vectordb NPM Publish name: vectordb NPM Publish
needs: [node, node-macos, node-linux, node-windows] needs: [node, node-macos, node-linux-gnu, node-linux-musl, node-windows, node-windows-arm64]
runs-on: ubuntu-latest runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action # Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v') if: startsWith(github.ref, 'refs/tags/v')
@@ -302,7 +704,7 @@ jobs:
release-nodejs: release-nodejs:
name: lancedb NPM Publish name: lancedb NPM Publish
needs: [nodejs-macos, nodejs-linux, nodejs-windows] needs: [nodejs-macos, nodejs-linux-gnu, nodejs-linux-musl, nodejs-windows, nodejs-windows-arm64]
runs-on: ubuntu-latest runs-on: ubuntu-latest
# Only runs on tags that matches the make-release action # Only runs on tags that matches the make-release action
if: startsWith(github.ref, 'refs/tags/v') if: startsWith(github.ref, 'refs/tags/v')

View File

@@ -83,7 +83,7 @@ jobs:
- name: Set up Python - name: Set up Python
uses: actions/setup-python@v4 uses: actions/setup-python@v4
with: with:
python-version: 3.8 python-version: 3.12
- uses: ./.github/workflows/build_windows_wheel - uses: ./.github/workflows/build_windows_wheel
with: with:
python-minor-version: 8 python-minor-version: 8

View File

@@ -138,7 +138,7 @@ jobs:
run: rm -rf target/wheels run: rm -rf target/wheels
windows: windows:
name: "Windows: ${{ matrix.config.name }}" name: "Windows: ${{ matrix.config.name }}"
timeout-minutes: 30 timeout-minutes: 60
strategy: strategy:
matrix: matrix:
config: config:

View File

@@ -26,15 +26,14 @@ env:
jobs: jobs:
lint: lint:
timeout-minutes: 30 timeout-minutes: 30
runs-on: ubuntu-22.04 runs-on: ubuntu-24.04
defaults: defaults:
run: run:
shell: bash shell: bash
working-directory: rust
env: env:
# Need up-to-date compilers for kernels # Need up-to-date compilers for kernels
CC: gcc-12 CC: clang-18
CXX: g++-12 CXX: clang++-18
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
with: with:
@@ -50,21 +49,22 @@ jobs:
- name: Run format - name: Run format
run: cargo fmt --all -- --check run: cargo fmt --all -- --check
- name: Run clippy - name: Run clippy
run: cargo clippy --all --all-features -- -D warnings run: cargo clippy --workspace --tests --all-features -- -D warnings
linux: linux:
timeout-minutes: 30 timeout-minutes: 30
# To build all features, we need more disk space than is available # To build all features, we need more disk space than is available
# on the GitHub-provided runner. This is mostly due to the the # on the free OSS github runner. This is mostly due to the the
# sentence-transformers feature. # sentence-transformers feature.
runs-on: warp-ubuntu-latest-x64-4x runs-on: ubuntu-2404-4x-x64
defaults: defaults:
run: run:
shell: bash shell: bash
working-directory: rust working-directory: rust
env: env:
# Need up-to-date compilers for kernels # Need up-to-date compilers for kernels
CC: gcc-12 CC: clang-18
CXX: g++-12 CXX: clang++-18
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
with: with:
@@ -77,6 +77,12 @@ jobs:
run: | run: |
sudo apt update sudo apt update
sudo apt install -y protobuf-compiler libssl-dev sudo apt install -y protobuf-compiler libssl-dev
- name: Make Swap
run: |
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
- name: Start S3 integration test environment - name: Start S3 integration test environment
working-directory: . working-directory: .
run: docker compose up --detach --wait run: docker compose up --detach --wait
@@ -86,11 +92,12 @@ jobs:
run: cargo test --all-features run: cargo test --all-features
- name: Run examples - name: Run examples
run: cargo run --example simple run: cargo run --example simple
macos: macos:
timeout-minutes: 30 timeout-minutes: 30
strategy: strategy:
matrix: matrix:
mac-runner: [ "macos-13", "macos-14" ] mac-runner: ["macos-13", "macos-14"]
runs-on: "${{ matrix.mac-runner }}" runs-on: "${{ matrix.mac-runner }}"
defaults: defaults:
run: run:
@@ -113,6 +120,7 @@ jobs:
- name: Run tests - name: Run tests
# Run with everything except the integration tests. # Run with everything except the integration tests.
run: cargo test --features remote,fp16kernels run: cargo test --features remote,fp16kernels
windows: windows:
runs-on: windows-2022 runs-on: windows-2022
steps: steps:
@@ -134,3 +142,99 @@ jobs:
$env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT $env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT
cargo build cargo build
cargo test cargo test
windows-arm64:
runs-on: windows-4x-arm
steps:
- name: Install Git
run: |
Invoke-WebRequest -Uri "https://github.com/git-for-windows/git/releases/download/v2.44.0.windows.1/Git-2.44.0-64-bit.exe" -OutFile "git-installer.exe"
Start-Process -FilePath "git-installer.exe" -ArgumentList "/VERYSILENT", "/NORESTART" -Wait
shell: powershell
- name: Add Git to PATH
run: |
Add-Content $env:GITHUB_PATH "C:\Program Files\Git\bin"
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
shell: powershell
- name: Configure Git symlinks
run: git config --global core.symlinks true
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.13"
- name: Install Visual Studio Build Tools
run: |
Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vs_buildtools.exe" -OutFile "vs_buildtools.exe"
Start-Process -FilePath "vs_buildtools.exe" -ArgumentList "--quiet", "--wait", "--norestart", "--nocache", `
"--installPath", "C:\BuildTools", `
"--add", "Microsoft.VisualStudio.Component.VC.Tools.ARM64", `
"--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
"--add", "Microsoft.VisualStudio.Component.Windows11SDK.22621", `
"--add", "Microsoft.VisualStudio.Component.VC.ATL", `
"--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
"--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang" -Wait
shell: powershell
- name: Add Visual Studio Build Tools to PATH
run: |
$vsPath = "C:\BuildTools\VC\Tools\MSVC"
$latestVersion = (Get-ChildItem $vsPath | Sort-Object {[version]$_.Name} -Descending)[0].Name
Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\arm64"
Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\MSVC\$latestVersion\bin\Hostx64\x64"
Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\arm64"
Add-Content $env:GITHUB_PATH "C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64"
Add-Content $env:GITHUB_PATH "C:\BuildTools\VC\Tools\Llvm\x64\bin"
# Add MSVC runtime libraries to LIB
$env:LIB = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\lib\arm64;" +
"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64;" +
"C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\ucrt\arm64"
Add-Content $env:GITHUB_ENV "LIB=$env:LIB"
# Add INCLUDE paths
$env:INCLUDE = "C:\BuildTools\VC\Tools\MSVC\$latestVersion\include;" +
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt;" +
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\um;" +
"C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\shared"
Add-Content $env:GITHUB_ENV "INCLUDE=$env:INCLUDE"
shell: powershell
- name: Install Rust
run: |
Invoke-WebRequest https://win.rustup.rs/x86_64 -OutFile rustup-init.exe
.\rustup-init.exe -y --default-host aarch64-pc-windows-msvc
shell: powershell
- name: Add Rust to PATH
run: |
Add-Content $env:GITHUB_PATH "$env:USERPROFILE\.cargo\bin"
shell: powershell
- uses: Swatinem/rust-cache@v2
with:
workspaces: rust
- name: Install 7-Zip ARM
run: |
New-Item -Path 'C:\7zip' -ItemType Directory
Invoke-WebRequest https://7-zip.org/a/7z2408-arm64.exe -OutFile C:\7zip\7z-installer.exe
Start-Process -FilePath C:\7zip\7z-installer.exe -ArgumentList '/S' -Wait
shell: powershell
- name: Add 7-Zip to PATH
run: Add-Content $env:GITHUB_PATH "C:\Program Files\7-Zip"
shell: powershell
- name: Install Protoc v21.12
working-directory: C:\
run: |
if (Test-Path 'C:\protoc') {
Write-Host "Protoc directory exists, skipping installation"
return
}
New-Item -Path 'C:\protoc' -ItemType Directory
Set-Location C:\protoc
Invoke-WebRequest https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protoc-21.12-win64.zip -OutFile C:\protoc\protoc.zip
& 'C:\Program Files\7-Zip\7z.exe' x protoc.zip
shell: powershell
- name: Add Protoc to PATH
run: Add-Content $env:GITHUB_PATH "C:\protoc\bin"
shell: powershell
- name: Run tests
run: |
$env:VCPKG_ROOT = $env:VCPKG_INSTALLATION_ROOT
cargo build --target aarch64-pc-windows-msvc
cargo test --target aarch64-pc-windows-msvc

View File

@@ -17,6 +17,7 @@ runs:
run: | run: |
python -m pip install --upgrade pip python -m pip install --upgrade pip
pip install twine pip install twine
python3 -m pip install --upgrade pkginfo
- name: Choose repo - name: Choose repo
shell: bash shell: bash
id: choose_repo id: choose_repo

View File

@@ -18,34 +18,44 @@ repository = "https://github.com/lancedb/lancedb"
description = "Serverless, low-latency vector database for AI applications" description = "Serverless, low-latency vector database for AI applications"
keywords = ["lancedb", "lance", "database", "vector", "search"] keywords = ["lancedb", "lance", "database", "vector", "search"]
categories = ["database-implementations"] categories = ["database-implementations"]
rust-version = "1.80.0" # TODO: lower this once we upgrade Lance again.
[workspace.dependencies] [workspace.dependencies]
lance = { "version" = "=0.14.1", "features" = ["dynamodb"] } lance = { "version" = "=0.20.0", "features" = [
lance-index = { "version" = "=0.14.1" } "dynamodb",
lance-linalg = { "version" = "=0.14.1" } ], git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-testing = { "version" = "=0.14.1" } lance-io = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-datafusion = { "version" = "=0.14.1" } lance-index = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-linalg = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-table = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-testing = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-datafusion = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
lance-encoding = { version = "=0.20.0", git = "https://github.com/lancedb/lance.git", tag = "v0.20.0-beta.3" }
# Note that this one does not include pyarrow # Note that this one does not include pyarrow
arrow = { version = "51.0", optional = false } arrow = { version = "52.2", optional = false }
arrow-array = "51.0" arrow-array = "52.2"
arrow-data = "51.0" arrow-data = "52.2"
arrow-ipc = "51.0" arrow-ipc = "52.2"
arrow-ord = "51.0" arrow-ord = "52.2"
arrow-schema = "51.0" arrow-schema = "52.2"
arrow-arith = "51.0" arrow-arith = "52.2"
arrow-cast = "51.0" arrow-cast = "52.2"
async-trait = "0" async-trait = "0"
chrono = "0.4.35" chrono = "0.4.35"
datafusion-physical-plan = "37.1" datafusion-common = "41.0"
datafusion-physical-plan = "41.0"
env_logger = "0.10"
half = { "version" = "=2.4.1", default-features = false, features = [ half = { "version" = "=2.4.1", default-features = false, features = [
"num-traits", "num-traits",
] } ] }
futures = "0" futures = "0"
log = "0.4" log = "0.4"
object_store = "0.9.0" moka = { version = "0.11", features = ["future"] }
object_store = "0.10.2"
pin-project = "1.0.7" pin-project = "1.0.7"
snafu = "0.7.4" snafu = "0.7.4"
url = "2" url = "2"
num-traits = "0.2" num-traits = "0.2"
rand = "0.8"
regex = "1.10" regex = "1.10"
lazy_static = "1" lazy_static = "1"

View File

@@ -10,6 +10,7 @@
[![Blog](https://img.shields.io/badge/Blog-12100E?style=for-the-badge&logoColor=white)](https://blog.lancedb.com/)
[![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/zMM32dvNtd)
[![Twitter](https://img.shields.io/badge/Twitter-%231DA1F2.svg?style=for-the-badge&logo=Twitter&logoColor=white)](https://twitter.com/lancedb)
+[![Gurubase](https://img.shields.io/badge/Gurubase-Ask%20LanceDB%20Guru-006BFF?style=for-the-badge)](https://gurubase.io/g/lancedb)
</p>
@@ -44,26 +45,24 @@ LanceDB's core is written in Rust 🦀 and is built using <a href="https://githu
**Javascript**

```shell
-npm install vectordb
+npm install @lancedb/lancedb
```

```javascript
-const lancedb = require('vectordb');
-const db = await lancedb.connect('data/sample-lancedb');
-const table = await db.createTable({
-  name: 'vectors',
-  data: [
-    { id: 1, vector: [0.1, 0.2], item: "foo", price: 10 },
-    { id: 2, vector: [1.1, 1.2], item: "bar", price: 50 }
-  ]
-})
-const query = table.search([0.1, 0.3]).limit(2);
-const results = await query.execute();
+import * as lancedb from "@lancedb/lancedb";
+const db = await lancedb.connect("data/sample-lancedb");
+const table = await db.createTable("vectors", [
+    { id: 1, vector: [0.1, 0.2], item: "foo", price: 10 },
+    { id: 2, vector: [1.1, 1.2], item: "bar", price: 50 },
+], {mode: 'overwrite'});
+const query = table.vectorSearch([0.1, 0.3]).limit(2);
+const results = await query.toArray();

// You can also search for rows by specific criteria without involving a vector search.
-const rowsByCriteria = await table.search(undefined).where("price >= 10").execute();
+const rowsByCriteria = await table.query().where("price >= 10").toArray();
```
**Python**

@@ -84,4 +83,4 @@ result = table.search([100, 100]).limit(2).to_pandas()

## Blogs, Tutorials & Videos
* 📈 <a href="https://blog.lancedb.com/benchmarking-random-access-in-lance/">2000x better performance with Lance over Parquet</a>
-* 🤖 <a href="https://github.com/lancedb/lancedb/blob/main/docs/src/notebooks/youtube_transcript_search.ipynb">Build a question and answer bot with LanceDB</a>
+* 🤖 <a href="https://github.com/lancedb/vectordb-recipes/tree/main/examples/Youtube-Search-QA-Bot">Build a question and answer bot with LanceDB</a>


@@ -1,6 +1,7 @@
#!/bin/bash
set -e
ARCH=${1:-x86_64}
+TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}

# We pass down the current user so that when we later mount the local files
# into the container, the files are accessible by the current user.

@@ -18,4 +19,4 @@ docker run \
  -v $(pwd):/io -w /io \
  --memory-swap=-1 \
  lancedb-node-manylinux \
-  bash ci/manylinux_node/build.sh $ARCH
+  bash ci/manylinux_node/build_vectordb.sh $ARCH $TARGET_TRIPLE


@@ -4,9 +4,9 @@ ARCH=${1:-x86_64}
# We pass down the current user so that when we later mount the local files
# into the container, the files are accessible by the current user.

-pushd ci/manylinux_nodejs
+pushd ci/manylinux_node
docker build \
-  -t lancedb-nodejs-manylinux \
+  -t lancedb-node-manylinux-$ARCH \
  --build-arg="ARCH=$ARCH" \
  --build-arg="DOCKER_USER=$(id -u)" \
  --progress=plain \

@@ -17,5 +17,5 @@ popd
docker run \
  -v $(pwd):/io -w /io \
  --memory-swap=-1 \
-  lancedb-nodejs-manylinux \
-  bash ci/manylinux_nodejs/build.sh $ARCH
+  lancedb-node-manylinux-$ARCH \
+  bash ci/manylinux_node/build_lancedb.sh $ARCH


@@ -3,6 +3,7 @@
# Targets supported:
# - x86_64-pc-windows-msvc
# - i686-pc-windows-msvc
+# - aarch64-pc-windows-msvc

function Prebuild-Rust {
    param (

@@ -31,7 +32,7 @@ function Build-NodeBinaries {
    $targets = $args[0]
    if (-not $targets) {
-        $targets = "x86_64-pc-windows-msvc"
+        $targets = "x86_64-pc-windows-msvc", "aarch64-pc-windows-msvc"
    }
    Write-Host "Building artifacts for targets: $targets"


@@ -3,6 +3,7 @@
# Targets supported:
# - x86_64-pc-windows-msvc
# - i686-pc-windows-msvc
+# - aarch64-pc-windows-msvc

function Prebuild-Rust {
    param (

@@ -31,7 +32,7 @@ function Build-NodeBinaries {
    $targets = $args[0]
    if (-not $targets) {
-        $targets = "x86_64-pc-windows-msvc"
+        $targets = "x86_64-pc-windows-msvc", "aarch64-pc-windows-msvc"
    }
    Write-Host "Building artifacts for targets: $targets"


@@ -4,7 +4,7 @@
# range of linux distributions.
ARG ARCH=x86_64
-FROM quay.io/pypa/manylinux2014_${ARCH}
+FROM quay.io/pypa/manylinux_2_28_${ARCH}
ARG ARCH=x86_64
ARG DOCKER_USER=default_user


@@ -11,7 +11,8 @@ fi
export OPENSSL_STATIC=1
export OPENSSL_INCLUDE_DIR=/usr/local/include/openssl

-source $HOME/.bashrc
+#Alpine doesn't have .bashrc
+FILE=$HOME/.bashrc && test -f $FILE && source $FILE

cd nodejs
npm ci


@@ -2,6 +2,7 @@
# Builds the node module for manylinux. Invoked by ci/build_linux_artifacts.sh.
set -e
ARCH=${1:-x86_64}
+TARGET_TRIPLE=${2:-x86_64-unknown-linux-gnu}

if [ "$ARCH" = "x86_64" ]; then
  export OPENSSL_LIB_DIR=/usr/local/lib64/

@@ -11,9 +12,10 @@ fi
export OPENSSL_STATIC=1
export OPENSSL_INCLUDE_DIR=/usr/local/include/openssl

-source $HOME/.bashrc
+#Alpine doesn't have .bashrc
+FILE=$HOME/.bashrc && test -f $FILE && source $FILE

cd node
npm ci
npm run build-release
-npm run pack-build
+npm run pack-build -- -t $TARGET_TRIPLE


@@ -6,7 +6,7 @@
# /usr/bin/ld: failed to set dynamic section sizes: Bad value
set -e

-git clone -b OpenSSL_1_1_1u \
+git clone -b OpenSSL_1_1_1v \
  --single-branch \
  https://github.com/openssl/openssl.git


@@ -8,7 +8,7 @@ install_node() {
source "$HOME"/.bashrc source "$HOME"/.bashrc
nvm install --no-progress 16 nvm install --no-progress 18
} }
install_rust() { install_rust() {


@@ -1,31 +0,0 @@
# Many linux dockerfile with Rust, Node, and Lance dependencies installed.
# This container allows building the node modules native libraries in an
# environment with a very old glibc, so that we are compatible with a wide
# range of linux distributions.
ARG ARCH=x86_64
FROM quay.io/pypa/manylinux2014_${ARCH}
ARG ARCH=x86_64
ARG DOCKER_USER=default_user
# Install static openssl
COPY install_openssl.sh install_openssl.sh
RUN ./install_openssl.sh ${ARCH} > /dev/null
# Protobuf is also installed as root.
COPY install_protobuf.sh install_protobuf.sh
RUN ./install_protobuf.sh ${ARCH}
ENV DOCKER_USER=${DOCKER_USER}
# Create a group and user
RUN echo ${ARCH} && adduser --user-group --create-home --uid ${DOCKER_USER} build_user
# We switch to the user to install Rust and Node, since those like to be
# installed at the user level.
USER ${DOCKER_USER}
COPY prepare_manylinux_node.sh prepare_manylinux_node.sh
RUN cp /prepare_manylinux_node.sh $HOME/ && \
cd $HOME && \
./prepare_manylinux_node.sh ${ARCH}


@@ -1,26 +0,0 @@
#!/bin/bash
# Builds openssl from source so we can statically link to it
# this is to avoid the error we get with the system installation:
# /usr/bin/ld: <library>: version node not found for symbol SSLeay@@OPENSSL_1.0.1
# /usr/bin/ld: failed to set dynamic section sizes: Bad value
set -e
git clone -b OpenSSL_1_1_1u \
--single-branch \
https://github.com/openssl/openssl.git
pushd openssl
if [[ $1 == x86_64* ]]; then
ARCH=linux-x86_64
else
# gnu target
ARCH=linux-aarch64
fi
./Configure no-shared $ARCH
make
make install


@@ -1,15 +0,0 @@
#!/bin/bash
# Installs protobuf compiler. Should be run as root.
set -e
if [[ $1 == x86_64* ]]; then
ARCH=x86_64
else
# gnu target
ARCH=aarch_64
fi
PB_REL=https://github.com/protocolbuffers/protobuf/releases
PB_VERSION=23.1
curl -LO $PB_REL/download/v$PB_VERSION/protoc-$PB_VERSION-linux-$ARCH.zip
unzip protoc-$PB_VERSION-linux-$ARCH.zip -d /usr/local


@@ -1,21 +0,0 @@
#!/bin/bash
set -e
install_node() {
echo "Installing node..."
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
source "$HOME"/.bashrc
nvm install --no-progress 16
}
install_rust() {
echo "Installing rust..."
curl https://sh.rustup.rs -sSf | bash -s -- -y
export PATH="$PATH:/root/.cargo/bin"
}
install_node
install_rust

ci/mock_openai.py (new file)

@@ -0,0 +1,57 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The LanceDB Authors
"""A zero-dependency mock OpenAI embeddings API endpoint for testing purposes."""
import argparse
import json
import http.server


class MockOpenAIRequestHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        content_length = int(self.headers["Content-Length"])
        post_data = self.rfile.read(content_length)
        post_data = json.loads(post_data.decode("utf-8"))

        # See: https://platform.openai.com/docs/api-reference/embeddings/create
        if isinstance(post_data["input"], str):
            num_inputs = 1
        else:
            num_inputs = len(post_data["input"])
        model = post_data.get("model", "text-embedding-ada-002")

        data = []
        for i in range(num_inputs):
            data.append({
                "object": "embedding",
                "embedding": [0.1] * 1536,
                "index": i,
            })
        response = {
            "object": "list",
            "data": data,
            "model": model,
            "usage": {
                "prompt_tokens": 0,
                "total_tokens": 0,
            }
        }

        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(response).encode("utf-8"))


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Mock OpenAI embeddings API endpoint")
    parser.add_argument("--port", type=int, default=8000, help="Port to listen on")
    args = parser.parse_args()
    port = args.port

    print(f"server started on port {port}. Press Ctrl-C to stop.")
    print(f"To use, set OPENAI_BASE_URL=http://localhost:{port} in your environment.")
    with http.server.HTTPServer(("0.0.0.0", port), MockOpenAIRequestHandler) as server:
        server.serve_forever()
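For reference, any OpenAI-compatible client can be pointed at this endpoint in a test. A minimal sketch using the `openai` Python package; the client setup, port, and model name below are illustrative and not part of this script:

```python
# Assumes `python ci/mock_openai.py --port 8000` is running and openai>=1.0 is installed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000", api_key="sk-dummy")  # key is ignored by the mock
resp = client.embeddings.create(model="text-embedding-ada-002", input=["hello", "world"])
print(len(resp.data), len(resp.data[0].embedding))  # -> 2 embeddings, 1536 dims each
```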


@@ -0,0 +1,105 @@
#!/bin/sh
# https://github.com/mstorsjo/msvc-wine/blob/master/vsdownload.py
# https://github.com/mozilla/gecko-dev/blob/6027d1d91f2d3204a3992633b3ef730ff005fc64/build/vs/vs2022-car.yaml
# function dl() {
# curl -O https://download.visualstudio.microsoft.com/download/pr/$1
# }
# [[.h]]
# "id": "Win11SDK_10.0.26100"
# "version": "10.0.26100.7"
# libucrt.lib
# example: <assert.h>
# dir: ucrt/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ee3a5fc6e9fc832af7295b138e93839/universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b1aa09b90fe314aceb090f6ec7626624/16ab2ea2187acffa6435e334796c8c89.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/400609bb0ff5804e36dbe6dcd42a7f01/6ee7bbee8435130a869cf971694fd9e2.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ac327317abb865a0e3f56b2faefa918/78fa3c824c2c48bd4a49ab5969adaaf7.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/f034bc0b2680f67dccd4bfeea3d0f932/7afc7b670accd8e3cc94cfffd516f5cb.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/7ed5e12f9d50f80825a8b27838cf4c7f/96076045170fe5db6d5dcf14b6f6688e.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/764edc185a696bda9e07df8891dddbbb/a1e2a83aa8a71c48c742eeaff6e71928.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/66854bedc6dbd5ccb5dd82c8e2412231/b2f03f34ff83ec013b9e45c7cd8e8a73.cab
# example: <windows.h>
# dir: um/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b286efac4d83a54fc49190bddef1edc9/windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/e0dc3811d92ab96fcb72bf63d6c08d71/766c0ffd568bbb31bf7fb6793383e24a.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/613503da4b5628768497822826aed39f/8125ee239710f33ea485965f76fae646.cab
# example: <winapifamily.h>
# dir: /shared
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/122979f0348d3a2a36b6aa1a111d5d0c/windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/766e04beecdfccff39e91dd9eb32834a/e89e3dcbb016928c7e426238337d69eb.cab
# "id": "Microsoft.VisualC.14.16.CRT.Headers"
# "version": "14.16.27045"
# example: <vcruntime.h>
# dir: MSVC/
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/87bbe41e09a2f83711e72696f49681429327eb7a4b90618c35667a6ba2e2880e/Microsoft.VisualC.14.16.CRT.Headers.vsix
# [[.lib]]
# advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib runtimeobject.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/944c4153b849a1f7d0c0404a4f1c05ea/windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5306aed3e1a38d1e8bef5934edeb2a9b/05047a45609f311645eebcac2739fc4c.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/13c8a73a0f5a6474040b26d016a26fab/13d68b8a7b6678a368e2d13ff4027521.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/149578fb3b621cdb61ee1813b9b3e791/463ad1b0783ebda908fd6c16a4abfe93.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5c986c4f393c6b09d5aec3b539e9fb4a/5a22e5cde814b041749fb271547f4dd5.cab
# fwpuclnt.lib arm64rt.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/7a332420d812f7c1d41da865ae5a7c52/windows%20sdk%20desktop%20libs%20arm64-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/19de98ed4a79938d0045d19c047936b3/3e2f7be479e3679d700ce0782e4cc318.cab
# libcmt.lib libvcruntime.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/227f40682a88dc5fa0ccb9cadc9ad30af99ad1f1a75db63407587d079f60d035/Microsoft.VisualC.14.16.CRT.ARM64.Desktop.vsix
msiextract universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
msiextract windows%20sdk%20desktop%20libs%20arm64-x86_en-us.msi
unzip -o Microsoft.VisualC.14.16.CRT.Headers.vsix
unzip -o Microsoft.VisualC.14.16.CRT.ARM64.Desktop.vsix
mkdir -p /usr/aarch64-pc-windows-msvc/usr/include
mkdir -p /usr/aarch64-pc-windows-msvc/usr/lib
# lowercase folder/file names
echo "$(find . -regex ".*/[^/]*[A-Z][^/]*")" | xargs -I{} sh -c 'mv "$(echo "{}" | sed -E '"'"'s/(.*\/)/\L\1/'"'"')" "$(echo "{}" | tr [A-Z] [a-z])"'
# .h
(cd 'program files/windows kits/10/include/10.0.26100.0' && cp -r ucrt/* um/* shared/* -t /usr/aarch64-pc-windows-msvc/usr/include)
cp -r contents/vc/tools/msvc/14.16.27023/include/* /usr/aarch64-pc-windows-msvc/usr/include
# lowercase #include "" and #include <>
find /usr/aarch64-pc-windows-msvc/usr/include -type f -exec sed -i -E 's/(#include <[^<>]*?[A-Z][^<>]*?>)|(#include "[^"]*?[A-Z][^"]*?")/\L\1\2/' "{}" ';'
# ARM intrinsics
# original dir: MSVC/
# '__n128x4' redefined in arm_neon.h
# "arm64_neon.h" included from intrin.h
(cd /usr/lib/llvm19/lib/clang/19/include && cp arm_neon.h intrin.h -t /usr/aarch64-pc-windows-msvc/usr/include)
# .lib
# _Interlocked intrinsics
# must always link with arm64rt.lib
# reason: https://developercommunity.visualstudio.com/t/libucrtlibstreamobj-error-lnk2001-unresolved-exter/1544787#T-ND1599818
# I don't understand the 'correct' fix for this, arm64rt.lib is supposed to be the workaround
(cd 'program files/windows kits/10/lib/10.0.26100.0/um/arm64' && cp advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib runtimeobject.lib fwpuclnt.lib arm64rt.lib -t /usr/aarch64-pc-windows-msvc/usr/lib)
(cd 'contents/vc/tools/msvc/14.16.27023/lib/arm64' && cp libcmt.lib libvcruntime.lib -t /usr/aarch64-pc-windows-msvc/usr/lib)
cp 'program files/windows kits/10/lib/10.0.26100.0/ucrt/arm64/libucrt.lib' /usr/aarch64-pc-windows-msvc/usr/lib


@@ -0,0 +1,105 @@
#!/bin/sh
# https://github.com/mstorsjo/msvc-wine/blob/master/vsdownload.py
# https://github.com/mozilla/gecko-dev/blob/6027d1d91f2d3204a3992633b3ef730ff005fc64/build/vs/vs2022-car.yaml
# function dl() {
# curl -O https://download.visualstudio.microsoft.com/download/pr/$1
# }
# [[.h]]
# "id": "Win11SDK_10.0.26100"
# "version": "10.0.26100.7"
# libucrt.lib
# example: <assert.h>
# dir: ucrt/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ee3a5fc6e9fc832af7295b138e93839/universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b1aa09b90fe314aceb090f6ec7626624/16ab2ea2187acffa6435e334796c8c89.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/400609bb0ff5804e36dbe6dcd42a7f01/6ee7bbee8435130a869cf971694fd9e2.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/2ac327317abb865a0e3f56b2faefa918/78fa3c824c2c48bd4a49ab5969adaaf7.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/f034bc0b2680f67dccd4bfeea3d0f932/7afc7b670accd8e3cc94cfffd516f5cb.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/7ed5e12f9d50f80825a8b27838cf4c7f/96076045170fe5db6d5dcf14b6f6688e.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/764edc185a696bda9e07df8891dddbbb/a1e2a83aa8a71c48c742eeaff6e71928.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/66854bedc6dbd5ccb5dd82c8e2412231/b2f03f34ff83ec013b9e45c7cd8e8a73.cab
# example: <windows.h>
# dir: um/
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/b286efac4d83a54fc49190bddef1edc9/windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/e0dc3811d92ab96fcb72bf63d6c08d71/766c0ffd568bbb31bf7fb6793383e24a.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/613503da4b5628768497822826aed39f/8125ee239710f33ea485965f76fae646.cab
# example: <winapifamily.h>
# dir: /shared
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/122979f0348d3a2a36b6aa1a111d5d0c/windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/766e04beecdfccff39e91dd9eb32834a/e89e3dcbb016928c7e426238337d69eb.cab
# "id": "Microsoft.VisualC.14.16.CRT.Headers"
# "version": "14.16.27045"
# example: <vcruntime.h>
# dir: MSVC/
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/87bbe41e09a2f83711e72696f49681429327eb7a4b90618c35667a6ba2e2880e/Microsoft.VisualC.14.16.CRT.Headers.vsix
# [[.lib]]
# advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/944c4153b849a1f7d0c0404a4f1c05ea/windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5306aed3e1a38d1e8bef5934edeb2a9b/05047a45609f311645eebcac2739fc4c.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/13c8a73a0f5a6474040b26d016a26fab/13d68b8a7b6678a368e2d13ff4027521.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/149578fb3b621cdb61ee1813b9b3e791/463ad1b0783ebda908fd6c16a4abfe93.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/5c986c4f393c6b09d5aec3b539e9fb4a/5a22e5cde814b041749fb271547f4dd5.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/bfc3904a0195453419ae4dfea7abd6fb/e10768bb6e9d0ea730280336b697da66.cab
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/637f9f3be880c71f9e3ca07b4d67345c/f9b24c8280986c0683fbceca5326d806.cab
# dbghelp.lib fwpuclnt.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/9f51690d5aa804b1340ce12d1ec80f89/windows%20sdk%20desktop%20libs%20x64-x86_en-us.msi
curl -O https://download.visualstudio.microsoft.com/download/pr/32863b8d-a46d-4231-8e84-0888519d20a9/d3a7df4ca3303a698640a29e558a5e5b/58314d0646d7e1a25e97c902166c3155.cab
# libcmt.lib libvcruntime.lib
curl -O https://download.visualstudio.microsoft.com/download/pr/bac0afd7-cc9e-4182-8a83-9898fa20e092/8728f21ae09940f1f4b4ee47b4a596be2509e2a47d2f0c83bbec0ea37d69644b/Microsoft.VisualC.14.16.CRT.x64.Desktop.vsix
msiextract universal%20crt%20headers%20libraries%20and%20sources-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20headers%20onecoreuap-x86_en-us.msi
msiextract windows%20sdk%20for%20windows%20store%20apps%20libs-x86_en-us.msi
msiextract windows%20sdk%20desktop%20libs%20x64-x86_en-us.msi
unzip -o Microsoft.VisualC.14.16.CRT.Headers.vsix
unzip -o Microsoft.VisualC.14.16.CRT.x64.Desktop.vsix
mkdir -p /usr/x86_64-pc-windows-msvc/usr/include
mkdir -p /usr/x86_64-pc-windows-msvc/usr/lib
# lowercase folder/file names
echo "$(find . -regex ".*/[^/]*[A-Z][^/]*")" | xargs -I{} sh -c 'mv "$(echo "{}" | sed -E '"'"'s/(.*\/)/\L\1/'"'"')" "$(echo "{}" | tr [A-Z] [a-z])"'
# .h
(cd 'program files/windows kits/10/include/10.0.26100.0' && cp -r ucrt/* um/* shared/* -t /usr/x86_64-pc-windows-msvc/usr/include)
cp -r contents/vc/tools/msvc/14.16.27023/include/* /usr/x86_64-pc-windows-msvc/usr/include
# lowercase #include "" and #include <>
find /usr/x86_64-pc-windows-msvc/usr/include -type f -exec sed -i -E 's/(#include <[^<>]*?[A-Z][^<>]*?>)|(#include "[^"]*?[A-Z][^"]*?")/\L\1\2/' "{}" ';'
# x86 intrinsics
# original dir: MSVC/
# '_mm_movemask_epi8' defined in emmintrin.h
# '__v4sf' defined in xmmintrin.h
# '__v2si' defined in mmintrin.h
# '__m128d' redefined in immintrin.h
# '__m128i' redefined in intrin.h
# '_mm_comlt_epu8' defined in ammintrin.h
(cd /usr/lib/llvm19/lib/clang/19/include && cp emmintrin.h xmmintrin.h mmintrin.h immintrin.h intrin.h ammintrin.h -t /usr/x86_64-pc-windows-msvc/usr/include)
# .lib
(cd 'program files/windows kits/10/lib/10.0.26100.0/um/x64' && cp advapi32.lib bcrypt.lib kernel32.lib ntdll.lib user32.lib uuid.lib ws2_32.lib userenv.lib cfgmgr32.lib dbghelp.lib fwpuclnt.lib -t /usr/x86_64-pc-windows-msvc/usr/lib)
(cd 'contents/vc/tools/msvc/14.16.27023/lib/x64' && cp libcmt.lib libvcruntime.lib -t /usr/x86_64-pc-windows-msvc/usr/lib)
cp 'program files/windows kits/10/lib/10.0.26100.0/ucrt/x64/libucrt.lib' /usr/x86_64-pc-windows-msvc/usr/lib


@@ -26,6 +26,7 @@ theme:
    - content.code.copy
    - content.tabs.link
    - content.action.edit
+    - content.tooltips
    - toc.follow
    - navigation.top
    - navigation.tabs

@@ -33,8 +34,10 @@ theme:
    - navigation.footer
    - navigation.tracking
    - navigation.instant
+    - content.footnote.tooltips
  icon:
    repo: fontawesome/brands/github
+    annotation: material/arrow-right-circle
  custom_dir: overrides

plugins:
@@ -52,17 +55,25 @@ plugins:
            show_signature_annotations: true
            show_root_heading: true
            members_order: source
+            docstring_section_style: list
+            signature_crossrefs: true
+            separate_signature: true
          import:
            # for cross references
            - https://arrow.apache.org/docs/objects.inv
            - https://pandas.pydata.org/docs/objects.inv
  - mkdocs-jupyter
  - render_swagger:
-      allow_arbitrary_locations : true
+      allow_arbitrary_locations: true

markdown_extensions:
  - admonition
  - footnotes
+  - pymdownx.critic
+  - pymdownx.caret
+  - pymdownx.keys
+  - pymdownx.mark
+  - pymdownx.tilde
  - pymdownx.details
  - pymdownx.highlight:
      anchor_linenums: true

@@ -76,7 +87,15 @@ markdown_extensions:
  - pymdownx.tabbed:
      alternate_style: true
  - md_in_html
+  - abbr
  - attr_list
+  - pymdownx.snippets
+  - pymdownx.emoji:
+      emoji_index: !!python/name:material.extensions.emoji.twemoji
+      emoji_generator: !!python/name:material.extensions.emoji.to_svg
+  - markdown.extensions.toc:
+      baselevel: 1
+      permalink: ""
nav:
  - Home:
@@ -84,26 +103,45 @@ nav:
    - 🏃🏼‍♂️ Quick start: basic.md
    - 📚 Concepts:
      - Vector search: concepts/vector_search.md
-      - Indexing: concepts/index_ivfpq.md
+      - Indexing:
+        - IVFPQ: concepts/index_ivfpq.md
+        - HNSW: concepts/index_hnsw.md
      - Storage: concepts/storage.md
      - Data management: concepts/data_management.md
    - 🔨 Guides:
      - Working with tables: guides/tables.md
-      - Building an ANN index: ann_indexes.md
+      - Building a vector index: ann_indexes.md
      - Vector Search: search.md
-      - Full-text search: fts.md
+      - Full-text search (native): fts.md
+      - Full-text search (tantivy-based): fts_tantivy.md
+      - Building a scalar index: guides/scalar_index.md
      - Hybrid search:
        - Overview: hybrid_search/hybrid_search.md
        - Comparing Rerankers: hybrid_search/eval.md
        - Airbnb financial data example: notebooks/hybrid_search.ipynb
+      - RAG:
+        - Vanilla RAG: rag/vanilla_rag.md
+        - Multi-head RAG: rag/multi_head_rag.md
+        - Corrective RAG: rag/corrective_rag.md
+        - Agentic RAG: rag/agentic_rag.md
+        - Graph RAG: rag/graph_rag.md
+        - Self RAG: rag/self_rag.md
+        - Adaptive RAG: rag/adaptive_rag.md
+        - SFR RAG: rag/sfr_rag.md
+        - Advanced Techniques:
+          - HyDE: rag/advanced_techniques/hyde.md
+          - FLARE: rag/advanced_techniques/flare.md
      - Reranking:
        - Quickstart: reranking/index.md
        - Cohere Reranker: reranking/cohere.md
        - Linear Combination Reranker: reranking/linear_combination.md
+        - Reciprocal Rank Fusion Reranker: reranking/rrf.md
        - Cross Encoder Reranker: reranking/cross_encoder.md
        - ColBERT Reranker: reranking/colbert.md
        - Jina Reranker: reranking/jina.md
        - OpenAI Reranker: reranking/openai.md
+        - AnswerDotAi Rerankers: reranking/answerdotai.md
+        - Voyage AI Rerankers: reranking/voyageai.md
        - Building Custom Rerankers: reranking/custom_reranker.md
        - Example: notebooks/lancedb_reranking.ipynb
      - Filtering: sql.md
@@ -115,9 +153,27 @@ nav:
      - Reranking: guides/tuning_retrievers/2_reranking.md
      - Embedding fine-tuning: guides/tuning_retrievers/3_embed_tuning.md
    - 🧬 Managing embeddings:
-      - Overview: embeddings/index.md
+      - Understand Embeddings: embeddings/understanding_embeddings.md
+      - Get Started: embeddings/index.md
      - Embedding functions: embeddings/embedding_functions.md
-      - Available models: embeddings/default_embedding_functions.md
+      - Available models:
+        - Overview: embeddings/default_embedding_functions.md
+        - Text Embedding Functions:
+          - Sentence Transformers: embeddings/available_embedding_models/text_embedding_functions/sentence_transformers.md
+          - Huggingface Embedding Models: embeddings/available_embedding_models/text_embedding_functions/huggingface_embedding.md
+          - Ollama Embeddings: embeddings/available_embedding_models/text_embedding_functions/ollama_embedding.md
+          - OpenAI Embeddings: embeddings/available_embedding_models/text_embedding_functions/openai_embedding.md
+          - Instructor Embeddings: embeddings/available_embedding_models/text_embedding_functions/instructor_embedding.md
+          - Gemini Embeddings: embeddings/available_embedding_models/text_embedding_functions/gemini_embedding.md
+          - Cohere Embeddings: embeddings/available_embedding_models/text_embedding_functions/cohere_embedding.md
+          - Jina Embeddings: embeddings/available_embedding_models/text_embedding_functions/jina_embedding.md
+          - AWS Bedrock Text Embedding Functions: embeddings/available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md
+          - IBM watsonx.ai Embeddings: embeddings/available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md
+          - Voyage AI Embeddings: embeddings/available_embedding_models/text_embedding_functions/voyageai_embedding.md
+        - Multimodal Embedding Functions:
+          - OpenClip embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/openclip_embedding.md
+          - Imagebind embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md
+          - Jina Embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md
      - User-defined embedding functions: embeddings/custom_embedding_function.md
      - "Example: Multi-lingual semantic search": notebooks/multi_lingual_example.ipynb
      - "Example: MultiModal CLIP Embeddings": notebooks/DisappearingEmbeddingFunction.ipynb
@@ -136,14 +192,21 @@ nav:
      - Pydantic: python/pydantic.md
      - Voxel51: integrations/voxel51.md
      - PromptTools: integrations/prompttools.md
+      - dlt: integrations/dlt.md
+      - phidata: integrations/phidata.md
    - 🎯 Examples:
      - Overview: examples/index.md
      - 🐍 Python:
        - Overview: examples/examples_python.md
-        - YouTube Transcript Search: notebooks/youtube_transcript_search.ipynb
-        - Documentation QA Bot using LangChain: notebooks/code_qa_bot.ipynb
-        - Multimodal search using CLIP: notebooks/multimodal_search.ipynb
-        - Example - Calculate CLIP Embeddings with Roboflow Inference: examples/image_embeddings_roboflow.md
+        - Build From Scratch: examples/python_examples/build_from_scratch.md
+        - Multimodal: examples/python_examples/multimodal.md
+        - Rag: examples/python_examples/rag.md
+        - Vector Search: examples/python_examples/vector_search.md
+        - Chatbot: examples/python_examples/chatbot.md
+        - Evaluation: examples/python_examples/evaluations.md
+        - AI Agent: examples/python_examples/aiagent.md
+        - Recommender System: examples/python_examples/recommendersystem.md
+        - Miscellaneous:
          - Serverless QA Bot with S3 and Lambda: examples/serverless_lancedb_with_s3_and_lambda.md
          - Serverless QA Bot with Modal: examples/serverless_qa_bot_with_modal_and_langchain.md
      - 👾 JavaScript:

@@ -153,7 +216,10 @@ nav:
        - TransformersJS Embedding Search: examples/transformerjs_embedding_search_nodejs.md
      - 🦀 Rust:
        - Overview: examples/examples_rust.md
+    - 📓 Studies:
+      - ↗Improve retrievers with hybrid search and reranking: https://blog.lancedb.com/hybrid-search-and-reranking-report/
    - 💭 FAQs: faq.md
+    - 🔍 Troubleshooting: troubleshooting.md
    - ⚙️ API reference:
      - 🐍 Python: python/python.md
      - 👾 JavaScript (vectordb): javascript/modules.md
@@ -169,26 +235,44 @@ nav:
    - Quick start: basic.md
    - Concepts:
      - Vector search: concepts/vector_search.md
-      - Indexing: concepts/index_ivfpq.md
+      - Indexing:
+        - IVFPQ: concepts/index_ivfpq.md
+        - HNSW: concepts/index_hnsw.md
      - Storage: concepts/storage.md
      - Data management: concepts/data_management.md
    - Guides:
      - Working with tables: guides/tables.md
      - Building an ANN index: ann_indexes.md
      - Vector Search: search.md
-      - Full-text search: fts.md
+      - Full-text search (native): fts.md
+      - Full-text search (tantivy-based): fts_tantivy.md
+      - Building a scalar index: guides/scalar_index.md
      - Hybrid search:
        - Overview: hybrid_search/hybrid_search.md
        - Comparing Rerankers: hybrid_search/eval.md
        - Airbnb financial data example: notebooks/hybrid_search.ipynb
+      - RAG:
+        - Vanilla RAG: rag/vanilla_rag.md
+        - Multi-head RAG: rag/multi_head_rag.md
+        - Corrective RAG: rag/corrective_rag.md
+        - Agentic RAG: rag/agentic_rag.md
+        - Graph RAG: rag/graph_rag.md
+        - Self RAG: rag/self_rag.md
+        - Adaptive RAG: rag/adaptive_rag.md
+        - SFR RAG: rag/sfr_rag.md
+        - Advanced Techniques:
+          - HyDE: rag/advanced_techniques/hyde.md
+          - FLARE: rag/advanced_techniques/flare.md
      - Reranking:
        - Quickstart: reranking/index.md
        - Cohere Reranker: reranking/cohere.md
        - Linear Combination Reranker: reranking/linear_combination.md
+        - Reciprocal Rank Fusion Reranker: reranking/rrf.md
        - Cross Encoder Reranker: reranking/cross_encoder.md
        - ColBERT Reranker: reranking/colbert.md
        - Jina Reranker: reranking/jina.md
        - OpenAI Reranker: reranking/openai.md
+        - AnswerDotAi Rerankers: reranking/answerdotai.md
        - Building Custom Rerankers: reranking/custom_reranker.md
        - Example: notebooks/lancedb_reranking.ipynb
      - Filtering: sql.md
@@ -200,9 +284,26 @@ nav:
      - Reranking: guides/tuning_retrievers/2_reranking.md
      - Embedding fine-tuning: guides/tuning_retrievers/3_embed_tuning.md
    - Managing Embeddings:
-      - Overview: embeddings/index.md
+      - Understand Embeddings: embeddings/understanding_embeddings.md
+      - Get Started: embeddings/index.md
      - Embedding functions: embeddings/embedding_functions.md
-      - Available models: embeddings/default_embedding_functions.md
+      - Available models:
+        - Overview: embeddings/default_embedding_functions.md
+        - Text Embedding Functions:
+          - Sentence Transformers: embeddings/available_embedding_models/text_embedding_functions/sentence_transformers.md
+          - Huggingface Embedding Models: embeddings/available_embedding_models/text_embedding_functions/huggingface_embedding.md
+          - Ollama Embeddings: embeddings/available_embedding_models/text_embedding_functions/ollama_embedding.md
+          - OpenAI Embeddings: embeddings/available_embedding_models/text_embedding_functions/openai_embedding.md
+          - Instructor Embeddings: embeddings/available_embedding_models/text_embedding_functions/instructor_embedding.md
+          - Gemini Embeddings: embeddings/available_embedding_models/text_embedding_functions/gemini_embedding.md
+          - Cohere Embeddings: embeddings/available_embedding_models/text_embedding_functions/cohere_embedding.md
+          - Jina Embeddings: embeddings/available_embedding_models/text_embedding_functions/jina_embedding.md
+          - AWS Bedrock Text Embedding Functions: embeddings/available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md
+          - IBM watsonx.ai Embeddings: embeddings/available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md
+        - Multimodal Embedding Functions:
+          - OpenClip embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/openclip_embedding.md
+          - Imagebind embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md
+          - Jina Embeddings: embeddings/available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md
      - User-defined embedding functions: embeddings/custom_embedding_function.md
      - "Example: Multi-lingual semantic search": notebooks/multi_lingual_example.ipynb
      - "Example: MultiModal CLIP Embeddings": notebooks/DisappearingEmbeddingFunction.ipynb
@@ -217,16 +318,33 @@ nav:
      - Pydantic: python/pydantic.md
      - Voxel51: integrations/voxel51.md
      - PromptTools: integrations/prompttools.md
+      - dlt: integrations/dlt.md
+      - phidata: integrations/phidata.md
    - Examples:
      - examples/index.md
-      - YouTube Transcript Search: notebooks/youtube_transcript_search.ipynb
-      - Documentation QA Bot using LangChain: notebooks/code_qa_bot.ipynb
-      - Multimodal search using CLIP: notebooks/multimodal_search.ipynb
+      - 🐍 Python:
+        - Overview: examples/examples_python.md
+        - Build From Scratch: examples/python_examples/build_from_scratch.md
+        - Multimodal: examples/python_examples/multimodal.md
+        - Rag: examples/python_examples/rag.md
+        - Vector Search: examples/python_examples/vector_search.md
+        - Chatbot: examples/python_examples/chatbot.md
+        - Evaluation: examples/python_examples/evaluations.md
+        - AI Agent: examples/python_examples/aiagent.md
+        - Recommender System: examples/python_examples/recommendersystem.md
+        - Miscellaneous:
          - Serverless QA Bot with S3 and Lambda: examples/serverless_lancedb_with_s3_and_lambda.md
          - Serverless QA Bot with Modal: examples/serverless_qa_bot_with_modal_and_langchain.md
-      - YouTube Transcript Search (JS): examples/youtube_transcript_bot_with_nodejs.md
-      - Serverless Chatbot from any website: examples/serverless_website_chatbot.md
+      - 👾 JavaScript:
+        - Overview: examples/examples_js.md
+        - Serverless Website Chatbot: examples/serverless_website_chatbot.md
+        - YouTube Transcript Search: examples/youtube_transcript_bot_with_nodejs.md
        - TransformersJS Embedding Search: examples/transformerjs_embedding_search_nodejs.md
+      - 🦀 Rust:
+        - Overview: examples/examples_rust.md
+    - Studies:
+      - studies/overview.md
+      - ↗Improve retrievers with hybrid search and reranking: https://blog.lancedb.com/hybrid-search-and-reranking-report/
    - API reference:
      - Overview: api_reference.md
      - Python: python/python.md

docs/package-lock.json (generated)

@@ -19,7 +19,7 @@
    },
    "../node": {
      "name": "vectordb",
-      "version": "0.4.6",
+      "version": "0.12.0",
      "cpu": [
        "x64",
        "arm64"
@@ -31,9 +31,7 @@
        "win32"
      ],
      "dependencies": {
-        "@apache-arrow/ts": "^14.0.2",
        "@neon-rs/load": "^0.0.74",
-        "apache-arrow": "^14.0.2",
        "axios": "^1.4.0"
      },
      "devDependencies": {
@@ -46,6 +44,7 @@
        "@types/temp": "^0.9.1",
        "@types/uuid": "^9.0.3",
        "@typescript-eslint/eslint-plugin": "^5.59.1",
+        "apache-arrow-old": "npm:apache-arrow@13.0.0",
        "cargo-cp-artifact": "^0.1",
        "chai": "^4.3.7",
        "chai-as-promised": "^7.1.1",
@@ -62,15 +61,19 @@
        "ts-node-dev": "^2.0.0",
        "typedoc": "^0.24.7",
        "typedoc-plugin-markdown": "^3.15.3",
-        "typescript": "*",
+        "typescript": "^5.1.0",
        "uuid": "^9.0.0"
      },
      "optionalDependencies": {
-        "@lancedb/vectordb-darwin-arm64": "0.4.6",
-        "@lancedb/vectordb-darwin-x64": "0.4.6",
-        "@lancedb/vectordb-linux-arm64-gnu": "0.4.6",
-        "@lancedb/vectordb-linux-x64-gnu": "0.4.6",
-        "@lancedb/vectordb-win32-x64-msvc": "0.4.6"
+        "@lancedb/vectordb-darwin-arm64": "0.12.0",
+        "@lancedb/vectordb-darwin-x64": "0.12.0",
+        "@lancedb/vectordb-linux-arm64-gnu": "0.12.0",
+        "@lancedb/vectordb-linux-x64-gnu": "0.12.0",
+        "@lancedb/vectordb-win32-x64-msvc": "0.12.0"
+      },
+      "peerDependencies": {
+        "@apache-arrow/ts": "^14.0.2",
+        "apache-arrow": "^14.0.2"
      }
    },
    "../node/node_modules/apache-arrow": {


@@ -1,6 +1,7 @@
mkdocs==1.5.3
mkdocs-jupyter==0.24.1
mkdocs-material==9.5.3
-mkdocstrings[python]==0.20.0
+mkdocstrings[python]==0.25.2
+griffe
mkdocs-render-swagger-plugin
pydantic


@@ -45,9 +45,9 @@ Lance supports `IVF_PQ` index type by default.
Creating indexes is done via the [lancedb.Table.createIndex](../js/classes/Table.md/#createIndex) method.

    ```typescript
-    --8<--- "nodejs/examples/ann_indexes.ts:import"
-    --8<-- "nodejs/examples/ann_indexes.ts:ingest"
+    --8<--- "nodejs/examples/ann_indexes.test.ts:import"
+    --8<-- "nodejs/examples/ann_indexes.test.ts:ingest"
    ```

=== "vectordb (deprecated)"
@@ -140,13 +140,15 @@ There are a couple of parameters that can be used to fine-tune the search:
- **limit** (default: 10): The number of results that will be returned.
- **nprobes** (default: 20): The number of probes used. A higher number makes search more accurate but also slower.<br/>
  Most of the time, setting nprobes to cover 5-15% of the dataset should achieve high recall with low latency.<br/>
  - _For example_, for a dataset of 1 million vectors divided into 256 partitions, `nprobes` should be set to ~20-40. This value can be adjusted to achieve the optimal balance between search latency and search quality.<br/>
- **refine_factor** (default: None): Refine the results by reading extra elements and re-ranking them in memory.<br/>
  A higher number makes search more accurate but also slower. If you find the recall is less than ideal, try refine_factor=10 to start.<br/>
  - _For example_, for a dataset of 1 million vectors divided into 256 partitions, setting the `refine_factor` to 200 will initially retrieve the top 4,000 candidates (top k * refine_factor) from all searched partitions. These candidates are then reranked to determine the final top 20 results.<br/>

!!! note
    Both `nprobes` and `refine_factor` are only applicable if an ANN index is present. If specified on a table without an ANN index, those parameters are ignored.
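For reference, a minimal sketch of how these parameters are typically set through the Python sync API; the connection path, table name, and query vector are illustrative, and an ANN index is assumed to already exist:

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")  # illustrative path
table = db.open_table("vectors")             # illustrative table name

results = (
    table.search([0.1, 0.3])
    .limit(10)           # number of results to return
    .nprobes(20)         # probe more partitions for higher recall, at higher latency
    .refine_factor(10)   # re-rank 10x the requested results in memory
    .to_pandas()
)
```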
=== "Python" === "Python"
@@ -169,7 +171,7 @@ There are a couple of parameters that can be used to fine-tune the search:
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/ann_indexes.ts:search1" --8<-- "nodejs/examples/ann_indexes.test.ts:search1"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -203,7 +205,7 @@ You can further filter the elements returned by a search using a where clause.
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/ann_indexes.ts:search2" --8<-- "nodejs/examples/ann_indexes.test.ts:search2"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -235,7 +237,7 @@ You can select the columns returned by the query using a select clause.
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/ann_indexes.ts:search3" --8<-- "nodejs/examples/ann_indexes.test.ts:search3"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -275,7 +277,15 @@ Product quantization can lead to approximately `16 * sizeof(float32) / 1 = 64` t
A higher number of partitions could lead to more efficient I/O during queries and better accuracy, but it takes much more time to train.
On the `SIFT-1M` dataset, our benchmark shows that keeping each partition at 1K-4K rows leads to a good latency / recall trade-off.

`num_sub_vectors` specifies how many Product Quantization (PQ) short codes to generate on each vector. The number should be a factor of the vector dimension. Because PQ is a lossy compression of the original vector, a higher `num_sub_vectors` usually results in less space distortion, and thus yields better accuracy. However, a higher `num_sub_vectors` also causes heavier I/O and more PQ computation, and thus, higher latency. `dimension / num_sub_vectors` should be a multiple of 8 for optimum SIMD efficiency.

!!! note
    If `num_sub_vectors` is set to be greater than the vector dimension, you will see errors like `attempt to divide by zero`.

### How to choose `m` and `ef_construction` for `IVF_HNSW_*` index?

`m` determines the number of connections a new node establishes with its closest neighbors upon entering the graph. Typically, `m` falls within the range of 5 to 48. Lower `m` values are suitable for low-dimensional data or scenarios where recall is less critical. Conversely, higher `m` values are beneficial for high-dimensional data or when high recall is required. In essence, a larger `m` results in a denser graph with increased connectivity, but at the expense of higher memory consumption.

`ef_construction` balances build speed and accuracy. Higher values increase accuracy but slow down the build process. A typical range is 150 to 300. For good search results, a minimum value of 100 is recommended. In most cases, setting this value above 500 offers no additional benefit. Ensure that `ef_construction` is always set to a value equal to or greater than `ef` in the search phase.
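For reference, a minimal sketch of how `num_partitions` and `num_sub_vectors` are usually passed when building an `IVF_PQ` index from the Python sync API; the connection path, table name, row count, and chosen values are illustrative:

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")  # illustrative path
table = db.open_table("vectors")             # illustrative: ~1M rows of 1536-dim vectors

# 256 partitions keeps each partition in the 1K-4K row range for ~1M rows;
# 96 sub-vectors divides 1536 dims evenly (1536 / 96 = 16, a multiple of 8).
table.create_index(
    metric="cosine",
    num_partitions=256,
    num_sub_vectors=96,
)
```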


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="117" height="20"><linearGradient id="b" x2="0" y2="100%"><stop offset="0" stop-color="#bbb" stop-opacity=".1"/><stop offset="1" stop-opacity=".1"/></linearGradient><clipPath id="a"><rect width="117" height="20" rx="3" fill="#fff"/></clipPath><g clip-path="url(#a)"><path fill="#555" d="M0 0h30v20H0z"/><path fill="#007ec6" d="M30 0h87v20H30z"/><path fill="url(#b)" d="M0 0h117v20H0z"/></g><g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="110"><svg x="4px" y="0px" width="22px" height="20px" viewBox="-2 0 28 24" style="background-color: #fff;border-radius: 1px;"><path style="fill:#e8710a;" d="M1.977,16.77c-2.667-2.277-2.605-7.079,0-9.357C2.919,8.057,3.522,9.075,4.49,9.691c-1.152,1.6-1.146,3.201-0.004,4.803C3.522,15.111,2.918,16.126,1.977,16.77z"/><path style="fill:#f9ab00;" d="M12.257,17.114c-1.767-1.633-2.485-3.658-2.118-6.02c0.451-2.91,2.139-4.893,4.946-5.678c2.565-0.718,4.964-0.217,6.878,1.819c-0.884,0.743-1.707,1.547-2.434,2.446C18.488,8.827,17.319,8.435,16,8.856c-2.404,0.767-3.046,3.241-1.494,5.644c-0.241,0.275-0.493,0.541-0.721,0.826C13.295,15.939,12.511,16.3,12.257,17.114z"/><path style="fill:#e8710a;" d="M19.529,9.682c0.727-0.899,1.55-1.703,2.434-2.446c2.703,2.783,2.701,7.031-0.005,9.764c-2.648,2.674-6.936,2.725-9.701,0.115c0.254-0.814,1.038-1.175,1.528-1.788c0.228-0.285,0.48-0.552,0.721-0.826c1.053,0.916,2.254,1.268,3.6,0.83C20.502,14.551,21.151,11.927,19.529,9.682z"/><path style="fill:#f9ab00;" d="M4.49,9.691C3.522,9.075,2.919,8.057,1.977,7.413c2.209-2.398,5.721-2.942,8.476-1.355c0.555,0.32,0.719,0.606,0.285,1.128c-0.157,0.188-0.258,0.422-0.391,0.631c-0.299,0.47-0.509,1.067-0.929,1.371C8.933,9.539,8.523,8.847,8.021,8.746C6.673,8.475,5.509,8.787,4.49,9.691z"/><path style="fill:#f9ab00;" d="M1.977,16.77c0.941-0.644,1.545-1.659,2.509-2.277c1.373,1.152,2.85,1.433,4.45,0.499c0.332-0.194,0.503-0.088,0.673,0.19c0.386,0.635,0.753,1.285,1.181,1.89c0.34,0.48,0.222,0.715-0.253,1.006C7.84,19.73,4.205,19.188,1.977,16.77z"/></svg><text x="245" y="140" transform="scale(.1)" textLength="30"> </text><text x="725" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="770">Open in Colab</text><text x="725" y="140" transform="scale(.1)" textLength="770">Open in Colab</text></g> </svg>


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="88.25" height="28" role="img" aria-label="GHOST"><title>GHOST</title><g shape-rendering="crispEdges"><rect width="88.25" height="28" fill="#000"/></g><g fill="#fff" text-anchor="middle" font-family="Verdana,Geneva,DejaVu Sans,sans-serif" text-rendering="geometricPrecision" font-size="100"><image x="9" y="7" width="14" height="14" xlink:href="data:image/svg+xml;base64,PHN2ZyBmaWxsPSIjZjdkZjFlIiByb2xlPSJpbWciIHZpZXdCb3g9IjAgMCAyNCAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48dGl0bGU+R2hvc3Q8L3RpdGxlPjxwYXRoIGQ9Ik0xMiAwQzUuMzczIDAgMCA1LjM3MyAwIDEyczUuMzczIDEyIDEyIDEyIDEyLTUuMzczIDEyLTEyUzE4LjYyNyAwIDEyIDB6bS4yNTYgMi4zMTNjMi40Ny4wMDUgNS4xMTYgMi4wMDggNS44OTggMi45NjJsLjI0NC4zYzEuNjQgMS45OTQgMy41NjkgNC4zNCAzLjU2OSA2Ljk2NiAwIDMuNzE5LTIuOTggNS44MDgtNi4xNTggNy41MDgtMS40MzMuNzY2LTIuOTggMS41MDgtNC43NDggMS41MDgtNC41NDMgMC04LjM2Ni0zLjU2OS04LjM2Ni04LjExMiAwLS43MDYuMTctMS40MjUuMzQyLTIuMTUuMTIyLS41MTUuMjQ0LTEuMDMzLjMwNy0xLjU0OS41NDgtNC41MzkgMi45NjctNi43OTUgOC40MjItNy40MDhhNC4yOSA0LjI5IDAgMDEuNDktLjAyNloiLz48L3N2Zz4="/><text transform="scale(.1)" x="541.25" y="175" textLength="442.5" fill="#fff" font-weight="bold">GHOST</text></g></svg>


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="95.5" height="28" role="img" aria-label="GITHUB"><title>GITHUB</title><g shape-rendering="crispEdges"><rect width="95.5" height="28" fill="#121011"/></g><g fill="#fff" text-anchor="middle" font-family="Verdana,Geneva,DejaVu Sans,sans-serif" text-rendering="geometricPrecision" font-size="100"><image x="9" y="7" width="14" height="14" xlink:href="data:image/svg+xml;base64,PHN2ZyBmaWxsPSJ3aGl0ZSIgcm9sZT0iaW1nIiB2aWV3Qm94PSIwIDAgMjQgMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PHRpdGxlPkdpdEh1YjwvdGl0bGU+PHBhdGggZD0iTTEyIC4yOTdjLTYuNjMgMC0xMiA1LjM3My0xMiAxMiAwIDUuMzAzIDMuNDM4IDkuOCA4LjIwNSAxMS4zODUuNi4xMTMuODItLjI1OC44Mi0uNTc3IDAtLjI4NS0uMDEtMS4wNC0uMDE1LTIuMDQtMy4zMzguNzI0LTQuMDQyLTEuNjEtNC4wNDItMS42MUM0LjQyMiAxOC4wNyAzLjYzMyAxNy43IDMuNjMzIDE3LjdjLTEuMDg3LS43NDQuMDg0LS43MjkuMDg0LS43MjkgMS4yMDUuMDg0IDEuODM4IDEuMjM2IDEuODM4IDEuMjM2IDEuMDcgMS44MzUgMi44MDkgMS4zMDUgMy40OTUuOTk4LjEwOC0uNzc2LjQxNy0xLjMwNS43Ni0xLjYwNS0yLjY2NS0uMy01LjQ2Ni0xLjMzMi01LjQ2Ni01LjkzIDAtMS4zMS40NjUtMi4zOCAxLjIzNS0zLjIyLS4xMzUtLjMwMy0uNTQtMS41MjMuMTA1LTMuMTc2IDAgMCAxLjAwNS0uMzIyIDMuMyAxLjIzLjk2LS4yNjcgMS45OC0uMzk5IDMtLjQwNSAxLjAyLjAwNiAyLjA0LjEzOCAzIC40MDUgMi4yOC0xLjU1MiAzLjI4NS0xLjIzIDMuMjg1LTEuMjMuNjQ1IDEuNjUzLjI0IDIuODczLjEyIDMuMTc2Ljc2NS44NCAxLjIzIDEuOTEgMS4yMyAzLjIyIDAgNC42MS0yLjgwNSA1LjYyNS01LjQ3NSA1LjkyLjQyLjM2LjgxIDEuMDk2LjgxIDIuMjIgMCAxLjYwNi0uMDE1IDIuODk2LS4wMTUgMy4yODYgMCAuMzE1LjIxLjY5LjgyNS41N0MyMC41NjUgMjIuMDkyIDI0IDE3LjU5MiAyNCAxMi4yOTdjMC02LjYyNy01LjM3My0xMi0xMi0xMiIvPjwvc3ZnPg=="/><text transform="scale(.1)" x="577.5" y="175" textLength="515" fill="#fff" font-weight="bold">GITHUB</text></g></svg>


@@ -0,0 +1,22 @@
[image diff: badge SVG (Hugging Face Spaces style logo), 12 KiB]

View File

@@ -0,0 +1 @@
[image diff: "PYTHON" badge SVG, 2.6 KiB]

View File

@@ -157,7 +157,7 @@ recommend switching to stable releases.
import * as lancedb from "@lancedb/lancedb";
import * as arrow from "apache-arrow";
---8<-- "nodejs/examples/basic.ts:connect"
+--8<-- "nodejs/examples/basic.test.ts:connect"
```
=== "vectordb (deprecated)"
@@ -212,7 +212,7 @@ table.
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/basic.ts:create_table"
+--8<-- "nodejs/examples/basic.test.ts:create_table"
```
=== "vectordb (deprecated)"
@@ -268,7 +268,7 @@ similar to a `CREATE TABLE` statement in SQL.
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/basic.ts:create_empty_table"
+--8<-- "nodejs/examples/basic.test.ts:create_empty_table"
```
=== "vectordb (deprecated)"
@@ -298,7 +298,7 @@ Once created, you can open a table as follows:
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/basic.ts:open_table"
+--8<-- "nodejs/examples/basic.test.ts:open_table"
```
=== "vectordb (deprecated)"
@@ -327,7 +327,7 @@ If you forget the name of your table, you can always get a listing of all table
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/basic.ts:table_names"
+--8<-- "nodejs/examples/basic.test.ts:table_names"
```
=== "vectordb (deprecated)"
@@ -357,7 +357,7 @@ After a table has been created, you can always add more data to it as follows:
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/basic.ts:add_data"
+--8<-- "nodejs/examples/basic.test.ts:add_data"
```
=== "vectordb (deprecated)"
@@ -389,7 +389,7 @@ Once you've embedded the query, you can find its nearest neighbors as follows:
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/basic.ts:vector_search"
+--8<-- "nodejs/examples/basic.test.ts:vector_search"
```
=== "vectordb (deprecated)"
@@ -429,7 +429,7 @@ LanceDB allows you to create an ANN index on a table as follows:
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/basic.ts:create_index"
+--8<-- "nodejs/examples/basic.test.ts:create_index"
```
=== "vectordb (deprecated)"
@@ -469,7 +469,7 @@ This can delete any number of rows that match the filter.
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/basic.ts:delete_rows"
+--8<-- "nodejs/examples/basic.test.ts:delete_rows"
```
=== "vectordb (deprecated)"
@@ -527,7 +527,7 @@ Use the `drop_table()` method on the database to remove a table.
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/basic.ts:drop_table"
+--8<-- "nodejs/examples/basic.test.ts:drop_table"
```
=== "vectordb (deprecated)"
@@ -561,8 +561,8 @@ You can use the embedding API when working with embedding models. It automatical
=== "@lancedb/lancedb"
```typescript
---8<-- "nodejs/examples/embedding.ts:imports"
---8<-- "nodejs/examples/embedding.ts:openai_embeddings"
+--8<-- "nodejs/examples/embedding.test.ts:imports"
+--8<-- "nodejs/examples/embedding.test.ts:openai_embeddings"
```
=== "Rust"
@@ -572,7 +572,7 @@ You can use the embedding API when working with embedding models. It automatical
--8<-- "rust/lancedb/examples/openai.rs:openai_embeddings"
```
-Learn about using the existing integrations and creating custom embedding functions in the [embedding API guide](./embeddings/).
+Learn about using the existing integrations and creating custom embedding functions in the [embedding API guide](./embeddings/index.md).
## What's next

View File

@@ -0,0 +1,99 @@
# Understanding HNSW index
Approximate Nearest Neighbor (ANN) search is a method for finding data points near a given point in a dataset, though not always the exact nearest one. HNSW is one of the most accurate and fastest approximate nearest neighbor search algorithms. It is beneficial in high-dimensional spaces, where finding the exact nearest neighbor would be too slow and costly.
[Jump to usage](#usage)
There are three main types of ANN search algorithms:
* **Tree-based search algorithms**: Use a tree structure to organize and store data points.
* **Hash-based search algorithms**: Use a specialized geometric hash table to store and manage data points. These algorithms typically focus on theoretical guarantees, and don't usually perform as well as the other approaches in practice.
* **Graph-based search algorithms**: Use a graph structure to store data points, which can be a bit complex.
HNSW is a graph-based algorithm. All graph-based search algorithms rely on the idea of a k-nearest neighbor (or k-approximate nearest neighbor) graph, which we outline below.
HNSW also combines this with the ideas behind a classic 1-dimensional search data structure: the skip list.
## k-Nearest Neighbor Graphs and k-Approximate Nearest Neighbor Graphs
The k-nearest neighbor graph actually predates its use for ANN search. Its construction is quite simple:
* Each vector in the dataset is given an associated vertex.
* Each vertex has outgoing edges to its k nearest neighbors. That is, the k closest other vertices by Euclidean distance between the two corresponding vectors. This can be thought of as a "friend list" for the vertex.
* For some applications (including nearest-neighbor search), the incoming edges are also added.
Eventually, it was realized that the following greedy search method over such a graph typically results in good approximate nearest neighbors:
* Given a query vector, start at some fixed "entry point" vertex (e.g. the approximate center node).
* Look at that vertex's neighbors. If any of them are closer to the query vector than the current vertex, then move to that vertex.
* Repeat until a local optimum is found.
The above algorithm also generalizes to e.g. top 10 approximate nearest neighbors.
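To make the routine concrete, here is a minimal, illustrative sketch in Python (not LanceDB's API). It assumes the graph is a plain adjacency list mapping each vertex id to its neighbor ids, with the vectors held in a NumPy array:
```python
import numpy as np

def greedy_search(graph, vectors, query, entry_point):
    """Greedy walk over a k-NN / k-ANN graph: keep moving to a closer neighbor."""
    current = entry_point
    current_dist = np.linalg.norm(vectors[current] - query)
    while True:
        best, best_dist = current, current_dist
        for neighbor in graph[current]:          # the vertex's "friend list"
            d = np.linalg.norm(vectors[neighbor] - query)
            if d < best_dist:
                best, best_dist = neighbor, d
        if best == current:                      # local optimum reached
            return current, current_dist
        current, current_dist = best, best_dist
```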
Computing an exact k-nearest neighbor graph is actually quite slow, taking quadratic time in the dataset size. It was quickly realized that near-identical performance can be achieved using a k-approximate nearest neighbor graph. That is, instead of obtaining the exact k nearest neighbors for each vertex, an approximate nearest neighbor search data structure is used to build the graph much faster.
In fact, no separate data structure is needed: the graph can be built "incrementally".
That is, if you start with a k-ANN graph for n-1 vertices, you can extend it to a k-ANN graph for n vertices by using the existing graph to obtain the approximate nearest neighbors of the new vertex.
One downside of k-NN and k-ANN graphs alone is that one must typically build them with a large value of k to get decent results, resulting in a large index.
## HNSW: Hierarchical Navigable Small Worlds
HNSW builds on k-ANN in two main ways:
* Instead of getting the k-approximate nearest neighbors for a large value of k, it sparsifies the k-ANN graph using a carefully chosen "edge pruning" heuristic, allowing for the number of edges per vertex to be limited to a relatively small constant.
* The "entry point" vertex is chosen dynamically using a recursively constructed data structure on a subset of the data, similarly to a skip list.
This recursive structure can be thought of as separating into layers:
* At the bottom-most layer, a k-ANN graph on the whole dataset is present.
* At the second layer, a k-ANN graph on a fraction of the dataset (e.g. 10%) is present.
* At the Lth layer, a k-ANN graph is present over a (constant) fraction (e.g. 10%) of the vectors/vertices present in the (L-1)th layer.
Then the greedy search routine operates as follows:
* At the top layer (using an arbitrary vertex as an entry point), use the greedy local search routine on the k-ANN graph to get an approximate nearest neighbor at that layer.
* Using the approximate nearest neighbor found in the previous layer as an entry point, find an approximate nearest neighbor in the next layer with the same method.
* Repeat until the bottom-most layer is reached. Then use the entry point to find multiple nearest neighbors (e.g. top 10).
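As a rough sketch (again illustrative, not LanceDB's implementation), the layered descent can be written on top of the greedy routine above. It assumes `layers` is ordered from the sparsest top layer down to the full bottom-layer graph, and that every vertex in a layer also appears in the layer below:
```python
def hnsw_search(layers, vectors, query, top_entry_point, k=10):
    """Descend the layer hierarchy, refining the entry point at each level."""
    entry = top_entry_point
    for graph in layers[:-1]:                    # upper, sparser layers
        entry, _ = greedy_search(graph, vectors, query, entry)
    # at the bottom layer, gather a few candidates around the final entry point
    bottom = layers[-1]
    entry, _ = greedy_search(bottom, vectors, query, entry)
    candidates = [entry] + list(bottom[entry])
    candidates.sort(key=lambda v: np.linalg.norm(vectors[v] - query))
    return candidates[:k]                        # e.g. top 10 approximate neighbors
```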
## Usage
There are three key parameters to set when constructing an HNSW index:
* `metric`: The distance metric to use, e.g. `L2` (Euclidean). `dot` and `cosine` distance are also supported.
* `m`: The number of neighbors to select for each vector in the HNSW graph.
* `ef_construction`: The number of candidates to evaluate during the construction of the HNSW graph.
We can combine the above concepts to understand how to build and query an HNSW index in LanceDB.
### Construct index
```python
import lancedb
import numpy as np
uri = "/tmp/lancedb"
db = lancedb.connect(uri)
# Create 10,000 sample vectors
data = [
{"vector": row, "item": f"item {i}"}
for i, row in enumerate(np.random.random((10_000, 1536)).astype('float32'))
]
# Add the vectors to a table
tbl = db.create_table("my_vectors", data=data)
# Create and train the HNSW index for a 1536-dimensional vector
# Make sure you have enough data in the table for an effective training step
tbl.create_index(index_type="IVF_HNSW_SQ")
```
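The parameters listed above can also be passed when building the index. The keyword names below (`m`, `ef_construction`) mirror that list but are illustrative; check them against the API reference for your LanceDB version:
```python
# Illustrative tuning; verify the keyword names against your LanceDB version
tbl.create_index(
    index_type="IVF_HNSW_SQ",
    metric="L2",           # "dot" and "cosine" are also supported
    m=20,                  # neighbors kept per vector in the HNSW graph
    ef_construction=300,   # candidates evaluated while building the graph
)
```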
### Query the index
```python
# Search using a random 1536-dimensional embedding
tbl.search(np.random.random((1536))) \
.limit(2) \
.to_pandas()
```

View File

@@ -58,8 +58,10 @@ In Python, the index can be created as follows:
# Make sure you have enough data in the table for an effective training step
tbl.create_index(metric="L2", num_partitions=256, num_sub_vectors=96)
```
+!!! note
+    `num_partitions`=256 and `num_sub_vectors`=96 do not work for every dataset. Those values need to be adjusted for your particular dataset.
-The `num_partitions` is usually chosen to target a particular number of vectors per partition. `num_sub_vectors` is typically chosen based on the desired recall and the dimensionality of the vector. See the [FAQs](#faq) below for best practices on choosing these parameters.
+The `num_partitions` is usually chosen to target a particular number of vectors per partition. `num_sub_vectors` is typically chosen based on the desired recall and the dimensionality of the vector. See [here](../ann_indexes.md/#how-to-choose-num_partitions-and-num_sub_vectors-for-ivf_pq-index) for best practices on choosing these parameters.
### Query the index

View File

@@ -0,0 +1,67 @@
# Imagebind embeddings
We have support for [imagebind](https://github.com/facebookresearch/ImageBind) model embeddings. You can install our packaged version of the model via `pip install imagebind-packaged==0.1.2`.
This function is registered as `imagebind` and supports Audio, Video and Text modalities (extending to Thermal, Depth and IMU data):
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"imagebind_huge"` | Name of the model. |
| `device` | `str` | `"cpu"` | The device to run the model on. Can be `"cpu"` or `"gpu"`. |
| `normalize` | `bool` | `False` | set to `True` to normalize your inputs before model ingestion. |
Below is an example demonstrating how the API works:
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
db = lancedb.connect("/tmp/db")  # local path; adjust as needed
func = get_registry().get("imagebind").create()
class ImageBindModel(LanceModel):
text: str
image_uri: str = func.SourceField()
audio_path: str
vector: Vector(func.ndims()) = func.VectorField()
# add locally accessible image paths
text_list=["A dog.", "A car", "A bird"]
image_paths=[".assets/dog_image.jpg", ".assets/car_image.jpg", ".assets/bird_image.jpg"]
audio_paths=[".assets/dog_audio.wav", ".assets/car_audio.wav", ".assets/bird_audio.wav"]
# Load data
inputs = [
{"text": a, "audio_path": b, "image_uri": c}
for a, b, c in zip(text_list, audio_paths, image_paths)
]
#create table and add data
table = db.create_table("img_bind", schema=ImageBindModel)
table.add(inputs)
```
Now, we can search using any modality:
#### image search
```python
query_image = "./assets/dog_image2.jpg" #download an image and enter that path here
actual = table.search(query_image).limit(1).to_pydantic(ImageBindModel)[0]
print(actual.text == "A dog.")
```
#### audio search
```python
query_audio = "./assets/car_audio2.wav" #download an audio clip and enter path here
actual = table.search(query_audio).limit(1).to_pydantic(ImageBindModel)[0]
print(actual.text == "A car")
```
#### Text search
You can add any input query and fetch the result as follows:
```python
query = "an animal which flies and tweets"
actual = table.search(query).limit(1).to_pydantic(ImageBindModel)[0]
print(actual.text == "A bird")
```
If you have any questions about the embeddings API, supported models, or see a relevant model missing, please raise an issue [on GitHub](https://github.com/lancedb/lancedb/issues).

View File

@@ -0,0 +1,51 @@
# Jina Embeddings : Multimodal
Jina embeddings can also be used to embed both text and image data. Only some of the models support image data; you can check the list
under [https://jina.ai/embeddings/](https://jina.ai/embeddings/)
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"jina-clip-v1"` | The model ID of the jina model to use |
Usage Example:
```python
import os
import requests
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
import pandas as pd
os.environ['JINA_API_KEY'] = 'jina_*'
db = lancedb.connect("~/.lancedb")
func = get_registry().get("jina").create()
class Images(LanceModel):
label: str
image_uri: str = func.SourceField() # image uri as the source
image_bytes: bytes = func.SourceField() # image bytes as the source
vector: Vector(func.ndims()) = func.VectorField() # vector column
vec_from_bytes: Vector(func.ndims()) = func.VectorField() # Another vector column
table = db.create_table("images", schema=Images)
labels = ["cat", "cat", "dog", "dog", "horse", "horse"]
uris = [
"http://farm1.staticflickr.com/53/167798175_7c7845bbbd_z.jpg",
"http://farm1.staticflickr.com/134/332220238_da527d8140_z.jpg",
"http://farm9.staticflickr.com/8387/8602747737_2e5c2a45d4_z.jpg",
"http://farm5.staticflickr.com/4092/5017326486_1f46057f5f_z.jpg",
"http://farm9.staticflickr.com/8216/8434969557_d37882c42d_z.jpg",
"http://farm6.staticflickr.com/5142/5835678453_4f3a4edb45_z.jpg",
]
# get each uri as bytes
image_bytes = [requests.get(uri).content for uri in uris]
table.add(
pd.DataFrame({"label": labels, "image_uri": uris, "image_bytes": image_bytes})
)
```
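As a quick follow-up sketch (query text is made up), you can search the table above with plain text against either vector column, mirroring the OpenClip example later in these docs:
```python
# text query against the default vector column
hit = table.search("a dog playing outside").limit(1).to_pydantic(Images)[0]
print(hit.label)
# the same text query against the second vector column
hit_bytes = (
    table.search("a dog playing outside", vector_column_name="vec_from_bytes")
    .limit(1)
    .to_pydantic(Images)[0]
)
print(hit_bytes.label)
```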

View File

@@ -0,0 +1,82 @@
# OpenClip embeddings
We support CLIP model embeddings using the open source alternative, [open-clip](https://github.com/mlfoundations/open_clip) which supports various customizations. It is registered as `open-clip` and supports the following customizations:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"ViT-B-32"` | The name of the model. |
| `pretrained` | `str` | `"laion2b_s34b_b79k"` | The name of the pretrained model to load. |
| `device` | `str` | `"cpu"` | The device to run the model on. Can be `"cpu"` or `"gpu"`. |
| `batch_size` | `int` | `64` | The number of images to process in a batch. |
| `normalize` | `bool` | `True` | Whether to normalize the input images before feeding them to the model. |
This embedding function supports ingesting images as both bytes and URLs. You can query them using both text and other images.
!!! info
LanceDB supports ingesting images directly from accessible links.
```python
import io
import requests
import lancedb
import pandas as pd
from PIL import Image
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
db = lancedb.connect("/tmp/db")  # local path; adjust as needed
func = get_registry().get("open-clip").create()
class Images(LanceModel):
label: str
image_uri: str = func.SourceField() # image uri as the source
image_bytes: bytes = func.SourceField() # image bytes as the source
vector: Vector(func.ndims()) = func.VectorField() # vector column
vec_from_bytes: Vector(func.ndims()) = func.VectorField() # Another vector column
table = db.create_table("images", schema=Images)
labels = ["cat", "cat", "dog", "dog", "horse", "horse"]
uris = [
"http://farm1.staticflickr.com/53/167798175_7c7845bbbd_z.jpg",
"http://farm1.staticflickr.com/134/332220238_da527d8140_z.jpg",
"http://farm9.staticflickr.com/8387/8602747737_2e5c2a45d4_z.jpg",
"http://farm5.staticflickr.com/4092/5017326486_1f46057f5f_z.jpg",
"http://farm9.staticflickr.com/8216/8434969557_d37882c42d_z.jpg",
"http://farm6.staticflickr.com/5142/5835678453_4f3a4edb45_z.jpg",
]
# get each uri as bytes
image_bytes = [requests.get(uri).content for uri in uris]
table.add(
pd.DataFrame({"label": labels, "image_uri": uris, "image_bytes": image_bytes})
)
```
Now we can search using text from both the default vector column and the custom vector column
```python
# text search
actual = table.search("man's best friend").limit(1).to_pydantic(Images)[0]
print(actual.label) # prints "dog"
frombytes = (
table.search("man's best friend", vector_column_name="vec_from_bytes")
.limit(1)
.to_pydantic(Images)[0]
)
print(frombytes.label)
```
Because we're using a multi-modal embedding function, we can also search using images
```python
# image search
query_image_uri = "http://farm1.staticflickr.com/200/467715466_ed4a31801f_z.jpg"
image_bytes = requests.get(query_image_uri).content
query_image = Image.open(io.BytesIO(image_bytes))
actual = table.search(query_image).limit(1).to_pydantic(Images)[0]
print(actual.label == "dog")
# image search using a custom vector column
other = (
table.search(query_image, vector_column_name="vec_from_bytes")
.limit(1)
.to_pydantic(Images)[0]
)
print(other.label)
```

View File

@@ -0,0 +1,51 @@
# AWS Bedrock Text Embedding Functions
AWS Bedrock supports multiple base models for generating text embeddings. You need to set up the AWS credentials to use this embedding function.
You can do so using `awscli` and also add your session token:
```shell
aws configure
aws configure set aws_session_token "<your_session_token>"
```
To ensure that the credentials are set up correctly, you can run the following command:
```shell
aws sts get-caller-identity
```
Supported Embedding modelIDs are:
* `amazon.titan-embed-text-v1`
* `cohere.embed-english-v3`
* `cohere.embed-multilingual-v3`
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| **name** | str | "amazon.titan-embed-text-v1" | The model ID of the bedrock model to use. Supported base models for Text Embeddings: amazon.titan-embed-text-v1, cohere.embed-english-v3, cohere.embed-multilingual-v3 |
| **region** | str | "us-east-1" | Optional name of the AWS Region in which the service should be called (e.g., "us-east-1"). |
| **profile_name** | str | None | Optional name of the AWS profile to use for calling the Bedrock service. If not specified, the default profile will be used. |
| **assumed_role** | str | None | Optional ARN of an AWS IAM role to assume for calling the Bedrock service. If not specified, the current active credentials will be used. |
| **role_session_name** | str | "lancedb-embeddings" | Optional name of the AWS IAM role session to use for calling the Bedrock service. If not specified, a "lancedb-embeddings" name will be used. |
| **runtime** | bool | True | Optional choice of getting different client to perform operations with the Amazon Bedrock service. |
| **max_retries** | int | 7 | Optional number of retries to perform when a request fails. |
Usage Example:
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
import pandas as pd
model = get_registry().get("bedrock-text").create()
class TextModel(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
df = pd.DataFrame({"text": ["hello world", "goodbye world"]})
db = lancedb.connect("/tmp/db")  # local path; adjust as needed
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(df)
rs = tbl.search("hello").limit(1).to_pandas()
```
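The remaining parameters from the table are passed to `create` in the same way; the values below are illustrative only:
```python
# Illustrative: pick the model, region and profile that match your AWS setup
model = (
    get_registry()
    .get("bedrock-text")
    .create(
        name="cohere.embed-english-v3",
        region="us-east-1",
        profile_name="default",
        max_retries=7,
    )
)
```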

View File

@@ -0,0 +1,63 @@
# Cohere Embeddings
Using the Cohere API requires the `cohere` package, which can be installed using `pip install cohere`. Cohere embeddings are used to generate embeddings for text data. The embeddings can be used for various tasks like semantic search, clustering, and classification.
You also need to set the `COHERE_API_KEY` environment variable to use the Cohere API.
Supported models are:
- embed-english-v3.0
- embed-multilingual-v3.0
- embed-english-light-v3.0
- embed-multilingual-light-v3.0
- embed-english-v2.0
- embed-english-light-v2.0
- embed-multilingual-v2.0
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|--------|---------|
| `name` | `str` | `"embed-english-v2.0"` | The model ID of the cohere model to use. Supported base models for Text Embeddings: embed-english-v3.0, embed-multilingual-v3.0, embed-english-light-v3.0, embed-multilingual-light-v3.0, embed-english-v2.0, embed-english-light-v2.0, embed-multilingual-v2.0 |
| `source_input_type` | `str` | `"search_document"` | The type of input data to be used for the source column. |
| `query_input_type` | `str` | `"search_query"` | The type of input data to be used for the query. |
Cohere supports the following input types:
| Input Type | Description |
|---|---|
| "`search_document`" | Used for embeddings stored in a vector database for search use-cases. |
| "`search_query`" | Used for embeddings of search queries run against a vector DB. |
| "`semantic_similarity`" | Specifies the given text will be used for Semantic Textual Similarity (STS). |
| "`classification`" | Used for embeddings passed through a text classifier. |
| "`clustering`" | Used for the embeddings run through a clustering algorithm. |
Usage Example:
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry
cohere = (
    EmbeddingFunctionRegistry
    .get_instance()
    .get("cohere")
    .create(name="embed-multilingual-v2.0")
)
class TextModel(LanceModel):
text: str = cohere.SourceField()
vector: Vector(cohere.ndims()) = cohere.VectorField()
data = [ { "text": "hello world" },
{ "text": "goodbye world" }]
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(data)
```
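A minimal query sketch against the table above; the query string is embedded using the configured `query_input_type` (here the default, "search_query"):
```python
# embed the query with query_input_type and return the closest row
rs = tbl.search("a greeting").limit(1).to_pandas()
print(rs["text"][0])
```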

View File

@@ -0,0 +1,35 @@
# Gemini Embeddings
With Google's Gemini, you can represent text (words, sentences, and blocks of text) in a vectorized form, making it easier to compare and contrast embeddings. For example, two texts that share a similar subject matter or sentiment should have similar embeddings, which can be identified through mathematical comparison techniques such as cosine similarity. For more on how and why you should use embeddings, refer to the Embeddings guide.
The Gemini Embedding Model API supports various task types:
| Task Type | Description |
|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
| "`retrieval_query`" | Specifies the given text is a query in a search/retrieval setting. |
| "`retrieval_document`" | Specifies the given text is a document in a search/retrieval setting. Using this task type requires a title, but one is automatically provided by the Embeddings API. |
| "`semantic_similarity`" | Specifies the given text will be used for Semantic Textual Similarity (STS). |
| "`classification`" | Specifies that the embeddings will be used for classification. |
| "`clustering`" | Specifies that the embeddings will be used for clustering. |
Usage Example:
```python
import lancedb
import pandas as pd
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
model = get_registry().get("gemini-text").create()
class TextModel(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
df = pd.DataFrame({"text": ["hello world", "goodbye world"]})
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(df)
rs = tbl.search("hello").limit(1).to_pandas()
```

View File

@@ -0,0 +1,24 @@
# Huggingface embedding models
We offer support for all Hugging Face models (which can be loaded via the [transformers](https://huggingface.co/docs/transformers/en/index) library). The default model is `colbert-ir/colbertv2.0`, which also has its own special callout - `registry.get("colbert")`. Some Hugging Face models require custom code defined on the Hugging Face Hub in their own modeling files. You may enable this by setting `trust_remote_code=True`. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
Example usage -
```python
import lancedb
import pandas as pd
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector
model = get_registry().get("huggingface").create(name='facebook/bart-base')
class Words(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
df = pd.DataFrame({"text": ["hi hello sayonara", "goodbye world"]})
db = lancedb.connect("/tmp/db")  # local path; adjust as needed
table = db.create_table("greets", schema=Words)
table.add(df)
query = "old greeting"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```

View File

@@ -0,0 +1,75 @@
# IBM watsonx.ai Embeddings
Generate text embeddings using IBM's watsonx.ai platform.
## Supported Models
You can find a list of supported models at [IBM watsonx.ai Documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models-embed.html?context=wx). The currently supported model names are:
- `ibm/slate-125m-english-rtrvr`
- `ibm/slate-30m-english-rtrvr`
- `sentence-transformers/all-minilm-l12-v2`
- `intfloat/multilingual-e5-large`
## Parameters
The following parameters can be passed to the `create` method:
| Parameter | Type | Default Value | Description |
|------------|----------|----------------------------------|-----------------------------------------------------------|
| name | str | "ibm/slate-125m-english-rtrvr" | The model ID of the watsonx.ai model to use |
| api_key | str | None | Optional IBM Cloud API key (or set `WATSONX_API_KEY`) |
| project_id | str | None | Optional watsonx project ID (or set `WATSONX_PROJECT_ID`) |
| url | str | None | Optional custom URL for the watsonx.ai instance |
| params | dict | None | Optional additional parameters for the embedding model |
## Usage Example
First, the watsonx.ai library is an optional dependency, so it must be installed separately:
```sh
pip install ibm-watsonx-ai
```
Optionally set environment variables (if not passing credentials to `create` directly):
```sh
export WATSONX_API_KEY="YOUR_WATSONX_API_KEY"
export WATSONX_PROJECT_ID="YOUR_WATSONX_PROJECT_ID"
```
```python
import os
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry
watsonx_embed = (
    EmbeddingFunctionRegistry
    .get_instance()
    .get("watsonx")
    .create(
        name="ibm/slate-125m-english-rtrvr",
        # Uncomment and set these if not using environment variables
        # api_key="your_api_key_here",
        # project_id="your_project_id_here",
        # url="your_watsonx_url_here",
        # params={...},
    )
)
class TextModel(LanceModel):
text: str = watsonx_embed.SourceField()
vector: Vector(watsonx_embed.ndims()) = watsonx_embed.VectorField()
data = [
{"text": "hello world"},
{"text": "goodbye world"},
]
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("watsonx_test", schema=TextModel, mode="overwrite")
tbl.add(data)
rs = tbl.search("hello").limit(1).to_pandas()
print(rs)
```

View File

@@ -0,0 +1,50 @@
# Instructor Embeddings
[Instructor](https://instructor-embedding.github.io/) is an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g. classification, retrieval, clustering, text evaluation, etc.) and domains (e.g. science, finance, etc.) by simply providing the task instruction, without any finetuning.
If you want to calculate customized embeddings for specific sentences, you can follow the unified template to write instructions.
!!! info
Represent the `domain` `text_type` for `task_objective`:
* `domain` is optional, and it specifies the domain of the text, e.g. science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g. sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g. retrieve a document, classify the sentence, etc.
More information about the model can be found at the [source URL](https://github.com/xlang-ai/instructor-embedding).
| Argument | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | "hkunlp/instructor-base" | The name of the model to use |
| `batch_size` | `int` | `32` | The batch size to use when generating embeddings |
| `device` | `str` | `"cpu"` | The device to use when generating embeddings |
| `show_progress_bar` | `bool` | `True` | Whether to show a progress bar when generating embeddings |
| `normalize_embeddings` | `bool` | `True` | Whether to normalize the embeddings |
| `quantize` | `bool` | `False` | Whether to quantize the model |
| `source_instruction` | `str` | `"represent the docuement for retreival"` | The instruction for the source column |
| `query_instruction` | `str` | `"represent the document for retreiving the most similar documents"` | The instruction for the query |
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
instructor = get_registry().get("instructor").create(
source_instruction="represent the docuement for retreival",
query_instruction="represent the document for retreiving the most similar documents"
)
class Schema(LanceModel):
vector: Vector(instructor.ndims()) = instructor.VectorField()
text: str = instructor.SourceField()
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=Schema, mode="overwrite")
texts = [{"text": "Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that..."},
{"text": "The disparate impact theory is especially controversial under the Fair Housing Act because the Act..."},
{"text": "Disparate impact in United States labor law refers to practices in employment, housing, and other areas that.."}]
tbl.add(texts)
```
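A short query sketch (query text is made up); the query is embedded using the `query_instruction` configured above:
```python
query = "housing discrimination law"
result = tbl.search(query).limit(1).to_pydantic(Schema)[0]
print(result.text)
```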

View File

@@ -0,0 +1,39 @@
# Jina Embeddings
Jina embeddings are used to generate embeddings for text and image data.
You also need to set the `JINA_API_KEY` environment variable to use the Jina API.
You can find a list of supported models under [https://jina.ai/embeddings/](https://jina.ai/embeddings/)
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"jina-clip-v1"` | The model ID of the jina model to use |
Usage Example:
```python
import os
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry
os.environ['JINA_API_KEY'] = 'jina_*'
jina_embed = EmbeddingFunctionRegistry.get_instance().get("jina").create(name="jina-embeddings-v2-base-en")
class TextModel(LanceModel):
text: str = jina_embed.SourceField()
vector: Vector(jina_embed.ndims()) = jina_embed.VectorField()
data = [{"text": "hello world"},
{"text": "goodbye world"}]
db = lancedb.connect("~/.lancedb-2")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(data)
```

View File

@@ -0,0 +1,37 @@
# Ollama embeddings
Generate embeddings via the [ollama](https://github.com/ollama/ollama-python) python library. More details:
- [Ollama docs on embeddings](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings)
- [Ollama blog on embeddings](https://ollama.com/blog/embedding-models)
| Parameter | Type | Default Value | Description |
|------------------------|----------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | `str` | `nomic-embed-text` | The name of the model. |
| `host` | `str` | `http://localhost:11434` | The Ollama host to connect to. |
| `options` | `ollama.Options` or `dict` | `None` | Additional model parameters listed in the documentation for the Modelfile such as `temperature`. |
| `keep_alive` | `float` or `str` | `"5m"` | Controls how long the model will stay loaded into memory following the request. |
| `ollama_client_kwargs` | `dict` | `{}` | kwargs that can be passed to the `ollama.Client`. |
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
db = lancedb.connect("/tmp/db")
func = get_registry().get("ollama").create(name="nomic-embed-text")
class Words(LanceModel):
text: str = func.SourceField()
vector: Vector(func.ndims()) = func.VectorField()
table = db.create_table("words", schema=Words, mode="overwrite")
table.add([
{"text": "hello world"},
{"text": "goodbye world"}
])
query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```

View File

@@ -0,0 +1,34 @@
# OpenAI embeddings
LanceDB registers the OpenAI embeddings function in the registry by default, as `openai`. Below are the parameters that you can customize when creating the instances:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"text-embedding-ada-002"` | The name of the model. |
| `dim` | `int` | Model default | For OpenAI's newer text-embedding-3 models, you can specify an output dimensionality smaller than the default 1536; this parameter supports that. |
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
db = lancedb.connect("/tmp/db")
func = get_registry().get("openai").create(name="text-embedding-ada-002")
class Words(LanceModel):
text: str = func.SourceField()
vector: Vector(func.ndims()) = func.VectorField()
table = db.create_table("words", schema=Words, mode="overwrite")
table.add(
[
{"text": "hello world"},
{"text": "goodbye world"}
]
)
query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```
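For the newer text-embedding-3 models, the `dim` parameter above can shrink the stored vectors. A sketch, with an assumed model name and dimension:
```python
# request 256-dimensional embeddings instead of the full size (illustrative values)
func_small = get_registry().get("openai").create(name="text-embedding-3-small", dim=256)
class WordsSmall(LanceModel):
    text: str = func_small.SourceField()
    vector: Vector(func_small.ndims()) = func_small.VectorField()  # 256 dims here
```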

View File

@@ -0,0 +1,174 @@
# Sentence transformers
Allows you to set parameters when registering a `sentence-transformers` object.
!!! info
Sentence transformer embeddings are normalized by default. It is recommended to use normalized embeddings for similarity search.
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `all-MiniLM-L6-v2` | The name of the model |
| `device` | `str` | `cpu` | The device to run the model on (can be `cpu` or `gpu`) |
| `normalize` | `bool` | `True` | Whether to normalize the input text before feeding it to the model |
| `trust_remote_code` | `bool` | `False` | Whether to trust and execute remote code from the model's Huggingface repository |
??? "Check out available sentence-transformer models here!"
```markdown
- sentence-transformers/all-MiniLM-L12-v2
- sentence-transformers/paraphrase-mpnet-base-v2
- sentence-transformers/gtr-t5-base
- sentence-transformers/LaBSE
- sentence-transformers/all-MiniLM-L6-v2
- sentence-transformers/bert-base-nli-max-tokens
- sentence-transformers/bert-base-nli-mean-tokens
- sentence-transformers/bert-base-nli-stsb-mean-tokens
- sentence-transformers/bert-base-wikipedia-sections-mean-tokens
- sentence-transformers/bert-large-nli-cls-token
- sentence-transformers/bert-large-nli-max-tokens
- sentence-transformers/bert-large-nli-mean-tokens
- sentence-transformers/bert-large-nli-stsb-mean-tokens
- sentence-transformers/distilbert-base-nli-max-tokens
- sentence-transformers/distilbert-base-nli-mean-tokens
- sentence-transformers/distilbert-base-nli-stsb-mean-tokens
- sentence-transformers/distilroberta-base-msmarco-v1
- sentence-transformers/distilroberta-base-msmarco-v2
- sentence-transformers/nli-bert-base-cls-pooling
- sentence-transformers/nli-bert-base-max-pooling
- sentence-transformers/nli-bert-base
- sentence-transformers/nli-bert-large-cls-pooling
- sentence-transformers/nli-bert-large-max-pooling
- sentence-transformers/nli-bert-large
- sentence-transformers/nli-distilbert-base-max-pooling
- sentence-transformers/nli-distilbert-base
- sentence-transformers/nli-roberta-base
- sentence-transformers/nli-roberta-large
- sentence-transformers/roberta-base-nli-mean-tokens
- sentence-transformers/roberta-base-nli-stsb-mean-tokens
- sentence-transformers/roberta-large-nli-mean-tokens
- sentence-transformers/roberta-large-nli-stsb-mean-tokens
- sentence-transformers/stsb-bert-base
- sentence-transformers/stsb-bert-large
- sentence-transformers/stsb-distilbert-base
- sentence-transformers/stsb-roberta-base
- sentence-transformers/stsb-roberta-large
- sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens
- sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens
- sentence-transformers/xlm-r-base-en-ko-nli-ststb
- sentence-transformers/xlm-r-bert-base-nli-mean-tokens
- sentence-transformers/xlm-r-bert-base-nli-stsb-mean-tokens
- sentence-transformers/xlm-r-large-en-ko-nli-ststb
- sentence-transformers/bert-base-nli-cls-token
- sentence-transformers/all-distilroberta-v1
- sentence-transformers/multi-qa-MiniLM-L6-dot-v1
- sentence-transformers/multi-qa-distilbert-cos-v1
- sentence-transformers/multi-qa-distilbert-dot-v1
- sentence-transformers/multi-qa-mpnet-base-cos-v1
- sentence-transformers/multi-qa-mpnet-base-dot-v1
- sentence-transformers/nli-distilroberta-base-v2
- sentence-transformers/all-MiniLM-L6-v1
- sentence-transformers/all-mpnet-base-v1
- sentence-transformers/all-mpnet-base-v2
- sentence-transformers/all-roberta-large-v1
- sentence-transformers/allenai-specter
- sentence-transformers/average_word_embeddings_glove.6B.300d
- sentence-transformers/average_word_embeddings_glove.840B.300d
- sentence-transformers/average_word_embeddings_komninos
- sentence-transformers/average_word_embeddings_levy_dependency
- sentence-transformers/clip-ViT-B-32-multilingual-v1
- sentence-transformers/clip-ViT-B-32
- sentence-transformers/distilbert-base-nli-stsb-quora-ranking
- sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking
- sentence-transformers/distilroberta-base-paraphrase-v1
- sentence-transformers/distiluse-base-multilingual-cased-v1
- sentence-transformers/distiluse-base-multilingual-cased-v2
- sentence-transformers/distiluse-base-multilingual-cased
- sentence-transformers/facebook-dpr-ctx_encoder-multiset-base
- sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base
- sentence-transformers/facebook-dpr-question_encoder-multiset-base
- sentence-transformers/facebook-dpr-question_encoder-single-nq-base
- sentence-transformers/gtr-t5-large
- sentence-transformers/gtr-t5-xl
- sentence-transformers/gtr-t5-xxl
- sentence-transformers/msmarco-MiniLM-L-12-v3
- sentence-transformers/msmarco-MiniLM-L-6-v3
- sentence-transformers/msmarco-MiniLM-L12-cos-v5
- sentence-transformers/msmarco-MiniLM-L6-cos-v5
- sentence-transformers/msmarco-bert-base-dot-v5
- sentence-transformers/msmarco-bert-co-condensor
- sentence-transformers/msmarco-distilbert-base-dot-prod-v3
- sentence-transformers/msmarco-distilbert-base-tas-b
- sentence-transformers/msmarco-distilbert-base-v2
- sentence-transformers/msmarco-distilbert-base-v3
- sentence-transformers/msmarco-distilbert-base-v4
- sentence-transformers/msmarco-distilbert-cos-v5
- sentence-transformers/msmarco-distilbert-dot-v5
- sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned
- sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch
- sentence-transformers/msmarco-distilroberta-base-v2
- sentence-transformers/msmarco-roberta-base-ance-firstp
- sentence-transformers/msmarco-roberta-base-v2
- sentence-transformers/msmarco-roberta-base-v3
- sentence-transformers/multi-qa-MiniLM-L6-cos-v1
- sentence-transformers/nli-mpnet-base-v2
- sentence-transformers/nli-roberta-base-v2
- sentence-transformers/nq-distilbert-base-v1
- sentence-transformers/paraphrase-MiniLM-L12-v2
- sentence-transformers/paraphrase-MiniLM-L3-v2
- sentence-transformers/paraphrase-MiniLM-L6-v2
- sentence-transformers/paraphrase-TinyBERT-L6-v2
- sentence-transformers/paraphrase-albert-base-v2
- sentence-transformers/paraphrase-albert-small-v2
- sentence-transformers/paraphrase-distilroberta-base-v1
- sentence-transformers/paraphrase-distilroberta-base-v2
- sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
- sentence-transformers/paraphrase-multilingual-mpnet-base-v2
- sentence-transformers/paraphrase-xlm-r-multilingual-v1
- sentence-transformers/quora-distilbert-base
- sentence-transformers/quora-distilbert-multilingual
- sentence-transformers/sentence-t5-base
- sentence-transformers/sentence-t5-large
- sentence-transformers/sentence-t5-xxl
- sentence-transformers/sentence-t5-xl
- sentence-transformers/stsb-distilroberta-base-v2
- sentence-transformers/stsb-mpnet-base-v2
- sentence-transformers/stsb-roberta-base-v2
- sentence-transformers/stsb-xlm-r-multilingual
- sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1
- sentence-transformers/clip-ViT-L-14
- sentence-transformers/clip-ViT-B-16
- sentence-transformers/use-cmlm-multilingual
- sentence-transformers/all-MiniLM-L12-v1
```
!!! info
You can also load many other model architectures from the library, for example models from BAAI, Nomic, Salesforce Research, etc.
See the HF hub page for all [supported models](https://huggingface.co/models?library=sentence-transformers).
!!! note "BAAI Embeddings example"
Here is an example that uses the BAAI embedding model from the HuggingFace Hub [supported models](https://huggingface.co/models?library=sentence-transformers):
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
db = lancedb.connect("/tmp/db")
model = get_registry().get("sentence-transformers").create(name="BAAI/bge-small-en-v1.5", device="cpu")
class Words(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
table = db.create_table("words", schema=Words)
table.add(
[
{"text": "hello world"},
{"text": "goodbye world"}
]
)
query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```
Visit sentence-transformers [HuggingFace HUB](https://huggingface.co/sentence-transformers) page for more information on the available models.

View File

@@ -0,0 +1,51 @@
# VoyageAI Embeddings
Voyage AI provides cutting-edge embedding models and rerankers.
Using the Voyage AI API requires the `voyageai` package, which can be installed with `pip install voyageai`. Voyage AI embeddings are used to generate embeddings for text data, which can then be used for tasks like semantic search, clustering, and classification.
You also need to set the `VOYAGE_API_KEY` environment variable to use the VoyageAI API.
Supported models are:
- voyage-3
- voyage-3-lite
- voyage-finance-2
- voyage-multilingual-2
- voyage-law-2
- voyage-code-2
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|--------|---------|
| `name` | `str` | `None` | The model ID of the model to use. Supported base models for Text Embeddings: voyage-3, voyage-3-lite, voyage-finance-2, voyage-multilingual-2, voyage-law-2, voyage-code-2 |
| `input_type` | `str` | `None` | Type of the input text. Default to None. Other options: query, document. |
| `truncation` | `bool` | `True` | Whether to truncate the input texts to fit within the context length. |
Usage Example:
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry
voyageai = (
    EmbeddingFunctionRegistry.get_instance()
    .get("voyageai")
    .create(name="voyage-3")
)
class TextModel(LanceModel):
text: str = voyageai.SourceField()
vector: Vector(voyageai.ndims()) = voyageai.VectorField()
data = [ { "text": "hello world" },
{ "text": "goodbye world" }]
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(data)
```
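Once the table is populated, the registered VoyageAI function also embeds query strings automatically at search time. A minimal follow-up sketch, reusing the `tbl` and `TextModel` defined above:

```python
# The query text is embedded with the same VoyageAI model before the vector search runs.
result = tbl.search("greetings").limit(1).to_pydantic(TextModel)[0]
print(result.text)  # expected to print "hello world"
```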

View File

@@ -15,12 +15,15 @@ There is another optional layer of abstraction available: `TextEmbeddingFunction
Let's implement `SentenceTransformerEmbeddings` class. All you need to do is implement the `generate_embeddings()` and `ndims` function to handle the input types you expect and register the class in the global `EmbeddingFunctionRegistry`.

=== "Python"

    ```python
    from lancedb.embeddings import register
    from lancedb.util import attempt_import_or_raise

    @register("sentence-transformers")
    class SentenceTransformerEmbeddings(TextEmbeddingFunction):
        name: str = "all-MiniLM-L6-v2"
        # set more default instance vars like device, etc.
@@ -39,38 +42,59 @@ class SentenceTransformerEmbeddings(TextEmbeddingFunction):
        @cached(cache={})
        def _embedding_model(self):
            return sentence_transformers.SentenceTransformer(name)
    ```

=== "TypeScript"

    ```ts
    --8<--- "nodejs/examples/custom_embedding_function.test.ts:imports"
    --8<--- "nodejs/examples/custom_embedding_function.test.ts:embedding_impl"
    ```

This is a stripped down version of our implementation of `SentenceTransformerEmbeddings` that removes certain optimizations and default settings.

Now you can use this embedding function to create your table schema, and that's it! You can then ingest data and run queries without manually vectorizing the inputs.

=== "Python"

    ```python
    from lancedb.pydantic import LanceModel, Vector

    registry = EmbeddingFunctionRegistry.get_instance()
    stransformer = registry.get("sentence-transformers").create()

    class TextModelSchema(LanceModel):
        vector: Vector(stransformer.ndims()) = stransformer.VectorField()
        text: str = stransformer.SourceField()

    tbl = db.create_table("table", schema=TextModelSchema)
    tbl.add(pd.DataFrame({"text": ["halo", "world"]}))
    result = tbl.search("world").limit(5)
    ```

=== "TypeScript"

    ```ts
    --8<--- "nodejs/examples/custom_embedding_function.test.ts:call_custom_function"
    ```

!!! note
    You can always implement the `EmbeddingFunction` interface directly if you want or need to; `TextEmbeddingFunction` just makes it much simpler and faster for you to do so, by setting up the boilerplate for the text-specific use case.

## Multi-modal embedding function example

You can also use the `EmbeddingFunction` interface to implement more complex workflows such as multi-modal embedding function support.

=== "Python"

    LanceDB implements an `OpenClipEmbeddingFunction` class that supports multi-modal search. Here's the implementation that you can use as a reference to build your own multi-modal embedding functions.

    ```python
    @register("open-clip")
    class OpenClipEmbeddings(EmbeddingFunction):
        name: str = "ViT-B-32"
        pretrained: str = "laion2b_s34b_b79k"
        device: str = "cpu"
@@ -209,4 +233,8 @@ class OpenClipEmbeddings(EmbeddingFunction):
        if self.normalize:
            image_features /= image_features.norm(dim=-1, keepdim=True)
        return image_features.cpu().numpy().squeeze()
    ```

=== "TypeScript"

    Coming Soon! See this [issue](https://github.com/lancedb/lancedb/issues/1482) to track the status!

View File

@@ -1,723 +1,86 @@
# 📚 Available Embedding Models

There are various embedding functions available out of the box with LanceDB to manage your embeddings implicitly. We're actively working on adding other popular embedding APIs and models. 🚀

Before jumping into the list of available models, let's understand how to get an embedding model initialized and configured for use in our code:

!!! example "Example usage"
    ```python
    model = (
        get_registry()
        .get("openai")
        .create(name="text-embedding-ada-002")
    )
    ```

Now let's understand the above syntax:

```python
model = get_registry().get("model_id").create(...params)
```

**This👆 line effectively creates a configured instance of an `embedding function` with the `model` of choice that is ready for use.**

- `get_registry()` : This function call returns an instance of an `EmbeddingFunctionRegistry` object. This registry manages the registration and retrieval of embedding functions.

- `.get("model_id")` : This method call on the registry object retrieves the **embedding function** associated with the `"model_id"` (1).
{ .annotate }

    1. Hover over the names in the table below to find out the `model_id` of the different embedding functions.

- `.create(...params)` : This method is called on the object returned by `get`. It instantiates an embedding function using the **specified parameters**.

??? question "What parameters does the `.create(...params)` method accept?"
    **Check out the documentation of the specific embedding model (links in the table below👇) to see what parameters it takes.**

!!! tip "Moving on"
    Now that we know how to get the **desired embedding model** and use it in our code, let's explore the comprehensive **list** of embedding models **supported by LanceDB** in the tables below.

## Text Embedding Functions 📝

These functions are registered by default to handle text embeddings.

- 🔄 **Embedding functions** have an inbuilt rate limit handler wrapper for source and query embedding function calls that retries with **exponential backoff**.

- 🌕 Each `EmbeddingFunction` implementation automatically takes `max_retries` as an argument, which has a default value of 7.

🌟 **Available Text Embeddings**

| **Embedding** :material-information-outline:{ title="Hover over the name to find out the model_id" } | **Description** | **Documentation** |
|-----------|-------------|---------------|
| [**Sentence Transformers**](available_embedding_models/text_embedding_functions/sentence_transformers.md "sentence-transformers") | 🧠 **SentenceTransformers** is a Python framework for state-of-the-art sentence, text, and image embeddings. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/sbert_2.png" alt="Sentence Transformers Icon" width="90" height="35">](available_embedding_models/text_embedding_functions/sentence_transformers.md) |
| [**Huggingface Models**](available_embedding_models/text_embedding_functions/huggingface_embedding.md "huggingface") | 🤗 We offer support for all **Huggingface** models. The default model is `colbert-ir/colbertv2.0`. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/hugging_face.png" alt="Huggingface Icon" width="130" height="35">](available_embedding_models/text_embedding_functions/huggingface_embedding.md) |
| [**Ollama Embeddings**](available_embedding_models/text_embedding_functions/ollama_embedding.md "ollama") | 🔍 Generate embeddings via the **Ollama** python library. Ollama supports embedding models, making it possible to build RAG apps. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/Ollama.png" alt="Ollama Icon" width="110" height="35">](available_embedding_models/text_embedding_functions/ollama_embedding.md) |
| [**OpenAI Embeddings**](available_embedding_models/text_embedding_functions/openai_embedding.md "openai") | 🔑 **OpenAI's** text embeddings measure the relatedness of text strings. **LanceDB** supports state-of-the-art embeddings from OpenAI. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/openai.png" alt="OpenAI Icon" width="100" height="35">](available_embedding_models/text_embedding_functions/openai_embedding.md) |
| [**Instructor Embeddings**](available_embedding_models/text_embedding_functions/instructor_embedding.md "instructor") | 📚 **Instructor**: An instruction-finetuned text embedding model that can generate text embeddings tailored to any task and domain by simply providing the task instruction, without any finetuning. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/instructor_embedding.png" alt="Instructor Embedding Icon" width="140" height="35">](available_embedding_models/text_embedding_functions/instructor_embedding.md) |
| [**Gemini Embeddings**](available_embedding_models/text_embedding_functions/gemini_embedding.md "gemini-text") | 🌌 Google's Gemini API generates state-of-the-art embeddings for words, phrases, and sentences. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/gemini.png" alt="Gemini Icon" width="95" height="35">](available_embedding_models/text_embedding_functions/gemini_embedding.md) |
| [**Cohere Embeddings**](available_embedding_models/text_embedding_functions/cohere_embedding.md "cohere") | 💬 This will help you get started with **Cohere** embedding models using LanceDB. Using the Cohere API requires the cohere package. Install it via `pip`. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/cohere.png" alt="Cohere Icon" width="140" height="35">](available_embedding_models/text_embedding_functions/cohere_embedding.md) |
| [**Jina Embeddings**](available_embedding_models/text_embedding_functions/jina_embedding.md "jina") | 🔗 World-class embedding models to improve your search and RAG systems. You will need a **Jina API key**. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/jina.png" alt="Jina Icon" width="90" height="35">](available_embedding_models/text_embedding_functions/jina_embedding.md) |
| [**AWS Bedrock Functions**](available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md "bedrock-text") | ☁️ AWS Bedrock supports multiple base models for generating text embeddings. You need to set up AWS credentials to use this embedding function. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/aws_bedrock.png" alt="AWS Bedrock Icon" width="120" height="35">](available_embedding_models/text_embedding_functions/aws_bedrock_embedding.md) |
| [**IBM Watsonx.ai**](available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md "watsonx") | 💡 Generate text embeddings using IBM's watsonx.ai platform. **Note**: the watsonx.ai library is an optional dependency. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/watsonx.png" alt="Watsonx Icon" width="140" height="35">](available_embedding_models/text_embedding_functions/ibm_watsonx_ai_embedding.md) |
| [**VoyageAI Embeddings**](available_embedding_models/text_embedding_functions/voyageai_embedding.md "voyageai") | 🌕 Voyage AI provides cutting-edge embedding models and rerankers. This will help you get started with **VoyageAI** embedding models using LanceDB. Using the VoyageAI API requires the voyageai package. Install it via `pip`. | [<img src="https://www.voyageai.com/logo.svg" alt="VoyageAI Icon" width="140" height="35">](available_embedding_models/text_embedding_functions/voyageai_embedding.md) |

[st-key]: "sentence-transformers"
[hf-key]: "huggingface"
[ollama-key]: "ollama"
[openai-key]: "openai"
[instructor-key]: "instructor"
[gemini-key]: "gemini-text"
[cohere-key]: "cohere"
[jina-key]: "jina"
[aws-key]: "bedrock-text"
[watsonx-key]: "watsonx"
[voyageai-key]: "voyageai"

## Multi-modal Embedding Functions 🖼️

Multi-modal embedding functions allow you to query your table using both images and text. 💬🖼️

🌐 **Available Multi-modal Embeddings**

| Embedding :material-information-outline:{ title="Hover over the name to find out the model_id" } | Description | Documentation |
|-----------|-------------|---------------|
| [**OpenClip Embeddings**](available_embedding_models/multimodal_embedding_functions/openclip_embedding.md "open-clip") | 🎨 We support CLIP model embeddings using the open source alternative, **open-clip**, which supports various customizations. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/openclip_github.png" alt="openclip Icon" width="150" height="35">](available_embedding_models/multimodal_embedding_functions/openclip_embedding.md) |
| [**Imagebind Embeddings**](available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md "imagebind") | 🌌 We have support for **imagebind model embeddings**. You can download our version of the packaged model via `pip install imagebind-packaged==0.1.2`. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/imagebind_meta.png" alt="imagebind Icon" width="150" height="35">](available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md) |
| [**Jina Multi-modal Embeddings**](available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md "jina") | 🔗 **Jina embeddings** can also be used to embed both **text** and **image** data; only some of the models support image data, and you can check the detailed documentation. 👉 | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/jina.png" alt="jina Icon" width="90" height="35">](available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md) |

!!! note
    If you'd like to request support for additional **embedding functions**, please feel free to open an issue on our LanceDB [GitHub issue page](https://github.com/lancedb/lancedb/issues).

## Text embedding functions
Contains the text embedding functions registered by default.
* Embedding functions have an inbuilt rate limit handler wrapper for source and query embedding function calls that retry with exponential backoff.
* Each `EmbeddingFunction` implementation automatically takes `max_retries` as an argument which has the default value of 7.
### Sentence transformers
Allows you to set parameters when registering a `sentence-transformers` object.
!!! info
Sentence transformer embeddings are normalized by default. It is recommended to use normalized embeddings for similarity search.
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `all-MiniLM-L6-v2` | The name of the model |
| `device` | `str` | `cpu` | The device to run the model on (can be `cpu` or `gpu`) |
| `normalize` | `bool` | `True` | Whether to normalize the input text before feeding it to the model |
| `trust_remote_code` | `bool` | `False` | Whether to trust and execute remote code from the model's Huggingface repository |
??? "Check out available sentence-transformer models here!"
```markdown
- sentence-transformers/all-MiniLM-L12-v2
- sentence-transformers/paraphrase-mpnet-base-v2
- sentence-transformers/gtr-t5-base
- sentence-transformers/LaBSE
- sentence-transformers/all-MiniLM-L6-v2
- sentence-transformers/bert-base-nli-max-tokens
- sentence-transformers/bert-base-nli-mean-tokens
- sentence-transformers/bert-base-nli-stsb-mean-tokens
- sentence-transformers/bert-base-wikipedia-sections-mean-tokens
- sentence-transformers/bert-large-nli-cls-token
- sentence-transformers/bert-large-nli-max-tokens
- sentence-transformers/bert-large-nli-mean-tokens
- sentence-transformers/bert-large-nli-stsb-mean-tokens
- sentence-transformers/distilbert-base-nli-max-tokens
- sentence-transformers/distilbert-base-nli-mean-tokens
- sentence-transformers/distilbert-base-nli-stsb-mean-tokens
- sentence-transformers/distilroberta-base-msmarco-v1
- sentence-transformers/distilroberta-base-msmarco-v2
- sentence-transformers/nli-bert-base-cls-pooling
- sentence-transformers/nli-bert-base-max-pooling
- sentence-transformers/nli-bert-base
- sentence-transformers/nli-bert-large-cls-pooling
- sentence-transformers/nli-bert-large-max-pooling
- sentence-transformers/nli-bert-large
- sentence-transformers/nli-distilbert-base-max-pooling
- sentence-transformers/nli-distilbert-base
- sentence-transformers/nli-roberta-base
- sentence-transformers/nli-roberta-large
- sentence-transformers/roberta-base-nli-mean-tokens
- sentence-transformers/roberta-base-nli-stsb-mean-tokens
- sentence-transformers/roberta-large-nli-mean-tokens
- sentence-transformers/roberta-large-nli-stsb-mean-tokens
- sentence-transformers/stsb-bert-base
- sentence-transformers/stsb-bert-large
- sentence-transformers/stsb-distilbert-base
- sentence-transformers/stsb-roberta-base
- sentence-transformers/stsb-roberta-large
- sentence-transformers/xlm-r-100langs-bert-base-nli-mean-tokens
- sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens
- sentence-transformers/xlm-r-base-en-ko-nli-ststb
- sentence-transformers/xlm-r-bert-base-nli-mean-tokens
- sentence-transformers/xlm-r-bert-base-nli-stsb-mean-tokens
- sentence-transformers/xlm-r-large-en-ko-nli-ststb
- sentence-transformers/bert-base-nli-cls-token
- sentence-transformers/all-distilroberta-v1
- sentence-transformers/multi-qa-MiniLM-L6-dot-v1
- sentence-transformers/multi-qa-distilbert-cos-v1
- sentence-transformers/multi-qa-distilbert-dot-v1
- sentence-transformers/multi-qa-mpnet-base-cos-v1
- sentence-transformers/multi-qa-mpnet-base-dot-v1
- sentence-transformers/nli-distilroberta-base-v2
- sentence-transformers/all-MiniLM-L6-v1
- sentence-transformers/all-mpnet-base-v1
- sentence-transformers/all-mpnet-base-v2
- sentence-transformers/all-roberta-large-v1
- sentence-transformers/allenai-specter
- sentence-transformers/average_word_embeddings_glove.6B.300d
- sentence-transformers/average_word_embeddings_glove.840B.300d
- sentence-transformers/average_word_embeddings_komninos
- sentence-transformers/average_word_embeddings_levy_dependency
- sentence-transformers/clip-ViT-B-32-multilingual-v1
- sentence-transformers/clip-ViT-B-32
- sentence-transformers/distilbert-base-nli-stsb-quora-ranking
- sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking
- sentence-transformers/distilroberta-base-paraphrase-v1
- sentence-transformers/distiluse-base-multilingual-cased-v1
- sentence-transformers/distiluse-base-multilingual-cased-v2
- sentence-transformers/distiluse-base-multilingual-cased
- sentence-transformers/facebook-dpr-ctx_encoder-multiset-base
- sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base
- sentence-transformers/facebook-dpr-question_encoder-multiset-base
- sentence-transformers/facebook-dpr-question_encoder-single-nq-base
- sentence-transformers/gtr-t5-large
- sentence-transformers/gtr-t5-xl
- sentence-transformers/gtr-t5-xxl
- sentence-transformers/msmarco-MiniLM-L-12-v3
- sentence-transformers/msmarco-MiniLM-L-6-v3
- sentence-transformers/msmarco-MiniLM-L12-cos-v5
- sentence-transformers/msmarco-MiniLM-L6-cos-v5
- sentence-transformers/msmarco-bert-base-dot-v5
- sentence-transformers/msmarco-bert-co-condensor
- sentence-transformers/msmarco-distilbert-base-dot-prod-v3
- sentence-transformers/msmarco-distilbert-base-tas-b
- sentence-transformers/msmarco-distilbert-base-v2
- sentence-transformers/msmarco-distilbert-base-v3
- sentence-transformers/msmarco-distilbert-base-v4
- sentence-transformers/msmarco-distilbert-cos-v5
- sentence-transformers/msmarco-distilbert-dot-v5
- sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned
- sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch
- sentence-transformers/msmarco-distilroberta-base-v2
- sentence-transformers/msmarco-roberta-base-ance-firstp
- sentence-transformers/msmarco-roberta-base-v2
- sentence-transformers/msmarco-roberta-base-v3
- sentence-transformers/multi-qa-MiniLM-L6-cos-v1
- sentence-transformers/nli-mpnet-base-v2
- sentence-transformers/nli-roberta-base-v2
- sentence-transformers/nq-distilbert-base-v1
- sentence-transformers/paraphrase-MiniLM-L12-v2
- sentence-transformers/paraphrase-MiniLM-L3-v2
- sentence-transformers/paraphrase-MiniLM-L6-v2
- sentence-transformers/paraphrase-TinyBERT-L6-v2
- sentence-transformers/paraphrase-albert-base-v2
- sentence-transformers/paraphrase-albert-small-v2
- sentence-transformers/paraphrase-distilroberta-base-v1
- sentence-transformers/paraphrase-distilroberta-base-v2
- sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
- sentence-transformers/paraphrase-multilingual-mpnet-base-v2
- sentence-transformers/paraphrase-xlm-r-multilingual-v1
- sentence-transformers/quora-distilbert-base
- sentence-transformers/quora-distilbert-multilingual
- sentence-transformers/sentence-t5-base
- sentence-transformers/sentence-t5-large
- sentence-transformers/sentence-t5-xxl
- sentence-transformers/sentence-t5-xl
- sentence-transformers/stsb-distilroberta-base-v2
- sentence-transformers/stsb-mpnet-base-v2
- sentence-transformers/stsb-roberta-base-v2
- sentence-transformers/stsb-xlm-r-multilingual
- sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1
- sentence-transformers/clip-ViT-L-14
- sentence-transformers/clip-ViT-B-16
- sentence-transformers/use-cmlm-multilingual
- sentence-transformers/all-MiniLM-L12-v1
```
!!! info
You can also load many other model architectures from the library. For example models from sources such as BAAI, nomic, salesforce research, etc.
See this HF hub page for all [supported models](https://huggingface.co/models?library=sentence-transformers).
!!! note "BAAI Embeddings example"
Here is an example that uses BAAI embedding model from the HuggingFace Hub [supported models](https://huggingface.co/models?library=sentence-transformers)
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
db = lancedb.connect("/tmp/db")
model = get_registry().get("sentence-transformers").create(name="BAAI/bge-small-en-v1.5", device="cpu")
class Words(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
table = db.create_table("words", schema=Words)
table.add(
[
{"text": "hello world"},
{"text": "goodbye world"}
]
)
query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```
Visit sentence-transformers [HuggingFace HUB](https://huggingface.co/sentence-transformers) page for more information on the available models.
### Huggingface embedding models
We offer support for all Hugging Face models (which can be loaded via the [transformers](https://huggingface.co/docs/transformers/en/index) library). The default model is `colbert-ir/colbertv2.0`, which also has its own special callout - `registry.get("colbert")`.
Example usage -
```python
import lancedb
import pandas as pd
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector
model = get_registry().get("huggingface").create(name='facebook/bart-base')
class Words(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
df = pd.DataFrame({"text": ["hi hello sayonara", "goodbye world"]})
db = lancedb.connect("/tmp/db")
table = db.create_table("greets", schema=Words)
table.add(df)
query = "old greeting"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```
### Ollama embeddings
Generate embeddings via the [ollama](https://github.com/ollama/ollama-python) python library. More details:
- [Ollama docs on embeddings](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings)
- [Ollama blog on embeddings](https://ollama.com/blog/embedding-models)
| Parameter | Type | Default Value | Description |
|------------------------|----------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | `str` | `nomic-embed-text` | The name of the model. |
| `host` | `str` | `http://localhost:11434` | The Ollama host to connect to. |
| `options` | `ollama.Options` or `dict` | `None` | Additional model parameters listed in the documentation for the Modelfile such as `temperature`. |
| `keep_alive` | `float` or `str` | `"5m"` | Controls how long the model will stay loaded into memory following the request. |
| `ollama_client_kwargs` | `dict` | `{}` | kwargs that can be passed to the `ollama.Client`. |
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
db = lancedb.connect("/tmp/db")
func = get_registry().get("ollama").create(name="nomic-embed-text")
class Words(LanceModel):
text: str = func.SourceField()
vector: Vector(func.ndims()) = func.VectorField()
table = db.create_table("words", schema=Words, mode="overwrite")
table.add([
{"text": "hello world"},
{"text": "goodbye world"}
])
query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```
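The parameters in the table above are passed straight to `create`. A short sketch of a non-default configuration (the host URL and option values below are placeholders, not recommendations):

```python
from lancedb.embeddings import get_registry

# Point at a specific Ollama server and tweak model behaviour.
func = get_registry().get("ollama").create(
    name="nomic-embed-text",
    host="http://localhost:11434",   # address of your Ollama server
    options={"temperature": 0.0},    # forwarded as ollama.Options
    keep_alive="10m",                # keep the model loaded longer between calls
)
```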
### OpenAI embeddings
LanceDB registers the OpenAI embeddings function in the registry by default, as `openai`. Below are the parameters that you can customize when creating the instances:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"text-embedding-ada-002"` | The name of the model. |
| `dim` | `int` | Model default | For OpenAI's newer `text-embedding-3` models, you can request a dimensionality smaller than the default output size (e.g. smaller than 1536). |
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
db = lancedb.connect("/tmp/db")
func = get_registry().get("openai").create(name="text-embedding-ada-002")
class Words(LanceModel):
text: str = func.SourceField()
vector: Vector(func.ndims()) = func.VectorField()
table = db.create_table("words", schema=Words, mode="overwrite")
table.add(
[
{"text": "hello world"},
{"text": "goodbye world"}
]
)
query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```
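To use the `dim` parameter from the table above, pick one of the newer `text-embedding-3` models. A minimal sketch (the model name and dimensionality here are illustrative):

```python
from lancedb.embeddings import get_registry

# Request 512-dimensional vectors instead of the model's full output size.
func = get_registry().get("openai").create(
    name="text-embedding-3-small",
    dim=512,
)
```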
### Instructor Embeddings
[Instructor](https://instructor-embedding.github.io/) is an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g. classification, retrieval, clustering, text evaluation, etc.) and domains (e.g. science, finance, etc.) by simply providing the task instruction, without any finetuning.
If you want to calculate customized embeddings for specific sentences, you can follow the unified template to write instructions.
!!! info
Represent the `domain` `text_type` for `task_objective`:
* `domain` is optional, and it specifies the domain of the text, e.g. science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g. sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g. retrieve a document, classify the sentence, etc.
More information about the model can be found at the [source URL](https://github.com/xlang-ai/instructor-embedding).
| Argument | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | "hkunlp/instructor-base" | The name of the model to use |
| `batch_size` | `int` | `32` | The batch size to use when generating embeddings |
| `device` | `str` | `"cpu"` | The device to use when generating embeddings |
| `show_progress_bar` | `bool` | `True` | Whether to show a progress bar when generating embeddings |
| `normalize_embeddings` | `bool` | `True` | Whether to normalize the embeddings |
| `quantize` | `bool` | `False` | Whether to quantize the model |
| `source_instruction` | `str` | `"represent the document for retrieval"` | The instruction for the source column |
| `query_instruction` | `str` | `"represent the document for retrieving the most similar documents"` | The instruction for the query |
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
instructor = get_registry().get("instructor").create(
    source_instruction="represent the document for retrieval",
    query_instruction="represent the document for retrieving the most similar documents"
)
class Schema(LanceModel):
vector: Vector(instructor.ndims()) = instructor.VectorField()
text: str = instructor.SourceField()
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=Schema, mode="overwrite")
texts = [{"text": "Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that..."},
{"text": "The disparate impact theory is especially controversial under the Fair Housing Act because the Act..."},
{"text": "Disparate impact in United States labor law refers to practices in employment, housing, and other areas that.."}]
tbl.add(texts)
```
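Queries against this table are embedded with the `query_instruction` configured above. A minimal follow-up sketch, reusing `tbl` and `Schema` from the example:

```python
# The query string is embedded using the query instruction before the search runs.
result = tbl.search("what economic system dominates the Western world?").limit(1).to_pydantic(Schema)[0]
print(result.text)
```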
### Gemini Embeddings
With Google's Gemini, you can represent text (words, sentences, and blocks of text) in a vectorized form, making it easier to compare and contrast embeddings. For example, two texts that share a similar subject matter or sentiment should have similar embeddings, which can be identified through mathematical comparison techniques such as cosine similarity. For more on how and why you should use embeddings, refer to the Embeddings guide.
The Gemini Embedding Model API supports various task types:
| Task Type | Description |
|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
| "`retrieval_query`" | Specifies the given text is a query in a search/retrieval setting. |
| "`retrieval_document`" | Specifies the given text is a document in a search/retrieval setting. Using this task type requires a title but is automatically proided by Embeddings API |
| "`semantic_similarity`" | Specifies the given text will be used for Semantic Textual Similarity (STS). |
| "`classification`" | Specifies that the embeddings will be used for classification. |
| "`clusering`" | Specifies that the embeddings will be used for clustering. |
Usage Example:
```python
import lancedb
import pandas as pd
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
model = get_registry().get("gemini-text").create()
class TextModel(LanceModel):
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
df = pd.DataFrame({"text": ["hello world", "goodbye world"]})
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(df)
rs = tbl.search("hello").limit(1).to_pandas()
```
### Cohere Embeddings
Using the Cohere API requires the `cohere` package, which can be installed using `pip install cohere`. Cohere embeddings are used to generate embeddings for text data, which can then be used for tasks like semantic search, clustering, and classification.
You also need to set the `COHERE_API_KEY` environment variable to use the Cohere API.
Supported models are:
```
* embed-english-v3.0
* embed-multilingual-v3.0
* embed-english-light-v3.0
* embed-multilingual-light-v3.0
* embed-english-v2.0
* embed-english-light-v2.0
* embed-multilingual-v2.0
```
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"embed-english-v2.0"` | The model ID of the cohere model to use. Supported base models for Text Embeddings: embed-english-v3.0, embed-multilingual-v3.0, embed-english-light-v3.0, embed-multilingual-light-v3.0, embed-english-v2.0, embed-english-light-v2.0, embed-multilingual-v2.0 |
| `source_input_type` | `str` | `"search_document"` | The type of input data to be used for the source column. |
| `query_input_type` | `str` | `"search_query"` | The type of input data to be used for the query. |
Cohere supports the following input types:
| Input Type | Description |
|---|---|
| "`search_document`" | Used for embeddings stored in a vector database for search use-cases. |
| "`search_query`" | Used for embeddings of search queries run against a vector DB. |
| "`semantic_similarity`" | Specifies the given text will be used for Semantic Textual Similarity (STS). |
| "`classification`" | Used for embeddings passed through a text classifier. |
| "`clustering`" | Used for the embeddings run through a clustering algorithm. |
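These input types map onto the `source_input_type` and `query_input_type` parameters listed above. A short configuration sketch using one of the supported models from the list earlier (a full end-to-end example follows below):

```python
from lancedb.embeddings import EmbeddingFunctionRegistry

cohere = (
    EmbeddingFunctionRegistry.get_instance()
    .get("cohere")
    .create(
        name="embed-english-v3.0",
        source_input_type="search_document",  # how ingested rows are embedded
        query_input_type="search_query",      # how search strings are embedded
    )
)
```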
Usage Example:
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry
cohere = (
    EmbeddingFunctionRegistry.get_instance()
    .get("cohere")
    .create(name="embed-multilingual-v2.0")
)
class TextModel(LanceModel):
text: str = cohere.SourceField()
vector: Vector(cohere.ndims()) = cohere.VectorField()
data = [ { "text": "hello world" },
{ "text": "goodbye world" }]
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(data)
```
### Jina Embeddings
Jina embeddings are used to generate embeddings for text and image data.
You also need to set the `JINA_API_KEY` environment variable to use the Jina API.
You can find a list of supported models under [https://jina.ai/embeddings/](https://jina.ai/embeddings/)
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"jina-clip-v1"` | The model ID of the jina model to use |
Usage Example:
```python
import os
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry
os.environ['JINA_API_KEY'] = 'jina_*'
jina_embed = EmbeddingFunctionRegistry.get_instance().get("jina").create(name="jina-embeddings-v2-base-en")
class TextModel(LanceModel):
text: str = jina_embed.SourceField()
vector: Vector(jina_embed.ndims()) = jina_embed.VectorField()
data = [{"text": "hello world"},
{"text": "goodbye world"}]
db = lancedb.connect("~/.lancedb-2")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(data)
```
### AWS Bedrock Text Embedding Functions
AWS Bedrock supports multiple base models for generating text embeddings. You need to set up AWS credentials to use this embedding function.
You can do so using `awscli` and also add your session token:
```shell
aws configure
aws configure set aws_session_token "<your_session_token>"
```
To ensure that the credentials are set up correctly, you can run the following command:
```shell
aws sts get-caller-identity
```
Supported Embedding model IDs are:
* `amazon.titan-embed-text-v1`
* `cohere.embed-english-v3`
* `cohere.embed-multilingual-v3`
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| **name** | str | "amazon.titan-embed-text-v1" | The model ID of the bedrock model to use. Supported base models for Text Embeddings: amazon.titan-embed-text-v1, cohere.embed-english-v3, cohere.embed-multilingual-v3 |
| **region** | str | "us-east-1" | Optional name of the AWS Region in which the service should be called (e.g., "us-east-1"). |
| **profile_name** | str | None | Optional name of the AWS profile to use for calling the Bedrock service. If not specified, the default profile will be used. |
| **assumed_role** | str | None | Optional ARN of an AWS IAM role to assume for calling the Bedrock service. If not specified, the current active credentials will be used. |
| **role_session_name** | str | "lancedb-embeddings" | Optional name of the AWS IAM role session to use for calling the Bedrock service. If not specified, a "lancedb-embeddings" name will be used. |
| **runtime** | bool | True | Optional choice of getting different client to perform operations with the Amazon Bedrock service. |
| **max_retries** | int | 7 | Optional number of retries to perform when a request fails. |
Usage Example:
```python
import lancedb
import pandas as pd
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
model = get_registry().get("bedrock-text").create() ## Text Embedding Functions 📝
These functions are registered by default to handle text embeddings.
class TextModel(LanceModel): - 🔄 **Embedding functions** have an inbuilt rate limit handler wrapper for source and query embedding function calls that retry with **exponential backoff**.
text: str = model.SourceField()
vector: Vector(model.ndims()) = model.VectorField()
df = pd.DataFrame({"text": ["hello world", "goodbye world"]}) - 🌕 Each `EmbeddingFunction` implementation automatically takes `max_retries` as an argument which has the default value of 7.
db = lancedb.connect("tmp_path")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(df) 🌟 **Available Text Embeddings**
rs = tbl.search("hello").limit(1).to_pandas()
```
## Multi-modal embedding functions
Multi-modal embedding functions allow you to query your table using both images and text.
### OpenClip embeddings
We support CLIP model embeddings using the open source alternative, [open-clip](https://github.com/mlfoundations/open_clip), which supports various customizations. It is registered as `open-clip` and supports the following parameters:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"ViT-B-32"` | The name of the model. |
| `pretrained` | `str` | `"laion2b_s34b_b79k"` | The name of the pretrained model to load. |
| `device` | `str` | `"cpu"` | The device to run the model on. Can be `"cpu"` or `"gpu"`. |
| `batch_size` | `int` | `64` | The number of images to process in a batch. |
| `normalize` | `bool` | `True` | Whether to normalize the input images before feeding them to the model. |
This embedding function supports ingesting images as both bytes and urls. You can query them using both text and other images.
!!! info
LanceDB supports ingesting images directly from accessible links.
```python
import io

import lancedb
import pandas as pd
import requests
from PIL import Image  # used by the image-search example further below

from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry

db = lancedb.connect("/tmp/db")
func = get_registry().get("open-clip").create()
class Images(LanceModel):
label: str
image_uri: str = func.SourceField() # image uri as the source
image_bytes: bytes = func.SourceField() # image bytes as the source
vector: Vector(func.ndims()) = func.VectorField() # vector column
vec_from_bytes: Vector(func.ndims()) = func.VectorField() # Another vector column
table = db.create_table("images", schema=Images)
labels = ["cat", "cat", "dog", "dog", "horse", "horse"]
uris = [
"http://farm1.staticflickr.com/53/167798175_7c7845bbbd_z.jpg",
"http://farm1.staticflickr.com/134/332220238_da527d8140_z.jpg",
"http://farm9.staticflickr.com/8387/8602747737_2e5c2a45d4_z.jpg",
"http://farm5.staticflickr.com/4092/5017326486_1f46057f5f_z.jpg",
"http://farm9.staticflickr.com/8216/8434969557_d37882c42d_z.jpg",
"http://farm6.staticflickr.com/5142/5835678453_4f3a4edb45_z.jpg",
]
# get each uri as bytes
image_bytes = [requests.get(uri).content for uri in uris]
table.add(
pd.DataFrame({"label": labels, "image_uri": uris, "image_bytes": image_bytes})
)
```
Now we can search using text from both the default vector column and the custom vector column
```python
# text search
actual = table.search("man's best friend").limit(1).to_pydantic(Images)[0]
print(actual.label) # prints "dog"
frombytes = (
table.search("man's best friend", vector_column_name="vec_from_bytes")
.limit(1)
.to_pydantic(Images)[0]
)
print(frombytes.label)
```
Because we're using a multi-modal embedding function, we can also search using images
```python
# image search
query_image_uri = "http://farm1.staticflickr.com/200/467715466_ed4a31801f_z.jpg"
image_bytes = requests.get(query_image_uri).content
query_image = Image.open(io.BytesIO(image_bytes))
actual = table.search(query_image).limit(1).to_pydantic(Images)[0]
print(actual.label == "dog")
# image search using a custom vector column
other = (
table.search(query_image, vector_column_name="vec_from_bytes")
.limit(1)
.to_pydantic(Images)[0]
)
print(other.label)
```
### Imagebind embeddings
We have support for [imagebind](https://github.com/facebookresearch/ImageBind) model embeddings. You can download our version of the packaged model via - `pip install imagebind-packaged==0.1.2`.
This function is registered as `imagebind` and supports Audio, Video and Text modalities (and can be extended to Thermal, Depth and IMU data):
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"imagebind_huge"` | Name of the model. |
| `device` | `str` | `"cpu"` | The device to run the model on. Can be `"cpu"` or `"gpu"`. |
| `normalize` | `bool` | `False` | set to `True` to normalize your inputs before model ingestion. |
Below is an example demonstrating how the API works:
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
db = lancedb.connect("/tmp/db")
func = get_registry().get("imagebind").create()
class ImageBindModel(LanceModel):
text: str
image_uri: str = func.SourceField()
audio_path: str
vector: Vector(func.ndims()) = func.VectorField()
# add locally accessible image paths
text_list=["A dog.", "A car", "A bird"]
image_paths=[".assets/dog_image.jpg", ".assets/car_image.jpg", ".assets/bird_image.jpg"]
audio_paths=[".assets/dog_audio.wav", ".assets/car_audio.wav", ".assets/bird_audio.wav"]
# Load data
inputs = [
{"text": a, "audio_path": b, "image_uri": c}
for a, b, c in zip(text_list, audio_paths, image_paths)
]
#create table and add data
table = db.create_table("img_bind", schema=ImageBindModel)
table.add(inputs)
```
Now, we can search using any modality:
#### image search
```python
query_image = "./assets/dog_image2.jpg" #download an image and enter that path here
actual = table.search(query_image).limit(1).to_pydantic(ImageBindModel)[0]
print(actual.text == "dog")
```
#### audio search
```python
query_audio = "./assets/car_audio2.wav" #download an audio clip and enter path here
actual = table.search(query_audio).limit(1).to_pydantic(ImageBindModel)[0]
print(actual.text == "car")
```
#### Text search
You can add any input query and fetch the result as follows:
```python
query = "an animal which flies and tweets"
actual = table.search(query).limit(1).to_pydantic(ImageBindModel)[0]
print(actual.text == "bird")
```
If you have any questions about the embeddings API, supported models, or see a relevant model missing, please raise an issue [on GitHub](https://github.com/lancedb/lancedb/issues).
### Jina Embeddings
Jina embeddings can also be used to embed both text and image data, only some of the models support image data and you can check the list
under [https://jina.ai/embeddings/](https://jina.ai/embeddings/)
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"jina-clip-v1"` | The model ID of the jina model to use |
Usage Example:
```python
import os
import requests
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
import pandas as pd
os.environ['JINA_API_KEY'] = 'jina_*'
db = lancedb.connect("~/.lancedb")
func = get_registry().get("jina").create()
class Images(LanceModel):
label: str
image_uri: str = func.SourceField() # image uri as the source
image_bytes: bytes = func.SourceField() # image bytes as the source
vector: Vector(func.ndims()) = func.VectorField() # vector column
vec_from_bytes: Vector(func.ndims()) = func.VectorField() # Another vector column
table = db.create_table("images", schema=Images) ## Multi-modal Embedding Functions🖼
labels = ["cat", "cat", "dog", "dog", "horse", "horse"]
uris = [ Multi-modal embedding functions allow you to query your table using both images and text. 💬🖼️
"http://farm1.staticflickr.com/53/167798175_7c7845bbbd_z.jpg",
"http://farm1.staticflickr.com/134/332220238_da527d8140_z.jpg", 🌐 **Available Multi-modal Embeddings**
"http://farm9.staticflickr.com/8387/8602747737_2e5c2a45d4_z.jpg",
"http://farm5.staticflickr.com/4092/5017326486_1f46057f5f_z.jpg", | Embedding :material-information-outline:{ title="Hover over the name to find out the model_id" } | Description | Documentation |
"http://farm9.staticflickr.com/8216/8434969557_d37882c42d_z.jpg", |-----------|-------------|---------------|
"http://farm6.staticflickr.com/5142/5835678453_4f3a4edb45_z.jpg", | [**OpenClip Embeddings**](available_embedding_models/multimodal_embedding_functions/openclip_embedding.md "open-clip") | 🎨 We support CLIP model embeddings using the open source alternative, **open-clip** which supports various customizations. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/openclip_github.png" alt="openclip Icon" width="150" height="35">](available_embedding_models/multimodal_embedding_functions/openclip_embedding.md) |
] | [**Imagebind Embeddings**](available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md "imageind") | 🌌 We have support for **imagebind model embeddings**. You can download our version of the packaged model via - `pip install imagebind-packaged==0.1.2`. | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/imagebind_meta.png" alt="imagebind Icon" width="150" height="35">](available_embedding_models/multimodal_embedding_functions/imagebind_embedding.md)|
# get each uri as bytes | [**Jina Multi-modal Embeddings**](available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md "jina") | 🔗 **Jina embeddings** can also be used to embed both **text** and **image** data, only some of the models support image data and you can check the detailed documentation. 👉 | [<img src="https://raw.githubusercontent.com/lancedb/assets/main/docs/assets/logos/jina.png" alt="jina Icon" width="90" height="35">](available_embedding_models/multimodal_embedding_functions/jina_multimodal_embedding.md) |
image_bytes = [requests.get(uri).content for uri in uris]
table.add( !!! note
pd.DataFrame({"label": labels, "image_uri": uris, "image_bytes": image_bytes}) If you'd like to request support for additional **embedding functions**, please feel free to open an issue on our LanceDB [GitHub issue page](https://github.com/lancedb/lancedb/issues).
)
```

View File

@@ -2,8 +2,8 @@ Representing multi-modal data as vector embeddings is becoming a standard practi
For this purpose, LanceDB introduces an **embedding functions API** that allows you to simply set it up once, during the configuration stage of your project. After this, the table remembers it, effectively making the embedding functions *disappear in the background* so you don't have to worry about manually passing callables, and can instead simply focus on the rest of your data engineering pipeline.
!!! Note "Embedding functions on LanceDB cloud"
    When using embedding functions with LanceDB cloud, the embeddings will be generated on the source device and sent to the cloud. This means that the source device must have the necessary resources to generate the embeddings.
!!! warning
    Using the embedding function registry means that you don't have to explicitly generate the embeddings yourself.
@@ -94,8 +94,8 @@ the embeddings at all:
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```ts ```ts
--8<-- "nodejs/examples/embedding.ts:imports" --8<-- "nodejs/examples/embedding.test.ts:imports"
--8<-- "nodejs/examples/embedding.ts:embedding_function" --8<-- "nodejs/examples/embedding.test.ts:embedding_function"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -150,7 +150,7 @@ need to worry about it when you query the table:
    .toArray()
    ```
=== "vectordb (deprecated)"
    ```ts
    const results = await table

View File

@@ -51,8 +51,8 @@ LanceDB registers the OpenAI embeddings function in the registry as `openai`. Yo
=== "TypeScript" === "TypeScript"
```typescript ```typescript
--8<--- "nodejs/examples/embedding.ts:imports" --8<--- "nodejs/examples/embedding.test.ts:imports"
--8<--- "nodejs/examples/embedding.ts:openai_embeddings" --8<--- "nodejs/examples/embedding.test.ts:openai_embeddings"
``` ```
=== "Rust" === "Rust"
@@ -99,34 +99,32 @@ LanceDB registers the Sentence Transformers embeddings function in the registry
    Coming Soon!
### Embedding function with LanceDB cloud
Embedding functions are now supported on LanceDB cloud. The embeddings will be generated on the source device and sent to the cloud. This means that the source device must have the necessary resources to generate the embeddings. Here's an example using the OpenAI embedding function:
```python
import os
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry

os.environ['OPENAI_API_KEY'] = "..."

db = lancedb.connect(
    uri="db://....",
    api_key="sk_...",
    region="us-east-1"
)
func = get_registry().get("openai").create()

class Words(LanceModel):
    text: str = func.SourceField()
    vector: Vector(func.ndims()) = func.VectorField()

table = db.create_table("words", schema=Words)
table.add([
    {"text": "hello world"},
    {"text": "goodbye world"}
])
query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]

View File

@@ -0,0 +1,133 @@
# Understand Embeddings
The term **dimension** is a synonym for the number of elements in a feature vector. Each feature can be thought of as a different axis in a geometric space.
High-dimensional data means there are many features (or attributes) in the data.
!!! example
1. An image is a data point and it might have thousands of dimensions because each pixel could be considered as a feature.
2. Text data, when represented by each word or character, can also lead to high dimensions, especially when considering all possible words in a language.
An embedding captures **meaning and relationships** within data by mapping high-dimensional data into a lower-dimensional space. It does so by placing inputs that are more **similar in meaning** closer together in the **embedding space**.
## What are Vector Embeddings?
Vector embeddings are a way to convert complex data, like text, images, or audio, into numerical coordinates (called vectors) that can be plotted in an n-dimensional space (the embedding space).
The closer these data points are related in the real world, the closer their corresponding numerical coordinates (vectors) will be to each other in the embedding space. This proximity in the embedding space reflects their semantic similarities, allowing machines to intuitively understand and process the data in a way that mirrors human perception of relationships and meaning.
In a way, it captures the most important aspects of the data while ignoring the less important ones. As a result, tasks like searching for related content or identifying patterns become more efficient and accurate, as the embeddings make it possible to quantify how **closely related** different **data points** are and **reduce** the **computational complexity**.
??? question "Are vectors and embeddings the same thing?"
When we say “vectors” we mean - **list of numbers** that **represents the data**.
When we say “embeddings” we mean - **list of numbers** that **capture important details and relationships**.
Although the terms are often used interchangeably, “embeddings” highlight how the data is represented with meaning and structure, while “vector” simply refers to the numerical form of that representation.
## Embedding vs Indexing
We already saw that creating **embeddings** from data is a method of creating **vectors** in an **n-dimensional embedding space** that captures the meaning and relationships inherent in the data.
Once we have these **vectors**, indexing comes into play. Indexing is a method of organizing these vector embeddings that allows us to quickly and efficiently locate and retrieve them from the entire dataset of vector embeddings (see the short sketch below).
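In LanceDB this typically means building an approximate nearest neighbour (ANN) index over the vector column. A minimal sketch, assuming `table` is an already-populated LanceDB table; the parameter values are illustrative, not recommendations:

```python
# Organize the stored embeddings into an IVF-PQ index so similarity
# queries don't have to scan every vector.
table.create_index(
    metric="cosine",      # distance metric used to compare embeddings
    num_partitions=256,   # number of coarse IVF partitions
    num_sub_vectors=96,   # number of PQ sub-vectors per embedding
)
```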
## What types of data/objects can be embedded?
The following are common types of data that can be embedded:
1. **Text**: Text data includes sentences, paragraphs, documents, or any written content.
2. **Images**: Image data encompasses photographs, illustrations, or any visual content.
3. **Audio**: Audio data includes sounds, music, speech, or any auditory content.
4. **Video**: Video data consists of moving images and sound, which can convey complex information.
Large datasets of multi-modal data (text, audio, images, etc.) can be converted into embeddings with the appropriate model.
!!! tip "LanceDB vs Other traditional Vector DBs"
While many vector databases primarily focus on the storage and retrieval of vector embeddings, **LanceDB** uses **Lance file format** (operates on a disk-based architecture), which allows for the storage and management of not just embeddings but also **raw file data (bytes)**. This capability means that users can integrate various types of data, including images and text, alongside their vector embeddings in a unified system.
With the ability to store both vectors and associated file data, LanceDB enhances the querying process. Users can perform semantic searches that not only retrieve similar embeddings but also access related files and metadata, thus streamlining the workflow.
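A minimal sketch of that idea follows; the captions, the placeholder byte strings, and the tiny 4-dimensional vectors standing in for real image embeddings are all hypothetical:

```python
import lancedb

db = lancedb.connect("data/sample-lancedb")

# Each row keeps the raw file bytes and metadata right next to its embedding.
rows = [
    {
        "vector": [0.1, 0.9, 0.2, 0.4],            # toy embedding of the image
        "image_bytes": b"\x89PNG...raw bytes...",  # raw file data stored as-is
        "caption": "a cat sleeping on a couch",
    },
    {
        "vector": [0.8, 0.1, 0.7, 0.3],
        "image_bytes": b"\x89PNG...raw bytes...",
        "caption": "a mountain lake at sunrise",
    },
]
table = db.create_table("images", data=rows)

# One similarity query returns the matching embeddings together with the
# stored bytes and metadata, so no second lookup in another system is needed.
hits = table.search([0.1, 0.8, 0.3, 0.4]).limit(1).to_list()
print(hits[0]["caption"])
```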
## How do embeddings work?
As mentioned, after creating embeddings, each data point is represented as a vector in an n-dimensional space (the embedding space). The dimensionality of this space can vary depending on the complexity of the data and the specific embedding technique used.
Points that are close to each other in vector space are considered similar (or appear in similar contexts), and points that are far away are considered dissimilar. To quantify this closeness, we use a distance (or similarity) metric, which can be measured in the following ways (a short sketch follows this list) -
1. **Euclidean Distance (L2)**: It calculates the straight-line distance between two points (vectors) in a multidimensional space.
2. **Cosine Similarity**: It measures the cosine of the angle between two vectors, providing a normalized measure of similarity based on their direction.
3. **Dot product**: It is calculated as the sum of the products of the corresponding components of the two vectors. To measure relatedness, it considers both the magnitude and direction of the vectors.
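A quick NumPy illustration of the three metrics on two toy vectors (the numbers are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])  # same direction as `a`, twice the magnitude

euclidean = np.linalg.norm(a - b)                                # straight-line (L2) distance
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))  # direction only
dot = np.dot(a, b)                                               # magnitude and direction

print(euclidean, cosine, dot)  # ~3.742, ~1.0 (identical direction), 28.0
```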
## How do you create and store vector embeddings for your data?
1. **Creating embeddings**: Choose an embedding model; it can be a pre-trained model (open-source or commercial), or you can train a custom embedding model for your scenario. Then feed your preprocessed data into the chosen model to obtain embeddings.
??? question "Popular choices for embedding models"
For text data, popular choices are OpenAI's text-embedding models, Google Gemini text-embedding models, Cohere's Embed models, SentenceTransformers, etc.
For image data, popular choices are CLIP (Contrastive Language-Image Pretraining), ImageBind embeddings by Meta (supports audio, video, and image), Jina multi-modal embeddings, etc.
2. **Storing vector embeddings**: This effectively requires **specialized databases** that can handle the complexity of vector data, as traditional databases often struggle with this task. Vector databases are designed specifically for storing and querying vector embeddings. They optimize for efficient nearest-neighbor searches and provide built-in indexing mechanisms.
!!! tip "Why LanceDB"
LanceDB **automates** the entire process of creating and storing embeddings for your data. LanceDB allows you to define and use **embedding functions**, which can be **pre-trained models** or **custom models**.
This enables you to **generate** embeddings tailored to the nature of your data (e.g., text, images) and **store** both the **original data** and **embeddings** in a **structured schema** thus providing efficient querying capabilities for similarity searches.
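Below is a minimal sketch of that workflow using LanceDB's embedding function registry; it assumes the sentence-transformers extra is installed, and the model name, table name, and sample rows are only examples:

```python
import lancedb
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector

# Pick a registered embedding function (the model name is an example choice).
model = get_registry().get("sentence-transformers").create(name="BAAI/bge-small-en-v1.5")

class Words(LanceModel):
    text: str = model.SourceField()                      # raw data goes in
    vector: Vector(model.ndims()) = model.VectorField()  # embeddings are generated automatically

db = lancedb.connect("data/sample-lancedb")
table = db.create_table("words", schema=Words)
table.add([{"text": "hello world"}, {"text": "goodbye world"}])

# The query string is embedded with the same function before the search runs.
best = table.search("greetings").limit(1).to_pydantic(Words)[0]
print(best.text)
```

Because the embedding function is recorded alongside the table's schema, later inserts and queries can reuse it without extra wiring.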
Let's quickly [get started](./index.md) and learn how to manage embeddings in LanceDB.
## Bonus: As a developer, what can you create using embeddings?
As a developer, you can create a variety of innovative applications using vector embeddings. Check out the following -
<div class="grid cards" markdown>
- __Chatbots__
---
Develop chatbots that utilize embeddings to retrieve relevant context and generate coherent, contextually aware responses to user queries.
[:octicons-arrow-right-24: Check out examples](../examples/python_examples/chatbot.md)
- __Recommendation Systems__
---
Develop systems that recommend content (such as articles, movies, or products) based on the similarity of keywords and descriptions, enhancing user experience.
[:octicons-arrow-right-24: Check out examples](../examples/python_examples/recommendersystem.md)
- __Vector Search__
---
Build powerful applications that harness the full potential of semantic search, enabling them to retrieve relevant data quickly and effectively.
[:octicons-arrow-right-24: Check out examples](../examples/python_examples/vector_search.md)
- __RAG Applications__
---
Combine the strengths of large language models (LLMs) with retrieval-based approaches to create more useful applications.
[:octicons-arrow-right-24: Check out examples](../examples/python_examples/rag.md)
- __Many more examples__
---
Explore applied examples available as Colab notebooks or Python scripts to integrate into your applications.
[:octicons-arrow-right-24: More](../examples/examples_python.md)
</div>

View File

@@ -1,17 +1,22 @@
# Overview: Python Examples
To help you get started, we provide some examples, projects, and applications that use the LanceDB Python API. These examples are designed to get you right into the code with minimal introduction, enabling you to move from an idea to a proof of concept in minutes.
You can find the latest examples in our [VectorDB Recipes](https://github.com/lancedb/vectordb-recipes) repository.
**Introduction**
Explore applied examples available as Colab notebooks or Python scripts to integrate into your applications. You can also check out our blog posts related to a particular example for a deeper understanding.
| Explore | Description |
|----------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [**Build from Scratch with LanceDB** 🛠️🚀](python_examples/build_from_scratch.md) | Start building your **GenAI applications** from the **ground up** using **LanceDB's** efficient vector-based document retrieval capabilities! Get started quickly with a solid foundation. |
| [**Multimodal Search with LanceDB** 🤹‍♂️🔍](python_examples/multimodal.md) | Combine **text** and **image queries** to find the most relevant results using **LanceDB's multimodal** capabilities. Leverage the efficient vector-based similarity search. |
| [**RAG (Retrieval-Augmented Generation) with LanceDB** 🔓🧐](python_examples/rag.md) | Build RAG (Retrieval-Augmented Generation) with **LanceDB** for efficient **vector-based information retrieval** and more accurate responses from AI. |
| [**Vector Search: Efficient Retrieval** 🔓👀](python_examples/vector_search.md) | Use **LanceDB's** vector search capabilities to perform efficient and accurate **similarity searches**, enabling rapid discovery and retrieval of relevant documents in large datasets. |
| [**Chatbot applications with LanceDB** 🤖](python_examples/chatbot.md) | Create **chatbots** that retrieve relevant context for **coherent and context-aware replies**, enhancing user experience through advanced conversational AI. |
| [**Evaluation: Assessing Text Performance with Precision** 📊💡](python_examples/evaluations.md) | Develop **evaluation** applications that allow you to input reference and candidate texts to **measure** their performance across various metrics. |
| [**AI Agents: Intelligent Collaboration** 🤖](python_examples/aiagent.md) | Enable **AI agents** to communicate and collaborate efficiently through dense vector representations, achieving shared goals seamlessly. |
| [**Recommender Systems: Personalized Discovery** 🍿📺](python_examples/recommendersystem.md) | Deliver **personalized experiences** by efficiently storing and querying item embeddings with **LanceDB's** powerful vector database capabilities. |
| **Miscellaneous Examples🌟** | Find other **unique examples** and **creative solutions** using **LanceDB**, showcasing the flexibility and broad applicability of the platform. |

View File

@@ -8,9 +8,15 @@ LanceDB provides language APIs, allowing you to embed a database in your languag
* 👾 [JavaScript](examples_js.md) examples
* 🦀 Rust examples (coming soon)
## Python Applications powered by LanceDB
| Project Name | Description |
| --- | --- |
| **Ultralytics Explorer 🚀**<br>[![Ultralytics](https://img.shields.io/badge/Ultralytics-Docs-green?labelColor=0f3bc4&style=flat-square&logo=https://cdn.prod.website-files.com/646dd1f1a3703e451ba81ecc/64994922cf2a6385a4bf4489_UltralyticsYOLO_mark_blue.svg&link=https://docs.ultralytics.com/datasets/explorer/)](https://docs.ultralytics.com/datasets/explorer/)<br>[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/docs/en/datasets/explorer/explorer.ipynb) | - 🔍 **Explore CV Datasets**: Semantic search, SQL queries, vector similarity, natural language.<br>- 🖥️ **GUI & Python API**: Seamless dataset interaction.<br>- ⚡ **Efficient & Scalable**: Leverages LanceDB for large datasets.<br>- 📊 **Detailed Analysis**: Easily analyze data patterns.<br>- 🌐 **Browser GUI Demo**: Create embeddings, search images, run queries. |
| **Website Chatbot🤖**<br>[![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/lancedb/lancedb-vercel-chatbot)<br>[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Flancedb%2Flancedb-vercel-chatbot&env=OPENAI_API_KEY&envDescription=OpenAI%20API%20Key%20for%20chat%20completion.&project-name=lancedb-vercel-chatbot&repository-name=lancedb-vercel-chatbot&demo-title=LanceDB%20Chatbot%20Demo&demo-description=Demo%20website%20chatbot%20with%20LanceDB.&demo-url=https%3A%2F%2Flancedb.vercel.app&demo-image=https%3A%2F%2Fi.imgur.com%2FazVJtvr.png) | - 🌐 **Chatbot from Sitemap/Docs**: Create a chatbot using site or document context.<br>- 🚀 **Embed LanceDB in Next.js**: Lightweight, on-prem storage.<br>- 🧠 **AI-Powered Context Retrieval**: Efficiently access relevant data.<br>- 🔧 **Serverless & Native JS**: Seamless integration with Next.js.<br>- ⚡ **One-Click Deploy on Vercel**: Quick and easy setup. |
## Nodejs Applications powered by LanceDB
| Project Name | Description |
| --- | --- |
| **Langchain Writing Assistant✍**<br>[![Github](../assets/github.svg)](https://github.com/lancedb/vectordb-recipes/tree/main/applications/node/lanchain_writing_assistant) | - **📂 Data Source Integration**: Use your own data by specifying a data source file, and the app instantly processes it to provide insights.<br>- **🧠 Intelligent Suggestions**: Powered by LangChain.js and LanceDB, it improves writing productivity and accuracy.<br>- **💡 Enhanced Writing Experience**: It delivers real-time contextual insights and factual suggestions while the user writes. |

View File

@@ -0,0 +1,27 @@
# AI Agents: Intelligent Collaboration🤖
Think of a platform where AI Agents can seamlessly exchange information, coordinate over tasks, and achieve shared targets with great efficiency💻📈.
## Vector-Based Coordination: The Technical Advantage
Leveraging LanceDB's vector-based capabilities, we can enable **AI agents 🤖** to communicate and collaborate through dense vector representations. AI agents can exchange information, coordinate on a task or work towards a common goal, just by giving queries📝.
| **AI Agents** | **Description** | **Links** |
|:--------------|:----------------|:----------|
| **AI Agents: Reducing Hallucination📊** | 🤖💡 **Reduce AI hallucinations** using Critique-Based Contexting! Learn by simplifying and automating tedious workflows through a fitness trainer agent example.💪 | [![Github](../../assets/github.svg)][hullucination_github] <br>[![Open In Collab](../../assets/colab.svg)][hullucination_colab] <br>[![Python](../../assets/python.svg)][hullucination_python] <br>[![Ghost](../../assets/ghost.svg)][hullucination_ghost] |
| **AI Trends Searcher: CrewAI🔍** | 🔍️ Learn about **CrewAI Agents**! Utilize the features of CrewAI - Role-based Agents, Task Management, and Inter-agent Delegation! Make AI agents work together to do tricky stuff 😺| [![Github](../../assets/github.svg)][trend_github] <br>[![Open In Collab](../../assets/colab.svg)][trend_colab] <br>[![Ghost](../../assets/ghost.svg)][trend_ghost] |
| **SuperAgent Autogen🤖** | 💻 AI interactions with the Super Agent! Integrating **Autogen**, **LanceDB**, **LangChain**, **LiteLLM**, and **Ollama** to create an AI agent that excels at understanding and processing complex queries.🤖 | [![Github](../../assets/github.svg)][superagent_github] <br>[![Open In Collab](../../assets/colab.svg)][superagent_colab] |
[hullucination_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/reducing_hallucinations_ai_agents
[hullucination_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/reducing_hallucinations_ai_agents/main.ipynb
[hullucination_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/reducing_hallucinations_ai_agents/main.py
[hullucination_ghost]: https://blog.lancedb.com/how-to-reduce-hallucinations-from-llm-powered-agents-using-long-term-memory-72f262c3cc1f/
[trend_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/AI-Trends-with-CrewAI
[trend_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/AI-Trends-with-CrewAI/CrewAI_AI_Trends.ipynb
[trend_ghost]: https://blog.lancedb.com/track-ai-trends-crewai-agents-rag/
[superagent_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/SuperAgent_Autogen
[superagent_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/SuperAgent_Autogen/main.ipynb

View File

@@ -0,0 +1,13 @@
# **Build from Scratch with LanceDB 🛠️🚀**
Start building your GenAI applications from the ground up using **LanceDB's** efficient vector-based document retrieval capabilities! 📑
**Get Started in Minutes ⏱️**
These examples provide a solid foundation for building your own GenAI applications using LanceDB. Jump from idea to **proof of concept** quickly with applied examples. Get started and see what you can create! 💻
| **Build From Scratch** | **Description** | **Links** |
|:-------------------------------------------|:-------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Build RAG from Scratch🚀💻** | 📝 Create a **Retrieval-Augmented Generation** (RAG) model from scratch using LanceDB. | [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/lancedb/vectordb-recipes/tree/main/tutorials/RAG-from-Scratch)<br>[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)]() |
| **Local RAG from Scratch with Llama3🔥💡** | 🐫 Build a local RAG model using **Llama3** and **LanceDB** for fast and efficient text generation. | [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/lancedb/vectordb-recipes/tree/main/tutorials/Local-RAG-from-Scratch)<br>[![Python](https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54)](https://github.com/lancedb/vectordb-recipes/blob/main/tutorials/Local-RAG-from-Scratch/rag.py) |
| **Multi-Head RAG from Scratch📚💻** | 🤯 Develop a **Multi-Head RAG model** from scratch, enabling generation of text based on multiple documents. | [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/lancedb/vectordb-recipes/tree/main/tutorials/Multi-Head-RAG-from-Scratch)<br>[![Python](https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54)](https://github.com/lancedb/vectordb-recipes/tree/main/tutorials/Multi-Head-RAG-from-Scratch) |

View File

@@ -0,0 +1,41 @@
**Chatbot applications with LanceDB 🤖**
====================================================================
Create innovative chatbot applications that utilize LanceDB for efficient vector-based response generation! 🌐✨
**Introduction 👋✨**
Users can input their queries, allowing the chatbot to retrieve relevant context seamlessly. 🔍📚 This enables the generation of coherent and context-aware replies that enhance user experience. 🌟🤝 Dive into the world of advanced conversational AI and streamline interactions with powerful data management! 🚀💡
| **Chatbot** | **Description** | **Links** |
|:----------------|:-----------------|:-----------|
| **Databricks DBRX Website Bot ⚡️** | Engage with the **Hogwarts chatbot**, which uses open-source RAG with **DBRX**, **LanceDB**, and **LLama-index with Hugging Face Embeddings** to provide interactive and engaging user experiences. ✨ | [![GitHub](../../assets/github.svg)][databricks_github] <br>[![Python](../../assets/python.svg)][databricks_python] |
| **CLI SDK Manual Chatbot Locally 💻** | CLI chatbot for SDK/hardware documents using **Local RAG** with **LLama3**, **Ollama**, **LanceDB**, and **Openhermes Embeddings**, built with **Phidata** Assistant and Knowledge Base 🤖 | [![GitHub](../../assets/github.svg)][clisdk_github] <br>[![Python](../../assets/python.svg)][clisdk_python] |
| **Youtube Transcript Search QA Bot 📹** | Search through **YouTube transcripts** using natural language with a Q&A bot, leveraging **LanceDB** for effortless data storage and management 💬 | [![GitHub](../../assets/github.svg)][youtube_github] <br>[![Open In Collab](../../assets/colab.svg)][youtube_colab] <br>[![Python](../../assets/python.svg)][youtube_python] |
| **Code Documentation Q&A Bot with LangChain 🤖** | Query your own documentation easily using questions in natural language with a Q&A bot, powered by **LangChain** and **LanceDB**, demonstrated with **Numpy 1.26 docs** 📚 | [![GitHub](../../assets/github.svg)][docs_github] <br>[![Open In Collab](../../assets/colab.svg)][docs_colab] <br>[![Python](../../assets/python.svg)][docs_python] |
| **Context-aware Chatbot using Llama 2 & LanceDB 🤖** | Build **conversational AI** with a **context-aware chatbot**, powered by **Llama 2**, **LanceDB**, and **LangChain**, that enables intuitive and meaningful conversations with your data 📚💬 | [![GitHub](../../assets/github.svg)][aware_github] <br>[![Open In Collab](../../assets/colab.svg)][aware_colab] <br>[![Ghost](../../assets/ghost.svg)][aware_ghost] |
| **Chat with csv using Hybrid Search 📊** | **Chat** application that interacts with **CSV** and **Excel files** using **LanceDB's** hybrid search capabilities, performing direct operations on large-scale columnar data efficiently 🚀 | [![GitHub](../../assets/github.svg)][csv_github] <br>[![Open In Collab](../../assets/colab.svg)][csv_colab] <br>[![Ghost](../../assets/ghost.svg)][csv_ghost] |
[databricks_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/databricks_DBRX_website_bot
[databricks_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/databricks_DBRX_website_bot/main.py
[clisdk_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/CLI-SDK-Manual-Chatbot-Locally
[clisdk_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/CLI-SDK-Manual-Chatbot-Locally/assistant.py
[youtube_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Youtube-Search-QA-Bot
[youtube_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Youtube-Search-QA-Bot/main.ipynb
[youtube_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Youtube-Search-QA-Bot/main.py
[docs_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Code-Documentation-QA-Bot
[docs_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Code-Documentation-QA-Bot/main.ipynb
[docs_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Code-Documentation-QA-Bot/main.py
[aware_github]: https://github.com/lancedb/vectordb-recipes/blob/main/tutorials/chatbot_using_Llama2_&_lanceDB
[aware_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/chatbot_using_Llama2_&_lanceDB/main.ipynb
[aware_ghost]: https://blog.lancedb.com/context-aware-chatbot-using-llama-2-lancedb-as-vector-database-4d771d95c755
[csv_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/Chat_with_csv_file
[csv_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/Chat_with_csv_file/main.ipynb
[csv_ghost]: https://blog.lancedb.com/p/d8c71df4-e55f-479a-819e-cde13354a6a3/

View File

@@ -0,0 +1,21 @@
**Evaluation: Assessing Text Performance with Precision 📊💡**
====================================================================
Evaluation is a comprehensive tool designed to measure the performance of text-based inputs, enabling data-driven optimization and improvement 📈.
**Text Evaluation 101 📚**
Using a robust framework for assessing reference and candidate texts across various metrics📊, you can ensure that text outputs are high-quality and meet specific requirements and standards📝.
| **Evaluation** | **Description** | **Links** |
| -------------- | --------------- | --------- |
| **Evaluating Prompts with Prompttools 🤖** | Compare, visualize & evaluate **embedding functions** (incl. OpenAI) across metrics like latency & custom evaluation 📈📊 | [![Github](../../assets/github.svg)][prompttools_github] <br>[![Open In Collab](../../assets/colab.svg)][prompttools_colab] |
| **Evaluating RAG with RAGAs and GPT-4o 📊** | Evaluate **RAG pipelines** with cutting-edge metrics and tools, integrate with CI/CD for continuous performance checks, and generate responses with GPT-4o 🤖📈 | [![Github](../../assets/github.svg)][RAGAs_github] <br>[![Open In Collab](../../assets/colab.svg)][RAGAs_colab] |
[prompttools_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/prompttools-eval-prompts
[prompttools_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/prompttools-eval-prompts/main.ipynb
[RAGAs_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Evaluating_RAG_with_RAGAs
[RAGAs_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Evaluating_RAG_with_RAGAs/Evaluating_RAG_with_RAGAs.ipynb

View File

@@ -0,0 +1,28 @@
# **Multimodal Search with LanceDB 🤹‍♂️🔍**
Using LanceDB's multimodal capabilities, combine text and image queries to find the most relevant results in your corpus! 🔓💡
**Explore the Future of Search 🚀**
LanceDB supports multimodal search by indexing and querying vector representations of text and image data 🤖. This enables efficient retrieval of relevant documents and images using vector-based similarity search 📊. The platform facilitates cross-modal search, allowing for text-image and image-text retrieval, and supports scalable indexing of high-dimensional vector spaces 💻.
| **Multimodal** | **Description** | **Links** |
|:----------------|:-----------------|:-----------|
| **Multimodal CLIP: DiffusionDB 🌐💥** | Multi-Modal Search with **CLIP** and **LanceDB** Using **DiffusionDB** Data for Combined Text and Image Understanding! 🔓 | [![GitHub](../../assets/github.svg)][Clip_diffusionDB_github] <br>[![Open In Collab](../../assets/colab.svg)][Clip_diffusionDB_colab] <br>[![Python](../../assets/python.svg)][Clip_diffusionDB_python] <br>[![Ghost](../../assets/ghost.svg)][Clip_diffusionDB_ghost] |
| **Multimodal CLIP: Youtube Videos 📹👀** | Search **Youtube videos** using Multimodal CLIP, finding relevant content with ease and accuracy! 🎯 | [![Github](../../assets/github.svg)][Clip_youtube_github] <br>[![Open In Collab](../../assets/colab.svg)][Clip_youtube_colab] <br> [![Python](../../assets/python.svg)][Clip_youtube_python] <br>[![Ghost](../../assets/ghost.svg)][Clip_youtube_ghost] |
| **Multimodal Image + Text Search 📸🔍** | Find **relevant documents** and **images** with a single query using **LanceDB's** multimodal search capabilities, to seamlessly integrate text and visuals ! 🌉 | [![GitHub](../../assets/github.svg)](https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/multimodal_search) <br>[![Open In Collab](../../assets/colab.svg)](https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/multimodal_search/main.ipynb) <br> [![Python](../../assets/python.svg)](https://github.com/lancedb/vectordb-recipes/blob/main/examples/multimodal_search/main.py)<br> [![Ghost](../../assets/ghost.svg)](https://blog.lancedb.com/multi-modal-ai-made-easy-with-lancedb-clip-5aaf8801c939/) |
| **Cambrian-1: Vision-Centric Image Exploration 🔍👀** | Learn how **Cambrian-1** works with an example of **Vision-Centric** exploration on images found through vector search! Works on the **Flickr-8k** dataset 🔎 | [![Kaggle](https://img.shields.io/badge/Kaggle-035a7d?style=for-the-badge&logo=kaggle&logoColor=white)](https://www.kaggle.com/code/prasantdixit/cambrian-1-vision-centric-exploration-of-images/)<br> [![Ghost](../../assets/ghost.svg)](https://blog.lancedb.com/cambrian-1-vision-centric-exploration/) |
[Clip_diffusionDB_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/multimodal_clip_diffusiondb
[Clip_diffusionDB_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/multimodal_clip_diffusiondb/main.ipynb
[Clip_diffusionDB_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/multimodal_clip_diffusiondb/main.py
[Clip_diffusionDB_ghost]: https://blog.lancedb.com/multi-modal-ai-made-easy-with-lancedb-clip-5aaf8801c939/
[Clip_youtube_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/multimodal_video_search
[Clip_youtube_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/multimodal_video_search/main.ipynb
[Clip_youtube_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/multimodal_video_search/main.py
[Clip_youtube_ghost]: https://blog.lancedb.com/multi-modal-ai-made-easy-with-lancedb-clip-5aaf8801c939/

View File

@@ -0,0 +1,83 @@
**RAG (Retrieval-Augmented Generation) with LanceDB 🔓🧐**
====================================================================
Build RAG (Retrieval-Augmented Generation) with LanceDB, a powerful solution for efficient vector-based information retrieval 📊.
**Experience the Future of Search 🔄**
🤖 RAG enables AI to **retrieve** relevant information from external sources and use it to **generate** more accurate and context-specific responses. 💻 LanceDB provides a robust framework for integrating LLMs with external knowledge sources 📝.
| **RAG** | **Description** | **Links** |
|----------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|
| **RAG with Matryoshka Embeddings and LlamaIndex** 🪆🔗 | Utilize **Matryoshka embeddings** and **LlamaIndex** to improve the efficiency and accuracy of your RAG models. 📈✨ | [![Github](../../assets/github.svg)][matryoshka_github] <br>[![Open In Collab](../../assets/colab.svg)][matryoshka_colab] |
| **Improve RAG with Re-ranking** 📈🔄 | Enhance your RAG applications by implementing **re-ranking strategies** for more relevant document retrieval. 📚🔍 | [![Github](../../assets/github.svg)][rag_reranking_github] <br>[![Open In Collab](../../assets/colab.svg)][rag_reranking_colab] <br>[![Ghost](../../assets/ghost.svg)][rag_reranking_ghost] |
| **Instruct-Multitask** 🧠🎯 | Integrate the **Instruct Embedding Model** with LanceDB to streamline your embedding API, reducing redundant code and overhead. 🌐📊 | [![Github](../../assets/github.svg)][instruct_multitask_github] <br>[![Open In Collab](../../assets/colab.svg)][instruct_multitask_colab] <br>[![Python](../../assets/python.svg)][instruct_multitask_python] <br>[![Ghost](../../assets/ghost.svg)][instruct_multitask_ghost] |
| **Improve RAG with HyDE** 🌌🔍 | Use **Hypothetical Document Embeddings** for efficient, accurate, and unsupervised dense retrieval. 📄🔍 | [![Github](../../assets/github.svg)][hyde_github] <br>[![Open In Collab](../../assets/colab.svg)][hyde_colab]<br>[![Ghost](../../assets/ghost.svg)][hyde_ghost] |
| **Improve RAG with LOTR** 🧙‍♂️📜 | Enhance RAG with **Lord of the Retriever (LOTR)** to address 'Lost in the Middle' challenges, especially in medical data. 🌟📜 | [![Github](../../assets/github.svg)][lotr_github] <br>[![Open In Collab](../../assets/colab.svg)][lotr_colab] <br>[![Ghost](../../assets/ghost.svg)][lotr_ghost] |
| **Advanced RAG: Parent Document Retriever** 📑🔗 | Use **Parent Document & Bigger Chunk Retriever** to maintain context and relevance when generating related content. 🎵📄 | [![Github](../../assets/github.svg)][parent_doc_retriever_github] <br>[![Open In Collab](../../assets/colab.svg)][parent_doc_retriever_colab] <br>[![Ghost](../../assets/ghost.svg)][parent_doc_retriever_ghost] |
| **Corrective RAG with Langgraph** 🔧📊 | Enhance RAG reliability with **Corrective RAG (CRAG)** by self-reflecting and fact-checking for accurate and trustworthy results. ✅🔍 |[![Github](../../assets/github.svg)][corrective_rag_github] <br>[![Open In Collab](../../assets/colab.svg)][corrective_rag_colab] <br>[![Ghost](../../assets/ghost.svg)][corrective_rag_ghost] |
| **Contextual Compression with RAG** 🗜️🧠 | Apply **contextual compression techniques** to condense large documents while retaining essential information. 📄🗜️ | [![Github](../../assets/github.svg)][compression_rag_github] <br>[![Open In Collab](../../assets/colab.svg)][compression_rag_colab] <br>[![Ghost](../../assets/ghost.svg)][compression_rag_ghost] |
| **Improve RAG with FLARE** 🔥| Enable users to ask questions directly to **academic papers**, focusing on **ArXiv papers**, with **F**orward-**L**ooking **A**ctive **RE**trieval augmented generation.🚀🌟 | [![Github](../../assets/github.svg)][flare_github] <br>[![Open In Collab](../../assets/colab.svg)][flare_colab] <br>[![Ghost](../../assets/ghost.svg)][flare_ghost] |
| **Query Expansion and Reranker** 🔍🔄 | Enhance RAG with query expansion using Large Language Models and advanced **reranking methods** like **Cross Encoders**, **ColBERT v2**, and **FlashRank** for improved document retrieval precision and recall 🔍📈 | [![Github](../../assets/github.svg)][query_github] <br>[![Open In Collab](../../assets/colab.svg)][query_colab] |
| **RAG Fusion** ⚡🌐 | Build RAG Fusion, utilizing the **RRF algorithm** to rerank documents based on user queries! Use **LanceDB** as the vector database to store and retrieve documents related to queries via **OpenAI Embeddings**⚡🌐 | [![Github](../../assets/github.svg)][fusion_github] <br>[![Open In Collab](../../assets/colab.svg)][fusion_colab] |
| **Agentic RAG** 🤖📚 | Build autonomous information retrieval with **Agentic RAG**, a framework of **intelligent agents** that collaborate to synthesize, summarize, and compare data across sources, enabling proactive and informed decision-making 🤖📚 | [![Github](../../assets/github.svg)][agentic_github] <br>[![Open In Collab](../../assets/colab.svg)][agentic_colab] |
[matryoshka_github]: https://github.com/lancedb/vectordb-recipes/blob/main/tutorials/RAG-with_MatryoshkaEmbed-Llamaindex
[matryoshka_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/RAG-with_MatryoshkaEmbed-Llamaindex/RAG_with_MatryoshkaEmbedding_and_Llamaindex.ipynb
[rag_reranking_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/RAG_Reranking
[rag_reranking_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/RAG_Reranking/main.ipynb
[rag_reranking_ghost]: https://blog.lancedb.com/simplest-method-to-improve-rag-pipeline-re-ranking-cf6eaec6d544
[instruct_multitask_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/instruct-multitask
[instruct_multitask_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/instruct-multitask/main.ipynb
[instruct_multitask_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/instruct-multitask/main.py
[instruct_multitask_ghost]: https://blog.lancedb.com/multitask-embedding-with-lancedb-be18ec397543
[hyde_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Advance-RAG-with-HyDE
[hyde_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Advance-RAG-with-HyDE/main.ipynb
[hyde_ghost]: https://blog.lancedb.com/advanced-rag-precise-zero-shot-dense-retrieval-with-hyde-0946c54dfdcb
[lotr_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Advance_RAG_LOTR
[lotr_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Advance_RAG_LOTR/main.ipynb
[lotr_ghost]: https://blog.lancedb.com/better-rag-with-lotr-lord-of-retriever-23c8336b9a35
[parent_doc_retriever_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/parent_document_retriever
[parent_doc_retriever_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/parent_document_retriever/main.ipynb
[parent_doc_retriever_ghost]: https://blog.lancedb.com/modified-rag-parent-document-bigger-chunk-retriever-62b3d1e79bc6
[corrective_rag_github]: https://github.com/lancedb/vectordb-recipes/blob/main/tutorials/Corrective-RAG-with_Langgraph
[corrective_rag_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/Corrective-RAG-with_Langgraph/CRAG_with_Langgraph.ipynb
[corrective_rag_ghost]: https://blog.lancedb.com/implementing-corrective-rag-in-the-easiest-way-2/
[compression_rag_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Contextual-Compression-with-RAG
[compression_rag_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Contextual-Compression-with-RAG/main.ipynb
[compression_rag_ghost]: https://blog.lancedb.com/enhance-rag-integrate-contextual-compression-and-filtering-for-precision-a29d4a810301/
[flare_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/better-rag-FLAIR
[flare_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/better-rag-FLAIR/main.ipynb
[flare_ghost]: https://blog.lancedb.com/better-rag-with-active-retrieval-augmented-generation-flare-3b66646e2a9f/
[query_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/QueryExpansion%26Reranker
[query_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/QueryExpansion&Reranker/main.ipynb
[fusion_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/RAG_Fusion
[fusion_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/RAG_Fusion/main.ipynb
[agentic_github]: https://github.com/lancedb/vectordb-recipes/blob/main/tutorials/Agentic_RAG
[agentic_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/Agentic_RAG/main.ipynb

View File

@@ -0,0 +1,37 @@
**Recommender Systems: Personalized Discovery🍿📺**
==============================================================
Deliver personalized experiences with Recommender Systems. 🎁
**Technical Overview📜**
🔍️ LanceDB's powerful vector database capabilities can efficiently store and query item embeddings. Recommender systems can use these to provide personalized recommendations based on user preferences 🤝 and item features 📊, enhancing the user experience.🗂️
| **Recommender System** | **Description** | **Links** |
| ---------------------- | --------------- | --------- |
| **Movie Recommender System🎬** | 🤝 Use **collaborative filtering** to predict user preferences, assuming similar users will like similar movies, and leverage **Singular Value Decomposition** (SVD) from Numpy for precise matrix factorization and accurate recommendations📊 | [![Github](../../assets/github.svg)][movie_github] <br>[![Open In Collab](../../assets/colab.svg)][movie_colab] <br>[![Python](../../assets/python.svg)][movie_python] |
| **🎥 Movie Recommendation with Genres** | 🔍 Creates movie embeddings using **Doc2Vec**, capturing genre and characteristic nuances, and leverages VectorDB for efficient storage and querying, enabling accurate genre classification and personalized movie recommendations through **similarity searches**🎥 | [![Github](../../assets/github.svg)][genre_github] <br>[![Open In Collab](../../assets/colab.svg)][genre_colab] <br>[![Ghost](../../assets/ghost.svg)][genre_ghost] |
| **🛍️ Product Recommender using Collaborative Filtering and LanceDB** | 📈 Use **collaborative filtering** and **LanceDB** to analyze a user's past purchases and recommend products based on that purchase history. Demonstrated with the Instacart dataset in our example🛒 | [![Github](../../assets/github.svg)][product_github] <br>[![Open In Collab](../../assets/colab.svg)][product_colab] <br>[![Python](../../assets/python.svg)][product_python] |
| **🔍 Arxiv Search with OpenCLIP and LanceDB** | 💡 Build a semantic search engine for **Arxiv papers** using **LanceDB**, and benchmark its performance against traditional keyword-based search on **Nomic's Atlas**, to demonstrate the power of semantic search in finding relevant research papers📚 | [![Github](../../assets/github.svg)][arxiv_github] <br>[![Open In Collab](../../assets/colab.svg)][arxiv_colab] <br>[![Python](../../assets/python.svg)][arxiv_python] |
| **Food Recommendation System🍴** | 🍔 Build a food recommendation system with **LanceDB**, featuring vector-based recommendations, full-text search, hybrid search, and reranking model integration for personalized and accurate food suggestions👌 | [![Github](../../assets/github.svg)][food_github] <br>[![Open In Collab](../../assets/colab.svg)][food_colab] |
[movie_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/movie-recommender
[movie_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/movie-recommender/main.ipynb
[movie_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/movie-recommender/main.py
[genre_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/movie-recommendation-with-genres
[genre_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/movie-recommendation-with-genres/movie_recommendation_with_doc2vec_and_lancedb.ipynb
[genre_ghost]: https://blog.lancedb.com/movie-recommendation-system-using-lancedb-and-doc2vec/
[product_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/product-recommender
[product_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/product-recommender/main.ipynb
[product_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/product-recommender/main.py
[arxiv_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/arxiv-recommender
[arxiv_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/arxiv-recommender/main.ipynb
[arxiv_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/arxiv-recommender/main.py
[food_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/Food_recommendation
[food_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/Food_recommendation/main.ipynb

View File

@@ -0,0 +1,80 @@
**Vector Search: Efficient Retrieval 🔓👀**
====================================================================
Vector search with LanceDB is a solution for efficient and accurate similarity searches in large datasets 📊.
**Vector Search Capabilities in LanceDB🔝**
LanceDB implements vector search algorithms for efficient document retrieval and analysis 📊. This enables fast and accurate discovery of relevant documents, leveraging dense vector representations 🤖. The platform supports scalable indexing and querying of high-dimensional vector spaces, facilitating precise document matching and retrieval 📈.
| **Vector Search** | **Description** | **Links** |
|:-----------------|:---------------|:---------|
| **Inbuilt Hybrid Search 🔄** | Perform hybrid search in **LanceDB** by combining the results of semantic and full-text search via a reranking algorithm of your choice 📊 | [![Github](../../assets/github.svg)][inbuilt_hybrid_search_github] <br>[![Open In Collab](../../assets/colab.svg)][inbuilt_hybrid_search_colab] |
| **Hybrid Search with BM25 and LanceDB 💡** | Synergize **BM25's** keyword-focused precision (term frequency, document length normalization, bias-free retrieval) with **LanceDB's** semantic understanding (contextual analysis, query intent alignment) for nuanced search results in complex datasets 📈 | [![Github](../../assets/github.svg)][BM25_github] <br>[![Open In Collab](../../assets/colab.svg)][BM25_colab] <br>[![Ghost](../../assets/ghost.svg)][BM25_ghost] |
| **NER-powered Semantic Search 🔎** | Extract and identify essential information from text with Named Entity Recognition **(NER)** methods: Dictionary-Based, Rule-Based, and Deep Learning-Based, to accurately extract and categorize entities, enabling precise semantic search results 🗂️ | [![Github](../../assets/github.svg)][NER_github] <br>[![Open In Collab](../../assets/colab.svg)][NER_colab] <br>[![Ghost](../../assets/ghost.svg)][NER_ghost]|
| **Audio Similarity Search using Vector Embeddings 🎵** | Create vector **embeddings of audio files** to find similar audio content, enabling efficient audio similarity search and retrieval in **LanceDB's** vector store 📻 |[![Github](../../assets/github.svg)][audio_search_github] <br>[![Open In Collab](../../assets/colab.svg)][audio_search_colab] <br>[![Python](../../assets/python.svg)][audio_search_python]|
| **LanceDB Embeddings API: Multi-lingual Semantic Search 🌎** | Build a universal semantic search table with **LanceDB's Embeddings API**, supporting multiple languages (e.g., English, French) using **Cohere's** multi-lingual model, for accurate cross-lingual search results 📄 | [![Github](../../assets/github.svg)][mls_github] <br>[![Open In Collab](../../assets/colab.svg)][mls_colab] <br>[![Python](../../assets/python.svg)][mls_python] |
| **Facial Recognition: Face Embeddings 🤖** | Detect, crop, and embed faces using Facenet, then store and query face embeddings in **LanceDB** for efficient facial recognition and top-K matching results 👥 | [![Github](../../assets/github.svg)][fr_github] <br>[![Open In Collab](../../assets/colab.svg)][fr_colab] |
| **Sentiment Analysis: Hotel Reviews 🏨** | Analyze customer sentiments towards the hotel industry using **BERT models**, storing sentiment labels, scores, and embeddings in **LanceDB**, enabling queries on customer opinions and potential areas for improvement 💬 | [![Github](../../assets/github.svg)][sentiment_analysis_github] <br>[![Open In Collab](../../assets/colab.svg)][sentiment_analysis_colab] <br>[![Ghost](../../assets/ghost.svg)][sentiment_analysis_ghost] |
| **Vector Arithmetic with LanceDB ⚖️** | Perform **vector arithmetic** on embeddings, enabling complex relationships and nuances in data to be captured, and simplifying the process of retrieving semantically similar results 📊 | [![Github](../../assets/github.svg)][arithmetic_github] <br>[![Open In Collab](../../assets/colab.svg)][arithmetic_colab] <br>[![Ghost](../../assets/ghost.svg)][arithmetic_ghost] |
| **Imagebind Demo 🖼️** | Explore the multi-modal capabilities of **Imagebind** through a Gradio app, use **LanceDB API** for seamless image search and retrieval experiences 📸 | [![Github](../../assets/github.svg)][imagebind_github] <br> [![Open in Spaces](../../assets/open_hf_space.svg)][imagebind_huggingface] |
| **Search Engine using SAM & CLIP 🔍** | Build a search engine within an image using **SAM** and **CLIP** models, enabling object-level search and retrieval, with LanceDB indexing and search capabilities to find the closest match between image embeddings and user queries 📸 | [![Github](../../assets/github.svg)][swi_github] <br>[![Open In Collab](../../assets/colab.svg)][swi_colab] <br>[![Ghost](../../assets/ghost.svg)][swi_ghost] |
| **Zero Shot Object Localization and Detection with CLIP 🔎** | Perform object detection on images using **OpenAI's CLIP**, enabling zero-shot localization and detection of objects, with capabilities to split images into patches, parse with CLIP, and plot bounding boxes 📊 | [![Github](../../assets/github.svg)][zsod_github] <br>[![Open In Collab](../../assets/colab.svg)][zsod_colab] |
| **Accelerate Vector Search with OpenVINO 🚀** | Boost vector search applications using **OpenVINO**, achieving significant speedups with **CLIP** for text-to-image and image-to-image searching, through PyTorch model optimization, FP16 and INT8 format conversion, and quantization with **OpenVINO NNCF** 📈 | [![Github](../../assets/github.svg)][openvino_github] <br>[![Open In Collab](../../assets/colab.svg)][openvino_colab] <br>[![Ghost](../../assets/ghost.svg)][openvino_ghost] |
| **Zero-Shot Image Classification with CLIP and LanceDB 📸** | Achieve zero-shot image classification using **CLIP** and **LanceDB**, enabling models to classify images without prior training on specific use cases, unlocking flexible and adaptable image classification capabilities 🔓 | [![Github](../../assets/github.svg)][zsic_github] <br>[![Open In Collab](../../assets/colab.svg)][zsic_colab] <br>[![Ghost](../../assets/ghost.svg)][zsic_ghost] |
[inbuilt_hybrid_search_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Inbuilt-Hybrid-Search
[inbuilt_hybrid_search_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Inbuilt-Hybrid-Search/Inbuilt_Hybrid_Search_with_LanceDB.ipynb
[BM25_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Hybrid_search_bm25_lancedb
[BM25_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Hybrid_search_bm25_lancedb/main.ipynb
[BM25_ghost]: https://blog.lancedb.com/hybrid-search-combining-bm25-and-semantic-search-for-better-results-with-lan-1358038fe7e6
[NER_github]: https://github.com/lancedb/vectordb-recipes/blob/main/tutorials/NER-powered-Semantic-Search
[NER_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/tutorials/NER-powered-Semantic-Search/NER_powered_Semantic_Search_with_LanceDB.ipynb
[NER_ghost]: https://blog.lancedb.com/ner-powered-semantic-search-using-lancedb-51051dc3e493
[audio_search_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/audio_search
[audio_search_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/audio_search/main.ipynb
[audio_search_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/archived_examples/audio_search/main.py
[mls_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/multi-lingual-wiki-qa
[mls_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/multi-lingual-wiki-qa/main.ipynb
[mls_python]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/archived_examples/multi-lingual-wiki-qa/main.py
[fr_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/facial_recognition
[fr_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/facial_recognition/main.ipynb
[sentiment_analysis_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Sentiment-Analysis-Analyse-Hotel-Reviews
[sentiment_analysis_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Sentiment-Analysis-Analyse-Hotel-Reviews/Sentiment_Analysis_using_LanceDB.ipynb
[sentiment_analysis_ghost]: https://blog.lancedb.com/sentiment-analysis-using-lancedb-2da3cb1e3fa6
[arithmetic_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Vector-Arithmetic-with-LanceDB
[arithmetic_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Vector-Arithmetic-with-LanceDB/main.ipynb
[arithmetic_ghost]: https://blog.lancedb.com/vector-arithmetic-with-lancedb-an-intro-to-vector-embeddings/
[imagebind_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/imagebind_demo
[imagebind_huggingface]: https://huggingface.co/spaces/raghavd99/imagebind2
[swi_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/search-within-images-with-sam-and-clip
[swi_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/search-within-images-with-sam-and-clip/main.ipynb
[swi_ghost]: https://blog.lancedb.com/search-within-an-image-331b54e4285e
[zsod_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/zero-shot-object-detection-CLIP
[zsod_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/zero-shot-object-detection-CLIP/zero_shot_object_detection_clip.ipynb
[openvino_github]: https://github.com/lancedb/vectordb-recipes/blob/main/examples/Accelerate-Vector-Search-Applications-Using-OpenVINO
[openvino_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/Accelerate-Vector-Search-Applications-Using-OpenVINO/clip_text_image_search.ipynb
[openvino_ghost]: https://blog.lancedb.com/accelerate-vector-search-applications-using-openvino-lancedb/
[zsic_github]: https://github.com/lancedb/vectordb-recipes/tree/main/examples/archived_examples/zero-shot-image-classification
[zsic_colab]: https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/archived_examples/zero-shot-image-classification/main.ipynb
[zsic_ghost]: https://blog.lancedb.com/zero-shot-image-classification-with-vector-search/

View File

@@ -25,8 +25,8 @@ s3://eto-public/datasets/sift/vec_data.lance
Then, we can write a quick Python script to populate our LanceDB Table:
```python
import lance

sift_dataset = lance.dataset("/path/to/local/vec_data.lance")
df = sift_dataset.to_table().to_pandas()

import lancedb

View File

@@ -1,173 +1,229 @@
# Full-text search # Full-text search (Native FTS)
LanceDB provides support for full-text search via [Tantivy](https://github.com/quickwit-oss/tantivy) (currently Python only), allowing you to incorporate keyword-based search (based on BM25) in your retrieval solutions. Our goal is to push the FTS integration down to the Rust level in the future, so that it's available for Rust and JavaScript users as well. Follow along at [this Github issue](https://github.com/lancedb/lance/issues/1195) LanceDB provides support for full-text search via Lance, allowing you to incorporate keyword-based search (based on BM25) in your retrieval solutions.
!!! note
## Installation The Python SDK uses the tantivy-based FTS by default; pass `use_tantivy=False` to use native FTS.
To use full-text search, install the dependency [`tantivy-py`](https://github.com/quickwit-oss/tantivy-py):
```sh
# Say you want to use tantivy==0.20.1
pip install tantivy==0.20.1
```
## Example ## Example
Consider that we have a LanceDB table named `my_table`, whose string column `text` we want to index and query via keyword search. Consider that we have a LanceDB table named `my_table`, whose string column `text` we want to index and query via keyword search. The FTS index must be created before you can search via keywords.
```python === "Python"
import lancedb
uri = "data/sample-lancedb" ```python
db = lancedb.connect(uri) import lancedb
table = db.create_table( uri = "data/sample-lancedb"
db = lancedb.connect(uri)
table = db.create_table(
"my_table", "my_table",
data=[ data=[
{"vector": [3.1, 4.1], "text": "Frodo was a happy puppy"}, {"vector": [3.1, 4.1], "text": "Frodo was a happy puppy"},
{"vector": [5.9, 26.5], "text": "There are several kittens playing"}, {"vector": [5.9, 26.5], "text": "There are several kittens playing"},
], ],
) )
```
## Create FTS index on single column # passing `use_tantivy=False` to use lance FTS index
# `use_tantivy=True` by default
table.create_fts_index("text", use_tantivy=False)
table.search("puppy").limit(10).select(["text"]).to_list()
# [{'text': 'Frodo was a happy puppy', '_score': 0.6931471824645996}]
# ...
```
The FTS index must be created before you can search via keywords. === "TypeScript"
```python ```typescript
table.create_fts_index("text") import * as lancedb from "@lancedb/lancedb";
``` const uri = "data/sample-lancedb"
const db = await lancedb.connect(uri);
To search an FTS index via keywords, LanceDB's `table.search` accepts a string as input: const data = [
{ vector: [3.1, 4.1], text: "Frodo was a happy puppy" },
{ vector: [5.9, 26.5], text: "There are several kittens playing" },
];
const tbl = await db.createTable("my_table", data, { mode: "overwrite" });
await tbl.createIndex("text", {
config: lancedb.Index.fts(),
});
```python await tbl
table.search("puppy").limit(10).select(["text"]).to_list() .search("puppy", queryType="fts")
``` .select(["text"])
.limit(10)
.toArray();
```
This returns the result as a list of dictionaries as follows. === "Rust"
```python ```rust
[{'text': 'Frodo was a happy puppy', 'score': 0.6931471824645996}] let uri = "data/sample-lancedb";
``` let db = connect(uri).execute().await?;
let initial_data: Box<dyn RecordBatchReader + Send> = create_some_records()?;
let tbl = db
.create_table("my_table", initial_data)
.execute()
.await?;
tbl
.create_index(&["text"], Index::FTS(FtsIndexBuilder::default()))
.execute()
.await?;
tbl
.query()
.full_text_search(FullTextSearchQuery::new("puppy".to_owned()))
.select(lancedb::query::Select::Columns(vec!["text".to_owned()]))
.limit(10)
.execute()
.await?;
```
The search runs over all indexed columns by default, which is useful when there are multiple indexed columns.
Pass `fts_columns="text"` if you want to restrict the search to specific columns.
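For instance, a minimal sketch (assuming `fts_columns` is accepted by `search()` as described above):
```python
# Restrict the keyword search to the "text" column of the FTS index
results = table.search("puppy", fts_columns="text").limit(10).to_list()
```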
!!! note !!! note
LanceDB automatically searches on the existing FTS index if the input to the search is of type `str`. If you provide a vector as input, LanceDB will search the ANN index instead. LanceDB automatically searches on the existing FTS index if the input to the search is of type `str`. If you provide a vector as input, LanceDB will search the ANN index instead.
## Tokenization ## Tokenization
By default the text is tokenized by splitting on punctuation and whitespaces and then removing tokens that are longer than 40 chars. For more language specific tokenization then provide the argument tokenizer_name with the 2 letter language code followed by "_stem". So for english it would be "en_stem". By default the text is tokenized by splitting on punctuation and whitespace, filtering out words longer than 40 characters, and lowercasing all words.
Stemming is useful for improving search results by reducing words to their root form, e.g. "running" to "run". LanceDB supports stemming for multiple languages; you can enable it by specifying the tokenizer name with the pattern `tokenizer_name="{language_code}_stem"`, e.g. `en_stem` for English.
For example, to enable stemming for English:
```python ```python
table.create_fts_index("text", tokenizer_name="en_stem") table.create_fts_index("text", use_tantivy=True, tokenizer_name="en_stem")
``` ```
The following [languages](https://docs.rs/tantivy/latest/tantivy/tokenizer/enum.Language.html) are currently supported. The following [languages](https://docs.rs/tantivy/latest/tantivy/tokenizer/enum.Language.html) are currently supported.
The tokenizer is customizable: you can specify how it splits the text, how it filters out words, and so on.
## Index multiple columns For example, for language with accents, you can specify the tokenizer to use `ascii_folding` to remove accents, e.g. 'é' to 'e':
If you have multiple string columns to index, there's no need to combine them manually -- simply pass them all as a list to `create_fts_index`:
```python ```python
table.create_fts_index(["text1", "text2"]) table.create_fts_index("text",
use_tantivy=False,
language="French",
stem=True,
ascii_folding=True)
``` ```
Note that the search API call does not change - you can search over all indexed columns at once.
## Filtering ## Filtering
Currently the LanceDB full text search feature supports *post-filtering*, meaning filters are LanceDB full-text search supports filtering the search results by a condition; both pre-filtering and post-filtering are supported.
applied on top of the full text search results. This can be invoked via the familiar
`where` syntax:
```python This can be invoked via the familiar `where` syntax.
table.search("puppy").limit(10).where("meta='foo'").to_list()
```
## Sorting With pre-filtering:
=== "Python"
You can pre-sort the documents by specifying `ordering_field_names` when ```python
creating the full-text search index. Once pre-sorted, you can then specify table.search("puppy").limit(10).where("meta='foo'", prefilter=True).to_list()
`ordering_field_name` while searching to return results sorted by the given ```
field. For example,
``` === "TypeScript"
table.create_fts_index(["text_field"], ordering_field_names=["sort_by_field"])
(table.search("terms", ordering_field_name="sort_by_field") ```typescript
.limit(20) await tbl
.to_list()) .search("puppy")
``` .select(["id", "doc"])
.limit(10)
.where("meta='foo'")
.prefilter(true)
.toArray();
```
!!! note === "Rust"
If you wish to specify an ordering field at query time, you must also
have specified it during indexing time. Otherwise at query time, an
error will be raised that looks like `ValueError: The field does not exist: xxx`
!!! note ```rust
The fields to sort on must be of typed unsigned integer, or else you will see table
an error during indexing that looks like .query()
`TypeError: argument 'value': 'float' object cannot be interpreted as an integer`. .full_text_search(FullTextSearchQuery::new("puppy".to_owned()))
.select(lancedb::query::Select::Columns(vec!["doc".to_owned()]))
.limit(10)
.only_if("meta='foo'")
.execute()
.await?;
```
!!! note With post-filtering:
You can specify multiple fields for ordering at indexing time. === "Python"
But at query time only one ordering field is supported.
```python
table.search("puppy").limit(10).where("meta='foo'", prefilte=False).to_list()
```
=== "TypeScript"
```typescript
await tbl
.search("apple")
.select(["id", "doc"])
.limit(10)
.where("meta='foo'")
.prefilter(false)
.toArray();
```
=== "Rust"
```rust
table
.query()
.full_text_search(FullTextSearchQuery::new("puppy".to_owned()))
.select(lancedb::query::Select::Columns(vec!["doc".to_owned()]))
.postfilter()
.limit(10)
.only_if("meta='foo'")
.execute()
.await?;
```
## Phrase queries vs. terms queries ## Phrase queries vs. terms queries
!!! warning "Warn"
Lance-based FTS doesn't support queries using boolean operators `OR`, `AND`.
For full-text search you can specify either a **phrase** query like `"the old man and the sea"`, For full-text search you can specify either a **phrase** query like `"the old man and the sea"`,
or a **terms** search query like `"(Old AND Man) AND Sea"`. For more details on the terms or a **terms** search query like `old man sea`. For more details on the terms
query syntax, see Tantivy's [query parser rules](https://docs.rs/tantivy/latest/tantivy/query/struct.QueryParser.html). query syntax, see Tantivy's [query parser rules](https://docs.rs/tantivy/latest/tantivy/query/struct.QueryParser.html).
!!! tip "Note" To search for a phrase, the index must be created with `with_position=True`:
The query parser will raise an exception on queries that are ambiguous. For example, in the query `they could have been dogs OR cats`, `OR` is capitalized so it's considered a keyword query operator. But it's ambiguous how the left part should be treated. So if you submit this search query as is, you'll get `Syntax Error: they could have been dogs OR cats`.
```py
# This raises a syntax error
table.search("they could have been dogs OR cats")
```
On the other hand, lowercasing `OR` to `or` will work, because there are no capitalized logical operators and
the query is treated as a phrase query.
```py
# This works!
table.search("they could have been dogs or cats")
```
It can be cumbersome to have to remember what will cause a syntax error depending on the type of
query you want to perform. To make this simpler, when you want to perform a phrase query, you can
enforce it in one of two ways:
1. Place the double-quoted query inside single quotes. For example, `table.search('"they could have been dogs OR cats"')` is treated as
a phrase query.
2. Explicitly declare the `phrase_query()` method. This is useful when you have a phrase query that
itself contains double quotes. For example, `table.search('the cats OR dogs were not really "pets" at all').phrase_query()`
is treated as a phrase query.
In general, a query that's declared as a phrase query will be wrapped in double quotes during parsing, with nested
double quotes replaced by single quotes.
## Configurations
By default, LanceDB configures a 1GB heap size limit for creating the index. You can
reduce this if running on a smaller node, or increase this for faster performance while
indexing a larger corpus.
```python ```python
# configure a 512MB heap size table.create_fts_index("text", use_tantivy=False, with_position=True)
heap = 1024 * 1024 * 512
table.create_fts_index(["text1", "text2"], writer_heap_size=heap, replace=True)
``` ```
This will allow you to search for phrases, but it will also significantly increase the index size and indexing time.
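Once the index is built with positions, you can issue a phrase query by quoting the phrase inside the query string (a minimal sketch; the exact quoting convention is an assumption here):
```python
# Phrase query: matches the exact token sequence "happy puppy" (requires with_position=True)
table.search('"happy puppy"').limit(10).select(["text"]).to_list()
```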
## Current limitations
1. Currently we do not yet support incremental writes. ## Incremental indexing
If you add data after FTS index creation, it won't be reflected
in search results until you do a full reindex.
2. We currently only support local filesystem paths for the FTS index. LanceDB supports incremental indexing, which means you can add new records to the table without reindexing the entire table.
This is a tantivy limitation. We've implemented an object store plugin
but there's no way in tantivy-py to specify to use it. This can make the query more efficient, especially when the table is large and the new records are relatively small.
=== "Python"
```python
table.add([{"vector": [3.1, 4.1], "text": "Frodo was a happy puppy"}])
table.optimize()
```
=== "TypeScript"
```typescript
await tbl.add([{ vector: [3.1, 4.1], text: "Frodo was a happy puppy" }]);
await tbl.optimize();
```
=== "Rust"
```rust
let more_data: Box<dyn RecordBatchReader + Send> = create_some_records()?;
tbl.add(more_data).execute().await?;
tbl.optimize(OptimizeAction::All).execute().await?;
```
!!! note
New data added after creating the FTS index will appear in search results while the incremental index is still in progress, but with increased latency due to a flat search on the unindexed portion. LanceDB Cloud automates this merging process, minimizing the impact on search speed.

160
docs/src/fts_tantivy.md Normal file
View File

@@ -0,0 +1,160 @@
# Full-text search (Tantivy-based FTS)
LanceDB also provides support for full-text search via [Tantivy](https://github.com/quickwit-oss/tantivy), allowing you to incorporate keyword-based search (based on BM25) in your retrieval solutions.
The tantivy-based FTS is only available in Python and does not support building indexes on object storage or incremental indexing. If you need these features, try the [native FTS](fts.md).
## Installation
To use full-text search, install the dependency [`tantivy-py`](https://github.com/quickwit-oss/tantivy-py):
```sh
# Say you want to use tantivy==0.20.1
pip install tantivy==0.20.1
```
## Example
Consider that we have a LanceDB table named `my_table`, whose string column `content` we want to index and query via keyword search. The FTS index must be created before you can search via keywords.
```python
import lancedb
uri = "data/sample-lancedb"
db = lancedb.connect(uri)
table = db.create_table(
"my_table",
data=[
{"id": 1, "vector": [3.1, 4.1], "title": "happy puppy", "content": "Frodo was a happy puppy", "meta": "foo"},
{"id": 2, "vector": [5.9, 26.5], "title": "playing kittens", "content": "There are several kittens playing around the puppy", "meta": "bar"},
],
)
# `use_tantivy=True` (the default) builds a tantivy-based FTS index
# pass `use_tantivy=False` to use the native Lance FTS index instead
table.create_fts_index("content", use_tantivy=True)
table.search("puppy").limit(10).select(["content"]).to_list()
# [{'content': 'Frodo was a happy puppy', '_score': 0.6931471824645996}]
# ...
```
The search runs over all indexed columns by default, which is useful when there are multiple indexed columns.
!!! note
LanceDB automatically searches on the existing FTS index if the input to the search is of type `str`. If you provide a vector as input, LanceDB will search the ANN index instead.
## Tokenization
By default the text is tokenized by splitting on punctuation and whitespace and then removing tokens that are longer than 40 characters. For more language-specific tokenization, provide the argument `tokenizer_name` with the 2-letter language code followed by "_stem". So for English it would be "en_stem".
```python
table.create_fts_index("content", use_tantivy=True, tokenizer_name="en_stem", replace=True)
```
The following [languages](https://docs.rs/tantivy/latest/tantivy/tokenizer/enum.Language.html) are currently supported.
## Index multiple columns
If you have multiple string columns to index, there's no need to combine them manually -- simply pass them all as a list to `create_fts_index`:
```python
table.create_fts_index(["title", "content"], use_tantivy=True, replace=True)
```
Note that the search API call does not change - you can search over all indexed columns at once.
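For instance, continuing the example above, a single keyword search now covers both indexed columns (a short sketch):
```python
# One query searches both indexed columns ("title" and "content") at once
table.search("puppy").limit(10).select(["title", "content"]).to_list()
```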
## Filtering
Currently the LanceDB full text search feature supports *post-filtering*, meaning filters are
applied on top of the full text search results (see [native FTS](fts.md) if you need pre-filtering). This can be invoked via the familiar
`where` syntax:
```python
table.search("puppy").limit(10).where("meta='foo'").to_list()
```
## Sorting
You can pre-sort the documents by specifying `ordering_field_names` when
creating the full-text search index. Once pre-sorted, you can then specify
`ordering_field_name` while searching to return results sorted by the given
field. For example,
```python
table.create_fts_index(["content"], use_tantivy=True, ordering_field_names=["id"], replace=True)
(table.search("puppy", ordering_field_name="id")
.limit(20)
.to_list())
```
!!! note
If you wish to specify an ordering field at query time, you must also
have specified it during indexing time. Otherwise at query time, an
error will be raised that looks like `ValueError: The field does not exist: xxx`
!!! note
The fields to sort on must be of unsigned integer type, or else you will see
an error during indexing that looks like
`TypeError: argument 'value': 'float' object cannot be interpreted as an integer`.
!!! note
You can specify multiple fields for ordering at indexing time.
But at query time only one ordering field is supported.
## Phrase queries vs. terms queries
For full-text search you can specify either a **phrase** query like `"the old man and the sea"`,
or a **terms** search query like `"(Old AND Man) AND Sea"`. For more details on the terms
query syntax, see Tantivy's [query parser rules](https://docs.rs/tantivy/latest/tantivy/query/struct.QueryParser.html).
!!! tip "Note"
The query parser will raise an exception on queries that are ambiguous. For example, in the query `they could have been dogs OR cats`, `OR` is capitalized so it's considered a keyword query operator. But it's ambiguous how the left part should be treated. So if you submit this search query as is, you'll get `Syntax Error: they could have been dogs OR cats`.
```py
# This raises a syntax error
table.search("they could have been dogs OR cats")
```
On the other hand, lowercasing `OR` to `or` will work, because there are no capitalized logical operators and
the query is treated as a phrase query.
```py
# This works!
table.search("they could have been dogs or cats")
```
It can be cumbersome to have to remember what will cause a syntax error depending on the type of
query you want to perform. To make this simpler, when you want to perform a phrase query, you can
enforce it in one of two ways:
1. Place the double-quoted query inside single quotes. For example, `table.search('"they could have been dogs OR cats"')` is treated as
a phrase query.
1. Explicitly declare the `phrase_query()` method. This is useful when you have a phrase query that
itself contains double quotes. For example, `table.search('the cats OR dogs were not really "pets" at all').phrase_query()`
is treated as a phrase query.
In general, a query that's declared as a phrase query will be wrapped in double quotes during parsing, with nested
double quotes replaced by single quotes.
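As a compact sketch of the two approaches described above (assuming the builder methods chain as shown):
```python
# 1. A double-quoted query inside single quotes is parsed as a phrase query
table.search('"they could have been dogs OR cats"').limit(10).to_list()

# 2. phrase_query() forces phrase parsing, even when the text itself contains double quotes
table.search('the cats OR dogs were not really "pets" at all').phrase_query().to_list()
```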
## Configurations
By default, LanceDB configures a 1GB heap size limit for creating the index. You can
reduce this if running on a smaller node, or increase this for faster performance while
indexing a larger corpus.
```python
# configure a 512MB heap size
heap = 1024 * 1024 * 512
table.create_fts_index(["title", "content"], use_tantivy=True, writer_heap_size=heap, replace=True)
```
## Current limitations
1. New data added after creating the FTS index will appear in search results, but with increased latency due to a flat search on the unindexed portion. Re-indexing with `create_fts_index` will reduce latency. LanceDB Cloud automates this merging process, minimizing the impact on search speed.
2. We currently only support local filesystem paths for the FTS index.
This is a tantivy limitation. We've implemented an object store plugin
but there's no way in tantivy-py to specify to use it.

View File

@@ -0,0 +1,147 @@
# Building a Scalar Index
Scalar indices organize data by scalar attributes (e.g. numbers, categorical values), enabling fast filtering of vector data. In vector databases, scalar indices accelerate the retrieval of scalar data associated with vectors, thus enhancing the query performance when searching for vectors that meet certain scalar criteria.
Similar to many SQL databases, LanceDB supports several types of scalar indices to accelerate search
over scalar columns.
- `BTREE`: The most common type is BTREE. The index stores a copy of the
column in sorted order. This sorted copy allows a binary search to be used to
satisfy queries.
- `BITMAP`: this index stores a bitmap for each unique value in the column. It
uses a series of bits to indicate whether a value is present in a row of a table
- `LABEL_LIST`: a special index that can be used on `List<T>` columns to
support queries with `array_contains_all` and `array_contains_any`
using an underlying bitmap index.
For example, a column that contains lists of tags (e.g. `["tag1", "tag2", "tag3"]`) can be indexed with a `LABEL_LIST` index.
!!! tip "How to choose the right scalar index type"
`BTREE`: This index is good for scalar columns with mostly distinct values and does best when the query is highly selective.
`BITMAP`: This index works best for low-cardinality numeric or string columns, where the number of unique values is small (i.e., fewer than a few thousand).
`LABEL_LIST`: This index should be used for columns containing list-type data.
| Data Type | Filter | Index Type |
| --------------------------------------------------------------- | ----------------------------------------- | ------------ |
| Numeric, String, Temporal | `<`, `=`, `>`, `in`, `between`, `is null` | `BTREE` |
| Boolean, numbers or strings with fewer than 1,000 unique values | `<`, `=`, `>`, `in`, `between`, `is null` | `BITMAP` |
| List of low cardinality of numbers or strings | `array_has_any`, `array_has_all` | `LABEL_LIST` |
### Create a scalar index
=== "Python"
```python
import lancedb
books = [
{"book_id": 1, "publisher": "plenty of books", "tags": ["fantasy", "adventure"]},
{"book_id": 2, "publisher": "book town", "tags": ["non-fiction"]},
{"book_id": 3, "publisher": "oreilly", "tags": ["textbook"]}
]
db = lancedb.connect("./db")
table = db.create_table("books", books)
table.create_scalar_index("book_id") # BTree by default
table.create_scalar_index("publisher", index_type="BITMAP")
```
=== "Typescript"
=== "@lancedb/lancedb"
```js
const db = await lancedb.connect("data");
const tbl = await db.openTable("my_vectors");
await tbl.createIndex("book_id");
await tbl.createIndex("publisher", { config: lancedb.Index.bitmap() });
```
The following scan will be faster if the column `book_id` has a scalar index:
=== "Python"
```python
import lancedb
table = db.open_table("books")
my_df = table.search().where("book_id = 2").to_pandas()
```
=== "Typescript"
=== "@lancedb/lancedb"
```js
const db = await lancedb.connect("data");
const tbl = await db.openTable("books");
await tbl
.query()
.where("book_id = 2")
.limit(10)
.toArray();
```
Scalar indices can also speed up scans containing a vector search or full text search, and a prefilter:
=== "Python"
```python
import lancedb
data = [
{"book_id": 1, "vector": [1, 2]},
{"book_id": 2, "vector": [3, 4]},
{"book_id": 3, "vector": [5, 6]}
]
table = db.create_table("book_with_embeddings", data)
(
table.search([1, 2])
.where("book_id != 3", prefilter=True)
.to_pandas()
)
```
=== "Typescript"
=== "@lancedb/lancedb"
```js
const db = await lancedb.connect("data/lance");
const tbl = await db.openTable("book_with_embeddings");
await tbl.search(Array(1536).fill(1.2))
.where("book_id != 3") // prefilter is default behavior.
.limit(10)
.toArray();
```
### Update a scalar index
Updating the table data (adding, deleting, or modifying records) requires that you also update the scalar index. This can be done by calling `optimize`, which will trigger an update to the existing scalar index.
=== "Python"
```python
table.add([{"vector": [7, 8], "book_id": 4}])
table.optimize()
```
=== "TypeScript"
```typescript
await tbl.add([{ vector: [7, 8], book_id: 4 }]);
await tbl.optimize();
```
=== "Rust"
```rust
let more_data: Box<dyn RecordBatchReader + Send> = create_some_records()?;
tbl.add(more_data).execute().await?;
tbl.optimize(OptimizeAction::All).execute().await?;
```
!!! note
New data added after creating the scalar index will still appear in search results if optimize is not used, but with increased latency due to a flat search on the unindexed portion. LanceDB Cloud automates the optimize process, minimizing the impact on search speed.

View File

@@ -498,7 +498,7 @@ This can also be done with the ``AWS_ENDPOINT`` and ``AWS_DEFAULT_REGION`` envir
#### S3 Express #### S3 Express
LanceDB supports [S3 Express One Zone](https://aws.amazon.com/s3/storage-classes/express-one-zone/) endpoints, but requires additional configuration. Also, S3 Express endpoints only support connecting from an EC2 instance within the same region. LanceDB supports [S3 Express One Zone](https://aws.amazon.com/s3/storage-classes/express-one-zone/) endpoints, but requires additional infrastructure configuration for the compute service, such as EC2 or Lambda. Please refer to [Networking requirements for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-networking.html).
To configure LanceDB to use an S3 Express endpoint, you must set the storage option `s3_express`. The bucket name in your table URI should **include the suffix**. To configure LanceDB to use an S3 Express endpoint, you must set the storage option `s3_express`. The bucket name in your table URI should **include the suffix**.
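For illustration, a minimal Python sketch (the bucket name and availability-zone suffix below are hypothetical) that connects with the `s3_express` storage option set:
```python
import lancedb

# Hypothetical S3 Express directory bucket; note the AZ suffix included in the bucket name
db = lancedb.connect(
    "s3://my-bucket--use1-az4--x-s3/path",
    storage_options={"s3_express": "true"},
)
```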

View File

@@ -85,13 +85,13 @@ Initialize a LanceDB connection and create a table
```ts ```ts
--8<-- "nodejs/examples/basic.ts:create_table" --8<-- "nodejs/examples/basic.test.ts:create_table"
``` ```
This will infer the schema from the provided data. If you want to explicitly provide a schema, you can use `apache-arrow` to declare a schema This will infer the schema from the provided data. If you want to explicitly provide a schema, you can use `apache-arrow` to declare a schema
```ts ```ts
--8<-- "nodejs/examples/basic.ts:create_table_with_schema" --8<-- "nodejs/examples/basic.test.ts:create_table_with_schema"
``` ```
!!! info "Note" !!! info "Note"
@@ -100,14 +100,14 @@ Initialize a LanceDB connection and create a table
passed in will NOT be appended to the table in that case. passed in will NOT be appended to the table in that case.
```ts ```ts
--8<-- "nodejs/examples/basic.ts:create_table_exists_ok" --8<-- "nodejs/examples/basic.test.ts:create_table_exists_ok"
``` ```
Sometimes you want to make sure that you start fresh. If you want to Sometimes you want to make sure that you start fresh. If you want to
overwrite the table, you can pass in mode: "overwrite" to the createTable function. overwrite the table, you can pass in mode: "overwrite" to the createTable function.
```ts ```ts
--8<-- "nodejs/examples/basic.ts:create_table_overwrite" --8<-- "nodejs/examples/basic.test.ts:create_table_overwrite"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -227,7 +227,7 @@ LanceDB supports float16 data type!
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:create_f16_table" --8<-- "nodejs/examples/basic.test.ts:create_f16_table"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -274,7 +274,7 @@ table = db.create_table(table_name, schema=Content)
Sometimes your data model may contain nested objects. Sometimes your data model may contain nested objects.
For example, you may want to store the document string For example, you may want to store the document string
and the document soure name as a nested Document object: and the document source name as a nested Document object:
```python ```python
class Document(BaseModel): class Document(BaseModel):
@@ -416,7 +416,6 @@ You can create an empty table for scenarios where you want to add data to the ta
=== "Python" === "Python"
```python
An empty table can be initialized via a PyArrow schema. An empty table can be initialized via a PyArrow schema.
@@ -456,7 +455,7 @@ You can create an empty table for scenarios where you want to add data to the ta
=== "@lancedb/lancedb" === "@lancedb/lancedb"
```typescript ```typescript
--8<-- "nodejs/examples/basic.ts:create_empty_table" --8<-- "nodejs/examples/basic.test.ts:create_empty_table"
``` ```
=== "vectordb (deprecated)" === "vectordb (deprecated)"
@@ -467,7 +466,7 @@ You can create an empty table for scenarios where you want to add data to the ta
## Adding to a table ## Adding to a table
After a table has been created, you can always add more data to it usind the `add` method After a table has been created, you can always add more data to it using the `add` method
=== "Python" === "Python"
You can add any of the valid data structures accepted by LanceDB table, i.e, `dict`, `list[dict]`, `pd.DataFrame`, or `Iterator[pa.RecordBatch]`. Below are some examples. You can add any of the valid data structures accepted by LanceDB table, i.e, `dict`, `list[dict]`, `pd.DataFrame`, or `Iterator[pa.RecordBatch]`. Below are some examples.
@@ -536,7 +535,7 @@ After a table has been created, you can always add more data to it usind the `ad
``` ```
??? "Ingesting Pydantic models with LanceDB embedding API" ??? "Ingesting Pydantic models with LanceDB embedding API"
When using LanceDB's embedding API, you can add Pydantic models directly to the table. LanceDB will automatically convert the `vector` field to a vector before adding it to the table. You need to specify the default value of `vector` feild as None to allow LanceDB to automatically vectorize the data. When using LanceDB's embedding API, you can add Pydantic models directly to the table. LanceDB will automatically convert the `vector` field to a vector before adding it to the table. You need to specify the default value of `vector` field as None to allow LanceDB to automatically vectorize the data.
```python ```python
import lancedb import lancedb
@@ -791,6 +790,27 @@ Use the `drop_table()` method on the database to remove a table.
This permanently removes the table and is not recoverable, unlike deleting rows. This permanently removes the table and is not recoverable, unlike deleting rows.
If the table does not exist an exception is raised. If the table does not exist an exception is raised.
## Handling bad vectors
In LanceDB Python, you can use the `on_bad_vectors` parameter to choose how
invalid vector values are handled. Invalid vectors are vectors that are not valid
because:
1. They are the wrong dimension
2. They contain NaN values
3. They are null but are on a non-nullable field
By default, LanceDB will raise an error if it encounters a bad vector. You can
also choose one of the following options:
* `drop`: Ignore rows with bad vectors
* `fill`: Replace bad values (NaNs) or missing values (too few dimensions) with
the fill value specified in the `fill_value` parameter. An input like
`[1.0, NaN, 3.0]` will be replaced with `[1.0, 0.0, 3.0]` if `fill_value=0.0`.
* `null`: Replace bad vectors with null (only works if the column is nullable).
A bad vector `[1.0, NaN, 3.0]` will be replaced with `null` if the column is
nullable. If the vector column is non-nullable, then bad vectors will cause an
error.
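A brief sketch of these options (the data, table names, and fill value are illustrative):
```python
import lancedb

db = lancedb.connect("data/sample-lancedb")
data = [
    {"id": 1, "vector": [1.0, 2.0]},
    {"id": 2, "vector": [1.0, float("nan")]},  # bad vector: contains NaN
]

# Replace NaNs (or missing dimensions) with 0.0 instead of raising an error
table = db.create_table("vectors_fill", data, on_bad_vectors="fill", fill_value=0.0)

# Or silently drop rows whose vectors are invalid
table = db.create_table("vectors_drop", data, on_bad_vectors="drop")
```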
## Consistency ## Consistency
@@ -860,4 +880,4 @@ There are three possible settings for `read_consistency_interval`:
Learn the best practices on creating an ANN index and getting the most out of it. Learn the best practices on creating an ANN index and getting the most out of it.
[^1]: The `vectordb` package is a legacy package that is deprecated in favor of `@lancedb/lancedb`. The `vectordb` package will continue to receive bug fixes and security updates until September 2024. We recommend all new projects use `@lancedb/lancedb`. See the [migration guide](migration.md) for more information. [^1]: The `vectordb` package is a legacy package that is deprecated in favor of `@lancedb/lancedb`. The `vectordb` package will continue to receive bug fixes and security updates until September 2024. We recommend all new projects use `@lancedb/lancedb`. See the [migration guide](../migration.md) for more information.

View File

@@ -43,200 +43,32 @@ table.create_fts_index("text")
# hybrid search with default re-ranker # hybrid search with default re-ranker
results = table.search("flower moon", query_type="hybrid").to_pandas() results = table.search("flower moon", query_type="hybrid").to_pandas()
``` ```
!!! Note
You can also pass the vector and text query manually. This is useful if you're not using the embedding API or if you're using a separate embedder service.
### Explicitly passing the vector and text query
```python
vector_query = [0.1, 0.2, 0.3, 0.4, 0.5]
text_query = "flower moon"
results = table.search(query_type="hybrid")
.vector(vector_query)
.text(text_query)
.limit(5)
.to_pandas()
By default, LanceDB uses `LinearCombinationReranker(weight=0.7)` to combine and rerank the results of semantic and full-text search. You can customize the hyperparameters as needed or write your own custom reranker. Here's how you can use any of the available rerankers: ```
By default, LanceDB uses `RRFReranker()`, which uses reciprocal rank fusion score, to combine and rerank the results of semantic and full-text search. You can customize the hyperparameters as needed or write your own custom reranker. Here's how you can use any of the available rerankers:
### `rerank()` arguments ### `rerank()` arguments
* `normalize`: `str`, default `"score"`: * `normalize`: `str`, default `"score"`:
The method to normalize the scores. Can be "rank" or "score". If "rank", the scores are converted to ranks and then normalized. If "score", the scores are normalized directly. The method to normalize the scores. Can be "rank" or "score". If "rank", the scores are converted to ranks and then normalized. If "score", the scores are normalized directly.
* `reranker`: `Reranker`, default `LinearCombinationReranker(weight=0.7)`. * `reranker`: `Reranker`, default `RRF()`.
The reranker to use. If not specified, the default reranker is used. The reranker to use. If not specified, the default reranker is used.
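For example, a minimal sketch (assuming `RRFReranker` is importable from `lancedb.rerankers`) of passing these arguments to `rerank()`:
```python
from lancedb.rerankers import RRFReranker

reranker = RRFReranker()
results = (
    table.search("flower moon", query_type="hybrid")
    .rerank(reranker=reranker, normalize="score")
    .to_pandas()
)
```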
## Available Rerankers ## Available Rerankers
LanceDB provides a number of re-rankers out of the box. You can use any of these re-rankers by passing them to the `rerank()` method. Here's a list of available re-rankers: LanceDB provides a number of re-rankers out of the box. You can use any of these re-rankers by passing them to the `rerank()` method.
Go to [Rerankers](../reranking/index.md) to learn more about using the available rerankers and implementing custom rerankers.
### Linear Combination Reranker
This is the default re-ranker used by LanceDB. It combines the results of semantic and full-text search using a linear combination of the scores. The weights for the linear combination can be specified. It defaults to 0.7, i.e, 70% weight for semantic search and 30% weight for full-text search.
```python
from lancedb.rerankers import LinearCombinationReranker
reranker = LinearCombinationReranker(weight=0.3) # Use 0.3 as the weight for vector search
results = table.search("rebel", query_type="hybrid").rerank(reranker=reranker).to_pandas()
```
### Arguments
----------------
* `weight`: `float`, default `0.7`:
The weight to use for the semantic search score. The weight for the full-text search score is `1 - weight`.
* `fill`: `float`, default `1.0`:
The score to give to results that are only in one of the two result sets. This is treated as a penalty, so a higher value means a lower score.
TODO: We should just hardcode this-- its pretty confusing as we invert scores to calculate final score
* `return_score` : str, default `"relevance"`
options are "relevance" or "all"
The type of score to return. If "relevance", will return only the `_relevance_score. If "all", will return all scores from the vector and FTS search along with the relevance score.
### Cohere Reranker
This re-ranker uses the [Cohere](https://cohere.ai/) API to combine the results of semantic and full-text search. You can use this re-ranker by passing `CohereReranker()` to the `rerank()` method. Note that you'll need to set the `COHERE_API_KEY` environment variable to use this re-ranker.
```python
from lancedb.rerankers import CohereReranker
reranker = CohereReranker()
results = table.search("vampire weekend", query_type="hybrid").rerank(reranker=reranker).to_pandas()
```
### Arguments
----------------
* `model_name` : str, default `"rerank-english-v2.0"`
The name of the cross encoder model to use. Available cohere models are:
- rerank-english-v2.0
- rerank-multilingual-v2.0
* `column` : str, default `"text"`
The name of the column to use as input to the cross encoder model.
* `top_n` : str, default `None`
The number of results to return. If None, will return all results.
!!! Note
Only returns `_relevance_score`. Does not support `return_score = "all"`.
### Cross Encoder Reranker
This reranker uses the [Sentence Transformers](https://www.sbert.net/) library to combine the results of semantic and full-text search. You can use it by passing `CrossEncoderReranker()` to the `rerank()` method.
```python
from lancedb.rerankers import CrossEncoderReranker
reranker = CrossEncoderReranker()
results = table.search("harmony hall", query_type="hybrid").rerank(reranker=reranker).to_pandas()
```
### Arguments
----------------
* `model` : str, default `"cross-encoder/ms-marco-TinyBERT-L-6"`
The name of the cross encoder model to use. Available cross encoder models can be found [here](https://www.sbert.net/docs/pretrained_cross-encoders.html)
* `column` : str, default `"text"`
The name of the column to use as input to the cross encoder model.
* `device` : str, default `None`
The device to use for the cross encoder model. If None, will use "cuda" if available, otherwise "cpu".
!!! Note
Only returns `_relevance_score`. Does not support `return_score = "all"`.
### ColBERT Reranker
This reranker uses the ColBERT model to combine the results of semantic and full-text search. You can use it by passing `ColbertReranker()` to the `rerank()` method.
The ColBERT reranker model calculates the relevance of the given docs against the query and doesn't take existing FTS and vector search scores into account, so it currently only supports `return_score="relevance"`. By default, it looks for the `text` column to rerank the results, but you can specify the column name to use as input to the cross encoder model as described below.
```python
from lancedb.rerankers import ColbertReranker
reranker = ColbertReranker()
results = table.search("harmony hall", query_type="hybrid").rerank(reranker=reranker).to_pandas()
```
### Arguments
----------------
* `model_name` : `str`, default `"colbert-ir/colbertv2.0"`
The name of the cross encoder model to use.
* `column` : `str`, default `"text"`
The name of the column to use as input to the cross encoder model.
* `return_score` : `str`, default `"relevance"`
options are `"relevance"` or `"all"`. Only `"relevance"` is supported for now.
!!! Note
Only returns `_relevance_score`. Does not support `return_score = "all"`.
### OpenAI Reranker
This reranker uses the OpenAI API to combine the results of semantic and full-text search. You can use it by passing `OpenaiReranker()` to the `rerank()` method.
!!! Note
This prompts chat model to rerank results which is not a dedicated reranker model. This should be treated as experimental.
!!! Tip
- You might run out of token limit so set the search `limits` based on your token limit.
- It is recommended to use gpt-4-turbo-preview, the default model; older models might lead to undesired behaviour
```python
from lancedb.rerankers import OpenaiReranker
reranker = OpenaiReranker()
results = table.search("harmony hall", query_type="hybrid").rerank(reranker=reranker).to_pandas()
```
### Arguments
----------------
* `model_name` : `str`, default `"gpt-4-turbo-preview"`
The name of the cross encoder model to use.
* `column` : `str`, default `"text"`
The name of the column to use as input to the cross encoder model.
* `return_score` : `str`, default `"relevance"`
options are "relevance" or "all". Only "relevance" is supported for now.
* `api_key` : `str`, default `None`
The API key to use. If None, will use the OPENAI_API_KEY environment variable.
## Building Custom Rerankers
You can build your own custom reranker by subclassing the `Reranker` class and implementing the `rerank_hybrid()` method. Here's an example of a custom reranker that combines the results of semantic and full-text search using a linear combination of the scores.
The `Reranker` base interface comes with a `merge_results()` method that can be used to combine the results of semantic and full-text search. This is a vanilla merging algorithm that simply concatenates the results and removes the duplicates without taking the scores into consideration. It only keeps the first copy of the row encountered. This works well in cases that don't require the scores of semantic and full-text search to combine the results. If you want to use the scores or want to support `return_score="all"`, you'll need to implement your own merging algorithm.
```python
from lancedb.rerankers import Reranker
import pyarrow as pa
class MyReranker(Reranker):
def __init__(self, param1, param2, ..., return_score="relevance"):
super().__init__(return_score)
self.param1 = param1
self.param2 = param2
def rerank_hybrid(self, query: str, vector_results: pa.Table, fts_results: pa.Table):
# Use the built-in merging function
combined_result = self.merge_results(vector_results, fts_results)
# Do something with the combined results
# ...
# Return the combined results
return combined_result
```
### Example of a Custom Reranker
For the sake of simplicity, let's build a custom reranker that just enhances the Cohere Reranker by accepting a filter query, and accepts other CohereReranker params as kwargs.
```python
from typing import List, Union
import pandas as pd
import pyarrow as pa
from lancedb.rerankers import CohereReranker
class ModifiedCohereReranker(CohereReranker):
def __init__(self, filters: Union[str, List[str]], **kwargs):
super().__init__(**kwargs)
filters = filters if isinstance(filters, list) else [filters]
self.filters = filters
def rerank_hybrid(self, query: str, vector_results: pa.Table, fts_results: pa.Table)-> pa.Table:
combined_result = super().rerank_hybrid(query, vector_results, fts_results)
df = combined_result.to_pandas()
for filter in self.filters:
df = df.query("not text.str.contains(@filter)")
return pa.Table.from_pandas(df)
```
!!! tip
The `vector_results` and `fts_results` are pyarrow tables. You can convert them to pandas dataframes using `to_pandas()` method and perform any operations you want. After you are done, you can convert the dataframe back to pyarrow table using `pa.Table.from_pandas()` method and return it.

View File

@@ -49,7 +49,8 @@ The following pages go deeper into the internal of LanceDB and how to use it.
* [Working with tables](guides/tables.md): Learn how to work with tables and their associated functions * [Working with tables](guides/tables.md): Learn how to work with tables and their associated functions
* [Indexing](ann_indexes.md): Understand how to create indexes * [Indexing](ann_indexes.md): Understand how to create indexes
* [Vector search](search.md): Learn how to perform vector similarity search * [Vector search](search.md): Learn how to perform vector similarity search
* [Full-text search](fts.md): Learn how to perform full-text search * [Full-text search (native)](fts.md): Learn how to perform full-text search
* [Full-text search (tantivy-based)](fts_tantivy.md): Learn how to perform full-text search using Tantivy
* [Managing embeddings](embeddings/index.md): Managing embeddings and the embedding functions API in LanceDB * [Managing embeddings](embeddings/index.md): Managing embeddings and the embedding functions API in LanceDB
* [Ecosystem Integrations](integrations/index.md): Integrate LanceDB with other tools in the data ecosystem * [Ecosystem Integrations](integrations/index.md): Integrate LanceDB with other tools in the data ecosystem
* [Python API Reference](python/python.md): Python OSS and Cloud API references * [Python API Reference](python/python.md): Python OSS and Cloud API references

View File

@@ -0,0 +1,142 @@
# dlt
[dlt](https://dlthub.com/docs/intro) is an open-source library that you can add to your Python scripts to load data from various and often messy data sources into well-structured, live datasets. dlt's [integration with LanceDB](https://dlthub.com/docs/dlt-ecosystem/destinations/lancedb) lets you ingest data from any source (databases, APIs, CSVs, dataframes, JSONs, and more) into LanceDB with a few lines of simple python code. The integration enables automatic normalization of nested data, schema inference, incremental loading and embedding the data. dlt also has integrations with several other tools like dbt, airflow, dagster etc. that can be inserted into your LanceDB workflow.
## How to ingest data into LanceDB
In this example, we will be fetching movie information from the [Open Movie Database (OMDb) API](https://www.omdbapi.com/) and loading it into a local LanceDB instance. To implement it, you will need an API key for the OMDb API (which can be created freely [here](https://www.omdbapi.com/apikey.aspx)).
1. **Install `dlt` with LanceDB extras:**
```sh
pip install dlt[lancedb]
```
2. **Inside an empty directory, initialize a `dlt` project with:**
```sh
dlt init rest_api lancedb
```
This will add all the files necessary to create a `dlt` pipeline that can ingest data from any REST API (ex: OMDb API) and load into LanceDB.
```text
├── .dlt
│ ├── config.toml
│ └── secrets.toml
├── rest_api
├── rest_api_pipeline.py
└── requirements.txt
```
dlt has a list of pre-built [sources](https://dlthub.com/docs/dlt-ecosystem/verified-sources/) like [SQL databases](https://dlthub.com/docs/dlt-ecosystem/verified-sources/sql_database), [REST APIs](https://dlthub.com/docs/dlt-ecosystem/verified-sources/rest_api), [Google Sheets](https://dlthub.com/docs/dlt-ecosystem/verified-sources/google_sheets), [Notion](https://dlthub.com/docs/dlt-ecosystem/verified-sources/notion) etc., that can be used out-of-the-box by running `dlt init <source_name> lancedb`. Since dlt is a python library, it is also very easy to modify these pre-built sources or to write your own custom source from scratch.
3. **Specify necessary credentials and/or embedding model details:**
In order to fetch data from the OMDb API, you will need to pass a valid API key into your pipeline. Depending on whether you're using LanceDB OSS or LanceDB Cloud, you may also need to provide the necessary credentials to connect to the LanceDB instance. These can be pasted inside `.dlt/secrets.toml`.
dlt's LanceDB integration also allows you to automatically embed the data during ingestion. Depending on the embedding model chosen, you may need to paste the necessary credentials inside `.dlt/secrets.toml`:
```toml
[sources.rest_api]
api_key = "api_key" # Enter the API key for the OMDb API
[destination.lancedb]
embedding_model_provider = "sentence-transformers"
embedding_model = "all-MiniLM-L6-v2"
[destination.lancedb.credentials]
uri = ".lancedb"
api_key = "api_key" # API key to connect to LanceDB Cloud. Leave out if you are using LanceDB OSS.
embedding_model_provider_api_key = "embedding_model_provider_api_key" # Not needed for providers that don't need authentication (ollama, sentence-transformers).
```
See [here](https://dlthub.com/docs/dlt-ecosystem/destinations/lancedb#configure-the-destination) for more information and for a list of available models and model providers.
4. **Write the pipeline code inside `rest_api_pipeline.py`:**
The following code shows how you can configure dlt's REST API source to connect to the [OMDb API](https://www.omdbapi.com/), fetch all movies with the word "godzilla" in the title, and load them into a LanceDB table. The REST API source allows you to pull data from any API with minimal code; to learn more, read the [dlt docs](https://dlthub.com/docs/dlt-ecosystem/verified-sources/rest_api).
```python
# Import necessary modules
import dlt
from rest_api import rest_api_source
# Configure the REST API source
movies_source = rest_api_source(
{
"client": {
"base_url": "https://www.omdbapi.com/",
"auth": { # authentication strategy for the OMDb API
"type": "api_key",
"name": "apikey",
"api_key": dlt.secrets["sources.rest_api.api_token"], # read API credentials directly from secrets.toml
"location": "query"
},
"paginator": { # pagination strategy for the OMDb API
"type": "page_number",
"base_page": 1,
"total_path": "totalResults",
"maximum_page": 5
}
},
"resources": [ # list of API endpoints to request
{
"name": "movie_search",
"endpoint": {
"path": "/",
"params": {
"s": "godzilla",
"type": "movie"
}
}
}
]
})
if __name__ == "__main__":
# Create a pipeline object
pipeline = dlt.pipeline(
pipeline_name='movies_pipeline',
destination='lancedb', # this tells dlt to load the data into LanceDB
dataset_name='movies_data_pipeline',
)
# Run the pipeline
load_info = pipeline.run(movies_source)
# pretty print the information on data that was loaded
print(load_info)
```
The script above will ingest the data into LanceDB as it is, i.e. without creating any embeddings. If we want to embed one of the fields (for example, `"Title"` that contains the movie titles), then we will use dlt's `lancedb_adapter` and modify the script as follows:
- Add the following import statement:
```python
from dlt.destinations.adapters import lancedb_adapter
```
- Modify the pipeline run like this:
```python
load_info = pipeline.run(
lancedb_adapter(
movies_source,
embed="Title",
)
)
```
This will use the embedding model specified inside `.dlt/secrets.toml` to embed the field `"Title"`.
5. **Install necessary dependencies:**
```sh
pip install -r requirements.txt
```
Note: You may need to install the dependencies for your embedding models separately.
```sh
pip install sentence-transformers
```
6. **Run the pipeline:**
Finally, running the following command will ingest the data into your LanceDB instance.
```sh
python rest_api_pipeline.py
```
For more information and advanced usage of dlt's LanceDB integration, read [the dlt documentation](https://dlthub.com/docs/dlt-ecosystem/destinations/lancedb).

View File

@@ -1,5 +1,10 @@
# Langchain **LangChain** is a framework designed for building applications with large language models (LLMs) by chaining together various components. It supports a range of functionalities including memory, agents, and chat models, enabling developers to create context-aware applications.
![Illustration](../assets/langchain.png)
![Illustration](https://raw.githubusercontent.com/lancedb/assets/refs/heads/main/docs/assets/integration/langchain_rag.png)
LangChain streamlines these stages (in figure above) by providing pre-built components and tools for integration, memory management, and deployment, allowing developers to focus on application logic rather than underlying complexities.
Integration of **LangChain** with **LanceDB** enables applications to retrieve the most relevant data by comparing query vectors against stored vectors, facilitating effective information retrieval. It results in better, context-aware replies and actions by the LLMs.
## Quick Start ## Quick Start
You can load your document data using LangChain's loaders; for this example we are using `TextLoader` and `OpenAIEmbeddings` as the embedding model. Check out the complete example here - [LangChain demo](../notebooks/langchain_example.ipynb) You can load your document data using LangChain's loaders; for this example we are using `TextLoader` and `OpenAIEmbeddings` as the embedding model. Check out the complete example here - [LangChain demo](../notebooks/langchain_example.ipynb)
@@ -26,20 +31,28 @@ print(docs[0].page_content)
## Documentation ## Documentation
In the above example `LanceDB` vector store class object is created using `from_documents()` method which is a `classmethod` and returns the initialized class object. In the above example `LanceDB` vector store class object is created using `from_documents()` method which is a `classmethod` and returns the initialized class object.
You can also use `LanceDB.from_texts(texts: List[str],embedding: Embeddings)` class method. You can also use `LanceDB.from_texts(texts: List[str],embedding: Embeddings)` class method.
The exhaustive list of parameters for the `LanceDB` vector store is: The exhaustive list of parameters for the `LanceDB` vector store is:
- `connection`: (Optional) `lancedb.db.LanceDBConnection` connection object to use. If not provided, a new connection will be created.
- `embedding`: Langchain embedding model. |Name|type|Purpose|default|
- `vector_key`: (Optional) Column name to use for vector's in the table. Defaults to `'vector'`. |:----|:----|:----|:----|
- `id_key`: (Optional) Column name to use for id's in the table. Defaults to `'id'`. |`connection`| (Optional) `Any` |`lancedb.db.LanceDBConnection` connection object to use. If not provided, a new connection will be created.|`None`|
- `text_key`: (Optional) Column name to use for text in the table. Defaults to `'text'`. |`embedding`| (Optional) `Embeddings` | Langchain embedding model.|Provided by user.|
- `table_name`: (Optional) Name of your table in the database. Defaults to `'vectorstore'`. |`uri`| (Optional) `str` |It specifies the directory location of **LanceDB database** and establishes a connection that can be used to interact with the database. |`/tmp/lancedb`|
- `api_key`: (Optional) API key to use for LanceDB cloud database. Defaults to `None`. |`vector_key` |(Optional) `str`| Column name to use for vector's in the table.|`'vector'`|
- `region`: (Optional) Region to use for LanceDB cloud database. Only for LanceDB Cloud, defaults to `None`. |`id_key` |(Optional) `str`| Column name to use for id's in the table.|`'id'`|
- `mode`: (Optional) Mode to use for adding data to the table. Defaults to `'overwrite'`. |`text_key` |(Optional) `str` |Column name to use for text in the table.|`'text'`|
- `reranker`: (Optional) The reranker to use for LanceDB. |`table_name` |(Optional) `str`| Name of your table in the database.|`'vectorstore'`|
- `relevance_score_fn`: (Optional[Callable[[float], float]]) Langchain relevance score function to be used. Defaults to `None`. |`api_key` |(Optional) `str` |API key to use for LanceDB cloud database.|`None`|
|`region` |(Optional) `str`| Region to use for LanceDB cloud database.|Only for LanceDB Cloud : `None`.|
|`mode` |(Optional) `str` |Mode to use for adding data to the table. Valid values are "append" and "overwrite".|`'overwrite'`|
|`table`| (Optional) `Any`|You can connect to an existing table of LanceDB, created outside of langchain, and utilize it.|`None`|
|`distance`|(Optional) `str`|The choice of distance metric used to calculate the similarity between vectors.|`'l2'`|
|`reranker` |(Optional) `Any`|The reranker to use for LanceDB.|`None`|
|`relevance_score_fn` |(Optional) `Callable[[float], float]` | Langchain relevance score function to be used.|`None`|
|`limit`|`int`|Set the maximum number of results to return.|`DEFAULT_K` (it is 4)|
```python ```python
db_url = "db://lang_test" # url of db you created db_url = "db://lang_test" # url of db you created
@@ -51,19 +64,24 @@ vector_store = LanceDB(
api_key=api_key, #(dont include for local API) api_key=api_key, #(dont include for local API)
region=region, #(dont include for local API) region=region, #(dont include for local API)
embedding=embeddings, embedding=embeddings,
table_name='langchain_test' #Optional table_name='langchain_test' # Optional
) )
``` ```
### Methods ### Methods
##### add_texts() ##### add_texts()
- `texts`: `Iterable` of strings to add to the vectorstore.
- `metadatas`: Optional `list[dict()]` of metadatas associated with the texts.
- `ids`: Optional `list` of ids to associate with the texts.
- `kwargs`: `Any`
This method adds texts and stores respective embeddings automatically. This method turns texts into embeddings and adds them to the database.
|Name|Purpose|defaults|
|:---|:---|:---|
|`texts`|`Iterable` of strings to add to the vectorstore.|Provided by user|
|`metadatas`|Optional `list[dict()]` of metadatas associated with the texts.|`None`|
|`ids`|Optional `list` of ids to associate with the texts.|`None`|
|`kwargs`| Other keyworded arguments provided by the user. |-|
It returns a list of ids of the added texts.
```python
vector_store.add_texts(texts = ['test_123'], metadatas =[{'source' :'wiki'}])
@@ -78,14 +96,25 @@ pd_df.to_csv("docsearch.csv", index=False)
# you can also create a new vector store object using an older connection object:
vector_store = LanceDB(connection=tbl, embedding=embeddings)
```
------
##### create_index()
This method creates a scalar index (for non-vector columns) or a vector index on a table.
|Name|type|Purpose|defaults|
|:---|:---|:---|:---|
|`vector_col`|`Optional[str]`| Provide if you want to create index on a vector column. |`None`|
|`col_name`|`Optional[str]`| Provide if you want to create index on a non-vector column. |`None`|
|`metric`|`Optional[str]` |The metric to use for the vector index. Choice of metrics: 'L2', 'dot', 'cosine'. |`L2`|
|`num_partitions`|`Optional[int]`|Number of partitions to use for the index.|`256`|
|`num_sub_vectors`|`Optional[int]` |Number of sub-vectors to use for the index.|`96`|
|`index_cache_size`|`Optional[int]` |Size of the index cache.|`None`|
|`name`|`Optional[str]` |Name of the table to create index on.|`None`|
For index creation, make sure your table has enough data in it. An ANN index is usually not needed for datasets of ~100K vectors. For large-scale (>1M) or higher-dimension vectors, it is beneficial to create an ANN index (see the examples below).
```python
# for creating vector index
@@ -96,42 +125,63 @@ vector_store.create_index(col_name='text')
```
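A hedged sketch of the vector-index variant, using only the parameters listed in the table above (the column name `'vector'` and the other values are illustrative):

```python
# create an ANN index on the vector column with a non-default metric
vector_store.create_index(
    vector_col="vector",   # column holding the embeddings
    metric="cosine",       # one of 'L2', 'dot', 'cosine'
    num_partitions=256,    # number of IVF partitions
    num_sub_vectors=96,    # number of PQ sub-vectors
)
```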
------
##### similarity_search()
This method performs similarity search based on a **text query**.
| Name | Type | Purpose | Default |
|---------|----------------------|---------|---------|
| `query` | `str` | A `str` representing the text query that you want to search for in the vector store. | N/A |
| `k` | `Optional[int]` | It specifies the number of documents to return. | `None` |
| `filter` | `Optional[Dict[str, str]]`| It is used to filter the search results by specific metadata criteria. | `None` |
| `fts` | `Optional[bool]` | It indicates whether to perform a full-text search (FTS). | `False` |
| `name` | `Optional[str]` | It is used for specifying the name of the table to query. If not provided, it uses the default table set during the initialization of the LanceDB instance. | `None` |
| `kwargs` | `Any` | Other keyword arguments provided by the user. | N/A |
Returns documents most similar to the query **without relevance scores**.
```python
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
```
------
##### similarity_search_by_vector()
The method returns documents that are most similar to the specified **embedding (query) vector**.
| Name | Type | Purpose | Default |
|-------------|---------------------------|---------|---------|
| `embedding` | `List[float]` | The embedding vector you want to use to search for similar documents in the vector store. | N/A |
| `k` | `Optional[int]` | It specifies the number of documents to return. | `None` |
| `filter` | `Optional[Dict[str, str]]`| It is used to filter the search results by specific metadata criteria. | `None` |
| `name` | `Optional[str]` | It is used for specifying the name of the table to query. If not provided, it uses the default table set during the initialization of the LanceDB instance. | `None` |
| `kwargs` | `Any` | Other keyword arguments provided by the user. | N/A |
**It does not provide relevance scores.**
```python
docs = docsearch.similarity_search_by_vector(query_embedding)
print(docs[0].page_content)
```
------
##### similarity_search_with_score()
Returns documents most similar to the **query string** along with their relevance scores.
| Name | Type | Purpose | Default |
|----------|---------------------------|---------|---------|
| `query` | `str` |A `str` representing the text query you want to search for in the vector store. This query will be converted into an embedding using the specified embedding function. | N/A |
| `k` | `Optional[int]` | It specifies the number of documents to return. | `None` |
| `filter` | `Optional[Dict[str, str]]`| It is used to filter the search results by specific metadata criteria. This allows you to narrow down the search results based on certain metadata attributes associated with the documents. | `None` |
| `kwargs` | `Any` | Other keyword arguments provided by the user. | N/A |
It is called by the base class's `similarity_search_with_relevance_scores`, which selects the relevance score based on `_select_relevance_score_fn`.
```python
docs = docsearch.similarity_search_with_relevance_scores(query)
@@ -139,15 +189,21 @@ print("relevance score - ", docs[0][1])
print("text- ", docs[0][0].page_content[:1000])
```
------
##### similarity_search_by_vector_with_relevance_scores()
Similarity search using a **query vector**, returning relevance scores.
| Name | Type | Purpose | Default |
|-------------|---------------------------|---------|---------|
| `embedding` | `List[float]` | The embedding vector you want to use to search for similar documents in the vector store. | N/A |
| `k` | `Optional[int]` | It specifies the number of documents to return. | `None` |
| `filter` | `Optional[Dict[str, str]]`| It is used to filter the search results by specific metadata criteria. | `None` |
| `name` | `Optional[str]` | It is used for specifying the name of the table to query. | `None` |
| `kwargs` | `Any` | Other keyword arguments provided by the user. | N/A |
The method returns documents most similar to the specified embedding (query) vector, along with their relevance scores.
```python
docs = docsearch.similarity_search_by_vector_with_relevance_scores(query_embedding)
@@ -155,20 +211,22 @@ print("relevance score - ", docs[0][1])
print("text- ", docs[0][0].page_content[:1000])
```
------
##### max_marginal_relevance_search()
This method returns docs selected using maximal marginal relevance (MMR).
Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.
| Name | Type | Purpose | Default |
|---------------|-----------------|-----------|---------|
| `query` | `str` | Text to look up documents similar to. | N/A |
| `k` | `Optional[int]` | Number of Documents to return.| `4` |
| `fetch_k`| `Optional[int]`| Number of Documents to fetch to pass to MMR algorithm.| `None` |
| `lambda_mult` | `float` | Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. | `0.5` |
| `filter`| `Optional[Dict[str, str]]`| Filter by metadata. | `None` |
|`kwargs`| `Any` | Other keyword arguments provided by the user. | - |
Similarly, the `max_marginal_relevance_search_by_vector()` function returns docs most similar to the embedding passed to it using MMR; instead of a string query, you need to pass the embedding to be searched for (see the sketch after the example below).
```python
@@ -186,12 +244,19 @@ result_texts = [doc.page_content for doc in result]
print(result_texts)
```
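A short sketch of the vector-based variant mentioned above; it assumes `embeddings` is the same embedding function the store was created with, and uses only the parameters from the MMR table:

```python
# embed the query text ourselves, then run MMR over the raw vector
query_embedding = embeddings.embed_query("what is a vectorstore?")
docs = docsearch.max_marginal_relevance_search_by_vector(
    query_embedding,
    k=4,             # number of documents to return
    fetch_k=20,      # candidates passed to the MMR algorithm
    lambda_mult=0.5, # 0 = maximum diversity, 1 = minimum diversity
)
print(docs[0].page_content)
```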
------
##### add_images()
This method adds images to the vectorstore by automatically creating their embeddings.
| Name | Type | Purpose | Default |
|------------|-------------------------------|--------------------------------|---------|
| `uris` | `List[str]` | File path to the image | N/A |
| `metadatas`| `Optional[List[dict]]` | Optional list of metadatas | `None` |
| `ids` | `Optional[List[str]]` | Optional list of IDs | `None` |
It returns a list of IDs of the added images.
```python
vec_store.add_images(uris=image_uris)
```
View File
@@ -0,0 +1,383 @@
**phidata** is a framework for building **AI Assistants** with long-term memory, contextual knowledge, and the ability to take actions using function calling. It helps turn general-purpose LLMs into specialized assistants tailored to your use case by extending its capabilities using **memory**, **knowledge**, and **tools**.
- **Memory**: Stores chat history in a **database** and enables LLMs to have long-term conversations.
- **Knowledge**: Stores information in a **vector database** and provides LLMs with business context. (Here we will use LanceDB)
- **Tools**: Enable LLMs to take actions like pulling data from an **API**, **sending emails** or **querying a database**, etc.
![example](https://raw.githubusercontent.com/lancedb/assets/refs/heads/main/docs/assets/integration/phidata_assistant.png)
Memory & knowledge make LLMs smarter while tools make them autonomous.
LanceDB is a vector database and its integration into phidata makes it easy for us to provide a **knowledge base** to LLMs. It enables us to store information as [embeddings](../embeddings/understanding_embeddings.md) and search for the **results** similar to ours using **query**.
??? Question "What is Knowledge Base?"
Knowledge Base is a database of information that the Assistant can search to improve its responses. This information is stored in a vector database and provides LLMs with business context, which makes them respond in a context-aware manner.
While any type of storage can act as a knowledge base, vector databases offer the best solution for retrieving relevant results from dense information quickly.
Let's see how using LanceDB inside phidata helps in making LLM more useful:
## Prerequisites: install and import necessary dependencies
**Create a virtual environment**
1. Install the virtualenv package
```bash
pip install virtualenv
```
2. Create a directory for your project, go into it, and create a virtual environment inside it.
```bash
mkdir phi
```
```bash
cd phi
```
```bash
python -m venv phidata_
```
**Activating virtual environment**
1. From inside the project directory, run the following command to activate the virtual environment.
```bash
phidata_/Scripts/activate
```
**Install the following packages in the virtual environment**
```bash
pip install lancedb phidata youtube_transcript_api openai ollama numpy pandas
```
**Create python files and import necessary libraries**
You need to create two files - `transcript.py` and `ollama_assistant.py` or `openai_assistant.py`
=== "openai_assistant.py"
```python
import os, openai
from rich.prompt import Prompt
from phi.assistant import Assistant
from phi.knowledge.text import TextKnowledgeBase
from phi.vectordb.lancedb import LanceDb
from phi.llm.openai import OpenAIChat
from phi.embedder.openai import OpenAIEmbedder
from transcript import extract_transcript
if "OPENAI_API_KEY" not in os.environ:
# OR set the key here as a variable
openai.api_key = "sk-..."
# The code below creates a file "transcript.txt" in the directory, the txt file will be used below
youtube_url = "https://www.youtube.com/watch?v=Xs33-Gzl8Mo"
segment_duration = 20
transcript_text,dict_transcript = extract_transcript(youtube_url,segment_duration)
```
=== "ollama_assistant.py"
```python
from rich.prompt import Prompt
from phi.assistant import Assistant
from phi.knowledge.text import TextKnowledgeBase
from phi.vectordb.lancedb import LanceDb
from phi.llm.ollama import Ollama
from phi.embedder.ollama import OllamaEmbedder
from transcript import extract_transcript
# The code below creates a file "transcript.txt" in the directory, the txt file will be used below
youtube_url = "https://www.youtube.com/watch?v=Xs33-Gzl8Mo"
segment_duration = 20
transcript_text,dict_transcript = extract_transcript(youtube_url,segment_duration)
```
=== "transcript.py"
``` python
from youtube_transcript_api import YouTubeTranscriptApi
import re
def smodify(seconds):
    hours, remainder = divmod(seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{int(hours):02}:{int(minutes):02}:{int(seconds):02}"

def extract_transcript(youtube_url, segment_duration):
    # Extract video ID from the URL
    video_id = re.search(r'(?<=v=)[\w-]+', youtube_url)
    if not video_id:
        video_id = re.search(r'(?<=be/)[\w-]+', youtube_url)
    if not video_id:
        return None
    video_id = video_id.group(0)

    # Attempt to fetch the transcript
    try:
        # Try to get the official transcript
        transcript = YouTubeTranscriptApi.get_transcript(video_id, languages=['en'])
    except Exception:
        # If no official transcript is found, try to get an auto-generated transcript
        try:
            transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
            for transcript in transcript_list:
                transcript = transcript.translate('en').fetch()
        except Exception:
            return None

    # Format the transcript into segment_duration-second chunks
    transcript_text, dict_transcript = format_transcript(transcript, segment_duration)

    # Open the file in write mode, which creates it if it doesn't exist
    with open("transcript.txt", "w", encoding="utf-8") as file:
        file.write(transcript_text)
    return transcript_text, dict_transcript

def format_transcript(transcript, segment_duration):
    chunked_transcript = []
    chunk_dict = []
    current_chunk = []
    current_time = 0
    start_time_chunk = 0  # To track the start time of the current chunk

    for segment in transcript:
        start_time = segment['start']
        end_time_x = start_time + segment['duration']
        text = segment['text']

        # Add text to the current chunk
        current_chunk.append(text)

        # Update the current time with the duration of the current chunk,
        # which is given by segment['start'] - start_time_chunk
        if current_chunk:
            current_time = start_time - start_time_chunk

        # If the current chunk duration reaches or exceeds segment_duration, save the chunk
        if current_time >= segment_duration:
            # Use the start time of the first segment in the current chunk as the timestamp
            chunked_transcript.append(f"[{smodify(start_time_chunk)} to {smodify(end_time_x)}] " + " ".join(current_chunk))
            current_chunk = re.sub(r'[\xa0\n]', lambda x: '' if x.group() == '\xa0' else ' ', "\n".join(current_chunk))
            chunk_dict.append({"timestamp": f"[{smodify(start_time_chunk)} to {smodify(end_time_x)}]", "text": "".join(current_chunk)})
            current_chunk = []  # Reset the chunk
            start_time_chunk = start_time + segment['duration']  # Update the start time for the next chunk
            current_time = 0  # Reset current time

    # Add any remaining text in the last chunk
    if current_chunk:
        chunked_transcript.append(f"[{smodify(start_time_chunk)} to {smodify(end_time_x)}] " + " ".join(current_chunk))
        current_chunk = re.sub(r'[\xa0\n]', lambda x: '' if x.group() == '\xa0' else ' ', "\n".join(current_chunk))
        chunk_dict.append({"timestamp": f"[{smodify(start_time_chunk)} to {smodify(end_time_x)}]", "text": "".join(current_chunk)})

    return "\n\n".join(chunked_transcript), chunk_dict
```
!!! warning
If creating Ollama assistant, download and install Ollama [from here](https://ollama.com/) and then run the Ollama instance in the background. Also, download the required models using `ollama pull <model-name>`. Check out the models [here](https://ollama.com/library)
**Run the following command to deactivate the virtual environment if needed**
```bash
deactivate
```
## **Step 1** - Create a Knowledge Base for AI Assistant using LanceDB
=== "openai_assistant.py"
```python
# Create knowledge Base with OpenAIEmbedder in LanceDB
knowledge_base = TextKnowledgeBase(
path="transcript.txt",
vector_db=LanceDb(
embedder=OpenAIEmbedder(api_key = openai.api_key),
table_name="transcript_documents",
uri="./t3mp/.lancedb",
),
num_documents = 10
)
```
=== "ollama_assistant.py"
```python
# Create knowledge Base with OllamaEmbedder in LanceDB
knowledge_base = TextKnowledgeBase(
path="transcript.txt",
vector_db=LanceDb(
embedder=OllamaEmbedder(model="nomic-embed-text",dimensions=768),
table_name="transcript_documents",
uri="./t2mp/.lancedb",
),
num_documents = 10
)
```
Check out the list of **embedders** supported by **phidata** and their usage [here](https://docs.phidata.com/embedder/introduction).
Here we have used `TextKnowledgeBase`, which loads text/docx files to the knowledge base.
Let's see all the parameters that `TextKnowledgeBase` takes -
| Name| Type | Purpose | Default |
|:----|:-----|:--------|:--------|
|`path`|`Union[str, Path]`| Path to text file(s). It can point to a single text file or a directory of text files.| provided by user |
|`formats`|`List[str]`| File formats accepted by this knowledge base. |`[".txt"]`|
|`vector_db`|`VectorDb`| Vector Database for the Knowledge Base. phidata provides a wrapper around many vector DBs, you can import it like this - `from phi.vectordb.lancedb import LanceDb` | provided by user |
|`num_documents`|`int`| Number of results (documents/vectors) that vector search should return. |`5`|
|`reader`|`TextReader`| phidata provides many types of reader objects which read data, clean it and create chunks of data, encapsulate each chunk inside an object of the `Document` class, and return **`List[Document]`**. | `TextReader()` |
|`optimize_on`|`int`| The number of documents on which to optimize the vector database; it is meant to trigger index creation. |`1000`|
??? Tip "Wonder! What is `Document` class?"
Before storing data in a vectorDB, we need to split it into smaller chunks; embeddings are created for these chunks, and the embeddings, along with the chunks, are stored in the vectorDB.
When a user queries the vectorDB, the query is converted into an embedding and a nearest neighbor search is performed over it, returning the embeddings that correspond to the most semantically similar chunks (parts of our data) present in the vectorDB.
Here, `Document` is a class in phidata. Since there is an option to let phidata create and manage embeddings, it splits our data into smaller chunks (as expected), but it does not create embeddings on the raw chunks directly. Instead, it takes each chunk and encapsulates it inside an object of the `Document` class along with various other metadata related to the chunk. Embeddings are then created on these `Document` objects and stored in the vectorDB.
```python
class Document(BaseModel):
    """Model for managing a document"""

    content: str  # <--- here data of chunk is stored
    id: Optional[str] = None
    name: Optional[str] = None
    meta_data: Dict[str, Any] = {}
    embedder: Optional[Embedder] = None
    embedding: Optional[List[float]] = None
    usage: Optional[Dict[str, Any]] = None
```
However, using phidata you can load many other types of data in the knowledge base(other than text). Check out [phidata Knowledge Base](https://docs.phidata.com/knowledge/introduction) for more information.
Let's dig deeper into the `vector_db` parameter and see what parameters `LanceDb` takes -
| Name| Type | Purpose | Default |
|:----|:-----|:--------|:--------|
|`embedder`|`Embedder`| phidata provides many Embedders that abstract the interaction with embedding APIs and utilize it to generate embeddings. Check out other embedders [here](https://docs.phidata.com/embedder/introduction) | `OpenAIEmbedder` |
|`distance`|`Distance`| The choice of distance metric used to calculate the similarity between vectors, which directly impacts search results and performance in vector databases. |`Distance.cosine`|
|`connection`|`lancedb.db.LanceTable`| LanceTable can be accessed through `.connection`. You can connect to an existing table of LanceDB, created outside of phidata, and utilize it. If not provided, it creates a new table using `table_name` parameter and adds it to `connection`. |`None`|
|`uri`|`str`| It specifies the directory location of **LanceDB database** and establishes a connection that can be used to interact with the database. | `"/tmp/lancedb"` |
|`table_name`|`str`| If `connection` is not provided, it initializes and connects to a new **LanceDB table** with a specified(or default) name in the database present at `uri`. |`"phi"`|
|`nprobes`|`int`| It refers to the number of partitions that the search algorithm examines to find the nearest neighbors of a given query vector. Higher values will yield better recall (more likely to find vectors if they exist) at the expense of latency. |`20`|
!!! note
Since we have just initialized the Knowledge Base, the vectorDB table that corresponds to it is not yet populated with our data. It will be populated in **Step 3**, once we perform the `load` operation.
You can check the state of the LanceDB table using - `knowledge_base.vector_db.connection.to_pandas()`
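A tiny sketch of that check (best run after the `load` call in **Step 3**, when the table actually exists); the expected columns follow from the description of the load operation below:

```python
# Inspect the LanceDB table backing the knowledge base
df = knowledge_base.vector_db.connection.to_pandas()
print(df.shape)
print(df.columns.tolist())  # expected to include 'id', 'vector' and 'payload' once loaded
```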
Now that the Knowledge Base is initialized, we can go to **Step 2**.
## **Step 2** - Create an assistant with our choice of LLM and reference to the knowledge base.
=== "openai_assistant.py"
```python
# define an assistant with gpt-4o-mini llm and reference to the knowledge base created above
assistant = Assistant(
llm=OpenAIChat(model="gpt-4o-mini", max_tokens=1000, temperature=0.3,api_key = openai.api_key),
description="""You are an Expert in explaining youtube video transcripts. You are a bot that takes transcript of a video and answer the question based on it.
This is transcript for the above timestamp: {relevant_document}
The user input is: {user_input}
generate highlights only when asked.
When asked to generate highlights from the video, understand the context for each timestamp and create key highlight points, answer in following way -
[timestamp] - highlight 1
[timestamp] - highlight 2
... so on
Your task is to understand the user question, and provide an answer using the provided contexts. Your answers are correct, high-quality, and written by an domain expert. If the provided context does not contain the answer, simply state,'The provided context does not have the answer.'""",
knowledge_base=knowledge_base,
add_references_to_prompt=True,
)
```
=== "ollama_assistant.py"
```python
# define an assistant with llama3.1 llm and reference to the knowledge base created above
assistant = Assistant(
llm=Ollama(model="llama3.1"),
description="""You are an Expert in explaining youtube video transcripts. You are a bot that takes transcript of a video and answer the question based on it.
This is transcript for the above timestamp: {relevant_document}
The user input is: {user_input}
generate highlights only when asked.
When asked to generate highlights from the video, understand the context for each timestamp and create key highlight points, answer in following way -
[timestamp] - highlight 1
[timestamp] - highlight 2
... so on
Your task is to understand the user question, and provide an answer using the provided contexts. Your answers are correct, high-quality, and written by an domain expert. If the provided context does not contain the answer, simply state,'The provided context does not have the answer.'""",
knowledge_base=knowledge_base,
add_references_to_prompt=True,
)
```
Assistants add **memory**, **knowledge**, and **tools** to LLMs. In this example we will add only **knowledge**.
Whenever we give a query to the LLM, the assistant retrieves relevant information from our **Knowledge Base** (a table in LanceDB) and passes it to the LLM along with the user query in a structured way.
- Setting `add_references_to_prompt=True` always adds information from the knowledge base to the prompt, regardless of whether it is relevant to the question.
To learn more about creating an assistant in phidata, check out the [phidata docs](https://docs.phidata.com/assistants/introduction).
## **Step 3** - Load data to Knowledge Base.
```python
# load our data into the knowledge_base (populating the LanceTable)
assistant.knowledge_base.load(recreate=False)
```
The above code loads the data into the Knowledge Base (LanceDB table), making it ready to be used by the assistant. The `load` operation accepts the following parameters:
| Name| Type | Purpose | Default |
|:----|:-----|:--------|:--------|
|`recreate`|`bool`| If True, it drops the existing table and recreates the table in the vectorDB. |`False`|
|`upsert`|`bool`| If True and the vectorDB supports upsert, it will upsert documents to the vector db. | `False` |
|`skip_existing`|`bool`| If True, skips documents that already exist in the vectorDB when inserting. |`True`|
??? tip "What is upsert?"
Upsert is a database operation that combines "update" and "insert". It updates existing records if a document with the same identifier does exist, or inserts new records if no matching record exists. This is useful for maintaining the most current information without manually checking for existence.
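For example, the same `load` call can be tuned with the flags from the table above; a minimal sketch (assuming your vector DB supports upsert):

```python
# Re-run the load without dropping the table, upserting changed documents
# and skipping documents that are already present
assistant.knowledge_base.load(
    recreate=False,
    upsert=True,
    skip_existing=True,
)
```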
During the Load operation, phidata directly interacts with the LanceDB library and performs the loading of the table with our data in the following steps -
1. **Creates** and **initializes** the table if it does not exist.
2. Then it **splits** our data into smaller **chunks**.
??? question "How do they create chunks?"
**phidata** provides many types of **Knowledge Bases** based on the type of data. Most of them :material-information-outline:{ title="except LlamaIndexKnowledgeBase and LangChainKnowledgeBase"} have a property method called `document_lists` of type `Iterator[List[Document]]`. During the load operation, this property method is invoked. It traverses the data we provide (in this case, a text file or files) using the `reader`. It then **reads** the data, **creates chunks**, **encapsulates** each chunk inside a `Document` object, and yields **lists of `Document` objects** that contain our data.
3. Then **embeddings** are created for these chunks and **inserted** into the LanceDB table
??? question "How do they insert your data as different rows in LanceDB Table?"
The chunks of your data are in the form of **lists of `Document` objects**, yielded in the step above.
For each `Document` in `List[Document]`, it performs the following operations:
- Creates embedding on `Document`.
- Cleans the **content attribute**(chunks of our data is here) of `Document`.
- Prepares data by creating `id` and loading `payload` with the metadata related to this chunk. (1)
{ .annotate }
1. Three columns will be added to the table - `"id"`, `"vector"`, and `"payload"` (payload contains various metadata including **`content`**)
- Then adds this data to the LanceTable.
4. Now the internal state of `knowledge_base` has changed (embeddings are created and loaded in the table) and it is **ready to be used by the assistant**.
## **Step 4** - Start a cli chatbot with access to the Knowledge base
```python
# start cli chatbot with knowledge base
assistant.print_response("Ask me about something from the knowledge base")
while True:
    message = Prompt.ask(f"[bold] :sunglasses: User [/bold]")
    if message in ("exit", "bye"):
        break
    assistant.print_response(message, markdown=True)
```
For more information and amazing cookbooks for phidata, read the [phidata documentation](https://docs.phidata.com/introduction) and also visit the [LanceDB x phidata documentation](https://docs.phidata.com/vectordb/lancedb).
View File
@@ -1,13 +1,73 @@
# FiftyOne
FiftyOne is an open source toolkit that enables users to curate better data and build better models. It includes tools for data exploration, visualization, and management, as well as features for collaboration and sharing.
Developers, data scientists, and researchers who work with computer vision and machine learning can use FiftyOne to improve the quality of their datasets and deliver insights about their models.
![example](../assets/voxel.gif)
**FiftyOne** provides an API to create LanceDB tables and run similarity queries, both **programmatically in Python** and via **point-and-click in the App**.
Let's get started and see how to use **LanceDB** to create a **similarity index** on your FiftyOne datasets.
## Overview
**[Embeddings](../embeddings/understanding_embeddings.md)** are foundational to all of the **vector search** features. In FiftyOne, embeddings are managed by the [**FiftyOne Brain**](https://docs.voxel51.com/user_guide/brain.html) that provides powerful machine learning techniques designed to transform how you curate your data from an art into a measurable science.
!!!question "Have you ever wanted to find the images most similar to an image in your dataset?"
The **FiftyOne Brain** makes computing **visual similarity** really easy. You can compute the similarity of samples in your dataset using an embedding model and store the results in the **brain key**.
You can then sort your samples by similarity or use this information to find potential duplicate images.
Here we will be doing the following :
1. **Create Index** - In order to run similarity queries against our media, we need to **index** the data. We can do this via the `compute_similarity()` function.
- In the function, specify the **model** you want to use to generate the embedding vectors, and what **vector search engine** you want to use on the **backend** (here LanceDB).
!!!tip
You can also give the similarity index a name(`brain_key`), which is useful if you want to run vector searches against multiple indexes.
2. **Query** - Once you have generated your similarity index, you can query your dataset with `sort_by_similarity()` (a short sketch follows this list). The query can be any of the following:
- An ID (sample or patch)
- A query vector of same dimension as the index
- A list of IDs (samples or patches)
- A text prompt (search semantically)
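A minimal sketch of the two most common query types from the list above; it assumes the index has already been built as in the Quick Example below (a CLIP model with `brain_key="lancedb_index"`):

```python
# Query by a sample ID: find the images most similar to one sample
query_id = dataset.first().id
view = dataset.sort_by_similarity(query_id, brain_key="lancedb_index", k=10)

# Query by a text prompt (semantic search; works because the CLIP model also embeds text)
view = dataset.sort_by_similarity("a photo of a dog", brain_key="lancedb_index", k=10)
```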
## Prerequisites: install necessary dependencies
1. **Create and activate a virtual environment**
Install virtualenv package and run the following command in your project directory.
```bash
python -m venv fiftyone_
```
From inside the project directory run the following to activate the virtual environment.
=== "Windows"
```bash
fiftyone_/Scripts/activate
```
=== "macOS/Linux"
```bash
source fiftyone_/bin/activate
```
2. **Install the following packages in the virtual environment**
To install FiftyOne, ensure you have activated any virtual environment that you are using, then run
```bash
pip install fiftyone
```
## Understand basic workflow
The basic workflow shown below uses LanceDB to create a similarity index on your FiftyOne datasets:
1. Load a dataset into FiftyOne.
@@ -19,14 +79,10 @@ datasets:
5. If desired, delete the table.
## Quick Example
Let's jump into a quick example that demonstrates this workflow.
!!! Note
Install the LanceDB Python client to run the code shown below.
```
pip install lancedb
```
```python
@@ -36,7 +92,10 @@ import fiftyone.zoo as foz
# Step 1: Load your data into FiftyOne
dataset = foz.load_zoo_dataset("quickstart")
```
Make sure you install torch ([guide here](https://pytorch.org/get-started/locally/)) before proceeding.
```python
# Steps 2 and 3: Compute embeddings and create a similarity index
lancedb_index = fob.compute_similarity(
    dataset,
@@ -45,8 +104,11 @@ lancedb_index = fob.compute_similarity(
backend="lancedb", backend="lancedb",
) )
``` ```
!!! note
Running the code above will download the CLIP model (~2.6 GB).
Once the similarity index has been generated, we can query our data in FiftyOne by specifying the `brain_key`:
```python
# Step 4: Query your data
@@ -56,7 +118,22 @@ view = dataset.sort_by_similarity(
brain_key="lancedb_index", brain_key="lancedb_index",
k=10, # limit to 10 most similar samples k=10, # limit to 10 most similar samples
) )
```
The returned results are of type `DatasetView`.
!!! note
`DatasetView` does not hold its contents in-memory. Views simply store the rule(s) that are applied to extract the content of interest from the underlying Dataset when the view is iterated/aggregated on.
This means, for example, that the contents of a `DatasetView` may change as the underlying Dataset is modified.
??? question "Can you query a view instead of dataset?"
Yes, you can also query a view.
Performing a similarity search on a `DatasetView` will only return results from the view; if the view contains samples that were not included in the index, they will never be included in the result.
This means that you can index an entire Dataset once and then perform searches on subsets of the dataset by constructing views that contain the images of interest.
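A small sketch of the behavior described above, assuming the same `lancedb_index` brain key from the Quick Example:

```python
# Build a view containing only a subset of the dataset ...
subset_view = dataset.take(100)

# ... and search within it: only samples in the view can appear in the results
results = subset_view.sort_by_similarity(
    "a photo of a dog", brain_key="lancedb_index", k=5
)
```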
```python
# Step 5 (optional): Cleanup
# Delete the LanceDB table
@@ -66,4 +143,90 @@ lancedb_index.cleanup()
dataset.delete_brain_run("lancedb_index")
```
## Using LanceDB backend
By default, calling `compute_similarity()` or `sort_by_similarity()` will use the sklearn backend.
To use the LanceDB backend, simply set the optional `backend` parameter of `compute_similarity()` to `"lancedb"`:
```python
import fiftyone.brain as fob
#... rest of the code
fob.compute_similarity(..., backend="lancedb", ...)
```
Alternatively, you can configure FiftyOne to use the LanceDB backend by setting the following environment variable.
In your terminal, set the environment variable using:
=== "Windows"
```shell
$Env:FIFTYONE_BRAIN_DEFAULT_SIMILARITY_BACKEND="lancedb" //powershell
set FIFTYONE_BRAIN_DEFAULT_SIMILARITY_BACKEND=lancedb //cmd
```
=== "macOS/Linux"
```bash
export FIFTYONE_BRAIN_DEFAULT_SIMILARITY_BACKEND=lancedb
```
!!! note
This setting only applies to the current terminal session. Once the terminal is closed, the environment variable is gone.
Alternatively, you can **permanently** configure FiftyOne to use the LanceDB backend by creating a `brain_config.json` at `~/.fiftyone/brain_config.json`. The JSON file may contain any desired subset of config fields that you wish to customize.
```json
{
"default_similarity_backend": "lancedb"
}
```
This will override the default `brain_config` and set it according to your customization. You can check the configuration by running the following code:
```python
import fiftyone.brain as fob
# Print your current brain config
print(fob.brain_config)
```
## LanceDB config parameters
The LanceDB backend supports query parameters that can be used to customize your similarity queries. These parameters include:
| Name| Purpose | Default |
|:----|:--------|:--------|
|**table_name**|The name of the LanceDB table to use. If none is provided, a new table will be created|`None`|
|**metric**|The embedding distance metric to use when creating a new table. The supported values are ("cosine", "euclidean")|`"cosine"`|
|**uri**| The database URI to use. In this Database URI, tables will be created. |`"/tmp/lancedb"`|
There are two ways to specify/customize the parameters:
1. **Using `brain_config.json` file**
```json
{
"similarity_backends": {
"lancedb": {
"table_name": "your-table",
"metric": "euclidean",
"uri": "/tmp/lancedb"
}
}
}
```
2. **Directly passing to `compute_similarity()` to configure a specific new index** :
```python
lancedb_index = fob.compute_similarity(
...
backend="lancedb",
brain_key="lancedb_index",
table_name="your-table",
metric="euclidean",
uri="/tmp/lancedb",
)
```
For a much more in-depth walkthrough of the integration, visit the LanceDB x Voxel51 [docs page](https://docs.voxel51.com/integrations/lancedb.html).
View File
@@ -41,7 +41,6 @@ To build everything fresh:
```bash
npm install
npm run tsc
npm run build
```
@@ -51,18 +50,6 @@ Then you should be able to run the tests with:
npm test
```
### Rebuilding Rust library
```bash
npm run build
```
### Rebuilding Typescript
```bash
npm run tsc
```
### Fix lints
To run the linter and have it automatically fix all errors
View File
@@ -38,4 +38,4 @@ A [WriteMode](../enums/WriteMode.md) to use on this operation
#### Defined in #### Defined in
[index.ts:1019](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1019) [index.ts:1359](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1359)
View File
@@ -30,6 +30,7 @@ A connection to a LanceDB database.
- [dropTable](LocalConnection.md#droptable) - [dropTable](LocalConnection.md#droptable)
- [openTable](LocalConnection.md#opentable) - [openTable](LocalConnection.md#opentable)
- [tableNames](LocalConnection.md#tablenames) - [tableNames](LocalConnection.md#tablenames)
- [withMiddleware](LocalConnection.md#withmiddleware)
## Constructors ## Constructors
@@ -46,7 +47,7 @@ A connection to a LanceDB database.
#### Defined in #### Defined in
[index.ts:489](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L489) [index.ts:739](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L739)
## Properties ## Properties
@@ -56,7 +57,7 @@ A connection to a LanceDB database.
#### Defined in #### Defined in
[index.ts:487](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L487) [index.ts:737](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L737)
___ ___
@@ -74,7 +75,7 @@ ___
#### Defined in #### Defined in
[index.ts:486](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L486) [index.ts:736](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L736)
## Accessors ## Accessors
@@ -92,7 +93,7 @@ ___
#### Defined in #### Defined in
[index.ts:494](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L494) [index.ts:744](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L744)
## Methods ## Methods
@@ -113,7 +114,7 @@ Creates a new Table, optionally initializing it with new data.
| Name | Type | | Name | Type |
| :------ | :------ | | :------ | :------ |
| `name` | `string` \| [`CreateTableOptions`](../interfaces/CreateTableOptions.md)\<`T`\> | | `name` | `string` \| [`CreateTableOptions`](../interfaces/CreateTableOptions.md)\<`T`\> |
| `data?` | `Record`\<`string`, `unknown`\>[] | | `data?` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] |
| `optsOrEmbedding?` | [`WriteOptions`](../interfaces/WriteOptions.md) \| [`EmbeddingFunction`](../interfaces/EmbeddingFunction.md)\<`T`\> | | `optsOrEmbedding?` | [`WriteOptions`](../interfaces/WriteOptions.md) \| [`EmbeddingFunction`](../interfaces/EmbeddingFunction.md)\<`T`\> |
| `opt?` | [`WriteOptions`](../interfaces/WriteOptions.md) | | `opt?` | [`WriteOptions`](../interfaces/WriteOptions.md) |
@@ -127,7 +128,7 @@ Creates a new Table, optionally initializing it with new data.
#### Defined in #### Defined in
[index.ts:542](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L542) [index.ts:788](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L788)
___ ___
@@ -158,7 +159,7 @@ ___
#### Defined in #### Defined in
[index.ts:576](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L576) [index.ts:822](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L822)
___ ___
@@ -184,7 +185,7 @@ Drop an existing table.
#### Defined in #### Defined in
[index.ts:630](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L630) [index.ts:876](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L876)
___ ___
@@ -210,7 +211,7 @@ Open a table in the database.
#### Defined in #### Defined in
[index.ts:510](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L510) [index.ts:760](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L760)
**openTable**\<`T`\>(`name`, `embeddings`): `Promise`\<[`Table`](../interfaces/Table.md)\<`T`\>\> **openTable**\<`T`\>(`name`, `embeddings`): `Promise`\<[`Table`](../interfaces/Table.md)\<`T`\>\>
@@ -239,7 +240,7 @@ Connection.openTable
#### Defined in #### Defined in
[index.ts:518](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L518) [index.ts:768](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L768)
**openTable**\<`T`\>(`name`, `embeddings?`): `Promise`\<[`Table`](../interfaces/Table.md)\<`T`\>\> **openTable**\<`T`\>(`name`, `embeddings?`): `Promise`\<[`Table`](../interfaces/Table.md)\<`T`\>\>
@@ -266,7 +267,7 @@ Connection.openTable
#### Defined in #### Defined in
[index.ts:522](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L522) [index.ts:772](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L772)
___ ___
@@ -286,4 +287,36 @@ Get the names of all tables in the database.
#### Defined in #### Defined in
[index.ts:501](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L501) [index.ts:751](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L751)
___
### withMiddleware
**withMiddleware**(`middleware`): [`Connection`](../interfaces/Connection.md)
Instrument the behavior of this Connection with middleware.
The middleware will be called in the order they are added.
Currently this functionality is only supported for remote Connections.
#### Parameters
| Name | Type |
| :------ | :------ |
| `middleware` | `HttpMiddleware` |
#### Returns
[`Connection`](../interfaces/Connection.md)
- this Connection instrumented by the passed middleware
#### Implementation of
[Connection](../interfaces/Connection.md).[withMiddleware](../interfaces/Connection.md#withmiddleware)
#### Defined in
[index.ts:880](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L880)
View File
@@ -37,6 +37,8 @@ A LanceDB Table is the collection of Records. Each Record has one or more vector
### Methods ### Methods
- [add](LocalTable.md#add) - [add](LocalTable.md#add)
- [addColumns](LocalTable.md#addcolumns)
- [alterColumns](LocalTable.md#altercolumns)
- [checkElectron](LocalTable.md#checkelectron) - [checkElectron](LocalTable.md#checkelectron)
- [cleanupOldVersions](LocalTable.md#cleanupoldversions) - [cleanupOldVersions](LocalTable.md#cleanupoldversions)
- [compactFiles](LocalTable.md#compactfiles) - [compactFiles](LocalTable.md#compactfiles)
@@ -44,13 +46,16 @@ A LanceDB Table is the collection of Records. Each Record has one or more vector
- [createIndex](LocalTable.md#createindex) - [createIndex](LocalTable.md#createindex)
- [createScalarIndex](LocalTable.md#createscalarindex) - [createScalarIndex](LocalTable.md#createscalarindex)
- [delete](LocalTable.md#delete) - [delete](LocalTable.md#delete)
- [dropColumns](LocalTable.md#dropcolumns)
- [filter](LocalTable.md#filter) - [filter](LocalTable.md#filter)
- [getSchema](LocalTable.md#getschema) - [getSchema](LocalTable.md#getschema)
- [indexStats](LocalTable.md#indexstats) - [indexStats](LocalTable.md#indexstats)
- [listIndices](LocalTable.md#listindices) - [listIndices](LocalTable.md#listindices)
- [mergeInsert](LocalTable.md#mergeinsert)
- [overwrite](LocalTable.md#overwrite) - [overwrite](LocalTable.md#overwrite)
- [search](LocalTable.md#search) - [search](LocalTable.md#search)
- [update](LocalTable.md#update) - [update](LocalTable.md#update)
- [withMiddleware](LocalTable.md#withmiddleware)
## Constructors ## Constructors
@@ -74,7 +79,7 @@ A LanceDB Table is the collection of Records. Each Record has one or more vector
#### Defined in #### Defined in
[index.ts:642](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L642) [index.ts:892](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L892)
**new LocalTable**\<`T`\>(`tbl`, `name`, `options`, `embeddings`) **new LocalTable**\<`T`\>(`tbl`, `name`, `options`, `embeddings`)
@@ -95,7 +100,7 @@ A LanceDB Table is the collection of Records. Each Record has one or more vector
#### Defined in #### Defined in
[index.ts:649](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L649) [index.ts:899](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L899)
## Properties ## Properties
@@ -105,7 +110,7 @@ A LanceDB Table is the collection of Records. Each Record has one or more vector
#### Defined in #### Defined in
[index.ts:639](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L639) [index.ts:889](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L889)
___ ___
@@ -115,7 +120,7 @@ ___
#### Defined in #### Defined in
[index.ts:638](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L638) [index.ts:888](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L888)
___ ___
@@ -125,7 +130,7 @@ ___
#### Defined in #### Defined in
[index.ts:637](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L637) [index.ts:887](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L887)
___ ___
@@ -143,7 +148,7 @@ ___
#### Defined in #### Defined in
[index.ts:640](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L640) [index.ts:890](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L890)
___ ___
@@ -153,7 +158,7 @@ ___
#### Defined in #### Defined in
[index.ts:636](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L636) [index.ts:886](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L886)
___ ___
@@ -179,7 +184,7 @@ Creates a filter query to find all rows matching the specified criteria
#### Defined in #### Defined in
[index.ts:688](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L688) [index.ts:938](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L938)
## Accessors ## Accessors
@@ -197,7 +202,7 @@ Creates a filter query to find all rows matching the specified criteria
#### Defined in #### Defined in
[index.ts:668](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L668) [index.ts:918](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L918)
___ ___
@@ -215,7 +220,7 @@ ___
#### Defined in #### Defined in
[index.ts:849](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L849) [index.ts:1171](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1171)
## Methods ## Methods
@@ -229,7 +234,7 @@ Insert records into this Table.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `data` | `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table |
#### Returns #### Returns
@@ -243,7 +248,59 @@ The number of rows added to the table
#### Defined in #### Defined in
[index.ts:696](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L696) [index.ts:946](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L946)
___
### addColumns
**addColumns**(`newColumnTransforms`): `Promise`\<`void`\>
Add new columns with defined values.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `newColumnTransforms` | \{ `name`: `string` ; `valueSql`: `string` }[] | pairs of column names and the SQL expression to use to calculate the value of the new column. These expressions will be evaluated for each row in the table, and can reference existing columns in the table. |
#### Returns
`Promise`\<`void`\>
#### Implementation of
[Table](../interfaces/Table.md).[addColumns](../interfaces/Table.md#addcolumns)
#### Defined in
[index.ts:1195](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1195)
___
### alterColumns
**alterColumns**(`columnAlterations`): `Promise`\<`void`\>
Alter the name or nullability of columns.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `columnAlterations` | [`ColumnAlteration`](../interfaces/ColumnAlteration.md)[] | One or more alterations to apply to columns. |
#### Returns
`Promise`\<`void`\>
#### Implementation of
[Table](../interfaces/Table.md).[alterColumns](../interfaces/Table.md#altercolumns)
#### Defined in
[index.ts:1201](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1201)
___ ___
@@ -257,7 +314,7 @@ ___
#### Defined in #### Defined in
[index.ts:861](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L861) [index.ts:1183](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1183)
___ ___
@@ -280,7 +337,7 @@ Clean up old versions of the table, freeing disk space.
#### Defined in #### Defined in
[index.ts:808](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L808) [index.ts:1130](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1130)
___ ___
@@ -307,16 +364,22 @@ Metrics about the compaction operation.
#### Defined in #### Defined in
[index.ts:831](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L831) [index.ts:1153](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1153)
___ ___
### countRows ### countRows
**countRows**(): `Promise`\<`number`\> **countRows**(`filter?`): `Promise`\<`number`\>
Returns the number of rows in this table. Returns the number of rows in this table.
#### Parameters
| Name | Type |
| :------ | :------ |
| `filter?` | `string` |
#### Returns #### Returns
`Promise`\<`number`\> `Promise`\<`number`\>
@@ -327,7 +390,7 @@ Returns the number of rows in this table.
#### Defined in #### Defined in
[index.ts:749](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L749) [index.ts:1021](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1021)
___ ___
@@ -357,13 +420,13 @@ VectorIndexParams.
#### Defined in #### Defined in
[index.ts:734](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L734) [index.ts:1003](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1003)
___ ___
### createScalarIndex ### createScalarIndex
**createScalarIndex**(`column`, `replace`): `Promise`\<`void`\> **createScalarIndex**(`column`, `replace?`): `Promise`\<`void`\>
Create a scalar index on this Table for the given column Create a scalar index on this Table for the given column
@@ -372,7 +435,7 @@ Create a scalar index on this Table for the given column
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `column` | `string` | The column to index | | `column` | `string` | The column to index |
| `replace` | `boolean` | If false, fail if an index already exists on the column Scalar indices, like vector indices, can be used to speed up scans. A scalar index can speed up scans that contain filter expressions on the indexed column. For example, the following scan will be faster if the column `my_col` has a scalar index: ```ts const con = await lancedb.connect('./.lancedb'); const table = await con.openTable('images'); const results = await table.where('my_col = 7').execute(); ``` Scalar indices can also speed up scans containing a vector search and a prefilter: ```ts const con = await lancedb.connect('././lancedb'); const table = await con.openTable('images'); const results = await table.search([1.0, 2.0]).where('my_col != 7').prefilter(true); ``` Scalar indices can only speed up scans for basic filters using equality, comparison, range (e.g. `my_col BETWEEN 0 AND 100`), and set membership (e.g. `my_col IN (0, 1, 2)`) Scalar indices can be used if the filter contains multiple indexed columns and the filter criteria are AND'd or OR'd together (e.g. `my_col < 0 AND other_col> 100`) Scalar indices may be used if the filter contains non-indexed columns but, depending on the structure of the filter, they may not be usable. For example, if the column `not_indexed` does not have a scalar index then the filter `my_col = 0 OR not_indexed = 1` will not be able to use any scalar index on `my_col`. | | `replace?` | `boolean` | If false, fail if an index already exists on the column it is always set to true for remote connections Scalar indices, like vector indices, can be used to speed up scans. A scalar index can speed up scans that contain filter expressions on the indexed column. For example, the following scan will be faster if the column `my_col` has a scalar index: ```ts const con = await lancedb.connect('./.lancedb'); const table = await con.openTable('images'); const results = await table.where('my_col = 7').execute(); ``` Scalar indices can also speed up scans containing a vector search and a prefilter: ```ts const con = await lancedb.connect('././lancedb'); const table = await con.openTable('images'); const results = await table.search([1.0, 2.0]).where('my_col != 7').prefilter(true); ``` Scalar indices can only speed up scans for basic filters using equality, comparison, range (e.g. `my_col BETWEEN 0 AND 100`), and set membership (e.g. `my_col IN (0, 1, 2)`) Scalar indices can be used if the filter contains multiple indexed columns and the filter criteria are AND'd or OR'd together (e.g. `my_col < 0 AND other_col> 100`) Scalar indices may be used if the filter contains non-indexed columns but, depending on the structure of the filter, they may not be usable. For example, if the column `not_indexed` does not have a scalar index then the filter `my_col = 0 OR not_indexed = 1` will not be able to use any scalar index on `my_col`. |
#### Returns #### Returns
@@ -392,7 +455,7 @@ await table.createScalarIndex('my_col')
#### Defined in #### Defined in
[index.ts:742](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L742) [index.ts:1011](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1011)
___ ___
@@ -418,7 +481,38 @@ Delete rows from this table.
#### Defined in #### Defined in
[index.ts:758](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L758) [index.ts:1030](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1030)
___
### dropColumns
▸ **dropColumns**(`columnNames`): `Promise`\<`void`\>
Drop one or more columns from the dataset
This is a metadata-only operation and does not remove the data from the
underlying storage. In order to remove the data, you must subsequently
call ``compact_files`` to rewrite the data without the removed columns and
then call ``cleanup_files`` to remove the old files.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `columnNames` | `string`[] | The names of the columns to drop. These can be nested column references (e.g. "a.b.c") or top-level column names (e.g. "a"). |
#### Returns
`Promise`\<`void`\>
#### Implementation of
[Table](../interfaces/Table.md).[dropColumns](../interfaces/Table.md#dropcolumns)
#### Defined in
[index.ts:1205](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1205)
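A minimal sketch of the call described above; the column names are hypothetical. Note that this only edits the schema, so the data-removal steps (compaction and cleanup) still have to be run afterwards, as the description says.

```ts
const con = await lancedb.connect('./.lancedb');
const table = await con.openTable('images');

// Drop a top-level column and a nested column.
// This is metadata-only; the bytes stay on disk until the table is
// compacted and old versions are cleaned up (see above).
await table.dropColumns(['embedding_v1', 'metadata.debug']);
```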
___ ___
@@ -438,9 +532,13 @@ Creates a filter query to find all rows matching the specified criteria
[`Query`](Query.md)\<`T`\> [`Query`](Query.md)\<`T`\>
#### Implementation of
[Table](../interfaces/Table.md).[filter](../interfaces/Table.md#filter)
#### Defined in #### Defined in
[index.ts:684](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L684) [index.ts:934](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L934)
___ ___
@@ -454,13 +552,13 @@ ___
#### Defined in #### Defined in
[index.ts:854](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L854) [index.ts:1176](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1176)
___ ___
### indexStats ### indexStats
▸ **indexStats**(`indexUuid`): `Promise`\<[`IndexStats`](../interfaces/IndexStats.md)\> ▸ **indexStats**(`indexName`): `Promise`\<[`IndexStats`](../interfaces/IndexStats.md)\>
Get statistics about an index. Get statistics about an index.
@@ -468,7 +566,7 @@ Get statistics about an index.
| Name | Type | | Name | Type |
| :------ | :------ | | :------ | :------ |
| `indexUuid` | `string` | | `indexName` | `string` |
#### Returns #### Returns
@@ -480,7 +578,7 @@ Get statistics about an index.
#### Defined in #### Defined in
[index.ts:845](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L845) [index.ts:1167](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1167)
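A small sketch of the renamed parameter in use; the index name is a placeholder, and the fields on the returned `IndexStats` object are not shown on this page, so the result is simply logged:

```ts
const con = await lancedb.connect('./.lancedb');
const table = await con.openTable('images');

// Indexes are now looked up by name rather than by UUID.
const stats = await table.indexStats('my_col_idx');
console.log(stats);
```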
___ ___
@@ -500,7 +598,57 @@ List the indices on this table.
#### Defined in #### Defined in
[index.ts:841](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L841) [index.ts:1163](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1163)
___
### mergeInsert
▸ **mergeInsert**(`on`, `data`, `args`): `Promise`\<`void`\>
Runs a "merge insert" operation on the table
This operation can add rows, update rows, and remove rows all in a single
transaction. It is a very generic tool that can be used to create
behaviors like "insert if not exists", "update or insert (i.e. upsert)",
or even replace a portion of existing data with new data (e.g. replace
all data where month="january")
The merge insert operation works by combining new data from a
**source table** with existing data in a **target table** by using a
join. There are three categories of records.
"Matched" records are records that exist in both the source table and
the target table. "Not matched" records exist only in the source table
(e.g. these are new data) "Not matched by source" records exist only
in the target table (this is old data)
The MergeInsertArgs can be used to customize what should happen for
each category of data.
Please note that the data may appear to be reordered as part of this
operation. This is because updated rows will be deleted from the
dataset and then reinserted at the end with the new values.
#### Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| `on` | `string` | a column to join on. This is how records from the source table and target table are matched. |
| `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | the new data to insert |
| `args` | [`MergeInsertArgs`](../interfaces/MergeInsertArgs.md) | parameters controlling how the operation should behave |
#### Returns
`Promise`\<`void`\>
#### Implementation of
[Table](../interfaces/Table.md).[mergeInsert](../interfaces/Table.md#mergeinsert)
#### Defined in
[index.ts:1065](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1065)
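A hedged sketch of an upsert built on `mergeInsert`. The join column and records are made up, and the `MergeInsertArgs` field names used below (`whenMatchedUpdateAll`, `whenNotMatchedInsertAll`) are assumptions based on the behavior described above, not signatures taken from this page; check the MergeInsertArgs interface for the exact options.

```ts
const con = await lancedb.connect('./.lancedb');
const table = await con.openTable('images');

const newData = [
  { id: 1, label: 'cat' },  // matched: already in the target table, gets updated
  { id: 42, label: 'dog' }, // not matched: only in the source, gets inserted
];

// Upsert: update matched rows, insert unmatched ones.
// The option names below are assumed; see MergeInsertArgs.
await table.mergeInsert('id', newData, {
  whenMatchedUpdateAll: true,
  whenNotMatchedInsertAll: true,
});
```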
___ ___
@@ -514,7 +662,7 @@ Insert records into this Table, replacing its contents.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `data` | `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Records to be inserted into the Table |
#### Returns #### Returns
@@ -528,7 +676,7 @@ The number of rows added to the table
#### Defined in #### Defined in
[index.ts:716](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L716) [index.ts:977](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L977)
___ ___
@@ -554,7 +702,7 @@ Creates a search query to find the nearest neighbors of the given search term
#### Defined in #### Defined in
[index.ts:676](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L676) [index.ts:926](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L926)
___ ___
@@ -580,4 +728,36 @@ Update rows in this table.
#### Defined in #### Defined in
[index.ts:771](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L771) [index.ts:1043](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1043)
___
### withMiddleware
▸ **withMiddleware**(`middleware`): [`Table`](../interfaces/Table.md)\<`T`\>
Instrument the behavior of this Table with middleware.
The middleware will be called in the order they are added.
Currently this functionality is only supported for remote tables.
#### Parameters
| Name | Type |
| :------ | :------ |
| `middleware` | `HttpMiddleware` |
#### Returns
[`Table`](../interfaces/Table.md)\<`T`\>
- this Table instrumented by the passed middleware
#### Implementation of
[Table](../interfaces/Table.md).[withMiddleware](../interfaces/Table.md#withmiddleware)
#### Defined in
[index.ts:1209](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1209)
View File
@@ -0,0 +1,82 @@
[vectordb](../README.md) / [Exports](../modules.md) / MakeArrowTableOptions
# Class: MakeArrowTableOptions
Options to control the makeArrowTable call.
## Table of contents
### Constructors
- [constructor](MakeArrowTableOptions.md#constructor)
### Properties
- [dictionaryEncodeStrings](MakeArrowTableOptions.md#dictionaryencodestrings)
- [embeddings](MakeArrowTableOptions.md#embeddings)
- [schema](MakeArrowTableOptions.md#schema)
- [vectorColumns](MakeArrowTableOptions.md#vectorcolumns)
## Constructors
### constructor
**new MakeArrowTableOptions**(`values?`)
#### Parameters
| Name | Type |
| :------ | :------ |
| `values?` | `Partial`\<[`MakeArrowTableOptions`](MakeArrowTableOptions.md)\> |
#### Defined in
[arrow.ts:98](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L98)
## Properties
### dictionaryEncodeStrings
**dictionaryEncodeStrings**: `boolean` = `false`
If true then string columns will be encoded with dictionary encoding.
Set this to true if your string columns tend to repeat the same values
often. For more precise control use the `schema` property to specify the
data type for individual columns.
If `schema` is provided then this property is ignored.
#### Defined in
[arrow.ts:96](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L96)
___
### embeddings
`Optional` **embeddings**: [`EmbeddingFunction`](../interfaces/EmbeddingFunction.md)\<`any`\>
#### Defined in
[arrow.ts:85](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L85)
___
### schema
`Optional` **schema**: `Schema`\<`any`\>
#### Defined in
[arrow.ts:63](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L63)
___
### vectorColumns
**vectorColumns**: `Record`\<`string`, `VectorColumnOptions`\>
#### Defined in
[arrow.ts:81](https://github.com/lancedb/lancedb/blob/92179835/node/src/arrow.ts#L81)
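As a usage sketch: these options configure `makeArrowTable` (the function this class exists for). The sample rows are made up, and the exact call shape of `makeArrowTable` (rows first, options second) is an assumption, since only the options class is documented on this page.

```ts
import { makeArrowTable, MakeArrowTableOptions } from 'vectordb';

// Hypothetical rows; the vector column is named "vector".
const data = [
  { id: 1, text: 'hello', vector: [0.1, 0.2, 0.3] },
  { id: 2, text: 'world', vector: [0.4, 0.5, 0.6] },
];

// Dictionary-encode string columns; leave everything else inferred.
const options = new MakeArrowTableOptions({ dictionaryEncodeStrings: true });

// Assumed call shape: rows first, options second.
const arrowTable = makeArrowTable(data, options);
console.log(arrowTable.numRows);
```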
View File
@@ -40,7 +40,7 @@ An embedding function that automatically creates vector representation for a giv
#### Defined in #### Defined in
[embedding/openai.ts:21](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/embedding/openai.ts#L21) [embedding/openai.ts:22](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L22)
## Properties ## Properties
@@ -50,17 +50,17 @@ An embedding function that automatically creates vector representation for a giv
#### Defined in #### Defined in
[embedding/openai.ts:19](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/embedding/openai.ts#L19) [embedding/openai.ts:20](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L20)
___ ___
### \_openai ### \_openai
`Private` `Readonly` **\_openai**: `any` `Private` `Readonly` **\_openai**: `OpenAI`
#### Defined in #### Defined in
[embedding/openai.ts:18](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/embedding/openai.ts#L18) [embedding/openai.ts:19](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L19)
___ ___
@@ -76,7 +76,7 @@ The name of the column that will be used as input for the Embedding Function.
#### Defined in #### Defined in
[embedding/openai.ts:50](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/embedding/openai.ts#L50) [embedding/openai.ts:56](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L56)
## Methods ## Methods
@@ -102,4 +102,4 @@ Creates a vector representation for the given values.
#### Defined in #### Defined in
[embedding/openai.ts:38](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/embedding/openai.ts#L38) [embedding/openai.ts:43](https://github.com/lancedb/lancedb/blob/92179835/node/src/embedding/openai.ts#L43)
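A sketch of wiring this embedding function into a table. The constructor arguments (source column, then API key) are assumptions inferred from the `sourceColumn` property documented here, not a signature shown on this page.

```ts
import * as lancedb from 'vectordb';

// Assumed constructor shape: (sourceColumn, openAIKey).
const embedder = new lancedb.OpenAIEmbeddingFunction('text', process.env.OPENAI_API_KEY ?? '');

const con = await lancedb.connect('./.lancedb');

// createTable accepts an embedding function (see the Connection interface below).
const table = await con.createTable(
  'docs',
  [{ text: 'hello world' }, { text: 'goodbye world' }],
  embedder
);

// The query string is embedded with the same function before searching.
const results = await table.search('greeting').limit(5).execute();
console.log(results);
```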
View File
@@ -19,6 +19,7 @@ A builder for nearest neighbor queries for LanceDB.
### Properties ### Properties
- [\_embeddings](Query.md#_embeddings) - [\_embeddings](Query.md#_embeddings)
- [\_fastSearch](Query.md#_fastsearch)
- [\_filter](Query.md#_filter) - [\_filter](Query.md#_filter)
- [\_limit](Query.md#_limit) - [\_limit](Query.md#_limit)
- [\_metricType](Query.md#_metrictype) - [\_metricType](Query.md#_metrictype)
@@ -34,6 +35,7 @@ A builder for nearest neighbor queries for LanceDB.
### Methods ### Methods
- [execute](Query.md#execute) - [execute](Query.md#execute)
- [fastSearch](Query.md#fastsearch)
- [filter](Query.md#filter) - [filter](Query.md#filter)
- [isElectron](Query.md#iselectron) - [isElectron](Query.md#iselectron)
- [limit](Query.md#limit) - [limit](Query.md#limit)
@@ -65,7 +67,7 @@ A builder for nearest neighbor queries for LanceDB.
#### Defined in #### Defined in
[query.ts:38](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L38) [query.ts:39](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L39)
## Properties ## Properties
@@ -75,7 +77,17 @@ A builder for nearest neighbor queries for LanceDB.
#### Defined in #### Defined in
[query.ts:36](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L36) [query.ts:37](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L37)
___
### \_fastSearch
`Private` **\_fastSearch**: `boolean`
#### Defined in
[query.ts:36](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L36)
___ ___
@@ -85,7 +97,7 @@ ___
#### Defined in #### Defined in
[query.ts:33](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L33) [query.ts:33](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L33)
___ ___
@@ -95,7 +107,7 @@ ___
#### Defined in #### Defined in
[query.ts:29](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L29) [query.ts:29](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L29)
___ ___
@@ -105,7 +117,7 @@ ___
#### Defined in #### Defined in
[query.ts:34](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L34) [query.ts:34](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L34)
___ ___
@@ -115,7 +127,7 @@ ___
#### Defined in #### Defined in
[query.ts:31](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L31) [query.ts:31](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L31)
___ ___
@@ -125,7 +137,7 @@ ___
#### Defined in #### Defined in
[query.ts:35](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L35) [query.ts:35](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L35)
___ ___
@@ -135,7 +147,7 @@ ___
#### Defined in #### Defined in
[query.ts:26](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L26) [query.ts:26](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L26)
___ ___
@@ -145,7 +157,7 @@ ___
#### Defined in #### Defined in
[query.ts:28](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L28) [query.ts:28](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L28)
___ ___
@@ -155,7 +167,7 @@ ___
#### Defined in #### Defined in
[query.ts:30](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L30) [query.ts:30](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L30)
___ ___
@@ -165,7 +177,7 @@ ___
#### Defined in #### Defined in
[query.ts:32](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L32) [query.ts:32](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L32)
___ ___
@@ -175,7 +187,7 @@ ___
#### Defined in #### Defined in
[query.ts:27](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L27) [query.ts:27](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L27)
___ ___
@@ -201,7 +213,7 @@ A filter statement to be applied to this query.
#### Defined in #### Defined in
[query.ts:87](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L87) [query.ts:90](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L90)
## Methods ## Methods
@@ -223,7 +235,30 @@ Execute the query and return the results as an Array of Objects
#### Defined in #### Defined in
[query.ts:115](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L115) [query.ts:127](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L127)
___
### fastSearch
**fastSearch**(`value`): [`Query`](Query.md)\<`T`\>
Skip searching un-indexed data. This can make search faster, but will miss
any data that is not yet indexed.
#### Parameters
| Name | Type |
| :------ | :------ |
| `value` | `boolean` |
#### Returns
[`Query`](Query.md)\<`T`\>
#### Defined in
[query.ts:119](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L119)
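A quick sketch combining `fastSearch` with the other builder methods on this page; the vector and table name are placeholders:

```ts
const con = await lancedb.connect('./.lancedb');
const table = await con.openTable('images');

// Only consult indexed data: faster, but rows not yet covered by an
// index are silently skipped.
const results = await table
  .search([1.0, 2.0])
  .fastSearch(true)
  .limit(10) // 10 is also the default limit
  .execute();
```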
___ ___
@@ -245,7 +280,7 @@ A filter statement to be applied to this query.
#### Defined in #### Defined in
[query.ts:82](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L82) [query.ts:85](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L85)
___ ___
@@ -259,7 +294,7 @@ ___
#### Defined in #### Defined in
[query.ts:142](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L142) [query.ts:155](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L155)
___ ___
@@ -268,6 +303,7 @@ ___
**limit**(`value`): [`Query`](Query.md)\<`T`\> **limit**(`value`): [`Query`](Query.md)\<`T`\>
Sets the number of results that will be returned Sets the number of results that will be returned
default value is 10
#### Parameters #### Parameters
@@ -281,7 +317,7 @@ Sets the number of results that will be returned
#### Defined in #### Defined in
[query.ts:55](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L55) [query.ts:58](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L58)
___ ___
@@ -307,7 +343,7 @@ MetricType for the different options
#### Defined in #### Defined in
[query.ts:102](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L102) [query.ts:105](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L105)
___ ___
@@ -329,7 +365,7 @@ The number of probes used. A higher number makes search more accurate but also s
#### Defined in #### Defined in
[query.ts:73](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L73) [query.ts:76](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L76)
___ ___
@@ -349,7 +385,7 @@ ___
#### Defined in #### Defined in
[query.ts:107](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L107) [query.ts:110](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L110)
___ ___
@@ -371,7 +407,7 @@ Refine the results by reading extra elements and re-ranking them in memory.
#### Defined in #### Defined in
[query.ts:64](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L64) [query.ts:67](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L67)
___ ___
@@ -393,4 +429,4 @@ Return only the specified columns.
#### Defined in #### Defined in
[query.ts:93](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/query.ts#L93) [query.ts:96](https://github.com/lancedb/lancedb/blob/92179835/node/src/query.ts#L96)
View File
@@ -0,0 +1,52 @@
[vectordb](../README.md) / [Exports](../modules.md) / IndexStatus
# Enumeration: IndexStatus
## Table of contents
### Enumeration Members
- [Done](IndexStatus.md#done)
- [Failed](IndexStatus.md#failed)
- [Indexing](IndexStatus.md#indexing)
- [Pending](IndexStatus.md#pending)
## Enumeration Members
### Done
• **Done** = ``"done"``
#### Defined in
[index.ts:713](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L713)
___
### Failed
• **Failed** = ``"failed"``
#### Defined in
[index.ts:714](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L714)
___
### Indexing
• **Indexing** = ``"indexing"``
#### Defined in
[index.ts:712](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L712)
___
### Pending
• **Pending** = ``"pending"``
#### Defined in
[index.ts:711](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L711)
View File
@@ -22,7 +22,7 @@ Cosine distance
#### Defined in #### Defined in
[index.ts:1041](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1041) [index.ts:1381](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1381)
___ ___
@@ -34,7 +34,7 @@ Dot product
#### Defined in #### Defined in
[index.ts:1046](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1046) [index.ts:1386](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1386)
___ ___
@@ -46,4 +46,4 @@ Euclidean distance
#### Defined in #### Defined in
[index.ts:1036](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1036) [index.ts:1376](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1376)
View File
@@ -22,7 +22,7 @@ Append new data to the table.
#### Defined in #### Defined in
[index.ts:1007](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1007) [index.ts:1347](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1347)
___ ___
@@ -34,7 +34,7 @@ Create a new [Table](../interfaces/Table.md).
#### Defined in #### Defined in
[index.ts:1003](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1003) [index.ts:1343](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1343)
___ ___
@@ -46,4 +46,4 @@ Overwrite the existing [Table](../interfaces/Table.md) if presented.
#### Defined in #### Defined in
[index.ts:1005](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L1005) [index.ts:1345](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1345)
View File
@@ -18,7 +18,7 @@
#### Defined in #### Defined in
[index.ts:54](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L54) [index.ts:68](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L68)
___ ___
@@ -28,7 +28,7 @@ ___
#### Defined in #### Defined in
[index.ts:56](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L56) [index.ts:70](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L70)
___ ___
@@ -38,4 +38,4 @@ ___
#### Defined in #### Defined in
[index.ts:58](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L58) [index.ts:72](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L72)
View File
@@ -19,7 +19,7 @@ The number of bytes removed from disk.
#### Defined in #### Defined in
[index.ts:878](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L878) [index.ts:1218](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1218)
___ ___
@@ -31,4 +31,4 @@ The number of old table versions removed.
#### Defined in #### Defined in
[index.ts:882](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L882) [index.ts:1222](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1222)
View File
@@ -0,0 +1,53 @@
[vectordb](../README.md) / [Exports](../modules.md) / ColumnAlteration
# Interface: ColumnAlteration
A definition of a column alteration. The alteration changes the column at
`path` to have the new name given by `rename` and to be nullable if
`nullable` is true. At least one of `rename` or `nullable` must be provided.
## Table of contents
### Properties
- [nullable](ColumnAlteration.md#nullable)
- [path](ColumnAlteration.md#path)
- [rename](ColumnAlteration.md#rename)
## Properties
### nullable
`Optional` **nullable**: `boolean`
Set the new nullability. Note that a nullable column cannot be made non-nullable.
#### Defined in
[index.ts:638](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L638)
___
### path
**path**: `string`
The path to the column to alter. This is a dot-separated path to the column.
If it is a top-level column then it is just the name of the column. If it is
a nested column then it is the path to the column, e.g. "a.b.c" for a column
`c` nested inside a column `b` nested inside a column `a`.
#### Defined in
[index.ts:633](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L633)
___
### rename
`Optional` **rename**: `string`
#### Defined in
[index.ts:634](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L634)
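A hedged sketch of applying these alterations. The `alterColumns` method is not shown on this page and is assumed here, as are the column names; only the shape of each `ColumnAlteration` object comes from the properties above.

```ts
const con = await lancedb.connect('./.lancedb');
const table = await con.openTable('images');

// Rename a nested column and make a top-level column nullable.
// `alterColumns` is assumed to take an array of ColumnAlteration.
await table.alterColumns([
  { path: 'metadata.labels', rename: 'tags' },
  { path: 'caption', nullable: true },
]);
```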
View File
@@ -22,7 +22,7 @@ fragments added.
#### Defined in #### Defined in
[index.ts:933](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L933) [index.ts:1273](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1273)
___ ___
@@ -35,7 +35,7 @@ file.
#### Defined in #### Defined in
[index.ts:928](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L928) [index.ts:1268](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1268)
___ ___
@@ -47,7 +47,7 @@ The number of new fragments that were created.
#### Defined in #### Defined in
[index.ts:923](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L923) [index.ts:1263](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1263)
___ ___
@@ -59,4 +59,4 @@ The number of fragments that were removed.
#### Defined in #### Defined in
[index.ts:919](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L919) [index.ts:1259](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1259)
View File
@@ -24,7 +24,7 @@ Default is true.
#### Defined in #### Defined in
[index.ts:901](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L901) [index.ts:1241](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1241)
___ ___
@@ -38,7 +38,7 @@ the deleted rows. Default is 10%.
#### Defined in #### Defined in
[index.ts:907](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L907) [index.ts:1247](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1247)
___ ___
@@ -46,11 +46,11 @@ ___
`Optional` **maxRowsPerGroup**: `number` `Optional` **maxRowsPerGroup**: `number`
The maximum number of rows per group. Defaults to 1024. The maximum number of rows per group. Defaults to 1024.
#### Defined in #### Defined in
[index.ts:895](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L895) [index.ts:1235](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1235)
___ ___
@@ -63,7 +63,7 @@ the number of cores on the machine.
#### Defined in #### Defined in
[index.ts:912](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L912) [index.ts:1252](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1252)
___ ___
@@ -77,4 +77,4 @@ Defaults to 1024 * 1024.
#### Defined in #### Defined in
[index.ts:891](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L891) [index.ts:1231](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L1231)
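A sketch of passing these options to a compaction call. Only `maxRowsPerGroup` is named on this page, so the method name (`compactFiles`) and the other option key are assumptions to be checked against the Table interface.

```ts
const con = await lancedb.connect('./.lancedb');
const table = await con.openTable('images');

// Compact small data files into larger ones.
// maxRowsPerGroup is documented above; the other key is assumed.
const metrics = await table.compactFiles({
  maxRowsPerGroup: 1024,
  numThreads: 4, // assumed name for the "number of threads" option above
});
console.log(metrics);
```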
View File
@@ -22,6 +22,7 @@ Connection could be local against filesystem or remote against a server.
- [dropTable](Connection.md#droptable) - [dropTable](Connection.md#droptable)
- [openTable](Connection.md#opentable) - [openTable](Connection.md#opentable)
- [tableNames](Connection.md#tablenames) - [tableNames](Connection.md#tablenames)
- [withMiddleware](Connection.md#withmiddleware)
## Properties ## Properties
@@ -31,7 +32,7 @@ Connection could be local against filesystem or remote against a server.
#### Defined in #### Defined in
[index.ts:183](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L183) [index.ts:261](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L261)
## Methods ## Methods
@@ -59,7 +60,7 @@ Creates a new Table, optionally initializing it with new data.
#### Defined in #### Defined in
[index.ts:207](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L207) [index.ts:285](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L285)
**createTable**(`name`, `data`): `Promise`\<[`Table`](Table.md)\<`number`[]\>\> **createTable**(`name`, `data`): `Promise`\<[`Table`](Table.md)\<`number`[]\>\>
@@ -70,7 +71,7 @@ Creates a new Table and initialize it with new data.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `name` | `string` | The name of the table. | | `name` | `string` | The name of the table. |
| `data` | `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table |
#### Returns #### Returns
@@ -78,7 +79,7 @@ Creates a new Table and initialize it with new data.
#### Defined in #### Defined in
[index.ts:221](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L221) [index.ts:299](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L299)
**createTable**(`name`, `data`, `options`): `Promise`\<[`Table`](Table.md)\<`number`[]\>\> **createTable**(`name`, `data`, `options`): `Promise`\<[`Table`](Table.md)\<`number`[]\>\>
@@ -89,7 +90,7 @@ Creates a new Table and initialize it with new data.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `name` | `string` | The name of the table. | | `name` | `string` | The name of the table. |
| `data` | `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table |
| `options` | [`WriteOptions`](WriteOptions.md) | The write options to use when creating the table. | | `options` | [`WriteOptions`](WriteOptions.md) | The write options to use when creating the table. |
#### Returns #### Returns
@@ -98,7 +99,7 @@ Creates a new Table and initialize it with new data.
#### Defined in #### Defined in
[index.ts:233](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L233) [index.ts:311](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L311)
**createTable**\<`T`\>(`name`, `data`, `embeddings`): `Promise`\<[`Table`](Table.md)\<`T`\>\> **createTable**\<`T`\>(`name`, `data`, `embeddings`): `Promise`\<[`Table`](Table.md)\<`T`\>\>
@@ -115,7 +116,7 @@ Creates a new Table and initialize it with new data.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `name` | `string` | The name of the table. | | `name` | `string` | The name of the table. |
| `data` | `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table |
| `embeddings` | [`EmbeddingFunction`](EmbeddingFunction.md)\<`T`\> | An embedding function to use on this table | | `embeddings` | [`EmbeddingFunction`](EmbeddingFunction.md)\<`T`\> | An embedding function to use on this table |
#### Returns #### Returns
@@ -124,7 +125,7 @@ Creates a new Table and initialize it with new data.
#### Defined in #### Defined in
[index.ts:246](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L246) [index.ts:324](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L324)
**createTable**\<`T`\>(`name`, `data`, `embeddings`, `options`): `Promise`\<[`Table`](Table.md)\<`T`\>\> **createTable**\<`T`\>(`name`, `data`, `embeddings`, `options`): `Promise`\<[`Table`](Table.md)\<`T`\>\>
@@ -141,7 +142,7 @@ Creates a new Table and initialize it with new data.
| Name | Type | Description | | Name | Type | Description |
| :------ | :------ | :------ | | :------ | :------ | :------ |
| `name` | `string` | The name of the table. | | `name` | `string` | The name of the table. |
| `data` | `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table | | `data` | `Table`\<`any`\> \| `Record`\<`string`, `unknown`\>[] | Non-empty Array of Records to be inserted into the table |
| `embeddings` | [`EmbeddingFunction`](EmbeddingFunction.md)\<`T`\> | An embedding function to use on this table | | `embeddings` | [`EmbeddingFunction`](EmbeddingFunction.md)\<`T`\> | An embedding function to use on this table |
| `options` | [`WriteOptions`](WriteOptions.md) | The write options to use when creating the table. | | `options` | [`WriteOptions`](WriteOptions.md) | The write options to use when creating the table. |
@@ -151,7 +152,7 @@ Creates a new Table and initialize it with new data.
#### Defined in #### Defined in
[index.ts:259](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L259) [index.ts:337](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L337)
___ ___
@@ -173,7 +174,7 @@ Drop an existing table.
#### Defined in #### Defined in
[index.ts:270](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L270) [index.ts:348](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L348)
___ ___
@@ -202,7 +203,7 @@ Open a table in the database.
#### Defined in #### Defined in
[index.ts:193](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L193) [index.ts:271](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L271)
___ ___
@@ -216,4 +217,32 @@ ___
#### Defined in #### Defined in
[index.ts:185](https://github.com/lancedb/lancedb/blob/c89d5e6/node/src/index.ts#L185) [index.ts:263](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L263)
___
### withMiddleware
**withMiddleware**(`middleware`): [`Connection`](Connection.md)
Instrument the behavior of this Connection with middleware.
The middleware will be called in the order they are added.
Currently this functionality is only supported for remote Connections.
#### Parameters
| Name | Type |
| :------ | :------ |
| `middleware` | `HttpMiddleware` |
#### Returns
[`Connection`](Connection.md)
- this Connection instrumented by the passed middleware
#### Defined in
[index.ts:360](https://github.com/lancedb/lancedb/blob/92179835/node/src/index.ts#L360)
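A heavily hedged sketch of instrumenting a remote connection. Only the `withMiddleware(middleware)` call itself is documented above; the `HttpMiddleware` shape (an `onRemoteRequest`-style hook), the connection URI, and the omitted credentials are all assumptions.

```ts
import * as lancedb from 'vectordb';

// A hypothetical logging middleware. The hook name and its signature are
// assumptions; check the HttpMiddleware type for the real interface.
const loggingMiddleware: any = {
  async onRemoteRequest(req: any, next: (r: any) => Promise<any>) {
    console.log('lancedb request:', req);
    const res = await next(req);
    console.log('lancedb response:', res);
    return res;
  },
};

// Middleware is only supported for remote connections; the URI below is a
// placeholder and real connections also need credentials (omitted here).
const con = await lancedb.connect('db://my-bucket/my-db');
const instrumented = con.withMiddleware(loggingMiddleware);
const table = await instrumented.openTable('images');
```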
Some files were not shown because too many files have changed in this diff.