Compare commits


73 Commits

Author SHA1 Message Date
Lance Release
291ed41c3e Bump version: 0.8.1-beta.0 → 0.8.1 2024-05-30 01:00:21 +00:00
Lance Release
fdda7b1a76 Bump version: 0.8.0 → 0.8.1-beta.0 2024-05-30 01:00:21 +00:00
Weston Pace
eb2cbedf19 feat: upgrade lance to 0.11.1 (#1338) 2024-05-29 16:28:09 -07:00
Cory Grinstead
bc139000bd feat(nodejs): add compatibility across arrow versions (#1337)
While adding some more docs & examples for the new JS SDK, I ran across
a few compatibility issues when using different Arrow versions. This
should fix those issues.
2024-05-29 17:36:34 -05:00
Cory Grinstead
dbea3a7544 feat: js embedding registry (#1308)
---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-05-29 13:12:19 -05:00
zhongpu
3bb7c546d7 fix: the bug of async connection context manager (#1333)
- add `return` for `__enter__`

The buggy code didn't return the object, so the context manager always
yields None:

```python
with await lancedb.connect_async("./.lancedb") as db:
    # db is always None
    ...
```

(BTW, why not design this as an async context manager? See the sketch after this list.)

- add a unit test for Async connection context manager

- update return type of `AsyncConnection.open_table` to `AsyncTable`

Although the type annotation doesn't affect functionality, it is helpful
for IDEs.
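
For illustration, here is a minimal sketch of what async context-manager support could look like. This is hypothetical, not the current API; `AsyncConnection` here is a stand-in class:

```python
import asyncio


# Hypothetical sketch only; AsyncConnection does not currently implement this.
class AsyncConnection:
    def close(self) -> None:
        print("connection closed")

    async def __aenter__(self) -> "AsyncConnection":
        # Returning self is the crucial step the buggy __enter__ omitted.
        return self

    async def __aexit__(self, exc_type, exc, tb) -> None:
        self.close()


async def main() -> None:
    async with AsyncConnection() as db:
        assert db is not None  # unlike the buggy sync version


asyncio.run(main())
```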
2024-05-29 09:33:32 -07:00
Cory Grinstead
2f4b70ecfe chore: clippy warnings inside java bindings (#1330)
This was causing unrelated PRs to fail.
https://github.com/lancedb/lancedb/actions/runs/9274579178/job/25517248069?pr=1308
2024-05-28 14:05:07 -05:00
Philip Meier
1ad1c0820d chore: replace semver dependency with packaging (#1311)
Fixes #1296 per title. See
https://github.com/lancedb/lancedb/pull/1298#discussion_r1603931457 Cc
@wjones127

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-05-28 10:05:16 -07:00
LuQQiu
db712b0f99 feat(java): add table names java api (#1279)
Add lancedb-jni and table names API

---------

Co-authored-by: Lei Xu <eddyxu@gmail.com>
2024-05-24 11:49:11 -07:00
BubbleCal
fd1a5ce788 feat: support IVF_HNSW_PQ (#1314)
This also simplifies the index-creation code with a macro.

---------

Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-05-24 18:32:00 +08:00
QianZhu
def087fc85 fix: parse index_stats for scalar index (#1319)
Parse the index stats for a scalar index; they are shaped differently
from the index stats for a vector index.
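
As a rough illustration, a sketch of the branching the parsing needs (the field names `index_type` and `distance_type` are assumed from the companion #1234 change, not quoted from this fix):

```python
def parse_index_stats(raw: dict) -> dict:
    """Normalize index stats; assumes index_type/distance_type field names."""
    stats = {"index_type": raw["index_type"]}
    # distance_type is reported only for vector indices; scalar indices omit it.
    if "distance_type" in raw:
        stats["distance_type"] = raw["distance_type"]
    return stats


print(parse_index_stats({"index_type": "IVF_PQ", "distance_type": "l2"}))  # vector
print(parse_index_stats({"index_type": "BTREE"}))                          # scalar
```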
2024-05-23 13:10:46 -07:00
Lance Release
43f920182a Bump version: 0.8.0-beta.0 → 0.8.0 2024-05-23 17:32:36 +00:00
Lance Release
718963d1fb Bump version: 0.7.0 → 0.8.0-beta.0 2024-05-23 17:32:36 +00:00
Weston Pace
e4dac751e7 chore: remove working-directory from pypi upload step (#1322)
The wheels are built to `WORKDIR/target/wheels` and the step was
configured to look for them at `WORKDIR/python/target/wheels`.
2024-05-23 10:31:32 -07:00
Lance Release
aae02953eb Updating package-lock.json 2024-05-23 16:30:46 +00:00
Lance Release
1d9f76bdda Bump version: 0.5.0-beta.0 → 0.5.0 2024-05-23 16:30:27 +00:00
Lance Release
affdfc4d48 Bump version: 0.4.20 → 0.5.0-beta.0 2024-05-23 16:30:26 +00:00
Lance Release
41b77f5e25 Bump version: 0.7.0-beta.0 → 0.7.0 2024-05-23 16:30:16 +00:00
Lance Release
eb8b3b8c54 Bump version: 0.6.13 → 0.7.0-beta.0 2024-05-23 16:30:16 +00:00
Weston Pace
f69c3e0595 chore: sync bumpversion.toml with actual version (#1321)
Attempting to create a new minor version failed with:

```
   Specified version (0.4.21-beta.0) does not match last tagged version (0.4.20) 
```

It seems the last release commit for rust/node was made without the new
process and did not adjust bumpversion.toml correctly (or maybe
bumpversion.toml did not exist at that time).
2024-05-23 09:29:40 -07:00
Weston Pace
8511edaaab fix: get the last stable release before we've added a new tag (#1320)
I tried to do a stable release and it failed with:

```
 Traceback (most recent call last):
  File "/home/runner/work/lancedb/lancedb/ci/check_breaking_changes.py", line 20, in <module>
    commits = repo.compare(args.base, args.head).commits
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/github/Repository.py", line 1133, in compare
    headers, data = self._requester.requestJsonAndCheck("GET", f"{self.url}/compare/{base}...{head}", params)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/github/Requester.py", line 548, in requestJsonAndCheck
    return self.__check(*self.requestJson(verb, url, parameters, headers, input, self.__customConnection(url)))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/github/Requester.py", line 609, in __check
    raise self.createException(status, responseHeaders, data)
github.GithubException.UnknownObjectException: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/commits/commits#compare-two-commits"}
```

I believe the problem is that we calculate `LAST_STABLE_RELEASE` after
running bump version, so the newly created tag is in the list of tags we
search; being the most recent, it gets picked up as `LAST_STABLE_RELEASE`.
The call to GitHub then fails because we haven't pushed the tag yet. This
changes the logic to grab `LAST_STABLE_RELEASE` before we create any new
tags.
2024-05-23 09:11:43 -07:00
Will Jones
657aba3c05 ci: pin aws sdk versions (#1318) 2024-05-22 08:26:09 -07:00
Rob Meng
2e197ef387 feat: upgrade lance to 0.11.0 (#1317)
upgrade lance and make fixes for the upgrade
2024-05-21 18:53:19 -04:00
Weston Pace
4f512af024 feat: add the optimize function to nodejs and async python (#1257)
The optimize function is pretty crucial for getting good performance
when building a large-scale dataset, but it was only exposed in Rust
(many sync Python users are probably doing this via `to_lance` today).

This PR adds the optimize function to nodejs and to python.

I left the function marked experimental because there will likely be
changes to optimization (e.g. if we add features like "optimize on
write"). I also only exposed the `cleanup_older_than` configuration
parameter, since it is very commonly used; the rest have sensible
defaults, and we don't yet know when we would recommend different
values anyway.
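
A minimal sketch of the async Python entry point, assuming `optimize` accepts `cleanup_older_than` as a `timedelta` (per the description above; treat the table name as illustrative):

```python
import asyncio
from datetime import timedelta

import lancedb


async def main() -> None:
    db = await lancedb.connect_async("./.lancedb")
    tbl = await db.open_table("my_table")
    # Compact small fragments and prune versions older than one week.
    await tbl.optimize(cleanup_older_than=timedelta(days=7))


asyncio.run(main())
```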
2024-05-20 07:09:31 -07:00
Will Jones
5349e8b1db ci: make preview releases (#1302)
This PR changes the release process. Some parts are more complex, and
other parts I've simplified.

## Simplifications

* Combined `Create Release Commit` and `Create Python Release Commit`
into a single workflow. By default, it does a release of all packages,
but you can still choose to make just a Python or just Node/Rust release
through the arguments. This will make it rarer that we create a Node
release but forget about Python or vice-versa.
* Releases are automatically generated once a tag is pushed. This
eliminates the manual step of creating the release.
* Release notes are automatically generated and changes are categorized
based on the PR labels.
* Removed the use of `LANCEDB_RELEASE_TOKEN` in favor of just using
`GITHUB_TOKEN` where it wasn't necessary. In the one place it is
necessary, I left a comment as to why it is.
* Reused the version in `python/Cargo.toml` so we don't have two
different versions in Python LanceDB.

## New changes

* We now can create `preview` / `beta` releases. By default `Create
Release Commit` will create a preview release, but you can select a
"stable" release type and it will create a full stable release.
  * For Python, pre-releases go to fury.io instead of PyPI
* `bump2version` was deprecated, so upgraded to `bump-my-version`. This
also seems to better support semantic versioning with pre-releases.
* `ci` changes will now be shown in the changelog, allowing changes like
this to be visible to users. `chore` is still hidden.

## Versioning

**NOTE**: unlike the lance repo right now, the version in main is the
last one released, including beta versions.

---------

Co-authored-by: Lance Release <lance-dev@lancedb.com>
Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-05-17 11:24:38 -07:00
BubbleCal
5e01810438 feat: support IVF_HNSW_SQ (#1284)
Signed-off-by: BubbleCal <bubble-cal@outlook.com>
2024-05-16 14:28:06 +08:00
Cory Grinstead
6eaaee59f8 fix: remove accidental console.log (#1307)
I accidentally left a console.log when doing
https://github.com/lancedb/lancedb/pull/1290.
2024-05-15 16:07:46 -05:00
Cory Grinstead
055efdcdb6 refactor(nodejs): use biomejs instead of eslint & prettier (#1304)
I've been noticing a lot of friction with the current toolchain for
`/nodejs`, particularly with the usage of eslint and prettier.

[Biome](https://biomejs.dev/) is an all-in-one formatter & linter that
replaces two separate tools which can potentially clash with
one another.

I've been using it in the
[nodejs-polars](https://github.com/pola-rs/nodejs-polars) repo for quite
some time & have found it much more pleasant to work with.

---

One other small change included in this PR:

Use [ts-jest](https://www.npmjs.com/package/ts-jest) so we can run our
tests without having to rebuild the TypeScript code first.
2024-05-14 11:11:18 -05:00
Cory Grinstead
bc582bb702 fix(nodejs): add better error handling when missing embedding functions (#1290)
Note:
running the default lint command `npm run lint -- --fix` seems to have
made a lot of unrelated changes.
2024-05-14 08:43:39 -05:00
Will Jones
df9c41f342 ci: write down breaking change policy (#1294)
* Enforce conventional commit PR titles
* Add automatic labelling of PRs
* Write down breaking change policy.

Left for another PR:
* Validation of breaking change version bumps. (This is complicated due
to separate releases for Python and the other packages.)
2024-05-13 10:25:55 -07:00
Raghav Dixit
0bd6ac945e Documentation: LangChain doc bug fix (#1301)
nav bar update
2024-05-13 20:56:34 +05:30
Raghav Dixit
c9d5475333 Documentation: Langchain Integration (#1297)
Integration doc update
2024-05-13 10:19:33 -04:00
asmith26
3850d5fb35 Add ollama embeddings function (#1263)
Following the docs
[here](https://lancedb.github.io/lancedb/python/python/#lancedb.embeddings.openai.OpenAIEmbeddings)
I've been trying to use Ollama embeddings via the OpenAI API interface,
but unfortunately I couldn't get it to work (possibly related to
https://github.com/ollama/ollama/issues/2416).

Given the popularity of Ollama, I thought it could be helpful to have a
dedicated Ollama embedding function in LanceDB.

Any thoughts on this or the code are very welcome. Thanks!
2024-05-13 13:09:19 +05:30
Lance Release
b37c58342e [python] Bump version: 0.6.12 → 0.6.13 2024-05-10 16:15:13 +00:00
Lance Release
a06e64f22d Updating package-lock.json 2024-05-09 22:46:19 +00:00
Lance Release
e983198f0e Updating package-lock.json 2024-05-09 22:12:17 +00:00
Lance Release
76e7b4abf8 Updating package-lock.json 2024-05-09 21:14:47 +00:00
Lance Release
5f6eb4651e Bump version: 0.4.19 → 0.4.20 2024-05-09 21:14:30 +00:00
Bert
805c78bb20 chore: bump lance to v0.10.18 (#1287)
https://github.com/lancedb/lance/releases/tag/v0.10.18
2024-05-09 17:06:26 -03:00
QianZhu
4746281b21 fix rename_table api and cache pop (#1283) 2024-05-08 13:41:18 -07:00
Aman Kishore
7b3b6bdccd Remove semver strict dependency (#1253) 2024-05-08 11:16:15 -07:00
Ryan Green
37e1124c0f chore: upgrade lance to 0.10.17 (#1280) 2024-05-08 09:56:48 -02:30
Lance Release
93f037ee41 Updating package-lock.json 2024-05-07 20:50:44 +00:00
Lance Release
e4fc06825a Updating package-lock.json 2024-05-07 20:09:25 +00:00
Lance Release
fe89a373a2 [python] Bump version: 0.6.11 → 0.6.12 2024-05-07 19:27:17 +00:00
Lance Release
3d3915edef Updating package-lock.json 2024-05-07 19:04:42 +00:00
Lance Release
e2e8b6aee4 Bump version: 0.4.18 → 0.4.19 2024-05-07 19:04:31 +00:00
Will Jones
12dbca5248 ci: better test for test_syntax (#1278)
The syntax error was fixed in tantivy 0.22.0, so I changed the test case
to something more wrong.
2024-05-07 11:52:39 -07:00
Will Jones
a6babfa651 fix(node/vectordb): parse value not key (#1276) 2024-05-07 10:16:05 -07:00
Will Jones
75ede86fab fix: clearer error that FTS is not supported on object stores (#1273)
Closes #1272
2024-05-07 10:15:53 -07:00
Will Jones
becd649130 docs: add tip about using allow_http on local servers (#1277)
Based on user question
https://discord.com/channels/1030247538198061086/1197630499926057021/1237350091191222293
2024-05-07 10:15:26 -07:00
Cory Grinstead
9d2fb7d602 feat: rust embedding registry (#1259)
Todo:

- [x] add proper documentation
- [x] add unit tests
- [x] better handling of the registry**1
- [x] allow user defined registry**2

**1 The Python implementation just uses a global registry, which makes
things a bit easier. I attached it to the db/connection to prevent
future conflicts when running multiple connections/databases. I mostly
modeled the registry & pattern off of datafusion's
[FunctionRegistry](https://docs.rs/datafusion/latest/datafusion/execution/trait.FunctionRegistry.html).

**2 Ideally, the user should be able to provide their own registry
entirely, but currently it just uses an in-memory registry by default
(_which isn't configurable_).

`rust/lancedb/examples/embedding_registry.rs` provides a thorough
example of expected usage.

---

Some additional notes:

This does not provide any of the out-of-the-box functionality that the
Python registry does.

_i.e., there are no built-in embedding functions._

You can think of this as the groundwork for adding those built-in
functions, so while this is part of
https://github.com/lancedb/lancedb/issues/994, it does not yet offer
feature parity.
2024-05-06 18:39:07 -05:00
Ben Poulson
fdb5d6fdf1 Update README.md to correct LangChain URL (#1262)
The URL in the README for LangChain is currently 404ing. Here's the new URL.
2024-05-06 11:50:34 +05:30
Ayush Chaurasia
2f13fa225f Chore (python): Better retry loop logging when embedding api fails (#1267)
https://github.com/lancedb/lancedb/issues/1266#event-12703166915

This happens because the OpenAI API errors out with None values. The
current log level didn't actually print the message on screen. Changed
the log level to warning, which better suits this case.

Also, the retry loop can be disabled by setting `max_retries=0` (I'm not
sure whether we should make this the default behaviour, since hitting API
rate limits is quite common when ingesting a large corpus):

```
func = get_registry().get("openai").create(max_retries=0)
```
2024-05-06 11:49:11 +05:30
Nehil Jain
e933de003d fix: Docs for embed_func fixed in youtube transcript search notebook (#1269)
Fixes issue https://github.com/lancedb/lancedb/issues/1268
2024-05-06 11:48:25 +05:30
Ikko Eltociear Ashimine
05fd387425 docs: update README.md (#1270)
retrevial -> retrieval
2024-05-06 11:46:48 +05:30
Will Jones
82a1da554c fix(python): return ValueError if passed unknown args to connect() (#1265)
It's confusing to users that keyword arguments from the async API like
`storage_options` are accepted by `connect()`, but don't do anything. We
should error if unknown arguments are passed instead.
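
A sketch of the new behaviour (the exact exception text is assumed, not quoted from the PR):

```python
import lancedb

# storage_options is an async-API keyword; the sync connect() used to accept
# and silently ignore it. After this change it raises ValueError instead.
try:
    db = lancedb.connect("./.lancedb", storage_options={"allow_http": "true"})
except ValueError as err:
    print("rejected unknown argument:", err)
```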
2024-05-03 17:00:08 -07:00
Rohit Rastogi
a7c0d80b9e Implement convertors to and from Polars DataFrames in Rust SDK using convertors based on C FFI #1099 (#1260)
https://github.com/lancedb/lancedb/issues/1099

Took the same general approach from:
https://github.com/lancedb/lancedb/pull/1235. Instead of using
high-level convertors implemented in polars-arrow (with the arrow-rs
feature flag, which adds a dependency on arrow-rs), I used convertors
based on the C FFI to avoid dependency conflicts.

---------

Co-authored-by: Rohit Rastogi <rohitrastogi@Rohits-MacBook-Pro.local>
Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-05-03 16:15:14 -07:00
Cory Grinstead
71323a064a chore(nodejs): update docs on "table.ts" (#1255)
closes https://github.com/lancedb/lancedb/issues/1007
2024-05-01 23:00:22 -05:00
asmith26
df48454b70 Update embedding_functions.md (#1250)
`clip.ndims` seems to be a function (I installed with `pip install
open_clip_torch`).
2024-05-01 09:33:42 -07:00
Lance Release
6603414885 Updating package-lock.json 2024-04-30 20:57:12 +00:00
Lance Release
c256f6c502 Updating package-lock.json 2024-04-30 19:58:49 +00:00
Lance Release
cc03f90379 Updating package-lock.json 2024-04-30 19:21:48 +00:00
Lance Release
975da09b02 Bump version: 0.4.17 → 0.4.18 2024-04-30 19:21:37 +00:00
Cory Grinstead
c32e17b497 chore(nodejs): remove "optionalDependencies" (#1252)
closes #1248 

The binding-specific `optionalDependencies` are added automatically as
part of the `prepublishOnly` hook, and they are not supposed to be
committed to `package.json`.

---

npm lifecycle scripts:
https://docs.npmjs.com/cli/v7/using-npm/scripts#life-cycle-scripts
2024-04-30 10:51:10 -05:00
Ryan Green
0528abdf97 fix: fix path on remote create_table and check for error response (#1244) 2024-04-28 11:33:05 -02:30
Lance Release
1090c311e8 [python] Bump version: 0.6.10 → 0.6.11 2024-04-27 03:54:58 +00:00
Weston Pace
e767cbb374 chore: update to Lance version 0.10.16 and Arrow version 51 (#1247) 2024-04-26 16:26:57 -07:00
Weston Pace
3d7c48feca feat: allow the index_cache_size to be configured when opening a table (#1245)
This was already configurable in the Rust API, but it wasn't actually
being passed down to the underlying dataset. I added this option to the
async Python API, the new nodejs API, and the synchronous Python API.

I did not add the option to vectordb.
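
A short sketch of the synchronous Python form (the parameter name comes from the PR title; the value and table name are illustrative):

```python
import lancedb

db = lancedb.connect("./.lancedb")
# Allow up to 512 index entries to stay cached in memory for this table.
tbl = db.open_table("my_table", index_cache_size=512)
```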
2024-04-26 13:42:02 -07:00
Bert
08d62550bb fix: passing data to createTable as option (#1242)
Fixes an issue where we would throw `Either data or schema needs to
defined` when passing `data` to `createTable` as a property of the first
argument (an object):

```ts
await db.createTable({
  name: 'table1',
  data,
  schema
})
```
2024-04-26 15:26:08 -04:00
Lei Xu
b272408b05 chore: fix main branch test failure (#1240) 2024-04-24 13:49:37 -07:00
Weston Pace
46ffa87cd4 chore: disable the remote feature by default (#1239)
The rust implementation of the remote client is not yet ready. This is
understandably confusing for users since it is enabled by default. This
PR disables it by default. We can re-enable it when we are ready (even
then it is not clear this is something that should be a default
feature).

---------

Co-authored-by: Will Jones <willjones127@gmail.com>
2024-04-24 09:28:24 -07:00
QianZhu
cd9fc37b95 add rename_table fn and more data for index_stats to return (#1234)
1. Added a `rename_table` fn to enable the dashboard to rename a table (sketched below).
2. Added `index_type` and `distance_type` (for vector indices) to `index_stats`,
so that more detailed data can be shown on the table page.
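
A hypothetical call shape (the function name comes from the commit; the connection URI and exact signature are assumptions):

```python
import lancedb

# rename_table targets remote/dashboard-backed databases; URI is illustrative.
db = lancedb.connect("db://my-project")
db.rename_table("old_table", "new_table")
```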
2024-04-23 16:42:26 -07:00
149 changed files with 7795 additions and 2315 deletions


@@ -1,22 +0,0 @@
[bumpversion]
current_version = 0.4.17
commit = True
message = Bump version: {current_version} → {new_version}
tag = True
tag_name = v{new_version}
[bumpversion:file:node/package.json]
[bumpversion:file:nodejs/package.json]
[bumpversion:file:nodejs/npm/darwin-x64/package.json]
[bumpversion:file:nodejs/npm/darwin-arm64/package.json]
[bumpversion:file:nodejs/npm/linux-x64-gnu/package.json]
[bumpversion:file:nodejs/npm/linux-arm64-gnu/package.json]
[bumpversion:file:rust/ffi/node/Cargo.toml]
[bumpversion:file:rust/lancedb/Cargo.toml]

.bumpversion.toml (new file, 57 lines)

@@ -0,0 +1,57 @@
[tool.bumpversion]
current_version = "0.5.0"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.
(?P<patch>0|[1-9]\\d*)
(?:-(?P<pre_l>[a-zA-Z-]+)\\.(?P<pre_n>0|[1-9]\\d*))?
"""
serialize = [
"{major}.{minor}.{patch}-{pre_l}.{pre_n}",
"{major}.{minor}.{patch}",
]
search = "{current_version}"
replace = "{new_version}"
regex = false
ignore_missing_version = false
ignore_missing_files = false
tag = true
sign_tags = false
tag_name = "v{new_version}"
tag_message = "Bump version: {current_version} → {new_version}"
allow_dirty = true
commit = true
message = "Bump version: {current_version} → {new_version}"
commit_args = ""
[tool.bumpversion.parts.pre_l]
values = ["beta", "final"]
optional_value = "final"
[[tool.bumpversion.files]]
filename = "node/package.json"
search = "\"version\": \"{current_version}\","
replace = "\"version\": \"{new_version}\","
[[tool.bumpversion.files]]
filename = "nodejs/package.json"
search = "\"version\": \"{current_version}\","
replace = "\"version\": \"{new_version}\","
# nodejs binary packages
[[tool.bumpversion.files]]
glob = "nodejs/npm/*/package.json"
search = "\"version\": \"{current_version}\","
replace = "\"version\": \"{new_version}\","
# Cargo files
# ------------
[[tool.bumpversion.files]]
filename = "rust/ffi/node/Cargo.toml"
search = "\nversion = \"{current_version}\""
replace = "\nversion = \"{new_version}\""
[[tool.bumpversion.files]]
filename = "rust/lancedb/Cargo.toml"
search = "\nversion = \"{current_version}\""
replace = "\nversion = \"{new_version}\""

.github/labeler.yml (new file, 33 lines)

@@ -0,0 +1,33 @@
version: 1
appendOnly: true
# Labels are applied based on conventional commits standard
# https://www.conventionalcommits.org/en/v1.0.0/
# These labels are later used in release notes. See .github/release.yml
labels:
# If the PR title has an ! before the : it will be considered a breaking change
# For example, `feat!: add new feature` will be considered a breaking change
- label: breaking-change
title: "^[^:]+!:.*"
- label: breaking-change
body: "BREAKING CHANGE"
- label: enhancement
title: "^feat(\\(.+\\))?!?:.*"
- label: bug
title: "^fix(\\(.+\\))?!?:.*"
- label: documentation
title: "^docs(\\(.+\\))?!?:.*"
- label: performance
title: "^perf(\\(.+\\))?!?:.*"
- label: ci
title: "^ci(\\(.+\\))?!?:.*"
- label: chore
title: "^(chore|test|build|style)(\\(.+\\))?!?:.*"
- label: Python
files:
- "^python\\/.*"
- label: Rust
files:
- "^rust\\/.*"
- label: typescript
files:
- "^node\\/.*"

.github/release_notes.json (new file, 41 lines)

@@ -0,0 +1,41 @@
{
"ignore_labels": ["chore"],
"pr_template": "- ${{TITLE}} by @${{AUTHOR}} in ${{URL}}",
"categories": [
{
"title": "## 🏆 Highlights",
"labels": ["highlight"]
},
{
"title": "## 🛠 Breaking Changes",
"labels": ["breaking-change"]
},
{
"title": "## ⚠️ Deprecations ",
"labels": ["deprecation"]
},
{
"title": "## 🎉 New Features",
"labels": ["enhancement"]
},
{
"title": "## 🐛 Bug Fixes",
"labels": ["bug"]
},
{
"title": "## 📚 Documentation",
"labels": ["documentation"]
},
{
"title": "## 🚀 Performance Improvements",
"labels": ["performance"]
},
{
"title": "## Other Changes"
},
{
"title": "## 🔧 Build and CI",
"labels": ["ci"]
}
]
}


@@ -1,8 +1,12 @@
 name: Cargo Publish
 on:
-  release:
-    types: [ published ]
+  push:
+    tags-ignore:
+      # We don't publish pre-releases for Rust. Crates.io is just a source
+      # distribution, so we don't need to publish pre-releases.
+      - 'v*-beta*'
+      - '*-v*' # for example, python-vX.Y.Z
 env:
   # This env var is used by Swatinem/rust-cache@v2 for the cache

.github/workflows/dev.yml (new file, 81 lines)

@@ -0,0 +1,81 @@
name: PR Checks
on:
pull_request_target:
types: [opened, edited, synchronize, reopened]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
labeler:
permissions:
pull-requests: write
name: Label PR
runs-on: ubuntu-latest
steps:
- uses: srvaroa/labeler@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
commitlint:
permissions:
pull-requests: write
name: Verify PR title / description conforms to semantic-release
runs-on: ubuntu-latest
steps:
- uses: actions/setup-node@v3
with:
node-version: "18"
# These rules are disabled because Github will always ensure there
# is a blank line between the title and the body and Github will
# word wrap the description field to ensure a reasonable max line
# length.
- run: npm install @commitlint/config-conventional
- run: >
echo 'module.exports = {
"rules": {
"body-max-line-length": [0, "always", Infinity],
"footer-max-line-length": [0, "always", Infinity],
"body-leading-blank": [0, "always"]
}
}' > .commitlintrc.js
- run: npx commitlint --extends @commitlint/config-conventional --verbose <<< $COMMIT_MSG
env:
COMMIT_MSG: >
${{ github.event.pull_request.title }}
${{ github.event.pull_request.body }}
- if: failure()
uses: actions/github-script@v6
with:
script: |
const message = `**ACTION NEEDED**
Lance follows the [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0/) for release automation.
The PR title and description are used as the merge commit message.\
Please update your PR title and description to match the specification.
For details on the error please inspect the "PR Title Check" action.
`
// Get list of current comments
const comments = await github.paginate(github.rest.issues.listComments, {
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number
});
// Check if this job already commented
for (const comment of comments) {
if (comment.body === message) {
return // Already commented
}
}
// Post the comment about Conventional Commits
github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: message
})
core.setFailed(message)

.github/workflows/java.yml (new file, 85 lines)

@@ -0,0 +1,85 @@
name: Build and Run Java JNI Tests
on:
push:
branches:
- main
pull_request:
paths:
- java/**
- rust/**
- .github/workflows/java.yml
env:
# This env var is used by Swatinem/rust-cache@v2 for the cache
# key, so we set it to make sure it is always consistent.
CARGO_TERM_COLOR: always
# Disable full debug symbol generation to speed up CI build and keep memory down
# "1" means line tables only, which is useful for panic tracebacks.
RUSTFLAGS: "-C debuginfo=1"
RUST_BACKTRACE: "1"
# according to: https://matklad.github.io/2021/09/04/fast-rust-builds.html
# CI builds are faster with incremental disabled.
CARGO_INCREMENTAL: "0"
CARGO_BUILD_JOBS: "1"
jobs:
linux-build:
runs-on: ubuntu-22.04
name: ubuntu-22.04 + Java 11 & 17
defaults:
run:
working-directory: ./java
steps:
- name: Checkout repository
uses: actions/checkout@v4
- uses: Swatinem/rust-cache@v2
with:
workspaces: java/core/lancedb-jni
- name: Run cargo fmt
run: cargo fmt --check
working-directory: ./java/core/lancedb-jni
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y protobuf-compiler libssl-dev
- name: Install Java 17
uses: actions/setup-java@v4
with:
distribution: temurin
java-version: 17
cache: "maven"
- run: echo "JAVA_17=$JAVA_HOME" >> $GITHUB_ENV
- name: Install Java 11
uses: actions/setup-java@v4
with:
distribution: temurin
java-version: 11
cache: "maven"
- name: Java Style Check
run: mvn checkstyle:check
# Disable because of issues in lancedb rust core code
# - name: Rust Clippy
# working-directory: java/core/lancedb-jni
# run: cargo clippy --all-targets -- -D warnings
- name: Running tests with Java 11
run: mvn clean test
- name: Running tests with Java 17
run: |
export JAVA_TOOL_OPTIONS="$JAVA_TOOL_OPTIONS \
-XX:+IgnoreUnrecognizedVMOptions \
--add-opens=java.base/java.lang=ALL-UNNAMED \
--add-opens=java.base/java.lang.invoke=ALL-UNNAMED \
--add-opens=java.base/java.lang.reflect=ALL-UNNAMED \
--add-opens=java.base/java.io=ALL-UNNAMED \
--add-opens=java.base/java.net=ALL-UNNAMED \
--add-opens=java.base/java.nio=ALL-UNNAMED \
--add-opens=java.base/java.util=ALL-UNNAMED \
--add-opens=java.base/java.util.concurrent=ALL-UNNAMED \
--add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED \
--add-opens=java.base/jdk.internal.ref=ALL-UNNAMED \
--add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
--add-opens=java.base/sun.nio.cs=ALL-UNNAMED \
--add-opens=java.base/sun.security.action=ALL-UNNAMED \
--add-opens=java.base/sun.util.calendar=ALL-UNNAMED \
--add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED \
-Djdk.reflect.useDirectMethodHandle=false \
-Dio.netty.tryReflectionSetAccessible=true"
JAVA_HOME=$JAVA_17 mvn clean test


@@ -1,37 +1,62 @@
 name: Create release commit
+# This workflow increments versions, tags the version, and pushes it.
+# When a tag is pushed, another workflow is triggered that creates a GH release
+# and uploads the binaries. This workflow is only for creating the tag.
+# This script will enforce that a minor version is incremented if there are any
+# breaking changes since the last minor increment. However, it isn't able to
+# differentiate between breaking changes in Node versus Python. If you wish to
+# bypass this check, you can manually increment the version and push the tag.
 on:
   workflow_dispatch:
     inputs:
       dry_run:
         description: 'Dry run (create the local commit/tags but do not push it)'
         required: true
-        default: "false"
-        type: choice
-        options:
-          - "true"
-          - "false"
-      part:
+        default: false
+        type: boolean
+      type:
         description: 'What kind of release is this?'
         required: true
-        default: 'patch'
+        default: 'preview'
         type: choice
         options:
-          - patch
-          - minor
-          - major
+          - preview
+          - stable
+      python:
+        description: 'Make a Python release'
+        required: true
+        default: true
+        type: boolean
+      other:
+        description: 'Make a Node/Rust release'
+        required: true
+        default: true
+        type: boolean
+      bump-minor:
+        description: 'Bump minor version'
+        required: true
+        default: false
+        type: boolean
 jobs:
-  bump-version:
+  make-release:
+    # Creates tag and GH release. The GH release will trigger the build and release jobs.
     runs-on: ubuntu-latest
+    permissions:
+      contents: write
     steps:
-      - name: Check out main
-        uses: actions/checkout@v4
+      - name: Output Inputs
+        run: echo "${{ toJSON(github.event.inputs) }}"
+      - uses: actions/checkout@v4
         with:
-          ref: main
-          persist-credentials: false
           fetch-depth: 0
           lfs: true
+          # It's important we use our token here, as the default token will NOT
+          # trigger any workflows watching for new tags. See:
+          # https://docs.github.com/en/actions/using-workflows/triggering-a-workflow#triggering-a-workflow-from-a-workflow
+          token: ${{ secrets.LANCEDB_RELEASE_TOKEN }}
       - name: Set git configs for bumpversion
         shell: bash
         run: |
@@ -41,19 +66,34 @@ jobs:
         uses: actions/setup-python@v5
         with:
           python-version: "3.11"
-      - name: Bump version, create tag and commit
+      - name: Bump Python version
+        if: ${{ inputs.python }}
+        working-directory: python
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         run: |
-          pip install bump2version
-          bumpversion --verbose ${{ inputs.part }}
-      - name: Push new version and tag
-        if: ${{ inputs.dry_run }} == "false"
+          # Need to get the commit before bumping the version, so we can
+          # determine if there are breaking changes in the next step as well.
+          echo "COMMIT_BEFORE_BUMP=$(git rev-parse HEAD)" >> $GITHUB_ENV
+          pip install bump-my-version PyGithub packaging
+          bash ../ci/bump_version.sh ${{ inputs.type }} ${{ inputs.bump-minor }} python-v
+      - name: Bump Node/Rust version
+        if: ${{ inputs.other }}
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        run: |
+          pip install bump-my-version PyGithub packaging
+          bash ci/bump_version.sh ${{ inputs.type }} ${{ inputs.bump-minor }} v $COMMIT_BEFORE_BUMP
+      - name: Push new version tag
+        if: ${{ !inputs.dry_run }}
        uses: ad-m/github-push-action@master
        with:
+          # Need to use PAT here too to trigger next workflow. See comment above.
          github_token: ${{ secrets.LANCEDB_RELEASE_TOKEN }}
-          branch: main
+          branch: ${{ github.ref }}
          tags: true
      - uses: ./.github/workflows/update_package_lock
        if: ${{ inputs.dry_run }} == "false"
        with:
-          github_token: ${{ secrets.LANCEDB_RELEASE_TOKEN }}
+          github_token: ${{ secrets.GITHUB_TOKEN }}


@@ -52,8 +52,7 @@ jobs:
           cargo fmt --all -- --check
           cargo clippy --all --all-features -- -D warnings
           npm ci
-          npm run lint
-          npm run chkformat
+          npm run lint-ci
   linux:
     name: Linux (NodeJS ${{ matrix.node-version }})
     timeout-minutes: 30


@@ -1,8 +1,9 @@
 name: NPM Publish
 on:
-  release:
-    types: [published]
+  push:
+    tags:
+      - 'v*'
 jobs:
   node:
@@ -274,9 +275,15 @@ jobs:
         env:
           NODE_AUTH_TOKEN: ${{ secrets.LANCEDB_NPM_REGISTRY_TOKEN }}
         run: |
+          # Tag beta as "preview" instead of default "latest". See lancedb
+          # npm publish step for more info.
+          if [[ $GITHUB_REF =~ refs/tags/v(.*)-beta.* ]]; then
+            PUBLISH_ARGS="--tag preview"
+          fi
           mv */*.tgz .
           for filename in *.tgz; do
-            npm publish $filename
+            npm publish $PUBLISH_ARGS $filename
           done
   release-nodejs:
@@ -316,11 +323,23 @@ jobs:
       - name: Publish to NPM
         env:
           NODE_AUTH_TOKEN: ${{ secrets.LANCEDB_NPM_REGISTRY_TOKEN }}
-        run: npm publish --access public
+        # By default, things are published to the latest tag. This is what is
+        # installed by default if the user does not specify a version. This is
+        # good for stable releases, but for pre-releases, we want to publish to
+        # the "preview" tag so they can install with `npm install lancedb@preview`.
+        # See: https://medium.com/@mbostock/prereleases-and-npm-e778fc5e2420
+        run: |
+          if [[ $GITHUB_REF =~ refs/tags/v(.*)-beta.* ]]; then
+            npm publish --access public --tag preview
+          else
+            npm publish --access public
+          fi
   update-package-lock:
     needs: [release]
     runs-on: ubuntu-latest
+    permissions:
+      contents: write
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -331,11 +350,13 @@ jobs:
           lfs: true
       - uses: ./.github/workflows/update_package_lock
         with:
-          github_token: ${{ secrets.LANCEDB_RELEASE_TOKEN }}
+          github_token: ${{ secrets.GITHUB_TOKEN }}
   update-package-lock-nodejs:
     needs: [release-nodejs]
     runs-on: ubuntu-latest
+    permissions:
+      contents: write
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -346,4 +367,70 @@ jobs:
           lfs: true
       - uses: ./.github/workflows/update_package_lock_nodejs
         with:
-          github_token: ${{ secrets.LANCEDB_RELEASE_TOKEN }}
+          github_token: ${{ secrets.GITHUB_TOKEN }}
gh-release:
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
lfs: true
- name: Extract version
id: extract_version
env:
GITHUB_REF: ${{ github.ref }}
run: |
set -e
echo "Extracting tag and version from $GITHUB_REF"
if [[ $GITHUB_REF =~ refs/tags/v(.*) ]]; then
VERSION=${BASH_REMATCH[1]}
TAG=v$VERSION
echo "tag=$TAG" >> $GITHUB_OUTPUT
echo "version=$VERSION" >> $GITHUB_OUTPUT
else
echo "Failed to extract version from $GITHUB_REF"
exit 1
fi
echo "Extracted version $VERSION from $GITHUB_REF"
if [[ $VERSION =~ beta ]]; then
echo "This is a beta release"
# Get last release (that is not this one)
FROM_TAG=$(git tag --sort='version:refname' \
| grep ^v \
| grep -vF "$TAG" \
| python ci/semver_sort.py v \
| tail -n 1)
else
echo "This is a stable release"
# Get last stable tag (ignore betas)
FROM_TAG=$(git tag --sort='version:refname' \
| grep ^v \
| grep -vF "$TAG" \
| grep -v beta \
| python ci/semver_sort.py v \
| tail -n 1)
fi
echo "Found from tag $FROM_TAG"
echo "from_tag=$FROM_TAG" >> $GITHUB_OUTPUT
- name: Create Release Notes
id: release_notes
uses: mikepenz/release-changelog-builder-action@v4
with:
configuration: .github/release_notes.json
toTag: ${{ steps.extract_version.outputs.tag }}
fromTag: ${{ steps.extract_version.outputs.from_tag }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create GH release
uses: softprops/action-gh-release@v2
with:
prerelease: ${{ contains('beta', github.ref) }}
tag_name: ${{ steps.extract_version.outputs.tag }}
token: ${{ secrets.GITHUB_TOKEN }}
generate_release_notes: false
name: Node/Rust LanceDB v${{ steps.extract_version.outputs.version }}
body: ${{ steps.release_notes.outputs.changelog }}


@@ -1,18 +1,16 @@
 name: PyPI Publish
 on:
-  release:
-    types: [published]
+  push:
+    tags:
+      - 'python-v*'
 jobs:
   linux:
-    # Only runs on tags that matches the python-make-release action
-    if: startsWith(github.ref, 'refs/tags/python-v')
     name: Python ${{ matrix.config.platform }} manylinux${{ matrix.config.manylinux }}
     timeout-minutes: 60
     strategy:
       matrix:
-        python-minor-version: ["8"]
         config:
           - platform: x86_64
             manylinux: "2_17"
@@ -34,25 +32,22 @@
       - name: Set up Python
         uses: actions/setup-python@v4
         with:
-          python-version: 3.${{ matrix.python-minor-version }}
+          python-version: 3.8
       - uses: ./.github/workflows/build_linux_wheel
         with:
-          python-minor-version: ${{ matrix.python-minor-version }}
+          python-minor-version: 8
           args: "--release --strip ${{ matrix.config.extra_args }}"
           arm-build: ${{ matrix.config.platform == 'aarch64' }}
           manylinux: ${{ matrix.config.manylinux }}
       - uses: ./.github/workflows/upload_wheel
         with:
-          token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
-          repo: "pypi"
+          pypi_token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
+          fury_token: ${{ secrets.FURY_TOKEN }}
   mac:
-    # Only runs on tags that matches the python-make-release action
-    if: startsWith(github.ref, 'refs/tags/python-v')
     timeout-minutes: 60
     runs-on: ${{ matrix.config.runner }}
     strategy:
       matrix:
-        python-minor-version: ["8"]
         config:
           - target: x86_64-apple-darwin
             runner: macos-13
@@ -63,7 +58,6 @@
     steps:
       - uses: actions/checkout@v4
         with:
-          ref: ${{ inputs.ref }}
           fetch-depth: 0
           lfs: true
       - name: Set up Python
@@ -72,38 +66,95 @@
           python-version: 3.12
       - uses: ./.github/workflows/build_mac_wheel
         with:
-          python-minor-version: ${{ matrix.python-minor-version }}
+          python-minor-version: 8
           args: "--release --strip --target ${{ matrix.config.target }} --features fp16kernels"
       - uses: ./.github/workflows/upload_wheel
         with:
-          python-minor-version: ${{ matrix.python-minor-version }}
-          token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
-          repo: "pypi"
+          pypi_token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
+          fury_token: ${{ secrets.FURY_TOKEN }}
   windows:
-    # Only runs on tags that matches the python-make-release action
-    if: startsWith(github.ref, 'refs/tags/python-v')
     timeout-minutes: 60
     runs-on: windows-latest
-    strategy:
-      matrix:
-        python-minor-version: ["8"]
     steps:
       - uses: actions/checkout@v4
         with:
-          ref: ${{ inputs.ref }}
           fetch-depth: 0
           lfs: true
       - name: Set up Python
         uses: actions/setup-python@v4
         with:
-          python-version: 3.${{ matrix.python-minor-version }}
+          python-version: 3.8
       - uses: ./.github/workflows/build_windows_wheel
         with:
-          python-minor-version: ${{ matrix.python-minor-version }}
+          python-minor-version: 8
           args: "--release --strip"
           vcpkg_token: ${{ secrets.VCPKG_GITHUB_PACKAGES }}
       - uses: ./.github/workflows/upload_wheel
         with:
-          python-minor-version: ${{ matrix.python-minor-version }}
-          token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
-          repo: "pypi"
+          pypi_token: ${{ secrets.LANCEDB_PYPI_API_TOKEN }}
+          fury_token: ${{ secrets.FURY_TOKEN }}
+  gh-release:
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
lfs: true
- name: Extract version
id: extract_version
env:
GITHUB_REF: ${{ github.ref }}
run: |
set -e
echo "Extracting tag and version from $GITHUB_REF"
if [[ $GITHUB_REF =~ refs/tags/python-v(.*) ]]; then
VERSION=${BASH_REMATCH[1]}
TAG=python-v$VERSION
echo "tag=$TAG" >> $GITHUB_OUTPUT
echo "version=$VERSION" >> $GITHUB_OUTPUT
else
echo "Failed to extract version from $GITHUB_REF"
exit 1
fi
echo "Extracted version $VERSION from $GITHUB_REF"
if [[ $VERSION =~ beta ]]; then
echo "This is a beta release"
# Get last release (that is not this one)
FROM_TAG=$(git tag --sort='version:refname' \
| grep ^python-v \
| grep -vF "$TAG" \
| python ci/semver_sort.py python-v \
| tail -n 1)
else
echo "This is a stable release"
# Get last stable tag (ignore betas)
FROM_TAG=$(git tag --sort='version:refname' \
| grep ^python-v \
| grep -vF "$TAG" \
| grep -v beta \
| python ci/semver_sort.py python-v \
| tail -n 1)
fi
echo "Found from tag $FROM_TAG"
echo "from_tag=$FROM_TAG" >> $GITHUB_OUTPUT
- name: Create Python Release Notes
id: python_release_notes
uses: mikepenz/release-changelog-builder-action@v4
with:
configuration: .github/release_notes.json
toTag: ${{ steps.extract_version.outputs.tag }}
fromTag: ${{ steps.extract_version.outputs.from_tag }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create Python GH release
uses: softprops/action-gh-release@v2
with:
prerelease: ${{ contains('beta', github.ref) }}
tag_name: ${{ steps.extract_version.outputs.tag }}
token: ${{ secrets.GITHUB_TOKEN }}
generate_release_notes: false
name: Python LanceDB v${{ steps.extract_version.outputs.version }}
body: ${{ steps.python_release_notes.outputs.changelog }}


@@ -1,56 +0,0 @@
name: Python - Create release commit
on:
workflow_dispatch:
inputs:
dry_run:
description: 'Dry run (create the local commit/tags but do not push it)'
required: true
default: "false"
type: choice
options:
- "true"
- "false"
part:
description: 'What kind of release is this?'
required: true
default: 'patch'
type: choice
options:
- patch
- minor
- major
jobs:
bump-version:
runs-on: ubuntu-latest
steps:
- name: Check out main
uses: actions/checkout@v4
with:
ref: main
persist-credentials: false
fetch-depth: 0
lfs: true
- name: Set git configs for bumpversion
shell: bash
run: |
git config user.name 'Lance Release'
git config user.email 'lance-dev@lancedb.com'
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Bump version, create tag and commit
working-directory: python
run: |
pip install bump2version
bumpversion --verbose ${{ inputs.part }}
- name: Push new version and tag
if: ${{ inputs.dry_run }} == "false"
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.LANCEDB_RELEASE_TOKEN }}
branch: main
tags: true


@@ -75,7 +75,7 @@ jobs:
     timeout-minutes: 30
     strategy:
       matrix:
-        python-minor-version: ["8", "11"]
+        python-minor-version: ["9", "11"]
     runs-on: "ubuntu-22.04"
     defaults:
       run:


@@ -74,11 +74,11 @@ jobs:
         run: |
           sudo apt update
           sudo apt install -y protobuf-compiler libssl-dev
-      - name: Build
-        run: cargo build --all-features
       - name: Start S3 integration test environment
         working-directory: .
         run: docker compose up --detach --wait
+      - name: Build
+        run: cargo build --all-features
       - name: Run tests
         run: cargo test --all-features
       - name: Run examples


@@ -2,28 +2,43 @@ name: upload-wheel
description: "Upload wheels to Pypi" description: "Upload wheels to Pypi"
inputs: inputs:
os: pypi_token:
required: true
description: "ubuntu-22.04 or macos-13"
repo:
required: false
description: "pypi or testpypi"
default: "pypi"
token:
required: true required: true
description: "release token for the repo" description: "release token for the repo"
fury_token:
required: true
description: "release token for the fury repo"
runs: runs:
using: "composite" using: "composite"
steps: steps:
- name: Install dependencies - name: Install dependencies
shell: bash shell: bash
run: | run: |
python -m pip install --upgrade pip python -m pip install --upgrade pip
pip install twine pip install twine
- name: Publish wheel - name: Choose repo
env: shell: bash
TWINE_USERNAME: __token__ id: choose_repo
TWINE_PASSWORD: ${{ inputs.token }} run: |
shell: bash if [ ${{ github.ref }} == "*beta*" ]; then
run: twine upload --repository ${{ inputs.repo }} target/wheels/lancedb-*.whl echo "repo=fury" >> $GITHUB_OUTPUT
else
echo "repo=pypi" >> $GITHUB_OUTPUT
fi
- name: Publish to PyPI
shell: bash
env:
FURY_TOKEN: ${{ inputs.fury_token }}
PYPI_TOKEN: ${{ inputs.pypi_token }}
run: |
if [ ${{ steps.choose_repo.outputs.repo }} == "fury" ]; then
WHEEL=$(ls target/wheels/lancedb-*.whl 2> /dev/null | head -n 1)
echo "Uploading $WHEEL to Fury"
curl -f -F package=@$WHEEL https://$FURY_TOKEN@push.fury.io/lancedb/
else
twine upload --repository ${{ steps.choose_repo.outputs.repo }} \
--username __token__ \
--password $PYPI_TOKEN \
target/wheels/lancedb-*.whl
fi

.gitignore (2 changed lines)

@@ -6,7 +6,7 @@
 venv
 .vscode
+.zed
 rust/target
 rust/Cargo.lock


@@ -10,9 +10,12 @@ repos:
     rev: v0.2.2
     hooks:
       - id: ruff
-  - repo: https://github.com/pre-commit/mirrors-prettier
-    rev: v3.1.0
+  - repo: local
     hooks:
-      - id: prettier
+      - id: local-biome-check
+        name: biome check
+        entry: npx @biomejs/biome check --config-path nodejs/biome.json nodejs/
+        language: system
+        types: [text]
         files: "nodejs/.*"
         exclude: nodejs/lancedb/native.d.ts|nodejs/dist/.*


@@ -1,5 +1,5 @@
 [workspace]
-members = ["rust/ffi/node", "rust/lancedb", "nodejs", "python"]
+members = ["rust/ffi/node", "rust/lancedb", "nodejs", "python", "java/core/lancedb-jni"]
 # Python package needs to be built by maturin.
 exclude = ["python"]
 resolver = "2"
@@ -14,22 +14,22 @@ keywords = ["lancedb", "lance", "database", "vector", "search"]
 categories = ["database-implementations"]
 [workspace.dependencies]
-lance = { "version" = "=0.10.15", "features" = ["dynamodb"] }
-lance-index = { "version" = "=0.10.15" }
-lance-linalg = { "version" = "=0.10.15" }
-lance-testing = { "version" = "=0.10.15" }
+lance = { "version" = "=0.11.1", "features" = ["dynamodb"] }
+lance-index = { "version" = "=0.11.1" }
+lance-linalg = { "version" = "=0.11.1" }
+lance-testing = { "version" = "=0.11.1" }
 # Note that this one does not include pyarrow
-arrow = { version = "50.0", optional = false }
-arrow-array = "50.0"
-arrow-data = "50.0"
-arrow-ipc = "50.0"
-arrow-ord = "50.0"
-arrow-schema = "50.0"
-arrow-arith = "50.0"
-arrow-cast = "50.0"
+arrow = { version = "51.0", optional = false }
+arrow-array = "51.0"
+arrow-data = "51.0"
+arrow-ipc = "51.0"
+arrow-ord = "51.0"
+arrow-schema = "51.0"
+arrow-arith = "51.0"
+arrow-cast = "51.0"
 async-trait = "0"
 chrono = "0.4.35"
-half = { "version" = "=2.3.1", default-features = false, features = [
+half = { "version" = "=2.4.1", default-features = false, features = [
   "num-traits",
 ] }
 futures = "0"


@@ -20,7 +20,7 @@
 <hr />
-LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrevial, filtering and management of embeddings.
+LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering and management of embeddings.
 The key features of LanceDB include:
@@ -36,7 +36,7 @@ The key features of LanceDB include:
 * GPU support in building vector index(*).
-* Ecosystem integrations with [LangChain 🦜️🔗](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lanecdb.html), [LlamaIndex 🦙](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/LanceDBIndexDemo.html), Apache-Arrow, Pandas, Polars, DuckDB and more on the way.
+* Ecosystem integrations with [LangChain 🦜️🔗](https://python.langchain.com/docs/integrations/vectorstores/lancedb/), [LlamaIndex 🦙](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/LanceDBIndexDemo.html), Apache-Arrow, Pandas, Polars, DuckDB and more on the way.
 LanceDB's core is written in Rust 🦀 and is built using <a href="https://github.com/lancedb/lance">Lance</a>, an open-source columnar format designed for performant ML workloads.

ci/bump_version.sh (new file, 51 lines)

@@ -0,0 +1,51 @@
set -e
RELEASE_TYPE=${1:-"stable"}
BUMP_MINOR=${2:-false}
TAG_PREFIX=${3:-"v"} # Such as "python-v"
HEAD_SHA=${4:-$(git rev-parse HEAD)}
readonly SELF_DIR=$(cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
PREV_TAG=$(git tag --sort='version:refname' | grep ^$TAG_PREFIX | python $SELF_DIR/semver_sort.py $TAG_PREFIX | tail -n 1)
echo "Found previous tag $PREV_TAG"
# Initially, we don't want to tag if we are doing stable, because we will bump
# again later. See comment at end for why.
if [[ "$RELEASE_TYPE" == 'stable' ]]; then
BUMP_ARGS="--no-tag"
fi
# If last is stable and not bumping minor
if [[ $PREV_TAG != *beta* ]]; then
if [[ "$BUMP_MINOR" != "false" ]]; then
# X.Y.Z -> X.(Y+1).0-beta.0
bump-my-version bump -vv $BUMP_ARGS minor
else
# X.Y.Z -> X.Y.(Z+1)-beta.0
bump-my-version bump -vv $BUMP_ARGS patch
fi
else
if [[ "$BUMP_MINOR" != "false" ]]; then
# X.Y.Z-beta.N -> X.(Y+1).0-beta.0
bump-my-version bump -vv $BUMP_ARGS minor
else
# X.Y.Z-beta.N -> X.Y.Z-beta.(N+1)
bump-my-version bump -vv $BUMP_ARGS pre_n
fi
fi
# The above bump will always bump to a pre-release version. If we are releasing
# a stable version, bump the pre-release level ("pre_l") to make it stable.
if [[ $RELEASE_TYPE == 'stable' ]]; then
# X.Y.Z-beta.N -> X.Y.Z
bump-my-version bump -vv pre_l
fi
# Validate that we have incremented version appropriately for breaking changes
NEW_TAG=$(git describe --tags --exact-match HEAD)
NEW_VERSION=$(echo $NEW_TAG | sed "s/^$TAG_PREFIX//")
LAST_STABLE_RELEASE=$(git tag --sort='version:refname' | grep ^$TAG_PREFIX | grep -v beta | grep -vF "$NEW_TAG" | python $SELF_DIR/semver_sort.py $TAG_PREFIX | tail -n 1)
LAST_STABLE_VERSION=$(echo $LAST_STABLE_RELEASE | sed "s/^$TAG_PREFIX//")
python $SELF_DIR/check_breaking_changes.py $LAST_STABLE_RELEASE $HEAD_SHA $LAST_STABLE_VERSION $NEW_VERSION

ci/check_breaking_changes.py (new file, 35 lines)

@@ -0,0 +1,35 @@
"""
Check whether there are any breaking changes in the PRs between the base and head commits.
If there are, assert that we have incremented the minor version.
"""
import argparse
import os
from packaging.version import parse
from github import Github
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("base")
parser.add_argument("head")
parser.add_argument("last_stable_version")
parser.add_argument("current_version")
args = parser.parse_args()
repo = Github(os.environ["GITHUB_TOKEN"]).get_repo(os.environ["GITHUB_REPOSITORY"])
commits = repo.compare(args.base, args.head).commits
prs = (pr for commit in commits for pr in commit.get_pulls())
for pr in prs:
if any(label.name == "breaking-change" for label in pr.labels):
print(f"Breaking change in PR: {pr.html_url}")
break
else:
print("No breaking changes found.")
exit(0)
last_stable_version = parse(args.last_stable_version)
current_version = parse(args.current_version)
if current_version.minor <= last_stable_version.minor:
print("Minor version is not greater than the last stable version.")
exit(1)

ci/semver_sort.py (new file, 35 lines)

@@ -0,0 +1,35 @@
"""
Takes a list of semver strings and sorts them in ascending order.
"""
import sys
from packaging.version import parse, InvalidVersion
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("prefix", default="v")
args = parser.parse_args()
# Read the input from stdin
lines = sys.stdin.readlines()
# Parse the versions
versions = []
for line in lines:
line = line.strip()
try:
version_str = line.removeprefix(args.prefix)
version = parse(version_str)
except InvalidVersion:
# There are old tags that don't follow the semver format
print(f"Invalid version: {line}", file=sys.stderr)
continue
versions.append((line, version))
# Sort the versions
versions.sort(key=lambda x: x[1])
# Print the sorted versions as original strings
for line, _ in versions:
print(line)


@@ -119,7 +119,7 @@ nav:
       - Polars: python/polars_arrow.md
       - DuckDB: python/duckdb.md
       - LangChain:
-        - LangChain 🔗: https://python.langchain.com/docs/integrations/vectorstores/lancedb/
+        - LangChain 🔗: integrations/langchain.md
         - LangChain JS/TS 🔗: https://js.langchain.com/docs/integrations/vectorstores/lancedb
       - LlamaIndex 🦙: https://docs.llamaindex.ai/en/stable/examples/vector_stores/LanceDBIndexDemo/
       - Pydantic: python/pydantic.md


@@ -44,6 +44,36 @@
!!! info "Please also make sure you're using the same version of Arrow as in the [lancedb crate](https://github.com/lancedb/lancedb/blob/main/Cargo.toml)" !!! info "Please also make sure you're using the same version of Arrow as in the [lancedb crate](https://github.com/lancedb/lancedb/blob/main/Cargo.toml)"
### Preview releases
Stable releases are created about every 2 weeks. For the latest features and bug
fixes, you can install the preview release. These releases receive the same
level of testing as stable releases, but are not guaranteed to be available for
more than 6 months after they are released. Once your application is stable, we
recommend switching to stable releases.
=== "Python"
```shell
pip install --pre --extra-index-url https://pypi.fury.io/lancedb/ lancedb
```
=== "Typescript"
```shell
npm install vectordb@preview
```
=== "Rust"
We don't push preview releases to crates.io, but you can reference the tag
on GitHub in your Cargo dependencies:
```toml
[dependencies]
lancedb = { git = "https://github.com/lancedb/lancedb.git", tag = "vX.Y.Z-beta.N" }
```
## Connect to a database

=== "Python"


@@ -206,6 +206,44 @@ print(actual.text)
```
### Ollama embeddings
Generate embeddings via the [ollama](https://github.com/ollama/ollama-python) Python library. More details:
- [Ollama docs on embeddings](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings)
- [Ollama blog on embeddings](https://ollama.com/blog/embedding-models)

| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `nomic-embed-text` | The name of the model. |
| `host` | `str` | `http://localhost:11434` | The Ollama host to connect to. |
| `options` | `ollama.Options` or `dict` | `None` | Additional model parameters listed in the documentation for the [Modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values), such as `temperature`. |
| `keep_alive` | `float` or `str` | `"5m"` | Controls how long the model stays loaded in memory following the request. |
| `ollama_client_kwargs` | `dict` | `{}` | kwargs that can be passed to the `ollama.Client`. |

```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry

db = lancedb.connect("/tmp/db")
func = get_registry().get("ollama").create(name="nomic-embed-text")

class Words(LanceModel):
    text: str = func.SourceField()
    vector: Vector(func.ndims()) = func.VectorField()

table = db.create_table("words", schema=Words, mode="overwrite")
table.add([
    {"text": "hello world"},
    {"text": "goodbye world"},
])

query = "greetings"
actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```
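As a sketch of the optional parameters from the table above (the specific values are illustrative, not recommendations):

```python
from lancedb.embeddings import get_registry

# illustrative values for the optional parameters described above
func = get_registry().get("ollama").create(
    name="nomic-embed-text",
    host="http://localhost:11434",   # the default host
    options={"temperature": 0.0},    # ollama.Options or a plain dict
    keep_alive="10m",                # keep the model in memory longer
)
```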
### OpenAI embeddings
LanceDB registers the OpenAI embeddings function in the registry by default, as `openai`. Below are the parameters that you can customize when creating the instances:


@@ -46,7 +46,7 @@ For this purpose, LanceDB introduces an **embedding functions API**, that allow
```python
class Pets(LanceModel):
-    vector: Vector(clip.ndims) = clip.VectorField()
+    vector: Vector(clip.ndims()) = clip.VectorField()
    image_uri: str = clip.SourceField()
```
@@ -149,7 +149,7 @@ You can also use the integration for adding utility operations in the schema. Fo
```python
class Pets(LanceModel):
-    vector: Vector(clip.ndims) = clip.VectorField()
+    vector: Vector(clip.ndims()) = clip.VectorField()
    image_uri: str = clip.SourceField()

    @property
@@ -166,4 +166,4 @@ rs[2].image
![](../assets/dog_clip_output.png)

Now that you have the basic idea about LanceDB embedding functions and the embedding function registry,
let's dive deeper into defining your own [custom functions](./custom_embedding_function.md).


@@ -299,6 +299,14 @@ LanceDB can also connect to S3-compatible stores, such as MinIO. To do so, you m
This can also be done with the ``AWS_ENDPOINT`` and ``AWS_DEFAULT_REGION`` environment variables.
!!! tip "Local servers"

    For local development, the server often has an `http` endpoint rather than a
    secure `https` endpoint. In this case, you must also set the `ALLOW_HTTP`
    environment variable to `true` to allow non-TLS connections, or pass the
    storage option `allow_http` as `true`. If you do not do this, you will get
    an error like `URL scheme is not allowed`.
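For example, a minimal sketch of connecting to a local MinIO server over plain HTTP, assuming your lancedb version accepts the `storage_options` argument (the bucket name, path, and endpoint are placeholders):

```python
import lancedb

db = lancedb.connect(
    "s3://my-bucket/my-db",  # placeholder bucket/path
    storage_options={
        "endpoint": "http://localhost:9000",  # local MinIO endpoint
        "allow_http": "true",  # required because the endpoint is not TLS
    },
)
```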
#### S3 Express
LanceDB supports [S3 Express One Zone](https://aws.amazon.com/s3/storage-classes/express-one-zone/) endpoints, but requires additional configuration. Also, S3 Express endpoints only support connecting from an EC2 instance within the same region.


@@ -13,7 +13,7 @@ Get started using these examples and quick links.
| Integrations | |
|---|---:|
| <h3>LlamaIndex</h3>LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models. LlamaIndex integrates with LanceDB as the serverless VectorDB. <h3>[Learn More](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/LanceDBIndexDemo.html)</h3> | <img src="../assets/llama-index.jpg" alt="image" width="150" height="auto">|
-| <h3>Langchain</h3>Langchain allows building applications with LLMs through composability <h3>[Learn More](https://python.langchain.com/docs/integrations/vectorstores/lancedb)</h3> | <img src="../assets/langchain.png" alt="image" width="150" height="auto">|
+| <h3>Langchain</h3>Langchain allows building applications with LLMs through composability <h3>[Learn More](https://lancedb.github.io/lancedb/integrations/langchain/)</h3> | <img src="../assets/langchain.png" alt="image" width="150" height="auto">|
| <h3>Langchain TS</h3>Javascript bindings for Langchain. It integrates with LanceDB's serverless vectordb, allowing you to build powerful AI applications through composability using only serverless functions. <h3>[Learn More](https://js.langchain.com/docs/modules/data_connection/vectorstores/integrations/lancedb)</h3> | <img src="../assets/langchain.png" alt="image" width="150" height="auto">|
| <h3>Voxel51</h3>An open source toolkit that enables you to build better computer vision workflows by improving the quality of your datasets and delivering insights about your models. <h3>[Learn More](./voxel51.md)</h3> | <img src="../assets/voxel.gif" alt="image" width="150" height="auto">|
| <h3>PromptTools</h3>Offers a set of free, open-source tools for testing and experimenting with models, prompts, and configurations. The core idea is to enable developers to evaluate prompts using familiar interfaces like code and notebooks. You can use it to experiment with different configurations of LanceDB, and test how LanceDB integrates with the LLM of your choice. <h3>[Learn More](./prompttools.md)</h3> | <img src="../assets/prompttools.jpeg" alt="image" width="150" height="auto">|


@@ -0,0 +1,92 @@
# Langchain
![Illustration](../assets/langchain.png)
## Quick Start
You can load your document data using LangChain's loaders. For this example we use `TextLoader` to load the data and `OpenAIEmbeddings` as the embedding model.
```python
import os

from langchain.document_loaders import TextLoader
from langchain.vectorstores import LanceDB
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

os.environ["OPENAI_API_KEY"] = "sk-..."

loader = TextLoader("../../modules/state_of_the_union.txt")  # Replace with your data path
documents = loader.load()
documents = CharacterTextSplitter().split_documents(documents)

embeddings = OpenAIEmbeddings()
docsearch = LanceDB.from_documents(documents, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
```
## Documentation
In the example above, the `LanceDB` vector store object is created with the `from_documents()` class method, which returns an initialized instance.
You can also use the `LanceDB.from_texts(texts: List[str], embedding: Embeddings)` class method.
The full list of parameters for the `LanceDB` vector store is:
- `connection`: (Optional) `lancedb.db.LanceDBConnection` connection object to use. If not provided, a new connection will be created.
- `embedding`: LangChain embedding model.
- `vector_key`: (Optional) Column name to use for vectors in the table. Defaults to `'vector'`.
- `id_key`: (Optional) Column name to use for ids in the table. Defaults to `'id'`.
- `text_key`: (Optional) Column name to use for text in the table. Defaults to `'text'`.
- `table_name`: (Optional) Name of your table in the database. Defaults to `'vectorstore'`.
- `api_key`: (Optional) API key to use for LanceDB Cloud. Defaults to `None`.
- `region`: (Optional) Region to use for LanceDB Cloud. Only for LanceDB Cloud; defaults to `None`.
- `mode`: (Optional) Mode to use for adding data to the table. Defaults to `'overwrite'`.
```python
db_url = "db://lang_test"  # URL of the database you created
api_key = "xxxxx"  # your API key
region = "us-east-1-dev"  # your selected region

vector_store = LanceDB(
    uri=db_url,
    api_key=api_key,  # don't include for a local database
    region=region,  # don't include for a local database
    embedding=embeddings,
    table_name='langchain_test',  # optional
)
```
### Methods
To add texts and store respective embeddings automatically:
##### add_texts()
- `texts`: `Iterable` of strings to add to the vectorstore.
- `metadatas`: Optional `list` of `dict`s of metadata associated with the texts.
- `ids`: Optional `list` of ids to associate with the texts.
```python
vector_store.add_texts(texts=['test_123'], metadatas=[{'source': 'wiki'}])

# Additionally, to explore the table you can load it into a DataFrame or save it to a CSV file:
tbl = vector_store.get_table()
print("tbl:", tbl)
pd_df = tbl.to_pandas()
pd_df.to_csv("docsearch.csv", index=False)

# You can also create a new vector store object using an existing connection object:
vector_store = LanceDB(connection=tbl, embedding=embeddings)
```
Before creating an index, make sure your table has enough data. An ANN index is usually not needed for datasets of up to ~100K vectors. For large-scale (>1M) or higher-dimensional vectors, creating an ANN index is beneficial.
##### create_index()
- `col_name`: `Optional[str] = None`
- `vector_col`: `Optional[str] = None`
- `num_partitions`: `Optional[int] = 256`
- `num_sub_vectors`: `Optional[int] = 96`
- `index_cache_size`: `Optional[int] = None`
```python
# for creating a vector index
vector_store.create_index(vector_col='vector', metric='cosine')

# for creating a scalar index (for non-vector columns)
vector_store.create_index(col_name='text')
```
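With data (and optionally an index) in place, querying goes through the standard LangChain vector store interface; `k` is the usual result-count parameter:

```python
# standard LangChain vector store query; k bounds the number of results
docs = vector_store.similarity_search("What is LanceDB?", k=5)
for doc in docs:
    print(doc.page_content)
```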


@@ -36,7 +36,7 @@
   }
  ],
  "source": [
-   "!pip install --quiet openai datasets \n",
+   "!pip install --quiet openai datasets\n",
   "!pip install --quiet -U lancedb"
  ]
},
@@ -213,7 +213,7 @@
"if \"OPENAI_API_KEY\" not in os.environ:\n", "if \"OPENAI_API_KEY\" not in os.environ:\n",
" # OR set the key here as a variable\n", " # OR set the key here as a variable\n",
" os.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\n", " os.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\n",
" \n", "\n",
"client = OpenAI()\n", "client = OpenAI()\n",
"assert len(client.models.list().data) > 0" "assert len(client.models.list().data) > 0"
] ]
@@ -234,9 +234,12 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"def embed_func(c): \n", "def embed_func(c):\n",
" rs = client.embeddings.create(input=c, model=\"text-embedding-ada-002\")\n", " rs = client.embeddings.create(input=c, model=\"text-embedding-ada-002\")\n",
" return [rs.data[0].embedding]" " return [\n",
" data.embedding\n",
" for data in rs.data\n",
" ]"
] ]
}, },
{ {
@@ -514,7 +517,7 @@
" prompt_start +\n", " prompt_start +\n",
" \"\\n\\n---\\n\\n\".join(context.text) +\n", " \"\\n\\n---\\n\\n\".join(context.text) +\n",
" prompt_end\n", " prompt_end\n",
" ) \n", " )\n",
" return prompt" " return prompt"
] ]
}, },


@@ -8,6 +8,7 @@ excluded_globs = [
"../src/embedding.md", "../src/embedding.md",
"../src/examples/*.md", "../src/examples/*.md",
"../src/integrations/voxel51.md", "../src/integrations/voxel51.md",
"../src/integrations/langchain.md",
"../src/guides/tables.md", "../src/guides/tables.md",
"../src/python/duckdb.md", "../src/python/duckdb.md",
"../src/embeddings/*.md", "../src/embeddings/*.md",


@@ -0,0 +1,27 @@
[package]
name = "lancedb-jni"
description = "JNI bindings for LanceDB"
# TODO modify lancedb/Cargo.toml for version and dependencies
version = "0.4.18"
edition.workspace = true
repository.workspace = true
readme.workspace = true
license.workspace = true
keywords.workspace = true
categories.workspace = true
publish = false

[lib]
crate-type = ["cdylib"]

[dependencies]
lancedb = { path = "../../../rust/lancedb" }
lance = { workspace = true }
arrow = { workspace = true, features = ["ffi"] }
arrow-schema.workspace = true
tokio = "1.23"
jni = "0.21.1"
snafu.workspace = true
lazy_static.workspace = true
serde = { version = "^1" }
serde_json = { version = "1" }


@@ -0,0 +1,130 @@
use jni::objects::{JObject, JString, JValue};
use jni::JNIEnv;
use lancedb::connection::{connect, Connection};

use crate::ffi::JNIEnvExt;
use crate::traits::IntoJava;
use crate::{Error, Result, RT};

pub const NATIVE_CONNECTION: &str = "nativeConnectionHandle";
#[derive(Clone)]
pub struct BlockingConnection {
pub(crate) inner: Connection,
}
impl BlockingConnection {
pub fn create(dataset_uri: &str) -> Result<Self> {
let inner = RT.block_on(connect(dataset_uri).execute())?;
Ok(Self { inner })
}
pub fn table_names(
&self,
start_after: Option<String>,
limit: Option<i32>,
) -> Result<Vec<String>> {
let mut op = self.inner.table_names();
if let Some(start_after) = start_after {
op = op.start_after(start_after);
}
if let Some(limit) = limit {
op = op.limit(limit as u32);
}
Ok(RT.block_on(op.execute())?)
}
}
impl IntoJava for BlockingConnection {
fn into_java<'a>(self, env: &mut JNIEnv<'a>) -> JObject<'a> {
attach_native_connection(env, self)
}
}
fn attach_native_connection<'local>(
env: &mut JNIEnv<'local>,
connection: BlockingConnection,
) -> JObject<'local> {
let j_connection = create_java_connection_object(env);
// This block sets a native Rust object (Connection) as a field in the Java object (j_Connection).
// Caution: This creates a potential for memory leaks. The Rust object (Connection) is not
// automatically garbage-collected by Java, and its memory will not be freed unless
// explicitly handled.
//
// To prevent memory leaks, ensure the following:
// 1. The Java object (`j_Connection`) should implement the `java.io.Closeable` interface.
// 2. Users of this Java object should be instructed to always use it within a try-with-resources
// statement (or manually call the `close()` method) to ensure that `self.close()` is invoked.
match unsafe { env.set_rust_field(&j_connection, NATIVE_CONNECTION, connection) } {
Ok(_) => j_connection,
Err(err) => {
env.throw_new(
"java/lang/RuntimeException",
format!("Failed to set native handle for Connection: {}", err),
)
.expect("Error throwing exception");
JObject::null()
}
}
}
fn create_java_connection_object<'a>(env: &mut JNIEnv<'a>) -> JObject<'a> {
env.new_object("com/lancedb/lancedb/Connection", "()V", &[])
.expect("Failed to create Java Lance Connection instance")
}
#[no_mangle]
pub extern "system" fn Java_com_lancedb_lancedb_Connection_releaseNativeConnection(
mut env: JNIEnv,
j_connection: JObject,
) {
let _: BlockingConnection = unsafe {
env.take_rust_field(j_connection, NATIVE_CONNECTION)
.expect("Failed to take native Connection handle")
};
}
#[no_mangle]
pub extern "system" fn Java_com_lancedb_lancedb_Connection_connect<'local>(
mut env: JNIEnv<'local>,
_obj: JObject,
dataset_uri_object: JString,
) -> JObject<'local> {
let dataset_uri: String = ok_or_throw!(env, env.get_string(&dataset_uri_object)).into();
let blocking_connection = ok_or_throw!(env, BlockingConnection::create(&dataset_uri));
blocking_connection.into_java(&mut env)
}
#[no_mangle]
pub extern "system" fn Java_com_lancedb_lancedb_Connection_tableNames<'local>(
mut env: JNIEnv<'local>,
j_connection: JObject,
start_after_obj: JObject, // Optional<String>
limit_obj: JObject, // Optional<Integer>
) -> JObject<'local> {
ok_or_throw!(
env,
inner_table_names(&mut env, j_connection, start_after_obj, limit_obj)
)
}
fn inner_table_names<'local>(
env: &mut JNIEnv<'local>,
j_connection: JObject,
start_after_obj: JObject, // Optional<String>
limit_obj: JObject, // Optional<Integer>
) -> Result<JObject<'local>> {
let start_after = env.get_string_opt(&start_after_obj)?;
let limit = env.get_int_opt(&limit_obj)?;
let conn =
unsafe { env.get_rust_field::<_, _, BlockingConnection>(j_connection, NATIVE_CONNECTION) }?;
let table_names = conn.table_names(start_after, limit)?;
drop(conn);
let j_names = env.new_object("java/util/ArrayList", "()V", &[])?;
for item in table_names {
let jstr_item = env.new_string(item)?;
let item_jobj = JObject::from(jstr_item);
let item_gen = JValue::Object(&item_jobj);
env.call_method(&j_names, "add", "(Ljava/lang/Object;)Z", &[item_gen])?;
}
Ok(j_names)
}


@@ -0,0 +1,225 @@
// Copyright 2024 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::str::Utf8Error;
use arrow_schema::ArrowError;
use jni::errors::Error as JniError;
use serde_json::Error as JsonError;
use snafu::{Location, Snafu};
type BoxedError = Box<dyn std::error::Error + Send + Sync + 'static>;
/// Java Exception types
pub enum JavaException {
IllegalArgumentException,
IOException,
RuntimeException,
}
impl JavaException {
pub fn as_str(&self) -> &str {
match self {
Self::IllegalArgumentException => "java/lang/IllegalArgumentException",
Self::IOException => "java/io/IOException",
Self::RuntimeException => "java/lang/RuntimeException",
}
}
}
/// TODO(lu) change to lancedb-jni
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("JNI error: {message}, {location}"))]
Jni { message: String, location: Location },
#[snafu(display("Invalid argument: {message}, {location}"))]
InvalidArgument { message: String, location: Location },
#[snafu(display("IO error: {source}, {location}"))]
IO {
source: BoxedError,
location: Location,
},
#[snafu(display("Arrow error: {message}, {location}"))]
Arrow { message: String, location: Location },
#[snafu(display("Index error: {message}, {location}"))]
Index { message: String, location: Location },
#[snafu(display("JSON error: {message}, {location}"))]
JSON { message: String, location: Location },
#[snafu(display("Dataset at path {path} was not found, {location}"))]
DatasetNotFound { path: String, location: Location },
#[snafu(display("Dataset already exists: {uri}, {location}"))]
DatasetAlreadyExists { uri: String, location: Location },
#[snafu(display("Table '{name}' already exists"))]
TableAlreadyExists { name: String },
#[snafu(display("Table '{name}' was not found"))]
TableNotFound { name: String },
#[snafu(display("Invalid table name '{name}': {reason}"))]
InvalidTableName { name: String, reason: String },
#[snafu(display("Embedding function '{name}' was not found: {reason}, {location}"))]
EmbeddingFunctionNotFound {
name: String,
reason: String,
location: Location,
},
#[snafu(display("Other Lance error: {message}, {location}"))]
OtherLance { message: String, location: Location },
#[snafu(display("Other LanceDB error: {message}, {location}"))]
OtherLanceDB { message: String, location: Location },
}
impl Error {
/// Throw as Java Exception
pub fn throw(&self, env: &mut jni::JNIEnv) {
match self {
Self::InvalidArgument { .. }
| Self::DatasetNotFound { .. }
| Self::DatasetAlreadyExists { .. }
| Self::TableAlreadyExists { .. }
| Self::TableNotFound { .. }
| Self::InvalidTableName { .. }
| Self::EmbeddingFunctionNotFound { .. } => {
self.throw_as(env, JavaException::IllegalArgumentException)
}
Self::IO { .. } | Self::Index { .. } => self.throw_as(env, JavaException::IOException),
Self::Arrow { .. }
| Self::JSON { .. }
| Self::OtherLance { .. }
| Self::OtherLanceDB { .. }
| Self::Jni { .. } => self.throw_as(env, JavaException::RuntimeException),
}
}
/// Throw as a concrete Java exception
pub fn throw_as(&self, env: &mut jni::JNIEnv, exception: JavaException) {
let message = &format!(
"Error when throwing Java exception: {}:{}",
exception.as_str(),
self
);
env.throw_new(exception.as_str(), self.to_string())
.expect(message);
}
}
pub type Result<T> = std::result::Result<T, Error>;
trait ToSnafuLocation {
fn to_snafu_location(&'static self) -> snafu::Location;
}
impl ToSnafuLocation for std::panic::Location<'static> {
fn to_snafu_location(&'static self) -> snafu::Location {
snafu::Location::new(self.file(), self.line(), self.column())
}
}
impl From<JniError> for Error {
#[track_caller]
fn from(source: JniError) -> Self {
Self::Jni {
message: source.to_string(),
location: std::panic::Location::caller().to_snafu_location(),
}
}
}
impl From<Utf8Error> for Error {
#[track_caller]
fn from(source: Utf8Error) -> Self {
Self::InvalidArgument {
message: source.to_string(),
location: std::panic::Location::caller().to_snafu_location(),
}
}
}
impl From<ArrowError> for Error {
#[track_caller]
fn from(source: ArrowError) -> Self {
Self::Arrow {
message: source.to_string(),
location: std::panic::Location::caller().to_snafu_location(),
}
}
}
impl From<JsonError> for Error {
#[track_caller]
fn from(source: JsonError) -> Self {
Self::JSON {
message: source.to_string(),
location: std::panic::Location::caller().to_snafu_location(),
}
}
}
impl From<lance::Error> for Error {
#[track_caller]
fn from(source: lance::Error) -> Self {
match source {
lance::Error::DatasetNotFound {
path,
source: _,
location,
} => Self::DatasetNotFound { path, location },
lance::Error::DatasetAlreadyExists { uri, location } => {
Self::DatasetAlreadyExists { uri, location }
}
lance::Error::IO { source, location } => Self::IO { source, location },
lance::Error::Arrow { message, location } => Self::Arrow { message, location },
lance::Error::Index { message, location } => Self::Index { message, location },
lance::Error::InvalidInput { source, location } => Self::InvalidArgument {
message: source.to_string(),
location,
},
_ => Self::OtherLance {
message: source.to_string(),
location: std::panic::Location::caller().to_snafu_location(),
},
}
}
}
impl From<lancedb::Error> for Error {
#[track_caller]
fn from(source: lancedb::Error) -> Self {
match source {
lancedb::Error::InvalidTableName { name, reason } => {
Self::InvalidTableName { name, reason }
}
lancedb::Error::InvalidInput { message } => Self::InvalidArgument {
message,
location: std::panic::Location::caller().to_snafu_location(),
},
lancedb::Error::TableNotFound { name } => Self::TableNotFound { name },
lancedb::Error::TableAlreadyExists { name } => Self::TableAlreadyExists { name },
lancedb::Error::EmbeddingFunctionNotFound { name, reason } => {
Self::EmbeddingFunctionNotFound {
name,
reason,
location: std::panic::Location::caller().to_snafu_location(),
}
}
lancedb::Error::Arrow { source } => Self::Arrow {
message: source.to_string(),
location: std::panic::Location::caller().to_snafu_location(),
},
lancedb::Error::Lance { source } => Self::from(source),
_ => Self::OtherLanceDB {
message: source.to_string(),
location: std::panic::Location::caller().to_snafu_location(),
},
}
}
}


@@ -0,0 +1,204 @@
// Copyright 2024 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use core::slice;
use jni::objects::{JByteBuffer, JObjectArray, JString};
use jni::sys::jobjectArray;
use jni::{objects::JObject, JNIEnv};
use crate::error::{Error, Result};
/// TODO(lu) import from lance-jni without duplicate
/// Extend JNIEnv with helper functions.
pub trait JNIEnvExt {
/// Get integers from Java List<Integer> object.
fn get_integers(&mut self, obj: &JObject) -> Result<Vec<i32>>;
/// Get strings from Java List<String> object.
fn get_strings(&mut self, obj: &JObject) -> Result<Vec<String>>;
/// Get strings from Java String[] object.
/// Note that getting Option<Vec<String>> from Java Optional<String[]> just doesn't work.
#[allow(unused)]
fn get_strings_array(&mut self, obj: jobjectArray) -> Result<Vec<String>>;
/// Get Option<String> from Java Optional<String>.
fn get_string_opt(&mut self, obj: &JObject) -> Result<Option<String>>;
/// Get Option<Vec<String>> from Java Optional<List<String>>.
#[allow(unused)]
fn get_strings_opt(&mut self, obj: &JObject) -> Result<Option<Vec<String>>>;
/// Get Option<i32> from Java Optional<Integer>.
fn get_int_opt(&mut self, obj: &JObject) -> Result<Option<i32>>;
/// Get Option<Vec<i32>> from Java Optional<List<Integer>>.
fn get_ints_opt(&mut self, obj: &JObject) -> Result<Option<Vec<i32>>>;
/// Get Option<i64> from Java Optional<Long>.
#[allow(unused)]
fn get_long_opt(&mut self, obj: &JObject) -> Result<Option<i64>>;
/// Get Option<u64> from Java Optional<Long>.
#[allow(unused)]
fn get_u64_opt(&mut self, obj: &JObject) -> Result<Option<u64>>;
/// Get Option<&[u8]> from Java Optional<ByteBuffer>.
#[allow(unused)]
fn get_bytes_opt(&mut self, obj: &JObject) -> Result<Option<&[u8]>>;
fn get_optional<T, F>(&mut self, obj: &JObject, f: F) -> Result<Option<T>>
where
F: FnOnce(&mut JNIEnv, &JObject) -> Result<T>;
}
impl JNIEnvExt for JNIEnv<'_> {
fn get_integers(&mut self, obj: &JObject) -> Result<Vec<i32>> {
let list = self.get_list(obj)?;
let mut iter = list.iter(self)?;
let mut results = Vec::with_capacity(list.size(self)? as usize);
while let Some(elem) = iter.next(self)? {
let int_obj = self.call_method(elem, "intValue", "()I", &[])?;
let int_value = int_obj.i()?;
results.push(int_value);
}
Ok(results)
}
fn get_strings(&mut self, obj: &JObject) -> Result<Vec<String>> {
let list = self.get_list(obj)?;
let mut iter = list.iter(self)?;
let mut results = Vec::with_capacity(list.size(self)? as usize);
while let Some(elem) = iter.next(self)? {
let jstr = JString::from(elem);
let val = self.get_string(&jstr)?;
results.push(val.to_str()?.to_string())
}
Ok(results)
}
fn get_strings_array(&mut self, obj: jobjectArray) -> Result<Vec<String>> {
let jobject_array = unsafe { JObjectArray::from_raw(obj) };
let array_len = self.get_array_length(&jobject_array)?;
let mut res: Vec<String> = Vec::new();
for i in 0..array_len {
let item: JString = self.get_object_array_element(&jobject_array, i)?.into();
res.push(self.get_string(&item)?.into());
}
Ok(res)
}
fn get_string_opt(&mut self, obj: &JObject) -> Result<Option<String>> {
self.get_optional(obj, |env, inner_obj| {
let java_obj_gen = env.call_method(inner_obj, "get", "()Ljava/lang/Object;", &[])?;
let java_string_obj = java_obj_gen.l()?;
let jstr = JString::from(java_string_obj);
let val = env.get_string(&jstr)?;
Ok(val.to_str()?.to_string())
})
}
fn get_strings_opt(&mut self, obj: &JObject) -> Result<Option<Vec<String>>> {
self.get_optional(obj, |env, inner_obj| {
let java_obj_gen = env.call_method(inner_obj, "get", "()Ljava/lang/Object;", &[])?;
let java_list_obj = java_obj_gen.l()?;
env.get_strings(&java_list_obj)
})
}
fn get_int_opt(&mut self, obj: &JObject) -> Result<Option<i32>> {
self.get_optional(obj, |env, inner_obj| {
let java_obj_gen = env.call_method(inner_obj, "get", "()Ljava/lang/Object;", &[])?;
let java_int_obj = java_obj_gen.l()?;
let int_obj = env.call_method(java_int_obj, "intValue", "()I", &[])?;
let int_value = int_obj.i()?;
Ok(int_value)
})
}
fn get_ints_opt(&mut self, obj: &JObject) -> Result<Option<Vec<i32>>> {
self.get_optional(obj, |env, inner_obj| {
let java_obj_gen = env.call_method(inner_obj, "get", "()Ljava/lang/Object;", &[])?;
let java_list_obj = java_obj_gen.l()?;
env.get_integers(&java_list_obj)
})
}
fn get_long_opt(&mut self, obj: &JObject) -> Result<Option<i64>> {
self.get_optional(obj, |env, inner_obj| {
let java_obj_gen = env.call_method(inner_obj, "get", "()Ljava/lang/Object;", &[])?;
let java_long_obj = java_obj_gen.l()?;
let long_obj = env.call_method(java_long_obj, "longValue", "()J", &[])?;
let long_value = long_obj.j()?;
Ok(long_value)
})
}
fn get_u64_opt(&mut self, obj: &JObject) -> Result<Option<u64>> {
self.get_optional(obj, |env, inner_obj| {
let java_obj_gen = env.call_method(inner_obj, "get", "()Ljava/lang/Object;", &[])?;
let java_long_obj = java_obj_gen.l()?;
let long_obj = env.call_method(java_long_obj, "longValue", "()J", &[])?;
let long_value = long_obj.j()?;
Ok(long_value as u64)
})
}
fn get_bytes_opt(&mut self, obj: &JObject) -> Result<Option<&[u8]>> {
self.get_optional(obj, |env, inner_obj| {
let java_obj_gen = env.call_method(inner_obj, "get", "()Ljava/lang/Object;", &[])?;
let java_byte_buffer_obj = java_obj_gen.l()?;
let j_byte_buffer = JByteBuffer::from(java_byte_buffer_obj);
let raw_data = env.get_direct_buffer_address(&j_byte_buffer)?;
let capacity = env.get_direct_buffer_capacity(&j_byte_buffer)?;
let data = unsafe { slice::from_raw_parts(raw_data, capacity) };
Ok(data)
})
}
fn get_optional<T, F>(&mut self, obj: &JObject, f: F) -> Result<Option<T>>
where
F: FnOnce(&mut JNIEnv, &JObject) -> Result<T>,
{
if obj.is_null() {
return Ok(None);
}
let is_empty = self.call_method(obj, "isEmpty", "()Z", &[])?;
if is_empty.z()? {
// TODO(lu): put get java object into here cuz can only get java Object
Ok(None)
} else {
f(self, obj).map(Some)
}
}
}
#[no_mangle]
pub extern "system" fn Java_com_lancedb_lance_test_JniTestHelper_parseInts(
mut env: JNIEnv,
_obj: JObject,
list_obj: JObject, // List<Integer>
) {
ok_or_throw_without_return!(env, env.get_integers(&list_obj));
}
#[no_mangle]
pub extern "system" fn Java_com_lancedb_lance_test_JniTestHelper_parseIntsOpt(
mut env: JNIEnv,
_obj: JObject,
list_obj: JObject, // Optional<List<Integer>>
) {
ok_or_throw_without_return!(env, env.get_ints_opt(&list_obj));
}


@@ -0,0 +1,68 @@
// Copyright 2024 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use lazy_static::lazy_static;
// TODO import from lance-jni without duplicate
#[macro_export]
macro_rules! ok_or_throw {
($env:expr, $result:expr) => {
match $result {
Ok(value) => value,
Err(err) => {
Error::from(err).throw(&mut $env);
return JObject::null();
}
}
};
}
macro_rules! ok_or_throw_without_return {
($env:expr, $result:expr) => {
match $result {
Ok(value) => value,
Err(err) => {
Error::from(err).throw(&mut $env);
return;
}
}
};
}
#[macro_export]
macro_rules! ok_or_throw_with_return {
($env:expr, $result:expr, $ret:expr) => {
match $result {
Ok(value) => value,
Err(err) => {
Error::from(err).throw(&mut $env);
return $ret;
}
}
};
}
mod connection;
pub mod error;
mod ffi;
mod traits;
pub use error::{Error, Result};
lazy_static! {
static ref RT: tokio::runtime::Runtime = tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.expect("Failed to create tokio runtime");
}


@@ -0,0 +1,122 @@
// Copyright 2024 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use jni::objects::{JMap, JObject, JString, JValue};
use jni::JNIEnv;
use crate::Result;
pub trait FromJObject<T> {
fn extract(&self) -> Result<T>;
}
/// Convert a Rust type into a Java Object.
pub trait IntoJava {
fn into_java<'a>(self, env: &mut JNIEnv<'a>) -> JObject<'a>;
}
impl FromJObject<i32> for JObject<'_> {
fn extract(&self) -> Result<i32> {
Ok(JValue::from(self).i()?)
}
}
impl FromJObject<i64> for JObject<'_> {
fn extract(&self) -> Result<i64> {
Ok(JValue::from(self).j()?)
}
}
impl FromJObject<f32> for JObject<'_> {
fn extract(&self) -> Result<f32> {
Ok(JValue::from(self).f()?)
}
}
impl FromJObject<f64> for JObject<'_> {
fn extract(&self) -> Result<f64> {
Ok(JValue::from(self).d()?)
}
}
pub trait FromJString {
fn extract(&self, env: &mut JNIEnv) -> Result<String>;
}
impl FromJString for JString<'_> {
fn extract(&self, env: &mut JNIEnv) -> Result<String> {
Ok(env.get_string(self)?.into())
}
}
pub trait JMapExt {
#[allow(dead_code)]
fn get_string(&self, env: &mut JNIEnv, key: &str) -> Result<Option<String>>;
#[allow(dead_code)]
fn get_i32(&self, env: &mut JNIEnv, key: &str) -> Result<Option<i32>>;
#[allow(dead_code)]
fn get_i64(&self, env: &mut JNIEnv, key: &str) -> Result<Option<i64>>;
#[allow(dead_code)]
fn get_f32(&self, env: &mut JNIEnv, key: &str) -> Result<Option<f32>>;
#[allow(dead_code)]
fn get_f64(&self, env: &mut JNIEnv, key: &str) -> Result<Option<f64>>;
}
fn get_map_value<T>(env: &mut JNIEnv, map: &JMap, key: &str) -> Result<Option<T>>
where
for<'a> JObject<'a>: FromJObject<T>,
{
let key_obj: JObject = env.new_string(key)?.into();
if let Some(value) = map.get(env, &key_obj)? {
if value.is_null() {
Ok(None)
} else {
Ok(Some(value.extract()?))
}
} else {
Ok(None)
}
}
impl JMapExt for JMap<'_, '_, '_> {
fn get_string(&self, env: &mut JNIEnv, key: &str) -> Result<Option<String>> {
let key_obj: JObject = env.new_string(key)?.into();
if let Some(value) = self.get(env, &key_obj)? {
let value_str: JString = value.into();
Ok(Some(value_str.extract(env)?))
} else {
Ok(None)
}
}
fn get_i32(&self, env: &mut JNIEnv, key: &str) -> Result<Option<i32>> {
get_map_value(env, self, key)
}
fn get_i64(&self, env: &mut JNIEnv, key: &str) -> Result<Option<i64>> {
get_map_value(env, self, key)
}
fn get_f32(&self, env: &mut JNIEnv, key: &str) -> Result<Option<f32>> {
get_map_value(env, self, key)
}
fn get_f64(&self, env: &mut JNIEnv, key: &str) -> Result<Option<f64>> {
get_map_value(env, self, key)
}
}
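Since the accessors above treat a missing key and a Java null the same way, a caller can read optional settings without extra null checks. A minimal sketch of a hypothetical helper (not part of this change; the key names are illustrative):

```rust
use jni::objects::{JMap, JObject};
use jni::JNIEnv;

use crate::error::Result;
use crate::traits::JMapExt;

/// Hypothetical helper: read optional index parameters from a Java
/// Map<String, Object> via the JMapExt accessors defined above.
fn read_index_params(env: &mut JNIEnv, params: &JObject) -> Result<(Option<i32>, Option<i32>)> {
    let map = JMap::from_env(env, params)?;
    // Missing keys and null values both surface as None.
    let num_partitions = map.get_i32(env, "num_partitions")?;
    let num_sub_vectors = map.get_i32(env, "num_sub_vectors")?;
    Ok((num_partitions, num_sub_vectors))
}
```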

java/core/pom.xml (new file)

@@ -0,0 +1,94 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.1-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<artifactId>lancedb-core</artifactId>
<name>LanceDB Core</name>
<packaging>jar</packaging>
<dependencies>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-vector</artifactId>
</dependency>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-memory-netty</artifactId>
</dependency>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-c-data</artifactId>
</dependency>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-dataset</artifactId>
</dependency>
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
</dependency>
<dependency>
<groupId>org.questdb</groupId>
<artifactId>jar-jni</artifactId>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<profiles>
<profile>
<id>build-jni</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.questdb</groupId>
<artifactId>rust-maven-plugin</artifactId>
<version>1.1.1</version>
<executions>
<execution>
<id>lancedb-jni</id>
<goals>
<goal>build</goal>
</goals>
<configuration>
<path>lancedb-jni</path>
<!--<release>true</release>-->
<!-- Copy native libraries to target/classes for runtime access -->
<copyTo>${project.build.directory}/classes/nativelib</copyTo>
<copyWithPlatformDir>true</copyWithPlatformDir>
</configuration>
</execution>
<execution>
<id>lancedb-jni-test</id>
<goals>
<goal>test</goal>
</goals>
<configuration>
<path>lancedb-jni</path>
<release>false</release>
<verbosity>-v</verbosity>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>


@@ -0,0 +1,120 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.lancedb.lancedb;
import io.questdb.jar.jni.JarJniLoader;
import java.io.Closeable;
import java.util.List;
import java.util.Optional;
/**
* Represents a LanceDB database.
*/
public class Connection implements Closeable {
static {
JarJniLoader.loadLib(Connection.class, "/nativelib", "lancedb_jni");
}
private long nativeConnectionHandle;
/**
* Connect to a LanceDB instance.
*/
public static native Connection connect(String uri);
/**
* Get the names of all tables in the database. The names are sorted in
* ascending order.
*
* @return the table names
*/
public List<String> tableNames() {
return tableNames(Optional.empty(), Optional.empty());
}
/**
* Get the names of filtered tables in the database. The names are sorted in
* ascending order.
*
* @param limit The number of results to return.
* @return the table names
*/
public List<String> tableNames(int limit) {
return tableNames(Optional.empty(), Optional.of(limit));
}
/**
* Get the names of filtered tables in the database. The names are sorted in
* ascending order.
*
* @param startAfter If present, only return names that come lexicographically after the supplied
* value. This can be combined with limit to implement pagination
* by setting this to the last table name from the previous page.
* @return the table names
*/
public List<String> tableNames(String startAfter) {
return tableNames(Optional.of(startAfter), Optional.empty());
}
/**
* Get the names of filtered tables in the database. The names are sorted in
* ascending order.
*
* @param startAfter If present, only return names that come lexicographically after the supplied
* value. This can be combined with limit to implement pagination
* by setting this to the last table name from the previous page.
* @param limit The number of results to return.
* @return the table names
*/
public List<String> tableNames(String startAfter, int limit) {
return tableNames(Optional.of(startAfter), Optional.of(limit));
}
/**
* Get the names of filtered tables in the database. The names are sorted in
* ascending order.
*
* @param startAfter If present, only return names that come lexicographically after the supplied
* value. This can be combined with limit to implement pagination
* by setting this to the last table name from the previous page.
* @param limit The number of results to return.
* @return the table names
*/
public native List<String> tableNames(
Optional<String> startAfter, Optional<Integer> limit);
/**
* Closes this connection and releases any system resources associated with it. If
* the connection is
* already closed, then invoking this method has no effect.
*/
@Override
public void close() {
if (nativeConnectionHandle != 0) {
releaseNativeConnection(nativeConnectionHandle);
nativeConnectionHandle = 0;
}
}
/**
* Native method to release the Lance connection resources associated with the
* given handle.
*
* @param handle The native handle to the connection resource.
*/
private native void releaseNativeConnection(long handle);
private Connection() {}
}
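As a usage sketch (the database URI is a placeholder), the overloads above combine to paginate through table names:

```java
// paginate through table names two at a time; the URI is a placeholder
try (Connection conn = Connection.connect("/tmp/example_db")) {
    int pageSize = 2;
    List<String> page = conn.tableNames(pageSize);
    while (!page.isEmpty()) {
        page.forEach(System.out::println);
        // the last name of the previous page is the cursor for the next
        page = conn.tableNames(page.get(page.size() - 1), pageSize);
    }
}
```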


@@ -0,0 +1,135 @@
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.lancedb.lancedb;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import java.nio.file.Path;
import java.util.List;
import java.net.URL;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;
public class ConnectionTest {
private static final String[] TABLE_NAMES = {
"dataset_version",
"new_empty_dataset",
"test",
"write_stream"
};
@TempDir
static Path tempDir; // Temporary directory for the tests
private static URL lanceDbURL;
@BeforeAll
static void setUp() {
ClassLoader classLoader = ConnectionTest.class.getClassLoader();
lanceDbURL = classLoader.getResource("example_db");
}
@Test
void emptyDB() {
String databaseUri = tempDir.resolve("emptyDB").toString();
try (Connection conn = Connection.connect(databaseUri)) {
List<String> tableNames = conn.tableNames();
assertTrue(tableNames.isEmpty());
}
}
@Test
void tableNames() {
try (Connection conn = Connection.connect(lanceDbURL.toString())) {
List<String> tableNames = conn.tableNames();
assertEquals(4, tableNames.size());
for (int i = 0; i < TABLE_NAMES.length; i++) {
assertEquals(TABLE_NAMES[i], tableNames.get(i));
}
}
}
@Test
void tableNamesStartAfter() {
try (Connection conn = Connection.connect(lanceDbURL.toString())) {
assertTableNamesStartAfter(conn, TABLE_NAMES[0], 3, TABLE_NAMES[1], TABLE_NAMES[2], TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, TABLE_NAMES[1], 2, TABLE_NAMES[2], TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, TABLE_NAMES[2], 1, TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, TABLE_NAMES[3], 0);
assertTableNamesStartAfter(conn, "a_dataset", 4, TABLE_NAMES[0], TABLE_NAMES[1], TABLE_NAMES[2], TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, "o_dataset", 2, TABLE_NAMES[2], TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, "v_dataset", 1, TABLE_NAMES[3]);
assertTableNamesStartAfter(conn, "z_dataset", 0);
}
}
private void assertTableNamesStartAfter(Connection conn, String startAfter, int expectedSize, String... expectedNames) {
List<String> tableNames = conn.tableNames(startAfter);
assertEquals(expectedSize, tableNames.size());
for (int i = 0; i < expectedNames.length; i++) {
assertEquals(expectedNames[i], tableNames.get(i));
}
}
@Test
void tableNamesLimit() {
try (Connection conn = Connection.connect(lanceDbURL.toString())) {
for (int i = 0; i <= TABLE_NAMES.length; i++) {
List<String> tableNames = conn.tableNames(i);
assertEquals(i, tableNames.size());
for (int j = 0; j < i; j++) {
assertEquals(TABLE_NAMES[j], tableNames.get(j));
}
}
}
}
@Test
void tableNamesStartAfterLimit() {
try (Connection conn = Connection.connect(lanceDbURL.toString())) {
List<String> tableNames = conn.tableNames(TABLE_NAMES[0], 2);
assertEquals(2, tableNames.size());
assertEquals(TABLE_NAMES[1], tableNames.get(0));
assertEquals(TABLE_NAMES[2], tableNames.get(1));
tableNames = conn.tableNames(TABLE_NAMES[1], 1);
assertEquals(1, tableNames.size());
assertEquals(TABLE_NAMES[2], tableNames.get(0));
tableNames = conn.tableNames(TABLE_NAMES[2], 2);
assertEquals(1, tableNames.size());
assertEquals(TABLE_NAMES[3], tableNames.get(0));
tableNames = conn.tableNames(TABLE_NAMES[3], 2);
assertEquals(0, tableNames.size());
tableNames = conn.tableNames(TABLE_NAMES[0], 0);
assertEquals(0, tableNames.size());
// Limit larger than the number of remaining tables
tableNames = conn.tableNames(TABLE_NAMES[0], 10);
assertEquals(3, tableNames.size());
assertEquals(TABLE_NAMES[1], tableNames.get(0));
assertEquals(TABLE_NAMES[2], tableNames.get(1));
assertEquals(TABLE_NAMES[3], tableNames.get(2));
// Start after a value not in the list
tableNames = conn.tableNames("non_existent_table", 2);
assertEquals(2, tableNames.size());
assertEquals(TABLE_NAMES[2], tableNames.get(0));
assertEquals(TABLE_NAMES[3], tableNames.get(1));
// Start after the last table with a limit
tableNames = conn.tableNames(TABLE_NAMES[3], 1);
assertEquals(0, tableNames.size());
}
}
}

View File

@@ -0,0 +1 @@
$d51afd07-e3cd-4c76-9b9b-787e13fd55b0<62>=id <20><><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>*int3208name <20><><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>*string08

View File

@@ -0,0 +1 @@
$15648e72-076f-4ef1-8b90-10d305b95b3b<33>=id <20><><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>*int3208name <20><><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>*string08

View File

@@ -0,0 +1 @@
$a3689caf-4f6b-4afc-a3c7-97af75661843<34>oitem <20><><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>*string8price <20><><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>*double80vector <20><><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>*fixed_size_list:float:28

java/pom.xml (new file)

@@ -0,0 +1,129 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.lancedb</groupId>
<artifactId>lancedb-parent</artifactId>
<version>0.1-SNAPSHOT</version>
<packaging>pom</packaging>
<name>Lance Parent</name>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>11</maven.compiler.source>
<maven.compiler.target>11</maven.compiler.target>
<arrow.version>15.0.0</arrow.version>
</properties>
<modules>
<module>core</module>
</modules>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-vector</artifactId>
<version>${arrow.version}</version>
</dependency>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-memory-netty</artifactId>
<version>${arrow.version}</version>
</dependency>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-c-data</artifactId>
<version>${arrow.version}</version>
</dependency>
<dependency>
<groupId>org.apache.arrow</groupId>
<artifactId>arrow-dataset</artifactId>
<version>${arrow.version}</version>
</dependency>
<dependency>
<groupId>org.questdb</groupId>
<artifactId>jar-jni</artifactId>
<version>1.1.1</version>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter</artifactId>
<version>5.10.1</version>
</dependency>
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
<version>20210307</version>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<version>3.3.1</version>
<configuration>
<configLocation>google_checks.xml</configLocation>
<consoleOutput>true</consoleOutput>
<failsOnError>true</failsOnError>
<violationSeverity>warning</violationSeverity>
<linkXRef>false</linkXRef>
</configuration>
<executions>
<execution>
<id>validate</id>
<phase>validate</phase>
<goals>
<goal>check</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
<pluginManagement>
<plugins>
<plugin>
<artifactId>maven-clean-plugin</artifactId>
<version>3.1.0</version>
</plugin>
<plugin>
<artifactId>maven-resources-plugin</artifactId>
<version>3.0.2</version>
</plugin>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<compilerArgs>
<arg>-h</arg>
<arg>target/headers</arg>
</compilerArgs>
</configuration>
</plugin>
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.2.5</version>
<configuration>
<argLine>--add-opens=java.base/java.nio=ALL-UNNAMED</argLine>
<forkNode implementation="org.apache.maven.plugin.surefire.extensions.SurefireForkNodeFactory"/>
<useSystemClassLoader>false</useSystemClassLoader>
</configuration>
</plugin>
<plugin>
<artifactId>maven-jar-plugin</artifactId>
<version>3.0.2</version>
</plugin>
<plugin>
<artifactId>maven-install-plugin</artifactId>
<version>2.5.2</version>
</plugin>
</plugins>
</pluginManagement>
</build>
</project>

node/package-lock.json (generated)

@@ -1,12 +1,12 @@
{
  "name": "vectordb",
-  "version": "0.4.17",
+  "version": "0.5.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "vectordb",
-      "version": "0.4.17",
+      "version": "0.5.0",
      "cpu": [
        "x64",
        "arm64"
@@ -52,11 +52,11 @@
"uuid": "^9.0.0" "uuid": "^9.0.0"
}, },
"optionalDependencies": { "optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.4.17", "@lancedb/vectordb-darwin-arm64": "0.4.20",
"@lancedb/vectordb-darwin-x64": "0.4.17", "@lancedb/vectordb-darwin-x64": "0.4.20",
"@lancedb/vectordb-linux-arm64-gnu": "0.4.17", "@lancedb/vectordb-linux-arm64-gnu": "0.4.20",
"@lancedb/vectordb-linux-x64-gnu": "0.4.17", "@lancedb/vectordb-linux-x64-gnu": "0.4.20",
"@lancedb/vectordb-win32-x64-msvc": "0.4.17" "@lancedb/vectordb-win32-x64-msvc": "0.4.20"
}, },
"peerDependencies": { "peerDependencies": {
"@apache-arrow/ts": "^14.0.2", "@apache-arrow/ts": "^14.0.2",
@@ -333,6 +333,66 @@
"@jridgewell/sourcemap-codec": "^1.4.10" "@jridgewell/sourcemap-codec": "^1.4.10"
} }
}, },
"node_modules/@lancedb/vectordb-darwin-arm64": {
"version": "0.4.20",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.4.20.tgz",
"integrity": "sha512-ffP2K4sA5mQTgePyARw1y8dPN996FmpvyAYoWO+TSItaXlhcXvc+KVa5udNMCZMDYeEnEv2Xpj6k4PwW3oBz+A==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"darwin"
]
},
"node_modules/@lancedb/vectordb-darwin-x64": {
"version": "0.4.20",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.4.20.tgz",
"integrity": "sha512-GSYsXE20RIehDu30FjREhJdEzhnwOTV7ZsrSXagStzLY1gr7pyd7sfqxmmUtdD09di7LnQoiM71AOpPTa01YwQ==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"darwin"
]
},
"node_modules/@lancedb/vectordb-linux-arm64-gnu": {
"version": "0.4.20",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.4.20.tgz",
"integrity": "sha512-FpNOjOsz3nJVm6EBGyNgbOW2aFhsWZ/igeY45Z8hbZaaK2YBwrg/DASoNlUzgv6IR8cUaGJ2irNVJfsKR2cG6g==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-linux-x64-gnu": {
"version": "0.4.20",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.4.20.tgz",
"integrity": "sha512-pOqWjrRZQSrLTlQPkjidRii7NZDw8Xu9pN6ouVu2JAK8n81FXaPtFCyAI+Y3v9GpnYDN0rvD4eQ36aHAVPsa2g==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-win32-x64-msvc": {
"version": "0.4.20",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.4.20.tgz",
"integrity": "sha512-5J5SsYSJ7jRCmU/sgwVHdrGz43B/7R2T9OEoFTKyVAtqTZdu75rkytXyn9SyEayXVhlUOaw76N0ASm0hAoDS/A==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"win32"
]
},
"node_modules/@neon-rs/cli": { "node_modules/@neon-rs/cli": {
"version": "0.0.160", "version": "0.0.160",
"resolved": "https://registry.npmjs.org/@neon-rs/cli/-/cli-0.0.160.tgz", "resolved": "https://registry.npmjs.org/@neon-rs/cli/-/cli-0.0.160.tgz",


@@ -1,6 +1,6 @@
{
  "name": "vectordb",
-  "version": "0.4.17",
+  "version": "0.5.0",
  "description": " Serverless, low-latency vector database for AI applications",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
@@ -88,10 +88,10 @@
    }
  },
  "optionalDependencies": {
-    "@lancedb/vectordb-darwin-arm64": "0.4.17",
-    "@lancedb/vectordb-darwin-x64": "0.4.17",
-    "@lancedb/vectordb-linux-arm64-gnu": "0.4.17",
-    "@lancedb/vectordb-linux-x64-gnu": "0.4.17",
-    "@lancedb/vectordb-win32-x64-msvc": "0.4.17"
+    "@lancedb/vectordb-darwin-arm64": "0.4.20",
+    "@lancedb/vectordb-darwin-x64": "0.4.20",
+    "@lancedb/vectordb-linux-arm64-gnu": "0.4.20",
+    "@lancedb/vectordb-linux-x64-gnu": "0.4.20",
+    "@lancedb/vectordb-win32-x64-msvc": "0.4.20"
  }
}


@@ -27,23 +27,23 @@ import {
  RecordBatch,
  makeData,
  Struct,
-  Float,
+  type Float,
  DataType,
  Binary,
  Float32
-} from 'apache-arrow'
+} from "apache-arrow";
-import { type EmbeddingFunction } from './index'
+import { type EmbeddingFunction } from "./index";
-import { sanitizeSchema } from './sanitize'
+import { sanitizeSchema } from "./sanitize";

/*
 * Options to control how a column should be converted to a vector array
 */
export class VectorColumnOptions {
  /** Vector column type. */
-  type: Float = new Float32()
+  type: Float = new Float32();

-  constructor (values?: Partial<VectorColumnOptions>) {
-    Object.assign(this, values)
+  constructor(values?: Partial<VectorColumnOptions>) {
+    Object.assign(this, values);
  }
}
@@ -60,7 +60,7 @@ export class MakeArrowTableOptions {
   * The schema must be specified if there are no records (e.g. to make
   * an empty table)
   */
-  schema?: Schema
+  schema?: Schema;

  /*
   * Mapping from vector column name to expected type
@@ -80,7 +80,9 @@ export class MakeArrowTableOptions {
   */
  vectorColumns: Record<string, VectorColumnOptions> = {
    vector: new VectorColumnOptions()
-  }
+  };
+
+  embeddings?: EmbeddingFunction<any>;

  /**
   * If true then string columns will be encoded with dictionary encoding
@@ -91,10 +93,10 @@
   *
   * If `schema` is provided then this property is ignored.
   */
-  dictionaryEncodeStrings: boolean = false
+  dictionaryEncodeStrings: boolean = false;

-  constructor (values?: Partial<MakeArrowTableOptions>) {
-    Object.assign(this, values)
+  constructor(values?: Partial<MakeArrowTableOptions>) {
+    Object.assign(this, values);
  }
}
@@ -193,59 +195,68 @@ export class MakeArrowTableOptions
  * assert.deepEqual(table.schema, schema)
  * ```
  */
-export function makeArrowTable (
+export function makeArrowTable(
   data: Array<Record<string, any>>,
   options?: Partial<MakeArrowTableOptions>
 ): ArrowTable {
-  if (data.length === 0 && (options?.schema === undefined || options?.schema === null)) {
-    throw new Error('At least one record or a schema needs to be provided')
+  if (
+    data.length === 0 &&
+    (options?.schema === undefined || options?.schema === null)
+  ) {
+    throw new Error("At least one record or a schema needs to be provided");
   }
-  const opt = new MakeArrowTableOptions(options !== undefined ? options : {})
+  const opt = new MakeArrowTableOptions(options !== undefined ? options : {});
   if (opt.schema !== undefined && opt.schema !== null) {
-    opt.schema = sanitizeSchema(opt.schema)
+    opt.schema = sanitizeSchema(opt.schema);
+    opt.schema = validateSchemaEmbeddings(opt.schema, data, opt.embeddings);
   }
-  const columns: Record<string, Vector> = {}
+  const columns: Record<string, Vector> = {};
   // TODO: sample dataset to find missing columns
   // Prefer the field ordering of the schema, if present
-  const columnNames = ((opt.schema) != null) ? (opt.schema.names as string[]) : Object.keys(data[0])
+  const columnNames =
+    opt.schema != null ? (opt.schema.names as string[]) : Object.keys(data[0]);
   for (const colName of columnNames) {
-    if (data.length !== 0 && !Object.prototype.hasOwnProperty.call(data[0], colName)) {
+    if (
+      data.length !== 0 &&
+      !Object.prototype.hasOwnProperty.call(data[0], colName)
+    ) {
       // The field is present in the schema, but not in the data, skip it
-      continue
+      continue;
     }
     // Extract a single column from the records (transpose from row-major to col-major)
-    let values = data.map((datum) => datum[colName])
+    let values = data.map((datum) => datum[colName]);
     // By default (type === undefined) arrow will infer the type from the JS type
-    let type
+    let type;
     if (opt.schema !== undefined) {
       // If there is a schema provided, then use that for the type instead
-      type = opt.schema?.fields.filter((f) => f.name === colName)[0]?.type
+      type = opt.schema?.fields.filter((f) => f.name === colName)[0]?.type;
       if (DataType.isInt(type) && type.bitWidth === 64) {
         // wrap in BigInt to avoid bug: https://github.com/apache/arrow/issues/40051
         values = values.map((v) => {
           if (v === null) {
-            return v
+            return v;
           }
-          return BigInt(v)
-        })
+          return BigInt(v);
+        });
       }
     } else {
       // Otherwise, check to see if this column is one of the vector columns
       // defined by opt.vectorColumns and, if so, use the fixed size list type
-      const vectorColumnOptions = opt.vectorColumns[colName]
+      const vectorColumnOptions = opt.vectorColumns[colName];
       if (vectorColumnOptions !== undefined) {
-        type = newVectorType(values[0].length, vectorColumnOptions.type)
+        type = newVectorType(values[0].length, vectorColumnOptions.type);
       }
     }
     try {
       // Convert an Array of JS values to an arrow vector
-      columns[colName] = makeVector(values, type, opt.dictionaryEncodeStrings)
+      columns[colName] = makeVector(values, type, opt.dictionaryEncodeStrings);
     } catch (error: unknown) {
       // eslint-disable-next-line @typescript-eslint/restrict-template-expressions
-      throw Error(`Could not convert column "${colName}" to Arrow: ${error}`)
+      throw Error(`Could not convert column "${colName}" to Arrow: ${error}`);
     }
   }
@@ -260,97 +271,116 @@ export function makeArrowTable (
   // To work around this we first create a table with the wrong schema and
   // then patch the schema of the batches so we can use
   // `new ArrowTable(schema, batches)` which does not do any schema inference
-    const firstTable = new ArrowTable(columns)
-    // eslint-disable-next-line @typescript-eslint/no-non-null-assertion
-    const batchesFixed = firstTable.batches.map(batch => new RecordBatch(opt.schema!, batch.data))
-    return new ArrowTable(opt.schema, batchesFixed)
+    const firstTable = new ArrowTable(columns);
+    const batchesFixed = firstTable.batches.map(
+      // eslint-disable-next-line @typescript-eslint/no-non-null-assertion
+      (batch) => new RecordBatch(opt.schema!, batch.data)
+    );
+    return new ArrowTable(opt.schema, batchesFixed);
   } else {
-    return new ArrowTable(columns)
+    return new ArrowTable(columns);
   }
 }
 /**
  * Create an empty Arrow table with the provided schema
  */
-export function makeEmptyTable (schema: Schema): ArrowTable {
-  return makeArrowTable([], { schema })
+export function makeEmptyTable(schema: Schema): ArrowTable {
+  return makeArrowTable([], { schema });
 }
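`makeEmptyTable` is just `makeArrowTable` with zero records, so the schema is mandatory. A minimal sketch (field names are illustrative):

```typescript
import { Field, FixedSizeList, Float32, Int32, Schema } from "apache-arrow";
import { makeEmptyTable } from "./arrow";

const schema = new Schema([
  new Field("id", new Int32(), false),
  // Vector columns are FixedSizeList with a nullable "item" child,
  // matching newVectorType below.
  new Field(
    "vector",
    new FixedSizeList(2, new Field("item", new Float32(), true)),
    true
  )
]);
const empty = makeEmptyTable(schema); // zero rows, schema preserved
```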
 // Helper function to convert Array<Array<any>> to a variable sized list array
-function makeListVector (lists: any[][]): Vector<any> {
+function makeListVector(lists: any[][]): Vector<any> {
   if (lists.length === 0 || lists[0].length === 0) {
-    throw Error('Cannot infer list vector from empty array or empty list')
+    throw Error("Cannot infer list vector from empty array or empty list");
   }
-  const sampleList = lists[0]
-  let inferredType
+  const sampleList = lists[0];
+  let inferredType;
   try {
-    const sampleVector = makeVector(sampleList)
-    inferredType = sampleVector.type
+    const sampleVector = makeVector(sampleList);
+    inferredType = sampleVector.type;
   } catch (error: unknown) {
     // eslint-disable-next-line @typescript-eslint/restrict-template-expressions
-    throw Error(`Cannot infer list vector. Cannot infer inner type: ${error}`)
+    throw Error(`Cannot infer list vector. Cannot infer inner type: ${error}`);
   }
   const listBuilder = makeBuilder({
-    type: new List(new Field('item', inferredType, true))
-  })
+    type: new List(new Field("item", inferredType, true))
+  });
   for (const list of lists) {
-    listBuilder.append(list)
+    listBuilder.append(list);
   }
-  return listBuilder.finish().toVector()
+  return listBuilder.finish().toVector();
 }
 // Helper function to convert an Array of JS values to an Arrow Vector
-function makeVector (values: any[], type?: DataType, stringAsDictionary?: boolean): Vector<any> {
+function makeVector(
+  values: any[],
+  type?: DataType,
+  stringAsDictionary?: boolean
+): Vector<any> {
   if (type !== undefined) {
     // No need for inference, let Arrow create it
-    return vectorFromArray(values, type)
+    return vectorFromArray(values, type);
   }
   if (values.length === 0) {
-    throw Error('makeVector requires at least one value or the type must be specified')
+    throw Error(
+      "makeVector requires at least one value or the type must be specified"
+    );
   }
-  const sampleValue = values.find(val => val !== null && val !== undefined)
+  const sampleValue = values.find((val) => val !== null && val !== undefined);
   if (sampleValue === undefined) {
-    throw Error('makeVector cannot infer the type if all values are null or undefined')
+    throw Error(
+      "makeVector cannot infer the type if all values are null or undefined"
+    );
   }
   if (Array.isArray(sampleValue)) {
     // Default Arrow inference doesn't handle list types
-    return makeListVector(values)
+    return makeListVector(values);
   } else if (Buffer.isBuffer(sampleValue)) {
     // Default Arrow inference doesn't handle Buffer
-    return vectorFromArray(values, new Binary())
-  } else if (!(stringAsDictionary ?? false) && (typeof sampleValue === 'string' || sampleValue instanceof String)) {
+    return vectorFromArray(values, new Binary());
+  } else if (
+    !(stringAsDictionary ?? false) &&
+    (typeof sampleValue === "string" || sampleValue instanceof String)
+  ) {
     // If the type is string then don't use Arrow's default inference unless dictionaries are requested
     // because it will always use dictionary encoding for strings
-    return vectorFromArray(values, new Utf8())
+    return vectorFromArray(values, new Utf8());
   } else {
     // Convert a JS array of values to an arrow vector
-    return vectorFromArray(values)
+    return vectorFromArray(values);
   }
 }
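`makeVector` is module-private, so its inference rules are easiest to observe through `makeArrowTable`. A sketch of the string branch above, with illustrative values:

```typescript
import { makeArrowTable } from "./arrow";

// Plain strings bypass Arrow's dictionary-encoding default and come out Utf8.
const plain = makeArrowTable([{ s: "a" }, { s: "b" }]);

// Opting in restores Arrow's default dictionary encoding for strings.
const dict = makeArrowTable([{ s: "a" }, { s: "b" }], {
  dictionaryEncodeStrings: true
});
```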
-async function applyEmbeddings<T> (table: ArrowTable, embeddings?: EmbeddingFunction<T>, schema?: Schema): Promise<ArrowTable> {
+async function applyEmbeddings<T>(
+  table: ArrowTable,
+  embeddings?: EmbeddingFunction<T>,
+  schema?: Schema
+): Promise<ArrowTable> {
   if (embeddings == null) {
-    return table
+    return table;
   }
   if (schema !== undefined && schema !== null) {
-    schema = sanitizeSchema(schema)
+    schema = sanitizeSchema(schema);
   }
   // Convert from ArrowTable to Record<String, Vector>
   const colEntries = [...Array(table.numCols).keys()].map((_, idx) => {
-    const name = table.schema.fields[idx].name
+    const name = table.schema.fields[idx].name;
     // eslint-disable-next-line @typescript-eslint/no-non-null-assertion
-    const vec = table.getChildAt(idx)!
-    return [name, vec]
-  })
-  const newColumns = Object.fromEntries(colEntries)
-  const sourceColumn = newColumns[embeddings.sourceColumn]
-  const destColumn = embeddings.destColumn ?? 'vector'
-  const innerDestType = embeddings.embeddingDataType ?? new Float32()
+    const vec = table.getChildAt(idx)!;
+    return [name, vec];
+  });
+  const newColumns = Object.fromEntries(colEntries);
+  const sourceColumn = newColumns[embeddings.sourceColumn];
+  const destColumn = embeddings.destColumn ?? "vector";
+  const innerDestType = embeddings.embeddingDataType ?? new Float32();
   if (sourceColumn === undefined) {
-    throw new Error(`Cannot apply embedding function because the source column '${embeddings.sourceColumn}' was not present in the data`)
+    throw new Error(
+      `Cannot apply embedding function because the source column '${embeddings.sourceColumn}' was not present in the data`
+    );
   }
   if (table.numRows === 0) {
@@ -358,45 +388,60 @@ async function applyEmbeddings<T> (table: ArrowTable, embeddings?: EmbeddingFunc
       // We have an empty table and it already has the embedding column so no work needs to be done
       // Note: we don't return an error like we did below because this is a common occurrence. For example,
       // if we call convertToTable with 0 records and a schema that includes the embedding
-      return table
+      return table;
     }
     if (embeddings.embeddingDimension !== undefined) {
-      const destType = newVectorType(embeddings.embeddingDimension, innerDestType)
-      newColumns[destColumn] = makeVector([], destType)
+      const destType = newVectorType(
+        embeddings.embeddingDimension,
+        innerDestType
+      );
+      newColumns[destColumn] = makeVector([], destType);
     } else if (schema != null) {
-      const destField = schema.fields.find(f => f.name === destColumn)
+      const destField = schema.fields.find((f) => f.name === destColumn);
       if (destField != null) {
-        newColumns[destColumn] = makeVector([], destField.type)
+        newColumns[destColumn] = makeVector([], destField.type);
       } else {
-        throw new Error(`Attempt to apply embeddings to an empty table failed because schema was missing embedding column '${destColumn}'`)
+        throw new Error(
+          `Attempt to apply embeddings to an empty table failed because schema was missing embedding column '${destColumn}'`
+        );
       }
     } else {
-      throw new Error('Attempt to apply embeddings to an empty table when the embeddings function does not specify `embeddingDimension`')
+      throw new Error(
+        "Attempt to apply embeddings to an empty table when the embeddings function does not specify `embeddingDimension`"
+      );
     }
   } else {
     if (Object.prototype.hasOwnProperty.call(newColumns, destColumn)) {
-      throw new Error(`Attempt to apply embeddings to table failed because column ${destColumn} already existed`)
+      throw new Error(
+        `Attempt to apply embeddings to table failed because column ${destColumn} already existed`
+      );
    }
    if (table.batches.length > 1) {
-      throw new Error('Internal error: `makeArrowTable` unexpectedly created a table with more than one batch')
+      throw new Error(
+        "Internal error: `makeArrowTable` unexpectedly created a table with more than one batch"
+      );
    }
-    const values = sourceColumn.toArray()
-    const vectors = await embeddings.embed(values as T[])
+    const values = sourceColumn.toArray();
+    const vectors = await embeddings.embed(values as T[]);
    if (vectors.length !== values.length) {
-      throw new Error('Embedding function did not return an embedding for each input element')
+      throw new Error(
+        "Embedding function did not return an embedding for each input element"
+      );
    }
-    const destType = newVectorType(vectors[0].length, innerDestType)
-    newColumns[destColumn] = makeVector(vectors, destType)
+    const destType = newVectorType(vectors[0].length, innerDestType);
+    newColumns[destColumn] = makeVector(vectors, destType);
   }
-  const newTable = new ArrowTable(newColumns)
+  const newTable = new ArrowTable(newColumns);
   if (schema != null) {
-    if (schema.fields.find(f => f.name === destColumn) === undefined) {
-      throw new Error(`When using embedding functions and specifying a schema the schema should include the embedding column but the column ${destColumn} was missing`)
+    if (schema.fields.find((f) => f.name === destColumn) === undefined) {
+      throw new Error(
+        `When using embedding functions and specifying a schema the schema should include the embedding column but the column ${destColumn} was missing`
+      );
    }
-    return alignTable(newTable, schema)
+    return alignTable(newTable, schema);
   }
-  return newTable
+  return newTable;
 }
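For reference, a minimal `EmbeddingFunction` satisfying everything `applyEmbeddings` touches above (`sourceColumn`, the optional `destColumn`, `embeddingDataType`, and `embeddingDimension`, plus `embed`); the length-based vectors are a stand-in for a real model call:

```typescript
import { Float32 } from "apache-arrow";
import { type EmbeddingFunction } from "./embedding/embedding_function";

const fakeEmbeddings: EmbeddingFunction<string> = {
  sourceColumn: "text",
  destColumn: "vector", // the default when omitted, per the code above
  embeddingDataType: new Float32(),
  embeddingDimension: 2, // lets applyEmbeddings handle zero-row tables
  async embed(texts: string[]): Promise<number[][]> {
    // Must return one output vector per input element, or applyEmbeddings throws.
    return texts.map((t) => [t.length, t.length % 7]);
  }
};
```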
 /*
@@ -417,21 +462,24 @@ async function applyEmbeddings<T> (table: ArrowTable, embeddings?: EmbeddingFunc
  * embedding columns. If no schema is provided then embedding columns will
  * be placed at the end of the table, after all of the input columns.
  */
-export async function convertToTable<T> (
+export async function convertToTable<T>(
   data: Array<Record<string, unknown>>,
   embeddings?: EmbeddingFunction<T>,
   makeTableOptions?: Partial<MakeArrowTableOptions>
 ): Promise<ArrowTable> {
-  const table = makeArrowTable(data, makeTableOptions)
-  return await applyEmbeddings(table, embeddings, makeTableOptions?.schema)
+  const table = makeArrowTable(data, makeTableOptions);
+  return await applyEmbeddings(table, embeddings, makeTableOptions?.schema);
 }
 // Creates the Arrow Type for a Vector column with dimension `dim`
-function newVectorType <T extends Float> (dim: number, innerType: T): FixedSizeList<T> {
+function newVectorType<T extends Float>(
+  dim: number,
+  innerType: T
+): FixedSizeList<T> {
   // Somewhere we always default to have the elements nullable, so we need to set it to true
   // otherwise we often get schema mismatches because the stored data always has schema with nullable elements
-  const children = new Field<T>('item', innerType, true)
-  return new FixedSizeList(dim, children)
+  const children = new Field<T>("item", innerType, true);
+  return new FixedSizeList(dim, children);
 }
 /**
@@ -441,17 +489,17 @@ function newVectorType <T extends Float> (dim: number, innerType: T): FixedSizeL
  *
  * `schema` is required if data is empty
  */
-export async function fromRecordsToBuffer<T> (
+export async function fromRecordsToBuffer<T>(
   data: Array<Record<string, unknown>>,
   embeddings?: EmbeddingFunction<T>,
   schema?: Schema
 ): Promise<Buffer> {
   if (schema !== undefined && schema !== null) {
-    schema = sanitizeSchema(schema)
+    schema = sanitizeSchema(schema);
   }
-  const table = await convertToTable(data, embeddings, { schema })
-  const writer = RecordBatchFileWriter.writeAll(table)
-  return Buffer.from(await writer.toUint8Array())
+  const table = await convertToTable(data, embeddings, { schema, embeddings });
+  const writer = RecordBatchFileWriter.writeAll(table);
+  return Buffer.from(await writer.toUint8Array());
 }
 /**
@@ -461,17 +509,17 @@ export async function fromRecordsToBuffer<T> (
  *
  * `schema` is required if data is empty
  */
-export async function fromRecordsToStreamBuffer<T> (
+export async function fromRecordsToStreamBuffer<T>(
   data: Array<Record<string, unknown>>,
   embeddings?: EmbeddingFunction<T>,
   schema?: Schema
 ): Promise<Buffer> {
   if (schema !== null && schema !== undefined) {
-    schema = sanitizeSchema(schema)
+    schema = sanitizeSchema(schema);
   }
-  const table = await convertToTable(data, embeddings, { schema })
-  const writer = RecordBatchStreamWriter.writeAll(table)
-  return Buffer.from(await writer.toUint8Array())
+  const table = await convertToTable(data, embeddings, { schema });
+  const writer = RecordBatchStreamWriter.writeAll(table);
+  return Buffer.from(await writer.toUint8Array());
 }
 /**
@@ -482,17 +530,17 @@ export async function fromRecordsToStreamBuffer<T> (
  *
  * `schema` is required if the table is empty
  */
-export async function fromTableToBuffer<T> (
+export async function fromTableToBuffer<T>(
   table: ArrowTable,
   embeddings?: EmbeddingFunction<T>,
   schema?: Schema
 ): Promise<Buffer> {
   if (schema !== null && schema !== undefined) {
-    schema = sanitizeSchema(schema)
+    schema = sanitizeSchema(schema);
   }
-  const tableWithEmbeddings = await applyEmbeddings(table, embeddings, schema)
-  const writer = RecordBatchFileWriter.writeAll(tableWithEmbeddings)
-  return Buffer.from(await writer.toUint8Array())
+  const tableWithEmbeddings = await applyEmbeddings(table, embeddings, schema);
+  const writer = RecordBatchFileWriter.writeAll(tableWithEmbeddings);
+  return Buffer.from(await writer.toUint8Array());
 }
 /**
@@ -503,49 +551,85 @@ export async function fromTableToBuffer<T> (
  *
  * `schema` is required if the table is empty
  */
-export async function fromTableToStreamBuffer<T> (
+export async function fromTableToStreamBuffer<T>(
   table: ArrowTable,
   embeddings?: EmbeddingFunction<T>,
   schema?: Schema
 ): Promise<Buffer> {
   if (schema !== null && schema !== undefined) {
-    schema = sanitizeSchema(schema)
+    schema = sanitizeSchema(schema);
   }
-  const tableWithEmbeddings = await applyEmbeddings(table, embeddings, schema)
-  const writer = RecordBatchStreamWriter.writeAll(tableWithEmbeddings)
-  return Buffer.from(await writer.toUint8Array())
+  const tableWithEmbeddings = await applyEmbeddings(table, embeddings, schema);
+  const writer = RecordBatchStreamWriter.writeAll(tableWithEmbeddings);
+  return Buffer.from(await writer.toUint8Array());
 }
-function alignBatch (batch: RecordBatch, schema: Schema): RecordBatch {
-  const alignedChildren = []
+function alignBatch(batch: RecordBatch, schema: Schema): RecordBatch {
+  const alignedChildren = [];
   for (const field of schema.fields) {
     const indexInBatch = batch.schema.fields?.findIndex(
       (f) => f.name === field.name
-    )
+    );
     if (indexInBatch < 0) {
       throw new Error(
         `The column ${field.name} was not found in the Arrow Table`
-      )
+      );
     }
-    alignedChildren.push(batch.data.children[indexInBatch])
+    alignedChildren.push(batch.data.children[indexInBatch]);
   }
   const newData = makeData({
     type: new Struct(schema.fields),
     length: batch.numRows,
     nullCount: batch.nullCount,
     children: alignedChildren
-  })
-  return new RecordBatch(schema, newData)
+  });
+  return new RecordBatch(schema, newData);
 }
-function alignTable (table: ArrowTable, schema: Schema): ArrowTable {
+function alignTable(table: ArrowTable, schema: Schema): ArrowTable {
   const alignedBatches = table.batches.map((batch) =>
     alignBatch(batch, schema)
-  )
-  return new ArrowTable(schema, alignedBatches)
+  );
+  return new ArrowTable(schema, alignedBatches);
 }
 // Creates an empty Arrow Table
-export function createEmptyTable (schema: Schema): ArrowTable {
-  return new ArrowTable(sanitizeSchema(schema))
+export function createEmptyTable(schema: Schema): ArrowTable {
+  return new ArrowTable(sanitizeSchema(schema));
+}
+
+function validateSchemaEmbeddings(
+  schema: Schema<any>,
+  data: Array<Record<string, unknown>>,
+  embeddings: EmbeddingFunction<any> | undefined
+) {
+  const fields = [];
+  const missingEmbeddingFields = [];
+  // First we check if the field is a `FixedSizeList`
+  // Then we check if the data contains the field
+  // if it does not, we add it to the list of missing embedding fields
+  // Finally, we check if those missing embedding fields are `this._embeddings`
+  // if they are not, we throw an error
+  for (const field of schema.fields) {
+    if (field.type instanceof FixedSizeList) {
+      if (data.length !== 0 && data?.[0]?.[field.name] === undefined) {
+        missingEmbeddingFields.push(field);
+      } else {
+        fields.push(field);
+      }
+    } else {
+      fields.push(field);
+    }
+  }
+  if (missingEmbeddingFields.length > 0 && embeddings === undefined) {
+    throw new Error(
+      `Table has embeddings: "${missingEmbeddingFields
+        .map((f) => f.name)
+        .join(",")}", but no embedding function was provided`
+    );
+  }
+  return new Schema(fields, schema.metadata);
 }
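The net effect of `validateSchemaEmbeddings`, sketched with the hypothetical `schema` and `fakeEmbeddings` from the earlier examples:

```typescript
// The schema declares a FixedSizeList "vector" column these records don't supply:
makeArrowTable([{ id: 1 }], { schema });
// -> Error: Table has embeddings: "vector", but no embedding function was provided

// Naming an embedding function satisfies the check; the vector column itself
// is filled in later (e.g. by applyEmbeddings via convertToTable).
makeArrowTable([{ id: 1 }], { schema, embeddings: fakeEmbeddings });
```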

View File

@@ -12,19 +12,20 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
-import { type Schema, Table as ArrowTable, tableFromIPC } from 'apache-arrow'
+import { type Schema, Table as ArrowTable, tableFromIPC } from "apache-arrow";
 import {
   createEmptyTable,
   fromRecordsToBuffer,
   fromTableToBuffer,
   makeArrowTable
-} from './arrow'
-import type { EmbeddingFunction } from './embedding/embedding_function'
-import { RemoteConnection } from './remote'
-import { Query } from './query'
-import { isEmbeddingFunction } from './embedding/embedding_function'
-import { type Literal, toSQL } from './util'
-import { type HttpMiddleware } from './middleware'
+} from "./arrow";
+import type { EmbeddingFunction } from "./embedding/embedding_function";
+import { RemoteConnection } from "./remote";
+import { Query } from "./query";
+import { isEmbeddingFunction } from "./embedding/embedding_function";
+import { type Literal, toSQL } from "./util";
+import { type HttpMiddleware } from "./middleware";
 const {
   databaseNew,
@@ -48,14 +49,18 @@ const {
   tableAlterColumns,
   tableDropColumns
   // eslint-disable-next-line @typescript-eslint/no-var-requires
-} = require('../native.js')
+} = require("../native.js");
-export { Query }
-export type { EmbeddingFunction }
-export { OpenAIEmbeddingFunction } from './embedding/openai'
-export { convertToTable, makeArrowTable, type MakeArrowTableOptions } from './arrow'
+export { Query };
+export type { EmbeddingFunction };
+export { OpenAIEmbeddingFunction } from "./embedding/openai";
+export {
+  convertToTable,
+  makeArrowTable,
+  type MakeArrowTableOptions
+} from "./arrow";
-const defaultAwsRegion = 'us-west-2'
+const defaultAwsRegion = "us-west-2";
 export interface AwsCredentials {
   accessKeyId: string
@@ -128,19 +133,19 @@ export interface ConnectionOptions {
   readConsistencyInterval?: number
 }
-function getAwsArgs (opts: ConnectionOptions): any[] {
-  const callArgs: any[] = []
-  const awsCredentials = opts.awsCredentials
+function getAwsArgs(opts: ConnectionOptions): any[] {
+  const callArgs: any[] = [];
+  const awsCredentials = opts.awsCredentials;
   if (awsCredentials !== undefined) {
-    callArgs.push(awsCredentials.accessKeyId)
-    callArgs.push(awsCredentials.secretKey)
-    callArgs.push(awsCredentials.sessionToken)
+    callArgs.push(awsCredentials.accessKeyId);
+    callArgs.push(awsCredentials.secretKey);
+    callArgs.push(awsCredentials.sessionToken);
   } else {
-    callArgs.fill(undefined, 0, 3)
+    callArgs.fill(undefined, 0, 3);
   }
-  callArgs.push(opts.awsRegion)
-  return callArgs
+  callArgs.push(opts.awsRegion);
+  return callArgs;
 }
 export interface CreateTableOptions<T> {
@@ -173,56 +178,56 @@ export interface CreateTableOptions<T> {
  *
  * @see {@link ConnectionOptions} for more details on the URI format.
  */
-export async function connect (uri: string): Promise<Connection>
+export async function connect(uri: string): Promise<Connection>;
 /**
  * Connect to a LanceDB instance with connection options.
  *
  * @param opts The {@link ConnectionOptions} to use when connecting to the database.
  */
-export async function connect (
+export async function connect(
   opts: Partial<ConnectionOptions>
-): Promise<Connection>
-export async function connect (
+): Promise<Connection>;
+export async function connect(
   arg: string | Partial<ConnectionOptions>
 ): Promise<Connection> {
-  let opts: ConnectionOptions
-  if (typeof arg === 'string') {
-    opts = { uri: arg }
+  let opts: ConnectionOptions;
+  if (typeof arg === "string") {
+    opts = { uri: arg };
   } else {
-    const keys = Object.keys(arg)
-    if (keys.length === 1 && keys[0] === 'uri' && typeof arg.uri === 'string') {
-      opts = { uri: arg.uri }
+    const keys = Object.keys(arg);
+    if (keys.length === 1 && keys[0] === "uri" && typeof arg.uri === "string") {
+      opts = { uri: arg.uri };
     } else {
       opts = Object.assign(
         {
-          uri: '',
+          uri: "",
           awsCredentials: undefined,
           awsRegion: defaultAwsRegion,
           apiKey: undefined,
           region: defaultAwsRegion
         },
         arg
-      )
+      );
     }
   }
-  if (opts.uri.startsWith('db://')) {
+  if (opts.uri.startsWith("db://")) {
     // Remote connection
-    return new RemoteConnection(opts)
+    return new RemoteConnection(opts);
   }
   const storageOptions = opts.storageOptions ?? {};
   if (opts.awsCredentials?.accessKeyId !== undefined) {
-    storageOptions.aws_access_key_id = opts.awsCredentials.accessKeyId
+    storageOptions.aws_access_key_id = opts.awsCredentials.accessKeyId;
   }
   if (opts.awsCredentials?.secretKey !== undefined) {
-    storageOptions.aws_secret_access_key = opts.awsCredentials.secretKey
+    storageOptions.aws_secret_access_key = opts.awsCredentials.secretKey;
   }
   if (opts.awsCredentials?.sessionToken !== undefined) {
-    storageOptions.aws_session_token = opts.awsCredentials.sessionToken
+    storageOptions.aws_session_token = opts.awsCredentials.sessionToken;
   }
   if (opts.awsRegion !== undefined) {
-    storageOptions.region = opts.awsRegion
+    storageOptions.region = opts.awsRegion;
   }
   // It's a pain to pass a record to Rust, so we convert it to an array of key-value pairs
   const storageOptionsArr = Object.entries(storageOptions);
@@ -231,8 +236,8 @@ export async function connect (
     opts.uri,
     storageOptionsArr,
     opts.readConsistencyInterval
-  )
-  return new LocalConnection(db, opts)
+  );
+  return new LocalConnection(db, opts);
 }
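Both overloads in practice, as a minimal sketch inside an async context (the bucket name and region are placeholders; the `allowHttp` key mirrors the integration test further down):

```typescript
import * as lancedb from "vectordb";

// String form: a local directory, an s3:// URI, or db:// for a remote connection.
const db = await lancedb.connect("./.lancedb");

// Options form: credentials and region are folded into storageOptions above.
const s3db = await lancedb.connect({
  uri: "s3://my-bucket/my-db",
  awsRegion: "us-east-1",
  storageOptions: { allowHttp: "true" }
});
```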
 /**
@@ -533,7 +538,11 @@ export interface Table<T = number[]> {
   * @param data the new data to insert
   * @param args parameters controlling how the operation should behave
   */
-  mergeInsert: (on: string, data: Array<Record<string, unknown>> | ArrowTable, args: MergeInsertArgs) => Promise<void>
+  mergeInsert: (
+    on: string,
+    data: Array<Record<string, unknown>> | ArrowTable,
+    args: MergeInsertArgs
+  ) => Promise<void>
  /**
   * List the indices on this table.
@@ -558,7 +567,9 @@ export interface Table<T = number[]> {
   * expressions will be evaluated for each row in the
   * table, and can reference existing columns in the table.
   */
-  addColumns(newColumnTransforms: Array<{ name: string, valueSql: string }>): Promise<void>
+  addColumns(
+    newColumnTransforms: Array<{ name: string, valueSql: string }>
+  ): Promise<void>
  /**
   * Alter the name or nullability of columns.
@@ -699,23 +710,23 @@ export interface IndexStats {
  * A connection to a LanceDB database.
  */
 export class LocalConnection implements Connection {
-  private readonly _options: () => ConnectionOptions
-  private readonly _db: any
+  private readonly _options: () => ConnectionOptions;
+  private readonly _db: any;
-  constructor (db: any, options: ConnectionOptions) {
-    this._options = () => options
-    this._db = db
+  constructor(db: any, options: ConnectionOptions) {
+    this._options = () => options;
+    this._db = db;
   }
-  get uri (): string {
-    return this._options().uri
+  get uri(): string {
+    return this._options().uri;
   }
   /**
    * Get the names of all tables in the database.
    */
-  async tableNames (): Promise<string[]> {
-    return databaseTableNames.call(this._db)
+  async tableNames(): Promise<string[]> {
+    return databaseTableNames.call(this._db);
   }
   /**
@@ -723,7 +734,7 @@ export class LocalConnection implements Connection {
    *
    * @param name The name of the table.
    */
-  async openTable (name: string): Promise<Table>
+  async openTable(name: string): Promise<Table>;
   /**
    * Open a table in the database.
@@ -734,23 +745,20 @@ export class LocalConnection implements Connection {
   async openTable<T>(
     name: string,
     embeddings: EmbeddingFunction<T>
-  ): Promise<Table<T>>
+  ): Promise<Table<T>>;
   async openTable<T>(
     name: string,
     embeddings?: EmbeddingFunction<T>
-  ): Promise<Table<T>>
+  ): Promise<Table<T>>;
   async openTable<T>(
     name: string,
     embeddings?: EmbeddingFunction<T>
   ): Promise<Table<T>> {
-    const tbl = await databaseOpenTable.call(
-      this._db,
-      name,
-    )
+    const tbl = await databaseOpenTable.call(this._db, name);
     if (embeddings !== undefined) {
-      return new LocalTable(tbl, name, this._options(), embeddings)
+      return new LocalTable(tbl, name, this._options(), embeddings);
     } else {
-      return new LocalTable(tbl, name, this._options())
+      return new LocalTable(tbl, name, this._options());
     }
   }
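Usage sketch for the overloads above, reusing `db` and `fakeEmbeddings` from the earlier sketches (the table name is hypothetical):

```typescript
const tbl = await db.openTable("my_table");

// With a per-table embedding function; it is forwarded to LocalTable and
// used by search() and add() below.
const tblWithEmbeddings = await db.openTable("my_table", fakeEmbeddings);
```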
@@ -760,32 +768,32 @@ export class LocalConnection implements Connection {
     optsOrEmbedding?: WriteOptions | EmbeddingFunction<T>,
     opt?: WriteOptions
   ): Promise<Table<T>> {
-    if (typeof name === 'string') {
-      let writeOptions: WriteOptions = new DefaultWriteOptions()
+    if (typeof name === "string") {
+      let writeOptions: WriteOptions = new DefaultWriteOptions();
       if (opt !== undefined && isWriteOptions(opt)) {
-        writeOptions = opt
+        writeOptions = opt;
       } else if (
         optsOrEmbedding !== undefined &&
         isWriteOptions(optsOrEmbedding)
       ) {
-        writeOptions = optsOrEmbedding
+        writeOptions = optsOrEmbedding;
       }
-      let embeddings: undefined | EmbeddingFunction<T>
+      let embeddings: undefined | EmbeddingFunction<T>;
       if (
         optsOrEmbedding !== undefined &&
         isEmbeddingFunction(optsOrEmbedding)
       ) {
-        embeddings = optsOrEmbedding
+        embeddings = optsOrEmbedding;
       }
       return await this.createTableImpl({
         name,
         data,
         embeddingFunction: embeddings,
         writeOptions
-      })
+      });
     }
-    return await this.createTableImpl(name)
+    return await this.createTableImpl(name);
   }
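A sketch of the string-name path above: records in the second position, with either a `WriteOptions` or an embedding function in the third (names and records are illustrative):

```typescript
const vectors = await db.createTable(
  "vectors",
  [{ id: 1, vector: [0.1, 0.2] }],
  { writeMode: lancedb.WriteMode.Overwrite } // detected via isWriteOptions
);

// Or an embedding function in the third position, detected via isEmbeddingFunction:
const texts = await db.createTable("texts", [{ text: "hello" }], fakeEmbeddings);
```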
   private async createTableImpl<T>({
@@ -801,27 +809,27 @@ export class LocalConnection implements Connection {
     embeddingFunction?: EmbeddingFunction<T> | undefined
     writeOptions?: WriteOptions | undefined
   }): Promise<Table<T>> {
-    let buffer: Buffer
-    function isEmpty (
+    let buffer: Buffer;
+    function isEmpty(
       data: Array<Record<string, unknown>> | ArrowTable<any>
     ): boolean {
       if (data instanceof ArrowTable) {
-        return data.data.length === 0
+        return data.data.length === 0;
       }
-      return data.length === 0
+      return data.length === 0;
     }
     if (data === undefined || isEmpty(data)) {
       if (schema === undefined) {
-        throw new Error('Either data or schema needs to be defined')
+        throw new Error("Either data or schema needs to be defined");
       }
-      buffer = await fromTableToBuffer(createEmptyTable(schema))
+      buffer = await fromTableToBuffer(createEmptyTable(schema));
     } else if (data instanceof ArrowTable) {
-      buffer = await fromTableToBuffer(data, embeddingFunction, schema)
+      buffer = await fromTableToBuffer(data, embeddingFunction, schema);
     } else {
       // data is Array<Record<...>>
-      buffer = await fromRecordsToBuffer(data, embeddingFunction, schema)
+      buffer = await fromRecordsToBuffer(data, embeddingFunction, schema);
     }
     const tbl = await tableCreate.call(
@@ -830,11 +838,11 @@ export class LocalConnection implements Connection {
       buffer,
       writeOptions?.writeMode?.toString(),
       ...getAwsArgs(this._options())
-    )
+    );
     if (embeddingFunction !== undefined) {
-      return new LocalTable(tbl, name, this._options(), embeddingFunction)
+      return new LocalTable(tbl, name, this._options(), embeddingFunction);
     } else {
-      return new LocalTable(tbl, name, this._options())
+      return new LocalTable(tbl, name, this._options());
     }
   }
@@ -842,69 +850,69 @@ export class LocalConnection implements Connection {
    * Drop an existing table.
    * @param name The name of the table to drop.
    */
-  async dropTable (name: string): Promise<void> {
-    await databaseDropTable.call(this._db, name)
+  async dropTable(name: string): Promise<void> {
+    await databaseDropTable.call(this._db, name);
   }
-  withMiddleware (middleware: HttpMiddleware): Connection {
-    return this
+  withMiddleware(middleware: HttpMiddleware): Connection {
+    return this;
   }
 }
 export class LocalTable<T = number[]> implements Table<T> {
-  private _tbl: any
-  private readonly _name: string
-  private readonly _isElectron: boolean
-  private readonly _embeddings?: EmbeddingFunction<T>
-  private readonly _options: () => ConnectionOptions
+  private _tbl: any;
+  private readonly _name: string;
+  private readonly _isElectron: boolean;
+  private readonly _embeddings?: EmbeddingFunction<T>;
+  private readonly _options: () => ConnectionOptions;
-  constructor (tbl: any, name: string, options: ConnectionOptions)
+  constructor(tbl: any, name: string, options: ConnectionOptions);
   /**
    * @param tbl
    * @param name
    * @param options
    * @param embeddings An embedding function to use when interacting with this table
    */
-  constructor (
+  constructor(
     tbl: any,
     name: string,
     options: ConnectionOptions,
     embeddings: EmbeddingFunction<T>
-  )
-  constructor (
+  );
+  constructor(
     tbl: any,
     name: string,
     options: ConnectionOptions,
     embeddings?: EmbeddingFunction<T>
   ) {
-    this._tbl = tbl
-    this._name = name
-    this._embeddings = embeddings
-    this._options = () => options
-    this._isElectron = this.checkElectron()
+    this._tbl = tbl;
+    this._name = name;
+    this._embeddings = embeddings;
+    this._options = () => options;
+    this._isElectron = this.checkElectron();
   }
-  get name (): string {
-    return this._name
+  get name(): string {
+    return this._name;
   }
   /**
    * Creates a search query to find the nearest neighbors of the given search term
    * @param query The query search term
    */
-  search (query: T): Query<T> {
-    return new Query(query, this._tbl, this._embeddings)
+  search(query: T): Query<T> {
+    return new Query(query, this._tbl, this._embeddings);
   }
   /**
    * Creates a filter query to find all rows matching the specified criteria
    * @param value The filter criteria (like SQL where clause syntax)
    */
-  filter (value: string): Query<T> {
-    return new Query(undefined, this._tbl, this._embeddings).filter(value)
+  filter(value: string): Query<T> {
+    return new Query(undefined, this._tbl, this._embeddings).filter(value);
   }
-  where = this.filter
+  where = this.filter;
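Query construction sketch; note that `limit` and `execute` live on the `Query` builder imported above and are assumed here rather than shown in this diff:

```typescript
// Vector search over the default T = number[]; with an embedding function
// attached, a raw input could be embedded first.
const neighbors = await tbl.search([0.1, 0.2]).limit(10).execute();

// Pure filter scan via the where alias defined above.
const rows = await tbl.where("id > 0").execute();
```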
   /**
    * Insert records into this Table.
@@ -912,16 +920,19 @@ export class LocalTable<T = number[]> implements Table<T> {
    * @param data Records to be inserted into the Table
    * @return The number of rows added to the table
    */
-  async add (
+  async add(
     data: Array<Record<string, unknown>> | ArrowTable
   ): Promise<number> {
-    const schema = await this.schema
-    let tbl: ArrowTable
+    const schema = await this.schema;
+    let tbl: ArrowTable;
     if (data instanceof ArrowTable) {
-      tbl = data
+      tbl = data;
     } else {
-      tbl = makeArrowTable(data, { schema })
+      tbl = makeArrowTable(data, { schema, embeddings: this._embeddings });
     }
     return tableAdd
       .call(
         this._tbl,
@@ -930,8 +941,8 @@ export class LocalTable<T = number[]> implements Table<T> {
         ...getAwsArgs(this._options())
       )
       .then((newTable: any) => {
-        this._tbl = newTable
-      })
+        this._tbl = newTable;
+      });
   }
   /**
@@ -940,14 +951,14 @@ export class LocalTable<T = number[]> implements Table<T> {
    * @param data Records to be inserted into the Table
    * @return The number of rows added to the table
    */
-  async overwrite (
+  async overwrite(
     data: Array<Record<string, unknown>> | ArrowTable
   ): Promise<number> {
-    let buffer: Buffer
+    let buffer: Buffer;
     if (data instanceof ArrowTable) {
-      buffer = await fromTableToBuffer(data, this._embeddings)
+      buffer = await fromTableToBuffer(data, this._embeddings);
     } else {
-      buffer = await fromRecordsToBuffer(data, this._embeddings)
+      buffer = await fromRecordsToBuffer(data, this._embeddings);
     }
     return tableAdd
       .call(
@@ -957,8 +968,8 @@ export class LocalTable<T = number[]> implements Table<T> {
         ...getAwsArgs(this._options())
       )
       .then((newTable: any) => {
-        this._tbl = newTable
-      })
+        this._tbl = newTable;
+      });
   }
   /**
@@ -966,26 +977,26 @@ export class LocalTable<T = number[]> implements Table<T> {
    *
    * @param indexParams The parameters of this Index, @see VectorIndexParams.
    */
-  async createIndex (indexParams: VectorIndexParams): Promise<any> {
+  async createIndex(indexParams: VectorIndexParams): Promise<any> {
     return tableCreateVectorIndex
       .call(this._tbl, indexParams)
       .then((newTable: any) => {
-        this._tbl = newTable
-      })
+        this._tbl = newTable;
+      });
   }
-  async createScalarIndex (column: string, replace?: boolean): Promise<void> {
+  async createScalarIndex(column: string, replace?: boolean): Promise<void> {
     if (replace === undefined) {
-      replace = true
+      replace = true;
     }
-    return tableCreateScalarIndex.call(this._tbl, column, replace)
+    return tableCreateScalarIndex.call(this._tbl, column, replace);
   }
   /**
    * Returns the number of rows in this table.
    */
-  async countRows (filter?: string): Promise<number> {
-    return tableCountRows.call(this._tbl, filter)
+  async countRows(filter?: string): Promise<number> {
+    return tableCountRows.call(this._tbl, filter);
   }
   /**
@@ -993,10 +1004,10 @@ export class LocalTable<T = number[]> implements Table<T> {
    *
    * @param filter A filter in the same format used by a sql WHERE clause.
    */
-  async delete (filter: string): Promise<void> {
+  async delete(filter: string): Promise<void> {
     return tableDelete.call(this._tbl, filter).then((newTable: any) => {
-      this._tbl = newTable
-    })
+      this._tbl = newTable;
+    });
   }
   /**
@@ -1006,55 +1017,65 @@ export class LocalTable<T = number[]> implements Table<T> {
    *
    * @returns
    */
-  async update (args: UpdateArgs | UpdateSqlArgs): Promise<void> {
-    let filter: string | null
-    let updates: Record<string, string>
+  async update(args: UpdateArgs | UpdateSqlArgs): Promise<void> {
+    let filter: string | null;
+    let updates: Record<string, string>;
-    if ('valuesSql' in args) {
-      filter = args.where ?? null
-      updates = args.valuesSql
+    if ("valuesSql" in args) {
+      filter = args.where ?? null;
+      updates = args.valuesSql;
     } else {
-      filter = args.where ?? null
-      updates = {}
+      filter = args.where ?? null;
+      updates = {};
       for (const [key, value] of Object.entries(args.values)) {
-        updates[key] = toSQL(value)
+        updates[key] = toSQL(value);
       }
     }
     return tableUpdate
       .call(this._tbl, filter, updates)
      .then((newTable: any) => {
-        this._tbl = newTable
-      })
+        this._tbl = newTable;
+      });
   }
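Both `UpdateArgs` shapes handled above, as a sketch:

```typescript
// Literal values: converted to SQL via toSQL().
await tbl.update({ where: "id = 1", values: { price: 10 } });

// Raw SQL expressions: passed through as-is.
await tbl.update({ where: "id = 2", valuesSql: { price: "price + 1" } });
```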
-  async mergeInsert (on: string, data: Array<Record<string, unknown>> | ArrowTable, args: MergeInsertArgs): Promise<void> {
-    let whenMatchedUpdateAll = false
-    let whenMatchedUpdateAllFilt = null
-    if (args.whenMatchedUpdateAll !== undefined && args.whenMatchedUpdateAll !== null) {
-      whenMatchedUpdateAll = true
+  async mergeInsert(
+    on: string,
+    data: Array<Record<string, unknown>> | ArrowTable,
+    args: MergeInsertArgs
+  ): Promise<void> {
+    let whenMatchedUpdateAll = false;
+    let whenMatchedUpdateAllFilt = null;
+    if (
+      args.whenMatchedUpdateAll !== undefined &&
+      args.whenMatchedUpdateAll !== null
+    ) {
+      whenMatchedUpdateAll = true;
      if (args.whenMatchedUpdateAll !== true) {
-        whenMatchedUpdateAllFilt = args.whenMatchedUpdateAll
+        whenMatchedUpdateAllFilt = args.whenMatchedUpdateAll;
      }
    }
-    const whenNotMatchedInsertAll = args.whenNotMatchedInsertAll ?? false
-    let whenNotMatchedBySourceDelete = false
-    let whenNotMatchedBySourceDeleteFilt = null
-    if (args.whenNotMatchedBySourceDelete !== undefined && args.whenNotMatchedBySourceDelete !== null) {
-      whenNotMatchedBySourceDelete = true
+    const whenNotMatchedInsertAll = args.whenNotMatchedInsertAll ?? false;
+    let whenNotMatchedBySourceDelete = false;
+    let whenNotMatchedBySourceDeleteFilt = null;
+    if (
+      args.whenNotMatchedBySourceDelete !== undefined &&
+      args.whenNotMatchedBySourceDelete !== null
+    ) {
+      whenNotMatchedBySourceDelete = true;
      if (args.whenNotMatchedBySourceDelete !== true) {
-        whenNotMatchedBySourceDeleteFilt = args.whenNotMatchedBySourceDelete
+        whenNotMatchedBySourceDeleteFilt = args.whenNotMatchedBySourceDelete;
      }
    }
-    const schema = await this.schema
-    let tbl: ArrowTable
+    const schema = await this.schema;
+    let tbl: ArrowTable;
    if (data instanceof ArrowTable) {
-      tbl = data
+      tbl = data;
    } else {
-      tbl = makeArrowTable(data, { schema })
+      tbl = makeArrowTable(data, { schema });
    }
-    const buffer = await fromTableToBuffer(tbl, this._embeddings, schema)
+    const buffer = await fromTableToBuffer(tbl, this._embeddings, schema);
    this._tbl = await tableMergeInsert.call(
      this._tbl,
@@ -1065,7 +1086,7 @@ export class LocalTable<T = number[]> implements Table<T> {
      whenNotMatchedBySourceDelete,
      whenNotMatchedBySourceDeleteFilt,
      buffer
-    )
+    );
  }
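An upsert sketch matching the argument handling above: `true` enables a clause unconditionally, while a string becomes that clause's filter (the records are hypothetical):

```typescript
await tbl.mergeInsert("id", [{ id: 1, vector: [0.3, 0.4] }], {
  whenMatchedUpdateAll: true,
  whenNotMatchedInsertAll: true,
  // String form: only delete unmatched target rows that pass this filter.
  whenNotMatchedBySourceDelete: "id > 100"
});
```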
   /**
@@ -1083,16 +1104,16 @@ export class LocalTable<T = number[]> implements Table<T> {
    * uphold this promise can lead to corrupted tables.
    * @returns
    */
-  async cleanupOldVersions (
+  async cleanupOldVersions(
     olderThan?: number,
     deleteUnverified?: boolean
   ): Promise<CleanupStats> {
     return tableCleanupOldVersions
       .call(this._tbl, olderThan, deleteUnverified)
       .then((res: { newTable: any, metrics: CleanupStats }) => {
-        this._tbl = res.newTable
-        return res.metrics
-      })
+        this._tbl = res.newTable;
+        return res.metrics;
+      });
   }
   /**
@@ -1106,62 +1127,64 @@ export class LocalTable<T = number[]> implements Table<T> {
    * for most tables.
    * @returns Metrics about the compaction operation.
    */
-  async compactFiles (options?: CompactionOptions): Promise<CompactionMetrics> {
-    const optionsArg = options ?? {}
+  async compactFiles(options?: CompactionOptions): Promise<CompactionMetrics> {
+    const optionsArg = options ?? {};
     return tableCompactFiles
       .call(this._tbl, optionsArg)
       .then((res: { newTable: any, metrics: CompactionMetrics }) => {
-        this._tbl = res.newTable
-        return res.metrics
-      })
+        this._tbl = res.newTable;
+        return res.metrics;
+      });
   }
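Housekeeping sketch for the two calls above; both swap in the new native table handle and return their metrics. The unit of `olderThan` is not shown in this excerpt, so it is left at its default:

```typescript
const compactionMetrics = await tbl.compactFiles({
  targetRowsPerFragment: 1024 * 1024 // see CompactionOptions below
});

const cleanupStats = await tbl.cleanupOldVersions(); // default threshold
```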
-  async listIndices (): Promise<VectorIndex[]> {
-    return tableListIndices.call(this._tbl)
+  async listIndices(): Promise<VectorIndex[]> {
+    return tableListIndices.call(this._tbl);
   }
-  async indexStats (indexUuid: string): Promise<IndexStats> {
-    return tableIndexStats.call(this._tbl, indexUuid)
+  async indexStats(indexUuid: string): Promise<IndexStats> {
+    return tableIndexStats.call(this._tbl, indexUuid);
   }
-  get schema (): Promise<Schema> {
+  get schema(): Promise<Schema> {
     // empty table
-    return this.getSchema()
+    return this.getSchema();
   }
-  private async getSchema (): Promise<Schema> {
-    const buffer = await tableSchema.call(this._tbl, this._isElectron)
-    const table = tableFromIPC(buffer)
-    return table.schema
+  private async getSchema(): Promise<Schema> {
+    const buffer = await tableSchema.call(this._tbl, this._isElectron);
+    const table = tableFromIPC(buffer);
+    return table.schema;
   }
   // See https://github.com/electron/electron/issues/2288
-  private checkElectron (): boolean {
+  private checkElectron(): boolean {
     try {
       // eslint-disable-next-line no-prototype-builtins
       return (
-        Object.prototype.hasOwnProperty.call(process?.versions, 'electron') ||
-        navigator?.userAgent?.toLowerCase()?.includes(' electron')
-      )
+        Object.prototype.hasOwnProperty.call(process?.versions, "electron") ||
+        navigator?.userAgent?.toLowerCase()?.includes(" electron")
+      );
     } catch (e) {
-      return false
+      return false;
     }
   }
-  async addColumns (newColumnTransforms: Array<{ name: string, valueSql: string }>): Promise<void> {
-    return tableAddColumns.call(this._tbl, newColumnTransforms)
+  async addColumns(
+    newColumnTransforms: Array<{ name: string, valueSql: string }>
+  ): Promise<void> {
+    return tableAddColumns.call(this._tbl, newColumnTransforms);
   }
-  async alterColumns (columnAlterations: ColumnAlteration[]): Promise<void> {
-    return tableAlterColumns.call(this._tbl, columnAlterations)
+  async alterColumns(columnAlterations: ColumnAlteration[]): Promise<void> {
+    return tableAlterColumns.call(this._tbl, columnAlterations);
   }
-  async dropColumns (columnNames: string[]): Promise<void> {
-    return tableDropColumns.call(this._tbl, columnNames)
+  async dropColumns(columnNames: string[]): Promise<void> {
+    return tableDropColumns.call(this._tbl, columnNames);
   }
-  withMiddleware (middleware: HttpMiddleware): Table<T> {
-    return this
+  withMiddleware(middleware: HttpMiddleware): Table<T> {
+    return this;
   }
 }
@@ -1184,7 +1207,7 @@ export interface CompactionOptions {
    */
   targetRowsPerFragment?: number
   /**
    * The maximum number of rows per group. Defaults to 1024.
    */
   maxRowsPerGroup?: number
   /**
@@ -1284,21 +1307,21 @@ export interface IvfPQIndexConfig {
    */
   index_cache_size?: number
-  type: 'ivf_pq'
+  type: "ivf_pq"
 }
-export type VectorIndexParams = IvfPQIndexConfig
+export type VectorIndexParams = IvfPQIndexConfig;
 /**
  * Write mode for writing a table.
  */
 export enum WriteMode {
   /** Create a new {@link Table}. */
-  Create = 'create',
+  Create = "create",
   /** Overwrite the existing {@link Table} if presented. */
-  Overwrite = 'overwrite',
+  Overwrite = "overwrite",
   /** Append new data to the table. */
-  Append = 'append',
+  Append = "append",
 }
 /**
@@ -1310,14 +1333,14 @@ export interface WriteOptions {
 }
 export class DefaultWriteOptions implements WriteOptions {
-  writeMode = WriteMode.Create
+  writeMode = WriteMode.Create;
 }
-export function isWriteOptions (value: any): value is WriteOptions {
+export function isWriteOptions(value: any): value is WriteOptions {
   return (
     Object.keys(value).length === 1 &&
-    (value.writeMode === undefined || typeof value.writeMode === 'string')
-  )
+    (value.writeMode === undefined || typeof value.writeMode === "string")
+  );
 }
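The guard above keys on shape rather than class, which makes it easy to probe:

```typescript
isWriteOptions({ writeMode: WriteMode.Append });       // true: one string-valued key
isWriteOptions(new DefaultWriteOptions());             // true: single writeMode field
isWriteOptions({ writeMode: WriteMode.Append, x: 1 }); // false: extra key
```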
 /**
@@ -1327,15 +1350,15 @@ export enum MetricType {
   /**
    * Euclidean distance
    */
-  L2 = 'l2',
+  L2 = "l2",
   /**
    * Cosine distance
    */
-  Cosine = 'cosine',
+  Cosine = "cosine",
   /**
    * Dot product
    */
-  Dot = 'dot',
+  Dot = "dot",
 }
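Index creation sketch using only the `IvfPQIndexConfig` fields visible in this excerpt; the full interface has more knobs (including the `MetricType` wiring), which fall in the elided lines above:

```typescript
await tbl.createIndex({
  type: "ivf_pq",
  index_cache_size: 256
});
```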

View File

@@ -51,7 +51,7 @@ describe('LanceDB Mirrored Store Integration test', function () {
   const dir = tmpdir()
   console.log(dir)
-  const conn = await lancedb.connect(`s3://lancedb-integtest?mirroredStore=${dir}`)
+  const conn = await lancedb.connect({ uri: `s3://lancedb-integtest?mirroredStore=${dir}`, storageOptions: { allowHttp: 'true' } })
   const data = Array(200).fill({ vector: Array(128).fill(1.0), id: 0 })
   data.push(...Array(200).fill({ vector: Array(128).fill(1.0), id: 1 }))
   data.push(...Array(200).fill({ vector: Array(128).fill(1.0), id: 2 }))

View File

@@ -140,6 +140,9 @@ export class RemoteConnection implements Connection {
       schema = nameOrOpts.schema
       embeddings = nameOrOpts.embeddingFunction
       tableName = nameOrOpts.name
+      if (data === undefined) {
+        data = nameOrOpts.data
+      }
     }
     let buffer: Buffer

View File

@@ -32,7 +32,7 @@ import {
  Bool,
  Date_,
  Decimal,
  type DataType,
  Dictionary,
  Binary,
  Float32,
@@ -74,12 +74,12 @@ import {
  DurationNanosecond,
  DurationMicrosecond,
  DurationMillisecond,
  DurationSecond
} from "apache-arrow";
import type { IntBitWidth, TimeBitWidth } from "apache-arrow/type";

function sanitizeMetadata(
  metadataLike?: unknown
): Map<string, string> | undefined {
  if (metadataLike === undefined || metadataLike === null) {
    return undefined;
@@ -90,7 +90,7 @@ function sanitizeMetadata(
  for (const item of metadataLike) {
    if (!(typeof item[0] === "string" || !(typeof item[1] === "string"))) {
      throw Error(
        "Expected metadata, if present, to be a Map<string, string> but it had non-string keys or values"
      );
    }
  }
@@ -105,7 +105,7 @@ function sanitizeInt(typeLike: object) {
    typeof typeLike.isSigned !== "boolean"
  ) {
    throw Error(
      "Expected an Int Type to have a `bitWidth` and `isSigned` property"
    );
  }
  return new Int(typeLike.isSigned, typeLike.bitWidth as IntBitWidth);
@@ -128,7 +128,7 @@ function sanitizeDecimal(typeLike: object) {
    typeof typeLike.bitWidth !== "number"
  ) {
    throw Error(
      "Expected a Decimal Type to have `scale`, `precision`, and `bitWidth` properties"
    );
  }
  return new Decimal(typeLike.scale, typeLike.precision, typeLike.bitWidth);
@@ -149,7 +149,7 @@ function sanitizeTime(typeLike: object) {
    typeof typeLike.bitWidth !== "number"
  ) {
    throw Error(
      "Expected a Time type to have `unit` and `bitWidth` properties"
    );
  }
  return new Time(typeLike.unit, typeLike.bitWidth as TimeBitWidth);
@@ -172,7 +172,7 @@ function sanitizeTypedTimestamp(
    | typeof TimestampNanosecond
    | typeof TimestampMicrosecond
    | typeof TimestampMillisecond
    | typeof TimestampSecond
) {
  let timezone = null;
  if ("timezone" in typeLike && typeof typeLike.timezone === "string") {
@@ -191,7 +191,7 @@ function sanitizeInterval(typeLike: object) {
function sanitizeList(typeLike: object) {
  if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
    throw Error(
      "Expected a List type to have an array-like `children` property"
    );
  }
  if (typeLike.children.length !== 1) {
@@ -203,7 +203,7 @@ function sanitizeList(typeLike: object) {
function sanitizeStruct(typeLike: object) {
  if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
    throw Error(
      "Expected a Struct type to have an array-like `children` property"
    );
  }
  return new Struct(typeLike.children.map((child) => sanitizeField(child)));
@@ -216,47 +216,47 @@ function sanitizeUnion(typeLike: object) {
    typeof typeLike.mode !== "number"
  ) {
    throw Error(
      "Expected a Union type to have `typeIds` and `mode` properties"
    );
  }
  if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
    throw Error(
      "Expected a Union type to have an array-like `children` property"
    );
  }
  return new Union(
    typeLike.mode,
    typeLike.typeIds as any,
    typeLike.children.map((child) => sanitizeField(child))
  );
}

function sanitizeTypedUnion(
  typeLike: object,
  UnionType: typeof DenseUnion | typeof SparseUnion
) {
  if (!("typeIds" in typeLike)) {
    throw Error(
      "Expected a DenseUnion/SparseUnion type to have a `typeIds` property"
    );
  }
  if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
    throw Error(
      "Expected a DenseUnion/SparseUnion type to have an array-like `children` property"
    );
  }
  return new UnionType(
    typeLike.typeIds as any,
    typeLike.children.map((child) => sanitizeField(child))
  );
}

function sanitizeFixedSizeBinary(typeLike: object) {
  if (!("byteWidth" in typeLike) || typeof typeLike.byteWidth !== "number") {
    throw Error(
      "Expected a FixedSizeBinary type to have a `byteWidth` property"
    );
  }
  return new FixedSizeBinary(typeLike.byteWidth);
@@ -268,7 +268,7 @@ function sanitizeFixedSizeList(typeLike: object) {
  }
  if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
    throw Error(
      "Expected a FixedSizeList type to have an array-like `children` property"
    );
  }
  if (typeLike.children.length !== 1) {
@@ -276,14 +276,14 @@ function sanitizeFixedSizeList(typeLike: object) {
  }
  return new FixedSizeList(
    typeLike.listSize,
    sanitizeField(typeLike.children[0])
  );
}

function sanitizeMap(typeLike: object) {
  if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
    throw Error(
      "Expected a Map type to have an array-like `children` property"
    );
  }
  if (!("keysSorted" in typeLike) || typeof typeLike.keysSorted !== "boolean") {
@@ -291,7 +291,7 @@ function sanitizeMap(typeLike: object) {
  }
  return new Map_(
    typeLike.children.map((field) => sanitizeField(field)) as any,
    typeLike.keysSorted
  );
}
@@ -319,7 +319,7 @@ function sanitizeDictionary(typeLike: object) {
    sanitizeType(typeLike.dictionary),
    sanitizeType(typeLike.indices) as any,
    typeLike.id,
    typeLike.isOrdered
  );
}
@@ -454,7 +454,7 @@ function sanitizeField(fieldLike: unknown): Field {
    !("nullable" in fieldLike)
  ) {
    throw Error(
      "The field passed in is missing a `type`/`name`/`nullable` property"
    );
  }
  const type = sanitizeType(fieldLike.type);
@@ -489,7 +489,7 @@ export function sanitizeSchema(schemaLike: unknown): Schema {
  }
  if (!("fields" in schemaLike)) {
    throw Error(
      "The schema passed in does not appear to be a schema (no 'fields' property)"
    );
  }
  let metadata;
@@ -498,11 +498,11 @@ export function sanitizeSchema(schemaLike: unknown): Schema {
  }
  if (!Array.isArray(schemaLike.fields)) {
    throw Error(
      "The schema passed in had a 'fields' property but it was not an array"
    );
  }
  const sanitizedFields = schemaLike.fields.map((field) =>
    sanitizeField(field)
  );
  return new Schema(sanitizedFields, metadata);
}

File diff suppressed because it is too large


@@ -1,3 +0,0 @@
**/dist/**/*
**/native.js
**/native.d.ts

nodejs/.gitignore (new file)

@@ -0,0 +1 @@
yarn.lock


@@ -1 +0,0 @@
.eslintignore


@@ -43,29 +43,20 @@ npm run test
### Running lint / format

LanceDb uses [biome](https://biomejs.dev/) for linting and formatting. If you are using VSCode you will need to install the official [Biome](https://marketplace.visualstudio.com/items?itemName=biomejs.biome) extension.

To manually lint your code you can run:

```sh
npm run lint
```

To automatically fix all fixable issues:

```sh
npm run lint-fix
```

If you do not have your workspace root set to the `nodejs` directory, unfortunately the extension will not work. You can still run the linting and formatting commands manually.

### Generating docs


@@ -13,32 +13,27 @@
// limitations under the License.

import {
  Binary,
  Bool,
  DataType,
  Dictionary,
  Field,
  FixedSizeList,
  Float,
  Float16,
  Float32,
  Float64,
  Int32,
  Int64,
  List,
  MetadataVersion,
  Precision,
  Schema,
  Struct,
  type Table,
  Type,
  Utf8,
  tableFromIPC,
} from "apache-arrow";
import {
  Dictionary as OldDictionary,
@@ -46,14 +41,25 @@ import {
  FixedSizeList as OldFixedSizeList,
  Float32 as OldFloat32,
  Int32 as OldInt32,
  Schema as OldSchema,
  Struct as OldStruct,
  TimestampNanosecond as OldTimestampNanosecond,
  Utf8 as OldUtf8,
} from "apache-arrow-old";
import {
  convertToTable,
  fromTableToBuffer,
  makeArrowTable,
  makeEmptyTable,
} from "../lancedb/arrow";
import {
  EmbeddingFunction,
  FieldOptions,
  FunctionOptions,
} from "../lancedb/embedding/embedding_function";
import { EmbeddingFunctionConfig } from "../lancedb/embedding/registry";

// biome-ignore lint/suspicious/noExplicitAny: skip
function sampleRecords(): Array<Record<string, any>> {
  return [
    {
@@ -280,23 +286,46 @@ describe("The function makeArrowTable", function () {
  });
});

class DummyEmbedding extends EmbeddingFunction<string> {
  toJSON(): Partial<FunctionOptions> {
    return {};
  }
  async computeSourceEmbeddings(data: string[]): Promise<number[][]> {
    return data.map(() => [0.0, 0.0]);
  }
  ndims(): number {
    return 2;
  }
  embeddingDataType() {
    return new Float16();
  }
}

class DummyEmbeddingWithNoDimension extends EmbeddingFunction<string> {
  toJSON(): Partial<FunctionOptions> {
    return {};
  }
  embeddingDataType(): Float {
    return new Float16();
  }
  async computeSourceEmbeddings(data: string[]): Promise<number[][]> {
    return data.map(() => [0.0, 0.0]);
  }
}

const dummyEmbeddingConfig: EmbeddingFunctionConfig = {
  sourceColumn: "string",
  function: new DummyEmbedding(),
};
const dummyEmbeddingConfigWithNoDimension: EmbeddingFunctionConfig = {
  sourceColumn: "string",
  function: new DummyEmbeddingWithNoDimension(),
};

describe("convertToTable", function () {
  it("will infer data types correctly", async function () {
@@ -331,7 +360,7 @@ describe("convertToTable", function () {
  it("will apply embeddings", async function () {
    const records = sampleRecords();
    const table = await convertToTable(records, dummyEmbeddingConfig);
    expect(DataType.isFixedSizeList(table.getChild("vector")?.type)).toBe(true);
    expect(table.getChild("vector")?.type.children[0].type.toString()).toEqual(
      new Float16().toString(),
@@ -340,7 +369,7 @@ describe("convertToTable", function () {
  it("will fail if missing the embedding source column", async function () {
    await expect(
      convertToTable([{ id: 1 }], dummyEmbeddingConfig),
    ).rejects.toThrow("'string' was not present");
  });
@@ -351,7 +380,7 @@ describe("convertToTable", function () {
    const table = makeEmptyTable(schema);

    // If the embedding specifies the dimension we are fine
    await fromTableToBuffer(table, dummyEmbeddingConfig);

    // We can also supply a schema and should be ok
    const schemaWithEmbedding = new Schema([
@@ -364,13 +393,13 @@ describe("convertToTable", function () {
    ]);
    await fromTableToBuffer(
      table,
      dummyEmbeddingConfigWithNoDimension,
      schemaWithEmbedding,
    );

    // Otherwise we will get an error
    await expect(
      fromTableToBuffer(table, dummyEmbeddingConfigWithNoDimension),
    ).rejects.toThrow("does not specify `embeddingDimension`");
  });
@@ -383,7 +412,7 @@ describe("convertToTable", function () {
        false,
      ),
    ]);
    const table = await convertToTable([], dummyEmbeddingConfig, { schema });
    expect(DataType.isFixedSizeList(table.getChild("vector")?.type)).toBe(true);
    expect(table.getChild("vector")?.type.children[0].type.toString()).toEqual(
      new Float16().toString(),
@@ -393,16 +422,17 @@ describe("convertToTable", function () {
  it("will complain if embeddings present but schema missing embedding column", async function () {
    const schema = new Schema([new Field("string", new Utf8(), false)]);
    await expect(
      convertToTable([], dummyEmbeddingConfig, { schema }),
    ).rejects.toThrow("column vector was missing");
  });

  it("will provide a nice error if run twice", async function () {
    const records = sampleRecords();
    const table = await convertToTable(records, dummyEmbeddingConfig);
    // fromTableToBuffer will try and apply the embeddings again
    await expect(
      fromTableToBuffer(table, dummyEmbeddingConfig),
    ).rejects.toThrow("already existed");
  });
});
@@ -438,7 +468,7 @@ describe("when using two versions of arrow", function () {
      new OldField("ts_no_tz", new OldTimestampNanosecond(null)),
    ]),
  ),
  // biome-ignore lint/suspicious/noExplicitAny: skip
]) as any;
schema.metadataVersion = MetadataVersion.V5;
const table = makeArrowTable([], { schema });


@@ -13,12 +13,13 @@
// limitations under the License.

import * as tmp from "tmp";
import { Connection, connect } from "../lancedb";

describe("when connecting", () => {
  let tmpDir: tmp.DirResult;
  beforeEach(() => {
    tmpDir = tmp.dirSync({ unsafeCleanup: true });
  });
  afterEach(() => tmpDir.removeCallback());

  it("should connect", async () => {


@@ -0,0 +1,169 @@
// Copyright 2024 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import * as arrow from "apache-arrow";
import * as arrowOld from "apache-arrow-old";
import * as tmp from "tmp";
import { connect } from "../lancedb";
import { EmbeddingFunction, LanceSchema } from "../lancedb/embedding";
import { getRegistry, register } from "../lancedb/embedding/registry";
describe.each([arrow, arrowOld])("LanceSchema", (arrow) => {
test("should preserve input order", async () => {
const schema = LanceSchema({
id: new arrow.Int32(),
text: new arrow.Utf8(),
vector: new arrow.Float32(),
});
expect(schema.fields.map((x) => x.name)).toEqual(["id", "text", "vector"]);
});
});
describe("Registry", () => {
let tmpDir: tmp.DirResult;
beforeEach(() => {
tmpDir = tmp.dirSync({ unsafeCleanup: true });
});
afterEach(() => {
tmpDir.removeCallback();
getRegistry().reset();
});
it("should register a new item to the registry", async () => {
@register("mock-embedding")
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {
someText: "hello",
};
}
constructor() {
super();
}
ndims() {
return 3;
}
embeddingDataType(): arrow.Float {
return new arrow.Float32();
}
async computeSourceEmbeddings(data: string[]) {
return data.map(() => [1, 2, 3]);
}
}
const func = getRegistry()
.get<MockEmbeddingFunction>("mock-embedding")!
.create();
const schema = LanceSchema({
id: new arrow.Int32(),
text: func.sourceField(new arrow.Utf8()),
vector: func.vectorField(),
});
const db = await connect(tmpDir.name);
const table = await db.createTable(
"test",
[
{ id: 1, text: "hello" },
{ id: 2, text: "world" },
],
{ schema },
);
const expected = [
[1, 2, 3],
[1, 2, 3],
];
const actual = await table.query().toArrow();
const vectors = actual
.getChild("vector")
?.toArray()
.map((x: unknown) => {
if (x instanceof arrow.Vector) {
return [...x];
} else {
return x;
}
});
expect(vectors).toEqual(expected);
});
test("should error if registering with the same name", async () => {
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {
someText: "hello",
};
}
constructor() {
super();
}
ndims() {
return 3;
}
embeddingDataType(): arrow.Float {
return new arrow.Float32();
}
async computeSourceEmbeddings(data: string[]) {
return data.map(() => [1, 2, 3]);
}
}
register("mock-embedding")(MockEmbeddingFunction);
expect(() => register("mock-embedding")(MockEmbeddingFunction)).toThrow(
'Embedding function with alias "mock-embedding" already exists',
);
});
test("schema should contain correct metadata", async () => {
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {
someText: "hello",
};
}
constructor() {
super();
}
ndims() {
return 3;
}
embeddingDataType(): arrow.Float {
return new arrow.Float32();
}
async computeSourceEmbeddings(data: string[]) {
return data.map(() => [1, 2, 3]);
}
}
const func = new MockEmbeddingFunction();
const schema = LanceSchema({
id: new arrow.Int32(),
text: func.sourceField(new arrow.Utf8()),
vector: func.vectorField(),
});
const expectedMetadata = new Map<string, string>([
[
"embedding_functions",
JSON.stringify([
{
sourceColumn: "text",
vectorColumn: "vector",
name: "MockEmbeddingFunction",
model: { someText: "hello" },
},
]),
],
]);
expect(schema.metadata).toEqual(expectedMetadata);
});
});


@@ -14,7 +14,11 @@
/* eslint-disable @typescript-eslint/naming-convention */

import {
  CreateKeyCommand,
  KMSClient,
  ScheduleKeyDeletionCommand,
} from "@aws-sdk/client-kms";
import {
  CreateBucketCommand,
  DeleteBucketCommand,
@@ -23,11 +27,7 @@ import {
  ListObjectsV2Command,
  S3Client,
} from "@aws-sdk/client-s3";
import { connect } from "../lancedb";

// Skip these tests unless the S3_TEST environment variable is set
const maybeDescribe = process.env.S3_TEST ? describe : describe.skip;
@@ -63,9 +63,10 @@ class S3Bucket {
    // Delete the bucket if it already exists
    try {
      await this.deleteBucket(client, name);
    } catch {
      // It's fine if the bucket doesn't exist
    }
    // biome-ignore lint/style/useNamingConvention: we dont control s3's api
    await client.send(new CreateBucketCommand({ Bucket: name }));
    return new S3Bucket(name);
  }
@@ -78,27 +79,32 @@ class S3Bucket {
  static async deleteBucket(client: S3Client, name: string) {
    // Must delete all objects before we can delete the bucket
    const objects = await client.send(
      // biome-ignore lint/style/useNamingConvention: we dont control s3's api
      new ListObjectsV2Command({ Bucket: name }),
    );
    if (objects.Contents) {
      for (const object of objects.Contents) {
        await client.send(
          // biome-ignore lint/style/useNamingConvention: we dont control s3's api
          new DeleteObjectCommand({ Bucket: name, Key: object.Key }),
        );
      }
    }
    // biome-ignore lint/style/useNamingConvention: we dont control s3's api
    await client.send(new DeleteBucketCommand({ Bucket: name }));
  }

  public async assertAllEncrypted(path: string, keyId: string) {
    const client = S3Bucket.s3Client();
    const objects = await client.send(
      // biome-ignore lint/style/useNamingConvention: we dont control s3's api
      new ListObjectsV2Command({ Bucket: this.name, Prefix: path }),
    );
    if (objects.Contents) {
      for (const object of objects.Contents) {
        const metadata = await client.send(
          // biome-ignore lint/style/useNamingConvention: we dont control s3's api
          new HeadObjectCommand({ Bucket: this.name, Key: object.Key }),
        );
        expect(metadata.ServerSideEncryption).toBe("aws:kms");
@@ -137,6 +143,7 @@ class KmsKey {
  public async delete() {
    const client = KmsKey.kmsClient();
    // biome-ignore lint/style/useNamingConvention: we dont control s3's api
    await client.send(new ScheduleKeyDeletionCommand({ KeyId: this.keyId }));
  }
}


@@ -16,23 +16,34 @@ import * as fs from "fs";
import * as path from "path";
import * as tmp from "tmp";

import * as arrow from "apache-arrow";
import * as arrowOld from "apache-arrow-old";

import { Table, connect } from "../lancedb";
import {
  Field,
  FixedSizeList,
  Float,
  Float32,
  Float64,
  Int32,
  Int64,
  Schema,
  Utf8,
  makeArrowTable,
} from "../lancedb/arrow";
import { EmbeddingFunction, LanceSchema } from "../lancedb/embedding";
import { getRegistry, register } from "../lancedb/embedding/registry";
import { Index } from "../lancedb/indices";

// biome-ignore lint/suspicious/noExplicitAny: <explanation>
describe.each([arrow, arrowOld])("Given a table", (arrow: any) => {
  let tmpDir: tmp.DirResult;
  let table: Table;

  const schema = new arrow.Schema([
    new arrow.Field("id", new arrow.Float64(), true),
  ]);

  beforeEach(async () => {
    tmpDir = tmp.dirSync({ unsafeCleanup: true });
    const conn = await connect(tmpDir.name);
@@ -419,3 +430,186 @@ describe("when dealing with versioning", () => {
    );
  });
});
describe("embedding functions", () => {
let tmpDir: tmp.DirResult;
beforeEach(() => {
tmpDir = tmp.dirSync({ unsafeCleanup: true });
});
afterEach(() => tmpDir.removeCallback());
it("should be able to create a table with an embedding function", async () => {
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
embeddingDataType(): Float {
return new Float32();
}
async computeQueryEmbeddings(_data: string) {
return [1, 2, 3];
}
async computeSourceEmbeddings(data: string[]) {
return Array.from({ length: data.length }).fill([
1, 2, 3,
]) as number[][];
}
}
const func = new MockEmbeddingFunction();
const db = await connect(tmpDir.name);
const table = await db.createTable(
"test",
[
{ id: 1, text: "hello" },
{ id: 2, text: "world" },
],
{
embeddingFunction: {
function: func,
sourceColumn: "text",
},
},
);
// biome-ignore lint/suspicious/noExplicitAny: test
const arr = (await table.query().toArray()) as any;
expect(arr[0].vector).toBeDefined();
// we round trip through JSON to make sure the vector properly gets converted to an array
// otherwise it'll be a TypedArray or Vector
const vector0 = JSON.parse(JSON.stringify(arr[0].vector));
expect(vector0).toEqual([1, 2, 3]);
});
it("should be able to create an empty table with an embedding function", async () => {
@register()
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
embeddingDataType(): Float {
return new Float32();
}
async computeQueryEmbeddings(_data: string) {
return [1, 2, 3];
}
async computeSourceEmbeddings(data: string[]) {
return Array.from({ length: data.length }).fill([
1, 2, 3,
]) as number[][];
}
}
const schema = new Schema([
new Field("text", new Utf8(), true),
new Field(
"vector",
new FixedSizeList(3, new Field("item", new Float32(), true)),
true,
),
]);
const func = new MockEmbeddingFunction();
const db = await connect(tmpDir.name);
const table = await db.createEmptyTable("test", schema, {
embeddingFunction: {
function: func,
sourceColumn: "text",
},
});
const outSchema = await table.schema();
expect(outSchema.metadata.get("embedding_functions")).toBeDefined();
await table.add([{ text: "hello world" }]);
// biome-ignore lint/suspicious/noExplicitAny: test
const arr = (await table.query().toArray()) as any;
expect(arr[0].vector).toBeDefined();
// we round trip through JSON to make sure the vector properly gets converted to an array
// otherwise it'll be a TypedArray or Vector
const vector0 = JSON.parse(JSON.stringify(arr[0].vector));
expect(vector0).toEqual([1, 2, 3]);
});
it("should error when appending to a table with an unregistered embedding function", async () => {
@register("mock")
class MockEmbeddingFunction extends EmbeddingFunction<string> {
toJSON(): object {
return {};
}
ndims() {
return 3;
}
embeddingDataType(): Float {
return new Float32();
}
async computeQueryEmbeddings(_data: string) {
return [1, 2, 3];
}
async computeSourceEmbeddings(data: string[]) {
return Array.from({ length: data.length }).fill([
1, 2, 3,
]) as number[][];
}
}
const func = getRegistry().get<MockEmbeddingFunction>("mock")!.create();
const schema = LanceSchema({
id: new arrow.Float64(),
text: func.sourceField(new Utf8()),
vector: func.vectorField(),
});
const db = await connect(tmpDir.name);
await db.createTable(
"test",
[
{ id: 1, text: "hello" },
{ id: 2, text: "world" },
],
{
schema,
},
);
getRegistry().reset();
const db2 = await connect(tmpDir.name);
const tbl = await db2.openTable("test");
expect(tbl.add([{ id: 3, text: "hello" }])).rejects.toThrow(
`Function "mock" not found in registry`,
);
});
});
describe("when optimizing a dataset", () => {
let tmpDir: tmp.DirResult;
let table: Table;
beforeEach(async () => {
tmpDir = tmp.dirSync({ unsafeCleanup: true });
const con = await connect(tmpDir.name);
table = await con.createTable("vectors", [{ id: 1 }]);
await table.add([{ id: 2 }]);
});
afterEach(() => {
tmpDir.removeCallback();
});
it("compacts files", async () => {
const stats = await table.optimize();
expect(stats.compaction.filesAdded).toBe(1);
expect(stats.compaction.filesRemoved).toBe(2);
expect(stats.compaction.fragmentsAdded).toBe(1);
expect(stats.compaction.fragmentsRemoved).toBe(2);
});
it("cleanups old versions", async () => {
const stats = await table.optimize({ cleanupOlderThan: new Date() });
expect(stats.prune.bytesRemoved).toBeGreaterThan(0);
expect(stats.prune.oldVersionsRemoved).toBe(3);
});
});

nodejs/biome.json (new file)

@@ -0,0 +1,142 @@
{
"$schema": "https://biomejs.dev/schemas/1.7.3/schema.json",
"organizeImports": {
"enabled": true
},
"files": {
"ignore": [
"**/dist/**/*",
"**/native.js",
"**/native.d.ts",
"**/npm/**/*",
"**/.vscode/**"
]
},
"formatter": {
"indentStyle": "space"
},
"linter": {
"enabled": true,
"rules": {
"recommended": false,
"complexity": {
"noBannedTypes": "error",
"noExtraBooleanCast": "error",
"noMultipleSpacesInRegularExpressionLiterals": "error",
"noUselessCatch": "error",
"noUselessThisAlias": "error",
"noUselessTypeConstraint": "error",
"noWith": "error"
},
"correctness": {
"noConstAssign": "error",
"noConstantCondition": "error",
"noEmptyCharacterClassInRegex": "error",
"noEmptyPattern": "error",
"noGlobalObjectCalls": "error",
"noInnerDeclarations": "error",
"noInvalidConstructorSuper": "error",
"noNewSymbol": "error",
"noNonoctalDecimalEscape": "error",
"noPrecisionLoss": "error",
"noSelfAssign": "error",
"noSetterReturn": "error",
"noSwitchDeclarations": "error",
"noUndeclaredVariables": "error",
"noUnreachable": "error",
"noUnreachableSuper": "error",
"noUnsafeFinally": "error",
"noUnsafeOptionalChaining": "error",
"noUnusedLabels": "error",
"noUnusedVariables": "warn",
"useIsNan": "error",
"useValidForDirection": "error",
"useYield": "error"
},
"style": {
"noNamespace": "error",
"useAsConstAssertion": "error",
"useBlockStatements": "off",
"useNamingConvention": {
"level": "error",
"options": {
"strictCase": false
}
}
},
"suspicious": {
"noAssignInExpressions": "error",
"noAsyncPromiseExecutor": "error",
"noCatchAssign": "error",
"noClassAssign": "error",
"noCompareNegZero": "error",
"noControlCharactersInRegex": "error",
"noDebugger": "error",
"noDuplicateCase": "error",
"noDuplicateClassMembers": "error",
"noDuplicateObjectKeys": "error",
"noDuplicateParameters": "error",
"noEmptyBlockStatements": "error",
"noExplicitAny": "error",
"noExtraNonNullAssertion": "error",
"noFallthroughSwitchClause": "error",
"noFunctionAssign": "error",
"noGlobalAssign": "error",
"noImportAssign": "error",
"noMisleadingCharacterClass": "error",
"noMisleadingInstantiator": "error",
"noPrototypeBuiltins": "error",
"noRedeclare": "error",
"noShadowRestrictedNames": "error",
"noUnsafeDeclarationMerging": "error",
"noUnsafeNegation": "error",
"useGetterReturn": "error",
"useValidTypeof": "error"
}
},
"ignore": ["**/dist/**/*", "**/native.js", "**/native.d.ts"]
},
"javascript": {
"globals": []
},
"overrides": [
{
"include": [
"**/*.ts",
"**/*.tsx",
"**/*.mts",
"**/*.cts",
"__test__/*.test.ts"
],
"linter": {
"rules": {
"correctness": {
"noConstAssign": "off",
"noGlobalObjectCalls": "off",
"noInvalidConstructorSuper": "off",
"noNewSymbol": "off",
"noSetterReturn": "off",
"noUndeclaredVariables": "off",
"noUnreachable": "off",
"noUnreachableSuper": "off"
},
"style": {
"noArguments": "error",
"noVar": "error",
"useConst": "error"
},
"suspicious": {
"noDuplicateClassMembers": "off",
"noDuplicateObjectKeys": "off",
"noDuplicateParameters": "off",
"noFunctionAssign": "off",
"noImportAssign": "off",
"noRedeclare": "off",
"noUnsafeNegation": "off",
"useGetterReturn": "off"
}
}
}
}
]
}


@@ -1,28 +0,0 @@
/* eslint-disable @typescript-eslint/naming-convention */
// @ts-check
const eslint = require("@eslint/js");
const tseslint = require("typescript-eslint");
const eslintConfigPrettier = require("eslint-config-prettier");
const jsdoc = require("eslint-plugin-jsdoc");
module.exports = tseslint.config(
eslint.configs.recommended,
jsdoc.configs["flat/recommended"],
eslintConfigPrettier,
...tseslint.configs.recommended,
{
rules: {
"@typescript-eslint/naming-convention": "error",
"jsdoc/require-returns": "off",
"jsdoc/require-param": "off",
"jsdoc/require-jsdoc": [
"error",
{
publicOnly: true,
},
],
},
plugins: jsdoc,
},
);


@@ -13,28 +13,124 @@
// limitations under the License.

import {
  Table as ArrowTable,
  Binary,
  Bool,
  DataType,
  Date_,
  Decimal,
  Duration,
  Field,
  FixedSizeBinary,
  FixedSizeList,
  Float,
  Float32,
  Int,
  Interval,
  LargeBinary,
  LargeUtf8,
  List,
  Null,
  RecordBatch,
  RecordBatchFileWriter,
  RecordBatchStreamWriter,
  Schema,
  Struct,
  Time,
  Timestamp,
  Union,
  Utf8,
  type Vector,
  makeBuilder,
  makeData,
  type makeTable,
  vectorFromArray,
} from "apache-arrow";
import { type EmbeddingFunction } from "./embedding/embedding_function";
import { EmbeddingFunctionConfig, getRegistry } from "./embedding/registry";
import { sanitizeField, sanitizeSchema, sanitizeType } from "./sanitize";
export * from "apache-arrow";
export function isArrowTable(value: object): value is ArrowTable {
if (value instanceof ArrowTable) return true;
return "schema" in value && "batches" in value;
}
export function isDataType(value: unknown): value is DataType {
return (
value instanceof DataType ||
DataType.isNull(value) ||
DataType.isInt(value) ||
DataType.isFloat(value) ||
DataType.isBinary(value) ||
DataType.isLargeBinary(value) ||
DataType.isUtf8(value) ||
DataType.isLargeUtf8(value) ||
DataType.isBool(value) ||
DataType.isDecimal(value) ||
DataType.isDate(value) ||
DataType.isTime(value) ||
DataType.isTimestamp(value) ||
DataType.isInterval(value) ||
DataType.isDuration(value) ||
DataType.isList(value) ||
DataType.isStruct(value) ||
DataType.isUnion(value) ||
DataType.isFixedSizeBinary(value) ||
DataType.isFixedSizeList(value) ||
DataType.isMap(value) ||
DataType.isDictionary(value)
);
}
export function isNull(value: unknown): value is Null {
return value instanceof Null || DataType.isNull(value);
}
export function isInt(value: unknown): value is Int {
return value instanceof Int || DataType.isInt(value);
}
export function isFloat(value: unknown): value is Float {
return value instanceof Float || DataType.isFloat(value);
}
export function isBinary(value: unknown): value is Binary {
return value instanceof Binary || DataType.isBinary(value);
}
export function isLargeBinary(value: unknown): value is LargeBinary {
return value instanceof LargeBinary || DataType.isLargeBinary(value);
}
export function isUtf8(value: unknown): value is Utf8 {
return value instanceof Utf8 || DataType.isUtf8(value);
}
export function isLargeUtf8(value: unknown): value is LargeUtf8 {
  return value instanceof LargeUtf8 || DataType.isLargeUtf8(value);
}
export function isBool(value: unknown): value is Bool {
  return value instanceof Bool || DataType.isBool(value);
}
export function isDecimal(value: unknown): value is Decimal {
  return value instanceof Decimal || DataType.isDecimal(value);
}
export function isDate(value: unknown): value is Date_ {
  return value instanceof Date_ || DataType.isDate(value);
}
export function isTime(value: unknown): value is Time {
  return value instanceof Time || DataType.isTime(value);
}
export function isTimestamp(value: unknown): value is Timestamp {
  return value instanceof Timestamp || DataType.isTimestamp(value);
}
export function isInterval(value: unknown): value is Interval {
  return value instanceof Interval || DataType.isInterval(value);
}
export function isDuration(value: unknown): value is Duration {
  return value instanceof Duration || DataType.isDuration(value);
}
export function isList(value: unknown): value is List {
return value instanceof List || DataType.isList(value);
}
export function isStruct(value: unknown): value is Struct {
return value instanceof Struct || DataType.isStruct(value);
}
export function isUnion(value: unknown): value is Union {
  return value instanceof Union || DataType.isUnion(value);
}
export function isFixedSizeBinary(value: unknown): value is FixedSizeBinary {
return value instanceof FixedSizeBinary || DataType.isFixedSizeBinary(value);
}
export function isFixedSizeList(value: unknown): value is FixedSizeList {
return value instanceof FixedSizeList || DataType.isFixedSizeList(value);
}
/** Data type accepted by NodeJS SDK */
export type Data = Record<string, unknown>[] | ArrowTable;
@@ -85,6 +181,7 @@ export class MakeArrowTableOptions {
  vectorColumns: Record<string, VectorColumnOptions> = {
    vector: new VectorColumnOptions(),
  };
  embeddings?: EmbeddingFunction<unknown>;

  /**
   * If true then string columns will be encoded with dictionary encoding
@@ -197,6 +294,7 @@ export class MakeArrowTableOptions {
export function makeArrowTable(
  data: Array<Record<string, unknown>>,
  options?: Partial<MakeArrowTableOptions>,
  metadata?: Map<string, string>,
): ArrowTable {
  if (
    data.length === 0 &&
@@ -208,6 +306,7 @@ export function makeArrowTable(
  const opt = new MakeArrowTableOptions(options !== undefined ? options : {});
  if (opt.schema !== undefined && opt.schema !== null) {
    opt.schema = sanitizeSchema(opt.schema);
    opt.schema = validateSchemaEmbeddings(opt.schema, data, opt.embeddings);
  }
  const columns: Record<string, Vector> = {};
  // TODO: sample dataset to find missing columns
@@ -287,21 +386,42 @@ export function makeArrowTable(
    // then patch the schema of the batches so we can use
    // `new ArrowTable(schema, batches)` which does not do any schema inference
    const firstTable = new ArrowTable(columns);
    const batchesFixed = firstTable.batches.map(
      (batch) => new RecordBatch(opt.schema!, batch.data),
    );
    let schema: Schema;
    if (metadata !== undefined) {
      let schemaMetadata = opt.schema.metadata;
      if (schemaMetadata.size === 0) {
        schemaMetadata = metadata;
      } else {
        // merge the provided metadata into the schema's existing metadata
        for (const [key, entry] of metadata.entries()) {
          schemaMetadata.set(key, entry);
        }
      }
      schema = new Schema(opt.schema.fields, schemaMetadata);
    } else {
      schema = opt.schema;
    }
    return new ArrowTable(schema, batchesFixed);
  }
  const tbl = new ArrowTable(columns);
  if (metadata !== undefined) {
    // biome-ignore lint/suspicious/noExplicitAny: <explanation>
    (<any>tbl.schema).metadata = metadata;
  }
  return tbl;
}
/**
 * Create an empty Arrow table with the provided schema
 */
export function makeEmptyTable(
  schema: Schema,
  metadata?: Map<string, string>,
): ArrowTable {
  return makeArrowTable([], { schema }, metadata);
}
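A usage sketch of the extended signature; the metadata map here is hypothetical and simply ends up as the schema metadata of the empty table:

```ts
import { Field, Schema, Utf8 } from "apache-arrow";

const schema = new Schema([new Field("text", new Utf8(), true)]);
// The optional second argument is threaded through to makeArrowTable
const empty = makeEmptyTable(schema, new Map([["source", "example"]]));
```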
/**
@@ -313,7 +433,7 @@ function makeListVector(lists: unknown[][]): Vector<unknown> {
    throw Error("Cannot infer list vector from empty array or empty list");
  }
  const sampleList = lists[0];
  // biome-ignore lint/suspicious/noExplicitAny: skip
  let inferredType: any;
  try {
    const sampleVector = makeVector(sampleList);
@@ -337,7 +457,7 @@ function makeVector(
  values: unknown[],
  type?: DataType,
  stringAsDictionary?: boolean,
  // biome-ignore lint/suspicious/noExplicitAny: skip
): Vector<any> {
  if (type !== undefined) {
    // No need for inference, let Arrow create it
@@ -373,13 +493,75 @@ function makeVector(
  }
}
/** Helper function to apply embeddings from metadata to an input table */
async function applyEmbeddingsFromMetadata(
table: ArrowTable,
schema: Schema,
): Promise<ArrowTable> {
const registry = getRegistry();
const functions = registry.parseFunctions(schema.metadata);
const columns = Object.fromEntries(
table.schema.fields.map((field) => [
field.name,
table.getChild(field.name)!,
]),
);
for (const functionEntry of functions.values()) {
const sourceColumn = columns[functionEntry.sourceColumn];
const destColumn = functionEntry.vectorColumn ?? "vector";
if (sourceColumn === undefined) {
throw new Error(
`Cannot apply embedding function because the source column '${functionEntry.sourceColumn}' was not present in the data`,
);
}
if (columns[destColumn] !== undefined) {
throw new Error(
`Attempt to apply embeddings to table failed because column ${destColumn} already existed`,
);
}
if (table.batches.length > 1) {
throw new Error(
"Internal error: `makeArrowTable` unexpectedly created a table with more than one batch",
);
}
const values = sourceColumn.toArray();
const vectors =
await functionEntry.function.computeSourceEmbeddings(values);
if (vectors.length !== values.length) {
throw new Error(
"Embedding function did not return an embedding for each input element",
);
}
let destType: DataType;
const dtype = schema.fields.find((f) => f.name === destColumn)!.type;
if (isFixedSizeList(dtype)) {
destType = sanitizeType(dtype);
} else {
throw new Error(
"Expected FixedSizeList as datatype for vector field, instead got: " +
dtype,
);
}
const vector = makeVector(vectors, destType);
columns[destColumn] = vector;
}
const newTable = new ArrowTable(columns);
return alignTable(newTable, schema);
}
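For reference, the `embedding_functions` schema metadata this helper parses is a JSON array of function configs, matching what the registry test asserts; a sketch of the expected shape (field values hypothetical):

```ts
const embeddingMetadata = new Map<string, string>([
  [
    "embedding_functions",
    JSON.stringify([
      {
        sourceColumn: "text",
        vectorColumn: "vector",
        name: "MockEmbeddingFunction",
        model: {},
      },
    ]),
  ],
]);
```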
/** Helper function to apply embeddings to an input table */
async function applyEmbeddings<T>(
  table: ArrowTable,
  embeddings?: EmbeddingFunctionConfig,
  schema?: Schema,
): Promise<ArrowTable> {
  if (schema?.metadata.has("embedding_functions")) {
    return applyEmbeddingsFromMetadata(table, schema!);
  } else if (embeddings == null || embeddings === undefined) {
    return table;
  }
@@ -397,8 +579,9 @@ async function applyEmbeddings<T>(
  const newColumns = Object.fromEntries(colEntries);

  const sourceColumn = newColumns[embeddings.sourceColumn];
  const destColumn = embeddings.vectorColumn ?? "vector";
  const innerDestType =
    embeddings.function.embeddingDataType() ?? new Float32();
  if (sourceColumn === undefined) {
    throw new Error(
      `Cannot apply embedding function because the source column '${embeddings.sourceColumn}' was not present in the data`,
@@ -412,11 +595,9 @@ async function applyEmbeddings<T>(
    // if we call convertToTable with 0 records and a schema that includes the embedding
    return table;
  }

  const dimensions = embeddings.function.ndims();
  if (dimensions !== undefined) {
    const destType = newVectorType(dimensions, innerDestType);
    newColumns[destColumn] = makeVector([], destType);
  } else if (schema != null) {
    const destField = schema.fields.find((f) => f.name === destColumn);
@@ -444,7 +625,9 @@ async function applyEmbeddings<T>(
    );
  }
  const values = sourceColumn.toArray();
  const vectors = await embeddings.function.computeSourceEmbeddings(
    values as T[],
  );
  if (vectors.length !== values.length) {
    throw new Error(
      "Embedding function did not return an embedding for each input element",
@@ -484,9 +667,9 @@ async function applyEmbeddings<T>(
 * embedding columns. If no schema is provided then embedding columns will
 * be placed at the end of the table, after all of the input columns.
 */
export async function convertToTable(
  data: Array<Record<string, unknown>>,
  embeddings?: EmbeddingFunctionConfig,
  makeTableOptions?: Partial<MakeArrowTableOptions>,
): Promise<ArrowTable> {
  const table = makeArrowTable(data, makeTableOptions);
@@ -494,13 +677,13 @@ export async function convertToTable<T>(
}
/** Creates the Arrow Type for a Vector column with dimension `dim` */
export function newVectorType<T extends Float>(
  dim: number,
  innerType: T,
): FixedSizeList<T> {
  // in Lance we always default to have the elements nullable, so we need to set it to true
  // otherwise we often get schema mismatches because the stored data always has schema with nullable elements
  const children = new Field("item", <T>sanitizeType(innerType), true);
  return new FixedSizeList(dim, children);
}
@@ -511,9 +694,9 @@ function newVectorType<T extends Float>(
 *
 * `schema` is required if data is empty
 */
export async function fromRecordsToBuffer(
  data: Array<Record<string, unknown>>,
  embeddings?: EmbeddingFunctionConfig,
  schema?: Schema,
): Promise<Buffer> {
  if (schema !== undefined && schema !== null) {
@@ -531,9 +714,9 @@ export async function fromRecordsToBuffer(
 *
 * `schema` is required if data is empty
 */
export async function fromRecordsToStreamBuffer(
  data: Array<Record<string, unknown>>,
  embeddings?: EmbeddingFunctionConfig,
  schema?: Schema,
): Promise<Buffer> {
  if (schema !== undefined && schema !== null) {
@@ -552,9 +735,9 @@ export async function fromRecordsToStreamBuffer(
 *
 * `schema` is required if the table is empty
 */
export async function fromTableToBuffer(
  table: ArrowTable,
  embeddings?: EmbeddingFunctionConfig,
  schema?: Schema,
): Promise<Buffer> {
  if (schema !== undefined && schema !== null) {
@@ -573,19 +756,19 @@ export async function fromTableToBuffer(
 *
 * `schema` is required if the table is empty
 */
export async function fromDataToBuffer(
  data: Data,
  embeddings?: EmbeddingFunctionConfig,
  schema?: Schema,
): Promise<Buffer> {
  if (schema !== undefined && schema !== null) {
    schema = sanitizeSchema(schema);
  }
  if (isArrowTable(data)) {
    return fromTableToBuffer(data, embeddings, schema);
  } else {
    const table = await convertToTable(data, embeddings, { schema });
    return fromTableToBuffer(table);
  }
}
@@ -597,9 +780,9 @@ export async function fromDataToBuffer(
 *
 * `schema` is required if the table is empty
 */
export async function fromTableToStreamBuffer(
  table: ArrowTable,
  embeddings?: EmbeddingFunctionConfig,
  schema?: Schema,
): Promise<Buffer> {
  const tableWithEmbeddings = await applyEmbeddings(table, embeddings, schema);
@@ -648,3 +831,54 @@ function alignTable(table: ArrowTable, schema: Schema): ArrowTable {
export function createEmptyTable(schema: Schema): ArrowTable {
  return new ArrowTable(sanitizeSchema(schema));
}
function validateSchemaEmbeddings(
schema: Schema,
data: Array<Record<string, unknown>>,
embeddings: EmbeddingFunction<unknown> | undefined,
) {
const fields = [];
const missingEmbeddingFields = [];
// First we check whether each field is a `FixedSizeList`.
// Then we check if the data contains the field; if it does not,
// we add it to the list of missing embedding fields. Finally, if any
// fields are missing and are not covered by the provided embedding
// function (or the schema's embedding metadata), we throw an error.
for (let field of schema.fields) {
if (isFixedSizeList(field.type)) {
field = sanitizeField(field);
if (data.length !== 0 && data?.[0]?.[field.name] === undefined) {
if (schema.metadata.has("embedding_functions")) {
const embeddings = JSON.parse(
schema.metadata.get("embedding_functions")!,
);
if (
// biome-ignore lint/suspicious/noExplicitAny: we don't know the type of `f`
embeddings.find((f: any) => f["vectorColumn"] === field.name) ===
undefined
) {
missingEmbeddingFields.push(field);
}
} else {
missingEmbeddingFields.push(field);
}
} else {
fields.push(field);
}
} else {
fields.push(field);
}
}
if (missingEmbeddingFields.length > 0 && embeddings === undefined) {
throw new Error(
`Table has embeddings: "${missingEmbeddingFields
.map((f) => f.name)
.join(",")}", but no embedding function was provided`,
);
}
return new Schema(fields, schema.metadata);
}

View File

@@ -12,10 +12,16 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-import { fromTableToBuffer, makeArrowTable, makeEmptyTable } from "./arrow";
+import { Table as ArrowTable, Schema } from "./arrow";
+import {
+  fromTableToBuffer,
+  isArrowTable,
+  makeArrowTable,
+  makeEmptyTable,
+} from "./arrow";
+import { EmbeddingFunctionConfig, getRegistry } from "./embedding/registry";
 import { ConnectionOptions, Connection as LanceDbConnection } from "./native";
 import { Table } from "./table";
-import { Table as ArrowTable, Schema } from "apache-arrow";
 /**
  * Connect to a LanceDB instance at the given URI.
@@ -65,6 +71,8 @@ export interface CreateTableOptions {
   * The available options are described at https://lancedb.github.io/lancedb/guides/storage/
   */
  storageOptions?: Record<string, string>;
+ schema?: Schema;
+ embeddingFunction?: EmbeddingFunctionConfig;
 }
 export interface OpenTableOptions {
@@ -77,6 +85,18 @@ export interface OpenTableOptions {
   * The available options are described at https://lancedb.github.io/lancedb/guides/storage/
   */
  storageOptions?: Record<string, string>;
+ /**
+  * Set the size of the index cache, specified as a number of entries
+  *
+  * The exact meaning of an "entry" will depend on the type of index:
+  * - IVF: there is one entry for each IVF partition
+  * - BTREE: there is one entry for the entire index
+  *
+  * This cache applies to the entire opened table, across all indices.
+  * Setting this value higher will increase performance on larger datasets
+  * at the expense of more RAM
+  */
+ indexCacheSize?: number;
 }
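For reference, a minimal usage sketch of the new `indexCacheSize` option (the database path, table name, and cache size here are illustrative, not from the diff):

```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("./.lancedb");
// Allow up to 512 cached index entries (e.g. IVF partitions) for this table.
const table = await db.openTable("my_table", { indexCacheSize: 512 });
```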
 export interface TableNamesOptions {
@@ -160,7 +180,9 @@ export class Connection {
     const innerTable = await this.inner.openTable(
       name,
       cleanseStorageOptions(options?.storageOptions),
+      options?.indexCacheSize,
     );
     return new Table(innerTable);
   }
@@ -183,18 +205,24 @@ export class Connection {
   }
   let table: ArrowTable;
-  if (data instanceof ArrowTable) {
+  if (isArrowTable(data)) {
     table = data;
   } else {
-    table = makeArrowTable(data);
+    table = makeArrowTable(data, options);
   }
-  const buf = await fromTableToBuffer(table);
+  const buf = await fromTableToBuffer(
+    table,
+    options?.embeddingFunction,
+    options?.schema,
+  );
   const innerTable = await this.inner.createTable(
     name,
     buf,
     mode,
     cleanseStorageOptions(options?.storageOptions),
   );
   return new Table(innerTable);
 }
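A sketch of passing the new `embeddingFunction` option to `createTable`, assuming a function has been registered under the "openai" alias and that `OPENAI_API_KEY` is set (table and column names are illustrative):

```ts
import * as lancedb from "@lancedb/lancedb";
import { getRegistry } from "@lancedb/lancedb/embedding";

const db = await lancedb.connect("./.lancedb");
const func = getRegistry().get("openai")!.create();

// The config is serialized into the table's schema metadata, so the same
// function can be rebuilt when the table is reopened later.
const table = await db.createTable(
  "documents",
  [{ text: "hello world" }, { text: "goodbye world" }],
  { embeddingFunction: { sourceColumn: "text", function: func } },
);
```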
@@ -214,8 +242,14 @@ export class Connection {
if (mode === "create" && existOk) { if (mode === "create" && existOk) {
mode = "exist_ok"; mode = "exist_ok";
} }
let metadata: Map<string, string> | undefined = undefined;
if (options?.embeddingFunction !== undefined) {
const embeddingFunction = options.embeddingFunction;
const registry = getRegistry();
metadata = registry.getTableMetadata([embeddingFunction]);
}
const table = makeEmptyTable(schema); const table = makeEmptyTable(schema, metadata);
const buf = await fromTableToBuffer(table); const buf = await fromTableToBuffer(table);
const innerTable = await this.inner.createEmptyTable( const innerTable = await this.inner.createEmptyTable(
name, name,

View File

@@ -1,4 +1,4 @@
-// Copyright 2023 Lance Developers.
+// Copyright 2024 Lance Developers.
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
 // you may not use this file except in compliance with the License.
@@ -12,67 +12,151 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-import { type Float } from "apache-arrow";
+import "reflect-metadata";
+import {
+  DataType,
+  Field,
+  FixedSizeList,
+  Float,
+  Float32,
+  isDataType,
+  isFixedSizeList,
+  isFloat,
+  newVectorType,
+} from "../arrow";
+import { sanitizeType } from "../sanitize";
/**
* Options for a given embedding function
*/
export interface FunctionOptions {
// biome-ignore lint/suspicious/noExplicitAny: options can be anything
[key: string]: any;
}
 /**
  * An embedding function that automatically creates vector representation for a given column.
  */
-export interface EmbeddingFunction<T> {
-  /**
-   * The name of the column that will be used as input for the Embedding Function.
-   */
-  sourceColumn: string;
-
-  /**
-   * The data type of the embedding
-   *
-   * The embedding function should return `number`. This will be converted into
-   * an Arrow float array. By default this will be Float32 but this property can
-   * be used to control the conversion.
-   */
-  embeddingDataType?: Float;
-
-  /**
-   * The dimension of the embedding
-   *
-   * This is optional, normally this can be determined by looking at the results of
-   * `embed`. If this is not specified, and there is an attempt to apply the embedding
-   * to an empty table, then that process will fail.
-   */
-  embeddingDimension?: number;
-
-  /**
-   * The name of the column that will contain the embedding
-   *
-   * By default this is "vector"
-   */
-  destColumn?: string;
-
-  /**
-   * Should the source column be excluded from the resulting table
-   *
-   * By default the source column is included. Set this to true and
-   * only the embedding will be stored.
-   */
-  excludeSource?: boolean;
-
-  /**
-   * Creates a vector representation for the given values.
-   */
-  embed: (data: T[]) => Promise<number[][]>;
-}
-
-/** Test if the input seems to be an embedding function */
-export function isEmbeddingFunction<T>(
-  value: unknown,
-): value is EmbeddingFunction<T> {
-  if (typeof value !== "object" || value === null) {
-    return false;
-  }
-  if (!("sourceColumn" in value) || !("embed" in value)) {
-    return false;
-  }
-  return (
-    typeof value.sourceColumn === "string" && typeof value.embed === "function"
-  );
-}
+export abstract class EmbeddingFunction<
+  // biome-ignore lint/suspicious/noExplicitAny: we don't know what the implementor will do
+  T = any,
+  M extends FunctionOptions = FunctionOptions,
+> {
+  /**
+   * Convert the embedding function to a JSON object
+   * It is used to serialize the embedding function to the schema
+   * It's important that any object returned by this method contains all the necessary
+   * information to recreate the embedding function
+   *
+   * It should return the same object that was passed to the constructor
+   * If it does not, the embedding function will not be able to be recreated, or could be recreated incorrectly
+   *
+   * @example
+   * ```ts
+   * class MyEmbeddingFunction extends EmbeddingFunction {
+   *   constructor(options: {model: string, timeout: number}) {
+   *     super();
+   *     this.model = options.model;
+   *     this.timeout = options.timeout;
+   *   }
+   *   toJSON() {
+   *     return {
+   *       model: this.model,
+   *       timeout: this.timeout,
+   *     };
+   *   }
+   * }
+   * ```
+   */
+  abstract toJSON(): Partial<M>;
+
+  /**
+   * sourceField is used in combination with `LanceSchema` to provide a declarative data model
+   *
+   * @param optionsOrDatatype - The options for the field or the datatype
+   *
+   * @see {@link lancedb.LanceSchema}
+   */
+  sourceField(
+    optionsOrDatatype: Partial<FieldOptions> | DataType,
+  ): [DataType, Map<string, EmbeddingFunction>] {
+    let datatype = isDataType(optionsOrDatatype)
+      ? optionsOrDatatype
+      : optionsOrDatatype?.datatype;
+    if (!datatype) {
+      throw new Error("Datatype is required");
+    }
+    datatype = sanitizeType(datatype);
+    const metadata = new Map<string, EmbeddingFunction>();
+    metadata.set("source_column_for", this);
+    return [datatype, metadata];
+  }
+
+  /**
+   * vectorField is used in combination with `LanceSchema` to provide a declarative data model
+   *
+   * @param options - The options for the field
+   *
+   * @see {@link lancedb.LanceSchema}
+   */
+  vectorField(
+    options?: Partial<FieldOptions>,
+  ): [DataType, Map<string, EmbeddingFunction>] {
+    let dtype: DataType;
+    const dims = this.ndims() ?? options?.dims;
+    if (!options?.datatype) {
+      if (dims === undefined) {
+        throw new Error("ndims is required for vector field");
+      }
+      dtype = new FixedSizeList(dims, new Field("item", new Float32(), true));
+    } else {
+      if (isFixedSizeList(options.datatype)) {
+        dtype = options.datatype;
+      } else if (isFloat(options.datatype)) {
+        if (dims === undefined) {
+          throw new Error("ndims is required for vector field");
+        }
+        dtype = newVectorType(dims, options.datatype);
+      } else {
+        throw new Error(
+          "Expected FixedSizeList or Float as datatype for vector field",
+        );
+      }
+    }
+    const metadata = new Map<string, EmbeddingFunction>();
+    metadata.set("vector_column_for", this);
+    return [dtype, metadata];
+  }
+
+  /** The number of dimensions of the embeddings */
+  ndims(): number | undefined {
+    return undefined;
+  }
+
+  /** The datatype of the embeddings */
+  abstract embeddingDataType(): Float;
+
+  /**
+   * Creates a vector representation for the given values.
+   */
+  abstract computeSourceEmbeddings(
+    data: T[],
+  ): Promise<number[][] | Float32Array[] | Float64Array[]>;
+
+  /**
+   * Compute the embeddings for a single query
+   */
+  async computeQueryEmbeddings(
+    data: T,
+  ): Promise<number[] | Float32Array | Float64Array> {
+    return this.computeSourceEmbeddings([data]).then(
+      (embeddings) => embeddings[0],
+    );
+  }
+}
+
+export interface FieldOptions<T extends DataType = DataType> {
+  datatype: T;
+  dims?: number;
+}
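To illustrate the new contract, a minimal subclass sketch; the hash-based "embedding" is a toy stand-in and the import path is illustrative. Only `toJSON`, `embeddingDataType`, and `computeSourceEmbeddings` are abstract; `ndims` is optional but lets `vectorField()` infer dimensions:

```ts
import { Float32, type Float } from "apache-arrow";
import { EmbeddingFunction } from "@lancedb/lancedb/embedding";

class HashEmbeddingFunction extends EmbeddingFunction<string, { dims: number }> {
  #dims: number;
  constructor(options: { dims: number } = { dims: 8 }) {
    super();
    this.#dims = options.dims;
  }
  // Must return exactly what the constructor needs, so the function
  // can be recreated from schema metadata.
  toJSON() {
    return { dims: this.#dims };
  }
  ndims(): number {
    return this.#dims;
  }
  embeddingDataType(): Float {
    return new Float32();
  }
  async computeSourceEmbeddings(data: string[]): Promise<number[][]> {
    // Toy embedding: a character-code histogram folded into #dims buckets.
    return data.map((text) => {
      const v = new Array<number>(this.#dims).fill(0);
      for (let i = 0; i < text.length; i++) {
        v[text.charCodeAt(i) % this.#dims] += 1;
      }
      return v;
    });
  }
}
```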

View File

@@ -1,2 +1,113 @@
-export { EmbeddingFunction, isEmbeddingFunction } from "./embedding_function";
-export { OpenAIEmbeddingFunction } from "./openai";
+// Copyright 2023 Lance Developers.
+//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import { DataType, Field, Schema } from "../arrow";
import { isDataType } from "../arrow";
import { sanitizeType } from "../sanitize";
import { EmbeddingFunction } from "./embedding_function";
import { EmbeddingFunctionConfig, getRegistry } from "./registry";
export { EmbeddingFunction } from "./embedding_function";
// We need to explicitly export '*' so that the `register` decorator actually registers the class.
export * from "./openai";
export * from "./registry";
/**
* Create a schema with embedding functions.
*
* @param fields
* @returns Schema
* @example
* ```ts
* class MyEmbeddingFunction extends EmbeddingFunction {
* // ...
* }
* const func = new MyEmbeddingFunction();
* const schema = LanceSchema({
* id: new Int32(),
* text: func.sourceField(new Utf8()),
* vector: func.vectorField(),
* // optional: specify the datatype and/or dimensions
* vector2: func.vectorField({ datatype: new Float32(), dims: 3}),
* });
*
* const table = await db.createTable("my_table", data, { schema });
* ```
*/
export function LanceSchema(
fields: Record<string, [object, Map<string, EmbeddingFunction>] | object>,
): Schema {
const arrowFields: Field[] = [];
const embeddingFunctions = new Map<
EmbeddingFunction,
Partial<EmbeddingFunctionConfig>
>();
Object.entries(fields).forEach(([key, value]) => {
if (isDataType(value)) {
arrowFields.push(new Field(key, sanitizeType(value), true));
} else {
const [dtype, metadata] = value as [
object,
Map<string, EmbeddingFunction>,
];
arrowFields.push(new Field(key, sanitizeType(dtype), true));
parseEmbeddingFunctions(embeddingFunctions, key, metadata);
}
});
const registry = getRegistry();
const metadata = registry.getTableMetadata(
Array.from(embeddingFunctions.values()) as EmbeddingFunctionConfig[],
);
const schema = new Schema(arrowFields, metadata);
return schema;
}
function parseEmbeddingFunctions(
embeddingFunctions: Map<EmbeddingFunction, Partial<EmbeddingFunctionConfig>>,
key: string,
metadata: Map<string, EmbeddingFunction>,
): void {
if (metadata.has("source_column_for")) {
const embedFunction = metadata.get("source_column_for")!;
const current = embeddingFunctions.get(embedFunction);
if (current !== undefined) {
embeddingFunctions.set(embedFunction, {
...current,
sourceColumn: key,
});
} else {
embeddingFunctions.set(embedFunction, {
sourceColumn: key,
function: embedFunction,
});
}
} else if (metadata.has("vector_column_for")) {
const embedFunction = metadata.get("vector_column_for")!;
const current = embeddingFunctions.get(embedFunction);
if (current !== undefined) {
embeddingFunctions.set(embedFunction, {
...current,
vectorColumn: key,
});
} else {
embeddingFunctions.set(embedFunction, {
vectorColumn: key,
function: embedFunction,
});
}
}
}

View File

@@ -12,18 +12,32 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-import { type EmbeddingFunction } from "./embedding_function";
 import type OpenAI from "openai";
+import { Float, Float32 } from "../arrow";
+import { EmbeddingFunction } from "./embedding_function";
+import { register } from "./registry";

+export type OpenAIOptions = {
+  apiKey?: string;
+  model?: string;
+};

-export class OpenAIEmbeddingFunction implements EmbeddingFunction<string> {
-  private readonly _openai: OpenAI;
-  private readonly _modelName: string;
+@register("openai")
+export class OpenAIEmbeddingFunction extends EmbeddingFunction<
+  string,
+  OpenAIOptions
+> {
+  #openai: OpenAI;
+  #modelName: string;

-  constructor(
-    sourceColumn: string,
-    openAIKey: string,
-    modelName: string = "text-embedding-ada-002",
-  ) {
+  constructor(options: OpenAIOptions = { model: "text-embedding-ada-002" }) {
+    super();
+    const openAIKey = options?.apiKey ?? process.env.OPENAI_API_KEY;
+    if (!openAIKey) {
+      throw new Error("OpenAI API key is required");
+    }
+    const modelName = options?.model ?? "text-embedding-ada-002";
     /**
      * @type {import("openai").default}
      */
@@ -36,18 +50,40 @@ export class OpenAIEmbeddingFunction implements EmbeddingFunction<string> {
throw new Error("please install openai@^4.24.1 using npm install openai"); throw new Error("please install openai@^4.24.1 using npm install openai");
} }
this.sourceColumn = sourceColumn;
const configuration = { const configuration = {
apiKey: openAIKey, apiKey: openAIKey,
}; };
this._openai = new Openai(configuration); this.#openai = new Openai(configuration);
this._modelName = modelName; this.#modelName = modelName;
} }
async embed(data: string[]): Promise<number[][]> { toJSON() {
const response = await this._openai.embeddings.create({ return {
model: this._modelName, model: this.#modelName,
};
}
ndims(): number {
switch (this.#modelName) {
case "text-embedding-ada-002":
return 1536;
case "text-embedding-3-large":
return 3072;
case "text-embedding-3-small":
return 1536;
default:
return null as never;
}
}
embeddingDataType(): Float {
return new Float32();
}
async computeSourceEmbeddings(data: string[]): Promise<number[][]> {
const response = await this.#openai.embeddings.create({
model: this.#modelName,
input: data, input: data,
}); });
@@ -58,5 +94,15 @@ export class OpenAIEmbeddingFunction implements EmbeddingFunction<string> {
   return embeddings;
 }

-sourceColumn: string;
+async computeQueryEmbeddings(data: string): Promise<number[]> {
+  if (typeof data !== "string") {
+    throw new Error("Data must be a string");
+  }
+  const response = await this.#openai.embeddings.create({
+    model: this.#modelName,
+    input: data,
+  });
+  return response.data[0].embedding;
+}
 }
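A usage sketch of the options-object constructor together with the declarative schema helpers (assumes `OPENAI_API_KEY` is set; the import path is illustrative):

```ts
import { Utf8 } from "apache-arrow";
import {
  LanceSchema,
  OpenAIEmbeddingFunction,
} from "@lancedb/lancedb/embedding";

const func = new OpenAIEmbeddingFunction({ model: "text-embedding-3-small" });
const schema = LanceSchema({
  text: func.sourceField(new Utf8()),
  vector: func.vectorField(), // dims inferred from ndims() => 1536
});
```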

View File

@@ -0,0 +1,172 @@
// Copyright 2024 Lance Developers.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import type { EmbeddingFunction } from "./embedding_function";
import "reflect-metadata";
export interface EmbeddingFunctionOptions {
[key: string]: unknown;
}
export interface EmbeddingFunctionFactory<
T extends EmbeddingFunction = EmbeddingFunction,
> {
new (modelOptions?: EmbeddingFunctionOptions): T;
}
interface EmbeddingFunctionCreate<T extends EmbeddingFunction> {
create(options?: EmbeddingFunctionOptions): T;
}
/**
* This is a singleton class used to register embedding functions
* and fetch them by name. It also handles serializing and deserializing.
* You can implement your own embedding function by subclassing EmbeddingFunction
* or TextEmbeddingFunction and registering it with the registry
*/
export class EmbeddingFunctionRegistry {
#functions: Map<string, EmbeddingFunctionFactory> = new Map();
/**
* Register an embedding function
 * @param alias The name to register the function under; defaults to the class name
*/
register<T extends EmbeddingFunctionFactory = EmbeddingFunctionFactory>(
this: EmbeddingFunctionRegistry,
alias?: string,
// biome-ignore lint/suspicious/noExplicitAny: the decorator's return type is intentionally loose
): (ctor: T) => any {
const self = this;
return function (ctor: T) {
if (!alias) {
alias = ctor.name;
}
if (self.#functions.has(alias)) {
throw new Error(
`Embedding function with alias "${alias}" already exists`,
);
}
self.#functions.set(alias, ctor);
Reflect.defineMetadata("lancedb::embedding::name", alias, ctor);
return ctor;
};
}
/**
* Fetch an embedding function by name
* @param name The name of the function
*/
get<T extends EmbeddingFunction<unknown> = EmbeddingFunction>(
name: string,
): EmbeddingFunctionCreate<T> | undefined {
const factory = this.#functions.get(name);
if (!factory) {
return undefined;
}
return {
create: function (options?: EmbeddingFunctionOptions) {
return new factory(options) as unknown as T;
},
};
}
/**
* reset the registry to the initial state
*/
reset(this: EmbeddingFunctionRegistry) {
this.#functions.clear();
}
parseFunctions(
this: EmbeddingFunctionRegistry,
metadata: Map<string, string>,
): Map<string, EmbeddingFunctionConfig> {
if (!metadata.has("embedding_functions")) {
return new Map();
} else {
type FunctionConfig = {
name: string;
sourceColumn: string;
vectorColumn: string;
model: EmbeddingFunctionOptions;
};
const functions = <FunctionConfig[]>(
JSON.parse(metadata.get("embedding_functions")!)
);
return new Map(
functions.map((f) => {
const fn = this.get(f.name);
if (!fn) {
throw new Error(`Function "${f.name}" not found in registry`);
}
return [
f.name,
{
sourceColumn: f.sourceColumn,
vectorColumn: f.vectorColumn,
function: fn.create(f.model),
},
];
}),
);
}
}
// biome-ignore lint/suspicious/noExplicitAny: metadata values are free-form JSON
functionToMetadata(conf: EmbeddingFunctionConfig): Record<string, any> {
// biome-ignore lint/suspicious/noExplicitAny: metadata values are free-form JSON
const metadata: Record<string, any> = {};
const name = Reflect.getMetadata(
"lancedb::embedding::name",
conf.function.constructor,
);
metadata["sourceColumn"] = conf.sourceColumn;
metadata["vectorColumn"] = conf.vectorColumn ?? "vector";
metadata["name"] = name ?? conf.function.constructor.name;
metadata["model"] = conf.function.toJSON();
return metadata;
}
getTableMetadata(functions: EmbeddingFunctionConfig[]): Map<string, string> {
const metadata = new Map<string, string>();
const jsonData = functions.map((conf) => this.functionToMetadata(conf));
metadata.set("embedding_functions", JSON.stringify(jsonData));
return metadata;
}
}
const _REGISTRY = new EmbeddingFunctionRegistry();
export function register(name?: string) {
return _REGISTRY.register(name);
}
/**
* Utility function to get the global instance of the registry
* @returns `EmbeddingFunctionRegistry` The global instance of the registry
* @example
* ```ts
* const registry = getRegistry();
* const openai = registry.get("openai").create();
*/
export function getRegistry(): EmbeddingFunctionRegistry {
return _REGISTRY;
}
export interface EmbeddingFunctionConfig {
sourceColumn: string;
vectorColumn?: string;
function: EmbeddingFunction;
}
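A sketch of the serialize/parse round trip the registry enables; the "openai" alias comes from the decorator above, `OPENAI_API_KEY` is assumed to be set, and the column names are illustrative:

```ts
import { getRegistry } from "@lancedb/lancedb/embedding";

const registry = getRegistry();
const func = registry.get("openai")!.create({ model: "text-embedding-ada-002" });

// Serialize a config into the metadata map stored on the Arrow schema...
const metadata = registry.getTableMetadata([
  { sourceColumn: "text", vectorColumn: "vector", function: func },
]);

// ...and rebuild the functions from that metadata later (e.g. on table open).
const functions = registry.parseFunctions(metadata);
console.log(functions.get("openai")?.sourceColumn); // "text"
```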

View File

@@ -12,14 +12,14 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-import { RecordBatch, tableFromIPC, Table as ArrowTable } from "apache-arrow";
+import { Table as ArrowTable, RecordBatch, tableFromIPC } from "./arrow";
+import { type IvfPqOptions } from "./indices";
 import {
   RecordBatchIterator as NativeBatchIterator,
   Query as NativeQuery,
   Table as NativeTable,
   VectorQuery as NativeVectorQuery,
 } from "./native";
-import { type IvfPqOptions } from "./indices";
 export class RecordBatchIterator implements AsyncIterator<RecordBatch> {
   private promisedInner?: Promise<NativeBatchIterator>;
   private inner?: NativeBatchIterator;
@@ -29,7 +29,7 @@ export class RecordBatchIterator implements AsyncIterator<RecordBatch> {
     this.promisedInner = promise;
   }

-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
+  // biome-ignore lint/suspicious/noExplicitAny: skip
   async next(): Promise<IteratorResult<RecordBatch<any>>> {
     if (this.inner === undefined) {
       this.inner = await this.promisedInner;
@@ -56,7 +56,9 @@ export class QueryBase<
   QueryType,
 > implements AsyncIterable<RecordBatch>
 {
-  protected constructor(protected inner: NativeQueryType) {}
+  protected constructor(protected inner: NativeQueryType) {
+    // intentionally empty
+  }

   /**
    * A filter statement to be applied to this query.
@@ -150,7 +152,7 @@ export class QueryBase<
     return new RecordBatchIterator(this.nativeExecute());
   }

-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
+  // biome-ignore lint/suspicious/noExplicitAny: skip
   [Symbol.asyncIterator](): AsyncIterator<RecordBatch<any>> {
     const promise = this.nativeExecute();
     return new RecordBatchIterator(promise);
@@ -168,6 +170,7 @@ export class QueryBase<
   /** Collect the results as an array of objects. */
   async toArray(): Promise<unknown[]> {
     const tbl = await this.toArrow();
     // eslint-disable-next-line @typescript-eslint/no-unsafe-return
     return tbl.toArray();
   }
@@ -368,7 +371,7 @@ export class Query extends QueryBase<NativeQuery, Query> {
  * a default `limit` of 10 will be used. @see {@link Query#limit}
  */
 nearestTo(vector: unknown): VectorQuery {
-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
+  // biome-ignore lint/suspicious/noExplicitAny: skip
   const vectorQuery = this.inner.nearestTo(Float32Array.from(vector as any));
   return new VectorQuery(vectorQuery);
 }
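Putting the builder together, a short sketch of a filtered vector search, assuming a `table` with a numeric `id` column and a fixed-size-list `vector` column:

```ts
const rows = await table
  .query()
  .where("id > 1")
  .nearestTo([1.0, 2.0, 3.0])
  .limit(10)
  .toArray();
```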

View File

@@ -20,65 +20,65 @@
 // comes from the exact same library instance. This is not always the case
 // and so we must sanitize the input to ensure that it is compatible.

-import {
-  Field, Utf8, FixedSizeBinary, FixedSizeList, Schema, List, Struct, Float,
-  Bool, Date_, Decimal, DataType, Dictionary, Binary, Float32, Interval,
-  Map_, Duration, Union, Time, Timestamp, Type, Null, Int, type Precision,
-  type DateUnit, Int8, Int16, Int32, Int64, Uint8, Uint16, Uint32, Uint64,
-  Float16, Float64, DateDay, DateMillisecond, DenseUnion, SparseUnion,
-  TimeNanosecond, TimeMicrosecond, TimeMillisecond, TimeSecond,
-  TimestampNanosecond, TimestampMicrosecond, TimestampMillisecond,
-  TimestampSecond, IntervalDayTime, IntervalYearMonth, DurationNanosecond,
-  DurationMicrosecond, DurationMillisecond, DurationSecond,
-} from "apache-arrow";
-import type { IntBitWidth, TKeys, TimeBitWidth } from "apache-arrow/type";
+import type { IntBitWidth, TKeys, TimeBitWidth } from "apache-arrow/type";
+import {
+  Binary, Bool, DataType, DateDay, DateMillisecond, type DateUnit, Date_,
+  Decimal, DenseUnion, Dictionary, Duration, DurationMicrosecond,
+  DurationMillisecond, DurationNanosecond, DurationSecond, Field,
+  FixedSizeBinary, FixedSizeList, Float, Float16, Float32, Float64, Int,
+  Int8, Int16, Int32, Int64, Interval, IntervalDayTime, IntervalYearMonth,
+  List, Map_, Null, type Precision, Schema, SparseUnion, Struct, Time,
+  TimeMicrosecond, TimeMillisecond, TimeNanosecond, TimeSecond, Timestamp,
+  TimestampMicrosecond, TimestampMillisecond, TimestampNanosecond,
+  TimestampSecond, Type, Uint8, Uint16, Uint32, Uint64, Union, Utf8,
+} from "./arrow";
-function sanitizeMetadata(
+export function sanitizeMetadata(
   metadataLike?: unknown,
 ): Map<string, string> | undefined {
   if (metadataLike === undefined || metadataLike === null) {
@@ -97,7 +97,7 @@ function sanitizeMetadata(
   return metadataLike as Map<string, string>;
 }

-function sanitizeInt(typeLike: object) {
+export function sanitizeInt(typeLike: object) {
   if (
     !("bitWidth" in typeLike) ||
     typeof typeLike.bitWidth !== "number" ||
@@ -111,14 +111,14 @@ function sanitizeInt(typeLike: object) {
   return new Int(typeLike.isSigned, typeLike.bitWidth as IntBitWidth);
 }

-function sanitizeFloat(typeLike: object) {
+export function sanitizeFloat(typeLike: object) {
   if (!("precision" in typeLike) || typeof typeLike.precision !== "number") {
     throw Error("Expected a Float Type to have a `precision` property");
   }
   return new Float(typeLike.precision as Precision);
 }

-function sanitizeDecimal(typeLike: object) {
+export function sanitizeDecimal(typeLike: object) {
   if (
     !("scale" in typeLike) ||
     typeof typeLike.scale !== "number" ||
@@ -134,14 +134,14 @@ function sanitizeDecimal(typeLike: object) {
   return new Decimal(typeLike.scale, typeLike.precision, typeLike.bitWidth);
 }

-function sanitizeDate(typeLike: object) {
+export function sanitizeDate(typeLike: object) {
   if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
     throw Error("Expected a Date type to have a `unit` property");
   }
   return new Date_(typeLike.unit as DateUnit);
 }

-function sanitizeTime(typeLike: object) {
+export function sanitizeTime(typeLike: object) {
   if (
     !("unit" in typeLike) ||
     typeof typeLike.unit !== "number" ||
@@ -155,7 +155,7 @@ function sanitizeTime(typeLike: object) {
   return new Time(typeLike.unit, typeLike.bitWidth as TimeBitWidth);
 }

-function sanitizeTimestamp(typeLike: object) {
+export function sanitizeTimestamp(typeLike: object) {
   if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
     throw Error("Expected a Timestamp type to have a `unit` property");
   }
@@ -166,7 +166,7 @@ function sanitizeTimestamp(typeLike: object) {
   return new Timestamp(typeLike.unit, timezone);
 }

-function sanitizeTypedTimestamp(
+export function sanitizeTypedTimestamp(
   typeLike: object,
   // eslint-disable-next-line @typescript-eslint/naming-convention
   Datatype:
@@ -182,14 +182,14 @@ function sanitizeTypedTimestamp(
   return new Datatype(timezone);
 }

-function sanitizeInterval(typeLike: object) {
+export function sanitizeInterval(typeLike: object) {
   if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
     throw Error("Expected an Interval type to have a `unit` property");
   }
   return new Interval(typeLike.unit);
 }

-function sanitizeList(typeLike: object) {
+export function sanitizeList(typeLike: object) {
   if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
     throw Error(
       "Expected a List type to have an array-like `children` property",
@@ -201,7 +201,7 @@ function sanitizeList(typeLike: object) {
   return new List(sanitizeField(typeLike.children[0]));
 }

-function sanitizeStruct(typeLike: object) {
+export function sanitizeStruct(typeLike: object) {
   if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
     throw Error(
       "Expected a Struct type to have an array-like `children` property",
@@ -210,7 +210,7 @@ function sanitizeStruct(typeLike: object) {
   return new Struct(typeLike.children.map((child) => sanitizeField(child)));
 }

-function sanitizeUnion(typeLike: object) {
+export function sanitizeUnion(typeLike: object) {
   if (
     !("typeIds" in typeLike) ||
     !("mode" in typeLike) ||
@@ -228,13 +228,13 @@ function sanitizeUnion(typeLike: object) {
   return new Union(
     typeLike.mode,
-    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    // biome-ignore lint/suspicious/noExplicitAny: skip
     typeLike.typeIds as any,
     typeLike.children.map((child) => sanitizeField(child)),
   );
 }

-function sanitizeTypedUnion(
+export function sanitizeTypedUnion(
   typeLike: object,
   // eslint-disable-next-line @typescript-eslint/naming-convention
   UnionType: typeof DenseUnion | typeof SparseUnion,
@@ -256,7 +256,7 @@ function sanitizeTypedUnion(
   );
 }

-function sanitizeFixedSizeBinary(typeLike: object) {
+export function sanitizeFixedSizeBinary(typeLike: object) {
   if (!("byteWidth" in typeLike) || typeof typeLike.byteWidth !== "number") {
     throw Error(
       "Expected a FixedSizeBinary type to have a `byteWidth` property",
@@ -265,7 +265,7 @@ function sanitizeFixedSizeBinary(typeLike: object) {
   return new FixedSizeBinary(typeLike.byteWidth);
 }

-function sanitizeFixedSizeList(typeLike: object) {
+export function sanitizeFixedSizeList(typeLike: object) {
   if (!("listSize" in typeLike) || typeof typeLike.listSize !== "number") {
     throw Error("Expected a FixedSizeList type to have a `listSize` property");
   }
@@ -283,7 +283,7 @@ function sanitizeFixedSizeList(typeLike: object) {
   );
 }

-function sanitizeMap(typeLike: object) {
+export function sanitizeMap(typeLike: object) {
   if (!("children" in typeLike) || !Array.isArray(typeLike.children)) {
     throw Error(
       "Expected a Map type to have an array-like `children` property",
@@ -294,20 +294,20 @@ function sanitizeMap(typeLike: object) {
   }
   return new Map_(
-    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    // biome-ignore lint/suspicious/noExplicitAny: skip
     typeLike.children.map((field) => sanitizeField(field)) as any,
     typeLike.keysSorted,
   );
 }

-function sanitizeDuration(typeLike: object) {
+export function sanitizeDuration(typeLike: object) {
   if (!("unit" in typeLike) || typeof typeLike.unit !== "number") {
     throw Error("Expected a Duration type to have a `unit` property");
   }
   return new Duration(typeLike.unit);
 }

-function sanitizeDictionary(typeLike: object) {
+export function sanitizeDictionary(typeLike: object) {
   if (!("id" in typeLike) || typeof typeLike.id !== "number") {
     throw Error("Expected a Dictionary type to have an `id` property");
   }
@@ -328,8 +328,8 @@ function sanitizeDictionary(typeLike: object) {
   );
 }

-// eslint-disable-next-line @typescript-eslint/no-explicit-any
-function sanitizeType(typeLike: unknown): DataType<any> {
+// biome-ignore lint/suspicious/noExplicitAny: skip
+export function sanitizeType(typeLike: unknown): DataType<any> {
   if (typeof typeLike !== "object" || typeLike === null) {
     throw Error("Expected a Type but object was null/undefined");
   }
@@ -449,7 +449,7 @@ function sanitizeType(typeLike: unknown): DataType<any> {
   }
 }

-function sanitizeField(fieldLike: unknown): Field {
+export function sanitizeField(fieldLike: unknown): Field {
   if (fieldLike instanceof Field) {
     return fieldLike;
   }

View File

@@ -12,16 +12,18 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-import { Schema, tableFromIPC } from "apache-arrow";
+import { Data, Schema, fromDataToBuffer, tableFromIPC } from "./arrow";
+import { getRegistry } from "./embedding/registry";
+import { IndexOptions } from "./indices";
 import {
   AddColumnsSql,
   ColumnAlteration,
   IndexConfig,
+  OptimizeStats,
   Table as _NativeTable,
 } from "./native";
 import { Query, VectorQuery } from "./query";
-import { IndexOptions } from "./indices";
-import { Data, fromDataToBuffer } from "./arrow";

 export { IndexConfig } from "./native";
 /**
@@ -50,6 +52,23 @@ export interface UpdateOptions {
   where: string;
 }

 export interface OptimizeOptions {
   /**
    * If set then all versions older than the given date will be removed.
    * The current version will never be removed.
    * The default is 7 days
    * @example
    * // Delete all versions older than 1 day
    * const olderThan = new Date();
    * olderThan.setDate(olderThan.getDate() - 1);
    * tbl.optimize({ cleanupOlderThan: olderThan });
    *
    * // Delete all versions except the current version
    * tbl.optimize({ cleanupOlderThan: new Date() });
    */
   cleanupOlderThan: Date;
 }
 /**
  * A Table is a collection of Records in a LanceDB Database.
  *
@@ -104,8 +123,14 @@ export class Table {
  */
 async add(data: Data, options?: Partial<AddDataOptions>): Promise<void> {
   const mode = options?.mode ?? "append";
+  const schema = await this.schema();
+  const registry = getRegistry();
+  const functions = registry.parseFunctions(schema.metadata);

-  const buffer = await fromDataToBuffer(data);
+  const buffer = await fromDataToBuffer(
+    data,
+    functions.values().next().value,
+  );
   await this.inner.add(buffer, mode);
 }
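Given that lookup, a table created with embedding metadata (for example via `LanceSchema`) fills its vector column automatically on `add`; a sketch, assuming such a table already exists:

```ts
// "text" is the configured source column; the registered embedding
// function populates the "vector" column during add().
await table.add([{ text: "a new document" }]);
```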
@@ -169,21 +194,24 @@ export class Table {
  * // If the column has a vector (fixed size list) data type then
  * // an IvfPq vector index will be created.
  * const table = await conn.openTable("my_table");
- * await table.createIndex(["vector"]);
+ * await table.createIndex("vector");
  * @example
  * // For advanced control over vector index creation you can specify
  * // the index type and options.
  * const table = await conn.openTable("my_table");
- * await table.createIndex(["vector"], I)
- *   .ivf_pq({ num_partitions: 128, num_sub_vectors: 16 })
- *   .build();
+ * await table.createIndex("vector", {
+ *   config: lancedb.Index.ivfPq({
+ *     numPartitions: 128,
+ *     numSubVectors: 16,
+ *   }),
+ * });
  * @example
  * // Or create a Scalar index
- * await table.createIndex("my_float_col").build();
+ * await table.createIndex("my_float_col");
  */
 async createIndex(column: string, options?: Partial<IndexOptions>) {
   // Bit of a hack to get around the fact that TS has no package-scope.
-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
+  // biome-ignore lint/suspicious/noExplicitAny: skip
   const nativeIndex = (options?.config as any)?.inner;
   await this.inner.createIndex(nativeIndex, column, options?.replace);
 }
@@ -197,8 +225,7 @@ export class Table {
  * vector similarity, sorting, and more.
  *
  * Note: By default, all columns are returned. For best performance, you should
- * only fetch the columns you need. See [`Query::select_with_projection`] for
- * more details.
+ * only fetch the columns you need.
  *
  * When appropriate, various indices and statistics based pruning will be used to
  * accelerate the query.
@@ -206,10 +233,13 @@ export class Table {
  * // SQL-style filtering
  * //
  * // This query will return up to 1000 rows whose value in the `id` column
  * // is greater than 5. LanceDb supports a broad set of filtering functions.
- * for await (const batch of table.query()
- *   .filter("id > 1").select(["id"]).limit(20)) {
- *   console.log(batch);
+ * for await (const batch of table
+ *   .query()
+ *   .where("id > 1")
+ *   .select(["id"])
+ *   .limit(20)) {
+ *   console.log(batch);
  * }
  * @example
  * // Vector Similarity Search
  * //
  * // This will find the 10 rows whose value in the "vector" column are
  * // closest to the query vector [1.0, 2.0, 3.0]. If an index has been created
  * // on the "vector" column then this will perform an ANN search.
  * //
- * // The `refine_factor` and `nprobes` methods are used to control the recall /
+ * // The `refineFactor` and `nprobes` methods are used to control the recall /
  * // latency tradeoff of the search.
- * for await (const batch of table.query()
- *   .nearestTo([1, 2, 3])
- *   .refineFactor(5).nprobe(10)
- *   .limit(10)) {
- *   console.log(batch);
+ * for await (const batch of table
+ *   .query()
+ *   .nearestTo([1, 2, 3])
+ *   .refineFactor(5)
+ *   .nprobes(10)
+ *   .limit(10)) {
+ *   console.log(batch);
  * }
  * @example
  * // Scan the full dataset
@@ -286,43 +317,45 @@ export class Table {
   await this.inner.dropColumns(columnNames);
 }
-/**
- * Retrieve the version of the table
- *
- * LanceDb supports versioning. Every operation that modifies the table increases
- * version. As long as a version hasn't been deleted you can `[Self::checkout]` that
- * version to view the data at that point. In addition, you can `[Self::restore]` the
- * version to replace the current table with a previous version.
- */
+/** Retrieve the version of the table */
 async version(): Promise<number> {
   return await this.inner.version();
 }
 /**
- * Checks out a specific version of the Table
- *
- * Any read operation on the table will now access the data at the checked out version.
- * As a consequence, calling this method will disable any read consistency interval
- * that was previously set.
- *
- * This is a read-only operation that turns the table into a sort of "view"
- * or "detached head". Other table instances will not be affected. To make the change
- * permanent you can use the `[Self::restore]` method.
- *
- * Any operation that modifies the table will fail while the table is in a checked
- * out state.
- *
- * To return the table to a normal state use `[Self::checkout_latest]`
+ * Checks out a specific version of the table. _This is an in-place operation._
+ *
+ * This allows viewing previous versions of the table. If you wish to
+ * keep writing to the dataset starting from an old version, then use
+ * the `restore` function.
+ *
+ * Calling this method will set the table into time-travel mode. If you
+ * wish to return to standard mode, call `checkoutLatest`.
+ * @param {number} version The version to checkout
+ * @example
+ * ```typescript
+ * import * as lancedb from "@lancedb/lancedb"
+ * const db = await lancedb.connect("./.lancedb");
+ * const table = await db.createTable("my_table", [
+ *   { vector: [1.1, 0.9], type: "vector" },
+ * ]);
+ *
+ * console.log(await table.version()); // 1
+ * console.log(table.display());
+ * await table.add([{ vector: [0.5, 0.2], type: "vector" }]);
+ * await table.checkout(1);
+ * console.log(await table.version()); // 1
+ * ```
  */
 async checkout(version: number): Promise<void> {
   await this.inner.checkout(version);
 }
 /**
- * Ensures the table is pointing at the latest version
- *
- * This can be used to manually update a table when the read_consistency_interval is None
- * It can also be used to undo a `[Self::checkout]` operation
+ * Checkout the latest version of the table. _This is an in-place operation._
+ *
+ * The table will be set back into standard mode, and will track the latest
+ * version of the table.
  */
 async checkoutLatest(): Promise<void> {
   await this.inner.checkoutLatest();
@@ -345,8 +378,48 @@ export class Table {
 }

 /**
- * List all indices that have been created with Self::create_index
+ * Optimize the on-disk data and indices for better performance.
+ *
+ * Modeled after ``VACUUM`` in PostgreSQL.
+ *
+ * Optimization covers three operations:
+ *
+ * - Compaction: Merges small files into larger ones
+ * - Prune: Removes old versions of the dataset
+ * - Index: Optimizes the indices, adding new data to existing indices
+ *
+ * Experimental API
+ * ----------------
+ *
+ * The optimization process is undergoing active development and may change.
+ * Our goal with these changes is to improve the performance of optimization and
+ * reduce the complexity.
+ *
+ * That being said, it is essential today to run optimize if you want the best
+ * performance. It should be stable and safe to use in production, but it is our
+ * hope that the API may be simplified (or not even need to be called) in the
+ * future.
+ *
+ * How frequently an application should call optimize depends on how often data
+ * is modified. If data is frequently added, deleted, or updated then optimize
+ * should be run frequently. A good rule of thumb is to run optimize if you have
+ * added or modified 100,000 or more records or run more than 20 data
+ * modification operations.
  */
+async optimize(options?: Partial<OptimizeOptions>): Promise<OptimizeStats> {
+  let cleanupOlderThanMs;
+  if (
+    options?.cleanupOlderThan !== undefined &&
+    options?.cleanupOlderThan !== null
+  ) {
+    cleanupOlderThanMs =
+      new Date().getTime() - options.cleanupOlderThan.getTime();
+  }
+  return await this.inner.optimize(cleanupOlderThanMs);
+}
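A sketch of calling the new method; the 7-day window mirrors the documented default:

```ts
const olderThan = new Date();
olderThan.setDate(olderThan.getDate() - 7);
// Compacts files, prunes versions older than `olderThan`, and updates indices.
const stats = await table.optimize({ cleanupOlderThan: olderThan });
console.log(stats);
```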
/** List all indices that have been created with {@link Table.createIndex} */
 async listIndices(): Promise<IndexConfig[]> {
   return await this.inner.listIndices();
 }

View File

@@ -1,18 +1,12 @@
 {
   "name": "@lancedb/lancedb-darwin-arm64",
-  "version": "0.4.17",
+  "version": "0.5.0",
-  "os": [
-    "darwin"
-  ],
-  "cpu": [
-    "arm64"
-  ],
+  "os": ["darwin"],
+  "cpu": ["arm64"],
   "main": "lancedb.darwin-arm64.node",
-  "files": [
-    "lancedb.darwin-arm64.node"
-  ],
+  "files": ["lancedb.darwin-arm64.node"],
   "license": "Apache 2.0",
   "engines": {
     "node": ">= 18"
   }
 }

View File

@@ -1,18 +1,12 @@
 {
   "name": "@lancedb/lancedb-darwin-x64",
-  "version": "0.4.17",
+  "version": "0.5.0",
-  "os": [
-    "darwin"
-  ],
-  "cpu": [
-    "x64"
-  ],
+  "os": ["darwin"],
+  "cpu": ["x64"],
   "main": "lancedb.darwin-x64.node",
-  "files": [
-    "lancedb.darwin-x64.node"
-  ],
+  "files": ["lancedb.darwin-x64.node"],
   "license": "Apache 2.0",
   "engines": {
     "node": ">= 18"
   }
 }

View File

@@ -1,21 +1,13 @@
 {
   "name": "@lancedb/lancedb-linux-arm64-gnu",
-  "version": "0.4.17",
+  "version": "0.5.0",
-  "os": [
-    "linux"
-  ],
-  "cpu": [
-    "arm64"
-  ],
+  "os": ["linux"],
+  "cpu": ["arm64"],
   "main": "lancedb.linux-arm64-gnu.node",
-  "files": [
-    "lancedb.linux-arm64-gnu.node"
-  ],
+  "files": ["lancedb.linux-arm64-gnu.node"],
   "license": "Apache 2.0",
   "engines": {
     "node": ">= 18"
   },
-  "libc": [
-    "glibc"
-  ]
+  "libc": ["glibc"]
 }

View File

@@ -1,21 +1,13 @@
 {
   "name": "@lancedb/lancedb-linux-x64-gnu",
-  "version": "0.4.17",
+  "version": "0.5.0",
-  "os": [
-    "linux"
-  ],
-  "cpu": [
-    "x64"
-  ],
+  "os": ["linux"],
+  "cpu": ["x64"],
   "main": "lancedb.linux-x64-gnu.node",
-  "files": [
-    "lancedb.linux-x64-gnu.node"
-  ],
+  "files": ["lancedb.linux-x64-gnu.node"],
   "license": "Apache 2.0",
   "engines": {
     "node": ">= 18"
   },
-  "libc": [
-    "glibc"
-  ]
+  "libc": ["glibc"]
 }

View File

@@ -1,18 +1,12 @@
 {
   "name": "@lancedb/lancedb-win32-x64-msvc",
-  "version": "0.4.14",
+  "version": "0.5.0",
-  "os": [
-    "win32"
-  ],
-  "cpu": [
-    "x64"
-  ],
+  "os": ["win32"],
+  "cpu": ["x64"],
   "main": "lancedb.win32-x64-msvc.node",
-  "files": [
-    "lancedb.win32-x64-msvc.node"
-  ],
+  "files": ["lancedb.win32-x64-msvc.node"],
   "license": "Apache 2.0",
   "engines": {
     "node": ">= 18"
   }
 }

nodejs/package-lock.json (generated; 600 changed lines)
View File

@@ -1,12 +1,12 @@
 {
   "name": "@lancedb/lancedb",
-  "version": "0.4.16",
+  "version": "0.5.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "@lancedb/lancedb",
-      "version": "0.4.16",
+      "version": "0.5.0",
       "cpu": [
         "x64",
         "arm64"
@@ -19,22 +19,20 @@
       ],
       "dependencies": {
         "apache-arrow": "^15.0.0",
-        "openai": "^4.29.2"
+        "openai": "^4.29.2",
+        "reflect-metadata": "^0.2.2"
       },
       "devDependencies": {
         "@aws-sdk/client-kms": "^3.33.0",
         "@aws-sdk/client-s3": "^3.33.0",
+        "@biomejs/biome": "^1.7.3",
+        "@jest/globals": "^29.7.0",
         "@napi-rs/cli": "^2.18.0",
         "@types/jest": "^29.1.2",
         "@types/tmp": "^0.2.6",
-        "@typescript-eslint/eslint-plugin": "^6.19.0",
-        "@typescript-eslint/parser": "^6.19.0",
         "apache-arrow-old": "npm:apache-arrow@13.0.0",
         "eslint": "^8.57.0",
-        "eslint-config-prettier": "^9.1.0",
-        "eslint-plugin-jsdoc": "^48.2.1",
         "jest": "^29.7.0",
-        "prettier": "^3.1.0",
         "shx": "^0.3.4",
         "tmp": "^0.2.3",
         "ts-jest": "^29.1.2",
@@ -45,13 +43,6 @@
       },
       "engines": {
         "node": ">= 18"
-      },
-      "optionalDependencies": {
-        "@lancedb/lancedb-darwin-arm64": "0.4.16",
-        "@lancedb/lancedb-darwin-x64": "0.4.16",
-        "@lancedb/lancedb-linux-arm64-gnu": "0.4.16",
-        "@lancedb/lancedb-linux-x64-gnu": "0.4.16",
-        "@lancedb/lancedb-win32-x64-msvc": "0.4.16"
       }
     },
     "node_modules/@75lb/deep-merge": {
@@ -1660,18 +1651,159 @@
"integrity": "sha512-0hYQ8SB4Db5zvZB4axdMHGwEaQjkZzFjQiN9LVYvIFB2nSUHW9tYpxWriPrWDASIxiaXax83REcLxuSdnGPZtw==", "integrity": "sha512-0hYQ8SB4Db5zvZB4axdMHGwEaQjkZzFjQiN9LVYvIFB2nSUHW9tYpxWriPrWDASIxiaXax83REcLxuSdnGPZtw==",
"dev": true "dev": true
}, },
"node_modules/@es-joy/jsdoccomment": { "node_modules/@biomejs/biome": {
"version": "0.42.0", "version": "1.7.3",
"resolved": "https://registry.npmjs.org/@es-joy/jsdoccomment/-/jsdoccomment-0.42.0.tgz", "resolved": "https://registry.npmjs.org/@biomejs/biome/-/biome-1.7.3.tgz",
"integrity": "sha512-R1w57YlVA6+YE01wch3GPYn6bCsrOV3YW/5oGGE2tmX6JcL9Nr+b5IikrjMPF+v9CV3ay+obImEdsDhovhJrzw==", "integrity": "sha512-ogFQI+fpXftr+tiahA6bIXwZ7CSikygASdqMtH07J2cUzrpjyTMVc9Y97v23c7/tL1xCZhM+W9k4hYIBm7Q6cQ==",
"dev": true, "dev": true,
"dependencies": { "hasInstallScript": true,
"comment-parser": "1.4.1", "bin": {
"esquery": "^1.5.0", "biome": "bin/biome"
"jsdoc-type-pratt-parser": "~4.0.0"
}, },
"engines": { "engines": {
"node": ">=16" "node": ">=14.21.3"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/biome"
},
"optionalDependencies": {
"@biomejs/cli-darwin-arm64": "1.7.3",
"@biomejs/cli-darwin-x64": "1.7.3",
"@biomejs/cli-linux-arm64": "1.7.3",
"@biomejs/cli-linux-arm64-musl": "1.7.3",
"@biomejs/cli-linux-x64": "1.7.3",
"@biomejs/cli-linux-x64-musl": "1.7.3",
"@biomejs/cli-win32-arm64": "1.7.3",
"@biomejs/cli-win32-x64": "1.7.3"
}
},
"node_modules/@biomejs/cli-darwin-arm64": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-darwin-arm64/-/cli-darwin-arm64-1.7.3.tgz",
"integrity": "sha512-eDvLQWmGRqrPIRY7AIrkPHkQ3visEItJKkPYSHCscSDdGvKzYjmBJwG1Gu8+QC5ed6R7eiU63LEC0APFBobmfQ==",
"cpu": [
"arm64"
],
"dev": true,
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-darwin-x64": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-darwin-x64/-/cli-darwin-x64-1.7.3.tgz",
"integrity": "sha512-JXCaIseKRER7dIURsVlAJacnm8SG5I0RpxZ4ya3dudASYUc68WGl4+FEN03ABY3KMIq7hcK1tzsJiWlmXyosZg==",
"cpu": [
"x64"
],
"dev": true,
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-linux-arm64": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-linux-arm64/-/cli-linux-arm64-1.7.3.tgz",
"integrity": "sha512-phNTBpo7joDFastnmZsFjYcDYobLTx4qR4oPvc9tJ486Bd1SfEVPHEvJdNJrMwUQK56T+TRClOQd/8X1nnjA9w==",
"cpu": [
"arm64"
],
"dev": true,
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-linux-arm64-musl": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-linux-arm64-musl/-/cli-linux-arm64-musl-1.7.3.tgz",
"integrity": "sha512-c8AlO45PNFZ1BYcwaKzdt46kYbuP6xPGuGQ6h4j3XiEDpyseRRUy/h+6gxj07XovmyxKnSX9GSZ6nVbZvcVUAw==",
"cpu": [
"arm64"
],
"dev": true,
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-linux-x64": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-linux-x64/-/cli-linux-x64-1.7.3.tgz",
"integrity": "sha512-vnedYcd5p4keT3iD48oSKjOIRPYcjSNNbd8MO1bKo9ajg3GwQXZLAH+0Cvlr+eMsO67/HddWmscSQwTFrC/uPA==",
"cpu": [
"x64"
],
"dev": true,
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-linux-x64-musl": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-linux-x64-musl/-/cli-linux-x64-musl-1.7.3.tgz",
"integrity": "sha512-UdEHKtYGWEX3eDmVWvQeT+z05T9/Sdt2+F/7zmMOFQ7boANeX8pcO6EkJPK3wxMudrApsNEKT26rzqK6sZRTRA==",
"cpu": [
"x64"
],
"dev": true,
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-win32-arm64": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-win32-arm64/-/cli-win32-arm64-1.7.3.tgz",
"integrity": "sha512-unNCDqUKjujYkkSxs7gFIfdasttbDC4+z0kYmcqzRk6yWVoQBL4dNLcCbdnJS+qvVDNdI9rHp2NwpQ0WAdla4Q==",
"cpu": [
"arm64"
],
"dev": true,
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=14.21.3"
}
},
"node_modules/@biomejs/cli-win32-x64": {
"version": "1.7.3",
"resolved": "https://registry.npmjs.org/@biomejs/cli-win32-x64/-/cli-win32-x64-1.7.3.tgz",
"integrity": "sha512-ZmByhbrnmz/UUFYB622CECwhKIPjJLLPr5zr3edhu04LzbfcOrz16VYeNq5dpO1ADG70FORhAJkaIGdaVBG00w==",
"cpu": [
"x64"
],
"dev": true,
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=14.21.3"
       }
     },
     "node_modules/@eslint-community/eslint-utils": {
@@ -2221,81 +2353,6 @@
"@jridgewell/sourcemap-codec": "^1.4.14" "@jridgewell/sourcemap-codec": "^1.4.14"
} }
}, },
"node_modules/@lancedb/lancedb-darwin-arm64": {
"version": "0.4.16",
"resolved": "https://registry.npmjs.org/@lancedb/lancedb-darwin-arm64/-/lancedb-darwin-arm64-0.4.16.tgz",
"integrity": "sha512-CV65ouIDQbBSNtdHbQSr2fqXflOuqud1cfweUS+EiK7eEOEYl7nO2oiFYO49Jy76MEwZxiP99hW825aCqIQJqg==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">= 18"
}
},
"node_modules/@lancedb/lancedb-darwin-x64": {
"version": "0.4.16",
"resolved": "https://registry.npmjs.org/@lancedb/lancedb-darwin-x64/-/lancedb-darwin-x64-0.4.16.tgz",
"integrity": "sha512-1CwIYCNdbFmV7fvqM+qUxbYgwxx0slcCV48PC/I19Ejitgtzw/NJiWDCvONhaLqG85lWNZm1xYceRpVv7b8seQ==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">= 18"
}
},
"node_modules/@lancedb/lancedb-linux-arm64-gnu": {
"version": "0.4.16",
"resolved": "https://registry.npmjs.org/@lancedb/lancedb-linux-arm64-gnu/-/lancedb-linux-arm64-gnu-0.4.16.tgz",
"integrity": "sha512-CzLEbzoHKS6jV0k52YnvsiVNx0VzLp1Vz/zmbHI6HmB/XbS67qDO93Jk71MDmXq3JDw0FKFCw9ghkg+6YWq7ZA==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">= 18"
}
},
"node_modules/@lancedb/lancedb-linux-x64-gnu": {
"version": "0.4.16",
"resolved": "https://registry.npmjs.org/@lancedb/lancedb-linux-x64-gnu/-/lancedb-linux-x64-gnu-0.4.16.tgz",
"integrity": "sha512-nKChybybi8uA0AFRHBFm7Fz3VXcRm8riv5Gs7xQsrsCtYxxf4DT/0BfUvQ0xKbwNJa+fawHRxi9BOQewdj49fg==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">= 18"
}
},
"node_modules/@lancedb/lancedb-win32-x64-msvc": {
"version": "0.4.16",
"resolved": "https://registry.npmjs.org/@lancedb/lancedb-win32-x64-msvc/-/lancedb-win32-x64-msvc-0.4.16.tgz",
"integrity": "sha512-KMeBPMpv2g+ZMVsHVibed7BydrBlxje1qS0bZTDrLw9BtZOk6XH2lh1mCDnCJI6sbAscUKNA6fDCdquhQPHL7w==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">= 18"
}
},
"node_modules/@napi-rs/cli": { "node_modules/@napi-rs/cli": {
"version": "2.18.0", "version": "2.18.0",
"resolved": "https://registry.npmjs.org/@napi-rs/cli/-/cli-2.18.0.tgz", "resolved": "https://registry.npmjs.org/@napi-rs/cli/-/cli-2.18.0.tgz",
@@ -3222,220 +3279,6 @@
"integrity": "sha512-I4q9QU9MQv4oEOz4tAHJtNz1cwuLxn2F3xcc2iV5WdqLPpUnj30aUuxt1mAxYTG+oe8CZMV/+6rU4S4gRDzqtQ==", "integrity": "sha512-I4q9QU9MQv4oEOz4tAHJtNz1cwuLxn2F3xcc2iV5WdqLPpUnj30aUuxt1mAxYTG+oe8CZMV/+6rU4S4gRDzqtQ==",
"dev": true "dev": true
}, },
"node_modules/@typescript-eslint/eslint-plugin": {
"version": "6.19.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-6.19.0.tgz",
"integrity": "sha512-DUCUkQNklCQYnrBSSikjVChdc84/vMPDQSgJTHBZ64G9bA9w0Crc0rd2diujKbTdp6w2J47qkeHQLoi0rpLCdg==",
"dev": true,
"dependencies": {
"@eslint-community/regexpp": "^4.5.1",
"@typescript-eslint/scope-manager": "6.19.0",
"@typescript-eslint/type-utils": "6.19.0",
"@typescript-eslint/utils": "6.19.0",
"@typescript-eslint/visitor-keys": "6.19.0",
"debug": "^4.3.4",
"graphemer": "^1.4.0",
"ignore": "^5.2.4",
"natural-compare": "^1.4.0",
"semver": "^7.5.4",
"ts-api-utils": "^1.0.1"
},
"engines": {
"node": "^16.0.0 || >=18.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/typescript-eslint"
},
"peerDependencies": {
"@typescript-eslint/parser": "^6.0.0 || ^6.0.0-alpha",
"eslint": "^7.0.0 || ^8.0.0"
},
"peerDependenciesMeta": {
"typescript": {
"optional": true
}
}
},
"node_modules/@typescript-eslint/parser": {
"version": "6.19.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-6.19.0.tgz",
"integrity": "sha512-1DyBLG5SH7PYCd00QlroiW60YJ4rWMuUGa/JBV0iZuqi4l4IK3twKPq5ZkEebmGqRjXWVgsUzfd3+nZveewgow==",
"dev": true,
"dependencies": {
"@typescript-eslint/scope-manager": "6.19.0",
"@typescript-eslint/types": "6.19.0",
"@typescript-eslint/typescript-estree": "6.19.0",
"@typescript-eslint/visitor-keys": "6.19.0",
"debug": "^4.3.4"
},
"engines": {
"node": "^16.0.0 || >=18.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/typescript-eslint"
},
"peerDependencies": {
"eslint": "^7.0.0 || ^8.0.0"
},
"peerDependenciesMeta": {
"typescript": {
"optional": true
}
}
},
"node_modules/@typescript-eslint/scope-manager": {
"version": "6.19.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-6.19.0.tgz",
"integrity": "sha512-dO1XMhV2ehBI6QN8Ufi7I10wmUovmLU0Oru3n5LVlM2JuzB4M+dVphCPLkVpKvGij2j/pHBWuJ9piuXx+BhzxQ==",
"dev": true,
"dependencies": {
"@typescript-eslint/types": "6.19.0",
"@typescript-eslint/visitor-keys": "6.19.0"
},
"engines": {
"node": "^16.0.0 || >=18.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/typescript-eslint"
}
},
"node_modules/@typescript-eslint/type-utils": {
"version": "6.19.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/type-utils/-/type-utils-6.19.0.tgz",
"integrity": "sha512-mcvS6WSWbjiSxKCwBcXtOM5pRkPQ6kcDds/juxcy/727IQr3xMEcwr/YLHW2A2+Fp5ql6khjbKBzOyjuPqGi/w==",
"dev": true,
"dependencies": {
"@typescript-eslint/typescript-estree": "6.19.0",
"@typescript-eslint/utils": "6.19.0",
"debug": "^4.3.4",
"ts-api-utils": "^1.0.1"
},
"engines": {
"node": "^16.0.0 || >=18.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/typescript-eslint"
},
"peerDependencies": {
"eslint": "^7.0.0 || ^8.0.0"
},
"peerDependenciesMeta": {
"typescript": {
"optional": true
}
}
},
"node_modules/@typescript-eslint/types": {
"version": "6.19.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-6.19.0.tgz",
"integrity": "sha512-lFviGV/vYhOy3m8BJ/nAKoAyNhInTdXpftonhWle66XHAtT1ouBlkjL496b5H5hb8dWXHwtypTqgtb/DEa+j5A==",
"dev": true,
"engines": {
"node": "^16.0.0 || >=18.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/typescript-eslint"
}
},
"node_modules/@typescript-eslint/typescript-estree": {
"version": "6.19.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-6.19.0.tgz",
"integrity": "sha512-o/zefXIbbLBZ8YJ51NlkSAt2BamrK6XOmuxSR3hynMIzzyMY33KuJ9vuMdFSXW+H0tVvdF9qBPTHA91HDb4BIQ==",
"dev": true,
"dependencies": {
"@typescript-eslint/types": "6.19.0",
"@typescript-eslint/visitor-keys": "6.19.0",
"debug": "^4.3.4",
"globby": "^11.1.0",
"is-glob": "^4.0.3",
"minimatch": "9.0.3",
"semver": "^7.5.4",
"ts-api-utils": "^1.0.1"
},
"engines": {
"node": "^16.0.0 || >=18.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/typescript-eslint"
},
"peerDependenciesMeta": {
"typescript": {
"optional": true
}
}
},
"node_modules/@typescript-eslint/typescript-estree/node_modules/brace-expansion": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.1.tgz",
"integrity": "sha512-XnAIvQ8eM+kC6aULx6wuQiwVsnzsi9d3WxzV3FpWTGA19F621kwdbsAcFKXgKUHZWsy+mY6iL1sHTxWEFCytDA==",
"dev": true,
"dependencies": {
"balanced-match": "^1.0.0"
}
},
"node_modules/@typescript-eslint/typescript-estree/node_modules/minimatch": {
"version": "9.0.3",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.3.tgz",
"integrity": "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg==",
"dev": true,
"dependencies": {
"brace-expansion": "^2.0.1"
},
"engines": {
"node": ">=16 || 14 >=14.17"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
}
},
"node_modules/@typescript-eslint/utils": {
"version": "6.19.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-6.19.0.tgz",
"integrity": "sha512-QR41YXySiuN++/dC9UArYOg4X86OAYP83OWTewpVx5ct1IZhjjgTLocj7QNxGhWoTqknsgpl7L+hGygCO+sdYw==",
"dev": true,
"dependencies": {
"@eslint-community/eslint-utils": "^4.4.0",
"@types/json-schema": "^7.0.12",
"@types/semver": "^7.5.0",
"@typescript-eslint/scope-manager": "6.19.0",
"@typescript-eslint/types": "6.19.0",
"@typescript-eslint/typescript-estree": "6.19.0",
"semver": "^7.5.4"
},
"engines": {
"node": "^16.0.0 || >=18.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/typescript-eslint"
},
"peerDependencies": {
"eslint": "^7.0.0 || ^8.0.0"
}
},
"node_modules/@typescript-eslint/visitor-keys": {
"version": "6.19.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-6.19.0.tgz",
"integrity": "sha512-hZaUCORLgubBvtGpp1JEFEazcuEdfxta9j4iUwdSAr7mEsYYAp3EAUyCZk3VEEqGj6W+AV4uWyrDGtrlawAsgQ==",
"dev": true,
"dependencies": {
"@typescript-eslint/types": "6.19.0",
"eslint-visitor-keys": "^3.4.1"
},
"engines": {
"node": "^16.0.0 || >=18.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/typescript-eslint"
}
},
"node_modules/@ungap/structured-clone": { "node_modules/@ungap/structured-clone": {
"version": "1.2.0", "version": "1.2.0",
"resolved": "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.2.0.tgz", "resolved": "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.2.0.tgz",
@@ -3623,15 +3466,6 @@
"integrity": "sha512-cumHmIAf6On83X7yP+LrsEyUOf/YlociZelmpRYaGFydoaPdxdt80MAbu6vWerQT2COCp2nPvHdsbD7tHn/YlQ==", "integrity": "sha512-cumHmIAf6On83X7yP+LrsEyUOf/YlociZelmpRYaGFydoaPdxdt80MAbu6vWerQT2COCp2nPvHdsbD7tHn/YlQ==",
"dev": true "dev": true
}, },
"node_modules/are-docs-informative": {
"version": "0.0.2",
"resolved": "https://registry.npmjs.org/are-docs-informative/-/are-docs-informative-0.0.2.tgz",
"integrity": "sha512-ixiS0nLNNG5jNQzgZJNoUpBKdo9yTYZMGJ+QgT2jmjR7G7+QHRCc4v6LQ3NgE7EBJq+o0ams3waJwkrlBom8Ig==",
"dev": true,
"engines": {
"node": ">=14"
}
},
"node_modules/argparse": { "node_modules/argparse": {
"version": "1.0.10", "version": "1.0.10",
"resolved": "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz", "resolved": "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz",
@@ -3877,18 +3711,6 @@
"integrity": "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==", "integrity": "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==",
"dev": true "dev": true
}, },
"node_modules/builtin-modules": {
"version": "3.3.0",
"resolved": "https://registry.npmjs.org/builtin-modules/-/builtin-modules-3.3.0.tgz",
"integrity": "sha512-zhaCDicdLuWN5UbN5IMnFqNMhNfo919sH85y2/ea+5Yg9TsTkeZxpL+JLbp6cgYFS4sRLp3YV4S6yDuqVWHYOw==",
"dev": true,
"engines": {
"node": ">=6"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/camelcase": { "node_modules/camelcase": {
"version": "5.3.1", "version": "5.3.1",
"resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz", "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz",
@@ -4071,15 +3893,6 @@
"node": ">=12.17" "node": ">=12.17"
} }
}, },
"node_modules/comment-parser": {
"version": "1.4.1",
"resolved": "https://registry.npmjs.org/comment-parser/-/comment-parser-1.4.1.tgz",
"integrity": "sha512-buhp5kePrmda3vhc5B9t7pUQXAb2Tnd0qgpkIhPhkHXxJpiPJ11H0ZEU0oBpJ2QztSbzG/ZxMj/CHsYJqRHmyg==",
"dev": true,
"engines": {
"node": ">= 12.0.0"
}
},
"node_modules/concat-map": { "node_modules/concat-map": {
"version": "0.0.1", "version": "0.0.1",
"resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz", "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz",
@@ -4352,41 +4165,6 @@
"url": "https://opencollective.com/eslint" "url": "https://opencollective.com/eslint"
} }
}, },
"node_modules/eslint-config-prettier": {
"version": "9.1.0",
"resolved": "https://registry.npmjs.org/eslint-config-prettier/-/eslint-config-prettier-9.1.0.tgz",
"integrity": "sha512-NSWl5BFQWEPi1j4TjVNItzYV7dZXZ+wP6I6ZhrBGpChQhZRUaElihE9uRRkcbRnNb76UMKDF3r+WTmNcGPKsqw==",
"dev": true,
"bin": {
"eslint-config-prettier": "bin/cli.js"
},
"peerDependencies": {
"eslint": ">=7.0.0"
}
},
"node_modules/eslint-plugin-jsdoc": {
"version": "48.2.1",
"resolved": "https://registry.npmjs.org/eslint-plugin-jsdoc/-/eslint-plugin-jsdoc-48.2.1.tgz",
"integrity": "sha512-iUvbcyDZSO/9xSuRv2HQBw++8VkV/pt3UWtX9cpPH0l7GKPq78QC/6+PmyQHHvNZaTjAce6QVciEbnc6J/zH5g==",
"dev": true,
"dependencies": {
"@es-joy/jsdoccomment": "~0.42.0",
"are-docs-informative": "^0.0.2",
"comment-parser": "1.4.1",
"debug": "^4.3.4",
"escape-string-regexp": "^4.0.0",
"esquery": "^1.5.0",
"is-builtin-module": "^3.2.1",
"semver": "^7.6.0",
"spdx-expression-parse": "^4.0.0"
},
"engines": {
"node": ">=18"
},
"peerDependencies": {
"eslint": "^7.0.0 || ^8.0.0 || ^9.0.0"
}
},
"node_modules/eslint-scope": { "node_modules/eslint-scope": {
"version": "7.2.2", "version": "7.2.2",
"resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-7.2.2.tgz", "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-7.2.2.tgz",
@@ -5050,21 +4828,6 @@
"resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-1.1.6.tgz", "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-1.1.6.tgz",
"integrity": "sha512-NcdALwpXkTm5Zvvbk7owOUSvVvBKDgKP5/ewfXEznmQFfs4ZRmanOeKBTjRVjka3QFoN6XJ+9F3USqfHqTaU5w==" "integrity": "sha512-NcdALwpXkTm5Zvvbk7owOUSvVvBKDgKP5/ewfXEznmQFfs4ZRmanOeKBTjRVjka3QFoN6XJ+9F3USqfHqTaU5w=="
}, },
"node_modules/is-builtin-module": {
"version": "3.2.1",
"resolved": "https://registry.npmjs.org/is-builtin-module/-/is-builtin-module-3.2.1.tgz",
"integrity": "sha512-BSLE3HnV2syZ0FK0iMA/yUGplUeMmNz4AW5fnTunbCIqZi4vG3WjJT9FHMy5D69xmAYBHXQhJdALdpwVxV501A==",
"dev": true,
"dependencies": {
"builtin-modules": "^3.3.0"
},
"engines": {
"node": ">=6"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/is-core-module": { "node_modules/is-core-module": {
"version": "2.13.1", "version": "2.13.1",
"resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.13.1.tgz", "resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.13.1.tgz",
@@ -5938,15 +5701,6 @@
"js-yaml": "bin/js-yaml.js" "js-yaml": "bin/js-yaml.js"
} }
}, },
"node_modules/jsdoc-type-pratt-parser": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/jsdoc-type-pratt-parser/-/jsdoc-type-pratt-parser-4.0.0.tgz",
"integrity": "sha512-YtOli5Cmzy3q4dP26GraSOeAhqecewG04hoO8DY56CH4KJ9Fvv5qKWUCCo3HZob7esJQHCv6/+bnTy72xZZaVQ==",
"dev": true,
"engines": {
"node": ">=12.0.0"
}
},
"node_modules/jsesc": { "node_modules/jsesc": {
"version": "2.5.2", "version": "2.5.2",
"resolved": "https://registry.npmjs.org/jsesc/-/jsesc-2.5.2.tgz", "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-2.5.2.tgz",
@@ -6566,21 +6320,6 @@
"node": ">= 0.8.0" "node": ">= 0.8.0"
} }
}, },
"node_modules/prettier": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-3.1.0.tgz",
"integrity": "sha512-TQLvXjq5IAibjh8EpBIkNKxO749UEWABoiIZehEPiY4GNpVdhaFKqSTu+QrlU6D2dPAfubRmtJTi4K4YkQ5eXw==",
"dev": true,
"bin": {
"prettier": "bin/prettier.cjs"
},
"engines": {
"node": ">=14"
},
"funding": {
"url": "https://github.com/prettier/prettier?sponsor=1"
}
},
"node_modules/pretty-format": { "node_modules/pretty-format": {
"version": "29.7.0", "version": "29.7.0",
"resolved": "https://registry.npmjs.org/pretty-format/-/pretty-format-29.7.0.tgz", "resolved": "https://registry.npmjs.org/pretty-format/-/pretty-format-29.7.0.tgz",
@@ -6683,6 +6422,11 @@
"node": ">= 0.10" "node": ">= 0.10"
} }
}, },
"node_modules/reflect-metadata": {
"version": "0.2.2",
"resolved": "https://registry.npmjs.org/reflect-metadata/-/reflect-metadata-0.2.2.tgz",
"integrity": "sha512-urBwgfrvVP/eAyXx4hluJivBKzuEbSQs9rKWCrCkbSxNv8mxPcUZKeuoF3Uy4mJl3Lwprp6yy5/39VWigZ4K6Q=="
},
"node_modules/repeat-string": { "node_modules/repeat-string": {
"version": "1.6.1", "version": "1.6.1",
"resolved": "https://registry.npmjs.org/repeat-string/-/repeat-string-1.6.1.tgz", "resolved": "https://registry.npmjs.org/repeat-string/-/repeat-string-1.6.1.tgz",
@@ -6917,28 +6661,6 @@
"source-map": "^0.6.0" "source-map": "^0.6.0"
} }
}, },
"node_modules/spdx-exceptions": {
"version": "2.5.0",
"resolved": "https://registry.npmjs.org/spdx-exceptions/-/spdx-exceptions-2.5.0.tgz",
"integrity": "sha512-PiU42r+xO4UbUS1buo3LPJkjlO7430Xn5SVAhdpzzsPHsjbYVflnnFdATgabnLude+Cqu25p6N+g2lw/PFsa4w==",
"dev": true
},
"node_modules/spdx-expression-parse": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/spdx-expression-parse/-/spdx-expression-parse-4.0.0.tgz",
"integrity": "sha512-Clya5JIij/7C6bRR22+tnGXbc4VKlibKSVj2iHvVeX5iMW7s1SIQlqu699JkODJJIhh/pUu8L0/VLh8xflD+LQ==",
"dev": true,
"dependencies": {
"spdx-exceptions": "^2.1.0",
"spdx-license-ids": "^3.0.0"
}
},
"node_modules/spdx-license-ids": {
"version": "3.0.17",
"resolved": "https://registry.npmjs.org/spdx-license-ids/-/spdx-license-ids-3.0.17.tgz",
"integrity": "sha512-sh8PWc/ftMqAAdFiBu6Fy6JUOYjqDJBJvIhpfDMyHrr0Rbp5liZqd4TjtQ/RgfLjKFZb+LMx5hpml5qOWy0qvg==",
"dev": true
},
"node_modules/sprintf-js": { "node_modules/sprintf-js": {
"version": "1.0.3", "version": "1.0.3",
"resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz", "resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz",


@@ -1,8 +1,12 @@
 {
   "name": "@lancedb/lancedb",
-  "version": "0.4.17",
-  "main": "./dist/index.js",
-  "types": "./dist/index.d.ts",
+  "version": "0.5.0",
+  "main": "dist/index.js",
+  "exports": {
+    ".": "./dist/index.js",
+    "./embedding": "./dist/embedding/index.js"
+  },
+  "types": "dist/index.d.ts",
   "napi": {
     "name": "lancedb",
     "triples": {
@@ -18,19 +22,16 @@
   },
   "license": "Apache 2.0",
   "devDependencies": {
-    "@aws-sdk/client-s3": "^3.33.0",
     "@aws-sdk/client-kms": "^3.33.0",
+    "@aws-sdk/client-s3": "^3.33.0",
+    "@biomejs/biome": "^1.7.3",
+    "@jest/globals": "^29.7.0",
     "@napi-rs/cli": "^2.18.0",
     "@types/jest": "^29.1.2",
     "@types/tmp": "^0.2.6",
-    "@typescript-eslint/eslint-plugin": "^6.19.0",
-    "@typescript-eslint/parser": "^6.19.0",
     "apache-arrow-old": "npm:apache-arrow@13.0.0",
     "eslint": "^8.57.0",
-    "eslint-config-prettier": "^9.1.0",
-    "eslint-plugin-jsdoc": "^48.2.1",
     "jest": "^29.7.0",
-    "prettier": "^3.1.0",
     "shx": "^0.3.4",
     "tmp": "^0.2.3",
     "ts-jest": "^29.1.2",
@@ -45,39 +46,27 @@
   "engines": {
     "node": ">= 18"
   },
-  "cpu": [
-    "x64",
-    "arm64"
-  ],
-  "os": [
-    "darwin",
-    "linux",
-    "win32"
-  ],
+  "cpu": ["x64", "arm64"],
+  "os": ["darwin", "linux", "win32"],
   "scripts": {
     "artifacts": "napi artifacts",
-    "build:debug": "napi build --platform --dts ../lancedb/native.d.ts --js ../lancedb/native.js dist/",
+    "build:debug": "napi build --platform --dts ../lancedb/native.d.ts --js ../lancedb/native.js lancedb",
     "build:release": "napi build --platform --release --dts ../lancedb/native.d.ts --js ../lancedb/native.js dist/",
-    "build": "npm run build:debug && tsc -b && shx cp lancedb/native.d.ts dist/native.d.ts",
+    "build": "npm run build:debug && tsc -b && shx cp lancedb/native.d.ts dist/native.d.ts && shx cp lancedb/*.node dist/",
     "build-release": "npm run build:release && tsc -b && shx cp lancedb/native.d.ts dist/native.d.ts",
-    "chkformat": "prettier . --check",
+    "lint-ci": "biome ci .",
     "docs": "typedoc --plugin typedoc-plugin-markdown --out ../docs/src/js lancedb/index.ts",
-    "lint": "eslint lancedb && eslint __test__",
+    "lint": "biome check . && biome format .",
+    "lint-fix": "biome check --apply-unsafe . && biome format --write .",
     "prepublishOnly": "napi prepublish -t npm",
-    "test": "npm run build && jest --verbose",
+    "test": "jest --verbose",
     "integration": "S3_TEST=1 npm run test",
     "universal": "napi universal",
     "version": "napi version"
   },
-  "optionalDependencies": {
-    "@lancedb/lancedb-darwin-arm64": "0.4.17",
-    "@lancedb/lancedb-darwin-x64": "0.4.17",
-    "@lancedb/lancedb-linux-arm64-gnu": "0.4.17",
-    "@lancedb/lancedb-linux-x64-gnu": "0.4.17",
-    "@lancedb/lancedb-win32-x64-msvc": "0.4.17"
-  },
   "dependencies": {
+    "apache-arrow": "^15.0.0",
     "openai": "^4.29.2",
-    "apache-arrow": "^15.0.0"
+    "reflect-metadata": "^0.2.2"
   }
 }
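The `exports` map added above gives the package a dedicated `./embedding` subpath for the new JS embedding registry, and `reflect-metadata` enters the dependencies to back its decorator-based registration. A minimal sketch of what consuming that subpath might look like — the imported name `getRegistry` is an assumption for illustration; only the subpath itself is established by this diff:

```ts
// Hypothetical consumer of the new "./embedding" subpath.
// `getRegistry` is an assumed export name, not confirmed by this diff;
// the subpath "@lancedb/lancedb/embedding" is what the exports map adds.
import { getRegistry } from "@lancedb/lancedb/embedding";

const registry = getRegistry();
// Look up or register embedding functions through the registry here.
```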


@@ -176,6 +176,7 @@ impl Connection {
         &self,
         name: String,
         storage_options: Option<HashMap<String, String>>,
+        index_cache_size: Option<u32>,
     ) -> napi::Result<Table> {
         let mut builder = self.get_inner()?.open_table(&name);
         if let Some(storage_options) = storage_options {
@@ -183,6 +184,9 @@ impl Connection {
                 builder = builder.storage_option(key, value);
             }
         }
+        if let Some(index_cache_size) = index_cache_size {
+            builder = builder.index_cache_size(index_cache_size);
+        }
         let tbl = builder
             .execute()
             .await
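The two hunks above thread a new optional `index_cache_size` through the Node binding into the underlying `open_table` builder. A rough sketch of how that might be exercised from TypeScript — the camelCased option name and the exact `openTable` call shape are assumptions about the JS wrapper, not guaranteed by this diff:

```ts
import * as lancedb from "@lancedb/lancedb";

// Open a table with a larger index cache. `indexCacheSize` is the assumed
// camelCase surface of the `index_cache_size` parameter added above.
const db = await lancedb.connect("data/sample-lancedb");
const tbl = await db.openTable("my_table", { indexCacheSize: 512 });
```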


@@ -15,8 +15,8 @@
 use arrow_ipc::writer::FileWriter;
 use lancedb::ipc::ipc_file_to_batches;
 use lancedb::table::{
-    AddDataMode, ColumnAlteration as LanceColumnAlteration, NewColumnTransform,
-    Table as LanceDbTable,
+    AddDataMode, ColumnAlteration as LanceColumnAlteration, Duration, NewColumnTransform,
+    OptimizeAction, OptimizeOptions, Table as LanceDbTable,
 };
 use napi::bindgen_prelude::*;
 use napi_derive::napi;
@@ -263,6 +263,60 @@ impl Table {
self.inner_ref()?.restore().await.default_error()
}
#[napi]
pub async fn optimize(&self, older_than_ms: Option<i64>) -> napi::Result<OptimizeStats> {
let inner = self.inner_ref()?;
let older_than = if let Some(ms) = older_than_ms {
if ms == i64::MIN {
return Err(napi::Error::from_reason(format!(
"older_than_ms can not be {}",
i64::MIN,
)));
}
Duration::try_milliseconds(ms)
} else {
None
};
let compaction_stats = inner
.optimize(OptimizeAction::Compact {
options: lancedb::table::CompactionOptions::default(),
remap_options: None,
})
.await
.default_error()?
.compaction
.unwrap();
let prune_stats = inner
.optimize(OptimizeAction::Prune {
older_than,
delete_unverified: None,
})
.await
.default_error()?
.prune
.unwrap();
inner
.optimize(lancedb::table::OptimizeAction::Index(
OptimizeOptions::default(),
))
.await
.default_error()?;
Ok(OptimizeStats {
compaction: CompactionStats {
files_added: compaction_stats.files_added as i64,
files_removed: compaction_stats.files_removed as i64,
fragments_added: compaction_stats.fragments_added as i64,
fragments_removed: compaction_stats.fragments_removed as i64,
},
prune: RemovalStats {
bytes_removed: prune_stats.bytes_removed as i64,
old_versions_removed: prune_stats.old_versions as i64,
},
})
}
#[napi]
pub async fn list_indices(&self) -> napi::Result<Vec<IndexConfig>> {
Ok(self
@@ -298,6 +352,40 @@ impl From<lancedb::index::IndexConfig> for IndexConfig {
}
}
/// Statistics about a compaction operation.
#[napi(object)]
#[derive(Clone, Debug)]
pub struct CompactionStats {
/// The number of fragments removed
pub fragments_removed: i64,
/// The number of new, compacted fragments added
pub fragments_added: i64,
/// The number of data files removed
pub files_removed: i64,
/// The number of new, compacted data files added
pub files_added: i64,
}
/// Statistics about a cleanup operation
#[napi(object)]
#[derive(Clone, Debug)]
pub struct RemovalStats {
/// The number of bytes removed
pub bytes_removed: i64,
/// The number of old versions removed
pub old_versions_removed: i64,
}
/// Statistics about an optimize operation
#[napi(object)]
#[derive(Clone, Debug)]
pub struct OptimizeStats {
/// Statistics about the compaction operation
pub compaction: CompactionStats,
/// Statistics about the removal operation
pub prune: RemovalStats,
}
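Because these three structs are `#[napi(object)]` types, napi-rs hands them to JavaScript as plain objects with camelCased fields. A sketch of reading the stats returned by the new native `optimize` method — the TypeScript wrapper's exact signature is an assumption here:

```ts
// Hypothetical call into the native optimize() added above; field names
// follow napi-rs's snake_case -> camelCase conversion for #[napi(object)].
const stats = await tbl.optimize();
console.log(
  `compaction: +${stats.compaction.filesAdded} files, -${stats.compaction.filesRemoved} files`,
);
console.log(
  `prune: ${stats.prune.bytesRemoved} bytes, ${stats.prune.oldVersionsRemoved} old versions removed`,
);
```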
/// A definition of a column alteration. The alteration changes the column at
/// `path` to have the new name `name`, to be nullable if `nullable` is true,
/// and to have the data type `data_type`. At least one of `rename` or `nullable`

Some files were not shown because too many files have changed in this diff.