This hooks up a new writer implementation for the `add()` method. The main immediate benefit is that it allows streaming requests to remote tables while still allowing retries for most inputs. In Node.js, we always convert the data to `Vec<RecordBatch>`, so it is always retry-able. In Python, all inputs are retry-able except `Iterator` and `pa.RecordBatchReader`, which can only be consumed once. Some, like `pa.dataset.Dataset`, are retry-able *and* streaming.

A lot of the changes here make the new DataFusion write pipeline maintain the same behavior as the existing Python-based preprocessing, such as:

* casting input data to the target schema
* rejecting NaN values if `on_bad_vectors="error"`
* applying embedding functions

In future PRs, we'll enhance these by moving the embedding calls into DataFusion and making sure we parallelize them.

See: https://github.com/lancedb/lancedb/issues/3048

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
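The retry-ability distinction above can be sketched in plain TypeScript: a one-shot iterator is exhausted after the first upload attempt, while an array can be re-read on retry. This is an illustrative sketch only; `oneShot` and `consume` are hypothetical names, not part of the LanceDB API.

```typescript
// A one-shot source, like Python's Iterator or pa.RecordBatchReader:
// it can only be consumed once.
function* oneShot(): Generator<number> {
  yield 1;
  yield 2;
}

// consume() simulates one upload attempt reading all input rows.
const consume = (it: Iterable<number>): number[] => [...it];

const gen = oneShot();
const firstAttempt = consume(gen); // reads both values
const retryAttempt = consume(gen); // generator is exhausted: empty

// An in-memory array (like Vec<RecordBatch> in Node.js) can be re-read,
// so a failed request can safely be retried.
const arr = [1, 2];
const attempt1 = consume(arr);
const attempt2 = consume(arr);

console.log(firstAttempt.length, retryAttempt.length); // 2 0
console.log(attempt1.length, attempt2.length); // 2 2
```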
# LanceDB JavaScript SDK
A JavaScript library for LanceDB.
## Installation

```sh
npm install @lancedb/lancedb
```
This will download the appropriate native library for your platform. We currently support:
- Linux (x86_64 and aarch64 on glibc and musl)
- macOS (Intel and ARM/M1/M2)
- Windows (x86_64 and aarch64)
## Usage

### Basic Example
```ts
import * as lancedb from "@lancedb/lancedb";

const db = await lancedb.connect("data/sample-lancedb");
const table = await db.createTable("my_table", [
  { id: 1, vector: [0.1, 1.0], item: "foo", price: 10.0 },
  { id: 2, vector: [3.9, 0.5], item: "bar", price: 20.0 },
]);

const results = await table.vectorSearch([0.1, 0.3]).limit(20).toArray();
console.log(results);
```
The quickstart contains more complete examples.
## Development
See CONTRIBUTING.md for information on how to contribute to LanceDB.