# feat(nodejs): add Scannable primitive for streaming ingestion (#3271)
## **Summary**
This PR adds a **Scannable primitive** to the Node.js bindings, bringing
parity with Python's `PyScannable`.
A `Scannable` wraps a schema, an optional row-count hint, a rescannable
flag, and a batch-producing callback. On the Rust side it implements
`lancedb::data::scannable::Scannable`. The goal is to give consumers
such as `Table.add`, `createTable`, and `mergeInsert` a way to stream
data without materializing the full dataset in JS memory.
This PR introduces only the primitive; migrating existing consumers to
use it will come in follow-up work.
---
## **Design**
### **Transport**
The transport uses the **Arrow IPC Stream format, one batch at a time**.
The JS side encodes each `RecordBatch` into a self-contained IPC Stream
message containing the schema, the batch, and an end-of-stream marker.
The message is returned as a `Buffer` through a napi
`ThreadsafeFunction`, and the Rust side decodes it using
`arrow_ipc::reader::StreamReader`.
Only one batch is active at a time, so JS memory stays bounded by the
batch size. Node's `Buffer` size limit of about 4 GiB therefore
constrains only individual batches, not the stream as a whole.
I initially evaluated the Arrow C Data Interface, which is the approach
used in Python. I dropped that path after confirming that the
`apache-arrow` npm package does not expose a C Data Interface export in
any supported version from 15 to 18. JavaScript is not listed in Arrow's
C Data Interface implementation table, and the upstream tracking issue
remains open with no scheduled work.
Third-party FFI shims would introduce additional dependency risk without
solving the core maintenance problem. Using IPC adds one encode and one
decode step per batch, but the cost is predictable and typically
dominated by Lance's write path.
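As a sketch of one message round trip, using only public `apache-arrow`
APIs (the PR's real encoder is `fromRecordBatchToStreamBuffer` in the
diff below; the real decoder is the Rust `StreamReader` path, shown here
in JS purely for symmetry):
```ts
// Illustrative sketch only; the decode half runs in Rust in this PR.
import {
  type RecordBatch,
  RecordBatchStreamWriter,
  tableFromIPC,
} from "apache-arrow";

// Encode: one self-contained IPC Stream message (schema + batch + EOS).
async function encodeOne(batch: RecordBatch): Promise<Buffer> {
  const writer = RecordBatchStreamWriter.writeAll([batch]);
  return Buffer.from(await writer.toUint8Array());
}

// Decode: each Buffer is a complete stream, so a fresh reader per message
// suffices and no decoder state survives between batches.
function decodeOne(buf: Buffer): RecordBatch {
  const [batch] = tableFromIPC(buf).batches;
  return batch;
}
```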
---
### **API**
```ts
class Scannable {
  readonly schema: Schema
  readonly numRows: number | null
  readonly rescannable: boolean

  static fromFactory(schema, factory, opts?)
  static fromTable(table, opts?)
  static fromIterable(schema, iter, opts?)
  static fromRecordBatchReader(reader, opts?)
}
```
The FFI boundary consists of a single callback:
`getNextBatch(isStart: boolean): Promise<Buffer | null>`
`isStart` is `true` on the first call of each new scan and `false` for
every call after it. The JS side uses it to drop any cached iterator and
re-invoke the factory at scan boundaries. This is what makes a
rescannable source restart at batch 0 on every `scan_as_stream` call,
even when a previous scan ended mid-stream, for example a retried write
after a network error. Without this signal a retry would resume a stale
iterator and silently skip already-emitted batches.
In addition, a schema-only IPC buffer is transferred once during
construction.
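For orientation, a minimal construction sketch using the constructors
above. The import path assumes the published `@lancedb/lancedb` package;
no consumer accepts a `Scannable` yet, so only construction is shown:
```ts
import { Scannable, makeArrowTable } from "@lancedb/lancedb";

const table = makeArrowTable(
  [
    { id: 1, name: "a" },
    { id: 2, name: "b" },
  ],
  { vectorColumns: {} },
);

// In-memory table: always rescannable; the table's row count is authoritative.
const fromTable = await Scannable.fromTable(table);

// Factory form: the factory is re-invoked at each scan boundary (isStart),
// so a retried write replays from batch 0 instead of resuming mid-stream.
const fromFactory = await Scannable.fromFactory(
  table.schema,
  () => table.batches, // must produce equivalent data on every call
);

// One-shot generator: fine with the default rescannable=false; passing
// rescannable: true here would be rejected at construction time.
const oneShot = await Scannable.fromIterable(
  table.schema,
  (function* () {
    yield* table.batches;
  })(),
);
```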
---
## **Changes**
* `nodejs/src/scannable.rs`
  Adds `NapiScannable` and the `LanceScannable` implementation. Implements
  `schema()`, `num_rows()`, `rescannable()`, and `scan_as_stream()`.
  Includes per-batch schema validation against the declared schema,
  one-shot enforcement for non-rescannable sources, and a scan-boundary
  reset signal (`isStart`) so rescannable sources restart from batch 0 on
  every `scan_as_stream` call rather than resuming a stale iterator.
* `nodejs/src/lib.rs`
  Module registration.
* `nodejs/lancedb/scannable.ts`
  Defines the `Scannable` class and the four constructors listed above.
  Each constructor rejects option combinations it cannot honor, for
  example a `rescannable: true` request on a one-shot iterable or reader,
  or a `numRows` that disagrees with an in-memory table's row count.
* `nodejs/lancedb/index.ts`
  Exports the new primitive.
* `nodejs/__test__/scannable.test.ts`
  Test suite for the primitive.
---
## **Validation**
Before implementing the bridge, I ran an end-to-end harness with a JS
producer feeding a standalone Rust consumer built against the same
`arrow-ipc` version used in the bridge.
The harness covered the following scenarios:
* happy path
* empty stream
* 1,000 small batches
* 10 large batches
* mixed primitive types with nullables
* nested `List<Struct<>>`
* truncated stream error handling
* declared schema mismatch validation
* a 6 GB stress test through the pipe
All scenarios completed with bounded memory usage. The goal of this
harness was to confirm that the IPC Stream transport works correctly end
to end and that Node's `Buffer` size limit does not constrain the
overall stream.
Separately, the rescannable restart contract was verified with a focused
harness: a rescannable source is consumed partially, the scan is dropped
mid-stream, and the source is re-scanned. The re-scan replays from batch
0 rather than resuming the stale iterator. The same harness was run with
the `isStart` reset path disabled, and the mid-stream restart case failed
as expected, confirming the test exercises the real regression.
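The contract, as a call-sequence sketch for a hypothetical three-batch
rescannable source:
```ts
// Hypothetical pull sequence: first scan dropped mid-stream, then retried.
//
//   scan 1: getNextBatch(true)   // isStart=true: fresh iterator -> batch 0
//           getNextBatch(false)  // -> batch 1 ... scan dropped here
//   scan 2: getNextBatch(true)   // isStart=true again: cached iterator
//                                // discarded, factory re-invoked -> batch 0
//           getNextBatch(false)  // -> batch 1
//           getNextBatch(false)  // -> batch 2
//           getNextBatch(false)  // -> null (end of stream)
```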
These harnesses are not meant to replace the full test suite, which is
described below.
---
## **Tests**
`__test__/scannable.test.ts` covers construction, metadata reflection,
per-constructor defaults and overrides, construction-time validation,
the native handle surface, and schema variety across empty tables,
nested types, `FixedSizeList`, and wide schemas.
Runtime scan behavior, including `scan_as_stream`, one-shot enforcement
on non-rescannable sources, schema mismatch detection, IPC decode
failures, and rescannable restart semantics, is not exercised here:
there is no in-tree JS consumer of `NapiScannable` yet. This mirrors
Python's `PyScannable`, which has no dedicated test file and is covered
transitively through the consumers that accept a Scannable.
Runtime coverage will follow in the consumer migration work.
---
## **Status**
Ready for review.
Closes #3223
---
## **Diff**

### `nodejs/__test__/scannable.test.ts` (new file, 438 lines)

```ts
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors

import {
  Field,
  Float16,
  Int32,
  type RecordBatch,
  RecordBatchReader,
  Schema,
  tableToIPC,
} from "apache-arrow";
import { makeArrowTable, makeEmptyTable } from "../lancedb/arrow";
import { Scannable } from "../lancedb/scannable";

function makeTable() {
  return makeArrowTable(
    [
      { id: 1, name: "a" },
      { id: 2, name: "b" },
      { id: 3, name: "c" },
    ],
    { vectorColumns: {} },
  );
}

async function makeReader(): Promise<RecordBatchReader> {
  // `RecordBatchReader.from()` returns an unopened reader; `.schema` is only
  // populated after `.open()`. Opening sync readers is synchronous.
  const reader = RecordBatchReader.from(tableToIPC(makeTable()));
  return reader.open() as RecordBatchReader;
}

describe("Scannable", () => {
  describe("fromTable", () => {
    test("reflects schema, numRows, and defaults rescannable=true", async () => {
      const table = makeTable();
      const scannable = await Scannable.fromTable(table);

      expect(scannable.schema).toBe(table.schema);
      expect(scannable.numRows).toBe(table.numRows);
      expect(scannable.rescannable).toBe(true);
    });

    test("throws when opts.numRows does not match table.numRows", async () => {
      await expect(
        Scannable.fromTable(makeTable(), { numRows: 42 }),
      ).rejects.toThrow(/does not match table\.numRows/);
    });

    test("throws when opts.rescannable is false", async () => {
      await expect(
        Scannable.fromTable(makeTable(), { rescannable: false }),
      ).rejects.toThrow(/always rescannable/);
    });
  });

  describe("fromRecordBatchReader", () => {
    test("reflects schema and defaults numRows=null, rescannable=false", async () => {
      const reader = await makeReader();
      const scannable = await Scannable.fromRecordBatchReader(reader);

      expect(scannable.schema).toBe(reader.schema);
      expect(scannable.numRows).toBeNull();
      expect(scannable.rescannable).toBe(false);
    });

    test("honors numRows override", async () => {
      const scannable = await Scannable.fromRecordBatchReader(
        await makeReader(),
        { numRows: 3 },
      );

      expect(scannable.numRows).toBe(3);
      expect(scannable.rescannable).toBe(false);
    });

    test("rescannable: false explicit does not throw", async () => {
      const reader = await makeReader();
      const scannable = await Scannable.fromRecordBatchReader(reader, {
        rescannable: false,
      });
      expect(scannable.rescannable).toBe(false);
    });

    test("throws when opts.rescannable is true", async () => {
      const reader = await makeReader();
      await expect(
        Scannable.fromRecordBatchReader(reader, { rescannable: true }),
      ).rejects.toThrow(/does not accept rescannable/);
    });

    test("throws when opts.rescannable is true even alongside numRows", async () => {
      const reader = await makeReader();
      await expect(
        Scannable.fromRecordBatchReader(reader, {
          numRows: 3,
          rescannable: true,
        }),
      ).rejects.toThrow(/does not accept rescannable/);
    });
  });

  describe("fromIterable", () => {
    test("accepts a sync iterable of batches", async () => {
      const table = makeTable();
      const scannable = await Scannable.fromIterable(
        table.schema,
        table.batches,
      );

      expect(scannable.schema).toBe(table.schema);
      expect(scannable.numRows).toBeNull();
      expect(scannable.rescannable).toBe(false);
    });

    test("accepts an async iterable of batches", async () => {
      const table = makeTable();
      async function* generator(): AsyncGenerator<RecordBatch> {
        for (const batch of table.batches) {
          yield batch;
        }
      }

      const scannable = await Scannable.fromIterable(table.schema, generator());
      expect(scannable.schema).toBe(table.schema);
      expect(scannable.rescannable).toBe(false);
    });

    describe("rescannable: true detection", () => {
      // Replayable inputs: [Symbol.iterator]() / [Symbol.asyncIterator]()
      // returns a fresh iterator each call. Must NOT throw.

      test("Array passes (fresh ArrayIterator each call)", async () => {
        const table = makeTable();
        const scannable = await Scannable.fromIterable(
          table.schema,
          table.batches,
          { rescannable: true },
        );
        expect(scannable.rescannable).toBe(true);
      });

      test("Set passes (fresh SetIterator each call)", async () => {
        const table = makeTable();
        const set = new Set<RecordBatch>(table.batches);
        const scannable = await Scannable.fromIterable(table.schema, set, {
          rescannable: true,
        });
        expect(scannable.rescannable).toBe(true);
      });

      test("custom Iterable returning a fresh iterator passes", async () => {
        const table = makeTable();
        const replayable: Iterable<RecordBatch> = {
          [Symbol.iterator]() {
            return table.batches[Symbol.iterator]();
          },
        };
        const scannable = await Scannable.fromIterable(
          table.schema,
          replayable,
          { rescannable: true },
        );
        expect(scannable.rescannable).toBe(true);
      });

      test("object with generator method passes (fresh generator each call)", async () => {
        const table = makeTable();
        const replayable: Iterable<RecordBatch> = {
          *[Symbol.iterator]() {
            for (const batch of table.batches) yield batch;
          },
        };
        const scannable = await Scannable.fromIterable(
          table.schema,
          replayable,
          { rescannable: true },
        );
        expect(scannable.rescannable).toBe(true);
      });

      test("empty Array passes (replayable degenerate case)", async () => {
        const schema = makeTable().schema;
        const scannable = await Scannable.fromIterable(
          schema,
          [] as RecordBatch[],
          { rescannable: true },
        );
        expect(scannable.rescannable).toBe(true);
      });

      // One-shot inputs: [Symbol.iterator]() / [Symbol.asyncIterator]()
      // returns the same object, or the input is already-an-iterator.
      // Must throw with a /one-shot/ message.

      test("sync generator throws", async () => {
        const table = makeTable();
        function* generator(): Generator<RecordBatch> {
          for (const batch of table.batches) yield batch;
        }
        await expect(
          Scannable.fromIterable(table.schema, generator(), {
            rescannable: true,
          }),
        ).rejects.toThrow(/one-shot/);
      });

      test("async generator throws", async () => {
        const table = makeTable();
        async function* generator(): AsyncGenerator<RecordBatch> {
          for (const batch of table.batches) yield batch;
        }
        await expect(
          Scannable.fromIterable(table.schema, generator(), {
            rescannable: true,
          }),
        ).rejects.toThrow(/one-shot/);
      });

      test("empty generator throws (one-shot degenerate case)", async () => {
        const schema = makeTable().schema;
        function* generator(): Generator<RecordBatch> {
          // intentionally empty; yields nothing.
        }
        await expect(
          Scannable.fromIterable(schema, generator(), { rescannable: true }),
        ).rejects.toThrow(/one-shot/);
      });

      test("custom self-iterator throws", async () => {
        const table = makeTable();
        const batches = table.batches;
        let i = 0;
        const oneShot: Iterable<RecordBatch> & Iterator<RecordBatch> = {
          [Symbol.iterator]() {
            return this;
          },
          next() {
            if (i >= batches.length) {
              return { done: true, value: undefined };
            }
            return { done: false, value: batches[i++] };
          },
        };
        await expect(
          Scannable.fromIterable(table.schema, oneShot, { rescannable: true }),
        ).rejects.toThrow(/one-shot/);
      });

      test("Array.values() (IterableIterator) throws", async () => {
        const table = makeTable();
        const iter = table.batches.values();
        await expect(
          Scannable.fromIterable(table.schema, iter, { rescannable: true }),
        ).rejects.toThrow(/one-shot/);
      });

      test("raw iterator (only `.next`) throws", async () => {
        const table = makeTable();
        const batches = table.batches;
        let i = 0;
        const rawIter = {
          next(): IteratorResult<RecordBatch> {
            if (i >= batches.length) {
              return { done: true, value: undefined };
            }
            return { done: false, value: batches[i++] };
          },
        };
        await expect(
          Scannable.fromIterable(
            table.schema,
            rawIter as unknown as Iterable<RecordBatch>,
            { rescannable: true },
          ),
        ).rejects.toThrow(/one-shot/);
      });

      // Edge: null/undefined must not crash the detection helper. The
      // null check belongs to `normalizeIterator` and only fires when a
      // scan starts.

      test("null input does not crash detection at construction", async () => {
        const schema = makeTable().schema;
        await expect(
          Scannable.fromIterable(
            schema,
            null as unknown as Iterable<RecordBatch>,
            {
              rescannable: true,
            },
          ),
        ).resolves.toBeDefined();
      });

      test("undefined input does not crash detection at construction", async () => {
        const schema = makeTable().schema;
        await expect(
          Scannable.fromIterable(
            schema,
            undefined as unknown as Iterable<RecordBatch>,
            { rescannable: true },
          ),
        ).resolves.toBeDefined();
      });

      // Default (rescannable omitted) skips the check entirely, so even
      // pathological inputs construct without throwing here.

      test("rescannable omitted skips detection entirely (generator passes)", async () => {
        const table = makeTable();
        function* generator(): Generator<RecordBatch> {
          for (const batch of table.batches) yield batch;
        }
        const scannable = await Scannable.fromIterable(
          table.schema,
          generator(),
        );
        expect(scannable.rescannable).toBe(false);
      });

      test("rescannable: false explicit skips detection entirely (generator passes)", async () => {
        const table = makeTable();
        function* generator(): Generator<RecordBatch> {
          for (const batch of table.batches) yield batch;
        }
        const scannable = await Scannable.fromIterable(
          table.schema,
          generator(),
          { rescannable: false },
        );
        expect(scannable.rescannable).toBe(false);
      });
    });
  });

  describe("fromFactory", () => {
    test("defaults rescannable=true and does not invoke the factory eagerly", async () => {
      const table = makeTable();
      const factory = jest.fn(() => table.batches);

      const scannable = await Scannable.fromFactory(table.schema, factory);

      expect(scannable.schema).toBe(table.schema);
      expect(scannable.rescannable).toBe(true);
      expect(factory).not.toHaveBeenCalled();
    });

    test("honors rescannable and numRows overrides", async () => {
      const table = makeTable();
      const scannable = await Scannable.fromFactory(
        table.schema,
        () => table.batches,
        { numRows: 7, rescannable: false },
      );

      expect(scannable.numRows).toBe(7);
      expect(scannable.rescannable).toBe(false);
    });
  });

  describe("validation", () => {
    test("throws when numRows is negative", async () => {
      await expect(
        Scannable.fromFactory(makeTable().schema, () => [], { numRows: -1 }),
      ).rejects.toThrow(/non-negative/);
    });

    test("throws when numRows is not an integer", async () => {
      await expect(
        Scannable.fromFactory(makeTable().schema, () => [], { numRows: 3.5 }),
      ).rejects.toThrow(/integer/);
    });
  });

  describe("native handle", () => {
    test("exposes a native handle via inner", async () => {
      const scannable = await Scannable.fromTable(makeTable());
      expect(scannable.inner).toBeDefined();
      expect(typeof scannable.inner).toBe("object");
      expect(scannable.inner).not.toBeNull();
    });
  });

  // Schema-variety construction tests. Each asserts that construction
  // succeeds against a richer Arrow schema, which transitively exercises
  // schema serialization and the Rust-side `ipc_file_to_schema` for types
  // beyond flat primitives.
  describe("schema variety", () => {
    test("accepts an empty table", async () => {
      const schema = new Schema([new Field("id", new Int32(), true)]);
      const table = makeEmptyTable(schema);
      const scannable = await Scannable.fromTable(table);

      expect(scannable.numRows).toBe(0);
      expect(scannable.schema).toBe(table.schema);
    });

    test("accepts nested struct and list columns", async () => {
      const table = makeArrowTable(
        [
          { id: 1, point: { x: 0, y: 0 }, tags: ["a", "b"] },
          { id: 2, point: { x: 1, y: 2 }, tags: ["c"] },
        ],
        { vectorColumns: {} },
      );
      const scannable = await Scannable.fromTable(table);

      expect(scannable.schema).toBe(table.schema);
      expect(scannable.numRows).toBe(2);
    });

    test("accepts a FixedSizeList (vector) column", async () => {
      const table = makeArrowTable(
        [
          { id: 1, vec: [1, 2, 3] },
          { id: 2, vec: [4, 5, 6] },
        ],
        { vectorColumns: { vec: { type: new Float16() } } },
      );
      const scannable = await Scannable.fromTable(table);

      expect(scannable.schema).toBe(table.schema);
      expect(scannable.numRows).toBe(2);
    });

    test("accepts a table with many columns", async () => {
      const row: Record<string, number> = {};
      for (let i = 0; i < 50; i++) row[`c${i}`] = i;
      const table = makeArrowTable([row, row], { vectorColumns: {} });
      const scannable = await Scannable.fromTable(table);

      expect(scannable.schema.fields.length).toBe(50);
      expect(scannable.numRows).toBe(2);
    });
  });
});
```
### `nodejs/lancedb/arrow.ts` (excerpt)

```ts
// @@ -1291,6 +1291,18 @@ export async function fromRecordBatchToBuffer(
  return Buffer.from(await writer.toUint8Array());
}

/**
 * Create a buffer containing a single record batch using the Arrow IPC Stream
 * serialization. Each call produces a self-contained Stream message (schema +
 * batch + EOS) suitable for incremental decode by `arrow_ipc::reader::StreamReader`.
 */
export async function fromRecordBatchToStreamBuffer(
  batch: RecordBatch,
): Promise<Buffer> {
  const writer = RecordBatchStreamWriter.writeAll([batch]);
  return Buffer.from(await writer.toUint8Array());
}

/**
 * Serialize an Arrow Table into a buffer using the Arrow IPC Stream serialization
 *
```
### `nodejs/lancedb/index.ts` (excerpt)

```ts
// @@ -126,6 +126,7 @@ export { MergeInsertBuilder, WriteExecutionOptions } from "./merge";

export * as embedding from "./embedding";
export { permutationBuilder, PermutationBuilder } from "./permutation";
export { Scannable, ScannableOptions } from "./scannable";
export * as rerankers from "./rerankers";
export {
  SchemaLike,
```
### `nodejs/lancedb/scannable.ts` (new file, 274 lines)

```ts
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors

import {
  Table as ArrowTable,
  RecordBatch,
  RecordBatchReader,
  Schema,
} from "apache-arrow";
import {
  fromRecordBatchToStreamBuffer,
  fromTableToBuffer,
  makeEmptyTable,
} from "./arrow";
import { NapiScannable } from "./native.js";

export interface ScannableOptions {
  /** Hint about the number of rows. Not validated against the stream. */
  numRows?: number;
  /**
   * Whether the source can be scanned more than once. Defaults to `true` for
   * `fromTable` / `fromFactory` and `false` for `fromIterable` /
   * `fromRecordBatchReader`.
   */
  rescannable?: boolean;
}

/**
 * A data source that can be scanned as a stream of Arrow `RecordBatch`es.
 *
 * `Scannable` wraps the schema + optional row count + rescannable flag and
 * a callback that yields batches one at a time. It is passed to consumers
 * (e.g. `Table.add`, `createTable`, `mergeInsert` — follow-up work) that
 * need to pull data without materializing the full dataset in JS memory.
 *
 * Batches cross the JS↔Rust boundary as Arrow IPC Stream messages; a fresh
 * writer serializes each batch, and the Rust side decodes it with
 * `arrow_ipc::reader::StreamReader`. One batch is in flight at a time.
 */
export class Scannable {
  readonly schema: Schema;
  readonly numRows: number | null;
  readonly rescannable: boolean;

  /** @hidden */
  private readonly native: NapiScannable;

  private constructor(
    native: NapiScannable,
    schema: Schema,
    numRows: number | null,
    rescannable: boolean,
  ) {
    this.native = native;
    this.schema = schema;
    this.numRows = numRows;
    this.rescannable = rescannable;
  }

  /** @hidden Access the native handle for passing through to Rust consumers. */
  get inner(): NapiScannable {
    return this.native;
  }

  /**
   * Build a Scannable from an explicit schema and a factory that returns a
   * fresh batch iterator on each call.
   *
   * The factory is invoked once per scan. Each iterator yields
   * `RecordBatch`es matching the declared schema. Use this when you need
   * direct control over the pull loop — for example, to wrap a streaming
   * source whose batches are produced lazily.
   *
   * @param schema - The Arrow schema of the produced batches.
   * @param factory - Called at the start of each scan to produce a batch
   *   iterator. Must be idempotent when `rescannable` is true.
   * @param opts - Optional hints. `rescannable` defaults to `true`; set to
   *   `false` if calling `factory()` twice would not reproduce the same data.
   */
  static async fromFactory(
    schema: Schema,
    factory: () =>
      | AsyncIterable<RecordBatch>
      | Iterable<RecordBatch>
      | AsyncIterator<RecordBatch>
      | Iterator<RecordBatch>,
    opts: ScannableOptions = {},
  ): Promise<Scannable> {
    const numRows = opts.numRows ?? null;
    if (numRows != null && !Number.isInteger(numRows)) {
      throw new TypeError("numRows must be an integer");
    }
    const rescannable = opts.rescannable ?? true;

    let iter: AsyncIterator<RecordBatch> | Iterator<RecordBatch> | null = null;
    const getNextBatch = async (isStart: boolean): Promise<Buffer | null> => {
      // `isStart` is true on the first pull of every new scan_as_stream.
      // Drop any cached iterator so factory() is re-invoked for the next scan.
      if (isStart) {
        iter = null;
      }
      if (iter === null) {
        iter = normalizeIterator(factory());
      }
      const result = await iter.next();
      if (result.done) {
        iter = null;
        return null;
      }
      return fromRecordBatchToStreamBuffer(result.value);
    };

    const schemaBuf = await fromTableToBuffer(makeEmptyTable(schema));
    const native = new NapiScannable(
      schemaBuf,
      numRows,
      rescannable,
      getNextBatch,
    );
    return new Scannable(native, schema, numRows, rescannable);
  }

  /**
   * Build a Scannable from an in-memory Arrow `Table`. Always rescannable;
   * the table's batches are replayed on each scan.
   *
   * The table's row count is authoritative: `opts.numRows` must either be
   * omitted or equal to `table.numRows`. `opts.rescannable` of `false` is
   * rejected because in-memory Tables are always rescannable.
   */
  static async fromTable(
    table: ArrowTable,
    opts: ScannableOptions = {},
  ): Promise<Scannable> {
    if (opts.numRows != null && opts.numRows !== table.numRows) {
      throw new TypeError(
        `opts.numRows (${opts.numRows}) does not match table.numRows (${table.numRows}). ` +
          `The table's row count is authoritative; omit numRows or pass the matching value.`,
      );
    }
    if (opts.rescannable === false) {
      throw new TypeError(
        `fromTable does not accept rescannable: false. ` +
          `In-memory Arrow Tables are always rescannable; omit the option or pass true.`,
      );
    }
    return Scannable.fromFactory(table.schema, () => table.batches, {
      numRows: table.numRows,
      rescannable: true,
    });
  }

  /**
   * Build a Scannable from an iterable of `RecordBatch`es. `rescannable`
   * defaults to `false`. Pass an explicit schema so the consumer can
   * validate before any batch is pulled.
   *
   * `opts.rescannable: true` is honest for replayable iterables (Arrays,
   * Sets, or custom iterables whose `[Symbol.iterator]()` returns a fresh
   * iterator each call). It is rejected for one-shot iterables (generators,
   * async generators, or already-an-iterator inputs) because their
   * `[Symbol.iterator]()` returns the same exhausted object on the second
   * scan. For replayable sources outside this shape, use
   * `fromFactory(schema, () => createIter(), { rescannable: true })`.
   *
   * Note: when `opts.rescannable` is `true`, the constructor calls
   * `[Symbol.iterator]()` once on the input to perform the structural check.
   */
  static async fromIterable(
    schema: Schema,
    iter: AsyncIterable<RecordBatch> | Iterable<RecordBatch>,
    opts: ScannableOptions = {},
  ): Promise<Scannable> {
    if (opts.rescannable === true && isOneShotIterable(iter)) {
      throw new TypeError(
        `fromIterable: rescannable: true is not honest for one-shot iterables ` +
          `(generators, async generators, or iterators where [Symbol.iterator]() ` +
          `returns the same object). The source would be exhausted after the first scan. ` +
          `Use fromFactory(schema, () => createIter(), { rescannable: true }) for sources ` +
          `where each call mints a fresh iterator.`,
      );
    }
    return Scannable.fromFactory(schema, () => iter, {
      numRows: opts.numRows,
      rescannable: opts.rescannable ?? false,
    });
  }

  /**
   * Build a Scannable from an Arrow `RecordBatchReader`. A reader can only
   * be consumed once; `rescannable` defaults to `false`.
   *
   * The reader must already be opened (via `.open()`) so its `.schema` is
   * populated. `RecordBatchReader.from(...)` returns an unopened reader.
   *
   * `opts.rescannable: true` is rejected because `RecordBatchReader` is a
   * self-iterator (its `[Symbol.iterator]()` returns itself), and this
   * constructor does not call `reader.reset()` between scans, so a second
   * scan would always see an exhausted reader. For genuinely replayable
   * sources, use
   * `fromFactory(schema, () => openReader(), { rescannable: true })`,
   * which mints a fresh reader on each scan.
   */
  static async fromRecordBatchReader(
    reader: RecordBatchReader,
    opts: ScannableOptions = {},
  ): Promise<Scannable> {
    if (opts.rescannable === true) {
      throw new TypeError(
        `fromRecordBatchReader does not accept rescannable: true. ` +
          `RecordBatchReader is a self-iterator (its [Symbol.iterator]() ` +
          `returns itself) and would be exhausted after the first scan. ` +
          `Use fromFactory(schema, () => openReader(), { rescannable: true }) ` +
          `for sources where each call mints a fresh reader.`,
      );
    }
    return Scannable.fromFactory(reader.schema, () => reader, {
      numRows: opts.numRows,
      rescannable: false,
    });
  }
}

function normalizeIterator<T>(
  source: AsyncIterable<T> | Iterable<T> | AsyncIterator<T> | Iterator<T>,
): AsyncIterator<T> | Iterator<T> {
  if (source == null) {
    throw new TypeError("Scannable factory returned null/undefined");
  }
  if (
    typeof (source as AsyncIterable<T>)[Symbol.asyncIterator] === "function"
  ) {
    return (source as AsyncIterable<T>)[Symbol.asyncIterator]();
  }
  if (typeof (source as Iterable<T>)[Symbol.iterator] === "function") {
    return (source as Iterable<T>)[Symbol.iterator]();
  }
  // Already an iterator (has `.next`).
  if (typeof (source as Iterator<T>).next === "function") {
    return source as Iterator<T>;
  }
  throw new TypeError("Scannable factory returned a non-iterable value");
}

// A "self-iterator" returns the same object from `[Symbol.iterator]()` /
// `[Symbol.asyncIterator]()`. Generators behave this way, so they exhaust
// after one pass. Replayable iterables (Array, Set, custom) return a fresh
// iterator each call. Detection mirrors `normalizeIterator`'s ordering so
// classification matches scan-time behavior.
function isOneShotIterable(
  source: AsyncIterable<unknown> | Iterable<unknown>,
): boolean {
  // null/undefined are not one-shot in any meaningful sense; let
  // `normalizeIterator` raise the actual error at scan time.
  if (source == null) return false;
  const ref = source as unknown;
  if (
    typeof (source as AsyncIterable<unknown>)[Symbol.asyncIterator] ===
      "function"
  ) {
    const it = (source as AsyncIterable<unknown>)[
      Symbol.asyncIterator
    ]() as unknown;
    return it === ref;
  }
  if (typeof (source as Iterable<unknown>)[Symbol.iterator] === "function") {
    const it = (source as Iterable<unknown>)[Symbol.iterator]() as unknown;
    return it === ref;
  }
  // Already-an-iterator (has `.next` but no `Symbol.iterator`) is by
  // definition one-shot.
  if (typeof (source as { next?: unknown }).next === "function") return true;
  return false;
}
```
### `nodejs/src/lib.rs` (excerpt)

```rust
// @@ -16,6 +16,7 @@ pub mod permutation;
mod query;
pub mod remote;
mod rerankers;
mod scannable;
mod session;
mod table;
mod util;
```
### `nodejs/src/scannable.rs` (new file, 253 lines)

```rust
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright The LanceDB Authors

//! NodeJS binding for the [`lancedb::data::scannable::Scannable`] trait.
//!
//! The JS side supplies a `getNextBatch(isStart)` callback that returns the
//! next Arrow `RecordBatch` encoded as a self-contained Arrow IPC Stream
//! message (schema message + record batch message + EOS marker) wrapped in a
//! `Buffer`, or `null` when the stream is exhausted. The Rust side parses
//! each buffer with `arrow_ipc::reader::StreamReader`, validates every
//! standalone batch stream against the declared schema, and yields decoded
//! `RecordBatch`es as a [`SendableRecordBatchStream`].
//!
//! `isStart` is `true` on the first `getNextBatch` call of each new
//! `scan_as_stream` and `false` thereafter. JS uses it to drop any cached
//! iterator and re-invoke its factory at scan boundaries, so retries
//! triggered by mid-stream failures restart at batch 0.

use std::io::Cursor;
use std::sync::Arc;

use arrow_array::RecordBatch;
use arrow_ipc::reader::StreamReader;
use arrow_schema::SchemaRef;
use futures::stream::once;
use lancedb::arrow::{SendableRecordBatchStream, SimpleRecordBatchStream};
use lancedb::data::scannable::Scannable as LanceScannable;
use lancedb::ipc::ipc_file_to_schema;
use lancedb::{Error, Result as LanceResult};
use napi::bindgen_prelude::*;
use napi::threadsafe_function::ThreadsafeFunction;
use napi_derive::napi;

/// Threadsafe handle to the JS `getNextBatch` callback. The callback takes a
/// single boolean `isStart` (`true` on the first call of each new scan) and
/// returns a Promise that resolves to a `Buffer` containing one IPC Stream
/// message, or `null` at end-of-stream.
type GetNextBatchFn = ThreadsafeFunction<bool, Promise<Option<Buffer>>, bool, Status, false>;

/// A Rust-side view of a JS-constructed `Scannable`.
///
/// Held in JS as the return value of the `Scannable` class constructor. When
/// passed to a consumer that accepts `impl lancedb::data::scannable::Scannable`,
/// the consumer invokes `scan_as_stream()` to pull batches through the JS
/// callback.
#[napi]
pub struct NapiScannable {
    schema: SchemaRef,
    num_rows: Option<usize>,
    rescannable: bool,
    // `ThreadsafeFunction` is not `Clone`; wrap in `Arc` so the stream
    // returned by `scan_as_stream` can own a handle independent of `self`.
    get_next_batch: Arc<GetNextBatchFn>,
    // Tracks whether a scan has already started; used to enforce one-shot
    // semantics on non-rescannable sources.
    scanned: bool,
}

#[napi]
impl NapiScannable {
    /// Construct a new `NapiScannable`.
    ///
    /// - `schema_buf` — Arrow IPC File buffer carrying only the schema (no batches).
    /// - `num_rows` — optional row count hint; not validated against the stream.
    /// - `rescannable` — whether `get_next_batch` may be re-driven after the
    ///   scan completes.
    /// - `get_next_batch` — JS callback that yields the next batch as an Arrow
    ///   IPC Stream message wrapped in a `Buffer`, or `null` at EOF. The
    ///   `isStart` argument is `true` on the first call of each new scan;
    ///   JS uses it to discard any cached iterator before pulling.
    #[napi(constructor)]
    pub fn new(
        schema_buf: Buffer,
        num_rows: Option<i64>,
        rescannable: bool,
        get_next_batch: Function<bool, Promise<Option<Buffer>>>,
    ) -> napi::Result<Self> {
        let schema = ipc_file_to_schema(schema_buf.to_vec())
            .map_err(|e| napi::Error::from_reason(format!("Invalid schema buffer: {}", e)))?;
        let num_rows = num_rows
            .map(|n| {
                usize::try_from(n)
                    .map_err(|_| napi::Error::from_reason("num_rows must be non-negative"))
            })
            .transpose()?;
        let get_next_batch = Arc::new(get_next_batch.build_threadsafe_function().build()?);
        Ok(Self {
            schema,
            num_rows,
            rescannable,
            get_next_batch,
            scanned: false,
        })
    }
}

impl std::fmt::Debug for NapiScannable {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("NapiScannable")
            .field("schema", &self.schema)
            .field("num_rows", &self.num_rows)
            .field("rescannable", &self.rescannable)
            .finish()
    }
}

impl LanceScannable for NapiScannable {
    fn schema(&self) -> SchemaRef {
        self.schema.clone()
    }

    fn scan_as_stream(&mut self) -> SendableRecordBatchStream {
        let schema = self.schema.clone();

        // One-shot enforcement for non-rescannable sources: return a stream
        // whose first item is an error.
        if self.scanned && !self.rescannable {
            let err_stream = once(async {
                Err(Error::InvalidInput {
                    message: "Scannable has already been consumed (non-rescannable source)"
                        .to_string(),
                })
            });
            return Box::pin(SimpleRecordBatchStream::new(err_stream, schema));
        }
        self.scanned = true;

        let tsfn = Arc::clone(&self.get_next_batch);
        let declared_schema = schema.clone();

        // State threaded through the unfold. `is_first_pull` starts true so
        // the first call into JS signals a new-scan boundary; JS uses it to
        // reset any cached iterator before factory()-ing a fresh one.
        let initial = State {
            tsfn,
            batch_index: 0,
            declared_schema,
            errored: false,
            is_first_pull: true,
        };

        let stream = futures::stream::unfold(initial, |mut state| async move {
            if state.errored {
                return None;
            }

            // Pull the next IPC Stream buffer from JS. `is_first_pull` is
            // consumed here and cleared so subsequent pulls continue the
            // same scan rather than restarting it.
            let is_start = state.is_first_pull;
            state.is_first_pull = false;
            let buf = match pull_next(&state.tsfn, is_start).await {
                Ok(Some(buf)) => buf,
                Ok(None) => return None,
                Err(e) => {
                    state.errored = true;
                    return Some((Err(e), state));
                }
            };

            match decode_one_batch(buf.as_ref(), &state.declared_schema) {
                Ok(batch) => {
                    state.batch_index += 1;
                    Some((Ok(batch), state))
                }
                Err(e) => {
                    let tagged = Error::Runtime {
                        message: format!(
                            "[scannable/rust-bridge] failure at batch index {}: {}",
                            state.batch_index, e
                        ),
                    };
                    state.errored = true;
                    Some((Err(tagged), state))
                }
            }
        });

        Box::pin(SimpleRecordBatchStream::new(stream, schema))
    }

    fn num_rows(&self) -> Option<usize> {
        self.num_rows
    }

    fn rescannable(&self) -> bool {
        self.rescannable
    }
}

struct State {
    tsfn: Arc<GetNextBatchFn>,
    batch_index: usize,
    declared_schema: SchemaRef,
    errored: bool,
    /// True for the very first pull of a new scan. Forwarded to JS so the
    /// callback can drop any cached iterator and call its factory fresh,
    /// which makes rescannable sources restart at batch 0 even when the
    /// previous scan ended mid-stream.
    is_first_pull: bool,
}

/// Invoke the JS callback and await its Promise. `is_start` is forwarded to
/// the JS side as the `isStart` argument so it can reset its iterator at the
/// scan boundary. Errors on the JS side surface here as rejected promises
/// and are tunneled back as `lancedb::Error::Runtime`.
async fn pull_next(tsfn: &GetNextBatchFn, is_start: bool) -> LanceResult<Option<Buffer>> {
    let promise = tsfn
        .call_async(is_start)
        .await
        .map_err(|e| Error::Runtime {
            message: format!(
                "[scannable/js-factory] napi error status={}, reason={}",
                e.status, e.reason
            ),
        })?;
    promise.await.map_err(|e| Error::Runtime {
        message: format!(
            "[scannable/js-iterator] napi error status={}, reason={}",
            e.status, e.reason
        ),
    })
}

/// Decode one IPC Stream buffer (schema + batch + EOS) into a `RecordBatch`.
/// Each buffer is a standalone IPC stream, so every decoded stream schema must
/// match the one declared at construction.
fn decode_one_batch(buf: &[u8], declared: &SchemaRef) -> LanceResult<RecordBatch> {
    let reader = StreamReader::try_new(Cursor::new(buf), None).map_err(|e| Error::Runtime {
        message: format!("failed to open IPC stream reader: {}", e),
    })?;

    let actual = reader.schema();
    if actual.as_ref() != declared.as_ref() {
        return Err(Error::InvalidInput {
            message: format!(
                "declared schema does not match stream schema: declared={:?} actual={:?}",
                declared, actual
            ),
        });
    }

    let mut iter = reader;
    let batch = iter
        .next()
        .ok_or_else(|| Error::Runtime {
            message: "IPC stream contained schema but no record batch".to_string(),
        })?
        .map_err(|e| Error::Runtime {
            message: format!("failed to decode record batch: {}", e),
        })?;
    Ok(batch)
}
```