Compare commits

...

15 Commits

Author SHA1 Message Date
Lance Release
38321fa226 [python] Bump version: 0.3.3 → 0.3.4 2023-11-19 00:24:01 +00:00
Lance Release
22749c3fa2 Updating package-lock.json 2023-11-19 00:04:08 +00:00
Lance Release
123a49df77 Bump version: 0.3.7 → 0.3.8 2023-11-19 00:03:58 +00:00
Will Jones
a57aa4b142 chore: upgrade lance to v0.8.17 (#656)
Readying for the next Lance release.
2023-11-18 15:57:23 -08:00
Rok Mihevc
d8e3e54226 feat(python): expose index cache size (#655)
This is to enable https://github.com/lancedb/lancedb/issues/641.
Should be merged after https://github.com/lancedb/lance/pull/1587 is
released.
2023-11-18 14:17:40 -08:00
Ayush Chaurasia
ccfdf4853a [Docs]: Add Instructor embeddings and rate limit handler docs (#651) 2023-11-18 06:08:26 +05:30
Ayush Chaurasia
87e5d86e90 [Docs][SEO] Add sitemap and robots.txt (#645)
Sitemap improves SEO by ranking pages and tracking updates.
2023-11-18 06:08:13 +05:30
Aidan
1cf8a3e4e0 SaaS create_index API (#649) 2023-11-15 19:12:52 -05:00
Lance Release
5372843281 Updating package-lock.json 2023-11-15 03:15:10 +00:00
Lance Release
54677b8f0b Updating package-lock.json 2023-11-15 02:42:38 +00:00
Lance Release
ebcf9bf6ae Bump version: 0.3.6 → 0.3.7 2023-11-15 02:42:25 +00:00
Bert
797514bcbf fix: node remote implement table.countRows (#648) 2023-11-13 17:43:20 -05:00
Rok Mihevc
1c872ce501 feat: add RemoteTable.version in Python (#644)
Please note: this is not tested as we don't have a server here and
testing against a mock object wouldn't be that interesting.
2023-11-13 21:43:48 +01:00
Bert
479f471c14 fix: node send db header for GET requests (#646) 2023-11-11 16:33:25 -05:00
Ayush Chaurasia
ae0d2f2599 fix: Pydantic 1.x compat for weak_lru caching in embeddings API (#643)
Colab ships pydantic 1.x by default, and pydantic 1.x BaseModel objects
don't support the weakref creation that we use to cache embedding models
(https://github.com/lancedb/lancedb/blob/main/python/lancedb/embeddings/utils.py#L206).
`__weakref__` needs to be added to __slots__.
2023-11-10 15:02:38 +05:30
23 changed files with 183 additions and 124 deletions

View File

@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.3.6
current_version = 0.3.8
commit = True
message = Bump version: {current_version} → {new_version}
tag = True

View File

@@ -5,9 +5,10 @@ exclude = ["python"]
resolver = "2"
[workspace.dependencies]
lance = { "version" = "=0.8.14", "features" = ["dynamodb"] }
lance-linalg = { "version" = "=0.8.14" }
lance-testing = { "version" = "=0.8.14" }
lance = { "version" = "=0.8.17", "features" = ["dynamodb"] }
lance-index = { "version" = "=0.8.17" }
lance-linalg = { "version" = "=0.8.17" }
lance-testing = { "version" = "=0.8.17" }
# Note that this one does not include pyarrow
arrow = { version = "47.0.0", optional = false }
arrow-array = "47.0"

View File

@@ -1,4 +1,5 @@
site_name: LanceDB Docs
site_url: https://lancedb.github.io/lancedb/
repo_url: https://github.com/lancedb/lancedb
edit_uri: https://github.com/lancedb/lancedb/tree/main/docs/src
repo_name: lancedb/lancedb

View File

@@ -1,7 +1,9 @@
There are various Embedding functions available out of the box with lancedb. We're working on supporting other popular embedding APIs.
## Text Embedding Functions
Here are the text embedding functions registered by default
Here are the text embedding functions registered by default.
Embedding functions come with an inbuilt rate limit handler that wraps the source and query embedding calls and retries them with exponential backoff.
Each `EmbeddingFunction` implementation automatically takes `max_retries` as an argument, which has a default value of 7.
### Sentence Transformers
Here are the parameters that you can set when registering a `sentence-transformers` object, and their default values:
@@ -66,6 +68,56 @@ actual = table.search(query).limit(1).to_pydantic(Words)[0]
print(actual.text)
```
### Instructor Embeddings
Instructor is an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domain (e.g., science, finance, etc.) by simply providing the task instruction, without any finetuning.
If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions:
Represent the `domain` `text_type` for `task_objective`:
* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
More information about the model can be found here - https://github.com/xlang-ai/instructor-embedding
| Argument | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | "hkunlp/instructor-base" | The name of the model to use |
| `batch_size` | `int` | `32` | The batch size to use when generating embeddings |
| `device` | `str` | `"cpu"` | The device to use when generating embeddings |
| `show_progress_bar` | `bool` | `True` | Whether to show a progress bar when generating embeddings |
| `normalize_embeddings` | `bool` | `True` | Whether to normalize the embeddings |
| `quantize` | `bool` | `False` | Whether to quantize the model |
| `source_instruction` | `str` | `"represent the docuement for retreival"` | The instruction for the source column |
| `query_instruction` | `str` | `"represent the document for retreiving the most similar documents"` | The instruction for the query |
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry, InstructorEmbeddingFunction
instructor = get_registry().get("instructor").create(
source_instruction="represent the docuement for retreival",
query_instruction="represent the document for retreiving the most similar documents"
)
class Schema(LanceModel):
vector: Vector(instructor.ndims()) = instructor.VectorField()
text: str = instructor.SourceField()
db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=Schema, mode="overwrite")
texts = [{"text": "Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that..."},
{"text": "The disparate impact theory is especially controversial under the Fair Housing Act because the Act..."},
{"text": "Disparate impact in United States labor law refers to practices in employment, housing, and other areas that.."}]
tbl.add(texts)
```
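As a hedged follow-up (not in the original docs), the table created above can then be queried with plain text; the registered Instructor function embeds the query using `query_instruction`. The query string below is made up:
```python
# Illustrative query against the table created above; the registered
# Instructor function embeds the query string automatically.
result = tbl.search("Which economic system dominates the Western world?") \
    .limit(1) \
    .to_pydantic(Schema)[0]
print(result.text)
```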
## Multi-modal embedding functions
Multi-modal embedding functions allow you to query your table using both images and text.

View File

@@ -57,6 +57,19 @@ query_image = Image.open(p)
table.search(query_image)
```
### Rate limit Handling
The `EmbeddingFunction` class wraps the calls for source and query embedding generation inside a rate limit handler that retries the requests with exponential backoff after successive failures. By default, the maximum number of retries is set to 7. You can tune it by setting it to a different number, or disable retries by setting it to 0.
Example
----
```python
clip = registry.get("open-clip").create() # Defaults to 7 max retries
clip = registry.get("open-clip").create(max_retries=10) # Increase max retries to 10
clip = registry.get("open-clip").create(max_retries=0) # Retries disabled
```
NOTE:
Embedding functions can also fail due to other errors that have nothing to do with rate limits. This is why the error is also logged.
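For illustration only (not part of this diff), here is a minimal sketch of what an exponential-backoff retry wrapper like the one described above might look like; the function and parameter names are made up and this is not the actual lancedb implementation:
```python
import logging
import time


def with_retries(fn, max_retries=7, base_delay=1.0):
    """Call `fn`, retrying with exponential backoff on failure.

    max_retries=0 disables retries entirely, mirroring the behaviour
    described above.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as exc:  # rate limits or any other error
            logging.warning("embedding call failed (attempt %d): %s", attempt + 1, exc)
            if attempt == max_retries:
                raise
            time.sleep(base_delay * 2 ** attempt)
```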
### A little fun with Pydantic
LanceDB is integrated with Pydantic. In fact, we've used the integration in the above example to define the schema. It is also used behind the scenes by the embedding function API to ingest useful information as table metadata.

1
docs/src/robots.txt Normal file
View File

@@ -0,0 +1 @@
User-agent: *

104
node/package-lock.json generated
View File

@@ -1,12 +1,12 @@
{
"name": "vectordb",
"version": "0.3.6",
"version": "0.3.8",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "vectordb",
"version": "0.3.6",
"version": "0.3.8",
"cpu": [
"x64",
"arm64"
@@ -53,11 +53,11 @@
"uuid": "^9.0.0"
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.3.6",
"@lancedb/vectordb-darwin-x64": "0.3.6",
"@lancedb/vectordb-linux-arm64-gnu": "0.3.6",
"@lancedb/vectordb-linux-x64-gnu": "0.3.6",
"@lancedb/vectordb-win32-x64-msvc": "0.3.6"
"@lancedb/vectordb-darwin-arm64": "0.3.8",
"@lancedb/vectordb-darwin-x64": "0.3.8",
"@lancedb/vectordb-linux-arm64-gnu": "0.3.8",
"@lancedb/vectordb-linux-x64-gnu": "0.3.8",
"@lancedb/vectordb-win32-x64-msvc": "0.3.8"
}
},
"node_modules/@apache-arrow/ts": {
@@ -316,66 +316,6 @@
"@jridgewell/sourcemap-codec": "^1.4.10"
}
},
"node_modules/@lancedb/vectordb-darwin-arm64": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.3.6.tgz",
"integrity": "sha512-GR5v+4kHUCZ71gVxd3mLsUdlreXPUIbvBgvr+BmEXRbLfc7+JsFUjsRgxmoctQ0mXxkW67Sl7v6kQCWcBLCk/Q==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"darwin"
]
},
"node_modules/@lancedb/vectordb-darwin-x64": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.3.6.tgz",
"integrity": "sha512-4qemi4jUXG8jOk7ecECmb0+5Nm0n7YF5/1X9/5uc81I+4What+yhZE9nEsmCGRBqmtuQXkYl35ePvQgj3rCQjQ==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"darwin"
]
},
"node_modules/@lancedb/vectordb-linux-arm64-gnu": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.3.6.tgz",
"integrity": "sha512-I/lFqIUcXYxJnUG5+DILzUzcfHRGHXL3kl5bs1MGkR9a7F3oPx1IAwY9wkskVnClM7XF9H7MVcFRVTjHUqoUwA==",
"cpu": [
"arm64"
],
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-linux-x64-gnu": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.3.6.tgz",
"integrity": "sha512-UTA/4bpA3UoByhfDx//S5m4o6uQ1qfpneD0PbuftAjkt9eHg0ABIEpZdiTI3xUBdrjXSKZtpVTxOin9X39IBKQ==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"linux"
]
},
"node_modules/@lancedb/vectordb-win32-x64-msvc": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.3.6.tgz",
"integrity": "sha512-70IS0TX4BpjSX4GP1Pq835cqQ5LZpfOJuBNtGv93OxMTWTVQUxtp2MLNwOR6OJMGNQz6q84NNKrKOSf15ZGwGg==",
"cpu": [
"x64"
],
"optional": true,
"os": [
"win32"
]
},
"node_modules/@neon-rs/cli": {
"version": "0.0.160",
"resolved": "https://registry.npmjs.org/@neon-rs/cli/-/cli-0.0.160.tgz",
@@ -4868,36 +4808,6 @@
"@jridgewell/sourcemap-codec": "^1.4.10"
}
},
"@lancedb/vectordb-darwin-arm64": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-arm64/-/vectordb-darwin-arm64-0.3.6.tgz",
"integrity": "sha512-GR5v+4kHUCZ71gVxd3mLsUdlreXPUIbvBgvr+BmEXRbLfc7+JsFUjsRgxmoctQ0mXxkW67Sl7v6kQCWcBLCk/Q==",
"optional": true
},
"@lancedb/vectordb-darwin-x64": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-darwin-x64/-/vectordb-darwin-x64-0.3.6.tgz",
"integrity": "sha512-4qemi4jUXG8jOk7ecECmb0+5Nm0n7YF5/1X9/5uc81I+4What+yhZE9nEsmCGRBqmtuQXkYl35ePvQgj3rCQjQ==",
"optional": true
},
"@lancedb/vectordb-linux-arm64-gnu": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-arm64-gnu/-/vectordb-linux-arm64-gnu-0.3.6.tgz",
"integrity": "sha512-I/lFqIUcXYxJnUG5+DILzUzcfHRGHXL3kl5bs1MGkR9a7F3oPx1IAwY9wkskVnClM7XF9H7MVcFRVTjHUqoUwA==",
"optional": true
},
"@lancedb/vectordb-linux-x64-gnu": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-linux-x64-gnu/-/vectordb-linux-x64-gnu-0.3.6.tgz",
"integrity": "sha512-UTA/4bpA3UoByhfDx//S5m4o6uQ1qfpneD0PbuftAjkt9eHg0ABIEpZdiTI3xUBdrjXSKZtpVTxOin9X39IBKQ==",
"optional": true
},
"@lancedb/vectordb-win32-x64-msvc": {
"version": "0.3.6",
"resolved": "https://registry.npmjs.org/@lancedb/vectordb-win32-x64-msvc/-/vectordb-win32-x64-msvc-0.3.6.tgz",
"integrity": "sha512-70IS0TX4BpjSX4GP1Pq835cqQ5LZpfOJuBNtGv93OxMTWTVQUxtp2MLNwOR6OJMGNQz6q84NNKrKOSf15ZGwGg==",
"optional": true
},
"@neon-rs/cli": {
"version": "0.0.160",
"resolved": "https://registry.npmjs.org/@neon-rs/cli/-/cli-0.0.160.tgz",

View File

@@ -1,6 +1,6 @@
{
"name": "vectordb",
"version": "0.3.6",
"version": "0.3.8",
"description": " Serverless, low-latency vector database for AI applications",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -81,10 +81,10 @@
}
},
"optionalDependencies": {
"@lancedb/vectordb-darwin-arm64": "0.3.6",
"@lancedb/vectordb-darwin-x64": "0.3.6",
"@lancedb/vectordb-linux-arm64-gnu": "0.3.6",
"@lancedb/vectordb-linux-x64-gnu": "0.3.6",
"@lancedb/vectordb-win32-x64-msvc": "0.3.6"
"@lancedb/vectordb-darwin-arm64": "0.3.8",
"@lancedb/vectordb-darwin-x64": "0.3.8",
"@lancedb/vectordb-linux-arm64-gnu": "0.3.8",
"@lancedb/vectordb-linux-x64-gnu": "0.3.8",
"@lancedb/vectordb-win32-x64-msvc": "0.3.8"
}
}

View File

@@ -89,7 +89,8 @@ export class HttpLancedbClient {
{
headers: {
'Content-Type': 'application/json',
'x-api-key': this._apiKey()
'x-api-key': this._apiKey(),
...(this._dbName !== undefined ? { 'x-lancedb-database': this._dbName } : {})
},
params,
timeout: 10000

View File

@@ -237,7 +237,8 @@ export class RemoteTable<T = number[]> implements Table<T> {
}
async countRows (): Promise<number> {
throw new Error('Not implemented')
const result = await this._client.post(`/v1/table/${this._name}/describe/`)
return result.data?.stats?.num_rows
}
async delete (filter: string): Promise<void> {

View File

@@ -282,7 +282,8 @@ describe('LanceDB client', function () {
)
const table = await con.createTable({ name: 'vectors', schema })
await table.add([{ vector: Array(128).fill(0.1) }])
await table.delete('vector IS NOT NULL')
// https://github.com/lancedb/lance/issues/1635
await table.delete('true')
const result = await table.search(Array(128).fill(0.1)).execute()
assert.isEmpty(result)
})

View File

@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.3.3
current_version = 0.3.4
commit = True
message = [python] Bump version: {current_version} → {new_version}
tag = True

View File

@@ -33,6 +33,7 @@ class EmbeddingFunction(BaseModel, ABC):
3. ndims method which returns the number of dimensions of the vector column
"""
__slots__ = ("__weakref__",) # pydantic 1.x compatibility
max_retries: int = (
7 # Setting 0 disables retries. Maybe this should not be enabled by default,
)
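For background (an editorial aside, not part of this diff): pydantic 1.x `BaseModel` instances don't support weak references unless a subclass adds `__weakref__` to its `__slots__`, which is what the line above enables. A minimal plain-Python illustration:
```python
import weakref


class NoWeakref:
    __slots__ = ("x",)  # no __weakref__ slot, so weakrefs are rejected


class WithWeakref:
    __slots__ = ("x", "__weakref__")  # weakref creation now allowed


# weakref.ref(NoWeakref())        # would raise TypeError: cannot create weak reference
ref = weakref.ref(WithWeakref())  # works, so instances can live in a weak cache
```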

View File

@@ -44,6 +44,14 @@ class RemoteTable(Table):
schema = json_to_schema(resp["schema"])
return schema
@property
def version(self) -> int:
"""Get the current version of the table"""
resp = self._conn._loop.run_until_complete(
self._conn._client.post(f"/v1/table/{self._name}/describe/")
)
return resp["version"]
def to_arrow(self) -> pa.Table:
"""Return the table as an Arrow table."""
raise NotImplementedError("to_arrow() is not supported on the LanceDB cloud")
@@ -63,8 +71,62 @@ class RemoteTable(Table):
vector_column_name: str = VECTOR_COLUMN_NAME,
replace: bool = True,
accelerator: Optional[str] = None,
index_cache_size: Optional[int] = None,
):
raise NotImplementedError
"""Create an index on the table.
Currently, the only parameters that matter are
the metric and the vector column name.
Parameters
----------
metric : str
The metric to use for the index. Default is "L2".
num_partitions : int
The number of partitions to use for the index. Default is 256.
num_sub_vectors : int
The number of sub-vectors to use for the index. Default is 96.
vector_column_name : str
The name of the vector column. Default is "vector".
replace : bool
Whether to replace the existing index. Default is True.
accelerator : str, optional
If set, use the given accelerator to create the index.
Default is None. Currently not supported.
index_cache_size : int, optional
The size of the index cache in number of entries. Default value is 256.
Examples
--------
import lancedb
import uuid
from lancedb.schema import vector
conn = lancedb.connect("db://...", api_key="...", region="...")
table_name = uuid.uuid4().hex
schema = pa.schema(
[
pa.field("id", pa.uint32(), False),
pa.field("vector", vector(128), False),
pa.field("s", pa.string(), False),
]
)
table = conn.create_table(
table_name,
schema=schema,
)
table.create_index()
"""
index_type = "vector"
data = {
"column": vector_column_name,
"index_type": index_type,
"metric_type": metric,
"index_cache_size": index_cache_size,
}
resp = self._conn._loop.run_until_complete(
self._conn._client.post(f"/v1/table/{self._name}/create_index/", data=data)
)
return resp
def add(
self,
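Taken together, the new `version` property and the remote `create_index` implementation above could be exercised roughly as follows; the connection URI, API key, region, and table name are placeholders, and this sketch assumes a LanceDB Cloud deployment:
```python
import lancedb

# Placeholder credentials; requires a LanceDB Cloud deployment.
db = lancedb.connect("db://my-project", api_key="...", region="us-east-1")
table = db.open_table("my_table")

print(table.version)          # new: current table version via the /describe/ endpoint
table.create_index(
    metric="L2",
    vector_column_name="vector",
    index_cache_size=256,     # new: forwarded to the create_index endpoint
)
```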

View File

@@ -188,6 +188,7 @@ class Table(ABC):
vector_column_name: str = VECTOR_COLUMN_NAME,
replace: bool = True,
accelerator: Optional[str] = None,
index_cache_size: Optional[int] = None,
):
"""Create an index on the table.
@@ -212,6 +213,8 @@ class Table(ABC):
accelerator: str, default None
If set, use the given accelerator to create the index.
Only support "cuda" for now.
index_cache_size : int, optional
The size of the index cache in number of entries. Default value is 256.
"""
raise NotImplementedError
@@ -556,6 +559,7 @@ class LanceTable(Table):
vector_column_name=VECTOR_COLUMN_NAME,
replace: bool = True,
accelerator: Optional[str] = None,
index_cache_size: Optional[int] = None,
):
"""Create an index on the table."""
self._dataset.create_index(
@@ -566,6 +570,7 @@ class LanceTable(Table):
num_sub_vectors=num_sub_vectors,
replace=replace,
accelerator=accelerator,
index_cache_size=index_cache_size,
)
self._reset_dataset()
register_event("create_index")
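For a local (embedded) table, the new `index_cache_size` argument is simply forwarded to the underlying `create_index` call; a minimal sketch with made-up data (a real dataset would need far more rows for meaningful PQ training):
```python
import random
import lancedb

db = lancedb.connect("/tmp/lancedb")
tbl = db.create_table(
    "vectors",
    data=[{"id": i, "vector": [random.random() for _ in range(8)]} for i in range(512)],
    mode="overwrite",
)
tbl.create_index(
    num_partitions=2,       # small values, just for the sketch
    num_sub_vectors=4,
    index_cache_size=10,    # new parameter added in this change
)
```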

View File

@@ -1,9 +1,9 @@
[project]
name = "lancedb"
version = "0.3.3"
version = "0.3.4"
dependencies = [
"deprecation",
"pylance==0.8.10",
"pylance==0.8.17",
"ratelimiter~=1.0",
"retry>=0.9.2",
"tqdm>=4.1.0",

View File

@@ -301,6 +301,7 @@ def test_replace_index(tmp_path):
num_partitions=2,
num_sub_vectors=4,
replace=True,
index_cache_size=10,
)

View File

@@ -213,6 +213,7 @@ def test_create_index_method():
num_sub_vectors=96,
vector_column_name="vector",
replace=True,
index_cache_size=256,
)
# Check that the _dataset.create_index method was called
@@ -225,6 +226,7 @@ def test_create_index_method():
num_sub_vectors=96,
replace=True,
accelerator=None,
index_cache_size=256,
)

View File

@@ -1,6 +1,6 @@
[package]
name = "vectordb-node"
version = "0.3.6"
version = "0.3.8"
description = "Serverless, low-latency vector database for AI applications"
license = "Apache-2.0"
edition = "2018"
@@ -19,6 +19,7 @@ once_cell = "1"
futures = "0.3"
half = { workspace = true }
lance = { workspace = true }
lance-index = { workspace = true }
lance-linalg = { workspace = true }
vectordb = { path = "../../vectordb" }
tokio = { version = "1.23", features = ["rt-multi-thread"] }

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use lance::index::vector::{ivf::IvfBuildParams, pq::PQBuildParams};
use lance_index::vector::{ivf::IvfBuildParams, pq::PQBuildParams};
use lance_linalg::distance::MetricType;
use neon::context::FunctionContext;
use neon::prelude::*;

View File

@@ -1,6 +1,6 @@
[package]
name = "vectordb"
version = "0.3.6"
version = "0.3.8"
edition = "2021"
description = "LanceDB: A serverless, low-latency vector database for AI applications"
license = "Apache-2.0"
@@ -21,6 +21,7 @@ object_store = { workspace = true }
snafu = { workspace = true }
half = { workspace = true }
lance = { workspace = true }
lance-index = { workspace = true }
lance-linalg = { workspace = true }
lance-testing = { workspace = true }
tokio = { version = "1.23", features = ["rt-multi-thread"] }

View File

@@ -13,9 +13,9 @@
// limitations under the License.
use lance::format::{Index, Manifest};
use lance::index::vector::ivf::IvfBuildParams;
use lance::index::vector::pq::PQBuildParams;
use lance::index::vector::VectorIndexParams;
use lance_index::vector::ivf::IvfBuildParams;
use lance_linalg::distance::MetricType;
pub trait VectorIndexBuilder {
@@ -136,9 +136,9 @@ impl VectorIndex {
mod tests {
use super::*;
use lance::index::vector::ivf::IvfBuildParams;
use lance::index::vector::pq::PQBuildParams;
use lance::index::vector::StageParams;
use lance_index::vector::ivf::IvfBuildParams;
use lance_index::vector::pq::PQBuildParams;
use crate::index::vector::{IvfPQIndexBuilder, VectorIndexBuilder};

View File

@@ -13,6 +13,8 @@
// limitations under the License.
use chrono::Duration;
use lance::dataset::builder::DatasetBuilder;
use lance_index::IndexType;
use std::sync::Arc;
use arrow_array::{Float32Array, RecordBatchReader};
@@ -22,7 +24,7 @@ use lance::dataset::optimize::{
compact_files, CompactionMetrics, CompactionOptions, IndexRemapperOptions,
};
use lance::dataset::{Dataset, WriteParams};
use lance::index::{DatasetIndexExt, IndexType};
use lance::index::DatasetIndexExt;
use lance::io::object_store::WrappingObjectStore;
use std::path::Path;
@@ -96,7 +98,10 @@ impl Table {
Some(wrapper) => params.patch_with_store_wrapper(wrapper)?,
None => params,
};
let dataset = Dataset::open_with_params(uri, &params)
let dataset = DatasetBuilder::from_uri(uri)
.with_read_params(params)
.load()
.await
.map_err(|e| match e {
lance::Error::DatasetNotFound { .. } => Error::TableNotFound {
@@ -414,9 +419,9 @@ mod tests {
use arrow_data::ArrayDataBuilder;
use arrow_schema::{DataType, Field, Schema};
use lance::dataset::{Dataset, WriteMode};
use lance::index::vector::ivf::IvfBuildParams;
use lance::index::vector::pq::PQBuildParams;
use lance::io::object_store::{ObjectStoreParams, WrappingObjectStore};
use lance_index::vector::ivf::IvfBuildParams;
use rand::Rng;
use tempfile::tempdir;