Compare commits

...

11 Commits

Author SHA1 Message Date
qzhu
bcbc4541e7 change variables to camel case 2024-06-10 11:58:22 -07:00
Raghav Dixit
96914a619b docs: llama-index integration (#1347)
Updated API reference and usage for the LlamaIndex integration.
2024-06-09 23:52:18 +05:30
Beinan
3c62806b6a fix(java): the JVM crash when using jdk 8 (#1372)
`Optional::isEmpty` does not exist in Java 8, so we use `isPresent` instead.
2024-06-08 22:43:41 -07:00
Ayush Chaurasia
72f339a0b3 docs: add note about embedding api not being available on cloud (#1371) 2024-06-09 03:57:23 +05:30
QianZhu
b9e3cfbdca fix: add status to remote listIndices return (#1364)
expose `status` returned by remote listIndices
2024-06-08 09:52:35 -07:00
Ayush Chaurasia
5e30648f45 docs: fix example path (#1367) 2024-06-07 19:40:50 -07:00
Ayush Chaurasia
76fc16c7a1 docs: add retriever guide, address minor onboarding feedbacks & enhancement (#1326)
- Tried to address some onboarding feedbacks listed in
https://github.com/lancedb/lancedb/issues/1224
- Improve visibility of pydantic integration and embedding API. (Based
on onboarding feedback - Many ways of ingesting data, defining schema
but not sure what to use in a specific use-case)
- Add a guide that takes users through testing and improving retriever
performance using built-in utilities like hybrid-search and reranking
- Add some benchmarks for the above
- Add missing cohere docs

---------

Co-authored-by: Weston Pace <weston.pace@gmail.com>
2024-06-08 06:25:31 +05:30
Weston Pace
007f9c1af8 chore: change build machine for linux arm (#1360) 2024-06-06 13:22:58 -07:00
Lance Release
27e4ad3f11 Updating package-lock.json 2024-06-05 13:47:44 +00:00
Lance Release
df42943ccf Bump version: 0.5.2-beta.0 → 0.5.2 2024-06-05 13:47:28 +00:00
Lance Release
3eec9ea740 Bump version: 0.5.1 → 0.5.2-beta.0 2024-06-05 13:47:27 +00:00
27 changed files with 551 additions and 33 deletions

View File

@@ -1,5 +1,5 @@
[tool.bumpversion]
current_version = "0.5.1"
current_version = "0.5.2"
parse = """(?x)
(?P<major>0|[1-9]\\d*)\\.
(?P<minor>0|[1-9]\\d*)\\.

View File

@@ -3,7 +3,7 @@ name: NPM Publish
on:
push:
tags:
-- 'v*'
+- "v*"
jobs:
node:
@@ -111,12 +111,11 @@ jobs:
runner: ubuntu-latest
- arch: aarch64
# For successful fat LTO builds, we need a large runner to avoid OOM errors.
-runner: buildjet-16vcpu-ubuntu-2204-arm
+runner: warp-ubuntu-latest-arm64-4x
steps:
- name: Checkout
uses: actions/checkout@v4
-# Buildjet aarch64 runners have only 1.5 GB RAM per core, vs 3.5 GB per core for
-# x86_64 runners. To avoid OOM errors on ARM, we create a swap file.
+# To avoid OOM errors on ARM, we create a swap file.
- name: Configure aarch64 build
if: ${{ matrix.config.arch == 'aarch64' }}
run: |
@@ -323,7 +322,7 @@ jobs:
- name: Publish to NPM
env:
NODE_AUTH_TOKEN: ${{ secrets.LANCEDB_NPM_REGISTRY_TOKEN }}
# By default, things are published to the latest tag. This is what is
# installed by default if the user does not specify a version. This is
# good for stable releases, but for pre-releases, we want to publish to
# the "preview" tag so they can install with `npm install lancedb@preview`.
@@ -368,7 +367,7 @@ jobs:
- uses: ./.github/workflows/update_package_lock_nodejs
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
gh-release:
runs-on: ubuntu-latest
permissions:

View File

@@ -106,6 +106,9 @@ nav:
- Versioning & Reproducibility: notebooks/reproducibility.ipynb
- Configuring Storage: guides/storage.md
- Sync -> Async Migration Guide: migration.md
- Tuning retrieval performance:
- Choosing right query type: guides/tuning_retrievers/1_query_types.md
- Reranking: guides/tuning_retrievers/2_reranking.md
- 🧬 Managing embeddings:
- Overview: embeddings/index.md
- Embedding functions: embeddings/embedding_functions.md
@@ -121,7 +124,9 @@ nav:
- LangChain:
- LangChain 🔗: integrations/langchain.md
- LangChain JS/TS 🔗: https://js.langchain.com/docs/integrations/vectorstores/lancedb
-- LlamaIndex 🦙: https://docs.llamaindex.ai/en/stable/examples/vector_stores/LanceDBIndexDemo/
+- LlamaIndex 🦙:
+- LlamaIndex docs: integrations/llamaIndex.md
+- LlamaIndex demo: https://docs.llamaindex.ai/en/stable/examples/vector_stores/LanceDBIndexDemo/
- Pydantic: python/pydantic.md
- Voxel51: integrations/voxel51.md
- PromptTools: integrations/prompttools.md
@@ -152,7 +157,7 @@ nav:
- Overview: cloud/index.md
- API reference:
- 🐍 Python: python/saas-python.md
-- 👾 JavaScript: javascript/saas-modules.md
+- 👾 JavaScript: javascript/modules.md
- Quick start: basic.md
- Concepts:
@@ -181,6 +186,9 @@ nav:
- Versioning & Reproducibility: notebooks/reproducibility.ipynb
- Configuring Storage: guides/storage.md
- Sync -> Async Migration Guide: migration.md
- Tuning retrieval performance:
- Choosing right query type: guides/tuning_retrievers/1_query_types.md
- Reranking: guides/tuning_retrievers/2_reranking.md
- Managing Embeddings:
- Overview: embeddings/index.md
- Embedding functions: embeddings/embedding_functions.md
@@ -219,7 +227,7 @@ nav:
- Overview: cloud/index.md
- API reference:
- 🐍 Python: python/saas-python.md
-- 👾 JavaScript: javascript/saas-modules.md
+- 👾 JavaScript: javascript/modules.md
extra_css:
- styles/global.css

View File

@@ -180,6 +180,9 @@ table.
!!! info "Under the hood, LanceDB reads in the Apache Arrow data and persists it to disk using the [Lance format](https://www.github.com/lancedb/lance)."
!!! info "Automatic embedding generation with Embedding API"
When working with embedding models, it is recommended to use the LanceDB embedding API to automatically create vector representations of your data and queries in the background. See the [quickstart example](#using-the-embedding-api) or the embedding API [guide](./embeddings/).
### Create an empty table
Sometimes you may not have the data to insert into the table at creation time.
@@ -194,6 +197,9 @@ similar to a `CREATE TABLE` statement in SQL.
--8<-- "python/python/tests/docs/test_basic.py:create_empty_table_async"
```
!!! note "You can define schema in Pydantic"
LanceDB comes with Pydantic support, which allows you to define the schema of your data using Pydantic models. This makes it easy to work with LanceDB tables and data. Learn more about all supported types in the [tables guide](./guides/tables.md).
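For instance, a minimal sketch (the `Item` model name and the 384-dimension vector size are illustrative):

```python
import lancedb
from lancedb.pydantic import LanceModel, Vector

class Item(LanceModel):
    text: str
    vector: Vector(384)  # fixed-size float vector column

db = lancedb.connect("~/.lancedb")
tbl = db.create_table("items", schema=Item, mode="overwrite")
```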
=== "Typescript"
```typescript
@@ -424,6 +430,19 @@ Use the `drop_table()` method on the database to remove a table.
})
```
## Using the Embedding API
You can use the embedding API when working with embedding models. It automatically vectorizes the data at ingestion and query time and comes with built-in integrations with popular embedding models like OpenAI, Hugging Face, Sentence Transformers, CLIP, and more.
=== "Python"
```python
--8<-- "python/python/tests/docs/test_embeddings_optional.py:imports"
--8<-- "python/python/tests/docs/test_embeddings_optional.py:openai_embeddings"
```
Learn about using the existing integrations and creating custom embedding functions in the [embedding API guide](./embeddings/).
## What's next
This section covered the very basics of using LanceDB. If you're learning about vector databases for the first time, you may want to read the page on [indexing](concepts/index_ivfpq.md) to get familiar with the concepts.

View File

@@ -216,7 +216,7 @@ Generate embeddings via the [ollama](https://github.com/ollama/ollama-python) py
|------------------------|----------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | `str` | `nomic-embed-text` | The name of the model. |
| `host` | `str` | `http://localhost:11434` | The Ollama host to connect to. |
-| `options` | `ollama.Options` or `dict` | `None` | Additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`. |
+| `options` | `ollama.Options` or `dict` | `None` | Additional model parameters listed in the documentation for the Modelfile such as `temperature`. |
| `keep_alive` | `float` or `str` | `"5m"` | Controls how long the model will stay loaded into memory following the request. |
| `ollama_client_kwargs` | `dict` | `{}` | kwargs that can be passed to the `ollama.Client`. |
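As a rough sketch of how these parameters fit together (assuming a local Ollama server with the `nomic-embed-text` model pulled):

```python
from lancedb.embeddings import get_registry

# options and keep_alive are forwarded to the Ollama client
embed_fcn = (
    get_registry()
    .get("ollama")
    .create(name="nomic-embed-text", keep_alive="5m", options={"temperature": 0.0})
)
```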
@@ -365,6 +365,68 @@ tbl.add(df)
rs = tbl.search("hello").limit(1).to_pandas()
```
### Cohere Embeddings
Using the Cohere API requires the `cohere` package, which can be installed with `pip install cohere`. Cohere embeddings are generated from text data and can be used for tasks like semantic search, clustering, and classification.
You also need to set the `COHERE_API_KEY` environment variable to use the Cohere API.
Supported models are:
```
* embed-english-v3.0
* embed-multilingual-v3.0
* embed-english-light-v3.0
* embed-multilingual-light-v3.0
* embed-english-v2.0
* embed-english-light-v2.0
* embed-multilingual-v2.0
```
Supported parameters (to be passed in `create` method) are:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `name` | `str` | `"embed-english-v2.0"` | The model ID of the cohere model to use. Supported base models for Text Embeddings: embed-english-v3.0, embed-multilingual-v3.0, embed-english-light-v3.0, embed-multilingual-light-v3.0, embed-english-v2.0, embed-english-light-v2.0, embed-multilingual-v2.0 |
| `source_input_type` | `str` | `"search_document"` | The type of input data to be used for the source column. |
| `query_input_type` | `str` | `"search_query"` | The type of input data to be used for the query. |
Cohere supports the following input types:
| Input Type | Description |
|---|---|
| `search_document` | Used for embeddings stored in a vector database for search use-cases. |
| `search_query` | Used for embeddings of search queries run against a vector DB. |
| `semantic_similarity` | Specifies the given text will be used for Semantic Textual Similarity (STS). |
| `classification` | Used for embeddings passed through a text classifier. |
| `clustering` | Used for embeddings run through a clustering algorithm. |
Usage Example:
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import EmbeddingFunctionRegistry

cohere = (
    EmbeddingFunctionRegistry.get_instance()
    .get("cohere")
    .create(name="embed-multilingual-v2.0")
)

class TextModel(LanceModel):
    text: str = cohere.SourceField()
    vector: Vector(cohere.ndims()) = cohere.VectorField()

data = [{"text": "hello world"},
        {"text": "goodbye world"}]

db = lancedb.connect("~/.lancedb")
tbl = db.create_table("test", schema=TextModel, mode="overwrite")
tbl.add(data)
```
### AWS Bedrock Text Embedding Functions
AWS Bedrock supports multiple base models for generating text embeddings. You need to set up AWS credentials to use this embedding function.
You can do so using `awscli`, optionally adding your session token:

View File

@@ -2,6 +2,9 @@ Representing multi-modal data as vector embeddings is becoming a standard practi
For this purpose, LanceDB introduces an **embedding functions API** that allows you to set up your embedding function once, during the configuration stage of your project. After that, the table remembers it, effectively making the embedding function *disappear into the background* so you don't have to worry about manually passing callables and can instead focus on the rest of your data engineering pipeline.
!!! Note "LanceDB cloud doesn't support embedding functions yet"
LanceDB Cloud does not support embedding functions yet. You need to generate embeddings before ingesting into the table or querying.
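For example, a minimal sketch of pre-computing vectors client-side before ingesting (the model choice and placeholder credentials are illustrative):

```python
import lancedb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
texts = ["hello world", "goodbye world"]
rows = [{"text": t, "vector": model.encode(t).tolist()} for t in texts]

db = lancedb.connect("db://your-db", api_key="sk_...", region="us-east-1")
tbl = db.create_table("docs", data=rows)
```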
!!! warning
Using the embedding function registry means that you don't have to explicitly generate the embeddings yourself.
However, if your embedding function changes, you'll have to re-configure your table with the new embedding function

View File

@@ -2,7 +2,6 @@
LanceDB provides support for full-text search via [Tantivy](https://github.com/quickwit-oss/tantivy) (currently Python only), allowing you to incorporate keyword-based search (based on BM25) in your retrieval solutions. Our goal is to push the FTS integration down to the Rust level in the future, so that it's available for Rust and JavaScript users as well. Follow along at [this GitHub issue](https://github.com/lancedb/lance/issues/1195).
A hybrid search solution combining vector and full-text search is also on the way.
## Installation

View File

@@ -452,6 +452,27 @@ After a table has been created, you can always add more data to it using the var
tbl.add(pydantic_model_items)
```
??? "Ingesting Pydantic models with LanceDB embedding API"
When using LanceDB's embedding API, you can add Pydantic models directly to the table. LanceDB will automatically populate the `vector` field before adding it to the table. You need to set the default value of the `vector` field to `None` to allow LanceDB to vectorize the data automatically.
```python
import lancedb
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry

db = lancedb.connect("~/tmp")
embed_fcn = get_registry().get("huggingface").create(name="BAAI/bge-small-en-v1.5")

class Schema(LanceModel):
    text: str = embed_fcn.SourceField()
    vector: Vector(embed_fcn.ndims()) = embed_fcn.VectorField(default=None)

tbl = db.create_table("my_table", schema=Schema, mode="overwrite")
models = [Schema(text="hello"), Schema(text="world")]
tbl.add(models)
```
=== "JavaScript"
@@ -636,6 +657,31 @@ The `values` parameter is used to provide the new values for the columns as lite
When rows are updated, they are moved out of the index. They will still show up in ANN queries, but the queries will not be as fast as they would be if the rows were in the index. If you update a large proportion of rows, consider rebuilding the index afterwards.
## Drop a table
Use the `drop_table()` method on the database to remove a table.
=== "Python"
```python
--8<-- "python/python/tests/docs/test_basic.py:drop_table"
--8<-- "python/python/tests/docs/test_basic.py:drop_table_async"
```
This permanently removes the table and is not recoverable, unlike deleting rows.
By default, if the table does not exist an exception is raised. To suppress this,
you can pass in `ignore_missing=True`.
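For example (a one-line sketch, assuming a table named `my_table`):

```python
db.drop_table("my_table", ignore_missing=True)  # no error if the table is absent
```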
=== "Javascript/Typescript"
```typescript
--8<-- "docs/src/basic_legacy.ts:drop_table"
```
This permanently removes the table and is not recoverable, unlike deleting rows.
If the table does not exist, an exception is raised.
## Consistency
In LanceDB OSS, users can set the `read_consistency_interval` parameter on connections to achieve different levels of read consistency. This parameter determines how frequently the database synchronizes with the underlying storage system to check for updates made by other processes. If another process updates a table, the database will not see the changes until the next synchronization.
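For example, a minimal sketch (the five-second interval is illustrative):

```python
from datetime import timedelta
import lancedb

# re-check storage for updates from other processes at most every 5 seconds
db = lancedb.connect("~/.lancedb", read_consistency_interval=timedelta(seconds=5))
```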

View File

@@ -0,0 +1,128 @@
## Improving retriever performance
Vector databases are used as retrievers in recommender and chatbot systems to fetch relevant data for user queries; for example, a retriever is a critical component of Retrieval-Augmented Generation (RAG) architectures. In this section, we discuss how to improve the performance of retrievers.
There are several ways to improve the performance of retrievers. Some common techniques are:
* Using different query types
* Using hybrid search
* Fine-tuning the embedding models
* Using different embedding models
Choosing an embedding model is highly specific to the use case and the data, so we will not cover it here. In this section, we discuss the first three techniques.
!!! note "Note"
We'll use a simple metric called "hit-rate" throughout this guide to evaluate retriever performance. Hit-rate is the percentage of queries for which the retriever returns the correct answer in the top-k results. For example, if the retriever returns the correct answer in the top-3 results for 70% of the queries, hit-rate@3 is 0.7.
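Concretely, a hypothetical helper for computing this metric might look like:

```python
def hit_rate_at_k(retrieved: list[list[str]], expected: list[str]) -> float:
    """retrieved[i] holds the top-k contexts for query i; expected[i] is its gold context."""
    hits = sum(gold in docs for docs, gold in zip(retrieved, expected))
    return hits / len(expected)
```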
## The dataset
We'll use a QA dataset generated from the Llama 2 paper. The dataset contains 221 query-context-answer triplets; the queries and answers were generated with GPT-4 from a given context. The full script used to generate the dataset can be found in this [repo](https://github.com/lancedb/ragged), and the dataset itself can be downloaded from [here](https://github.com/AyushExel/assets/blob/main/data_qa.csv).
### Using different query types
Let's set up the embeddings and the dataset first. We'll use LanceDB's `huggingface` embeddings integration for this guide.
```python
import lancedb
import pandas as pd
from lancedb.embeddings import get_registry
from lancedb.pydantic import Vector, LanceModel

db = lancedb.connect("~/lancedb/query_types")
df = pd.read_csv("data_qa.csv")
embed_fcn = get_registry().get("huggingface").create(name="BAAI/bge-small-en-v1.5")

class Schema(LanceModel):
    context: str = embed_fcn.SourceField()
    vector: Vector(embed_fcn.ndims()) = embed_fcn.VectorField()

table = db.create_table("qa", schema=Schema)
table.add(df[["context"]].to_dict(orient="records"))
queries = df["query"].tolist()
```
Now that we have the dataset and embeddings table set up, here's how you can run different query types on the dataset.
* <b> Vector Search: </b>
```python
table.search(queries[0], query_type="vector").limit(5).to_pandas()
```
By default, LanceDB uses the vector query type, and when you use the embedding API it automatically converts the input query to a vector before searching. So the following statement is equivalent to the one above.
```python
table.search(queries[0]).limit(5).to_pandas()
```
Vector or semantic search is useful when you want to find documents that are similar to the query in terms of meaning.
---
* <b> Full-text Search: </b>
FTS requires creating an index on the column you want to search on. `replace=True` will replace the existing index if it exists.
Once the index is created, you can search using the `fts` query type.
```python
table.create_fts_index("context", replace=True)
table.search(queries[0], query_type="fts").limit(5).to_pandas()
```
Full-text search is useful when you want to find documents that contain the query terms.
---
* <b> Hybrid Search: </b>
Hybrid search is a combination of vector and full-text search. Here's how you can run a hybrid search query on the dataset.
```python
table.search(queries[0], query_type="hybrid").limit(5).to_pandas()
```
Hybrid search requires a reranker to combine and rank the results from vector and full-text search. We'll cover reranking as a concept in the next section.
Hybrid search is useful when you want to combine the benefits of both vector and full-text search.
!!! note "Note"
By default, it uses `LinearCombinationReranker` that combines the scores from vector and full-text search using a weighted linear combination. It is the simplest reranker implementation available in LanceDB. You can also use other rerankers like `CrossEncoderReranker` or `CohereReranker` for reranking the results.
Learn more about rerankers [here](https://lancedb.github.io/lancedb/reranking/).
### Hit rate evaluation results
Now that we have seen how to run different query types on the dataset, let's evaluate the hit-rate of each query type on the dataset.
For brevity, the entire evaluation script is not shown here. You can find the complete evaluation and benchmarking utility scripts [here](https://github.com/lancedb/ragged).
Here are the hit-rate results for the dataset:
| Query Type | Hit-rate@5 |
| --- | --- |
| Vector Search | 0.640 |
| Full-text Search | 0.595 |
| Hybrid Search (w/ LinearCombinationReranker) | 0.645 |
**Choosing a query type** is very specific to the use case and the data. This synthetic dataset was generated to be semantically challenging, i.e., the queries don't share many keywords with the contexts, so vector search performs better than full-text search. In real-world scenarios, full-text search might outperform vector search. Hybrid search is a good choice when you want to combine the benefits of both.
### Evaluation results on other datasets
Hit-rate results vary with the dataset and the query type. Here are the results for two other datasets using the same embedding function.
* <b> SQuAD Dataset: </b>
| Query Type | Hit-rate@5 |
| --- | --- |
| Vector Search | 0.822 |
| Full-text Search | 0.835 |
| Hybrid Search (w/ LinearCombinationReranker) | 0.8874 |
* <b> Uber10K SEC filing Dataset: </b>
| Query Type | Hit-rate@5 |
| --- | --- |
| Vector Search | 0.608 |
| Full-text Search | 0.82 |
| Hybrid Search (w/ LinearCombinationReranker) | 0.80 |
On these standard datasets, FTS performs much better than vector search because the queries share many keywords with the contexts. In general, choosing the query type is very specific to the use case and the data.

View File

@@ -0,0 +1,78 @@
Continuing from the previous example, we can now rerank the results using more complex rerankers.
## Reranking search results
You can rerank any search results using a reranker. The syntax for reranking is as follows:
```python
from lancedb.rerankers import LinearCombinationReranker
reranker = LinearCombinationReranker()
table.search(queries[0], query_type="hybrid").rerank(reranker=reranker).limit(5).to_pandas()
```
Based on the `query_type`, the `rerank()` function can accept other arguments as well. For example, hybrid search accepts a `normalize` param to determine the score normalization method.
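For instance (a sketch, assuming `normalize` accepts `"score"` for min-max score scaling or `"rank"` for rank-based fusion):

```python
table.search(queries[0], query_type="hybrid").rerank(
    reranker=reranker, normalize="score"
).limit(5).to_pandas()
```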
!!! note "Note"
LanceDB provides a `Reranker` base class that can be extended to implement custom rerankers. Each reranker must implement the `rerank_hybrid` method. `rerank_vector` and `rerank_fts` methods are optional. For example, the `LinearCombinationReranker` only implements the `rerank_hybrid` method and so it can only be used for reranking hybrid search results.
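As an illustration, a hypothetical minimal reranker (the `rerank_hybrid` signature follows the pattern described above; a real implementation should deduplicate matches and emit a combined relevance score):

```python
import pyarrow as pa
from lancedb.rerankers import Reranker

class NaiveReranker(Reranker):
    """Toy reranker: vector hits first, then FTS hits, with no real scoring."""

    def rerank_hybrid(self, query: str, vector_results: pa.Table, fts_results: pa.Table) -> pa.Table:
        # Assumption: both result sets arrive as Arrow tables with the
        # same schema; we simply concatenate them in order.
        return pa.concat_tables([vector_results, fts_results], promote=True)
```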
## Choosing a Reranker
There are many rerankers available in LanceDB like `CrossEncoderReranker`, `CohereReranker`, and `ColBERT`. The choice of reranker depends on the dataset and the application. You can even implement your own custom reranker by extending the `Reranker` class. For more details about each available reranker and performance comparison, refer to the [rerankers](https://lancedb.github.io/lancedb/reranking/) documentation.
In this example, we'll use the `CohereReranker` to rerank the search results. It requires `cohere` to be installed and `COHERE_API_KEY` to be set in the environment. To get your API key, sign up on [Cohere](https://cohere.ai/).
```python
from lancedb.rerankers import CohereReranker
# use Cohere reranker v3
reranker = CohereReranker(model_name="rerank-english-v3.0") # default model is "rerank-english-v2.0"
```
### Reranking search results
Now we can rerank the results of every query type using the `CohereReranker`:
```python
# rerank hybrid search results
table.search(queries[0], query_type="hybrid").rerank(reranker=reranker).limit(5).to_pandas()

# rerank vector search results
table.search(queries[0], query_type="vector").rerank(reranker=reranker).limit(5).to_pandas()

# rerank fts search results
table.search(queries[0], query_type="fts").rerank(reranker=reranker).limit(5).to_pandas()
```
Each reranker can accept additional arguments. For example, `CohereReranker` accepts `top_k` and `batch_size` params to control the number of documents to rerank and the batch size, respectively. Similarly, a custom reranker can accept any number of arguments; for example, one might accept a `filter` that applies custom logic to filter out documents before reranking.
## Results
Let's look at the same datasets from the previous section, using the same embedding table but with the Cohere reranker applied to all query types.
!!! note "Note"
When reranking fts or vector search results, the search results are over-fetched by a factor of 2 and then reranked; from the reranked set, the top `top_k` (5 in this case) results are taken. This is done because reranking would have no effect on the hit-rate if we only fetched the top `top_k` results.
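In code, that procedure is roughly (a sketch; `k` and the over-fetch factor follow the note above):

```python
k = 5
reranked = (
    table.search(queries[0], query_type="vector")
    .limit(k * 2)  # over-fetch by a factor of 2
    .rerank(reranker=reranker)
    .to_pandas()
    .head(k)  # keep only the top_k after reranking
)
```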
### Synthetic Llama 2 paper dataset
| Query Type | Hit-rate@5 |
| --- | --- |
| Vector | 0.640 |
| FTS | 0.595 |
| Reranked vector | 0.677 |
| Reranked fts | 0.672 |
| Hybrid | 0.759 |
### SQuAD Dataset
### Uber10K SEC filing Dataset
| Query Type | Hit-rate@5 |
| --- | --- |
| Vector | 0.608 |
| FTS | 0.824 |
| Reranked vector | 0.671 |
| Reranked fts | 0.843 |
| Hybrid | 0.849 |

View File

@@ -5,7 +5,9 @@ Hybrid Search is a broad (often misused) term. It can mean anything from combini
## The challenge of (re)ranking search results
Once you have the most relevant search results from multiple sources, you'll likely want to standardize their scores and rank them accordingly. This process can be seen as another independent step: reranking.
There are two approaches for reranking search results from multiple sources.
* <b>Score-based</b>: calculates a final relevance score as a weighted linear combination of the individual search algorithms' scores. Example: a weighted linear combination of semantic search and keyword-based search scores (see the sketch after this list).
* <b>Relevance-based</b>: discards the existing scores and calculates the relevance of each search result-query pair. Example: cross-encoder models.
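A toy sketch of the score-based approach (the 0.7 weight is illustrative):

```python
def combined_score(vector_score: float, fts_score: float, weight: float = 0.7) -> float:
    # weighted linear combination of two normalized per-source scores
    return weight * vector_score + (1.0 - weight) * fts_score
```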
Even though there are many strategies for reranking search results, none works for all cases, and evaluating them is itself a challenge. Reranking can also be dataset- and application-specific, so it is hard to generalize.

View File

@@ -0,0 +1,139 @@
# Llama-Index
![Illustration](../assets/llama-index.jpg)
## Quick start
You need to install the integration via `pip install llama-index-vector-stores-lancedb` in order to use it. You can run the script below to try it out:
```python
import logging
import sys
# Uncomment to see debug logs
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import SimpleDirectoryReader, Document, StorageContext
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.lancedb import LanceDBVectorStore
import textwrap
import openai
openai.api_key = "sk-..."
documents = SimpleDirectoryReader("./data/your-data-dir/").load_data()
print("Document ID:", documents[0].doc_id, "Document Hash:", documents[0].hash)
## For LanceDB cloud :
# vector_store = LanceDBVectorStore(
# uri="db://db_name", # your remote DB URI
# api_key="sk_..", # lancedb cloud api key
# region="your-region" # the region you configured
# ...
# )
vector_store = LanceDBVectorStore(
uri="./lancedb", mode="overwrite", query_type="vector"
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
lance_filter = "metadata.file_name = 'paul_graham_essay.txt' "
retriever = index.as_retriever(vector_store_kwargs={"where": lance_filter})
response = retriever.retrieve("What did the author do growing up?")
```
### Filtering
For metadata filtering, you can use a Lance SQL-like string filter, as demonstrated in the example above. Alternatively, you can filter using the `MetadataFilters` class from LlamaIndex:
```python
from llama_index.core.vector_stores import (
MetadataFilters,
FilterOperator,
FilterCondition,
MetadataFilter,
)
query_filters = MetadataFilters(
filters=[
MetadataFilter(
key="creation_date", operator=FilterOperator.EQ, value="2024-05-23"
),
MetadataFilter(
key="file_size", value=75040, operator=FilterOperator.GT
),
],
condition=FilterCondition.AND,
)
```
### Hybrid Search
For complete documentation, refer [here](https://lancedb.github.io/lancedb/hybrid_search/hybrid_search/). This example uses the `colbert` reranker; make sure to install the necessary dependencies for the reranker you choose.
```python
from lancedb.rerankers import ColbertReranker
reranker = ColbertReranker()
vector_store._add_reranker(reranker)
query_engine = index.as_query_engine(
filters=query_filters,
vector_store_kwargs={
"query_type": "hybrid",
}
)
response = query_engine.query("How much did Viaweb charge per month?")
```
In the snippet above, you can change or specify the `query_type` again when creating the engine or retriever.
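For example (a sketch, assuming you want this retriever to fall back to plain vector search):

```python
retriever = index.as_retriever(vector_store_kwargs={"query_type": "vector"})
```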
## API reference
The exhaustive list of parameters for the `LanceDBVectorStore` vector store is:
- `connection`: Optional, `lancedb.db.LanceDBConnection` connection object to use. If not provided, a new connection will be created.
- `uri`: Optional[str], the uri of your database. Defaults to `"/tmp/lancedb"`.
- `table_name` : Optional[str], Name of your table in the database. Defaults to `"vectors"`.
- `table`: Optional[Any], `lancedb.db.LanceTable` object to be passed. Defaults to `None`.
- `vector_column_name`: Optional[Any], Column name to use for vectors in the table. Defaults to `'vector'`.
- `doc_id_key`: Optional[str], Column name to use for document IDs in the table. Defaults to `'doc_id'`.
- `text_key`: Optional[str], Column name to use for text in the table. Defaults to `'text'`.
- `api_key`: Optional[str], API key to use for LanceDB cloud database. Defaults to `None`.
- `region`: Optional[str], Region to use for LanceDB cloud database. Only for LanceDB Cloud, defaults to `None`.
- `nprobes` : Optional[int], Set the number of probes to use. Only applicable if an ANN index has been created on the table; otherwise it is ignored. Defaults to `20`.
- `refine_factor` : Optional[int], Refine the results by reading extra elements and re-ranking them in memory. Defaults to `None`.
- `reranker`: Optional[Any], The reranker to use for LanceDB.
Defaults to `None`.
- `overfetch_factor`: Optional[int], The factor by which to fetch more results.
Defaults to `1`.
- `mode`: Optional[str], The mode to use for LanceDB.
Defaults to `"overwrite"`.
- `query_type`: Optional[str], The type of query to use for LanceDB.
Defaults to `"vector"`.
### Methods
- __from_table(cls, table: lancedb.db.LanceTable) -> `LanceDBVectorStore`__ : (class method) Creates an instance from a LanceDB table.
- **_add_reranker(self, reranker: lancedb.rerankers.Reranker) -> `None`** : Add a reranker to an existing vector store.
- Usage :
```python
from lancedb.rerankers import ColbertReranker
reranker = ColbertReranker()
vector_store._add_reranker(reranker)
```
- **_table_exists(self, tbl_name: `Optional[str]` = `None`) -> `bool`** : Returns `True` if `tbl_name` exists in database.
- __create_index(
self, scalar: `Optional[bool]` = False, col_name: `Optional[str]` = None, num_partitions: `Optional[int]` = 256, num_sub_vectors: `Optional[int]` = 96, index_cache_size: `Optional[int]` = None, metric: `Optional[str]` = "L2",
) -> `None`__ : Creates a scalar index (for non-vector columns) or a vector index on a table.
Make sure your vector column has enough data before creating an index on it.
- __add(self, nodes: `List[BaseNode]`, **add_kwargs: `Any`, ) -> `List[str]`__ :
Adds nodes to the table.
- **delete(self, ref_doc_id: `str`) -> `None`**: Deletes nodes with the given `ref_doc_id`.
- **delete_nodes(self, node_ids: `List[str]`) -> `None`** : Deletes nodes with the given `node_ids`.
- __query(
self,
query: `VectorStoreQuery`,
**kwargs: `Any`,
) -> `VectorStoreQueryResult`__:
Queries the index (`VectorStoreIndex`) for the top-k most similar nodes. Accepts a LlamaIndex `VectorStoreQuery` object.

View File

@@ -7,8 +7,7 @@ excluded_globs = [
"../src/fts.md",
"../src/embedding.md",
"../src/examples/*.md",
"../src/integrations/voxel51.md",
"../src/integrations/langchain.md",
"../src/integrations/*.md",
"../src/guides/tables.md",
"../src/python/duckdb.md",
"../src/embeddings/*.md",
@@ -17,6 +16,7 @@ excluded_globs = [
"../src/basic.md",
"../src/hybrid_search/hybrid_search.md",
"../src/reranking/*.md",
"../src/guides/tuning_retrievers/*.md",
]
python_prefix = "py"

View File

@@ -175,8 +175,8 @@ impl JNIEnvExt for JNIEnv<'_> {
if obj.is_null() {
return Ok(None);
}
-let is_empty = self.call_method(obj, "isEmpty", "()Z", &[])?;
-if is_empty.z()? {
+let is_present = self.call_method(obj, "isPresent", "()Z", &[])?;
+if !is_present.z()? {
// TODO(lu): put get java object into here cuz can only get java Object
Ok(None)
} else {

View File

@@ -1,12 +1,12 @@
{
"name": "vectordb",
"version": "0.5.1",
"version": "0.5.2",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "vectordb",
"version": "0.5.1",
"version": "0.5.2",
"cpu": [
"x64",
"arm64"

View File

@@ -1,6 +1,6 @@
{
"name": "vectordb",
"version": "0.5.1",
"version": "0.5.2",
"description": " Serverless, low-latency vector database for AI applications",
"main": "dist/index.js",
"types": "dist/index.d.ts",

View File

@@ -695,18 +695,26 @@ export interface MergeInsertArgs {
whenNotMatchedBySourceDelete?: string | boolean
}
export enum IndexStatus {
Pending = "pending",
Indexing = "indexing",
Done = "done",
Failed = "failed"
}
export interface VectorIndex {
columns: string[]
name: string
uuid: string
status: IndexStatus
}
export interface IndexStats {
numIndexedRows: number | null
numUnindexedRows: number | null
-index_type: string | null
-distance_type: string | null
-completed_at: string | null
+indexType: string | null
+distanceType: string | null
+completedAt: string | null
}
/**

View File

@@ -522,9 +522,9 @@ export class RemoteTable<T = number[]> implements Table<T> {
return {
numIndexedRows: body?.num_indexed_rows,
numUnindexedRows: body?.num_unindexed_rows,
-index_type: body?.index_type,
-distance_type: body?.distance_type,
-completed_at: body?.completed_at
+indexType: body?.index_type,
+distanceType: body?.distance_type,
+completedAt: body?.completed_at
}
}

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-arm64",
"version": "0.5.1",
"version": "0.5.2",
"os": ["darwin"],
"cpu": ["arm64"],
"main": "lancedb.darwin-arm64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-darwin-x64",
"version": "0.5.1",
"version": "0.5.2",
"os": ["darwin"],
"cpu": ["x64"],
"main": "lancedb.darwin-x64.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-arm64-gnu",
"version": "0.5.1",
"version": "0.5.2",
"os": ["linux"],
"cpu": ["arm64"],
"main": "lancedb.linux-arm64-gnu.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-linux-x64-gnu",
"version": "0.5.1",
"version": "0.5.2",
"os": ["linux"],
"cpu": ["x64"],
"main": "lancedb.linux-x64-gnu.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb-win32-x64-msvc",
"version": "0.5.1",
"version": "0.5.2",
"os": ["win32"],
"cpu": ["x64"],
"main": "lancedb.win32-x64-msvc.node",

View File

@@ -1,6 +1,6 @@
{
"name": "@lancedb/lancedb",
"version": "0.5.1",
"version": "0.5.2",
"main": "dist/index.js",
"exports": {
".": "./dist/index.js",

View File

@@ -0,0 +1,27 @@
import lancedb

# --8<-- [start:imports]
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry

# --8<-- [end:imports]
import pytest


@pytest.mark.slow
def test_embeddings_openai():
    # --8<-- [start:openai_embeddings]
    db = lancedb.connect("/tmp/db")
    func = get_registry().get("openai").create(name="text-embedding-ada-002")

    class Words(LanceModel):
        text: str = func.SourceField()
        vector: Vector(func.ndims()) = func.VectorField()

    table = db.create_table("words", schema=Words, mode="overwrite")
    table.add([{"text": "hello world"}, {"text": "goodbye world"}])

    query = "greetings"
    actual = table.search(query).limit(1).to_pydantic(Words)[0]
    print(actual.text)
    # --8<-- [end:openai_embeddings]

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb-node"
version = "0.5.1"
version = "0.5.2"
description = "Serverless, low-latency vector database for AI applications"
license.workspace = true
edition.workspace = true

View File

@@ -1,6 +1,6 @@
[package]
name = "lancedb"
version = "0.5.1"
version = "0.5.2"
edition.workspace = true
description = "LanceDB: A serverless, low-latency vector database for AI applications"
license.workspace = true