addresses #797
Problem: tantivy does not expose an option to explicitly search for a phrase.
Proposed solution here:
1. Add a `.phrase_query()` option
2. Under the hood, LanceDB takes care of wrapping the input in quotes
and replacing nested double quotes with single quotes
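A minimal sketch of what that wrapping might look like (the helper name and exact escaping here are illustrative, not the actual implementation):

```python
def as_phrase_query(query: str) -> str:
    # Replace embedded double quotes with single quotes so the outer
    # quoting stays balanced, then wrap the whole string so tantivy
    # parses it as a single phrase.
    return '"' + query.replace('"', "'") + '"'

assert as_phrase_query('old movie "top gun"') == '"old movie \'top gun\'"'
```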
I've also filed an upstream issue; if they support phrase queries
natively, we can get rid of our custom processing here.
Closes #773
We pass an empty table over IPC so we don't need to manually deal with
serde. Then we just return the schema attribute from the empty table.
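For reference, a minimal pyarrow sketch of the idea (the schema and field names are illustrative):

```python
import io

import pyarrow as pa
import pyarrow.ipc as ipc

schema = pa.schema([("id", pa.int64()), ("vector", pa.list_(pa.float32()))])
empty = pa.Table.from_batches([], schema=schema)

# Writer side: stream the empty table; the schema travels with it.
sink = io.BytesIO()
with ipc.new_stream(sink, empty.schema) as writer:
    writer.write_table(empty)

# Reader side: no hand-written serde, just read the schema attribute.
received = ipc.open_stream(sink.getvalue()).read_all()
assert received.schema == schema
```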
---------
Co-authored-by: albertlockett <albert.lockett@gmail.com>
By default, tantivy-py uses a 128MB heap size. We change the default to 1GB
and allow the user to customize it.
Locally, this makes `test_fts.py` run 10x faster.
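For context, the knob in question is tantivy-py's writer heap (the LanceDB-side parameter for exposing it isn't shown here); roughly:

```python
import tantivy

# Minimal schema/index just to show where the heap size applies.
schema_builder = tantivy.SchemaBuilder()
schema_builder.add_text_field("text", stored=True)
index = tantivy.Index(schema_builder.build())

# tantivy-py's writer defaults to a 128MB heap; a bigger heap means
# fewer segment flushes to disk while indexing. 1GB is the new default.
writer = index.writer(heap_size=1024 * 1024 * 1024)
```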
If the input text is None, Tantivy raises an error
complaining it cannot add a NoneType. We handle this
upstream so `None`s are not added to the document.
If all of the indexed fields are None, we skip the
document entirely.
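A sketch of the indexing loop's behavior (the helper and field names are illustrative):

```python
import tantivy

def add_row(writer, row_id: int, fields: dict) -> None:
    # Drop None values before building the document.
    non_null = {name: text for name, text in fields.items() if text is not None}
    if not non_null:
        # Every indexed field was None: skip the document entirely.
        return
    doc = tantivy.Document(doc_id=row_id)
    for name, text in non_null.items():
        doc.add_text(name, text)
    writer.add_document(doc)
```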
I found it quite inconvenient to have to read through the
documentation and search for which submodule each class
should be imported from.
For example, it is cumbersome to have to navigate to another
documentation page to find out that `EmbeddingFunctionRegistry` comes from
`lancedb.embeddings`.
If you add timezone information in the `Field` annotation for a datetime,
it will now be passed to the pyarrow data type.
I'm not sure how pyarrow enforces timezones; right now, it silently
coerces to the timezone given in the column regardless of whether the
input had a matching timezone or not. This is probably not the right
behavior. Alternatively, we could make the user do this validation in
the pydantic model instead of at the pyarrow conversion layer.
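For reference, a small pyarrow snippet showing the silent coercion in question (values are illustrative):

```python
import pyarrow as pa

# A timezone-aware timestamp type: the tz lives on the column type.
aware = pa.timestamp("us", tz="America/New_York")

# A naive timestamp array casts to the aware type without complaint;
# pyarrow just reinterprets the stored values rather than validating
# that the input actually carried a matching timezone.
naive_arr = pa.array([1_700_000_000_000_000], type=pa.timestamp("us"))
coerced = naive_arr.cast(aware)
print(coerced.type)  # timestamp[us, tz=America/New_York]
```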
Closes #721
FTS will return results as a pyarrow table. Pyarrow tables have a
`filter` method, but it does not take SQL filter strings (only pyarrow
compute expressions). Instead, we do one of two things to support
`tbl.search("keywords").where("foo=5").limit(10).to_arrow()`:
Default path: if duckdb is available, use duckdb to execute the SQL
filter string on the pyarrow table.
Backup path: otherwise, write the pyarrow table to a lance dataset and
then call `to_table(filter=<filter>)`.
Neither is ideal.
The default path has two issues:
1. it requires installing an extra library (duckdb)
2. duckdb mangles some fields (e.g. fixed size list => list)
The backup path incurs a latency penalty (~20ms on SSD) to write the
result set to disk.
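For concreteness, a sketch of the two paths (the helper name is illustrative; error handling omitted):

```python
import pyarrow as pa

def post_filter(results: pa.Table, predicate: str, tmp_path: str) -> pa.Table:
    try:
        import duckdb

        # Default path: duckdb finds `results` via its replacement scan
        # and evaluates the SQL predicate directly against the table.
        return duckdb.query(f"SELECT * FROM results WHERE {predicate}").to_arrow_table()
    except ImportError:
        import lance

        # Backup path: round-trip through a lance dataset so we can
        # reuse its SQL filter support (~20ms write penalty on SSD).
        ds = lance.write_dataset(results, tmp_path)
        return ds.to_table(filter=predicate)
```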
In the short term, once #676 is addressed, we can write the dataset to
`memory://` instead of disk, which makes the post-filter evaluate much
more quickly (ETA next week).
In the longer term, we'd like to be able to evaluate the filter string
on the pyarrow Table directly. One possibility is using Substrait to
generate pyarrow compute expressions from the SQL string; or, if
there's enough progress in pyarrow, it could support Substrait
expressions directly (no ETA).
---------
Co-authored-by: Will Jones <willjones127@gmail.com>
For object detection, each row may correspond to an image, and each image
can have multiple bounding boxes of x-y coordinates. This means that a
`bbox` field is potentially a "list of list of float". This adds support
for nested lists in our pydantic-pyarrow conversion.
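A sketch of what this enables (assuming `pydantic_to_schema` from `lancedb.pydantic`; the model is illustrative):

```python
from typing import List

import pyarrow as pa
from pydantic import BaseModel

from lancedb.pydantic import pydantic_to_schema

class Detection(BaseModel):
    image_uri: str
    # One row per image; each box is e.g. [xmin, ymin, xmax, ymax].
    bbox: List[List[float]]

schema = pydantic_to_schema(Detection)
# The nested annotation maps to a nested arrow list type.
assert schema.field("bbox").type == pa.list_(pa.list_(pa.float64()))
```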
Use pathlib for local paths so that pathlib
handles the correct separator on Windows.
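Roughly the pattern (paths are illustrative; `PureWindowsPath` is used only to demonstrate the Windows behavior portably):

```python
from pathlib import Path, PureWindowsPath

# pathlib joins with the correct separator for the platform; naive
# string concatenation with "/" would not.
table_path = PureWindowsPath("C:\\data\\lancedb") / "my_table.lance"
print(table_path)  # C:\data\lancedb\my_table.lance

# In the actual code path, local URIs become Path objects, e.g.:
local = Path("data") / "lancedb" / "my_table.lance"
```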
Closes #703
---------
Co-authored-by: Will Jones <willjones127@gmail.com>
This PR adds issue templates, which help with two recurring issues:
* Users forget to tell us whether they are using the Node or Python SDK
* Issues don't get appropriate tags
This doesn't force the use of the templates. Because we set
`blank_issues_enabled: true`, users can still create a blank issue.
This forces the user to replace the whole FTS directory when re-creating
the index, preventing duplicate data from being created. Previously, the
whole dataset was re-added to the existing index, duplicating existing
rows in the index.
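Conceptually (names are illustrative; the real change lives in the index creation path):

```python
import shutil
from pathlib import Path

def recreate_fts_index(index_dir: Path, build_index) -> None:
    # Drop the old index wholesale so re-indexing starts from scratch
    # instead of appending the dataset to the existing index again.
    if index_dir.exists():
        shutil.rmtree(index_dir)
    index_dir.mkdir(parents=True)
    build_index(index_dir)
```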
This (in combination with lancedb/lance#1707) caused #726, since the
duplicate data emitted duplicate indices for `take()` and an upstream
issue caused those queries to fail.
This solution isn't ideal, since it makes the FTS index temporarily
unavailable while the index is built. In the future, we should have
multiple FTS index directories, which would allow atomic commits of new
indexes (as well as multiple indexes for different columns).
Fixes #498.
Fixes #726.
---------
Co-authored-by: Chang She <759245+changhiskhan@users.noreply.github.com>