Compare commits


12 Commits

Author SHA1 Message Date
Ruihang Xia
3556eb4476 chore: add tests to comment column on information_schema (#7514)
* feat: show comment on information_schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add to information schema for columns, add sqlness tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplications

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2026-01-04 09:05:50 +00:00
Weny Xu
9343da7fe8 feat(meta-srv): fallback to non-TLS connection when etcd TLS prefer mode fails (#7507)
* feat(meta-srv): fallback to non-TLS connection when etcd TLS prefer mode fails

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore(ci): set timeout for deploy cluster

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: simplify etcd TLS prefer mode handling

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-31 10:03:34 +00:00
Alan Tang
8a07dbf605 fix: fix sqlness test error about double precision (#7476)
* fix: fix sqlness test error about double precision

Signed-off-by: StandingMan <jmtangcs@gmail.com>

* fix: use round method to truncate the result

Signed-off-by: StandingMan <jmtangcs@gmail.com>

---------

Signed-off-by: StandingMan <jmtangcs@gmail.com>
2025-12-31 04:55:22 +00:00
Weny Xu
83932c8c9e fix: align backend_tls default value with example config (#7496)
* fix: align backend_tls default value with example config

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Update src/common/meta/src/kv_backend/rds/postgres.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2025-12-31 03:31:08 +00:00
LFC
dc9fc582a0 feat: impl json_get_int for new json type (#7495)
Update src/common/function/src/scalars/json/json_get.rs



impl `json_get_int` for new json type

Signed-off-by: luofucong <luofc@foxmail.com>
2025-12-30 09:42:16 +00:00
Weny Xu
b1d81913f5 feat: update ApplyStagingManifestRequest to fetch manifest from central region (#7493)
* feat: update ApplyStagingManifestRequest to fetch manifest from central region

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: refine comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor(mito2): rename `StagingDataStorage` to `StagingBlobStorage`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-30 07:29:56 +00:00
Yingwen
554f3943b6 ci: update breaking change title level (#7497)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-12-30 06:17:51 +00:00
dennis zhuang
e4b5ef275f feat: impl vector index building (#7468)
* feat: impl vector index building

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: supports flat format

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* ci: add vector_index feature to test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: apply suggestions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: apply suggestions from copilot

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-12-30 03:38:51 +00:00
Yingwen
f2a9d50071 ci: handle prerelease version (#7492)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-12-29 08:21:05 +00:00
LFC
0c54e70e1f feat: impl json_get_string with new json type (#7489)
* impl `json_get_string` with new json type

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-12-29 04:35:53 +00:00
Yingwen
b51f62c3c2 feat: bump version to beta.4 (#7490)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-12-29 04:20:00 +00:00
discord9
1ddc535b52 feat: repartition map kv (#7420)
* table partition key

Signed-off-by: discord9 <discord9@163.com>

* feat: table part key

Signed-off-by: discord9 <discord9@163.com>

* ut

Signed-off-by: discord9 <discord9@163.com>

* stuff

Signed-off-by: discord9 <discord9@163.com>

* feat: add Default trait to TablePartValue struct

Signed-off-by: discord9 <discord9@163.com>

* rename to Rep

Signed-off-by: discord9 <discord9@163.com>

* rename file

Signed-off-by: discord9 <discord9@163.com>

* more rename

Signed-off-by: discord9 <discord9@163.com>

* pcr

Signed-off-by: discord9 <discord9@163.com>

* test: update err msg

Signed-off-by: discord9 <discord9@163.com>

* feat: add TableRepartKey to TableMetadataManager

Signed-off-by: discord9 <discord9@163.com>

* feat: add TableRepartManager to TableMetadataManager

Signed-off-by: discord9 <discord9@163.com>

* docs: update

Signed-off-by: discord9 <discord9@163.com>

* c

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-12-29 02:45:35 +00:00
60 changed files with 4075 additions and 355 deletions

View File

@@ -70,19 +70,23 @@ runs:
--wait \
--wait-for-jobs
- name: Wait for GreptimeDB
shell: bash
run: |
while true; do
PHASE=$(kubectl -n my-greptimedb get gtc my-greptimedb -o jsonpath='{.status.clusterPhase}')
if [ "$PHASE" == "Running" ]; then
echo "Cluster is ready"
break
else
echo "Cluster is not ready yet: Current phase: $PHASE"
kubectl get pods -n my-greptimedb
sleep 5 # wait for 5 seconds before check again.
fi
done
uses: nick-fields/retry@v3
with:
timeout_minutes: 3
max_attempts: 1
shell: bash
command: |
while true; do
PHASE=$(kubectl -n my-greptimedb get gtc my-greptimedb -o jsonpath='{.status.clusterPhase}')
if [ "$PHASE" == "Running" ]; then
echo "Cluster is ready"
break
else
echo "Cluster is not ready yet: Current phase: $PHASE"
kubectl get pods -n my-greptimedb
sleep 5 # wait for 5 seconds before check again.
fi
done
- name: Print GreptimeDB info
if: always()
shell: bash

View File

@@ -755,7 +755,7 @@ jobs:
run: ../../.github/scripts/pull-test-deps-images.sh && docker compose up -d --wait
- name: Run nextest cases
run: cargo nextest run --workspace -F dashboard -F pg_kvbackend -F mysql_kvbackend
run: cargo nextest run --workspace -F dashboard -F pg_kvbackend -F mysql_kvbackend -F vector_index
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
RUST_BACKTRACE: 1
@@ -813,7 +813,7 @@ jobs:
run: ../../.github/scripts/pull-test-deps-images.sh && docker compose up -d --wait
- name: Run nextest cases
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F dashboard -F pg_kvbackend -F mysql_kvbackend
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F dashboard -F pg_kvbackend -F mysql_kvbackend -F vector_index
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
RUST_BACKTRACE: 1

Cargo.lock generated (159 changes)
View File

@@ -212,7 +212,7 @@ checksum = "d301b3b94cb4b2f23d7917810addbbaff90738e0ca2be692bd027e70d7e0330c"
[[package]]
name = "api"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"arrow-schema",
"common-base",
@@ -733,7 +733,7 @@ dependencies = [
[[package]]
name = "auth"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-trait",
@@ -1383,7 +1383,7 @@ dependencies = [
[[package]]
name = "cache"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"catalog",
"common-error",
@@ -1418,7 +1418,7 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]]
name = "catalog"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"arrow",
@@ -1763,7 +1763,7 @@ checksum = "b94f61472cee1439c0b966b47e3aca9ae07e45d070759512cd390ea2bebc6675"
[[package]]
name = "cli"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-stream",
"async-trait",
@@ -1817,7 +1817,7 @@ dependencies = [
[[package]]
name = "client"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"arc-swap",
@@ -1850,7 +1850,7 @@ dependencies = [
"snafu 0.8.6",
"store-api",
"substrait 0.37.3",
"substrait 1.0.0-beta.3",
"substrait 1.0.0-beta.4",
"tokio",
"tokio-stream",
"tonic 0.13.1",
@@ -1890,7 +1890,7 @@ dependencies = [
[[package]]
name = "cmd"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-trait",
"auth",
@@ -2024,7 +2024,7 @@ checksum = "55b672471b4e9f9e95499ea597ff64941a309b2cdbffcc46f2cc5e2d971fd335"
[[package]]
name = "common-base"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"anymap2",
"async-trait",
@@ -2048,14 +2048,14 @@ dependencies = [
[[package]]
name = "common-catalog"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"const_format",
]
[[package]]
name = "common-config"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"common-base",
"common-error",
@@ -2080,7 +2080,7 @@ dependencies = [
[[package]]
name = "common-datasource"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"arrow",
"arrow-schema",
@@ -2115,7 +2115,7 @@ dependencies = [
[[package]]
name = "common-decimal"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"bigdecimal 0.4.8",
"common-error",
@@ -2128,7 +2128,7 @@ dependencies = [
[[package]]
name = "common-error"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"common-macro",
"http 1.3.1",
@@ -2139,7 +2139,7 @@ dependencies = [
[[package]]
name = "common-event-recorder"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-trait",
@@ -2161,7 +2161,7 @@ dependencies = [
[[package]]
name = "common-frontend"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-trait",
@@ -2183,13 +2183,14 @@ dependencies = [
[[package]]
name = "common-function"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"ahash 0.8.12",
"api",
"approx 0.5.1",
"arc-swap",
"arrow",
"arrow-cast",
"arrow-schema",
"async-trait",
"bincode",
@@ -2220,6 +2221,7 @@ dependencies = [
"h3o",
"hyperloglogplus",
"jsonb",
"jsonpath-rust 0.7.5",
"memchr",
"mito-codec",
"nalgebra",
@@ -2243,7 +2245,7 @@ dependencies = [
[[package]]
name = "common-greptimedb-telemetry"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-trait",
"common-runtime",
@@ -2260,7 +2262,7 @@ dependencies = [
[[package]]
name = "common-grpc"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"arrow-flight",
@@ -2295,7 +2297,7 @@ dependencies = [
[[package]]
name = "common-grpc-expr"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"common-base",
@@ -2315,7 +2317,7 @@ dependencies = [
[[package]]
name = "common-macro"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"greptime-proto",
"once_cell",
@@ -2326,7 +2328,7 @@ dependencies = [
[[package]]
name = "common-mem-prof"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"anyhow",
"common-error",
@@ -2342,7 +2344,7 @@ dependencies = [
[[package]]
name = "common-memory-manager"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"common-error",
"common-macro",
@@ -2355,7 +2357,7 @@ dependencies = [
[[package]]
name = "common-meta"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"anymap2",
"api",
@@ -2427,7 +2429,7 @@ dependencies = [
[[package]]
name = "common-options"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"common-grpc",
"humantime-serde",
@@ -2436,11 +2438,11 @@ dependencies = [
[[package]]
name = "common-plugins"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
[[package]]
name = "common-pprof"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"common-error",
"common-macro",
@@ -2452,7 +2454,7 @@ dependencies = [
[[package]]
name = "common-procedure"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-stream",
@@ -2481,7 +2483,7 @@ dependencies = [
[[package]]
name = "common-procedure-test"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-trait",
"common-procedure",
@@ -2491,7 +2493,7 @@ dependencies = [
[[package]]
name = "common-query"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-trait",
@@ -2517,7 +2519,7 @@ dependencies = [
[[package]]
name = "common-recordbatch"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"arc-swap",
"common-base",
@@ -2541,7 +2543,7 @@ dependencies = [
[[package]]
name = "common-runtime"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-trait",
"clap 4.5.40",
@@ -2570,7 +2572,7 @@ dependencies = [
[[package]]
name = "common-session"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"serde",
"strum 0.27.1",
@@ -2578,7 +2580,7 @@ dependencies = [
[[package]]
name = "common-sql"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"arrow-schema",
"common-base",
@@ -2598,7 +2600,7 @@ dependencies = [
[[package]]
name = "common-stat"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"common-base",
"common-runtime",
@@ -2613,7 +2615,7 @@ dependencies = [
[[package]]
name = "common-telemetry"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"backtrace",
"common-base",
@@ -2642,7 +2644,7 @@ dependencies = [
[[package]]
name = "common-test-util"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"client",
"common-grpc",
@@ -2655,7 +2657,7 @@ dependencies = [
[[package]]
name = "common-time"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"arrow",
"chrono",
@@ -2673,7 +2675,7 @@ dependencies = [
[[package]]
name = "common-version"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"build-data",
"cargo-manifest",
@@ -2684,7 +2686,7 @@ dependencies = [
[[package]]
name = "common-wal"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"common-base",
"common-error",
@@ -2707,7 +2709,7 @@ dependencies = [
[[package]]
name = "common-workload"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"common-telemetry",
"serde",
@@ -4015,7 +4017,7 @@ dependencies = [
[[package]]
name = "datanode"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"arrow-flight",
@@ -4079,7 +4081,7 @@ dependencies = [
[[package]]
name = "datatypes"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"arrow",
"arrow-array",
@@ -4754,7 +4756,7 @@ checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
[[package]]
name = "file-engine"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-trait",
@@ -4886,7 +4888,7 @@ checksum = "8bf7cc16383c4b8d58b9905a8509f02926ce3058053c056376248d958c9df1e8"
[[package]]
name = "flow"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"arrow",
@@ -4955,7 +4957,7 @@ dependencies = [
"sql",
"store-api",
"strum 0.27.1",
"substrait 1.0.0-beta.3",
"substrait 1.0.0-beta.4",
"table",
"tokio",
"tonic 0.13.1",
@@ -5016,7 +5018,7 @@ checksum = "28dd6caf6059519a65843af8fe2a3ae298b14b80179855aeb4adc2c1934ee619"
[[package]]
name = "frontend"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"arc-swap",
@@ -5464,7 +5466,7 @@ dependencies = [
[[package]]
name = "greptime-proto"
version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=520fa524f9d590752ea327683e82ffd65721b27c#520fa524f9d590752ea327683e82ffd65721b27c"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=a2e5099d72a1cfa8ba41fa4296101eb5f874074a#a2e5099d72a1cfa8ba41fa4296101eb5f874074a"
dependencies = [
"prost 0.13.5",
"prost-types 0.13.5",
@@ -6232,7 +6234,7 @@ dependencies = [
[[package]]
name = "index"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-trait",
"asynchronous-codec",
@@ -7173,7 +7175,7 @@ checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94"
[[package]]
name = "log-query"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"chrono",
"common-error",
@@ -7185,7 +7187,7 @@ dependencies = [
[[package]]
name = "log-store"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-stream",
"async-trait",
@@ -7486,7 +7488,7 @@ dependencies = [
[[package]]
name = "meta-client"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-trait",
@@ -7514,7 +7516,7 @@ dependencies = [
[[package]]
name = "meta-srv"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-trait",
@@ -7614,7 +7616,7 @@ dependencies = [
[[package]]
name = "metric-engine"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"aquamarine",
@@ -7712,7 +7714,7 @@ dependencies = [
[[package]]
name = "mito-codec"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"bytes",
@@ -7737,7 +7739,7 @@ dependencies = [
[[package]]
name = "mito2"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"aquamarine",
@@ -7777,7 +7779,6 @@ dependencies = [
"either",
"futures",
"greptime-proto",
"humantime",
"humantime-serde",
"index",
"itertools 0.14.0",
@@ -7796,6 +7797,7 @@ dependencies = [
"rand 0.9.1",
"rayon",
"regex",
"roaring",
"rskafka",
"rstest",
"rstest_reuse",
@@ -7814,6 +7816,7 @@ dependencies = [
"tokio-util",
"toml 0.8.23",
"tracing",
"usearch",
"uuid",
]
@@ -8477,7 +8480,7 @@ dependencies = [
[[package]]
name = "object-store"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"anyhow",
"bytes",
@@ -8762,7 +8765,7 @@ dependencies = [
[[package]]
name = "operator"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"ahash 0.8.12",
"api",
@@ -8822,7 +8825,7 @@ dependencies = [
"sql",
"sqlparser",
"store-api",
"substrait 1.0.0-beta.3",
"substrait 1.0.0-beta.4",
"table",
"tokio",
"tokio-util",
@@ -9108,7 +9111,7 @@ dependencies = [
[[package]]
name = "partition"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-trait",
@@ -9466,7 +9469,7 @@ checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]]
name = "pipeline"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"ahash 0.8.12",
"api",
@@ -9622,7 +9625,7 @@ dependencies = [
[[package]]
name = "plugins"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"auth",
"catalog",
@@ -9924,7 +9927,7 @@ dependencies = [
[[package]]
name = "promql"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"ahash 0.8.12",
"async-trait",
@@ -10207,7 +10210,7 @@ dependencies = [
[[package]]
name = "puffin"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-compression 0.4.19",
"async-trait",
@@ -10249,7 +10252,7 @@ dependencies = [
[[package]]
name = "query"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"ahash 0.8.12",
"api",
@@ -10316,7 +10319,7 @@ dependencies = [
"sql",
"sqlparser",
"store-api",
"substrait 1.0.0-beta.3",
"substrait 1.0.0-beta.4",
"table",
"tokio",
"tokio-stream",
@@ -11668,7 +11671,7 @@ dependencies = [
[[package]]
name = "servers"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"ahash 0.8.12",
"api",
@@ -11797,7 +11800,7 @@ dependencies = [
[[package]]
name = "session"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"ahash 0.8.12",
"api",
@@ -12141,7 +12144,7 @@ dependencies = [
[[package]]
name = "sql"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"arrow-buffer",
@@ -12201,7 +12204,7 @@ dependencies = [
[[package]]
name = "sqlness-runner"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-trait",
"clap 4.5.40",
@@ -12478,7 +12481,7 @@ dependencies = [
[[package]]
name = "standalone"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-trait",
"catalog",
@@ -12520,7 +12523,7 @@ checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
[[package]]
name = "store-api"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"aquamarine",
@@ -12733,7 +12736,7 @@ dependencies = [
[[package]]
name = "substrait"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"async-trait",
"bytes",
@@ -12856,7 +12859,7 @@ dependencies = [
[[package]]
name = "table"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"async-trait",
@@ -13125,7 +13128,7 @@ checksum = "8f50febec83f5ee1df3015341d8bd429f2d1cc62bcba7ea2076759d315084683"
[[package]]
name = "tests-fuzz"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"arbitrary",
"async-trait",
@@ -13169,7 +13172,7 @@ dependencies = [
[[package]]
name = "tests-integration"
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
dependencies = [
"api",
"arrow-flight",
@@ -13245,7 +13248,7 @@ dependencies = [
"sqlx",
"standalone",
"store-api",
"substrait 1.0.0-beta.3",
"substrait 1.0.0-beta.4",
"table",
"tempfile",
"time",

View File

@@ -75,7 +75,7 @@ members = [
resolver = "2"
[workspace.package]
version = "1.0.0-beta.3"
version = "1.0.0-beta.4"
edition = "2024"
license = "Apache-2.0"
@@ -103,6 +103,7 @@ aquamarine = "0.6"
arrow = { version = "56.2", features = ["prettyprint"] }
arrow-array = { version = "56.2", default-features = false, features = ["chrono-tz"] }
arrow-buffer = "56.2"
arrow-cast = "56.2"
arrow-flight = "56.2"
arrow-ipc = { version = "56.2", default-features = false, features = ["lz4", "zstd"] }
arrow-schema = { version = "56.2", features = ["serde"] }
@@ -150,7 +151,7 @@ etcd-client = { version = "0.16.1", features = [
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "520fa524f9d590752ea327683e82ffd65721b27c" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "a2e5099d72a1cfa8ba41fa4296101eb5f874074a" }
hex = "0.4"
http = "1"
humantime = "2.1"

View File

@@ -17,7 +17,7 @@ Release date: {{ timestamp | date(format="%B %d, %Y") }}
{%- set breakings = commits | filter(attribute="breaking", value=true) -%}
{%- if breakings | length > 0 %}
## Breaking changes
### Breaking changes
{% for commit in breakings %}
* {{ commit.github.pr_title }}\
{% if commit.github.username %} by \

View File

@@ -57,6 +57,20 @@ const REPO_CONFIGS: Record<string, RepoConfig> = {
return ['bump-nightly-version.yml', version];
}
// Check for prerelease versions (e.g., 1.0.0-beta.3, 1.0.0-rc.1)
const prereleaseMatch = version.match(/^(\d+)\.(\d+)\.(\d+)-(beta|rc)\.(\d+)$/);
if (prereleaseMatch) {
const [, major, minor, patch, prereleaseType, prereleaseNum] = prereleaseMatch;
// If it's beta.1 and patch version is 0, treat as major version
if (prereleaseType === 'beta' && prereleaseNum === '1' && patch === '0') {
return ['bump-version.yml', `${major}.${minor}`];
}
// Otherwise (beta.x where x > 1, or rc.x), treat as patch version
return ['bump-patch-version.yml', version];
}
const parts = version.split('.');
if (parts.length !== 3) {
throw new Error('Invalid version format');

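The prerelease branch added above maps a version string to a workflow file and an argument. As a rough Rust restatement of the same rule, using the regex crate (illustrative only: the function name and return shape are invented here, the real logic is the TypeScript shown above):

use regex::Regex;

// Mirrors the rule above: the first beta of an X.Y.0 line dispatches the
// major/minor bump workflow with "X.Y"; later betas and release candidates
// dispatch the patch bump workflow with the full version string.
// `classify_prerelease` is an invented name for this sketch.
fn classify_prerelease(version: &str) -> Option<(&'static str, String)> {
    let re = Regex::new(r"^(\d+)\.(\d+)\.(\d+)-(beta|rc)\.(\d+)$").unwrap();
    let caps = re.captures(version)?;
    if &caps[4] == "beta" && &caps[5] == "1" && &caps[3] == "0" {
        Some(("bump-version.yml", format!("{}.{}", &caps[1], &caps[2])))
    } else {
        Some(("bump-patch-version.yml", version.to_string()))
    }
}

fn main() {
    assert_eq!(
        classify_prerelease("1.0.0-beta.1"),
        Some(("bump-version.yml", "1.0".to_string()))
    );
    assert_eq!(
        classify_prerelease("1.0.0-beta.4"),
        Some(("bump-patch-version.yml", "1.0.0-beta.4".to_string()))
    );
    // Plain releases fall through to the existing `version.split('.')` path.
    assert_eq!(classify_prerelease("1.2.3"), None);
}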
View File

@@ -399,8 +399,8 @@ impl InformationSchemaColumnsBuilder {
self.is_nullables.push(Some("No"));
}
self.column_types.push(Some(&data_type));
self.column_comments
.push(column_schema.column_comment().map(|x| x.as_ref()));
let column_comment = column_schema.column_comment().map(|x| x.as_ref());
self.column_comments.push(column_comment);
}
fn finish(&mut self) -> Result<RecordBatch> {

View File

@@ -92,7 +92,7 @@ impl StoreConfig {
pub fn tls_config(&self) -> Option<TlsOption> {
if self.backend_tls_mode != TlsMode::Disable {
Some(TlsOption {
mode: self.backend_tls_mode.clone(),
mode: self.backend_tls_mode,
cert_path: self.backend_tls_cert_path.clone(),
key_path: self.backend_tls_key_path.clone(),
ca_cert_path: self.backend_tls_ca_cert_path.clone(),

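This hunk, like the two `TlsOption::new` hunks further down, drops a `.clone()` when reading the TLS mode out of `&self`. That only compiles if the mode type is `Copy`, so the `.clone()` was presumably redundant (a copy was happening anyway). The snippet below is a generic sketch of that pattern with invented type names, not code from the repository:

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Mode {
    Disable,
    Prefer,
    Require,
}

#[derive(Debug)]
struct Options {
    mode: Mode,
    cert_path: String,
}

struct Config {
    mode: Mode,
    cert_path: String,
}

impl Config {
    fn options(&self) -> Option<Options> {
        (self.mode != Mode::Disable).then(|| Options {
            // `Mode` is `Copy`: reading it through `&self` is a plain copy,
            // so no `.clone()` is needed. The owned `String` is still cloned.
            mode: self.mode,
            cert_path: self.cert_path.clone(),
        })
    }
}

fn main() {
    let cfg = Config {
        mode: Mode::Prefer,
        cert_path: "/etc/greptimedb/client.pem".to_string(),
    };
    println!("{:?}", cfg.options());
    let _ = Mode::Require;
}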
View File

@@ -18,6 +18,7 @@ default = [
]
enterprise = ["common-meta/enterprise", "frontend/enterprise", "meta-srv/enterprise"]
tokio-console = ["common-telemetry/tokio-console"]
vector_index = ["mito2/vector_index"]
[lints]
workspace = true

View File

@@ -233,6 +233,8 @@ impl ObjbenchCommand {
inverted_index_config: MitoConfig::default().inverted_index,
fulltext_index_config,
bloom_filter_index_config: MitoConfig::default().bloom_filter_index,
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
};
// Write SST

View File

@@ -236,7 +236,7 @@ impl StartCommand {
};
let tls_opts = TlsOption::new(
self.tls_mode.clone(),
self.tls_mode,
self.tls_cert_path.clone(),
self.tls_key_path.clone(),
self.tls_watch,

View File

@@ -261,7 +261,7 @@ impl StartCommand {
};
let tls_opts = TlsOption::new(
self.tls_mode.clone(),
self.tls_mode,
self.tls_cert_path.clone(),
self.tls_key_path.clone(),
self.tls_watch,

View File

@@ -17,6 +17,7 @@ ahash.workspace = true
api.workspace = true
arc-swap = "1.0"
arrow.workspace = true
arrow-cast.workspace = true
arrow-schema.workspace = true
async-trait.workspace = true
bincode = "=1.3.3"
@@ -46,6 +47,7 @@ geohash = { version = "0.13", optional = true }
h3o = { version = "0.6", optional = true }
hyperloglogplus = "0.4"
jsonb.workspace = true
jsonpath-rust = "0.7.5"
memchr = "2.7"
mito-codec.workspace = true
nalgebra.workspace = true

View File

@@ -13,17 +13,24 @@
// limitations under the License.
use std::fmt::{self, Display};
use std::str::FromStr;
use std::sync::Arc;
use arrow::array::{ArrayRef, BinaryViewArray, StringViewArray, StructArray};
use arrow::compute;
use datafusion_common::DataFusionError;
use arrow::datatypes::{Float64Type, Int64Type, UInt64Type};
use datafusion_common::arrow::array::{
Array, AsArray, BinaryViewBuilder, BooleanBuilder, Float64Builder, Int64Builder,
StringViewBuilder,
};
use datafusion_common::arrow::datatypes::DataType;
use datafusion_common::{DataFusionError, Result};
use datafusion_expr::type_coercion::aggregates::STRINGS;
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature};
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, Volatility};
use datatypes::arrow_array::{int_array_value_at_index, string_array_value_at_index};
use datatypes::json::JsonStructureSettings;
use jsonpath_rust::JsonPath;
use serde_json::Value;
use crate::function::{Function, extract_args};
use crate::helper;
@@ -124,13 +131,6 @@ macro_rules! json_get {
};
}
json_get!(
JsonGetInt,
Int64,
i64,
"Get the value from the JSONB by the given path and return it as an integer."
);
json_get!(
JsonGetFloat,
Float64,
@@ -145,70 +145,356 @@ json_get!(
"Get the value from the JSONB by the given path and return it as a boolean."
);
/// Get the value from the JSONB by the given path and return it as a string.
#[derive(Clone, Debug)]
pub struct JsonGetString {
enum JsonResultValue<'a> {
Jsonb(Vec<u8>),
JsonStructByColumn(&'a ArrayRef, usize),
JsonStructByValue(&'a Value),
}
trait JsonGetResultBuilder {
fn append_value(&mut self, value: JsonResultValue<'_>) -> Result<()>;
fn append_null(&mut self);
fn build(&mut self) -> ArrayRef;
}
/// Common implementation for JSON get scalar functions.
///
/// `JsonGet` encapsulates the logic for extracting values from JSON inputs
/// based on a path expression. Different JSON get functions reuse this
/// implementation by supplying their own `JsonGetResultBuilder` to control
/// how the resulting values are materialized into an Arrow array.
struct JsonGet {
signature: Signature,
}
impl JsonGetString {
pub const NAME: &'static str = "json_get_string";
impl JsonGet {
fn invoke<F, B>(&self, args: ScalarFunctionArgs, builder_factory: F) -> Result<ColumnarValue>
where
F: Fn(usize) -> B,
B: JsonGetResultBuilder,
{
let [arg0, arg1] = extract_args("JSON_GET", &args)?;
let arg1 = compute::cast(&arg1, &DataType::Utf8View)?;
let paths = arg1.as_string_view();
let mut builder = (builder_factory)(arg0.len());
match arg0.data_type() {
DataType::Binary | DataType::LargeBinary | DataType::BinaryView => {
let arg0 = compute::cast(&arg0, &DataType::BinaryView)?;
let jsons = arg0.as_binary_view();
jsonb_get(jsons, paths, &mut builder)?;
}
DataType::Struct(_) => {
let jsons = arg0.as_struct();
json_struct_get(jsons, paths, &mut builder)?
}
_ => {
return Err(DataFusionError::Execution(format!(
"JSON_GET not supported argument type {}",
arg0.data_type(),
)));
}
};
Ok(ColumnarValue::Array(builder.build()))
}
}
impl Default for JsonGetString {
impl Default for JsonGet {
fn default() -> Self {
Self {
// TODO(LFC): Use a more clear type here instead of "Binary" for Json input, once we have a "Json" type.
signature: helper::one_of_sigs2(
vec![DataType::Binary, DataType::BinaryView],
vec![DataType::Utf8, DataType::Utf8View],
),
signature: Signature::any(2, Volatility::Immutable),
}
}
}
#[derive(Default)]
pub struct JsonGetString(JsonGet);
impl JsonGetString {
pub const NAME: &'static str = "json_get_string";
}
impl Function for JsonGetString {
fn name(&self) -> &str {
Self::NAME
}
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Utf8View)
}
fn signature(&self) -> &Signature {
&self.signature
&self.0.signature
}
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0, arg1] = extract_args(self.name(), &args)?;
let arg0 = compute::cast(&arg0, &DataType::BinaryView)?;
let jsons = arg0.as_binary_view();
let arg1 = compute::cast(&arg1, &DataType::Utf8View)?;
let paths = arg1.as_string_view();
fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {
struct StringResultBuilder(StringViewBuilder);
let size = jsons.len();
let mut builder = StringViewBuilder::with_capacity(size);
for i in 0..size {
let json = jsons.is_valid(i).then(|| jsons.value(i));
let path = paths.is_valid(i).then(|| paths.value(i));
let result = match (json, path) {
(Some(json), Some(path)) => {
get_json_by_path(json, path).and_then(|json| jsonb::to_str(&json).ok())
impl JsonGetResultBuilder for StringResultBuilder {
fn append_value(&mut self, value: JsonResultValue<'_>) -> Result<()> {
match value {
JsonResultValue::Jsonb(value) => {
self.0.append_option(jsonb::to_str(&value).ok())
}
JsonResultValue::JsonStructByColumn(column, i) => {
if let Some(v) = string_array_value_at_index(column, i) {
self.0.append_value(v);
} else {
self.0
.append_value(arrow_cast::display::array_value_to_string(
column, i,
)?);
}
}
JsonResultValue::JsonStructByValue(value) => {
if let Some(s) = value.as_str() {
self.0.append_value(s)
} else {
self.0.append_value(value.to_string())
}
}
}
_ => None,
};
builder.append_option(result);
Ok(())
}
fn append_null(&mut self) {
self.0.append_null();
}
fn build(&mut self) -> ArrayRef {
Arc::new(self.0.finish())
}
}
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
self.0.invoke(args, |len: usize| {
StringResultBuilder(StringViewBuilder::with_capacity(len))
})
}
}
#[derive(Default)]
pub struct JsonGetInt(JsonGet);
impl JsonGetInt {
pub const NAME: &'static str = "json_get_int";
}
impl Function for JsonGetInt {
fn name(&self) -> &str {
Self::NAME
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Int64)
}
fn signature(&self) -> &Signature {
&self.0.signature
}
fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {
struct IntResultBuilder(Int64Builder);
impl JsonGetResultBuilder for IntResultBuilder {
fn append_value(&mut self, value: JsonResultValue<'_>) -> Result<()> {
match value {
JsonResultValue::Jsonb(value) => {
self.0.append_option(jsonb::to_i64(&value).ok())
}
JsonResultValue::JsonStructByColumn(column, i) => {
self.0.append_option(int_array_value_at_index(column, i))
}
JsonResultValue::JsonStructByValue(value) => {
self.0.append_option(value.as_i64())
}
}
Ok(())
}
fn append_null(&mut self) {
self.0.append_null();
}
fn build(&mut self) -> ArrayRef {
Arc::new(self.0.finish())
}
}
self.0.invoke(args, |len: usize| {
IntResultBuilder(Int64Builder::with_capacity(len))
})
}
}
impl Display for JsonGetInt {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", Self::NAME.to_ascii_uppercase())
}
}
fn jsonb_get(
jsons: &BinaryViewArray,
paths: &StringViewArray,
builder: &mut impl JsonGetResultBuilder,
) -> Result<()> {
let size = jsons.len();
for i in 0..size {
let json = jsons.is_valid(i).then(|| jsons.value(i));
let path = paths.is_valid(i).then(|| paths.value(i));
let result = match (json, path) {
(Some(json), Some(path)) => get_json_by_path(json, path),
_ => None,
};
if let Some(v) = result {
builder.append_value(JsonResultValue::Jsonb(v))?;
} else {
builder.append_null();
}
}
Ok(())
}
fn json_struct_get(
jsons: &StructArray,
paths: &StringViewArray,
builder: &mut impl JsonGetResultBuilder,
) -> Result<()> {
let size = jsons.len();
for i in 0..size {
if jsons.is_null(i) || paths.is_null(i) {
builder.append_null();
continue;
}
let path = paths.value(i);
// naively assume the JSON path is our kind of indexing to the field, by removing its "root"
let field_path = path.trim().replace("$.", "");
let column = jsons.column_by_name(&field_path);
if let Some(column) = column {
builder.append_value(JsonResultValue::JsonStructByColumn(column, i))?;
} else {
let Some(raw) = jsons
.column_by_name(JsonStructureSettings::RAW_FIELD)
.and_then(|x| string_array_value_at_index(x, i))
else {
builder.append_null();
continue;
};
let path: JsonPath<Value> = JsonPath::try_from(path).map_err(|e| {
DataFusionError::Execution(format!("{path} is not a valid JSON path: {e}"))
})?;
// the wanted field is not retrievable from the JSON struct columns directly, we have
// to combine everything (columns and the "_raw") into a complete JSON value to find it
let value = json_struct_to_value(raw, jsons, i)?;
match path.find(&value) {
Value::Null => builder.append_null(),
Value::Array(values) => match values.as_slice() {
[] => builder.append_null(),
[x] => builder.append_value(JsonResultValue::JsonStructByValue(x))?,
_ => builder.append_value(JsonResultValue::JsonStructByValue(&value))?,
},
value => builder.append_value(JsonResultValue::JsonStructByValue(&value))?,
}
}
}
Ok(())
}
fn json_struct_to_value(raw: &str, jsons: &StructArray, i: usize) -> Result<Value> {
let Ok(mut json) = Value::from_str(raw) else {
return Err(DataFusionError::Internal(format!(
"inner field '{}' is not a valid JSON string",
JsonStructureSettings::RAW_FIELD
)));
};
for (column_name, column) in jsons.column_names().into_iter().zip(jsons.columns()) {
if column_name == JsonStructureSettings::RAW_FIELD {
continue;
}
let (json_pointer, field) = if let Some((json_object, field)) = column_name.rsplit_once(".")
{
let json_pointer = format!("/{}", json_object.replace(".", "/"));
(json_pointer, field)
} else {
("".to_string(), column_name)
};
let Some(json_object) = json
.pointer_mut(&json_pointer)
.and_then(|x| x.as_object_mut())
else {
return Err(DataFusionError::Internal(format!(
"value at JSON pointer '{}' is not an object",
json_pointer
)));
};
macro_rules! insert {
($column: ident, $i: ident, $json_object: ident, $field: ident) => {{
if let Some(value) = $column
.is_valid($i)
.then(|| serde_json::Value::from($column.value($i)))
{
$json_object.insert($field.to_string(), value);
}
}};
}
match column.data_type() {
// boolean => Value::Bool
DataType::Boolean => {
let column = column.as_boolean();
insert!(column, i, json_object, field);
}
// int => Value::Number
DataType::Int64 => {
let column = column.as_primitive::<Int64Type>();
insert!(column, i, json_object, field);
}
DataType::UInt64 => {
let column = column.as_primitive::<UInt64Type>();
insert!(column, i, json_object, field);
}
DataType::Float64 => {
let column = column.as_primitive::<Float64Type>();
insert!(column, i, json_object, field);
}
// string => Value::String
DataType::Utf8 => {
let column = column.as_string::<i32>();
insert!(column, i, json_object, field);
}
DataType::LargeUtf8 => {
let column = column.as_string::<i64>();
insert!(column, i, json_object, field);
}
DataType::Utf8View => {
let column = column.as_string_view();
insert!(column, i, json_object, field);
}
// other => Value::Array and Value::Object
_ => {
return Err(DataFusionError::NotImplemented(format!(
"{} is not yet supported to be executed with field {} of datatype {}",
JsonGetString::NAME,
column_name,
column.data_type()
)));
}
}
}
Ok(json)
}
impl Display for JsonGetString {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", Self::NAME.to_ascii_uppercase())
@@ -296,14 +582,60 @@ impl Display for JsonGetObject {
mod tests {
use std::sync::Arc;
use arrow::array::{Float64Array, Int64Array, StructArray};
use arrow_schema::Field;
use datafusion_common::ScalarValue;
use datafusion_common::arrow::array::{BinaryArray, BinaryViewArray, StringArray};
use datafusion_common::arrow::datatypes::{Float64Type, Int64Type};
use datatypes::types::parse_string_to_jsonb;
use serde_json::json;
use super::*;
/// Create a JSON object like this (as a one element struct array for testing):
///
/// ```JSON
/// {
/// "kind": "foo",
/// "payload": {
/// "code": 404,
/// "success": false,
/// "result": {
/// "error": "not found",
/// "time_cost": 1.234
/// }
/// }
/// }
/// ```
fn test_json_struct() -> ArrayRef {
Arc::new(StructArray::new(
vec![
Field::new("kind", DataType::Utf8, true),
Field::new("payload.code", DataType::Int64, true),
Field::new("payload.result.time_cost", DataType::Float64, true),
Field::new(JsonStructureSettings::RAW_FIELD, DataType::Utf8View, true),
]
.into(),
vec![
Arc::new(StringArray::from_iter([Some("foo")])) as ArrayRef,
Arc::new(Int64Array::from_iter([Some(404)])),
Arc::new(Float64Array::from_iter([Some(1.234)])),
Arc::new(StringViewArray::from_iter([Some(
json! ({
"payload": {
"success": false,
"result": {
"error": "not found"
}
}
})
.to_string(),
)])),
],
None,
))
}
#[test]
fn test_json_get_int() {
let json_get_int = JsonGetInt::default();
@@ -321,37 +653,55 @@ mod tests {
r#"{"a": 4, "b": {"c": 6}, "c": 6}"#,
r#"{"a": 7, "b": 8, "c": {"a": 7}}"#,
];
let paths = vec!["$.a.b", "$.a", "$.c"];
let results = [Some(2), Some(4), None];
let json_struct = test_json_struct();
let jsonbs = json_strings
let path_expects = vec![
("$.a.b", Some(2)),
("$.a", Some(4)),
("$.c", None),
("$.kind", None),
("$.payload.code", Some(404)),
("$.payload.success", None),
("$.payload.result.time_cost", None),
("$.payload.not-exists", None),
("$.not-exists", None),
("$", None),
];
let mut jsons = json_strings
.iter()
.map(|s| {
let value = jsonb::parse_value(s.as_bytes()).unwrap();
value.to_vec()
Arc::new(BinaryArray::from_iter_values([value.to_vec()])) as ArrayRef
})
.collect::<Vec<_>>();
let json_struct_arrays =
std::iter::repeat_n(json_struct, path_expects.len() - jsons.len()).collect::<Vec<_>>();
jsons.extend(json_struct_arrays);
let args = ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(BinaryArray::from_iter_values(jsonbs))),
ColumnarValue::Array(Arc::new(StringArray::from_iter_values(paths))),
],
arg_fields: vec![],
number_rows: 3,
return_field: Arc::new(Field::new("x", DataType::Int64, false)),
config_options: Arc::new(Default::default()),
};
let result = json_get_int
.invoke_with_args(args)
.and_then(|x| x.to_array(3))
.unwrap();
let vector = result.as_primitive::<Int64Type>();
for i in 0..jsons.len() {
let json = &jsons[i];
let (path, expect) = path_expects[i];
assert_eq!(3, vector.len());
for (i, gt) in results.iter().enumerate() {
let result = vector.is_valid(i).then(|| vector.value(i));
assert_eq!(*gt, result);
let args = ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(json.clone()),
ColumnarValue::Scalar(path.into()),
],
arg_fields: vec![],
number_rows: 1,
return_field: Arc::new(Field::new("x", DataType::Int64, false)),
config_options: Arc::new(Default::default()),
};
let result = json_get_int
.invoke_with_args(args)
.and_then(|x| x.to_array(1))
.unwrap();
let result = result.as_primitive::<Int64Type>();
assert_eq!(1, result.len());
let actual = result.is_valid(0).then(|| result.value(0));
assert_eq!(actual, expect);
}
}
@@ -474,42 +824,85 @@ mod tests {
r#"{"a": "d", "b": {"c": "e"}, "c": "f"}"#,
r#"{"a": "g", "b": "h", "c": {"a": "g"}}"#,
];
let paths = vec!["$.a.b", "$.a", ""];
let results = [Some("a"), Some("d"), None];
let json_struct = test_json_struct();
let jsonbs = json_strings
let paths = vec![
"$.a.b",
"$.a",
"",
"$.kind",
"$.payload.code",
"$.payload.result.time_cost",
"$.payload",
"$.payload.success",
"$.payload.result",
"$.payload.result.error",
"$.payload.result.not-exists",
"$.payload.not-exists",
"$.not-exists",
"$",
];
let expects = [
Some("a"),
Some("d"),
None,
Some("foo"),
Some("404"),
Some("1.234"),
Some(
r#"{"code":404,"result":{"error":"not found","time_cost":1.234},"success":false}"#,
),
Some("false"),
Some(r#"{"error":"not found","time_cost":1.234}"#),
Some("not found"),
None,
None,
None,
Some(
r#"{"kind":"foo","payload":{"code":404,"result":{"error":"not found","time_cost":1.234},"success":false}}"#,
),
];
let mut jsons = json_strings
.iter()
.map(|s| {
let value = jsonb::parse_value(s.as_bytes()).unwrap();
value.to_vec()
Arc::new(BinaryArray::from_iter_values([value.to_vec()])) as ArrayRef
})
.collect::<Vec<_>>();
let json_struct_arrays =
std::iter::repeat_n(json_struct, expects.len() - jsons.len()).collect::<Vec<_>>();
jsons.extend(json_struct_arrays);
let args = ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(BinaryArray::from_iter_values(jsonbs))),
ColumnarValue::Array(Arc::new(StringArray::from_iter_values(paths))),
],
arg_fields: vec![],
number_rows: 3,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = json_get_string
.invoke_with_args(args)
.and_then(|x| x.to_array(3))
.unwrap();
let vector = result.as_string_view();
for i in 0..jsons.len() {
let json = &jsons[i];
let path = paths[i];
let expect = expects[i];
assert_eq!(3, vector.len());
for (i, gt) in results.iter().enumerate() {
let result = vector.is_valid(i).then(|| vector.value(i));
assert_eq!(*gt, result);
let args = ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(json.clone()),
ColumnarValue::Scalar(path.into()),
],
arg_fields: vec![],
number_rows: 1,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = json_get_string
.invoke_with_args(args)
.and_then(|x| x.to_array(1))
.unwrap();
let result = result.as_string_view();
assert_eq!(1, result.len());
let actual = result.is_valid(0).then(|| result.value(0));
assert_eq!(actual, expect);
}
}
#[test]
fn test_json_get_object() -> datafusion_common::Result<()> {
fn test_json_get_object() -> Result<()> {
let udf = JsonGetObject::default();
assert_eq!("json_get_object", udf.name());
assert_eq!(

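A detail in `json_struct_to_value` above that is easy to miss: a flattened struct column name such as `payload.result.time_cost` is split on its last `.` into the JSON pointer of the parent object (`/payload/result`) and the field name (`time_cost`), while a top-level column such as `kind` maps to the empty pointer, i.e. the document root. A self-contained restatement of just that mapping (the helper name is invented for this sketch):

/// Splits a flattened JSON-struct column name into the JSON pointer of its
/// parent object and the trailing field name, mirroring the `rsplit_once`
/// logic in `json_struct_to_value` above.
fn column_name_to_pointer(column_name: &str) -> (String, &str) {
    if let Some((json_object, field)) = column_name.rsplit_once('.') {
        (format!("/{}", json_object.replace('.', "/")), field)
    } else {
        (String::new(), column_name)
    }
}

fn main() {
    assert_eq!(
        column_name_to_pointer("payload.result.time_cost"),
        ("/payload/result".to_string(), "time_cost")
    );
    assert_eq!(
        column_name_to_pointer("payload.code"),
        ("/payload".to_string(), "code")
    );
    // A top-level column targets the document root (empty JSON pointer).
    assert_eq!(column_name_to_pointer("kind"), (String::new(), "kind"));
}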
View File

@@ -224,6 +224,13 @@ pub enum Error {
location: Location,
},
#[snafu(display("Failed to find table repartition metadata for table id {}", table_id))]
TableRepartNotFound {
table_id: TableId,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to decode protobuf"))]
DecodeProto {
#[snafu(implicit)]
@@ -1091,6 +1098,7 @@ impl ErrorExt for Error {
| DecodeProto { .. }
| BuildTableMeta { .. }
| TableRouteNotFound { .. }
| TableRepartNotFound { .. }
| ConvertRawTableInfo { .. }
| RegionOperatingRace { .. }
| EncodeWalOptions { .. }

View File

@@ -106,6 +106,7 @@ mod schema_metadata_manager;
pub mod schema_name;
pub mod table_info;
pub mod table_name;
pub mod table_repart;
pub mod table_route;
#[cfg(any(test, feature = "testing"))]
pub mod test_utils;
@@ -156,6 +157,7 @@ use crate::DatanodeId;
use crate::error::{self, Result, SerdeJsonSnafu};
use crate::key::flow::flow_state::FlowStateValue;
use crate::key::node_address::NodeAddressValue;
use crate::key::table_repart::{TableRepartKey, TableRepartManager};
use crate::key::table_route::TableRouteKey;
use crate::key::topic_region::TopicRegionValue;
use crate::key::txn_helper::TxnOpGetResponseSet;
@@ -178,6 +180,7 @@ pub const TABLE_NAME_KEY_PREFIX: &str = "__table_name";
pub const CATALOG_NAME_KEY_PREFIX: &str = "__catalog_name";
pub const SCHEMA_NAME_KEY_PREFIX: &str = "__schema_name";
pub const TABLE_ROUTE_PREFIX: &str = "__table_route";
pub const TABLE_REPART_PREFIX: &str = "__table_repart";
pub const NODE_ADDRESS_PREFIX: &str = "__node_address";
pub const KAFKA_TOPIC_KEY_PREFIX: &str = "__topic_name/kafka";
// The legacy topic key prefix is used to store the topic name in previous versions.
@@ -288,6 +291,11 @@ lazy_static! {
Regex::new(&format!("^{TABLE_ROUTE_PREFIX}/([0-9]+)$")).unwrap();
}
lazy_static! {
pub(crate) static ref TABLE_REPART_KEY_PATTERN: Regex =
Regex::new(&format!("^{TABLE_REPART_PREFIX}/([0-9]+)$")).unwrap();
}
lazy_static! {
static ref DATANODE_TABLE_KEY_PATTERN: Regex =
Regex::new(&format!("^{DATANODE_TABLE_KEY_PREFIX}/([0-9]+)/([0-9]+)$")).unwrap();
@@ -386,6 +394,7 @@ pub struct TableMetadataManager {
catalog_manager: CatalogManager,
schema_manager: SchemaManager,
table_route_manager: TableRouteManager,
table_repart_manager: TableRepartManager,
tombstone_manager: TombstoneManager,
topic_name_manager: TopicNameManager,
topic_region_manager: TopicRegionManager,
@@ -538,6 +547,7 @@ impl TableMetadataManager {
catalog_manager: CatalogManager::new(kv_backend.clone()),
schema_manager: SchemaManager::new(kv_backend.clone()),
table_route_manager: TableRouteManager::new(kv_backend.clone()),
table_repart_manager: TableRepartManager::new(kv_backend.clone()),
tombstone_manager: TombstoneManager::new(kv_backend.clone()),
topic_name_manager: TopicNameManager::new(kv_backend.clone()),
topic_region_manager: TopicRegionManager::new(kv_backend.clone()),
@@ -558,6 +568,7 @@ impl TableMetadataManager {
catalog_manager: CatalogManager::new(kv_backend.clone()),
schema_manager: SchemaManager::new(kv_backend.clone()),
table_route_manager: TableRouteManager::new(kv_backend.clone()),
table_repart_manager: TableRepartManager::new(kv_backend.clone()),
tombstone_manager: TombstoneManager::new_with_prefix(
kv_backend.clone(),
tombstone_prefix,
@@ -616,6 +627,10 @@ impl TableMetadataManager {
&self.table_route_manager
}
pub fn table_repart_manager(&self) -> &TableRepartManager {
&self.table_repart_manager
}
pub fn topic_name_manager(&self) -> &TopicNameManager {
&self.topic_name_manager
}
@@ -923,6 +938,7 @@ impl TableMetadataManager {
);
let table_info_key = TableInfoKey::new(table_id);
let table_route_key = TableRouteKey::new(table_id);
let table_repart_key = TableRepartKey::new(table_id);
let datanode_table_keys = datanode_ids
.into_iter()
.map(|datanode_id| DatanodeTableKey::new(datanode_id, table_id))
@@ -937,6 +953,7 @@ impl TableMetadataManager {
keys.push(table_name.to_bytes());
keys.push(table_info_key.to_bytes());
keys.push(table_route_key.to_bytes());
keys.push(table_repart_key.to_bytes());
for key in &datanode_table_keys {
keys.push(key.to_bytes());
}

View File

@@ -0,0 +1,856 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::{BTreeMap, BTreeSet, HashMap};
use std::fmt::Display;
use serde::{Deserialize, Serialize};
use snafu::{OptionExt as _, ResultExt, ensure};
use store_api::storage::RegionId;
use table::metadata::TableId;
use crate::error::{InvalidMetadataSnafu, Result, SerdeJsonSnafu};
use crate::key::txn_helper::TxnOpGetResponseSet;
use crate::key::{
DeserializedValueWithBytes, MetadataKey, MetadataValue, TABLE_REPART_KEY_PATTERN,
TABLE_REPART_PREFIX,
};
use crate::kv_backend::KvBackendRef;
use crate::kv_backend::txn::Txn;
use crate::rpc::store::BatchGetRequest;
/// The key stores table repartition metadata.
/// Specifically, it records the relation between source and destination regions after a repartition operation is completed.
/// This is distinct from the initial partitioning scheme of the table.
/// For example, after repartition, a destination region may still hold files from a source region; this mapping should be updated once repartition is done.
/// The GC scheduler uses this information to clean up those files (and removes this mapping if all files from the source region are cleaned).
///
/// The layout: `__table_repart/{table_id}`.
#[derive(Debug, PartialEq)]
pub struct TableRepartKey {
/// The unique identifier of the table whose re-partition information is stored in this key.
pub table_id: TableId,
}
impl TableRepartKey {
pub fn new(table_id: TableId) -> Self {
Self { table_id }
}
/// Returns the range prefix of the table repartition key.
pub fn range_prefix() -> Vec<u8> {
format!("{}/", TABLE_REPART_PREFIX).into_bytes()
}
}
impl MetadataKey<'_, TableRepartKey> for TableRepartKey {
fn to_bytes(&self) -> Vec<u8> {
self.to_string().into_bytes()
}
fn from_bytes(bytes: &[u8]) -> Result<TableRepartKey> {
let key = std::str::from_utf8(bytes).map_err(|e| {
InvalidMetadataSnafu {
err_msg: format!(
"TableRepartKey '{}' is not a valid UTF8 string: {e}",
String::from_utf8_lossy(bytes)
),
}
.build()
})?;
let captures = TABLE_REPART_KEY_PATTERN
.captures(key)
.context(InvalidMetadataSnafu {
err_msg: format!("Invalid TableRepartKey '{key}'"),
})?;
// Safety: pass the regex check above
let table_id = captures[1].parse::<TableId>().unwrap();
Ok(TableRepartKey { table_id })
}
}
impl Display for TableRepartKey {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}/{}", TABLE_REPART_PREFIX, self.table_id)
}
}
#[derive(Debug, PartialEq, Serialize, Deserialize, Clone, Default)]
pub struct TableRepartValue {
/// A mapping from source region IDs to sets of destination region IDs after repartition.
///
/// Each key in the map is a `RegionId` representing a source region that has been repartitioned.
/// The corresponding value is a `BTreeSet<RegionId>` containing the IDs of destination regions
/// that currently hold files originally from the source region. This mapping is updated after
/// repartition and is used by the GC scheduler to track and clean up files that have been moved.
pub src_to_dst: BTreeMap<RegionId, BTreeSet<RegionId>>,
}
impl TableRepartValue {
/// Creates a new TableRepartValue with an empty src_to_dst map.
pub fn new() -> Self {
Default::default()
}
/// Update mapping from src region to dst regions. Should be called once repartition is done.
///
/// If `dst` is empty, this method does nothing.
pub fn update_mappings(&mut self, src: RegionId, dst: &[RegionId]) {
if dst.is_empty() {
return;
}
self.src_to_dst.entry(src).or_default().extend(dst);
}
/// Remove mappings from src region to dst regions. Should be called once files from src region are cleaned up in dst regions.
pub fn remove_mappings(&mut self, src: RegionId, dsts: &[RegionId]) {
if let Some(dst_set) = self.src_to_dst.get_mut(&src) {
for dst in dsts {
dst_set.remove(dst);
}
if dst_set.is_empty() {
self.src_to_dst.remove(&src);
}
}
}
}
impl MetadataValue for TableRepartValue {
fn try_from_raw_value(raw_value: &[u8]) -> Result<Self> {
serde_json::from_slice::<TableRepartValue>(raw_value).context(SerdeJsonSnafu)
}
fn try_as_raw_value(&self) -> Result<Vec<u8>> {
serde_json::to_vec(self).context(SerdeJsonSnafu)
}
}
pub type TableRepartValueDecodeResult =
Result<Option<DeserializedValueWithBytes<TableRepartValue>>>;
pub struct TableRepartManager {
kv_backend: KvBackendRef,
}
impl TableRepartManager {
pub fn new(kv_backend: KvBackendRef) -> Self {
Self { kv_backend }
}
/// Builds a create table repart transaction;
/// it expects that `__table_repart/{table_id}` is not already occupied.
pub fn build_create_txn(
&self,
table_id: TableId,
table_repart_value: &TableRepartValue,
) -> Result<(
Txn,
impl FnOnce(&mut TxnOpGetResponseSet) -> TableRepartValueDecodeResult + use<>,
)> {
let key = TableRepartKey::new(table_id);
let raw_key = key.to_bytes();
let txn = Txn::put_if_not_exists(raw_key.clone(), table_repart_value.try_as_raw_value()?);
Ok((
txn,
TxnOpGetResponseSet::decode_with(TxnOpGetResponseSet::filter(raw_key)),
))
}
/// Builds an update table repart transaction;
/// it expects the remote value to equal `current_table_repart_value`.
/// It retrieves the latest value if the comparison fails.
pub fn build_update_txn(
&self,
table_id: TableId,
current_table_repart_value: &DeserializedValueWithBytes<TableRepartValue>,
new_table_repart_value: &TableRepartValue,
) -> Result<(
Txn,
impl FnOnce(&mut TxnOpGetResponseSet) -> TableRepartValueDecodeResult + use<>,
)> {
let key = TableRepartKey::new(table_id);
let raw_key = key.to_bytes();
let raw_value = current_table_repart_value.get_raw_bytes();
let new_raw_value: Vec<u8> = new_table_repart_value.try_as_raw_value()?;
let txn = Txn::compare_and_put(raw_key.clone(), raw_value, new_raw_value);
Ok((
txn,
TxnOpGetResponseSet::decode_with(TxnOpGetResponseSet::filter(raw_key)),
))
}
/// Returns the [`TableRepartValue`].
pub async fn get(&self, table_id: TableId) -> Result<Option<TableRepartValue>> {
self.get_inner(table_id).await
}
async fn get_inner(&self, table_id: TableId) -> Result<Option<TableRepartValue>> {
let key = TableRepartKey::new(table_id);
self.kv_backend
.get(&key.to_bytes())
.await?
.map(|kv| TableRepartValue::try_from_raw_value(&kv.value))
.transpose()
}
/// Returns the [`TableRepartValue`] wrapped with [`DeserializedValueWithBytes`].
pub async fn get_with_raw_bytes(
&self,
table_id: TableId,
) -> Result<Option<DeserializedValueWithBytes<TableRepartValue>>> {
self.get_with_raw_bytes_inner(table_id).await
}
async fn get_with_raw_bytes_inner(
&self,
table_id: TableId,
) -> Result<Option<DeserializedValueWithBytes<TableRepartValue>>> {
let key = TableRepartKey::new(table_id);
self.kv_backend
.get(&key.to_bytes())
.await?
.map(|kv| DeserializedValueWithBytes::from_inner_slice(&kv.value))
.transpose()
}
/// Returns batch of [`TableRepartValue`] that respects the order of `table_ids`.
pub async fn batch_get(&self, table_ids: &[TableId]) -> Result<Vec<Option<TableRepartValue>>> {
let raw_table_reparts = self.batch_get_inner(table_ids).await?;
Ok(raw_table_reparts
.into_iter()
.map(|v| v.map(|x| x.inner))
.collect())
}
/// Returns batch of [`TableRepartValue`] wrapped with [`DeserializedValueWithBytes`].
pub async fn batch_get_with_raw_bytes(
&self,
table_ids: &[TableId],
) -> Result<Vec<Option<DeserializedValueWithBytes<TableRepartValue>>>> {
self.batch_get_inner(table_ids).await
}
async fn batch_get_inner(
&self,
table_ids: &[TableId],
) -> Result<Vec<Option<DeserializedValueWithBytes<TableRepartValue>>>> {
let keys = table_ids
.iter()
.map(|id| TableRepartKey::new(*id).to_bytes())
.collect::<Vec<_>>();
let resp = self
.kv_backend
.batch_get(BatchGetRequest { keys: keys.clone() })
.await?;
let kvs = resp
.kvs
.into_iter()
.map(|kv| (kv.key, kv.value))
.collect::<HashMap<_, _>>();
keys.into_iter()
.map(|key| {
if let Some(value) = kvs.get(&key) {
Ok(Some(DeserializedValueWithBytes::from_inner_slice(value)?))
} else {
Ok(None)
}
})
.collect()
}
/// Updates mappings from src region to dst regions.
/// Should be called once repartition is done.
pub async fn update_mappings(&self, src: RegionId, dst: &[RegionId]) -> Result<()> {
let table_id = src.table_id();
// Get current table repart with raw bytes for CAS operation
let current_table_repart = self
.get_with_raw_bytes(table_id)
.await?
.context(crate::error::TableRepartNotFoundSnafu { table_id })?;
// Clone the current repart value and update mappings
let mut new_table_repart_value = current_table_repart.inner.clone();
new_table_repart_value.update_mappings(src, dst);
// Execute atomic update
let (txn, _) =
self.build_update_txn(table_id, &current_table_repart, &new_table_repart_value)?;
let result = self.kv_backend.txn(txn).await?;
ensure!(
result.succeeded,
crate::error::MetadataCorruptionSnafu {
err_msg: format!(
"Failed to update mappings for table {}: CAS operation failed",
table_id
),
}
);
Ok(())
}
/// Removes mappings from src region to dst regions.
/// Should be called once files from src region are cleaned up in dst regions.
pub async fn remove_mappings(&self, src: RegionId, dsts: &[RegionId]) -> Result<()> {
let table_id = src.table_id();
// Get current table repart with raw bytes for CAS operation
let current_table_repart = self
.get_with_raw_bytes(table_id)
.await?
.context(crate::error::TableRepartNotFoundSnafu { table_id })?;
// Clone the current repart value and remove mappings
let mut new_table_repart_value = current_table_repart.inner.clone();
new_table_repart_value.remove_mappings(src, dsts);
// Execute atomic update
let (txn, _) =
self.build_update_txn(table_id, &current_table_repart, &new_table_repart_value)?;
let result = self.kv_backend.txn(txn).await?;
ensure!(
result.succeeded,
crate::error::MetadataCorruptionSnafu {
err_msg: format!(
"Failed to remove mappings for table {}: CAS operation failed",
table_id
),
}
);
Ok(())
}
/// Returns the destination regions for a given source region.
pub async fn get_dst_regions(
&self,
src_region: RegionId,
) -> Result<Option<BTreeSet<RegionId>>> {
let table_id = src_region.table_id();
let table_repart = self.get(table_id).await?;
Ok(table_repart.and_then(|repart| repart.src_to_dst.get(&src_region).cloned()))
}
}
#[cfg(test)]
mod tests {
use std::collections::BTreeMap;
use std::sync::Arc;
use super::*;
use crate::kv_backend::TxnService;
use crate::kv_backend::memory::MemoryKvBackend;
#[test]
fn test_table_repart_key_serialization() {
let key = TableRepartKey::new(42);
let raw_key = key.to_bytes();
assert_eq!(raw_key, b"__table_repart/42");
}
#[test]
fn test_table_repart_key_deserialization() {
let expected = TableRepartKey::new(42);
let key = TableRepartKey::from_bytes(b"__table_repart/42").unwrap();
assert_eq!(key, expected);
}
#[test]
fn test_table_repart_key_deserialization_invalid_utf8() {
let result = TableRepartKey::from_bytes(b"__table_repart/\xff");
assert!(result.is_err());
assert!(
result
.unwrap_err()
.to_string()
.contains("not a valid UTF8 string")
);
}
#[test]
fn test_table_repart_key_deserialization_invalid_format() {
let result = TableRepartKey::from_bytes(b"invalid_key_format");
assert!(result.is_err());
assert!(
result
.unwrap_err()
.to_string()
.contains("Invalid TableRepartKey")
);
}
#[test]
fn test_table_repart_value_serialization_deserialization() {
let mut src_to_dst = BTreeMap::new();
let src_region = RegionId::new(1, 1);
let dst_regions = vec![RegionId::new(1, 2), RegionId::new(1, 3)];
src_to_dst.insert(src_region, dst_regions.into_iter().collect());
let value = TableRepartValue { src_to_dst };
let serialized = value.try_as_raw_value().unwrap();
let deserialized = TableRepartValue::try_from_raw_value(&serialized).unwrap();
assert_eq!(value, deserialized);
}
#[test]
fn test_table_repart_value_update_mappings_new_src() {
let mut value = TableRepartValue {
src_to_dst: BTreeMap::new(),
};
let src = RegionId::new(1, 1);
let dst = vec![RegionId::new(1, 2), RegionId::new(1, 3)];
value.update_mappings(src, &dst);
assert_eq!(value.src_to_dst.len(), 1);
assert!(value.src_to_dst.contains_key(&src));
assert_eq!(value.src_to_dst.get(&src).unwrap().len(), 2);
assert!(
value
.src_to_dst
.get(&src)
.unwrap()
.contains(&RegionId::new(1, 2))
);
assert!(
value
.src_to_dst
.get(&src)
.unwrap()
.contains(&RegionId::new(1, 3))
);
}
#[test]
fn test_table_repart_value_update_mappings_existing_src() {
let mut value = TableRepartValue {
src_to_dst: BTreeMap::new(),
};
let src = RegionId::new(1, 1);
let initial_dst = vec![RegionId::new(1, 2)];
let additional_dst = vec![RegionId::new(1, 3), RegionId::new(1, 4)];
// Initial mapping
value.update_mappings(src, &initial_dst);
// Update with additional destinations
value.update_mappings(src, &additional_dst);
assert_eq!(value.src_to_dst.len(), 1);
assert_eq!(value.src_to_dst.get(&src).unwrap().len(), 3);
assert!(
value
.src_to_dst
.get(&src)
.unwrap()
.contains(&RegionId::new(1, 2))
);
assert!(
value
.src_to_dst
.get(&src)
.unwrap()
.contains(&RegionId::new(1, 3))
);
assert!(
value
.src_to_dst
.get(&src)
.unwrap()
.contains(&RegionId::new(1, 4))
);
}
#[test]
fn test_table_repart_value_remove_mappings_existing() {
let mut value = TableRepartValue {
src_to_dst: BTreeMap::new(),
};
let src = RegionId::new(1, 1);
let dst_regions = vec![
RegionId::new(1, 2),
RegionId::new(1, 3),
RegionId::new(1, 4),
];
value.update_mappings(src, &dst_regions);
// Remove some mappings
let to_remove = vec![RegionId::new(1, 2), RegionId::new(1, 3)];
value.remove_mappings(src, &to_remove);
assert_eq!(value.src_to_dst.len(), 1);
assert_eq!(value.src_to_dst.get(&src).unwrap().len(), 1);
assert!(
value
.src_to_dst
.get(&src)
.unwrap()
.contains(&RegionId::new(1, 4))
);
}
#[test]
fn test_table_repart_value_remove_mappings_all() {
let mut value = TableRepartValue {
src_to_dst: BTreeMap::new(),
};
let src = RegionId::new(1, 1);
let dst_regions = vec![RegionId::new(1, 2), RegionId::new(1, 3)];
value.update_mappings(src, &dst_regions);
// Remove all mappings
value.remove_mappings(src, &dst_regions);
assert_eq!(value.src_to_dst.len(), 0);
}
#[test]
fn test_table_repart_value_remove_mappings_nonexistent() {
let mut value = TableRepartValue {
src_to_dst: BTreeMap::new(),
};
let src = RegionId::new(1, 1);
let dst_regions = vec![RegionId::new(1, 2)];
value.update_mappings(src, &dst_regions);
// Try to remove non-existent mappings
let nonexistent_dst = vec![RegionId::new(1, 3), RegionId::new(1, 4)];
value.remove_mappings(src, &nonexistent_dst);
// Should remain unchanged
assert_eq!(value.src_to_dst.len(), 1);
assert_eq!(value.src_to_dst.get(&src).unwrap().len(), 1);
assert!(
value
.src_to_dst
.get(&src)
.unwrap()
.contains(&RegionId::new(1, 2))
);
}
#[test]
fn test_table_repart_value_remove_mappings_nonexistent_src() {
let mut value = TableRepartValue {
src_to_dst: BTreeMap::new(),
};
let src = RegionId::new(1, 1);
let dst_regions = vec![RegionId::new(1, 2)];
// Try to remove mappings for non-existent source
value.remove_mappings(src, &dst_regions);
// Should remain empty
assert_eq!(value.src_to_dst.len(), 0);
}
#[tokio::test]
async fn test_table_repart_manager_get_empty() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv);
let result = manager.get(1024).await.unwrap();
assert!(result.is_none());
}
#[tokio::test]
async fn test_table_repart_manager_get_with_raw_bytes_empty() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv);
let result = manager.get_with_raw_bytes(1024).await.unwrap();
assert!(result.is_none());
}
#[tokio::test]
async fn test_table_repart_manager_create_and_get() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv.clone());
let mut src_to_dst = BTreeMap::new();
let src_region = RegionId::new(1, 1);
let dst_regions = vec![RegionId::new(1, 2), RegionId::new(1, 3)];
src_to_dst.insert(src_region, dst_regions.into_iter().collect());
let value = TableRepartValue { src_to_dst };
// Create the table repart
let (txn, _) = manager.build_create_txn(1024, &value).unwrap();
let result = kv.txn(txn).await.unwrap();
assert!(result.succeeded);
// Get the table repart
let retrieved = manager.get(1024).await.unwrap().unwrap();
assert_eq!(retrieved, value);
}
#[tokio::test]
async fn test_table_repart_manager_update_txn() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv.clone());
let initial_value = TableRepartValue {
src_to_dst: BTreeMap::new(),
};
// Create initial table repart
let (create_txn, _) = manager.build_create_txn(1024, &initial_value).unwrap();
let result = kv.txn(create_txn).await.unwrap();
assert!(result.succeeded);
// Get current value with raw bytes
let current_value = manager.get_with_raw_bytes(1024).await.unwrap().unwrap();
// Create updated value
let mut updated_src_to_dst = BTreeMap::new();
let src_region = RegionId::new(1, 1);
let dst_regions = vec![RegionId::new(1, 2)];
updated_src_to_dst.insert(src_region, dst_regions.into_iter().collect());
let updated_value = TableRepartValue {
src_to_dst: updated_src_to_dst,
};
// Build update transaction
let (update_txn, _) = manager
.build_update_txn(1024, &current_value, &updated_value)
.unwrap();
let result = kv.txn(update_txn).await.unwrap();
assert!(result.succeeded);
// Verify update
let retrieved = manager.get(1024).await.unwrap().unwrap();
assert_eq!(retrieved, updated_value);
}
#[tokio::test]
async fn test_table_repart_manager_batch_get() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv.clone());
// Create multiple table reparts
let table_reparts = vec![
(
1024,
TableRepartValue {
src_to_dst: {
let mut map = BTreeMap::new();
map.insert(
RegionId::new(1, 1),
vec![RegionId::new(1, 2)].into_iter().collect(),
);
map
},
},
),
(
1025,
TableRepartValue {
src_to_dst: {
let mut map = BTreeMap::new();
map.insert(
RegionId::new(2, 1),
vec![RegionId::new(2, 2), RegionId::new(2, 3)]
.into_iter()
.collect(),
);
map
},
},
),
];
for (table_id, value) in &table_reparts {
let (txn, _) = manager.build_create_txn(*table_id, value).unwrap();
let result = kv.txn(txn).await.unwrap();
assert!(result.succeeded);
}
// Batch get
let results = manager.batch_get(&[1024, 1025, 1026]).await.unwrap();
assert_eq!(results.len(), 3);
assert_eq!(results[0].as_ref().unwrap(), &table_reparts[0].1);
assert_eq!(results[1].as_ref().unwrap(), &table_reparts[1].1);
assert!(results[2].is_none());
}
#[tokio::test]
async fn test_table_repart_manager_update_mappings() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv.clone());
// Create initial table repart
let initial_value = TableRepartValue {
src_to_dst: BTreeMap::new(),
};
let (txn, _) = manager.build_create_txn(1024, &initial_value).unwrap();
let result = kv.txn(txn).await.unwrap();
assert!(result.succeeded);
// Update mappings
let src = RegionId::new(1024, 1);
let dst = vec![RegionId::new(1024, 2), RegionId::new(1024, 3)];
manager.update_mappings(src, &dst).await.unwrap();
// Verify update
let retrieved = manager.get(1024).await.unwrap().unwrap();
assert_eq!(retrieved.src_to_dst.len(), 1);
assert!(retrieved.src_to_dst.contains_key(&src));
assert_eq!(retrieved.src_to_dst.get(&src).unwrap().len(), 2);
}
#[tokio::test]
async fn test_table_repart_manager_remove_mappings() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv.clone());
// Create initial table repart with mappings
let mut initial_src_to_dst = BTreeMap::new();
let src = RegionId::new(1024, 1);
let dst_regions = vec![
RegionId::new(1024, 2),
RegionId::new(1024, 3),
RegionId::new(1024, 4),
];
initial_src_to_dst.insert(src, dst_regions.into_iter().collect());
let initial_value = TableRepartValue {
src_to_dst: initial_src_to_dst,
};
let (txn, _) = manager.build_create_txn(1024, &initial_value).unwrap();
let result = kv.txn(txn).await.unwrap();
assert!(result.succeeded);
// Remove some mappings
let to_remove = vec![RegionId::new(1024, 2), RegionId::new(1024, 3)];
manager.remove_mappings(src, &to_remove).await.unwrap();
// Verify removal
let retrieved = manager.get(1024).await.unwrap().unwrap();
assert_eq!(retrieved.src_to_dst.len(), 1);
assert_eq!(retrieved.src_to_dst.get(&src).unwrap().len(), 1);
assert!(
retrieved
.src_to_dst
.get(&src)
.unwrap()
.contains(&RegionId::new(1024, 4))
);
}
#[tokio::test]
async fn test_table_repart_manager_get_dst_regions() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv.clone());
// Create initial table repart with mappings
let mut initial_src_to_dst = BTreeMap::new();
let src = RegionId::new(1024, 1);
let dst_regions = vec![RegionId::new(1024, 2), RegionId::new(1024, 3)];
initial_src_to_dst.insert(src, dst_regions.into_iter().collect());
let initial_value = TableRepartValue {
src_to_dst: initial_src_to_dst,
};
let (txn, _) = manager.build_create_txn(1024, &initial_value).unwrap();
let result = kv.txn(txn).await.unwrap();
assert!(result.succeeded);
// Get destination regions
let dst_regions = manager.get_dst_regions(src).await.unwrap();
assert!(dst_regions.is_some());
let dst_set = dst_regions.unwrap();
assert_eq!(dst_set.len(), 2);
assert!(dst_set.contains(&RegionId::new(1024, 2)));
assert!(dst_set.contains(&RegionId::new(1024, 3)));
// Test non-existent source region
let nonexistent_src = RegionId::new(1024, 99);
let result = manager.get_dst_regions(nonexistent_src).await.unwrap();
assert!(result.is_none());
}
#[tokio::test]
async fn test_table_repart_manager_operations_on_nonexistent_table() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv);
let src = RegionId::new(1024, 1);
let dst = vec![RegionId::new(1024, 2)];
// Try to update mappings on non-existent table
let result = manager.update_mappings(src, &dst).await;
assert!(result.is_err());
let err_msg = result.unwrap_err().to_string();
assert!(
err_msg.contains("Failed to find table repartition metadata for table id 1024"),
"{err_msg}"
);
// Try to remove mappings on non-existent table
let result = manager.remove_mappings(src, &dst).await;
assert!(result.is_err());
let err_msg = result.unwrap_err().to_string();
assert!(
err_msg.contains("Failed to find table repartition metadata for table id 1024"),
"{err_msg}"
);
}
#[tokio::test]
async fn test_table_repart_manager_batch_get_with_raw_bytes() {
let kv = Arc::new(MemoryKvBackend::default());
let manager = TableRepartManager::new(kv.clone());
// Create table repart
let value = TableRepartValue {
src_to_dst: {
let mut map = BTreeMap::new();
map.insert(
RegionId::new(1, 1),
vec![RegionId::new(1, 2)].into_iter().collect(),
);
map
},
};
let (txn, _) = manager.build_create_txn(1024, &value).unwrap();
let result = kv.txn(txn).await.unwrap();
assert!(result.succeeded);
// Batch get with raw bytes
let results = manager
.batch_get_with_raw_bytes(&[1024, 1025])
.await
.unwrap();
assert_eq!(results.len(), 2);
assert!(results[0].is_some());
assert!(results[1].is_none());
let retrieved = &results[0].as_ref().unwrap().inner;
assert_eq!(retrieved, &value);
}
}
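The tests above exercise the manager end to end. As a quick orientation, the sketch below wires the same pieces together for one repartition round; it is a hypothetical usage sketch built only from the APIs shown in this file (`MemoryKvBackend`, `build_create_txn`, `update_mappings`, `get_dst_regions`, `remove_mappings`) and is not part of the change.
// Hypothetical usage sketch (not part of this file): create the repart entry,
// record a src -> dst mapping via the CAS-based helpers, then look it up.
async fn repart_round_trip() {
    let kv = Arc::new(MemoryKvBackend::default());
    let manager = TableRepartManager::new(kv.clone());
    // update_mappings requires the entry to exist, so persist an empty one first.
    let initial = TableRepartValue { src_to_dst: BTreeMap::new() };
    let (txn, _) = manager.build_create_txn(1024, &initial).unwrap();
    assert!(kv.txn(txn).await.unwrap().succeeded);
    // Record that region (1024, 1) was split into (1024, 2) and (1024, 3).
    let src = RegionId::new(1024, 1);
    let dst = [RegionId::new(1024, 2), RegionId::new(1024, 3)];
    manager.update_mappings(src, &dst).await.unwrap();
    // The destinations are now queryable by source region.
    let dsts = manager.get_dst_regions(src).await.unwrap().unwrap();
    assert_eq!(dsts.len(), 2);
    // Once the destination regions no longer reference the source files,
    // the mapping can be dropped again.
    manager.remove_mappings(src, &dst).await.unwrap();
}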

View File

@@ -868,6 +868,8 @@ impl PgStore {
let client = match pool.get().await {
Ok(client) => client,
Err(e) => {
// Log the full error (debug representation) to help diagnose connection issues.
common_telemetry::error!(e; "Failed to get Postgres connection.");
return GetPostgresConnectionSnafu {
reason: e.to_string(),
}

View File

@@ -15,10 +15,12 @@
use arrow::array::{ArrayRef, AsArray};
use arrow::datatypes::{
DataType, DurationMicrosecondType, DurationMillisecondType, DurationNanosecondType,
DurationSecondType, Time32MillisecondType, Time32SecondType, Time64MicrosecondType,
Time64NanosecondType, TimeUnit, TimestampMicrosecondType, TimestampMillisecondType,
TimestampNanosecondType, TimestampSecondType,
DurationSecondType, Int8Type, Int16Type, Int32Type, Int64Type, Time32MillisecondType,
Time32SecondType, Time64MicrosecondType, Time64NanosecondType, TimeUnit,
TimestampMicrosecondType, TimestampMillisecondType, TimestampNanosecondType,
TimestampSecondType, UInt8Type, UInt16Type, UInt32Type, UInt64Type,
};
use arrow_array::Array;
use common_time::time::Time;
use common_time::{Duration, Timestamp};
@@ -126,3 +128,87 @@ pub fn duration_array_value(array: &ArrayRef, i: usize) -> Duration {
};
Duration::new(v, time_unit.into())
}
/// Get the string value at index `i` for `Utf8`, `LargeUtf8`, or `Utf8View` arrays.
///
/// Returns `None` when the array type is not a string type or the value is null.
///
/// # Panics
///
/// If index `i` is out of bounds.
pub fn string_array_value_at_index(array: &ArrayRef, i: usize) -> Option<&str> {
match array.data_type() {
DataType::Utf8 => {
let array = array.as_string::<i32>();
array.is_valid(i).then(|| array.value(i))
}
DataType::LargeUtf8 => {
let array = array.as_string::<i64>();
array.is_valid(i).then(|| array.value(i))
}
DataType::Utf8View => {
let array = array.as_string_view();
array.is_valid(i).then(|| array.value(i))
}
_ => None,
}
}
/// Get the integer value (`i64`) at index `i` for any integer array.
///
/// Returns `None` when:
///
/// - the array type is not an integer type;
/// - the value is larger than `i64::MAX`;
/// - the value is null.
///
/// # Panics
///
/// If index `i` is out of bounds.
pub fn int_array_value_at_index(array: &ArrayRef, i: usize) -> Option<i64> {
match array.data_type() {
DataType::Int8 => {
let array = array.as_primitive::<Int8Type>();
array.is_valid(i).then(|| array.value(i) as i64)
}
DataType::Int16 => {
let array = array.as_primitive::<Int16Type>();
array.is_valid(i).then(|| array.value(i) as i64)
}
DataType::Int32 => {
let array = array.as_primitive::<Int32Type>();
array.is_valid(i).then(|| array.value(i) as i64)
}
DataType::Int64 => {
let array = array.as_primitive::<Int64Type>();
array.is_valid(i).then(|| array.value(i))
}
DataType::UInt8 => {
let array = array.as_primitive::<UInt8Type>();
array.is_valid(i).then(|| array.value(i) as i64)
}
DataType::UInt16 => {
let array = array.as_primitive::<UInt16Type>();
array.is_valid(i).then(|| array.value(i) as i64)
}
DataType::UInt32 => {
let array = array.as_primitive::<UInt32Type>();
array.is_valid(i).then(|| array.value(i) as i64)
}
DataType::UInt64 => {
let array = array.as_primitive::<UInt64Type>();
array
.is_valid(i)
.then(|| {
let i = array.value(i);
if i <= i64::MAX as u64 {
Some(i as i64)
} else {
None
}
})
.flatten()
}
_ => None,
}
}
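The two accessors above can be exercised directly on arrow arrays. The following is a minimal sketch, assuming this file's functions and the already-imported `ArrayRef` are in scope; it is illustrative and not part of the change.
// Hypothetical sketch of the Some/None behavior of the accessors above.
fn array_value_examples() {
    use std::sync::Arc;
    use arrow::array::{Int64Array, StringArray, UInt64Array};
    let strings: ArrayRef = Arc::new(StringArray::from(vec![Some("a"), None]));
    assert_eq!(string_array_value_at_index(&strings, 0), Some("a"));
    // Null values yield None.
    assert_eq!(string_array_value_at_index(&strings, 1), None);
    let ints: ArrayRef = Arc::new(Int64Array::from(vec![42i64]));
    assert_eq!(int_array_value_at_index(&ints, 0), Some(42));
    // u64 values above i64::MAX are rejected rather than wrapped.
    let big: ArrayRef = Arc::new(UInt64Array::from(vec![u64::MAX]));
    assert_eq!(int_array_value_at_index(&big, 0), None);
    // Non-integer arrays also yield None.
    assert_eq!(int_array_value_at_index(&strings, 0), None);
}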

View File

@@ -299,7 +299,7 @@ impl Default for MetasrvOptions {
#[allow(deprecated)]
server_addr: String::new(),
store_addrs: vec!["127.0.0.1:2379".to_string()],
backend_tls: None,
backend_tls: Some(TlsOption::prefer()),
selector: SelectorType::default(),
enable_region_failover: false,
heartbeat_interval: distributed_time_constants::BASE_HEARTBEAT_INTERVAL,

View File

@@ -13,6 +13,7 @@
// limitations under the License.
use common_meta::kv_backend::etcd::create_etcd_tls_options;
use common_telemetry::warn;
use etcd_client::{Client, ConnectOptions};
use servers::tls::{TlsMode, TlsOption};
use snafu::ResultExt;
@@ -38,9 +39,12 @@ pub async fn create_etcd_client_with_tls(
client_options.keep_alive_interval,
client_options.keep_alive_timeout,
);
let all_endpoints_use_https = etcd_endpoints.iter().all(|e| e.starts_with("https"));
if let Some(tls_config) = tls_config
&& let Some(tls_options) = create_etcd_tls_options(&convert_tls_option(tls_config))
.context(BuildTlsOptionsSnafu)?
&& let Some(tls_options) =
create_etcd_tls_options(&convert_tls_option(all_endpoints_use_https, tls_config))
.context(BuildTlsOptionsSnafu)?
{
connect_options = connect_options.with_tls(tls_options);
}
@@ -50,9 +54,22 @@ pub async fn create_etcd_client_with_tls(
.context(error::ConnectEtcdSnafu)
}
fn convert_tls_option(tls_option: &TlsOption) -> common_meta::kv_backend::etcd::TlsOption {
fn convert_tls_option(
all_endpoints_use_https: bool,
tls_option: &TlsOption,
) -> common_meta::kv_backend::etcd::TlsOption {
let mode = match tls_option.mode {
TlsMode::Disable => common_meta::kv_backend::etcd::TlsMode::Disable,
TlsMode::Prefer => {
if all_endpoints_use_https {
common_meta::kv_backend::etcd::TlsMode::Require
} else {
warn!(
"All endpoints use HTTP, TLS prefer mode is not supported, using disable mode"
);
common_meta::kv_backend::etcd::TlsMode::Disable
}
}
_ => common_meta::kv_backend::etcd::TlsMode::Require,
};
common_meta::kv_backend::etcd::TlsOption {

View File

@@ -8,6 +8,7 @@ license.workspace = true
default = []
test = ["common-test-util", "rstest", "rstest_reuse", "rskafka"]
enterprise = []
vector_index = ["dep:usearch", "dep:roaring", "index/vector_index"]
[lints]
workspace = true
@@ -28,9 +29,10 @@ common-datasource.workspace = true
common-decimal.workspace = true
common-error.workspace = true
common-grpc.workspace = true
common-function.workspace = true
common-macro.workspace = true
common-meta.workspace = true
common-memory-manager.workspace = true
common-meta.workspace = true
common-query.workspace = true
common-recordbatch.workspace = true
common-runtime.workspace = true
@@ -49,7 +51,6 @@ dotenv.workspace = true
either.workspace = true
futures.workspace = true
humantime-serde.workspace = true
humantime.workspace = true
index.workspace = true
itertools.workspace = true
greptime-proto.workspace = true
@@ -67,6 +68,7 @@ partition.workspace = true
puffin.workspace = true
rand.workspace = true
rayon = "1.10"
roaring = { version = "0.10", optional = true }
regex.workspace = true
rskafka = { workspace = true, optional = true }
rstest = { workspace = true, optional = true }
@@ -84,6 +86,7 @@ tokio.workspace = true
tokio-stream.workspace = true
tokio-util.workspace = true
tracing.workspace = true
usearch = { version = "2.21", default-features = false, features = ["fp16lib"], optional = true }
uuid.workspace = true
[dev-dependencies]

View File

@@ -313,6 +313,8 @@ impl AccessLayer {
inverted_index_config: request.inverted_index_config,
fulltext_index_config: request.fulltext_index_config,
bloom_filter_index_config: request.bloom_filter_index_config,
#[cfg(feature = "vector_index")]
vector_index_config: request.vector_index_config,
};
// We disable write cache on file system but we still use atomic write.
// TODO(yingwen): If we support other non-fs stores without the write cache, then
@@ -467,6 +469,8 @@ pub struct SstWriteRequest {
pub inverted_index_config: InvertedIndexConfig,
pub fulltext_index_config: FulltextIndexConfig,
pub bloom_filter_index_config: BloomFilterConfig,
#[cfg(feature = "vector_index")]
pub vector_index_config: crate::config::VectorIndexConfig,
}
/// Cleaner to remove temp files on the atomic write dir.

View File

@@ -227,6 +227,8 @@ impl WriteCache {
inverted_index_config: write_request.inverted_index_config,
fulltext_index_config: write_request.fulltext_index_config,
bloom_filter_index_config: write_request.bloom_filter_index_config,
#[cfg(feature = "vector_index")]
vector_index_config: write_request.vector_index_config,
};
let cleaner = TempFileCleaner::new(region_id, store.clone());
@@ -520,6 +522,8 @@ mod tests {
inverted_index_config: Default::default(),
fulltext_index_config: Default::default(),
bloom_filter_index_config: Default::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
};
let upload_request = SstUploadRequest {
@@ -620,6 +624,8 @@ mod tests {
inverted_index_config: Default::default(),
fulltext_index_config: Default::default(),
bloom_filter_index_config: Default::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
};
let write_opts = WriteOptions {
row_group_size: 512,
@@ -701,6 +707,8 @@ mod tests {
inverted_index_config: Default::default(),
fulltext_index_config: Default::default(),
bloom_filter_index_config: Default::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
};
let write_opts = WriteOptions {
row_group_size: 512,

View File

@@ -332,6 +332,8 @@ impl DefaultCompactor {
let inverted_index_config = compaction_region.engine_config.inverted_index.clone();
let fulltext_index_config = compaction_region.engine_config.fulltext_index.clone();
let bloom_filter_index_config = compaction_region.engine_config.bloom_filter_index.clone();
#[cfg(feature = "vector_index")]
let vector_index_config = compaction_region.engine_config.vector_index.clone();
let input_file_names = output
.inputs
@@ -378,6 +380,8 @@ impl DefaultCompactor {
inverted_index_config,
fulltext_index_config,
bloom_filter_index_config,
#[cfg(feature = "vector_index")]
vector_index_config,
},
&write_opts,
&mut metrics,

View File

@@ -158,6 +158,9 @@ pub struct MitoConfig {
pub fulltext_index: FulltextIndexConfig,
/// Bloom filter index configs.
pub bloom_filter_index: BloomFilterConfig,
/// Vector index configs (HNSW).
#[cfg(feature = "vector_index")]
pub vector_index: VectorIndexConfig,
/// Memtable config
pub memtable: MemtableConfig,
@@ -214,6 +217,8 @@ impl Default for MitoConfig {
inverted_index: InvertedIndexConfig::default(),
fulltext_index: FulltextIndexConfig::default(),
bloom_filter_index: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index: VectorIndexConfig::default(),
memtable: MemtableConfig::default(),
min_compaction_interval: Duration::from_secs(0),
default_experimental_flat_format: false,
@@ -643,6 +648,51 @@ impl BloomFilterConfig {
}
}
/// Configuration options for the vector index (HNSW).
#[cfg(feature = "vector_index")]
#[serde_as]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
#[serde(default)]
pub struct VectorIndexConfig {
/// Whether to create the index on flush: automatically or never.
pub create_on_flush: Mode,
/// Whether to create the index on compaction: automatically or never.
pub create_on_compaction: Mode,
/// Whether to apply the index on query: automatically or never.
pub apply_on_query: Mode,
/// Memory threshold for creating the index.
pub mem_threshold_on_create: MemoryThreshold,
}
#[cfg(feature = "vector_index")]
impl Default for VectorIndexConfig {
fn default() -> Self {
Self {
create_on_flush: Mode::Auto,
create_on_compaction: Mode::Auto,
apply_on_query: Mode::Auto,
mem_threshold_on_create: MemoryThreshold::Auto,
}
}
}
#[cfg(feature = "vector_index")]
impl VectorIndexConfig {
pub fn mem_threshold_on_create(&self) -> Option<usize> {
match self.mem_threshold_on_create {
MemoryThreshold::Auto => {
if let Some(sys_memory) = get_total_memory_readable() {
Some((sys_memory / INDEX_CREATE_MEM_THRESHOLD_FACTOR).as_bytes() as usize)
} else {
Some(ReadableSize::mb(64).as_bytes() as usize)
}
}
MemoryThreshold::Unlimited => None,
MemoryThreshold::Size(size) => Some(size.as_bytes() as usize),
}
}
}
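As a quick reference for how the threshold above resolves: `Auto` divides the detected system memory by `INDEX_CREATE_MEM_THRESHOLD_FACTOR` and falls back to 64 MiB when the total cannot be determined, `Unlimited` removes the cap, and `Size` pins it. The sketch below is illustrative only and assumes the names from this file (`VectorIndexConfig`, `MemoryThreshold`, `ReadableSize`) are in scope.
// Illustrative only: resolving the memory threshold under each setting.
#[cfg(feature = "vector_index")]
fn vector_index_threshold_examples() {
    // Explicit sizes bypass the Auto heuristic entirely.
    let fixed = VectorIndexConfig {
        mem_threshold_on_create: MemoryThreshold::Size(ReadableSize::mb(128)),
        ..VectorIndexConfig::default()
    };
    assert_eq!(
        fixed.mem_threshold_on_create(),
        Some(ReadableSize::mb(128).as_bytes() as usize)
    );
    // Unlimited removes the cap.
    let unlimited = VectorIndexConfig {
        mem_threshold_on_create: MemoryThreshold::Unlimited,
        ..VectorIndexConfig::default()
    };
    assert_eq!(unlimited.mem_threshold_on_create(), None);
    // The default (Auto) is system-memory dependent, so only check it is Some.
    assert!(VectorIndexConfig::default().mem_threshold_on_create().is_some());
}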
/// Divides the CPU count by a non-zero `divisor`, returning at least 1.
fn divide_num_cpus(divisor: usize) -> usize {
debug_assert!(divisor > 0);

View File

@@ -126,7 +126,7 @@ use crate::config::MitoConfig;
use crate::engine::puffin_index::{IndexEntryContext, collect_index_entries_from_puffin};
use crate::error::{
InvalidRequestSnafu, JoinSnafu, MitoManifestInfoSnafu, RecvSnafu, RegionNotFoundSnafu, Result,
SerdeJsonSnafu, SerializeColumnMetadataSnafu, SerializeManifestSnafu,
SerdeJsonSnafu, SerializeColumnMetadataSnafu,
};
#[cfg(feature = "enterprise")]
use crate::extension::BoxedExtensionRangeProviderFactory;
@@ -1057,19 +1057,8 @@ impl EngineInner {
let region_id = request.region_id;
let (request, receiver) = WorkerRequest::try_from_remap_manifests_request(request)?;
self.workers.submit_to_worker(region_id, request).await?;
let manifests = receiver.await.context(RecvSnafu)??;
let new_manifests = manifests
.into_iter()
.map(|(region_id, manifest)| {
Ok((
region_id,
serde_json::to_string(&manifest)
.context(SerializeManifestSnafu { region_id })?,
))
})
.collect::<Result<HashMap<_, _>>>()?;
Ok(RemapManifestsResponse { new_manifests })
let manifest_paths = receiver.await.context(RecvSnafu)??;
Ok(RemapManifestsResponse { manifest_paths })
}
async fn copy_region_from(

View File

@@ -69,7 +69,8 @@ async fn test_apply_staging_manifest_invalid_region_state_with_format(flat_forma
region_id,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr: range_expr("x", 0, 100).as_json_str().unwrap(),
files_to_add: vec![],
central_region_id: RegionId::new(1, 0),
manifest_path: "manifest.json".to_string(),
}),
)
.await
@@ -88,7 +89,8 @@ async fn test_apply_staging_manifest_invalid_region_state_with_format(flat_forma
region_id,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr: range_expr("x", 0, 100).as_json_str().unwrap(),
files_to_add: vec![],
central_region_id: RegionId::new(1, 0),
manifest_path: "manifest.json".to_string(),
}),
)
.await
@@ -136,7 +138,8 @@ async fn test_apply_staging_manifest_mismatched_partition_expr_with_format(flat_
region_id,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr: range_expr("x", 0, 100).as_json_str().unwrap(),
files_to_add: vec![],
central_region_id: RegionId::new(1, 0),
manifest_path: "dummy".to_string(),
}),
)
.await
@@ -144,7 +147,36 @@ async fn test_apply_staging_manifest_mismatched_partition_expr_with_format(flat_
assert_matches!(
err.into_inner().as_any().downcast_ref::<Error>().unwrap(),
Error::StagingPartitionExprMismatch { .. }
)
);
// If staging manifest's partition expr is different from the request.
let result = engine
.remap_manifests(RemapManifestsRequest {
region_id,
input_regions: vec![region_id],
region_mapping: [(region_id, vec![region_id])].into_iter().collect(),
new_partition_exprs: [(region_id, range_expr("x", 0, 49).as_json_str().unwrap())]
.into_iter()
.collect(),
})
.await
.unwrap();
let err = engine
.handle_request(
region_id,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr: range_expr("x", 0, 50).as_json_str().unwrap(),
central_region_id: region_id,
manifest_path: result.manifest_paths[&region_id].clone(),
}),
)
.await
.unwrap_err();
assert_matches!(
err.into_inner().as_any().downcast_ref::<Error>().unwrap(),
Error::StagingPartitionExprMismatch { .. }
);
}
#[tokio::test]
@@ -216,13 +248,26 @@ async fn test_apply_staging_manifest_success_with_format(flat_format: bool) {
})
.await
.unwrap();
assert_eq!(result.new_manifests.len(), 2);
let new_manifest_1 =
serde_json::from_str::<RegionManifest>(&result.new_manifests[&new_region_id_1]).unwrap();
let new_manifest_2 =
serde_json::from_str::<RegionManifest>(&result.new_manifests[&new_region_id_2]).unwrap();
let region = engine.get_region(region_id).unwrap();
let manager = region.manifest_ctx.manifest_manager.write().await;
let manifest_storage = manager.store();
let blob_store = manifest_storage.staging_storage().blob_storage();
assert_eq!(result.manifest_paths.len(), 2);
common_telemetry::debug!("manifest paths: {:?}", result.manifest_paths);
let new_manifest_1 = blob_store
.get(&result.manifest_paths[&new_region_id_1])
.await
.unwrap();
let new_manifest_2 = blob_store
.get(&result.manifest_paths[&new_region_id_2])
.await
.unwrap();
let new_manifest_1 = serde_json::from_slice::<RegionManifest>(&new_manifest_1).unwrap();
let new_manifest_2 = serde_json::from_slice::<RegionManifest>(&new_manifest_2).unwrap();
assert_eq!(new_manifest_1.files.len(), 3);
assert_eq!(new_manifest_2.files.len(), 3);
drop(manager);
let request = CreateRequestBuilder::new().build();
engine
@@ -238,7 +283,6 @@ async fn test_apply_staging_manifest_success_with_format(flat_format: bool) {
)
.await
.unwrap();
let mut files_to_add = new_manifest_1.files.values().cloned().collect::<Vec<_>>();
// Before applying the staging manifest, the file list should be empty
let region = engine.get_region(new_region_id_1).unwrap();
let manifest = region.manifest_ctx.manifest().await;
@@ -251,7 +295,8 @@ async fn test_apply_staging_manifest_success_with_format(flat_format: bool) {
new_region_id_1,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr: range_expr("tag_0", 0, 50).as_json_str().unwrap(),
files_to_add: serde_json::to_vec(&files_to_add).unwrap(),
central_region_id: region_id,
manifest_path: result.manifest_paths[&new_region_id_1].clone(),
}),
)
.await
@@ -277,23 +322,52 @@ async fn test_apply_staging_manifest_success_with_format(flat_format: bool) {
let region_dir = format!("{}/data/test/1_0000000001", data_home.display());
let staging_manifest_dir = format!("{}/staging/manifest", region_dir);
let staging_files = fs::read_dir(&staging_manifest_dir)
.map(|entries| entries.collect::<Result<Vec<_>, _>>().unwrap_or_default())
.map(|entries| {
entries
.filter(|e| e.as_ref().unwrap().metadata().unwrap().is_file())
.collect::<Result<Vec<_>, _>>()
.unwrap_or_default()
})
.unwrap_or_default();
assert_eq!(staging_files.len(), 0);
assert_eq!(staging_files.len(), 0, "staging_files: {:?}", staging_files);
let region = engine.get_region(region_id).unwrap();
let manager = region.manifest_ctx.manifest_manager.write().await;
let manifest_storage = manager.store();
let blob_store = manifest_storage.staging_storage().blob_storage();
let new_manifest_1 = blob_store
.get(&result.manifest_paths[&new_region_id_1])
.await
.unwrap();
let mut new_manifest_1 = serde_json::from_slice::<RegionManifest>(&new_manifest_1).unwrap();
// Tamper with the staged manifest by adding an extra file entry.
files_to_add.push(FileMeta {
region_id,
file_id: FileId::random(),
..Default::default()
});
let file_id = FileId::random();
new_manifest_1.files.insert(
file_id,
FileMeta {
region_id,
file_id,
..Default::default()
},
);
blob_store
.put(
&result.manifest_paths[&new_region_id_1],
serde_json::to_vec(&new_manifest_1).unwrap(),
)
.await
.unwrap();
drop(manager);
// This request will be ignored.
engine
.handle_request(
new_region_id_1,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr: range_expr("tag_0", 0, 50).as_json_str().unwrap(),
files_to_add: serde_json::to_vec(&files_to_add).unwrap(),
central_region_id: region_id,
manifest_path: result.manifest_paths[&new_region_id_1].clone(),
}),
)
.await
@@ -334,12 +408,40 @@ async fn test_apply_staging_manifest_invalid_files_to_add_with_format(flat_forma
)
.await
.unwrap();
// Apply staging manifest with a manifest path that does not exist.
let err = engine
.handle_request(
region_id,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr: range_expr("tag_0", 0, 50).as_json_str().unwrap(),
files_to_add: b"invalid".to_vec(),
central_region_id: RegionId::new(1, 0),
manifest_path: "dummy".to_string(),
}),
)
.await
.unwrap_err();
assert_matches!(
err.into_inner().as_any().downcast_ref::<Error>().unwrap(),
Error::OpenDal { .. }
);
// Apply staging manifest with invalid bytes.
let region = engine.get_region(region_id).unwrap();
let manager = region.manifest_ctx.manifest_manager.write().await;
let manifest_storage = manager.store();
let blob_store = manifest_storage.staging_storage().blob_storage();
blob_store
.put("invalid_bytes", b"invalid_bytes".to_vec())
.await
.unwrap();
drop(manager);
let err = engine
.handle_request(
region_id,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr: range_expr("tag_0", 0, 50).as_json_str().unwrap(),
central_region_id: region_id,
manifest_path: "invalid_bytes".to_string(),
}),
)
.await
@@ -349,52 +451,3 @@ async fn test_apply_staging_manifest_invalid_files_to_add_with_format(flat_forma
Error::SerdeJson { .. }
);
}
#[tokio::test]
async fn test_apply_staging_manifest_empty_files() {
common_telemetry::init_default_ut_logging();
test_apply_staging_manifest_empty_files_with_format(false).await;
test_apply_staging_manifest_empty_files_with_format(true).await;
}
async fn test_apply_staging_manifest_empty_files_with_format(flat_format: bool) {
let mut env = TestEnv::with_prefix("empty-files").await;
let engine = env
.create_engine(MitoConfig {
default_experimental_flat_format: flat_format,
..Default::default()
})
.await;
let region_id = RegionId::new(1, 1);
let request = CreateRequestBuilder::new().build();
engine
.handle_request(region_id, RegionRequest::Create(request))
.await
.unwrap();
engine
.handle_request(
region_id,
RegionRequest::EnterStaging(EnterStagingRequest {
partition_expr: range_expr("tag_0", 0, 50).as_json_str().unwrap(),
}),
)
.await
.unwrap();
engine
.handle_request(
region_id,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr: range_expr("tag_0", 0, 50).as_json_str().unwrap(),
files_to_add: serde_json::to_vec::<Vec<FileMeta>>(&vec![]).unwrap(),
}),
)
.await
.unwrap();
let region = engine.get_region(region_id).unwrap();
let manifest = region.manifest_ctx.manifest().await;
assert_eq!(manifest.files.len(), 0);
let staging_manifest = region.manifest_ctx.staging_manifest().await;
assert!(staging_manifest.is_none());
let staging_partition_expr = region.staging_partition_expr.lock().unwrap();
assert!(staging_partition_expr.is_none());
}

View File

@@ -229,11 +229,23 @@ async fn test_remap_manifests_success_with_format(flat_format: bool) {
})
.await
.unwrap();
assert_eq!(result.new_manifests.len(), 2);
let new_manifest_1 =
serde_json::from_str::<RegionManifest>(&result.new_manifests[&new_region_id_1]).unwrap();
let new_manifest_2 =
serde_json::from_str::<RegionManifest>(&result.new_manifests[&new_region_id_2]).unwrap();
let region = engine.get_region(region_id).unwrap();
let manager = region.manifest_ctx.manifest_manager.write().await;
let manifest_storage = manager.store();
let blob_store = manifest_storage.staging_storage().blob_storage();
assert_eq!(result.manifest_paths.len(), 2);
common_telemetry::debug!("manifest paths: {:?}", result.manifest_paths);
let new_manifest_1 = blob_store
.get(&result.manifest_paths[&new_region_id_1])
.await
.unwrap();
let new_manifest_2 = blob_store
.get(&result.manifest_paths[&new_region_id_2])
.await
.unwrap();
let new_manifest_1 = serde_json::from_slice::<RegionManifest>(&new_manifest_1).unwrap();
let new_manifest_2 = serde_json::from_slice::<RegionManifest>(&new_manifest_2).unwrap();
assert_eq!(new_manifest_1.files.len(), 3);
assert_eq!(new_manifest_2.files.len(), 3);
}

View File

@@ -1039,6 +1039,22 @@ pub enum Error {
location: Location,
},
#[cfg(feature = "vector_index")]
#[snafu(display("Failed to build vector index: {}", reason))]
VectorIndexBuild {
reason: String,
#[snafu(implicit)]
location: Location,
},
#[cfg(feature = "vector_index")]
#[snafu(display("Failed to finish vector index: {}", reason))]
VectorIndexFinish {
reason: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Manual compaction is override by following operations."))]
ManualCompactionOverride {},
@@ -1345,6 +1361,9 @@ impl ErrorExt for Error {
source.status_code()
}
#[cfg(feature = "vector_index")]
VectorIndexBuild { .. } | VectorIndexFinish { .. } => StatusCode::Internal,
ManualCompactionOverride {} => StatusCode::Cancelled,
CompactionMemoryExhausted { source, .. } => source.status_code(),

View File

@@ -669,6 +669,8 @@ impl RegionFlushTask {
inverted_index_config: self.engine_config.inverted_index.clone(),
fulltext_index_config: self.engine_config.fulltext_index.clone(),
bloom_filter_index_config: self.engine_config.bloom_filter_index.clone(),
#[cfg(feature = "vector_index")]
vector_index_config: self.engine_config.vector_index.clone(),
}
}

View File

@@ -378,6 +378,11 @@ impl ManifestObjectStore {
pub async fn clear_staging_manifests(&mut self) -> Result<()> {
self.staging_storage.clear().await
}
/// Returns the staging storage.
pub(crate) fn staging_storage(&self) -> &StagingStorage {
&self.staging_storage
}
}
#[cfg(test)]

View File

@@ -26,20 +26,104 @@ use crate::manifest::storage::size_tracker::NoopTracker;
use crate::manifest::storage::utils::sort_manifests;
use crate::manifest::storage::{file_version, is_delta_file};
/// A simple blob storage for arbitrary binary data in the staging directory.
///
/// This is primarily used during repartition operations to store generated
/// manifests that will be consumed by other regions via [`ApplyStagingManifestRequest`](store_api::region_request::ApplyStagingManifestRequest).
/// The blobs are stored in the `{region_dir}/staging/blob/` directory.
#[derive(Debug, Clone)]
pub(crate) struct StagingBlobStorage {
object_store: ObjectStore,
path: String,
}
/// Returns the staging blob path derived from the manifest path.
///
/// # Example
/// - Input: `"data/table/region_0001/manifest/"`
/// - Output: `"data/table/region_0001/staging/blob/"`
pub fn staging_blob_path(manifest_path: &str) -> String {
let parent_dir = manifest_path
.trim_end_matches("manifest/")
.trim_end_matches('/');
util::normalize_dir(&format!("{}/staging/blob", parent_dir))
}
impl StagingBlobStorage {
pub fn new(path: String, object_store: ObjectStore) -> Self {
let path = util::normalize_dir(&path);
common_telemetry::debug!(
"Staging blob storage path: {}, root: {}",
path,
object_store.info().root()
);
Self { object_store, path }
}
/// Put the bytes to the blob storage.
pub async fn put(&self, path: &str, bytes: Vec<u8>) -> Result<()> {
let path = format!("{}{}", self.path, path);
common_telemetry::debug!(
"Putting blob to staging blob storage, path: {}, root: {}, bytes: {}",
path,
self.object_store.info().root(),
bytes.len()
);
self.object_store
.write(&path, bytes)
.await
.context(OpenDalSnafu)?;
Ok(())
}
/// Get the bytes from the blob storage.
pub async fn get(&self, path: &str) -> Result<Vec<u8>> {
let path = format!("{}{}", self.path, path);
common_telemetry::debug!(
"Reading blob from staging blob storage, path: {}, root: {}",
path,
self.object_store.info().root()
);
let bytes = self.object_store.read(&path).await.context(OpenDalSnafu)?;
Ok(bytes.to_vec())
}
}
/// Storage for staging manifest files and blobs used during repartition operations.
///
/// Fields:
/// - `delta_storage`: Manages incremental manifest delta files specific to the staging region.
/// - `blob_storage`: Manages arbitrary blobs, such as generated manifests for regions.
///
/// Directory structure:
/// - `{region_dir}/staging/manifest/` — for incremental manifest delta files for the staging region.
/// - `{region_dir}/staging/blob/` — for arbitrary blobs (e.g., generated region manifests).
#[derive(Debug, Clone)]
pub(crate) struct StagingStorage {
delta_storage: DeltaStorage<NoopTracker>,
blob_storage: StagingBlobStorage,
}
/// Returns the staging manifest path derived from the manifest path.
///
/// # Example
/// - Input: `"data/table/region_0001/manifest/"`
/// - Output: `"data/table/region_0001/staging/manifest/"`
pub fn staging_manifest_path(manifest_path: &str) -> String {
let parent_dir = manifest_path
.trim_end_matches("manifest/")
.trim_end_matches('/');
util::normalize_dir(&format!("{}/staging/manifest", parent_dir))
}
impl StagingStorage {
pub fn new(path: String, object_store: ObjectStore, compress_type: CompressionType) -> Self {
let staging_path = {
// Convert "region_dir/manifest/" to "region_dir/staging/manifest/"
let parent_dir = path.trim_end_matches("manifest/").trim_end_matches('/');
util::normalize_dir(&format!("{}/staging/manifest", parent_dir))
};
let staging_blob_path = staging_blob_path(&path);
let blob_storage = StagingBlobStorage::new(staging_blob_path, object_store.clone());
let staging_manifest_path = staging_manifest_path(&path);
let delta_storage = DeltaStorage::new(
staging_path.clone(),
staging_manifest_path.clone(),
object_store.clone(),
compress_type,
// StagingStorage does not use a manifest cache; set to None.
@@ -48,7 +132,16 @@ impl StagingStorage {
// deleted after exiting staging mode.
Arc::new(NoopTracker),
);
Self { delta_storage }
Self {
delta_storage,
blob_storage,
}
}
/// Returns the blob storage.
pub(crate) fn blob_storage(&self) -> &StagingBlobStorage {
&self.blob_storage
}
/// Returns an iterator of manifests from staging directory.
@@ -107,3 +200,22 @@ impl StagingStorage {
self.delta_storage.set_compress_type(compress_type);
}
}
#[cfg(test)]
mod tests {
use crate::manifest::storage::staging::{staging_blob_path, staging_manifest_path};
#[test]
fn test_staging_path() {
let path = "/data/table/region_0001/manifest/";
let expected = "/data/table/region_0001/staging/manifest/";
assert_eq!(staging_manifest_path(path), expected);
}
#[test]
fn test_staging_blob_path() {
let path = "/data/table/region_0001/manifest/";
let expected = "/data/table/region_0001/staging/blob/";
assert_eq!(staging_blob_path(path), expected);
}
}
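To make the blob flow above concrete, here is a hypothetical helper (not part of the change) that derives the staging blob directory from a region's manifest directory and performs a put/get roundtrip, using only the APIs defined in this file.
// Hypothetical roundtrip through the staging blob storage defined above.
async fn stash_and_read_blob(object_store: ObjectStore, manifest_dir: &str) -> Result<Vec<u8>> {
    // e.g. "data/test/1_0000000001/manifest/" -> "data/test/1_0000000001/staging/blob/"
    let blob_dir = staging_blob_path(manifest_dir);
    let store = StagingBlobStorage::new(blob_dir, object_store);
    // Paths passed to put/get are relative to the staging blob directory,
    // matching how remap_manifests hands out `manifest_paths` to consumers.
    store.put("region_42.json", b"{}".to_vec()).await?;
    store.get("region_42.json").await
}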

View File

@@ -50,7 +50,7 @@ use crate::error::{
FlushRegionSnafu, InvalidPartitionExprSnafu, InvalidRequestSnafu, MissingPartitionExprSnafu,
Result, UnexpectedSnafu,
};
use crate::manifest::action::{RegionEdit, RegionManifest, TruncateKind};
use crate::manifest::action::{RegionEdit, TruncateKind};
use crate::memtable::MemtableId;
use crate::memtable::bulk::part::BulkPart;
use crate::metrics::COMPACTION_ELAPSED_TOTAL;
@@ -796,10 +796,7 @@ impl WorkerRequest {
region_mapping,
new_partition_exprs,
}: store_api::region_engine::RemapManifestsRequest,
) -> Result<(
WorkerRequest,
Receiver<Result<HashMap<RegionId, RegionManifest>>>,
)> {
) -> Result<(WorkerRequest, Receiver<Result<HashMap<RegionId, String>>>)> {
let (sender, receiver) = oneshot::channel();
let new_partition_exprs = new_partition_exprs
.into_iter()
@@ -1116,8 +1113,10 @@ pub(crate) struct RemapManifestsRequest {
pub(crate) region_mapping: HashMap<RegionId, Vec<RegionId>>,
/// New partition expressions for the new regions.
pub(crate) new_partition_exprs: HashMap<RegionId, PartitionExpr>,
/// Result sender.
pub(crate) sender: Sender<Result<HashMap<RegionId, RegionManifest>>>,
/// Sender for the result of the remap operation.
///
/// The result is a map from region IDs to their corresponding staging manifest paths.
pub(crate) sender: Sender<Result<HashMap<RegionId, String>>>,
}
#[derive(Debug)]

View File

@@ -287,6 +287,9 @@ pub enum IndexType {
FulltextIndex,
/// Bloom Filter index
BloomFilterIndex,
/// Vector index (HNSW).
#[cfg(feature = "vector_index")]
VectorIndex,
}
/// Metadata of indexes created for a specific column in an SST file.

View File

@@ -20,6 +20,8 @@ pub(crate) mod inverted_index;
pub mod puffin_manager;
mod statistics;
pub(crate) mod store;
#[cfg(feature = "vector_index")]
pub(crate) mod vector_index;
use std::cmp::Ordering;
use std::collections::{BinaryHeap, HashMap, HashSet};
@@ -41,10 +43,14 @@ use store_api::metadata::RegionMetadataRef;
use store_api::storage::{ColumnId, FileId, RegionId};
use strum::IntoStaticStr;
use tokio::sync::mpsc::Sender;
#[cfg(feature = "vector_index")]
use vector_index::creator::VectorIndexer;
use crate::access_layer::{AccessLayerRef, FilePathProvider, OperationType, RegionFilePathFactory};
use crate::cache::file_cache::{FileCacheRef, FileType, IndexKey};
use crate::cache::write_cache::{UploadTracker, WriteCacheRef};
#[cfg(feature = "vector_index")]
use crate::config::VectorIndexConfig;
use crate::config::{BloomFilterConfig, FulltextIndexConfig, InvertedIndexConfig};
use crate::error::{
BuildIndexAsyncSnafu, DecodeSnafu, Error, InvalidRecordBatchSnafu, RegionClosedSnafu,
@@ -76,6 +82,8 @@ use crate::worker::WorkerListener;
pub(crate) const TYPE_INVERTED_INDEX: &str = "inverted_index";
pub(crate) const TYPE_FULLTEXT_INDEX: &str = "fulltext_index";
pub(crate) const TYPE_BLOOM_FILTER_INDEX: &str = "bloom_filter_index";
#[cfg(feature = "vector_index")]
pub(crate) const TYPE_VECTOR_INDEX: &str = "vector_index";
/// Triggers background download of an index file to the local cache.
pub(crate) fn trigger_index_background_download(
@@ -114,6 +122,9 @@ pub struct IndexOutput {
pub fulltext_index: FulltextIndexOutput,
/// Bloom filter output.
pub bloom_filter: BloomFilterOutput,
/// Vector index output.
#[cfg(feature = "vector_index")]
pub vector_index: VectorIndexOutput,
}
impl IndexOutput {
@@ -128,6 +139,10 @@ impl IndexOutput {
if self.bloom_filter.is_available() {
indexes.push(IndexType::BloomFilterIndex);
}
#[cfg(feature = "vector_index")]
if self.vector_index.is_available() {
indexes.push(IndexType::VectorIndex);
}
indexes
}
@@ -151,6 +166,12 @@ impl IndexOutput {
.push(IndexType::BloomFilterIndex);
}
}
#[cfg(feature = "vector_index")]
if self.vector_index.is_available() {
for &col in &self.vector_index.columns {
map.entry(col).or_default().push(IndexType::VectorIndex);
}
}
map.into_iter()
.map(|(column_id, created_indexes)| ColumnIndexMetadata {
@@ -184,6 +205,9 @@ pub type InvertedIndexOutput = IndexBaseOutput;
pub type FulltextIndexOutput = IndexBaseOutput;
/// Output of the bloom filter creation.
pub type BloomFilterOutput = IndexBaseOutput;
/// Output of the vector index creation.
#[cfg(feature = "vector_index")]
pub type VectorIndexOutput = IndexBaseOutput;
/// The index creator that hides the error handling details.
#[derive(Default)]
@@ -199,6 +223,10 @@ pub struct Indexer {
last_mem_fulltext_index: usize,
bloom_filter_indexer: Option<BloomFilterIndexer>,
last_mem_bloom_filter: usize,
#[cfg(feature = "vector_index")]
vector_indexer: Option<VectorIndexer>,
#[cfg(feature = "vector_index")]
last_mem_vector_index: usize,
intermediate_manager: Option<IntermediateManager>,
}
@@ -259,6 +287,18 @@ impl Indexer {
.with_label_values(&[TYPE_BLOOM_FILTER_INDEX])
.add(bloom_filter_mem as i64 - self.last_mem_bloom_filter as i64);
self.last_mem_bloom_filter = bloom_filter_mem;
#[cfg(feature = "vector_index")]
{
let vector_mem = self
.vector_indexer
.as_ref()
.map_or(0, |creator| creator.memory_usage());
INDEX_CREATE_MEMORY_USAGE
.with_label_values(&[TYPE_VECTOR_INDEX])
.add(vector_mem as i64 - self.last_mem_vector_index as i64);
self.last_mem_vector_index = vector_mem;
}
}
}
@@ -279,6 +319,8 @@ pub(crate) struct IndexerBuilderImpl {
pub(crate) inverted_index_config: InvertedIndexConfig,
pub(crate) fulltext_index_config: FulltextIndexConfig,
pub(crate) bloom_filter_index_config: BloomFilterConfig,
#[cfg(feature = "vector_index")]
pub(crate) vector_index_config: VectorIndexConfig,
}
#[async_trait::async_trait]
@@ -296,11 +338,23 @@ impl IndexerBuilder for IndexerBuilderImpl {
indexer.inverted_indexer = self.build_inverted_indexer(file_id);
indexer.fulltext_indexer = self.build_fulltext_indexer(file_id).await;
indexer.bloom_filter_indexer = self.build_bloom_filter_indexer(file_id);
indexer.intermediate_manager = Some(self.intermediate_manager.clone());
if indexer.inverted_indexer.is_none()
&& indexer.fulltext_indexer.is_none()
&& indexer.bloom_filter_indexer.is_none()
#[cfg(feature = "vector_index")]
{
indexer.vector_indexer = self.build_vector_indexer(file_id);
}
indexer.intermediate_manager = Some(self.intermediate_manager.clone());
#[cfg(feature = "vector_index")]
let has_any_indexer = indexer.inverted_indexer.is_some()
|| indexer.fulltext_indexer.is_some()
|| indexer.bloom_filter_indexer.is_some()
|| indexer.vector_indexer.is_some();
#[cfg(not(feature = "vector_index"))]
let has_any_indexer = indexer.inverted_indexer.is_some()
|| indexer.fulltext_indexer.is_some()
|| indexer.bloom_filter_indexer.is_some();
if !has_any_indexer {
indexer.abort().await;
return Indexer::default();
}
@@ -476,6 +530,69 @@ impl IndexerBuilderImpl {
None
}
#[cfg(feature = "vector_index")]
fn build_vector_indexer(&self, file_id: FileId) -> Option<VectorIndexer> {
let create = match self.build_type {
IndexBuildType::Flush => self.vector_index_config.create_on_flush.auto(),
IndexBuildType::Compact => self.vector_index_config.create_on_compaction.auto(),
_ => true,
};
if !create {
debug!(
"Skip creating vector index due to config, region_id: {}, file_id: {}",
self.metadata.region_id, file_id,
);
return None;
}
// Get vector index column IDs and options from metadata
let vector_index_options = self.metadata.vector_indexed_column_ids();
if vector_index_options.is_empty() {
debug!(
"No vector columns to index, skip creating vector index, region_id: {}, file_id: {}",
self.metadata.region_id, file_id,
);
return None;
}
let mem_limit = self.vector_index_config.mem_threshold_on_create();
let indexer = VectorIndexer::new(
file_id,
&self.metadata,
self.intermediate_manager.clone(),
mem_limit,
&vector_index_options,
);
let err = match indexer {
Ok(indexer) => {
if indexer.is_none() {
debug!(
"Skip creating vector index due to no columns require indexing, region_id: {}, file_id: {}",
self.metadata.region_id, file_id,
);
}
return indexer;
}
Err(err) => err,
};
if cfg!(any(test, feature = "test")) {
panic!(
"Failed to create vector index, region_id: {}, file_id: {}, err: {:?}",
self.metadata.region_id, file_id, err
);
} else {
warn!(
err; "Failed to create vector index, region_id: {}, file_id: {}",
self.metadata.region_id, file_id,
);
}
None
}
}
/// Type of an index build task.
@@ -1115,6 +1232,8 @@ mod tests {
with_inverted: bool,
with_fulltext: bool,
with_skipping_bloom: bool,
#[cfg(feature = "vector_index")]
with_vector: bool,
}
fn mock_region_metadata(
@@ -1122,6 +1241,8 @@ mod tests {
with_inverted,
with_fulltext,
with_skipping_bloom,
#[cfg(feature = "vector_index")]
with_vector,
}: MetaConfig,
) -> RegionMetadataRef {
let mut builder = RegionMetadataBuilder::new(RegionId::new(1, 2));
@@ -1187,6 +1308,24 @@ mod tests {
builder.push_column_metadata(column);
}
#[cfg(feature = "vector_index")]
if with_vector {
use index::vector::VectorIndexOptions;
let options = VectorIndexOptions::default();
let column_schema =
ColumnSchema::new("vec", ConcreteDataType::vector_datatype(4), true)
.with_vector_index_options(&options)
.unwrap();
let column = ColumnMetadata {
column_schema,
semantic_type: SemanticType::Field,
column_id: 6,
};
builder.push_column_metadata(column);
}
Arc::new(builder.build().unwrap())
}
@@ -1237,6 +1376,8 @@ mod tests {
inverted_index_config: Default::default(),
fulltext_index_config: Default::default(),
bloom_filter_index_config: Default::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
};
let mut metrics = Metrics::new(WriteType::Flush);
env.access_layer
@@ -1287,6 +1428,8 @@ mod tests {
inverted_index_config: InvertedIndexConfig::default(),
fulltext_index_config: FulltextIndexConfig::default(),
bloom_filter_index_config: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
})
}
@@ -1300,6 +1443,8 @@ mod tests {
with_inverted: true,
with_fulltext: true,
with_skipping_bloom: true,
#[cfg(feature = "vector_index")]
with_vector: false,
});
let indexer = IndexerBuilderImpl {
build_type: IndexBuildType::Flush,
@@ -1312,6 +1457,8 @@ mod tests {
inverted_index_config: InvertedIndexConfig::default(),
fulltext_index_config: FulltextIndexConfig::default(),
bloom_filter_index_config: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
}
.build(FileId::random(), 0)
.await;
@@ -1331,6 +1478,8 @@ mod tests {
with_inverted: true,
with_fulltext: true,
with_skipping_bloom: true,
#[cfg(feature = "vector_index")]
with_vector: false,
});
let indexer = IndexerBuilderImpl {
build_type: IndexBuildType::Flush,
@@ -1346,6 +1495,8 @@ mod tests {
},
fulltext_index_config: FulltextIndexConfig::default(),
bloom_filter_index_config: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
}
.build(FileId::random(), 0)
.await;
@@ -1368,6 +1519,8 @@ mod tests {
..Default::default()
},
bloom_filter_index_config: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
}
.build(FileId::random(), 0)
.await;
@@ -1390,6 +1543,8 @@ mod tests {
create_on_compaction: Mode::Disable,
..Default::default()
},
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
}
.build(FileId::random(), 0)
.await;
@@ -1409,6 +1564,8 @@ mod tests {
with_inverted: false,
with_fulltext: true,
with_skipping_bloom: true,
#[cfg(feature = "vector_index")]
with_vector: false,
});
let indexer = IndexerBuilderImpl {
build_type: IndexBuildType::Flush,
@@ -1421,6 +1578,8 @@ mod tests {
inverted_index_config: InvertedIndexConfig::default(),
fulltext_index_config: FulltextIndexConfig::default(),
bloom_filter_index_config: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
}
.build(FileId::random(), 0)
.await;
@@ -1433,6 +1592,8 @@ mod tests {
with_inverted: true,
with_fulltext: false,
with_skipping_bloom: true,
#[cfg(feature = "vector_index")]
with_vector: false,
});
let indexer = IndexerBuilderImpl {
build_type: IndexBuildType::Flush,
@@ -1445,6 +1606,8 @@ mod tests {
inverted_index_config: InvertedIndexConfig::default(),
fulltext_index_config: FulltextIndexConfig::default(),
bloom_filter_index_config: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
}
.build(FileId::random(), 0)
.await;
@@ -1457,6 +1620,8 @@ mod tests {
with_inverted: true,
with_fulltext: true,
with_skipping_bloom: false,
#[cfg(feature = "vector_index")]
with_vector: false,
});
let indexer = IndexerBuilderImpl {
build_type: IndexBuildType::Flush,
@@ -1469,6 +1634,8 @@ mod tests {
inverted_index_config: InvertedIndexConfig::default(),
fulltext_index_config: FulltextIndexConfig::default(),
bloom_filter_index_config: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
}
.build(FileId::random(), 0)
.await;
@@ -1488,6 +1655,8 @@ mod tests {
with_inverted: true,
with_fulltext: true,
with_skipping_bloom: true,
#[cfg(feature = "vector_index")]
with_vector: false,
});
let indexer = IndexerBuilderImpl {
build_type: IndexBuildType::Flush,
@@ -1500,6 +1669,8 @@ mod tests {
inverted_index_config: InvertedIndexConfig::default(),
fulltext_index_config: FulltextIndexConfig::default(),
bloom_filter_index_config: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
}
.build(FileId::random(), 0)
.await;
@@ -1507,6 +1678,82 @@ mod tests {
assert!(indexer.inverted_indexer.is_none());
}
#[cfg(feature = "vector_index")]
#[tokio::test]
async fn test_update_flat_builds_vector_index() {
use datatypes::arrow::array::BinaryBuilder;
use datatypes::arrow::datatypes::{DataType, Field, Schema};
struct TestPathProvider;
impl FilePathProvider for TestPathProvider {
fn build_index_file_path(&self, file_id: RegionFileId) -> String {
format!("index/{}.puffin", file_id)
}
fn build_index_file_path_with_version(&self, index_id: RegionIndexId) -> String {
format!("index/{}.puffin", index_id)
}
fn build_sst_file_path(&self, file_id: RegionFileId) -> String {
format!("sst/{}.parquet", file_id)
}
}
fn f32s_to_bytes(values: &[f32]) -> Vec<u8> {
let mut bytes = Vec::with_capacity(values.len() * 4);
for v in values {
bytes.extend_from_slice(&v.to_le_bytes());
}
bytes
}
let (dir, factory) =
PuffinManagerFactory::new_for_test_async("test_update_flat_builds_vector_index_").await;
let intm_manager = mock_intm_mgr(dir.path().to_string_lossy()).await;
let metadata = mock_region_metadata(MetaConfig {
with_inverted: false,
with_fulltext: false,
with_skipping_bloom: false,
with_vector: true,
});
let mut indexer = IndexerBuilderImpl {
build_type: IndexBuildType::Flush,
metadata,
row_group_size: 1024,
puffin_manager: factory.build(mock_object_store(), TestPathProvider),
write_cache_enabled: false,
intermediate_manager: intm_manager,
index_options: IndexOptions::default(),
inverted_index_config: InvertedIndexConfig::default(),
fulltext_index_config: FulltextIndexConfig::default(),
bloom_filter_index_config: BloomFilterConfig::default(),
vector_index_config: Default::default(),
}
.build(FileId::random(), 0)
.await;
assert!(indexer.vector_indexer.is_some());
let vec1 = f32s_to_bytes(&[1.0, 0.0, 0.0, 0.0]);
let vec2 = f32s_to_bytes(&[0.0, 1.0, 0.0, 0.0]);
let mut builder = BinaryBuilder::with_capacity(2, vec1.len() + vec2.len());
builder.append_value(&vec1);
builder.append_value(&vec2);
let schema = Arc::new(Schema::new(vec![Field::new("vec", DataType::Binary, true)]));
let batch = RecordBatch::try_new(schema, vec![Arc::new(builder.finish())]).unwrap();
indexer.update_flat(&batch).await;
let output = indexer.finish().await;
assert!(output.vector_index.is_available());
assert!(output.vector_index.columns.contains(&6));
}
#[tokio::test]
async fn test_index_build_task_sst_not_exist() {
let env = SchedulerEnv::new().await;
@@ -1839,6 +2086,8 @@ mod tests {
inverted_index_config: InvertedIndexConfig::default(),
fulltext_index_config: FulltextIndexConfig::default(),
bloom_filter_index_config: BloomFilterConfig::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
});
let sst_info = mock_sst_file(metadata.clone(), &env, IndexBuildMode::Async).await;

View File

@@ -23,6 +23,8 @@ impl Indexer {
self.do_abort_inverted_index().await;
self.do_abort_fulltext_index().await;
self.do_abort_bloom_filter().await;
#[cfg(feature = "vector_index")]
self.do_abort_vector_index().await;
self.do_prune_intm_sst_dir().await;
if self.write_cache_enabled {
self.do_abort_clean_fs_temp_dir().await;
@@ -106,4 +108,26 @@ impl Indexer {
.to_string();
TempFileCleaner::clean_atomic_dir_files(fs_accessor.store().store(), &[&fs_handle]).await;
}
#[cfg(feature = "vector_index")]
async fn do_abort_vector_index(&mut self) {
let Some(mut indexer) = self.vector_indexer.take() else {
return;
};
let Err(err) = indexer.abort().await else {
return;
};
if cfg!(any(test, feature = "test")) {
panic!(
"Failed to abort vector index, region_id: {}, file_id: {}, err: {:?}",
self.region_id, self.file_id, err
);
} else {
warn!(
err; "Failed to abort vector index, region_id: {}, file_id: {}",
self.region_id, self.file_id,
);
}
}
}

View File

@@ -17,6 +17,8 @@ use puffin::puffin_manager::{PuffinManager, PuffinWriter};
use store_api::storage::ColumnId;
use crate::sst::file::{RegionFileId, RegionIndexId};
#[cfg(feature = "vector_index")]
use crate::sst::index::VectorIndexOutput;
use crate::sst::index::puffin_manager::SstPuffinWriter;
use crate::sst::index::statistics::{ByteCount, RowCount};
use crate::sst::index::{
@@ -54,6 +56,15 @@ impl Indexer {
return IndexOutput::default();
}
#[cfg(feature = "vector_index")]
{
let success = self.do_finish_vector_index(&mut writer, &mut output).await;
if !success {
self.do_abort().await;
return IndexOutput::default();
}
}
self.do_prune_intm_sst_dir().await;
output.file_size = self.do_finish_puffin_writer(writer).await;
output.version = self.index_version;
@@ -276,6 +287,63 @@ impl Indexer {
output.columns = column_ids;
}
#[cfg(feature = "vector_index")]
async fn do_finish_vector_index(
&mut self,
puffin_writer: &mut SstPuffinWriter,
index_output: &mut IndexOutput,
) -> bool {
let Some(mut indexer) = self.vector_indexer.take() else {
return true;
};
let column_ids = indexer.column_ids().collect();
let err = match indexer.finish(puffin_writer).await {
Ok((row_count, byte_count)) => {
self.fill_vector_index_output(
&mut index_output.vector_index,
row_count,
byte_count,
column_ids,
);
return true;
}
Err(err) => err,
};
if cfg!(any(test, feature = "test")) {
panic!(
"Failed to finish vector index, region_id: {}, file_id: {}, err: {:?}",
self.region_id, self.file_id, err
);
} else {
warn!(
err; "Failed to finish vector index, region_id: {}, file_id: {}",
self.region_id, self.file_id,
);
}
false
}
#[cfg(feature = "vector_index")]
fn fill_vector_index_output(
&mut self,
output: &mut VectorIndexOutput,
row_count: RowCount,
byte_count: ByteCount,
column_ids: Vec<ColumnId>,
) {
debug!(
"Vector index created, region_id: {}, file_id: {}, written_bytes: {}, written_rows: {}, columns: {:?}",
self.region_id, self.file_id, byte_count, row_count, column_ids
);
output.index_size = byte_count;
output.row_count = row_count;
output.columns = column_ids;
}
pub(crate) async fn do_prune_intm_sst_dir(&mut self) {
if let Some(manager) = self.intermediate_manager.take()
&& let Err(e) = manager.prune_sst_dir(&self.region_id, &self.file_id).await

View File

@@ -33,6 +33,10 @@ impl Indexer {
if !self.do_update_bloom_filter(batch).await {
self.do_abort().await;
}
#[cfg(feature = "vector_index")]
if !self.do_update_vector_index(batch).await {
self.do_abort().await;
}
}
/// Returns false if the update failed.
@@ -110,6 +114,32 @@ impl Indexer {
false
}
/// Returns false if the update failed.
#[cfg(feature = "vector_index")]
async fn do_update_vector_index(&mut self, batch: &mut Batch) -> bool {
let Some(creator) = self.vector_indexer.as_mut() else {
return true;
};
let Err(err) = creator.update(batch).await else {
return true;
};
if cfg!(any(test, feature = "test")) {
panic!(
"Failed to update vector index, region_id: {}, file_id: {}, err: {:?}",
self.region_id, self.file_id, err
);
} else {
warn!(
err; "Failed to update vector index, region_id: {}, file_id: {}",
self.region_id, self.file_id,
);
}
false
}
pub(crate) async fn do_update_flat(&mut self, batch: &RecordBatch) {
if batch.num_rows() == 0 {
return;
@@ -124,6 +154,10 @@ impl Indexer {
if !self.do_update_flat_bloom_filter(batch).await {
self.do_abort().await;
}
#[cfg(feature = "vector_index")]
if !self.do_update_flat_vector_index(batch).await {
self.do_abort().await;
}
}
/// Returns false if the update failed.
@@ -200,4 +234,30 @@ impl Indexer {
false
}
/// Returns false if the update failed.
#[cfg(feature = "vector_index")]
async fn do_update_flat_vector_index(&mut self, batch: &RecordBatch) -> bool {
let Some(creator) = self.vector_indexer.as_mut() else {
return true;
};
let Err(err) = creator.update_flat(batch).await else {
return true;
};
if cfg!(any(test, feature = "test")) {
panic!(
"Failed to update vector index with flat format, region_id: {}, file_id: {}, err: {:?}",
self.region_id, self.file_id, err
);
} else {
warn!(
err; "Failed to update vector index with flat format, region_id: {}, file_id: {}",
self.region_id, self.file_id,
);
}
false
}
}

View File

@@ -0,0 +1,920 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Vector index creator using pluggable vector index engines.
use std::collections::HashMap;
use std::sync::Arc;
use std::sync::atomic::AtomicUsize;
use common_telemetry::warn;
use datatypes::arrow::array::{Array, BinaryArray};
use datatypes::arrow::record_batch::RecordBatch;
use datatypes::data_type::ConcreteDataType;
use datatypes::prelude::ValueRef;
use index::vector::{VectorDistanceMetric, VectorIndexOptions, distance_metric_to_usearch};
use puffin::puffin_manager::{PuffinWriter, PutOptions};
use roaring::RoaringBitmap;
use snafu::{ResultExt, ensure};
use store_api::metadata::RegionMetadataRef;
use store_api::storage::{ColumnId, FileId, VectorIndexEngine, VectorIndexEngineType};
use tokio_util::compat::TokioAsyncReadCompatExt;
use usearch::MetricKind;
use crate::error::{
BiErrorsSnafu, OperateAbortedIndexSnafu, PuffinAddBlobSnafu, Result, VectorIndexBuildSnafu,
VectorIndexFinishSnafu,
};
use crate::metrics::{INDEX_CREATE_BYTES_TOTAL, INDEX_CREATE_ROWS_TOTAL};
use crate::read::Batch;
use crate::sst::index::TYPE_VECTOR_INDEX;
use crate::sst::index::intermediate::{
IntermediateLocation, IntermediateManager, TempFileProvider,
};
use crate::sst::index::puffin_manager::SstPuffinWriter;
use crate::sst::index::statistics::{ByteCount, RowCount, Statistics};
use crate::sst::index::vector_index::util::bytes_to_f32_slice;
use crate::sst::index::vector_index::{INDEX_BLOB_TYPE, engine};
/// The buffer size for the pipe used to send index data to the puffin blob.
const PIPE_BUFFER_SIZE_FOR_SENDING_BLOB: usize = 8192;
/// Configuration for a single column's vector index.
#[derive(Debug, Clone)]
pub struct VectorIndexConfig {
/// The vector index engine type.
pub engine: VectorIndexEngineType,
/// The dimension of vectors in this column.
pub dim: usize,
/// The distance metric to use (e.g., L2, Cosine, IP) - usearch format.
pub metric: MetricKind,
/// The original distance metric (for serialization).
pub distance_metric: VectorDistanceMetric,
/// HNSW connectivity parameter (M in the paper).
/// Higher values give better recall but use more memory.
pub connectivity: usize,
/// Expansion factor during index construction (ef_construction).
pub expansion_add: usize,
/// Expansion factor during search (ef_search).
pub expansion_search: usize,
}
impl VectorIndexConfig {
/// Creates a new vector index config from VectorIndexOptions.
pub fn new(dim: usize, options: &VectorIndexOptions) -> Self {
Self {
engine: options.engine,
dim,
metric: distance_metric_to_usearch(options.metric),
distance_metric: options.metric,
connectivity: options.connectivity as usize,
expansion_add: options.expansion_add as usize,
expansion_search: options.expansion_search as usize,
}
}
}
/// Creator for a single column's vector index.
struct VectorIndexCreator {
/// The vector index engine (e.g., USearch HNSW).
engine: Box<dyn VectorIndexEngine>,
/// Configuration for this index.
config: VectorIndexConfig,
/// Bitmap tracking which row offsets have NULL vectors.
/// HNSW keys are sequential (0, 1, 2...) but row offsets may have gaps due to NULLs.
null_bitmap: RoaringBitmap,
/// Current row offset (including NULLs).
current_row_offset: u64,
/// Next HNSW key to assign (only for non-NULL vectors).
next_hnsw_key: u64,
/// Memory usage estimation.
memory_usage: usize,
}
impl VectorIndexCreator {
/// Creates a new vector index creator.
fn new(config: VectorIndexConfig) -> Result<Self> {
let engine_instance = engine::create_engine(config.engine, &config)?;
Ok(Self {
engine: engine_instance,
config,
null_bitmap: RoaringBitmap::new(),
current_row_offset: 0,
next_hnsw_key: 0,
memory_usage: 0,
})
}
/// Reserves capacity for the expected number of vectors.
#[allow(dead_code)]
fn reserve(&mut self, capacity: usize) -> Result<()> {
self.engine.reserve(capacity).map_err(|e| {
VectorIndexBuildSnafu {
reason: format!("Failed to reserve capacity: {}", e),
}
.build()
})
}
/// Adds a vector to the index.
/// Returns the HNSW key assigned to this vector.
fn add_vector(&mut self, vector: &[f32]) -> Result<u64> {
let key = self.next_hnsw_key;
self.engine.add(key, vector).map_err(|e| {
VectorIndexBuildSnafu {
reason: e.to_string(),
}
.build()
})?;
self.next_hnsw_key += 1;
self.current_row_offset += 1;
self.memory_usage = self.engine.memory_usage();
Ok(key)
}
/// Records a NULL vector at the current row offset.
fn add_null(&mut self) {
self.null_bitmap.insert(self.current_row_offset as u32);
self.current_row_offset += 1;
}
/// Records multiple NULL vectors starting at the current row offset.
fn add_nulls(&mut self, n: usize) {
let start = self.current_row_offset as u32;
let end = start + n as u32;
self.null_bitmap.insert_range(start..end);
self.current_row_offset += n as u64;
}
/// Returns the serialized size of the index.
fn serialized_length(&self) -> usize {
self.engine.serialized_length()
}
/// Serializes the index to a buffer.
fn save_to_buffer(&self, buffer: &mut [u8]) -> Result<()> {
self.engine.save_to_buffer(buffer).map_err(|e| {
VectorIndexFinishSnafu {
reason: format!("Failed to serialize index: {}", e),
}
.build()
})
}
/// Returns the memory usage of this creator.
fn memory_usage(&self) -> usize {
self.memory_usage + self.null_bitmap.serialized_size()
}
/// Returns the number of vectors in the index (excluding NULLs).
fn size(&self) -> usize {
self.engine.size()
}
/// Returns the engine type.
fn engine_type(&self) -> VectorIndexEngineType {
self.config.engine
}
/// Returns the distance metric.
fn metric(&self) -> VectorDistanceMetric {
self.config.distance_metric
}
}
/// The indexer for vector indexes across multiple columns.
pub struct VectorIndexer {
/// Per-column vector index creators.
creators: HashMap<ColumnId, VectorIndexCreator>,
/// Provider for intermediate files.
temp_file_provider: Arc<TempFileProvider>,
/// Whether the indexing process has been aborted.
aborted: bool,
/// Statistics for this indexer.
stats: Statistics,
/// Global memory usage tracker.
#[allow(dead_code)]
global_memory_usage: Arc<AtomicUsize>,
/// Region metadata for column lookups.
#[allow(dead_code)]
metadata: RegionMetadataRef,
/// Memory usage threshold.
memory_usage_threshold: Option<usize>,
}
impl VectorIndexer {
/// Creates a new vector indexer.
///
/// Returns `None` if there are no vector columns that need indexing.
pub fn new(
sst_file_id: FileId,
metadata: &RegionMetadataRef,
intermediate_manager: IntermediateManager,
memory_usage_threshold: Option<usize>,
vector_index_options: &HashMap<ColumnId, VectorIndexOptions>,
) -> Result<Option<Self>> {
let mut creators = HashMap::new();
let temp_file_provider = Arc::new(TempFileProvider::new(
IntermediateLocation::new(&metadata.region_id, &sst_file_id),
intermediate_manager,
));
let global_memory_usage = Arc::new(AtomicUsize::new(0));
// Find all vector columns that have vector index enabled
for column in &metadata.column_metadatas {
// Check if this column has vector index options configured
let Some(options) = vector_index_options.get(&column.column_id) else {
continue;
};
// Verify the column is a vector type
let ConcreteDataType::Vector(vector_type) = &column.column_schema.data_type else {
continue;
};
let config = VectorIndexConfig::new(vector_type.dim as usize, options);
let creator = VectorIndexCreator::new(config)?;
creators.insert(column.column_id, creator);
}
if creators.is_empty() {
return Ok(None);
}
let indexer = Self {
creators,
temp_file_provider,
aborted: false,
stats: Statistics::new(TYPE_VECTOR_INDEX),
global_memory_usage,
metadata: metadata.clone(),
memory_usage_threshold,
};
Ok(Some(indexer))
}
/// Updates index with a batch of rows.
/// Garbage will be cleaned up if the update fails.
pub async fn update(&mut self, batch: &mut Batch) -> Result<()> {
ensure!(!self.aborted, OperateAbortedIndexSnafu);
if self.creators.is_empty() {
return Ok(());
}
if let Err(update_err) = self.do_update(batch).await {
// Clean up garbage if the update failed
if let Err(err) = self.do_cleanup().await {
if cfg!(any(test, feature = "test")) {
panic!("Failed to clean up vector index creator, err: {err:?}");
} else {
warn!(err; "Failed to clean up vector index creator");
}
}
return Err(update_err);
}
Ok(())
}
/// Updates index with a flat format `RecordBatch`.
/// Garbage will be cleaned up if the update fails.
pub async fn update_flat(&mut self, batch: &RecordBatch) -> Result<()> {
ensure!(!self.aborted, OperateAbortedIndexSnafu);
if self.creators.is_empty() || batch.num_rows() == 0 {
return Ok(());
}
if let Err(update_err) = self.do_update_flat(batch).await {
// Clean up garbage if the update failed
if let Err(err) = self.do_cleanup().await {
if cfg!(any(test, feature = "test")) {
panic!("Failed to clean up vector index creator, err: {err:?}");
} else {
warn!(err; "Failed to clean up vector index creator");
}
}
return Err(update_err);
}
Ok(())
}
/// Internal update implementation.
async fn do_update(&mut self, batch: &mut Batch) -> Result<()> {
let mut guard = self.stats.record_update();
let n = batch.num_rows();
guard.inc_row_count(n);
for (col_id, creator) in &mut self.creators {
let Some(values) = batch.field_col_value(*col_id) else {
continue;
};
// Process each row in the batch
for i in 0..n {
let value = values.data.get_ref(i);
if value.is_null() {
creator.add_null();
} else {
// Extract the vector bytes and convert to f32 slice
if let ValueRef::Binary(bytes) = value {
let floats = bytes_to_f32_slice(bytes);
if floats.len() != creator.config.dim {
return VectorIndexBuildSnafu {
reason: format!(
"Vector dimension mismatch: expected {}, got {}",
creator.config.dim,
floats.len()
),
}
.fail();
}
creator.add_vector(&floats)?;
} else {
creator.add_null();
}
}
}
// Check memory limit - abort index creation if exceeded
if let Some(threshold) = self.memory_usage_threshold {
let current_usage = creator.memory_usage();
if current_usage > threshold {
warn!(
"Vector index memory usage {} exceeds threshold {}, aborting index creation, region_id: {}",
current_usage, threshold, self.metadata.region_id
);
return VectorIndexBuildSnafu {
reason: format!(
"Memory usage {} exceeds threshold {}",
current_usage, threshold
),
}
.fail();
}
}
}
Ok(())
}
/// Internal flat update implementation.
async fn do_update_flat(&mut self, batch: &RecordBatch) -> Result<()> {
let mut guard = self.stats.record_update();
let n = batch.num_rows();
guard.inc_row_count(n);
for (col_id, creator) in &mut self.creators {
// This should never happen: creator exists but column not in metadata
let column_meta = self.metadata.column_by_id(*col_id).ok_or_else(|| {
VectorIndexBuildSnafu {
reason: format!(
"Column {} not found in region metadata, this is a bug",
col_id
),
}
.build()
})?;
let column_name = &column_meta.column_schema.name;
// Column not in batch is normal for flat format - treat as NULLs
let Some(column_array) = batch.column_by_name(column_name) else {
creator.add_nulls(n);
continue;
};
// Vector type must be stored as binary array
let binary_array = column_array
.as_any()
.downcast_ref::<BinaryArray>()
.ok_or_else(|| {
VectorIndexBuildSnafu {
reason: format!(
"Column {} is not a binary array, got {:?}",
column_name,
column_array.data_type()
),
}
.build()
})?;
for i in 0..n {
if !binary_array.is_valid(i) {
creator.add_null();
} else {
let bytes = binary_array.value(i);
let floats = bytes_to_f32_slice(bytes);
if floats.len() != creator.config.dim {
return VectorIndexBuildSnafu {
reason: format!(
"Vector dimension mismatch: expected {}, got {}",
creator.config.dim,
floats.len()
),
}
.fail();
}
creator.add_vector(&floats)?;
}
}
if let Some(threshold) = self.memory_usage_threshold {
let current_usage = creator.memory_usage();
if current_usage > threshold {
warn!(
"Vector index memory usage {} exceeds threshold {}, aborting index creation, region_id: {}",
current_usage, threshold, self.metadata.region_id
);
return VectorIndexBuildSnafu {
reason: format!(
"Memory usage {} exceeds threshold {}",
current_usage, threshold
),
}
.fail();
}
}
}
Ok(())
}
/// Finishes index creation and writes to puffin.
/// Returns the number of rows and bytes written.
pub async fn finish(
&mut self,
puffin_writer: &mut SstPuffinWriter,
) -> Result<(RowCount, ByteCount)> {
ensure!(!self.aborted, OperateAbortedIndexSnafu);
if self.stats.row_count() == 0 {
// No IO is performed, no garbage to clean up
return Ok((0, 0));
}
let finish_res = self.do_finish(puffin_writer).await;
// Clean up garbage whether finish succeeded or not
if let Err(err) = self.do_cleanup().await {
if cfg!(any(test, feature = "test")) {
panic!("Failed to clean up vector index creator, err: {err:?}");
} else {
warn!(err; "Failed to clean up vector index creator");
}
}
// Report metrics on successful finish
if finish_res.is_ok() {
INDEX_CREATE_ROWS_TOTAL
.with_label_values(&[TYPE_VECTOR_INDEX])
.inc_by(self.stats.row_count() as u64);
INDEX_CREATE_BYTES_TOTAL
.with_label_values(&[TYPE_VECTOR_INDEX])
.inc_by(self.stats.byte_count());
}
finish_res.map(|_| (self.stats.row_count(), self.stats.byte_count()))
}
/// Internal finish implementation.
async fn do_finish(&mut self, puffin_writer: &mut SstPuffinWriter) -> Result<()> {
let mut guard = self.stats.record_finish();
for (id, creator) in &mut self.creators {
if creator.size() == 0 {
// No vectors to index
continue;
}
let written_bytes = Self::do_finish_single_creator(*id, creator, puffin_writer).await?;
guard.inc_byte_count(written_bytes);
}
Ok(())
}
/// Finishes a single column's vector index.
///
/// The blob format v1 (header = 33 bytes):
/// ```text
/// +------------------+
/// | Version | 1 byte (u8, = 1)
/// +------------------+
/// | Engine type | 1 byte (u8, engine identifier)
/// +------------------+
/// | Dimension | 4 bytes (u32, little-endian)
/// +------------------+
/// | Metric | 1 byte (u8, distance metric)
/// +------------------+
/// | Connectivity | 2 bytes (u16, little-endian, HNSW M parameter)
/// +------------------+
/// | Expansion add | 2 bytes (u16, little-endian, ef_construction)
/// +------------------+
/// | Expansion search | 2 bytes (u16, little-endian, ef_search)
/// +------------------+
/// | Total rows | 8 bytes (u64, little-endian, total rows in SST)
/// +------------------+
/// | Indexed rows | 8 bytes (u64, little-endian, non-NULL rows indexed)
/// +------------------+
/// | NULL bitmap len | 4 bytes (u32, little-endian)
/// +------------------+
/// | NULL bitmap | variable length (serialized RoaringBitmap)
/// +------------------+
/// | Vector index | variable length (engine-specific serialized format)
/// +------------------+
/// ```
async fn do_finish_single_creator(
col_id: ColumnId,
creator: &mut VectorIndexCreator,
puffin_writer: &mut SstPuffinWriter,
) -> Result<ByteCount> {
// Serialize the NULL bitmap
let mut null_bitmap_bytes = Vec::new();
creator
.null_bitmap
.serialize_into(&mut null_bitmap_bytes)
.map_err(|e| {
VectorIndexFinishSnafu {
reason: format!("Failed to serialize NULL bitmap: {}", e),
}
.build()
})?;
// Serialize the vector index
let index_size = creator.serialized_length();
let mut index_bytes = vec![0u8; index_size];
creator.save_to_buffer(&mut index_bytes)?;
/// Size of the vector index blob header in bytes.
/// Header format: version(1) + engine(1) + dim(4) + metric(1) +
/// connectivity(2) + expansion_add(2) + expansion_search(2) +
/// total_rows(8) + indexed_rows(8) + bitmap_len(4) = 33 bytes
const VECTOR_INDEX_BLOB_HEADER_SIZE: usize = 33;
let total_size =
VECTOR_INDEX_BLOB_HEADER_SIZE + null_bitmap_bytes.len() + index_bytes.len();
let mut blob_data = Vec::with_capacity(total_size);
// Write version (1 byte)
blob_data.push(1u8);
// Write engine type (1 byte)
blob_data.push(creator.engine_type().as_u8());
// Write dimension (4 bytes, little-endian)
blob_data.extend_from_slice(&(creator.config.dim as u32).to_le_bytes());
// Write metric (1 byte)
blob_data.push(creator.metric().as_u8());
// Write connectivity/M (2 bytes, little-endian)
blob_data.extend_from_slice(&(creator.config.connectivity as u16).to_le_bytes());
// Write expansion_add/ef_construction (2 bytes, little-endian)
blob_data.extend_from_slice(&(creator.config.expansion_add as u16).to_le_bytes());
// Write expansion_search/ef_search (2 bytes, little-endian)
blob_data.extend_from_slice(&(creator.config.expansion_search as u16).to_le_bytes());
// Write total_rows (8 bytes, little-endian)
blob_data.extend_from_slice(&creator.current_row_offset.to_le_bytes());
// Write indexed_rows (8 bytes, little-endian)
blob_data.extend_from_slice(&creator.next_hnsw_key.to_le_bytes());
// Write NULL bitmap length (4 bytes, little-endian)
let bitmap_len: u32 = null_bitmap_bytes.len().try_into().map_err(|_| {
VectorIndexBuildSnafu {
reason: format!(
"NULL bitmap size {} exceeds maximum allowed size {}",
null_bitmap_bytes.len(),
u32::MAX
),
}
.build()
})?;
blob_data.extend_from_slice(&bitmap_len.to_le_bytes());
// Write NULL bitmap
blob_data.extend_from_slice(&null_bitmap_bytes);
// Write vector index
blob_data.extend_from_slice(&index_bytes);
// Create blob name following the same pattern as bloom filter
let blob_name = format!("{}-{}", INDEX_BLOB_TYPE, col_id);
// Write to puffin using a pipe
let (tx, rx) = tokio::io::duplex(PIPE_BUFFER_SIZE_FOR_SENDING_BLOB);
// Writer task writes the blob data to the pipe
let write_index = async move {
use tokio::io::AsyncWriteExt;
let mut writer = tx;
writer.write_all(&blob_data).await?;
writer.shutdown().await?;
Ok::<(), std::io::Error>(())
};
let (index_write_result, puffin_add_blob) = futures::join!(
write_index,
puffin_writer.put_blob(
&blob_name,
rx.compat(),
PutOptions::default(),
Default::default()
)
);
match (
puffin_add_blob.context(PuffinAddBlobSnafu),
index_write_result.map_err(|e| {
VectorIndexFinishSnafu {
reason: format!("Failed to write blob data: {}", e),
}
.build()
}),
) {
(Err(e1), Err(e2)) => BiErrorsSnafu {
first: Box::new(e1),
second: Box::new(e2),
}
.fail()?,
(Ok(_), e @ Err(_)) => e?,
(e @ Err(_), Ok(_)) => e.map(|_| ())?,
(Ok(written_bytes), Ok(_)) => {
return Ok(written_bytes);
}
}
Ok(0)
}
/// Aborts index creation and cleans up garbage.
pub async fn abort(&mut self) -> Result<()> {
if self.aborted {
return Ok(());
}
self.aborted = true;
self.do_cleanup().await
}
/// Cleans up temporary files.
async fn do_cleanup(&mut self) -> Result<()> {
let mut _guard = self.stats.record_cleanup();
self.creators.clear();
self.temp_file_provider.cleanup().await
}
/// Returns the memory usage of the indexer.
pub fn memory_usage(&self) -> usize {
self.creators.values().map(|c| c.memory_usage()).sum()
}
/// Returns the column IDs being indexed.
pub fn column_ids(&self) -> impl Iterator<Item = ColumnId> + '_ {
self.creators.keys().copied()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_vector_index_creator() {
let options = VectorIndexOptions::default();
let config = VectorIndexConfig::new(4, &options);
let mut creator = VectorIndexCreator::new(config).unwrap();
creator.reserve(10).unwrap();
// Add some vectors
let v1 = vec![1.0f32, 0.0, 0.0, 0.0];
let v2 = vec![0.0f32, 1.0, 0.0, 0.0];
creator.add_vector(&v1).unwrap();
creator.add_null();
creator.add_vector(&v2).unwrap();
assert_eq!(creator.size(), 2); // 2 vectors (excluding NULL)
assert_eq!(creator.current_row_offset, 3); // 3 rows total
assert!(creator.null_bitmap.contains(1)); // Row 1 is NULL
}
#[test]
fn test_vector_index_creator_serialization() {
let options = VectorIndexOptions::default();
let config = VectorIndexConfig::new(4, &options);
let mut creator = VectorIndexCreator::new(config).unwrap();
creator.reserve(10).unwrap();
// Add vectors
let vectors = vec![
vec![1.0f32, 0.0, 0.0, 0.0],
vec![0.0f32, 1.0, 0.0, 0.0],
vec![0.0f32, 0.0, 1.0, 0.0],
];
for v in &vectors {
creator.add_vector(v).unwrap();
}
// Test serialization
let size = creator.serialized_length();
assert!(size > 0);
let mut buffer = vec![0u8; size];
creator.save_to_buffer(&mut buffer).unwrap();
// Verify buffer is not empty and starts with some data
assert!(!buffer.iter().all(|&b| b == 0));
}
#[test]
fn test_vector_index_creator_null_bitmap_serialization() {
let options = VectorIndexOptions::default();
let config = VectorIndexConfig::new(4, &options);
let mut creator = VectorIndexCreator::new(config).unwrap();
creator.reserve(10).unwrap();
// Add pattern: vector, null, vector, null, null, vector
creator.add_vector(&[1.0, 0.0, 0.0, 0.0]).unwrap();
creator.add_null();
creator.add_vector(&[0.0, 1.0, 0.0, 0.0]).unwrap();
creator.add_nulls(2);
creator.add_vector(&[0.0, 0.0, 1.0, 0.0]).unwrap();
assert_eq!(creator.size(), 3); // 3 vectors
assert_eq!(creator.current_row_offset, 6); // 6 rows total
assert!(!creator.null_bitmap.contains(0));
assert!(creator.null_bitmap.contains(1));
assert!(!creator.null_bitmap.contains(2));
assert!(creator.null_bitmap.contains(3));
assert!(creator.null_bitmap.contains(4));
assert!(!creator.null_bitmap.contains(5));
// Test NULL bitmap serialization
let mut bitmap_bytes = Vec::new();
creator
.null_bitmap
.serialize_into(&mut bitmap_bytes)
.unwrap();
// Deserialize and verify
let restored = RoaringBitmap::deserialize_from(&bitmap_bytes[..]).unwrap();
assert_eq!(restored.len(), 3); // 3 NULLs
assert!(restored.contains(1));
assert!(restored.contains(3));
assert!(restored.contains(4));
}
#[test]
fn test_vector_index_config() {
use index::vector::VectorDistanceMetric;
let options = VectorIndexOptions {
engine: VectorIndexEngineType::default(),
metric: VectorDistanceMetric::Cosine,
connectivity: 32,
expansion_add: 256,
expansion_search: 128,
};
let config = VectorIndexConfig::new(128, &options);
assert_eq!(config.engine, VectorIndexEngineType::Usearch);
assert_eq!(config.dim, 128);
assert_eq!(config.metric, MetricKind::Cos);
assert_eq!(config.connectivity, 32);
assert_eq!(config.expansion_add, 256);
assert_eq!(config.expansion_search, 128);
}
#[test]
fn test_vector_index_header_format() {
use index::vector::VectorDistanceMetric;
// Create config with specific HNSW parameters
let options = VectorIndexOptions {
engine: VectorIndexEngineType::Usearch,
metric: VectorDistanceMetric::L2sq,
connectivity: 24,
expansion_add: 200,
expansion_search: 100,
};
let config = VectorIndexConfig::new(4, &options);
let mut creator = VectorIndexCreator::new(config).unwrap();
creator.reserve(10).unwrap();
// Add pattern: vector, null, vector, null, vector
creator.add_vector(&[1.0, 0.0, 0.0, 0.0]).unwrap();
creator.add_null();
creator.add_vector(&[0.0, 1.0, 0.0, 0.0]).unwrap();
creator.add_null();
creator.add_vector(&[0.0, 0.0, 1.0, 0.0]).unwrap();
// Verify counts
assert_eq!(creator.current_row_offset, 5); // total_rows
assert_eq!(creator.next_hnsw_key, 3); // indexed_rows
// Build blob data manually (simulating the header writing in `do_finish_single_creator`)
let mut null_bitmap_bytes = Vec::new();
creator
.null_bitmap
.serialize_into(&mut null_bitmap_bytes)
.unwrap();
let index_size = creator.serialized_length();
let mut index_bytes = vec![0u8; index_size];
creator.save_to_buffer(&mut index_bytes).unwrap();
// Header: 33 bytes
let header_size = 33;
let total_size = header_size + null_bitmap_bytes.len() + index_bytes.len();
let mut blob_data = Vec::with_capacity(total_size);
// Write header fields
blob_data.push(1u8); // version
blob_data.push(creator.engine_type().as_u8()); // engine type
blob_data.extend_from_slice(&(creator.config.dim as u32).to_le_bytes()); // dimension
blob_data.push(creator.metric().as_u8()); // metric
blob_data.extend_from_slice(&(creator.config.connectivity as u16).to_le_bytes());
blob_data.extend_from_slice(&(creator.config.expansion_add as u16).to_le_bytes());
blob_data.extend_from_slice(&(creator.config.expansion_search as u16).to_le_bytes());
blob_data.extend_from_slice(&creator.current_row_offset.to_le_bytes()); // total_rows
blob_data.extend_from_slice(&creator.next_hnsw_key.to_le_bytes()); // indexed_rows
let bitmap_len: u32 = null_bitmap_bytes.len().try_into().unwrap();
blob_data.extend_from_slice(&bitmap_len.to_le_bytes());
blob_data.extend_from_slice(&null_bitmap_bytes);
blob_data.extend_from_slice(&index_bytes);
// Verify header size
assert_eq!(blob_data.len(), total_size);
// Parse header and verify values
assert_eq!(blob_data[0], 1); // version
assert_eq!(blob_data[1], VectorIndexEngineType::Usearch.as_u8()); // engine
let dim = u32::from_le_bytes([blob_data[2], blob_data[3], blob_data[4], blob_data[5]]);
assert_eq!(dim, 4);
let metric = blob_data[6];
assert_eq!(
metric,
datatypes::schema::VectorDistanceMetric::L2sq.as_u8()
);
let connectivity = u16::from_le_bytes([blob_data[7], blob_data[8]]);
assert_eq!(connectivity, 24);
let expansion_add = u16::from_le_bytes([blob_data[9], blob_data[10]]);
assert_eq!(expansion_add, 200);
let expansion_search = u16::from_le_bytes([blob_data[11], blob_data[12]]);
assert_eq!(expansion_search, 100);
let total_rows = u64::from_le_bytes([
blob_data[13],
blob_data[14],
blob_data[15],
blob_data[16],
blob_data[17],
blob_data[18],
blob_data[19],
blob_data[20],
]);
assert_eq!(total_rows, 5);
let indexed_rows = u64::from_le_bytes([
blob_data[21],
blob_data[22],
blob_data[23],
blob_data[24],
blob_data[25],
blob_data[26],
blob_data[27],
blob_data[28],
]);
assert_eq!(indexed_rows, 3);
let null_bitmap_len =
u32::from_le_bytes([blob_data[29], blob_data[30], blob_data[31], blob_data[32]]);
assert_eq!(null_bitmap_len as usize, null_bitmap_bytes.len());
// Verify null bitmap can be deserialized
let null_bitmap_data = &blob_data[header_size..header_size + null_bitmap_len as usize];
let restored_bitmap = RoaringBitmap::deserialize_from(null_bitmap_data).unwrap();
assert_eq!(restored_bitmap.len(), 2); // 2 nulls
assert!(restored_bitmap.contains(1));
assert!(restored_bitmap.contains(3));
}
}
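
For readers of the blob layout documented in `do_finish_single_creator`, the sketch below shows how a consumer could parse the v1 header and map an HNSW key back to its SST row offset via the NULL bitmap. It is a hypothetical illustration, not part of this patch: it assumes only the 33-byte layout above and the `roaring` crate already used here; `ParsedHeader` and `hnsw_key_to_row_offset` are made-up names.

// Minimal, illustrative parser for the vector index blob header described above.
// Assumes the documented v1 layout; not part of the crate.
use roaring::RoaringBitmap;

struct ParsedHeader {
    version: u8,
    engine: u8,
    dim: u32,
    metric: u8,
    connectivity: u16,
    expansion_add: u16,
    expansion_search: u16,
    total_rows: u64,
    indexed_rows: u64,
    null_bitmap: RoaringBitmap,
}

fn parse_header(blob: &[u8]) -> Option<(ParsedHeader, &[u8])> {
    const HEADER_SIZE: usize = 33;
    if blob.len() < HEADER_SIZE {
        return None;
    }
    // Little-endian readers over the fixed-offset header fields.
    let u16_at = |i: usize| u16::from_le_bytes([blob[i], blob[i + 1]]);
    let u32_at = |i: usize| u32::from_le_bytes(blob[i..i + 4].try_into().unwrap());
    let u64_at = |i: usize| u64::from_le_bytes(blob[i..i + 8].try_into().unwrap());
    let bitmap_len = u32_at(29) as usize;
    let bitmap_end = HEADER_SIZE + bitmap_len;
    if blob.len() < bitmap_end {
        return None;
    }
    let null_bitmap = RoaringBitmap::deserialize_from(&blob[HEADER_SIZE..bitmap_end]).ok()?;
    let header = ParsedHeader {
        version: blob[0],
        engine: blob[1],
        dim: u32_at(2),
        metric: blob[6],
        connectivity: u16_at(7),
        expansion_add: u16_at(9),
        expansion_search: u16_at(11),
        total_rows: u64_at(13),
        indexed_rows: u64_at(21),
        null_bitmap,
    };
    // Everything after the bitmap is the engine-specific serialized index.
    Some((header, &blob[bitmap_end..]))
}

// Maps an HNSW key (assigned sequentially to non-NULL rows only) back to its SST row
// offset by walking row offsets and skipping rows marked NULL in the bitmap.
fn hnsw_key_to_row_offset(header: &ParsedHeader, key: u64) -> Option<u64> {
    let mut non_null_seen = 0u64;
    for row in 0..header.total_rows {
        // The bitmap stores row offsets as u32, matching `add_null` in the creator.
        if header.null_bitmap.contains(row as u32) {
            continue;
        }
        if non_null_seen == key {
            return Some(row);
        }
        non_null_seen += 1;
    }
    None
}

In this scheme, key `k` is the `k`-th non-NULL row, which is why the NULL bitmap must be consulted before translating search results back to row offsets.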

View File

@@ -0,0 +1,45 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Pluggable vector index engine implementations.
mod usearch_impl;
use store_api::storage::{VectorIndexEngine, VectorIndexEngineType};
pub use usearch_impl::UsearchEngine;
use crate::error::Result;
use crate::sst::index::vector_index::creator::VectorIndexConfig;
/// Creates a new vector index engine based on the engine type.
pub fn create_engine(
engine_type: VectorIndexEngineType,
config: &VectorIndexConfig,
) -> Result<Box<dyn VectorIndexEngine>> {
match engine_type {
VectorIndexEngineType::Usearch => Ok(Box::new(UsearchEngine::create(config)?)),
}
}
/// Loads a vector index engine from serialized data.
#[allow(unused)]
pub fn load_engine(
engine_type: VectorIndexEngineType,
config: &VectorIndexConfig,
data: &[u8],
) -> Result<Box<dyn VectorIndexEngine>> {
match engine_type {
VectorIndexEngineType::Usearch => Ok(Box::new(UsearchEngine::load(config, data)?)),
}
}
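
As a rough illustration of the pluggable design above, the sketch below drives an index through the engine-agnostic `VectorIndexEngine` trait object returned by `create_engine` and reloads it with `load_engine`. It mirrors the USearch tests in this patch and assumes the crate-internal `VectorIndexConfig` from `creator.rs`; it is an example, not part of the change.

// Hypothetical round-trip through the factory functions above (not part of this patch).
use index::vector::VectorDistanceMetric;
use store_api::storage::VectorIndexEngineType;
use usearch::MetricKind;

use crate::error::Result;
use crate::sst::index::vector_index::creator::VectorIndexConfig;
use crate::sst::index::vector_index::engine::{create_engine, load_engine};

fn round_trip_example() -> Result<()> {
    let config = VectorIndexConfig {
        engine: VectorIndexEngineType::Usearch,
        dim: 4,
        metric: MetricKind::L2sq,
        distance_metric: VectorDistanceMetric::L2sq,
        connectivity: 16,
        expansion_add: 128,
        expansion_search: 64,
    };
    // Build an index without naming a concrete engine type in the calling code.
    let mut engine = create_engine(VectorIndexEngineType::Usearch, &config)?;
    engine.reserve(2).unwrap();
    engine.add(0, &[1.0, 0.0, 0.0, 0.0]).unwrap();
    engine.add(1, &[0.0, 1.0, 0.0, 0.0]).unwrap();
    // Serialize, then reload through the matching engine type.
    let mut buf = vec![0u8; engine.serialized_length()];
    engine.save_to_buffer(&mut buf).unwrap();
    let loaded = load_engine(VectorIndexEngineType::Usearch, &config, &buf)?;
    let matches = loaded.search(&[1.0, 0.0, 0.0, 0.0], 1).unwrap();
    assert_eq!(matches.keys[0], 0);
    Ok(())
}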

View File

@@ -0,0 +1,231 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! USearch HNSW implementation of VectorIndexEngine.
use common_error::ext::BoxedError;
use store_api::storage::{VectorIndexEngine, VectorSearchMatches};
use usearch::{Index, IndexOptions, ScalarKind};
use crate::error::{Result, VectorIndexBuildSnafu};
use crate::sst::index::vector_index::creator::VectorIndexConfig;
/// USearch-based vector index engine using HNSW algorithm.
pub struct UsearchEngine {
index: Index,
}
impl UsearchEngine {
/// Creates a new USearch engine with the given configuration.
pub fn create(config: &VectorIndexConfig) -> Result<Self> {
let options = IndexOptions {
dimensions: config.dim,
metric: config.metric,
quantization: ScalarKind::F32,
connectivity: config.connectivity,
expansion_add: config.expansion_add,
expansion_search: config.expansion_search,
multi: false,
};
let index = Index::new(&options).map_err(|e| {
VectorIndexBuildSnafu {
reason: format!("Failed to create USearch index: {}", e),
}
.build()
})?;
Ok(Self { index })
}
/// Loads a USearch engine from serialized data.
#[allow(unused)]
pub fn load(config: &VectorIndexConfig, data: &[u8]) -> Result<Self> {
let options = IndexOptions {
dimensions: config.dim,
metric: config.metric,
quantization: ScalarKind::F32,
// These will be loaded from serialized data
connectivity: 0,
expansion_add: 0,
expansion_search: 0,
multi: false,
};
let index = Index::new(&options).map_err(|e| {
VectorIndexBuildSnafu {
reason: format!("Failed to create USearch index for loading: {}", e),
}
.build()
})?;
index.load_from_buffer(data).map_err(|e| {
VectorIndexBuildSnafu {
reason: format!("Failed to load USearch index from buffer: {}", e),
}
.build()
})?;
Ok(Self { index })
}
}
impl VectorIndexEngine for UsearchEngine {
fn add(&mut self, key: u64, vector: &[f32]) -> Result<(), BoxedError> {
// Reserve capacity if needed
if self.index.size() >= self.index.capacity() {
let new_capacity = std::cmp::max(1, self.index.capacity() * 2);
self.index.reserve(new_capacity).map_err(|e| {
BoxedError::new(
VectorIndexBuildSnafu {
reason: format!("Failed to reserve capacity: {}", e),
}
.build(),
)
})?;
}
self.index.add(key, vector).map_err(|e| {
BoxedError::new(
VectorIndexBuildSnafu {
reason: format!("Failed to add vector: {}", e),
}
.build(),
)
})
}
fn search(&self, query: &[f32], k: usize) -> Result<VectorSearchMatches, BoxedError> {
let matches = self.index.search(query, k).map_err(|e| {
BoxedError::new(
VectorIndexBuildSnafu {
reason: format!("Failed to search: {}", e),
}
.build(),
)
})?;
Ok(VectorSearchMatches {
keys: matches.keys,
distances: matches.distances,
})
}
fn serialized_length(&self) -> usize {
self.index.serialized_length()
}
fn save_to_buffer(&self, buffer: &mut [u8]) -> Result<(), BoxedError> {
self.index.save_to_buffer(buffer).map_err(|e| {
BoxedError::new(
VectorIndexBuildSnafu {
reason: format!("Failed to save to buffer: {}", e),
}
.build(),
)
})
}
fn reserve(&mut self, capacity: usize) -> Result<(), BoxedError> {
self.index.reserve(capacity).map_err(|e| {
BoxedError::new(
VectorIndexBuildSnafu {
reason: format!("Failed to reserve: {}", e),
}
.build(),
)
})
}
fn size(&self) -> usize {
self.index.size()
}
fn capacity(&self) -> usize {
self.index.capacity()
}
fn memory_usage(&self) -> usize {
self.index.memory_usage()
}
}
#[cfg(test)]
mod tests {
use index::vector::VectorDistanceMetric;
use store_api::storage::VectorIndexEngineType;
use usearch::MetricKind;
use super::*;
fn test_config() -> VectorIndexConfig {
VectorIndexConfig {
engine: VectorIndexEngineType::Usearch,
dim: 4,
metric: MetricKind::L2sq,
distance_metric: VectorDistanceMetric::L2sq,
connectivity: 16,
expansion_add: 128,
expansion_search: 64,
}
}
#[test]
fn test_usearch_engine_create() {
let config = test_config();
let engine = UsearchEngine::create(&config).unwrap();
assert_eq!(engine.size(), 0);
}
#[test]
fn test_usearch_engine_add_and_search() {
let config = test_config();
let mut engine = UsearchEngine::create(&config).unwrap();
// Add some vectors
engine.add(0, &[1.0, 0.0, 0.0, 0.0]).unwrap();
engine.add(1, &[0.0, 1.0, 0.0, 0.0]).unwrap();
engine.add(2, &[0.0, 0.0, 1.0, 0.0]).unwrap();
assert_eq!(engine.size(), 3);
// Search
let matches = engine.search(&[1.0, 0.0, 0.0, 0.0], 2).unwrap();
assert_eq!(matches.keys.len(), 2);
// First result should be the exact match (key 0)
assert_eq!(matches.keys[0], 0);
}
#[test]
fn test_usearch_engine_serialization() {
let config = test_config();
let mut engine = UsearchEngine::create(&config).unwrap();
engine.add(0, &[1.0, 0.0, 0.0, 0.0]).unwrap();
engine.add(1, &[0.0, 1.0, 0.0, 0.0]).unwrap();
// Serialize
let len = engine.serialized_length();
let mut buffer = vec![0u8; len];
engine.save_to_buffer(&mut buffer).unwrap();
// Load
let loaded = UsearchEngine::load(&config, &buffer).unwrap();
assert_eq!(loaded.size(), 2);
// Verify search works on loaded index
let matches = loaded.search(&[1.0, 0.0, 0.0, 0.0], 1).unwrap();
assert_eq!(matches.keys[0], 0);
}
}

View File

@@ -0,0 +1,22 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Vector index module for HNSW-based approximate nearest neighbor search.
pub(crate) mod creator;
pub(crate) mod engine;
pub(crate) mod util;
/// The blob type identifier for vector index in puffin files.
pub(crate) const INDEX_BLOB_TYPE: &str = "greptime-vector-index-v1";

View File

@@ -0,0 +1,108 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Utility functions for vector index operations.
use std::borrow::Cow;
/// Converts a byte slice (little-endian format) to f32 slice, handling unaligned data gracefully.
/// Returns `Cow::Borrowed` for aligned data on little-endian systems (zero-copy)
/// or `Cow::Owned` for unaligned data or big-endian systems.
///
/// # Panics
///
/// Panics if the byte slice length is not a multiple of 4.
pub fn bytes_to_f32_slice(bytes: &[u8]) -> Cow<'_, [f32]> {
assert!(
bytes.len().is_multiple_of(4),
"Vector bytes length {} is not a multiple of 4",
bytes.len()
);
if bytes.is_empty() {
return Cow::Borrowed(&[]);
}
let ptr = bytes.as_ptr();
// Fast path: zero-copy only when data is aligned AND we're on little-endian system
// (since vector data is stored in little-endian format)
#[cfg(target_endian = "little")]
if (ptr as usize).is_multiple_of(std::mem::align_of::<f32>()) {
// Safety: We've verified alignment and length requirements,
// and on little-endian systems the byte representation matches f32 layout
return Cow::Borrowed(unsafe {
std::slice::from_raw_parts(ptr as *const f32, bytes.len() / 4)
});
}
// Slow path: data is not aligned or we're on big-endian system
let floats: Vec<f32> = bytes
.chunks_exact(4)
.map(|chunk| f32::from_le_bytes([chunk[0], chunk[1], chunk[2], chunk[3]]))
.collect();
Cow::Owned(floats)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_bytes_to_f32_slice() {
let floats = [1.0f32, 2.0, 3.0, 4.0];
let bytes: Vec<u8> = floats.iter().flat_map(|f| f.to_le_bytes()).collect();
let result = bytes_to_f32_slice(&bytes);
assert_eq!(result.len(), 4);
assert_eq!(result[0], 1.0);
assert_eq!(result[1], 2.0);
assert_eq!(result[2], 3.0);
assert_eq!(result[3], 4.0);
}
#[test]
fn test_bytes_to_f32_slice_unaligned() {
// Create a buffer with an extra byte at the start to force misalignment
let floats = [1.0f32, 2.0, 3.0, 4.0];
let mut bytes: Vec<u8> = vec![0u8]; // padding byte
bytes.extend(floats.iter().flat_map(|f| f.to_le_bytes()));
// Take a slice starting at offset 1 (unaligned)
let unaligned_bytes = &bytes[1..];
// Verify it's actually unaligned
let ptr = unaligned_bytes.as_ptr();
let is_aligned = (ptr as usize).is_multiple_of(std::mem::align_of::<f32>());
// The function should work regardless of alignment
let result = bytes_to_f32_slice(unaligned_bytes);
assert_eq!(result.len(), 4);
assert_eq!(result[0], 1.0);
assert_eq!(result[1], 2.0);
assert_eq!(result[2], 3.0);
assert_eq!(result[3], 4.0);
// If it was unaligned, it should return an owned Vec (Cow::Owned)
if !is_aligned {
assert!(matches!(result, Cow::Owned(_)));
}
}
#[test]
fn test_bytes_to_f32_slice_empty() {
let bytes: &[u8] = &[];
let result = bytes_to_f32_slice(bytes);
assert!(result.is_empty());
}
}

View File

@@ -742,6 +742,8 @@ mod tests {
inverted_index_config: Default::default(),
fulltext_index_config: Default::default(),
bloom_filter_index_config: Default::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
};
let mut metrics = Metrics::new(WriteType::Flush);
@@ -1152,6 +1154,8 @@ mod tests {
inverted_index_config: Default::default(),
fulltext_index_config: Default::default(),
bloom_filter_index_config: Default::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
}
}
@@ -1271,6 +1275,8 @@ mod tests {
inverted_index_config: Default::default(),
fulltext_index_config: Default::default(),
bloom_filter_index_config: Default::default(),
#[cfg(feature = "vector_index")]
vector_index_config: Default::default(),
};
let mut metrics = Metrics::new(WriteType::Flush);

View File

@@ -21,12 +21,14 @@ use store_api::storage::RegionId;
use tokio::sync::oneshot;
use crate::error::{
RegionStateSnafu, SerdeJsonSnafu, StagingPartitionExprMismatchSnafu, UnexpectedSnafu,
RegionStateSnafu, Result, SerdeJsonSnafu, StagingPartitionExprMismatchSnafu, UnexpectedSnafu,
};
use crate::manifest::action::RegionEdit;
use crate::region::{RegionLeaderState, RegionRoleState};
use crate::request::{OptionOutputTx, RegionEditRequest};
use crate::sst::file::FileMeta;
use crate::manifest::action::{RegionEdit, RegionManifest};
use crate::manifest::storage::manifest_dir;
use crate::manifest::storage::staging::{StagingBlobStorage, staging_blob_path};
use crate::region::{MitoRegionRef, RegionLeaderState, RegionRoleState};
use crate::request::{OptionOutputTx, RegionEditRequest, WorkerRequest, WorkerRequestWithTime};
use crate::sst::location::region_dir_from_table_dir;
use crate::worker::RegionWorkerLoop;
impl<S: LogStore> RegionWorkerLoop<S> {
@@ -86,21 +88,32 @@ impl<S: LogStore> RegionWorkerLoop<S> {
return;
}
let (tx, rx) = oneshot::channel();
let files_to_add = match serde_json::from_slice::<Vec<FileMeta>>(&request.files_to_add)
.context(SerdeJsonSnafu)
{
Ok(files_to_add) => files_to_add,
Err(e) => {
sender.send(Err(e));
let worker_sender = self.sender.clone();
common_runtime::spawn_global(async move {
let staging_manifest = match Self::fetch_staging_manifest(
&region,
request.central_region_id,
&request.manifest_path,
)
.await
{
Ok(staging_manifest) => staging_manifest,
Err(e) => {
sender.send(Err(e));
return;
}
};
if staging_manifest.metadata.partition_expr.as_ref() != Some(&request.partition_expr) {
sender.send(Err(StagingPartitionExprMismatchSnafu {
manifest_expr: staging_manifest.metadata.partition_expr.clone(),
request_expr: request.partition_expr,
}
.build()));
return;
}
};
info!("Applying staging manifest request to region {}", region_id);
self.handle_region_edit(RegionEditRequest {
region_id,
edit: RegionEdit {
let files_to_add = staging_manifest.files.values().cloned().collect::<Vec<_>>();
let edit = RegionEdit {
files_to_add,
files_to_remove: vec![],
timestamp_ms: Some(Utc::now().timestamp_millis()),
@@ -108,11 +121,23 @@ impl<S: LogStore> RegionWorkerLoop<S> {
flushed_entry_id: None,
flushed_sequence: None,
committed_sequence: None,
},
tx,
});
};
let (tx, rx) = oneshot::channel();
info!(
"Applying staging manifest request to region {}",
region.region_id,
);
let _ = worker_sender
.send(WorkerRequestWithTime::new(WorkerRequest::EditRegion(
RegionEditRequest {
region_id: region.region_id,
edit,
tx,
},
)))
.await;
common_runtime::spawn_global(async move {
// Await the result from the region edit and forward the outcome to the original sender.
// If the operation completes successfully, respond with Ok(0); otherwise, respond with an appropriate error.
if let Ok(result) = rx.await {
@@ -137,4 +162,25 @@ impl<S: LogStore> RegionWorkerLoop<S> {
}
});
}
/// Fetches the staging manifest from the central region's staging blob storage.
///
/// The `central_region_id` is used to locate the staging directory because the staging
/// manifest was created by the central region during the `remap_manifests` operation.
async fn fetch_staging_manifest(
region: &MitoRegionRef,
central_region_id: RegionId,
manifest_path: &str,
) -> Result<RegionManifest> {
let region_dir =
region_dir_from_table_dir(region.table_dir(), central_region_id, region.path_type());
let staging_blob_path = staging_blob_path(&manifest_dir(&region_dir));
let staging_blob_storage = StagingBlobStorage::new(
staging_blob_path,
region.access_layer().object_store().clone(),
);
let staging_manifest = staging_blob_storage.get(manifest_path).await?;
serde_json::from_slice::<RegionManifest>(&staging_manifest).context(SerdeJsonSnafu)
}
}

View File

@@ -64,6 +64,8 @@ impl<S> RegionWorkerLoop<S> {
inverted_index_config: self.config.inverted_index.clone(),
fulltext_index_config: self.config.fulltext_index.clone(),
bloom_filter_index_config: self.config.bloom_filter_index.clone(),
#[cfg(feature = "vector_index")]
vector_index_config: self.config.vector_index.clone(),
index_options: version.options.index_options.clone(),
row_group_size: WriteOptions::default().row_group_size,
intermediate_manager,

View File

@@ -16,14 +16,13 @@ use std::collections::HashMap;
use std::time::Instant;
use common_error::ext::BoxedError;
use common_telemetry::info;
use common_telemetry::{debug, info};
use futures::future::try_join_all;
use partition::expr::PartitionExpr;
use snafu::{OptionExt, ResultExt};
use store_api::storage::RegionId;
use crate::error::{FetchManifestsSnafu, InvalidRequestSnafu, MissingManifestSnafu, Result};
use crate::manifest::action::RegionManifest;
use crate::error::{self, FetchManifestsSnafu, InvalidRequestSnafu, MissingManifestSnafu, Result};
use crate::region::{MitoRegionRef, RegionMetadataLoader};
use crate::remap_manifest::RemapManifest;
use crate::request::RemapManifestsRequest;
@@ -75,13 +74,17 @@ impl<S> RegionWorkerLoop<S> {
});
}
// Fetches manifests for input regions, remaps them according to the provided
// mapping and partition expressions.
//
// Returns a map from each new region to its relative staging manifest path.
async fn fetch_and_remap_manifests(
region: MitoRegionRef,
region_metadata_loader: RegionMetadataLoader,
input_regions: Vec<RegionId>,
new_partition_exprs: HashMap<RegionId, PartitionExpr>,
region_mapping: HashMap<RegionId, Vec<RegionId>>,
) -> Result<HashMap<RegionId, RegionManifest>> {
) -> Result<HashMap<RegionId, String>> {
let mut tasks = Vec::with_capacity(input_regions.len());
let region_options = region.version().options.clone();
let table_dir = region.table_dir();
@@ -97,7 +100,6 @@ impl<S> RegionWorkerLoop<S> {
.await
});
}
let results = try_join_all(tasks)
.await
.map_err(BoxedError::new)
@@ -112,12 +114,38 @@ impl<S> RegionWorkerLoop<S> {
.collect::<Result<HashMap<_, _>>>()?;
let mut mapper = RemapManifest::new(manifests, new_partition_exprs, region_mapping);
let remap_result = mapper.remap_manifests()?;
// Write new manifests to staging blob storage.
let manifest_manager = region.manifest_ctx.manifest_manager.write().await;
let manifest_storage = manifest_manager.store();
let staging_blob_storage = manifest_storage.staging_storage().blob_storage().clone();
let mut tasks = Vec::with_capacity(remap_result.new_manifests.len());
for (remap_region_id, manifest) in &remap_result.new_manifests {
let bytes = serde_json::to_vec(&manifest).context(error::SerializeManifestSnafu {
region_id: *remap_region_id,
})?;
let key = remap_manifest_key(remap_region_id);
tasks.push(async {
debug!(
"Putting manifest to staging blob storage, region_id: {}, key: {}",
*remap_region_id, key
);
staging_blob_storage.put(&key, bytes).await?;
Ok((*remap_region_id, key))
});
}
let r = try_join_all(tasks).await?;
info!(
"Remap manifests cost: {:?}, region: {}",
now.elapsed(),
region.region_id
);
Ok(remap_result.new_manifests)
Ok(r.into_iter().collect::<HashMap<_, _>>())
}
}
fn remap_manifest_key(region_id: &RegionId) -> String {
format!("remap_manifest_{}", region_id.as_u64())
}

View File

@@ -144,7 +144,7 @@ impl Categorizer {
}
}
// all group by expressions are partition columns can push down, unless
// another push down(including `Limit` or `Sort`) is already in progress(which will then prvent next cond commutative node from being push down).
// another push down(including `Limit` or `Sort`) is already in progress(which will then prevent next cond commutative node from being push down).
// TODO(discord9): This is a temporary solution(that works), a better description of
// commutativity is needed under this situation.
Commutativity::ConditionalCommutative(None)

View File

@@ -234,7 +234,7 @@ impl QueryEngineState {
rules.retain(|rule| rule.name() != name);
}
/// Optimize the logical plan by the extension anayzer rules.
/// Optimize the logical plan by the extension analyzer rules.
pub fn optimize_by_extension_rules(
&self,
plan: DfLogicalPlan,

View File

@@ -29,7 +29,7 @@ use strum::EnumString;
use crate::error::{InternalIoSnafu, Result};
/// TlsMode is used for Mysql and Postgres server start up.
#[derive(Debug, Default, Serialize, Deserialize, Clone, PartialEq, Eq, EnumString)]
#[derive(Debug, Default, Serialize, Deserialize, Clone, Copy, PartialEq, Eq, EnumString)]
#[serde(rename_all = "snake_case")]
pub enum TlsMode {
#[default]
@@ -91,6 +91,17 @@ impl TlsOption {
tls_option
}
/// Creates a new TLS option with the prefer mode.
pub fn prefer() -> Self {
Self {
mode: TlsMode::Prefer,
cert_path: String::new(),
key_path: String::new(),
ca_cert_path: String::new(),
watch: false,
}
}
/// Validates the TLS configuration.
///
/// Returns an error if:

View File

@@ -29,7 +29,7 @@ use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use datatypes::arrow;
use datatypes::arrow::datatypes::FieldRef;
use datatypes::schema::{ColumnSchema, FulltextOptions, Schema, SchemaRef};
use datatypes::schema::{ColumnSchema, FulltextOptions, Schema, SchemaRef, VectorIndexOptions};
use datatypes::types::TimestampType;
use itertools::Itertools;
use serde::de::Error;
@@ -384,6 +384,22 @@ impl RegionMetadata {
inverted_index
}
/// Gets the column IDs that have vector indexes along with their options.
/// Returns a map from column ID to the vector index options.
pub fn vector_indexed_column_ids(&self) -> HashMap<ColumnId, VectorIndexOptions> {
self.column_metadatas
.iter()
.filter_map(|column| {
column
.column_schema
.vector_index_options()
.ok()
.flatten()
.map(|options| (column.column_id, options))
})
.collect()
}
/// Checks whether the metadata is valid.
fn validate(&self) -> Result<()> {
// Id to name.

View File

@@ -759,8 +759,11 @@ pub struct RemapManifestsRequest {
/// Response to remap manifests from old regions to new regions.
#[derive(Debug, Clone)]
pub struct RemapManifestsResponse {
/// The new manifests for the new regions.
pub new_manifests: HashMap<RegionId, String>,
/// Maps region id to its staging manifest path.
///
/// These paths are relative paths within the central region's staging blob storage,
/// and should be passed to [`ApplyStagingManifestRequest`](RegionRequest::ApplyStagingManifest) to finalize the repartition.
pub manifest_paths: HashMap<RegionId, String>,
}
/// Request to copy files from a source region to a target region.

View File

@@ -421,20 +421,17 @@ fn make_region_apply_staging_manifest(
api::v1::region::ApplyStagingManifestRequest {
region_id,
partition_expr,
files_to_add,
central_region_id,
manifest_path,
}: api::v1::region::ApplyStagingManifestRequest,
) -> Result<Vec<(RegionId, RegionRequest)>> {
let region_id = region_id.into();
let files_to_add = files_to_add
.context(UnexpectedSnafu {
reason: "'files_to_add' field is missing",
})?
.data;
Ok(vec![(
region_id,
RegionRequest::ApplyStagingManifest(ApplyStagingManifestRequest {
partition_expr,
files_to_add,
central_region_id: central_region_id.into(),
manifest_path,
}),
)])
}
@@ -1464,8 +1461,10 @@ pub struct EnterStagingRequest {
/// In practice, this means:
/// - The `partition_expr` identifies the staging region rule that the manifest
/// was generated for.
-/// - `files_to_add` carries the serialized metadata (such as file manifests or
-///   file lists) that should be attached to the region under the new rule.
+/// - `central_region_id` specifies which region holds the staging blob storage
+///   where the manifest was written during the `remap_manifests` operation.
+/// - `manifest_path` is the relative path within the central region's staging
+///   blob storage to fetch the generated manifest.
///
/// It should typically be called **after** the staging region has been
/// initialized by [`EnterStagingRequest`] and the new file layout has been
@@ -1474,8 +1473,11 @@ pub struct EnterStagingRequest {
pub struct ApplyStagingManifestRequest {
/// The partition expression of the staging region.
pub partition_expr: String,
-/// The files to add to the region.
-pub files_to_add: Vec<u8>,
+/// The region that stores the staging manifests in its staging blob storage.
+pub central_region_id: RegionId,
+/// The relative path to the staging manifest within the central region's
+/// staging blob storage.
+pub manifest_path: String,
}
impl fmt::Display for RegionRequest {

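Putting the two changes together, a hedged sketch of the glue a coordinator might use: take the `manifest_paths` map returned by `remap_manifests` and build one apply request per new region, carrying the central region id and the relative path instead of inline manifest bytes. The types, region ids, and partition expressions below are simplified stand-ins for illustration.

use std::collections::HashMap;

// Stand-ins for the real store-api types, trimmed to the fields shown above.
type RegionId = u64;

/// Simplified copy of the new request shape: the manifest is no longer sent
/// inline (`files_to_add`); the receiver fetches it from the central region's
/// staging blob storage instead.
#[derive(Debug)]
struct ApplyStagingManifestRequest {
    partition_expr: String,
    central_region_id: RegionId,
    manifest_path: String,
}

/// Hypothetical glue: turn the `manifest_paths` map into one apply request
/// per new region.
fn build_apply_requests(
    central_region_id: RegionId,
    partition_exprs: &HashMap<RegionId, String>,
    manifest_paths: &HashMap<RegionId, String>,
) -> Vec<(RegionId, ApplyStagingManifestRequest)> {
    manifest_paths
        .iter()
        .filter_map(|(region_id, manifest_path)| {
            let partition_expr = partition_exprs.get(region_id)?.clone();
            Some((
                *region_id,
                ApplyStagingManifestRequest {
                    partition_expr,
                    central_region_id,
                    manifest_path: manifest_path.clone(),
                },
            ))
        })
        .collect()
}

fn main() {
    let partition_exprs = HashMap::from([
        (4097, "host < 'h100'".to_string()),
        (4098, "host >= 'h100'".to_string()),
    ]);
    let manifest_paths = HashMap::from([
        (4097, "staging/manifest-4097.json".to_string()),
        (4098, "staging/manifest-4098.json".to_string()),
    ]);
    for (region_id, req) in build_apply_requests(4096, &partition_exprs, &manifest_paths) {
        println!("send to region {region_id}: {req:?}");
    }
}
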
View File

@@ -1546,6 +1546,12 @@ create_on_compaction = "auto"
apply_on_query = "auto"
mem_threshold_on_create = "auto"
[region_engine.mito.vector_index]
create_on_flush = "auto"
create_on_compaction = "auto"
apply_on_query = "auto"
mem_threshold_on_create = "auto"
[region_engine.mito.memtable]
type = "time_series"

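A quick sketch of reading the new `[region_engine.mito.vector_index]` block, assuming `serde` (with the derive feature) and the `toml` crate as dependencies; `VectorIndexConfig` is a local stand-in for illustration, not the actual mito2 configuration struct.

// Sketch only: deserialize the vector_index block shown above.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct VectorIndexConfig {
    create_on_flush: String,
    create_on_compaction: String,
    apply_on_query: String,
    mem_threshold_on_create: String,
}

fn main() {
    let snippet = r#"
create_on_flush = "auto"
create_on_compaction = "auto"
apply_on_query = "auto"
mem_threshold_on_create = "auto"
"#;
    let config: VectorIndexConfig = toml::from_str(snippet).expect("valid TOML");
    println!("{config:?}");
}
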
View File

@@ -56,6 +56,121 @@ async fn query_data(frontend: &Arc<Instance>) -> io::Result<()> {
))?;
execute_sql_and_expect(frontend, sql, &expected).await;
// query 1:
let sql = "\
SELECT \
json_get_string(data, '$.commit.collection') AS event, count() AS count \
FROM bluesky \
GROUP BY event \
ORDER BY count DESC, event ASC";
let expected = r#"
+-----------------------+-------+
| event | count |
+-----------------------+-------+
| app.bsky.feed.like | 3 |
| app.bsky.feed.post | 3 |
| app.bsky.graph.follow | 3 |
| app.bsky.feed.repost | 1 |
+-----------------------+-------+"#;
execute_sql_and_expect(frontend, sql, expected).await;
// query 2:
let sql = "\
SELECT \
json_get_string(data, '$.commit.collection') AS event, \
count() AS count, \
count(DISTINCT json_get_string(data, '$.did')) AS users \
FROM bluesky \
WHERE \
(json_get_string(data, '$.kind') = 'commit') AND \
(json_get_string(data, '$.commit.operation') = 'create') \
GROUP BY event \
ORDER BY count DESC, event ASC";
let expected = r#"
+-----------------------+-------+-------+
| event | count | users |
+-----------------------+-------+-------+
| app.bsky.feed.like | 3 | 3 |
| app.bsky.feed.post | 3 | 3 |
| app.bsky.graph.follow | 3 | 3 |
| app.bsky.feed.repost | 1 | 1 |
+-----------------------+-------+-------+"#;
execute_sql_and_expect(frontend, sql, expected).await;
// query 3:
let sql = "\
SELECT \
json_get_string(data, '$.commit.collection') AS event, \
date_part('hour', to_timestamp_micros(json_get_int(data, '$.time_us'))) as hour_of_day, \
count() AS count \
FROM bluesky \
WHERE \
(json_get_string(data, '$.kind') = 'commit') AND \
(json_get_string(data, '$.commit.operation') = 'create') AND \
json_get_string(data, '$.commit.collection') IN \
('app.bsky.feed.post', 'app.bsky.feed.repost', 'app.bsky.feed.like') \
GROUP BY event, hour_of_day \
ORDER BY hour_of_day, event";
let expected = r#"
+----------------------+-------------+-------+
| event | hour_of_day | count |
+----------------------+-------------+-------+
| app.bsky.feed.like | 16 | 3 |
| app.bsky.feed.post | 16 | 3 |
| app.bsky.feed.repost | 16 | 1 |
+----------------------+-------------+-------+"#;
execute_sql_and_expect(frontend, sql, expected).await;
// query 4:
let sql = "\
SELECT
json_get_string(data, '$.did') as user_id,
min(to_timestamp_micros(json_get_int(data, '$.time_us'))) AS first_post_ts
FROM bluesky
WHERE
(json_get_string(data, '$.kind') = 'commit') AND
(json_get_string(data, '$.commit.operation') = 'create') AND
(json_get_string(data, '$.commit.collection') = 'app.bsky.feed.post')
GROUP BY user_id
ORDER BY first_post_ts ASC, user_id DESC
LIMIT 3";
let expected = r#"
+----------------------------------+----------------------------+
| user_id | first_post_ts |
+----------------------------------+----------------------------+
| did:plc:yj3sjq3blzpynh27cumnp5ks | 2024-11-21T16:25:49.000167 |
| did:plc:l5o3qjrmfztir54cpwlv2eme | 2024-11-21T16:25:49.001905 |
| did:plc:s4bwqchfzm6gjqfeb6mexgbu | 2024-11-21T16:25:49.003907 |
+----------------------------------+----------------------------+"#;
execute_sql_and_expect(frontend, sql, expected).await;
// query 5:
let sql = "
SELECT
json_get_string(data, '$.did') as user_id,
date_part(
'epoch',
max(to_timestamp_micros(json_get_int(data, '$.time_us'))) -
min(to_timestamp_micros(json_get_int(data, '$.time_us')))
) AS activity_span
FROM bluesky
WHERE
(json_get_string(data, '$.kind') = 'commit') AND
(json_get_string(data, '$.commit.operation') = 'create') AND
(json_get_string(data, '$.commit.collection') = 'app.bsky.feed.post')
GROUP BY user_id
ORDER BY activity_span DESC, user_id DESC
LIMIT 3";
let expected = r#"
+----------------------------------+---------------+
| user_id | activity_span |
+----------------------------------+---------------+
| did:plc:yj3sjq3blzpynh27cumnp5ks | 0.0 |
| did:plc:s4bwqchfzm6gjqfeb6mexgbu | 0.0 |
| did:plc:l5o3qjrmfztir54cpwlv2eme | 0.0 |
+----------------------------------+---------------+"#;
execute_sql_and_expect(frontend, sql, expected).await;
Ok(())
}

View File

@@ -32,6 +32,17 @@ SHOW CREATE TABLE comment_table_test;
| | ) |
+--------------------+---------------------------------------------------+
SELECT table_comment
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'comment_table_test';
+-------------------------+
| table_comment |
+-------------------------+
| table level description |
+-------------------------+
-- Remove table comment
COMMENT ON TABLE comment_table_test IS NULL;
@@ -54,6 +65,17 @@ SHOW CREATE TABLE comment_table_test;
| | |
+--------------------+---------------------------------------------------+
SELECT table_comment
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'comment_table_test';
+---------------+
| table_comment |
+---------------+
| |
+---------------+
DROP TABLE comment_table_test;
Affected Rows: 0
@@ -90,6 +112,18 @@ SHOW CREATE TABLE comment_column_test;
| | |
+---------------------+---------------------------------------------------------+
SELECT column_comment
FROM information_schema.columns
WHERE table_schema = 'public'
AND table_name = 'comment_column_test'
AND column_name = 'val';
+--------------------------+
| column_comment |
+--------------------------+
| value column description |
+--------------------------+
-- Remove column comment
COMMENT ON COLUMN comment_column_test.val IS NULL;
@@ -112,6 +146,18 @@ SHOW CREATE TABLE comment_column_test;
| | |
+---------------------+----------------------------------------------------+
SELECT column_comment
FROM information_schema.columns
WHERE table_schema = 'public'
AND table_name = 'comment_column_test'
AND column_name = 'val';
+----------------+
| column_comment |
+----------------+
| |
+----------------+
DROP TABLE comment_column_test;
Affected Rows: 0
@@ -155,6 +201,16 @@ SHOW CREATE FLOW flow_comment_test;
| | AS SELECT desc_str, ts FROM flow_source_comment_test |
+-------------------+------------------------------------------------------+
SELECT comment
FROM information_schema.flows
WHERE flow_name = 'flow_comment_test';
+------------------------+
| comment |
+------------------------+
| flow level description |
+------------------------+
-- Remove flow comment
COMMENT ON FLOW flow_comment_test IS NULL;
@@ -170,6 +226,16 @@ SHOW CREATE FLOW flow_comment_test;
| | AS SELECT desc_str, ts FROM flow_source_comment_test |
+-------------------+------------------------------------------------------+
SELECT comment
FROM information_schema.flows
WHERE flow_name = 'flow_comment_test';
+---------+
| comment |
+---------+
| |
+---------+
DROP FLOW flow_comment_test;
Affected Rows: 0

View File

@@ -9,10 +9,18 @@ CREATE TABLE comment_table_test (
-- Add table comment
COMMENT ON TABLE comment_table_test IS 'table level description';
SHOW CREATE TABLE comment_table_test;
SELECT table_comment
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'comment_table_test';
-- Remove table comment
COMMENT ON TABLE comment_table_test IS NULL;
SHOW CREATE TABLE comment_table_test;
SELECT table_comment
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'comment_table_test';
DROP TABLE comment_table_test;
@@ -27,10 +35,20 @@ CREATE TABLE comment_column_test (
-- Add column comment
COMMENT ON COLUMN comment_column_test.val IS 'value column description';
SHOW CREATE TABLE comment_column_test;
SELECT column_comment
FROM information_schema.columns
WHERE table_schema = 'public'
AND table_name = 'comment_column_test'
AND column_name = 'val';
-- Remove column comment
COMMENT ON COLUMN comment_column_test.val IS NULL;
SHOW CREATE TABLE comment_column_test;
SELECT column_comment
FROM information_schema.columns
WHERE table_schema = 'public'
AND table_name = 'comment_column_test'
AND column_name = 'val';
DROP TABLE comment_column_test;
@@ -54,12 +72,17 @@ SELECT desc_str, ts FROM flow_source_comment_test;
-- Add flow comment
COMMENT ON FLOW flow_comment_test IS 'flow level description';
SHOW CREATE FLOW flow_comment_test;
SELECT comment
FROM information_schema.flows
WHERE flow_name = 'flow_comment_test';
-- Remove flow comment
COMMENT ON FLOW flow_comment_test IS NULL;
SHOW CREATE FLOW flow_comment_test;
SELECT comment
FROM information_schema.flows
WHERE flow_name = 'flow_comment_test';
DROP FLOW flow_comment_test;
DROP TABLE flow_source_comment_test;
DROP TABLE flow_sink_comment_test;

View File

@@ -96,7 +96,7 @@ FROM (
s."location",
COUNT(DISTINCT s.sensor_id) as sensor_count,
COUNT(r.reading_id) / COUNT(DISTINCT s.sensor_id) as avg_readings_per_sensor,
-AVG(r."value") as location_avg_value
+ROUND(AVG(r."value"), 6) as location_avg_value
FROM sensors s
INNER JOIN readings r ON s.sensor_id = r.sensor_id
GROUP BY s."location"
@@ -107,7 +107,7 @@ ORDER BY location_summary.location_avg_value DESC;
| location | sensor_count | avg_readings_per_sensor | location_avg_value |
+----------+--------------+-------------------------+--------------------+
| Room B | 2 | 2 | 35.88 |
-| Room A | 2 | 2 | 31.880000000000003 |
+| Room A | 2 | 2 | 31.88 |
+----------+--------------+-------------------------+--------------------+
-- Join with aggregated conditions

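The switch from `AVG` to `ROUND(AVG(...), 6)` above is about floating-point display stability: the raw f64 average can surface binary-representation error (the 31.880000000000003 in the old expected output), while rounding to six decimals yields a stable printed string. A generic illustration follows, using the classic 0.1 + 0.2 example rather than the test's sensor data.

// Generic illustration of why the sqlness expectation was rounded:
// binary floating point can leak representation error into the printed
// result, and rounding to a fixed number of decimals stabilizes it.
fn round_to(value: f64, decimals: u32) -> f64 {
    let factor = 10f64.powi(decimals as i32);
    (value * factor).round() / factor
}

fn main() {
    let raw = 0.1_f64 + 0.2_f64;
    println!("raw:     {raw}");                // prints 0.30000000000000004
    println!("rounded: {}", round_to(raw, 6)); // prints 0.3
}
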
View File

@@ -62,7 +62,7 @@ FROM (
s."location",
COUNT(DISTINCT s.sensor_id) as sensor_count,
COUNT(r.reading_id) / COUNT(DISTINCT s.sensor_id) as avg_readings_per_sensor,
-AVG(r."value") as location_avg_value
+ROUND(AVG(r."value"), 6) as location_avg_value
FROM sensors s
INNER JOIN readings r ON s.sensor_id = r.sensor_id
GROUP BY s."location"