Compare commits

...

123 Commits

Author SHA1 Message Date
Ruihang Xia
288f69a30f fix: plan disorder from upgrading datafusion (#6787)
* fix: plan disorder from upgrading datafusion

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-23 12:29:47 +00:00
zyy17
d6d5dad758 chore: revert #6763 (#6800)
Revert "refactor: change plugin option type from `&[PluginOptions]` to `Optio…"

This reverts commit 5420d6f7fb.
2025-08-23 06:57:25 +00:00
Weny Xu
d82f36db6a chore: add peer address context to client error logging (#6793)
chore: add context to client error logging

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-23 06:56:24 +00:00
Ning Sun
68ac37461b feat: add limit to label value api (#6795)
* feat: add limit to label value api

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: limit for special labels

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-23 06:23:51 +00:00
Weny Xu
4c2955b86b fix: time unit mismatch in lookup_frontends function (#6798)
* fix: lookup frontend

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-22 12:34:26 +00:00
localhost
390aef7563 chore: modifying the visibility of the ScalarFunctionFactory field (#6797) 2025-08-22 10:55:01 +00:00
dennis zhuang
c6c33d14aa feat: bump opendal to v0.54 (#6792)
* feat: bump opendal to v0.54.0

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: update cargo

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-08-22 10:51:06 +00:00
dennis zhuang
3014972202 chore: improve error message when there are more than one time index (#6796)
* chore: improve error message when there are more than one time index

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-08-22 09:22:55 +00:00
Zhenchi
6839b5aef4 feat(mito): list SSTs from manifest and storage (#6766)
* feat(engine): list SSTs from manifest and storage

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* mito engine only

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* try best to remove allocation

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-08-22 09:22:42 +00:00
ZonaHe
a04ec07b61 feat: update dashboard to v0.11.0 (#6794)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-08-22 09:08:48 +00:00
discord9
d774996e89 chore: make internal grpc optional (#6789)
* chore: make internal grpc optional

Signed-off-by: discord9 <discord9@163.com>

* revert sqlness runner too

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-22 04:22:49 +00:00
discord9
2b43ff30b6 feat: provide plan info when flow exec (#6783)
* feat: provide plan info when flow exec

Signed-off-by: discord9 <discord9@163.com>

* backoff?

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-22 03:47:26 +00:00
discord9
eaceae4c91 feat: frontend internal grpc port (#6784)
* feat: frontend internal grpc port

Signed-off-by: discord9 <discord9@163.com>

* fix: grpc server naming

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness test fix

Signed-off-by: discord9 <discord9@163.com>

* fix: internal not use process manager

Signed-off-by: discord9 <discord9@163.com>

* test: test integration port alloc

Signed-off-by: discord9 <discord9@163.com>

* feat: skip auth for internal grpc

Signed-off-by: discord9 <discord9@163.com>

* test: is distributed

Signed-off-by: discord9 <discord9@163.com>

* what:

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-08-22 02:46:35 +00:00
Ning Sun
cdc168e753 feat: support for custom headers in otel exporter (#6773)
* feat: support for custom headers in otel exporter

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: remove wrapping option

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-21 13:10:13 +00:00
discord9
5d5817b851 chore: no logging when init table_flow cache if empty (#6785)
chore: no logging

Signed-off-by: discord9 <discord9@163.com>
2025-08-21 12:38:11 +00:00
Alex Araujo
a98c48a9b2 feat: optimize CreateFlowData with lightweight FlowQueryContext (#6780)
* feat: optimize CreateFlowData with lightweight FlowQueryContext

Replace full QueryContext with FlowQueryContext containing only essential fields (catalog, schema, timezone) in CreateFlowData struct. This improves serialization performance by eliminating unused extensions HashMap and channel fields.

Key changes:
- Add FlowQueryContext struct with conversion implementations
- Update CreateFlowData to use FlowQueryContext with backward compatibility
- Add tests for serialization and conversions

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* chore: run make fmt

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

---------

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>
2025-08-21 12:02:57 +00:00
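As an aside on the CreateFlowData change above (#6780), the idea can be sketched in a few lines of Rust. Everything here is illustrative — the field names, the simplified `QueryContext`, and the serde derives are assumptions, not the actual GreptimeDB types.

```rust
use std::collections::HashMap;

use serde::{Deserialize, Serialize};

/// Lightweight context carrying only what flow creation needs.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FlowQueryContext {
    pub catalog: String,
    pub schema: String,
    pub timezone: String,
}

/// Simplified stand-in for the full query context, which also carries
/// extensions and channel information that flows never use.
pub struct QueryContext {
    pub catalog: String,
    pub schema: String,
    pub timezone: String,
    pub extensions: HashMap<String, String>,
}

impl From<&QueryContext> for FlowQueryContext {
    fn from(ctx: &QueryContext) -> Self {
        // Keep only the essential fields; dropping the extensions map is
        // what shrinks the serialized CreateFlowData.
        Self {
            catalog: ctx.catalog.clone(),
            schema: ctx.schema.clone(),
            timezone: ctx.timezone.clone(),
        }
    }
}
```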
Weny Xu
6692957e08 refactor: simplify WAL Pruning procedure part2 (#6782)
refactor: simplify prune wal procedure

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-21 11:09:31 +00:00
Weny Xu
896d72191e feat: introduce PersistStatsHandler (#6777)
* feat: add `Inserter` trait and impl

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: import items

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce `PersistStatsHandler`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: disable persisting stats in sqlness

Signed-off-by: WenyXu <wenymedia@gmail.com>

* reset channel manager

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: avoid to collect

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: remove insert options

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: use `write_bytes` instead of `write_bytes_per_sec`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: compute write bytes delta

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Update src/meta-srv/src/handler/persist_stats_handler.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-08-21 09:34:58 +00:00
Weny Xu
5eec3485fe feat: add IntoRow and Schema derive macros (#6778)
* chore: import items

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add `IntoRow` and `Schema` derive macros

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: styling

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-21 06:29:59 +00:00
Ruihang Xia
7e573e497c feat: simplify more regex patterns in promql (#6747)
* feat: simplify more regex patterns in promql

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-20 18:51:10 +00:00
Ruihang Xia
474a689309 feat: region prune part 2 (#6752)
* skeleton

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* get rule set

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust style

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust params

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reuse collider

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* canonize

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more robust predicate extractor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify predicate extractor's test and impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* unify import

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplification, remove unnecessary interfaces

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle partial referenced exprs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* finalize predicate extractor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* document region pruner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: reduce diff

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify checker

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refine overlapping check method

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reduce diff

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* coerce types

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unused errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply review comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor use Bound

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify hashmap

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* sqlness tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* redact region id

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test: update sqlness result after updating datafusion

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: discord9 <55937128+discord9@users.noreply.github.com>
Co-authored-by: discord9 <discord9@163.com>
2025-08-20 18:47:38 +00:00
Yingwen
2995eddca5 feat: Implements FlatCompatBatch to adapt schema in flat format (#6771)
* feat: compat flat wip

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: rewrite key

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support append key

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: Split rewrite logic

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename CompatFlatBatch to FlatCompatBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: compat primary key

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address fixme

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: new CompatBatch if need convert

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: use different compat

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: plain_projection -> flat_projection

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: test compat

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: test FlatCompatBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: fix warnings and remove unused code

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: support compat tags with dictionary types

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: reuse column_id_values

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: avoid zero pk values len

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-20 12:36:14 +00:00
LFC
05529387d9 refactor: use DataFusion's UDAF implementation directly (#6776)
* refactor: use DataFusion's UDAF implementation directly

Signed-off-by: luofucong <luofc@foxmail.com>

* remove: delete how-to guide for writing aggregate functions

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

* refactor: port json_encode_path to datafusion udaf

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-08-20 08:25:00 +00:00
fys
819531393f feat: disable month in trigger interval expr (#6774)
* feat: disable month in trigger interval expr

* fix: cargo clippy

* fix: cargo clippy

* add unit test

* remove unused comment
2025-08-20 07:21:39 +00:00
dennis zhuang
d6bc117408 refactor: refactor admin functions with async udf (#6770)
* refactor: use async udf for admin functions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: sqlness test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: code style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: clippy

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove unused error

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: code style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: apply suggestions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: logical_metric_table test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-08-20 03:35:38 +00:00
yihong
7402320abc fix: time() function should behave the same as Prometheus (#6704)
* fix: close issue_6701 phase 1 make it return now

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make tests stable

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: drop useless tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: close issue_6701 phase 1 make it return now

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make tests stable

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: drop useless tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make time() real right

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: fix tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* add two sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-20 03:05:21 +00:00
zyy17
5420d6f7fb refactor: change plugin option type from &[PluginOptions] to Option<&PluginOptions> for understandability (#6763)
refactor: change plugin option type from `&[PluginOptions]` to `Option<&PluginOptions>` for understandability

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-20 02:11:00 +00:00
liyang
7af471c5aa ci: add is-current-version-latest check to helm-charts/homebrew-greptime bump jobs (#6772)
ci: add `is-current-version-latest` check to helm/homebrew bump jobs

Signed-off-by: liyang <daviderli614@gmail.com>
2025-08-19 17:49:07 +00:00
Weny Xu
9cdd0d8251 feat: derive macro ToRow (#6768)
* feat: introduce `ToRow` derive macro

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: use map

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: simplify

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-19 14:06:03 +00:00
Ning Sun
5a4036cc66 feat: update opentelemetry family (#6762)
* feat: update opentelemetry family

Signed-off-by: Ning Sun <sunning@greptime.com>

* doc: update doc samples

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: toml format

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: update default otel endpoint

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-19 09:23:50 +00:00
Yingwen
8fc3a9a9d7 feat: Implements async FlatMergeReader and FlatDedupReader (#6761)
* refactor: Add Flat prefix to MergeIterator and DedupIterator

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement MergeReader for RecordBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: use GenericNode for IterNode

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: flat merge reader to stream

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement FlatDedupReader

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add a benchmark for FlatMergeIterator

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename plain_projection to flat_projection

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-19 06:30:59 +00:00
liyang
0b29b41c17 ci: add Signed-off-by in update-dev-builder-version script (#6765)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-08-19 04:15:13 +00:00
github-actions[bot]
78dca8a0d7 ci: update dev-builder image tag (#6759)
* ci: update dev-builder image tag

Signed-off-by: evenyag <realevenyag@gmail.com>

* ci: ignore makefile

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
Co-authored-by: greptimedb-ci <greptimedb-ci@greptime.com>
Co-authored-by: evenyag <realevenyag@gmail.com>
2025-08-19 02:51:34 +00:00
Ning Sun
bf191c5763 test: fix sqlness hash character count (#6758) 2025-08-18 11:06:05 +00:00
Ruihang Xia
b652ea52ee fix: partition tree's dict size metrics mismatch (#6746)
fix: partition tree metrics mismatch

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-18 08:36:35 +00:00
Weny Xu
f64fc3a57a feat: add RateMeter for tracking memtable write throughput (#6744)
* feat: introduce `RateMeter`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-18 06:45:31 +00:00
fys
326198162e refactor: enhanced trigger interval (#6740)
* refactor: enhance trigger interval

* update greptime-proto

* fix: build
2025-08-18 04:03:26 +00:00
LFC
f9d2a89a0c chore: update datafusion family (#6675)
* chore: update datafusion family

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

* use official otel-arrow-rust

Signed-off-by: luofucong <luofc@foxmail.com>

* rebase

Signed-off-by: luofucong <luofc@foxmail.com>

* use the official orc-rust

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* remove the empty lines

Signed-off-by: luofucong <luofc@foxmail.com>

* try following PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-08-15 12:41:49 +00:00
discord9
dfc29eb3b3 feat: flownode grpc client to frontend tls option (#6750)
* feat: flownode grpc client to frontend tls option

Signed-off-by: discord9 <discord9@163.com>

* refactor: client tls option

Signed-off-by: discord9 <discord9@163.com>

* refactor: client_tls to frontend_tls

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-15 10:44:27 +00:00
Ning Sun
351826cd32 fix: refine shadow-rs dependency, remove libgit2, libz and potentially libssl (#6748)
* feat: update shadow-rs config and remove libgit/libz/libssl deps

* chore: remove native inputs from flake
2025-08-15 03:01:14 +00:00
Yingwen
60e01c7c3d feat: Implements last-non-null dedup strategy for flat format (#6709)
* feat: Implements FlatLastNonNull strategy

Dedup rows and keep last non null fields

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add basic test for FlatLastNonNull

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: port more last non null test

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: do merge rows after delete op

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename num_pk_columns to field_column_start

So we can support different formats later

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-14 14:38:47 +00:00
Weny Xu
021ad09c21 refactor: simplify WAL pruning procedure and introduce region flush trigger (#6741)
* chore: add logs

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: update wal config for metasrv

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce region flush trigger

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: debug assert

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: log level

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: simplify wal prune procedure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: upgrade rskafka

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: always flush inactive regions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: refactor flush trigger

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: remove unused code

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: typo

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: update unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add metrics

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-14 14:15:30 +00:00
discord9
2a3e4c7a82 fix: truncate manifest action compat (#6742)
* fix: truncate manifest action compat

Signed-off-by: discord9 <discord9@163.com>

* refactor: use simpler compat

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-14 14:05:17 +00:00
Zhenchi
92fd34ba22 refactor: split node manager trait (#6743)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-08-14 08:33:57 +00:00
sunheyi
d03f85287e feat: mysql add prepared_stmt_cache_capacity (#6639)
* feat: your clear and concise commit message

Signed-off-by: sunheyi <1061867552@qq.com>

* fix error

Signed-off-by: sunheyi <1061867552@qq.com>

* add param

Signed-off-by: sunheyi <1061867552@qq.com>

* fix

Signed-off-by: sunheyi <1061867552@qq.com>

* fix doc error

Signed-off-by: sunheyi <1061867552@qq.com>

---------

Signed-off-by: sunheyi <1061867552@qq.com>
2025-08-14 08:19:10 +00:00
Ruihang Xia
4d97754cb4 feat: persist manifest, SST and index files to staging dir (#6726)
* make flush and access layer aware of staging mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* staging flag in flush notify

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* staging manifest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* index methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* only stage manifest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* improve comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-14 06:02:31 +00:00
Ruihang Xia
1b6d924169 feat: predicate extractor (region prune part 1) (#6729)
* feat: predicate extractor (region prune part 1)

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* stricter check

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-14 04:42:50 +00:00
dennis zhuang
80f3ae650c docs: improve CONTRIBUTING.md (#6698)
* docs: improve CONTRIBUTING.md

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: features

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* chore: adds features to Makefile

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: adds make commands

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-14 03:48:05 +00:00
dennis zhuang
c0fe800e79 feat: improve slow queries options deserialization (#6734)
* feat: improve slow queries options deserialization

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: use serde default for struct

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-08-14 03:43:13 +00:00
yihong
56f5ccf823 fix: support unknown for timestamp function (#6708)
* fix: support unknown for timestamp function

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: some sqlness now no error

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make clippy happy

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-08-14 02:56:33 +00:00
Zhenchi
fb3b1d4866 feat: Store partition expr in RegionMetadata (#6699)
* wire partition.expr_json option constant and parsing

test(mito2): manifest roundtrip persists partition_expr JSON

test(mito2): create/open with partition.expr_json persists in manifest

docs: add comments for partition.expr_json option and RegionOptions.partition_expr

serde: include RegionOptions.partition_expr (skip if None)

test(mito2): doc intent and verify runtime backfill + persistence-after-alter for partition expr

add partition expr to create request

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add create_with_partition_expr_persists_manifest

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* pass partition expr to create request

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix sqlness

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* remove unused dep

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-13 22:05:37 +00:00
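A minimal sketch of the serde detail noted in #6699 above ("include RegionOptions.partition_expr, skip if None"), assuming a stripped-down `RegionOptions` with a single illustrative field; the real struct in mito2 carries many more options.

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Default, Serialize, Deserialize)]
pub struct RegionOptions {
    /// Partition expression supplied via the `partition.expr_json` option.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub partition_expr: Option<String>,
}

fn main() {
    // When no partition expression is set, the field is omitted entirely,
    // so previously written manifests stay unchanged.
    let opts = RegionOptions::default();
    assert_eq!(serde_json::to_string(&opts).unwrap(), "{}");
}
```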
Weny Xu
4fb7d92f7c feat(metasrv): implement topic statistics collection (#6732)
* feat(metasrv): collect topic stats

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: fix tests and apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-13 12:52:28 +00:00
Weny Xu
8659412cac feat: introduce PeriodicTopicStatsReporter (#6730)
* refactor: introduce `PeriodicTopicStatsReporter`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix typo

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: remote wal tests styling

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit test

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: handling region wal options not found

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: minor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: upgrade greptime-proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-13 11:46:50 +00:00
Zhenchi
dea87b7e57 refactor: use DummyCatalog to construct query engine for datanode (#6723)
* refactor: use DummyCatalog to construct query engine for datanode

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix clippy

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* move to query/dummy_catalog

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-08-13 09:49:51 +00:00
localhost
a678b4dfd6 chore: add u64 for EqualValue and set expr is true when filter is empty (#6731)
* chore: add u64 for EqualValue and set expr is true when filter is empty

* Update src/log-query/src/log_query.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: update EqualValue Uint to UInt

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-08-13 08:25:33 +00:00
Ruihang Xia
ccccaf7734 feat(log-query): try infer and cast type for literal value (#6712)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* one more test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* one more test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: add eq for log query

* skip for both literals

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: paomian <xpaomian@gmail.com>
2025-08-13 06:28:37 +00:00
yihong
f0bec4940f fix: two label_replace different from promql (#6720)
* fix: two label_replace different from promql

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: Jiachun Feng <jiachun_feng@proton.me>

* fix: another address

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: Jiachun Feng <jiachun_feng@proton.me>
2025-08-13 06:27:49 +00:00
yihong
5eb491df12 fix: label_join should work with unknown (#6714)
* fix: label_join should work with unknown

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: Jiachun Feng <jiachun_feng@proton.me>

* fix: address forgotten comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: Jiachun Feng <jiachun_feng@proton.me>
2025-08-13 03:45:40 +00:00
Weny Xu
1d84e802d8 feat: add integration tests for table reconciliation procedures part1 (#6705)
* feat: add integration tests for table reconciliation procedures

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-13 03:29:39 +00:00
Ruihang Xia
2992e70393 fix: correct offset's symbol (#6728)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-13 03:20:47 +00:00
Ruihang Xia
8a44137f37 chore: prefix debug_assertion only variables with underscore (#6727)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-13 02:50:16 +00:00
zyy17
777da35b0d refactor: unify the event recorder (#6689)
* refactor: unify the event recorder

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `table_name()` in `Event` trait

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: add `slow_query_options` in `Instance`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `EventHandlerOptions` and `options()` in `EventHandler` trait

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: add `aggregate_events_by_type()` and support log mode of slow query

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: polish the code

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: clippy errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: support to set ttl by using extension of query context

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: refine the configs fields

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: sqlness test errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: use `Duration` type instead of `String` for ttl fields

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: use pre-allocation for building RowInsertRequests

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: fix clippy errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: code review

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: fix integration errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: polish code for `group_events_by_type()` and `build_row_inserts_request()`, also add the unit tests

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: refine comments

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-12 18:26:12 +00:00
Ruihang Xia
9ad9a7d2bc feat: add all partition column to logical table automatically (#6711)
* feat: add all partition column to logical table automatically

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify builder

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix request builder

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test: update sqlness

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: discord9 <55937128+discord9@users.noreply.github.com>
Co-authored-by: discord9 <discord9@163.com>
2025-08-12 17:25:24 +00:00
Lei, HUANG
ff5d672583 chore: impl cast from primitives to PathType (#6724)
chore/impl-cast-to-primitives-for-path-type:
 ### Add `num_enum` for Enum Conversion and Update `PathType`

 - **Added `num_enum` Dependency**: Updated `Cargo.lock` and `Cargo.toml` to include `num_enum` for enum conversion functionality.
   - Files: `Cargo.lock`, `src/store-api/Cargo.toml`

 - **Enhanced `PathType` Enum**: Implemented `TryFromPrimitive` for `PathType` to enable conversion from primitive types.
   - Files: `src/store-api/src/region_request.rs`

 - **Added Unit Tests**: Introduced tests to verify the conversion of `PathType` enum to and from primitive types.
   - Files: `src/store-api/src/region_request.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-08-12 13:07:01 +00:00
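The `num_enum` usage described in #6724 above looks roughly like the sketch below; the variant names and discriminants are illustrative assumptions, not the actual `PathType` definition in `src/store-api/src/region_request.rs`.

```rust
use num_enum::TryFromPrimitive;

// Illustrative variants only.
#[derive(Debug, PartialEq, Eq, TryFromPrimitive)]
#[repr(u8)]
pub enum PathType {
    Bare = 0,
    Data = 1,
    Metadata = 2,
}

fn main() {
    // Deriving `TryFromPrimitive` provides `TryFrom<u8>`, so primitives
    // convert back into the enum and out-of-range values yield an error.
    assert_eq!(PathType::try_from(1u8).unwrap(), PathType::Data);
    assert!(PathType::try_from(42u8).is_err());
}
```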
Ruihang Xia
e495c614f7 perf: improve bloom filter reader's byte reading logic (#6658)
* perf: improve bloom filter reader's byte reading logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert toml change

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clarify comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* benchmark

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update lock file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* pub util fn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* note endian

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-12 11:37:25 +00:00
Ning Sun
e80e4a9ed7 fix: update pgwire to fix windows timeout issue (#6710)
* test: reproduce windows ci issue

* chore: update sqlx

* chore: update pgwire

* chore: update to a debug version of pgwire

* fix: update pgwire to resolve peek after read on windows

* ci: remove windows task from regular ci
2025-08-12 08:24:58 +00:00
Yingwen
1977ae50ee feat: Projection mapper for flat schema (#6679)
* feat: plain projection mapper wip

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: change ProjectionMapper to enum

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: convert plain batch

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add tests for the mapper

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: allow dead code

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: format code

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: change PlainProjectionMapper to FlatProjectionMapper

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix projection tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comments

Removes some unwrap()

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment

as_plain -> as_flat

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-12 07:36:23 +00:00
yihong
5cec0d4e3a fix: http and tql should return the same value for unknown (#6718)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-08-12 06:38:01 +00:00
fys
d2d6489b2f fix: unit test about trigger labels parse (#6716) 2025-08-12 04:43:42 +00:00
Ruihang Xia
25f926ea7d feat: mito region staging state (#6664)
* fix: not mark all deleted when partial trunc (#6654)

* fix: not mark all deleted when partial trunc & not update manifest when partial file range is empty

Signed-off-by: discord9 <discord9@163.com>

* docs: note

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* some tests and DdlRequest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* stage transit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* address CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct error type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: discord9 <55937128+discord9@users.noreply.github.com>
2025-08-12 03:17:47 +00:00
discord9
f159fcf599 fix: metrics without physical partition columns query push down (#6694)
* fix: metrics no part cols

Signed-off-by: discord9 <discord9@163.com>

* chore: typos

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* chore: rename stuff

Signed-off-by: discord9 <discord9@163.com>

* refactor: put partition rules in table

Signed-off-by: discord9 <discord9@163.com>

* more tests

Signed-off-by: discord9 <discord9@163.com>

* test: redact more

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-12 02:51:38 +00:00
Yingwen
e4454e0c7d feat: Implements last row dedup strategy for flat format (#6695)
* feat: implement FlatLastRow dedup strategy

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix dedup metrics

Add tests for iter with FlatLastRow strategy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix delete metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address review comments

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: move BatchLastRow position

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix typos

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-11 07:44:45 +00:00
Ruihang Xia
0781adaa3d feat: new HTTP API for formatting SQL (#6691)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-09 07:39:41 +00:00
LFC
253d89b5cc feat: able to set read preference to flownode (#6696)
fix: correctly compare the opened follower regions in startup

Signed-off-by: luofucong <luofc@foxmail.com>
2025-08-08 09:08:09 +00:00
localhost
3a2f5413e0 chore: add and/or for log query (#6681)
* chore: add and/or for log query

* chore: remove impl From<Vec<ColumnFilters>> for Filters
2025-08-08 08:48:03 +00:00
Yingwen
214ffe7109 feat: Implements an iterator to merge RecordBatches (#6666)
* feat: Implements the sync merge iterator for RecordBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: test merge iterator

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix incorrect column index

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix typos

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-08 07:03:22 +00:00
Ruihang Xia
3b1f172ab8 fix: TQL CTE parser take raw query string (#6671)
* take raw TQL part

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sort sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add order by

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more order by

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add comment back

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-08 06:22:56 +00:00
LFC
0215b39f61 fix: correctly set extension range source index (#6692)
refactor: extract the common codes of creating proto ColumnSchema and Row to helper functions

fix: explicitly set the follower max sequence when finding extension ranges to avoid potential concurrency hazard

Signed-off-by: luofucong <luofc@foxmail.com>
2025-08-08 06:17:25 +00:00
Weny Xu
01dc789816 refactor: refine error status code mappings (#6678)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-07 09:29:32 +00:00
Ning Sun
bbe48e9e8b feat: update pgwire to 0.32 (#6674)
* feat: update pgwire api

* feat: update pgwire and override on_query/on_execute

* feat: update pgwire to 0.32

* chore: remove code example

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-07 06:17:52 +00:00
Weny Xu
e2015ce1af feat(metric-engine): add metadata region cache (#6657)
* feat(metric-engine): add metadata region cache

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: use lru

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: default ttl

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: longer ttl

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-07 06:16:23 +00:00
Yingwen
7bb765af1d chore: pub access layer (#6670)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-06 13:34:38 +00:00
discord9
080b4b5d53 docs(rfc): rfc for gc worker (#6572)
* docs: rfc for gc worker

Signed-off-by: discord9 <discord9@163.com>

* revise: gc worker now runs after compaction & long-running queries check

Signed-off-by: discord9 <discord9@163.com>

* more drawback

Signed-off-by: discord9 <discord9@163.com>

* flowchart

Signed-off-by: discord9 <discord9@163.com>

* read consist optional&more detail

Signed-off-by: discord9 <discord9@163.com>

* chore: rephrase

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-06 12:03:28 +00:00
Weny Xu
c7c8495a6b feat: add metrics for reconciliation procedures (#6652)
* feat: add metrics for reconciliation procedures

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: improve error handling

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix(datanode): handle ignore_nonexistent_region flag in open_all_regions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: merge metrics

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: minor refactor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-06 11:24:03 +00:00
Yingwen
bbab35f285 perf: Reduce fulltext bloom load time (#6651)
* perf: cached reader do not get page concurrently

Otherwise they will all fetch the same pages in parallel

Signed-off-by: evenyag <realevenyag@gmail.com>

* perf: always disable zstd for bloom

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-06 08:25:31 +00:00
Zhenchi
6c6487ab30 chore: bump version to 0.17.0 (#6663)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-08-06 07:45:39 +00:00
Ruihang Xia
757694ae38 feat: count underscore in English tokenizer and improve performance (#6660)
* feat: count underscore in English tokenizer and improve performance

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update lock file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* assert lookup table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle utf8 alphanumeric

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* finalize

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-06 07:23:18 +00:00
Yingwen
39e2f122eb feat: EncodedBulkPartIter iters flat format and returns RecordBatch (#6655)
* feat: implements iter to read bulk part

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: BulkPartEncoder encodes BulkPart instead of mutation

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-06 06:50:01 +00:00
Lei, HUANG
877ce6e893 chore: add methods to catalog manager (#6656)
* chore/optimize-catalog:
 ### Add `table_id` Method to `CatalogManager`

 - **Files Modified**:
   - `src/catalog/src/kvbackend/manager.rs`
   - `src/catalog/src/lib.rs`

 - **Key Changes**:
   - Introduced a new asynchronous method `table_id` in the `CatalogManager` trait to retrieve the table ID based on catalog, schema, and table name.
   - Implemented the `table_id` method in `KvBackendCatalogManager` to fetch the table ID from the system catalog or cache, with a fallback to `pg_catalog` for Postgres channels.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/optimize-catalog:
 ### Add `table_info_by_id` Method to Catalog Managers

 - **`manager.rs`**: Introduced the `table_info_by_id` method in `KvBackendCatalogManager` to retrieve table information by table ID using the `TableInfoCacheRef`.
 - **`lib.rs`**: Updated the `CatalogManager` trait to include the new `table_info_by_id` method.
 - **`memory/manager.rs`**: Implemented the `table_info_by_id` method in `MemoryCatalogManager` to fetch table information by table ID from in-memory catalogs.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-08-06 06:25:32 +00:00
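A hedged sketch of the two lookup methods described in #6656 above, using placeholder types (`String` errors, a toy `TableInfo`); the actual trait in `src/catalog/src/lib.rs` uses the crate's own ID, info and error types.

```rust
use std::sync::Arc;

pub type TableId = u32;

#[derive(Debug, Clone)]
pub struct TableInfo {
    pub table_id: TableId,
    pub name: String,
}

#[async_trait::async_trait]
pub trait CatalogManager: Send + Sync {
    /// Resolve a table ID from catalog, schema and table names,
    /// returning `None` when the table does not exist.
    async fn table_id(
        &self,
        catalog: &str,
        schema: &str,
        table: &str,
    ) -> Result<Option<TableId>, String>;

    /// Fetch table metadata directly by its table ID.
    async fn table_info_by_id(
        &self,
        table_id: TableId,
    ) -> Result<Option<Arc<TableInfo>>, String>;
}
```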
Ruihang Xia
c8da35c7e5 feat(log-query): support binary op, scalar fn & is_true/is_false (#6659)
* rename symbol

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle binary op

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/log_query/planner.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reduce duplication

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-08-06 04:38:25 +00:00
Ruihang Xia
309e9d978c feat: support TQL CTE in planner (#6645)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-06 04:07:38 +00:00
zyy17
3a9f0220b5 fix: unable to record slow query (#6590)
* refactor: add process manager for prometheus query

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: modify `register_query()` API to accept parsed statement(`catalog::process_manager::QueryStatement`)

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add the slow query timer in the `Ticket` of ProcessManager

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* test: add integration tests

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add process manager in `do_exec_plan()`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* tests: add `test_postgres_slow_query` integration test

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: polish the code

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: create a query ticket and slow query timer if the statement is a query in `query_statement()`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: sqlness errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-06 03:35:12 +00:00
zyy17
cc35bab5e4 feat: record the migration events in metasrv (#6579)
* feat: collect procedure event

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* feat: collect region migration events

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* test: add integration test

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: fix docs error

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: fix integration test error

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: change status code for errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `event()` in Procedure

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: improve trait design

1. Add `user_metadata()` in `Procedure` trait;

2. Add `Eventable` trait;

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: polish the code

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-06 03:30:33 +00:00
Ruihang Xia
414db41219 fix: box Explain node in Statement to reduce stack size (#6661)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-06 02:24:45 +00:00
Ruihang Xia
ea024874e7 feat: use column expr with filters in LogQuery (#6646)
* feat: use column expr with filters in LogQuery

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove some clone

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-05 18:35:09 +00:00
discord9
e64469bbc4 fix: not mark all deleted when partial trunc (#6654)
* fix: not mark all deleted when partial trunc & not update manifest when partial file range is empty

Signed-off-by: discord9 <discord9@163.com>

* docs: note

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-05 11:45:47 +00:00
discord9
875207d26c feat: register all aggregate function to auto step aggr fn (#6596)
* feat: support generic aggr push down

Signed-off-by: discord9 <discord9@163.com>

* typo

Signed-off-by: discord9 <discord9@163.com>

* fix: type ck in merge wrapper

Signed-off-by: discord9 <discord9@163.com>

* test: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* feat: support all registered aggr func

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-05 11:37:45 +00:00
jeremyhi
9871c22740 fix: sequence peek with remote value (#6648)
* fix: sequence peek with remote value

* chore: more ut

* chore: add more ut
2025-08-05 08:28:09 +00:00
Yingwen
50f7f61fdc feat: Implements an iterator to read the RecordBatch in BulkPart (#6647)
* feat: impl RecordBatchIter for BulkPart

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename BulkPartIter to EncodedBulkPartIter

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add iter benchmark

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: filter by primary key columns

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: move struct definitions

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: bulk iter for flat schema

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: iter filter benchmark

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: use correct sequence array to compare

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove RecordBatchIter

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comments

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: apply projection first

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment

No need to check number of rows after filter

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-05 08:11:28 +00:00
Ruihang Xia
9c3b83e84d feat: use real data to truncate manipulate range (#6649)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-05 04:55:24 +00:00
Yingwen
e81d0f5861 feat: implements FlatReadFormat to project parquets with flat schema (#6638)
* feat: add plain read format

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: reduce unused code

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: reuse code

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: allow dead code

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: change ReadFormat to enum

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: as_primary_key() returns option

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove some allow dead_code

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename WriteFormat to PrimaryKeyWriteFormat

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add tests for read/write format

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: format code

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: dedup column ids in format

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename plain to flat

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: implements FlatReadFormat based on the new format

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support override sequence

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: new_override_sequence_array for ReadFormat

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comments

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-04 12:43:50 +00:00
Ning Sun
29e0092468 feat: schema/database support for label_values (#6631)
* feat: initial support for __schema__ in label values

* feat: filter database with matches

* refactor: skip unnecessary check

* fix: resolve schema matcher in label values

* test: add a test case for table not exists

* refactor: add matchop check on db label

* chore: merge main
2025-08-04 11:56:10 +00:00
Weny Xu
67a93a07a2 fix: fix sequence peek method to return correct values when sequence is not initialized (#6643)
fix: improve sequence peek method to handle uninitialized sequences

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-04 11:31:06 +00:00
discord9
1afa0afc67 feat: add partial truncate (#6602)
* feat: add partial truncate

Signed-off-by: discord9 <discord9@163.com>

* fix: per review

Signed-off-by: discord9 <discord9@163.com>

* feat: add proto partial truncate kind

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* chore: update branched proto

Signed-off-by: discord9 <discord9@163.com>

* feat: grpc support truncate WIP sql support

Signed-off-by: discord9 <discord9@163.com>

* wip: parse truncate range

Signed-off-by: discord9 <discord9@163.com>

* feat: truncate by range

Signed-off-by: discord9 <discord9@163.com>

* fix: truncate range display

Signed-off-by: discord9 <discord9@163.com>

* chore: resolve todo

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* test: more invalid parse

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: unused

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: update branch

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-04 10:50:27 +00:00
Weny Xu
414101fafa feat: introduce reconciliation interface (#6614)
* feat: introduce reconcile interface

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: upgrade proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-04 09:12:48 +00:00
Yingwen
280024d7f8 feat: Add option to limit the files reading simultaneously (#6635)
* feat: limits the max number of files to scan at the same time

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: make max_concurrent_scan_files configurable

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: reduce concurrent scan files to 128

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update config example

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test for max_concurrent_scan_files

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update config test

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-04 07:18:58 +00:00
Ruihang Xia
865ca44dbd feat: absent function in PromQL (#6618)
* feat: absent function in PromQL

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl serde

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ai suggests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve PR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* comment out some tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-04 06:59:58 +00:00
discord9
a3e55565dc fix: show create flow's expire after (#6641)
* fix: show create flow's expire after

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-04 05:03:14 +00:00
Keming
bed0c1e55f fix: bump greptime-sqlparser to avoid convert statement overflow (#6634)
bump the greptime-sqlparser

Co-authored-by: Yihong <zouzou0208@gmail.com>
2025-08-04 02:15:34 +00:00
Ruihang Xia
572e29b158 feat: support tls for pg backend (#6611)
* load tls

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl tls

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* pass options

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement require mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* default to prefer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update example config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust example config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle client cert and key properly

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement verify_ca and verify_full

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update integration test for config api

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change config name and default mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-04 00:41:08 +00:00
zyy17
31cb769507 chore: add limit in resources panel and Cache Miss panel (#6636)
chore: add `limit` in resources panel and 'Cache Miss' panel

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-03 19:09:32 +00:00
yihong
e19493db4a chore: update jieba, tantivy-jieba and tantivy versions (#6637)
* chore: update jieba, tantivy-jieba and tantivy versions

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-08-03 19:08:36 +00:00
Ruihang Xia
9817eb934d feat: support __schema__ and __database__ in Prom Remote Read (#6610)
* feat: support __schema__ and __database__ in Prom remote R/W

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert remote write changes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* check matcher type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-03 07:09:44 +00:00
Lei, HUANG
8639961cc9 chore: refine metrics tracking the flush/compaction cost time (#6630)
chore: refine metrics tracking the per-stage cost time during flush and compaction

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-08-02 12:13:42 +00:00
Ruihang Xia
a9cd117706 fix: only return the __name__ label when there is one (#6629)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-02 08:42:28 +00:00
ZonaHe
9485dbed64 feat: update dashboard to v0.10.6 (#6632)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-08-01 17:55:58 +00:00
discord9
21b71d1e10 feat: panic logger (#6633)
Signed-off-by: discord9 <discord9@163.com>
2025-08-01 11:31:15 +00:00
Weny Xu
cfaa9b4dda feat: introduce reconcile catalog procedure (#6613)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-01 11:03:00 +00:00
Weny Xu
19ad9a7f85 refactor: remove procedure executor from DDL manager (#6625)
* refactor: remove procedure executor from DDL manager

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-01 09:33:47 +00:00
shuiyisong
9e2f793b04 chore(otlp_metric): update metric and label naming (#6624)
* chore: update otlp metrics & labels naming

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: typo and test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* Update src/session/src/protocol_ctx.rs

* chore: add test cases for normalizing functions

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: Ning Sun <classicning@gmail.com>
2025-08-01 08:17:12 +00:00
Yingwen
52466fdd92 feat: Implement a converter to converts KeyValues into BulkPart (#6620)
* chore: add api to memtable to check bulk capability

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Add a converter to convert KeyValues into BulkPart

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: move supports_bulk_insert to MemtableBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: benchmark

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: use write_bulk if the memtable benefits from it

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: test BulkPartConverter

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add a flag to store unencoded primary keys

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: cache schema for converter

Implements to_flat_sst_arrow_schema

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: simplify tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: don't use bulk convert branch now

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address review comments

* simplify primary_key_column_builders check
* return error if value is not string

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add FlatSchemaOptions::from_encoding and test sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-01 07:59:11 +00:00
Ruihang Xia
869f8bf68a docs(rfc): compatibility test framework (#6460)
* docs(rfc): compatibility test framework

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-01 04:32:53 +00:00
Yingwen
9527e0df2f feat: HTTP API to activate/deactivate heap prof (activate by default) (#6593)
* feat: add HTTP API to activate/deactivate heap profiling

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add HTTP API to get profiling status

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: enable heap prof by default

Signed-off-by: evenyag <realevenyag@gmail.com>

* build: add "prof:true,prof_active:false" as default env to dockerfiles

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: activate heap profiling after log initialization

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add memory options to control whether to activate profiling

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update docs

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fmt toml

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix config test

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: usage of new api

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: log profile after version

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update how to docs

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: fix how to docs

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-01 03:24:56 +00:00
669 changed files with 44054 additions and 16448 deletions

View File

@@ -35,8 +35,8 @@ HIGHER_VERSION=$(printf "%s\n%s" "$CLEAN_CURRENT" "$CLEAN_LATEST" | sort -V | ta
if [ "$HIGHER_VERSION" = "$CLEAN_CURRENT" ]; then
echo "Current version ($CLEAN_CURRENT) is NEWER than or EQUAL to latest ($CLEAN_LATEST)"
echo "should-push-latest-tag=true" >> $GITHUB_OUTPUT
echo "is-current-version-latest=true" >> $GITHUB_OUTPUT
else
echo "Current version ($CLEAN_CURRENT) is OLDER than latest ($CLEAN_LATEST)"
echo "should-push-latest-tag=false" >> $GITHUB_OUTPUT
echo "is-current-version-latest=false" >> $GITHUB_OUTPUT
fi

View File

@@ -21,7 +21,7 @@ update_dev_builder_version() {
# Commit the changes.
git add Makefile
git commit -m "ci: update dev-builder image tag"
git commit -s -m "ci: update dev-builder image tag"
git push origin $BRANCH_NAME
# Create a Pull Request.

View File

@@ -12,6 +12,7 @@ on:
- 'docker/**'
- '.gitignore'
- 'grafana/**'
- 'Makefile'
workflow_dispatch:
name: CI
@@ -709,7 +710,7 @@ jobs:
- name: Install toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
cache: false
cache: false
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:

View File

@@ -10,6 +10,7 @@ on:
- 'docker/**'
- '.gitignore'
- 'grafana/**'
- 'Makefile'
push:
branches:
- main
@@ -21,6 +22,7 @@ on:
- 'docker/**'
- '.gitignore'
- 'grafana/**'
- 'Makefile'
workflow_dispatch:
name: CI

View File

@@ -111,7 +111,8 @@ jobs:
# The 'version' use as the global tag name of the release workflow.
version: ${{ steps.create-version.outputs.version }}
should-push-latest-tag: ${{ steps.check-version.outputs.should-push-latest-tag }}
# The 'is-current-version-latest' determines whether to update 'latest' Docker tags and downstream repositories.
is-current-version-latest: ${{ steps.check-version.outputs.is-current-version-latest }}
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -321,7 +322,7 @@ jobs:
image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
version: ${{ needs.allocate-runners.outputs.version }}
push-latest-tag: ${{ needs.allocate-runners.outputs.should-push-latest-tag == 'true' && github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
push-latest-tag: ${{ needs.allocate-runners.outputs.is-current-version-latest == 'true' && github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
- name: Set build image result
id: set-build-image-result
@@ -368,7 +369,7 @@ jobs:
dev-mode: false
upload-to-s3: true
update-version-info: true
push-latest-tag: ${{ needs.allocate-runners.outputs.should-push-latest-tag == 'true' && github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
push-latest-tag: ${{ needs.allocate-runners.outputs.is-current-version-latest == 'true' && github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
publish-github-release:
name: Create GitHub release and upload artifacts
@@ -476,7 +477,7 @@ jobs:
bump-helm-charts-version:
name: Bump helm charts version
if: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
if: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' && needs.allocate-runners.outputs.is-current-version-latest == 'true' }}
needs: [allocate-runners, publish-github-release]
runs-on: ubuntu-latest
permissions:
@@ -497,7 +498,7 @@ jobs:
bump-homebrew-greptime-version:
name: Bump homebrew greptime version
if: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
if: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' && needs.allocate-runners.outputs.is-current-version-latest == 'true' }}
needs: [allocate-runners, publish-github-release]
runs-on: ubuntu-latest
permissions:

View File

@@ -55,8 +55,9 @@ GreptimeDB uses the [Apache 2.0 license](https://github.com/GreptimeTeam/greptim
- To ensure that community is free and confident in its ability to use your contributions, please sign the Contributor License Agreement (CLA) which will be incorporated in the pull request process.
- Make sure all files have proper license header (running `docker run --rm -v $(pwd):/github/workspace ghcr.io/korandoru/hawkeye-native:v3 format` from the project root).
- Make sure all your codes are formatted and follow the [coding style](https://pingcap.github.io/style-guide/rust/) and [style guide](docs/style-guide.md).
- Make sure all unit tests are passed using [nextest](https://nexte.st/index.html) `cargo nextest run`.
- Make sure all clippy warnings are fixed (you can check it locally by running `cargo clippy --workspace --all-targets -- -D warnings`).
- Make sure all unit tests are passed using [nextest](https://nexte.st/index.html) `cargo nextest run --workspace --features pg_kvbackend,mysql_kvbackend` or `make test`.
- Make sure all clippy warnings are fixed (you can check it locally by running `cargo clippy --workspace --all-targets -- -D warnings` or `make clippy`).
- When modifying sample configuration files in `config/`, run `make config-docs` (which requires Docker to be installed) to update the configuration documentation and include it in your commit.
#### `pre-commit` Hooks

Cargo.lock (generated, 5248 changes): file diff suppressed because it is too large.

View File

@@ -73,7 +73,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.16.0"
version = "0.17.0"
edition = "2021"
license = "Apache-2.0"
@@ -98,11 +98,12 @@ rust.unexpected_cfgs = { level = "warn", check-cfg = ['cfg(tokio_unstable)'] }
# See for more detaiils: https://github.com/rust-lang/cargo/issues/11329
ahash = { version = "0.8", features = ["compile-time-rng"] }
aquamarine = "0.6"
arrow = { version = "54.2", features = ["prettyprint"] }
arrow-array = { version = "54.2", default-features = false, features = ["chrono-tz"] }
arrow-flight = "54.2"
arrow-ipc = { version = "54.2", default-features = false, features = ["lz4", "zstd"] }
arrow-schema = { version = "54.2", features = ["serde"] }
arrow = { version = "56.0", features = ["prettyprint"] }
arrow-array = { version = "56.0", default-features = false, features = ["chrono-tz"] }
arrow-buffer = "56.0"
arrow-flight = "56.0"
arrow-ipc = { version = "56.0", default-features = false, features = ["lz4", "zstd"] }
arrow-schema = { version = "56.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
# Remember to update axum-extra, axum-macros when updating axum
@@ -121,26 +122,27 @@ clap = { version = "4.4", features = ["derive"] }
config = "0.13.0"
crossbeam-utils = "0.8"
dashmap = "6.1"
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion-functions = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion-functions-aggregate-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion-physical-plan = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "12c0381babd52c681043957e9d6ee083a03f7646" }
datafusion = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-functions = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-functions-aggregate-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-optimizer = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-orc = { git = "https://github.com/GreptimeTeam/datafusion-orc", rev = "a0a5f902158f153119316eaeec868cff3fc8a99d" }
datafusion-physical-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-physical-plan = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-sql = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-substrait = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
deadpool = "0.12"
deadpool-postgres = "0.14"
derive_builder = "0.20"
dotenv = "0.15"
either = "1.15"
etcd-client = "0.14"
etcd-client = { git = "https://github.com/GreptimeTeam/etcd-client", rev = "f62df834f0cffda355eba96691fe1a9a332b75a7" }
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "fe8c13f5f3c1fbef63f57fbdd29f0490dfeb987b" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "df2bb74b5990c159dfd5b7a344eecf8f4307af64" }
hex = "0.4"
http = "1"
humantime = "2.1"
@@ -151,7 +153,7 @@ itertools = "0.14"
jsonb = { git = "https://github.com/databendlabs/jsonb.git", rev = "8c8d2fc294a39f3ff08909d60f718639cfba3875", default-features = false }
lazy_static = "1.4"
local-ip-address = "0.6"
loki-proto = { git = "https://github.com/GreptimeTeam/loki-proto.git", rev = "1434ecf23a2654025d86188fb5205e7a74b225d3" }
loki-proto = { git = "https://github.com/GreptimeTeam/loki-proto.git", rev = "3b7cd33234358b18ece977bf689dc6fb760f29ab" }
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "5618e779cf2bb4755b499c630fba4c35e91898cb" }
mockall = "0.13"
moka = "0.12"
@@ -159,9 +161,9 @@ nalgebra = "0.33"
nix = { version = "0.30.1", default-features = false, features = ["event", "fs", "process"] }
notify = "8.0"
num_cpus = "1.16"
object_store_opendal = "0.50"
object_store_opendal = "0.54"
once_cell = "1.18"
opentelemetry-proto = { version = "0.27", features = [
opentelemetry-proto = { version = "0.30", features = [
"gen-tonic",
"metrics",
"trace",
@@ -170,12 +172,14 @@ opentelemetry-proto = { version = "0.27", features = [
] }
ordered-float = { version = "4.3", features = ["serde"] }
parking_lot = "0.12"
parquet = { version = "54.2", default-features = false, features = ["arrow", "async", "object_store"] }
parquet = { version = "56.0", default-features = false, features = ["arrow", "async", "object_store"] }
paste = "1.0"
pin-project = "1.0"
pretty_assertions = "1.4.0"
prometheus = { version = "0.13.3", features = ["process"] }
promql-parser = { version = "0.6", features = ["ser"] }
prost = { version = "0.13", features = ["no-recursion-limit"] }
prost-types = "0.13"
raft-engine = { version = "0.4.1", default-features = false }
rand = "0.9"
ratelimit = "0.10"
@@ -187,7 +191,7 @@ reqwest = { version = "0.12", default-features = false, features = [
"stream",
"multipart",
] }
rskafka = { git = "https://github.com/influxdata/rskafka.git", rev = "8dbd01ed809f5a791833a594e85b144e36e45820", features = [
rskafka = { git = "https://github.com/influxdata/rskafka.git", rev = "a62120b6c74d68953464b256f858dc1c41a903b4", features = [
"transport-tls",
] }
rstest = "0.25"
@@ -196,18 +200,18 @@ rust_decimal = "1.33"
rustc-hash = "2.0"
# It is worth noting that we should try to avoid using aws-lc-rs until it can be compiled on various platforms.
rustls = { version = "0.23.25", default-features = false }
sea-query = "0.32"
serde = { version = "1.0", features = ["derive"] }
serde_json = { version = "1.0", features = ["float_roundtrip"] }
serde_with = "3"
shadow-rs = "1.1"
simd-json = "0.15"
similar-asserts = "1.6.0"
smallvec = { version = "1", features = ["serde"] }
snafu = "0.8"
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "0cf6c04490d59435ee965edd2078e8855bd8471e", features = [
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "39e4fc94c3c741981f77e9d63b5ce8c02e0a27ea", features = [
"visitor",
"serde",
] } # branch = "v0.54.x"
] } # branch = "v0.55.x"
sqlx = { version = "0.8", features = [
"runtime-tokio-rustls",
"mysql",
@@ -217,20 +221,20 @@ sqlx = { version = "0.8", features = [
strum = { version = "0.27", features = ["derive"] }
sysinfo = "0.33"
tempfile = "3"
tokio = { version = "1.40", features = ["full"] }
tokio = { version = "1.47", features = ["full"] }
tokio-postgres = "0.7"
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-stream = "0.1"
tokio-util = { version = "0.7", features = ["io-util", "compat"] }
toml = "0.8.8"
tonic = { version = "0.12", features = ["tls", "gzip", "zstd"] }
tonic = { version = "0.13", features = ["tls-ring", "gzip", "zstd"] }
tower = "0.5"
tower-http = "0.6"
tracing = "0.1"
tracing-appender = "0.2"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json", "fmt"] }
typetag = "0.2"
uuid = { version = "1.7", features = ["serde", "v4", "fast-rng"] }
uuid = { version = "1.17", features = ["serde", "v4", "fast-rng"] }
vrl = "0.25"
zstd = "0.13"
# DO_NOT_REMOVE_THIS: END_OF_EXTERNAL_DEPENDENCIES
@@ -289,7 +293,7 @@ mito-codec = { path = "src/mito-codec" }
mito2 = { path = "src/mito2" }
object-store = { path = "src/object-store" }
operator = { path = "src/operator" }
otel-arrow-rust = { git = "https://github.com/open-telemetry/otel-arrow", rev = "5d551412d2a12e689cde4d84c14ef29e36784e51", features = [
otel-arrow-rust = { git = "https://github.com/GreptimeTeam/otel-arrow", rev = "2d64b7c0fa95642028a8205b36fe9ea0b023ec59", features = [
"server",
] }
partition = { path = "src/partition" }
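For orientation, member crates in the workspace pick up these pinned versions through Cargo's workspace dependency inheritance; the following is a minimal, hypothetical manifest sketch (the crate name and the chosen dependencies are illustrative, not taken from this diff):

## Hypothetical member-crate Cargo.toml, illustrating workspace inheritance only.
[package]
name = "example-member"            # not a real crate in this repository
version.workspace = true           # inherits 0.17.0 from [workspace.package]
edition.workspace = true

[dependencies]
## `workspace = true` resolves to the versions pinned above,
## e.g. arrow/parquet 56.0, tonic 0.13, and the GreptimeTeam datafusion fork.
arrow = { workspace = true }
parquet = { workspace = true }
tonic = { workspace = true }
datafusion = { workspace = true }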

View File

@@ -8,7 +8,7 @@ CARGO_BUILD_OPTS := --locked
IMAGE_REGISTRY ?= docker.io
IMAGE_NAMESPACE ?= greptime
IMAGE_TAG ?= latest
DEV_BUILDER_IMAGE_TAG ?= 2025-05-19-b2377d4b-20250520045554
DEV_BUILDER_IMAGE_TAG ?= 2025-05-19-32619816-20250818043248
BUILDX_MULTI_PLATFORM_BUILD ?= false
BUILDX_BUILDER_NAME ?= gtbuilder
BASE_IMAGE ?= ubuntu
@@ -22,7 +22,7 @@ SQLNESS_OPTS ?=
ETCD_VERSION ?= v3.5.9
ETCD_IMAGE ?= quay.io/coreos/etcd:${ETCD_VERSION}
RETRY_COUNT ?= 3
NEXTEST_OPTS := --retries ${RETRY_COUNT}
NEXTEST_OPTS := --retries ${RETRY_COUNT} --features pg_kvbackend,mysql_kvbackend
BUILD_JOBS ?= $(shell which nproc 1>/dev/null && expr $$(nproc) / 2) # If nproc is not available, we don't set the build jobs.
ifeq ($(BUILD_JOBS), 0) # If the number of cores is less than 2, set the build jobs to 1.
BUILD_JOBS := 1

View File

@@ -41,6 +41,7 @@
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
| `mysql.runtime_size` | Integer | `2` | The number of server worker threads. |
| `mysql.keep_alive` | String | `0s` | Server-side keep-alive time.<br/>Set to 0 (default) to disable. |
| `mysql.prepared_stmt_cache_size` | Integer | `10000` | Maximum entries in the MySQL prepared statement cache; default is 10,000. |
| `mysql.tls` | -- | -- | -- |
| `mysql.tls.mode` | String | `disable` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- `disable` (default value)<br/>- `prefer`<br/>- `require`<br/>- `verify-ca`<br/>- `verify-full` |
| `mysql.tls.cert_path` | String | Unset | Certificate file path. |
@@ -147,6 +148,7 @@
| `region_engine.mito.write_cache_ttl` | String | Unset | TTL for write cache. |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
| `region_engine.mito.max_concurrent_scan_files` | Integer | `128` | Maximum number of SST files to scan concurrently. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
@@ -185,12 +187,13 @@
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4318` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP http |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `slow_query` | -- | -- | The slow query log options. |
| `slow_query.enable` | Bool | `false` | Whether to enable slow query log. |
@@ -207,6 +210,8 @@
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
## Distributed Mode
@@ -245,6 +250,7 @@
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
| `mysql.runtime_size` | Integer | `2` | The number of server worker threads. |
| `mysql.keep_alive` | String | `0s` | Server-side keep-alive time.<br/>Set to 0 (default) to disable. |
| `mysql.prepared_stmt_cache_size` | Integer | `10000` | Maximum entries in the MySQL prepared statement cache; default is 10,000. |
| `mysql.tls` | -- | -- | -- |
| `mysql.tls.mode` | String | `disable` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- `disable` (default value)<br/>- `prefer`<br/>- `require`<br/>- `verify-ca`<br/>- `verify-full` |
| `mysql.tls.cert_path` | String | Unset | Certificate file path. |
@@ -290,19 +296,20 @@
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4318` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP http |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `slow_query` | -- | -- | The slow query log options. |
| `slow_query.enable` | Bool | `true` | Whether to enable slow query log. |
| `slow_query.record_type` | String | `system_table` | The record type of slow queries. It can be `system_table` or `log`.<br/>If `system_table` is selected, the slow queries will be recorded in a system table `greptime_private.slow_queries`.<br/>If `log` is selected, the slow queries will be logged in a log file `greptimedb-slow-queries.*`. |
| `slow_query.threshold` | String | `30s` | The threshold of slow query. It can be human readable time string, for example: `10s`, `100ms`, `1s`. |
| `slow_query.sample_ratio` | Float | `1.0` | The sampling ratio of slow query log. The value should be in the range of (0, 1]. For example, `0.1` means 10% of the slow queries will be logged and `1.0` means all slow queries will be logged. |
| `slow_query.ttl` | String | `30d` | The TTL of the `slow_queries` system table. Default is `30d` when `record_type` is `system_table`. |
| `slow_query.ttl` | String | `90d` | The TTL of the `slow_queries` system table. Default is `90d` when `record_type` is `system_table`. |
| `export_metrics` | -- | -- | The frontend can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | whether enable export metrics. |
| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
@@ -311,6 +318,10 @@
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
| `event_recorder` | -- | -- | Configuration options for the event recorder. |
| `event_recorder.ttl` | String | `90d` | TTL for the events table that will be used to store the events. Default is `90d`. |
### Metasrv
@@ -333,6 +344,12 @@
| `runtime` | -- | -- | The runtime options. |
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for global write operations. |
| `backend_tls` | -- | -- | TLS configuration for kv store backend (only applicable for PostgreSQL/MySQL backends)<br/>When using PostgreSQL or MySQL as metadata store, you can configure TLS here |
| `backend_tls.mode` | String | `prefer` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- "disable" - No TLS<br/>- "prefer" (default) - Try TLS, fallback to plain<br/>- "require" - Require TLS<br/>- "verify_ca" - Require TLS and verify CA<br/>- "verify_full" - Require TLS and verify hostname |
| `backend_tls.cert_path` | String | `""` | Path to client certificate file (for client authentication)<br/>Like "/path/to/client.crt" |
| `backend_tls.key_path` | String | `""` | Path to client private key file (for client authentication)<br/>Like "/path/to/client.key" |
| `backend_tls.ca_cert_path` | String | `""` | Path to CA certificate file (for server certificate verification)<br/>Required when using custom CAs or self-signed certificates<br/>Leave empty to use system root certificates only<br/>Like "/path/to/ca.crt" |
| `backend_tls.watch` | Bool | `false` | Watch for certificate file changes and auto reload |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.bind_addr` | String | `127.0.0.1:3002` | The address to bind the gRPC server. |
| `grpc.server_addr` | String | `127.0.0.1:3002` | The communication server address for the frontend and datanode to connect to metasrv.<br/>If left empty or unset, the server will automatically use the IP address of the first network interface<br/>on the host, with the same port number as the one specified in `bind_addr`. |
@@ -360,26 +377,32 @@
| `datanode.client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
| `wal` | -- | -- | -- |
| `wal.provider` | String | `raft_engine` | -- |
| `wal.broker_endpoints` | Array | -- | The broker endpoints of the Kafka cluster. |
| `wal.auto_create_topics` | Bool | `true` | Automatically create topics for WAL.<br/>Set to `true` to automatically create topics for WAL.<br/>Otherwise, use topics named `topic_name_prefix_[0..num_topics)` |
| `wal.auto_prune_interval` | String | `0s` | Interval of automatically WAL pruning.<br/>Set to `0s` to disable automatically WAL pruning which delete unused remote WAL entries periodically. |
| `wal.trigger_flush_threshold` | Integer | `0` | The threshold to trigger a flush operation of a region in automatically WAL pruning.<br/>Metasrv will send a flush request to flush the region when:<br/>`trigger_flush_threshold` + `prunable_entry_id` < `max_prunable_entry_id`<br/>where:<br/>- `prunable_entry_id` is the maximum entry id that can be pruned of the region.<br/>- `max_prunable_entry_id` is the maximum prunable entry id among all regions in the same topic.<br/>Set to `0` to disable the flush operation. |
| `wal.auto_prune_parallelism` | Integer | `10` | Concurrent task limit for automatically WAL pruning. |
| `wal.num_topics` | Integer | `64` | Number of topics. |
| `wal.selector_type` | String | `round_robin` | Topic selector type.<br/>Available selector types:<br/>- `round_robin` (default) |
| `wal.topic_name_prefix` | String | `greptimedb_wal_topic` | A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.<br/>Only accepts strings that match the following regular expression pattern:<br/>[a-zA-Z_:-][a-zA-Z0-9_:\-\.@#]*<br/>i.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1. |
| `wal.replication_factor` | Integer | `1` | Expected number of replicas of each partition. |
| `wal.create_topic_timeout` | String | `30s` | Above which a topic creation operation will be cancelled. |
| `wal.broker_endpoints` | Array | -- | The broker endpoints of the Kafka cluster.<br/><br/>**It's only used when the provider is `kafka`**. |
| `wal.auto_create_topics` | Bool | `true` | Automatically create topics for WAL.<br/>Set to `true` to automatically create topics for WAL.<br/>Otherwise, use topics named `topic_name_prefix_[0..num_topics)`<br/>**It's only used when the provider is `kafka`**. |
| `wal.auto_prune_interval` | String | `10m` | Interval of automatically WAL pruning.<br/>Set to `0s` to disable automatically WAL pruning which delete unused remote WAL entries periodically.<br/>**It's only used when the provider is `kafka`**. |
| `wal.flush_trigger_size` | String | `512MB` | Estimated size threshold to trigger a flush when using Kafka remote WAL.<br/>Since multiple regions may share a Kafka topic, the estimated size is calculated as:<br/> (latest_entry_id - flushed_entry_id) * avg_record_size<br/>MetaSrv triggers a flush for a region when this estimated size exceeds `flush_trigger_size`.<br/>- `latest_entry_id`: The latest entry ID in the topic.<br/>- `flushed_entry_id`: The last flushed entry ID for the region.<br/>Set to "0" to let the system decide the flush trigger size.<br/>**It's only used when the provider is `kafka`**. |
| `wal.auto_prune_parallelism` | Integer | `10` | Concurrent task limit for automatically WAL pruning.<br/>**It's only used when the provider is `kafka`**. |
| `wal.num_topics` | Integer | `64` | Number of topics used for remote WAL.<br/>**It's only used when the provider is `kafka`**. |
| `wal.selector_type` | String | `round_robin` | Topic selector type.<br/>Available selector types:<br/>- `round_robin` (default)<br/>**It's only used when the provider is `kafka`**. |
| `wal.topic_name_prefix` | String | `greptimedb_wal_topic` | A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.<br/>Only accepts strings that match the following regular expression pattern:<br/>[a-zA-Z_:-][a-zA-Z0-9_:\-\.@#]*<br/>i.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1.<br/>**It's only used when the provider is `kafka`**. |
| `wal.replication_factor` | Integer | `1` | Expected number of replicas of each partition.<br/>**It's only used when the provider is `kafka`**. |
| `wal.create_topic_timeout` | String | `30s` | The timeout for creating a Kafka topic.<br/>**It's only used when the provider is `kafka`**. |
| `event_recorder` | -- | -- | Configuration options for the event recorder. |
| `event_recorder.ttl` | String | `90d` | TTL for the events table that will be used to store the events. Default is `90d`. |
| `stats_persistence` | -- | -- | Configuration options for the stats persistence. |
| `stats_persistence.ttl` | String | `30d` | TTL for the stats table that will be used to store the stats. Default is `30d`.<br/>Set to `0s` to disable stats persistence. |
| `stats_persistence.interval` | String | `60s` | The interval to persist the stats. Default is `60s`.<br/>The minimum value is `60s`, if the value is less than `60s`, it will be overridden to `60s`. |
| `logging` | -- | -- | The logging options. |
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4318` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP http |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `export_metrics` | -- | -- | The metasrv can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | whether enable export metrics. |
@@ -389,6 +412,8 @@
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
### Datanode
@@ -501,6 +526,7 @@
| `region_engine.mito.write_cache_ttl` | String | Unset | TTL for write cache. |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
| `region_engine.mito.max_concurrent_scan_files` | Integer | `128` | Maximum number of SST files to scan concurrently. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
@@ -539,12 +565,13 @@
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4318` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP http |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `export_metrics` | -- | -- | The datanode can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | whether enable export metrics. |
@@ -554,6 +581,8 @@
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
### Flownode
@@ -573,6 +602,12 @@
| `flow.batching_mode.experimental_frontend_activity_timeout` | String | `60s` | Frontend activity timeout<br/>if frontend is down(not sending heartbeat) for more than frontend_activity_timeout,<br/>it will be removed from the list that flownode use to connect |
| `flow.batching_mode.experimental_max_filter_num_per_query` | Integer | `20` | Maximum number of filters allowed in a single query |
| `flow.batching_mode.experimental_time_window_merge_threshold` | Integer | `3` | Time window merge distance |
| `flow.batching_mode.read_preference` | String | `Leader` | Read preference of the Frontend client. |
| `flow.batching_mode.frontend_tls` | -- | -- | -- |
| `flow.batching_mode.frontend_tls.enabled` | Bool | `false` | Whether to enable TLS for client. |
| `flow.batching_mode.frontend_tls.server_ca_cert_path` | String | Unset | Server Certificate file path. |
| `flow.batching_mode.frontend_tls.client_cert_path` | String | Unset | Client Certificate file path. |
| `flow.batching_mode.frontend_tls.client_key_path` | String | Unset | Client Private key file path. |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.bind_addr` | String | `127.0.0.1:6800` | The address to bind the gRPC server. |
| `grpc.server_addr` | String | `127.0.0.1:6800` | The address advertised to the metasrv,<br/>and used for connections from outside the host |
@@ -600,14 +635,17 @@
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4318` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP http |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `query` | -- | -- | -- |
| `query.parallelism` | Integer | `1` | Parallelism of the query engine for query sent by flownode.<br/>Default to 1, so it won't use too much cpu or memory |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
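As a quick orientation, the newly documented keys in the tables above correspond to TOML along these lines; the values are the documented defaults and the fragment is illustrative only:

## Illustrative fragment combining options documented above (defaults shown).
[mysql]
## Maximum entries in the MySQL prepared statement cache.
prepared_stmt_cache_size = 10000

[logging]
## Note the endpoint now includes the /v1/traces path.
otlp_endpoint = "http://localhost:4318/v1/traces"

## Additional OTLP headers, only valid when using OTLP http.
[logging.otlp_headers]
#Authorization = "Bearer my-token"

[memory]
## Heap profiling is activated at startup only if MALLOC_CONF is set to
## "prof:true,prof_active:false" (the official image sets this).
enable_heap_profiling = true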

View File

@@ -474,6 +474,9 @@ sst_write_buffer_size = "8MB"
## Capacity of the channel to send data from parallel scan tasks to the main task.
parallel_scan_channel_size = 32
## Maximum number of SST files to scan concurrently.
max_concurrent_scan_files = 128
## Whether to allow stale WAL entries read during replay.
allow_stale_entries = false
@@ -629,7 +632,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4318"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -643,6 +646,13 @@ max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers, only valid when using OTLP http
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of tracing will be sampled and exported.
## Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.
## ratio > 1 are treated as 1. Fractions < 0 are treated as 0
@@ -669,3 +679,11 @@ headers = { }
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true
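For the new `max_concurrent_scan_files` limit added earlier in this file, a sketch of the relevant engine fragment follows; the section headers are assumed from the documented option path `region_engine.mito.max_concurrent_scan_files` rather than shown in these hunks:

## Assumed headers, based on the documented option path.
[[region_engine]]
[region_engine.mito]
## Capacity of the channel to send data from parallel scan tasks to the main task.
parallel_scan_channel_size = 32
## New: maximum number of SST files to scan concurrently (default 128).
max_concurrent_scan_files = 128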

View File

@@ -30,6 +30,20 @@ node_id = 14
#+experimental_max_filter_num_per_query=20
## Time window merge distance
#+experimental_time_window_merge_threshold=3
## Read preference of the Frontend client.
#+read_preference="Leader"
[flow.batching_mode.frontend_tls]
## Whether to enable TLS for client.
#+enabled=false
## Server Certificate file path.
## @toml2docs:none-default
#+server_ca_cert_path=""
## Client Certificate file path.
## @toml2docs:none-default
#+client_cert_path=""
## Client Private key file path.
## @toml2docs:none-default
#+client_key_path=""
## The gRPC server options.
[grpc]
@@ -106,7 +120,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4318"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -120,6 +134,13 @@ max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers, only valid when using OTLP http
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of tracing will be sampled and exported.
## Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.
## ratio > 1 are treated as 1. Fractions < 0 are treated as 0
@@ -136,3 +157,11 @@ default_ratio = 1.0
## Parallelism of the query engine for query sent by flownode.
## Default to 1, so it won't use too much cpu or memory
parallelism = 1
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true
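To illustrate the new batching-mode TLS keys earlier in this file, a hedged flownode fragment that enables TLS toward the frontend might look like this (certificate paths are placeholders):

[flow.batching_mode.frontend_tls]
## Whether to enable TLS for the frontend client.
enabled = true
## Placeholder paths; substitute real certificate locations.
server_ca_cert_path = "/path/to/server-ca.crt"
client_cert_path = "/path/to/client.crt"
client_key_path = "/path/to/client.key"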

View File

@@ -90,6 +90,8 @@ runtime_size = 2
## Server-side keep-alive time.
## Set to 0 (default) to disable.
keep_alive = "0s"
## Maximum entries in the MySQL prepared statement cache; default is 10,000.
prepared_stmt_cache_size = 10000
# MySQL server TLS options.
[mysql.tls]
@@ -221,7 +223,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4318"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -235,6 +237,13 @@ max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers, only valid when using OTLP http
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of tracing will be sampled and exported.
## Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.
## ratio > 1 are treated as 1. Fractions < 0 are treated as 0
@@ -257,8 +266,8 @@ threshold = "30s"
## The sampling ratio of slow query log. The value should be in the range of (0, 1]. For example, `0.1` means 10% of the slow queries will be logged and `1.0` means all slow queries will be logged.
sample_ratio = 1.0
## The TTL of the `slow_queries` system table. Default is `30d` when `record_type` is `system_table`.
ttl = "30d"
## The TTL of the `slow_queries` system table. Default is `90d` when `record_type` is `system_table`.
ttl = "90d"
## The frontend can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API.
## This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
@@ -280,3 +289,16 @@ headers = { }
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true
## Configuration options for the event recorder.
[event_recorder]
## TTL for the events table that will be used to store the events. Default is `90d`.
ttl = "90d"

View File

@@ -65,6 +65,34 @@ node_max_idle_time = "24hours"
## The number of threads to execute the runtime for global write operations.
#+ compact_rt_size = 4
## TLS configuration for kv store backend (only applicable for PostgreSQL/MySQL backends)
## When using PostgreSQL or MySQL as metadata store, you can configure TLS here
[backend_tls]
## TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
## - "disable" - No TLS
## - "prefer" (default) - Try TLS, fallback to plain
## - "require" - Require TLS
## - "verify_ca" - Require TLS and verify CA
## - "verify_full" - Require TLS and verify hostname
mode = "prefer"
## Path to client certificate file (for client authentication)
## Like "/path/to/client.crt"
cert_path = ""
## Path to client private key file (for client authentication)
## Like "/path/to/client.key"
key_path = ""
## Path to CA certificate file (for server certificate verification)
## Required when using custom CAs or self-signed certificates
## Leave empty to use system root certificates only
## Like "/path/to/ca.crt"
ca_cert_path = ""
## Watch for certificate file changes and auto reload
watch = false
## The gRPC server options.
[grpc]
## The address to bind the gRPC server.
@@ -148,50 +176,61 @@ tcp_nodelay = true
# - `kafka`: metasrv **has to be** configured with kafka wal config when using kafka wal provider in datanode.
provider = "raft_engine"
# Kafka wal config.
## The broker endpoints of the Kafka cluster.
##
## **It's only used when the provider is `kafka`**.
broker_endpoints = ["127.0.0.1:9092"]
## Automatically create topics for WAL.
## Set to `true` to automatically create topics for WAL.
## Otherwise, use topics named `topic_name_prefix_[0..num_topics)`
## **It's only used when the provider is `kafka`**.
auto_create_topics = true
## Interval of automatic WAL pruning.
## Set to `0s` to disable automatic WAL pruning, which periodically deletes unused remote WAL entries.
auto_prune_interval = "0s"
## **It's only used when the provider is `kafka`**.
auto_prune_interval = "10m"
## The threshold to trigger a flush operation of a region during automatic WAL pruning.
## Metasrv will send a flush request to flush the region when:
## `trigger_flush_threshold` + `prunable_entry_id` < `max_prunable_entry_id`
## where:
## - `prunable_entry_id` is the maximum entry id that can be pruned of the region.
## - `max_prunable_entry_id` is the maximum prunable entry id among all regions in the same topic.
## Set to `0` to disable the flush operation.
trigger_flush_threshold = 0
## Estimated size threshold to trigger a flush when using Kafka remote WAL.
## Since multiple regions may share a Kafka topic, the estimated size is calculated as:
## (latest_entry_id - flushed_entry_id) * avg_record_size
## MetaSrv triggers a flush for a region when this estimated size exceeds `flush_trigger_size`.
## - `latest_entry_id`: The latest entry ID in the topic.
## - `flushed_entry_id`: The last flushed entry ID for the region.
## Set to "0" to let the system decide the flush trigger size.
## **It's only used when the provider is `kafka`**.
flush_trigger_size = "512MB"
## Concurrent task limit for automatic WAL pruning.
## **It's only used when the provider is `kafka`**.
auto_prune_parallelism = 10
## Number of topics.
## Number of topics used for remote WAL.
## **It's only used when the provider is `kafka`**.
num_topics = 64
## Topic selector type.
## Available selector types:
## - `round_robin` (default)
## **It's only used when the provider is `kafka`**.
selector_type = "round_robin"
## A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.
## Only accepts strings that match the following regular expression pattern:
## [a-zA-Z_:-][a-zA-Z0-9_:\-\.@#]*
## e.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1.
## **It's only used when the provider is `kafka`**.
topic_name_prefix = "greptimedb_wal_topic"
## Expected number of replicas of each partition.
## **It's only used when the provider is `kafka`**.
replication_factor = 1
## Above which a topic creation operation will be cancelled.
## The timeout for creating a Kafka topic.
## **It's only used when the provider is `kafka`**.
create_topic_timeout = "30s"
# The Kafka SASL configuration.
@@ -212,6 +251,20 @@ create_topic_timeout = "30s"
# client_cert_path = "/path/to/client_cert"
# client_key_path = "/path/to/key"
## Configuration options for the event recorder.
[event_recorder]
## TTL for the events table that will be used to store the events. Default is `90d`.
ttl = "90d"
## Configuration options for the stats persistence.
[stats_persistence]
## TTL for the stats table that will be used to store the stats. Default is `30d`.
## Set to `0s` to disable stats persistence.
ttl = "30d"
## The interval to persist the stats. Default is `60s`.
## The minimum value is `60s`; if the value is less than `60s`, it will be overridden to `60s`.
interval = "60s"
## The logging options.
[logging]
## The directory to store the log files. If set to empty, logs will not be written to files.
@@ -225,7 +278,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4318"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -239,6 +292,14 @@ max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers, only valid when using OTLP http
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of traces that will be sampled and exported.
## Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.
## Ratios > 1 are treated as 1, and ratios < 0 are treated as 0.
@@ -265,3 +326,11 @@ headers = { }
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true

View File

@@ -85,7 +85,8 @@ runtime_size = 2
## Server-side keep-alive time.
## Set to 0 (default) to disable.
keep_alive = "0s"
## Maximum entries in the MySQL prepared statement cache; default is 10,000.
prepared_stmt_cache_size = 10000
# MySQL server TLS options.
[mysql.tls]
@@ -565,6 +566,9 @@ sst_write_buffer_size = "8MB"
## Capacity of the channel to send data from parallel scan tasks to the main task.
parallel_scan_channel_size = 32
## Maximum number of SST files to scan concurrently.
max_concurrent_scan_files = 128
## Whether to allow stale WAL entries read during replay.
allow_stale_entries = false
@@ -720,7 +724,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4318"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -734,6 +738,13 @@ max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers, only valid when using OTLP http
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of traces that will be sampled and exported.
## Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.
## Ratios > 1 are treated as 1, and ratios < 0 are treated as 0.
@@ -783,3 +794,11 @@ headers = { }
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true

View File

@@ -47,4 +47,6 @@ WORKDIR /greptime
COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["greptime"]

View File

@@ -47,4 +47,6 @@ WORKDIR /greptime
COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["greptime"]

View File

@@ -15,4 +15,6 @@ ADD $TARGETARCH/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["greptime"]

View File

@@ -18,4 +18,6 @@ ENV PATH /greptime/bin/:$PATH
ENV TARGET_BIN=$TARGET_BIN
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["sh", "-c", "exec $TARGET_BIN \"$@\"", "--"]

View File

@@ -19,7 +19,7 @@ ARG PROTOBUF_VERSION=29.3
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOBUF_VERSION}/protoc-${PROTOBUF_VERSION}-linux-x86_64.zip && \
unzip protoc-${PROTOBUF_VERSION}-linux-x86_64.zip -d protoc3;
RUN mv protoc3/bin/* /usr/local/bin/
RUN mv protoc3/include/* /usr/local/include/

View File

@@ -30,6 +30,23 @@ curl https://raw.githubusercontent.com/brendangregg/FlameGraph/master/flamegraph
## Profiling
### Configuration
You can control heap profiling activation through configuration. Add the following to your configuration file:
```toml
[memory]
# Whether to enable heap profiling activation during startup.
# When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
# is set to "prof:true,prof_active:false". The official image adds this env variable.
# Default is true.
enable_heap_profiling = true
```
By default, if you set `MALLOC_CONF=prof:true,prof_active:false`, the database will enable profiling during startup. You can disable this behavior by setting `enable_heap_profiling = false` in the configuration.
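For illustration, here is a minimal Rust sketch of the startup decision described above; it is not GreptimeDB's actual code, and `activate_heap_profiling` plus the exact string matching on `MALLOC_CONF` are assumptions made for the example:
```rust
use std::env;

/// Hypothetical startup hook: a real implementation would flip jemalloc's
/// `prof.active` knob; here it is only a placeholder.
fn activate_heap_profiling() {
    println!("heap profiling activated");
}

/// Activate profiling only when the config switch is on and `MALLOC_CONF`
/// requests deferred activation ("prof:true,prof_active:false").
fn maybe_activate_heap_profiling(enable_heap_profiling: bool) {
    if !enable_heap_profiling {
        return;
    }
    let malloc_conf = env::var("MALLOC_CONF").unwrap_or_default();
    if malloc_conf.contains("prof:true") && malloc_conf.contains("prof_active:false") {
        activate_heap_profiling();
    }
}

fn main() {
    // With MALLOC_CONF=prof:true,prof_active:false (as set in the official image),
    // profiling is activated unless `enable_heap_profiling = false` is configured.
    maybe_activate_heap_profiling(true);
}
```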
### Starting with environment variables
Start GreptimeDB instance with environment variables:
```bash
@@ -40,6 +57,23 @@ MALLOC_CONF=prof:true ./target/debug/greptime standalone start
_RJEM_MALLOC_CONF=prof:true ./target/debug/greptime standalone start
```
### Memory profiling control
You can control heap profiling activation using the new HTTP APIs:
```bash
# Check current profiling status
curl -X GET localhost:4000/debug/prof/mem/status
# Activate heap profiling (if not already active)
curl -X POST localhost:4000/debug/prof/mem/activate
# Deactivate heap profiling
curl -X POST localhost:4000/debug/prof/mem/deactivate
```
### Dump memory profiling data
Dump memory profiling data through HTTP API:
```bash

View File

@@ -1,72 +0,0 @@
Currently, our query engine is based on DataFusion, so all aggregate functions are executed by DataFusion through its UDAF interface. You can find DataFusion's UDAF example [here](https://github.com/apache/datafusion/tree/main/datafusion-examples/examples/simple_udaf.rs). Basically, we provide the same way as DataFusion to write aggregate functions: both are centered on a struct called "Accumulator" that accumulates state during aggregation.
However, DataFusion's UDAF implementation has a significant restriction: it requires the user to provide a concrete "Accumulator". Take the `Median` aggregate function for example: to aggregate a `u32` column, you have to write a `MedianU32` and use `SELECT MEDIANU32(x)` in SQL, and `MedianU32` cannot be used to aggregate an `i32` column. Alternatively, you can use a special type that can hold all kinds of data (like our `Value` enum or Arrow's `ScalarValue`) and `match` on it throughout the aggregate calculation. That works, though it is rather tedious. (It does seem to be DataFusion's preferred way to write a UDAF.)
So is there a way to make an aggregate function that automatically matches the input data's type? For example, a `Median` aggregator that can work on both `u32` and `i32` columns? The answer is yes, once we find a way to bypass DataFusion's restriction: DataFusion simply doesn't pass the input data's type when creating an Accumulator.
> There's an example in `my_sum_udaf_example.rs`, take that as quick start.
# 1. Impl `AggregateFunctionCreator` trait for your accumulator creator.
You must first define a struct that will be used to create your accumulator. For example,
```Rust
#[as_aggr_func_creator]
#[derive(Debug, AggrFuncTypeStore)]
struct MySumAccumulatorCreator {}
```
Attribute macro `#[as_aggr_func_creator]` and derive macro `#[derive(Debug, AggrFuncTypeStore)]` must both be annotated on the struct. They work together to provide storage for the aggregate function's input data types, which are needed later to create a generic accumulator.
> Note that the `as_aggr_func_creator` macro adds fields to the struct, so the struct cannot be defined as a fieldless struct like `struct Foo;`, nor as a newtype like `struct Foo(bar)`.
Then impl `AggregateFunctionCreator` trait on it. The definition of the trait is:
```Rust
pub trait AggregateFunctionCreator: Send + Sync + Debug {
fn creator(&self) -> AccumulatorCreatorFunction;
fn output_type(&self) -> ConcreteDataType;
fn state_types(&self) -> Vec<ConcreteDataType>;
}
```
You can use the input data's types in the methods that return the output type and state types (just invoke `input_types()`).
The output type is the aggregate function's output data type. For example, the `SUM` aggregate function's output type is `u64` for a `u32` column. The state types are the accumulator's internal states' types. Take the `AVG` aggregate function on an `i32` column as an example: its state types are `i64` (for the sum) and `u64` (for the count).
The `creator` function is where you define how an accumulator (that will be used in DataFusion) is created. You define "how" to create the accumulator (instead of "what" to create), using the input data's types as arguments. With the input datatypes known, you can create the accumulator generically.
# 2. Impl `Accumulator` trait for your accumulator.
The accumulator is where you store the aggregate calculation states and evaluate a result. You must impl `Accumulator` trait for it. The trait's definition is:
```Rust
pub trait Accumulator: Send + Sync + Debug {
fn state(&self) -> Result<Vec<Value>>;
fn update_batch(&mut self, values: &[VectorRef]) -> Result<()>;
fn merge_batch(&mut self, states: &[VectorRef]) -> Result<()>;
fn evaluate(&self) -> Result<Value>;
}
```
DataFusion basically executes an aggregation like this:
1. Partition all input data for the aggregation and create an accumulator for each partition.
2. Call `update_batch` on each accumulator with its partition's data, to let you update your aggregate calculation.
3. Call `state` to get each accumulator's internal state, the intermediate calculation result.
4. Call `merge_batch` to merge all accumulators' internal states into one.
5. Execute `evaluate` on the chosen one to get the final calculation result.
Once you know the meaning of each method, you can easily write your accumulator. You can refer to `Median` accumulator or `SUM` accumulator defined in file `my_sum_udaf_example.rs` for more details.
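As a rough illustration of how the five steps map onto the trait, here is a simplified, self-contained sum accumulator sketch. It substitutes plain `f64` vectors for the repository's `Value` and `VectorRef` types and drops error handling, so it only mirrors the shape of the real trait rather than compiling against it:
```Rust
// Simplified stand-ins for the repository's Value / VectorRef types.
type Value = f64;
type VectorRef = Vec<f64>;

#[derive(Debug, Default)]
struct SumAccumulator {
    sum: f64,
}

impl SumAccumulator {
    // Step 2: fold a batch of partitioned input values into the running sum.
    fn update_batch(&mut self, values: &[VectorRef]) {
        for column in values {
            self.sum += column.iter().sum::<f64>();
        }
    }

    // Step 3: expose the intermediate state that will be shipped to the merger.
    fn state(&self) -> Vec<Value> {
        vec![self.sum]
    }

    // Step 4: merge the states produced by other accumulators.
    fn merge_batch(&mut self, states: &[VectorRef]) {
        for state in states {
            self.sum += state.iter().sum::<f64>();
        }
    }

    // Step 5: produce the final result.
    fn evaluate(&self) -> Value {
        self.sum
    }
}

fn main() {
    let mut a = SumAccumulator::default();
    let mut b = SumAccumulator::default();
    a.update_batch(&[vec![1.0, 2.0]]);
    b.update_batch(&[vec![3.0, 4.0]]);
    a.merge_batch(&[b.state()]);
    assert_eq!(a.evaluate(), 10.0);
}
```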
# 3. Register your aggregate function to our query engine.
You can call the `register_aggregate_function` method in the query engine to register your aggregate function. To do that, you have to create an instance of the struct `AggregateFunctionMeta`. The struct has three fields. The first is your aggregate function's name. The function name is case-sensitive due to DataFusion's restriction, so we strongly recommend using a lowercase name. If you have to use an uppercase name, wrap your aggregate function in quotation marks. For example, if you define an aggregate function named "my_aggr", you can use "`SELECT MY_AGGR(x)`"; if you define "my_AGGR", you have to use "`SELECT "my_AGGR"(x)`".
The second field is `arg_counts`, the number of arguments. Take the `percentile` accumulator for example: it calculates the p-th value of a column, so it needs both the column values and the value of p as inputs, and thus its argument count is two.
The third field is a function describing how to create the accumulator creator you defined in step 1 above. Creating a creator is a bit convoluted, but it is how we make DataFusion use a newly created aggregate function each time it executes a SQL statement, preventing the stored input types from affecting each other. A good starting point for the key details is the `get_aggregate_meta` method of our `DfContextProviderAdapter` struct.
# (Optional) 4. Make your aggregate function automatically registered.
If you've written a great aggregate function and want everyone to be able to use it, you can make it register automatically to our query engine at start time. It's quick and simple; just refer to the `AggregateFunctions::register` function in `common/function/src/scalars/aggregate/mod.rs`.

View File

@@ -0,0 +1,151 @@
---
Feature Name: Compatibility Test Framework
Tracking Issue: TBD
Date: 2025-07-04
Author: "Ruihang Xia <waynestxia@gmail.com>"
---
# Summary
This RFC proposes a compatibility test framework for GreptimeDB to ensure backward/forward compatibility for different versions of GreptimeDB.
# Motivation
In current practice, we don't have a systematic way to test and ensure the compatibility of different versions of GreptimeDB. Each time we release a new version, we need to manually test compatibility with ad-hoc cases. This is not only time-consuming but also error-prone and unmaintainable, and it relies heavily on the release manager to ensure compatibility between versions of GreptimeDB.
We also don't have a detailed guide in the release SOP on how to test and ensure the compatibility of a new version, and we have broken compatibility many times (`v0.14.1` and `v0.15.1` are two examples, both released right after a major release).
# Details
This RFC proposes a compatibility test framework that is easy to maintain, extend and run. It can tell the compatibility between any given two versions of GreptimeDB, both backward and forward. It's based on the Sqlness library but used in a different way.
Generally speaking, the framework is composed of two parts:
1. Test cases: A set of test cases that are maintained dedicatedly for the compatibility test. Still in the `.sql` and `.result` format.
2. Test framework: A new sqlness runner that is used to run the test cases, with some new features that are not required by the integration sqlness test.
## Test Cases
### Structure
The case set is organized in three parts:
- `1.feature`: Use a new feature
- `2.verify`: Verify database behavior
- `3.cleanup`: Paired with `1.feature`, cleans up the test environment.
These three parts are organized in a tree structure, and should be run in sequence:
```
compatibility_test/
├── 1.feature/
│ ├── feature-a/
│ ├── feature-b/
│ └── feature-c/
├── 2.verify/
│ ├── verify-metadata/
│ ├── verify-data/
│ └── verify-schema/
└── 3.cleanup/
├── cleanup-a/
├── cleanup-b/
└── cleanup-c/
```
### Example
For example, for a new feature like adding new index option ([#6416](https://github.com/GreptimeTeam/greptimedb/pull/6416)), we (who implement the feature) create a new test case like this:
```sql
-- path: compatibility_test/1.feature/index-option/granularity_and_false_positive_rate.sql
-- SQLNESS ARG since=0.15.0
-- SQLNESS IGNORE_RESULT
CREATE TABLE granularity_and_false_positive_rate (ts timestamp time index, val double) with ("index.granularity" = "8192", "index.false_positive_rate" = "0.01");
```
And
```sql
-- path: compatibility_test/3.cleanup/index-option/granularity_and_false_positive_rate.sql
drop table granularity_and_false_positive_rate;
```
Since this new feature doesn't require a special way to verify the database behavior, we can reuse existing test cases in `2.verify/`. For example, we can reuse the `verify-metadata` test case to verify the table's metadata.
```sql
-- path: compatibility_test/2.verify/verify-metadata/show-create-table.sql
-- SQLNESS TEMPLATE TABLE="SHOW TABLES";
SHOW CREATE TABLE $TABLE;
```
In this example, we use some new sqlness features that will be introduced in the next section (`since`, `IGNORE_RESULT`, `TEMPLATE`).
### Maintenance
Each time we implement a new feature that should be covered by the compatibility test, we should create new test cases in `1.feature/` and `3.cleanup/` for it, and check whether existing cases in `2.verify/` can be reused to verify the database behavior.
This simulates an enthusiastic user who adopts every new feature right away. The only new maintenance burden falls on the feature implementer, who writes one more test case to pin down the new feature's behavior. Once a breaking change is introduced in the future, the compatibility test framework can detect it automatically.
Another topic is deprecation. If a feature is deprecated, we should also mark it in the test case. Still using the example above, assume we deprecate the `index.granularity` and `index.false_positive_rate` index options in `v0.99.0`; we can mark them as:
```sql
-- SQLNESS ARG since=0.15.0 till=0.99.0
...
```
This tells the framework to ignore this feature in version `v0.99.0` and later. We currently have many experimental features that are scheduled to be broken in the future, and this is a good way to mark them.
## Test Framework
This section is about new sqlness features required by this framework.
### Since and Till
Following the `ARG` interceptor in sqlness, we can mark a feature as available between two given versions. Only `since` is required:
```sql
-- SQLNESS ARG since=VERSION_STRING [till=VERSION_STRING]
```
### IGNORE_RESULT
`IGNORE_RESULT` is a new interceptor; it tells the runner to ignore the result of the query and only check whether the query executed successfully.
This is useful for reducing the maintenance burden of the test cases: unlike the integration sqlness test, in most cases we don't care about the result of the query and only need to make sure it executes successfully.
### TEMPLATE
`TEMPLATE` is another new interceptor; it can generate queries from a template based on runtime data.
In the example above, we need to run the `SHOW CREATE TABLE` query for all existing tables, so we use the `TEMPLATE` interceptor to generate the query from a dynamic table list.
### RUNNER
There are also some extra requirements for the runner itself:
- It should run the test cases in sequence, first `1.feature/`, then `2.verify/`, and finally `3.cleanup/`.
- It should be able to fetch the required versions automatically to finish the test.
- It should handle the `since` and `till` properly.
In the `1.feature` phase, the runner needs to identify all features that need to be tested by version number, and then restart with a new version (the `to` version) to run the `2.verify/` and `3.cleanup/` phases.
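As a sketch of how the runner might handle `since`/`till`, the following assumes the `semver` crate and plain string arguments; the `should_run` helper and the interceptor plumbing are hypothetical and not part of this RFC:
```rust
use semver::Version;

/// Returns true if `target` falls in the case's `[since, till)` range.
/// `till = None` means the feature is still supported.
fn should_run(since: &str, till: Option<&str>, target: &str) -> Result<bool, semver::Error> {
    let target = Version::parse(target)?;
    let within_since = target >= Version::parse(since)?;
    let within_till = match till {
        Some(till) => target < Version::parse(till)?,
        None => true,
    };
    Ok(within_since && within_till)
}

fn main() -> Result<(), semver::Error> {
    // -- SQLNESS ARG since=0.15.0 till=0.99.0
    assert!(should_run("0.15.0", Some("0.99.0"), "0.16.0")?);
    assert!(!should_run("0.15.0", Some("0.99.0"), "0.99.0")?);
    // -- SQLNESS ARG since=0.15.0
    assert!(should_run("0.15.0", None, "1.2.0")?);
    Ok(())
}
```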
## Test Report
Finally, we can run the compatibility test to verify the compatibility between any given two versions of GreptimeDB, for example:
```bash
# check backward compatibility between v0.15.0 and v0.16.0 when releasing v0.16.0
./sqlness run --from=0.15.0 --to=0.16.0
# check forward compatibility when downgrading from v0.15.0 to v0.13.0
./sqlness run --from=0.15.0 --to=0.13.0
```
We can also use a script to run the compatibility test for all versions in a given range to produce a quick report covering every version we need.
We always bump the version in `Cargo.toml` to the next major release version, so that version can be used as the "latest" unpublished version for scenarios like local testing.
# Alternatives
There was a previous attempt to implement a compatibility test framework, which was disabled for various reasons: [#3728](https://github.com/GreptimeTeam/greptimedb/issues/3728).

View File

@@ -0,0 +1,157 @@
---
Feature Name: "global-gc-worker"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/6571
Date: 2025-07-23
Author: "discord9 <discord9@163.com>"
---
# Global GC Worker
## Summary
This RFC proposes the integration of a garbage collection (GC) mechanism within the Compaction process. This mechanism aims to manage and remove stale files that are no longer actively used by any system component, thereby reclaiming storage space.
## Motivation
With the introduction of features such as table repartitioning, a substantial number of Parquet files can become obsolete. Furthermore, failures during manifest updates may result in orphaned files that are never referenced by the system. Therefore, a periodic garbage collection mechanism is essential to reclaim storage space by systematically removing these unused files.
## Details
### Overview
The garbage collection process will be integrated directly into the Compaction process. Upon the completion of a Compaction for a given region, the GC worker will be automatically triggered. Its primary function will be to identify and subsequently delete obsolete files that have persisted beyond their designated retention period. This integration ensures that garbage collection is performed in close conjunction with data lifecycle management, effectively leveraging the compaction process's inherent knowledge of file states.
This design prioritizes correctness and safety by explicitly linking GC execution to a well-defined operational boundary: the successful completion of a compaction cycle.
### Terminology
- **Unused File**: Refers to a file present in the storage directory that has never been formally recorded in any manifest. A common scenario for this includes cases where a new SST file is successfully written to storage, but the subsequent update to the manifest fails, leaving the file unreferenced.
- **Obsolete File**: Denotes a file that was previously recorded in a manifest but has since been explicitly marked for removal. This typically occurs following operations such as data repartitioning or compaction.
### GC Worker Process
The GC worker operates as an integral part of the Compaction process. Once a Compaction for a specific region is completed, the GC worker is automatically triggered. Executing this process on a `datanode` is preferred to eliminate the overhead associated with having to set object storage configurations in the `metasrv`.
The detailed process is as follows:
1. **Invocation**: Upon the successful completion of a Compaction for a region, the GC worker is invoked.
2. **Manifest Reading**: The worker reads the region's primary manifest to obtain a comprehensive list of all files marked as obsolete. Concurrently, it reads any temporary manifests generated by long-running queries to identify files that are currently in active use, thereby preventing their premature deletion.
3. **Lingering Time Check (Obsolete Files)**: For each identified obsolete file, the GC worker evaluates its "lingering time", which is the time that has passed since the file was removed from the manifest.
4. **Deletion Marking (Obsolete Files)**: Files that have exceeded their maximum configurable lingering time and are not referenced by any active temporary manifests are marked for deletion.
5. **Lingering Time (Unused Files)**: Unused files (those never recorded in any manifest) are also subject to a configurable maximum lingering time before they are eligible for deletion.
Following flowchart illustrates the GC worker's process:
```mermaid
flowchart TD
A[Compaction Completed] --> B[Trigger GC Worker]
B --> C[Scan Region Manifest]
C --> D[Identify File Types]
D --> E[Unused Files<br/>Never recorded in manifest]
D --> F[Obsolete Files<br/>Previously in manifest<br/>but marked for removal]
E --> G[Check Lingering Time]
F --> G
G --> H{File exceeds<br/>configured lingering time?}
H -->|No| I[Skip deletion]
H -->|Yes| J[Check Temporary Manifest]
J --> K{File in use by<br/>active queries?}
K -->|Yes| L[Retain file<br/>Wait for next GC cycle]
K -->|No| M[Safely delete file]
I --> N[End GC cycle]
L --> N
M --> O[Update Manifest]
O --> N
N --> P[Wait for next Compaction]
P --> A
style A fill:#e1f5fe
style B fill:#f3e5f5
style M fill:#e8f5e8
style L fill:#fff3e0
```
#### Handling Obsolete Files
An obsolete file is permanently deleted only if two conditions are met:
1. The time elapsed since its removal from the manifest (its obsolescence timestamp) exceeds a configurable threshold.
2. It is not currently referenced by any active temporary manifests.
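The two conditions above can be expressed as a small predicate. The following is a minimal sketch, assuming the GC worker already knows each obsolete file's removal timestamp and the set of paths referenced by active temporary manifests; all names are hypothetical:
```rust
use std::collections::HashSet;
use std::time::{Duration, SystemTime};

/// Hypothetical record of an obsolete file tracked by the GC worker.
struct ObsoleteFile {
    path: String,
    /// When the file was removed from the region manifest.
    removed_at: SystemTime,
}

/// An obsolete file may be deleted only if it has lingered past the threshold
/// and no active temporary manifest still references it.
fn can_delete(
    file: &ObsoleteFile,
    lingering: Duration,
    referenced_by_temp_manifests: &HashSet<String>,
    now: SystemTime,
) -> bool {
    let lingered_long_enough = now
        .duration_since(file.removed_at)
        .map(|elapsed| elapsed > lingering)
        .unwrap_or(false);
    lingered_long_enough && !referenced_by_temp_manifests.contains(&file.path)
}

fn main() {
    let file = ObsoleteFile {
        path: "region_1/old.parquet".to_string(),
        removed_at: SystemTime::now() - Duration::from_secs(3600),
    };
    let referenced: HashSet<String> = HashSet::new();
    // The 10-minute lingering time has passed and nothing references the file.
    assert!(can_delete(&file, Duration::from_secs(600), &referenced, SystemTime::now()));
}
```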
#### Handling Unused Files
With the integration of the GC worker into the Compaction process, the risk of accidentally deleting newly created SST files that have not yet been recorded in the manifest is significantly mitigated. Consequently, the concept of "Unused Files" as a distinct category primarily susceptible to accidental deletion is largely resolved. Any files that are genuinely "unused" (i.e., never referenced by any manifest, including temporary ones) can be safely deleted after a configurable maximum lingering time.
For debugging and auditing purposes, a comprehensive list of recently deleted files can be maintained.
### Ensuring Read Consistency
To prevent the GC worker from inadvertently deleting files that are actively being utilized by long-running analytical queries, a robust protection mechanism is introduced. This mechanism relies on temporary manifests that are actively kept "alive" by the queries using them.
When a long-running query is detected (e.g., by a slow query recorder), it will write a temporary manifest to the region's manifest directory. This manifest lists all files required for the query. However, simply creating this file is not enough, as a query runner might crash, leaving the temporary manifest orphaned and preventing garbage collection indefinitely.
To address this, the following "heartbeat" mechanism is implemented:
1. **Periodic Updates**: The process executing the long-running query is responsible for periodically updating the modification timestamp of its temporary manifest file (i.e., "touching" the file). This serves as a heartbeat, signaling that the query is still active.
2. **GC Worker Verification**: When the GC worker runs, it scans for temporary manifests. For each one it finds, it checks the file's last modification time.
3. **Stale File Handling**: If a temporary manifest's last modification time is older than a configurable threshold, the GC worker considers it stale (left over from a crashed or terminated query). The GC worker will then delete this stale temporary manifest. Files that were protected only by this stale manifest are no longer shielded from garbage collection.
This approach ensures that only files for genuinely active queries are protected. The lifecycle of the temporary manifest is managed dynamically: it is created when a long query starts, kept alive through periodic updates, and is either deleted by the query upon normal completion or automatically cleaned up by the GC worker if the query terminates unexpectedly.
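A minimal sketch of the staleness check, assuming the temporary manifest is an ordinary file whose modification time can be read locally (the path layout and threshold value are assumptions; a real implementation would consult object storage metadata instead):
```rust
use std::fs;
use std::io;
use std::path::Path;
use std::time::Duration;

/// A temporary manifest is considered stale (its query likely crashed) if its
/// last modification time is older than the configured heartbeat threshold.
fn is_stale_temp_manifest(path: &Path, threshold: Duration) -> io::Result<bool> {
    let modified = fs::metadata(path)?.modified()?;
    let age = modified.elapsed().unwrap_or(Duration::ZERO);
    Ok(age > threshold)
}

fn main() {
    // If the query stopped "touching" this file more than 10 minutes ago, the
    // GC worker deletes the manifest and stops protecting its files.
    match is_stale_temp_manifest(
        Path::new("/tmp/region_1/query_42.tmp_manifest"),
        Duration::from_secs(600),
    ) {
        Ok(true) => println!("stale: delete manifest; its files become collectible"),
        Ok(false) => println!("active: keep protecting referenced files"),
        Err(e) => println!("manifest missing or unreadable: {e}"),
    }
}
```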
This mechanism may be too complex to implement at once. We can consider a two-phased approach:
1. **Phase 1 (Simple Time-Based Deletion)**: Initially, implement a simpler GC strategy that deletes obsolete files based solely on a configurable lingering time. This provides a baseline for space reclamation without the complexity of temporary manifests.
2. **Phase 2 (Consistency-Aware GC)**: Based on the practical effectiveness and observed issues from Phase 1, we can then decide whether to implement the full temporary manifest and heartbeat mechanism to handle long-running queries. This iterative approach allows for a quicker initial implementation while gathering real-world data to justify the need for a more complex solution.
## Drawbacks
- **Dependency on Compaction Frequency**: The integration of the GC worker with Compaction means that GC cycles are directly tied to the frequency of compactions. In environments with infrequent compaction operations, obsolete files may accumulate for extended periods before being reclaimed, potentially leading to increased storage consumption.
- **Race Condition with Long-Running Queries**: A potential race condition exists if a long-running query starts but hasn't written its temporary manifest in time, while a compaction process simultaneously begins and marks files used by that query as obsolete. This scenario could lead to the premature deletion of files still required by the active query. To mitigate this, the threshold time for writing a temporary manifest should be significantly shorter than the lingering time configured for obsolete files, ensuring that subsequent GC runs do not delete files that are now referenced by a temporary manifest while the query is still running.
A read replica also shouldn't lag behind in manifest version by more than the lingering time of obsolete files; otherwise it might reference files that have already been deleted by the GC worker.
- **Temporary Manifest Upload**: The temporary manifest needs to be uploaded to object storage, which may introduce additional complexity and performance overhead. However, since long-running queries are typically infrequent, the performance impact is expected to be minimal.
## Conclusion and Rationale
This section summarizes the key aspects and trade-offs of the proposed integrated GC worker, highlighting its advantages and potential challenges.
| Aspect | Current Proposal (Integrated GC) |
| :--- | :--- |
| **Implementation Complexity** | **Medium**. Requires careful integration with the compaction process and the slow query recorder for temporary manifest management. |
| **Reliability** | **High**. Integration with compaction and leveraging temporary manifests from long-running queries significantly mitigates the risk of incorrect deletion. Accurate management of lingering times for obsolete files and prevention of accidental deletion of newly created SSTs enhance data safety. |
| **Performance Overhead** | **Low to Medium**. The GC worker runs post-compaction, minimizing direct impact on write paths. Overhead from temporary manifest management by the slow query recorder is expected to be acceptable for long-running queries. |
| **Impact on Other Components** | **Moderate**. Requires modifications to the compaction process to trigger GC and the slow query recorder to manage temporary manifests. This introduces some coupling but enhances overall data safety. |
| **Deletion Strategy** | **State- and Time-Based**. Obsolete files are deleted based on a configurable lingering time, which is paused if the file is referenced by a temporary manifest. Unused files (never in a manifest) are also subject to a lingering time. |
## Unresolved Questions and Future Work
This section outlines key areas requiring further discussion and defines potential avenues for future development.
* **Slow Query Recorder Implementation**: Detailed specifications are needed for modifying the slow query recorder's implementation and its precise interaction with temporary manifests.
* **Configurable Lingering Times**: Establish and make configurable the specific lingering times for both obsolete and unused files to optimize storage reclamation and data availability.
## Alternatives
### 1. Standalone GC Service
Instead of integrating the GC worker directly into the Compaction process, a standalone GC service could be implemented. This service would operate independently, periodically scanning the storage for obsolete and unused files based on manifest information and predefined retention policies.
**Pros:**
* **Decoupling**: Separates GC logic from compaction, allowing independent scaling and deployment.
* **Flexibility**: Can be configured to run at different frequencies and with different strategies than compaction.
**Cons:**
* **Increased Complexity**: Requires a separate service to manage, monitor, and coordinate with other components.
* **Potential for Redundancy**: May duplicate some file scanning logic already present in compaction.
* **Consistency Challenges**: Ensuring read consistency would require more complex coordination mechanisms between the standalone GC service and active queries, potentially involving a distributed lock manager or a more sophisticated temporary manifest system.
This alternative could be implemented in the future if the integrated GC worker proves insufficient or if there is a need for more advanced GC strategies.
### 2. Manifest-Driven Deletion (No Lingering Time)
This alternative would involve immediate deletion of files once they are removed from the manifest, without a lingering time.
**Pros:**
* **Simplicity**: Simplifies the GC logic by removing the need for lingering time management.
* **Immediate Space Reclamation**: Storage space is reclaimed as soon as files are marked for deletion.
**Cons:**
* **Increased Risk of Data Loss**: Higher risk of deleting files still in use by long-running queries or other processes if not perfectly synchronized.
* **Complex Read Consistency**: Requires extremely robust and immediate mechanisms to ensure that no active queries are referencing files marked for deletion, potentially leading to performance bottlenecks or complex error handling.
* **Debugging Challenges**: Difficult to debug issues related to premature file deletion due to the immediate nature of the operation.

View File

@@ -15,8 +15,6 @@
let
pkgs = nixpkgs.legacyPackages.${system};
buildInputs = with pkgs; [
libgit2
libz
];
lib = nixpkgs.lib;
rustToolchain = fenix.packages.${system}.fromToolchainName {

File diff suppressed because it is too large

View File

@@ -21,14 +21,14 @@
# Resources
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$datanode"}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$frontend"}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$metasrv"}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$flownode"}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-datanode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$datanode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-frontend"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$frontend"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$metasrv"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-flownode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$flownode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
# Frontend Requests
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -72,6 +72,7 @@
| Region Worker Handle Bulk Insert Requests | `histogram_quantile(0.95, sum by(le,instance, stage, pod) (rate(greptime_region_worker_handle_write_bucket[$__rate_interval])))`<br/>`sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_sum[$__rate_interval]))/sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to handle bulk insert region requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Active Series and Field Builders Count | `sum by(instance, pod) (greptime_mito_memtable_active_series_count)`<br/>`sum by(instance, pod) (greptime_mito_memtable_field_builder_count)` | `timeseries` | Active series and field builder counts in memtables | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]-series` |
| Region Worker Convert Requests | `histogram_quantile(0.95, sum by(le, instance, stage, pod) (rate(greptime_datanode_convert_region_request_bucket[$__rate_interval])))`<br/>`sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_sum[$__rate_interval]))/sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to decode requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Cache Miss | `sum by (instance,pod, type) (rate(greptime_mito_cache_miss{instance=~"$datanode"}[$__rate_interval]))` | `timeseries` | The local cache miss of the datanode. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{type}}]` |
# OpenDAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |

View File

@@ -180,13 +180,18 @@ groups:
- title: Datanode Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-datanode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Datanode CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -197,16 +202,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-frontend"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -217,16 +232,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-cpu'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Metasrv Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-resident'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Metasrv CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -237,16 +262,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Flownode Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-flownode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Flownode CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -257,6 +292,11 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend Requests
panels:
- title: HTTP QPS per Instance
@@ -642,6 +682,15 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{stage}}]-AVG'
- title: Cache Miss
type: timeseries
description: The local cache miss of the datanode.
queries:
- expr: sum by (instance,pod, type) (rate(greptime_mito_cache_miss{instance=~"$datanode"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{type}}]'
- title: OpenDAL
panels:
- title: QPS per Instance

File diff suppressed because it is too large

View File

@@ -21,14 +21,14 @@
# Resources
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-datanode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-frontend"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-flownode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
# Frontend Requests
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -72,6 +72,7 @@
| Region Worker Handle Bulk Insert Requests | `histogram_quantile(0.95, sum by(le,instance, stage, pod) (rate(greptime_region_worker_handle_write_bucket[$__rate_interval])))`<br/>`sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_sum[$__rate_interval]))/sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to handle bulk insert region requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Active Series and Field Builders Count | `sum by(instance, pod) (greptime_mito_memtable_active_series_count)`<br/>`sum by(instance, pod) (greptime_mito_memtable_field_builder_count)` | `timeseries` | Active series and field builder counts in memtables | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]-series` |
| Region Worker Convert Requests | `histogram_quantile(0.95, sum by(le, instance, stage, pod) (rate(greptime_datanode_convert_region_request_bucket[$__rate_interval])))`<br/>`sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_sum[$__rate_interval]))/sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to decode requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Cache Miss | `sum by (instance,pod, type) (rate(greptime_mito_cache_miss{}[$__rate_interval]))` | `timeseries` | The local cache miss of the datanode. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{type}}]` |
# OpenDAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |

View File

@@ -180,13 +180,18 @@ groups:
- title: Datanode Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-datanode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Datanode CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -197,16 +202,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-frontend"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -217,16 +232,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-cpu'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Metasrv Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-resident'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Metasrv CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -237,16 +262,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Flownode Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-flownode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Flownode CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -257,6 +292,11 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend Requests
panels:
- title: HTTP QPS per Instance
@@ -642,6 +682,15 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{stage}}]-AVG'
- title: Cache Miss
type: timeseries
description: The local cache miss of the datanode.
queries:
- expr: sum by (instance,pod, type) (rate(greptime_mito_cache_miss{}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{type}}]'
- title: OpenDAL
panels:
- title: QPS per Instance

View File

@@ -26,7 +26,7 @@ check_dashboards_generation() {
./grafana/scripts/gen-dashboards.sh
if [[ -n "$(git diff --name-only grafana/dashboards/metrics)" ]]; then
echo "Error: The dashboards are not generated correctly. You should execute the `make dashboards` command."
echo "Error: The dashboards are not generated correctly. You should execute the 'make dashboards' command."
exit 1
fi
}

View File

@@ -19,6 +19,3 @@ paste.workspace = true
prost.workspace = true
serde_json.workspace = true
snafu.workspace = true
[build-dependencies]
tonic-build = "0.11"

View File

@@ -17,6 +17,7 @@ use std::any::Any;
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use common_time::timestamp::TimeUnit;
use datatypes::prelude::ConcreteDataType;
use snafu::prelude::*;
use snafu::Location;
@@ -66,12 +67,28 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid time unit: {time_unit}"))]
InvalidTimeUnit {
time_unit: i32,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Inconsistent time unit: {:?}", units))]
InconsistentTimeUnit {
units: Vec<TimeUnit>,
#[snafu(implicit)]
location: Location,
},
}
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::UnknownColumnDataType { .. } => StatusCode::InvalidArguments,
Error::UnknownColumnDataType { .. }
| Error::InvalidTimeUnit { .. }
| Error::InconsistentTimeUnit { .. } => StatusCode::InvalidArguments,
Error::IntoColumnDataType { .. } | Error::SerializeJson { .. } => {
StatusCode::Unexpected
}

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashSet;
use std::sync::Arc;
use common_base::BitVec;
@@ -46,7 +47,7 @@ use greptime_proto::v1::{
use paste::paste;
use snafu::prelude::*;
use crate::error::{self, Result};
use crate::error::{self, InconsistentTimeUnitSnafu, InvalidTimeUnitSnafu, Result};
use crate::v1::column::Values;
use crate::v1::{Column, ColumnDataType, Value as GrpcValue};
@@ -1079,6 +1080,89 @@ pub fn value_to_grpc_value(value: Value) -> GrpcValue {
}
}
pub fn from_pb_time_unit(unit: v1::TimeUnit) -> TimeUnit {
match unit {
v1::TimeUnit::Second => TimeUnit::Second,
v1::TimeUnit::Millisecond => TimeUnit::Millisecond,
v1::TimeUnit::Microsecond => TimeUnit::Microsecond,
v1::TimeUnit::Nanosecond => TimeUnit::Nanosecond,
}
}
pub fn to_pb_time_unit(unit: TimeUnit) -> v1::TimeUnit {
match unit {
TimeUnit::Second => v1::TimeUnit::Second,
TimeUnit::Millisecond => v1::TimeUnit::Millisecond,
TimeUnit::Microsecond => v1::TimeUnit::Microsecond,
TimeUnit::Nanosecond => v1::TimeUnit::Nanosecond,
}
}
pub fn from_pb_time_ranges(time_ranges: v1::TimeRanges) -> Result<Vec<(Timestamp, Timestamp)>> {
if time_ranges.time_ranges.is_empty() {
return Ok(vec![]);
}
let proto_time_unit = v1::TimeUnit::try_from(time_ranges.time_unit).map_err(|_| {
InvalidTimeUnitSnafu {
time_unit: time_ranges.time_unit,
}
.build()
})?;
let time_unit = from_pb_time_unit(proto_time_unit);
Ok(time_ranges
.time_ranges
.into_iter()
.map(|r| {
(
Timestamp::new(r.start, time_unit),
Timestamp::new(r.end, time_unit),
)
})
.collect())
}
/// All time_ranges must be of the same time unit.
///
/// If the input `time_ranges` is empty, it returns a default `TimeRanges` with `Millisecond` as the time unit.
pub fn to_pb_time_ranges(time_ranges: &[(Timestamp, Timestamp)]) -> Result<v1::TimeRanges> {
let is_same_time_unit = time_ranges.windows(2).all(|x| {
x[0].0.unit() == x[1].0.unit()
&& x[0].1.unit() == x[1].1.unit()
&& x[0].0.unit() == x[0].1.unit()
});
if !is_same_time_unit {
let all_time_units: Vec<_> = time_ranges
.iter()
.map(|(s, e)| [s.unit(), e.unit()])
.clone()
.flatten()
.collect::<HashSet<_>>()
.into_iter()
.collect();
InconsistentTimeUnitSnafu {
units: all_time_units,
}
.fail()?
}
let mut pb_time_ranges = v1::TimeRanges {
// default time unit is Millisecond
time_unit: v1::TimeUnit::Millisecond as i32,
time_ranges: Vec::with_capacity(time_ranges.len()),
};
if let Some((start, _end)) = time_ranges.first() {
pb_time_ranges.time_unit = to_pb_time_unit(start.unit()) as i32;
}
for (start, end) in time_ranges {
pb_time_ranges.time_ranges.push(v1::TimeRange {
start: start.value(),
end: end.value(),
});
}
Ok(pb_time_ranges)
}
#[cfg(test)]
mod tests {
use std::sync::Arc;

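A minimal usage sketch (not part of this diff) for the new time-range conversion helpers, assuming the crate paths shown above (`api::helper`, `api::error`) together with `common_time::Timestamp`:

```rust
use api::error::Result;
use api::helper::{from_pb_time_ranges, to_pb_time_ranges};
use common_time::timestamp::TimeUnit;
use common_time::Timestamp;

fn round_trip_ranges() -> Result<()> {
    // Two ranges that share the Millisecond unit, so encoding succeeds.
    let ranges = vec![
        (
            Timestamp::new(0, TimeUnit::Millisecond),
            Timestamp::new(1_000, TimeUnit::Millisecond),
        ),
        (
            Timestamp::new(2_000, TimeUnit::Millisecond),
            Timestamp::new(3_000, TimeUnit::Millisecond),
        ),
    ];

    // The unit is recorded once on the protobuf message; values stay as i64.
    let pb = to_pb_time_ranges(&ranges)?;
    assert_eq!(pb.time_ranges.len(), 2);

    // Decoding restores the same (start, end) pairs with the encoded unit.
    let decoded = from_pb_time_ranges(pb)?;
    assert_eq!(decoded, ranges);

    // Mixing units (e.g. Second and Millisecond) would instead fail with
    // Error::InconsistentTimeUnit, which maps to StatusCode::InvalidArguments.
    Ok(())
}
```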
View File

@@ -14,6 +14,8 @@
pub mod column_def;
pub mod helper;
pub mod meta {
pub use greptime_proto::v1::meta::*;
}

src/api/src/v1/helper.rs (new file, 65 lines)
View File

@@ -0,0 +1,65 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use greptime_proto::v1::value::ValueData;
use greptime_proto::v1::{ColumnDataType, ColumnSchema, Row, SemanticType, Value};
/// Create a time index [ColumnSchema] with the column's name and datatype.
/// Other fields are left as default.
/// Useful when you just want to create a simple [ColumnSchema] without providing many struct fields.
pub fn time_index_column_schema(name: &str, datatype: ColumnDataType) -> ColumnSchema {
ColumnSchema {
column_name: name.to_string(),
datatype: datatype as i32,
semantic_type: SemanticType::Timestamp as i32,
..Default::default()
}
}
/// Create a tag [ColumnSchema] with the column's name and datatype.
/// Other fields are left as default.
/// Useful when you just want to create a simple [ColumnSchema] without providing many struct fields.
pub fn tag_column_schema(name: &str, datatype: ColumnDataType) -> ColumnSchema {
ColumnSchema {
column_name: name.to_string(),
datatype: datatype as i32,
semantic_type: SemanticType::Tag as i32,
..Default::default()
}
}
/// Create a field [ColumnSchema] with the column's name and datatype.
/// Other fields are left as default.
/// Useful when you just want to create a simple [ColumnSchema] without providing many struct fields.
pub fn field_column_schema(name: &str, datatype: ColumnDataType) -> ColumnSchema {
ColumnSchema {
column_name: name.to_string(),
datatype: datatype as i32,
semantic_type: SemanticType::Field as i32,
..Default::default()
}
}
/// Create a [Row] from [ValueData]s.
/// Useful when you don't want to write verbose code.
pub fn row(values: Vec<ValueData>) -> Row {
Row {
values: values
.into_iter()
.map(|x| Value {
value_data: Some(x),
})
.collect::<Vec<_>>(),
}
}

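A short illustrative sketch (not from the PR) of how the new shorthands in `api::v1::helper` are meant to be combined; the table layout and values below are made up:

```rust
use api::v1::helper::{field_column_schema, row, tag_column_schema, time_index_column_schema};
use greptime_proto::v1::value::ValueData;
use greptime_proto::v1::{ColumnDataType, ColumnSchema, Row};

fn example_payload() -> (Vec<ColumnSchema>, Vec<Row>) {
    // One time index, one tag and one field column, all other fields defaulted.
    let schema = vec![
        time_index_column_schema("ts", ColumnDataType::TimestampMillisecond),
        tag_column_schema("host", ColumnDataType::String),
        field_column_schema("cpu", ColumnDataType::Float64),
    ];
    // `row` wraps each ValueData into a Value, keeping the call site compact.
    let rows = vec![row(vec![
        ValueData::TimestampMillisecondValue(1_000),
        ValueData::StringValue("host-1".to_string()),
        ValueData::F64Value(0.42),
    ])];
    (schema, rows)
}
```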
View File

@@ -21,6 +21,7 @@ bytes.workspace = true
common-base.workspace = true
common-catalog.workspace = true
common-error.workspace = true
common-event-recorder.workspace = true
common-frontend.workspace = true
common-macro.workspace = true
common-meta.workspace = true
@@ -44,6 +45,8 @@ moka = { workspace = true, features = ["future", "sync"] }
partition.workspace = true
paste.workspace = true
prometheus.workspace = true
promql-parser.workspace = true
rand.workspace = true
rustc-hash.workspace = true
serde_json.workspace = true
session.workspace = true

View File

@@ -16,8 +16,8 @@ use api::v1::meta::ProcedureStatus;
use common_error::ext::BoxedError;
use common_meta::cluster::{ClusterInfo, NodeInfo};
use common_meta::datanode::RegionStat;
use common_meta::ddl::{ExecutorContext, ProcedureExecutor};
use common_meta::key::flow::flow_state::FlowStat;
use common_meta::procedure_executor::{ExecutorContext, ProcedureExecutor};
use common_meta::rpc::procedure;
use common_procedure::{ProcedureInfo, ProcedureState};
use meta_client::MetaClientRef;

View File

@@ -23,7 +23,8 @@ use common_catalog::consts::{
};
use common_error::ext::BoxedError;
use common_meta::cache::{
LayeredCacheRegistryRef, TableRoute, TableRouteCacheRef, ViewInfoCacheRef,
LayeredCacheRegistryRef, TableInfoCacheRef, TableNameCacheRef, TableRoute, TableRouteCacheRef,
ViewInfoCacheRef,
};
use common_meta::key::catalog_name::CatalogNameKey;
use common_meta::key::flow::FlowMetadataManager;
@@ -41,8 +42,9 @@ use session::context::{Channel, QueryContext};
use snafu::prelude::*;
use store_api::metric_engine_consts::METRIC_ENGINE_NAME;
use table::dist_table::DistTable;
use table::metadata::TableId;
use table::metadata::{TableId, TableInfoRef};
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
use table::table::PartitionRules;
use table::table_name::TableName;
use table::TableRef;
use tokio::sync::Semaphore;
@@ -131,6 +133,8 @@ impl KvBackendCatalogManager {
{
let mut new_table_info = (*table.table_info()).clone();
let mut phy_part_cols_not_in_logical_table = vec![];
// Remap partition key indices from physical table to logical table
new_table_info.meta.partition_key_indices = physical_table_info_value
.table_info
@@ -147,15 +151,30 @@ impl KvBackendCatalogManager {
.get(physical_index)
.and_then(|physical_column| {
// Find the corresponding index in the logical table schema
new_table_info
let idx = new_table_info
.meta
.schema
.column_index_by_name(physical_column.name.as_str())
.column_index_by_name(physical_column.name.as_str());
if idx.is_none() {
// not all partition columns of the physical table are present in the logical table
phy_part_cols_not_in_logical_table
.push(physical_column.name.clone());
}
idx
})
})
.collect();
let new_table = DistTable::table(Arc::new(new_table_info));
let partition_rules = if !phy_part_cols_not_in_logical_table.is_empty() {
Some(PartitionRules {
extra_phy_cols_not_in_logical_table: phy_part_cols_not_in_logical_table,
})
} else {
None
};
let new_table = DistTable::table_partitioned(Arc::new(new_table_info), partition_rules);
return Ok(new_table);
}
@@ -325,6 +344,63 @@ impl CatalogManager for KvBackendCatalogManager {
Ok(None)
}
async fn table_id(
&self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
query_ctx: Option<&QueryContext>,
) -> Result<Option<TableId>> {
let channel = query_ctx.map_or(Channel::Unknown, |ctx| ctx.channel());
if let Some(table) =
self.system_catalog
.table(catalog_name, schema_name, table_name, query_ctx)
{
return Ok(Some(table.table_info().table_id()));
}
let table_cache: TableNameCacheRef =
self.cache_registry.get().context(CacheNotFoundSnafu {
name: "table_name_cache",
})?;
let table = table_cache
.get_by_ref(&TableName {
catalog_name: catalog_name.to_string(),
schema_name: schema_name.to_string(),
table_name: table_name.to_string(),
})
.await
.context(GetTableCacheSnafu)?;
if let Some(table) = table {
return Ok(Some(table));
}
if channel == Channel::Postgres {
// fall back to pg_catalog
if let Some(table) =
self.system_catalog
.table(catalog_name, PG_CATALOG_NAME, table_name, query_ctx)
{
return Ok(Some(table.table_info().table_id()));
}
}
Ok(None)
}
async fn table_info_by_id(&self, table_id: TableId) -> Result<Option<TableInfoRef>> {
let table_info_cache: TableInfoCacheRef =
self.cache_registry.get().context(CacheNotFoundSnafu {
name: "table_info_cache",
})?;
table_info_cache
.get_by_ref(&table_id)
.await
.context(GetTableCacheSnafu)
}
async fn tables_by_ids(
&self,
catalog: &str,

View File

@@ -25,7 +25,7 @@ use common_catalog::consts::{INFORMATION_SCHEMA_NAME, PG_CATALOG_NAME};
use futures::future::BoxFuture;
use futures_util::stream::BoxStream;
use session::context::QueryContext;
use table::metadata::TableId;
use table::metadata::{TableId, TableInfoRef};
use table::TableRef;
use crate::error::Result;
@@ -89,6 +89,23 @@ pub trait CatalogManager: Send + Sync {
query_ctx: Option<&QueryContext>,
) -> Result<Option<TableRef>>;
/// Returns the table id of the provided table ident.
async fn table_id(
&self,
catalog: &str,
schema: &str,
table_name: &str,
query_ctx: Option<&QueryContext>,
) -> Result<Option<TableId>> {
Ok(self
.table(catalog, schema, table_name, query_ctx)
.await?
.map(|t| t.table_info().ident.table_id))
}
/// Returns the table info of the provided table id.
async fn table_info_by_id(&self, table_id: TableId) -> Result<Option<TableInfoRef>>;
/// Returns the tables by table ids.
async fn tables_by_ids(
&self,

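A hedged sketch of how a caller might use the two new `CatalogManager` methods; the catalog, schema, and table names are illustrative and the free-standing async fn is only for demonstration:

```rust
use catalog::error::Result;
use catalog::CatalogManager;

async fn resolve(manager: &dyn CatalogManager) -> Result<()> {
    // Cheaper than `table()` when only the id is needed: the kv-backed
    // implementation can answer from the table-name cache.
    if let Some(table_id) = manager
        .table_id("greptime", "public", "monitor", None)
        .await?
    {
        // Reverse lookup: fetch the full table info by id via the table-info cache.
        let _info = manager.table_info_by_id(table_id).await?;
    }
    Ok(())
}
```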
View File

@@ -28,7 +28,7 @@ use common_meta::kv_backend::memory::MemoryKvBackend;
use futures_util::stream::BoxStream;
use session::context::QueryContext;
use snafu::OptionExt;
use table::metadata::TableId;
use table::metadata::{TableId, TableInfoRef};
use table::TableRef;
use crate::error::{CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu};
@@ -38,7 +38,7 @@ use crate::{CatalogManager, DeregisterTableRequest, RegisterSchemaRequest, Regis
type SchemaEntries = HashMap<String, HashMap<String, TableRef>>;
/// Simple in-memory list of catalogs
/// Simple in-memory list of catalogs used for tests.
#[derive(Clone)]
pub struct MemoryCatalogManager {
/// Collection of catalogs containing schemas and ultimately Tables
@@ -144,6 +144,18 @@ impl CatalogManager for MemoryCatalogManager {
Ok(result)
}
async fn table_info_by_id(&self, table_id: TableId) -> Result<Option<TableInfoRef>> {
Ok(self
.catalogs
.read()
.unwrap()
.iter()
.flat_map(|(_, schema_entries)| schema_entries.values())
.flat_map(|tables| tables.values())
.find(|t| t.table_info().ident.table_id == table_id)
.map(|t| t.table_info()))
}
async fn tables_by_ids(
&self,
catalog: &str,

View File

@@ -14,17 +14,24 @@
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::fmt::{Debug, Formatter};
use std::fmt::{Debug, Display, Formatter};
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, RwLock};
use std::time::{Duration, Instant, UNIX_EPOCH};
use api::v1::frontend::{KillProcessRequest, ListProcessRequest, ProcessInfo};
use common_base::cancellation::CancellationHandle;
use common_event_recorder::EventRecorderRef;
use common_frontend::selector::{FrontendSelector, MetaClientSelector};
use common_telemetry::{debug, info, warn};
use common_frontend::slow_query_event::SlowQueryEvent;
use common_telemetry::logging::SlowQueriesRecordType;
use common_telemetry::{debug, info, slow, warn};
use common_time::util::current_time_millis;
use meta_client::MetaClientRef;
use promql_parser::parser::EvalStmt;
use rand::random;
use snafu::{ensure, OptionExt, ResultExt};
use sql::statements::statement::Statement;
use crate::error;
use crate::metrics::{PROCESS_KILL_COUNT, PROCESS_LIST_COUNT};
@@ -44,6 +51,23 @@ pub struct ProcessManager {
frontend_selector: Option<MetaClientSelector>,
}
/// Represents a parsed query statement, functionally equivalent to [query::parser::QueryStatement].
/// This enum is defined here to avoid cyclic dependencies with the query parser module.
#[derive(Debug, Clone)]
pub enum QueryStatement {
Sql(Statement),
Promql(EvalStmt),
}
impl Display for QueryStatement {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
QueryStatement::Sql(stmt) => write!(f, "{}", stmt),
QueryStatement::Promql(eval_stmt) => write!(f, "{}", eval_stmt),
}
}
}
impl ProcessManager {
/// Create a [ProcessManager] instance with server address and kv client.
pub fn new(server_addr: String, meta_client: Option<MetaClientRef>) -> Self {
@@ -67,6 +91,7 @@ impl ProcessManager {
query: String,
client: String,
query_id: Option<ProcessId>,
_slow_query_timer: Option<SlowQueryTimer>,
) -> Ticket {
let id = query_id.unwrap_or_else(|| self.next_id.fetch_add(1, Ordering::Relaxed));
let process = ProcessInfo {
@@ -93,6 +118,7 @@ impl ProcessManager {
manager: self.clone(),
id,
cancellation_handle,
_slow_query_timer,
}
}
@@ -223,6 +249,9 @@ pub struct Ticket {
pub(crate) manager: ProcessManagerRef,
pub(crate) id: ProcessId,
pub cancellation_handle: Arc<CancellationHandle>,
// Keep the handle of the slow query timer to ensure it will trigger the event recording when dropped.
_slow_query_timer: Option<SlowQueryTimer>,
}
impl Drop for Ticket {
@@ -263,6 +292,114 @@ impl Debug for CancellableProcess {
}
}
/// SlowQueryTimer is used to log a slow query when it's dropped.
/// In drop(), it checks whether the query exceeded the threshold and, if so, sends a slow query event to the recorder.
pub struct SlowQueryTimer {
start: Instant,
stmt: QueryStatement,
threshold: Duration,
sample_ratio: f64,
record_type: SlowQueriesRecordType,
recorder: EventRecorderRef,
}
impl SlowQueryTimer {
pub fn new(
stmt: QueryStatement,
threshold: Duration,
sample_ratio: f64,
record_type: SlowQueriesRecordType,
recorder: EventRecorderRef,
) -> Self {
Self {
start: Instant::now(),
stmt,
threshold,
sample_ratio,
record_type,
recorder,
}
}
}
impl SlowQueryTimer {
fn send_slow_query_event(&self, elapsed: Duration) {
let mut slow_query_event = SlowQueryEvent {
cost: elapsed.as_millis() as u64,
threshold: self.threshold.as_millis() as u64,
query: "".to_string(),
// The following fields are only used for PromQL queries.
is_promql: false,
promql_range: None,
promql_step: None,
promql_start: None,
promql_end: None,
};
match &self.stmt {
QueryStatement::Promql(stmt) => {
slow_query_event.is_promql = true;
slow_query_event.query = stmt.expr.to_string();
slow_query_event.promql_step = Some(stmt.interval.as_millis() as u64);
let start = stmt
.start
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as i64;
let end = stmt
.end
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as i64;
slow_query_event.promql_range = Some((end - start) as u64);
slow_query_event.promql_start = Some(start);
slow_query_event.promql_end = Some(end);
}
QueryStatement::Sql(stmt) => {
slow_query_event.query = stmt.to_string();
}
}
match self.record_type {
// Send the slow query event to the event recorder to persist it as the system table.
SlowQueriesRecordType::SystemTable => {
self.recorder.record(Box::new(slow_query_event));
}
// Record the slow query in a specific logs file.
SlowQueriesRecordType::Log => {
slow!(
cost = slow_query_event.cost,
threshold = slow_query_event.threshold,
query = slow_query_event.query,
is_promql = slow_query_event.is_promql,
promql_range = slow_query_event.promql_range,
promql_step = slow_query_event.promql_step,
promql_start = slow_query_event.promql_start,
promql_end = slow_query_event.promql_end,
);
}
}
}
}
impl Drop for SlowQueryTimer {
fn drop(&mut self) {
// Calculate the elapsed duration since the timer was created.
let elapsed = self.start.elapsed();
if elapsed > self.threshold {
// Only capture a portion of slow queries based on sample_ratio.
// Generate a random number in [0, 1) and compare it with sample_ratio.
if self.sample_ratio >= 1.0 || random::<f64>() <= self.sample_ratio {
self.send_slow_query_event(elapsed);
}
}
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
@@ -278,6 +415,7 @@ mod tests {
"SELECT * FROM table".to_string(),
"".to_string(),
None,
None,
);
let running_processes = process_manager.local_processes(None).unwrap();
@@ -301,6 +439,7 @@ mod tests {
"SELECT * FROM table".to_string(),
"client1".to_string(),
Some(custom_id),
None,
);
assert_eq!(ticket.id, custom_id);
@@ -321,6 +460,7 @@ mod tests {
"SELECT * FROM table1".to_string(),
"client1".to_string(),
None,
None,
);
let ticket2 = process_manager.clone().register_query(
@@ -329,6 +469,7 @@ mod tests {
"SELECT * FROM table2".to_string(),
"client2".to_string(),
None,
None,
);
let running_processes = process_manager.local_processes(Some("public")).unwrap();
@@ -350,6 +491,7 @@ mod tests {
"SELECT * FROM table1".to_string(),
"client1".to_string(),
None,
None,
);
let _ticket2 = process_manager.clone().register_query(
@@ -358,6 +500,7 @@ mod tests {
"SELECT * FROM table2".to_string(),
"client2".to_string(),
None,
None,
);
// Test listing processes for specific catalog
@@ -384,6 +527,7 @@ mod tests {
"SELECT * FROM table".to_string(),
"client1".to_string(),
None,
None,
);
assert_eq!(process_manager.local_processes(None).unwrap().len(), 1);
process_manager.deregister_query("public".to_string(), ticket.id);
@@ -400,6 +544,7 @@ mod tests {
"SELECT * FROM table".to_string(),
"client1".to_string(),
None,
None,
);
assert!(!ticket.cancellation_handle.is_cancelled());
@@ -417,6 +562,7 @@ mod tests {
"SELECT * FROM table".to_string(),
"client1".to_string(),
None,
None,
);
assert!(!ticket.cancellation_handle.is_cancelled());
let killed = process_manager
@@ -462,6 +608,7 @@ mod tests {
"SELECT COUNT(*) FROM users WHERE age > 18".to_string(),
"test_client".to_string(),
Some(42),
None,
);
let processes = process_manager.local_processes(None).unwrap();
@@ -488,6 +635,7 @@ mod tests {
"SELECT * FROM table".to_string(),
"client1".to_string(),
None,
None,
);
// Process should be registered

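For illustration only, a self-contained simplification of the drop-based pattern `SlowQueryTimer` relies on (the struct and names below are not the crate's API): measure on drop, then keep only a sampled fraction of the queries that exceeded the threshold.

```rust
use std::time::{Duration, Instant};

struct DropTimer {
    start: Instant,
    threshold: Duration,
    sample_ratio: f64,
}

impl Drop for DropTimer {
    fn drop(&mut self) {
        let elapsed = self.start.elapsed();
        // Only report queries over the threshold, and only a sampled portion of them.
        if elapsed > self.threshold
            && (self.sample_ratio >= 1.0 || rand::random::<f64>() <= self.sample_ratio)
        {
            // The real SlowQueryTimer builds a SlowQueryEvent here and either records
            // it to the system table or writes it to the slow-query log.
            println!("slow query took {} ms", elapsed.as_millis());
        }
    }
}

fn run_query() {
    // ProcessManager::register_query now takes the timer and stores it on the Ticket,
    // so in the real code the check fires when the query's ticket is dropped.
    let _timer = DropTimer {
        start: Instant::now(),
        threshold: Duration::from_secs(1),
        sample_ratio: 0.1,
    };
    // ... execute the query ...
}
```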
View File

@@ -133,7 +133,7 @@ impl Predicate {
let Expr::Column(c) = *expr else {
unreachable!();
};
let Expr::Literal(ScalarValue::Utf8(Some(pattern))) = *pattern else {
let Expr::Literal(ScalarValue::Utf8(Some(pattern)), _) = *pattern else {
unreachable!();
};
@@ -148,8 +148,8 @@ impl Predicate {
// left OP right
Expr::BinaryExpr(bin) => match (*bin.left, bin.op, *bin.right) {
// left == right
(Expr::Literal(scalar), Operator::Eq, Expr::Column(c))
| (Expr::Column(c), Operator::Eq, Expr::Literal(scalar)) => {
(Expr::Literal(scalar, _), Operator::Eq, Expr::Column(c))
| (Expr::Column(c), Operator::Eq, Expr::Literal(scalar, _)) => {
let Ok(v) = Value::try_from(scalar) else {
return None;
};
@@ -157,8 +157,8 @@ impl Predicate {
Some(Predicate::Eq(c.name, v))
}
// left != right
(Expr::Literal(scalar), Operator::NotEq, Expr::Column(c))
| (Expr::Column(c), Operator::NotEq, Expr::Literal(scalar)) => {
(Expr::Literal(scalar, _), Operator::NotEq, Expr::Column(c))
| (Expr::Column(c), Operator::NotEq, Expr::Literal(scalar, _)) => {
let Ok(v) = Value::try_from(scalar) else {
return None;
};
@@ -189,7 +189,7 @@ impl Predicate {
let mut values = Vec::with_capacity(list.len());
for scalar in list {
// Safety: checked by `is_all_scalars`
let Expr::Literal(scalar) = scalar else {
let Expr::Literal(scalar, _) = scalar else {
unreachable!();
};
@@ -237,7 +237,7 @@ fn like_utf8(s: &str, pattern: &str, case_insensitive: &bool) -> Option<bool> {
}
fn is_string_literal(expr: &Expr) -> bool {
matches!(expr, Expr::Literal(ScalarValue::Utf8(Some(_))))
matches!(expr, Expr::Literal(ScalarValue::Utf8(Some(_)), _))
}
fn is_column(expr: &Expr) -> bool {
@@ -286,14 +286,14 @@ impl Predicates {
/// Returns true when the values are all [`DfExpr::Literal`].
fn is_all_scalars(list: &[Expr]) -> bool {
list.iter().all(|v| matches!(v, Expr::Literal(_)))
list.iter().all(|v| matches!(v, Expr::Literal(_, _)))
}
#[cfg(test)]
mod tests {
use datafusion::common::{Column, ScalarValue};
use datafusion::common::Column;
use datafusion::logical_expr::expr::InList;
use datafusion::logical_expr::BinaryExpr;
use datafusion::logical_expr::{BinaryExpr, Literal};
use super::*;
@@ -378,7 +378,7 @@ mod tests {
let expr = Expr::Like(Like {
negated: false,
expr: Box::new(column("a")),
pattern: Box::new(string_literal("%abc")),
pattern: Box::new("%abc".lit()),
case_insensitive: true,
escape_char: None,
});
@@ -405,7 +405,7 @@ mod tests {
let expr = Expr::Like(Like {
negated: false,
expr: Box::new(column("a")),
pattern: Box::new(string_literal("%abc")),
pattern: Box::new("%abc".lit()),
case_insensitive: false,
escape_char: None,
});
@@ -425,7 +425,7 @@ mod tests {
let expr = Expr::Like(Like {
negated: true,
expr: Box::new(column("a")),
pattern: Box::new(string_literal("%abc")),
pattern: Box::new("%abc".lit()),
case_insensitive: true,
escape_char: None,
});
@@ -440,10 +440,6 @@ mod tests {
Expr::Column(Column::from_name(name))
}
fn string_literal(v: &str) -> Expr {
Expr::Literal(ScalarValue::Utf8(Some(v.to_string())))
}
fn match_string_value(v: &Value, expected: &str) -> bool {
matches!(v, Value::String(bs) if bs.as_utf8() == expected)
}
@@ -463,13 +459,13 @@ mod tests {
let expr1 = Expr::BinaryExpr(BinaryExpr {
left: Box::new(column("a")),
op: Operator::Eq,
right: Box::new(string_literal("a_value")),
right: Box::new("a_value".lit()),
});
let expr2 = Expr::BinaryExpr(BinaryExpr {
left: Box::new(column("b")),
op: Operator::NotEq,
right: Box::new(string_literal("b_value")),
right: Box::new("b_value".lit()),
});
(expr1, expr2)
@@ -508,7 +504,7 @@ mod tests {
let inlist_expr = Expr::InList(InList {
expr: Box::new(column("a")),
list: vec![string_literal("a1"), string_literal("a2")],
list: vec!["a1".lit(), "a2".lit()],
negated: false,
});
@@ -518,7 +514,7 @@ mod tests {
let inlist_expr = Expr::InList(InList {
expr: Box::new(column("a")),
list: vec![string_literal("a1"), string_literal("a2")],
list: vec!["a1".lit(), "a2".lit()],
negated: true,
});
let inlist_p = Predicate::from_expr(inlist_expr).unwrap();

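The changes in this file follow the DataFusion upgrade in which `Expr::Literal` gained a second (metadata) field; below is a brief sketch of the resulting pattern, using the `Literal` trait's `.lit()` in place of the removed `string_literal` helper:

```rust
use datafusion::common::ScalarValue;
use datafusion::logical_expr::{Expr, Literal};

fn literal_as_str(expr: &Expr) -> Option<&str> {
    match expr {
        // The second field (literal metadata) is ignored with `_`.
        Expr::Literal(ScalarValue::Utf8(Some(s)), _) => Some(s.as_str()),
        _ => None,
    }
}

fn example() {
    // `.lit()` builds Expr::Literal(ScalarValue::Utf8(..), None) from a &str.
    let expr = "%abc".lit();
    assert_eq!(literal_as_str(&expr), Some("%abc"));
}
```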
View File

@@ -32,7 +32,7 @@ use dummy_catalog::DummyCatalogList;
use table::TableRef;
use crate::error::{
CastManagerSnafu, DatafusionSnafu, DecodePlanSnafu, GetViewCacheSnafu, ProjectViewColumnsSnafu,
CastManagerSnafu, DecodePlanSnafu, GetViewCacheSnafu, ProjectViewColumnsSnafu,
QueryAccessDeniedSnafu, Result, TableNotExistSnafu, ViewInfoNotFoundSnafu,
ViewPlanColumnsChangedSnafu,
};
@@ -199,10 +199,10 @@ impl DfTableSourceProvider {
logical_plan
};
Ok(Arc::new(
ViewTable::try_new(logical_plan, Some(view_info.definition.to_string()))
.context(DatafusionSnafu)?,
))
Ok(Arc::new(ViewTable::new(
logical_plan,
Some(view_info.definition.to_string()),
)))
}
}

View File

@@ -75,7 +75,7 @@ impl StoreConfig {
#[cfg(feature = "pg_kvbackend")]
BackendImpl::PostgresStore => {
let table_name = &self.meta_table_name;
let pool = meta_srv::bootstrap::create_postgres_pool(store_addrs)
let pool = meta_srv::bootstrap::create_postgres_pool(store_addrs, None)
.await
.map_err(BoxedError::new)?;
Ok(common_meta::kv_backend::rds::PgStore::with_pg_pool(

View File

@@ -74,11 +74,19 @@ pub fn make_create_region_request_for_peer(
let catalog = &create_table_expr.catalog_name;
let schema = &create_table_expr.schema_name;
let storage_path = region_storage_path(catalog, schema);
let partition_exprs = region_routes
.iter()
.map(|r| (r.region.id.region_number(), r.region.partition_expr()))
.collect::<HashMap<_, _>>();
for region_number in &regions_on_this_peer {
let region_id = RegionId::new(logical_table_id, *region_number);
let region_request =
request_builder.build_one(region_id, storage_path.clone(), &HashMap::new());
let region_request = request_builder.build_one(
region_id,
storage_path.clone(),
&HashMap::new(),
&partition_exprs,
);
requests.push(region_request);
}

View File

@@ -29,6 +29,7 @@ datatypes.workspace = true
enum_dispatch = "0.3"
futures.workspace = true
futures-util.workspace = true
humantime.workspace = true
lazy_static.workspace = true
moka = { workspace = true, features = ["future"] }
parking_lot.workspace = true
@@ -38,6 +39,7 @@ query.workspace = true
rand.workspace = true
serde_json.workspace = true
snafu.workspace = true
store-api.workspace = true
substrait.workspace = true
tokio.workspace = true
tokio-stream = { workspace = true, features = ["net"] }

View File

@@ -17,7 +17,7 @@ use std::sync::Arc;
use std::time::Duration;
use common_grpc::channel_manager::{ChannelConfig, ChannelManager};
use common_meta::node_manager::{DatanodeRef, FlownodeRef, NodeManager};
use common_meta::node_manager::{DatanodeManager, DatanodeRef, FlownodeManager, FlownodeRef};
use common_meta::peer::Peer;
use moka::future::{Cache, CacheBuilder};
@@ -45,7 +45,7 @@ impl Debug for NodeClients {
}
#[async_trait::async_trait]
impl NodeManager for NodeClients {
impl DatanodeManager for NodeClients {
async fn datanode(&self, datanode: &Peer) -> DatanodeRef {
let client = self.get_client(datanode).await;
@@ -60,7 +60,10 @@ impl NodeManager for NodeClients {
*accept_compression,
))
}
}
#[async_trait::async_trait]
impl FlownodeManager for NodeClients {
async fn flownode(&self, flownode: &Peer) -> FlownodeRef {
let client = self.get_client(flownode).await;

View File

@@ -75,12 +75,24 @@ pub struct Database {
}
pub struct DatabaseClient {
pub addr: String,
pub inner: GreptimeDatabaseClient<Channel>,
}
impl DatabaseClient {
/// Returns a closure that logs the error when the request fails.
pub fn inspect_err<'a>(&'a self, context: &'a str) -> impl Fn(&tonic::Status) + 'a {
let addr = &self.addr;
move |status| {
error!("Failed to {context} request, peer: {addr}, status: {status:?}");
}
}
}
fn make_database_client(client: &Client) -> Result<DatabaseClient> {
let (_, channel) = client.find_channel()?;
let (addr, channel) = client.find_channel()?;
Ok(DatabaseClient {
addr,
inner: GreptimeDatabaseClient::new(channel)
.max_decoding_message_size(client.max_grpc_recv_message_size())
.max_encoding_message_size(client.max_grpc_send_message_size()),
@@ -167,14 +179,19 @@ impl Database {
requests: InsertRequests,
hints: &[(&str, &str)],
) -> Result<u32> {
let mut client = make_database_client(&self.client)?.inner;
let mut client = make_database_client(&self.client)?;
let request = self.to_rpc_request(Request::Inserts(requests));
let mut request = tonic::Request::new(request);
let metadata = request.metadata_mut();
Self::put_hints(metadata, hints)?;
let response = client.handle(request).await?.into_inner();
let response = client
.inner
.handle(request)
.await
.inspect_err(client.inspect_err("insert_with_hints"))?
.into_inner();
from_grpc_response(response)
}
@@ -189,14 +206,19 @@ impl Database {
requests: RowInsertRequests,
hints: &[(&str, &str)],
) -> Result<u32> {
let mut client = make_database_client(&self.client)?.inner;
let mut client = make_database_client(&self.client)?;
let request = self.to_rpc_request(Request::RowInserts(requests));
let mut request = tonic::Request::new(request);
let metadata = request.metadata_mut();
Self::put_hints(metadata, hints)?;
let response = client.handle(request).await?.into_inner();
let response = client
.inner
.handle(request)
.await
.inspect_err(client.inspect_err("row_inserts_with_hints"))?
.into_inner();
from_grpc_response(response)
}
@@ -217,9 +239,14 @@ impl Database {
/// Make a request to the database.
pub async fn handle(&self, request: Request) -> Result<u32> {
let mut client = make_database_client(&self.client)?.inner;
let mut client = make_database_client(&self.client)?;
let request = self.to_rpc_request(request);
let response = client.handle(request).await?.into_inner();
let response = client
.inner
.handle(request)
.await
.inspect_err(client.inspect_err("handle"))?
.into_inner();
from_grpc_response(response)
}
@@ -231,7 +258,7 @@ impl Database {
max_retries: u32,
hints: &[(&str, &str)],
) -> Result<u32> {
let mut client = make_database_client(&self.client)?.inner;
let mut client = make_database_client(&self.client)?;
let mut retries = 0;
let request = self.to_rpc_request(request);
@@ -240,7 +267,11 @@ impl Database {
let mut tonic_request = tonic::Request::new(request.clone());
let metadata = tonic_request.metadata_mut();
Self::put_hints(metadata, hints)?;
let raw_response = client.handle(tonic_request).await;
let raw_response = client
.inner
.handle(tonic_request)
.await
.inspect_err(client.inspect_err("handle"));
match (raw_response, retries < max_retries) {
(Ok(resp), _) => return from_grpc_response(resp.into_inner()),
(Err(err), true) => {

View File

@@ -133,6 +133,13 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("External error"))]
External {
#[snafu(implicit)]
location: Location,
source: BoxedError,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -154,6 +161,7 @@ impl ErrorExt for Error {
Error::IllegalGrpcClientState { .. } => StatusCode::Unexpected,
Error::InvalidTonicMetadataValue { .. } => StatusCode::InvalidArguments,
Error::ConvertSchema { source, .. } => source.status_code(),
Error::External { source, .. } => source.status_code(),
}
}

View File

@@ -0,0 +1,60 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::time::Duration;
use api::v1::RowInsertRequests;
use humantime::format_duration;
use store_api::mito_engine_options::{APPEND_MODE_KEY, TTL_KEY};
use crate::error::Result;
/// Context holds the catalog and schema information.
pub struct Context<'a> {
/// The catalog name.
pub catalog: &'a str,
/// The schema name.
pub schema: &'a str,
}
/// Options for insert operations.
#[derive(Debug, Clone, Copy)]
pub struct InsertOptions {
/// Time-to-live for the inserted data.
pub ttl: Duration,
/// Whether to use append mode for the insert.
pub append_mode: bool,
}
impl InsertOptions {
/// Converts the insert options to a list of key-value string hints.
pub fn to_hints(&self) -> Vec<(&'static str, String)> {
vec![
(TTL_KEY, format_duration(self.ttl).to_string()),
(APPEND_MODE_KEY, self.append_mode.to_string()),
]
}
}
/// [`Inserter`] allows different components to share a unified mechanism for inserting data.
///
/// An implementation may perform the insert locally (e.g., via a direct procedure call) or
/// delegate/forward it to another node for processing (e.g., MetaSrv forwarding to an
/// available Frontend).
#[async_trait::async_trait]
pub trait Inserter: Send + Sync {
async fn insert_rows(&self, context: &Context<'_>, requests: RowInsertRequests) -> Result<()>;
fn set_options(&mut self, options: &InsertOptions);
}

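A rough usage sketch for the new `client::inserter` module; the hint keys come from `store_api::mito_engine_options::{TTL_KEY, APPEND_MODE_KEY}` and the rendered TTL string is produced by humantime:

```rust
use std::time::Duration;

use client::inserter::InsertOptions;

fn example_hints() {
    let options = InsertOptions {
        ttl: Duration::from_secs(7 * 24 * 3600),
        append_mode: true,
    };
    // Produces key/value hints using the mito engine option keys, e.g.
    // [("ttl", "7days"), ("append_mode", "true")] with humantime's rendering.
    for (key, value) in options.to_hints() {
        println!("{key}={value}");
    }
}
```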
View File

@@ -19,6 +19,7 @@ pub mod client_manager;
pub mod database;
pub mod error;
pub mod flow;
pub mod inserter;
pub mod load_balance;
mod metrics;
pub mod region;

View File

@@ -38,6 +38,7 @@ common-config.workspace = true
common-error.workspace = true
common-grpc.workspace = true
common-macro.workspace = true
common-mem-prof.workspace = true
common-meta.workspace = true
common-options.workspace = true
common-procedure.workspace = true

View File

@@ -28,7 +28,7 @@ use tracing_appender::non_blocking::WorkerGuard;
use crate::datanode::{DatanodeOptions, Instance, APP_NAME};
use crate::error::{MetaClientInitSnafu, MissingConfigSnafu, Result, StartDatanodeSnafu};
use crate::{create_resource_limit_metrics, log_versions};
use crate::{create_resource_limit_metrics, log_versions, maybe_activate_heap_profile};
/// Builder for Datanode instance.
pub struct InstanceBuilder {
@@ -68,6 +68,7 @@ impl InstanceBuilder {
);
log_versions(verbose_version(), short_version(), APP_NAME);
maybe_activate_heap_profile(&dn_opts.memory);
create_resource_limit_metrics(APP_NAME);
plugins::setup_datanode_plugins(plugins, &opts.plugins, dn_opts)

View File

@@ -46,7 +46,7 @@ use crate::error::{
MissingConfigSnafu, Result, ShutdownFlownodeSnafu, StartFlownodeSnafu,
};
use crate::options::{GlobalOptions, GreptimeOptions};
use crate::{create_resource_limit_metrics, log_versions, App};
use crate::{create_resource_limit_metrics, log_versions, maybe_activate_heap_profile, App};
pub const APP_NAME: &str = "greptime-flownode";
@@ -280,6 +280,7 @@ impl StartCommand {
);
log_versions(verbose_version(), short_version(), APP_NAME);
maybe_activate_heap_profile(&opts.component.memory);
create_resource_limit_metrics(APP_NAME);
info!("Flownode start command: {:#?}", self);
@@ -375,7 +376,8 @@ impl StartCommand {
flow_auth_header,
opts.query.clone(),
opts.flow.batching_mode.clone(),
);
)
.context(StartFlownodeSnafu)?;
let frontend_client = Arc::new(frontend_client);
let flownode_builder = FlownodeBuilder::new(
opts.clone(),

View File

@@ -47,7 +47,7 @@ use tracing_appender::non_blocking::WorkerGuard;
use crate::error::{self, Result};
use crate::options::{GlobalOptions, GreptimeOptions};
use crate::{create_resource_limit_metrics, log_versions, App};
use crate::{create_resource_limit_metrics, log_versions, maybe_activate_heap_profile, App};
type FrontendOptions = GreptimeOptions<frontend::frontend::FrontendOptions>;
@@ -279,10 +279,11 @@ impl StartCommand {
&opts.component.logging,
&opts.component.tracing,
opts.component.node_id.clone(),
opts.component.slow_query.as_ref(),
Some(&opts.component.slow_query),
);
log_versions(verbose_version(), short_version(), APP_NAME);
maybe_activate_heap_profile(&opts.component.memory);
create_resource_limit_metrics(APP_NAME);
info!("Frontend start command: {:#?}", self);

View File

@@ -15,7 +15,10 @@
#![feature(assert_matches, let_chains)]
use async_trait::async_trait;
use common_telemetry::{error, info};
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_mem_prof::activate_heap_profile;
use common_telemetry::{error, info, warn};
use stat::{get_cpu_limit, get_memory_limit};
use crate::error::Result;
@@ -145,3 +148,20 @@ fn log_env_flags() {
info!("argument: {}", argument);
}
}
pub fn maybe_activate_heap_profile(memory_options: &common_options::memory::MemoryOptions) {
if memory_options.enable_heap_profiling {
match activate_heap_profile() {
Ok(()) => {
info!("Heap profile is active");
}
Err(err) => {
if err.status_code() == StatusCode::Unsupported {
info!("Heap profile is not supported");
} else {
warn!(err; "Failed to activate heap profile");
}
}
}
}
}

View File

@@ -30,7 +30,7 @@ use tracing_appender::non_blocking::WorkerGuard;
use crate::error::{self, LoadLayeredConfigSnafu, Result, StartMetaServerSnafu};
use crate::options::{GlobalOptions, GreptimeOptions};
use crate::{create_resource_limit_metrics, log_versions, App};
use crate::{create_resource_limit_metrics, log_versions, maybe_activate_heap_profile, App};
type MetasrvOptions = GreptimeOptions<meta_srv::metasrv::MetasrvOptions>;
@@ -325,6 +325,7 @@ impl StartCommand {
);
log_versions(verbose_version(), short_version(), APP_NAME);
maybe_activate_heap_profile(&opts.component.memory);
create_resource_limit_metrics(APP_NAME);
info!("Metasrv start command: {:#?}", self);

View File

@@ -34,17 +34,19 @@ use common_meta::cluster::{NodeInfo, NodeStatus};
use common_meta::datanode::RegionStat;
use common_meta::ddl::flow_meta::FlowMetadataAllocator;
use common_meta::ddl::table_meta::TableMetadataAllocator;
use common_meta::ddl::{DdlContext, NoopRegionFailureDetectorControl, ProcedureExecutorRef};
use common_meta::ddl::{DdlContext, NoopRegionFailureDetectorControl};
use common_meta::ddl_manager::DdlManager;
use common_meta::key::flow::flow_state::FlowStat;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::{TableMetadataManager, TableMetadataManagerRef};
use common_meta::kv_backend::KvBackendRef;
use common_meta::peer::Peer;
use common_meta::procedure_executor::LocalProcedureExecutor;
use common_meta::region_keeper::MemoryRegionKeeper;
use common_meta::region_registry::LeaderRegionRegistry;
use common_meta::sequence::SequenceBuilder;
use common_meta::wal_options_allocator::{build_wal_options_allocator, WalOptionsAllocatorRef};
use common_options::memory::MemoryOptions;
use common_procedure::{ProcedureInfo, ProcedureManagerRef};
use common_telemetry::info;
use common_telemetry::logging::{
@@ -83,7 +85,7 @@ use tracing_appender::non_blocking::WorkerGuard;
use crate::error::{Result, StartFlownodeSnafu};
use crate::options::{GlobalOptions, GreptimeOptions};
use crate::{create_resource_limit_metrics, error, log_versions, App};
use crate::{create_resource_limit_metrics, error, log_versions, maybe_activate_heap_profile, App};
pub const APP_NAME: &str = "greptime-standalone";
@@ -155,8 +157,9 @@ pub struct StandaloneOptions {
pub init_regions_in_background: bool,
pub init_regions_parallelism: usize,
pub max_in_flight_write_bytes: Option<ReadableSize>,
pub slow_query: Option<SlowQueryOptions>,
pub slow_query: SlowQueryOptions,
pub query: QueryOptions,
pub memory: MemoryOptions,
}
impl Default for StandaloneOptions {
@@ -188,8 +191,9 @@ impl Default for StandaloneOptions {
init_regions_in_background: false,
init_regions_parallelism: 16,
max_in_flight_write_bytes: None,
slow_query: Some(SlowQueryOptions::default()),
slow_query: SlowQueryOptions::default(),
query: QueryOptions::default(),
memory: MemoryOptions::default(),
}
}
}
@@ -482,10 +486,11 @@ impl StartCommand {
&opts.component.logging,
&opts.component.tracing,
None,
opts.component.slow_query.as_ref(),
Some(&opts.component.slow_query),
);
log_versions(verbose_version(), short_version(), APP_NAME);
maybe_activate_heap_profile(&opts.component.memory);
create_resource_limit_metrics(APP_NAME);
info!("Standalone start command: {:#?}", self);
@@ -636,9 +641,8 @@ impl StartCommand {
flow_metadata_allocator: flow_metadata_allocator.clone(),
region_failure_detector_controller: Arc::new(NoopRegionFailureDetectorControl),
};
let procedure_manager_c = procedure_manager.clone();
let ddl_manager = DdlManager::try_new(ddl_context, procedure_manager_c, true)
let ddl_manager = DdlManager::try_new(ddl_context, procedure_manager.clone(), true)
.context(error::InitDdlManagerSnafu)?;
#[cfg(feature = "enterprise")]
let ddl_manager = {
@@ -646,7 +650,11 @@ impl StartCommand {
plugins.get();
ddl_manager.with_trigger_ddl_manager(trigger_ddl_manager)
};
let ddl_task_executor: ProcedureExecutorRef = Arc::new(ddl_manager);
let procedure_executor = Arc::new(LocalProcedureExecutor::new(
Arc::new(ddl_manager),
procedure_manager.clone(),
));
let fe_instance = FrontendBuilder::new(
fe_opts.clone(),
@@ -654,7 +662,7 @@ impl StartCommand {
layered_cache_registry.clone(),
catalog_manager.clone(),
node_manager.clone(),
ddl_task_executor.clone(),
procedure_executor.clone(),
process_manager,
)
.with_plugin(plugins.clone())
@@ -679,7 +687,7 @@ impl StartCommand {
catalog_manager.clone(),
kv_backend.clone(),
layered_cache_registry.clone(),
ddl_task_executor.clone(),
procedure_executor,
node_manager,
)
.await
@@ -826,6 +834,7 @@ impl InformationExtension for StandaloneInformationExtension {
region_manifest: region_stat.manifest.into(),
data_topic_latest_entry_id: region_stat.data_topic_latest_entry_id,
metadata_topic_latest_entry_id: region_stat.metadata_topic_latest_entry_id,
write_bytes: 0,
}
})
.collect::<Vec<_>>();

View File

@@ -34,6 +34,7 @@ use query::options::QueryOptions;
use servers::export_metrics::ExportMetricsOption;
use servers::grpc::GrpcOptions;
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
use store_api::path_utils::WAL_DIR;
#[allow(deprecated)]
@@ -190,6 +191,13 @@ fn test_load_metasrv_example_config() {
remote_write: Some(Default::default()),
..Default::default()
},
backend_tls: Some(TlsOption {
mode: TlsMode::Prefer,
cert_path: String::new(),
key_path: String::new(),
ca_cert_path: String::new(),
watch: false,
}),
..Default::default()
},
..Default::default()
@@ -245,6 +253,7 @@ fn test_load_flownode_example_config() {
..Default::default()
},
user_provider: None,
memory: Default::default(),
},
..Default::default()
};
@@ -298,6 +307,7 @@ fn test_load_standalone_example_config() {
cors_allowed_origins: vec!["https://example.com".to_string()],
..Default::default()
},
..Default::default()
},
..Default::default()

View File

@@ -20,6 +20,7 @@ pub mod range_read;
#[allow(clippy::all)]
pub mod readable_size;
pub mod secrets;
pub mod serde;
pub type AffectedRows = usize;

View File

@@ -12,16 +12,20 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//! Format to store in parquet.
//!
//! We store two additional internal columns at last:
//! - `__sequence`, the sequence number of a row. Type: uint64
//! - `__op_type`, the op type of the row. Type: uint8
//!
//! We store other columns in the same order as [RegionMetadata::field_columns()](store_api::metadata::RegionMetadata::field_columns()).
//!
use serde::{Deserialize, Deserializer};
/// Number of columns that have fixed positions.
///
/// Contains all internal columns.
pub(crate) const PLAIN_FIXED_POS_COLUMN_NUM: usize = 2;
/// Deserialize an empty string as the default value.
pub fn empty_string_as_default<'de, D, T>(deserializer: D) -> Result<T, D::Error>
where
D: Deserializer<'de>,
T: Default + Deserialize<'de>,
{
let s = String::deserialize(deserializer)?;
if s.is_empty() {
Ok(T::default())
} else {
T::deserialize(serde::de::value::StringDeserializer::<D::Error>::new(s))
.map_err(serde::de::Error::custom)
}
}

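A minimal sketch of wiring the new helper up with serde's `deserialize_with`, assuming it is exposed as `common_base::serde::empty_string_as_default` (per the new `pub mod serde;` above); the `Mode` enum and JSON input are made up for illustration:

```rust
use common_base::serde::empty_string_as_default;
use serde::Deserialize;

#[derive(Debug, Default, Deserialize, PartialEq)]
#[serde(rename_all = "lowercase")]
enum Mode {
    #[default]
    Auto,
    Manual,
}

#[derive(Debug, Deserialize)]
struct ExampleOptions {
    // An empty string in the config falls back to Mode::default() instead of
    // failing to parse an enum variant from "".
    #[serde(deserialize_with = "empty_string_as_default")]
    mode: Mode,
}

fn example() {
    let empty: ExampleOptions = serde_json::from_str(r#"{ "mode": "" }"#).unwrap();
    assert_eq!(empty.mode, Mode::Auto);

    let manual: ExampleOptions = serde_json::from_str(r#"{ "mode": "manual" }"#).unwrap();
    assert_eq!(manual.mode, Mode::Manual);
}
```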
View File

@@ -25,19 +25,17 @@ common-error.workspace = true
common-macro.workspace = true
common-recordbatch.workspace = true
common-runtime.workspace = true
common-telemetry.workspace = true
datafusion.workspace = true
datafusion-orc.workspace = true
datatypes.workspace = true
derive_builder.workspace = true
futures.workspace = true
lazy_static.workspace = true
object-store.workspace = true
object_store_opendal.workspace = true
orc-rust = { git = "https://github.com/datafusion-contrib/orc-rust", rev = "3134cab581a8e91b942d6a23aca2916ea965f6bb", default-features = false, features = [
"async",
] }
orc-rust = { version = "0.6.3", default-features = false, features = ["async"] }
parquet.workspace = true
paste.workspace = true
rand.workspace = true
regex = "1.7"
serde.workspace = true
snafu.workspace = true
@@ -47,6 +45,4 @@ tokio-util.workspace = true
url = "2.3"
[dev-dependencies]
common-telemetry.workspace = true
common-test-util.workspace = true
uuid.workspace = true

View File

@@ -12,16 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use arrow_schema::{ArrowError, Schema, SchemaRef};
use arrow_schema::Schema;
use async_trait::async_trait;
use bytes::Bytes;
use common_recordbatch::adapter::RecordBatchStreamTypeAdapter;
use datafusion::datasource::physical_plan::{FileMeta, FileOpenFuture, FileOpener};
use datafusion::error::{DataFusionError, Result as DfResult};
use futures::future::BoxFuture;
use futures::{FutureExt, StreamExt, TryStreamExt};
use futures::FutureExt;
use object_store::ObjectStore;
use orc_rust::arrow_reader::ArrowReaderBuilder;
use orc_rust::async_arrow_reader::ArrowStreamReader;
@@ -97,67 +92,6 @@ impl FileFormat for OrcFormat {
}
}
#[derive(Debug, Clone)]
pub struct OrcOpener {
object_store: Arc<ObjectStore>,
output_schema: SchemaRef,
projection: Option<Vec<usize>>,
}
impl OrcOpener {
pub fn new(
object_store: ObjectStore,
output_schema: SchemaRef,
projection: Option<Vec<usize>>,
) -> Self {
Self {
object_store: Arc::from(object_store),
output_schema,
projection,
}
}
}
impl FileOpener for OrcOpener {
fn open(&self, meta: FileMeta) -> DfResult<FileOpenFuture> {
let object_store = self.object_store.clone();
let projected_schema = if let Some(projection) = &self.projection {
let projected_schema = self
.output_schema
.project(projection)
.map_err(|e| DataFusionError::External(Box::new(e)))?;
Arc::new(projected_schema)
} else {
self.output_schema.clone()
};
let projection = self.projection.clone();
Ok(Box::pin(async move {
let path = meta.location().to_string();
let meta = object_store
.stat(&path)
.await
.map_err(|e| DataFusionError::External(Box::new(e)))?;
let reader = object_store
.reader(&path)
.await
.map_err(|e| DataFusionError::External(Box::new(e)))?;
let stream_reader =
new_orc_stream_reader(ReaderAdapter::new(reader, meta.content_length()))
.await
.map_err(|e| DataFusionError::External(Box::new(e)))?;
let stream =
RecordBatchStreamTypeAdapter::new(projected_schema, stream_reader, projection);
let adopted = stream.map_err(|e| ArrowError::ExternalError(Box::new(e)));
Ok(adopted.boxed())
}))
}
}
#[cfg(test)]
mod tests {
use common_test_util::find_workspace_path;

View File

@@ -31,6 +31,7 @@ use datatypes::schema::SchemaRef;
use futures::future::BoxFuture;
use futures::StreamExt;
use object_store::{FuturesAsyncReader, ObjectStore};
use parquet::arrow::arrow_reader::ArrowReaderOptions;
use parquet::arrow::AsyncArrowWriter;
use parquet::basic::{Compression, Encoding, ZstdLevel};
use parquet::file::properties::{WriterProperties, WriterPropertiesBuilder};
@@ -65,7 +66,7 @@ impl FileFormat for ParquetFormat {
.compat();
let metadata = reader
.get_metadata()
.get_metadata(None)
.await
.context(error::ReadParquetSnafuSnafu)?;
@@ -146,7 +147,7 @@ impl LazyParquetFileReader {
impl AsyncFileReader for LazyParquetFileReader {
fn get_bytes(
&mut self,
range: std::ops::Range<usize>,
range: std::ops::Range<u64>,
) -> BoxFuture<'_, ParquetResult<bytes::Bytes>> {
Box::pin(async move {
self.maybe_initialize()
@@ -157,13 +158,16 @@ impl AsyncFileReader for LazyParquetFileReader {
})
}
fn get_metadata(&mut self) -> BoxFuture<'_, ParquetResult<Arc<ParquetMetaData>>> {
fn get_metadata<'a>(
&'a mut self,
options: Option<&'a ArrowReaderOptions>,
) -> BoxFuture<'a, parquet::errors::Result<Arc<ParquetMetaData>>> {
Box::pin(async move {
self.maybe_initialize()
.await
.map_err(|e| ParquetError::External(Box::new(e)))?;
// Safety: Must initialized
self.reader.as_mut().unwrap().get_metadata().await
self.reader.as_mut().unwrap().get_metadata(options).await
})
}
}

View File

@@ -19,35 +19,39 @@ use std::vec;
use common_test_util::find_workspace_path;
use datafusion::assert_batches_eq;
use datafusion::datasource::file_format::file_compression_type::FileCompressionType;
use datafusion::datasource::physical_plan::{
CsvConfig, CsvOpener, FileOpener, FileScanConfig, FileStream, JsonOpener, ParquetExec,
CsvSource, FileScanConfig, FileSource, FileStream, JsonSource, ParquetSource,
};
use datafusion::datasource::source::DataSourceExec;
use datafusion::execution::context::TaskContext;
use datafusion::physical_plan::metrics::ExecutionPlanMetricsSet;
use datafusion::physical_plan::ExecutionPlan;
use datafusion::prelude::SessionContext;
use datafusion_orc::OrcSource;
use futures::StreamExt;
use object_store::ObjectStore;
use super::FORMAT_TYPE;
use crate::file_format::orc::{OrcFormat, OrcOpener};
use crate::file_format::parquet::DefaultParquetFileReaderFactory;
use crate::file_format::{FileFormat, Format};
use crate::file_format::{FileFormat, Format, OrcFormat};
use crate::test_util::{scan_config, test_basic_schema, test_store};
use crate::{error, test_util};
struct Test<'a, T: FileOpener> {
struct Test<'a> {
config: FileScanConfig,
opener: T,
file_source: Arc<dyn FileSource>,
expected: Vec<&'a str>,
}
impl<T: FileOpener> Test<'_, T> {
pub async fn run(self) {
impl Test<'_> {
async fn run(self, store: &ObjectStore) {
let store = Arc::new(object_store_opendal::OpendalStore::new(store.clone()));
let file_opener = self.file_source.create_file_opener(store, &self.config, 0);
let result = FileStream::new(
&self.config,
0,
self.opener,
file_opener,
&ExecutionPlanMetricsSet::new(),
)
.unwrap()
@@ -62,26 +66,16 @@ impl<T: FileOpener> Test<'_, T> {
#[tokio::test]
async fn test_json_opener() {
let store = test_store("/");
let store = Arc::new(object_store_opendal::OpendalStore::new(store));
let schema = test_basic_schema();
let json_opener = || {
JsonOpener::new(
test_util::TEST_BATCH_SIZE,
schema.clone(),
FileCompressionType::UNCOMPRESSED,
store.clone(),
)
};
let file_source = Arc::new(JsonSource::new()).with_batch_size(test_util::TEST_BATCH_SIZE);
let path = &find_workspace_path("/src/common/datasource/tests/json/basic.json")
.display()
.to_string();
let tests = [
Test {
config: scan_config(schema.clone(), None, path),
opener: json_opener(),
config: scan_config(schema.clone(), None, path, file_source.clone()),
file_source: file_source.clone(),
expected: vec![
"+-----+-------+",
"| num | str |",
@@ -93,8 +87,8 @@ async fn test_json_opener() {
],
},
Test {
config: scan_config(schema.clone(), Some(1), path),
opener: json_opener(),
config: scan_config(schema, Some(1), path, file_source.clone()),
file_source,
expected: vec![
"+-----+------+",
"| num | str |",
@@ -106,37 +100,26 @@ async fn test_json_opener() {
];
for test in tests {
test.run().await;
test.run(&store).await;
}
}
#[tokio::test]
async fn test_csv_opener() {
let store = test_store("/");
let store = Arc::new(object_store_opendal::OpendalStore::new(store));
let schema = test_basic_schema();
let path = &find_workspace_path("/src/common/datasource/tests/csv/basic.csv")
.display()
.to_string();
let csv_config = Arc::new(CsvConfig::new(
test_util::TEST_BATCH_SIZE,
schema.clone(),
None,
true,
b',',
b'"',
None,
store,
None,
));
let csv_opener = || CsvOpener::new(csv_config.clone(), FileCompressionType::UNCOMPRESSED);
let file_source = CsvSource::new(true, b',', b'"')
.with_batch_size(test_util::TEST_BATCH_SIZE)
.with_schema(schema.clone());
let tests = [
Test {
config: scan_config(schema.clone(), None, path),
opener: csv_opener(),
config: scan_config(schema.clone(), None, path, file_source.clone()),
file_source: file_source.clone(),
expected: vec![
"+-----+-------+",
"| num | str |",
@@ -148,8 +131,8 @@ async fn test_csv_opener() {
],
},
Test {
config: scan_config(schema.clone(), Some(1), path),
opener: csv_opener(),
config: scan_config(schema, Some(1), path, file_source.clone()),
file_source,
expected: vec![
"+-----+------+",
"| num | str |",
@@ -161,7 +144,7 @@ async fn test_csv_opener() {
];
for test in tests {
test.run().await;
test.run(&store).await;
}
}
@@ -174,12 +157,12 @@ async fn test_parquet_exec() {
let path = &find_workspace_path("/src/common/datasource/tests/parquet/basic.parquet")
.display()
.to_string();
let base_config = scan_config(schema.clone(), None, path);
let exec = ParquetExec::builder(base_config)
.with_parquet_file_reader_factory(Arc::new(DefaultParquetFileReaderFactory::new(store)))
.build();
let parquet_source = ParquetSource::default()
.with_parquet_file_reader_factory(Arc::new(DefaultParquetFileReaderFactory::new(store)));
let config = scan_config(schema, None, path, Arc::new(parquet_source));
let exec = DataSourceExec::from_data_source(config);
let ctx = SessionContext::new();
let context = Arc::new(TaskContext::from(&ctx));
@@ -208,20 +191,18 @@ async fn test_parquet_exec() {
#[tokio::test]
async fn test_orc_opener() {
let root = find_workspace_path("/src/common/datasource/tests/orc")
let path = &find_workspace_path("/src/common/datasource/tests/orc/test.orc")
.display()
.to_string();
let store = test_store(&root);
let schema = OrcFormat.infer_schema(&store, "test.orc").await.unwrap();
let schema = Arc::new(schema);
let orc_opener = OrcOpener::new(store.clone(), schema.clone(), None);
let path = "test.orc";
let store = test_store("/");
let schema = Arc::new(OrcFormat.infer_schema(&store, path).await.unwrap());
let file_source = Arc::new(OrcSource::default());
let tests = [
Test {
config: scan_config(schema.clone(), None, path),
opener: orc_opener.clone(),
config: scan_config(schema.clone(), None, path, file_source.clone()),
file_source: file_source.clone(),
expected: vec![
"+----------+-----+-------+------------+-----+-----+-------+--------------------+------------------------+-----------+---------------+------------+----------------+---------------+-------------------+--------------+---------------+---------------+----------------------------+-------------+",
"| double_a | a | b | str_direct | d | e | f | int_short_repeated | int_neg_short_repeated | int_delta | int_neg_delta | int_direct | int_neg_direct | bigint_direct | bigint_neg_direct | bigint_other | utf8_increase | utf8_decrease | timestamp_simple | date_simple |",
@@ -235,8 +216,8 @@ async fn test_orc_opener() {
],
},
Test {
config: scan_config(schema.clone(), Some(1), path),
opener: orc_opener.clone(),
config: scan_config(schema.clone(), Some(1), path, file_source.clone()),
file_source,
expected: vec![
"+----------+-----+------+------------+---+-----+-------+--------------------+------------------------+-----------+---------------+------------+----------------+---------------+-------------------+--------------+---------------+---------------+-------------------------+-------------+",
"| double_a | a | b | str_direct | d | e | f | int_short_repeated | int_neg_short_repeated | int_delta | int_neg_delta | int_direct | int_neg_direct | bigint_direct | bigint_neg_direct | bigint_other | utf8_increase | utf8_decrease | timestamp_simple | date_simple |",
@@ -248,7 +229,7 @@ async fn test_orc_opener() {
];
for test in tests {
test.run().await;
test.run(&store).await;
}
}

View File

@@ -16,12 +16,12 @@ use std::sync::Arc;
use arrow_schema::{DataType, Field, Schema, SchemaRef};
use common_test_util::temp_dir::{create_temp_dir, TempDir};
use datafusion::common::{Constraints, Statistics};
use datafusion::datasource::file_format::file_compression_type::FileCompressionType;
use datafusion::datasource::listing::PartitionedFile;
use datafusion::datasource::object_store::ObjectStoreUrl;
use datafusion::datasource::physical_plan::{
CsvConfig, CsvOpener, FileScanConfig, FileStream, JsonOpener,
CsvSource, FileGroup, FileScanConfig, FileScanConfigBuilder, FileSource, FileStream,
JsonOpener, JsonSource,
};
use datafusion::physical_plan::metrics::ExecutionPlanMetricsSet;
use object_store::services::Fs;
@@ -68,21 +68,20 @@ pub fn test_basic_schema() -> SchemaRef {
Arc::new(schema)
}
pub fn scan_config(file_schema: SchemaRef, limit: Option<usize>, filename: &str) -> FileScanConfig {
pub(crate) fn scan_config(
file_schema: SchemaRef,
limit: Option<usize>,
filename: &str,
file_source: Arc<dyn FileSource>,
) -> FileScanConfig {
// object_store only recognizes Unix-style paths, so normalize the separators to keep it happy.
let filename = &filename.replace('\\', "/");
let statistics = Statistics::new_unknown(file_schema.as_ref());
FileScanConfig {
object_store_url: ObjectStoreUrl::parse("empty://").unwrap(), // won't be used
file_schema,
file_groups: vec![vec![PartitionedFile::new(filename.to_string(), 10)]],
constraints: Constraints::empty(),
statistics,
projection: None,
limit,
table_partition_cols: vec![],
output_ordering: vec![],
}
let file_group = FileGroup::new(vec![PartitionedFile::new(filename.to_string(), 4096)]);
FileScanConfigBuilder::new(ObjectStoreUrl::local_filesystem(), file_schema, file_source)
.with_file_group(file_group)
.with_limit(limit)
.build()
}
pub async fn setup_stream_to_json_test(origin_path: &str, threshold: impl Fn(usize) -> usize) {
@@ -99,9 +98,14 @@ pub async fn setup_stream_to_json_test(origin_path: &str, threshold: impl Fn(usi
let size = store.read(origin_path).await.unwrap().len();
let config = scan_config(schema.clone(), None, origin_path);
let stream = FileStream::new(&config, 0, json_opener, &ExecutionPlanMetricsSet::new()).unwrap();
let config = scan_config(schema, None, origin_path, Arc::new(JsonSource::new()));
let stream = FileStream::new(
&config,
0,
Arc::new(json_opener),
&ExecutionPlanMetricsSet::new(),
)
.unwrap();
let (tmp_store, dir) = test_tmp_store("test_stream_to_json");
@@ -127,24 +131,17 @@ pub async fn setup_stream_to_csv_test(origin_path: &str, threshold: impl Fn(usiz
let schema = test_basic_schema();
let csv_config = Arc::new(CsvConfig::new(
TEST_BATCH_SIZE,
schema.clone(),
None,
true,
b',',
b'"',
None,
Arc::new(object_store_opendal::OpendalStore::new(store.clone())),
None,
));
let csv_opener = CsvOpener::new(csv_config, FileCompressionType::UNCOMPRESSED);
let csv_source = CsvSource::new(true, b',', b'"')
.with_schema(schema.clone())
.with_batch_size(TEST_BATCH_SIZE);
let config = scan_config(schema, None, origin_path, csv_source.clone());
let size = store.read(origin_path).await.unwrap().len();
let config = scan_config(schema.clone(), None, origin_path);
let csv_opener = csv_source.create_file_opener(
Arc::new(object_store_opendal::OpendalStore::new(store.clone())),
&config,
0,
);
let stream = FileStream::new(&config, 0, csv_opener, &ExecutionPlanMetricsSet::new()).unwrap();
let (tmp_store, dir) = test_tmp_store("test_stream_to_csv");

View File

@@ -8,12 +8,13 @@ license.workspace = true
api.workspace = true
async-trait.workspace = true
backon.workspace = true
client.workspace = true
common-error.workspace = true
common-macro.workspace = true
common-meta.workspace = true
common-telemetry.workspace = true
common-time.workspace = true
humantime.workspace = true
humantime-serde.workspace = true
itertools.workspace = true
serde.workspace = true
serde_json.workspace = true
snafu.workspace = true

View File

@@ -13,7 +13,7 @@
// limitations under the License.
use api::v1::ColumnSchema;
use common_error::ext::ErrorExt;
use common_error::ext::{BoxedError, ErrorExt};
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use snafu::{Location, Snafu};
@@ -22,12 +22,6 @@ use snafu::{Location, Snafu};
#[snafu(visibility(pub))]
#[stack_trace_debug]
pub enum Error {
#[snafu(display("No available frontend"))]
NoAvailableFrontend {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Mismatched schema, expected: {:?}, actual: {:?}", expected, actual))]
MismatchedSchema {
#[snafu(implicit)]
@@ -35,6 +29,30 @@ pub enum Error {
expected: Vec<ColumnSchema>,
actual: Vec<ColumnSchema>,
},
#[snafu(display("Failed to serialize event"))]
SerializeEvent {
#[snafu(source)]
error: serde_json::error::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to insert events"))]
InsertEvents {
// BoxedError is used here to avoid the circular dependency that would arise from directly referencing `client::error::Error`.
source: BoxedError,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Keyvalue backend error"))]
KvBackend {
// BoxedError is used here to avoid the circular dependency that would arise from directly referencing `common_meta::error::Error`.
source: BoxedError,
#[snafu(implicit)]
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -42,8 +60,10 @@ pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::MismatchedSchema { .. } => StatusCode::InvalidArguments,
Error::NoAvailableFrontend { .. } => StatusCode::Internal,
Error::MismatchedSchema { .. } | Error::SerializeEvent { .. } => {
StatusCode::InvalidArguments
}
Error::InsertEvents { .. } | Error::KvBackend { .. } => StatusCode::Internal,
}
}
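For reference, a minimal sketch (not part of this change) of how a caller is expected to feed these boxed variants: the concrete error is wrapped with `BoxedError` first, then the crate-local context is attached via the snafu-generated selector. The helper name `box_insert_error` is hypothetical, and the bound assumes `BoxedError::new` accepts any `ErrorExt` implementor, as it does elsewhere in the codebase.
use snafu::ResultExt;
// Hypothetical helper (names are illustrative): box the concrete error so this crate's
// error enum never names it directly, then attach the `InsertEvents` context generated
// by `#[snafu]` above. The same shape applies to `KvBackendSnafu`.
fn box_insert_error<T, E>(res: std::result::Result<T, E>) -> Result<T>
where
    E: common_error::ext::ErrorExt + Send + Sync + 'static,
{
    res.map_err(common_error::ext::BoxedError::new)
        .context(InsertEventsSnafu)
}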

View File

@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(duration_constructors)]
pub mod error;
pub mod recorder;

View File

@@ -15,7 +15,7 @@
use std::any::Any;
use std::collections::HashMap;
use std::fmt::Debug;
use std::sync::{Arc, OnceLock};
use std::sync::Arc;
use std::time::Duration;
use api::v1::column_data_type_extension::TypeExt;
@@ -28,6 +28,8 @@ use async_trait::async_trait;
use backon::{BackoffBuilder, ExponentialBuilder};
use common_telemetry::{debug, error, info, warn};
use common_time::timestamp::{TimeUnit, Timestamp};
use humantime::format_duration;
use itertools::Itertools;
use serde::{Deserialize, Serialize};
use store_api::mito_engine_options::{APPEND_MODE_KEY, TTL_KEY};
use tokio::sync::mpsc::{channel, Receiver, Sender};
@@ -50,12 +52,10 @@ pub const EVENTS_TABLE_TIMESTAMP_COLUMN_NAME: &str = "timestamp";
/// EventRecorderRef is the reference to the event recorder.
pub type EventRecorderRef = Arc<dyn EventRecorder>;
static EVENTS_TABLE_TTL: OnceLock<String> = OnceLock::new();
/// The time interval for flushing batched events to the event handler.
pub const DEFAULT_FLUSH_INTERVAL_SECONDS: Duration = Duration::from_secs(5);
// The default TTL for the events table.
const DEFAULT_EVENTS_TABLE_TTL: &str = "30d";
/// The default TTL (90 days) for the events table.
const DEFAULT_EVENTS_TABLE_TTL: Duration = Duration::from_days(90);
// The capacity of the tokio channel for transmitting events to background processor.
const DEFAULT_CHANNEL_SIZE: usize = 2048;
// The size of the buffer for batching events before flushing to event handler.
@@ -72,6 +72,11 @@ const DEFAULT_MAX_RETRY_TIMES: u64 = 3;
///
/// An event can also attach an extra schema and row by overriding the `extra_schema` and `extra_row` methods.
pub trait Event: Send + Sync + Debug {
/// Returns the table name of the event.
fn table_name(&self) -> &str {
DEFAULT_EVENTS_TABLE_NAME
}
/// Returns the type of the event.
fn event_type(&self) -> &str;
@@ -81,7 +86,9 @@ pub trait Event: Send + Sync + Debug {
}
/// Returns the JSON payload of the event. The payload is stored using the JSON type.
fn json_payload(&self) -> Result<String>;
fn json_payload(&self) -> Result<String> {
Ok("".to_string())
}
/// Add the extra schema to the event with the default schema.
fn extra_schema(&self) -> Vec<ColumnSchema> {
@@ -97,88 +104,76 @@ pub trait Event: Send + Sync + Debug {
fn as_any(&self) -> &dyn Any;
}
/// Returns the hints for the insert operation.
pub fn insert_hints() -> Vec<(&'static str, &'static str)> {
vec![
(
TTL_KEY,
EVENTS_TABLE_TTL
.get()
.map(|s| s.as_str())
.unwrap_or(DEFAULT_EVENTS_TABLE_TTL),
),
(APPEND_MODE_KEY, "true"),
]
/// Eventable trait defines the interface for objects that can be converted to [Event].
pub trait Eventable: Send + Sync + Debug {
/// Converts the object to an [Event].
fn to_event(&self) -> Option<Box<dyn Event>> {
None
}
}
/// Builds the row inserts request for the events that will be persisted to the events table.
pub fn build_row_inserts_request(events: &[Box<dyn Event>]) -> Result<RowInsertRequests> {
// Aggregate the events by the event type.
let mut event_groups: HashMap<&str, Vec<&Box<dyn Event>>> = HashMap::new();
/// Groups events by their `event_type`.
#[allow(clippy::borrowed_box)]
pub fn group_events_by_type(events: &[Box<dyn Event>]) -> HashMap<&str, Vec<&Box<dyn Event>>> {
events
.iter()
.into_grouping_map_by(|event| event.event_type())
.collect()
}
/// Builds the row inserts request for the events that will be persisted to the events table. All `events` must share the same event type; otherwise an error is returned.
#[allow(clippy::borrowed_box)]
pub fn build_row_inserts_request(events: &[&Box<dyn Event>]) -> Result<RowInsertRequests> {
// Ensure all the events are the same type.
validate_events(events)?;
// We already validated the events, so it's safe to get the first event to build the schema for the RowInsertRequest.
let event = &events[0];
let mut schema: Vec<ColumnSchema> = Vec::with_capacity(3 + event.extra_schema().len());
schema.extend(vec![
ColumnSchema {
column_name: EVENTS_TABLE_TYPE_COLUMN_NAME.to_string(),
datatype: ColumnDataType::String.into(),
semantic_type: SemanticType::Tag.into(),
..Default::default()
},
ColumnSchema {
column_name: EVENTS_TABLE_PAYLOAD_COLUMN_NAME.to_string(),
datatype: ColumnDataType::Binary as i32,
semantic_type: SemanticType::Field as i32,
datatype_extension: Some(ColumnDataTypeExtension {
type_ext: Some(TypeExt::JsonType(JsonTypeExtension::JsonBinary.into())),
}),
..Default::default()
},
ColumnSchema {
column_name: EVENTS_TABLE_TIMESTAMP_COLUMN_NAME.to_string(),
datatype: ColumnDataType::TimestampNanosecond.into(),
semantic_type: SemanticType::Timestamp.into(),
..Default::default()
},
]);
schema.extend(event.extra_schema());
let mut rows: Vec<Row> = Vec::with_capacity(events.len());
for event in events {
event_groups
.entry(event.event_type())
.or_default()
.push(event);
let extra_row = event.extra_row()?;
let mut values = Vec::with_capacity(3 + extra_row.values.len());
values.extend([
ValueData::StringValue(event.event_type().to_string()).into(),
ValueData::BinaryValue(event.json_payload()?.into_bytes()).into(),
ValueData::TimestampNanosecondValue(event.timestamp().value()).into(),
]);
values.extend(extra_row.values);
rows.push(Row { values });
}
let mut row_insert_requests = RowInsertRequests {
inserts: Vec::with_capacity(event_groups.len()),
};
for (_, events) in event_groups {
validate_events(&events)?;
// We already validated the events, so it's safe to get the first event to build the schema for the RowInsertRequest.
let event = &events[0];
let mut schema = vec![
ColumnSchema {
column_name: EVENTS_TABLE_TYPE_COLUMN_NAME.to_string(),
datatype: ColumnDataType::String.into(),
semantic_type: SemanticType::Tag.into(),
..Default::default()
},
ColumnSchema {
column_name: EVENTS_TABLE_PAYLOAD_COLUMN_NAME.to_string(),
datatype: ColumnDataType::Binary as i32,
semantic_type: SemanticType::Field as i32,
datatype_extension: Some(ColumnDataTypeExtension {
type_ext: Some(TypeExt::JsonType(JsonTypeExtension::JsonBinary.into())),
}),
..Default::default()
},
ColumnSchema {
column_name: EVENTS_TABLE_TIMESTAMP_COLUMN_NAME.to_string(),
datatype: ColumnDataType::TimestampNanosecond.into(),
semantic_type: SemanticType::Timestamp.into(),
..Default::default()
},
];
schema.extend(event.extra_schema());
let rows = events
.iter()
.map(|event| {
let mut row = Row {
values: vec![
ValueData::StringValue(event.event_type().to_string()).into(),
ValueData::BinaryValue(event.json_payload()?.as_bytes().to_vec()).into(),
ValueData::TimestampNanosecondValue(event.timestamp().value()).into(),
],
};
row.values.extend(event.extra_row()?.values);
Ok(row)
})
.collect::<Result<Vec<_>>>()?;
row_insert_requests.inserts.push(RowInsertRequest {
table_name: DEFAULT_EVENTS_TABLE_NAME.to_string(),
Ok(RowInsertRequests {
inserts: vec![RowInsertRequest {
table_name: event.table_name().to_string(),
rows: Some(Rows { schema, rows }),
});
}
Ok(row_insert_requests)
}],
})
}
// Ensure the events with the same event type have the same extra schema.
@@ -199,7 +194,7 @@ fn validate_events(events: &[&Box<dyn Event>]) -> Result<()> {
}
/// EventRecorder trait defines the interface for recording events.
pub trait EventRecorder: Send + Sync + 'static {
pub trait EventRecorder: Send + Sync + Debug + 'static {
/// Records an event for persistence and processing by [EventHandler].
fn record(&self, event: Box<dyn Event>);
@@ -207,6 +202,34 @@ pub trait EventRecorder: Send + Sync + 'static {
fn close(&self);
}
/// EventHandlerOptions is the options for the event handler.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct EventHandlerOptions {
/// TTL for the events table that will be used to store the events.
pub ttl: Duration,
/// Append mode for the events table that will be used to store the events.
pub append_mode: bool,
}
impl Default for EventHandlerOptions {
fn default() -> Self {
Self {
ttl: DEFAULT_EVENTS_TABLE_TTL,
append_mode: true,
}
}
}
impl EventHandlerOptions {
/// Converts the options to the hints for the insert operation.
pub fn to_hints(&self) -> Vec<(&str, String)> {
vec![
(TTL_KEY, format_duration(self.ttl).to_string()),
(APPEND_MODE_KEY, self.append_mode.to_string()),
]
}
}
/// EventHandler trait defines the interface for how to handle the event.
#[async_trait]
pub trait EventHandler: Send + Sync + 'static {
@@ -219,18 +242,20 @@ pub trait EventHandler: Send + Sync + 'static {
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct EventRecorderOptions {
/// TTL for the events table that will be used to store the events.
pub ttl: String,
#[serde(with = "humantime_serde")]
pub ttl: Duration,
}
impl Default for EventRecorderOptions {
fn default() -> Self {
Self {
ttl: DEFAULT_EVENTS_TABLE_TTL.to_string(),
ttl: DEFAULT_EVENTS_TABLE_TTL,
}
}
}
/// Implementation of [EventRecorder] that records the events and processes them in the background by the [EventHandler].
#[derive(Debug)]
pub struct EventRecorderImpl {
// The channel to send the events to the background processor.
tx: Sender<Box<dyn Event>>,
@@ -241,9 +266,7 @@ pub struct EventRecorderImpl {
}
impl EventRecorderImpl {
pub fn new(event_handler: Box<dyn EventHandler>, opts: EventRecorderOptions) -> Self {
info!("Creating event recorder with options: {:?}", opts);
pub fn new(event_handler: Box<dyn EventHandler>) -> Self {
let (tx, rx) = channel(DEFAULT_CHANNEL_SIZE);
let cancel_token = CancellationToken::new();
@@ -268,14 +291,6 @@ impl EventRecorderImpl {
recorder.handle = Some(handle);
// It only sets the ttl once, so it's safe to skip the error.
if EVENTS_TABLE_TTL.set(opts.ttl.clone()).is_err() {
info!(
"Events table ttl already set to {}, skip setting it",
opts.ttl
);
}
recorder
}
}
@@ -460,10 +475,7 @@ mod tests {
#[tokio::test]
async fn test_event_recorder() {
let mut event_recorder = EventRecorderImpl::new(
Box::new(TestEventHandlerImpl {}),
EventRecorderOptions::default(),
);
let mut event_recorder = EventRecorderImpl::new(Box::new(TestEventHandlerImpl {}));
event_recorder.record(Box::new(TestEvent {}));
// Sleep for a while to let the event be sent to the event handler.
@@ -504,10 +516,8 @@ mod tests {
#[tokio::test]
async fn test_event_recorder_should_panic() {
let mut event_recorder = EventRecorderImpl::new(
Box::new(TestEventHandlerImplShouldPanic {}),
EventRecorderOptions::default(),
);
let mut event_recorder =
EventRecorderImpl::new(Box::new(TestEventHandlerImplShouldPanic {}));
event_recorder.record(Box::new(TestEvent {}));
@@ -524,4 +534,135 @@ mod tests {
assert!(handle.await.unwrap_err().is_panic());
}
}
#[derive(Debug)]
struct TestEventA {}
impl Event for TestEventA {
fn event_type(&self) -> &str {
"A"
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[derive(Debug)]
struct TestEventB {}
impl Event for TestEventB {
fn table_name(&self) -> &str {
"table_B"
}
fn event_type(&self) -> &str {
"B"
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[derive(Debug)]
struct TestEventC {}
impl Event for TestEventC {
fn table_name(&self) -> &str {
"table_C"
}
fn event_type(&self) -> &str {
"C"
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[test]
fn test_group_events_by_type() {
let events: Vec<Box<dyn Event>> = vec![
Box::new(TestEventA {}),
Box::new(TestEventB {}),
Box::new(TestEventA {}),
Box::new(TestEventC {}),
Box::new(TestEventB {}),
Box::new(TestEventC {}),
Box::new(TestEventA {}),
];
let event_groups = group_events_by_type(&events);
assert_eq!(event_groups.len(), 3);
assert_eq!(event_groups.get("A").unwrap().len(), 3);
assert_eq!(event_groups.get("B").unwrap().len(), 2);
assert_eq!(event_groups.get("C").unwrap().len(), 2);
}
#[test]
fn test_build_row_inserts_request() {
let events: Vec<Box<dyn Event>> = vec![
Box::new(TestEventA {}),
Box::new(TestEventB {}),
Box::new(TestEventA {}),
Box::new(TestEventC {}),
Box::new(TestEventB {}),
Box::new(TestEventC {}),
Box::new(TestEventA {}),
];
let event_groups = group_events_by_type(&events);
assert_eq!(event_groups.len(), 3);
assert_eq!(event_groups.get("A").unwrap().len(), 3);
assert_eq!(event_groups.get("B").unwrap().len(), 2);
assert_eq!(event_groups.get("C").unwrap().len(), 2);
for (event_type, events) in event_groups {
let row_inserts_request = build_row_inserts_request(&events).unwrap();
if event_type == "A" {
assert_eq!(row_inserts_request.inserts.len(), 1);
assert_eq!(
row_inserts_request.inserts[0].table_name,
DEFAULT_EVENTS_TABLE_NAME
);
assert_eq!(
row_inserts_request.inserts[0]
.rows
.as_ref()
.unwrap()
.rows
.len(),
3
);
} else if event_type == "B" {
assert_eq!(row_inserts_request.inserts.len(), 1);
assert_eq!(row_inserts_request.inserts[0].table_name, "table_B");
assert_eq!(
row_inserts_request.inserts[0]
.rows
.as_ref()
.unwrap()
.rows
.len(),
2
);
} else if event_type == "C" {
assert_eq!(row_inserts_request.inserts.len(), 1);
assert_eq!(row_inserts_request.inserts[0].table_name, "table_C");
assert_eq!(
row_inserts_request.inserts[0]
.rows
.as_ref()
.unwrap()
.rows
.len(),
2
);
} else {
panic!("Unexpected event type: {}", event_type);
}
}
}
}
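Taken together, the new defaults (`table_name`, `json_payload`) and the slimmer `EventRecorderImpl::new` keep a custom event small. A minimal sketch follows, not part of this change: `DemoEvent` and `demo` are hypothetical names, the handler is assumed to persist batches via `group_events_by_type` + `build_row_inserts_request` using `EventHandlerOptions::default().to_hints()`, and the serde round-trip assumes `humantime_serde` accepts strings such as "90days".
// (relies on the module's existing `Any` / `Duration` imports)
#[derive(Debug)]
struct DemoEvent;
impl Event for DemoEvent {
    // `table_name()` falls back to DEFAULT_EVENTS_TABLE_NAME and `json_payload()` to "",
    // so only the event type and the downcast hook need to be provided.
    fn event_type(&self) -> &str {
        "demo"
    }
    fn as_any(&self) -> &dyn Any {
        self
    }
}
fn demo(handler: Box<dyn EventHandler>) {
    // The recorder now owns its handler directly; TTL/append-mode hints come from
    // EventHandlerOptions on the handler side instead of a global OnceLock.
    let recorder = EventRecorderImpl::new(handler);
    EventRecorder::record(&recorder, Box::new(DemoEvent));
    // `ttl` is (de)serialized through humantime_serde, so configs can use
    // human-readable durations.
    let opts: EventRecorderOptions = serde_json::from_str(r#"{ "ttl": "90days" }"#).unwrap();
    assert_eq!(opts.ttl, Duration::from_days(90));
}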

View File

@@ -5,13 +5,19 @@ edition.workspace = true
license.workspace = true
[dependencies]
api.workspace = true
async-trait.workspace = true
common-error.workspace = true
common-event-recorder.workspace = true
common-grpc.workspace = true
common-macro.workspace = true
common-meta.workspace = true
common-time.workspace = true
greptime-proto.workspace = true
humantime.workspace = true
meta-client.workspace = true
serde.workspace = true
session.workspace = true
snafu.workspace = true
tonic.workspace = true

View File

@@ -19,6 +19,7 @@ use snafu::OptionExt;
pub mod error;
pub mod selector;
pub mod slow_query_event;
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct DisplayProcessId {

View File

@@ -0,0 +1,132 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use api::v1::value::ValueData;
use api::v1::{ColumnDataType, ColumnSchema, Row, SemanticType};
use common_event_recorder::error::Result;
use common_event_recorder::Event;
use serde::Serialize;
pub const SLOW_QUERY_TABLE_NAME: &str = "slow_queries";
pub const SLOW_QUERY_TABLE_COST_COLUMN_NAME: &str = "cost";
pub const SLOW_QUERY_TABLE_THRESHOLD_COLUMN_NAME: &str = "threshold";
pub const SLOW_QUERY_TABLE_QUERY_COLUMN_NAME: &str = "query";
pub const SLOW_QUERY_TABLE_TIMESTAMP_COLUMN_NAME: &str = "timestamp";
pub const SLOW_QUERY_TABLE_IS_PROMQL_COLUMN_NAME: &str = "is_promql";
pub const SLOW_QUERY_TABLE_PROMQL_START_COLUMN_NAME: &str = "promql_start";
pub const SLOW_QUERY_TABLE_PROMQL_END_COLUMN_NAME: &str = "promql_end";
pub const SLOW_QUERY_TABLE_PROMQL_RANGE_COLUMN_NAME: &str = "promql_range";
pub const SLOW_QUERY_TABLE_PROMQL_STEP_COLUMN_NAME: &str = "promql_step";
pub const SLOW_QUERY_EVENT_TYPE: &str = "slow_query";
/// SlowQueryEvent is the event recorded for a slow query.
#[derive(Debug, Serialize)]
pub struct SlowQueryEvent {
pub cost: u64,
pub threshold: u64,
pub query: String,
pub is_promql: bool,
pub promql_range: Option<u64>,
pub promql_step: Option<u64>,
pub promql_start: Option<i64>,
pub promql_end: Option<i64>,
}
impl Event for SlowQueryEvent {
fn table_name(&self) -> &str {
SLOW_QUERY_TABLE_NAME
}
fn event_type(&self) -> &str {
SLOW_QUERY_EVENT_TYPE
}
fn extra_schema(&self) -> Vec<ColumnSchema> {
vec![
ColumnSchema {
column_name: SLOW_QUERY_TABLE_COST_COLUMN_NAME.to_string(),
datatype: ColumnDataType::Uint64.into(),
semantic_type: SemanticType::Field.into(),
..Default::default()
},
ColumnSchema {
column_name: SLOW_QUERY_TABLE_THRESHOLD_COLUMN_NAME.to_string(),
datatype: ColumnDataType::Uint64.into(),
semantic_type: SemanticType::Field.into(),
..Default::default()
},
ColumnSchema {
column_name: SLOW_QUERY_TABLE_QUERY_COLUMN_NAME.to_string(),
datatype: ColumnDataType::String.into(),
semantic_type: SemanticType::Field.into(),
..Default::default()
},
ColumnSchema {
column_name: SLOW_QUERY_TABLE_IS_PROMQL_COLUMN_NAME.to_string(),
datatype: ColumnDataType::Boolean.into(),
semantic_type: SemanticType::Field.into(),
..Default::default()
},
ColumnSchema {
column_name: SLOW_QUERY_TABLE_PROMQL_RANGE_COLUMN_NAME.to_string(),
datatype: ColumnDataType::Uint64.into(),
semantic_type: SemanticType::Field.into(),
..Default::default()
},
ColumnSchema {
column_name: SLOW_QUERY_TABLE_PROMQL_STEP_COLUMN_NAME.to_string(),
datatype: ColumnDataType::Uint64.into(),
semantic_type: SemanticType::Field.into(),
..Default::default()
},
ColumnSchema {
column_name: SLOW_QUERY_TABLE_PROMQL_START_COLUMN_NAME.to_string(),
datatype: ColumnDataType::TimestampMillisecond.into(),
semantic_type: SemanticType::Field.into(),
..Default::default()
},
ColumnSchema {
column_name: SLOW_QUERY_TABLE_PROMQL_END_COLUMN_NAME.to_string(),
datatype: ColumnDataType::TimestampMillisecond.into(),
semantic_type: SemanticType::Field.into(),
..Default::default()
},
]
}
fn extra_row(&self) -> Result<Row> {
Ok(Row {
values: vec![
ValueData::U64Value(self.cost).into(),
ValueData::U64Value(self.threshold).into(),
ValueData::StringValue(self.query.to_string()).into(),
ValueData::BoolValue(self.is_promql).into(),
ValueData::U64Value(self.promql_range.unwrap_or(0)).into(),
ValueData::U64Value(self.promql_step.unwrap_or(0)).into(),
ValueData::TimestampMillisecondValue(self.promql_start.unwrap_or(0)).into(),
ValueData::TimestampMillisecondValue(self.promql_end.unwrap_or(0)).into(),
],
})
}
fn json_payload(&self) -> Result<String> {
Ok("".to_string())
}
fn as_any(&self) -> &dyn Any {
self
}
}

View File

@@ -66,6 +66,6 @@ wkt = { version = "0.11", optional = true }
[dev-dependencies]
approx = "0.5"
futures.workspace = true
pretty_assertions = "1.4.0"
pretty_assertions.workspace = true
serde = { version = "1.0", features = ["derive"] }
tokio.workspace = true

View File

@@ -16,32 +16,39 @@ mod add_region_follower;
mod flush_compact_region;
mod flush_compact_table;
mod migrate_region;
mod reconcile_catalog;
mod reconcile_database;
mod reconcile_table;
mod remove_region_follower;
use std::sync::Arc;
use add_region_follower::AddRegionFollowerFunction;
use flush_compact_region::{CompactRegionFunction, FlushRegionFunction};
use flush_compact_table::{CompactTableFunction, FlushTableFunction};
use migrate_region::MigrateRegionFunction;
use reconcile_catalog::ReconcileCatalogFunction;
use reconcile_database::ReconcileDatabaseFunction;
use reconcile_table::ReconcileTableFunction;
use remove_region_follower::RemoveRegionFollowerFunction;
use crate::flush_flow::FlushFlowFunction;
use crate::function_registry::FunctionRegistry;
/// Table functions
/// Administration functions
pub(crate) struct AdminFunction;
impl AdminFunction {
/// Register all table functions to [`FunctionRegistry`].
/// Register all admin functions to [`FunctionRegistry`].
pub fn register(registry: &FunctionRegistry) {
registry.register_async(Arc::new(MigrateRegionFunction));
registry.register_async(Arc::new(AddRegionFollowerFunction));
registry.register_async(Arc::new(RemoveRegionFollowerFunction));
registry.register_async(Arc::new(FlushRegionFunction));
registry.register_async(Arc::new(CompactRegionFunction));
registry.register_async(Arc::new(FlushTableFunction));
registry.register_async(Arc::new(CompactTableFunction));
registry.register_async(Arc::new(FlushFlowFunction));
registry.register(MigrateRegionFunction::factory());
registry.register(AddRegionFollowerFunction::factory());
registry.register(RemoveRegionFollowerFunction::factory());
registry.register(FlushRegionFunction::factory());
registry.register(CompactRegionFunction::factory());
registry.register(FlushTableFunction::factory());
registry.register(CompactTableFunction::factory());
registry.register(FlushFlowFunction::factory());
registry.register(ReconcileCatalogFunction::factory());
registry.register(ReconcileDatabaseFunction::factory());
registry.register(ReconcileTableFunction::factory());
}
}

View File

@@ -18,7 +18,8 @@ use common_query::error::{
InvalidFuncArgsSnafu, MissingProcedureServiceHandlerSnafu, Result,
UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::{Signature, TypeSignature, Volatility};
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::data_type::DataType;
use datatypes::prelude::ConcreteDataType;
use datatypes::value::{Value, ValueRef};
use session::context::QueryContextRef;
@@ -82,7 +83,13 @@ fn signature() -> Signature {
Signature::one_of(
vec![
// add_region_follower(region_id, peer)
TypeSignature::Uniform(2, ConcreteDataType::numerics()),
TypeSignature::Uniform(
2,
ConcreteDataType::numerics()
.into_iter()
.map(|dt| dt.as_arrow_type())
.collect(),
),
],
Volatility::Immutable,
)
@@ -92,38 +99,57 @@ fn signature() -> Signature {
mod tests {
use std::sync::Arc;
use common_query::prelude::TypeSignature;
use datatypes::vectors::{UInt64Vector, VectorRef};
use arrow::array::UInt64Array;
use arrow::datatypes::{DataType, Field};
use datafusion_expr::ColumnarValue;
use super::*;
use crate::function::{AsyncFunction, FunctionContext};
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
#[test]
fn test_add_region_follower_misc() {
let f = AddRegionFollowerFunction;
let factory: ScalarFunctionFactory = AddRegionFollowerFunction::factory().into();
let f = factory.provide(FunctionContext::mock());
assert_eq!("add_region_follower", f.name());
assert_eq!(
ConcreteDataType::uint64_datatype(),
f.return_type(&[]).unwrap()
);
assert_eq!(DataType::UInt64, f.return_type(&[]).unwrap());
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::OneOf(sigs),
volatility: Volatility::Immutable
datafusion_expr::Signature {
type_signature: datafusion_expr::TypeSignature::OneOf(sigs),
volatility: datafusion_expr::Volatility::Immutable
} if sigs.len() == 1));
}
#[tokio::test]
async fn test_add_region_follower() {
let f = AddRegionFollowerFunction;
let args = vec![1, 1];
let args = args
.into_iter()
.map(|arg| Arc::new(UInt64Vector::from_slice([arg])) as _)
.collect::<Vec<_>>();
let factory: ScalarFunctionFactory = AddRegionFollowerFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let result = f.eval(FunctionContext::mock(), &args).await.unwrap();
let expect: VectorRef = Arc::new(UInt64Vector::from_slice([0u64]));
assert_eq!(result, expect);
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![2]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::UInt64, false)),
Arc::new(Field::new("arg_1", DataType::UInt64, false)),
],
return_field: Arc::new(Field::new("result", DataType::UInt64, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<UInt64Array>().unwrap();
assert_eq!(result_array.value(0), 0u64);
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(scalar, datafusion_common::ScalarValue::UInt64(Some(0)));
}
}
}
}

View File

@@ -16,7 +16,8 @@ use common_macro::admin_fn;
use common_query::error::{
InvalidFuncArgsSnafu, MissingTableMutationHandlerSnafu, Result, UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::{Signature, Volatility};
use datafusion_expr::{Signature, Volatility};
use datatypes::data_type::DataType;
use datatypes::prelude::*;
use session::context::QueryContextRef;
use snafu::ensure;
@@ -66,71 +67,99 @@ define_region_function!(FlushRegionFunction, flush_region, flush_region);
define_region_function!(CompactRegionFunction, compact_region, compact_region);
fn signature() -> Signature {
Signature::uniform(1, ConcreteDataType::numerics(), Volatility::Immutable)
Signature::uniform(
1,
ConcreteDataType::numerics()
.into_iter()
.map(|dt| dt.as_arrow_type())
.collect(),
Volatility::Immutable,
)
}
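The same `ConcreteDataType::numerics()` to arrow `DataType` mapping recurs in several of these signatures (add_region_follower, migrate_region, the reconcile functions). For readability it is equivalent to a small helper like the one below; this is only a sketch, and `numeric_arrow_types` is a hypothetical name, not part of the change.
// Hypothetical helper: collect the arrow types corresponding to all numeric
// ConcreteDataTypes, for use in DataFusion signatures.
fn numeric_arrow_types() -> Vec<arrow::datatypes::DataType> {
    use datatypes::data_type::DataType as _;
    ConcreteDataType::numerics()
        .into_iter()
        .map(|dt| dt.as_arrow_type())
        .collect()
}
// e.g. Signature::uniform(1, numeric_arrow_types(), Volatility::Immutable)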
#[cfg(test)]
mod tests {
use std::sync::Arc;
use common_query::prelude::TypeSignature;
use datatypes::vectors::UInt64Vector;
use arrow::array::UInt64Array;
use arrow::datatypes::{DataType, Field};
use datafusion_expr::ColumnarValue;
use super::*;
use crate::function::{AsyncFunction, FunctionContext};
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
macro_rules! define_region_function_test {
($name: ident, $func: ident) => {
paste::paste! {
#[test]
fn [<test_ $name _misc>]() {
let f = $func;
let factory: ScalarFunctionFactory = $func::factory().into();
let f = factory.provide(FunctionContext::mock());
assert_eq!(stringify!($name), f.name());
assert_eq!(
ConcreteDataType::uint64_datatype(),
DataType::UInt64,
f.return_type(&[]).unwrap()
);
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::Uniform(1, valid_types),
volatility: Volatility::Immutable
} if valid_types == ConcreteDataType::numerics()));
datafusion_expr::Signature {
type_signature: datafusion_expr::TypeSignature::Uniform(1, valid_types),
volatility: datafusion_expr::Volatility::Immutable
} if valid_types == &ConcreteDataType::numerics().into_iter().map(|dt| { use datatypes::data_type::DataType; dt.as_arrow_type() }).collect::<Vec<_>>()));
}
#[tokio::test]
async fn [<test_ $name _missing_table_mutation>]() {
let f = $func;
let factory: ScalarFunctionFactory = $func::factory().into();
let provider = factory.provide(FunctionContext::default());
let f = provider.as_async().unwrap();
let args = vec![99];
let args = args
.into_iter()
.map(|arg| Arc::new(UInt64Vector::from_slice([arg])) as _)
.collect::<Vec<_>>();
let result = f.eval(FunctionContext::default(), &args).await.unwrap_err();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![99]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::UInt64, false)),
],
return_field: Arc::new(Field::new("result", DataType::UInt64, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap_err();
assert_eq!(
"Missing TableMutationHandler, not expected",
"Execution error: Handler error: Missing TableMutationHandler, not expected",
result.to_string()
);
}
#[tokio::test]
async fn [<test_ $name>]() {
let f = $func;
let factory: ScalarFunctionFactory = $func::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![99]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::UInt64, false)),
],
return_field: Arc::new(Field::new("result", DataType::UInt64, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
let args = vec![99];
let args = args
.into_iter()
.map(|arg| Arc::new(UInt64Vector::from_slice([arg])) as _)
.collect::<Vec<_>>();
let result = f.eval(FunctionContext::mock(), &args).await.unwrap();
let expect: VectorRef = Arc::new(UInt64Vector::from_slice([42]));
assert_eq!(expect, result);
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<UInt64Array>().unwrap();
assert_eq!(result_array.value(0), 42u64);
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(scalar, datafusion_common::ScalarValue::UInt64(Some(42)));
}
}
}
}
};

View File

@@ -15,14 +15,15 @@
use std::str::FromStr;
use api::v1::region::{compact_request, StrictWindow};
use arrow::datatypes::DataType as ArrowDataType;
use common_error::ext::BoxedError;
use common_macro::admin_fn;
use common_query::error::{
InvalidFuncArgsSnafu, MissingTableMutationHandlerSnafu, Result, TableMutationSnafu,
UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::{Signature, Volatility};
use common_telemetry::info;
use datafusion_expr::{Signature, Volatility};
use datatypes::prelude::*;
use session::context::QueryContextRef;
use session::table_name::table_name_to_full_name;
@@ -105,18 +106,11 @@ pub(crate) async fn compact_table(
}
fn flush_signature() -> Signature {
Signature::uniform(
1,
vec![ConcreteDataType::string_datatype()],
Volatility::Immutable,
)
Signature::uniform(1, vec![ArrowDataType::Utf8], Volatility::Immutable)
}
fn compact_signature() -> Signature {
Signature::variadic(
vec![ConcreteDataType::string_datatype()],
Volatility::Immutable,
)
Signature::variadic(vec![ArrowDataType::Utf8], Volatility::Immutable)
}
/// Parses `compact_table` UDF parameters. This function accepts the following combinations:
@@ -204,66 +198,87 @@ mod tests {
use std::sync::Arc;
use api::v1::region::compact_request::Options;
use arrow::array::StringArray;
use arrow::datatypes::{DataType, Field};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_query::prelude::TypeSignature;
use datatypes::vectors::{StringVector, UInt64Vector};
use datafusion_expr::ColumnarValue;
use session::context::QueryContext;
use super::*;
use crate::function::{AsyncFunction, FunctionContext};
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
macro_rules! define_table_function_test {
($name: ident, $func: ident) => {
paste::paste!{
#[test]
fn [<test_ $name _misc>]() {
let f = $func;
let factory: ScalarFunctionFactory = $func::factory().into();
let f = factory.provide(FunctionContext::mock());
assert_eq!(stringify!($name), f.name());
assert_eq!(
ConcreteDataType::uint64_datatype(),
DataType::UInt64,
f.return_type(&[]).unwrap()
);
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::Uniform(1, valid_types),
volatility: Volatility::Immutable
} if valid_types == vec![ConcreteDataType::string_datatype()]));
datafusion_expr::Signature {
type_signature: datafusion_expr::TypeSignature::Uniform(1, valid_types),
volatility: datafusion_expr::Volatility::Immutable
} if valid_types == &vec![ArrowDataType::Utf8]));
}
#[tokio::test]
async fn [<test_ $name _missing_table_mutation>]() {
let f = $func;
let factory: ScalarFunctionFactory = $func::factory().into();
let provider = factory.provide(FunctionContext::default());
let f = provider.as_async().unwrap();
let args = vec!["test"];
let args = args
.into_iter()
.map(|arg| Arc::new(StringVector::from(vec![arg])) as _)
.collect::<Vec<_>>();
let result = f.eval(FunctionContext::default(), &args).await.unwrap_err();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["test"]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
],
return_field: Arc::new(Field::new("result", DataType::UInt64, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap_err();
assert_eq!(
"Missing TableMutationHandler, not expected",
"Execution error: Handler error: Missing TableMutationHandler, not expected",
result.to_string()
);
}
#[tokio::test]
async fn [<test_ $name>]() {
let f = $func;
let factory: ScalarFunctionFactory = $func::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["test"]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
],
return_field: Arc::new(Field::new("result", DataType::UInt64, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
let args = vec!["test"];
let args = args
.into_iter()
.map(|arg| Arc::new(StringVector::from(vec![arg])) as _)
.collect::<Vec<_>>();
let result = f.eval(FunctionContext::mock(), &args).await.unwrap();
let expect: VectorRef = Arc::new(UInt64Vector::from_slice([42]));
assert_eq!(expect, result);
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<arrow::array::UInt64Array>().unwrap();
assert_eq!(result_array.value(0), 42u64);
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(scalar, datafusion_common::ScalarValue::UInt64(Some(42)));
}
}
}
}
}

View File

@@ -17,7 +17,8 @@ use std::time::Duration;
use common_macro::admin_fn;
use common_meta::rpc::procedure::MigrateRegionRequest;
use common_query::error::{InvalidFuncArgsSnafu, MissingProcedureServiceHandlerSnafu, Result};
use common_query::prelude::{Signature, TypeSignature, Volatility};
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::data_type::DataType;
use datatypes::prelude::ConcreteDataType;
use datatypes::value::{Value, ValueRef};
use session::context::QueryContextRef;
@@ -103,9 +104,21 @@ fn signature() -> Signature {
Signature::one_of(
vec![
// migrate_region(region_id, from_peer, to_peer)
TypeSignature::Uniform(3, ConcreteDataType::numerics()),
TypeSignature::Uniform(
3,
ConcreteDataType::numerics()
.into_iter()
.map(|dt| dt.as_arrow_type())
.collect(),
),
// migrate_region(region_id, from_peer, to_peer, timeout(secs))
TypeSignature::Uniform(4, ConcreteDataType::numerics()),
TypeSignature::Uniform(
4,
ConcreteDataType::numerics()
.into_iter()
.map(|dt| dt.as_arrow_type())
.collect(),
),
],
Volatility::Immutable,
)
@@ -115,59 +128,89 @@ fn signature() -> Signature {
mod tests {
use std::sync::Arc;
use common_query::prelude::TypeSignature;
use datatypes::vectors::{StringVector, UInt64Vector, VectorRef};
use arrow::array::{StringArray, UInt64Array};
use arrow::datatypes::{DataType, Field};
use datafusion_expr::ColumnarValue;
use super::*;
use crate::function::{AsyncFunction, FunctionContext};
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
#[test]
fn test_migrate_region_misc() {
let f = MigrateRegionFunction;
let factory: ScalarFunctionFactory = MigrateRegionFunction::factory().into();
let f = factory.provide(FunctionContext::mock());
assert_eq!("migrate_region", f.name());
assert_eq!(
ConcreteDataType::string_datatype(),
f.return_type(&[]).unwrap()
);
assert_eq!(DataType::Utf8, f.return_type(&[]).unwrap());
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::OneOf(sigs),
volatility: Volatility::Immutable
datafusion_expr::Signature {
type_signature: datafusion_expr::TypeSignature::OneOf(sigs),
volatility: datafusion_expr::Volatility::Immutable
} if sigs.len() == 2));
}
#[tokio::test]
async fn test_missing_procedure_service() {
let f = MigrateRegionFunction;
let factory: ScalarFunctionFactory = MigrateRegionFunction::factory().into();
let provider = factory.provide(FunctionContext::default());
let f = provider.as_async().unwrap();
let args = vec![1, 1, 1];
let args = args
.into_iter()
.map(|arg| Arc::new(UInt64Vector::from_slice([arg])) as _)
.collect::<Vec<_>>();
let result = f.eval(FunctionContext::default(), &args).await.unwrap_err();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::UInt64, false)),
Arc::new(Field::new("arg_1", DataType::UInt64, false)),
Arc::new(Field::new("arg_2", DataType::UInt64, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap_err();
assert_eq!(
"Missing ProcedureServiceHandler, not expected",
"Execution error: Handler error: Missing ProcedureServiceHandler, not expected",
result.to_string()
);
}
#[tokio::test]
async fn test_migrate_region() {
let f = MigrateRegionFunction;
let factory: ScalarFunctionFactory = MigrateRegionFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let args = vec![1, 1, 1];
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::UInt64, false)),
Arc::new(Field::new("arg_1", DataType::UInt64, false)),
Arc::new(Field::new("arg_2", DataType::UInt64, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
let args = args
.into_iter()
.map(|arg| Arc::new(UInt64Vector::from_slice([arg])) as _)
.collect::<Vec<_>>();
let result = f.eval(FunctionContext::mock(), &args).await.unwrap();
let expect: VectorRef = Arc::new(StringVector::from(vec!["test_pid"]));
assert_eq!(expect, result);
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<StringArray>().unwrap();
assert_eq!(result_array.value(0), "test_pid");
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(
scalar,
datafusion_common::ScalarValue::Utf8(Some("test_pid".to_string()))
);
}
}
}
}

View File

@@ -0,0 +1,270 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::meta::reconcile_request::Target;
use api::v1::meta::{ReconcileCatalog, ReconcileRequest};
use arrow::datatypes::DataType as ArrowDataType;
use common_macro::admin_fn;
use common_query::error::{
InvalidFuncArgsSnafu, MissingProcedureServiceHandlerSnafu, Result,
UnsupportedInputDataTypeSnafu,
};
use common_telemetry::info;
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::data_type::DataType;
use datatypes::prelude::*;
use session::context::QueryContextRef;
use crate::handlers::ProcedureServiceHandlerRef;
use crate::helper::{
cast_u32, default_parallelism, default_resolve_strategy, get_string_from_params,
parse_resolve_strategy,
};
const FN_NAME: &str = "reconcile_catalog";
/// A function to reconcile a catalog.
/// Returns the procedure id on success.
///
/// - `reconcile_catalog()`.
/// - `reconcile_catalog(resolve_strategy)`.
/// - `reconcile_catalog(resolve_strategy, parallelism)`.
#[admin_fn(
name = ReconcileCatalogFunction,
display_name = reconcile_catalog,
sig_fn = signature,
ret = string
)]
pub(crate) async fn reconcile_catalog(
procedure_service_handler: &ProcedureServiceHandlerRef,
query_ctx: &QueryContextRef,
params: &[ValueRef<'_>],
) -> Result<Value> {
let (resolve_strategy, parallelism) = match params.len() {
0 => (default_resolve_strategy(), default_parallelism()),
1 => (
parse_resolve_strategy(get_string_from_params(params, 0, FN_NAME)?)?,
default_parallelism(),
),
2 => {
let Some(parallelism) = cast_u32(&params[1])? else {
return UnsupportedInputDataTypeSnafu {
function: FN_NAME,
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
(
parse_resolve_strategy(get_string_from_params(params, 0, FN_NAME)?)?,
parallelism,
)
}
size => {
return InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 0, 1 or 2, have: {}",
size
),
}
.fail();
}
};
info!(
"Reconciling catalog with resolve_strategy: {:?}, parallelism: {}",
resolve_strategy, parallelism
);
let pid = procedure_service_handler
.reconcile(ReconcileRequest {
target: Some(Target::ReconcileCatalog(ReconcileCatalog {
catalog_name: query_ctx.current_catalog().to_string(),
parallelism,
resolve_strategy: resolve_strategy as i32,
})),
..Default::default()
})
.await?;
match pid {
Some(pid) => Ok(Value::from(pid)),
None => Ok(Value::Null),
}
}
fn signature() -> Signature {
let nums = ConcreteDataType::numerics();
let mut signs = Vec::with_capacity(2 + nums.len());
signs.extend([
// reconcile_catalog()
TypeSignature::Nullary,
// reconcile_catalog(resolve_strategy)
TypeSignature::Exact(vec![ArrowDataType::Utf8]),
]);
for sign in nums {
// reconcile_catalog(resolve_strategy, parallelism)
signs.push(TypeSignature::Exact(vec![
ArrowDataType::Utf8,
sign.as_arrow_type(),
]));
}
Signature::one_of(signs, Volatility::Immutable)
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use arrow::array::{StringArray, UInt64Array};
use arrow::datatypes::{DataType, Field};
use datafusion_expr::ColumnarValue;
use crate::admin::reconcile_catalog::ReconcileCatalogFunction;
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
#[tokio::test]
async fn test_reconcile_catalog() {
common_telemetry::init_default_ut_logging();
// reconcile_catalog()
let factory: ScalarFunctionFactory = ReconcileCatalogFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![],
arg_fields: vec![],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<StringArray>().unwrap();
assert_eq!(result_array.value(0), "test_pid");
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(
scalar,
datafusion_common::ScalarValue::Utf8(Some("test_pid".to_string()))
);
}
}
// reconcile_catalog(resolve_strategy)
let factory: ScalarFunctionFactory = ReconcileCatalogFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![ColumnarValue::Array(Arc::new(StringArray::from(vec![
"UseMetasrv",
])))],
arg_fields: vec![Arc::new(Field::new("arg_0", DataType::Utf8, false))],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<StringArray>().unwrap();
assert_eq!(result_array.value(0), "test_pid");
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(
scalar,
datafusion_common::ScalarValue::Utf8(Some("test_pid".to_string()))
);
}
}
// reconcile_catalog(resolve_strategy, parallelism)
let factory: ScalarFunctionFactory = ReconcileCatalogFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["UseLatest"]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![10]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
Arc::new(Field::new("arg_1", DataType::UInt64, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<StringArray>().unwrap();
assert_eq!(result_array.value(0), "test_pid");
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(
scalar,
datafusion_common::ScalarValue::Utf8(Some("test_pid".to_string()))
);
}
}
// unsupported input data type
let factory: ScalarFunctionFactory = ReconcileCatalogFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["UseLatest"]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["test"]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
Arc::new(Field::new("arg_1", DataType::Utf8, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let _err = f.invoke_async_with_args(func_args).await.unwrap_err();
// Note: Error type is DataFusionError at this level, not common_query::Error
// invalid function args
let factory: ScalarFunctionFactory = ReconcileCatalogFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["UseLatest"]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![10]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["10"]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
Arc::new(Field::new("arg_1", DataType::UInt64, false)),
Arc::new(Field::new("arg_2", DataType::Utf8, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let _err = f.invoke_async_with_args(func_args).await.unwrap_err();
// Note: Error type is DataFusionError at this level, not common_query::Error
}
}

View File

@@ -0,0 +1,291 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::meta::reconcile_request::Target;
use api::v1::meta::{ReconcileDatabase, ReconcileRequest};
use arrow::datatypes::DataType as ArrowDataType;
use common_macro::admin_fn;
use common_query::error::{
InvalidFuncArgsSnafu, MissingProcedureServiceHandlerSnafu, Result,
UnsupportedInputDataTypeSnafu,
};
use common_telemetry::info;
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::data_type::DataType;
use datatypes::prelude::*;
use session::context::QueryContextRef;
use crate::handlers::ProcedureServiceHandlerRef;
use crate::helper::{
cast_u32, default_parallelism, default_resolve_strategy, get_string_from_params,
parse_resolve_strategy,
};
const FN_NAME: &str = "reconcile_database";
/// A function to reconcile a database.
/// Returns the procedure id on success.
///
/// - `reconcile_database(database_name)`.
/// - `reconcile_database(database_name, resolve_strategy)`.
/// - `reconcile_database(database_name, resolve_strategy, parallelism)`.
///
/// The parameters:
/// - `database_name`: the database name
#[admin_fn(
name = ReconcileDatabaseFunction,
display_name = reconcile_database,
sig_fn = signature,
ret = string
)]
pub(crate) async fn reconcile_database(
procedure_service_handler: &ProcedureServiceHandlerRef,
query_ctx: &QueryContextRef,
params: &[ValueRef<'_>],
) -> Result<Value> {
let (database_name, resolve_strategy, parallelism) = match params.len() {
1 => (
get_string_from_params(params, 0, FN_NAME)?,
default_resolve_strategy(),
default_parallelism(),
),
2 => (
get_string_from_params(params, 0, FN_NAME)?,
parse_resolve_strategy(get_string_from_params(params, 1, FN_NAME)?)?,
default_parallelism(),
),
3 => {
let Some(parallelism) = cast_u32(&params[2])? else {
return UnsupportedInputDataTypeSnafu {
function: FN_NAME,
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
(
get_string_from_params(params, 0, FN_NAME)?,
parse_resolve_strategy(get_string_from_params(params, 1, FN_NAME)?)?,
parallelism,
)
}
size => {
return InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 1, 2 or 3, have: {}",
size
),
}
.fail();
}
};
info!(
"Reconciling database: {}, resolve_strategy: {:?}, parallelism: {}",
database_name, resolve_strategy, parallelism
);
let pid = procedure_service_handler
.reconcile(ReconcileRequest {
target: Some(Target::ReconcileDatabase(ReconcileDatabase {
catalog_name: query_ctx.current_catalog().to_string(),
database_name: database_name.to_string(),
parallelism,
resolve_strategy: resolve_strategy as i32,
})),
..Default::default()
})
.await?;
match pid {
Some(pid) => Ok(Value::from(pid)),
None => Ok(Value::Null),
}
}
fn signature() -> Signature {
let nums = ConcreteDataType::numerics();
let mut signs = Vec::with_capacity(2 + nums.len());
signs.extend([
// reconcile_database(database_name)
TypeSignature::Exact(vec![ArrowDataType::Utf8]),
// reconcile_database(database_name, resolve_strategy)
TypeSignature::Exact(vec![ArrowDataType::Utf8, ArrowDataType::Utf8]),
]);
for sign in nums {
// reconcile_database(database_name, resolve_strategy, parallelism)
signs.push(TypeSignature::Exact(vec![
ArrowDataType::Utf8,
ArrowDataType::Utf8,
sign.as_arrow_type(),
]));
}
Signature::one_of(signs, Volatility::Immutable)
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use arrow::array::{StringArray, UInt32Array};
use arrow::datatypes::{DataType, Field};
use datafusion_expr::ColumnarValue;
use crate::admin::reconcile_database::ReconcileDatabaseFunction;
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
#[tokio::test]
async fn test_reconcile_catalog() {
common_telemetry::init_default_ut_logging();
// reconcile_database(database_name)
let factory: ScalarFunctionFactory = ReconcileDatabaseFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![ColumnarValue::Array(Arc::new(StringArray::from(vec![
"test",
])))],
arg_fields: vec![Arc::new(Field::new("arg_0", DataType::Utf8, false))],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<StringArray>().unwrap();
assert_eq!(result_array.value(0), "test_pid");
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(
scalar,
datafusion_common::ScalarValue::Utf8(Some("test_pid".to_string()))
);
}
}
// reconcile_database(database_name, resolve_strategy)
let factory: ScalarFunctionFactory = ReconcileDatabaseFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["test"]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["UseLatest"]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
Arc::new(Field::new("arg_1", DataType::Utf8, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<StringArray>().unwrap();
assert_eq!(result_array.value(0), "test_pid");
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(
scalar,
datafusion_common::ScalarValue::Utf8(Some("test_pid".to_string()))
);
}
}
// reconcile_database(database_name, resolve_strategy, parallelism)
let factory: ScalarFunctionFactory = ReconcileDatabaseFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["test"]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["UseLatest"]))),
ColumnarValue::Array(Arc::new(UInt32Array::from(vec![10]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
Arc::new(Field::new("arg_1", DataType::Utf8, false)),
Arc::new(Field::new("arg_2", DataType::UInt32, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<StringArray>().unwrap();
assert_eq!(result_array.value(0), "test_pid");
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(
scalar,
datafusion_common::ScalarValue::Utf8(Some("test_pid".to_string()))
);
}
}
// invalid function args
let factory: ScalarFunctionFactory = ReconcileDatabaseFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["UseLatest"]))),
ColumnarValue::Array(Arc::new(UInt32Array::from(vec![10]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["v1"]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["v2"]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
Arc::new(Field::new("arg_1", DataType::UInt32, false)),
Arc::new(Field::new("arg_2", DataType::Utf8, false)),
Arc::new(Field::new("arg_3", DataType::Utf8, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let _err = f.invoke_async_with_args(func_args).await.unwrap_err();
// Note: Error type is DataFusionError at this level, not common_query::Error
// unsupported input data type
let factory: ScalarFunctionFactory = ReconcileDatabaseFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["UseLatest"]))),
ColumnarValue::Array(Arc::new(UInt32Array::from(vec![10]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["v1"]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
Arc::new(Field::new("arg_1", DataType::UInt32, false)),
Arc::new(Field::new("arg_2", DataType::Utf8, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let _err = f.invoke_async_with_args(func_args).await.unwrap_err();
// Note: Error type is DataFusionError at this level, not common_query::Error
}
}

View File

@@ -0,0 +1,204 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::meta::reconcile_request::Target;
use api::v1::meta::{ReconcileRequest, ReconcileTable, ResolveStrategy};
use arrow::datatypes::DataType as ArrowDataType;
use common_catalog::format_full_table_name;
use common_error::ext::BoxedError;
use common_macro::admin_fn;
use common_query::error::{
MissingProcedureServiceHandlerSnafu, Result, TableMutationSnafu, UnsupportedInputDataTypeSnafu,
};
use common_telemetry::info;
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::prelude::*;
use session::context::QueryContextRef;
use session::table_name::table_name_to_full_name;
use snafu::ResultExt;
use crate::handlers::ProcedureServiceHandlerRef;
use crate::helper::parse_resolve_strategy;
const FN_NAME: &str = "reconcile_table";
/// A function to reconcile a table.
/// Returns the procedure id if successful.
///
/// - `reconcile_table(table_name)`.
/// - `reconcile_table(table_name, resolve_strategy)`.
///
/// The parameters:
/// - `table_name`: the table name
/// - `resolve_strategy`: the strategy used to resolve inconsistencies (optional)
#[admin_fn(
name = ReconcileTableFunction,
display_name = reconcile_table,
sig_fn = signature,
ret = string
)]
pub(crate) async fn reconcile_table(
procedure_service_handler: &ProcedureServiceHandlerRef,
query_ctx: &QueryContextRef,
params: &[ValueRef<'_>],
) -> Result<Value> {
let (table_name, resolve_strategy) = match params {
[ValueRef::String(table_name)] => (table_name, ResolveStrategy::UseLatest),
[ValueRef::String(table_name), ValueRef::String(resolve_strategy)] => {
(table_name, parse_resolve_strategy(resolve_strategy)?)
}
_ => {
return UnsupportedInputDataTypeSnafu {
function: FN_NAME,
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail()
}
};
let (catalog_name, schema_name, table_name) = table_name_to_full_name(table_name, query_ctx)
.map_err(BoxedError::new)
.context(TableMutationSnafu)?;
info!(
"Reconciling table: {} with resolve_strategy: {:?}",
format_full_table_name(&catalog_name, &schema_name, &table_name),
resolve_strategy
);
let pid = procedure_service_handler
.reconcile(ReconcileRequest {
target: Some(Target::ReconcileTable(ReconcileTable {
catalog_name,
schema_name,
table_name,
resolve_strategy: resolve_strategy as i32,
})),
..Default::default()
})
.await?;
match pid {
Some(pid) => Ok(Value::from(pid)),
None => Ok(Value::Null),
}
}
fn signature() -> Signature {
Signature::one_of(
vec![
// reconcile_table(table_name)
TypeSignature::Exact(vec![ArrowDataType::Utf8]),
// reconcile_table(table_name, resolve_strategy)
TypeSignature::Exact(vec![ArrowDataType::Utf8, ArrowDataType::Utf8]),
],
Volatility::Immutable,
)
}
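// A rough, self-contained illustration of the catalog/schema defaulting that
// `table_name_to_full_name` performs for the table names accepted above. The real
// helper lives in the `session` crate and is not shown in this diff; the splitting
// rule below is an assumption for illustration only.
#[allow(dead_code)]
fn split_table_name<'a>(
    name: &'a str,
    default_catalog: &'a str,
    default_schema: &'a str,
) -> (&'a str, &'a str, &'a str) {
    let mut parts = name.rsplit('.');
    let table = parts.next().unwrap_or(name);
    let schema = parts.next().unwrap_or(default_schema);
    let catalog = parts.next().unwrap_or(default_catalog);
    (catalog, schema, table)
}
// e.g. split_table_name("monitor", "greptime", "public") == ("greptime", "public", "monitor"),
// and split_table_name("greptime.public.monitor", "greptime", "public") yields the same triple.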
#[cfg(test)]
mod tests {
use std::sync::Arc;
use arrow::array::StringArray;
use arrow::datatypes::{DataType, Field};
use datafusion_expr::ColumnarValue;
use crate::admin::reconcile_table::ReconcileTableFunction;
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
#[tokio::test]
async fn test_reconcile_table() {
common_telemetry::init_default_ut_logging();
// reconcile_table(table_name)
let factory: ScalarFunctionFactory = ReconcileTableFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![ColumnarValue::Array(Arc::new(StringArray::from(vec![
"test",
])))],
arg_fields: vec![Arc::new(Field::new("arg_0", DataType::Utf8, false))],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<StringArray>().unwrap();
assert_eq!(result_array.value(0), "test_pid");
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(
scalar,
datafusion_common::ScalarValue::Utf8(Some("test_pid".to_string()))
);
}
}
// reconcile_table(table_name, resolve_strategy)
let factory: ScalarFunctionFactory = ReconcileTableFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["test"]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["UseMetasrv"]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
Arc::new(Field::new("arg_1", DataType::Utf8, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<StringArray>().unwrap();
assert_eq!(result_array.value(0), "test_pid");
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(
scalar,
datafusion_common::ScalarValue::Utf8(Some("test_pid".to_string()))
);
}
}
// unsupported input data type
let factory: ScalarFunctionFactory = ReconcileTableFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(StringArray::from(vec!["test"]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["UseMetasrv"]))),
ColumnarValue::Array(Arc::new(StringArray::from(vec!["10"]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::Utf8, false)),
Arc::new(Field::new("arg_1", DataType::Utf8, false)),
Arc::new(Field::new("arg_2", DataType::Utf8, false)),
],
return_field: Arc::new(Field::new("result", DataType::Utf8, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let _err = f.invoke_async_with_args(func_args).await.unwrap_err();
// Note: Error type is DataFusionError at this level, not common_query::Error
}
}

View File

@@ -18,7 +18,8 @@ use common_query::error::{
InvalidFuncArgsSnafu, MissingProcedureServiceHandlerSnafu, Result,
UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::{Signature, TypeSignature, Volatility};
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::data_type::DataType;
use datatypes::prelude::ConcreteDataType;
use datatypes::value::{Value, ValueRef};
use session::context::QueryContextRef;
@@ -82,7 +83,13 @@ fn signature() -> Signature {
Signature::one_of(
vec![
// remove_region_follower(region_id, peer_id)
TypeSignature::Uniform(2, ConcreteDataType::numerics()),
TypeSignature::Uniform(
2,
ConcreteDataType::numerics()
.into_iter()
.map(|dt| dt.as_arrow_type())
.collect(),
),
],
Volatility::Immutable,
)
@@ -92,38 +99,57 @@ fn signature() -> Signature {
mod tests {
use std::sync::Arc;
use common_query::prelude::TypeSignature;
use datatypes::vectors::{UInt64Vector, VectorRef};
use arrow::array::UInt64Array;
use arrow::datatypes::{DataType, Field};
use datafusion_expr::ColumnarValue;
use super::*;
use crate::function::{AsyncFunction, FunctionContext};
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
#[test]
fn test_remove_region_follower_misc() {
let f = RemoveRegionFollowerFunction;
let factory: ScalarFunctionFactory = RemoveRegionFollowerFunction::factory().into();
let f = factory.provide(FunctionContext::mock());
assert_eq!("remove_region_follower", f.name());
assert_eq!(
ConcreteDataType::uint64_datatype(),
f.return_type(&[]).unwrap()
);
assert_eq!(DataType::UInt64, f.return_type(&[]).unwrap());
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::OneOf(sigs),
volatility: Volatility::Immutable
datafusion_expr::Signature {
type_signature: datafusion_expr::TypeSignature::OneOf(sigs),
volatility: datafusion_expr::Volatility::Immutable
} if sigs.len() == 1));
}
#[tokio::test]
async fn test_remove_region_follower() {
let f = RemoveRegionFollowerFunction;
let args = vec![1, 1];
let args = args
.into_iter()
.map(|arg| Arc::new(UInt64Vector::from_slice([arg])) as _)
.collect::<Vec<_>>();
let factory: ScalarFunctionFactory = RemoveRegionFollowerFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let result = f.eval(FunctionContext::mock(), &args).await.unwrap();
let expect: VectorRef = Arc::new(UInt64Vector::from_slice([0u64]));
assert_eq!(result, expect);
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::UInt64, false)),
Arc::new(Field::new("arg_1", DataType::UInt64, false)),
],
return_field: Arc::new(Field::new("result", DataType::UInt64, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<UInt64Array>().unwrap();
assert_eq!(result_array.value(0), 0u64);
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(scalar, datafusion_common::ScalarValue::UInt64(Some(0)));
}
}
}
}

View File

@@ -25,12 +25,14 @@
use std::sync::Arc;
use arrow::array::StructArray;
use arrow_schema::Fields;
use arrow_schema::{FieldRef, Fields};
use common_telemetry::debug;
use datafusion::functions_aggregate::all_default_aggregate_functions;
use datafusion::optimizer::analyzer::type_coercion::TypeCoercion;
use datafusion::optimizer::AnalyzerRule;
use datafusion::physical_planner::create_aggregate_expr_and_maybe_filter;
use datafusion_common::{Column, ScalarValue};
use datafusion_expr::expr::AggregateFunction;
use datafusion_expr::expr::{AggregateFunction, AggregateFunctionParams};
use datafusion_expr::function::StateFieldsArgs;
use datafusion_expr::{
Accumulator, Aggregate, AggregateUDF, AggregateUDFImpl, Expr, ExprSchemable, LogicalPlan,
@@ -39,6 +41,8 @@ use datafusion_expr::{
use datafusion_physical_expr::aggregate::AggregateFunctionExpr;
use datatypes::arrow::datatypes::{DataType, Field};
use crate::function_registry::FunctionRegistry;
/// Returns the name of the state function for the given aggregate function name.
/// The state function is used to compute the state of the aggregate function.
/// The state function's name is in the format `__<aggr_name>_state`.
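// The helper's body falls outside this hunk. A minimal sketch, assuming it simply
// formats the name according to the convention above (illustrative only, not the
// actual implementation):
#[allow(dead_code)]
fn state_fn_name_sketch(aggr_name: &str) -> String {
    format!("__{aggr_name}_state")
}
// state_fn_name_sketch("sum") == "__sum_state", matching the `__sum_state(number)`
// column name used in the tests further down.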
@@ -65,9 +69,9 @@ pub struct StateMergeHelper;
#[derive(Debug, Clone)]
pub struct StepAggrPlan {
/// Upper merge plan, which is the aggregate plan that merges the states of the state function.
pub upper_merge: Arc<LogicalPlan>,
pub upper_merge: LogicalPlan,
/// Lower state plan, which is the aggregate plan that computes the state of the aggregate function.
pub lower_state: Arc<LogicalPlan>,
pub lower_state: LogicalPlan,
}
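// A minimal sketch of the two-phase contract behind `StepAggrPlan`: the lower plan runs a
// "state" accumulator over raw input, and the upper plan merges those partial states. The
// toy sum below (no DataFusion types) only illustrates that contract; the real split is
// performed on `LogicalPlan::Aggregate` nodes by `split_aggr_node` further down.
#[allow(dead_code)]
#[derive(Default)]
struct SumStateSketch {
    partial: i64,
}

#[allow(dead_code)]
impl SumStateSketch {
    /// Lower "state" phase: fold raw input rows into a partial state.
    fn update_batch(&mut self, values: &[i64]) {
        self.partial += values.iter().sum::<i64>();
    }
    /// The serialized state handed from the lower plan to the upper plan.
    fn state(&self) -> i64 {
        self.partial
    }
    /// Upper "merge" phase: combine partial states from several lower accumulators.
    fn merge_batch(&mut self, states: &[i64]) {
        self.partial += states.iter().sum::<i64>();
    }
    fn evaluate(&self) -> i64 {
        self.partial
    }
}
// Two partitions folded independently and then merged give the same result as a
// single-phase sum:
//   let (mut a, mut b, mut m) = (SumStateSketch::default(), SumStateSketch::default(), SumStateSketch::default());
//   a.update_batch(&[1, 2]); b.update_batch(&[3, 4]);
//   m.merge_batch(&[a.state(), b.state()]);
//   assert_eq!(m.evaluate(), 10);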
pub fn get_aggr_func(expr: &Expr) -> Option<&datafusion_expr::expr::AggregateFunction> {
@@ -83,6 +87,36 @@ pub fn get_aggr_func(expr: &Expr) -> Option<&datafusion_expr::expr::AggregateFun
}
impl StateMergeHelper {
/// Registers the `state` function for every supported aggregate function.
/// Note that the `merge` function can't be registered here, as it has to be created from the original aggregate function with the given input types.
pub fn register(registry: &FunctionRegistry) {
let all_default = all_default_aggregate_functions();
let greptime_custom_aggr_functions = registry.aggregate_functions();
// If one of our custom aggregate functions has the same name as a default aggregate function, ours overrides it.
let supported = all_default
.into_iter()
.chain(greptime_custom_aggr_functions.into_iter().map(Arc::new))
.collect::<Vec<_>>();
debug!(
"Registering state functions for supported: {:?}",
supported.iter().map(|f| f.name()).collect::<Vec<_>>()
);
let state_func = supported.into_iter().filter_map(|f| {
StateWrapper::new((*f).clone())
.inspect_err(
|e| common_telemetry::error!(e; "Failed to register state function for {:?}", f),
)
.ok()
.map(AggregateUDF::new_from_impl)
});
for func in state_func {
registry.register_aggr(func);
}
}
/// Split an aggregate plan into two aggregate plans, one for the state function and one for the merge function.
pub fn split_aggr_node(aggr_plan: Aggregate) -> datafusion_common::Result<StepAggrPlan> {
let aggr = {
@@ -112,6 +146,7 @@ impl StateMergeHelper {
};
let original_input_types = aggr_func
.params
.args
.iter()
.map(|e| e.get_type(&aggr.input.schema()))
@@ -122,11 +157,7 @@ impl StateMergeHelper {
let expr = AggregateFunction {
func: Arc::new(state_func.into()),
args: aggr_func.args.clone(),
distinct: aggr_func.distinct,
filter: aggr_func.filter.clone(),
order_by: aggr_func.order_by.clone(),
null_treatment: aggr_func.null_treatment,
params: aggr_func.params.clone(),
};
let expr = Expr::AggregateFunction(expr);
let lower_state_output_col_name = expr.schema_name().to_string();
@@ -148,11 +179,10 @@ impl StateMergeHelper {
let arg = Expr::Column(Column::new_unqualified(lower_state_output_col_name));
let expr = AggregateFunction {
func: Arc::new(merge_func.into()),
args: vec![arg],
distinct: aggr_func.distinct,
filter: aggr_func.filter.clone(),
order_by: aggr_func.order_by.clone(),
null_treatment: aggr_func.null_treatment,
params: AggregateFunctionParams {
args: vec![arg],
..aggr_func.params.clone()
},
};
// alias to the original aggregate expr's schema name, so parent plan can refer to it
@@ -166,18 +196,18 @@ impl StateMergeHelper {
let lower_plan = LogicalPlan::Aggregate(lower);
// update aggregate's output schema
let lower_plan = Arc::new(lower_plan.recompute_schema()?);
let lower_plan = lower_plan.recompute_schema()?;
let mut upper = aggr.clone();
let aggr_plan = LogicalPlan::Aggregate(aggr);
upper.aggr_expr = upper_aggr_exprs;
upper.input = lower_plan.clone();
upper.input = Arc::new(lower_plan.clone());
// the upper plan's output schema should be the same as the original aggregate plan's output schema
let upper_check = upper.clone();
let upper_plan = Arc::new(LogicalPlan::Aggregate(upper_check).recompute_schema()?);
let upper_check = upper;
let upper_plan = LogicalPlan::Aggregate(upper_check).recompute_schema()?;
if *upper_plan.schema() != *aggr_plan.schema() {
return Err(datafusion_common::DataFusionError::Internal(format!(
"Upper aggregate plan's schema is not the same as the original aggregate plan's schema: \n[transformed]:{}\n[ original]{}",
"Upper aggregate plan's schema is not the same as the original aggregate plan's schema: \n[transformed]:{}\n[original]:{}",
upper_plan.schema(), aggr_plan.schema()
)));
}
@@ -213,15 +243,8 @@ impl StateWrapper {
pub fn deduce_aggr_return_type(
&self,
acc_args: &datafusion_expr::function::AccumulatorArgs,
) -> datafusion_common::Result<DataType> {
let input_exprs = acc_args.exprs;
let input_schema = acc_args.schema;
let input_types = input_exprs
.iter()
.map(|e| e.data_type(input_schema))
.collect::<Result<Vec<_>, _>>()?;
let return_type = self.inner.return_type(&input_types)?;
Ok(return_type)
) -> datafusion_common::Result<FieldRef> {
self.inner.return_field(acc_args.schema.fields())
}
}
@@ -231,14 +254,13 @@ impl AggregateUDFImpl for StateWrapper {
acc_args: datafusion_expr::function::AccumulatorArgs<'b>,
) -> datafusion_common::Result<Box<dyn Accumulator>> {
// fix and recover proper acc args for the original aggregate function.
let state_type = acc_args.return_type.clone();
let state_type = acc_args.return_type().clone();
let inner = {
let old_return_type = self.deduce_aggr_return_type(&acc_args)?;
let acc_args = datafusion_expr::function::AccumulatorArgs {
return_type: &old_return_type,
return_field: self.deduce_aggr_return_type(&acc_args)?,
schema: acc_args.schema,
ignore_nulls: acc_args.ignore_nulls,
ordering_req: acc_args.ordering_req,
order_bys: acc_args.order_bys,
is_reversed: acc_args.is_reversed,
name: acc_args.name,
is_distinct: acc_args.is_distinct,
@@ -263,11 +285,15 @@ impl AggregateUDFImpl for StateWrapper {
/// Return state_fields as the output struct type.
///
fn return_type(&self, arg_types: &[DataType]) -> datafusion_common::Result<DataType> {
let old_return_type = self.inner.return_type(arg_types)?;
let input_fields = &arg_types
.iter()
.map(|x| Arc::new(Field::new("x", x.clone(), false)))
.collect::<Vec<_>>();
let state_fields_args = StateFieldsArgs {
name: self.inner().name(),
input_types: arg_types,
return_type: &old_return_type,
input_fields,
return_field: self.inner.return_field(input_fields)?,
// TODO(discord9): how to get this? Probably ok to leave empty for now.
ordering_fields: &[],
is_distinct: false,
@@ -281,12 +307,11 @@ impl AggregateUDFImpl for StateWrapper {
fn state_fields(
&self,
args: datafusion_expr::function::StateFieldsArgs,
) -> datafusion_common::Result<Vec<Field>> {
let old_return_type = self.inner.return_type(args.input_types)?;
) -> datafusion_common::Result<Vec<FieldRef>> {
let state_fields_args = StateFieldsArgs {
name: args.name,
input_types: args.input_types,
return_type: &old_return_type,
input_fields: args.input_fields,
return_field: self.inner.return_field(args.input_fields)?,
ordering_fields: args.ordering_fields,
is_distinct: args.is_distinct,
};
@@ -407,15 +432,18 @@ impl AggregateUDFImpl for MergeWrapper {
&'a self,
acc_args: datafusion_expr::function::AccumulatorArgs<'b>,
) -> datafusion_common::Result<Box<dyn Accumulator>> {
if acc_args.schema.fields().len() != 1
|| !matches!(acc_args.schema.field(0).data_type(), DataType::Struct(_))
if acc_args.exprs.len() != 1
|| !matches!(
acc_args.exprs[0].data_type(acc_args.schema)?,
DataType::Struct(_)
)
{
return Err(datafusion_common::DataFusionError::Internal(format!(
"Expected one struct type as input, got: {:?}",
acc_args.schema
)));
}
let input_type = acc_args.schema.field(0).data_type();
let input_type = acc_args.exprs[0].data_type(acc_args.schema)?;
let DataType::Struct(fields) = input_type else {
return Err(datafusion_common::DataFusionError::Internal(format!(
"Expected a struct type for input, got: {:?}",
@@ -424,7 +452,7 @@ impl AggregateUDFImpl for MergeWrapper {
};
let inner_accum = self.original_phy_expr.create_accumulator()?;
Ok(Box::new(MergeAccum::new(inner_accum, fields)))
Ok(Box::new(MergeAccum::new(inner_accum, &fields)))
}
fn as_any(&self) -> &dyn std::any::Any {
@@ -465,7 +493,7 @@ impl AggregateUDFImpl for MergeWrapper {
fn state_fields(
&self,
_args: datafusion_expr::function::StateFieldsArgs,
) -> datafusion_common::Result<Vec<Field>> {
) -> datafusion_common::Result<Vec<FieldRef>> {
self.original_phy_expr.state_fields()
}
}

View File

@@ -35,7 +35,7 @@ use datafusion::prelude::SessionContext;
use datafusion_common::{Column, TableReference};
use datafusion_expr::expr::AggregateFunction;
use datafusion_expr::sqlparser::ast::NullTreatment;
use datafusion_expr::{Aggregate, Expr, LogicalPlan, SortExpr, TableScan};
use datafusion_expr::{lit, Aggregate, Expr, LogicalPlan, SortExpr, TableScan};
use datafusion_physical_expr::aggregate::AggregateExprBuilder;
use datafusion_physical_expr::{EquivalenceProperties, Partitioning};
use datatypes::arrow_array::StringArray;
@@ -234,7 +234,7 @@ async fn test_sum_udaf() {
vec![Expr::Column(Column::new_unqualified("number"))],
false,
None,
None,
vec![],
None,
))],
)
@@ -250,7 +250,7 @@ async fn test_sum_udaf() {
vec![Expr::Column(Column::new_unqualified("number"))],
false,
None,
None,
vec![],
None,
))],
)
@@ -258,7 +258,7 @@ async fn test_sum_udaf() {
)
.recompute_schema()
.unwrap();
assert_eq!(res.lower_state.as_ref(), &expected_lower_plan);
assert_eq!(&res.lower_state, &expected_lower_plan);
let expected_merge_plan = LogicalPlan::Aggregate(
Aggregate::try_new(
@@ -290,14 +290,14 @@ async fn test_sum_udaf() {
vec![Expr::Column(Column::new_unqualified("__sum_state(number)"))],
false,
None,
None,
vec![],
None,
))
.alias("sum(number)")],
)
.unwrap(),
);
assert_eq!(res.upper_merge.as_ref(), &expected_merge_plan);
assert_eq!(&res.upper_merge, &expected_merge_plan);
let phy_aggr_state_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&res.lower_state, &ctx.state())
@@ -378,7 +378,7 @@ async fn test_avg_udaf() {
vec![Expr::Column(Column::new_unqualified("number"))],
false,
None,
None,
vec![],
None,
))],
)
@@ -395,7 +395,7 @@ async fn test_avg_udaf() {
vec![Expr::Column(Column::new_unqualified("number"))],
false,
None,
None,
vec![],
None,
))],
)
@@ -405,7 +405,7 @@ async fn test_avg_udaf() {
let coerced_aggr_state_plan = TypeCoercion::new()
.analyze(expected_aggr_state_plan.clone(), &Default::default())
.unwrap();
assert_eq!(res.lower_state.as_ref(), &coerced_aggr_state_plan);
assert_eq!(&res.lower_state, &coerced_aggr_state_plan);
assert_eq!(
res.lower_state.schema().as_arrow(),
&arrow_schema::Schema::new(vec![Field::new(
@@ -449,14 +449,14 @@ async fn test_avg_udaf() {
vec![Expr::Column(Column::new_unqualified("__avg_state(number)"))],
false,
None,
None,
vec![],
None,
))
.alias("avg(number)")],
)
.unwrap(),
);
assert_eq!(res.upper_merge.as_ref(), &expected_merge_plan);
assert_eq!(&res.upper_merge, &expected_merge_plan);
let phy_aggr_state_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&coerced_aggr_state_plan, &ctx.state())
@@ -551,7 +551,7 @@ async fn test_udaf_correct_eval_result() {
expected_fn: Option<ExpectedFn>,
distinct: bool,
filter: Option<Box<Expr>>,
order_by: Option<Vec<SortExpr>>,
order_by: Vec<SortExpr>,
null_treatment: Option<NullTreatment>,
}
type ExpectedFn = fn(ArrayRef) -> bool;
@@ -575,7 +575,7 @@ async fn test_udaf_correct_eval_result() {
expected_fn: None,
distinct: false,
filter: None,
order_by: None,
order_by: vec![],
null_treatment: None,
},
TestCase {
@@ -596,7 +596,7 @@ async fn test_udaf_correct_eval_result() {
expected_fn: None,
distinct: false,
filter: None,
order_by: None,
order_by: vec![],
null_treatment: None,
},
TestCase {
@@ -619,7 +619,7 @@ async fn test_udaf_correct_eval_result() {
expected_fn: None,
distinct: false,
filter: None,
order_by: None,
order_by: vec![],
null_treatment: None,
},
TestCase {
@@ -630,8 +630,8 @@ async fn test_udaf_correct_eval_result() {
true,
)])),
args: vec![
Expr::Literal(ScalarValue::Int64(Some(128))),
Expr::Literal(ScalarValue::Float64(Some(0.05))),
lit(128i64),
lit(0.05f64),
Expr::Column(Column::new_unqualified("number")),
],
input: vec![Arc::new(Float64Array::from(vec![
@@ -659,7 +659,7 @@ async fn test_udaf_correct_eval_result() {
}),
distinct: false,
filter: None,
order_by: None,
order_by: vec![],
null_treatment: None,
},
TestCase {
@@ -690,7 +690,7 @@ async fn test_udaf_correct_eval_result() {
}),
distinct: false,
filter: None,
order_by: None,
order_by: vec![],
null_treatment: None,
},
// TODO(discord9): udd_merge/hll_merge/geo_path/quantile_aggr tests

View File

@@ -41,7 +41,7 @@ use datatypes::arrow::array::{
Array, ArrayRef, AsArray, BooleanArray, Int64Array, ListArray, UInt64Array,
};
use datatypes::arrow::buffer::{OffsetBuffer, ScalarBuffer};
use datatypes::arrow::datatypes::{DataType, Field};
use datatypes::arrow::datatypes::{DataType, Field, FieldRef};
use crate::function_registry::FunctionRegistry;
@@ -94,14 +94,14 @@ impl AggregateUDFImpl for CountHash {
false
}
fn state_fields(&self, args: StateFieldsArgs) -> Result<Vec<Field>> {
Ok(vec![Field::new_list(
fn state_fields(&self, args: StateFieldsArgs) -> Result<Vec<FieldRef>> {
Ok(vec![Arc::new(Field::new_list(
format_state_name(args.name, "count_hash"),
Field::new_list_field(DataType::UInt64, true),
// For the count_hash accumulator, a null list item stands for an
// empty value set (i.e., all NULL values so far for that group).
true,
)])
))])
}
fn accumulator(&self, acc_args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {

View File

@@ -21,7 +21,7 @@ pub(crate) struct GeoFunction;
impl GeoFunction {
pub fn register(registry: &FunctionRegistry) {
registry.register_aggr(encoding::JsonEncodePathAccumulator::uadf_impl());
registry.register_aggr(geo_path::GeoPathAccumulator::uadf_impl());
registry.register_aggr(encoding::JsonPathAccumulator::uadf_impl());
}
}

View File

@@ -14,223 +14,332 @@
use std::sync::Arc;
use common_error::ext::{BoxedError, PlainError};
use common_error::status_code::StatusCode;
use common_macro::{as_aggr_func_creator, AggrFuncTypeStore};
use common_query::error::{self, InvalidInputStateSnafu, Result};
use common_query::logical_plan::accumulator::AggrFuncTypeStore;
use common_query::logical_plan::{
create_aggregate_function, Accumulator, AggregateFunctionCreator,
use arrow::array::AsArray;
use datafusion::arrow::array::{Array, ArrayRef};
use datafusion::common::cast::as_primitive_array;
use datafusion::error::{DataFusionError, Result as DfResult};
use datafusion::logical_expr::{Accumulator as DfAccumulator, AggregateUDF, Volatility};
use datafusion::prelude::create_udaf;
use datafusion_common::cast::{as_list_array, as_struct_array};
use datafusion_common::ScalarValue;
use datatypes::arrow::array::{Float64Array, Int64Array, ListArray, StructArray};
use datatypes::arrow::datatypes::{
DataType, Field, Float64Type, Int64Type, TimeUnit, TimestampNanosecondType,
};
use common_query::prelude::AccumulatorCreatorFunction;
use common_time::Timestamp;
use datafusion_expr::AggregateUDF;
use datatypes::prelude::ConcreteDataType;
use datatypes::value::{ListValue, Value};
use datatypes::vectors::VectorRef;
use snafu::{ensure, ResultExt};
use datatypes::compute::{self, sort_to_indices};
use crate::scalars::geo::helpers::{ensure_columns_len, ensure_columns_n};
pub const JSON_ENCODE_PATH_NAME: &str = "json_encode_path";
/// Accumulator of lat, lng, timestamp tuples
#[derive(Debug)]
pub struct JsonPathAccumulator {
timestamp_type: ConcreteDataType,
const LATITUDE_FIELD: &str = "lat";
const LONGITUDE_FIELD: &str = "lng";
const TIMESTAMP_FIELD: &str = "timestamp";
const DEFAULT_LIST_FIELD_NAME: &str = "item";
#[derive(Debug, Default)]
pub struct JsonEncodePathAccumulator {
lat: Vec<Option<f64>>,
lng: Vec<Option<f64>>,
timestamp: Vec<Option<Timestamp>>,
timestamp: Vec<Option<i64>>,
}
impl JsonPathAccumulator {
fn new(timestamp_type: ConcreteDataType) -> Self {
Self {
lat: Vec::default(),
lng: Vec::default(),
timestamp: Vec::default(),
timestamp_type,
}
impl JsonEncodePathAccumulator {
pub fn new() -> Self {
Self::default()
}
/// Create a new `AggregateUDF` for the `json_encode_path` aggregate function.
pub fn uadf_impl() -> AggregateUDF {
create_aggregate_function(
"json_encode_path".to_string(),
3,
Arc::new(JsonPathEncodeFunctionCreator::default()),
create_udaf(
JSON_ENCODE_PATH_NAME,
// Input types: lat, lng, timestamp
vec![
DataType::Float64,
DataType::Float64,
DataType::Timestamp(TimeUnit::Nanosecond, None),
],
// Output type: GeoJSON-compatible linestring
Arc::new(DataType::Utf8),
Volatility::Immutable,
// Create the accumulator
Arc::new(|_| Ok(Box::new(Self::new()))),
// Intermediate state types
Arc::new(vec![DataType::Struct(
vec![
Field::new(
LATITUDE_FIELD,
DataType::List(Arc::new(Field::new(
DEFAULT_LIST_FIELD_NAME,
DataType::Float64,
true,
))),
false,
),
Field::new(
LONGITUDE_FIELD,
DataType::List(Arc::new(Field::new(
DEFAULT_LIST_FIELD_NAME,
DataType::Float64,
true,
))),
false,
),
Field::new(
TIMESTAMP_FIELD,
DataType::List(Arc::new(Field::new(
DEFAULT_LIST_FIELD_NAME,
DataType::Int64,
true,
))),
false,
),
]
.into(),
)]),
)
.into()
}
}
impl Accumulator for JsonPathAccumulator {
fn state(&self) -> Result<Vec<Value>> {
Ok(vec![
Value::List(ListValue::new(
self.lat.iter().map(|i| Value::from(*i)).collect(),
ConcreteDataType::float64_datatype(),
)),
Value::List(ListValue::new(
self.lng.iter().map(|i| Value::from(*i)).collect(),
ConcreteDataType::float64_datatype(),
)),
Value::List(ListValue::new(
self.timestamp.iter().map(|i| Value::from(*i)).collect(),
self.timestamp_type.clone(),
)),
])
}
impl DfAccumulator for JsonEncodePathAccumulator {
fn update_batch(&mut self, values: &[ArrayRef]) -> datafusion::error::Result<()> {
if values.len() != 3 {
return Err(DataFusionError::Internal(format!(
"Expected 3 columns for json_encode_path, got {}",
values.len()
)));
}
fn update_batch(&mut self, columns: &[VectorRef]) -> Result<()> {
// update_batch in DataFusion just provides the accumulator's original
// input.
//
// columns is a vec of [`lat`, `lng`, `timestamp`]
// where
// - `lat` is a vector of `Value::Float64` or a similar type. Each item in
// the vector is a row in the given dataset.
// - likewise for `lng` and `timestamp`
ensure_columns_n!(columns, 3);
let lat_array = as_primitive_array::<Float64Type>(&values[0])?;
let lng_array = as_primitive_array::<Float64Type>(&values[1])?;
let ts_array = as_primitive_array::<TimestampNanosecondType>(&values[2])?;
let lat = &columns[0];
let lng = &columns[1];
let ts = &columns[2];
let size = lat.len();
let size = lat_array.len();
self.lat.reserve(size);
self.lng.reserve(size);
for idx in 0..size {
self.lat.push(lat.get(idx).as_f64_lossy());
self.lng.push(lng.get(idx).as_f64_lossy());
self.timestamp.push(ts.get(idx).as_timestamp());
self.lat.push(if lat_array.is_null(idx) {
None
} else {
Some(lat_array.value(idx))
});
self.lng.push(if lng_array.is_null(idx) {
None
} else {
Some(lng_array.value(idx))
});
self.timestamp.push(if ts_array.is_null(idx) {
None
} else {
Some(ts_array.value(idx))
});
}
Ok(())
}
fn merge_batch(&mut self, states: &[VectorRef]) -> Result<()> {
// merge_batch in DataFusion receives the states accumulated from the data
// returned by the child accumulators' state() calls.
// In our particular implementation, the data structure looks like this:
//
// states is a vec of [`lat`, `lng`, `timestamp`]
// where
// - `lat` is a vector of `Value::List`. Each item in the list is all
// coordinates from a child accumulator.
// - likewise for `lng` and `timestamp`
fn evaluate(&mut self) -> DfResult<ScalarValue> {
let unordered_lng_array = Float64Array::from(self.lng.clone());
let unordered_lat_array = Float64Array::from(self.lat.clone());
let ts_array = Int64Array::from(self.timestamp.clone());
ensure_columns_n!(states, 3);
let ordered_indices = sort_to_indices(&ts_array, None, None)?;
let lat_array = compute::take(&unordered_lat_array, &ordered_indices, None)?;
let lng_array = compute::take(&unordered_lng_array, &ordered_indices, None)?;
let lat_lists = &states[0];
let lng_lists = &states[1];
let ts_lists = &states[2];
let len = ts_array.len();
let lat_array = lat_array.as_primitive::<Float64Type>();
let lng_array = lng_array.as_primitive::<Float64Type>();
let len = lat_lists.len();
let mut coords = Vec::with_capacity(len);
for i in 0..len {
let lng = lng_array.value(i);
let lat = lat_array.value(i);
coords.push(vec![lng, lat]);
}
for idx in 0..len {
if let Some(lat_list) = lat_lists
.get(idx)
.as_list()
.map_err(BoxedError::new)
.context(error::ExecuteSnafu)?
{
for v in lat_list.items() {
self.lat.push(v.as_f64_lossy());
}
}
let result = serde_json::to_string(&coords)
.map_err(|e| DataFusionError::Execution(format!("Failed to encode json, {}", e)))?;
if let Some(lng_list) = lng_lists
.get(idx)
.as_list()
.map_err(BoxedError::new)
.context(error::ExecuteSnafu)?
{
for v in lng_list.items() {
self.lng.push(v.as_f64_lossy());
}
}
Ok(ScalarValue::Utf8(Some(result)))
}
if let Some(ts_list) = ts_lists
.get(idx)
.as_list()
.map_err(BoxedError::new)
.context(error::ExecuteSnafu)?
{
for v in ts_list.items() {
self.timestamp.push(v.as_timestamp());
}
}
fn size(&self) -> usize {
// Base size of JsonEncodePathAccumulator struct fields
let mut total_size = std::mem::size_of::<Self>();
// Size of vectors (approximation)
total_size += self.lat.capacity() * std::mem::size_of::<Option<f64>>();
total_size += self.lng.capacity() * std::mem::size_of::<Option<f64>>();
total_size += self.timestamp.capacity() * std::mem::size_of::<Option<i64>>();
total_size
}
fn state(&mut self) -> datafusion::error::Result<Vec<ScalarValue>> {
let lat_array = Arc::new(ListArray::from_iter_primitive::<Float64Type, _, _>(vec![
Some(self.lat.clone()),
]));
let lng_array = Arc::new(ListArray::from_iter_primitive::<Float64Type, _, _>(vec![
Some(self.lng.clone()),
]));
let ts_array = Arc::new(ListArray::from_iter_primitive::<Int64Type, _, _>(vec![
Some(self.timestamp.clone()),
]));
let state_struct = StructArray::new(
vec![
Field::new(
LATITUDE_FIELD,
DataType::List(Arc::new(Field::new("item", DataType::Float64, true))),
false,
),
Field::new(
LONGITUDE_FIELD,
DataType::List(Arc::new(Field::new("item", DataType::Float64, true))),
false,
),
Field::new(
TIMESTAMP_FIELD,
DataType::List(Arc::new(Field::new("item", DataType::Int64, true))),
false,
),
]
.into(),
vec![lat_array, lng_array, ts_array],
None,
);
Ok(vec![ScalarValue::Struct(Arc::new(state_struct))])
}
fn merge_batch(&mut self, states: &[ArrayRef]) -> datafusion::error::Result<()> {
if states.len() != 1 {
return Err(DataFusionError::Internal(format!(
"Expected 1 states for json_encode_path, got {}",
states.len()
)));
}
for state in states {
let state = as_struct_array(state)?;
let lat_list = as_list_array(state.column(0))?.value(0);
let lat_array = as_primitive_array::<Float64Type>(&lat_list)?;
let lng_list = as_list_array(state.column(1))?.value(0);
let lng_array = as_primitive_array::<Float64Type>(&lng_list)?;
let ts_list = as_list_array(state.column(2))?.value(0);
let ts_array = as_primitive_array::<Int64Type>(&ts_list)?;
self.lat.extend(lat_array);
self.lng.extend(lng_array);
self.timestamp.extend(ts_array);
}
Ok(())
}
fn evaluate(&self) -> Result<Value> {
let mut work_vec: Vec<(&Option<f64>, &Option<f64>, &Option<Timestamp>)> = self
.lat
.iter()
.zip(self.lng.iter())
.zip(self.timestamp.iter())
.map(|((a, b), c)| (a, b, c))
.collect();
// sort by timestamp, we treat null timestamp as 0
work_vec.sort_unstable_by_key(|tuple| tuple.2.unwrap_or_else(|| Timestamp::new_second(0)));
let result = serde_json::to_string(
&work_vec
.into_iter()
// note that we transform to lng,lat for geojson compatibility
.map(|(lat, lng, _)| vec![lng, lat])
.collect::<Vec<Vec<&Option<f64>>>>(),
)
.map_err(|e| {
BoxedError::new(PlainError::new(
format!("Serialization failure: {}", e),
StatusCode::EngineExecuteQuery,
))
})
.context(error::ExecuteSnafu)?;
Ok(Value::String(result.into()))
}
}
/// This function accepts rows of lat, lng and timestamp, sorts them by timestamp and
/// encodes them into a GeoJSON-like path.
///
/// Example:
///
/// ```sql
/// SELECT json_encode_path(lat, lon, timestamp) FROM table [group by ...];
/// ```
///
#[as_aggr_func_creator]
#[derive(Debug, Default, AggrFuncTypeStore)]
pub struct JsonPathEncodeFunctionCreator {}
#[cfg(test)]
mod tests {
use datafusion::arrow::array::{Float64Array, TimestampNanosecondArray};
use datafusion::scalar::ScalarValue;
impl AggregateFunctionCreator for JsonPathEncodeFunctionCreator {
fn creator(&self) -> AccumulatorCreatorFunction {
let creator: AccumulatorCreatorFunction = Arc::new(move |types: &[ConcreteDataType]| {
let ts_type = types[2].clone();
Ok(Box::new(JsonPathAccumulator::new(ts_type)))
});
use super::*;
creator
#[test]
fn test_json_encode_path_basic() {
let mut accumulator = JsonEncodePathAccumulator::new();
// Create test data
let lat_array = Arc::new(Float64Array::from(vec![1.0, 2.0, 3.0]));
let lng_array = Arc::new(Float64Array::from(vec![4.0, 5.0, 6.0]));
let ts_array = Arc::new(TimestampNanosecondArray::from(vec![100, 200, 300]));
// Update batch
accumulator
.update_batch(&[lat_array, lng_array, ts_array])
.unwrap();
// Evaluate
let result = accumulator.evaluate().unwrap();
assert_eq!(
result,
ScalarValue::Utf8(Some("[[4.0,1.0],[5.0,2.0],[6.0,3.0]]".to_string()))
);
}
fn output_type(&self) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::string_datatype())
#[test]
fn test_json_encode_path_sort_by_timestamp() {
let mut accumulator = JsonEncodePathAccumulator::new();
// Create test data with unordered timestamps
let lat_array = Arc::new(Float64Array::from(vec![1.0, 2.0, 3.0]));
let lng_array = Arc::new(Float64Array::from(vec![4.0, 5.0, 6.0]));
let ts_array = Arc::new(TimestampNanosecondArray::from(vec![300, 100, 200]));
// Update batch
accumulator
.update_batch(&[lat_array, lng_array, ts_array])
.unwrap();
// Evaluate
let result = accumulator.evaluate().unwrap();
assert_eq!(
result,
ScalarValue::Utf8(Some("[[5.0,2.0],[6.0,3.0],[4.0,1.0]]".to_string()))
);
}
fn state_types(&self) -> Result<Vec<ConcreteDataType>> {
let input_types = self.input_types()?;
ensure!(input_types.len() == 3, InvalidInputStateSnafu);
#[test]
fn test_json_encode_path_merge() {
let mut accumulator1 = JsonEncodePathAccumulator::new();
let mut accumulator2 = JsonEncodePathAccumulator::new();
let timestamp_type = input_types[2].clone();
// Create test data for first accumulator
let lat_array1 = Arc::new(Float64Array::from(vec![1.0]));
let lng_array1 = Arc::new(Float64Array::from(vec![4.0]));
let ts_array1 = Arc::new(TimestampNanosecondArray::from(vec![100]));
Ok(vec![
ConcreteDataType::list_datatype(ConcreteDataType::float64_datatype()),
ConcreteDataType::list_datatype(ConcreteDataType::float64_datatype()),
ConcreteDataType::list_datatype(timestamp_type),
])
// Create test data for second accumulator
let lat_array2 = Arc::new(Float64Array::from(vec![2.0]));
let lng_array2 = Arc::new(Float64Array::from(vec![5.0]));
let ts_array2 = Arc::new(TimestampNanosecondArray::from(vec![200]));
// Update batches
accumulator1
.update_batch(&[lat_array1, lng_array1, ts_array1])
.unwrap();
accumulator2
.update_batch(&[lat_array2, lng_array2, ts_array2])
.unwrap();
// Get states
let state1 = accumulator1.state().unwrap();
let state2 = accumulator2.state().unwrap();
// Create a merged accumulator
let mut merged = JsonEncodePathAccumulator::new();
// Extract the struct arrays from the states
let state_array1 = match &state1[0] {
ScalarValue::Struct(array) => array.clone(),
_ => panic!("Expected Struct scalar value"),
};
let state_array2 = match &state2[0] {
ScalarValue::Struct(array) => array.clone(),
_ => panic!("Expected Struct scalar value"),
};
// Merge state arrays
merged.merge_batch(&[state_array1]).unwrap();
merged.merge_batch(&[state_array2]).unwrap();
// Evaluate merged result
let result = merged.evaluate().unwrap();
assert_eq!(
result,
ScalarValue::Utf8(Some("[[4.0,1.0],[5.0,2.0]]".to_string()))
);
}
}
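// A compact sketch of what the accumulator above ultimately produces: collect the points,
// sort them by timestamp, and serialize [lng, lat] pairs as a JSON array (the GeoJSON-style
// axis order used by `evaluate`). Illustrative only; it ignores nulls and state merging.
#[allow(dead_code)]
fn encode_path_sketch(mut points: Vec<(f64, f64, i64)>) -> String {
    // points are (lat, lng, timestamp) tuples
    points.sort_by_key(|&(_, _, ts)| ts);
    let coords: Vec<[f64; 2]> = points.iter().map(|&(lat, lng, _)| [lng, lat]).collect();
    serde_json::to_string(&coords).expect("plain f64 pairs always serialize")
}
// encode_path_sketch(vec![(1.0, 4.0, 300), (2.0, 5.0, 100)]) == "[[5.0,2.0],[4.0,1.0]]",
// mirroring the timestamp ordering exercised by `test_json_encode_path_sort_by_timestamp` above.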

View File

@@ -12,21 +12,20 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::borrow::Cow;
use std::sync::Arc;
use common_macro::{as_aggr_func_creator, AggrFuncTypeStore};
use common_query::error::{CreateAccumulatorSnafu, Error, InvalidFuncArgsSnafu};
use common_query::logical_plan::{
create_aggregate_function, Accumulator, AggregateFunctionCreator,
};
use common_query::prelude::AccumulatorCreatorFunction;
use datafusion_expr::AggregateUDF;
use datatypes::prelude::{ConcreteDataType, Value, *};
use datatypes::vectors::VectorRef;
use arrow::array::{Array, ArrayRef, AsArray, BinaryArray, StringArray};
use arrow_schema::{DataType, Field};
use datafusion::logical_expr::{Signature, TypeSignature, Volatility};
use datafusion_common::{Result, ScalarValue};
use datafusion_expr::{Accumulator, AggregateUDF, SimpleAggregateUDF};
use datafusion_functions_aggregate_common::accumulator::AccumulatorArgs;
use nalgebra::{Const, DVectorView, Dyn, OVector};
use snafu::ensure;
use crate::scalars::vector::impl_conv::{as_veclit, as_veclit_if_const, veclit_to_binlit};
use crate::scalars::vector::impl_conv::{
binlit_as_veclit, parse_veclit_from_strlit, veclit_to_binlit,
};
/// Aggregates by multiplying elements across the same dimension and returns a vector.
#[derive(Debug, Default)]
@@ -35,57 +34,42 @@ pub struct VectorProduct {
has_null: bool,
}
#[as_aggr_func_creator]
#[derive(Debug, Default, AggrFuncTypeStore)]
pub struct VectorProductCreator {}
impl AggregateFunctionCreator for VectorProductCreator {
fn creator(&self) -> AccumulatorCreatorFunction {
let creator: AccumulatorCreatorFunction = Arc::new(move |types: &[ConcreteDataType]| {
ensure!(
types.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect exactly one, have: {}",
types.len()
)
}
);
let input_type = &types[0];
match input_type {
ConcreteDataType::String(_) | ConcreteDataType::Binary(_) => {
Ok(Box::new(VectorProduct::default()))
}
_ => {
let err_msg = format!(
"\"VEC_PRODUCT\" aggregate function not support data type {:?}",
input_type.logical_type_id(),
);
CreateAccumulatorSnafu { err_msg }.fail()?
}
}
});
creator
}
fn output_type(&self) -> common_query::error::Result<ConcreteDataType> {
Ok(ConcreteDataType::binary_datatype())
}
fn state_types(&self) -> common_query::error::Result<Vec<ConcreteDataType>> {
Ok(vec![self.output_type()?])
}
}
impl VectorProduct {
/// Create a new `AggregateUDF` for the `vec_product` aggregate function.
pub fn uadf_impl() -> AggregateUDF {
create_aggregate_function(
"vec_product".to_string(),
1,
Arc::new(VectorProductCreator::default()),
)
.into()
let signature = Signature::one_of(
vec![
TypeSignature::Exact(vec![DataType::Utf8]),
TypeSignature::Exact(vec![DataType::Binary]),
],
Volatility::Immutable,
);
let udaf = SimpleAggregateUDF::new_with_signature(
"vec_product",
signature,
DataType::Binary,
Arc::new(Self::accumulator),
vec![Arc::new(Field::new("x", DataType::Binary, true))],
);
AggregateUDF::from(udaf)
}
fn accumulator(args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {
if args.schema.fields().len() != 1 {
return Err(datafusion_common::DataFusionError::Internal(format!(
"expect creating `VEC_PRODUCT` with only one input field, actual {}",
args.schema.fields().len()
)));
}
let t = args.schema.field(0).data_type();
if !matches!(t, DataType::Utf8 | DataType::Binary) {
return Err(datafusion_common::DataFusionError::Internal(format!(
"unexpected input datatype {t} when creating `VEC_PRODUCT`"
)));
}
Ok(Box::new(VectorProduct::default()))
}
fn inner(&mut self, len: usize) -> &mut OVector<f32, Dyn> {
@@ -94,67 +78,82 @@ impl VectorProduct {
})
}
fn update(&mut self, values: &[VectorRef], is_update: bool) -> Result<(), Error> {
fn update(&mut self, values: &[ArrayRef], is_update: bool) -> Result<()> {
if values.is_empty() || self.has_null {
return Ok(());
};
let column = &values[0];
let len = column.len();
match as_veclit_if_const(column)? {
Some(column) => {
let vec_column = DVectorView::from_slice(&column, column.len()).scale(len as f32);
*self.inner(vec_column.len()) =
(*self.inner(vec_column.len())).component_mul(&vec_column);
let vectors = match values[0].data_type() {
DataType::Utf8 => {
let arr: &StringArray = values[0].as_string();
arr.iter()
.filter_map(|x| x.map(|s| parse_veclit_from_strlit(s).map_err(Into::into)))
.map(|x| x.map(Cow::Owned))
.collect::<Result<Vec<_>>>()?
}
None => {
for i in 0..len {
let Some(arg0) = as_veclit(column.get_ref(i))? else {
if is_update {
self.has_null = true;
self.product = None;
}
return Ok(());
};
let vec_column = DVectorView::from_slice(&arg0, arg0.len());
*self.inner(vec_column.len()) =
(*self.inner(vec_column.len())).component_mul(&vec_column);
}
DataType::Binary => {
let arr: &BinaryArray = values[0].as_binary();
arr.iter()
.filter_map(|x| x.map(|b| binlit_as_veclit(b).map_err(Into::into)))
.collect::<Result<Vec<_>>>()?
}
_ => {
return Err(datafusion_common::DataFusionError::NotImplemented(format!(
"unsupported data type {} for `VEC_PRODUCT`",
values[0].data_type()
)))
}
};
if vectors.len() != values[0].len() {
if is_update {
self.has_null = true;
self.product = None;
}
return Ok(());
}
vectors.iter().for_each(|v| {
let v = DVectorView::from_slice(v, v.len());
let inner = self.inner(v.len());
*inner = inner.component_mul(&v);
});
Ok(())
}
}
impl Accumulator for VectorProduct {
fn state(&self) -> common_query::error::Result<Vec<Value>> {
fn state(&mut self) -> Result<Vec<ScalarValue>> {
self.evaluate().map(|v| vec![v])
}
fn update_batch(&mut self, values: &[VectorRef]) -> common_query::error::Result<()> {
fn update_batch(&mut self, values: &[ArrayRef]) -> Result<()> {
self.update(values, true)
}
fn merge_batch(&mut self, states: &[VectorRef]) -> common_query::error::Result<()> {
fn merge_batch(&mut self, states: &[ArrayRef]) -> Result<()> {
self.update(states, false)
}
fn evaluate(&self) -> common_query::error::Result<Value> {
fn evaluate(&mut self) -> Result<ScalarValue> {
match &self.product {
None => Ok(Value::Null),
Some(vector) => {
let v = vector.as_slice();
Ok(Value::from(veclit_to_binlit(v)))
}
None => Ok(ScalarValue::Binary(None)),
Some(vector) => Ok(ScalarValue::Binary(Some(veclit_to_binlit(
vector.as_slice(),
)))),
}
}
fn size(&self) -> usize {
size_of_val(self)
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use datatypes::vectors::{ConstantVector, StringVector};
use datatypes::scalars::ScalarVector;
use datatypes::vectors::{ConstantVector, StringVector, Vector};
use super::*;
@@ -165,59 +164,60 @@ mod tests {
vec_product.update_batch(&[]).unwrap();
assert!(vec_product.product.is_none());
assert!(!vec_product.has_null);
assert_eq!(Value::Null, vec_product.evaluate().unwrap());
assert_eq!(ScalarValue::Binary(None), vec_product.evaluate().unwrap());
// test update one not-null value
let mut vec_product = VectorProduct::default();
let v: Vec<VectorRef> = vec![Arc::new(StringVector::from(vec![Some(
let v: Vec<ArrayRef> = vec![Arc::new(StringArray::from(vec![Some(
"[1.0,2.0,3.0]".to_string(),
)]))];
vec_product.update_batch(&v).unwrap();
assert_eq!(
Value::from(veclit_to_binlit(&[1.0, 2.0, 3.0])),
ScalarValue::Binary(Some(veclit_to_binlit(&[1.0, 2.0, 3.0]))),
vec_product.evaluate().unwrap()
);
// test update one null value
let mut vec_product = VectorProduct::default();
let v: Vec<VectorRef> = vec![Arc::new(StringVector::from(vec![Option::<String>::None]))];
let v: Vec<ArrayRef> = vec![Arc::new(StringArray::from(vec![Option::<String>::None]))];
vec_product.update_batch(&v).unwrap();
assert_eq!(Value::Null, vec_product.evaluate().unwrap());
assert_eq!(ScalarValue::Binary(None), vec_product.evaluate().unwrap());
// test update no null-value batch
let mut vec_product = VectorProduct::default();
let v: Vec<VectorRef> = vec![Arc::new(StringVector::from(vec![
let v: Vec<ArrayRef> = vec![Arc::new(StringArray::from(vec![
Some("[1.0,2.0,3.0]".to_string()),
Some("[4.0,5.0,6.0]".to_string()),
Some("[7.0,8.0,9.0]".to_string()),
]))];
vec_product.update_batch(&v).unwrap();
assert_eq!(
Value::from(veclit_to_binlit(&[28.0, 80.0, 162.0])),
ScalarValue::Binary(Some(veclit_to_binlit(&[28.0, 80.0, 162.0]))),
vec_product.evaluate().unwrap()
);
// test update null-value batch
let mut vec_product = VectorProduct::default();
let v: Vec<VectorRef> = vec![Arc::new(StringVector::from(vec![
let v: Vec<ArrayRef> = vec![Arc::new(StringArray::from(vec![
Some("[1.0,2.0,3.0]".to_string()),
None,
Some("[7.0,8.0,9.0]".to_string()),
]))];
vec_product.update_batch(&v).unwrap();
assert_eq!(Value::Null, vec_product.evaluate().unwrap());
assert_eq!(ScalarValue::Binary(None), vec_product.evaluate().unwrap());
// test update with constant vector
let mut vec_product = VectorProduct::default();
let v: Vec<VectorRef> = vec![Arc::new(ConstantVector::new(
let v: Vec<ArrayRef> = vec![Arc::new(ConstantVector::new(
Arc::new(StringVector::from_vec(vec!["[1.0,2.0,3.0]".to_string()])),
4,
))];
))
.to_arrow_array()];
vec_product.update_batch(&v).unwrap();
assert_eq!(
Value::from(veclit_to_binlit(&[4.0, 8.0, 12.0])),
ScalarValue::Binary(Some(veclit_to_binlit(&[1.0, 16.0, 81.0]))),
vec_product.evaluate().unwrap()
);
}
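// The element-wise semantics of `vec_product`, sketched on plain slices (no arrow arrays):
// every dimension is multiplied independently across rows, so the batch used in the test
// above, [1,2,3] x [4,5,6] x [7,8,9], folds to [28.0, 80.0, 162.0]. Sketch only.
#[allow(dead_code)]
fn vec_product_sketch(rows: &[Vec<f32>]) -> Option<Vec<f32>> {
    let mut acc: Option<Vec<f32>> = None;
    for row in rows {
        let acc_vec = acc.get_or_insert_with(|| vec![1.0; row.len()]);
        for (a, v) in acc_vec.iter_mut().zip(row) {
            *a *= v;
        }
    }
    acc
}
// vec_product_sketch(&[vec![1.0, 2.0, 3.0], vec![4.0, 5.0, 6.0], vec![7.0, 8.0, 9.0]])
//     == Some(vec![28.0, 80.0, 162.0])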

View File

@@ -14,19 +14,18 @@
use std::sync::Arc;
use common_macro::{as_aggr_func_creator, AggrFuncTypeStore};
use common_query::error::{CreateAccumulatorSnafu, Error, InvalidFuncArgsSnafu};
use common_query::logical_plan::{
create_aggregate_function, Accumulator, AggregateFunctionCreator,
use arrow::array::{Array, ArrayRef, AsArray, BinaryArray, StringArray};
use arrow_schema::{DataType, Field};
use datafusion_common::{Result, ScalarValue};
use datafusion_expr::{
Accumulator, AggregateUDF, Signature, SimpleAggregateUDF, TypeSignature, Volatility,
};
use common_query::prelude::AccumulatorCreatorFunction;
use datafusion_expr::AggregateUDF;
use datatypes::prelude::{ConcreteDataType, Value, *};
use datatypes::vectors::VectorRef;
use datafusion_functions_aggregate_common::accumulator::AccumulatorArgs;
use nalgebra::{Const, DVectorView, Dyn, OVector};
use snafu::ensure;
use crate::scalars::vector::impl_conv::{as_veclit, as_veclit_if_const, veclit_to_binlit};
use crate::scalars::vector::impl_conv::{
binlit_as_veclit, parse_veclit_from_strlit, veclit_to_binlit,
};
/// The accumulator for the `vec_sum` aggregate function.
#[derive(Debug, Default)]
@@ -35,57 +34,42 @@ pub struct VectorSum {
has_null: bool,
}
#[as_aggr_func_creator]
#[derive(Debug, Default, AggrFuncTypeStore)]
pub struct VectorSumCreator {}
impl AggregateFunctionCreator for VectorSumCreator {
fn creator(&self) -> AccumulatorCreatorFunction {
let creator: AccumulatorCreatorFunction = Arc::new(move |types: &[ConcreteDataType]| {
ensure!(
types.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect exactly one, have: {}",
types.len()
)
}
);
let input_type = &types[0];
match input_type {
ConcreteDataType::String(_) | ConcreteDataType::Binary(_) => {
Ok(Box::new(VectorSum::default()))
}
_ => {
let err_msg = format!(
"\"VEC_SUM\" aggregate function not support data type {:?}",
input_type.logical_type_id(),
);
CreateAccumulatorSnafu { err_msg }.fail()?
}
}
});
creator
}
fn output_type(&self) -> common_query::error::Result<ConcreteDataType> {
Ok(ConcreteDataType::binary_datatype())
}
fn state_types(&self) -> common_query::error::Result<Vec<ConcreteDataType>> {
Ok(vec![self.output_type()?])
}
}
impl VectorSum {
/// Create a new `AggregateUDF` for the `vec_sum` aggregate function.
pub fn uadf_impl() -> AggregateUDF {
create_aggregate_function(
"vec_sum".to_string(),
1,
Arc::new(VectorSumCreator::default()),
)
.into()
let signature = Signature::one_of(
vec![
TypeSignature::Exact(vec![DataType::Utf8]),
TypeSignature::Exact(vec![DataType::Binary]),
],
Volatility::Immutable,
);
let udaf = SimpleAggregateUDF::new_with_signature(
"vec_sum",
signature,
DataType::Binary,
Arc::new(Self::accumulator),
vec![Arc::new(Field::new("x", DataType::Binary, true))],
);
AggregateUDF::from(udaf)
}
fn accumulator(args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {
if args.schema.fields().len() != 1 {
return Err(datafusion_common::DataFusionError::Internal(format!(
"expect creating `VEC_SUM` with only one input field, actual {}",
args.schema.fields().len()
)));
}
let t = args.schema.field(0).data_type();
if !matches!(t, DataType::Utf8 | DataType::Binary) {
return Err(datafusion_common::DataFusionError::Internal(format!(
"unexpected input datatype {t} when creating `VEC_SUM`"
)));
}
Ok(Box::new(VectorSum::default()))
}
fn inner(&mut self, len: usize) -> &mut OVector<f32, Dyn> {
@@ -93,62 +77,87 @@ impl VectorSum {
.get_or_insert_with(|| OVector::zeros_generic(Dyn(len), Const::<1>))
}
fn update(&mut self, values: &[VectorRef], is_update: bool) -> Result<(), Error> {
fn update(&mut self, values: &[ArrayRef], is_update: bool) -> Result<()> {
if values.is_empty() || self.has_null {
return Ok(());
};
let column = &values[0];
let len = column.len();
match as_veclit_if_const(column)? {
Some(column) => {
let vec_column = DVectorView::from_slice(&column, column.len()).scale(len as f32);
*self.inner(vec_column.len()) += vec_column;
}
None => {
for i in 0..len {
let Some(arg0) = as_veclit(column.get_ref(i))? else {
match values[0].data_type() {
DataType::Utf8 => {
let arr: &StringArray = values[0].as_string();
for s in arr.iter() {
let Some(s) = s else {
if is_update {
self.has_null = true;
self.sum = None;
}
return Ok(());
};
let vec_column = DVectorView::from_slice(&arg0, arg0.len());
let values = parse_veclit_from_strlit(s)?;
let vec_column = DVectorView::from_slice(&values, values.len());
*self.inner(vec_column.len()) += vec_column;
}
}
DataType::Binary => {
let arr: &BinaryArray = values[0].as_binary();
for b in arr.iter() {
let Some(b) = b else {
if is_update {
self.has_null = true;
self.sum = None;
}
return Ok(());
};
let values = binlit_as_veclit(b)?;
let vec_column = DVectorView::from_slice(&values, values.len());
*self.inner(vec_column.len()) += vec_column;
}
}
_ => {
return Err(datafusion_common::DataFusionError::NotImplemented(format!(
"unsupported data type {} for `VEC_SUM`",
values[0].data_type()
)))
}
}
Ok(())
}
}
impl Accumulator for VectorSum {
fn state(&self) -> common_query::error::Result<Vec<Value>> {
fn state(&mut self) -> Result<Vec<ScalarValue>> {
self.evaluate().map(|v| vec![v])
}
fn update_batch(&mut self, values: &[VectorRef]) -> common_query::error::Result<()> {
fn update_batch(&mut self, values: &[ArrayRef]) -> Result<()> {
self.update(values, true)
}
fn merge_batch(&mut self, states: &[VectorRef]) -> common_query::error::Result<()> {
fn merge_batch(&mut self, states: &[ArrayRef]) -> Result<()> {
self.update(states, false)
}
fn evaluate(&self) -> common_query::error::Result<Value> {
fn evaluate(&mut self) -> Result<ScalarValue> {
match &self.sum {
None => Ok(Value::Null),
Some(vector) => Ok(Value::from(veclit_to_binlit(vector.as_slice()))),
None => Ok(ScalarValue::Binary(None)),
Some(vector) => Ok(ScalarValue::Binary(Some(veclit_to_binlit(
vector.as_slice(),
)))),
}
}
fn size(&self) -> usize {
size_of_val(self)
}
}
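For context (not shown in this diff), the rewritten accumulator participates in DataFusion's usual two-phase aggregation: `update_batch` consumes raw Utf8/Binary arrays, `state` serializes the running sum as a binary vector literal, and `merge_batch` folds those serialized partial states together. A minimal sketch of that lifecycle, assuming `Arc`, `ScalarValue`, `ArrayRef`, `StringArray`, and `BinaryArray` are in scope as they are in this file:
fn two_phase_vec_sum() -> datafusion_common::Result<ScalarValue> {
    // Partial aggregation over one partition of string-encoded vectors.
    let mut partial = VectorSum::default();
    let input: ArrayRef = Arc::new(StringArray::from(vec![Some("[1.0,2.0]".to_string())]));
    partial.update_batch(&[input])?;
    // `state()` yields the partial sum as `ScalarValue::Binary`.
    let state = partial.state()?;
    // Final aggregation merges the serialized partial states.
    let mut total = VectorSum::default();
    let state_array: ArrayRef = Arc::new(BinaryArray::from_iter(state.iter().map(|s| match s {
        ScalarValue::Binary(b) => b.clone(),
        _ => None,
    })));
    total.merge_batch(&[state_array])?;
    total.evaluate()
}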
#[cfg(test)]
mod tests {
use std::sync::Arc;
use datatypes::vectors::{ConstantVector, StringVector};
use arrow::array::StringArray;
use datatypes::scalars::ScalarVector;
use datatypes::vectors::{ConstantVector, StringVector, Vector};
use super::*;
@@ -159,57 +168,58 @@ mod tests {
vec_sum.update_batch(&[]).unwrap();
assert!(vec_sum.sum.is_none());
assert!(!vec_sum.has_null);
assert_eq!(Value::Null, vec_sum.evaluate().unwrap());
assert_eq!(ScalarValue::Binary(None), vec_sum.evaluate().unwrap());
// test update one not-null value
let mut vec_sum = VectorSum::default();
let v: Vec<VectorRef> = vec![Arc::new(StringVector::from(vec![Some(
let v: Vec<ArrayRef> = vec![Arc::new(StringArray::from(vec![Some(
"[1.0,2.0,3.0]".to_string(),
)]))];
vec_sum.update_batch(&v).unwrap();
assert_eq!(
Value::from(veclit_to_binlit(&[1.0, 2.0, 3.0])),
ScalarValue::Binary(Some(veclit_to_binlit(&[1.0, 2.0, 3.0]))),
vec_sum.evaluate().unwrap()
);
// test update one null value
let mut vec_sum = VectorSum::default();
let v: Vec<VectorRef> = vec![Arc::new(StringVector::from(vec![Option::<String>::None]))];
let v: Vec<ArrayRef> = vec![Arc::new(StringArray::from(vec![Option::<String>::None]))];
vec_sum.update_batch(&v).unwrap();
assert_eq!(Value::Null, vec_sum.evaluate().unwrap());
assert_eq!(ScalarValue::Binary(None), vec_sum.evaluate().unwrap());
// test update no null-value batch
let mut vec_sum = VectorSum::default();
let v: Vec<VectorRef> = vec![Arc::new(StringVector::from(vec![
let v: Vec<ArrayRef> = vec![Arc::new(StringArray::from(vec![
Some("[1.0,2.0,3.0]".to_string()),
Some("[4.0,5.0,6.0]".to_string()),
Some("[7.0,8.0,9.0]".to_string()),
]))];
vec_sum.update_batch(&v).unwrap();
assert_eq!(
Value::from(veclit_to_binlit(&[12.0, 15.0, 18.0])),
ScalarValue::Binary(Some(veclit_to_binlit(&[12.0, 15.0, 18.0]))),
vec_sum.evaluate().unwrap()
);
// test update null-value batch
let mut vec_sum = VectorSum::default();
let v: Vec<VectorRef> = vec![Arc::new(StringVector::from(vec![
let v: Vec<ArrayRef> = vec![Arc::new(StringArray::from(vec![
Some("[1.0,2.0,3.0]".to_string()),
None,
Some("[7.0,8.0,9.0]".to_string()),
]))];
vec_sum.update_batch(&v).unwrap();
assert_eq!(Value::Null, vec_sum.evaluate().unwrap());
assert_eq!(ScalarValue::Binary(None), vec_sum.evaluate().unwrap());
// test update with constant vector
let mut vec_sum = VectorSum::default();
let v: Vec<VectorRef> = vec![Arc::new(ConstantVector::new(
let v: Vec<ArrayRef> = vec![Arc::new(ConstantVector::new(
Arc::new(StringVector::from_vec(vec!["[1.0,2.0,3.0]".to_string()])),
4,
))];
))
.to_arrow_array()];
vec_sum.update_batch(&v).unwrap();
assert_eq!(
Value::from(veclit_to_binlit(&[4.0, 8.0, 12.0])),
ScalarValue::Binary(Some(veclit_to_binlit(&[4.0, 8.0, 12.0]))),
vec_sum.evaluate().unwrap()
);
}
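Because `uadf_impl` now returns a stock DataFusion `AggregateUDF`, registration needs no custom glue. A minimal sketch, not taken from this diff; the `vectors` table and `embedding` column are hypothetical:
use datafusion::prelude::SessionContext;
async fn query_vec_sum() -> datafusion_common::Result<()> {
    let ctx = SessionContext::new();
    // Register the UDAF built by `VectorSum::uadf_impl`.
    ctx.register_udaf(VectorSum::uadf_impl());
    // The aggregate accepts Utf8 or Binary vector literals.
    let df = ctx.sql("SELECT vec_sum(embedding) FROM vectors").await?;
    df.show().await?;
    Ok(())
}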


@@ -12,28 +12,24 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use arrow::datatypes::DataType as ArrowDataType;
use common_error::ext::BoxedError;
use common_macro::admin_fn;
use common_query::error::{
ExecuteSnafu, InvalidFuncArgsSnafu, MissingFlowServiceHandlerSnafu, Result,
UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::Signature;
use datafusion::logical_expr::Volatility;
use datafusion_expr::{Signature, Volatility};
use datatypes::value::{Value, ValueRef};
use session::context::QueryContextRef;
use snafu::{ensure, ResultExt};
use sql::ast::ObjectNamePartExt;
use sql::parser::ParserContext;
use store_api::storage::ConcreteDataType;
use crate::handlers::FlowServiceHandlerRef;
fn flush_signature() -> Signature {
Signature::uniform(
1,
vec![ConcreteDataType::string_datatype()],
Volatility::Immutable,
)
Signature::uniform(1, vec![ArrowDataType::Utf8], Volatility::Immutable)
}
#[admin_fn(
@@ -85,9 +81,9 @@ fn parse_flush_flow(
let (catalog_name, flow_name) = match &obj_name.0[..] {
[flow_name] => (
query_ctx.current_catalog().to_string(),
flow_name.value.clone(),
flow_name.to_string_unquoted(),
),
[catalog, flow_name] => (catalog.value.clone(), flow_name.value.clone()),
[catalog, flow_name] => (catalog.to_string_unquoted(), flow_name.to_string_unquoted()),
_ => {
return InvalidFuncArgsSnafu {
err_msg: format!(
@@ -105,44 +101,55 @@ fn parse_flush_flow(
mod test {
use std::sync::Arc;
use datatypes::scalars::ScalarVector;
use datatypes::vectors::StringVector;
use session::context::QueryContext;
use super::*;
use crate::function::{AsyncFunction, FunctionContext};
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
#[test]
fn test_flush_flow_metadata() {
let f = FlushFlowFunction;
let factory: ScalarFunctionFactory = FlushFlowFunction::factory().into();
let f = factory.provide(FunctionContext::mock());
assert_eq!("flush_flow", f.name());
assert_eq!(
ConcreteDataType::uint64_datatype(),
f.return_type(&[]).unwrap()
);
assert_eq!(
f.signature(),
Signature::uniform(
1,
vec![ConcreteDataType::string_datatype()],
Volatility::Immutable,
)
assert_eq!(ArrowDataType::UInt64, f.return_type(&[]).unwrap());
let expected_signature = datafusion_expr::Signature::uniform(
1,
vec![ArrowDataType::Utf8],
datafusion_expr::Volatility::Immutable,
);
assert_eq!(*f.signature(), expected_signature);
}
#[tokio::test]
async fn test_missing_flow_service() {
let f = FlushFlowFunction;
let factory: ScalarFunctionFactory = FlushFlowFunction::factory().into();
let binding = factory.provide(FunctionContext::default());
let f = binding.as_async().unwrap();
let args = vec!["flow_name"];
let args = args
.into_iter()
.map(|arg| Arc::new(StringVector::from_slice(&[arg])) as _)
.collect::<Vec<_>>();
let flow_name_array = Arc::new(arrow::array::StringArray::from(vec!["flow_name"]));
let result = f.eval(FunctionContext::default(), &args).await.unwrap_err();
let columnar_args = vec![datafusion_expr::ColumnarValue::Array(flow_name_array as _)];
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: columnar_args,
arg_fields: vec![Arc::new(arrow::datatypes::Field::new(
"arg_0",
ArrowDataType::Utf8,
false,
))],
return_field: Arc::new(arrow::datatypes::Field::new(
"result",
ArrowDataType::UInt64,
true,
)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap_err();
assert_eq!(
"Missing FlowServiceHandler, not expected",
"Execution error: Handler error: Missing FlowServiceHandler, not expected",
result.to_string()
);
}
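The test above hand-builds `ScalarFunctionArgs`; the following hedged helper (not part of the diff) shows the general shape for a single-output function, assuming the field set visible in this diff (`args`, `arg_fields`, `return_field`, `number_rows`, `config_options`):
use std::sync::Arc;
use arrow::array::ArrayRef;
use arrow::datatypes::{DataType, Field};
use datafusion::logical_expr::{ColumnarValue, ScalarFunctionArgs};
use datafusion_common::config::ConfigOptions;
fn scalar_args_from_arrays(arrays: Vec<ArrayRef>, return_type: DataType) -> ScalarFunctionArgs {
    let number_rows = arrays.first().map(|a| a.len()).unwrap_or(0);
    // One nullable field per input array, named positionally.
    let arg_fields = arrays
        .iter()
        .enumerate()
        .map(|(i, a)| Arc::new(Field::new(format!("arg_{i}"), a.data_type().clone(), true)))
        .collect();
    ScalarFunctionArgs {
        args: arrays.into_iter().map(ColumnarValue::Array).collect(),
        arg_fields,
        return_field: Arc::new(Field::new("result", return_type, true)),
        number_rows,
        config_options: Arc::new(ConfigOptions::default()),
    }
}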


@@ -41,6 +41,12 @@ impl FunctionContext {
}
}
impl std::fmt::Display for FunctionContext {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "FunctionContext {{ query_ctx: {} }}", self.query_ctx)
}
}
impl Default for FunctionContext {
fn default() -> Self {
Self {
@@ -67,22 +73,3 @@ pub trait Function: fmt::Display + Sync + Send {
}
pub type FunctionRef = Arc<dyn Function>;
/// Async Scalar function trait
#[async_trait::async_trait]
pub trait AsyncFunction: fmt::Display + Sync + Send {
/// Returns the name of the function, should be unique.
fn name(&self) -> &str;
/// The returned data type of function execution.
fn return_type(&self, input_types: &[ConcreteDataType]) -> Result<ConcreteDataType>;
/// The signature of function.
fn signature(&self) -> Signature;
/// Evaluate the function, e.g. run/execute the function.
/// TODO(dennis): simplify the signature and refactor all the admin functions.
async fn eval(&self, _func_ctx: FunctionContext, _columns: &[VectorRef]) -> Result<VectorRef>;
}
pub type AsyncFunctionRef = Arc<dyn AsyncFunction>;


@@ -22,8 +22,8 @@ use crate::scalars::udf::create_udf;
/// A factory for creating `ScalarUDF`s that require a function context.
#[derive(Clone)]
pub struct ScalarFunctionFactory {
name: String,
factory: Arc<dyn Fn(FunctionContext) -> ScalarUDF + Send + Sync>,
pub name: String,
pub factory: Arc<dyn Fn(FunctionContext) -> ScalarUDF + Send + Sync>,
}
impl ScalarFunctionFactory {

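Making both fields public means downstream code can assemble a factory without going through a constructor. A hedged sketch, assuming `Arc`, `FunctionContext`, and DataFusion's `ScalarUDF` are in scope; `build_my_udf` is a hypothetical function returning a `ScalarUDF`:
fn my_factory() -> ScalarFunctionFactory {
    ScalarFunctionFactory {
        name: "my_func".to_string(),
        // Hypothetical closure; `build_my_udf` is not part of this change set.
        factory: Arc::new(|_ctx: FunctionContext| build_my_udf()),
    }
}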

@@ -20,10 +20,11 @@ use datafusion_expr::AggregateUDF;
use once_cell::sync::Lazy;
use crate::admin::AdminFunction;
use crate::aggrs::aggr_wrapper::StateMergeHelper;
use crate::aggrs::approximate::ApproximateFunction;
use crate::aggrs::count_hash::CountHash;
use crate::aggrs::vector::VectorFunction as VectorAggrFunction;
use crate::function::{AsyncFunctionRef, Function, FunctionRef};
use crate::function::{Function, FunctionRef};
use crate::function_factory::ScalarFunctionFactory;
use crate::scalars::date::DateFunction;
use crate::scalars::expression::ExpressionFunction;
@@ -41,11 +42,18 @@ use crate::system::SystemFunction;
#[derive(Default)]
pub struct FunctionRegistry {
functions: RwLock<HashMap<String, ScalarFunctionFactory>>,
async_functions: RwLock<HashMap<String, AsyncFunctionRef>>,
aggregate_functions: RwLock<HashMap<String, AggregateUDF>>,
}
impl FunctionRegistry {
/// Register a function in the registry by converting it into a `ScalarFunctionFactory`.
///
/// # Arguments
///
/// * `func` - An object that can be converted into a `ScalarFunctionFactory`.
///
/// The function is inserted into the internal function map, keyed by its name.
/// If a function with the same name already exists, it will be replaced.
pub fn register(&self, func: impl Into<ScalarFunctionFactory>) {
let func = func.into();
let _ = self
@@ -55,18 +63,12 @@ impl FunctionRegistry {
.insert(func.name().to_string(), func);
}
/// Register a scalar function in the registry.
pub fn register_scalar(&self, func: impl Function + 'static) {
self.register(Arc::new(func) as FunctionRef);
}
pub fn register_async(&self, func: AsyncFunctionRef) {
let _ = self
.async_functions
.write()
.unwrap()
.insert(func.name().to_string(), func);
}
/// Register an aggregate function in the registry.
pub fn register_aggr(&self, func: AggregateUDF) {
let _ = self
.aggregate_functions
@@ -75,28 +77,16 @@ impl FunctionRegistry {
.insert(func.name().to_string(), func);
}
pub fn get_async_function(&self, name: &str) -> Option<AsyncFunctionRef> {
self.async_functions.read().unwrap().get(name).cloned()
}
pub fn async_functions(&self) -> Vec<AsyncFunctionRef> {
self.async_functions
.read()
.unwrap()
.values()
.cloned()
.collect()
}
#[cfg(test)]
pub fn get_function(&self, name: &str) -> Option<ScalarFunctionFactory> {
self.functions.read().unwrap().get(name).cloned()
}
/// Returns a list of all scalar functions registered in the registry.
pub fn scalar_functions(&self) -> Vec<ScalarFunctionFactory> {
self.functions.read().unwrap().values().cloned().collect()
}
/// Returns a list of all aggregate functions registered in the registry.
pub fn aggregate_functions(&self) -> Vec<AggregateUDF> {
self.aggregate_functions
.read()
@@ -105,6 +95,11 @@ impl FunctionRegistry {
.cloned()
.collect()
}
/// Returns true if an aggregate function with the given name exists in the registry.
pub fn is_aggr_func_exist(&self, name: &str) -> bool {
self.aggregate_functions.read().unwrap().contains_key(name)
}
}
pub static FUNCTION_REGISTRY: Lazy<Arc<FunctionRegistry>> = Lazy::new(|| {
@@ -148,6 +143,9 @@ pub static FUNCTION_REGISTRY: Lazy<Arc<FunctionRegistry>> = Lazy::new(|| {
// CountHash function
CountHash::register(&function_registry);
// state function of supported aggregate functions
StateMergeHelper::register(&function_registry);
Arc::new(function_registry)
});
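A hedged sketch of how the trimmed-down registry is consumed, assuming `FunctionContext` and DataFusion's `ScalarUDF` are in scope; whether a given aggregate name such as `vec_sum` is present depends on what the registrars above install:
fn provided_scalar_udfs() -> Vec<ScalarUDF> {
    // One `ScalarUDF` per registered factory, bound to a default context.
    FUNCTION_REGISTRY
        .scalar_functions()
        .into_iter()
        .map(|f| f.provide(FunctionContext::default()))
        .collect()
}
fn has_vec_sum_aggr() -> bool {
    // Aggregates, including the reworked `vec_sum`, are now looked up by name only.
    FUNCTION_REGISTRY.is_aggr_func_exist("vec_sum")
}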


@@ -14,6 +14,7 @@
use std::sync::Arc;
use api::v1::meta::ReconcileRequest;
use async_trait::async_trait;
use catalog::CatalogManagerRef;
use common_base::AffectedRows;
@@ -65,6 +66,9 @@ pub trait ProcedureServiceHandler: Send + Sync {
/// Migrate a region from source peer to target peer, returns the procedure id if success.
async fn migrate_region(&self, request: MigrateRegionRequest) -> Result<Option<String>>;
/// Reconcile a table, database or catalog, returns the procedure id if success.
async fn reconcile(&self, request: ReconcileRequest) -> Result<Option<String>>;
/// Query the procedure's state by its id
async fn query_procedure_state(&self, pid: &str) -> Result<ProcedureStateResponse>;
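A hedged sketch of calling the new hook; it assumes `ReconcileRequest` (a prost-generated message) provides `Default` and that `Result` is the alias used by this trait:
async fn reconcile_with_defaults(handler: &dyn ProcedureServiceHandler) -> Result<Option<String>> {
    // An empty request is a placeholder; real callers fill in the target and strategy.
    let request = ReconcileRequest::default();
    // Returns the procedure id if the reconciliation was scheduled.
    handler.reconcile(request).await
}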


@@ -12,12 +12,15 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use common_query::error::{InvalidInputTypeSnafu, Result};
use api::v1::meta::ResolveStrategy;
use common_query::error::{
InvalidFuncArgsSnafu, InvalidInputTypeSnafu, Result, UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::{Signature, TypeSignature, Volatility};
use datatypes::prelude::ConcreteDataType;
use datatypes::types::cast::cast;
use datatypes::value::ValueRef;
use snafu::ResultExt;
use snafu::{OptionExt, ResultExt};
/// Create a function signature with oneof signatures of interleaving two arguments.
pub fn one_of_sigs2(args1: Vec<ConcreteDataType>, args2: Vec<ConcreteDataType>) -> Signature {
@@ -43,3 +46,64 @@ pub fn cast_u64(value: &ValueRef) -> Result<Option<u64>> {
})
.map(|v| v.as_u64())
}
/// Cast a [`ValueRef`] to u32, returns `None` if fails
pub fn cast_u32(value: &ValueRef) -> Result<Option<u32>> {
cast((*value).into(), &ConcreteDataType::uint32_datatype())
.context(InvalidInputTypeSnafu {
err_msg: format!(
"Failed to cast input into uint32, actual type: {:#?}",
value.data_type(),
),
})
.map(|v| v.as_u64().map(|v| v as u32))
}
/// Parse a resolve strategy from a string.
pub fn parse_resolve_strategy(strategy: &str) -> Result<ResolveStrategy> {
ResolveStrategy::from_str_name(strategy).context(InvalidFuncArgsSnafu {
err_msg: format!("Invalid resolve strategy: {}", strategy),
})
}
/// Default parallelism for reconcile operations.
pub fn default_parallelism() -> u32 {
64
}
/// Default resolve strategy for reconcile operations.
pub fn default_resolve_strategy() -> ResolveStrategy {
ResolveStrategy::UseLatest
}
/// Get the string value from the params.
///
/// # Errors
/// Returns an error if the input type is not a string.
pub fn get_string_from_params<'a>(
params: &'a [ValueRef<'a>],
index: usize,
fn_name: &'a str,
) -> Result<&'a str> {
let ValueRef::String(s) = &params[index] else {
return UnsupportedInputDataTypeSnafu {
function: fn_name,
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
Ok(s)
}
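A hedged sketch of how these helpers could combine when parsing the arguments of a reconcile-style admin function; the positional layout (target, strategy, parallelism) is illustrative and not taken from this diff:
fn parse_reconcile_args<'a>(params: &'a [ValueRef<'a>]) -> Result<(&'a str, ResolveStrategy, u32)> {
    // First argument: the reconcile target, which must be a string.
    let target = get_string_from_params(params, 0, "reconcile")?;
    // Optional second argument: resolve strategy, defaulting to `UseLatest`.
    let strategy = if params.len() > 1 {
        parse_resolve_strategy(get_string_from_params(params, 1, "reconcile")?)?
    } else {
        default_resolve_strategy()
    };
    // Optional third argument: parallelism, defaulting to 64.
    let parallelism = if params.len() > 2 {
        cast_u32(&params[2])?.unwrap_or_else(default_parallelism)
    } else {
        default_parallelism()
    };
    Ok((target, strategy, parallelism))
}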
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_resolve_strategy() {
assert_eq!(
parse_resolve_strategy("UseLatest").unwrap(),
ResolveStrategy::UseLatest
);
}
}


@@ -14,6 +14,7 @@
#![feature(let_chains)]
#![feature(try_blocks)]
#![feature(assert_matches)]
mod admin;
mod flush_flow;


@@ -113,6 +113,8 @@ mod tests {
use common_query::prelude::ScalarValue;
use datafusion::arrow::array::BooleanArray;
use datafusion_common::config::ConfigOptions;
use datatypes::arrow::datatypes::Field;
use datatypes::data_type::ConcreteDataType;
use datatypes::prelude::VectorRef;
use datatypes::vectors::{BooleanVector, ConstantVector};
@@ -162,10 +164,21 @@ mod tests {
]))),
];
let arg_fields = vec![
Arc::new(Field::new("a", args[0].data_type(), false)),
Arc::new(Field::new("b", args[1].data_type(), false)),
];
let return_field = Arc::new(Field::new(
"x",
ConcreteDataType::boolean_datatype().as_arrow_type(),
false,
));
let args = ScalarFunctionArgs {
args,
arg_fields,
number_rows: 4,
return_type: &ConcreteDataType::boolean_datatype().as_arrow_type(),
return_field,
config_options: Arc::new(ConfigOptions::default()),
};
match udf.invoke_with_args(args).unwrap() {
datafusion_expr::ColumnarValue::Array(x) => {

Some files were not shown because too many files have changed in this diff.