Compare commits

...

79 Commits

Author SHA1 Message Date
Ning Sun
f7202bc176 feat: pgwire 0.33 update (#7048) 2025-10-03 08:06:05 +00:00
Yingwen
b7045e57a5 feat: enable zstd for bulk memtable encoded parts (#7045)
feat: enable zstd in bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-02 16:05:33 +00:00
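The zstd change above concerns the parquet encoding of bulk memtable parts. As a rough sketch (not the actual memtable code; the helper name is illustrative and the parquet crate's `zstd` feature must be enabled), turning on zstd comes down to selecting it in the parquet writer properties:

```rust
use parquet::basic::{Compression, ZstdLevel};
use parquet::file::properties::WriterProperties;

/// Illustrative only: writer properties that compress encoded parts with zstd.
fn zstd_writer_properties() -> WriterProperties {
    // Level 3 is a common speed/ratio trade-off; the real default may differ.
    let level = ZstdLevel::try_new(3).expect("3 is a valid zstd level");
    WriterProperties::builder()
        .set_compression(Compression::ZSTD(level))
        .build()
}
```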
Ning Sun
660790148d fix: various typos reported by CI (#7047)
* fix: various typos reported by CI

* fix: additional typo
2025-10-02 15:11:09 +00:00
zyy17
d777e8c52f refactor: add cgroup metrics collector (#7038)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-30 02:26:02 +00:00
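For context on the cgroup collector above: on a cgroup v2 host, the limits behind gauges such as greptime_memory_limit_in_bytes and greptime_cpu_limit_in_millicores (mentioned in the next commit) can be read from the unified hierarchy. A minimal sketch, assuming cgroup v2 paths and not mirroring the collector's actual code:

```rust
use std::fs;

/// Returns (memory limit in bytes, cpu limit in millicores), when set.
fn read_cgroup_v2_limits() -> (Option<u64>, Option<u64>) {
    // memory.max holds either "max" (unlimited) or the limit in bytes.
    let memory_limit = fs::read_to_string("/sys/fs/cgroup/memory.max")
        .ok()
        .and_then(|s| s.trim().parse::<u64>().ok());

    // cpu.max holds "<quota> <period>" (or "max <period>");
    // millicores = quota / period * 1000.
    let cpu_millicores = fs::read_to_string("/sys/fs/cgroup/cpu.max")
        .ok()
        .and_then(|s| {
            let mut parts = s.split_whitespace();
            let quota = parts.next()?.parse::<u64>().ok()?;
            let period = parts.next()?.parse::<u64>().ok()?;
            Some(quota * 1000 / period)
        });

    (memory_limit, cpu_millicores)
}
```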
zyy17
efa616ce44 fix: use instance labels to fetch greptime_memory_limit_in_bytes and greptime_cpu_limit_in_millicores metrics (#7043)
fix: remove unnecessary labels of standalone dashboard.json

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-29 11:43:35 +00:00
LFC
5b13fba65b refactor: make Function trait a simple shim of DataFusion UDF (#7036)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-29 09:07:39 +00:00
LFC
aa05b3b993 feat: add max_connection_age config to grpc server (#7031)
* feat: add `max_connection_age` config to grpc server

Signed-off-by: luofucong <luofc@foxmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-09-29 07:32:43 +00:00
fys
c4a7cc0adb chore: improve create trigger display (#7027)
* chore: improve create_trigger_statement display

* improve display of create trigger

* add components for frontend

* Revert "add components for frontend"

This reverts commit 8d71540a72.
2025-09-29 02:22:43 +00:00
Yingwen
90d37cb10e fix: fix panic and limit concurrency in flat format (#7035)
* feat: add a semaphore to control flush concurrency

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: build FlatSchemaOptions from encoding in FlatWriteFormat

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove allow dead_code

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle sparse encoding in FlatCompatBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: add time index column in try_new_compact_sparse

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test for compaction and sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-29 02:20:06 +00:00
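The first item in the commit above ("add a semaphore to control flush concurrency") follows a standard tokio pattern. A minimal sketch with a hypothetical limit and task shape, not the engine's real flush path:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// Hypothetical cap; the real value would come from engine configuration.
const MAX_CONCURRENT_FLUSHES: usize = 4;

async fn flush_region(region_id: u64, permits: Arc<Semaphore>) {
    // Every flush task holds a permit, so at most MAX_CONCURRENT_FLUSHES run at once.
    let _permit = permits.acquire_owned().await.expect("semaphore closed");
    // ... encode memtable parts and write the SST while holding the permit ...
    println!("flushed region {region_id}");
}

#[tokio::main]
async fn main() {
    let permits = Arc::new(Semaphore::new(MAX_CONCURRENT_FLUSHES));
    let tasks: Vec<_> = (0..16)
        .map(|id| tokio::spawn(flush_region(id, permits.clone())))
        .collect();
    for task in tasks {
        task.await.unwrap();
    }
}
```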
localhost
4a3c5f85e5 fix: fix test_resolve_relative_path_relative on windows (#7039) 2025-09-28 13:03:57 +00:00
discord9
3ca5c77d91 chore: not warning (#7037)
Signed-off-by: discord9 <discord9@163.com>
2025-09-28 08:11:27 +00:00
discord9
8bcf4a8ab5 test: update unit test by passing extra sort columns (#7030)
* tests: fix unit test by passing one sort column

Signed-off-by: discord9 <discord9@163.com>

* chore: per copilot

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-28 03:22:43 +00:00
zyy17
0717773f62 refactor!: add enable_read_cache config to support disable read cache explicitly (#6834)
* refactor: add `enable_read_cache` config to support disable read cache explicitly

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: if `cache_path` is empty and `enable_read_cache` is true, set the default cache dir

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: remove the unnecessary Option type for `ObjectStorageCacheConfig`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: sanitize cache config in `DatanodeOptions` and `StandaloneOptions`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: code review comment

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: apply code review comments

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-26 09:44:12 +00:00
shuiyisong
195ed73448 chore: disable file not exist on watch_file_user_provider (#7028)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-26 09:29:59 +00:00
LFC
243dbde3d5 refactor: rewrite some UDFs to DataFusion style (final part) (#7023)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-26 09:24:29 +00:00
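Several commits in this range ("rewrite some UDFs to DataFusion style", parts 2-4 and final, plus the h3 rewrite further down) move functions onto DataFusion's `ScalarUDFImpl` trait. A rough sketch of the target shape with a toy function; exact trait methods and item paths shift between DataFusion releases, so treat this as illustrative rather than drop-in code:

```rust
use std::any::Any;
use std::sync::Arc;

use datafusion::arrow::array::{ArrayRef, Float64Array};
use datafusion::arrow::datatypes::DataType;
use datafusion::error::Result;
use datafusion::logical_expr::{
    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,
};

/// A toy scalar UDF that doubles a float column, written in the DataFusion style.
#[derive(Debug)]
struct DoubleUdf {
    signature: Signature,
}

impl DoubleUdf {
    fn new() -> Self {
        Self {
            signature: Signature::exact(vec![DataType::Float64], Volatility::Immutable),
        }
    }
}

impl ScalarUDFImpl for DoubleUdf {
    fn as_any(&self) -> &dyn Any {
        self
    }

    fn name(&self) -> &str {
        "double"
    }

    fn signature(&self) -> &Signature {
        &self.signature
    }

    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {
        Ok(DataType::Float64)
    }

    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {
        // Materialize the arguments into arrays and compute the result column.
        let arrays = ColumnarValue::values_to_arrays(&args.args)?;
        let input = arrays[0]
            .as_any()
            .downcast_ref::<Float64Array>()
            .expect("type checked by the signature");
        let doubled: Float64Array = input.iter().map(|v| v.map(|x| x * 2.0)).collect();
        Ok(ColumnarValue::Array(Arc::new(doubled) as ArrayRef))
    }
}
```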
discord9
aca8b690d1 fix: step aggr merge phase not order nor filter (#6998)
* fix: not order

Signed-off-by: discord9 <discord9@163.com>

* test: redacted

Signed-off-by: discord9 <discord9@163.com>

* feat: fix up state wrapper

Signed-off-by: discord9 <discord9@163.com>

* df last_value state not as promised!

Signed-off-by: discord9 <discord9@163.com>

* fix?: could fix better

Signed-off-by: discord9 <discord9@163.com>

* test: unstable result

Signed-off-by: discord9 <discord9@163.com>

* fix: work around by fixing state

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* chore: finish some todo

Signed-off-by: discord9 <discord9@163.com>

* chore: per copilot

Signed-off-by: discord9 <discord9@163.com>

* refactor: not fix but just notify mismatch

Signed-off-by: discord9 <discord9@163.com>

* chore: warn -> debug state mismatch

Signed-off-by: discord9 <discord9@163.com>

* chore: refine error msg

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness add last_value date_bin test

Signed-off-by: discord9 <discord9@163.com>

* ?: substrait order by decode failure

Signed-off-by: discord9 <discord9@163.com>

* unit test reproduce that

Signed-off-by: discord9 <discord9@163.com>

* feat: support state wrapper's order serde in substrait

Signed-off-by: discord9 <discord9@163.com>

* refactor: stuff

Signed-off-by: discord9 <discord9@163.com>

* test: standalone/distributed different exec

Signed-off-by: discord9 <discord9@163.com>

* fmt

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: closure

Signed-off-by: discord9 <discord9@163.com>

* test: first value order by

Signed-off-by: discord9 <discord9@163.com>

* refactor: per cr

Signed-off-by: discord9 <discord9@163.com>

* feat: ScanHint last_value last row selector

Signed-off-by: discord9 <discord9@163.com>

* docs: per cr

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-26 09:12:45 +00:00
ZonaHe
9564180a6a feat: update dashboard to v0.11.6 (#7026)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-09-26 06:35:37 +00:00
dennis zhuang
17d16da483 feat: supports expression in TQL params (#7014)
* feat: supports expression in TQL params

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: comment

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-26 03:43:50 +00:00
Ning Sun
c1acce9943 refactor: cleanup datafusion-pg-catalog dependencies (#7025)
* refactor: cleanup datafusion-pg-catalog dependencies

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: toml format

* feat: update upstream

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-09-26 03:07:00 +00:00
Ruihang Xia
0790835c77 feat!: improve greptime_identity pipeline behavior (#6932)
* flat by default, store array in string

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* expose max_nested_levels param, store string instead of error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove flatten option

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unused errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-25 15:28:28 +00:00
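To make the behavior change above concrete: the identity pipeline now flattens nested objects by default up to a `max_nested_levels` limit and stores arrays (and anything deeper than the limit) as JSON strings instead of erroring. A small serde_json sketch illustrating the idea, not the pipeline's actual implementation:

```rust
use serde_json::{Map, Value};

/// Illustrative flattening: nested objects become dotted keys up to `max_nested_levels`;
/// arrays and over-deep objects are kept as their JSON string representation.
/// e.g. {"a": {"b": 1}, "c": [1, 2]} -> {"a.b": 1, "c": "[1,2]"}
fn flatten(
    prefix: &str,
    value: &Value,
    depth: usize,
    max_nested_levels: usize,
    out: &mut Map<String, Value>,
) {
    match value {
        Value::Object(map) if depth < max_nested_levels => {
            for (k, v) in map {
                let key = if prefix.is_empty() {
                    k.clone()
                } else {
                    format!("{prefix}.{k}")
                };
                flatten(&key, v, depth + 1, max_nested_levels, out);
            }
        }
        // Arrays and objects beyond the depth limit are stored as strings.
        Value::Array(_) | Value::Object(_) => {
            out.insert(prefix.to_string(), Value::String(value.to_string()));
        }
        other => {
            out.insert(prefix.to_string(), other.clone());
        }
    }
}
```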
zyy17
280df064c7 chore: add some trace logs in fetching data from cache and object store (#6877)
* chore: add some important debug logs

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: add traces logs in `fetch_byte_ranges()`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-25 11:54:22 +00:00
discord9
11a08d1381 fix: not step when aggr have order by/filter (#7015)
* fix: not applied

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* test: confirm order by not push down

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-25 08:43:18 +00:00
shuiyisong
06a4f0abea chore: add function for getting started on metasrv (#7022)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-25 08:24:23 +00:00
dennis zhuang
c6e5552f05 test: migrate aggregation tests from duckdb, part4 (#6965)
* test: migrate aggregation tests from duckdb, part4

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: rename tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: ignore zero weights test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove duplicated sql

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-25 08:00:17 +00:00
discord9
9c8ff1d8a0 fix: skip placeholder when partition columns (#7020)
Signed-off-by: discord9 <discord9@163.com>
2025-09-25 07:01:49 +00:00
Yingwen
cff9cb6327 feat: converts batches in old format to the flat format at query time (#6987)
* feat: use correct projection index for old format

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove allow dead_code from format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: check and convert old format to flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: sub primary key num from projection

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: always convert the batch in FlatRowGroupReader

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: Change &Option<&[]> to Option<&[]>

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: only build arrow schema once

adds a method flat_sst_arrow_schema_column_num() to get the field num

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Handle flat format and old format separately

Adds two structs ParquetFlat and ParquetPrimaryKeyToFlat.
ParquetPrimaryKeyToFlat delegates stats and projection to the
PrimaryKeyReadFormat.

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle non string tag correctly

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: do not register file cache twice

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: clean temp files

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add rows and bytes to flush success log

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: convert format in memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: add compaction flag to ScanInput

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: compaction should use old format for sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: merge schema use old format in sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: reads legacy format but not convert if skip_auto_convert

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: support sparse encoding in bulk parts

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-25 06:42:22 +00:00
Ning Sun
964dc254aa feat: upgraded pg_catalog support (#6918)
* refactor: add datafusion-postgres dependency

* refactor: move and include pg_catalog udfs

* chore: update upstream

* feat: register table function pg_get_keywords

* feat: bridge CatalogInfo for our CatalogManager

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: convert pg_catalog table to our system table

* feat: bridge system catalog with datafusion-postgres

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: add more udfs

* feat: add compatibility rewriter to postgres handler

* fix: various fix

* fmt: fix

* fix: use functions from pg_catalog library

* fmt

* fix: sqlness runner

Signed-off-by: Ning Sun <sunning@greptime.com>

* test: adopt arrow 56.0 to 56.1 memory size change

* fix: add additional udfs

* chore: format

* refactor: return None when creating system table failed

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: provide safety comments about expect usage

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-09-25 04:05:34 +00:00
dennis zhuang
91a727790d feat: supports permission mode for static user provider (#7017)
* feat: supports permission mode for static user provider

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: comment

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-25 03:45:31 +00:00
Weny Xu
07b9de620e fix(cli): fix FS object store handling of absolute paths (#7018)
* fix(cli): fix FS object store handling of absolute paths

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Update src/cli/src/utils.rs

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-09-25 03:38:33 +00:00
LFC
6d0dd2540e refactor: rewrite some UDFs to DataFusion style (part 4) (#7011)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-24 19:50:58 +00:00
fys
a14c01a807 feat: sql parse about show create trigger (#7016)
* feat: sql parse for show create trigger

* fix: build

* remove unused comment

* chore: add tests for parsing complete SQL
2025-09-24 15:52:15 +00:00
discord9
238ed003df fix: group by expr not as column in step aggr (#7008)
* fix: group by expr not as column

Signed-off-by: discord9 <discord9@163.com>

* test: dist analyzer date_bin

Signed-off-by: discord9 <discord9@163.com>

* ???fix wip

Signed-off-by: discord9 <discord9@163.com>

* fix: deduce using correct input fields

Signed-off-by: discord9 <discord9@163.com>

* refactor: clearer wrapper

Signed-off-by: discord9 <discord9@163.com>

* chore: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: rm todo

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-24 06:57:01 +00:00
Weny Xu
0c038f755f refactor(cli): refactor object storage config (#7009)
* refactor: refactor object storage config

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: public common config

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-24 06:50:47 +00:00
Lin Yihai
b5a8725582 feat(copy_to_csv): add date_format/timestamp_format/time_format. (#6995)
feat(copy_to_csv): add `date_format` and so on to `Copy ... to with` syntax

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>
2025-09-24 06:22:53 +00:00
Ruihang Xia
c7050831db fix: match promql column reference in case sensitive way (#7013)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-24 03:28:09 +00:00
Ruihang Xia
f65dcd12cc feat: refine failure detector (#7005)
* feat: refine failure detector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert back default value

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert change of test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-24 01:43:22 +00:00
Zhenchi
80c8ab42b0 feat: add ssts related system table (#6924)
* feat: add InformationExtension.inspect_datanode for datanode inspection

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* aggregate results from all datanodes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add ssts related system table

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* update sst entry

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix sqlness

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix sqlness

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-23 11:06:00 +00:00
discord9
4507736528 docs: laminar flow rfc (#6928)
* docs: laminar flow

Signed-off-by: discord9 <discord9@163.com>

* chore: wording details

Signed-off-by: discord9 <discord9@163.com>

* more details

Signed-off-by: discord9 <discord9@163.com>

* rearrange phases

Signed-off-by: discord9 <discord9@163.com>

* refactor: use embed frontend per review

Signed-off-by: discord9 <discord9@163.com>

* todo per review

Signed-off-by: discord9 <discord9@163.com>

* docs: seq read impl for laminar flow

Signed-off-by: discord9 <discord9@163.com>

* rename

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-23 08:16:06 +00:00
LFC
2712c5cd7a refactor: rewrite some UDFs to DataFusion style (part 3) (#6990)
* refactor: rewrite some UDFs to DataFusion style (part 3)

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-23 07:53:51 +00:00
Ruihang Xia
078379816c fix: incorrect timestamp resolution in information_schema.partitions table (#7004)
* fix: incorrect timestamp resolution in information_schema.partitions table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use second for all fields in partitions table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-23 06:13:10 +00:00
Ruihang Xia
6dc5fbe9a1 fix: promql range function has incorrect timestamps (#7006)
* fix: promql range function has incorrect timestamps

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-23 05:16:54 +00:00
ZonaHe
cd3fb5fd3e feat: update dashboard to v0.11.5 (#7001)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-09-23 05:05:17 +00:00
shyam
5fcca4eeab fix: make EXPIRE (keyword) parsing case-insensitive, when creating flow (#6997)
fix: make EXPIRE keyword case-insensitive in CREATE FLOW parser

Signed-off-by: Shyamnatesan <shyamnatesan21@gmail.com>
2025-09-22 12:07:04 +00:00
Weny Xu
b3d413258d feat: extract standalone functionality and introduce plugin-based router configuration (#7002)
* feat: extract standalone functionality and introduce plugin-based router configuration

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: ensure dump file does not exist

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: introduce `External` error

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-22 11:21:04 +00:00
discord9
03954e8b3b chore: update proto (#6992)
* chore: update proto

Signed-off-by: discord9 <discord9@163.com>

* update lockfile

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-19 09:01:46 +00:00
Yingwen
bd8f5d2b71 fix: print the output message of the error in admin fn macro (#6994)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-19 08:11:19 +00:00
Weny Xu
74721a06ba chore: improve error logging in WAL prune manager (#6993)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-19 07:08:28 +00:00
discord9
18e4839a17 feat: datanode side local gc worker (#6940)
* wip

Signed-off-by: discord9 <discord9@163.com>

* docs for behavior

Signed-off-by: discord9 <discord9@163.com>

* wip: handle outdated version

Signed-off-by: discord9 <discord9@163.com>

* feat: just retry

Signed-off-by: discord9 <discord9@163.com>

* feat: smaller lingering time

Signed-off-by: discord9 <discord9@163.com>

* refactor: partial per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: rm tmp file cnt

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: opt partial

Signed-off-by: discord9 <discord9@163.com>

* chore: rebase fix

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-19 03:20:13 +00:00
LFC
cbe0cf4a74 refactor: rewrite some UDFs to DataFusion style (part 2) (#6967)
* refactor: rewrite some UDFs to DataFusion style (part 2)

Signed-off-by: luofucong <luofc@foxmail.com>

* deal with vector UDFs `(scalar, scalar)` situation, and try getting the scalar value reference every time

Signed-off-by: luofucong <luofc@foxmail.com>

* reduce some vector literal parsing

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-18 06:37:27 +00:00
discord9
e26b98f452 refactor: put FileId to store-api (#6988)
* refactor: put FileId to store-api

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

* chore: lock file

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-18 03:20:42 +00:00
localhost
d8b967408e chore: modify LogExpr AggrFunc (#6948)
* chore: modify LogExpr AggrFunc

* chore: change AggrFunc range field

* chore: remove range from aggrfunc
2025-09-17 12:19:48 +00:00
Weny Xu
c35407fdce refactor: region follower management with unified interface (#6986)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-17 10:01:03 +00:00
Lei, HUANG
edf4b3f7f8 chore: unset tz env in test (#6984)
chore/unset-tz-env-in-test:
 ### Commit Message

 Add environment variable cleanup in timezone tests

 - Updated `timezone.rs` to include removal of the `TZ` environment variable in the `test_from_tz_string` function to ensure a clean test environment.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-17 08:48:38 +00:00
Yingwen
14550429e9 chore: reduce SeriesScan sender timeout (#6983)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-17 07:02:47 +00:00
shuiyisong
ff2da4903e fix: OTel metrics naming with Prometheus style (#6982)
* fix: otel metrics naming

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: otel metrics naming & add some tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-17 06:11:38 +00:00
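The naming fix above is about converting OTel metric names (dotted, e.g. `http.server.duration`) into Prometheus-compatible ones. A minimal sketch of the character-level normalization, using a hypothetical helper and ignoring the unit-suffix rules the full conversion also applies:

```rust
/// Prometheus metric names must match [a-zA-Z_:][a-zA-Z0-9_:]*, so invalid
/// characters (such as the dots in OTel names) become underscores.
fn prometheus_style_name(otel_name: &str) -> String {
    let mut name: String = otel_name
        .chars()
        .map(|c| {
            if c.is_ascii_alphanumeric() || c == '_' || c == ':' {
                c
            } else {
                '_'
            }
        })
        .collect();
    // A name must not start with a digit.
    if name.chars().next().is_some_and(|c| c.is_ascii_digit()) {
        name.insert(0, '_');
    }
    name
}

// prometheus_style_name("http.server.duration") == "http_server_duration"
```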
Lei, HUANG
c92ab4217f fix: avoid truncating SST statistics during flush (#6977)
fix/disable-parquet-stats-truncate:
 - **Update `memcomparable` Dependency**: Switched from crates.io to a Git repository for `memcomparable` in `Cargo.lock`, `mito-codec/Cargo.toml`, and removed it from `mito2/Cargo.toml`.
 - **Enhance Parquet Writer Properties**: Added `set_statistics_truncate_length` and `set_column_index_truncate_length` to `WriterProperties` in `parquet.rs`, `bulk/part.rs`, `partition_tree/data.rs`, and `writer.rs`.
 - **Add Test for Corrupt Scan**: Introduced a new test module `scan_corrupt.rs` in `mito2/src/engine` to verify handling of corrupt data.
 - **Update Test Data**: Modified test data in `flush.rs` to reflect changes in file sizes and sequences.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-17 03:02:52 +00:00
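The writer-property changes named above map onto two parquet options. A minimal sketch (the helper name is illustrative, not the flush code itself): passing `None` disables truncation so full min/max values survive in SST statistics and column indexes:

```rust
use parquet::file::properties::WriterProperties;

/// Illustrative: writer properties that keep statistics and column-index
/// min/max values untruncated.
fn writer_properties_without_truncation() -> WriterProperties {
    WriterProperties::builder()
        .set_statistics_truncate_length(None)
        .set_column_index_truncate_length(None)
        .build()
}
```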
Zhenchi
77981a7de5 fix: clean intm ignore notfound (#6971)
* fix: clean intm ignore notfound

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-17 02:58:03 +00:00
Lei, HUANG
9096c5ebbf chore: bump sequence on region edit (#6947)
* chore/update-sequence-on-region-edit:
 ### Commit Message

 Refactor `get_last_seq_num` Method Across Engines

 - **Change Return Type**: Updated the `get_last_seq_num` method to return `Result<SequenceNumber, BoxedError>` instead of `Result<Option<SequenceNumber>, BoxedError>` in the following files:
   - `src/datanode/src/tests.rs`
   - `src/file-engine/src/engine.rs`
   - `src/metric-engine/src/engine.rs`
   - `src/metric-engine/src/engine/read.rs`
   - `src/mito2/src/engine.rs`
   - `src/query/src/optimizer/test_util.rs`
   - `src/store-api/src/region_engine.rs`

 - **Enhance Region Edit Handling**: Modified `RegionWorkerLoop` in `src/mito2/src/worker/handle_manifest.rs` to update file sequences during region edits.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* add committed_sequence to RegionEdit

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 ### Commit Message

 Refactor sequence retrieval method

 - **Renamed Method**: Changed `get_last_seq_num` to `get_committed_sequence` across multiple files to better reflect its purpose of retrieving the latest committed sequence.
   - Affected files: `tests.rs`, `engine.rs` in `file-engine`, `metric-engine`, `mito2`, `test_util.rs`, and `region_engine.rs`.
 - **Removed Unused Struct**: Deleted `RegionSequencesRequest` struct from `region_request.rs` as it is no longer needed.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 **Add Committed Sequence Handling in Region Engine**

 - **`engine.rs`**: Introduced a new test module `bump_committed_sequence_test` to verify committed sequence handling.
 - **`bump_committed_sequence_test.rs`**: Added a test to ensure the committed sequence is correctly updated and persisted across region reopenings.
 - **`action.rs`**: Updated `RegionManifest` and `RegionManifestBuilder` to include `committed_sequence` for tracking.
 - **`manager.rs`**: Adjusted manifest size assertion to accommodate new committed sequence data.
 - **`opener.rs`**: Implemented logic to override committed sequence during region opening.
 - **`version.rs`**: Added `set_committed_sequence` method to update the committed sequence in `VersionControl`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 **Enhance `test_bump_committed_sequence` in `bump_committed_sequence_test.rs`**

 - Updated the test to include row operations using `build_rows`, `put_rows`, and `rows_schema` to verify the committed sequence behavior.
 - Adjusted assertions to reflect changes in committed sequence after row operations and region edits.
 - Added comments to clarify the expected behavior of committed sequence after reopening the region and replaying the WAL.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 **Enhance Region Sequence Management**

 - **`bump_committed_sequence_test.rs`**: Updated test to handle region reopening and sequence management, ensuring committed sequences are correctly set and verified after edits.
 - **`opener.rs`**: Improved committed sequence handling by overriding it only if the manifest's sequence is greater than the replayed sequence. Added logging for mutation sequence replay.
 - **`region_write_ctx.rs`**: Modified `push_mutation` and `push_bulk` methods to adopt sequence numbers from parameters, enhancing sequence management during write operations.
 - **`handle_write.rs`**: Updated `RegionWorkerLoop` to pass sequence numbers in `push_bulk` and `push_mutation` methods, ensuring consistent sequence handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 ### Remove Debug Logging from `opener.rs`

 - Removed debug logging for mutation sequences in `opener.rs` to clean up the output and improve performance.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-16 16:22:25 +00:00
Weny Xu
0a959f9920 feat: add TLS support for mysql backend (#6979)
* refactor: move etcd tls code to `common-meta`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: move postgre pool logic to `utils::postgre`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: setup mysql ssl options

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add test for mysql backend with tls

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: simplify certs generation

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-16 13:46:37 +00:00
discord9
85c1a91bae feat: support SubqueryAlias pushdown (#6963)
* wip enforce dist requirement rewriter

Signed-off-by: discord9 <discord9@163.com>

* feat: enforce dist req

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness result

Signed-off-by: discord9 <discord9@163.com>

* fix: double projection

Signed-off-by: discord9 <discord9@163.com>

* test: fix sqlness

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* docs: use btree map

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness explain&comment

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-16 13:27:35 +00:00
Weny Xu
7aba9a18fd chore: add tests for postgre backend with tls (#6973)
* chore: add tests for postgre backend with tls

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: minor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-16 11:03:11 +00:00
shuiyisong
4c18d140b4 fix: deadlock in dashmap (#6978)
* fix: deadlock in dashmap

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* Update src/frontend/src/instance.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: extract fast cache check and add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-09-16 10:49:28 +00:00
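For background on the deadlock fix above: dashmap guards hold a shard lock, so keeping a `Ref` from `get` alive while calling a mutating method on the same map can block forever. A generic sketch of the hazard and a safe pattern, not the frontend's actual cache code:

```rust
use dashmap::DashMap;

/// Names here are illustrative. Any `Ref` returned by `get` holds a shard read
/// lock; calling `insert`/`entry` on the same shard while that guard is alive
/// can deadlock on the shard's write lock.
fn bump_counter(cache: &DashMap<String, u64>, key: &str) {
    // Hazard (do not do this):
    // let hit = cache.get(key);           // shard read lock held by `hit`
    // cache.insert(key.to_string(), 1);   // needs the shard write lock -> deadlock

    // Safe pattern: never hold a guard across a mutating call; the entry API
    // takes the write lock once and releases it when the statement ends.
    *cache.entry(key.to_string()).or_insert(0) += 1;
}
```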
Yingwen
b8e0c49cb4 feat: add a flag to enable the experimental flat format (#6976)
* feat: add enable_experimental_flat_format flag to enable flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract build_scan_input for CompactionSstReaderBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add compact memtable cost to flush metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Sets compact dispatcher for bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Cast dictionary to target type in FlatProjectionMapper

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: add time index to FlatProjectionMapper::batch_schema

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update config toml

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: pass flat_format to ProjectionMapper in CompactionSstReaderBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-16 09:33:12 +00:00
Zhenchi
db42ad42dc feat: add visible to sst entry for staging mode (#6964)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-15 09:05:54 +00:00
shuiyisong
8ce963f63e fix: shorten lock time (#6968) 2025-09-15 03:37:36 +00:00
Yingwen
b3aabb6706 feat: support flush and compact flat format files (#6949)
* feat: basic functions for flush/compact flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: bridge flush and compaction for flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add write cache support

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: change log level to debug

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: wrap duplicated code to merge and dedup iter

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: wrap some code into flush_flat_mem_ranges

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract logic into do_flush_memtables

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-14 13:36:24 +00:00
Ning Sun
028effe952 docs: update memory profiling description doc (#6960)
doc: update memory profiling description doc
2025-09-12 08:30:22 +00:00
Ruihang Xia
d86f489a74 fix: staging mode with proper region edit operations (#6962)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-12 04:39:42 +00:00
dennis zhuang
6c066c1a4a test: migrate join tests from duckdb, part3 (#6881)
* test: migrate join tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: update test results after rebasing main branch

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: unstable query sort results and natural_join test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: count(*) with joining

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: unstable query sort results and style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-12 04:20:00 +00:00
LFC
9ab87e11a4 refactor: rewrite h3 functions to DataFusion style (#6942)
* refactor: rewrite h3 functions to DataFusion style

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-12 02:27:24 +00:00
Weny Xu
9fe7069146 feat: add postgres tls support for CLI (#6941)
* feat: add postgres tls support for cli

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-11 12:18:13 +00:00
fys
733a1afcd1 fix: correct jemalloc metrics (#6959)
The allocated and resident metrics were swapped in the set calls. This commit
fixes the issue by ensuring each metric receives its corresponding value.
2025-09-11 06:37:19 +00:00
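A sketch of the corrected wiring described above, using tikv-jemalloc-ctl and prometheus gauges; the gauge handles are passed in here and their names are placeholders, not the ones defined in the crate:

```rust
use prometheus::IntGauge;
use tikv_jemalloc_ctl::{epoch, stats};

/// Read jemalloc counters and feed each one to its matching gauge.
fn report_jemalloc_metrics(allocated_gauge: &IntGauge, resident_gauge: &IntGauge) {
    // jemalloc statistics are cached; advancing the epoch refreshes them.
    if epoch::advance().is_err() {
        return;
    }
    let allocated = stats::allocated::read().unwrap_or(0);
    let resident = stats::resident::read().unwrap_or(0);
    // The fix boils down to making sure each value goes to its own gauge
    // (the two `set` calls had been swapped).
    allocated_gauge.set(allocated as i64);
    resident_gauge.set(resident as i64);
}
```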
Yingwen
5e65581f94 feat: support flat format for SeriesScan (#6938)
* feat: Support flat format for SeriesScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: simplify tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: only accumulate fetch time to scan_cost in SeriesDistributor of
the SeriesScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comment

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-11 06:12:53 +00:00
ZonaHe
e75e5baa63 feat: update dashboard to v0.11.4 (#6956)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-09-11 04:34:25 +00:00
zyy17
c4b89df523 fix: use pull_request_target to fix add labels 403 error (#6958)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-11 03:53:14 +00:00
Weny Xu
6a15e62719 feat: expose workload filter to selector options (#6951)
* feat: add workload filtering support to selector options

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-11 03:11:13 +00:00
discord9
2bddbe8c47 feat(query): better alias tracker (#6909)
* better resolve

Signed-off-by: discord9 <discord9@163.com>

feat: layered alias tracker

Signed-off-by: discord9 <discord9@163.com>

refactor

Signed-off-by: discord9 <discord9@163.com>

docs: explain for no offset by one

Signed-off-by: discord9 <discord9@163.com>

test: more

Signed-off-by: discord9 <discord9@163.com>

simplify api

Signed-off-by: discord9 <discord9@163.com>

wip

Signed-off-by: discord9 <discord9@163.com>

fix: filter non-exist columns

Signed-off-by: discord9 <discord9@163.com>

feat: stuff

Signed-off-by: discord9 <discord9@163.com>

feat: cache partition columns

Signed-off-by: discord9 <discord9@163.com>

refactor: rm unused fn

Signed-off-by: discord9 <discord9@163.com>

no need res

Signed-off-by: discord9 <discord9@163.com>

chore: rm unwrap&docs update

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* fix: unsupport part

Signed-off-by: discord9 <discord9@163.com>

* err msg

Signed-off-by: discord9 <discord9@163.com>

* fix: pass correct partition cols

Signed-off-by: discord9 <discord9@163.com>

* fix? use column name only

Signed-off-by: discord9 <discord9@163.com>

* fix: merge scan has partition columns no alias/no partition diff

Signed-off-by: discord9 <discord9@163.com>

* refactor: loop instead of recursive

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* feat: overlaps

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-11 02:30:51 +00:00
discord9
ea8125aafb fix: count(1) instead of count(ts) when >1 inputs (#6952)
Signed-off-by: discord9 <discord9@163.com>
2025-09-10 21:30:43 +00:00
dennis zhuang
49722951c6 fix: unstable query sort results (#6944)
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-10 20:41:10 +00:00
543 changed files with 33446 additions and 10596 deletions


@@ -1,7 +1,7 @@
 name: "Semantic Pull Request"
 on:
-  pull_request:
+  pull_request_target:
     types:
       - opened
       - reopened
@@ -12,9 +12,9 @@ concurrency:
   cancel-in-progress: true
 permissions:
-  issues: write
-  contents: write
+  contents: read
   pull-requests: write
+  issues: write
 jobs:
   check:

Cargo.lock (generated)

@@ -300,9 +300,9 @@ checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"
[[package]]
name = "arrow"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd798aea3553913a5986813e9c6ad31a2d2b04e931fe8ea4a37155eb541cebb5"
checksum = "c26b57282a08ae92f727497805122fec964c6245cfa0e13f0e75452eaf3bc41f"
dependencies = [
"arrow-arith",
"arrow-array",
@@ -321,9 +321,9 @@ dependencies = [
[[package]]
name = "arrow-arith"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "508dafb53e5804a238cab7fd97a59ddcbfab20cc4d9814b1ab5465b9fa147f2e"
checksum = "cebf38ca279120ff522f4954b81a39527425b6e9f615e6b72842f4de1ffe02b8"
dependencies = [
"arrow-array",
"arrow-buffer",
@@ -335,9 +335,9 @@ dependencies = [
[[package]]
name = "arrow-array"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2730bc045d62bb2e53ef8395b7d4242f5c8102f41ceac15e8395b9ac3d08461"
checksum = "744109142cdf8e7b02795e240e20756c2a782ac9180d4992802954a8f871c0de"
dependencies = [
"ahash 0.8.12",
"arrow-buffer",
@@ -352,9 +352,9 @@ dependencies = [
[[package]]
name = "arrow-buffer"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "54295b93beb702ee9a6f6fbced08ad7f4d76ec1c297952d4b83cf68755421d1d"
checksum = "601bb103c4c374bcd1f62c66bcea67b42a2ee91a690486c37d4c180236f11ccc"
dependencies = [
"bytes",
"half",
@@ -363,9 +363,9 @@ dependencies = [
[[package]]
name = "arrow-cast"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "67e8bcb7dc971d779a7280593a1bf0c2743533b8028909073e804552e85e75b5"
checksum = "eed61d9d73eda8df9e3014843def37af3050b5080a9acbe108f045a316d5a0be"
dependencies = [
"arrow-array",
"arrow-buffer",
@@ -384,9 +384,9 @@ dependencies = [
[[package]]
name = "arrow-csv"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "673fd2b5fb57a1754fdbfac425efd7cf54c947ac9950c1cce86b14e248f1c458"
checksum = "fa95b96ce0c06b4d33ac958370db8c0d31e88e54f9d6e08b0353d18374d9f991"
dependencies = [
"arrow-array",
"arrow-cast",
@@ -399,9 +399,9 @@ dependencies = [
[[package]]
name = "arrow-data"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "97c22fe3da840039c69e9f61f81e78092ea36d57037b4900151f063615a2f6b4"
checksum = "43407f2c6ba2367f64d85d4603d6fb9c4b92ed79d2ffd21021b37efa96523e12"
dependencies = [
"arrow-buffer",
"arrow-schema",
@@ -411,9 +411,9 @@ dependencies = [
[[package]]
name = "arrow-flight"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6808d235786b721e49e228c44dd94242f2e8b46b7e95b233b0733c46e758bfee"
checksum = "d7c66c5e4a7aedc2bfebffeabc2116d76adb22e08d230b968b995da97f8b11ca"
dependencies = [
"arrow-array",
"arrow-buffer",
@@ -430,14 +430,15 @@ dependencies = [
[[package]]
name = "arrow-ipc"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "778de14c5a69aedb27359e3dd06dd5f9c481d5f6ee9fbae912dba332fd64636b"
checksum = "e4b0487c4d2ad121cbc42c4db204f1509f8618e589bc77e635e9c40b502e3b90"
dependencies = [
"arrow-array",
"arrow-buffer",
"arrow-data",
"arrow-schema",
"arrow-select",
"flatbuffers",
"lz4_flex",
"zstd 0.13.3",
@@ -445,9 +446,9 @@ dependencies = [
[[package]]
name = "arrow-json"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3860db334fe7b19fcf81f6b56f8d9d95053f3839ffe443d56b5436f7a29a1794"
checksum = "26d747573390905905a2dc4c5a61a96163fe2750457f90a04ee2a88680758c79"
dependencies = [
"arrow-array",
"arrow-buffer",
@@ -467,9 +468,9 @@ dependencies = [
[[package]]
name = "arrow-ord"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "425fa0b42a39d3ff55160832e7c25553e7f012c3f187def3d70313e7a29ba5d9"
checksum = "c142a147dceb59d057bad82400f1693847c80dca870d008bf7b91caf902810ae"
dependencies = [
"arrow-array",
"arrow-buffer",
@@ -480,9 +481,9 @@ dependencies = [
[[package]]
name = "arrow-row"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df9c9423c9e71abd1b08a7f788fcd203ba2698ac8e72a1f236f1faa1a06a7414"
checksum = "dac6620667fccdab4204689ca173bd84a15de6bb6b756c3a8764d4d7d0c2fc04"
dependencies = [
"arrow-array",
"arrow-buffer",
@@ -493,9 +494,9 @@ dependencies = [
[[package]]
name = "arrow-schema"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85fa1babc4a45fdc64a92175ef51ff00eba5ebbc0007962fecf8022ac1c6ce28"
checksum = "dfa93af9ff2bb80de539e6eb2c1c8764abd0f4b73ffb0d7c82bf1f9868785e66"
dependencies = [
"serde",
"serde_json",
@@ -503,9 +504,9 @@ dependencies = [
[[package]]
name = "arrow-select"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d8854d15f1cf5005b4b358abeb60adea17091ff5bdd094dca5d3f73787d81170"
checksum = "be8b2e0052cd20d36d64f32640b68a5ab54d805d24a473baee5d52017c85536c"
dependencies = [
"ahash 0.8.12",
"arrow-array",
@@ -517,9 +518,9 @@ dependencies = [
[[package]]
name = "arrow-string"
version = "56.0.0"
version = "56.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c477e8b89e1213d5927a2a84a72c384a9bf4dd0dbf15f9fd66d821aafd9e95e"
checksum = "c2155e26e17f053c8975c546fc70cf19c00542f9abf43c23a88a46ef7204204f"
dependencies = [
"arrow-array",
"arrow-buffer",
@@ -693,7 +694,7 @@ checksum = "37672978ae0febce7516ae0a85b53e6185159a9a28787391eb63fc44ec36037d"
dependencies = [
"async-fs",
"futures-lite",
"thiserror 2.0.12",
"thiserror 2.0.17",
]
[[package]]
@@ -1410,7 +1411,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1d8af896b707212cd0e99c112a78c9497dd32994192a463ed2f7419d29bd8c6"
dependencies = [
"serde",
"thiserror 2.0.12",
"thiserror 2.0.17",
"toml 0.8.23",
]
@@ -1450,6 +1451,7 @@ dependencies = [
"common-workload",
"dashmap",
"datafusion",
"datafusion-pg-catalog",
"datatypes",
"futures",
"futures-util",
@@ -1464,7 +1466,6 @@ dependencies = [
"prometheus",
"promql-parser",
"rand 0.9.1",
"rustc-hash 2.1.1",
"serde",
"serde_json",
"session",
@@ -1800,6 +1801,7 @@ dependencies = [
"nu-ansi-term",
"object-store",
"operator",
"paste",
"query",
"rand 0.9.1",
"reqwest",
@@ -1915,6 +1917,7 @@ dependencies = [
"common-query",
"common-recordbatch",
"common-runtime",
"common-stat",
"common-telemetry",
"common-test-util",
"common-time",
@@ -1951,7 +1954,7 @@ dependencies = [
"session",
"similar-asserts",
"snafu 0.8.6",
"stat",
"standalone",
"store-api",
"substrait 0.18.0",
"table",
@@ -2180,6 +2183,7 @@ dependencies = [
"datafusion-common",
"datafusion-expr",
"datafusion-functions-aggregate-common",
"datafusion-pg-catalog",
"datafusion-physical-expr",
"datatypes",
"derive_more",
@@ -2545,6 +2549,15 @@ dependencies = [
"sqlparser 0.55.0-greptime",
]
[[package]]
name = "common-stat"
version = "0.18.0"
dependencies = [
"lazy_static",
"nix 0.30.1",
"prometheus",
]
[[package]]
name = "common-telemetry"
version = "0.18.0"
@@ -3709,6 +3722,19 @@ dependencies = [
"tokio",
]
[[package]]
name = "datafusion-pg-catalog"
version = "0.9.0"
source = "git+https://github.com/datafusion-contrib/datafusion-postgres?rev=3d1b7c7d5b82dd49bafc2803259365e633f654fa#3d1b7c7d5b82dd49bafc2803259365e633f654fa"
dependencies = [
"async-trait",
"datafusion",
"futures",
"log",
"postgres-types",
"tokio",
]
[[package]]
name = "datafusion-physical-expr"
version = "49.0.0"
@@ -4334,7 +4360,7 @@ dependencies = [
"chrono",
"rust_decimal",
"serde",
"thiserror 2.0.12",
"thiserror 2.0.17",
"time",
"winnow 0.6.26",
]
@@ -4483,7 +4509,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "778e2ac28f6c47af28e4907f13ffd1e1ddbd400980a9abd7c8df189bf578a5ad"
dependencies = [
"libc",
"windows-sys 0.60.2",
"windows-sys 0.59.0",
]
[[package]]
@@ -4900,7 +4926,6 @@ dependencies = [
"humantime-serde",
"lazy_static",
"log-query",
"log-store",
"meta-client",
"num_cpus",
"opentelemetry-proto",
@@ -5302,7 +5327,7 @@ dependencies = [
[[package]]
name = "greptime-proto"
version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=f9836cf8aab30e672f640c6ef4c1cfd2cf9fbc36#f9836cf8aab30e672f640c6ef4c1cfd2cf9fbc36"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=3e821d0d405e6733690a4e4352812ba2ff780a3e#3e821d0d405e6733690a4e4352812ba2ff780a3e"
dependencies = [
"prost 0.13.5",
"prost-types 0.13.5",
@@ -6492,7 +6517,7 @@ dependencies = [
"pest_derive",
"regex",
"serde_json",
"thiserror 2.0.12",
"thiserror 2.0.17",
]
[[package]]
@@ -6894,7 +6919,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "07033963ba89ebaf1584d767badaa2e8fcec21aedea6b8c0346d487d49c28667"
dependencies = [
"cfg-if",
"windows-targets 0.53.2",
"windows-targets 0.52.6",
]
[[package]]
@@ -6982,7 +7007,7 @@ checksum = "656b3b27f8893f7bbf9485148ff9a65f019e3f33bd5cdc87c83cab16b3fd9ec8"
dependencies = [
"libc",
"neli",
"thiserror 2.0.12",
"thiserror 2.0.17",
"windows-sys 0.59.0",
]
@@ -7199,9 +7224,9 @@ dependencies = [
[[package]]
name = "mappings"
version = "0.7.0"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e434981a332777c2b3062652d16a55f8e74fa78e6b1882633f0d77399c84fc2a"
checksum = "db4d277bb50d4508057e7bddd7fcd19ef4a4cc38051b6a5a36868d75ae2cbeb9"
dependencies = [
"anyhow",
"libc",
@@ -7287,8 +7312,7 @@ checksum = "32a282da65faaf38286cf3be983213fcf1d2e2a58700e808f83f4ea9a4804bc0"
[[package]]
name = "memcomparable"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "376101dbd964fc502d5902216e180f92b3d003b5cc3d2e40e044eb5470fca677"
source = "git+https://github.com/v0y4g3r/memcomparable.git?rev=a07122dc03556bbd88ad66234cbea7efd3b23efb#a07122dc03556bbd88ad66234cbea7efd3b23efb"
dependencies = [
"bytes",
"serde",
@@ -7607,7 +7631,6 @@ dependencies = [
"itertools 0.14.0",
"lazy_static",
"log-store",
"memcomparable",
"mito-codec",
"moka",
"object-store",
@@ -7784,7 +7807,7 @@ dependencies = [
"quote",
"syn 2.0.104",
"termcolor",
"thiserror 2.0.12",
"thiserror 2.0.17",
]
[[package]]
@@ -7811,7 +7834,7 @@ dependencies = [
"serde",
"serde_json",
"socket2 0.5.10",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tokio",
"tokio-rustls",
"tokio-util",
@@ -8340,7 +8363,7 @@ dependencies = [
"itertools 0.14.0",
"parking_lot 0.12.4",
"percent-encoding",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tokio",
"tracing",
"url",
@@ -8510,7 +8533,7 @@ dependencies = [
"futures-sink",
"js-sys",
"pin-project-lite",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tracing",
]
@@ -8540,7 +8563,7 @@ dependencies = [
"opentelemetry_sdk",
"prost 0.13.5",
"reqwest",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tokio",
"tonic 0.13.1",
"tracing",
@@ -8580,7 +8603,7 @@ dependencies = [
"percent-encoding",
"rand 0.9.1",
"serde_json",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tokio",
"tokio-stream",
]
@@ -9091,7 +9114,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1db05f56d34358a8b1066f67cbb203ee3e7ed2ba674a6263a1d5ec6db2204323"
dependencies = [
"memchr",
"thiserror 2.0.12",
"thiserror 2.0.17",
"ucd-trie",
]
@@ -9162,11 +9185,12 @@ dependencies = [
[[package]]
name = "pgwire"
version = "0.32.1"
version = "0.33.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ddf403a6ee31cf7f2217b2bd8447cb13dbb6c268d7e81501bc78a4d3daafd294"
checksum = "f58d371668e6151da16be31308989058156c01257277ea8af0f97524e87cfa31"
dependencies = [
"async-trait",
"base64 0.22.1",
"bytes",
"chrono",
"derive-new",
@@ -9179,10 +9203,12 @@ dependencies = [
"ring",
"rust_decimal",
"rustls-pki-types",
"thiserror 2.0.12",
"stringprep",
"thiserror 2.0.17",
"tokio",
"tokio-rustls",
"tokio-util",
"x509-certificate 0.25.0",
]
[[package]]
@@ -9459,12 +9485,14 @@ dependencies = [
"cli",
"common-base",
"common-error",
"common-meta",
"datanode",
"flow",
"frontend",
"meta-srv",
"serde",
"snafu 0.8.6",
"standalone",
]
[[package]]
@@ -9562,9 +9590,9 @@ dependencies = [
[[package]]
name = "pprof_util"
version = "0.7.0"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9fa015c78eed2130951e22c58d2095849391e73817ab2e74f71b0b9f63dd8416"
checksum = "f9aba4251d95ac86f14c33e688d57a9344bfcff29e9b0c5a063fc66b5facc8a1"
dependencies = [
"anyhow",
"backtrace",
@@ -10182,7 +10210,7 @@ dependencies = [
"rustc-hash 2.1.1",
"rustls",
"socket2 0.5.10",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tokio",
"tracing",
"web-time",
@@ -10203,7 +10231,7 @@ dependencies = [
"rustls",
"rustls-pki-types",
"slab",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tinyvec",
"tracing",
"web-time",
@@ -10797,7 +10825,7 @@ dependencies = [
"rsasl",
"rustls",
"snap",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tokio",
"tokio-rustls",
"tracing",
@@ -10923,9 +10951,9 @@ dependencies = [
[[package]]
name = "rust_decimal"
version = "1.37.2"
version = "1.38.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b203a6425500a03e0919c42d3c47caca51e79f1132046626d2c8871c5092035d"
checksum = "c8975fc98059f365204d635119cf9c5a60ae67b841ed49b5422a9a7e56cdfac0"
dependencies = [
"arrayvec",
"borsh",
@@ -11268,7 +11296,7 @@ dependencies = [
"proc-macro2",
"quote",
"syn 2.0.104",
"thiserror 2.0.12",
"thiserror 2.0.17",
]
[[package]]
@@ -11537,6 +11565,7 @@ dependencies = [
"common-runtime",
"common-session",
"common-sql",
"common-stat",
"common-telemetry",
"common-test-util",
"common-time",
@@ -11547,6 +11576,7 @@ dependencies = [
"datafusion",
"datafusion-common",
"datafusion-expr",
"datafusion-pg-catalog",
"datatypes",
"derive_builder 0.20.2",
"futures",
@@ -11554,7 +11584,6 @@ dependencies = [
"headers",
"hostname 0.3.1",
"http 1.3.1",
"http-body 1.0.1",
"humantime",
"humantime-serde",
"hyper 1.6.0",
@@ -11799,7 +11828,7 @@ checksum = "297f631f50729c8c99b84667867963997ec0b50f32b2a7dbcab828ef0541e8bb"
dependencies = [
"num-bigint",
"num-traits",
"thiserror 2.0.12",
"thiserror 2.0.17",
"time",
]
@@ -12141,7 +12170,7 @@ dependencies = [
"serde_json",
"sha2",
"smallvec",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tokio",
"tokio-stream",
"tracing",
@@ -12225,7 +12254,7 @@ dependencies = [
"smallvec",
"sqlx-core",
"stringprep",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tracing",
"whoami",
]
@@ -12263,7 +12292,7 @@ dependencies = [
"smallvec",
"sqlx-core",
"stringprep",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tracing",
"whoami",
]
@@ -12288,7 +12317,7 @@ dependencies = [
"serde",
"serde_urlencoded",
"sqlx-core",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tracing",
"url",
]
@@ -12313,10 +12342,36 @@ dependencies = [
]
[[package]]
name = "stat"
name = "standalone"
version = "0.18.0"
dependencies = [
"nix 0.30.1",
"async-trait",
"catalog",
"client",
"common-base",
"common-config",
"common-error",
"common-macro",
"common-meta",
"common-options",
"common-procedure",
"common-query",
"common-telemetry",
"common-time",
"common-version",
"common-wal",
"datanode",
"file-engine",
"flow",
"frontend",
"log-store",
"mito2",
"query",
"serde",
"servers",
"snafu 0.8.6",
"store-api",
"tokio",
]
[[package]]
@@ -12360,6 +12415,7 @@ dependencies = [
"sqlparser 0.55.0-greptime",
"strum 0.27.1",
"tokio",
"uuid",
]
[[package]]
@@ -12479,6 +12535,7 @@ dependencies = [
"async-trait",
"bytes",
"common-error",
"common-function",
"common-macro",
"common-telemetry",
"datafusion",
@@ -12733,7 +12790,7 @@ dependencies = [
"tantivy-stacker",
"tantivy-tokenizer-api",
"tempfile",
"thiserror 2.0.12",
"thiserror 2.0.17",
"time",
"uuid",
"winapi",
@@ -12873,9 +12930,9 @@ dependencies = [
[[package]]
name = "tempfile"
version = "3.19.1"
version = "3.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7437ac7763b9b123ccf33c338a5cc1bac6f69b45a136c19bdd8a65e3916435bf"
checksum = "2d31c77bdf42a745371d260a26ca7163f1e0924b64afa0b688e61b5a9fa02f16"
dependencies = [
"fastrand",
"getrandom 0.3.3",
@@ -13026,6 +13083,7 @@ dependencies = [
"snafu 0.8.6",
"sql",
"sqlx",
"standalone",
"store-api",
"substrait 0.18.0",
"table",
@@ -13060,11 +13118,11 @@ dependencies = [
[[package]]
name = "thiserror"
version = "2.0.12"
version = "2.0.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "567b8a2dae586314f7be2a752ec7474332959c6460e02bde30d702a66d488708"
checksum = "f63587ca0f12b72a0600bcba1d40081f830876000bb46dd2337a3051618f4fc8"
dependencies = [
"thiserror-impl 2.0.12",
"thiserror-impl 2.0.17",
]
[[package]]
@@ -13080,9 +13138,9 @@ dependencies = [
[[package]]
name = "thiserror-impl"
version = "2.0.12"
version = "2.0.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f7cf42b4507d8ea322120659672cf1b9dbb93f8f2d4ecfd6e51350ff5b17a1d"
checksum = "3ff15c8ecd7de3849db632e14d18d2571fa09dfc5ed93479bc4485c7a517c913"
dependencies = [
"proc-macro2",
"quote",
@@ -13331,7 +13389,7 @@ dependencies = [
"tokio",
"tokio-postgres",
"tokio-rustls",
"x509-certificate",
"x509-certificate 0.23.1",
]
[[package]]
@@ -13851,7 +13909,7 @@ dependencies = [
"serde",
"serde_json",
"syn 2.0.104",
"thiserror 2.0.12",
"thiserror 2.0.17",
"unicode-ident",
]
@@ -13946,7 +14004,7 @@ version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c01d12e3a56a4432a8b436f293c25f4808bdf9e9f9f98f9260bba1f1bc5a1f26"
dependencies = [
"thiserror 2.0.12",
"thiserror 2.0.17",
]
[[package]]
@@ -14244,7 +14302,7 @@ dependencies = [
"strip-ansi-escapes",
"syslog_loose",
"termcolor",
"thiserror 2.0.12",
"thiserror 2.0.17",
"tokio",
"tracing",
"ua-parser",
@@ -14677,15 +14735,6 @@ dependencies = [
"windows-targets 0.52.6",
]
[[package]]
name = "windows-sys"
version = "0.60.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2f500e4d28234f72040990ec9d39e3a6b950f9f22d3dba18416c35882612bcb"
dependencies = [
"windows-targets 0.53.2",
]
[[package]]
name = "windows-targets"
version = "0.48.5"
@@ -14710,29 +14759,13 @@ dependencies = [
"windows_aarch64_gnullvm 0.52.6",
"windows_aarch64_msvc 0.52.6",
"windows_i686_gnu 0.52.6",
"windows_i686_gnullvm 0.52.6",
"windows_i686_gnullvm",
"windows_i686_msvc 0.52.6",
"windows_x86_64_gnu 0.52.6",
"windows_x86_64_gnullvm 0.52.6",
"windows_x86_64_msvc 0.52.6",
]
[[package]]
name = "windows-targets"
version = "0.53.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c66f69fcc9ce11da9966ddb31a40968cad001c5bedeb5c2b82ede4253ab48aef"
dependencies = [
"windows_aarch64_gnullvm 0.53.0",
"windows_aarch64_msvc 0.53.0",
"windows_i686_gnu 0.53.0",
"windows_i686_gnullvm 0.53.0",
"windows_i686_msvc 0.53.0",
"windows_x86_64_gnu 0.53.0",
"windows_x86_64_gnullvm 0.53.0",
"windows_x86_64_msvc 0.53.0",
]
[[package]]
name = "windows-threading"
version = "0.1.0"
@@ -14754,12 +14787,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86b8d5f90ddd19cb4a147a5fa63ca848db3df085e25fee3cc10b39b6eebae764"
[[package]]
name = "windows_aarch64_msvc"
version = "0.48.5"
@@ -14772,12 +14799,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469"
[[package]]
name = "windows_aarch64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7651a1f62a11b8cbd5e0d42526e55f2c99886c77e007179efff86c2b137e66c"
[[package]]
name = "windows_i686_gnu"
version = "0.48.5"
@@ -14790,24 +14811,12 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b"
[[package]]
name = "windows_i686_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1dc67659d35f387f5f6c479dc4e28f1d4bb90ddd1a5d3da2e5d97b42d6272c3"
[[package]]
name = "windows_i686_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66"
[[package]]
name = "windows_i686_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ce6ccbdedbf6d6354471319e781c0dfef054c81fbc7cf83f338a4296c0cae11"
[[package]]
name = "windows_i686_msvc"
version = "0.48.5"
@@ -14820,12 +14829,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66"
[[package]]
name = "windows_i686_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "581fee95406bb13382d2f65cd4a908ca7b1e4c2f1917f143ba16efe98a589b5d"
[[package]]
name = "windows_x86_64_gnu"
version = "0.48.5"
@@ -14838,12 +14841,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78"
[[package]]
name = "windows_x86_64_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2e55b5ac9ea33f2fc1716d1742db15574fd6fc8dadc51caab1c16a3d3b4190ba"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.48.5"
@@ -14856,12 +14853,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0a6e035dd0599267ce1ee132e51c27dd29437f63325753051e71dd9e42406c57"
[[package]]
name = "windows_x86_64_msvc"
version = "0.48.5"
@@ -14874,12 +14865,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
[[package]]
name = "windows_x86_64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "271414315aff87387382ec3d271b52d7ae78726f5d44ac98b4f4030c91880486"
[[package]]
name = "winnow"
version = "0.5.40"
@@ -14972,6 +14957,25 @@ dependencies = [
"zeroize",
]
[[package]]
name = "x509-certificate"
version = "0.25.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ca9eb9a0c822c67129d5b8fcc2806c6bc4f50496b420825069a440669bcfbf7f"
dependencies = [
"bcder",
"bytes",
"chrono",
"der",
"hex",
"pem",
"ring",
"signature",
"spki",
"thiserror 2.0.17",
"zeroize",
]
[[package]]
name = "xattr"
version = "1.5.1"

View File

@@ -61,6 +61,7 @@ members = [
"src/promql",
"src/puffin",
"src/query",
"src/standalone",
"src/servers",
"src/session",
"src/sql",
@@ -122,17 +123,18 @@ clap = { version = "4.4", features = ["derive"] }
config = "0.13.0"
crossbeam-utils = "0.8"
dashmap = "6.1"
datafusion = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-functions = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-functions-aggregate-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-optimizer = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion = "49"
datafusion-common = "49"
datafusion-expr = "49"
datafusion-functions = "49"
datafusion-functions-aggregate-common = "49"
datafusion-optimizer = "49"
datafusion-orc = { git = "https://github.com/GreptimeTeam/datafusion-orc", rev = "a0a5f902158f153119316eaeec868cff3fc8a99d" }
datafusion-physical-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-physical-plan = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-sql = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-substrait = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-pg-catalog = { git = "https://github.com/datafusion-contrib/datafusion-postgres", rev = "3d1b7c7d5b82dd49bafc2803259365e633f654fa" }
datafusion-physical-expr = "49"
datafusion-physical-plan = "49"
datafusion-sql = "49"
datafusion-substrait = "49"
deadpool = "0.12"
deadpool-postgres = "0.14"
derive_builder = "0.20"
@@ -145,7 +147,7 @@ etcd-client = { git = "https://github.com/GreptimeTeam/etcd-client", rev = "f62d
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "f9836cf8aab30e672f640c6ef4c1cfd2cf9fbc36" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "3e821d0d405e6733690a4e4352812ba2ff780a3e" }
hex = "0.4"
http = "1"
humantime = "2.1"
@@ -174,6 +176,9 @@ opentelemetry-proto = { version = "0.30", features = [
"logs",
] }
ordered-float = { version = "4.3", features = ["serde"] }
otel-arrow-rust = { git = "https://github.com/GreptimeTeam/otel-arrow", rev = "2d64b7c0fa95642028a8205b36fe9ea0b023ec59", features = [
"server",
] }
parking_lot = "0.12"
parquet = { version = "56.0", default-features = false, features = ["arrow", "async", "object_store"] }
paste = "1.0"
@@ -275,6 +280,7 @@ common-recordbatch = { path = "src/common/recordbatch" }
common-runtime = { path = "src/common/runtime" }
common-session = { path = "src/common/session" }
common-sql = { path = "src/common/sql" }
common-stat = { path = "src/common/stat" }
common-telemetry = { path = "src/common/telemetry" }
common-test-util = { path = "src/common/test-util" }
common-time = { path = "src/common/time" }
@@ -296,9 +302,6 @@ mito-codec = { path = "src/mito-codec" }
mito2 = { path = "src/mito2" }
object-store = { path = "src/object-store" }
operator = { path = "src/operator" }
otel-arrow-rust = { git = "https://github.com/GreptimeTeam/otel-arrow", rev = "2d64b7c0fa95642028a8205b36fe9ea0b023ec59", features = [
"server",
] }
partition = { path = "src/partition" }
pipeline = { path = "src/pipeline" }
plugins = { path = "src/plugins" }
@@ -308,7 +311,7 @@ query = { path = "src/query" }
servers = { path = "src/servers" }
session = { path = "src/session" }
sql = { path = "src/sql" }
stat = { path = "src/common/stat" }
standalone = { path = "src/standalone" }
store-api = { path = "src/store-api" }
substrait = { path = "src/common/substrait" }
table = { path = "src/table" }
@@ -317,6 +320,18 @@ table = { path = "src/table" }
git = "https://github.com/GreptimeTeam/greptime-meter.git"
rev = "5618e779cf2bb4755b499c630fba4c35e91898cb"
[patch.crates-io]
datafusion = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-functions = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-functions-aggregate-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-optimizer = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-physical-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-physical-plan = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-sql = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-substrait = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
[profile.release]
debug = 1

View File

@@ -31,6 +31,7 @@
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.bind_addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.max_connection_age` | String | Unset | The maximum connection age for a gRPC connection.<br/>The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.<br/>Refer to https://grpc.io/docs/guides/keepalive/ for more details. |
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
| `grpc.tls.mode` | String | `disable` | TLS mode. |
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
@@ -103,6 +104,7 @@
| `storage` | -- | -- | The data storage options. |
| `storage.data_home` | String | `./greptimedb_data` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
| `storage.enable_read_cache` | Bool | `true` | Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage. |
| `storage.cache_path` | String | Unset | Read cache configuration for object storage such as 'S3' etc, it's configured by default when using object storage. It is recommended to configure it when using object storage for better performance.<br/>A local file directory, defaults to `{data_home}`. An empty string means disabling. |
| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger. |
| `storage.bucket` | String | Unset | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
@@ -151,6 +153,7 @@
| `region_engine.mito.max_concurrent_scan_files` | Integer | `384` | Maximum number of SST files to scan concurrently. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
| `region_engine.mito.enable_experimental_flat_format` | Bool | `false` | Whether to enable experimental flat format. |
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
| `region_engine.mito.index.aux_path` | String | `""` | Auxiliary directory path for the index in filesystem, used to store intermediate files for<br/>creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.<br/>The default name for this directory is `index_intermediate` for backward compatibility.<br/><br/>This path contains two subdirectories:<br/>- `__intm`: for storing intermediate files used during creating index.<br/>- `staging`: for storing staging files used during searching index. |
| `region_engine.mito.index.staging_size` | String | `2GB` | The max capacity of the staging directory. |
@@ -240,6 +243,7 @@
| `grpc.server_addr` | String | `127.0.0.1:4001` | The address advertised to the metasrv, and used for connections from outside the host.<br/>If left empty or unset, the server will automatically use the IP address of the first network interface<br/>on the host, with the same port number as the one specified in `grpc.bind_addr`. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.flight_compression` | String | `arrow_ipc` | Compression mode for frontend side Arrow IPC service. Available options:<br/>- `none`: disable all compression<br/>- `transport`: only enable gRPC transport compression (zstd)<br/>- `arrow_ipc`: only enable Arrow IPC compression (lz4)<br/>- `all`: enable all compression.<br/>Default to `none` |
| `grpc.max_connection_age` | String | Unset | The maximum connection age for a gRPC connection.<br/>The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.<br/>Refer to https://grpc.io/docs/guides/keepalive/ for more details. |
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
| `grpc.tls.mode` | String | `disable` | TLS mode. |
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
@@ -377,10 +381,9 @@
| `procedure.max_metadata_value_size` | String | `1500KiB` | Auto split large value<br/>GreptimeDB procedure uses etcd as the default metadata storage backend.<br/>The etcd the maximum size of any request is 1.5 MiB<br/>1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of key)<br/>Comments out the `max_metadata_value_size`, for don't split large value (no limit). |
| `procedure.max_running_procedures` | Integer | `128` | Max running procedures.<br/>The maximum number of procedures that can be running at the same time.<br/>If the number of running procedures exceeds this limit, the procedure will be rejected. |
| `failure_detector` | -- | -- | -- |
| `failure_detector.threshold` | Float | `8.0` | The threshold value used by the failure detector to determine failure conditions. |
| `failure_detector.min_std_deviation` | String | `100ms` | The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations. |
| `failure_detector.acceptable_heartbeat_pause` | String | `10000ms` | The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable. |
| `failure_detector.first_heartbeat_estimate` | String | `1000ms` | The initial estimate of the heartbeat interval used by the failure detector. |
| `failure_detector.threshold` | Float | `8.0` | Maximum acceptable φ before the peer is treated as failed.<br/>Lower values react faster but yield more false positives. |
| `failure_detector.min_std_deviation` | String | `100ms` | The minimum standard deviation of the heartbeat intervals.<br/>So tiny variations don't make φ explode; prevents hypersensitivity when heartbeat intervals barely vary. |
| `failure_detector.acceptable_heartbeat_pause` | String | `10000ms` | The acceptable pause duration between heartbeats.<br/>An additional grace period added to the learned mean interval before φ rises, absorbing temporary network hiccups or GC pauses. |
| `datanode` | -- | -- | Datanode options. |
| `datanode.client` | -- | -- | Datanode client options. |
| `datanode.client.timeout` | String | `10s` | Operation timeout. |
@@ -494,6 +497,7 @@
| `storage.data_home` | String | `./greptimedb_data` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
| `storage.cache_path` | String | Unset | Read cache configuration for object storage such as 'S3' etc, it's configured by default when using object storage. It is recommended to configure it when using object storage for better performance.<br/>A local file directory, defaults to `{data_home}`. An empty string means disabling. |
| `storage.enable_read_cache` | Bool | `true` | Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage. |
| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger. |
| `storage.bucket` | String | Unset | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
| `storage.root` | String | Unset | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.<br/>**It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
@@ -543,6 +547,7 @@
| `region_engine.mito.max_concurrent_scan_files` | Integer | `384` | Maximum number of SST files to scan concurrently. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
| `region_engine.mito.enable_experimental_flat_format` | Bool | `false` | Whether to enable experimental flat format. |
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
| `region_engine.mito.index.aux_path` | String | `""` | Auxiliary directory path for the index in filesystem, used to store intermediate files for<br/>creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.<br/>The default name for this directory is `index_intermediate` for backward compatibility.<br/><br/>This path contains two subdirectories:<br/>- `__intm`: for storing intermediate files used during creating index.<br/>- `staging`: for storing staging files used during searching index. |
| `region_engine.mito.index.staging_size` | String | `2GB` | The max capacity of the staging directory. |

View File

@@ -274,6 +274,9 @@ type = "File"
## @toml2docs:none-default
#+ cache_path = ""
## Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage.
#+ enable_read_cache = true
## The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger.
## @toml2docs:none-default
cache_capacity = "5GiB"
@@ -497,6 +500,9 @@ allow_stale_entries = false
## To align with the old behavior, the default value is 0 (no restrictions).
min_compaction_interval = "0m"
## Whether to enable experimental flat format.
enable_experimental_flat_format = false
## The options for index in Mito engine.
[region_engine.mito.index]

View File

@@ -61,6 +61,11 @@ runtime_size = 8
## - `all`: enable all compression.
## Default to `none`
flight_compression = "arrow_ipc"
## The maximum connection age for a gRPC connection.
## The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.
## Refer to https://grpc.io/docs/guides/keepalive/ for more details.
## @toml2docs:none-default
#+ max_connection_age = "10m"
## gRPC server TLS options, see `mysql.tls` section.
[grpc.tls]

View File

@@ -149,20 +149,18 @@ max_metadata_value_size = "1500KiB"
max_running_procedures = 128
# Failure detectors options.
# GreptimeDB uses the Phi Accrual Failure Detector algorithm to detect datanode failures.
[failure_detector]
## The threshold value used by the failure detector to determine failure conditions.
## Maximum acceptable φ before the peer is treated as failed.
## Lower values react faster but yield more false positives.
threshold = 8.0
## The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations.
## The minimum standard deviation of the heartbeat intervals.
## So tiny variations don't make φ explode; prevents hypersensitivity when heartbeat intervals barely vary.
min_std_deviation = "100ms"
## The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable.
## The acceptable pause duration between heartbeats.
## An additional grace period added to the learned mean interval before φ rises, absorbing temporary network hiccups or GC pauses.
acceptable_heartbeat_pause = "10000ms"
## The initial estimate of the heartbeat interval used by the failure detector.
first_heartbeat_estimate = "1000ms"
## Datanode options.
[datanode]

View File

@@ -56,6 +56,11 @@ prom_validation_mode = "strict"
bind_addr = "127.0.0.1:4001"
## The number of server worker threads.
runtime_size = 8
## The maximum connection age for a gRPC connection.
## The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.
## Refer to https://grpc.io/docs/guides/keepalive/ for more details.
## @toml2docs:none-default
#+ max_connection_age = "10m"
## gRPC server TLS options, see `mysql.tls` section.
[grpc.tls]
@@ -361,6 +366,9 @@ data_home = "./greptimedb_data"
## - `Oss`: the data is stored in the Aliyun OSS.
type = "File"
## Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage.
#+ enable_read_cache = true
## Read cache configuration for object storage such as 'S3' etc, it's configured by default when using object storage. It is recommended to configure it when using object storage for better performance.
## A local file directory, defaults to `{data_home}`. An empty string means disabling.
## @toml2docs:none-default
@@ -576,6 +584,9 @@ allow_stale_entries = false
## To align with the old behavior, the default value is 0 (no restrictions).
min_compaction_interval = "0m"
## Whether to enable experimental flat format.
enable_experimental_flat_format = false
## The options for index in Mito engine.
[region_engine.mito.index]

View File

@@ -30,22 +30,7 @@ curl https://raw.githubusercontent.com/brendangregg/FlameGraph/master/flamegraph
## Profiling
### Configuration
You can control heap profiling activation through configuration. Add the following to your configuration file:
```toml
[memory]
# Whether to enable heap profiling activation during startup.
# When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
# is set to "prof:true,prof_active:false". The official image adds this env variable.
# Default is true.
enable_heap_profiling = true
```
By default, if you set `MALLOC_CONF=prof:true,prof_active:false`, the database will enable profiling during startup. You can disable this behavior by setting `enable_heap_profiling = false` in the configuration.
### Starting with environment variables
### Enable memory profiling for greptimedb binary
Start GreptimeDB instance with environment variables:
@@ -57,6 +42,22 @@ MALLOC_CONF=prof:true ./target/debug/greptime standalone start
_RJEM_MALLOC_CONF=prof:true ./target/debug/greptime standalone start
```
### Memory profiling for greptimedb docker image
We have memory profiling enabled and activated by default in our official docker
image.
This behavior is controlled by the `enable_heap_profiling` configuration option:
```toml
[memory]
# Whether to enable heap profiling activation during startup.
# Default is true.
enable_heap_profiling = true
```
To disable memory profiling, set `enable_heap_profiling` to `false`.
### Memory profiling control
You can control heap profiling activation using the new HTTP APIs:

View File

@@ -0,0 +1,463 @@
---
Feature Name: "laminar-flow"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/TBD
Date: 2025-09-08
Author: "discord9 <discord9@163.com>"
---
# laminar Flow
## Summary
This RFC proposes a redesign of the flow architecture where flownode becomes a lightweight in-memory state management node with an embedded frontend for direct computation. This approach optimizes resource utilization and improves scalability by eliminating network hops while maintaining clear separation between coordination and computation tasks.
## Motivation
The current flow architecture has several limitations:
1. **Resource Inefficiency**: Flownodes perform both state management and computation, leading to resource duplication and inefficient utilization.
2. **Scalability Constraints**: Computation resources are tied to flownode instances, limiting horizontal scaling capabilities.
3. **State Management Complexity**: Mixing computation with state management makes the system harder to maintain and debug.
4. **Network Overhead**: Additional network hops between flownode and separate frontend nodes add latency.
The laminar Flow architecture addresses these issues by:
- Consolidating computation within flownode through embedded frontend
- Eliminating network overhead by removing separate frontend node communication
- Simplifying state management by focusing flownode on its core responsibility
- Improving system scalability and maintainability
## Details
### Architecture Overview
The laminar Flow architecture transforms flownode into a lightweight coordinator that maintains flow state with an embedded frontend for computation. The key components involved are:
1. **Flownode**: Maintains in-memory state, coordinates computation, and includes an embedded frontend for query execution
2. **Embedded Frontend**: Executes **incremental** computations within the flownode
3. **Datanode**: Stores final results and source data
```mermaid
graph TB
subgraph "laminar Flow Architecture"
subgraph Flownode["Flownode (State Manager + Embedded Frontend)"]
StateMap["Flow State Map<br/>Map<Timestamp, (Map<Key, Value>, Sequence)>"]
Coordinator["Computation Coordinator"]
subgraph EmbeddedFrontend["Embedded Frontend"]
QueryEngine["Query Engine"]
AggrState["__aggr_state Executor"]
end
end
subgraph Datanode["Datanode"]
Storage["Data Storage"]
Results["Result Tables"]
end
end
Coordinator -->|Internal Query| EmbeddedFrontend
EmbeddedFrontend -->|Incremental States| Coordinator
Flownode -->|Incremental Results| Datanode
EmbeddedFrontend -.->|Read Data| Datanode
```
### Core Components
#### 1. Flow State Management
Flownode maintains a state map for each flow:
```rust
type FlowState = Map<Timestamp, (Map<Key, Value>, Sequence)>;
```
Where:
- **Timestamp**: Time window identifier for aggregation groups
- **Key**: Aggregation group expressions (`group_exprs`)
- **Value**: Aggregation expressions results (`aggr_exprs`)
- **Sequence**: Computation progress marker for incremental updates
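As a concrete illustration of the layout above, here is a minimal sketch of the state map and an incremental merge; the type aliases and the `merge` callback are placeholders rather than the actual implementation:
```rust
use std::collections::{hash_map::Entry, BTreeMap, HashMap};
// Placeholder aliases; the concrete types are implementation details.
type Timestamp = i64; // time window identifier
type Key = Vec<u8>;   // encoded group_exprs
type Value = Vec<u8>; // encoded partial aggregate state (__aggr_state output)
type Sequence = u64;  // computation progress marker
// One entry per active time window: (per-group partial state, progress marker).
type FlowState = BTreeMap<Timestamp, (HashMap<Key, Value>, Sequence)>;
/// Merge one batch of partial states returned by an `__aggr_state` query into
/// the flow state. `merge` combines two partial aggregate states; `up_to` is
/// the sequence the batch was computed up to.
fn apply_partial_states(
    state: &mut FlowState,
    window: Timestamp,
    rows: Vec<(Key, Value)>,
    up_to: Sequence,
    merge: impl Fn(&Value, &Value) -> Value,
) {
    let (groups, seq) = state.entry(window).or_insert_with(|| (HashMap::new(), 0));
    for (key, partial) in rows {
        match groups.entry(key) {
            // Existing group: combine the new partial state with the stored one.
            Entry::Occupied(mut slot) => {
                let merged = merge(slot.get(), &partial);
                slot.insert(merged);
            }
            // New group in this window: store the partial state as-is.
            Entry::Vacant(slot) => {
                slot.insert(partial);
            }
        }
    }
    // Advance the progress marker so the next query starts after this batch.
    *seq = (*seq).max(up_to);
}
```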
#### 2. Incremental Computation Process
The computation process follows these steps:
1. **Trigger Evaluation**: Flownode determines when to trigger computation based on:
- Time intervals (periodic updates)
- Data volume thresholds
- Sequence progress requirements
2. **Query Execution**: Flownode executes `__aggr_state` queries using its embedded frontend with:
- Time window filters
- Sequence range constraints
3. **State Update**: Flownode receives partial state results and updates its internal state:
- Merges new values with existing aggregation state
- Updates sequence markers to track progress
- Identifies changed time windows for result computation
4. **Result Materialization**: Flownode computes final results using `__aggr_merge` operations:
- Processes only updated time windows (and time series) for efficiency
- Writes results back to datanode directly through its embedded frontend
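To make the loop concrete, here is a minimal sketch of the coordination described above; the `EmbeddedFrontend` trait, its method signatures, and the `Coordinator` shape are illustrative assumptions, not the actual interfaces:
```rust
use std::collections::{BTreeMap, HashMap};
// Hypothetical interface; names and signatures are illustrative only.
trait EmbeddedFrontend {
    /// Run an `__aggr_state` query bounded by a time window and a sequence range,
    /// returning (group key, partial state) pairs plus the sequence it advanced to.
    fn aggr_state(
        &self,
        window: (i64, i64),
        seq_range: (u64, u64),
    ) -> (Vec<(Vec<u8>, Vec<u8>)>, u64);
    /// Run `__aggr_merge` over the updated windows and write results to the datanode.
    fn materialize(&self, windows: &[i64]);
}
struct Coordinator<F: EmbeddedFrontend> {
    frontend: F,
    /// window start -> (group key -> partial state, last processed sequence)
    state: BTreeMap<i64, (HashMap<Vec<u8>, Vec<u8>>, u64)>,
}
impl<F: EmbeddedFrontend> Coordinator<F> {
    /// One round of the incremental computation process described above.
    fn tick(&mut self, dirty_windows: &[(i64, i64)], current_seq: u64) {
        let mut updated = Vec::new();
        for &(start, end) in dirty_windows {
            let last_seq = self.state.get(&start).map(|(_, s)| *s).unwrap_or(0);
            // Step 2: query execution, bounded by the time window and sequence range.
            let (rows, new_seq) = self
                .frontend
                .aggr_state((start, end), (last_seq, current_seq));
            // Step 3: state update (a real implementation merges partial states
            // instead of overwriting, as in the earlier merge sketch).
            let entry = self.state.entry(start).or_default();
            for (key, partial) in rows {
                entry.0.insert(key, partial);
            }
            entry.1 = entry.1.max(new_seq);
            updated.push(start);
        }
        // Step 4: result materialization, only for the windows that changed.
        self.frontend.materialize(&updated);
    }
}
```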
### Detailed Workflow
#### Incremental State Query
```sql
-- Example incremental state query executed by embedded frontend
SELECT
__aggr_state(avg(value)) as state,
time_window,
group_key
FROM source_table
WHERE
timestamp >= :window_start
AND timestamp < :window_end
AND __sequence >= :last_sequence
AND __sequence < :current_sequence
-- the sequence range is actually carried in the gRPC header, but shown here for clarity
GROUP BY time_window, group_key;
```
#### State Merge Process
```mermaid
sequenceDiagram
participant F as Flownode (Coordinator)
participant EF as Embedded Frontend (Lightweight)
participant DN as Datanode (Heavy Computation)
F->>F: Evaluate trigger conditions
F->>EF: Execute __aggr_state query with sequence range
EF->>DN: Send query to datanode (Heavy scan & aggregation)
DN->>DN: Scan data and compute partial aggregation state (Heavy CPU/I/O)
DN->>EF: Return aggregated state results
EF->>F: Forward state results (Lightweight merge)
F->>F: Merge with existing state
F->>F: Update sequence markers (Lightweight)
F->>EF: Compute incremental results with __aggr_merge
EF->>DN: Write incremental results to datanode
```
### Refill Implementation and State Management
#### Refill Process
Refill is implemented as a straightforward `__aggr_state` query with time and sequence constraints:
```sql
-- Refill query for flow state recovery
SELECT
__aggr_state(aggregation_functions) as state,
time_window,
group_keys
FROM source_table
WHERE
timestamp >= :refill_start_time
AND timestamp < :refill_end_time
AND __sequence >= :start_sequence
AND __sequence < :end_sequence
-- the sequence range is actually carried in the gRPC header, but shown here for clarity
GROUP BY time_window, group_keys;
```
#### State Recovery Strategy
1. **Recent Data (Stream Mode)**: For recent time windows, flownode refills state using incremental queries
2. **Historical Data (Batch Mode)**: For older time windows, flownode triggers batch computation directly, so there is no need to refill state
3. **Hybrid Approach**: Combines stream and batch processing based on data age and availability
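A minimal sketch of this age-based decision, assuming a tunable `stream_horizon` cutoff that is not specified by this RFC:
```rust
use std::time::Duration;
enum RefillMode {
    /// Recent window: refill state with an incremental `__aggr_state` query.
    Stream,
    /// Old window: trigger batch recomputation directly; no state refill needed.
    Batch,
}
/// Pick the processing mode for a time window based on its age.
/// `stream_horizon` is an assumed tunable (e.g. a few multiples of the window size).
fn choose_mode(now_ms: i64, window_end_ms: i64, stream_horizon: Duration) -> RefillMode {
    if now_ms.saturating_sub(window_end_ms) <= stream_horizon.as_millis() as i64 {
        RefillMode::Stream
    } else {
        RefillMode::Batch
    }
}
```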
#### Mirror Write Optimization
Mirror writes are simplified to only transmit timestamps to flownode:
```rust
struct MirrorWrite {
timestamps: Vec<Timestamp>,
// Removed: actual data payload
}
```
This optimization:
- Eliminates network overhead by using embedded frontend
- Enables flownode to track pending time windows efficiently
- Allows flownode to decide processing mode (stream vs batch) based on timestamp age
Another optimization could be to send only the dirty time-window ranges for each flow to the flownode directly, instead of sending timestamps one by one.
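A sketch of that alternative payload (field names are illustrative, not part of this RFC):
```rust
/// Hypothetical alternative to per-timestamp mirror writes:
/// send only the dirty time-window ranges per flow.
struct DirtyWindows {
    flow_id: u64,
    /// Half-open [start, end) window ranges (millisecond timestamps) that
    /// received new data since the last notification.
    ranges: Vec<(i64, i64)>,
}
```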
### Query Optimization Strategies
#### Sequence-Based Incremental Processing
The core optimization relies on sequence-constrained queries:
```sql
-- Optimized incremental query
SELECT __aggr_state(expr)
FROM table
WHERE time_range AND sequence_range
```
Benefits:
- **Reduced Scan Volume**: Only processes data since last computation
- **Efficient Resource Usage**: Minimizes CPU and I/O overhead
- **Predictable Performance**: Query cost scales with incremental data size
#### Time Window Partitioning
```mermaid
graph LR
subgraph "Time Windows"
W1["Window 1<br/>09:00-09:05"]
W2["Window 2<br/>09:05-09:10"]
W3["Window 3<br/>09:10-09:15"]
end
subgraph "Processing Strategy"
W1 --> Batch["Batch Mode<br/>(Old Data)"]
W2 --> Stream["Stream Mode<br/>(Recent Data)"]
W3 --> Stream2["Stream Mode<br/>(Current Data)"]
end
```
### Performance Characteristics
#### Memory Usage
- **Flownode**: O(active_time_windows × group_cardinality) for state storage
- **Embedded Frontend**: O(query_batch_size) for temporary computation
- **Overall**: Significantly reduced compared to current architecture
#### Computation Distribution
- **Direct Processing**: Queries processed directly within flownode's embedded frontend
- **Fault Tolerance**: Simplified error handling with fewer distributed components
- **Scalability**: Computation capacity scales with flownode instances
#### Network Optimization
- **Reduced Payload**: Mirror writes only contain timestamps
- **Efficient Queries**: Sequence constraints minimize data transfer
- **Result Caching**: State results cached in flownode memory
### Sequential Read Implementation for Incremental Queries
#### Sequence Management
Flow maintains two critical sequences to track incremental query progress for each region:
- **`memtable_last_seq`**: Tracks the latest sequence number read from the memtable
- **`sst_last_seq`**: Tracks the latest sequence number read from SST files
These sequences enable precise incremental data processing by defining the exact range of data to query in subsequent iterations.
#### Query Protocol
When executing incremental queries, flownode provides both sequence parameters to datanode:
```rust
struct GrpcHeader {
...
// Sequence tracking for incremental reads
memtable_last_seq: HashMap<RegionId, SequenceNumber>,
sst_last_seqs: HashMap<RegionId, SequenceNumber>,
}
```
The datanode processes these parameters to return only the data within the specified sequence ranges, ensuring efficient incremental processing.
#### Sequence Invalidation and Refill Mechanism
A critical challenge occurs when data referenced by `memtable_last_seq` gets flushed from memory to disk. Since SST files only maintain a single maximum sequence number for the entire file (rather than per-record sequence tracking), precise incremental queries become impossible for the affected time ranges.
**Detection of Invalidation:**
```rust
// When memtable_last_seq data has been flushed to SST
if memtable_last_seq_flushed_to_disk {
// Incremental query is no longer feasible
// Need to trigger refill for affected time ranges
}
```
**Refill Process:**
1. **Identify Affected Time Range**: Query the time range corresponding to the flushed `memtable_last_seq` data
2. **Full Recomputation**: Execute a complete aggregation query for the affected time windows
3. **State Replacement**: Replace the existing flow state for these time ranges with newly computed values
4. **Sequence Update**: Update `memtable_last_seq` to the current latest sequence, while `sst_last_seq` continues normal incremental updates
```sql
-- Refill query when memtable data has been flushed
SELECT
__aggr_state(aggregation_functions) as state,
time_window,
group_keys
FROM source_table
WHERE
timestamp >= :affected_time_start
AND timestamp < :affected_time_end
-- Full scan required since sequence precision is lost in SST
GROUP BY time_window, group_keys;
```
#### Datanode Implementation Requirements
Datanode must implement enhanced query processing capabilities to support sequence-based incremental reads:
**Input Processing:**
- Accept `memtable_last_seq` and `sst_last_seq` parameters in query requests
- Filter data based on sequence ranges across both memtable and SST storage layers
**Output Enhancement:**
```rust
struct OutputMeta {
pub plan: Option<Arc<dyn ExecutionPlan>>,
pub cost: OutputCost,
pub sequence_info: HashMap<RegionId, SequenceInfo>, // New field: sequence tracking for each region involved in the query
}
struct SequenceInfo {
// Sequence tracking for next iteration
max_memtable_seq: SequenceNumber, // Highest sequence from memtable in this result
max_sst_seq: SequenceNumber, // Highest sequence from SST in this result
}
```
**Sequence Tracking Logic:**
The datanode already implements `max_sst_seq` tracking in leader range reads; similar logic can be reused for `max_memtable_seq`.
#### Sequence Update Strategy
**Normal Incremental Updates:**
- Update both `memtable_last_seq` and `sst_last_seq` after successful query execution
- Use returned `max_memtable_seq` and `max_sst_seq` values for next iteration
**Refill Scenario:**
- Reset `memtable_last_seq` to current maximum after refill completion
- Continue normal `sst_last_seq` updates based on successful query responses
- Maintain separate tracking to detect future flush events
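A compact sketch of this per-region bookkeeping (assumed names; the real update path depends on how the datanode response is surfaced):
```rust
type SequenceNumber = u64;
/// Per-region progress tracked by the flow, mirroring the
/// `memtable_last_seq` / `sst_last_seq` pair described above.
#[derive(Default)]
struct RegionProgress {
    memtable_last_seq: SequenceNumber,
    sst_last_seq: SequenceNumber,
}
impl RegionProgress {
    /// Normal incremental update: advance both markers from the
    /// `SequenceInfo` returned with the query result.
    fn on_query_success(&mut self, max_memtable_seq: SequenceNumber, max_sst_seq: SequenceNumber) {
        self.memtable_last_seq = self.memtable_last_seq.max(max_memtable_seq);
        self.sst_last_seq = self.sst_last_seq.max(max_sst_seq);
    }
    /// Refill scenario: the tracked memtable data was flushed, so after the
    /// refill completes reset `memtable_last_seq` to the current maximum;
    /// `sst_last_seq` keeps its normal incremental updates.
    fn on_refill_complete(&mut self, current_max_memtable_seq: SequenceNumber) {
        self.memtable_last_seq = current_max_memtable_seq;
    }
}
```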
#### Performance Considerations
**Sequence Range Optimization:**
- Minimize sequence range spans to reduce scan overhead
- Batch multiple small incremental updates when beneficial
- Balance between query frequency and processing efficiency
**Memory Management:**
- Monitor memtable flush frequency to predict refill requirements
- Implement adaptive query scheduling based on flush patterns
- Optimize state storage to handle frequent updates efficiently
This sequential read implementation ensures reliable incremental processing while gracefully handling the complexities of storage architecture, maintaining both correctness and performance in the face of background compaction and flush operations.
## Implementation Plan
### Phase 1: Core Infrastructure
1. **State Management**: Implement in-memory state map in flownode
2. **Query Interface**: Integrate the `__aggr_state` query interface in the embedded frontend (already done in previous query pushdown optimizer work)
3. **Basic Coordination**: Implement query dispatch and result collection
4. **Sequence Tracking**: Implement sequence-based incremental processing (can reuse an interface similar to the one used by leader range reads)
After phase 1, the system should support basic flow operations with incremental updates.
### Phase 2: Optimization Features
1. **Refill Logic**: Develop state recovery mechanisms
2. **Mirror Write Optimization**: Simplify mirror write protocol
### Phase 3: Advanced Features
1. **Load Balancing**: Implement intelligent resource allocation for partitioned flows (a flow distributed and executed across multiple flownodes)
2. **Fault Tolerance**: Add retry mechanisms and error handling
3. **Performance Tuning**: Optimize query batching and state management
## Drawbacks
### Reduced Network Communication
- **Eliminated Hops**: Direct communication between flownode and datanode through embedded frontend
- **Reduced Latency**: No separate frontend node communication overhead
- **Simplified Network Topology**: Fewer network dependencies and failure points
### Complexity in Error Handling
- **Distributed Failures**: Need to handle failures across multiple components
- **State Consistency**: Ensuring state consistency during partial failures
- **Recovery Complexity**: More complex recovery procedures
### Datanode Resource Requirements
- **Computation Load**: Datanode handles the heavy computational workload for flow queries
- **Query Interference**: Flow queries may impact regular query performance on datanode
- **Resource Contention**: Need careful resource management and isolation on datanode
## Alternatives
### Alternative 1: Enhanced Current Architecture
Keep computation in flownode but optimize through:
- Better resource management
- Improved query optimization
- Enhanced state persistence
**Pros:**
- Simpler architecture
- Fewer network hops
- Easier debugging
**Cons:**
- Limited scalability
- Resource inefficiency
- Harder to optimize computation distribution
### Alternative 2: Embedded Computation
Embed lightweight computation engines within flownode:
**Pros:**
- Reduced network communication
- Better performance for simple queries
- Simpler deployment
**Cons:**
- Limited scalability
- Resource constraints
- Harder to leverage existing frontend optimizations
## Future Work
### Advanced Query Optimization
- **Parallel Processing**: Enable parallel execution of flow queries
- **Query Caching**: Cache frequently executed query patterns
### Enhanced State Management
- **State Compression**: Implement efficient state serialization
- **Distributed State**: Support state distribution across multiple flownodes
- **State Persistence**: Add optional state persistence for durability
### Monitoring and Observability
- **Performance Metrics**: Track query execution times and resource usage
- **State Visualization**: Provide tools for state inspection and debugging
- **Health Monitoring**: Monitor system health and performance characteristics
### Integration Improvements
- **Embedded Frontend Optimization**: Optimize embedded frontend query planning and execution
- **Datanode Optimization**: Optimize result writing from flownode
- **Metasrv Coordination**: Enhanced metadata management and coordination
## Conclusion
The laminar Flow architecture represents a significant improvement over the current flow system by separating state management from computation execution. This design enables better resource utilization, improved scalability, and simplified maintenance while maintaining the core functionality of continuous aggregation.
The key benefits include:
1. **Improved Scalability**: Computation can scale independently of state management
2. **Better Resource Utilization**: Eliminates network overhead and leverages embedded frontend infrastructure
3. **Simplified Architecture**: Clear separation of concerns between components
4. **Enhanced Performance**: Sequence-based incremental processing reduces computational overhead
While the architecture introduces some complexity in terms of distributed coordination and error handling, the benefits significantly outweigh the drawbacks, making it a compelling evolution of the flow system.

View File

@@ -1411,7 +1411,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "max(greptime_memory_limit_in_bytes{app=\"greptime-datanode\"})",
"expr": "max(greptime_memory_limit_in_bytes{instance=~\"$datanode\"})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1528,7 +1528,7 @@
},
"editorMode": "code",
"exemplar": false,
"expr": "max(greptime_cpu_limit_in_millicores{app=\"greptime-datanode\"})",
"expr": "max(greptime_cpu_limit_in_millicores{instance=~\"$datanode\"})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1643,7 +1643,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "max(greptime_memory_limit_in_bytes{app=\"greptime-frontend\"})",
"expr": "max(greptime_memory_limit_in_bytes{instance=~\"$frontend\"})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1760,7 +1760,7 @@
},
"editorMode": "code",
"exemplar": false,
"expr": "max(greptime_cpu_limit_in_millicores{app=\"greptime-frontend\"})",
"expr": "max(greptime_cpu_limit_in_millicores{instance=~\"$frontend\"})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1875,7 +1875,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "max(greptime_memory_limit_in_bytes{app=\"greptime-metasrv\"})",
"expr": "max(greptime_memory_limit_in_bytes{instance=~\"$metasrv\"})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1992,7 +1992,7 @@
},
"editorMode": "code",
"exemplar": false,
"expr": "max(greptime_cpu_limit_in_millicores{app=\"greptime-metasrv\"})",
"expr": "max(greptime_cpu_limit_in_millicores{instance=~\"$metasrv\"})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -2107,7 +2107,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "max(greptime_memory_limit_in_bytes{app=\"greptime-flownode\"})",
"expr": "max(greptime_memory_limit_in_bytes{instance=~\"$flownode\"})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -2224,7 +2224,7 @@
},
"editorMode": "code",
"exemplar": false,
"expr": "max(greptime_cpu_limit_in_millicores{app=\"greptime-flownode\"})",
"expr": "max(greptime_cpu_limit_in_millicores{instance=~\"$flownode\"})",
"hide": false,
"instant": false,
"legendFormat": "limit",

View File

@@ -21,14 +21,14 @@
# Resources
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-datanode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$datanode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-frontend"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$frontend"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$metasrv"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-flownode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$flownode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$datanode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$datanode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$datanode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$frontend"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$frontend"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$frontend"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$metasrv"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$metasrv"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$metasrv"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$flownode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$flownode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$flownode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
# Frontend Requests
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |

View File

@@ -187,7 +187,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-datanode"})
- expr: max(greptime_memory_limit_in_bytes{instance=~"$datanode"})
datasource:
type: prometheus
uid: ${metrics}
@@ -202,7 +202,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$datanode"})
datasource:
type: prometheus
uid: ${metrics}
@@ -217,7 +217,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-frontend"})
- expr: max(greptime_memory_limit_in_bytes{instance=~"$frontend"})
datasource:
type: prometheus
uid: ${metrics}
@@ -232,7 +232,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-cpu'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$frontend"})
datasource:
type: prometheus
uid: ${metrics}
@@ -247,7 +247,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-resident'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})
- expr: max(greptime_memory_limit_in_bytes{instance=~"$metasrv"})
datasource:
type: prometheus
uid: ${metrics}
@@ -262,7 +262,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$metasrv"})
datasource:
type: prometheus
uid: ${metrics}
@@ -277,7 +277,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-flownode"})
- expr: max(greptime_memory_limit_in_bytes{instance=~"$flownode"})
datasource:
type: prometheus
uid: ${metrics}
@@ -292,7 +292,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$flownode"})
datasource:
type: prometheus
uid: ${metrics}

View File

@@ -1411,7 +1411,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "max(greptime_memory_limit_in_bytes{app=\"greptime-datanode\"})",
"expr": "max(greptime_memory_limit_in_bytes{})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1528,7 +1528,7 @@
},
"editorMode": "code",
"exemplar": false,
"expr": "max(greptime_cpu_limit_in_millicores{app=\"greptime-datanode\"})",
"expr": "max(greptime_cpu_limit_in_millicores{})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1643,7 +1643,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "max(greptime_memory_limit_in_bytes{app=\"greptime-frontend\"})",
"expr": "max(greptime_memory_limit_in_bytes{})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1760,7 +1760,7 @@
},
"editorMode": "code",
"exemplar": false,
"expr": "max(greptime_cpu_limit_in_millicores{app=\"greptime-frontend\"})",
"expr": "max(greptime_cpu_limit_in_millicores{})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1875,7 +1875,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "max(greptime_memory_limit_in_bytes{app=\"greptime-metasrv\"})",
"expr": "max(greptime_memory_limit_in_bytes{})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -1992,7 +1992,7 @@
},
"editorMode": "code",
"exemplar": false,
"expr": "max(greptime_cpu_limit_in_millicores{app=\"greptime-metasrv\"})",
"expr": "max(greptime_cpu_limit_in_millicores{})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -2107,7 +2107,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "max(greptime_memory_limit_in_bytes{app=\"greptime-flownode\"})",
"expr": "max(greptime_memory_limit_in_bytes{})",
"hide": false,
"instant": false,
"legendFormat": "limit",
@@ -2224,7 +2224,7 @@
},
"editorMode": "code",
"exemplar": false,
"expr": "max(greptime_cpu_limit_in_millicores{app=\"greptime-flownode\"})",
"expr": "max(greptime_cpu_limit_in_millicores{})",
"hide": false,
"instant": false,
"legendFormat": "limit",

View File

@@ -21,14 +21,14 @@
# Resources
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-datanode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-frontend"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-flownode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
# Frontend Requests
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |

View File

@@ -187,7 +187,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-datanode"})
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
@@ -202,7 +202,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
@@ -217,7 +217,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-frontend"})
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
@@ -232,7 +232,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-cpu'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
@@ -247,7 +247,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-resident'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
@@ -262,7 +262,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
@@ -277,7 +277,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-flownode"})
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
@@ -292,7 +292,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}

scripts/generate_certs.sh (new executable file, 41 lines)
View File

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -euo pipefail
CERT_DIR="${1:-$(dirname "$0")/../tests-integration/fixtures/certs}"
DAYS="${2:-365}"
mkdir -p "${CERT_DIR}"
cd "${CERT_DIR}"
echo "Generating CA certificate..."
openssl req -new -x509 -days "${DAYS}" -nodes -text \
-out root.crt -keyout root.key \
-subj "/CN=GreptimeDBRootCA"
echo "Generating server certificate..."
openssl req -new -nodes -text \
-out server.csr -keyout server.key \
-subj "/CN=greptime"
openssl x509 -req -in server.csr -text -days "${DAYS}" \
-CA root.crt -CAkey root.key -CAcreateserial \
-out server.crt \
-extensions v3_req -extfile <(printf "[v3_req]\nsubjectAltName=DNS:localhost,IP:127.0.0.1")
echo "Generating client certificate..."
# Make sure the client certificate is for the greptimedb user
openssl req -new -nodes -text \
-out client.csr -keyout client.key \
-subj "/CN=greptimedb"
openssl x509 -req -in client.csr -CA root.crt -CAkey root.key -CAcreateserial \
-out client.crt -days "${DAYS}" -extensions v3_req -extfile <(printf "[v3_req]\nsubjectAltName=DNS:localhost")
rm -f *.csr
echo "TLS certificates generated successfully in ${CERT_DIR}"
chmod 644 root.key
chmod 644 client.key
chmod 644 server.key
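
A minimal smoke-test sketch for the script above (assumptions: it is invoked from the repository root, `bash` and `openssl` are available, and the temporary directory name is made up for illustration):

```rust
use std::process::Command;

fn generate_certs_smoke() {
    // Hypothetical output directory; the script takes it as $1 and the validity in days as $2.
    let out_dir = std::env::temp_dir().join("greptime_certs_example");
    let status = Command::new("bash")
        .arg("scripts/generate_certs.sh")
        .arg(&out_dir)
        .arg("30")
        .status()
        .expect("failed to run generate_certs.sh");
    assert!(status.success());

    // The script leaves a CA plus server/client key pairs and removes the CSRs.
    for file in [
        "root.crt", "root.key",
        "server.crt", "server.key",
        "client.crt", "client.key",
    ] {
        assert!(out_dir.join(file).exists(), "missing {file}");
    }
}
```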

View File

@@ -25,7 +25,7 @@ pub use common::{
HashedPassword, Identity, Password, auth_mysql, static_user_provider_from_option,
user_provider_from_option, userinfo_by_name,
};
pub use permission::{PermissionChecker, PermissionReq, PermissionResp};
pub use permission::{DefaultPermissionChecker, PermissionChecker, PermissionReq, PermissionResp};
pub use user_info::UserInfo;
pub use user_provider::UserProvider;
pub use user_provider::static_user_provider::StaticUserProvider;

View File

@@ -13,12 +13,15 @@
// limitations under the License.
use std::fmt::Debug;
use std::sync::Arc;
use api::v1::greptime_request::Request;
use common_telemetry::debug;
use sql::statements::statement::Statement;
use crate::error::{PermissionDeniedSnafu, Result};
use crate::{PermissionCheckerRef, UserInfoRef};
use crate::user_info::DefaultUserInfo;
use crate::{PermissionCheckerRef, UserInfo, UserInfoRef};
#[derive(Debug, Clone)]
pub enum PermissionReq<'a> {
@@ -35,6 +38,32 @@ pub enum PermissionReq<'a> {
BulkInsert,
}
impl<'a> PermissionReq<'a> {
/// Returns true if the permission request is for read operations.
pub fn is_readonly(&self) -> bool {
match self {
PermissionReq::GrpcRequest(Request::Query(_))
| PermissionReq::PromQuery
| PermissionReq::LogQuery
| PermissionReq::PromStoreRead => true,
PermissionReq::SqlStatement(stmt) => stmt.is_readonly(),
PermissionReq::GrpcRequest(_)
| PermissionReq::Opentsdb
| PermissionReq::LineProtocol
| PermissionReq::PromStoreWrite
| PermissionReq::Otlp
| PermissionReq::LogWrite
| PermissionReq::BulkInsert => false,
}
}
/// Returns true if the permission request is for write operations.
pub fn is_write(&self) -> bool {
!self.is_readonly()
}
}
#[derive(Debug)]
pub enum PermissionResp {
Allow,
@@ -65,3 +94,106 @@ impl PermissionChecker for Option<&PermissionCheckerRef> {
}
}
}
/// The default permission checker implementation.
/// It checks the permission mode of [DefaultUserInfo].
pub struct DefaultPermissionChecker;
impl DefaultPermissionChecker {
/// Returns a new [PermissionCheckerRef] instance.
pub fn arc() -> PermissionCheckerRef {
Arc::new(DefaultPermissionChecker)
}
}
impl PermissionChecker for DefaultPermissionChecker {
fn check_permission(
&self,
user_info: UserInfoRef,
req: PermissionReq,
) -> Result<PermissionResp> {
if let Some(default_user) = user_info.as_any().downcast_ref::<DefaultUserInfo>() {
let permission_mode = default_user.permission_mode();
if req.is_readonly() && !permission_mode.can_read() {
debug!(
"Permission denied: read operation not allowed, user = {}, permission = {}",
default_user.username(),
permission_mode.as_str()
);
return Ok(PermissionResp::Reject);
}
if req.is_write() && !permission_mode.can_write() {
debug!(
"Permission denied: write operation not allowed, user = {}, permission = {}",
default_user.username(),
permission_mode.as_str()
);
return Ok(PermissionResp::Reject);
}
}
// default allow all
Ok(PermissionResp::Allow)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::user_info::PermissionMode;
#[test]
fn test_default_permission_checker_allow_all_operations() {
let checker = DefaultPermissionChecker;
let user_info =
DefaultUserInfo::with_name_and_permission("test_user", PermissionMode::ReadWrite);
let read_req = PermissionReq::PromQuery;
let write_req = PermissionReq::PromStoreWrite;
let read_result = checker
.check_permission(user_info.clone(), read_req)
.unwrap();
let write_result = checker.check_permission(user_info, write_req).unwrap();
assert!(matches!(read_result, PermissionResp::Allow));
assert!(matches!(write_result, PermissionResp::Allow));
}
#[test]
fn test_default_permission_checker_readonly_user() {
let checker = DefaultPermissionChecker;
let user_info =
DefaultUserInfo::with_name_and_permission("readonly_user", PermissionMode::ReadOnly);
let read_req = PermissionReq::PromQuery;
let write_req = PermissionReq::PromStoreWrite;
let read_result = checker
.check_permission(user_info.clone(), read_req)
.unwrap();
let write_result = checker.check_permission(user_info, write_req).unwrap();
assert!(matches!(read_result, PermissionResp::Allow));
assert!(matches!(write_result, PermissionResp::Reject));
}
#[test]
fn test_default_permission_checker_writeonly_user() {
let checker = DefaultPermissionChecker;
let user_info =
DefaultUserInfo::with_name_and_permission("writeonly_user", PermissionMode::WriteOnly);
let read_req = PermissionReq::LogQuery;
let write_req = PermissionReq::LogWrite;
let read_result = checker
.check_permission(user_info.clone(), read_req)
.unwrap();
let write_result = checker.check_permission(user_info, write_req).unwrap();
assert!(matches!(read_result, PermissionResp::Reject));
assert!(matches!(write_result, PermissionResp::Allow));
}
}
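
The unit tests above call the checker by value; a minimal sketch of the Arc-wrapped path (assumed to live inside this crate, since `DefaultUserInfo`'s constructors are crate-private) that combines `PermissionReq`'s new read/write split with `DefaultPermissionChecker::arc()`:

```rust
use crate::user_info::{DefaultUserInfo, PermissionMode};
use crate::{DefaultPermissionChecker, PermissionChecker, PermissionReq, PermissionResp};

fn reject_write_from_readonly_user() -> crate::error::Result<()> {
    // PermissionReq now classifies itself as read or write.
    assert!(PermissionReq::PromQuery.is_readonly());
    assert!(PermissionReq::LogWrite.is_write());

    // Arc-wrapped checker, as it would be handed out via `PermissionCheckerRef`.
    let checker = DefaultPermissionChecker::arc();
    let user = DefaultUserInfo::with_name_and_permission("ro_user", PermissionMode::ReadOnly);

    assert!(matches!(
        checker.check_permission(user.clone(), PermissionReq::PromQuery)?,
        PermissionResp::Allow
    ));
    assert!(matches!(
        checker.check_permission(user, PermissionReq::LogWrite)?,
        PermissionResp::Reject
    ));
    Ok(())
}
```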

View File

@@ -23,17 +23,86 @@ pub trait UserInfo: Debug + Sync + Send {
fn username(&self) -> &str;
}
/// The user permission mode
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum PermissionMode {
#[default]
ReadWrite,
ReadOnly,
WriteOnly,
}
impl PermissionMode {
/// Parse permission mode from string.
/// Supported values are:
/// - "rw", "readwrite", "read_write" => ReadWrite
/// - "ro", "readonly", "read_only" => ReadOnly
/// - "wo", "writeonly", "write_only" => WriteOnly
/// Unrecognized values fall back to `ReadWrite`.
pub fn from_str(s: &str) -> Self {
match s.to_lowercase().as_str() {
"readwrite" | "read_write" | "rw" => PermissionMode::ReadWrite,
"readonly" | "read_only" | "ro" => PermissionMode::ReadOnly,
"writeonly" | "write_only" | "wo" => PermissionMode::WriteOnly,
_ => PermissionMode::ReadWrite,
}
}
/// Convert permission mode to string.
/// - ReadWrite => "rw"
/// - ReadOnly => "ro"
/// - WriteOnly => "wo"
/// The returned string is a static string slice.
pub fn as_str(&self) -> &'static str {
match self {
PermissionMode::ReadWrite => "rw",
PermissionMode::ReadOnly => "ro",
PermissionMode::WriteOnly => "wo",
}
}
/// Returns true if the permission mode allows read operations.
pub fn can_read(&self) -> bool {
matches!(self, PermissionMode::ReadWrite | PermissionMode::ReadOnly)
}
/// Returns true if the permission mode allows write operations.
pub fn can_write(&self) -> bool {
matches!(self, PermissionMode::ReadWrite | PermissionMode::WriteOnly)
}
}
impl std::fmt::Display for PermissionMode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.as_str())
}
}
#[derive(Debug)]
pub(crate) struct DefaultUserInfo {
username: String,
permission_mode: PermissionMode,
}
impl DefaultUserInfo {
pub(crate) fn with_name(username: impl Into<String>) -> UserInfoRef {
Self::with_name_and_permission(username, PermissionMode::default())
}
/// Create a UserInfo with specified permission mode.
pub(crate) fn with_name_and_permission(
username: impl Into<String>,
permission_mode: PermissionMode,
) -> UserInfoRef {
Arc::new(Self {
username: username.into(),
permission_mode,
})
}
pub(crate) fn permission_mode(&self) -> &PermissionMode {
&self.permission_mode
}
}
impl UserInfo for DefaultUserInfo {
@@ -45,3 +114,120 @@ impl UserInfo for DefaultUserInfo {
self.username.as_str()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_permission_mode_from_str() {
// Test ReadWrite variants
assert_eq!(
PermissionMode::from_str("readwrite"),
PermissionMode::ReadWrite
);
assert_eq!(
PermissionMode::from_str("read_write"),
PermissionMode::ReadWrite
);
assert_eq!(PermissionMode::from_str("rw"), PermissionMode::ReadWrite);
assert_eq!(
PermissionMode::from_str("ReadWrite"),
PermissionMode::ReadWrite
);
assert_eq!(PermissionMode::from_str("RW"), PermissionMode::ReadWrite);
// Test ReadOnly variants
assert_eq!(
PermissionMode::from_str("readonly"),
PermissionMode::ReadOnly
);
assert_eq!(
PermissionMode::from_str("read_only"),
PermissionMode::ReadOnly
);
assert_eq!(PermissionMode::from_str("ro"), PermissionMode::ReadOnly);
assert_eq!(
PermissionMode::from_str("ReadOnly"),
PermissionMode::ReadOnly
);
assert_eq!(PermissionMode::from_str("RO"), PermissionMode::ReadOnly);
// Test WriteOnly variants
assert_eq!(
PermissionMode::from_str("writeonly"),
PermissionMode::WriteOnly
);
assert_eq!(
PermissionMode::from_str("write_only"),
PermissionMode::WriteOnly
);
assert_eq!(PermissionMode::from_str("wo"), PermissionMode::WriteOnly);
assert_eq!(
PermissionMode::from_str("WriteOnly"),
PermissionMode::WriteOnly
);
assert_eq!(PermissionMode::from_str("WO"), PermissionMode::WriteOnly);
// Test invalid inputs default to ReadWrite
assert_eq!(
PermissionMode::from_str("invalid"),
PermissionMode::ReadWrite
);
assert_eq!(PermissionMode::from_str(""), PermissionMode::ReadWrite);
assert_eq!(PermissionMode::from_str("xyz"), PermissionMode::ReadWrite);
}
#[test]
fn test_permission_mode_as_str() {
assert_eq!(PermissionMode::ReadWrite.as_str(), "rw");
assert_eq!(PermissionMode::ReadOnly.as_str(), "ro");
assert_eq!(PermissionMode::WriteOnly.as_str(), "wo");
}
#[test]
fn test_permission_mode_default() {
assert_eq!(PermissionMode::default(), PermissionMode::ReadWrite);
}
#[test]
fn test_permission_mode_round_trip() {
let modes = [
PermissionMode::ReadWrite,
PermissionMode::ReadOnly,
PermissionMode::WriteOnly,
];
for mode in modes {
let str_repr = mode.as_str();
let parsed = PermissionMode::from_str(str_repr);
assert_eq!(mode, parsed);
}
}
#[test]
fn test_default_user_info_with_name() {
let user_info = DefaultUserInfo::with_name("test_user");
assert_eq!(user_info.username(), "test_user");
}
#[test]
fn test_default_user_info_with_name_and_permission() {
let user_info =
DefaultUserInfo::with_name_and_permission("test_user", PermissionMode::ReadOnly);
assert_eq!(user_info.username(), "test_user");
// Cast to DefaultUserInfo to access permission_mode
let default_user = user_info
.as_any()
.downcast_ref::<DefaultUserInfo>()
.unwrap();
assert_eq!(default_user.permission_mode, PermissionMode::ReadOnly);
}
#[test]
fn test_user_info_as_any() {
let user_info = DefaultUserInfo::with_name("test_user");
let any_ref = user_info.as_any();
assert!(any_ref.downcast_ref::<DefaultUserInfo>().is_some());
}
}
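
The tests above cover parsing and round-tripping but not the read/write gates; a hypothetical extra test that would sit inside the same `tests` module (so `use super::*;` is already in scope):

```rust
#[test]
fn permission_mode_gates() {
    // ReadWrite allows both directions; ReadOnly gates writes; WriteOnly gates reads.
    assert!(PermissionMode::ReadWrite.can_read() && PermissionMode::ReadWrite.can_write());
    assert!(PermissionMode::ReadOnly.can_read() && !PermissionMode::ReadOnly.can_write());
    assert!(PermissionMode::WriteOnly.can_write() && !PermissionMode::WriteOnly.can_read());
}
```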

View File

@@ -29,7 +29,7 @@ use crate::error::{
IllegalParamSnafu, InvalidConfigSnafu, IoSnafu, Result, UnsupportedPasswordTypeSnafu,
UserNotFoundSnafu, UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::user_info::{DefaultUserInfo, PermissionMode};
use crate::{UserInfoRef, auth_mysql};
#[async_trait::async_trait]
@@ -64,11 +64,19 @@ pub trait UserProvider: Send + Sync {
}
}
fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Vec<u8>>>> {
/// Type alias for user info map
/// Key is username, value is (password, permission_mode)
pub type UserInfoMap = HashMap<String, (Vec<u8>, PermissionMode)>;
fn load_credential_from_file(filepath: &str) -> Result<UserInfoMap> {
// check valid path
let path = Path::new(filepath);
if !path.exists() {
return Ok(None);
return InvalidConfigSnafu {
value: filepath.to_string(),
msg: "UserProvider file must exist",
}
.fail();
}
ensure!(
@@ -83,13 +91,19 @@ fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Ve
.lines()
.map_while(std::result::Result::ok)
.filter_map(|line| {
if let Some((k, v)) = line.split_once('=') {
Some((k.to_string(), v.as_bytes().to_vec()))
} else {
None
// The line format is:
// - `username=password` - Basic user with default permissions
// - `username:permission_mode=password` - User with specific permission mode
// - Lines starting with '#' are treated as comments and ignored
// - Empty lines are ignored
let line = line.trim();
if line.is_empty() || line.starts_with('#') {
return None;
}
parse_credential_line(line)
})
.collect::<HashMap<String, Vec<u8>>>();
.collect::<HashMap<String, _>>();
ensure!(
!credential.is_empty(),
@@ -99,11 +113,31 @@ fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Ve
}
);
Ok(Some(credential))
Ok(credential)
}
/// Parses a credential line in the format `username=password` or `username:permission_mode=password`.
pub(crate) fn parse_credential_line(line: &str) -> Option<(String, (Vec<u8>, PermissionMode))> {
let parts = line.split('=').collect::<Vec<&str>>();
if parts.len() != 2 {
return None;
}
let (username_part, password) = (parts[0], parts[1]);
let (username, permission_mode) = if let Some((user, perm)) = username_part.split_once(':') {
(user, PermissionMode::from_str(perm))
} else {
(username_part, PermissionMode::default())
};
Some((
username.to_string(),
(password.as_bytes().to_vec(), permission_mode),
))
}
fn authenticate_with_credential(
users: &HashMap<String, Vec<u8>>,
users: &UserInfoMap,
input_id: Identity<'_>,
input_pwd: Password<'_>,
) -> Result<UserInfoRef> {
@@ -115,7 +149,7 @@ fn authenticate_with_credential(
msg: "blank username"
}
);
let save_pwd = users.get(username).context(UserNotFoundSnafu {
let (save_pwd, permission_mode) = users.get(username).context(UserNotFoundSnafu {
username: username.to_string(),
})?;
@@ -128,7 +162,10 @@ fn authenticate_with_credential(
}
);
if save_pwd == pwd.expose_secret().as_bytes() {
Ok(DefaultUserInfo::with_name(username))
Ok(DefaultUserInfo::with_name_and_permission(
username,
*permission_mode,
))
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
@@ -137,8 +174,9 @@ fn authenticate_with_credential(
}
}
Password::MysqlNativePassword(auth_data, salt) => {
auth_mysql(auth_data, salt, username, save_pwd)
.map(|_| DefaultUserInfo::with_name(username))
auth_mysql(auth_data, salt, username, save_pwd).map(|_| {
DefaultUserInfo::with_name_and_permission(username, *permission_mode)
})
}
Password::PgMD5(_, _) => UnsupportedPasswordTypeSnafu {
password_type: "pg_md5",
@@ -148,3 +186,108 @@ fn authenticate_with_credential(
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_credential_line() {
// Basic username=password format
let result = parse_credential_line("admin=password123");
assert_eq!(
result,
Some((
"admin".to_string(),
("password123".as_bytes().to_vec(), PermissionMode::default())
))
);
// Username with permission mode
let result = parse_credential_line("user:ReadOnly=secret");
assert_eq!(
result,
Some((
"user".to_string(),
("secret".as_bytes().to_vec(), PermissionMode::ReadOnly)
))
);
let result = parse_credential_line("user:ro=secret");
assert_eq!(
result,
Some((
"user".to_string(),
("secret".as_bytes().to_vec(), PermissionMode::ReadOnly)
))
);
// Username with WriteOnly permission mode
let result = parse_credential_line("writer:WriteOnly=mypass");
assert_eq!(
result,
Some((
"writer".to_string(),
("mypass".as_bytes().to_vec(), PermissionMode::WriteOnly)
))
);
// Username with 'wo' as WriteOnly permission shorthand
let result = parse_credential_line("writer:wo=mypass");
assert_eq!(
result,
Some((
"writer".to_string(),
("mypass".as_bytes().to_vec(), PermissionMode::WriteOnly)
))
);
// Username with complex password containing special characters
let result = parse_credential_line("admin:rw=p@ssw0rd!123");
assert_eq!(
result,
Some((
"admin".to_string(),
(
"p@ssw0rd!123".as_bytes().to_vec(),
PermissionMode::ReadWrite
)
))
);
// Username with spaces should be preserved
let result = parse_credential_line("user name:WriteOnly=password");
assert_eq!(
result,
Some((
"user name".to_string(),
("password".as_bytes().to_vec(), PermissionMode::WriteOnly)
))
);
// Invalid format - no equals sign
let result = parse_credential_line("invalid_line");
assert_eq!(result, None);
// Invalid format - multiple equals signs
let result = parse_credential_line("user=pass=word");
assert_eq!(result, None);
// Empty password
let result = parse_credential_line("user=");
assert_eq!(
result,
Some((
"user".to_string(),
("".as_bytes().to_vec(), PermissionMode::default())
))
);
// Empty username
let result = parse_credential_line("=password");
assert_eq!(
result,
Some((
"".to_string(),
("password".as_bytes().to_vec(), PermissionMode::default())
))
);
}
}
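
A sketch of the file format end to end (assumed to sit inside this module's `tests`, since `load_credential_from_file` is private to `user_provider`; the file contents and names are made up): it writes a users file mixing the plain and permission-qualified forms described above and checks the resulting `UserInfoMap`.

```rust
#[test]
fn load_users_file_with_permissions() {
    let dir = std::env::temp_dir().join("auth_users_example");
    std::fs::create_dir_all(&dir).unwrap();
    let path = dir.join("users");
    std::fs::write(
        &path,
        "# comments and blank lines are ignored\n\
         admin=admin_pwd\n\
         reporter:ro=view_pwd\n\
         ingestor:wo=write_pwd\n",
    )
    .unwrap();

    let users = load_credential_from_file(path.to_str().unwrap()).unwrap();
    assert_eq!(users.len(), 3);
    assert_eq!(users["admin"].1, PermissionMode::ReadWrite);
    assert_eq!(users["reporter"].1, PermissionMode::ReadOnly);
    assert_eq!(users["ingestor"].1, PermissionMode::WriteOnly);
    assert_eq!(users["reporter"].0, b"view_pwd".to_vec());
}
```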

View File

@@ -12,19 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use async_trait::async_trait;
use snafu::{OptionExt, ResultExt};
use crate::error::{FromUtf8Snafu, InvalidConfigSnafu, Result};
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::user_provider::{
UserInfoMap, authenticate_with_credential, load_credential_from_file, parse_credential_line,
};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const STATIC_USER_PROVIDER: &str = "static_user_provider";
pub struct StaticUserProvider {
users: HashMap<String, Vec<u8>>,
users: UserInfoMap,
}
impl StaticUserProvider {
@@ -35,23 +35,18 @@ impl StaticUserProvider {
})?;
match mode {
"file" => {
let users = load_credential_from_file(content)?
.context(InvalidConfigSnafu {
value: content.to_string(),
msg: "StaticFileUserProvider must be a valid file path",
})?;
let users = load_credential_from_file(content)?;
Ok(StaticUserProvider { users })
}
"cmd" => content
.split(',')
.map(|kv| {
let (k, v) = kv.split_once('=').context(InvalidConfigSnafu {
parse_credential_line(kv).context(InvalidConfigSnafu {
value: kv.to_string(),
msg: "StaticUserProviderOption cmd values must be in format `user=pwd[,user=pwd]`",
})?;
Ok((k.to_string(), v.as_bytes().to_vec()))
})
})
.collect::<Result<HashMap<String, Vec<u8>>>>()
.collect::<Result<UserInfoMap>>()
.map(|users| StaticUserProvider { users }),
_ => InvalidConfigSnafu {
value: mode.to_string(),
@@ -69,7 +64,7 @@ impl StaticUserProvider {
msg: "Expect at least one pair of username and password",
})?;
let username = kv.0;
let pwd = String::from_utf8(kv.1.clone()).context(FromUtf8Snafu)?;
let pwd = String::from_utf8(kv.1.0.clone()).context(FromUtf8Snafu)?;
Ok((username.clone(), pwd))
}
}

View File

@@ -12,7 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::path::Path;
use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
@@ -23,17 +22,17 @@ use notify::{EventKind, RecursiveMode, Watcher};
use snafu::{ResultExt, ensure};
use crate::error::{FileWatchSnafu, InvalidConfigSnafu, Result};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::user_provider::{UserInfoMap, authenticate_with_credential, load_credential_from_file};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const WATCH_FILE_USER_PROVIDER: &str = "watch_file_user_provider";
type WatchedCredentialRef = Arc<Mutex<Option<HashMap<String, Vec<u8>>>>>;
type WatchedCredentialRef = Arc<Mutex<UserInfoMap>>;
/// A user provider that reads user credential from a file and watches the file for changes.
///
/// Empty file is invalid; but file not exist means every user can be authenticated.
/// Both empty file and non-existent file are invalid and will cause initialization to fail.
#[derive(Debug)]
pub(crate) struct WatchFileUserProvider {
users: WatchedCredentialRef,
}
@@ -108,16 +107,7 @@ impl UserProvider for WatchFileUserProvider {
async fn authenticate(&self, id: Identity<'_>, password: Password<'_>) -> Result<UserInfoRef> {
let users = self.users.lock().expect("users credential must be valid");
if let Some(users) = users.as_ref() {
authenticate_with_credential(users, id, password)
} else {
match id {
Identity::UserId(id, _) => {
warn!(id, "User provider file not exist, allow all users");
Ok(DefaultUserInfo::with_name(id))
}
}
}
authenticate_with_credential(&users, id, password)
}
async fn authorize(&self, _: &str, _: &str, _: &UserInfoRef) -> Result<()> {
@@ -178,6 +168,21 @@ pub mod test {
}
}
#[tokio::test]
async fn test_file_provider_initialization_with_missing_file() {
common_telemetry::init_default_ut_logging();
let dir = create_temp_dir("test_missing_file");
let file_path = format!("{}/non_existent_file", dir.path().to_str().unwrap());
// Try to create provider with non-existent file should fail
let result = WatchFileUserProvider::new(file_path.as_str());
assert!(result.is_err());
let error = result.unwrap_err();
assert!(error.to_string().contains("UserProvider file must exist"));
}
#[tokio::test]
async fn test_file_provider() {
common_telemetry::init_default_ut_logging();
@@ -202,9 +207,10 @@ pub mod test {
// remove the tmp file
assert!(std::fs::remove_file(&file_path).is_ok());
test_authenticate(&provider, "root", "123456", true, Some(timeout)).await;
// When the file is deleted at runtime, keep the last known good credentials
test_authenticate(&provider, "root", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "root", "123456", false, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", false, Some(timeout)).await;
// recreate the tmp file
assert!(std::fs::write(&file_path, "root=123456\n").is_ok());

View File

@@ -35,6 +35,7 @@ common-version.workspace = true
common-workload.workspace = true
dashmap.workspace = true
datafusion.workspace = true
datafusion-pg-catalog.workspace = true
datatypes.workspace = true
futures.workspace = true
futures-util.workspace = true
@@ -48,7 +49,6 @@ paste.workspace = true
prometheus.workspace = true
promql-parser.workspace = true
rand.workspace = true
rustc-hash.workspace = true
serde.workspace = true
serde_json.workspace = true
session.workspace = true

View File

@@ -137,21 +137,24 @@ impl DataSource for SystemTableDataSource {
&self,
request: ScanRequest,
) -> std::result::Result<SendableRecordBatchStream, BoxedError> {
let projection = request.projection.clone();
let projected_schema = match &projection {
let projected_schema = match &request.projection {
Some(projection) => self.try_project(projection)?,
None => self.table.schema(),
};
let projection = request.projection.clone();
let stream = self
.table
.to_stream(request)
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)
.map_err(BoxedError::new)?
.map(move |batch| match &projection {
Some(p) => batch.and_then(|b| b.try_project(p)),
None => batch,
.map(move |batch| match (&projection, batch) {
// Some tables (e.g., inspect tables) already honor the projection in their inner stream,
// while others ignore it and return full rows. Only apply the projection here when the
// inner batch width doesn't match the projection size.
(Some(p), Ok(b)) if b.num_columns() != p.len() => b.try_project(p),
(_, res) => res,
});
let stream = RecordBatchStreamWrapper {

View File

@@ -24,6 +24,7 @@ pub mod region_peers;
mod region_statistics;
mod runtime_metrics;
pub mod schemata;
mod ssts;
mod table_constraints;
mod table_names;
pub mod tables;
@@ -66,6 +67,9 @@ use crate::system_schema::information_schema::partitions::InformationSchemaParti
use crate::system_schema::information_schema::region_peers::InformationSchemaRegionPeers;
use crate::system_schema::information_schema::runtime_metrics::InformationSchemaMetrics;
use crate::system_schema::information_schema::schemata::InformationSchemaSchemata;
use crate::system_schema::information_schema::ssts::{
InformationSchemaSstsManifest, InformationSchemaSstsStorage,
};
use crate::system_schema::information_schema::table_constraints::InformationSchemaTableConstraints;
use crate::system_schema::information_schema::tables::InformationSchemaTables;
use crate::system_schema::memory_table::MemoryTable;
@@ -253,6 +257,12 @@ impl SystemSchemaProviderInner for InformationSchemaProvider {
.process_manager
.as_ref()
.map(|p| Arc::new(InformationSchemaProcessList::new(p.clone())) as _),
SSTS_MANIFEST => Some(Arc::new(InformationSchemaSstsManifest::new(
self.catalog_manager.clone(),
)) as _),
SSTS_STORAGE => Some(Arc::new(InformationSchemaSstsStorage::new(
self.catalog_manager.clone(),
)) as _),
_ => None,
}
}
@@ -324,6 +334,14 @@ impl InformationSchemaProvider {
REGION_STATISTICS.to_string(),
self.build_table(REGION_STATISTICS).unwrap(),
);
tables.insert(
SSTS_MANIFEST.to_string(),
self.build_table(SSTS_MANIFEST).unwrap(),
);
tables.insert(
SSTS_STORAGE.to_string(),
self.build_table(SSTS_STORAGE).unwrap(),
);
}
tables.insert(TABLES.to_string(), self.build_table(TABLES).unwrap());

View File

@@ -26,12 +26,11 @@ use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatch
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::timestamp::TimestampMicrosecond;
use datatypes::timestamp::TimestampSecond;
use datatypes::value::Value;
use datatypes::vectors::{
ConstantVector, Int64Vector, Int64VectorBuilder, MutableVector, StringVector,
StringVectorBuilder, TimestampMicrosecondVector, TimestampMicrosecondVectorBuilder,
UInt64VectorBuilder,
StringVectorBuilder, TimestampSecondVector, TimestampSecondVectorBuilder, UInt64VectorBuilder,
};
use futures::{StreamExt, TryStreamExt};
use partition::manager::PartitionInfo;
@@ -129,17 +128,17 @@ impl InformationSchemaPartitions {
ColumnSchema::new("data_free", ConcreteDataType::int64_datatype(), true),
ColumnSchema::new(
"create_time",
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_second_datatype(),
true,
),
ColumnSchema::new(
"update_time",
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_second_datatype(),
true,
),
ColumnSchema::new(
"check_time",
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_second_datatype(),
true,
),
ColumnSchema::new("checksum", ConcreteDataType::int64_datatype(), true),
@@ -212,7 +211,7 @@ struct InformationSchemaPartitionsBuilder {
partition_names: StringVectorBuilder,
partition_ordinal_positions: Int64VectorBuilder,
partition_expressions: StringVectorBuilder,
create_times: TimestampMicrosecondVectorBuilder,
create_times: TimestampSecondVectorBuilder,
partition_ids: UInt64VectorBuilder,
}
@@ -232,7 +231,7 @@ impl InformationSchemaPartitionsBuilder {
partition_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
partition_ordinal_positions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
partition_expressions: StringVectorBuilder::with_capacity(INIT_CAPACITY),
create_times: TimestampMicrosecondVectorBuilder::with_capacity(INIT_CAPACITY),
create_times: TimestampSecondVectorBuilder::with_capacity(INIT_CAPACITY),
partition_ids: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
}
}
@@ -331,8 +330,8 @@ impl InformationSchemaPartitionsBuilder {
.push(Some((index + 1) as i64));
let expression = partition.partition_expr.as_ref().map(|e| e.to_string());
self.partition_expressions.push(expression.as_deref());
self.create_times.push(Some(TimestampMicrosecond::from(
table_info.meta.created_on.timestamp_millis(),
self.create_times.push(Some(TimestampSecond::from(
table_info.meta.created_on.timestamp(),
)));
self.partition_ids.push(Some(partition.id.as_u64()));
}
@@ -349,8 +348,8 @@ impl InformationSchemaPartitionsBuilder {
Arc::new(Int64Vector::from(vec![None])),
rows_num,
));
let null_timestampmicrosecond_vector = Arc::new(ConstantVector::new(
Arc::new(TimestampMicrosecondVector::from(vec![None])),
let null_timestamp_second_vector = Arc::new(ConstantVector::new(
Arc::new(TimestampSecondVector::from(vec![None])),
rows_num,
));
let partition_methods = Arc::new(ConstantVector::new(
@@ -380,8 +379,8 @@ impl InformationSchemaPartitionsBuilder {
null_i64_vector.clone(),
Arc::new(self.create_times.finish()),
// TODO(dennis): supports update_time
null_timestampmicrosecond_vector.clone(),
null_timestampmicrosecond_vector,
null_timestamp_second_vector.clone(),
null_timestamp_second_vector,
null_i64_vector,
null_string_vector.clone(),
null_string_vector.clone(),

View File

@@ -0,0 +1,142 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::{Arc, Weak};
use common_catalog::consts::{
INFORMATION_SCHEMA_SSTS_MANIFEST_TABLE_ID, INFORMATION_SCHEMA_SSTS_STORAGE_TABLE_ID,
};
use common_error::ext::BoxedError;
use common_recordbatch::SendableRecordBatchStream;
use common_recordbatch::adapter::AsyncRecordBatchStreamAdapter;
use datatypes::schema::SchemaRef;
use snafu::ResultExt;
use store_api::sst_entry::{ManifestSstEntry, StorageSstEntry};
use store_api::storage::{ScanRequest, TableId};
use crate::CatalogManager;
use crate::error::{ProjectSchemaSnafu, Result};
use crate::information_schema::{
DatanodeInspectKind, DatanodeInspectRequest, InformationTable, SSTS_MANIFEST, SSTS_STORAGE,
};
use crate::system_schema::utils;
/// Information schema table for sst manifest.
pub struct InformationSchemaSstsManifest {
schema: SchemaRef,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaSstsManifest {
pub(super) fn new(catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: ManifestSstEntry::schema(),
catalog_manager,
}
}
}
impl InformationTable for InformationSchemaSstsManifest {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_SSTS_MANIFEST_TABLE_ID
}
fn table_name(&self) -> &'static str {
SSTS_MANIFEST
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = if let Some(p) = &request.projection {
Arc::new(self.schema.try_project(p).context(ProjectSchemaSnafu)?)
} else {
self.schema.clone()
};
let info_ext = utils::information_extension(&self.catalog_manager)?;
let req = DatanodeInspectRequest {
kind: DatanodeInspectKind::SstManifest,
scan: request,
};
let future = async move {
info_ext
.inspect_datanode(req)
.await
.map_err(BoxedError::new)
.context(common_recordbatch::error::ExternalSnafu)
};
Ok(Box::pin(AsyncRecordBatchStreamAdapter::new(
schema,
Box::pin(future),
)))
}
}
/// Information schema table for sst storage.
pub struct InformationSchemaSstsStorage {
schema: SchemaRef,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaSstsStorage {
pub(super) fn new(catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: StorageSstEntry::schema(),
catalog_manager,
}
}
}
impl InformationTable for InformationSchemaSstsStorage {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_SSTS_STORAGE_TABLE_ID
}
fn table_name(&self) -> &'static str {
SSTS_STORAGE
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = if let Some(p) = &request.projection {
Arc::new(self.schema.try_project(p).context(ProjectSchemaSnafu)?)
} else {
self.schema.clone()
};
let info_ext = utils::information_extension(&self.catalog_manager)?;
let req = DatanodeInspectRequest {
kind: DatanodeInspectKind::SstStorage,
scan: request,
};
let future = async move {
info_ext
.inspect_datanode(req)
.await
.map_err(BoxedError::new)
.context(common_recordbatch::error::ExternalSnafu)
};
Ok(Box::pin(AsyncRecordBatchStreamAdapter::new(
schema,
Box::pin(future),
)))
}
}

View File

@@ -48,3 +48,5 @@ pub const FLOWS: &str = "flows";
pub const PROCEDURE_INFO: &str = "procedure_info";
pub const REGION_STATISTICS: &str = "region_statistics";
pub const PROCESS_LIST: &str = "process_list";
pub const SSTS_MANIFEST: &str = "ssts_manifest";
pub const SSTS_STORAGE: &str = "ssts_storage";

View File

@@ -12,53 +12,41 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod pg_catalog_memory_table;
mod pg_class;
mod pg_database;
mod pg_namespace;
mod table_names;
use std::collections::HashMap;
use std::sync::{Arc, LazyLock, Weak};
use std::sync::{Arc, Weak};
use common_catalog::consts::{self, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, PG_CATALOG_NAME};
use datatypes::schema::ColumnSchema;
use lazy_static::lazy_static;
use paste::paste;
use pg_catalog_memory_table::get_schema_columns;
use pg_class::PGClass;
use pg_database::PGDatabase;
use pg_namespace::PGNamespace;
use session::context::{Channel, QueryContext};
use arrow_schema::SchemaRef;
use async_trait::async_trait;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, PG_CATALOG_NAME, PG_CATALOG_TABLE_ID_START};
use common_error::ext::BoxedError;
use common_recordbatch::SendableRecordBatchStream;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_telemetry::warn;
use datafusion::datasource::TableType;
use datafusion::error::DataFusionError;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion_pg_catalog::pg_catalog::catalog_info::CatalogInfo;
use datafusion_pg_catalog::pg_catalog::{
PG_CATALOG_TABLES, PgCatalogSchemaProvider, PgCatalogStaticTables, PgCatalogTable,
};
use snafu::ResultExt;
use store_api::storage::ScanRequest;
use table::TableRef;
pub use table_names::*;
use table::metadata::TableId;
use self::pg_namespace::oid_map::{PGNamespaceOidMap, PGNamespaceOidMapRef};
use crate::CatalogManager;
use crate::system_schema::memory_table::MemoryTable;
use crate::system_schema::utils::tables::u32_column;
use crate::system_schema::{SystemSchemaProvider, SystemSchemaProviderInner, SystemTableRef};
lazy_static! {
static ref MEMORY_TABLES: &'static [&'static str] = &[table_names::PG_TYPE];
}
/// The column name for the OID column.
/// The OID column is a unique identifier of type u32 for each object in the database.
const OID_COLUMN_NAME: &str = "oid";
fn oid_column() -> ColumnSchema {
u32_column(OID_COLUMN_NAME)
}
use crate::error::{InternalSnafu, ProjectSchemaSnafu, Result};
use crate::system_schema::{
SystemSchemaProvider, SystemSchemaProviderInner, SystemTable, SystemTableRef,
};
/// [`PGCatalogProvider`] is the provider for a schema named `pg_catalog`, it is not a catalog.
pub struct PGCatalogProvider {
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
inner: PgCatalogSchemaProvider<CatalogManagerWrapper>,
tables: HashMap<String, TableRef>,
// Workaround to store mapping of schema_name to a numeric id
namespace_oid_map: PGNamespaceOidMapRef,
table_ids: HashMap<&'static str, u32>,
}
impl SystemSchemaProvider for PGCatalogProvider {
@@ -69,30 +57,33 @@ impl SystemSchemaProvider for PGCatalogProvider {
}
}
// TODO(j0hn50n133): Not sure whether to avoid duplication with `information_schema` or not.
macro_rules! setup_memory_table {
($name: expr) => {
paste! {
{
let (schema, columns) = get_schema_columns($name);
Some(Arc::new(MemoryTable::new(
consts::[<PG_CATALOG_ $name _TABLE_ID>],
$name,
schema,
columns
)) as _)
}
}
};
}
impl PGCatalogProvider {
pub fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
// safe to expect/unwrap because it contains only schema read, this can
// be ensured by sqlness tests
let static_tables =
PgCatalogStaticTables::try_new().expect("Failed to initialize static tables");
let inner = PgCatalogSchemaProvider::try_new(
CatalogManagerWrapper {
catalog_name: catalog_name.clone(),
catalog_manager,
},
Arc::new(static_tables),
)
.expect("Failed to initialize PgCatalogSchemaProvider");
let mut table_ids = HashMap::new();
let mut table_id = PG_CATALOG_TABLE_ID_START;
for name in PG_CATALOG_TABLES {
table_ids.insert(*name, table_id);
table_id += 1;
}
let mut provider = Self {
catalog_name,
catalog_manager,
inner,
tables: HashMap::new(),
namespace_oid_map: Arc::new(PGNamespaceOidMap::new()),
table_ids,
};
provider.build_tables();
provider
@@ -102,23 +93,13 @@ impl PGCatalogProvider {
// SECURITY NOTE:
// Must follow the same security rules as [`InformationSchemaProvider::build_tables`].
let mut tables = HashMap::new();
// TODO(J0HN50N133): modeling the table_name as a enum type to get rid of expect/unwrap here
// It's safe to unwrap here because we are sure that the constants have been handle correctly inside system_table.
for name in MEMORY_TABLES.iter() {
tables.insert(name.to_string(), self.build_table(name).expect(name));
for name in PG_CATALOG_TABLES {
if let Some(table) = self.build_table(name) {
tables.insert(name.to_string(), table);
}
}
tables.insert(
PG_NAMESPACE.to_string(),
self.build_table(PG_NAMESPACE).expect(PG_NAMESPACE),
);
tables.insert(
PG_CLASS.to_string(),
self.build_table(PG_CLASS).expect(PG_NAMESPACE),
);
tables.insert(
PG_DATABASE.to_string(),
self.build_table(PG_DATABASE).expect(PG_DATABASE),
);
self.tables = tables;
}
}
@@ -129,24 +110,26 @@ impl SystemSchemaProviderInner for PGCatalogProvider {
}
fn system_table(&self, name: &str) -> Option<SystemTableRef> {
match name {
table_names::PG_TYPE => setup_memory_table!(PG_TYPE),
table_names::PG_NAMESPACE => Some(Arc::new(PGNamespace::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
))),
table_names::PG_CLASS => Some(Arc::new(PGClass::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
))),
table_names::PG_DATABASE => Some(Arc::new(PGDatabase::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
))),
_ => None,
if let Some((table_name, table_id)) = self.table_ids.get_key_value(name) {
let table = self.inner.build_table_by_name(name).expect(name);
if let Some(table) = table {
if let Ok(system_table) = DFTableProviderAsSystemTable::try_new(
*table_id,
table_name,
table::metadata::TableType::Temporary,
table,
) {
Some(Arc::new(system_table))
} else {
warn!("failed to create pg_catalog system table {}", name);
None
}
} else {
None
}
} else {
None
}
}
@@ -155,11 +138,177 @@ impl SystemSchemaProviderInner for PGCatalogProvider {
}
}
/// Provide query context to call the [`CatalogManager`]'s method.
static PG_QUERY_CTX: LazyLock<QueryContext> = LazyLock::new(|| {
QueryContext::with_channel(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, Channel::Postgres)
});
fn query_ctx() -> Option<&'static QueryContext> {
Some(&PG_QUERY_CTX)
#[derive(Clone)]
pub struct CatalogManagerWrapper {
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
}
impl CatalogManagerWrapper {
fn catalog_manager(&self) -> std::result::Result<Arc<dyn CatalogManager>, DataFusionError> {
self.catalog_manager.upgrade().ok_or_else(|| {
DataFusionError::Internal("Failed to access catalog manager".to_string())
})
}
}
impl std::fmt::Debug for CatalogManagerWrapper {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("CatalogManagerWrapper").finish()
}
}
#[async_trait]
impl CatalogInfo for CatalogManagerWrapper {
async fn catalog_names(&self) -> std::result::Result<Vec<String>, DataFusionError> {
if self.catalog_name == DEFAULT_CATALOG_NAME {
CatalogManager::catalog_names(self.catalog_manager()?.as_ref())
.await
.map_err(|e| DataFusionError::External(Box::new(e)))
} else {
Ok(vec![self.catalog_name.to_string()])
}
}
async fn schema_names(
&self,
catalog_name: &str,
) -> std::result::Result<Option<Vec<String>>, DataFusionError> {
self.catalog_manager()?
.schema_names(catalog_name, None)
.await
.map(Some)
.map_err(|e| DataFusionError::External(Box::new(e)))
}
async fn table_names(
&self,
catalog_name: &str,
schema_name: &str,
) -> std::result::Result<Option<Vec<String>>, DataFusionError> {
self.catalog_manager()?
.table_names(catalog_name, schema_name, None)
.await
.map(Some)
.map_err(|e| DataFusionError::External(Box::new(e)))
}
async fn table_schema(
&self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
) -> std::result::Result<Option<SchemaRef>, DataFusionError> {
let table = self
.catalog_manager()?
.table(catalog_name, schema_name, table_name, None)
.await
.map_err(|e| DataFusionError::External(Box::new(e)))?;
Ok(table.map(|t| t.schema().arrow_schema().clone()))
}
async fn table_type(
&self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
) -> std::result::Result<Option<TableType>, DataFusionError> {
let table = self
.catalog_manager()?
.table(catalog_name, schema_name, table_name, None)
.await
.map_err(|e| DataFusionError::External(Box::new(e)))?;
Ok(table.map(|t| t.table_type().into()))
}
}
struct DFTableProviderAsSystemTable {
pub table_id: TableId,
pub table_name: &'static str,
pub table_type: table::metadata::TableType,
pub schema: Arc<datatypes::schema::Schema>,
pub table_provider: PgCatalogTable,
}
impl DFTableProviderAsSystemTable {
pub fn try_new(
table_id: TableId,
table_name: &'static str,
table_type: table::metadata::TableType,
table_provider: PgCatalogTable,
) -> Result<Self> {
let arrow_schema = table_provider.schema();
let schema = Arc::new(arrow_schema.try_into().context(ProjectSchemaSnafu)?);
Ok(Self {
table_id,
table_name,
table_type,
schema,
table_provider,
})
}
}
impl SystemTable for DFTableProviderAsSystemTable {
fn table_id(&self) -> TableId {
self.table_id
}
fn table_name(&self) -> &'static str {
self.table_name
}
fn schema(&self) -> Arc<datatypes::schema::Schema> {
self.schema.clone()
}
fn table_type(&self) -> table::metadata::TableType {
self.table_type
}
fn to_stream(&self, _request: ScanRequest) -> Result<SendableRecordBatchStream> {
match &self.table_provider {
PgCatalogTable::Static(table) => {
let schema = self.schema.arrow_schema().clone();
let data = table
.data()
.iter()
.map(|rb| Ok(rb.clone()))
.collect::<Vec<_>>();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::iter(data),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
PgCatalogTable::Dynamic(table) => {
let stream = table.execute(Arc::new(TaskContext::default()));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
PgCatalogTable::Empty(_) => {
let schema = self.schema.arrow_schema().clone();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::iter(vec![]),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
}
}

View File

@@ -1,69 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{Int16Vector, StringVector, UInt32Vector, VectorRef};
use crate::memory_table_cols;
use crate::system_schema::pg_catalog::oid_column;
use crate::system_schema::pg_catalog::table_names::PG_TYPE;
use crate::system_schema::utils::tables::{i16_column, string_column};
fn pg_type_schema_columns() -> (Vec<ColumnSchema>, Vec<VectorRef>) {
// TODO(j0hn50n133): acquire this information from `DataType` instead of hardcoding it to avoid regression.
memory_table_cols!(
[oid, typname, typlen],
[
(1, "String", -1),
(2, "Binary", -1),
(3, "Int8", 1),
(4, "Int16", 2),
(5, "Int32", 4),
(6, "Int64", 8),
(7, "UInt8", 1),
(8, "UInt16", 2),
(9, "UInt32", 4),
(10, "UInt64", 8),
(11, "Float32", 4),
(12, "Float64", 8),
(13, "Decimal", 16),
(14, "Date", 4),
(15, "DateTime", 8),
(16, "Timestamp", 8),
(17, "Time", 8),
(18, "Duration", 8),
(19, "Interval", 16),
(20, "List", -1),
]
);
(
// not quiet identical with pg, we only follow the definition in pg
vec![oid_column(), string_column("typname"), i16_column("typlen")],
vec![
Arc::new(UInt32Vector::from_vec(oid)), // oid
Arc::new(StringVector::from(typname)),
Arc::new(Int16Vector::from_vec(typlen)), // typlen in bytes
],
)
}
pub(super) fn get_schema_columns(table_name: &str) -> (SchemaRef, Vec<VectorRef>) {
let (column_schemas, columns): (_, Vec<VectorRef>) = match table_name {
PG_TYPE => pg_type_schema_columns(),
_ => unreachable!("Unknown table in pg_catalog: {}", table_name),
};
(Arc::new(Schema::new(column_schemas)), columns)
}

View File

@@ -1,276 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::PG_CATALOG_PG_CLASS_TABLE_ID;
use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::ScanRequest;
use table::metadata::TableType;
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::system_schema::SystemTable;
use crate::system_schema::pg_catalog::pg_namespace::oid_map::PGNamespaceOidMapRef;
use crate::system_schema::pg_catalog::{OID_COLUMN_NAME, PG_CLASS, query_ctx};
use crate::system_schema::utils::tables::{string_column, u32_column};
// === column name ===
pub const RELNAME: &str = "relname";
pub const RELNAMESPACE: &str = "relnamespace";
pub const RELKIND: &str = "relkind";
pub const RELOWNER: &str = "relowner";
// === enum value of relkind ===
pub const RELKIND_TABLE: &str = "r";
pub const RELKIND_VIEW: &str = "v";
/// The initial capacity of the vector builders.
const INIT_CAPACITY: usize = 42;
/// The dummy owner id for the namespace.
const DUMMY_OWNER_ID: u32 = 0;
/// The `pg_catalog.pg_class` table implementation.
pub(super) struct PGClass {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
// Workaround to convert schema_name to a numeric id
namespace_oid_map: PGNamespaceOidMapRef,
}
impl PGClass {
pub(super) fn new(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
namespace_oid_map,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
u32_column(OID_COLUMN_NAME),
string_column(RELNAME),
u32_column(RELNAMESPACE),
string_column(RELKIND),
u32_column(RELOWNER),
]))
}
fn builder(&self) -> PGClassBuilder {
PGClassBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
)
}
}
impl fmt::Debug for PGClass {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("PGClass")
.field("schema", &self.schema)
.field("catalog_name", &self.catalog_name)
.finish()
}
}
impl SystemTable for PGClass {
fn table_id(&self) -> table::metadata::TableId {
PG_CATALOG_PG_CLASS_TABLE_ID
}
fn table_name(&self) -> &'static str {
PG_CLASS
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(
&self,
request: ScanRequest,
) -> Result<common_recordbatch::SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_class(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
impl DfPartitionStream for PGClass {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_class(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
/// Builds the `pg_catalog.pg_class` table row by row
/// TODO(J0HN50N133): `relowner` is always the [`DUMMY_OWNER_ID`] because we don't have users.
/// Once we have user system, make it the actual owner of the table.
struct PGClassBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
oid: UInt32VectorBuilder,
relname: StringVectorBuilder,
relnamespace: UInt32VectorBuilder,
relkind: StringVectorBuilder,
relowner: UInt32VectorBuilder,
}
impl PGClassBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
namespace_oid_map,
oid: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
relname: StringVectorBuilder::with_capacity(INIT_CAPACITY),
relnamespace: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
relkind: StringVectorBuilder::with_capacity(INIT_CAPACITY),
relowner: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
}
}
async fn make_class(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager
.schema_names(&catalog_name, query_ctx())
.await?
{
let mut stream = catalog_manager.tables(&catalog_name, &schema_name, query_ctx());
while let Some(table) = stream.try_next().await? {
let table_info = table.table_info();
self.add_class(
&predicates,
table_info.table_id(),
&schema_name,
&table_info.name,
if table_info.table_type == TableType::View {
RELKIND_VIEW
} else {
RELKIND_TABLE
},
);
}
}
self.finish()
}
fn add_class(
&mut self,
predicates: &Predicates,
oid: u32,
schema: &str,
table: &str,
kind: &str,
) {
let namespace_oid = self.namespace_oid_map.get_oid(schema);
let row = [
(OID_COLUMN_NAME, &Value::from(oid)),
(RELNAMESPACE, &Value::from(schema)),
(RELNAME, &Value::from(table)),
(RELKIND, &Value::from(kind)),
(RELOWNER, &Value::from(DUMMY_OWNER_ID)),
];
if !predicates.eval(&row) {
return;
}
self.oid.push(Some(oid));
self.relnamespace.push(Some(namespace_oid));
self.relname.push(Some(table));
self.relkind.push(Some(kind));
self.relowner.push(Some(DUMMY_OWNER_ID));
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.oid.finish()),
Arc::new(self.relname.finish()),
Arc::new(self.relnamespace.finish()),
Arc::new(self.relkind.finish()),
Arc::new(self.relowner.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}

View File

@@ -1,223 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::PG_CATALOG_PG_DATABASE_TABLE_ID;
use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use snafu::{OptionExt, ResultExt};
use store_api::storage::ScanRequest;
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::system_schema::SystemTable;
use crate::system_schema::pg_catalog::pg_namespace::oid_map::PGNamespaceOidMapRef;
use crate::system_schema::pg_catalog::{OID_COLUMN_NAME, PG_DATABASE, query_ctx};
use crate::system_schema::utils::tables::{string_column, u32_column};
// === column name ===
pub const DATNAME: &str = "datname";
/// The initial capacity of the vector builders.
const INIT_CAPACITY: usize = 42;
/// The `pg_catalog.pg_database` table implementation.
pub(super) struct PGDatabase {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
// Workaround to convert schema_name to a numeric id
namespace_oid_map: PGNamespaceOidMapRef,
}
impl std::fmt::Debug for PGDatabase {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("PGDatabase")
.field("schema", &self.schema)
.field("catalog_name", &self.catalog_name)
.finish()
}
}
impl PGDatabase {
pub(super) fn new(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
namespace_oid_map,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
u32_column(OID_COLUMN_NAME),
string_column(DATNAME),
]))
}
fn builder(&self) -> PGCDatabaseBuilder {
PGCDatabaseBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
)
}
}
impl DfPartitionStream for PGDatabase {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_database(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
impl SystemTable for PGDatabase {
fn table_id(&self) -> table::metadata::TableId {
PG_CATALOG_PG_DATABASE_TABLE_ID
}
fn table_name(&self) -> &'static str {
PG_DATABASE
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(
&self,
request: ScanRequest,
) -> Result<common_recordbatch::SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_database(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
/// Builds the `pg_catalog.pg_database` table row by row.
/// `oid` uses the schema name as a workaround since we don't have a numeric schema id.
/// `datname` is the schema name.
struct PGCDatabaseBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
oid: UInt32VectorBuilder,
datname: StringVectorBuilder,
}
impl PGCDatabaseBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
namespace_oid_map,
oid: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
datname: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
async fn make_database(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager
.schema_names(&catalog_name, query_ctx())
.await?
{
self.add_database(&predicates, &schema_name);
}
self.finish()
}
fn add_database(&mut self, predicates: &Predicates, schema_name: &str) {
let oid = self.namespace_oid_map.get_oid(schema_name);
let row: [(&str, &Value); 2] = [
(OID_COLUMN_NAME, &Value::from(oid)),
(DATNAME, &Value::from(schema_name)),
];
if !predicates.eval(&row) {
return;
}
self.oid.push(Some(oid));
self.datname.push(Some(schema_name));
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> =
vec![Arc::new(self.oid.finish()), Arc::new(self.datname.finish())];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}

View File

@@ -1,221 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! The `pg_catalog.pg_namespace` table implementation.
//! A namespace corresponds to a schema in GreptimeDB.
pub(super) mod oid_map;
use std::fmt;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::PG_CATALOG_PG_NAMESPACE_TABLE_ID;
use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use snafu::{OptionExt, ResultExt};
use store_api::storage::ScanRequest;
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::system_schema::SystemTable;
use crate::system_schema::pg_catalog::{
OID_COLUMN_NAME, PG_NAMESPACE, PGNamespaceOidMapRef, query_ctx,
};
use crate::system_schema::utils::tables::{string_column, u32_column};
const NSPNAME: &str = "nspname";
const INIT_CAPACITY: usize = 42;
pub(super) struct PGNamespace {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
// Workaround to convert schema_name to a numeric id
oid_map: PGNamespaceOidMapRef,
}
impl PGNamespace {
pub(super) fn new(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
oid_map,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
// TODO(J0HN50N133): we do not have a numeric schema id, so use the schema name as a workaround. Use a proper schema id once we have one.
u32_column(OID_COLUMN_NAME),
string_column(NSPNAME),
]))
}
fn builder(&self) -> PGNamespaceBuilder {
PGNamespaceBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.oid_map.clone(),
)
}
}
impl fmt::Debug for PGNamespace {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("PGNamespace")
.field("schema", &self.schema)
.field("catalog_name", &self.catalog_name)
.finish()
}
}
impl SystemTable for PGNamespace {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn table_id(&self) -> table::metadata::TableId {
PG_CATALOG_PG_NAMESPACE_TABLE_ID
}
fn table_name(&self) -> &'static str {
PG_NAMESPACE
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_namespace(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
impl DfPartitionStream for PGNamespace {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_namespace(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
/// Builds the `pg_catalog.pg_namespace` table row by row.
/// `oid` uses the schema name as a workaround since we don't have a numeric schema id.
/// `nspname` is the schema name.
struct PGNamespaceBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
oid: UInt32VectorBuilder,
nspname: StringVectorBuilder,
}
impl PGNamespaceBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
namespace_oid_map,
oid: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
nspname: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `pg_catalog.pg_namespace` virtual table
async fn make_namespace(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager
.schema_names(&catalog_name, query_ctx())
.await?
{
self.add_namespace(&predicates, &schema_name);
}
self.finish()
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> =
vec![Arc::new(self.oid.finish()), Arc::new(self.nspname.finish())];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
fn add_namespace(&mut self, predicates: &Predicates, schema_name: &str) {
let oid = self.namespace_oid_map.get_oid(schema_name);
let row = [
(OID_COLUMN_NAME, &Value::from(oid)),
(NSPNAME, &Value::from(schema_name)),
];
if !predicates.eval(&row) {
return;
}
self.oid.push(Some(oid));
self.nspname.push(Some(schema_name));
}
}

View File

@@ -1,94 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::hash::BuildHasher;
use std::sync::Arc;
use dashmap::DashMap;
use rustc_hash::FxSeededState;
pub type PGNamespaceOidMapRef = Arc<PGNamespaceOidMap>;
// Workaround to convert schema_name to a numeric id;
// remove this once greptime has numeric schema ids.
pub struct PGNamespaceOidMap {
oid_map: DashMap<String, u32>,
// Rust uses SipHash by default, which provides resistance against DoS attacks.
// However, it produces different hash values across greptime instances, which makes
// the sqlness tests fail. We need a deterministic hash here to provide the same oid
// for the same schema name on a best-effort basis; DoS attacks aren't a concern here.
hasher: FxSeededState,
}
impl PGNamespaceOidMap {
pub fn new() -> Self {
Self {
oid_map: DashMap::new(),
hasher: FxSeededState::with_seed(0), // PLEASE DO NOT MODIFY THIS SEED VALUE!!!
}
}
fn oid_is_used(&self, oid: u32) -> bool {
self.oid_map.iter().any(|e| *e.value() == oid)
}
pub fn get_oid(&self, schema_name: &str) -> u32 {
if let Some(oid) = self.oid_map.get(schema_name) {
*oid
} else {
let mut oid = self.hasher.hash_one(schema_name) as u32;
while self.oid_is_used(oid) {
oid = self.hasher.hash_one(oid) as u32;
}
self.oid_map.insert(schema_name.to_string(), oid);
oid
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn oid_is_stable() {
let oid_map_1 = PGNamespaceOidMap::new();
let oid_map_2 = PGNamespaceOidMap::new();
let schema = "schema";
let oid = oid_map_1.get_oid(schema);
// the oid stays stable within the same instance
assert_eq!(oid, oid_map_1.get_oid(schema));
// the oid stays stable across different instances
assert_eq!(oid, oid_map_2.get_oid(schema));
}
#[test]
fn oid_collision() {
let oid_map = PGNamespaceOidMap::new();
let key1 = "3178510";
let key2 = "4215648";
// insert them into oid_map
let oid1 = oid_map.get_oid(key1);
let oid2 = oid_map.get_oid(key2);
// they should have different ids
assert_ne!(oid1, oid2);
}
}
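A minimal sketch of the seeded-hash property the oid map relies on, assuming the same `rustc_hash::FxSeededState` imported above: with a fixed seed, two independently constructed hashers agree on every hash, so the oid derived from a schema name is stable across processes (unlike std's randomly keyed `RandomState`).

use std::hash::BuildHasher;

use rustc_hash::FxSeededState;

fn main() {
    let a = FxSeededState::with_seed(0);
    let b = FxSeededState::with_seed(0);
    // Same seed, same input, same truncated hash -> a stable schema oid.
    assert_eq!(a.hash_one("public") as u32, b.hash_one("public") as u32);
}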

View File

@@ -27,22 +27,6 @@ pub fn string_column(name: &str) -> ColumnSchema {
)
}
pub fn u32_column(name: &str) -> ColumnSchema {
ColumnSchema::new(
str::to_lowercase(name),
ConcreteDataType::uint32_datatype(),
false,
)
}
pub fn i16_column(name: &str) -> ColumnSchema {
ColumnSchema::new(
str::to_lowercase(name),
ConcreteDataType::int16_datatype(),
false,
)
}
pub fn bigint_column(name: &str) -> ColumnSchema {
ColumnSchema::new(
str::to_lowercase(name),

View File

@@ -51,6 +51,7 @@ meta-srv.workspace = true
nu-ansi-term = "0.46"
object-store.workspace = true
operator.workspace = true
paste.workspace = true
query.workspace = true
rand.workspace = true
reqwest.workspace = true

View File

@@ -12,11 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// https://www.postgresql.org/docs/current/catalog-pg-database.html
pub const PG_DATABASE: &str = "pg_database";
// https://www.postgresql.org/docs/current/catalog-pg-namespace.html
pub const PG_NAMESPACE: &str = "pg_namespace";
// https://www.postgresql.org/docs/current/catalog-pg-class.html
pub const PG_CLASS: &str = "pg_class";
// https://www.postgresql.org/docs/current/catalog-pg-type.html
pub const PG_TYPE: &str = "pg_type";
mod object_store;
mod store;
pub use object_store::{ObjectStoreConfig, new_fs_object_store};
pub use store::StoreConfig;

View File

@@ -0,0 +1,224 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_base::secrets::SecretString;
use common_error::ext::BoxedError;
use object_store::services::{Azblob, Fs, Gcs, Oss, S3};
use object_store::util::{with_instrument_layers, with_retry_layers};
use object_store::{AzblobConnection, GcsConnection, ObjectStore, OssConnection, S3Connection};
use paste::paste;
use snafu::ResultExt;
use crate::error::{self};
macro_rules! wrap_with_clap_prefix {
(
$new_name:ident, $prefix:literal, $base:ty, {
$( $( #[doc = $doc:expr] )? $( #[alias = $alias:literal] )? $field:ident : $type:ty $( = $default:expr )? ),* $(,)?
}
) => {
paste!{
#[derive(clap::Parser, Debug, Clone, PartialEq, Default)]
pub struct $new_name {
$(
$( #[doc = $doc] )?
$( #[clap(alias = $alias)] )?
#[clap(long $(, default_value_t = $default )? )]
[<$prefix $field>]: $type,
)*
}
impl From<$new_name> for $base {
fn from(w: $new_name) -> Self {
Self {
$( $field: w.[<$prefix $field>] ),*
}
}
}
}
};
}
wrap_with_clap_prefix! {
PrefixedAzblobConnection,
"azblob-",
AzblobConnection,
{
#[doc = "The container of the object store."]
container: String = Default::default(),
#[doc = "The root of the object store."]
root: String = Default::default(),
#[doc = "The account name of the object store."]
account_name: SecretString = Default::default(),
#[doc = "The account key of the object store."]
account_key: SecretString = Default::default(),
#[doc = "The endpoint of the object store."]
endpoint: String = Default::default(),
#[doc = "The SAS token of the object store."]
sas_token: Option<String>,
}
}
wrap_with_clap_prefix! {
PrefixedS3Connection,
"s3-",
S3Connection,
{
#[doc = "The bucket of the object store."]
bucket: String = Default::default(),
#[doc = "The root of the object store."]
root: String = Default::default(),
#[doc = "The access key ID of the object store."]
access_key_id: SecretString = Default::default(),
#[doc = "The secret access key of the object store."]
secret_access_key: SecretString = Default::default(),
#[doc = "The endpoint of the object store."]
endpoint: Option<String>,
#[doc = "The region of the object store."]
region: Option<String>,
#[doc = "Enable virtual host style for the object store."]
enable_virtual_host_style: bool = Default::default(),
}
}
wrap_with_clap_prefix! {
PrefixedOssConnection,
"oss-",
OssConnection,
{
#[doc = "The bucket of the object store."]
bucket: String = Default::default(),
#[doc = "The root of the object store."]
root: String = Default::default(),
#[doc = "The access key ID of the object store."]
access_key_id: SecretString = Default::default(),
#[doc = "The access key secret of the object store."]
access_key_secret: SecretString = Default::default(),
#[doc = "The endpoint of the object store."]
endpoint: String = Default::default(),
}
}
wrap_with_clap_prefix! {
PrefixedGcsConnection,
"gcs-",
GcsConnection,
{
#[doc = "The root of the object store."]
root: String = Default::default(),
#[doc = "The bucket of the object store."]
bucket: String = Default::default(),
#[doc = "The scope of the object store."]
scope: String = Default::default(),
#[doc = "The credential path of the object store."]
credential_path: SecretString = Default::default(),
#[doc = "The credential of the object store."]
credential: SecretString = Default::default(),
#[doc = "The endpoint of the object store."]
endpoint: String = Default::default(),
}
}
/// Common configuration for object stores.
#[derive(clap::Parser, Debug, Clone, PartialEq, Default)]
pub struct ObjectStoreConfig {
/// Whether to use S3 object store.
#[clap(long, alias = "s3")]
pub enable_s3: bool,
#[clap(flatten)]
pub s3: PrefixedS3Connection,
/// Whether to use OSS.
#[clap(long, alias = "oss")]
pub enable_oss: bool,
#[clap(flatten)]
pub oss: PrefixedOssConnection,
/// Whether to use GCS.
#[clap(long, alias = "gcs")]
pub enable_gcs: bool,
#[clap(flatten)]
pub gcs: PrefixedGcsConnection,
/// Whether to use Azure Blob.
#[clap(long, alias = "azblob")]
pub enable_azblob: bool,
#[clap(flatten)]
pub azblob: PrefixedAzblobConnection,
}
/// Creates a new file system object store.
pub fn new_fs_object_store(root: &str) -> std::result::Result<ObjectStore, BoxedError> {
let builder = Fs::default().root(root);
let object_store = ObjectStore::new(builder)
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish();
Ok(with_instrument_layers(object_store, false))
}
impl ObjectStoreConfig {
/// Builds the object store from the config.
pub fn build(&self) -> Result<Option<ObjectStore>, BoxedError> {
let object_store = if self.enable_s3 {
let s3 = S3Connection::from(self.s3.clone());
common_telemetry::info!("Building object store with s3: {:?}", s3);
Some(
ObjectStore::new(S3::from(&s3))
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish(),
)
} else if self.enable_oss {
let oss = OssConnection::from(self.oss.clone());
common_telemetry::info!("Building object store with oss: {:?}", oss);
Some(
ObjectStore::new(Oss::from(&oss))
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish(),
)
} else if self.enable_gcs {
let gcs = GcsConnection::from(self.gcs.clone());
common_telemetry::info!("Building object store with gcs: {:?}", gcs);
Some(
ObjectStore::new(Gcs::from(&gcs))
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish(),
)
} else if self.enable_azblob {
let azblob = AzblobConnection::from(self.azblob.clone());
common_telemetry::info!("Building object store with azblob: {:?}", azblob);
Some(
ObjectStore::new(Azblob::from(&azblob))
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish(),
)
} else {
None
};
let object_store = object_store
.map(|object_store| with_instrument_layers(with_retry_layers(object_store), false));
Ok(object_store)
}
}
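A hedged usage sketch, assuming the types above are in scope and that clap derives its long flags from the generated field names (e.g. `s3_bucket` becomes `--s3-bucket`); the exact flag spellings and secret parsing here are illustrative, not authoritative:

use clap::Parser;

fn main() {
    let config = ObjectStoreConfig::parse_from([
        "tool",
        "--enable-s3", // or the shorter alias `--s3`
        "--s3-bucket", "my-bucket",
        "--s3-region", "us-west-2",
        "--s3-access-key-id", "my-key",
        "--s3-secret-access-key", "my-secret",
    ]);
    // `build()` yields `Ok(Some(store))` for the enabled backend and `Ok(None)`
    // when no backend flag is set, so callers can fall back to the local
    // filesystem store created by `new_fs_object_store`.
    let store = config.build().expect("failed to build object store");
    assert!(store.is_some());
}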

View File

@@ -19,14 +19,14 @@ use common_error::ext::BoxedError;
use common_meta::kv_backend::KvBackendRef;
use common_meta::kv_backend::chroot::ChrootKvBackend;
use common_meta::kv_backend::etcd::EtcdStore;
use meta_srv::bootstrap::create_etcd_client_with_tls;
use meta_srv::metasrv::BackendImpl;
use meta_srv::utils::etcd::create_etcd_client_with_tls;
use servers::tls::{TlsMode, TlsOption};
use crate::error::{EmptyStoreAddrsSnafu, UnsupportedMemoryBackendSnafu};
use crate::error::EmptyStoreAddrsSnafu;
#[derive(Debug, Default, Parser)]
pub(crate) struct StoreConfig {
pub struct StoreConfig {
/// The endpoint of the store: one of etcd, postgres, or mysql.
///
/// For postgres store, the format is:
@@ -38,51 +38,65 @@ pub(crate) struct StoreConfig {
/// For mysql store, the format is:
/// "mysql://user:password@ip:port/dbname"
#[clap(long, alias = "store-addr", value_delimiter = ',', num_args = 1..)]
store_addrs: Vec<String>,
pub store_addrs: Vec<String>,
/// The maximum number of operations in a transaction. Only used when using [etcd-store].
#[clap(long, default_value = "128")]
max_txn_ops: usize,
pub max_txn_ops: usize,
/// The metadata store backend.
#[clap(long, value_enum, default_value = "etcd-store")]
backend: BackendImpl,
pub backend: BackendImpl,
/// The key prefix of the metadata store.
#[clap(long, default_value = "")]
store_key_prefix: String,
pub store_key_prefix: String,
/// The table name in RDS to store metadata. Only used when using [postgres-store] or [mysql-store].
#[cfg(any(feature = "pg_kvbackend", feature = "mysql_kvbackend"))]
#[clap(long, default_value = common_meta::kv_backend::DEFAULT_META_TABLE_NAME)]
meta_table_name: String,
pub meta_table_name: String,
/// Optional PostgreSQL schema for metadata table (defaults to current search_path if unset).
#[cfg(feature = "pg_kvbackend")]
#[clap(long)]
meta_schema_name: Option<String>,
pub meta_schema_name: Option<String>,
/// TLS mode for backend store connections (etcd, PostgreSQL, MySQL)
#[clap(long = "backend-tls-mode", value_enum, default_value = "disable")]
backend_tls_mode: TlsMode,
pub backend_tls_mode: TlsMode,
/// Path to TLS certificate file for backend store connections
#[clap(long = "backend-tls-cert-path", default_value = "")]
backend_tls_cert_path: String,
pub backend_tls_cert_path: String,
/// Path to TLS private key file for backend store connections
#[clap(long = "backend-tls-key-path", default_value = "")]
backend_tls_key_path: String,
pub backend_tls_key_path: String,
/// Path to TLS CA certificate file for backend store connections
#[clap(long = "backend-tls-ca-cert-path", default_value = "")]
backend_tls_ca_cert_path: String,
pub backend_tls_ca_cert_path: String,
/// Enable watching TLS certificate files for changes
#[clap(long = "backend-tls-watch")]
backend_tls_watch: bool,
pub backend_tls_watch: bool,
}
impl StoreConfig {
pub fn tls_config(&self) -> Option<TlsOption> {
if self.backend_tls_mode != TlsMode::Disable {
Some(TlsOption {
mode: self.backend_tls_mode.clone(),
cert_path: self.backend_tls_cert_path.clone(),
key_path: self.backend_tls_key_path.clone(),
ca_cert_path: self.backend_tls_ca_cert_path.clone(),
watch: self.backend_tls_watch,
})
} else {
None
}
}
/// Builds a [`KvBackendRef`] from the store configuration.
pub async fn build(&self) -> Result<KvBackendRef, BoxedError> {
let max_txn_ops = self.max_txn_ops;
@@ -90,19 +104,14 @@ impl StoreConfig {
if store_addrs.is_empty() {
EmptyStoreAddrsSnafu.fail().map_err(BoxedError::new)
} else {
common_telemetry::info!(
"Building kvbackend with store addrs: {:?}, backend: {:?}",
store_addrs,
self.backend
);
let kvbackend = match self.backend {
BackendImpl::EtcdStore => {
let tls_config = if self.backend_tls_mode != TlsMode::Disable {
Some(TlsOption {
mode: self.backend_tls_mode.clone(),
cert_path: self.backend_tls_cert_path.clone(),
key_path: self.backend_tls_key_path.clone(),
ca_cert_path: self.backend_tls_ca_cert_path.clone(),
watch: self.backend_tls_watch,
})
} else {
None
};
let tls_config = self.tls_config();
let etcd_client = create_etcd_client_with_tls(store_addrs, tls_config.as_ref())
.await
.map_err(BoxedError::new)?;
@@ -111,9 +120,14 @@ impl StoreConfig {
#[cfg(feature = "pg_kvbackend")]
BackendImpl::PostgresStore => {
let table_name = &self.meta_table_name;
let pool = meta_srv::bootstrap::create_postgres_pool(store_addrs, None)
.await
.map_err(BoxedError::new)?;
let tls_config = self.tls_config();
let pool = meta_srv::utils::postgres::create_postgres_pool(
store_addrs,
None,
tls_config,
)
.await
.map_err(BoxedError::new)?;
let schema_name = self.meta_schema_name.as_deref();
Ok(common_meta::kv_backend::rds::PgStore::with_pg_pool(
pool,
@@ -127,9 +141,11 @@ impl StoreConfig {
#[cfg(feature = "mysql_kvbackend")]
BackendImpl::MysqlStore => {
let table_name = &self.meta_table_name;
let pool = meta_srv::bootstrap::create_mysql_pool(store_addrs)
.await
.map_err(BoxedError::new)?;
let tls_config = self.tls_config();
let pool =
meta_srv::utils::mysql::create_mysql_pool(store_addrs, tls_config.as_ref())
.await
.map_err(BoxedError::new)?;
Ok(common_meta::kv_backend::rds::MySqlStore::with_mysql_pool(
pool,
table_name,
@@ -138,9 +154,20 @@ impl StoreConfig {
.await
.map_err(BoxedError::new)?)
}
BackendImpl::MemoryStore => UnsupportedMemoryBackendSnafu
.fail()
.map_err(BoxedError::new),
#[cfg(not(test))]
BackendImpl::MemoryStore => {
use crate::error::UnsupportedMemoryBackendSnafu;
UnsupportedMemoryBackendSnafu
.fail()
.map_err(BoxedError::new)
}
#[cfg(test)]
BackendImpl::MemoryStore => {
use common_meta::kv_backend::memory::MemoryKvBackend;
Ok(Arc::new(MemoryKvBackend::default()) as _)
}
};
if self.store_key_prefix.is_empty() {
kvbackend

View File

@@ -313,6 +313,14 @@ pub enum Error {
location: Location,
source: common_meta::error::Error,
},
#[snafu(display("Failed to get current directory"))]
GetCurrentDir {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: std::io::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -362,7 +370,9 @@ impl ErrorExt for Error {
Error::BuildRuntime { source, .. } => source.status_code(),
Error::CacheRequired { .. } | Error::BuildCacheRegistry { .. } => StatusCode::Internal,
Error::CacheRequired { .. }
| Error::BuildCacheRegistry { .. }
| Error::GetCurrentDir { .. } => StatusCode::Internal,
Error::MetaClientInit { source, .. } => source.status_code(),
Error::TableNotFound { .. } => StatusCode::TableNotFound,
Error::SchemaNotFound { .. } => StatusCode::DatabaseNotFound,

View File

@@ -14,13 +14,16 @@
#![allow(clippy::print_stdout)]
mod bench;
mod common;
mod data;
mod database;
pub mod error;
mod metadata;
pub mod utils;
use async_trait::async_trait;
use clap::Parser;
pub use common::{ObjectStoreConfig, StoreConfig};
use common_error::ext::BoxedError;
pub use database::DatabaseClient;
use error::Result;

View File

@@ -12,7 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod common;
mod control;
mod repair;
mod snapshot;

View File

@@ -20,7 +20,7 @@ use common_meta::kv_backend::KvBackendRef;
use common_meta::rpc::store::RangeRequest;
use crate::Tool;
use crate::metadata::common::StoreConfig;
use crate::common::StoreConfig;
use crate::metadata::control::del::CLI_TOMBSTONE_PREFIX;
/// Delete key-value pairs logically from the metadata store.

View File

@@ -24,8 +24,8 @@ use common_meta::kv_backend::KvBackendRef;
use store_api::storage::TableId;
use crate::Tool;
use crate::common::StoreConfig;
use crate::error::{InvalidArgumentsSnafu, TableNotFoundSnafu};
use crate::metadata::common::StoreConfig;
use crate::metadata::control::del::CLI_TOMBSTONE_PREFIX;
use crate::metadata::control::utils::get_table_id_by_name;
@@ -48,6 +48,7 @@ pub struct DelTableCommand {
#[clap(long, default_value = DEFAULT_CATALOG_NAME)]
catalog_name: String,
/// The store config.
#[clap(flatten)]
store: StoreConfig,
}

View File

@@ -28,9 +28,9 @@ use common_meta::rpc::store::RangeRequest;
use futures::TryStreamExt;
use crate::Tool;
use crate::common::StoreConfig;
use crate::error::InvalidArgumentsSnafu;
use crate::metadata::common::StoreConfig;
use crate::metadata::control::utils::{decode_key_value, get_table_id_by_name, json_fromatter};
use crate::metadata::control::utils::{decode_key_value, get_table_id_by_name, json_formatter};
/// Getting metadata from metadata store.
#[derive(Subcommand)]
@@ -206,7 +206,7 @@ impl Tool for GetTableTool {
println!(
"{}\n{}",
TableInfoKey::new(table_id),
json_fromatter(self.pretty, &*table_info)
json_formatter(self.pretty, &*table_info)
);
} else {
println!("Table info not found");
@@ -221,7 +221,7 @@ impl Tool for GetTableTool {
println!(
"{}\n{}",
TableRouteKey::new(table_id),
json_fromatter(self.pretty, &table_route)
json_formatter(self.pretty, &table_route)
);
} else {
println!("Table route not found");

View File

@@ -27,7 +27,7 @@ pub fn decode_key_value(kv: KeyValue) -> CommonMetaResult<(String, String)> {
}
/// Formats a value as a JSON string.
pub fn json_fromatter<T>(pretty: bool, value: &T) -> String
pub fn json_formatter<T>(pretty: bool, value: &T) -> String
where
T: Serialize,
{

View File

@@ -38,10 +38,10 @@ use snafu::{ResultExt, ensure};
use store_api::storage::TableId;
use crate::Tool;
use crate::common::StoreConfig;
use crate::error::{
InvalidArgumentsSnafu, Result, SendRequestToDatanodeSnafu, TableMetadataSnafu, UnexpectedSnafu,
};
use crate::metadata::common::StoreConfig;
use crate::metadata::utils::{FullTableMetadata, IteratorInput, TableMetadataIterator};
/// Repair metadata of logical tables.

View File

@@ -12,20 +12,15 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::path::Path;
use async_trait::async_trait;
use clap::{Parser, Subcommand};
use common_base::secrets::{ExposeSecret, SecretString};
use common_error::ext::BoxedError;
use common_meta::snapshot::MetadataSnapshotManager;
use object_store::ObjectStore;
use object_store::services::{Fs, S3};
use snafu::{OptionExt, ResultExt};
use object_store::{ObjectStore, Scheme};
use crate::Tool;
use crate::error::{InvalidFilePathSnafu, OpenDalSnafu, S3ConfigNotSetSnafu};
use crate::metadata::common::StoreConfig;
use crate::common::{ObjectStoreConfig, StoreConfig, new_fs_object_store};
use crate::utils::resolve_relative_path_with_current_dir;
/// Subcommand for metadata snapshot operations, including saving snapshots, restoring from snapshots, and viewing snapshot information.
#[derive(Subcommand)]
@@ -41,68 +36,9 @@ pub enum SnapshotCommand {
impl SnapshotCommand {
pub async fn build(&self) -> Result<Box<dyn Tool>, BoxedError> {
match self {
SnapshotCommand::Save(cmd) => cmd.build().await,
SnapshotCommand::Restore(cmd) => cmd.build().await,
SnapshotCommand::Info(cmd) => cmd.build().await,
}
}
}
// TODO(qtang): Abstract a generic s3 config for export import meta snapshot restore
#[derive(Debug, Default, Parser)]
struct S3Config {
/// whether to use s3 as the output directory. default is false.
#[clap(long, default_value = "false")]
s3: bool,
/// The s3 bucket name.
#[clap(long)]
s3_bucket: Option<String>,
/// The s3 region.
#[clap(long)]
s3_region: Option<String>,
/// The s3 access key.
#[clap(long)]
s3_access_key: Option<SecretString>,
/// The s3 secret key.
#[clap(long)]
s3_secret_key: Option<SecretString>,
/// The s3 endpoint. we will automatically use the default s3 decided by the region if not set.
#[clap(long)]
s3_endpoint: Option<String>,
}
impl S3Config {
pub fn build(&self, root: &str) -> Result<Option<ObjectStore>, BoxedError> {
if !self.s3 {
Ok(None)
} else {
if self.s3_region.is_none()
|| self.s3_access_key.is_none()
|| self.s3_secret_key.is_none()
|| self.s3_bucket.is_none()
{
return S3ConfigNotSetSnafu.fail().map_err(BoxedError::new);
}
// Safety, unwrap is safe because we have checked the options above.
let mut config = S3::default()
.bucket(self.s3_bucket.as_ref().unwrap())
.region(self.s3_region.as_ref().unwrap())
.access_key_id(self.s3_access_key.as_ref().unwrap().expose_secret())
.secret_access_key(self.s3_secret_key.as_ref().unwrap().expose_secret());
if !root.is_empty() && root != "." {
config = config.root(root);
}
if let Some(endpoint) = &self.s3_endpoint {
config = config.endpoint(endpoint);
}
Ok(Some(
ObjectStore::new(config)
.context(OpenDalSnafu)
.map_err(BoxedError::new)?
.finish(),
))
SnapshotCommand::Save(cmd) => Ok(Box::new(cmd.build().await?)),
SnapshotCommand::Restore(cmd) => Ok(Box::new(cmd.build().await?)),
SnapshotCommand::Info(cmd) => Ok(Box::new(cmd.build().await?)),
}
}
}
@@ -116,60 +52,47 @@ pub struct SaveCommand {
/// The store configuration.
#[clap(flatten)]
store: StoreConfig,
/// The s3 config.
/// The object store configuration.
#[clap(flatten)]
s3_config: S3Config,
/// The name of the target snapshot file. we will add the file extension automatically.
#[clap(long, default_value = "metadata_snapshot")]
file_name: String,
/// The directory to store the snapshot file.
/// if target output is s3 bucket, this is the root directory in the bucket.
/// if target output is local file, this is the local directory.
#[clap(long, default_value = "")]
output_dir: String,
}
fn create_local_file_object_store(root: &str) -> Result<ObjectStore, BoxedError> {
let root = if root.is_empty() { "." } else { root };
let object_store = ObjectStore::new(Fs::default().root(root))
.context(OpenDalSnafu)
.map_err(BoxedError::new)?
.finish();
Ok(object_store)
object_store: ObjectStoreConfig,
/// The path of the target snapshot file.
#[clap(
long,
default_value = "metadata_snapshot.metadata.fb",
alias = "file_name"
)]
file_path: String,
/// Specifies the root directory used for I/O operations.
#[clap(long, default_value = "/", alias = "output_dir")]
dir: String,
}
impl SaveCommand {
pub async fn build(&self) -> Result<Box<dyn Tool>, BoxedError> {
async fn build(&self) -> Result<MetaSnapshotTool, BoxedError> {
let kvbackend = self.store.build().await?;
let output_dir = &self.output_dir;
let object_store = self.s3_config.build(output_dir).map_err(BoxedError::new)?;
if let Some(store) = object_store {
let tool = MetaSnapshotTool {
inner: MetadataSnapshotManager::new(kvbackend, store),
target_file: self.file_name.clone(),
};
Ok(Box::new(tool))
} else {
let object_store = create_local_file_object_store(output_dir)?;
let tool = MetaSnapshotTool {
inner: MetadataSnapshotManager::new(kvbackend, object_store),
target_file: self.file_name.clone(),
};
Ok(Box::new(tool))
}
let (object_store, file_path) = build_object_store_and_resolve_file_path(
self.object_store.clone(),
&self.dir,
&self.file_path,
)?;
let tool = MetaSnapshotTool {
inner: MetadataSnapshotManager::new(kvbackend, object_store),
file_path,
};
Ok(tool)
}
}
struct MetaSnapshotTool {
inner: MetadataSnapshotManager,
target_file: String,
file_path: String,
}
#[async_trait]
impl Tool for MetaSnapshotTool {
async fn do_work(&self) -> std::result::Result<(), BoxedError> {
self.inner
.dump("", &self.target_file)
.dump(&self.file_path)
.await
.map_err(BoxedError::new)?;
Ok(())
@@ -186,54 +109,52 @@ pub struct RestoreCommand {
/// The store configuration.
#[clap(flatten)]
store: StoreConfig,
/// The s3 config.
/// The object store config.
#[clap(flatten)]
s3_config: S3Config,
/// The name of the target snapshot file.
#[clap(long, default_value = "metadata_snapshot.metadata.fb")]
file_name: String,
/// The directory to store the snapshot file.
#[clap(long, default_value = ".")]
input_dir: String,
object_store: ObjectStoreConfig,
/// The path of the target snapshot file.
#[clap(
long,
default_value = "metadata_snapshot.metadata.fb",
alias = "file_name"
)]
file_path: String,
/// Specifies the root directory used for I/O operations.
#[clap(long, default_value = "/", alias = "input_dir")]
dir: String,
#[clap(long, default_value = "false")]
force: bool,
}
impl RestoreCommand {
pub async fn build(&self) -> Result<Box<dyn Tool>, BoxedError> {
async fn build(&self) -> Result<MetaRestoreTool, BoxedError> {
let kvbackend = self.store.build().await?;
let input_dir = &self.input_dir;
let object_store = self.s3_config.build(input_dir).map_err(BoxedError::new)?;
if let Some(store) = object_store {
let tool = MetaRestoreTool::new(
MetadataSnapshotManager::new(kvbackend, store),
self.file_name.clone(),
self.force,
);
Ok(Box::new(tool))
} else {
let object_store = create_local_file_object_store(input_dir)?;
let tool = MetaRestoreTool::new(
MetadataSnapshotManager::new(kvbackend, object_store),
self.file_name.clone(),
self.force,
);
Ok(Box::new(tool))
}
let (object_store, file_path) = build_object_store_and_resolve_file_path(
self.object_store.clone(),
&self.dir,
&self.file_path,
)
.map_err(BoxedError::new)?;
let tool = MetaRestoreTool::new(
MetadataSnapshotManager::new(kvbackend, object_store),
file_path,
self.force,
);
Ok(tool)
}
}
struct MetaRestoreTool {
inner: MetadataSnapshotManager,
source_file: String,
file_path: String,
force: bool,
}
impl MetaRestoreTool {
pub fn new(inner: MetadataSnapshotManager, source_file: String, force: bool) -> Self {
pub fn new(inner: MetadataSnapshotManager, file_path: String, force: bool) -> Self {
Self {
inner,
source_file,
file_path,
force,
}
}
@@ -252,7 +173,7 @@ impl Tool for MetaRestoreTool {
"The target source is clean, we will restore the metadata snapshot."
);
self.inner
.restore(&self.source_file)
.restore(&self.file_path)
.await
.map_err(BoxedError::new)?;
Ok(())
@@ -266,7 +187,7 @@ impl Tool for MetaRestoreTool {
"The target source is not clean, We will restore the metadata snapshot with --force."
);
self.inner
.restore(&self.source_file)
.restore(&self.file_path)
.await
.map_err(BoxedError::new)?;
Ok(())
@@ -280,12 +201,19 @@ impl Tool for MetaRestoreTool {
/// It prints the filtered metadata to the console.
#[derive(Debug, Default, Parser)]
pub struct InfoCommand {
/// The s3 config.
/// The object store config.
#[clap(flatten)]
s3_config: S3Config,
/// The name of the target snapshot file. we will add the file extension automatically.
#[clap(long, default_value = "metadata_snapshot")]
file_name: String,
object_store: ObjectStoreConfig,
/// The path of the target snapshot file.
#[clap(
long,
default_value = "metadata_snapshot.metadata.fb",
alias = "file_name"
)]
file_path: String,
/// Specifies the root directory used for I/O operations.
#[clap(long, default_value = "/", alias = "input_dir")]
dir: String,
/// The query string to filter the metadata.
#[clap(long, default_value = "*")]
inspect_key: String,
@@ -296,7 +224,7 @@ pub struct InfoCommand {
struct MetaInfoTool {
inner: ObjectStore,
source_file: String,
file_path: String,
inspect_key: String,
limit: Option<usize>,
}
@@ -306,7 +234,7 @@ impl Tool for MetaInfoTool {
async fn do_work(&self) -> std::result::Result<(), BoxedError> {
let result = MetadataSnapshotManager::info(
&self.inner,
&self.source_file,
&self.file_path,
&self.inspect_key,
self.limit,
)
@@ -320,45 +248,90 @@ impl Tool for MetaInfoTool {
}
impl InfoCommand {
fn decide_object_store_root_for_local_store(
file_path: &str,
) -> Result<(&str, &str), BoxedError> {
let path = Path::new(file_path);
let parent = path
.parent()
.and_then(|p| p.to_str())
.context(InvalidFilePathSnafu { msg: file_path })
.map_err(BoxedError::new)?;
let file_name = path
.file_name()
.and_then(|f| f.to_str())
.context(InvalidFilePathSnafu { msg: file_path })
.map_err(BoxedError::new)?;
let root = if parent.is_empty() { "." } else { parent };
Ok((root, file_name))
}
pub async fn build(&self) -> Result<Box<dyn Tool>, BoxedError> {
let object_store = self.s3_config.build("").map_err(BoxedError::new)?;
if let Some(store) = object_store {
let tool = MetaInfoTool {
inner: store,
source_file: self.file_name.clone(),
inspect_key: self.inspect_key.clone(),
limit: self.limit,
};
Ok(Box::new(tool))
} else {
let (root, file_name) =
Self::decide_object_store_root_for_local_store(&self.file_name)?;
let object_store = create_local_file_object_store(root)?;
let tool = MetaInfoTool {
inner: object_store,
source_file: file_name.to_string(),
inspect_key: self.inspect_key.clone(),
limit: self.limit,
};
Ok(Box::new(tool))
}
async fn build(&self) -> Result<MetaInfoTool, BoxedError> {
let (object_store, file_path) = build_object_store_and_resolve_file_path(
self.object_store.clone(),
&self.dir,
&self.file_path,
)?;
let tool = MetaInfoTool {
inner: object_store,
file_path,
inspect_key: self.inspect_key.clone(),
limit: self.limit,
};
Ok(tool)
}
}
/// Builds the object store and resolves the file path.
fn build_object_store_and_resolve_file_path(
object_store: ObjectStoreConfig,
fs_root: &str,
file_path: &str,
) -> Result<(ObjectStore, String), BoxedError> {
let object_store = object_store.build().map_err(BoxedError::new)?;
let object_store = match object_store {
Some(object_store) => object_store,
None => new_fs_object_store(fs_root)?,
};
let file_path = if matches!(object_store.info().scheme(), Scheme::Fs) {
resolve_relative_path_with_current_dir(file_path).map_err(BoxedError::new)?
} else {
file_path.to_string()
};
Ok((object_store, file_path))
}
#[cfg(test)]
mod tests {
use std::env;
use clap::Parser;
use crate::metadata::snapshot::RestoreCommand;
#[tokio::test]
async fn test_cmd_resolve_file_path() {
common_telemetry::init_default_ut_logging();
let cmd = RestoreCommand::parse_from([
"",
"--file_name",
"metadata_snapshot.metadata.fb",
"--backend",
"memory-store",
"--store-addrs",
"memory://",
]);
let tool = cmd.build().await.unwrap();
let current_dir = env::current_dir().unwrap();
let file_path = current_dir.join("metadata_snapshot.metadata.fb");
assert_eq!(tool.file_path, file_path.to_string_lossy().to_string());
let cmd = RestoreCommand::parse_from([
"",
"--file_name",
"metadata_snapshot.metadata.fb",
"--backend",
"memory-store",
"--store-addrs",
"memory://",
]);
let tool = cmd.build().await.unwrap();
assert_eq!(tool.file_path, file_path.to_string_lossy().to_string());
let cmd = RestoreCommand::parse_from([
"",
"--file_name",
"metadata_snapshot.metadata.fb",
"--backend",
"memory-store",
"--store-addrs",
"memory://",
]);
let tool = cmd.build().await.unwrap();
assert_eq!(tool.file_path, file_path.to_string_lossy().to_string());
}
}

src/cli/src/utils.rs Normal file
View File

@@ -0,0 +1,94 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::env;
use std::path::Path;
use snafu::ResultExt;
use crate::error::{GetCurrentDirSnafu, Result};
/// Resolves a relative path to an absolute path against `current_dir`; absolute paths are returned unchanged.
pub fn resolve_relative_path(current_dir: impl AsRef<Path>, path_str: &str) -> String {
let path = Path::new(path_str);
if path.is_relative() {
let path = current_dir.as_ref().join(path);
common_telemetry::debug!("Resolved relative path: {}", path.to_string_lossy());
path.to_string_lossy().to_string()
} else {
path_str.to_string()
}
}
/// Resolves a relative path to an absolute path, using the current working directory as the base.
pub fn resolve_relative_path_with_current_dir(path_str: &str) -> Result<String> {
let current_dir = env::current_dir().context(GetCurrentDirSnafu)?;
Ok(resolve_relative_path(current_dir, path_str))
}
#[cfg(test)]
mod tests {
use std::env;
use std::path::PathBuf;
use super::*;
#[test]
fn test_resolve_relative_path_absolute() {
let abs_path = if cfg!(windows) {
"C:\\foo\\bar"
} else {
"/foo/bar"
};
let current_dir = PathBuf::from("/tmp");
let result = resolve_relative_path(&current_dir, abs_path);
assert_eq!(result, abs_path);
}
#[test]
fn test_resolve_relative_path_relative() {
let current_dir = PathBuf::from("/tmp");
let rel_path = "foo/bar";
let expected = "/tmp/foo/bar";
let result = resolve_relative_path(&current_dir, rel_path);
// On Windows, the separator is '\', so normalize for comparison;
// '/' is kept as-is (treated as a normal character) in the joined Windows path.
if cfg!(windows) {
assert!(result.ends_with("foo/bar"));
assert!(result.contains("/tmp\\"));
} else {
assert_eq!(result, expected);
}
}
#[test]
fn test_resolve_relative_path_with_current_dir_absolute() {
let abs_path = if cfg!(windows) {
"C:\\foo\\bar"
} else {
"/foo/bar"
};
let result = resolve_relative_path_with_current_dir(abs_path).unwrap();
assert_eq!(result, abs_path);
}
#[test]
fn test_resolve_relative_path_with_current_dir_relative() {
let rel_path = "foo/bar";
let current_dir = env::current_dir().unwrap();
let expected = current_dir.join(rel_path).to_string_lossy().to_string();
let result = resolve_relative_path_with_current_dir(rel_path).unwrap();
assert_eq!(result, expected);
}
}
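A quick illustration of the resolution rule (Unix-style separators assumed), reusing `resolve_relative_path` from above:

use std::path::PathBuf;

fn main() {
    let base = PathBuf::from("/data/greptime");
    // Absolute inputs pass through untouched.
    assert_eq!(resolve_relative_path(&base, "/etc/config.toml"), "/etc/config.toml");
    // Relative inputs are joined onto the base directory.
    assert_eq!(
        resolve_relative_path(&base, "snapshot.metadata.fb"),
        "/data/greptime/snapshot.metadata.fb"
    );
}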

View File

@@ -54,7 +54,6 @@ common-wal.workspace = true
datanode.workspace = true
datatypes.workspace = true
etcd-client.workspace = true
file-engine.workspace = true
flow.workspace = true
frontend = { workspace = true, default-features = false }
futures.workspace = true
@@ -64,7 +63,6 @@ lazy_static.workspace = true
meta-client.workspace = true
meta-srv.workspace = true
metric-engine.workspace = true
mito2.workspace = true
moka.workspace = true
nu-ansi-term = "0.46"
object-store.workspace = true
@@ -75,13 +73,14 @@ query.workspace = true
rand.workspace = true
regex.workspace = true
reqwest.workspace = true
standalone.workspace = true
serde.workspace = true
serde_json.workspace = true
servers.workspace = true
session.workspace = true
similar-asserts.workspace = true
snafu.workspace = true
stat.workspace = true
common-stat.workspace = true
store-api.workspace = true
substrait.workspace = true
table.workspace = true
@@ -100,6 +99,8 @@ common-version.workspace = true
serde.workspace = true
temp-env = "0.3"
tempfile.workspace = true
file-engine.workspace = true
mito2.workspace = true
[target.'cfg(not(windows))'.dev-dependencies]
rexpect = "0.5"

View File

@@ -302,6 +302,20 @@ pub enum Error {
location: Location,
source: common_meta::error::Error,
},
#[snafu(display("Failed to build metadata kvbackend"))]
BuildMetadataKvbackend {
#[snafu(implicit)]
location: Location,
source: standalone::error::Error,
},
#[snafu(display("Failed to setup standalone plugins"))]
SetupStandalonePlugins {
#[snafu(implicit)]
location: Location,
source: standalone::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -320,6 +334,8 @@ impl ErrorExt for Error {
Error::UnsupportedSelectorType { source, .. } => source.status_code(),
Error::BuildCli { source, .. } => source.status_code(),
Error::StartCli { source, .. } => source.status_code(),
Error::BuildMetadataKvbackend { source, .. } => source.status_code(),
Error::SetupStandalonePlugins { source, .. } => source.status_code(),
Error::InitMetadata { source, .. } | Error::InitDdlManager { source, .. } => {
source.status_code()

View File

@@ -18,8 +18,8 @@ use async_trait::async_trait;
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_mem_prof::activate_heap_profile;
use common_stat::{get_cpu_limit, get_memory_limit};
use common_telemetry::{error, info, warn};
use stat::{get_cpu_limit, get_memory_limit};
use crate::error::Result;

View File

@@ -399,7 +399,6 @@ mod tests {
threshold = 8.0
min_std_deviation = "100ms"
acceptable_heartbeat_pause = "3000ms"
first_heartbeat_estimate = "1000ms"
"#;
write!(file, "{}", toml_str).unwrap();
@@ -430,13 +429,6 @@ mod tests {
.acceptable_heartbeat_pause
.as_millis()
);
assert_eq!(
1000,
options
.failure_detector
.first_heartbeat_estimate
.as_millis()
);
assert_eq!(
options.procedure.max_metadata_value_size,
Some(ReadableSize::kb(1500))

View File

@@ -19,71 +19,47 @@ use std::{fs, path};
use async_trait::async_trait;
use cache::{build_fundamental_cache_registry, with_default_composite_cache_registry};
use catalog::information_schema::{DatanodeInspectRequest, InformationExtension};
use catalog::kvbackend::KvBackendCatalogManagerBuilder;
use catalog::process_manager::ProcessManager;
use clap::Parser;
use client::SendableRecordBatchStream;
use client::api::v1::meta::RegionRole;
use common_base::Plugins;
use common_base::readable_size::ReadableSize;
use common_catalog::consts::{MIN_USER_FLOW_ID, MIN_USER_TABLE_ID};
use common_config::{Configurable, KvBackendConfig, metadata_store_dir};
use common_config::{Configurable, metadata_store_dir};
use common_error::ext::BoxedError;
use common_meta::cache::LayeredCacheRegistryBuilder;
use common_meta::cluster::{NodeInfo, NodeStatus};
use common_meta::datanode::RegionStat;
use common_meta::ddl::flow_meta::FlowMetadataAllocator;
use common_meta::ddl::table_meta::TableMetadataAllocator;
use common_meta::ddl::{DdlContext, NoopRegionFailureDetectorControl};
use common_meta::ddl_manager::DdlManager;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::flow::flow_state::FlowStat;
use common_meta::key::{TableMetadataManager, TableMetadataManagerRef};
use common_meta::kv_backend::KvBackendRef;
use common_meta::peer::Peer;
use common_meta::procedure_executor::LocalProcedureExecutor;
use common_meta::region_keeper::MemoryRegionKeeper;
use common_meta::region_registry::LeaderRegionRegistry;
use common_meta::sequence::SequenceBuilder;
use common_meta::wal_options_allocator::{WalOptionsAllocatorRef, build_wal_options_allocator};
use common_options::memory::MemoryOptions;
use common_procedure::{ProcedureInfo, ProcedureManagerRef};
use common_query::request::QueryRequest;
use common_procedure::ProcedureManagerRef;
use common_telemetry::info;
use common_telemetry::logging::{
DEFAULT_LOGGING_DIR, LoggingOptions, SlowQueryOptions, TracingOptions,
};
use common_telemetry::logging::{DEFAULT_LOGGING_DIR, TracingOptions};
use common_time::timezone::set_default_timezone;
use common_version::{short_version, verbose_version};
use common_wal::config::DatanodeWalConfig;
use datanode::config::{DatanodeOptions, ProcedureConfig, RegionEngineConfig, StorageConfig};
use datanode::config::DatanodeOptions;
use datanode::datanode::{Datanode, DatanodeBuilder};
use datanode::region_server::RegionServer;
use file_engine::config::EngineConfig as FileEngineConfig;
use flow::{
FlowConfig, FlownodeBuilder, FlownodeInstance, FlownodeOptions, FrontendClient,
FrontendInvoker, GrpcQueryHandlerWithBoxedError, StreamingEngine,
FlownodeBuilder, FlownodeInstance, FlownodeOptions, FrontendClient, FrontendInvoker,
GrpcQueryHandlerWithBoxedError,
};
use frontend::frontend::{Frontend, FrontendOptions};
use frontend::frontend::Frontend;
use frontend::instance::StandaloneDatanodeManager;
use frontend::instance::builder::FrontendBuilder;
use frontend::instance::{Instance as FeInstance, StandaloneDatanodeManager};
use frontend::server::Services;
use frontend::service_config::{
InfluxdbOptions, JaegerOptions, MysqlOptions, OpentsdbOptions, PostgresOptions,
PromStoreOptions,
};
use meta_srv::metasrv::{FLOW_ID_SEQ, TABLE_ID_SEQ};
use mito2::config::MitoConfig;
use query::options::QueryOptions;
use serde::{Deserialize, Serialize};
use servers::export_metrics::{ExportMetricsOption, ExportMetricsTask};
use servers::grpc::GrpcOptions;
use servers::http::HttpOptions;
use servers::export_metrics::ExportMetricsTask;
use servers::tls::{TlsMode, TlsOption};
use snafu::ResultExt;
use store_api::storage::RegionId;
use tokio::sync::RwLock;
use standalone::StandaloneInformationExtension;
use standalone::options::StandaloneOptions;
use tracing_appender::non_blocking::WorkerGuard;
use crate::error::{Result, StartFlownodeSnafu};
@@ -133,131 +109,6 @@ impl SubCommand {
}
}
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq)]
#[serde(default)]
pub struct StandaloneOptions {
pub enable_telemetry: bool,
pub default_timezone: Option<String>,
pub http: HttpOptions,
pub grpc: GrpcOptions,
pub mysql: MysqlOptions,
pub postgres: PostgresOptions,
pub opentsdb: OpentsdbOptions,
pub influxdb: InfluxdbOptions,
pub jaeger: JaegerOptions,
pub prom_store: PromStoreOptions,
pub wal: DatanodeWalConfig,
pub storage: StorageConfig,
pub metadata_store: KvBackendConfig,
pub procedure: ProcedureConfig,
pub flow: FlowConfig,
pub logging: LoggingOptions,
pub user_provider: Option<String>,
/// Options for different store engines.
pub region_engine: Vec<RegionEngineConfig>,
pub export_metrics: ExportMetricsOption,
pub tracing: TracingOptions,
pub init_regions_in_background: bool,
pub init_regions_parallelism: usize,
pub max_in_flight_write_bytes: Option<ReadableSize>,
pub slow_query: SlowQueryOptions,
pub query: QueryOptions,
pub memory: MemoryOptions,
}
impl Default for StandaloneOptions {
fn default() -> Self {
Self {
enable_telemetry: true,
default_timezone: None,
http: HttpOptions::default(),
grpc: GrpcOptions::default(),
mysql: MysqlOptions::default(),
postgres: PostgresOptions::default(),
opentsdb: OpentsdbOptions::default(),
influxdb: InfluxdbOptions::default(),
jaeger: JaegerOptions::default(),
prom_store: PromStoreOptions::default(),
wal: DatanodeWalConfig::default(),
storage: StorageConfig::default(),
metadata_store: KvBackendConfig::default(),
procedure: ProcedureConfig::default(),
flow: FlowConfig::default(),
logging: LoggingOptions::default(),
export_metrics: ExportMetricsOption::default(),
user_provider: None,
region_engine: vec![
RegionEngineConfig::Mito(MitoConfig::default()),
RegionEngineConfig::File(FileEngineConfig::default()),
],
tracing: TracingOptions::default(),
init_regions_in_background: false,
init_regions_parallelism: 16,
max_in_flight_write_bytes: None,
slow_query: SlowQueryOptions::default(),
query: QueryOptions::default(),
memory: MemoryOptions::default(),
}
}
}
impl Configurable for StandaloneOptions {
fn env_list_keys() -> Option<&'static [&'static str]> {
Some(&["wal.broker_endpoints"])
}
}
/// The [`StandaloneOptions`] is only defined in cmd crate,
/// we don't want to make `frontend` depends on it, so impl [`Into`]
/// rather than [`From`].
#[allow(clippy::from_over_into)]
impl Into<FrontendOptions> for StandaloneOptions {
fn into(self) -> FrontendOptions {
self.frontend_options()
}
}
impl StandaloneOptions {
pub fn frontend_options(&self) -> FrontendOptions {
let cloned_opts = self.clone();
FrontendOptions {
default_timezone: cloned_opts.default_timezone,
http: cloned_opts.http,
grpc: cloned_opts.grpc,
mysql: cloned_opts.mysql,
postgres: cloned_opts.postgres,
opentsdb: cloned_opts.opentsdb,
influxdb: cloned_opts.influxdb,
jaeger: cloned_opts.jaeger,
prom_store: cloned_opts.prom_store,
meta_client: None,
logging: cloned_opts.logging,
user_provider: cloned_opts.user_provider,
// Handle the export metrics task run by standalone to frontend for execution
export_metrics: cloned_opts.export_metrics,
max_in_flight_write_bytes: cloned_opts.max_in_flight_write_bytes,
slow_query: cloned_opts.slow_query,
..Default::default()
}
}
pub fn datanode_options(&self) -> DatanodeOptions {
let cloned_opts = self.clone();
DatanodeOptions {
node_id: Some(0),
enable_telemetry: cloned_opts.enable_telemetry,
wal: cloned_opts.wal,
storage: cloned_opts.storage,
region_engine: cloned_opts.region_engine,
grpc: cloned_opts.grpc,
init_regions_in_background: cloned_opts.init_regions_in_background,
init_regions_parallelism: cloned_opts.init_regions_parallelism,
query: cloned_opts.query,
..Default::default()
}
}
}
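As a quick illustration of how a single standalone config fans out into per-component configs, here is a minimal sketch (it assumes the `StandaloneOptions`, `FrontendOptions` and `DatanodeOptions` types used above are in scope):

fn component_options(opts: &StandaloneOptions) -> (FrontendOptions, DatanodeOptions) {
    // Both component configs are derived from the same standalone options;
    // the `Into` impl above simply delegates to `frontend_options()`.
    (opts.clone().into(), opts.datanode_options())
}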
pub struct Instance {
datanode: Datanode,
frontend: Frontend,
@@ -396,6 +247,7 @@ impl StartCommand {
.context(error::LoadLayeredConfigSnafu)?;
self.merge_with_cli_options(global_options, &mut opts.component)?;
opts.component.sanitize();
Ok(opts)
}
@@ -523,13 +375,14 @@ impl StartCommand {
.context(error::CreateDirSnafu { dir: data_home })?;
let metadata_dir = metadata_store_dir(data_home);
let (kv_backend, procedure_manager) = FeInstance::try_build_standalone_components(
metadata_dir,
opts.metadata_store,
opts.procedure,
)
.await
.context(error::StartFrontendSnafu)?;
let kv_backend = standalone::build_metadata_kvbackend(metadata_dir, opts.metadata_store)
.context(error::BuildMetadataKvbackendSnafu)?;
let procedure_manager =
standalone::build_procedure_manager(kv_backend.clone(), opts.procedure);
plugins::setup_standalone_plugins(&mut plugins, &plugin_opts, &opts, kv_backend.clone())
.await
.context(error::SetupStandalonePluginsSnafu)?;
// Builds cache registry
let layered_cache_builder = LayeredCacheRegistryBuilder::default();
@@ -745,141 +598,6 @@ impl StartCommand {
}
}
pub struct StandaloneInformationExtension {
region_server: RegionServer,
procedure_manager: ProcedureManagerRef,
start_time_ms: u64,
flow_streaming_engine: RwLock<Option<Arc<StreamingEngine>>>,
}
impl StandaloneInformationExtension {
pub fn new(region_server: RegionServer, procedure_manager: ProcedureManagerRef) -> Self {
Self {
region_server,
procedure_manager,
start_time_ms: common_time::util::current_time_millis() as u64,
flow_streaming_engine: RwLock::new(None),
}
}
/// Set the flow streaming engine for the standalone instance.
pub async fn set_flow_streaming_engine(&self, flow_streaming_engine: Arc<StreamingEngine>) {
let mut guard = self.flow_streaming_engine.write().await;
*guard = Some(flow_streaming_engine);
}
}
#[async_trait::async_trait]
impl InformationExtension for StandaloneInformationExtension {
type Error = catalog::error::Error;
async fn nodes(&self) -> std::result::Result<Vec<NodeInfo>, Self::Error> {
let build_info = common_version::build_info();
let node_info = NodeInfo {
// For the standalone:
// - id always 0
// - empty string for peer_addr
peer: Peer {
id: 0,
addr: "".to_string(),
},
last_activity_ts: -1,
status: NodeStatus::Standalone,
version: build_info.version.to_string(),
git_commit: build_info.commit_short.to_string(),
// Use `self.start_time_ms` instead.
// It's not precise, but it's good enough.
start_time_ms: self.start_time_ms,
cpus: common_config::utils::get_cpus() as u32,
memory_bytes: common_config::utils::get_sys_total_memory()
.unwrap_or_default()
.as_bytes(),
};
Ok(vec![node_info])
}
async fn procedures(&self) -> std::result::Result<Vec<(String, ProcedureInfo)>, Self::Error> {
self.procedure_manager
.list_procedures()
.await
.map_err(BoxedError::new)
.map(|procedures| {
procedures
.into_iter()
.map(|procedure| {
let status = procedure.state.as_str_name().to_string();
(status, procedure)
})
.collect::<Vec<_>>()
})
.context(catalog::error::ListProceduresSnafu)
}
async fn region_stats(&self) -> std::result::Result<Vec<RegionStat>, Self::Error> {
let stats = self
.region_server
.reportable_regions()
.into_iter()
.map(|stat| {
let region_stat = self
.region_server
.region_statistic(stat.region_id)
.unwrap_or_default();
RegionStat {
id: stat.region_id,
rcus: 0,
wcus: 0,
approximate_bytes: region_stat.estimated_disk_size(),
engine: stat.engine,
role: RegionRole::from(stat.role).into(),
num_rows: region_stat.num_rows,
memtable_size: region_stat.memtable_size,
manifest_size: region_stat.manifest_size,
sst_size: region_stat.sst_size,
sst_num: region_stat.sst_num,
index_size: region_stat.index_size,
region_manifest: region_stat.manifest.into(),
data_topic_latest_entry_id: region_stat.data_topic_latest_entry_id,
metadata_topic_latest_entry_id: region_stat.metadata_topic_latest_entry_id,
written_bytes: region_stat.written_bytes,
}
})
.collect::<Vec<_>>();
Ok(stats)
}
async fn flow_stats(&self) -> std::result::Result<Option<FlowStat>, Self::Error> {
Ok(Some(
self.flow_streaming_engine
.read()
.await
.as_ref()
.unwrap()
.gen_state_report()
.await,
))
}
async fn inspect_datanode(
&self,
request: DatanodeInspectRequest,
) -> std::result::Result<SendableRecordBatchStream, Self::Error> {
let req = QueryRequest {
plan: request
.build_plan()
.context(catalog::error::DatafusionSnafu)?,
region_id: RegionId::default(),
header: None,
};
self.region_server
.handle_read(req)
.await
.map_err(BoxedError::new)
.context(catalog::error::InternalSnafu)
}
}
#[cfg(test)]
mod tests {
use std::default::Default;
@@ -891,7 +609,9 @@ mod tests {
use common_config::ENV_VAR_SEP;
use common_test_util::temp_dir::create_named_temp_file;
use common_wal::config::DatanodeWalConfig;
use frontend::frontend::FrontendOptions;
use object_store::config::{FileConfig, GcsConfig};
use servers::grpc::GrpcOptions;
use super::*;
use crate::options::GlobalOptions;
@@ -1021,7 +741,7 @@ mod tests {
object_store::config::ObjectStoreConfig::S3(s3_config) => {
assert_eq!(
"SecretBox<alloc::string::String>([REDACTED])".to_string(),
format!("{:?}", s3_config.access_key_id)
format!("{:?}", s3_config.connection.access_key_id)
);
}
_ => {
@@ -1147,4 +867,22 @@ mod tests {
assert_eq!(options.logging, default_options.logging);
assert_eq!(options.region_engine, default_options.region_engine);
}
#[test]
fn test_cache_config() {
let toml_str = r#"
[storage]
data_home = "test_data_home"
type = "S3"
[storage.cache_config]
enable_read_cache = true
"#;
let mut opts: StandaloneOptions = toml::from_str(toml_str).unwrap();
opts.sanitize();
assert!(opts.storage.store.cache_config().unwrap().enable_read_cache);
assert_eq!(
opts.storage.store.cache_config().unwrap().cache_path,
"test_data_home"
);
}
}

View File

@@ -15,7 +15,6 @@
use std::time::Duration;
use cmd::options::GreptimeOptions;
use cmd::standalone::StandaloneOptions;
use common_config::{Configurable, DEFAULT_DATA_HOME};
use common_options::datanode::{ClientOptions, DatanodeClientOptions};
use common_telemetry::logging::{DEFAULT_LOGGING_DIR, DEFAULT_OTLP_HTTP_ENDPOINT, LoggingOptions};
@@ -35,6 +34,7 @@ use servers::export_metrics::ExportMetricsOption;
use servers::grpc::GrpcOptions;
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
use standalone::options::StandaloneOptions;
use store_api::path_utils::WAL_DIR;
#[allow(deprecated)]
@@ -143,9 +143,11 @@ fn test_load_frontend_example_config() {
remote_write: Some(Default::default()),
..Default::default()
},
grpc: GrpcOptions::default()
.with_bind_addr("127.0.0.1:4001")
.with_server_addr("127.0.0.1:4001"),
grpc: GrpcOptions {
bind_addr: "127.0.0.1:4001".to_string(),
server_addr: "127.0.0.1:4001".to_string(),
..Default::default()
},
internal_grpc: Some(GrpcOptions::internal_default()),
http: HttpOptions {
cors_allowed_origins: vec!["https://example.com".to_string()],

View File

@@ -31,7 +31,7 @@
//! types of `SecretBox<T>` to be serializable with `serde`, you will need to impl
//! the [`SerializableSecret`] marker trait on `T`
use std::fmt::Debug;
use std::fmt::{Debug, Display};
use std::{any, fmt};
use serde::{Deserialize, Serialize, de, ser};
@@ -46,6 +46,12 @@ impl From<String> for SecretString {
}
}
impl Display for SecretString {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "SecretString([REDACTED])")
}
}
/// Wrapper type for values that contains secrets.
///
/// It attempts to limit accidental exposure and ensure secrets are wiped from memory when dropped.
@@ -165,6 +171,15 @@ impl<S: Zeroize> ExposeSecretMut<S> for SecretBox<S> {
}
}
impl<S> PartialEq for SecretBox<S>
where
S: PartialEq + Zeroize,
{
fn eq(&self, other: &Self) -> bool {
self.inner_secret == other.inner_secret
}
}
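A small, hypothetical test sketching the behavior these impls add (assuming `SecretString` from this module):

#[test]
fn secret_string_redaction_and_eq() {
    let a = SecretString::from("super-secret-token".to_string());
    let b = SecretString::from("super-secret-token".to_string());
    // Display, like Debug, never leaks the inner value...
    assert_eq!(a.to_string(), "SecretString([REDACTED])");
    // ...while the new PartialEq impl still compares the wrapped secrets.
    assert_eq!(a, b);
}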
/// Expose a reference to an inner secret
pub trait ExposeSecret<S> {
/// Expose secret: this is the only method providing access to a secret.

View File

@@ -104,15 +104,16 @@ pub const INFORMATION_SCHEMA_PROCEDURE_INFO_TABLE_ID: u32 = 34;
pub const INFORMATION_SCHEMA_REGION_STATISTICS_TABLE_ID: u32 = 35;
/// id for information_schema.process_list
pub const INFORMATION_SCHEMA_PROCESS_LIST_TABLE_ID: u32 = 36;
/// id for information_schema.ssts_manifest
pub const INFORMATION_SCHEMA_SSTS_MANIFEST_TABLE_ID: u32 = 37;
/// id for information_schema.ssts_storage
pub const INFORMATION_SCHEMA_SSTS_STORAGE_TABLE_ID: u32 = 38;
// ----- End of information_schema tables -----
/// ----- Begin of pg_catalog tables -----
pub const PG_CATALOG_PG_CLASS_TABLE_ID: u32 = 256;
pub const PG_CATALOG_PG_TYPE_TABLE_ID: u32 = 257;
pub const PG_CATALOG_PG_NAMESPACE_TABLE_ID: u32 = 258;
pub const PG_CATALOG_PG_DATABASE_TABLE_ID: u32 = 259;
pub const PG_CATALOG_TABLE_ID_START: u32 = 256;
// Please reserve 128 table ids for Postgres (pg_catalog) tables
// ----- End of pg_catalog tables -----
pub const MITO_ENGINE: &str = "mito";

View File

@@ -213,7 +213,7 @@ mod tests {
// Check the configs from environment variables.
match &opts.storage.store {
object_store::config::ObjectStoreConfig::S3(s3_config) => {
assert_eq!(s3_config.bucket, "mybucket".to_string());
assert_eq!(s3_config.connection.bucket, "mybucket".to_string());
}
_ => panic!("unexpected store type"),
}

View File

@@ -54,8 +54,11 @@ pub const FORMAT_SCHEMA_INFER_MAX_RECORD: &str = "schema_infer_max_record";
pub const FORMAT_HAS_HEADER: &str = "has_header";
pub const FORMAT_TYPE: &str = "format";
pub const FILE_PATTERN: &str = "pattern";
pub const TIMESTAMP_FORMAT: &str = "timestamp_format";
pub const TIME_FORMAT: &str = "time_format";
pub const DATE_FORMAT: &str = "date_format";
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Format {
Csv(CsvFormat),
Json(JsonFormat),

View File

@@ -15,8 +15,8 @@
use std::collections::HashMap;
use std::str::FromStr;
use arrow::csv;
use arrow::csv::reader::Format;
use arrow::csv::{self, WriterBuilder};
use arrow::record_batch::RecordBatch;
use arrow_schema::Schema;
use async_trait::async_trait;
@@ -33,12 +33,15 @@ use crate::error::{self, Result};
use crate::file_format::{self, FileFormat, stream_to_file};
use crate::share_buffer::SharedBuffer;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct CsvFormat {
pub has_header: bool,
pub delimiter: u8,
pub schema_infer_max_record: Option<usize>,
pub compression_type: CompressionType,
pub timestamp_format: Option<String>,
pub time_format: Option<String>,
pub date_format: Option<String>,
}
impl TryFrom<&HashMap<String, String>> for CsvFormat {
@@ -79,6 +82,15 @@ impl TryFrom<&HashMap<String, String>> for CsvFormat {
}
.build()
})?;
};
if let Some(timestamp_format) = value.get(file_format::TIMESTAMP_FORMAT) {
format.timestamp_format = Some(timestamp_format.clone());
}
if let Some(time_format) = value.get(file_format::TIME_FORMAT) {
format.time_format = Some(time_format.clone());
}
if let Some(date_format) = value.get(file_format::DATE_FORMAT) {
format.date_format = Some(date_format.clone());
}
Ok(format)
}
@@ -91,6 +103,9 @@ impl Default for CsvFormat {
delimiter: b',',
schema_infer_max_record: Some(file_format::DEFAULT_SCHEMA_INFER_MAX_RECORD),
compression_type: CompressionType::Uncompressed,
timestamp_format: None,
time_format: None,
date_format: None,
}
}
}
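For illustration, a sketch of how the three new keys flow in when a `CsvFormat` is built from user-supplied options (the string keys match the `TIMESTAMP_FORMAT`, `TIME_FORMAT` and `DATE_FORMAT` constants above; error handling is simplified):

use std::collections::HashMap;

fn csv_format_with_custom_styles() -> CsvFormat {
    let options: HashMap<String, String> = [
        ("timestamp_format".to_string(), "%m-%d-%Y".to_string()),
        ("date_format".to_string(), "%m-%d-%Y".to_string()),
        ("time_format".to_string(), "%Ss".to_string()),
    ]
    .into_iter()
    .collect();
    // TryFrom copies each optional format string onto the CsvFormat;
    // unset keys keep the `None` defaults above.
    CsvFormat::try_from(&options).expect("valid csv options")
}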
@@ -140,9 +155,20 @@ pub async fn stream_to_csv(
path: &str,
threshold: usize,
concurrency: usize,
format: &CsvFormat,
) -> Result<usize> {
stream_to_file(stream, store, path, threshold, concurrency, |buffer| {
csv::Writer::new(buffer)
let mut builder = WriterBuilder::new();
if let Some(timestamp_format) = &format.timestamp_format {
builder = builder.with_timestamp_format(timestamp_format.to_owned())
}
if let Some(date_format) = &format.date_format {
builder = builder.with_date_format(date_format.to_owned())
}
if let Some(time_format) = &format.time_format {
builder = builder.with_time_format(time_format.to_owned())
}
builder.build(buffer)
})
.await
}
@@ -265,6 +291,9 @@ mod tests {
schema_infer_max_record: Some(2000),
delimiter: b'\t',
has_header: false,
timestamp_format: None,
time_format: None,
date_format: None
}
);
}

View File

@@ -196,7 +196,10 @@ pub async fn stream_to_parquet(
concurrency: usize,
) -> Result<usize> {
let write_props = column_wise_config(
WriterProperties::builder().set_compression(Compression::ZSTD(ZstdLevel::default())),
WriterProperties::builder()
.set_compression(Compression::ZSTD(ZstdLevel::default()))
.set_statistics_truncate_length(None)
.set_column_index_truncate_length(None),
schema,
)
.build();
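For reference, the same writer properties in isolation, as a standalone sketch of the parquet crate API exercised above:

use parquet::basic::{Compression, ZstdLevel};
use parquet::file::properties::WriterProperties;

fn parquet_writer_props() -> WriterProperties {
    WriterProperties::builder()
        .set_compression(Compression::ZSTD(ZstdLevel::default()))
        // Disable truncation so statistics and column-index values keep their full length.
        .set_statistics_truncate_length(None)
        .set_column_index_truncate_length(None)
        .build()
}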

View File

@@ -34,7 +34,7 @@ use object_store::ObjectStore;
use super::FORMAT_TYPE;
use crate::file_format::parquet::DefaultParquetFileReaderFactory;
use crate::file_format::{FileFormat, Format, OrcFormat};
use crate::test_util::{scan_config, test_basic_schema, test_store};
use crate::test_util::{csv_basic_schema, scan_config, test_basic_schema, test_store};
use crate::{error, test_util};
struct Test<'a> {
@@ -107,7 +107,7 @@ async fn test_json_opener() {
#[tokio::test]
async fn test_csv_opener() {
let store = test_store("/");
let schema = test_basic_schema();
let schema = csv_basic_schema();
let path = &find_workspace_path("/src/common/datasource/tests/csv/basic.csv")
.display()
.to_string();
@@ -121,24 +121,24 @@ async fn test_csv_opener() {
config: scan_config(schema.clone(), None, path, file_source.clone()),
file_source: file_source.clone(),
expected: vec![
"+-----+-------+",
"| num | str |",
"+-----+-------+",
"| 5 | test |",
"| 2 | hello |",
"| 4 | foo |",
"+-----+-------+",
"+-----+-------+---------------------+----------+------------+",
"| num | str | ts | t | date |",
"+-----+-------+---------------------+----------+------------+",
"| 5 | test | 2023-04-01T00:00:00 | 00:00:10 | 2023-04-01 |",
"| 2 | hello | 2023-04-01T00:00:00 | 00:00:20 | 2023-04-01 |",
"| 4 | foo | 2023-04-01T00:00:00 | 00:00:30 | 2023-04-01 |",
"+-----+-------+---------------------+----------+------------+",
],
},
Test {
config: scan_config(schema, Some(1), path, file_source.clone()),
file_source,
expected: vec![
"+-----+------+",
"| num | str |",
"+-----+------+",
"| 5 | test |",
"+-----+------+",
"+-----+------+---------------------+----------+------------+",
"| num | str | ts | t | date |",
"+-----+------+---------------------+----------+------------+",
"| 5 | test | 2023-04-01T00:00:00 | 00:00:10 | 2023-04-01 |",
"+-----+------+---------------------+----------+------------+",
],
},
];

View File

@@ -14,7 +14,7 @@
use std::sync::Arc;
use arrow_schema::{DataType, Field, Schema, SchemaRef};
use arrow_schema::{DataType, Field, Schema, SchemaRef, TimeUnit};
use common_test_util::temp_dir::{TempDir, create_temp_dir};
use datafusion::datasource::file_format::file_compression_type::FileCompressionType;
use datafusion::datasource::listing::PartitionedFile;
@@ -27,7 +27,7 @@ use datafusion::physical_plan::metrics::ExecutionPlanMetricsSet;
use object_store::ObjectStore;
use object_store::services::Fs;
use crate::file_format::csv::stream_to_csv;
use crate::file_format::csv::{CsvFormat, stream_to_csv};
use crate::file_format::json::stream_to_json;
use crate::test_util;
@@ -68,6 +68,17 @@ pub fn test_basic_schema() -> SchemaRef {
Arc::new(schema)
}
pub fn csv_basic_schema() -> SchemaRef {
let schema = Schema::new(vec![
Field::new("num", DataType::Int64, false),
Field::new("str", DataType::Utf8, false),
Field::new("ts", DataType::Timestamp(TimeUnit::Second, None), false),
Field::new("t", DataType::Time32(TimeUnit::Second), false),
Field::new("date", DataType::Date32, false),
]);
Arc::new(schema)
}
pub(crate) fn scan_config(
file_schema: SchemaRef,
limit: Option<usize>,
@@ -128,10 +139,14 @@ pub async fn setup_stream_to_json_test(origin_path: &str, threshold: impl Fn(usi
assert_eq_lines(written.to_vec(), origin.to_vec());
}
pub async fn setup_stream_to_csv_test(origin_path: &str, threshold: impl Fn(usize) -> usize) {
pub async fn setup_stream_to_csv_test(
origin_path: &str,
format_path: &str,
threshold: impl Fn(usize) -> usize,
) {
let store = test_store("/");
let schema = test_basic_schema();
let schema = csv_basic_schema();
let csv_source = CsvSource::new(true, b',', b'"')
.with_schema(schema.clone())
@@ -150,21 +165,29 @@ pub async fn setup_stream_to_csv_test(origin_path: &str, threshold: impl Fn(usiz
let output_path = format!("{}/{}", dir.path().display(), "output");
let csv_format = CsvFormat {
timestamp_format: Some("%m-%d-%Y".to_string()),
date_format: Some("%m-%d-%Y".to_string()),
time_format: Some("%Ss".to_string()),
..Default::default()
};
assert!(
stream_to_csv(
Box::pin(stream),
tmp_store.clone(),
&output_path,
threshold(size),
8
8,
&csv_format,
)
.await
.is_ok()
);
let written = tmp_store.read(&output_path).await.unwrap();
let origin = store.read(origin_path).await.unwrap();
assert_eq_lines(written.to_vec(), origin.to_vec());
let format_expect = store.read(format_path).await.unwrap();
assert_eq_lines(written.to_vec(), format_expect.to_vec());
}
// Ignore the CRLF difference across operating systems.

View File

@@ -37,11 +37,15 @@ async fn test_stream_to_csv() {
.display()
.to_string();
let format_path = &find_workspace_path("/src/common/datasource/tests/csv/basic_format.csv")
.display()
.to_string();
// A small threshold
// Triggers a flush on each write
test_util::setup_stream_to_csv_test(origin_path, |size| size / 2).await;
test_util::setup_stream_to_csv_test(origin_path, format_path, |size| size / 2).await;
// A large threshold
// Only triggers the flush at the end
test_util::setup_stream_to_csv_test(origin_path, |size| size * 2).await;
test_util::setup_stream_to_csv_test(origin_path, format_path, |size| size * 2).await;
}

View File

@@ -1,4 +1,4 @@
num,str
5,test
2,hello
4,foo
num,str,ts,t,date
5,test,2023-04-01 00:00:00,10,2023-04-01
2,hello,2023-04-01 00:00:00,20,2023-04-01
4,foo,2023-04-01 00:00:00,30,2023-04-01

View File

@@ -0,0 +1,4 @@
num,str,ts,t,date
5,test,04-01-2023,10s,04-01-2023
2,hello,04-01-2023,20s,04-01-2023
4,foo,04-01-2023,30s,04-01-2023

View File

@@ -36,6 +36,7 @@ datafusion.workspace = true
datafusion-common.workspace = true
datafusion-expr.workspace = true
datafusion-functions-aggregate-common.workspace = true
datafusion-pg-catalog.workspace = true
datafusion-physical-expr.workspace = true
datatypes.workspace = true
derive_more = { version = "1", default-features = false, features = ["display"] }

View File

@@ -12,23 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod add_region_follower;
mod flush_compact_region;
mod flush_compact_table;
mod migrate_region;
mod reconcile_catalog;
mod reconcile_database;
mod reconcile_table;
mod remove_region_follower;
use add_region_follower::AddRegionFollowerFunction;
use flush_compact_region::{CompactRegionFunction, FlushRegionFunction};
use flush_compact_table::{CompactTableFunction, FlushTableFunction};
use migrate_region::MigrateRegionFunction;
use reconcile_catalog::ReconcileCatalogFunction;
use reconcile_database::ReconcileDatabaseFunction;
use reconcile_table::ReconcileTableFunction;
use remove_region_follower::RemoveRegionFollowerFunction;
use crate::flush_flow::FlushFlowFunction;
use crate::function_registry::FunctionRegistry;
@@ -40,8 +36,6 @@ impl AdminFunction {
/// Register all admin functions to [`FunctionRegistry`].
pub fn register(registry: &FunctionRegistry) {
registry.register(MigrateRegionFunction::factory());
registry.register(AddRegionFollowerFunction::factory());
registry.register(RemoveRegionFollowerFunction::factory());
registry.register(FlushRegionFunction::factory());
registry.register(CompactRegionFunction::factory());
registry.register(FlushTableFunction::factory());

View File

@@ -1,155 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_macro::admin_fn;
use common_meta::rpc::procedure::AddRegionFollowerRequest;
use common_query::error::{
InvalidFuncArgsSnafu, MissingProcedureServiceHandlerSnafu, Result,
UnsupportedInputDataTypeSnafu,
};
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::data_type::DataType;
use datatypes::prelude::ConcreteDataType;
use datatypes::value::{Value, ValueRef};
use session::context::QueryContextRef;
use snafu::ensure;
use crate::handlers::ProcedureServiceHandlerRef;
use crate::helper::cast_u64;
/// A function to add a follower to a region.
/// Only available in cluster mode.
///
/// - `add_region_follower(region_id, peer_id)`.
///
/// The parameters:
/// - `region_id`: the region id
/// - `peer_id`: the peer id
#[admin_fn(
name = AddRegionFollowerFunction,
display_name = add_region_follower,
sig_fn = signature,
ret = uint64
)]
pub(crate) async fn add_region_follower(
procedure_service_handler: &ProcedureServiceHandlerRef,
_ctx: &QueryContextRef,
params: &[ValueRef<'_>],
) -> Result<Value> {
ensure!(
params.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect exactly 2, have: {}",
params.len()
),
}
);
let Some(region_id) = cast_u64(&params[0])? else {
return UnsupportedInputDataTypeSnafu {
function: "add_region_follower",
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
let Some(peer_id) = cast_u64(&params[1])? else {
return UnsupportedInputDataTypeSnafu {
function: "add_region_follower",
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
procedure_service_handler
.add_region_follower(AddRegionFollowerRequest { region_id, peer_id })
.await?;
Ok(Value::from(0u64))
}
fn signature() -> Signature {
Signature::one_of(
vec![
// add_region_follower(region_id, peer)
TypeSignature::Uniform(
2,
ConcreteDataType::numerics()
.into_iter()
.map(|dt| dt.as_arrow_type())
.collect(),
),
],
Volatility::Immutable,
)
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use arrow::array::UInt64Array;
use arrow::datatypes::{DataType, Field};
use datafusion_expr::ColumnarValue;
use super::*;
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
#[test]
fn test_add_region_follower_misc() {
let factory: ScalarFunctionFactory = AddRegionFollowerFunction::factory().into();
let f = factory.provide(FunctionContext::mock());
assert_eq!("add_region_follower", f.name());
assert_eq!(DataType::UInt64, f.return_type(&[]).unwrap());
assert!(matches!(f.signature(),
datafusion_expr::Signature {
type_signature: datafusion_expr::TypeSignature::OneOf(sigs),
volatility: datafusion_expr::Volatility::Immutable
} if sigs.len() == 1));
}
#[tokio::test]
async fn test_add_region_follower() {
let factory: ScalarFunctionFactory = AddRegionFollowerFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![2]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::UInt64, false)),
Arc::new(Field::new("arg_1", DataType::UInt64, false)),
],
return_field: Arc::new(Field::new("result", DataType::UInt64, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<UInt64Array>().unwrap();
assert_eq!(result_array.value(0), 0u64);
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(scalar, datafusion_common::ScalarValue::UInt64(Some(0)));
}
}
}
}

View File

@@ -1,155 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_macro::admin_fn;
use common_meta::rpc::procedure::RemoveRegionFollowerRequest;
use common_query::error::{
InvalidFuncArgsSnafu, MissingProcedureServiceHandlerSnafu, Result,
UnsupportedInputDataTypeSnafu,
};
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::data_type::DataType;
use datatypes::prelude::ConcreteDataType;
use datatypes::value::{Value, ValueRef};
use session::context::QueryContextRef;
use snafu::ensure;
use crate::handlers::ProcedureServiceHandlerRef;
use crate::helper::cast_u64;
/// A function to remove a follower from a region.
/// Only available in cluster mode.
///
/// - `remove_region_follower(region_id, peer_id)`.
///
/// The parameters:
/// - `region_id`: the region id
/// - `peer_id`: the peer id
#[admin_fn(
name = RemoveRegionFollowerFunction,
display_name = remove_region_follower,
sig_fn = signature,
ret = uint64
)]
pub(crate) async fn remove_region_follower(
procedure_service_handler: &ProcedureServiceHandlerRef,
_ctx: &QueryContextRef,
params: &[ValueRef<'_>],
) -> Result<Value> {
ensure!(
params.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect exactly 2, have: {}",
params.len()
),
}
);
let Some(region_id) = cast_u64(&params[0])? else {
return UnsupportedInputDataTypeSnafu {
function: "add_region_follower",
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
let Some(peer_id) = cast_u64(&params[1])? else {
return UnsupportedInputDataTypeSnafu {
function: "add_region_follower",
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
procedure_service_handler
.remove_region_follower(RemoveRegionFollowerRequest { region_id, peer_id })
.await?;
Ok(Value::from(0u64))
}
fn signature() -> Signature {
Signature::one_of(
vec![
// remove_region_follower(region_id, peer_id)
TypeSignature::Uniform(
2,
ConcreteDataType::numerics()
.into_iter()
.map(|dt| dt.as_arrow_type())
.collect(),
),
],
Volatility::Immutable,
)
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use arrow::array::UInt64Array;
use arrow::datatypes::{DataType, Field};
use datafusion_expr::ColumnarValue;
use super::*;
use crate::function::FunctionContext;
use crate::function_factory::ScalarFunctionFactory;
#[test]
fn test_remove_region_follower_misc() {
let factory: ScalarFunctionFactory = RemoveRegionFollowerFunction::factory().into();
let f = factory.provide(FunctionContext::mock());
assert_eq!("remove_region_follower", f.name());
assert_eq!(DataType::UInt64, f.return_type(&[]).unwrap());
assert!(matches!(f.signature(),
datafusion_expr::Signature {
type_signature: datafusion_expr::TypeSignature::OneOf(sigs),
volatility: datafusion_expr::Volatility::Immutable
} if sigs.len() == 1));
}
#[tokio::test]
async fn test_remove_region_follower() {
let factory: ScalarFunctionFactory = RemoveRegionFollowerFunction::factory().into();
let provider = factory.provide(FunctionContext::mock());
let f = provider.as_async().unwrap();
let func_args = datafusion::logical_expr::ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
ColumnarValue::Array(Arc::new(UInt64Array::from(vec![1]))),
],
arg_fields: vec![
Arc::new(Field::new("arg_0", DataType::UInt64, false)),
Arc::new(Field::new("arg_1", DataType::UInt64, false)),
],
return_field: Arc::new(Field::new("result", DataType::UInt64, true)),
number_rows: 1,
config_options: Arc::new(datafusion_common::config::ConfigOptions::default()),
};
let result = f.invoke_async_with_args(func_args).await.unwrap();
match result {
ColumnarValue::Array(array) => {
let result_array = array.as_any().downcast_ref::<UInt64Array>().unwrap();
assert_eq!(result_array.value(0), 0u64);
}
ColumnarValue::Scalar(scalar) => {
assert_eq!(scalar, datafusion_common::ScalarValue::UInt64(Some(0)));
}
}
}
}

View File

@@ -41,7 +41,12 @@ use datafusion_expr::{
use datafusion_physical_expr::aggregate::AggregateFunctionExpr;
use datatypes::arrow::datatypes::{DataType, Field};
use crate::function_registry::FunctionRegistry;
use crate::aggrs::aggr_wrapper::fix_order::FixStateUdafOrderingAnalyzer;
use crate::function_registry::{FUNCTION_REGISTRY, FunctionRegistry};
pub mod fix_order;
#[cfg(test)]
mod tests;
/// Returns the name of the state function for the given aggregate function name.
/// The state function is used to compute the state of the aggregate function.
@@ -57,6 +62,39 @@ pub fn aggr_merge_func_name(aggr_name: &str) -> String {
format!("__{}_merge", aggr_name)
}
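The naming is a plain prefix/suffix convention; a small sketch of what the two helpers produce:

fn naming_convention() {
    // `__<name>_state` marks the per-datanode state phase,
    // `__<name>_merge` marks the frontend merge phase.
    assert_eq!(aggr_state_func_name("sum"), "__sum_state");
    assert_eq!(aggr_merge_func_name("sum"), "__sum_merge");
}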
/// Check if the given aggregate expressions are all steppable,
/// i.e. whether each can be split into multiple steps:
/// first call `state(input)` on the datanode, then
/// call `calc(merge(state))` on the frontend to get the final result.
pub fn is_all_aggr_exprs_steppable(aggr_exprs: &[Expr]) -> bool {
aggr_exprs.iter().all(|expr| {
if let Some(aggr_func) = get_aggr_func(expr) {
if aggr_func.params.distinct {
// Distinct aggregate functions are not steppable(yet).
// TODO(discord9): support distinct aggregate functions.
return false;
}
// whether the corresponding state function exists in the registry
FUNCTION_REGISTRY.is_aggr_func_exist(&aggr_state_func_name(aggr_func.func.name()))
} else {
false
}
})
}
pub fn get_aggr_func(expr: &Expr) -> Option<&datafusion_expr::expr::AggregateFunction> {
let mut expr_ref = expr;
while let Expr::Alias(alias) = expr_ref {
expr_ref = &alias.expr;
}
if let Expr::AggregateFunction(aggr_func) = expr_ref {
Some(aggr_func)
} else {
None
}
}
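A tiny sketch of what `get_aggr_func` tolerates (assuming DataFusion's `expr_fn::sum` helper): it unwraps any number of aliases before checking for the aggregate node.

use datafusion::functions_aggregate::expr_fn::sum;
use datafusion_expr::col;

fn aliases_are_transparent() {
    let expr = sum(col("x")).alias("total").alias("renamed");
    assert!(get_aggr_func(&expr).is_some());
    // A plain column is not an aggregate.
    assert!(get_aggr_func(&col("x")).is_none());
}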
/// A wrapper to make an aggregate function out of the state and merge functions of the original aggregate function.
/// It contains the original aggregate function, the state functions, and the merge function.
///
@@ -74,18 +112,6 @@ pub struct StepAggrPlan {
pub lower_state: LogicalPlan,
}
pub fn get_aggr_func(expr: &Expr) -> Option<&datafusion_expr::expr::AggregateFunction> {
let mut expr_ref = expr;
while let Expr::Alias(alias) = expr_ref {
expr_ref = &alias.expr;
}
if let Expr::AggregateFunction(aggr_func) = expr_ref {
Some(aggr_func)
} else {
None
}
}
impl StateMergeHelper {
/// Register the `state` functions of all supported aggregate functions.
/// Note that the `merge` function can't be registered here, as it needs to be created from the original aggregate function with the given input types.
@@ -118,6 +144,7 @@ impl StateMergeHelper {
}
/// Split an aggregate plan into two aggregate plans, one for the state function and one for the merge function.
///
pub fn split_aggr_node(aggr_plan: Aggregate) -> datafusion_common::Result<StepAggrPlan> {
let aggr = {
// Certain aggr funcs need type coercion to work correctly, so we need to analyze the plan first.
@@ -137,6 +164,15 @@ impl StateMergeHelper {
let mut lower_aggr_exprs = vec![];
let mut upper_aggr_exprs = vec![];
// Group exprs for the upper plan should refer to the lower plan's output group exprs as columns
// to avoid recomputing them.
let upper_group_exprs = aggr
.group_expr
.iter()
.map(|c| c.qualified_name())
.map(|(r, c)| Expr::Column(Column::new(r, c)))
.collect();
for aggr_expr in aggr.aggr_expr.iter() {
let Some(aggr_func) = get_aggr_func(aggr_expr) else {
return Err(datafusion_common::DataFusionError::NotImplemented(format!(
@@ -164,6 +200,7 @@ impl StateMergeHelper {
lower_aggr_exprs.push(expr);
// then create the merge function using the physical expression of the original aggregate function
let (original_phy_expr, _filter, _ordering) = create_aggregate_expr_and_maybe_filter(
aggr_expr,
aggr.input.schema(),
@@ -179,9 +216,15 @@ impl StateMergeHelper {
let arg = Expr::Column(Column::new_unqualified(lower_state_output_col_name));
let expr = AggregateFunction {
func: Arc::new(merge_func.into()),
// Notice that filter/order_by are not supported in the merge function, as they are not meaningful in the merge phase.
// Do note that the order by is only removed in the outer logical plan; the physical plan still has the order by and hence
// can create a correct accumulator with it.
params: AggregateFunctionParams {
args: vec![arg],
..aggr_func.params.clone()
distinct: aggr_func.params.distinct,
filter: None,
order_by: vec![],
null_treatment: aggr_func.params.null_treatment,
},
};
@@ -198,10 +241,18 @@ impl StateMergeHelper {
// update aggregate's output schema
let lower_plan = lower_plan.recompute_schema()?;
let mut upper = aggr.clone();
// Should only affect the two UDAFs `first_value`/`last_value`,
// which are the only ones with a meaningful order by field.
let fixed_lower_plan =
FixStateUdafOrderingAnalyzer.analyze(lower_plan, &Default::default())?;
let upper = Aggregate::try_new(
Arc::new(fixed_lower_plan.clone()),
upper_group_exprs,
upper_aggr_exprs.clone(),
)?;
let aggr_plan = LogicalPlan::Aggregate(aggr);
upper.aggr_expr = upper_aggr_exprs;
upper.input = Arc::new(lower_plan.clone());
// upper schema's output schema should be the same as the original aggregate plan's output schema
let upper_check = upper;
let upper_plan = LogicalPlan::Aggregate(upper_check).recompute_schema()?;
@@ -214,7 +265,7 @@ impl StateMergeHelper {
}
Ok(StepAggrPlan {
lower_state: lower_plan,
lower_state: fixed_lower_plan,
upper_merge: upper_plan,
})
}
@@ -225,13 +276,22 @@ impl StateMergeHelper {
pub struct StateWrapper {
inner: AggregateUDF,
name: String,
/// Defaults to empty; might get fixed up by the analyzer later.
ordering: Vec<FieldRef>,
/// Defaults to false; might get fixed up by the analyzer later.
distinct: bool,
}
impl StateWrapper {
/// `state_index`: The index of the state in the output of the state function.
pub fn new(inner: AggregateUDF) -> datafusion_common::Result<Self> {
let name = aggr_state_func_name(inner.name());
Ok(Self { inner, name })
Ok(Self {
inner,
name,
ordering: vec![],
distinct: false,
})
}
pub fn inner(&self) -> &AggregateUDF {
@@ -245,7 +305,19 @@ impl StateWrapper {
&self,
acc_args: &datafusion_expr::function::AccumulatorArgs,
) -> datafusion_common::Result<FieldRef> {
self.inner.return_field(acc_args.schema.fields())
let input_fields = acc_args
.exprs
.iter()
.map(|e| e.return_field(acc_args.schema))
.collect::<Result<Vec<_>, _>>()?;
self.inner.return_field(&input_fields).inspect_err(|e| {
common_telemetry::error!(
"StateWrapper: {:#?}\nacc_args:{:?}\nerror:{:?}",
&self,
&acc_args,
e
);
})
}
}
@@ -269,6 +341,7 @@ impl AggregateUDFImpl for StateWrapper {
};
self.inner.accumulator(acc_args)?
};
Ok(Box::new(StateAccum::new(inner, state_type)?))
}
@@ -295,11 +368,22 @@ impl AggregateUDFImpl for StateWrapper {
name: self.inner().name(),
input_fields,
return_field: self.inner.return_field(input_fields)?,
// TODO(discord9): how to get this?, probably ok?
ordering_fields: &[],
is_distinct: false,
// Those args are also needed, as they are vital to constructing the state fields correctly.
ordering_fields: &self.ordering,
is_distinct: self.distinct,
};
let state_fields = self.inner.state_fields(state_fields_args)?;
let state_fields = state_fields
.into_iter()
.map(|f| {
let mut f = f.as_ref().clone();
// Since the state can be null when there are no input rows, make all fields nullable.
f.set_nullable(true);
Arc::new(f)
})
.collect::<Vec<_>>();
let struct_field = DataType::Struct(state_fields.into());
Ok(struct_field)
}
@@ -324,7 +408,7 @@ impl AggregateUDFImpl for StateWrapper {
self.inner.signature()
}
/// Coerce types also do nothing, as optimzer should be able to already make struct types
/// Coerce types also do nothing, as optimizer should be able to already make struct types
fn coerce_types(&self, arg_types: &[DataType]) -> datafusion_common::Result<Vec<DataType>> {
self.inner.coerce_types(arg_types)
}
@@ -364,6 +448,39 @@ impl Accumulator for StateAccum {
.iter()
.map(|s| s.to_array())
.collect::<Result<Vec<_>, _>>()?;
let array_type = array
.iter()
.map(|a| a.data_type().clone())
.collect::<Vec<_>>();
let expected_type: Vec<_> = self
.state_fields
.iter()
.map(|f| f.data_type().clone())
.collect();
if array_type != expected_type {
debug!(
"State mismatch, expected: {}, got: {} for expected fields: {:?} and given array types: {:?}",
self.state_fields.len(),
array.len(),
self.state_fields,
array_type,
);
let guess_schema = array
.iter()
.enumerate()
.map(|(index, array)| {
Field::new(
format!("col_{index}[mismatch_state]").as_str(),
array.data_type().clone(),
true,
)
})
.collect::<Fields>();
let arr = StructArray::try_new(guess_schema, array, None)?;
return Ok(ScalarValue::Struct(Arc::new(arr)));
}
let struct_array = StructArray::try_new(self.state_fields.clone(), array, None)?;
Ok(ScalarValue::Struct(Arc::new(struct_array)))
}
@@ -402,7 +519,7 @@ pub struct MergeWrapper {
merge_signature: Signature,
/// The original physical expression of the aggregate function; we can't store the original aggregate function directly, as PhysicalExpr doesn't implement Any.
original_phy_expr: Arc<AggregateFunctionExpr>,
original_input_types: Vec<DataType>,
return_type: DataType,
}
impl MergeWrapper {
pub fn new(
@@ -413,13 +530,14 @@ impl MergeWrapper {
let name = aggr_merge_func_name(inner.name());
// The input type is actually a struct type holding the state fields of the original aggregate function.
let merge_signature = Signature::user_defined(datafusion_expr::Volatility::Immutable);
let return_type = inner.return_type(&original_input_types)?;
Ok(Self {
inner,
name,
merge_signature,
original_phy_expr,
original_input_types,
return_type,
})
}
@@ -471,14 +589,13 @@ impl AggregateUDFImpl for MergeWrapper {
/// so return fixed return type instead of using `arg_types` to determine the return type.
fn return_type(&self, _arg_types: &[DataType]) -> datafusion_common::Result<DataType> {
// The return type is the same as the original aggregate function's return type.
let ret_type = self.inner.return_type(&self.original_input_types)?;
Ok(ret_type)
Ok(self.return_type.clone())
}
fn signature(&self) -> &Signature {
&self.merge_signature
}
/// Coerce types also do nothing, as optimzer should be able to already make struct types
/// Coerce types also do nothing, as optimizer should be able to already make struct types
fn coerce_types(&self, arg_types: &[DataType]) -> datafusion_common::Result<Vec<DataType>> {
// just check if the arg_types are only one and is struct array
if arg_types.len() != 1 || !matches!(arg_types.first(), Some(DataType::Struct(_))) {
@@ -542,10 +659,11 @@ impl Accumulator for MergeAccum {
})?;
let fields = struct_arr.fields();
if fields != &self.state_fields {
return Err(datafusion_common::DataFusionError::Internal(format!(
"Expected state fields: {:?}, got: {:?}",
debug!(
"State fields mismatch, expected: {:?}, got: {:?}",
self.state_fields, fields
)));
);
// A state fields mismatch might still be acceptable to DataFusion, so continue.
}
// now fields should be the same, so we can merge the batch
@@ -562,6 +680,3 @@ impl Accumulator for MergeAccum {
self.inner.state()
}
}
#[cfg(test)]
mod tests;

View File

@@ -0,0 +1,189 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use common_telemetry::debug;
use datafusion::config::ConfigOptions;
use datafusion::optimizer::AnalyzerRule;
use datafusion_common::tree_node::{Transformed, TreeNode, TreeNodeRewriter};
use datafusion_expr::{AggregateUDF, Expr, ExprSchemable, LogicalPlan};
use crate::aggrs::aggr_wrapper::StateWrapper;
/// Traverse the plan, find all `__<aggr_name>_state` calls and fix their ordering fields
/// if their input aggr has an order by. This is currently only useful for the `first_value` and `last_value` UDAFs.
///
/// Should be applied to the datanode's query engine.
/// TODO(discord9): find a proper way to extend substrait's serde ability to carry more info for custom UDAFs
#[derive(Debug, Default)]
pub struct FixStateUdafOrderingAnalyzer;
impl AnalyzerRule for FixStateUdafOrderingAnalyzer {
fn name(&self) -> &str {
"FixStateUdafOrderingAnalyzer"
}
fn analyze(
&self,
plan: LogicalPlan,
_config: &ConfigOptions,
) -> datafusion_common::Result<LogicalPlan> {
plan.rewrite_with_subqueries(&mut FixOrderingRewriter::new(true))
.map(|t| t.data)
}
}
/// Traverse the plan, find all `__<aggr_name>_state` calls and remove their ordering fields.
/// This is currently only useful for the `first_value` and `last_value` UDAFs when they need to be encoded to substrait.
///
#[derive(Debug, Default)]
pub struct UnFixStateUdafOrderingAnalyzer;
impl AnalyzerRule for UnFixStateUdafOrderingAnalyzer {
fn name(&self) -> &str {
"UnFixStateUdafOrderingAnalyzer"
}
fn analyze(
&self,
plan: LogicalPlan,
_config: &ConfigOptions,
) -> datafusion_common::Result<LogicalPlan> {
plan.rewrite_with_subqueries(&mut FixOrderingRewriter::new(false))
.map(|t| t.data)
}
}
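Usage follows the standard AnalyzerRule flow; a sketch (the tests later in this diff apply the rule the same way):

fn fix_then_unfix(plan: LogicalPlan) -> datafusion_common::Result<LogicalPlan> {
    // Fix the state-udaf ordering info on the datanode side...
    let fixed = FixStateUdafOrderingAnalyzer.analyze(plan, &ConfigOptions::default())?;
    // ...and strip it again when the plan has to be encoded to substrait.
    UnFixStateUdafOrderingAnalyzer.analyze(fixed, &ConfigOptions::default())
}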
struct FixOrderingRewriter {
/// Once fixed, mark dirty and always recompute the schema from the bottom up.
is_dirty: bool,
/// if true, will add the ordering field from outer aggr expr
/// if false, will remove the ordering field
is_fix: bool,
}
impl FixOrderingRewriter {
pub fn new(is_fix: bool) -> Self {
Self {
is_dirty: false,
is_fix,
}
}
}
impl TreeNodeRewriter for FixOrderingRewriter {
type Node = LogicalPlan;
/// Find all `__<aggr_name>_state` calls and fix their ordering fields
/// if their input aggr has an order by.
fn f_up(
&mut self,
node: Self::Node,
) -> datafusion_common::Result<datafusion_common::tree_node::Transformed<Self::Node>> {
let LogicalPlan::Aggregate(mut aggregate) = node else {
return if self.is_dirty {
let node = node.recompute_schema()?;
Ok(Transformed::yes(node))
} else {
Ok(Transformed::no(node))
};
};
// find and rewrite every state udaf expr in this aggregate
for aggr_expr in &mut aggregate.aggr_expr {
let new_aggr_expr = aggr_expr
.clone()
.transform_up(|expr| rewrite_expr(expr, &aggregate.input, self.is_fix))?;
if new_aggr_expr.transformed {
*aggr_expr = new_aggr_expr.data;
self.is_dirty = true;
}
}
if self.is_dirty {
let node = LogicalPlan::Aggregate(aggregate).recompute_schema()?;
debug!(
"FixStateUdafOrderingAnalyzer: plan schema's field changed to {:?}",
node.schema().fields()
);
Ok(Transformed::yes(node))
} else {
Ok(Transformed::no(LogicalPlan::Aggregate(aggregate)))
}
}
}
/// First locate the aggr node in the expr,
/// as it could be nested like alias(aggr(sort)).
/// If the contained aggr expr has an order by and the function is a state udaf wrapper,
/// then we need to fix the ordering field of the state udaf
/// to match the aggr expr.
fn rewrite_expr(
expr: Expr,
aggregate_input: &Arc<LogicalPlan>,
is_fix: bool,
) -> Result<Transformed<Expr>, datafusion_common::DataFusionError> {
let Expr::AggregateFunction(aggregate_function) = expr else {
return Ok(Transformed::no(expr));
};
let Some(old_state_wrapper) = aggregate_function
.func
.inner()
.as_any()
.downcast_ref::<StateWrapper>()
else {
return Ok(Transformed::no(Expr::AggregateFunction(aggregate_function)));
};
let mut state_wrapper = old_state_wrapper.clone();
if is_fix {
// then always fix the ordering field&distinct flag and more
let order_by = aggregate_function.params.order_by.clone();
let ordering_fields: Vec<_> = order_by
.iter()
.map(|sort_expr| {
sort_expr
.expr
.to_field(&aggregate_input.schema())
.map(|(_, f)| f)
})
.collect::<datafusion_common::Result<Vec<_>>>()?;
let distinct = aggregate_function.params.distinct;
// fixing up
state_wrapper.ordering = ordering_fields;
state_wrapper.distinct = distinct;
} else {
// remove the ordering field & distinct flag
state_wrapper.ordering = vec![];
state_wrapper.distinct = false;
}
debug!(
"FixStateUdafOrderingAnalyzer: fix state udaf from {old_state_wrapper:?} to {:?}",
state_wrapper
);
let mut aggregate_function = aggregate_function;
aggregate_function.func = Arc::new(AggregateUDF::new_from_impl(state_wrapper));
Ok(Transformed::yes(Expr::AggregateFunction(
aggregate_function,
)))
}

View File

@@ -17,13 +17,17 @@ use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll};
use arrow::array::{ArrayRef, Float64Array, Int64Array, UInt64Array};
use arrow::array::{
ArrayRef, BooleanArray, Float64Array, Int64Array, TimestampMillisecondArray, UInt64Array,
};
use arrow::record_batch::RecordBatch;
use arrow_schema::SchemaRef;
use common_telemetry::init_default_ut_logging;
use datafusion::catalog::{Session, TableProvider};
use datafusion::datasource::DefaultTableSource;
use datafusion::execution::{RecordBatchStream, SendableRecordBatchStream, TaskContext};
use datafusion::functions_aggregate::average::avg_udaf;
use datafusion::functions_aggregate::count::count_udaf;
use datafusion::functions_aggregate::sum::sum_udaf;
use datafusion::optimizer::AnalyzerRule;
use datafusion::optimizer::analyzer::type_coercion::TypeCoercion;
@@ -32,10 +36,14 @@ use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType};
use datafusion::physical_plan::{DisplayAs, DisplayFormatType, ExecutionPlan, PlanProperties};
use datafusion::physical_planner::{DefaultPhysicalPlanner, PhysicalPlanner};
use datafusion::prelude::SessionContext;
use datafusion_common::arrow::array::AsArray;
use datafusion_common::arrow::datatypes::{Float64Type, UInt64Type};
use datafusion_common::{Column, TableReference};
use datafusion_expr::expr::AggregateFunction;
use datafusion_expr::sqlparser::ast::NullTreatment;
use datafusion_expr::{Aggregate, Expr, LogicalPlan, SortExpr, TableScan, lit};
use datafusion_expr::{
Aggregate, ColumnarValue, Expr, LogicalPlan, ScalarFunctionArgs, SortExpr, TableScan, lit,
};
use datafusion_physical_expr::aggregate::AggregateExprBuilder;
use datafusion_physical_expr::{EquivalenceProperties, Partitioning};
use datatypes::arrow_array::StringArray;
@@ -158,6 +166,20 @@ impl DummyTableProvider {
record_batch: Mutex::new(record_batch),
}
}
pub fn with_ts(record_batch: Option<RecordBatch>) -> Self {
Self {
schema: Arc::new(arrow_schema::Schema::new(vec![
Field::new("number", DataType::Int64, true),
Field::new(
"ts",
DataType::Timestamp(arrow_schema::TimeUnit::Millisecond, None),
false,
),
])),
record_batch: Mutex::new(record_batch),
}
}
}
impl Default for DummyTableProvider {
@@ -220,6 +242,21 @@ fn dummy_table_scan() -> LogicalPlan {
)
}
fn dummy_table_scan_with_ts() -> LogicalPlan {
let table_provider = Arc::new(DummyTableProvider::with_ts(None));
let table_source = DefaultTableSource::new(table_provider);
LogicalPlan::TableScan(
TableScan::try_new(
TableReference::bare("Number"),
Arc::new(table_source),
None,
vec![],
None,
)
.unwrap(),
)
}
#[tokio::test]
async fn test_sum_udaf() {
let ctx = SessionContext::new();
@@ -541,6 +578,221 @@ async fn test_avg_udaf() {
assert_eq!(merge_eval_res, ScalarValue::Float64(Some(132. / 45_f64)));
}
#[tokio::test]
async fn test_last_value_order_by_udaf() {
init_default_ut_logging();
let ctx = SessionContext::new();
let last_value = datafusion::functions_aggregate::first_last::last_value_udaf();
let last_value = (*last_value).clone();
let original_aggr = Aggregate::try_new(
Arc::new(dummy_table_scan_with_ts()),
vec![],
vec![Expr::AggregateFunction(AggregateFunction::new_udf(
Arc::new(last_value.clone()),
vec![Expr::Column(Column::new_unqualified("ts"))],
false,
None,
vec![datafusion_expr::expr::Sort::new(
Expr::Column(Column::new_unqualified("ts")),
true,
true,
)],
None,
))],
)
.unwrap();
let res = StateMergeHelper::split_aggr_node(original_aggr).unwrap();
let state_func: Arc<AggregateUDF> =
Arc::new(StateWrapper::new(last_value.clone()).unwrap().into());
let expected_aggr_state_plan = LogicalPlan::Aggregate(
Aggregate::try_new(
Arc::new(dummy_table_scan_with_ts()),
vec![],
vec![Expr::AggregateFunction(AggregateFunction::new_udf(
state_func,
vec![Expr::Column(Column::new_unqualified("ts"))],
false,
None,
vec![datafusion_expr::expr::Sort::new(
Expr::Column(Column::new_unqualified("ts")),
true,
true,
)],
None,
))],
)
.unwrap(),
);
// fix the ordering & distinct info of the state udaf, as they are not set in the wrapper.
let fixed_aggr_state_plan = FixStateUdafOrderingAnalyzer {}
.analyze(expected_aggr_state_plan.clone(), &Default::default())
.unwrap();
assert_eq!(&res.lower_state, &fixed_aggr_state_plan);
// schema is the state fields of the last_value udaf
assert_eq!(
res.lower_state.schema().as_arrow(),
&arrow_schema::Schema::new(vec![Field::new(
"__last_value_state(ts) ORDER BY [ts ASC NULLS FIRST]",
DataType::Struct(
vec![
Field::new(
"last_value[last_value]",
DataType::Timestamp(arrow_schema::TimeUnit::Millisecond, None),
true
),
Field::new(
"ts",
DataType::Timestamp(arrow_schema::TimeUnit::Millisecond, None),
true
), // ordering field is added to state fields too
Field::new("is_set", DataType::Boolean, true)
]
.into()
),
true,
)])
);
let phy_aggr_state_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&fixed_aggr_state_plan, &ctx.state())
.await
.unwrap();
let aggr_exec = phy_aggr_state_plan
.as_any()
.downcast_ref::<AggregateExec>()
.unwrap();
let aggr_func_expr = &aggr_exec.aggr_expr()[0];
let expected_merge_fn = MergeWrapper::new(
last_value.clone(),
aggr_func_expr.clone(),
vec![DataType::Timestamp(
arrow_schema::TimeUnit::Millisecond,
None,
)],
)
.unwrap();
let expected_merge_plan = LogicalPlan::Aggregate(
Aggregate::try_new(
Arc::new(fixed_aggr_state_plan.clone()),
vec![],
vec![
Expr::AggregateFunction(AggregateFunction::new_udf(
Arc::new(expected_merge_fn.into()),
vec![Expr::Column(Column::new_unqualified(
"__last_value_state(ts) ORDER BY [ts ASC NULLS FIRST]",
))],
false,
None,
vec![],
None,
))
.alias("last_value(ts) ORDER BY [ts ASC NULLS FIRST]"),
],
)
.unwrap(),
);
assert_eq!(&res.upper_merge, &expected_merge_plan);
let mut state_accum = aggr_func_expr.create_accumulator().unwrap();
// evaluate the state function
let input = Arc::new(TimestampMillisecondArray::from(vec![
Some(1),
Some(2),
None,
Some(3),
])) as arrow::array::ArrayRef;
// Notice: since sorting exists, the input must have two columns, one for the value and one for the ordering.
let values = vec![input.clone(), input];
state_accum.update_batch(&values).unwrap();
let state = state_accum.state().unwrap();
assert_eq!(state.len(), 3);
assert_eq!(state[0], ScalarValue::TimestampMillisecond(Some(3), None));
assert_eq!(state[1], ScalarValue::TimestampMillisecond(Some(3), None));
assert_eq!(state[2], ScalarValue::Boolean(Some(true)));
let eval_res = state_accum.evaluate().unwrap();
let expected = Arc::new(
StructArray::try_new(
vec![
Field::new(
"last_value[last_value]",
DataType::Timestamp(arrow_schema::TimeUnit::Millisecond, None),
true,
),
Field::new(
"ts",
DataType::Timestamp(arrow_schema::TimeUnit::Millisecond, None),
true,
),
Field::new("is_set", DataType::Boolean, true),
]
.into(),
vec![
Arc::new(TimestampMillisecondArray::from(vec![Some(3)])),
Arc::new(TimestampMillisecondArray::from(vec![Some(3)])),
Arc::new(BooleanArray::from(vec![Some(true)])),
],
None,
)
.unwrap(),
);
assert_eq!(eval_res, ScalarValue::Struct(expected));
let phy_aggr_merge_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&res.upper_merge, &ctx.state())
.await
.unwrap();
let aggr_exec = phy_aggr_merge_plan
.as_any()
.downcast_ref::<AggregateExec>()
.unwrap();
let aggr_func_expr = &aggr_exec.aggr_expr()[0];
let mut merge_accum = aggr_func_expr.create_accumulator().unwrap();
let merge_input = vec![
Arc::new(Int64Array::from(vec![Some(3), Some(4)])) as arrow::array::ArrayRef,
Arc::new(Int64Array::from(vec![Some(3), Some(4)])),
Arc::new(BooleanArray::from(vec![Some(true), Some(true)])),
];
let merge_input_struct_arr = StructArray::try_new(
vec![
Field::new("last_value[last_value]", DataType::Int64, true),
Field::new("number", DataType::Int64, true),
Field::new("is_set", DataType::Boolean, true),
]
.into(),
merge_input,
None,
)
.unwrap();
merge_accum
.update_batch(&[Arc::new(merge_input_struct_arr)])
.unwrap();
let merge_state = merge_accum.state().unwrap();
assert_eq!(merge_state.len(), 3);
assert_eq!(merge_state[0], ScalarValue::Int64(Some(4)));
assert_eq!(merge_state[1], ScalarValue::Int64(Some(4)));
assert_eq!(merge_state[2], ScalarValue::Boolean(Some(true)));
let merge_eval_res = merge_accum.evaluate().unwrap();
// the merge function returns the last value, which is 4
assert_eq!(merge_eval_res, ScalarValue::Int64(Some(4)));
}
/// For testing whether the UDAF state fields are correctly implemented,
/// especially for our own custom UDAFs' state fields,
/// by comparing eval results before and after the split into state/merge functions.
@@ -552,6 +804,7 @@ async fn test_udaf_correct_eval_result() {
input_schema: SchemaRef,
input: Vec<ArrayRef>,
expected_output: Option<ScalarValue>,
// extra check function on the final array result
expected_fn: Option<ExpectedFn>,
distinct: bool,
filter: Option<Box<Expr>>,
@@ -582,6 +835,27 @@ async fn test_udaf_correct_eval_result() {
order_by: vec![],
null_treatment: None,
},
TestCase {
func: count_udaf(),
input_schema: Arc::new(arrow_schema::Schema::new(vec![Field::new(
"str_val",
DataType::Utf8,
true,
)])),
args: vec![Expr::Column(Column::new_unqualified("str_val"))],
input: vec![Arc::new(StringArray::from(vec![
Some("hello"),
Some("world"),
None,
Some("what"),
]))],
expected_output: Some(ScalarValue::Int64(Some(3))),
expected_fn: None,
distinct: false,
filter: None,
order_by: vec![],
null_treatment: None,
},
TestCase {
func: avg_udaf(),
input_schema: Arc::new(arrow_schema::Schema::new(vec![Field::new(
@@ -649,14 +923,20 @@ async fn test_udaf_correct_eval_result() {
expected_output: None,
expected_fn: Some(|arr| {
let percent = ScalarValue::Float64(Some(0.5)).to_array().unwrap();
let percent = datatypes::vectors::Helper::try_into_vector(percent).unwrap();
let state = datatypes::vectors::Helper::try_into_vector(arr).unwrap();
let udd_calc = UddSketchCalcFunction;
let percent = ColumnarValue::Array(percent);
let state = ColumnarValue::Array(arr);
let udd_calc = UddSketchCalcFunction::default();
let res = udd_calc
.eval(&Default::default(), &[percent, state])
.invoke_with_args(ScalarFunctionArgs {
args: vec![percent, state],
arg_fields: vec![],
number_rows: 1,
return_field: Arc::new(Field::new("x", DataType::Float64, false)),
config_options: Arc::new(Default::default()),
})
.unwrap();
let binding = res.to_arrow_array();
let res_arr = binding.as_any().downcast_ref::<Float64Array>().unwrap();
let binding = res.to_array(1).unwrap();
let res_arr = binding.as_primitive::<Float64Type>();
assert!(res_arr.len() == 1);
assert!((res_arr.value(0) - 2.856578984907706f64).abs() <= f64::EPSILON);
true
@@ -683,11 +963,20 @@ async fn test_udaf_correct_eval_result() {
]))],
expected_output: None,
expected_fn: Some(|arr| {
let state = datatypes::vectors::Helper::try_into_vector(arr).unwrap();
let hll_calc = HllCalcFunction;
let res = hll_calc.eval(&Default::default(), &[state]).unwrap();
let binding = res.to_arrow_array();
let res_arr = binding.as_any().downcast_ref::<UInt64Array>().unwrap();
let number_rows = arr.len();
let state = ColumnarValue::Array(arr);
let hll_calc = HllCalcFunction::default();
let res = hll_calc
.invoke_with_args(ScalarFunctionArgs {
args: vec![state],
arg_fields: vec![],
number_rows,
return_field: Arc::new(Field::new("x", DataType::UInt64, false)),
config_options: Arc::new(Default::default()),
})
.unwrap();
let binding = res.to_array(1).unwrap();
let res_arr = binding.as_primitive::<UInt64Type>();
assert!(res_arr.len() == 1);
assert_eq!(res_arr.value(0), 3);
true

View File

@@ -12,13 +12,17 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::fmt;
use std::fmt::{Debug, Formatter};
use std::sync::Arc;
use common_query::error::Result;
use datafusion::arrow::datatypes::DataType;
use datafusion_expr::Signature;
use datatypes::vectors::VectorRef;
use datafusion::logical_expr::ColumnarValue;
use datafusion_common::DataFusionError;
use datafusion_common::arrow::array::ArrayRef;
use datafusion_common::config::{ConfigEntry, ConfigExtension, ExtensionOptions};
use datafusion_expr::{ScalarFunctionArgs, Signature};
use session::context::{QueryContextBuilder, QueryContextRef};
use crate::state::FunctionState;
@@ -56,6 +60,42 @@ impl Default for FunctionContext {
}
}
impl ExtensionOptions for FunctionContext {
fn as_any(&self) -> &dyn Any {
self
}
fn as_any_mut(&mut self) -> &mut dyn Any {
self
}
fn cloned(&self) -> Box<dyn ExtensionOptions> {
Box::new(self.clone())
}
fn set(&mut self, _: &str, _: &str) -> datafusion_common::Result<()> {
Err(DataFusionError::NotImplemented(
"set options for `FunctionContext`".to_string(),
))
}
fn entries(&self) -> Vec<ConfigEntry> {
vec![]
}
}
impl Debug for FunctionContext {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("FunctionContext")
.field("query_ctx", &self.query_ctx)
.finish()
}
}
impl ConfigExtension for FunctionContext {
const PREFIX: &'static str = "FunctionContext";
}
/// Scalar function trait, modified from Databend to adapt to DataFusion.
/// TODO(dennis): optimize the function by its features such as monotonicity etc.
pub trait Function: fmt::Display + Sync + Send {
@@ -63,13 +103,15 @@ pub trait Function: fmt::Display + Sync + Send {
fn name(&self) -> &str;
/// The returned data type of function execution.
fn return_type(&self, input_types: &[DataType]) -> Result<DataType>;
fn return_type(&self, input_types: &[DataType]) -> datafusion_common::Result<DataType>;
/// The signature of function.
fn signature(&self) -> Signature;
fn signature(&self) -> &Signature;
/// Evaluate the function, e.g. run/execute the function.
fn eval(&self, ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef>;
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue>;
fn aliases(&self) -> &[String] {
&[]
@@ -77,3 +119,26 @@ pub trait Function: fmt::Display + Sync + Send {
}
pub type FunctionRef = Arc<dyn Function>;
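To make the reshaped trait concrete, here is a minimal sketch of a hypothetical scalar function written against it. Everything about `EchoRowsFunction` (name, return type, behaviour) is invented for illustration; the sketch assumes it lives in this module so that `Function`, `extract_args`, and the imports above are in scope.
// Illustrative sketch only, not part of this change set.
#[derive(Clone, Debug)]
struct EchoRowsFunction {
    signature: Signature,
}

impl Default for EchoRowsFunction {
    fn default() -> Self {
        Self {
            // accept exactly one argument of any type
            signature: Signature::any(1, datafusion_expr::Volatility::Immutable),
        }
    }
}

impl fmt::Display for EchoRowsFunction {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "ECHO_ROWS")
    }
}

impl Function for EchoRowsFunction {
    fn name(&self) -> &str {
        "echo_rows"
    }

    fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
        Ok(DataType::UInt64)
    }

    fn signature(&self) -> &Signature {
        &self.signature
    }

    fn invoke_with_args(
        &self,
        args: ScalarFunctionArgs,
    ) -> datafusion_common::Result<ColumnarValue> {
        use datafusion_common::arrow::array::{Array, UInt64Array};
        // take the single argument as an Arrow array and echo its length for every row
        let [arg0] = extract_args("echo_rows", &args)?;
        let len = arg0.len() as u64;
        let result = UInt64Array::from(vec![len; args.number_rows]);
        Ok(ColumnarValue::Array(Arc::new(result)))
    }
}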
/// Find the [FunctionContext] in the [ScalarFunctionArgs]. The [FunctionContext] is set when the
/// DataFusion session context is created, and is passed all the way down to these args by
/// DataFusion.
pub(crate) fn find_function_context(
args: &ScalarFunctionArgs,
) -> datafusion_common::Result<&FunctionContext> {
let Some(x) = args.config_options.extensions.get::<FunctionContext>() else {
return Err(DataFusionError::Execution(
"function context is not set".to_string(),
));
};
Ok(x)
}
/// Extract UDF arguments (as Arrow's [ArrayRef]) from [ScalarFunctionArgs] directly.
pub(crate) fn extract_args<const N: usize>(
name: &str,
args: &ScalarFunctionArgs,
) -> datafusion_common::Result<[ArrayRef; N]> {
ColumnarValue::values_to_arrays(&args.args)
.and_then(|x| datafusion_common::utils::take_function_args(name, x))
}
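For orientation, a small hypothetical sketch of how these two helpers typically combine inside an `invoke_with_args` body, plus the session-side lines (mirroring the test setup shown later in this diff) that put the context into the args in the first place. The names `example_invoke`, `lhs`, and `_rhs` are illustrative only.
// Illustrative sketch only.
fn example_invoke(args: ScalarFunctionArgs) -> datafusion_common::Result<ColumnarValue> {
    // the context registered as a ConfigOptions extension at session setup
    let ctx = find_function_context(&args)?;
    let _timezone = ctx.query_ctx.timezone();

    // exactly two Arrow arrays are expected; anything else becomes a DataFusion error
    let [lhs, _rhs] = extract_args("example", &args)?;
    Ok(ColumnarValue::Array(lhs))
}

// Session side, e.g.:
//     let mut config_options = datafusion_common::config::ConfigOptions::default();
//     config_options.extensions.insert(FunctionContext::default());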

View File

@@ -52,9 +52,7 @@ impl From<ScalarUDF> for ScalarFunctionFactory {
impl From<FunctionRef> for ScalarFunctionFactory {
fn from(func: FunctionRef) -> Self {
let name = func.name().to_string();
let func = Arc::new(move |ctx: FunctionContext| {
create_udf(func.clone(), ctx.query_ctx, ctx.state)
});
let func = Arc::new(move |_| create_udf(func.clone()));
Self {
name,
factory: func,

View File

@@ -190,7 +190,7 @@ mod tests {
assert!(registry.get_function("test_and").is_none());
assert!(registry.scalar_functions().is_empty());
registry.register_scalar(TestAndFunction);
registry.register_scalar(TestAndFunction::default());
let _ = registry.get_function("test_and").unwrap();
assert_eq!(1, registry.scalar_functions().len());
}

View File

@@ -19,8 +19,7 @@ use async_trait::async_trait;
use catalog::CatalogManagerRef;
use common_base::AffectedRows;
use common_meta::rpc::procedure::{
AddRegionFollowerRequest, MigrateRegionRequest, ProcedureStateResponse,
RemoveRegionFollowerRequest,
ManageRegionFollowerRequest, MigrateRegionRequest, ProcedureStateResponse,
};
use common_query::Output;
use common_query::error::Result;
@@ -72,11 +71,8 @@ pub trait ProcedureServiceHandler: Send + Sync {
/// Query the procedure's state by its id
async fn query_procedure_state(&self, pid: &str) -> Result<ProcedureStateResponse>;
/// Add a region follower to a region.
async fn add_region_follower(&self, request: AddRegionFollowerRequest) -> Result<()>;
/// Remove a region follower from a region.
async fn remove_region_follower(&self, request: RemoveRegionFollowerRequest) -> Result<()>;
/// Manage (add or remove) a region follower for a region.
async fn manage_region_follower(&self, request: ManageRegionFollowerRequest) -> Result<()>;
/// Get the catalog manager
fn catalog_manager(&self) -> &CatalogManagerRef;

View File

@@ -96,6 +96,41 @@ pub fn get_string_from_params<'a>(
Ok(s)
}
macro_rules! with_match_timestamp_types {
($data_type:expr, | $_t:tt $T:ident | $body:tt) => {{
macro_rules! __with_ty__ {
( $_t $T:ident ) => {
$body
};
}
use datafusion_common::DataFusionError;
use datafusion_common::arrow::datatypes::{
TimeUnit, TimestampMicrosecondType, TimestampMillisecondType, TimestampNanosecondType,
TimestampSecondType,
};
match $data_type {
DataType::Timestamp(TimeUnit::Second, _) => Ok(__with_ty__! { TimestampSecondType }),
DataType::Timestamp(TimeUnit::Millisecond, _) => {
Ok(__with_ty__! { TimestampMillisecondType })
}
DataType::Timestamp(TimeUnit::Microsecond, _) => {
Ok(__with_ty__! { TimestampMicrosecondType })
}
DataType::Timestamp(TimeUnit::Nanosecond, _) => {
Ok(__with_ty__! { TimestampNanosecondType })
}
_ => Err(DataFusionError::Execution(format!(
"not expected data type: '{}'",
$data_type
))),
}
}};
}
pub(crate) use with_match_timestamp_types;
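As a quick illustration of the dispatch, a hedged sketch assuming `array` is an `ArrayRef` whose data type is some `DataType::Timestamp(..)`: the macro binds `$S` to the matching Arrow timestamp type and wraps the block's value in `Ok`. The function name is invented.
// Illustrative sketch only: sum the raw i64 values of a timestamp array of any unit.
fn sum_timestamps(
    array: &datafusion_common::arrow::array::ArrayRef,
) -> datafusion_common::Result<i64> {
    use datafusion_common::arrow::array::{Array, AsArray};
    with_match_timestamp_types!(array.data_type(), |$S| {
        array.as_primitive::<$S>().iter().flatten().sum::<i64>()
    })
}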
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -26,8 +26,8 @@ pub(crate) struct DateFunction;
impl DateFunction {
pub fn register(registry: &FunctionRegistry) {
registry.register_scalar(DateAddFunction);
registry.register_scalar(DateSubFunction);
registry.register_scalar(DateFormatFunction);
registry.register_scalar(DateAddFunction::default());
registry.register_scalar(DateSubFunction::default());
registry.register_scalar(DateFormatFunction::default());
}
}

View File

@@ -14,21 +14,44 @@
use std::fmt;
use common_query::error::{ArrowComputeSnafu, IntoVectorSnafu, InvalidFuncArgsSnafu, Result};
use datafusion_expr::Signature;
use common_query::error::ArrowComputeSnafu;
use datafusion::logical_expr::ColumnarValue;
use datafusion_expr::{ScalarFunctionArgs, Signature};
use datatypes::arrow::compute::kernels::numeric;
use datatypes::arrow::datatypes::{DataType, IntervalUnit, TimeUnit};
use datatypes::vectors::{Helper, VectorRef};
use snafu::{ResultExt, ensure};
use snafu::ResultExt;
use crate::function::{Function, FunctionContext};
use crate::function::{Function, extract_args};
use crate::helper;
/// A function that adds an interval value to Timestamp or Date values and returns the result.
/// The implementation of the datetime type is based on Date64, which is incorrect, so this function
/// doesn't support the datetime type.
#[derive(Clone, Debug, Default)]
pub struct DateAddFunction;
#[derive(Clone, Debug)]
pub(crate) struct DateAddFunction {
signature: Signature,
}
impl Default for DateAddFunction {
fn default() -> Self {
Self {
signature: helper::one_of_sigs2(
vec![
DataType::Date32,
DataType::Timestamp(TimeUnit::Second, None),
DataType::Timestamp(TimeUnit::Millisecond, None),
DataType::Timestamp(TimeUnit::Microsecond, None),
DataType::Timestamp(TimeUnit::Nanosecond, None),
],
vec![
DataType::Interval(IntervalUnit::MonthDayNano),
DataType::Interval(IntervalUnit::YearMonth),
DataType::Interval(IntervalUnit::DayTime),
],
),
}
}
}
const NAME: &str = "date_add";
@@ -37,46 +60,22 @@ impl Function for DateAddFunction {
NAME
}
fn return_type(&self, input_types: &[DataType]) -> Result<DataType> {
fn return_type(&self, input_types: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(input_types[0].clone())
}
fn signature(&self) -> Signature {
helper::one_of_sigs2(
vec![
DataType::Date32,
DataType::Timestamp(TimeUnit::Second, None),
DataType::Timestamp(TimeUnit::Millisecond, None),
DataType::Timestamp(TimeUnit::Microsecond, None),
DataType::Timestamp(TimeUnit::Nanosecond, None),
],
vec![
DataType::Interval(IntervalUnit::MonthDayNano),
DataType::Interval(IntervalUnit::YearMonth),
DataType::Interval(IntervalUnit::DayTime),
],
)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 2, have: {}",
columns.len()
),
}
);
let left = columns[0].to_arrow_array();
let right = columns[1].to_arrow_array();
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [left, right] = extract_args(self.name(), &args)?;
let result = numeric::add(&left, &right).context(ArrowComputeSnafu)?;
let arrow_type = result.data_type().clone();
Helper::try_into_vector(result).context(IntoVectorSnafu {
data_type: arrow_type,
})
Ok(ColumnarValue::Array(result))
}
}
@@ -90,18 +89,20 @@ impl fmt::Display for DateAddFunction {
mod tests {
use std::sync::Arc;
use datafusion_expr::{TypeSignature, Volatility};
use datatypes::arrow::datatypes::IntervalDayTime;
use datatypes::value::Value;
use datatypes::vectors::{
DateVector, IntervalDayTimeVector, IntervalYearMonthVector, TimestampSecondVector,
use arrow_schema::Field;
use datafusion::arrow::array::{
Array, AsArray, Date32Array, IntervalDayTimeArray, IntervalYearMonthArray,
TimestampSecondArray,
};
use datafusion::arrow::datatypes::{Date32Type, IntervalDayTime, TimestampSecondType};
use datafusion_common::config::ConfigOptions;
use datafusion_expr::{TypeSignature, Volatility};
use super::{DateAddFunction, *};
#[test]
fn test_date_add_misc() {
let f = DateAddFunction;
let f = DateAddFunction::default();
assert_eq!("date_add", f.name());
assert_eq!(
DataType::Timestamp(TimeUnit::Microsecond, None),
@@ -130,7 +131,7 @@ mod tests {
#[test]
fn test_timestamp_date_add() {
let f = DateAddFunction;
let f = DateAddFunction::default();
let times = vec![Some(123), None, Some(42), None];
// Intervals in milliseconds
@@ -142,57 +143,81 @@ mod tests {
];
let results = [Some(124), None, Some(45), None];
let time_vector = TimestampSecondVector::from(times.clone());
let interval_vector = IntervalDayTimeVector::from_vec(intervals);
let args: Vec<VectorRef> = vec![Arc::new(time_vector), Arc::new(interval_vector)];
let vector = f.eval(&FunctionContext::default(), &args).unwrap();
let args = vec![
ColumnarValue::Array(Arc::new(TimestampSecondArray::from(times.clone()))),
ColumnarValue::Array(Arc::new(IntervalDayTimeArray::from(intervals))),
];
let vector = f
.invoke_with_args(ScalarFunctionArgs {
args,
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new(
"x",
DataType::Timestamp(TimeUnit::Second, None),
true,
)),
config_options: Arc::new(ConfigOptions::new()),
})
.and_then(|v| ColumnarValue::values_to_arrays(&[v]))
.map(|mut a| a.remove(0))
.unwrap();
let vector = vector.as_primitive::<TimestampSecondType>();
assert_eq!(4, vector.len());
for (i, _t) in times.iter().enumerate() {
let v = vector.get(i);
let result = results.get(i).unwrap();
if result.is_none() {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::Timestamp(ts) => {
assert_eq!(ts.value(), result.unwrap());
}
_ => unreachable!(),
if let Some(x) = result {
assert!(vector.is_valid(i));
assert_eq!(vector.value(i), *x);
} else {
assert!(vector.is_null(i));
}
}
}
#[test]
fn test_date_date_add() {
let f = DateAddFunction;
let f = DateAddFunction::default();
let dates = vec![Some(123), None, Some(42), None];
// Intervals in months
let intervals = vec![1, 2, 3, 1];
let results = [Some(154), None, Some(131), None];
let date_vector = DateVector::from(dates.clone());
let interval_vector = IntervalYearMonthVector::from_vec(intervals);
let args: Vec<VectorRef> = vec![Arc::new(date_vector), Arc::new(interval_vector)];
let vector = f.eval(&FunctionContext::default(), &args).unwrap();
let args = vec![
ColumnarValue::Array(Arc::new(Date32Array::from(dates.clone()))),
ColumnarValue::Array(Arc::new(IntervalYearMonthArray::from(intervals))),
];
let vector = f
.invoke_with_args(ScalarFunctionArgs {
args,
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new(
"x",
DataType::Timestamp(TimeUnit::Second, None),
true,
)),
config_options: Arc::new(ConfigOptions::new()),
})
.and_then(|v| ColumnarValue::values_to_arrays(&[v]))
.map(|mut a| a.remove(0))
.unwrap();
let vector = vector.as_primitive::<Date32Type>();
assert_eq!(4, vector.len());
for (i, _t) in dates.iter().enumerate() {
let v = vector.get(i);
let result = results.get(i).unwrap();
if result.is_none() {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::Date(date) => {
assert_eq!(date.val(), result.unwrap());
}
_ => unreachable!(),
if let Some(x) = result {
assert!(vector.is_valid(i));
assert_eq!(vector.value(i), *x);
} else {
assert!(vector.is_null(i));
}
}
}

View File

@@ -13,108 +13,116 @@
// limitations under the License.
use std::fmt;
use std::sync::Arc;
use common_error::ext::BoxedError;
use common_query::error::{self, InvalidFuncArgsSnafu, Result, UnsupportedInputDataTypeSnafu};
use datafusion_expr::Signature;
use datatypes::arrow::datatypes::{DataType, TimeUnit};
use datatypes::prelude::{ConcreteDataType, MutableVector, ScalarVectorBuilder};
use datatypes::vectors::{StringVectorBuilder, VectorRef};
use snafu::{ResultExt, ensure};
use common_query::error;
use common_time::{Date, Timestamp};
use datafusion_common::DataFusionError;
use datafusion_common::arrow::array::{Array, AsArray, StringViewBuilder};
use datafusion_common::arrow::datatypes::{ArrowTimestampType, DataType, Date32Type, TimeUnit};
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature};
use snafu::ResultExt;
use crate::function::{Function, FunctionContext};
use crate::function::{Function, extract_args, find_function_context};
use crate::helper;
use crate::helper::with_match_timestamp_types;
/// A function that formats a timestamp/date/datetime into a string using the given format
#[derive(Clone, Debug, Default)]
pub struct DateFormatFunction;
#[derive(Clone, Debug)]
pub(crate) struct DateFormatFunction {
signature: Signature,
}
const NAME: &str = "date_format";
impl Default for DateFormatFunction {
fn default() -> Self {
Self {
signature: helper::one_of_sigs2(
vec![
DataType::Date32,
DataType::Timestamp(TimeUnit::Second, None),
DataType::Timestamp(TimeUnit::Millisecond, None),
DataType::Timestamp(TimeUnit::Microsecond, None),
DataType::Timestamp(TimeUnit::Nanosecond, None),
],
vec![DataType::Utf8],
),
}
}
}
impl Function for DateFormatFunction {
fn name(&self) -> &str {
NAME
"date_format"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Utf8)
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Utf8View)
}
fn signature(&self) -> Signature {
helper::one_of_sigs2(
vec![
DataType::Date32,
DataType::Timestamp(TimeUnit::Second, None),
DataType::Timestamp(TimeUnit::Millisecond, None),
DataType::Timestamp(TimeUnit::Microsecond, None),
DataType::Timestamp(TimeUnit::Nanosecond, None),
],
vec![DataType::Utf8],
)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 2, have: {}",
columns.len()
),
}
);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let ctx = find_function_context(&args)?;
let timezone = &ctx.query_ctx.timezone();
let left = &columns[0];
let formats = &columns[1];
let [left, arg1] = extract_args(self.name(), &args)?;
let formats = arg1.as_string::<i32>();
let size = left.len();
let left_datatype = columns[0].data_type();
let mut results = StringVectorBuilder::with_capacity(size);
let left_datatype = left.data_type();
let mut builder = StringViewBuilder::with_capacity(size);
match left_datatype {
ConcreteDataType::Timestamp(_) => {
for i in 0..size {
let ts = left.get(i).as_timestamp();
let format = formats.get(i).as_string();
let result = match (ts, format) {
(Some(ts), Some(fmt)) => Some(
ts.as_formatted_string(&fmt, Some(&func_ctx.query_ctx.timezone()))
.map_err(BoxedError::new)
.context(error::ExecuteSnafu)?,
),
_ => None,
};
results.push(result.as_deref());
}
DataType::Timestamp(_, _) => {
with_match_timestamp_types!(left_datatype, |$S| {
let array = left.as_primitive::<$S>();
for (date, format) in array.iter().zip(formats.iter()) {
let result = match (date, format) {
(Some(date), Some(format)) => {
let ts = Timestamp::new(date, $S::UNIT.into());
let x = ts.as_formatted_string(&format, Some(timezone))
.map_err(|e| DataFusionError::Execution(format!(
"cannot format {ts:?} as '{format}': {e}"
)))?;
Some(x)
}
_ => None
};
builder.append_option(result.as_deref());
}
})?;
}
ConcreteDataType::Date(_) => {
DataType::Date32 => {
let left = left.as_primitive::<Date32Type>();
for i in 0..size {
let date = left.get(i).as_date();
let format = formats.get(i).as_string();
let date = left.is_valid(i).then(|| Date::from(left.value(i)));
let format = formats.is_valid(i).then(|| formats.value(i));
let result = match (date, format) {
(Some(date), Some(fmt)) => date
.as_formatted_string(&fmt, Some(&func_ctx.query_ctx.timezone()))
.as_formatted_string(fmt, Some(timezone))
.map_err(BoxedError::new)
.context(error::ExecuteSnafu)?,
_ => None,
};
results.push(result.as_deref());
builder.append_option(result.as_deref());
}
}
_ => {
return UnsupportedInputDataTypeSnafu {
function: NAME,
datatypes: columns.iter().map(|c| c.data_type()).collect::<Vec<_>>(),
}
.fail();
x => {
return Err(DataFusionError::Execution(format!(
"unsupported input data type {x}"
)));
}
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
@@ -128,28 +136,32 @@ impl fmt::Display for DateFormatFunction {
mod tests {
use std::sync::Arc;
use arrow_schema::Field;
use datafusion_common::arrow::array::{Date32Array, StringArray, TimestampSecondArray};
use datafusion_common::config::ConfigOptions;
use datafusion_expr::{TypeSignature, Volatility};
use datatypes::prelude::ScalarVector;
use datatypes::value::Value;
use datatypes::vectors::{DateVector, StringVector, TimestampSecondVector};
use super::{DateFormatFunction, *};
use crate::function::FunctionContext;
#[test]
fn test_date_format_misc() {
let f = DateFormatFunction;
let f = DateFormatFunction::default();
assert_eq!("date_format", f.name());
assert_eq!(
DataType::Utf8,
DataType::Utf8View,
f.return_type(&[DataType::Timestamp(TimeUnit::Microsecond, None)])
.unwrap()
);
assert_eq!(
DataType::Utf8,
DataType::Utf8View,
f.return_type(&[DataType::Timestamp(TimeUnit::Second, None)])
.unwrap()
);
assert_eq!(DataType::Utf8, f.return_type(&[DataType::Date32]).unwrap());
assert_eq!(
DataType::Utf8View,
f.return_type(&[DataType::Date32]).unwrap()
);
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::OneOf(sigs),
@@ -159,7 +171,7 @@ mod tests {
#[test]
fn test_timestamp_date_format() {
let f = DateFormatFunction;
let f = DateFormatFunction::default();
let times = vec![Some(123), None, Some(42), None];
let formats = vec![
@@ -175,32 +187,35 @@ mod tests {
None,
];
let time_vector = TimestampSecondVector::from(times.clone());
let interval_vector = StringVector::from_vec(formats);
let args: Vec<VectorRef> = vec![Arc::new(time_vector), Arc::new(interval_vector)];
let vector = f.eval(&FunctionContext::default(), &args).unwrap();
let mut config_options = ConfigOptions::default();
config_options.extensions.insert(FunctionContext::default());
let config_options = Arc::new(config_options);
let args = ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(TimestampSecondArray::from(times))),
ColumnarValue::Array(Arc::new(StringArray::from_iter_values(formats))),
],
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options,
};
let result = f
.invoke_with_args(args)
.and_then(|x| x.to_array(4))
.unwrap();
let vector = result.as_string_view();
assert_eq!(4, vector.len());
for (i, _t) in times.iter().enumerate() {
let v = vector.get(i);
let result = results.get(i).unwrap();
if result.is_none() {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::String(s) => {
assert_eq!(s.as_utf8(), result.unwrap());
}
_ => unreachable!(),
}
for (actual, expect) in vector.iter().zip(results) {
assert_eq!(actual, expect);
}
}
#[test]
fn test_date_date_format() {
let f = DateFormatFunction;
let f = DateFormatFunction::default();
let dates = vec![Some(123), None, Some(42), None];
let formats = vec![
@@ -216,26 +231,29 @@ mod tests {
None,
];
let date_vector = DateVector::from(dates.clone());
let interval_vector = StringVector::from_vec(formats);
let args: Vec<VectorRef> = vec![Arc::new(date_vector), Arc::new(interval_vector)];
let vector = f.eval(&FunctionContext::default(), &args).unwrap();
let mut config_options = ConfigOptions::default();
config_options.extensions.insert(FunctionContext::default());
let config_options = Arc::new(config_options);
let args = ScalarFunctionArgs {
args: vec![
ColumnarValue::Array(Arc::new(Date32Array::from(dates))),
ColumnarValue::Array(Arc::new(StringArray::from_iter_values(formats))),
],
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options,
};
let result = f
.invoke_with_args(args)
.and_then(|x| x.to_array(4))
.unwrap();
let vector = result.as_string_view();
assert_eq!(4, vector.len());
for (i, _t) in dates.iter().enumerate() {
let v = vector.get(i);
let result = results.get(i).unwrap();
if result.is_none() {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::String(s) => {
assert_eq!(s.as_utf8(), result.unwrap());
}
_ => unreachable!(),
}
for (actual, expect) in vector.iter().zip(results) {
assert_eq!(actual, expect);
}
}
}

View File

@@ -14,69 +14,66 @@
use std::fmt;
use common_query::error::{ArrowComputeSnafu, IntoVectorSnafu, InvalidFuncArgsSnafu, Result};
use datafusion_expr::Signature;
use common_query::error::ArrowComputeSnafu;
use datafusion::logical_expr::ColumnarValue;
use datafusion_expr::{ScalarFunctionArgs, Signature};
use datatypes::arrow::compute::kernels::numeric;
use datatypes::arrow::datatypes::{DataType, IntervalUnit, TimeUnit};
use datatypes::vectors::{Helper, VectorRef};
use snafu::{ResultExt, ensure};
use snafu::ResultExt;
use crate::function::{Function, FunctionContext};
use crate::function::{Function, extract_args};
use crate::helper;
/// A function that subtracts an interval value from Timestamp or Date values and returns the result.
/// The implementation of the datetime type is based on Date64, which is incorrect, so this function
/// doesn't support the datetime type.
#[derive(Clone, Debug, Default)]
pub struct DateSubFunction;
#[derive(Clone, Debug)]
pub(crate) struct DateSubFunction {
signature: Signature,
}
const NAME: &str = "date_sub";
impl Default for DateSubFunction {
fn default() -> Self {
Self {
signature: helper::one_of_sigs2(
vec![
DataType::Date32,
DataType::Timestamp(TimeUnit::Second, None),
DataType::Timestamp(TimeUnit::Millisecond, None),
DataType::Timestamp(TimeUnit::Microsecond, None),
DataType::Timestamp(TimeUnit::Nanosecond, None),
],
vec![
DataType::Interval(IntervalUnit::MonthDayNano),
DataType::Interval(IntervalUnit::YearMonth),
DataType::Interval(IntervalUnit::DayTime),
],
),
}
}
}
impl Function for DateSubFunction {
fn name(&self) -> &str {
NAME
"date_sub"
}
fn return_type(&self, input_types: &[DataType]) -> Result<DataType> {
fn return_type(&self, input_types: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(input_types[0].clone())
}
fn signature(&self) -> Signature {
helper::one_of_sigs2(
vec![
DataType::Date32,
DataType::Timestamp(TimeUnit::Second, None),
DataType::Timestamp(TimeUnit::Millisecond, None),
DataType::Timestamp(TimeUnit::Microsecond, None),
DataType::Timestamp(TimeUnit::Nanosecond, None),
],
vec![
DataType::Interval(IntervalUnit::MonthDayNano),
DataType::Interval(IntervalUnit::YearMonth),
DataType::Interval(IntervalUnit::DayTime),
],
)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 2, have: {}",
columns.len()
),
}
);
let left = columns[0].to_arrow_array();
let right = columns[1].to_arrow_array();
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [left, right] = extract_args(self.name(), &args)?;
let result = numeric::sub(&left, &right).context(ArrowComputeSnafu)?;
let arrow_type = result.data_type().clone();
Helper::try_into_vector(result).context(IntoVectorSnafu {
data_type: arrow_type,
})
Ok(ColumnarValue::Array(result))
}
}
@@ -90,18 +87,20 @@ impl fmt::Display for DateSubFunction {
mod tests {
use std::sync::Arc;
use datafusion_expr::{TypeSignature, Volatility};
use datatypes::arrow::datatypes::IntervalDayTime;
use datatypes::value::Value;
use datatypes::vectors::{
DateVector, IntervalDayTimeVector, IntervalYearMonthVector, TimestampSecondVector,
use arrow_schema::Field;
use datafusion::arrow::array::{
Array, AsArray, Date32Array, IntervalDayTimeArray, IntervalYearMonthArray,
TimestampSecondArray,
};
use datafusion::arrow::datatypes::{Date32Type, IntervalDayTime, TimestampSecondType};
use datafusion_common::config::ConfigOptions;
use datafusion_expr::{TypeSignature, Volatility};
use super::{DateSubFunction, *};
#[test]
fn test_date_sub_misc() {
let f = DateSubFunction;
let f = DateSubFunction::default();
assert_eq!("date_sub", f.name());
assert_eq!(
DataType::Timestamp(TimeUnit::Microsecond, None),
@@ -130,7 +129,7 @@ mod tests {
#[test]
fn test_timestamp_date_sub() {
let f = DateSubFunction;
let f = DateSubFunction::default();
let times = vec![Some(123), None, Some(42), None];
// Intervals in milliseconds
@@ -142,32 +141,44 @@ mod tests {
];
let results = [Some(122), None, Some(39), None];
let time_vector = TimestampSecondVector::from(times.clone());
let interval_vector = IntervalDayTimeVector::from_vec(intervals);
let args: Vec<VectorRef> = vec![Arc::new(time_vector), Arc::new(interval_vector)];
let vector = f.eval(&FunctionContext::default(), &args).unwrap();
let args = vec![
ColumnarValue::Array(Arc::new(TimestampSecondArray::from(times.clone()))),
ColumnarValue::Array(Arc::new(IntervalDayTimeArray::from(intervals))),
];
let vector = f
.invoke_with_args(ScalarFunctionArgs {
args,
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new(
"x",
DataType::Timestamp(TimeUnit::Second, None),
true,
)),
config_options: Arc::new(ConfigOptions::new()),
})
.and_then(|v| ColumnarValue::values_to_arrays(&[v]))
.map(|mut a| a.remove(0))
.unwrap();
let vector = vector.as_primitive::<TimestampSecondType>();
assert_eq!(4, vector.len());
for (i, _t) in times.iter().enumerate() {
let v = vector.get(i);
let result = results.get(i).unwrap();
if result.is_none() {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::Timestamp(ts) => {
assert_eq!(ts.value(), result.unwrap());
}
_ => unreachable!(),
if let Some(x) = result {
assert!(vector.is_valid(i));
assert_eq!(vector.value(i), *x);
} else {
assert!(vector.is_null(i));
}
}
}
#[test]
fn test_date_date_sub() {
let f = DateSubFunction;
let f = DateSubFunction::default();
let days_per_month = 30;
let dates = vec![
@@ -180,25 +191,37 @@ mod tests {
let intervals = vec![1, 2, 3, 1];
let results = [Some(3659), None, Some(1168), None];
let date_vector = DateVector::from(dates.clone());
let interval_vector = IntervalYearMonthVector::from_vec(intervals);
let args: Vec<VectorRef> = vec![Arc::new(date_vector), Arc::new(interval_vector)];
let vector = f.eval(&FunctionContext::default(), &args).unwrap();
let args = vec![
ColumnarValue::Array(Arc::new(Date32Array::from(dates.clone()))),
ColumnarValue::Array(Arc::new(IntervalYearMonthArray::from(intervals))),
];
let vector = f
.invoke_with_args(ScalarFunctionArgs {
args,
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new(
"x",
DataType::Timestamp(TimeUnit::Second, None),
true,
)),
config_options: Arc::new(ConfigOptions::new()),
})
.and_then(|v| ColumnarValue::values_to_arrays(&[v]))
.map(|mut a| a.remove(0))
.unwrap();
let vector = vector.as_primitive::<Date32Type>();
assert_eq!(4, vector.len());
for (i, _t) in dates.iter().enumerate() {
let v = vector.get(i);
let result = results.get(i).unwrap();
if result.is_none() {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::Date(date) => {
assert_eq!(date.val(), result.unwrap());
}
_ => unreachable!(),
if let Some(x) = result {
assert!(vector.is_valid(i));
assert_eq!(vector.value(i), *x);
} else {
assert!(vector.is_null(i));
}
}
}

View File

@@ -28,6 +28,6 @@ pub(crate) struct ExpressionFunction;
impl ExpressionFunction {
pub fn register(registry: &FunctionRegistry) {
registry.register_scalar(IsNullFunction);
registry.register_scalar(IsNullFunction::default());
}
}

View File

@@ -16,23 +16,27 @@ use std::fmt;
use std::fmt::Display;
use std::sync::Arc;
use common_query::error;
use common_query::error::{ArrowComputeSnafu, InvalidFuncArgsSnafu, Result};
use datafusion::arrow::array::ArrayRef;
use datafusion::arrow::compute::is_null;
use datafusion::arrow::datatypes::DataType;
use datafusion_expr::{Signature, Volatility};
use datatypes::prelude::VectorRef;
use datatypes::vectors::Helper;
use snafu::{ResultExt, ensure};
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, Volatility};
use crate::function::{Function, FunctionContext};
use crate::function::{Function, extract_args};
const NAME: &str = "isnull";
/// The function to check whether an expression is NULL
#[derive(Clone, Debug, Default)]
pub struct IsNullFunction;
#[derive(Clone, Debug)]
pub(crate) struct IsNullFunction {
signature: Signature,
}
impl Default for IsNullFunction {
fn default() -> Self {
Self {
signature: Signature::any(1, Volatility::Immutable),
}
}
}
impl Display for IsNullFunction {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
@@ -45,33 +49,22 @@ impl Function for IsNullFunction {
NAME
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Boolean)
}
fn signature(&self) -> Signature {
Signature::any(1, Volatility::Immutable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(
fn invoke_with_args(
&self,
_func_ctx: &FunctionContext,
columns: &[VectorRef],
) -> common_query::error::Result<VectorRef> {
ensure!(
columns.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect exactly one, have: {}",
columns.len()
),
}
);
let values = &columns[0];
let arrow_array = &values.to_arrow_array();
let result = is_null(arrow_array).context(ArrowComputeSnafu)?;
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0] = extract_args(self.name(), &args)?;
let result = is_null(&arg0)?;
Helper::try_into_vector(Arc::new(result) as ArrayRef).context(error::FromArrowArraySnafu)
Ok(ColumnarValue::Array(Arc::new(result)))
}
}
@@ -79,28 +72,31 @@ impl Function for IsNullFunction {
mod tests {
use std::sync::Arc;
use datafusion_expr::TypeSignature;
use datatypes::scalars::ScalarVector;
use datatypes::vectors::{BooleanVector, Float32Vector};
use arrow_schema::Field;
use datafusion_common::arrow::array::{AsArray, BooleanArray, Float32Array};
use super::*;
#[test]
fn test_is_null_function() {
let is_null = IsNullFunction;
let is_null = IsNullFunction::default();
assert_eq!("isnull", is_null.name());
assert_eq!(DataType::Boolean, is_null.return_type(&[]).unwrap());
assert_eq!(
is_null.signature(),
Signature {
type_signature: TypeSignature::Any(1),
volatility: Volatility::Immutable
}
);
let values = vec![None, Some(3.0), None];
let args: Vec<VectorRef> = vec![Arc::new(Float32Vector::from(values))];
let vector = is_null.eval(&FunctionContext::default(), &args).unwrap();
let expect: VectorRef = Arc::new(BooleanVector::from_vec(vec![true, false, true]));
let result = is_null
.invoke_with_args(ScalarFunctionArgs {
args: vec![ColumnarValue::Array(Arc::new(Float32Array::from(values)))],
arg_fields: vec![],
number_rows: 3,
return_field: Arc::new(Field::new("", DataType::Boolean, false)),
config_options: Arc::new(Default::default()),
})
.unwrap();
let ColumnarValue::Array(result) = result else {
unreachable!()
};
let vector = result.as_boolean();
let expect = &BooleanArray::from(vec![true, false, true]);
assert_eq!(expect, vector);
}
}

View File

@@ -27,57 +27,57 @@ pub(crate) struct GeoFunctions;
impl GeoFunctions {
pub fn register(registry: &FunctionRegistry) {
// geohash
registry.register_scalar(geohash::GeohashFunction);
registry.register_scalar(geohash::GeohashNeighboursFunction);
registry.register_scalar(geohash::GeohashFunction::default());
registry.register_scalar(geohash::GeohashNeighboursFunction::default());
// h3 index
registry.register_scalar(h3::H3LatLngToCell);
registry.register_scalar(h3::H3LatLngToCellString);
registry.register_scalar(h3::H3LatLngToCell::default());
registry.register_scalar(h3::H3LatLngToCellString::default());
// h3 index inspection
registry.register_scalar(h3::H3CellBase);
registry.register_scalar(h3::H3CellIsPentagon);
registry.register_scalar(h3::H3StringToCell);
registry.register_scalar(h3::H3CellToString);
registry.register_scalar(h3::H3CellCenterLatLng);
registry.register_scalar(h3::H3CellResolution);
registry.register_scalar(h3::H3CellBase::default());
registry.register_scalar(h3::H3CellIsPentagon::default());
registry.register_scalar(h3::H3StringToCell::default());
registry.register_scalar(h3::H3CellToString::default());
registry.register_scalar(h3::H3CellCenterLatLng::default());
registry.register_scalar(h3::H3CellResolution::default());
// h3 hierarchical grid
registry.register_scalar(h3::H3CellCenterChild);
registry.register_scalar(h3::H3CellParent);
registry.register_scalar(h3::H3CellToChildren);
registry.register_scalar(h3::H3CellToChildrenSize);
registry.register_scalar(h3::H3CellToChildPos);
registry.register_scalar(h3::H3ChildPosToCell);
registry.register_scalar(h3::H3CellContains);
registry.register_scalar(h3::H3CellCenterChild::default());
registry.register_scalar(h3::H3CellParent::default());
registry.register_scalar(h3::H3CellToChildren::default());
registry.register_scalar(h3::H3CellToChildrenSize::default());
registry.register_scalar(h3::H3CellToChildPos::default());
registry.register_scalar(h3::H3ChildPosToCell::default());
registry.register_scalar(h3::H3CellContains::default());
// h3 grid traversal
registry.register_scalar(h3::H3GridDisk);
registry.register_scalar(h3::H3GridDiskDistances);
registry.register_scalar(h3::H3GridDistance);
registry.register_scalar(h3::H3GridPathCells);
registry.register_scalar(h3::H3GridDisk::default());
registry.register_scalar(h3::H3GridDiskDistances::default());
registry.register_scalar(h3::H3GridDistance::default());
registry.register_scalar(h3::H3GridPathCells::default());
// h3 measurement
registry.register_scalar(h3::H3CellDistanceSphereKm);
registry.register_scalar(h3::H3CellDistanceEuclideanDegree);
registry.register_scalar(h3::H3CellDistanceSphereKm::default());
registry.register_scalar(h3::H3CellDistanceEuclideanDegree::default());
// s2
registry.register_scalar(s2::S2LatLngToCell);
registry.register_scalar(s2::S2CellLevel);
registry.register_scalar(s2::S2CellToToken);
registry.register_scalar(s2::S2CellParent);
registry.register_scalar(s2::S2LatLngToCell::default());
registry.register_scalar(s2::S2CellLevel::default());
registry.register_scalar(s2::S2CellToToken::default());
registry.register_scalar(s2::S2CellParent::default());
// spatial data type
registry.register_scalar(wkt::LatLngToPointWkt);
registry.register_scalar(wkt::LatLngToPointWkt::default());
// spatial relation
registry.register_scalar(relation::STContains);
registry.register_scalar(relation::STWithin);
registry.register_scalar(relation::STIntersects);
registry.register_scalar(relation::STContains::default());
registry.register_scalar(relation::STWithin::default());
registry.register_scalar(relation::STIntersects::default());
// spatial measure
registry.register_scalar(measure::STDistance);
registry.register_scalar(measure::STDistanceSphere);
registry.register_scalar(measure::STArea);
registry.register_scalar(measure::STDistance::default());
registry.register_scalar(measure::STDistanceSphere::default());
registry.register_scalar(measure::STArea::default());
}
}

View File

@@ -17,82 +17,36 @@ use std::sync::Arc;
use common_error::ext::{BoxedError, PlainError};
use common_error::status_code::StatusCode;
use common_query::error::{self, InvalidFuncArgsSnafu, Result};
use datafusion::arrow::datatypes::Field;
use common_query::error;
use datafusion::arrow::array::{Array, AsArray, ListBuilder, StringViewBuilder};
use datafusion::arrow::datatypes::{DataType, Field, Float64Type, UInt8Type};
use datafusion::logical_expr::ColumnarValue;
use datafusion_common::DataFusionError;
use datafusion_expr::type_coercion::aggregates::INTEGERS;
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::arrow::datatypes::DataType;
use datatypes::prelude::ConcreteDataType;
use datatypes::scalars::{Scalar, ScalarVectorBuilder};
use datatypes::value::{ListValue, Value};
use datatypes::vectors::{ListVectorBuilder, MutableVector, StringVectorBuilder, VectorRef};
use datafusion_expr::{ScalarFunctionArgs, Signature, TypeSignature, Volatility};
use geohash::Coord;
use snafu::{ResultExt, ensure};
use snafu::ResultExt;
use crate::function::{Function, FunctionContext};
use crate::function::{Function, extract_args};
use crate::scalars::geo::helpers;
macro_rules! ensure_resolution_usize {
($v: ident) => {
if !($v > 0 && $v <= 12) {
Err(BoxedError::new(PlainError::new(
format!("Invalid geohash resolution {}, expect value: [1, 12]", $v),
StatusCode::EngineExecuteQuery,
)))
.context(error::ExecuteSnafu)
} else {
Ok($v as usize)
}
};
}
fn try_into_resolution(v: Value) -> Result<usize> {
match v {
Value::Int8(v) => {
ensure_resolution_usize!(v)
}
Value::Int16(v) => {
ensure_resolution_usize!(v)
}
Value::Int32(v) => {
ensure_resolution_usize!(v)
}
Value::Int64(v) => {
ensure_resolution_usize!(v)
}
Value::UInt8(v) => {
ensure_resolution_usize!(v)
}
Value::UInt16(v) => {
ensure_resolution_usize!(v)
}
Value::UInt32(v) => {
ensure_resolution_usize!(v)
}
Value::UInt64(v) => {
ensure_resolution_usize!(v)
}
_ => unreachable!(),
fn ensure_resolution_usize(v: u8) -> datafusion_common::Result<usize> {
if v == 0 || v > 12 {
return Err(DataFusionError::Execution(format!(
"Invalid geohash resolution {v}, valid value range: [1, 12]"
)));
}
Ok(v as usize)
}
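A quick boundary check of the helper, written as a hypothetical test (names illustrative):
#[test]
fn resolution_bounds_sketch() {
    // resolution must fall within [1, 12]
    assert_eq!(ensure_resolution_usize(6).unwrap(), 6);
    assert!(ensure_resolution_usize(0).is_err());
    assert!(ensure_resolution_usize(13).is_err());
}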
/// Function that returns the geohash string for a given geospatial coordinate.
#[derive(Clone, Debug, Default)]
pub struct GeohashFunction;
impl GeohashFunction {
const NAME: &'static str = "geohash";
#[derive(Clone, Debug)]
pub(crate) struct GeohashFunction {
signature: Signature,
}
impl Function for GeohashFunction {
fn name(&self) -> &str {
Self::NAME
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Utf8)
}
fn signature(&self) -> Signature {
impl Default for GeohashFunction {
fn default() -> Self {
let mut signatures = Vec::new();
for coord_type in &[DataType::Float32, DataType::Float64] {
for resolution_type in INTEGERS {
@@ -106,34 +60,55 @@ impl Function for GeohashFunction {
]));
}
}
Signature::one_of(signatures, Volatility::Stable)
Self {
signature: Signature::one_of(signatures, Volatility::Stable),
}
}
}
impl GeohashFunction {
const NAME: &'static str = "geohash";
}
impl Function for GeohashFunction {
fn name(&self) -> &str {
Self::NAME
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 3,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 3, provided : {}",
columns.len()
),
}
);
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Utf8)
}
let lat_vec = &columns[0];
let lon_vec = &columns[1];
let resolution_vec = &columns[2];
fn signature(&self) -> &Signature {
&self.signature
}
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [lat_vec, lon_vec, resolutions] = extract_args(self.name(), &args)?;
let lat_vec = helpers::cast::<Float64Type>(&lat_vec)?;
let lat_vec = lat_vec.as_primitive::<Float64Type>();
let lon_vec = helpers::cast::<Float64Type>(&lon_vec)?;
let lon_vec = lon_vec.as_primitive::<Float64Type>();
let resolutions = helpers::cast::<UInt8Type>(&resolutions)?;
let resolutions = resolutions.as_primitive::<UInt8Type>();
let size = lat_vec.len();
let mut results = StringVectorBuilder::with_capacity(size);
let mut builder = StringViewBuilder::with_capacity(size);
for i in 0..size {
let lat = lat_vec.get(i).as_f64_lossy();
let lon = lon_vec.get(i).as_f64_lossy();
let r = try_into_resolution(resolution_vec.get(i))?;
let lat = lat_vec.is_valid(i).then(|| lat_vec.value(i));
let lon = lon_vec.is_valid(i).then(|| lon_vec.value(i));
let r = resolutions
.is_valid(i)
.then(|| ensure_resolution_usize(resolutions.value(i)))
.transpose()?;
let result = match (lat, lon) {
(Some(lat), Some(lon)) => {
let result = match (lat, lon, r) {
(Some(lat), Some(lon), Some(r)) => {
let coord = Coord { x: lon, y: lat };
let encoded = geohash::encode(coord, r)
.map_err(|e| {
@@ -148,10 +123,10 @@ impl Function for GeohashFunction {
_ => None,
};
results.push(result.as_deref());
builder.append_option(result);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
@@ -162,27 +137,13 @@ impl fmt::Display for GeohashFunction {
}
/// Function that returns the neighbouring geohash cells for a given geospatial coordinate.
#[derive(Clone, Debug, Default)]
pub struct GeohashNeighboursFunction;
impl GeohashNeighboursFunction {
const NAME: &'static str = "geohash_neighbours";
#[derive(Clone, Debug)]
pub(crate) struct GeohashNeighboursFunction {
signature: Signature,
}
impl Function for GeohashNeighboursFunction {
fn name(&self) -> &str {
GeohashNeighboursFunction::NAME
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::List(Arc::new(Field::new(
"x",
DataType::Utf8,
false,
))))
}
fn signature(&self) -> Signature {
impl Default for GeohashNeighboursFunction {
fn default() -> Self {
let mut signatures = Vec::new();
for coord_type in &[DataType::Float32, DataType::Float64] {
for resolution_type in INTEGERS {
@@ -196,35 +157,59 @@ impl Function for GeohashNeighboursFunction {
]));
}
}
Signature::one_of(signatures, Volatility::Stable)
Self {
signature: Signature::one_of(signatures, Volatility::Stable),
}
}
}
impl GeohashNeighboursFunction {
const NAME: &'static str = "geohash_neighbours";
}
impl Function for GeohashNeighboursFunction {
fn name(&self) -> &str {
GeohashNeighboursFunction::NAME
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 3,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 3, provided : {}",
columns.len()
),
}
);
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::List(Arc::new(Field::new(
"item",
DataType::Utf8View,
false,
))))
}
let lat_vec = &columns[0];
let lon_vec = &columns[1];
let resolution_vec = &columns[2];
fn signature(&self) -> &Signature {
&self.signature
}
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [lat_vec, lon_vec, resolutions] = extract_args(self.name(), &args)?;
let lat_vec = helpers::cast::<Float64Type>(&lat_vec)?;
let lat_vec = lat_vec.as_primitive::<Float64Type>();
let lon_vec = helpers::cast::<Float64Type>(&lon_vec)?;
let lon_vec = lon_vec.as_primitive::<Float64Type>();
let resolutions = helpers::cast::<UInt8Type>(&resolutions)?;
let resolutions = resolutions.as_primitive::<UInt8Type>();
let size = lat_vec.len();
let mut results =
ListVectorBuilder::with_type_capacity(ConcreteDataType::string_datatype(), size);
let mut builder = ListBuilder::new(StringViewBuilder::new());
for i in 0..size {
let lat = lat_vec.get(i).as_f64_lossy();
let lon = lon_vec.get(i).as_f64_lossy();
let r = try_into_resolution(resolution_vec.get(i))?;
let lat = lat_vec.is_valid(i).then(|| lat_vec.value(i));
let lon = lon_vec.is_valid(i).then(|| lon_vec.value(i));
let r = resolutions
.is_valid(i)
.then(|| ensure_resolution_usize(resolutions.value(i)))
.transpose()?;
let result = match (lat, lon) {
(Some(lat), Some(lon)) => {
match (lat, lon, r) {
(Some(lat), Some(lon), Some(r)) => {
let coord = Coord { x: lon, y: lat };
let encoded = geohash::encode(coord, r)
.map_err(|e| {
@@ -242,8 +227,8 @@ impl Function for GeohashNeighboursFunction {
))
})
.context(error::ExecuteSnafu)?;
Some(ListValue::new(
vec![
builder.append_value(
[
neighbours.n,
neighbours.nw,
neighbours.w,
@@ -254,22 +239,14 @@ impl Function for GeohashNeighboursFunction {
neighbours.ne,
]
.into_iter()
.map(Value::from)
.collect(),
ConcreteDataType::string_datatype(),
))
.map(Some),
);
}
_ => None,
_ => builder.append_null(),
};
if let Some(list_value) = result {
results.push(Some(list_value.as_scalar_ref()));
} else {
results.push(None);
}
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}

File diff suppressed because it is too large

View File

@@ -12,64 +12,30 @@
// See the License for the specific language governing permissions and
// limitations under the License.
macro_rules! ensure_columns_len {
($columns:ident) => {
snafu::ensure!(
$columns.windows(2).all(|c| c[0].len() == c[1].len()),
common_query::error::InvalidFuncArgsSnafu {
err_msg: "The length of input columns are in different size"
}
)
};
($column_a:ident, $column_b:ident, $($column_n:ident),*) => {
snafu::ensure!(
{
let mut result = $column_a.len() == $column_b.len();
$(
result = result && ($column_a.len() == $column_n.len());
)*
result
}
common_query::error::InvalidFuncArgsSnafu {
err_msg: "The length of input columns are in different size"
}
)
};
}
pub(crate) use ensure_columns_len;
macro_rules! ensure_columns_n {
($columns:ident, $n:literal) => {
snafu::ensure!(
$columns.len() == $n,
common_query::error::InvalidFuncArgsSnafu {
err_msg: format!(
"The length of arguments is not correct, expect {}, provided : {}",
stringify!($n),
$columns.len()
),
}
);
if $n > 1 {
ensure_columns_len!($columns);
}
};
}
pub(crate) use ensure_columns_n;
use datafusion::arrow::array::{ArrayRef, ArrowPrimitiveType};
use datafusion::arrow::compute;
macro_rules! ensure_and_coerce {
($compare:expr, $coerce:expr) => {{
snafu::ensure!(
$compare,
common_query::error::InvalidFuncArgsSnafu {
err_msg: "Argument was outside of acceptable range "
}
);
Ok($coerce)
if !$compare {
return Err(datafusion_common::DataFusionError::Execution(
"argument out of valid range".to_string(),
));
}
Ok(Some($coerce))
}};
}
pub(crate) use ensure_and_coerce;
pub(crate) fn cast<T: ArrowPrimitiveType>(array: &ArrayRef) -> datafusion_common::Result<ArrayRef> {
let x = compute::cast_with_options(
array.as_ref(),
&T::DATA_TYPE,
&compute::CastOptions {
safe: false,
..Default::default()
},
)?;
Ok(x)
}
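Worth noting: `safe: false` makes the cast fail loudly instead of silently turning out-of-range values into NULLs. A small hypothetical usage sketch (function name invented):
// Illustrative sketch only: cast an argument column to u8, erroring on overflow.
fn resolutions_as_u8(arg: &ArrayRef) -> datafusion_common::Result<Vec<Option<u8>>> {
    use datafusion::arrow::array::AsArray;
    use datafusion::arrow::datatypes::UInt8Type;

    let casted = cast::<UInt8Type>(arg)?;
    // with `safe: false`, e.g. an i64 value of 300 surfaces as a cast error, not a NULL
    Ok(casted.as_primitive::<UInt8Type>().iter().collect())
}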

View File

@@ -12,106 +12,137 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use common_error::ext::{BoxedError, PlainError};
use common_error::status_code::StatusCode;
use common_query::error::{self, Result};
use datafusion::arrow::datatypes::DataType;
use datafusion_expr::{Signature, Volatility};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::vectors::{Float64VectorBuilder, MutableVector, VectorRef};
use common_query::error;
use datafusion_common::arrow::array::{Array, AsArray, Float64Builder};
use datafusion_common::arrow::compute;
use datafusion_common::arrow::datatypes::DataType;
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, Volatility};
use derive_more::Display;
use geo::algorithm::line_measures::metric_spaces::Euclidean;
use geo::{Area, Distance, Haversine};
use geo_types::Geometry;
use snafu::ResultExt;
use crate::function::{Function, FunctionContext};
use crate::scalars::geo::helpers::{ensure_columns_len, ensure_columns_n};
use crate::function::{Function, extract_args};
use crate::scalars::geo::wkt::parse_wkt;
/// Return WGS84(SRID: 4326) euclidean distance between two geometry object, in degree
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct STDistance;
pub(crate) struct STDistance {
signature: Signature,
}
impl Default for STDistance {
fn default() -> Self {
Self {
signature: Signature::string(2, Volatility::Stable),
}
}
}
impl Function for STDistance {
fn name(&self) -> &str {
"st_distance"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Float64)
}
fn signature(&self) -> Signature {
Signature::string(2, Volatility::Stable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0, arg1] = extract_args(self.name(), &args)?;
let wkt_this_vec = &columns[0];
let wkt_that_vec = &columns[1];
let arg0 = compute::cast(&arg0, &DataType::Utf8View)?;
let wkt_this_vec = arg0.as_string_view();
let arg1 = compute::cast(&arg1, &DataType::Utf8View)?;
let wkt_that_vec = arg1.as_string_view();
let size = wkt_this_vec.len();
let mut results = Float64VectorBuilder::with_capacity(size);
let mut builder = Float64Builder::with_capacity(size);
for i in 0..size {
let wkt_this = wkt_this_vec.get(i).as_string();
let wkt_that = wkt_that_vec.get(i).as_string();
let wkt_this = wkt_this_vec.is_valid(i).then(|| wkt_this_vec.value(i));
let wkt_that = wkt_that_vec.is_valid(i).then(|| wkt_that_vec.value(i));
let result = match (wkt_this, wkt_that) {
(Some(wkt_this), Some(wkt_that)) => {
let geom_this = parse_wkt(&wkt_this)?;
let geom_that = parse_wkt(&wkt_that)?;
let geom_this = parse_wkt(wkt_this)?;
let geom_that = parse_wkt(wkt_that)?;
Some(Euclidean::distance(&geom_this, &geom_that))
}
_ => None,
};
results.push(result);
builder.append_option(result);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
/// Return the great-circle distance between two geometry objects, in meters
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct STDistanceSphere;
pub(crate) struct STDistanceSphere {
signature: Signature,
}
impl Default for STDistanceSphere {
fn default() -> Self {
Self {
signature: Signature::string(2, Volatility::Stable),
}
}
}
impl Function for STDistanceSphere {
fn name(&self) -> &str {
"st_distance_sphere_m"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Float64)
}
fn signature(&self) -> Signature {
Signature::string(2, Volatility::Stable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0, arg1] = extract_args(self.name(), &args)?;
let wkt_this_vec = &columns[0];
let wkt_that_vec = &columns[1];
let arg0 = compute::cast(&arg0, &DataType::Utf8View)?;
let wkt_this_vec = arg0.as_string_view();
let arg1 = compute::cast(&arg1, &DataType::Utf8View)?;
let wkt_that_vec = arg1.as_string_view();
let size = wkt_this_vec.len();
let mut results = Float64VectorBuilder::with_capacity(size);
let mut builder = Float64Builder::with_capacity(size);
for i in 0..size {
let wkt_this = wkt_this_vec.get(i).as_string();
let wkt_that = wkt_that_vec.get(i).as_string();
let wkt_this = wkt_this_vec.is_valid(i).then(|| wkt_this_vec.value(i));
let wkt_that = wkt_that_vec.is_valid(i).then(|| wkt_that_vec.value(i));
let result = match (wkt_this, wkt_that) {
(Some(wkt_this), Some(wkt_that)) => {
let geom_this = parse_wkt(&wkt_this)?;
let geom_that = parse_wkt(&wkt_that)?;
let geom_this = parse_wkt(wkt_this)?;
let geom_that = parse_wkt(wkt_that)?;
match (geom_this, geom_that) {
(Geometry::Point(this), Geometry::Point(that)) => {
@@ -128,52 +159,66 @@ impl Function for STDistanceSphere {
_ => None,
};
results.push(result);
builder.append_option(result);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
/// Return the area of the given geometry object
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct STArea;
pub(crate) struct STArea {
signature: Signature,
}
impl Default for STArea {
fn default() -> Self {
Self {
signature: Signature::string(1, Volatility::Stable),
}
}
}
impl Function for STArea {
fn name(&self) -> &str {
"st_area"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Float64)
}
fn signature(&self) -> Signature {
Signature::string(1, Volatility::Stable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 1);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0] = extract_args(self.name(), &args)?;
let wkt_vec = &columns[0];
let arg0 = compute::cast(&arg0, &DataType::Utf8View)?;
let wkt_vec = arg0.as_string_view();
let size = wkt_vec.len();
let mut results = Float64VectorBuilder::with_capacity(size);
let mut builder = Float64Builder::with_capacity(size);
for i in 0..size {
let wkt = wkt_vec.get(i).as_string();
let wkt = wkt_vec.is_valid(i).then(|| wkt_vec.value(i));
let result = if let Some(wkt) = wkt {
let geom = parse_wkt(&wkt)?;
let geom = parse_wkt(wkt)?;
Some(geom.unsigned_area())
} else {
None
};
results.push(result);
builder.append_option(result);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}

View File

@@ -12,160 +12,151 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use common_query::error::Result;
use datafusion::arrow::datatypes::DataType;
use datafusion_expr::{Signature, Volatility};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::vectors::{BooleanVectorBuilder, MutableVector, VectorRef};
use std::sync::Arc;
use datafusion_common::arrow::array::{Array, AsArray, BooleanBuilder};
use datafusion_common::arrow::compute;
use datafusion_common::arrow::datatypes::DataType;
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, Volatility};
use derive_more::Display;
use geo::algorithm::contains::Contains;
use geo::algorithm::intersects::Intersects;
use geo::algorithm::within::Within;
use geo_types::Geometry;
use crate::function::{Function, FunctionContext};
use crate::scalars::geo::helpers::{ensure_columns_len, ensure_columns_n};
use crate::function::{Function, extract_args};
use crate::scalars::geo::wkt::parse_wkt;
/// Test if spatial relationship: contains
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct STContains;
pub(crate) struct STContains {
signature: Signature,
}
impl Function for STContains {
fn name(&self) -> &str {
"st_contains"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Boolean)
}
fn signature(&self) -> Signature {
Signature::string(2, Volatility::Stable)
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
let wkt_this_vec = &columns[0];
let wkt_that_vec = &columns[1];
let size = wkt_this_vec.len();
let mut results = BooleanVectorBuilder::with_capacity(size);
for i in 0..size {
let wkt_this = wkt_this_vec.get(i).as_string();
let wkt_that = wkt_that_vec.get(i).as_string();
let result = match (wkt_this, wkt_that) {
(Some(wkt_this), Some(wkt_that)) => {
let geom_this = parse_wkt(&wkt_this)?;
let geom_that = parse_wkt(&wkt_that)?;
Some(geom_this.contains(&geom_that))
}
_ => None,
};
results.push(result);
impl Default for STContains {
fn default() -> Self {
Self {
signature: Signature::string(2, Volatility::Stable),
}
}
}
Ok(results.to_vector())
impl StFunction for STContains {
const NAME: &'static str = "st_contains";
fn signature(&self) -> &Signature {
&self.signature
}
fn invoke(g1: Geometry, g2: Geometry) -> bool {
g1.contains(&g2)
}
}
/// Test if spatial relationship: within
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct STWithin;
pub(crate) struct STWithin {
signature: Signature,
}
impl Function for STWithin {
fn name(&self) -> &str {
"st_within"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Boolean)
}
fn signature(&self) -> Signature {
Signature::string(2, Volatility::Stable)
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
let wkt_this_vec = &columns[0];
let wkt_that_vec = &columns[1];
let size = wkt_this_vec.len();
let mut results = BooleanVectorBuilder::with_capacity(size);
for i in 0..size {
let wkt_this = wkt_this_vec.get(i).as_string();
let wkt_that = wkt_that_vec.get(i).as_string();
let result = match (wkt_this, wkt_that) {
(Some(wkt_this), Some(wkt_that)) => {
let geom_this = parse_wkt(&wkt_this)?;
let geom_that = parse_wkt(&wkt_that)?;
Some(geom_this.is_within(&geom_that))
}
_ => None,
};
results.push(result);
impl Default for STWithin {
fn default() -> Self {
Self {
signature: Signature::string(2, Volatility::Stable),
}
}
}
Ok(results.to_vector())
impl StFunction for STWithin {
const NAME: &'static str = "st_within";
fn signature(&self) -> &Signature {
&self.signature
}
fn invoke(g1: Geometry, g2: Geometry) -> bool {
g1.is_within(&g2)
}
}
/// Test if spatial relationship: intersects
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct STIntersects;
pub(crate) struct STIntersects {
signature: Signature,
}
impl Function for STIntersects {
fn name(&self) -> &str {
"st_intersects"
impl Default for STIntersects {
fn default() -> Self {
Self {
signature: Signature::string(2, Volatility::Stable),
}
}
}
impl StFunction for STIntersects {
const NAME: &'static str = "st_intersects";
fn signature(&self) -> &Signature {
&self.signature
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn invoke(g1: Geometry, g2: Geometry) -> bool {
g1.intersects(&g2)
}
}
trait StFunction {
const NAME: &'static str;
fn signature(&self) -> &Signature;
fn invoke(g1: Geometry, g2: Geometry) -> bool;
}
impl<T: StFunction + Display + Send + Sync> Function for T {
fn name(&self) -> &str {
T::NAME
}
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Boolean)
}
fn signature(&self) -> Signature {
Signature::string(2, Volatility::Stable)
fn signature(&self) -> &Signature {
self.signature()
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0, arg1] = extract_args(self.name(), &args)?;
let wkt_this_vec = &columns[0];
let wkt_that_vec = &columns[1];
let arg0 = compute::cast(&arg0, &DataType::Utf8View)?;
let wkt_this_vec = arg0.as_string_view();
let arg1 = compute::cast(&arg1, &DataType::Utf8View)?;
let wkt_that_vec = arg1.as_string_view();
let size = wkt_this_vec.len();
let mut results = BooleanVectorBuilder::with_capacity(size);
let mut builder = BooleanBuilder::with_capacity(size);
for i in 0..size {
let wkt_this = wkt_this_vec.get(i).as_string();
let wkt_that = wkt_that_vec.get(i).as_string();
let wkt_this = wkt_this_vec.is_valid(i).then(|| wkt_this_vec.value(i));
let wkt_that = wkt_that_vec.is_valid(i).then(|| wkt_that_vec.value(i));
let result = match (wkt_this, wkt_that) {
(Some(wkt_this), Some(wkt_that)) => {
let geom_this = parse_wkt(&wkt_this)?;
let geom_that = parse_wkt(&wkt_that)?;
Some(geom_this.intersects(&geom_that))
Some(T::invoke(parse_wkt(wkt_this)?, parse_wkt(wkt_that)?))
}
_ => None,
};
results.push(result);
builder.append_option(result);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
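
The `StFunction` trait and its blanket `impl<T: StFunction + …> Function for T` collapse the three near-identical bodies for st_contains / st_within / st_intersects into one evaluator. A simplified version of that idea, using a hypothetical `PairwisePredicate` trait that operates directly on strings instead of parsed `geo_types::Geometry` values (all names below are illustrative):

```rust
use std::sync::Arc;

use datafusion_common::arrow::array::{Array, AsArray, BooleanBuilder};
use datafusion_common::arrow::compute;
use datafusion_common::arrow::datatypes::DataType;
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs};

/// Simplified counterpart of the diff's private `StFunction` trait; the real
/// one also carries a NAME constant and a `&Signature` accessor.
trait PairwisePredicate {
    fn test(a: &str, b: &str) -> bool;
}

/// One shared evaluator replaces the per-struct `eval` bodies that the diff
/// deletes: cast both inputs to Utf8View, zip them row by row, NULL in => NULL out.
fn invoke_pairwise<T: PairwisePredicate>(
    args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
    let arrays = ColumnarValue::values_to_arrays(&args.args)?;
    let lhs_arr = compute::cast(&arrays[0], &DataType::Utf8View)?;
    let rhs_arr = compute::cast(&arrays[1], &DataType::Utf8View)?;
    let (lhs, rhs) = (lhs_arr.as_string_view(), rhs_arr.as_string_view());

    let mut builder = BooleanBuilder::with_capacity(lhs.len());
    for i in 0..lhs.len() {
        let a = lhs.is_valid(i).then(|| lhs.value(i));
        let b = rhs.is_valid(i).then(|| rhs.value(i));
        builder.append_option(match (a, b) {
            (Some(a), Some(b)) => Some(T::test(a, b)),
            _ => None,
        });
    }
    Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}

/// Example predicate wired through the shared evaluator.
struct StartsWithDemo;

impl PairwisePredicate for StartsWithDemo {
    fn test(a: &str, b: &str) -> bool {
        a.starts_with(b)
    }
}
```

In the repository the blanket impl plays the role of `invoke_pairwise` and also supplies `name()`, `return_type()` and `signature()` for every predicate.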

View File

@@ -12,21 +12,21 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::LazyLock;
use std::sync::{Arc, LazyLock};
use common_query::error::{InvalidFuncArgsSnafu, Result};
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::arrow::datatypes::DataType;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::value::Value;
use datatypes::vectors::{MutableVector, StringVectorBuilder, UInt64VectorBuilder, VectorRef};
use common_query::error::InvalidFuncArgsSnafu;
use datafusion_common::ScalarValue;
use datafusion_common::arrow::array::{Array, AsArray, StringViewBuilder, UInt64Builder};
use datafusion_common::arrow::datatypes::{DataType, Float64Type};
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, TypeSignature, Volatility};
use derive_more::Display;
use s2::cellid::{CellID, MAX_LEVEL};
use s2::latlng::LatLng;
use snafu::ensure;
use crate::function::{Function, FunctionContext};
use crate::scalars::geo::helpers::{ensure_and_coerce, ensure_columns_len, ensure_columns_n};
use crate::function::{Function, extract_args};
use crate::scalars::geo::helpers;
use crate::scalars::geo::helpers::ensure_and_coerce;
static CELL_TYPES: LazyLock<Vec<DataType>> =
LazyLock::new(|| vec![DataType::Int64, DataType::UInt64]);
@@ -39,20 +39,14 @@ static LEVEL_TYPES: &[DataType] = datafusion_expr::type_coercion::aggregates::IN
/// Function that returns [s2] encoding cellid for a given geospatial coordinate.
///
/// [s2]: http://s2geometry.io
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct S2LatLngToCell;
pub(crate) struct S2LatLngToCell {
signature: Signature,
}
impl Function for S2LatLngToCell {
fn name(&self) -> &str {
"s2_latlng_to_cell"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::UInt64)
}
fn signature(&self) -> Signature {
impl Default for S2LatLngToCell {
fn default() -> Self {
let mut signatures = Vec::with_capacity(COORDINATE_TYPES.len());
for coord_type in COORDINATE_TYPES.as_slice() {
signatures.push(TypeSignature::Exact(vec![
@@ -62,21 +56,42 @@ impl Function for S2LatLngToCell {
coord_type.clone(),
]));
}
Signature::one_of(signatures, Volatility::Stable)
Self {
signature: Signature::one_of(signatures, Volatility::Stable),
}
}
}
impl Function for S2LatLngToCell {
fn name(&self) -> &str {
"s2_latlng_to_cell"
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::UInt64)
}
let lat_vec = &columns[0];
let lon_vec = &columns[1];
fn signature(&self) -> &Signature {
&self.signature
}
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0, arg1] = extract_args(self.name(), &args)?;
let arg0 = helpers::cast::<Float64Type>(&arg0)?;
let lat_vec = arg0.as_primitive::<Float64Type>();
let arg1 = helpers::cast::<Float64Type>(&arg1)?;
let lon_vec = arg1.as_primitive::<Float64Type>();
let size = lat_vec.len();
let mut results = UInt64VectorBuilder::with_capacity(size);
let mut builder = UInt64Builder::with_capacity(size);
for i in 0..size {
let lat = lat_vec.get(i).as_f64_lossy();
let lon = lon_vec.get(i).as_f64_lossy();
let lat = lat_vec.is_valid(i).then(|| lat_vec.value(i));
let lon = lon_vec.is_valid(i).then(|| lon_vec.value(i));
let result = match (lat, lon) {
(Some(lat), Some(lon)) => {
@@ -94,120 +109,159 @@ impl Function for S2LatLngToCell {
_ => None,
};
results.push(result);
builder.append_option(result);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
/// Return the level of current s2 cell
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct S2CellLevel;
pub(crate) struct S2CellLevel {
signature: Signature,
}
impl Default for S2CellLevel {
fn default() -> Self {
Self {
signature: signature_of_cell(),
}
}
}
impl Function for S2CellLevel {
fn name(&self) -> &str {
"s2_cell_level"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::UInt64)
}
fn signature(&self) -> Signature {
signature_of_cell()
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 1);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [cell_vec] = extract_args(self.name(), &args)?;
let cell_vec = &columns[0];
let size = cell_vec.len();
let mut results = UInt64VectorBuilder::with_capacity(size);
let mut builder = UInt64Builder::with_capacity(size);
for i in 0..size {
let cell = cell_from_value(cell_vec.get(i));
let res = cell.map(|cell| cell.level());
let v = ScalarValue::try_from_array(&cell_vec, i)?;
let v = cell_from_value(v).map(|x| x.level());
results.push(res);
builder.append_option(v);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
/// Return the string presentation of the cell
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct S2CellToToken;
pub(crate) struct S2CellToToken {
signature: Signature,
}
impl Default for S2CellToToken {
fn default() -> Self {
Self {
signature: signature_of_cell(),
}
}
}
impl Function for S2CellToToken {
fn name(&self) -> &str {
"s2_cell_to_token"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Utf8)
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Utf8View)
}
fn signature(&self) -> Signature {
signature_of_cell()
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 1);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [cell_vec] = extract_args(self.name(), &args)?;
let cell_vec = &columns[0];
let size = cell_vec.len();
let mut results = StringVectorBuilder::with_capacity(size);
let mut builder = StringViewBuilder::with_capacity(size);
for i in 0..size {
let cell = cell_from_value(cell_vec.get(i));
let res = cell.map(|cell| cell.to_token());
let v = ScalarValue::try_from_array(&cell_vec, i)?;
let v = cell_from_value(v).map(|x| x.to_token());
results.push(res.as_deref());
builder.append_option(v.as_deref());
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
/// Return parent at given level of current s2 cell
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct S2CellParent;
pub(crate) struct S2CellParent {
signature: Signature,
}
impl Default for S2CellParent {
fn default() -> Self {
Self {
signature: signature_of_cell_and_level(),
}
}
}
impl Function for S2CellParent {
fn name(&self) -> &str {
"s2_cell_parent"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::UInt64)
}
fn signature(&self) -> Signature {
signature_of_cell_and_level()
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [cell_vec, levels] = extract_args(self.name(), &args)?;
let cell_vec = &columns[0];
let level_vec = &columns[1];
let size = cell_vec.len();
let mut results = UInt64VectorBuilder::with_capacity(size);
let mut builder = UInt64Builder::with_capacity(size);
for i in 0..size {
let cell = cell_from_value(cell_vec.get(i));
let level = value_to_level(level_vec.get(i))?;
let result = cell.map(|cell| cell.parent(level).0);
let cell = ScalarValue::try_from_array(&cell_vec, i).map(cell_from_value)?;
let level = ScalarValue::try_from_array(&levels, i).and_then(value_to_level)?;
let result = if let (Some(cell), Some(level)) = (cell, level) {
Some(cell.parent(level).0)
} else {
None
};
results.push(result);
builder.append_option(result);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
@@ -233,24 +287,30 @@ fn signature_of_cell_and_level() -> Signature {
Signature::one_of(signatures, Volatility::Stable)
}
fn cell_from_value(v: Value) -> Option<CellID> {
fn cell_from_value(v: ScalarValue) -> Option<CellID> {
match v {
Value::Int64(v) => Some(CellID(v as u64)),
Value::UInt64(v) => Some(CellID(v)),
ScalarValue::Int64(v) => v.map(|x| CellID(x as u64)),
ScalarValue::UInt64(v) => v.map(CellID),
_ => None,
}
}
fn value_to_level(v: Value) -> Result<u64> {
fn value_to_level(v: ScalarValue) -> datafusion_common::Result<Option<u64>> {
match v {
Value::Int8(v) => ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i8, v as u64),
Value::Int16(v) => ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i16, v as u64),
Value::Int32(v) => ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i32, v as u64),
Value::Int64(v) => ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i64, v as u64),
Value::UInt8(v) => ensure_and_coerce!(v <= MAX_LEVEL as u8, v as u64),
Value::UInt16(v) => ensure_and_coerce!(v <= MAX_LEVEL as u16, v as u64),
Value::UInt32(v) => ensure_and_coerce!(v <= MAX_LEVEL as u32, v as u64),
Value::UInt64(v) => ensure_and_coerce!(v <= MAX_LEVEL, v),
_ => unreachable!(),
ScalarValue::Int8(Some(v)) => ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i8, v as u64),
ScalarValue::Int16(Some(v)) => {
ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i16, v as u64)
}
ScalarValue::Int32(Some(v)) => {
ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i32, v as u64)
}
ScalarValue::Int64(Some(v)) => {
ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i64, v as u64)
}
ScalarValue::UInt8(Some(v)) => ensure_and_coerce!(v <= MAX_LEVEL as u8, v as u64),
ScalarValue::UInt16(Some(v)) => ensure_and_coerce!(v <= MAX_LEVEL as u16, v as u64),
ScalarValue::UInt32(Some(v)) => ensure_and_coerce!(v <= MAX_LEVEL as u32, v as u64),
ScalarValue::UInt64(Some(v)) => ensure_and_coerce!(v <= MAX_LEVEL, v),
_ => Ok(None),
}
}
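
Where the s2 functions need one value at a time rather than a typed array (mixed Int64/UInt64 cell ids, bounded levels), the diff switches to `ScalarValue::try_from_array` plus exhaustive `ScalarValue` matching. A reduced sketch of that pattern; `MAX_LEVEL` is a local stand-in for `s2::cellid::MAX_LEVEL`, `cell_level_pairs` is our name, and the `CellID(...).parent(...)` call is left as a comment:

```rust
use datafusion_common::arrow::array::{Array, ArrayRef};
use datafusion_common::{DataFusionError, ScalarValue};

/// Stand-in for `s2::cellid::MAX_LEVEL`.
const MAX_LEVEL: u64 = 30;

/// Mirrors the new `value_to_level`: NULLs pass through as `None`, and
/// out-of-range or unsupported values become execution errors, not panics.
fn value_to_level(v: ScalarValue) -> datafusion_common::Result<Option<u64>> {
    let level = match v {
        ScalarValue::Int64(Some(v)) if v >= 0 => v as u64,
        ScalarValue::UInt64(Some(v)) => v,
        ScalarValue::Int64(None) | ScalarValue::UInt64(None) => return Ok(None),
        other => {
            return Err(DataFusionError::Execution(format!(
                "unsupported level value: {other:?}"
            )));
        }
    };
    if level <= MAX_LEVEL {
        Ok(Some(level))
    } else {
        Err(DataFusionError::Execution(format!(
            "level {level} is out of range (max {MAX_LEVEL})"
        )))
    }
}

/// Per-row extraction in the style of s2_cell_parent: both inputs are read as
/// ScalarValue and a row stays NULL unless both sides are present.
fn cell_level_pairs(
    cells: &ArrayRef,
    levels: &ArrayRef,
) -> datafusion_common::Result<Vec<Option<(u64, u64)>>> {
    let mut out = Vec::with_capacity(cells.len());
    for i in 0..cells.len() {
        let cell = match ScalarValue::try_from_array(cells, i)? {
            ScalarValue::Int64(Some(v)) => Some(v as u64),
            ScalarValue::UInt64(Some(v)) => Some(v),
            _ => None,
        };
        let level = ScalarValue::try_from_array(levels, i).and_then(value_to_level)?;
        // The real function feeds these into `CellID(cell).parent(level).0`.
        out.push(cell.zip(level));
    }
    Ok(out)
}
```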

View File

@@ -12,41 +12,34 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::LazyLock;
use std::sync::{Arc, LazyLock};
use common_error::ext::{BoxedError, PlainError};
use common_error::status_code::StatusCode;
use common_query::error::{self, Result};
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::arrow::datatypes::DataType;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::vectors::{MutableVector, StringVectorBuilder, VectorRef};
use datafusion_common::arrow::array::{Array, AsArray, StringViewBuilder};
use datafusion_common::arrow::datatypes::{DataType, Float64Type};
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, TypeSignature, Volatility};
use derive_more::Display;
use geo_types::{Geometry, Point};
use snafu::ResultExt;
use wkt::{ToWkt, TryFromWkt};
use crate::function::{Function, FunctionContext};
use crate::scalars::geo::helpers::{ensure_columns_len, ensure_columns_n};
use crate::function::{Function, extract_args};
use crate::scalars::geo::helpers;
static COORDINATE_TYPES: LazyLock<Vec<DataType>> =
LazyLock::new(|| vec![DataType::Float32, DataType::Float64]);
/// Return a WKT point in WGS84 (SRID: 4326) for the given latitude and longitude
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct LatLngToPointWkt;
pub(crate) struct LatLngToPointWkt {
signature: Signature,
}
impl Function for LatLngToPointWkt {
fn name(&self) -> &str {
"wkt_point_from_latlng"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Utf8)
}
fn signature(&self) -> Signature {
impl Default for LatLngToPointWkt {
fn default() -> Self {
let mut signatures = Vec::new();
for coord_type in COORDINATE_TYPES.as_slice() {
signatures.push(TypeSignature::Exact(vec![
@@ -56,31 +49,52 @@ impl Function for LatLngToPointWkt {
coord_type.clone(),
]));
}
Signature::one_of(signatures, Volatility::Stable)
Self {
signature: Signature::one_of(signatures, Volatility::Stable),
}
}
}
impl Function for LatLngToPointWkt {
fn name(&self) -> &str {
"wkt_point_from_latlng"
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Utf8View)
}
let lat_vec = &columns[0];
let lng_vec = &columns[1];
fn signature(&self) -> &Signature {
&self.signature
}
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0, arg1] = extract_args(self.name(), &args)?;
let arg0 = helpers::cast::<Float64Type>(&arg0)?;
let lat_vec = arg0.as_primitive::<Float64Type>();
let arg1 = helpers::cast::<Float64Type>(&arg1)?;
let lng_vec = arg1.as_primitive::<Float64Type>();
let size = lat_vec.len();
let mut results = StringVectorBuilder::with_capacity(size);
let mut builder = StringViewBuilder::with_capacity(size);
for i in 0..size {
let lat = lat_vec.get(i).as_f64_lossy();
let lng = lng_vec.get(i).as_f64_lossy();
let lat = lat_vec.is_valid(i).then(|| lat_vec.value(i));
let lng = lng_vec.is_valid(i).then(|| lng_vec.value(i));
let result = match (lat, lng) {
(Some(lat), Some(lng)) => Some(Point::new(lng, lat).wkt_string()),
_ => None,
};
results.push(result.as_deref());
builder.append_option(result.as_deref());
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
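
For wkt_point_from_latlng the only geometry work is `Point::new(lng, lat).wkt_string()`; the rest is Arrow plumbing. A standalone sketch of the body (the `latlng_to_wkt` name is ours), using a plain `compute::cast` where the repository calls its internal `helpers::cast::<Float64Type>`:

```rust
use std::sync::Arc;

use datafusion_common::arrow::array::{Array, ArrayRef, AsArray, StringViewBuilder};
use datafusion_common::arrow::compute;
use datafusion_common::arrow::datatypes::{DataType, Float64Type};
use geo_types::Point;
use wkt::ToWkt;

/// lat/lng columns in, Utf8View column of WKT points out.
fn latlng_to_wkt(lat: &ArrayRef, lng: &ArrayRef) -> datafusion_common::Result<ArrayRef> {
    let lat_arr = compute::cast(lat, &DataType::Float64)?;
    let lng_arr = compute::cast(lng, &DataType::Float64)?;
    let lat = lat_arr.as_primitive::<Float64Type>();
    let lng = lng_arr.as_primitive::<Float64Type>();

    let mut builder = StringViewBuilder::with_capacity(lat.len());
    for i in 0..lat.len() {
        let row = match (
            lat.is_valid(i).then(|| lat.value(i)),
            lng.is_valid(i).then(|| lng.value(i)),
        ) {
            // Note the (lng, lat) argument order: WKT points are "POINT(x y)".
            (Some(lat), Some(lng)) => Some(Point::new(lng, lat).wkt_string()),
            _ => None,
        };
        builder.append_option(row.as_deref());
    }
    Ok(Arc::new(builder.finish()))
}
```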

View File

@@ -16,18 +16,16 @@
use std::fmt;
use std::fmt::Display;
use std::sync::Arc;
use common_query::error::{DowncastVectorSnafu, InvalidFuncArgsSnafu, Result};
use datafusion_expr::{Signature, Volatility};
use datafusion_common::DataFusionError;
use datafusion_common::arrow::array::{Array, AsArray, UInt64Builder};
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, Volatility};
use datatypes::arrow::datatypes::DataType;
use datatypes::prelude::Vector;
use datatypes::scalars::{ScalarVector, ScalarVectorBuilder};
use datatypes::vectors::{BinaryVector, MutableVector, UInt64VectorBuilder, VectorRef};
use hyperloglogplus::HyperLogLog;
use snafu::OptionExt;
use crate::aggrs::approximate::hll::HllStateType;
use crate::function::{Function, FunctionContext};
use crate::function::{Function, extract_args};
use crate::function_registry::FunctionRegistry;
const NAME: &str = "hll_count";
@@ -38,12 +36,22 @@ const NAME: &str = "hll_count";
/// 1. The serialized HyperLogLogPlus state, as produced by the aggregator (binary).
///
/// For each row, it deserializes the sketch and returns the estimated cardinality.
#[derive(Debug, Default)]
pub struct HllCalcFunction;
#[derive(Debug)]
pub(crate) struct HllCalcFunction {
signature: Signature,
}
impl HllCalcFunction {
pub fn register(registry: &FunctionRegistry) {
registry.register_scalar(HllCalcFunction);
registry.register_scalar(HllCalcFunction::default());
}
}
impl Default for HllCalcFunction {
fn default() -> Self {
Self {
signature: Signature::exact(vec![DataType::Binary], Volatility::Immutable),
}
}
}
@@ -58,37 +66,35 @@ impl Function for HllCalcFunction {
NAME
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::UInt64)
}
fn signature(&self) -> Signature {
// Only argument: HyperLogLogPlus state (binary)
Signature::exact(vec![DataType::Binary], Volatility::Immutable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
if columns.len() != 1 {
return InvalidFuncArgsSnafu {
err_msg: format!("hll_count expects 1 argument, got {}", columns.len()),
}
.fail();
}
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0] = extract_args(self.name(), &args)?;
let hll_vec = columns[0]
.as_any()
.downcast_ref::<BinaryVector>()
.with_context(|| DowncastVectorSnafu {
err_msg: format!("expect BinaryVector, got {}", columns[0].vector_type_name()),
})?;
let Some(hll_vec) = arg0.as_binary_opt::<i32>() else {
return Err(DataFusionError::Execution(format!(
"'{}' expects argument to be Binary datatype, got {}",
self.name(),
arg0.data_type()
)));
};
let len = hll_vec.len();
let mut builder = UInt64VectorBuilder::with_capacity(len);
let mut builder = UInt64Builder::with_capacity(len);
for i in 0..len {
let hll_opt = hll_vec.get_data(i);
let hll_opt = hll_vec.is_valid(i).then(|| hll_vec.value(i));
if hll_opt.is_none() {
builder.push_null();
builder.append_null();
continue;
}
@@ -99,15 +105,15 @@ impl Function for HllCalcFunction {
Ok(h) => h,
Err(e) => {
common_telemetry::trace!("Failed to deserialize HyperLogLogPlus: {}", e);
builder.push_null();
builder.append_null();
continue;
}
};
builder.push(Some(hll.count().round() as u64));
builder.append_value(hll.count().round() as u64);
}
Ok(builder.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
@@ -115,14 +121,16 @@ impl Function for HllCalcFunction {
mod tests {
use std::sync::Arc;
use datatypes::vectors::BinaryVector;
use arrow_schema::Field;
use datafusion_common::arrow::array::BinaryArray;
use datafusion_common::arrow::datatypes::UInt64Type;
use super::*;
use crate::utils::FixedRandomState;
#[test]
fn test_hll_count_function() {
let function = HllCalcFunction;
let function = HllCalcFunction::default();
assert_eq!("hll_count", function.name());
assert_eq!(
DataType::UInt64,
@@ -136,38 +144,66 @@ mod tests {
}
let serialized_bytes = bincode::serialize(&hll).unwrap();
let args: Vec<VectorRef> = vec![Arc::new(BinaryVector::from(vec![Some(serialized_bytes)]))];
let args = vec![ColumnarValue::Array(Arc::new(BinaryArray::from_iter(
vec![Some(serialized_bytes)],
)))];
let result = function.eval(&FunctionContext::default(), &args).unwrap();
let result = function
.invoke_with_args(ScalarFunctionArgs {
args,
arg_fields: vec![],
number_rows: 1,
return_field: Arc::new(Field::new("x", DataType::UInt64, false)),
config_options: Arc::new(Default::default()),
})
.unwrap();
let ColumnarValue::Array(result) = result else {
unreachable!()
};
let result = result.as_primitive::<UInt64Type>();
assert_eq!(result.len(), 1);
// Test cardinality estimate
if let datatypes::value::Value::UInt64(v) = result.get(0) {
assert_eq!(v, 10);
} else {
panic!("Expected uint64 value");
}
assert_eq!(result.value(0), 10);
}
#[test]
fn test_hll_count_function_errors() {
let function = HllCalcFunction;
let function = HllCalcFunction::default();
// Test with invalid number of arguments
let args: Vec<VectorRef> = vec![];
let result = function.eval(&FunctionContext::default(), &args);
let result = function.invoke_with_args(ScalarFunctionArgs {
args: vec![],
arg_fields: vec![],
number_rows: 0,
return_field: Arc::new(Field::new("x", DataType::UInt64, false)),
config_options: Arc::new(Default::default()),
});
assert!(result.is_err());
assert!(
result
.unwrap_err()
.to_string()
.contains("hll_count expects 1 argument")
.contains("Execution error: hll_count function requires 1 argument, got 0")
);
// Test with invalid binary data
let args: Vec<VectorRef> = vec![Arc::new(BinaryVector::from(vec![Some(vec![1, 2, 3])]))]; // Invalid binary data
let result = function.eval(&FunctionContext::default(), &args).unwrap();
let result = function
.invoke_with_args(ScalarFunctionArgs {
args: vec![ColumnarValue::Array(Arc::new(BinaryArray::from_iter(
vec![Some(vec![1, 2, 3])],
)))],
arg_fields: vec![],
number_rows: 0,
return_field: Arc::new(Field::new("x", DataType::UInt64, false)),
config_options: Arc::new(Default::default()),
})
.unwrap();
let ColumnarValue::Array(result) = result else {
unreachable!()
};
let result = result.as_primitive::<UInt64Type>();
assert_eq!(result.len(), 1);
assert!(matches!(result.get(0), datatypes::value::Value::Null));
assert!(result.is_null(0));
}
}
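
The rewritten tests construct `ScalarFunctionArgs` by hand each time. A small helper in the same spirit (hypothetical, not part of the repository) shows the pieces involved: arrays wrapped in `ColumnarValue::Array`, empty `arg_fields`, a `return_field`, and default `ConfigOptions`:

```rust
use std::sync::Arc;

use arrow_schema::Field;
use datafusion_common::arrow::array::{Array, ArrayRef};
use datafusion_common::arrow::datatypes::DataType;
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs};

/// Wrap plain Arrow arrays into the `ScalarFunctionArgs` that
/// `invoke_with_args` expects, the way the updated unit tests do.
fn test_args(arrays: Vec<ArrayRef>, return_type: DataType) -> ScalarFunctionArgs {
    let number_rows = arrays.first().map(|a| a.len()).unwrap_or(0);
    ScalarFunctionArgs {
        args: arrays.into_iter().map(ColumnarValue::Array).collect(),
        arg_fields: vec![],
        number_rows,
        return_field: Arc::new(Field::new("x", return_type, false)),
        config_options: Arc::new(Default::default()),
    }
}
```

Results come back as a `ColumnarValue`; the tests materialize them with `to_array(n)` and downcast with `as_primitive` / `as_string_view` for the assertions.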

View File

@@ -30,14 +30,14 @@ impl IpFunctions {
pub fn register(registry: &FunctionRegistry) {
// Register IPv4 functions
registry.register_scalar(Ipv4NumToString::default());
registry.register_scalar(Ipv4StringToNum);
registry.register_scalar(Ipv4ToCidr);
registry.register_scalar(Ipv4InRange);
registry.register_scalar(Ipv4StringToNum::default());
registry.register_scalar(Ipv4ToCidr::default());
registry.register_scalar(Ipv4InRange::default());
// Register IPv6 functions
registry.register_scalar(Ipv6NumToString);
registry.register_scalar(Ipv6StringToNum);
registry.register_scalar(Ipv6ToCidr);
registry.register_scalar(Ipv6InRange);
registry.register_scalar(Ipv6NumToString::default());
registry.register_scalar(Ipv6StringToNum::default());
registry.register_scalar(Ipv6ToCidr::default());
registry.register_scalar(Ipv6InRange::default());
}
}

View File

@@ -14,18 +14,21 @@
use std::net::{Ipv4Addr, Ipv6Addr};
use std::str::FromStr;
use std::sync::Arc;
use common_query::error::{InvalidFuncArgsSnafu, Result};
use datafusion_common::types;
use datafusion_expr::{Coercion, Signature, TypeSignature, TypeSignatureClass, Volatility};
use datatypes::arrow::datatypes::DataType;
use datatypes::prelude::Value;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::vectors::{MutableVector, StringVectorBuilder, VectorRef};
use datafusion_common::arrow::array::{Array, AsArray, StringViewBuilder};
use datafusion_common::arrow::compute;
use datafusion_common::arrow::datatypes::{DataType, UInt8Type};
use datafusion_common::{DataFusionError, types};
use datafusion_expr::{
Coercion, ColumnarValue, ScalarFunctionArgs, Signature, TypeSignature, TypeSignatureClass,
Volatility,
};
use derive_more::Display;
use snafu::ensure;
use crate::function::{Function, FunctionContext};
use crate::function::Function;
/// Function that converts an IPv4 address string to CIDR notation.
///
@@ -36,45 +39,65 @@ use crate::function::{Function, FunctionContext};
/// - ipv4_to_cidr('192.168.1.0') -> '192.168.1.0/24'
/// - ipv4_to_cidr('192.168') -> '192.168.0.0/16'
/// - ipv4_to_cidr('192.168.1.1', 24) -> '192.168.1.0/24'
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct Ipv4ToCidr;
pub(crate) struct Ipv4ToCidr {
signature: Signature,
}
impl Default for Ipv4ToCidr {
fn default() -> Self {
Self {
signature: Signature::one_of(
vec![
TypeSignature::String(1),
TypeSignature::Coercible(vec![
Coercion::new_exact(TypeSignatureClass::Native(types::logical_string())),
Coercion::new_exact(TypeSignatureClass::Integer),
]),
],
Volatility::Immutable,
),
}
}
}
impl Function for Ipv4ToCidr {
fn name(&self) -> &str {
"ipv4_to_cidr"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Utf8)
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Utf8View)
}
fn signature(&self) -> Signature {
Signature::one_of(
vec![
TypeSignature::String(1),
TypeSignature::Coercible(vec![
Coercion::new_exact(TypeSignatureClass::Native(types::logical_string())),
Coercion::new_exact(TypeSignatureClass::Integer),
]),
],
Volatility::Immutable,
)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 1 || columns.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!("Expected 1 or 2 arguments, got {}", columns.len())
}
);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
if args.args.len() != 1 && args.args.len() != 2 {
return Err(DataFusionError::Execution(format!(
"expecting 1 or 2 arguments, got {}",
args.args.len()
)));
}
let columns = ColumnarValue::values_to_arrays(&args.args)?;
let ip_vec = &columns[0];
let mut results = StringVectorBuilder::with_capacity(ip_vec.len());
let mut builder = StringViewBuilder::with_capacity(ip_vec.len());
let arg0 = compute::cast(ip_vec, &DataType::Utf8View)?;
let ip_vec = arg0.as_string_view();
let has_subnet_arg = columns.len() == 2;
let subnet_vec = if has_subnet_arg {
let maybe_arg1 = if columns.len() > 1 {
Some(compute::cast(&columns[1], &DataType::UInt8)?)
} else {
None
};
let subnets = if let Some(arg1) = maybe_arg1.as_ref() {
ensure!(
columns[1].len() == ip_vec.len(),
InvalidFuncArgsSnafu {
@@ -83,23 +106,19 @@ impl Function for Ipv4ToCidr {
.to_string()
}
);
Some(&columns[1])
Some(arg1.as_primitive::<UInt8Type>())
} else {
None
};
for i in 0..ip_vec.len() {
let ip_str = ip_vec.get(i);
let subnet = subnet_vec.map(|v| v.get(i));
let ip_str = ip_vec.is_valid(i).then(|| ip_vec.value(i));
let subnet = subnets.and_then(|v| v.is_valid(i).then(|| v.value(i)));
let cidr = match (ip_str, subnet) {
(Value::String(s), Some(Value::UInt8(mask))) => {
let ip_str = s.as_utf8().trim();
(Some(ip_str), Some(mask)) => {
if ip_str.is_empty() {
return InvalidFuncArgsSnafu {
err_msg: "Empty IPv4 address".to_string(),
}
.fail();
return Err(DataFusionError::Execution("empty IPv4 address".to_string()));
}
let ip_addr = complete_and_parse_ipv4(ip_str)?;
@@ -109,13 +128,9 @@ impl Function for Ipv4ToCidr {
Some(format!("{}/{}", masked_ip, mask))
}
(Value::String(s), None) => {
let ip_str = s.as_utf8().trim();
(Some(ip_str), None) => {
if ip_str.is_empty() {
return InvalidFuncArgsSnafu {
err_msg: "Empty IPv4 address".to_string(),
}
.fail();
return Err(DataFusionError::Execution("empty IPv4 address".to_string()));
}
let ip_addr = complete_and_parse_ipv4(ip_str)?;
@@ -149,10 +164,10 @@ impl Function for Ipv4ToCidr {
_ => None,
};
results.push(cidr.as_deref());
builder.append_option(cidr.as_deref());
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
@@ -165,60 +180,74 @@ impl Function for Ipv4ToCidr {
/// - ipv6_to_cidr('2001:db8::') -> '2001:db8::/32'
/// - ipv6_to_cidr('2001:db8') -> '2001:db8::/32'
/// - ipv6_to_cidr('2001:db8::', 48) -> '2001:db8::/48'
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct Ipv6ToCidr;
pub(crate) struct Ipv6ToCidr {
signature: Signature,
}
impl Default for Ipv6ToCidr {
fn default() -> Self {
Self {
signature: Signature::one_of(
vec![
TypeSignature::String(1),
TypeSignature::Exact(vec![DataType::Utf8, DataType::UInt8]),
],
Volatility::Immutable,
),
}
}
}
impl Function for Ipv6ToCidr {
fn name(&self) -> &str {
"ipv6_to_cidr"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Utf8)
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Utf8View)
}
fn signature(&self) -> Signature {
Signature::one_of(
vec![
TypeSignature::String(1),
TypeSignature::Exact(vec![DataType::Utf8, DataType::UInt8]),
],
Volatility::Immutable,
)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 1 || columns.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!("Expected 1 or 2 arguments, got {}", columns.len())
}
);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
if args.args.len() != 1 && args.args.len() != 2 {
return Err(DataFusionError::Execution(format!(
"expecting 1 or 2 arguments, got {}",
args.args.len()
)));
}
let columns = ColumnarValue::values_to_arrays(&args.args)?;
let ip_vec = &columns[0];
let size = ip_vec.len();
let mut results = StringVectorBuilder::with_capacity(size);
let mut builder = StringViewBuilder::with_capacity(size);
let arg0 = compute::cast(ip_vec, &DataType::Utf8View)?;
let ip_vec = arg0.as_string_view();
let has_subnet_arg = columns.len() == 2;
let subnet_vec = if has_subnet_arg {
Some(&columns[1])
let maybe_arg1 = if columns.len() > 1 {
Some(compute::cast(&columns[1], &DataType::UInt8)?)
} else {
None
};
let subnets = maybe_arg1
.as_ref()
.map(|arg1| arg1.as_primitive::<UInt8Type>());
for i in 0..size {
let ip_str = ip_vec.get(i);
let subnet = subnet_vec.map(|v| v.get(i));
let ip_str = ip_vec.is_valid(i).then(|| ip_vec.value(i));
let subnet = subnets.and_then(|v| v.is_valid(i).then(|| v.value(i)));
let cidr = match (ip_str, subnet) {
(Value::String(s), Some(Value::UInt8(mask))) => {
let ip_str = s.as_utf8().trim();
(Some(ip_str), Some(mask)) => {
if ip_str.is_empty() {
return InvalidFuncArgsSnafu {
err_msg: "Empty IPv6 address".to_string(),
}
.fail();
return Err(DataFusionError::Execution("empty IPv6 address".to_string()));
}
let ip_addr = complete_and_parse_ipv6(ip_str)?;
@@ -228,13 +257,9 @@ impl Function for Ipv6ToCidr {
Some(format!("{}/{}", masked_ip, mask))
}
(Value::String(s), None) => {
let ip_str = s.as_utf8().trim();
(Some(ip_str), None) => {
if ip_str.is_empty() {
return InvalidFuncArgsSnafu {
err_msg: "Empty IPv6 address".to_string(),
}
.fail();
return Err(DataFusionError::Execution("empty IPv6 address".to_string()));
}
let ip_addr = complete_and_parse_ipv6(ip_str)?;
@@ -250,10 +275,10 @@ impl Function for Ipv6ToCidr {
_ => None,
};
results.push(cidr.as_deref());
builder.append_option(cidr.as_deref());
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
@@ -375,108 +400,148 @@ fn auto_detect_ipv6_subnet(addr: &Ipv6Addr) -> u8 {
#[cfg(test)]
mod tests {
use std::sync::Arc;
use datatypes::scalars::ScalarVector;
use datatypes::vectors::{StringVector, UInt8Vector};
use arrow_schema::Field;
use datafusion_common::arrow::array::{StringViewArray, UInt8Array};
use super::*;
#[test]
fn test_ipv4_to_cidr_auto() {
let func = Ipv4ToCidr;
let ctx = FunctionContext::default();
let func = Ipv4ToCidr::default();
// Test data with auto subnet detection
let values = vec!["192.168.1.0", "10.0.0.0", "172.16", "192"];
let input = Arc::new(StringVector::from_slice(&values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&values)));
let result = func.eval(&ctx, &[input]).unwrap();
let result = result.as_any().downcast_ref::<StringVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(4).unwrap();
let result = result.as_string_view();
assert_eq!(result.get_data(0).unwrap(), "192.168.1.0/24");
assert_eq!(result.get_data(1).unwrap(), "10.0.0.0/8");
assert_eq!(result.get_data(2).unwrap(), "172.16.0.0/16");
assert_eq!(result.get_data(3).unwrap(), "192.0.0.0/8");
assert_eq!(result.value(0), "192.168.1.0/24");
assert_eq!(result.value(1), "10.0.0.0/8");
assert_eq!(result.value(2), "172.16.0.0/16");
assert_eq!(result.value(3), "192.0.0.0/8");
}
#[test]
fn test_ipv4_to_cidr_with_subnet() {
let func = Ipv4ToCidr;
let ctx = FunctionContext::default();
let func = Ipv4ToCidr::default();
// Test data with explicit subnet
let ip_values = vec!["192.168.1.1", "10.0.0.1", "172.16.5.5"];
let subnet_values = vec![24u8, 16u8, 12u8];
let ip_input = Arc::new(StringVector::from_slice(&ip_values)) as VectorRef;
let subnet_input = Arc::new(UInt8Vector::from_vec(subnet_values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&ip_values)));
let arg1 = ColumnarValue::Array(Arc::new(UInt8Array::from(subnet_values)));
let result = func.eval(&ctx, &[ip_input, subnet_input]).unwrap();
let result = result.as_any().downcast_ref::<StringVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0, arg1],
arg_fields: vec![],
number_rows: 3,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(3).unwrap();
let result = result.as_string_view();
assert_eq!(result.get_data(0).unwrap(), "192.168.1.0/24");
assert_eq!(result.get_data(1).unwrap(), "10.0.0.0/16");
assert_eq!(result.get_data(2).unwrap(), "172.16.0.0/12");
assert_eq!(result.value(0), "192.168.1.0/24");
assert_eq!(result.value(1), "10.0.0.0/16");
assert_eq!(result.value(2), "172.16.0.0/12");
}
#[test]
fn test_ipv6_to_cidr_auto() {
let func = Ipv6ToCidr;
let ctx = FunctionContext::default();
let func = Ipv6ToCidr::default();
// Test data with auto subnet detection
let values = vec!["2001:db8::", "2001:db8", "fe80::1", "::1"];
let input = Arc::new(StringVector::from_slice(&values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&values)));
let result = func.eval(&ctx, &[input]).unwrap();
let result = result.as_any().downcast_ref::<StringVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(4).unwrap();
let result = result.as_string_view();
assert_eq!(result.get_data(0).unwrap(), "2001:db8::/32");
assert_eq!(result.get_data(1).unwrap(), "2001:db8::/32");
assert_eq!(result.get_data(2).unwrap(), "fe80::/16");
assert_eq!(result.get_data(3).unwrap(), "::1/128"); // Special case for ::1
assert_eq!(result.value(0), "2001:db8::/32");
assert_eq!(result.value(1), "2001:db8::/32");
assert_eq!(result.value(2), "fe80::/16");
assert_eq!(result.value(3), "::1/128"); // Special case for ::1
}
#[test]
fn test_ipv6_to_cidr_with_subnet() {
let func = Ipv6ToCidr;
let ctx = FunctionContext::default();
let func = Ipv6ToCidr::default();
// Test data with explicit subnet
let ip_values = vec!["2001:db8::", "fe80::1", "2001:db8:1234::"];
let subnet_values = vec![48u8, 10u8, 56u8];
let ip_input = Arc::new(StringVector::from_slice(&ip_values)) as VectorRef;
let subnet_input = Arc::new(UInt8Vector::from_vec(subnet_values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&ip_values)));
let arg1 = ColumnarValue::Array(Arc::new(UInt8Array::from(subnet_values)));
let result = func.eval(&ctx, &[ip_input, subnet_input]).unwrap();
let result = result.as_any().downcast_ref::<StringVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0, arg1],
arg_fields: vec![],
number_rows: 3,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(3).unwrap();
let result = result.as_string_view();
assert_eq!(result.get_data(0).unwrap(), "2001:db8::/48");
assert_eq!(result.get_data(1).unwrap(), "fe80::/10");
assert_eq!(result.get_data(2).unwrap(), "2001:db8:1234::/56");
assert_eq!(result.value(0), "2001:db8::/48");
assert_eq!(result.value(1), "fe80::/10");
assert_eq!(result.value(2), "2001:db8:1234::/56");
}
#[test]
fn test_invalid_inputs() {
let ipv4_func = Ipv4ToCidr;
let ipv6_func = Ipv6ToCidr;
let ctx = FunctionContext::default();
let ipv4_func = Ipv4ToCidr::default();
let ipv6_func = Ipv6ToCidr::default();
// Empty string should fail
let empty_values = vec![""];
let empty_input = Arc::new(StringVector::from_slice(&empty_values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&empty_values)));
let ipv4_result = ipv4_func.eval(&ctx, std::slice::from_ref(&empty_input));
let ipv6_result = ipv6_func.eval(&ctx, std::slice::from_ref(&empty_input));
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 1,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let ipv4_result = ipv4_func.invoke_with_args(args.clone());
let ipv6_result = ipv6_func.invoke_with_args(args);
assert!(ipv4_result.is_err());
assert!(ipv6_result.is_err());
// Invalid IP formats should fail
let invalid_values = vec!["not an ip", "192.168.1.256", "zzzz::ffff"];
let invalid_input = Arc::new(StringVector::from_slice(&invalid_values)) as VectorRef;
let arg0 =
ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&invalid_values)));
let ipv4_result = ipv4_func.eval(&ctx, std::slice::from_ref(&invalid_input));
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 3,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let ipv4_result = ipv4_func.invoke_with_args(args);
assert!(ipv4_result.is_err());
}
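
ipv4_to_cidr and ipv6_to_cidr are the only functions here with an optional argument, so they keep an explicit arity check and use `ColumnarValue::values_to_arrays` instead of a fixed-size `extract_args`. A skeleton of that handling (the `to_cidr_skeleton` name is illustrative), with the actual CIDR normalization elided; the subnet fallback below is a placeholder, not the real auto-detection:

```rust
use std::sync::Arc;

use datafusion_common::DataFusionError;
use datafusion_common::arrow::array::{Array, AsArray, StringViewBuilder};
use datafusion_common::arrow::compute;
use datafusion_common::arrow::datatypes::{DataType, UInt8Type};
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs};

fn to_cidr_skeleton(args: ScalarFunctionArgs) -> datafusion_common::Result<ColumnarValue> {
    // The signature allows 1 or 2 arguments, so the check stays in the body.
    if args.args.len() != 1 && args.args.len() != 2 {
        return Err(DataFusionError::Execution(format!(
            "expecting 1 or 2 arguments, got {}",
            args.args.len()
        )));
    }
    let columns = ColumnarValue::values_to_arrays(&args.args)?;

    let ip_arr = compute::cast(&columns[0], &DataType::Utf8View)?;
    let ips = ip_arr.as_string_view();
    // The optional subnet column is cast to UInt8 once, up front.
    let subnet_arr = if columns.len() > 1 {
        Some(compute::cast(&columns[1], &DataType::UInt8)?)
    } else {
        None
    };
    let subnets = subnet_arr.as_ref().map(|a| a.as_primitive::<UInt8Type>());

    let mut builder = StringViewBuilder::with_capacity(ips.len());
    for i in 0..ips.len() {
        let ip = ips.is_valid(i).then(|| ips.value(i).trim());
        let subnet = subnets.and_then(|s| s.is_valid(i).then(|| s.value(i)));
        let cidr = match (ip, subnet) {
            (Some(ip), Some(mask)) => Some(format!("{ip}/{mask}")),
            // Placeholder: the real functions auto-detect the subnet here.
            (Some(ip), None) => Some(format!("{ip}/32")),
            _ => None,
        };
        builder.append_option(cidr.as_deref());
    }
    Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
```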

View File

@@ -14,16 +14,16 @@
use std::net::Ipv4Addr;
use std::str::FromStr;
use std::sync::Arc;
use common_query::error::{InvalidFuncArgsSnafu, Result};
use datafusion_expr::{Signature, TypeSignature, Volatility};
use datatypes::arrow::datatypes::DataType;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::vectors::{MutableVector, StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use common_query::error::InvalidFuncArgsSnafu;
use datafusion_common::arrow::array::{Array, AsArray, StringViewBuilder, UInt32Builder};
use datafusion_common::arrow::compute;
use datafusion_common::arrow::datatypes::{DataType, UInt32Type};
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, TypeSignature, Volatility};
use derive_more::Display;
use snafu::ensure;
use crate::function::{Function, FunctionContext};
use crate::function::{Function, extract_args};
/// Function that converts a UInt32 number to an IPv4 address string.
///
@@ -36,12 +36,17 @@ use crate::function::{Function, FunctionContext};
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct Ipv4NumToString {
signature: Signature,
aliases: [String; 1],
}
impl Default for Ipv4NumToString {
fn default() -> Self {
Self {
signature: Signature::new(
TypeSignature::Exact(vec![DataType::UInt32]),
Volatility::Immutable,
),
aliases: ["inet_ntoa".to_string()],
}
}
@@ -52,33 +57,28 @@ impl Function for Ipv4NumToString {
"ipv4_num_to_string"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Utf8)
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Utf8View)
}
fn signature(&self) -> Signature {
Signature::new(
TypeSignature::Exact(vec![DataType::UInt32]),
Volatility::Immutable,
)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!("Expected 1 argument, got {}", columns.len())
}
);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0] = extract_args(self.name(), &args)?;
let uint_vec = arg0.as_primitive::<UInt32Type>();
let uint_vec = &columns[0];
let size = uint_vec.len();
let mut results = StringVectorBuilder::with_capacity(size);
let mut builder = StringViewBuilder::with_capacity(size);
for i in 0..size {
let ip_num = uint_vec.get(i);
let ip_num = uint_vec.is_valid(i).then(|| uint_vec.value(i));
let ip_str = match ip_num {
datatypes::value::Value::UInt32(num) => {
Some(num) => {
// Convert UInt32 to IPv4 string (A.B.C.D format)
let a = (num >> 24) & 0xFF;
let b = (num >> 16) & 0xFF;
@@ -89,10 +89,10 @@ impl Function for Ipv4NumToString {
_ => None,
};
results.push(ip_str.as_deref());
builder.append_option(ip_str.as_deref());
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
fn aliases(&self) -> &[String] {
@@ -106,40 +106,48 @@ impl Function for Ipv4NumToString {
/// - "10.0.0.1" returns 167772161
/// - "192.168.0.1" returns 3232235521
/// - Invalid IPv4 format throws an exception
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct Ipv4StringToNum;
pub(crate) struct Ipv4StringToNum {
signature: Signature,
}
impl Default for Ipv4StringToNum {
fn default() -> Self {
Self {
signature: Signature::string(1, Volatility::Immutable),
}
}
}
impl Function for Ipv4StringToNum {
fn name(&self) -> &str {
"ipv4_string_to_num"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::UInt32)
}
fn signature(&self) -> Signature {
Signature::string(1, Volatility::Immutable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!("Expected 1 argument, got {}", columns.len())
}
);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0] = extract_args(self.name(), &args)?;
let ip_vec = &columns[0];
let arg0 = compute::cast(&arg0, &DataType::Utf8View)?;
let ip_vec = arg0.as_string_view();
let size = ip_vec.len();
let mut results = UInt32VectorBuilder::with_capacity(size);
let mut builder = UInt32Builder::with_capacity(size);
for i in 0..size {
let ip_str = ip_vec.get(i);
let ip_str = ip_vec.is_valid(i).then(|| ip_vec.value(i));
let ip_num = match ip_str {
datatypes::value::Value::String(s) => {
let ip_str = s.as_utf8();
Some(ip_str) => {
let ip_addr = Ipv4Addr::from_str(ip_str).map_err(|_| {
InvalidFuncArgsSnafu {
err_msg: format!("Invalid IPv4 address format: {}", ip_str),
@@ -151,10 +159,10 @@ impl Function for Ipv4StringToNum {
_ => None,
};
results.push(ip_num);
builder.append_option(ip_num);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
@@ -162,66 +170,92 @@ impl Function for Ipv4StringToNum {
mod tests {
use std::sync::Arc;
use datatypes::scalars::ScalarVector;
use datatypes::vectors::{StringVector, UInt32Vector};
use arrow_schema::Field;
use datafusion_common::arrow::array::{StringViewArray, UInt32Array};
use super::*;
#[test]
fn test_ipv4_num_to_string() {
let func = Ipv4NumToString::default();
let ctx = FunctionContext::default();
// Test data
let values = vec![167772161u32, 3232235521u32, 0u32, 4294967295u32];
let input = Arc::new(UInt32Vector::from_vec(values)) as VectorRef;
let input = ColumnarValue::Array(Arc::new(UInt32Array::from(values)));
let result = func.eval(&ctx, &[input]).unwrap();
let result = result.as_any().downcast_ref::<StringVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![input],
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(4).unwrap();
let result = result.as_string_view();
assert_eq!(result.get_data(0).unwrap(), "10.0.0.1");
assert_eq!(result.get_data(1).unwrap(), "192.168.0.1");
assert_eq!(result.get_data(2).unwrap(), "0.0.0.0");
assert_eq!(result.get_data(3).unwrap(), "255.255.255.255");
assert_eq!(result.value(0), "10.0.0.1");
assert_eq!(result.value(1), "192.168.0.1");
assert_eq!(result.value(2), "0.0.0.0");
assert_eq!(result.value(3), "255.255.255.255");
}
#[test]
fn test_ipv4_string_to_num() {
let func = Ipv4StringToNum;
let ctx = FunctionContext::default();
let func = Ipv4StringToNum::default();
// Test data
let values = vec!["10.0.0.1", "192.168.0.1", "0.0.0.0", "255.255.255.255"];
let input = Arc::new(StringVector::from_slice(&values)) as VectorRef;
let input = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&values)));
let result = func.eval(&ctx, &[input]).unwrap();
let result = result.as_any().downcast_ref::<UInt32Vector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![input],
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new("x", DataType::UInt32, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(4).unwrap();
let result = result.as_primitive::<UInt32Type>();
assert_eq!(result.get_data(0).unwrap(), 167772161);
assert_eq!(result.get_data(1).unwrap(), 3232235521);
assert_eq!(result.get_data(2).unwrap(), 0);
assert_eq!(result.get_data(3).unwrap(), 4294967295);
assert_eq!(result.value(0), 167772161);
assert_eq!(result.value(1), 3232235521);
assert_eq!(result.value(2), 0);
assert_eq!(result.value(3), 4294967295);
}
#[test]
fn test_ipv4_conversions_roundtrip() {
let to_num = Ipv4StringToNum;
let to_num = Ipv4StringToNum::default();
let to_string = Ipv4NumToString::default();
let ctx = FunctionContext::default();
// Test data for string to num to string
let values = vec!["10.0.0.1", "192.168.0.1", "0.0.0.0", "255.255.255.255"];
let input = Arc::new(StringVector::from_slice(&values)) as VectorRef;
let input = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&values)));
let num_result = to_num.eval(&ctx, &[input]).unwrap();
let back_to_string = to_string.eval(&ctx, &[num_result]).unwrap();
let str_result = back_to_string
.as_any()
.downcast_ref::<StringVector>()
.unwrap();
let args = ScalarFunctionArgs {
args: vec![input],
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new("x", DataType::UInt32, false)),
config_options: Arc::new(Default::default()),
};
let result = to_num.invoke_with_args(args).unwrap();
let args = ScalarFunctionArgs {
args: vec![result],
arg_fields: vec![],
number_rows: 4,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = to_string.invoke_with_args(args).unwrap();
let result = result.to_array(4).unwrap();
let result = result.as_string_view();
for (i, expected) in values.iter().enumerate() {
assert_eq!(str_result.get_data(i).unwrap(), *expected);
assert_eq!(result.value(i), *expected);
}
}
}
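
The values asserted in these tests follow from the plain big-endian u32 view of the four octets, which std can compute directly; the arithmetic below reproduces the "10.0.0.1" <-> 167772161 example from the doc comments above (illustration only):

```rust
use std::net::Ipv4Addr;
use std::str::FromStr;

fn main() {
    // "10.0.0.1" -> 167772161, as in the ipv4_string_to_num examples.
    let n = u32::from(Ipv4Addr::from_str("10.0.0.1").unwrap());
    assert_eq!(n, 167_772_161);

    // Back again the way ipv4_num_to_string formats it: (num >> 24) & 0xFF, ...
    let (a, b, c, d) = ((n >> 24) & 0xFF, (n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF);
    assert_eq!(format!("{a}.{b}.{c}.{d}"), "10.0.0.1");

    // std agrees on the round trip.
    assert_eq!(Ipv4Addr::from(n).to_string(), "10.0.0.1");
}
```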

View File

@@ -14,57 +14,66 @@
use std::net::{Ipv4Addr, Ipv6Addr};
use std::str::FromStr;
use std::sync::Arc;
use common_query::error::{InvalidFuncArgsSnafu, Result};
use common_query::error::InvalidFuncArgsSnafu;
use datafusion::arrow::datatypes::DataType;
use datafusion_expr::{Signature, Volatility};
use datatypes::prelude::Value;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::vectors::{BinaryVectorBuilder, MutableVector, StringVectorBuilder, VectorRef};
use datafusion_common::DataFusionError;
use datafusion_common::arrow::array::{Array, AsArray, BinaryViewBuilder, StringViewBuilder};
use datafusion_common::arrow::compute;
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, Volatility};
use derive_more::Display;
use snafu::ensure;
use crate::function::{Function, FunctionContext};
use crate::function::{Function, extract_args};
/// Function that converts a hex string representation of an IPv6 address to a formatted string.
///
/// For example:
/// - "20010DB8000000000000000000000001" returns "2001:db8::1"
/// - "00000000000000000000FFFFC0A80001" returns "::ffff:192.168.0.1"
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct Ipv6NumToString;
pub(crate) struct Ipv6NumToString {
signature: Signature,
}
impl Default for Ipv6NumToString {
fn default() -> Self {
Self {
signature: Signature::string(1, Volatility::Immutable),
}
}
}
impl Function for Ipv6NumToString {
fn name(&self) -> &str {
"ipv6_num_to_string"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Utf8)
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Utf8View)
}
fn signature(&self) -> Signature {
Signature::string(1, Volatility::Immutable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!("Expected 1 argument, got {}", columns.len())
}
);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0] = extract_args(self.name(), &args)?;
let hex_vec = &columns[0];
let arg0 = compute::cast(&arg0, &DataType::Utf8View)?;
let hex_vec = arg0.as_string_view();
let size = hex_vec.len();
let mut results = StringVectorBuilder::with_capacity(size);
let mut builder = StringViewBuilder::with_capacity(size);
for i in 0..size {
let hex_str = hex_vec.get(i);
let hex_str = hex_vec.is_valid(i).then(|| hex_vec.value(i));
let ip_str = match hex_str {
Value::String(s) => {
let hex_str = s.as_utf8().to_lowercase();
Some(s) => {
let hex_str = s.to_lowercase();
// Validate and convert hex string to bytes
let bytes = if hex_str.len() == 32 {
@@ -80,10 +89,10 @@ impl Function for Ipv6NumToString {
}
bytes
} else {
return InvalidFuncArgsSnafu {
err_msg: format!("Expected 32 hex characters, got {}", hex_str.len()),
}
.fail();
return Err(DataFusionError::Execution(format!(
"expecting 32 hex characters, got {}",
hex_str.len()
)));
};
// Convert bytes to IPv6 address
@@ -106,10 +115,10 @@ impl Function for Ipv6NumToString {
_ => None,
};
results.push(ip_str.as_deref());
builder.append_option(ip_str.as_deref());
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
@@ -120,41 +129,48 @@ impl Function for Ipv6NumToString {
/// - If the input string contains a valid IPv4 address, returns its IPv6 equivalent
/// - HEX can be uppercase or lowercase
/// - Invalid IPv6 format throws an exception
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct Ipv6StringToNum;
pub(crate) struct Ipv6StringToNum {
signature: Signature,
}
impl Default for Ipv6StringToNum {
fn default() -> Self {
Self {
signature: Signature::string(1, Volatility::Immutable),
}
}
}
impl Function for Ipv6StringToNum {
fn name(&self) -> &str {
"ipv6_string_to_num"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
Ok(DataType::Binary)
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::BinaryView)
}
fn signature(&self) -> Signature {
Signature::string(1, Volatility::Immutable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!("Expected 1 argument, got {}", columns.len())
}
);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0] = extract_args(self.name(), &args)?;
let arg0 = compute::cast(&arg0, &DataType::Utf8View)?;
let ip_vec = arg0.as_string_view();
let ip_vec = &columns[0];
let size = ip_vec.len();
let mut results = BinaryVectorBuilder::with_capacity(size);
let mut builder = BinaryViewBuilder::with_capacity(size);
for i in 0..size {
let ip_str = ip_vec.get(i);
let ip_str = ip_vec.is_valid(i).then(|| ip_vec.value(i));
let ip_binary = match ip_str {
Value::String(s) => {
let addr_str = s.as_utf8();
Some(addr_str) => {
let addr = if let Ok(ipv6) = Ipv6Addr::from_str(addr_str) {
// Direct IPv6 address
ipv6
@@ -163,10 +179,10 @@ impl Function for Ipv6StringToNum {
ipv4.to_ipv6_mapped()
} else {
// Invalid format
return InvalidFuncArgsSnafu {
err_msg: format!("Invalid IPv6 address format: {}", addr_str),
}
.fail();
return Err(DataFusionError::Execution(format!(
"Invalid IPv6 address format: {}",
addr_str
)));
};
// Convert IPv6 address to binary (16 bytes)
@@ -176,10 +192,10 @@ impl Function for Ipv6StringToNum {
_ => None,
};
results.push(ip_binary.as_deref());
builder.append_option(ip_binary.as_deref());
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
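
Both IPv6 conversions lean on std once the input is normalized: 16 raw bytes map to an `Ipv6Addr` (whose display gives "2001:db8::1"), and IPv4 input is promoted with `to_ipv6_mapped()`. A small worked example matching the hex strings used in the tests that follow; the byte-wise hex parsing here is a simplified stand-in for the validation in `ipv6_num_to_string`:

```rust
use std::net::{Ipv4Addr, Ipv6Addr};
use std::str::FromStr;

fn main() {
    // 32 hex characters -> 16 bytes -> "2001:db8::1".
    let hex = "20010db8000000000000000000000001";
    assert_eq!(hex.len(), 32);
    let mut bytes = [0u8; 16];
    for (i, chunk) in hex.as_bytes().chunks(2).enumerate() {
        bytes[i] = u8::from_str_radix(std::str::from_utf8(chunk).unwrap(), 16).unwrap();
    }
    assert_eq!(Ipv6Addr::from(bytes).to_string(), "2001:db8::1");

    // IPv4 input is promoted to its IPv6-mapped form.
    let mapped = Ipv4Addr::new(192, 168, 0, 1).to_ipv6_mapped();
    assert_eq!(mapped.to_string(), "::ffff:192.168.0.1");

    // The reverse direction used by ipv6_string_to_num: 16 octets out.
    let octets = Ipv6Addr::from_str("2001:db8::1").unwrap().octets();
    assert_eq!((octets[0], octets[15]), (0x20, 0x01));
}
```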
@@ -188,15 +204,14 @@ mod tests {
use std::fmt::Write;
use std::sync::Arc;
use datatypes::scalars::ScalarVector;
use datatypes::vectors::{BinaryVector, StringVector, Vector};
use arrow_schema::Field;
use datafusion_common::arrow::array::StringViewArray;
use super::*;
#[test]
fn test_ipv6_num_to_string() {
let func = Ipv6NumToString;
let ctx = FunctionContext::default();
let func = Ipv6NumToString::default();
// Hex string for "2001:db8::1"
let hex_str1 = "20010db8000000000000000000000001";
@@ -205,62 +220,93 @@ mod tests {
let hex_str2 = "00000000000000000000ffffc0a80001";
let values = vec![hex_str1, hex_str2];
let input = Arc::new(StringVector::from_slice(&values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&values)));
let result = func.eval(&ctx, &[input]).unwrap();
let result = result.as_any().downcast_ref::<StringVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 2,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(2).unwrap();
let result = result.as_string_view();
assert_eq!(result.get_data(0).unwrap(), "2001:db8::1");
assert_eq!(result.get_data(1).unwrap(), "::ffff:192.168.0.1");
assert_eq!(result.value(0), "2001:db8::1");
assert_eq!(result.value(1), "::ffff:192.168.0.1");
}
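The tests in this module each hand-build the same ScalarFunctionArgs; a small helper along these lines (hypothetical, not part of the patch; field layout copied from the calls shown in this diff) would keep each case focused on inputs and expectations, e.g. func.invoke_with_args(string_args(&values, DataType::Utf8View)).

    use std::sync::Arc;

    use arrow_schema::{DataType, Field};
    use datafusion_common::arrow::array::StringViewArray;
    use datafusion_expr::{ColumnarValue, ScalarFunctionArgs};

    // Hypothetical test helper: builds a single string-view argument plus the
    // bookkeeping fields that every test below fills in by hand.
    fn string_args(values: &[&str], return_type: DataType) -> ScalarFunctionArgs {
        ScalarFunctionArgs {
            args: vec![ColumnarValue::Array(Arc::new(
                StringViewArray::from_iter_values(values),
            ))],
            arg_fields: vec![],
            number_rows: values.len(),
            return_field: Arc::new(Field::new("x", return_type, false)),
            config_options: Arc::new(Default::default()),
        }
    }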
#[test]
fn test_ipv6_num_to_string_uppercase() {
let func = Ipv6NumToString;
let ctx = FunctionContext::default();
let func = Ipv6NumToString::default();
// Uppercase hex string for "2001:db8::1"
let hex_str = "20010DB8000000000000000000000001";
let values = vec![hex_str];
let input = Arc::new(StringVector::from_slice(&values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&values)));
let result = func.eval(&ctx, &[input]).unwrap();
let result = result.as_any().downcast_ref::<StringVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 1,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(1).unwrap();
let result = result.as_string_view();
assert_eq!(result.get_data(0).unwrap(), "2001:db8::1");
assert_eq!(result.value(0), "2001:db8::1");
}
#[test]
fn test_ipv6_num_to_string_error() {
let func = Ipv6NumToString;
let ctx = FunctionContext::default();
let func = Ipv6NumToString::default();
// Invalid hex string - wrong length
let hex_str = "20010db8";
let values = vec![hex_str];
let input = Arc::new(StringVector::from_slice(&values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&values)));
// Should return an error
let result = func.eval(&ctx, &[input]);
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 1,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args);
assert!(result.is_err());
// Check that the error message contains expected text
let error_msg = result.unwrap_err().to_string();
assert!(error_msg.contains("Expected 32 hex characters"));
assert_eq!(
error_msg,
"Execution error: expecting 32 hex characters, got 8"
);
}
#[test]
fn test_ipv6_string_to_num() {
let func = Ipv6StringToNum;
let ctx = FunctionContext::default();
let func = Ipv6StringToNum::default();
let values = vec!["2001:db8::1", "::ffff:192.168.0.1", "192.168.0.1"];
let input = Arc::new(StringVector::from_slice(&values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&values)));
let result = func.eval(&ctx, &[input]).unwrap();
let result = result.as_any().downcast_ref::<BinaryVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 3,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(3).unwrap();
let result = result.as_binary_view();
// Expected binary for "2001:db8::1"
let expected_1 = [
@@ -272,33 +318,37 @@ mod tests {
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xFF, 0xFF, 0xC0, 0xA8, 0, 0x01,
];
assert_eq!(result.get_data(0).unwrap(), &expected_1);
assert_eq!(result.get_data(1).unwrap(), &expected_2);
assert_eq!(result.get_data(2).unwrap(), &expected_2);
assert_eq!(result.value(0), &expected_1);
assert_eq!(result.value(1), &expected_2);
assert_eq!(result.value(2), &expected_2);
}
#[test]
fn test_ipv6_conversions_roundtrip() {
let to_num = Ipv6StringToNum;
let to_string = Ipv6NumToString;
let ctx = FunctionContext::default();
let to_num = Ipv6StringToNum::default();
let to_string = Ipv6NumToString::default();
// Test data
let values = vec!["2001:db8::1", "::ffff:192.168.0.1"];
let input = Arc::new(StringVector::from_slice(&values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&values)));
// Convert IPv6 addresses to binary
let binary_result = to_num.eval(&ctx, std::slice::from_ref(&input)).unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 2,
return_field: Arc::new(Field::new("x", DataType::BinaryView, false)),
config_options: Arc::new(Default::default()),
};
let result = to_num.invoke_with_args(args).unwrap();
// Convert binary to hex string representation (for ipv6_num_to_string)
let mut hex_strings = Vec::new();
let binary_vector = binary_result
.as_any()
.downcast_ref::<BinaryVector>()
.unwrap();
let result = result.to_array(2).unwrap();
let binary_vector = result.as_binary_view();
for i in 0..binary_vector.len() {
let bytes = binary_vector.get_data(i).unwrap();
let bytes = binary_vector.value(i);
let hex = bytes.iter().fold(String::new(), |mut acc, b| {
write!(&mut acc, "{:02x}", b).unwrap();
acc
@@ -307,44 +357,60 @@ mod tests {
}
let hex_str_refs: Vec<&str> = hex_strings.iter().map(|s| s.as_str()).collect();
let hex_input = Arc::new(StringVector::from_slice(&hex_str_refs)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&hex_str_refs)));
// Now convert hex to formatted string
let string_result = to_string.eval(&ctx, &[hex_input]).unwrap();
let str_result = string_result
.as_any()
.downcast_ref::<StringVector>()
.unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 2,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = to_string.invoke_with_args(args).unwrap();
let result = result.to_array(2).unwrap();
let result = result.as_string_view();
// Compare with original input
assert_eq!(str_result.get_data(0).unwrap(), values[0]);
assert_eq!(str_result.get_data(1).unwrap(), values[1]);
assert_eq!(result.value(0), values[0]);
assert_eq!(result.value(1), values[1]);
}
#[test]
fn test_ipv6_conversions_hex_roundtrip() {
// Create a new test to verify that the string output from ipv6_num_to_string
// can be converted back using ipv6_string_to_num
let to_string = Ipv6NumToString;
let to_binary = Ipv6StringToNum;
let ctx = FunctionContext::default();
let to_string = Ipv6NumToString::default();
let to_binary = Ipv6StringToNum::default();
// Hex representation of IPv6 addresses
let hex_values = vec![
"20010db8000000000000000000000001",
"00000000000000000000ffffc0a80001",
];
let hex_input = Arc::new(StringVector::from_slice(&hex_values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&hex_values)));
// Convert hex to string representation
let string_result = to_string.eval(&ctx, &[hex_input]).unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0],
arg_fields: vec![],
number_rows: 2,
return_field: Arc::new(Field::new("x", DataType::Utf8View, false)),
config_options: Arc::new(Default::default()),
};
let result = to_string.invoke_with_args(args).unwrap();
// Then convert string representation back to binary
let binary_result = to_binary.eval(&ctx, &[string_result]).unwrap();
let bin_result = binary_result
.as_any()
.downcast_ref::<BinaryVector>()
.unwrap();
let args = ScalarFunctionArgs {
args: vec![result],
arg_fields: vec![],
number_rows: 2,
return_field: Arc::new(Field::new("x", DataType::BinaryView, false)),
config_options: Arc::new(Default::default()),
};
let result = to_binary.invoke_with_args(args).unwrap();
let result = result.to_array(2).unwrap();
let result = result.as_binary_view();
// Expected binary values
let expected_bin1 = [
@@ -354,7 +420,7 @@ mod tests {
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xFF, 0xFF, 0xC0, 0xA8, 0, 0x01,
];
assert_eq!(bin_result.get_data(0).unwrap(), &expected_bin1);
assert_eq!(bin_result.get_data(1).unwrap(), &expected_bin2);
assert_eq!(result.value(0), &expected_bin1);
assert_eq!(result.value(1), &expected_bin2);
}
}


@@ -14,17 +14,18 @@
use std::net::{Ipv4Addr, Ipv6Addr};
use std::str::FromStr;
use std::sync::Arc;
use common_query::error::{InvalidFuncArgsSnafu, Result};
use datafusion::arrow::datatypes::DataType;
use datafusion_expr::{Signature, Volatility};
use datatypes::prelude::Value;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::vectors::{BooleanVectorBuilder, MutableVector, VectorRef};
use datafusion_common::DataFusionError;
use datafusion_common::arrow::array::{Array, AsArray, BooleanBuilder};
use datafusion_common::arrow::compute;
use datafusion_common::arrow::datatypes::DataType;
use datafusion_expr::{ColumnarValue, ScalarFunctionArgs, Signature, Volatility};
use derive_more::Display;
use snafu::ensure;
use crate::function::{Function, FunctionContext};
use crate::function::{Function, extract_args};
/// Function that checks if an IPv4 address is within a specified CIDR range.
///
@@ -35,59 +36,58 @@ use crate::function::{Function, FunctionContext};
/// - ipv4_in_range('192.168.1.5', '192.168.1.0/24') -> true
/// - ipv4_in_range('192.168.2.1', '192.168.1.0/24') -> false
/// - ipv4_in_range('10.0.0.1', '10.0.0.0/8') -> true
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct Ipv4InRange;
pub(crate) struct Ipv4InRange {
signature: Signature,
}
impl Default for Ipv4InRange {
fn default() -> Self {
Self {
signature: Signature::string(2, Volatility::Immutable),
}
}
}
impl Function for Ipv4InRange {
fn name(&self) -> &str {
"ipv4_in_range"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Boolean)
}
fn signature(&self) -> Signature {
Signature::string(2, Volatility::Immutable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!("Expected 2 arguments, got {}", columns.len())
}
);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0, arg1] = extract_args(self.name(), &args)?;
let arg0 = compute::cast(&arg0, &DataType::Utf8View)?;
let ip_vec = arg0.as_string_view();
let arg1 = compute::cast(&arg1, &DataType::Utf8View)?;
let ranges = arg1.as_string_view();
let ip_vec = &columns[0];
let range_vec = &columns[1];
let size = ip_vec.len();
ensure!(
range_vec.len() == size,
InvalidFuncArgsSnafu {
err_msg: "IP addresses and CIDR ranges must have the same number of rows"
.to_string()
}
);
let mut results = BooleanVectorBuilder::with_capacity(size);
let mut builder = BooleanBuilder::with_capacity(size);
for i in 0..size {
let ip = ip_vec.get(i);
let range = range_vec.get(i);
let ip = ip_vec.is_valid(i).then(|| ip_vec.value(i));
let range = ranges.is_valid(i).then(|| ranges.value(i));
let in_range = match (ip, range) {
(Value::String(ip_str), Value::String(range_str)) => {
let ip_str = ip_str.as_utf8().trim();
let range_str = range_str.as_utf8().trim();
(Some(ip_str), Some(range_str)) => {
if ip_str.is_empty() || range_str.is_empty() {
return InvalidFuncArgsSnafu {
err_msg: "IP address and CIDR range cannot be empty".to_string(),
}
.fail();
return Err(DataFusionError::Execution(
"IP address or CIDR range cannot be empty".to_string(),
));
}
// Parse the IP address
@@ -107,10 +107,10 @@ impl Function for Ipv4InRange {
_ => None,
};
results.push(in_range);
builder.append_option(in_range);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
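The hunk above elides the range test itself; a minimal standalone version of the IPv4 case, assuming the usual prefix-mask comparison (helper name illustrative, not the function's internal code), looks like this. For brevity invalid input folds to None here, whereas the patch surfaces it as an execution error, as the tests below assert.

    use std::net::Ipv4Addr;
    use std::str::FromStr;

    // Illustrative sketch of an IPv4 CIDR membership test; not taken from the patch.
    fn ipv4_in_range(ip: &str, cidr: &str) -> Option<bool> {
        let (base, len) = cidr.split_once('/')?;
        let ip = Ipv4Addr::from_str(ip).ok()?;
        let base = Ipv4Addr::from_str(base).ok()?;
        let len: u32 = len.parse().ok()?;
        if len > 32 {
            return None;
        }
        // A /0 prefix matches every address; otherwise keep the top `len` bits.
        let mask = if len == 0 { 0 } else { u32::MAX << (32 - len) };
        Some(u32::from(ip) & mask == u32::from(base) & mask)
    }

    fn main() {
        assert_eq!(ipv4_in_range("192.168.1.5", "192.168.1.0/24"), Some(true));
        assert_eq!(ipv4_in_range("192.168.2.1", "192.168.1.0/24"), Some(false));
    }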
@@ -124,59 +124,56 @@ impl Function for Ipv4InRange {
/// - ipv6_in_range('2001:db8:1::', '2001:db8::/32') -> true
/// - ipv6_in_range('2001:db9::1', '2001:db8::/32') -> false
/// - ipv6_in_range('::1', '::1/128') -> true
#[derive(Clone, Debug, Default, Display)]
#[derive(Clone, Debug, Display)]
#[display("{}", self.name())]
pub struct Ipv6InRange;
pub(crate) struct Ipv6InRange {
signature: Signature,
}
impl Default for Ipv6InRange {
fn default() -> Self {
Self {
signature: Signature::string(2, Volatility::Immutable),
}
}
}
impl Function for Ipv6InRange {
fn name(&self) -> &str {
"ipv6_in_range"
}
fn return_type(&self, _: &[DataType]) -> Result<DataType> {
fn return_type(&self, _: &[DataType]) -> datafusion_common::Result<DataType> {
Ok(DataType::Boolean)
}
fn signature(&self) -> Signature {
Signature::string(2, Volatility::Immutable)
fn signature(&self) -> &Signature {
&self.signature
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!("Expected 2 arguments, got {}", columns.len())
}
);
fn invoke_with_args(
&self,
args: ScalarFunctionArgs,
) -> datafusion_common::Result<ColumnarValue> {
let [arg0, arg1] = extract_args(self.name(), &args)?;
let ip_vec = &columns[0];
let range_vec = &columns[1];
let arg0 = compute::cast(&arg0, &DataType::Utf8View)?;
let ip_vec = arg0.as_string_view();
let arg1 = compute::cast(&arg1, &DataType::Utf8View)?;
let ranges = arg1.as_string_view();
let size = ip_vec.len();
ensure!(
range_vec.len() == size,
InvalidFuncArgsSnafu {
err_msg: "IP addresses and CIDR ranges must have the same number of rows"
.to_string()
}
);
let mut results = BooleanVectorBuilder::with_capacity(size);
let mut builder = BooleanBuilder::with_capacity(size);
for i in 0..size {
let ip = ip_vec.get(i);
let range = range_vec.get(i);
let ip = ip_vec.is_valid(i).then(|| ip_vec.value(i));
let range = ranges.is_valid(i).then(|| ranges.value(i));
let in_range = match (ip, range) {
(Value::String(ip_str), Value::String(range_str)) => {
let ip_str = ip_str.as_utf8().trim();
let range_str = range_str.as_utf8().trim();
(Some(ip_str), Some(range_str)) => {
if ip_str.is_empty() || range_str.is_empty() {
return InvalidFuncArgsSnafu {
err_msg: "IP address and CIDR range cannot be empty".to_string(),
}
.fail();
return Err(DataFusionError::Execution(
"IP address or CIDR range cannot be empty".to_string(),
));
}
// Parse the IP address
@@ -196,10 +193,10 @@ impl Function for Ipv6InRange {
_ => None,
};
results.push(in_range);
builder.append_option(in_range);
}
Ok(results.to_vector())
Ok(ColumnarValue::Array(Arc::new(builder.finish())))
}
}
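The IPv6 check has the same shape on 128-bit integers; again a std-only sketch under the same assumption, not the code behind is_ipv6_in_range:

    use std::net::Ipv6Addr;
    use std::str::FromStr;

    // Illustrative sketch of the IPv6 prefix comparison; not taken from the patch.
    fn ipv6_in_range(ip: &str, cidr: &str) -> Option<bool> {
        let (base, len) = cidr.split_once('/')?;
        let ip = Ipv6Addr::from_str(ip).ok()?;
        let base = Ipv6Addr::from_str(base).ok()?;
        let len: u32 = len.parse().ok()?;
        if len > 128 {
            return None;
        }
        // Compare the leading `len` bits of the two 128-bit addresses.
        let mask = if len == 0 { 0 } else { u128::MAX << (128 - len) };
        Some(u128::from(ip) & mask == u128::from(base) & mask)
    }

    fn main() {
        assert_eq!(ipv6_in_range("2001:db8::1", "2001:db8::/32"), Some(true));
        assert_eq!(ipv6_in_range("2001:db9::1", "2001:db8::/32"), Some(false));
    }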
@@ -329,15 +326,14 @@ fn is_ipv6_in_range(ip: &Ipv6Addr, cidr_base: &Ipv6Addr, prefix_len: u8) -> Opti
mod tests {
use std::sync::Arc;
use datatypes::scalars::ScalarVector;
use datatypes::vectors::{BooleanVector, StringVector};
use arrow_schema::Field;
use datafusion_common::arrow::array::StringViewArray;
use super::*;
#[test]
fn test_ipv4_in_range() {
let func = Ipv4InRange;
let ctx = FunctionContext::default();
let func = Ipv4InRange::default();
// Test IPs
let ip_values = vec![
@@ -357,24 +353,31 @@ mod tests {
"172.16.0.0/16",
];
let ip_input = Arc::new(StringVector::from_slice(&ip_values)) as VectorRef;
let cidr_input = Arc::new(StringVector::from_slice(&cidr_values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&ip_values)));
let arg1 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&cidr_values)));
let result = func.eval(&ctx, &[ip_input, cidr_input]).unwrap();
let result = result.as_any().downcast_ref::<BooleanVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0, arg1],
arg_fields: vec![],
number_rows: 5,
return_field: Arc::new(Field::new("x", DataType::Boolean, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(5).unwrap();
let result = result.as_boolean();
// Expected results
assert!(result.get_data(0).unwrap()); // 192.168.1.5 is in 192.168.1.0/24
assert!(!result.get_data(1).unwrap()); // 192.168.2.1 is not in 192.168.1.0/24
assert!(result.get_data(2).unwrap()); // 10.0.0.1 is in 10.0.0.0/8
assert!(result.get_data(3).unwrap()); // 10.1.0.1 is in 10.0.0.0/8
assert!(result.get_data(4).unwrap()); // 172.16.0.1 is in 172.16.0.0/16
assert!(result.value(0)); // 192.168.1.5 is in 192.168.1.0/24
assert!(!result.value(1)); // 192.168.2.1 is not in 192.168.1.0/24
assert!(result.value(2)); // 10.0.0.1 is in 10.0.0.0/8
assert!(result.value(3)); // 10.1.0.1 is in 10.0.0.0/8
assert!(result.value(4)); // 172.16.0.1 is in 172.16.0.0/16
}
#[test]
fn test_ipv6_in_range() {
let func = Ipv6InRange;
let ctx = FunctionContext::default();
let func = Ipv6InRange::default();
// Test IPs
let ip_values = vec![
@@ -394,46 +397,70 @@ mod tests {
"fe80::/16",
];
let ip_input = Arc::new(StringVector::from_slice(&ip_values)) as VectorRef;
let cidr_input = Arc::new(StringVector::from_slice(&cidr_values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&ip_values)));
let arg1 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&cidr_values)));
let result = func.eval(&ctx, &[ip_input, cidr_input]).unwrap();
let result = result.as_any().downcast_ref::<BooleanVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0, arg1],
arg_fields: vec![],
number_rows: 5,
return_field: Arc::new(Field::new("x", DataType::Boolean, false)),
config_options: Arc::new(Default::default()),
};
let result = func.invoke_with_args(args).unwrap();
let result = result.to_array(5).unwrap();
let result = result.as_boolean();
// Expected results
assert!(result.get_data(0).unwrap()); // 2001:db8::1 is in 2001:db8::/32
assert!(result.get_data(1).unwrap()); // 2001:db8:1:: is in 2001:db8::/32
assert!(!result.get_data(2).unwrap()); // 2001:db9::1 is not in 2001:db8::/32
assert!(result.get_data(3).unwrap()); // ::1 is in ::1/128
assert!(result.get_data(4).unwrap()); // fe80::1 is in fe80::/16
assert!(result.value(0)); // 2001:db8::1 is in 2001:db8::/32
assert!(result.value(1)); // 2001:db8:1:: is in 2001:db8::/32
assert!(!result.value(2)); // 2001:db9::1 is not in 2001:db8::/32
assert!(result.value(3)); // ::1 is in ::1/128
assert!(result.value(4)); // fe80::1 is in fe80::/16
}
#[test]
fn test_invalid_inputs() {
let ipv4_func = Ipv4InRange;
let ipv6_func = Ipv6InRange;
let ctx = FunctionContext::default();
let ipv4_func = Ipv4InRange::default();
let ipv6_func = Ipv6InRange::default();
// Invalid IPv4 address
let invalid_ip_values = vec!["not-an-ip", "192.168.1.300"];
let cidr_values = vec!["192.168.1.0/24", "192.168.1.0/24"];
let invalid_ip_input = Arc::new(StringVector::from_slice(&invalid_ip_values)) as VectorRef;
let cidr_input = Arc::new(StringVector::from_slice(&cidr_values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(
&invalid_ip_values,
)));
let arg1 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&cidr_values)));
let result = ipv4_func.eval(&ctx, &[invalid_ip_input, cidr_input]);
let args = ScalarFunctionArgs {
args: vec![arg0, arg1],
arg_fields: vec![],
number_rows: 2,
return_field: Arc::new(Field::new("x", DataType::Boolean, false)),
config_options: Arc::new(Default::default()),
};
let result = ipv4_func.invoke_with_args(args);
assert!(result.is_err());
// Invalid CIDR notation
let ip_values = vec!["192.168.1.1", "2001:db8::1"];
let invalid_cidr_values = vec!["192.168.1.0", "2001:db8::/129"];
let ip_input = Arc::new(StringVector::from_slice(&ip_values)) as VectorRef;
let invalid_cidr_input =
Arc::new(StringVector::from_slice(&invalid_cidr_values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&ip_values)));
let arg1 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(
&invalid_cidr_values,
)));
let ipv4_result = ipv4_func.eval(&ctx, &[ip_input.clone(), invalid_cidr_input.clone()]);
let ipv6_result = ipv6_func.eval(&ctx, &[ip_input, invalid_cidr_input]);
let args = ScalarFunctionArgs {
args: vec![arg0, arg1],
arg_fields: vec![],
number_rows: 2,
return_field: Arc::new(Field::new("x", DataType::Boolean, false)),
config_options: Arc::new(Default::default()),
};
let ipv4_result = ipv4_func.invoke_with_args(args.clone());
let ipv6_result = ipv6_func.invoke_with_args(args);
assert!(ipv4_result.is_err());
assert!(ipv6_result.is_err());
@@ -441,21 +468,28 @@ mod tests {
#[test]
fn test_edge_cases() {
let ipv4_func = Ipv4InRange;
let ctx = FunctionContext::default();
let ipv4_func = Ipv4InRange::default();
// Edge cases like prefix length 0 (matches everything) and 32 (exact match)
let ip_values = vec!["8.8.8.8", "192.168.1.1", "192.168.1.1"];
let cidr_values = vec!["0.0.0.0/0", "192.168.1.1/32", "192.168.1.0/32"];
let ip_input = Arc::new(StringVector::from_slice(&ip_values)) as VectorRef;
let cidr_input = Arc::new(StringVector::from_slice(&cidr_values)) as VectorRef;
let arg0 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&ip_values)));
let arg1 = ColumnarValue::Array(Arc::new(StringViewArray::from_iter_values(&cidr_values)));
let result = ipv4_func.eval(&ctx, &[ip_input, cidr_input]).unwrap();
let result = result.as_any().downcast_ref::<BooleanVector>().unwrap();
let args = ScalarFunctionArgs {
args: vec![arg0, arg1],
arg_fields: vec![],
number_rows: 3,
return_field: Arc::new(Field::new("x", DataType::Boolean, false)),
config_options: Arc::new(Default::default()),
};
let result = ipv4_func.invoke_with_args(args).unwrap();
let result = result.to_array(3).unwrap();
let result = result.as_boolean();
assert!(result.get_data(0).unwrap()); // 8.8.8.8 is in 0.0.0.0/0 (matches everything)
assert!(result.get_data(1).unwrap()); // 192.168.1.1 is in 192.168.1.1/32 (exact match)
assert!(!result.get_data(2).unwrap()); // 192.168.1.1 is not in 192.168.1.0/32 (no match)
assert!(result.value(0)); // 8.8.8.8 is in 0.0.0.0/0 (matches everything)
assert!(result.value(1)); // 192.168.1.1 is in 192.168.1.1/32 (exact match)
assert!(!result.value(2)); // 192.168.1.1 is not in 192.168.1.0/32 (no match)
}
}


@@ -32,23 +32,23 @@ pub(crate) struct JsonFunction;
impl JsonFunction {
pub fn register(registry: &FunctionRegistry) {
registry.register_scalar(JsonToStringFunction);
registry.register_scalar(ParseJsonFunction);
registry.register_scalar(JsonToStringFunction::default());
registry.register_scalar(ParseJsonFunction::default());
registry.register_scalar(JsonGetInt);
registry.register_scalar(JsonGetFloat);
registry.register_scalar(JsonGetString);
registry.register_scalar(JsonGetBool);
registry.register_scalar(JsonGetInt::default());
registry.register_scalar(JsonGetFloat::default());
registry.register_scalar(JsonGetString::default());
registry.register_scalar(JsonGetBool::default());
registry.register_scalar(JsonIsNull);
registry.register_scalar(JsonIsInt);
registry.register_scalar(JsonIsFloat);
registry.register_scalar(JsonIsString);
registry.register_scalar(JsonIsBool);
registry.register_scalar(JsonIsArray);
registry.register_scalar(JsonIsObject);
registry.register_scalar(JsonIsNull::default());
registry.register_scalar(JsonIsInt::default());
registry.register_scalar(JsonIsFloat::default());
registry.register_scalar(JsonIsString::default());
registry.register_scalar(JsonIsBool::default());
registry.register_scalar(JsonIsArray::default());
registry.register_scalar(JsonIsObject::default());
registry.register_scalar(json_path_exists::JsonPathExistsFunction);
registry.register_scalar(json_path_match::JsonPathMatchFunction);
registry.register_scalar(json_path_exists::JsonPathExistsFunction::default());
registry.register_scalar(json_path_match::JsonPathMatchFunction::default());
}
}

Some files were not shown because too many files have changed in this diff.