Compare commits

...

120 Commits

Author SHA1 Message Date
discord9
d815bdf770 refactor: per reviews 2024-05-27 14:27:21 +08:00
discord9
0b324563ac fix: range begin>range end 2024-05-27 14:27:21 +08:00
discord9
ffecb6882e mend 2024-05-27 14:27:21 +08:00
discord9
c185242997 fix: window_start when ts<start_time 2024-05-27 14:27:21 +08:00
discord9
9fda415b0d fix: ts<start time correct window 2024-05-27 14:27:21 +08:00
discord9
5eaf9816b9 fix: send buf clear 2024-05-27 14:27:21 +08:00
discord9
6684f8dce3 fix: test of tumble 2024-05-27 14:27:21 +08:00
discord9
e8660a6f7e fix: expire state 2024-05-27 14:27:21 +08:00
discord9
6659f3cc62 fix: reorder write requests 2024-05-27 14:27:21 +08:00
discord9
d218d65361 fix: default timestamp name 2024-05-27 14:27:21 +08:00
discord9
8f40ba42c1 feat: rename default ts to GREPTIME_TIMESTAMP 2024-05-27 14:27:21 +08:00
discord9
d1ce436442 fix(WIP): choose 2024-05-27 14:27:21 +08:00
discord9
e580ba63ec fix: optional args of tumble 2024-05-27 14:27:21 +08:00
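
Note on the window_start fixes above: the tricky case is a timestamp earlier than the configured start time, where rounding toward zero picks the wrong tumble window. A minimal sketch of the alignment, assuming plain i64 timestamps rather than the flow engine's actual types:

    // Align a timestamp to the start of its tumble window.
    // `ts`, `window_size` and `start_time` share one integer unit (e.g. milliseconds).
    // rem_euclid keeps the offset non-negative, so timestamps before `start_time`
    // still fall into the window that contains them, not the one after it.
    fn tumble_window_start(ts: i64, window_size: i64, start_time: i64) -> i64 {
        ts - (ts - start_time).rem_euclid(window_size)
    }

    fn main() {
        // start_time = 100, window = 30: ts = 70 belongs to [70, 100), not [100, 130).
        assert_eq!(tumble_window_start(70, 30, 100), 70);
        assert_eq!(tumble_window_start(115, 30, 100), 100);
    }
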
LFC
297105266b feat: enable tcp keepalive for http server (#4019)
* feat: enable tcp keepalive for http server

* chore: for enterprise's update

* resolve PR comments
2024-05-27 04:07:36 +00:00
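
As background for the keepalive change above, enabling TCP keepalive at the socket level can be sketched as follows, using the socket2 crate purely for illustration; the HTTP server's actual wiring and the configuration knobs added in #4019 are not reproduced here.

    use std::net::TcpListener;
    use std::time::Duration;

    use socket2::{SockRef, TcpKeepalive};

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:4000")?;
        // Send keepalive probes after 60s of idleness so dead peers are detected
        // even when no requests are flowing.
        let keepalive = TcpKeepalive::new().with_time(Duration::from_secs(60));
        for stream in listener.incoming() {
            let stream = stream?;
            SockRef::from(&stream).set_tcp_keepalive(&keepalive)?;
            // ... hand the stream over to the HTTP server ...
        }
        Ok(())
    }
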
Ruihang Xia
1de17aec74 feat: change EXPIRE WHEN to EXPIRE AFTER (#4002)
* feat: change EXPIRE WHEN to EXPIRE AFTER

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change remaining

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename create_if_not_exist to create_if_not_exists

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* parse interval expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Jeremyhi <jiachun_feng@proton.me>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Jeremyhi <jiachun_feng@proton.me>
2024-05-27 04:05:55 +00:00
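
For reference, EXPIRE AFTER takes a duration and treats rows whose timestamp is older than "now minus that duration" as expired, replacing the expression-based EXPIRE WHEN. A hypothetical helper sketching that check (not the engine's implementation):

    use std::time::{Duration, SystemTime, UNIX_EPOCH};

    // A row is expired once its timestamp (in milliseconds) is older than
    // `now - expire_after`. Hypothetical helper for illustration only.
    fn is_expired(row_ts_ms: u64, expire_after: Duration) -> bool {
        let now_ms = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("clock before Unix epoch")
            .as_millis() as u64;
        row_ts_ms + expire_after.as_millis() as u64 <= now_ms
    }
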
Weny Xu
389ded93d1 chore: add logs for setting the region to writable (#4044)
* chore: add logs for setting the region to writable

* fix: ignore redundant logs
2024-05-27 04:01:40 +00:00
Eugene Tolbakov
af486ec0d0 feat(operator): check if a database is in use before dropping it (#4035)
feat(operator): check if a database is in use before dropping it
2024-05-27 03:31:58 +00:00
irenjj
25d64255a3 feat: support table level comment (#4042)
* feat: support table level comment

* use constants

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-05-27 02:28:52 +00:00
tison
3790020d78 build(deps): upgrade promql-parser to 0.4 (#4047)
* build(deps): upgrade promql-parser to 0.4

Signed-off-by: tison <wander4096@gmail.com>

* lock

Signed-off-by: tison <wander4096@gmail.com>

* catch up upgrades

Signed-off-by: tison <wander4096@gmail.com>

* concise method

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-05-27 01:51:59 +00:00
Weny Xu
5df3d4e5da feat: implement the LogStoreRawEntryReader and RawEntryReaderFilter (#4030)
* feat: implement the `LogStoreRawEntryReader`

* feat: implement the `RawEntryReaderFilter`

* test: add tests
2024-05-24 11:53:15 +00:00
tison
af670df515 ci: skip notification for manual releases (#4033)
Signed-off-by: tison <wander4096@gmail.com>
2024-05-24 10:16:06 +00:00
Ruihang Xia
a58256d4d3 feat: round-robin selector (#4024)
* feat: implement round robin peer selector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add document and test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-24 07:29:07 +00:00
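
A round-robin peer selector is simple enough to sketch: hand out peers in order with an atomic counter that wraps around. Illustrative only; the selector added in #4024 has a different interface and peer type.

    use std::sync::atomic::{AtomicUsize, Ordering};

    struct RoundRobinSelector<T> {
        peers: Vec<T>,
        counter: AtomicUsize,
    }

    impl<T> RoundRobinSelector<T> {
        fn new(peers: Vec<T>) -> Self {
            Self { peers, counter: AtomicUsize::new(0) }
        }

        // Each call returns the next peer, cycling back to the first at the end.
        fn select(&self) -> Option<&T> {
            if self.peers.is_empty() {
                return None;
            }
            let i = self.counter.fetch_add(1, Ordering::Relaxed) % self.peers.len();
            self.peers.get(i)
        }
    }

    fn main() {
        let selector = RoundRobinSelector::new(vec!["dn-0", "dn-1", "dn-2"]);
        assert_eq!(selector.select(), Some(&"dn-0"));
        assert_eq!(selector.select(), Some(&"dn-1"));
    }
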
Weny Xu
466f7c6448 feat: add RawEntryReader and OneshotWalEntryReader trait (#4027)
* feat: add `RawEntryReader` and `OneShotWalEntryReader` trait

* chore: rename `OneShot` to `Oneshot`

* refactor: remove `region_id` from `OneshotWalEntryReader`
2024-05-24 06:30:50 +00:00
Ruihang Xia
0101657649 feat: remove one clone on constructing partition (#4028)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-24 04:01:19 +00:00
taobo
a3a2c8d063 feat: Add TLS support for gRPC service (#3957)
* feat: Add tls support for grpc service

* feat: add integration test

* fix: integration test

* fix: revert by suggestion

* fix: typos

* fix: optimize code

* fix: optimize code

* docs: update configs
2024-05-23 19:00:16 +00:00
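
To illustrate the shape of the change in #3957: with a tonic-based gRPC stack, server-side TLS is configured from a PEM certificate and key (tonic's tls feature is required). GreptimeDB's own config plumbing is omitted; this is only a sketch.

    use tonic::transport::{Identity, ServerTlsConfig};

    // Build a server-side TLS config from PEM-encoded certificate and key bytes.
    fn server_tls_config(cert_pem: &[u8], key_pem: &[u8]) -> ServerTlsConfig {
        ServerTlsConfig::new().identity(Identity::from_pem(cert_pem, key_pem))
    }

The resulting config is applied with Server::builder().tls_config(...) before the gRPC services are added.
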
Yingwen
dfc1acbb2a fix: notifies all workers once a region is flushed (#4016)
* fix: notify workers to handle stalled requests if flush is finished

* chore: change stalled count to gauge

* feat: process stalled requests eagerly
2024-05-23 12:45:00 +00:00
Lei, HUANG
0d055b6ee6 refactor: remove unused log config (#4021) 2024-05-23 08:59:42 +00:00
Weny Xu
614643ef7b chore(ci): add more replicas (#4015) 2024-05-23 02:43:24 +00:00
Ning Sun
b90b7adf6f feat: add fallback logic for vmagent sending wrong content type (#4009)
* feat: add fallback logic for vmagent sending wrong content type

* fix: resolve lint issues

* Update src/servers/src/http/prom_store.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-05-23 02:40:17 +00:00
Jeremyhi
418090b464 chore: log error for detail (#4011)
* chore: log error for detail

* chore: by cr
2024-05-22 12:17:20 +00:00
Lei, HUANG
090b59e8d6 feat: manual compaction (#3988)
* add compaction udf params

* wip: pass compaction options through grpc

* wip: pass compaction options all the way down to region server

* wip: window compaction task

* feat: trigger major compaction

* refactor: optimize compaction parameter parsing

* chore: rebase main

* chore: update proto

* chore: add some tests

* feat: validate catalog

* chore: fix typo and rebase main

* fix: some cr comments

* fix: file_time_bucket_span

* fix: avoid upper bound overflow

* chore: update proto
2024-05-22 09:42:21 +00:00
shuiyisong
9e1af79637 chore: add ttl to write_cache (#4010)
* chore: add ttl to write_cache

* chore: update test & add example config

* chore: fix typo

* chore: fix typo

* chore: fix typo
2024-05-22 06:50:12 +00:00
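
A TTL on a cache means entries are evicted a fixed time after insertion regardless of access. Purely as a hedged illustration (the commit list does not show how the write cache is backed), a one-hour TTL with the moka crate's sync cache looks like this:

    use std::time::Duration;

    use moka::sync::Cache;

    fn main() {
        let cache: Cache<String, Vec<u8>> = Cache::builder()
            .max_capacity(10_000)
            .time_to_live(Duration::from_secs(3600)) // entries expire one hour after insert
            .build();
        cache.insert("file-0".to_string(), vec![1, 2, 3]);
        assert!(cache.get("file-0").is_some());
    }
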
Yohan Wal
9800807fe5 fix(fuzz): sort inserted rows with primary keys and time index (#4008)
* fix(fuzz): sort inserted rows with primary keys and time index

* fix: correct index when replacing default

* fix: put null behind all values
2024-05-22 03:32:19 +00:00
zyy17
b86d79b906 fix: can't print log because the tracing guard is dropped (#4005)
* fix: avoid logging guard drop

* chore: remove unused '#[allow(dead_code)]'
2024-05-22 03:24:40 +00:00
Lei, HUANG
e070ba3c32 feat: respect time range when building parquet reader (#3947)
* feat: convert timestamp range filters to predicates

* chore: rebase main

* fix: remove predicates once they have been added to timestamp filters to avoid duplicate filtering

* fix: some comments

* fix: resolve conflicts
2024-05-21 16:02:25 +00:00
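
The point of respecting the time range is pruning: a file or row group whose timestamp span cannot intersect the query's range does not need to be read at all. A minimal sketch of that overlap test with half-open integer ranges (illustrative, not the reader's actual predicate code):

    // Half-open ranges [start, end): skip the file when the ranges do not overlap.
    fn overlaps(file: (i64, i64), query: (i64, i64)) -> bool {
        file.0 < query.1 && query.0 < file.1
    }

    fn main() {
        assert!(overlaps((0, 100), (50, 150)));
        assert!(!overlaps((0, 100), (100, 200))); // touching endpoints do not overlap
    }
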
Weny Xu
43bf7bffd0 fix: try to fix unstable fuzz test (#4003)
fix: ignore PoolTimedOut
2024-05-21 12:57:09 +00:00
Weny Xu
56aed6e6ff chore(ci): export kind logs (#3996)
* chore(ci): export kind logs

* chore: add empty line
2024-05-21 11:56:03 +00:00
zyy17
47785756e5 fix: move log_version() into build() of App to fix no log version on bootstrap (#4004) 2024-05-21 09:15:15 +00:00
Jeremyhi
0aa523cd8c feat: make create view procedure simple as others (#4001) 2024-05-21 08:30:57 +00:00
Weny Xu
7a8222dd97 fix: try to fix broken CI (#3998)
* fix: try to fix broken CI

* chore: using loop to check status
2024-05-21 07:02:18 +00:00
maco
40c585890a refactor: replace Expr with datafusion::Expr (#3995)
* refactor: replace Expr with datafusion::Expr

* fix: fmt-toml

* fix: cr comment
2024-05-21 06:40:29 +00:00
zyy17
da925e956e ci: change the image name of nightly build (#3994) 2024-05-21 06:11:12 +00:00
Weny Xu
d7ade3c854 chore(ci): add fuzz tests for distributed mode (#3967)
* chore(ci): add cfg for setup GreptimeDB cluster

* chore: use kind

* chore: always print info

* chore: add debug print

* chore: set etcd replica to 1

* ci: refactor e2e cfg

* ci: add Fuzz Test for distributed mode

* Apply suggestions from code review

* chore: apply suggestions from CR

* chore(ci): upload logs
2024-05-21 04:58:42 +00:00
Yingwen
179c8c716c feat: Adds RegionScanner trait (#3948)
* feat: define region scanner

* feat: single partition scanner

* feat: use single partition scanner

* feat: implement ExecutionPlan wip

* feat: mito engine returns single partition scanner

* feat: implement DisplayAs for region server

* feat: dummy table provider use handle_partitioned_query()

* test: update sqlness test

* feat: table provider use ReadFromRegion

* refactor: remove StreamScanAdapter

* chore: update lock

* style: fix clippy

* refactor: remove handle_query from the RegionEngine trait

* chore: address CR comments

* refactor: rename methods

* refactor: rename ReadFromRegion to RegionScanExec
2024-05-20 11:52:00 +00:00
shuiyisong
19543f9819 feat: support compression on gRPC server (#3961)
* feat: enable gzip in grpc server side

* feat: add enable_gzip_compression config

* test: add grpc compression test

* feat: support user configured compression on grpc server

* chore: update doc

* chore: add tests

* fix: make config-docs

* chore: fix cr issue

* chore: add test

* refactor: remove config on server side, auto enable all compression support

* chore: minor update

* chore: remove unused code

* refactor: enable zstd compression internally by default

* chore: minor fix
2024-05-20 11:28:00 +00:00
discord9
533ada70ca chore: remove a dbg! forgotten to remove (#3990)
* chore: remove a dbg! forgotten to remove

* remove other dbg! and add lint

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix pyo3 feature

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-20 08:34:47 +00:00
zyy17
c50ff23194 ci: add 'contents: write' permission (#3989) 2024-05-20 06:23:54 +00:00
tison
d7f1150098 ci: fixup strings in check ci status (#3987)
Signed-off-by: tison <wander4096@gmail.com>
2024-05-20 03:59:05 +00:00
zyy17
82c3eca25e refactor: make the command entry cleaner (#3981)
* refactor: move run() in App trait

* refactor: introduce AppBuilder trait

* chore: remove AppBuilder

* refactor: remove Options struct and make the start() clean

* refactor: init once for common_telemetry::init_global_logging
2024-05-20 03:34:06 +00:00
Yingwen
df13832a59 feat: use cache in compaction (#3982) 2024-05-20 02:36:51 +00:00
tison
7da92eb9eb ci: check-status for nightly-ci (#3984)
Signed-off-by: tison <wander4096@gmail.com>
2024-05-19 07:10:59 +00:00
Weny Xu
c71298d3d5 chore: pin cargo-ndk to 3.5.4 (#3979) 2024-05-18 08:46:01 +00:00
Yingwen
de594833ac docs: add v0.8.0 TSBS report (#3983)
docs: add v0.8.0 tsbs report
2024-05-18 08:09:16 +00:00
Eugene Tolbakov
6a9a92931d chore: change binary array type from LargeBinaryArray to BinaryArray (#3924)
* chore: change binary array type from LargeBinaryArray to BinaryArray

* fix: adjust try_into_vector logic

* fix: apply CR suggestions, add tests

* chore: fix failing test

* chore: fix integration test

* chore: adjust the assertions according to changed implementation

* chore: add a test with LargeBinary type

* chore: apply CR suggestions

* chore: simplify tests
2024-05-18 08:04:41 +00:00
tison
11ad5b3ed1 ci: report CI failures with creating issues (#3976)
* ci: report CI failures with creating issues

Signed-off-by: tison <wander4096@gmail.com>

* integrate with CI workflows

Signed-off-by: tison <wander4096@gmail.com>

* mention db-approver

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-05-18 03:03:56 +00:00
zyy17
b8354bbb55 docs: add toc for config docs (#3974) 2024-05-18 01:57:49 +00:00
Weny Xu
258675b75e chore: bump to v0.8.0 (#3971) 2024-05-17 15:05:20 +00:00
Weny Xu
11a08cb272 feat(cli): prevent exporting physical table data (#3978)
* feat: prevent exporting physical table data

* chore: apply suggestions from CR
2024-05-17 14:58:10 +00:00
Ruihang Xia
e9b178b8b9 fix: tql parser hang on abnormal input (#3977)
* fix: tql parser hang on abnormal input

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply review sugg

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-17 14:22:20 +00:00
discord9
3477fde0e5 feat(flow): tumble window func (#3968)
* feat(WIP): tumble window rewrite parser

* tests: tumble func

* feat: add `update_at` column for all flow output

* chore: cleanup per review

* fix: update_at not as time index

* fix: demo tumble

* fix: tests&tumble signature&accept both ts&datetime

* refactor: update_at now ts millis type

* chore: per review advices
2024-05-17 12:10:28 +00:00
dennis zhuang
9baa431656 fix: changing column data type can't process type alias (#3972) 2024-05-17 11:34:31 +00:00
WU Jingdi
e2a1cb5840 feat: support evaluate expr in range query param (#3823)
* feat: support evaluate expr in range query param

* chore: fix comment

* chore: fix code comment

* fix: disable now in duration param
2024-05-17 08:31:55 +00:00
Weny Xu
f696f41a02 fix: prevent registering logical regions with AliveKeeper (#3965)
* fix: register logical region

* chore: fix Clippy

* chore: apply suggestions from CR
2024-05-17 07:38:35 +00:00
Weny Xu
0168d43d60 fix: prevent exporting metric physical table data (#3970) 2024-05-17 07:19:28 +00:00
Yingwen
e372e25e30 build: add RUSTUP_WINDOWS_PATH_ADD_BIN env (#3969)
build: add RUSTUP_WINDOWS_PATH_ADD_BIN: 1
2024-05-17 06:01:46 +00:00
zyy17
ca409a732f refactor(naming): use the better naming for pubsub (#3960) 2024-05-17 03:00:15 +00:00
Ruihang Xia
5c0a530ad1 feat: skip read-only region when trying to flush on region full (#3966)
* feat: skip read-only region when trying to flush on region full

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* improve log

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* also skip in periodically

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-16 14:56:43 +00:00
Ruihang Xia
4b030456f6 feat: remove timeout in the channel between frontend and datanode (#3962)
* style: change builder pattern

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: remove timeout

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unused config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update docs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-16 14:12:42 +00:00
irenjj
f93b5b19f0 feat: limit total rows copied in COPY TABLE FROM with LIMIT segment (#3910)
* feat: limit total rows copied in COPY TABLE FROM with LIMIT segment

* fmt

* disable default limit

* fix: check parse

* fix test, add error case

* fix: forbid LIMIT in database

* fix: only support LIMIT segment

* fix: simplify

* fix

* fix

* fix

* fix

* fix: test

* fix: change error info

* fix clippy

* fix: fix error msg

* fix test

* fix: test error info
2024-05-16 13:39:26 +00:00
Yingwen
669a6d84e9 test: gracefully shutdown postgres client in sql tests (#3958)
* chore: debug log

* test: gracefully shutdown pg client
2024-05-16 11:50:45 +00:00
discord9
a45017ad71 feat(flow): expire arrange according to time_index type (#3956)
* feat: render_reduce's arrangement expire after time passed

* feat: set expire when create flow
2024-05-16 11:41:03 +00:00
discord9
0d9e71b653 feat(flow): flow node manager (#3954)
* feat(flow): flow node manager

feat(flow): render src/sink

feat(flow): flow node manager in standalone

fix?: higher run freq

chore: remove redundant error enum variant

fix: run with higher freq if insert more

chore: fix after rebase

chore: typos

* chore(WIP): per review

* chore: per review
2024-05-16 11:37:14 +00:00
discord9
93f178f3ad feat(flow): avg func rewrite to sum/count (#3955)
* feat(WIP): parse avg

* feat: RelationType::apply_mfp no need expr typs

* feat: avg&tests

* fix(WIP): avg eval

* fix: sum ret correct type

* chore: typos
2024-05-16 10:03:56 +00:00
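
The avg rewrite works because avg alone is not incrementally maintainable, while sum and count are, so avg(x) becomes sum(x) / count(x). A toy sketch of that rewrite over an assumed expression enum (not the real plan types):

    #[derive(Debug)]
    enum AggExpr {
        Avg(String),
        Sum(String),
        Count(String),
        Div(Box<AggExpr>, Box<AggExpr>),
    }

    // Rewrite avg(col) into sum(col) / count(col); leave everything else untouched.
    fn rewrite_avg(expr: AggExpr) -> AggExpr {
        match expr {
            AggExpr::Avg(col) => AggExpr::Div(
                Box::new(AggExpr::Sum(col.clone())),
                Box::new(AggExpr::Count(col)),
            ),
            other => other,
        }
    }

    fn main() {
        let rewritten = rewrite_avg(AggExpr::Avg("value".to_string()));
        println!("{rewritten:?}"); // Div(Sum("value"), Count("value"))
    }
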
WU Jingdi
9f4a6c6fe2 feat: support any precision in PromQL (#3933)
* feat: support any precision in PromQL

* chore: add test
2024-05-16 07:00:24 +00:00
Weny Xu
c915916b62 feat(cli): export metric physical tables first (#3949)
* feat: export metric physical tables first

* chore: apply suggestions from CR
2024-05-16 06:30:20 +00:00
Weny Xu
dff7ba7598 feat: ignore internal columns in SHOW CREATE TABLE (#3950)
* feat: ignore internal columns

* chore: add new line

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2024-05-16 06:28:48 +00:00
Ning Sun
fe34ebf770 test: give windows file watcher more time (#3953)
* test: give windows file watcher more time

* refactor: use constants for timeout
2024-05-16 01:58:45 +00:00
tison
a1c51a5885 chore: catch up label updates (#3951)
Signed-off-by: tison <wander4096@gmail.com>
2024-05-15 16:44:17 +00:00
zyy17
63a8d293a1 refactor: add Configurable trait (#3917)
* refactor: add Configurable trait

* refactor: add merge_with_cli_options() to simplify load_options()

* docs: add comments

* fix: clippy errors

* fix: toml format

* fix: build error

* fix: clippy errors

* build: downgrade config-rs

* refactor: use '#[snafu(source(from()))'

* refactor: minor modification for load_layered_options() to make it clean
2024-05-15 12:56:40 +00:00
tison
6c621b7fcf ci: implement docbot in cyborg (#3937)
* ci: implement docbot in cyborg

Signed-off-by: tison <wander4096@gmail.com>

* allow remove non-existing label

Signed-off-by: tison <wander4096@gmail.com>

* fixup

Signed-off-by: tison <wander4096@gmail.com>

* fixup org name

Signed-off-by: tison <wander4096@gmail.com>

* fixup step name

Signed-off-by: tison <wander4096@gmail.com>

* remove unused file

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-05-15 12:55:49 +00:00
Yingwen
529e344450 ci: Use lld linker in windows tests (#3946)
* ci: disable other test

* ci: timeout 30

* ci: try to use lld

* ci: change linker

* test: wait for file change in test multiple times

* ci: enable other tests

* chore: revert sleep in loop
2024-05-15 12:34:10 +00:00
Zhenchi
2a169f9364 perf(operator): reuse table info from table creation (#3945)
perf(operator): reuse table info from table creation

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-05-15 09:18:17 +00:00
discord9
97eb196699 feat(flow): query table schema&refactor (#3943)
* feat: get table info

* feat: remove new&unwrap

* chore: per PR advices

* chore: per review
2024-05-15 08:35:12 +00:00
Yohan Wal
cfae276d37 feat(fuzz): add validator for inserted rows (#3932)
* feat(fuzz): add validator for inserted rows

* fix: compatibility with mysql types

* feat(fuzz): add datetime and date type in mysql for row validator
2024-05-15 07:05:51 +00:00
Weny Xu
09129a911e chore: update greptime-proto to a11db14 (#3942) 2024-05-15 03:38:47 +00:00
discord9
15d7b9755e feat(flow): flow worker (#3934)
* feat: flow worker

* chore: fix after cherry pick

* refactor: error handling

* refactor: error handling

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: merge origin/main

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2024-05-14 16:09:25 +00:00
Jeremyhi
72897a20e3 chore: minor refactor on etcd kvbackend (#3940)
* chore: minor refactor on etcd kvbackend

* chore: avoid clone
2024-05-14 13:25:22 +00:00
Lei, HUANG
c04d02460f fix(metric engine): label mismatch in metric engine (#3927)
* fix: label mismatch

* test: add unit test

* chore: avoid updating full primary keys

* fix: style

* chore: add some doc for PkIndexMap

* chore: update some doc
2024-05-14 12:58:22 +00:00
discord9
4ca7ac7632 feat(flow): add types for every plan enum variant (#3938)
* feat: Plan with types

* chore: per review advices
2024-05-14 10:51:37 +00:00
Jeremyhi
a260ba3ee7 feat: use txn to impl cas (#3936)
* feat: use txn to impl cas

* chore: fix test
2024-05-14 08:21:27 +00:00
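
Compare-and-put maps naturally onto an etcd transaction: write the new value only if the key still holds the expected one. A sketch using the etcd-client crate directly; the repository goes through its own KvBackend/txn abstraction instead.

    use etcd_client::{Client, Compare, CompareOp, Txn, TxnOp};

    // Returns Ok(true) when the put was applied, Ok(false) when the comparison failed.
    async fn compare_and_put(
        client: &mut Client,
        key: &str,
        expect: &str,
        value: &str,
    ) -> Result<bool, etcd_client::Error> {
        let txn = Txn::new()
            .when(vec![Compare::value(key, CompareOp::Equal, expect)])
            .and_then(vec![TxnOp::put(key, value, None)]);
        let resp = client.txn(txn).await?;
        Ok(resp.succeeded())
    }
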
dennis zhuang
efd3f04b7c feat: create view (#3807)
* add statement

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: rebase with main

* fix: create flow

* feat: adds gRPC stuff

* feat: impl create_view ddl in operator

* feat: impl CreateViewProcedure

* chore: update cargo lock

* fix: format

* chore: compile error after rebasing main

* chore: refactor and test create view parser

* chore: fixed todo list and comments

* fix: compile error after rebeasing

* test: add create view test

* test: test view_info keys

* test: adds test for CreateViewProcedure and clean code

* test: adds more sqlness test for creating views

* chore: update cargo lock

* fix: don't replace normal table in CreateViewProcedure

* chore: apply suggestion

Co-authored-by: Jeremyhi <jiachun_feng@proton.me>

* chore: style

Co-authored-by: Jeremyhi <jiachun_feng@proton.me>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Jeremyhi <jiachun_feng@proton.me>
2024-05-14 08:03:29 +00:00
Weny Xu
f16ce3ca27 feat: support to invalidate flow cache (#3926)
* feat: add `FlowName` & `FlowId` to `CacheIdent`

* feat: support to invalidate flow cache

* chore: apply suggestions from CR
2024-05-14 06:24:09 +00:00
Yingwen
6214180ecd build: upgrade rust toolchain to fix ci issues on Windows (#3898)
* ci: use windows 2019

* test: ignore cleanup result

* chore: revert change

* test: unstable repeated task test

* build: update rust toolchain and windows

* ci: test sqlness

* chore: enable other tests
2024-05-14 06:13:43 +00:00
Weny Xu
00e21e2021 fix: potential deadlock (#3930)
fix: fix potential deadlock
2024-05-14 03:04:03 +00:00
maco
494ce65729 feat: limiting the size of query results to Dashboard (#3901)
* feat: limiting the size of query results to Dashboard

* optimize code

* fix by cr

* fix integration tests error

* remove RequestSource::parse

* refactor: sql query params

* fix: unit test

---------

Co-authored-by: tison <wander4096@gmail.com>
2024-05-14 01:57:30 +00:00
Weny Xu
e15294db41 feat: introduce TableRouteCache to PartitionRuleManager (#3922)
* chore: add `CompositeTableRouteCacheRef` to `PartitionRuleManager`

* chore: update comments

* fix: add metrics for `get`

* chore: apply suggestions from CR

* chore: correct cache name

* feat: implement `LayeredCacheRegistry`

* fix: invalidate logical tables by physical table id

* refactor: replace `CacheRegistry` with `LayeredCacheRegistry`

* chore: update comments

* chore: apply suggestions from CR

* chore: fix fmt

* refactor: use `TableRouteCache` instead

* chore: apply suggestions from CR

* chore: fix clippy
2024-05-13 13:26:43 +00:00
discord9
be1eb4efb7 feat(flow): render source/sink (#3903)
* feat(flow): render src/sink

* chore: add empty impl

* chore: typos

* refactor: according to review(WIP)

* refactor: reexport df_substrait & use to_sub_plan

* fix: add implicit location to error enum

* fix: error handling unwrap query_ctx
2024-05-13 11:58:02 +00:00
Jeff Chiang
9d12496aaf feat: create database with options (#3751)
* feat: create database with options

* fix: clippy

* fix: clippy

* feat: rebase and add Display test

* feat: sqlness test for creating database with options

* address comments

Signed-off-by: tison <wander4096@gmail.com>

* fixup tests

Signed-off-by: tison <wander4096@gmail.com>

* catch up

Signed-off-by: tison <wander4096@gmail.com>

* DefaultOnNull

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-05-13 09:00:15 +00:00
Jeremyhi
5d8084a32f chore: store-addr to store-addrs (#3925) 2024-05-13 08:30:25 +00:00
zyy17
60eb5de3f1 refactor: add tracing options in xOptions (#3919)
* refactor: add tracing options in {DatanodeOptions, FrontendOptions, MetasrvOptions, StandaloneOptions}

* ci: fix integration error

* refactor: minor modification for initialization of tracing options
2024-05-13 05:16:50 +00:00
maco
a0be7198f9 feat: migrate orc-rs to datafusion-orc (#3923) 2024-05-13 05:15:06 +00:00
Jeremyhi
6ab3aeb142 refactor: rename metasrv_addr to metasrv_addrs (#3921) 2024-05-12 03:44:21 +00:00
Ruihang Xia
590aedd466 fix: sort unstable HTTP result in label values query (#3920)
* fix: sort unstable HTTP result in label values query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: Update src/servers/src/http/prometheus.rs

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-05-11 11:33:01 +00:00
Weny Xu
27e376e892 feat: implement the CompositeTableRoute (#3918)
* feat: implement the `CompositeTableRoute`

* chore: update comments
2024-05-11 11:09:00 +00:00
Weny Xu
36c41d129c feat: support to create & drop flow via grpc (#3915)
* feat: support to create & drop flow via grpc

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2024-05-11 11:01:48 +00:00
Weny Xu
89da42dbc1 refactor: refactor frontend cache (#3912)
* feat: implement the `TableCache`

* refactor: use `TableCache`

* refactor: replace `TableFlowManager` with `TableFlownodeSetCache`

* refactor: introduce cache crate

* chore: add comments

* chore: update comments

* chore: apply suggestions from CR

* chore: rename `cache_invalidator` to `local_cache_invalidator`

* chore(fuzz): set `acquire_timeout` to 30s
2024-05-11 09:58:18 +00:00
Jeremyhi
04852aa27e chore: keep the same naming style (#3916) 2024-05-11 09:39:49 +00:00
Yingwen
d0820bb26d refactor: Remove PhysicalPlan trait and use ExecutionPlan directly (#3894)
* refactor: remove PhysicalPlan

* refactor: remove physical_plan mod

* refactor: import

* fix merge error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-11 07:38:03 +00:00
Jeremyhi
fa6c371380 feat: metaclient builder options with role (#3909)
* feat: metaclient builder options with default role

* chore: remove unnecessary ut
2024-05-11 06:17:14 +00:00
Jeremyhi
9aa2182cb2 refactor: make txn easy to use (#3905)
refactor: put_if_not_exists and compare_and_put API
2024-05-11 04:45:04 +00:00
Weny Xu
bca2e393bf refactor: add procedure_loader macro (#3906) 2024-05-11 04:41:21 +00:00
zyy17
b1ef327bac refactor: remove MixOptions and use StandaloneOptions only (#3893)
* refactor: remove MixOptions and use StandaloneOptions only

* refactor: refactor code by code review comments

1. Use '&self' in frontend_options() and datanode_options();

2. Remove unused 'clone()';

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* ci: fix integration error

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2024-05-10 16:08:34 +00:00
Ruihang Xia
115c74791d build(deps): bump snafu to 0.8 (#3911)
* change Cargo.toml

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* global replace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle alias in script engine

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-10 13:36:25 +00:00
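
For orientation, snafu 0.8 error definitions across the workspace look roughly like the following sketch (the variant and field names here are made up for illustration):

    use snafu::prelude::*;

    #[derive(Debug, Snafu)]
    enum Error {
        #[snafu(display("failed to read config from {path}"))]
        ReadConfig { path: String, source: std::io::Error },
    }

    // `.context(...)` attaches the generated context selector and wraps the io error.
    fn read_config(path: &str) -> Result<String, Error> {
        std::fs::read_to_string(path).context(ReadConfigSnafu { path })
    }

    fn main() {
        if let Err(err) = read_config("/nonexistent/config.toml") {
            println!("{err}");
        }
    }
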
Ruihang Xia
aec5cca2c7 feat: support distributed EXPLAIN ANALYZE (#3908)
* feat: fetch and pass per-plan metrics

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl DistAnalyzeExec

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/analyze.rs

Co-authored-by: Jeremyhi <jiachun_feng@proton.me>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Jeremyhi <jiachun_feng@proton.me>
2024-05-10 12:31:29 +00:00
Weny Xu
06e1c43743 feat: support to drop flow (#3900)
* feat: support to drop flow

* chore: apply suggestions from CR
2024-05-10 12:09:37 +00:00
Weny Xu
9d36c31209 feat: introduce the CacheRegistry (#3896)
* feat: implement the `CacheRegistry`

* refactor: change `CacheInvalidator` signature

* feat: implement `CacheInvalidator`

* feat: add `get_or_register`

* fix: fmt toml

* feat: implement the `CacheRegistryBuilder`

* chore: apply suggestions from CR

* chore: fmt code
2024-05-10 11:53:42 +00:00
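
The registry idea is a type-indexed map: each cache is registered under its concrete type and fetched back by downcasting. A minimal std-only sketch (the real CacheRegistry adds builders, get_or_register and invalidation wiring):

    use std::any::{Any, TypeId};
    use std::collections::HashMap;
    use std::sync::Arc;

    #[derive(Default)]
    struct CacheRegistry {
        caches: HashMap<TypeId, Box<dyn Any + Send + Sync>>,
    }

    impl CacheRegistry {
        // Store a cache under its concrete type.
        fn register<T: Any + Send + Sync>(&mut self, cache: Arc<T>) {
            self.caches.insert(TypeId::of::<T>(), Box::new(cache));
        }

        // Fetch it back by downcasting to the requested type.
        fn get<T: Any + Send + Sync>(&self) -> Option<Arc<T>> {
            self.caches
                .get(&TypeId::of::<T>())
                .and_then(|c| c.downcast_ref::<Arc<T>>())
                .cloned()
        }
    }

    fn main() {
        struct TableInfoCache;
        let mut registry = CacheRegistry::default();
        registry.register(Arc::new(TableInfoCache));
        assert!(registry.get::<TableInfoCache>().is_some());
    }
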
Weny Xu
c91132bd14 feat: introduce TableNameCache & TableInfoCache & TableRouteCache (#3895)
* feat: implement the `TableNameCache`

* feat: implement the `TableInfoCache`

* feat: implement the `TableRouteCache`

* test: add tests for `TableInfoCache` & `TableRouteCache`

* chore: use `TableId`
2024-05-10 09:50:44 +00:00
Ruihang Xia
25e9076f5b docs: correct v0.7 benchmark report (#3907)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-10 09:33:06 +00:00
Ruihang Xia
08945f128b fix: sort unstable HTTP result on test profile
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-05-10 17:01:42 +08:00
Yingwen
5a0629eaa0 feat: Parquet reader builder supports building multiple ranges to read (#3841)
* chore: change `&mut self` to `&self`

* feat: define partition and partition context

* refactor: move precise_filter to PartitionContext

* feat: filter  wip

* feat: compute projection and fields in format

* feat: use RowGroupReader to implement ParquetReader

* fix: use expected meta to get column id for filters

* feat: partition returns row group reader

* style: fix clippy

* feat: add build partitions method

* docs: comment

* refactor: rename Partition to FileRange

* chore: address CR comments

* feat: avoid allocating column ids while constructing ReadFormat
2024-05-10 07:39:38 +00:00
488 changed files with 21519 additions and 6165 deletions


@@ -1,7 +1,7 @@
---
name: Bug report
description: Is something not working? Help us fix it!
labels: [ "bug" ]
labels: [ "C-bug" ]
body:
- type: markdown
attributes:


@@ -4,5 +4,5 @@ contact_links:
url: https://greptime.com/slack
about: Get free help from the Greptime community
- name: Greptime Community Discussion
url: https://github.com/greptimeTeam/greptimedb/discussions
url: https://github.com/greptimeTeam/discussions
about: Get free help from the Greptime community


@@ -1,7 +1,7 @@
---
name: Enhancement
description: Suggest an enhancement to existing functionality
labels: [ "enhancement" ]
labels: [ "C-enhancement" ]
body:
- type: dropdown
id: type


@@ -1,7 +1,7 @@
---
name: Feature request
name: New Feature
description: Suggest a new feature for GreptimeDB
labels: [ "feature request" ]
labels: [ "C-feature" ]
body:
- type: markdown
id: info


@@ -0,0 +1,18 @@
name: Build and push CI Docker image
description: Build and push CI Docker image to local registry
inputs:
binary_path:
default: "./bin"
description: "Binary path"
runs:
using: composite
steps:
- name: Build and push to local registry
uses: docker/build-push-action@v5
with:
context: .
file: ./docker/ci/ubuntu/Dockerfile.fuzztests
push: true
tags: localhost:5001/greptime/greptimedb:latest
build-args: |
BINARY_PATH=${{ inputs.binary_path }}


@@ -59,6 +59,9 @@ runs:
if: ${{ inputs.disable-run-tests == 'false' }}
shell: pwsh
run: make test sqlness-test
env:
RUSTUP_WINDOWS_PATH_ADD_BIN: 1 # Workaround for https://github.com/nextest-rs/nextest/issues/1493
RUST_BACKTRACE: 1
- name: Upload sqlness logs
if: ${{ failure() }} # Only upload logs when the integration tests failed.

.github/actions/setup-cyborg/action.yml (new file, 16 lines)

@@ -0,0 +1,16 @@
name: Setup cyborg environment
description: Setup cyborg environment
runs:
using: composite
steps:
- uses: actions/setup-node@v4
with:
node-version: 22
- uses: pnpm/action-setup@v3
with:
package_json_file: 'cyborg/package.json'
run_install: true
- name: Describe the Environment
working-directory: cyborg
shell: bash
run: pnpm tsx -v


@@ -0,0 +1,25 @@
name: Setup Etcd cluster
description: Deploy Etcd cluster on Kubernetes
inputs:
etcd-replicas:
default: 3
description: "Etcd replicas"
namespace:
default: "etcd-cluster"
runs:
using: composite
steps:
- name: Install Etcd cluster
shell: bash
run: |
helm upgrade \
--install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
--set replicaCount=${{ inputs.etcd-replicas }} \
--set resources.requests.cpu=50m \
--set resources.requests.memory=128Mi \
--set auth.rbac.create=false \
--set auth.rbac.token.enabled=false \
--set persistence.size=2Gi \
--create-namespace \
-n ${{ inputs.namespace }}


@@ -0,0 +1,85 @@
name: Setup GreptimeDB cluster
description: Deploy GreptimeDB cluster on Kubernetes
inputs:
frontend-replicas:
default: 2
description: "Number of Frontend replicas"
datanode-replicas:
default: 2
description: "Number of Datanode replicas"
meta-replicas:
default: 3
description: "Number of Metasrv replicas"
image-registry:
default: "docker.io"
description: "Image registry"
image-repository:
default: "greptime/greptimedb"
description: "Image repository"
image-tag:
default: "latest"
description: 'Image tag'
etcd-endpoints:
default: "etcd.etcd-cluster.svc.cluster.local:2379"
description: "Etcd endpoints"
runs:
using: composite
steps:
- name: Install GreptimeDB operator
shell: bash
run: |
helm repo add greptime https://greptimeteam.github.io/helm-charts/
helm repo update
helm upgrade \
--install \
--create-namespace \
greptimedb-operator greptime/greptimedb-operator \
-n greptimedb-admin \
--wait \
--wait-for-jobs
- name: Install GreptimeDB cluster
shell: bash
run: |
helm upgrade \
--install my-greptimedb \
--set meta.etcdEndpoints=${{ inputs.etcd-endpoints }} \
--set image.registry=${{ inputs.image-registry }} \
--set image.repository=${{ inputs.image-repository }} \
--set image.tag=${{ inputs.image-tag }} \
--set base.podTemplate.main.resources.requests.cpu=50m \
--set base.podTemplate.main.resources.requests.memory=256Mi \
--set base.podTemplate.main.resources.limits.cpu=1000m \
--set base.podTemplate.main.resources.limits.memory=2Gi \
--set frontend.replicas=${{ inputs.frontend-replicas }} \
--set datanode.replicas=${{ inputs.datanode-replicas }} \
--set meta.replicas=${{ inputs.meta-replicas }} \
greptime/greptimedb-cluster \
--create-namespace \
-n my-greptimedb \
--wait \
--wait-for-jobs
- name: Wait for GreptimeDB
shell: bash
run: |
while true; do
PHASE=$(kubectl -n my-greptimedb get gtc my-greptimedb -o jsonpath='{.status.clusterPhase}')
if [ "$PHASE" == "Running" ]; then
echo "Cluster is ready"
break
else
echo "Cluster is not ready yet: Current phase: $PHASE"
kubectl get pods -n my-greptimedb
sleep 5 # wait for 5 seconds before check again.
fi
done
- name: Print GreptimeDB info
if: always()
shell: bash
run: |
kubectl get all --show-labels -n my-greptimedb
- name: Describe Nodes
if: always()
shell: bash
run: |
kubectl describe nodes

.github/actions/setup-kind/action.yml (new file, 10 lines)

@@ -0,0 +1,10 @@
name: Setup Kind
description: Deploy Kind
runs:
using: composite
steps:
- uses: actions/checkout@v4
- name: Create kind cluster
shell: bash
run: |
./.github/scripts/kind-with-registry.sh


@@ -57,3 +57,14 @@ runs:
AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-secret-access-key }}
run: |
aws s3 rm s3://${{ inputs.aws-ci-test-bucket }}/${{ inputs.data-root }} --recursive
- name: Export kind logs
if: failure()
shell: bash
run: kind export logs -n greptimedb-operator-e2e /tmp/kind
- name: Upload logs
if: failure()
uses: actions/upload-artifact@v4
with:
name: kind-logs
path: /tmp/kind
retention-days: 3


@@ -1,4 +0,0 @@
Doc not needed:
- '- \[x\] This PR does not require documentation updates.'
Doc update required:
- '- \[ \] This PR does not require documentation updates.'


@@ -15,6 +15,6 @@ Please explain IN DETAIL what the changes are in this PR and why they are needed
## Checklist
- [ ] I have written the necessary rustdoc comments.
- [ ] I have added the necessary unit tests and integration tests.
- [x] This PR does not require documentation updates.
- [ ] I have written the necessary rustdoc comments.
- [ ] I have added the necessary unit tests and integration tests.
- [ ] This PR requires documentation updates.

.github/scripts/kind-with-registry.sh (new executable file, 66 lines)

@@ -0,0 +1,66 @@
#!/usr/bin/env bash
set -e
set -o pipefail
# 1. Create registry container unless it already exists
reg_name='kind-registry'
reg_port='5001'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
docker run \
-d --restart=always -p "127.0.0.1:${reg_port}:5000" --network bridge --name "${reg_name}" \
registry:2
fi
# 2. Create kind cluster with containerd registry config dir enabled
# TODO: kind will eventually enable this by default and this patch will
# be unnecessary.
#
# See:
# https://github.com/kubernetes-sigs/kind/issues/2875
# https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
# See: https://github.com/containerd/containerd/blob/main/docs/hosts.md
cat <<EOF | kind create cluster --wait 2m --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d"
EOF
# 3. Add the registry config to the nodes
#
# This is necessary because localhost resolves to loopback addresses that are
# network-namespace local.
# In other words: localhost in the container is not localhost on the host.
#
# We want a consistent name that works from both ends, so we tell containerd to
# alias localhost:${reg_port} to the registry container when pulling images
REGISTRY_DIR="/etc/containerd/certs.d/localhost:${reg_port}"
for node in $(kind get nodes); do
docker exec "${node}" mkdir -p "${REGISTRY_DIR}"
cat <<EOF | docker exec -i "${node}" cp /dev/stdin "${REGISTRY_DIR}/hosts.toml"
[host."http://${reg_name}:5000"]
EOF
done
# 4. Connect the registry to the cluster network if not already connected
# This allows kind to bootstrap the network but ensures they're on the same network
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
docker network connect "kind" "${reg_name}"
fi
# 5. Document the local registry
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: local-registry-hosting
namespace: kube-public
data:
localRegistryHosting.v1: |
host: "localhost:${reg_port}"
help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF


@@ -13,7 +13,7 @@ on:
name: Build API docs
env:
RUST_TOOLCHAIN: nightly-2024-04-18
RUST_TOOLCHAIN: nightly-2024-04-20
jobs:
apidoc:


@@ -82,6 +82,9 @@ env:
# The source code will check out in the following path: '${WORKING_DIR}/dev/greptime'.
CHECKOUT_GREPTIMEDB_PATH: dev/greptimedb
permissions:
issues: write
jobs:
allocate-runners:
name: Allocate runners
@@ -321,7 +324,7 @@ jobs:
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
notification:
if: ${{ always() }} # Not requiring successful dependent jobs, always run.
if: ${{ github.repository == 'GreptimeTeam/greptimedb' && always() }} # Not requiring successful dependent jobs, always run.
name: Send notification to Greptime team
needs: [
release-images-to-dockerhub
@@ -330,16 +333,25 @@ jobs:
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:
- name: Notifiy dev build successful result
- uses: actions/checkout@v4
- uses: ./.github/actions/setup-cyborg
- name: Report CI status
id: report-ci-status
working-directory: cyborg
run: pnpm tsx bin/report-ci-failure.ts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
CI_REPORT_STATUS: ${{ needs.release-images-to-dockerhub.outputs.build-result == 'success' }}
- name: Notify dev build successful result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-result == 'success' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has completed successfully."}
- name: Notifiy dev build failed result
- name: Notify dev build failed result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-result != 'success' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check 'https://github.com/GreptimeTeam/greptimedb/actions/workflows/${{ env.NEXT_RELEASE_VERSION }}-build.yml'."}
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check ${{ steps.report-ci-status.outputs.html_url }}."}


@@ -30,7 +30,7 @@ concurrency:
cancel-in-progress: true
env:
RUST_TOOLCHAIN: nightly-2024-04-18
RUST_TOOLCHAIN: nightly-2024-04-20
jobs:
check-typos-and-docs:
@@ -57,7 +57,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ windows-latest, ubuntu-20.04 ]
os: [ windows-2022, ubuntu-20.04 ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
@@ -231,6 +231,130 @@ jobs:
path: /tmp/unstable-greptime/
retention-days: 3
build-greptime-ci:
name: Build GreptimeDB binary (profile-CI)
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ ubuntu-20.04 ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- uses: Swatinem/rust-cache@v2
with:
# Shares across multiple jobs
shared-key: "build-greptime-ci"
- name: Install cargo-gc-bin
shell: bash
run: cargo install cargo-gc-bin
- name: Build greptime bianry
shell: bash
# `cargo gc` will invoke `cargo build` with specified args
run: cargo build --bin greptime --profile ci
- name: Pack greptime binary
shell: bash
run: |
mkdir bin && \
mv ./target/ci/greptime bin
- name: Print greptime binaries info
run: ls -lh bin
- name: Upload artifacts
uses: ./.github/actions/upload-artifacts
with:
artifacts-dir: bin
version: current
distributed-fuzztest:
name: Fuzz Test (Distributed, Disk)
runs-on: ubuntu-latest
needs: build-greptime-ci
strategy:
matrix:
target: [ "fuzz_create_table", "fuzz_alter_table", "fuzz_create_database", "fuzz_create_logical_table", "fuzz_alter_logical_table", "fuzz_insert", "fuzz_insert_logical_table" ]
steps:
- uses: actions/checkout@v4
- name: Setup Kind
uses: ./.github/actions/setup-kind
- name: Setup Etcd cluser
uses: ./.github/actions/setup-etcd-cluster
# Prepares for fuzz tests
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
# Shares across multiple jobs
shared-key: "fuzz-test-targets"
- name: Set Rust Fuzz
shell: bash
run: |
sudo apt-get install -y libfuzzer-14-dev
rustup install nightly
cargo +nightly install cargo-fuzz
# Downloads ci image
- name: Download pre-built binariy
uses: actions/download-artifact@v4
with:
name: bin
path: .
- name: Unzip binary
run: tar -xvf ./bin.tar.gz
- name: Build and push GreptimeDB image
uses: ./.github/actions/build-and-push-ci-image
- name: Wait for etcd
run: |
kubectl wait \
--for=condition=Ready \
pod -l app.kubernetes.io/instance=etcd \
--timeout=120s \
-n etcd-cluster
- name: Print etcd info
shell: bash
run: kubectl get all --show-labels -n etcd-cluster
# Setup cluster for test
- name: Setup GreptimeDB cluster
uses: ./.github/actions/setup-greptimedb-cluster
with:
image-registry: localhost:5001
- name: Port forward (mysql)
run: |
kubectl port-forward service/my-greptimedb-frontend 4002:4002 -n my-greptimedb&
- name: Fuzz Test
uses: ./.github/actions/fuzz-test
env:
CUSTOM_LIBFUZZER_PATH: /usr/lib/llvm-14/lib/libFuzzer.a
GT_MYSQL_ADDR: 127.0.0.1:4002
with:
target: ${{ matrix.target }}
max-total-time: 120
- name: Describe Nodes
if: failure()
shell: bash
run: |
kubectl describe nodes
- name: Export kind logs
if: failure()
shell: bash
run: |
kind export logs /tmp/kind
- name: Upload logs
if: failure()
uses: actions/upload-artifact@v4
with:
name: fuzz-tests-kind-logs-${{ matrix.target }}
path: /tmp/kind
retention-days: 3
sqlness:
name: Sqlness Test
@@ -256,7 +380,7 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: sqlness-logs
path: /tmp/sqlness-*
path: /tmp/sqlness*
retention-days: 3
sqlness-kafka-wal:
@@ -286,7 +410,7 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: sqlness-logs-with-kafka-wal
path: /tmp/sqlness-*
path: /tmp/sqlness*
retention-days: 3
fmt:


@@ -1,39 +0,0 @@
name: Create Issue in downstream repos
on:
issues:
types:
- labeled
pull_request_target:
types:
- labeled
jobs:
doc_issue:
if: github.event.label.name == 'doc update required'
runs-on: ubuntu-20.04
steps:
- name: create an issue in doc repo
uses: dacbd/create-issue-action@v1.2.1
with:
owner: GreptimeTeam
repo: docs
token: ${{ secrets.DOCS_REPO_TOKEN }}
title: Update docs for ${{ github.event.issue.title || github.event.pull_request.title }}
body: |
A document change request is generated from
${{ github.event.issue.html_url || github.event.pull_request.html_url }}
cloud_issue:
if: github.event.label.name == 'cloud followup required'
runs-on: ubuntu-20.04
steps:
- name: create an issue in cloud repo
uses: dacbd/create-issue-action@v1.2.1
with:
owner: GreptimeTeam
repo: greptimedb-cloud
token: ${{ secrets.DOCS_REPO_TOKEN }}
title: Followup changes in ${{ github.event.issue.title || github.event.pull_request.title }}
body: |
A followup request is generated from
${{ github.event.issue.html_url || github.event.pull_request.html_url }}


@@ -1,36 +0,0 @@
name: "PR Doc Labeler"
on:
pull_request_target:
types: [opened, edited, synchronize, ready_for_review, auto_merge_enabled, labeled, unlabeled]
permissions:
pull-requests: write
contents: read
jobs:
triage:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-latest
steps:
- uses: github/issue-labeler@v3.4
with:
configuration-path: .github/doc-label-config.yml
enable-versioned-regex: false
repo-token: ${{ secrets.GITHUB_TOKEN }}
sync-labels: 1
- name: create an issue in doc repo
uses: dacbd/create-issue-action@v1.2.1
if: ${{ github.event.action == 'opened' && contains(github.event.pull_request.body, '- [ ] This PR does not require documentation updates.') }}
with:
owner: GreptimeTeam
repo: docs
token: ${{ secrets.DOCS_REPO_TOKEN }}
title: Update docs for ${{ github.event.issue.title || github.event.pull_request.title }}
body: |
A document change request is generated from
${{ github.event.issue.html_url || github.event.pull_request.html_url }}
- name: Check doc labels
uses: docker://agilepathway/pull-request-label-checker:latest
with:
one_of: Doc update required,Doc not needed
repo_token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/docbot.yml (new file, 22 lines)

@@ -0,0 +1,22 @@
name: Follow Up Docs
on:
pull_request_target:
types: [opened, edited]
permissions:
pull-requests: write
contents: read
jobs:
docbot:
runs-on: ubuntu-20.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/setup-cyborg
- name: Maybe Follow Up Docs Issue
working-directory: cyborg
run: pnpm tsx bin/follow-up-docs-issue.ts
env:
DOCS_REPO_TOKEN: ${{ secrets.DOCS_REPO_TOKEN }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -66,6 +66,13 @@ env:
NIGHTLY_RELEASE_PREFIX: nightly
# Use the different image name to avoid conflict with the release images.
# The DockerHub image will be greptime/greptimedb-nightly.
IMAGE_NAME: greptimedb-nightly
permissions:
issues: write
jobs:
allocate-runners:
name: Allocate runners
@@ -188,6 +195,7 @@ jobs:
with:
image-registry: docker.io
image-namespace: ${{ vars.IMAGE_NAMESPACE }}
image-name: ${{ env.IMAGE_NAME }}
image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
version: ${{ needs.allocate-runners.outputs.version }}
@@ -220,7 +228,7 @@ jobs:
with:
src-image-registry: docker.io
src-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
src-image-name: greptimedb
src-image-name: ${{ env.IMAGE_NAME }}
dst-image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
dst-image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
dst-image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
@@ -285,7 +293,7 @@ jobs:
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
notification:
if: ${{ always() }} # Not requiring successful dependent jobs, always run.
if: ${{ github.repository == 'GreptimeTeam/greptimedb' && always() }} # Not requiring successful dependent jobs, always run.
name: Send notification to Greptime team
needs: [
release-images-to-dockerhub
@@ -294,16 +302,25 @@ jobs:
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:
- name: Notifiy nightly build successful result
- uses: actions/checkout@v4
- uses: ./.github/actions/setup-cyborg
- name: Report CI status
id: report-ci-status
working-directory: cyborg
run: pnpm tsx bin/report-ci-failure.ts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
CI_REPORT_STATUS: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result == 'success' }}
- name: Notify nightly build successful result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result == 'success' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has completed successfully."}
- name: Notifiy nightly build failed result
- name: Notify nightly build failed result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result != 'success' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check 'https://github.com/GreptimeTeam/greptimedb/actions/workflows/${{ env.NEXT_RELEASE_VERSION }}-build.yml'."}
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check ${{ steps.report-ci-status.outputs.html_url }}."}


@@ -1,6 +1,6 @@
on:
schedule:
- cron: '0 23 * * 1-5'
- cron: "0 23 * * 1-5"
workflow_dispatch:
name: Nightly CI
@@ -10,7 +10,10 @@ concurrency:
cancel-in-progress: true
env:
RUST_TOOLCHAIN: nightly-2024-04-18
RUST_TOOLCHAIN: nightly-2024-04-20
permissions:
issues: write
jobs:
sqlness-test:
@@ -22,7 +25,6 @@ jobs:
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Run sqlness test
uses: ./.github/actions/sqlness-test
with:
@@ -35,10 +37,11 @@ jobs:
sqlness-windows:
name: Sqlness tests on Windows
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: windows-latest-8-cores
runs-on: windows-2022-8-cores
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/setup-cyborg
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
@@ -49,14 +52,6 @@ jobs:
uses: Swatinem/rust-cache@v2
- name: Run sqlness
run: cargo sqlness
- name: Notify slack if failed
if: failure()
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
with:
payload: |
{"text": "Nightly CI failed for sqlness tests"}
- name: Upload sqlness logs
if: always()
uses: actions/upload-artifact@v4
@@ -68,14 +63,18 @@ jobs:
test-on-windows:
name: Run tests on Windows
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: windows-latest-8-cores
runs-on: windows-2022-8-cores
timeout-minutes: 60
steps:
- run: git config --global core.autocrlf false
- uses: actions/checkout@v4
- uses: ./.github/actions/setup-cyborg
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: KyleMayes/install-llvm-action@v1
with:
version: "14.0"
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@master
with:
@@ -88,7 +87,7 @@ jobs:
- name: Install Python
uses: actions/setup-python@v5
with:
python-version: '3.10'
python-version: "3.10"
- name: Install PyArrow Package
run: pip install pyarrow
- name: Install WSL distribution
@@ -98,18 +97,62 @@ jobs:
- name: Running tests
run: cargo nextest run -F pyo3_backend,dashboard
env:
CARGO_BUILD_RUSTFLAGS: "-C linker=lld-link"
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
RUSTUP_WINDOWS_PATH_ADD_BIN: 1 # Workaround for https://github.com/nextest-rs/nextest/issues/1493
GT_S3_BUCKET: ${{ vars.AWS_CI_TEST_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.AWS_CI_TEST_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.AWS_CI_TEST_SECRET_ACCESS_KEY }}
GT_S3_REGION: ${{ vars.AWS_CI_TEST_BUCKET_REGION }}
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Notify slack if failed
if: failure()
uses: slackapi/slack-github-action@v1.23.0
check-status:
name: Check status
needs: [
sqlness-test,
sqlness-windows,
test-on-windows,
]
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-20.04
outputs:
check-result: ${{ steps.set-check-result.outputs.check-result }}
steps:
- name: Set check result
id: set-check-result
run: |
echo "check-result=success" >> $GITHUB_OUTPUT
notification:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' && always() }} # Not requiring successful dependent jobs, always run.
name: Send notification to Greptime team
needs: [
check-status
]
runs-on: ubuntu-20.04
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/setup-cyborg
- name: Report CI status
id: report-ci-status
working-directory: cyborg
run: pnpm tsx bin/report-ci-failure.ts
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
CI_REPORT_STATUS: ${{ needs.check-status.outputs.check-result == 'success' }}
- name: Notify dev build successful result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.check-status.outputs.check-result == 'success' }}
with:
payload: |
{"text": "Nightly CI failed for cargo test"}
{"text": "Nightly CI has completed successfully."}
- name: Notify dev build failed result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.check-status.outputs.check-result != 'success' }}
with:
payload: |
{"text": "Nightly CI failed has failed, please check ${{ steps.report-ci-status.outputs.html_url }}."}


@@ -82,7 +82,7 @@ on:
# Use env variables to control all the release process.
env:
# The arguments of building greptime.
RUST_TOOLCHAIN: nightly-2024-04-18
RUST_TOOLCHAIN: nightly-2024-04-20
CARGO_PROFILE: nightly
# Controls whether to run tests, include unit-test, integration-test and sqlness.
@@ -91,7 +91,12 @@ env:
# The scheduled version is '${{ env.NEXT_RELEASE_VERSION }}-nightly-YYYYMMDD', like v0.2.0-nigthly-20230313;
NIGHTLY_RELEASE_PREFIX: nightly
# Note: The NEXT_RELEASE_VERSION should be modified manually by every formal release.
NEXT_RELEASE_VERSION: v0.8.0
NEXT_RELEASE_VERSION: v0.9.0
# Permission reference: https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs
permissions:
issues: write # Allows the action to create issues for cyborg.
contents: write # Allows the action to create a release.
jobs:
allocate-runners:
@@ -102,7 +107,7 @@ jobs:
linux-amd64-runner: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-arm64-runner: ${{ steps.start-linux-arm64-runner.outputs.label }}
macos-runner: ${{ inputs.macos_runner || vars.DEFAULT_MACOS_RUNNER }}
windows-runner: windows-latest-8-cores
windows-runner: windows-2022-8-cores
# The following EC2 resource id will be used for resource releasing.
linux-amd64-ec2-runner-label: ${{ steps.start-linux-amd64-runner.outputs.label }}
@@ -245,7 +250,7 @@ jobs:
- name: Set build macos result
id: set-build-macos-result
run: |
echo "build-macos-result=success" >> $GITHUB_OUTPUT
echo "build-macos-result=success" >> $GITHUB_OUTPUT
build-windows-artifacts:
name: Build Windows artifacts
@@ -318,7 +323,7 @@ jobs:
- name: Set build image result
id: set-build-image-result
run: |
echo "build-image-result=success" >> $GITHUB_OUTPUT
echo "build-image-result=success" >> $GITHUB_OUTPUT
release-cn-artifacts:
name: Release artifacts to CN region
@@ -436,7 +441,7 @@ jobs:
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
notification:
if: ${{ always() || github.repository == 'GreptimeTeam/greptimedb' }}
if: ${{ github.repository == 'GreptimeTeam/greptimedb' && (github.event_name == 'push' || github.event_name == 'schedule') && always() }}
name: Send notification to Greptime team
needs: [
release-images-to-dockerhub,
@@ -447,16 +452,25 @@ jobs:
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:
- name: Notifiy release successful result
- uses: actions/checkout@v4
- uses: ./.github/actions/setup-cyborg
- name: Report CI status
id: report-ci-status
working-directory: cyborg
run: pnpm tsx bin/report-ci-failure.ts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
CI_REPORT_STATUS: ${{ needs.release-images-to-dockerhub.outputs.build-image-result == 'success' && needs.build-windows-artifacts.outputs.build-windows-result == 'success' && needs.build-macos-artifacts.outputs.build-macos-result == 'success' }}
- name: Notify release successful result
uses: slackapi/slack-github-action@v1.25.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-image-result == 'success' && needs.build-windows-artifacts.outputs.build-windows-result == 'success' && needs.build-macos-artifacts.outputs.build-macos-result == 'success' }}
with:
payload: |
{"text": "GreptimeDB's release version has completed successfully."}
- name: Notifiy release failed result
- name: Notify release failed result
uses: slackapi/slack-github-action@v1.25.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-image-result != 'success' || needs.build-windows-artifacts.outputs.build-windows-result != 'success' || needs.build-macos-artifacts.outputs.build-macos-result != 'success' }}
with:
payload: |
{"text": "GreptimeDB's release version has failed, please check 'https://github.com/GreptimeTeam/greptimedb/actions/workflows/release.yml'."}
{"text": "GreptimeDB's release version has failed, please check ${{ steps.report-ci-status.outputs.html_url }}."}

View File

@@ -16,16 +16,7 @@ jobs:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
- uses: pnpm/action-setup@v3
with:
package_json_file: 'cyborg/package.json'
run_install: true
- name: Describe the Environment
working-directory: cyborg
run: pnpm tsx -v
- uses: ./.github/actions/setup-cyborg
- name: Do Maintenance
working-directory: cyborg
run: pnpm tsx bin/schedule.ts

View File

@@ -13,16 +13,7 @@ jobs:
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
- uses: pnpm/action-setup@v3
with:
package_json_file: 'cyborg/package.json'
run_install: true
- name: Describe the Environment
working-directory: cyborg
run: pnpm tsx -v
- uses: ./.github/actions/setup-cyborg
- name: Check Pull Request
working-directory: cyborg
run: pnpm tsx bin/check-pull-request.ts

Cargo.lock (generated, 1179 lines changed)

File diff suppressed because it is too large.

View File

@@ -4,6 +4,7 @@ members = [
"src/api",
"src/auth",
"src/catalog",
"src/cache",
"src/client",
"src/cmd",
"src/common/base",
@@ -63,13 +64,14 @@ members = [
resolver = "2"
[workspace.package]
version = "0.7.2"
version = "0.8.0"
edition = "2021"
license = "Apache-2.0"
[workspace.lints]
clippy.print_stdout = "warn"
clippy.print_stderr = "warn"
clippy.dbg_macro = "warn"
clippy.implicit_clone = "warn"
clippy.readonly_write_lock = "allow"
rust.unknown_lints = "deny"
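The `[workspace.lints]` hunk above enables `clippy.dbg_macro` and `clippy.implicit_clone` as warnings while allowing `clippy.readonly_write_lock`. A minimal illustrative sketch (not code from this repository) of the two patterns the new warnings catch:

```rust
fn lint_examples(name: String) -> String {
    let borrowed: &String = &name;
    // clippy::dbg_macro: flags leftover `dbg!` debugging output.
    dbg!(borrowed);
    // clippy::implicit_clone: calling `to_string()` on a `&String` just
    // clones it; the lint nudges call sites toward an explicit `.clone()`.
    borrowed.to_string()
}

fn main() {
    println!("{}", lint_examples("greptime".to_owned()));
}
```

Warning on stray `dbg!` calls and implicit clones keeps debug output out of release builds and makes allocations explicit at call sites.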
@@ -99,6 +101,7 @@ bytemuck = "1.12"
bytes = { version = "1.5", features = ["serde"] }
chrono = { version = "0.4", features = ["serde"] }
clap = { version = "4.4", features = ["derive"] }
config = "0.13.0"
crossbeam-utils = "0.8"
dashmap = "5.4"
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
@@ -107,6 +110,7 @@ datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev
datafusion-functions = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-physical-plan = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-substrait = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
derive_builder = "0.12"
@@ -116,7 +120,7 @@ etcd-client = { git = "https://github.com/MichaelScofield/etcd-client.git", rev
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "219b2409bb701f75b43fc0ba64967d2ed8e75491" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "902f75fdd170c572e90b1f640161d90995f20218" }
humantime = "2.1"
humantime-serde = "1.1"
itertools = "0.10"
@@ -136,6 +140,7 @@ parquet = { version = "51.0.0", default-features = false, features = ["arrow", "
paste = "1.0"
pin-project = "1.0"
prometheus = { version = "0.13.3", features = ["process"] }
promql-parser = { version = "0.4" }
prost = "0.12"
raft-engine = { version = "0.4.1", default-features = false }
rand = "0.8"
@@ -154,10 +159,10 @@ serde = { version = "1.0", features = ["derive"] }
serde_json = { version = "1.0", features = ["float_roundtrip"] }
serde_with = "3"
smallvec = { version = "1", features = ["serde"] }
snafu = "0.7"
snafu = "0.8"
sysinfo = "0.30"
# on branch v0.44.x
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "c919990bf62ad38d2b0c0a3bc90b26ad919d51b0", features = [
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "e4e496b8d62416ad50ce70a1b460c7313610cf5d", features = [
"visitor",
] }
strum = { version = "0.25", features = ["derive"] }
@@ -166,13 +171,14 @@ tokio = { version = "1.36", features = ["full"] }
tokio-stream = { version = "0.1" }
tokio-util = { version = "0.7", features = ["io-util", "compat"] }
toml = "0.8.8"
tonic = { version = "0.11", features = ["tls"] }
tonic = { version = "0.11", features = ["tls", "gzip", "zstd"] }
uuid = { version = "1.7", features = ["serde", "v4", "fast-rng"] }
zstd = "0.13"
## workspaces members
api = { path = "src/api" }
auth = { path = "src/auth" }
cache = { path = "src/cache" }
catalog = { path = "src/catalog" }
client = { path = "src/client" }
cmd = { path = "src/cmd" }
@@ -204,6 +210,7 @@ common-wal = { path = "src/common/wal" }
datanode = { path = "src/datanode" }
datatypes = { path = "src/datatypes" }
file-engine = { path = "src/file-engine" }
flow = { path = "src/flow" }
frontend = { path = "src/frontend" }
index = { path = "src/index" }
log-store = { path = "src/log-store" }
@@ -242,6 +249,11 @@ lto = "thin"
debug = false
incremental = false
[profile.ci]
inherits = "dev"
debug = false
strip = true
[profile.dev.package.sqlness-runner]
debug = false
strip = true

View File

@@ -199,7 +199,7 @@ config-docs: ## Generate configuration documentation from toml files.
docker run --rm \
-v ${PWD}:/greptimedb \
-w /greptimedb/config \
toml2docs/toml2docs:latest \
toml2docs/toml2docs:v0.1.1 \
-p '##' \
-t ./config-docs-template.md \
-o ./config.md

View File

@@ -1,10 +1,16 @@
# Configurations
- [Standalone Mode](#standalone-mode)
- [Distributed Mode](#distributed-mode)
- [Frontend](#frontend)
- [Metasrv](#metasrv)
- [Datanode](#datanode)
## Standalone Mode
{{ toml2docs "./standalone.example.toml" }}
## Cluster Mode
## Distributed Mode
### Frontend

View File

@@ -1,5 +1,11 @@
# Configurations
- [Standalone Mode](#standalone-mode)
- [Distributed Mode](#distributed-mode)
- [Frontend](#frontend)
- [Metasrv](#metasrv)
- [Datanode](#datanode)
## Standalone Mode
| Key | Type | Default | Descriptions |
@@ -14,6 +20,11 @@
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
| `grpc.tls.mode` | String | `disable` | TLS mode. |
| `grpc.tls.cert_path` | String | `None` | Certificate file path. |
| `grpc.tls.key_path` | String | `None` | Private key file path. |
| `grpc.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload.<br/>For now, gRPC tls config does not support auto reload. |
| `mysql` | -- | -- | MySQL server options. |
| `mysql.enable` | Bool | `true` | Whether to enable. |
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
@@ -27,7 +38,7 @@
| `postgres.enable` | Bool | `true` | Whether to enable |
| `postgres.addr` | String | `127.0.0.1:4003` | The addr to bind the PostgresSQL server. |
| `postgres.runtime_size` | Integer | `2` | The number of server worker threads. |
| `postgres.tls` | -- | -- | PostgresSQL server TLS options, see `mysql_options.tls` section. |
| `postgres.tls` | -- | -- | PostgresSQL server TLS options, see `mysql.tls` section. |
| `postgres.tls.mode` | String | `disable` | TLS mode. |
| `postgres.tls.cert_path` | String | `None` | Certificate file path. |
| `postgres.tls.key_path` | String | `None` | Private key file path. |
@@ -96,6 +107,10 @@
| `region_engine.mito.sst_meta_cache_size` | String | `128MB` | Cache size for SST metadata. Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/32 of OS memory with a max limitation of 128MB. |
| `region_engine.mito.vector_cache_size` | String | `512MB` | Cache size for vectors and arrow arrays. Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/16 of OS memory with a max limitation of 512MB. |
| `region_engine.mito.page_cache_size` | String | `512MB` | Cache size for pages of SST row groups. Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/16 of OS memory with a max limitation of 512MB. |
| `region_engine.mito.enable_experimental_write_cache` | Bool | `false` | Whether to enable the experimental write cache. |
| `region_engine.mito.experimental_write_cache_path` | String | `""` | File system path for write cache, defaults to `{data_home}/write_cache`. |
| `region_engine.mito.experimental_write_cache_size` | String | `512MB` | Capacity for write cache. |
| `region_engine.mito.experimental_write_cache_ttl` | String | `1h` | TTL for write cache. |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.scan_parallelism` | Integer | `0` | Parallelism to scan a region (default: 1/4 of cpu cores).<br/>- `0`: using the default value (1/4 of cpu cores).<br/>- `1`: scan in current thread.<br/>- `n`: scan in parallelism n. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
@@ -127,9 +142,11 @@
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The url the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | `None` | The tokio console address. |
## Cluster Mode
## Distributed Mode
### Frontend
@@ -147,6 +164,11 @@
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
| `grpc.tls.mode` | String | `disable` | TLS mode. |
| `grpc.tls.cert_path` | String | `None` | Certificate file path. |
| `grpc.tls.key_path` | String | `None` | Private key file path. |
| `grpc.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload.<br/>For now, gRPC tls config does not support auto reload. |
| `mysql` | -- | -- | MySQL server options. |
| `mysql.enable` | Bool | `true` | Whether to enable. |
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
@@ -160,7 +182,7 @@
| `postgres.enable` | Bool | `true` | Whether to enable |
| `postgres.addr` | String | `127.0.0.1:4003` | The addr to bind the PostgresSQL server. |
| `postgres.runtime_size` | Integer | `2` | The number of server worker threads. |
| `postgres.tls` | -- | -- | PostgresSQL server TLS options, see `mysql_options.tls` section. |
| `postgres.tls` | -- | -- | PostgresSQL server TLS options, see `mysql.tls` section. |
| `postgres.tls.mode` | String | `disable` | TLS mode. |
| `postgres.tls.cert_path` | String | `None` | Certificate file path. |
| `postgres.tls.key_path` | String | `None` | Private key file path. |
@@ -184,7 +206,6 @@
| `meta_client.metadata_cache_tti` | String | `5m` | -- |
| `datanode` | -- | -- | Datanode options. |
| `datanode.client` | -- | -- | Datanode client options. |
| `datanode.client.timeout` | String | `10s` | -- |
| `datanode.client.connect_timeout` | String | `10s` | -- |
| `datanode.client.tcp_nodelay` | Bool | `true` | -- |
| `logging` | -- | -- | The logging options. |
@@ -203,6 +224,8 @@
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The url the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | `None` | The tokio console address. |
### Metasrv
@@ -259,6 +282,8 @@
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The url the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | `None` | The tokio console address. |
### Datanode
@@ -339,6 +364,10 @@
| `region_engine.mito.sst_meta_cache_size` | String | `128MB` | Cache size for SST metadata. Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/32 of OS memory with a max limitation of 128MB. |
| `region_engine.mito.vector_cache_size` | String | `512MB` | Cache size for vectors and arrow arrays. Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/16 of OS memory with a max limitation of 512MB. |
| `region_engine.mito.page_cache_size` | String | `512MB` | Cache size for pages of SST row groups. Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/16 of OS memory with a max limitation of 512MB. |
| `region_engine.mito.enable_experimental_write_cache` | Bool | `false` | Whether to enable the experimental write cache. |
| `region_engine.mito.experimental_write_cache_path` | String | `""` | File system path for write cache, defaults to `{data_home}/write_cache`. |
| `region_engine.mito.experimental_write_cache_size` | String | `512MB` | Capacity for write cache. |
| `region_engine.mito.experimental_write_cache_ttl` | String | `1h` | TTL for write cache. |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.scan_parallelism` | Integer | `0` | Parallelism to scan a region (default: 1/4 of cpu cores).<br/>- `0`: using the default value (1/4 of cpu cores).<br/>- `1`: scan in current thread.<br/>- `n`: scan in parallelism n. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
@@ -370,3 +399,5 @@
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The url the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | `None` | The tokio console address. |

View File

@@ -324,6 +324,18 @@ vector_cache_size = "512MB"
## If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
page_cache_size = "512MB"
## Whether to enable the experimental write cache.
enable_experimental_write_cache = false
## File system path for write cache, defaults to `{data_home}/write_cache`.
experimental_write_cache_path = ""
## Capacity for write cache.
experimental_write_cache_size = "512MB"
## TTL for write cache.
experimental_write_cache_ttl = "1h"
## Buffer size for SST writing.
sst_write_buffer_size = "8MB"
@@ -428,3 +440,9 @@ url = ""
## HTTP headers of Prometheus remote-write carry.
headers = { }
## The tracing options. Only effect when compiled with `tokio-console` feature.
[tracing]
## The tokio console address.
## +toml2docs:none-default
tokio_console_addr = "127.0.0.1"

View File

@@ -30,6 +30,23 @@ addr = "127.0.0.1:4001"
## The number of server worker threads.
runtime_size = 8
## gRPC server TLS options, see `mysql.tls` section.
[grpc.tls]
## TLS mode.
mode = "disable"
## Certificate file path.
## +toml2docs:none-default
cert_path = ""
## Private key file path.
## +toml2docs:none-default
key_path = ""
## Watch for Certificate and key file change and auto reload.
## For now, gRPC tls config does not support auto reload.
watch = false
## MySQL server options.
[mysql]
## Whether to enable.
@@ -70,7 +87,7 @@ addr = "127.0.0.1:4003"
## The number of server worker threads.
runtime_size = 2
## PostgresSQL server TLS options, see `mysql_options.tls` section.
## PostgresSQL server TLS options, see `mysql.tls` section.
[postgres.tls]
## TLS mode.
mode = "disable"
@@ -136,7 +153,6 @@ metadata_cache_tti = "5m"
[datanode]
## Datanode client options.
[datanode.client]
timeout = "10s"
connect_timeout = "10s"
tcp_nodelay = true
@@ -186,3 +202,9 @@ url = ""
## HTTP headers of Prometheus remote-write carry.
headers = { }
## The tracing options. Only effect when compiled with `tokio-console` feature.
[tracing]
## The tokio console address.
## +toml2docs:none-default
tokio_console_addr = "127.0.0.1"

View File

@@ -141,3 +141,9 @@ url = ""
## HTTP headers of Prometheus remote-write carry.
headers = { }
## The tracing options. Only effect when compiled with `tokio-console` feature.
[tracing]
## The tokio console address.
## +toml2docs:none-default
tokio_console_addr = "127.0.0.1"

View File

@@ -25,6 +25,23 @@ addr = "127.0.0.1:4001"
## The number of server worker threads.
runtime_size = 8
## gRPC server TLS options, see `mysql.tls` section.
[grpc.tls]
## TLS mode.
mode = "disable"
## Certificate file path.
## +toml2docs:none-default
cert_path = ""
## Private key file path.
## +toml2docs:none-default
key_path = ""
## Watch for Certificate and key file change and auto reload.
## For now, gRPC tls config does not support auto reload.
watch = false
## MySQL server options.
[mysql]
## Whether to enable.
@@ -65,7 +82,7 @@ addr = "127.0.0.1:4003"
## The number of server worker threads.
runtime_size = 2
## PostgresSQL server TLS options, see `mysql_options.tls` section.
## PostgresSQL server TLS options, see `mysql.tls` section.
[postgres.tls]
## TLS mode.
mode = "disable"
@@ -367,6 +384,18 @@ vector_cache_size = "512MB"
## If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
page_cache_size = "512MB"
## Whether to enable the experimental write cache.
enable_experimental_write_cache = false
## File system path for write cache, defaults to `{data_home}/write_cache`.
experimental_write_cache_path = ""
## Capacity for write cache.
experimental_write_cache_size = "512MB"
## TTL for write cache.
experimental_write_cache_ttl = "1h"
## Buffer size for SST writing.
sst_write_buffer_size = "8MB"
@@ -471,3 +500,9 @@ url = ""
## HTTP headers of Prometheus remote-write carry.
headers = { }
## The tracing options. Only effect when compiled with `tokio-console` feature.
[tracing]
## The tokio console address.
## +toml2docs:none-default
tokio_console_addr = "127.0.0.1"

View File

@@ -0,0 +1,106 @@
/*
* Copyright 2023 Greptime Team
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import * as core from '@actions/core'
import {handleError, obtainClient} from "@/common";
import {context} from "@actions/github";
import {PullRequestEditedEvent, PullRequestEvent, PullRequestOpenedEvent} from "@octokit/webhooks-types";
// @ts-expect-error moduleResolution:nodenext issue 54523
import {RequestError} from "@octokit/request-error";
const needFollowUpDocs = "[x] This PR requires documentation updates."
const labelDocsNotRequired = "docs-not-required"
const labelDocsRequired = "docs-required"
async function main() {
if (!context.payload.pull_request) {
throw new Error(`Only pull request event supported. ${context.eventName} is unsupported.`)
}
const client = obtainClient("GITHUB_TOKEN")
const docsClient = obtainClient("DOCS_REPO_TOKEN")
const payload = context.payload as PullRequestEvent
const { owner, repo, number, actor, title, html_url } = {
owner: payload.pull_request.base.user.login,
repo: payload.pull_request.base.repo.name,
number: payload.pull_request.number,
title: payload.pull_request.title,
html_url: payload.pull_request.html_url,
actor: payload.pull_request.user.login,
}
const followUpDocs = checkPullRequestEvent(payload)
if (followUpDocs) {
core.info("Follow up docs.")
await client.rest.issues.removeLabel({
owner, repo, issue_number: number, name: labelDocsNotRequired,
}).catch((e: RequestError) => {
if (e.status != 404) {
throw e;
}
core.debug(`Label ${labelDocsNotRequired} not exist.`)
})
await client.rest.issues.addLabels({
owner, repo, issue_number: number, labels: [labelDocsRequired],
})
await docsClient.rest.issues.create({
owner: 'GreptimeTeam',
repo: 'docs',
title: `Update docs for ${title}`,
body: `A document change request is generated from ${html_url}`,
assignee: actor,
}).then((res) => {
core.info(`Created issue ${res.data}`)
})
} else {
core.info("No need to follow up docs.")
await client.rest.issues.removeLabel({
owner, repo, issue_number: number, name: labelDocsRequired
}).catch((e: RequestError) => {
if (e.status != 404) {
throw e;
}
core.debug(`Label ${labelDocsRequired} not exist.`)
})
await client.rest.issues.addLabels({
owner, repo, issue_number: number, labels: [labelDocsNotRequired],
})
}
}
function checkPullRequestEvent(payload: PullRequestEvent) {
switch (payload.action) {
case "opened":
return checkPullRequestOpenedEvent(payload as PullRequestOpenedEvent)
case "edited":
return checkPullRequestEditedEvent(payload as PullRequestEditedEvent)
default:
throw new Error(`${payload.action} is unsupported.`)
}
}
function checkPullRequestOpenedEvent(event: PullRequestOpenedEvent): boolean {
// @ts-ignore
return event.pull_request.body?.includes(needFollowUpDocs)
}
function checkPullRequestEditedEvent(event: PullRequestEditedEvent): boolean {
const previous = event.changes.body?.from.includes(needFollowUpDocs)
const current = event.pull_request.body?.includes(needFollowUpDocs)
// from docs-not-need to docs-required
return (!previous) && current
}
main().catch(handleError)

View File

@@ -0,0 +1,83 @@
/*
* Copyright 2023 Greptime Team
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import * as core from '@actions/core'
import {handleError, obtainClient} from "@/common"
import {context} from "@actions/github"
import _ from "lodash"
async function main() {
const success = process.env["CI_REPORT_STATUS"] === "true"
core.info(`CI_REPORT_STATUS=${process.env["CI_REPORT_STATUS"]}, resolved to ${success}`)
const client = obtainClient("GITHUB_TOKEN")
const title = `Workflow run '${context.workflow}' failed`
const url = `${process.env["GITHUB_SERVER_URL"]}/${process.env["GITHUB_REPOSITORY"]}/actions/runs/${process.env["GITHUB_RUN_ID"]}`
const failure_comment = `@GreptimeTeam/db-approver\nNew failure: ${url} `
const success_comment = `@GreptimeTeam/db-approver\nBack to success: ${url}`
const {owner, repo} = context.repo
const labels = ['O-ci-failure']
const issues = await client.paginate(client.rest.issues.listForRepo, {
owner,
repo,
labels: labels.join(','),
state: "open",
sort: "created",
direction: "desc",
});
const issue = _.find(issues, (i) => i.title === title);
if (issue) { // exist issue
core.info(`Found previous issue ${issue.html_url}`)
if (!success) {
await client.rest.issues.createComment({
owner,
repo,
issue_number: issue.number,
body: failure_comment,
})
} else {
await client.rest.issues.createComment({
owner,
repo,
issue_number: issue.number,
body: success_comment,
})
await client.rest.issues.update({
owner,
repo,
issue_number: issue.number,
state: "closed",
state_reason: "completed",
})
}
core.setOutput("html_url", issue.html_url)
} else if (!success) { // create new issue for failure
const issue = await client.rest.issues.create({
owner,
repo,
title,
labels,
body: failure_comment,
})
core.info(`Created issue ${issue.data.html_url}`)
core.setOutput("html_url", issue.data.html_url)
}
}
main().catch(handleError)

View File

@@ -7,6 +7,7 @@
"dependencies": {
"@actions/core": "^1.10.1",
"@actions/github": "^6.0.0",
"@octokit/request-error": "^6.1.1",
"@octokit/webhooks-types": "^7.5.1",
"conventional-commit-types": "^3.0.0",
"conventional-commits-parser": "^5.0.0",

cyborg/pnpm-lock.yaml (generated, 10 lines changed)
View File

@@ -11,6 +11,9 @@ dependencies:
'@actions/github':
specifier: ^6.0.0
version: 6.0.0
'@octokit/request-error':
specifier: ^6.1.1
version: 6.1.1
'@octokit/webhooks-types':
specifier: ^7.5.1
version: 7.5.1
@@ -359,6 +362,13 @@ packages:
once: 1.4.0
dev: false
/@octokit/request-error@6.1.1:
resolution: {integrity: sha512-1mw1gqT3fR/WFvnoVpY/zUM2o/XkMs/2AszUUG9I69xn0JFLv6PGkPhNk5lbfvROs79wiS0bqiJNxfCZcRJJdg==}
engines: {node: '>= 18'}
dependencies:
'@octokit/types': 13.5.0
dev: false
/@octokit/request@8.4.0:
resolution: {integrity: sha512-9Bb014e+m2TgBeEJGEbdplMVWwPmL1FPtggHQRkV+WVsMggPtEkLKPlcVYm/o8xKLkpJ7B+6N8WfQMtDLX2Dpw==}
engines: {node: '>= 18'}

View File

@@ -0,0 +1,16 @@
FROM ubuntu:22.04
# The binary name of GreptimeDB executable.
# Defaults to "greptime", but sometimes in other projects it might be different.
ARG TARGET_BIN=greptime
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates \
curl
ARG BINARY_PATH
ADD $BINARY_PATH/$TARGET_BIN /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENTRYPOINT ["greptime"]

View File

@@ -34,7 +34,7 @@ RUN rustup toolchain install ${RUST_TOOLCHAIN}
RUN rustup target add aarch64-linux-android
# Install cargo-ndk
RUN cargo install cargo-ndk
RUN cargo install cargo-ndk@3.5.4
ENV ANDROID_NDK_HOME $NDK_ROOT
# Builder entrypoint.

View File

@@ -23,28 +23,28 @@
## Write performance
| Environment | Ingest rate (rows/s) |
| ------------------ | --------------------- |
| Local | 3695814.64 |
| EC2 c5d.2xlarge | 2987166.64 |
| Environment | Ingest rate (rows/s) |
| --------------- | -------------------- |
| Local | 369581.464 |
| EC2 c5d.2xlarge | 298716.664 |
## Query performance
| Query type | Local (ms) | EC2 c5d.2xlarge (ms) |
| --------------------- | ---------- | ---------------------- |
| cpu-max-all-1 | 30.56 | 54.74 |
| cpu-max-all-8 | 52.69 | 70.50 |
| double-groupby-1 | 664.30 | 1366.63 |
| double-groupby-5 | 1391.26 | 2141.71 |
| double-groupby-all | 2828.94 | 3389.59 |
| groupby-orderby-limit | 718.92 | 1213.90 |
| high-cpu-1 | 29.21 | 52.98 |
| high-cpu-all | 5514.12 | 7194.91 |
| lastpoint | 7571.40 | 9423.41 |
| single-groupby-1-1-1 | 19.09 | 7.77 |
| single-groupby-1-1-12 | 27.28 | 51.64 |
| single-groupby-1-8-1 | 31.85 | 11.64 |
| single-groupby-5-1-1 | 16.14 | 9.67 |
| single-groupby-5-1-12 | 27.21 | 53.62 |
| single-groupby-5-8-1 | 39.62 | 14.96 |
| Query type | Local (ms) | EC2 c5d.2xlarge (ms) |
| --------------------- | ---------- | -------------------- |
| cpu-max-all-1 | 30.56 | 54.74 |
| cpu-max-all-8 | 52.69 | 70.50 |
| double-groupby-1 | 664.30 | 1366.63 |
| double-groupby-5 | 1391.26 | 2141.71 |
| double-groupby-all | 2828.94 | 3389.59 |
| groupby-orderby-limit | 718.92 | 1213.90 |
| high-cpu-1 | 29.21 | 52.98 |
| high-cpu-all | 5514.12 | 7194.91 |
| lastpoint | 7571.40 | 9423.41 |
| single-groupby-1-1-1 | 19.09 | 7.77 |
| single-groupby-1-1-12 | 27.28 | 51.64 |
| single-groupby-1-8-1 | 31.85 | 11.64 |
| single-groupby-5-1-1 | 16.14 | 9.67 |
| single-groupby-5-1-12 | 27.21 | 53.62 |
| single-groupby-5-8-1 | 39.62 | 14.96 |

View File

@@ -0,0 +1,58 @@
# TSBS benchmark - v0.8.0
## Environment
### Local
| | |
| ------ | ---------------------------------- |
| CPU | AMD Ryzen 7 7735HS (8 core 3.2GHz) |
| Memory | 32GB |
| Disk | SOLIDIGM SSDPFKNU010TZ |
| OS | Ubuntu 22.04.2 LTS |
### Amazon EC2
| | |
| ------- | -------------- |
| Machine | c5d.2xlarge |
| CPU | 8 core |
| Memory | 16GB |
| Disk | 50GB (GP3) |
| OS | Ubuntu 22.04.1 |
## Write performance
| Environment | Ingest rate (rows/s) |
| --------------- | -------------------- |
| Local | 315369.66 |
| EC2 c5d.2xlarge | 222148.56 |
## Query performance
| Query type | Local (ms) | EC2 c5d.2xlarge (ms) |
| --------------------- | ---------- | -------------------- |
| cpu-max-all-1 | 24.63 | 15.29 |
| cpu-max-all-8 | 51.69 | 33.53 |
| double-groupby-1 | 673.51 | 1295.38 |
| double-groupby-5 | 1244.93 | 1993.91 |
| double-groupby-all | 2215.44 | 3056.77 |
| groupby-orderby-limit | 754.50 | 1546.49 |
| high-cpu-1 | 19.62 | 11.58 |
| high-cpu-all | 5402.31 | 8011.43 |
| lastpoint | 6756.12 | 9312.67 |
| single-groupby-1-1-1 | 15.70 | 7.67 |
| single-groupby-1-1-12 | 16.72 | 9.29 |
| single-groupby-1-8-1 | 26.72 | 17.97 |
| single-groupby-5-1-1 | 18.17 | 10.09 |
| single-groupby-5-1-12 | 20.04 | 12.37 |
| single-groupby-5-8-1 | 35.63 | 23.13 |
`single-groupby-1-1-1` query throughput
| Environment | Client concurrency | mean time (ms) | qps (queries/sec) |
| --------------- | ------------------ | -------------- | ----------------- |
| Local | 50 | 42.87 | 1165.73 |
| Local | 100 | 89.29 | 1119.38 |
| EC2 c5d.2xlarge | 50 | 69.25 | 721.73 |
| EC2 c5d.2xlarge | 100 | 140.93 | 709.35 |
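For context, the throughput rows above appear consistent with qps ≈ concurrency × 1000 / mean time (ms): for the first row, 50 × 1000 / 42.87 ≈ 1166, which matches the reported 1165.73 up to rounding.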

View File

@@ -1,2 +1,2 @@
[toolchain]
channel = "nightly-2024-04-18"
channel = "nightly-2024-04-20"

View File

@@ -30,6 +30,7 @@ pub enum Error {
#[snafu(display("Unknown proto column datatype: {}", datatype))]
UnknownColumnDataType {
datatype: i32,
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: prost::DecodeError,
@@ -38,12 +39,14 @@ pub enum Error {
#[snafu(display("Failed to create column datatype from {:?}", from))]
IntoColumnDataType {
from: ConcreteDataType,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to convert column default constraint, column: {}", column))]
ConvertColumnDefaultConstraint {
column: String,
#[snafu(implicit)]
location: Location,
source: datatypes::error::Error,
},
@@ -51,6 +54,7 @@ pub enum Error {
#[snafu(display("Invalid column default constraint, column: {}", column))]
InvalidColumnDefaultConstraint {
column: String,
#[snafu(implicit)]
location: Location,
source: datatypes::error::Error,
},
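This hunk (and the similar ones in the files below) adds `#[snafu(implicit)]` to every `Location` field, the migration required by the snafu 0.7 → 0.8 bump in the workspace manifest. A minimal standalone sketch of the pattern, using a plain `derive(Debug)` instead of the project's `#[stack_trace_debug]`:

```rust
use snafu::{Location, Snafu};

#[derive(Debug, Snafu)]
enum Error {
    #[snafu(display("Unknown proto column datatype"))]
    UnknownColumnDataType {
        // snafu 0.8 only auto-populates fields that are marked implicit.
        #[snafu(implicit)]
        location: Location,
    },
}

fn decode() -> Result<(), Error> {
    // The selector exposes no explicit fields, so `fail()` is enough and
    // `location` is captured at this call site.
    UnknownColumnDataTypeSnafu.fail()
}

fn main() {
    println!("{:?}", decode());
}
```

Where snafu 0.7 captured `location` automatically based on the field name, 0.8 requires the explicit attribute, so the call sites in these files stay unchanged.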

View File

@@ -480,6 +480,8 @@ fn ddl_request_type(request: &DdlRequest) -> &'static str {
Some(Expr::TruncateTable(_)) => "ddl.truncate_table",
Some(Expr::CreateFlow(_)) => "ddl.create_flow",
Some(Expr::DropFlow(_)) => "ddl.drop_flow",
Some(Expr::CreateView(_)) => "ddl.create_view",
Some(Expr::DropView(_)) => "ddl.drop_view",
None => "ddl.empty",
}
}

View File

@@ -34,11 +34,13 @@ pub enum Error {
Io {
#[snafu(source)]
error: std::io::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Auth failed"))]
AuthBackend {
#[snafu(implicit)]
location: Location,
source: BoxedError,
},
@@ -72,7 +74,10 @@ pub enum Error {
},
#[snafu(display("User is not authorized to perform this action"))]
PermissionDenied { location: Location },
PermissionDenied {
#[snafu(implicit)]
location: Location,
},
}
impl ErrorExt for Error {

src/cache/Cargo.toml (vendored, new file, 13 lines)
View File

@@ -0,0 +1,13 @@
[package]
name = "cache"
version.workspace = true
edition.workspace = true
license.workspace = true
[dependencies]
catalog.workspace = true
common-error.workspace = true
common-macro.workspace = true
common-meta.workspace = true
moka.workspace = true
snafu.workspace = true

src/cache/src/error.rs (vendored, new file, 44 lines)
View File

@@ -0,0 +1,44 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use snafu::{Location, Snafu};
#[derive(Snafu)]
#[snafu(visibility(pub))]
#[stack_trace_debug]
pub enum Error {
#[snafu(display("Failed to get cache from cache registry: {}", name))]
CacheRequired {
#[snafu(implicit)]
location: Location,
name: String,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::CacheRequired { .. } => StatusCode::Internal,
}
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}

src/cache/src/lib.rs (vendored, new file, 122 lines)
View File

@@ -0,0 +1,122 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod error;
use std::sync::Arc;
use std::time::Duration;
use catalog::kvbackend::new_table_cache;
use common_meta::cache::{
new_table_flownode_set_cache, new_table_info_cache, new_table_name_cache,
new_table_route_cache, CacheRegistry, CacheRegistryBuilder, LayeredCacheRegistryBuilder,
};
use common_meta::kv_backend::KvBackendRef;
use moka::future::CacheBuilder;
use snafu::OptionExt;
use crate::error::Result;
const DEFAULT_CACHE_MAX_CAPACITY: u64 = 65536;
const DEFAULT_CACHE_TTL: Duration = Duration::from_secs(10 * 60);
const DEFAULT_CACHE_TTI: Duration = Duration::from_secs(5 * 60);
pub const TABLE_INFO_CACHE_NAME: &str = "table_info_cache";
pub const TABLE_NAME_CACHE_NAME: &str = "table_name_cache";
pub const TABLE_CACHE_NAME: &str = "table_cache";
pub const TABLE_FLOWNODE_SET_CACHE_NAME: &str = "table_flownode_set_cache";
pub const TABLE_ROUTE_CACHE_NAME: &str = "table_route_cache";
pub fn build_fundamental_cache_registry(kv_backend: KvBackendRef) -> CacheRegistry {
// Builds table info cache
let cache = CacheBuilder::new(DEFAULT_CACHE_MAX_CAPACITY)
.time_to_live(DEFAULT_CACHE_TTL)
.time_to_idle(DEFAULT_CACHE_TTI)
.build();
let table_info_cache = Arc::new(new_table_info_cache(
TABLE_INFO_CACHE_NAME.to_string(),
cache,
kv_backend.clone(),
));
// Builds table name cache
let cache = CacheBuilder::new(DEFAULT_CACHE_MAX_CAPACITY)
.time_to_live(DEFAULT_CACHE_TTL)
.time_to_idle(DEFAULT_CACHE_TTI)
.build();
let table_name_cache = Arc::new(new_table_name_cache(
TABLE_NAME_CACHE_NAME.to_string(),
cache,
kv_backend.clone(),
));
// Builds table route cache
let cache = CacheBuilder::new(DEFAULT_CACHE_MAX_CAPACITY)
.time_to_live(DEFAULT_CACHE_TTL)
.time_to_idle(DEFAULT_CACHE_TTI)
.build();
let table_route_cache = Arc::new(new_table_route_cache(
TABLE_ROUTE_CACHE_NAME.to_string(),
cache,
kv_backend.clone(),
));
// Builds table flownode set cache
let cache = CacheBuilder::new(DEFAULT_CACHE_MAX_CAPACITY)
.time_to_live(DEFAULT_CACHE_TTL)
.time_to_idle(DEFAULT_CACHE_TTI)
.build();
let table_flownode_set_cache = Arc::new(new_table_flownode_set_cache(
TABLE_FLOWNODE_SET_CACHE_NAME.to_string(),
cache,
kv_backend.clone(),
));
CacheRegistryBuilder::default()
.add_cache(table_info_cache)
.add_cache(table_name_cache)
.add_cache(table_route_cache)
.add_cache(table_flownode_set_cache)
.build()
}
// TODO(weny): Make the cache configurable.
pub fn with_default_composite_cache_registry(
builder: LayeredCacheRegistryBuilder,
) -> Result<LayeredCacheRegistryBuilder> {
let table_info_cache = builder.get().context(error::CacheRequiredSnafu {
name: TABLE_INFO_CACHE_NAME,
})?;
let table_name_cache = builder.get().context(error::CacheRequiredSnafu {
name: TABLE_NAME_CACHE_NAME,
})?;
// Builds table cache
let cache = CacheBuilder::new(DEFAULT_CACHE_MAX_CAPACITY)
.time_to_live(DEFAULT_CACHE_TTL)
.time_to_idle(DEFAULT_CACHE_TTI)
.build();
let table_cache = Arc::new(new_table_cache(
TABLE_CACHE_NAME.to_string(),
cache,
table_info_cache,
table_name_cache,
));
let registry = CacheRegistryBuilder::default()
.add_cache(table_cache)
.build();
Ok(builder.add_cache_registry(registry))
}
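A hedged sketch of how the two helpers in this new crate might be wired together. Only `get` and `add_cache_registry` on `LayeredCacheRegistryBuilder` appear in the diff; the `default()` constructor used below is an assumption:

```rust
use cache::{build_fundamental_cache_registry, with_default_composite_cache_registry};
use common_meta::cache::LayeredCacheRegistryBuilder;
use common_meta::kv_backend::KvBackendRef;

fn build_cache_layers(kv_backend: KvBackendRef) -> cache::error::Result<LayeredCacheRegistryBuilder> {
    // Table info/name/route/flownode caches, each backed by the kv store.
    let fundamental = build_fundamental_cache_registry(kv_backend);
    // Assumption: the layered builder can be created via `default()`; only
    // `get` and `add_cache_registry` are visible in the code above.
    let builder = LayeredCacheRegistryBuilder::default().add_cache_registry(fundamental);
    // Layers the composite table cache on top of the fundamental ones.
    with_default_composite_cache_registry(builder)
}
```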

View File

@@ -30,12 +30,14 @@ use tokio::task::JoinError;
pub enum Error {
#[snafu(display("Failed to list catalogs"))]
ListCatalogs {
#[snafu(implicit)]
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to list {}'s schemas", catalog))]
ListSchemas {
#[snafu(implicit)]
location: Location,
catalog: String,
source: BoxedError,
@@ -43,6 +45,7 @@ pub enum Error {
#[snafu(display("Failed to list {}.{}'s tables", catalog, schema))]
ListTables {
#[snafu(implicit)]
location: Location,
catalog: String,
schema: String,
@@ -51,23 +54,27 @@ pub enum Error {
#[snafu(display("Failed to list nodes in cluster: {source}"))]
ListNodes {
#[snafu(implicit)]
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to re-compile script due to internal error"))]
CompileScriptInternal {
#[snafu(implicit)]
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to open system catalog table"))]
OpenSystemCatalog {
#[snafu(implicit)]
location: Location,
source: table::error::Error,
},
#[snafu(display("Failed to create system catalog table"))]
CreateSystemCatalog {
#[snafu(implicit)]
location: Location,
source: table::error::Error,
},
@@ -75,12 +82,17 @@ pub enum Error {
#[snafu(display("Failed to create table, table info: {}", table_info))]
CreateTable {
table_info: String,
#[snafu(implicit)]
location: Location,
source: table::error::Error,
},
#[snafu(display("System catalog is not valid: {}", msg))]
SystemCatalog { msg: String, location: Location },
SystemCatalog {
msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display(
"System catalog table type mismatch, expected: binary, found: {:?}",
@@ -88,34 +100,42 @@ pub enum Error {
))]
SystemCatalogTypeMismatch {
data_type: ConcreteDataType,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid system catalog entry type: {:?}", entry_type))]
InvalidEntryType {
entry_type: Option<u8>,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid system catalog key: {:?}", key))]
InvalidKey {
key: Option<String>,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Catalog value is not present"))]
EmptyValue { location: Location },
EmptyValue {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to deserialize value"))]
ValueDeserialize {
#[snafu(source)]
error: serde_json::error::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Table engine not found: {}", engine_name))]
TableEngineNotFound {
engine_name: String,
#[snafu(implicit)]
location: Location,
source: table::error::Error,
},
@@ -123,6 +143,7 @@ pub enum Error {
#[snafu(display("Cannot find catalog by name: {}", catalog_name))]
CatalogNotFound {
catalog_name: String,
#[snafu(implicit)]
location: Location,
},
@@ -130,30 +151,49 @@ pub enum Error {
SchemaNotFound {
catalog: String,
schema: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Table `{}` already exists", table))]
TableExists { table: String, location: Location },
TableExists {
table: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Table not found: {}", table))]
TableNotExist { table: String, location: Location },
TableNotExist {
table: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Schema {} already exists", schema))]
SchemaExists { schema: String, location: Location },
SchemaExists {
schema: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Operation {} not implemented yet", operation))]
Unimplemented {
operation: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Operation {} not supported", op))]
NotSupported { op: String, location: Location },
NotSupported {
op: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to open table {table_id}"))]
OpenTable {
table_id: TableId,
#[snafu(implicit)]
location: Location,
source: table::error::Error,
},
@@ -167,6 +207,7 @@ pub enum Error {
#[snafu(display("Table not found while opening table, table info: {}", table_info))]
TableNotFound {
table_info: String,
#[snafu(implicit)]
location: Location,
},
@@ -178,57 +219,69 @@ pub enum Error {
#[snafu(display("Failed to read system catalog table records"))]
ReadSystemCatalog {
#[snafu(implicit)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to create recordbatch"))]
CreateRecordBatch {
#[snafu(implicit)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to insert table creation record to system catalog"))]
InsertCatalogRecord {
#[snafu(implicit)]
location: Location,
source: table::error::Error,
},
#[snafu(display("Failed to scan system catalog table"))]
SystemCatalogTableScan {
#[snafu(implicit)]
location: Location,
source: table::error::Error,
},
#[snafu(display("Internal error"))]
Internal {
#[snafu(implicit)]
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to upgrade weak catalog manager reference"))]
UpgradeWeakCatalogManagerRef { location: Location },
UpgradeWeakCatalogManagerRef {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to execute system catalog table scan"))]
SystemCatalogTableScanExec {
#[snafu(implicit)]
location: Location,
source: common_query::error::Error,
},
#[snafu(display("Cannot parse catalog value"))]
InvalidCatalogValue {
#[snafu(implicit)]
location: Location,
source: common_catalog::error::Error,
},
#[snafu(display("Failed to perform metasrv operation"))]
Metasrv {
#[snafu(implicit)]
location: Location,
source: meta_client::error::Error,
},
#[snafu(display("Invalid table info in catalog"))]
InvalidTableInfoInCatalog {
#[snafu(implicit)]
location: Location,
source: datatypes::error::Error,
},
@@ -240,29 +293,37 @@ pub enum Error {
Datafusion {
#[snafu(source)]
error: DataFusionError,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Table schema mismatch"))]
TableSchemaMismatch {
#[snafu(implicit)]
location: Location,
source: table::error::Error,
},
#[snafu(display("A generic error has occurred, msg: {}", msg))]
Generic { msg: String, location: Location },
Generic {
msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Table metadata manager error"))]
TableMetadataManager {
source: common_meta::error::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Get null from table cache, key: {}", key))]
TableCacheNotGet { key: String, location: Location },
#[snafu(display("Failed to get table cache, err: {}", err_msg))]
GetTableCache { err_msg: String },
#[snafu(display("Failed to get table cache"))]
GetTableCache {
source: common_meta::error::Error,
#[snafu(implicit)]
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -324,7 +385,7 @@ impl ErrorExt for Error {
Error::QueryAccessDenied { .. } => StatusCode::AccessDenied,
Error::Datafusion { .. } => StatusCode::EngineExecuteQuery,
Error::TableMetadataManager { source, .. } => source.status_code(),
Error::TableCacheNotGet { .. } | Error::GetTableCache { .. } => StatusCode::Internal,
Error::GetTableCache { .. } => StatusCode::Internal,
}
}

View File

@@ -21,11 +21,11 @@ use common_config::Mode;
use common_error::ext::BoxedError;
use common_meta::cluster::{ClusterInfo, NodeInfo, NodeStatus};
use common_meta::peer::Peer;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_telemetry::warn;
use common_time::timestamp::Timestamp;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;

View File

@@ -20,9 +20,9 @@ use common_catalog::consts::{
SEMANTIC_TYPE_TIME_INDEX,
};
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;

View File

@@ -17,9 +17,9 @@ use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_KEY_COLUMN_USAGE_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;

View File

@@ -17,9 +17,9 @@ use std::sync::Arc;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;

View File

@@ -18,10 +18,10 @@ use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_PARTITIONS_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_time::datetime::DateTime;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;

View File

@@ -14,10 +14,9 @@
use arrow::array::StringArray;
use arrow::compute::kernels::comparison;
use common_query::logical_plan::DfExpr;
use datafusion::common::ScalarValue;
use datafusion::logical_expr::expr::Like;
use datafusion::logical_expr::Operator;
use datafusion::logical_expr::{Expr, Operator};
use datatypes::value::Value;
use store_api::storage::ScanRequest;
@@ -118,12 +117,12 @@ impl Predicate {
}
/// Try to create a predicate from datafusion [`Expr`], return None if fails.
fn from_expr(expr: DfExpr) -> Option<Predicate> {
fn from_expr(expr: Expr) -> Option<Predicate> {
match expr {
// NOT expr
DfExpr::Not(expr) => Some(Predicate::Not(Box::new(Self::from_expr(*expr)?))),
Expr::Not(expr) => Some(Predicate::Not(Box::new(Self::from_expr(*expr)?))),
// expr LIKE pattern
DfExpr::Like(Like {
Expr::Like(Like {
negated,
expr,
pattern,
@@ -131,10 +130,10 @@ impl Predicate {
..
}) if is_column(&expr) && is_string_literal(&pattern) => {
// Safety: ensured by gurad
let DfExpr::Column(c) = *expr else {
let Expr::Column(c) = *expr else {
unreachable!();
};
let DfExpr::Literal(ScalarValue::Utf8(Some(pattern))) = *pattern else {
let Expr::Literal(ScalarValue::Utf8(Some(pattern))) = *pattern else {
unreachable!();
};
@@ -147,10 +146,10 @@ impl Predicate {
}
}
// left OP right
DfExpr::BinaryExpr(bin) => match (*bin.left, bin.op, *bin.right) {
Expr::BinaryExpr(bin) => match (*bin.left, bin.op, *bin.right) {
// left == right
(DfExpr::Literal(scalar), Operator::Eq, DfExpr::Column(c))
| (DfExpr::Column(c), Operator::Eq, DfExpr::Literal(scalar)) => {
(Expr::Literal(scalar), Operator::Eq, Expr::Column(c))
| (Expr::Column(c), Operator::Eq, Expr::Literal(scalar)) => {
let Ok(v) = Value::try_from(scalar) else {
return None;
};
@@ -158,8 +157,8 @@ impl Predicate {
Some(Predicate::Eq(c.name, v))
}
// left != right
(DfExpr::Literal(scalar), Operator::NotEq, DfExpr::Column(c))
| (DfExpr::Column(c), Operator::NotEq, DfExpr::Literal(scalar)) => {
(Expr::Literal(scalar), Operator::NotEq, Expr::Column(c))
| (Expr::Column(c), Operator::NotEq, Expr::Literal(scalar)) => {
let Ok(v) = Value::try_from(scalar) else {
return None;
};
@@ -183,14 +182,14 @@ impl Predicate {
_ => None,
},
// [NOT] IN (LIST)
DfExpr::InList(list) => {
Expr::InList(list) => {
match (*list.expr, list.list, list.negated) {
// column [NOT] IN (v1, v2, v3, ...)
(DfExpr::Column(c), list, negated) if is_all_scalars(&list) => {
(Expr::Column(c), list, negated) if is_all_scalars(&list) => {
let mut values = Vec::with_capacity(list.len());
for scalar in list {
// Safety: checked by `is_all_scalars`
let DfExpr::Literal(scalar) = scalar else {
let Expr::Literal(scalar) = scalar else {
unreachable!();
};
@@ -237,12 +236,12 @@ fn like_utf8(s: &str, pattern: &str, case_insensitive: &bool) -> Option<bool> {
Some(booleans.value(0))
}
fn is_string_literal(expr: &DfExpr) -> bool {
matches!(expr, DfExpr::Literal(ScalarValue::Utf8(Some(_))))
fn is_string_literal(expr: &Expr) -> bool {
matches!(expr, Expr::Literal(ScalarValue::Utf8(Some(_))))
}
fn is_column(expr: &DfExpr) -> bool {
matches!(expr, DfExpr::Column(_))
fn is_column(expr: &Expr) -> bool {
matches!(expr, Expr::Column(_))
}
/// A list of predicate
@@ -257,7 +256,7 @@ impl Predicates {
let mut predicates = Vec::with_capacity(request.filters.len());
for filter in &request.filters {
if let Some(predicate) = Predicate::from_expr(filter.df_expr().clone()) {
if let Some(predicate) = Predicate::from_expr(filter.clone()) {
predicates.push(predicate);
}
}
@@ -286,8 +285,8 @@ impl Predicates {
}
/// Returns true when the values are all [`DfExpr::Literal`].
fn is_all_scalars(list: &[DfExpr]) -> bool {
list.iter().all(|v| matches!(v, DfExpr::Literal(_)))
fn is_all_scalars(list: &[Expr]) -> bool {
list.iter().all(|v| matches!(v, Expr::Literal(_)))
}
#[cfg(test)]
@@ -376,7 +375,7 @@ mod tests {
#[test]
fn test_predicate_like() {
// case insensitive
let expr = DfExpr::Like(Like {
let expr = Expr::Like(Like {
negated: false,
expr: Box::new(column("a")),
pattern: Box::new(string_literal("%abc")),
@@ -403,7 +402,7 @@ mod tests {
assert!(p.eval(&[]).is_none());
// case sensitive
let expr = DfExpr::Like(Like {
let expr = Expr::Like(Like {
negated: false,
expr: Box::new(column("a")),
pattern: Box::new(string_literal("%abc")),
@@ -423,7 +422,7 @@ mod tests {
assert!(p.eval(&[]).is_none());
// not like
let expr = DfExpr::Like(Like {
let expr = Expr::Like(Like {
negated: true,
expr: Box::new(column("a")),
pattern: Box::new(string_literal("%abc")),
@@ -437,15 +436,15 @@ mod tests {
assert!(p.eval(&[]).is_none());
}
fn column(name: &str) -> DfExpr {
DfExpr::Column(Column {
fn column(name: &str) -> Expr {
Expr::Column(Column {
relation: None,
name: name.to_string(),
})
}
fn string_literal(v: &str) -> DfExpr {
DfExpr::Literal(ScalarValue::Utf8(Some(v.to_string())))
fn string_literal(v: &str) -> Expr {
Expr::Literal(ScalarValue::Utf8(Some(v.to_string())))
}
fn match_string_value(v: &Value, expected: &str) -> bool {
@@ -463,14 +462,14 @@ mod tests {
result
}
fn mock_exprs() -> (DfExpr, DfExpr) {
let expr1 = DfExpr::BinaryExpr(BinaryExpr {
fn mock_exprs() -> (Expr, Expr) {
let expr1 = Expr::BinaryExpr(BinaryExpr {
left: Box::new(column("a")),
op: Operator::Eq,
right: Box::new(string_literal("a_value")),
});
let expr2 = DfExpr::BinaryExpr(BinaryExpr {
let expr2 = Expr::BinaryExpr(BinaryExpr {
left: Box::new(column("b")),
op: Operator::NotEq,
right: Box::new(string_literal("b_value")),
@@ -491,17 +490,17 @@ mod tests {
assert!(matches!(&p2, Predicate::NotEq(column, v) if column == "b"
&& match_string_value(v, "b_value")));
let and_expr = DfExpr::BinaryExpr(BinaryExpr {
let and_expr = Expr::BinaryExpr(BinaryExpr {
left: Box::new(expr1.clone()),
op: Operator::And,
right: Box::new(expr2.clone()),
});
let or_expr = DfExpr::BinaryExpr(BinaryExpr {
let or_expr = Expr::BinaryExpr(BinaryExpr {
left: Box::new(expr1.clone()),
op: Operator::Or,
right: Box::new(expr2.clone()),
});
let not_expr = DfExpr::Not(Box::new(expr1.clone()));
let not_expr = Expr::Not(Box::new(expr1.clone()));
let and_p = Predicate::from_expr(and_expr).unwrap();
assert!(matches!(and_p, Predicate::And(left, right) if *left == p1 && *right == p2));
@@ -510,7 +509,7 @@ mod tests {
let not_p = Predicate::from_expr(not_expr).unwrap();
assert!(matches!(not_p, Predicate::Not(p) if *p == p1));
let inlist_expr = DfExpr::InList(InList {
let inlist_expr = Expr::InList(InList {
expr: Box::new(column("a")),
list: vec![string_literal("a1"), string_literal("a2")],
negated: false,
@@ -520,7 +519,7 @@ mod tests {
assert!(matches!(&inlist_p, Predicate::InList(c, values) if c == "a"
&& match_string_values(values, &["a1", "a2"])));
let inlist_expr = DfExpr::InList(InList {
let inlist_expr = Expr::InList(InList {
expr: Box::new(column("a")),
list: vec![string_literal("a1"), string_literal("a2")],
negated: true,
@@ -540,7 +539,7 @@ mod tests {
let (expr1, expr2) = mock_exprs();
let request = ScanRequest {
filters: vec![expr1.into(), expr2.into()],
filters: vec![expr1, expr2],
..Default::default()
};
let predicates = Predicates::from_scan_request(&Some(request));
@@ -578,7 +577,7 @@ mod tests {
let (expr1, expr2) = mock_exprs();
let request = ScanRequest {
filters: vec![expr1.into(), expr2.into()],
filters: vec![expr1, expr2],
..Default::default()
};
let predicates = Predicates::from_scan_request(&Some(request));

View File

@@ -19,9 +19,9 @@ use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_REGION_PEERS_TABLE_ID;
use common_error::ext::BoxedError;
use common_meta::rpc::router::RegionRoute;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;


@@ -17,10 +17,10 @@ use std::sync::Arc;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_RUNTIME_METRICS_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_time::util::current_time_millis;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;


@@ -17,9 +17,9 @@ use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_SCHEMATA_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;


@@ -17,9 +17,9 @@ use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_TABLE_CONSTRAINTS_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;


@@ -17,9 +17,9 @@ use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_TABLES_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;


@@ -16,5 +16,7 @@ pub use client::{CachedMetaKvBackend, CachedMetaKvBackendBuilder, MetaKvBackend}
mod client;
mod manager;
mod table_cache;
pub use manager::KvBackendCatalogManager;
pub use table_cache::{new_table_cache, TableCache, TableCacheRef};


@@ -350,6 +350,13 @@ pub struct MetaKvBackend {
pub client: Arc<MetaClient>,
}
impl MetaKvBackend {
/// Constructs a [MetaKvBackend].
pub fn new(client: Arc<MetaClient>) -> MetaKvBackend {
MetaKvBackend { client }
}
}
impl TxnService for MetaKvBackend {
type Error = Error;
}
@@ -450,9 +457,8 @@ mod tests {
use common_meta::kv_backend::{KvBackend, TxnService};
use common_meta::rpc::store::{
BatchDeleteRequest, BatchDeleteResponse, BatchGetRequest, BatchGetResponse,
BatchPutRequest, BatchPutResponse, CompareAndPutRequest, CompareAndPutResponse,
DeleteRangeRequest, DeleteRangeResponse, PutRequest, PutResponse, RangeRequest,
RangeResponse,
BatchPutRequest, BatchPutResponse, DeleteRangeRequest, DeleteRangeResponse, PutRequest,
PutResponse, RangeRequest, RangeResponse,
};
use common_meta::rpc::KeyValue;
use dashmap::DashMap;
@@ -512,13 +518,6 @@ mod tests {
unimplemented!()
}
async fn compare_and_put(
&self,
_req: CompareAndPutRequest,
) -> Result<CompareAndPutResponse, Self::Error> {
unimplemented!()
}
async fn delete_range(
&self,
_req: DeleteRangeRequest,


@@ -15,27 +15,24 @@
use std::any::Any;
use std::collections::BTreeSet;
use std::sync::{Arc, Weak};
use std::time::Duration;
use async_stream::try_stream;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, NUMBERS_TABLE_ID,
};
use common_catalog::format_full_table_name;
use common_config::Mode;
use common_error::ext::BoxedError;
use common_meta::cache_invalidator::{CacheInvalidator, Context, MultiCacheInvalidator};
use common_meta::instruction::CacheIdent;
use common_meta::cache::TableRouteCacheRef;
use common_meta::key::catalog_name::CatalogNameKey;
use common_meta::key::schema_name::SchemaNameKey;
use common_meta::key::table_info::TableInfoValue;
use common_meta::key::table_name::TableNameKey;
use common_meta::key::{TableMetadataManager, TableMetadataManagerRef};
use common_meta::kv_backend::KvBackendRef;
use common_meta::table_name::TableName;
use futures_util::stream::BoxStream;
use futures_util::{StreamExt, TryStreamExt};
use meta_client::client::MetaClient;
use moka::future::{Cache as AsyncCache, CacheBuilder};
use moka::sync::Cache;
use partition::manager::{PartitionRuleManager, PartitionRuleManagerRef};
use snafu::prelude::*;
@@ -43,12 +40,12 @@ use table::dist_table::DistTable;
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
use table::TableRef;
use crate::error::Error::{GetTableCache, TableCacheNotGet};
use crate::error::{
InvalidTableInfoInCatalogSnafu, ListCatalogsSnafu, ListSchemasSnafu, ListTablesSnafu, Result,
TableCacheNotGetSnafu, TableMetadataManagerSnafu,
GetTableCacheSnafu, InvalidTableInfoInCatalogSnafu, ListCatalogsSnafu, ListSchemasSnafu,
ListTablesSnafu, Result, TableMetadataManagerSnafu,
};
use crate::information_schema::InformationSchemaProvider;
use crate::kvbackend::TableCacheRef;
use crate::CatalogManager;
/// Access all existing catalog, schema and tables.
@@ -64,64 +61,26 @@ pub struct KvBackendCatalogManager {
table_metadata_manager: TableMetadataManagerRef,
/// A sub-CatalogManager that handles system tables
system_catalog: SystemCatalog,
table_cache: AsyncCache<String, TableRef>,
}
struct TableCacheInvalidator {
table_cache: AsyncCache<String, TableRef>,
}
impl TableCacheInvalidator {
pub fn new(table_cache: AsyncCache<String, TableRef>) -> Self {
Self { table_cache }
}
}
#[async_trait::async_trait]
impl CacheInvalidator for TableCacheInvalidator {
async fn invalidate(
&self,
_ctx: &Context,
caches: Vec<CacheIdent>,
) -> common_meta::error::Result<()> {
for cache in caches {
if let CacheIdent::TableName(table_name) = cache {
let table_cache_key = format_full_table_name(
&table_name.catalog_name,
&table_name.schema_name,
&table_name.table_name,
);
self.table_cache.invalidate(&table_cache_key).await;
}
}
Ok(())
}
table_cache: TableCacheRef,
}
const CATALOG_CACHE_MAX_CAPACITY: u64 = 128;
const TABLE_CACHE_MAX_CAPACITY: u64 = 65536;
const TABLE_CACHE_TTL: Duration = Duration::from_secs(10 * 60);
const TABLE_CACHE_TTI: Duration = Duration::from_secs(5 * 60);
impl KvBackendCatalogManager {
pub async fn new(
mode: Mode,
meta_client: Option<Arc<MetaClient>>,
backend: KvBackendRef,
multi_cache_invalidator: Arc<MultiCacheInvalidator>,
table_cache: TableCacheRef,
table_route_cache: TableRouteCacheRef,
) -> Arc<Self> {
let table_cache: AsyncCache<String, TableRef> = CacheBuilder::new(TABLE_CACHE_MAX_CAPACITY)
.time_to_live(TABLE_CACHE_TTL)
.time_to_idle(TABLE_CACHE_TTI)
.build();
multi_cache_invalidator
.add_invalidator(Arc::new(TableCacheInvalidator::new(table_cache.clone())))
.await;
Arc::new_cyclic(|me| Self {
mode,
meta_client,
partition_manager: Arc::new(PartitionRuleManager::new(backend.clone())),
partition_manager: Arc::new(PartitionRuleManager::new(
backend.clone(),
table_route_cache,
)),
table_metadata_manager: Arc::new(TableMetadataManager::new(backend)),
system_catalog: SystemCatalog {
catalog_manager: me.clone(),
@@ -218,7 +177,7 @@ impl CatalogManager for KvBackendCatalogManager {
}
async fn schema_exists(&self, catalog: &str, schema: &str) -> Result<bool> {
if self.system_catalog.schema_exist(schema) {
if self.system_catalog.schema_exists(schema) {
return Ok(true);
}
@@ -230,7 +189,7 @@ impl CatalogManager for KvBackendCatalogManager {
}
async fn table_exists(&self, catalog: &str, schema: &str, table: &str) -> Result<bool> {
if self.system_catalog.table_exist(schema, table) {
if self.system_catalog.table_exists(schema, table) {
return Ok(true);
}
@@ -245,60 +204,25 @@ impl CatalogManager for KvBackendCatalogManager {
async fn table(
&self,
catalog: &str,
schema: &str,
catalog_name: &str,
schema_name: &str,
table_name: &str,
) -> Result<Option<TableRef>> {
if let Some(table) = self.system_catalog.table(catalog, schema, table_name) {
if let Some(table) = self
.system_catalog
.table(catalog_name, schema_name, table_name)
{
return Ok(Some(table));
}
let init = async {
let table_name_key = TableNameKey::new(catalog, schema, table_name);
let Some(table_name_value) = self
.table_metadata_manager
.table_name_manager()
.get(table_name_key)
.await
.context(TableMetadataManagerSnafu)?
else {
return TableCacheNotGetSnafu {
key: table_name_key.to_string(),
}
.fail();
};
let table_id = table_name_value.table_id();
let Some(table_info_value) = self
.table_metadata_manager
.table_info_manager()
.get(table_id)
.await
.context(TableMetadataManagerSnafu)?
.map(|v| v.into_inner())
else {
return TableCacheNotGetSnafu {
key: table_name_key.to_string(),
}
.fail();
};
build_table(table_info_value)
};
match self
.table_cache
.try_get_with_by_ref(&format_full_table_name(catalog, schema, table_name), init)
self.table_cache
.get_by_ref(&TableName {
catalog_name: catalog_name.to_string(),
schema_name: schema_name.to_string(),
table_name: table_name.to_string(),
})
.await
{
Ok(table) => Ok(Some(table)),
Err(err) => match err.as_ref() {
TableCacheNotGet { .. } => Ok(None),
_ => Err(err),
},
}
.map_err(|err| GetTableCache {
err_msg: err.to_string(),
})
.context(GetTableCacheSnafu)
}
fn tables<'a>(&'a self, catalog: &'a str, schema: &'a str) -> BoxStream<'a, Result<TableRef>> {
@@ -382,11 +306,11 @@ impl SystemCatalog {
}
}
fn schema_exist(&self, schema: &str) -> bool {
fn schema_exists(&self, schema: &str) -> bool {
schema == INFORMATION_SCHEMA_NAME
}
fn table_exist(&self, schema: &str, table: &str) -> bool {
fn table_exists(&self, schema: &str, table: &str) -> bool {
if schema == INFORMATION_SCHEMA_NAME {
self.information_schema_provider.table(table).is_some()
} else if schema == DEFAULT_SCHEMA_NAME {


@@ -0,0 +1,80 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use common_meta::cache::{CacheContainer, Initializer, TableInfoCacheRef, TableNameCacheRef};
use common_meta::error::{Result as MetaResult, ValueNotExistSnafu};
use common_meta::instruction::CacheIdent;
use common_meta::table_name::TableName;
use futures::future::BoxFuture;
use moka::future::Cache;
use snafu::OptionExt;
use table::dist_table::DistTable;
use table::TableRef;
pub type TableCacheRef = Arc<TableCache>;
/// [TableCache] caches the [TableName] to [TableRef] mapping.
pub type TableCache = CacheContainer<TableName, TableRef, CacheIdent>;
/// Constructs a [TableCache].
pub fn new_table_cache(
name: String,
cache: Cache<TableName, TableRef>,
table_info_cache: TableInfoCacheRef,
table_name_cache: TableNameCacheRef,
) -> TableCache {
let init = init_factory(table_info_cache, table_name_cache);
CacheContainer::new(name, cache, Box::new(invalidator), init, Box::new(filter))
}
fn init_factory(
table_info_cache: TableInfoCacheRef,
table_name_cache: TableNameCacheRef,
) -> Initializer<TableName, TableRef> {
Arc::new(move |table_name| {
let table_info_cache = table_info_cache.clone();
let table_name_cache = table_name_cache.clone();
Box::pin(async move {
let table_id = table_name_cache
.get_by_ref(table_name)
.await?
.context(ValueNotExistSnafu)?;
let table_info = table_info_cache
.get_by_ref(&table_id)
.await?
.context(ValueNotExistSnafu)?;
Ok(Some(DistTable::table(table_info)))
})
})
}
fn invalidator<'a>(
cache: &'a Cache<TableName, TableRef>,
ident: &'a CacheIdent,
) -> BoxFuture<'a, MetaResult<()>> {
Box::pin(async move {
if let CacheIdent::TableName(table_name) = ident {
cache.invalidate(table_name).await
}
Ok(())
})
}
fn filter(ident: &CacheIdent) -> bool {
matches!(ident, CacheIdent::TableName(_))
}
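
The new TableCache is a CacheContainer keyed by TableName, and misses are resolved through the lower-level table-name and table-info caches. A minimal usage sketch, assuming the two lower-level cache handles come from the caller (in the frontend they are taken from the layered cache registry); the catalog, schema, and table names are placeholders:

    use std::sync::Arc;
    use moka::future::CacheBuilder;

    async fn lookup_example(
        table_info_cache: TableInfoCacheRef,
        table_name_cache: TableNameCacheRef,
    ) -> MetaResult<Option<TableRef>> {
        // Build the cache; capacity and TTL tuning is left to the caller.
        let table_cache: TableCacheRef = Arc::new(new_table_cache(
            "table_cache".to_string(),
            CacheBuilder::new(65536).build(),
            table_info_cache,
            table_name_cache,
        ));

        // A miss runs the initializer above: TableName -> table id -> table info -> DistTable.
        table_cache
            .get_by_ref(&TableName {
                catalog_name: "greptime".to_string(),
                schema_name: "public".to_string(),
                table_name: "monitor".to_string(),
            })
            .await
    }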


@@ -19,9 +19,10 @@ use api::v1::prometheus_gateway_client::PrometheusGatewayClient;
use api::v1::region::region_client::RegionClient as PbRegionClient;
use api::v1::HealthCheckRequest;
use arrow_flight::flight_service_client::FlightServiceClient;
use common_grpc::channel_manager::ChannelManager;
use common_grpc::channel_manager::{ChannelConfig, ChannelManager, ClientTlsOption};
use parking_lot::RwLock;
use snafu::{OptionExt, ResultExt};
use tonic::codec::CompressionEncoding;
use tonic::transport::Channel;
use crate::load_balance::{LoadBalance, Loadbalancer};
@@ -86,6 +87,17 @@ impl Client {
Self::with_manager_and_urls(ChannelManager::new(), urls)
}
pub fn with_tls_and_urls<U, A>(urls: A, client_tls: ClientTlsOption) -> Result<Self>
where
U: AsRef<str>,
A: AsRef<[U]>,
{
let channel_config = ChannelConfig::default().client_tls_config(client_tls);
let channel_manager = ChannelManager::with_tls_config(channel_config)
.context(error::CreateTlsChannelSnafu)?;
Ok(Self::with_manager_and_urls(channel_manager, urls))
}
pub fn with_manager_and_urls<U, A>(channel_manager: ChannelManager, urls: A) -> Self
where
U: AsRef<str>,
@@ -151,24 +163,34 @@ impl Client {
pub fn make_flight_client(&self) -> Result<FlightClient> {
let (addr, channel) = self.find_channel()?;
Ok(FlightClient {
addr,
client: FlightServiceClient::new(channel)
.max_decoding_message_size(self.max_grpc_recv_message_size())
.max_encoding_message_size(self.max_grpc_send_message_size()),
})
let client = FlightServiceClient::new(channel)
.max_decoding_message_size(self.max_grpc_recv_message_size())
.max_encoding_message_size(self.max_grpc_send_message_size())
.accept_compressed(CompressionEncoding::Zstd)
.send_compressed(CompressionEncoding::Zstd);
Ok(FlightClient { addr, client })
}
pub(crate) fn raw_region_client(&self) -> Result<PbRegionClient<Channel>> {
let (_, channel) = self.find_channel()?;
Ok(PbRegionClient::new(channel)
let client = PbRegionClient::new(channel)
.max_decoding_message_size(self.max_grpc_recv_message_size())
.max_encoding_message_size(self.max_grpc_send_message_size()))
.max_encoding_message_size(self.max_grpc_send_message_size())
.accept_compressed(CompressionEncoding::Zstd)
.send_compressed(CompressionEncoding::Zstd);
Ok(client)
}
pub fn make_prometheus_gateway_client(&self) -> Result<PrometheusGatewayClient<Channel>> {
let (_, channel) = self.find_channel()?;
Ok(PrometheusGatewayClient::new(channel))
let client = PrometheusGatewayClient::new(channel)
.accept_compressed(CompressionEncoding::Gzip)
.accept_compressed(CompressionEncoding::Zstd)
.send_compressed(CompressionEncoding::Gzip)
.send_compressed(CompressionEncoding::Zstd);
Ok(client)
}
pub async fn health_check(&self) -> Result<()> {
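
A short sketch of the new TLS construction path, combined with the compression defaults set above. The ClientTlsOption is assumed to be fully configured elsewhere (its fields are not shown in this diff), and the address is a placeholder:

    // Build a client whose channels use TLS; Flight and region clients
    // created from it negotiate zstd compression in both directions.
    fn connect_with_tls(client_tls: ClientTlsOption) -> Result<Client> {
        let client = Client::with_tls_and_urls(["127.0.0.1:4001"], client_tls)?;
        Ok(client)
    }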


@@ -18,7 +18,7 @@ use common_error::ext::{BoxedError, ErrorExt};
use common_error::status_code::StatusCode;
use common_error::{GREPTIME_DB_HEADER_ERROR_CODE, GREPTIME_DB_HEADER_ERROR_MSG};
use common_macro::stack_trace_debug;
use snafu::{Location, Snafu};
use snafu::{location, Location, Snafu};
use tonic::{Code, Status};
#[derive(Snafu)]
@@ -26,7 +26,11 @@ use tonic::{Code, Status};
#[stack_trace_debug]
pub enum Error {
#[snafu(display("Illegal Flight messages, reason: {}", reason))]
IllegalFlightMessages { reason: String, location: Location },
IllegalFlightMessages {
reason: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to do Flight get, code: {}", tonic_code))]
FlightGet {
@@ -37,47 +41,84 @@ pub enum Error {
#[snafu(display("Failure occurs during handling request"))]
HandleRequest {
#[snafu(implicit)]
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to convert FlightData"))]
ConvertFlightData {
#[snafu(implicit)]
location: Location,
source: common_grpc::Error,
},
#[snafu(display("Column datatype error"))]
ColumnDataType {
#[snafu(implicit)]
location: Location,
source: api::error::Error,
},
#[snafu(display("Illegal GRPC client state: {}", err_msg))]
IllegalGrpcClientState { err_msg: String, location: Location },
IllegalGrpcClientState {
err_msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, location: Location },
MissingField {
field: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to create gRPC channel, peer address: {}", addr))]
CreateChannel {
addr: String,
#[snafu(implicit)]
location: Location,
source: common_grpc::error::Error,
},
#[snafu(display("Failed to create Tls channel manager"))]
CreateTlsChannel {
#[snafu(implicit)]
location: Location,
source: common_grpc::error::Error,
},
#[snafu(display("Failed to request RegionServer, code: {}", code))]
RegionServer { code: Code, source: BoxedError },
RegionServer {
code: Code,
source: BoxedError,
#[snafu(implicit)]
location: Location,
},
// Server error carried in Tonic Status's metadata.
#[snafu(display("{}", msg))]
Server { code: StatusCode, msg: String },
Server {
code: StatusCode,
msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Illegal Database response: {err_msg}"))]
IllegalDatabaseResponse { err_msg: String },
IllegalDatabaseResponse {
err_msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to send request with streaming: {}", err_msg))]
ClientStreaming { err_msg: String, location: Location },
ClientStreaming {
err_msg: String,
#[snafu(implicit)]
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -95,9 +136,9 @@ impl ErrorExt for Error {
Error::FlightGet { source, .. }
| Error::HandleRequest { source, .. }
| Error::RegionServer { source, .. } => source.status_code(),
Error::CreateChannel { source, .. } | Error::ConvertFlightData { source, .. } => {
source.status_code()
}
Error::CreateChannel { source, .. }
| Error::ConvertFlightData { source, .. }
| Error::CreateTlsChannel { source, .. } => source.status_code(),
Error::IllegalGrpcClientState { .. } => StatusCode::Unexpected,
}
}
@@ -128,7 +169,11 @@ impl From<Status> for Error {
let msg = get_metadata_value(&e, GREPTIME_DB_HEADER_ERROR_MSG)
.unwrap_or_else(|| e.message().to_string());
Self::Server { code, msg }
Self::Server {
code,
msg,
location: location!(),
}
}
}
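
The pattern applied throughout this file: fields marked #[snafu(implicit)] are filled automatically when an error is raised through a context selector, while hand-built variants (as in From<Status> above) supply the value with the location!() macro. A self-contained sketch of the same pattern, using a hypothetical DemoError:

    use snafu::{location, Location, Snafu};

    #[derive(Debug, Snafu)]
    enum DemoError {
        #[snafu(display("Something failed: {msg}"))]
        Demo {
            msg: String,
            // Captured automatically by `.context(DemoSnafu { .. })`.
            #[snafu(implicit)]
            location: Location,
        },
    }

    // Constructing the variant directly requires an explicit location.
    fn build_by_hand(msg: String) -> DemoError {
        DemoError::Demo {
            msg,
            location: location!(),
        }
    }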


@@ -189,6 +189,7 @@ impl RegionRequester {
error::Error::RegionServer {
code,
source: BoxedError::new(err),
location: location!(),
}
})?
.into_inner();
@@ -272,7 +273,7 @@ mod test {
err_msg: "blabla".to_string(),
}),
}));
let Server { code, msg } = result.unwrap_err() else {
let Server { code, msg, .. } = result.unwrap_err() else {
unreachable!()
};
assert_eq!(code, StatusCode::Internal);


@@ -19,6 +19,7 @@ workspace = true
async-trait.workspace = true
auth.workspace = true
base64.workspace = true
cache.workspace = true
catalog.workspace = true
chrono.workspace = true
clap.workspace = true
@@ -27,6 +28,7 @@ common-base.workspace = true
common-catalog.workspace = true
common-config.workspace = true
common-error.workspace = true
common-grpc.workspace = true
common-macro.workspace = true
common-meta.workspace = true
common-procedure.workspace = true
@@ -39,12 +41,12 @@ common-telemetry = { workspace = true, features = [
common-time.workspace = true
common-version.workspace = true
common-wal.workspace = true
config = "0.13"
datanode.workspace = true
datatypes.workspace = true
either = "1.8"
etcd-client.workspace = true
file-engine.workspace = true
flow.workspace = true
frontend.workspace = true
futures.workspace = true
human-panic = "1.2.2"
@@ -52,6 +54,7 @@ lazy_static.workspace = true
meta-client.workspace = true
meta-srv.workspace = true
mito2.workspace = true
moka.workspace = true
nu-ansi-term = "0.46"
plugins.workspace = true
prometheus.workspace = true
@@ -71,6 +74,7 @@ substrait.workspace = true
table.workspace = true
tokio.workspace = true
toml.workspace = true
tracing-appender = "0.2"
[target.'cfg(not(windows))'.dependencies]
tikv-jemallocator = "0.5"


@@ -14,13 +14,11 @@
#![doc = include_str!("../../../../README.md")]
use std::fmt;
use clap::{Parser, Subcommand};
use cmd::error::Result;
use cmd::options::{GlobalOptions, Options};
use cmd::{cli, datanode, frontend, log_versions, metasrv, standalone, start_app, App};
use common_version::{short_version, version};
use cmd::options::GlobalOptions;
use cmd::{cli, datanode, frontend, metasrv, standalone, App};
use common_version::version;
#[derive(Parser)]
#[command(name = "greptime", author, version, long_version = version!(), about)]
@@ -56,58 +54,6 @@ enum SubCommand {
Cli(cli::Command),
}
impl SubCommand {
async fn build(self, opts: Options) -> Result<Box<dyn App>> {
let app: Box<dyn App> = match (self, opts) {
(SubCommand::Datanode(cmd), Options::Datanode(dn_opts)) => {
let app = cmd.build(*dn_opts).await?;
Box::new(app) as _
}
(SubCommand::Frontend(cmd), Options::Frontend(fe_opts)) => {
let app = cmd.build(*fe_opts).await?;
Box::new(app) as _
}
(SubCommand::Metasrv(cmd), Options::Metasrv(meta_opts)) => {
let app = cmd.build(*meta_opts).await?;
Box::new(app) as _
}
(SubCommand::Standalone(cmd), Options::Standalone(opts)) => {
let app = cmd.build(*opts).await?;
Box::new(app) as _
}
(SubCommand::Cli(cmd), Options::Cli(_)) => {
let app = cmd.build().await?;
Box::new(app) as _
}
_ => unreachable!(),
};
Ok(app)
}
fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
match self {
SubCommand::Datanode(cmd) => cmd.load_options(global_options),
SubCommand::Frontend(cmd) => cmd.load_options(global_options),
SubCommand::Metasrv(cmd) => cmd.load_options(global_options),
SubCommand::Standalone(cmd) => cmd.load_options(global_options),
SubCommand::Cli(cmd) => cmd.load_options(global_options),
}
}
}
impl fmt::Display for SubCommand {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
SubCommand::Datanode(..) => write!(f, "greptime-datanode"),
SubCommand::Frontend(..) => write!(f, "greptime-frontend"),
SubCommand::Metasrv(..) => write!(f, "greptime-metasrv"),
SubCommand::Standalone(..) => write!(f, "greptime-standalone"),
SubCommand::Cli(_) => write!(f, "greptime-cli"),
}
}
}
#[cfg(not(windows))]
#[global_allocator]
static ALLOC: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
@@ -119,24 +65,38 @@ async fn main() -> Result<()> {
}
async fn start(cli: Command) -> Result<()> {
let subcmd = cli.subcmd;
let app_name = subcmd.to_string();
let opts = subcmd.load_options(&cli.global_options)?;
let _guard = common_telemetry::init_global_logging(
&app_name,
opts.logging_options(),
cli.global_options.tracing_options(),
opts.node_id(),
);
log_versions(version!(), short_version!());
let app = subcmd.build(opts).await?;
start_app(app).await
match cli.subcmd {
SubCommand::Datanode(cmd) => {
cmd.build(cmd.load_options(&cli.global_options)?)
.await?
.run()
.await
}
SubCommand::Frontend(cmd) => {
cmd.build(cmd.load_options(&cli.global_options)?)
.await?
.run()
.await
}
SubCommand::Metasrv(cmd) => {
cmd.build(cmd.load_options(&cli.global_options)?)
.await?
.run()
.await
}
SubCommand::Standalone(cmd) => {
cmd.build(cmd.load_options(&cli.global_options)?)
.await?
.run()
.await
}
SubCommand::Cli(cmd) => {
cmd.build(cmd.load_options(&cli.global_options)?)
.await?
.run()
.await
}
}
}
fn setup_human_panic() {


@@ -30,15 +30,18 @@ mod upgrade;
use async_trait::async_trait;
use bench::BenchTableMetadataCommand;
use clap::Parser;
use common_telemetry::logging::LoggingOptions;
use common_telemetry::logging::{LoggingOptions, TracingOptions};
use tracing_appender::non_blocking::WorkerGuard;
// pub use repl::Repl;
use upgrade::UpgradeCommand;
use self::export::ExportCommand;
use crate::error::Result;
use crate::options::{GlobalOptions, Options};
use crate::options::GlobalOptions;
use crate::App;
pub const APP_NAME: &str = "greptime-cli";
#[async_trait]
pub trait Tool: Send + Sync {
async fn do_work(&self) -> Result<()>;
@@ -46,24 +49,34 @@ pub trait Tool: Send + Sync {
pub struct Instance {
tool: Box<dyn Tool>,
// Keep the logging guard to prevent the worker from being dropped.
_guard: Vec<WorkerGuard>,
}
impl Instance {
fn new(tool: Box<dyn Tool>) -> Self {
Self { tool }
fn new(tool: Box<dyn Tool>, guard: Vec<WorkerGuard>) -> Self {
Self {
tool,
_guard: guard,
}
}
}
#[async_trait]
impl App for Instance {
fn name(&self) -> &str {
"greptime-cli"
APP_NAME
}
async fn start(&mut self) -> Result<()> {
self.tool.do_work().await
}
fn wait_signal(&self) -> bool {
false
}
async fn stop(&self) -> Result<()> {
Ok(())
}
@@ -76,11 +89,18 @@ pub struct Command {
}
impl Command {
pub async fn build(self) -> Result<Instance> {
self.cmd.build().await
pub async fn build(&self, opts: LoggingOptions) -> Result<Instance> {
let guard = common_telemetry::init_global_logging(
APP_NAME,
&opts,
&TracingOptions::default(),
None,
);
self.cmd.build(guard).await
}
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<LoggingOptions> {
let mut logging_opts = LoggingOptions::default();
if let Some(dir) = &global_options.log_dir {
@@ -89,7 +109,7 @@ impl Command {
logging_opts.level.clone_from(&global_options.log_level);
Ok(Options::Cli(Box::new(logging_opts)))
Ok(logging_opts)
}
}
@@ -102,12 +122,12 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self) -> Result<Instance> {
async fn build(&self, guard: Vec<WorkerGuard>) -> Result<Instance> {
match self {
// SubCommand::Attach(cmd) => cmd.build().await,
SubCommand::Upgrade(cmd) => cmd.build().await,
SubCommand::Bench(cmd) => cmd.build().await,
SubCommand::Export(cmd) => cmd.build().await,
SubCommand::Upgrade(cmd) => cmd.build(guard).await,
SubCommand::Bench(cmd) => cmd.build(guard).await,
SubCommand::Export(cmd) => cmd.build(guard).await,
}
}
}
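
Seen from the caller, the CLI now initializes logging inside Command::build and keeps the returned WorkerGuard alive inside Instance, so the non-blocking log writer survives for the whole run. A rough sketch of the flow; the function name and error plumbing are illustrative:

    async fn run_cli(cmd: Command, global: &GlobalOptions) -> Result<()> {
        // load_options now returns plain LoggingOptions instead of Options::Cli.
        let logging_opts = cmd.load_options(global)?;
        // build() sets up global logging and threads the guard into Instance.
        let mut instance = cmd.build(logging_opts).await?;
        // run() is provided by the App trait.
        instance.run().await
    }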


@@ -30,6 +30,7 @@ use datatypes::schema::{ColumnSchema, RawSchema};
use rand::Rng;
use store_api::storage::RegionNumber;
use table::metadata::{RawTableInfo, RawTableMeta, TableId, TableIdent, TableType};
use tracing_appender::non_blocking::WorkerGuard;
use self::metadata::TableMetadataBencher;
use crate::cli::{Instance, Tool};
@@ -61,7 +62,7 @@ pub struct BenchTableMetadataCommand {
}
impl BenchTableMetadataCommand {
pub async fn build(&self) -> Result<Instance> {
pub async fn build(&self, guard: Vec<WorkerGuard>) -> Result<Instance> {
let etcd_store = EtcdStore::with_endpoints([&self.etcd_addr], 128)
.await
.unwrap();
@@ -72,7 +73,7 @@ impl BenchTableMetadataCommand {
table_metadata_manager,
count: self.count,
};
Ok(Instance::new(Box::new(tool)))
Ok(Instance::new(Box::new(tool), guard))
}
}


@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashSet;
use std::path::Path;
use std::sync::Arc;
@@ -28,6 +29,8 @@ use snafu::{OptionExt, ResultExt};
use tokio::fs::File;
use tokio::io::{AsyncWriteExt, BufWriter};
use tokio::sync::Semaphore;
use tokio::time::Instant;
use tracing_appender::non_blocking::WorkerGuard;
use crate::cli::{Instance, Tool};
use crate::error::{
@@ -78,7 +81,7 @@ pub struct ExportCommand {
}
impl ExportCommand {
pub async fn build(&self) -> Result<Instance> {
pub async fn build(&self, guard: Vec<WorkerGuard>) -> Result<Instance> {
let (catalog, schema) = split_database(&self.database)?;
let auth_header = if let Some(basic) = &self.auth_basic {
@@ -88,15 +91,18 @@ impl ExportCommand {
None
};
Ok(Instance::new(Box::new(Export {
addr: self.addr.clone(),
catalog,
schema,
output_dir: self.output_dir.clone(),
parallelism: self.export_jobs,
target: self.target.clone(),
auth_header,
})))
Ok(Instance::new(
Box::new(Export {
addr: self.addr.clone(),
catalog,
schema,
output_dir: self.output_dir.clone(),
parallelism: self.export_jobs,
target: self.target.clone(),
auth_header,
}),
guard,
))
}
}
@@ -174,8 +180,34 @@ impl Export {
}
/// Return a list of [`TableReference`] to be exported.
/// Includes all tables under the given `catalog` and `schema`
async fn get_table_list(&self, catalog: &str, schema: &str) -> Result<Vec<TableReference>> {
/// Includes all tables under the given `catalog` and `schema`.
async fn get_table_list(
&self,
catalog: &str,
schema: &str,
) -> Result<(Vec<TableReference>, Vec<TableReference>)> {
// Puts all metric table first
let sql = format!(
"select table_catalog, table_schema, table_name from \
information_schema.columns where column_name = '__tsid' \
and table_catalog = \'{catalog}\' and table_schema = \'{schema}\'"
);
let result = self.sql(&sql).await?;
let Some(records) = result else {
EmptyResultSnafu.fail()?
};
let mut metric_physical_tables = HashSet::with_capacity(records.len());
for value in records {
let mut t = Vec::with_capacity(3);
for v in &value {
let serde_json::Value::String(value) = v else {
unreachable!()
};
t.push(value);
}
metric_physical_tables.insert((t[0].clone(), t[1].clone(), t[2].clone()));
}
// TODO: SQL injection hurts
let sql = format!(
"select table_catalog, table_schema, table_name from \
@@ -190,10 +222,10 @@ impl Export {
debug!("Fetched table list: {:?}", records);
if records.is_empty() {
return Ok(vec![]);
return Ok((vec![], vec![]));
}
let mut result = Vec::with_capacity(records.len());
let mut remaining_tables = Vec::with_capacity(records.len());
for value in records {
let mut t = Vec::with_capacity(3);
for v in &value {
@@ -202,10 +234,17 @@ impl Export {
};
t.push(value);
}
result.push((t[0].clone(), t[1].clone(), t[2].clone()));
let table = (t[0].clone(), t[1].clone(), t[2].clone());
// Ignores the physical table
if !metric_physical_tables.contains(&table) {
remaining_tables.push(table);
}
}
Ok(result)
Ok((
metric_physical_tables.into_iter().collect(),
remaining_tables,
))
}
async fn show_create_table(&self, catalog: &str, schema: &str, table: &str) -> Result<String> {
@@ -225,6 +264,7 @@ impl Export {
}
async fn export_create_table(&self) -> Result<()> {
let timer = Instant::now();
let semaphore = Arc::new(Semaphore::new(self.parallelism));
let db_names = self.iter_db_names().await?;
let db_count = db_names.len();
@@ -233,15 +273,16 @@ impl Export {
let semaphore_moved = semaphore.clone();
tasks.push(async move {
let _permit = semaphore_moved.acquire().await.unwrap();
let table_list = self.get_table_list(&catalog, &schema).await?;
let table_count = table_list.len();
let (metric_physical_tables, remaining_tables) =
self.get_table_list(&catalog, &schema).await?;
let table_count = metric_physical_tables.len() + remaining_tables.len();
tokio::fs::create_dir_all(&self.output_dir)
.await
.context(FileIoSnafu)?;
let output_file =
Path::new(&self.output_dir).join(format!("{catalog}-{schema}.sql"));
let mut file = File::create(output_file).await.context(FileIoSnafu)?;
for (c, s, t) in table_list {
for (c, s, t) in metric_physical_tables.into_iter().chain(remaining_tables) {
match self.show_create_table(&c, &s, &t).await {
Err(e) => {
error!(e; r#"Failed to export table "{}"."{}"."{}""#, c, s, t)
@@ -270,12 +311,14 @@ impl Export {
})
.count();
info!("success {success}/{db_count} jobs");
let elapsed = timer.elapsed();
info!("Success {success}/{db_count} jobs, cost: {:?}", elapsed);
Ok(())
}
async fn export_table_data(&self) -> Result<()> {
let timer = Instant::now();
let semaphore = Arc::new(Semaphore::new(self.parallelism));
let db_names = self.iter_db_names().await?;
let db_count = db_names.len();
@@ -288,15 +331,25 @@ impl Export {
.await
.context(FileIoSnafu)?;
let output_dir = Path::new(&self.output_dir).join(format!("{catalog}-{schema}/"));
// copy database to
let sql = format!(
"copy database {} to '{}' with (format='parquet');",
schema,
output_dir.to_str().unwrap()
);
self.sql(&sql).await?;
info!("finished exporting {catalog}.{schema} data");
// Ignores metric physical tables
let (metrics_tables, table_list) = self.get_table_list(&catalog, &schema).await?;
for (_, _, table_name) in metrics_tables {
warn!("Ignores metric physical table: {table_name}");
}
for (catalog_name, schema_name, table_name) in table_list {
// copy table to
let sql = format!(
r#"Copy "{}"."{}"."{}" TO '{}{}.parquet' WITH (format='parquet');"#,
catalog_name,
schema_name,
table_name,
output_dir.to_str().unwrap(),
table_name,
);
info!("Executing sql: {sql}");
self.sql(&sql).await?;
}
info!("Finished exporting {catalog}.{schema} data");
// export copy from sql
let dir_filenames = match output_dir.read_dir() {
@@ -351,8 +404,8 @@ impl Export {
}
})
.count();
info!("success {success}/{db_count} jobs");
let elapsed = timer.elapsed();
info!("Success {success}/{db_count} jobs, costs: {:?}", elapsed);
Ok(())
}
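
With placeholder names (catalog greptime, schema public, table monitor, output directory /output), the per-table data export added here issues statements of roughly this shape, while physical tables of the metric engine, detected by the __tsid column query above, are only logged and skipped:

    Copy "greptime"."public"."monitor" TO '/output/greptime-public/monitor.parquet' WITH (format='parquet');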


@@ -40,6 +40,7 @@ use etcd_client::Client;
use futures::TryStreamExt;
use prost::Message;
use snafu::ResultExt;
use tracing_appender::non_blocking::WorkerGuard;
use v1_helper::{CatalogKey as v1CatalogKey, SchemaKey as v1SchemaKey, TableGlobalValue};
use crate::cli::{Instance, Tool};
@@ -63,7 +64,7 @@ pub struct UpgradeCommand {
}
impl UpgradeCommand {
pub async fn build(&self) -> Result<Instance> {
pub async fn build(&self, guard: Vec<WorkerGuard>) -> Result<Instance> {
let client = Client::connect([&self.etcd_addr], None)
.await
.context(ConnectEtcdSnafu {
@@ -77,7 +78,7 @@ impl UpgradeCommand {
skip_schema_keys: self.skip_schema_keys,
skip_table_route_keys: self.skip_table_route_keys,
};
Ok(Instance::new(Box::new(tool)))
Ok(Instance::new(Box::new(tool), guard))
}
}
@@ -565,11 +566,16 @@ mod v1_helper {
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Invalid catalog info: {}", key))]
InvalidCatalog { key: String, location: Location },
InvalidCatalog {
key: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to deserialize catalog entry value: {}", raw))]
DeserializeCatalogEntryValue {
raw: String,
#[snafu(implicit)]
location: Location,
source: serde_json::error::Error,
},


@@ -18,7 +18,10 @@ use std::time::Duration;
use async_trait::async_trait;
use catalog::kvbackend::MetaKvBackend;
use clap::Parser;
use common_config::Configurable;
use common_telemetry::info;
use common_telemetry::logging::TracingOptions;
use common_version::{short_version, version};
use common_wal::config::DatanodeWalConfig;
use datanode::config::DatanodeOptions;
use datanode::datanode::{Datanode, DatanodeBuilder};
@@ -26,18 +29,29 @@ use datanode::service::DatanodeServiceBuilder;
use meta_client::MetaClientOptions;
use servers::Mode;
use snafu::{OptionExt, ResultExt};
use tracing_appender::non_blocking::WorkerGuard;
use crate::error::{MissingConfigSnafu, Result, ShutdownDatanodeSnafu, StartDatanodeSnafu};
use crate::options::{GlobalOptions, Options};
use crate::App;
use crate::error::{
LoadLayeredConfigSnafu, MissingConfigSnafu, Result, ShutdownDatanodeSnafu, StartDatanodeSnafu,
};
use crate::options::GlobalOptions;
use crate::{log_versions, App};
pub const APP_NAME: &str = "greptime-datanode";
pub struct Instance {
datanode: Datanode,
// Keep the logging guard to prevent the worker from being dropped.
_guard: Vec<WorkerGuard>,
}
impl Instance {
pub fn new(datanode: Datanode) -> Self {
Self { datanode }
pub fn new(datanode: Datanode, guard: Vec<WorkerGuard>) -> Self {
Self {
datanode,
_guard: guard,
}
}
pub fn datanode_mut(&mut self) -> &mut Datanode {
@@ -52,7 +66,7 @@ impl Instance {
#[async_trait]
impl App for Instance {
fn name(&self) -> &str {
"greptime-datanode"
APP_NAME
}
async fn start(&mut self) -> Result<()> {
@@ -78,11 +92,11 @@ pub struct Command {
}
impl Command {
pub async fn build(self, opts: DatanodeOptions) -> Result<Instance> {
pub async fn build(&self, opts: DatanodeOptions) -> Result<Instance> {
self.subcmd.build(opts).await
}
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<DatanodeOptions> {
self.subcmd.load_options(global_options)
}
}
@@ -93,13 +107,13 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self, opts: DatanodeOptions) -> Result<Instance> {
async fn build(&self, opts: DatanodeOptions) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.build(opts).await,
}
}
fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
fn load_options(&self, global_options: &GlobalOptions) -> Result<DatanodeOptions> {
match self {
SubCommand::Start(cmd) => cmd.load_options(global_options),
}
@@ -114,8 +128,8 @@ struct StartCommand {
rpc_addr: Option<String>,
#[clap(long)]
rpc_hostname: Option<String>,
#[clap(long, value_delimiter = ',', num_args = 1..)]
metasrv_addr: Option<Vec<String>>,
#[clap(long, aliases = ["metasrv-addr"], value_delimiter = ',', num_args = 1..)]
metasrv_addrs: Option<Vec<String>>,
#[clap(short, long)]
config_file: Option<String>,
#[clap(long)]
@@ -131,13 +145,23 @@ struct StartCommand {
}
impl StartCommand {
fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
let mut opts: DatanodeOptions = Options::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
DatanodeOptions::env_list_keys(),
)?;
fn load_options(&self, global_options: &GlobalOptions) -> Result<DatanodeOptions> {
self.merge_with_cli_options(
global_options,
DatanodeOptions::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
)
.context(LoadLayeredConfigSnafu)?,
)
}
// The precedence order is: cli > config file > environment variables > default values.
fn merge_with_cli_options(
&self,
global_options: &GlobalOptions,
mut opts: DatanodeOptions,
) -> Result<DatanodeOptions> {
if let Some(dir) = &global_options.log_dir {
opts.logging.dir.clone_from(dir);
}
@@ -146,6 +170,11 @@ impl StartCommand {
opts.logging.level.clone_from(&global_options.log_level);
}
opts.tracing = TracingOptions {
#[cfg(feature = "tokio-console")]
tokio_console_addr: global_options.tokio_console_addr.clone(),
};
if let Some(addr) = &self.rpc_addr {
opts.rpc_addr.clone_from(addr);
}
@@ -158,7 +187,7 @@ impl StartCommand {
opts.node_id = Some(node_id);
}
if let Some(metasrv_addrs) = &self.metasrv_addr {
if let Some(metasrv_addrs) = &self.metasrv_addrs {
opts.meta_client
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs
@@ -202,10 +231,18 @@ impl StartCommand {
// Disable dashboard in datanode.
opts.http.disable_dashboard = true;
Ok(Options::Datanode(Box::new(opts)))
Ok(opts)
}
async fn build(self, mut opts: DatanodeOptions) -> Result<Instance> {
async fn build(&self, mut opts: DatanodeOptions) -> Result<Instance> {
let guard = common_telemetry::init_global_logging(
APP_NAME,
&opts.logging,
&opts.tracing,
opts.node_id.map(|x| x.to_string()),
);
log_versions(version!(), short_version!());
let plugins = plugins::setup_datanode_plugins(&mut opts)
.await
.context(StartDatanodeSnafu)?;
@@ -244,7 +281,7 @@ impl StartCommand {
.context(StartDatanodeSnafu)?;
datanode.setup_services(services);
Ok(Instance::new(datanode))
Ok(Instance::new(datanode, guard))
}
}
@@ -253,13 +290,14 @@ mod tests {
use std::io::Write;
use std::time::Duration;
use common_config::ENV_VAR_SEP;
use common_test_util::temp_dir::create_named_temp_file;
use datanode::config::{FileConfig, GcsConfig, ObjectStoreConfig, S3Config};
use servers::heartbeat_options::HeartbeatOptions;
use servers::Mode;
use super::*;
use crate::options::{GlobalOptions, ENV_VAR_SEP};
use crate::options::GlobalOptions;
#[test]
fn test_read_from_config_file() {
@@ -315,10 +353,7 @@ mod tests {
..Default::default()
};
let Options::Datanode(options) = cmd.load_options(&GlobalOptions::default()).unwrap()
else {
unreachable!()
};
let options = cmd.load_options(&GlobalOptions::default()).unwrap();
assert_eq!("127.0.0.1:3001".to_string(), options.rpc_addr);
assert_eq!(Some(42), options.node_id);
@@ -377,26 +412,22 @@ mod tests {
#[test]
fn test_try_from_cmd() {
if let Options::Datanode(opt) = StartCommand::default()
let opt = StartCommand::default()
.load_options(&GlobalOptions::default())
.unwrap()
{
assert_eq!(Mode::Standalone, opt.mode)
}
.unwrap();
assert_eq!(Mode::Standalone, opt.mode);
if let Options::Datanode(opt) = (StartCommand {
let opt = (StartCommand {
node_id: Some(42),
metasrv_addr: Some(vec!["127.0.0.1:3002".to_string()]),
metasrv_addrs: Some(vec!["127.0.0.1:3002".to_string()]),
..Default::default()
})
.load_options(&GlobalOptions::default())
.unwrap()
{
assert_eq!(Mode::Distributed, opt.mode)
}
.unwrap();
assert_eq!(Mode::Distributed, opt.mode);
assert!((StartCommand {
metasrv_addr: Some(vec!["127.0.0.1:3002".to_string()]),
metasrv_addrs: Some(vec!["127.0.0.1:3002".to_string()]),
..Default::default()
})
.load_options(&GlobalOptions::default())
@@ -425,7 +456,7 @@ mod tests {
})
.unwrap();
let logging_opt = options.logging_options();
let logging_opt = options.logging;
assert_eq!("/tmp/greptimedb/test/logs", logging_opt.dir);
assert_eq!("debug", logging_opt.level.as_ref().unwrap());
}
@@ -505,11 +536,7 @@ mod tests {
..Default::default()
};
let Options::Datanode(opts) =
command.load_options(&GlobalOptions::default()).unwrap()
else {
unreachable!()
};
let opts = command.load_options(&GlobalOptions::default()).unwrap();
// Should be read from env, env > default values.
let DatanodeWalConfig::RaftEngine(raft_engine_config) = opts.wal else {
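
The precedence comment above translates into a two-step load: DatanodeOptions::load_layered_options (from the new Configurable trait) merges the config file and environment variables, then merge_with_cli_options applies command-line flags on top. A hedged sketch of the first step; the file name and env prefix are placeholders, and the Configurable signature is assumed from its call sites in this diff:

    let file_opts = DatanodeOptions::load_layered_options(Some("datanode.toml"), "DATANODE_")
        .context(LoadLayeredConfigSnafu)?;
    // CLI flags such as --rpc-addr or --metasrv-addrs then override these values.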


@@ -17,7 +17,6 @@ use std::any::Any;
use common_error::ext::{BoxedError, ErrorExt};
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use config::ConfigError;
use rustyline::error::ReadlineError;
use snafu::{Location, Snafu};
@@ -27,97 +26,120 @@ use snafu::{Location, Snafu};
pub enum Error {
#[snafu(display("Failed to create default catalog and schema"))]
InitMetadata {
#[snafu(implicit)]
location: Location,
source: common_meta::error::Error,
},
#[snafu(display("Failed to iter stream"))]
IterStream {
#[snafu(implicit)]
location: Location,
source: common_meta::error::Error,
},
#[snafu(display("Failed to init DDL manager"))]
InitDdlManager {
#[snafu(implicit)]
location: Location,
source: common_meta::error::Error,
},
#[snafu(display("Failed to init default timezone"))]
InitTimezone {
#[snafu(implicit)]
location: Location,
source: common_time::error::Error,
},
#[snafu(display("Failed to start procedure manager"))]
StartProcedureManager {
#[snafu(implicit)]
location: Location,
source: common_procedure::error::Error,
},
#[snafu(display("Failed to stop procedure manager"))]
StopProcedureManager {
#[snafu(implicit)]
location: Location,
source: common_procedure::error::Error,
},
#[snafu(display("Failed to start wal options allocator"))]
StartWalOptionsAllocator {
#[snafu(implicit)]
location: Location,
source: common_meta::error::Error,
},
#[snafu(display("Failed to start datanode"))]
StartDatanode {
#[snafu(implicit)]
location: Location,
source: datanode::error::Error,
},
#[snafu(display("Failed to shutdown datanode"))]
ShutdownDatanode {
#[snafu(implicit)]
location: Location,
source: datanode::error::Error,
},
#[snafu(display("Failed to start frontend"))]
StartFrontend {
#[snafu(implicit)]
location: Location,
source: frontend::error::Error,
},
#[snafu(display("Failed to shutdown frontend"))]
ShutdownFrontend {
#[snafu(implicit)]
location: Location,
source: frontend::error::Error,
},
#[snafu(display("Failed to build meta server"))]
BuildMetaServer {
#[snafu(implicit)]
location: Location,
source: meta_srv::error::Error,
},
#[snafu(display("Failed to start meta server"))]
StartMetaServer {
#[snafu(implicit)]
location: Location,
source: meta_srv::error::Error,
},
#[snafu(display("Failed to shutdown meta server"))]
ShutdownMetaServer {
#[snafu(implicit)]
location: Location,
source: meta_srv::error::Error,
},
#[snafu(display("Missing config, msg: {}", msg))]
MissingConfig { msg: String, location: Location },
MissingConfig {
msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Illegal config: {}", msg))]
IllegalConfig { msg: String, location: Location },
IllegalConfig {
msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Unsupported selector type: {}", selector_type))]
UnsupportedSelectorType {
selector_type: String,
#[snafu(implicit)]
location: Location,
source: meta_srv::error::Error,
},
@@ -129,6 +151,7 @@ pub enum Error {
ReplCreation {
#[snafu(source)]
error: ReadlineError,
#[snafu(implicit)]
location: Location,
},
@@ -136,23 +159,27 @@ pub enum Error {
Readline {
#[snafu(source)]
error: ReadlineError,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to collect RecordBatches"))]
CollectRecordBatches {
#[snafu(implicit)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to pretty print Recordbatches"))]
PrettyPrintRecordBatches {
#[snafu(implicit)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to start Meta client"))]
StartMetaClient {
#[snafu(implicit)]
location: Location,
source: meta_client::error::Error,
},
@@ -160,31 +187,36 @@ pub enum Error {
#[snafu(display("Failed to parse SQL: {}", sql))]
ParseSql {
sql: String,
#[snafu(implicit)]
location: Location,
source: query::error::Error,
},
#[snafu(display("Failed to plan statement"))]
PlanStatement {
#[snafu(implicit)]
location: Location,
source: query::error::Error,
},
#[snafu(display("Failed to encode logical plan in substrait"))]
SubstraitEncodeLogicalPlan {
#[snafu(implicit)]
location: Location,
source: substrait::error::Error,
},
#[snafu(display("Failed to load layered config"))]
LoadLayeredConfig {
#[snafu(source)]
error: ConfigError,
#[snafu(source(from(common_config::error::Error, Box::new)))]
source: Box<common_config::error::Error>,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to start catalog manager"))]
StartCatalogManager {
#[snafu(implicit)]
location: Location,
source: catalog::error::Error,
},
@@ -194,6 +226,7 @@ pub enum Error {
etcd_addr: String,
#[snafu(source)]
error: etcd_client::Error,
#[snafu(implicit)]
location: Location,
},
@@ -201,6 +234,7 @@ pub enum Error {
ConnectServer {
addr: String,
source: client::error::Error,
#[snafu(implicit)]
location: Location,
},
@@ -208,6 +242,7 @@ pub enum Error {
SerdeJson {
#[snafu(source)]
error: serde_json::error::Error,
#[snafu(implicit)]
location: Location,
},
@@ -216,17 +251,25 @@ pub enum Error {
reason: String,
#[snafu(source)]
error: reqwest::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Expect data from output, but got another thing"))]
NotDataFromOutput { location: Location },
NotDataFromOutput {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Empty result from output"))]
EmptyResult { location: Location },
EmptyResult {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to manipulate file"))]
FileIo {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: std::io::Error,
@@ -234,6 +277,7 @@ pub enum Error {
#[snafu(display("Invalid database name: {}", database))]
InvalidDatabaseName {
#[snafu(implicit)]
location: Location,
database: String,
},
@@ -248,14 +292,30 @@ pub enum Error {
#[snafu(display("Other error"))]
Other {
source: BoxedError,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to build runtime"))]
BuildRuntime {
#[snafu(implicit)]
location: Location,
source: common_runtime::error::Error,
},
#[snafu(display("Failed to get cache from cache registry: {}", name))]
CacheRequired {
#[snafu(implicit)]
location: Location,
name: String,
},
#[snafu(display("Failed to build cache registry"))]
BuildCacheRegistry {
#[snafu(implicit)]
location: Location,
source: cache::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -305,6 +365,8 @@ impl ErrorExt for Error {
Error::SerdeJson { .. } | Error::FileIo { .. } => StatusCode::Unexpected,
Error::CacheRequired { .. } | Error::BuildCacheRegistry { .. } => StatusCode::Internal,
Error::Other { source, .. } => source.status_code(),
Error::BuildRuntime { source, .. } => source.status_code(),


@@ -16,14 +16,22 @@ use std::sync::Arc;
use std::time::Duration;
use async_trait::async_trait;
use catalog::kvbackend::{CachedMetaKvBackendBuilder, KvBackendCatalogManager};
use cache::{
build_fundamental_cache_registry, with_default_composite_cache_registry, TABLE_CACHE_NAME,
TABLE_ROUTE_CACHE_NAME,
};
use catalog::kvbackend::{CachedMetaKvBackendBuilder, KvBackendCatalogManager, MetaKvBackend};
use clap::Parser;
use client::client_manager::DatanodeClients;
use common_meta::cache_invalidator::MultiCacheInvalidator;
use common_config::Configurable;
use common_grpc::channel_manager::ChannelConfig;
use common_meta::cache::{CacheRegistryBuilder, LayeredCacheRegistryBuilder};
use common_meta::heartbeat::handler::parse_mailbox_message::ParseMailboxMessageHandler;
use common_meta::heartbeat::handler::HandlerGroupExecutor;
use common_telemetry::info;
use common_telemetry::logging::TracingOptions;
use common_time::timezone::set_default_timezone;
use common_version::{short_version, version};
use frontend::frontend::FrontendOptions;
use frontend::heartbeat::handler::invalidate_table_cache::InvalidateTableCacheHandler;
use frontend::heartbeat::HeartbeatTask;
@@ -34,18 +42,29 @@ use meta_client::MetaClientOptions;
use servers::tls::{TlsMode, TlsOption};
use servers::Mode;
use snafu::{OptionExt, ResultExt};
use tracing_appender::non_blocking::WorkerGuard;
use crate::error::{self, InitTimezoneSnafu, MissingConfigSnafu, Result, StartFrontendSnafu};
use crate::options::{GlobalOptions, Options};
use crate::App;
use crate::error::{
self, InitTimezoneSnafu, LoadLayeredConfigSnafu, MissingConfigSnafu, Result, StartFrontendSnafu,
};
use crate::options::GlobalOptions;
use crate::{log_versions, App};
pub struct Instance {
frontend: FeInstance,
// Keep the logging guard to prevent the worker from being dropped.
_guard: Vec<WorkerGuard>,
}
pub const APP_NAME: &str = "greptime-frontend";
impl Instance {
pub fn new(frontend: FeInstance) -> Self {
Self { frontend }
pub fn new(frontend: FeInstance, guard: Vec<WorkerGuard>) -> Self {
Self {
frontend,
_guard: guard,
}
}
pub fn mut_inner(&mut self) -> &mut FeInstance {
@@ -60,7 +79,7 @@ impl Instance {
#[async_trait]
impl App for Instance {
fn name(&self) -> &str {
"greptime-frontend"
APP_NAME
}
async fn start(&mut self) -> Result<()> {
@@ -86,11 +105,11 @@ pub struct Command {
}
impl Command {
pub async fn build(self, opts: FrontendOptions) -> Result<Instance> {
pub async fn build(&self, opts: FrontendOptions) -> Result<Instance> {
self.subcmd.build(opts).await
}
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<FrontendOptions> {
self.subcmd.load_options(global_options)
}
}
@@ -101,13 +120,13 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self, opts: FrontendOptions) -> Result<Instance> {
async fn build(&self, opts: FrontendOptions) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.build(opts).await,
}
}
fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
fn load_options(&self, global_options: &GlobalOptions) -> Result<FrontendOptions> {
match self {
SubCommand::Start(cmd) => cmd.load_options(global_options),
}
@@ -130,8 +149,8 @@ pub struct StartCommand {
config_file: Option<String>,
#[clap(short, long)]
influxdb_enable: Option<bool>,
#[clap(long, value_delimiter = ',', num_args = 1..)]
metasrv_addr: Option<Vec<String>>,
#[clap(long, aliases = ["metasrv-addr"], value_delimiter = ',', num_args = 1..)]
metasrv_addrs: Option<Vec<String>>,
#[clap(long)]
tls_mode: Option<TlsMode>,
#[clap(long)]
@@ -147,13 +166,23 @@ pub struct StartCommand {
}
impl StartCommand {
fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
let mut opts: FrontendOptions = Options::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
FrontendOptions::env_list_keys(),
)?;
fn load_options(&self, global_options: &GlobalOptions) -> Result<FrontendOptions> {
self.merge_with_cli_options(
global_options,
FrontendOptions::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
)
.context(LoadLayeredConfigSnafu)?,
)
}
// The precedence order is: cli > config file > environment variables > default values.
fn merge_with_cli_options(
&self,
global_options: &GlobalOptions,
mut opts: FrontendOptions,
) -> Result<FrontendOptions> {
if let Some(dir) = &global_options.log_dir {
opts.logging.dir.clone_from(dir);
}
@@ -162,6 +191,11 @@ impl StartCommand {
opts.logging.level.clone_from(&global_options.log_level);
}
opts.tracing = TracingOptions {
#[cfg(feature = "tokio-console")]
tokio_console_addr: global_options.tokio_console_addr.clone(),
};
let tls_opts = TlsOption::new(
self.tls_mode.clone(),
self.tls_cert_path.clone(),
@@ -182,6 +216,7 @@ impl StartCommand {
if let Some(addr) = &self.rpc_addr {
opts.grpc.addr.clone_from(addr);
opts.grpc.tls = tls_opts.clone();
}
if let Some(addr) = &self.mysql_addr {
@@ -200,7 +235,7 @@ impl StartCommand {
opts.influxdb.enable = enable;
}
if let Some(metasrv_addrs) = &self.metasrv_addr {
if let Some(metasrv_addrs) = &self.metasrv_addrs {
opts.meta_client
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs
@@ -210,10 +245,18 @@ impl StartCommand {
opts.user_provider.clone_from(&self.user_provider);
Ok(Options::Frontend(Box::new(opts)))
Ok(opts)
}
async fn build(self, mut opts: FrontendOptions) -> Result<Instance> {
async fn build(&self, mut opts: FrontendOptions) -> Result<Instance> {
let guard = common_telemetry::init_global_logging(
APP_NAME,
&opts.logging,
&opts.tracing,
opts.node_id.clone(),
);
log_versions(version!(), short_version!());
#[allow(clippy::unnecessary_mut_passed)]
let plugins = plugins::setup_frontend_plugins(&mut opts)
.await
@@ -242,21 +285,47 @@ impl StartCommand {
.cache_tti(cache_tti)
.build();
let cached_meta_backend = Arc::new(cached_meta_backend);
let multi_cache_invalidator = Arc::new(MultiCacheInvalidator::with_invalidators(vec![
cached_meta_backend.clone(),
]));
// Builds cache registry
let layered_cache_builder = LayeredCacheRegistryBuilder::default().add_cache_registry(
CacheRegistryBuilder::default()
.add_cache(cached_meta_backend.clone())
.build(),
);
let fundamental_cache_registry =
build_fundamental_cache_registry(Arc::new(MetaKvBackend::new(meta_client.clone())));
let layered_cache_registry = Arc::new(
with_default_composite_cache_registry(
layered_cache_builder.add_cache_registry(fundamental_cache_registry),
)
.context(error::BuildCacheRegistrySnafu)?
.build(),
);
let table_cache = layered_cache_registry
.get()
.context(error::CacheRequiredSnafu {
name: TABLE_CACHE_NAME,
})?;
let table_route_cache =
layered_cache_registry
.get()
.context(error::CacheRequiredSnafu {
name: TABLE_ROUTE_CACHE_NAME,
})?;
let catalog_manager = KvBackendCatalogManager::new(
opts.mode,
Some(meta_client.clone()),
cached_meta_backend.clone(),
multi_cache_invalidator.clone(),
table_cache,
table_route_cache,
)
.await;
let executor = HandlerGroupExecutor::new(vec![
Arc::new(ParseMailboxMessageHandler),
Arc::new(InvalidateTableCacheHandler::new(
multi_cache_invalidator.clone(),
layered_cache_registry.clone(),
)),
]);
@@ -267,14 +336,23 @@ impl StartCommand {
Arc::new(executor),
);
// frontend to datanode need not timeout.
// Some queries are expected to take long time.
let channel_config = ChannelConfig {
timeout: None,
..Default::default()
};
let client = DatanodeClients::new(channel_config);
let mut instance = FrontendBuilder::new(
cached_meta_backend.clone(),
layered_cache_registry.clone(),
catalog_manager,
Arc::new(DatanodeClients::default()),
Arc::new(client),
meta_client,
)
.with_plugin(plugins.clone())
.with_cache_invalidator(multi_cache_invalidator)
.with_local_cache_invalidator(layered_cache_registry)
.with_heartbeat_task(heartbeat_task)
.try_build()
.await
@@ -288,7 +366,7 @@ impl StartCommand {
.build_servers(opts, servers)
.context(StartFrontendSnafu)?;
Ok(Instance::new(instance))
Ok(Instance::new(instance, guard))
}
}
@@ -299,12 +377,13 @@ mod tests {
use auth::{Identity, Password, UserProviderRef};
use common_base::readable_size::ReadableSize;
use common_config::ENV_VAR_SEP;
use common_test_util::temp_dir::create_named_temp_file;
use frontend::service_config::GrpcOptions;
use servers::http::HttpOptions;
use super::*;
use crate::options::{GlobalOptions, ENV_VAR_SEP};
use crate::options::GlobalOptions;
#[test]
fn test_try_from_start_command() {
@@ -317,10 +396,7 @@ mod tests {
..Default::default()
};
let Options::Frontend(opts) = command.load_options(&GlobalOptions::default()).unwrap()
else {
unreachable!()
};
let opts = command.load_options(&GlobalOptions::default()).unwrap();
assert_eq!(opts.http.addr, "127.0.0.1:1234");
assert_eq!(ReadableSize::mb(64), opts.http.body_limit);
@@ -368,10 +444,7 @@ mod tests {
..Default::default()
};
let Options::Frontend(fe_opts) = command.load_options(&GlobalOptions::default()).unwrap()
else {
unreachable!()
};
let fe_opts = command.load_options(&GlobalOptions::default()).unwrap();
assert_eq!(Mode::Distributed, fe_opts.mode);
assert_eq!("127.0.0.1:4000".to_string(), fe_opts.http.addr);
assert_eq!(Duration::from_secs(30), fe_opts.http.timeout);
@@ -424,7 +497,7 @@ mod tests {
})
.unwrap();
let logging_opt = options.logging_options();
let logging_opt = options.logging;
assert_eq!("/tmp/greptimedb/test/logs", logging_opt.dir);
assert_eq!("debug", logging_opt.level.as_ref().unwrap());
}
@@ -500,11 +573,7 @@ mod tests {
..Default::default()
};
let Options::Frontend(fe_opts) =
command.load_options(&GlobalOptions::default()).unwrap()
else {
unreachable!()
};
let fe_opts = command.load_options(&GlobalOptions::default()).unwrap();
// Should be read from env, env > default values.
assert_eq!(fe_opts.mysql.runtime_size, 11);


@@ -17,6 +17,8 @@
use async_trait::async_trait;
use common_telemetry::{error, info};
use crate::error::Result;
pub mod cli;
pub mod datanode;
pub mod error;
@@ -35,32 +37,39 @@ pub trait App: Send {
fn name(&self) -> &str;
/// A hook for the implementor to do something before the actual startup. Defaults to a no-op.
async fn pre_start(&mut self) -> error::Result<()> {
async fn pre_start(&mut self) -> Result<()> {
Ok(())
}
async fn start(&mut self) -> error::Result<()>;
async fn start(&mut self) -> Result<()>;
async fn stop(&self) -> error::Result<()>;
}
pub async fn start_app(mut app: Box<dyn App>) -> error::Result<()> {
info!("Starting app: {}", app.name());
app.pre_start().await?;
app.start().await?;
if let Err(e) = tokio::signal::ctrl_c().await {
error!("Failed to listen for ctrl-c signal: {}", e);
// It's unusual to fail to listen for the ctrl-c signal; maybe there's something unexpected
// in the underlying system. So we stop the app instead of running on regardless, to let
// people investigate the issue.
/// Waits for the quit signal by default.
fn wait_signal(&self) -> bool {
true
}
app.stop().await?;
info!("Goodbye!");
Ok(())
async fn stop(&self) -> Result<()>;
async fn run(&mut self) -> Result<()> {
info!("Starting app: {}", self.name());
self.pre_start().await?;
self.start().await?;
if self.wait_signal() {
if let Err(e) = tokio::signal::ctrl_c().await {
error!(e; "Failed to listen for ctrl-c signal");
// It's unusual to fail to listen for the ctrl-c signal; maybe there's something unexpected
// in the underlying system. So we stop the app instead of running on regardless, to let
// people investigate the issue.
}
}
self.stop().await?;
info!("Goodbye!");
Ok(())
}
}
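// Illustrative sketch (not part of this change): a minimal `App` implementor that relies on
// the default `pre_start`, `wait_signal` and `run` provided above. `DummyApp` and its log
// messages are hypothetical.
struct DummyApp;

#[async_trait]
impl App for DummyApp {
    fn name(&self) -> &str {
        "dummy-app"
    }

    async fn start(&mut self) -> Result<()> {
        info!("dummy app started");
        Ok(())
    }

    async fn stop(&self) -> Result<()> {
        info!("dummy app stopped");
        Ok(())
    }
}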
/// Log the versions of the application, and the arguments passed to the cli.


@@ -16,29 +16,41 @@ use std::time::Duration;
use async_trait::async_trait;
use clap::Parser;
use common_config::Configurable;
use common_telemetry::info;
use common_telemetry::logging::TracingOptions;
use common_version::{short_version, version};
use meta_srv::bootstrap::MetasrvInstance;
use meta_srv::metasrv::MetasrvOptions;
use snafu::ResultExt;
use tracing_appender::non_blocking::WorkerGuard;
use crate::error::{self, Result, StartMetaServerSnafu};
use crate::options::{GlobalOptions, Options};
use crate::App;
use crate::error::{self, LoadLayeredConfigSnafu, Result, StartMetaServerSnafu};
use crate::options::GlobalOptions;
use crate::{log_versions, App};
pub const APP_NAME: &str = "greptime-metasrv";
pub struct Instance {
instance: MetasrvInstance,
// Keep the logging guard to prevent the worker from being dropped.
_guard: Vec<WorkerGuard>,
}
impl Instance {
fn new(instance: MetasrvInstance) -> Self {
Self { instance }
fn new(instance: MetasrvInstance, guard: Vec<WorkerGuard>) -> Self {
Self {
instance,
_guard: guard,
}
}
}
#[async_trait]
impl App for Instance {
fn name(&self) -> &str {
"greptime-metasrv"
APP_NAME
}
async fn start(&mut self) -> Result<()> {
@@ -64,11 +76,11 @@ pub struct Command {
}
impl Command {
pub async fn build(self, opts: MetasrvOptions) -> Result<Instance> {
pub async fn build(&self, opts: MetasrvOptions) -> Result<Instance> {
self.subcmd.build(opts).await
}
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<MetasrvOptions> {
self.subcmd.load_options(global_options)
}
}
@@ -79,13 +91,13 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self, opts: MetasrvOptions) -> Result<Instance> {
async fn build(&self, opts: MetasrvOptions) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.build(opts).await,
}
}
fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
fn load_options(&self, global_options: &GlobalOptions) -> Result<MetasrvOptions> {
match self {
SubCommand::Start(cmd) => cmd.load_options(global_options),
}
@@ -98,8 +110,8 @@ struct StartCommand {
bind_addr: Option<String>,
#[clap(long)]
server_addr: Option<String>,
#[clap(long)]
store_addr: Option<String>,
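// e.g. `--store-addrs 127.0.0.1:2379,127.0.0.1:2380`; the old `--store-addr` flag is kept as an alias.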
#[clap(long, aliases = ["store-addr"], value_delimiter = ',', num_args = 1..)]
store_addrs: Option<Vec<String>>,
#[clap(short, long)]
config_file: Option<String>,
#[clap(short, long)]
@@ -126,13 +138,23 @@ struct StartCommand {
}
impl StartCommand {
fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
let mut opts: MetasrvOptions = Options::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
MetasrvOptions::env_list_keys(),
)?;
fn load_options(&self, global_options: &GlobalOptions) -> Result<MetasrvOptions> {
self.merge_with_cli_options(
global_options,
MetasrvOptions::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
)
.context(LoadLayeredConfigSnafu)?,
)
}
// The precedence order is: cli > config file > environment variables > default values.
fn merge_with_cli_options(
&self,
global_options: &GlobalOptions,
mut opts: MetasrvOptions,
) -> Result<MetasrvOptions> {
if let Some(dir) = &global_options.log_dir {
opts.logging.dir.clone_from(dir);
}
@@ -141,6 +163,11 @@ impl StartCommand {
opts.logging.level.clone_from(&global_options.log_level);
}
opts.tracing = TracingOptions {
#[cfg(feature = "tokio-console")]
tokio_console_addr: global_options.tokio_console_addr.clone(),
};
if let Some(addr) = &self.bind_addr {
opts.bind_addr.clone_from(addr);
}
@@ -149,8 +176,8 @@ impl StartCommand {
opts.server_addr.clone_from(addr);
}
if let Some(addr) = &self.store_addr {
opts.store_addr.clone_from(addr);
if let Some(addrs) = &self.store_addrs {
opts.store_addrs.clone_from(addrs);
}
if let Some(selector_type) = &self.selector {
@@ -190,10 +217,14 @@ impl StartCommand {
// Disable dashboard in metasrv.
opts.http.disable_dashboard = true;
Ok(Options::Metasrv(Box::new(opts)))
Ok(opts)
}
async fn build(self, mut opts: MetasrvOptions) -> Result<Instance> {
async fn build(&self, mut opts: MetasrvOptions) -> Result<Instance> {
let guard =
common_telemetry::init_global_logging(APP_NAME, &opts.logging, &opts.tracing, None);
log_versions(version!(), short_version!());
let plugins = plugins::setup_metasrv_plugins(&mut opts)
.await
.context(StartMetaServerSnafu)?;
@@ -210,7 +241,7 @@ impl StartCommand {
.await
.context(error::BuildMetaServerSnafu)?;
Ok(Instance::new(instance))
Ok(Instance::new(instance, guard))
}
}
@@ -219,27 +250,25 @@ mod tests {
use std::io::Write;
use common_base::readable_size::ReadableSize;
use common_config::ENV_VAR_SEP;
use common_test_util::temp_dir::create_named_temp_file;
use meta_srv::selector::SelectorType;
use super::*;
use crate::options::ENV_VAR_SEP;
#[test]
fn test_read_from_cmd() {
let cmd = StartCommand {
bind_addr: Some("127.0.0.1:3002".to_string()),
server_addr: Some("127.0.0.1:3002".to_string()),
store_addr: Some("127.0.0.1:2380".to_string()),
store_addrs: Some(vec!["127.0.0.1:2380".to_string()]),
selector: Some("LoadBased".to_string()),
..Default::default()
};
let Options::Metasrv(options) = cmd.load_options(&GlobalOptions::default()).unwrap() else {
unreachable!()
};
let options = cmd.load_options(&GlobalOptions::default()).unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
assert_eq!("127.0.0.1:2380".to_string(), options.store_addr);
assert_eq!(vec!["127.0.0.1:2380".to_string()], options.store_addrs);
assert_eq!(SelectorType::LoadBased, options.selector);
}
@@ -270,12 +299,10 @@ mod tests {
..Default::default()
};
let Options::Metasrv(options) = cmd.load_options(&GlobalOptions::default()).unwrap() else {
unreachable!()
};
let options = cmd.load_options(&GlobalOptions::default()).unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
assert_eq!("127.0.0.1:3002".to_string(), options.server_addr);
assert_eq!("127.0.0.1:2379".to_string(), options.store_addr);
assert_eq!(vec!["127.0.0.1:2379".to_string()], options.store_addrs);
assert_eq!(SelectorType::LeaseBased, options.selector);
assert_eq!("debug", options.logging.level.as_ref().unwrap());
assert_eq!("/tmp/greptimedb/test/logs".to_string(), options.logging.dir);
@@ -309,7 +336,7 @@ mod tests {
let cmd = StartCommand {
bind_addr: Some("127.0.0.1:3002".to_string()),
server_addr: Some("127.0.0.1:3002".to_string()),
store_addr: Some("127.0.0.1:2380".to_string()),
store_addrs: Some(vec!["127.0.0.1:2380".to_string()]),
selector: Some("LoadBased".to_string()),
..Default::default()
};
@@ -324,7 +351,7 @@ mod tests {
})
.unwrap();
let logging_opt = options.logging_options();
let logging_opt = options.logging;
assert_eq!("/tmp/greptimedb/test/logs", logging_opt.dir);
assert_eq!("debug", logging_opt.level.as_ref().unwrap());
}
@@ -379,11 +406,7 @@ mod tests {
..Default::default()
};
let Options::Metasrv(opts) =
command.load_options(&GlobalOptions::default()).unwrap()
else {
unreachable!()
};
let opts = command.load_options(&GlobalOptions::default()).unwrap();
// Should be read from env, env > default values.
assert_eq!(opts.bind_addr, "127.0.0.1:14002");
@@ -395,7 +418,7 @@ mod tests {
assert_eq!(opts.http.addr, "127.0.0.1:14000");
// Should be default value.
assert_eq!(opts.store_addr, "127.0.0.1:2379");
assert_eq!(opts.store_addrs, vec!["127.0.0.1:2379".to_string()]);
},
);
}


@@ -13,53 +13,6 @@
// limitations under the License.
use clap::Parser;
use common_config::KvBackendConfig;
use common_telemetry::logging::{LoggingOptions, TracingOptions};
use common_wal::config::MetasrvWalConfig;
use config::{Config, Environment, File, FileFormat};
use datanode::config::{DatanodeOptions, ProcedureConfig};
use frontend::error::{Result as FeResult, TomlFormatSnafu};
use frontend::frontend::{FrontendOptions, TomlSerializable};
use meta_srv::metasrv::MetasrvOptions;
use serde::{Deserialize, Serialize};
use snafu::ResultExt;
use crate::error::{LoadLayeredConfigSnafu, Result, SerdeJsonSnafu};
pub const ENV_VAR_SEP: &str = "__";
pub const ENV_LIST_SEP: &str = ",";
/// Options mixed up from datanode, frontend and metasrv.
#[derive(Serialize, Debug, Clone)]
pub struct MixOptions {
pub data_home: String,
pub procedure: ProcedureConfig,
pub metadata_store: KvBackendConfig,
pub frontend: FrontendOptions,
pub datanode: DatanodeOptions,
pub logging: LoggingOptions,
pub wal_meta: MetasrvWalConfig,
}
impl From<MixOptions> for FrontendOptions {
fn from(value: MixOptions) -> Self {
value.frontend
}
}
impl TomlSerializable for MixOptions {
fn to_toml(&self) -> FeResult<String> {
toml::to_string(self).context(TomlFormatSnafu)
}
}
pub enum Options {
Datanode(Box<DatanodeOptions>),
Frontend(Box<FrontendOptions>),
Metasrv(Box<MetasrvOptions>),
Standalone(Box<MixOptions>),
Cli(Box<LoggingOptions>),
}
#[derive(Parser, Default, Debug, Clone)]
pub struct GlobalOptions {
@@ -76,216 +29,3 @@ pub struct GlobalOptions {
#[arg(global = true)]
pub tokio_console_addr: Option<String>,
}
impl GlobalOptions {
pub fn tracing_options(&self) -> TracingOptions {
TracingOptions {
#[cfg(feature = "tokio-console")]
tokio_console_addr: self.tokio_console_addr.clone(),
}
}
}
impl Options {
pub fn logging_options(&self) -> &LoggingOptions {
match self {
Options::Datanode(opts) => &opts.logging,
Options::Frontend(opts) => &opts.logging,
Options::Metasrv(opts) => &opts.logging,
Options::Standalone(opts) => &opts.logging,
Options::Cli(opts) => opts,
}
}
/// Load the configuration from multiple sources and merge them.
/// The precedence order is: config file > environment variables > default values.
/// `env_prefix` is the prefix of environment variables, e.g. "FRONTEND__xxx".
/// The function uses dunder (double underscore) `__` as the separator for environment variables, for example:
/// `DATANODE__STORAGE__MANIFEST__CHECKPOINT_MARGIN` will be mapped to the `DatanodeOptions.storage.manifest.checkpoint_margin` field in the configuration.
/// `list_keys` is the list of keys that should be parsed as a list; for example, you can pass `Some(&["meta_client_options.metasrv_addrs"])` to parse `GREPTIMEDB_METASRV__META_CLIENT_OPTIONS__METASRV_ADDRS` as a list.
/// The function will use comma `,` as the separator for list values, for example: `127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003`.
pub fn load_layered_options<'de, T: Serialize + Deserialize<'de> + Default>(
config_file: Option<&str>,
env_prefix: &str,
list_keys: Option<&[&str]>,
) -> Result<T> {
let default_opts = T::default();
let env_source = {
let mut env = Environment::default();
if !env_prefix.is_empty() {
env = env.prefix(env_prefix);
}
if let Some(list_keys) = list_keys {
env = env.list_separator(ENV_LIST_SEP);
for key in list_keys {
env = env.with_list_parse_key(key);
}
}
env.try_parsing(true)
.separator(ENV_VAR_SEP)
.ignore_empty(true)
};
// Workaround: a replacement for `Config::try_from(&default_opts)`, because
// `ConfigSerializer` cannot handle the case of an empty struct contained
// within an iterative structure.
// See: https://github.com/mehcode/config-rs/issues/461
let json_str = serde_json::to_string(&default_opts).context(SerdeJsonSnafu)?;
let default_config = File::from_str(&json_str, FileFormat::Json);
// Add default values and environment variables as the sources of the configuration.
let mut layered_config = Config::builder()
.add_source(default_config)
.add_source(env_source);
// Add config file as the source of the configuration if it is specified.
if let Some(config_file) = config_file {
layered_config = layered_config.add_source(File::new(config_file, FileFormat::Toml));
}
let opts = layered_config
.build()
.context(LoadLayeredConfigSnafu)?
.try_deserialize()
.context(LoadLayeredConfigSnafu)?;
Ok(opts)
}
pub fn node_id(&self) -> Option<String> {
match self {
Options::Metasrv(_) | Options::Cli(_) => None,
Options::Datanode(opt) => opt.node_id.map(|x| x.to_string()),
Options::Frontend(opt) => opt.node_id.clone(),
Options::Standalone(opt) => opt.frontend.node_id.clone(),
}
}
}
#[cfg(test)]
mod tests {
use std::io::Write;
use common_test_util::temp_dir::create_named_temp_file;
use common_wal::config::DatanodeWalConfig;
use datanode::config::{DatanodeOptions, ObjectStoreConfig};
use super::*;
#[test]
fn test_load_layered_options() {
let mut file = create_named_temp_file();
let toml_str = r#"
mode = "distributed"
enable_memory_catalog = false
rpc_addr = "127.0.0.1:3001"
rpc_hostname = "127.0.0.1"
rpc_runtime_size = 8
mysql_addr = "127.0.0.1:4406"
mysql_runtime_size = 2
[meta_client]
timeout = "3s"
connect_timeout = "5s"
tcp_nodelay = true
[wal]
provider = "raft_engine"
dir = "/tmp/greptimedb/wal"
file_size = "1GB"
purge_threshold = "50GB"
purge_interval = "10m"
read_batch_size = 128
sync_write = false
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
"#;
write!(file, "{}", toml_str).unwrap();
let env_prefix = "DATANODE_UT";
temp_env::with_vars(
// The following environment variables will be used to override the values in the config file.
[
(
// storage.type = S3
[
env_prefix.to_string(),
"storage".to_uppercase(),
"type".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("S3"),
),
(
// storage.bucket = mybucket
[
env_prefix.to_string(),
"storage".to_uppercase(),
"bucket".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("mybucket"),
),
(
// wal.dir = /other/wal/dir
[
env_prefix.to_string(),
"wal".to_uppercase(),
"dir".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("/other/wal/dir"),
),
(
// meta_client.metasrv_addrs = 127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003
[
env_prefix.to_string(),
"meta_client".to_uppercase(),
"metasrv_addrs".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003"),
),
],
|| {
let opts: DatanodeOptions = Options::load_layered_options(
Some(file.path().to_str().unwrap()),
env_prefix,
DatanodeOptions::env_list_keys(),
)
.unwrap();
// Check the configs from environment variables.
match &opts.storage.store {
ObjectStoreConfig::S3(s3_config) => {
assert_eq!(s3_config.bucket, "mybucket".to_string());
}
_ => panic!("unexpected store type"),
}
assert_eq!(
opts.meta_client.unwrap().metasrv_addrs,
vec![
"127.0.0.1:3001".to_string(),
"127.0.0.1:3002".to_string(),
"127.0.0.1:3003".to_string()
]
);
// Should be the values from config file, not environment variables.
let DatanodeWalConfig::RaftEngine(raft_engine_config) = opts.wal else {
unreachable!()
};
assert_eq!(raft_engine_config.dir.unwrap(), "/tmp/greptimedb/wal");
// Should be default values.
assert_eq!(opts.node_id, None);
},
);
}
}


@@ -16,11 +16,16 @@ use std::sync::Arc;
use std::{fs, path};
use async_trait::async_trait;
use cache::{
build_fundamental_cache_registry, with_default_composite_cache_registry, TABLE_CACHE_NAME,
TABLE_ROUTE_CACHE_NAME,
};
use catalog::kvbackend::KvBackendCatalogManager;
use clap::Parser;
use common_catalog::consts::{MIN_USER_FLOW_ID, MIN_USER_TABLE_ID};
use common_config::{metadata_store_dir, KvBackendConfig};
use common_meta::cache_invalidator::{CacheInvalidatorRef, MultiCacheInvalidator};
use common_config::{metadata_store_dir, Configurable, KvBackendConfig};
use common_meta::cache::LayeredCacheRegistryBuilder;
use common_meta::cache_invalidator::CacheInvalidatorRef;
use common_meta::ddl::flow_meta::{FlowMetadataAllocator, FlowMetadataAllocatorRef};
use common_meta::ddl::table_meta::{TableMetadataAllocator, TableMetadataAllocatorRef};
use common_meta::ddl::{DdlContext, ProcedureExecutorRef};
@@ -34,12 +39,14 @@ use common_meta::sequence::SequenceBuilder;
use common_meta::wal_options_allocator::{WalOptionsAllocator, WalOptionsAllocatorRef};
use common_procedure::ProcedureManagerRef;
use common_telemetry::info;
use common_telemetry::logging::LoggingOptions;
use common_telemetry::logging::{LoggingOptions, TracingOptions};
use common_time::timezone::set_default_timezone;
use common_version::{short_version, version};
use common_wal::config::StandaloneWalConfig;
use datanode::config::{DatanodeOptions, ProcedureConfig, RegionEngineConfig, StorageConfig};
use datanode::datanode::{Datanode, DatanodeBuilder};
use file_engine::config::EngineConfig as FileEngineConfig;
use flow::FlownodeBuilder;
use frontend::frontend::FrontendOptions;
use frontend::instance::builder::FrontendBuilder;
use frontend::instance::{FrontendInstance, Instance as FeInstance, StandaloneDatanodeManager};
@@ -54,15 +61,19 @@ use servers::export_metrics::ExportMetricsOption;
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
use servers::Mode;
use snafu::ResultExt;
use snafu::{OptionExt, ResultExt};
use tracing_appender::non_blocking::WorkerGuard;
use crate::error::{
CreateDirSnafu, IllegalConfigSnafu, InitDdlManagerSnafu, InitMetadataSnafu, InitTimezoneSnafu,
Result, ShutdownDatanodeSnafu, ShutdownFrontendSnafu, StartDatanodeSnafu, StartFrontendSnafu,
BuildCacheRegistrySnafu, CacheRequiredSnafu, CreateDirSnafu, IllegalConfigSnafu,
InitDdlManagerSnafu, InitMetadataSnafu, InitTimezoneSnafu, LoadLayeredConfigSnafu, Result,
ShutdownDatanodeSnafu, ShutdownFrontendSnafu, StartDatanodeSnafu, StartFrontendSnafu,
StartProcedureManagerSnafu, StartWalOptionsAllocatorSnafu, StopProcedureManagerSnafu,
};
use crate::options::{GlobalOptions, MixOptions, Options};
use crate::App;
use crate::options::GlobalOptions;
use crate::{log_versions, App};
pub const APP_NAME: &str = "greptime-standalone";
#[derive(Parser)]
pub struct Command {
@@ -71,11 +82,11 @@ pub struct Command {
}
impl Command {
pub async fn build(self, opts: MixOptions) -> Result<Instance> {
pub async fn build(&self, opts: StandaloneOptions) -> Result<Instance> {
self.subcmd.build(opts).await
}
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
pub fn load_options(&self, global_options: &GlobalOptions) -> Result<StandaloneOptions> {
self.subcmd.load_options(global_options)
}
}
@@ -86,13 +97,13 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self, opts: MixOptions) -> Result<Instance> {
async fn build(&self, opts: StandaloneOptions) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.build(opts).await,
}
}
fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
fn load_options(&self, global_options: &GlobalOptions) -> Result<StandaloneOptions> {
match self {
SubCommand::Start(cmd) => cmd.load_options(global_options),
}
@@ -121,12 +132,7 @@ pub struct StandaloneOptions {
/// Options for different store engines.
pub region_engine: Vec<RegionEngineConfig>,
pub export_metrics: ExportMetricsOption,
}
impl StandaloneOptions {
pub fn env_list_keys() -> Option<&'static [&'static str]> {
Some(&["wal.broker_endpoints"])
}
pub tracing: TracingOptions,
}
impl Default for StandaloneOptions {
@@ -153,39 +159,48 @@ impl Default for StandaloneOptions {
RegionEngineConfig::Mito(MitoConfig::default()),
RegionEngineConfig::File(FileEngineConfig::default()),
],
tracing: TracingOptions::default(),
}
}
}
impl Configurable<'_> for StandaloneOptions {
fn env_list_keys() -> Option<&'static [&'static str]> {
Some(&["wal.broker_endpoints"])
}
}
impl StandaloneOptions {
fn frontend_options(self) -> FrontendOptions {
pub fn frontend_options(&self) -> FrontendOptions {
let cloned_opts = self.clone();
FrontendOptions {
mode: self.mode,
default_timezone: self.default_timezone,
http: self.http,
grpc: self.grpc,
mysql: self.mysql,
postgres: self.postgres,
opentsdb: self.opentsdb,
influxdb: self.influxdb,
prom_store: self.prom_store,
mode: cloned_opts.mode,
default_timezone: cloned_opts.default_timezone,
http: cloned_opts.http,
grpc: cloned_opts.grpc,
mysql: cloned_opts.mysql,
postgres: cloned_opts.postgres,
opentsdb: cloned_opts.opentsdb,
influxdb: cloned_opts.influxdb,
prom_store: cloned_opts.prom_store,
meta_client: None,
logging: self.logging,
user_provider: self.user_provider,
logging: cloned_opts.logging,
user_provider: cloned_opts.user_provider,
// Hand the export metrics task of standalone over to the frontend for execution
export_metrics: self.export_metrics,
export_metrics: cloned_opts.export_metrics,
..Default::default()
}
}
fn datanode_options(self) -> DatanodeOptions {
pub fn datanode_options(&self) -> DatanodeOptions {
let cloned_opts = self.clone();
DatanodeOptions {
node_id: Some(0),
enable_telemetry: self.enable_telemetry,
wal: self.wal.into(),
storage: self.storage,
region_engine: self.region_engine,
rpc_addr: self.grpc.addr,
enable_telemetry: cloned_opts.enable_telemetry,
wal: cloned_opts.wal.into(),
storage: cloned_opts.storage,
region_engine: cloned_opts.region_engine,
rpc_addr: cloned_opts.grpc.addr,
..Default::default()
}
}
@@ -196,12 +211,15 @@ pub struct Instance {
frontend: FeInstance,
procedure_manager: ProcedureManagerRef,
wal_options_allocator: WalOptionsAllocatorRef,
// Keep the logging guard to prevent the worker from being dropped.
_guard: Vec<WorkerGuard>,
}
#[async_trait]
impl App for Instance {
fn name(&self) -> &str {
"greptime-standalone"
APP_NAME
}
async fn start(&mut self) -> Result<()> {
@@ -276,21 +294,24 @@ pub struct StartCommand {
}
impl StartCommand {
fn load_options(&self, global_options: &GlobalOptions) -> Result<Options> {
let opts: StandaloneOptions = Options::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
StandaloneOptions::env_list_keys(),
)?;
self.convert_options(global_options, opts)
fn load_options(&self, global_options: &GlobalOptions) -> Result<StandaloneOptions> {
self.merge_with_cli_options(
global_options,
StandaloneOptions::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
)
.context(LoadLayeredConfigSnafu)?,
)
}
pub fn convert_options(
// The precedence order is: cli > config file > environment variables > default values.
pub fn merge_with_cli_options(
&self,
global_options: &GlobalOptions,
mut opts: StandaloneOptions,
) -> Result<Options> {
) -> Result<StandaloneOptions> {
// Should always be standalone mode.
opts.mode = Mode::Standalone;
if let Some(dir) = &global_options.log_dir {
@@ -301,6 +322,11 @@ impl StartCommand {
opts.logging.level.clone_from(&global_options.log_level);
}
opts.tracing = TracingOptions {
#[cfg(feature = "tokio-console")]
tokio_console_addr: global_options.tokio_console_addr.clone(),
};
let tls_opts = TlsOption::new(
self.tls_mode.clone(),
self.tls_cert_path.clone(),
@@ -323,8 +349,7 @@ impl StartCommand {
msg: format!(
"gRPC listen address conflicts with datanode reserved gRPC addr: {datanode_grpc_addr}",
),
}
.fail();
}.fail();
}
opts.grpc.addr.clone_from(addr)
}
@@ -347,47 +372,36 @@ impl StartCommand {
opts.user_provider.clone_from(&self.user_provider);
let metadata_store = opts.metadata_store.clone();
let procedure = opts.procedure.clone();
let frontend = opts.clone().frontend_options();
let logging = opts.logging.clone();
let wal_meta = opts.wal.clone().into();
let datanode = opts.datanode_options().clone();
Ok(Options::Standalone(Box::new(MixOptions {
procedure,
metadata_store,
data_home: datanode.storage.data_home.to_string(),
frontend,
datanode,
logging,
wal_meta,
})))
Ok(opts)
}
#[allow(unreachable_code)]
#[allow(unused_variables)]
#[allow(clippy::diverging_sub_expression)]
async fn build(self, opts: MixOptions) -> Result<Instance> {
async fn build(&self, opts: StandaloneOptions) -> Result<Instance> {
let guard =
common_telemetry::init_global_logging(APP_NAME, &opts.logging, &opts.tracing, None);
log_versions(version!(), short_version!());
info!("Standalone start command: {:#?}", self);
info!("Building standalone instance with {opts:#?}");
let mut fe_opts = opts.frontend;
let mut fe_opts = opts.frontend_options();
#[allow(clippy::unnecessary_mut_passed)]
let fe_plugins = plugins::setup_frontend_plugins(&mut fe_opts) // mut ref is MUST, DO NOT change it
.await
.context(StartFrontendSnafu)?;
let dn_opts = opts.datanode;
let dn_opts = opts.datanode_options();
set_default_timezone(fe_opts.default_timezone.as_deref()).context(InitTimezoneSnafu)?;
let data_home = &dn_opts.storage.data_home;
// Ensure the data_home directory exists.
fs::create_dir_all(path::Path::new(&opts.data_home)).context(CreateDirSnafu {
dir: &opts.data_home,
})?;
fs::create_dir_all(path::Path::new(data_home))
.context(CreateDirSnafu { dir: data_home })?;
let metadata_dir = metadata_store_dir(&opts.data_home);
let metadata_dir = metadata_store_dir(data_home);
let (kv_backend, procedure_manager) = FeInstance::try_build_standalone_components(
metadata_dir,
opts.metadata_store.clone(),
@@ -396,20 +410,52 @@ impl StartCommand {
.await
.context(StartFrontendSnafu)?;
let multi_cache_invalidator = Arc::new(MultiCacheInvalidator::default());
// Builds cache registry
let layered_cache_builder = LayeredCacheRegistryBuilder::default();
let fundamental_cache_registry = build_fundamental_cache_registry(kv_backend.clone());
let layered_cache_registry = Arc::new(
with_default_composite_cache_registry(
layered_cache_builder.add_cache_registry(fundamental_cache_registry),
)
.context(BuildCacheRegistrySnafu)?
.build(),
);
let table_cache = layered_cache_registry.get().context(CacheRequiredSnafu {
name: TABLE_CACHE_NAME,
})?;
let table_route_cache = layered_cache_registry.get().context(CacheRequiredSnafu {
name: TABLE_ROUTE_CACHE_NAME,
})?;
let catalog_manager = KvBackendCatalogManager::new(
dn_opts.mode,
None,
kv_backend.clone(),
multi_cache_invalidator.clone(),
table_cache,
table_route_cache,
)
.await;
let table_metadata_manager =
Self::create_table_metadata_manager(kv_backend.clone()).await?;
let flow_builder = FlownodeBuilder::new(
1,
Default::default(),
fe_plugins.clone(),
table_metadata_manager.clone(),
catalog_manager.clone(),
);
let flownode = Arc::new(flow_builder.build().await);
let builder =
DatanodeBuilder::new(dn_opts, fe_plugins.clone()).with_kv_backend(kv_backend.clone());
let datanode = builder.build().await.context(StartDatanodeSnafu)?;
let node_manager = Arc::new(StandaloneDatanodeManager(datanode.region_server()));
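// In standalone mode the node manager now routes region requests to the local region server
// and flow requests to the embedded flownode built above.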
let node_manager = Arc::new(StandaloneDatanodeManager {
region_server: datanode.region_server(),
flow_server: flownode.clone(),
});
let table_id_sequence = Arc::new(
SequenceBuilder::new(TABLE_ID_SEQ, kv_backend.clone())
@@ -424,11 +470,9 @@ impl StartCommand {
.build(),
);
let wal_options_allocator = Arc::new(WalOptionsAllocator::new(
opts.wal_meta.clone(),
opts.wal.into(),
kv_backend.clone(),
));
let table_metadata_manager =
Self::create_table_metadata_manager(kv_backend.clone()).await?;
let flow_metadata_manager = Arc::new(FlowMetadataManager::new(kv_backend.clone()));
let table_meta_allocator = Arc::new(TableMetadataAllocator::new(
table_id_sequence,
@@ -441,7 +485,7 @@ impl StartCommand {
let ddl_task_executor = Self::create_ddl_task_executor(
procedure_manager.clone(),
node_manager.clone(),
multi_cache_invalidator,
layered_cache_registry.clone(),
table_metadata_manager,
table_meta_allocator,
flow_metadata_manager,
@@ -449,12 +493,24 @@ impl StartCommand {
)
.await?;
let mut frontend =
FrontendBuilder::new(kv_backend, catalog_manager, node_manager, ddl_task_executor)
.with_plugin(fe_plugins.clone())
.try_build()
.await
.context(StartFrontendSnafu)?;
let mut frontend = FrontendBuilder::new(
kv_backend,
layered_cache_registry,
catalog_manager,
node_manager,
ddl_task_executor,
)
.with_plugin(fe_plugins.clone())
.try_build()
.await
.context(StartFrontendSnafu)?;
// flow server need to be able to use frontend to write insert requests back
flownode
.set_frontend_invoker(Box::new(frontend.clone()))
.await;
// TODO(discord9): unify with adding `start` and `shutdown` method to flownode too.
let _handle = flownode.clone().run_background();
let servers = Services::new(fe_opts.clone(), Arc::new(frontend.clone()), fe_plugins)
.build()
@@ -469,6 +525,7 @@ impl StartCommand {
frontend,
procedure_manager,
wal_options_allocator,
_guard: guard,
})
}
@@ -523,13 +580,14 @@ mod tests {
use auth::{Identity, Password, UserProviderRef};
use common_base::readable_size::ReadableSize;
use common_config::ENV_VAR_SEP;
use common_test_util::temp_dir::create_named_temp_file;
use common_wal::config::DatanodeWalConfig;
use datanode::config::{FileConfig, GcsConfig};
use servers::Mode;
use super::*;
use crate::options::{GlobalOptions, ENV_VAR_SEP};
use crate::options::GlobalOptions;
#[tokio::test]
async fn test_try_from_start_command_to_anymap() {
@@ -617,12 +675,9 @@ mod tests {
..Default::default()
};
let Options::Standalone(options) = cmd.load_options(&GlobalOptions::default()).unwrap()
else {
unreachable!()
};
let fe_opts = options.frontend;
let dn_opts = options.datanode;
let options = cmd.load_options(&GlobalOptions::default()).unwrap();
let fe_opts = options.frontend_options();
let dn_opts = options.datanode_options();
let logging_opts = options.logging;
assert_eq!(Mode::Standalone, fe_opts.mode);
assert_eq!("127.0.0.1:4000".to_string(), fe_opts.http.addr);
@@ -673,7 +728,7 @@ mod tests {
..Default::default()
};
let Options::Standalone(opts) = cmd
let opts = cmd
.load_options(&GlobalOptions {
log_dir: Some("/tmp/greptimedb/test/logs".to_string()),
log_level: Some("debug".to_string()),
@@ -681,10 +736,7 @@ mod tests {
#[cfg(feature = "tokio-console")]
tokio_console_addr: None,
})
.unwrap()
else {
unreachable!()
};
.unwrap();
assert_eq!("/tmp/greptimedb/test/logs", opts.logging.dir);
assert_eq!("debug", opts.logging.level.unwrap());
@@ -746,11 +798,7 @@ mod tests {
..Default::default()
};
let Options::Standalone(opts) =
command.load_options(&GlobalOptions::default()).unwrap()
else {
unreachable!()
};
let opts = command.load_options(&GlobalOptions::default()).unwrap();
// Should be read from env, env > default values.
assert_eq!(opts.logging.dir, "/other/log/dir");
@@ -759,19 +807,20 @@ mod tests {
assert_eq!(opts.logging.level.as_ref().unwrap(), "debug");
// Should be read from cli, cli > config file > env > default values.
assert_eq!(opts.frontend.http.addr, "127.0.0.1:14000");
assert_eq!(ReadableSize::mb(64), opts.frontend.http.body_limit);
let fe_opts = opts.frontend_options();
assert_eq!(fe_opts.http.addr, "127.0.0.1:14000");
assert_eq!(ReadableSize::mb(64), fe_opts.http.body_limit);
// Should be default value.
assert_eq!(opts.frontend.grpc.addr, GrpcOptions::default().addr);
assert_eq!(fe_opts.grpc.addr, GrpcOptions::default().addr);
},
);
}
#[test]
fn test_load_default_standalone_options() {
let options: StandaloneOptions =
Options::load_layered_options(None, "GREPTIMEDB_FRONTEND", None).unwrap();
let options =
StandaloneOptions::load_layered_options(None, "GREPTIMEDB_STANDALONE").unwrap();
let default_options = StandaloneOptions::default();
assert_eq!(options.mode, default_options.mode);
assert_eq!(options.enable_telemetry, default_options.enable_telemetry);


@@ -33,16 +33,21 @@ pub enum Error {
Overflow {
src_len: usize,
dst_len: usize,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Buffer underflow"))]
Underflow { location: Location },
Underflow {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("IO operation reach EOF"))]
Eof {
#[snafu(source)]
error: std::io::Error,
#[snafu(implicit)]
location: Location,
},
}


@@ -26,6 +26,7 @@ pub enum Error {
#[snafu(display("Invalid full table name: {}", table_name))]
InvalidFullTableName {
table_name: String,
#[snafu(implicit)]
location: Location,
},
}


@@ -9,6 +9,22 @@ workspace = true
[dependencies]
common-base.workspace = true
common-error.workspace = true
common-macro.workspace = true
config.workspace = true
num_cpus.workspace = true
serde.workspace = true
serde_json.workspace = true
snafu.workspace = true
sysinfo.workspace = true
toml.workspace = true
[dev-dependencies]
common-telemetry.workspace = true
common-test-util.workspace = true
common-wal.workspace = true
datanode.workspace = true
meta-client.workspace = true
serde.workspace = true
temp-env = "0.3"
tempfile.workspace = true


@@ -0,0 +1,248 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use config::{Environment, File, FileFormat};
use serde::{Deserialize, Serialize};
use snafu::ResultExt;
use crate::error::{LoadLayeredConfigSnafu, Result, SerdeJsonSnafu, TomlFormatSnafu};
/// Separator for environment variables. For example, `DATANODE__STORAGE__MANIFEST__CHECKPOINT_MARGIN`.
pub const ENV_VAR_SEP: &str = "__";
/// Separator for list values in environment variables. For example, `localhost:3001,localhost:3002,localhost:3003`.
pub const ENV_LIST_SEP: &str = ",";
/// Configuration trait defines the common interface for configuration that can be loaded from multiple sources and serialized to TOML.
pub trait Configurable<'de>: Serialize + Deserialize<'de> + Default + Sized {
/// Load the configuration from multiple sources and merge them.
/// The precedence order is: config file > environment variables > default values.
/// `env_prefix` is the prefix of environment variables, e.g. "FRONTEND__xxx".
/// The function uses dunder (double underscore) `__` as the separator for environment variables, for example:
/// `DATANODE__STORAGE__MANIFEST__CHECKPOINT_MARGIN` will be mapped to the `DatanodeOptions.storage.manifest.checkpoint_margin` field in the configuration.
/// Keys returned by [`Self::env_list_keys`] are parsed as lists; for example, returning `Some(&["meta_client_options.metasrv_addrs"])` lets `GREPTIMEDB_METASRV__META_CLIENT_OPTIONS__METASRV_ADDRS` be parsed as a list.
/// The function will use comma `,` as the separator for list values, for example: `127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003`.
fn load_layered_options(config_file: Option<&str>, env_prefix: &str) -> Result<Self> {
let default_opts = Self::default();
let env_source = {
let mut env = Environment::default();
if !env_prefix.is_empty() {
env = env.prefix(env_prefix);
}
if let Some(list_keys) = Self::env_list_keys() {
env = env.list_separator(ENV_LIST_SEP);
for key in list_keys {
env = env.with_list_parse_key(key);
}
}
env.try_parsing(true)
.separator(ENV_VAR_SEP)
.ignore_empty(true)
};
// Workaround: a replacement for `Config::try_from(&default_opts)`, because
// `ConfigSerializer` cannot handle the case of an empty struct contained
// within an iterative structure.
// See: https://github.com/mehcode/config-rs/issues/461
let json_str = serde_json::to_string(&default_opts).context(SerdeJsonSnafu)?;
let default_config = File::from_str(&json_str, FileFormat::Json);
// Add default values and environment variables as the sources of the configuration.
let mut layered_config = config::Config::builder()
.add_source(default_config)
.add_source(env_source);
// Add config file as the source of the configuration if it is specified.
if let Some(config_file) = config_file {
layered_config = layered_config.add_source(File::new(config_file, FileFormat::Toml));
}
let opts = layered_config
.build()
.and_then(|x| x.try_deserialize())
.context(LoadLayeredConfigSnafu)?;
Ok(opts)
}
/// List of toml keys that should be parsed as a list.
fn env_list_keys() -> Option<&'static [&'static str]> {
None
}
/// Serialize the configuration to a TOML string.
fn to_toml(&self) -> Result<String> {
toml::to_string(&self).context(TomlFormatSnafu)
}
}
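// Illustrative sketch (not part of this change): a hypothetical options struct picking up the
// trait's layered loading. With the default `env_list_keys`, `MY_APP__HTTP_ADDR=0.0.0.0:4000`
// overrides the derived default, while a config file passed as the first argument takes
// precedence over environment variables. `MyOptions` and the prefix are made up for the example.
#[derive(Debug, Default, Serialize, Deserialize)]
struct MyOptions {
    http_addr: String,
}

impl Configurable<'_> for MyOptions {}

fn load_my_options() -> Result<MyOptions> {
    MyOptions::load_layered_options(None, "MY_APP")
}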
#[cfg(test)]
mod tests {
use std::io::Write;
use common_telemetry::logging::LoggingOptions;
use common_test_util::temp_dir::create_named_temp_file;
use common_wal::config::DatanodeWalConfig;
use datanode::config::{ObjectStoreConfig, StorageConfig};
use meta_client::MetaClientOptions;
use serde::{Deserialize, Serialize};
use super::*;
use crate::Mode;
#[derive(Debug, Serialize, Deserialize)]
struct TestDatanodeConfig {
mode: Mode,
node_id: Option<u64>,
logging: LoggingOptions,
meta_client: Option<MetaClientOptions>,
wal: DatanodeWalConfig,
storage: StorageConfig,
}
impl Default for TestDatanodeConfig {
fn default() -> Self {
Self {
mode: Mode::Distributed,
node_id: None,
logging: LoggingOptions::default(),
meta_client: None,
wal: DatanodeWalConfig::default(),
storage: StorageConfig::default(),
}
}
}
impl Configurable<'_> for TestDatanodeConfig {
fn env_list_keys() -> Option<&'static [&'static str]> {
Some(&["meta_client.metasrv_addrs"])
}
}
#[test]
fn test_load_layered_options() {
let mut file = create_named_temp_file();
let toml_str = r#"
mode = "distributed"
enable_memory_catalog = false
rpc_addr = "127.0.0.1:3001"
rpc_hostname = "127.0.0.1"
rpc_runtime_size = 8
mysql_addr = "127.0.0.1:4406"
mysql_runtime_size = 2
[meta_client]
timeout = "3s"
connect_timeout = "5s"
tcp_nodelay = true
[wal]
provider = "raft_engine"
dir = "/tmp/greptimedb/wal"
file_size = "1GB"
purge_threshold = "50GB"
purge_interval = "10m"
read_batch_size = 128
sync_write = false
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
"#;
write!(file, "{}", toml_str).unwrap();
let env_prefix = "DATANODE_UT";
temp_env::with_vars(
// The following environment variables will be used to override the values in the config file.
[
(
// storage.type = S3
[
env_prefix.to_string(),
"storage".to_uppercase(),
"type".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("S3"),
),
(
// storage.bucket = mybucket
[
env_prefix.to_string(),
"storage".to_uppercase(),
"bucket".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("mybucket"),
),
(
// wal.dir = /other/wal/dir
[
env_prefix.to_string(),
"wal".to_uppercase(),
"dir".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("/other/wal/dir"),
),
(
// meta_client.metasrv_addrs = 127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003
[
env_prefix.to_string(),
"meta_client".to_uppercase(),
"metasrv_addrs".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003"),
),
],
|| {
let opts = TestDatanodeConfig::load_layered_options(
Some(file.path().to_str().unwrap()),
env_prefix,
)
.unwrap();
// Check the configs from environment variables.
match &opts.storage.store {
ObjectStoreConfig::S3(s3_config) => {
assert_eq!(s3_config.bucket, "mybucket".to_string());
}
_ => panic!("unexpected store type"),
}
assert_eq!(
opts.meta_client.unwrap().metasrv_addrs,
vec![
"127.0.0.1:3001".to_string(),
"127.0.0.1:3002".to_string(),
"127.0.0.1:3003".to_string()
]
);
// Should be the values from config file, not environment variables.
let DatanodeWalConfig::RaftEngine(raft_engine_config) = opts.wal else {
unreachable!()
};
assert_eq!(raft_engine_config.dir.unwrap(), "/tmp/greptimedb/wal");
// Should be default values.
assert_eq!(opts.node_id, None);
},
);
}
}


@@ -0,0 +1,67 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use config::ConfigError;
use snafu::{Location, Snafu};
pub type Result<T> = std::result::Result<T, Error>;
#[derive(Snafu)]
#[snafu(visibility(pub))]
#[stack_trace_debug]
pub enum Error {
#[snafu(display("Failed to load layered config"))]
LoadLayeredConfig {
#[snafu(source)]
error: ConfigError,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to serde json"))]
SerdeJson {
#[snafu(source)]
error: serde_json::error::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to serialize options to TOML"))]
TomlFormat {
#[snafu(source)]
error: toml::ser::Error,
#[snafu(implicit)]
location: Location,
},
}
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::TomlFormat { .. } | Error::LoadLayeredConfig { .. } => {
StatusCode::InvalidArguments
}
Error::SerdeJson { .. } => StatusCode::Unexpected,
}
}
fn as_any(&self) -> &dyn Any {
self
}
}


@@ -12,9 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod config;
pub mod error;
pub mod utils;
use common_base::readable_size::ReadableSize;
pub use config::*;
use serde::{Deserialize, Serialize};
pub fn metadata_store_dir(store_dir: &str) -> String {


@@ -30,7 +30,7 @@ derive_builder.workspace = true
futures.workspace = true
lazy_static.workspace = true
object-store.workspace = true
orc-rust = { git = "https://github.com/MichaelScofield/orc-rs.git", rev = "17347f5f084ac937863317df882218055c4ea8c1" }
orc-rust = { git = "https://github.com/datafusion-contrib/datafusion-orc.git", rev = "502217315726314c4008808fe169764529640599" }
parquet.workspace = true
paste = "1.0"
regex = "1.7"


@@ -29,27 +29,38 @@ pub enum Error {
#[snafu(display("Unsupported compression type: {}", compression_type))]
UnsupportedCompressionType {
compression_type: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Unsupported backend protocol: {}, url: {}", protocol, url))]
UnsupportedBackendProtocol {
protocol: String,
#[snafu(implicit)]
location: Location,
url: String,
},
#[snafu(display("Unsupported format protocol: {}", format))]
UnsupportedFormat { format: String, location: Location },
UnsupportedFormat {
format: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("empty host: {}", url))]
EmptyHostPath { url: String, location: Location },
EmptyHostPath {
url: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid url: {}", url))]
InvalidUrl {
url: String,
#[snafu(source)]
error: ParseError,
#[snafu(implicit)]
location: Location,
},
@@ -57,19 +68,22 @@ pub enum Error {
BuildBackend {
#[snafu(source)]
error: object_store::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to build orc reader"))]
OrcReader {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: orc_rust::error::Error,
error: orc_rust::error::OrcError,
},
#[snafu(display("Failed to read object from path: {}", path))]
ReadObject {
path: String,
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: object_store::Error,
@@ -78,6 +92,7 @@ pub enum Error {
#[snafu(display("Failed to write object to path: {}", path))]
WriteObject {
path: String,
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: object_store::Error,
@@ -87,11 +102,13 @@ pub enum Error {
AsyncWrite {
#[snafu(source)]
error: std::io::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to write record batch"))]
WriteRecordBatch {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: ArrowError,
@@ -99,6 +116,7 @@ pub enum Error {
#[snafu(display("Failed to encode record batch"))]
EncodeRecordBatch {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: ParquetError,
@@ -106,6 +124,7 @@ pub enum Error {
#[snafu(display("Failed to read record batch"))]
ReadRecordBatch {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: datafusion::error::DataFusionError,
@@ -113,6 +132,7 @@ pub enum Error {
#[snafu(display("Failed to read parquet"))]
ReadParquetSnafu {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: datafusion::parquet::errors::ParquetError,
@@ -120,6 +140,7 @@ pub enum Error {
#[snafu(display("Failed to convert parquet to schema"))]
ParquetToSchema {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: datafusion::parquet::errors::ParquetError,
@@ -127,6 +148,7 @@ pub enum Error {
#[snafu(display("Failed to infer schema from file"))]
InferSchema {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: arrow_schema::ArrowError,
@@ -135,16 +157,22 @@ pub enum Error {
#[snafu(display("Failed to list object in path: {}", path))]
ListObjects {
path: String,
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: object_store::Error,
},
#[snafu(display("Invalid connection: {}", msg))]
InvalidConnection { msg: String, location: Location },
InvalidConnection {
msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to join handle"))]
JoinHandle {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: tokio::task::JoinError,
@@ -154,6 +182,7 @@ pub enum Error {
ParseFormat {
key: &'static str,
value: String,
#[snafu(implicit)]
location: Location,
},
@@ -161,15 +190,20 @@ pub enum Error {
MergeSchema {
#[snafu(source)]
error: arrow_schema::ArrowError,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Buffered writer closed"))]
BufferedWriterClosed { location: Location },
BufferedWriterClosed {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to write parquet file, path: {}", path))]
WriteParquet {
path: String,
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: parquet::errors::ParquetError,


@@ -21,9 +21,8 @@ use datafusion::datasource::physical_plan::{FileMeta, FileOpenFuture, FileOpener
use datafusion::error::{DataFusionError, Result as DfResult};
use futures::{StreamExt, TryStreamExt};
use object_store::ObjectStore;
use orc_rust::arrow_reader::{create_arrow_schema, Cursor};
use orc_rust::arrow_reader::ArrowReaderBuilder;
use orc_rust::async_arrow_reader::ArrowStreamReader;
use orc_rust::reader::Reader;
use snafu::ResultExt;
use tokio::io::{AsyncRead, AsyncSeek};
@@ -33,28 +32,20 @@ use crate::file_format::FileFormat;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub struct OrcFormat;
pub async fn new_orc_cursor<R: AsyncRead + AsyncSeek + Unpin + Send + 'static>(
reader: R,
) -> Result<Cursor<R>> {
let reader = Reader::new_async(reader)
.await
.context(error::OrcReaderSnafu)?;
let cursor = Cursor::root(reader).context(error::OrcReaderSnafu)?;
Ok(cursor)
}
pub async fn new_orc_stream_reader<R: AsyncRead + AsyncSeek + Unpin + Send + 'static>(
reader: R,
) -> Result<ArrowStreamReader<R>> {
let cursor = new_orc_cursor(reader).await?;
Ok(ArrowStreamReader::new(cursor, None))
let reader_build = ArrowReaderBuilder::try_new_async(reader)
.await
.context(error::OrcReaderSnafu)?;
Ok(reader_build.build_async())
}
pub async fn infer_orc_schema<R: AsyncRead + AsyncSeek + Unpin + Send + 'static>(
reader: R,
) -> Result<Schema> {
let cursor = new_orc_cursor(reader).await?;
Ok(create_arrow_schema(&cursor))
let reader = new_orc_stream_reader(reader).await?;
Ok(reader.schema().as_ref().clone())
}
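// Illustrative sketch (not part of this change): inferring the Arrow schema of a local ORC
// file. `tokio::fs::File` satisfies the `AsyncRead + AsyncSeek + Unpin + Send` bound, assuming
// tokio's `fs` feature is available here; the path is hypothetical.
async fn print_orc_schema() -> Result<()> {
    let file = tokio::fs::File::open("/tmp/example.orc")
        .await
        .expect("failed to open the example ORC file");
    let schema = infer_orc_schema(file).await?;
    println!("inferred schema: {schema:?}");
    Ok(())
}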
#[async_trait]


@@ -25,6 +25,7 @@ pub enum Error {
#[snafu(display("Decimal out of range, decimal value: {}", value))]
BigDecimalOutOfRange {
value: BigDecimal,
#[snafu(implicit)]
location: Location,
},
@@ -43,7 +44,11 @@ pub enum Error {
},
#[snafu(display("Invalid precision or scale, resion: {}", reason))]
InvalidPrecisionOrScale { reason: String, location: Location },
InvalidPrecisionOrScale {
reason: String,
#[snafu(implicit)]
location: Location,
},
}
impl ErrorExt for Error {


@@ -23,6 +23,7 @@ use snafu::{Location, Snafu};
pub enum Error {
#[snafu(display("External error"))]
External {
#[snafu(implicit)]
location: Location,
source: BoxedError,
},


@@ -13,7 +13,9 @@
// limitations under the License.
use std::fmt;
use std::str::FromStr;
use api::v1::region::{compact_request, StrictWindow};
use common_error::ext::BoxedError;
use common_macro::admin_fn;
use common_query::error::Error::ThreadJoin;
@@ -22,7 +24,7 @@ use common_query::error::{
UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::{Signature, Volatility};
use common_telemetry::error;
use common_telemetry::{error, info};
use datatypes::prelude::*;
use datatypes::vectors::VectorRef;
use session::context::QueryContextRef;
@@ -34,71 +36,78 @@ use crate::ensure_greptime;
use crate::function::{Function, FunctionContext};
use crate::handlers::TableMutationHandlerRef;
macro_rules! define_table_function {
($name: expr, $display_name_str: expr, $display_name: ident, $func: ident, $request: ident) => {
/// A function to $func table, such as `$display_name(table_name)`.
#[admin_fn(name = $name, display_name = $display_name_str, sig_fn = "signature", ret = "uint64")]
pub(crate) async fn $display_name(
table_mutation_handler: &TableMutationHandlerRef,
query_ctx: &QueryContextRef,
params: &[ValueRef<'_>],
) -> Result<Value> {
ensure!(
params.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 1, have: {}",
params.len()
),
}
);
/// Compact type: strict window.
const COMPACT_TYPE_STRICT_WINDOW: &str = "strict_window";
let ValueRef::String(table_name) = params[0] else {
return UnsupportedInputDataTypeSnafu {
function: $display_name_str,
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
let (catalog_name, schema_name, table_name) =
table_name_to_full_name(table_name, &query_ctx)
.map_err(BoxedError::new)
.context(TableMutationSnafu)?;
let affected_rows = table_mutation_handler
.$func(
$request {
catalog_name,
schema_name,
table_name,
},
query_ctx.clone(),
)
.await?;
Ok(Value::from(affected_rows as u64))
#[admin_fn(
name = "FlushTableFunction",
display_name = "flush_table",
sig_fn = "flush_signature",
ret = "uint64"
)]
pub(crate) async fn flush_table(
table_mutation_handler: &TableMutationHandlerRef,
query_ctx: &QueryContextRef,
params: &[ValueRef<'_>],
) -> Result<Value> {
ensure!(
params.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 1, have: {}",
params.len()
),
}
);
let ValueRef::String(table_name) = params[0] else {
return UnsupportedInputDataTypeSnafu {
function: "flush_table",
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
let (catalog_name, schema_name, table_name) = table_name_to_full_name(table_name, query_ctx)
.map_err(BoxedError::new)
.context(TableMutationSnafu)?;
let affected_rows = table_mutation_handler
.flush(
FlushTableRequest {
catalog_name,
schema_name,
table_name,
},
query_ctx.clone(),
)
.await?;
Ok(Value::from(affected_rows as u64))
}
define_table_function!(
"FlushTableFunction",
"flush_table",
flush_table,
flush,
FlushTableRequest
);
#[admin_fn(
name = "CompactTableFunction",
display_name = "compact_table",
sig_fn = "compact_signature",
ret = "uint64"
)]
pub(crate) async fn compact_table(
table_mutation_handler: &TableMutationHandlerRef,
query_ctx: &QueryContextRef,
params: &[ValueRef<'_>],
) -> Result<Value> {
let request = parse_compact_params(params, query_ctx)?;
info!("Compact table request: {:?}", request);
define_table_function!(
"CompactTableFunction",
"compact_table",
compact_table,
compact,
CompactTableRequest
);
let affected_rows = table_mutation_handler
.compact(request, query_ctx.clone())
.await?;
fn signature() -> Signature {
Ok(Value::from(affected_rows as u64))
}
fn flush_signature() -> Signature {
Signature::uniform(
1,
vec![ConcreteDataType::string_datatype()],
@@ -106,12 +115,98 @@ fn signature() -> Signature {
)
}
fn compact_signature() -> Signature {
Signature::variadic(
vec![ConcreteDataType::string_datatype()],
Volatility::Immutable,
)
}
/// Parses `compact_table` UDF parameters. This function accepts the following combinations:
/// - `[<table_name>]`: only the table name is provided; the default compaction type (regular) is used.
/// - `[<table_name>, <type>]`: specifies the table name and compaction type; the compaction options use defaults.
/// - `[<table_name>, <type>, <options>]`: provides both type and type-specific options.
fn parse_compact_params(
params: &[ValueRef<'_>],
query_ctx: &QueryContextRef,
) -> Result<CompactTableRequest> {
ensure!(
!params.is_empty(),
InvalidFuncArgsSnafu {
err_msg: "Args cannot be empty",
}
);
let (table_name, compact_type) = match params {
[ValueRef::String(table_name)] => (
table_name,
compact_request::Options::Regular(Default::default()),
),
[ValueRef::String(table_name), ValueRef::String(compact_ty_str)] => {
let compact_type = parse_compact_type(compact_ty_str, None)?;
(table_name, compact_type)
}
[ValueRef::String(table_name), ValueRef::String(compact_ty_str), ValueRef::String(options_str)] =>
{
let compact_type = parse_compact_type(compact_ty_str, Some(options_str))?;
(table_name, compact_type)
}
_ => {
return UnsupportedInputDataTypeSnafu {
function: "compact_table",
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail()
}
};
let (catalog_name, schema_name, table_name) = table_name_to_full_name(table_name, query_ctx)
.map_err(BoxedError::new)
.context(TableMutationSnafu)?;
Ok(CompactTableRequest {
catalog_name,
schema_name,
table_name,
compact_options: compact_type,
})
}
fn parse_compact_type(type_str: &str, option: Option<&str>) -> Result<compact_request::Options> {
if type_str.eq_ignore_ascii_case(COMPACT_TYPE_STRICT_WINDOW) {
let window_seconds = option
.map(|v| {
i64::from_str(v).map_err(|_| {
InvalidFuncArgsSnafu {
err_msg: format!(
"Compact window is expected to be a valid number, provided: {}",
v
),
}
.build()
})
})
.transpose()?
.unwrap_or(0);
Ok(compact_request::Options::StrictWindow(StrictWindow {
window_seconds,
}))
} else {
Ok(compact_request::Options::Regular(Default::default()))
}
}
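// Illustrative sketch (not part of this change): the three accepted argument shapes for
// `compact_table`, run through `parse_compact_params`. The table name is hypothetical; the
// same combinations are exercised by the tests below.
fn compact_request_examples(ctx: &QueryContextRef) -> Result<()> {
    // Regular compaction with default options.
    let _regular = parse_compact_params(&[ValueRef::String("my_table")], ctx)?;
    // Strict-window compaction with the default (0 second) window.
    let _strict = parse_compact_params(
        &[
            ValueRef::String("my_table"),
            ValueRef::String("strict_window"),
        ],
        ctx,
    )?;
    // Strict-window compaction with an explicit one-hour window.
    let _strict_1h = parse_compact_params(
        &[
            ValueRef::String("my_table"),
            ValueRef::String("strict_window"),
            ValueRef::String("3600"),
        ],
        ctx,
    )?;
    Ok(())
}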
#[cfg(test)]
mod tests {
use std::sync::Arc;
use api::v1::region::compact_request::Options;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_query::prelude::TypeSignature;
use datatypes::vectors::{StringVector, UInt64Vector};
use session::context::QueryContext;
use super::*;
@@ -174,5 +269,109 @@ mod tests {
define_table_function_test!(flush_table, FlushTableFunction);
define_table_function_test!(compact_table, CompactTableFunction);
fn check_parse_compact_params(cases: &[(&[&str], CompactTableRequest)]) {
for (params, expected) in cases {
let params = params
.iter()
.map(|s| ValueRef::String(s))
.collect::<Vec<_>>();
assert_eq!(
expected,
&parse_compact_params(&params, &QueryContext::arc()).unwrap()
);
}
}
#[test]
fn test_parse_compact_params() {
check_parse_compact_params(&[
(
&["table"],
CompactTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "table".to_string(),
compact_options: Options::Regular(Default::default()),
},
),
(
&[&format!("{}.table", DEFAULT_SCHEMA_NAME)],
CompactTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "table".to_string(),
compact_options: Options::Regular(Default::default()),
},
),
(
&[&format!(
"{}.{}.table",
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME
)],
CompactTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "table".to_string(),
compact_options: Options::Regular(Default::default()),
},
),
(
&["table", "regular"],
CompactTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "table".to_string(),
compact_options: Options::Regular(Default::default()),
},
),
(
&["table", "strict_window"],
CompactTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "table".to_string(),
compact_options: Options::StrictWindow(StrictWindow { window_seconds: 0 }),
},
),
(
&["table", "strict_window", "3600"],
CompactTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "table".to_string(),
compact_options: Options::StrictWindow(StrictWindow {
window_seconds: 3600,
}),
},
),
(
&["table", "regular", "abcd"],
CompactTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "table".to_string(),
compact_options: Options::Regular(Default::default()),
},
),
]);
assert!(parse_compact_params(
&["table", "strict_window", "abc"]
.into_iter()
.map(ValueRef::String)
.collect::<Vec<_>>(),
&QueryContext::arc(),
)
.is_err());
assert!(parse_compact_params(
&["a.b.table", "strict_window", "abc"]
.into_iter()
.map(ValueRef::String)
.collect::<Vec<_>>(),
&QueryContext::arc(),
)
.is_err());
}
}

View File

@@ -24,10 +24,15 @@ use snafu::{Location, Snafu};
#[stack_trace_debug]
pub enum Error {
#[snafu(display("Illegal delete request, reason: {reason}"))]
IllegalDeleteRequest { reason: String, location: Location },
IllegalDeleteRequest {
reason: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Column datatype error"))]
ColumnDataType {
#[snafu(implicit)]
location: Location,
source: api::error::Error,
},
@@ -40,39 +45,63 @@ pub enum Error {
DuplicatedTimestampColumn {
exists: String,
duplicated: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Duplicated column name in gRPC requests, name: {}", name,))]
DuplicatedColumnName { name: String, location: Location },
DuplicatedColumnName {
name: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Missing timestamp column, msg: {}", msg))]
MissingTimestampColumn { msg: String, location: Location },
MissingTimestampColumn {
msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid column proto: {}", err_msg))]
InvalidColumnProto { err_msg: String, location: Location },
InvalidColumnProto {
err_msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to create vector"))]
CreateVector {
#[snafu(implicit)]
location: Location,
source: datatypes::error::Error,
},
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, location: Location },
MissingField {
field: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid column proto definition, column: {}", column))]
InvalidColumnDef {
column: String,
#[snafu(implicit)]
location: Location,
source: api::error::Error,
},
#[snafu(display("Unexpected values length, reason: {}", reason))]
UnexpectedValuesLength { reason: String, location: Location },
UnexpectedValuesLength {
reason: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Unknown location type: {}", location_type))]
UnknownLocationType {
location_type: i32,
#[snafu(implicit)]
location: Location,
},
}
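// Note: with newer snafu releases (0.8+), implicitly generated fields such as `Location` must be
// annotated with `#[snafu(implicit)]`; they are then filled in automatically when the error is
// constructed, so context selectors no longer take them as explicit arguments.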

View File

@@ -291,88 +291,68 @@ impl ChannelConfig {
}
/// A timeout to each request.
pub fn timeout(self, timeout: Duration) -> Self {
Self {
timeout: Some(timeout),
..self
}
pub fn timeout(mut self, timeout: Duration) -> Self {
self.timeout = Some(timeout);
self
}
/// A timeout to connecting to the uri.
///
/// Defaults to no timeout.
pub fn connect_timeout(self, timeout: Duration) -> Self {
Self {
connect_timeout: Some(timeout),
..self
}
pub fn connect_timeout(mut self, timeout: Duration) -> Self {
self.connect_timeout = Some(timeout);
self
}
/// A concurrency limit to each request.
pub fn concurrency_limit(self, limit: usize) -> Self {
Self {
concurrency_limit: Some(limit),
..self
}
pub fn concurrency_limit(mut self, limit: usize) -> Self {
self.concurrency_limit = Some(limit);
self
}
/// A rate limit to each request.
pub fn rate_limit(self, limit: u64, duration: Duration) -> Self {
Self {
rate_limit: Some((limit, duration)),
..self
}
pub fn rate_limit(mut self, limit: u64, duration: Duration) -> Self {
self.rate_limit = Some((limit, duration));
self
}
/// Sets the SETTINGS_INITIAL_WINDOW_SIZE option for HTTP2 stream-level flow control.
/// Default is 65,535
pub fn initial_stream_window_size(self, size: u32) -> Self {
Self {
initial_stream_window_size: Some(size),
..self
}
pub fn initial_stream_window_size(mut self, size: u32) -> Self {
self.initial_stream_window_size = Some(size);
self
}
/// Sets the max connection-level flow control for HTTP2
///
/// Default is 65,535
pub fn initial_connection_window_size(self, size: u32) -> Self {
Self {
initial_connection_window_size: Some(size),
..self
}
pub fn initial_connection_window_size(mut self, size: u32) -> Self {
self.initial_connection_window_size = Some(size);
self
}
/// Set http2 KEEP_ALIVE_INTERVAL. Uses hyper's default otherwise.
pub fn http2_keep_alive_interval(self, duration: Duration) -> Self {
Self {
http2_keep_alive_interval: Some(duration),
..self
}
pub fn http2_keep_alive_interval(mut self, duration: Duration) -> Self {
self.http2_keep_alive_interval = Some(duration);
self
}
/// Set http2 KEEP_ALIVE_TIMEOUT. Uses hyper's default otherwise.
pub fn http2_keep_alive_timeout(self, duration: Duration) -> Self {
Self {
http2_keep_alive_timeout: Some(duration),
..self
}
pub fn http2_keep_alive_timeout(mut self, duration: Duration) -> Self {
self.http2_keep_alive_timeout = Some(duration);
self
}
/// Set http2 KEEP_ALIVE_WHILE_IDLE. Uses hyper's default otherwise.
pub fn http2_keep_alive_while_idle(self, enabled: bool) -> Self {
Self {
http2_keep_alive_while_idle: Some(enabled),
..self
}
pub fn http2_keep_alive_while_idle(mut self, enabled: bool) -> Self {
self.http2_keep_alive_while_idle = Some(enabled);
self
}
/// Sets whether to use an adaptive flow control. Uses hyper's default otherwise.
pub fn http2_adaptive_window(self, enabled: bool) -> Self {
Self {
http2_adaptive_window: Some(enabled),
..self
}
pub fn http2_adaptive_window(mut self, enabled: bool) -> Self {
self.http2_adaptive_window = Some(enabled);
self
}
/// Set whether TCP keepalive messages are enabled on accepted connections.
@@ -381,31 +361,25 @@ impl ChannelConfig {
/// will be the time to remain idle before sending TCP keepalive probes.
///
/// Default is no keepalive (None)
pub fn tcp_keepalive(self, duration: Duration) -> Self {
Self {
tcp_keepalive: Some(duration),
..self
}
pub fn tcp_keepalive(mut self, duration: Duration) -> Self {
self.tcp_keepalive = Some(duration);
self
}
/// Set the value of TCP_NODELAY option for accepted connections.
///
/// Enabled by default.
pub fn tcp_nodelay(self, enabled: bool) -> Self {
Self {
tcp_nodelay: enabled,
..self
}
pub fn tcp_nodelay(mut self, enabled: bool) -> Self {
self.tcp_nodelay = enabled;
self
}
/// Set the value of tls client auth.
///
/// Disabled by default.
pub fn client_tls_config(self, client_tls_option: ClientTlsOption) -> Self {
Self {
client_tls: Some(client_tls_option),
..self
}
pub fn client_tls_config(mut self, client_tls_option: ClientTlsOption) -> Self {
self.client_tls = Some(client_tls_option);
self
}
}
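// A minimal usage sketch of the builder above (all values are illustrative, not defaults):
//
// let config = ChannelConfig::new()
//     .timeout(Duration::from_secs(5))
//     .connect_timeout(Duration::from_secs(1))
//     .tcp_keepalive(Duration::from_secs(60))
//     .tcp_nodelay(true)
//     .http2_keep_alive_interval(Duration::from_secs(30));
//
// Switching the setters from `Self { field: Some(..), ..self }` to `mut self` assignments keeps the
// same chaining API while avoiding reconstructing the whole struct on every call.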

View File

@@ -33,6 +33,7 @@ pub enum Error {
InvalidConfigFilePath {
#[snafu(source)]
error: io::Error,
#[snafu(implicit)]
location: Location,
},
@@ -46,6 +47,7 @@ pub enum Error {
column_name: String,
expected: String,
actual: String,
#[snafu(implicit)]
location: Location,
},
@@ -53,30 +55,42 @@ pub enum Error {
CreateChannel {
#[snafu(source)]
error: tonic::transport::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to create RecordBatch"))]
CreateRecordBatch {
#[snafu(implicit)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to convert Arrow type: {}", from))]
Conversion { from: String, location: Location },
Conversion {
from: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to decode FlightData"))]
DecodeFlightData {
#[snafu(source)]
error: api::DecodeError,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid FlightData, reason: {}", reason))]
InvalidFlightData { reason: String, location: Location },
InvalidFlightData {
reason: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to convert Arrow Schema"))]
ConvertArrowSchema {
#[snafu(implicit)]
location: Location,
source: datatypes::error::Error,
},

View File

@@ -23,9 +23,9 @@ async fn test_mtls_config() {
// test wrong file
let config = ChannelConfig::new().client_tls_config(ClientTlsOption {
server_ca_cert_path: "tests/tls/wrong_server.cert.pem".to_string(),
client_cert_path: "tests/tls/wrong_client.cert.pem".to_string(),
client_key_path: "tests/tls/wrong_client.key.pem".to_string(),
server_ca_cert_path: "tests/tls/wrong_ca.pem".to_string(),
client_cert_path: "tests/tls/wrong_client.pem".to_string(),
client_key_path: "tests/tls/wrong_client.key".to_string(),
});
let re = ChannelManager::with_tls_config(config);
@@ -33,8 +33,8 @@ async fn test_mtls_config() {
// test corrupted file content
let config = ChannelConfig::new().client_tls_config(ClientTlsOption {
server_ca_cert_path: "tests/tls/server.cert.pem".to_string(),
client_cert_path: "tests/tls/client.cert.pem".to_string(),
server_ca_cert_path: "tests/tls/ca.pem".to_string(),
client_cert_path: "tests/tls/client.pem".to_string(),
client_key_path: "tests/tls/corrupted".to_string(),
});
@@ -44,9 +44,9 @@ async fn test_mtls_config() {
// success
let config = ChannelConfig::new().client_tls_config(ClientTlsOption {
server_ca_cert_path: "tests/tls/server.cert.pem".to_string(),
client_cert_path: "tests/tls/client.cert.pem".to_string(),
client_key_path: "tests/tls/client.key.pem".to_string(),
server_ca_cert_path: "tests/tls/ca.pem".to_string(),
client_cert_path: "tests/tls/client.pem".to_string(),
client_key_path: "tests/tls/client.key".to_string(),
});
let re = ChannelManager::with_tls_config(config).unwrap();

View File

@@ -0,0 +1,28 @@
-----BEGIN CERTIFICATE-----
MIIE3DCCA0SgAwIBAgIRAObeYbJFiVQSGR8yk44dsOYwDQYJKoZIhvcNAQELBQAw
gYUxHjAcBgNVBAoTFW1rY2VydCBkZXZlbG9wbWVudCBDQTEtMCsGA1UECwwkbHVj
aW9ATHVjaW9zLVdvcmstTUJQIChMdWNpbyBGcmFuY28pMTQwMgYDVQQDDCtta2Nl
cnQgbHVjaW9ATHVjaW9zLVdvcmstTUJQIChMdWNpbyBGcmFuY28pMB4XDTE5MDky
OTIzMzUzM1oXDTI5MDkyOTIzMzUzM1owgYUxHjAcBgNVBAoTFW1rY2VydCBkZXZl
bG9wbWVudCBDQTEtMCsGA1UECwwkbHVjaW9ATHVjaW9zLVdvcmstTUJQIChMdWNp
byBGcmFuY28pMTQwMgYDVQQDDCtta2NlcnQgbHVjaW9ATHVjaW9zLVdvcmstTUJQ
IChMdWNpbyBGcmFuY28pMIIBojANBgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEA
y/vE61ItbN/1qMYt13LMf+le1svwfkCCOPsygk7nWeRXmomgUpymqn1LnWiuB0+e
4IdVH2f5E9DknWEpPhKIDMRTCbz4jTwQfHrxCb8EGj3I8oO73pJO5S/xCedM9OrZ
qWcYWwN0GQ8cO/ogazaoZf1uTrRNHyzRyQsKyb412kDBTNEeldJZ2ljKgXXvh4HO
2ZIk9K/ZAaAf6VN8K/89rlJ9/KPgRVNsyAapE+Pb8XXKtpzeFiEcUfuXVYWtkoW+
xyn/Zu8A1L2CXMQ1sARh7P/42BTMKr5pfraYgcBGxKXLrxoySpxCO9KqeVveKy1q
fPm5FCwFsXDr0koFLrCiR58mcIO/04Q9DKKTV4Z2a+LoqDJRY37KfBSc8sDMPhw5
k7g3WPoa6QwXRjZTCA5fHWVgLOtcwLsnju5tBE4LDxwF6s+1wPF8NI5yUfufcEjJ
Z6JBwgoWYosVj27Lx7KBNLU/57PX9ryee691zmtswt0tP0WVBAgalhYWg99RXoa3
AgMBAAGjRTBDMA4GA1UdDwEB/wQEAwICBDASBgNVHRMBAf8ECDAGAQH/AgEAMB0G
A1UdDgQWBBQdvlE4Bdcsjc9oaxjDCRu5FiuZkzANBgkqhkiG9w0BAQsFAAOCAYEA
BP/6o1kPINksMJZSSXgNCPZskDLyGw7auUZBnQ0ocDT3W6gXQvT/27LM1Hxoj9Eh
qU1TYdEt7ppecLQSGvzQ02MExG7H75art75oLiB+A5agDira937YbK4MCjqW481d
bDhw6ixJnY1jIvwjEZxyH6g94YyL927aSPch51fys0kSnjkFzC2RmuzDADScc4XH
5P1+/3dnIm3M5yfpeUzoaOrTXNmhn8p0RDIGrZ5kA5eISIGGD3Mm8FDssUNKndtO
g4ojHUsxb14icnAYGeye1NOhGiqN6TEFcgr6MPd0XdFNZ5c0HUaBCfN6bc+JxDV5
MKZVJdNeJsYYwilgJNHAyZgCi30JC20xeYVtTF7CEEsMrFDGJ70Kz7o/FnRiFsA1
ZSwVVWhhkHG2VkT4vlo0O3fYeZpenYicvy+wZNTbGK83gzHWqxxNC1z3Etg5+HRJ
F9qeMWPyfA3IHYXygiMcviyLcyNGG/SJ0EhUpYBN/Gg7wI5yFkcsxUDPPzd23O0M
-----END CERTIFICATE-----

Some files were not shown because too many files have changed in this diff.