Compare commits

...

133 Commits

Author SHA1 Message Date
Xieqijun
0b3f955ca7 feat: Add an error variant RetryLater (#1058)
* feat: support retry error

* fix: ci

* fix: ci

* fix: fmt

* feat: add convert procedure error

* docs: add rustdoc

* fix: cr

* fix: cr

* fix: rm useless code
2023-02-27 17:19:37 +08:00
Ning Sun
4b58a8a18d feat: update substrait and prost version (#1080) 2023-02-27 15:18:12 +08:00
Yingwen
bd377ef329 feat: Procedure to create table and register table to catalog (#1040)
* feat: Add table-procedures crate

* feat: Implement procedure to create table

* feat: Integrate procedure manager to datanode

* test: Test CreateTableProcedure

* refactor: Rename table-procedures to table-procedure

* feat: Implement create_table_by_procedure

* chore: Remove comment

* chore: Add todo

* feat: Add procedure config to standalone mode

* feat: Register table-procedure loaders

* feat: Address review comments

CreateTableProcedure just returns an error if the subprocedure fails

* chore: Address CR comments
2023-02-27 11:49:23 +08:00
LFC
df751c38b4 feat: a simple REPL for debugging purpose (#1048)
* feat: a simple REPL for debugging purpose

* fix: rebase develop
2023-02-27 11:00:15 +08:00
Yingwen
f6e871708a chore: Rename MetaClientOpts to MetaClientOptions (#1075)
* fix: Serialize FrontendOptions to toml

* fix: Serialize DatanodeOptions to toml

* fix: Serialize StandaloneOptions to toml

See https://users.rust-lang.org/t/why-toml-to-string-get-error-valueaftertable/85903/2

* chore!: Rename MetaClientOpts to MetaClientOptions

BREAKING CHANGE: Change the meta_client_opts in the config file to
meta_client_options
2023-02-24 16:28:38 +08:00
fys
819c990a89 fix: thread that reports the heartbeat panics in unit test (#1078)
fix: ut panic in heartbeat report thread
2023-02-24 15:36:32 +08:00
Ruihang Xia
a8b4e8d933 ci: simplify codecov comment (#1073)
chore(ci): simplify codecov comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-24 15:22:57 +08:00
Yingwen
710e2ed133 ci: Use fixed skywalking-eyes revision (#1076)
The latest PR of skywalking-eyes https://github.com/apache/skywalking-eyes/pull/149
breaks our CI action
2023-02-24 07:05:18 +00:00
Ning Sun
81eab74b90 refactor: remove grpc client constructor with default catalog/schema (#1060)
* refactor: remove grpc client with default catalog/schema

* refactor: re-export consts in client module
2023-02-24 11:06:14 +08:00
Ning Sun
8f67d8ca93 fix: update mysql server library to fix tls corrupt message issue (#1065) 2023-02-24 10:20:44 +08:00
Ruihang Xia
4cc3ac37d5 feat: add DictionaryVector DataType (#1061)
* fix stddev and stdvar. try build range function expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: add dictionary data type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* preserve timestamp column in range manipulator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* plan range functions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-23 20:31:07 +08:00
Lei, HUANG
b48c851b96 fix: support datetime type parsing (#1071)
* fix: support datetime type parsing

* fix: unit test
2023-02-23 20:26:47 +08:00
Xuanwo
fdd17c6eeb refactor: Clean up re-export of opendal services (#1067)
Signed-off-by: Xuanwo <github@xuanwo.io>
2023-02-23 14:12:34 +08:00
Ruihang Xia
51641db39e feat: support filter expression in PromQL (#1066)
feat: support filter expression

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-23 11:55:23 +08:00
Xuanwo
98ef74bff4 chore: Bump OpenDAL to v0.27 (#1057)
* Bump OpenDAL to v0.27

Signed-off-by: Xuanwo <github@xuanwo.io>

* Make cargo check happy

Signed-off-by: Xuanwo <github@xuanwo.io>

* Address comments

Signed-off-by: Xuanwo <github@xuanwo.io>

* Address comments

Signed-off-by: Xuanwo <github@xuanwo.io>

* Format toml

Signed-off-by: Xuanwo <github@xuanwo.io>

* Make taplo happy

Signed-off-by: Xuanwo <github@xuanwo.io>

---------

Signed-off-by: Xuanwo <github@xuanwo.io>
2023-02-23 11:20:45 +08:00
Lei, HUANG
f42acc90c2 fix: allow empty TableOptions (#1063)
fix: allow default TableOptions to avoid panic when upgrading from older versions
2023-02-22 19:19:13 +08:00
Lei, HUANG
2df8143ad5 feat: support table ttl (#1052)
* feat: purge expired sst on compaction

* chore: add more log

* fix: clippy

* fix: mark expired ssts as compacting before picking candidates

* fix: some CR comments

* fix: remove useless result

* fix: cr comments
2023-02-22 16:56:20 +08:00
shuiyisong
fb2e0c7cf3 feat: add auth to grpc handler (#1051)
* chore: get header in grpc & temp save

* chore: change authscheme to include data str

* chore: add auth to grpc flight handler

* chore: add unit test & hold for now since grpc api doesn't accept req input

* chore: minor change

* chore: minor change

* chore: add flight context to database interface

* chore: add test

* chore: update proto version & fix cr issue

* chore: add test

* chore: minor update
2023-02-22 15:20:10 +08:00
Xieqijun
390e9095f6 feat: admin http api (#1026)
* feat: catalog list

* feat: catalog list

* feat:api

* feat: leader info

* feat: use constant

* fix: ci

* feat: query heartbeat by ip

* ut: add test

* fix: cr

* fix: cr

* fix: cr
2023-02-22 14:18:37 +08:00
dennis zhuang
bcd44b90c1 feat: invoke TQL via SQL interface (#1047)
* feat: impl TQL parser in sqlparser

* feat: impl invoking TQL via SQL

* chore: remove src/sql/src/tql_parser.rs

* chore: fix typo

* test: add tql test

* chore: carry type

Co-authored-by: LFC <bayinamine@gmail.com>

* chore: cr comments

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-02-22 11:28:09 +08:00
Yingwen
c6f2db8ae0 feat(procedure): Add procedure watcher (#1043)
* refactor: Use watch channel to store ProcedureState

* feat: Add a watcher to wait for state change

* test: test watcher on procedure failure

* feat: Only clear message cache on success

* feat: submit returns Watcher
2023-02-21 17:19:39 +08:00
Lei, HUANG
e17d5a1c41 feat: support table options (#1044)
* feat: change table options from string map to a struct, add ttl and write_buffer_size

* fix: also pass table options to table meta

* feat: pass table options when opening/creating regions

* fix: CR comments
2023-02-21 08:10:23 +00:00
Ruihang Xia
23092a5208 feat: Support unary, paren, bool keyword and nonexistent metric/label in PromQL (#1049)
* feat: don't report metric/label not found as error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: impl unary expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: impl paren expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: support bool keyword

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add some tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore nonexistent labels during planning

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-21 15:24:01 +08:00
Yingwen
4bbad6ab1e ci: allow ci pass when codecov can't upload data (#1046) 2023-02-21 14:52:44 +08:00
Zhizhen He
6833b405d9 ci: upgrade spell checker to 1.13.10 (#1045)
* ci: upgrade spell checker to 1.13.10

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>

* fix: fix existing typos

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>

* chore: use taplo to format typos.toml

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>

* chore: add fmt-toml rule to format TOML files

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>

---------

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>
2023-02-21 10:55:27 +08:00
Yingwen
aaaf24143d feat: Procedure to create a mito engine (#1035)
* feat: wip

* feat: Implement procedure to create mito table

* feat: Add create_table_procedure to TableEngine

* feat: Impl dump and lock for CreateMitoTable

* feat: Impl CreateMitoTable::execute and register it to manager

* feat(common-procedure): pub local mod

* feat: Add simple test for MitoCreateTable

* style: Fix clippy

* refactor: Move create_table_procedure to a new trait TableEngineProcedure
2023-02-21 09:40:56 +08:00
Jiachun Feng
9161796dfa feat: export the data from a table to parquet files (#1000)
* feat: copy table parser

* feat: copy table

* chore: minor fix

* chore: give stmt a clearer name

* chore: unified naming

* chore: minor change

* chore: add a todo

* chore: end up with an empty file when the table is empty

* feat: format with copy table

* feat: with options

* chore: by cr

* chore: default 5M rows per segment

* Update src/datanode/src/sql/copy_table.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* Update src/datanode/src/sql/copy_table.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* Update src/datanode/src/error.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-02-20 16:43:50 +08:00
Ruihang Xia
68b231987c feat: improve Prometheus compliance (#1022)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* minor (useless) refactor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* retrieve metric name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add time index column to group by columns
filter out NaN in normalize
remove NULL in instant manipulator
accept form data as HTTP params
correct API URL
accept second literal as step param

* happy clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-20 07:29:43 +00:00
Yingwen
6e9964ac97 refactor(storage): Simplify debug output of some structs (#1028)
* refactor: Simplify debug output of RegionImpl

* feat: Simplify memtable debug output
2023-02-20 14:35:30 +08:00
shuiyisong
6afd79cab8 feat: support InfluxDB auth protocol (#1034)
* chore: add http auth influxdb compat

* chore: add test

* chore: minor change

* chore: fix typo

* chore: fix cr
2023-02-20 03:26:19 +00:00
fys
4e88a01638 feat: support influxdb ping and health endpoint (#1027)
* feat: support influxdb ping and health endpoint

* add some unit tests

* ping and health api need no auth

* cr
2023-02-20 02:31:51 +00:00
Lei, HUANG
af1f8d6101 feat: file purger (#1030)
* wip

* wip

* feat: file purger

* chore: add tests

* feat: delete removed file on sst merge

* chore: move MockAccessLayer to test_util

* fix: some cr comments

* feat: add await termination for scheduler

* fix: some cr comments

* chore: rename max_file_in_level0 to max_files_in_level0
2023-02-19 14:56:41 +08:00
dennis zhuang
a9c8584c98 feat: impl insert data from query (#1025)
* feat: refactor insertion in datanode

* feat: supports inserting data by select query

* feat: impl cast operation for vector

* feat: streaming insert from select query results

* chore: minor changes

* fix: remove unwrap

* test: insert_to_requsts

* test: test_execute_insert_by_select

* fix: cast operation for vectors

* fix: test

* fix: typo

* chore: by CR comments

* fix: test_statement_to_request
2023-02-17 17:56:12 +08:00
Eugene Tolbakov
7787cfdd42 refactor(datatypes): enhance MutableVector methods (#987)
* refactor(datatypes): enhance MutableVector methods

* refactor(datatypes): address code review issues

* refactor(datatypes): address more code review issues

* refactor(datatypes): fix merge conflicts

* refactor(datatypes): address code review issues

* refactor(datatypes): address more code review issues

* refactor(datatypes): update sql delete with the newly introduced method
2023-02-17 16:16:23 +08:00
Weny Xu
2f39a77137 feat: add close method for the region trait (#970)
feat: add close for region trait
2023-02-17 11:32:55 +08:00
Lei, HUANG
16f86a9d77 refactor: separate compaction stuff from task scheduler (#1021)
* refactor: make schedule request return value generic

* feat: add handler trait

* wip

* feat: use task handler

* fix: unit test

* refactor: separate scheduler mod

* chore: rename

* chore: Request uses associated type

* refactor: use associated type

* refactor: use associated type to reduce generic parameters

* chore: further remove generic types

* chore: further remove a generic parameter
2023-02-16 19:30:23 +08:00
dennis zhuang
5ec1a7027b feat: supports passing user params into coprocessor (#962)
* feat: make args in coprocessor optional

* feat: supports kwargs for coprocessor as params passed by the users

* feat: supports params for /run-script

* fix: we should rewrite the coprocessor by removing kwargs

* fix: remove println

* fix: compile error after rebasing

* fix: improve http_handler_test

* test: http scripts api with user params

* refactor: tweak all to_owned
2023-02-16 16:11:26 +08:00
Yingwen
ddbc97befb refactor: changes CreateTableRequest::schema to RawSchema (#1018)
* refactor: changes CreateTableRequest::schema to RawSchema

* refactor(grpc-expr): create_table_schema returns RawSchema
2023-02-16 16:04:17 +08:00
Ruihang Xia
a8c2b35ec6 chore: bump rust to nightly-2023-02-14 (#1019)
* chore: bump rust to nightly-2023-02-14

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* bump statrs to 0.16

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-16 13:17:57 +08:00
Yingwen
04afee216e feat(procedure): Support multi-lock keys and querying procedure state from context (#1006)
* feat: Add ContextProvider to Context

So procedures can query states of other procedures via the
ContextProvider and they don't need to hold a ProcedureManagerRef

* feat: Procedure supports acquiring multiple lock keys

* test: Use multi-locks in test

* feat: Add keys_to_lock/unlock
2023-02-15 18:04:19 +08:00
LFC
5533040be7 fix: describe distribute table (#988)
* fix: describe distribute table
2023-02-15 17:48:43 +08:00
LFC
34fdba77df feat: create database if not exists (#1009) 2023-02-15 17:47:46 +08:00
Ning Sun
cd0d58cb24 fix: correct date/time type format for postgresql (#1001)
* fix: correct date/time type format for postgresql

* fix: tests for timestamp

* refactor: use Utc datetime for timestamp::to_chrono_datetime

* Update src/servers/Cargo.toml

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-02-15 09:40:16 +00:00
yuanbohan
8b869642b8 feat: update promql-parser to v0.1.0 (#994)
feat: update promql-parser version to v0.1.0
2023-02-15 17:23:59 +08:00
Ning Sun
a33d1e9863 ci: add cloud followup label (#1007)
ci: add cloud followup support
2023-02-15 17:17:32 +08:00
Ruihang Xia
dfe7bfb07f feat: handle PromQL HTTP API parameters (#985)
* feat: impl EvalStmt parser

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl From<PromqlQuery> for PromQuery

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move format into with_context

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* shorthand compound error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use rfc3339 error to report float parsing error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove CompoundError

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-15 17:15:44 +08:00
Ruihang Xia
5d1f231004 fix: update planner state according to output plan (#1005)
* fix: update context according to planner phase

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* alias out qualifier

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove ignore

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-15 16:52:14 +08:00
Ning Sun
40eec85cf7 feat: add catalog name to s3 path (#1011) 2023-02-15 08:30:09 +00:00
shuiyisong
e17d564bf0 feat: add client tls option to channel manager config (#999)
* feat: add client tls to channel manager config

* chore: move test to tests folder

* chore: fix license issue

* chore: fix cr issue
2023-02-15 16:02:27 +08:00
shuiyisong
301656d568 fix: rename schema to db in http param (#1008)
chore: rename schema to db in http script handler
2023-02-15 15:59:00 +08:00
Zheming Li
a19dee1dc0 feat: duplicate error logs into separate file (#995)
Signed-off-by: Zheming Li <nkdudu@126.com>
2023-02-15 14:27:32 +08:00
Lei, HUANG
75b8afe043 feat: compaction integration (#997)
* feat: trigger compaction on flush

* chore: rebase develop

* feat: add config item max_file_in_level0 and remove compaction_after_flush

* fix: cr comments

* chore: add unit test to cover Timestamp::new_inclusive

* fix: workaround to fix future is not Sync

* fix: future is not sync

* fix: some cr comments
2023-02-15 14:14:07 +08:00
fys
e2904b99ac feat: add retry logic for MetaPeerClient (#991)
* add retry logic in meta_peer_client

* impl need_retry function

* create meta_peer_client using the builder pattern

* cr
2023-02-15 14:12:53 +08:00
Xieqijun
de0b8aa0a0 feat: Support the DELETE SQL statement (#942)
* [WIP]:delete sql

* [fix]:time parser bug

* [fix]:resolve conflict

* [fmt]:cargo fmt

* [fix]:remove useless log

* [fix]:test

* [feat]:add error parse

* [fix]:resolve conflict

* [fix]:remove useless code

* [fix]:remove useless code

* [test]:add IT

* [fix]:add license

* [fix]:ci

* [fix]:ci

* [fix]:ci

* [fix]:remove

* [fix]:ci

* [feat]:add sql

* [fix]:modify sql

* [feat]:refactor parser_expr

* [feat]:rm backtrace

* [fix]:ci

* [fix]: conversation

* [fix]: conversation

* feat:refactor delete

* feat:refactor delete

* fix:resolve conversation

* fix:ut

* fix:ut

* fix:conversation

* fix:conversation

* fix:conversation

---------

Co-authored-by: xieqijun <qijun@apache.org>
2023-02-15 13:13:17 +08:00
Xieqijun
63e396e9e9 test: add api and doc http test (#998)
* test:add api and doc test

* fix:conversation
2023-02-15 11:55:13 +08:00
Eugene Tolbakov
4d8276790b refactor(storage): remove unused FlushIo variant (#1002)
refactor(storage): remove unused FlushIo variant
2023-02-15 11:42:05 +08:00
Lei, HUANG
374acc8830 feat: compaction reader and writer (#972)
* feat: compaction reader and writer

* feat: make ParquetWrite accept both memtable iterator and chunk reader

* feat: adapt ParquetWriter to accommodate ChunkReaderImpl

* chore: rebase develop

* wip: compile

* wip: task logic

* feat: version and manifest update

* fix: remove useless as_inner from Timestamp vectors

* feat: mark file compacting

* fix: unit test

* fix: clippy warnings

* fix: CR comment

* chore: according to cr comments, remove visit_levels from LevelMetas

* fix: some CR comments

* fix: add PlainTimestampRowFilter for correctness

* fix: cr comments

* fix: some typos
2023-02-14 17:32:00 +08:00
shuiyisong
8491f65093 refactor: remove obj_name_to_tab_ref (#989) 2023-02-14 16:33:55 +08:00
Weny Xu
5e6f340dd9 refactor: refactor execute_stream to non-async method (#980) 2023-02-14 15:41:22 +08:00
Ruihang Xia
7b98718cd9 test: Some PromQL cases about aggregator (#977)
* port some aggregator tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* find two unsupported cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix fn naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-14 15:36:00 +08:00
Yingwen
0f7e5a2fb2 feat: Implement LocalManager::recover (#981)
* feat: Implement LocalManager::recover

* feat: Impl From<ObjectStore> for ProcedureStore
2023-02-14 14:50:43 +08:00
LFC
9ad6c45913 test: Sqlness tests for distribute mode (#979)
* test: Sqlness tests for distribute mode

* ci

* fix: resolve PR comments

* fix: resolve PR comments
2023-02-14 10:24:09 +08:00
fys
7fe417e740 fix: an error occurred when requesting the http doc api (#984) 2023-02-13 11:17:27 +00:00
fys
c1a9f84c7f feat: meta provides the ability to distribute lock (#961)
* add DistLock trait and an etcd-based implementation

wip

impl lock grpc service for meta-srv

reuse the etcd client instead of repeatedly creating etcd client

add some docs and comments

add some comment

meta client support distribute lock

fix: dead lock

self-cr

* cr

* rename "expire" -> "expire_secs"
2023-02-13 15:58:30 +08:00
Yingwen
be897efd01 feat: Execute procedure in LocalManager (#953)
* feat: Runner executes procedure

* feat: Add rollback key type to ParsedKey

* feat: Write rollback key when procedure is unable to execute

* feat: Use loaded step to re-submit subprocedure

* feat: Track subprocedures in ProcedureMeta

* feat: Clean message cache after the root procedure is done

* feat: Runner returns execution result

* fix: Fix tests

* test: Test Runner

* test: Test procedures_in_tree

* chore: Refine test and comments

* feat: Remove support of lock inheritance

A deadlock happens if a subprocedure acquires the same lock key as
its parent.

The main concern: if the subprocedure directly inherits its parent's
lock, how should we behave when multiple subprocedures acquire this
same lock? Each procedure may assume it has unique access to the
object, but it actually shares the resource with others.

Now subprocedures need to use different keys to lock objects, which is
reasonable. For example:
- A parent procedure wants to create a table so it locks the table with
a key like `catalog.schema.table`
- Subprocedures create regions for the table so they lock the regions
with keys `catalog.schema.table.region-0 ~ catalog.schema.table.region-n`

* style: Fix clippy

* feat: insert_procedure returns false on duplicate procedure

Also rename this method to try_insert_procedure

* chore: Address CR comments
2023-02-13 10:38:56 +08:00
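The hierarchical key scheme described in the commit above is easy to sketch. This is illustrative Rust only; the helper names are hypothetical, not the actual greptimedb API:

```rust
/// Hypothetical helpers: the parent procedure locks the table key while each
/// subprocedure locks its own region-scoped key, so no subprocedure ever
/// blocks on the key its parent already holds.
fn table_lock_key(catalog: &str, schema: &str, table: &str) -> String {
    format!("{catalog}.{schema}.{table}")
}

fn region_lock_key(catalog: &str, schema: &str, table: &str, region: u32) -> String {
    format!("{catalog}.{schema}.{table}.region-{region}")
}

fn main() {
    let parent = table_lock_key("greptime", "public", "metrics");
    let regions: Vec<String> = (0..3)
        .map(|r| region_lock_key("greptime", "public", "metrics", r))
        .collect();
    // Every subprocedure key differs from the parent key, avoiding the
    // self-deadlock that lock inheritance would have caused.
    assert!(regions.iter().all(|k| k != &parent));
}
```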
Eugene Tolbakov
c06e04afbb refactor(query): tests from query/tests to query/src (#973)
* refactor(query): tests from query/tests to query/src

* chore(query): address rust fmt issues

* chore(query): add licence header
2023-02-12 20:55:17 +08:00
Lei, HUANG
e77a7f253c feat: L0 to L1 compaction strategy (#964)
* feat: impl simple compaction strategy

* chore: rebase to develop and fix clippy warnings

* chore: simplify time bucket struct

* chore: some typos
2023-02-11 21:10:24 +08:00
Eugene Tolbakov
7d6f4cd88b feat: remove backtrace from sql::error::Error (#966)
* feat: remove backtrace from sql::error::Error

* fix: address formatting issues

---------

Co-authored-by: Evgeny Tolbakov <evgeny.tolbakov@jpmorgan.com>
2023-02-11 14:52:29 +08:00
Ruihang Xia
83ac6598b6 feat: add start, end and step to promql http api (#969)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-10 14:16:16 +08:00
Ruihang Xia
4c925e0079 chore(deps): bump promql-parser (#968)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-10 14:12:55 +08:00
LFC
c6128ec0a4 refactor: use remote proto (#963)
* refactor: use remote proto (see our new repo "GreptimeTeam/greptime-proto")

* fix: resolve PR comments
2023-02-10 13:35:18 +08:00
discord9
7c34b009ec feat: bind DataFrame API into python script (#945)
* chore: remove unused magic fn

* feat: dataframe

* feat: add data_frame crate

* feat: more APIs bound

* fix: `Comparable` for overload op

* fix: license&more test

* chore: PR advices

* chore: more PR advices
2023-02-10 11:21:57 +08:00
shuiyisong
70edd4d55b fix: remove incorrect table_idents_to_full_name (#967) 2023-02-10 03:15:48 +00:00
Ning Sun
6beea73590 fix: use query_ctx in distributed inserts (#965) 2023-02-10 10:09:13 +08:00
Yun Chen
c0d3533d10 fix: Sql Inline Primary Key definition (#957)
* fix: invalid inline primary key syntax

* fix: format

* fix: clippy fix

* fix: added sqlness tests

* fix: throw exception when multiple inline pk defined

* fix: pr comments

* fix: add ending blank line for create.sql
2023-02-09 18:57:19 +08:00
shuiyisong
9989a8c192 fix: check full table name during logical plan creation (#948) 2023-02-09 17:23:28 +08:00
Ruihang Xia
19dd8b1246 feat: SeriesDivide plan for PromQL (#960)
* implement SeriesDivide plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* planner part

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy and typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-09 11:50:29 +08:00
Lei, HUANG
1e9918ddf9 feat: compaction scheduler and rate limiter (#947)
* wip: compaction scheduler

* feat: impl simple compaction scheduler

* fix: typo

* feat: add generic parameter to make scheduler friendly to tests

* chore: add more tests

* fix: CR comments

* fix: CR comments

* fix: ensure idempotency for rate limit token

* fix: CR comments
2023-02-09 11:43:20 +08:00
fys
4ce62f850b chore: add an opaque error type in meta (#959)
add boxed err in meta
2023-02-08 09:47:33 +00:00
Ning Sun
83d57f9111 fix: setting postgres query context (#958) 2023-02-08 16:34:10 +08:00
LFC
803b7f0633 feat: implement "drop table" in distributed mode (both in SQL and gRPC) (#944)
* feat: implement "drop table" in distributed mode (both in SQL and gRPC)

refactor: create distributed table
some details:
- set table global value in Meta, as well as table routes value; Datanode only sets table regional value
- complete instance SQL tests both in standalone and distributed mode

* fix: rebase develop

* fix: resolve PR comments
2023-02-08 07:36:38 +00:00
Ruihang Xia
37ca5ba380 chore: alias sqlness subcommand (#956)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-07 18:53:46 +08:00
Ning Sun
c1d32bdf2b fix: add form data support for http sql api (#955)
fix: add form data support for http apis
2023-02-07 10:15:39 +00:00
fys
83509f31f4 feat: datanode stats is stored in the mem_kv of meta leader (#943)
* store heartbeat data in memory, instead of etcd

* fix: typo

* fix: license header

* cr
2023-02-07 17:09:28 +08:00
elijah
926022e14c feat: enable caching when using object store (#928)
* feat: enable caching when using object store

* feat: support file cache for object store

* feat: maintaining the cached files with lru

* fix: improve the code

* empty commit

* improve the code
2023-02-07 15:46:37 +08:00
Ruihang Xia
2f2609d8c6 build(ci): disable release workflow for forked repo (#954)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-07 15:22:32 +08:00
Yingwen
ecadbc1435 feat: Add procedure manager LocalManager (#946)
* feat: Add ManagerContext and LocalManager

* test: Add register_loader test

* feat: Remove some unused methods

* fix: Fix submit_procedure ensure condition
2023-02-07 11:33:13 +08:00
ShenJunkun
afac885c10 refactor: add schema column to the scripts table (#868) 2023-02-07 11:07:32 +08:00
Lei, HUANG
5d62e193bd feat: support multi regions on datanode (#653)
* wip: fix compile errors

* chore: move splitter to partition crate

* fix: remove useless variants in frontend errors

* chore: move more partition related code to partition manager

* fix: license header

* wip: move WriteSplitter to PartitionRuleManager

* fix: clippy warnings

* chore: remove useless error variant and format toml

* fix: cr comments

* chore: resolve conflicts

* chore: rebase develop

* fix: cr comments

* feat: support multi regions on datanode

* chore: rebase onto develop

* chore: rebase develop

* chore: rebase develop

* wip

* fix: compile errors

* feat: multi region

* fix: CR comments

* feat: allow stat existing regions without actually opening them

* fix: use table meta in manifest to recover region info
2023-02-07 10:46:18 +08:00
elijah
7d77913e88 chore: fix rfc typo (#952) 2023-02-07 08:47:06 +08:00
Lei, HUANG
3f45a0d337 docs: rfc for table compaction (#939)
* doc: rfc for table compaction

* docs: update compaction rfc
2023-02-06 22:15:53 +08:00
Zhizhen He
a1e97c990f chore: fix typo (#949) 2023-02-06 22:13:56 +08:00
Ning Sun
4ae63b7089 feat: Initial prepare statement support for Postgres protocol (#925)
* feat: add describe statement to query_engine

* feat: add ability to describe statement for sql handler

* refactor: return schema instead of wrapped ref

* test: resolve tests

* feat: add initial support for prepared statements

* feat: add parameter types to query statement

* test: fix parser test

* chore: add todo task

* fix: turn on integer_datetime for binary timestamp

* fix: format string using single quote

* test: add tests for prepared statement

* Apply suggestions from code review

Co-authored-by: LFC <bayinamine@gmail.com>

* refactor: use stream api from recordbatches

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-02-06 22:06:00 +08:00
Yingwen
b0925d94ed feat: Implement lock component for ProcedureManager (#937)
* feat: Add procedure meta

* feat: Implement lock for procedures

* chore: Allow dead code

* docs: Fix comment

* docs: Update docs of acquire_lock
2023-02-03 18:42:03 +08:00
Ruihang Xia
fc9276c79d feat: export promql service in server (#924)
* chore: some tiny typo/style fix

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: add promql server

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* works for mocked query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* integration test case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* expose promql api to our http server

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust router structure

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-03 08:28:56 +00:00
LFC
184ca78a4d revert: removed all "USE"s in sqlness tests introduced in #922 (#938) 2023-02-03 15:44:58 +08:00
discord9
ebbf1e43b5 feat: Query using sql inside python script (#884)
* feat: add weakref to QueryEngine in copr

* feat: sql query in python

* fix: make_class for Query Engine

* fix: use `Handle::try_current` instead

* fix: cache `Runtime`

* fix: lock file conflict

* fix: dedicated thread for blocking&fix test

* test: remove unnecessary print
2023-02-03 15:05:27 +08:00
dennis zhuang
54fe81dad9 docs: add dashboard to resources in README (#934) 2023-02-03 13:47:19 +08:00
LFC
af935671b2 feat: support "use" in GRPC requests (#922)
* feat: support "use catalog and schema"(behave like the "use" in MySQL) in GRPC requests

* fix: rebase develop
2023-02-02 20:02:56 +08:00
Yingwen
74adb077bc feat: Implement ProcedureStore (#927)
* test: Add more tests for ProcedureId

* feat: Add ObjectStore based state store

* feat: Implement ProcedureStore

* test: Add tests for ParsedKey

* refactor: Rename list to walk_top_down

* fix: Test ProcedureStore and handles unordered key values.

* style: Fix clippy

* docs: Update comment

* chore: Adjust log level for printing invalid key
2023-02-02 17:49:31 +08:00
Ruihang Xia
54c7a8be02 docs: document sqlness-runner usage (#931)
docs: paste doc from greptime-doc

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-02 15:56:51 +08:00
Ruihang Xia
ea5146762a chore(deps): bump promql-parser (#929)
* fix promql crate

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* migrate to new api

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix aggregator test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix styles

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-02 07:31:41 +00:00
Yingwen
788b5362a1 docs: Add procedure framework RFC (#836)
* docs: Add procedure framework RFC

* docs: Add dump, rollback and locking to procedure framework

* docs: Change ProcedureBuilder to ProcedureLoader

* docs: Add sub-procedures section

* docs: Add a link to explain idempotent

* docs: Add link to the tracking issue

* docs: Fix ProcedureLoader type alias

* docs: Update procedure API

* docs: Address CR comments

* docs: Update path and make the docs more clear
2023-02-02 11:28:56 +08:00
Lei, HUANG
028a69e349 refactor: move partition related code to partition manager (#906)
* wip: fix compile errors

* chore: move splitter to partition crate

* fix: remove useless variants in frontend errors

* chore: move more partition related code to partition manager

* fix: license header

* wip: move WriteSplitter to PartitionRuleManager

* fix: clippy warnings

* chore: remove useless error variant and format toml

* fix: cr comments

* chore: resolve conflicts

* chore: rebase develop

* fix: cr comments
2023-02-01 19:24:49 +08:00
elijah
9a30ba00c4 test: run sqlness test in distributed mode (#916)
* test: run sqlness test in distributed mode

* chore: fix ci test

* chore: improve the ci yaml

* chore: improve the code

* chore: fix conflicts
2023-01-31 15:00:11 +08:00
LFC
8149932bad feat: local catalog drop table (#913)
* feat: local catalog drop table

* Update src/catalog/src/local/manager.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* Update src/catalog/src/local/manager.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* fix: resolve PR comments

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-01-31 14:44:03 +08:00
Ruihang Xia
89e4084af4 build(ci): upload sqlness log files (#920)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-31 14:31:27 +08:00
Ning Sun
39df25a8f6 refactor: make postgres handler stateful (#914)
* feat: update pgwire to 0.8 and unify postgres handler

* fix: correct password message matching
2023-01-31 14:19:18 +08:00
Yingwen
b2ad0e972b feat: Define procedure related traits (#904)
* chore: Move uuid to workspace.dependencies

* feat: Define procedure related traits

* test: Add tests

* chore: Update imports

* feat: Submit ProcedureWithId to manager

* chore: pub ProcedureId::parse_str

* refactor: ProcedureId::parse_str returns Result

* chore: Address CR comments

Also implements FromStr for ProcedureId
2023-01-31 14:17:28 +08:00
shuiyisong
18e6740ac9 chore: add interceptor err in frontend::error::Error (#917)
* chore: add interceptor boxed err

* chore: rename

* chore: update err msg

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

---------

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-01-30 03:12:03 +00:00
Yun Chen
a7dc86ffe5 feat: oss storage support (#911)
* feat: add oss storage support

* fix: ci build format check

* fix: align OSS to Oss

* fix: cr comments

* fix: rename OSS to Oss in integration tests

* fix: clippy fix
2023-01-29 20:09:38 +08:00
Ruihang Xia
71482b38d7 feat: PromQL binary expr planner (#889)
* feat: PromQL binary expr planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* column & column test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* column & literal test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* mark literal-literal unsupported

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-29 17:02:11 +08:00
Ruihang Xia
dc9b5339bf feat: impl increase and irate/idelta in PromQL (#880)
* feat: impl increase and irate/idelta in PromQL

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add license header

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix styles

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add counter reset test case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-29 14:21:13 +08:00
Lei, HUANG
5e05c8f884 fix: TimestampRange::new_inclusive and strum dependency (#910)
fix: TimestampRange::new_inclusive; also fix strum dependency in common-error
2023-01-29 13:09:05 +08:00
shuiyisong
aafc26c788 feat: add mysql reject_no_database (#896)
* chore: update opensrv-mysql to main

* refactor: change mysql server struct

* feat: add option to reject no database mysql connection request

* chore: remove unused condition

* chore: rebase develop

* chore: make reject_no_database optional
2023-01-29 04:09:47 +00:00
LFC
64243e3a7d refactor: accommodate java flight client (#886)
* refactor: change how AffectedRows is carried in flight stream to accommodate Java Flight client

* fix: clippy
2023-01-29 11:27:13 +08:00
Ruihang Xia
36a13dafb7 build(deps): bump tokio to 1.24.2 (#900)
deps: bump tokio to 1.24.2

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-29 11:13:37 +08:00
shuiyisong
637837ae44 chore: return authorize err msg to mysql client (#905)
chore: refine authorize err msg to client
2023-01-29 10:53:36 +08:00
dependabot[bot]
ae8afd3711 build(deps): bump bzip2 from 0.4.3 to 0.4.4 (#898)
Bumps [bzip2](https://github.com/alexcrichton/bzip2-rs) from 0.4.3 to 0.4.4.
- [Release notes](https://github.com/alexcrichton/bzip2-rs/releases)
- [Commits](https://github.com/alexcrichton/bzip2-rs/commits/0.4.4)

---
updated-dependencies:
- dependency-name: bzip2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-28 21:08:03 +08:00
Yingwen
3db8f95169 ci: Skip status check on docs changed (#903)
* ci: Pass status check on docs changed

* ci: Remove coverage.yml
2023-01-28 16:37:47 +08:00
Lei, HUANG
43aefc5d74 feat: pruning sst files according to time range in filters (#887)
* 1. Reimplement Eq for Timestamp
2. Add and/or for GenericRange

* feat: extract time range from filters

* feat: select sst files according to time range

* fix: clippy

* fix: empty value in range

* fix: some cr comments

* fix: return optional timestamp range

* fix: cr comments
2023-01-28 15:16:41 +08:00
Ruihang Xia
b33937f48e test: sqlness test for alter table rename (#891)
* test: sqlness test for alter table rename

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change show create table to desc table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-28 11:35:38 +08:00
Ning Sun
9bc4c0d9c7 fix: mysql tests error (#897)
fix: mysql tests merge error
2023-01-20 16:15:16 +08:00
Ning Sun
302d7ec41b ci: use ubuntu 2004 to build weekly (#895)
feat: use ubuntu 2004 to build weekly
2023-01-20 08:36:41 +08:00
zyy17
cc46194f29 refactor: support TLS private key of RSA format and add the full test certificates generation (#885)
chore: add the full certificate generation

Signed-off-by: zyy17 <zyylsxm@gmail.com>

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-01-19 13:13:33 +08:00
elijah
5dfc24e4f6 fix: create table after rename table (#894)
* fix: create table after rename table

* chore: fix test
2023-01-19 13:13:09 +08:00
Zheming Li
4987136850 refactor: use rust-toolchain.toml to override toolchain (#882) 2023-01-19 13:11:36 +08:00
shuiyisong
6960739b3d feat: add authorize to UserProvider trait (#879)
* feat: add SchemaValidator

* feat: add schema validator to mysql shim

* chore: pass schema validator to http auth layer

* feat: add schema validator to http

* feat: add schema validator to pg

* feat: add schema validator to pg

* feat: add schema validator test

* chore: remove println in test

* chore: use !matches

* refactor: refac authenticate and authorize in http auth

* refactor: refac authenticate and authorize in http auth

* chore: typo

* chore: minor change

* refactor: merge schema_validator into user_provider

* chore: fix license issue

* refactor: change http query param from database to db

* chore: fix cr issue
2023-01-18 12:42:08 +08:00
fys
49d83abc0c chore: add an opaque error type in meta (#890)
add a boxed error type in meta
2023-01-18 11:30:54 +08:00
Ning Sun
ecb71f81be feat: add --rpc-hostname option to datanode for a persistent address to store in meta (#871)
* feat: add --rpc-hostname option

* fix: config file and hostname parsing

* Apply suggestions from code review

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-01-17 10:50:50 +08:00
fys
6f5639fccd feat: add load_based selector in meta (#874)
* fix: wrong error info

* add derive hash for StatKey

* add an attrs field in Context

* add load_based selector

* add license

* make Nodestat module public

* add meta startup config item about selector

* cr: remove attrs, add concrete type in context

* cr: change region_number type to Option<u64>

* cr: add comment in example.toml

* cr
2023-01-17 10:25:00 +08:00
Ruihang Xia
1e9d09099e feat: update promql-parser to commit fec3c8b (#881)
deps: update promql-parser to commit fec3c8b

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-16 17:55:44 +08:00
Lei, HUANG
daad38360f fix: impl total order for Timestamp (#878)
* 1. Reimplement Eq for Timestamp
2. Add and/or for GenericRange

* chore: add test for TimestampRange with diff unit

* chore: optimize split implementation

* fix: clippy

* fix: add fast path

* fix: CR comments
2023-01-16 17:37:30 +08:00
511 changed files with 29930 additions and 7179 deletions


@@ -1,2 +1,5 @@
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
[alias]
sqlness = "run --bin sqlness-runner --"


@@ -2,3 +2,9 @@
GT_S3_BUCKET=S3 bucket
GT_S3_ACCESS_KEY_ID=S3 access key id
GT_S3_ACCESS_KEY=S3 secret access key
# Settings for oss test
GT_OSS_BUCKET=OSS bucket
GT_OSS_ACCESS_KEY_ID=OSS access key id
GT_OSS_ACCESS_KEY=OSS access key
GT_OSS_ENDPOINT=OSS endpoint


@@ -1,70 +0,0 @@
on:
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
paths-ignore:
- 'docs/**'
- 'config/**'
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
push:
branches:
- "main"
- "develop"
paths-ignore:
- 'docs/**'
- 'config/**'
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
workflow_dispatch:
name: Code coverage
env:
RUST_TOOLCHAIN: nightly-2022-12-20
jobs:
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: KyleMayes/install-llvm-action@v1
with:
version: "14.0"
- name: Install toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Install latest nextest release
uses: taiki-e/install-action@nextest
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Collect coverage data
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Codecov upload
uses: codecov/codecov-action@v2
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./lcov.info
flags: rust
fail_ci_if_error: true
verbose: true


@@ -7,6 +7,7 @@ on:
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
push:
branches:
- develop
@@ -23,15 +24,15 @@ on:
name: CI
env:
RUST_TOOLCHAIN: nightly-2022-12-20
RUST_TOOLCHAIN: nightly-2023-02-14
jobs:
typos:
name: Spell Check with Typos
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: crate-ci/typos@v1.0.4
- uses: actions/checkout@v3
- uses: crate-ci/typos@v1.13.10
check:
name: Check
@@ -125,8 +126,25 @@ jobs:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Run etcd
run: |
ETCD_VER=v3.5.7
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p /tmp/etcd-download
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo cp -a /tmp/etcd-download/etcd* /usr/local/bin/
nohup etcd >/tmp/etcd.log 2>&1 &
- name: Run sqlness
run: cargo run --bin sqlness-runner
run: cargo sqlness && ls /tmp
- name: Upload sqlness logs
uses: actions/upload-artifact@v3
with:
name: sqlness-logs
path: /tmp/greptime-*.log
retention-days: 3
fmt:
name: Rustfmt
@@ -165,3 +183,45 @@ jobs:
uses: Swatinem/rust-cache@v2
- name: Run cargo clippy
run: cargo clippy --workspace --all-targets -- -D warnings -D clippy::print_stdout -D clippy::print_stderr
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: KyleMayes/install-llvm-action@v1
with:
version: "14.0"
- name: Install toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Install latest nextest release
uses: taiki-e/install-action@nextest
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Collect coverage data
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Codecov upload
uses: codecov/codecov-action@v2
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./lcov.info
flags: rust
fail_ci_if_error: false
verbose: true


@@ -1,4 +1,4 @@
name: Create Issue in docs repo on doc related changes
name: Create Issue in downstream repos
on:
issues:
@@ -23,3 +23,17 @@ jobs:
body: |
A document change request is generated from
${{ github.event.issue.html_url || github.event.pull_request.html_url }}
cloud_issue:
if: github.event.label.name == 'cloud followup required'
runs-on: ubuntu-latest
steps:
- name: create an issue in cloud repo
uses: dacbd/create-issue-action@main
with:
owner: GreptimeTeam
repo: greptimedb-cloud
token: ${{ secrets.DOCS_REPO_TOKEN }}
title: Followup changes in ${{ github.event.issue.title || github.event.pull_request.title }}
body: |
A followup request is generated from
${{ github.event.issue.html_url || github.event.pull_request.html_url }}

.github/workflows/docs.yml (vendored, new file, 55 lines)

@@ -0,0 +1,55 @@
on:
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
paths:
- 'docs/**'
- 'config/**'
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
push:
branches:
- develop
- main
paths:
- 'docs/**'
- 'config/**'
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
workflow_dispatch:
name: CI
# To pass the required status check, see:
# https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/troubleshooting-required-status-checks#handling-skipped-but-required-checks
jobs:
check:
name: Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- run: 'echo "No action required"'
fmt:
name: Rustfmt
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- run: 'echo "No action required"'
clippy:
name: Clippy
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- run: 'echo "No action required"'
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- run: 'echo "No action required"'


@@ -13,4 +13,4 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Check License Header
uses: apache/skywalking-eyes/header@main
uses: apache/skywalking-eyes/header@df70871af1a8109c9a5b1dc824faaf65246c5236


@@ -10,7 +10,7 @@ on:
name: Release
env:
RUST_TOOLCHAIN: nightly-2022-12-20
RUST_TOOLCHAIN: nightly-2023-02-14
# FIXME(zyy17): Would be better to use `gh release list -L 1 | cut -f 3` to get the latest release version tag, but for a long time, we will stay at 'v0.1.0-alpha-*'.
SCHEDULED_BUILD_VERSION_PREFIX: v0.1.0-alpha
@@ -28,10 +28,10 @@ jobs:
# The file format is greptime-<os>-<arch>
include:
- arch: x86_64-unknown-linux-gnu
os: ubuntu-latest-16-cores
os: ubuntu-2004-16-cores
file: greptime-linux-amd64
- arch: aarch64-unknown-linux-gnu
os: ubuntu-latest-16-cores
os: ubuntu-2004-16-cores
file: greptime-linux-arm64
- arch: aarch64-apple-darwin
os: macos-latest
@@ -40,6 +40,7 @@ jobs:
os: macos-latest
file: greptime-darwin-amd64
runs-on: ${{ matrix.os }}
if: github.repository == 'GreptimeTeam/greptimedb'
steps:
- name: Checkout sources
uses: actions/checkout@v3
@@ -69,6 +70,25 @@ jobs:
run: |
brew install protobuf
- name: Install etcd for linux
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: |
ETCD_VER=v3.5.7
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p /tmp/etcd-download
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo cp -a /tmp/etcd-download/etcd* /usr/local/bin/
nohup etcd >/tmp/etcd.log 2>&1 &
- name: Install etcd for macos
if: contains(matrix.arch, 'darwin')
run: |
brew install etcd
brew services start etcd
- name: Install dependencies for linux
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: |
@@ -113,6 +133,7 @@ jobs:
name: Release artifacts
needs: [build]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
steps:
- name: Checkout sources
uses: actions/checkout@v3
@@ -155,6 +176,7 @@ jobs:
name: Build docker image
needs: [build]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
steps:
- name: Checkout sources
uses: actions/checkout@v3

Cargo.lock (generated, 1418 changed lines)

File diff suppressed because it is too large.


@@ -12,6 +12,7 @@ members = [
"src/common/function-macro",
"src/common/grpc",
"src/common/grpc-expr",
"src/common/procedure",
"src/common/query",
"src/common/recordbatch",
"src/common/runtime",
@@ -26,6 +27,7 @@ members = [
"src/meta-srv",
"src/mito",
"src/object-store",
"src/partition",
"src/promql",
"src/query",
"src/script",
@@ -35,6 +37,7 @@ members = [
"src/storage",
"src/store-api",
"src/table",
"src/table-procedure",
"tests-integration",
"tests/runner",
]
@@ -46,11 +49,13 @@ license = "Apache-2.0"
[workspace.dependencies]
arrow = "29.0"
arrow-array = "29.0"
arrow-flight = "29.0"
arrow-schema = { version = "29.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
# TODO(LFC): Use released Datafusion when it officially dpendent on Arrow 29.0
chrono = { version = "0.4", features = ["serde"] }
# TODO(LFC): Use released Datafusion when it officially dependent on Arrow 29.0
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
@@ -63,10 +68,13 @@ parquet = "29.0"
paste = "1.0"
prost = "0.11"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
snafu = { version = "0.7", features = ["backtraces"] }
sqlparser = "0.28"
tokio = { version = "1", features = ["full"] }
tonic = "0.8"
tokio = { version = "1.24.2", features = ["full"] }
tokio-util = "0.7"
tonic = { version = "0.8", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
[profile.release]
debug = true
@@ -74,6 +82,6 @@ debug = true
[profile.weekly]
inherits = "release"
strip = true
lto = true
lto = "thin"
debug = false
incremental = false


@@ -19,6 +19,10 @@ clean: ## Clean the project.
fmt: ## Format all the Rust code.
cargo fmt --all
.PHONY: fmt-toml
fmt-toml: ## Format all TOML files.
taplo format --check --option "indent_string= "
.PHONY: docker-image
docker-image: ## Build docker image.
docker build --network host -f docker/Dockerfile -t ${IMAGE_REGISTRY}:${IMAGE_TAG} .
@@ -35,7 +39,7 @@ integration-test: ## Run integation test.
.PHONY: sqlness-test
sqlness-test: ## Run sqlness test.
cargo run --bin sqlness-runner
cargo sqlness
.PHONY: check
check: ## Cargo check all the targets.


@@ -153,6 +153,9 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
- GreptimeDB [Developer
Guide](https://docs.greptime.com/developer-guide/overview.html)
### Dashboard
- [The dashboard UI for GreptimeDB](https://github.com/GreptimeTeam/dashboard)
### SDK
- [GreptimeDB Java
@@ -169,7 +172,7 @@ For future plans, check out [GreptimeDB roadmap](https://github.com/GreptimeTeam
## Community
Our core team is thrilled too see you participate in any ways you like. When you are stuck, try to
Our core team is thrilled to see you participate in any ways you like. When you are stuck, try to
ask for help by filling an issue with a detailed description of what you were trying to do
and what went wrong. If you have any questions or if you would like to get involved in our
community, please check out:


@@ -27,12 +27,11 @@ use arrow::record_batch::RecordBatch;
use clap::Parser;
use client::api::v1::column::Values;
use client::api::v1::{Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest, TableId};
use client::{Client, Database};
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
use tokio::task::JoinSet;
const DATABASE_NAME: &str = "greptime";
const CATALOG_NAME: &str = "greptime";
const SCHEMA_NAME: &str = "public";
const TABLE_NAME: &str = "nyc_taxi";
@@ -100,7 +99,6 @@ async fn write_data(
let record_batch = record_batch.unwrap();
let (columns, row_count) = convert_record_batch(record_batch);
let request = InsertRequest {
schema_name: "public".to_string(),
table_name: TABLE_NAME.to_string(),
region_number: 0,
columns,
@@ -424,7 +422,7 @@ fn main() {
.unwrap()
.block_on(async {
let client = Client::with_urls(vec![&args.endpoint]);
let db = Database::new(DATABASE_NAME, client);
let db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
if !args.skip_write {
do_write(&args, &db).await;


@@ -8,3 +8,5 @@ coverage:
ignore:
- "**/error*.rs" # ignore all error.rs files
- "tests/runner/*.rs" # ignore integration test runner
comment: # this is a top-level key
layout: "diff"


@@ -1,6 +1,7 @@
node_id = 42
mode = 'distributed'
rpc_addr = '127.0.0.1:3001'
rpc_hostname = '127.0.0.1'
rpc_runtime_size = 8
mysql_addr = '127.0.0.1:4406'
mysql_runtime_size = 4
@@ -18,8 +19,17 @@ sync_write = false
type = 'File'
data_dir = '/tmp/greptimedb/data/'
[meta_client_opts]
[meta_client_options]
metasrv_addrs = ['127.0.0.1:3002']
timeout_millis = 3000
connect_timeout_millis = 5000
tcp_nodelay = false
[compaction]
max_inflight_tasks = 4
max_files_in_level0 = 16
max_purge_tasks = 32
[procedure.store]
type = 'File'
data_dir = '/tmp/greptimedb/procedure/'


@@ -5,7 +5,7 @@ datanode_rpc_addr = '127.0.0.1:3001'
addr = '127.0.0.1:4000'
timeout = "30s"
[meta_client_opts]
[meta_client_options]
metasrv_addrs = ['127.0.0.1:3002']
timeout_millis = 3000
connect_timeout_millis = 5000

View File

@@ -2,3 +2,5 @@ bind_addr = '127.0.0.1:3002'
server_addr = '127.0.0.1:3002'
store_addr = '127.0.0.1:2379'
datanode_lease_secs = 15
# selector: 'LeaseBased', 'LoadBased'
selector = 'LeaseBased'

View File

@@ -14,7 +14,6 @@ purge_threshold = '50GB'
read_batch_size = 128
sync_write = false
[storage]
type = 'File'
data_dir = '/tmp/greptimedb/data/'
@@ -42,3 +41,7 @@ enable = true
addr = '127.0.0.1:4003'
runtime_size = 2
check_pwd = false
[procedure.store]
type = 'File'
data_dir = '/tmp/greptimedb/procedure/'

View File

@@ -149,10 +149,10 @@ inputs:
- title: 'Series Normalize: \noffset = 0'
operator: prom
inputs:
- title: 'Filter: \ntimetamp > 2022-12-20T10:00:00 && timestamp < 2022-12-21T10:00:00'
- title: 'Filter: \ntimestamp > 2022-12-20T10:00:00 && timestamp < 2022-12-21T10:00:00'
operator: filter
inputs:
- title: 'Table Scan: \ntable = request_duration, timetamp > 2022-12-20T10:00:00 && timestamp < 2022-12-21T10:00:00'
- title: 'Table Scan: \ntable = request_duration, timestamp > 2022-12-20T10:00:00 && timestamp < 2022-12-21T10:00:00'
operator: scan -->
![example](example.png)

View File

@@ -0,0 +1,151 @@
---
Feature Name: "procedure-framework"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/286
Date: 2023-01-03
Author: "Yingwen <realevenyag@gmail.com>"
---
Procedure Framework
----------------------
# Summary
A framework for executing operations in a fault-tolerant manner.
# Motivation
Some operations in GreptimeDB require multiple steps to implement. For example, creating a table needs:
1. Check whether the table exists
2. Create the table in the table engine
   1. Create a region for the table in the storage engine
   2. Persist the metadata of the table to the table manifest
3. Add the table to the catalog manager
If the node dies or restarts in the middle of creating a table, it could leave the system in an inconsistent state. The procedure framework, inspired by [Apache HBase's ProcedureV2 framework](https://github.com/apache/hbase/blob/bfc9fc9605de638785435e404430a9408b99a8d0/src/main/asciidoc/_chapters/pv2.adoc) and [Apache Accumulo's FATE framework](https://accumulo.apache.org/docs/2.x/administration/fate), aims to provide a unified way to implement multi-step operations that is tolerant to failure.
# Details
## Overview
The procedure framework consists of the following primary components:
- A `Procedure` represents an operation or a set of operations to be performed step-by-step
- `ProcedureManager`, the runtime to run `Procedures`. It executes the submitted procedures, stores procedures' states to the `ProcedureStore` and restores procedures from the `ProcedureStore` when the database restarts.
- `ProcedureStore` is a storage layer for persisting the procedure state
## Procedures
The `ProcedureManager` keeps calling `Procedure::execute()` until the Procedure is done, so the operation of the Procedure should be [idempotent](https://developer.mozilla.org/en-US/docs/Glossary/Idempotent): it needs to be able to undo or replay a partial execution of itself.
```rust
trait Procedure {
fn execute(&mut self, ctx: &Context) -> Result<Status>;
fn dump(&self) -> Result<String>;
fn rollback(&self) -> Result<()>;
// other methods...
}
```
The `Status` is an enum that has the following variants:
```rust
enum Status {
Executing {
persist: bool,
},
Suspended {
subprocedures: Vec<ProcedureWithId>,
persist: bool,
},
Done,
}
```
A call to `execute()` can result in the following possibilities:
- `Ok(Status::Done)`: we are done
- `Ok(Status::Executing { .. })`: there are remaining steps to do
- `Ok(Status::Suspended { subprocedures, .. })`: execution is suspended and can be resumed later after the sub-procedures are done.
- `Err(e)`: error occurs during execution and the procedure is unable to proceed anymore.
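To make the idempotency requirement concrete, here is a minimal sketch of a state-machine style implementation, assuming the `Procedure`, `Context`, `Status` and `Result` types above are in scope; `CreateTableProcedure` and `CreateTableState` are hypothetical names, not the actual GreptimeDB types:
```rust
#[derive(Debug, Clone, Copy)]
enum CreateTableState {
    Prepare,
    CreateRegion,
    RegisterCatalog,
}

struct CreateTableProcedure {
    state: CreateTableState,
}

impl Procedure for CreateTableProcedure {
    fn execute(&mut self, _ctx: &Context) -> Result<Status> {
        // Every arm must be safe to re-run after a crash: it first checks
        // whether its effect already exists, then advances the state.
        match self.state {
            CreateTableState::Prepare => {
                // e.g. fail fast if the table already exists
                self.state = CreateTableState::CreateRegion;
                Ok(Status::Executing { persist: true })
            }
            CreateTableState::CreateRegion => {
                // e.g. create the region only if it is absent
                self.state = CreateTableState::RegisterCatalog;
                Ok(Status::Executing { persist: true })
            }
            CreateTableState::RegisterCatalog => {
                // e.g. register the table unless it is already registered
                Ok(Status::Done)
            }
        }
    }

    fn dump(&self) -> Result<String> {
        // Persist enough state to resume from the current step on restart.
        Ok(format!("{:?}", self.state))
    }

    fn rollback(&self) -> Result<()> {
        // Best-effort cleanup of partially created resources.
        Ok(())
    }
}
```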
Users need to assign a unique `ProcedureId` to the procedure and the procedure can get this id via the `Context`. The `ProcedureId` is typically a UUID.
```rust
struct Context {
id: ProcedureId,
// other fields ...
}
```
The `ProcedureManager` calls `Procedure::dump()` to serialize the internal state of the procedure and writes to the `ProcedureStore`. The `Status` has a field `persist` to tell the `ProcedureManager` whether it needs persistence.
## Sub-procedures
A procedure may need to create some sub-procedures to process its subtasks. For example, creating a distributed table with multiple regions (partitions) needs to set up the regions on each node, thus the parent procedure should instantiate a sub-procedure for each region. The `ProcedureManager` makes sure that the parent procedure does not proceed until all sub-procedures are successfully finished.
The procedure can submit sub-procedures to the `ProcedureManager` by returning `Status::Suspended`. It needs to assign a procedure id to each sub-procedure manually so it can track the status of the sub-procedures.
```rust
struct ProcedureWithId {
id: ProcedureId,
procedure: BoxedProcedure,
}
```
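For instance, a parent procedure might build its `Status::Suspended` result like the following sketch; `RegionInfo`, `RegionProcedure` and `ProcedureId::random()` are illustrative assumptions, not confirmed APIs:
```rust
// Hypothetical: spawn one sub-procedure per region and suspend the parent
// until all of them finish successfully.
fn spawn_region_subprocedures(regions: &[RegionInfo]) -> Status {
    let subprocedures = regions
        .iter()
        .map(|region| ProcedureWithId {
            id: ProcedureId::random(),
            procedure: Box::new(RegionProcedure::new(region.clone())),
        })
        .collect();

    Status::Suspended {
        subprocedures,
        persist: true,
    }
}
```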
## ProcedureStore
We might need to provide two different `ProcedureStore` implementations:
- In standalone mode, it stores data on the local disk.
- In distributed mode, it stores data on the meta server or the object store service.
These implementations should share the same storage structure. They store each procedure's state in a unique path based on the procedure id:
```
Sample paths:
/procedures/{PROCEDURE_ID}/000001.step
/procedures/{PROCEDURE_ID}/000002.step
/procedures/{PROCEDURE_ID}/000003.commit
```
`ProcedureStore` behaves like a WAL. Before performing each step, the `ProcedureManager` can write the procedure's current state to the `ProcedureStore`, which stores the state in a `.step` file. The `000001` in the path is a monotonically increasing step sequence number. After the procedure is done, the `ProcedureManager` puts a `.commit` file to indicate the procedure is finished (committed).
The `ProcedureManager` can remove the procedure's files once the procedure is done, but it needs to leave the `.commit` file as the last one to remove in case of failure during removal.
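A sketch of that ordering, assuming a hypothetical `ProcedureStore` with plain `put`/`delete` operations:
```rust
// Illustrative only: the store API and path layout are assumptions.
async fn finish_procedure(
    store: &ProcedureStore,
    id: ProcedureId,
    last_step: u32,
) -> Result<()> {
    // 1. Mark the procedure as committed first.
    let commit = format!("/procedures/{id}/{:06}.commit", last_step + 1);
    store.put(&commit, &[]).await?;
    // 2. Then remove the step files.
    for step in 1..=last_step {
        store.delete(&format!("/procedures/{id}/{step:06}.step")).await?;
    }
    // 3. Remove the commit marker last, so a crash mid-cleanup still
    //    leaves evidence that the procedure finished.
    store.delete(&commit).await?;
    Ok(())
}
```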
## ProcedureManager
`ProcedureManager` executes procedures submitted to it.
```rust
trait ProcedureManager {
fn register_loader(&self, name: &str, loader: BoxedProcedureLoader) -> Result<()>;
async fn submit(&self, procedure: ProcedureWithId) -> Result<()>;
}
```
It supports the following operations:
- Register a `ProcedureLoader` by the type name of the `Procedure`.
- Submit a `Procedure` to the manager and execute it.
When `ProcedureManager` starts, it loads procedures from the `ProcedureStore` and restores the procedures by the `ProcedureLoader`. The manager stores the type name from `Procedure::type_name()` with the data from `Procedure::dump()` in the `.step` file and uses the type name to find a `ProcedureLoader` to recover the procedure from its data.
```rust
type BoxedProcedureLoader = Box<dyn Fn(&str) -> Result<BoxedProcedure> + Send>;
```
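Registering and using a loader might then look roughly like this sketch; `from_json` is an assumed helper that inverts `Procedure::dump()`:
```rust
// Hypothetical: register a loader keyed by the procedure's type name so
// the manager can recreate it from persisted `.step` data on restart.
fn register_create_table_loader(manager: &dyn ProcedureManager) -> Result<()> {
    manager.register_loader(
        "CreateTableProcedure",
        Box::new(|data| {
            let procedure = CreateTableProcedure::from_json(data)?;
            Ok(Box::new(procedure) as BoxedProcedure)
        }),
    )
}
```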
## Rollback
The rollback step is supposed to clean up the resources created during the `execute()` step. When a procedure has failed, the `ProcedureManager` puts a `rollback` file and calls the `Procedure::rollback()` method.
```text
/procedures/{PROCEDURE_ID}/000001.step
/procedures/{PROCEDURE_ID}/000002.rollback
```
Rollback is complicated to implement, so some procedures might not support rollback or only provide a best-effort approach.
## Locking
The `ProcedureManager` can provide a locking mechanism that gives a procedure read/write access to a database object such as a table so other procedures are unable to modify the same table while the current one is executing.
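One possible shape for such lock keys, as an assumption rather than a settled design:
```rust
// Hypothetical lock-key scheme: a procedure declares the catalog object it
// mutates, and the manager serializes procedures that share a key.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum LockKey {
    Catalog(String),
    Schema(String),
    /// Fully-qualified table name, e.g. "greptime.public.my_table".
    Table(String),
}

fn create_table_lock_key(catalog: &str, schema: &str, table: &str) -> LockKey {
    LockKey::Table(format!("{catalog}.{schema}.{table}"))
}
```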
# Drawbacks
The `Procedure` framework introduces additional complexity and overhead to our database.
- To execute a `Procedure`, we need to write to the `ProcedureStore` multiple times, which may slow down the server
- We need to rewrite the logic of creating/dropping/altering a table using the procedure framework
# Alternatives
Another approach is to tolerate failure during execution and allow users to retry the operation until it succeeds. But we still need to:
- Make each step idempotent
- Record the status in some place to check whether we are done

View File

@@ -0,0 +1,92 @@
---
Feature Name: "table-compaction"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/930
Date: 2023-02-01
Author: "Lei, HUANG <mrsatangel@gmail.com>"
---
# Table Compaction
---
## Background
GreptimeDB uses an LSM-tree based storage engine that flushes memtables to SSTs for persistence.
But currently it only supports level 0. SST files in level 0 are not guaranteed to contain only rows with disjoint time ranges.
That is to say, different SST files in level 0 may contain overlapping timestamps.
As a consequence, retrieving rows in some time range requires scanning all files, which brings a lot of IO overhead.
Also, just like other LSMT engines, deletes/updates to existing primary keys are converted to new rows with a delete/update mark and appended to SSTs on flush.
We need to merge the operations on the same primary keys so that we don't have to go through all SST files to find the final state of these primary keys.
## Goal
Implement a compaction framework to:
- maintain SSTs in timestamp order to accelerate queries with timestamp conditions;
- merge rows with the same primary key;
- purge expired SSTs;
- accommodate other tasks like data rollup/indexing.
## Overview
Table compaction involves the following components:
- Compaction scheduler: runs compaction tasks and limits the consumed resources;
- Compaction strategy: finds the SSTs to compact and determines the output files of compaction;
- Compaction task: reads the rows from input SSTs and writes them to the output files.
## Implementation
### Compaction scheduler
`CompactionScheduler` is an executor that continuously polls and executes compaction requests from a task queue.
```rust
#[async_trait]
pub trait CompactionScheduler {
/// Schedules a compaction task.
async fn schedule(&self, task: CompactionRequest) -> Result<()>;
/// Stops compaction scheduler.
async fn stop(&self) -> Result<()>;
}
```
### Compaction triggering
Currently, we can check whether to compact tables when a memtable is flushed to an SST.
https://github.com/GreptimeTeam/greptimedb/blob/4015dd80752e1e6aaa3d7cacc3203cb67ed9be6d/src/storage/src/flush.rs#L245
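In that case the flush path could end with something like the following sketch, built on the `CompactionScheduler` trait above; `CompactionRequest::new` is an assumed constructor:
```rust
// Hypothetical flush tail: after the memtable is persisted as an SST,
// ask the scheduler to consider compacting the affected region.
async fn on_flush_finished(
    scheduler: &dyn CompactionScheduler,
    region_id: u64,
) -> Result<()> {
    scheduler.schedule(CompactionRequest::new(region_id)).await
}
```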
### Compaction strategy
`CompactionStrategy` defines how to pick SSTs in all levels for compaction.
```rust
pub trait CompactionStrategy {
fn pick(
&self,
ctx: CompactionContext,
levels: &LevelMetas,
) -> Result<CompactionTask>;
}
```
The most suitable compaction strategy for the time-series scenario would be
a hybrid strategy that combines time-window compaction with size-tiered compaction, just like [Cassandra](https://cassandra.apache.org/doc/latest/cassandra/operating/compaction/twcs.html) and [ScyllaDB](https://docs.scylladb.com/stable/architecture/compaction/compaction-strategies.html#time-window-compaction-strategy-twcs) do.
We can first group SSTs in level n into buckets according to some predefined time window. Within that window,
SSTs are compacted in a size-tiered manner (find SSTs with similar sizes and compact them to level n+1).
SSTs from different time windows are never compacted together.
That strategy guarantees SSTs in each level are mainly sorted in timestamp order, which boosts queries with
explicit timestamp conditions, while size-tiered compaction minimizes the impact on foreground writes.
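A rough sketch of that picking logic, with hypothetical `FileHandle` fields and a fixed window size:
```rust
use std::collections::BTreeMap;

// Hypothetical SST metadata; only the fields this sketch needs.
struct FileHandle {
    start_ts_secs: i64,
    size_bytes: u64,
}

/// Group level-n SSTs into fixed time windows; within each window, order
/// files by size so a size-tiered picker can compact similar-sized files
/// to level n+1. Files from different windows never mix.
fn bucket_by_window(files: &[FileHandle], window_secs: i64) -> Vec<Vec<&FileHandle>> {
    let mut buckets: BTreeMap<i64, Vec<&FileHandle>> = BTreeMap::new();
    for f in files {
        buckets
            .entry(f.start_ts_secs / window_secs)
            .or_default()
            .push(f);
    }
    buckets
        .into_values()
        .map(|mut in_window| {
            // A real strategy would then split these into size tiers.
            in_window.sort_by_key(|f| f.size_bytes);
            in_window
        })
        .collect()
}
```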
### Alternatives
Currently, GreptimeDB's storage engine [only supports two levels](https://github.com/GreptimeTeam/greptimedb/blob/43aefc5d74dfa73b7819cae77b7eb546d8534a41/src/storage/src/sst.rs#L32).
For level 0, we can start with a simple time-window based leveled compaction, which reads all SSTs in level 0,
aligns them to time windows of a fixed duration, and merges them with the SSTs in level 1 within the same time window
to ensure there is only one sorted run in level 1.

View File

@@ -1 +0,0 @@
nightly-2022-12-20

rust-toolchain.toml Normal file
View File

@@ -0,0 +1,2 @@
[toolchain]
channel = "nightly-2023-02-14"

View File

@@ -10,6 +10,7 @@ common-base = { path = "../common/base" }
common-error = { path = "../common/error" }
common-time = { path = "../common/time" }
datatypes = { path = "../datatypes" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "1599ae2a0d1d8f42ee23ed26e4ad7a7b34134c60" }
prost.workspace = true
snafu = { version = "0.7", features = ["backtraces"] }
tonic.workspace = true

View File

@@ -1,85 +0,0 @@
syntax = "proto3";
package greptime.v1;
message Column {
string column_name = 1;
enum SemanticType {
TAG = 0;
FIELD = 1;
TIMESTAMP = 2;
}
SemanticType semantic_type = 2;
message Values {
repeated int32 i8_values = 1;
repeated int32 i16_values = 2;
repeated int32 i32_values = 3;
repeated int64 i64_values = 4;
repeated uint32 u8_values = 5;
repeated uint32 u16_values = 6;
repeated uint32 u32_values = 7;
repeated uint64 u64_values = 8;
repeated float f32_values = 9;
repeated double f64_values = 10;
repeated bool bool_values = 11;
repeated bytes binary_values = 12;
repeated string string_values = 13;
repeated int32 date_values = 14;
repeated int64 datetime_values = 15;
repeated int64 ts_second_values = 16;
repeated int64 ts_millisecond_values = 17;
repeated int64 ts_microsecond_values = 18;
repeated int64 ts_nanosecond_values = 19;
}
// The array of non-null values in this column.
//
// For example: suppose there is a column "foo" that contains some int32 values (1, 2, 3, 4, 5, null, 7, 8, 9, null);
// column:
// column_name: foo
// semantic_type: Tag
// values: 1, 2, 3, 4, 5, 7, 8, 9
// null_masks: 00100000 00000010
Values values = 3;
// Mask maps the positions of null values.
// If a bit in null_mask is 1, it indicates that the column value at that position is null.
bytes null_mask = 4;
// Helpful in creating vector from column.
ColumnDataType datatype = 5;
}
message ColumnDef {
string name = 1;
ColumnDataType datatype = 2;
bool is_nullable = 3;
bytes default_constraint = 4;
}
enum ColumnDataType {
BOOLEAN = 0;
INT8 = 1;
INT16 = 2;
INT32 = 3;
INT64 = 4;
UINT8 = 5;
UINT16 = 6;
UINT32 = 7;
UINT64 = 8;
FLOAT32 = 9;
FLOAT64 = 10;
BINARY = 11;
STRING = 12;
DATE = 13;
DATETIME = 14;
TIMESTAMP_SECOND = 15;
TIMESTAMP_MILLISECOND = 16;
TIMESTAMP_MICROSECOND = 17;
TIMESTAMP_NANOSECOND = 18;
}

View File

@@ -1,41 +0,0 @@
syntax = "proto3";
package greptime.v1;
import "greptime/v1/ddl.proto";
import "greptime/v1/column.proto";
message GreptimeRequest {
oneof request {
InsertRequest insert = 1;
QueryRequest query = 2;
DdlRequest ddl = 3;
}
}
message QueryRequest {
oneof query {
string sql = 1;
bytes logical_plan = 2;
}
}
message InsertRequest {
string schema_name = 1;
string table_name = 2;
// Data is represented here.
repeated Column columns = 3;
// The row_count of all columns, which include null and non-null values.
//
// Note: the row_count of all columns in a InsertRequest must be same.
uint32 row_count = 4;
// The region number of current insert request.
uint32 region_number = 5;
}
message FlightDataExt {
uint32 affected_rows = 1;
}

View File

@@ -1,79 +0,0 @@
syntax = "proto3";
package greptime.v1;
import "greptime/v1/column.proto";
// "Data Definition Language" requests, that create, modify or delete the database structures but not the data.
// `DdlRequest` could carry more information than plain SQL, for example, the "table_id" in `CreateTableExpr`.
// So create a new DDL expr if you need it.
message DdlRequest {
oneof expr {
CreateDatabaseExpr create_database = 1;
CreateTableExpr create_table = 2;
AlterExpr alter = 3;
DropTableExpr drop_table = 4;
}
}
message CreateTableExpr {
string catalog_name = 1;
string schema_name = 2;
string table_name = 3;
string desc = 4;
repeated ColumnDef column_defs = 5;
string time_index = 6;
repeated string primary_keys = 7;
bool create_if_not_exists = 8;
map<string, string> table_options = 9;
TableId table_id = 10;
repeated uint32 region_ids = 11;
}
message AlterExpr {
string catalog_name = 1;
string schema_name = 2;
string table_name = 3;
oneof kind {
AddColumns add_columns = 4;
DropColumns drop_columns = 5;
RenameTable rename_table = 6;
}
}
message DropTableExpr {
string catalog_name = 1;
string schema_name = 2;
string table_name = 3;
}
message CreateDatabaseExpr {
//TODO(hl): maybe rename to schema_name?
string database_name = 1;
bool create_if_not_exists = 2;
}
message AddColumns {
repeated AddColumn add_columns = 1;
}
message DropColumns {
repeated DropColumn drop_columns = 1;
}
message RenameTable {
string new_table_name = 1;
}
message AddColumn {
ColumnDef column_def = 1;
bool is_key = 2;
}
message DropColumn {
string name = 1;
}
message TableId {
uint32 id = 1;
}

View File

@@ -1,48 +0,0 @@
syntax = "proto3";
package greptime.v1.meta;
message RequestHeader {
uint64 protocol_version = 1;
// cluster_id is the ID of the cluster which be sent to.
uint64 cluster_id = 2;
// member_id is the ID of the sender server.
uint64 member_id = 3;
}
message ResponseHeader {
uint64 protocol_version = 1;
// cluster_id is the ID of the cluster which sent the response.
uint64 cluster_id = 2;
Error error = 3;
}
message Error {
int32 code = 1;
string err_msg = 2;
}
message Peer {
uint64 id = 1;
string addr = 2;
}
message TableName {
string catalog_name = 1;
string schema_name = 2;
string table_name = 3;
}
message TimeInterval {
// The unix timestamp in millis of the start of this period.
uint64 start_timestamp_millis = 1;
// The unix timestamp in millis of the end of this period.
uint64 end_timestamp_millis = 2;
}
message KeyValue {
// key is the key in bytes. An empty key is not allowed.
bytes key = 1;
// value is the value held by the key, in bytes.
bytes value = 2;
}

View File

@@ -1,92 +0,0 @@
syntax = "proto3";
package greptime.v1.meta;
import "greptime/v1/meta/common.proto";
service Heartbeat {
// Heartbeat, there may be many contents of the heartbeat, such as:
// 1. Metadata to be registered to meta server and discoverable by other nodes.
// 2. Some performance metrics, such as Load, CPU usage, etc.
// 3. The number of computing tasks being executed.
rpc Heartbeat(stream HeartbeatRequest) returns (stream HeartbeatResponse) {}
// Ask leader's endpoint.
rpc AskLeader(AskLeaderRequest) returns (AskLeaderResponse) {}
}
message HeartbeatRequest {
RequestHeader header = 1;
// Self peer
Peer peer = 2;
// Leader node
bool is_leader = 3;
// Actually reported time interval
TimeInterval report_interval = 4;
// Node stat
NodeStat node_stat = 5;
// Region stats on this node
repeated RegionStat region_stats = 6;
// Follower nodes and stats, empty on follower nodes
repeated ReplicaStat replica_stats = 7;
}
message NodeStat {
// The read capacity units during this period
int64 rcus = 1;
// The write capacity units during this period
int64 wcus = 2;
// How many tables on this node
int64 table_num = 3;
// How many regions on this node
int64 region_num = 4;
double cpu_usage = 5;
double load = 6;
// Read disk IO on this node
double read_io_rate = 7;
// Write disk IO on this node
double write_io_rate = 8;
// Others
map<string, string> attrs = 100;
}
message RegionStat {
uint64 region_id = 1;
TableName table_name = 2;
// The read capacity units during this period
int64 rcus = 3;
// The write capacity units during this period
int64 wcus = 4;
// Approximate bytes of this region
int64 approximate_bytes = 5;
// Approximate number of rows in this region
int64 approximate_rows = 6;
// Others
map<string, string> attrs = 100;
}
message ReplicaStat {
Peer peer = 1;
bool in_sync = 2;
bool is_learner = 3;
}
message HeartbeatResponse {
ResponseHeader header = 1;
repeated bytes payload = 2;
}
message AskLeaderRequest {
RequestHeader header = 1;
}
message AskLeaderResponse {
ResponseHeader header = 1;
Peer leader = 2;
}

View File

@@ -1,98 +0,0 @@
syntax = "proto3";
package greptime.v1.meta;
import "greptime/v1/meta/common.proto";
service Router {
rpc Create(CreateRequest) returns (RouteResponse) {}
// Fetch routing information for tables. The smallest unit is the complete
// routing information(all regions) of a table.
//
// ```text
// table_1
// table_name
// table_schema
// regions
// region_1
// leader_peer
// follower_peer_1, follower_peer_2
// region_2
// leader_peer
// follower_peer_1, follower_peer_2, follower_peer_3
// region_xxx
// table_2
// ...
// ```
//
rpc Route(RouteRequest) returns (RouteResponse) {}
rpc Delete(DeleteRequest) returns (RouteResponse) {}
}
message CreateRequest {
RequestHeader header = 1;
TableName table_name = 2;
repeated Partition partitions = 3;
}
message RouteRequest {
RequestHeader header = 1;
repeated TableName table_names = 2;
}
message DeleteRequest {
RequestHeader header = 1;
TableName table_name = 2;
}
message RouteResponse {
ResponseHeader header = 1;
repeated Peer peers = 2;
repeated TableRoute table_routes = 3;
}
message TableRoute {
Table table = 1;
repeated RegionRoute region_routes = 2;
}
message RegionRoute {
Region region = 1;
// single leader node for write task
uint64 leader_peer_index = 2;
// multiple follower nodes for read task
repeated uint64 follower_peer_indexes = 3;
}
message Table {
uint64 id = 1;
TableName table_name = 2;
bytes table_schema = 3;
}
message Region {
// TODO(LFC): Maybe use message RegionNumber?
uint64 id = 1;
string name = 2;
Partition partition = 3;
map<string, string> attrs = 100;
}
// PARTITION `region_name` VALUES LESS THAN (value_list)
message Partition {
repeated bytes column_list = 1;
repeated bytes value_list = 2;
}
// This message is only for saving into store.
message TableRouteValue {
repeated Peer peers = 1;
TableRoute table_route = 2;
}

View File

@@ -1,159 +0,0 @@
syntax = "proto3";
package greptime.v1.meta;
import "greptime/v1/meta/common.proto";
service Store {
// Range gets the keys in the range from the key-value store.
rpc Range(RangeRequest) returns (RangeResponse);
// Put puts the given key into the key-value store.
rpc Put(PutRequest) returns (PutResponse);
// BatchPut atomically puts the given keys into the key-value store.
rpc BatchPut(BatchPutRequest) returns (BatchPutResponse);
// CompareAndPut atomically puts the value to the given updated
// value if the current value == the expected value.
rpc CompareAndPut(CompareAndPutRequest) returns (CompareAndPutResponse);
// DeleteRange deletes the given range from the key-value store.
rpc DeleteRange(DeleteRangeRequest) returns (DeleteRangeResponse);
// MoveValue atomically renames the key to the given updated key.
rpc MoveValue(MoveValueRequest) returns (MoveValueResponse);
}
message RangeRequest {
RequestHeader header = 1;
// key is the first key for the range, If range_end is not given, the
// request only looks up key.
bytes key = 2;
// range_end is the upper bound on the requested range [key, range_end).
// If range_end is '\0', the range is all keys >= key.
// If range_end is key plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"),
// then the range request gets all keys prefixed with key.
// If both key and range_end are '\0', then the range request returns all
// keys.
bytes range_end = 3;
// limit is a limit on the number of keys returned for the request. When
// limit is set to 0, it is treated as no limit.
int64 limit = 4;
// keys_only when set returns only the keys and not the values.
bool keys_only = 5;
}
message RangeResponse {
ResponseHeader header = 1;
// kvs is the list of key-value pairs matched by the range request.
repeated KeyValue kvs = 2;
// more indicates if there are more keys to return in the requested range.
bool more = 3;
}
message PutRequest {
RequestHeader header = 1;
// key is the key, in bytes, to put into the key-value store.
bytes key = 2;
// value is the value, in bytes, to associate with the key in the
// key-value store.
bytes value = 3;
// If prev_kv is set, gets the previous key-value pair before changing it.
// The previous key-value pair will be returned in the put response.
bool prev_kv = 4;
}
message PutResponse {
ResponseHeader header = 1;
// If prev_kv is set in the request, the previous key-value pair will be
// returned.
KeyValue prev_kv = 2;
}
message BatchPutRequest {
RequestHeader header = 1;
repeated KeyValue kvs = 2;
// If prev_kv is set, gets the previous key-value pairs before changing it.
// The previous key-value pairs will be returned in the batch put response.
bool prev_kv = 3;
}
message BatchPutResponse {
ResponseHeader header = 1;
// If prev_kv is set in the request, the previous key-value pairs will be
// returned.
repeated KeyValue prev_kvs = 2;
}
message CompareAndPutRequest {
RequestHeader header = 1;
// key is the key, in bytes, to put into the key-value store.
bytes key = 2;
// expect is the previous value, in bytes
bytes expect = 3;
// value is the value, in bytes, to associate with the key in the
// key-value store.
bytes value = 4;
}
message CompareAndPutResponse {
ResponseHeader header = 1;
bool success = 2;
KeyValue prev_kv = 3;
}
message DeleteRangeRequest {
RequestHeader header = 1;
// key is the first key to delete in the range.
bytes key = 2;
// range_end is the key following the last key to delete for the range
// [key, range_end).
// If range_end is not given, the range is defined to contain only the key
// argument.
// If range_end is one bit larger than the given key, then the range is all
// the keys with the prefix (the given key).
// If range_end is '\0', the range is all keys greater than or equal to the
// key argument.
bytes range_end = 3;
// If prev_kv is set, gets the previous key-value pairs before deleting it.
// The previous key-value pairs will be returned in the delete response.
bool prev_kv = 4;
}
message DeleteRangeResponse {
ResponseHeader header = 1;
// deleted is the number of keys deleted by the delete range request.
int64 deleted = 2;
// If prev_kv is set in the request, the previous key-value pairs will be
// returned.
repeated KeyValue prev_kvs = 3;
}
message MoveValueRequest {
RequestHeader header = 1;
// If from_key dose not exist, return the value of to_key (if it exists).
// If from_key exists, move the value of from_key to to_key (i.e. rename),
// and return the value.
bytes from_key = 2;
bytes to_key = 3;
}
message MoveValueResponse {
ResponseHeader header = 1;
// If from_key dose not exist, return the value of to_key (if it exists).
// If from_key exists, return the value of from_key.
KeyValue kv = 2;
}

View File

@@ -1,85 +0,0 @@
// Copyright 2016 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package prometheus;
option go_package = "prompb";
import "prometheus/remote/types.proto";
message WriteRequest {
repeated prometheus.TimeSeries timeseries = 1;
// Cortex uses this field to determine the source of the write request.
// We reserve it to avoid any compatibility issues.
reserved 2;
repeated prometheus.MetricMetadata metadata = 3;
}
// ReadRequest represents a remote read request.
message ReadRequest {
repeated Query queries = 1;
enum ResponseType {
// Server will return a single ReadResponse message with matched series that includes list of raw samples.
// It's recommended to use streamed response types instead.
//
// Response headers:
// Content-Type: "application/x-protobuf"
// Content-Encoding: "snappy"
SAMPLES = 0;
// Server will stream a delimited ChunkedReadResponse message that contains XOR encoded chunks for a single series.
// Each message is following varint size and fixed size bigendian uint32 for CRC32 Castagnoli checksum.
//
// Response headers:
// Content-Type: "application/x-streamed-protobuf; proto=prometheus.ChunkedReadResponse"
// Content-Encoding: ""
STREAMED_XOR_CHUNKS = 1;
}
// accepted_response_types allows negotiating the content type of the response.
//
// Response types are taken from the list in the FIFO order. If no response type in `accepted_response_types` is
// implemented by server, error is returned.
// For request that do not contain `accepted_response_types` field the SAMPLES response type will be used.
repeated ResponseType accepted_response_types = 2;
}
// ReadResponse is a response when response_type equals SAMPLES.
message ReadResponse {
// In same order as the request's queries.
repeated QueryResult results = 1;
}
message Query {
int64 start_timestamp_ms = 1;
int64 end_timestamp_ms = 2;
repeated prometheus.LabelMatcher matchers = 3;
prometheus.ReadHints hints = 4;
}
message QueryResult {
// Samples within a time series must be ordered by time.
repeated prometheus.TimeSeries timeseries = 1;
}
// ChunkedReadResponse is a response when response_type equals STREAMED_XOR_CHUNKS.
// We strictly stream full series after series, optionally split by time. This means that a single frame can contain
// partition of the single series, but once a new series is started to be streamed it means that no more chunks will
// be sent for previous one. Series are returned sorted in the same way TSDB block are internally.
message ChunkedReadResponse {
repeated prometheus.ChunkedSeries chunked_series = 1;
// query_index represents an index of the query from ReadRequest.queries these chunks relates to.
int64 query_index = 2;
}

View File

@@ -1,117 +0,0 @@
// Copyright 2017 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package prometheus;
option go_package = "prompb";
message MetricMetadata {
enum MetricType {
UNKNOWN = 0;
COUNTER = 1;
GAUGE = 2;
HISTOGRAM = 3;
GAUGEHISTOGRAM = 4;
SUMMARY = 5;
INFO = 6;
STATESET = 7;
}
// Represents the metric type, these match the set from Prometheus.
// Refer to model/textparse/interface.go for details.
MetricType type = 1;
string metric_family_name = 2;
string help = 4;
string unit = 5;
}
message Sample {
double value = 1;
// timestamp is in ms format, see model/timestamp/timestamp.go for
// conversion from time.Time to Prometheus timestamp.
int64 timestamp = 2;
}
message Exemplar {
// Optional, can be empty.
repeated Label labels = 1;
double value = 2;
// timestamp is in ms format, see model/timestamp/timestamp.go for
// conversion from time.Time to Prometheus timestamp.
int64 timestamp = 3;
}
// TimeSeries represents samples and labels for a single time series.
message TimeSeries {
// For a timeseries to be valid, and for the samples and exemplars
// to be ingested by the remote system properly, the labels field is required.
repeated Label labels = 1;
repeated Sample samples = 2;
repeated Exemplar exemplars = 3;
}
message Label {
string name = 1;
string value = 2;
}
message Labels {
repeated Label labels = 1;
}
// Matcher specifies a rule, which can match or set of labels or not.
message LabelMatcher {
enum Type {
EQ = 0;
NEQ = 1;
RE = 2;
NRE = 3;
}
Type type = 1;
string name = 2;
string value = 3;
}
message ReadHints {
int64 step_ms = 1; // Query step size in milliseconds.
string func = 2; // String representation of surrounding function or aggregation.
int64 start_ms = 3; // Start time in milliseconds.
int64 end_ms = 4; // End time in milliseconds.
repeated string grouping = 5; // List of label names used in aggregation.
bool by = 6; // Indicate whether it is without or by.
int64 range_ms = 7; // Range vector selector range in milliseconds.
}
// Chunk represents a TSDB chunk.
// Time range [min, max] is inclusive.
message Chunk {
int64 min_time_ms = 1;
int64 max_time_ms = 2;
// We require this to match chunkenc.Encoding.
enum Encoding {
UNKNOWN = 0;
XOR = 1;
}
Encoding type = 3;
bytes data = 4;
}
// ChunkedSeries represents single, encoded time series.
message ChunkedSeries {
// Labels should be sorted.
repeated Label labels = 1;
// Chunks will be in start time order and may overlap.
repeated Chunk chunks = 2;
}

View File

@@ -97,7 +97,9 @@ impl TryFrom<ConcreteDataType> for ColumnDataTypeWrapper {
TimestampType::Microsecond(_) => ColumnDataType::TimestampMicrosecond,
TimestampType::Nanosecond(_) => ColumnDataType::TimestampNanosecond,
},
ConcreteDataType::Null(_) | ConcreteDataType::List(_) => {
ConcreteDataType::Null(_)
| ConcreteDataType::List(_)
| ConcreteDataType::Dictionary(_) => {
return error::IntoColumnDataTypeSnafu { from: datatype }.fail()
}
});
@@ -105,125 +107,121 @@ impl TryFrom<ConcreteDataType> for ColumnDataTypeWrapper {
}
}
impl Values {
pub fn with_capacity(datatype: ColumnDataType, capacity: usize) -> Self {
match datatype {
ColumnDataType::Boolean => Values {
bool_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Int8 => Values {
i8_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Int16 => Values {
i16_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Int32 => Values {
i32_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Int64 => Values {
i64_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Uint8 => Values {
u8_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Uint16 => Values {
u16_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Uint32 => Values {
u32_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Uint64 => Values {
u64_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Float32 => Values {
f32_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Float64 => Values {
f64_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Binary => Values {
binary_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::String => Values {
string_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Date => Values {
date_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Datetime => Values {
datetime_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::TimestampSecond => Values {
ts_second_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::TimestampMillisecond => Values {
ts_millisecond_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::TimestampMicrosecond => Values {
ts_microsecond_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::TimestampNanosecond => Values {
ts_nanosecond_values: Vec::with_capacity(capacity),
..Default::default()
},
}
pub fn values_with_capacity(datatype: ColumnDataType, capacity: usize) -> Values {
match datatype {
ColumnDataType::Boolean => Values {
bool_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Int8 => Values {
i8_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Int16 => Values {
i16_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Int32 => Values {
i32_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Int64 => Values {
i64_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Uint8 => Values {
u8_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Uint16 => Values {
u16_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Uint32 => Values {
u32_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Uint64 => Values {
u64_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Float32 => Values {
f32_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Float64 => Values {
f64_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Binary => Values {
binary_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::String => Values {
string_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Date => Values {
date_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::Datetime => Values {
datetime_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::TimestampSecond => Values {
ts_second_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::TimestampMillisecond => Values {
ts_millisecond_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::TimestampMicrosecond => Values {
ts_microsecond_values: Vec::with_capacity(capacity),
..Default::default()
},
ColumnDataType::TimestampNanosecond => Values {
ts_nanosecond_values: Vec::with_capacity(capacity),
..Default::default()
},
}
}
impl Column {
// The type of vals must be same.
pub fn push_vals(&mut self, origin_count: usize, vector: VectorRef) {
let values = self.values.get_or_insert_with(Values::default);
let mut null_mask = BitVec::from_slice(&self.null_mask);
let len = vector.len();
null_mask.reserve_exact(origin_count + len);
null_mask.extend(BitVec::repeat(false, len));
// The type of vals must be same.
pub fn push_vals(column: &mut Column, origin_count: usize, vector: VectorRef) {
let values = column.values.get_or_insert_with(Values::default);
let mut null_mask = BitVec::from_slice(&column.null_mask);
let len = vector.len();
null_mask.reserve_exact(origin_count + len);
null_mask.extend(BitVec::repeat(false, len));
(0..len).into_iter().for_each(|idx| match vector.get(idx) {
Value::Null => null_mask.set(idx + origin_count, true),
Value::Boolean(val) => values.bool_values.push(val),
Value::UInt8(val) => values.u8_values.push(val.into()),
Value::UInt16(val) => values.u16_values.push(val.into()),
Value::UInt32(val) => values.u32_values.push(val),
Value::UInt64(val) => values.u64_values.push(val),
Value::Int8(val) => values.i8_values.push(val.into()),
Value::Int16(val) => values.i16_values.push(val.into()),
Value::Int32(val) => values.i32_values.push(val),
Value::Int64(val) => values.i64_values.push(val),
Value::Float32(val) => values.f32_values.push(*val),
Value::Float64(val) => values.f64_values.push(*val),
Value::String(val) => values.string_values.push(val.as_utf8().to_string()),
Value::Binary(val) => values.binary_values.push(val.to_vec()),
Value::Date(val) => values.date_values.push(val.val()),
Value::DateTime(val) => values.datetime_values.push(val.val()),
Value::Timestamp(val) => match val.unit() {
TimeUnit::Second => values.ts_second_values.push(val.value()),
TimeUnit::Millisecond => values.ts_millisecond_values.push(val.value()),
TimeUnit::Microsecond => values.ts_microsecond_values.push(val.value()),
TimeUnit::Nanosecond => values.ts_nanosecond_values.push(val.value()),
},
Value::List(_) => unreachable!(),
});
self.null_mask = null_mask.into_vec();
}
(0..len).for_each(|idx| match vector.get(idx) {
Value::Null => null_mask.set(idx + origin_count, true),
Value::Boolean(val) => values.bool_values.push(val),
Value::UInt8(val) => values.u8_values.push(val.into()),
Value::UInt16(val) => values.u16_values.push(val.into()),
Value::UInt32(val) => values.u32_values.push(val),
Value::UInt64(val) => values.u64_values.push(val),
Value::Int8(val) => values.i8_values.push(val.into()),
Value::Int16(val) => values.i16_values.push(val.into()),
Value::Int32(val) => values.i32_values.push(val),
Value::Int64(val) => values.i64_values.push(val),
Value::Float32(val) => values.f32_values.push(*val),
Value::Float64(val) => values.f64_values.push(*val),
Value::String(val) => values.string_values.push(val.as_utf8().to_string()),
Value::Binary(val) => values.binary_values.push(val.to_vec()),
Value::Date(val) => values.date_values.push(val.val()),
Value::DateTime(val) => values.datetime_values.push(val.val()),
Value::Timestamp(val) => match val.unit() {
TimeUnit::Second => values.ts_second_values.push(val.value()),
TimeUnit::Millisecond => values.ts_millisecond_values.push(val.value()),
TimeUnit::Microsecond => values.ts_microsecond_values.push(val.value()),
TimeUnit::Nanosecond => values.ts_nanosecond_values.push(val.value()),
},
Value::List(_) => unreachable!(),
});
column.null_mask = null_mask.into_vec();
}
#[cfg(test)]
@@ -239,59 +237,59 @@ mod tests {
#[test]
fn test_values_with_capacity() {
let values = Values::with_capacity(ColumnDataType::Int8, 2);
let values = values_with_capacity(ColumnDataType::Int8, 2);
let values = values.i8_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Int32, 2);
let values = values_with_capacity(ColumnDataType::Int32, 2);
let values = values.i32_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Int64, 2);
let values = values_with_capacity(ColumnDataType::Int64, 2);
let values = values.i64_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Uint8, 2);
let values = values_with_capacity(ColumnDataType::Uint8, 2);
let values = values.u8_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Uint32, 2);
let values = values_with_capacity(ColumnDataType::Uint32, 2);
let values = values.u32_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Uint64, 2);
let values = values_with_capacity(ColumnDataType::Uint64, 2);
let values = values.u64_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Float32, 2);
let values = values_with_capacity(ColumnDataType::Float32, 2);
let values = values.f32_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Float64, 2);
let values = values_with_capacity(ColumnDataType::Float64, 2);
let values = values.f64_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Binary, 2);
let values = values_with_capacity(ColumnDataType::Binary, 2);
let values = values.binary_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Boolean, 2);
let values = values_with_capacity(ColumnDataType::Boolean, 2);
let values = values.bool_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::String, 2);
let values = values_with_capacity(ColumnDataType::String, 2);
let values = values.string_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Date, 2);
let values = values_with_capacity(ColumnDataType::Date, 2);
let values = values.date_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::Datetime, 2);
let values = values_with_capacity(ColumnDataType::Datetime, 2);
let values = values.datetime_values;
assert_eq!(2, values.capacity());
let values = Values::with_capacity(ColumnDataType::TimestampMillisecond, 2);
let values = values_with_capacity(ColumnDataType::TimestampMillisecond, 2);
let values = values.ts_millisecond_values;
assert_eq!(2, values.capacity());
}
@@ -462,28 +460,28 @@ mod tests {
};
let vector = Arc::new(TimestampNanosecondVector::from_vec(vec![1, 2, 3]));
column.push_vals(3, vector);
push_vals(&mut column, 3, vector);
assert_eq!(
vec![1, 2, 3],
column.values.as_ref().unwrap().ts_nanosecond_values
);
let vector = Arc::new(TimestampMillisecondVector::from_vec(vec![4, 5, 6]));
column.push_vals(3, vector);
push_vals(&mut column, 3, vector);
assert_eq!(
vec![4, 5, 6],
column.values.as_ref().unwrap().ts_millisecond_values
);
let vector = Arc::new(TimestampMicrosecondVector::from_vec(vec![7, 8, 9]));
column.push_vals(3, vector);
push_vals(&mut column, 3, vector);
assert_eq!(
vec![7, 8, 9],
column.values.as_ref().unwrap().ts_microsecond_values
);
let vector = Arc::new(TimestampSecondVector::from_vec(vec![10, 11, 12]));
column.push_vals(3, vector);
push_vals(&mut column, 3, vector);
assert_eq!(
vec![10, 11, 12],
column.values.as_ref().unwrap().ts_second_values
@@ -507,7 +505,7 @@ mod tests {
let row_count = 4;
let vector = Arc::new(BooleanVector::from(vec![Some(true), None, Some(false)]));
column.push_vals(row_count, vector);
push_vals(&mut column, row_count, vector);
// Some(false), None, Some(true), Some(true), Some(true), None, Some(false)
let bool_values = column.values.unwrap().bool_values;
assert_eq!(vec![false, true, true, true, false], bool_values);

View File

@@ -14,8 +14,13 @@
pub mod error;
pub mod helper;
pub mod prometheus;
pub mod serde;
pub mod prometheus {
pub mod remote {
pub use greptime_proto::prometheus::remote::*;
}
}
pub mod v1;
pub use prost::DecodeError;

View File

@@ -1,38 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub use prost::DecodeError;
use prost::Message;
use crate::v1::meta::TableRouteValue;
macro_rules! impl_convert_with_bytes {
($data_type: ty) => {
impl From<$data_type> for Vec<u8> {
fn from(entity: $data_type) -> Self {
entity.encode_to_vec()
}
}
impl TryFrom<&[u8]> for $data_type {
type Error = DecodeError;
fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
<$data_type>::decode(value.as_ref())
}
}
};
}
impl_convert_with_bytes!(TableRouteValue);

View File

@@ -12,8 +12,10 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(clippy::derive_partial_eq_without_eq)]
tonic::include_proto!("greptime.v1");
pub mod column_def;
mod column_def;
pub mod meta;
pub mod meta {
pub use greptime_proto::v1::meta::*;
}
pub use greptime_proto::v1::*;

View File

@@ -19,21 +19,24 @@ use crate::error::{self, Result};
use crate::helper::ColumnDataTypeWrapper;
use crate::v1::ColumnDef;
impl ColumnDef {
pub fn try_as_column_schema(&self) -> Result<ColumnSchema> {
let data_type = ColumnDataTypeWrapper::try_new(self.datatype)?;
pub fn try_as_column_schema(column_def: &ColumnDef) -> Result<ColumnSchema> {
let data_type = ColumnDataTypeWrapper::try_new(column_def.datatype)?;
let constraint = if self.default_constraint.is_empty() {
None
} else {
Some(
ColumnDefaultConstraint::try_from(self.default_constraint.as_slice())
.context(error::ConvertColumnDefaultConstraintSnafu { column: &self.name })?,
)
};
let constraint = if column_def.default_constraint.is_empty() {
None
} else {
Some(
ColumnDefaultConstraint::try_from(column_def.default_constraint.as_slice()).context(
error::ConvertColumnDefaultConstraintSnafu {
column: &column_def.name,
},
)?,
)
};
ColumnSchema::new(&self.name, data_type.into(), self.is_nullable)
.with_default_constraint(constraint)
.context(error::InvalidColumnDefaultConstraintSnafu { column: &self.name })
}
ColumnSchema::new(&column_def.name, data_type.into(), column_def.is_nullable)
.with_default_constraint(constraint)
.context(error::InvalidColumnDefaultConstraintSnafu {
column: &column_def.name,
})
}

View File

@@ -1,209 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
tonic::include_proto!("greptime.v1.meta");
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
pub const PROTOCOL_VERSION: u64 = 1;
#[derive(Default)]
pub struct PeerDict {
peers: HashMap<Peer, usize>,
index: usize,
}
impl PeerDict {
pub fn get_or_insert(&mut self, peer: Peer) -> usize {
let index = self.peers.entry(peer).or_insert_with(|| {
let v = self.index;
self.index += 1;
v
});
*index
}
pub fn into_peers(self) -> Vec<Peer> {
let mut array = vec![Peer::default(); self.index];
for (p, i) in self.peers {
array[i] = p;
}
array
}
}
#[allow(clippy::derive_hash_xor_eq)]
impl Hash for Peer {
fn hash<H: Hasher>(&self, state: &mut H) {
self.id.hash(state);
self.addr.hash(state);
}
}
impl Eq for Peer {}
impl RequestHeader {
#[inline]
pub fn new((cluster_id, member_id): (u64, u64)) -> Self {
Self {
protocol_version: PROTOCOL_VERSION,
cluster_id,
member_id,
}
}
}
impl ResponseHeader {
#[inline]
pub fn success(cluster_id: u64) -> Self {
Self {
protocol_version: PROTOCOL_VERSION,
cluster_id,
..Default::default()
}
}
#[inline]
pub fn failed(cluster_id: u64, error: Error) -> Self {
Self {
protocol_version: PROTOCOL_VERSION,
cluster_id,
error: Some(error),
}
}
#[inline]
pub fn is_not_leader(&self) -> bool {
if let Some(error) = &self.error {
if error.code == ErrorCode::NotLeader as i32 {
return true;
}
}
false
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ErrorCode {
NoActiveDatanodes = 1,
NotLeader = 2,
}
impl Error {
#[inline]
pub fn no_active_datanodes() -> Self {
Self {
code: ErrorCode::NoActiveDatanodes as i32,
err_msg: "No active datanodes".to_string(),
}
}
#[inline]
pub fn is_not_leader() -> Self {
Self {
code: ErrorCode::NotLeader as i32,
err_msg: "Current server is not leader".to_string(),
}
}
}
impl HeartbeatResponse {
#[inline]
pub fn is_not_leader(&self) -> bool {
if let Some(header) = &self.header {
return header.is_not_leader();
}
false
}
}
macro_rules! gen_set_header {
($req: ty) => {
impl $req {
#[inline]
pub fn set_header(&mut self, (cluster_id, member_id): (u64, u64)) {
self.header = Some(RequestHeader::new((cluster_id, member_id)));
}
}
};
}
gen_set_header!(HeartbeatRequest);
gen_set_header!(RouteRequest);
gen_set_header!(CreateRequest);
gen_set_header!(RangeRequest);
gen_set_header!(DeleteRequest);
gen_set_header!(PutRequest);
gen_set_header!(BatchPutRequest);
gen_set_header!(CompareAndPutRequest);
gen_set_header!(DeleteRangeRequest);
gen_set_header!(MoveValueRequest);
#[cfg(test)]
mod tests {
use std::vec;
use super::*;
#[test]
fn test_peer_dict() {
let mut dict = PeerDict::default();
dict.get_or_insert(Peer {
id: 1,
addr: "111".to_string(),
});
dict.get_or_insert(Peer {
id: 2,
addr: "222".to_string(),
});
dict.get_or_insert(Peer {
id: 1,
addr: "111".to_string(),
});
dict.get_or_insert(Peer {
id: 1,
addr: "111".to_string(),
});
dict.get_or_insert(Peer {
id: 1,
addr: "111".to_string(),
});
dict.get_or_insert(Peer {
id: 1,
addr: "111".to_string(),
});
dict.get_or_insert(Peer {
id: 2,
addr: "222".to_string(),
});
assert_eq!(2, dict.index);
assert_eq!(
vec![
Peer {
id: 1,
addr: "111".to_string(),
},
Peer {
id: 2,
addr: "222".to_string(),
}
],
dict.into_peers()
);
}
}

View File

@@ -33,7 +33,7 @@ table = { path = "../table" }
tokio.workspace = true
[dev-dependencies]
chrono = "0.4"
chrono.workspace = true
log-store = { path = "../log-store" }
mito = { path = "../mito", features = ["test"] }
object-store = { path = "../object-store" }

View File

@@ -13,14 +13,16 @@
// limitations under the License.
use std::any::Any;
use std::fmt::Debug;
use common_error::ext::{BoxedError, ErrorExt};
use common_error::prelude::{Snafu, StatusCode};
use datafusion::error::DataFusionError;
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::RawSchema;
use snafu::{Backtrace, ErrorCompat};
use crate::DeregisterTableRequest;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
@@ -96,18 +98,15 @@ pub enum Error {
#[snafu(display("Table `{}` already exists", table))]
TableExists { table: String, backtrace: Backtrace },
#[snafu(display("Table `{}` not exist", table))]
TableNotExist { table: String, backtrace: Backtrace },
#[snafu(display("Schema {} already exists", schema))]
SchemaExists {
schema: String,
backtrace: Backtrace,
},
#[snafu(display("Failed to register table"))]
RegisterTable {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Operation {} not implemented yet", operation))]
Unimplemented {
operation: String,
@@ -142,6 +141,17 @@ pub enum Error {
source: table::error::Error,
},
#[snafu(display(
"Failed to deregister table, request: {:?}, source: {}",
request,
source
))]
DeregisterTable {
request: DeregisterTableRequest,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Illegal catalog manager state: {}", msg))]
IllegalManagerState { backtrace: Backtrace, msg: String },
@@ -151,21 +161,17 @@ pub enum Error {
source: table::error::Error,
},
#[snafu(display(
"Invalid table schema in catalog entry, table:{}, schema: {:?}, source: {}",
table_info,
schema,
source
))]
InvalidTableSchema {
table_info: String,
schema: RawSchema,
#[snafu(display("Failure during SchemaProvider operation, source: {}", source))]
SchemaProviderOperation {
#[snafu(backtrace)]
source: datatypes::error::Error,
source: BoxedError,
},
#[snafu(display("Failure during SchemaProvider operation, source: {}", source))]
SchemaProviderOperation { source: BoxedError },
#[snafu(display("{source}"))]
Internal {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Failed to execute system catalog table scan, source: {}", source))]
SystemCatalogTableScanExec {
@@ -178,15 +184,6 @@ pub enum Error {
source: common_catalog::error::Error,
},
#[snafu(display("IO error occurred while fetching catalog info, source: {}", source))]
Io {
backtrace: Backtrace,
source: std::io::Error,
},
#[snafu(display("Local and remote catalog data are inconsistent, msg: {}", msg))]
CatalogStateInconsistent { msg: String, backtrace: Backtrace },
#[snafu(display("Failed to perform metasrv operation, source: {}", source))]
MetaSrv {
#[snafu(backtrace)]
@@ -199,10 +196,10 @@ pub enum Error {
source: datatypes::error::Error,
},
#[snafu(display("Catalog internal error: {}", source))]
Internal {
#[snafu(display("Failed to serialize or deserialize catalog entry: {}", source))]
CatalogEntrySerde {
#[snafu(backtrace)]
source: BoxedError,
source: common_catalog::error::Error,
},
}
@@ -216,35 +213,35 @@ impl ErrorExt for Error {
| Error::TableNotFound { .. }
| Error::IllegalManagerState { .. }
| Error::CatalogNotFound { .. }
| Error::InvalidEntryType { .. }
| Error::CatalogStateInconsistent { .. } => StatusCode::Unexpected,
| Error::InvalidEntryType { .. } => StatusCode::Unexpected,
Error::SystemCatalog { .. }
| Error::EmptyValue { .. }
| Error::ValueDeserialize { .. }
| Error::Io { .. } => StatusCode::StorageUnavailable,
| Error::ValueDeserialize { .. } => StatusCode::StorageUnavailable,
Error::RegisterTable { .. } | Error::SystemCatalogTypeMismatch { .. } => {
StatusCode::Internal
}
Error::SystemCatalogTypeMismatch { .. } => StatusCode::Internal,
Error::ReadSystemCatalog { source, .. } => source.status_code(),
Error::InvalidCatalogValue { source, .. } => source.status_code(),
Error::InvalidCatalogValue { source, .. } | Error::CatalogEntrySerde { source } => {
source.status_code()
}
Error::TableExists { .. } => StatusCode::TableAlreadyExists,
Error::TableNotExist { .. } => StatusCode::TableNotFound,
Error::SchemaExists { .. } => StatusCode::InvalidArguments,
Error::OpenSystemCatalog { source, .. }
| Error::CreateSystemCatalog { source, .. }
| Error::InsertCatalogRecord { source, .. }
| Error::OpenTable { source, .. }
| Error::CreateTable { source, .. } => source.status_code(),
| Error::CreateTable { source, .. }
| Error::DeregisterTable { source, .. } => source.status_code(),
Error::MetaSrv { source, .. } => source.status_code(),
Error::SystemCatalogTableScan { source } => source.status_code(),
Error::SystemCatalogTableScanExec { source } => source.status_code(),
Error::InvalidTableSchema { source, .. } => source.status_code(),
Error::InvalidTableInfoInCatalog { .. } => StatusCode::Unexpected,
Error::Internal { source, .. } | Error::SchemaProviderOperation { source } => {
Error::InvalidTableInfoInCatalog { source } => source.status_code(),
Error::SchemaProviderOperation { source } | Error::Internal { source } => {
source.status_code()
}


@@ -24,10 +24,10 @@ use serde::{Deserialize, Serialize, Serializer};
use snafu::{ensure, OptionExt, ResultExt};
use table::metadata::{RawTableInfo, TableId, TableVersion};
const CATALOG_KEY_PREFIX: &str = "__c";
const SCHEMA_KEY_PREFIX: &str = "__s";
const TABLE_GLOBAL_KEY_PREFIX: &str = "__tg";
const TABLE_REGIONAL_KEY_PREFIX: &str = "__tr";
pub const CATALOG_KEY_PREFIX: &str = "__c";
pub const SCHEMA_KEY_PREFIX: &str = "__s";
pub const TABLE_GLOBAL_KEY_PREFIX: &str = "__tg";
pub const TABLE_REGIONAL_KEY_PREFIX: &str = "__tr";
const ALPHANUMERICS_NAME_PATTERN: &str = "[a-zA-Z_][a-zA-Z0-9_]*";
@@ -370,4 +370,10 @@ mod tests {
let deserialized = TableGlobalValue::parse(serialized).unwrap();
assert_eq!(value, deserialized);
}
#[test]
fn test_table_global_value_compatibility() {
let s = r#"{"node_id":1,"regions_id_map":{"1":[0]},"table_info":{"ident":{"table_id":1098,"version":1},"name":"container_cpu_limit","desc":"Created on insertion","catalog_name":"greptime","schema_name":"dd","meta":{"schema":{"column_schemas":[{"name":"container_id","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"container_name","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"docker_image","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"host","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"image_name","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"image_tag","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"interval","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"runtime","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"short_image","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"type","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"dd_value","data_type":{"Float64":{}},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"ts","data_type":{"Timestamp":{"Millisecond":null}},"is_nullable":false,"is_time_index":true,"default_constraint":null,"metadata":{"greptime:time_index":"true"}},{"name":"git.repository_url","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}}],"timestamp_index":11,"version":1},"primary_key_indices":[0,1,2,3,4,5,6,7,8,9,12],"value_indices":[10,11],"engine":"mito","next_column_id":12,"region_numbers":[],"engine_options":{},"options":{},"created_on":"1970-01-01T00:00:00Z"},"table_type":"Base"}}"#;
TableGlobalValue::parse(s).unwrap();
}
}
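The compatibility test above pins the promise that JSON persisted by older versions still parses. As a standalone sketch of one common serde pattern for this kind of forward compatibility (the struct and field names here are hypothetical, not GreptimeDB's):

use std::collections::HashMap;

use serde::Deserialize;

#[derive(Deserialize)]
struct ValueV2 {
    node_id: u64,
    // Hypothetical field added after v1 payloads were written; `default`
    // lets old JSON that lacks it still deserialize.
    #[serde(default)]
    options: HashMap<String, String>,
}

fn main() {
    let old_json = r#"{"node_id":1}"#;
    let v: ValueV2 = serde_json::from_str(old_json).unwrap();
    assert_eq!(v.node_id, 1);
    assert!(v.options.is_empty());
}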


@@ -154,7 +154,7 @@ pub struct RenameTableRequest {
pub table_id: TableId,
}
#[derive(Clone)]
#[derive(Debug, Clone)]
pub struct DeregisterTableRequest {
pub catalog: String,
pub schema: String,
@@ -167,11 +167,6 @@ pub struct RegisterSchemaRequest {
pub schema: String,
}
/// Formats table fully-qualified name
pub fn format_full_table_name(catalog: &str, schema: &str, table: &str) -> String {
format!("{catalog}.{schema}.{table}")
}
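// For reference (an illustration, not part of the diff): the helper moved to
// common_catalog unchanged, so callers now write e.g.
//
//     assert_eq!(
//         common_catalog::format_full_table_name("greptime", "public", "metrics"),
//         "greptime.public.metrics"
//     );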
pub trait CatalogProviderFactory {
fn create(&self, catalog_name: String) -> CatalogProviderRef;
}
@@ -198,8 +193,10 @@ pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
.create_table(&EngineContext::default(), req.create_table_request.clone())
.await
.with_context(|_| CreateTableSnafu {
table_info: format!(
"{catalog_name}.{schema_name}.{table_name}, id: {table_id}",
table_info: common_catalog::format_full_table_name(
catalog_name,
schema_name,
table_name,
),
})?;
manager


@@ -20,6 +20,7 @@ use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MIN_USER_TABLE_ID,
SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
};
use common_catalog::format_full_table_name;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_telemetry::{error, info};
use datatypes::prelude::ScalarVector;
@@ -34,9 +35,9 @@ use table::table::TableIdProvider;
use table::TableRef;
use crate::error::{
CatalogNotFoundSnafu, IllegalManagerStateSnafu, OpenTableSnafu, ReadSystemCatalogSnafu, Result,
SchemaExistsSnafu, SchemaNotFoundSnafu, SystemCatalogSnafu, SystemCatalogTypeMismatchSnafu,
TableExistsSnafu, TableNotFoundSnafu, UnimplementedSnafu,
self, CatalogNotFoundSnafu, IllegalManagerStateSnafu, OpenTableSnafu, ReadSystemCatalogSnafu,
Result, SchemaExistsSnafu, SchemaNotFoundSnafu, SystemCatalogSnafu,
SystemCatalogTypeMismatchSnafu, TableExistsSnafu, TableNotFoundSnafu,
};
use crate::local::memory::{MemoryCatalogManager, MemoryCatalogProvider, MemorySchemaProvider};
use crate::system::{
@@ -45,10 +46,9 @@ use crate::system::{
};
use crate::tables::SystemCatalog;
use crate::{
format_full_table_name, handle_system_table_request, CatalogList, CatalogManager,
CatalogProvider, CatalogProviderRef, DeregisterTableRequest, RegisterSchemaRequest,
RegisterSystemTableRequest, RegisterTableRequest, RenameTableRequest, SchemaProvider,
SchemaProviderRef,
handle_system_table_request, CatalogList, CatalogManager, CatalogProvider, CatalogProviderRef,
DeregisterTableRequest, RegisterSchemaRequest, RegisterSystemTableRequest,
RegisterTableRequest, RenameTableRequest, SchemaProvider, SchemaProviderRef,
};
/// A `CatalogManager` consists of a system catalog and a bunch of user catalogs.
@@ -252,7 +252,6 @@ impl LocalCatalogManager {
schema_name: t.schema_name.clone(),
table_name: t.table_name.clone(),
table_id: t.table_id,
region_numbers: vec![0],
};
let option = self
@@ -419,11 +418,36 @@ impl CatalogManager for LocalCatalogManager {
.is_ok())
}
async fn deregister_table(&self, _request: DeregisterTableRequest) -> Result<bool> {
UnimplementedSnafu {
operation: "deregister table",
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
{
let started = *self.init_lock.lock().await;
ensure!(started, IllegalManagerStateSnafu { msg: "not started" });
}
{
let _ = self.register_lock.lock().await;
let DeregisterTableRequest {
catalog,
schema,
table_name,
} = &request;
let table_id = self
.catalogs
.table(catalog, schema, table_name)?
.with_context(|| error::TableNotExistSnafu {
table: format!("{catalog}.{schema}.{table_name}"),
})?
.table_info()
.ident
.table_id;
if !self.system.deregister_table(&request, table_id).await? {
return Ok(false);
}
self.catalogs.deregister_table(request).await
}
.fail()
}
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool> {

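A hypothetical call site for the deregistration path added above; `DeregisterTableRequest` and the `Ok(true)`/`Ok(false)` contract come from this diff, while the wrapper function and module paths are assumptions:

use catalog::{CatalogManager, DeregisterTableRequest};

async fn drop_table_registration(manager: &impl CatalogManager) -> catalog::error::Result<bool> {
    let request = DeregisterTableRequest {
        catalog: "greptime".to_string(),
        schema: "public".to_string(),
        table_name: "my_table".to_string(),
    };
    // Ok(false) if the system catalog row could not be deleted; Ok(true) once
    // the in-memory registration has been removed as well.
    manager.deregister_table(request).await
}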

@@ -20,13 +20,13 @@ use std::sync::{Arc, RwLock};
use common_catalog::consts::MIN_USER_TABLE_ID;
use common_telemetry::error;
use snafu::OptionExt;
use snafu::{ensure, OptionExt};
use table::metadata::TableId;
use table::table::TableIdProvider;
use table::TableRef;
use crate::error::{
CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu, TableNotFoundSnafu,
self, CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu, TableNotFoundSnafu,
};
use crate::schema::SchemaProvider;
use crate::{
@@ -250,6 +250,10 @@ impl CatalogProvider for MemoryCatalogProvider {
schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>> {
let mut schemas = self.schemas.write().unwrap();
ensure!(
!schemas.contains_key(&name),
error::SchemaExistsSnafu { schema: &name }
);
Ok(schemas.insert(name, schema))
}
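With the `ensure!` guard above, registering a schema name twice now surfaces a `SchemaExists` error instead of silently replacing the previous provider. A sketch against the trait (provider construction omitted; signature as shown in this diff):

use catalog::{CatalogProvider, SchemaProviderRef};

fn register_twice(provider: &dyn CatalogProvider, schema: SchemaProviderRef) {
    // First registration succeeds; Ok(None) means nothing was replaced.
    assert!(provider
        .register_schema("public".to_string(), schema.clone())
        .is_ok());
    // A second registration under the same name is now rejected.
    assert!(provider
        .register_schema("public".to_string(), schema)
        .is_err());
}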


@@ -32,8 +32,8 @@ use table::TableRef;
use tokio::sync::Mutex;
use crate::error::{
CatalogNotFoundSnafu, CreateTableSnafu, InvalidCatalogValueSnafu, InvalidTableSchemaSnafu,
OpenTableSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu, UnimplementedSnafu,
CatalogNotFoundSnafu, CreateTableSnafu, InvalidCatalogValueSnafu, OpenTableSnafu, Result,
SchemaNotFoundSnafu, TableExistsSnafu, UnimplementedSnafu,
};
use crate::helper::{
build_catalog_prefix, build_schema_prefix, build_table_global_prefix, CatalogKey, CatalogValue,
@@ -324,7 +324,6 @@ impl RemoteCatalogManager {
schema_name: schema_name.clone(),
table_name: table_name.clone(),
table_id,
region_numbers: region_numbers.clone(),
};
match self
.engine
@@ -347,21 +346,13 @@ impl RemoteCatalogManager {
);
let meta = &table_info.meta;
let schema = meta
.schema
.clone()
.try_into()
.context(InvalidTableSchemaSnafu {
table_info: format!("{catalog_name}.{schema_name}.{table_name}"),
schema: meta.schema.clone(),
})?;
let req = CreateTableRequest {
id: table_id,
catalog_name: catalog_name.clone(),
schema_name: schema_name.clone(),
table_name: table_name.clone(),
desc: None,
schema: Arc::new(schema),
schema: meta.schema.clone(),
region_numbers: region_numbers.clone(),
primary_key_indices: meta.primary_key_indices.clone(),
create_if_not_exists: true,
@@ -431,11 +422,18 @@ impl CatalogManager for RemoteCatalogManager {
Ok(true)
}
async fn deregister_table(&self, _request: DeregisterTableRequest) -> Result<bool> {
UnimplementedSnafu {
operation: "deregister table",
}
.fail()
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
let catalog_name = &request.catalog;
let schema_name = &request.schema;
let schema = self
.schema(catalog_name, schema_name)?
.context(SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
let result = schema.deregister_table(&request.table_name)?;
Ok(result.is_none())
}
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool> {


@@ -25,29 +25,29 @@ use common_query::physical_plan::{PhysicalPlanRef, SessionContext};
use common_recordbatch::SendableRecordBatchStream;
use common_telemetry::debug;
use common_time::util;
use datatypes::prelude::{ConcreteDataType, ScalarVector};
use datatypes::schema::{ColumnSchema, Schema, SchemaBuilder, SchemaRef};
use datatypes::prelude::{ConcreteDataType, ScalarVector, VectorRef};
use datatypes::schema::{ColumnSchema, RawSchema, SchemaRef};
use datatypes::vectors::{BinaryVector, TimestampMillisecondVector, UInt8Vector};
use serde::{Deserialize, Serialize};
use snafu::{ensure, OptionExt, ResultExt};
use table::engine::{EngineContext, TableEngineRef};
use table::metadata::{TableId, TableInfoRef};
use table::requests::{CreateTableRequest, InsertRequest, OpenTableRequest};
use table::requests::{
CreateTableRequest, DeleteRequest, InsertRequest, OpenTableRequest, TableOptions,
};
use table::{Table, TableRef};
use crate::error::{
self, CreateSystemCatalogSnafu, EmptyValueSnafu, Error, InvalidEntryTypeSnafu, InvalidKeySnafu,
OpenSystemCatalogSnafu, Result, ValueDeserializeSnafu,
};
use crate::DeregisterTableRequest;
pub const ENTRY_TYPE_INDEX: usize = 0;
pub const KEY_INDEX: usize = 1;
pub const VALUE_INDEX: usize = 3;
pub struct SystemCatalogTable {
table_info: TableInfoRef,
pub table: TableRef,
}
pub struct SystemCatalogTable(TableRef);
#[async_trait::async_trait]
impl Table for SystemCatalogTable {
@@ -56,25 +56,29 @@ impl Table for SystemCatalogTable {
}
fn schema(&self) -> SchemaRef {
self.table_info.meta.schema.clone()
self.0.schema()
}
async fn scan(
&self,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
_limit: Option<usize>,
projection: Option<&Vec<usize>>,
filters: &[Expr],
limit: Option<usize>,
) -> table::Result<PhysicalPlanRef> {
panic!("System catalog table does not support scan!")
self.0.scan(projection, filters, limit).await
}
/// Insert values into table.
async fn insert(&self, request: InsertRequest) -> table::error::Result<usize> {
self.table.insert(request).await
self.0.insert(request).await
}
fn table_info(&self) -> TableInfoRef {
self.table_info.clone()
self.0.table_info()
}
async fn delete(&self, request: DeleteRequest) -> table::Result<usize> {
self.0.delete(request).await
}
}
@@ -85,9 +89,8 @@ impl SystemCatalogTable {
schema_name: INFORMATION_SCHEMA_NAME.to_string(),
table_name: SYSTEM_CATALOG_TABLE_NAME.to_string(),
table_id: SYSTEM_CATALOG_TABLE_ID,
region_numbers: vec![0],
};
let schema = Arc::new(build_system_catalog_schema());
let schema = build_system_catalog_schema();
let ctx = EngineContext::default();
if let Some(table) = engine
@@ -95,10 +98,7 @@ impl SystemCatalogTable {
.await
.context(OpenSystemCatalogSnafu)?
{
Ok(Self {
table_info: table.table_info(),
table,
})
Ok(Self(table))
} else {
// system catalog table is not yet created, try to create
let request = CreateTableRequest {
@@ -107,19 +107,18 @@ impl SystemCatalogTable {
schema_name: INFORMATION_SCHEMA_NAME.to_string(),
table_name: SYSTEM_CATALOG_TABLE_NAME.to_string(),
desc: Some("System catalog table".to_string()),
schema: schema.clone(),
schema,
region_numbers: vec![0],
primary_key_indices: vec![ENTRY_TYPE_INDEX, KEY_INDEX],
create_if_not_exists: true,
table_options: HashMap::new(),
table_options: TableOptions::default(),
};
let table = engine
.create_table(&ctx, request)
.await
.context(CreateSystemCatalogSnafu)?;
let table_info = table.table_info();
Ok(Self { table, table_info })
Ok(Self(table))
}
}
@@ -128,7 +127,6 @@ impl SystemCatalogTable {
let full_projection = None;
let ctx = SessionContext::new();
let scan = self
.table
.scan(full_projection, &[], None)
.await
.context(error::SystemCatalogTableScanSnafu)?;
@@ -147,7 +145,7 @@ impl SystemCatalogTable {
/// - value: JSON-encoded value of entry's metadata.
/// - gmt_created: create time of this metadata.
/// - gmt_modified: last updated time of this metadata.
fn build_system_catalog_schema() -> Schema {
fn build_system_catalog_schema() -> RawSchema {
let cols = vec![
ColumnSchema::new(
"entry_type".to_string(),
@@ -182,8 +180,7 @@ fn build_system_catalog_schema() -> Schema {
),
];
// The schema of this table must be valid.
SchemaBuilder::try_from(cols).unwrap().build().unwrap()
RawSchema::new(cols)
}
/// Formats key string for table entry in system catalog
@@ -208,6 +205,34 @@ pub fn build_table_insert_request(
)
}
pub(crate) fn build_table_deletion_request(
request: &DeregisterTableRequest,
table_id: TableId,
) -> DeleteRequest {
let table_key = format_table_entry_key(&request.catalog, &request.schema, table_id);
DeleteRequest {
key_column_values: build_primary_key_columns(EntryType::Table, table_key.as_bytes()),
}
}
fn build_primary_key_columns(entry_type: EntryType, key: &[u8]) -> HashMap<String, VectorRef> {
let mut m = HashMap::with_capacity(3);
m.insert(
"entry_type".to_string(),
Arc::new(UInt8Vector::from_slice(&[entry_type as u8])) as _,
);
m.insert(
"key".to_string(),
Arc::new(BinaryVector::from_slice(&[key])) as _,
);
// The timestamp in the key part is intentionally set to 0
m.insert(
"timestamp".to_string(),
Arc::new(TimestampMillisecondVector::from_slice(&[0])) as _,
);
m
}
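// Illustration (not part of the diff): because insert and delete share this
// helper, the deletion for a table entry matches the inserted row's primary
// key exactly. E.g., with hypothetical values:
//
//     let delete = build_table_deletion_request(&request, 42);
//     // exactly the three key columns: entry_type, key, timestamp
//     assert_eq!(delete.key_column_values.len(), 3);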
pub fn build_schema_insert_request(catalog_name: String, schema_name: String) -> InsertRequest {
let full_schema_name = format!("{catalog_name}.{schema_name}");
build_insert_request(
@@ -220,22 +245,10 @@ pub fn build_schema_insert_request(catalog_name: String, schema_name: String) ->
}
pub fn build_insert_request(entry_type: EntryType, key: &[u8], value: &[u8]) -> InsertRequest {
let primary_key_columns = build_primary_key_columns(entry_type, key);
let mut columns_values = HashMap::with_capacity(6);
columns_values.insert(
"entry_type".to_string(),
Arc::new(UInt8Vector::from_slice(&[entry_type as u8])) as _,
);
columns_values.insert(
"key".to_string(),
Arc::new(BinaryVector::from_slice(&[key])) as _,
);
// Timestamp in key part is intentionally left to 0
columns_values.insert(
"timestamp".to_string(),
Arc::new(TimestampMillisecondVector::from_slice(&[0])) as _,
);
columns_values.extend(primary_key_columns.into_iter());
columns_values.insert(
"value".to_string(),
@@ -258,6 +271,7 @@ pub fn build_insert_request(entry_type: EntryType, key: &[u8], value: &[u8]) ->
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: SYSTEM_CATALOG_TABLE_NAME.to_string(),
columns_values,
region_number: 0, // system catalog table has only one region
}
}
@@ -380,10 +394,13 @@ pub struct TableEntryValue {
#[cfg(test)]
mod tests {
use common_recordbatch::RecordBatches;
use datatypes::value::Value;
use log_store::NoopLogStore;
use mito::config::EngineConfig;
use mito::engine::MitoEngine;
use object_store::ObjectStore;
use object_store::{ObjectStore, ObjectStoreBuilder};
use storage::compaction::noop::NoopCompactionScheduler;
use storage::config::EngineConfig as StorageEngineConfig;
use storage::EngineImpl;
use table::metadata::TableType;
@@ -465,17 +482,19 @@ mod tests {
pub async fn prepare_table_engine() -> (TempDir, TableEngineRef) {
let dir = TempDir::new("system-table-test").unwrap();
let store_dir = dir.path().to_string_lossy();
let accessor = object_store::backend::fs::Builder::default()
let accessor = object_store::services::Fs::default()
.root(&store_dir)
.build()
.unwrap();
let object_store = ObjectStore::new(accessor);
let object_store = ObjectStore::new(accessor).finish();
let noop_compaction_scheduler = Arc::new(NoopCompactionScheduler::default());
let table_engine = Arc::new(MitoEngine::new(
EngineConfig::default(),
EngineImpl::new(
StorageEngineConfig::default(),
Arc::new(NoopLogStore::default()),
object_store.clone(),
noop_compaction_scheduler,
),
object_store,
));
@@ -500,4 +519,53 @@ mod tests {
assert_eq!(SYSTEM_CATALOG_NAME, info.catalog_name);
assert_eq!(INFORMATION_SCHEMA_NAME, info.schema_name);
}
#[tokio::test]
async fn test_system_catalog_table_records() {
let (_, table_engine) = prepare_table_engine().await;
let catalog_table = SystemCatalogTable::new(table_engine).await.unwrap();
let table_insertion = build_table_insert_request(
DEFAULT_CATALOG_NAME.to_string(),
DEFAULT_SCHEMA_NAME.to_string(),
"my_table".to_string(),
1,
);
let result = catalog_table.insert(table_insertion).await.unwrap();
assert_eq!(result, 1);
let records = catalog_table.records().await.unwrap();
let mut batches = RecordBatches::try_collect(records).await.unwrap().take();
assert_eq!(batches.len(), 1);
let batch = batches.remove(0);
assert_eq!(batch.num_rows(), 1);
let row = batch.rows().next().unwrap();
let Value::UInt8(entry_type) = row[0] else { unreachable!() };
let Value::Binary(key) = row[1].clone() else { unreachable!() };
let Value::Binary(value) = row[3].clone() else { unreachable!() };
let entry = decode_system_catalog(Some(entry_type), Some(&*key), Some(&*value)).unwrap();
let expected = Entry::Table(TableEntry {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "my_table".to_string(),
table_id: 1,
});
assert_eq!(entry, expected);
let table_deletion = build_table_deletion_request(
&DeregisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "my_table".to_string(),
},
1,
);
let result = catalog_table.delete(table_deletion).await.unwrap();
assert_eq!(result, 1);
let records = catalog_table.records().await.unwrap();
let batches = RecordBatches::try_collect(records).await.unwrap().take();
assert_eq!(batches.len(), 0);
}
}


@@ -38,9 +38,14 @@ use table::metadata::{TableId, TableInfoRef};
use table::table::scan::SimpleTableScan;
use table::{Table, TableRef};
use crate::error::{Error, InsertCatalogRecordSnafu};
use crate::system::{build_schema_insert_request, build_table_insert_request, SystemCatalogTable};
use crate::{CatalogListRef, CatalogProvider, SchemaProvider, SchemaProviderRef};
use crate::error::{self, Error, InsertCatalogRecordSnafu, Result as CatalogResult};
use crate::system::{
build_schema_insert_request, build_table_deletion_request, build_table_insert_request,
SystemCatalogTable,
};
use crate::{
CatalogListRef, CatalogProvider, DeregisterTableRequest, SchemaProvider, SchemaProviderRef,
};
/// Tables holds all tables created by users.
pub struct Tables {
@@ -157,16 +162,10 @@ fn tables_to_record_batch(
for table_name in table_names {
// Safety: All these vectors are string type.
catalog_vec
.push_value_ref(ValueRef::String(catalog_name))
.unwrap();
schema_vec
.push_value_ref(ValueRef::String(schema_name))
.unwrap();
table_name_vec
.push_value_ref(ValueRef::String(&table_name))
.unwrap();
engine_vec.push_value_ref(ValueRef::String(engine)).unwrap();
catalog_vec.push_value_ref(ValueRef::String(catalog_name));
schema_vec.push_value_ref(ValueRef::String(schema_name));
table_name_vec.push_value_ref(ValueRef::String(&table_name));
engine_vec.push_value_ref(ValueRef::String(engine));
}
vec![
@@ -279,6 +278,21 @@ impl SystemCatalog {
.context(InsertCatalogRecordSnafu)
}
pub(crate) async fn deregister_table(
&self,
request: &DeregisterTableRequest,
table_id: TableId,
) -> CatalogResult<bool> {
self.information_schema
.system
.delete(build_table_deletion_request(request, table_id))
.await
.map(|x| x == 1)
.with_context(|_| error::DeregisterTableSnafu {
request: request.clone(),
})
}
pub async fn register_schema(
&self,
catalog: String,


@@ -147,6 +147,7 @@ impl TableEngine for MockTableEngine {
let table_id = TableId::from_str(
request
.table_options
.extra_options
.get("table_id")
.unwrap_or(&default_table_id),
)


@@ -28,7 +28,7 @@ mod tests {
};
use catalog::{CatalogList, CatalogManager, RegisterTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use datatypes::schema::Schema;
use datatypes::schema::RawSchema;
use futures_util::StreamExt;
use table::engine::{EngineContext, TableEngineRef};
use table::requests::CreateTableRequest;
@@ -116,7 +116,7 @@ mod tests {
let schema_name = "nonexistent_schema".to_string();
let table_name = "fail_table".to_string();
// this schema has no effect
let table_schema = Arc::new(Schema::new(vec![]));
let table_schema = RawSchema::new(vec![]);
let table = table_engine
.create_table(
&EngineContext {},
@@ -126,7 +126,7 @@ mod tests {
schema_name: schema_name.clone(),
table_name: table_name.clone(),
desc: None,
schema: table_schema.clone(),
schema: table_schema,
region_numbers: vec![0],
primary_key_indices: vec![],
create_if_not_exists: false,
@@ -176,7 +176,7 @@ mod tests {
let table_name = "test_table".to_string();
let table_id = 1;
// this schema has no effect
let table_schema = Arc::new(Schema::new(vec![]));
let table_schema = RawSchema::new(vec![]);
let table = table_engine
.create_table(
&EngineContext {},
@@ -186,7 +186,7 @@ mod tests {
schema_name: schema_name.clone(),
table_name: table_name.clone(),
desc: None,
schema: table_schema.clone(),
schema: table_schema,
region_numbers: vec![0],
primary_key_indices: vec![],
create_if_not_exists: false,
@@ -246,7 +246,7 @@ mod tests {
schema_name: schema_name.clone(),
table_name: "".to_string(),
desc: None,
schema: Arc::new(Schema::new(vec![])),
schema: RawSchema::new(vec![]),
region_numbers: vec![0],
primary_key_indices: vec![],
create_if_not_exists: false,


@@ -9,6 +9,7 @@ api = { path = "../api" }
arrow-flight.workspace = true
async-stream.workspace = true
common-base = { path = "../common/base" }
common-catalog = { path = "../common/catalog" }
common-error = { path = "../common/error" }
common-grpc = { path = "../common/grpc" }
common-grpc-expr = { path = "../common/grpc-expr" }
@@ -31,12 +32,8 @@ substrait = { path = "../common/substrait" }
tokio.workspace = true
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
# TODO(ruihang): upgrade to 0.11 once substrait-rs supports it.
[dev-dependencies.prost_09]
package = "prost"
version = "0.9"
prost.workspace = true
[dev-dependencies.substrait_proto]
package = "substrait"
version = "0.2"
version = "0.4"


@@ -14,11 +14,12 @@
use api::v1::{ColumnDataType, ColumnDef, CreateTableExpr, TableId};
use client::{Client, Database};
use prost_09::Message;
use substrait_proto::protobuf::plan_rel::RelType as PlanRelType;
use substrait_proto::protobuf::read_rel::{NamedTable, ReadType};
use substrait_proto::protobuf::rel::RelType;
use substrait_proto::protobuf::{PlanRel, ReadRel, Rel};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use prost::Message;
use substrait_proto::proto::plan_rel::RelType as PlanRelType;
use substrait_proto::proto::read_rel::{NamedTable, ReadType};
use substrait_proto::proto::rel::RelType;
use substrait_proto::proto::{PlanRel, ReadRel, Rel};
use tracing::{event, Level};
fn main() {
@@ -65,13 +66,12 @@ async fn run() {
region_ids: vec![0],
};
let db = Database::new("create table", client.clone());
let db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
let result = db.create(create_table_expr).await.unwrap();
event!(Level::INFO, "create table result: {:#?}", result);
let logical = mock_logical_plan();
event!(Level::INFO, "plan size: {:#?}", logical.len());
let db = Database::new("greptime", client);
let result = db.logical_plan(logical).await.unwrap();
event!(Level::INFO, "result: {:#?}", result);
@@ -89,12 +89,8 @@ fn mock_logical_plan() -> Vec<u8> {
let read_type = ReadType::NamedTable(named_table);
let read_rel = ReadRel {
common: None,
base_schema: None,
filter: None,
projection: None,
advanced_extension: None,
read_type: Some(read_type),
..Default::default()
};
let mut buf = vec![];


@@ -14,12 +14,13 @@
use std::str::FromStr;
use api::v1::auth_header::AuthScheme;
use api::v1::ddl_request::Expr as DdlExpr;
use api::v1::greptime_request::Request;
use api::v1::query_request::Query;
use api::v1::{
AlterExpr, CreateTableExpr, DdlRequest, DropTableExpr, GreptimeRequest, InsertRequest,
QueryRequest,
AlterExpr, AuthHeader, CreateTableExpr, DdlRequest, DropTableExpr, GreptimeRequest,
InsertRequest, QueryRequest, RequestHeader,
};
use arrow_flight::{FlightData, Ticket};
use common_error::prelude::*;
@@ -34,83 +35,98 @@ use crate::{error, Client, Result};
#[derive(Clone, Debug)]
pub struct Database {
name: String,
// The "catalog" and "schema" to be used in processing the requests at the server side.
// They are the "hint" or "context", just like how the "database" in "USE" statement is treated in MySQL.
// They will be carried in the request header.
catalog: String,
schema: String,
client: Client,
ctx: FlightContext,
}
impl Database {
pub fn new(name: impl Into<String>, client: Client) -> Self {
pub fn new(catalog: impl Into<String>, schema: impl Into<String>, client: Client) -> Self {
Self {
name: name.into(),
catalog: catalog.into(),
schema: schema.into(),
client,
ctx: FlightContext::default(),
}
}
pub fn name(&self) -> &str {
&self.name
pub fn set_catalog(&mut self, catalog: impl Into<String>) {
self.catalog = catalog.into();
}
pub fn set_schema(&mut self, schema: impl Into<String>) {
self.schema = schema.into();
}
pub fn set_auth(&mut self, auth: AuthScheme) {
self.ctx.auth_header = Some(AuthHeader {
auth_scheme: Some(auth),
});
}
pub async fn insert(&self, request: InsertRequest) -> Result<Output> {
self.do_get(GreptimeRequest {
request: Some(Request::Insert(request)),
})
.await
self.do_get(Request::Insert(request)).await
}
pub async fn sql(&self, sql: &str) -> Result<Output> {
self.do_get(GreptimeRequest {
request: Some(Request::Query(QueryRequest {
query: Some(Query::Sql(sql.to_string())),
})),
})
self.do_get(Request::Query(QueryRequest {
query: Some(Query::Sql(sql.to_string())),
}))
.await
}
pub async fn logical_plan(&self, logical_plan: Vec<u8>) -> Result<Output> {
self.do_get(GreptimeRequest {
request: Some(Request::Query(QueryRequest {
query: Some(Query::LogicalPlan(logical_plan)),
})),
})
self.do_get(Request::Query(QueryRequest {
query: Some(Query::LogicalPlan(logical_plan)),
}))
.await
}
pub async fn create(&self, expr: CreateTableExpr) -> Result<Output> {
self.do_get(GreptimeRequest {
request: Some(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::CreateTable(expr)),
})),
})
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::CreateTable(expr)),
}))
.await
}
pub async fn alter(&self, expr: AlterExpr) -> Result<Output> {
self.do_get(GreptimeRequest {
request: Some(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::Alter(expr)),
})),
})
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::Alter(expr)),
}))
.await
}
pub async fn drop_table(&self, expr: DropTableExpr) -> Result<Output> {
self.do_get(GreptimeRequest {
request: Some(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::DropTable(expr)),
})),
})
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::DropTable(expr)),
}))
.await
}
async fn do_get(&self, request: GreptimeRequest) -> Result<Output> {
async fn do_get(&self, request: Request) -> Result<Output> {
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
}),
request: Some(request),
};
let request = Ticket {
ticket: request.encode_to_vec(),
};
let mut client = self.client.make_client()?;
// TODO(LFC): Streaming get flight data.
let flight_data: Vec<FlightData> = client
.mut_inner()
.do_get(Ticket {
ticket: request.encode_to_vec(),
})
.do_get(request)
.and_then(|response| response.into_inner().try_collect())
.await
.map_err(|e| {
@@ -157,12 +173,18 @@ fn get_metadata_value(e: &tonic::Status, key: &str) -> Option<String> {
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
}
#[derive(Default, Debug, Clone)]
pub struct FlightContext {
auth_header: Option<AuthHeader>,
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use api::helper::ColumnDataTypeWrapper;
use api::v1::Column;
use api::v1::auth_header::AuthScheme;
use api::v1::{AuthHeader, Basic, Column};
use common_grpc::select::{null_mask, values};
use common_grpc_expr::column_to_vector;
use datatypes::prelude::{Vector, VectorRef};
@@ -172,6 +194,8 @@ mod tests {
UInt32Vector, UInt64Vector, UInt8Vector,
};
use crate::database::FlightContext;
#[test]
fn test_column_to_vector() {
let mut column = create_test_column(Arc::new(BooleanVector::from(vec![true])));
@@ -255,4 +279,26 @@ mod tests {
datatype: wrapper.datatype() as i32,
}
}
#[test]
fn test_flight_ctx() {
let mut ctx = FlightContext::default();
assert!(ctx.auth_header.is_none());
let basic = AuthScheme::Basic(Basic {
username: "u".to_string(),
password: "p".to_string(),
});
ctx.auth_header = Some(AuthHeader {
auth_scheme: Some(basic),
});
assert!(matches!(
ctx.auth_header,
Some(AuthHeader {
auth_scheme: Some(AuthScheme::Basic(_)),
})
))
}
}
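A minimal end-to-end sketch of the reworked client API (the address is illustrative, and the re-exported `Result` and constant paths are assumptions based on this diff):

use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};

async fn demo() -> client::Result<()> {
    let client = Client::with_urls(["127.0.0.1:4001"]);
    let mut db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
    // The catalog/schema context rides along in the RequestHeader of every call.
    db.set_schema("my_db");
    let _output = db.sql("SELECT 1").await?;
    Ok(())
}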


@@ -18,6 +18,7 @@ mod error;
pub mod load_balance;
pub use api;
pub use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
pub use self::client::Client;
pub use self::database::Database;


@@ -12,22 +12,30 @@ path = "src/bin/greptime.rs"
[dependencies]
anymap = "1.0.0-beta.2"
clap = { version = "3.1", features = ["derive"] }
client = { path = "../client" }
common-base = { path = "../common/base" }
common-error = { path = "../common/error" }
common-query = { path = "../common/query" }
common-recordbatch = { path = "../common/recordbatch" }
common-telemetry = { path = "../common/telemetry", features = [
"deadlock_detection",
] }
datanode = { path = "../datanode" }
either = "1.8"
frontend = { path = "../frontend" }
futures.workspace = true
meta-client = { path = "../meta-client" }
meta-srv = { path = "../meta-srv" }
nu-ansi-term = "0.46"
rustyline = "10.1"
serde.workspace = true
servers = { path = "../servers" }
snafu.workspace = true
tokio = { version = "1.18", features = ["full"] }
tokio.workspace = true
toml = "0.5"
[dev-dependencies]
rexpect = "0.5"
serde.workspace = true
tempdir = "0.3"


@@ -16,7 +16,7 @@ use std::fmt;
use clap::Parser;
use cmd::error::Result;
use cmd::{datanode, frontend, metasrv, standalone};
use cmd::{cli, datanode, frontend, metasrv, standalone};
use common_telemetry::logging::{error, info};
#[derive(Parser)]
@@ -46,6 +46,8 @@ enum SubCommand {
Metasrv(metasrv::Command),
#[clap(name = "standalone")]
Standalone(standalone::Command),
#[clap(name = "cli")]
Cli(cli::Command),
}
impl SubCommand {
@@ -55,6 +57,7 @@ impl SubCommand {
SubCommand::Frontend(cmd) => cmd.run().await,
SubCommand::Metasrv(cmd) => cmd.run().await,
SubCommand::Standalone(cmd) => cmd.run().await,
SubCommand::Cli(cmd) => cmd.run().await,
}
}
}
@@ -66,6 +69,7 @@ impl fmt::Display for SubCommand {
SubCommand::Frontend(..) => write!(f, "greptime-frontend"),
SubCommand::Metasrv(..) => write!(f, "greptime-metasrv"),
SubCommand::Standalone(..) => write!(f, "greptime-standalone"),
SubCommand::Cli(_) => write!(f, "greptime-cli"),
}
}
}

src/cmd/src/cli.rs (new file, 62 lines)

@@ -0,0 +1,62 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod cmd;
mod helper;
mod repl;
use clap::Parser;
use repl::Repl;
use crate::error::Result;
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
cmd: SubCommand,
}
impl Command {
pub async fn run(self) -> Result<()> {
self.cmd.run().await
}
}
#[derive(Parser)]
enum SubCommand {
Attach(AttachCommand),
}
impl SubCommand {
async fn run(self) -> Result<()> {
match self {
SubCommand::Attach(cmd) => cmd.run().await,
}
}
}
#[derive(Debug, Parser)]
pub(crate) struct AttachCommand {
#[clap(long)]
pub(crate) grpc_addr: String,
#[clap(long, action)]
pub(crate) disable_helper: bool,
}
impl AttachCommand {
async fn run(self) -> Result<()> {
let mut repl = Repl::try_new(&self)?;
repl.run().await
}
}
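Once wired into the main binary (see the `SubCommand::Cli` arm above), attaching the REPL to a running instance would look something like this, with an illustrative address:

    greptime cli attach --grpc-addr 127.0.0.1:4001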

src/cmd/src/cli/cmd.rs (new file, 154 lines)

@@ -0,0 +1,154 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::error::{Error, InvalidReplCommandSnafu, Result};
/// Represents the parsed command from the user (which may span multiple lines)
#[derive(Debug, PartialEq)]
pub(crate) enum ReplCommand {
Help,
UseDatabase { db_name: String },
Sql { sql: String },
Exit,
}
impl TryFrom<&str> for ReplCommand {
type Error = Error;
fn try_from(input: &str) -> Result<Self> {
let input = input.trim();
if input.is_empty() {
return InvalidReplCommandSnafu {
reason: "No command specified".to_string(),
}
.fail();
}
// If line ends with ';', it must be treated as a complete input.
// However, the opposite is not true.
let input_is_completed = input.ends_with(';');
let input = input.strip_suffix(';').map(|x| x.trim()).unwrap_or(input);
let lowercase = input.to_lowercase();
match lowercase.as_str() {
"help" => Ok(Self::Help),
"exit" | "quit" => Ok(Self::Exit),
_ => match input.split_once(' ') {
Some((maybe_use, database)) if maybe_use.to_lowercase() == "use" => {
Ok(Self::UseDatabase {
db_name: database.trim().to_string(),
})
}
// Any valid SQL must contain at least one whitespace character.
Some(_) if input_is_completed => Ok(Self::Sql {
sql: input.to_string(),
}),
_ => InvalidReplCommandSnafu {
reason: format!("unknown command '{input}'; maybe the input is incomplete"),
}
.fail(),
},
}
}
}
impl ReplCommand {
pub fn help() -> &'static str {
r#"
Available commands (case insensitive):
- 'help': print this help
- 'exit' or 'quit': exit the REPL
- 'use <your database name>': switch to another database/schema context
- Other typed-in text will be treated as SQL.
You can continue typing on new lines; just remember to end the statement with ';'.
"#
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::error::Error::InvalidReplCommand;
#[test]
fn test_from_str() {
fn test_ok(s: &str, expected: ReplCommand) {
let actual: ReplCommand = s.try_into().unwrap();
assert_eq!(expected, actual, "'{}'", s);
}
fn test_err(s: &str) {
let result: Result<ReplCommand> = s.try_into();
assert!(matches!(result, Err(InvalidReplCommand { .. })))
}
test_err("");
test_err(" ");
test_err("\t");
test_ok("help", ReplCommand::Help);
test_ok("help", ReplCommand::Help);
test_ok(" help", ReplCommand::Help);
test_ok(" help ", ReplCommand::Help);
test_ok(" HELP ", ReplCommand::Help);
test_ok(" Help; ", ReplCommand::Help);
test_ok(" help ; ", ReplCommand::Help);
test_ok("exit", ReplCommand::Exit);
test_ok("exit;", ReplCommand::Exit);
test_ok("exit ;", ReplCommand::Exit);
test_ok("EXIT", ReplCommand::Exit);
test_ok("quit", ReplCommand::Exit);
test_ok("quit;", ReplCommand::Exit);
test_ok("quit ;", ReplCommand::Exit);
test_ok("QUIT", ReplCommand::Exit);
test_ok(
"use Foo",
ReplCommand::UseDatabase {
db_name: "Foo".to_string(),
},
);
test_ok(
" use Foo ; ",
ReplCommand::UseDatabase {
db_name: "Foo".to_string(),
},
);
// ensure that database name is case sensitive
test_ok(
" use FOO ; ",
ReplCommand::UseDatabase {
db_name: "FOO".to_string(),
},
);
// ensure that we aren't messing with capitalization
test_ok(
"SELECT * from foo;",
ReplCommand::Sql {
sql: "SELECT * from foo".to_string(),
},
);
// An input line (not matching any of the cases above) must end with ';' to be treated as valid SQL.
test_err("insert blah");
test_ok(
"insert blah;",
ReplCommand::Sql {
sql: "insert blah".to_string(),
},
);
}
}

src/cmd/src/cli/helper.rs (new file, 112 lines)

@@ -0,0 +1,112 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::borrow::Cow;
use rustyline::completion::Completer;
use rustyline::highlight::{Highlighter, MatchingBracketHighlighter};
use rustyline::hint::{Hinter, HistoryHinter};
use rustyline::validate::{ValidationContext, ValidationResult, Validator};
use crate::cli::cmd::ReplCommand;
pub(crate) struct RustylineHelper {
hinter: HistoryHinter,
highlighter: MatchingBracketHighlighter,
}
impl Default for RustylineHelper {
fn default() -> Self {
Self {
hinter: HistoryHinter {},
highlighter: MatchingBracketHighlighter::default(),
}
}
}
impl rustyline::Helper for RustylineHelper {}
impl Validator for RustylineHelper {
fn validate(&self, ctx: &mut ValidationContext<'_>) -> rustyline::Result<ValidationResult> {
let input = ctx.input();
match ReplCommand::try_from(input) {
Ok(_) => Ok(ValidationResult::Valid(None)),
Err(e) => {
if input.trim_end().ends_with(';') {
// If line ends with ';', it HAS to be a valid command.
Ok(ValidationResult::Invalid(Some(e.to_string())))
} else {
Ok(ValidationResult::Incomplete)
}
}
}
}
}
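// Illustration of the effect at the prompt (not part of this file):
//
//   > SELECT 1          <- no trailing ';': Incomplete, rustyline keeps reading
//   > FROM t;           <- input now complete, handed to ReplCommand::try_from
//   > bogus;            <- ends with ';' yet unparseable: Invalid with a reason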
impl Hinter for RustylineHelper {
type Hint = String;
fn hint(&self, line: &str, pos: usize, ctx: &rustyline::Context<'_>) -> Option<Self::Hint> {
self.hinter.hint(line, pos, ctx)
}
}
impl Highlighter for RustylineHelper {
fn highlight<'l>(&self, line: &'l str, pos: usize) -> Cow<'l, str> {
self.highlighter.highlight(line, pos)
}
fn highlight_prompt<'b, 's: 'b, 'p: 'b>(
&'s self,
prompt: &'p str,
default: bool,
) -> Cow<'b, str> {
self.highlighter.highlight_prompt(prompt, default)
}
fn highlight_hint<'h>(&self, hint: &'h str) -> Cow<'h, str> {
use nu_ansi_term::Style;
Cow::Owned(Style::new().dimmed().paint(hint).to_string())
}
fn highlight_candidate<'c>(
&self,
candidate: &'c str,
completion: rustyline::CompletionType,
) -> Cow<'c, str> {
self.highlighter.highlight_candidate(candidate, completion)
}
fn highlight_char(&self, line: &str, pos: usize) -> bool {
self.highlighter.highlight_char(line, pos)
}
}
impl Completer for RustylineHelper {
type Candidate = String;
fn complete(
&self,
line: &str,
pos: usize,
ctx: &rustyline::Context<'_>,
) -> rustyline::Result<(usize, Vec<Self::Candidate>)> {
// If there is a hint, use it as the auto-completion when the user hits `tab`
if let Some(hint) = self.hinter.hint(line, pos, ctx) {
Ok((pos, vec![hint]))
} else {
Ok((0, vec![]))
}
}
}

src/cmd/src/cli/repl.rs (new file, 199 lines)

@@ -0,0 +1,199 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::path::PathBuf;
use std::time::Instant;
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::prelude::ErrorExt;
use common_query::Output;
use common_recordbatch::RecordBatches;
use common_telemetry::logging;
use either::Either;
use rustyline::error::ReadlineError;
use rustyline::Editor;
use snafu::{ErrorCompat, ResultExt};
use crate::cli::cmd::ReplCommand;
use crate::cli::helper::RustylineHelper;
use crate::cli::AttachCommand;
use crate::error::{
CollectRecordBatchesSnafu, PrettyPrintRecordBatchesSnafu, ReadlineSnafu, ReplCreationSnafu,
RequestDatabaseSnafu, Result,
};
/// Captures the state of the repl, gathers commands and executes them one by one
pub(crate) struct Repl {
/// Rustyline editor for interacting with user on command line
rl: Editor<RustylineHelper>,
/// Current prompt
prompt: String,
/// Client for interacting with GreptimeDB
database: Database,
}
#[allow(clippy::print_stdout)]
impl Repl {
fn print_help(&self) {
println!("{}", ReplCommand::help())
}
pub(crate) fn try_new(cmd: &AttachCommand) -> Result<Self> {
let mut rl = Editor::new().context(ReplCreationSnafu)?;
if !cmd.disable_helper {
rl.set_helper(Some(RustylineHelper::default()));
let history_file = history_file();
if let Err(e) = rl.load_history(&history_file) {
logging::debug!(
"failed to load history file on {}, error: {e}",
history_file.display()
);
}
}
let client = Client::with_urls([&cmd.grpc_addr]);
let database = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
Ok(Self {
rl,
prompt: "> ".to_string(),
database,
})
}
/// Parse the next command
fn next_command(&mut self) -> Result<ReplCommand> {
match self.rl.readline(&self.prompt) {
Ok(ref line) => {
let request = line.trim();
self.rl.add_history_entry(request.to_string());
request.try_into()
}
Err(ReadlineError::Eof) | Err(ReadlineError::Interrupted) => Ok(ReplCommand::Exit),
// Some sort of real underlying error
Err(e) => Err(e).context(ReadlineSnafu),
}
}
/// Read-Evaluate-Print Loop (interactive command line) for GreptimeDB
///
/// Inspired by and based on repl.rs from InfluxDB IOX
pub(crate) async fn run(&mut self) -> Result<()> {
println!("Ready for commands. (Hint: try 'help')");
loop {
match self.next_command()? {
ReplCommand::Help => {
self.print_help();
}
ReplCommand::UseDatabase { db_name } => {
if self.execute_sql(format!("USE {db_name}")).await {
println!("Using {db_name}");
self.database.set_schema(&db_name);
self.prompt = format!("[{db_name}] > ");
}
}
ReplCommand::Sql { sql } => {
self.execute_sql(sql).await;
}
ReplCommand::Exit => {
return Ok(());
}
}
}
}
async fn execute_sql(&self, sql: String) -> bool {
self.do_execute_sql(sql)
.await
.map_err(|e| {
let status_code = e.status_code();
let root_cause = e.iter_chain().last().unwrap();
println!("Error: {}({status_code}), {root_cause}", status_code as u32)
})
.is_ok()
}
async fn do_execute_sql(&self, sql: String) -> Result<()> {
let start = Instant::now();
let output = self
.database
.sql(&sql)
.await
.context(RequestDatabaseSnafu { sql: &sql })?;
let either = match output {
Output::Stream(s) => {
let x = RecordBatches::try_collect(s)
.await
.context(CollectRecordBatchesSnafu)?;
Either::Left(x)
}
Output::RecordBatches(x) => Either::Left(x),
Output::AffectedRows(rows) => Either::Right(rows),
};
let end = Instant::now();
match either {
Either::Left(recordbatches) => {
let total_rows: usize = recordbatches.iter().map(|x| x.num_rows()).sum();
if total_rows > 0 {
println!(
"{}",
recordbatches
.pretty_print()
.context(PrettyPrintRecordBatchesSnafu)?
);
}
println!("Total Rows: {total_rows}")
}
Either::Right(rows) => println!("Affected Rows: {rows}"),
};
println!("Cost {} ms", (end - start).as_millis());
Ok(())
}
}
impl Drop for Repl {
fn drop(&mut self) {
if self.rl.helper().is_some() {
let history_file = history_file();
if let Err(e) = self.rl.save_history(&history_file) {
logging::debug!(
"failed to save history file on {}, error: {e}",
history_file.display()
);
}
}
}
}
/// Return the location of the history file (defaults to $HOME/".greptimedb_cli_history")
fn history_file() -> PathBuf {
let mut buf = match std::env::var("HOME") {
Ok(home) => PathBuf::from(home),
Err(_) => PathBuf::new(),
};
buf.push(".greptimedb_cli_history");
buf
}


@@ -14,8 +14,10 @@
use clap::Parser;
use common_telemetry::logging;
use datanode::datanode::{Datanode, DatanodeOptions, ObjectStoreConfig};
use meta_client::MetaClientOpts;
use datanode::datanode::{
Datanode, DatanodeOptions, FileConfig, ObjectStoreConfig, ProcedureConfig,
};
use meta_client::MetaClientOptions;
use servers::Mode;
use snafu::ResultExt;
@@ -54,6 +56,8 @@ struct StartCommand {
#[clap(long)]
rpc_addr: Option<String>,
#[clap(long)]
rpc_hostname: Option<String>,
#[clap(long)]
mysql_addr: Option<String>,
#[clap(long)]
metasrv_addr: Option<String>,
@@ -63,6 +67,8 @@ struct StartCommand {
data_dir: Option<String>,
#[clap(long)]
wal_dir: Option<String>,
#[clap(long)]
procedure_dir: Option<String>,
}
impl StartCommand {
@@ -94,6 +100,11 @@ impl TryFrom<StartCommand> for DatanodeOptions {
if let Some(addr) = cmd.rpc_addr {
opts.rpc_addr = addr;
}
if cmd.rpc_hostname.is_some() {
opts.rpc_hostname = cmd.rpc_hostname;
}
if let Some(addr) = cmd.mysql_addr {
opts.mysql_addr = addr;
}
@@ -103,8 +114,8 @@ impl TryFrom<StartCommand> for DatanodeOptions {
}
if let Some(meta_addr) = cmd.metasrv_addr {
opts.meta_client_opts
.get_or_insert_with(MetaClientOpts::default)
opts.meta_client_options
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs = meta_addr
.split(',')
.map(&str::trim)
@@ -121,12 +132,17 @@ impl TryFrom<StartCommand> for DatanodeOptions {
}
if let Some(data_dir) = cmd.data_dir {
opts.storage = ObjectStoreConfig::File { data_dir };
opts.storage = ObjectStoreConfig::File(FileConfig { data_dir });
}
if let Some(wal_dir) = cmd.wal_dir {
opts.wal.dir = wal_dir;
}
if let Some(procedure_dir) = cmd.procedure_dir {
opts.procedure = Some(ProcedureConfig::from_file_path(procedure_dir));
}
Ok(opts)
}
}
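A hypothetical datanode launch exercising the new flags (the `start` subcommand name is assumed; the flags map to the clap fields above):

    greptime datanode start --rpc-hostname datanode-0.example.com --procedure-dir /tmp/greptimedb/procedure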
@@ -136,7 +152,7 @@ mod tests {
use std::assert_matches::assert_matches;
use std::time::Duration;
use datanode::datanode::ObjectStoreConfig;
use datanode::datanode::{CompactionConfig, ObjectStoreConfig};
use servers::Mode;
use super::*;
@@ -155,12 +171,12 @@ mod tests {
assert_eq!("/tmp/greptimedb/wal".to_string(), options.wal.dir);
assert_eq!("127.0.0.1:4406".to_string(), options.mysql_addr);
assert_eq!(4, options.mysql_runtime_size);
let MetaClientOpts {
let MetaClientOptions {
metasrv_addrs: metasrv_addr,
timeout_millis,
connect_timeout_millis,
tcp_nodelay,
} = options.meta_client_opts.unwrap();
} = options.meta_client_options.unwrap();
assert_eq!(vec!["127.0.0.1:3002".to_string()], metasrv_addr);
assert_eq!(5000, connect_timeout_millis);
@@ -168,11 +184,21 @@ mod tests {
assert!(!tcp_nodelay);
match options.storage {
ObjectStoreConfig::File { data_dir } => {
ObjectStoreConfig::File(FileConfig { data_dir }) => {
assert_eq!("/tmp/greptimedb/data/".to_string(), data_dir)
}
ObjectStoreConfig::S3 { .. } => unreachable!(),
ObjectStoreConfig::Oss { .. } => unreachable!(),
};
assert_eq!(
CompactionConfig {
max_inflight_tasks: 4,
max_files_in_level0: 16,
max_purge_tasks: 32,
},
options.compaction
);
}
#[test]
@@ -223,12 +249,12 @@ mod tests {
assert_eq!(1024 * 1024 * 1024 * 50, dn_opts.wal.purge_threshold.0);
assert!(!dn_opts.wal.sync_write);
assert_eq!(Some(42), dn_opts.node_id);
let MetaClientOpts {
let MetaClientOptions {
metasrv_addrs: metasrv_addr,
timeout_millis,
connect_timeout_millis,
tcp_nodelay,
} = dn_opts.meta_client_opts.unwrap();
} = dn_opts.meta_client_options.unwrap();
assert_eq!(vec!["127.0.0.1:3002".to_string()], metasrv_addr);
assert_eq!(3000, timeout_millis);
assert_eq!(5000, connect_timeout_millis);


@@ -15,6 +15,7 @@
use std::any::Any;
use common_error::prelude::*;
use rustyline::error::ReadlineError;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -61,6 +62,47 @@ pub enum Error {
#[snafu(backtrace)]
source: servers::auth::Error,
},
#[snafu(display("Unsupported selector type, {} source: {}", selector_type, source))]
UnsupportedSelectorType {
selector_type: String,
#[snafu(backtrace)]
source: meta_srv::error::Error,
},
#[snafu(display("Invalid REPL command: {reason}"))]
InvalidReplCommand { reason: String },
#[snafu(display("Cannot create REPL: {}", source))]
ReplCreation {
source: ReadlineError,
backtrace: Backtrace,
},
#[snafu(display("Error reading command: {}", source))]
Readline {
source: ReadlineError,
backtrace: Backtrace,
},
#[snafu(display("Failed to request database, sql: {sql}, source: {source}"))]
RequestDatabase {
sql: String,
#[snafu(backtrace)]
source: client::Error,
},
#[snafu(display("Failed to collect RecordBatches, source: {source}"))]
CollectRecordBatches {
#[snafu(backtrace)]
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to pretty print Recordbatches, source: {source}"))]
PrettyPrintRecordBatches {
#[snafu(backtrace)]
source: common_recordbatch::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -71,11 +113,19 @@ impl ErrorExt for Error {
Error::StartDatanode { source } => source.status_code(),
Error::StartFrontend { source } => source.status_code(),
Error::StartMetaServer { source } => source.status_code(),
Error::UnsupportedSelectorType { source, .. } => source.status_code(),
Error::ReadConfig { .. } | Error::ParseConfig { .. } | Error::MissingConfig { .. } => {
StatusCode::InvalidArguments
}
Error::IllegalConfig { .. } => StatusCode::InvalidArguments,
Error::IllegalConfig { .. } | Error::InvalidReplCommand { .. } => {
StatusCode::InvalidArguments
}
Error::IllegalAuthConfig { .. } => StatusCode::InvalidArguments,
Error::ReplCreation { .. } | Error::Readline { .. } => StatusCode::Internal,
Error::RequestDatabase { source, .. } => source.status_code(),
Error::CollectRecordBatches { source } | Error::PrettyPrintRecordBatches { source } => {
source.status_code()
}
}
}


@@ -15,6 +15,7 @@
use std::sync::Arc;
use clap::Parser;
use common_base::Plugins;
use frontend::frontend::{Frontend, FrontendOptions};
use frontend::grpc::GrpcOptions;
use frontend::influxdb::InfluxdbOptions;
@@ -22,8 +23,7 @@ use frontend::instance::Instance;
use frontend::mysql::MysqlOptions;
use frontend::opentsdb::OpentsdbOptions;
use frontend::postgres::PostgresOptions;
use frontend::Plugins;
use meta_client::MetaClientOpts;
use meta_client::MetaClientOptions;
use servers::auth::UserProviderRef;
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
@@ -91,10 +91,9 @@ impl StartCommand {
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
let opts: FrontendOptions = self.try_into()?;
let mut instance = Instance::try_new_distributed(&opts)
let instance = Instance::try_new_distributed(&opts, plugins.clone())
.await
.context(error::StartFrontendSnafu)?;
instance.set_plugins(plugins.clone());
let mut frontend = Frontend::new(opts, instance, plugins);
frontend.start().await.context(error::StartFrontendSnafu)
@@ -159,8 +158,8 @@ impl TryFrom<StartCommand> for FrontendOptions {
opts.influxdb_options = Some(InfluxdbOptions { enable });
}
if let Some(metasrv_addr) = cmd.metasrv_addr {
opts.meta_client_opts
.get_or_insert_with(MetaClientOpts::default)
opts.meta_client_options
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs = metasrv_addr
.split(',')
.map(&str::trim)
@@ -287,7 +286,7 @@ mod tests {
let provider = provider.unwrap();
let result = provider
.auth(Identity::UserId("test", None), Password::PlainText("test"))
.authenticate(Identity::UserId("test", None), Password::PlainText("test"))
.await;
assert!(result.is_ok());
}


@@ -14,6 +14,7 @@
#![feature(assert_matches)]
pub mod cli;
pub mod datanode;
pub mod error;
pub mod frontend;


@@ -13,7 +13,7 @@
// limitations under the License.
use clap::Parser;
use common_telemetry::logging;
use common_telemetry::{info, logging, warn};
use meta_srv::bootstrap;
use meta_srv::metasrv::MetaSrvOptions;
use snafu::ResultExt;
@@ -56,6 +56,10 @@ struct StartCommand {
store_addr: Option<String>,
#[clap(short, long)]
config_file: Option<String>,
#[clap(short, long)]
selector: Option<String>,
#[clap(long)]
use_memory_store: bool,
}
impl StartCommand {
@@ -91,6 +95,17 @@ impl TryFrom<StartCommand> for MetaSrvOptions {
if let Some(addr) = cmd.store_addr {
opts.store_addr = addr;
}
if let Some(selector_type) = &cmd.selector {
opts.selector = selector_type[..]
.try_into()
.context(error::UnsupportedSelectorTypeSnafu { selector_type })?;
info!("Using {} selector", selector_type);
}
if cmd.use_memory_store {
warn!("Using memory store for Meta. Make sure you are in running tests.");
opts.use_memory_store = true;
}
Ok(opts)
}
@@ -98,6 +113,8 @@ impl TryFrom<StartCommand> for MetaSrvOptions {
#[cfg(test)]
mod tests {
use meta_srv::selector::SelectorType;
use super::*;
#[test]
@@ -107,11 +124,14 @@ mod tests {
server_addr: Some("127.0.0.1:3002".to_string()),
store_addr: Some("127.0.0.1:2380".to_string()),
config_file: None,
selector: Some("LoadBased".to_string()),
use_memory_store: false,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
assert_eq!("127.0.0.1:3002".to_string(), options.server_addr);
assert_eq!("127.0.0.1:2380".to_string(), options.store_addr);
assert_eq!(SelectorType::LoadBased, options.selector);
}
#[test]
@@ -120,15 +140,18 @@ mod tests {
bind_addr: None,
server_addr: None,
store_addr: None,
selector: None,
config_file: Some(format!(
"{}/../../config/metasrv.example.toml",
std::env::current_dir().unwrap().as_path().to_str().unwrap()
)),
use_memory_store: false,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
assert_eq!("127.0.0.1:3002".to_string(), options.server_addr);
assert_eq!("127.0.0.1:2379".to_string(), options.store_addr);
assert_eq!(15, options.datanode_lease_secs);
assert_eq!(SelectorType::LeaseBased, options.selector);
}
}


@@ -15,8 +15,11 @@
use std::sync::Arc;
use clap::Parser;
use common_base::Plugins;
use common_telemetry::info;
use datanode::datanode::{Datanode, DatanodeOptions, ObjectStoreConfig, WalConfig};
use datanode::datanode::{
CompactionConfig, Datanode, DatanodeOptions, ObjectStoreConfig, ProcedureConfig, WalConfig,
};
use datanode::instance::InstanceRef;
use frontend::frontend::{Frontend, FrontendOptions};
use frontend::grpc::GrpcOptions;
@@ -26,7 +29,7 @@ use frontend::mysql::MysqlOptions;
use frontend::opentsdb::OpentsdbOptions;
use frontend::postgres::PostgresOptions;
use frontend::prometheus::PrometheusOptions;
use frontend::Plugins;
use frontend::promql::PromqlOptions;
use serde::{Deserialize, Serialize};
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
@@ -65,6 +68,8 @@ impl SubCommand {
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct StandaloneOptions {
pub mode: Mode,
pub enable_memory_catalog: bool,
pub http_options: Option<HttpOptions>,
pub grpc_options: Option<GrpcOptions>,
pub mysql_options: Option<MysqlOptions>,
@@ -72,15 +77,18 @@ pub struct StandaloneOptions {
pub opentsdb_options: Option<OpentsdbOptions>,
pub influxdb_options: Option<InfluxdbOptions>,
pub prometheus_options: Option<PrometheusOptions>,
pub mode: Mode,
pub promql_options: Option<PromqlOptions>,
pub wal: WalConfig,
pub storage: ObjectStoreConfig,
pub enable_memory_catalog: bool,
pub compaction: CompactionConfig,
pub procedure: Option<ProcedureConfig>,
}
impl Default for StandaloneOptions {
fn default() -> Self {
Self {
mode: Mode::Standalone,
enable_memory_catalog: false,
http_options: Some(HttpOptions::default()),
grpc_options: Some(GrpcOptions::default()),
mysql_options: Some(MysqlOptions::default()),
@@ -88,10 +96,11 @@ impl Default for StandaloneOptions {
opentsdb_options: Some(OpentsdbOptions::default()),
influxdb_options: Some(InfluxdbOptions::default()),
prometheus_options: Some(PrometheusOptions::default()),
mode: Mode::Standalone,
promql_options: Some(PromqlOptions::default()),
wal: WalConfig::default(),
storage: ObjectStoreConfig::default(),
enable_memory_catalog: false,
compaction: CompactionConfig::default(),
procedure: None,
}
}
}
@@ -99,6 +108,7 @@ impl Default for StandaloneOptions {
impl StandaloneOptions {
fn frontend_options(self) -> FrontendOptions {
FrontendOptions {
mode: self.mode,
http_options: self.http_options,
grpc_options: self.grpc_options,
mysql_options: self.mysql_options,
@@ -106,16 +116,18 @@ impl StandaloneOptions {
opentsdb_options: self.opentsdb_options,
influxdb_options: self.influxdb_options,
prometheus_options: self.prometheus_options,
mode: self.mode,
meta_client_opts: None,
promql_options: self.promql_options,
meta_client_options: None,
}
}
fn datanode_options(self) -> DatanodeOptions {
DatanodeOptions {
enable_memory_catalog: self.enable_memory_catalog,
wal: self.wal,
storage: self.storage,
enable_memory_catalog: self.enable_memory_catalog,
compaction: self.compaction,
procedure: self.procedure,
..Default::default()
}
}
@@ -323,6 +335,10 @@ mod tests {
fe_opts.mysql_options.as_ref().unwrap().addr
);
assert_eq!(2, fe_opts.mysql_options.as_ref().unwrap().runtime_size);
assert_eq!(
None,
fe_opts.mysql_options.as_ref().unwrap().reject_no_database
);
assert!(fe_opts.influxdb_options.as_ref().unwrap().enable);
}
@@ -350,8 +366,15 @@ mod tests {
assert!(provider.is_some());
let provider = provider.unwrap();
let result = provider
.auth(Identity::UserId("test", None), Password::PlainText("test"))
.authenticate(Identity::UserId("test", None), Password::PlainText("test"))
.await;
assert!(result.is_ok());
}
#[test]
fn test_toml() {
let opts = StandaloneOptions::default();
let toml_string = toml::to_string(&opts).unwrap();
let _parsed: StandaloneOptions = toml::from_str(&toml_string).unwrap();
}
}

src/cmd/tests/cli.rs Normal file

@@ -0,0 +1,145 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#[cfg(target_os = "macos")]
mod tests {
use std::path::PathBuf;
use std::process::{Command, Stdio};
use std::time::Duration;
use rexpect::session::PtyReplSession;
use tempdir::TempDir;
struct Repl {
repl: PtyReplSession,
}
impl Repl {
fn send_line(&mut self, line: &str) {
self.repl.send_line(line).unwrap();
// read a line to consume the prompt
self.read_line();
}
fn read_line(&mut self) -> String {
self.repl.read_line().unwrap()
}
fn read_expect(&mut self, expect: &str) {
assert_eq!(self.read_line(), expect);
}
fn read_contains(&mut self, pat: &str) {
assert!(self.read_line().contains(pat));
}
}
#[test]
fn test_repl() {
let data_dir = TempDir::new_in("/tmp", "data").unwrap();
let wal_dir = TempDir::new_in("/tmp", "wal").unwrap();
let mut bin_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
bin_path.push("../../target/debug");
let bin_path = bin_path.to_str().unwrap();
let mut datanode = Command::new("./greptime")
.current_dir(bin_path)
.args([
"datanode",
"start",
"--rpc-addr=0.0.0.0:4321",
"--node-id=1",
&format!("--data-dir={}", data_dir.path().display()),
&format!("--wal-dir={}", wal_dir.path().display()),
])
.stdout(Stdio::null())
.spawn()
.unwrap();
// wait for the Datanode to actually start
std::thread::sleep(Duration::from_secs(3));
let mut repl_cmd = Command::new("./greptime");
repl_cmd.current_dir(bin_path).args([
"--log-level=off",
"cli",
"attach",
"--grpc-addr=0.0.0.0:4321",
// history commands can sneak into stdout and mess up our tests, so disable the helper
"--disable-helper",
]);
let pty_session = rexpect::session::spawn_command(repl_cmd, Some(5_000)).unwrap();
let repl = PtyReplSession {
prompt: "> ".to_string(),
pty_session,
quit_command: None,
echo_on: false,
};
let repl = &mut Repl { repl };
repl.read_expect("Ready for commands. (Hint: try 'help')");
test_create_database(repl);
test_use_database(repl);
test_create_table(repl);
test_insert(repl);
test_select(repl);
datanode.kill().unwrap();
datanode.wait().unwrap();
}
fn test_create_database(repl: &mut Repl) {
repl.send_line("CREATE DATABASE db;");
repl.read_expect("Affected Rows: 1");
repl.read_contains("Cost");
}
fn test_use_database(repl: &mut Repl) {
repl.send_line("USE db");
repl.read_expect("Total Rows: 0");
repl.read_contains("Cost");
repl.read_expect("Using db");
}
fn test_create_table(repl: &mut Repl) {
repl.send_line("CREATE TABLE t(x STRING, ts TIMESTAMP TIME INDEX);");
repl.read_expect("Affected Rows: 0");
repl.read_contains("Cost");
}
fn test_insert(repl: &mut Repl) {
repl.send_line("INSERT INTO t(x, ts) VALUES ('hello', 1676895812239);");
repl.read_expect("Affected Rows: 1");
repl.read_contains("Cost");
}
fn test_select(repl: &mut Repl) {
repl.send_line("SELECT * FROM t;");
repl.read_expect("+-------+-------------------------+");
repl.read_expect("| x | ts |");
repl.read_expect("+-------+-------------------------+");
repl.read_expect("| hello | 2023-02-20T12:23:32.239 |");
repl.read_expect("+-------+-------------------------+");
repl.read_expect("Total Rows: 1");
repl.read_contains("Cost");
}
}


@@ -5,6 +5,7 @@ edition.workspace = true
license.workspace = true
[dependencies]
anymap = "1.0.0-beta.2"
bitvec = "1.0"
bytes = { version = "1.1", features = ["serde"] }
common-error = { path = "../error" }


@@ -19,3 +19,5 @@ pub mod bytes;
pub mod readable_size;
pub use bit_vec::BitVec;
pub type Plugins = anymap::Map<dyn core::any::Any + Send + Sync>;
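// Illustrative sketch, not part of this change: `Plugins` is a type-keyed map,
// so it holds at most one value per concrete type. `GrpcPlugin` below is a
// hypothetical plugin type; `insert`/`get` assume anymap's usual API.
#[cfg(test)]
mod plugins_sketch {
    use super::*;

    #[derive(Debug, PartialEq)]
    struct GrpcPlugin {
        port: u16,
    }

    #[test]
    fn insert_and_get_by_type() {
        let mut plugins = Plugins::new();
        plugins.insert(GrpcPlugin { port: 4001 });
        // Values are looked up by their concrete type.
        assert_eq!(plugins.get::<GrpcPlugin>(), Some(&GrpcPlugin { port: 4001 }));
    }
}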


@@ -16,6 +16,6 @@ serde_json = "1.0"
snafu = { version = "0.7", features = ["backtraces"] }
[dev-dependencies]
chrono = "0.4"
chrono.workspace = true
tempdir = "0.3"
tokio = { version = "1.0", features = ["full"] }
tokio.workspace = true


@@ -14,3 +14,9 @@
pub mod consts;
pub mod error;
/// Formats table fully-qualified name
#[inline]
pub fn format_full_table_name(catalog: &str, schema: &str, table: &str) -> String {
format!("{catalog}.{schema}.{table}")
}
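// Illustrative usage, not part of this change: the helper joins the three name
// components with dots; the catalog/schema values here match the defaults used
// in tests elsewhere in this change.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_format_full_table_name() {
        assert_eq!(
            "greptime.public.demo",
            format_full_table_name("greptime", "public", "demo")
        );
    }
}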


@@ -5,5 +5,5 @@ edition.workspace = true
license.workspace = true
[dependencies]
strum = "0.24.1"
snafu = { version = "0.7", features = ["backtraces"] }
strum = { version = "0.24", features = ["std", "derive"] }


@@ -88,6 +88,45 @@ impl crate::snafu::ErrorCompat for BoxedError {
}
}
/// Error type with plain error message
#[derive(Debug)]
pub struct PlainError {
msg: String,
status_code: StatusCode,
}
impl PlainError {
pub fn new(msg: String, status_code: StatusCode) -> Self {
Self { msg, status_code }
}
}
impl std::fmt::Display for PlainError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.msg)
}
}
impl std::error::Error for PlainError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
None
}
}
impl crate::ext::ErrorExt for PlainError {
fn status_code(&self) -> crate::status_code::StatusCode {
self.status_code
}
fn backtrace_opt(&self) -> Option<&crate::snafu::Backtrace> {
None
}
fn as_any(&self) -> &dyn std::any::Any {
self as _
}
}
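// Illustrative sketch, not part of this change: `PlainError` wraps a bare
// message with a status code when no structured error type is at hand. The
// `parse_flag_sketch` function is hypothetical.
#[allow(dead_code)]
fn parse_flag_sketch(flag: &str) -> std::result::Result<bool, BoxedError> {
    match flag {
        "on" => Ok(true),
        "off" => Ok(false),
        other => Err(BoxedError::new(PlainError::new(
            format!("unknown flag: {other}"),
            StatusCode::InvalidArguments,
        ))),
    }
}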
#[cfg(test)]
mod tests {
use std::error::Error;


@@ -77,6 +77,8 @@ pub enum StatusCode {
AuthHeaderNotFound = 7003,
/// Invalid http authorization header
InvalidAuthHeader = 7004,
/// Illegal request to connect catalog-schema
AccessDenied = 7005,
// ====== End of auth related status code =====
}
@@ -84,6 +86,34 @@ impl StatusCode {
pub fn is_success(code: u32) -> bool {
Self::Success as u32 == code
}
pub fn is_retryable(&self) -> bool {
match self {
StatusCode::StorageUnavailable
| StatusCode::RuntimeResourcesExhausted
| StatusCode::Internal => true,
StatusCode::Success
| StatusCode::Unknown
| StatusCode::Unsupported
| StatusCode::Unexpected
| StatusCode::InvalidArguments
| StatusCode::InvalidSyntax
| StatusCode::PlanQuery
| StatusCode::EngineExecuteQuery
| StatusCode::TableAlreadyExists
| StatusCode::TableNotFound
| StatusCode::TableColumnNotFound
| StatusCode::TableColumnExists
| StatusCode::DatabaseNotFound
| StatusCode::UserNotFound
| StatusCode::UnsupportedPasswordType
| StatusCode::UserPasswordMismatch
| StatusCode::AuthHeaderNotFound
| StatusCode::InvalidAuthHeader
| StatusCode::AccessDenied => false,
}
}
}
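// Illustrative sketch, not part of this change: only transient failures report
// themselves as retryable, so callers can branch on the code directly.
#[cfg(test)]
mod retryable_sketch {
    use super::*;

    #[test]
    fn transient_codes_are_retryable() {
        assert!(StatusCode::StorageUnavailable.is_retryable());
        assert!(!StatusCode::TableNotFound.is_retryable());
    }
}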
impl fmt::Display for StatusCode {


@@ -20,6 +20,7 @@ use static_assertions::{assert_fields, assert_impl_all};
struct Foo {}
#[test]
#[allow(clippy::extra_unused_type_parameters)]
fn test_derive() {
Foo::default();
assert_fields!(Foo: input_types);


@@ -19,7 +19,7 @@ num-traits = "0.2"
once_cell = "1.10"
paste = "1.0"
snafu.workspace = true
statrs = "0.15"
statrs = "0.16"
[dev-dependencies]
ron = "0.7"


@@ -12,33 +12,25 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use api::v1::alter_expr::Kind;
use api::v1::{AlterExpr, CreateTableExpr, DropColumns, RenameTable};
use api::v1::{column_def, AlterExpr, CreateTableExpr, DropColumns, RenameTable};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use datatypes::schema::{ColumnSchema, SchemaBuilder, SchemaRef};
use datatypes::schema::{ColumnSchema, RawSchema};
use snafu::{ensure, OptionExt, ResultExt};
use table::metadata::TableId;
use table::requests::{AddColumnRequest, AlterKind, AlterTableRequest, CreateTableRequest};
use table::requests::{
AddColumnRequest, AlterKind, AlterTableRequest, CreateTableRequest, TableOptions,
};
use crate::error::{
ColumnNotFoundSnafu, CreateSchemaSnafu, InvalidColumnDefSnafu, MissingFieldSnafu,
MissingTimestampColumnSnafu, Result,
ColumnNotFoundSnafu, InvalidColumnDefSnafu, MissingFieldSnafu, MissingTimestampColumnSnafu,
Result, UnrecognizedTableOptionSnafu,
};
/// Convert an [`AlterExpr`] to an [`AlterTableRequest`]
pub fn alter_expr_to_request(expr: AlterExpr) -> Result<AlterTableRequest> {
let catalog_name = if expr.catalog_name.is_empty() {
None
} else {
Some(expr.catalog_name)
};
let schema_name = if expr.schema_name.is_empty() {
None
} else {
Some(expr.schema_name)
};
let catalog_name = expr.catalog_name;
let schema_name = expr.schema_name;
let kind = expr.kind.context(MissingFieldSnafu { field: "kind" })?;
match kind {
Kind::AddColumns(add_columns) => {
@@ -50,12 +42,11 @@ pub fn alter_expr_to_request(expr: AlterExpr) -> Result<AlterTableRequest> {
field: "column_def",
})?;
let schema =
column_def
.try_as_column_schema()
.context(InvalidColumnDefSnafu {
column: &column_def.name,
})?;
let schema = column_def::try_as_column_schema(&column_def).context(
InvalidColumnDefSnafu {
column: &column_def.name,
},
)?;
Ok(AddColumnRequest {
column_schema: schema,
is_key: ac.is_key,
@@ -101,13 +92,12 @@ pub fn alter_expr_to_request(expr: AlterExpr) -> Result<AlterTableRequest> {
}
}
pub fn create_table_schema(expr: &CreateTableExpr) -> Result<SchemaRef> {
pub fn create_table_schema(expr: &CreateTableExpr) -> Result<RawSchema> {
let column_schemas = expr
.column_defs
.iter()
.map(|x| {
x.try_as_column_schema()
.context(InvalidColumnDefSnafu { column: &x.name })
column_def::try_as_column_schema(x).context(InvalidColumnDefSnafu { column: &x.name })
})
.collect::<Result<Vec<ColumnSchema>>>()?;
@@ -131,12 +121,7 @@ pub fn create_table_schema(expr: &CreateTableExpr) -> Result<SchemaRef> {
})
.collect::<Vec<_>>();
Ok(Arc::new(
SchemaBuilder::try_from(column_schemas)
.context(CreateSchemaSnafu)?
.build()
.context(CreateSchemaSnafu)?,
))
Ok(RawSchema::new(column_schemas))
}
pub fn create_expr_to_request(
@@ -148,8 +133,11 @@ pub fn create_expr_to_request(
.primary_keys
.iter()
.map(|key| {
// We do a linear search here.
schema
.column_index_by_name(key)
.column_schemas
.iter()
.position(|column_schema| column_schema.name == *key)
.context(ColumnNotFoundSnafu {
column_name: key,
table_name: &expr.table_name,
@@ -177,6 +165,8 @@ pub fn create_expr_to_request(
expr.region_ids
};
let table_options =
TableOptions::try_from(&expr.table_options).context(UnrecognizedTableOptionSnafu)?;
Ok(CreateTableRequest {
id: table_id,
catalog_name,
@@ -187,7 +177,7 @@ pub fn create_expr_to_request(
region_numbers: region_ids,
primary_key_indices,
create_if_not_exists: expr.create_if_not_exists,
table_options: expr.table_options,
table_options,
})
}
@@ -219,8 +209,8 @@ mod tests {
};
let alter_request = alter_expr_to_request(expr).unwrap();
assert_eq!(None, alter_request.catalog_name);
assert_eq!(None, alter_request.schema_name);
assert_eq!(alter_request.catalog_name, "");
assert_eq!(alter_request.schema_name, "");
assert_eq!("monitor".to_string(), alter_request.table_name);
let add_column = match alter_request.alter_kind {
AlterKind::AddColumns { mut columns } => columns.pop().unwrap(),
@@ -250,8 +240,8 @@ mod tests {
};
let alter_request = alter_expr_to_request(expr).unwrap();
assert_eq!(Some("test_catalog".to_string()), alter_request.catalog_name);
assert_eq!(Some("test_schema".to_string()), alter_request.schema_name);
assert_eq!(alter_request.catalog_name, "test_catalog");
assert_eq!(alter_request.schema_name, "test_schema");
assert_eq!("monitor".to_string(), alter_request.table_name);
let mut drop_names = match alter_request.alter_kind {


@@ -40,12 +40,6 @@ pub enum Error {
source: api::error::Error,
},
#[snafu(display("Failed to create schema when creating table, source: {}", source))]
CreateSchema {
#[snafu(backtrace)]
source: datatypes::error::Error,
},
#[snafu(display(
"Duplicated timestamp column in gRPC requests, exists {}, duplicated: {}",
exists,
@@ -90,6 +84,12 @@ pub enum Error {
#[snafu(backtrace)]
source: api::error::Error,
},
#[snafu(display("Unrecognized table option: {}", source))]
UnrecognizedTableOption {
#[snafu(backtrace)]
source: table::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -102,14 +102,15 @@ impl ErrorExt for Error {
StatusCode::InvalidArguments
}
Error::ColumnDataType { .. } => StatusCode::Internal,
Error::CreateSchema { .. }
| Error::DuplicatedTimestampColumn { .. }
| Error::MissingTimestampColumn { .. } => StatusCode::InvalidArguments,
Error::DuplicatedTimestampColumn { .. } | Error::MissingTimestampColumn { .. } => {
StatusCode::InvalidArguments
}
Error::InvalidColumnProto { .. } => StatusCode::InvalidArguments,
Error::CreateVector { .. } => StatusCode::InvalidArguments,
Error::MissingField { .. } => StatusCode::InvalidArguments,
Error::ColumnDefaultConstraint { source, .. } => source.status_code(),
Error::InvalidColumnDef { source, .. } => source.status_code(),
Error::UnrecognizedTableOption { .. } => StatusCode::InvalidArguments,
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {


@@ -21,7 +21,6 @@ use api::v1::{
InsertRequest as GrpcInsertRequest,
};
use common_base::BitVec;
use common_catalog::consts::DEFAULT_CATALOG_NAME;
use common_time::timestamp::Timestamp;
use common_time::{Date, DateTime};
use datatypes::data_type::{ConcreteDataType, DataType};
@@ -31,7 +30,7 @@ use datatypes::value::Value;
use datatypes::vectors::MutableVector;
use snafu::{ensure, OptionExt, ResultExt};
use table::metadata::TableId;
use table::requests::{AddColumnRequest, AlterKind, AlterTableRequest, InsertRequest};
use table::requests::InsertRequest;
use crate::error::{
ColumnDataTypeSnafu, CreateVectorSnafu, DuplicatedTimestampColumnSnafu, IllegalInsertDataSnafu,
@@ -81,20 +80,6 @@ pub fn find_new_columns(schema: &SchemaRef, columns: &[Column]) -> Result<Option
}
}
/// Build an alter table request that adds new columns.
#[inline]
pub fn build_alter_table_request(
table_name: &str,
columns: Vec<AddColumnRequest>,
) -> AlterTableRequest {
AlterTableRequest {
catalog_name: None,
schema_name: None,
table_name: table_name.to_string(),
alter_kind: AlterKind::AddColumns { columns },
}
}
pub fn column_to_vector(column: &Column, rows: u32) -> Result<VectorRef> {
let wrapper = ColumnDataTypeWrapper::try_new(column.datatype).context(ColumnDataTypeSnafu)?;
let column_datatype = wrapper.datatype();
@@ -111,9 +96,7 @@ pub fn column_to_vector(column: &Column, rows: u32) -> Result<VectorRef> {
for i in 0..rows {
if let Some(true) = nulls_iter.next() {
vector
.push_value_ref(ValueRef::Null)
.context(CreateVectorSnafu)?;
vector.push_null();
} else {
let value_ref = values_iter
.next()
@@ -124,16 +107,12 @@ pub fn column_to_vector(column: &Column, rows: u32) -> Result<VectorRef> {
),
})?;
vector
.push_value_ref(value_ref)
.try_push_value_ref(value_ref)
.context(CreateVectorSnafu)?;
}
}
} else {
(0..rows).try_for_each(|_| {
vector
.push_value_ref(ValueRef::Null)
.context(CreateVectorSnafu)
})?;
(0..rows).for_each(|_| vector.push_null());
}
Ok(vector.to_vector())
}
@@ -281,9 +260,11 @@ pub fn build_create_expr_from_insertion(
Ok(expr)
}
pub fn to_table_insert_request(request: GrpcInsertRequest) -> Result<InsertRequest> {
let catalog_name = DEFAULT_CATALOG_NAME;
let schema_name = &request.schema_name;
pub fn to_table_insert_request(
catalog_name: &str,
schema_name: &str,
request: GrpcInsertRequest,
) -> Result<InsertRequest> {
let table_name = &request.table_name;
let row_count = request.row_count as usize;
@@ -319,6 +300,7 @@ pub fn to_table_insert_request(request: GrpcInsertRequest) -> Result<InsertReque
schema_name: schema_name.to_string(),
table_name: table_name.to_string(),
columns_values,
region_number: request.region_number,
})
}
@@ -336,7 +318,7 @@ fn add_values_to_builder(
values.iter().try_for_each(|value| {
builder
.push_value_ref(value.as_value_ref())
.try_push_value_ref(value.as_value_ref())
.context(CreateVectorSnafu)
})?;
} else {
@@ -349,12 +331,10 @@ fn add_values_to_builder(
let mut idx_of_values = 0;
for idx in 0..row_count {
match is_null(&null_mask, idx) {
Some(true) => builder
.push_value_ref(ValueRef::Null)
.context(CreateVectorSnafu)?,
Some(true) => builder.push_null(),
_ => {
builder
.push_value_ref(values[idx_of_values].as_value_ref())
.try_push_value_ref(values[idx_of_values].as_value_ref())
.context(CreateVectorSnafu)?;
idx_of_values += 1
}
@@ -439,8 +419,9 @@ fn convert_values(data_type: &ConcreteDataType, values: Values) -> Vec<Value> {
.into_iter()
.map(|v| Value::Timestamp(Timestamp::new_millisecond(v)))
.collect(),
ConcreteDataType::Null(_) => unreachable!(),
ConcreteDataType::List(_) => unreachable!(),
ConcreteDataType::Null(_) | ConcreteDataType::List(_) | ConcreteDataType::Dictionary(_) => {
unreachable!()
}
}
}
@@ -452,6 +433,7 @@ fn is_null(null_mask: &BitVec, idx: usize) -> Option<bool> {
mod tests {
use std::any::Any;
use std::sync::Arc;
use std::{assert_eq, unimplemented, vec};
use api::helper::ColumnDataTypeWrapper;
use api::v1::column::{self, SemanticType, Values};
@@ -617,13 +599,12 @@ mod tests {
fn test_to_table_insert_request() {
let (columns, row_count) = mock_insert_batch();
let request = GrpcInsertRequest {
schema_name: "public".to_string(),
table_name: "demo".to_string(),
columns,
row_count,
region_number: 0,
};
let insert_req = to_table_insert_request(request).unwrap();
let insert_req = to_table_insert_request("greptime", "public", request).unwrap();
assert_eq!("greptime", insert_req.catalog_name);
assert_eq!("public", insert_req.schema_name);


@@ -17,6 +17,4 @@ pub mod error;
pub mod insert;
pub use alter::{alter_expr_to_request, create_expr_to_request, create_table_schema};
pub use insert::{
build_alter_table_request, build_create_expr_from_insertion, column_to_vector, find_new_columns,
};
pub use insert::{build_create_expr_from_insertion, column_to_vector, find_new_columns};


@@ -18,18 +18,20 @@ use std::time::Duration;
use dashmap::mapref::entry::Entry;
use dashmap::DashMap;
use snafu::ResultExt;
use tonic::transport::{Channel as InnerChannel, Endpoint, Uri};
use snafu::{OptionExt, ResultExt};
use tonic::transport::{
Certificate, Channel as InnerChannel, ClientTlsConfig, Endpoint, Identity, Uri,
};
use tower::make::MakeConnection;
use crate::error;
use crate::error::Result;
use crate::error::{CreateChannelSnafu, InvalidConfigFilePathSnafu, InvalidTlsConfigSnafu, Result};
const RECYCLE_CHANNEL_INTERVAL_SECS: u64 = 60;
#[derive(Clone, Debug)]
pub struct ChannelManager {
config: ChannelConfig,
client_tls_config: Option<ClientTlsConfig>,
pool: Arc<Pool>,
}
@@ -52,7 +54,37 @@ impl ChannelManager {
recycle_channel_in_loop(cloned_pool, RECYCLE_CHANNEL_INTERVAL_SECS).await;
});
Self { config, pool }
Self {
config,
client_tls_config: None,
pool,
}
}
pub fn with_tls_config(config: ChannelConfig) -> Result<Self> {
let mut cm = Self::with_config(config.clone());
// set up TLS
let path_config = config.client_tls.context(InvalidTlsConfigSnafu {
msg: "no config input",
})?;
let server_root_ca_cert = std::fs::read_to_string(path_config.server_ca_cert_path)
.context(InvalidConfigFilePathSnafu)?;
let server_root_ca_cert = Certificate::from_pem(server_root_ca_cert);
let client_cert = std::fs::read_to_string(path_config.client_cert_path)
.context(InvalidConfigFilePathSnafu)?;
let client_key = std::fs::read_to_string(path_config.client_key_path)
.context(InvalidConfigFilePathSnafu)?;
let client_identity = Identity::from_pem(client_cert, client_key);
cm.client_tls_config = Some(
ClientTlsConfig::new()
.ca_certificate(server_root_ca_cert)
.identity(client_identity),
);
Ok(cm)
}
pub fn config(&self) -> &ChannelConfig {
@@ -119,8 +151,7 @@ impl ChannelManager {
}
fn build_endpoint(&self, addr: &str) -> Result<Endpoint> {
let mut endpoint =
Endpoint::new(format!("http://{addr}")).context(error::CreateChannelSnafu)?;
let mut endpoint = Endpoint::new(format!("http://{addr}")).context(CreateChannelSnafu)?;
if let Some(dur) = self.config.timeout {
endpoint = endpoint.timeout(dur);
@@ -152,6 +183,12 @@ impl ChannelManager {
if let Some(enabled) = self.config.http2_adaptive_window {
endpoint = endpoint.http2_adaptive_window(enabled);
}
if let Some(tls_config) = &self.client_tls_config {
endpoint = endpoint
.tls_config(tls_config.clone())
.context(CreateChannelSnafu)?;
}
endpoint = endpoint
.tcp_keepalive(self.config.tcp_keepalive)
.tcp_nodelay(self.config.tcp_nodelay);
@@ -160,6 +197,13 @@ impl ChannelManager {
}
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ClientTlsOption {
pub server_ca_cert_path: String,
pub client_cert_path: String,
pub client_key_path: String,
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ChannelConfig {
pub timeout: Option<Duration>,
@@ -174,6 +218,7 @@ pub struct ChannelConfig {
pub http2_adaptive_window: Option<bool>,
pub tcp_keepalive: Option<Duration>,
pub tcp_nodelay: bool,
pub client_tls: Option<ClientTlsOption>,
}
impl Default for ChannelConfig {
@@ -191,6 +236,7 @@ impl Default for ChannelConfig {
http2_adaptive_window: None,
tcp_keepalive: None,
tcp_nodelay: true,
client_tls: None,
}
}
}
@@ -307,6 +353,16 @@ impl ChannelConfig {
..self
}
}
/// Set the TLS client auth config.
///
/// Disabled by default.
pub fn client_tls_config(self, client_tls_option: ClientTlsOption) -> Self {
Self {
client_tls: Some(client_tls_option),
..self
}
}
}
#[derive(Debug)]
@@ -401,7 +457,11 @@ mod tests {
async fn test_access_count() {
let pool = Arc::new(Pool::default());
let config = ChannelConfig::new();
let mgr = Arc::new(ChannelManager { pool, config });
let mgr = Arc::new(ChannelManager {
pool,
config,
client_tls_config: None,
});
let addr = "test_uri";
let mut joins = Vec::with_capacity(10);
@@ -443,6 +503,7 @@ mod tests {
http2_adaptive_window: None,
tcp_keepalive: None,
tcp_nodelay: true,
client_tls: None,
},
default_cfg
);
@@ -459,7 +520,12 @@ mod tests {
.http2_keep_alive_while_idle(true)
.http2_adaptive_window(true)
.tcp_keepalive(Duration::from_secs(2))
.tcp_nodelay(false);
.tcp_nodelay(false)
.client_tls_config(ClientTlsOption {
server_ca_cert_path: "some_server_path".to_string(),
client_cert_path: "some_cert_path".to_string(),
client_key_path: "some_key_path".to_string(),
});
assert_eq!(
ChannelConfig {
@@ -475,6 +541,11 @@ mod tests {
http2_adaptive_window: Some(true),
tcp_keepalive: Some(Duration::from_secs(2)),
tcp_nodelay: false,
client_tls: Some(ClientTlsOption {
server_ca_cert_path: "some_server_path".to_string(),
client_cert_path: "some_cert_path".to_string(),
client_key_path: "some_key_path".to_string(),
}),
},
cfg
);
@@ -496,7 +567,11 @@ mod tests {
.http2_adaptive_window(true)
.tcp_keepalive(Duration::from_secs(2))
.tcp_nodelay(true);
let mgr = ChannelManager { pool, config };
let mgr = ChannelManager {
pool,
config,
client_tls_config: None,
};
let res = mgr.build_endpoint("test_addr");
@@ -512,7 +587,11 @@ mod tests {
let pool = Arc::new(pool);
let config = ChannelConfig::new();
let mgr = ChannelManager { pool, config };
let mgr = ChannelManager {
pool,
config,
client_tls_config: None,
};
let addr = "test_addr";
let res = mgr.get(addr);


@@ -13,6 +13,7 @@
// limitations under the License.
use std::any::Any;
use std::io;
use common_error::prelude::{ErrorExt, StatusCode};
use snafu::{Backtrace, ErrorCompat, Snafu};
@@ -22,6 +23,15 @@ pub type Result<T> = std::result::Result<T, Error>;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Invalid client tls config, {}", msg))]
InvalidTlsConfig { msg: String },
#[snafu(display("Invalid config file path, {}", source))]
InvalidConfigFilePath {
source: io::Error,
backtrace: Backtrace,
},
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, backtrace: Backtrace },
@@ -81,7 +91,9 @@ pub enum Error {
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::MissingField { .. }
Error::InvalidTlsConfig { .. }
| Error::InvalidConfigFilePath { .. }
| Error::MissingField { .. }
| Error::TypeMismatch { .. }
| Error::InvalidFlightData { .. } => StatusCode::InvalidArguments,


@@ -15,7 +15,7 @@
use std::collections::HashMap;
use std::sync::Arc;
use api::v1::FlightDataExt;
use api::v1::{AffectedRows, FlightMetadata};
use arrow_flight::utils::{flight_data_from_arrow_batch, flight_data_to_arrow_batch};
use arrow_flight::{FlightData, IpcMessage, SchemaAsIpc};
use common_recordbatch::{RecordBatch, RecordBatches};
@@ -66,11 +66,11 @@ impl FlightEncoder {
flight_batch
}
FlightMessage::AffectedRows(rows) => {
let ext_data = FlightDataExt {
affected_rows: rows as _,
let metadata = FlightMetadata {
affected_rows: Some(AffectedRows { value: rows as _ }),
}
.encode_to_vec();
FlightData::new(None, IpcMessage(build_none_flight_msg()), vec![], ext_data)
FlightData::new(None, IpcMessage(build_none_flight_msg()), metadata, vec![])
}
}
}
@@ -91,9 +91,15 @@ impl FlightDecoder {
})?;
match message.header_type() {
MessageHeader::NONE => {
let ext_data = FlightDataExt::decode(flight_data.data_body.as_slice())
let metadata = FlightMetadata::decode(flight_data.app_metadata.as_slice())
.context(DecodeFlightDataSnafu)?;
Ok(FlightMessage::AffectedRows(ext_data.affected_rows as _))
if let Some(AffectedRows { value }) = metadata.affected_rows {
return Ok(FlightMessage::AffectedRows(value as _));
}
InvalidFlightDataSnafu {
reason: "Expecting FlightMetadata have some meaningful content.",
}
.fail()
}
MessageHeader::Schema => {
let arrow_schema = ArrowSchema::try_from(&flight_data).map_err(|e| {


@@ -67,7 +67,7 @@ macro_rules! convert_arrow_array_to_grpc_vals {
return Ok(vals);
},
)+
ConcreteDataType::Null(_) | ConcreteDataType::List(_) => unreachable!("Should not send {:?} in gRPC", $data_type),
ConcreteDataType::Null(_) | ConcreteDataType::List(_) | ConcreteDataType::Dictionary(_) => unreachable!("Should not send {:?} in gRPC", $data_type),
}
}};
}


@@ -14,7 +14,8 @@
use std::collections::HashMap;
use api::v1::column::{SemanticType, Values};
use api::helper::values_with_capacity;
use api::v1::column::SemanticType;
use api::v1::{Column, ColumnDataType};
use common_base::BitVec;
use snafu::ensure;
@@ -212,7 +213,7 @@ impl LinesWriter {
batch.0.push(Column {
column_name: column_name.to_string(),
semantic_type: semantic_type.into(),
values: Some(Values::with_capacity(datatype, to_insert)),
values: Some(values_with_capacity(datatype, to_insert)),
datatype: datatype as i32,
null_mask: Vec::default(),
});


@@ -0,0 +1,57 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_grpc::channel_manager::{ChannelConfig, ChannelManager, ClientTlsOption};
#[tokio::test]
async fn test_mtls_config() {
// test no config
let config = ChannelConfig::new();
let re = ChannelManager::with_tls_config(config);
assert!(re.is_err());
// test wrong file
let config = ChannelConfig::new().client_tls_config(ClientTlsOption {
server_ca_cert_path: "tests/tls/wrong_server.cert.pem".to_string(),
client_cert_path: "tests/tls/wrong_client.cert.pem".to_string(),
client_key_path: "tests/tls/wrong_client.key.pem".to_string(),
});
let re = ChannelManager::with_tls_config(config);
assert!(re.is_err());
// test corrupted file content
let config = ChannelConfig::new().client_tls_config(ClientTlsOption {
server_ca_cert_path: "tests/tls/server.cert.pem".to_string(),
client_cert_path: "tests/tls/client.cert.pem".to_string(),
client_key_path: "tests/tls/corrupted".to_string(),
});
let re = ChannelManager::with_tls_config(config);
assert!(re.is_ok());
let re = re.unwrap().get("127.0.0.1:0");
assert!(re.is_err());
// success
let config = ChannelConfig::new().client_tls_config(ClientTlsOption {
server_ca_cert_path: "tests/tls/server.cert.pem".to_string(),
client_cert_path: "tests/tls/client.cert.pem".to_string(),
client_key_path: "tests/tls/client.key.pem".to_string(),
});
let re = ChannelManager::with_tls_config(config);
assert!(re.is_ok());
let re = re.unwrap().get("127.0.0.1:0");
assert!(re.is_ok());
}


@@ -0,0 +1,36 @@
-----BEGIN CERTIFICATE-----
MIIGOzCCBCOgAwIBAgIBATANBgkqhkiG9w0BAQsFADCBhzELMAkGA1UEBhMCSU4x
EjAQBgNVBAgMCUthcm5hdGFrYTESMBAGA1UEBwwJQkFOR0FMT1JFMRUwEwYDVQQK
DAxHb0xpbnV4Q2xvdWQxEjAQBgNVBAMMCWNhLXNlcnZlcjElMCMGCSqGSIb3DQEJ
ARYWYWRtaW5AZ29saW51eGNsb3VkLmNvbTAeFw0yMzAyMTQxMTM4MDFaFw0yNzA4
MjIxMTM4MDFaMHIxCzAJBgNVBAYTAklOMRIwEAYDVQQIDAlLYXJuYXRha2ExFTAT
BgNVBAoMDEdvTGludXhDbG91ZDERMA8GA1UEAwwIc2VydmVyLTIxJTAjBgkqhkiG
9w0BCQEWFmFkbWluQGdvbGludXhjbG91ZC5jb20wggIiMA0GCSqGSIb3DQEBAQUA
A4ICDwAwggIKAoICAQDNPiXZFK1cDOevdU5628xqAZjHn2e86hD9ih0IHvQKbcAm
a8fhFMQ+Gki+p2+Ga1fxHDi1+aUn00UjyLAxSMQVulpZWYHsRj3koyD9LyTvpDQk
SwJhFNtL33WlqUMtjgVXoznjECfhc/hwKJ9BS0b5j21XzqYkSKTJNcxZmoNLJVvL
dfbsWjLywSAHbcF1gs2w3IxruPQwyMXL1URjcwGRTtK+zk6QGxgyXsIEJDW4EZqR
xXgmEz7jx7vfDLaYc8GoujTki2dkyTWQkdDrJ4/N7VWGOGjL60EJDOcQyCowDuAq
sbB5C9OuhB59o2/wzeSeaY7qS5nLOufwiYmvc1S6kgi9emirxqFLmrcaJv8QPDEX
6ufI8wSkCS/CX/IUNXPkSripU3zQcjorinAw3w9pGY1VNknz5AgDXrEAW17aZKsp
QyLSyl87vG9dhjybdkc7QyBghTxweggYT1INY6dmj9ijIyU+9V64xOTb9dlbgLW/
qAvZyeq2H9Z5aBwkG31n1b2rX0JEK+/NC+8PRs2tWq63EOB8hzh4mF9RKLcZC3zS
9eJa1B0ugyy5fw8GGWA49H3rFoU2u7+Gazzdn5uD9sqLuVnzW1FREDhMHGd4VdRx
vuhUp9jz9u0WDRr2Ix7N7Vd57mwhBPivUywg7QwZSTqlIrGVoQFPL4BjWwSSswID
AQABo4HFMIHCMAkGA1UdEwQCMAAwEQYJYIZIAYb4QgEBBAQDAgWgMDMGCWCGSAGG
+EIBDQQmFiRPcGVuU1NMIEdlbmVyYXRlZCBDbGllbnQgQ2VydGlmaWNhdGUwHQYD
VR0OBBYEFI056bMc2jHoeOTUGBCpBGGY/UfQMB8GA1UdIwQYMBaAFKVZwpSJCPkN
wGXyJX1sl2Pbby4FMA4GA1UdDwEB/wQEAwIF4DAdBgNVHSUEFjAUBggrBgEFBQcD
AgYIKwYBBQUHAwQwDQYJKoZIhvcNAQELBQADggIBABHQ/EGnAFeIdzKTbaP3kaSd
A3tCyjWVwo9eULXBjsMFFyf4NDw8bkrYdJos6rBpzi6R1PUb4UMc9CUF6ee9zbTK
mDeusqwhDOLmYZot1aZbujMngpbMoQx5keSQ9Eg10npbYMl6Sq3qFbAST9l/hlDh
Ue9KhfrAvrSobP0WWb/EpEXZMt2DafKpoz4nvtFpcOO5kbsQ+/eQfWHmR/k6sCYG
UycFYCJCFQz2xG8wtbExg5iyaR3nE0LfqZwRxhIa4iSWlCecYc1XUJnOh8fIeop4
9fD5k2wqvCEBAZiaKg2RYbaw6LIFkg7c99B4Gt5eez7Bs878T7lS+xl9wbzinzez
WFIgsDYHYjmK8s5WXXWwT7UhqSA12FHOp8grqFllXV/dOPTFz+dq9Mn1VGgH6MS4
Ls3r2LH5ycAz+gkoY2wlnF++ItpB2K3LTlqk+OvQZ1oXMq8u5F6XsM7Uirc7Da+9
MEG1zBpGvA/iAd2kKd3APS+EuoytSt022bD7YDJ1isuxT5q2Hpa4p14BJHCgDKTZ
vPYIdzCh05vwLwB28T8bh7s5OLOcRY9KmxVPkT0SYLOk11j5nZ1N/hQvGDxL60e2
RBS3ADHkymIE55Xf1VLXcs17zR9fLV+5fiSQ40FLjcBEjhkvrzcDe3tVFsA/ty9h
dBCSsexiXj/S5KwKtz/c
-----END CERTIFICATE-----


@@ -0,0 +1,51 @@
-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEAzT4l2RStXAznr3VOetvMagGYx59nvOoQ/YodCB70Cm3AJmvH
4RTEPhpIvqdvhmtX8Rw4tfmlJ9NFI8iwMUjEFbpaWVmB7EY95KMg/S8k76Q0JEsC
YRTbS991palDLY4FV6M54xAn4XP4cCifQUtG+Y9tV86mJEikyTXMWZqDSyVby3X2
7Foy8sEgB23BdYLNsNyMa7j0MMjFy9VEY3MBkU7Svs5OkBsYMl7CBCQ1uBGakcV4
JhM+48e73wy2mHPBqLo05ItnZMk1kJHQ6yePze1Vhjhoy+tBCQznEMgqMA7gKrGw
eQvTroQefaNv8M3knmmO6kuZyzrn8ImJr3NUupIIvXpoq8ahS5q3Gib/EDwxF+rn
yPMEpAkvwl/yFDVz5Eq4qVN80HI6K4pwMN8PaRmNVTZJ8+QIA16xAFte2mSrKUMi
0spfO7xvXYY8m3ZHO0MgYIU8cHoIGE9SDWOnZo/YoyMlPvVeuMTk2/XZW4C1v6gL
2cnqth/WeWgcJBt9Z9W9q19CRCvvzQvvD0bNrVqutxDgfIc4eJhfUSi3GQt80vXi
WtQdLoMsuX8PBhlgOPR96xaFNru/hms83Z+bg/bKi7lZ81tRURA4TBxneFXUcb7o
VKfY8/btFg0a9iMeze1Xee5sIQT4r1MsIO0MGUk6pSKxlaEBTy+AY1sEkrMCAwEA
AQKCAgEAw2jBZj5+k96hk/dPIkA1DlS43o7RmRcN2CdwXrQBzBAUW0BRDObVtP8X
dZY647M+BozFHdUzPoizEk/YGQRb1QgZT2qd/ZQfB5mdJhGFzDf9gPR9rmrKJCH8
hB50nGHUik0ZJyvRnKDqz/aNMgB28dJx26Efo/oaEoyLJGCtUpWeIUgOMZfrXB8t
3ITOJZDFP/esJj/xFqWBVQGXXEw6GNwAYLRSLnftgL+hX4oOL1NrZBCrxSybuwkG
wWX8T4gewQOQqmxjo5zCyANc8xc2nmyx+dmpRUWWJQTI1ryNFjaDjYKiL41oHIcj
9KDwSkftvDlqXX5fThSmkeiRU5+t8UMj4+Bt7opCzIlwHtQe+95BqiXQ7bHfCjn7
GvShZgHo45rDkfwWDz/pYhHQ2Wb9DkhEtwa0cu3mDMGc6BY+4yo+Vz6Rk1TypxQw
LIa43WgVCRm66Mq65sObx7wkdxvolUE8j1Io3AHwgeBjV+gISV9srj2m/HnOmFFb
16SKQEDEVoaci+v6DT8A7UOZH4sgYSbknHdjMy6c6UlYgd8UNqbY3h/ohZ2JOcPd
8DqGUDGKbpS7OxWogxb9K++6SPSn86sPmUjzRPMgijVjU5pyK42DpZj1/RIe8Tml
JXVqHuZvURK4Qi3ECQ09m9vQ9nS88HMRVJ7sFSca6HOFYSFyAfkCggEBAPN35hva
OhbgQlFJrpo5YDYS5v7l7YjLbry6DaCR1CpYaKlTPkc4tiznCUHe1N41mR4qu2Tc
4+m7GN9BZfLU8w/Jvrp7mAO7fZXZtIzTQrZQDbAZppUGBbGBoAOlLVxR4NrN2TSk
49Ljj87UynhxeCv6RWx0F1p1/VIZertLELbSdb3C43pAsNSXzbkb7LtT9RXemyUL
LBK4ugcXMSZrzHJK1Ct31LoGd9m+TEp/VW2aGMeWliuIticJx44OW4tlJ70qKrd0
KezBZVMHPa3FqW7kdYwdlISoqZsE9OVPgLCQNVLhDO1YMaTl3WEHKTRBxTF70pvv
zMkSRQGoU4ff7AUCggEBANfOkCsx2mRvJV+UYxW8R6510w2H1bNbNRbfTJAo8kld
/7dXU4H3QrhUrCsSyc5ijm09q7I4+rc+uMxfT/R1mO5tq9AueCWhg85WV+NBR1FE
Yg7MX+zblpHqUDQoTj9vvgwLyqvZ7k9NON42Zz+Tj2worICnVlDvahm/3NaItT9B
oGhsEoJjYFK4Hq7RwosU+KPXkQBxWzrNLipo8jx0XFpPZVHSLIFs9eW25bnj/qxc
toMgx4IsvEDlzS/oqfycCrDdKwqiW74w0Djb5TiJv+dYzl9GnN6istqbUTNZkJjn
lkbmegrtfz3Yd1ORvjkNqHuANyuR+YnUSIsb0PV5eVcCggEAck5bgb4eQbk+SY3P
ZOcFLb4IJ6ppsCzaq86qMTXmJ49kbAMCHUwZ89DwvrVQuZbucYRcgMlYU9ccoUzC
AZVLHKF6Y3E9eJshJiaVJvzUuGWzV3djh1nReHpEVxHIzyw95lx42seDkvJ2BQRQ
nuWfJv6Uc4u5nyYALfh6b86ZZUxALTx/slkG7HjtBDiBF54eVgsySd0J7yw9YrDX
yZMY5JwPKu1SuZfp0xgOF3fa8t9DPQmNLZk88+0afK5u+m4ejyhp78GhIV/XI3kl
0x0XJFIsggEtRm8tWfOkyrhd0geSkXvJpvEeNa4aFsDW7ormewoIYl/ehJSIQ3P0
67kMxQKCAQA12iP7w2r+GQY4fazkJaG1lU1fWQAoy5/J31sZtj4PtNc1ByOdkPgj
S23TKdMWH13vQK5xwOo/g/VVeotXM2lARjnTr2Tn7xAXE1DHMuj7DJdznehqELnY
G6J8AXrVNas1ElQ24iEnxNtmCClnogjuMpApYpiVhcjyOACBwIeKC3Rd2mocA3Rr
7+ooMcvcLRWGvSo/9AmR+NWGW73m/Bp3psxfyJS2j1wlQKi+5HgOxuv8eNeQUl1/
zFiRlfulP8MjM22kL7O5GDE9nxHqM+Whc3W8LMDEhdEf4BY5PCZrIY9MjgLyayWP
Z08PmZTgY9ohR3N8+eZNUJ3xqLVSLEftAoIBAQDF1K8lPXAs8e4V0oc9hq4GFLvi
E0KC+8X1ShzvkVGV/3Kz1FJ0bwix/M3C5XSSNguxHI6CG2GprJlExp1qqwlvmGr2
hHdfemvq6tF4qjXLgPXvgoWocBGNUvBXxFVuc0hOHgT/X3+GsPYtNvZb3fp+4Bm6
ugUu05drqrHSOY5kUbU3jf/5KctnDFmOsSeOgGiI/JJWVcKJALpDkhazRL0nxfuW
6xU6pZazhCAby2Qn+wn0xyi4bEZSNobiQTgOXOC0DA1uGD3XHctCMnSBtYtocQjq
IFT2l3u4pEKpVQwuc4+yObWUT47oBxV6vFneXsnV89vd2SSUPuR8GIYYeA+/
-----END RSA PRIVATE KEY-----


@@ -0,0 +1,50 @@
rWtZ7U3SoVAl6yMhfJsB
LcEGbuCfgFxk2ADw0N1G
byTKlrUgoRZeSc0cYHTf
0XjbRCBtMV9yYaVJKPwi
rGofQgFoc1lW0U5x2bnN
O9nn9aDe5t5LAlGS81uX
aBMvuzVjHbZKOlabXl4W
ZJc06qngAcQWQUu8nAnR
FLsjhoaTyuaDMY3OWJAx
5Dt7YglND5uFAqYwRG9L
agLGOCH8suwnXGYaPxjM
Ysb5RANkpgcbSulLZiic
4sLmpJomjokwZbctODVW
pCLiQT3wWDJ7YjIePR6g
P3Jlg0LDhbgSwXxgjjUR
6qGRfcb8LFlVlT7O1ze2
lFBNWzijkPeKyKmwpOSa
oGCR2OUg71n0Tzt2a3ir
WLijq0bL1Cetz24fv738
L3MEAwezFBW38U4QilNz
uza1bC3PgToermGSgKLx
WMdgjZIszK4t6Rehelx8
YpCJWVXTob3Gn4bMwWJO
xpJ9qhvMBdD8iamheF4b
bUm1YmHW4gPT1ujiqCmN
I7hOFurjJ6zvXGETyfCn
w23W8PNFWbqpHUKN59Bz
HpbsIRDVVpEGxnoWmdjq
58BUOxDdbTZxCKt0UqLD
uUPOlW8bRhuC1tK1NL5u
wq9ybcfwZ4jIHyYlHZ5M
4t4zKLRG2DN6icHmctOW
TzYp3np0OFsTlzCwkogM
Os6SOvjU0Irq2Xo5wLvn
1nN6FQwUxcw0H5rfQEZo
NioHP0JdBv3HmIaQZs1n
8lJWLVof1TBWtRUKmWmO
79DcTURdzt28Vdn6F0K0
UiG15bda4Pb81I9IE9ug
iZkC7CE98aE6WQK9Ghlu
dNXJTkUD3uVg6Tqi3957
Hfa9xMclyrxsOvkGcudI
QbcvG5Apom6nBWIGHRMQ
68rn9eZEcq5mJLaiNmHr
5AOtHddC5NVgQLgdmmKb
gQlrcSXzxT6V6jzbxZ79
xmulvmkeqG4kj6TAuJEg
u9dCkExxv5tLSpF8hC08
HHU4QE56UC97djO5EpmK
g3rElyboRHlAYPWviWbm


@@ -0,0 +1,40 @@
-----BEGIN CERTIFICATE-----
MIIG+jCCBOKgAwIBAgIBAjANBgkqhkiG9w0BAQsFADCBhzELMAkGA1UEBhMCSU4x
EjAQBgNVBAgMCUthcm5hdGFrYTESMBAGA1UEBwwJQkFOR0FMT1JFMRUwEwYDVQQK
DAxHb0xpbnV4Q2xvdWQxEjAQBgNVBAMMCWNhLXNlcnZlcjElMCMGCSqGSIb3DQEJ
ARYWYWRtaW5AZ29saW51eGNsb3VkLmNvbTAeFw0yMzAyMTQxMTM5NDBaFw0yNzA4
MjIxMTM5NDBaMHAxCzAJBgNVBAYTAklOMRIwEAYDVQQIDAlLYXJuYXRha2ExFTAT
BgNVBAoMDEdvTGludXhDbG91ZDEPMA0GA1UEAwwGc2VydmVyMSUwIwYJKoZIhvcN
AQkBFhZhZG1pbkBnb2xpbnV4Y2xvdWQuY29tMIICIjANBgkqhkiG9w0BAQEFAAOC
Ag8AMIICCgKCAgEAvVtxAoRjLRs3Ei4+CgzqJ2+bpc0sBdUm/4LM/D+0KbXxwD7w
HP6GcKl/9zf9GJg56pVXxXMaerMDLS4Est25+mBgqcePC6utCBYrKA25pKbkFkxZ
TPh9/R4RHGVJ3KHy9vc4VzqoV7XFMJFFUQ2fQywHZlXh6MNz0WPTIGaH7hvYoHbK
I3NpPq8TjRuuV61XB0hK+RW0K6/5Yuj74h/mfheX1VIUOjGwKnTPccZQAlrKYjeW
BZBS4YqahkTIaGLa06SdUSkuhL85rqAxWvhK9GIRlQLNYJOzg+E3jGyqf566xX60
fxM6alLYf+ZzCwSBuDDj5f+j752gPLYUI82YL4xQ+AEHNR8U1uMvt0EzzFt7mSRe
fobVr+Y2zpci+mo7kcQGOhenzGclsm+qXwMhYUnJcOYFZWtTJlFaaPreL4M3Dh+2
pmKj23ZU6zcT3MYtE6phjCLJl0DsFIcOn+tSqMdpwB20EeQjo9bVJuw/HJrlpcnY
U9aLsnm/4Ls5A0BQutZnxKBIJjpzp8VfK0WU8a4iKok3AS0z1/K+atNrgSUB9DCH
0MvLqqQmM9TdLcZj7NSEfLyyFVwPRc5dt4CrNDL7JUpMzt36ezU83JU+nfqWDZsL
+2JOaE4gGLZDcA3cfP83/mYRaAnYW/9W4vEnIpa6subzq1aFOeY/3dKLTx8CAwEA
AaOCAYUwggGBMAkGA1UdEwQCMAAwEQYJYIZIAYb4QgEBBAQDAgZAMDMGCWCGSAGG
+EIBDQQmFiRPcGVuU1NMIEdlbmVyYXRlZCBTZXJ2ZXIgQ2VydGlmaWNhdGUwHQYD
VR0OBBYEFLijeA+RFDQtuVeMUkaXqF7LF50GMIG8BgNVHSMEgbQwgbGAFKVZwpSJ
CPkNwGXyJX1sl2Pbby4FoYGNpIGKMIGHMQswCQYDVQQGEwJJTjESMBAGA1UECAwJ
S2FybmF0YWthMRIwEAYDVQQHDAlCQU5HQUxPUkUxFTATBgNVBAoMDEdvTGludXhD
bG91ZDESMBAGA1UEAwwJY2Etc2VydmVyMSUwIwYJKoZIhvcNAQkBFhZhZG1pbkBn
b2xpbnV4Y2xvdWQuY29tggkA7NvbvF8jodEwDgYDVR0PAQH/BAQDAgWgMBMGA1Ud
JQQMMAoGCCsGAQUFBwMBMCkGA1UdEQQiMCCHBMCoAHKHBAoAAg+CEnNlcnZlci5l
eGFtcGxlLmNvbTANBgkqhkiG9w0BAQsFAAOCAgEAXvaS9+y5g2Kw/4EPsnhjpN1v
CxXW0+UYSWOaxVJdEAjGQI/1m9LOiF9IHImmiwluJ/Bex1TzuaTCKmpluPwGvd9D
Zgf0A5SmVqW4WTT4d2nSecxw4OICJ3j6ubKkvMVf9s+ZJwb+fMMUaSt80bWqp1TY
XbZguv67PkBECPqVe6rgzXnTLwM3lE8EgG8VtM3IOy9a5SIEjm5L8SQ2I2hiytmE
e4jR1fbZsB5NbBdfA3GFMKQEE2dIymkG3Bz71M3tZi1y4RnHtRKdrFtrIlgclrwd
nVnQn/NiXUOOzsL2+vwSF32SSbiLvOxu63qO1YDBkKVChog3P/2f6xcJ23wkbHlL
qaL2jvLo6ylvMPUYHf5ZWat5zayaGUMHYDKcbD4Dw7aY3M0tNgEHdqUqNePmKvmn
luyXof3KmmLgWlcfBoX96a7hXDtxFyB2N4nzfQBXh+0VAlgqa+ZZhpdEqRQaWkkR
MDBdsVJ9O3812IaNfMzpS1vb701GFDCM5Hcyw6a/v6Ln08NMhYut4saLi13kHilS
Wq7wOAfW3rzxuhjOJJxsi0jJNI775q+a/BbbG/CPl826bXPGH43BdPV8mKwsX5HM
wwDKf3otP/v7bxwJabfhv2EKUy+W1kkFW9FEZ919yTtfhSDrTNcrXtE7RkiAepfm
95I025URIlhJGLGBUlA=
-----END CERTIFICATE-----


@@ -0,0 +1,23 @@
[package]
name = "common-procedure"
version.workspace = true
edition.workspace = true
license.workspace = true
[dependencies]
async-trait.workspace = true
common-error = { path = "../error" }
common-runtime = { path = "../runtime" }
common-telemetry = { path = "../telemetry" }
futures.workspace = true
object-store = { path = "../../object-store" }
serde.workspace = true
serde_json = "1.0"
smallvec = "1"
snafu.workspace = true
tokio.workspace = true
uuid.workspace = true
[dev-dependencies]
futures-util.workspace = true
tempdir = "0.3"


@@ -0,0 +1,143 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use common_error::prelude::*;
use crate::procedure::ProcedureId;
/// Procedure error.
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display(
"Failed to execute procedure due to external error, source: {}",
source
))]
External {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Loader {} is already registered", name))]
LoaderConflict { name: String, backtrace: Backtrace },
#[snafu(display("Failed to serialize to json, source: {}", source))]
ToJson {
source: serde_json::Error,
backtrace: Backtrace,
},
#[snafu(display("Procedure {} already exists", procedure_id))]
DuplicateProcedure {
procedure_id: ProcedureId,
backtrace: Backtrace,
},
#[snafu(display("Failed to put {}, source: {}", key, source))]
PutState {
key: String,
source: object_store::Error,
},
#[snafu(display("Failed to delete {}, source: {}", key, source))]
DeleteState {
key: String,
source: object_store::Error,
},
#[snafu(display("Failed to list {}, source: {}", path, source))]
ListState {
path: String,
source: object_store::Error,
},
#[snafu(display("Failed to read {}, source: {}", key, source))]
ReadState {
key: String,
source: object_store::Error,
},
#[snafu(display("Failed to deserialize from json, source: {}", source))]
FromJson {
source: serde_json::Error,
backtrace: Backtrace,
},
#[snafu(display("Procedure exec failed, source: {}", source))]
RetryLater {
#[snafu(backtrace)]
source: BoxedError,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::External { source } => source.status_code(),
Error::ToJson { .. }
| Error::PutState { .. }
| Error::DeleteState { .. }
| Error::ListState { .. }
| Error::ReadState { .. }
| Error::FromJson { .. }
| Error::RetryLater { .. } => StatusCode::Internal,
Error::LoaderConflict { .. } | Error::DuplicateProcedure { .. } => {
StatusCode::InvalidArguments
}
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
}
impl Error {
/// Creates a new [Error::External] error from source `err`.
pub fn external<E: ErrorExt + Send + Sync + 'static>(err: E) -> Error {
Error::External {
source: BoxedError::new(err),
}
}
/// Creates a new [Error::RetryLater] error from source `err`.
pub fn retry_later<E: ErrorExt + Send + Sync + 'static>(err: E) -> Error {
Error::RetryLater {
source: BoxedError::new(err),
}
}
/// Determine whether this error is the [Error::RetryLater] variant.
pub fn is_retry_later(&self) -> bool {
matches!(self, Error::RetryLater { .. })
}
/// Creates a new [Error::RetryLater] or [Error::External] error from source `err` according
/// to its [StatusCode].
pub fn from_error_ext<E: ErrorExt + Send + Sync + 'static>(err: E) -> Self {
if err.status_code().is_retryable() {
Error::retry_later(err)
} else {
Error::external(err)
}
}
}
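// Illustrative sketch, not part of this change: `from_error_ext` chooses the
// variant from the source's status code, so a retryable failure becomes
// `Error::RetryLater` while anything else becomes `Error::External`. It assumes
// `MockError::new` builds an error with the given status code, as in the tests
// elsewhere in this change.
#[cfg(test)]
mod conversion_sketch {
    use common_error::mock::MockError;

    use super::*;

    #[test]
    fn retryable_sources_map_to_retry_later() {
        let err = Error::from_error_ext(MockError::new(StatusCode::StorageUnavailable));
        assert!(err.is_retry_later());

        let err = Error::from_error_ext(MockError::new(StatusCode::InvalidArguments));
        assert!(!err.is_retry_later());
    }
}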


@@ -12,4 +12,15 @@
// See the License for the specific language governing permissions and
// limitations under the License.
pub use opendal::services::azblob::Builder;
//! Common traits and structures for the procedure framework.
pub mod error;
pub mod local;
mod procedure;
mod store;
pub use crate::error::{Error, Result};
pub use crate::procedure::{
BoxedProcedure, Context, ContextProvider, LockKey, Procedure, ProcedureId, ProcedureManager,
ProcedureManagerRef, ProcedureState, ProcedureWithId, Status, Watcher,
};


@@ -0,0 +1,707 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod lock;
mod runner;
use std::collections::{HashMap, VecDeque};
use std::sync::{Arc, Mutex, RwLock};
use async_trait::async_trait;
use common_telemetry::logging;
use object_store::ObjectStore;
use snafu::ensure;
use tokio::sync::watch::{self, Receiver, Sender};
use tokio::sync::Notify;
use crate::error::{DuplicateProcedureSnafu, LoaderConflictSnafu, Result};
use crate::local::lock::LockMap;
use crate::local::runner::Runner;
use crate::procedure::BoxedProcedureLoader;
use crate::store::{ObjectStateStore, ProcedureMessage, ProcedureStore, StateStoreRef};
use crate::{
BoxedProcedure, ContextProvider, LockKey, ProcedureId, ProcedureManager, ProcedureState,
ProcedureWithId, Watcher,
};
/// Shared metadata of a procedure.
///
/// # Note
/// [Notify] is not a condition variable: we can't guarantee that waiters are notified
/// if they didn't call `notified()` before we signal the notify. So we
/// 1. use a dedicated notify for each condition, such as waiting for a lock or waiting
/// for children;
/// 2. always use `notify_one` and ensure there is only one waiter.
/// A minimal sketch of this convention follows the `impl ProcedureMeta` block below.
#[derive(Debug)]
pub(crate) struct ProcedureMeta {
/// Id of this procedure.
id: ProcedureId,
/// Notify to wait for a lock.
lock_notify: Notify,
/// Parent procedure id.
parent_id: Option<ProcedureId>,
/// Notify to wait for subprocedures.
child_notify: Notify,
/// Lock required by this procedure.
lock_key: LockKey,
/// Sender to notify the procedure state.
state_sender: Sender<ProcedureState>,
/// Receiver to watch the procedure state.
state_receiver: Receiver<ProcedureState>,
/// Id of child procedures.
children: Mutex<Vec<ProcedureId>>,
}
impl ProcedureMeta {
fn new(id: ProcedureId, parent_id: Option<ProcedureId>, lock_key: LockKey) -> ProcedureMeta {
let (state_sender, state_receiver) = watch::channel(ProcedureState::Running);
ProcedureMeta {
id,
lock_notify: Notify::new(),
parent_id,
child_notify: Notify::new(),
lock_key,
state_sender,
state_receiver,
children: Mutex::new(Vec::new()),
}
}
/// Returns current [ProcedureState].
fn state(&self) -> ProcedureState {
self.state_receiver.borrow().clone()
}
/// Update current [ProcedureState].
fn set_state(&self, state: ProcedureState) {
// Safety: ProcedureMeta also holds the receiver, so `send()` should never fail.
self.state_sender.send(state).unwrap();
}
/// Push `procedure_id` of the subprocedure to the metadata.
fn push_child(&self, procedure_id: ProcedureId) {
let mut children = self.children.lock().unwrap();
children.push(procedure_id);
}
/// Append subprocedures to given `buffer`.
fn list_children(&self, buffer: &mut Vec<ProcedureId>) {
let children = self.children.lock().unwrap();
buffer.extend_from_slice(&children);
}
/// Returns the number of subprocedures.
fn num_children(&self) -> usize {
self.children.lock().unwrap().len()
}
}
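// Illustrative sketch, not part of this change, of the single-waiter convention
// described on [ProcedureMeta]: `notify_one` stores a permit when no task is
// waiting, so the lone waiter still wakes even if the signal arrives before
// `notified().await`.
#[allow(dead_code)]
async fn notify_convention_sketch() {
    let notify = Notify::new();
    // Signal first; a permit is stored because nobody is waiting yet.
    notify.notify_one();
    // The single waiter consumes the stored permit immediately.
    notify.notified().await;
}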
/// Reference counted pointer to [ProcedureMeta].
type ProcedureMetaRef = Arc<ProcedureMeta>;
/// Procedure loaded from store.
struct LoadedProcedure {
procedure: BoxedProcedure,
parent_id: Option<ProcedureId>,
step: u32,
}
/// Shared context of the manager.
pub(crate) struct ManagerContext {
/// Procedure loaders. The key is the type name of the procedure which the loader returns.
loaders: Mutex<HashMap<String, BoxedProcedureLoader>>,
lock_map: LockMap,
procedures: RwLock<HashMap<ProcedureId, ProcedureMetaRef>>,
/// Messages loaded from the procedure store.
messages: Mutex<HashMap<ProcedureId, ProcedureMessage>>,
}
#[async_trait]
impl ContextProvider for ManagerContext {
async fn procedure_state(&self, procedure_id: ProcedureId) -> Result<Option<ProcedureState>> {
Ok(self.state(procedure_id))
}
}
impl ManagerContext {
/// Returns a new [ManagerContext].
fn new() -> ManagerContext {
ManagerContext {
loaders: Mutex::new(HashMap::new()),
lock_map: LockMap::new(),
procedures: RwLock::new(HashMap::new()),
messages: Mutex::new(HashMap::new()),
}
}
/// Returns true if the procedure with specific `procedure_id` exists.
fn contains_procedure(&self, procedure_id: ProcedureId) -> bool {
let procedures = self.procedures.read().unwrap();
procedures.contains_key(&procedure_id)
}
/// Try to insert the `procedure` to the context if there is no procedure
/// with same [ProcedureId].
///
/// Returns `false` if there is already a procedure using the same [ProcedureId].
fn try_insert_procedure(&self, meta: ProcedureMetaRef) -> bool {
let mut procedures = self.procedures.write().unwrap();
if procedures.contains_key(&meta.id) {
return false;
}
let old = procedures.insert(meta.id, meta);
debug_assert!(old.is_none());
true
}
/// Returns the [ProcedureState] of specific `procedure_id`.
fn state(&self, procedure_id: ProcedureId) -> Option<ProcedureState> {
let procedures = self.procedures.read().unwrap();
procedures.get(&procedure_id).map(|meta| meta.state())
}
/// Returns the [Watcher] of specific `procedure_id`.
fn watcher(&self, procedure_id: ProcedureId) -> Option<Watcher> {
let procedures = self.procedures.read().unwrap();
procedures
.get(&procedure_id)
.map(|meta| meta.state_receiver.clone())
}
/// Notify a suspended parent procedure with specific `procedure_id` by its subprocedure.
fn notify_by_subprocedure(&self, procedure_id: ProcedureId) {
let procedures = self.procedures.read().unwrap();
if let Some(meta) = procedures.get(&procedure_id) {
meta.child_notify.notify_one();
}
}
/// Load procedure with specific `procedure_id` from cached [ProcedureMessage]s.
fn load_one_procedure(&self, procedure_id: ProcedureId) -> Option<LoadedProcedure> {
let message = {
let messages = self.messages.lock().unwrap();
messages.get(&procedure_id).cloned()?
};
self.load_one_procedure_from_message(procedure_id, &message)
}
/// Load procedure from specific [ProcedureMessage].
fn load_one_procedure_from_message(
&self,
procedure_id: ProcedureId,
message: &ProcedureMessage,
) -> Option<LoadedProcedure> {
let loaders = self.loaders.lock().unwrap();
let loader = loaders.get(&message.type_name).or_else(|| {
logging::error!(
"Loader not found, procedure_id: {}, type_name: {}",
procedure_id,
message.type_name
);
None
})?;
let procedure = loader(&message.data)
.map_err(|e| {
logging::error!(
"Failed to load procedure data, key: {}, source: {}",
procedure_id,
e
);
e
})
.ok()?;
Some(LoadedProcedure {
procedure,
parent_id: message.parent_id,
step: message.step,
})
}
/// Returns all procedures in the tree (including given `root` procedure).
///
/// If callers need a consistent view of the tree, they must ensure no new
/// procedure is added to the tree during using this method.
fn procedures_in_tree(&self, root: &ProcedureMetaRef) -> Vec<ProcedureId> {
let sub_num = root.num_children();
// Reserve capacity for the root procedure and its children.
let mut procedures = Vec::with_capacity(1 + sub_num);
let mut queue = VecDeque::with_capacity(1 + sub_num);
// Push the root procedure to the queue.
queue.push_back(root.clone());
let mut children_ids = Vec::with_capacity(sub_num);
let mut children = Vec::with_capacity(sub_num);
while let Some(meta) = queue.pop_front() {
procedures.push(meta.id);
// Find the metadata of the children.
children_ids.clear();
meta.list_children(&mut children_ids);
self.find_procedures(&children_ids, &mut children);
// Traverse children later.
for child in children.drain(..) {
queue.push_back(child);
}
}
procedures
}
/// Finds procedures by given `procedure_ids`.
///
/// Ignores the id if corresponding procedure is not found.
fn find_procedures(&self, procedure_ids: &[ProcedureId], metas: &mut Vec<ProcedureMetaRef>) {
let procedures = self.procedures.read().unwrap();
for procedure_id in procedure_ids {
if let Some(meta) = procedures.get(procedure_id) {
metas.push(meta.clone());
}
}
}
/// Remove cached [ProcedureMessage] by ids.
fn remove_messages(&self, procedure_ids: &[ProcedureId]) {
let mut messages = self.messages.lock().unwrap();
for procedure_id in procedure_ids {
messages.remove(procedure_id);
}
}
}
/// Config for [LocalManager].
#[derive(Debug)]
pub struct ManagerConfig {
/// Object store
pub object_store: ObjectStore,
}
/// A [ProcedureManager] that maintains procedure states locally.
pub struct LocalManager {
manager_ctx: Arc<ManagerContext>,
state_store: StateStoreRef,
}
impl LocalManager {
/// Create a new [LocalManager] with specific `config`.
pub fn new(config: ManagerConfig) -> LocalManager {
LocalManager {
manager_ctx: Arc::new(ManagerContext::new()),
state_store: Arc::new(ObjectStateStore::new(config.object_store)),
}
}
/// Submit a root procedure with given `procedure_id`.
fn submit_root(
&self,
procedure_id: ProcedureId,
step: u32,
procedure: BoxedProcedure,
) -> Result<Watcher> {
let meta = Arc::new(ProcedureMeta::new(procedure_id, None, procedure.lock_key()));
let runner = Runner {
meta: meta.clone(),
procedure,
manager_ctx: self.manager_ctx.clone(),
step,
store: ProcedureStore::new(self.state_store.clone()),
};
let watcher = meta.state_receiver.clone();
// Insert the meta into the manager before actually spawning the runner.
ensure!(
self.manager_ctx.try_insert_procedure(meta),
DuplicateProcedureSnafu { procedure_id },
);
common_runtime::spawn_bg(async move {
// Run the root procedure.
let _ = runner.run().await;
});
Ok(watcher)
}
}
#[async_trait]
impl ProcedureManager for LocalManager {
fn register_loader(&self, name: &str, loader: BoxedProcedureLoader) -> Result<()> {
let mut loaders = self.manager_ctx.loaders.lock().unwrap();
ensure!(!loaders.contains_key(name), LoaderConflictSnafu { name });
loaders.insert(name.to_string(), loader);
Ok(())
}
async fn submit(&self, procedure: ProcedureWithId) -> Result<Watcher> {
let procedure_id = procedure.id;
ensure!(
!self.manager_ctx.contains_procedure(procedure_id),
DuplicateProcedureSnafu { procedure_id }
);
self.submit_root(procedure.id, 0, procedure.procedure)
}
async fn recover(&self) -> Result<()> {
logging::info!("LocalManager start to recover");
let procedure_store = ProcedureStore::new(self.state_store.clone());
let messages = procedure_store.load_messages().await?;
for (procedure_id, message) in &messages {
if message.parent_id.is_none() {
// This is the root procedure. We only submit the root procedure as it will
// submit sub-procedures to the manager.
let Some(loaded_procedure) = self.manager_ctx.load_one_procedure_from_message(*procedure_id, message) else {
// Try to load other procedures.
continue;
};
logging::info!(
"Recover root procedure {}-{}, step: {}",
loaded_procedure.procedure.type_name(),
procedure_id,
loaded_procedure.step
);
if let Err(e) = self.submit_root(
*procedure_id,
loaded_procedure.step,
loaded_procedure.procedure,
) {
logging::error!(e; "Failed to recover procedure {}", procedure_id);
}
}
}
Ok(())
}
async fn procedure_state(&self, procedure_id: ProcedureId) -> Result<Option<ProcedureState>> {
Ok(self.manager_ctx.state(procedure_id))
}
fn procedure_watcher(&self, procedure_id: ProcedureId) -> Option<Watcher> {
self.manager_ctx.watcher(procedure_id)
}
}
/// Create a new [ProcedureMeta] for test purposes.
#[cfg(test)]
mod test_util {
use object_store::services::Fs as Builder;
use object_store::ObjectStoreBuilder;
use tempdir::TempDir;
use super::*;
pub(crate) fn procedure_meta_for_test() -> ProcedureMeta {
ProcedureMeta::new(ProcedureId::random(), None, LockKey::default())
}
pub(crate) fn new_object_store(dir: &TempDir) -> ObjectStore {
let store_dir = dir.path().to_str().unwrap();
let accessor = Builder::default().root(store_dir).build().unwrap();
ObjectStore::new(accessor).finish()
}
}
#[cfg(test)]
mod tests {
use common_error::mock::MockError;
use common_error::prelude::StatusCode;
use tempdir::TempDir;
use super::*;
use crate::error::Error;
use crate::{Context, Procedure, Status};
#[test]
fn test_manager_context() {
let ctx = ManagerContext::new();
let meta = Arc::new(test_util::procedure_meta_for_test());
assert!(!ctx.contains_procedure(meta.id));
assert!(ctx.state(meta.id).is_none());
assert!(ctx.try_insert_procedure(meta.clone()));
assert!(ctx.contains_procedure(meta.id));
assert_eq!(ProcedureState::Running, ctx.state(meta.id).unwrap());
meta.set_state(ProcedureState::Done);
assert_eq!(ProcedureState::Done, ctx.state(meta.id).unwrap());
}
#[test]
fn test_manager_context_insert_duplicate() {
let ctx = ManagerContext::new();
let meta = Arc::new(test_util::procedure_meta_for_test());
assert!(ctx.try_insert_procedure(meta.clone()));
assert!(!ctx.try_insert_procedure(meta));
}
fn new_child(parent_id: ProcedureId, ctx: &ManagerContext) -> ProcedureMetaRef {
let mut child = test_util::procedure_meta_for_test();
child.parent_id = Some(parent_id);
let child = Arc::new(child);
assert!(ctx.try_insert_procedure(child.clone()));
let mut parent = Vec::new();
ctx.find_procedures(&[parent_id], &mut parent);
parent[0].push_child(child.id);
child
}
#[test]
fn test_procedures_in_tree() {
let ctx = ManagerContext::new();
let root = Arc::new(test_util::procedure_meta_for_test());
assert!(ctx.try_insert_procedure(root.clone()));
assert_eq!(1, ctx.procedures_in_tree(&root).len());
let child1 = new_child(root.id, &ctx);
let child2 = new_child(root.id, &ctx);
let child3 = new_child(child1.id, &ctx);
let child4 = new_child(child1.id, &ctx);
let child5 = new_child(child2.id, &ctx);
let expect = vec![
root.id, child1.id, child2.id, child3.id, child4.id, child5.id,
];
assert_eq!(expect, ctx.procedures_in_tree(&root));
}
#[derive(Debug)]
struct ProcedureToLoad {
content: String,
lock_key: LockKey,
}
#[async_trait]
impl Procedure for ProcedureToLoad {
fn type_name(&self) -> &str {
"ProcedureToLoad"
}
async fn execute(&mut self, _ctx: &Context) -> Result<Status> {
Ok(Status::Done)
}
fn dump(&self) -> Result<String> {
Ok(self.content.clone())
}
fn lock_key(&self) -> LockKey {
self.lock_key.clone()
}
}
impl ProcedureToLoad {
fn new(content: &str) -> ProcedureToLoad {
ProcedureToLoad {
content: content.to_string(),
lock_key: LockKey::default(),
}
}
fn loader() -> BoxedProcedureLoader {
let f = |json: &str| {
let procedure = ProcedureToLoad::new(json);
Ok(Box::new(procedure) as _)
};
Box::new(f)
}
}
#[test]
fn test_register_loader() {
let dir = TempDir::new("register").unwrap();
let config = ManagerConfig {
object_store: test_util::new_object_store(&dir),
};
let manager = LocalManager::new(config);
manager
.register_loader("ProcedureToLoad", ProcedureToLoad::loader())
.unwrap();
// Register duplicate loader.
let err = manager
.register_loader("ProcedureToLoad", ProcedureToLoad::loader())
.unwrap_err();
assert!(matches!(err, Error::LoaderConflict { .. }), "{err}");
}
#[tokio::test]
async fn test_recover() {
let dir = TempDir::new("recover").unwrap();
let object_store = test_util::new_object_store(&dir);
let config = ManagerConfig {
object_store: object_store.clone(),
};
let manager = LocalManager::new(config);
manager
.register_loader("ProcedureToLoad", ProcedureToLoad::loader())
.unwrap();
// Prepare data
let procedure_store = ProcedureStore::from(object_store.clone());
let root: BoxedProcedure = Box::new(ProcedureToLoad::new("test recover manager"));
let root_id = ProcedureId::random();
// Prepare data for the root procedure.
for step in 0..3 {
procedure_store
.store_procedure(root_id, step, &root, None)
.await
.unwrap();
}
let child: BoxedProcedure = Box::new(ProcedureToLoad::new("a child procedure"));
let child_id = ProcedureId::random();
// Prepare data for the child procedure
for step in 0..2 {
procedure_store
.store_procedure(child_id, step, &child, Some(root_id))
.await
.unwrap();
}
// Recover the manager
manager.recover().await.unwrap();
// The manager should submit the root procedure.
assert!(manager.procedure_state(root_id).await.unwrap().is_some());
// The mocked root procedure doesn't actually submit subprocedures, so there
// is no related state.
assert!(manager.procedure_state(child_id).await.unwrap().is_none());
}
#[tokio::test]
async fn test_submit_procedure() {
let dir = TempDir::new("submit").unwrap();
let config = ManagerConfig {
object_store: test_util::new_object_store(&dir),
};
let manager = LocalManager::new(config);
let procedure_id = ProcedureId::random();
assert!(manager
.procedure_state(procedure_id)
.await
.unwrap()
.is_none());
assert!(manager.procedure_watcher(procedure_id).is_none());
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single("test.submit");
manager
.submit(ProcedureWithId {
id: procedure_id,
procedure: Box::new(procedure),
})
.await
.unwrap();
assert!(manager
.procedure_state(procedure_id)
.await
.unwrap()
.is_some());
// Wait for the procedure done.
let mut watcher = manager.procedure_watcher(procedure_id).unwrap();
watcher.changed().await.unwrap();
assert_eq!(ProcedureState::Done, *watcher.borrow());
// Try to submit procedure with same id again.
let err = manager
.submit(ProcedureWithId {
id: procedure_id,
procedure: Box::new(ProcedureToLoad::new("submit")),
})
.await
.unwrap_err();
assert!(matches!(err, Error::DuplicateProcedure { .. }), "{err}");
}
#[tokio::test]
async fn test_state_changed_on_err() {
let dir = TempDir::new("on_err").unwrap();
let config = ManagerConfig {
object_store: test_util::new_object_store(&dir),
};
let manager = LocalManager::new(config);
#[derive(Debug)]
struct MockProcedure {
panic: bool,
}
#[async_trait]
impl Procedure for MockProcedure {
fn type_name(&self) -> &str {
"MockProcedure"
}
async fn execute(&mut self, _ctx: &Context) -> Result<Status> {
if self.panic {
// Test that the runner can set the state to failed even if the
// procedure panics.
panic!();
} else {
Err(Error::external(MockError::new(StatusCode::Unexpected)))
}
}
fn dump(&self) -> Result<String> {
Ok(String::new())
}
fn lock_key(&self) -> LockKey {
LockKey::single("test.submit")
}
}
let check_procedure = |procedure| {
async {
let procedure_id = ProcedureId::random();
let mut watcher = manager
.submit(ProcedureWithId {
id: procedure_id,
procedure: Box::new(procedure),
})
.await
.unwrap();
// Wait for the notification.
watcher.changed().await.unwrap();
assert_eq!(ProcedureState::Failed, *watcher.borrow());
}
};
check_procedure(MockProcedure { panic: false }).await;
check_procedure(MockProcedure { panic: true }).await;
}
}
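Putting the pieces above together, the intended driver flow is: construct the manager, register loaders, recover persisted procedures, then submit new work and watch it. Below is a minimal usage sketch, not code from the repository, reusing the test-only ProcedureToLoad type for brevity.

// A hedged usage sketch: every API here (LocalManager, ManagerConfig,
// ProcedureWithId, ProcedureState) is defined above; only this driver
// function itself is made up.
async fn drive_manager(object_store: ObjectStore) -> Result<()> {
    let manager = LocalManager::new(ManagerConfig { object_store });
    // Loaders must be registered before recover() so that persisted messages
    // can be deserialized back into procedures.
    manager.register_loader("ProcedureToLoad", ProcedureToLoad::loader())?;
    manager.recover().await?;
    // Submit a new procedure and wait for it to reach a terminal state.
    let procedure = Box::new(ProcedureToLoad::new("example"));
    let mut watcher = manager
        .submit(ProcedureWithId::with_random_id(procedure))
        .await?;
    watcher.changed().await.expect("the runner dropped the sender");
    assert_eq!(ProcedureState::Done, *watcher.borrow());
    Ok(())
}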


@@ -0,0 +1,214 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::{HashMap, VecDeque};
use std::sync::RwLock;
use crate::local::ProcedureMetaRef;
use crate::ProcedureId;
/// A lock entry.
#[derive(Debug)]
struct Lock {
/// Current lock owner.
owner: ProcedureMetaRef,
/// Waiter procedures.
waiters: VecDeque<ProcedureMetaRef>,
}
impl Lock {
/// Returns a [Lock] with specific `owner` procedure.
fn from_owner(owner: ProcedureMetaRef) -> Lock {
Lock {
owner,
waiters: VecDeque::new(),
}
}
/// Try to pop a waiter from the waiter list, set it as owner
/// and wake up the new owner.
///
/// Returns false if there is no waiter in the waiter list.
fn switch_owner(&mut self) -> bool {
if let Some(waiter) = self.waiters.pop_front() {
// Update owner.
self.owner = waiter.clone();
// We need to use notify_one() since the waiter may not have called `notified()` yet.
waiter.lock_notify.notify_one();
true
} else {
false
}
}
}
/// Manages lock entries for procedures.
pub(crate) struct LockMap {
locks: RwLock<HashMap<String, Lock>>,
}
impl LockMap {
/// Returns a new [LockMap].
pub(crate) fn new() -> LockMap {
LockMap {
locks: RwLock::new(HashMap::new()),
}
}
/// Acquire lock by `key` for procedure with specific `meta`.
///
/// Though `meta` is cloneable, callers must ensure that only one `meta`
/// is acquiring and holding the lock at the same time.
///
/// # Panics
/// Panics if the procedure acquires the lock recursively.
pub(crate) async fn acquire_lock(&self, key: &str, meta: ProcedureMetaRef) {
assert!(!self.hold_lock(key, meta.id));
{
let mut locks = self.locks.write().unwrap();
if let Some(lock) = locks.get_mut(key) {
// Lock already exists, but we don't expect that a procedure acquires
// the same lock again.
assert_ne!(lock.owner.id, meta.id);
// Add this procedure to the waiter list. Here we don't check
// whether the procedure is already in the waiter list as we
// expect that a procedure should not wait for two locks simultaneously.
lock.waiters.push_back(meta.clone());
} else {
locks.insert(key.to_string(), Lock::from_owner(meta));
return;
}
}
// Wait for notify.
meta.lock_notify.notified().await;
assert!(self.hold_lock(key, meta.id));
}
/// Release lock by `key`.
pub(crate) fn release_lock(&self, key: &str, procedure_id: ProcedureId) {
let mut locks = self.locks.write().unwrap();
if let Some(lock) = locks.get_mut(key) {
if lock.owner.id != procedure_id {
// This is not the lock owner.
return;
}
if !lock.switch_owner() {
// Nobody waits for this lock, so we can remove the lock entry.
locks.remove(key);
}
}
}
/// Returns true if the procedure with specific `procedure_id` holds the
/// lock of `key`.
fn hold_lock(&self, key: &str, procedure_id: ProcedureId) -> bool {
let locks = self.locks.read().unwrap();
locks
.get(key)
.map(|lock| lock.owner.id == procedure_id)
.unwrap_or(false)
}
/// Returns true if the procedure is waiting for the lock `key`.
#[cfg(test)]
fn waiting_lock(&self, key: &str, procedure_id: ProcedureId) -> bool {
let locks = self.locks.read().unwrap();
locks
.get(key)
.map(|lock| lock.waiters.iter().any(|meta| meta.id == procedure_id))
.unwrap_or(false)
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use super::*;
use crate::local::test_util;
#[test]
fn test_lock_no_waiter() {
let meta = Arc::new(test_util::procedure_meta_for_test());
let mut lock = Lock::from_owner(meta);
assert!(!lock.switch_owner());
}
#[tokio::test]
async fn test_lock_with_waiter() {
let owner = Arc::new(test_util::procedure_meta_for_test());
let mut lock = Lock::from_owner(owner);
let waiter = Arc::new(test_util::procedure_meta_for_test());
lock.waiters.push_back(waiter.clone());
assert!(lock.switch_owner());
assert!(lock.waiters.is_empty());
waiter.lock_notify.notified().await;
assert_eq!(lock.owner.id, waiter.id);
}
#[tokio::test]
async fn test_lock_map() {
let key = "hello";
let owner = Arc::new(test_util::procedure_meta_for_test());
let lock_map = Arc::new(LockMap::new());
lock_map.acquire_lock(key, owner.clone()).await;
let waiter = Arc::new(test_util::procedure_meta_for_test());
let waiter_id = waiter.id;
// The waiter releases the lock; this should not take effect.
lock_map.release_lock(key, waiter_id);
let lock_map2 = lock_map.clone();
let owner_id = owner.id;
let handle = tokio::spawn(async move {
assert!(lock_map2.hold_lock(key, owner_id));
assert!(!lock_map2.hold_lock(key, waiter_id));
// Waiter wait for lock.
lock_map2.acquire_lock(key, waiter.clone()).await;
assert!(lock_map2.hold_lock(key, waiter_id));
});
// Owner still holds the lock.
assert!(lock_map.hold_lock(key, owner_id));
// Wait until the waiter starts waiting for the lock
while !lock_map.waiting_lock(key, waiter_id) {
tokio::time::sleep(std::time::Duration::from_millis(5)).await;
}
// Release lock
lock_map.release_lock(key, owner_id);
assert!(!lock_map.hold_lock(key, owner_id));
// Wait for task.
handle.await.unwrap();
// The waiter should hold the lock now.
assert!(lock_map.hold_lock(key, waiter_id));
lock_map.release_lock(key, waiter_id);
}
}
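The notify_one() comment in switch_owner above leans on a subtlety of tokio::sync::Notify that is easy to miss: notify_one() stores a permit when no task is currently waiting, so a wake-up sent before the new owner calls notified() is not lost. A standalone sketch, not from the repository:

use tokio::sync::Notify;

#[tokio::main]
async fn main() {
    let notify = Notify::new();
    // The releasing side wakes the new owner first...
    notify.notify_one();
    // ...and because the permit was stored, this later call to notified()
    // returns immediately instead of sleeping forever.
    notify.notified().await;
    println!("new owner woke up");
}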


@@ -0,0 +1,822 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use std::time::Duration;
use common_telemetry::logging;
use tokio::time;
use crate::error::{Error, Result};
use crate::local::{ManagerContext, ProcedureMeta, ProcedureMetaRef};
use crate::store::ProcedureStore;
use crate::{BoxedProcedure, Context, ProcedureId, ProcedureState, ProcedureWithId, Status};
const ERR_WAIT_DURATION: Duration = Duration::from_secs(30);
#[derive(Debug)]
enum ExecResult {
Continue,
Done,
RetryLater,
Failed(Error),
}
#[cfg(test)]
impl ExecResult {
fn is_continue(&self) -> bool {
matches!(self, ExecResult::Continue)
}
fn is_done(&self) -> bool {
matches!(self, ExecResult::Done)
}
fn is_retry_later(&self) -> bool {
matches!(self, ExecResult::RetryLater)
}
fn is_failed(&self) -> bool {
matches!(self, ExecResult::Failed(_))
}
}
/// A guard to clean up procedure state.
struct ProcedureGuard {
meta: ProcedureMetaRef,
manager_ctx: Arc<ManagerContext>,
finish: bool,
}
impl ProcedureGuard {
/// Returns a new [ProcedureGuard].
fn new(meta: ProcedureMetaRef, manager_ctx: Arc<ManagerContext>) -> ProcedureGuard {
ProcedureGuard {
meta,
manager_ctx,
finish: false,
}
}
/// The procedure is finished successfully.
fn finish(mut self) {
self.finish = true;
}
}
impl Drop for ProcedureGuard {
fn drop(&mut self) {
if !self.finish {
logging::error!("Procedure {} exits unexpectedly", self.meta.id);
// Set state to failed. This is useful in tests as the runtime may not abort
// when the runner task panics.
// See https://github.com/tokio-rs/tokio/issues/2002 .
// We call set_panic_hook() in the application's main function, but our tests
// don't install this panic hook.
self.meta.set_state(ProcedureState::Failed);
}
// Notify parent procedure.
if let Some(parent_id) = self.meta.parent_id {
self.manager_ctx.notify_by_subprocedure(parent_id);
}
// Release locks in reverse order.
for key in self.meta.lock_key.keys_to_unlock() {
self.manager_ctx.lock_map.release_lock(key, self.meta.id);
}
}
}
// TODO(yingwen): Support cancellation.
pub(crate) struct Runner {
pub(crate) meta: ProcedureMetaRef,
pub(crate) procedure: BoxedProcedure,
pub(crate) manager_ctx: Arc<ManagerContext>,
pub(crate) step: u32,
pub(crate) store: ProcedureStore,
}
impl Runner {
/// Run the procedure.
pub(crate) async fn run(mut self) -> Result<()> {
// Ensure we can update the procedure state.
let guard = ProcedureGuard::new(self.meta.clone(), self.manager_ctx.clone());
logging::info!(
"Runner {}-{} starts",
self.procedure.type_name(),
self.meta.id
);
// TODO(yingwen): Detect recursive locking (and deadlock) if possible. Maybe we could detect
// recursive locking by adding a root procedure id to the meta.
for key in self.meta.lock_key.keys_to_lock() {
// Acquire lock for each key.
self.manager_ctx
.lock_map
.acquire_lock(key, self.meta.clone())
.await;
}
let mut result = Ok(());
// Execute the procedure. We need to release the lock whether the execution
// succeeds or fails.
if let Err(e) = self.execute_procedure_in_loop().await {
result = Err(e);
}
// We can't remove the metadata of the procedure now as users and its parent might
// need to query its state.
// TODO(yingwen): 1. Add TTL to the metadata; 2. Only keep state in the procedure store
// so we don't need to always store the metadata in memory after the procedure is done.
// Release locks and notify parent procedure.
guard.finish();
// If this is the root procedure, clean up message cache.
if self.meta.parent_id.is_none() {
let procedure_ids = self.manager_ctx.procedures_in_tree(&self.meta);
self.manager_ctx.remove_messages(&procedure_ids);
}
logging::info!(
"Runner {}-{} exits",
self.procedure.type_name(),
self.meta.id
);
result
}
async fn execute_procedure_in_loop(&mut self) -> Result<()> {
let ctx = Context {
procedure_id: self.meta.id,
provider: self.manager_ctx.clone(),
};
loop {
match self.execute_once(&ctx).await {
ExecResult::Continue => (),
ExecResult::Done => return Ok(()),
ExecResult::RetryLater => {
self.wait_on_err().await;
}
ExecResult::Failed(e) => return Err(e),
}
}
}
async fn execute_once(&mut self, ctx: &Context) -> ExecResult {
match self.procedure.execute(ctx).await {
Ok(status) => {
logging::debug!(
"Execute procedure {}-{} once, status: {:?}, need_persist: {}",
self.procedure.type_name(),
self.meta.id,
status,
status.need_persist(),
);
if status.need_persist() && self.persist_procedure().await.is_err() {
return ExecResult::RetryLater;
}
match status {
Status::Executing { .. } => (),
Status::Suspended { subprocedures, .. } => {
self.on_suspended(subprocedures).await;
}
Status::Done => {
if self.commit_procedure().await.is_err() {
return ExecResult::RetryLater;
}
self.done();
return ExecResult::Done;
}
}
ExecResult::Continue
}
Err(e) => {
logging::error!(
e;
"Failed to execute procedure {}-{}, retry: {}",
self.procedure.type_name(),
self.meta.id,
e.is_retry_later(),
);
if e.is_retry_later() {
return ExecResult::RetryLater;
}
self.meta.set_state(ProcedureState::Failed);
// Write rollback key so we can skip this procedure while recovering procedures.
if self.rollback_procedure().await.is_err() {
return ExecResult::RetryLater;
}
ExecResult::Failed(e)
}
}
}
/// Submit a subprocedure with specific `procedure_id`.
fn submit_subprocedure(&self, procedure_id: ProcedureId, mut procedure: BoxedProcedure) {
if self.manager_ctx.contains_procedure(procedure_id) {
// If the parent has already submitted this procedure, don't submit it again.
return;
}
let mut step = 0;
if let Some(loaded_procedure) = self.manager_ctx.load_one_procedure(procedure_id) {
// Try to load procedure state from the message to avoid re-running the
// subprocedure from its initial state.
assert_eq!(self.meta.id, loaded_procedure.parent_id.unwrap());
// Use the dumped procedure from the procedure store.
procedure = loaded_procedure.procedure;
// Update step number.
step = loaded_procedure.step;
}
let meta = Arc::new(ProcedureMeta::new(
procedure_id,
Some(self.meta.id),
procedure.lock_key(),
));
let runner = Runner {
meta: meta.clone(),
procedure,
manager_ctx: self.manager_ctx.clone(),
step,
store: self.store.clone(),
};
// Insert the procedure. We already checked the procedure's existence before
// inserting, so we add an assertion to ensure the procedure id is unique and
// no other procedure is using the same id.
assert!(
self.manager_ctx.try_insert_procedure(meta),
"Procedure {}-{} submit an existing procedure {}-{}",
self.procedure.type_name(),
self.meta.id,
runner.procedure.type_name(),
procedure_id,
);
// Add the id of the subprocedure to the metadata.
self.meta.push_child(procedure_id);
common_runtime::spawn_bg(async move {
// Run the subprocedure.
runner.run().await
});
}
async fn wait_on_err(&self) {
time::sleep(ERR_WAIT_DURATION).await;
}
async fn on_suspended(&self, subprocedures: Vec<ProcedureWithId>) {
let has_child = !subprocedures.is_empty();
for subprocedure in subprocedures {
logging::info!(
"Procedure {}-{} submit subprocedure {}-{}",
self.procedure.type_name(),
self.meta.id,
subprocedure.procedure.type_name(),
subprocedure.id,
);
self.submit_subprocedure(subprocedure.id, subprocedure.procedure);
}
// Wait for subprocedures.
if has_child {
logging::info!(
"Procedure {}-{} is waiting for subprocedures",
self.procedure.type_name(),
self.meta.id,
);
self.meta.child_notify.notified().await;
logging::info!(
"Procedure {}-{} is woken up",
self.procedure.type_name(),
self.meta.id,
);
}
}
async fn persist_procedure(&mut self) -> Result<()> {
self.store
.store_procedure(
self.meta.id,
self.step,
&self.procedure,
self.meta.parent_id,
)
.await
.map_err(|e| {
logging::error!(
e; "Failed to persist procedure {}-{}",
self.procedure.type_name(),
self.meta.id
);
e
})?;
self.step += 1;
Ok(())
}
async fn commit_procedure(&mut self) -> Result<()> {
self.store
.commit_procedure(self.meta.id, self.step)
.await
.map_err(|e| {
logging::error!(
e; "Failed to commit procedure {}-{}",
self.procedure.type_name(),
self.meta.id
);
e
})?;
self.step += 1;
Ok(())
}
async fn rollback_procedure(&mut self) -> Result<()> {
self.store
.rollback_procedure(self.meta.id, self.step)
.await
.map_err(|e| {
logging::error!(
e; "Failed to write rollback key for procedure {}-{}",
self.procedure.type_name(),
self.meta.id
);
e
})?;
self.step += 1;
Ok(())
}
fn done(&self) {
// TODO(yingwen): Add files to remove list.
logging::info!(
"Procedure {}-{} done",
self.procedure.type_name(),
self.meta.id,
);
// Mark the state of this procedure to done.
self.meta.set_state(ProcedureState::Done);
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use async_trait::async_trait;
use common_error::ext::PlainError;
use common_error::mock::MockError;
use common_error::prelude::StatusCode;
use futures_util::future::BoxFuture;
use futures_util::{FutureExt, TryStreamExt};
use object_store::ObjectStore;
use tempdir::TempDir;
use super::*;
use crate::local::test_util;
use crate::{ContextProvider, LockKey, Procedure};
const ROOT_ID: &str = "9f805a1f-05f7-490c-9f91-bd56e3cc54c1";
fn new_runner(
meta: ProcedureMetaRef,
procedure: BoxedProcedure,
store: ProcedureStore,
) -> Runner {
Runner {
meta,
procedure,
manager_ctx: Arc::new(ManagerContext::new()),
step: 0,
store,
}
}
async fn check_files(object_store: &ObjectStore, procedure_id: ProcedureId, files: &[&str]) {
let dir = format!("{procedure_id}/");
let object = object_store.object(&dir);
let lister = object.list().await.unwrap();
let mut files_in_dir: Vec<_> = lister
.map_ok(|de| de.name().to_string())
.try_collect()
.await
.unwrap();
files_in_dir.sort_unstable();
assert_eq!(files, files_in_dir);
}
fn context_without_provider(procedure_id: ProcedureId) -> Context {
struct MockProvider;
#[async_trait]
impl ContextProvider for MockProvider {
async fn procedure_state(
&self,
_procedure_id: ProcedureId,
) -> Result<Option<ProcedureState>> {
unimplemented!()
}
}
Context {
procedure_id,
provider: Arc::new(MockProvider),
}
}
#[derive(Debug)]
struct ProcedureAdapter<F> {
data: String,
lock_key: LockKey,
exec_fn: F,
}
impl<F> ProcedureAdapter<F> {
fn new_meta(&self, uuid: &str) -> ProcedureMetaRef {
let mut meta = test_util::procedure_meta_for_test();
meta.id = ProcedureId::parse_str(uuid).unwrap();
meta.lock_key = self.lock_key.clone();
Arc::new(meta)
}
}
#[async_trait]
impl<F> Procedure for ProcedureAdapter<F>
where
F: FnMut(Context) -> BoxFuture<'static, Result<Status>> + Send + Sync,
{
fn type_name(&self) -> &str {
"ProcedureAdapter"
}
async fn execute(&mut self, ctx: &Context) -> Result<Status> {
let f = (self.exec_fn)(ctx.clone());
f.await
}
fn dump(&self) -> Result<String> {
Ok(self.data.clone())
}
fn lock_key(&self) -> LockKey {
self.lock_key.clone()
}
}
async fn execute_once_normal(persist: bool, first_files: &[&str], second_files: &[&str]) {
let mut times = 0;
let exec_fn = move |_| {
times += 1;
async move {
if times == 1 {
Ok(Status::Executing { persist })
} else {
Ok(Status::Done)
}
}
.boxed()
};
let normal = ProcedureAdapter {
data: "normal".to_string(),
lock_key: LockKey::single("catalog.schema.table"),
exec_fn,
};
let dir = TempDir::new("normal").unwrap();
let meta = normal.new_meta(ROOT_ID);
let ctx = context_without_provider(meta.id);
let object_store = test_util::new_object_store(&dir);
let procedure_store = ProcedureStore::from(object_store.clone());
let mut runner = new_runner(meta, Box::new(normal), procedure_store);
let res = runner.execute_once(&ctx).await;
assert!(res.is_continue(), "{res:?}");
check_files(&object_store, ctx.procedure_id, first_files).await;
let res = runner.execute_once(&ctx).await;
assert!(res.is_done(), "{res:?}");
check_files(&object_store, ctx.procedure_id, second_files).await;
}
#[tokio::test]
async fn test_execute_once_normal() {
execute_once_normal(
true,
&["0000000000.step"],
&["0000000000.step", "0000000001.commit"],
)
.await;
}
#[tokio::test]
async fn test_execute_once_normal_skip_persist() {
execute_once_normal(false, &[], &["0000000000.commit"]).await;
}
#[tokio::test]
async fn test_on_suspend_empty() {
let exec_fn = move |_| {
async move {
Ok(Status::Suspended {
subprocedures: Vec::new(),
persist: false,
})
}
.boxed()
};
let suspend = ProcedureAdapter {
data: "suspend".to_string(),
lock_key: LockKey::single("catalog.schema.table"),
exec_fn,
};
let dir = TempDir::new("suspend").unwrap();
let meta = suspend.new_meta(ROOT_ID);
let ctx = context_without_provider(meta.id);
let object_store = test_util::new_object_store(&dir);
let procedure_store = ProcedureStore::from(object_store.clone());
let mut runner = new_runner(meta, Box::new(suspend), procedure_store);
let res = runner.execute_once(&ctx).await;
assert!(res.is_continue(), "{res:?}");
}
fn new_child_procedure(procedure_id: ProcedureId, keys: &[&str]) -> ProcedureWithId {
let mut times = 0;
let exec_fn = move |_| {
times += 1;
async move {
if times == 1 {
time::sleep(Duration::from_millis(200)).await;
Ok(Status::Executing { persist: true })
} else {
Ok(Status::Done)
}
}
.boxed()
};
let child = ProcedureAdapter {
data: "child".to_string(),
lock_key: LockKey::new(keys.iter().map(|k| k.to_string())),
exec_fn,
};
ProcedureWithId {
id: procedure_id,
procedure: Box::new(child),
}
}
#[tokio::test]
async fn test_on_suspend_by_subprocedures() {
let mut times = 0;
let children_ids = [ProcedureId::random(), ProcedureId::random()];
let keys = [
&[
"catalog.schema.table.region-0",
"catalog.schema.table.region-1",
],
&[
"catalog.schema.table.region-2",
"catalog.schema.table.region-3",
],
];
let exec_fn = move |ctx: Context| {
times += 1;
async move {
if times == 1 {
// Submit subprocedures.
Ok(Status::Suspended {
subprocedures: children_ids
.into_iter()
.zip(keys)
.map(|(id, key_slice)| new_child_procedure(id, key_slice))
.collect(),
persist: true,
})
} else {
// Wait for subprocedures.
let mut all_child_done = true;
for id in children_ids {
if ctx.provider.procedure_state(id).await.unwrap()
!= Some(ProcedureState::Done)
{
all_child_done = false;
}
}
if all_child_done {
Ok(Status::Done)
} else {
// Return suspended to wait for notify.
Ok(Status::Suspended {
subprocedures: Vec::new(),
persist: false,
})
}
}
}
.boxed()
};
let parent = ProcedureAdapter {
data: "parent".to_string(),
lock_key: LockKey::single("catalog.schema.table"),
exec_fn,
};
let dir = TempDir::new("parent").unwrap();
let meta = parent.new_meta(ROOT_ID);
let procedure_id = meta.id;
let object_store = test_util::new_object_store(&dir);
let procedure_store = ProcedureStore::from(object_store.clone());
let mut runner = new_runner(meta.clone(), Box::new(parent), procedure_store);
let manager_ctx = Arc::new(ManagerContext::new());
// Manually add this procedure to the manager ctx.
assert!(manager_ctx.try_insert_procedure(meta));
// Replace the manager ctx.
runner.manager_ctx = manager_ctx;
runner.run().await.unwrap();
// Check files on store.
for child_id in children_ids {
check_files(
&object_store,
child_id,
&["0000000000.step", "0000000001.commit"],
)
.await;
}
check_files(
&object_store,
procedure_id,
&["0000000000.step", "0000000001.commit"],
)
.await;
}
#[tokio::test]
async fn test_execute_on_error() {
let exec_fn =
|_| async { Err(Error::external(MockError::new(StatusCode::Unexpected))) }.boxed();
let fail = ProcedureAdapter {
data: "fail".to_string(),
lock_key: LockKey::single("catalog.schema.table"),
exec_fn,
};
let dir = TempDir::new("fail").unwrap();
let meta = fail.new_meta(ROOT_ID);
let ctx = context_without_provider(meta.id);
let object_store = test_util::new_object_store(&dir);
let procedure_store = ProcedureStore::from(object_store.clone());
let mut runner = new_runner(meta.clone(), Box::new(fail), procedure_store);
let res = runner.execute_once(&ctx).await;
assert!(res.is_failed(), "{res:?}");
assert_eq!(ProcedureState::Failed, meta.state());
check_files(&object_store, ctx.procedure_id, &["0000000000.rollback"]).await;
}
#[tokio::test]
async fn test_execute_on_retry_later_error() {
let mut times = 0;
let exec_fn = move |_| {
times += 1;
async move {
if times == 1 {
Err(Error::retry_later(MockError::new(StatusCode::Unexpected)))
} else {
Ok(Status::Done)
}
}
.boxed()
};
let retry_later = ProcedureAdapter {
data: "retry_later".to_string(),
lock_key: LockKey::single("catalog.schema.table"),
exec_fn,
};
let dir = TempDir::new("retry_later").unwrap();
let meta = retry_later.new_meta(ROOT_ID);
let ctx = context_without_provider(meta.id);
let object_store = test_util::new_object_store(&dir);
let procedure_store = ProcedureStore::from(object_store.clone());
let mut runner = new_runner(meta.clone(), Box::new(retry_later), procedure_store);
let res = runner.execute_once(&ctx).await;
assert!(res.is_retry_later(), "{res:?}");
assert_eq!(ProcedureState::Running, meta.state());
let res = runner.execute_once(&ctx).await;
assert!(res.is_done(), "{res:?}");
assert_eq!(ProcedureState::Done, meta.state());
check_files(&object_store, ctx.procedure_id, &["0000000000.commit"]).await;
}
#[tokio::test]
async fn test_child_error() {
let mut times = 0;
let child_id = ProcedureId::random();
let exec_fn = move |ctx: Context| {
times += 1;
async move {
if times == 1 {
// Submit subprocedures.
let exec_fn = |_| {
async { Err(Error::external(MockError::new(StatusCode::Unexpected))) }
.boxed()
};
let fail = ProcedureAdapter {
data: "fail".to_string(),
lock_key: LockKey::single("catalog.schema.table.region-0"),
exec_fn,
};
Ok(Status::Suspended {
subprocedures: vec![ProcedureWithId {
id: child_id,
procedure: Box::new(fail),
}],
persist: true,
})
} else {
// Wait for subprocedures.
let state = ctx.provider.procedure_state(child_id).await.unwrap();
if state == Some(ProcedureState::Failed) {
// The parent procedure aborts itself if the child procedure fails.
Err(Error::from_error_ext(PlainError::new(
"subprocedure failed".to_string(),
StatusCode::Unexpected,
)))
} else {
// Return suspended to wait for notify.
Ok(Status::Suspended {
subprocedures: Vec::new(),
persist: false,
})
}
}
}
.boxed()
};
let parent = ProcedureAdapter {
data: "parent".to_string(),
lock_key: LockKey::single("catalog.schema.table"),
exec_fn,
};
let dir = TempDir::new("child_err").unwrap();
let meta = parent.new_meta(ROOT_ID);
let object_store = test_util::new_object_store(&dir);
let procedure_store = ProcedureStore::from(object_store.clone());
let mut runner = new_runner(meta.clone(), Box::new(parent), procedure_store);
let manager_ctx = Arc::new(ManagerContext::new());
// Manually add this procedure to the manager ctx.
assert!(manager_ctx.try_insert_procedure(meta));
// Replace the manager ctx.
runner.manager_ctx = manager_ctx;
// Run the runner and execute the procedure.
let err = runner.run().await.unwrap_err();
assert!(err.to_string().contains("subprocedure failed"), "{err}");
}
}
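The retry path exercised by test_execute_on_retry_later_error is driven purely by the error's retry flag: execute_once maps a retry-later error to ExecResult::RetryLater, and the loop sleeps for ERR_WAIT_DURATION before the next attempt. A hedged sketch of a procedure that opts into this behavior; the struct and its lock key are invented for illustration:

// Assumes the Procedure trait and error constructors shown above.
#[derive(Debug)]
struct FlakyProcedure {
    attempts: u32,
}

#[async_trait]
impl Procedure for FlakyProcedure {
    fn type_name(&self) -> &str {
        "FlakyProcedure"
    }

    async fn execute(&mut self, _ctx: &Context) -> Result<Status> {
        self.attempts += 1;
        if self.attempts < 3 {
            // Transient failure: the runner will retry instead of failing.
            return Err(Error::retry_later(MockError::new(StatusCode::Unexpected)));
        }
        Ok(Status::Done)
    }

    fn dump(&self) -> Result<String> {
        Ok(self.attempts.to_string())
    }

    fn lock_key(&self) -> LockKey {
        LockKey::single("catalog.schema.flaky")
    }
}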


@@ -0,0 +1,314 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use std::str::FromStr;
use std::sync::Arc;
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use smallvec::{smallvec, SmallVec};
use snafu::{ResultExt, Snafu};
use tokio::sync::watch::Receiver;
use uuid::Uuid;
use crate::error::Result;
/// Procedure execution status.
#[derive(Debug)]
pub enum Status {
/// The procedure is still executing.
Executing {
/// Whether the framework needs to persist the procedure.
persist: bool,
},
/// The procedure has suspended itself and is waiting for subprocedures.
Suspended {
subprocedures: Vec<ProcedureWithId>,
/// Whether the framework needs to persist the procedure.
persist: bool,
},
/// The procedure is done.
Done,
}
impl Status {
/// Returns a [Status::Executing] with given `persist` flag.
pub fn executing(persist: bool) -> Status {
Status::Executing { persist }
}
/// Returns `true` if the procedure needs the framework to persist its intermediate state.
pub fn need_persist(&self) -> bool {
// If the procedure is done, the framework doesn't need to persist the procedure
// anymore. It only needs to mark the procedure as committed.
match self {
Status::Executing { persist } | Status::Suspended { persist, .. } => *persist,
Status::Done => false,
}
}
}
/// [ContextProvider] provides information about procedures in the [ProcedureManager].
#[async_trait]
pub trait ContextProvider: Send + Sync {
/// Query the procedure state.
async fn procedure_state(&self, procedure_id: ProcedureId) -> Result<Option<ProcedureState>>;
}
/// Reference-counted pointer to [ContextProvider].
pub type ContextProviderRef = Arc<dyn ContextProvider>;
/// Procedure execution context.
#[derive(Clone)]
pub struct Context {
/// Id of the procedure.
pub procedure_id: ProcedureId,
/// [ProcedureManager] context provider.
pub provider: ContextProviderRef,
}
/// A `Procedure` represents an operation or a set of operations to be performed step-by-step.
#[async_trait]
pub trait Procedure: Send + Sync {
/// Type name of the procedure.
fn type_name(&self) -> &str;
/// Execute the procedure.
///
/// The implementation must be idempotent.
async fn execute(&mut self, ctx: &Context) -> Result<Status>;
/// Dump the state of the procedure to a string.
fn dump(&self) -> Result<String>;
/// Returns the [LockKey] that this procedure needs to acquire.
fn lock_key(&self) -> LockKey;
}
/// Keys to identify required locks.
///
/// [LockKey] always sorts keys lexicographically so that procedures acquire
/// the same set of locks in the same order, which avoids deadlocks.
// Most procedures should only acquire one or two locks, so we use a smallvec to hold keys.
#[derive(Clone, Debug, Default, PartialEq, Eq)]
pub struct LockKey(SmallVec<[String; 2]>);
impl LockKey {
/// Returns a new [LockKey] with only one key.
pub fn single(key: impl Into<String>) -> LockKey {
LockKey(smallvec![key.into()])
}
/// Returns a new [LockKey] with keys from specific `iter`.
pub fn new(iter: impl IntoIterator<Item = String>) -> LockKey {
let mut vec: SmallVec<_> = iter.into_iter().collect();
vec.sort();
// Dedup keys to avoid acquiring the same key multiple times.
vec.dedup();
LockKey(vec)
}
/// Returns the keys to lock.
pub fn keys_to_lock(&self) -> impl Iterator<Item = &String> {
self.0.iter()
}
/// Returns the keys to unlock.
pub fn keys_to_unlock(&self) -> impl Iterator<Item = &String> {
self.0.iter().rev()
}
}
/// Boxed [Procedure].
pub type BoxedProcedure = Box<dyn Procedure>;
/// A procedure with specific id.
pub struct ProcedureWithId {
/// Id of the procedure.
pub id: ProcedureId,
pub procedure: BoxedProcedure,
}
impl ProcedureWithId {
/// Returns a new [ProcedureWithId] that holds specific `procedure`
/// and a random [ProcedureId].
pub fn with_random_id(procedure: BoxedProcedure) -> ProcedureWithId {
ProcedureWithId {
id: ProcedureId::random(),
procedure,
}
}
}
impl fmt::Debug for ProcedureWithId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}-{}", self.procedure.type_name(), self.id)
}
}
#[derive(Debug, Snafu)]
pub struct ParseIdError {
source: uuid::Error,
}
/// Unique id for [Procedure].
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct ProcedureId(Uuid);
impl ProcedureId {
/// Returns a new unique [ProcedureId] randomly.
pub fn random() -> ProcedureId {
ProcedureId(Uuid::new_v4())
}
/// Parses id from string.
pub fn parse_str(input: &str) -> std::result::Result<ProcedureId, ParseIdError> {
Uuid::parse_str(input)
.map(ProcedureId)
.context(ParseIdSnafu)
}
}
impl fmt::Display for ProcedureId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.0)
}
}
impl FromStr for ProcedureId {
type Err = ParseIdError;
fn from_str(s: &str) -> std::result::Result<ProcedureId, ParseIdError> {
ProcedureId::parse_str(s)
}
}
/// Loader to recover the [Procedure] instance from serialized data.
pub type BoxedProcedureLoader = Box<dyn Fn(&str) -> Result<BoxedProcedure> + Send>;
// TODO(yingwen): Find a way to return the error message if the procedure fails.
/// State of a submitted procedure.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ProcedureState {
/// The procedure is running.
Running,
/// The procedure is finished.
Done,
/// The procedure is failed and cannot proceed anymore.
Failed,
}
/// Watcher to watch procedure state.
pub type Watcher = Receiver<ProcedureState>;
// TODO(yingwen): Shutdown
/// `ProcedureManager` executes [Procedure] submitted to it.
#[async_trait]
pub trait ProcedureManager: Send + Sync + 'static {
/// Registers loader for specific procedure type `name`.
fn register_loader(&self, name: &str, loader: BoxedProcedureLoader) -> Result<()>;
/// Submits a procedure to execute.
///
/// Returns a [Watcher] to watch the created procedure.
async fn submit(&self, procedure: ProcedureWithId) -> Result<Watcher>;
/// Recovers unfinished procedures and reruns them.
///
/// Callers should ensure all loaders are registered.
async fn recover(&self) -> Result<()>;
/// Query the procedure state.
///
/// Returns `Ok(None)` if the procedure doesn't exist.
async fn procedure_state(&self, procedure_id: ProcedureId) -> Result<Option<ProcedureState>>;
/// Returns a [Watcher] to watch [ProcedureState] of specific procedure.
fn procedure_watcher(&self, procedure_id: ProcedureId) -> Option<Watcher>;
}
/// Ref-counted pointer to the [ProcedureManager].
pub type ProcedureManagerRef = Arc<dyn ProcedureManager>;
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_status() {
let status = Status::Executing { persist: false };
assert!(!status.need_persist());
let status = Status::Executing { persist: true };
assert!(status.need_persist());
let status = Status::Suspended {
subprocedures: Vec::new(),
persist: false,
};
assert!(!status.need_persist());
let status = Status::Suspended {
subprocedures: Vec::new(),
persist: true,
};
assert!(status.need_persist());
let status = Status::Done;
assert!(!status.need_persist());
}
#[test]
fn test_lock_key() {
let entity = "catalog.schema.my_table";
let key = LockKey::single(entity);
assert_eq!(vec![entity], key.keys_to_lock().collect::<Vec<_>>());
assert_eq!(vec![entity], key.keys_to_unlock().collect::<Vec<_>>());
let key = LockKey::new([
"b".to_string(),
"c".to_string(),
"a".to_string(),
"c".to_string(),
]);
assert_eq!(vec!["a", "b", "c"], key.keys_to_lock().collect::<Vec<_>>());
assert_eq!(
vec!["c", "b", "a"],
key.keys_to_unlock().collect::<Vec<_>>()
);
}
#[test]
fn test_procedure_id() {
let id = ProcedureId::random();
let uuid_str = id.to_string();
assert_eq!(id.0.to_string(), uuid_str);
let parsed = ProcedureId::parse_str(&uuid_str).unwrap();
assert_eq!(id, parsed);
let parsed = uuid_str.parse().unwrap();
assert_eq!(id, parsed);
}
#[test]
fn test_procedure_id_serialization() {
let id = ProcedureId::random();
let json = serde_json::to_string(&id).unwrap();
assert_eq!(format!("\"{id}\""), json);
let parsed = serde_json::from_str(&json).unwrap();
assert_eq!(id, parsed);
}
}
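Status::Suspended and ContextProvider combine into the parent/child pattern the runner expects: suspend once to submit children, then poll their state on each wake-up. A condensed sketch of a parent's execute() body, not from the repository; the function and its parameters are invented for illustration:

// A hedged sketch: `child` is whatever boxed procedure the parent wants to
// run, taken as a parameter here to keep the sketch self-contained.
async fn parent_execute(
    ctx: &Context,
    child_id: ProcedureId,
    child: &mut Option<BoxedProcedure>,
) -> Result<Status> {
    if let Some(procedure) = child.take() {
        // First wake-up: hand the child to the framework and persist.
        return Ok(Status::Suspended {
            subprocedures: vec![ProcedureWithId {
                id: child_id,
                procedure,
            }],
            persist: true,
        });
    }
    match ctx.provider.procedure_state(child_id).await? {
        Some(ProcedureState::Done) => Ok(Status::Done),
        // Still running: suspend again and wait for the child's notification.
        // A real procedure would also handle ProcedureState::Failed.
        _ => Ok(Status::Suspended {
            subprocedures: Vec::new(),
            persist: false,
        }),
    }
}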


@@ -0,0 +1,488 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::fmt;
use std::sync::Arc;
use common_telemetry::logging;
use futures::TryStreamExt;
use object_store::ObjectStore;
use serde::{Deserialize, Serialize};
use snafu::ResultExt;
use crate::error::{Result, ToJsonSnafu};
pub(crate) use crate::store::state_store::{ObjectStateStore, StateStoreRef};
use crate::{BoxedProcedure, ProcedureId};
mod state_store;
/// Serialized data of a procedure.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ProcedureMessage {
/// Type name of the procedure. The procedure framework also uses the type name to
/// find a loader to load the procedure.
pub type_name: String,
/// The data of the procedure.
pub data: String,
/// Parent procedure id.
pub parent_id: Option<ProcedureId>,
/// Current step.
pub step: u32,
}
/// Procedure storage layer.
#[derive(Clone)]
pub(crate) struct ProcedureStore(StateStoreRef);
impl ProcedureStore {
/// Creates a new [ProcedureStore] from specific [StateStoreRef].
pub(crate) fn new(state_store: StateStoreRef) -> ProcedureStore {
ProcedureStore(state_store)
}
/// Dump the `procedure` to the storage.
pub(crate) async fn store_procedure(
&self,
procedure_id: ProcedureId,
step: u32,
procedure: &BoxedProcedure,
parent_id: Option<ProcedureId>,
) -> Result<()> {
let type_name = procedure.type_name();
let data = procedure.dump()?;
let message = ProcedureMessage {
type_name: type_name.to_string(),
data,
parent_id,
step,
};
let key = ParsedKey {
procedure_id,
step,
key_type: KeyType::Step,
}
.to_string();
let value = serde_json::to_string(&message).context(ToJsonSnafu)?;
self.0.put(&key, value.into_bytes()).await?;
Ok(())
}
/// Write commit flag to the storage.
pub(crate) async fn commit_procedure(
&self,
procedure_id: ProcedureId,
step: u32,
) -> Result<()> {
let key = ParsedKey {
procedure_id,
step,
key_type: KeyType::Commit,
}
.to_string();
self.0.put(&key, Vec::new()).await?;
Ok(())
}
/// Write rollback flag to the storage.
pub(crate) async fn rollback_procedure(
&self,
procedure_id: ProcedureId,
step: u32,
) -> Result<()> {
let key = ParsedKey {
procedure_id,
step,
key_type: KeyType::Rollback,
}
.to_string();
self.0.put(&key, Vec::new()).await?;
Ok(())
}
/// Load uncommitted procedures from the storage.
pub(crate) async fn load_messages(&self) -> Result<HashMap<ProcedureId, ProcedureMessage>> {
let mut messages = HashMap::new();
// Track the key-value pair with the largest step for each procedure id.
let mut procedure_key_values: HashMap<_, (ParsedKey, Vec<u8>)> = HashMap::new();
// Scan all procedures.
let mut key_values = self.0.walk_top_down("/").await?;
while let Some((key, value)) = key_values.try_next().await? {
let Some(curr_key) = ParsedKey::parse_str(&key) else {
logging::warn!("Unknown key while loading procedures, key: {}", key);
continue;
};
if let Some(entry) = procedure_key_values.get_mut(&curr_key.procedure_id) {
if entry.0.step < curr_key.step {
entry.0 = curr_key;
entry.1 = value;
}
} else {
procedure_key_values.insert(curr_key.procedure_id, (curr_key, value));
}
}
for (procedure_id, (parsed_key, value)) in procedure_key_values {
if parsed_key.key_type == KeyType::Step {
let Some(message) = self.load_one_message(&parsed_key, &value) else {
// We don't abort the loading process; we just ignore errors so that all
// remaining procedures are loaded.
continue;
};
messages.insert(procedure_id, message);
}
}
Ok(messages)
}
fn load_one_message(&self, key: &ParsedKey, value: &[u8]) -> Option<ProcedureMessage> {
serde_json::from_slice(value)
.map_err(|e| {
// `e` doesn't impl ErrorExt so we print it as normal error.
logging::error!("Failed to parse value, key: {:?}, source: {}", key, e);
e
})
.ok()
}
}
impl From<ObjectStore> for ProcedureStore {
fn from(store: ObjectStore) -> ProcedureStore {
let state_store = ObjectStateStore::new(store);
ProcedureStore::new(Arc::new(state_store))
}
}
/// Suffix type of the key.
#[derive(Debug, PartialEq, Eq)]
enum KeyType {
Step,
Commit,
Rollback,
}
impl KeyType {
fn as_str(&self) -> &'static str {
match self {
KeyType::Step => "step",
KeyType::Commit => "commit",
KeyType::Rollback => "rollback",
}
}
fn from_str(s: &str) -> Option<KeyType> {
match s {
"step" => Some(KeyType::Step),
"commit" => Some(KeyType::Commit),
"rollback" => Some(KeyType::Rollback),
_ => None,
}
}
}
/// Key referring to a procedure in the [ProcedureStore].
#[derive(Debug, PartialEq, Eq)]
struct ParsedKey {
procedure_id: ProcedureId,
step: u32,
key_type: KeyType,
}
impl fmt::Display for ParsedKey {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"{}/{:010}.{}",
self.procedure_id,
self.step,
self.key_type.as_str(),
)
}
}
impl ParsedKey {
/// Try to parse the key from specific `input`.
fn parse_str(input: &str) -> Option<ParsedKey> {
let mut iter = input.rsplit('/');
let name = iter.next()?;
let id_str = iter.next()?;
let procedure_id = ProcedureId::parse_str(id_str).ok()?;
let mut parts = name.split('.');
let step_str = parts.next()?;
let suffix = parts.next()?;
let key_type = KeyType::from_str(suffix)?;
let step = step_str.parse().ok()?;
Some(ParsedKey {
procedure_id,
step,
key_type,
})
}
}
#[cfg(test)]
mod tests {
use async_trait::async_trait;
use object_store::services::Fs as Builder;
use object_store::ObjectStoreBuilder;
use tempdir::TempDir;
use super::*;
use crate::{Context, LockKey, Procedure, Status};
fn procedure_store_for_test(dir: &TempDir) -> ProcedureStore {
let store_dir = dir.path().to_str().unwrap();
let accessor = Builder::default().root(store_dir).build().unwrap();
let object_store = ObjectStore::new(accessor).finish();
ProcedureStore::from(object_store)
}
#[test]
fn test_parsed_key() {
let procedure_id = ProcedureId::random();
let key = ParsedKey {
procedure_id,
step: 2,
key_type: KeyType::Step,
};
assert_eq!(format!("{procedure_id}/0000000002.step"), key.to_string());
assert_eq!(key, ParsedKey::parse_str(&key.to_string()).unwrap());
let key = ParsedKey {
procedure_id,
step: 2,
key_type: KeyType::Commit,
};
assert_eq!(format!("{procedure_id}/0000000002.commit"), key.to_string());
assert_eq!(key, ParsedKey::parse_str(&key.to_string()).unwrap());
let key = ParsedKey {
procedure_id,
step: 2,
key_type: KeyType::Rollback,
};
assert_eq!(
format!("{procedure_id}/0000000002.rollback"),
key.to_string()
);
assert_eq!(key, ParsedKey::parse_str(&key.to_string()).unwrap());
}
#[test]
fn test_parse_invalid_key() {
assert!(ParsedKey::parse_str("").is_none());
let procedure_id = ProcedureId::random();
let input = format!("{procedure_id}");
assert!(ParsedKey::parse_str(&input).is_none());
let input = format!("{procedure_id}/");
assert!(ParsedKey::parse_str(&input).is_none());
let input = format!("{procedure_id}/0000000003");
assert!(ParsedKey::parse_str(&input).is_none());
let input = format!("{procedure_id}/0000000003.");
assert!(ParsedKey::parse_str(&input).is_none());
let input = format!("{procedure_id}/0000000003.other");
assert!(ParsedKey::parse_str(&input).is_none());
assert!(ParsedKey::parse_str("12345/0000000003.step").is_none());
let input = format!("{procedure_id}-0000000003.commit");
assert!(ParsedKey::parse_str(&input).is_none());
}
#[test]
fn test_procedure_message() {
let mut message = ProcedureMessage {
type_name: "TestMessage".to_string(),
data: "no parent id".to_string(),
parent_id: None,
step: 4,
};
let json = serde_json::to_string(&message).unwrap();
assert_eq!(
json,
r#"{"type_name":"TestMessage","data":"no parent id","parent_id":null,"step":4}"#
);
let procedure_id = ProcedureId::parse_str("9f805a1f-05f7-490c-9f91-bd56e3cc54c1").unwrap();
message.parent_id = Some(procedure_id);
let json = serde_json::to_string(&message).unwrap();
assert_eq!(
json,
r#"{"type_name":"TestMessage","data":"no parent id","parent_id":"9f805a1f-05f7-490c-9f91-bd56e3cc54c1","step":4}"#
);
}
struct MockProcedure {
data: String,
}
impl MockProcedure {
fn new(data: impl Into<String>) -> MockProcedure {
MockProcedure { data: data.into() }
}
}
#[async_trait]
impl Procedure for MockProcedure {
fn type_name(&self) -> &str {
"MockProcedure"
}
async fn execute(&mut self, _ctx: &Context) -> Result<Status> {
unimplemented!()
}
fn dump(&self) -> Result<String> {
Ok(self.data.clone())
}
fn lock_key(&self) -> LockKey {
LockKey::default()
}
}
#[tokio::test]
async fn test_store_procedure() {
let dir = TempDir::new("store_procedure").unwrap();
let store = procedure_store_for_test(&dir);
let procedure_id = ProcedureId::random();
let procedure: BoxedProcedure = Box::new(MockProcedure::new("test store procedure"));
store
.store_procedure(procedure_id, 0, &procedure, None)
.await
.unwrap();
let messages = store.load_messages().await.unwrap();
assert_eq!(1, messages.len());
let msg = messages.get(&procedure_id).unwrap();
let expect = ProcedureMessage {
type_name: "MockProcedure".to_string(),
data: "test store procedure".to_string(),
parent_id: None,
step: 0,
};
assert_eq!(expect, *msg);
}
#[tokio::test]
async fn test_commit_procedure() {
let dir = TempDir::new("commit_procedure").unwrap();
let store = procedure_store_for_test(&dir);
let procedure_id = ProcedureId::random();
let procedure: BoxedProcedure = Box::new(MockProcedure::new("test store procedure"));
store
.store_procedure(procedure_id, 0, &procedure, None)
.await
.unwrap();
store.commit_procedure(procedure_id, 1).await.unwrap();
let messages = store.load_messages().await.unwrap();
assert!(messages.is_empty());
}
#[tokio::test]
async fn test_rollback_procedure() {
let dir = TempDir::new("rollback_procedure").unwrap();
let store = procedure_store_for_test(&dir);
let procedure_id = ProcedureId::random();
let procedure: BoxedProcedure = Box::new(MockProcedure::new("test store procedure"));
store
.store_procedure(procedure_id, 0, &procedure, None)
.await
.unwrap();
store.rollback_procedure(procedure_id, 1).await.unwrap();
let messages = store.load_messages().await.unwrap();
assert!(messages.is_empty());
}
#[tokio::test]
async fn test_load_messages() {
let dir = TempDir::new("load_messages").unwrap();
let store = procedure_store_for_test(&dir);
// store 3 steps
let id0 = ProcedureId::random();
let procedure: BoxedProcedure = Box::new(MockProcedure::new("id0-0"));
store
.store_procedure(id0, 0, &procedure, None)
.await
.unwrap();
let procedure: BoxedProcedure = Box::new(MockProcedure::new("id0-1"));
store
.store_procedure(id0, 1, &procedure, None)
.await
.unwrap();
let procedure: BoxedProcedure = Box::new(MockProcedure::new("id0-2"));
store
.store_procedure(id0, 2, &procedure, None)
.await
.unwrap();
// store 2 steps and then commit
let id1 = ProcedureId::random();
let procedure: BoxedProcedure = Box::new(MockProcedure::new("id1-0"));
store
.store_procedure(id1, 0, &procedure, None)
.await
.unwrap();
let procedure: BoxedProcedure = Box::new(MockProcedure::new("id1-1"));
store
.store_procedure(id1, 1, &procedure, None)
.await
.unwrap();
store.commit_procedure(id1, 2).await.unwrap();
// store 1 step
let id2 = ProcedureId::random();
let procedure: BoxedProcedure = Box::new(MockProcedure::new("id2-0"));
store
.store_procedure(id2, 0, &procedure, None)
.await
.unwrap();
let messages = store.load_messages().await.unwrap();
assert_eq!(2, messages.len());
let msg = messages.get(&id0).unwrap();
assert_eq!("id0-2", msg.data);
let msg = messages.get(&id2).unwrap();
assert_eq!("id2-0", msg.data);
}
}
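For reference, the key layout produced above is "{procedure_id}/{step:010}.{suffix}", where the suffix is one of step, commit, or rollback, and the zero-padded step keeps keys ordered. A tiny round-trip sketch, not from the repository:

fn key_round_trip() {
    let procedure_id = ProcedureId::random();
    let key = ParsedKey {
        procedure_id,
        step: 7,
        key_type: KeyType::Commit,
    };
    // Renders as "<uuid>/0000000007.commit" and parses back losslessly.
    let rendered = key.to_string();
    assert_eq!(Some(key), ParsedKey::parse_str(&rendered));
}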


@@ -0,0 +1,192 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::pin::Pin;
use std::sync::Arc;
use async_trait::async_trait;
use futures::{Stream, TryStreamExt};
use object_store::{ObjectMode, ObjectStore};
use snafu::ResultExt;
use crate::error::{DeleteStateSnafu, Error, PutStateSnafu, Result};
/// Key-value pair from the state store.
type KeyValue = (String, Vec<u8>);
/// Stream that yields [KeyValue].
type KeyValueStream = Pin<Box<dyn Stream<Item = Result<KeyValue>> + Send>>;
/// Storage layer for persisting procedures' state.
#[async_trait]
pub(crate) trait StateStore: Send + Sync {
/// Puts `key` and `value` into the store.
async fn put(&self, key: &str, value: Vec<u8>) -> Result<()>;
/// Returns the key-value pairs under `path` in a top-down way.
///
/// # Note
/// - There is no guarantee about the order of the keys in the stream.
/// - The `path` must end with `/`.
async fn walk_top_down(&self, path: &str) -> Result<KeyValueStream>;
/// Deletes key-value pairs by `keys`.
async fn delete(&self, keys: &[String]) -> Result<()>;
}
/// Reference counted pointer to [StateStore].
pub(crate) type StateStoreRef = Arc<dyn StateStore>;
/// [StateStore] based on [ObjectStore].
#[derive(Debug)]
pub(crate) struct ObjectStateStore {
store: ObjectStore,
}
impl ObjectStateStore {
/// Returns a new [ObjectStateStore] with specific `store`.
pub(crate) fn new(store: ObjectStore) -> ObjectStateStore {
ObjectStateStore { store }
}
}
#[async_trait]
impl StateStore for ObjectStateStore {
async fn put(&self, key: &str, value: Vec<u8>) -> Result<()> {
let object = self.store.object(key);
object.write(value).await.context(PutStateSnafu { key })
}
async fn walk_top_down(&self, path: &str) -> Result<KeyValueStream> {
let path_string = path.to_string();
let lister = self
.store
.object(path)
.scan()
.await
.map_err(|e| Error::ListState {
path: path_string.clone(),
source: e,
})?;
let stream = lister
.try_filter_map(|entry| async move {
let key = entry.path();
let key_value = match entry.mode().await? {
ObjectMode::FILE => {
let value = entry.read().await?;
Some((key.to_string(), value))
}
ObjectMode::DIR | ObjectMode::Unknown => None,
};
Ok(key_value)
})
.map_err(move |e| Error::ListState {
path: path_string.clone(),
source: e,
});
Ok(Box::pin(stream))
}
async fn delete(&self, keys: &[String]) -> Result<()> {
for key in keys {
let object = self.store.object(key);
object.delete().await.context(DeleteStateSnafu { key })?;
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use object_store::services::Fs as Builder;
use object_store::ObjectStoreBuilder;
use tempdir::TempDir;
use super::*;
#[tokio::test]
async fn test_object_state_store() {
let dir = TempDir::new("state_store").unwrap();
let store_dir = dir.path().to_str().unwrap();
let accessor = Builder::default().root(store_dir).build().unwrap();
let object_store = ObjectStore::new(accessor).finish();
let state_store = ObjectStateStore::new(object_store);
let data: Vec<_> = state_store
.walk_top_down("/")
.await
.unwrap()
.try_collect()
.await
.unwrap();
assert!(data.is_empty());
state_store.put("a/1", b"v1".to_vec()).await.unwrap();
state_store.put("a/2", b"v2".to_vec()).await.unwrap();
state_store.put("b/1", b"v3".to_vec()).await.unwrap();
let mut data: Vec<_> = state_store
.walk_top_down("/")
.await
.unwrap()
.try_collect()
.await
.unwrap();
data.sort_unstable_by(|a, b| a.0.cmp(&b.0));
assert_eq!(
vec![
("a/1".to_string(), b"v1".to_vec()),
("a/2".to_string(), b"v2".to_vec()),
("b/1".to_string(), b"v3".to_vec())
],
data
);
let mut data: Vec<_> = state_store
.walk_top_down("a/")
.await
.unwrap()
.try_collect()
.await
.unwrap();
data.sort_unstable_by(|a, b| a.0.cmp(&b.0));
assert_eq!(
vec![
("a/1".to_string(), b"v1".to_vec()),
("a/2".to_string(), b"v2".to_vec()),
],
data
);
state_store
.delete(&["a/2".to_string(), "b/1".to_string()])
.await
.unwrap();
let mut data: Vec<_> = state_store
.walk_top_down("a/")
.await
.unwrap()
.try_collect()
.await
.unwrap();
data.sort_unstable_by(|a, b| a.0.cmp(&b.0));
assert_eq!(vec![("a/1".to_string(), b"v1".to_vec()),], data);
}
}
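Because walk_top_down makes no ordering guarantee, callers that need deterministic output must collect and sort, exactly as the test above does. A minimal helper sketch, not from the repository:

use futures::TryStreamExt;

// Collect every (key, value) pair under `path` into a lexicographically
// sorted Vec.
async fn collect_sorted(
    state_store: &dyn StateStore,
    path: &str,
) -> Result<Vec<(String, Vec<u8>)>> {
    let mut pairs: Vec<_> = state_store.walk_top_down(path).await?.try_collect().await?;
    pairs.sort_unstable_by(|a, b| a.0.cmp(&b.0));
    Ok(pairs)
}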


@@ -14,8 +14,8 @@ datafusion-common.workspace = true
datafusion-expr.workspace = true
datatypes = { path = "../../datatypes" }
snafu.workspace = true
statrs = "0.15"
statrs = "0.16"
[dev-dependencies]
common-base = { path = "../base" }
-tokio = { version = "1.0", features = ["full"] }
+tokio.workspace = true
