Compare commits


73 Commits

Author SHA1 Message Date
LFC
a3e47955b8 feat: information schema (#1327)
* feat: basic information schema

* show information schema only for current catalog

* fix: fragile tests
2023-04-07 16:50:14 +08:00
zyy17
554a69ea54 refactor: add disable_dashboard option and disable dashboard in metasrv and datanode (#1343)
* refactor: add disable_dashboard option and disable dashboard in metasrv and datanode

* refactor: skip disable_dashboard field in toml file

* refactor: simplify the http initialization
2023-04-07 16:45:25 +08:00
LFC
f8b6a6b219 fix!: disallow creating a column whose name is a keyword unless quoted (#1333)
* fix: disallow creating a column whose name is a keyword unless quoted

* fix: tests

* Update src/sql/src/parsers/create_parser.rs

Co-authored-by: Ning Sun <classicning@gmail.com>

* fix: tests

---------

Co-authored-by: Ning Sun <classicning@gmail.com>
2023-04-06 15:34:26 +08:00
dennis zhuang
dce0adfc7e chore: readme (#1318) 2023-04-06 13:20:08 +08:00
Ruihang Xia
da66138e80 refactor(error): remove backtrace, and introduce call-site location for debugging (#1329)
* wip: global replace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unneeded tests of errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix ErrorExt trait implementor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix pyo3 tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-06 04:06:00 +00:00
Lei, HUANG
d10de46e03 feat: support timestamp precision on creating table (#1332)
* feat: support timestamp precision on creating table

* fix sqlness

* fix: substrait representation of different timestamp precision
2023-04-06 11:18:20 +08:00
Eugene Tolbakov
59f7630000 feat: initial changes for compaction_time_window field support (#1083)
* feat(compaction_time_window): initial changes for compaction_time_window field support

* feat(compaction_time_window): move PickerContext creation

* feat(compaction_time_window): update region descriptor, fix formatting

* feat(compaction_time_window): add minor enhancements

* feat(compaction_time_window): fix failing test

* feat(compaction_time_window): return an error instead of silently skipping the user-provided compaction_time_window

* feat(compaction_time_window): add TODO reminder
2023-04-06 10:32:41 +08:00
Hao
a6932c6a08 feat: implement deriv function (#1324)
* feat: implement deriv function

* docs: add docs for linear regression

* test: add test for deriv
2023-04-05 13:42:07 +08:00
Ruihang Xia
10593a5adb fix: update sqlness result (#1328)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-04 22:47:55 +08:00
dennis zhuang
bf8c717022 feat: try to do manifest checkpoint on opening region (#1321) 2023-04-04 21:36:54 +08:00
localhost
aa9f6c344c chore: minor fix about metrics component (#1322)
* typo: fix StartMetricsExport error message error

* bug: add metrics http handler for frontend node
2023-04-04 19:31:06 +08:00
Ruihang Xia
99353c6ce7 refactor: rename "value" semantic type to "field" (#1326)
* global replace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change desc table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-04 11:14:28 +00:00
Ruihang Xia
a2d8804129 feat: impl __field__ special matcher to project value columns (#1320)
* plan new come functions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement __value__ matcher

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change __value__ to __field__

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add bad-case tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename variables

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-04 09:08:50 +00:00
Weny Xu
637a4a2a58 docs: file external table RFC (#1274) 2023-04-04 10:41:17 +08:00
Weny Xu
ef134479ef feat: support multi table engines in distributed mode (#1316)
* chore: bump greptime-proto to 59afacd

* feat: support multi table engines in distributed mode
2023-04-04 10:27:08 +08:00
Weny Xu
451f9d2d4e feat: support multi table engines (#1277)
* feat: support multi table engines

* refactor: adapt SqlHandler to support multiple table engines

* refactor: refactor TableEngineManager

* chore: apply review suggestions

* chore: apply review suggestions

* chore: apply review suggestions

* chore: snafu context styling
2023-04-03 14:49:12 +00:00
dennis zhuang
68d3247791 chore: tweak logs (#1314)
* chore: tweak logs

* chore: cr comments
2023-04-03 21:08:16 +08:00
Eugene Tolbakov
2458b4edd5 feat(changes): add initial implementation (#1304)
* feat(changes): add initial implementation

* feat(changes): add docs
2023-04-03 12:02:13 +08:00
Eugene Tolbakov
5848f27c27 feat(resets): add initial implementation (#1306) 2023-04-03 11:37:01 +08:00
LFC
215cea151f refactor: move PromQL execution to Frontend (#1297)
* refactor: move PromQL execution to Frontend
2023-04-03 11:34:03 +08:00
Hao
a82f1f564d feat: implement stdvar_over_time function (#1291)
* feat: implement stdvar_over_time function

* feat: add more test for stdvar_over_time

* feat: add stdvar_over_time to functions.rs
2023-04-03 10:01:25 +08:00
LFC
48c2841e4d feat: execute python script in distributed mode (#1264)
* feat: execute python script in distributed mode

* fix: rebase develop
2023-04-02 20:36:48 +08:00
Lei, HUANG
d2542552d3 fix: unit test fails when trying to copy table to s3 and back (#1302)
fix: unit test fails when trying to copy table to s3 and back to greptimedb
2023-04-02 16:43:44 +08:00
Ruihang Xia
c0132e6cc0 feat: impl quantile_over_time function (#1287)
* fix qualifier alias

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix in another way

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl quantile_over_time

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-02 16:20:32 +08:00
dennis zhuang
aea932b891 fix: checkpoint fails when deleting old logs fails (#1300) 2023-04-02 11:06:36 +08:00
Lei, HUANG
0253136333 feat: buffered parquet writer (#1263)
* wip: use

* rebase develop

* chore: fix typos

* feat: replace export parquet writer with buffered writer

* fix: some cr comments

* feat: add sst_write_buffer_size config item to config how many bytes to buffer before flush to underlying storage

* chore: rebase onto develop
2023-04-01 17:21:19 +08:00
Eugene Tolbakov
6a05f617a4 feat(stddev_over_time): add initial implementation (#1289)
* feat(stddev_over_time): add initial implementation

* feat(stddev_over_time): address code review remarks, add compensated summation

* feat(stddev_over_time): fix fmt issues

* feat(stddev_over_time): add docs, minor renamings
2023-04-01 17:16:51 +08:00
localhost
a2b262ebc0 chore: add http metrics server in datanode when greptime starts in distributed mode (#1256)
* chore: add http metrics server in datanode when greptime starts in distributed mode

* chore: add some docs and license

* chore: change metrics_addr to resolve address already in use error

* chore: add metrics for meta service

* chore: replace metrics exporter http server from hyper to axum

* chore: format

* fix: datanode mode branching error

* fix: sqlness test "address already in use" and start metrics in default config

* chore: change metrics location

* chore: use builder pattern to build httpserver

* chore: remove useless debug_assert macro in httpserver builder

* chore: resolve conflicting build error

* chore: format code
2023-03-31 18:37:52 +08:00
dennis zhuang
972f64c3d7 chore: improve opendal layers (#1295)
* chore: improve opendal layers

* chore: log level
2023-03-31 09:48:11 +00:00
LFC
eb77f9aafd feat: start LocalManager in Metasrv (#1279)
* feat: procedure store in Metasrv, backed by Etcd; start `LocalManager` in Metasrv leader

* fix: resolve PR comments

* fix: resolve PR comments
2023-03-31 15:32:59 +08:00
Yingwen
dee20144d7 feat: Implement procedure to alter a table for mito engine (#1259)
* feat: wip

* fix: Fix CreateMitoTable::table_schema not initialized from json

* feat: Implement AlterMitoTable procedure

* test: Add test for alter procedure

* feat: Register alter procedure

* fix: Recover procedures after catalog manager is started

* feat: Simplify usage of table schema in create table procedure

* test: Add rename test

* test: Add drop columns test
2023-03-31 14:40:54 +08:00
dennis zhuang
563adbabe9 feat!: improve region manifest service (#1268)
* feat: try to use batch delete in ManifestLogStorage

* feat: clean temp dir when startup with file backend

* refactor: export region manifest checkpoint actions margin and refactor storage options

* feat: purge unused manifest and checkpoint files by repeat gc task

* chore: debug deleted logs

* feat: adds RepeatedTask and refactor all gc tasks

* chore: clean code

* feat: export gc_duration to manifest config

* test: assert gc works

* fix: typo

* Update src/common/runtime/src/error.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* Update src/common/runtime/src/repeated_task.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* Update src/common/runtime/src/repeated_task.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* fix: format

* Update src/common/runtime/src/repeated_task.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: by CR comments

* chore: by CR comments

* fix: serde default for StorageConfig

* chore: remove compaction config in StandaloneOptions

---------

Co-authored-by: LFC <bayinamine@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-03-31 10:42:00 +08:00
Ruihang Xia
b71bb4e5fa feat: implement restart argument for sqlness-runner (#1262)
* refactor standalone mode and distributed mode start process

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement restart arg

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update tests/runner/src/env.rs

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-03-31 10:02:19 +08:00
LFC
fae293310c feat: unify describe table execution (#1285) 2023-03-31 09:59:19 +08:00
LFC
3e51640442 ci: release binary with embedded dashboard enabled (#1283) 2023-03-30 21:35:47 +08:00
discord9
b40193d7da test: align RsPy PyO3 Behavior (#1280)
* feat: allow PyList Return in PyO3 Backend

* feat: mixed list

* feat: align&test

* chore: PR advices
2023-03-30 17:45:21 +08:00
Ruihang Xia
b5e5f8e555 chore(deps): bump arrow and parquet to 36.0.0, and datafusion to the latest (#1282)
* chore: update arrow, parquet to 36.0 and datafusion

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update deps

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: LFC <bayinamine@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-03-30 16:24:10 +08:00
zyy17
192fa0caa5 ci: only builds binaries for manually trigger workflow (#1284) 2023-03-30 15:58:28 +08:00
Weny Xu
30eb676d6a feat: implement create external table parser (#1252)
* refactor: move parse_option_string to util

* feat: implement create external table parser
2023-03-30 13:37:53 +08:00
Ruihang Xia
d7cadf6e6d fix: nyc-taxi bench tools and limit max parallel compaction task number (#1275)
* limit max parallel compaction subtasks

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct type map

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-29 09:16:53 +00:00
Lei, HUANG
d7a1435517 fix: remove backtrace from ratelimit error (#1273) 2023-03-29 15:58:01 +08:00
xiaomin tang
0943079de2 feat: Create SECURITY.md (#1270)
Create SECURITY.md
2023-03-28 19:14:29 +08:00
shuiyisong
509d07b798 chore: add build_table_route_prefix (#1269) 2023-03-28 16:26:24 +08:00
Yingwen
e72ce5eaa9 fix: Adds FileHandle to ChunkStream (#1255)
* test: Add compaction test

* test: Test read during compaction

* test: Add s3 object store to test

* test: only run compact test

* feat: Hold file handle in chunk stream

* test: check files still exist after compact

* feat: Revert changes to develop.yaml

* test: Simplify MockPurgeHandler
2023-03-28 16:22:07 +08:00
Ruihang Xia
f491a040f5 feat: implement rate, increase and delta in PromQL (#1258)
* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix increase fn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl rate and delta

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix IS_RATE condition

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more tests about rate and delta

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ensure range_length is not zero

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-28 15:21:06 +08:00
Yingwen
47179a7812 feat: Support sending multiple affected rows (#1203)
* feat: Support sending multiple affected rows

* feat: Skip federated check if query starts with insert

* style: Fix clippy
2023-03-28 14:34:14 +08:00
shuiyisong
995a28a27d feat: impl BatchDelete (#1253)
* chore: impl `BatchDelete`

* chore: add `batch_delete` to meta-client

* fix: auth param length check

* fix: auth param length check

* chore: rebase develop

* chore: use `filter_map`

Co-authored-by: LFC <bayinamine@gmail.com>

* chore: update error msg

Co-authored-by: LFC <bayinamine@gmail.com>

* fix: pre-allocate vec length

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-03-28 14:06:13 +08:00
LFC
ed1cb73ffc fix: a minor misuse of tokio::select (#1266) 2023-03-28 13:50:35 +08:00
dennis zhuang
0ffa628c22 refactor: scripts perf and metrics (#1261)
* refactor: retrieve pyvector datatype by inner vector

* perf: replace all ok_or to ok_or_else

* feat: adds metrics for scripts execution
2023-03-28 10:07:21 +08:00
Lei, HUANG
5edd2a3dbe feat: upgrade opendal (#1245)
* chore: upgrade opendal

* chore: finish upgrading opendal

* fix: clippy complaints

* fix some tests

* fix: all unit tests

* chore: rebase develop

* fix: sqlness tests

* optimize imports

* chore: rebase develop

* doc: add todo
2023-03-28 09:47:33 +08:00
Ning Sun
e63b28bff1 feat: add dbname and health check for grpc api (#1220)
* feat: add dbname and health check for grpc api

* refactor: move health check to dedicated service

* chore: switch to merged proto rev

* feat: implement healthcheck on server-side
2023-03-28 09:46:30 +08:00
zyy17
8140d4e3e5 ci: modify the copy path of binary artifacts (#1257) 2023-03-27 21:49:42 +08:00
shuiyisong
6825459c75 chore: ignore dashboard files (#1260) 2023-03-27 19:11:31 +08:00
Ning Sun
7eb4d81929 feat: adopt pgwire 0.12 and simplify encoding apis (#1250)
* feat: adopt pgwire 0.12 and simplify encoding apis

* refactor: remove duplicated format match clause
2023-03-27 18:16:43 +08:00
discord9
8ba0741c81 fix: set locals to main.dict too (#1242) 2023-03-27 15:23:52 +08:00
zyy17
0eeb5b460c ci: install python requests lib in release container image (#1241)
* ci: install python requests lib in release container image

* refactor: add requirements.txt
2023-03-27 15:20:31 +08:00
LFC
65ea6fd85f feat: embed dashboard into GreptimeDB binary (#1239)
* feat: embed dashboard into GreptimeDB binary

* fix: resolve PR comments
2023-03-27 15:08:44 +08:00
dennis zhuang
4f15b26b28 feat: region manifest checkpoint (#1202)
* chore: adds log when manifest protocol is changed

* chore: refactor region manifest

* temp commit

* feat: impl region manifest checkpoint

* feat: recover region version from manifest snapshot

* test: adds region snapshot test

* test: region manifest checkpoint

* test: alter region with manifest checkpoint

* fix: revert storage api

* feat: delete old snapshot

* refactor: manifest log storage

* Update src/storage/src/version.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/storage/src/manifest/checkpoint.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/storage/src/manifest/region.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/storage/src/manifest/region.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: by CR comments

* refactor: by CR comments

* fix: typo

* chore: tweak start_version

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-27 11:15:52 +08:00
Lei, HUANG
15ee4ac729 fix: noop flush impl for numbers table (#1247)
* fix: noop flush impl for numbers table

* fix: clippy
2023-03-27 10:54:07 +08:00
dennis zhuang
b4fc8c5b78 refactor: make sql function in scripts return a list of column vectors (#1243) 2023-03-27 08:50:19 +08:00
Lei, HUANG
6f81717866 fix: skip empty parquet (#1236)
* fix: returns None if parquet file does not contain any rows

* fix: skip empty parquet file

* chore: add doc

* rebase develop

* fix: use flatten instead of filter_map with identity
2023-03-26 09:39:15 +08:00
Lei, HUANG
77f9383daf fix: allow larger compaction window to reduce parallel task num (#1223)
fix: unit tests
2023-03-24 17:12:13 +08:00
discord9
c788b7fc26 feat: slicing PyVector & create DataFrame from SQL (#1190)
* chore: some typos

* feat: slicing for pyo3 vector

* feat: slice tests

* feat: from_sql

* feat: from_sql for dataframe

* test: df tests

* feat: `from_sql` for rspython

* test: tweak a bit

* test: and CR advices

* typos: ordered points

* chore: update error msg

* test: add more `slicing` testcase
2023-03-24 15:37:45 +08:00
LFC
0f160a73be feat: metasrv collects datanode heartbeats for region failure detection (#1214)
* feat: metasrv collects datanode heartbeats for region failure detection

* chore: change visibility

* fix: fragile tests

* Update src/meta-srv/src/handler/persist_stats_handler.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

* Update src/meta-srv/src/handler/failure_handler.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

* fix: resolve PR comments

* fix: resolve PR comments

* fix: resolve PR comments

---------

Co-authored-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-03-24 04:28:34 +00:00
LFC
92963b9614 feat: execute "delete" in query engine (in the form of "LogicalPlan") (#1222)
fix: execute "delete" in query engine (in the form of "LogicalPlan")
2023-03-24 12:11:58 +08:00
Yingwen
f1139fba59 fix: Holds FileHandle in ParquetReader to avoid the purger purges it (#1224) 2023-03-23 14:24:25 +00:00
Ruihang Xia
4e552245b1 fix: range func tests (#1221)
* remove ignore on range fn tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* placeholder for changes, deriv and resets

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-23 17:33:11 +08:00
Ruihang Xia
3126bbc1c7 docs: use CDN for logos (#1219)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-23 11:39:24 +08:00
LFC
b77b561bc8 refactor: execute insert with select in query engine (#1181)
* refactor: execute insert with select in query engine

* fix: resolve PR comments
2023-03-23 10:38:26 +08:00
dennis zhuang
501faad8ab chore: rename params in flush api (#1213) 2023-03-22 14:07:23 +08:00
Eugene Tolbakov
5397a9bbe6 feat(to_unixtime): add initial implementation (#1186)
* feat(to_unixtime): add initial implementation

* feat(to_unixtime): use Timestamp for conversion

* feat(to_unixtime):  implement conversion to Result<VectorRef>

* feat(to_unixtime): make unit test pass

* feat(to_unixtime): preserve None for invalid timestamps

* feat(to_unixtime): address code review suggestions

* feat(to_unixtime): add an sqlness test

* feat(to_unixtime): adjust the assertion for the sqlness test

* Update tests/cases/standalone/common/select/dummy.sql

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-21 12:41:07 +00:00
Ruihang Xia
f351ee7042 docs: update document string and site (#1211)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-21 07:01:08 +00:00
Ruihang Xia
e0493e0b8f feat: flush all tables on shutdown (#1185)
* feat: impl flush on shutdown (#14)

* feat: impl flush on shutdown

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* powerful if-else!

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* retrieve table handler from schema provider

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/datanode/src/instance.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* fix: uncommitted merge change

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-03-21 14:36:30 +08:00
364 changed files with 13465 additions and 6405 deletions


@@ -5,6 +5,7 @@ on:
schedule:
# At 00:00 on Monday.
- cron: '0 0 * * 1'
# Manually triggering the workflow only builds binaries.
workflow_dispatch:
name: Release
@@ -32,38 +33,42 @@ jobs:
os: ubuntu-2004-16-cores
file: greptime-linux-amd64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: aarch64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-arm64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: aarch64-apple-darwin
os: macos-latest
file: greptime-darwin-arm64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
file: greptime-darwin-amd64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: x86_64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-amd64-pyo3
continue-on-error: false
opts: "-F pyo3_backend"
opts: "-F pyo3_backend,servers/dashboard"
- arch: aarch64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-arm64-pyo3
continue-on-error: false
opts: "-F pyo3_backend"
opts: "-F pyo3_backend,servers/dashboard"
- arch: aarch64-apple-darwin
os: macos-latest
file: greptime-darwin-arm64-pyo3
continue-on-error: false
opts: "-F pyo3_backend"
opts: "-F pyo3_backend,servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
file: greptime-darwin-amd64-pyo3
continue-on-error: false
opts: "-F pyo3_backend"
opts: "-F pyo3_backend,servers/dashboard"
runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.continue-on-error }}
if: github.repository == 'GreptimeTeam/greptimedb'
@@ -164,7 +169,7 @@ jobs:
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
echo "implementation=CPython" >> pyo3.config
echo "version=3.10" >> pyo3.config
echo "implementation=CPython" >> pyo3.config
@@ -212,7 +217,7 @@ jobs:
name: Build docker image
needs: [build]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
steps:
- name: Checkout sources
uses: actions/checkout@v3
@@ -252,9 +257,8 @@ jobs:
- name: Unzip the amd64 artifacts
run: |
cd amd64
tar xvf greptime-linux-amd64-pyo3.tgz
rm greptime-linux-amd64-pyo3.tgz
tar xvf amd64/greptime-linux-amd64-pyo3.tgz -C amd64/ && rm amd64/greptime-linux-amd64-pyo3.tgz
cp -r amd64 docker/ci
- name: Download arm64 binary
id: download-arm64
@@ -267,15 +271,14 @@ jobs:
id: unzip-arm64
if: success() || steps.download-arm64.conclusion == 'success'
run: |
cd arm64
tar xvf greptime-linux-arm64-pyo3.tgz
rm greptime-linux-arm64-pyo3.tgz
tar xvf arm64/greptime-linux-arm64-pyo3.tgz -C arm64/ && rm arm64/greptime-linux-arm64-pyo3.tgz
cp -r arm64 docker/ci
- name: Build and push all
uses: docker/build-push-action@v3
if: success() || steps.unzip-arm64.conclusion == 'success' # Build and push all platform if unzip-arm64 succeeds
with:
context: .
context: ./docker/ci/
file: ./docker/ci/Dockerfile
push: true
platforms: linux/amd64,linux/arm64
@@ -287,7 +290,7 @@ jobs:
uses: docker/build-push-action@v3
if: success() || steps.download-arm64.conclusion == 'failure' # Only build and push amd64 platform if download-arm64 fails
with:
context: .
context: ./docker/ci/
file: ./docker/ci/Dockerfile
push: true
platforms: linux/amd64
@@ -300,7 +303,7 @@ jobs:
# Release artifacts only when all the artifacts are built successfully.
needs: [build,docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
steps:
- name: Checkout sources
uses: actions/checkout@v3
@@ -343,7 +346,7 @@ jobs:
name: Push docker image to UCloud Container Registry
needs: [docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
# Push to uhub may fail(500 error), but we don't want to block the release process. The failed job will be retried manually.
continue-on-error: true
steps:

.gitignore

@@ -35,3 +35,7 @@ benchmarks/data
# dotenv
.env
# dashboard files
!/src/servers/dashboard/VERSION
/src/servers/dashboard/*

Cargo.lock (generated): diff suppressed because it is too large.


@@ -51,22 +51,22 @@ edition = "2021"
license = "Apache-2.0"
[workspace.dependencies]
arrow = { version = "34.0" }
arrow-array = "34.0"
arrow-flight = "34.0"
arrow-schema = { version = "34.0", features = ["serde"] }
arrow = { version = "36.0" }
arrow-array = "36.0"
arrow-flight = "36.0"
arrow-schema = { version = "36.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
chrono = { version = "0.4", features = ["serde"] }
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "21bf4ffccadfeea824ab6e29c0b872930d0e190a" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "21bf4ffccadfeea824ab6e29c0b872930d0e190a" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "21bf4ffccadfeea824ab6e29c0b872930d0e190a" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "21bf4ffccadfeea824ab6e29c0b872930d0e190a" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "21bf4ffccadfeea824ab6e29c0b872930d0e190a" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "21bf4ffccadfeea824ab6e29c0b872930d0e190a" }
futures = "0.3"
futures-util = "0.3"
parquet = "34.0"
parquet = "36.0"
paste = "1.0"
prost = "0.11"
rand = "0.8"


@@ -1,14 +1,14 @@
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="/docs/logo-text-padding.png">
<source media="(prefers-color-scheme: dark)" srcset="/docs/logo-text-padding-dark.png">
<img alt="GreptimeDB Logo" src="/docs/logo-text-padding.png" width="400px">
<source media="(prefers-color-scheme: light)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding.png">
<source media="(prefers-color-scheme: dark)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding-dark.png">
<img alt="GreptimeDB Logo" src="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding.png" width="400px">
</picture>
</p>
<h3 align="center">
The next-generation hybrid timeseries/analytics processing database in the cloud
The next-generation hybrid time-series/analytics processing database in the cloud
</h3>
<p align="center">
@@ -36,11 +36,11 @@ Our core developers have been building time-series data platform
for years. Based on their best-practices, GreptimeDB is born to give you:
- A standalone binary that scales to highly-available distributed cluster, providing a transparent experience for cluster users
- Optimized columnar layout for handling time-series data; compacted, compressed, stored on various storage backends
- Flexible index options, tackling high cardinality issues down
- Optimized columnar layout for handling time-series data; compacted, compressed, and stored on various storage backends
- Flexible indexes, tackling high cardinality issues down
- Distributed, parallel query execution, leveraging elastic computing resource
- Native SQL, and Python scripting for advanced analytical scenarios
- Widely adopted database protocols and APIs
- Widely adopted database protocols and APIs, native PromQL support
- Extensible table engine architecture for extensive workloads
## Quick Start
@@ -158,6 +158,7 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
- GreptimeDB [User Guide](https://docs.greptime.com/user-guide/concepts.html)
- GreptimeDB [Developer
Guide](https://docs.greptime.com/developer-guide/overview.html)
- GreptimeDB [internal code document](https://greptimedb.rs)
### Dashboard
- [The dashboard UI for GreptimeDB](https://github.com/GreptimeTeam/dashboard)

SECURITY.md (new file)

@@ -0,0 +1,19 @@
# Security Policy
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| >= v0.1.0 | :white_check_mark: |
| < v0.1.0 | :x: |
## Reporting a Vulnerability
We place great importance on the security of GreptimeDB code, software,
and cloud platform. If you come across a security vulnerability in GreptimeDB,
we kindly request that you inform us immediately. We will thoroughly investigate
all valid reports and make every effort to resolve the issue promptly.
To report any issues or vulnerabilities, please email us at info@greptime.com, rather than
posting publicly on GitHub. Be sure to provide us with the version identifier as well as details
on how the vulnerability can be exploited.


@@ -126,12 +126,13 @@ fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
for (array, field) in record_batch.columns().iter().zip(fields.iter()) {
let (values, datatype) = build_values(array);
let column = Column {
column_name: field.name().to_owned(),
values: Some(values),
null_mask: array
.data()
.null_bitmap()
.nulls()
.map(|bitmap| bitmap.buffer().as_slice().to_vec())
.unwrap_or_default(),
datatype: datatype.into(),
@@ -182,10 +183,10 @@ fn build_values(column: &ArrayRef) -> (Values, ColumnDataType) {
let values = array.values();
(
Values {
i64_values: values.to_vec(),
ts_microsecond_values: values.to_vec(),
..Default::default()
},
ColumnDataType::Int64,
ColumnDataType::TimestampMicrosecond,
)
}
DataType::Utf8 => {
@@ -252,13 +253,13 @@ fn create_table_expr() -> CreateTableExpr {
},
ColumnDef {
name: "tpep_pickup_datetime".to_string(),
datatype: ColumnDataType::Int64 as i32,
datatype: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: true,
default_constraint: vec![],
},
ColumnDef {
name: "tpep_dropoff_datetime".to_string(),
datatype: ColumnDataType::Int64 as i32,
datatype: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: true,
default_constraint: vec![],
},
@@ -365,6 +366,7 @@ fn create_table_expr() -> CreateTableExpr {
table_options: Default::default(),
region_ids: vec![0],
table_id: None,
engine: "mito".to_string(),
}
}


@@ -37,11 +37,21 @@ type = "File"
data_dir = "/tmp/greptimedb/data/"
# Compaction options, see `standalone.example.toml`.
[compaction]
[storage.compaction]
max_inflight_tasks = 4
max_files_in_level0 = 8
max_purge_tasks = 32
# Storage manifest options
[storage.manifest]
# Region checkpoint actions margin.
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
# Procedure storage options, see `standalone.example.toml`.
# [procedure.store]
# type = "File"


@@ -99,7 +99,7 @@ type = "File"
data_dir = "/tmp/greptimedb/data/"
# Compaction options.
[compaction]
[storage.compaction]
# Max task number that can concurrently run.
max_inflight_tasks = 4
# Max files in level 0 to trigger compaction.
@@ -107,6 +107,16 @@ max_files_in_level0 = 8
# Max task number for SST purge task after compaction.
max_purge_tasks = 32
# Storage manifest options
[storage.manifest]
# Region checkpoint actions margin.
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
# Procedure storage options.
# Uncomment to enable.
# [procedure.store]


@@ -6,7 +6,9 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
python3.10-dev \
python3-pip
RUN python3 -m pip install pyarrow
COPY requirements.txt /etc/greptime/requirements.txt
RUN python3 -m pip install -r /etc/greptime/requirements.txt
ARG TARGETARCH


@@ -0,0 +1,5 @@
numpy>=1.24.2
pandas>=1.5.3
pyarrow>=11.0.0
requests>=2.28.2
scipy>=1.10.1

Two binary image files added (78 KiB and 47 KiB); contents not shown.


@@ -0,0 +1,174 @@
---
Feature Name: "File external table"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/1041
Date: 2023-03-08
Author: "Xu Wenkang <wenymedia@gmail.com>"
---
File external table
---
# Summary
Allows users to perform SQL queries on files
# Motivation
User data may already exist in other storage, e.g., file systems or S3, in formats such as CSV, Parquet, or JSON. We can provide users the ability to perform SQL queries on these files.
# Details
## Overview
The file external table provides users the ability to perform SQL queries on such files.
For example, a user has a CSV file on the local file system `/var/data/city.csv`:
```
Rank , Name , State , 2023 Population , 2020 Census , Annual Change , Density (mi²)
1 , New York City , New York , 8,992,908 , 8,804,190 , 0.7% , 29,938
2 , Los Angeles , California , 3,930,586 , 3,898,747 , 0.27% , 8,382
3 , Chicago , Illinois , 2,761,625 , 2,746,388 , 0.18% , 12,146
.....
```
Then user can create a file external table with:
```sql
CREATE EXTERNAL TABLE city WITH (location = '/var/data/city.csv', format = 'CSV', field_delimiter = ',', record_delimiter = '\n', skip_header = 1);
```
Then query the external table with:
```bash
MySQL> select * from city;
```
| Rank | Name | State | 2023 Population | 2020 Census | Annual Change | Density (mi²) |
| :--- | :------------ | :--------- | :-------------- | :---------- | :------------ | :------------ |
| 1 | New York City | New York | 8,992,908 | 8,804,190 | 0.7% | 29,938 |
| 2 | Los Angeles | California | 3,930,586 | 3,898,747 | 0.27% | 8,382 |
| 3 | Chicago | Illinois | 2,761,625 | 2,746,388 | 0.18% | 12,146 |
Drop the external table, if needed, with:
```sql
DROP EXTERNAL TABLE city
```
### Syntax
```
CREATE EXTERNAL TABLE [<database>.]<table_name>
[
(
<col_name> <col_type> [NULL | NOT NULL] [COMMENT "<comment>"]
)
]
[ WITH
(
LOCATION = 'url'
[,FIELD_DELIMITER = 'delimiter' ]
[,RECORD_DELIMITER = 'delimiter' ]
[,SKIP_HEADER = '<number>' ]
[,FORMAT = { csv | json | parquet } ]
[,PATTERN = '<regex_pattern>' ]
[,ENDPOINT = '<uri>' ]
[,ACCESS_KEY_ID = '<key_id>' ]
[,SECRET_ACCESS_KEY = '<access_key>' ]
[,SESSION_TOKEN = '<token>' ]
[,REGION = '<region>' ]
[,ENABLE_VIRTUAL_HOST_STYLE = '<boolean>']
..
)
]
```
### Supported File Format
The external file table supports multiple formats, which we divide into row formats and columnar formats.
Row formats:
- CSV, JSON
Columnar formats:
- Parquet
Some of these formats support filter pushdown and others don't. If users query very large files in a format that doesn't support pushdown, scanning the full files may consume a lot of IO and cause long-running queries.
### File Table Engine
![overview](external-table-engine-overview.png)
We implement a file table engine that creates an external table by accepting user-specified file paths and treating all records as immutable.
1. File Format Decoder: decode files to the `RecordBatch` stream.
2. File Table Engine: implement the `TableProvider` trait, store necessary metadata in memory, and provide scan ability.
Our implementation is better suited to small files. For large files (e.g., a GB-level CSV file), we suggest that users import the data into the database.
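A minimal, illustrative Rust sketch of this division of labor is below. The trait and struct names are invented for the sketch; they are not the actual GreptimeDB types:
```rust
/// Placeholder for the Arrow record batch type the real engine streams.
pub struct RecordBatch;

/// Component 1: a file format decoder, turning the raw bytes of one
/// format (CSV, JSON, Parquet, ...) into record batches.
pub trait FileFormatDecoder {
    fn decode(&self, file_bytes: &[u8]) -> Vec<RecordBatch>;
}

/// Component 2: the file table engine's table. It keeps the necessary
/// metadata in memory and treats all records as immutable.
pub struct FileTable<D: FileFormatDecoder> {
    pub location: String,
    pub decoder: D,
}

impl<D: FileFormatDecoder> FileTable<D> {
    /// Scanning simply re-decodes the underlying file; the real engine
    /// exposes this through DataFusion's `TableProvider` trait instead.
    pub fn scan(&self, file_bytes: &[u8]) -> Vec<RecordBatch> {
        self.decoder.decode(file_bytes)
    }
}
```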
## Drawbacks
- Some formats don't support filter pushdown
- Hard to support indexing
## Life cycle
### Register a table
1. Write metadata to manifest.
2. Create the table via file table engine.
3. Register the table to `CatalogProvider` and to `SystemCatalog` (which persists tables to disk).
### Deregister a table (Drop a table)
1. Fetch the target table info (figure out table engine type).
2. Deregister the target table in `CatalogProvider` and `SystemCatalog`.
3. Find the target table engine.
4. Drop the target table.
### Recover a table when restarting
1. Collect tables name and engine type info.
2. Find the target tables in different engines.
3. Open and register tables.
# Alternatives
## Using DataFusion API
We can use the DataFusion API to register a file table:
```rust
use datafusion::prelude::{CsvReadOptions, SessionContext};

let ctx = SessionContext::new();
// Register the CSV file as a table named "example".
ctx.register_csv("example", "tests/data/example.csv", CsvReadOptions::new()).await?;
// Create a plan over the registered table.
let df = ctx.sql("SELECT a, MIN(b) FROM example WHERE a <= b GROUP BY a LIMIT 100").await?;
```
### Drawbacks
DataFusion implements its own `ObjectStore` abstraction and supports parsing partitioned directories, which lets it push filters down and skip some directories. However, this makes it impossible to use our `LruCacheLayer` (parsing the partitioned directories requires paths as input). If we want to manage memory entirely ourselves, we should implement our own `TableProvider` or `Table`.
- Impossible to use `CacheLayer`
## Introduce an intermediate representation layer
![overview](external-table-engine-way-2.png)
We convert all files into `parquet` as an intermediate representation. Then we only need to implement a `parquet` file table engine, and we already have a similar one. Also, it supports limited filter pushdown via the `parquet` row group stats.
### Drawbacks
- Computing overhead
- Storage overhead


@@ -0,0 +1,39 @@
#!/usr/bin/env bash
# This script is used to download built dashboard assets from the "GreptimeTeam/dashboard" repository.
set -e
declare -r SCRIPT_DIR=$(cd $(dirname ${0}) >/dev/null 2>&1 && pwd)
declare -r ROOT_DIR=$(dirname ${SCRIPT_DIR})
declare -r STATIC_DIR="$ROOT_DIR/src/servers/dashboard"
RELEASE_VERSION="$(cat $STATIC_DIR/VERSION)"
# Download the SHA256 checksum attached to the release. To verify the integrity
# of the download, this checksum will be used to check the download tar file
# containing the built dashboard assets.
curl -Ls https://github.com/GreptimeTeam/dashboard/releases/download/$RELEASE_VERSION/sha256.txt --output sha256.txt
# Download the tar file containing the built dashboard assets.
curl -L https://github.com/GreptimeTeam/dashboard/releases/download/$RELEASE_VERSION/build.tar.gz --output build.tar.gz
# Verify the checksums match; exit if they don't.
case "$(uname -s)" in
FreeBSD | Darwin)
echo "$(cat sha256.txt)" | shasum --algorithm 256 --check \
|| { echo "Checksums did not match for downloaded dashboard assets!"; exit 1; } ;;
Linux)
echo "$(cat sha256.txt)" | sha256sum --check -- \
|| { echo "Checksums did not match for downloaded dashboard assets!"; exit 1; } ;;
*)
echo "The '$(uname -s)' operating system is not supported as a build host for the dashboard" >&2
exit 1
esac
# Extract the assets and clean up.
tar -xzf build.tar.gz -C "$STATIC_DIR"
rm sha256.txt
rm build.tar.gz
echo "Successfully download dashboard assets to $STATIC_DIR"


@@ -10,7 +10,7 @@ common-base = { path = "../common/base" }
common-error = { path = "../common/error" }
common-time = { path = "../common/time" }
datatypes = { path = "../datatypes" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "eb760d219206c77dd3a105ecb6a3ba97d9d650ec" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "59afacdae59eae4241cfaf851021361caaeaed21" }
prost.workspace = true
snafu = { version = "0.7", features = ["backtraces"] }
tonic.workspace = true


@@ -18,7 +18,7 @@ use common_error::ext::ErrorExt;
use common_error::prelude::StatusCode;
use datatypes::prelude::ConcreteDataType;
use snafu::prelude::*;
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
pub type Result<T> = std::result::Result<T, Error>;
@@ -26,12 +26,12 @@ pub type Result<T> = std::result::Result<T, Error>;
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Unknown proto column datatype: {}", datatype))]
UnknownColumnDataType { datatype: i32, backtrace: Backtrace },
UnknownColumnDataType { datatype: i32, location: Location },
#[snafu(display("Failed to create column datatype from {:?}", from))]
IntoColumnDataType {
from: ConcreteDataType,
backtrace: Backtrace,
location: Location,
},
#[snafu(display(
@@ -66,9 +66,6 @@ impl ErrorExt for Error {
| Error::InvalidColumnDefaultConstraint { source, .. } => source.status_code(),
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self


@@ -7,6 +7,7 @@ license.workspace = true
[dependencies]
api = { path = "../api" }
arc-swap = "1.0"
arrow-schema.workspace = true
async-stream.workspace = true
async-trait = "0.1"
backoff = { version = "0.4", features = ["tokio"] }


@@ -12,4 +12,4 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//! PromQL functions
pub mod catalog_adapter;


@@ -18,10 +18,6 @@ use std::any::Any;
use std::sync::Arc;
use async_trait::async_trait;
use catalog::error::{self as catalog_error, Error};
use catalog::{
CatalogListRef, CatalogProvider, CatalogProviderRef, SchemaProvider, SchemaProviderRef,
};
use common_error::prelude::BoxedError;
use datafusion::catalog::catalog::{
CatalogList as DfCatalogList, CatalogProvider as DfCatalogProvider,
@@ -33,7 +29,10 @@ use snafu::ResultExt;
use table::table::adapter::{DfTableProviderAdapter, TableAdapter};
use table::TableRef;
use crate::datafusion::error;
use crate::error::{self, Result, SchemaProviderOperationSnafu};
use crate::{
CatalogListRef, CatalogProvider, CatalogProviderRef, SchemaProvider, SchemaProviderRef,
};
pub struct DfCatalogListAdapter {
catalog_list: CatalogListRef,
@@ -89,7 +88,7 @@ impl CatalogProvider for CatalogProviderAdapter {
self
}
fn schema_names(&self) -> catalog::error::Result<Vec<String>> {
fn schema_names(&self) -> Result<Vec<String>> {
Ok(self.df_catalog_provider.schema_names())
}
@@ -97,11 +96,11 @@ impl CatalogProvider for CatalogProviderAdapter {
&self,
_name: String,
_schema: SchemaProviderRef,
) -> catalog::error::Result<Option<SchemaProviderRef>> {
) -> Result<Option<SchemaProviderRef>> {
todo!("register_schema is not supported in Datafusion catalog provider")
}
fn schema(&self, name: &str) -> catalog::error::Result<Option<Arc<dyn SchemaProvider>>> {
fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>> {
Ok(self
.df_catalog_provider
.schema(name)
@@ -196,11 +195,11 @@ impl SchemaProvider for SchemaProviderAdapter {
}
/// Retrieves the list of available table names in this schema.
fn table_names(&self) -> Result<Vec<String>, Error> {
fn table_names(&self) -> Result<Vec<String>> {
Ok(self.df_schema_provider.table_names())
}
async fn table(&self, name: &str) -> Result<Option<TableRef>, Error> {
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let table = self.df_schema_provider.table(name).await;
let table = table.map(|table_provider| {
match table_provider
@@ -219,11 +218,7 @@ impl SchemaProvider for SchemaProviderAdapter {
Ok(table)
}
fn register_table(
&self,
name: String,
table: TableRef,
) -> catalog::error::Result<Option<TableRef>> {
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
let table_provider = Arc::new(DfTableProviderAdapter::new(table.clone()));
Ok(self
.df_schema_provider
@@ -232,43 +227,43 @@ impl SchemaProvider for SchemaProviderAdapter {
msg: "Fail to register table to datafusion",
})
.map_err(BoxedError::new)
.context(catalog_error::SchemaProviderOperationSnafu)?
.context(SchemaProviderOperationSnafu)?
.map(|_| table))
}
fn rename_table(&self, _name: &str, _new_name: String) -> catalog_error::Result<TableRef> {
fn rename_table(&self, _name: &str, _new_name: String) -> Result<TableRef> {
todo!()
}
fn deregister_table(&self, name: &str) -> catalog::error::Result<Option<TableRef>> {
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
self.df_schema_provider
.deregister_table(name)
.context(error::DatafusionSnafu {
msg: "Fail to deregister table from datafusion",
})
.map_err(BoxedError::new)
.context(catalog_error::SchemaProviderOperationSnafu)?
.context(SchemaProviderOperationSnafu)?
.map(|table| {
let adapter = TableAdapter::new(table)
.context(error::TableSchemaMismatchSnafu)
.map_err(BoxedError::new)
.context(catalog_error::SchemaProviderOperationSnafu)?;
.context(SchemaProviderOperationSnafu)?;
Ok(Arc::new(adapter) as _)
})
.transpose()
}
fn table_exist(&self, name: &str) -> Result<bool, Error> {
fn table_exist(&self, name: &str) -> Result<bool> {
Ok(self.df_schema_provider.table_exist(name))
}
}
#[cfg(test)]
mod tests {
use catalog::local::{new_memory_catalog_list, MemoryCatalogProvider, MemorySchemaProvider};
use table::table::numbers::NumbersTable;
use super::*;
use crate::local::{new_memory_catalog_list, MemoryCatalogProvider, MemorySchemaProvider};
#[test]
#[should_panic]


@@ -19,7 +19,7 @@ use common_error::ext::{BoxedError, ErrorExt};
use common_error::prelude::{Snafu, StatusCode};
use datafusion::error::DataFusionError;
use datatypes::prelude::ConcreteDataType;
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
use crate::DeregisterTableRequest;
@@ -50,7 +50,7 @@ pub enum Error {
},
#[snafu(display("System catalog is not valid: {}", msg))]
SystemCatalog { msg: String, backtrace: Backtrace },
SystemCatalog { msg: String, location: Location },
#[snafu(display(
"System catalog table type mismatch, expected: binary, found: {:?}",
@@ -58,61 +58,68 @@ pub enum Error {
))]
SystemCatalogTypeMismatch {
data_type: ConcreteDataType,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Invalid system catalog entry type: {:?}", entry_type))]
InvalidEntryType {
entry_type: Option<u8>,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Invalid system catalog key: {:?}", key))]
InvalidKey {
key: Option<String>,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Catalog value is not present"))]
EmptyValue { backtrace: Backtrace },
EmptyValue { location: Location },
#[snafu(display("Failed to deserialize value, source: {}", source))]
ValueDeserialize {
source: serde_json::error::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Table engine not found: {}, source: {}", engine_name, source))]
TableEngineNotFound {
engine_name: String,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Cannot find catalog by name: {}", catalog_name))]
CatalogNotFound {
catalog_name: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Cannot find schema {} in catalog {}", schema, catalog))]
SchemaNotFound {
catalog: String,
schema: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Table `{}` already exists", table))]
TableExists { table: String, backtrace: Backtrace },
TableExists { table: String, location: Location },
#[snafu(display("Table `{}` not exist", table))]
TableNotExist { table: String, backtrace: Backtrace },
TableNotExist { table: String, location: Location },
#[snafu(display("Schema {} already exists", schema))]
SchemaExists {
schema: String,
backtrace: Backtrace,
},
SchemaExists { schema: String, location: Location },
#[snafu(display("Operation {} not implemented yet", operation))]
Unimplemented {
operation: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Operation {} not supported", op))]
NotSupported { op: String, location: Location },
#[snafu(display("Failed to open table, table info: {}, source: {}", table_info, source))]
OpenTable {
table_info: String,
@@ -123,7 +130,7 @@ pub enum Error {
#[snafu(display("Table not found while opening table, table info: {}", table_info))]
TableNotFound {
table_info: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to read system catalog table records"))]
@@ -132,6 +139,12 @@ pub enum Error {
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to create recordbatch, source: {}", source))]
CreateRecordBatch {
#[snafu(backtrace)]
source: common_recordbatch::error::Error,
},
#[snafu(display(
"Failed to insert table creation record to system catalog, source: {}",
source
@@ -153,7 +166,7 @@ pub enum Error {
},
#[snafu(display("Illegal catalog manager state: {}", msg))]
IllegalManagerState { backtrace: Backtrace, msg: String },
IllegalManagerState { location: Location, msg: String },
#[snafu(display("Failed to scan system catalog table, source: {}", source))]
SystemCatalogTableScan {
@@ -219,6 +232,22 @@ pub enum Error {
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Invalid system table definition: {err_msg}"))]
InvalidSystemTableDef { err_msg: String, location: Location },
#[snafu(display("{}: {}", msg, source))]
Datafusion {
msg: String,
source: DataFusionError,
location: Location,
},
#[snafu(display("Table schema mismatch, source: {}", source))]
TableSchemaMismatch {
#[snafu(backtrace)]
source: table::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -231,7 +260,8 @@ impl ErrorExt for Error {
| Error::TableNotFound { .. }
| Error::IllegalManagerState { .. }
| Error::CatalogNotFound { .. }
| Error::InvalidEntryType { .. } => StatusCode::Unexpected,
| Error::InvalidEntryType { .. }
| Error::InvalidSystemTableDef { .. } => StatusCode::Unexpected,
Error::SystemCatalog { .. }
| Error::EmptyValue { .. }
@@ -239,14 +269,18 @@ impl ErrorExt for Error {
Error::SystemCatalogTypeMismatch { .. } => StatusCode::Internal,
Error::ReadSystemCatalog { source, .. } => source.status_code(),
Error::ReadSystemCatalog { source, .. } | Error::CreateRecordBatch { source } => {
source.status_code()
}
Error::InvalidCatalogValue { source, .. } | Error::CatalogEntrySerde { source } => {
source.status_code()
}
Error::TableExists { .. } => StatusCode::TableAlreadyExists,
Error::TableNotExist { .. } => StatusCode::TableNotFound,
Error::SchemaExists { .. } => StatusCode::InvalidArguments,
Error::SchemaExists { .. } | Error::TableEngineNotFound { .. } => {
StatusCode::InvalidArguments
}
Error::OpenSystemCatalog { source, .. }
| Error::CreateSystemCatalog { source, .. }
@@ -254,7 +288,8 @@ impl ErrorExt for Error {
| Error::OpenTable { source, .. }
| Error::CreateTable { source, .. }
| Error::DeregisterTable { source, .. }
| Error::RegionStats { source, .. } => source.status_code(),
| Error::RegionStats { source, .. }
| Error::TableSchemaMismatch { source } => source.status_code(),
Error::MetaSrv { source, .. } => source.status_code(),
Error::SystemCatalogTableScan { source } => source.status_code(),
@@ -264,15 +299,12 @@ impl ErrorExt for Error {
source.status_code()
}
Error::Unimplemented { .. } => StatusCode::Unsupported,
Error::Unimplemented { .. } | Error::NotSupported { .. } => StatusCode::Unsupported,
Error::QueryAccessDenied { .. } => StatusCode::AccessDenied,
Error::Datafusion { .. } => StatusCode::EngineExecuteQuery,
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
@@ -296,7 +328,7 @@ mod tests {
StatusCode::TableAlreadyExists,
Error::TableExists {
table: "some_table".to_string(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.status_code()
);
@@ -310,7 +342,7 @@ mod tests {
StatusCode::StorageUnavailable,
Error::SystemCatalog {
msg: "".to_string(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.status_code()
);
@@ -319,7 +351,7 @@ mod tests {
StatusCode::Internal,
Error::SystemCatalogTypeMismatch {
data_type: ConcreteDataType::binary_datatype(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.status_code()
);
@@ -333,7 +365,7 @@ mod tests {
pub fn test_errors_to_datafusion_error() {
let e: DataFusionError = Error::TableExists {
table: "test_table".to_string(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.into();
match e {


@@ -0,0 +1,80 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod tables;
use std::any::Any;
use std::sync::Arc;
use async_trait::async_trait;
use datafusion::datasource::streaming::{PartitionStream, StreamingTable};
use snafu::ResultExt;
use table::table::adapter::TableAdapter;
use table::TableRef;
use crate::error::{DatafusionSnafu, Result, TableSchemaMismatchSnafu};
use crate::information_schema::tables::InformationSchemaTables;
use crate::{CatalogProviderRef, SchemaProvider};
const TABLES: &str = "tables";
pub(crate) struct InformationSchemaProvider {
catalog_name: String,
catalog_provider: CatalogProviderRef,
}
impl InformationSchemaProvider {
pub(crate) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
Self {
catalog_name,
catalog_provider,
}
}
}
#[async_trait]
impl SchemaProvider for InformationSchemaProvider {
fn as_any(&self) -> &dyn Any {
self
}
fn table_names(&self) -> Result<Vec<String>> {
Ok(vec![TABLES.to_string()])
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let table = if name.eq_ignore_ascii_case(TABLES) {
Arc::new(InformationSchemaTables::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
))
} else {
return Ok(None);
};
let table = Arc::new(
StreamingTable::try_new(table.schema().clone(), vec![table]).with_context(|_| {
DatafusionSnafu {
msg: format!("Failed to get InformationSchema table '{name}'"),
}
})?,
);
let table = TableAdapter::new(table).context(TableSchemaMismatchSnafu)?;
Ok(Some(Arc::new(table)))
}
fn table_exist(&self, name: &str) -> Result<bool> {
Ok(matches!(name.to_ascii_lowercase().as_str(), TABLES))
}
}


@@ -0,0 +1,165 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_query::physical_plan::TaskContext;
use common_recordbatch::RecordBatch;
use datafusion::datasource::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::StringVectorBuilder;
use snafu::ResultExt;
use table::metadata::TableType;
use crate::error::{CreateRecordBatchSnafu, Result};
use crate::information_schema::TABLES;
use crate::CatalogProviderRef;
pub(super) struct InformationSchemaTables {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
}
impl InformationSchemaTables {
pub(super) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
let schema = Arc::new(Schema::new(vec![
ColumnSchema::new("table_catalog", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_schema", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_name", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_type", ConcreteDataType::string_datatype(), false),
]));
Self {
schema,
catalog_name,
catalog_provider,
}
}
fn builder(&self) -> InformationSchemaTablesBuilder {
InformationSchemaTablesBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_provider.clone(),
)
}
}
/// Builds the `information_schema.tables` table row by row
///
/// Columns are based on <https://www.postgresql.org/docs/current/infoschema-columns.html>
struct InformationSchemaTablesBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
table_names: StringVectorBuilder,
table_types: StringVectorBuilder,
}
impl InformationSchemaTablesBuilder {
fn new(schema: SchemaRef, catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
Self {
schema,
catalog_name,
catalog_provider,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
table_types: StringVectorBuilder::with_capacity(42),
}
}
/// Construct the `information_schema.tables` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
for schema_name in self.catalog_provider.schema_names()? {
if schema_name == INFORMATION_SCHEMA_NAME {
continue;
}
let Some(schema) = self.catalog_provider.schema(&schema_name)? else { continue };
for table_name in schema.table_names()? {
let Some(table) = schema.table(&table_name).await? else { continue };
self.add_table(&catalog_name, &schema_name, &table_name, table.table_type());
}
}
// Finally, add an entry for the information schema's own `tables` table
self.add_table(
&catalog_name,
INFORMATION_SCHEMA_NAME,
TABLES,
TableType::View,
);
self.finish()
}
fn add_table(
&mut self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
table_type: TableType,
) {
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.table_types.push(Some(match table_type {
TableType::Base => "BASE TABLE",
TableType::View => "VIEW",
TableType::Temporary => "LOCAL TEMPORARY",
}));
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.catalog_names.finish()),
Arc::new(self.schema_names.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.table_types.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaTables {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
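`execute` follows a one-shot pattern: the whole virtual table is built lazily inside a single-element stream the first time it is polled. A self-contained sketch of the same pattern (names hypothetical):

use futures::StreamExt;

// Stand-in for `builder.make_tables().await`.
async fn build_once() -> Result<u32, std::io::Error> {
    Ok(42)
}

async fn demo() -> Result<(), std::io::Error> {
    let mut stream = Box::pin(futures::stream::once(async move { build_once().await }));
    assert_eq!(stream.next().await.transpose()?, Some(42)); // built on first poll
    assert!(stream.next().await.is_none()); // exhausted after one item
    Ok(())
}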


@@ -20,7 +20,7 @@ use std::sync::Arc;
use api::v1::meta::{RegionStat, TableName};
use common_telemetry::{info, warn};
use snafu::{OptionExt, ResultExt};
use snafu::ResultExt;
use table::engine::{EngineContext, TableEngineRef};
use table::metadata::TableId;
use table::requests::CreateTableRequest;
@@ -29,8 +29,10 @@ use table::TableRef;
use crate::error::{CreateTableSnafu, Result};
pub use crate::schema::{SchemaProvider, SchemaProviderRef};
pub mod datafusion;
pub mod error;
pub mod helper;
pub(crate) mod information_schema;
pub mod local;
pub mod remote;
pub mod schema;
@@ -228,34 +230,25 @@ pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
/// The stats of regions in the datanode.
/// The number of regions can be obtained from the vec's length.
pub async fn datanode_stat(catalog_manager: &CatalogManagerRef) -> Result<(u64, Vec<RegionStat>)> {
///
/// Ignores any errors that occur while iterating over regions. The intention of this method is
/// to collect region stats that will be carried in the Datanode's heartbeat to Metasrv, so it's
/// a "best effort" job.
pub async fn datanode_stat(catalog_manager: &CatalogManagerRef) -> (u64, Vec<RegionStat>) {
let mut region_number: u64 = 0;
let mut region_stats = Vec::new();
for catalog_name in catalog_manager.catalog_names()? {
let catalog =
catalog_manager
.catalog(&catalog_name)?
.context(error::CatalogNotFoundSnafu {
catalog_name: &catalog_name,
})?;
let Ok(catalog_names) = catalog_manager.catalog_names() else { return (region_number, region_stats) };
for catalog_name in catalog_names {
let Ok(Some(catalog)) = catalog_manager.catalog(&catalog_name) else { continue };
for schema_name in catalog.schema_names()? {
let schema = catalog
.schema(&schema_name)?
.context(error::SchemaNotFoundSnafu {
catalog: &catalog_name,
schema: &schema_name,
})?;
let Ok(schema_names) = catalog.schema_names() else { continue };
for schema_name in schema_names {
let Ok(Some(schema)) = catalog.schema(&schema_name) else { continue };
for table_name in schema.table_names()? {
let table =
schema
.table(&table_name)
.await?
.context(error::TableNotFoundSnafu {
table_info: &table_name,
})?;
let Ok(table_names) = schema.table_names() else { continue };
for table_name in table_names {
let Ok(Some(table)) = schema.table(&table_name).await else { continue };
let region_numbers = &table.table_info().meta.region_numbers;
region_number += region_numbers.len() as u64;
@@ -282,6 +275,5 @@ pub async fn datanode_stat(catalog_manager: &CatalogManagerRef) -> Result<(u64,
}
}
}
Ok((region_number, region_stats))
(region_number, region_stats)
}
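The repeated `let Ok(..) = .. else { continue }` form is Rust's let-else syntax; it makes the best-effort semantics explicit by skipping a failing item instead of propagating the error. A tiny standalone illustration:

fn sum_parsable(inputs: &[&str]) -> i64 {
    let mut sum = 0;
    for s in inputs {
        // Skip entries that fail to parse instead of returning an error.
        let Ok(n) = s.parse::<i64>() else { continue };
        sum += n;
    }
    sum
}
// sum_parsable(&["1", "oops", "3"]) == 4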


@@ -18,7 +18,7 @@ use std::sync::Arc;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MIN_USER_TABLE_ID,
SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
MITO_ENGINE, SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
};
use common_catalog::format_full_table_name;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
@@ -27,7 +27,8 @@ use datatypes::prelude::ScalarVector;
use datatypes::vectors::{BinaryVector, UInt8Vector};
use futures_util::lock::Mutex;
use snafu::{ensure, OptionExt, ResultExt};
use table::engine::{EngineContext, TableEngineRef};
use table::engine::manager::TableEngineManagerRef;
use table::engine::EngineContext;
use table::metadata::TableId;
use table::requests::OpenTableRequest;
use table::table::numbers::NumbersTable;
@@ -37,7 +38,8 @@ use table::TableRef;
use crate::error::{
self, CatalogNotFoundSnafu, IllegalManagerStateSnafu, OpenTableSnafu, ReadSystemCatalogSnafu,
Result, SchemaExistsSnafu, SchemaNotFoundSnafu, SystemCatalogSnafu,
SystemCatalogTypeMismatchSnafu, TableExistsSnafu, TableNotFoundSnafu,
SystemCatalogTypeMismatchSnafu, TableEngineNotFoundSnafu, TableExistsSnafu, TableNotExistSnafu,
TableNotFoundSnafu,
};
use crate::local::memory::{MemoryCatalogManager, MemoryCatalogProvider, MemorySchemaProvider};
use crate::system::{
@@ -55,7 +57,7 @@ use crate::{
pub struct LocalCatalogManager {
system: Arc<SystemCatalog>,
catalogs: Arc<MemoryCatalogManager>,
engine: TableEngineRef,
engine_manager: TableEngineManagerRef,
next_table_id: AtomicU32,
init_lock: Mutex<bool>,
register_lock: Mutex<()>,
@@ -63,19 +65,20 @@ pub struct LocalCatalogManager {
}
impl LocalCatalogManager {
/// Create a new [CatalogManager] with given user catalogs and table engine
pub async fn try_new(engine: TableEngineRef) -> Result<Self> {
/// Create a new [CatalogManager] with given user catalogs and mito engine
pub async fn try_new(engine_manager: TableEngineManagerRef) -> Result<Self> {
let engine = engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
let table = SystemCatalogTable::new(engine.clone()).await?;
let memory_catalog_list = crate::local::memory::new_memory_catalog_list()?;
let system_catalog = Arc::new(SystemCatalog::new(
table,
memory_catalog_list.clone(),
engine.clone(),
));
let system_catalog = Arc::new(SystemCatalog::new(table));
Ok(Self {
system: system_catalog,
catalogs: memory_catalog_list,
engine,
engine_manager,
next_table_id: AtomicU32::new(MIN_USER_TABLE_ID),
init_lock: Mutex::new(false),
register_lock: Mutex::new(()),
@@ -100,7 +103,14 @@ impl LocalCatalogManager {
// Processing system table hooks
let mut sys_table_requests = self.system_table_requests.lock().await;
handle_system_table_request(self, self.engine.clone(), &mut sys_table_requests).await?;
let engine = self
.engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
handle_system_table_request(self, engine, &mut sys_table_requests).await?;
Ok(())
}
@@ -253,9 +263,14 @@ impl LocalCatalogManager {
table_name: t.table_name.clone(),
table_id: t.table_id,
};
let engine = self
.engine_manager
.engine(&t.engine)
.context(TableEngineNotFoundSnafu {
engine_name: &t.engine,
})?;
let option = self
.engine
let option = engine
.open_table(&context, request)
.await
.with_context(|_| OpenTableSnafu {
@@ -290,9 +305,7 @@ impl CatalogList for LocalCatalogManager {
}
fn catalog_names(&self) -> Result<Vec<String>> {
let mut res = self.catalogs.catalog_names()?;
res.push(SYSTEM_CATALOG_NAME.to_string());
Ok(res)
self.catalogs.catalog_names()
}
fn catalog(&self, name: &str) -> Result<Option<CatalogProviderRef>> {
@@ -364,6 +377,7 @@ impl CatalogManager for LocalCatalogManager {
// Try to register table with same table id, just ignore.
Ok(false)
} else {
let engine = request.table.table_info().meta.engine.to_string();
// table does not exist
self.system
.register_table(
@@ -371,6 +385,7 @@ impl CatalogManager for LocalCatalogManager {
schema_name.clone(),
request.table_name.clone(),
request.table_id,
engine,
)
.await?;
schema.register_table(request.table_name, request.table)?;
@@ -404,6 +419,14 @@ impl CatalogManager for LocalCatalogManager {
schema: schema_name,
})?;
let old_table = schema
.table(&request.table_name)
.await?
.context(TableNotExistSnafu {
table: &request.table_name,
})?;
let engine = old_table.table_info().meta.engine.to_string();
// rename table in system catalog
self.system
.register_table(
@@ -411,6 +434,7 @@ impl CatalogManager for LocalCatalogManager {
schema_name.clone(),
request.new_table_name.clone(),
request.table_id,
engine,
)
.await?;
Ok(schema
@@ -530,6 +554,8 @@ impl CatalogManager for LocalCatalogManager {
mod tests {
use std::assert_matches::assert_matches;
use mito::engine::MITO_ENGINE;
use super::*;
use crate::system::{CatalogEntry, SchemaEntry};
@@ -541,6 +567,7 @@ mod tests {
schema_name: "S1".to_string(),
table_name: "T1".to_string(),
table_id: 1,
engine: MITO_ENGINE.to_string(),
}),
Entry::Catalog(CatalogEntry {
catalog_name: "C2".to_string(),
@@ -561,6 +588,7 @@ mod tests {
schema_name: "S1".to_string(),
table_name: "T2".to_string(),
table_id: 2,
engine: MITO_ENGINE.to_string(),
}),
];
let res = LocalCatalogManager::sort_entries(vec);


@@ -396,7 +396,6 @@ mod tests {
let other_table = NumbersTable::new(12);
let result = provider.register_table(table_name.to_string(), Arc::new(other_table));
let err = result.err().unwrap();
assert!(err.backtrace_opt().is_some());
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}


@@ -20,14 +20,17 @@ use std::sync::Arc;
use arc_swap::ArcSwap;
use async_stream::stream;
use async_trait::async_trait;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MIN_USER_TABLE_ID};
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MIN_USER_TABLE_ID, MITO_ENGINE,
};
use common_telemetry::{debug, error, info};
use dashmap::DashMap;
use futures::Stream;
use futures_util::StreamExt;
use parking_lot::RwLock;
use snafu::{OptionExt, ResultExt};
use table::engine::{EngineContext, TableEngineRef};
use table::engine::manager::TableEngineManagerRef;
use table::engine::EngineContext;
use table::metadata::TableId;
use table::requests::{CreateTableRequest, OpenTableRequest};
use table::table::numbers::NumbersTable;
@@ -36,7 +39,7 @@ use tokio::sync::Mutex;
use crate::error::{
CatalogNotFoundSnafu, CreateTableSnafu, InvalidCatalogValueSnafu, OpenTableSnafu, Result,
SchemaNotFoundSnafu, TableExistsSnafu, UnimplementedSnafu,
SchemaNotFoundSnafu, TableEngineNotFoundSnafu, TableExistsSnafu, UnimplementedSnafu,
};
use crate::helper::{
build_catalog_prefix, build_schema_prefix, build_table_global_prefix, CatalogKey, CatalogValue,
@@ -55,14 +58,14 @@ pub struct RemoteCatalogManager {
node_id: u64,
backend: KvBackendRef,
catalogs: Arc<RwLock<DashMap<String, CatalogProviderRef>>>,
engine: TableEngineRef,
engine_manager: TableEngineManagerRef,
system_table_requests: Mutex<Vec<RegisterSystemTableRequest>>,
}
impl RemoteCatalogManager {
pub fn new(engine: TableEngineRef, node_id: u64, backend: KvBackendRef) -> Self {
pub fn new(engine_manager: TableEngineManagerRef, node_id: u64, backend: KvBackendRef) -> Self {
Self {
engine,
engine_manager,
node_id,
backend,
catalogs: Default::default(),
@@ -186,7 +189,9 @@ impl RemoteCatalogManager {
let max_table_id = MIN_USER_TABLE_ID - 1;
// initiate default catalog and schema
let default_catalog = self.initiate_default_catalog().await?;
let default_catalog = self
.create_catalog_and_schema(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.await?;
res.insert(DEFAULT_CATALOG_NAME.to_string(), default_catalog);
info!("Default catalog and schema registered");
@@ -266,13 +271,19 @@ impl RemoteCatalogManager {
Ok(())
}
async fn initiate_default_catalog(&self) -> Result<CatalogProviderRef> {
let default_catalog = self.new_catalog_provider(DEFAULT_CATALOG_NAME);
let default_schema = self.new_schema_provider(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME);
default_catalog.register_schema(DEFAULT_SCHEMA_NAME.to_string(), default_schema.clone())?;
pub async fn create_catalog_and_schema(
&self,
catalog_name: &str,
schema_name: &str,
) -> Result<CatalogProviderRef> {
let schema_provider = self.new_schema_provider(catalog_name, schema_name);
let catalog_provider = self.new_catalog_provider(catalog_name);
catalog_provider.register_schema(schema_name.to_string(), schema_provider.clone())?;
let schema_key = SchemaKey {
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
catalog_name: catalog_name.to_string(),
schema_name: schema_name.to_string(),
}
.to_string();
self.backend
@@ -283,10 +294,10 @@ impl RemoteCatalogManager {
.context(InvalidCatalogValueSnafu)?,
)
.await?;
info!("Registered default schema");
info!("Created schema '{schema_key}'");
let catalog_key = CatalogKey {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
catalog_name: catalog_name.to_string(),
}
.to_string();
self.backend
@@ -297,8 +308,8 @@ impl RemoteCatalogManager {
.context(InvalidCatalogValueSnafu)?,
)
.await?;
info!("Registered default catalog");
Ok(default_catalog)
info!("Created catalog '{catalog_key}");
Ok(catalog_provider)
}
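A hedged usage sketch of the generalized helper (`manager` is assumed to be a started `RemoteCatalogManager`, and `schema` is assumed to return an optional provider as the other catalog lookups in this diff do):

// Hypothetical call site.
async fn bootstrap(manager: &RemoteCatalogManager) -> Result<()> {
    let provider = manager
        .create_catalog_and_schema("my_catalog", "my_schema")
        .await?;
    assert!(provider.schema("my_schema")?.is_some());
    Ok(())
}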
async fn open_or_create_table(
@@ -331,8 +342,13 @@ impl RemoteCatalogManager {
table_name: table_name.clone(),
table_id,
};
match self
.engine
let engine = self
.engine_manager
.engine(&table_info.meta.engine)
.context(TableEngineNotFoundSnafu {
engine_name: &table_info.meta.engine,
})?;
match engine
.open_table(&context, request)
.await
.with_context(|_| OpenTableSnafu {
@@ -363,9 +379,10 @@ impl RemoteCatalogManager {
primary_key_indices: meta.primary_key_indices.clone(),
create_if_not_exists: true,
table_options: meta.options.clone(),
engine: engine.name().to_string(),
};
self.engine
engine
.create_table(&context, req)
.await
.context(CreateTableSnafu {
@@ -398,7 +415,13 @@ impl CatalogManager for RemoteCatalogManager {
info!("Max table id allocated: {}", max_table_id);
let mut system_table_requests = self.system_table_requests.lock().await;
handle_system_table_request(self, self.engine.clone(), &mut system_table_requests).await?;
let engine = self
.engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
handle_system_table_request(self, engine, &mut system_table_requests).await?;
info!("All system table opened");
self.catalog(DEFAULT_CATALOG_NAME)


@@ -18,7 +18,7 @@ use std::sync::Arc;
use async_trait::async_trait;
use table::TableRef;
use crate::error::Result;
use crate::error::{NotSupportedSnafu, Result};
/// Represents a schema, comprising a number of named tables.
#[async_trait]
@@ -35,15 +35,30 @@ pub trait SchemaProvider: Sync + Send {
/// If supported by the implementation, adds a new table to this schema.
/// If a table with the same name already exists, it returns a "Table already exists" error.
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>>;
fn register_table(&self, name: String, _table: TableRef) -> Result<Option<TableRef>> {
NotSupportedSnafu {
op: format!("register_table({name}, <table>)"),
}
.fail()
}
/// If supported by the implementation, renames an existing table from this schema and returns it.
/// If no table of that name exists, returns "Table not found" error.
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef>;
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
NotSupportedSnafu {
op: format!("rename_table({name}, {new_name})"),
}
.fail()
}
/// If supported by the implementation, removes an existing table from this schema and returns it.
/// If no table of that name exists, returns Ok(None).
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>>;
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
NotSupportedSnafu {
op: format!("deregister_table({name})"),
}
.fail()
}
/// If supported by the implementation, checks whether the table exists in this schema provider.
/// Returns false if there is no matching table in the schema provider.
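With these defaults in place, a read-only provider (such as the information schema provider above) only needs the lookup methods; the mutation methods fall back to `NotSupported` errors. A minimal sketch under that assumption:

// Hypothetical read-only provider relying on the default mutation methods.
struct ReadOnlySchema;

#[async_trait]
impl SchemaProvider for ReadOnlySchema {
    fn as_any(&self) -> &dyn Any {
        self
    }
    fn table_names(&self) -> Result<Vec<String>> {
        Ok(vec![])
    }
    async fn table(&self, _name: &str) -> Result<Option<TableRef>> {
        Ok(None)
    }
    fn table_exist(&self, _name: &str) -> Result<bool> {
        Ok(false)
    }
    // register_table / rename_table / deregister_table inherit the
    // NotSupported defaults above.
}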


@@ -17,8 +17,8 @@ use std::collections::HashMap;
use std::sync::Arc;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, SYSTEM_CATALOG_NAME,
SYSTEM_CATALOG_TABLE_ID, SYSTEM_CATALOG_TABLE_NAME,
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MITO_ENGINE,
SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_ID, SYSTEM_CATALOG_TABLE_NAME,
};
use common_query::logical_plan::Expr;
use common_query::physical_plan::{PhysicalPlanRef, SessionContext};
@@ -112,6 +112,7 @@ impl SystemCatalogTable {
primary_key_indices: vec![ENTRY_TYPE_INDEX, KEY_INDEX],
create_if_not_exists: true,
table_options: TableOptions::default(),
engine: engine.name().to_string(),
};
let table = engine
@@ -194,12 +195,13 @@ pub fn build_table_insert_request(
schema: String,
table_name: String,
table_id: TableId,
engine: String,
) -> InsertRequest {
let entry_key = format_table_entry_key(&catalog, &schema, table_id);
build_insert_request(
EntryType::Table,
entry_key.as_bytes(),
serde_json::to_string(&TableEntryValue { table_name })
serde_json::to_string(&TableEntryValue { table_name, engine })
.unwrap()
.as_bytes(),
)
@@ -330,6 +332,7 @@ pub fn decode_system_catalog(
schema_name: table_parts[1].to_string(),
table_name: table_meta.table_name,
table_id,
engine: table_meta.engine,
}))
}
}
@@ -385,11 +388,19 @@ pub struct TableEntry {
pub schema_name: String,
pub table_name: String,
pub table_id: TableId,
pub engine: String,
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
pub struct TableEntryValue {
pub table_name: String,
#[serde(default = "mito_engine")]
pub engine: String,
}
fn mito_engine() -> String {
MITO_ENGINE.to_string()
}
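The serde default keeps previously written catalog entries readable: a value serialized before the `engine` field existed deserializes with `mito` filled in. Round trip:

let legacy = r#"{"table_name":"t1"}"#; // written before the `engine` field was introduced
let value: TableEntryValue = serde_json::from_str(legacy).unwrap();
assert_eq!(value.table_name, "t1");
assert_eq!(value.engine, "mito"); // filled in by the serde default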
#[cfg(test)]
@@ -399,8 +410,8 @@ mod tests {
use datatypes::value::Value;
use log_store::NoopLogStore;
use mito::config::EngineConfig;
use mito::engine::MitoEngine;
use object_store::{ObjectStore, ObjectStoreBuilder};
use mito::engine::{MitoEngine, MITO_ENGINE};
use object_store::ObjectStore;
use storage::compaction::noop::NoopCompactionScheduler;
use storage::config::EngineConfig as StorageEngineConfig;
use storage::EngineImpl;
@@ -482,11 +493,9 @@ mod tests {
pub async fn prepare_table_engine() -> (TempDir, TableEngineRef) {
let dir = create_temp_dir("system-table-test");
let store_dir = dir.path().to_string_lossy();
let accessor = object_store::services::Fs::default()
.root(&store_dir)
.build()
.unwrap();
let object_store = ObjectStore::new(accessor).finish();
let mut builder = object_store::services::Fs::default();
builder.root(&store_dir);
let object_store = ObjectStore::new(builder).unwrap().finish();
let noop_compaction_scheduler = Arc::new(NoopCompactionScheduler::default());
let table_engine = Arc::new(MitoEngine::new(
EngineConfig::default(),
@@ -530,6 +539,7 @@ mod tests {
DEFAULT_SCHEMA_NAME.to_string(),
"my_table".to_string(),
1,
MITO_ENGINE.to_string(),
);
let result = catalog_table.insert(table_insertion).await.unwrap();
assert_eq!(result, 1);
@@ -550,6 +560,7 @@ mod tests {
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "my_table".to_string(),
table_id: 1,
engine: MITO_ENGINE.to_string(),
});
assert_eq!(entry, expected);


@@ -15,8 +15,9 @@
use std::collections::HashMap;
use std::sync::Arc;
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_catalog::format_full_table_name;
use datafusion::common::{OwnedTableReference, ResolvedTableReference, TableReference};
use datafusion::common::{ResolvedTableReference, TableReference};
use datafusion::datasource::provider_as_source;
use datafusion::logical_expr::TableSource;
use session::context::QueryContext;
@@ -26,6 +27,7 @@ use table::table::adapter::DfTableProviderAdapter;
use crate::error::{
CatalogNotFoundSnafu, QueryAccessDeniedSnafu, Result, SchemaNotFoundSnafu, TableNotExistSnafu,
};
use crate::information_schema::InformationSchemaProvider;
use crate::CatalogListRef;
pub struct DfTableSourceProvider {
@@ -87,9 +89,8 @@ impl DfTableSourceProvider {
pub async fn resolve_table(
&mut self,
table_ref: OwnedTableReference,
table_ref: TableReference<'_>,
) -> Result<Arc<dyn TableSource>> {
let table_ref = table_ref.as_table_reference();
let table_ref = self.resolve_table_ref(table_ref)?;
let resolved_name = table_ref.to_string();
@@ -101,14 +102,25 @@ impl DfTableSourceProvider {
let schema_name = table_ref.schema.as_ref();
let table_name = table_ref.table.as_ref();
let catalog = self
.catalog_list
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog.schema(schema_name)?.context(SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
let schema = if schema_name != INFORMATION_SCHEMA_NAME {
let catalog = self
.catalog_list
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
catalog.schema(schema_name)?.context(SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?
} else {
let catalog_provider = self
.catalog_list
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
Arc::new(InformationSchemaProvider::new(
catalog_name.to_string(),
catalog_provider,
))
};
let table = schema
.table(table_name)
.await?


@@ -15,28 +15,12 @@
// The `tables` table in the system catalog keeps a record of all tables created by users.
use std::any::Any;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
use async_stream::stream;
use async_trait::async_trait;
use common_catalog::consts::{INFORMATION_SCHEMA_NAME, SYSTEM_CATALOG_TABLE_NAME};
use common_error::ext::BoxedError;
use common_query::logical_plan::Expr;
use common_query::physical_plan::PhysicalPlanRef;
use common_recordbatch::error::Result as RecordBatchResult;
use common_recordbatch::{RecordBatch, RecordBatchStream};
use datatypes::prelude::{ConcreteDataType, DataType};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::ValueRef;
use datatypes::vectors::VectorRef;
use futures::Stream;
use snafu::ResultExt;
use table::engine::TableEngineRef;
use table::error::TablesRecordBatchSnafu;
use table::metadata::{TableId, TableInfoRef};
use table::table::scan::SimpleTableScan;
use table::metadata::TableId;
use table::{Table, TableRef};
use crate::error::{self, Error, InsertCatalogRecordSnafu, Result as CatalogResult};
@@ -44,160 +28,9 @@ use crate::system::{
build_schema_insert_request, build_table_deletion_request, build_table_insert_request,
SystemCatalogTable,
};
use crate::{
CatalogListRef, CatalogProvider, DeregisterTableRequest, SchemaProvider, SchemaProviderRef,
};
/// Tables holds all tables created by user.
pub struct Tables {
schema: SchemaRef,
catalogs: CatalogListRef,
engine_name: String,
}
impl Tables {
pub fn new(catalogs: CatalogListRef, engine_name: String) -> Self {
Self {
schema: Arc::new(build_schema_for_tables()),
catalogs,
engine_name,
}
}
}
#[async_trait::async_trait]
impl Table for Tables {
fn as_any(&self) -> &dyn Any {
self
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn table_info(&self) -> TableInfoRef {
unreachable!("Tables does not support table_info method")
}
async fn scan(
&self,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
_limit: Option<usize>,
) -> table::error::Result<PhysicalPlanRef> {
let catalogs = self.catalogs.clone();
let schema_ref = self.schema.clone();
let engine_name = self.engine_name.clone();
let stream = stream!({
for catalog_name in catalogs
.catalog_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
{
let catalog = catalogs
.catalog(&catalog_name)
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.unwrap();
for schema_name in catalog
.schema_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
{
let mut tables_in_schema = Vec::with_capacity(
catalog
.schema_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.len(),
);
let schema = catalog
.schema(&schema_name)
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.unwrap();
for table_name in schema
.table_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
{
tables_in_schema.push(table_name);
}
let vec = tables_to_record_batch(
&catalog_name,
&schema_name,
tables_in_schema,
&engine_name,
);
let record_batch_res = RecordBatch::new(schema_ref.clone(), vec);
yield record_batch_res;
}
}
});
let stream = Box::pin(TablesRecordBatchStream {
schema: self.schema.clone(),
stream: Box::pin(stream),
});
Ok(Arc::new(SimpleTableScan::new(stream)))
}
}
/// Convert tables info to `RecordBatch`.
fn tables_to_record_batch(
catalog_name: &str,
schema_name: &str,
table_names: Vec<String>,
engine: &str,
) -> Vec<VectorRef> {
let mut catalog_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
let mut schema_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
let mut table_name_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
let mut engine_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
for table_name in table_names {
// Safety: All these vectors are string type.
catalog_vec.push_value_ref(ValueRef::String(catalog_name));
schema_vec.push_value_ref(ValueRef::String(schema_name));
table_name_vec.push_value_ref(ValueRef::String(&table_name));
engine_vec.push_value_ref(ValueRef::String(engine));
}
vec![
catalog_vec.to_vector(),
schema_vec.to_vector(),
table_name_vec.to_vector(),
engine_vec.to_vector(),
]
}
pub struct TablesRecordBatchStream {
schema: SchemaRef,
stream: Pin<Box<dyn Stream<Item = RecordBatchResult<RecordBatch>> + Send>>,
}
impl Stream for TablesRecordBatchStream {
type Item = RecordBatchResult<RecordBatch>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
Pin::new(&mut self.stream).poll_next(cx)
}
}
impl RecordBatchStream for TablesRecordBatchStream {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
}
use crate::{CatalogProvider, DeregisterTableRequest, SchemaProvider, SchemaProviderRef};
pub struct InformationSchema {
pub tables: Arc<Tables>,
pub system: Arc<SystemCatalogTable>,
}
@@ -208,41 +41,19 @@ impl SchemaProvider for InformationSchema {
}
fn table_names(&self) -> Result<Vec<String>, Error> {
Ok(vec![
"tables".to_string(),
SYSTEM_CATALOG_TABLE_NAME.to_string(),
])
Ok(vec![SYSTEM_CATALOG_TABLE_NAME.to_string()])
}
async fn table(&self, name: &str) -> Result<Option<TableRef>, Error> {
if name.eq_ignore_ascii_case("tables") {
Ok(Some(self.tables.clone()))
} else if name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME) {
if name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME) {
Ok(Some(self.system.clone()))
} else {
Ok(None)
}
}
fn register_table(
&self,
_name: String,
_table: TableRef,
) -> crate::error::Result<Option<TableRef>> {
panic!("System catalog & schema does not support register table")
}
fn rename_table(&self, _name: &str, _new_name: String) -> crate::error::Result<TableRef> {
unimplemented!("System catalog & schema does not support rename table")
}
fn deregister_table(&self, _name: &str) -> crate::error::Result<Option<TableRef>> {
panic!("System catalog & schema does not support deregister table")
}
fn table_exist(&self, name: &str) -> Result<bool, Error> {
Ok(name.eq_ignore_ascii_case("tables")
|| name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME))
Ok(name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME))
}
}
@@ -251,13 +62,8 @@ pub struct SystemCatalog {
}
impl SystemCatalog {
pub fn new(
system: SystemCatalogTable,
catalogs: CatalogListRef,
engine: TableEngineRef,
) -> Self {
pub(crate) fn new(system: SystemCatalogTable) -> Self {
let schema = InformationSchema {
tables: Arc::new(Tables::new(catalogs, engine.name().to_string())),
system: Arc::new(system),
};
Self {
@@ -271,8 +77,9 @@ impl SystemCatalog {
schema: String,
table_name: String,
table_id: TableId,
engine: String,
) -> crate::error::Result<usize> {
let request = build_table_insert_request(catalog, schema, table_name, table_id);
let request = build_table_insert_request(catalog, schema, table_name, table_id, engine);
self.information_schema
.system
.insert(request)
@@ -334,104 +141,3 @@ impl CatalogProvider for SystemCatalog {
}
}
}
fn build_schema_for_tables() -> Schema {
let cols = vec![
ColumnSchema::new(
"catalog".to_string(),
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
"schema".to_string(),
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
"table_name".to_string(),
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
"engine".to_string(),
ConcreteDataType::string_datatype(),
false,
),
];
Schema::new(cols)
}
#[cfg(test)]
mod tests {
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_query::physical_plan::SessionContext;
use futures_util::StreamExt;
use table::table::numbers::NumbersTable;
use super::*;
use crate::local::memory::new_memory_catalog_list;
use crate::CatalogList;
#[tokio::test]
async fn test_tables() {
let catalog_list = new_memory_catalog_list().unwrap();
let schema = catalog_list
.catalog(DEFAULT_CATALOG_NAME)
.unwrap()
.unwrap()
.schema(DEFAULT_SCHEMA_NAME)
.unwrap()
.unwrap();
schema
.register_table("test_table".to_string(), Arc::new(NumbersTable::default()))
.unwrap();
let tables = Tables::new(catalog_list, "test_engine".to_string());
let tables_stream = tables.scan(None, &[], None).await.unwrap();
let session_ctx = SessionContext::new();
let mut tables_stream = tables_stream.execute(0, session_ctx.task_ctx()).unwrap();
if let Some(t) = tables_stream.next().await {
let batch = t.unwrap();
assert_eq!(1, batch.num_rows());
assert_eq!(4, batch.num_columns());
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(0).data_type()
);
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(1).data_type()
);
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(2).data_type()
);
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(3).data_type()
);
assert_eq!(
"greptime",
batch.column(0).get_ref(0).as_string().unwrap().unwrap()
);
assert_eq!(
"public",
batch.column(1).get_ref(0).as_string().unwrap().unwrap()
);
assert_eq!(
"test_table",
batch.column(2).get_ref(0).as_string().unwrap().unwrap()
);
assert_eq!(
"test_engine",
batch.column(3).get_ref(0).as_string().unwrap().unwrap()
);
} else {
panic!("Record batch should not be empty!")
}
}
}


@@ -21,6 +21,7 @@ mod tests {
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_telemetry::{error, info};
use mito::config::EngineConfig;
use table::engine::manager::MemoryTableEngineManager;
use table::table::numbers::NumbersTable;
use table::TableRef;
use tokio::sync::Mutex;
@@ -33,7 +34,8 @@ mod tests {
mito::table::test_util::MockEngine::default(),
object_store,
));
let catalog_manager = LocalCatalogManager::try_new(mock_engine).await.unwrap();
let engine_manager = Arc::new(MemoryTableEngineManager::new(mock_engine.clone()));
let catalog_manager = LocalCatalogManager::try_new(engine_manager).await.unwrap();
catalog_manager.start().await?;
Ok(catalog_manager)
}


@@ -27,9 +27,10 @@ mod tests {
KvBackend, KvBackendRef, RemoteCatalogManager, RemoteCatalogProvider, RemoteSchemaProvider,
};
use catalog::{CatalogList, CatalogManager, RegisterTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO_ENGINE};
use datatypes::schema::RawSchema;
use futures_util::StreamExt;
use table::engine::manager::MemoryTableEngineManager;
use table::engine::{EngineContext, TableEngineRef};
use table::requests::CreateTableRequest;
@@ -80,8 +81,11 @@ mod tests {
) -> (KvBackendRef, TableEngineRef, Arc<RemoteCatalogManager>) {
let backend = Arc::new(MockKvBackend::default()) as KvBackendRef;
let table_engine = Arc::new(MockTableEngine::default());
let catalog_manager =
RemoteCatalogManager::new(table_engine.clone(), node_id, backend.clone());
let engine_manager = Arc::new(MemoryTableEngineManager::alias(
MITO_ENGINE.to_string(),
table_engine.clone(),
));
let catalog_manager = RemoteCatalogManager::new(engine_manager, node_id, backend.clone());
catalog_manager.start().await.unwrap();
(backend, table_engine, Arc::new(catalog_manager))
}
@@ -131,6 +135,7 @@ mod tests {
primary_key_indices: vec![],
create_if_not_exists: false,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
},
)
.await
@@ -191,6 +196,7 @@ mod tests {
primary_key_indices: vec![],
create_if_not_exists: false,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
},
)
.await
@@ -251,6 +257,7 @@ mod tests {
primary_key_indices: vec![],
create_if_not_exists: false,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
},
)
.await


@@ -14,7 +14,7 @@
use api::v1::{ColumnDataType, ColumnDef, CreateTableExpr, TableId};
use client::{Client, Database};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO_ENGINE};
use prost::Message;
use substrait_proto::proto::plan_rel::RelType as PlanRelType;
use substrait_proto::proto::read_rel::{NamedTable, ReadType};
@@ -64,6 +64,7 @@ async fn run() {
table_options: Default::default(),
table_id: Some(TableId { id: 1024 }),
region_ids: vec![0],
engine: MITO_ENGINE.to_string(),
};
let db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);


@@ -15,6 +15,8 @@
use std::sync::Arc;
use api::v1::greptime_database_client::GreptimeDatabaseClient;
use api::v1::health_check_client::HealthCheckClient;
use api::v1::HealthCheckRequest;
use arrow_flight::flight_service_client::FlightServiceClient;
use common_grpc::channel_manager::ChannelManager;
use parking_lot::RwLock;
@@ -153,6 +155,13 @@ impl Client {
inner: GreptimeDatabaseClient::new(channel),
})
}
pub async fn health_check(&self) -> Result<()> {
let (_, channel) = self.find_channel()?;
let mut client = HealthCheckClient::new(channel);
client.health_check(HealthCheckRequest {}).await?;
Ok(())
}
}
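A usage sketch for the new health check (the `Client` constructor name is an assumption):

// Hypothetical call site.
async fn ping() -> Result<()> {
    let client = Client::with_urls(vec!["127.0.0.1:4001"]);
    client.health_check().await
}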
#[cfg(test)]


@@ -35,25 +35,44 @@ use crate::error::{
};
use crate::{error, Client, Result};
#[derive(Clone, Debug)]
#[derive(Clone, Debug, Default)]
pub struct Database {
// The "catalog" and "schema" to be used in processing the requests at the server side.
// They are the "hint" or "context", just like how the "database" in "USE" statement is treated in MySQL.
// They will be carried in the request header.
catalog: String,
schema: String,
// The dbname follows the same naming rules as our MySQL, PostgreSQL and HTTP
// protocols. The server treats dbname with higher priority than catalog/schema.
dbname: String,
client: Client,
ctx: FlightContext,
}
impl Database {
/// Create database service client using catalog and schema
pub fn new(catalog: impl Into<String>, schema: impl Into<String>, client: Client) -> Self {
Self {
catalog: catalog.into(),
schema: schema.into(),
client,
ctx: FlightContext::default(),
..Default::default()
}
}
/// Create database service client using dbname.
///
/// This API is designed for external usage. `dbname` is:
///
/// - the name of the database when using GreptimeDB standalone or cluster
/// - the name provided by GreptimeCloud or other multi-tenant GreptimeDB
///   environments
pub fn new_with_dbname(dbname: impl Into<String>, client: Client) -> Self {
Self {
dbname: dbname.into(),
client,
..Default::default()
}
}
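For example, a client pointed at a multi-tenant endpoint might be built like this (endpoint, database name, and the `Client` constructor are placeholders/assumptions):

let client = Client::with_urls(vec!["db.example.greptime.cloud:4001"]);
let db = Database::new_with_dbname("my_database", client);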
@@ -73,6 +92,14 @@ impl Database {
self.schema = schema.into();
}
pub fn dbname(&self) -> &String {
&self.dbname
}
pub fn set_dbname(&mut self, dbname: impl Into<String>) {
self.dbname = dbname.into();
}
pub fn set_auth(&mut self, auth: AuthScheme) {
self.ctx.auth_header = Some(AuthHeader {
auth_scheme: Some(auth),
@@ -86,6 +113,7 @@ impl Database {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
dbname: self.dbname.clone(),
}),
request: Some(Request::Insert(request)),
};
@@ -167,6 +195,7 @@ impl Database {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
dbname: self.dbname.clone(),
}),
request: Some(request),
};


@@ -16,16 +16,14 @@ use std::any::Any;
use std::str::FromStr;
use common_error::prelude::*;
use snafu::Location;
use tonic::{Code, Status};
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Illegal Flight messages, reason: {}", reason))]
IllegalFlightMessages {
reason: String,
backtrace: Backtrace,
},
IllegalFlightMessages { reason: String, location: Location },
#[snafu(display("Failed to do Flight get, code: {}, source: {}", tonic_code, source))]
FlightGet {
@@ -47,13 +45,10 @@ pub enum Error {
},
#[snafu(display("Illegal GRPC client state: {}", err_msg))]
IllegalGrpcClientState {
err_msg: String,
backtrace: Backtrace,
},
IllegalGrpcClientState { err_msg: String, location: Location },
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, backtrace: Backtrace },
MissingField { field: String, location: Location },
#[snafu(display(
"Failed to create gRPC channel, peer address: {}, source: {}",
@@ -93,10 +88,6 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}


@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![doc = include_str!("../../../../README.md")]
use std::fmt;
use clap::Parser;


@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::time::Duration;
use clap::Parser;
use common_telemetry::logging;
use datanode::datanode::{
@@ -86,6 +88,10 @@ struct StartCommand {
wal_dir: Option<String>,
#[clap(long)]
procedure_dir: Option<String>,
#[clap(long)]
http_addr: Option<String>,
#[clap(long)]
http_timeout: Option<u64>,
}
impl StartCommand {
@@ -146,7 +152,7 @@ impl TryFrom<StartCommand> for DatanodeOptions {
}
if let Some(data_dir) = cmd.data_dir {
opts.storage = ObjectStoreConfig::File(FileConfig { data_dir });
opts.storage.store = ObjectStoreConfig::File(FileConfig { data_dir });
}
if let Some(wal_dir) = cmd.wal_dir {
@@ -155,6 +161,15 @@ impl TryFrom<StartCommand> for DatanodeOptions {
if let Some(procedure_dir) = cmd.procedure_dir {
opts.procedure = Some(ProcedureConfig::from_file_path(procedure_dir));
}
if let Some(http_addr) = cmd.http_addr {
opts.http_opts.addr = http_addr
}
if let Some(http_timeout) = cmd.http_timeout {
opts.http_opts.timeout = Duration::from_secs(http_timeout)
}
// Disable dashboard in datanode.
opts.http_opts.disable_dashboard = true;
Ok(opts)
}
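Assuming the usual binary layout, the new flags would be passed as, e.g., `greptime datanode start --http-addr 127.0.0.1:4000 --http-timeout 30` (binary and subcommand names are assumptions), with the dashboard forcibly disabled on datanode regardless of flags.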
@@ -166,8 +181,9 @@ mod tests {
use std::io::Write;
use std::time::Duration;
use common_base::readable_size::ReadableSize;
use common_test_util::temp_dir::create_named_temp_file;
use datanode::datanode::{CompactionConfig, ObjectStoreConfig};
use datanode::datanode::{CompactionConfig, ObjectStoreConfig, RegionManifestConfig};
use servers::Mode;
use super::*;
@@ -203,10 +219,15 @@ mod tests {
type = "File"
data_dir = "/tmp/greptimedb/data/"
[compaction]
max_inflight_tasks = 4
max_files_in_level0 = 8
[storage.compaction]
max_inflight_tasks = 3
max_files_in_level0 = 7
max_purge_tasks = 32
[storage.manifest]
checkpoint_margin = 9
gc_duration = '7s'
checkpoint_on_startup = true
"#;
write!(file, "{}", toml_str).unwrap();
@@ -237,9 +258,9 @@ mod tests {
assert_eq!(3000, timeout_millis);
assert!(tcp_nodelay);
match options.storage {
ObjectStoreConfig::File(FileConfig { data_dir }) => {
assert_eq!("/tmp/greptimedb/data/".to_string(), data_dir)
match &options.storage.store {
ObjectStoreConfig::File(FileConfig { data_dir, .. }) => {
assert_eq!("/tmp/greptimedb/data/", data_dir)
}
ObjectStoreConfig::S3 { .. } => unreachable!(),
ObjectStoreConfig::Oss { .. } => unreachable!(),
@@ -247,11 +268,20 @@ mod tests {
assert_eq!(
CompactionConfig {
max_inflight_tasks: 4,
max_files_in_level0: 8,
max_inflight_tasks: 3,
max_files_in_level0: 7,
max_purge_tasks: 32,
sst_write_buffer_size: ReadableSize::mb(8),
},
options.compaction
options.storage.compaction,
);
assert_eq!(
RegionManifestConfig {
checkpoint_margin: Some(9),
gc_duration: Some(Duration::from_secs(7)),
checkpoint_on_startup: true,
},
options.storage.manifest,
);
}


@@ -16,6 +16,7 @@ use std::any::Any;
use common_error::prelude::*;
use rustyline::error::ReadlineError;
use snafu::Location;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -66,20 +67,20 @@ pub enum Error {
ReadConfig {
path: String,
source: std::io::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to parse config, source: {}", source))]
ParseConfig {
source: toml::de::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Missing config, msg: {}", msg))]
MissingConfig { msg: String, backtrace: Backtrace },
MissingConfig { msg: String, location: Location },
#[snafu(display("Illegal config: {}", msg))]
IllegalConfig { msg: String, backtrace: Backtrace },
IllegalConfig { msg: String, location: Location },
#[snafu(display("Illegal auth config: {}", source))]
IllegalAuthConfig {
@@ -100,13 +101,13 @@ pub enum Error {
#[snafu(display("Cannot create REPL: {}", source))]
ReplCreation {
source: ReadlineError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Error reading command: {}", source))]
Readline {
source: ReadlineError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to request database, sql: {sql}, source: {source}"))]
@@ -187,78 +188,7 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test)]
mod tests {
use super::*;
type StdResult<E> = std::result::Result<(), E>;
#[test]
fn test_start_node_error() {
fn throw_datanode_error() -> StdResult<datanode::error::Error> {
datanode::error::MissingNodeIdSnafu {}.fail()
}
let e = throw_datanode_error()
.context(StartDatanodeSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
#[test]
fn test_start_frontend_error() {
fn throw_frontend_error() -> StdResult<frontend::error::Error> {
frontend::error::InvalidSqlSnafu { err_msg: "failed" }.fail()
}
let e = throw_frontend_error()
.context(StartFrontendSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
#[test]
fn test_start_metasrv_error() {
fn throw_metasrv_error() -> StdResult<meta_srv::error::Error> {
meta_srv::error::StreamNoneSnafu {}.fail()
}
let e = throw_metasrv_error()
.context(StartMetaServerSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::Internal);
}
#[test]
fn test_read_config_error() {
fn throw_read_config_error() -> StdResult<std::io::Error> {
Err(std::io::ErrorKind::NotFound.into())
}
let e = throw_read_config_error()
.context(ReadConfigSnafu { path: "test" })
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
}


@@ -107,6 +107,8 @@ pub struct StartCommand {
tls_key_path: Option<String>,
#[clap(long)]
user_provider: Option<String>,
#[clap(long)]
disable_dashboard: bool,
}
impl StartCommand {
@@ -149,18 +151,24 @@ impl TryFrom<StartCommand> for FrontendOptions {
let tls_option = TlsOption::new(cmd.tls_mode, cmd.tls_cert_path, cmd.tls_key_path);
let mut http_options = HttpOptions {
disable_dashboard: cmd.disable_dashboard,
..Default::default()
};
if let Some(addr) = cmd.http_addr {
opts.http_options = Some(HttpOptions {
addr,
..Default::default()
});
http_options.addr = addr;
}
opts.http_options = Some(http_options);
if let Some(addr) = cmd.grpc_addr {
opts.grpc_options = Some(GrpcOptions {
addr,
..Default::default()
});
}
if let Some(addr) = cmd.mysql_addr {
opts.mysql_options = Some(MysqlOptions {
addr,
@@ -227,6 +235,7 @@ mod tests {
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: false,
};
let opts: FrontendOptions = command.try_into().unwrap();
@@ -289,6 +298,7 @@ mod tests {
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: false,
};
let fe_opts = FrontendOptions::try_from(command).unwrap();
@@ -319,6 +329,7 @@ mod tests {
tls_cert_path: None,
tls_key_path: None,
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
disable_dashboard: false,
};
let plugins = load_frontend_plugins(&command.user_provider);


@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::time::Duration;
use clap::Parser;
use common_telemetry::{info, logging, warn};
use meta_srv::bootstrap::MetaSrvInstance;
@@ -80,6 +82,10 @@ struct StartCommand {
selector: Option<String>,
#[clap(long)]
use_memory_store: bool,
#[clap(long)]
http_addr: Option<String>,
#[clap(long)]
http_timeout: Option<u64>,
}
impl StartCommand {
@@ -128,6 +134,16 @@ impl TryFrom<StartCommand> for MetaSrvOptions {
opts.use_memory_store = true;
}
if let Some(http_addr) = cmd.http_addr {
opts.http_opts.addr = http_addr;
}
if let Some(http_timeout) = cmd.http_timeout {
opts.http_opts.timeout = Duration::from_secs(http_timeout);
}
// Disable dashboard in metasrv.
opts.http_opts.disable_dashboard = true;
Ok(opts)
}
}
@@ -150,6 +166,8 @@ mod tests {
config_file: None,
selector: Some("LoadBased".to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
@@ -178,6 +196,8 @@ mod tests {
selector: None,
config_file: Some(file.path().to_str().unwrap().to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);


@@ -17,9 +17,7 @@ use std::sync::Arc;
use clap::Parser;
use common_base::Plugins;
use common_telemetry::info;
use datanode::datanode::{
CompactionConfig, Datanode, DatanodeOptions, ObjectStoreConfig, ProcedureConfig, WalConfig,
};
use datanode::datanode::{Datanode, DatanodeOptions, ProcedureConfig, StorageConfig, WalConfig};
use datanode::instance::InstanceRef;
use frontend::frontend::FrontendOptions;
use frontend::grpc::GrpcOptions;
@@ -82,8 +80,7 @@ pub struct StandaloneOptions {
pub prometheus_options: Option<PrometheusOptions>,
pub prom_options: Option<PromOptions>,
pub wal: WalConfig,
pub storage: ObjectStoreConfig,
pub compaction: CompactionConfig,
pub storage: StorageConfig,
pub procedure: Option<ProcedureConfig>,
}
@@ -101,8 +98,7 @@ impl Default for StandaloneOptions {
prometheus_options: Some(PrometheusOptions::default()),
prom_options: Some(PromOptions::default()),
wal: WalConfig::default(),
storage: ObjectStoreConfig::default(),
compaction: CompactionConfig::default(),
storage: StorageConfig::default(),
procedure: None,
}
}
@@ -129,7 +125,6 @@ impl StandaloneOptions {
enable_memory_catalog: self.enable_memory_catalog,
wal: self.wal,
storage: self.storage,
compaction: self.compaction,
procedure: self.procedure,
..Default::default()
}
@@ -241,8 +236,9 @@ async fn build_frontend(
plugins: Arc<Plugins>,
datanode_instance: InstanceRef,
) -> Result<FeInstance> {
let mut frontend_instance = FeInstance::new_standalone(datanode_instance.clone());
frontend_instance.set_script_handler(datanode_instance);
let mut frontend_instance = FeInstance::try_new_standalone(datanode_instance.clone())
.await
.context(StartFrontendSnafu)?;
frontend_instance.set_plugins(plugins.clone());
Ok(frontend_instance)
}


@@ -18,7 +18,7 @@ use std::io::{Read, Write};
use bytes::{Buf, BufMut, BytesMut};
use common_error::prelude::ErrorExt;
use paste::paste;
use snafu::{ensure, Backtrace, ErrorCompat, ResultExt, Snafu};
use snafu::{ensure, Location, ResultExt, Snafu};
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -31,29 +31,33 @@ pub enum Error {
Overflow {
src_len: usize,
dst_len: usize,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Buffer underflow"))]
Underflow { backtrace: Backtrace },
Underflow { location: Location },
#[snafu(display("IO operation reach EOF, source: {}", source))]
Eof {
source: std::io::Error,
backtrace: Backtrace,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
fn location_opt(&self) -> Option<common_error::snafu::Location> {
match self {
Error::Overflow { location, .. } => Some(*location),
Error::Underflow { location, .. } => Some(*location),
Error::Eof { location, .. } => Some(*location),
}
}
}
macro_rules! impl_read_le {


@@ -53,6 +53,10 @@ impl ReadableSize {
pub const fn as_mb(self) -> u64 {
self.0 / MIB
}
pub const fn as_bytes(self) -> u64 {
self.0
}
}
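A quick check of the new accessor (assuming `ReadableSize` is the usual newtype over a byte count, with `mb` scaling by MiB as the `MIB` constant suggests):

assert_eq!(ReadableSize::mb(8).as_bytes(), 8 * 1024 * 1024);
assert_eq!(ReadableSize(512).as_bytes(), 512);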
impl Div<u64> for ReadableSize {


@@ -25,3 +25,5 @@ pub const MIN_USER_TABLE_ID: u32 = 1024;
pub const SYSTEM_CATALOG_TABLE_ID: u32 = 0;
/// scripts table id
pub const SCRIPTS_TABLE_ID: u32 = 1;
pub const MITO_ENGINE: &str = "mito";


@@ -16,29 +16,29 @@ use std::any::Any;
use common_error::ext::ErrorExt;
use common_error::prelude::{Snafu, StatusCode};
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Invalid catalog info: {}", key))]
InvalidCatalog { key: String, backtrace: Backtrace },
InvalidCatalog { key: String, location: Location },
#[snafu(display("Failed to deserialize catalog entry value: {}", raw))]
DeserializeCatalogEntryValue {
raw: String,
backtrace: Backtrace,
location: Location,
source: serde_json::error::Error,
},
#[snafu(display("Failed to serialize catalog entry value"))]
SerializeCatalogEntryValue {
backtrace: Backtrace,
location: Location,
source: serde_json::error::Error,
},
#[snafu(display("Failed to parse node id: {}", key))]
ParseNodeId { key: String, backtrace: Backtrace },
ParseNodeId { key: String, location: Location },
}
impl ErrorExt for Error {
@@ -51,10 +51,6 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}


@@ -15,6 +15,7 @@
use std::any::Any;
use common_error::prelude::*;
use snafu::Location;
use url::ParseError;
#[derive(Debug, Snafu)]
@@ -35,13 +36,13 @@ pub enum Error {
#[snafu(display("Failed to build backend, source: {}", source))]
BuildBackend {
source: object_store::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to list object in path: {}, source: {}", path, source))]
ListObjects {
path: String,
backtrace: Backtrace,
location: Location,
source: object_store::Error,
},
@@ -65,11 +66,19 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
fn location_opt(&self) -> Option<common_error::snafu::Location> {
match self {
Error::BuildBackend { location, .. } => Some(*location),
Error::ListObjects { location, .. } => Some(*location),
Error::UnsupportedBackendProtocol { .. }
| Error::EmptyHostPath { .. }
| Error::InvalidPath { .. }
| Error::InvalidUrl { .. }
| Error::InvalidConnection { .. } => None,
}
}
}


@@ -13,7 +13,7 @@
// limitations under the License.
use futures::{future, TryStreamExt};
use object_store::{Object, ObjectStore};
use object_store::{Entry, ObjectStore};
use regex::Regex;
use snafu::ResultExt;
@@ -46,13 +46,12 @@ impl Lister {
}
}
pub async fn list(&self) -> Result<Vec<Object>> {
pub async fn list(&self) -> Result<Vec<Entry>> {
match &self.source {
Source::Dir => {
let streamer = self
.object_store
.object(&self.path)
.list()
.list(&self.path)
.await
.context(error::ListObjectsSnafu { path: &self.path })?;
@@ -70,11 +69,14 @@ impl Lister {
.context(error::ListObjectsSnafu { path: &self.path })
}
Source::Filename(filename) => {
let obj = self
.object_store
.object(&format!("{}{}", self.path, filename));
Ok(vec![obj])
// make sure this file exists
let file_full_path = format!("{}{}", self.path, filename);
let _ = self.object_store.stat(&file_full_path).await.context(
error::ListObjectsSnafu {
path: &file_full_path,
},
)?;
Ok(vec![Entry::new(&file_full_path)])
}
}
}


@@ -13,16 +13,16 @@
// limitations under the License.
use object_store::services::Fs;
use object_store::{ObjectStore, ObjectStoreBuilder};
use object_store::ObjectStore;
use snafu::ResultExt;
use crate::error::{self, Result};
use crate::error::{BuildBackendSnafu, Result};
pub fn build_fs_backend(root: &str) -> Result<ObjectStore> {
let accessor = Fs::default()
.root(root)
.build()
.context(error::BuildBackendSnafu)?;
Ok(ObjectStore::new(accessor).finish())
let mut builder = Fs::default();
builder.root(root);
let object_store = ObjectStore::new(builder)
.context(BuildBackendSnafu)?
.finish();
Ok(object_store)
}


@@ -15,7 +15,7 @@
use std::collections::HashMap;
use object_store::services::S3;
use object_store::{ObjectStore, ObjectStoreBuilder};
use object_store::ObjectStore;
use snafu::ResultExt;
use crate::error::{self, Result};
@@ -73,7 +73,7 @@ pub fn build_s3_backend(
}
}
let accessor = builder.build().context(error::BuildBackendSnafu)?;
Ok(ObjectStore::new(accessor).finish())
Ok(ObjectStore::new(builder)
.context(error::BuildBackendSnafu)?
.finish())
}

View File

@@ -23,10 +23,12 @@ pub trait ErrorExt: std::error::Error {
StatusCode::Unknown
}
/// Get the reference to the backtrace of this error, None if the backtrace is unavailable.
// Add `_opt` suffix to avoid confusing with similar method in `std::error::Error`, once backtrace
// in std is stable, we can deprecate this method.
fn backtrace_opt(&self) -> Option<&crate::snafu::Backtrace>;
// TODO(ruihang): remove this default implementation
/// Get the location of this error, None if the location is unavailable.
/// Add `_opt` suffix to avoid confusion with the similar method in `std::error::Error`
fn location_opt(&self) -> Option<crate::snafu::Location> {
None
}
/// Returns the error as [Any](std::any::Any) so that it can be
/// downcast to a specific implementation.
@@ -71,8 +73,8 @@ impl crate::ext::ErrorExt for BoxedError {
self.inner.status_code()
}
fn backtrace_opt(&self) -> Option<&crate::snafu::Backtrace> {
self.inner.backtrace_opt()
fn location_opt(&self) -> Option<crate::snafu::Location> {
self.inner.location_opt()
}
fn as_any(&self) -> &dyn std::any::Any {
@@ -84,7 +86,7 @@ impl crate::ext::ErrorExt for BoxedError {
// via `ErrorCompat::backtrace()`.
impl crate::snafu::ErrorCompat for BoxedError {
fn backtrace(&self) -> Option<&crate::snafu::Backtrace> {
self.inner.backtrace_opt()
None
}
}
@@ -118,7 +120,7 @@ impl crate::ext::ErrorExt for PlainError {
self.status_code
}
fn backtrace_opt(&self) -> Option<&crate::snafu::Backtrace> {
fn location_opt(&self) -> Option<crate::snafu::Location> {
None
}
@@ -126,62 +128,3 @@ impl crate::ext::ErrorExt for PlainError {
self as _
}
}
#[cfg(test)]
mod tests {
use std::error::Error;
use snafu::ErrorCompat;
use super::*;
use crate::format::DebugFormat;
use crate::mock::MockError;
#[test]
fn test_opaque_error_without_backtrace() {
let err = BoxedError::new(MockError::new(StatusCode::Internal));
assert!(err.backtrace_opt().is_none());
assert_eq!(StatusCode::Internal, err.status_code());
assert!(err.as_any().downcast_ref::<MockError>().is_some());
assert!(err.source().is_none());
assert!(ErrorCompat::backtrace(&err).is_none());
}
#[test]
fn test_opaque_error_with_backtrace() {
let err = BoxedError::new(MockError::with_backtrace(StatusCode::Internal));
assert!(err.backtrace_opt().is_some());
assert_eq!(StatusCode::Internal, err.status_code());
assert!(err.as_any().downcast_ref::<MockError>().is_some());
assert!(err.source().is_none());
assert!(ErrorCompat::backtrace(&err).is_some());
let msg = format!("{err:?}");
assert!(msg.contains("\nBacktrace:\n"));
let fmt_msg = format!("{:?}", DebugFormat::new(&err));
assert_eq!(msg, fmt_msg);
let msg = err.to_string();
msg.contains("Internal");
}
#[test]
fn test_opaque_error_with_source() {
let leaf_err = MockError::with_backtrace(StatusCode::Internal);
let internal_err = MockError::with_source(leaf_err);
let err = BoxedError::new(internal_err);
assert!(err.backtrace_opt().is_some());
assert_eq!(StatusCode::Internal, err.status_code());
assert!(err.as_any().downcast_ref::<MockError>().is_some());
assert!(err.source().is_some());
let msg = format!("{err:?}");
assert!(msg.contains("\nBacktrace:\n"));
assert!(msg.contains("Caused by"));
assert!(ErrorCompat::backtrace(&err).is_some());
}
}

View File

@@ -33,9 +33,9 @@ impl<'a, E: ErrorExt + ?Sized> fmt::Debug for DebugFormat<'a, E> {
// Source errors use debug format for more verbose info.
write!(f, " Caused by: {source:?}")?;
}
if let Some(backtrace) = self.0.backtrace_opt() {
if let Some(location) = self.0.location_opt() {
// Add a newline to separate causes and backtrace.
write!(f, "\nBacktrace:\n{backtrace}")?;
write!(f, " at: {location}")?;
}
Ok(())
@@ -47,7 +47,7 @@ mod tests {
use std::any::Any;
use snafu::prelude::*;
use snafu::{Backtrace, GenerateImplicitData};
use snafu::{GenerateImplicitData, Location};
use super::*;
@@ -56,7 +56,7 @@ mod tests {
struct Leaf;
impl ErrorExt for Leaf {
fn backtrace_opt(&self) -> Option<&Backtrace> {
fn location_opt(&self) -> Option<Location> {
None
}
@@ -66,14 +66,14 @@ mod tests {
}
#[derive(Debug, Snafu)]
#[snafu(display("This is a leaf with backtrace"))]
struct LeafWithBacktrace {
backtrace: Backtrace,
#[snafu(display("This is a leaf with location"))]
struct LeafWithLocation {
location: Location,
}
impl ErrorExt for LeafWithBacktrace {
fn backtrace_opt(&self) -> Option<&Backtrace> {
Some(&self.backtrace)
impl ErrorExt for LeafWithLocation {
fn location_opt(&self) -> Option<Location> {
None
}
fn as_any(&self) -> &dyn Any {
@@ -86,12 +86,12 @@ mod tests {
struct Internal {
#[snafu(source)]
source: Leaf,
backtrace: Backtrace,
location: Location,
}
impl ErrorExt for Internal {
fn backtrace_opt(&self) -> Option<&Backtrace> {
Some(&self.backtrace)
fn location_opt(&self) -> Option<Location> {
None
}
fn as_any(&self) -> &dyn Any {
@@ -106,19 +106,21 @@ mod tests {
let msg = format!("{:?}", DebugFormat::new(&err));
assert_eq!("This is a leaf error.", msg);
let err = LeafWithBacktrace {
backtrace: Backtrace::generate(),
let err = LeafWithLocation {
location: Location::generate(),
};
// TODO(ruihang): display location here
let msg = format!("{:?}", DebugFormat::new(&err));
assert!(msg.starts_with("This is a leaf with backtrace.\nBacktrace:\n"));
assert!(msg.starts_with("This is a leaf with location."));
let err = Internal {
source: Leaf,
backtrace: Backtrace::generate(),
location: Location::generate(),
};
// TODO(ruihang): display location here
let msg = format!("{:?}", DebugFormat::new(&err));
assert!(msg.contains("Internal error. Caused by: Leaf\nBacktrace:\n"));
assert!(msg.contains("Internal error. Caused by: Leaf"));
}
}

View File
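
With those trait changes, `DebugFormat` appends the call-site location inline instead of dumping a multi-line backtrace. Assuming the `format` module is public, as the in-crate tests above suggest, printing stays a one-liner:

    use common_error::ext::ErrorExt;
    use common_error::format::DebugFormat;

    fn log_verbose(err: &impl ErrorExt) {
        // Renders roughly: "<display> Caused by: <source:?> at: <file>:<line>:<col>"
        // instead of the former "\nBacktrace:\n..." block.
        println!("{:?}", DebugFormat::new(err));
    }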

@@ -17,7 +17,7 @@
use std::any::Any;
use std::fmt;
use snafu::GenerateImplicitData;
use snafu::Location;
use crate::prelude::*;
@@ -25,34 +25,19 @@ use crate::prelude::*;
#[derive(Debug)]
pub struct MockError {
pub code: StatusCode,
backtrace: Option<Backtrace>,
source: Option<Box<MockError>>,
}
impl MockError {
/// Create a new [MockError] without backtrace.
pub fn new(code: StatusCode) -> MockError {
MockError {
code,
backtrace: None,
source: None,
}
}
/// Create a new [MockError] with backtrace.
pub fn with_backtrace(code: StatusCode) -> MockError {
MockError {
code,
backtrace: Some(Backtrace::generate()),
source: None,
}
MockError { code, source: None }
}
/// Create a new [MockError] with source.
pub fn with_source(source: MockError) -> MockError {
MockError {
code: source.code,
backtrace: None,
source: Some(Box::new(source)),
}
}
@@ -75,39 +60,11 @@ impl ErrorExt for MockError {
self.code
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
self.backtrace
.as_ref()
.or_else(|| self.source.as_ref().and_then(|err| err.backtrace_opt()))
fn location_opt(&self) -> Option<Location> {
None
}
fn as_any(&self) -> &dyn Any {
self
}
}
impl ErrorCompat for MockError {
fn backtrace(&self) -> Option<&Backtrace> {
self.backtrace_opt()
}
}
#[cfg(test)]
mod tests {
use std::error::Error;
use super::*;
#[test]
fn test_mock_error() {
let err = MockError::new(StatusCode::Unknown);
assert!(err.backtrace_opt().is_none());
let err = MockError::with_backtrace(StatusCode::Unknown);
assert!(err.backtrace_opt().is_some());
let root_err = MockError::with_source(err);
assert!(root_err.source().is_some());
assert!(root_err.backtrace_opt().is_some());
}
}

View File

@@ -36,6 +36,8 @@ macro_rules! ok {
}
pub(crate) fn process_range_fn(args: TokenStream, input: TokenStream) -> TokenStream {
let mut result = TokenStream::new();
// extract arg map
let arg_pairs = parse_macro_input!(args as AttributeArgs);
let arg_span = arg_pairs[0].span();
@@ -59,12 +61,17 @@ pub(crate) fn process_range_fn(args: TokenStream, input: TokenStream) -> TokenStream {
let arg_types = ok!(extract_input_types(inputs));
// build the struct and its impl block
let struct_code = build_struct(
attrs,
vis,
ok!(get_ident(&arg_map, "name", arg_span)),
ok!(get_ident(&arg_map, "display_name", arg_span)),
);
// only do this when `display_name` is specified
if let Ok(display_name) = get_ident(&arg_map, "display_name", arg_span) {
let struct_code = build_struct(
attrs,
vis,
ok!(get_ident(&arg_map, "name", arg_span)),
display_name,
);
result.extend(struct_code);
}
let calc_fn_code = build_calc_fn(
ok!(get_ident(&arg_map, "name", arg_span)),
arg_types,
@@ -77,8 +84,6 @@ pub(crate) fn process_range_fn(args: TokenStream, input: TokenStream) -> TokenStream {
}
.into();
let mut result = TokenStream::new();
result.extend(struct_code);
result.extend(calc_fn_code);
result.extend(input_fn_code);
result

View File
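
The `range_fn` change makes the generated wrapper struct opt-in: it is emitted only when `display_name` is given, while the calculation function is always produced. A purely hypothetical invocation to illustrate the two modes (only the `name` and `display_name` keys are taken from this diff; the signature shape and any other required keys are assumptions):

    // Hypothetical: `display_name` present, so both the `Delta` struct
    // and the calculation function are emitted.
    #[range_fn(name = "Delta", display_name = "prom_delta")]
    fn delta(values: &Float64Vector) -> Option<f64> {
        None // ...
    }

    // Hypothetical: `display_name` absent, so only the calculation function
    // is emitted and no registrable struct is generated.
    #[range_fn(name = "DeltaCore")]
    fn delta_core(values: &Float64Vector) -> Option<f64> {
        None // ...
    }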

@@ -11,11 +11,17 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
mod to_unixtime;
use to_unixtime::ToUnixtimeFunction;
use crate::scalars::function_registry::FunctionRegistry;
pub(crate) struct TimestampFunction;
impl TimestampFunction {
pub fn register(_registry: &FunctionRegistry) {}
pub fn register(registry: &FunctionRegistry) {
registry.register(Arc::new(ToUnixtimeFunction::default()));
}
}

View File

@@ -0,0 +1,148 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use std::str::FromStr;
use std::sync::Arc;
use common_query::error::{self, Result, UnsupportedInputDataTypeSnafu};
use common_query::prelude::{Signature, Volatility};
use common_time::timestamp::TimeUnit;
use common_time::Timestamp;
use datatypes::prelude::ConcreteDataType;
use datatypes::types::StringType;
use datatypes::vectors::{Int64Vector, StringVector, Vector, VectorRef};
use snafu::ensure;
use crate::scalars::function::{Function, FunctionContext};
#[derive(Clone, Debug, Default)]
pub struct ToUnixtimeFunction;
const NAME: &str = "to_unixtime";
fn convert_to_seconds(arg: &str) -> Option<i64> {
match Timestamp::from_str(arg) {
Ok(ts) => {
let sec_mul = (TimeUnit::Second.factor() / ts.unit().factor()) as i64;
Some(ts.value().div_euclid(sec_mul))
}
Err(_err) => None,
}
}
impl Function for ToUnixtimeFunction {
fn name(&self) -> &str {
NAME
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::timestamp_second_datatype())
}
fn signature(&self) -> Signature {
Signature::exact(
vec![ConcreteDataType::String(StringType)],
Volatility::Immutable,
)
}
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 1,
error::InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect exactly one, have: {}",
columns.len()
),
}
);
match columns[0].data_type() {
ConcreteDataType::String(_) => {
let array = columns[0].to_arrow_array();
let vector = StringVector::try_from_arrow_array(&array).unwrap();
Ok(Arc::new(Int64Vector::from(
(0..vector.len())
.map(|i| convert_to_seconds(&vector.get(i).to_string()))
.collect::<Vec<_>>(),
)))
}
_ => UnsupportedInputDataTypeSnafu {
function: NAME,
datatypes: columns.iter().map(|c| c.data_type()).collect::<Vec<_>>(),
}
.fail(),
}
}
}
impl fmt::Display for ToUnixtimeFunction {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "TO_UNIXTIME")
}
}
#[cfg(test)]
mod tests {
use common_query::prelude::TypeSignature;
use datatypes::prelude::ConcreteDataType;
use datatypes::types::StringType;
use datatypes::value::Value;
use datatypes::vectors::StringVector;
use super::{ToUnixtimeFunction, *};
use crate::scalars::Function;
#[test]
fn test_to_unixtime() {
let f = ToUnixtimeFunction::default();
assert_eq!("to_unixtime", f.name());
assert_eq!(
ConcreteDataType::timestamp_second_datatype(),
f.return_type(&[]).unwrap()
);
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::Exact(valid_types),
volatility: Volatility::Immutable
} if valid_types == vec![ConcreteDataType::String(StringType)]
));
let times = vec![
Some("2023-03-01T06:35:02Z"),
None,
Some("2022-06-30T23:59:60Z"),
Some("invalid_time_stamp"),
];
let results = vec![Some(1677652502), None, Some(1656633600), None];
let args: Vec<VectorRef> = vec![Arc::new(StringVector::from(times.clone()))];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(4, vector.len());
for (i, _t) in times.iter().enumerate() {
let v = vector.get(i);
if i == 1 || i == 3 {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::Int64(ts) => {
assert_eq!(ts, (*results.get(i).unwrap()).unwrap());
}
_ => unreachable!(),
}
}
}
}

View File
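
`to_unixtime` normalizes whatever unit the timestamp parses into down to seconds by dividing by a unit-factor ratio. Spelled out with the nanosecond-based factors assumed in `common_time` (Second = 1_000_000_000, Millisecond = 1_000_000), a millisecond value is divided by 1_000:

    use std::str::FromStr;

    use common_time::timestamp::TimeUnit;
    use common_time::Timestamp;

    fn seconds(arg: &str) -> Option<i64> {
        let ts = Timestamp::from_str(arg).ok()?;
        // e.g. a string that parses as milliseconds:
        // sec_mul = 1_000_000_000 / 1_000_000 = 1_000,
        // so 1677652502123 -> 1677652502.
        let sec_mul = (TimeUnit::Second.factor() / ts.unit().factor()) as i64;
        // `div_euclid` floors correctly for pre-epoch (negative) values.
        Some(ts.value().div_euclid(sec_mul))
    }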

@@ -178,6 +178,7 @@ pub fn create_expr_to_request(
primary_key_indices,
create_if_not_exists: expr.create_if_not_exists,
table_options,
engine: expr.engine,
})
}

View File

@@ -17,7 +17,7 @@ use std::any::Any;
use api::DecodeError;
use common_error::ext::ErrorExt;
use common_error::prelude::{Snafu, StatusCode};
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -32,7 +32,7 @@ pub enum Error {
DecodeInsert { source: DecodeError },
#[snafu(display("Illegal insert data"))]
IllegalInsertData { backtrace: Backtrace },
IllegalInsertData { location: Location },
#[snafu(display("Column datatype error, source: {}", source))]
ColumnDataType {
@@ -48,17 +48,14 @@ pub enum Error {
DuplicatedTimestampColumn {
exists: String,
duplicated: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Missing timestamp column, msg: {}", msg))]
MissingTimestampColumn { msg: String, backtrace: Backtrace },
MissingTimestampColumn { msg: String, location: Location },
#[snafu(display("Invalid column proto: {}", err_msg))]
InvalidColumnProto {
err_msg: String,
backtrace: Backtrace,
},
InvalidColumnProto { err_msg: String, location: Location },
#[snafu(display("Failed to create vector, source: {}", source))]
CreateVector {
#[snafu(backtrace)]
@@ -66,7 +63,7 @@ pub enum Error {
},
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, backtrace: Backtrace },
MissingField { field: String, location: Location },
#[snafu(display("Invalid column default constraint, source: {}", source))]
ColumnDefaultConstraint {
@@ -113,9 +110,6 @@ impl ErrorExt for Error {
Error::UnrecognizedTableOption { .. } => StatusCode::InvalidArguments,
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self

View File

@@ -195,6 +195,7 @@ pub fn build_create_expr_from_insertion(
table_id: Option<TableId>,
table_name: &str,
columns: &[Column],
engine: &str,
) -> Result<CreateTableExpr> {
let mut new_columns: HashSet<String> = HashSet::default();
let mut column_defs = Vec::default();
@@ -256,6 +257,7 @@ pub fn build_create_expr_from_insertion(
table_options: Default::default(),
table_id: table_id.map(|id| api::v1::TableId { id }),
region_ids: vec![0], // TODO:(hl): region id should be allocated by frontend
engine: engine.to_string(),
};
Ok(expr)
@@ -455,6 +457,7 @@ mod tests {
use api::v1::column::{self, SemanticType, Values};
use api::v1::{Column, ColumnDataType};
use common_base::BitVec;
use common_catalog::consts::MITO_ENGINE;
use common_query::physical_plan::PhysicalPlanRef;
use common_query::prelude::Expr;
use common_time::timestamp::Timestamp;
@@ -493,13 +496,22 @@ mod tests {
let table_id = Some(10);
let table_name = "test_metric";
assert!(build_create_expr_from_insertion("", "", table_id, table_name, &[]).is_err());
assert!(
build_create_expr_from_insertion("", "", table_id, table_name, &[], MITO_ENGINE)
.is_err()
);
let insert_batch = mock_insert_batch();
let create_expr =
build_create_expr_from_insertion("", "", table_id, table_name, &insert_batch.0)
.unwrap();
let create_expr = build_create_expr_from_insertion(
"",
"",
table_id,
table_name,
&insert_batch.0,
MITO_ENGINE,
)
.unwrap();
assert_eq!(table_id, create_expr.table_id.map(|x| x.id));
assert_eq!(table_name, create_expr.table_name);

View File

@@ -16,7 +16,7 @@ use std::any::Any;
use std::io;
use common_error::prelude::{ErrorExt, StatusCode};
use snafu::{Backtrace, ErrorCompat, Snafu};
use snafu::{Location, Snafu};
pub type Result<T> = std::result::Result<T, Error>;
@@ -29,11 +29,11 @@ pub enum Error {
#[snafu(display("Invalid config file path, {}", source))]
InvalidConfigFilePath {
source: io::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, backtrace: Backtrace },
MissingField { field: String, location: Location },
#[snafu(display(
"Write type mismatch, column name: {}, expected: {}, actual: {}",
@@ -45,13 +45,13 @@ pub enum Error {
column_name: String,
expected: String,
actual: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to create gRPC channel, source: {}", source))]
CreateChannel {
source: tonic::transport::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to create RecordBatch, source: {}", source))]
@@ -61,7 +61,7 @@ pub enum Error {
},
#[snafu(display("Failed to convert Arrow type: {}", from))]
Conversion { from: String, backtrace: Backtrace },
Conversion { from: String, location: Location },
#[snafu(display("Column datatype error, source: {}", source))]
ColumnDataType {
@@ -72,14 +72,11 @@ pub enum Error {
#[snafu(display("Failed to decode FlightData, source: {}", source))]
DecodeFlightData {
source: api::DecodeError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Invalid FlightData, reason: {}", reason))]
InvalidFlightData {
reason: String,
backtrace: Backtrace,
},
InvalidFlightData { reason: String, location: Location },
#[snafu(display("Failed to convert Arrow Schema, source: {}", source))]
ConvertArrowSchema {
@@ -107,65 +104,7 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test)]
mod tests {
use snafu::{OptionExt, ResultExt};
use super::*;
type StdResult<E> = std::result::Result<(), E>;
fn throw_none_option() -> Option<String> {
None
}
#[test]
fn test_missing_field_error() {
let e = throw_none_option()
.context(MissingFieldSnafu { field: "test" })
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
#[test]
fn test_type_mismatch_error() {
let e = throw_none_option()
.context(TypeMismatchSnafu {
column_name: "",
expected: "",
actual: "",
})
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
#[test]
fn test_create_channel_error() {
fn throw_tonic_error() -> StdResult<tonic::transport::Error> {
tonic::transport::Endpoint::new("http//http").map(|_| ())
}
let e = throw_tonic_error()
.context(CreateChannelSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::Internal);
}
}

View File

@@ -16,7 +16,7 @@ use std::any::Any;
use std::path::PathBuf;
use common_error::prelude::{ErrorExt, StatusCode};
use snafu::{Backtrace, Snafu};
use snafu::{Location, Snafu};
pub type Result<T> = std::result::Result<T, Error>;
@@ -30,7 +30,7 @@ pub enum Error {
ProfilingNotEnabled,
#[snafu(display("Failed to build temp file from given path: {:?}", path))]
BuildTempPath { path: PathBuf, backtrace: Backtrace },
BuildTempPath { path: PathBuf, location: Location },
#[snafu(display("Failed to open temp file: {}", path))]
OpenTempFile {
@@ -56,10 +56,6 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
snafu::ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}

View File

@@ -6,6 +6,7 @@ license.workspace = true
[dependencies]
async-trait.workspace = true
async-stream.workspace = true
common-error = { path = "../error" }
common-runtime = { path = "../runtime" }
common-telemetry = { path = "../telemetry" }

View File

@@ -13,9 +13,11 @@
// limitations under the License.
use std::any::Any;
use std::string::FromUtf8Error;
use std::sync::Arc;
use common_error::prelude::*;
use snafu::Location;
use crate::procedure::ProcedureId;
@@ -33,24 +35,25 @@ pub enum Error {
},
#[snafu(display("Loader {} is already registered", name))]
LoaderConflict { name: String, backtrace: Backtrace },
LoaderConflict { name: String, location: Location },
#[snafu(display("Failed to serialize to json, source: {}", source))]
ToJson {
source: serde_json::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Procedure {} already exists", procedure_id))]
DuplicateProcedure {
procedure_id: ProcedureId,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to put {}, source: {}", key, source))]
#[snafu(display("Failed to put state, key: '{key}', source: {source}"))]
PutState {
key: String,
source: object_store::Error,
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Failed to delete {}, source: {}", key, source))]
@@ -59,10 +62,18 @@ pub enum Error {
source: object_store::Error,
},
#[snafu(display("Failed to list {}, source: {}", path, source))]
#[snafu(display("Failed to delete keys: '{keys}', source: {source}"))]
DeleteStates {
keys: String,
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Failed to list state, path: '{path}', source: {source}"))]
ListState {
path: String,
source: object_store::Error,
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Failed to read {}, source: {}", key, source))]
@@ -74,7 +85,7 @@ pub enum Error {
#[snafu(display("Failed to deserialize from json, source: {}", source))]
FromJson {
source: serde_json::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Procedure exec failed, source: {}", source))]
@@ -89,13 +100,13 @@ pub enum Error {
#[snafu(display("Failed to wait watcher, source: {}", source))]
WaitWatcher {
source: tokio::sync::watch::error::RecvError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to execute procedure, source: {}", source))]
ProcedureExec {
source: Arc<Error>,
backtrace: Backtrace,
location: Location,
},
#[snafu(display(
@@ -107,6 +118,9 @@ pub enum Error {
source: Arc<Error>,
procedure_id: ProcedureId,
},
#[snafu(display("Corrupted data, error: {source}"))]
CorruptedData { source: FromUtf8Error },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -114,11 +128,13 @@ pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::External { source } => source.status_code(),
Error::External { source }
| Error::PutState { source, .. }
| Error::DeleteStates { source, .. }
| Error::ListState { source, .. } => source.status_code(),
Error::ToJson { .. }
| Error::PutState { .. }
| Error::DeleteState { .. }
| Error::ListState { .. }
| Error::ReadState { .. }
| Error::FromJson { .. }
| Error::RetryTimesExceeded { .. }
@@ -127,15 +143,11 @@ impl ErrorExt for Error {
Error::LoaderConflict { .. } | Error::DuplicateProcedure { .. } => {
StatusCode::InvalidArguments
}
Error::ProcedurePanic { .. } => StatusCode::Unexpected,
Error::ProcedurePanic { .. } | Error::CorruptedData { .. } => StatusCode::Unexpected,
Error::ProcedureExec { source, .. } => source.status_code(),
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}

View File
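
The procedure errors above cut the direct `object_store` dependency: state-store failures now arrive as a `BoxedError`, built by flattening the external error into a `PlainError` with an explicit status code. The wrapping pattern, as it appears in the state store later in this compare:

    use common_error::ext::PlainError;
    use common_error::prelude::{BoxedError, StatusCode};

    // Flatten any displayable external error into a status-coded BoxedError.
    fn storage_unavailable(e: impl std::fmt::Display) -> BoxedError {
        BoxedError::new(PlainError::new(e.to_string(), StatusCode::StorageUnavailable))
    }

    // Usage shape (see ObjectStateStore::put below):
    //   store.write(key, value).await
    //       .map_err(storage_unavailable)
    //       .context(PutStateSnafu { key })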

@@ -17,7 +17,7 @@
pub mod error;
pub mod local;
mod procedure;
mod store;
pub mod store;
pub mod watcher;
pub use crate::error::{Error, Result};

View File

@@ -22,7 +22,6 @@ use std::time::Duration;
use async_trait::async_trait;
use backon::ExponentialBuilder;
use common_telemetry::logging;
use object_store::ObjectStore;
use snafu::ensure;
use tokio::sync::watch::{self, Receiver, Sender};
use tokio::sync::Notify;
@@ -31,7 +30,7 @@ use crate::error::{DuplicateProcedureSnafu, LoaderConflictSnafu, Result};
use crate::local::lock::LockMap;
use crate::local::runner::Runner;
use crate::procedure::BoxedProcedureLoader;
use crate::store::{ObjectStateStore, ProcedureMessage, ProcedureStore, StateStoreRef};
use crate::store::{ProcedureMessage, ProcedureStore, StateStoreRef};
use crate::{
BoxedProcedure, ContextProvider, LockKey, ProcedureId, ProcedureManager, ProcedureState,
ProcedureWithId, Watcher,
@@ -291,12 +290,19 @@ impl ManagerContext {
/// Config for [LocalManager].
#[derive(Debug)]
pub struct ManagerConfig {
/// Object store
pub object_store: ObjectStore,
pub max_retry_times: usize,
pub retry_delay: Duration,
}
impl Default for ManagerConfig {
fn default() -> Self {
Self {
max_retry_times: 3,
retry_delay: Duration::from_millis(500),
}
}
}
/// A [ProcedureManager] that maintains procedure states locally.
pub struct LocalManager {
manager_ctx: Arc<ManagerContext>,
@@ -307,10 +313,10 @@ pub struct LocalManager {
impl LocalManager {
/// Create a new [LocalManager] with specific `config`.
pub fn new(config: ManagerConfig) -> LocalManager {
pub fn new(config: ManagerConfig, state_store: StateStoreRef) -> LocalManager {
LocalManager {
manager_ctx: Arc::new(ManagerContext::new()),
state_store: Arc::new(ObjectStateStore::new(config.object_store)),
state_store,
max_retry_times: config.max_retry_times,
retry_delay: config.retry_delay,
}
@@ -423,7 +429,7 @@ impl ProcedureManager for LocalManager {
mod test_util {
use common_test_util::temp_dir::TempDir;
use object_store::services::Fs as Builder;
use object_store::ObjectStoreBuilder;
use object_store::ObjectStore;
use super::*;
@@ -433,8 +439,9 @@ mod test_util {
pub(crate) fn new_object_store(dir: &TempDir) -> ObjectStore {
let store_dir = dir.path().to_str().unwrap();
let accessor = Builder::default().root(store_dir).build().unwrap();
ObjectStore::new(accessor).finish()
let mut builder = Builder::default();
builder.root(store_dir);
ObjectStore::new(builder).unwrap().finish()
}
}
@@ -446,6 +453,7 @@ mod tests {
use super::*;
use crate::error::Error;
use crate::store::ObjectStateStore;
use crate::{Context, Procedure, Status};
#[test]
@@ -554,11 +562,11 @@ mod tests {
fn test_register_loader() {
let dir = create_temp_dir("register");
let config = ManagerConfig {
object_store: test_util::new_object_store(&dir),
max_retry_times: 3,
retry_delay: Duration::from_millis(500),
};
let manager = LocalManager::new(config);
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let manager = LocalManager::new(config, state_store);
manager
.register_loader("ProcedureToLoad", ProcedureToLoad::loader())
@@ -575,11 +583,11 @@ mod tests {
let dir = create_temp_dir("recover");
let object_store = test_util::new_object_store(&dir);
let config = ManagerConfig {
object_store: object_store.clone(),
max_retry_times: 3,
retry_delay: Duration::from_millis(500),
};
let manager = LocalManager::new(config);
let state_store = Arc::new(ObjectStateStore::new(object_store.clone()));
let manager = LocalManager::new(config, state_store);
manager
.register_loader("ProcedureToLoad", ProcedureToLoad::loader())
@@ -621,11 +629,11 @@ mod tests {
async fn test_submit_procedure() {
let dir = create_temp_dir("submit");
let config = ManagerConfig {
object_store: test_util::new_object_store(&dir),
max_retry_times: 3,
retry_delay: Duration::from_millis(500),
};
let manager = LocalManager::new(config);
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let manager = LocalManager::new(config, state_store);
let procedure_id = ProcedureId::random();
assert!(manager
@@ -669,11 +677,11 @@ mod tests {
async fn test_state_changed_on_err() {
let dir = create_temp_dir("on_err");
let config = ManagerConfig {
object_store: test_util::new_object_store(&dir),
max_retry_times: 3,
retry_delay: Duration::from_millis(500),
};
let manager = LocalManager::new(config);
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let manager = LocalManager::new(config, state_store);
#[derive(Debug)]
struct MockProcedure {

View File
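
`ManagerConfig` no longer owns the object store; the state store is injected into `LocalManager::new`, and retry settings gain defaults. A construction sketch (the crate and module paths are assumptions based on the `pub mod` changes in this compare):

    use std::sync::Arc;

    use common_procedure::local::{LocalManager, ManagerConfig};
    use common_procedure::store::state_store::ObjectStateStore;
    use object_store::ObjectStore;

    fn procedure_manager(object_store: ObjectStore) -> LocalManager {
        // Defaults: max_retry_times = 3, retry_delay = 500ms.
        let config = ManagerConfig::default();
        let state_store = Arc::new(ObjectStateStore::new(object_store));
        LocalManager::new(config, state_store)
    }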

@@ -473,8 +473,7 @@ mod tests {
async fn check_files(object_store: &ObjectStore, procedure_id: ProcedureId, files: &[&str]) {
let dir = format!("{procedure_id}/");
let object = object_store.object(&dir);
let lister = object.list().await.unwrap();
let lister = object_store.list(&dir).await.unwrap();
let mut files_in_dir: Vec<_> = lister
.map_ok(|de| de.name().to_string())
.try_collect()

View File

@@ -26,7 +26,7 @@ use crate::error::{Result, ToJsonSnafu};
pub(crate) use crate::store::state_store::{ObjectStateStore, StateStoreRef};
use crate::{BoxedProcedure, ProcedureId};
mod state_store;
pub mod state_store;
/// Serialized data of a procedure.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
@@ -248,15 +248,15 @@ mod tests {
use async_trait::async_trait;
use common_test_util::temp_dir::{create_temp_dir, TempDir};
use object_store::services::Fs as Builder;
use object_store::ObjectStoreBuilder;
use super::*;
use crate::{Context, LockKey, Procedure, Status};
fn procedure_store_for_test(dir: &TempDir) -> ProcedureStore {
let store_dir = dir.path().to_str().unwrap();
let accessor = Builder::default().root(store_dir).build().unwrap();
let object_store = ObjectStore::new(accessor).finish();
let mut builder = Builder::default();
builder.root(store_dir);
let object_store = ObjectStore::new(builder).unwrap().finish();
ProcedureStore::from(object_store)
}

View File

@@ -15,22 +15,25 @@
use std::pin::Pin;
use std::sync::Arc;
use async_stream::try_stream;
use async_trait::async_trait;
use futures::{Stream, TryStreamExt};
use object_store::{ObjectMode, ObjectStore};
use common_error::ext::PlainError;
use common_error::prelude::{BoxedError, StatusCode};
use futures::{Stream, StreamExt};
use object_store::{EntryMode, Metakey, ObjectStore};
use snafu::ResultExt;
use crate::error::{DeleteStateSnafu, Error, PutStateSnafu, Result};
use crate::error::{DeleteStateSnafu, ListStateSnafu, PutStateSnafu, Result};
/// Key value from state store.
type KeyValue = (String, Vec<u8>);
pub type KeyValue = (String, Vec<u8>);
/// Stream that yields [KeyValue].
type KeyValueStream = Pin<Box<dyn Stream<Item = Result<KeyValue>> + Send>>;
pub type KeyValueStream = Pin<Box<dyn Stream<Item = Result<KeyValue>> + Send>>;
/// Storage layer for persisting procedure's state.
#[async_trait]
pub(crate) trait StateStore: Send + Sync {
pub trait StateStore: Send + Sync {
/// Puts `key` and `value` into the store.
async fn put(&self, key: &str, value: Vec<u8>) -> Result<()>;
@@ -50,13 +53,13 @@ pub(crate) type StateStoreRef = Arc<dyn StateStore>;
/// [StateStore] based on [ObjectStore].
#[derive(Debug)]
pub(crate) struct ObjectStateStore {
pub struct ObjectStateStore {
store: ObjectStore,
}
impl ObjectStateStore {
/// Returns a new [ObjectStateStore] with specific `store`.
pub(crate) fn new(store: ObjectStore) -> ObjectStateStore {
pub fn new(store: ObjectStore) -> ObjectStateStore {
ObjectStateStore { store }
}
}
@@ -64,49 +67,83 @@ impl ObjectStateStore {
#[async_trait]
impl StateStore for ObjectStateStore {
async fn put(&self, key: &str, value: Vec<u8>) -> Result<()> {
let object = self.store.object(key);
object.write(value).await.context(PutStateSnafu { key })
self.store
.write(key, value)
.await
.map_err(|e| {
BoxedError::new(PlainError::new(
e.to_string(),
StatusCode::StorageUnavailable,
))
})
.context(PutStateSnafu { key })
}
async fn walk_top_down(&self, path: &str) -> Result<KeyValueStream> {
let path_string = path.to_string();
let lister = self
let mut lister = self
.store
.object(path)
.scan()
.scan(path)
.await
.map_err(|e| Error::ListState {
.map_err(|e| {
BoxedError::new(PlainError::new(
e.to_string(),
StatusCode::StorageUnavailable,
))
})
.with_context(|_| ListStateSnafu {
path: path_string.clone(),
source: e,
})?;
let stream = lister
.try_filter_map(|entry| async move {
let store = self.store.clone();
let stream = try_stream!({
while let Some(res) = lister.next().await {
let entry = res
.map_err(|e| {
BoxedError::new(PlainError::new(
e.to_string(),
StatusCode::StorageUnavailable,
))
})
.context(ListStateSnafu { path: &path_string })?;
let key = entry.path();
let key_value = match entry.mode().await? {
ObjectMode::FILE => {
let value = entry.read().await?;
Some((key.to_string(), value))
}
ObjectMode::DIR | ObjectMode::Unknown => None,
};
Ok(key_value)
})
.map_err(move |e| Error::ListState {
path: path_string.clone(),
source: e,
});
let metadata = store
.metadata(&entry, Metakey::Mode)
.await
.map_err(|e| {
BoxedError::new(PlainError::new(
e.to_string(),
StatusCode::StorageUnavailable,
))
})
.context(ListStateSnafu { path: key })?;
if let EntryMode::FILE = metadata.mode() {
let value = store
.read(key)
.await
.map_err(|e| {
BoxedError::new(PlainError::new(
e.to_string(),
StatusCode::StorageUnavailable,
))
})
.context(ListStateSnafu { path: key })?;
yield (key.to_string(), value);
}
}
});
Ok(Box::pin(stream))
}
async fn delete(&self, keys: &[String]) -> Result<()> {
for key in keys {
let object = self.store.object(key);
object.delete().await.context(DeleteStateSnafu { key })?;
self.store
.delete(key)
.await
.context(DeleteStateSnafu { key })?;
}
Ok(())
@@ -116,8 +153,8 @@ impl StateStore for ObjectStateStore {
#[cfg(test)]
mod tests {
use common_test_util::temp_dir::create_temp_dir;
use futures_util::TryStreamExt;
use object_store::services::Fs as Builder;
use object_store::ObjectStoreBuilder;
use super::*;
@@ -125,8 +162,10 @@ mod tests {
async fn test_object_state_store() {
let dir = create_temp_dir("state_store");
let store_dir = dir.path().to_str().unwrap();
let accessor = Builder::default().root(store_dir).build().unwrap();
let object_store = ObjectStore::new(accessor).finish();
let mut builder = Builder::default();
builder.root(store_dir);
let object_store = ObjectStore::new(builder).unwrap().finish();
let state_store = ObjectStateStore::new(object_store);
let data: Vec<_> = state_store

View File
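
`walk_top_down` now yields only file entries and reads each one eagerly through the cloned store; the resulting `KeyValueStream` is an ordinary `TryStream`. A hedged consumer sketch (paths assumed from the visibility changes above):

    use common_procedure::store::state_store::StateStore;
    use common_procedure::Result;
    use futures::TryStreamExt;

    // Drain every persisted (key, value) pair under a prefix.
    async fn load_all(store: &dyn StateStore, prefix: &str) -> Result<Vec<(String, Vec<u8>)>> {
        let mut stream = store.walk_top_down(prefix).await?;
        let mut pairs = Vec::new();
        while let Some((key, value)) = stream.try_next().await? {
            pairs.push((key, value));
        }
        Ok(pairs)
    }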

@@ -22,6 +22,7 @@ use datatypes::arrow;
use datatypes::arrow::datatypes::DataType as ArrowDatatype;
use datatypes::error::Error as DataTypeError;
use datatypes::prelude::ConcreteDataType;
use snafu::Location;
use statrs::StatsError;
#[derive(Debug, Snafu)]
@@ -31,7 +32,7 @@ pub enum Error {
PyUdf {
// TODO(discord9): find a way that prevent circle depend(query<-script<-query) and can use script's error type
msg: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display(
@@ -46,20 +47,20 @@ pub enum Error {
#[snafu(display("Fail to execute function, source: {}", source))]
ExecuteFunction {
source: DataFusionError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Unsupported input datatypes {:?} in function {}", datatypes, function))]
UnsupportedInputDataType {
function: String,
datatypes: Vec<ConcreteDataType>,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Fail to generate function, source: {}", source))]
GenerateFunction {
source: StatsError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Fail to cast scalar value into vector: {}", source))]
@@ -88,10 +89,7 @@ pub enum Error {
DowncastVector { err_msg: String },
#[snafu(display("Bad accumulator implementation: {}", err_msg))]
BadAccumulatorImpl {
err_msg: String,
backtrace: Backtrace,
},
BadAccumulatorImpl { err_msg: String, location: Location },
#[snafu(display("Invalid input type: {}", err_msg))]
InvalidInputType {
@@ -103,24 +101,24 @@ pub enum Error {
#[snafu(display(
"Illegal input_types status, check if DataFusion has changed its UDAF execution logic"
))]
InvalidInputState { backtrace: Backtrace },
InvalidInputState { location: Location },
#[snafu(display("unexpected: not constant column"))]
InvalidInputCol { backtrace: Backtrace },
InvalidInputCol { location: Location },
#[snafu(display("Not expected to run ExecutionPlan more than once"))]
ExecuteRepeatedly { backtrace: Backtrace },
ExecuteRepeatedly { location: Location },
#[snafu(display("General DataFusion error, source: {}", source))]
GeneralDataFusion {
source: DataFusionError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to execute DataFusion ExecutionPlan, source: {}", source))]
DataFusionExecutionPlan {
source: DataFusionError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display(
@@ -148,7 +146,7 @@ pub enum Error {
TypeCast {
source: ArrowError,
typ: arrow::datatypes::DataType,
backtrace: Backtrace,
location: Location,
},
#[snafu(display(
@@ -157,7 +155,7 @@ pub enum Error {
))]
ArrowCompute {
source: ArrowError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Query engine fail to cast value: {}", source))]
@@ -173,10 +171,7 @@ pub enum Error {
},
#[snafu(display("Invalid function args: {}", err_msg))]
InvalidFuncArgs {
err_msg: String,
backtrace: Backtrace,
},
InvalidFuncArgs { err_msg: String, location: Location },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -216,10 +211,6 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
@@ -236,83 +227,3 @@ impl From<BoxedError> for Error {
Error::ExecutePhysicalPlan { source }
}
}
#[cfg(test)]
mod tests {
use snafu::GenerateImplicitData;
use super::*;
fn throw_df_error() -> std::result::Result<(), DataFusionError> {
Err(DataFusionError::NotImplemented("test".to_string()))
}
fn assert_error(err: &Error, code: StatusCode) {
let inner_err = err.as_any().downcast_ref::<Error>().unwrap();
assert_eq!(code, inner_err.status_code());
assert!(inner_err.backtrace_opt().is_some());
}
#[test]
fn test_datafusion_as_source() {
let err = throw_df_error()
.context(ExecuteFunctionSnafu)
.err()
.unwrap();
assert_error(&err, StatusCode::EngineExecuteQuery);
let err: Error = throw_df_error()
.context(GeneralDataFusionSnafu)
.err()
.unwrap();
assert_error(&err, StatusCode::Unexpected);
let err = throw_df_error()
.context(DataFusionExecutionPlanSnafu)
.err()
.unwrap();
assert_error(&err, StatusCode::Unexpected);
}
#[test]
fn test_execute_repeatedly_error() {
let error = None::<i32>.context(ExecuteRepeatedlySnafu).err().unwrap();
assert_eq!(error.status_code(), StatusCode::Unexpected);
assert!(error.backtrace_opt().is_some());
}
#[test]
fn test_convert_df_recordbatch_stream_error() {
let result: std::result::Result<i32, common_recordbatch::error::Error> =
Err(common_recordbatch::error::Error::PollStream {
source: DataFusionError::Internal("blabla".to_string()),
backtrace: Backtrace::generate(),
});
let error = result
.context(ConvertDfRecordBatchStreamSnafu)
.err()
.unwrap();
assert_eq!(error.status_code(), StatusCode::Internal);
assert!(error.backtrace_opt().is_some());
}
fn raise_datatype_error() -> std::result::Result<(), DataTypeError> {
Err(DataTypeError::Conversion {
from: "test".to_string(),
backtrace: Backtrace::generate(),
})
}
#[test]
fn test_into_vector_error() {
let err = raise_datatype_error()
.context(IntoVectorSnafu {
data_type: ArrowDatatype::Int32,
})
.err()
.unwrap();
assert!(err.backtrace_opt().is_some());
let datatype_err = raise_datatype_error().err().unwrap();
assert_eq!(datatype_err.status_code(), err.status_code());
}
}

View File

@@ -17,6 +17,8 @@ use std::any::Any;
use common_error::ext::BoxedError;
use common_error::prelude::*;
use datatypes::prelude::ConcreteDataType;
use snafu::Location;
pub type Result<T> = std::result::Result<T, Error>;
@@ -26,7 +28,7 @@ pub enum Error {
#[snafu(display("Fail to create datafusion record batch, source: {}", source))]
NewDfRecordBatch {
source: datatypes::arrow::error::ArrowError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Data types error, source: {}", source))]
@@ -42,33 +44,50 @@ pub enum Error {
},
#[snafu(display("Failed to create RecordBatches, reason: {}", reason))]
CreateRecordBatches {
reason: String,
backtrace: Backtrace,
},
CreateRecordBatches { reason: String, location: Location },
#[snafu(display("Failed to convert Arrow schema, source: {}", source))]
SchemaConversion {
source: datatypes::error::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to poll stream, source: {}", source))]
PollStream {
source: datafusion::error::DataFusionError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Fail to format record batch, source: {}", source))]
Format {
source: datatypes::arrow::error::ArrowError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to init Recordbatch stream, source: {}", source))]
InitRecordbatchStream {
source: datafusion_common::DataFusionError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Column {} not exists in table {}", column_name, table_name))]
ColumnNotExists {
column_name: String,
table_name: String,
location: Location,
},
#[snafu(display(
"Failed to cast vector of type '{:?}' to type '{:?}', source: {}",
from_type,
to_type,
source
))]
CastVector {
from_type: ConcreteDataType,
to_type: ConcreteDataType,
#[snafu(backtrace)]
source: datatypes::error::Error,
},
}
@@ -81,18 +100,17 @@ impl ErrorExt for Error {
| Error::CreateRecordBatches { .. }
| Error::PollStream { .. }
| Error::Format { .. }
| Error::InitRecordbatchStream { .. } => StatusCode::Internal,
| Error::InitRecordbatchStream { .. }
| Error::ColumnNotExists { .. } => StatusCode::Internal,
Error::External { source } => source.status_code(),
Error::SchemaConversion { source, .. } => source.status_code(),
Error::SchemaConversion { source, .. } | Error::CastVector { source, .. } => {
source.status_code()
}
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}

View File

@@ -12,14 +12,16 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use datatypes::schema::SchemaRef;
use datatypes::value::Value;
use datatypes::vectors::{Helper, VectorRef};
use serde::ser::{Error, SerializeStruct};
use serde::{Serialize, Serializer};
use snafu::ResultExt;
use snafu::{OptionExt, ResultExt};
use crate::error::{self, Result};
use crate::error::{self, CastVectorSnafu, ColumnNotExistsSnafu, Result};
use crate::DfRecordBatch;
/// A two-dimensional batch of column-oriented data with a defined schema.
@@ -108,6 +110,41 @@ impl RecordBatch {
pub fn rows(&self) -> RecordBatchRowIterator<'_> {
RecordBatchRowIterator::new(self)
}
pub fn column_vectors(
&self,
table_name: &str,
table_schema: SchemaRef,
) -> Result<HashMap<String, VectorRef>> {
let mut vectors = HashMap::with_capacity(self.num_columns());
// column schemas in recordbatch must match its vectors, otherwise it's corrupted
for (vector_schema, vector) in self.schema.column_schemas().iter().zip(self.columns.iter())
{
let column_name = &vector_schema.name;
let column_schema =
table_schema
.column_schema_by_name(column_name)
.context(ColumnNotExistsSnafu {
table_name,
column_name,
})?;
let vector = if vector_schema.data_type != column_schema.data_type {
vector
.cast(&column_schema.data_type)
.with_context(|_| CastVectorSnafu {
from_type: vector.data_type(),
to_type: column_schema.data_type.clone(),
})?
} else {
vector.clone()
};
vectors.insert(column_name.clone(), vector);
}
Ok(vectors)
}
}
impl Serialize for RecordBatch {

View File
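
The new `RecordBatch::column_vectors` helper re-keys a batch's columns by name against a destination table schema, casting any vector whose type has drifted and failing fast on unknown columns. A hedged call sketch (crate paths assumed):

    use std::collections::HashMap;

    use common_recordbatch::error::Result;
    use common_recordbatch::RecordBatch;
    use datatypes::schema::SchemaRef;
    use datatypes::vectors::VectorRef;

    fn conforming_vectors(
        batch: &RecordBatch,
        table_schema: SchemaRef,
    ) -> Result<HashMap<String, VectorRef>> {
        // Mismatched types are cast (CastVector on failure); columns absent
        // from the table schema raise ColumnNotExists.
        batch.column_vectors("my_table", table_schema)
    }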

@@ -5,6 +5,7 @@ edition.workspace = true
license.workspace = true
[dependencies]
async-trait.workspace = true
common-error = { path = "../error" }
common-telemetry = { path = "../telemetry" }
metrics = "0.20"
@@ -12,6 +13,7 @@ once_cell = "1.12"
paste.workspace = true
snafu.workspace = true
tokio.workspace = true
tokio-util.workspace = true
[dev-dependencies]
tokio-test = "0.4"

View File

@@ -15,6 +15,8 @@
use std::any::Any;
use common_error::prelude::*;
use snafu::Location;
use tokio::task::JoinError;
pub type Result<T> = std::result::Result<T, Error>;
@@ -24,16 +26,33 @@ pub enum Error {
#[snafu(display("Failed to build runtime, source: {}", source))]
BuildRuntime {
source: std::io::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Repeated task {} not started yet", name))]
IllegalState { name: String, location: Location },
#[snafu(display(
"Failed to wait for repeated task {} to stop, source: {}",
name,
source
))]
WaitGcTaskStop {
name: String,
source: JoinError,
location: Location,
},
}
impl ErrorExt for Error {
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
fn location_opt(&self) -> Option<common_error::snafu::Location> {
match self {
Error::BuildRuntime { location, .. }
| Error::IllegalState { location, .. }
| Error::WaitGcTaskStop { location, .. } => Some(*location),
}
}
}

View File

@@ -14,7 +14,8 @@
pub mod error;
mod global;
pub mod metric;
mod metrics;
mod repeated_task;
pub mod runtime;
pub use global::{
@@ -23,4 +24,5 @@ pub use global::{
spawn_read, spawn_write, write_runtime,
};
pub use crate::repeated_task::{RepeatedTask, TaskFunction, TaskFunctionRef};
pub use crate::runtime::{Builder, JoinError, JoinHandle, Runtime};

View File

@@ -0,0 +1,174 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use common_error::prelude::ErrorExt;
use common_telemetry::logging;
use snafu::{ensure, OptionExt, ResultExt};
use tokio::sync::Mutex;
use tokio::task::JoinHandle;
use tokio_util::sync::CancellationToken;
use crate::error::{IllegalStateSnafu, Result, WaitGcTaskStopSnafu};
use crate::Runtime;
#[async_trait::async_trait]
pub trait TaskFunction<E: ErrorExt> {
async fn call(&self) -> std::result::Result<(), E>;
fn name(&self) -> &str;
}
pub type TaskFunctionRef<E> = Arc<dyn TaskFunction<E> + Send + Sync>;
pub struct RepeatedTask<E> {
cancel_token: Mutex<Option<CancellationToken>>,
task_handle: Mutex<Option<JoinHandle<()>>>,
started: AtomicBool,
interval: Duration,
task_fn: TaskFunctionRef<E>,
}
impl<E: ErrorExt> std::fmt::Display for RepeatedTask<E> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "RepeatedTask({})", self.task_fn.name())
}
}
impl<E: ErrorExt> std::fmt::Debug for RepeatedTask<E> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_tuple("RepeatedTask")
.field(&self.task_fn.name())
.finish()
}
}
impl<E: ErrorExt + 'static> RepeatedTask<E> {
pub fn new(interval: Duration, task_fn: TaskFunctionRef<E>) -> Self {
Self {
cancel_token: Mutex::new(None),
task_handle: Mutex::new(None),
started: AtomicBool::new(false),
interval,
task_fn,
}
}
pub fn started(&self) -> bool {
self.started.load(Ordering::Relaxed)
}
pub async fn start(&self, runtime: Runtime) -> Result<()> {
let token = CancellationToken::new();
let interval = self.interval;
let child = token.child_token();
let task_fn = self.task_fn.clone();
// TODO(hl): Maybe spawn to a blocking runtime.
let handle = runtime.spawn(async move {
loop {
tokio::select! {
_ = tokio::time::sleep(interval) => {}
_ = child.cancelled() => {
return;
}
}
if let Err(e) = task_fn.call().await {
logging::error!(e; "Failed to run repeated task: {}", task_fn.name());
}
}
});
*self.cancel_token.lock().await = Some(token);
*self.task_handle.lock().await = Some(handle);
self.started.store(true, Ordering::Relaxed);
logging::debug!(
"Repeated task {} started with interval: {:?}",
self.task_fn.name(),
self.interval
);
Ok(())
}
pub async fn stop(&self) -> Result<()> {
let name = self.task_fn.name();
ensure!(
self.started
.compare_exchange(true, false, Ordering::Relaxed, Ordering::Relaxed)
.is_ok(),
IllegalStateSnafu { name }
);
let token = self
.cancel_token
.lock()
.await
.take()
.context(IllegalStateSnafu { name })?;
let handle = self
.task_handle
.lock()
.await
.take()
.context(IllegalStateSnafu { name })?;
token.cancel();
handle.await.context(WaitGcTaskStopSnafu { name })?;
logging::debug!("Repeated task {} stopped", name);
Ok(())
}
}
#[cfg(test)]
mod tests {
use std::sync::atomic::AtomicI32;
use super::*;
struct TickTask {
n: AtomicI32,
}
#[async_trait::async_trait]
impl TaskFunction<crate::error::Error> for TickTask {
fn name(&self) -> &str {
"test"
}
async fn call(&self) -> Result<()> {
self.n.fetch_add(1, Ordering::Relaxed);
Ok(())
}
}
#[tokio::test]
async fn test_repeated_task() {
common_telemetry::init_default_ut_logging();
let task_fn = Arc::new(TickTask {
n: AtomicI32::new(0),
});
let task = RepeatedTask::new(Duration::from_millis(100), task_fn.clone());
task.start(crate::bg_runtime()).await.unwrap();
tokio::time::sleep(Duration::from_millis(550)).await;
task.stop().await.unwrap();
assert_eq!(task_fn.n.load(Ordering::Relaxed), 5);
}
}

View File
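
`RepeatedTask` loops `select!`-ing between the interval sleep and a child cancellation token, so `stop` lets an in-flight `call` finish before the join handle resolves. A hedged usage sketch (assuming `bg_runtime` is re-exported as the test above implies; `FlushTask` is hypothetical):

    use std::sync::Arc;
    use std::time::Duration;

    use common_runtime::error::{Error, Result};
    use common_runtime::{RepeatedTask, TaskFunction};

    struct FlushTask; // hypothetical periodic job

    #[async_trait::async_trait]
    impl TaskFunction<Error> for FlushTask {
        fn name(&self) -> &str {
            "flush"
        }

        async fn call(&self) -> Result<()> {
            // ... periodic work ...
            Ok(())
        }
    }

    async fn run() -> Result<()> {
        let task = RepeatedTask::new(Duration::from_secs(60), Arc::new(FlushTask));
        task.start(common_runtime::bg_runtime()).await?;
        // ... serve ...
        task.stop().await
    }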

@@ -24,7 +24,7 @@ use tokio::sync::oneshot;
pub use tokio::task::{JoinError, JoinHandle};
use crate::error::*;
use crate::metric::*;
use crate::metrics::*;
/// A runtime to run future tasks
#[derive(Clone, Debug)]

View File

@@ -581,7 +581,7 @@ pub fn expression_from_df_expr(
| Expr::ScalarSubquery(..)
| Expr::Placeholder { .. }
| Expr::QualifiedWildcard { .. } => todo!(),
Expr::GroupingSet(_) => UnsupportedExprSnafu {
Expr::GroupingSet(_) | Expr::OuterReferenceColumn(_, _) => UnsupportedExprSnafu {
name: expr.to_string(),
}
.fail()?,

View File

@@ -22,9 +22,10 @@ use catalog::CatalogManagerRef;
use common_catalog::format_full_table_name;
use common_telemetry::debug;
use datafusion::arrow::datatypes::SchemaRef as ArrowSchemaRef;
use datafusion::common::{DFField, DFSchema, OwnedTableReference};
use datafusion::common::{DFField, DFSchema};
use datafusion::datasource::DefaultTableSource;
use datafusion::physical_plan::project_schema;
use datafusion::sql::TableReference;
use datafusion_expr::{Filter, LogicalPlan, TableScan};
use prost::Message;
use session::context::QueryContext;
@@ -240,13 +241,13 @@ impl DFLogicalSubstraitConvertor {
.projection
.map(|mask_expr| self.convert_mask_expression(mask_expr));
let table_ref = OwnedTableReference::Full {
catalog: catalog_name.clone(),
schema: schema_name.clone(),
table: table_name.clone(),
};
let table_ref = TableReference::full(
catalog_name.clone(),
schema_name.clone(),
table_name.clone(),
);
let adapter = table_provider
.resolve_table(table_ref)
.resolve_table(table_ref.clone())
.await
.with_context(|_| ResolveTableSnafu {
table_name: format_full_table_name(&catalog_name, &schema_name, &table_name),
@@ -272,14 +273,13 @@ impl DFLogicalSubstraitConvertor {
};
// Calculate the projected schema
let qualified = &format_full_table_name(&catalog_name, &schema_name, &table_name);
let projected_schema = Arc::new(
project_schema(&stored_schema, projection.as_ref())
.and_then(|x| {
DFSchema::new_with_metadata(
x.fields()
.iter()
.map(|f| DFField::from_qualified(qualified, f.clone()))
.map(|f| DFField::from_qualified(table_ref.clone(), f.clone()))
.collect(),
x.metadata().clone(),
)
@@ -291,7 +291,7 @@ impl DFLogicalSubstraitConvertor {
// TODO(ruihang): Support limit(fetch)
Ok(LogicalPlan::TableScan(TableScan {
table_name: qualified.to_string(),
table_name: table_ref,
source: adapter,
projection,
projected_schema,
@@ -398,7 +398,6 @@ impl DFLogicalSubstraitConvertor {
| LogicalPlan::CreateCatalog(_)
| LogicalPlan::DropView(_)
| LogicalPlan::Distinct(_)
| LogicalPlan::SetVariable(_)
| LogicalPlan::CreateExternalTable(_)
| LogicalPlan::CreateMemoryTable(_)
| LogicalPlan::DropTable(_)
@@ -409,7 +408,8 @@ impl DFLogicalSubstraitConvertor {
| LogicalPlan::Prepare(_)
| LogicalPlan::Dml(_)
| LogicalPlan::DescribeTable(_)
| LogicalPlan::Unnest(_) => InvalidParametersSnafu {
| LogicalPlan::Unnest(_)
| LogicalPlan::Statement(_) => InvalidParametersSnafu {
reason: format!(
"Trying to convert DDL/DML plan to substrait proto, plan: {plan:?}",
),
@@ -535,10 +535,11 @@ fn same_schema_without_metadata(lhs: &ArrowSchemaRef, rhs: &ArrowSchemaRef) -> bool {
mod test {
use catalog::local::{LocalCatalogManager, MemoryCatalogProvider, MemorySchemaProvider};
use catalog::{CatalogList, CatalogProvider, RegisterTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO_ENGINE};
use datafusion::common::{DFSchema, ToDFSchema};
use datafusion_expr::TableSource;
use datatypes::schema::RawSchema;
use table::engine::manager::MemoryTableEngineManager;
use table::requests::CreateTableRequest;
use table::test_util::{EmptyTable, MockTableEngine};
@@ -549,11 +550,11 @@ mod test {
async fn build_mock_catalog_manager() -> CatalogManagerRef {
let mock_table_engine = Arc::new(MockTableEngine::new());
let catalog_manager = Arc::new(
LocalCatalogManager::try_new(mock_table_engine)
.await
.unwrap(),
);
let engine_manager = Arc::new(MemoryTableEngineManager::alias(
MITO_ENGINE.to_string(),
mock_table_engine.clone(),
));
let catalog_manager = Arc::new(LocalCatalogManager::try_new(engine_manager).await.unwrap());
let schema_provider = Arc::new(MemorySchemaProvider::new());
let catalog_provider = Arc::new(MemoryCatalogProvider::new());
catalog_provider
@@ -579,6 +580,7 @@ mod test {
primary_key_indices: vec![],
create_if_not_exists: true,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
}
}
@@ -620,10 +622,13 @@ mod test {
let projected_schema =
Arc::new(DFSchema::new_with_metadata(projected_fields, Default::default()).unwrap());
let table_name = TableReference::full(
DEFAULT_CATALOG_NAME,
DEFAULT_SCHEMA_NAME,
DEFAULT_TABLE_NAME,
);
let table_scan_plan = LogicalPlan::TableScan(TableScan {
table_name: format!(
"{DEFAULT_CATALOG_NAME}.{DEFAULT_SCHEMA_NAME}.{DEFAULT_TABLE_NAME}",
),
table_name,
source: adapter,
projection: Some(projection),
projected_schema,

View File

@@ -18,61 +18,58 @@ use common_error::prelude::{BoxedError, ErrorExt, StatusCode};
use datafusion::error::DataFusionError;
use datatypes::prelude::ConcreteDataType;
use prost::{DecodeError, EncodeError};
use snafu::{Backtrace, ErrorCompat, Snafu};
use snafu::{Location, Snafu};
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Unsupported physical plan: {}", name))]
UnsupportedPlan { name: String, backtrace: Backtrace },
UnsupportedPlan { name: String, location: Location },
#[snafu(display("Unsupported expr: {}", name))]
UnsupportedExpr { name: String, backtrace: Backtrace },
UnsupportedExpr { name: String, location: Location },
#[snafu(display("Unsupported concrete type: {:?}", ty))]
UnsupportedConcreteType {
ty: ConcreteDataType,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Unsupported substrait type: {}", ty))]
UnsupportedSubstraitType { ty: String, backtrace: Backtrace },
UnsupportedSubstraitType { ty: String, location: Location },
#[snafu(display("Failed to decode substrait relation, source: {}", source))]
DecodeRel {
source: DecodeError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to encode substrait relation, source: {}", source))]
EncodeRel {
source: EncodeError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Input plan is empty"))]
EmptyPlan { backtrace: Backtrace },
EmptyPlan { location: Location },
#[snafu(display("Input expression is empty"))]
EmptyExpr { backtrace: Backtrace },
EmptyExpr { location: Location },
#[snafu(display("Missing required field in protobuf, field: {}, plan: {}", field, plan))]
MissingField {
field: String,
plan: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Invalid parameters: {}", reason))]
InvalidParameters {
reason: String,
backtrace: Backtrace,
},
InvalidParameters { reason: String, location: Location },
#[snafu(display("Internal error from DataFusion: {}", source))]
DFInternal {
source: DataFusionError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Internal error: {}", source))]
@@ -82,10 +79,10 @@ pub enum Error {
},
#[snafu(display("Table querying not found: {}", name))]
TableNotFound { name: String, backtrace: Backtrace },
TableNotFound { name: String, location: Location },
#[snafu(display("Cannot convert plan doesn't belong to GreptimeDB"))]
UnknownPlan { backtrace: Backtrace },
UnknownPlan { location: Location },
#[snafu(display(
"Schema from Substrait proto doesn't match with the schema in storage.
@@ -97,7 +94,7 @@ pub enum Error {
SchemaNotMatch {
substrait_schema: datafusion::arrow::datatypes::SchemaRef,
storage_schema: datafusion::arrow::datatypes::SchemaRef,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to convert DataFusion schema, source: {}", source))]
@@ -138,10 +135,6 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
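For readers tracking the migration pattern: each leaf variant swaps a heavyweight `Backtrace` for a `Location` that snafu fills in at the error site. A minimal sketch, assuming snafu 0.7's implicit capture of `Location`-typed fields (`UnsupportedExpr` mirrors the variant above; `reject` is a hypothetical helper):

```rust
use snafu::{Location, Snafu};

#[derive(Debug, Snafu)]
#[snafu(display("Unsupported expr: {}", name))]
struct UnsupportedExpr {
    name: String,
    // Filled in implicitly with the file/line/column where the error was
    // constructed; far cheaper than capturing a full Backtrace.
    location: Location,
}

// Hypothetical helper showing the generated context selector in use.
fn reject(name: &str) -> Result<(), UnsupportedExpr> {
    UnsupportedExprSnafu { name }.fail()
}
```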

View File

@@ -20,6 +20,7 @@
use datafusion::scalar::ScalarValue;
use datatypes::prelude::ConcreteDataType;
use datatypes::types::TimestampType;
use substrait_proto::proto::expression::literal::LiteralType;
use substrait_proto::proto::r#type::{self as s_type, Kind, Nullability};
use substrait_proto::proto::{Type as SType, Type};
@@ -71,7 +72,14 @@ pub fn to_concrete_type(ty: &SType) -> Result<(ConcreteDataType, bool)> {
Kind::Binary(desc) => substrait_kind!(desc, binary_datatype),
Kind::Timestamp(desc) => substrait_kind!(
desc,
ConcreteDataType::timestamp_datatype(Default::default())
ConcreteDataType::timestamp_datatype(
TimestampType::try_from(desc.type_variation_reference as u64)
.map_err(|_| UnsupportedSubstraitTypeSnafu {
ty: format!("{kind:?}")
}
.build())?
.unit()
)
),
Kind::Date(desc) => substrait_kind!(desc, date_datatype),
Kind::Time(_)
@@ -95,7 +103,7 @@ pub fn to_concrete_type(ty: &SType) -> Result<(ConcreteDataType, bool)> {
}
macro_rules! build_substrait_kind {
($kind:ident,$s_type:ident,$nullable:ident,$variation:literal) => {{
($kind:ident,$s_type:ident,$nullable:ident,$variation:expr) => {{
let nullability = match $nullable {
Some(true) => Nullability::Nullable,
Some(false) => Nullability::Required,
@@ -129,8 +137,8 @@ pub fn from_concrete_type(ty: ConcreteDataType, nullability: Option<bool>) -> Re
ConcreteDataType::String(_) => build_substrait_kind!(String, String, nullability, 0),
ConcreteDataType::Date(_) => build_substrait_kind!(Date, Date, nullability, 0),
ConcreteDataType::DateTime(_) => UnsupportedConcreteTypeSnafu { ty }.fail()?,
ConcreteDataType::Timestamp(_) => {
build_substrait_kind!(Timestamp, Timestamp, nullability, 0)
ConcreteDataType::Timestamp(ty) => {
build_substrait_kind!(Timestamp, Timestamp, nullability, ty.precision() as u32)
}
ConcreteDataType::List(_) | ConcreteDataType::Dictionary(_) => {
UnsupportedConcreteTypeSnafu { ty }.fail()?
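With the variation reference wired through, a timestamp's time unit now survives an encode/decode cycle instead of collapsing to the default unit. A hedged round-trip sketch over the two functions above (`roundtrip` is a hypothetical helper; the numeric encoding of the unit stays inside `TimestampType`):

```rust
// Encode the unit into type_variation_reference, then decode it back.
fn roundtrip(ty: ConcreteDataType) -> Result<ConcreteDataType> {
    let s_type = from_concrete_type(ty, Some(true))?;
    let (back, _nullable) = to_concrete_type(&s_type)?;
    Ok(back)
}

// e.g. a nanosecond timestamp should come back as nanosecond,
// not as the Default::default() unit the old code produced.
```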

View File

@@ -12,7 +12,7 @@ deadlock_detection = ["parking_lot"]
backtrace = "0.3"
common-error = { path = "../error" }
console-subscriber = { version = "0.1", optional = true }
metrics = "0.20"
metrics = "0.20.1"
metrics-exporter-prometheus = { version = "0.11", default-features = false }
once_cell = "1.10"
opentelemetry = { version = "0.17", default-features = false, features = [

View File

@@ -38,15 +38,15 @@ macro_rules! error {
($e:expr; target: $target:expr, $($arg:tt)+) => ({
use $crate::common_error::ext::ErrorExt;
use std::error::Error;
match ($e.source(), $e.backtrace_opt()) {
(Some(source), Some(backtrace)) => {
match ($e.source(), $e.location_opt()) {
(Some(source), Some(location)) => {
$crate::log!(
target: $target,
$crate::logging::Level::ERROR,
err.msg = %$e,
err.code = %$e.status_code(),
err.source = source,
err.backtrace = %backtrace,
err.location = %location,
$($arg)+
)
},
@@ -60,13 +60,13 @@ macro_rules! error {
$($arg)+
)
},
(None, Some(backtrace)) => {
(None, Some(location)) => {
$crate::log!(
target: $target,
$crate::logging::Level::ERROR,
err.msg = %$e,
err.code = %$e.status_code(),
err.backtrace = %backtrace,
err.location = %location,
$($arg)+
)
},
@@ -86,14 +86,14 @@ macro_rules! error {
($e:expr; $($arg:tt)+) => ({
use std::error::Error;
use $crate::common_error::ext::ErrorExt;
match ($e.source(), $e.backtrace_opt()) {
(Some(source), Some(backtrace)) => {
match ($e.source(), $e.location_opt()) {
(Some(source), Some(location)) => {
$crate::log!(
$crate::logging::Level::ERROR,
err.msg = %$e,
err.code = %$e.status_code(),
err.source = source,
err.backtrace = %backtrace,
err.location = %location,
$($arg)+
)
},
@@ -106,12 +106,12 @@ macro_rules! error {
$($arg)+
)
},
(None, Some(backtrace)) => {
(None, Some(location)) => {
$crate::log!(
$crate::logging::Level::ERROR,
err.msg = %$e,
err.code = %$e.status_code(),
err.backtrace = %backtrace,
err.location = %location,
$($arg)+
)
},
@@ -263,9 +263,6 @@ mod tests {
error!(err_ref2; "hello {}", "world");
error!("hello {}", "world");
let err = MockError::with_backtrace(StatusCode::Internal);
error!(err; "Error with backtrace hello {}", "world");
let root_err = MockError::with_source(err);
error!(root_err; "Error with source hello {}", "world");
}
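A hedged usage sketch of the updated macro: any `ErrorExt` implementation that reports a `Location` now yields a compact `err.location` field in the structured log output, roughly `file:line:column`, instead of a multi-line backtrace (the sample error is borrowed from `common_time`):

```rust
use common_telemetry::error;
use common_time::error::ParseTimestampSnafu;

fn log_example() {
    let err = ParseTimestampSnafu { raw: "not-a-timestamp" }.build();
    // Expands to a log! call carrying err.msg, err.code and err.location.
    error!(err; "failed to handle request {}", "req-1");
}
```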

View File

@@ -18,7 +18,7 @@ use std::num::TryFromIntError;
use chrono::ParseError;
use common_error::ext::ErrorExt;
use common_error::prelude::StatusCode;
use snafu::{Backtrace, ErrorCompat, Snafu};
use snafu::{Location, Snafu};
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -27,16 +27,16 @@ pub enum Error {
ParseDateStr { raw: String, source: ParseError },
#[snafu(display("Failed to parse a string into Timestamp, raw string: {}", raw))]
ParseTimestamp { raw: String, backtrace: Backtrace },
ParseTimestamp { raw: String, location: Location },
#[snafu(display("Current timestamp overflow, source: {}", source))]
TimestampOverflow {
source: TryFromIntError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Timestamp arithmetic overflow, msg: {}", msg))]
ArithmeticOverflow { msg: String, backtrace: Backtrace },
ArithmeticOverflow { msg: String, location: Location },
}
impl ErrorExt for Error {
@@ -50,33 +50,18 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
fn location_opt(&self) -> Option<common_error::snafu::Location> {
match self {
Error::ParseTimestamp { location, .. }
| Error::TimestampOverflow { location, .. }
| Error::ArithmeticOverflow { location, .. } => Some(*location),
Error::ParseDateStr { .. } => None,
}
}
}
pub type Result<T> = std::result::Result<T, Error>;
#[cfg(test)]
mod tests {
use chrono::NaiveDateTime;
use snafu::ResultExt;
use super::*;
#[test]
fn test_errors() {
let raw = "2020-09-08T13:42:29.190855Z";
let result = NaiveDateTime::parse_from_str(raw, "%F").context(ParseDateStrSnafu { raw });
assert!(matches!(result.err().unwrap(), Error::ParseDateStr { .. }));
assert_eq!(
"Failed to parse a string into Timestamp, raw string: 2020-09-08T13:42:29.190855Z",
ParseTimestampSnafu { raw }.build().to_string()
);
}
}
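One observable consequence of the manual `location_opt` above, sketched as a hedged test: leaf variants that own a `Location` report it, while `ParseDateStr`, which only wraps its `chrono` source, reports `None`:

```rust
use chrono::NaiveDateTime;
use snafu::ResultExt;

fn location_reporting() {
    let leaf = ParseTimestampSnafu { raw: "oops" }.build();
    assert!(leaf.location_opt().is_some());

    // A date-only format against NaiveDateTime fails, producing ParseDateStr.
    let raw = "2020-09-08";
    let wrapped = NaiveDateTime::parse_from_str(raw, "%F")
        .context(ParseDateStrSnafu { raw })
        .unwrap_err();
    assert!(wrapped.location_opt().is_none());
}
```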

View File

@@ -4,10 +4,6 @@ version.workspace = true
edition.workspace = true
license.workspace = true
[features]
default = ["python"]
python = ["dep:script"]
[dependencies]
async-compat = "0.2"
async-stream.workspace = true
@@ -21,6 +17,7 @@ common-base = { path = "../common/base" }
common-catalog = { path = "../common/catalog" }
common-error = { path = "../common/error" }
common-datasource = { path = "../common/datasource" }
common-function = { path = "../common/function" }
common-grpc = { path = "../common/grpc" }
common-grpc-expr = { path = "../common/grpc-expr" }
common-procedure = { path = "../common/procedure" }
@@ -37,6 +34,7 @@ futures = "0.3"
futures-util.workspace = true
hyper = { version = "0.14", features = ["full"] }
humantime-serde = "1.1"
log = "0.4"
log-store = { path = "../log-store" }
meta-client = { path = "../meta-client" }
meta-srv = { path = "../meta-srv", features = ["mock"] }
@@ -47,7 +45,6 @@ pin-project = "1.0"
prost.workspace = true
query = { path = "../query" }
regex = "1.6"
script = { path = "../script", features = ["python"], optional = true }
serde = "1.0"
serde_json = "1.0"
servers = { path = "../servers" }
@@ -65,7 +62,6 @@ tonic.workspace = true
tower = { version = "0.4", features = ["full"] }
tower-http = { version = "0.3", features = ["full"] }
url = "2.3.1"
uuid.workspace = true
[dev-dependencies]
axum-test-helper = { git = "https://github.com/sunng87/axum-test-helper.git", branch = "patch-1" }

View File

@@ -19,6 +19,7 @@ use common_base::readable_size::ReadableSize;
use common_telemetry::info;
use meta_client::MetaClientOptions;
use serde::{Deserialize, Serialize};
use servers::http::HttpOptions;
use servers::Mode;
use storage::config::EngineConfig as StorageEngineConfig;
use storage::scheduler::SchedulerConfig;
@@ -29,6 +30,7 @@ use crate::server::Services;
pub const DEFAULT_OBJECT_STORE_CACHE_SIZE: ReadableSize = ReadableSize(1024);
/// Object storage config
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type")]
pub enum ObjectStoreConfig {
@@ -37,6 +39,16 @@ pub enum ObjectStoreConfig {
Oss(OssConfig),
}
/// Storage engine config
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
#[serde(default)]
pub struct StorageConfig {
#[serde(flatten)]
pub store: ObjectStoreConfig,
pub compaction: CompactionConfig,
pub manifest: RegionManifestConfig,
}
#[derive(Debug, Clone, Serialize, Default, Deserialize)]
#[serde(default)]
pub struct FileConfig {
@@ -107,6 +119,30 @@ impl Default for WalConfig {
}
}
/// Options for region manifest
#[derive(Debug, Clone, Serialize, Deserialize, Eq, PartialEq)]
#[serde(default)]
pub struct RegionManifestConfig {
/// Region manifest checkpoint margin, in number of actions.
/// The manifest service creates a checkpoint every [checkpoint_margin] actions.
pub checkpoint_margin: Option<u16>,
/// Execution interval of the gc task for region manifest logs and checkpoints.
#[serde(with = "humantime_serde")]
pub gc_duration: Option<Duration>,
/// Whether to try creating a manifest checkpoint on region opening
pub checkpoint_on_startup: bool,
}
impl Default for RegionManifestConfig {
fn default() -> Self {
Self {
checkpoint_margin: Some(10u16),
gc_duration: Some(Duration::from_secs(30)),
checkpoint_on_startup: false,
}
}
}
/// Options for table compaction
#[derive(Debug, Clone, Serialize, Deserialize, Eq, PartialEq)]
#[serde(default)]
@@ -117,6 +153,8 @@ pub struct CompactionConfig {
pub max_files_in_level0: usize,
/// Max task number for SST purge task after compaction.
pub max_purge_tasks: usize,
/// Buffer threshold while writing SST files
pub sst_write_buffer_size: ReadableSize,
}
impl Default for CompactionConfig {
@@ -125,6 +163,7 @@ impl Default for CompactionConfig {
max_inflight_tasks: 4,
max_files_in_level0: 8,
max_purge_tasks: 32,
sst_write_buffer_size: ReadableSize::mb(8),
}
}
}
@@ -132,7 +171,7 @@ impl Default for CompactionConfig {
impl From<&DatanodeOptions> for SchedulerConfig {
fn from(value: &DatanodeOptions) -> Self {
Self {
max_inflight_tasks: value.compaction.max_inflight_tasks,
max_inflight_tasks: value.storage.compaction.max_inflight_tasks,
}
}
}
@@ -140,8 +179,12 @@ impl From<&DatanodeOptions> for SchedulerConfig {
impl From<&DatanodeOptions> for StorageEngineConfig {
fn from(value: &DatanodeOptions) -> Self {
Self {
max_files_in_l0: value.compaction.max_files_in_level0,
max_purge_tasks: value.compaction.max_purge_tasks,
manifest_checkpoint_on_startup: value.storage.manifest.checkpoint_on_startup,
manifest_checkpoint_margin: value.storage.manifest.checkpoint_margin,
manifest_gc_duration: value.storage.manifest.gc_duration,
max_files_in_l0: value.storage.compaction.max_files_in_level0,
max_purge_tasks: value.storage.compaction.max_purge_tasks,
sst_write_buffer_size: value.storage.compaction.sst_write_buffer_size,
}
}
}
@@ -190,10 +233,10 @@ pub struct DatanodeOptions {
pub rpc_runtime_size: usize,
pub mysql_addr: String,
pub mysql_runtime_size: usize,
pub http_opts: HttpOptions,
pub meta_client_options: Option<MetaClientOptions>,
pub wal: WalConfig,
pub storage: ObjectStoreConfig,
pub compaction: CompactionConfig,
pub storage: StorageConfig,
pub procedure: Option<ProcedureConfig>,
}
@@ -208,10 +251,10 @@ impl Default for DatanodeOptions {
rpc_runtime_size: 8,
mysql_addr: "127.0.0.1:4406".to_string(),
mysql_runtime_size: 2,
http_opts: HttpOptions::default(),
meta_client_options: None,
wal: WalConfig::default(),
storage: ObjectStoreConfig::default(),
compaction: CompactionConfig::default(),
storage: StorageConfig::default(),
procedure: None,
}
}
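A hedged construction sketch of the new nesting (types and field names as in the diff above): compaction and manifest tuning now live under `storage`, and scheduler/engine configs are derived from that single place. The assert relies on the `CompactionConfig` default of 4 shown earlier.

```rust
fn nested_config_sketch() {
    let opts = DatanodeOptions {
        storage: StorageConfig {
            compaction: CompactionConfig {
                sst_write_buffer_size: ReadableSize::mb(16),
                ..Default::default()
            },
            manifest: RegionManifestConfig {
                checkpoint_on_startup: true,
                ..Default::default()
            },
            ..Default::default()
        },
        ..Default::default()
    };
    // max_inflight_tasks keeps its default of 4; only the buffer size changed.
    assert_eq!(SchedulerConfig::from(&opts).max_inflight_tasks, 4);
}
```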
@@ -220,14 +263,17 @@ impl Default for DatanodeOptions {
/// Datanode service.
pub struct Datanode {
opts: DatanodeOptions,
services: Services,
services: Option<Services>,
instance: InstanceRef,
}
impl Datanode {
pub async fn new(opts: DatanodeOptions) -> Result<Datanode> {
let instance = Arc::new(Instance::new(&opts).await?);
let services = Services::try_new(instance.clone(), &opts).await?;
let services = match opts.mode {
Mode::Distributed => Some(Services::try_new(instance.clone(), &opts).await?),
Mode::Standalone => None,
};
Ok(Self {
opts,
services,
@@ -248,7 +294,11 @@ impl Datanode {
/// Start services of datanode. This method call will block until services are shutdown.
pub async fn start_services(&mut self) -> Result<()> {
self.services.start(&self.opts).await
if let Some(service) = self.services.as_mut() {
service.start(&self.opts).await
} else {
Ok(())
}
}
pub fn get_instance(&self) -> InstanceRef {
@@ -260,7 +310,11 @@ impl Datanode {
}
async fn shutdown_services(&self) -> Result<()> {
self.services.shutdown().await
if let Some(service) = self.services.as_ref() {
service.shutdown().await
} else {
Ok(())
}
}
pub async fn shutdown(&self) -> Result<()> {

View File

@@ -17,9 +17,8 @@ use std::any::Any;
use common_datasource::error::Error as DataSourceError;
use common_error::prelude::*;
use common_procedure::ProcedureId;
use common_recordbatch::error::Error as RecordBatchError;
use datafusion::parquet;
use datatypes::prelude::ConcreteDataType;
use snafu::Location;
use storage::error::Error as StorageError;
use table::error::Error as TableError;
use url::ParseError;
@@ -61,7 +60,7 @@ pub enum Error {
},
#[snafu(display("Incorrect internal state: {}", state))]
IncorrectInternalState { state: String, backtrace: Backtrace },
IncorrectInternalState { state: String, location: Location },
#[snafu(display("Failed to create catalog list, source: {}", source))]
NewCatalog {
@@ -70,10 +69,10 @@ pub enum Error {
},
#[snafu(display("Catalog not found: {}", name))]
CatalogNotFound { name: String, backtrace: Backtrace },
CatalogNotFound { name: String, location: Location },
#[snafu(display("Schema not found: {}", name))]
SchemaNotFound { name: String, backtrace: Backtrace },
SchemaNotFound { name: String, location: Location },
#[snafu(display("Failed to create table: {}, source: {}", table_name, source))]
CreateTable {
@@ -103,10 +102,17 @@ pub enum Error {
source: BoxedError,
},
#[snafu(display("Table engine not found: {}, source: {}", engine_name, source))]
TableEngineNotFound {
engine_name: String,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Table not found: {}", table_name))]
TableNotFound {
table_name: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Column {} not found in table {}", column_name, table_name))]
@@ -116,7 +122,7 @@ pub enum Error {
},
#[snafu(display("Missing timestamp column in request"))]
MissingTimestampColumn { backtrace: Backtrace },
MissingTimestampColumn { location: Location },
#[snafu(display(
"Columns and values number mismatch, columns: {}, values: {}",
@@ -125,24 +131,6 @@ pub enum Error {
))]
ColumnValuesNumberMismatch { columns: usize, values: usize },
#[snafu(display(
"Column type mismatch, column: {}, expected type: {:?}, actual: {:?}",
column,
expected,
actual,
))]
ColumnTypeMismatch {
column: String,
expected: ConcreteDataType,
actual: ConcreteDataType,
},
#[snafu(display("Failed to collect record batch, source: {}", source))]
CollectRecords {
#[snafu(backtrace)]
source: RecordBatchError,
},
#[snafu(display("Failed to parse sql value, source: {}", source))]
ParseSqlValue {
#[snafu(backtrace)]
@@ -150,7 +138,7 @@ pub enum Error {
},
#[snafu(display("Missing insert body"))]
MissingInsertBody { backtrace: Backtrace },
MissingInsertBody { location: Location },
#[snafu(display("Failed to insert value to table: {}, source: {}", table_name, source))]
Insert {
@@ -201,6 +189,9 @@ pub enum Error {
#[snafu(display("Failed to create directory {}, source: {}", dir, source))]
CreateDir { dir: String, source: std::io::Error },
#[snafu(display("Failed to remove directory {}, source: {}", dir, source))]
RemoveDir { dir: String, source: std::io::Error },
#[snafu(display("Failed to open log store, source: {}", source))]
OpenLogStore {
#[snafu(backtrace)]
@@ -214,7 +205,7 @@ pub enum Error {
InitBackend {
config: Box<ObjectStoreConfig>,
source: object_store::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to build backend, source: {}", source))]
@@ -226,7 +217,7 @@ pub enum Error {
#[snafu(display("Failed to parse url, source: {}", source))]
ParseUrl {
source: DataSourceError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Runtime resource error, source: {}", source))]
@@ -252,7 +243,7 @@ pub enum Error {
#[snafu(display("Failed to regex, source: {}", source))]
BuildRegex {
backtrace: Backtrace,
location: Location,
source: regex::Error,
},
@@ -278,10 +269,10 @@ pub enum Error {
},
#[snafu(display("Specified timestamp key or primary key column not found: {}", name))]
KeyColumnNotFound { name: String, backtrace: Backtrace },
KeyColumnNotFound { name: String, location: Location },
#[snafu(display("Illegal primary keys definition: {}", msg))]
IllegalPrimaryKeysDef { msg: String, backtrace: Backtrace },
IllegalPrimaryKeysDef { msg: String, location: Location },
#[snafu(display(
"Constraint in CREATE TABLE statement is not supported yet: {}",
@@ -289,7 +280,7 @@ pub enum Error {
))]
ConstraintNotSupported {
constraint: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to insert into system catalog table, source: {}", source))]
@@ -311,7 +302,7 @@ pub enum Error {
},
#[snafu(display("Schema {} already exists", name))]
SchemaExists { name: String, backtrace: Backtrace },
SchemaExists { name: String, location: Location },
#[snafu(display("Failed to convert alter expr to request: {}", source))]
AlterExprToRequest {
@@ -331,12 +322,6 @@ pub enum Error {
source: sql::error::Error,
},
#[snafu(display("Failed to start script manager, source: {}", source))]
StartScriptManager {
#[snafu(backtrace)]
source: script::error::Error,
},
#[snafu(display(
"Failed to parse string to timestamp, string: {}, source: {}",
raw,
@@ -375,7 +360,7 @@ pub enum Error {
#[snafu(display(
"Table id provider not found, cannot execute SQL directly on datanode in distributed mode"
))]
TableIdProviderNotFound { backtrace: Backtrace },
TableIdProviderNotFound { location: Location },
#[snafu(display("Failed to bump table id, source: {}", source))]
BumpTableId {
@@ -390,13 +375,13 @@ pub enum Error {
},
#[snafu(display("Missing node id option in distributed mode"))]
MissingNodeId { backtrace: Backtrace },
MissingNodeId { location: Location },
#[snafu(display("Missing node id option in distributed mode"))]
MissingMetasrvOpts { backtrace: Backtrace },
MissingMetasrvOpts { location: Location },
#[snafu(display("Missing required field: {}", name))]
MissingRequiredField { name: String, backtrace: Backtrace },
MissingRequiredField { name: String, location: Location },
#[snafu(display("Cannot find requested database: {}-{}", catalog, schema))]
DatabaseNotFound { catalog: String, schema: String },
@@ -416,10 +401,7 @@ pub enum Error {
"No valid default value can be built automatically, column: {}",
column,
))]
ColumnNoneDefaultValue {
column: String,
backtrace: Backtrace,
},
ColumnNoneDefaultValue { column: String, location: Location },
#[snafu(display("Failed to describe schema for given statement, source: {}", source))]
DescribeStatement {
@@ -453,38 +435,32 @@ pub enum Error {
#[snafu(display("Failed to read parquet file, source: {}", source))]
ReadParquet {
source: parquet::errors::ParquetError,
backtrace: Backtrace,
},
#[snafu(display("Failed to write parquet file, source: {}", source))]
WriteParquet {
source: parquet::errors::ParquetError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to poll stream, source: {}", source))]
PollStream {
source: datafusion_common::DataFusionError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to build parquet record batch stream, source: {}", source))]
BuildParquetRecordBatchStream {
backtrace: Backtrace,
location: Location,
source: parquet::errors::ParquetError,
},
#[snafu(display("Failed to read object in path: {}, source: {}", path, source))]
ReadObject {
path: String,
backtrace: Backtrace,
location: Location,
source: object_store::Error,
},
#[snafu(display("Failed to write object into path: {}, source: {}", path, source))]
WriteObject {
path: String,
backtrace: Backtrace,
location: Location,
source: object_store::Error,
},
@@ -531,6 +507,12 @@ pub enum Error {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Failed to copy table to parquet file, source: {}", source))]
WriteParquet {
#[snafu(backtrace)]
source: storage::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
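The new `TableEngineNotFound` variant pairs with the engine-manager lookup introduced later in this change. A hedged sketch of raising it (`engine_of` is a hypothetical helper; the manager and engine types come from the `table` crate):

```rust
use snafu::ResultExt;
use table::engine::manager::TableEngineManagerRef;
use table::engine::TableEngineRef;

fn engine_of(manager: &TableEngineManagerRef, engine_name: &str) -> Result<TableEngineRef> {
    manager
        .engine(engine_name)
        .context(TableEngineNotFoundSnafu { engine_name })
}
```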
@@ -556,8 +538,7 @@ impl ErrorExt for Error {
Insert { source, .. } => source.status_code(),
Delete { source, .. } => source.status_code(),
CollectRecords { source, .. } => source.status_code(),
TableEngineNotFound { source, .. } => source.status_code(),
TableNotFound { .. } => StatusCode::TableNotFound,
ColumnNotFound { .. } => StatusCode::TableColumnNotFound,
@@ -570,7 +551,6 @@ impl ErrorExt for Error {
ConvertSchema { source, .. } | VectorComputation { source } => source.status_code(),
ColumnValuesNumberMismatch { .. }
| ColumnTypeMismatch { .. }
| InvalidSql { .. }
| InvalidUrl { .. }
| InvalidPath { .. }
@@ -599,6 +579,7 @@ impl ErrorExt for Error {
| TcpBind { .. }
| StartGrpc { .. }
| CreateDir { .. }
| RemoveDir { .. }
| InsertSystemCatalog { .. }
| RenameTable { .. }
| Catalog { .. }
@@ -620,7 +601,6 @@ impl ErrorExt for Error {
| WriteObject { .. }
| ListObjects { .. } => StatusCode::StorageUnavailable,
OpenLogStore { source } => source.status_code(),
StartScriptManager { source } => source.status_code(),
OpenStorageEngine { source } => source.status_code(),
RuntimeResource { .. } => StatusCode::RuntimeResourcesExhausted,
MetaClientInit { source, .. } => source.status_code(),
@@ -637,10 +617,6 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
@@ -654,62 +630,10 @@ impl From<Error> for tonic::Status {
#[cfg(test)]
mod tests {
use std::error::Error as StdError;
use std::str::FromStr;
use common_error::ext::BoxedError;
use common_error::mock::MockError;
use super::*;
fn throw_query_error() -> std::result::Result<(), query::error::Error> {
query::error::CatalogNotFoundSnafu {
catalog: String::new(),
}
.fail()
}
fn throw_catalog_error() -> catalog::error::Result<()> {
Err(catalog::error::Error::SchemaProviderOperation {
source: BoxedError::new(MockError::with_backtrace(StatusCode::Internal)),
})
}
fn assert_internal_error(err: &Error) {
assert!(err.backtrace_opt().is_some());
assert_eq!(StatusCode::Internal, err.status_code());
}
fn assert_invalid_argument_error(err: &Error) {
assert!(err.backtrace_opt().is_some());
assert_eq!(StatusCode::InvalidArguments, err.status_code());
}
fn assert_tonic_internal_error(err: Error) {
let status_code = err.status_code();
let err_string = err.to_string();
let s: tonic::Status = err.into();
assert_eq!(s.code(), tonic::Code::Unknown);
let source = s.source().unwrap().downcast_ref::<Error>().unwrap();
assert_eq!(source.status_code(), status_code);
assert_eq!(source.to_string(), err_string);
}
#[test]
fn test_error() {
let err = throw_query_error().context(ExecuteSqlSnafu).err().unwrap();
assert_invalid_argument_error(&err);
assert_tonic_internal_error(err);
let err = throw_catalog_error()
.context(NewCatalogSnafu)
.err()
.unwrap();
assert_internal_error(&err);
assert_tonic_internal_error(err);
}
#[test]
fn test_parse_timestamp() {
let err = common_time::timestamp::Timestamp::from_str("test")

View File

@@ -106,13 +106,7 @@ impl HeartbeatTask {
let mut tx = Self::create_streams(&meta_client, running.clone()).await?;
common_runtime::spawn_bg(async move {
while running.load(Ordering::Acquire) {
let (region_num, region_stats) = match datanode_stat(&catalog_manager_clone).await {
Ok(datanode_stat) => (datanode_stat.0 as i64, datanode_stat.1),
Err(e) => {
error!("failed to get region status, err: {e:?}");
(-1, vec![])
}
};
let (region_num, region_stats) = datanode_stat(&catalog_manager_clone).await;
let req = HeartbeatRequest {
peer: Some(Peer {
@@ -120,7 +114,7 @@ impl HeartbeatTask {
addr: addr.clone(),
}),
node_stat: Some(NodeStat {
region_num,
region_num: region_num as _,
..Default::default()
}),
region_stats,

View File

@@ -23,6 +23,7 @@ use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MIN_USER
use common_error::prelude::BoxedError;
use common_grpc::channel_manager::{ChannelConfig, ChannelManager};
use common_procedure::local::{LocalManager, ManagerConfig};
use common_procedure::store::state_store::ObjectStateStore;
use common_procedure::ProcedureManagerRef;
use common_telemetry::logging::info;
use log_store::raft_engine::log_store::RaftEngineLogStore;
@@ -37,12 +38,15 @@ use object_store::services::{Fs as FsBuilder, Oss as OSSBuilder, S3 as S3Builder
use object_store::{util, ObjectStore, ObjectStoreBuilder};
use query::query_engine::{QueryEngineFactory, QueryEngineRef};
use servers::Mode;
use session::context::QueryContext;
use snafu::prelude::*;
use storage::compaction::{CompactionHandler, CompactionSchedulerRef, SimplePicker};
use storage::config::EngineConfig as StorageEngineConfig;
use storage::scheduler::{LocalScheduler, SchedulerConfig};
use storage::EngineImpl;
use store_api::logstore::LogStore;
use table::engine::manager::MemoryTableEngineManager;
use table::requests::FlushTableRequest;
use table::table::numbers::NumbersTable;
use table::table::TableIdProviderRef;
use table::Table;
@@ -55,11 +59,9 @@ use crate::error::{
NewCatalogSnafu, OpenLogStoreSnafu, RecoverProcedureSnafu, Result, ShutdownInstanceSnafu,
};
use crate::heartbeat::HeartbeatTask;
use crate::script::ScriptExecutor;
use crate::sql::SqlHandler;
use crate::sql::{SqlHandler, SqlRequest};
mod grpc;
mod script;
pub mod sql;
pub(crate) type DefaultEngine = MitoEngine<EngineImpl<RaftEngineLogStore>>;
@@ -69,9 +71,9 @@ pub struct Instance {
pub(crate) query_engine: QueryEngineRef,
pub(crate) sql_handler: SqlHandler,
pub(crate) catalog_manager: CatalogManagerRef,
pub(crate) script_executor: ScriptExecutor,
pub(crate) table_id_provider: Option<TableIdProviderRef>,
pub(crate) heartbeat_task: Option<HeartbeatTask>,
procedure_manager: Option<ProcedureManagerRef>,
}
pub type InstanceRef = Arc<Instance>;
@@ -102,7 +104,7 @@ impl Instance {
meta_client: Option<Arc<MetaClient>>,
compaction_scheduler: CompactionSchedulerRef<RaftEngineLogStore>,
) -> Result<Self> {
let object_store = new_object_store(&opts.storage).await?;
let object_store = new_object_store(&opts.storage.store).await?;
let log_store = Arc::new(create_log_store(&opts.wal).await?);
let table_engine = Arc::new(DefaultEngine::new(
@@ -116,6 +118,8 @@ impl Instance {
object_store,
));
let engine_manager = Arc::new(MemoryTableEngineManager::new(table_engine.clone()));
// create the catalog manager (local or remote, depending on the running mode)
let (catalog_manager, table_id_provider) = match opts.mode {
Mode::Standalone => {
@@ -140,7 +144,7 @@ impl Instance {
)
} else {
let catalog = Arc::new(
catalog::local::LocalCatalogManager::try_new(table_engine.clone())
catalog::local::LocalCatalogManager::try_new(engine_manager)
.await
.context(CatalogSnafu)?,
);
@@ -154,7 +158,7 @@ impl Instance {
Mode::Distributed => {
let catalog = Arc::new(catalog::remote::RemoteCatalogManager::new(
table_engine.clone(),
engine_manager,
opts.node_id.context(MissingNodeIdSnafu)?,
Arc::new(MetaKvBackend {
client: meta_client.as_ref().unwrap().clone(),
@@ -166,8 +170,6 @@ impl Instance {
let factory = QueryEngineFactory::new(catalog_manager.clone());
let query_engine = factory.query_engine();
let script_executor =
ScriptExecutor::new(catalog_manager.clone(), query_engine.clone()).await?;
let heartbeat_task = match opts.mode {
Mode::Standalone => None,
@@ -181,6 +183,7 @@ impl Instance {
};
let procedure_manager = create_procedure_manager(&opts.procedure).await?;
// Register all procedures.
if let Some(procedure_manager) = &procedure_manager {
table_engine.register_procedure_loaders(&**procedure_manager);
table_procedure::register_procedure_loaders(
@@ -189,27 +192,20 @@ impl Instance {
table_engine.clone(),
&**procedure_manager,
);
// Recover procedures.
procedure_manager
.recover()
.await
.context(RecoverProcedureSnafu)?;
}
Ok(Self {
query_engine: query_engine.clone(),
sql_handler: SqlHandler::new(
table_engine.clone(),
Arc::new(MemoryTableEngineManager::new(table_engine.clone())),
catalog_manager.clone(),
query_engine.clone(),
table_engine,
procedure_manager,
procedure_manager.clone(),
),
catalog_manager,
script_executor,
heartbeat_task,
table_id_provider,
procedure_manager,
})
}
@@ -221,6 +217,15 @@ impl Instance {
if let Some(task) = &self.heartbeat_task {
task.start().await?;
}
// Recover procedures after the catalog manager has started, so that
// all tables are accessible through it during recovery.
if let Some(procedure_manager) = &self.procedure_manager {
procedure_manager
.recover()
.await
.context(RecoverProcedureSnafu)?;
}
Ok(())
}
@@ -233,6 +238,8 @@ impl Instance {
.context(ShutdownInstanceSnafu)?;
}
self.flush_tables().await?;
self.sql_handler
.close()
.await
@@ -240,6 +247,43 @@ impl Instance {
.context(ShutdownInstanceSnafu)
}
pub async fn flush_tables(&self) -> Result<()> {
info!("going to flush all schemas");
let schema_list = self
.catalog_manager
.catalog(DEFAULT_CATALOG_NAME)
.map_err(BoxedError::new)
.context(ShutdownInstanceSnafu)?
.expect("Default schema not found")
.schema_names()
.map_err(BoxedError::new)
.context(ShutdownInstanceSnafu)?;
let flush_requests = schema_list
.into_iter()
.map(|schema_name| {
SqlRequest::FlushTable(FlushTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name,
table_name: None,
region_number: None,
wait: Some(true),
})
})
.collect::<Vec<_>>();
let flush_result = futures::future::try_join_all(
flush_requests
.into_iter()
.map(|request| self.sql_handler.execute(request, QueryContext::arc())),
)
.await
.map_err(BoxedError::new)
.context(ShutdownInstanceSnafu);
info!("Flushed all tables result: {}", flush_result.is_ok());
flush_result?;
Ok(())
}
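For a single schema, the request built in the loop looks like this hedged sketch (`flush_public` is hypothetical; the comments on the `None` fields are assumptions inferred from this flush-everything shutdown path):

```rust
async fn flush_public(sql_handler: &SqlHandler) -> Result<Output> {
    let request = SqlRequest::FlushTable(FlushTableRequest {
        catalog_name: DEFAULT_CATALOG_NAME.to_string(),
        schema_name: "public".to_string(),
        table_name: None,    // assumed: no table filter, flush the whole schema
        region_number: None, // assumed: all regions
        wait: Some(true),    // block until the flush completes
    });
    sql_handler.execute(request, QueryContext::arc()).await
}
```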
pub fn sql_handler(&self) -> &SqlHandler {
&self.sql_handler
}
@@ -268,11 +312,22 @@ pub(crate) async fn new_object_store(store_config: &ObjectStoreConfig) -> Result
ObjectStoreConfig::Oss { .. } => new_oss_object_store(store_config).await,
};
// Don't enable retry layer when using local file backend.
let object_store = if !matches!(store_config, ObjectStoreConfig::File(..)) {
object_store.map(|object_store| object_store.layer(RetryLayer::new().with_jitter()))
} else {
object_store
};
object_store.map(|object_store| {
object_store
.layer(RetryLayer::new().with_jitter())
.layer(MetricsLayer)
.layer(LoggingLayer::default())
.layer(
LoggingLayer::default()
// Print the expected error only in DEBUG level.
// See https://docs.rs/opendal/latest/opendal/layers/struct.LoggingLayer.html#method.with_error_level
.with_error_level(Some(log::Level::Debug)),
)
.layer(TracingLayer)
})
}
@@ -290,18 +345,20 @@ pub(crate) async fn new_oss_object_store(store_config: &ObjectStoreConfig) -> Re
);
let mut builder = OSSBuilder::default();
let builder = builder
builder
.root(&root)
.bucket(&oss_config.bucket)
.endpoint(&oss_config.endpoint)
.access_key_id(&oss_config.access_key_id)
.access_key_secret(&oss_config.access_key_secret);
let accessor = builder.build().with_context(|_| error::InitBackendSnafu {
config: store_config.clone(),
})?;
let object_store = ObjectStore::new(builder)
.with_context(|_| error::InitBackendSnafu {
config: store_config.clone(),
})?
.finish();
create_object_store_with_cache(ObjectStore::new(accessor).finish(), store_config)
create_object_store_with_cache(object_store, store_config)
}
fn create_object_store_with_cache(
@@ -354,24 +411,27 @@ pub(crate) async fn new_s3_object_store(store_config: &ObjectStoreConfig) -> Res
);
let mut builder = S3Builder::default();
let mut builder = builder
builder
.root(&root)
.bucket(&s3_config.bucket)
.access_key_id(&s3_config.access_key_id)
.secret_access_key(&s3_config.secret_access_key);
if s3_config.endpoint.is_some() {
builder = builder.endpoint(s3_config.endpoint.as_ref().unwrap());
builder.endpoint(s3_config.endpoint.as_ref().unwrap());
}
if s3_config.region.is_some() {
builder = builder.region(s3_config.region.as_ref().unwrap());
builder.region(s3_config.region.as_ref().unwrap());
}
let accessor = builder.build().with_context(|_| error::InitBackendSnafu {
config: store_config.clone(),
})?;
create_object_store_with_cache(ObjectStore::new(accessor).finish(), store_config)
create_object_store_with_cache(
ObjectStore::new(builder)
.with_context(|_| error::InitBackendSnafu {
config: store_config.clone(),
})?
.finish(),
store_config,
)
}
pub(crate) async fn new_fs_object_store(store_config: &ObjectStoreConfig) -> Result<ObjectStore> {
@@ -385,16 +445,27 @@ pub(crate) async fn new_fs_object_store(store_config: &ObjectStoreConfig) -> Res
info!("The file storage directory is: {}", &data_dir);
let atomic_write_dir = format!("{data_dir}/.tmp/");
if path::Path::new(&atomic_write_dir).exists() {
info!(
"Begin to clean temp storage directory: {}",
&atomic_write_dir
);
fs::remove_dir_all(&atomic_write_dir).context(error::RemoveDirSnafu {
dir: &atomic_write_dir,
})?;
info!("Cleaned temp storage directory: {}", &atomic_write_dir);
}
let accessor = FsBuilder::default()
.root(&data_dir)
.atomic_write_dir(&atomic_write_dir)
.build()
let mut builder = FsBuilder::default();
builder.root(&data_dir).atomic_write_dir(&atomic_write_dir);
let object_store = ObjectStore::new(builder)
.context(error::InitBackendSnafu {
config: store_config.clone(),
})?;
})?
.finish();
Ok(ObjectStore::new(accessor).finish())
Ok(object_store)
}
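The same construction pattern in isolation, as a hedged sketch (assuming GreptimeDB's `object_store` wrapper re-exports the builder, layers, and `Error` type used above): builders are now mutated in place, `ObjectStore::new(builder)?` replaces the old accessor two-step, and layers stack onto the finished store.

```rust
use object_store::layers::{LoggingLayer, RetryLayer};
use object_store::services::Fs as FsBuilder;
use object_store::ObjectStore;

fn demo_fs_store(data_dir: &str) -> Result<ObjectStore, object_store::Error> {
    let mut builder = FsBuilder::default();
    builder
        .root(data_dir)
        .atomic_write_dir(&format!("{data_dir}/.tmp/"));
    Ok(ObjectStore::new(builder)?
        .finish()
        .layer(RetryLayer::new().with_jitter())
        .layer(LoggingLayer::default()))
}
```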
/// Create metasrv client instance and spawn heartbeat loop.
@@ -460,11 +531,15 @@ pub(crate) async fn create_procedure_manager(
);
let object_store = new_object_store(&procedure_config.store).await?;
let state_store = Arc::new(ObjectStateStore::new(object_store));
let manager_config = ManagerConfig {
object_store,
max_retry_times: procedure_config.max_retry_times,
retry_delay: procedure_config.retry_delay,
};
Ok(Some(Arc::new(LocalManager::new(manager_config))))
Ok(Some(Arc::new(LocalManager::new(
manager_config,
state_store,
))))
}

View File

@@ -21,7 +21,7 @@ use common_query::Output;
use query::parser::{PromQuery, QueryLanguageParser, QueryStatement};
use query::plan::LogicalPlan;
use servers::query_handler::grpc::GrpcQueryHandler;
use session::context::QueryContextRef;
use session::context::{QueryContext, QueryContextRef};
use snafu::prelude::*;
use sql::statements::statement::Statement;
use substrait::{DFLogicalSubstraitConvertor, SubstraitPlan};
@@ -53,7 +53,7 @@ impl Instance {
.context(DecodeLogicalPlanSnafu)?;
self.query_engine
.execute(&LogicalPlan::DfPlan(logical_plan))
.execute(LogicalPlan::DfPlan(logical_plan), QueryContext::arc())
.await
.context(ExecuteLogicalPlanSnafu)
}
@@ -69,11 +69,11 @@ impl Instance {
let plan = self
.query_engine
.planner()
.plan(stmt, ctx)
.plan(stmt, ctx.clone())
.await
.context(PlanStatementSnafu)?;
self.query_engine
.execute(&plan)
.execute(plan, ctx)
.await
.context(ExecuteLogicalPlanSnafu)
}
@@ -159,6 +159,7 @@ mod test {
alter_expr, AddColumn, AddColumns, AlterExpr, Column, ColumnDataType, ColumnDef,
CreateDatabaseExpr, CreateTableExpr, QueryRequest,
};
use common_catalog::consts::MITO_ENGINE;
use common_recordbatch::RecordBatches;
use datatypes::prelude::*;
use query::parser::QueryLanguageParser;
@@ -175,7 +176,7 @@ mod test {
.plan(stmt, QueryContext::arc())
.await
.unwrap();
engine.execute(&plan).await.unwrap()
engine.execute(plan, QueryContext::arc()).await.unwrap()
}
#[tokio::test(flavor = "multi_thread")]
@@ -213,6 +214,7 @@ mod test {
},
],
time_index: "ts".to_string(),
engine: MITO_ENGINE.to_string(),
..Default::default()
})),
});

View File

@@ -19,13 +19,10 @@ use common_error::prelude::BoxedError;
use common_query::Output;
use common_telemetry::logging::info;
use common_telemetry::timer;
use futures::StreamExt;
use query::error::QueryExecutionSnafu;
use query::parser::{PromQuery, QueryLanguageParser, QueryStatement};
use query::query_engine::StatementHandler;
use servers::error as server_error;
use servers::prom::PromHandler;
use session::context::{QueryContext, QueryContextRef};
use session::context::QueryContextRef;
use snafu::prelude::*;
use sql::ast::ObjectName;
use sql::statements::copy::{CopyTable, CopyTableArgument};
@@ -38,9 +35,8 @@ use crate::error::{
TableIdProviderNotFoundSnafu,
};
use crate::instance::Instance;
use crate::metric;
use crate::sql::insert::InsertRequests;
use crate::sql::SqlRequest;
use crate::metrics;
use crate::sql::{SqlHandler, SqlRequest};
impl Instance {
pub async fn execute_stmt(
@@ -50,37 +46,10 @@ impl Instance {
) -> Result<Output> {
match stmt {
QueryStatement::Sql(Statement::Insert(insert)) => {
let requests = self
.sql_handler
.insert_to_requests(self.catalog_manager.clone(), *insert, query_ctx.clone())
.await?;
match requests {
InsertRequests::Request(request) => {
self.sql_handler.execute(request, query_ctx.clone()).await
}
InsertRequests::Stream(mut s) => {
let mut rows = 0;
while let Some(request) = s.next().await {
match self
.sql_handler
.execute(request?, query_ctx.clone())
.await?
{
Output::AffectedRows(n) => {
rows += n;
}
_ => unreachable!(),
}
}
Ok(Output::AffectedRows(rows))
}
}
}
QueryStatement::Sql(Statement::Delete(delete)) => {
let request = SqlRequest::Delete(*delete);
self.sql_handler.execute(request, query_ctx).await
let request =
SqlHandler::insert_to_request(self.catalog_manager.clone(), *insert, query_ctx)
.await?;
self.sql_handler.insert(request).await
}
QueryStatement::Sql(Statement::CreateDatabase(create_database)) => {
let request = CreateDatabaseRequest {
@@ -119,6 +88,9 @@ impl Instance {
.execute(SqlRequest::CreateTable(request), query_ctx)
.await
}
QueryStatement::Sql(Statement::CreateExternalTable(_create_external_table)) => {
unimplemented!()
}
QueryStatement::Sql(Statement::Alter(alter_table)) => {
let name = alter_table.table_name().clone();
let (catalog, schema, table) = table_idents_to_full_name(&name, query_ctx.clone())?;
@@ -150,11 +122,6 @@ impl Instance {
.execute(SqlRequest::ShowTables(show_tables), query_ctx)
.await
}
QueryStatement::Sql(Statement::DescribeTable(describe_table)) => {
self.sql_handler
.execute(SqlRequest::DescribeTable(describe_table), query_ctx)
.await
}
QueryStatement::Sql(Statement::ShowCreateTable(_show_create_table)) => {
unimplemented!("SHOW CREATE TABLE is unimplemented yet");
}
@@ -210,6 +177,8 @@ impl Instance {
| QueryStatement::Sql(Statement::Explain(_))
| QueryStatement::Sql(Statement::Use(_))
| QueryStatement::Sql(Statement::Tql(_))
| QueryStatement::Sql(Statement::Delete(_))
| QueryStatement::Sql(Statement::DescribeTable(_))
| QueryStatement::Promql(_) => unreachable!(),
}
}
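The insert arm reduces to a hedged two-step sketch: build one request from the statement, then hand it to the handler; the streaming variant visible in the removed lines no longer flows through this dispatch (`handle_insert` is a hypothetical wrapper; `Insert` is the parsed statement type):

```rust
async fn handle_insert(
    handler: &SqlHandler,
    catalog_manager: CatalogManagerRef,
    insert: Insert,
    query_ctx: QueryContextRef,
) -> Result<Output> {
    let request = SqlHandler::insert_to_request(catalog_manager, insert, query_ctx).await?;
    handler.insert(request).await
}
```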
@@ -219,17 +188,20 @@ impl Instance {
promql: &PromQuery,
query_ctx: QueryContextRef,
) -> Result<Output> {
let _timer = timer!(metric::METRIC_HANDLE_PROMQL_ELAPSED);
let _timer = timer!(metrics::METRIC_HANDLE_PROMQL_ELAPSED);
let stmt = QueryLanguageParser::parse_promql(promql).context(ExecuteSqlSnafu)?;
let engine = self.query_engine();
let plan = engine
.planner()
.plan(stmt, query_ctx)
.plan(stmt, query_ctx.clone())
.await
.context(PlanStatementSnafu)?;
engine.execute(&plan).await.context(ExecuteStatementSnafu)
engine
.execute(plan, query_ctx)
.await
.context(ExecuteStatementSnafu)
}
// TODO(ruihang): merge this and `execute_promql` after #951 landed
@@ -262,10 +234,13 @@ impl Instance {
let engine = self.query_engine();
let plan = engine
.planner()
.plan(stmt, query_ctx)
.plan(stmt, query_ctx.clone())
.await
.context(PlanStatementSnafu)?;
engine.execute(&plan).await.context(ExecuteStatementSnafu)
engine
.execute(plan, query_ctx)
.await
.context(ExecuteStatementSnafu)
}
}
@@ -314,23 +289,6 @@ impl StatementHandler for Instance {
}
}
#[async_trait]
impl PromHandler for Instance {
async fn do_query(&self, query: &PromQuery) -> server_error::Result<Output> {
let _timer = timer!(metric::METRIC_HANDLE_PROMQL_ELAPSED);
self.execute_promql(query, QueryContext::arc())
.await
.map_err(BoxedError::new)
.with_context(|_| {
let query_literal = format!("{query:?}");
server_error::ExecuteQuerySnafu {
query: query_literal,
}
})
}
}
#[cfg(test)]
mod test {
use std::sync::Arc;

View File

@@ -19,9 +19,8 @@ pub mod datanode;
pub mod error;
mod heartbeat;
pub mod instance;
pub mod metric;
pub mod metrics;
mod mock;
mod script;
pub mod server;
pub mod sql;
#[cfg(test)]

View File

@@ -15,6 +15,4 @@
//! datanode metrics
pub const METRIC_HANDLE_SQL_ELAPSED: &str = "datanode.handle_sql_elapsed";
pub const METRIC_HANDLE_SCRIPTS_ELAPSED: &str = "datanode.handle_scripts_elapsed";
pub const METRIC_RUN_SCRIPT_ELAPSED: &str = "datanode.run_script_elapsed";
pub const METRIC_HANDLE_PROMQL_ELAPSED: &str = "datanode.handle_promql_elapsed";

View File

@@ -18,9 +18,12 @@ use std::sync::Arc;
use common_runtime::Builder as RuntimeBuilder;
use servers::grpc::GrpcServer;
use servers::http::{HttpServer, HttpServerBuilder};
use servers::metrics_handler::MetricsHandler;
use servers::query_handler::grpc::ServerGrpcQueryHandlerAdaptor;
use servers::server::Server;
use snafu::ResultExt;
use tokio::select;
use crate::datanode::DatanodeOptions;
use crate::error::{
@@ -33,6 +36,7 @@ pub mod grpc;
/// All rpc services.
pub struct Services {
grpc_server: GrpcServer,
http_server: HttpServer,
}
impl Services {
@@ -51,6 +55,9 @@ impl Services {
None,
grpc_runtime,
),
http_server: HttpServerBuilder::new(opts.http_opts.clone())
.with_metrics_handler(MetricsHandler)
.build(),
})
}
@@ -58,10 +65,15 @@ impl Services {
let grpc_addr: SocketAddr = opts.rpc_addr.parse().context(ParseAddrSnafu {
addr: &opts.rpc_addr,
})?;
self.grpc_server
.start(grpc_addr)
.await
.context(StartServerSnafu)?;
let http_addr = opts.http_opts.addr.parse().context(ParseAddrSnafu {
addr: &opts.http_opts.addr,
})?;
let grpc = self.grpc_server.start(grpc_addr);
let http = self.http_server.start(http_addr);
select!(
v = grpc => v.context(StartServerSnafu)?,
v = http => v.context(StartServerSnafu)?,
);
Ok(())
}
@@ -69,6 +81,11 @@ impl Services {
self.grpc_server
.shutdown()
.await
.context(ShutdownServerSnafu)
.context(ShutdownServerSnafu)?;
self.http_server
.shutdown()
.await
.context(ShutdownServerSnafu)?;
Ok(())
}
}
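The start path drives both servers concurrently; below is a minimal, generic sketch of the same `select!` shape (hypothetical `run_both` with `String` errors for brevity). Note the design choice this encodes: whichever server completes or fails first ends `start`, rather than waiting for both.

```rust
use tokio::select;

async fn run_both(
    grpc: impl std::future::Future<Output = Result<(), String>>,
    http: impl std::future::Future<Output = Result<(), String>>,
) -> Result<(), String> {
    select!(
        v = grpc => v?,
        v = http => v?,
    );
    Ok(())
}
```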

View File

@@ -106,7 +106,7 @@ impl Instance {
#[cfg(test)]
mod tests {
use api::v1::{column_def, ColumnDataType, ColumnDef, TableId};
use common_catalog::consts::MIN_USER_TABLE_ID;
use common_catalog::consts::{MIN_USER_TABLE_ID, MITO_ENGINE};
use common_grpc_expr::create_table_schema;
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::{ColumnDefaultConstraint, ColumnSchema, RawSchema};
@@ -237,6 +237,7 @@ mod tests {
id: MIN_USER_TABLE_ID,
}),
region_ids: vec![0],
engine: MITO_ENGINE.to_string(),
}
}

View File

@@ -12,24 +12,25 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use catalog::CatalogManagerRef;
use common_error::prelude::BoxedError;
use common_procedure::ProcedureManagerRef;
use common_query::Output;
use common_telemetry::error;
use query::query_engine::QueryEngineRef;
use query::sql::{describe_table, show_databases, show_tables};
use query::sql::{show_databases, show_tables};
use session::context::QueryContextRef;
use snafu::{OptionExt, ResultExt};
use sql::statements::delete::Delete;
use sql::statements::describe::DescribeTable;
use sql::statements::show::{ShowDatabases, ShowTables};
use table::engine::{EngineContext, TableEngineProcedureRef, TableEngineRef, TableReference};
use table::engine::manager::TableEngineManagerRef;
use table::engine::{TableEngineProcedureRef, TableEngineRef, TableReference};
use table::requests::*;
use table::TableRef;
use table::{Table, TableRef};
use crate::error::{
self, CloseTableEngineSnafu, ExecuteSqlSnafu, GetTableSnafu, Result, TableNotFoundSnafu,
self, CloseTableEngineSnafu, ExecuteSqlSnafu, Result, TableEngineNotFoundSnafu,
TableNotFoundSnafu,
};
use crate::instance::sql::table_idents_to_full_name;
@@ -37,14 +38,12 @@ mod alter;
mod copy_table_from;
mod copy_table_to;
mod create;
mod delete;
mod drop_table;
mod flush_table;
pub(crate) mod insert;
#[derive(Debug)]
pub enum SqlRequest {
Insert(InsertRequest),
CreateTable(CreateTableRequest),
CreateDatabase(CreateDatabaseRequest),
Alter(AlterTableRequest),
@@ -52,32 +51,28 @@ pub enum SqlRequest {
FlushTable(FlushTableRequest),
ShowDatabases(ShowDatabases),
ShowTables(ShowTables),
DescribeTable(DescribeTable),
Delete(Delete),
CopyTable(CopyTableRequest),
}
// Handler to execute SQL statements other than queries
#[derive(Clone)]
pub struct SqlHandler {
table_engine: TableEngineRef,
table_engine_manager: TableEngineManagerRef,
catalog_manager: CatalogManagerRef,
query_engine: QueryEngineRef,
engine_procedure: TableEngineProcedureRef,
procedure_manager: Option<ProcedureManagerRef>,
}
impl SqlHandler {
pub fn new(
table_engine: TableEngineRef,
table_engine_manager: TableEngineManagerRef,
catalog_manager: CatalogManagerRef,
query_engine: QueryEngineRef,
engine_procedure: TableEngineProcedureRef,
procedure_manager: Option<ProcedureManagerRef>,
) -> Self {
Self {
table_engine,
table_engine_manager,
catalog_manager,
query_engine,
engine_procedure,
procedure_manager,
}
@@ -89,12 +84,10 @@ impl SqlHandler {
// there, instead of executing here in a "static" fashion.
pub async fn execute(&self, request: SqlRequest, query_ctx: QueryContextRef) -> Result<Output> {
let result = match request {
SqlRequest::Insert(req) => self.insert(req).await,
SqlRequest::CreateTable(req) => self.create_table(req).await,
SqlRequest::CreateDatabase(req) => self.create_database(req, query_ctx.clone()).await,
SqlRequest::Alter(req) => self.alter(req).await,
SqlRequest::DropTable(req) => self.drop_table(req).await,
SqlRequest::Delete(req) => self.delete(query_ctx.clone(), req).await,
SqlRequest::CopyTable(req) => match req.direction {
CopyDirection::Export => self.copy_table_to(req).await,
CopyDirection::Import => self.copy_table_from(req).await,
@@ -106,19 +99,6 @@ impl SqlHandler {
show_tables(req, self.catalog_manager.clone(), query_ctx.clone())
.context(ExecuteSqlSnafu)
}
SqlRequest::DescribeTable(req) => {
let (catalog, schema, table) =
table_idents_to_full_name(req.name(), query_ctx.clone())?;
let table = self
.catalog_manager
.table(&catalog, &schema, &table)
.await
.context(error::CatalogSnafu)?
.with_context(|| TableNotFoundSnafu {
table_name: req.name().to_string(),
})?;
describe_table(table).context(ExecuteSqlSnafu)
}
SqlRequest::FlushTable(req) => self.flush_table(req).await,
};
if let Err(e) = &result {
@@ -127,262 +107,41 @@ impl SqlHandler {
result
}
pub(crate) fn get_table(&self, table_ref: &TableReference) -> Result<TableRef> {
self.table_engine
.get_table(&EngineContext::default(), table_ref)
.with_context(|_| GetTableSnafu {
table_name: table_ref.to_string(),
})?
pub async fn get_table(&self, table_ref: &TableReference<'_>) -> Result<TableRef> {
let TableReference {
catalog,
schema,
table,
} = table_ref;
let table = self
.catalog_manager
.table(catalog, schema, table)
.await
.context(error::CatalogSnafu)?
.with_context(|| TableNotFoundSnafu {
table_name: table_ref.to_string(),
})
})?;
Ok(table)
}
pub fn table_engine(&self) -> TableEngineRef {
self.table_engine.clone()
pub fn table_engine_manager(&self) -> TableEngineManagerRef {
self.table_engine_manager.clone()
}
pub fn table_engine(&self, table: Arc<dyn Table>) -> Result<TableEngineRef> {
let engine_name = &table.table_info().meta.engine;
let engine = self
.table_engine_manager
.engine(engine_name)
.context(TableEngineNotFoundSnafu { engine_name })?;
Ok(engine)
}
pub async fn close(&self) -> Result<()> {
self.table_engine
self.table_engine_manager
.close()
.await
.map_err(BoxedError::new)
.context(CloseTableEngineSnafu)
}
}
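A hedged usage sketch of the reworked lookup path (`engine_of_demo` is hypothetical): tables are resolved through the catalog manager instead of one hard-wired engine, and the engine is then found by the name recorded in the table's metadata.

```rust
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use table::engine::{TableEngineRef, TableReference};

async fn engine_of_demo(handler: &SqlHandler) -> Result<TableEngineRef> {
    let table_ref = TableReference::full(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "demo");
    let table = handler.get_table(&table_ref).await?;
    handler.table_engine(table)
}
```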
#[cfg(test)]
mod tests {
use std::any::Any;
use std::sync::Arc;
use catalog::{CatalogManager, RegisterTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_query::logical_plan::Expr;
use common_query::physical_plan::PhysicalPlanRef;
use common_test_util::temp_dir::create_temp_dir;
use common_time::timestamp::Timestamp;
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::{ColumnSchema, SchemaBuilder, SchemaRef};
use datatypes::value::Value;
use futures::StreamExt;
use log_store::NoopLogStore;
use mito::config::EngineConfig as TableEngineConfig;
use mito::engine::MitoEngine;
use object_store::services::Fs as Builder;
use object_store::{ObjectStore, ObjectStoreBuilder};
use query::parser::{QueryLanguageParser, QueryStatement};
use query::QueryEngineFactory;
use session::context::QueryContext;
use sql::statements::statement::Statement;
use storage::compaction::noop::NoopCompactionScheduler;
use storage::config::EngineConfig as StorageEngineConfig;
use storage::EngineImpl;
use table::error::Result as TableResult;
use table::metadata::TableInfoRef;
use table::Table;
use super::*;
use crate::error::Error;
use crate::sql::insert::InsertRequests;
struct DemoTable;
#[async_trait::async_trait]
impl Table for DemoTable {
fn as_any(&self) -> &dyn Any {
self
}
fn schema(&self) -> SchemaRef {
let column_schemas = vec![
ColumnSchema::new("host", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("cpu", ConcreteDataType::float64_datatype(), true),
ColumnSchema::new("memory", ConcreteDataType::float64_datatype(), true),
ColumnSchema::new(
"ts",
ConcreteDataType::timestamp_millisecond_datatype(),
true,
)
.with_time_index(true),
];
Arc::new(
SchemaBuilder::try_from(column_schemas)
.unwrap()
.build()
.unwrap(),
)
}
fn table_info(&self) -> TableInfoRef {
unimplemented!()
}
async fn scan(
&self,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
_limit: Option<usize>,
) -> TableResult<PhysicalPlanRef> {
unimplemented!();
}
}
#[tokio::test]
async fn test_statement_to_request() {
let dir = create_temp_dir("setup_test_engine_and_table");
let store_dir = dir.path().to_string_lossy();
let accessor = Builder::default().root(&store_dir).build().unwrap();
let object_store = ObjectStore::new(accessor).finish();
let compaction_scheduler = Arc::new(NoopCompactionScheduler::default());
let sql = r#"insert into demo(host, cpu, memory, ts) values
('host1', 66.6, 1024, 1655276557000),
('host2', 88.8, 333.3, 1655276558000)
"#;
let table_engine = Arc::new(MitoEngine::<EngineImpl<NoopLogStore>>::new(
TableEngineConfig::default(),
EngineImpl::new(
StorageEngineConfig::default(),
Arc::new(NoopLogStore::default()),
object_store.clone(),
compaction_scheduler,
),
object_store,
));
let catalog_list = Arc::new(
catalog::local::LocalCatalogManager::try_new(table_engine.clone())
.await
.unwrap(),
);
catalog_list.start().await.unwrap();
assert!(catalog_list
.register_table(RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "demo".to_string(),
table_id: 1,
table: Arc::new(DemoTable),
})
.await
.unwrap());
let factory = QueryEngineFactory::new(catalog_list.clone());
let query_engine = factory.query_engine();
let sql_handler = SqlHandler::new(
table_engine.clone(),
catalog_list.clone(),
query_engine.clone(),
table_engine,
None,
);
let stmt = match QueryLanguageParser::parse_sql(sql).unwrap() {
QueryStatement::Sql(Statement::Insert(i)) => i,
_ => {
unreachable!()
}
};
let request = sql_handler
.insert_to_requests(catalog_list.clone(), *stmt, QueryContext::arc())
.await
.unwrap();
match request {
InsertRequests::Request(SqlRequest::Insert(req)) => {
assert_eq!(req.table_name, "demo");
let columns_values = req.columns_values;
assert_eq!(4, columns_values.len());
let hosts = &columns_values["host"];
assert_eq!(2, hosts.len());
assert_eq!(Value::from("host1"), hosts.get(0));
assert_eq!(Value::from("host2"), hosts.get(1));
let cpus = &columns_values["cpu"];
assert_eq!(2, cpus.len());
assert_eq!(Value::from(66.6f64), cpus.get(0));
assert_eq!(Value::from(88.8f64), cpus.get(1));
let memories = &columns_values["memory"];
assert_eq!(2, memories.len());
assert_eq!(Value::from(1024f64), memories.get(0));
assert_eq!(Value::from(333.3f64), memories.get(1));
let ts = &columns_values["ts"];
assert_eq!(2, ts.len());
assert_eq!(
Value::from(Timestamp::new_millisecond(1655276557000i64)),
ts.get(0)
);
assert_eq!(
Value::from(Timestamp::new_millisecond(1655276558000i64)),
ts.get(1)
);
}
_ => {
panic!("Not supposed to reach here")
}
}
// test insert into select
// type mismatch
let sql = "insert into demo(ts) select number from numbers limit 3";
let stmt = match QueryLanguageParser::parse_sql(sql).unwrap() {
QueryStatement::Sql(Statement::Insert(i)) => i,
_ => {
unreachable!()
}
};
let request = sql_handler
.insert_to_requests(catalog_list.clone(), *stmt, QueryContext::arc())
.await
.unwrap();
match request {
InsertRequests::Stream(mut stream) => {
assert!(matches!(
stream.next().await.unwrap().unwrap_err(),
Error::ColumnTypeMismatch { .. }
));
}
_ => unreachable!(),
}
let sql = "insert into demo(cpu) select cast(number as double) from numbers limit 3";
let stmt = match QueryLanguageParser::parse_sql(sql).unwrap() {
QueryStatement::Sql(Statement::Insert(i)) => i,
_ => {
unreachable!()
}
};
let request = sql_handler
.insert_to_requests(catalog_list.clone(), *stmt, QueryContext::arc())
.await
.unwrap();
match request {
InsertRequests::Stream(mut stream) => {
let mut times = 0;
while let Some(Ok(SqlRequest::Insert(req))) = stream.next().await {
times += 1;
assert_eq!(req.table_name, "demo");
let columns_values = req.columns_values;
assert_eq!(1, columns_values.len());
let memories = &columns_values["cpu"];
assert_eq!(3, memories.len());
assert_eq!(Value::from(0.0f64), memories.get(0));
assert_eq!(Value::from(1.0f64), memories.get(1));
assert_eq!(Value::from(2.0f64), memories.get(2));
}
assert_eq!(1, times);
}
_ => unreachable!(),
}
}
}

View File

@@ -35,20 +35,24 @@ impl SqlHandler {
let full_table_name = table_ref.to_string();
// fetches table via catalog
let table = self.get_table(&table_ref).await?;
// checks the table engine exist
let table_engine = self.table_engine(table)?;
ensure!(
self.table_engine.table_exists(&ctx, &table_ref),
table_engine.table_exists(&ctx, &table_ref),
error::TableNotFoundSnafu {
table_name: &full_table_name,
}
);
let is_rename = req.is_rename_table();
let table =
self.table_engine
.alter_table(&ctx, req)
.await
.context(error::AlterTableSnafu {
table_name: full_table_name,
})?;
let table = table_engine
.alter_table(&ctx, req)
.await
.context(error::AlterTableSnafu {
table_name: full_table_name,
})?;
if is_rename {
let table_info = &table.table_info();
let rename_table_req = RenameTableRequest {

View File

@@ -40,7 +40,7 @@ impl SqlHandler {
schema: &req.schema_name,
table: &req.table_name,
};
let table = self.get_table(&table_ref)?;
let table = self.get_table(&table_ref).await?;
let (_schema, _host, path) = parse_url(&req.location).context(error::ParseUrlSnafu)?;
@@ -48,7 +48,6 @@ impl SqlHandler {
build_backend(&req.location, req.connection).context(error::BuildBackendSnafu)?;
let (dir, filename) = find_dir_and_filename(&path);
let regex = req
.pattern
.as_ref()
@@ -62,16 +61,18 @@ impl SqlHandler {
Source::Dir
};
let lister = Lister::new(object_store, source, dir, regex);
let lister = Lister::new(object_store.clone(), source, dir, regex);
let objects = lister.list().await.context(error::ListObjectsSnafu)?;
let entries = lister.list().await.context(error::ListObjectsSnafu)?;
let mut buf: Vec<RecordBatch> = Vec::new();
for obj in objects.iter() {
let reader = obj.reader().await.context(error::ReadObjectSnafu {
path: &obj.path().to_string(),
})?;
for entry in entries.iter() {
let path = entry.path();
let reader = object_store
.reader(path)
.await
.context(error::ReadObjectSnafu { path })?;
let buf_reader = BufReader::new(reader.compat());

View File

@@ -12,24 +12,17 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-use std::pin::Pin;
-
-use common_datasource;
 use common_datasource::object_store::{build_backend, parse_url};
 use common_query::physical_plan::SessionContext;
 use common_query::Output;
 use common_recordbatch::adapter::DfRecordBatchStreamAdapter;
-use datafusion::parquet::arrow::ArrowWriter;
-use datafusion::parquet::basic::{Compression, Encoding};
-use datafusion::parquet::file::properties::WriterProperties;
 use datafusion::physical_plan::RecordBatchStream;
-use futures::TryStreamExt;
-use object_store::ObjectStore;
 use snafu::ResultExt;
+use storage::sst::SstInfo;
+use storage::{ParquetWriter, Source};
 use table::engine::TableReference;
 use table::requests::CopyTableRequest;

-use crate::error::{self, Result};
+use crate::error::{self, Result, WriteParquetSnafu};
 use crate::sql::SqlHandler;

@@ -39,7 +32,7 @@ impl SqlHandler {
             schema: &req.schema_name,
             table: &req.table_name,
         };
-        let table = self.get_table(&table_ref)?;
+        let table = self.get_table(&table_ref).await?;

         let stream = table
             .scan(None, &[], None)
@@ -51,99 +44,20 @@ impl SqlHandler {
         let stream = stream
             .execute(0, SessionContext::default().task_ctx())
             .context(error::TableScanExecSnafu)?;
         let stream = Box::pin(DfRecordBatchStreamAdapter::new(stream));

         let (_schema, _host, path) = parse_url(&req.location).context(error::ParseUrlSnafu)?;
         let object_store =
             build_backend(&req.location, req.connection).context(error::BuildBackendSnafu)?;

-        let mut parquet_writer = ParquetWriter::new(path.to_string(), stream, object_store);

         // TODO(jiachun):
         // For now, COPY is implemented synchronously.
         // When copying a large table, it will block for a long time.
         // Maybe we should make "copy" run in the background?
         // Like PG: https://www.postgresql.org/docs/current/sql-copy.html
-        let rows = parquet_writer.flush().await?;
-
-        Ok(Output::AffectedRows(rows))
-    }
-}
-
-type DfRecordBatchStream = Pin<Box<DfRecordBatchStreamAdapter>>;
-
-struct ParquetWriter {
-    file_name: String,
-    stream: DfRecordBatchStream,
-    object_store: ObjectStore,
-    max_row_group_size: usize,
-    max_rows_in_segment: usize,
-}
-
-impl ParquetWriter {
-    pub fn new(file_name: String, stream: DfRecordBatchStream, object_store: ObjectStore) -> Self {
-        Self {
-            file_name,
-            stream,
-            object_store,
-            // TODO(jiachun): make these configurable: WITH (max_row_group_size=xxx, max_rows_in_segment=xxx)
-            max_row_group_size: 4096,
-            max_rows_in_segment: 5000000, // default 5M rows per segment
-        }
-    }
-
-    pub async fn flush(&mut self) -> Result<usize> {
-        let schema = self.stream.as_ref().schema();
-        let writer_props = WriterProperties::builder()
-            .set_compression(Compression::ZSTD)
-            .set_encoding(Encoding::PLAIN)
-            .set_max_row_group_size(self.max_row_group_size)
-            .build();
-
-        let mut total_rows = 0;
-        loop {
-            let mut buf = vec![];
-            let mut arrow_writer =
-                ArrowWriter::try_new(&mut buf, schema.clone(), Some(writer_props.clone()))
-                    .context(error::WriteParquetSnafu)?;
-
-            let mut rows = 0;
-            let mut end_loop = true;
-            // TODO(hl & jiachun): Since OpenDAL's writer is async and ArrowWriter requires a `std::io::Write`,
-            // we use a Vec<u8> here to buffer all parquet bytes in memory and write them to the object store
-            // in one go. Maybe we should find a better way to bridge ArrowWriter and OpenDAL's object.
-            while let Some(batch) = self
-                .stream
-                .try_next()
-                .await
-                .context(error::PollStreamSnafu)?
-            {
-                arrow_writer
-                    .write(&batch)
-                    .context(error::WriteParquetSnafu)?;
-                rows += batch.num_rows();
-                if rows >= self.max_rows_in_segment {
-                    end_loop = false;
-                    break;
-                }
-            }
-
-            let start_row_num = total_rows + 1;
-            total_rows += rows;
-            arrow_writer.close().context(error::WriteParquetSnafu)?;
-
-            // If rows == 0, we just end up with an empty file.
-            //
-            // file_name like:
-            // "file_name_1_1000000" (row num: 1 ~ 1000000),
-            // "file_name_1000001_xxx" (row num: 1000001 ~ xxx)
-            let file_name = format!("{}_{}_{}", self.file_name, start_row_num, total_rows);
-            let object = self.object_store.object(&file_name);
-            object.write(buf).await.context(error::WriteObjectSnafu {
-                path: object.path(),
-            })?;
-
-            if end_loop {
-                return Ok(total_rows);
-            }
-        }
-    }
-}
+        let writer = ParquetWriter::new(&path, Source::Stream(stream), object_store);
+        let rows_copied = writer
+            .write_sst(&storage::sst::WriteOptions::default())
+            .await
+            .context(WriteParquetSnafu)?
+            .map(|SstInfo { num_rows, .. }| num_rows)
+            .unwrap_or(0);
+
+        Ok(Output::AffectedRows(rows_copied))
     }
 }
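The deleted `flush()` above, now superseded by `storage::ParquetWriter::write_sst`, captured one idea worth keeping: because `ArrowWriter` needs a synchronous `std::io::Write` while the object store is async, each segment of up to `max_rows_in_segment` rows was encoded into an in-memory `Vec<u8>` and uploaded as a single object named by its row range. A stand-in sketch of just that segmenting logic, with a byte buffer and `println!` in place of the real ArrowWriter and object-store upload:

    fn copy_in_segments(rows: Vec<u32>, max_rows_in_segment: usize, file_name: &str) -> usize {
        let mut total_rows = 0;
        let mut input = rows.into_iter().peekable();
        while input.peek().is_some() {
            // In-memory buffer standing in for the Vec<u8> that bridged the
            // synchronous ArrowWriter and the async object store.
            let mut buf: Vec<u8> = Vec::new();
            let mut rows_in_segment = 0;
            for row in input.by_ref() {
                buf.extend_from_slice(&row.to_le_bytes()); // stand-in for arrow_writer.write(&batch)
                rows_in_segment += 1;
                if rows_in_segment >= max_rows_in_segment {
                    break;
                }
            }
            let start_row_num = total_rows + 1;
            total_rows += rows_in_segment;
            // One object per segment, named by its row range,
            // e.g. "demo_1_4", "demo_5_8", ...
            println!("upload {file_name}_{start_row_num}_{total_rows} ({} bytes)", buf.len());
        }
        total_rows
    }

    fn main() {
        let copied = copy_in_segments((0..10).collect(), 4, "demo");
        assert_eq!(10, copied);
    }

The cost is memory: a whole segment's encoded bytes sit in RAM before each upload, which is presumably part of why this hand-rolled writer gave way to the shared `storage` implementation.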

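The TODO(jiachun) retained in the hunk above asks whether COPY should run in the background rather than block the session, the way PostgreSQL handles long-running jobs. One possible shape for that, purely an assumption and not something this diff implements, is to spawn the copy onto a separate tokio task and return a handle immediately:

    use std::time::Duration;

    use tokio::task::JoinHandle;

    // Stand-in for the real scan-and-write loop.
    async fn copy_table() -> usize {
        tokio::time::sleep(Duration::from_millis(10)).await;
        42
    }

    fn start_copy_in_background() -> JoinHandle<usize> {
        // The session could return right away; surfacing progress and
        // completion to the user is the part that needs real design.
        tokio::spawn(copy_table())
    }

    #[tokio::main]
    async fn main() {
        let handle = start_copy_in_background();
        println!("COPY accepted, running in the background");
        let rows = handle.await.expect("copy task failed");
        println!("copied {rows} rows");
    }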
Some files were not shown because too many files have changed in this diff.