Compare commits

...

233 Commits

Author SHA1 Message Date
Lei, HUANG
719fbf2f3a chore: bump dashboard to v0.2.1 2023-04-18 14:32:07 +08:00
Lei, HUANG
cc14dea913 chore: bump version to v0.2.0 2023-04-17 15:16:49 +08:00
Weny Xu
cc7c313937 chore: fix clippy (#1387) 2023-04-15 07:00:54 +08:00
Ruihang Xia
a6e41cdd7b chore: bump arrow, parquet, datafusion and tonic (#1386)
* bump arrow, parquet, datafusion, tonic and greptime-proto

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add analyzer and fix test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-15 00:03:15 +08:00
Hao
a5771e2ec3 feat: implement predict_linear function in promql (#1362)
* feat: implement predict_linear function in promql

* feat: initialize predict_linear's planner

* fix(bug): fix a bug in linear regression and add some unit tests for linear regression

* chore: format code

* feat: deal with NULL value in linear_regression

* feat: add test for all value is None
2023-04-14 22:26:37 +08:00
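
The predict_linear work above boils down to a least-squares fit over the samples in a range, skipping missing values, then extrapolating forward. A minimal Rust sketch of that idea (illustrative only; the `(timestamp, Option<value>)` sample layout and the extrapolation origin are assumptions, not the actual GreptimeDB code):

```rust
/// Least-squares sketch of a PromQL-style `predict_linear`: fit a line to the
/// (timestamp, value) samples that are present, then extrapolate `delta`
/// seconds past the last sample. Illustrative only.
fn predict_linear(samples: &[(f64, Option<f64>)], delta: f64) -> Option<f64> {
    // Drop NULL values, as the commit above does for linear_regression.
    let points: Vec<(f64, f64)> = samples
        .iter()
        .filter_map(|&(t, v)| v.map(|v| (t, v)))
        .collect();
    if points.len() < 2 {
        return None; // a line needs at least two samples
    }
    let n = points.len() as f64;
    let mean_t = points.iter().map(|(t, _)| t).sum::<f64>() / n;
    let mean_v = points.iter().map(|(_, v)| v).sum::<f64>() / n;
    let mut cov = 0.0;
    let mut var = 0.0;
    for (t, v) in &points {
        cov += (t - mean_t) * (v - mean_v);
        var += (t - mean_t) * (t - mean_t);
    }
    if var == 0.0 {
        return None; // all samples share one timestamp
    }
    let slope = cov / var;
    let intercept = mean_v - slope * mean_t;
    let last_t = points.last().unwrap().0;
    Some(intercept + slope * (last_t + delta))
}
```
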
Lei, HUANG
68e64a6ce9 feat: add some metrics (#1384)
* feat: add some metrics

* fix: compile errors
2023-04-14 20:46:45 +08:00
Ning Sun
90cd3bb5c9 chore: switch mysql_async to git dep (#1383) 2023-04-14 07:04:34 +00:00
shuiyisong
bea37e30d8 chore: query prom using input query context (#1381) 2023-04-14 14:23:36 +08:00
Yingwen
d988b43996 feat: Add drop table procedure to mito (#1377)
* feat: Add drop table procedure to mito

* feat: remove table from engine and then close it
2023-04-14 13:09:38 +08:00
LFC
0fc816fb0c test: add "numbers" table in distributed mode (#1374) 2023-04-14 11:52:04 +08:00
Ning Sun
43391e0162 chore: update pgwire and rustls libraries (#1380)
* feat: update pgwire to 0.13 and fix grafana compatibility

* chore: update pgwire and rustls

* chore: remove unused imports

* style: format toml
2023-04-14 11:06:01 +08:00
Ruihang Xia
3e7f7e3e8d fix: compile error in develop branch (#1376)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-13 15:19:00 +08:00
Yingwen
0819582a26 feat: Add alter table procedure (#1354)
* feat: Implement AlterTableProcedure

* test: Test alter table procedure

* feat: support alter table by procedure in datanode

* chore: update comment
2023-04-13 14:05:53 +08:00
Lei, HUANG
9fa871a3fa fix: concurrently renaming two tables to the same name may cause override (#1368)
* fix: concurrently renaming two tables to the same name may cause override

* fix: concurrently update system catalog table

* fix: correctness
2023-04-13 11:53:02 +08:00
Lei, HUANG
76640402ba fix: update cargo lock (#1375) 2023-04-13 11:08:35 +08:00
discord9
c20dbda598 feat: from/to numpy&collect concat (#1339)
* feat: from/to numpy&collect concat

* feat: PyRecordBatch

* test: try import first, allow w/out numpy/pyarrow

* fix: cond compile flag

* doc: license

* feat: sql() ret PyRecordBatch&repr

* fix: after merge

* style: fmt

* chore: CR advices

* docs: update

* chore: resolve conflict
2023-04-13 10:46:25 +08:00
LFC
33dbf7264f refactor: unify the execution of show stmt (#1340)
* refactor: unify the execution of show stmt
2023-04-12 23:09:07 +08:00
discord9
716bde8f04 feat: benchmark some python script (#1356)
* test: bench rspy&pyo3

* docs: add TODO

* api heavy

* Update src/script/benches/py_benchmark.rs

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* style: toml fmt

* test: use `rayon` for threadpool

* test: compile first, run later

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-12 18:19:02 +08:00
ZonaHe
9f2825495d feat: update dashboard to v0.1.0 (#1370)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-04-12 17:08:10 +08:00
localhost
ae21c1c1e9 chore: set keep lease heartbeat log level to trace (#1364)
Co-authored-by: paomian <qtang@greptime.com>
2023-04-12 09:38:49 +08:00
Ruihang Xia
6b6617f9cb build: specify clippy denies in cargo config (#1351)
* build: specify clippy denies in cargo config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* deny implicit clone

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-11 09:48:52 +00:00
shuiyisong
d5f0ba4ad9 refactor: merge authenticate and authorize api (#1360)
* chore: add auth api

* chore: update pg using auth api

* chore: update grpc using auth api

* chore: update http using auth api
2023-04-11 17:28:07 +08:00
Eugene Tolbakov
e021da2eee feat(promql): add holt_winters initial implementation (#1342)
* feat(promql): add holt_winters initial implementation

* feat(promql): improve docs for holt_winters

* feat(promql): adjust holt_winters implementation according to code review

* feat(promql): add holt_winters test from prometheus promql function test suite

* feat(promql): add more holt_winters tests from the prometheus promql function test suite

* feat(promql): fix styling issue

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-11 17:04:35 +08:00
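
PromQL's holt_winters is double exponential smoothing: a smoothing factor `sf` blends each sample into the level, and a trend factor `tf` blends the level change into the trend. A minimal sketch of that recurrence (illustrative only; not the GreptimeDB implementation, and timestamps are ignored for brevity):

```rust
/// Double exponential smoothing sketch for a PromQL-style `holt_winters`.
/// `sf` is the smoothing factor and `tf` the trend factor, both in (0, 1).
fn holt_winters(values: &[f64], sf: f64, tf: f64) -> Option<f64> {
    if values.len() < 2 || !(sf > 0.0 && sf < 1.0 && tf > 0.0 && tf < 1.0) {
        return None;
    }
    let mut smoothed = values[0];
    let mut trend = values[1] - values[0];
    for &x in &values[1..] {
        let prev = smoothed;
        // Level update blends the new sample with the previous level + trend.
        smoothed = sf * x + (1.0 - sf) * (prev + trend);
        // Trend update blends the observed level change with the old trend.
        trend = tf * (smoothed - prev) + (1.0 - tf) * trend;
    }
    Some(smoothed)
}
```
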
Weny Xu
fac9c17a9b feat: implement infer schema from single file (#1348)
* feat: implement infer schema from file

* feat: implement compression type

* refactor: remove unnecessary BufReader

* refactor: remove SyncIoBridge and use tokio_util::io::SyncIoBridge instead

* chore: apply suggestions from CR
2023-04-11 16:59:30 +08:00
Weny Xu
dfc2a45de1 docs: treat slack as the first-class citizen (#1361) 2023-04-11 16:59:17 +08:00
Lei, HUANG
3e8ec8b73a fix: avoid panic when no region found in table (#1359) 2023-04-11 16:58:18 +08:00
Weny Xu
a90798a2c1 test: add tests for file table engine (#1353)
* test: add tests for file table engine

* test: refactor open table test and add close engine test
2023-04-11 06:25:08 +00:00
Lei, HUANG
f5cf5685cc feat!: parsing local timestamp (#1352)
* fix: parse and display timestamp/datetime in local time zone

* fix display

* fix: unit tests

* change time zone env

* fix: remove useless code
2023-04-11 12:54:15 +08:00
localhost
1a21a6ea41 chore: set metasrv and datanode heartbeat log level to trace (#1357) 2023-04-11 11:21:29 +08:00
Ruihang Xia
09f003d01d fix: lots of corner cases in PromQL (#1345)
* adjust plan ordering
fix offset logic
ignore empty range vector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: different NaN logic between instant and range selector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: enlarge selector time window

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert change about stale NaN

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename variables

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* one more rename

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-10 09:05:24 +00:00
Weny Xu
29c6155ae3 feat: introduce file table engine (#1323)
* feat: introduce file table engine

* chore: apply cr suggestions

* refactor: refactor immutable manifest

* chore: apply cr suggestions

* refactor: refactor immutable manifest

* chore: apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: apply suggestions from CR

* chore: apply suggestions from code review

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-04-10 12:03:36 +08:00
Weny Xu
804348966d chore: amend fmt-toml (#1347) 2023-04-10 11:42:36 +08:00
Lei, HUANG
b7bdee6de9 feat: ignore time zone info when importing from external files (#1341)
* feat: ignore timezone info when copying from external files

* chore: rebase onto develop
2023-04-10 11:41:34 +08:00
Lei, HUANG
c850e9695a fix: stream inserts when copying from external file (#1338)
* fix: stream inserts when copying from external file

* fix: reset pending bytes once insertion succeeds

* Update src/datanode/src/sql/copy_table_from.rs

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-04-10 10:44:12 +08:00
LFC
a3e47955b8 feat: information schema (#1327)
* feat: basic information schema

* show information schema only for current catalog

* fix: fragile tests
2023-04-07 16:50:14 +08:00
zyy17
554a69ea54 refactor: add disable_dashboard option and disable dashboard in metasrv and datanode (#1343)
* refactor: add disable_dashboard option and disable dashboard in metasrv and datanode

* refactor: skip disable_dashboard field in toml file

* refactor: simplify the http initialization
2023-04-07 16:45:25 +08:00
LFC
f8b6a6b219 fix!: not allowed to create a column name the same as a keyword without quoting (#1333)
* fix: not allowed to create a column name the same as a keyword without quoting

* fix: tests

* Update src/sql/src/parsers/create_parser.rs

Co-authored-by: Ning Sun <classicning@gmail.com>

* fix: tests

---------

Co-authored-by: Ning Sun <classicning@gmail.com>
2023-04-06 15:34:26 +08:00
dennis zhuang
dce0adfc7e chore: readme (#1318) 2023-04-06 13:20:08 +08:00
Ruihang Xia
da66138e80 refactor(error): remove backtrace, and introduce call-site location for debugging (#1329)
* wip: global replace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unneeded tests of errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix ErrorExt trait implementor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix pyo3 tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-06 04:06:00 +00:00
Lei, HUANG
d10de46e03 feat: support timestamp precision on creating table (#1332)
* feat: support timestamp precision on creating table

* fix sqlness

* fix: substrait representation of different timestamp precision
2023-04-06 11:18:20 +08:00
Eugene Tolbakov
59f7630000 feat: initial changes for compaction_time_window field support (#1083)
* feat(compaction_time_window): initial changes for compaction_time_window field support

* feat(compaction_time_window): move PickerContext creation

* feat(compaction_time_window): update region descriptor, fix formatting

* feat(compaction_time_window): add minor enhancements

* feat(compaction_time_window): fix failing test

* feat(compaction_time_window): return an error instead of silently skipping the user-provided compaction_time_window

* feat(compaction_time_window): add TODO reminder
2023-04-06 10:32:41 +08:00
Hao
a6932c6a08 feat: implement deriv function (#1324)
* feat: implement deriv function

* docs: add docs for linear regression

* test: add test for deriv
2023-04-05 13:42:07 +08:00
Ruihang Xia
10593a5adb fix: update sqlness result (#1328)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-04 22:47:55 +08:00
dennis zhuang
bf8c717022 feat: try to do manifest checkpoint on opening region (#1321) 2023-04-04 21:36:54 +08:00
localhost
aa9f6c344c chore: minor fix about metrics component (#1322)
* typo: fix StartMetricsExport error message error

* bug: add metrics http handler for frontend node
2023-04-04 19:31:06 +08:00
Ruihang Xia
99353c6ce7 refactor: rename "value" semantic type to "field" (#1326)
* global replace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change desc table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-04 11:14:28 +00:00
Ruihang Xia
a2d8804129 feat: impl __field__ special matcher to project value columns (#1320)
* plan new come functions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement __value__ matcher

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change __value__ to __field__

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add bad-case tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename variables

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-04 09:08:50 +00:00
Weny Xu
637a4a2a58 docs: file external table RFC (#1274) 2023-04-04 10:41:17 +08:00
Weny Xu
ef134479ef feat: support multi table engines in distributed mode (#1316)
* chore: bump greptime-proto to 59afacd

* feat: support multi table engines in distributed mode
2023-04-04 10:27:08 +08:00
Weny Xu
451f9d2d4e feat: support multi table engines (#1277)
* feat: support multi table engines

* refactor: adapt SqlHandler to support multiple table engines

* refactor: refactor TableEngineManager

* chore: apply review suggestions

* chore: apply review suggestions

* chore: apply review suggestions

* chore: snafu context styling
2023-04-03 14:49:12 +00:00
dennis zhuang
68d3247791 chore: tweak logs (#1314)
* chore: tweak logs

* chore: cr comments
2023-04-03 21:08:16 +08:00
Eugene Tolbakov
2458b4edd5 feat(changes): add initial implementation (#1304)
* feat(changes): add initial implementation

* feat(changes): add docs
2023-04-03 12:02:13 +08:00
Eugene Tolbakov
5848f27c27 feat(resets): add initial implementation (#1306) 2023-04-03 11:37:01 +08:00
LFC
215cea151f refactor: move PromQL execution to Frontend (#1297)
* refactor: move PromQL execution to Frontend
2023-04-03 11:34:03 +08:00
Hao
a82f1f564d feat: implement stdvar_over_time function (#1291)
* feat: implement stdvar_over_time function

* feat: add more test for stdvar_over_time

* feat: add stdvar_over_time to functions.rs
2023-04-03 10:01:25 +08:00
LFC
48c2841e4d feat: execute python script in distributed mode (#1264)
* feat: execute python script in distributed mode

* fix: rebase develop
2023-04-02 20:36:48 +08:00
Lei, HUANG
d2542552d3 fix: unit test fails when trying to copy table to s3 and copy back (#1302)
fix: unit test fails when trying to copy table to s3 and copy back to greptimedb
2023-04-02 16:43:44 +08:00
Ruihang Xia
c0132e6cc0 feat: impl quantile_over_time function (#1287)
* fix qualifier alias

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix in another way

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl quantile_over_time

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-02 16:20:32 +08:00
dennis zhuang
aea932b891 fix: checkpoint fails when deleting old logs fails (#1300) 2023-04-02 11:06:36 +08:00
Lei, HUANG
0253136333 feat: buffered parquet writer (#1263)
* wip: use

* rebase develop

* chore: fix typos

* feat: replace export parquet writer with buffered writer

* fix: some cr comments

* feat: add sst_write_buffer_size config item to configure how many bytes to buffer before flushing to the underlying storage

* chore: rebase onto develop
2023-04-01 17:21:19 +08:00
Eugene Tolbakov
6a05f617a4 feat(stddev_over_time): add initial implementation (#1289)
* feat(stddev_over_time): add initial implementation

* feat(stddev_over_time): address code review remarks, add compensated summation

* feat(stddev_over_time): fix fmt issues

* feat(stddev_over_time): add docs, minor renamings
2023-04-01 17:16:51 +08:00
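
The "compensated summation" mentioned in the stddev_over_time entry above refers to Kahan-style summation, which carries a running correction term so small increments are not lost to floating-point rounding. A minimal sketch (illustrative only; not the actual implementation):

```rust
/// Kahan (compensated) summation: keep the low-order error of each addition
/// and fold it back into the next one.
fn compensated_sum(values: &[f64]) -> f64 {
    let mut sum = 0.0;
    let mut compensation = 0.0; // running low-order error
    for &x in values {
        let y = x - compensation;
        let t = sum + y;
        // (t - sum) is what was actually added; subtracting y recovers the
        // part of x that was rounded away, to be re-applied next iteration.
        compensation = (t - sum) - y;
        sum = t;
    }
    sum
}
```
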
localhost
a2b262ebc0 chore: add http metrics server in datanode when greptime starts in distributed mode (#1256)
* chore: add http metrics server in datanode when greptime starts in distributed mode

* chore: add some docs and license

* chore: change metrics_addr to resolve address already in use error

* chore add metrics for meta service

* chore: replace metrics exporter http server from hyper to axum

* chore: format

* fix: datanode mode branching error

* fix: sqlness test address already in use and start metrics in default config

* chore: change metrics location

* chore: use builder pattern to build httpserver

* chore: remove useless debug_assert macro in httpserver builder

* chore: resolve conflicting build error

* chore: format code
2023-03-31 18:37:52 +08:00
dennis zhuang
972f64c3d7 chore: improve opendal layers (#1295)
* chore: improve opendal layers

* chore: log level
2023-03-31 09:48:11 +00:00
LFC
eb77f9aafd feat: start LocalManager in Metasrv (#1279)
* feat: procedure store in Metasrv, backed by Etcd; start `LocalManager` in Metasrv leader

* fix: resolve PR comments

* fix: resolve PR comments
2023-03-31 15:32:59 +08:00
Yingwen
dee20144d7 feat: Implement procedure to alter a table for mito engine (#1259)
* feat: wip

* fix: Fix CreateMitoTable::table_schema not initialized from json

* feat: Implement AlterMitoTable procedure

* test: Add test for alter procedure

* feat: Register alter procedure

* fix: Recover procedures after catalog manager is started

* feat: Simplify usage of table schema in create table procedure

* test: Add rename test

* test: Add drop columns test
2023-03-31 14:40:54 +08:00
dennis zhuang
563adbabe9 feat!: improve region manifest service (#1268)
* feat: try to use batch delete in ManifestLogStorage

* feat: clean temp dir when startup with file backend

* refactor: export region manifest checkpoint actions margin and refactor storage options

* feat: purge unused manifest and checkpoint files by repeat gc task

* chore: debug deleted logs

* feat: adds RepeatedTask and refactor all gc tasks

* chore: clean code

* feat: export gc_duration to manifest config

* test: assert gc works

* fix: typo

* Update src/common/runtime/src/error.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* Update src/common/runtime/src/repeated_task.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* Update src/common/runtime/src/repeated_task.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* fix: format

* Update src/common/runtime/src/repeated_task.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: by CR comments

* chore: by CR comments

* fix: serde default for StorageConfig

* chore: remove compaction config in StandaloneOptions

---------

Co-authored-by: LFC <bayinamine@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-03-31 10:42:00 +08:00
Ruihang Xia
b71bb4e5fa feat: implement restart argument for sqlness-runner (#1262)
* refactor standalone mode and distribute mode start process

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement restart arg

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update tests/runner/src/env.rs

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-03-31 10:02:19 +08:00
LFC
fae293310c feat: unify describe table execution (#1285) 2023-03-31 09:59:19 +08:00
LFC
3e51640442 ci: release binary with embedded dashboard enabled (#1283) 2023-03-30 21:35:47 +08:00
discord9
b40193d7da test: align RsPy PyO3 Behavior (#1280)
* feat: allow PyList Return in PyO3 Backend

* feat: mixed list

* feat: align&test

* chore: PR advices
2023-03-30 17:45:21 +08:00
Ruihang Xia
b5e5f8e555 chore(deps): bump arrow and parquet to 36.0.0, and datafusion to the latest (#1282)
* chore: update arrow, parquet to 36.0 and datafusion

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update deps

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: LFC <bayinamine@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-03-30 16:24:10 +08:00
zyy17
192fa0caa5 ci: only builds binaries for manually trigger workflow (#1284) 2023-03-30 15:58:28 +08:00
Weny Xu
30eb676d6a feat: implement create external table parser (#1252)
* refactor: move parse_option_string to util

* feat: implement create external table parser
2023-03-30 13:37:53 +08:00
Ruihang Xia
d7cadf6e6d fix: nyc-taxi bench tools and limit max parallel compaction task number (#1275)
* limit max parallel compaction subtasks

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct type map

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-29 09:16:53 +00:00
Lei, HUANG
d7a1435517 fix: remove backtrace from ratelimit error (#1273) 2023-03-29 15:58:01 +08:00
xiaomin tang
0943079de2 feat: Create SECURITY.md (#1270)
Create SECURITY.md
2023-03-28 19:14:29 +08:00
shuiyisong
509d07b798 chore: add build_table_route_prefix (#1269) 2023-03-28 16:26:24 +08:00
Yingwen
e72ce5eaa9 fix: Adds FileHandle to ChunkStream (#1255)
* test: Add compaction test

* test: Test read during compaction

* test: Add s3 object store to test

* test: only run compact test

* feat: Hold file handle in chunk stream

* test: check files still exist after compact

* feat: Revert changes to develop.yaml

* test: Simplify MockPurgeHandler
2023-03-28 16:22:07 +08:00
Ruihang Xia
f491a040f5 feat: implement rate, increase and delta in PromQL (#1258)
* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix increase fn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl rate and delta

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix IS_RATE condition

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more tests about rate and delta

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ensure range_length is not zero

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-28 15:21:06 +08:00
Yingwen
47179a7812 feat: Support sending multiple affected rows (#1203)
* feat: Support sending multiple affected rows

* feat: Skip federated check if query starts with insert

* style: Fix clippy
2023-03-28 14:34:14 +08:00
shuiyisong
995a28a27d feat: impl BatchDelete (#1253)
* chore: impl `BatchDelete`

* chore: add `batch_delete` to meta-client

* fix: auth param length check

* fix: auth param length check

* chore: rebase develop

* chore: use `filter_map`

Co-authored-by: LFC <bayinamine@gmail.com>

* chore: update error msg

Co-authored-by: LFC <bayinamine@gmail.com>

* fix: pre-allocate vec length

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-03-28 14:06:13 +08:00
LFC
ed1cb73ffc fix: a minor misuse of tokio::select (#1266) 2023-03-28 13:50:35 +08:00
dennis zhuang
0ffa628c22 refactor: scripts perf and metrics (#1261)
* refactor: retrieve pyvector datatype by inner vector

* perf: replace all ok_or to ok_or_else

* feat: adds metrics for scripts execution
2023-03-28 10:07:21 +08:00
Lei, HUANG
5edd2a3dbe feat: upgrade opendal (#1245)
* chore: upgrade opendal

* chore: finish upgrading opendal

* fix: clippy complaints

* fix some tests

* fix: all unit tests

* chore: rebase develop

* fix: sqlness tests

* optimize imports

* chore: rebase develop

* doc: add todo
2023-03-28 09:47:33 +08:00
Ning Sun
e63b28bff1 feat: add dbname and health check for grpc api (#1220)
* feat: add dbname and health check for grpc api

* refactor: move health check to dedicated service

* chore: switch to merged proto rev

* feat: implement healthcheck on server-side
2023-03-28 09:46:30 +08:00
zyy17
8140d4e3e5 ci: modify the copy path of binary artifacts (#1257) 2023-03-27 21:49:42 +08:00
shuiyisong
6825459c75 chore: ignore dashboard files (#1260) 2023-03-27 19:11:31 +08:00
Ning Sun
7eb4d81929 feat: adopt pgwire 0.12 and simplify encoding apis (#1250)
* feat: adopt pgwire 0.12 and simplify encoding apis

* refactor: remove duplicated format match clause
2023-03-27 18:16:43 +08:00
discord9
8ba0741c81 fix: set locals to main.dict too (#1242) 2023-03-27 15:23:52 +08:00
zyy17
0eeb5b460c ci: install python requests lib in release container image (#1241)
* ci: install python requests lib in release container image

* refactor: add requirements.txt
2023-03-27 15:20:31 +08:00
LFC
65ea6fd85f feat: embed dashboard into GreptimeDB binary (#1239)
* feat: embed dashboard into GreptimeDB binary

* fix: resolve PR comments
2023-03-27 15:08:44 +08:00
dennis zhuang
4f15b26b28 feat: region manifest checkpoint (#1202)
* chore: adds log when manifest protocol is changed

* chore: refactor region manifest

* temp commit

* feat: impl region manifest checkpoint

* feat: recover region version from manifest snapshot

* test: adds region snapshot test

* test: region manifest checkpoint

* test: alter region with manifest checkpoint

* fix: revert storage api

* feat: delete old snapshot

* refactor: manifest log storage

* Update src/storage/src/version.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/storage/src/manifest/checkpoint.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/storage/src/manifest/region.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/storage/src/manifest/region.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: by CR comments

* refactor: by CR comments

* fix: typo

* chore: tweak start_version

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-27 11:15:52 +08:00
Lei, HUANG
15ee4ac729 fix: noop flush impl for numbers table (#1247)
* fix: noop flush impl for numbers table

* fix: clippy
2023-03-27 10:54:07 +08:00
dennis zhuang
b4fc8c5b78 refactor: make sql function in scripts return a list of column vectors (#1243) 2023-03-27 08:50:19 +08:00
Lei, HUANG
6f81717866 fix: skip empty parquet (#1236)
* fix: returns None if parquet file does not contain any rows

* fix: skip empty parquet file

* chore: add doc

* rebase develop

* fix: use flatten instead of filter_map with identity
2023-03-26 09:39:15 +08:00
Lei, HUANG
77f9383daf fix: allow larger compaction window to reduce parallel task num (#1223)
fix: unit tests
2023-03-24 17:12:13 +08:00
discord9
c788b7fc26 feat: slicing PyVector&Create DataFrame from sql (#1190)
* chore: some typos

* feat: slicing for pyo3 vector

* feat: slice tests

* feat: from_sql

* feat: from_sql for dataframe

* test: df tests

* feat: `from_sql` for rspython

* test: tweak a bit

* test: and CR advices

* typos: ordered points

* chore: update error msg

* test: add more `slicing` testcase
2023-03-24 15:37:45 +08:00
LFC
0f160a73be feat: metasrv collects datanode heartbeats for region failure detection (#1214)
* feat: metasrv collects datanode heartbeats for region failure detection

* chore: change visibility

* fix: fragile tests

* Update src/meta-srv/src/handler/persist_stats_handler.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

* Update src/meta-srv/src/handler/failure_handler.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

* fix: resolve PR comments

* fix: resolve PR comments

* fix: resolve PR comments

---------

Co-authored-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-03-24 04:28:34 +00:00
LFC
92963b9614 feat: execute "delete" in query engine (in the form of "LogicalPlan") (#1222)
fix: execute "delete" in query engine (in the form of "LogicalPlan")
2023-03-24 12:11:58 +08:00
Yingwen
f1139fba59 fix: Hold FileHandle in ParquetReader to prevent the purger from purging it (#1224) 2023-03-23 14:24:25 +00:00
Ruihang Xia
4e552245b1 fix: range func tests (#1221)
* remove ignore on range fn tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* placeholder for changes, deriv and resets

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-23 17:33:11 +08:00
Ruihang Xia
3126bbc1c7 docs: use CDN for logos (#1219)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-23 11:39:24 +08:00
LFC
b77b561bc8 refactor: execute insert with select in query engine (#1181)
* refactor: execute insert with select in query engine

* fix: resolve PR comments
2023-03-23 10:38:26 +08:00
dennis zhuang
501faad8ab chore: rename params in flush api (#1213) 2023-03-22 14:07:23 +08:00
Eugene Tolbakov
5397a9bbe6 feat(to_unixtime): add initial implementation (#1186)
* feat(to_unixtime): add initial implementation

* feat(to_unixtime): use Timestamp for conversion

* feat(to_unixtime):  implement conversion to Result<VectorRef>

* feat(to_unixtime): make unit test pass

* feat(to_unixtime): preserve None for invalid timestamps

* feat(to_unixtime): address code review suggestions

* feat(to_unixtime): add an sqlness test

* feat(to_unixtime): adjust the assertion for the sqlness test

* Update tests/cases/standalone/common/select/dummy.sql

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-21 12:41:07 +00:00
Ruihang Xia
f351ee7042 docs: update document string and site (#1211)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-21 07:01:08 +00:00
Ruihang Xia
e0493e0b8f feat: flush all tables on shutdown (#1185)
* feat: impl flush on shutdown (#14)

* feat: impl flush on shutdown

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* powerful if-else!

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* retrieve table handler from schema provider

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/datanode/src/instance.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* fix: uncommitted merge change

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-03-21 14:36:30 +08:00
LFC
b2a09c888a feat: phi accrual failure detector (#1200) 2023-03-21 11:47:47 +08:00
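
A phi accrual failure detector reports a continuous suspicion level φ instead of a binary alive/dead verdict, letting the caller pick a threshold. A minimal sketch using the exponential-distribution simplification popularized by Cassandra (illustrative only; the type and fields here are hypothetical and not Metasrv's actual detector):

```rust
/// Phi-accrual sketch: suspicion grows with the time since the last heartbeat
/// relative to the observed mean inter-arrival interval.
struct PhiAccrualDetector {
    intervals: Vec<f64>, // observed heartbeat inter-arrival times, in seconds
    last_heartbeat: f64, // timestamp of the most recent heartbeat, in seconds
}

impl PhiAccrualDetector {
    fn on_heartbeat(&mut self, now: f64) {
        self.intervals.push(now - self.last_heartbeat);
        self.last_heartbeat = now;
    }

    /// phi = -log10(P(no heartbeat yet)); with exponential inter-arrivals,
    /// P(interval > elapsed) = exp(-elapsed / mean), so phi simplifies to
    /// elapsed / (mean * ln(10)).
    fn phi(&self, now: f64) -> f64 {
        if self.intervals.is_empty() {
            return 0.0; // no history yet, nothing to suspect
        }
        let mean = self.intervals.iter().sum::<f64>() / self.intervals.len() as f64;
        let elapsed = now - self.last_heartbeat;
        elapsed / (mean * std::f64::consts::LN_10)
    }
}
```
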
LFC
af101480b3 feat: add gRPC reflection service (#1208)
* feat: add gRPC reflection service

* feat: add gRPC reflection service
2023-03-21 11:23:29 +08:00
Weny Xu
b8f7f603cf test: add copy clause sqlness tests (#1198) 2023-03-21 11:22:26 +08:00
dennis zhuang
8fb97ea1d8 fix: losing region numbers after altering table (#1209) 2023-03-21 11:19:43 +08:00
discord9
21ce9c1163 docs: more explain in readme (#1195)
* docs: more explain in readme

* fix: typos

* fix: CR advices
2023-03-20 21:56:34 +08:00
Ruihang Xia
0a22375ac1 fix: nyc-taxi bench suite (#1204)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-20 21:53:01 +08:00
fys
0596d20a3b fix: cannot create table in the local distributed environment (#1207)
fix: create table in local distributed env
2023-03-20 20:12:35 +08:00
Weny Xu
e19c8fa2b6 refactor: combine Copy To and Copy From (#1197)
* refactor: combine Copy To and Copy From

* Apply suggestions from code review

Co-authored-by: LFC <bayinamine@gmail.com>

* Apply suggestions from code review

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-03-20 19:23:25 +08:00
LFC
ad886f5b3e feat: GRPC client stream interface for insertion (#1206)
* feat: GRPC client stream interface for insertion

* feat: GRPC client stream interface for insertion
2023-03-20 18:45:37 +08:00
LFC
f6669a8201 feat: add GRPC unary call service to our GreptimeDB (#1196)
* feat: add GRPC unary call service to our GreptimeDB
2023-03-20 14:27:32 +08:00
Yingwen
ad5c47185d feat: wait flush until the flush is done (#1188)
* feat: Add wait argument to flush

* test(storage): Fix flush tests
2023-03-20 11:25:19 +08:00
zyy17
64441616db ci: refactor compile-python.sh and use the python310 to build amd64 binary (#1199) 2023-03-18 16:16:15 +08:00
zyy17
09491d6aee ci: release the standalone binaries with pyo3 and install python utils in images (#1194)
* ci: install python3 and python3-dev in CI Dockerfile

* ci: release the standalone binaries with pyo3 support for multiple platforms

* refactor: install pip and pyarrow

* refactor: specify the python version
2023-03-17 15:42:13 +08:00
Weny Xu
7cfa30b2ab feat: add shutdown for standalone and metasrv (#1174) 2023-03-17 11:35:17 +08:00
Ning Sun
a7676d8860 refactor: port div_ceil from stdlib to avoid unstable features (#1191)
* refactor: use float div&ceil to avoid unstable features

* refactor: port div_ceil from rust stdlib
2023-03-16 22:55:35 +08:00
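
`div_ceil` was still behind an unstable feature in the standard library at the time, which is why #1191 ports it. For unsigned integers the port is short; a hedged sketch (not necessarily the exact code that landed):

```rust
/// Ceiling division for unsigned integers: round the quotient up whenever
/// there is a nonzero remainder. Panics on a zero divisor, like `/` itself.
fn div_ceil(lhs: u64, rhs: u64) -> u64 {
    let quotient = lhs / rhs;
    let remainder = lhs % rhs;
    if remainder > 0 {
        quotient + 1
    } else {
        quotient
    }
}
```
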
zyy17
62e2a60b7b ci: release artifacts after binary and container is ready (#1192)
ci: release artifacts before binary and container is ready
2023-03-16 09:20:03 +00:00
zyy17
128c5cabe1 ci: disable run tests temporarily (#1187) 2023-03-16 14:12:19 +08:00
Yingwen
9a001d3392 chore(datanode): derive serde default for Wal/CompactionConfig (#1173) 2023-03-16 11:56:28 +08:00
Weny Xu
facdda4d9f feat: implement CONNECTION clause of Copy To (#1163)
* feat: implement CONNECTION clause of Copy To

* test: add tests for s3 backend

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-03-16 11:36:38 +08:00
Lei, HUANG
17eb99bc52 feat: allow manual table flush through HTTP API (#1184) 2023-03-15 20:15:34 +08:00
Xieqijun
cd8be77968 feat(procedure): Max retry time (#1095)
* feat: procedure config

* fix: modify config

* feat: add retry logic

* feat: add error

* feat: add it

* feat: add it

* feat: add it

* feat: rm retry from runner

* feat: use backon

* feat: add retry_interval

* feat: add retry_interval

* fix: conflict

* fix: cr

* feat: add retry error and id

* feat: rename

* refactor: execute

* feat: use config dir

* fix: cr

* fix: cr

* fix: fmt

* fix: fmt

* fix: pr

* fix: it

* fix: rm useless cmd params

* feat: add toml

* fix: ut

* feat: add rolling back

* fix: cr

* fix: cr

* fix: cr

* fix: ci

* fix: ci

* fix: ci

* chore: Apply suggestions from code review

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-03-15 08:28:08 +00:00
Eugene Tolbakov
b530ac9e60 chore(from_unixtime): remove UDF from_unixtime (#1179)
* chore(from_unixtime): remove UDF from_unixtime

* chore(from_unixtime): restore timestamp.rs for further usage

* chore(from_unixtime): address fmt issue
2023-03-15 16:27:09 +08:00
zyy17
76f1a79f1b ci: set 'continue-on-error' to false since the problem of compiling binary was resolved (#1182)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-03-15 15:41:36 +08:00
LFC
4705245d60 docs: region failover RFC (#1139)
* docs: region failover RFC

* fix: resolve PR comments
2023-03-15 15:21:58 +08:00
Zheming Li
f712f978cf feat: Report disk usage stats to metasrv through heartbeat (#1167)
* feat: Report disk usage stats to metasrv through heartbeat

Signed-off-by: Zheming Li <nkdudu@126.com>

* Update src/catalog/src/error.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

* Update src/catalog/src/lib.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

* Update src/mito/src/table.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

---------

Signed-off-by: Zheming Li <nkdudu@126.com>
Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-03-15 03:11:32 +00:00
discord9
cbf64e65b9 refactor: put dataframe & query into greptime module (#1172)
* feat: impl getitem for `vector`

* feat: mv `query`&`dataframe` into `greptime` for PyO3

* refactor: allow call dataframe&query

* refactor: pyo3 query&dataframe

* chore: CR advices
2023-03-15 11:01:43 +08:00
zyy17
242ce5c2aa ci: add pyo3 options for mac (#1178) 2023-03-14 13:51:58 +00:00
Ruihang Xia
e8d2e82335 fix: ambiguous column reference (#1177)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-14 13:18:43 +00:00
zyy17
0086cc2d3d fix: export 'PYO3_CROSS_LIB_DIR' when cargo build for aarch64-linux and refactor matrix opts (#1171) 2023-03-14 15:35:29 +08:00
Weny Xu
cdc111b607 refactor: make the cmd hold the application instance (#1159) 2023-03-14 15:18:50 +08:00
zyy17
81ca1d8399 refactor: add the separate GitHub Action job to push the image to the UCloud registry (#1170) 2023-03-14 11:35:18 +08:00
LFC
8d3999df5f fix: failed to run subquery wrapped in two parentheses (#1157) 2023-03-14 10:59:43 +08:00
discord9
a60788e92e fix: use correct env var (#1166)
* fix: use correct env var

* fix: move COPY up so rustup knows it's nightly

* fix: add `pyo3_backend` in GHA yml

* chore: name for `TODO`

* temp: not set `pyo3_backend` before find DSO

* fix: release linux with pyo3_backend
2023-03-14 10:57:13 +08:00
Weny Xu
296c6dfcbf feat: implement table flush (#1121)
* feat: add flush method for trait

* feat: implement flush via grpc

* chore: move table_dir/region_name/region_id to table crate

* chore: Update src/mito/src/table.rs

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-03-13 20:10:37 +08:00
LFC
604c20a83d refactor: remove the SQL execution interfaces in Datanode (#1135)
* refactor: remove the SQL execution interfaces in Datanode
2023-03-13 18:45:30 +08:00
Weny Xu
c7f114c8fa feat: add shutdown for frontend (#1161) 2023-03-13 17:59:36 +08:00
Weny Xu
8a83de4ea5 feat: add shutdown for datanode (#1160) 2023-03-13 17:49:26 +08:00
discord9
3377930a50 build: add cross compile docker (#1156)
* build: add cross compile docker

* build: added compile python to github action

* fix: correct path

* fix: Python Compile

* fix: run multiple cmds

* fix: both cross compile docker file&github action

* refactor: compile-python.sh

* chore: put wget install together

* fix: CR advices

* chore: add `-F pyo3_backend`
2023-03-13 16:56:03 +08:00
Ning Sun
85dd7e4f24 feat: implement promql query on grpc (#1134)
* feat: implement promql query on grpc

* test: resolve test errors

* test: add tests for promql grpc api

* refactor: align prom object name with proto

* chore: switch proto revision to main
2023-03-13 15:24:34 +08:00
LFC
f790fa05c1 fix: validate insert request (#1142)
* fix: validate GRPC insert request has the value when required by column schema, before actually making any change to the DB

* fix: resolve PR comments
2023-03-13 11:03:51 +08:00
Yingwen
dfd91a1bf8 chore: Bump version to 0.1.1 (#1155) 2023-03-11 01:11:23 +08:00
localhost
ded31fb069 chore: remove addr from datanode error message (#1152)
* chore: remove addr from datanode error message

* chore: add log for flight get error
2023-03-10 14:13:01 +08:00
Ning Sun
6a574fc52b chore: update script prompt (#1154) 2023-03-10 10:51:38 +08:00
Ning Sun
58bdf27068 fix: make pyo3 optional again (#1153)
* fix: make pyo3 optional again

* Update src/script/Cargo.toml

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-03-09 14:16:48 +00:00
discord9
610a895b66 feat: abi3 & abi37 (#1151) 2023-03-09 20:08:07 +08:00
zyy17
a9ccc06449 ci: modify scheduled release tag to 'v0.2.0-nightly-yymmdd' (#1149)
* ci: modify scheduled release tag to 'v0.2.0-nightly-yymmdd'

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: modify 'profile.weekly' to 'profile.nightly'

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-03-09 16:42:40 +08:00
Ruihang Xia
38fe1a2f01 chore: update dependencies (#1148)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-09 14:24:29 +08:00
fys
3414ac46b0 fix: remove unnecessary braces around block return value (#1147) 2023-03-09 03:42:04 +00:00
Lei, HUANG
757b4a87a0 fix: remove profile in mem-prof crate to suppress compiler warnings (#1146) 2023-03-09 03:39:24 +00:00
Yingwen
ba1517fceb ci: Fix step "build and push amd64" not triggered (#1145) 2023-03-09 11:35:38 +08:00
Yingwen
5b5d953d56 ci: tolerate error while building arm64 releases (#1143)
* ci: allow failure while building arm64 docker

* ci: Remove continue-on-error on docker step
2023-03-08 21:11:40 +08:00
Yingwen
3f6cbc378d ci: Disable arm64 release temporarily (#1141) 2023-03-08 19:13:00 +08:00
Yingwen
9619940569 ci: Allow error when building release for non-x86 platform (#1140) 2023-03-08 18:12:06 +08:00
Weny Xu
ed8252157a chore: code styling (#1137)
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-08 08:10:12 +00:00
Ruihang Xia
3e0fb7e75b test: ignore two test cases due to arrow-datafusion#5513 (#1138)
* test: ignore two test cases due to arrow-datafusion#5513

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-08 07:29:34 +00:00
Bohan Wu
ba3ce436df refactor(SST): UUID as id in FileMeta (#1116)
* feat(SST): use a newtype named FileId for FileMeta

* chore: rename some functions

* fix: compatible for previous FileMeta format

* fix: alias for file_id when getting deserialized
2023-03-08 14:27:20 +08:00
Eugene Tolbakov
b31a6cb506 refactor: replace tempdir with tempfile (#1123)
* refactor: replace tempdir with tempfile

* refactor(query): move tempfile dependency under the workspace's Cargo.toml

* refactor(tempfile): create common-test-util

* refactor(tempfile): fix toml format

* refactor(tempfile): remove tempfile out of dependencies

* refactor(tempfile): fix incorrect toml
2023-03-08 11:15:56 +08:00
SSebo
95090592f0 feat: mysql prepare replacing sql placeholder with param (#1086)
* feat: mysql prepare by replacing ? in sql with param

* chore: mysql prepare statement supports time param

* chore: prepare test more types

* chore: add TODO
2023-03-08 11:02:29 +08:00
Ruihang Xia
3a527c0fd5 feat: impl proc macro range_fn and some aggr_over_time functions (#1072)
* impl range_fn proc macro

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl some aggr_over_time fn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl present_over_time and absent_over_time

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* accomplish planner, and correct type cast

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* document the macro

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix styles

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update irate/idelta test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-07 23:39:45 +08:00
elijah
819b60ca13 feat(datatypes): implement VectorOp::take (#1115)
* feat: add take index method for VectorOp

* chore: make clippy happy

* chore: make clippy happy

* chore: improve the code

* chore: improve the code

* chore: add take null test

* chore: fix clippy
2023-03-07 19:27:33 +08:00
Weny Xu
7169fe2989 feat: implement Copy From (#1064) 2023-03-07 17:54:11 +08:00
Zheming Li
b70672be77 feat: track disk usage of regions (#1125)
* feat: track disk usage of regions

Signed-off-by: Zheming Li <nkdudu@126.com>

* calculate disk usage when called

* add default on file meta

---------

Signed-off-by: Zheming Li <nkdudu@126.com>
2023-03-07 17:13:12 +08:00
Lei, HUANG
a4c01f4a3a feat: memory profiling (#1124)
* feat: use jemalloc as default allocator

* feat: add feature for mem-prof

* feat: add errors

* make common-mem-prof optional dep

* fix: toml format

* doc: add profile doc

* fix: typo
2023-03-07 17:12:51 +08:00
Weny Xu
bd98a26cca chore: bump greptime-proto to latest(ad01872) (#1102) 2023-03-07 10:52:42 +08:00
shuiyisong
1b4236d698 refactor: use split instead of serde_urlencoded in http auth (#1110)
* refactor: change from urlencoded to regex

* refactor: change from urlencoded to regex

* chore: add unit test

* chore: update comment

* chore: remove local benchmark test

* chore: minor fix

* chore: remove unused dep
2023-03-07 10:51:47 +08:00
Lei, HUANG
e8cc9b4b29 test: add manifest compatibility tests (#1130)
* tests: add manifest compatibility tests

* fix: clippy
2023-03-06 19:31:54 +08:00
discord9
379f581780 test: add Integrated Test for Coprocessor& fix minor bugs (#1122)
* feat: cache `Runtime`

* fix: coprstream schema not set

* test: integrated tests for Coprocessor

* fix: UDF fixed

* style: remove unused import

* chore: remove more unused import

* feat: `filter`, (r)floordiv for Vector

* chore: CR advices

* feat: auto convert to `lit`

* chore: fix typo

* feat: from&to `pyarrow.array`

* feat: allow `pyarrow.array` as args to builtins

* chore: cargo fmt

* test: CI add `pyarrow`

* test: install Python&PyArrow in CI

* test: not cache depend for now

* chore: CR advices

* test: fix name

* style: rename
2023-03-06 19:20:59 +08:00
fys
ff6cfe8e70 refactor: move the batch_get to KvStore trait (#1029)
* move batch_get from KvStoreExt to KvStore

* add some unit tests

* add some unit test

* add some unit tests

* expose batch_get grpc method
2023-03-06 17:35:43 +08:00
Igor Morozov
5a397917c0 docs(contributingmd): add run tests commands (#1129)
* docs(contributingmd): add run tests commands

* docs(contributingmd): add link to nextest website

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-03-06 15:54:16 +08:00
fys
559880cb84 fix: can not find catalog when create table (#1118)
* fix: get catalog by name in RemoteCatalogManager

* cr

* cr

* cr

* fix: ut failed
2023-03-06 14:44:40 +08:00
Ruihang Xia
b76b27f3bf refactor: try to remove unnecessary tests in error mod (#750)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-06 12:31:30 +08:00
yuanbohan
d4e0dc3685 feat: specify prom server start addr (#1111)
* feat: specify promql server start addr

* refactor: rename promql to prom in Prometheus API server scenario
2023-03-06 11:07:21 +08:00
Eugene Tolbakov
b022556b79 fix: apply ttl and write_buffer_size options when a table is created via procedure (#1117)
* fix: apply ttl and write_buffer_size options when a table is created via procedure

* fix: address code review suggestion

* fix: use borrowing of table_options correctly
2023-03-05 19:37:23 +08:00
shuiyisong
bd065ea6e8 fix: remove incorrect continue (#1114) 2023-03-02 19:52:17 +08:00
yuanbohan
9a87f5edf8 fix(grpc): support timestamp precision (#1113) 2023-03-02 17:33:59 +08:00
Weny Xu
e851b6d019 feat: implement Copy From parser (#1092)
* feat: implement Copy From parser

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-03-02 14:03:13 +08:00
Ruihang Xia
e7b92f24e8 feat: impl EmptyMetric plan and time() function (#1100)
* impl EmptyMetric plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl planner part

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adapt new datafusion changes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-02 03:15:55 +00:00
Igor Morozov
4b8db408cf style(contributingmd): fix markdown issues and typos (#1107)
* style(contributingmd): fix markdown issues and typos

* style(contributingmd): remove code blocks in lists
2023-03-01 20:00:36 +08:00
Yingwen
98659899c0 refactor: Move mito engine tests to a separate file (#1104)
* refactor(mito): Move tests to a separate file

* chore(query): Remove empty mod function
2023-03-01 11:46:39 +00:00
Ruihang Xia
b1311801da ci: update breaking-change labeler (#1109)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-01 19:24:21 +08:00
Yingwen
f1b65d9b77 test: fix datanode::test_read_from_config_file (#1106)
* test: Fix datanode::test_read_from_config_file

* test: frontend and metasrv don't read example toml file
2023-03-01 18:31:40 +08:00
Ruihang Xia
d5a2a26916 chore(deps): bump sqlness to v0.4 (#1101)
deps: bump sqlness to v0.4

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-01 17:27:16 +08:00
Ning Sun
8e7e68708f docs: correct readme format (#1105)
* docs: correct readme format

* ci: fix config name
2023-03-01 16:59:11 +08:00
Ruihang Xia
9c1118b06d ci: adjust title labeler's rule (#1079)
* ci: adjust title labeler's rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-03-01 15:16:21 +08:00
Yingwen
3fb93efbd0 docs: Document fields in the config examples (#1098)
* docs: Add comments to standalone config example

* docs: Add comments to datanode config example

* docs: Add comments to frontend config example

* docs: Add comments to meta-srv config example

* docs: Use "GB" instead of "GiB"

* docs: Add link to the selector doc

* docs: Fix grammar
2023-03-01 15:14:08 +08:00
Yingwen
3fd9c2f144 feat: Store error in procedure state (#1062)
* docs: Change comment position

* refactor(procedure): Store error in ProcedureState

* test: Mock instance with procedure enabled

* feat: Add wait method to wait for procedure

* test(datanode): Test create table by procedure

* chore: Fix clippy
2023-03-01 14:37:50 +08:00
Ning Sun
75e48c5f20 ci: fix apidoc generation 2023-03-01 14:09:47 +08:00
Ning Sun
d402f83442 ci: generate apidocs when pushing to default branch (#1093)
* ci: generate apidocs when pushing to default branch

* ci: require clippy before running tests

* fix: resolve new clippy warnings on primitive slice

* fix: resolve more clippy warnings

* Update .github/workflows/apidoc.yml

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* ci: add an index html to redirect

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-01 13:18:26 +08:00
discord9
c5c6494e0b feat: add PyO3 (hence CPython) as an optional backend (#976)
* refactor: ffi_types

* style: fmt

* refactor: use `String` for return when possible

* todo: vector_impl

* feat: pyobj_try_typed_val

* refactor: more backend indep function

* feat: +-*/ magic methods

* refactor: copr

* style: fmt

* feat: add paired tests

* refactor: more

* refactor: move inside `python` folder

* refactor: all but test code

* feat: builtins for PyO3

* chore: add licenses

* chore: remove unused&add todos

* refactor: remove old files

* chore: mark unused

* chore: fmt

* chore: license

* feat: query in PyO3

* test: paired testcases for rspy&pyo3

* feat: PyDataFrame(Untested)

* feat: some allow_threads

* style: fmt

* style: add license

* feat: rebase manually of #962

* feat: more `allow_threads`

* chore: typo

* chore: remove some `TODO`

* test: allow margin of epsilon

* chore: code review advices

* chore: more CR adjust

* chore: more adjust

* feat: kwargs&its test

* chore: remove some `dbg!`

* chore: allow params

* fix: put `dataframe` into scope

* chore: newline

* fix: adjust after rebase

* fix: test serde skip attr

* style: taplo

* feat: add `pyo3_backend` feature

* doc: update CI&readme
2023-03-01 10:45:55 +08:00
shuiyisong
dc50095af3 fix: use catalog from connection (#1099)
* fix: using schema instead of full database

* fix: using schema instead of full database

* fix: using schema instead of full database

* chore: add debug log

* chore: remove debug log

* chore: remove debug log

* chore: fix cr
2023-03-01 10:34:57 +08:00
LFC
8cd69f441e feat: REPL issues logical plan to DB (#1097) 2023-02-28 16:59:48 +08:00
Weny Xu
f52fc9b7d4 fix: fix panic when the root is not specified (#1089) 2023-02-28 10:54:52 +08:00
shuiyisong
50d2685365 fix: fix catalog parsing issue (#1091)
fix: try fix catalog parsing issue
2023-02-27 22:51:49 +08:00
LFC
11d45e2918 refactor: upgrade DataFusion, Arrow and Sqlparser (#1074)
* refactor: upgrade DataFusion, Arrow and Sqlparser

* fix: resolve PR comments
2023-02-27 22:20:08 +08:00
shuiyisong
30287e7e41 fix: continue if parsing err catalog (#1090)
* fix: continue if parsing err catalog

* fix: change from warn to error
2023-02-27 11:28:45 +00:00
Xieqijun
0b3f955ca7 feat: Add an error variant RetryLater (#1058)
* feat: support retry error

* fix: ci

* fix: ci

* fix: fmt

* feat: add convert procedure error

* docs: add rustdoc

* fix: cr

* fix: cr

* fix: rm useless code
2023-02-27 17:19:37 +08:00
Ning Sun
4b58a8a18d feat: update substrait and prost version (#1080) 2023-02-27 15:18:12 +08:00
Yingwen
bd377ef329 feat: Procedure to create table and register table to catalog (#1040)
* feat: Add table-procedures crate

* feat: Implement procedure to create table

* feat: Integrate procedure manager to datanode

* test: Test CreateTableProcedure

* refactor: Rename table-procedures to table-procedure

* feat: Implement create_table_by_procedure

* chore: Remove comment

* chore: Add todo

* feat: Add procedure config to standalone mode

* feat: Register table-procedure loaders

* feat: Address review comments

CreateTableProcedure just returns an error if the subprocedure fails

* chore: Address CR comments
2023-02-27 11:49:23 +08:00
LFC
df751c38b4 feat: a simple REPL for debugging purpose (#1048)
* feat: a simple REPL for debugging purpose

* fix: rebase develop
2023-02-27 11:00:15 +08:00
Yingwen
f6e871708a chore: Rename MetaClientOpts to MetaClientOptions (#1075)
* fix: Serialize FrontendOptions to toml

* fix: Serialize DatanodeOptions to toml

* fix: Serialize StandaloneOptions to toml

See https://users.rust-lang.org/t/why-toml-to-string-get-error-valueaftertable/85903/2

* chore!: Rename MetaClientOpts to MetaClientOptions

BREAKING CHANGE: Change the meta_client_opts in the config file to
meta_client_options
2023-02-24 16:28:38 +08:00
fys
819c990a89 fix: thread that reports the heartbeat panics in unit test (#1078)
fix: ut panic in heartbeat report thread
2023-02-24 15:36:32 +08:00
Ruihang Xia
a8b4e8d933 ci: simplify codecov comment (#1073)
chore(ci): simplify codecov comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-24 15:22:57 +08:00
Yingwen
710e2ed133 ci: Use fixed skywalking-eyes revision (#1076)
The latest PR of skywalking-eyes https://github.com/apache/skywalking-eyes/pull/149
breaks our CI action
2023-02-24 07:05:18 +00:00
Ning Sun
81eab74b90 refactor: remove grpc client constructor with default catalog/schema (#1060)
* refactor: remove grpc client with default catalog/schema

* refactor: re-export consts in client module
2023-02-24 11:06:14 +08:00
Ning Sun
8f67d8ca93 fix: update mysql server library to fix tls corrupt message issue (#1065) 2023-02-24 10:20:44 +08:00
Ruihang Xia
4cc3ac37d5 feat: add DictionaryVector DataType (#1061)
* fix stddev and stdvar. try build range function expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: add dictionary data type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* preserve timestamp column in range manipulator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* plan range functions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-23 20:31:07 +08:00
Lei, HUANG
b48c851b96 fix: support datetime type parsing (#1071)
* fix: support datetime type parsing

* fix: unit test
2023-02-23 20:26:47 +08:00
Xuanwo
fdd17c6eeb refactor: Clean up re-export of opendal services (#1067)
Signed-off-by: Xuanwo <github@xuanwo.io>
2023-02-23 14:12:34 +08:00
Ruihang Xia
51641db39e feat: support filter expression in PromQL (#1066)
feat: support filter expression

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-23 11:55:23 +08:00
Xuanwo
98ef74bff4 chore: Bump OpenDAL to v0.27 (#1057)
* Bump OpenDAL to v0.27

Signed-off-by: Xuanwo <github@xuanwo.io>

* Make cargo check happy

Signed-off-by: Xuanwo <github@xuanwo.io>

* Address comments

Signed-off-by: Xuanwo <github@xuanwo.io>

* Address comments

Signed-off-by: Xuanwo <github@xuanwo.io>

* Format toml

Signed-off-by: Xuanwo <github@xuanwo.io>

* Make taplo happy

Signed-off-by: Xuanwo <github@xuanwo.io>

---------

Signed-off-by: Xuanwo <github@xuanwo.io>
2023-02-23 11:20:45 +08:00
Lei, HUANG
f42acc90c2 fix: allow empty TableOptions (#1063)
fix: allow default TableOptions to avoid panic when upgrading from older versions
2023-02-22 19:19:13 +08:00
Lei, HUANG
2df8143ad5 feat: support table ttl (#1052)
* feat: purge expired sst on compaction

* chore: add more log

* fix: clippy

* fix: mark expired ssts as compacting before picking candidates

* fix: some CR comments

* fix: remove useless result

* fix: cr comments
2023-02-22 16:56:20 +08:00
shuiyisong
fb2e0c7cf3 feat: add auth to grpc handler (#1051)
* chore: get header in grpc & temp save

* chore: change authscheme to include data str

* chore: add auth to grpc flight handler

* chore: add unit test & hold for now since grpc api doesn't accept req input

* chore: minor change

* chore: minor change

* chore: add flight context to database interface

* chore: add test

* chore: update proto version & fix cr issue

* chore: add test

* chore: minor update
2023-02-22 15:20:10 +08:00
Xieqijun
390e9095f6 feat: admin http api (#1026)
* feat: catalog list

* feat: catalog list

* feat:api

* feat: leader info

* feat: use constant

* fix: ci

* feat: query heartbeat by ip

* ut: add test

* fix: cr

* fix: cr

* fix: cr
2023-02-22 14:18:37 +08:00
dennis zhuang
bcd44b90c1 feat: invoke TQL via SQL interface (#1047)
* feat: impl TQL parser in sqlparser

* feat: impl invoking TQL via SQL

* chore: remove src/sql/src/tql_parser.rs

* chore: fix typo

* test: add tql test

* chore: carry type

Co-authored-by: LFC <bayinamine@gmail.com>

* chore: cr comments

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-02-22 11:28:09 +08:00
Yingwen
c6f2db8ae0 feat(procedure): Add procedure watcher (#1043)
* refactor: Use watch channel to store ProcedureState

* feat: Add a watcher to wait for state change

* test: test watcher on procedure failure

* feat: Only clear message cache on success

* feat: submit returns Watcher
2023-02-21 17:19:39 +08:00
Lei, HUANG
e17d5a1c41 feat: support table options (#1044)
* feat: change table options from string map to a struct, add ttl and write_buffer_size

* fix: also pass table options to table meta

* feat: pass table options when opening/creating regions

* fix: CR comments
2023-02-21 08:10:23 +00:00
Ruihang Xia
23092a5208 feat: Support unary, paren, bool keyword and nonexistent metric/label in PromQL (#1049)
* feat: don't report metric/label not found as error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: impl unary expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: impl paren expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: support bool keyword

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add some tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore nonexistence labels during planning

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-21 15:24:01 +08:00
Yingwen
4bbad6ab1e ci: allow ci pass when codecov can't upload data (#1046) 2023-02-21 14:52:44 +08:00
Zhizhen He
6833b405d9 ci: upgrade spell checker to 1.13.10 (#1045)
* ci: upgrade spell checker to 1.13.10

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>

* fix: fix existing typos

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>

* chore: use taplo to format typos.toml

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>

* chore: add fmt-toml rule to format TOML files

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>

---------

Signed-off-by: Zhizhen He <hezhizhen.yi@gmail.com>
2023-02-21 10:55:27 +08:00
Yingwen
aaaf24143d feat: Procedure to create a mito engine (#1035)
* feat: wip

* feat: Implement procedure to create mito table

* feat: Add create_table_procedure to TableEngine

* feat: Impl dump and lock for CreateMitoTable

* feat: Impl CreateMitoTable::execute and register it to manager

* feat(common-procedure): pub local mod

* feat: Add simple test for MitoCreateTable

* style: Fix clippy

* refactor: Move create_table_procedure to a new trait TableEngineProcedure
2023-02-21 09:40:56 +08:00
Jiachun Feng
9161796dfa feat: export the data from a table to parquet files (#1000)
* feat: copy table parser

* feat: copy table

* chore: minor fix

* chore: give stmt a clearer name

* chore: unified naming

* chore: minor change

* chore: add a todo

* chore: end up with an empty file when the table is empty

* feat: format with copy table

* feat: with options

* chore: by cr

* chore: default 5M rows per segment

* Update src/datanode/src/sql/copy_table.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* Update src/datanode/src/sql/copy_table.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* Update src/datanode/src/error.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-02-20 16:43:50 +08:00
Ruihang Xia
68b231987c feat: improve Prometheus compliance (#1022)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* minor (useless) refactor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* retrieve metric name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add time index column to group by columns
filter out NaN in normalize
remove NULL in instant manipulator
accept form data as HTTP params
correct API URL
accept second literal as step param

* happy clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-20 07:29:43 +00:00
Yingwen
6e9964ac97 refactor(storage): Simplify debug output of some structs (#1028)
* refactor: Simplify debug output of RegionImpl

* feat: Simplify memtable debug output
2023-02-20 14:35:30 +08:00
shuiyisong
6afd79cab8 feat: support InfluxDB auth protocol (#1034)
* chore: add http auth influxdb compat

* chore: add test

* chore: minor change

* chore: fix typo

* chore: fix cr
2023-02-20 03:26:19 +00:00
fys
4e88a01638 feat: support influxdb ping and health endpoint (#1027)
* feat: support influxdb ping and health endpoint

* add some unit tests

* ping and health api no need auth

* cr
2023-02-20 02:31:51 +00:00
629 changed files with 40960 additions and 14098 deletions

View File

@@ -3,3 +3,13 @@ linker = "aarch64-linux-gnu-gcc"
[alias]
sqlness = "run --bin sqlness-runner --"
[build]
rustflags = [
# lints
# TODO: use lint configuration in cargo https://github.com/rust-lang/cargo/issues/5034
"-Wclippy::print_stdout",
"-Wclippy::print_stderr",
"-Wclippy::implicit_clone",
]

View File

@@ -2,7 +2,7 @@
GT_S3_BUCKET=S3 bucket
GT_S3_ACCESS_KEY_ID=S3 access key id
GT_S3_ACCESS_KEY=S3 secret access key
GT_S3_ENDPOINT_URL=S3 endpoint url
# Settings for oss test
GT_OSS_BUCKET=OSS bucket
GT_OSS_ACCESS_KEY_ID=OSS access key id

View File

@@ -0,0 +1,13 @@
{
"LABEL": {
"name": "breaking change",
"color": "D93F0B"
},
"CHECKS": {
"regexp": "^(?:(?!!:).)*$",
"ignoreLabels": [
"ignore-title"
],
"alwaysPassCI": true
}
}

View File

@@ -1,10 +1,12 @@
{
"LABEL": {
"name": "Invalid PR Title",
"color": "B60205"
},
"CHECKS": {
"regexp": "^(feat|fix|test|refactor|chore|style|docs|perf|build|ci|revert)(\\(.*\\))?:.*",
"ignoreLabels" : ["ignore-title"]
}
"LABEL": {
"name": "Invalid PR Title",
"color": "B60205"
},
"CHECKS": {
"regexp": "^(feat|fix|test|refactor|chore|style|docs|perf|build|ci|revert)(\\(.*\\))?\\!?:.*",
"ignoreLabels": [
"ignore-title"
]
}
}

42
.github/workflows/apidoc.yml vendored Normal file
View File

@@ -0,0 +1,42 @@
on:
push:
branches:
- develop
paths-ignore:
- 'docs/**'
- 'config/**'
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
name: Build API docs
env:
RUST_TOOLCHAIN: nightly-2023-02-26
jobs:
apidoc:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- run: cargo doc --workspace --no-deps --document-private-items
- run: |
cat <<EOF > target/doc/index.html
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="refresh" content="0; url='greptime/'" />
</head>
<body></body></html>
EOF
- name: Publish dist directory
uses: JamesIves/github-pages-deploy-action@v4
with:
folder: target/doc

View File

@@ -24,15 +24,15 @@ on:
name: CI
env:
RUST_TOOLCHAIN: nightly-2023-02-14
RUST_TOOLCHAIN: nightly-2023-02-26
jobs:
typos:
name: Spell Check with Typos
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: crate-ci/typos@v1.0.4
- uses: actions/checkout@v3
- uses: crate-ci/typos@v1.13.10
check:
name: Check
@@ -116,6 +116,7 @@ jobs:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
timeout-minutes: 60
needs: [clippy]
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
@@ -131,7 +132,7 @@ jobs:
ETCD_VER=v3.5.7
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p /tmp/etcd-download
mkdir -p /tmp/etcd-download
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
@@ -182,12 +183,13 @@ jobs:
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Run cargo clippy
run: cargo clippy --workspace --all-targets -- -D warnings -D clippy::print_stdout -D clippy::print_stderr
run: cargo clippy --workspace --all-targets -- -D warnings
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
timeout-minutes: 60
needs: [clippy]
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
@@ -205,10 +207,16 @@ jobs:
uses: Swatinem/rust-cache@v2
- name: Install latest nextest release
uses: taiki-e/install-action@nextest
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install PyArrow Package
run: pip install pyarrow
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Collect coverage data
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
RUST_BACKTRACE: 1
@@ -223,5 +231,5 @@ jobs:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./lcov.info
flags: rust
fail_ci_if_error: true
fail_ci_if_error: false
verbose: true

View File

@@ -13,4 +13,4 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Check License Header
uses: apache/skywalking-eyes/header@main
uses: apache/skywalking-eyes/header@df70871af1a8109c9a5b1dc824faaf65246c5236

View File

@@ -18,3 +18,12 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
pass_on_octokit_error: false
configuration_path: ".github/pr-title-checker-config.json"
breaking:
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: thehanimo/pr-title-checker@v1.3.4
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
pass_on_octokit_error: false
configuration_path: ".github/pr-title-breaking-change-label-config.json"

View File

@@ -5,20 +5,22 @@ on:
schedule:
# At 00:00 on Monday.
- cron: '0 0 * * 1'
# Manually triggering only builds binaries.
workflow_dispatch:
name: Release
env:
RUST_TOOLCHAIN: nightly-2023-02-14
RUST_TOOLCHAIN: nightly-2023-02-26
# FIXME(zyy17): Would be better to use `gh release list -L 1 | cut -f 3` to get the latest release version tag, but for a long time, we will stay at 'v0.1.0-alpha-*'.
SCHEDULED_BUILD_VERSION_PREFIX: v0.1.0-alpha
SCHEDULED_BUILD_VERSION_PREFIX: v0.2.0
# In the future, we can change SCHEDULED_PERIOD to nightly.
SCHEDULED_PERIOD: weekly
SCHEDULED_PERIOD: nightly
CARGO_PROFILE: weekly
CARGO_PROFILE: nightly
## FIXME(zyy17): Enable it after the tests are stabled.
DISABLE_RUN_TESTS: true
jobs:
build:
@@ -30,16 +32,45 @@ jobs:
- arch: x86_64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-amd64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: aarch64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-arm64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: aarch64-apple-darwin
os: macos-latest
file: greptime-darwin-arm64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
file: greptime-darwin-amd64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: x86_64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-amd64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
- arch: aarch64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-arm64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
- arch: aarch64-apple-darwin
os: macos-latest
file: greptime-darwin-arm64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
file: greptime-darwin-amd64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.continue-on-error }}
if: github.repository == 'GreptimeTeam/greptimedb'
steps:
- name: Checkout sources
@@ -76,10 +107,10 @@ jobs:
ETCD_VER=v3.5.7
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p /tmp/etcd-download
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download --strip-components=1
mkdir -p /tmp/etcd-download
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo cp -a /tmp/etcd-download/etcd* /usr/local/bin/
nohup etcd >/tmp/etcd.log 2>&1 &
@@ -93,7 +124,14 @@ jobs:
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: |
sudo apt-get -y update
sudo apt-get -y install libssl-dev pkg-config g++-aarch64-linux-gnu gcc-aarch64-linux-gnu
sudo apt-get -y install libssl-dev pkg-config g++-aarch64-linux-gnu gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu wget
# FIXME(zyy17): Should we specify the version of python when building binary for darwin?
- name: Compile Python 3.10.10 from source for linux
if: contains(matrix.arch, 'linux') && contains(matrix.opts, 'pyo3_backend')
run: |
sudo chmod +x ./docker/aarch64/compile-python.sh
sudo ./docker/aarch64/compile-python.sh ${{ matrix.arch }}
- name: Install rust toolchain
uses: dtolnay/rust-toolchain@master
@@ -105,10 +143,55 @@ jobs:
run: protoc --version ; cargo version ; rustc --version ; gcc --version ; g++ --version
- name: Run tests
if: env.DISABLE_RUN_TESTS == 'false'
run: make unit-test integration-test sqlness-test
- name: Run cargo build with pyo3 for aarch64-linux
if: contains(matrix.arch, 'aarch64-unknown-linux-gnu') && contains(matrix.opts, 'pyo3_backend')
run: |
# TODO(zyy17): We should make PYO3_CROSS_LIB_DIR configurable.
export PYTHON_INSTALL_PATH_AMD64=${PWD}/python-3.10.10/amd64
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
export PYO3_CROSS_LIB_DIR=${PWD}/python-3.10.10/aarch64
echo "PYO3_CROSS_LIB_DIR: $PYO3_CROSS_LIB_DIR"
alias python=$PYTHON_INSTALL_PATH_AMD64/bin/python3
alias pip=$PYTHON_INSTALL_PATH_AMD64/bin/python3-pip
cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Run cargo build with pyo3 for amd64-linux
if: contains(matrix.arch, 'x86_64-unknown-linux-gnu') && contains(matrix.opts, 'pyo3_backend')
run: |
export PYTHON_INSTALL_PATH_AMD64=${PWD}/python-3.10.10/amd64
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
echo "implementation=CPython" >> pyo3.config
echo "version=3.10" >> pyo3.config
echo "implementation=CPython" >> pyo3.config
echo "shared=true" >> pyo3.config
echo "abi3=true" >> pyo3.config
echo "lib_name=python3.10" >> pyo3.config
echo "lib_dir=$PYTHON_INSTALL_PATH_AMD64/lib" >> pyo3.config
echo "executable=$PYTHON_INSTALL_PATH_AMD64/bin/python3" >> pyo3.config
echo "pointer_width=64" >> pyo3.config
echo "build_flags=" >> pyo3.config
echo "suppress_build_script_link_lines=false" >> pyo3.config
cat pyo3.config
export PYO3_CONFIG_FILE=${PWD}/pyo3.config
alias python=$PYTHON_INSTALL_PATH_AMD64/bin/python3
alias pip=$PYTHON_INSTALL_PATH_AMD64/bin/python3-pip
cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Run cargo build
run: cargo build ${{ matrix.opts }} --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }}
if: contains(matrix.arch, 'darwin') || contains(matrix.opts, 'pyo3_backend') == false
run: cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Calculate checksum and rename binary
shell: bash
@@ -129,89 +212,16 @@ jobs:
with:
name: ${{ matrix.file }}.sha256sum
path: target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}/${{ matrix.file }}.sha256sum
release:
name: Release artifacts
needs: [build]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Download artifacts
uses: actions/download-artifact@v3
- name: Configure scheduled build version # the version would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}, like v0.1.0-alpha-20221119-weekly.
shell: bash
if: github.event_name == 'schedule'
run: |
buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-$buildTime-${{ env.SCHEDULED_PERIOD }}
echo "SCHEDULED_BUILD_VERSION=${SCHEDULED_BUILD_VERSION}" >> $GITHUB_ENV
- name: Create scheduled build git tag
if: github.event_name == 'schedule'
run: |
git tag ${{ env.SCHEDULED_BUILD_VERSION }}
- name: Publish scheduled release # configure the different release title and tags.
uses: softprops/action-gh-release@v1
if: github.event_name == 'schedule'
with:
name: "Release ${{ env.SCHEDULED_BUILD_VERSION }}"
tag_name: ${{ env.SCHEDULED_BUILD_VERSION }}
generate_release_notes: true
files: |
**/greptime-*
- name: Publish release
uses: softprops/action-gh-release@v1
if: github.event_name != 'schedule'
with:
name: "Release ${{ github.ref_name }}"
files: |
**/greptime-*
docker:
name: Build docker image
needs: [build]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Download amd64 binary
uses: actions/download-artifact@v3
with:
name: greptime-linux-amd64
path: amd64
- name: Unzip the amd64 artifacts
run: |
cd amd64
tar xvf greptime-linux-amd64.tgz
rm greptime-linux-amd64.tgz
- name: Download arm64 binary
uses: actions/download-artifact@v3
with:
name: greptime-linux-arm64
path: arm64
- name: Unzip the arm64 artifacts
run: |
cd arm64
tar xvf greptime-linux-arm64.tgz
rm greptime-linux-arm64.tgz
- name: Login to UCloud Container Registry
uses: docker/login-action@v2
with:
registry: uhub.service.ucloud.cn
username: ${{ secrets.UCLOUD_USERNAME }}
password: ${{ secrets.UCLOUD_PASSWORD }}
- name: Login to Dockerhub
uses: docker/login-action@v2
with:
@@ -239,15 +249,141 @@ jobs:
- name: Set up buildx
uses: docker/setup-buildx-action@v2
- name: Build and push
uses: docker/build-push-action@v3
- name: Download amd64 binary
uses: actions/download-artifact@v3
with:
context: .
name: greptime-linux-amd64-pyo3
path: amd64
- name: Unzip the amd64 artifacts
run: |
tar xvf amd64/greptime-linux-amd64-pyo3.tgz -C amd64/ && rm amd64/greptime-linux-amd64-pyo3.tgz
cp -r amd64 docker/ci
- name: Download arm64 binary
id: download-arm64
uses: actions/download-artifact@v3
with:
name: greptime-linux-arm64-pyo3
path: arm64
- name: Unzip the arm64 artifacts
id: unzip-arm64
if: success() || steps.download-arm64.conclusion == 'success'
run: |
tar xvf arm64/greptime-linux-arm64-pyo3.tgz -C arm64/ && rm arm64/greptime-linux-arm64-pyo3.tgz
cp -r arm64 docker/ci
- name: Build and push all
uses: docker/build-push-action@v3
if: success() || steps.unzip-arm64.conclusion == 'success' # Build and push all platform if unzip-arm64 succeeds
with:
context: ./docker/ci/
file: ./docker/ci/Dockerfile
push: true
platforms: linux/amd64,linux/arm64
tags: |
greptime/greptimedb:latest
greptime/greptimedb:${{ env.IMAGE_TAG }}
uhub.service.ucloud.cn/greptime/greptimedb:latest
uhub.service.ucloud.cn/greptime/greptimedb:${{ env.IMAGE_TAG }}
- name: Build and push amd64 only
uses: docker/build-push-action@v3
if: success() || steps.download-arm64.conclusion == 'failure' # Only build and push amd64 platform if download-arm64 fails
with:
context: ./docker/ci/
file: ./docker/ci/Dockerfile
push: true
platforms: linux/amd64
tags: |
greptime/greptimedb:latest
greptime/greptimedb:${{ env.IMAGE_TAG }}
release:
name: Release artifacts
# Release artifacts only when all the artifacts are built successfully.
needs: [build,docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Download artifacts
uses: actions/download-artifact@v3
- name: Configure scheduled build version # the version would be ${SCHEDULED_BUILD_VERSION_PREFIX}-${SCHEDULED_PERIOD}-YYYYMMDD, like v0.2.0-nightly-20230313.
shell: bash
if: github.event_name == 'schedule'
run: |
buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-${{ env.SCHEDULED_PERIOD }}-$buildTime
echo "SCHEDULED_BUILD_VERSION=${SCHEDULED_BUILD_VERSION}" >> $GITHUB_ENV
- name: Create scheduled build git tag
if: github.event_name == 'schedule'
run: |
git tag ${{ env.SCHEDULED_BUILD_VERSION }}
- name: Publish scheduled release # configure the different release title and tags.
uses: softprops/action-gh-release@v1
if: github.event_name == 'schedule'
with:
name: "Release ${{ env.SCHEDULED_BUILD_VERSION }}"
tag_name: ${{ env.SCHEDULED_BUILD_VERSION }}
generate_release_notes: true
files: |
**/greptime-*
- name: Publish release
uses: softprops/action-gh-release@v1
if: github.event_name != 'schedule'
with:
name: "Release ${{ github.ref_name }}"
files: |
**/greptime-*
docker-push-uhub:
name: Push docker image to UCloud Container Registry
needs: [docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
# Push to uhub may fail(500 error), but we don't want to block the release process. The failed job will be retried manually.
continue-on-error: true
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to UCloud Container Registry
uses: docker/login-action@v2
with:
registry: uhub.service.ucloud.cn
username: ${{ secrets.UCLOUD_USERNAME }}
password: ${{ secrets.UCLOUD_PASSWORD }}
- name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}
shell: bash
if: github.event_name == 'schedule'
run: |
buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-$buildTime-${{ env.SCHEDULED_PERIOD }}
echo "IMAGE_TAG=${SCHEDULED_BUILD_VERSION:1}" >> $GITHUB_ENV
- name: Configure tag # If the release tag is v0.1.0, then the image version tag will be 0.1.0.
shell: bash
if: github.event_name != 'schedule'
run: |
VERSION=${{ github.ref_name }}
echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV
- name: Push image to uhub # Use 'docker buildx imagetools create' to create a new image base on source image.
run: |
docker buildx imagetools create \
--tag uhub.service.ucloud.cn/greptime/greptimedb:latest \
--tag uhub.service.ucloud.cn/greptime/greptimedb:${{ env.IMAGE_TAG }} \
greptime/greptimedb:${{ env.IMAGE_TAG }}

4
.gitignore vendored
View File

@@ -35,3 +35,7 @@ benchmarks/data
# dotenv
.env
# dashboard files
!/src/servers/dashboard/VERSION
/src/servers/dashboard/*

View File

@@ -1,4 +1,4 @@
# Welcome!
# Welcome 👋
Thanks a lot for considering contributing to GreptimeDB. We believe people like you would make GreptimeDB a great product. We intend to build a community where individuals can have open talks, show respect for one another, and speak with true ❤️. Meanwhile, we are to keep transparency and make your effort count here.
@@ -50,34 +50,33 @@ GreptimeDB uses the [Apache 2.0 license](https://github.com/GreptimeTeam/greptim
- To ensure that community is free and confident in its ability to use your contributions, please sign the Contributor License Agreement (CLA) which will be incorporated in the pull request process.
- Make sure all your codes are formatted and follow the [coding style](https://pingcap.github.io/style-guide/rust/).
- Make sure all unit tests are passed.
- Make sure all clippy warnings are fixed (you can check it locally by running `cargo clippy --workspace --all-targets -- -D warnings -D clippy::print_stdout -D clippy::print_stderr`).
- Make sure all unit tests are passed (using `cargo test --workspace` or [nextest](https://nexte.st/index.html) `cargo nextest run`).
- Make sure all clippy warnings are fixed (you can check it locally by running `cargo clippy --workspace --all-targets -- -D warnings`).
#### `pre-commit` Hooks
You could setup the [`pre-commit`](https://pre-commit.com/#plugins) hooks to run these checks on every commit automatically.
1. Install `pre-commit`
```
$ pip install pre-commit
```
or
```
$ brew install pre-commit
```
pip install pre-commit
or
brew install pre-commit
2. Install the `pre-commit` hooks
```
$ pre-commit install
pre-commit installed at .git/hooks/pre-commit
$ pre-commit install --hook-type commit-msg
pre-commit installed at .git/hooks/commit-msg
$ pre-commit install
pre-commit installed at .git/hooks/pre-commit
$ pre-commit install --hook-type pre-push
pre-commit installed at .git/hooks/pre-push
```
$ pre-commit install --hook-type commit-msg
pre-commit installed at .git/hooks/commit-msg
now `pre-commit` will run automatically on `git commit`.
$ pre-commit install --hook-type pre-push
pre-commit installed at .git/hooks/pre-push
Now, `pre-commit` will run automatically on `git commit`.
### Title
@@ -102,10 +101,12 @@ of what you were trying to do and what went wrong. You can also reach for help i
## Community
The core team will be thrilled if you participate in any way you like. When you are stuck, try ask for help by filing an issue, with a detailed description of what you were trying to do and what went wrong. If you have any questions or if you would like to get involved in our community, please check out:
- [GreptimeDB Community Slack](https://greptime.com/slack)
- [GreptimeDB Github Discussions](https://github.com/GreptimeTeam/greptimedb/discussions)
Also, see some extra GreptimeDB content:
- [GreptimeDB Docs](https://greptime.com/docs)
- [Learn GreptimeDB](https://greptime.com/products/db)
- [Greptime Inc. Website](https://greptime.com)

2853
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -7,20 +7,24 @@ members = [
"src/cmd",
"src/common/base",
"src/common/catalog",
"src/common/datasource",
"src/common/error",
"src/common/function",
"src/common/function-macro",
"src/common/grpc",
"src/common/grpc-expr",
"src/common/mem-prof",
"src/common/procedure",
"src/common/query",
"src/common/recordbatch",
"src/common/runtime",
"src/common/substrait",
"src/common/telemetry",
"src/common/test-util",
"src/common/time",
"src/datanode",
"src/datatypes",
"src/file-table-engine",
"src/frontend",
"src/log-store",
"src/meta-client",
@@ -37,48 +41,50 @@ members = [
"src/storage",
"src/store-api",
"src/table",
"src/table-procedure",
"tests-integration",
"tests/runner",
]
[workspace.package]
version = "0.1.0"
version = "0.2.0"
edition = "2021"
license = "Apache-2.0"
[workspace.dependencies]
arrow = "29.0"
arrow-array = "29.0"
arrow-flight = "29.0"
arrow-schema = { version = "29.0", features = ["serde"] }
arrow = { version = "37.0" }
arrow-array = "37.0"
arrow-flight = "37.0"
arrow-schema = { version = "37.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
chrono = { version = "0.4", features = ["serde"] }
# TODO(LFC): Use released Datafusion when it officially depends on Arrow 29.0
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
futures = "0.3"
futures-util = "0.3"
parquet = "29.0"
parquet = "37.0"
paste = "1.0"
prost = "0.11"
rand = "0.8"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
snafu = { version = "0.7", features = ["backtraces"] }
sqlparser = "0.28"
sqlparser = "0.33"
tempfile = "3"
tokio = { version = "1.24.2", features = ["full"] }
tokio-util = "0.7"
tonic = { version = "0.8", features = ["tls"] }
tokio-util = { version = "0.7", features = ["io-util"] }
tonic = { version = "0.9", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
[profile.release]
debug = true
[profile.weekly]
[profile.nightly]
inherits = "release"
strip = true
lto = "thin"

View File

@@ -19,6 +19,14 @@ clean: ## Clean the project.
fmt: ## Format all the Rust code.
cargo fmt --all
.PHONY: fmt-toml
fmt-toml: ## Format all TOML files.
taplo format --option "indent_string= "
.PHONY: check-toml
check-toml: ## Check all TOML files.
taplo format --check --option "indent_string= "
.PHONY: docker-image
docker-image: ## Build docker image.
docker build --network host -f docker/Dockerfile -t ${IMAGE_REGISTRY}:${IMAGE_TAG} .
@@ -43,7 +51,7 @@ check: ## Cargo check all the targets.
.PHONY: clippy
clippy: ## Check clippy rules.
cargo clippy --workspace --all-targets -- -D warnings -D clippy::print_stdout -D clippy::print_stderr
cargo clippy --workspace --all-targets -- -D warnings
.PHONY: fmt-check
fmt-check: ## Check code format.

View File

@@ -1,14 +1,14 @@
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="/docs/logo-text-padding.png">
<source media="(prefers-color-scheme: dark)" srcset="/docs/logo-text-padding-dark.png">
<img alt="GreptimeDB Logo" src="/docs/logo-text-padding.png" width="400px">
<source media="(prefers-color-scheme: light)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding.png">
<source media="(prefers-color-scheme: dark)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding-dark.png">
<img alt="GreptimeDB Logo" src="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding.png" width="400px">
</picture>
</p>
<h3 align="center">
The next-generation hybrid timeseries/analytics processing database in the cloud
The next-generation hybrid time-series/analytics processing database in the cloud
</h3>
<p align="center">
@@ -23,6 +23,8 @@
<a href="https://twitter.com/greptime"><img src="https://img.shields.io/badge/twitter-follow_us-1d9bf0.svg"></a>
&nbsp;
<a href="https://www.linkedin.com/company/greptime/"><img src="https://img.shields.io/badge/linkedin-connect_with_us-0a66c2.svg"></a>
&nbsp;
<a href="https://greptime.com/slack"><img src="https://img.shields.io/badge/slack-GreptimeDB-0abd59?logo=slack" alt="slack" /></a>
</p>
## What is GreptimeDB
@@ -36,11 +38,11 @@ Our core developers have been building time-series data platform
for years. Based on their best-practices, GreptimeDB is born to give you:
- A standalone binary that scales to highly-available distributed cluster, providing a transparent experience for cluster users
- Optimized columnar layout for handling time-series data; compacted, compressed, stored on various storage backends
- Flexible index options, tackling high cardinality issues down
- Optimized columnar layout for handling time-series data; compacted, compressed, and stored on various storage backends
- Flexible indexes, tackling high cardinality issues down
- Distributed, parallel query execution, leveraging elastic computing resource
- Native SQL, and Python scripting for advanced analytical scenarios
- Widely adopted database protocols and APIs
- Widely adopted database protocols and APIs, native PromQL supports
- Extensible table engine architecture for extensive workloads
## Quick Start
@@ -61,6 +63,12 @@ To compile GreptimeDB from source, you'll need:
find an installation instructions [here](https://grpc.io/docs/protoc-installation/).
**Note that `protoc` version needs to be >= 3.15** because we have used the `optional`
keyword. You can check it with `protoc --version`.
- python3-dev or python3-devel (optional; only needed if you want to run scripts
  in CPython, which also requires enabling the `pyo3_backend` feature when compiling, either via
  `cargo run -F pyo3_backend` or by adding `pyo3_backend` to `features.default` in
  src/script/Cargo.toml, e.g. `default = ["python", "pyo3_backend"]`): this installs the Python
  shared library required to run the Python scripting engine in CPython mode. It is available as
  `python3-dev` on Ubuntu (install it with `sudo apt install python3-dev`) or as `python3-devel`
  on RPM-based distributions (e.g. Fedora, Red Hat, SuSE). macOS's `Python3` package should ship
  this shared library by default. More detail on compiling with PyO3 can be found in
  [PyO3](https://pyo3.rs/v0.18.1/building_and_distribution#configuring-the-python-version)'s documentation.
#### Build with Docker
@@ -141,9 +149,9 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
### Installation
- [Pre-built Binaries](https://github.com/GreptimeTeam/greptimedb/releases):
downloadable pre-built binaries for Linux and MacOS
- [Docker Images](https://hub.docker.com/r/greptime/greptimedb): pre-built
Docker images
For Linux and macOS, you can easily download pre-built binaries that are ready to use. In most cases, downloading the version without PyO3 is sufficient. However, if you plan to run scripts in CPython (and use Python packages like NumPy and Pandas), you will need to download the version with PyO3 and install a Python interpreter of the same version as the one the PyO3 build was made for. We recommend using virtualenv to manage multiple Python versions.
- [Docker Images](https://hub.docker.com/r/greptime/greptimedb) (**recommended**): pre-built
  Docker images; this is the easiest way to try GreptimeDB. By default, it runs CPython scripts with `pyo3_backend` enabled.
- [`gtctl`](https://github.com/GreptimeTeam/gtctl): the command-line tool for
Kubernetes deployment
@@ -152,6 +160,7 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
- GreptimeDB [User Guide](https://docs.greptime.com/user-guide/concepts.html)
- GreptimeDB [Developer
Guide](https://docs.greptime.com/developer-guide/overview.html)
- GreptimeDB [internal code document](https://greptimedb.rs)
### Dashboard
- [The dashboard UI for GreptimeDB](https://github.com/GreptimeTeam/dashboard)

19
SECURITY.md Normal file
View File

@@ -0,0 +1,19 @@
# Security Policy
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| >= v0.1.0 | :white_check_mark: |
| < v0.1.0 | :x: |
## Reporting a Vulnerability
We place great importance on the security of GreptimeDB code, software,
and cloud platform. If you come across a security vulnerability in GreptimeDB,
we kindly request that you inform us immediately. We will thoroughly investigate
all valid reports and make every effort to resolve the issue promptly.
To report any issues or vulnerabilities, please email us at info@greptime.com, rather than
posting publicly on GitHub. Be sure to provide us with the version identifier as well as details
on how the vulnerability can be exploited.

View File

@@ -21,13 +21,13 @@ use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::time::Instant;
use arrow::array::{ArrayRef, PrimitiveArray, StringArray, TimestampNanosecondArray};
use arrow::array::{ArrayRef, PrimitiveArray, StringArray, TimestampMicrosecondArray};
use arrow::datatypes::{DataType, Float64Type, Int64Type};
use arrow::record_batch::RecordBatch;
use clap::Parser;
use client::api::v1::column::Values;
use client::api::v1::{Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest, TableId};
use client::{Client, Database};
use client::api::v1::{Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest};
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
use tokio::task::JoinSet;
@@ -61,7 +61,7 @@ struct Args {
#[arg(long = "skip-read")]
skip_read: bool,
#[arg(short, long, default_value_t = String::from("127.0.0.1:3001"))]
#[arg(short, long, default_value_t = String::from("127.0.0.1:4001"))]
endpoint: String,
}
@@ -97,6 +97,9 @@ async fn write_data(
for record_batch in record_batch_reader {
let record_batch = record_batch.unwrap();
if !is_record_batch_full(&record_batch) {
continue;
}
let (columns, row_count) = convert_record_batch(record_batch);
let request = InsertRequest {
table_name: TABLE_NAME.to_string(),
@@ -122,11 +125,17 @@ fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
let mut columns = vec![];
for (array, field) in record_batch.columns().iter().zip(fields.iter()) {
let values = build_values(array);
let (values, datatype) = build_values(array);
let column = Column {
column_name: field.name().to_owned(),
column_name: field.name().clone(),
values: Some(values),
null_mask: vec![],
null_mask: array
.to_data()
.nulls()
.map(|bitmap| bitmap.buffer().as_slice().to_vec())
.unwrap_or_default(),
datatype: datatype.into(),
// datatype and semantic_type are set to default
..Default::default()
};
@@ -136,7 +145,7 @@ fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
(columns, row_count as _)
}
fn build_values(column: &ArrayRef) -> Values {
fn build_values(column: &ArrayRef) -> (Values, ColumnDataType) {
match column.data_type() {
DataType::Int64 => {
let array = column
@@ -144,10 +153,13 @@ fn build_values(column: &ArrayRef) -> Values {
.downcast_ref::<PrimitiveArray<Int64Type>>()
.unwrap();
let values = array.values();
Values {
i64_values: values.to_vec(),
..Default::default()
}
(
Values {
i64_values: values.to_vec(),
..Default::default()
},
ColumnDataType::Int64,
)
}
DataType::Float64 => {
let array = column
@@ -155,29 +167,38 @@ fn build_values(column: &ArrayRef) -> Values {
.downcast_ref::<PrimitiveArray<Float64Type>>()
.unwrap();
let values = array.values();
Values {
f64_values: values.to_vec(),
..Default::default()
}
(
Values {
f64_values: values.to_vec(),
..Default::default()
},
ColumnDataType::Float64,
)
}
DataType::Timestamp(_, _) => {
let array = column
.as_any()
.downcast_ref::<TimestampNanosecondArray>()
.downcast_ref::<TimestampMicrosecondArray>()
.unwrap();
let values = array.values();
Values {
i64_values: values.to_vec(),
..Default::default()
}
(
Values {
ts_microsecond_values: values.to_vec(),
..Default::default()
},
ColumnDataType::TimestampMicrosecond,
)
}
DataType::Utf8 => {
let array = column.as_any().downcast_ref::<StringArray>().unwrap();
let values = array.iter().filter_map(|s| s.map(String::from)).collect();
Values {
string_values: values,
..Default::default()
}
(
Values {
string_values: values,
..Default::default()
},
ColumnDataType::String,
)
}
DataType::Null
| DataType::Boolean
@@ -204,14 +225,19 @@ fn build_values(column: &ArrayRef) -> Values {
| DataType::FixedSizeList(_, _)
| DataType::LargeList(_)
| DataType::Struct(_)
| DataType::Union(_, _, _)
| DataType::Union(_, _)
| DataType::Dictionary(_, _)
| DataType::Decimal128(_, _)
| DataType::Decimal256(_, _)
| DataType::RunEndEncoded(_, _)
| DataType::Map(_, _) => todo!(),
}
}
fn is_record_batch_full(batch: &RecordBatch) -> bool {
batch.columns().iter().all(|col| col.null_count() == 0)
}
fn create_table_expr() -> CreateTableExpr {
CreateTableExpr {
catalog_name: CATALOG_NAME.to_string(),
@@ -227,13 +253,13 @@ fn create_table_expr() -> CreateTableExpr {
},
ColumnDef {
name: "tpep_pickup_datetime".to_string(),
datatype: ColumnDataType::Int64 as i32,
datatype: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: true,
default_constraint: vec![],
},
ColumnDef {
name: "tpep_dropoff_datetime".to_string(),
datatype: ColumnDataType::Int64 as i32,
datatype: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: true,
default_constraint: vec![],
},
@@ -339,7 +365,8 @@ fn create_table_expr() -> CreateTableExpr {
create_if_not_exists: false,
table_options: Default::default(),
region_ids: vec![0],
table_id: Some(TableId { id: 0 }),
table_id: None,
engine: "mito".to_string(),
}
}
@@ -422,7 +449,7 @@ fn main() {
.unwrap()
.block_on(async {
let client = Client::with_urls(vec![&args.endpoint]);
let db = Database::with_client(client);
let db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
if !args.skip_write {
do_write(&args, &db).await;

View File

@@ -8,3 +8,5 @@ coverage:
ignore:
- "**/error*.rs" # ignore all error.rs files
- "tests/runner/*.rs" # ignore integration test runner
comment: # this is a top-level key
layout: "diff"

View File

@@ -1,31 +1,60 @@
node_id = 42
mode = 'distributed'
rpc_addr = '127.0.0.1:3001'
rpc_hostname = '127.0.0.1'
rpc_runtime_size = 8
mysql_addr = '127.0.0.1:4406'
mysql_runtime_size = 4
# Node running mode, see `standalone.example.toml`.
mode = "distributed"
# Whether to use in-memory catalog, see `standalone.example.toml`.
enable_memory_catalog = false
# The datanode identifier, should be unique.
node_id = 42
# gRPC server address, "127.0.0.1:3001" by default.
rpc_addr = "127.0.0.1:3001"
# Hostname of this node.
rpc_hostname = "127.0.0.1"
# The number of gRPC server worker threads, 8 by default.
rpc_runtime_size = 8
# Metasrv client options.
[meta_client_options]
# Metasrv address list.
metasrv_addrs = ["127.0.0.1:3002"]
# Operation timeout in milliseconds, 3000 by default.
timeout_millis = 3000
# Connect server timeout in milliseconds, 5000 by default.
connect_timeout_millis = 5000
# `TCP_NODELAY` option for accepted connections, true by default.
tcp_nodelay = true
# WAL options, see `standalone.example.toml`.
[wal]
dir = "/tmp/greptimedb/wal"
file_size = '1GB'
purge_interval = '10m'
purge_threshold = '50GB'
file_size = "1GB"
purge_threshold = "50GB"
purge_interval = "10m"
read_batch_size = 128
sync_write = false
# Storage options, see `standalone.example.toml`.
[storage]
type = 'File'
data_dir = '/tmp/greptimedb/data/'
type = "File"
data_dir = "/tmp/greptimedb/data/"
[meta_client_opts]
metasrv_addrs = ['127.0.0.1:3002']
timeout_millis = 3000
connect_timeout_millis = 5000
tcp_nodelay = false
[compaction]
# Compaction options, see `standalone.example.toml`.
[storage.compaction]
max_inflight_tasks = 4
max_files_in_level0 = 16
max_files_in_level0 = 8
max_purge_tasks = 32
# Storage manifest options
[storage.manifest]
# Region checkpoint actions margin.
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
# Procedure storage options, see `standalone.example.toml`.
# [procedure.store]
# type = "File"
# data_dir = "/tmp/greptimedb/procedure/"
# max_retry_times = 3
# retry_delay = "500ms"

View File

@@ -1,12 +1,58 @@
mode = 'distributed'
datanode_rpc_addr = '127.0.0.1:3001'
# Node running mode, see `standalone.example.toml`.
mode = "distributed"
# HTTP server options, see `standalone.example.toml`.
[http_options]
addr = '127.0.0.1:4000'
addr = "127.0.0.1:4000"
timeout = "30s"
[meta_client_opts]
metasrv_addrs = ['127.0.0.1:3002']
# gRPC server options, see `standalone.example.toml`.
[grpc_options]
addr = "127.0.0.1:4001"
runtime_size = 8
# MySQL server options, see `standalone.example.toml`.
[mysql_options]
addr = "127.0.0.1:4002"
runtime_size = 2
# MySQL server TLS options, see `standalone.example.toml`.
[mysql_options.tls]
mode = "disable"
cert_path = ""
key_path = ""
# PostgreSQL server options, see `standalone.example.toml`.
[postgres_options]
addr = "127.0.0.1:4003"
runtime_size = 2
# PostgreSQL server TLS options, see `standalone.example.toml`.
[postgres_options.tls]
mode = "disable"
cert_path = ""
key_path = ""
# OpenTSDB protocol options, see `standalone.example.toml`.
[opentsdb_options]
addr = "127.0.0.1:4242"
runtime_size = 2
# InfluxDB protocol options, see `standalone.example.toml`.
[influxdb_options]
enable = true
# Prometheus protocol options, see `standalone.example.toml`.
[prometheus_options]
enable = true
# Prometheus protocol options, see `standalone.example.toml`.
[prom_options]
addr = "127.0.0.1:4004"
# Metasrv client options, see `datanode.example.toml`.
[meta_client_options]
metasrv_addrs = ["127.0.0.1:3002"]
timeout_millis = 3000
connect_timeout_millis = 5000
tcp_nodelay = false
tcp_nodelay = true

View File

@@ -1,6 +1,15 @@
bind_addr = '127.0.0.1:3002'
server_addr = '127.0.0.1:3002'
store_addr = '127.0.0.1:2379'
# The bind address of metasrv, "127.0.0.1:3002" by default.
bind_addr = "127.0.0.1:3002"
# The communication server address for frontend and datanode to connect to metasrv, "127.0.0.1:3002" by default for localhost.
server_addr = "127.0.0.1:3002"
# Etcd server address, "127.0.0.1:2379" by default.
store_addr = "127.0.0.1:2379"
# Datanode lease in seconds, 15 seconds by default.
datanode_lease_secs = 15
# selector: 'LeaseBased', 'LoadBased'
selector = 'LeaseBased'
# Datanode selector type.
# - "LeaseBased" (default value).
# - "LoadBased"
# For details, please see "https://docs.greptime.com/developer-guide/meta/selector".
selector = "LeaseBased"
# Store data in memory, false by default.
use_memory_store = false

View File

@@ -1,44 +1,130 @@
node_id = 0
mode = 'standalone'
# Node running mode, "standalone" or "distributed".
mode = "standalone"
# Whether to use in-memory catalog, `false` by default.
enable_memory_catalog = false
# HTTP server options.
[http_options]
addr = '127.0.0.1:4000'
# Server address, "127.0.0.1:4000" by default.
addr = "127.0.0.1:4000"
# HTTP request timeout, 30s by default.
timeout = "30s"
[wal]
dir = "/tmp/greptimedb/wal"
file_size = '1GB'
purge_interval = '10m'
purge_threshold = '50GB'
read_batch_size = 128
sync_write = false
[storage]
type = 'File'
data_dir = '/tmp/greptimedb/data/'
# gRPC server options.
[grpc_options]
addr = '127.0.0.1:4001'
# Server address, "127.0.0.1:4001" by default.
addr = "127.0.0.1:4001"
# The number of server worker threads, 8 by default.
runtime_size = 8
# MySQL server options.
[mysql_options]
addr = '127.0.0.1:4002'
# Server address, "127.0.0.1:4002" by default.
addr = "127.0.0.1:4002"
# The number of server worker threads, 2 by default.
runtime_size = 2
[influxdb_options]
enable = true
[opentsdb_options]
addr = '127.0.0.1:4242'
enable = true
runtime_size = 2
[prometheus_options]
enable = true
# MySQL server TLS options.
[mysql_options.tls]
# TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
# - "disable" (default value)
# - "prefer"
# - "require"
# - "verify-ca"
# - "verify-full"
mode = "disable"
# Certificate file path.
cert_path = ""
# Private key file path.
key_path = ""
# PostgreSQL server options.
[postgres_options]
addr = '127.0.0.1:4003'
# Server address, "127.0.0.1:4003" by default.
addr = "127.0.0.1:4003"
# The number of server worker threads, 2 by default.
runtime_size = 2
check_pwd = false
# PostgreSQL server TLS options, see `[mysql_options.tls]` section.
[postgres_options.tls]
# TLS mode.
mode = "disable"
# certificate file path.
cert_path = ""
# private key file path.
key_path = ""
# OpenTSDB protocol options.
[opentsdb_options]
# OpenTSDB telnet API server address, "127.0.0.1:4242" by default.
addr = "127.0.0.1:4242"
# The number of server worker threads, 2 by default.
runtime_size = 2
# InfluxDB protocol options.
[influxdb_options]
# Whether to enable InfluxDB protocol in HTTP API, true by default.
enable = true
# Prometheus protocol options.
[prometheus_options]
# Whether to enable Prometheus remote write and read in HTTP API, true by default.
enable = true
# Prom protocol options.
[prom_options]
# Prometheus API server address, "127.0.0.1:4004" by default.
addr = "127.0.0.1:4004"
# WAL options.
[wal]
# WAL data directory.
dir = "/tmp/greptimedb/wal"
# WAL file size in bytes.
file_size = "1GB"
# WAL purge threshold in bytes.
purge_threshold = "50GB"
# WAL purge interval in seconds.
purge_interval = "10m"
# WAL read batch size.
read_batch_size = 128
# Whether to sync log file after every write.
sync_write = false
# Storage options.
[storage]
# Storage type.
type = "File"
# Data directory, "/tmp/greptimedb/data" by default.
data_dir = "/tmp/greptimedb/data/"
# Compaction options.
[storage.compaction]
# Max task number that can concurrently run.
max_inflight_tasks = 4
# Max files in level 0 to trigger compaction.
max_files_in_level0 = 8
# Max task number for SST purge task after compaction.
max_purge_tasks = 32
# Storage manifest options
[storage.manifest]
# Region checkpoint actions margin.
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
# Procedure storage options.
# Uncomment to enable.
# [procedure.store]
# # Storage type.
# type = "File"
# # Procedure data path.
# data_dir = "/tmp/greptimedb/procedure/"
# # Procedure max retry time.
# max_retry_times = 3
# # Initial retry delay of procedures, increases exponentially
# retry_delay = "500ms"

View File

@@ -9,7 +9,10 @@ RUN apt-get update && apt-get install -y \
protobuf-compiler \
curl \
build-essential \
pkg-config
pkg-config \
python3 \
python3-dev \
&& pip install pyarrow
# Install Rust.
SHELL ["/bin/bash", "-c"]

57
docker/aarch64/Dockerfile Normal file
View File

@@ -0,0 +1,57 @@
FROM ubuntu:22.04 as builder
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Install dependencies.
RUN apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
build-essential \
pkg-config \
wget
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /root/.cargo/bin/:$PATH
# Install cross platform toolchain
RUN apt-get -y update && \
apt-get -y install g++-aarch64-linux-gnu gcc-aarch64-linux-gnu && \
apt-get install binutils-aarch64-linux-gnu
COPY ./docker/aarch64/compile-python.sh ./docker/aarch64/
RUN chmod +x ./docker/aarch64/compile-python.sh && \
./docker/aarch64/compile-python.sh
COPY ./rust-toolchain.toml .
# Install rustup target for cross compiling.
RUN rustup target add aarch64-unknown-linux-gnu
COPY . .
# Update dependency, using separate `RUN` to separate cache
RUN cargo fetch
# These three env vars are set in the script, so they are set manually in the Dockerfile.
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
ENV LIBRARY_PATH=$LIBRARY_PATH:/usr/local/lib/
ENV PY_INSTALL_PATH=/greptimedb/python_arm64_build
# Set the environment variable for cross compiling and compile it
# cross-compiled python is `python3` in path, but pyo3 needs `python` in path, so alias it
# Build the project in release mode.
RUN export PYO3_CROSS_LIB_DIR=$PY_INSTALL_PATH/lib && \
alias python=python3 && \
cargo build --target aarch64-unknown-linux-gnu --release -F pyo3_backend
# Exporting the binary to the clean image
FROM ubuntu:22.04 as base
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install ca-certificates
WORKDIR /greptime
COPY --from=builder /greptimedb/target/aarch64-unknown-linux-gnu/release/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENTRYPOINT ["greptime"]

View File

@@ -0,0 +1,87 @@
#!/usr/bin/env bash
set -e
# this script will download Python source code, compile it, and install it to /usr/local/lib
# then use this python to compile cross-compiled python for aarch64
ARCH=$1
PYTHON_VERSION=3.10.10
PYTHON_SOURCE_DIR=Python-${PYTHON_VERSION}
PYTHON_INSTALL_PATH_AMD64=${PWD}/python-${PYTHON_VERSION}/amd64
PYTHON_INSTALL_PATH_AARCH64=${PWD}/python-${PYTHON_VERSION}/aarch64
function download_python_source_code() {
wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz
tar -xvf Python-$PYTHON_VERSION.tgz
}
function compile_for_amd64_platform() {
mkdir -p "$PYTHON_INSTALL_PATH_AMD64"
echo "Compiling for amd64 platform..."
./configure \
--prefix="$PYTHON_INSTALL_PATH_AMD64" \
--enable-shared \
ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes \
ac_cv_have_long_long_format=yes \
--disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no
make
make install
}
# explain Python compile options here a bit:
# --enable-shared: enable building a shared Python library (default is no) but we do need it for calling from rust
# CC, CXX, AR, LD, RANLIB: set the compiler, archiver, linker, and ranlib programs to use
# build: the machine you are building on, host: the machine you will run the compiled program on
# --with-system-ffi: build _ctypes module using an installed ffi library, see Doc/library/ctypes.rst, not used in here TODO: could remove
# ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes:
# allow cross-compiled python to have -pthread set for CXX, see https://github.com/python/cpython/pull/22525
# ac_cv_have_long_long_format=yes: target platform supports long long type
# disable-ipv6: disable ipv6 support, we don't need it in here
# ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no: disable pty support, we don't need it in here
function compile_for_aarch64_platform() {
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
mkdir -p "$PYTHON_INSTALL_PATH_AARCH64"
echo "Compiling for aarch64 platform..."
echo "LD_LIBRARY_PATH: $LD_LIBRARY_PATH"
echo "LIBRARY_PATH: $LIBRARY_PATH"
echo "PATH: $PATH"
./configure --build=x86_64-linux-gnu --host=aarch64-linux-gnu \
--prefix="$PYTHON_INSTALL_PATH_AARCH64" --enable-optimizations \
CC=aarch64-linux-gnu-gcc \
CXX=aarch64-linux-gnu-g++ \
AR=aarch64-linux-gnu-ar \
LD=aarch64-linux-gnu-ld \
RANLIB=aarch64-linux-gnu-ranlib \
--enable-shared \
ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes \
ac_cv_have_long_long_format=yes \
--disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no
make
make altinstall
}
# Main script starts here.
download_python_source_code
# Enter the python source code directory.
cd $PYTHON_SOURCE_DIR || exit 1
# Build local python first, then build cross-compiled python.
compile_for_amd64_platform
# Clean the build directory.
make clean && make distclean
# Cross compile python for aarch64.
if [ "$ARCH" = "aarch64-unknown-linux-gnu" ]; then
compile_for_aarch64_platform
fi

View File

@@ -1,6 +1,14 @@
FROM ubuntu:22.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install ca-certificates
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates \
python3.10 \
python3.10-dev \
python3-pip
COPY requirements.txt /etc/greptime/requirements.txt
RUN python3 -m pip install -r /etc/greptime/requirements.txt
ARG TARGETARCH

View File

@@ -0,0 +1,5 @@
numpy>=1.24.2
pandas>=1.5.3
pyarrow>=11.0.0
requests>=2.28.2
scipy>=1.10.1

View File

@@ -0,0 +1,74 @@
This document introduces how to implement SQL statements in GreptimeDB.
The execution entry point for SQL statements is located in the Frontend `Instance`. You can see it has
implemented `SqlQueryHandler`:
```rust
impl SqlQueryHandler for Instance {
type Error = Error;
async fn do_query(&self, query: &str, query_ctx: QueryContextRef) -> Vec<Result<Output>> {
// ...
}
}
```
Normally, when a SQL query arrives at GreptimeDB, the `do_query` method will be called. After some parsing work, the SQL
will be fed into `StatementExecutor`:
```rust
// in Frontend Instance:
self.statement_executor.execute_sql(stmt, query_ctx).await
```
That's where we handle our SQL statements. You can simply create a new match arm for your statement there, and the
statement will then be implemented for both GreptimeDB Standalone and Cluster. You can see how `DESCRIBE TABLE` is implemented as
an example.
Now, what if a statement should be handled differently in GreptimeDB Standalone and Cluster? You can see there's
a `SqlStatementExecutor` field in `StatementExecutor`. GreptimeDB Standalone and Cluster each have their own implementation
of `SqlStatementExecutor`. If you are going to implement a statement differently in the two modes (
like `CREATE TABLE`), you have to implement it in their respective `SqlStatementExecutor`s; a rough sketch of this dispatch is shown below.
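For orientation, here is a minimal, self-contained sketch of that dispatch with simplified stand-in types (nothing here is the actual GreptimeDB API):
```rust
// Self-contained sketch with simplified stand-in types; not the real GreptimeDB API.
// It only shows the dispatch shape: one match arm per commonly handled statement,
// everything else delegated to the mode-specific SqlStatementExecutor implementation.
#[allow(dead_code)]
enum Statement {
    DescribeTable(String),
    CreateTable(String),
}

struct Output(String);

trait SqlStatementExecutor {
    fn execute(&self, stmt: &Statement) -> Output;
}

struct StatementExecutor {
    // Standalone and Cluster each plug in their own implementation here.
    mode_specific: Box<dyn SqlStatementExecutor>,
}

impl StatementExecutor {
    fn execute_sql(&self, stmt: Statement) -> Output {
        match stmt {
            // Commonly handled statements: implemented once, shared by both modes.
            Statement::DescribeTable(table) => Output(format!("columns of {table}")),
            // Mode-specific statements (e.g. CREATE TABLE) are delegated.
            other => self.mode_specific.execute(&other),
        }
    }
}
```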
The overall flow is summarized in the diagram below:
```text
SQL query
|
v
+---------------------------+
| SqlQueryHandler::do_query |
+---------------------------+
|
| SQL parsing
v
+--------------------------------+
| StatementExecutor::execute_sql |
+--------------------------------+
|
| SQL execution
v
+----------------------------------+
| commonly handled statements like |
| "plan_exec" for selection or |
+----------------------------------+
| |
For Standalone | | For Cluster
v v
+---------------------------+ +---------------------------+
| SqlStatementExecutor impl | | SqlStatementExecutor impl |
| in Datanode Instance | | in Frontend DistInstance |
+---------------------------+ +---------------------------+
```
Note that some SQL statements can be executed in our QueryEngine, in the form of a `LogicalPlan`. You can follow the
invocation path down to the `QueryEngine` implementation from `StatementExecutor::plan_exec`. For now, there's only
one `DatafusionQueryEngine` for both GreptimeDB Standalone and Cluster. That single query engine works for both modes
because GreptimeDB reads and writes data through the `Table` trait, and each mode has its own `Table` implementation.
We don't have any bias towards whether statements should be handled in the query engine or in `StatementExecutor`. A given
kind of statement can be implemented in either place. For example, `Insert` with selection is handled in the query engine, because
we can easily do the query part there. However, `Insert` without selection is not, because the cost of parsing the statement
into a `LogicalPlan` is not negligible. So generally, if the SQL query is simple enough, you can handle it
in `StatementExecutor`; otherwise, if it is complex or contains a selection part, it should be parsed into a `LogicalPlan`
and handled in the query engine.
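As a rough illustration of that routing decision, here is a tiny sketch with hypothetical stand-in types (not the real statement definitions):
```rust
// Hypothetical routing sketch: a plain INSERT is handled directly, while
// INSERT ... SELECT goes through the query engine as a LogicalPlan.
// Types are simplified stand-ins, not the real API.
enum Insert {
    Values(Vec<String>),  // INSERT INTO t VALUES (...)
    FromSelect(String),   // INSERT INTO t SELECT ...
}

fn handle_insert(insert: Insert) {
    match insert {
        // Cheap path: no need to build a LogicalPlan for plain value inserts.
        Insert::Values(rows) => println!("write {} rows directly", rows.len()),
        // Complex path: let the query engine plan and execute the SELECT part.
        Insert::FromSelect(select_sql) => println!("plan_exec: {select_sql}"),
    }
}

fn main() {
    handle_insert(Insert::Values(vec!["(1, 'a')".to_string()]));
    handle_insert(Insert::FromSelect("SELECT * FROM source".to_string()));
}
```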

View File

@@ -149,10 +149,10 @@ inputs:
- title: 'Series Normalize: \noffset = 0'
operator: prom
inputs:
- title: 'Filter: \ntimetamp > 2022-12-20T10:00:00 && timestamp < 2022-12-21T10:00:00'
- title: 'Filter: \ntimestamp > 2022-12-20T10:00:00 && timestamp < 2022-12-21T10:00:00'
operator: filter
inputs:
- title: 'Table Scan: \ntable = request_duration, timetamp > 2022-12-20T10:00:00 && timestamp < 2022-12-21T10:00:00'
- title: 'Table Scan: \ntable = request_duration, timestamp > 2022-12-20T10:00:00 && timestamp < 2022-12-21T10:00:00'
operator: scan -->
![example](example.png)

View File

@@ -0,0 +1,196 @@
---
Feature Name: "Fault Tolerance for Region"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/1126
Date: 2023-03-08
Author: "Luo Fucong <luofucong@greptime.com>"
---
Fault Tolerance for Region
----------------------
# Summary
This RFC proposes a method to achieve fault tolerance for regions in GreptimeDB's distributed mode. Or, to put it another way, achieving region high availability ("HA") for the GreptimeDB cluster.
In this RFC, we mainly describe two aspects of region HA: how region availability is detected, and what recovery process needs to be taken. We also discuss some alternatives and future work.
When this feature is done, our users can expect a GreptimeDB cluster that can always handle their requests to regions, although some requests may fail during the region failover. Optimization to reduce the MTTR (Mean Time To Recovery) is not a concern of this RFC, and is left for future work.
# Motivation
Fault tolerance for regions is a critical feature for our clients to use the GreptimeDB cluster confidently. High availability for users to interact with their stored data is a "must have" for any TSDB product, and that includes our GreptimeDB cluster.
# Details
## Background
Some background about regions in distributed mode:
- A table is logically split into multiple regions. Each region stores a part of non-overlapping table data.
- Regions are distributed across Datanodes; the mappings are not static, and are assigned and governed by Metasrv.
- In distributed mode, client requests are scoped to regions. To be more specific, when a request that needs to scan multiple regions arrives at the Frontend, the Frontend splits the request into multiple sub-requests, each of which scans one region only, and submits them to the Datanodes that hold the corresponding regions (a minimal sketch of this splitting follows the list).
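Here is a minimal, self-contained sketch of splitting a multi-region scan into per-region sub-requests (the types are simplified stand-ins, not the real Frontend code):
```rust
// Simplified stand-in types; only the per-region splitting is illustrative.
struct ScanRequest {
    table: String,
    regions: Vec<u64>, // regions the scan has to touch
}

struct SubRequest {
    table: String,
    region: u64,
}

/// Split one multi-region scan into one sub-request per region; each
/// sub-request is then submitted to the Datanode currently holding that region.
fn split_by_region(req: &ScanRequest) -> Vec<SubRequest> {
    req.regions
        .iter()
        .map(|&region| SubRequest {
            table: req.table.clone(),
            region,
        })
        .collect()
}

fn main() {
    let req = ScanRequest {
        table: "request_duration".to_string(),
        regions: vec![1, 2, 3],
    };
    assert_eq!(split_by_region(&req).len(), 3);
}
```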
In conclusion, as long as regions remain available, and can regain availability when failures do occur, the overall region HA is achieved. With this in mind, let's see how region failures are detected first.
## Failure Detection
We detect region failures in Metasrv, and we do it both passively and actively. "Passively" means that Metasrv does not fire "are you healthy" requests at regions; instead, we carry region health information in the heartbeat requests that Datanodes submit to Metasrv.
The Datanode already carries its region stats in the heartbeat request (the non-relevant fields are omitted):
```protobuf
message HeartbeatRequest {
...
// Region stats on this node
repeated RegionStat region_stats = 6;
...
}
message RegionStat {
uint64 region_id = 1;
TableName table_name = 2;
...
}
```
For the sake of simplicity, we don't add another field `bool available = 3` to the `RegionStat` message; instead, if a region is unavailable in the view of the Datanode that contains it, the Datanode simply omits that region's `RegionStat` from the heartbeat request. And if the Datanode itself is unavailable, the heartbeat request is not submitted at all, which is effectively the same as not carrying the `RegionStat`.
> The heartbeat interval is now hardcoded to five seconds.
Metasrv gathers the heartbeat requests, extracts the `RegionStat`s, and treats them as region heartbeats. In this way, Metasrv maintains health information for all regions. If a region's heartbeats are not received for a period of time, Metasrv suspects that the region might be unavailable. To decide whether a region has actually failed, Metasrv uses a failure detection algorithm called "[Phi φ Accrual Failure Detection](https://medium.com/@arpitbhayani/phi-%CF%86-accrual-failure-detection-79c21ce53a7a)". Basically, the algorithm calculates a value called "phi" to represent the likelihood of a region's unavailability, based on the arrival rate of historical heartbeats. Once "phi" rises above a pre-defined threshold, Metasrv considers the region failed.
> This algorithm has been widely adopted in well-known products such as Akka and Cassandra.
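To make the "phi" computation concrete, here is a small self-contained sketch of the simplified exponential-distribution variant of the accrual detector (the real detector and its configuration in Metasrv may differ):
```rust
// Toy phi calculation using the simplified exponential-distribution variant of
// the accrual detector: P_later(t) = exp(-t / mean_interval), so
// phi = -log10(P_later(t)) = t / (mean_interval * ln(10)).
// The threshold and history handling are illustrative only.
struct PhiAccrualDetector {
    /// Recent heartbeat inter-arrival times, in seconds (assumed non-empty).
    intervals: Vec<f64>,
    threshold: f64,
}

impl PhiAccrualDetector {
    /// phi grows linearly with the time elapsed since the last heartbeat.
    fn phi(&self, elapsed_secs: f64) -> f64 {
        let mean = self.intervals.iter().sum::<f64>() / self.intervals.len() as f64;
        elapsed_secs / (mean * std::f64::consts::LN_10)
    }

    fn is_suspected_failed(&self, elapsed_secs: f64) -> bool {
        self.phi(elapsed_secs) > self.threshold
    }
}

fn main() {
    // Heartbeats have been arriving roughly every 5 seconds.
    let detector = PhiAccrualDetector {
        intervals: vec![5.0, 5.2, 4.8, 5.1],
        threshold: 8.0,
    };
    // 10 seconds of silence is tolerable; 2 minutes is not.
    assert!(!detector.is_suspected_failed(10.0));
    assert!(detector.is_suspected_failed(120.0));
}
```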
When Metasrv concludes from heartbeats that a region has failed, that's not the final decision. Here comes the "active" detection. Before Metasrv decides to do region failover, it actively invokes the health check interface of the Datanode that the failed region resides on. Only if this health check also fails does Metasrv actually start the failover for that region.
To conclude, the failure detection pseudo-code looks like this:
```rust
// in Metasrv:
fn failure_detection() {
loop {
// passive detection
let failed_regions = all_regions.iter().filter(|r| r.estimated_failure_possibility() > config.phi).collect();
// find the datanodes that contain the failed regions
let datanodes_and_regions = find_region_resides_datanodes(failed_regions);
// active detection
for (datanode, regions) in datanodes_and_regions {
if !datanode.is_healthy(regions) {
do_failover(datanode, regions);
}
}
sleep(config.detect_interval);
}
}
```
Some design considerations:
- Why detect actively when we already have passive detection? Because the network can sometimes be connectable in only one direction (especially in complex Cloud environments): the Datanode's heartbeats cannot reach Metasrv, while Metasrv can still reach the Datanode. Active detection avoids this false positive.
- Why does the detection work on regions instead of Datanodes? Because it's possible that only some of the regions on a Datanode are unavailable, not ALL of them, especially when Datanodes are shared by multiple tenants. In that case, it's better to do failover on just the affected regions instead of all the regions that reside on the Datanode. All in all, we want finer-grained control over region failover.
So we have detected that some regions are unavailable. How do we regain their availability?
## Region Failover
Region failover largely relies on the remote WAL, aka "[Bunshin](https://github.com/GreptimeTeam/bunshin)". I'm not including any of its details in this RFC; let's just assume we already have it.
In general, region failover is fairly simple. Once Metasrv decides to do failover on some regions, it first chooses one or more Datanodes to hold the failed regions. This can be done easily, as Metasrv already has the whole picture of the Datanodes: it knows which Datanode has the fewest regions, which Datanode historically had the lowest CPU usage and IO rate, and how the Datanodes are assigned to tenants, among other information that helps Metasrv choose the most suitable Datanodes. Let's call these chosen Datanodes "candidates".
> The strategy for choosing the most suitable candidates requires careful design, but that's another RFC.
Then Metasrv sets the state of these failed regions to "passive". We should add a field to `Region`:
```protobuf
message Region {
uint64 id = 1;
string name = 2;
Partition partition = 3;
enum State {
ACTIVE = 0;
PASSIVE = 1;
}
State state = 4;
map<string, string> attrs = 100;
}
```
Here `Region` is used in the message `RegionRoute`, which indicates how write requests are split among regions. When a region is set to "passive", the Frontend knows that writes to it should be rejected for the moment (reads of the region are not blocked, however).
> Making a region "passive" here effectively blocks writes to it. That's fine in the failover situation: the region has failed anyway. However, when dealing with active maintenance operations, the region state requires a more refined design. But that's another story.
Third, Metasrv fires "close region" requests to the failed Datanodes, and "open region" requests to the candidates. "Close region" requests might fail due to the unavailability of the Datanodes, but that's fine: they are just a best-effort attempt to reduce the chance of any in-flight writes being handled unintentionally after the region is set to "passive". The "open region" requests must succeed, though. Datanodes open regions from the remote WAL.
> Currently "close region" is undefined in the Datanode. It could be a local cache clean-up of region data, or some other resource tidy-up.
Finally, when a candidate successfully opens its region, it calls back to Metasrv, indicating that it is ready to handle the region. The "call back" here is carried by its heartbeat to Metasrv. Metasrv then updates the region's state to "active", so the Frontend lifts the restriction on region writes (again, region reads are untouched).
All of the above steps should be managed by a remote procedure framework. That's another implementation challenge in the region failover feature. (The other is the remote WAL, of course.)
A picture is worth a thousand words:
```text
+-------------------------+
| Metasrv detects region |
| failure |
+-------------------------+
|
v
+----------------------------+
| Metasrv chooses candidates |
| to hold failed regions |
+----------------------------+
|
v
+-------------------------+ +-------------------------+
| Metasrv "passive" the |------>| Frontend rejects writes |
| failed regions | | to "passive" regions |
+-------------------------+ +-------------------------+
|
v
+--------------------------+ +---------------------------+
| Candidate Datanodes open |<-------| Metasrv fires "close" and |
| regions from remote WAL | | "open" region requests |
+--------------------------+ +---------------------------+
|
|
| +-------------------------+ +-------------------------+
+--------------------->| Metasrv "active" the |------>| Frontend lifts write |
| failed regions | | restriction to regions |
+-------------------------+ +-------------------------+
|
v
+-------------------------+
| Region failover done, |
| HA regain |
+-------------------------+
```
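The same sequence, sketched in the pseudo-code style used above (every method name here is a placeholder, not a real Metasrv or procedure-framework API):
```rust
// Pseudo-code in the style of the detection sketch; all methods are placeholders.
fn do_failover(metasrv: &Metasrv, failed_datanode: DatanodeId, regions: &[RegionId]) {
    // 1. Pick candidate Datanodes to take over the failed regions.
    let candidates = metasrv.choose_candidates(regions);

    // 2. Mark the regions "passive" so Frontends reject writes (reads still work).
    metasrv.set_region_state(regions, RegionState::Passive);

    // 3. Best-effort "close region" on the failed Datanode; failures are tolerated.
    let _ = metasrv.close_regions(failed_datanode, regions);

    // 4. "Open region" on the candidates; they replay the regions from remote WAL.
    metasrv.open_regions(&candidates, regions);

    // 5. Once the candidates' heartbeats report the regions as ready, flip the
    //    state back to "active" so Frontends lift the write restriction.
    metasrv.wait_until_ready(&candidates, regions);
    metasrv.set_region_state(regions, RegionState::Active);
}
```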
# Alternatives
## The "Neon" Way
Remote WAL raises a problem that could harm the write throughput of the GreptimeDB cluster: each write request has to make at least two remote calls, one from the Frontend to the Datanode, and one from the Datanode to the remote WAL. What if we did it the "[Neon](https://github.com/neondatabase/neon)" way, making the remote WAL sit between the Frontend and the Datanode? Couldn't that improve our write throughput? It could, though there are some consistency issues, like "read-your-writes", to solve.
However, the main reasons we don't adopt this method are twofold:
1. Remote WAL is planned to be quorum based, so it can be written efficiently;
2. More importantly, we are planning to make the remote WAL an option that users can choose not to enable (at the cost of some reduction in reliability).
## No WAL, Replication instead
This method replicates regions across Datanodes directly, as is common in shared-nothing databases. Were the main region to fail, a standby region in the replication group would be elected as the new "main" and take over the read/write requests. The main concern with this method is its incompatibility with our current architecture and code structure. It requires a major redesign, but gains no significant advantage over the remote WAL method.
However, replication does have its own advantages that we can learn from to optimize the failover procedure.
# Future Work
Some optimizations we could take:
- To reduce the MTTR, we could make Metasrv choose a candidate for each region ahead of time. The candidate does some preparation work to reduce the region open time, effectively accelerating the failover procedure.
- We could adopt the replication method to the degree that region replicas are used as fast catch-up candidates. Since the data difference among replicas is minor, region failover would not need to load or exchange much data, greatly reducing the failover time.

(Two binary image files added, not shown: 78 KiB and 47 KiB.)

View File

@@ -0,0 +1,174 @@
---
Feature Name: "File external table"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/1041
Date: 2023-03-08
Author: "Xu Wenkang <wenymedia@gmail.com>"
---
File external table
---
# Summary
Allows users to perform SQL queries on files
# Motivation
User data may already exist in other storage (e.g., file systems or S3) in formats such as CSV, Parquet, or JSON. We can provide users the ability to perform SQL queries on these files.
# Details
## Overview
The file external table provides users the ability to perform SQL queries on these files.
For example, a user has a CSV file on the local file system `/var/data/city.csv`:
```
Rank , Name , State , 2023 Population , 2020 Census , Annual Change , Density (mi²)
1 , New York City , New York , 8,992,908 , 8,804,190 , 0.7% , 29,938
2 , Los Angeles , California , 3,930,586 , 3,898,747 , 0.27% , 8,382
3 , Chicago , Illinois , 2,761,625 , 2,746,388 , 0.18% , 12,146
.....
```
Then user can create a file external table with:
```sql
CREATE EXTERNAL TABLE city with(location='/var/data/city.csv', format="CSV", field_delimiter = ',', record_delimiter = '\n', skip_header = 1);
```
Then query the external table with:
```bash
MySQL> select * from city;
```
| Rank | Name | State | 2023 Population | 2020 Census | Annual Change | Density (mi²) |
| :--- | :------------ | :--------- | :-------------- | :---------- | :------------ | :------------ |
| 1 | New York City | New York | 8,992,908 | 8,804,190 | 0.7% | 29,938 |
| 2 | Los Angeles | California | 3,930,586 | 3,898,747 | 0.27% | 8,382 |
| 3 | Chicago | Illinois | 2,761,625 | 2,746,388 | 0.18% | 12,146 |
Drop the external table, if needed, with:
```sql
DROP EXTERNAL TABLE city
```
### Syntax
```
CREATE EXTERNAL TABLE [<database>.]<table_name>
[
(
<col_name> <col_type> [NULL | NOT NULL] [COMMENT "<comment>"]
)
]
[ WITH
(
LOCATION = 'url'
[,FIELD_DELIMITER = 'delimiter' ]
[,RECORD_DELIMITER = 'delimiter' ]
[,SKIP_HEADER = '<number>' ]
[,FORMAT = { csv | json | parquet } ]
[,PATTERN = '<regex_pattern>' ]
[,ENDPOINT = '<uri>' ]
[,ACCESS_KEY_ID = '<key_id>' ]
[,SECRET_ACCESS_KEY = '<access_key>' ]
[,SESSION_TOKEN = '<token>' ]
[,REGION = '<region>' ]
[,ENABLE_VIRTUAL_HOST_STYLE = '<boolean>']
..
)
]
```
### Supported File Format
The external file table supports multiple formats; we divide them into row formats and columnar formats.
Row formats:
- CSV, JSON
Columnar formats:
- Parquet
Some of these formats support filter pushdown, and others don't. If a user queries a very large file in a format that doesn't support pushdown, scanning the full file may consume a lot of IO and cause a long-running query.
### File Table Engine
![overview](external-table-engine-overview.png)
We implement a file table engine that creates an external table by accepting user-specified file paths and treating all records as immutable.
1. File Format Decoder: decode files to the `RecordBatch` stream.
2. File Table Engine: implement the `TableProvider` trait, store necessary metadata in memory, and provide scan ability.
Our implementation is better suited to small files. For large files (e.g., a GB-level CSV file), we suggest that users import the data into the database.
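As a rough illustration of these two components, here is a minimal sketch with hypothetical trait and type names (not the actual GreptimeDB or DataFusion API):
```rust
// Hypothetical shape of the two components; names are illustrative only.
use std::path::Path;

/// Placeholder for an Arrow-style record batch.
struct RecordBatch;

/// 1. File format decoder: turns a file of a given format into a stream of
///    record batches.
trait FileFormatDecoder {
    fn decode(&self, path: &Path) -> Box<dyn Iterator<Item = RecordBatch>>;
}

/// 2. File table engine side: keeps the external table's metadata (location,
///    format, schema) in memory and exposes a scan over the decoded batches.
struct FileTable {
    location: String,
    decoder: Box<dyn FileFormatDecoder>,
}

impl FileTable {
    fn scan(&self) -> Box<dyn Iterator<Item = RecordBatch>> {
        // All records are treated as immutable: a scan simply re-decodes the
        // file(s) at the registered location.
        self.decoder.decode(Path::new(&self.location))
    }
}
```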
## Drawbacks
- Some formats don't support filter pushdown
- Hard to support indexing
## Life cycle
### Register a table
1. Write metadata to manifest.
2. Create the table via file table engine.
3. Register table to `CatalogProvider` and register table to `SystemCatalog`(persist tables to disk).
### Deregister a table (Drop a table)
1. Fetch the target table info (figure out table engine type).
2. Deregister the target table in `CatalogProvider` and `SystemCatalog`.
3. Find the target table engine.
4. Drop the target table.
### Recover a table when restarting
1. Collect tables name and engine type info.
2. Find the target tables in different engines.
3. Open and register tables.
# Alternatives
## Using DataFusion API
We can use datafusion API to register a file table:
```rust
let ctx = SessionContext::new();
ctx.register_csv("example", "tests/data/example.csv", CsvReadOptions::new()).await?;
// create a plan
let df = ctx.sql("SELECT a, MIN(b) FROM example WHERE a <= b GROUP BY a LIMIT 100").await?;
```
### Drawbacks
DataFusion implements its own `Object Store` abstraction and supports parsing partitioned directories, which lets it push down filters and skip some directories. However, this makes it impossible to use our `LruCacheLayer` (the parsing of partitioned directories requires paths as input). If we want to manage memory entirely ourselves, we should implement our own `TableProvider` or `Table`.
- Impossible to use `CacheLayer`
## Introduce an intermediate representation layer
![overview](external-table-engine-way-2.png)
We convert all files into `parquet` as an intermediate representation. Then we only need to implement a `parquet` file table engine, and we already have a similar one. Also, it supports limited filter pushdown via the `parquet` row group stats.
### Drawbacks
- Computing overhead
- Storage overhead

View File

@@ -1,2 +1,2 @@
[toolchain]
channel = "nightly-2023-02-14"
channel = "nightly-2023-02-26"

View File

@@ -0,0 +1,39 @@
#!/usr/bin/env bash
# This script is used to download built dashboard assets from the "GreptimeTeam/dashboard" repository.
set -e
declare -r SCRIPT_DIR=$(cd $(dirname ${0}) >/dev/null 2>&1 && pwd)
declare -r ROOT_DIR=$(dirname ${SCRIPT_DIR})
declare -r STATIC_DIR="$ROOT_DIR/src/servers/dashboard"
RELEASE_VERSION="$(cat $STATIC_DIR/VERSION)"
# Download the SHA256 checksum attached to the release. To verify the integrity
# of the download, this checksum will be used to check the download tar file
# containing the built dashboard assets.
curl -Ls https://github.com/GreptimeTeam/dashboard/releases/download/$RELEASE_VERSION/sha256.txt --output sha256.txt
# Download the tar file containing the built dashboard assets.
curl -L https://github.com/GreptimeTeam/dashboard/releases/download/$RELEASE_VERSION/build.tar.gz --output build.tar.gz
# Verify the checksums match; exit if they don't.
case "$(uname -s)" in
FreeBSD | Darwin)
echo "$(cat sha256.txt)" | shasum --algorithm 256 --check \
|| { echo "Checksums did not match for downloaded dashboard assets!"; exit 1; } ;;
Linux)
echo "$(cat sha256.txt)" | sha256sum --check -- \
|| { echo "Checksums did not match for downloaded dashboard assets!"; exit 1; } ;;
*)
echo "The '$(uname -s)' operating system is not supported as a build host for the dashboard" >&2
exit 1
esac
# Extract the assets and clean up.
tar -xzf build.tar.gz -C "$STATIC_DIR"
rm sha256.txt
rm build.tar.gz
echo "Successfully download dashboard assets to $STATIC_DIR"

View File

@@ -59,5 +59,5 @@ if [ -n "${OS_TYPE}" ] && [ -n "${ARCH_TYPE}" ]; then
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz"
fi
tar xvf ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && rm ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && echo "Run '${BIN} --help' to get started"
tar xvf ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && rm ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && echo "Run './${BIN} --help' to get started"
fi

View File

@@ -10,10 +10,10 @@ common-base = { path = "../common/base" }
common-error = { path = "../common/error" }
common-time = { path = "../common/time" }
datatypes = { path = "../datatypes" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "3e6349be127b65a8b42a38cda9d527ec423ca77d" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "0bebe5f69c91cdfbce85cb8f45f9fcd28185261c" }
prost.workspace = true
snafu = { version = "0.7", features = ["backtraces"] }
tonic.workspace = true
[build-dependencies]
tonic-build = "0.8"
tonic-build = "0.9"

View File

@@ -18,7 +18,7 @@ use common_error::ext::ErrorExt;
use common_error::prelude::StatusCode;
use datatypes::prelude::ConcreteDataType;
use snafu::prelude::*;
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
pub type Result<T> = std::result::Result<T, Error>;
@@ -26,12 +26,12 @@ pub type Result<T> = std::result::Result<T, Error>;
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Unknown proto column datatype: {}", datatype))]
UnknownColumnDataType { datatype: i32, backtrace: Backtrace },
UnknownColumnDataType { datatype: i32, location: Location },
#[snafu(display("Failed to create column datatype from {:?}", from))]
IntoColumnDataType {
from: ConcreteDataType,
backtrace: Backtrace,
location: Location,
},
#[snafu(display(
@@ -66,9 +66,6 @@ impl ErrorExt for Error {
| Error::InvalidColumnDefaultConstraint { source, .. } => source.status_code(),
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self

View File

@@ -97,7 +97,9 @@ impl TryFrom<ConcreteDataType> for ColumnDataTypeWrapper {
TimestampType::Microsecond(_) => ColumnDataType::TimestampMicrosecond,
TimestampType::Nanosecond(_) => ColumnDataType::TimestampNanosecond,
},
ConcreteDataType::Null(_) | ConcreteDataType::List(_) => {
ConcreteDataType::Null(_)
| ConcreteDataType::List(_)
| ConcreteDataType::Dictionary(_) => {
return error::IntoColumnDataTypeSnafu { from: datatype }.fail()
}
});

View File

@@ -7,6 +7,7 @@ license.workspace = true
[dependencies]
api = { path = "../api" }
arc-swap = "1.0"
arrow-schema.workspace = true
async-stream.workspace = true
async-trait = "0.1"
backoff = { version = "0.4", features = ["tokio"] }
@@ -18,25 +19,28 @@ common-recordbatch = { path = "../common/recordbatch" }
common-runtime = { path = "../common/runtime" }
common-telemetry = { path = "../common/telemetry" }
common-time = { path = "../common/time" }
dashmap = "5.4"
datafusion.workspace = true
datatypes = { path = "../datatypes" }
futures = "0.3"
futures-util.workspace = true
lazy_static = "1.4"
meta-client = { path = "../meta-client" }
parking_lot = "0.12"
regex = "1.6"
serde = "1.0"
serde_json = "1.0"
session = { path = "../session" }
snafu = { version = "0.7", features = ["backtraces"] }
storage = { path = "../storage" }
table = { path = "../table" }
tokio.workspace = true
[dev-dependencies]
common-test-util = { path = "../common/test-util" }
chrono.workspace = true
log-store = { path = "../log-store" }
mito = { path = "../mito", features = ["test"] }
object-store = { path = "../object-store" }
storage = { path = "../storage" }
tempdir = "0.3"
tokio.workspace = true

View File

@@ -12,4 +12,4 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//! GreptimeDB builtin functions
pub mod catalog_adapter;

View File

@@ -17,10 +17,7 @@
use std::any::Any;
use std::sync::Arc;
use catalog::error::{self as catalog_error, Error};
use catalog::{
CatalogListRef, CatalogProvider, CatalogProviderRef, SchemaProvider, SchemaProviderRef,
};
use async_trait::async_trait;
use common_error::prelude::BoxedError;
use datafusion::catalog::catalog::{
CatalogList as DfCatalogList, CatalogProvider as DfCatalogProvider,
@@ -32,7 +29,10 @@ use snafu::ResultExt;
use table::table::adapter::{DfTableProviderAdapter, TableAdapter};
use table::TableRef;
use crate::datafusion::error;
use crate::error::{self, Result, SchemaProviderOperationSnafu};
use crate::{
CatalogListRef, CatalogProvider, CatalogProviderRef, SchemaProvider, SchemaProviderRef,
};
pub struct DfCatalogListAdapter {
catalog_list: CatalogListRef,
@@ -88,7 +88,7 @@ impl CatalogProvider for CatalogProviderAdapter {
self
}
fn schema_names(&self) -> catalog::error::Result<Vec<String>> {
fn schema_names(&self) -> Result<Vec<String>> {
Ok(self.df_catalog_provider.schema_names())
}
@@ -96,11 +96,11 @@ impl CatalogProvider for CatalogProviderAdapter {
&self,
_name: String,
_schema: SchemaProviderRef,
) -> catalog::error::Result<Option<SchemaProviderRef>> {
) -> Result<Option<SchemaProviderRef>> {
todo!("register_schema is not supported in Datafusion catalog provider")
}
fn schema(&self, name: &str) -> catalog::error::Result<Option<Arc<dyn SchemaProvider>>> {
fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>> {
Ok(self
.df_catalog_provider
.schema(name)
@@ -137,6 +137,7 @@ struct DfSchemaProviderAdapter {
schema_provider: Arc<dyn SchemaProvider>,
}
#[async_trait]
impl DfSchemaProvider for DfSchemaProviderAdapter {
fn as_any(&self) -> &dyn Any {
self
@@ -148,9 +149,10 @@ impl DfSchemaProvider for DfSchemaProviderAdapter {
.expect("datafusion does not accept fallible catalog access")
}
fn table(&self, name: &str) -> Option<Arc<dyn DfTableProvider>> {
async fn table(&self, name: &str) -> Option<Arc<dyn DfTableProvider>> {
self.schema_provider
.table(name)
.await
.expect("datafusion does not accept fallible catalog access")
.map(|table| Arc::new(DfTableProviderAdapter::new(table)) as _)
}
@@ -186,18 +188,20 @@ struct SchemaProviderAdapter {
df_schema_provider: Arc<dyn DfSchemaProvider>,
}
#[async_trait]
impl SchemaProvider for SchemaProviderAdapter {
fn as_any(&self) -> &dyn Any {
self
}
/// Retrieves the list of available table names in this schema.
fn table_names(&self) -> Result<Vec<String>, Error> {
fn table_names(&self) -> Result<Vec<String>> {
Ok(self.df_schema_provider.table_names())
}
fn table(&self, name: &str) -> Result<Option<TableRef>, Error> {
let table = self.df_schema_provider.table(name).map(|table_provider| {
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let table = self.df_schema_provider.table(name).await;
let table = table.map(|table_provider| {
match table_provider
.as_any()
.downcast_ref::<DfTableProviderAdapter>()
@@ -214,11 +218,7 @@ impl SchemaProvider for SchemaProviderAdapter {
Ok(table)
}
fn register_table(
&self,
name: String,
table: TableRef,
) -> catalog::error::Result<Option<TableRef>> {
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
let table_provider = Arc::new(DfTableProviderAdapter::new(table.clone()));
Ok(self
.df_schema_provider
@@ -227,43 +227,43 @@ impl SchemaProvider for SchemaProviderAdapter {
msg: "Fail to register table to datafusion",
})
.map_err(BoxedError::new)
.context(catalog_error::SchemaProviderOperationSnafu)?
.context(SchemaProviderOperationSnafu)?
.map(|_| table))
}
fn rename_table(&self, _name: &str, _new_name: String) -> catalog_error::Result<TableRef> {
fn rename_table(&self, _name: &str, _new_name: String) -> Result<TableRef> {
todo!()
}
fn deregister_table(&self, name: &str) -> catalog::error::Result<Option<TableRef>> {
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
self.df_schema_provider
.deregister_table(name)
.context(error::DatafusionSnafu {
msg: "Fail to deregister table from datafusion",
})
.map_err(BoxedError::new)
.context(catalog_error::SchemaProviderOperationSnafu)?
.context(SchemaProviderOperationSnafu)?
.map(|table| {
let adapter = TableAdapter::new(table)
.context(error::TableSchemaMismatchSnafu)
.map_err(BoxedError::new)
.context(catalog_error::SchemaProviderOperationSnafu)?;
.context(SchemaProviderOperationSnafu)?;
Ok(Arc::new(adapter) as _)
})
.transpose()
}
fn table_exist(&self, name: &str) -> Result<bool, Error> {
fn table_exist(&self, name: &str) -> Result<bool> {
Ok(self.df_schema_provider.table_exist(name))
}
}
#[cfg(test)]
mod tests {
use catalog::local::{new_memory_catalog_list, MemoryCatalogProvider, MemorySchemaProvider};
use table::table::numbers::NumbersTable;
use super::*;
use crate::local::{new_memory_catalog_list, MemoryCatalogProvider, MemorySchemaProvider};
#[test]
#[should_panic]
@@ -282,8 +282,8 @@ mod tests {
.unwrap();
}
#[test]
pub fn test_register_table() {
#[tokio::test]
async fn test_register_table() {
let adapter = DfSchemaProviderAdapter {
schema_provider: Arc::new(MemorySchemaProvider::new()),
};
@@ -296,7 +296,7 @@ mod tests {
))),
)
.unwrap();
adapter.table("test_table").unwrap();
adapter.table("test_table").await.unwrap();
}
#[test]

View File

@@ -19,7 +19,7 @@ use common_error::ext::{BoxedError, ErrorExt};
use common_error::prelude::{Snafu, StatusCode};
use datafusion::error::DataFusionError;
use datatypes::prelude::ConcreteDataType;
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
use crate::DeregisterTableRequest;
@@ -50,7 +50,7 @@ pub enum Error {
},
#[snafu(display("System catalog is not valid: {}", msg))]
SystemCatalog { msg: String, backtrace: Backtrace },
SystemCatalog { msg: String, location: Location },
#[snafu(display(
"System catalog table type mismatch, expected: binary, found: {:?}",
@@ -58,61 +58,68 @@ pub enum Error {
))]
SystemCatalogTypeMismatch {
data_type: ConcreteDataType,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Invalid system catalog entry type: {:?}", entry_type))]
InvalidEntryType {
entry_type: Option<u8>,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Invalid system catalog key: {:?}", key))]
InvalidKey {
key: Option<String>,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Catalog value is not present"))]
EmptyValue { backtrace: Backtrace },
EmptyValue { location: Location },
#[snafu(display("Failed to deserialize value, source: {}", source))]
ValueDeserialize {
source: serde_json::error::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Table engine not found: {}, source: {}", engine_name, source))]
TableEngineNotFound {
engine_name: String,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Cannot find catalog by name: {}", catalog_name))]
CatalogNotFound {
catalog_name: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Cannot find schema {} in catalog {}", schema, catalog))]
SchemaNotFound {
catalog: String,
schema: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Table `{}` already exists", table))]
TableExists { table: String, backtrace: Backtrace },
TableExists { table: String, location: Location },
#[snafu(display("Table `{}` not exist", table))]
TableNotExist { table: String, backtrace: Backtrace },
TableNotExist { table: String, location: Location },
#[snafu(display("Schema {} already exists", schema))]
SchemaExists {
schema: String,
backtrace: Backtrace,
},
SchemaExists { schema: String, location: Location },
#[snafu(display("Operation {} not implemented yet", operation))]
Unimplemented {
operation: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Operation {} not supported", op))]
NotSupported { op: String, location: Location },
#[snafu(display("Failed to open table, table info: {}, source: {}", table_info, source))]
OpenTable {
table_info: String,
@@ -123,7 +130,7 @@ pub enum Error {
#[snafu(display("Table not found while opening table, table info: {}", table_info))]
TableNotFound {
table_info: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to read system catalog table records"))]
@@ -132,6 +139,12 @@ pub enum Error {
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to create recordbatch, source: {}", source))]
CreateRecordBatch {
#[snafu(backtrace)]
source: common_recordbatch::error::Error,
},
#[snafu(display(
"Failed to insert table creation record to system catalog, source: {}",
source
@@ -153,7 +166,7 @@ pub enum Error {
},
#[snafu(display("Illegal catalog manager state: {}", msg))]
IllegalManagerState { backtrace: Backtrace, msg: String },
IllegalManagerState { location: Location, msg: String },
#[snafu(display("Failed to scan system catalog table, source: {}", source))]
SystemCatalogTableScan {
@@ -201,6 +214,40 @@ pub enum Error {
#[snafu(backtrace)]
source: common_catalog::error::Error,
},
#[snafu(display("Illegal access to catalog: {} and schema: {}", catalog, schema))]
QueryAccessDenied { catalog: String, schema: String },
#[snafu(display(
"Failed to get region stats, catalog: {}, schema: {}, table: {}, source: {}",
catalog,
schema,
table,
source
))]
RegionStats {
catalog: String,
schema: String,
table: String,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Invalid system table definition: {err_msg}"))]
InvalidSystemTableDef { err_msg: String, location: Location },
#[snafu(display("{}: {}", msg, source))]
Datafusion {
msg: String,
source: DataFusionError,
location: Location,
},
#[snafu(display("Table schema mismatch, source: {}", source))]
TableSchemaMismatch {
#[snafu(backtrace)]
source: table::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -213,7 +260,8 @@ impl ErrorExt for Error {
| Error::TableNotFound { .. }
| Error::IllegalManagerState { .. }
| Error::CatalogNotFound { .. }
| Error::InvalidEntryType { .. } => StatusCode::Unexpected,
| Error::InvalidEntryType { .. }
| Error::InvalidSystemTableDef { .. } => StatusCode::Unexpected,
Error::SystemCatalog { .. }
| Error::EmptyValue { .. }
@@ -221,21 +269,27 @@ impl ErrorExt for Error {
Error::SystemCatalogTypeMismatch { .. } => StatusCode::Internal,
Error::ReadSystemCatalog { source, .. } => source.status_code(),
Error::ReadSystemCatalog { source, .. } | Error::CreateRecordBatch { source } => {
source.status_code()
}
Error::InvalidCatalogValue { source, .. } | Error::CatalogEntrySerde { source } => {
source.status_code()
}
Error::TableExists { .. } => StatusCode::TableAlreadyExists,
Error::TableNotExist { .. } => StatusCode::TableNotFound,
Error::SchemaExists { .. } => StatusCode::InvalidArguments,
Error::SchemaExists { .. } | Error::TableEngineNotFound { .. } => {
StatusCode::InvalidArguments
}
Error::OpenSystemCatalog { source, .. }
| Error::CreateSystemCatalog { source, .. }
| Error::InsertCatalogRecord { source, .. }
| Error::OpenTable { source, .. }
| Error::CreateTable { source, .. }
| Error::DeregisterTable { source, .. } => source.status_code(),
| Error::DeregisterTable { source, .. }
| Error::RegionStats { source, .. }
| Error::TableSchemaMismatch { source } => source.status_code(),
Error::MetaSrv { source, .. } => source.status_code(),
Error::SystemCatalogTableScan { source } => source.status_code(),
@@ -245,14 +299,12 @@ impl ErrorExt for Error {
source.status_code()
}
Error::Unimplemented { .. } => StatusCode::Unsupported,
Error::Unimplemented { .. } | Error::NotSupported { .. } => StatusCode::Unsupported,
Error::QueryAccessDenied { .. } => StatusCode::AccessDenied,
Error::Datafusion { .. } => StatusCode::EngineExecuteQuery,
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
@@ -276,7 +328,7 @@ mod tests {
StatusCode::TableAlreadyExists,
Error::TableExists {
table: "some_table".to_string(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.status_code()
);
@@ -290,7 +342,7 @@ mod tests {
StatusCode::StorageUnavailable,
Error::SystemCatalog {
msg: "".to_string(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.status_code()
);
@@ -299,7 +351,7 @@ mod tests {
StatusCode::Internal,
Error::SystemCatalogTypeMismatch {
data_type: ConcreteDataType::binary_datatype(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.status_code()
);
@@ -313,7 +365,7 @@ mod tests {
pub fn test_errors_to_datafusion_error() {
let e: DataFusionError = Error::TableExists {
table: "test_table".to_string(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.into();
match e {

View File

@@ -24,10 +24,10 @@ use serde::{Deserialize, Serialize, Serializer};
use snafu::{ensure, OptionExt, ResultExt};
use table::metadata::{RawTableInfo, TableId, TableVersion};
const CATALOG_KEY_PREFIX: &str = "__c";
const SCHEMA_KEY_PREFIX: &str = "__s";
const TABLE_GLOBAL_KEY_PREFIX: &str = "__tg";
const TABLE_REGIONAL_KEY_PREFIX: &str = "__tr";
pub const CATALOG_KEY_PREFIX: &str = "__c";
pub const SCHEMA_KEY_PREFIX: &str = "__s";
pub const TABLE_GLOBAL_KEY_PREFIX: &str = "__tg";
pub const TABLE_REGIONAL_KEY_PREFIX: &str = "__tr";
const ALPHANUMERICS_NAME_PATTERN: &str = "[a-zA-Z_][a-zA-Z0-9_]*";
@@ -370,4 +370,10 @@ mod tests {
let deserialized = TableGlobalValue::parse(serialized).unwrap();
assert_eq!(value, deserialized);
}
#[test]
fn test_table_global_value_compatibility() {
let s = r#"{"node_id":1,"regions_id_map":{"1":[0]},"table_info":{"ident":{"table_id":1098,"version":1},"name":"container_cpu_limit","desc":"Created on insertion","catalog_name":"greptime","schema_name":"dd","meta":{"schema":{"column_schemas":[{"name":"container_id","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"container_name","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"docker_image","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"host","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"image_name","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"image_tag","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"interval","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"runtime","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"short_image","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"type","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"dd_value","data_type":{"Float64":{}},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"ts","data_type":{"Timestamp":{"Millisecond":null}},"is_nullable":false,"is_time_index":true,"default_constraint":null,"metadata":{"greptime:time_index":"true"}},{"name":"git.repository_url","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}}],"timestamp_index":11,"version":1},"primary_key_indices":[0,1,2,3,4,5,6,7,8,9,12],"value_indices":[10,11],"engine":"mito","next_column_id":12,"region_numbers":[],"engine_options":{},"options":{},"created_on":"1970-01-01T00:00:00Z"},"table_type":"Base"}}"#;
TableGlobalValue::parse(s).unwrap();
}
}

View File

@@ -0,0 +1,80 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod tables;
use std::any::Any;
use std::sync::Arc;
use async_trait::async_trait;
use datafusion::datasource::streaming::{PartitionStream, StreamingTable};
use snafu::ResultExt;
use table::table::adapter::TableAdapter;
use table::TableRef;
use crate::error::{DatafusionSnafu, Result, TableSchemaMismatchSnafu};
use crate::information_schema::tables::InformationSchemaTables;
use crate::{CatalogProviderRef, SchemaProvider};
const TABLES: &str = "tables";
pub(crate) struct InformationSchemaProvider {
catalog_name: String,
catalog_provider: CatalogProviderRef,
}
impl InformationSchemaProvider {
pub(crate) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
Self {
catalog_name,
catalog_provider,
}
}
}
#[async_trait]
impl SchemaProvider for InformationSchemaProvider {
fn as_any(&self) -> &dyn Any {
self
}
fn table_names(&self) -> Result<Vec<String>> {
Ok(vec![TABLES.to_string()])
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let table = if name.eq_ignore_ascii_case(TABLES) {
Arc::new(InformationSchemaTables::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
))
} else {
return Ok(None);
};
let table = Arc::new(
StreamingTable::try_new(table.schema().clone(), vec![table]).with_context(|_| {
DatafusionSnafu {
msg: format!("Failed to get InformationSchema table '{name}'"),
}
})?,
);
let table = TableAdapter::new(table).context(TableSchemaMismatchSnafu)?;
Ok(Some(Arc::new(table)))
}
fn table_exist(&self, name: &str) -> Result<bool> {
Ok(matches!(name.to_ascii_lowercase().as_str(), TABLES))
}
}

View File

@@ -0,0 +1,165 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_query::physical_plan::TaskContext;
use common_recordbatch::RecordBatch;
use datafusion::datasource::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::StringVectorBuilder;
use snafu::ResultExt;
use table::metadata::TableType;
use crate::error::{CreateRecordBatchSnafu, Result};
use crate::information_schema::TABLES;
use crate::CatalogProviderRef;
pub(super) struct InformationSchemaTables {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
}
impl InformationSchemaTables {
pub(super) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
let schema = Arc::new(Schema::new(vec![
ColumnSchema::new("table_catalog", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_schema", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_name", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_type", ConcreteDataType::string_datatype(), false),
]));
Self {
schema,
catalog_name,
catalog_provider,
}
}
fn builder(&self) -> InformationSchemaTablesBuilder {
InformationSchemaTablesBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_provider.clone(),
)
}
}
/// Builds the `information_schema.TABLE` table row by row
///
/// Columns are based on <https://www.postgresql.org/docs/current/infoschema-columns.html>
struct InformationSchemaTablesBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
table_names: StringVectorBuilder,
table_types: StringVectorBuilder,
}
impl InformationSchemaTablesBuilder {
fn new(schema: SchemaRef, catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
Self {
schema,
catalog_name,
catalog_provider,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
table_types: StringVectorBuilder::with_capacity(42),
}
}
/// Construct the `information_schema.tables` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
for schema_name in self.catalog_provider.schema_names()? {
if schema_name == INFORMATION_SCHEMA_NAME {
continue;
}
let Some(schema) = self.catalog_provider.schema(&schema_name)? else { continue };
for table_name in schema.table_names()? {
let Some(table) = schema.table(&table_name).await? else { continue };
self.add_table(&catalog_name, &schema_name, &table_name, table.table_type());
}
}
// Add a final list for the information schema tables themselves
self.add_table(
&catalog_name,
INFORMATION_SCHEMA_NAME,
TABLES,
TableType::View,
);
self.finish()
}
fn add_table(
&mut self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
table_type: TableType,
) {
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.table_types.push(Some(match table_type {
TableType::Base => "BASE TABLE",
TableType::View => "VIEW",
TableType::Temporary => "LOCAL TEMPORARY",
}));
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.catalog_names.finish()),
Arc::new(self.schema_names.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.table_types.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaTables {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}

View File

@@ -18,8 +18,9 @@ use std::any::Any;
use std::fmt::{Debug, Formatter};
use std::sync::Arc;
use common_telemetry::info;
use snafu::{OptionExt, ResultExt};
use api::v1::meta::{RegionStat, TableName};
use common_telemetry::{info, warn};
use snafu::ResultExt;
use table::engine::{EngineContext, TableEngineRef};
use table::metadata::TableId;
use table::requests::CreateTableRequest;
@@ -28,12 +29,15 @@ use table::TableRef;
use crate::error::{CreateTableSnafu, Result};
pub use crate::schema::{SchemaProvider, SchemaProviderRef};
pub mod datafusion;
pub mod error;
pub mod helper;
pub(crate) mod information_schema;
pub mod local;
pub mod remote;
pub mod schema;
pub mod system;
pub mod table_source;
pub mod tables;
/// Represent a list of named catalogs
@@ -107,7 +111,12 @@ pub trait CatalogManager: CatalogList {
fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>>;
/// Returns the table by catalog, schema and table name.
fn table(&self, catalog: &str, schema: &str, table_name: &str) -> Result<Option<TableRef>>;
async fn table(
&self,
catalog: &str,
schema: &str,
table_name: &str,
) -> Result<Option<TableRef>>;
}
pub type CatalogManagerRef = Arc<dyn CatalogManager>;
@@ -186,7 +195,8 @@ pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
let table_name = &req.create_table_request.table_name;
let table_id = req.create_table_request.id;
let table = if let Some(table) = manager.table(catalog_name, schema_name, table_name)? {
let table = manager.table(catalog_name, schema_name, table_name).await?;
let table = if let Some(table) = table {
table
} else {
let table = engine
@@ -218,37 +228,52 @@ pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
Ok(())
}
/// The number of regions in the datanode node.
pub fn region_number(catalog_manager: &CatalogManagerRef) -> Result<u64> {
/// The stat of regions in the datanode node.
/// The number of regions can be got from len of vec.
///
/// Ignores any errors occurred during iterating regions. The intention of this method is to
/// collect region stats that will be carried in Datanode's heartbeat to Metasrv, so it's a
/// "try our best" job.
pub async fn datanode_stat(catalog_manager: &CatalogManagerRef) -> (u64, Vec<RegionStat>) {
let mut region_number: u64 = 0;
let mut region_stats = Vec::new();
for catalog_name in catalog_manager.catalog_names()? {
let catalog =
catalog_manager
.catalog(&catalog_name)?
.context(error::CatalogNotFoundSnafu {
catalog_name: &catalog_name,
})?;
let Ok(catalog_names) = catalog_manager.catalog_names() else { return (region_number, region_stats) };
for catalog_name in catalog_names {
let Ok(Some(catalog)) = catalog_manager.catalog(&catalog_name) else { continue };
for schema_name in catalog.schema_names()? {
let schema = catalog
.schema(&schema_name)?
.context(error::SchemaNotFoundSnafu {
catalog: &catalog_name,
schema: &schema_name,
})?;
let Ok(schema_names) = catalog.schema_names() else { continue };
for schema_name in schema_names {
let Ok(Some(schema)) = catalog.schema(&schema_name) else { continue };
for table_name in schema.table_names()? {
let table = schema
.table(&table_name)?
.context(error::TableNotFoundSnafu {
table_info: &table_name,
})?;
let Ok(table_names) = schema.table_names() else { continue };
for table_name in table_names {
let Ok(Some(table)) = schema.table(&table_name).await else { continue };
let region_numbers = &table.table_info().meta.region_numbers;
region_number += region_numbers.len() as u64;
match table.region_stats() {
Ok(stats) => {
let stats = stats.into_iter().map(|stat| RegionStat {
region_id: stat.region_id,
table_name: Some(TableName {
catalog_name: catalog_name.clone(),
schema_name: schema_name.clone(),
table_name: table_name.clone(),
}),
approximate_bytes: stat.disk_usage_bytes as i64,
..Default::default()
});
region_stats.extend(stats);
}
Err(e) => {
warn!("Failed to get region status, err: {:?}", e);
}
};
}
}
}
Ok(region_number)
(region_number, region_stats)
}

View File

@@ -18,7 +18,7 @@ use std::sync::Arc;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MIN_USER_TABLE_ID,
SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
MITO_ENGINE, SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
};
use common_catalog::format_full_table_name;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
@@ -27,7 +27,8 @@ use datatypes::prelude::ScalarVector;
use datatypes::vectors::{BinaryVector, UInt8Vector};
use futures_util::lock::Mutex;
use snafu::{ensure, OptionExt, ResultExt};
use table::engine::{EngineContext, TableEngineRef};
use table::engine::manager::TableEngineManagerRef;
use table::engine::EngineContext;
use table::metadata::TableId;
use table::requests::OpenTableRequest;
use table::table::numbers::NumbersTable;
@@ -37,7 +38,8 @@ use table::TableRef;
use crate::error::{
self, CatalogNotFoundSnafu, IllegalManagerStateSnafu, OpenTableSnafu, ReadSystemCatalogSnafu,
Result, SchemaExistsSnafu, SchemaNotFoundSnafu, SystemCatalogSnafu,
SystemCatalogTypeMismatchSnafu, TableExistsSnafu, TableNotFoundSnafu,
SystemCatalogTypeMismatchSnafu, TableEngineNotFoundSnafu, TableExistsSnafu, TableNotExistSnafu,
TableNotFoundSnafu,
};
use crate::local::memory::{MemoryCatalogManager, MemoryCatalogProvider, MemorySchemaProvider};
use crate::system::{
@@ -55,7 +57,7 @@ use crate::{
pub struct LocalCatalogManager {
system: Arc<SystemCatalog>,
catalogs: Arc<MemoryCatalogManager>,
engine: TableEngineRef,
engine_manager: TableEngineManagerRef,
next_table_id: AtomicU32,
init_lock: Mutex<bool>,
register_lock: Mutex<()>,
@@ -63,19 +65,20 @@ pub struct LocalCatalogManager {
}
impl LocalCatalogManager {
/// Create a new [CatalogManager] with given user catalogs and table engine
pub async fn try_new(engine: TableEngineRef) -> Result<Self> {
/// Create a new [CatalogManager] with given user catalogs and mito engine
pub async fn try_new(engine_manager: TableEngineManagerRef) -> Result<Self> {
let engine = engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
let table = SystemCatalogTable::new(engine.clone()).await?;
let memory_catalog_list = crate::local::memory::new_memory_catalog_list()?;
let system_catalog = Arc::new(SystemCatalog::new(
table,
memory_catalog_list.clone(),
engine.clone(),
));
let system_catalog = Arc::new(SystemCatalog::new(table));
Ok(Self {
system: system_catalog,
catalogs: memory_catalog_list,
engine,
engine_manager,
next_table_id: AtomicU32::new(MIN_USER_TABLE_ID),
init_lock: Mutex::new(false),
register_lock: Mutex::new(()),
@@ -100,7 +103,14 @@ impl LocalCatalogManager {
// Processing system table hooks
let mut sys_table_requests = self.system_table_requests.lock().await;
handle_system_table_request(self, self.engine.clone(), &mut sys_table_requests).await?;
let engine = self
.engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
handle_system_table_request(self, engine, &mut sys_table_requests).await?;
Ok(())
}
@@ -253,9 +263,14 @@ impl LocalCatalogManager {
table_name: t.table_name.clone(),
table_id: t.table_id,
};
let engine = self
.engine_manager
.engine(&t.engine)
.context(TableEngineNotFoundSnafu {
engine_name: &t.engine,
})?;
let option = self
.engine
let option = engine
.open_table(&context, request)
.await
.with_context(|_| OpenTableSnafu {
@@ -290,9 +305,7 @@ impl CatalogList for LocalCatalogManager {
}
fn catalog_names(&self) -> Result<Vec<String>> {
let mut res = self.catalogs.catalog_names()?;
res.push(SYSTEM_CATALOG_NAME.to_string());
Ok(res)
self.catalogs.catalog_names()
}
fn catalog(&self, name: &str) -> Result<Option<CatalogProviderRef>> {
@@ -345,7 +358,7 @@ impl CatalogManager for LocalCatalogManager {
{
let _lock = self.register_lock.lock().await;
if let Some(existing) = schema.table(&request.table_name)? {
if let Some(existing) = schema.table(&request.table_name).await? {
if existing.table_info().ident.table_id != request.table_id {
error!(
"Unexpected table register request: {:?}, existing: {:?}",
@@ -364,6 +377,7 @@ impl CatalogManager for LocalCatalogManager {
// Try to register table with same table id, just ignore.
Ok(false)
} else {
let engine = request.table.table_info().meta.engine.to_string();
// table does not exist
self.system
.register_table(
@@ -371,6 +385,7 @@ impl CatalogManager for LocalCatalogManager {
schema_name.clone(),
request.table_name.clone(),
request.table_id,
engine,
)
.await?;
schema.register_table(request.table_name, request.table)?;
@@ -404,6 +419,21 @@ impl CatalogManager for LocalCatalogManager {
schema: schema_name,
})?;
let _lock = self.register_lock.lock().await;
ensure!(
!schema.table_exist(&request.new_table_name)?,
TableExistsSnafu {
table: &request.new_table_name
}
);
let old_table = schema
.table(&request.table_name)
.await?
.context(TableNotExistSnafu {
table: &request.table_name,
})?;
let engine = old_table.table_info().meta.engine.to_string();
// rename table in system catalog
self.system
.register_table(
@@ -411,11 +441,14 @@ impl CatalogManager for LocalCatalogManager {
schema_name.clone(),
request.new_table_name.clone(),
request.table_id,
engine,
)
.await?;
Ok(schema
.rename_table(&request.table_name, request.new_table_name)
.is_ok())
let renamed = schema
.rename_table(&request.table_name, request.new_table_name.clone())
.is_ok();
Ok(renamed)
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
@@ -434,9 +467,10 @@ impl CatalogManager for LocalCatalogManager {
} = &request;
let table_id = self
.catalogs
.table(catalog, schema, table_name)?
.table(catalog, schema, table_name)
.await?
.with_context(|| error::TableNotExistSnafu {
table: format!("{catalog}.{schema}.{table_name}"),
table: format_full_table_name(catalog, schema, table_name),
})?
.table_info()
.ident
@@ -505,7 +539,7 @@ impl CatalogManager for LocalCatalogManager {
.schema(schema)
}
fn table(
async fn table(
&self,
catalog_name: &str,
schema_name: &str,
@@ -521,7 +555,7 @@ impl CatalogManager for LocalCatalogManager {
catalog: catalog_name,
schema: schema_name,
})?;
schema.table(table_name)
schema.table(table_name).await
}
}
@@ -529,6 +563,8 @@ impl CatalogManager for LocalCatalogManager {
mod tests {
use std::assert_matches::assert_matches;
use mito::engine::MITO_ENGINE;
use super::*;
use crate::system::{CatalogEntry, SchemaEntry};
@@ -540,6 +576,7 @@ mod tests {
schema_name: "S1".to_string(),
table_name: "T1".to_string(),
table_id: 1,
engine: MITO_ENGINE.to_string(),
}),
Entry::Catalog(CatalogEntry {
catalog_name: "C2".to_string(),
@@ -560,6 +597,7 @@ mod tests {
schema_name: "S1".to_string(),
table_name: "T2".to_string(),
table_id: 2,
engine: MITO_ENGINE.to_string(),
}),
];
let res = LocalCatalogManager::sort_entries(vec);

View File

@@ -18,6 +18,7 @@ use std::collections::HashMap;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, RwLock};
use async_trait::async_trait;
use common_catalog::consts::MIN_USER_TABLE_ID;
use common_telemetry::error;
use snafu::{ensure, OptionExt};
@@ -155,16 +156,20 @@ impl CatalogManager for MemoryCatalogManager {
}
}
fn table(&self, catalog: &str, schema: &str, table_name: &str) -> Result<Option<TableRef>> {
let c = self.catalogs.read().unwrap();
let catalog = if let Some(c) = c.get(catalog) {
async fn table(
&self,
catalog: &str,
schema: &str,
table_name: &str,
) -> Result<Option<TableRef>> {
let catalog = {
let c = self.catalogs.read().unwrap();
let Some(c) = c.get(catalog) else { return Ok(None) };
c.clone()
} else {
return Ok(None);
};
match catalog.schema(schema)? {
None => Ok(None),
Some(s) => s.table(table_name),
Some(s) => s.table(table_name).await,
}
}
}
@@ -283,6 +288,7 @@ impl Default for MemorySchemaProvider {
}
}
#[async_trait]
impl SchemaProvider for MemorySchemaProvider {
fn as_any(&self) -> &dyn Any {
self
@@ -293,7 +299,7 @@ impl SchemaProvider for MemorySchemaProvider {
Ok(tables.keys().cloned().collect())
}
fn table(&self, name: &str) -> Result<Option<TableRef>> {
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let tables = self.tables.read().unwrap();
Ok(tables.get(name).cloned())
}
@@ -318,16 +324,20 @@ impl SchemaProvider for MemorySchemaProvider {
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
let mut tables = self.tables.write().unwrap();
if tables.get(name).is_some() {
let table = tables.remove(name).unwrap();
tables.insert(new_name, table.clone());
Ok(table)
} else {
TableNotFoundSnafu {
let Some(table) = tables.remove(name) else {
return TableNotFoundSnafu {
table_info: name.to_string(),
}
.fail()?
}
.fail()?;
};
let e = match tables.entry(new_name) {
Entry::Vacant(e) => e,
Entry::Occupied(e) => {
return TableExistsSnafu { table: e.key() }.fail();
}
};
e.insert(table.clone());
Ok(table)
}
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
@@ -355,8 +365,8 @@ mod tests {
use super::*;
#[test]
fn test_new_memory_catalog_list() {
#[tokio::test]
async fn test_new_memory_catalog_list() {
let catalog_list = new_memory_catalog_list().unwrap();
let default_catalog = catalog_list.catalog(DEFAULT_CATALOG_NAME).unwrap().unwrap();
@@ -369,9 +379,9 @@ mod tests {
.register_table("numbers".to_string(), Arc::new(NumbersTable::default()))
.unwrap();
let table = default_schema.table("numbers").unwrap();
let table = default_schema.table("numbers").await.unwrap();
assert!(table.is_some());
assert!(default_schema.table("not_exists").unwrap().is_none());
assert!(default_schema.table("not_exists").await.unwrap().is_none());
}
#[tokio::test]
@@ -390,7 +400,6 @@ mod tests {
let other_table = NumbersTable::new(12);
let result = provider.register_table(table_name.to_string(), Arc::new(other_table));
let err = result.err().unwrap();
assert!(err.backtrace_opt().is_some());
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}
@@ -419,7 +428,7 @@ mod tests {
// test new table name exists
assert!(provider.table_exist(new_table_name).unwrap());
let registered_table = provider.table(new_table_name).unwrap().unwrap();
let registered_table = provider.table(new_table_name).await.unwrap().unwrap();
assert_eq!(
registered_table.table_info().ident.table_id,
test_table.table_info().ident.table_id
@@ -454,7 +463,7 @@ mod tests {
assert!(schema.table_exist(table_name).unwrap());
// rename table
let new_table_name = "numbers";
let new_table_name = "numbers_new";
let rename_table_req = RenameTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
@@ -468,6 +477,7 @@ mod tests {
let registered_table = catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
.await
.unwrap()
.unwrap();
assert_eq!(registered_table.table_info().ident.table_id, table_id);

View File

@@ -13,31 +13,37 @@
// limitations under the License.
use std::any::Any;
use std::collections::HashMap;
use std::collections::{HashMap, HashSet};
use std::pin::Pin;
use std::sync::Arc;
use arc_swap::ArcSwap;
use async_stream::stream;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MIN_USER_TABLE_ID};
use common_telemetry::{debug, info};
use async_trait::async_trait;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MIN_USER_TABLE_ID, MITO_ENGINE,
};
use common_telemetry::{debug, error, info};
use dashmap::DashMap;
use futures::Stream;
use futures_util::StreamExt;
use parking_lot::RwLock;
use snafu::{OptionExt, ResultExt};
use table::engine::{EngineContext, TableEngineRef};
use table::engine::manager::TableEngineManagerRef;
use table::engine::EngineContext;
use table::metadata::TableId;
use table::requests::{CreateTableRequest, OpenTableRequest};
use table::table::numbers::NumbersTable;
use table::TableRef;
use tokio::sync::Mutex;
use crate::error::{
CatalogNotFoundSnafu, CreateTableSnafu, InvalidCatalogValueSnafu, OpenTableSnafu, Result,
SchemaNotFoundSnafu, TableExistsSnafu, UnimplementedSnafu,
SchemaNotFoundSnafu, TableEngineNotFoundSnafu, TableExistsSnafu, UnimplementedSnafu,
};
use crate::helper::{
build_catalog_prefix, build_schema_prefix, build_table_global_prefix, CatalogKey, CatalogValue,
SchemaKey, SchemaValue, TableGlobalKey, TableGlobalValue, TableRegionalKey, TableRegionalValue,
CATALOG_KEY_PREFIX,
};
use crate::remote::{Kv, KvBackendRef};
use crate::{
@@ -50,21 +56,19 @@ use crate::{
pub struct RemoteCatalogManager {
node_id: u64,
backend: KvBackendRef,
catalogs: Arc<ArcSwap<HashMap<String, CatalogProviderRef>>>,
engine: TableEngineRef,
catalogs: Arc<RwLock<DashMap<String, CatalogProviderRef>>>,
engine_manager: TableEngineManagerRef,
system_table_requests: Mutex<Vec<RegisterSystemTableRequest>>,
mutex: Arc<Mutex<()>>,
}
impl RemoteCatalogManager {
pub fn new(engine: TableEngineRef, node_id: u64, backend: KvBackendRef) -> Self {
pub fn new(engine_manager: TableEngineManagerRef, node_id: u64, backend: KvBackendRef) -> Self {
Self {
engine,
engine_manager,
node_id,
backend,
catalogs: Default::default(),
system_table_requests: Default::default(),
mutex: Default::default(),
}
}
@@ -108,9 +112,13 @@ impl RemoteCatalogManager {
debug!("Ignoring non-catalog key: {}", String::from_utf8_lossy(&k));
continue;
}
let key = CatalogKey::parse(&String::from_utf8_lossy(&k))
.context(InvalidCatalogValueSnafu)?;
yield Ok(key)
let catalog_key = String::from_utf8_lossy(&k);
if let Ok(key) = CatalogKey::parse(&catalog_key) {
yield Ok(key)
} else {
error!("Invalid catalog key: {:?}", catalog_key);
}
}
}))
}
@@ -180,7 +188,9 @@ impl RemoteCatalogManager {
let max_table_id = MIN_USER_TABLE_ID - 1;
// initiate default catalog and schema
let default_catalog = self.initiate_default_catalog().await?;
let default_catalog = self
.create_catalog_and_schema(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.await?;
res.insert(DEFAULT_CATALOG_NAME.to_string(), default_catalog);
info!("Default catalog and schema registered");
@@ -260,13 +270,19 @@ impl RemoteCatalogManager {
Ok(())
}
async fn initiate_default_catalog(&self) -> Result<CatalogProviderRef> {
let default_catalog = self.new_catalog_provider(DEFAULT_CATALOG_NAME);
let default_schema = self.new_schema_provider(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME);
default_catalog.register_schema(DEFAULT_SCHEMA_NAME.to_string(), default_schema.clone())?;
pub async fn create_catalog_and_schema(
&self,
catalog_name: &str,
schema_name: &str,
) -> Result<CatalogProviderRef> {
let schema_provider = self.new_schema_provider(catalog_name, schema_name);
let catalog_provider = self.new_catalog_provider(catalog_name);
catalog_provider.register_schema(schema_name.to_string(), schema_provider.clone())?;
let schema_key = SchemaKey {
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
catalog_name: catalog_name.to_string(),
schema_name: schema_name.to_string(),
}
.to_string();
self.backend
@@ -277,10 +293,10 @@ impl RemoteCatalogManager {
.context(InvalidCatalogValueSnafu)?,
)
.await?;
info!("Registered default schema");
info!("Created schema '{schema_key}'");
let catalog_key = CatalogKey {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
catalog_name: catalog_name.to_string(),
}
.to_string();
self.backend
@@ -291,8 +307,8 @@ impl RemoteCatalogManager {
.context(InvalidCatalogValueSnafu)?,
)
.await?;
info!("Registered default catalog");
Ok(default_catalog)
info!("Created catalog '{catalog_key}");
Ok(catalog_provider)
}
async fn open_or_create_table(
@@ -325,8 +341,13 @@ impl RemoteCatalogManager {
table_name: table_name.clone(),
table_id,
};
match self
.engine
let engine = self
.engine_manager
.engine(&table_info.meta.engine)
.context(TableEngineNotFoundSnafu {
engine_name: &table_info.meta.engine,
})?;
match engine
.open_table(&context, request)
.await
.with_context(|_| OpenTableSnafu {
@@ -357,9 +378,10 @@ impl RemoteCatalogManager {
primary_key_indices: meta.primary_key_indices.clone(),
create_if_not_exists: true,
table_options: meta.options.clone(),
engine: engine.name().to_string(),
};
self.engine
engine
.create_table(&context, req)
.await
.context(CreateTableSnafu {
@@ -381,21 +403,25 @@ impl CatalogManager for RemoteCatalogManager {
"Initialized catalogs: {:?}",
catalogs.keys().cloned().collect::<Vec<_>>()
);
self.catalogs.store(Arc::new(catalogs));
{
let self_catalogs = self.catalogs.read();
catalogs.into_iter().for_each(|(k, v)| {
self_catalogs.insert(k, v);
});
}
info!("Max table id allocated: {}", max_table_id);
let mut system_table_requests = self.system_table_requests.lock().await;
handle_system_table_request(self, self.engine.clone(), &mut system_table_requests).await?;
let engine = self
.engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
handle_system_table_request(self, engine, &mut system_table_requests).await?;
info!("All system table opened");
self.catalog(DEFAULT_CATALOG_NAME)
.unwrap()
.unwrap()
.schema(DEFAULT_SCHEMA_NAME)
.unwrap()
.unwrap()
.register_table("numbers".to_string(), Arc::new(NumbersTable::default()))
.unwrap();
Ok(())
}
@@ -468,7 +494,7 @@ impl CatalogManager for RemoteCatalogManager {
.schema(schema)
}
fn table(
async fn table(
&self,
catalog_name: &str,
schema_name: &str,
@@ -483,7 +509,7 @@ impl CatalogManager for RemoteCatalogManager {
catalog: catalog_name,
schema: schema_name,
})?;
schema.table(table_name)
schema.table(table_name).await
}
}
@@ -499,12 +525,10 @@ impl CatalogList for RemoteCatalogManager {
) -> Result<Option<CatalogProviderRef>> {
let key = self.build_catalog_key(&name).to_string();
let backend = self.backend.clone();
let mutex = self.mutex.clone();
let catalogs = self.catalogs.clone();
std::thread::spawn(|| {
common_runtime::block_on_write(async move {
let _guard = mutex.lock().await;
backend
.set(
key.as_bytes(),
@@ -513,11 +537,10 @@ impl CatalogList for RemoteCatalogManager {
.context(InvalidCatalogValueSnafu)?,
)
.await?;
let prev_catalogs = catalogs.load();
let mut new_catalogs = HashMap::with_capacity(prev_catalogs.len() + 1);
new_catalogs.clone_from(&prev_catalogs);
let prev = new_catalogs.insert(name, catalog);
catalogs.store(Arc::new(new_catalogs));
let catalogs = catalogs.read();
let prev = catalogs.insert(name, catalog.clone());
Ok(prev)
})
})
@@ -527,12 +550,65 @@ impl CatalogList for RemoteCatalogManager {
/// List all catalogs from metasrv
fn catalog_names(&self) -> Result<Vec<String>> {
Ok(self.catalogs.load().keys().cloned().collect::<Vec<_>>())
let catalogs = self.catalogs.read();
Ok(catalogs.iter().map(|k| k.key().to_string()).collect())
}
/// Read catalog info of given name from metasrv.
fn catalog(&self, name: &str) -> Result<Option<CatalogProviderRef>> {
Ok(self.catalogs.load().get(name).cloned())
{
let catalogs = self.catalogs.read();
let catalog = catalogs.get(name);
if let Some(catalog) = catalog {
return Ok(Some(catalog.clone()));
}
}
let catalogs = self.catalogs.write();
let catalog = catalogs.get(name);
if let Some(catalog) = catalog {
return Ok(Some(catalog.clone()));
}
// There is no incremental catalog syncing between the datanode and the metasrv yet, so we fetch the
// catalog from the metasrv on demand here. This can be removed once incremental catalog syncing lands in the datanode.
let backend = self.backend.clone();
let catalogs_from_meta: HashSet<String> = std::thread::spawn(|| {
common_runtime::block_on_read(async move {
let mut stream = backend.range(CATALOG_KEY_PREFIX.as_bytes());
let mut catalogs = HashSet::new();
while let Some(catalog) = stream.next().await {
if let Ok(catalog) = catalog {
let catalog_key = String::from_utf8_lossy(&catalog.0);
if let Ok(key) = CatalogKey::parse(&catalog_key) {
catalogs.insert(key.catalog_name);
}
}
}
catalogs
})
})
.join()
.unwrap();
catalogs.retain(|catalog_name, _| catalogs_from_meta.get(catalog_name).is_some());
for catalog in catalogs_from_meta {
catalogs
.entry(catalog.clone())
.or_insert(self.new_catalog_provider(&catalog));
}
let catalog = catalogs.get(name);
Ok(catalog.as_deref().cloned())
}
}
@@ -692,6 +768,7 @@ impl RemoteSchemaProvider {
}
}
#[async_trait]
impl SchemaProvider for RemoteSchemaProvider {
fn as_any(&self) -> &dyn Any {
self
@@ -701,7 +778,7 @@ impl SchemaProvider for RemoteSchemaProvider {
Ok(self.tables.load().keys().cloned().collect::<Vec<_>>())
}
fn table(&self, name: &str) -> Result<Option<TableRef>> {
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
Ok(self.tables.load().get(name).cloned())
}
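The `catalog()` lookup in this file follows a double-checked pattern: a shared read lock serves the cached common case, and only a miss escalates to the write lock plus an on-demand fetch from the metasrv backend. A simplified, self-contained sketch of that shape (generic cache with a synchronous `fetch` closure; it omits the stale-entry cleanup and the blocking bridge into the async backend call that the real code performs):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

/// Generic double-checked cache: cheap read path, slower fill path.
struct Cache<V: Clone> {
    inner: RwLock<HashMap<String, V>>,
}

impl<V: Clone> Cache<V> {
    fn new() -> Self {
        Self {
            inner: RwLock::new(HashMap::new()),
        }
    }

    fn get_or_fetch(&self, name: &str, fetch: impl FnOnce() -> Option<V>) -> Option<V> {
        // Fast path: shared read lock.
        if let Some(v) = self.inner.read().unwrap().get(name) {
            return Some(v.clone());
        }
        // Slow path: exclusive lock, re-check, then fetch and insert.
        let mut guard = self.inner.write().unwrap();
        if let Some(v) = guard.get(name) {
            return Some(v.clone());
        }
        let v = fetch()?;
        guard.insert(name.to_string(), v.clone());
        Some(v)
    }
}

fn main() {
    let cache: Cache<String> = Cache::new();
    // First call misses and "fetches" from the backend (a literal here).
    let v = cache.get_or_fetch("greptime", || Some("catalog".to_string()));
    assert_eq!(v.as_deref(), Some("catalog"));
    // Second call is served from the cache; `fetch` is not consulted.
    let v = cache.get_or_fetch("greptime", || None);
    assert_eq!(v.as_deref(), Some("catalog"));
}
```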

View File

@@ -15,11 +15,13 @@
use std::any::Any;
use std::sync::Arc;
use async_trait::async_trait;
use table::TableRef;
use crate::error::Result;
use crate::error::{NotSupportedSnafu, Result};
/// Represents a schema, comprising a number of named tables.
#[async_trait]
pub trait SchemaProvider: Sync + Send {
/// Returns the schema provider as [`Any`](std::any::Any)
/// so that it can be downcast to a specific implementation.
@@ -29,19 +31,34 @@ pub trait SchemaProvider: Sync + Send {
fn table_names(&self) -> Result<Vec<String>>;
/// Retrieves a specific table from the schema by name, provided it exists.
fn table(&self, name: &str) -> Result<Option<TableRef>>;
async fn table(&self, name: &str) -> Result<Option<TableRef>>;
/// If supported by the implementation, adds a new table to this schema.
/// If a table with the same name already exists, a "Table already exists" error is returned.
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>>;
fn register_table(&self, name: String, _table: TableRef) -> Result<Option<TableRef>> {
NotSupportedSnafu {
op: format!("register_table({name}, <table>)"),
}
.fail()
}
/// If supported by the implementation, renames an existing table from this schema and returns it.
/// If no table of that name exists, returns "Table not found" error.
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef>;
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
NotSupportedSnafu {
op: format!("rename_table({name}, {new_name})"),
}
.fail()
}
/// If supported by the implementation, removes an existing table from this schema and returns it.
/// If no table of that name exists, returns Ok(None).
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>>;
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
NotSupportedSnafu {
op: format!("deregister_table({name})"),
}
.fail()
}
/// If supported by the implementation, checks whether a table with the given name exists in this schema provider.
/// Returns false if no matching table is found.
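With the mutating methods now carrying default "not supported" implementations, a read-only provider only needs `as_any`, `table_names`, the new async `table`, and `table_exist`. A minimal sketch under those assumptions (hypothetical `StaticSchemaProvider`, imagined to live alongside the trait in the catalog crate; not part of this changeset):

```rust
use std::any::Any;
use std::collections::HashMap;

use async_trait::async_trait;
use table::TableRef;

use crate::error::Result;

/// Hypothetical read-only provider backed by a fixed table map.
struct StaticSchemaProvider {
    tables: HashMap<String, TableRef>,
}

#[async_trait]
impl SchemaProvider for StaticSchemaProvider {
    fn as_any(&self) -> &dyn Any {
        self
    }

    fn table_names(&self) -> Result<Vec<String>> {
        Ok(self.tables.keys().cloned().collect())
    }

    async fn table(&self, name: &str) -> Result<Option<TableRef>> {
        Ok(self.tables.get(name).cloned())
    }

    fn table_exist(&self, name: &str) -> Result<bool> {
        Ok(self.tables.contains_key(name))
    }

    // register_table / rename_table / deregister_table fall back to the
    // default `NotSupported` implementations introduced above.
}
```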

View File

@@ -17,8 +17,8 @@ use std::collections::HashMap;
use std::sync::Arc;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, SYSTEM_CATALOG_NAME,
SYSTEM_CATALOG_TABLE_ID, SYSTEM_CATALOG_TABLE_NAME,
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MITO_ENGINE,
SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_ID, SYSTEM_CATALOG_TABLE_NAME,
};
use common_query::logical_plan::Expr;
use common_query::physical_plan::{PhysicalPlanRef, SessionContext};
@@ -32,7 +32,9 @@ use serde::{Deserialize, Serialize};
use snafu::{ensure, OptionExt, ResultExt};
use table::engine::{EngineContext, TableEngineRef};
use table::metadata::{TableId, TableInfoRef};
use table::requests::{CreateTableRequest, DeleteRequest, InsertRequest, OpenTableRequest};
use table::requests::{
CreateTableRequest, DeleteRequest, InsertRequest, OpenTableRequest, TableOptions,
};
use table::{Table, TableRef};
use crate::error::{
@@ -109,7 +111,8 @@ impl SystemCatalogTable {
region_numbers: vec![0],
primary_key_indices: vec![ENTRY_TYPE_INDEX, KEY_INDEX],
create_if_not_exists: true,
table_options: HashMap::new(),
table_options: TableOptions::default(),
engine: engine.name().to_string(),
};
let table = engine
@@ -192,12 +195,13 @@ pub fn build_table_insert_request(
schema: String,
table_name: String,
table_id: TableId,
engine: String,
) -> InsertRequest {
let entry_key = format_table_entry_key(&catalog, &schema, table_id);
build_insert_request(
EntryType::Table,
entry_key.as_bytes(),
serde_json::to_string(&TableEntryValue { table_name })
serde_json::to_string(&TableEntryValue { table_name, engine })
.unwrap()
.as_bytes(),
)
@@ -217,7 +221,7 @@ fn build_primary_key_columns(entry_type: EntryType, key: &[u8]) -> HashMap<Strin
let mut m = HashMap::with_capacity(3);
m.insert(
"entry_type".to_string(),
Arc::new(UInt8Vector::from_slice(&[entry_type as u8])) as _,
Arc::new(UInt8Vector::from_slice([entry_type as u8])) as _,
);
m.insert(
"key".to_string(),
@@ -226,7 +230,7 @@ fn build_primary_key_columns(entry_type: EntryType, key: &[u8]) -> HashMap<Strin
// The timestamp in the key part is intentionally left as 0
m.insert(
"timestamp".to_string(),
Arc::new(TimestampMillisecondVector::from_slice(&[0])) as _,
Arc::new(TimestampMillisecondVector::from_slice([0])) as _,
);
m
}
@@ -256,12 +260,12 @@ pub fn build_insert_request(entry_type: EntryType, key: &[u8], value: &[u8]) ->
let now = util::current_time_millis();
columns_values.insert(
"gmt_created".to_string(),
Arc::new(TimestampMillisecondVector::from_slice(&[now])) as _,
Arc::new(TimestampMillisecondVector::from_slice([now])) as _,
);
columns_values.insert(
"gmt_modified".to_string(),
Arc::new(TimestampMillisecondVector::from_slice(&[now])) as _,
Arc::new(TimestampMillisecondVector::from_slice([now])) as _,
);
InsertRequest {
@@ -328,6 +332,7 @@ pub fn decode_system_catalog(
schema_name: table_parts[1].to_string(),
table_name: table_meta.table_name,
table_id,
engine: table_meta.engine,
}))
}
}
@@ -383,27 +388,35 @@ pub struct TableEntry {
pub schema_name: String,
pub table_name: String,
pub table_id: TableId,
pub engine: String,
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
pub struct TableEntryValue {
pub table_name: String,
#[serde(default = "mito_engine")]
pub engine: String,
}
fn mito_engine() -> String {
MITO_ENGINE.to_string()
}
#[cfg(test)]
mod tests {
use common_recordbatch::RecordBatches;
use common_test_util::temp_dir::{create_temp_dir, TempDir};
use datatypes::value::Value;
use log_store::NoopLogStore;
use mito::config::EngineConfig;
use mito::engine::MitoEngine;
use mito::engine::{MitoEngine, MITO_ENGINE};
use object_store::ObjectStore;
use storage::compaction::noop::NoopCompactionScheduler;
use storage::config::EngineConfig as StorageEngineConfig;
use storage::EngineImpl;
use table::metadata::TableType;
use table::metadata::TableType::Base;
use tempdir::TempDir;
use super::*;
@@ -478,13 +491,11 @@ mod tests {
}
pub async fn prepare_table_engine() -> (TempDir, TableEngineRef) {
let dir = TempDir::new("system-table-test").unwrap();
let dir = create_temp_dir("system-table-test");
let store_dir = dir.path().to_string_lossy();
let accessor = object_store::backend::fs::Builder::default()
.root(&store_dir)
.build()
.unwrap();
let object_store = ObjectStore::new(accessor);
let mut builder = object_store::services::Fs::default();
builder.root(&store_dir);
let object_store = ObjectStore::new(builder).unwrap().finish();
let noop_compaction_scheduler = Arc::new(NoopCompactionScheduler::default());
let table_engine = Arc::new(MitoEngine::new(
EngineConfig::default(),
@@ -528,6 +539,7 @@ mod tests {
DEFAULT_SCHEMA_NAME.to_string(),
"my_table".to_string(),
1,
MITO_ENGINE.to_string(),
);
let result = catalog_table.insert(table_insertion).await.unwrap();
assert_eq!(result, 1);
@@ -548,6 +560,7 @@ mod tests {
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "my_table".to_string(),
table_id: 1,
engine: MITO_ENGINE.to_string(),
});
assert_eq!(entry, expected);
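The `#[serde(default = "mito_engine")]` attribute is what keeps pre-existing catalog entries, written before the `engine` field was added, deserializable. A standalone sketch of the same mechanism, using a stand-in struct rather than the crate's actual `TableEntryValue` (the `"mito"` literal mirrors `MITO_ENGINE` and is an assumption here):

```rust
use serde::{Deserialize, Serialize};

fn mito_engine() -> String {
    "mito".to_string()
}

#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
struct TableEntryValue {
    table_name: String,
    // Entries persisted before this change have no "engine" key;
    // serde fills it in with the mito engine name on deserialization.
    #[serde(default = "mito_engine")]
    engine: String,
}

fn main() {
    // Old on-disk value: only the table name was stored.
    let old: TableEntryValue = serde_json::from_str(r#"{"table_name":"t"}"#).unwrap();
    assert_eq!(old.engine, "mito");

    // New values carry the engine explicitly and round-trip unchanged.
    let new: TableEntryValue =
        serde_json::from_str(r#"{"table_name":"t","engine":"mito"}"#).unwrap();
    assert_eq!(new.engine, "mito");
}
```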

View File

@@ -0,0 +1,190 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::sync::Arc;
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_catalog::format_full_table_name;
use datafusion::common::{ResolvedTableReference, TableReference};
use datafusion::datasource::provider_as_source;
use datafusion::logical_expr::TableSource;
use session::context::QueryContext;
use snafu::{ensure, OptionExt};
use table::table::adapter::DfTableProviderAdapter;
use crate::error::{
CatalogNotFoundSnafu, QueryAccessDeniedSnafu, Result, SchemaNotFoundSnafu, TableNotExistSnafu,
};
use crate::information_schema::InformationSchemaProvider;
use crate::CatalogListRef;
pub struct DfTableSourceProvider {
catalog_list: CatalogListRef,
resolved_tables: HashMap<String, Arc<dyn TableSource>>,
disallow_cross_schema_query: bool,
default_catalog: String,
default_schema: String,
}
impl DfTableSourceProvider {
pub fn new(
catalog_list: CatalogListRef,
disallow_cross_schema_query: bool,
query_ctx: &QueryContext,
) -> Self {
Self {
catalog_list,
disallow_cross_schema_query,
resolved_tables: HashMap::new(),
default_catalog: query_ctx.current_catalog(),
default_schema: query_ctx.current_schema(),
}
}
pub fn resolve_table_ref<'a>(
&'a self,
table_ref: TableReference<'a>,
) -> Result<ResolvedTableReference<'a>> {
if self.disallow_cross_schema_query {
match &table_ref {
TableReference::Bare { .. } => (),
TableReference::Partial { schema, .. } => {
ensure!(
schema.as_ref() == self.default_schema,
QueryAccessDeniedSnafu {
catalog: &self.default_catalog,
schema: schema.as_ref(),
}
);
}
TableReference::Full {
catalog, schema, ..
} => {
ensure!(
catalog.as_ref() == self.default_catalog
&& schema.as_ref() == self.default_schema,
QueryAccessDeniedSnafu {
catalog: catalog.as_ref(),
schema: schema.as_ref()
}
);
}
};
}
Ok(table_ref.resolve(&self.default_catalog, &self.default_schema))
}
pub async fn resolve_table(
&mut self,
table_ref: TableReference<'_>,
) -> Result<Arc<dyn TableSource>> {
let table_ref = self.resolve_table_ref(table_ref)?;
let resolved_name = table_ref.to_string();
if let Some(table) = self.resolved_tables.get(&resolved_name) {
return Ok(table.clone());
}
let catalog_name = table_ref.catalog.as_ref();
let schema_name = table_ref.schema.as_ref();
let table_name = table_ref.table.as_ref();
let schema = if schema_name != INFORMATION_SCHEMA_NAME {
let catalog = self
.catalog_list
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
catalog.schema(schema_name)?.context(SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?
} else {
let catalog_provider = self
.catalog_list
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
Arc::new(InformationSchemaProvider::new(
catalog_name.to_string(),
catalog_provider,
))
};
let table = schema
.table(table_name)
.await?
.with_context(|| TableNotExistSnafu {
table: format_full_table_name(catalog_name, schema_name, table_name),
})?;
let table = DfTableProviderAdapter::new(table);
let table = provider_as_source(Arc::new(table));
self.resolved_tables.insert(resolved_name, table.clone());
Ok(table)
}
}
#[cfg(test)]
mod tests {
use std::borrow::Cow;
use session::context::QueryContext;
use super::*;
use crate::local::MemoryCatalogManager;
#[test]
fn test_validate_table_ref() {
let query_ctx = &QueryContext::with("greptime", "public");
let table_provider =
DfTableSourceProvider::new(Arc::new(MemoryCatalogManager::default()), true, query_ctx);
let table_ref = TableReference::Bare {
table: Cow::Borrowed("table_name"),
};
let result = table_provider.resolve_table_ref(table_ref);
assert!(result.is_ok());
let table_ref = TableReference::Partial {
schema: Cow::Borrowed("public"),
table: Cow::Borrowed("table_name"),
};
let result = table_provider.resolve_table_ref(table_ref);
assert!(result.is_ok());
let table_ref = TableReference::Partial {
schema: Cow::Borrowed("wrong_schema"),
table: Cow::Borrowed("table_name"),
};
let result = table_provider.resolve_table_ref(table_ref);
assert!(result.is_err());
let table_ref = TableReference::Full {
catalog: Cow::Borrowed("greptime"),
schema: Cow::Borrowed("public"),
table: Cow::Borrowed("table_name"),
};
let result = table_provider.resolve_table_ref(table_ref);
assert!(result.is_ok());
let table_ref = TableReference::Full {
catalog: Cow::Borrowed("wrong_catalog"),
schema: Cow::Borrowed("public"),
table: Cow::Borrowed("table_name"),
};
let result = table_provider.resolve_table_ref(table_ref);
assert!(result.is_err());
}
}
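Beyond the synchronous reference checks exercised in these tests, the provider's main entry point is the async `resolve_table`, which caches the resulting `TableSource` per resolved name. A hedged usage sketch (the table name is illustrative and must already be registered; `DfTableSourceProvider` is assumed to be in scope, since the new file's module path is not shown in this diff):

```rust
use std::borrow::Cow;
use std::sync::Arc;

use datafusion::common::TableReference;
use session::context::QueryContext;

use crate::error::Result;
use crate::local::MemoryCatalogManager;

async fn resolve_for_planning() -> Result<()> {
    let query_ctx = QueryContext::with("greptime", "public");
    let mut provider =
        DfTableSourceProvider::new(Arc::new(MemoryCatalogManager::default()), true, &query_ctx);

    // A bare reference resolves against the session's defaults ("greptime"."public");
    // the resulting `TableSource` is cached under its fully resolved name.
    let source = provider
        .resolve_table(TableReference::Bare {
            table: Cow::Borrowed("numbers"),
        })
        .await?;
    let _schema = source.schema();
    Ok(())
}
```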

View File

@@ -15,27 +15,12 @@
// The `tables` table in the system catalog keeps a record of all tables created by the user.
use std::any::Any;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
use async_stream::stream;
use async_trait::async_trait;
use common_catalog::consts::{INFORMATION_SCHEMA_NAME, SYSTEM_CATALOG_TABLE_NAME};
use common_error::ext::BoxedError;
use common_query::logical_plan::Expr;
use common_query::physical_plan::PhysicalPlanRef;
use common_recordbatch::error::Result as RecordBatchResult;
use common_recordbatch::{RecordBatch, RecordBatchStream};
use datatypes::prelude::{ConcreteDataType, DataType};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::ValueRef;
use datatypes::vectors::VectorRef;
use futures::Stream;
use snafu::ResultExt;
use table::engine::TableEngineRef;
use table::error::TablesRecordBatchSnafu;
use table::metadata::{TableId, TableInfoRef};
use table::table::scan::SimpleTableScan;
use table::metadata::TableId;
use table::{Table, TableRef};
use crate::error::{self, Error, InsertCatalogRecordSnafu, Result as CatalogResult};
@@ -43,204 +28,32 @@ use crate::system::{
build_schema_insert_request, build_table_deletion_request, build_table_insert_request,
SystemCatalogTable,
};
use crate::{
CatalogListRef, CatalogProvider, DeregisterTableRequest, SchemaProvider, SchemaProviderRef,
};
/// Tables holds all tables created by user.
pub struct Tables {
schema: SchemaRef,
catalogs: CatalogListRef,
engine_name: String,
}
impl Tables {
pub fn new(catalogs: CatalogListRef, engine_name: String) -> Self {
Self {
schema: Arc::new(build_schema_for_tables()),
catalogs,
engine_name,
}
}
}
#[async_trait::async_trait]
impl Table for Tables {
fn as_any(&self) -> &dyn Any {
self
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn table_info(&self) -> TableInfoRef {
unreachable!("Tables does not support table_info method")
}
async fn scan(
&self,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
_limit: Option<usize>,
) -> table::error::Result<PhysicalPlanRef> {
let catalogs = self.catalogs.clone();
let schema_ref = self.schema.clone();
let engine_name = self.engine_name.clone();
let stream = stream!({
for catalog_name in catalogs
.catalog_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
{
let catalog = catalogs
.catalog(&catalog_name)
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.unwrap();
for schema_name in catalog
.schema_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
{
let mut tables_in_schema = Vec::with_capacity(
catalog
.schema_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.len(),
);
let schema = catalog
.schema(&schema_name)
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.unwrap();
for table_name in schema
.table_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
{
tables_in_schema.push(table_name);
}
let vec = tables_to_record_batch(
&catalog_name,
&schema_name,
tables_in_schema,
&engine_name,
);
let record_batch_res = RecordBatch::new(schema_ref.clone(), vec);
yield record_batch_res;
}
}
});
let stream = Box::pin(TablesRecordBatchStream {
schema: self.schema.clone(),
stream: Box::pin(stream),
});
Ok(Arc::new(SimpleTableScan::new(stream)))
}
}
/// Convert tables info to `RecordBatch`.
fn tables_to_record_batch(
catalog_name: &str,
schema_name: &str,
table_names: Vec<String>,
engine: &str,
) -> Vec<VectorRef> {
let mut catalog_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
let mut schema_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
let mut table_name_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
let mut engine_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
for table_name in table_names {
// Safety: All these vectors are string type.
catalog_vec.push_value_ref(ValueRef::String(catalog_name));
schema_vec.push_value_ref(ValueRef::String(schema_name));
table_name_vec.push_value_ref(ValueRef::String(&table_name));
engine_vec.push_value_ref(ValueRef::String(engine));
}
vec![
catalog_vec.to_vector(),
schema_vec.to_vector(),
table_name_vec.to_vector(),
engine_vec.to_vector(),
]
}
pub struct TablesRecordBatchStream {
schema: SchemaRef,
stream: Pin<Box<dyn Stream<Item = RecordBatchResult<RecordBatch>> + Send>>,
}
impl Stream for TablesRecordBatchStream {
type Item = RecordBatchResult<RecordBatch>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
Pin::new(&mut self.stream).poll_next(cx)
}
}
impl RecordBatchStream for TablesRecordBatchStream {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
}
use crate::{CatalogProvider, DeregisterTableRequest, SchemaProvider, SchemaProviderRef};
pub struct InformationSchema {
pub tables: Arc<Tables>,
pub system: Arc<SystemCatalogTable>,
}
#[async_trait]
impl SchemaProvider for InformationSchema {
fn as_any(&self) -> &dyn Any {
self
}
fn table_names(&self) -> Result<Vec<String>, Error> {
Ok(vec![
"tables".to_string(),
SYSTEM_CATALOG_TABLE_NAME.to_string(),
])
Ok(vec![SYSTEM_CATALOG_TABLE_NAME.to_string()])
}
fn table(&self, name: &str) -> Result<Option<TableRef>, Error> {
if name.eq_ignore_ascii_case("tables") {
Ok(Some(self.tables.clone()))
} else if name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME) {
async fn table(&self, name: &str) -> Result<Option<TableRef>, Error> {
if name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME) {
Ok(Some(self.system.clone()))
} else {
Ok(None)
}
}
fn register_table(
&self,
_name: String,
_table: TableRef,
) -> crate::error::Result<Option<TableRef>> {
panic!("System catalog & schema does not support register table")
}
fn rename_table(&self, _name: &str, _new_name: String) -> crate::error::Result<TableRef> {
unimplemented!("System catalog & schema does not support rename table")
}
fn deregister_table(&self, _name: &str) -> crate::error::Result<Option<TableRef>> {
panic!("System catalog & schema does not support deregister table")
}
fn table_exist(&self, name: &str) -> Result<bool, Error> {
Ok(name.eq_ignore_ascii_case("tables")
|| name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME))
Ok(name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME))
}
}
@@ -249,13 +62,8 @@ pub struct SystemCatalog {
}
impl SystemCatalog {
pub fn new(
system: SystemCatalogTable,
catalogs: CatalogListRef,
engine: TableEngineRef,
) -> Self {
pub(crate) fn new(system: SystemCatalogTable) -> Self {
let schema = InformationSchema {
tables: Arc::new(Tables::new(catalogs, engine.name().to_string())),
system: Arc::new(system),
};
Self {
@@ -269,8 +77,9 @@ impl SystemCatalog {
schema: String,
table_name: String,
table_id: TableId,
engine: String,
) -> crate::error::Result<usize> {
let request = build_table_insert_request(catalog, schema, table_name, table_id);
let request = build_table_insert_request(catalog, schema, table_name, table_id, engine);
self.information_schema
.system
.insert(request)
@@ -332,104 +141,3 @@ impl CatalogProvider for SystemCatalog {
}
}
}
fn build_schema_for_tables() -> Schema {
let cols = vec![
ColumnSchema::new(
"catalog".to_string(),
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
"schema".to_string(),
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
"table_name".to_string(),
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
"engine".to_string(),
ConcreteDataType::string_datatype(),
false,
),
];
Schema::new(cols)
}
#[cfg(test)]
mod tests {
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_query::physical_plan::SessionContext;
use futures_util::StreamExt;
use table::table::numbers::NumbersTable;
use super::*;
use crate::local::memory::new_memory_catalog_list;
use crate::CatalogList;
#[tokio::test]
async fn test_tables() {
let catalog_list = new_memory_catalog_list().unwrap();
let schema = catalog_list
.catalog(DEFAULT_CATALOG_NAME)
.unwrap()
.unwrap()
.schema(DEFAULT_SCHEMA_NAME)
.unwrap()
.unwrap();
schema
.register_table("test_table".to_string(), Arc::new(NumbersTable::default()))
.unwrap();
let tables = Tables::new(catalog_list, "test_engine".to_string());
let tables_stream = tables.scan(None, &[], None).await.unwrap();
let session_ctx = SessionContext::new();
let mut tables_stream = tables_stream.execute(0, session_ctx.task_ctx()).unwrap();
if let Some(t) = tables_stream.next().await {
let batch = t.unwrap();
assert_eq!(1, batch.num_rows());
assert_eq!(4, batch.num_columns());
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(0).data_type()
);
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(1).data_type()
);
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(2).data_type()
);
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(3).data_type()
);
assert_eq!(
"greptime",
batch.column(0).get_ref(0).as_string().unwrap().unwrap()
);
assert_eq!(
"public",
batch.column(1).get_ref(0).as_string().unwrap().unwrap()
);
assert_eq!(
"test_table",
batch.column(2).get_ref(0).as_string().unwrap().unwrap()
);
assert_eq!(
"test_engine",
batch.column(3).get_ref(0).as_string().unwrap().unwrap()
);
} else {
panic!("Record batch should not be empty!")
}
}
}

View File

@@ -21,6 +21,7 @@ mod tests {
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_telemetry::{error, info};
use mito::config::EngineConfig;
use table::engine::manager::MemoryTableEngineManager;
use table::table::numbers::NumbersTable;
use table::TableRef;
use tokio::sync::Mutex;
@@ -33,7 +34,8 @@ mod tests {
mito::table::test_util::MockEngine::default(),
object_store,
));
let catalog_manager = LocalCatalogManager::try_new(mock_engine).await.unwrap();
let engine_manager = Arc::new(MemoryTableEngineManager::new(mock_engine.clone()));
let catalog_manager = LocalCatalogManager::try_new(engine_manager).await.unwrap();
catalog_manager.start().await?;
Ok(catalog_manager)
}
@@ -71,6 +73,7 @@ mod tests {
let registered_table = catalog_manager
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
.await
.unwrap()
.unwrap();
assert_eq!(registered_table.table_info().ident.table_id, table_id);
@@ -158,6 +161,7 @@ mod tests {
let table = guard.as_ref().unwrap();
let table_registered = catalog_manager
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "test_table")
.await
.unwrap()
.unwrap();
assert_eq!(

View File

@@ -147,6 +147,7 @@ impl TableEngine for MockTableEngine {
let table_id = TableId::from_str(
request
.table_options
.extra_options
.get("table_id")
.unwrap_or(&default_table_id),
)
@@ -220,4 +221,8 @@ impl TableEngine for MockTableEngine {
) -> table::Result<bool> {
unimplemented!()
}
async fn close(&self) -> table::Result<()> {
Ok(())
}
}

View File

@@ -27,9 +27,10 @@ mod tests {
KvBackend, KvBackendRef, RemoteCatalogManager, RemoteCatalogProvider, RemoteSchemaProvider,
};
use catalog::{CatalogList, CatalogManager, RegisterTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO_ENGINE};
use datatypes::schema::RawSchema;
use futures_util::StreamExt;
use table::engine::manager::MemoryTableEngineManager;
use table::engine::{EngineContext, TableEngineRef};
use table::requests::CreateTableRequest;
@@ -80,8 +81,11 @@ mod tests {
) -> (KvBackendRef, TableEngineRef, Arc<RemoteCatalogManager>) {
let backend = Arc::new(MockKvBackend::default()) as KvBackendRef;
let table_engine = Arc::new(MockTableEngine::default());
let catalog_manager =
RemoteCatalogManager::new(table_engine.clone(), node_id, backend.clone());
let engine_manager = Arc::new(MemoryTableEngineManager::alias(
MITO_ENGINE.to_string(),
table_engine.clone(),
));
let catalog_manager = RemoteCatalogManager::new(engine_manager, node_id, backend.clone());
catalog_manager.start().await.unwrap();
(backend, table_engine, Arc::new(catalog_manager))
}
@@ -131,6 +135,7 @@ mod tests {
primary_key_indices: vec![],
create_if_not_exists: false,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
},
)
.await
@@ -168,7 +173,6 @@ mod tests {
.schema(DEFAULT_SCHEMA_NAME)
.unwrap()
.unwrap();
assert_eq!(vec!["numbers"], default_schema.table_names().unwrap());
// register a new table with a nonexistent catalog
let catalog_name = DEFAULT_CATALOG_NAME.to_string();
@@ -191,6 +195,7 @@ mod tests {
primary_key_indices: vec![],
create_if_not_exists: false,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
},
)
.await
@@ -203,14 +208,7 @@ mod tests {
table,
};
assert!(catalog_manager.register_table(reg_req).await.unwrap());
assert_eq!(
HashSet::from([table_name, "numbers".to_string()]),
default_schema
.table_names()
.unwrap()
.into_iter()
.collect::<HashSet<_>>()
);
assert_eq!(vec![table_name], default_schema.table_names().unwrap());
}
#[tokio::test]
@@ -251,6 +249,7 @@ mod tests {
primary_key_indices: vec![],
create_if_not_exists: false,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
},
)
.await

View File

@@ -16,13 +16,14 @@ common-grpc-expr = { path = "../common/grpc-expr" }
common-query = { path = "../common/query" }
common-recordbatch = { path = "../common/recordbatch" }
common-time = { path = "../common/time" }
common-telemetry = { path = "../common/telemetry" }
datafusion.workspace = true
datatypes = { path = "../datatypes" }
enum_dispatch = "0.3"
futures-util.workspace = true
parking_lot = "0.12"
prost.workspace = true
rand = "0.8"
rand.workspace = true
snafu.workspace = true
tonic.workspace = true
@@ -32,12 +33,8 @@ substrait = { path = "../common/substrait" }
tokio.workspace = true
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
# TODO(ruihang): upgrade to 0.11 once substrait-rs supports it.
[dev-dependencies.prost_09]
package = "prost"
version = "0.9"
prost.workspace = true
[dev-dependencies.substrait_proto]
package = "substrait"
version = "0.2"
version = "0.4"

View File

@@ -14,11 +14,12 @@
use api::v1::{ColumnDataType, ColumnDef, CreateTableExpr, TableId};
use client::{Client, Database};
use prost_09::Message;
use substrait_proto::protobuf::plan_rel::RelType as PlanRelType;
use substrait_proto::protobuf::read_rel::{NamedTable, ReadType};
use substrait_proto::protobuf::rel::RelType;
use substrait_proto::protobuf::{PlanRel, ReadRel, Rel};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO_ENGINE};
use prost::Message;
use substrait_proto::proto::plan_rel::RelType as PlanRelType;
use substrait_proto::proto::read_rel::{NamedTable, ReadType};
use substrait_proto::proto::rel::RelType;
use substrait_proto::proto::{PlanRel, ReadRel, Rel};
use tracing::{event, Level};
fn main() {
@@ -63,9 +64,10 @@ async fn run() {
table_options: Default::default(),
table_id: Some(TableId { id: 1024 }),
region_ids: vec![0],
engine: MITO_ENGINE.to_string(),
};
let db = Database::with_client(client);
let db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
let result = db.create(create_table_expr).await.unwrap();
event!(Level::INFO, "create table result: {:#?}", result);
@@ -88,12 +90,8 @@ fn mock_logical_plan() -> Vec<u8> {
let read_type = ReadType::NamedTable(named_table);
let read_rel = ReadRel {
common: None,
base_schema: None,
filter: None,
projection: None,
advanced_extension: None,
read_type: Some(read_type),
..Default::default()
};
let mut buf = vec![];

View File

@@ -14,6 +14,9 @@
use std::sync::Arc;
use api::v1::greptime_database_client::GreptimeDatabaseClient;
use api::v1::health_check_client::HealthCheckClient;
use api::v1::HealthCheckRequest;
use arrow_flight::flight_service_client::FlightServiceClient;
use common_grpc::channel_manager::ChannelManager;
use parking_lot::RwLock;
@@ -23,6 +26,10 @@ use tonic::transport::Channel;
use crate::load_balance::{LoadBalance, Loadbalancer};
use crate::{error, Result};
pub(crate) struct DatabaseClient {
pub(crate) inner: GreptimeDatabaseClient<Channel>,
}
pub(crate) struct FlightClient {
addr: String,
client: FlightServiceClient<Channel>,
@@ -118,7 +125,7 @@ impl Client {
self.inner.set_peers(urls);
}
pub(crate) fn make_client(&self) -> Result<FlightClient> {
fn find_channel(&self) -> Result<(String, Channel)> {
let addr = self
.inner
.get_peer()
@@ -131,11 +138,30 @@ impl Client {
.channel_manager
.get(&addr)
.context(error::CreateChannelSnafu { addr: &addr })?;
Ok((addr, channel))
}
pub(crate) fn make_flight_client(&self) -> Result<FlightClient> {
let (addr, channel) = self.find_channel()?;
Ok(FlightClient {
addr,
client: FlightServiceClient::new(channel),
})
}
pub(crate) fn make_database_client(&self) -> Result<DatabaseClient> {
let (_, channel) = self.find_channel()?;
Ok(DatabaseClient {
inner: GreptimeDatabaseClient::new(channel),
})
}
pub async fn health_check(&self) -> Result<()> {
let (_, channel) = self.find_channel()?;
let mut client = HealthCheckClient::new(channel);
client.health_check(HealthCheckRequest {}).await?;
Ok(())
}
}
#[cfg(test)]

View File

@@ -12,57 +12,121 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::str::FromStr;
use api::v1::auth_header::AuthScheme;
use api::v1::ddl_request::Expr as DdlExpr;
use api::v1::greptime_request::Request;
use api::v1::query_request::Query;
use api::v1::{
AlterExpr, CreateTableExpr, DdlRequest, DropTableExpr, GreptimeRequest, InsertRequest,
QueryRequest, RequestHeader,
greptime_response, AffectedRows, AlterExpr, AuthHeader, CreateTableExpr, DdlRequest,
DropTableExpr, FlushTableExpr, GreptimeRequest, InsertRequest, PromRangeQuery, QueryRequest,
RequestHeader,
};
use arrow_flight::{FlightData, Ticket};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::prelude::*;
use common_grpc::flight::{flight_messages_to_recordbatches, FlightDecoder, FlightMessage};
use common_query::Output;
use common_telemetry::logging;
use futures_util::{TryFutureExt, TryStreamExt};
use prost::Message;
use snafu::{ensure, ResultExt};
use crate::error::{ConvertFlightDataSnafu, IllegalFlightMessagesSnafu};
use crate::error::{
ConvertFlightDataSnafu, IllegalDatabaseResponseSnafu, IllegalFlightMessagesSnafu,
};
use crate::{error, Client, Result};
#[derive(Clone, Debug)]
#[derive(Clone, Debug, Default)]
pub struct Database {
// The "catalog" and "schema" to be used in processing the requests at the server side.
// They are the "hint" or "context", just like how the "database" in "USE" statement is treated in MySQL.
// They will be carried in the request header.
catalog: String,
schema: String,
// The dbname follows the same naming rules as our MySQL, PostgreSQL and HTTP
// protocols. The server treats dbname with higher priority than catalog/schema.
dbname: String,
client: Client,
ctx: FlightContext,
}
impl Database {
/// Create database service client using catalog and schema
pub fn new(catalog: impl Into<String>, schema: impl Into<String>, client: Client) -> Self {
Self {
catalog: catalog.into(),
schema: schema.into(),
client,
..Default::default()
}
}
pub fn with_client(client: Client) -> Self {
Self::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client)
/// Create database service client using dbname.
///
/// This API is designed for external usage. `dbname` is:
///
/// - the name of the database when using GreptimeDB standalone or cluster
/// - the name provided by GreptimeCloud or other multi-tenant GreptimeDB
/// environment
pub fn new_with_dbname(dbname: impl Into<String>, client: Client) -> Self {
Self {
dbname: dbname.into(),
client,
..Default::default()
}
}
pub fn catalog(&self) -> &String {
&self.catalog
}
pub fn set_catalog(&mut self, catalog: impl Into<String>) {
self.catalog = catalog.into();
}
pub fn schema(&self) -> &String {
&self.schema
}
pub fn set_schema(&mut self, schema: impl Into<String>) {
self.schema = schema.into();
}
pub async fn insert(&self, request: InsertRequest) -> Result<Output> {
self.do_get(Request::Insert(request)).await
pub fn dbname(&self) -> &String {
&self.dbname
}
pub fn set_dbname(&mut self, dbname: impl Into<String>) {
self.dbname = dbname.into();
}
pub fn set_auth(&mut self, auth: AuthScheme) {
self.ctx.auth_header = Some(AuthHeader {
auth_scheme: Some(auth),
});
}
pub async fn insert(&self, request: InsertRequest) -> Result<u32> {
let mut client = self.client.make_database_client()?.inner;
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
dbname: self.dbname.clone(),
}),
request: Some(Request::Insert(request)),
};
let response = client
.handle(request)
.await?
.into_inner()
.response
.context(IllegalDatabaseResponseSnafu {
err_msg: "GreptimeResponse is empty",
})?;
let greptime_response::Response::AffectedRows(AffectedRows { value }) = response;
Ok(value)
}
pub async fn sql(&self, sql: &str) -> Result<Output> {
@@ -79,6 +143,24 @@ impl Database {
.await
}
pub async fn prom_range_query(
&self,
promql: &str,
start: &str,
end: &str,
step: &str,
) -> Result<Output> {
self.do_get(Request::Query(QueryRequest {
query: Some(Query::PromRangeQuery(PromRangeQuery {
query: promql.to_string(),
start: start.to_string(),
end: end.to_string(),
step: step.to_string(),
})),
}))
.await
}
pub async fn create(&self, expr: CreateTableExpr) -> Result<Output> {
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::CreateTable(expr)),
@@ -100,19 +182,28 @@ impl Database {
.await
}
pub async fn flush_table(&self, expr: FlushTableExpr) -> Result<Output> {
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::FlushTable(expr)),
}))
.await
}
async fn do_get(&self, request: Request) -> Result<Output> {
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
dbname: self.dbname.clone(),
}),
request: Some(request),
};
let request = Ticket {
ticket: request.encode_to_vec(),
ticket: request.encode_to_vec().into(),
};
let mut client = self.client.make_client()?;
let mut client = self.client.make_flight_client()?;
// TODO(LFC): Streaming get flight data.
let flight_data: Vec<FlightData> = client
@@ -121,17 +212,26 @@ impl Database {
.and_then(|response| response.into_inner().try_collect())
.await
.map_err(|e| {
let code = get_metadata_value(&e, INNER_ERROR_CODE)
.and_then(|s| StatusCode::from_str(&s).ok())
.unwrap_or(StatusCode::Unknown);
let msg = get_metadata_value(&e, INNER_ERROR_MSG).unwrap_or(e.to_string());
error::ExternalSnafu { code, msg }
let tonic_code = e.code();
let e: error::Error = e.into();
let code = e.status_code();
let msg = e.to_string();
error::ServerSnafu { code, msg }
.fail::<()>()
.map_err(BoxedError::new)
.context(error::FlightGetSnafu {
tonic_code: e.code(),
tonic_code,
addr: client.addr(),
})
.map_err(|error| {
logging::error!(
"Failed to do Flight get, addr: {}, code: {}, source: {}",
client.addr(),
tonic_code,
error
);
error
})
.unwrap_err()
})?;
@@ -158,10 +258,9 @@ impl Database {
}
}
fn get_metadata_value(e: &tonic::Status, key: &str) -> Option<String> {
e.metadata()
.get(key)
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
#[derive(Default, Debug, Clone)]
pub struct FlightContext {
auth_header: Option<AuthHeader>,
}
#[cfg(test)]
@@ -169,7 +268,8 @@ mod tests {
use std::sync::Arc;
use api::helper::ColumnDataTypeWrapper;
use api::v1::Column;
use api::v1::auth_header::AuthScheme;
use api::v1::{AuthHeader, Basic, Column};
use common_grpc::select::{null_mask, values};
use common_grpc_expr::column_to_vector;
use datatypes::prelude::{Vector, VectorRef};
@@ -179,6 +279,8 @@ mod tests {
UInt32Vector, UInt64Vector, UInt8Vector,
};
use crate::database::FlightContext;
#[test]
fn test_column_to_vector() {
let mut column = create_test_column(Arc::new(BooleanVector::from(vec![true])));
@@ -262,4 +364,26 @@ mod tests {
datatype: wrapper.datatype() as i32,
}
}
#[test]
fn test_flight_ctx() {
let mut ctx = FlightContext::default();
assert!(ctx.auth_header.is_none());
let basic = AuthScheme::Basic(Basic {
username: "u".to_string(),
password: "p".to_string(),
});
ctx.auth_header = Some(AuthHeader {
auth_scheme: Some(basic),
});
assert!(matches!(
ctx.auth_header,
Some(AuthHeader {
auth_scheme: Some(AuthScheme::Basic(_)),
})
))
}
}
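Taken together, the reworked `Database` lets an external caller address a database by name, attach per-request credentials, and issue SQL or PromQL range queries over the same client. A hedged usage sketch built only from the APIs shown above (the credentials, SQL text, and PromQL timestamps are illustrative):

```rust
use api::v1::auth_header::AuthScheme;
use api::v1::Basic;
use client::{Client, Database};

async fn example(client: Client) -> Result<(), Box<dyn std::error::Error>> {
    // Address the target database by name instead of catalog/schema.
    let mut db = Database::new_with_dbname("public", client);

    // Optionally attach basic-auth credentials; they travel in the request header.
    db.set_auth(AuthScheme::Basic(Basic {
        username: "u".to_string(),
        password: "p".to_string(),
    }));

    // Plain SQL goes through the Flight `do_get` path.
    let _output = db.sql("SELECT 1").await?;

    // PromQL range queries use the new `PromRangeQuery` request variant.
    let _output = db
        .prom_range_query("up", "1676738180", "1676738780", "10s")
        .await?;

    Ok(())
}
```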

View File

@@ -13,25 +13,19 @@
// limitations under the License.
use std::any::Any;
use std::str::FromStr;
use common_error::prelude::*;
use tonic::Code;
use snafu::Location;
use tonic::{Code, Status};
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Illegal Flight messages, reason: {}", reason))]
IllegalFlightMessages {
reason: String,
backtrace: Backtrace,
},
IllegalFlightMessages { reason: String, location: Location },
#[snafu(display(
"Failed to do Flight get, addr: {}, code: {}, source: {}",
addr,
tonic_code,
source
))]
#[snafu(display("Failed to do Flight get, code: {}, source: {}", tonic_code, source))]
FlightGet {
addr: String,
tonic_code: Code,
@@ -51,13 +45,10 @@ pub enum Error {
},
#[snafu(display("Illegal GRPC client state: {}", err_msg))]
IllegalGrpcClientState {
err_msg: String,
backtrace: Backtrace,
},
IllegalGrpcClientState { err_msg: String, location: Location },
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, backtrace: Backtrace },
MissingField { field: String, location: Location },
#[snafu(display(
"Failed to create gRPC channel, peer address: {}, source: {}",
@@ -70,9 +61,12 @@ pub enum Error {
source: common_grpc::error::Error,
},
/// Error deserialized from gRPC metadata
// Server error carried in Tonic Status's metadata.
#[snafu(display("{}", msg))]
ExternalError { code: StatusCode, msg: String },
Server { code: StatusCode, msg: String },
#[snafu(display("Illegal Database response: {err_msg}"))]
IllegalDatabaseResponse { err_msg: String },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -82,21 +76,37 @@ impl ErrorExt for Error {
match self {
Error::IllegalFlightMessages { .. }
| Error::ColumnDataType { .. }
| Error::MissingField { .. } => StatusCode::Internal,
| Error::MissingField { .. }
| Error::IllegalDatabaseResponse { .. } => StatusCode::Internal,
Error::Server { code, .. } => *code,
Error::FlightGet { source, .. } => source.status_code(),
Error::CreateChannel { source, .. } | Error::ConvertFlightData { source } => {
source.status_code()
}
Error::IllegalGrpcClientState { .. } => StatusCode::Unexpected,
Error::ExternalError { code, .. } => *code,
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
}
impl From<Status> for Error {
fn from(e: Status) -> Self {
fn get_metadata_value(e: &Status, key: &str) -> Option<String> {
e.metadata()
.get(key)
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
}
let code = get_metadata_value(&e, INNER_ERROR_CODE)
.and_then(|s| StatusCode::from_str(&s).ok())
.unwrap_or(StatusCode::Unknown);
let msg = get_metadata_value(&e, INNER_ERROR_MSG).unwrap_or(e.to_string());
Self::Server { code, msg }
}
}

View File

@@ -18,6 +18,7 @@ mod error;
pub mod load_balance;
pub use api;
pub use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
pub use self::client::Client;
pub use self::database::Database;

View File

@@ -9,28 +9,46 @@ default-run = "greptime"
name = "greptime"
path = "src/bin/greptime.rs"
[features]
mem-prof = ["tikv-jemallocator", "tikv-jemalloc-ctl"]
[dependencies]
anymap = "1.0.0-beta.2"
catalog = { path = "../catalog" }
clap = { version = "3.1", features = ["derive"] }
client = { path = "../client" }
common-base = { path = "../common/base" }
common-error = { path = "../common/error" }
common-query = { path = "../common/query" }
common-recordbatch = { path = "../common/recordbatch" }
common-telemetry = { path = "../common/telemetry", features = [
"deadlock_detection",
] }
datanode = { path = "../datanode" }
either = "1.8"
frontend = { path = "../frontend" }
futures.workspace = true
meta-client = { path = "../meta-client" }
meta-srv = { path = "../meta-srv" }
nu-ansi-term = "0.46"
partition = { path = "../partition" }
query = { path = "../query" }
rustyline = "10.1"
serde.workspace = true
servers = { path = "../servers" }
session = { path = "../session" }
snafu.workspace = true
substrait = { path = "../common/substrait" }
tikv-jemalloc-ctl = { version = "0.5", optional = true }
tikv-jemallocator = { version = "0.5", optional = true }
tokio.workspace = true
toml = "0.5"
[dev-dependencies]
common-test-util = { path = "../common/test-util" }
rexpect = "0.5"
serde.workspace = true
tempdir = "0.3"
[build-dependencies]
build-data = "0.1.3"

View File

@@ -12,11 +12,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![doc = include_str!("../../../../README.md")]
use std::fmt;
use clap::Parser;
use cmd::error::Result;
use cmd::{datanode, frontend, metasrv, standalone};
use cmd::{cli, datanode, frontend, metasrv, standalone};
use common_telemetry::logging::{error, info};
#[derive(Parser)]
@@ -30,9 +32,39 @@ struct Command {
subcmd: SubCommand,
}
pub enum Application {
Datanode(datanode::Instance),
Frontend(frontend::Instance),
Metasrv(metasrv::Instance),
Standalone(standalone::Instance),
Cli(cli::Instance),
}
impl Application {
async fn run(&mut self) -> Result<()> {
match self {
Application::Datanode(instance) => instance.run().await,
Application::Frontend(instance) => instance.run().await,
Application::Metasrv(instance) => instance.run().await,
Application::Standalone(instance) => instance.run().await,
Application::Cli(instance) => instance.run().await,
}
}
async fn stop(&self) -> Result<()> {
match self {
Application::Datanode(instance) => instance.stop().await,
Application::Frontend(instance) => instance.stop().await,
Application::Metasrv(instance) => instance.stop().await,
Application::Standalone(instance) => instance.stop().await,
Application::Cli(instance) => instance.stop().await,
}
}
}
impl Command {
async fn run(self) -> Result<()> {
self.subcmd.run().await
async fn build(self) -> Result<Application> {
self.subcmd.build().await
}
}
@@ -46,15 +78,33 @@ enum SubCommand {
Metasrv(metasrv::Command),
#[clap(name = "standalone")]
Standalone(standalone::Command),
#[clap(name = "cli")]
Cli(cli::Command),
}
impl SubCommand {
async fn run(self) -> Result<()> {
async fn build(self) -> Result<Application> {
match self {
SubCommand::Datanode(cmd) => cmd.run().await,
SubCommand::Frontend(cmd) => cmd.run().await,
SubCommand::Metasrv(cmd) => cmd.run().await,
SubCommand::Standalone(cmd) => cmd.run().await,
SubCommand::Datanode(cmd) => {
let app = cmd.build().await?;
Ok(Application::Datanode(app))
}
SubCommand::Frontend(cmd) => {
let app = cmd.build().await?;
Ok(Application::Frontend(app))
}
SubCommand::Metasrv(cmd) => {
let app = cmd.build().await?;
Ok(Application::Metasrv(app))
}
SubCommand::Standalone(cmd) => {
let app = cmd.build().await?;
Ok(Application::Standalone(app))
}
SubCommand::Cli(cmd) => {
let app = cmd.build().await?;
Ok(Application::Cli(app))
}
}
}
}
@@ -66,6 +116,7 @@ impl fmt::Display for SubCommand {
SubCommand::Frontend(..) => write!(f, "greptime-frontend"),
SubCommand::Metasrv(..) => write!(f, "greptime-metasrv"),
SubCommand::Standalone(..) => write!(f, "greptime-standalone"),
SubCommand::Cli(_) => write!(f, "greptime-cli"),
}
}
}
@@ -83,6 +134,10 @@ fn print_version() -> &'static str {
)
}
#[cfg(feature = "mem-prof")]
#[global_allocator]
static ALLOC: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
#[tokio::main]
async fn main() -> Result<()> {
let cmd = Command::parse();
@@ -96,13 +151,18 @@ async fn main() -> Result<()> {
common_telemetry::init_default_metrics_recorder();
let _guard = common_telemetry::init_global_logging(app_name, log_dir, log_level, false);
let mut app = cmd.build().await?;
tokio::select! {
result = cmd.run() => {
result = app.run() => {
if let Err(err) = result {
error!(err; "Fatal error occurs!");
}
}
_ = tokio::signal::ctrl_c() => {
if let Err(err) = app.stop().await {
error!(err; "Fatal error occurs!");
}
info!("Goodbye!");
}
}

src/cmd/src/cli.rs Normal file (78 lines added)
View File

@@ -0,0 +1,78 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod cmd;
mod helper;
mod repl;
use clap::Parser;
pub use repl::Repl;
use crate::error::Result;
pub struct Instance {
repl: Repl,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
self.repl.run().await
}
pub async fn stop(&self) -> Result<()> {
Ok(())
}
}
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
cmd: SubCommand,
}
impl Command {
pub async fn build(self) -> Result<Instance> {
self.cmd.build().await
}
}
#[derive(Parser)]
enum SubCommand {
Attach(AttachCommand),
}
impl SubCommand {
async fn build(self) -> Result<Instance> {
match self {
SubCommand::Attach(cmd) => cmd.build().await,
}
}
}
#[derive(Debug, Parser)]
pub(crate) struct AttachCommand {
#[clap(long)]
pub(crate) grpc_addr: String,
#[clap(long)]
pub(crate) meta_addr: Option<String>,
#[clap(long, action)]
pub(crate) disable_helper: bool,
}
impl AttachCommand {
async fn build(self) -> Result<Instance> {
let repl = Repl::try_new(&self).await?;
Ok(Instance { repl })
}
}
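As a usage note (not part of the diff), the derived clap parser lets these flags be parsed directly; a small sketch with an assumed address:
use clap::Parser;
// Hypothetical: parse `cli attach` flags into AttachCommand (clap skips argv[0]).
fn parse_attach_args() -> AttachCommand {
    AttachCommand::parse_from([
        "attach",
        "--grpc-addr",
        "127.0.0.1:4001", // assumed GreptimeDB gRPC endpoint
        "--disable-helper",
    ])
}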

src/cmd/src/cli/cmd.rs Normal file (154 lines added)
View File

@@ -0,0 +1,154 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::error::{Error, InvalidReplCommandSnafu, Result};
/// Represents the parsed command from the user (which may span multiple lines)
#[derive(Debug, PartialEq)]
pub(crate) enum ReplCommand {
Help,
UseDatabase { db_name: String },
Sql { sql: String },
Exit,
}
impl TryFrom<&str> for ReplCommand {
type Error = Error;
fn try_from(input: &str) -> Result<Self> {
let input = input.trim();
if input.is_empty() {
return InvalidReplCommandSnafu {
reason: "No command specified".to_string(),
}
.fail();
}
// If line ends with ';', it must be treated as a complete input.
// However, the opposite is not true.
let input_is_completed = input.ends_with(';');
let input = input.strip_suffix(';').map(|x| x.trim()).unwrap_or(input);
let lowercase = input.to_lowercase();
match lowercase.as_str() {
"help" => Ok(Self::Help),
"exit" | "quit" => Ok(Self::Exit),
_ => match input.split_once(' ') {
Some((maybe_use, database)) if maybe_use.to_lowercase() == "use" => {
Ok(Self::UseDatabase {
db_name: database.trim().to_string(),
})
}
// Any valid SQL must contain at least one whitespace character.
Some(_) if input_is_completed => Ok(Self::Sql {
sql: input.to_string(),
}),
_ => InvalidReplCommandSnafu {
reason: format!("unknown command '{input}', maybe input is not completed"),
}
.fail(),
},
}
}
}
impl ReplCommand {
pub fn help() -> &'static str {
r#"
Available commands (case insensitive):
- 'help': print this help
- 'exit' or 'quit': exit the REPL
- 'use <your database name>': switch to another database/schema context
- Any other typed-in text will be treated as SQL.
You can continue on a new line while typing; just remember to end the statement with ';'.
"#
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::error::Error::InvalidReplCommand;
#[test]
fn test_from_str() {
fn test_ok(s: &str, expected: ReplCommand) {
let actual: ReplCommand = s.try_into().unwrap();
assert_eq!(expected, actual, "'{}'", s);
}
fn test_err(s: &str) {
let result: Result<ReplCommand> = s.try_into();
assert!(matches!(result, Err(InvalidReplCommand { .. })))
}
test_err("");
test_err(" ");
test_err("\t");
test_ok("help", ReplCommand::Help);
test_ok("help", ReplCommand::Help);
test_ok(" help", ReplCommand::Help);
test_ok(" help ", ReplCommand::Help);
test_ok(" HELP ", ReplCommand::Help);
test_ok(" Help; ", ReplCommand::Help);
test_ok(" help ; ", ReplCommand::Help);
test_ok("exit", ReplCommand::Exit);
test_ok("exit;", ReplCommand::Exit);
test_ok("exit ;", ReplCommand::Exit);
test_ok("EXIT", ReplCommand::Exit);
test_ok("quit", ReplCommand::Exit);
test_ok("quit;", ReplCommand::Exit);
test_ok("quit ;", ReplCommand::Exit);
test_ok("QUIT", ReplCommand::Exit);
test_ok(
"use Foo",
ReplCommand::UseDatabase {
db_name: "Foo".to_string(),
},
);
test_ok(
" use Foo ; ",
ReplCommand::UseDatabase {
db_name: "Foo".to_string(),
},
);
// ensure that database name is case sensitive
test_ok(
" use FOO ; ",
ReplCommand::UseDatabase {
db_name: "FOO".to_string(),
},
);
// ensure that we aren't messing with capitalization
test_ok(
"SELECT * from foo;",
ReplCommand::Sql {
sql: "SELECT * from foo".to_string(),
},
);
// Input lines (that don't fall into any of the cases above) must end with ';' to be treated as valid SQL.
test_err("insert blah");
test_ok(
"insert blah;",
ReplCommand::Sql {
sql: "insert blah".to_string(),
},
);
}
}
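Outside the tests, a short sketch (assumed, not in the diff) of how a caller might dispatch on the parsed ReplCommand:
// Hypothetical dispatcher over the command variants defined above.
fn describe(line: &str) -> String {
    match ReplCommand::try_from(line) {
        Ok(ReplCommand::Help) => ReplCommand::help().to_string(),
        Ok(ReplCommand::Exit) => "exiting".to_string(),
        Ok(ReplCommand::UseDatabase { db_name }) => format!("switching to database {db_name}"),
        Ok(ReplCommand::Sql { sql }) => format!("would execute SQL: {sql}"),
        Err(e) => format!("invalid input: {e}"),
    }
}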

src/cmd/src/cli/helper.rs Normal file (112 lines added)
View File

@@ -0,0 +1,112 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::borrow::Cow;
use rustyline::completion::Completer;
use rustyline::highlight::{Highlighter, MatchingBracketHighlighter};
use rustyline::hint::{Hinter, HistoryHinter};
use rustyline::validate::{ValidationContext, ValidationResult, Validator};
use crate::cli::cmd::ReplCommand;
pub(crate) struct RustylineHelper {
hinter: HistoryHinter,
highlighter: MatchingBracketHighlighter,
}
impl Default for RustylineHelper {
fn default() -> Self {
Self {
hinter: HistoryHinter {},
highlighter: MatchingBracketHighlighter::default(),
}
}
}
impl rustyline::Helper for RustylineHelper {}
impl Validator for RustylineHelper {
fn validate(&self, ctx: &mut ValidationContext<'_>) -> rustyline::Result<ValidationResult> {
let input = ctx.input();
match ReplCommand::try_from(input) {
Ok(_) => Ok(ValidationResult::Valid(None)),
Err(e) => {
if input.trim_end().ends_with(';') {
// If line ends with ';', it HAS to be a valid command.
Ok(ValidationResult::Invalid(Some(e.to_string())))
} else {
Ok(ValidationResult::Incomplete)
}
}
}
}
}
impl Hinter for RustylineHelper {
type Hint = String;
fn hint(&self, line: &str, pos: usize, ctx: &rustyline::Context<'_>) -> Option<Self::Hint> {
self.hinter.hint(line, pos, ctx)
}
}
impl Highlighter for RustylineHelper {
fn highlight<'l>(&self, line: &'l str, pos: usize) -> Cow<'l, str> {
self.highlighter.highlight(line, pos)
}
fn highlight_prompt<'b, 's: 'b, 'p: 'b>(
&'s self,
prompt: &'p str,
default: bool,
) -> Cow<'b, str> {
self.highlighter.highlight_prompt(prompt, default)
}
fn highlight_hint<'h>(&self, hint: &'h str) -> Cow<'h, str> {
use nu_ansi_term::Style;
Cow::Owned(Style::new().dimmed().paint(hint).to_string())
}
fn highlight_candidate<'c>(
&self,
candidate: &'c str,
completion: rustyline::CompletionType,
) -> Cow<'c, str> {
self.highlighter.highlight_candidate(candidate, completion)
}
fn highlight_char(&self, line: &str, pos: usize) -> bool {
self.highlighter.highlight_char(line, pos)
}
}
impl Completer for RustylineHelper {
type Candidate = String;
fn complete(
&self,
line: &str,
pos: usize,
ctx: &rustyline::Context<'_>,
) -> rustyline::Result<(usize, Vec<Self::Candidate>)> {
// If there is a hint, use it as the auto-completion when the user hits `tab`
if let Some(hint) = self.hinter.hint(line, pos, ctx) {
Ok((pos, vec![hint]))
} else {
Ok((0, vec![]))
}
}
}
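For reference, a hedged sketch of wiring this helper into a rustyline editor, mirroring what Repl::try_new does in repl.rs below (rustyline 10 API assumed):
use rustyline::Editor;
fn editor_with_helper() -> rustyline::Result<Editor<RustylineHelper>> {
    // Editor::new() is fallible in rustyline 10; attaching the helper activates the
    // Validator/Hinter/Highlighter/Completer impls above.
    let mut rl = Editor::new()?;
    rl.set_helper(Some(RustylineHelper::default()));
    Ok(rl)
}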

src/cmd/src/cli/repl.rs Normal file (273 lines added)
View File

@@ -0,0 +1,273 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Instant;
use catalog::remote::MetaKvBackend;
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::prelude::ErrorExt;
use common_query::Output;
use common_recordbatch::RecordBatches;
use common_telemetry::logging;
use either::Either;
use frontend::catalog::FrontendCatalogManager;
use frontend::datanode::DatanodeClients;
use meta_client::client::MetaClientBuilder;
use partition::manager::PartitionRuleManager;
use partition::route::TableRoutes;
use query::datafusion::DatafusionQueryEngine;
use query::logical_optimizer::LogicalOptimizer;
use query::parser::QueryLanguageParser;
use query::plan::LogicalPlan;
use query::query_engine::QueryEngineState;
use query::QueryEngine;
use rustyline::error::ReadlineError;
use rustyline::Editor;
use session::context::QueryContext;
use snafu::{ErrorCompat, ResultExt};
use substrait::{DFLogicalSubstraitConvertor, SubstraitPlan};
use crate::cli::cmd::ReplCommand;
use crate::cli::helper::RustylineHelper;
use crate::cli::AttachCommand;
use crate::error::{
CollectRecordBatchesSnafu, ParseSqlSnafu, PlanStatementSnafu, PrettyPrintRecordBatchesSnafu,
ReadlineSnafu, ReplCreationSnafu, RequestDatabaseSnafu, Result, StartMetaClientSnafu,
SubstraitEncodeLogicalPlanSnafu,
};
/// Captures the state of the repl, gathers commands and executes them one by one
pub struct Repl {
/// Rustyline editor for interacting with user on command line
rl: Editor<RustylineHelper>,
/// Current prompt
prompt: String,
/// Client for interacting with GreptimeDB
database: Database,
query_engine: Option<DatafusionQueryEngine>,
}
#[allow(clippy::print_stdout)]
impl Repl {
fn print_help(&self) {
println!("{}", ReplCommand::help())
}
pub(crate) async fn try_new(cmd: &AttachCommand) -> Result<Self> {
let mut rl = Editor::new().context(ReplCreationSnafu)?;
if !cmd.disable_helper {
rl.set_helper(Some(RustylineHelper::default()));
let history_file = history_file();
if let Err(e) = rl.load_history(&history_file) {
logging::debug!(
"failed to load history file on {}, error: {e}",
history_file.display()
);
}
}
let client = Client::with_urls([&cmd.grpc_addr]);
let database = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
let query_engine = if let Some(meta_addr) = &cmd.meta_addr {
create_query_engine(meta_addr).await.map(Some)?
} else {
None
};
Ok(Self {
rl,
prompt: "> ".to_string(),
database,
query_engine,
})
}
/// Parse the next command
fn next_command(&mut self) -> Result<ReplCommand> {
match self.rl.readline(&self.prompt) {
Ok(ref line) => {
let request = line.trim();
self.rl.add_history_entry(request.to_string());
request.try_into()
}
Err(ReadlineError::Eof) | Err(ReadlineError::Interrupted) => Ok(ReplCommand::Exit),
// Some sort of real underlying error
Err(e) => Err(e).context(ReadlineSnafu),
}
}
/// Read Evaluate Print Loop (interactive command line) for GreptimeDB
///
/// Inspired by (and based on) repl.rs from InfluxDB IOX
pub(crate) async fn run(&mut self) -> Result<()> {
println!("Ready for commands. (Hint: try 'help')");
loop {
match self.next_command()? {
ReplCommand::Help => {
self.print_help();
}
ReplCommand::UseDatabase { db_name } => {
if self.execute_sql(format!("USE {db_name}")).await {
println!("Using {db_name}");
self.database.set_schema(&db_name);
self.prompt = format!("[{db_name}] > ");
}
}
ReplCommand::Sql { sql } => {
self.execute_sql(sql).await;
}
ReplCommand::Exit => {
return Ok(());
}
}
}
}
async fn execute_sql(&self, sql: String) -> bool {
self.do_execute_sql(sql)
.await
.map_err(|e| {
let status_code = e.status_code();
let root_cause = e.iter_chain().last().unwrap();
println!("Error: {}({status_code}), {root_cause}", status_code as u32)
})
.is_ok()
}
async fn do_execute_sql(&self, sql: String) -> Result<()> {
let start = Instant::now();
let output = if let Some(query_engine) = &self.query_engine {
let stmt = QueryLanguageParser::parse_sql(&sql)
.with_context(|_| ParseSqlSnafu { sql: sql.clone() })?;
let query_ctx = Arc::new(QueryContext::with(
self.database.catalog(),
self.database.schema(),
));
let plan = query_engine
.planner()
.plan(stmt, query_ctx)
.await
.context(PlanStatementSnafu)?;
let LogicalPlan::DfPlan(plan) =
query_engine.optimize(&plan).context(PlanStatementSnafu)?;
let plan = DFLogicalSubstraitConvertor {}
.encode(plan)
.context(SubstraitEncodeLogicalPlanSnafu)?;
self.database.logical_plan(plan.to_vec()).await
} else {
self.database.sql(&sql).await
}
.context(RequestDatabaseSnafu { sql: &sql })?;
let either = match output {
Output::Stream(s) => {
let x = RecordBatches::try_collect(s)
.await
.context(CollectRecordBatchesSnafu)?;
Either::Left(x)
}
Output::RecordBatches(x) => Either::Left(x),
Output::AffectedRows(rows) => Either::Right(rows),
};
let end = Instant::now();
match either {
Either::Left(recordbatches) => {
let total_rows: usize = recordbatches.iter().map(|x| x.num_rows()).sum();
if total_rows > 0 {
println!(
"{}",
recordbatches
.pretty_print()
.context(PrettyPrintRecordBatchesSnafu)?
);
}
println!("Total Rows: {total_rows}")
}
Either::Right(rows) => println!("Affected Rows: {rows}"),
};
println!("Cost {} ms", (end - start).as_millis());
Ok(())
}
}
impl Drop for Repl {
fn drop(&mut self) {
if self.rl.helper().is_some() {
let history_file = history_file();
if let Err(e) = self.rl.save_history(&history_file) {
logging::debug!(
"failed to save history file on {}, error: {e}",
history_file.display()
);
}
}
}
}
/// Returns the location of the history file (defaults to `$HOME/.greptimedb_cli_history`)
fn history_file() -> PathBuf {
let mut buf = match std::env::var("HOME") {
Ok(home) => PathBuf::from(home),
Err(_) => PathBuf::new(),
};
buf.push(".greptimedb_cli_history");
buf
}
async fn create_query_engine(meta_addr: &str) -> Result<DatafusionQueryEngine> {
let mut meta_client = MetaClientBuilder::default().enable_store().build();
meta_client
.start([meta_addr])
.await
.context(StartMetaClientSnafu)?;
let meta_client = Arc::new(meta_client);
let backend = Arc::new(MetaKvBackend {
client: meta_client.clone(),
});
let table_routes = Arc::new(TableRoutes::new(meta_client));
let partition_manager = Arc::new(PartitionRuleManager::new(table_routes));
let datanode_clients = Arc::new(DatanodeClients::default());
let catalog_list = Arc::new(FrontendCatalogManager::new(
backend,
partition_manager,
datanode_clients,
));
let state = Arc::new(QueryEngineState::new(catalog_list, Default::default()));
Ok(DatafusionQueryEngine::new(state))
}
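A hedged sketch (addresses assumed) of constructing the REPL directly from an AttachCommand elsewhere in the cmd crate; with meta_addr set to None, statements go through Database::sql instead of the substrait logical-plan path:
// Hypothetical construction; field values are placeholders.
async fn attach_repl() -> Result<Repl> {
    let cmd = AttachCommand {
        grpc_addr: "127.0.0.1:4001".to_string(), // assumed gRPC endpoint
        meta_addr: None,
        disable_helper: false,
    };
    Repl::try_new(&cmd).await
}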

View File

@@ -12,16 +12,37 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::time::Duration;
use clap::Parser;
use common_telemetry::logging;
use datanode::datanode::{Datanode, DatanodeOptions, FileConfig, ObjectStoreConfig};
use meta_client::MetaClientOpts;
use datanode::datanode::{
Datanode, DatanodeOptions, FileConfig, ObjectStoreConfig, ProcedureConfig,
};
use meta_client::MetaClientOptions;
use servers::Mode;
use snafu::ResultExt;
use crate::error::{Error, MissingConfigSnafu, Result, StartDatanodeSnafu};
use crate::error::{Error, MissingConfigSnafu, Result, ShutdownDatanodeSnafu, StartDatanodeSnafu};
use crate::toml_loader;
pub struct Instance {
datanode: Datanode,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
self.datanode.start().await.context(StartDatanodeSnafu)
}
pub async fn stop(&self) -> Result<()> {
self.datanode
.shutdown()
.await
.context(ShutdownDatanodeSnafu)
}
}
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
@@ -29,8 +50,8 @@ pub struct Command {
}
impl Command {
pub async fn run(self) -> Result<()> {
self.subcmd.run().await
pub async fn build(self) -> Result<Instance> {
self.subcmd.build().await
}
}
@@ -40,9 +61,9 @@ enum SubCommand {
}
impl SubCommand {
async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.run().await,
SubCommand::Start(cmd) => cmd.build().await,
}
}
}
@@ -65,22 +86,25 @@ struct StartCommand {
data_dir: Option<String>,
#[clap(long)]
wal_dir: Option<String>,
#[clap(long)]
procedure_dir: Option<String>,
#[clap(long)]
http_addr: Option<String>,
#[clap(long)]
http_timeout: Option<u64>,
}
impl StartCommand {
async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
logging::info!("Datanode start command: {:#?}", self);
let opts: DatanodeOptions = self.try_into()?;
logging::info!("Datanode options: {:#?}", opts);
Datanode::new(opts)
.await
.context(StartDatanodeSnafu)?
.start()
.await
.context(StartDatanodeSnafu)
let datanode = Datanode::new(opts).await.context(StartDatanodeSnafu)?;
Ok(Instance { datanode })
}
}
@@ -110,8 +134,8 @@ impl TryFrom<StartCommand> for DatanodeOptions {
}
if let Some(meta_addr) = cmd.metasrv_addr {
opts.meta_client_opts
.get_or_insert_with(MetaClientOpts::default)
opts.meta_client_options
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs = meta_addr
.split(',')
.map(&str::trim)
@@ -128,12 +152,25 @@ impl TryFrom<StartCommand> for DatanodeOptions {
}
if let Some(data_dir) = cmd.data_dir {
opts.storage = ObjectStoreConfig::File(FileConfig { data_dir });
opts.storage.store = ObjectStoreConfig::File(FileConfig { data_dir });
}
if let Some(wal_dir) = cmd.wal_dir {
opts.wal.dir = wal_dir;
}
if let Some(procedure_dir) = cmd.procedure_dir {
opts.procedure = Some(ProcedureConfig::from_file_path(procedure_dir));
}
if let Some(http_addr) = cmd.http_addr {
opts.http_opts.addr = http_addr
}
if let Some(http_timeout) = cmd.http_timeout {
opts.http_opts.timeout = Duration::from_secs(http_timeout)
}
// Disable dashboard in datanode.
opts.http_opts.disable_dashboard = true;
Ok(opts)
}
}
@@ -141,42 +178,89 @@ impl TryFrom<StartCommand> for DatanodeOptions {
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
use std::io::Write;
use std::time::Duration;
use datanode::datanode::{CompactionConfig, ObjectStoreConfig};
use common_base::readable_size::ReadableSize;
use common_test_util::temp_dir::create_named_temp_file;
use datanode::datanode::{CompactionConfig, ObjectStoreConfig, RegionManifestConfig};
use servers::Mode;
use super::*;
#[test]
fn test_read_from_config_file() {
let mut file = create_named_temp_file();
let toml_str = r#"
mode = "distributed"
enable_memory_catalog = false
node_id = 42
rpc_addr = "127.0.0.1:3001"
rpc_hostname = "127.0.0.1"
rpc_runtime_size = 8
mysql_addr = "127.0.0.1:4406"
mysql_runtime_size = 2
[meta_client_options]
metasrv_addrs = ["127.0.0.1:3002"]
timeout_millis = 3000
connect_timeout_millis = 5000
tcp_nodelay = true
[wal]
dir = "/tmp/greptimedb/wal"
file_size = "1GB"
purge_threshold = "50GB"
purge_interval = "10m"
read_batch_size = 128
sync_write = false
[storage]
type = "File"
data_dir = "/tmp/greptimedb/data/"
[storage.compaction]
max_inflight_tasks = 3
max_files_in_level0 = 7
max_purge_tasks = 32
[storage.manifest]
checkpoint_margin = 9
gc_duration = '7s'
checkpoint_on_startup = true
"#;
write!(file, "{}", toml_str).unwrap();
let cmd = StartCommand {
config_file: Some(format!(
"{}/../../config/datanode.example.toml",
std::env::current_dir().unwrap().as_path().to_str().unwrap()
)),
config_file: Some(file.path().to_str().unwrap().to_string()),
..Default::default()
};
let options: DatanodeOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3001".to_string(), options.rpc_addr);
assert_eq!("/tmp/greptimedb/wal".to_string(), options.wal.dir);
assert_eq!("127.0.0.1:4406".to_string(), options.mysql_addr);
assert_eq!(4, options.mysql_runtime_size);
let MetaClientOpts {
assert_eq!(2, options.mysql_runtime_size);
assert_eq!(Some(42), options.node_id);
assert_eq!(Duration::from_secs(600), options.wal.purge_interval);
assert_eq!(1024 * 1024 * 1024, options.wal.file_size.0);
assert_eq!(1024 * 1024 * 1024 * 50, options.wal.purge_threshold.0);
assert!(!options.wal.sync_write);
let MetaClientOptions {
metasrv_addrs: metasrv_addr,
timeout_millis,
connect_timeout_millis,
tcp_nodelay,
} = options.meta_client_opts.unwrap();
} = options.meta_client_options.unwrap();
assert_eq!(vec!["127.0.0.1:3002".to_string()], metasrv_addr);
assert_eq!(5000, connect_timeout_millis);
assert_eq!(3000, timeout_millis);
assert!(!tcp_nodelay);
assert!(tcp_nodelay);
match options.storage {
ObjectStoreConfig::File(FileConfig { data_dir }) => {
assert_eq!("/tmp/greptimedb/data/".to_string(), data_dir)
match &options.storage.store {
ObjectStoreConfig::File(FileConfig { data_dir, .. }) => {
assert_eq!("/tmp/greptimedb/data/", data_dir)
}
ObjectStoreConfig::S3 { .. } => unreachable!(),
ObjectStoreConfig::Oss { .. } => unreachable!(),
@@ -184,11 +268,20 @@ mod tests {
assert_eq!(
CompactionConfig {
max_inflight_tasks: 4,
max_files_in_level0: 16,
max_inflight_tasks: 3,
max_files_in_level0: 7,
max_purge_tasks: 32,
sst_write_buffer_size: ReadableSize::mb(8),
},
options.compaction
options.storage.compaction,
);
assert_eq!(
RegionManifestConfig {
checkpoint_margin: Some(9),
gc_duration: Some(Duration::from_secs(7)),
checkpoint_on_startup: true,
},
options.storage.manifest,
);
}
@@ -223,32 +316,4 @@ mod tests {
})
.unwrap();
}
#[test]
fn test_merge_config() {
let dn_opts = DatanodeOptions::try_from(StartCommand {
config_file: Some(format!(
"{}/../../config/datanode.example.toml",
std::env::current_dir().unwrap().as_path().to_str().unwrap()
)),
..Default::default()
})
.unwrap();
assert_eq!("/tmp/greptimedb/wal", dn_opts.wal.dir);
assert_eq!(Duration::from_secs(600), dn_opts.wal.purge_interval);
assert_eq!(1024 * 1024 * 1024, dn_opts.wal.file_size.0);
assert_eq!(1024 * 1024 * 1024 * 50, dn_opts.wal.purge_threshold.0);
assert!(!dn_opts.wal.sync_write);
assert_eq!(Some(42), dn_opts.node_id);
let MetaClientOpts {
metasrv_addrs: metasrv_addr,
timeout_millis,
connect_timeout_millis,
tcp_nodelay,
} = dn_opts.meta_client_opts.unwrap();
assert_eq!(vec!["127.0.0.1:3002".to_string()], metasrv_addr);
assert_eq!(3000, timeout_millis);
assert_eq!(5000, connect_timeout_millis);
assert!(!tcp_nodelay);
}
}

View File

@@ -15,6 +15,8 @@
use std::any::Any;
use common_error::prelude::*;
use rustyline::error::ReadlineError;
use snafu::Location;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -25,36 +27,60 @@ pub enum Error {
source: datanode::error::Error,
},
#[snafu(display("Failed to shutdown datanode, source: {}", source))]
ShutdownDatanode {
#[snafu(backtrace)]
source: datanode::error::Error,
},
#[snafu(display("Failed to start frontend, source: {}", source))]
StartFrontend {
#[snafu(backtrace)]
source: frontend::error::Error,
},
#[snafu(display("Failed to shutdown frontend, source: {}", source))]
ShutdownFrontend {
#[snafu(backtrace)]
source: frontend::error::Error,
},
#[snafu(display("Failed to build meta server, source: {}", source))]
BuildMetaServer {
#[snafu(backtrace)]
source: meta_srv::error::Error,
},
#[snafu(display("Failed to start meta server, source: {}", source))]
StartMetaServer {
#[snafu(backtrace)]
source: meta_srv::error::Error,
},
#[snafu(display("Failed to shutdown meta server, source: {}", source))]
ShutdownMetaServer {
#[snafu(backtrace)]
source: meta_srv::error::Error,
},
#[snafu(display("Failed to read config file: {}, source: {}", path, source))]
ReadConfig {
path: String,
source: std::io::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to parse config, source: {}", source))]
ParseConfig {
source: toml::de::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Missing config, msg: {}", msg))]
MissingConfig { msg: String, backtrace: Backtrace },
MissingConfig { msg: String, location: Location },
#[snafu(display("Illegal config: {}", msg))]
IllegalConfig { msg: String, backtrace: Backtrace },
IllegalConfig { msg: String, location: Location },
#[snafu(display("Illegal auth config: {}", source))]
IllegalAuthConfig {
@@ -68,6 +94,65 @@ pub enum Error {
#[snafu(backtrace)]
source: meta_srv::error::Error,
},
#[snafu(display("Invalid REPL command: {reason}"))]
InvalidReplCommand { reason: String },
#[snafu(display("Cannot create REPL: {}", source))]
ReplCreation {
source: ReadlineError,
location: Location,
},
#[snafu(display("Error reading command: {}", source))]
Readline {
source: ReadlineError,
location: Location,
},
#[snafu(display("Failed to request database, sql: {sql}, source: {source}"))]
RequestDatabase {
sql: String,
#[snafu(backtrace)]
source: client::Error,
},
#[snafu(display("Failed to collect RecordBatches, source: {source}"))]
CollectRecordBatches {
#[snafu(backtrace)]
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to pretty print Recordbatches, source: {source}"))]
PrettyPrintRecordBatches {
#[snafu(backtrace)]
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to start Meta client, source: {}", source))]
StartMetaClient {
#[snafu(backtrace)]
source: meta_client::error::Error,
},
#[snafu(display("Failed to parse SQL: {}, source: {}", sql, source))]
ParseSql {
sql: String,
#[snafu(backtrace)]
source: query::error::Error,
},
#[snafu(display("Failed to plan statement, source: {}", source))]
PlanStatement {
#[snafu(backtrace)]
source: query::error::Error,
},
#[snafu(display("Failed to encode logical plan in substrait, source: {}", source))]
SubstraitEncodeLogicalPlan {
#[snafu(backtrace)]
source: substrait::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -77,88 +162,33 @@ impl ErrorExt for Error {
match self {
Error::StartDatanode { source } => source.status_code(),
Error::StartFrontend { source } => source.status_code(),
Error::ShutdownDatanode { source } => source.status_code(),
Error::ShutdownFrontend { source } => source.status_code(),
Error::StartMetaServer { source } => source.status_code(),
Error::ShutdownMetaServer { source } => source.status_code(),
Error::BuildMetaServer { source } => source.status_code(),
Error::UnsupportedSelectorType { source, .. } => source.status_code(),
Error::ReadConfig { .. } | Error::ParseConfig { .. } | Error::MissingConfig { .. } => {
StatusCode::InvalidArguments
}
Error::IllegalConfig { .. } => StatusCode::InvalidArguments,
Error::IllegalConfig { .. } | Error::InvalidReplCommand { .. } => {
StatusCode::InvalidArguments
}
Error::IllegalAuthConfig { .. } => StatusCode::InvalidArguments,
Error::ReplCreation { .. } | Error::Readline { .. } => StatusCode::Internal,
Error::RequestDatabase { source, .. } => source.status_code(),
Error::CollectRecordBatches { source } | Error::PrettyPrintRecordBatches { source } => {
source.status_code()
}
Error::StartMetaClient { source } => source.status_code(),
Error::ParseSql { source, .. } | Error::PlanStatement { source } => {
source.status_code()
}
Error::SubstraitEncodeLogicalPlan { source } => source.status_code(),
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test)]
mod tests {
use super::*;
type StdResult<E> = std::result::Result<(), E>;
#[test]
fn test_start_node_error() {
fn throw_datanode_error() -> StdResult<datanode::error::Error> {
datanode::error::MissingNodeIdSnafu {}.fail()
}
let e = throw_datanode_error()
.context(StartDatanodeSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
#[test]
fn test_start_frontend_error() {
fn throw_frontend_error() -> StdResult<frontend::error::Error> {
frontend::error::InvalidSqlSnafu { err_msg: "failed" }.fail()
}
let e = throw_frontend_error()
.context(StartFrontendSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
#[test]
fn test_start_metasrv_error() {
fn throw_metasrv_error() -> StdResult<meta_srv::error::Error> {
meta_srv::error::StreamNoneSnafu {}.fail()
}
let e = throw_metasrv_error()
.context(StartMetaServerSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::Internal);
}
#[test]
fn test_read_config_error() {
fn throw_read_config_error() -> StdResult<std::io::Error> {
Err(std::io::ErrorKind::NotFound.into())
}
let e = throw_read_config_error()
.context(ReadConfigSnafu { path: "test" })
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
}

View File

@@ -16,14 +16,15 @@ use std::sync::Arc;
use clap::Parser;
use common_base::Plugins;
use frontend::frontend::{Frontend, FrontendOptions};
use frontend::frontend::FrontendOptions;
use frontend::grpc::GrpcOptions;
use frontend::influxdb::InfluxdbOptions;
use frontend::instance::Instance;
use frontend::instance::{FrontendInstance, Instance as FeInstance};
use frontend::mysql::MysqlOptions;
use frontend::opentsdb::OpentsdbOptions;
use frontend::postgres::PostgresOptions;
use meta_client::MetaClientOpts;
use frontend::prom::PromOptions;
use meta_client::MetaClientOptions;
use servers::auth::UserProviderRef;
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
@@ -33,6 +34,26 @@ use snafu::ResultExt;
use crate::error::{self, IllegalAuthConfigSnafu, Result};
use crate::toml_loader;
pub struct Instance {
frontend: FeInstance,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
self.frontend
.start()
.await
.context(error::StartFrontendSnafu)
}
pub async fn stop(&self) -> Result<()> {
self.frontend
.shutdown()
.await
.context(error::ShutdownFrontendSnafu)
}
}
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
@@ -40,8 +61,8 @@ pub struct Command {
}
impl Command {
pub async fn run(self) -> Result<()> {
self.subcmd.run().await
pub async fn build(self) -> Result<Instance> {
self.subcmd.build().await
}
}
@@ -51,9 +72,9 @@ enum SubCommand {
}
impl SubCommand {
async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.run().await,
SubCommand::Start(cmd) => cmd.build().await,
}
}
}
@@ -67,6 +88,8 @@ pub struct StartCommand {
#[clap(long)]
mysql_addr: Option<String>,
#[clap(long)]
prom_addr: Option<String>,
#[clap(long)]
postgres_addr: Option<String>,
#[clap(long)]
opentsdb_addr: Option<String>,
@@ -84,19 +107,25 @@ pub struct StartCommand {
tls_key_path: Option<String>,
#[clap(long)]
user_provider: Option<String>,
#[clap(long)]
disable_dashboard: bool,
}
impl StartCommand {
async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
let opts: FrontendOptions = self.try_into()?;
let instance = Instance::try_new_distributed(&opts, plugins.clone())
let mut instance = FeInstance::try_new_distributed(&opts, plugins.clone())
.await
.context(error::StartFrontendSnafu)?;
let mut frontend = Frontend::new(opts, instance, plugins);
frontend.start().await.context(error::StartFrontendSnafu)
instance
.build_servers(&opts, plugins)
.await
.context(error::StartFrontendSnafu)?;
Ok(Instance { frontend: instance })
}
}
@@ -122,18 +151,24 @@ impl TryFrom<StartCommand> for FrontendOptions {
let tls_option = TlsOption::new(cmd.tls_mode, cmd.tls_cert_path, cmd.tls_key_path);
let mut http_options = HttpOptions {
disable_dashboard: cmd.disable_dashboard,
..Default::default()
};
if let Some(addr) = cmd.http_addr {
opts.http_options = Some(HttpOptions {
addr,
..Default::default()
});
http_options.addr = addr;
}
opts.http_options = Some(http_options);
if let Some(addr) = cmd.grpc_addr {
opts.grpc_options = Some(GrpcOptions {
addr,
..Default::default()
});
}
if let Some(addr) = cmd.mysql_addr {
opts.mysql_options = Some(MysqlOptions {
addr,
@@ -141,6 +176,9 @@ impl TryFrom<StartCommand> for FrontendOptions {
..Default::default()
});
}
if let Some(addr) = cmd.prom_addr {
opts.prom_options = Some(PromOptions { addr });
}
if let Some(addr) = cmd.postgres_addr {
opts.postgres_options = Some(PostgresOptions {
addr,
@@ -158,8 +196,8 @@ impl TryFrom<StartCommand> for FrontendOptions {
opts.influxdb_options = Some(InfluxdbOptions { enable });
}
if let Some(metasrv_addr) = cmd.metasrv_addr {
opts.meta_client_opts
.get_or_insert_with(MetaClientOpts::default)
opts.meta_client_options
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs = metasrv_addr
.split(',')
.map(&str::trim)
@@ -173,8 +211,10 @@ impl TryFrom<StartCommand> for FrontendOptions {
#[cfg(test)]
mod tests {
use std::io::Write;
use std::time::Duration;
use common_test_util::temp_dir::create_named_temp_file;
use servers::auth::{Identity, Password, UserProviderRef};
use super::*;
@@ -184,6 +224,7 @@ mod tests {
let command = StartCommand {
http_addr: Some("127.0.0.1:1234".to_string()),
grpc_addr: None,
prom_addr: Some("127.0.0.1:4444".to_string()),
mysql_addr: Some("127.0.0.1:5678".to_string()),
postgres_addr: Some("127.0.0.1:5432".to_string()),
opentsdb_addr: Some("127.0.0.1:4321".to_string()),
@@ -194,6 +235,7 @@ mod tests {
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: false,
};
let opts: FrontendOptions = command.try_into().unwrap();
@@ -207,6 +249,7 @@ mod tests {
opts.opentsdb_options.as_ref().unwrap().addr,
"127.0.0.1:4321"
);
assert_eq!(opts.prom_options.as_ref().unwrap().addr, "127.0.0.1:4444");
let default_opts = FrontendOptions::default();
assert_eq!(
@@ -231,22 +274,31 @@ mod tests {
#[test]
fn test_read_from_config_file() {
let mut file = create_named_temp_file();
let toml_str = r#"
mode = "distributed"
[http_options]
addr = "127.0.0.1:4000"
timeout = "30s"
"#;
write!(file, "{}", toml_str).unwrap();
let command = StartCommand {
http_addr: None,
grpc_addr: None,
mysql_addr: None,
prom_addr: None,
postgres_addr: None,
opentsdb_addr: None,
influxdb_enable: None,
config_file: Some(format!(
"{}/../../config/frontend.example.toml",
std::env::current_dir().unwrap().as_path().to_str().unwrap()
)),
config_file: Some(file.path().to_str().unwrap().to_string()),
metasrv_addr: None,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: false,
};
let fe_opts = FrontendOptions::try_from(command).unwrap();
@@ -267,6 +319,7 @@ mod tests {
http_addr: None,
grpc_addr: None,
mysql_addr: None,
prom_addr: None,
postgres_addr: None,
opentsdb_addr: None,
influxdb_enable: None,
@@ -276,6 +329,7 @@ mod tests {
tls_cert_path: None,
tls_key_path: None,
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
disable_dashboard: false,
};
let plugins = load_frontend_plugins(&command.user_provider);

View File

@@ -14,6 +14,7 @@
#![feature(assert_matches)]
pub mod cli;
pub mod datanode;
pub mod error;
pub mod frontend;

View File

@@ -12,15 +12,37 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::time::Duration;
use clap::Parser;
use common_telemetry::{info, logging, warn};
use meta_srv::bootstrap;
use meta_srv::bootstrap::MetaSrvInstance;
use meta_srv::metasrv::MetaSrvOptions;
use snafu::ResultExt;
use crate::error::{Error, Result};
use crate::{error, toml_loader};
pub struct Instance {
instance: MetaSrvInstance,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
self.instance
.start()
.await
.context(error::StartMetaServerSnafu)
}
pub async fn stop(&self) -> Result<()> {
self.instance
.shutdown()
.await
.context(error::ShutdownMetaServerSnafu)
}
}
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
@@ -28,8 +50,8 @@ pub struct Command {
}
impl Command {
pub async fn run(self) -> Result<()> {
self.subcmd.run().await
pub async fn build(self) -> Result<Instance> {
self.subcmd.build().await
}
}
@@ -39,9 +61,9 @@ enum SubCommand {
}
impl SubCommand {
async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.run().await,
SubCommand::Start(cmd) => cmd.build().await,
}
}
}
@@ -60,19 +82,24 @@ struct StartCommand {
selector: Option<String>,
#[clap(long)]
use_memory_store: bool,
#[clap(long)]
http_addr: Option<String>,
#[clap(long)]
http_timeout: Option<u64>,
}
impl StartCommand {
async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
logging::info!("MetaSrv start command: {:#?}", self);
let opts: MetaSrvOptions = self.try_into()?;
logging::info!("MetaSrv options: {:#?}", opts);
bootstrap::bootstrap_meta_srv(opts)
let instance = MetaSrvInstance::new(opts)
.await
.context(error::StartMetaServerSnafu)
.context(error::BuildMetaServerSnafu)?;
Ok(Instance { instance })
}
}
@@ -107,12 +134,25 @@ impl TryFrom<StartCommand> for MetaSrvOptions {
opts.use_memory_store = true;
}
if let Some(http_addr) = cmd.http_addr {
opts.http_opts.addr = http_addr;
}
if let Some(http_timeout) = cmd.http_timeout {
opts.http_opts.timeout = Duration::from_secs(http_timeout);
}
// Disable dashboard in metasrv.
opts.http_opts.disable_dashboard = true;
Ok(opts)
}
}
#[cfg(test)]
mod tests {
use std::io::Write;
use common_test_util::temp_dir::create_named_temp_file;
use meta_srv::selector::SelectorType;
use super::*;
@@ -126,6 +166,8 @@ mod tests {
config_file: None,
selector: Some("LoadBased".to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
@@ -136,16 +178,26 @@ mod tests {
#[test]
fn test_read_from_config_file() {
let mut file = create_named_temp_file();
let toml_str = r#"
bind_addr = "127.0.0.1:3002"
server_addr = "127.0.0.1:3002"
store_addr = "127.0.0.1:2379"
datanode_lease_secs = 15
selector = "LeaseBased"
use_memory_store = false
"#;
write!(file, "{}", toml_str).unwrap();
let cmd = StartCommand {
bind_addr: None,
server_addr: None,
store_addr: None,
selector: None,
config_file: Some(format!(
"{}/../../config/metasrv.example.toml",
std::env::current_dir().unwrap().as_path().to_str().unwrap()
)),
config_file: Some(file.path().to_str().unwrap().to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);

View File

@@ -17,24 +17,27 @@ use std::sync::Arc;
use clap::Parser;
use common_base::Plugins;
use common_telemetry::info;
use datanode::datanode::{Datanode, DatanodeOptions, ObjectStoreConfig, WalConfig};
use datanode::datanode::{Datanode, DatanodeOptions, ProcedureConfig, StorageConfig, WalConfig};
use datanode::instance::InstanceRef;
use frontend::frontend::{Frontend, FrontendOptions};
use frontend::frontend::FrontendOptions;
use frontend::grpc::GrpcOptions;
use frontend::influxdb::InfluxdbOptions;
use frontend::instance::Instance as FeInstance;
use frontend::instance::{FrontendInstance, Instance as FeInstance};
use frontend::mysql::MysqlOptions;
use frontend::opentsdb::OpentsdbOptions;
use frontend::postgres::PostgresOptions;
use frontend::prom::PromOptions;
use frontend::prometheus::PrometheusOptions;
use frontend::promql::PromqlOptions;
use serde::{Deserialize, Serialize};
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
use servers::Mode;
use snafu::ResultExt;
use crate::error::{Error, IllegalConfigSnafu, Result, StartDatanodeSnafu, StartFrontendSnafu};
use crate::error::{
Error, IllegalConfigSnafu, Result, ShutdownDatanodeSnafu, ShutdownFrontendSnafu,
StartDatanodeSnafu, StartFrontendSnafu,
};
use crate::frontend::load_frontend_plugins;
use crate::toml_loader;
@@ -45,8 +48,8 @@ pub struct Command {
}
impl Command {
pub async fn run(self) -> Result<()> {
self.subcmd.run().await
pub async fn build(self) -> Result<Instance> {
self.subcmd.build().await
}
}
@@ -56,9 +59,9 @@ enum SubCommand {
}
impl SubCommand {
async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.run().await,
SubCommand::Start(cmd) => cmd.build().await,
}
}
}
@@ -66,6 +69,8 @@ impl SubCommand {
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct StandaloneOptions {
pub mode: Mode,
pub enable_memory_catalog: bool,
pub http_options: Option<HttpOptions>,
pub grpc_options: Option<GrpcOptions>,
pub mysql_options: Option<MysqlOptions>,
@@ -73,16 +78,17 @@ pub struct StandaloneOptions {
pub opentsdb_options: Option<OpentsdbOptions>,
pub influxdb_options: Option<InfluxdbOptions>,
pub prometheus_options: Option<PrometheusOptions>,
pub promql_options: Option<PromqlOptions>,
pub mode: Mode,
pub prom_options: Option<PromOptions>,
pub wal: WalConfig,
pub storage: ObjectStoreConfig,
pub enable_memory_catalog: bool,
pub storage: StorageConfig,
pub procedure: Option<ProcedureConfig>,
}
impl Default for StandaloneOptions {
fn default() -> Self {
Self {
mode: Mode::Standalone,
enable_memory_catalog: false,
http_options: Some(HttpOptions::default()),
grpc_options: Some(GrpcOptions::default()),
mysql_options: Some(MysqlOptions::default()),
@@ -90,11 +96,10 @@ impl Default for StandaloneOptions {
opentsdb_options: Some(OpentsdbOptions::default()),
influxdb_options: Some(InfluxdbOptions::default()),
prometheus_options: Some(PrometheusOptions::default()),
promql_options: Some(PromqlOptions::default()),
mode: Mode::Standalone,
prom_options: Some(PromOptions::default()),
wal: WalConfig::default(),
storage: ObjectStoreConfig::default(),
enable_memory_catalog: false,
storage: StorageConfig::default(),
procedure: None,
}
}
}
@@ -102,6 +107,7 @@ impl Default for StandaloneOptions {
impl StandaloneOptions {
fn frontend_options(self) -> FrontendOptions {
FrontendOptions {
mode: self.mode,
http_options: self.http_options,
grpc_options: self.grpc_options,
mysql_options: self.mysql_options,
@@ -109,22 +115,56 @@ impl StandaloneOptions {
opentsdb_options: self.opentsdb_options,
influxdb_options: self.influxdb_options,
prometheus_options: self.prometheus_options,
promql_options: self.promql_options,
mode: self.mode,
meta_client_opts: None,
prom_options: self.prom_options,
meta_client_options: None,
}
}
fn datanode_options(self) -> DatanodeOptions {
DatanodeOptions {
enable_memory_catalog: self.enable_memory_catalog,
wal: self.wal,
storage: self.storage,
enable_memory_catalog: self.enable_memory_catalog,
procedure: self.procedure,
..Default::default()
}
}
}
pub struct Instance {
datanode: Datanode,
frontend: FeInstance,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
// Start the datanode instance before starting services, to avoid requests coming in before internal components are started.
self.datanode
.start_instance()
.await
.context(StartDatanodeSnafu)?;
info!("Datanode instance started");
self.frontend.start().await.context(StartFrontendSnafu)?;
Ok(())
}
pub async fn stop(&self) -> Result<()> {
self.frontend
.shutdown()
.await
.context(ShutdownFrontendSnafu)?;
self.datanode
.shutdown_instance()
.await
.context(ShutdownDatanodeSnafu)?;
info!("Datanode instance stopped.");
Ok(())
}
}
#[derive(Debug, Parser)]
struct StartCommand {
#[clap(long)]
@@ -134,6 +174,8 @@ struct StartCommand {
#[clap(long)]
mysql_addr: Option<String>,
#[clap(long)]
prom_addr: Option<String>,
#[clap(long)]
postgres_addr: Option<String>,
#[clap(long)]
opentsdb_addr: Option<String>,
@@ -154,7 +196,7 @@ struct StartCommand {
}
impl StartCommand {
async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
let enable_memory_catalog = self.enable_memory_catalog;
let config_file = self.config_file.clone();
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
@@ -174,33 +216,31 @@ impl StartCommand {
fe_opts, dn_opts
);
let mut datanode = Datanode::new(dn_opts.clone())
let datanode = Datanode::new(dn_opts.clone())
.await
.context(StartDatanodeSnafu)?;
let mut frontend = build_frontend(fe_opts, plugins, datanode.get_instance()).await?;
// Start datanode instance before starting services, to avoid requests come in before internal components are started.
datanode
.start_instance()
let mut frontend = build_frontend(plugins.clone(), datanode.get_instance()).await?;
frontend
.build_servers(&fe_opts, plugins)
.await
.context(StartDatanodeSnafu)?;
info!("Datanode instance started");
.context(StartFrontendSnafu)?;
frontend.start().await.context(StartFrontendSnafu)?;
Ok(())
Ok(Instance { datanode, frontend })
}
}
/// Build frontend instance in standalone mode
async fn build_frontend(
fe_opts: FrontendOptions,
plugins: Arc<Plugins>,
datanode_instance: InstanceRef,
) -> Result<Frontend<FeInstance>> {
let mut frontend_instance = FeInstance::new_standalone(datanode_instance.clone());
frontend_instance.set_script_handler(datanode_instance);
) -> Result<FeInstance> {
let mut frontend_instance = FeInstance::try_new_standalone(datanode_instance.clone())
.await
.context(StartFrontendSnafu)?;
frontend_instance.set_plugins(plugins.clone());
Ok(Frontend::new(fe_opts, frontend_instance, plugins))
Ok(frontend_instance)
}
impl TryFrom<StartCommand> for FrontendOptions {
@@ -246,6 +286,11 @@ impl TryFrom<StartCommand> for FrontendOptions {
..Default::default()
})
}
if let Some(addr) = cmd.prom_addr {
opts.prom_options = Some(PromOptions { addr })
}
if let Some(addr) = cmd.postgres_addr {
opts.postgres_options = Some(PostgresOptions {
addr,
@@ -294,6 +339,7 @@ mod tests {
http_addr: None,
rpc_addr: None,
mysql_addr: None,
prom_addr: None,
postgres_addr: None,
opentsdb_addr: None,
config_file: Some(format!(
@@ -339,6 +385,7 @@ mod tests {
let command = StartCommand {
http_addr: None,
rpc_addr: None,
prom_addr: None,
mysql_addr: None,
postgres_addr: None,
opentsdb_addr: None,
@@ -362,4 +409,11 @@ mod tests {
.await;
assert!(result.is_ok());
}
#[test]
fn test_toml() {
let opts = StandaloneOptions::default();
let toml_string = toml::to_string(&opts).unwrap();
let _parsed: StandaloneOptions = toml::from_str(&toml_string).unwrap();
}
}

View File

@@ -29,9 +29,9 @@ mod tests {
use std::fs::File;
use std::io::Write;
use common_test_util::temp_dir::create_temp_dir;
use serde::{Deserialize, Serialize};
use snafu::ResultExt;
use tempdir::TempDir;
use super::*;
use crate::error::Result;
@@ -62,7 +62,7 @@ mod tests {
host: "greptime.test".to_string(),
};
let dir = TempDir::new("test_from_file").unwrap();
let dir = create_temp_dir("test_from_file");
let test_file = format!("{}/test.toml", dir.path().to_str().unwrap());
let s = toml::to_string(&config).unwrap();

src/cmd/tests/cli.rs Normal file (148 lines added)
View File

@@ -0,0 +1,148 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#[cfg(target_os = "macos")]
mod tests {
use std::path::PathBuf;
use std::process::{Command, Stdio};
use std::time::Duration;
use common_test_util::temp_dir::create_temp_dir;
use rexpect::session::PtyReplSession;
struct Repl {
repl: PtyReplSession,
}
impl Repl {
fn send_line(&mut self, line: &str) {
self.repl.send_line(line).unwrap();
// read a line to consume the prompt
self.read_line();
}
fn read_line(&mut self) -> String {
self.repl.read_line().unwrap()
}
fn read_expect(&mut self, expect: &str) {
assert_eq!(self.read_line(), expect);
}
fn read_contains(&mut self, pat: &str) {
assert!(self.read_line().contains(pat));
}
}
// TODO(LFC): Un-ignore this REPL test.
// Ignore this REPL test because some logical plans like create database are not supported yet in Datanode.
#[ignore]
#[test]
fn test_repl() {
let data_dir = create_temp_dir("data");
let wal_dir = create_temp_dir("wal");
let mut bin_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
bin_path.push("../../target/debug");
let bin_path = bin_path.to_str().unwrap();
let mut datanode = Command::new("./greptime")
.current_dir(bin_path)
.args([
"datanode",
"start",
"--rpc-addr=0.0.0.0:4321",
"--node-id=1",
&format!("--data-dir={}", data_dir.path().display()),
&format!("--wal-dir={}", wal_dir.path().display()),
])
.stdout(Stdio::null())
.spawn()
.unwrap();
// wait for the Datanode to actually start
std::thread::sleep(Duration::from_secs(3));
let mut repl_cmd = Command::new("./greptime");
repl_cmd.current_dir(bin_path).args([
"--log-level=off",
"cli",
"attach",
"--grpc-addr=0.0.0.0:4321",
// history commands can sneak into stdout and mess up our tests, so disable the helper
"--disable-helper",
]);
let pty_session = rexpect::session::spawn_command(repl_cmd, Some(5_000)).unwrap();
let repl = PtyReplSession {
prompt: "> ".to_string(),
pty_session,
quit_command: None,
echo_on: false,
};
let repl = &mut Repl { repl };
repl.read_expect("Ready for commands. (Hint: try 'help')");
test_create_database(repl);
test_use_database(repl);
test_create_table(repl);
test_insert(repl);
test_select(repl);
datanode.kill().unwrap();
datanode.wait().unwrap();
}
fn test_create_database(repl: &mut Repl) {
repl.send_line("CREATE DATABASE db;");
repl.read_expect("Affected Rows: 1");
repl.read_contains("Cost");
}
fn test_use_database(repl: &mut Repl) {
repl.send_line("USE db");
repl.read_expect("Total Rows: 0");
repl.read_contains("Cost");
repl.read_expect("Using db");
}
fn test_create_table(repl: &mut Repl) {
repl.send_line("CREATE TABLE t(x STRING, ts TIMESTAMP TIME INDEX);");
repl.read_expect("Affected Rows: 0");
repl.read_contains("Cost");
}
fn test_insert(repl: &mut Repl) {
repl.send_line("INSERT INTO t(x, ts) VALUES ('hello', 1676895812239);");
repl.read_expect("Affected Rows: 1");
repl.read_contains("Cost");
}
fn test_select(repl: &mut Repl) {
repl.send_line("SELECT * FROM t;");
repl.read_expect("+-------+-------------------------+");
repl.read_expect("| x | ts |");
repl.read_expect("+-------+-------------------------+");
repl.read_expect("| hello | 2023-02-20T12:23:32.239 |");
repl.read_expect("+-------+-------------------------+");
repl.read_expect("Total Rows: 1");
repl.read_contains("Cost");
}
}

View File

@@ -18,7 +18,7 @@ use std::io::{Read, Write};
use bytes::{Buf, BufMut, BytesMut};
use common_error::prelude::ErrorExt;
use paste::paste;
use snafu::{ensure, Backtrace, ErrorCompat, ResultExt, Snafu};
use snafu::{ensure, Location, ResultExt, Snafu};
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -31,29 +31,33 @@ pub enum Error {
Overflow {
src_len: usize,
dst_len: usize,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Buffer underflow"))]
Underflow { backtrace: Backtrace },
Underflow { location: Location },
#[snafu(display("IO operation reach EOF, source: {}", source))]
Eof {
source: std::io::Error,
backtrace: Backtrace,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
fn location_opt(&self) -> Option<common_error::snafu::Location> {
match self {
Error::Overflow { location, .. } => Some(*location),
Error::Underflow { location, .. } => Some(*location),
Error::Eof { location, .. } => Some(*location),
}
}
}
macro_rules! impl_read_le {

View File

@@ -20,6 +20,12 @@ use serde::{Deserialize, Deserializer, Serialize, Serializer};
#[derive(Debug, Default, Clone, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize)]
pub struct Bytes(bytes::Bytes);
impl From<Bytes> for bytes::Bytes {
fn from(value: Bytes) -> Self {
value.0
}
}
impl From<bytes::Bytes> for Bytes {
fn from(bytes: bytes::Bytes) -> Bytes {
Bytes(bytes)

View File

@@ -53,6 +53,10 @@ impl ReadableSize {
pub const fn as_mb(self) -> u64 {
self.0 / MIB
}
pub const fn as_bytes(self) -> u64 {
self.0
}
}
impl Div<u64> for ReadableSize {

View File

@@ -17,5 +17,4 @@ snafu = { version = "0.7", features = ["backtraces"] }
[dev-dependencies]
chrono.workspace = true
tempdir = "0.3"
tokio.workspace = true

View File

@@ -25,3 +25,6 @@ pub const MIN_USER_TABLE_ID: u32 = 1024;
pub const SYSTEM_CATALOG_TABLE_ID: u32 = 0;
/// scripts table id
pub const SCRIPTS_TABLE_ID: u32 = 1;
pub const MITO_ENGINE: &str = "mito";
pub const IMMUTABLE_FILE_ENGINE: &str = "file";

View File

@@ -16,29 +16,29 @@ use std::any::Any;
use common_error::ext::ErrorExt;
use common_error::prelude::{Snafu, StatusCode};
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Invalid catalog info: {}", key))]
InvalidCatalog { key: String, backtrace: Backtrace },
InvalidCatalog { key: String, location: Location },
#[snafu(display("Failed to deserialize catalog entry value: {}", raw))]
DeserializeCatalogEntryValue {
raw: String,
backtrace: Backtrace,
location: Location,
source: serde_json::error::Error,
},
#[snafu(display("Failed to serialize catalog entry value"))]
SerializeCatalogEntryValue {
backtrace: Backtrace,
location: Location,
source: serde_json::error::Error,
},
#[snafu(display("Failed to parse node id: {}", key))]
ParseNodeId { key: String, backtrace: Backtrace },
ParseNodeId { key: String, location: Location },
}
impl ErrorExt for Error {
@@ -51,10 +51,6 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}

View File

@@ -0,0 +1,28 @@
[package]
name = "common-datasource"
version.workspace = true
edition.workspace = true
license.workspace = true
[dependencies]
arrow.workspace = true
arrow-schema.workspace = true
async-compression = { version = "0.3", features = [
"bzip2",
"gzip",
"xz",
"zstd",
"futures-io",
"tokio",
] }
async-trait.workspace = true
common-error = { path = "../error" }
common-runtime = { path = "../runtime" }
datafusion.workspace = true
futures.workspace = true
object-store = { path = "../../object-store" }
regex = "1.7"
snafu.workspace = true
tokio.workspace = true
tokio-util.workspace = true
url = "2.3"

View File

@@ -0,0 +1,84 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::Display;
use std::str::FromStr;
use async_compression::tokio::bufread::{BzDecoder, GzipDecoder, XzDecoder, ZstdDecoder};
use tokio::io::{AsyncRead, BufReader};
use crate::error::{self, Error, Result};
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum CompressionType {
/// Gzip-ed file
GZIP,
/// Bzip2-ed file
BZIP2,
/// Xz-ed file (liblzma)
XZ,
/// Zstd-ed file
ZSTD,
/// Uncompressed file
UNCOMPRESSED,
}
impl FromStr for CompressionType {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let s = s.to_uppercase();
match s.as_str() {
"GZIP" | "GZ" => Ok(Self::GZIP),
"BZIP2" | "BZ2" => Ok(Self::BZIP2),
"XZ" => Ok(Self::XZ),
"ZST" | "ZSTD" => Ok(Self::ZSTD),
"" => Ok(Self::UNCOMPRESSED),
_ => error::UnsupportedCompressionTypeSnafu {
compression_type: s,
}
.fail(),
}
}
}
impl Display for CompressionType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Self::GZIP => "GZIP",
Self::BZIP2 => "BZIP2",
Self::XZ => "XZ",
Self::ZSTD => "ZSTD",
Self::UNCOMPRESSED => "",
})
}
}
impl CompressionType {
pub const fn is_compressed(&self) -> bool {
!matches!(self, &Self::UNCOMPRESSED)
}
pub fn convert_async_read<T: AsyncRead + Unpin + Send + 'static>(
&self,
s: T,
) -> Box<dyn AsyncRead + Unpin + Send> {
match self {
CompressionType::GZIP => Box::new(GzipDecoder::new(BufReader::new(s))),
CompressionType::BZIP2 => Box::new(BzDecoder::new(BufReader::new(s))),
CompressionType::XZ => Box::new(XzDecoder::new(BufReader::new(s))),
CompressionType::ZSTD => Box::new(ZstdDecoder::new(BufReader::new(s))),
CompressionType::UNCOMPRESSED => Box::new(s),
}
}
}
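
Not part of the diff: a minimal usage sketch of the `CompressionType` API above, assuming the crate is consumed as `common_datasource` and that a gzip-compressed file exists at the hypothetical path `data.csv.gz`:

```rust
use std::str::FromStr;

use common_datasource::compression::CompressionType;
use tokio::io::AsyncReadExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // "gz"/"gzip" (case-insensitive) map to GZIP; an empty string means UNCOMPRESSED.
    let compression = CompressionType::from_str("gz")?;
    assert!(compression.is_compressed());

    // Wrap any AsyncRead in the matching streaming decoder.
    let file = tokio::fs::File::open("data.csv.gz").await?;
    let mut decoded = compression.convert_async_read(file);

    let mut buf = Vec::new();
    decoded.read_to_end(&mut buf).await?;
    println!("decompressed {} bytes", buf.len());
    Ok(())
}
```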

View File

@@ -0,0 +1,142 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use common_error::prelude::*;
use snafu::Location;
use url::ParseError;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Unsupported compression type: {}", compression_type))]
UnsupportedCompressionType { compression_type: String },
#[snafu(display("Unsupported backend protocol: {}", protocol))]
UnsupportedBackendProtocol { protocol: String },
#[snafu(display("empty host: {}", url))]
EmptyHostPath { url: String },
#[snafu(display("Invalid path: {}", path))]
InvalidPath { path: String },
#[snafu(display("Invalid url: {}, error :{}", url, source))]
InvalidUrl { url: String, source: ParseError },
#[snafu(display("Failed to decompression, source: {}", source))]
Decompression {
source: object_store::Error,
location: Location,
},
#[snafu(display("Failed to build backend, source: {}", source))]
BuildBackend {
source: object_store::Error,
location: Location,
},
#[snafu(display("Failed to read object from path: {}, source: {}", path, source))]
ReadObject {
path: String,
location: Location,
source: object_store::Error,
},
#[snafu(display("Failed to read parquet source: {}", source))]
ReadParquetSnafu {
location: Location,
source: datafusion::parquet::errors::ParquetError,
},
#[snafu(display("Failed to convert parquet to schema: {}", source))]
ParquetToSchema {
location: Location,
source: datafusion::parquet::errors::ParquetError,
},
#[snafu(display("Failed to infer schema from file: {}, source: {}", path, source))]
InferSchema {
path: String,
location: Location,
source: arrow_schema::ArrowError,
},
#[snafu(display("Failed to list object in path: {}, source: {}", path, source))]
ListObjects {
path: String,
location: Location,
source: object_store::Error,
},
#[snafu(display("Invalid connection: {}", msg))]
InvalidConnection { msg: String },
#[snafu(display("Failed to join handle: {}", source))]
JoinHandle {
location: Location,
source: tokio::task::JoinError,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
use Error::*;
match self {
BuildBackend { .. } | ListObjects { .. } | ReadObject { .. } => {
StatusCode::StorageUnavailable
}
UnsupportedBackendProtocol { .. }
| UnsupportedCompressionType { .. }
| InvalidConnection { .. }
| InvalidUrl { .. }
| EmptyHostPath { .. }
| InvalidPath { .. }
| InferSchema { .. }
| ReadParquetSnafu { .. }
| ParquetToSchema { .. } => StatusCode::InvalidArguments,
Decompression { .. } | JoinHandle { .. } => StatusCode::Unexpected,
}
}
fn as_any(&self) -> &dyn Any {
self
}
fn location_opt(&self) -> Option<common_error::snafu::Location> {
use Error::*;
match self {
BuildBackend { location, .. } => Some(*location),
ReadObject { location, .. } => Some(*location),
ListObjects { location, .. } => Some(*location),
InferSchema { location, .. } => Some(*location),
ReadParquetSnafu { location, .. } => Some(*location),
ParquetToSchema { location, .. } => Some(*location),
Decompression { location, .. } => Some(*location),
JoinHandle { location, .. } => Some(*location),
UnsupportedBackendProtocol { .. }
| EmptyHostPath { .. }
| InvalidPath { .. }
| InvalidUrl { .. }
| InvalidConnection { .. }
| UnsupportedCompressionType { .. } => None,
}
}
}

View File

@@ -12,18 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
pub mod csv;
pub mod json;
pub mod parquet;
use promql_parser::parser::{Expr, Value};
pub const DEFAULT_SCHEMA_INFER_MAX_RECORD: usize = 1000;
use arrow::datatypes::SchemaRef;
use async_trait::async_trait;
use object_store::ObjectStore;
use crate::engine::Context;
use crate::error::Result;
/// An evaluator evaluates given expressions over given fixed timestamps.
pub struct Evaluator {}
impl Evaluator {
pub fn eval(_ctx: &Context, _expr: &Expr) -> Result<Arc<dyn Value>> {
unimplemented!();
}
#[async_trait]
pub trait FileFormat: Send + Sync + std::fmt::Debug {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<SchemaRef>;
}

View File

@@ -0,0 +1,158 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use arrow::csv::reader::infer_reader_schema as infer_csv_schema;
use arrow_schema::SchemaRef;
use async_trait::async_trait;
use common_runtime;
use object_store::ObjectStore;
use snafu::ResultExt;
use tokio_util::io::SyncIoBridge;
use crate::compression::CompressionType;
use crate::error::{self, Result};
use crate::file_format::{self, FileFormat};
#[derive(Debug)]
pub struct CsvFormat {
pub has_header: bool,
pub delimiter: u8,
pub schema_infer_max_record: Option<usize>,
pub compression_type: CompressionType,
}
impl Default for CsvFormat {
fn default() -> Self {
Self {
has_header: true,
delimiter: b',',
schema_infer_max_record: Some(file_format::DEFAULT_SCHEMA_INFER_MAX_RECORD),
compression_type: CompressionType::UNCOMPRESSED,
}
}
}
#[async_trait]
impl FileFormat for CsvFormat {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<SchemaRef> {
let reader = store
.reader(&path)
.await
.context(error::ReadObjectSnafu { path: &path })?;
let decoded = self.compression_type.convert_async_read(reader);
let delimiter = self.delimiter;
let schema_infer_max_record = self.schema_infer_max_record;
let has_header = self.has_header;
common_runtime::spawn_blocking_read(move || {
let reader = SyncIoBridge::new(decoded);
let (schema, _records_read) =
infer_csv_schema(reader, delimiter, schema_infer_max_record, has_header)
.context(error::InferSchemaSnafu { path: &path })?;
Ok(Arc::new(schema))
})
.await
.context(error::JoinHandleSnafu)?
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::file_format::FileFormat;
use crate::test_util::{self, format_schema, test_store};
fn test_data_root() -> String {
test_util::get_data_dir("tests/csv").display().to_string()
}
#[tokio::test]
async fn infer_schema_basic() {
let csv = CsvFormat::default();
let store = test_store(&test_data_root());
let schema = csv
.infer_schema(&store, "simple.csv".to_string())
.await
.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec![
"c1: Utf8: NULL",
"c2: Int64: NULL",
"c3: Int64: NULL",
"c4: Int64: NULL",
"c5: Int64: NULL",
"c6: Int64: NULL",
"c7: Int64: NULL",
"c8: Int64: NULL",
"c9: Int64: NULL",
"c10: Int64: NULL",
"c11: Float64: NULL",
"c12: Float64: NULL",
"c13: Utf8: NULL"
],
formatted,
);
}
#[tokio::test]
async fn infer_schema_with_limit() {
let json = CsvFormat {
schema_infer_max_record: Some(3),
..CsvFormat::default()
};
let store = test_store(&test_data_root());
let schema = json
.infer_schema(&store, "schema_infer_limit.csv".to_string())
.await
.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec![
"a: Int64: NULL",
"b: Float64: NULL",
"c: Int64: NULL",
"d: Int64: NULL"
],
formatted
);
let json = CsvFormat::default();
let store = test_store(&test_data_root());
let schema = json
.infer_schema(&store, "schema_infer_limit.csv".to_string())
.await
.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec![
"a: Int64: NULL",
"b: Float64: NULL",
"c: Int64: NULL",
"d: Utf8: NULL"
],
formatted
);
}
}
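
Not part of the diff: a hypothetical sketch of configuring `CsvFormat` for a gzip-compressed, headerless, tab-separated file; the `store` argument and the `data.tsv.gz` path are placeholders:

```rust
use common_datasource::compression::CompressionType;
use common_datasource::error::Result;
use common_datasource::file_format::csv::CsvFormat;
use common_datasource::file_format::FileFormat;
use object_store::ObjectStore;

async fn infer_tsv_schema(store: &ObjectStore) -> Result<()> {
    let format = CsvFormat {
        has_header: false,                       // no header row in the input
        delimiter: b'\t',                        // tab-separated values
        schema_infer_max_record: Some(100),      // sample at most 100 records
        compression_type: CompressionType::GZIP, // the file is gzip-compressed
    };
    let schema = format.infer_schema(store, "data.tsv.gz".to_string()).await?;
    println!("inferred {} columns", schema.fields().len());
    Ok(())
}
```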

View File

@@ -0,0 +1,121 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::io::BufReader;
use std::sync::Arc;
use arrow::datatypes::SchemaRef;
use arrow::json::reader::{infer_json_schema_from_iterator, ValueIter};
use async_trait::async_trait;
use common_runtime;
use object_store::ObjectStore;
use snafu::ResultExt;
use tokio_util::io::SyncIoBridge;
use crate::compression::CompressionType;
use crate::error::{self, Result};
use crate::file_format::{self, FileFormat};
#[derive(Debug)]
pub struct JsonFormat {
pub schema_infer_max_record: Option<usize>,
pub compression_type: CompressionType,
}
impl Default for JsonFormat {
fn default() -> Self {
Self {
schema_infer_max_record: Some(file_format::DEFAULT_SCHEMA_INFER_MAX_RECORD),
compression_type: CompressionType::UNCOMPRESSED,
}
}
}
#[async_trait]
impl FileFormat for JsonFormat {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<SchemaRef> {
let reader = store
.reader(&path)
.await
.context(error::ReadObjectSnafu { path: &path })?;
let decoded = self.compression_type.convert_async_read(reader);
let schema_infer_max_record = self.schema_infer_max_record;
common_runtime::spawn_blocking_read(move || {
let mut reader = BufReader::new(SyncIoBridge::new(decoded));
let iter = ValueIter::new(&mut reader, schema_infer_max_record);
let schema = infer_json_schema_from_iterator(iter)
.context(error::InferSchemaSnafu { path: &path })?;
Ok(Arc::new(schema))
})
.await
.context(error::JoinHandleSnafu)?
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::file_format::FileFormat;
use crate::test_util::{self, format_schema, test_store};
fn test_data_root() -> String {
test_util::get_data_dir("tests/json").display().to_string()
}
#[tokio::test]
async fn infer_schema_basic() {
let json = JsonFormat::default();
let store = test_store(&test_data_root());
let schema = json
.infer_schema(&store, "simple.json".to_string())
.await
.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec![
"a: Int64: NULL",
"b: Float64: NULL",
"c: Boolean: NULL",
"d: Utf8: NULL",
],
formatted
);
}
#[tokio::test]
async fn infer_schema_with_limit() {
let json = JsonFormat {
schema_infer_max_record: Some(3),
..JsonFormat::default()
};
let store = test_store(&test_data_root());
let schema = json
.infer_schema(&store, "schema_infer_limit.json".to_string())
.await
.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec!["a: Int64: NULL", "b: Float64: NULL", "c: Boolean: NULL"],
formatted
);
}
}

View File

@@ -0,0 +1,78 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use arrow_schema::SchemaRef;
use async_trait::async_trait;
use datafusion::parquet::arrow::async_reader::AsyncFileReader;
use datafusion::parquet::arrow::parquet_to_arrow_schema;
use object_store::ObjectStore;
use snafu::ResultExt;
use crate::error::{self, Result};
use crate::file_format::FileFormat;
#[derive(Debug, Default)]
pub struct ParquetFormat {}
#[async_trait]
impl FileFormat for ParquetFormat {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<SchemaRef> {
let mut reader = store
.reader(&path)
.await
.context(error::ReadObjectSnafu { path: &path })?;
let metadata = reader
.get_metadata()
.await
.context(error::ReadParquetSnafuSnafu)?;
let file_metadata = metadata.file_metadata();
let schema = parquet_to_arrow_schema(
file_metadata.schema_descr(),
file_metadata.key_value_metadata(),
)
.context(error::ParquetToSchemaSnafu)?;
Ok(Arc::new(schema))
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::file_format::FileFormat;
use crate::test_util::{self, format_schema, test_store};
fn test_data_root() -> String {
test_util::get_data_dir("tests/parquet")
.display()
.to_string()
}
#[tokio::test]
async fn infer_schema_basic() {
let json = ParquetFormat::default();
let store = test_store(&test_data_root());
let schema = json
.infer_schema(&store, "basic.parquet".to_string())
.await
.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(vec!["num: Int64: NULL", "str: Utf8: NULL"], formatted);
}
}

View File

@@ -0,0 +1,21 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod compression;
pub mod error;
pub mod file_format;
pub mod lister;
pub mod object_store;
pub mod test_util;
pub mod util;

View File

@@ -0,0 +1,83 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use futures::{future, TryStreamExt};
use object_store::{Entry, ObjectStore};
use regex::Regex;
use snafu::ResultExt;
use crate::error::{self, Result};
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Source {
Filename(String),
Dir,
}
pub struct Lister {
object_store: ObjectStore,
source: Source,
path: String,
regex: Option<Regex>,
}
impl Lister {
pub fn new(
object_store: ObjectStore,
source: Source,
path: String,
regex: Option<Regex>,
) -> Self {
Lister {
object_store,
source,
path,
regex,
}
}
pub async fn list(&self) -> Result<Vec<Entry>> {
match &self.source {
Source::Dir => {
let streamer = self
.object_store
.list(&self.path)
.await
.context(error::ListObjectsSnafu { path: &self.path })?;
streamer
.try_filter(|f| {
let res = self
.regex
.as_ref()
.map(|x| x.is_match(f.name()))
.unwrap_or(true);
future::ready(res)
})
.try_collect::<Vec<_>>()
.await
.context(error::ListObjectsSnafu { path: &self.path })
}
Source::Filename(filename) => {
// make sure this file exists
let file_full_path = format!("{}{}", self.path, filename);
let _ = self.object_store.stat(&file_full_path).await.context(
error::ListObjectsSnafu {
path: &file_full_path,
},
)?;
Ok(vec![Entry::new(&file_full_path)])
}
}
}
}
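
Not part of the diff: a hypothetical sketch of using `Lister` to collect the `.csv` entries under a directory; the `/tmp/data/` path is a placeholder:

```rust
use std::collections::HashMap;

use common_datasource::error::Result;
use common_datasource::lister::{Lister, Source};
use common_datasource::object_store::build_backend;
use regex::Regex;

async fn list_csv_entries() -> Result<Vec<String>> {
    // Local filesystem backend; `build_backend` maps an fs:// or bare path to the Fs service.
    let store = build_backend("fs:///tmp/data/", HashMap::new())?;
    let lister = Lister::new(
        store,
        Source::Dir,
        "/tmp/data/".to_string(),
        Some(Regex::new(r"\.csv$").expect("valid regex")),
    );
    // Only entries whose names match the regex are returned.
    Ok(lister
        .list()
        .await?
        .into_iter()
        .map(|entry| entry.name().to_string())
        .collect())
}
```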

View File

@@ -0,0 +1,60 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod fs;
pub mod s3;
use std::collections::HashMap;
use object_store::ObjectStore;
use snafu::{OptionExt, ResultExt};
use url::{ParseError, Url};
use self::fs::build_fs_backend;
use self::s3::build_s3_backend;
use crate::error::{self, Result};
pub const FS_SCHEMA: &str = "FS";
pub const S3_SCHEMA: &str = "S3";
/// Parses a URL and returns `(schema, Option<host>, path)`.
pub fn parse_url(url: &str) -> Result<(String, Option<String>, String)> {
let parsed_url = Url::parse(url);
match parsed_url {
Ok(url) => Ok((
url.scheme().to_string(),
url.host_str().map(|s| s.to_string()),
url.path().to_string(),
)),
Err(ParseError::RelativeUrlWithoutBase) => {
Ok((FS_SCHEMA.to_string(), None, url.to_string()))
}
Err(err) => Err(err).context(error::InvalidUrlSnafu { url }),
}
}
pub fn build_backend(url: &str, connection: HashMap<String, String>) -> Result<ObjectStore> {
let (schema, host, _path) = parse_url(url)?;
match schema.to_uppercase().as_str() {
S3_SCHEMA => {
let host = host.context(error::EmptyHostPathSnafu {
url: url.to_string(),
})?;
Ok(build_s3_backend(&host, "/", connection)?)
}
FS_SCHEMA => Ok(build_fs_backend("/")?),
_ => error::UnsupportedBackendProtocolSnafu { protocol: schema }.fail(),
}
}
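
Not part of the diff: a small sketch of what `parse_url` returns for an S3 URL versus a bare relative path (the URLs are illustrative):

```rust
use common_datasource::error::Result;
use common_datasource::object_store::parse_url;

fn main() -> Result<()> {
    // S3 URL: scheme + bucket (host) + key path.
    let (scheme, host, path) = parse_url("s3://my-bucket/some/prefix/")?;
    assert_eq!(
        (scheme.as_str(), host.as_deref(), path.as_str()),
        ("s3", Some("my-bucket"), "/some/prefix/")
    );

    // A bare relative path has no base URL, so it falls back to the FS schema
    // with the input returned unchanged.
    let (scheme, host, path) = parse_url("data/file.csv")?;
    assert_eq!(
        (scheme.as_str(), host, path.as_str()),
        ("FS", None, "data/file.csv")
    );
    Ok(())
}
```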

View File

@@ -0,0 +1,28 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use object_store::services::Fs;
use object_store::ObjectStore;
use snafu::ResultExt;
use crate::error::{BuildBackendSnafu, Result};
pub fn build_fs_backend(root: &str) -> Result<ObjectStore> {
let mut builder = Fs::default();
builder.root(root);
let object_store = ObjectStore::new(builder)
.context(BuildBackendSnafu)?
.finish();
Ok(object_store)
}

View File

@@ -0,0 +1,79 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use object_store::services::S3;
use object_store::ObjectStore;
use snafu::ResultExt;
use crate::error::{self, Result};
const ENDPOINT_URL: &str = "ENDPOINT_URL";
const ACCESS_KEY_ID: &str = "ACCESS_KEY_ID";
const SECRET_ACCESS_KEY: &str = "SECRET_ACCESS_KEY";
const SESSION_TOKEN: &str = "SESSION_TOKEN";
const REGION: &str = "REGION";
const ENABLE_VIRTUAL_HOST_STYLE: &str = "ENABLE_VIRTUAL_HOST_STYLE";
pub fn build_s3_backend(
host: &str,
path: &str,
connection: HashMap<String, String>,
) -> Result<ObjectStore> {
let mut builder = S3::default();
builder.root(path);
builder.bucket(host);
if let Some(endpoint) = connection.get(ENDPOINT_URL) {
builder.endpoint(endpoint);
}
if let Some(region) = connection.get(REGION) {
builder.region(region);
}
if let Some(key_id) = connection.get(ACCESS_KEY_ID) {
builder.access_key_id(key_id);
}
if let Some(key) = connection.get(SECRET_ACCESS_KEY) {
builder.secret_access_key(key);
}
if let Some(session_token) = connection.get(SESSION_TOKEN) {
builder.security_token(session_token);
}
if let Some(enable_str) = connection.get(ENABLE_VIRTUAL_HOST_STYLE) {
let enable = enable_str.as_str().parse::<bool>().map_err(|e| {
error::InvalidConnectionSnafu {
msg: format!(
"failed to parse the option {}={}, {}",
ENABLE_VIRTUAL_HOST_STYLE, enable_str, e
),
}
.build()
})?;
if enable {
builder.enable_virtual_host_style();
}
}
Ok(ObjectStore::new(builder)
.context(error::BuildBackendSnafu)?
.finish())
}
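
Not part of the diff: a hypothetical call showing the connection keys `build_s3_backend` recognizes (all values are placeholders):

```rust
use std::collections::HashMap;

use common_datasource::error::Result;
use common_datasource::object_store::s3::build_s3_backend;

fn main() -> Result<()> {
    let connection = HashMap::from([
        ("ENDPOINT_URL".to_string(), "http://127.0.0.1:9000".to_string()),
        ("REGION".to_string(), "us-east-1".to_string()),
        ("ACCESS_KEY_ID".to_string(), "<access key id>".to_string()),
        ("SECRET_ACCESS_KEY".to_string(), "<secret access key>".to_string()),
        ("ENABLE_VIRTUAL_HOST_STYLE".to_string(), "false".to_string()),
    ]);
    // bucket = "my-bucket", object root = "/"
    let _store = build_s3_backend("my-bucket", "/", connection)?;
    Ok(())
}
```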

View File

@@ -0,0 +1,48 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::path::PathBuf;
use arrow_schema::SchemaRef;
use object_store::services::Fs;
use object_store::ObjectStore;
pub fn get_data_dir(path: &str) -> PathBuf {
// https://doc.rust-lang.org/cargo/reference/environment-variables.html
let dir = env!("CARGO_MANIFEST_DIR");
PathBuf::from(dir).join(path)
}
pub fn format_schema(schema: SchemaRef) -> Vec<String> {
schema
.fields()
.iter()
.map(|f| {
format!(
"{}: {:?}: {}",
f.name(),
f.data_type(),
if f.is_nullable() { "NULL" } else { "NOT NULL" }
)
})
.collect()
}
pub fn test_store(root: &str) -> ObjectStore {
let mut builder = Fs::default();
builder.root(root);
ObjectStore::new(builder).unwrap().finish()
}

View File

@@ -0,0 +1,125 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub fn find_dir_and_filename(path: &str) -> (String, Option<String>) {
if path.is_empty() {
("/".to_string(), None)
} else if path.ends_with('/') {
(path.to_string(), None)
} else if let Some(idx) = path.rfind('/') {
(
path[..idx + 1].to_string(),
Some(path[idx + 1..].to_string()),
)
} else {
("/".to_string(), Some(path.to_string()))
}
}
#[cfg(test)]
mod tests {
use url::Url;
use super::*;
#[test]
fn test_parse_uri() {
struct Test<'a> {
uri: &'a str,
expected_path: &'a str,
expected_schema: &'a str,
}
let tests = [
Test {
uri: "s3://bucket/to/path/",
expected_path: "/to/path/",
expected_schema: "s3",
},
Test {
uri: "fs:///to/path/",
expected_path: "/to/path/",
expected_schema: "fs",
},
Test {
uri: "fs:///to/path/file",
expected_path: "/to/path/file",
expected_schema: "fs",
},
];
for test in tests {
let parsed_uri = Url::parse(test.uri).unwrap();
assert_eq!(parsed_uri.path(), test.expected_path);
assert_eq!(parsed_uri.scheme(), test.expected_schema);
}
}
#[test]
fn test_parse_path_and_dir() {
let parsed = Url::from_file_path("/to/path/file").unwrap();
assert_eq!(parsed.path(), "/to/path/file");
let parsed = Url::from_directory_path("/to/path/").unwrap();
assert_eq!(parsed.path(), "/to/path/");
}
#[test]
fn test_find_dir_and_filename() {
struct Test<'a> {
path: &'a str,
expected_dir: &'a str,
expected_filename: Option<String>,
}
let tests = [
Test {
path: "to/path/",
expected_dir: "to/path/",
expected_filename: None,
},
Test {
path: "to/path/filename",
expected_dir: "to/path/",
expected_filename: Some("filename".into()),
},
Test {
path: "/to/path/filename",
expected_dir: "/to/path/",
expected_filename: Some("filename".into()),
},
Test {
path: "/",
expected_dir: "/",
expected_filename: None,
},
Test {
path: "filename",
expected_dir: "/",
expected_filename: Some("filename".into()),
},
Test {
path: "",
expected_dir: "/",
expected_filename: None,
},
];
for test in tests {
let (path, filename) = find_dir_and_filename(test.path);
assert_eq!(test.expected_dir, path);
assert_eq!(test.expected_filename, filename)
}
}
}

View File

@@ -0,0 +1,24 @@
### Parquet
`parquet/basic.parquet` was converted from `csv/basic.csv` via [bdt](https://github.com/andygrove/bdt).
Contents of `parquet/basic.parquet`:
Data:
```
+-----+-------+
| num | str |
+-----+-------+
| 5 | test |
| 2 | hello |
| 4 | foo |
+-----+-------+
```
Schema:
```
+-------------+-----------+-------------+
| column_name | data_type | is_nullable |
+-------------+-----------+-------------+
| num | Int64 | YES |
| str | Utf8 | YES |
+-------------+-----------+-------------+
```

View File

@@ -0,0 +1,4 @@
num,str
5,test
2,hello
4,foo

View File

@@ -0,0 +1,5 @@
a,b,c,d
1,2,3,4
1,2,3,4
1,2.0,3,4
1,2,4,test

Some files were not shown because too many files have changed in this diff.