Compare commits

...

228 Commits

Author SHA1 Message Date
Weny Xu
fbb7db42aa chore: unify code styling (#1523) 2023-05-10 11:10:39 +08:00
Ning Sun
a1587595d9 feat: add information_schema as exception of cross schema check (#1551)
* feat: add information_schema as a cross-schema query exception

* fix: resolve lint issue
2023-05-10 10:55:00 +08:00
Weny Xu
abd5a8ecbb chore(datasource): make CompressionType follow the style of the guide (#1522) 2023-05-10 10:50:24 +08:00
Ruihang Xia
4ddab8e982 build: change release CI to only run test on linux (#1548)
* disable all linux release

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* split linux and macos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct job name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add missing build job

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* run build-macos first

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* disable unstable test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* disable test on macos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* re-enable test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* do not depend on build-macos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-10 10:49:14 +08:00
Yingwen
1833e487a4 refactor: remove unnecessary async from RepeatedTask::start (#1545)
* refactor: relax RepeatedTask requirements

Some refactor:
- Remove async from start()
- Cancel task in drop
- Allow TaskFunction::call taking &mut self
- Make start/stop concurrent safe

* test(log-store): Fix log store tests (start multiple times)
2023-05-09 21:03:15 +08:00
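For context on the refactor above, here is a minimal sketch of the pattern its bullet points describe (start without `async`, a callback taking `&mut self`, cancellation on drop). It assumes a tokio runtime and is illustrative only, not GreptimeDB's actual `RepeatedTask`/`TaskFunction` API.

```rust
use std::time::Duration;
use tokio::sync::watch;

/// Hypothetical callback trait; `call` takes `&mut self` so the task can keep state.
pub trait TaskFunction: Send + 'static {
    fn call(&mut self);
}

pub struct RepeatedTask {
    cancel: Option<watch::Sender<bool>>,
    _handle: tokio::task::JoinHandle<()>,
}

impl RepeatedTask {
    /// Starts the loop; no `async fn` needed because we only spawn here.
    pub fn start<F: TaskFunction>(interval: Duration, mut task: F) -> Self {
        let (tx, mut rx) = watch::channel(false);
        let handle = tokio::spawn(async move {
            let mut ticker = tokio::time::interval(interval);
            loop {
                tokio::select! {
                    _ = ticker.tick() => task.call(),
                    _ = rx.changed() => break, // cancelled
                }
            }
        });
        Self { cancel: Some(tx), _handle: handle }
    }
}

impl Drop for RepeatedTask {
    fn drop(&mut self) {
        // Cancelling here mirrors the "cancel task in drop" point above:
        // a forgotten explicit stop still shuts the loop down.
        if let Some(tx) = self.cancel.take() {
            let _ = tx.send(true);
        }
    }
}
```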
ZonaHe
c93b5743e8 feat: update dashboard to v0.2.4 (#1553)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-05-09 20:56:20 +08:00
Weny Xu
550c494d25 fix: Copy from must follow the order of table fields issue (#1521)
* fix: Copy from must follow the order of table fields issue

* chore: apply suggestion from CR
2023-05-09 17:46:16 +08:00
Yingwen
2ab0e42d6f feat: clean procedure's state after it is done (#1543)
* feat(common-procedure): pub(crate) use proc_path

* feat(common-procedure): Implement delete_procedure

* feat(common-procedure): Clean procedure after it is finished

* chore(common-procedure): put path_string in front of try_stream

* test(common-procedure): Test cleaning up procedures

* feat(common-procedure): Clean procedure states in recover()

* feat(common-procedure): Use VecDeque for finished procedures
2023-05-09 11:44:50 +08:00
JeremyHi
05e6ca1e14 fix: the latest number of regions (#1546)
* fix: the latest number of regions

* fix: unit test
2023-05-09 10:11:26 +08:00
localhost
b9661818f2 chore: remove useless Option type in plugins (#1544)
Co-authored-by: paomian <qtang@greptime.com>
2023-05-08 21:54:24 +08:00
localhost
f86390345c chore: remove useless Option type in plugins (#1544)
Co-authored-by: paomian <qtang@greptime.com>
2023-05-08 21:53:45 +08:00
localhost
7191bb9652 chore: remove useless Option type in plugins (#1544)
Co-authored-by: paomian <qtang@greptime.com>
2023-05-08 21:52:12 +08:00
localhost
34c7f78861 chore: add configurator to http server (#1488)
* chore: add configurator params to start server fun

* chore: update plugins type

---------

Co-authored-by: paomian <qtang@greptime.com>
2023-05-08 10:55:03 +00:00
JeremyHi
610651fa8f feat: meta metrics (#1538)
* chore: from_etcd_kv (better name)

* feat: kv request metric

* feat: router metric

* feat: connections metric
2023-05-08 17:50:21 +08:00
fys
c48067f88d fix: no active datanode when frontend start (#1533)
* fix: no active datanode when frontend start

* chore: add log when can not get stat_val
2023-05-08 15:02:07 +08:00
Ning Sun
ec1b95c250 docs: add play section (#1528)
* docs: add play section

* Update README.md

Co-authored-by: xiaomin tang <xtang@users.noreply.github.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: xiaomin tang <xtang@users.noreply.github.com>
2023-05-08 14:26:22 +08:00
gitccl
fbf1ddd006 feat: open catalogs and schemas in parallel (#1527)
* feat: open catalogs and schemas in parallel

* fix: code review
2023-05-08 10:34:30 +08:00
Ning Sun
d679cfcb53 feat: add semantic_type to information_schema.columns (#1530) 2023-05-06 15:48:37 +08:00
discord9
2c82ded975 feat: table metrics (#1469)
* feat: Statistic

* add todo

* fmt: cargo fmt

* feat: some simple impl for MemTable

* chore: a try on adding statistics

* Update src/table/src/stats.rs

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* docs: fix typo

* newlines unnecessary

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-06 14:59:49 +08:00
Ruihang Xia
d4f3f617e4 chore(toolchain): update rust-toolchain to 2023-05-03 (#1524)
* chore(toolchain): update rust-toolchain to 2023-05-03

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update workflow yaml

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-06 11:34:09 +08:00
Ruihang Xia
6fe117d7d5 fix: vector and matrix in Prometheus use different field (#1520)
* fix empty tag

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix result type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* make it work

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-05 15:54:26 +08:00
Ning Sun
b0ab641602 feat: add catalog/schema/table count as catalog metrics (#1499)
* feat: add catalog/schema/table count as catalog metrics

* test: add integration tests for catalog metrics
2023-05-05 05:54:12 +00:00
Huaijin
224ec9bd25 fix: wrong max_table_id log in remote catalog manager (#1516)
* fix: wrong max_table_id log in remote catalog manager

* chore: update link in CONTRIBUTING.md

* chore: add a new const MAX_SYS_TABLE_ID
2023-05-05 03:39:45 +00:00
Niwaka
d86b3386dc fix: incorrect show create table output (#1514)
* fix: incorrect show create table output

* feat: change CreateTable's Display if table is external

* feat: change CreateTable's Display if table is external
2023-05-05 11:29:09 +08:00
Lei, HUANG
c8301feed7 fix: respect MySQL timestamp format (#1510) 2023-05-04 18:57:38 +08:00
dennis zhuang
b1920c41a4 fix: object store cache bug (#1482)
* feat: use streaming read instead of reading whole file

* feat: enable atomic writing for object store file caching

* fix: recover existing keys from local cache

* test: recovering keys from local file cache for LruCachePolicy

* Update src/datanode/src/instance.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: cr comments

* feat: md5 hash caching path

* fix: test

* fix: read cache

* Update src/object-store/src/cache_policy.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-05-04 18:25:40 +08:00
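The "md5 hash caching path" bullet above refers to deriving the local cache file name from a digest of the remote object path, so nested or special-character paths map to flat, fixed-length keys. A minimal sketch of that idea, assuming the `md5` crate; the function name and `.cache` suffix are illustrative, not the actual cache layout:

```rust
// Assumes the `md5` crate; the digest formats as lowercase hex.
fn cache_file_name(object_path: &str) -> String {
    format!("{:x}.cache", md5::compute(object_path.as_bytes()))
}

// e.g. cache_file_name("data/table/1/0.parquet") yields "<32 hex chars>.cache"
```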
Weny Xu
c471007edd feat: support to copy table from/to CSV and JSON format file (#1475)
* refactor: refactor copy from executor

* feat: support to copy from CSV and JSON format files

* feat: support to copy table to the CSV and JSON format file

* test: add tests copy from/to

* chore: apply suggestions from CR
2023-05-04 17:20:28 +08:00
Yingwen
2818f466d3 feat: Log error in GreptimeRequestHandler (#1507)
* feat(common-error): Add should_log_error

* feat(servers): log error in grpc handler
2023-05-04 15:48:38 +08:00
JeremyHi
d7a906e0bd feat: metasrv mailbox (#1481)
* refactor: id first in pusher_key

* feat: is_acceptable for multi roles

* feat: mailbox

* fix: channel for mailbox

* feat: impl mailbox via heartbeat

* chore: add unit test for mailbox

* chore: by cr

* chore: typo

* chore: refactor the mailbox API

* chore: by cr

* chore: check timeout interval to 10ms

* chore: add response header
2023-05-04 15:42:43 +08:00
Ning Sun
6e1bb9e458 feat: add support for information_schema.columns (#1500)
* feat: add support for information_schema.columns

* feat: remove information_schema from its view

* Update src/catalog/src/information_schema.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* fix: error on table data type

* test: correct sqlness test for information schema

* test: add information_schema.columns sqlness tests

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-05-04 14:29:38 +08:00
Ning Sun
494ad570c5 feat: update pgwire to 0.14 (#1504) 2023-05-04 14:24:26 +08:00
Vanish
12d59e6341 chore: remove redundant code. (#1502) 2023-05-04 14:20:26 +08:00
Yingwen
479ef9d379 fix: checkpoint GC task also deletes the file with the last version (#1491)
* test(storage): use assert_eq to check scan result

* feat(storage): Add more info to manifest log

* feat: Avoid error log when unable to delete

* fix: The manifest gc task should delete files <= last_version

* feat(storage): Don't log if the error kind is not found

* feat: Add keep_last_checkpoint option
2023-05-04 14:18:38 +08:00
Niwaka
93ffe1ff33 feat: improve and distinguish different errors for IllegalInsertData (#1503)
* feat: improve and distinguish different errors for IllegalInsertData

* feat: change error code for UnexpectedValuesLength and ColumnAlreadyExists

* chore: improve readability of error message
2023-05-04 12:36:24 +08:00
Niwaka
d461328238 fix: insert distributed table if partition column has default value (#1498)
* fix: insert distributed table if partition column has default value

* Address review

* address review

* address review

* chore: introduce assert_columns

---------

Co-authored-by: WenyXu <wenymedia@gmail.com>
2023-05-02 20:50:02 +08:00
Vanish
6aae5b7286 feat: prevent sensitive information (key, password, secrets etc.) from being printed in plain (#1501)
* feat: add secret type

* chore: replace key, password, secrets with secret type.

* chore: use secrecy

* chore: remove redundant file

* style: taplo fmt
2023-05-01 20:54:54 +08:00
Ning Sun
7dbac89000 feat: add metrics for protocol interfaces (#1495)
* feat: add metrics for various interfaces

* feat: add db label for protocols

* feat: add postgres protocol metrics

* feat: add metrics for grpcs apis

* feat: add auth failure counter for mysql/pg

* fix: add db label to grpc prometheus interface

* Apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* feat: add error code for auth failure counter

* fix: use schema as dbname when catalog is default

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-04-28 23:42:35 +08:00
Yingwen
0b0b5a10da feat: Remove store from procedure config (#1489)
* feat(procedure): Add key prefix

* feat: Remove store config from ProcedureConfig

* refactor(procedure): Address review comments

Add proc_path! macro and rename KEY_PREFIX to PROC_PATH

* docs: Update procedure config examples
2023-04-28 22:12:57 +08:00
Yingwen
51be35a7b1 feat(mito): Combine the original and procedure's implementation (#1468)
* fix(mito): Add metrics to mito DDL procedure

* feat(mito): Use procedure's implementation to create table

* feat(mito): Use procedure's implementation to alter table

* feat(mito): Use procedure's implementation to drop table

* style(mito): Fix clippy

* test(mito): Fix tests

* feat(mito): Add TableCreator

* feat(mito): update alter table procedure

* fix(mito): alter procedure create alter op first

* feat(mito): Combine alter table code

* fix(mito): Fix deadlock

* feat(mito): Simplify drop table procedure
2023-04-28 11:48:52 +08:00
Lei, HUANG
9e4887f29f fix: disable dashboard (#1494) 2023-04-27 22:55:15 +08:00
yuanbohan
cca34aa914 chore: upgrade promql-parser version (#1484) 2023-04-27 13:10:15 +00:00
Ruihang Xia
0ac50632aa feat: use server time if it's not specified (#1480)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-27 20:54:26 +08:00
Yingwen
b1f7ad097a test: Fix s3 region in test (#1493) 2023-04-27 12:25:20 +00:00
Weny Xu
a77a4a4bd1 fix: add s3 region info (#1492)
fix: add region info
2023-04-27 19:13:01 +08:00
Weny Xu
47f1cbaaed fix: add s3 region info (#1486) 2023-04-27 17:35:34 +08:00
Yingwen
8e3c3cbc40 build: Download assets to cargo output dir (#1476)
* build: Download assets to cargo output dir

Also remove the output from the build script and only print the output
on failure

* chore: Update src/servers/build.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* build: replace pushd by cd

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-27 17:09:10 +08:00
Vanish
9f0efc748d feat: make log level and destination configurable from config files (#1444)
* feat: implement load_options.

* refactor: build by ConfigOptions.

* refactor: init_global_logging by LoggingOptions.

* chore: make clippy happy.

* refactor: use TopLevelOptions push top level options to subcommand.

* test: test TopLevelOptions.

* refactor: push Options in Box.

* refactor: push Options in Box.

* refactor: use let-else and Options.
2023-04-27 15:30:04 +08:00
Ruihang Xia
939a51aea9 feat: adopt REPLACE interceptor and quit all processes on exit (#1478)
* bump version and update test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* quit all processes on drop

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update tests/runner/src/env.rs

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-04-27 07:16:41 +00:00
Weny Xu
bf35620904 refactor: refactor BufferedWriter (#1439)
* feat: implement ApproximateBufWriter

* refactor: refactor BufferedWriter

* refactor: remove ApproximateBufWriter

* fix: fix losing pending writes issue

* chore: fmt

* chore: remove unused import

* chore: rename method name

* feat: return written row count

* chore: apply suggestions from CR

* fix: fix counting the bytes_written twice issue
2023-04-27 14:45:33 +08:00
Weny Xu
09f55e3cd8 chore: remove info log (#1483) 2023-04-27 14:05:22 +08:00
dennis zhuang
b88d8e5b82 feat: bump opendal to 0.33 (#1479) 2023-04-27 12:13:18 +08:00
Weny Xu
a709a5c842 feat: support to create parquet format external table (#1463)
* feat: support parquet format external table

* Update src/file-table-engine/src/error.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-26 16:45:37 +08:00
Lei, HUANG
fb9978e95d refactor: catalog (#1454)
* wip

* add schema_async

* remove CatalogList

* remove catalog provider and schema provider

* fix

* fix: rename table

* fix: sqlness

* fix: ignore tonic error metadata

* fix: table engine name

* feat: rename catalog_async to catalog

* respect engine name in table regional value when deregistering tables

* fix: CR
2023-04-26 08:36:40 +00:00
discord9
ef4e473e6d fix: recompile&register scripts as UDF on reboot (#1421)
* fixme: recompile somewhere else

* feat: re-compile&re-register all scripts in table

* fix: allow empty scripts table

* chore: add non-blocking somewhere

* chore: PR advices

* chore: more PR advices

* style: remove useless join

* style: remove redundant code

* refactor: use `bg` runtime instead

* style: cargo fmt
2023-04-26 16:30:58 +08:00
Ning Sun
1a245f35b9 feat: improve metrics and log level (#1470)
* refactor: tune log and metrics for meta/frontend

* feat: add panic counter
2023-04-26 13:13:40 +08:00
dennis zhuang
8d8a480dc1 fix: object store caching bug, #1466 (#1467)
* fix: object store caching bug, #1466

* fix: forgot to add S3WithCache tests
2023-04-25 21:48:51 +08:00
Lei, HUANG
197c34bc17 fix: grpc client keepalive (#1461)
fix: grpc keepalive
2023-04-25 20:07:57 +08:00
Ruihang Xia
4d9afee8ef chore(deps): update substrait dep in client (#1453)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-25 16:21:59 +08:00
Weny Xu
7f14d40798 test: add tests for external table (#1460) 2023-04-25 15:14:46 +08:00
Yingwen
eb50cee601 feat: Switch to the procedure framework (#1448)
* feat: Remove create_mock_sql_handler()

create_to_request() and alter_to_request() don't need `&self`, so
we don't need to mock the sql handler to test them

* feat: Enable procedure manager by default

* docs: Update config example

* test: Enable procedure framework in all tests

* refactor(datanode): rename methods using procedure

* test(catalog): Fix temp dir drops before test finishes

* tests: Enable procedure framework in sqlness

* test: Fix sqlness standalone rename test

* fix: Drop procedure allows table not in engine

* test: Change rename table test

* fix: add options to table meta when creating table by procedure

* test: adjust error message in schema test case

* test: Fix test_sql_api error message
2023-04-25 12:04:02 +08:00
Lei, HUANG
92c0808766 fix: frontend opt should respect http addr in config file when no com… (#1456)
* fix: frontend opt should respect http addr in config file when no command options are given

* refactor: command line options should be Option<bool>

* fix: ci
2023-04-25 03:43:42 +00:00
Ruihang Xia
f9ea6b63bf feat: impl instant query and add tests (#1452)
* feat: impl instant query and add tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-25 11:08:14 +08:00
fys
2287db7ff7 fix: execute sql query in another catalog (#1457) 2023-04-25 10:30:35 +08:00
shuiyisong
69acf32914 chore: add len() to Bytes and StringBytes (#1455)
* chore: add `len()` to Bytes and StringBytes

* chore: add `len()` to Bytes and StringBytes
2023-04-25 10:18:41 +08:00
Ruihang Xia
b9db2cfd83 fix: support restart sqlness in distributed mode (#1443)
* fix: support restart sqlness in distributed mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move alter_table case to common dir

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* is_standalone flag

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update tests/runner/src/env.rs

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-04-24 19:36:12 +08:00
JeremyHi
6d247f73fd fix: add log on leader stepdown (#1450) 2023-04-24 19:16:57 +08:00
Ruihang Xia
2cf828da3c feat: implement Prometheus-compatible API in gRPC (#1449)
* update greptime-proto

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicate delete enum

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl handler and service

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-24 18:03:48 +08:00
Weny Xu
f2167663b2 feat: support to create external table (#1372)
* feat: support to create external table

* chore: apply suggestions from CR

* test: add create external table without ts type

* chore: apply suggestions from CR

* fix: fix import typo

* refactor: move consts to table crate

* chore: apply suggestions from CR

* refactor: rename create_table_schema
2023-04-24 14:43:12 +08:00
LFC
17daf4cdff feat: support "delete" in distributed mode (#1441)
* feat: support "delete" in distributed mode

* fix: resolve PR comments
2023-04-24 12:07:50 +08:00
shuiyisong
7c6754d03e feat: meter write request (#1447)
* chore: add write meter

* chore: update meter macro

* chore: update meter framework url to https
2023-04-24 11:42:06 +08:00
zyy17
e64fea3a15 ci: upgrade nightly release tag from v0.2.0 to v0.3.0 (#1446) 2023-04-24 11:04:39 +08:00
Weny Xu
22b5a94d02 feat: support creating the physical plan for JSON and CSV files (#1424)
* feat: support creating the physical plan for JSON and CSV files

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* refactor(file-table-engine): use datasource Format instead
2023-04-24 10:17:11 +08:00
Weny Xu
d374859e24 refactor: replace Copy Format with datasource Format (#1435)
* refactor: replace Copy Format with datasource Format

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2023-04-23 08:31:54 +00:00
Ning Sun
c5dba29f9e refactor: remove redundant plugins argument (#1436) 2023-04-23 12:39:46 +08:00
Hao
9f442dedf9 chore: fix some typo and add deriv to plan in promql (#1438) 2023-04-23 12:21:25 +08:00
Ruihang Xia
5d77ed00bb test: add basic cases for distributed TQL (#1437)
* test: add basic cases for distributed TQL

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* drop table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-23 03:34:42 +00:00
Zheming Li
c75845c570 fix: wrong next column in manifest (#1440)
Signed-off-by: Zheming Li <nkdudu@126.com>
2023-04-23 11:25:38 +08:00
Yingwen
1ee9ad4ca1 feat: manage multiple engine procedure in the engine manager (#1434)
* feat(table): Add engine procedure to engine manager

* feat(datanode): Get engine procedure from engine manager

* feat(table-procedure): Add source error to SubprocedureFailed

* test: Enable procedure in tests and pass all tests

* style(table-procedure): Fix clippy
2023-04-23 10:04:09 +08:00
Weny Xu
f2cc912c87 feat: implement ParquetFileReaderFactory (#1423)
* feat: implement ParquetFileReaderFactory

* refactor: use LazyParquetFileReader instead

* chore: apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-04-21 13:40:58 +08:00
dennis zhuang
2a9f482bc7 feat: show create table (#1336)
* temp commit

* feat: impl Display for CreateTable statement

* feat: impl show create table for standalone

* fix: forgot show.rs

* feat: clean code

* fix: typo

* feat: impl show create table for distributed

* test: add show create table sqlness test

* fix: typo

* fix: sqlness tests

* feat: render partition rules for distributed table

* Update src/sql/src/statements.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/sql/src/statements.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/sql/src/statements.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/sql/src/statements/create.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: by CR comments

* fix: compile error

* fix: missing column comments and extra table options

* test: add show create table test

* test: add show create table test

* chore: timestamp precision

* fix: test

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-04-21 11:37:16 +08:00
Weny Xu
d5e4662181 refactor: refactor stmt_options_to_table_options (#1403)
refactor: move stmt_options_to_table_options to query crate
2023-04-21 11:08:01 +08:00
Yingwen
9cd2cf630d feat: procedures for file table engine (#1417)
* refactor: Add table_ref() to requests as their methods

* feat: Add CreateImmutableFileTable

* feat: Add DropImmutableFileTable

* feat: Implement TableEngineProcedure for ImmutableFileTableEngine

* feat: Add common-procedure-test crate

* refactor: mito engine use common-procedure-test to test procedures

* test: Add test for create and drop table

* chore: Address review comments
2023-04-20 18:52:44 +08:00
Ruihang Xia
7152a1b79e feat: expose output_ordering on scan plan (#1425)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-20 17:58:48 +08:00
fys
f2cfd8e608 refactor: default catalog and schema are created at Metasrv (#1391)
* refactor: default catalog and schema are created at Metasrv

* fix: unit test

* fix: add license

* simplify the meta mock

* cr
2023-04-20 17:58:37 +08:00
ZonaHe
e8cd2f0e48 feat: update dashboard to v0.2.3 (#1430)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-04-20 17:51:11 +08:00
Yingwen
830367b8f4 feat: Drop table by procedure (#1401)
* feat: Add drop table procedure

* feat: support dropping table by procedure on datanode

* test: Add test for DropTableProcedure

* test: Test drop table by procedure

* chore: update comments

* fix: Make on_remove_from_catalog idempotent
2023-04-20 15:57:56 +08:00
Ruihang Xia
37678e2e02 ci: enable test on release (#1428)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-20 12:06:20 +08:00
Ruihang Xia
b6647af2e3 test: add integration case to check dashboard path (#1422)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-20 11:17:01 +08:00
ZonaHe
d2c90b4c59 feat: update dashboard to v0.2.2 (#1426)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-04-19 19:09:17 +08:00
Weny Xu
5a05e3107c feat: implement parsing format from hashmap (#1420)
* feat: implement parsing format from hashmap

* chore: apply suggestions from CR
2023-04-19 16:29:31 +08:00
Hao
e4cd08c750 feat: add table id and engine to information_schema.TABLES (#1407)
* feat: add table id and engine to information_schema.TABLES

* Update src/catalog/src/information_schema/tables.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: change table_engine to engine

* test: update sqlness for information schema

* test: update information_schema test in frontend::tests::instance_test.rs

* fix: github action sqlness information_schema test fail

* test: ignore table_id in information_schema

* test: support distribute and standalone have different output

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-04-19 10:52:02 +08:00
Ruihang Xia
e8bb00f0be feat: impl instant query interface (#1410)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-18 23:25:14 +08:00
LFC
ff2784da0f test: add SELECT ... LIMIT ... test cases for distributed mode (#1419) 2023-04-18 23:05:43 +08:00
liyang
4652b62481 chore: use alicloud imagehub (#1418)
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-04-18 14:35:14 +00:00
Lei, HUANG
0e4d4f0300 chore: release 0.2.0 (#1413)
* chore: bump version to v0.2.0

* chore: bump dashboard to v0.2.1

* chore: remove push uhub step

* fix: static assets path prefix
2023-04-18 22:12:13 +08:00
shuiyisong
145f8eb5a7 refactor: parallelize open table (#1392)
* refactor: change open_table to parallel on datanode startup

* chore: try move out register schema table

* chore: change mito engine to key lock

* chore: minor change

* chore: minor change

* chore: update error definition

* chore: remove rwlock on tables

* chore: try parallel register table on schema provider

* chore: add rt log

* chore: add region open rt log

* chore: add actual open region rt log

* chore: add recover rt log

* chore: divide to three part rt log

* chore: remove debug log

* chore: add replay rt log

* chore: update cargo lock

* chore: remove debug log

* chore: revert unused change

* chore: update err msg

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: fix cr issue

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: fix cr issue

* chore: fix cr issue

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-04-18 21:36:29 +08:00
discord9
de8b889701 chore: update RustPython depend (#1406)
* chore: update RustPython to newer version

* chore: bump ver

* chore: PR advices
2023-04-18 15:39:57 +08:00
Lei, HUANG
1c65987026 chore: remove Release prefix from release name (#1409) 2023-04-18 06:25:08 +00:00
Near
c6f024a171 feat: Add metrics for cache hit/miss for object store cache (#1405)
* Add the cache hit/miss counter

* Verify the cache metrics are included

* Resolve comments

* Rename the error kind label name to be consistent with other metrics

* Rename the object store metric names

* Avoid using glob imports

* Format the code

* chore: Update src/object-store/src/metrics.rs mod doc

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-04-18 14:08:19 +08:00
localhost
0c88bb09e3 chore: add some metrics for grpc client (#1398)
* chore: add some metrics for grpc client

* chore: add grpc prefix and change metrics-exporter-prometheus to add global prefix

---------

Co-authored-by: paomian <qtang@greptime.com>
2023-04-18 13:55:01 +08:00
Ruihang Xia
f4190cfca6 fix: table scan without projection (#1404)
* fix: table scan without projection

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update PR reference

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-17 20:10:36 +08:00
zyy17
b933ffddd0 ci: set whether it is the latest release by using 'ncipollo/release-action' and update install.sh (#1400)
* ci: set whether it is the latest release by using 'ncipollo/release-action'

* ci: modify greptimedb install script to use the latest nightly version binary
2023-04-17 18:44:00 +08:00
Lei, HUANG
1214b5b43e docs: fix timestamp rendering in readme (#1399)
doc: fix timestamp rendering in readme
2023-04-17 17:07:25 +08:00
Ruihang Xia
a47134a971 chore: don't render reproduce as shell in issue template (#1397)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-17 16:42:51 +08:00
Ruihang Xia
dc85a4b5bb feat: migrate substrait to datafusion official implementation (#1238)
* some test cases will fail

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert version changes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update substrait-proto version

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update df again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/common/substrait/Cargo.toml

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* revert COPY FROM / COPY TO sqlness to standalone only

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-04-17 15:54:35 +08:00
Yingwen
0937ccdb61 docs: Add docs about schema structs (#1373)
* docs: Add docs about schema structs

* docs: refine schema struct docs

- Describe SchemaRef and relationship between our schema and arrow's.
- Add more examples

* docs: Add code link to schemas

* docs: Add conversion graph

* docs: Apply suggestions from code review

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-04-17 12:11:24 +08:00
Weny Xu
408de51be8 feat: implement JsonOpener and CsvOpener (#1367)
* feat: introduce JsonOpener and CsvOpener

* refactor: refactor Opener

* docs: add doc
2023-04-17 11:42:16 +08:00
LFC
f7b7a9c801 feat: implement COPY for cluster (#1388) 2023-04-17 11:04:47 +08:00
Weny Xu
cc7c313937 chore: fix clippy (#1387) 2023-04-15 07:00:54 +08:00
Ruihang Xia
a6e41cdd7b chore: bump arrow, parquet, datafusion and tonic (#1386)
* bump arrow, parquet, datafusion, tonic and greptime-proto

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add analyzer and fix test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-15 00:03:15 +08:00
Hao
a5771e2ec3 feat: implement predict_linear function in promql (#1362)
* feat: implement predict_linear function in promql

* feat: initialize predict_linear's planner

* fix(bug): fix a bug in linear regression and add some unit test for linear regression

* chore: format code

* feat: deal with NULL value in linear_regression

* feat: add test for all value is None
2023-04-14 22:26:37 +08:00
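`predict_linear` fits a straight line to the samples in a range and extrapolates it forward. A sketch of the standard least-squares fit it is based on (not the exact GreptimeDB implementation; NULL handling and the evaluation-time reference point from the PR above are omitted):

```rust
/// Ordinary least-squares fit over `(timestamp_secs, value)` samples, then
/// extrapolate `duration_secs` past the last sample. Illustrative only.
fn predict_linear(samples: &[(f64, f64)], duration_secs: f64) -> Option<f64> {
    if samples.len() < 2 {
        return None;
    }
    let n = samples.len() as f64;
    let (mut sum_t, mut sum_v, mut sum_tv, mut sum_tt) = (0.0, 0.0, 0.0, 0.0);
    for &(t, v) in samples {
        sum_t += t;
        sum_v += v;
        sum_tv += t * v;
        sum_tt += t * t;
    }
    let denom = n * sum_tt - sum_t * sum_t;
    if denom == 0.0 {
        return None; // all timestamps identical, slope undefined
    }
    let slope = (n * sum_tv - sum_t * sum_v) / denom;
    let intercept = (sum_v - slope * sum_t) / n;
    let target = samples.last()?.0 + duration_secs;
    Some(slope * target + intercept)
}
```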
Lei, HUANG
68e64a6ce9 feat: add some metrics (#1384)
* feat: add some metrics

* fix: compile errors
2023-04-14 20:46:45 +08:00
Ning Sun
90cd3bb5c9 chore: switch mysql_async to git dep (#1383) 2023-04-14 07:04:34 +00:00
shuiyisong
bea37e30d8 chore: query prom using input query context (#1381) 2023-04-14 14:23:36 +08:00
Yingwen
d988b43996 feat: Add drop table procedure to mito (#1377)
* feat: Add drop table procedure to mito

* feat: remove table from engine and then close it
2023-04-14 13:09:38 +08:00
LFC
0fc816fb0c test: add "numbers" table in distributed mode (#1374) 2023-04-14 11:52:04 +08:00
Ning Sun
43391e0162 chore: update pgwire and rustls libraries (#1380)
* feat: update pgwire to 0.13 and fix grafana compatibility

* chore: update pgwire and rustls

* chore: remove unsued imports

* style: format toml
2023-04-14 11:06:01 +08:00
Ruihang Xia
3e7f7e3e8d fix: compile error in develop branch (#1376)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-13 15:19:00 +08:00
Yingwen
0819582a26 feat: Add alter table procedure (#1354)
* feat: Implement AlterTableProcedure

* test: Test alter table procedure

* feat: support alter table by procedure in datanode

* chore: update comment
2023-04-13 14:05:53 +08:00
Lei, HUANG
9fa871a3fa fix: concurrent rename two table to same name may cause override (#1368)
* fix: concurrent rename two table to same name may cause override

* fix: concurrently update system catalog table

* fix: correctness
2023-04-13 11:53:02 +08:00
Lei, HUANG
76640402ba fix: update cargo lock (#1375) 2023-04-13 11:08:35 +08:00
discord9
c20dbda598 feat: from/to numpy&collect concat (#1339)
* feat: from/to numpy&collect concat

* feat: PyRecordBatch

* test: try import first,allow w/out numpy/pyarrow

* fix: cond compile flag

* doc: license

* feat: sql() ret PyRecordBatch&repr

* fix: after merge

* style: fmt

* chore: CR advices

* docs: update

* chore: resolve conflict
2023-04-13 10:46:25 +08:00
LFC
33dbf7264f refactor: unify the execution of show stmt (#1340)
* refactor: unify the execution of show stmt
2023-04-12 23:09:07 +08:00
discord9
716bde8f04 feat: benchmark some python script (#1356)
* test: bench rspy&pyo3

* docs: add TODO

* api heavy

* Update src/script/benches/py_benchmark.rs

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* style: toml fmt

* test: use `rayon` for threadpool

* test: compile first, run later

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-12 18:19:02 +08:00
ZonaHe
9f2825495d feat: update dashboard to v0.1.0 (#1370)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-04-12 17:08:10 +08:00
localhost
ae21c1c1e9 chore: set keep lease heartbeat log level to trace (#1364)
Co-authored-by: paomian <qtang@greptime.com>
2023-04-12 09:38:49 +08:00
Ruihang Xia
6b6617f9cb build: specify clippy denies in cargo config (#1351)
* build: specify clippy denies in cargo config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* deny implicit clone

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-11 09:48:52 +00:00
shuiyisong
d5f0ba4ad9 refactor: merge authenticate and authorize api (#1360)
* chore: add auth api

* chore: update pg using auth api

* chore: update grpc using auth api

* chore: update http using auth api
2023-04-11 17:28:07 +08:00
Eugene Tolbakov
e021da2eee feat(promql): add holt_winters initial implementation (#1342)
* feat(promql): add holt_winters initial implementation

* feat(promql): improve docs for holt_winters

* feat(promql): adjust holt_winters implementation according to code review

* feat(promql): add holt_winters test from prometheus promql function test suite

* feat(promql): add holt_winters more tests from prometheus promql function test suite

* feat(promql): fix styling issue

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-11 17:04:35 +08:00
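PromQL's `holt_winters` applies double exponential smoothing over a range vector. A sketch of the textbook recurrence it is based on, with smoothing factor `sf` and trend factor `tf` in (0, 1); illustrative only, not the exact Prometheus or GreptimeDB code:

```rust
/// Double exponential smoothing: returns the last smoothed value,
/// or None when there are fewer than two samples.
fn holt_winters(values: &[f64], sf: f64, tf: f64) -> Option<f64> {
    if values.len() < 2 {
        return None;
    }
    let mut smoothed = values[0];
    let mut trend = values[1] - values[0];
    for &x in &values[1..] {
        let prev = smoothed;
        // Blend the new sample with the previous level plus trend.
        smoothed = sf * x + (1.0 - sf) * (prev + trend);
        // Update the trend estimate from the change in level.
        trend = tf * (smoothed - prev) + (1.0 - tf) * trend;
    }
    Some(smoothed)
}
```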
Weny Xu
fac9c17a9b feat: implement infer schema from single file (#1348)
* feat: implement infer schema from file

* feat: implement compression type

* refactor: remove unnecessary BufReader

* refactor: remove SyncIoBridge and use tokio_util::io::SyncIoBridge instead

* chore: apply suggestions from CR
2023-04-11 16:59:30 +08:00
Weny Xu
dfc2a45de1 docs: treat slack as the first-class citizen (#1361) 2023-04-11 16:59:17 +08:00
Lei, HUANG
3e8ec8b73a fix: avoid panic when no region found in table (#1359) 2023-04-11 16:58:18 +08:00
Weny Xu
a90798a2c1 test: add tests for file table engine (#1353)
* test: add tests for file table engine

* test: refactor open table test and add close engine test
2023-04-11 06:25:08 +00:00
Lei, HUANG
f5cf5685cc feat!: parsing local timestamp (#1352)
* fix: parse and display timestamp/datetime in local time zone

* fix display

* fix: unit tests

* change time zone env

* fix: remove useless code
2023-04-11 12:54:15 +08:00
localhost
1a21a6ea41 chore: set metasrv and datanode heartbeat log level to trace (#1357) 2023-04-11 11:21:29 +08:00
Ruihang Xia
09f003d01d fix: lots of corner cases in PromQL (#1345)
* adjust plan ordering
fix offset logic
ignore empty range vector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: different NaN logic between instant and range selector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: enlarge selector time window

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert change about stale NaN

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename variables

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* one more rename

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-10 09:05:24 +00:00
Weny Xu
29c6155ae3 feat: introduce file table engine (#1323)
* feat: introduce file table engine

* chore: apply cr suggestions

* refactor: refactor immutable manifest

* chore: apply cr suggestions

* refactor: refactor immutable manifest

* chore: apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: apply suggestions from CR

* chore: apply suggestions from code review

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-04-10 12:03:36 +08:00
Weny Xu
804348966d chore: amend fmt-toml (#1347) 2023-04-10 11:42:36 +08:00
Lei, HUANG
b7bdee6de9 feat: ignoring time zone info when import from external files (#1341)
* feat: ignore timezone info when copy from external files

* chore: rebase onto develop
2023-04-10 11:41:34 +08:00
Lei, HUANG
c850e9695a fix: stream inserts when copying from external file (#1338)
* fix: stream inserts when copying from external file

* fix: reset pending bytes once insertion succeeds

* Update src/datanode/src/sql/copy_table_from.rs

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-04-10 10:44:12 +08:00
LFC
a3e47955b8 feat: information schema (#1327)
* feat: basic information schema

* show information schema only for current catalog

* fix: fragile tests
2023-04-07 16:50:14 +08:00
zyy17
554a69ea54 refactor: add disable_dashboard option and disable dashboard in metasrv and datanode (#1343)
* refactor: add disable_dashboard option and disable dashboard in metasrv and datanode

* refactor: skip disable_dashboard filed in toml file

* refactor: simplify the http initialization
2023-04-07 16:45:25 +08:00
LFC
f8b6a6b219 fix!: not allowed to create column name same with keyword without quoted (#1333)
* fix: not allowed to create column name same with keyword without quoted

* fix: tests

* Update src/sql/src/parsers/create_parser.rs

Co-authored-by: Ning Sun <classicning@gmail.com>

* fix: tests

---------

Co-authored-by: Ning Sun <classicning@gmail.com>
2023-04-06 15:34:26 +08:00
dennis zhuang
dce0adfc7e chore: readme (#1318) 2023-04-06 13:20:08 +08:00
Ruihang Xia
da66138e80 refactor(error): remove backtrace, and introduce call-site location for debugging (#1329)
* wip: global replace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unneeded tests of errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix ErrorExt trait implementator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix pyo3 tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-06 04:06:00 +00:00
Lei, HUANG
d10de46e03 feat: support timestamp precision on creating table (#1332)
* feat: support timestamp precision on creating table

* fix sqlness

* fix: substrait representation of different timestamp precision
2023-04-06 11:18:20 +08:00
Eugene Tolbakov
59f7630000 feat: initial changes for compaction_time_window field support (#1083)
* feat(compaction_time_window): initial changes for compaction_time_window field support

* feat(compaction_time_window): move PickerContext creation

* feat(compaction_time_window): update region descriptor, fix formatting

* feat(compaction_time_window): add minor enhancements

* feat(compaction_time_window): fix failing test

* feat(compaction_time_window):  return an error instead silently skip for the user provided compaction_time_window

* feat(compaction_time_window): add TODO reminder
2023-04-06 10:32:41 +08:00
Hao
a6932c6a08 feat: implement deriv function (#1324)
* feat: implement deriv function

* docs: add docs for linear regression

* test: add test for deriv
2023-04-05 13:42:07 +08:00
Ruihang Xia
10593a5adb fix: update sqlness result (#1328)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-04 22:47:55 +08:00
dennis zhuang
bf8c717022 feat: try to do manifest checkpoint on opening region (#1321) 2023-04-04 21:36:54 +08:00
localhost
aa9f6c344c chore: minor fix about metrics component (#1322)
* typo: fix StartMetricsExport error message error

* bug: add metrics http handler for frontend node
2023-04-04 19:31:06 +08:00
Ruihang Xia
99353c6ce7 refactor: rename "value" semantic type to "field" (#1326)
* global replace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change desc table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-04 11:14:28 +00:00
Ruihang Xia
a2d8804129 feat: impl __field__ special matcher to project value columns (#1320)
* plan new come functions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement __value__ matcher

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change __value__ to __field__

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add bad-case tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename variables

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-04 09:08:50 +00:00
Weny Xu
637a4a2a58 docs: file external table RFC (#1274) 2023-04-04 10:41:17 +08:00
Weny Xu
ef134479ef feat: support multi table engines in distributed mode (#1316)
* chore: bump greptime-proto to 59afacd

* feat: support multi table engines in distributed mode
2023-04-04 10:27:08 +08:00
Weny Xu
451f9d2d4e feat: support multi table engines (#1277)
* feat: support multi table engines

* refactor: adapt SqlHandler to support multiple table engines

* refactor: refactor TableEngineManager

* chore: apply review suggestions

* chore: apply review suggestions

* chore: apply review suggestions

* chore: snafu context styling
2023-04-03 14:49:12 +00:00
dennis zhuang
68d3247791 chore: tweak logs (#1314)
* chore: tweak logs

* chore: cr comments
2023-04-03 21:08:16 +08:00
Eugene Tolbakov
2458b4edd5 feat(changes): add initial implementation (#1304)
* feat(changes): add initial implementation

* feat(changes): add docs
2023-04-03 12:02:13 +08:00
Eugene Tolbakov
5848f27c27 feat(resets): add initial implementation (#1306) 2023-04-03 11:37:01 +08:00
LFC
215cea151f refactor: move PromQL execution to Frontend (#1297)
* refactor: move PromQL execution to Frontend
2023-04-03 11:34:03 +08:00
Hao
a82f1f564d feat: implement stdvar_over_time function (#1291)
* feat: implement stdvar_over_time function

* feat: add more test for stdvar_over_time

* feat: add stdvar_over_time to functions.rs
2023-04-03 10:01:25 +08:00
LFC
48c2841e4d feat: execute python script in distributed mode (#1264)
* feat: execute python script in distributed mode

* fix: rebase develop
2023-04-02 20:36:48 +08:00
Lei, HUANG
d2542552d3 fix: unit test fails when try to copy table to s3 and copy back (#1302)
fix: unit test fails when try to copy table to s3 and copy back to greptimedb
2023-04-02 16:43:44 +08:00
Ruihang Xia
c0132e6cc0 feat: impl quantile_over_time function (#1287)
* fix qualifier alias

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix in another way

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl quantile_over_time

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-02 16:20:32 +08:00
dennis zhuang
aea932b891 fix: checkpoint fails when deleting old logs fails (#1300) 2023-04-02 11:06:36 +08:00
Lei, HUANG
0253136333 feat: buffered parquet writer (#1263)
* wip: use

* rebase develop

* chore: fix typos

* feat: replace export parquet writer with buffered writer

* fix: some cr comments

* feat: add sst_write_buffer_size config item to configure how many bytes to buffer before flushing to underlying storage

* chore: rebase onto develop
2023-04-01 17:21:19 +08:00
Eugene Tolbakov
6a05f617a4 feat(stddev_over_time): add initial implementation (#1289)
* feat(stddev_over_time): add initial implementation

* feat(stddev_over_time): address code review remarks, add compensated summation

* feat(stddev_over_time): fix fmt issues

* feat(stddev_over_time): add docs, minor renamings
2023-04-01 17:16:51 +08:00
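The "compensated summation" mentioned above is the standard Kahan technique for reducing floating-point error when accumulating many samples, e.g. while computing `stddev_over_time`. A generic sketch, not the actual implementation:

```rust
/// Kahan (compensated) summation: carry a running error term so that small
/// sample values are not lost when added to a large running sum.
fn kahan_sum(values: &[f64]) -> f64 {
    let mut sum = 0.0;
    let mut compensation = 0.0; // lost low-order bits from previous additions
    for &v in values {
        let y = v - compensation;
        let t = sum + y;
        compensation = (t - sum) - y;
        sum = t;
    }
    sum
}
```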
localhost
a2b262ebc0 chore: add http metrics server in datanode when greptime starts in distributed mode (#1256)
* chore: add http metrics server in datanode when greptime starts in distributed mode

* chore: add some docs and license

* chore: change metrics_addr to resolve address already in use error

* chore add metrics for meta service

* chore: replace metrics exporter http server from hyper to axum

* chore: format

* fix: datanode mode branching error

* fix: sqlness test address already in use and start metrics in default config

* chore: change metrics location

* chore: use builder pattern to builder httpserver

* chore: remove useless debug_assert macro in httpserver builder

* chore: resolve conflicting build error

* chore: format code
2023-03-31 18:37:52 +08:00
dennis zhuang
972f64c3d7 chore: improve opendal layers (#1295)
* chore: improve opendal layers

* chore: log level
2023-03-31 09:48:11 +00:00
LFC
eb77f9aafd feat: start LocalManager in Metasrv (#1279)
* feat: procedure store in Metasrv, backed by Etcd; start `LocalManager` in Metasrv leader

* fix: resolve PR comments

* fix: resolve PR comments
2023-03-31 15:32:59 +08:00
Yingwen
dee20144d7 feat: Implement procedure to alter a table for mito engine (#1259)
* feat: wip

* fix: Fix CreateMitoTable::table_schema not initialized from json

* feat: Implement AlterMitoTable procedure

* test: Add test for alter procedure

* feat: Register alter procedure

* fix: Recover procedures after catalog manager is started

* feat: Simplify usage of table schema in create table procedure

* test: Add rename test

* test: Add drop columns test
2023-03-31 14:40:54 +08:00
dennis zhuang
563adbabe9 feat!: improve region manifest service (#1268)
* feat: try to use batch delete in ManifestLogStorage

* feat: clean temp dir when startup with file backend

* refactor: export region manifest checkpoint actions magin and refactor storage options

* feat: purge unused manifest and checkpoint files by repeat gc task

* chore: debug deleted logs

* feat: adds RepeatedTask and refactor all gc tasks

* chore: clean code

* feat: export gc_duration to manifest config

* test: assert gc works

* fix: typo

* Update src/common/runtime/src/error.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* Update src/common/runtime/src/repeated_task.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* Update src/common/runtime/src/repeated_task.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* fix: format

* Update src/common/runtime/src/repeated_task.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: by CR comments

* chore: by CR comments

* fix: serde default for StorageConfig

* chore: remove compaction config in StandaloneOptions

---------

Co-authored-by: LFC <bayinamine@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-03-31 10:42:00 +08:00
Ruihang Xia
b71bb4e5fa feat: implement restart argument for sqlness-runner (#1262)
* refactor standalone mode and distribute mode start process

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement restart arg

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update tests/runner/src/env.rs

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-03-31 10:02:19 +08:00
LFC
fae293310c feat: unify describe table execution (#1285) 2023-03-31 09:59:19 +08:00
LFC
3e51640442 ci: release binary with embedded dashboard enabled (#1283) 2023-03-30 21:35:47 +08:00
discord9
b40193d7da test: align RsPy PyO3 Behavior (#1280)
* feat: allow PyList Return in PyO3 Backend

* feat: mixed list

* feat: align&test

* chore: PR advices
2023-03-30 17:45:21 +08:00
Ruihang Xia
b5e5f8e555 chore(deps): bump arrow and parquet to 36.0.0, and datafusion to the latest (#1282)
* chore: update arrow, parquet to 36.0 and datafusion

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update deps

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: LFC <bayinamine@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-03-30 16:24:10 +08:00
zyy17
192fa0caa5 ci: only builds binaries for manually trigger workflow (#1284) 2023-03-30 15:58:28 +08:00
Weny Xu
30eb676d6a feat: implement create external table parser (#1252)
* refactor: move parse_option_string to util

* feat: implement create external table parser
2023-03-30 13:37:53 +08:00
Ruihang Xia
d7cadf6e6d fix: nyc-taxi bench tools and limit max parallel compaction task number (#1275)
* limit max parallel compaction subtask

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct type map

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-29 09:16:53 +00:00
Lei, HUANG
d7a1435517 fix: remove backtrace from ratelimit error (#1273) 2023-03-29 15:58:01 +08:00
xiaomin tang
0943079de2 feat: Create SECURITY.md (#1270)
Create SECURITY.md
2023-03-28 19:14:29 +08:00
shuiyisong
509d07b798 chore: add build_table_route_prefix (#1269) 2023-03-28 16:26:24 +08:00
Yingwen
e72ce5eaa9 fix: Adds FileHandle to ChunkStream (#1255)
* test: Add compaction test

* test: Test read during compaction

* test: Add s3 object store to test

* test: only run compact test

* feat: Hold file handle in chunk stream

* test: check files still exist after compact

* feat: Revert changes to develop.yaml

* test: Simplify MockPurgeHandler
2023-03-28 16:22:07 +08:00
Ruihang Xia
f491a040f5 feat: implement rate, increase and delta in PromQL (#1258)
* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix increase fn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl rate and delta

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix IS_RATE condition

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more tests about rate and delta

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ensure range_length is not zero

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-28 15:21:06 +08:00
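`rate`, `increase` and `delta` all reduce to "how much did the series change over the window, in total or per second". A simplified, Prometheus-style sketch of the core calculation, including the counter-reset compensation and the non-zero range guard the bullets above mention; the real implementation also extrapolates to the window edges:

```rust
/// Simplified rate over `(timestamp_secs, value)` samples. Illustrative only.
fn simple_rate(samples: &[(f64, f64)]) -> Option<f64> {
    let first = samples.first()?;
    let last = samples.last()?;
    let mut increase = 0.0;
    let mut prev = first.1;
    for &(_, value) in &samples[1..] {
        // A counter never decreases; a drop means it was reset to zero,
        // so count the full new value instead of the (negative) difference.
        increase += if value < prev { value } else { value - prev };
        prev = value;
    }
    let elapsed = last.0 - first.0;
    // Guard against a zero-length range before dividing.
    (elapsed > 0.0).then(|| increase / elapsed)
}
```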
Yingwen
47179a7812 feat: Support sending multiple affected rows (#1203)
* feat: Support sending multiple affected rows

* feat: Skip federated check if query starts with insert

* style: Fix clippy
2023-03-28 14:34:14 +08:00
shuiyisong
995a28a27d feat: impl BatchDelete (#1253)
* chore: impl `BatchDelete`

* chore: add `batch_delete` to meta-client

* fix: auth param length check

* fix: auth param length check

* chore: rebase develop

* chore: use `filter_map`

Co-authored-by: LFC <bayinamine@gmail.com>

* chore: update error msg

Co-authored-by: LFC <bayinamine@gmail.com>

* fix: pre-allocate vec length

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-03-28 14:06:13 +08:00
LFC
ed1cb73ffc fix: a minor misuse of tokio::select (#1266) 2023-03-28 13:50:35 +08:00
dennis zhuang
0ffa628c22 refactor: scripts perf and metrics (#1261)
* refactor: retrieve pyvector datatype by inner vector

* perf: replace all ok_or to ok_or_else

* feat: adds metrics for scripts execution
2023-03-28 10:07:21 +08:00
Lei, HUANG
5edd2a3dbe feat: upgrade opendal (#1245)
* chore: upgrade opendal

* chore: finish upgrading opendal

* fix: clippy complaints

* fix some tests

* fix: all unit tests

* chore: rebase develop

* fix: sqlness tests

* optimize imports

* chore: rebase develop

* doc: add todo
2023-03-28 09:47:33 +08:00
Ning Sun
e63b28bff1 feat: add dbname and health check for grpc api (#1220)
* feat: add dbname and health check for grpc api

* refactor: move health check to dedicated service

* chore: switch to merged proto rev

* feat: implement healthcheck on server-side
2023-03-28 09:46:30 +08:00
zyy17
8140d4e3e5 ci: modify the copy path of binary artifacts (#1257) 2023-03-27 21:49:42 +08:00
shuiyisong
6825459c75 chore: ignore dashboard files (#1260) 2023-03-27 19:11:31 +08:00
Ning Sun
7eb4d81929 feat: adopt pgwire 0.12 and simplify encoding apis (#1250)
* feat: adopt pgwire 0.12 and simplify encoding apis

* refactor: remove duplicated format match clause
2023-03-27 18:16:43 +08:00
discord9
8ba0741c81 fix: set locals to main.dict too (#1242) 2023-03-27 15:23:52 +08:00
zyy17
0eeb5b460c ci: install python requests lib in release container image (#1241)
* ci: install python requests lib in release container image

* refactor: add requirements.txt
2023-03-27 15:20:31 +08:00
LFC
65ea6fd85f feat: embed dashboard into GreptimeDB binary (#1239)
* feat: embed dashboard into GreptimeDB binary

* fix: resolve PR comments
2023-03-27 15:08:44 +08:00
dennis zhuang
4f15b26b28 feat: region manifest checkpoint (#1202)
* chore: adds log when manifest protocol is changed

* chore: refactor region manifest

* temp commit

* feat: impl region manifest checkpoint

* feat: recover region version from manifest snapshot

* test: adds region snapshot test

* test: region manifest checkpoint

* test: alter region with manifest checkpoint

* fix: revert storage api

* feat: delete old snapshot

* refactor: manifest log storage

* Update src/storage/src/version.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/storage/src/manifest/checkpoint.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/storage/src/manifest/region.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/storage/src/manifest/region.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: by CR comments

* refactor: by CR comments

* fix: typo

* chore: tweak start_version

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-27 11:15:52 +08:00
Lei, HUANG
15ee4ac729 fix: noop flush impl for numbers table (#1247)
* fix: noop flush impl for numbers table

* fix: clippy
2023-03-27 10:54:07 +08:00
dennis zhuang
b4fc8c5b78 refactor: make sql function in scripts return a list of column vectors (#1243) 2023-03-27 08:50:19 +08:00
Lei, HUANG
6f81717866 fix: skip empty parquet (#1236)
* fix: returns None if parquet file does not contain any rows

* fix: skip empty parquet file

* chore: add doc

* rebase develop

* fix: use flatten instead of filter_map with identity
2023-03-26 09:39:15 +08:00
Lei, HUANG
77f9383daf fix: allow larger compaction window to reduce parallel task num (#1223)
fix: unit tests
2023-03-24 17:12:13 +08:00
discord9
c788b7fc26 feat: slicing PyVector&Create DataFrame from sql (#1190)
* chore: some typos

* feat: slicing for pyo3 vector

* feat: slice tests

* feat: from_sql

* feat: from_sql for dataframe

* test: df tests

* feat: `from_sql` for rspython

* test: tweak a bit

* test: and CR advices

* typos: ordered points

* chore: update error msg

* test: add more `slicing` testcase
2023-03-24 15:37:45 +08:00
LFC
0f160a73be feat: metasrv collects datanode heartbeats for region failure detection (#1214)
* feat: metasrv collects datanode heartbeats for region failure detection

* chore: change visibility

* fix: fragile tests

* Update src/meta-srv/src/handler/persist_stats_handler.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

* Update src/meta-srv/src/handler/failure_handler.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

* fix: resolve PR comments

* fix: resolve PR comments

* fix: resolve PR comments

---------

Co-authored-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-03-24 04:28:34 +00:00
LFC
92963b9614 feat: execute "delete" in query engine (in the form of "LogicalPlan") (#1222)
fix: execute "delete" in query engine (in the form of "LogicalPlan")
2023-03-24 12:11:58 +08:00
Yingwen
f1139fba59 fix: Holds FileHandle in ParquetReader to avoid the purger purges it (#1224) 2023-03-23 14:24:25 +00:00
Ruihang Xia
4e552245b1 fix: range func tests (#1221)
* remove ignore on range fn tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* placeholder for changes, deriv and resets

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-23 17:33:11 +08:00
Ruihang Xia
3126bbc1c7 docs: use CDN for logos (#1219)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-23 11:39:24 +08:00
LFC
b77b561bc8 refactor: execute insert with select in query engine (#1181)
* refactor: execute insert with select in query engine

* fix: resolve PR comments
2023-03-23 10:38:26 +08:00
dennis zhuang
501faad8ab chore: rename params in flush api (#1213) 2023-03-22 14:07:23 +08:00
Eugene Tolbakov
5397a9bbe6 feat(to_unixtime): add initial implementation (#1186)
* feat(to_unixtime): add initial implementation

* feat(to_unixtime): use Timestamp for conversion

* feat(to_unixtime):  implement conversion to Result<VectorRef>

* feat(to_unixtime): make unit test pass

* feat(to_unixtime): preserve None for invalid timestamps

* feat(to_unixtime): address code review suggestions

* feat(to_unixtime): add an sqlness test

* feat(to_unixtime): adjust the assertion for the sqlness test

* Update tests/cases/standalone/common/select/dummy.sql

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-21 12:41:07 +00:00
Ruihang Xia
f351ee7042 docs: update document string and site (#1211)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-21 07:01:08 +00:00
Ruihang Xia
e0493e0b8f feat: flush all tables on shutdown (#1185)
* feat: impl flush on shutdown (#14)

* feat: impl flush on shutdown

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* powerful if-else!

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* retrieve table handler from schema provider

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: impl flush on shutdown

* feat: impl flush on shutdown

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* powerful if-else!

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* retrieve table handler from schema provider

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/datanode/src/instance.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* fix: uncommitted merge change

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-03-21 14:36:30 +08:00
LFC
b2a09c888a feat: phi accrual failure detector (#1200) 2023-03-21 11:47:47 +08:00
LFC
af101480b3 feat: add gRPC reflection service (#1208)
* feat: add gRPC reflection service

* feat: add gRPC reflection service
2023-03-21 11:23:29 +08:00
Weny Xu
b8f7f603cf test: add copy clause sqlness tests (#1198) 2023-03-21 11:22:26 +08:00
dennis zhuang
8fb97ea1d8 fix: losing region numbers after altering table (#1209) 2023-03-21 11:19:43 +08:00
discord9
21ce9c1163 docs: more explain in readme (#1195)
* docs: more explain in readme

* fix: typos

* fix: CR advices
2023-03-20 21:56:34 +08:00
Ruihang Xia
0a22375ac1 fix: nyc-taxi bench suite (#1204)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-20 21:53:01 +08:00
fys
0596d20a3b fix: can not create table in the local distributed environment (#1207)
fix: create table in local distribute env
2023-03-20 20:12:35 +08:00
Weny Xu
e19c8fa2b6 refactor: combine Copy To and Copy From (#1197)
* refactor: combine Copy To and Copy From

* Apply suggestions from code review

Co-authored-by: LFC <bayinamine@gmail.com>

* Apply suggestions from code review

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-03-20 19:23:25 +08:00
LFC
ad886f5b3e feat: GRPC client stream interface for insertion (#1206)
* feat: GRPC client stream interface for insertion

* feat: GRPC client stream interface for insertion
2023-03-20 18:45:37 +08:00
LFC
f6669a8201 feat: add GRPC unary call service to our GreptimeDB (#1196)
* feat: add GRPC unary call service to our GreptimeDB
2023-03-20 14:27:32 +08:00
Yingwen
ad5c47185d feat: wait flush until the flush is done (#1188)
* feat: Add wait argument to flush

* test(storage): Fix flush tests
2023-03-20 11:25:19 +08:00
zyy17
64441616db ci: refactor compile-python.sh and use the python310 to build amd64 binary (#1199) 2023-03-18 16:16:15 +08:00
zyy17
09491d6aee ci: release the standalone binaries with pyo3 and install python utils in images (#1194)
* ci: install python3 and python3-dev in CI Dockerfile

* ci: release the standalone binaries with pyo3 support for multiple platforms

* refactor: install pip and pyarrow

* refactor: specify the python version
2023-03-17 15:42:13 +08:00
Weny Xu
7cfa30b2ab feat: add shutdown for standalone and metasrv (#1174) 2023-03-17 11:35:17 +08:00
Ning Sun
a7676d8860 refactor: port div_ceil from stdlib to avoid unstable features (#1191)
* refactor: use float div&ceil to avoid unstable features

* refactor: port div_ceil from rust stdlib
2023-03-16 22:55:35 +08:00
612 changed files with 36422 additions and 13169 deletions

View File

@@ -3,3 +3,14 @@ linker = "aarch64-linux-gnu-gcc"
[alias]
sqlness = "run --bin sqlness-runner --"
[build]
rustflags = [
# lints
# TODO: use lint configuration in cargo https://github.com/rust-lang/cargo/issues/5034
"-Wclippy::print_stdout",
"-Wclippy::print_stderr",
"-Wclippy::implicit_clone",
"-Aclippy::items_after_test_module",
]

View File

@@ -3,6 +3,7 @@ GT_S3_BUCKET=S3 bucket
GT_S3_ACCESS_KEY_ID=S3 access key id
GT_S3_ACCESS_KEY=S3 secret access key
GT_S3_ENDPOINT_URL=S3 endpoint url
GT_S3_REGION=S3 region
# Settings for oss test
GT_OSS_BUCKET=OSS bucket
GT_OSS_ACCESS_KEY_ID=OSS access key id

View File

@@ -81,6 +81,5 @@ body:
Please walk us through and provide steps and details on how
to reproduce the issue. If possible, provide scripts that we
can run to trigger the bug.
render: bash
validations:
required: true

View File

@@ -13,7 +13,7 @@ on:
name: Build API docs
env:
RUST_TOOLCHAIN: nightly-2023-02-26
RUST_TOOLCHAIN: nightly-2023-05-03
jobs:
apidoc:

View File

@@ -24,7 +24,7 @@ on:
name: CI
env:
RUST_TOOLCHAIN: nightly-2023-02-26
RUST_TOOLCHAIN: nightly-2023-05-03
jobs:
typos:
@@ -183,7 +183,7 @@ jobs:
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Run cargo clippy
run: cargo clippy --workspace --all-targets -- -D warnings -D clippy::print_stdout -D clippy::print_stderr
run: cargo clippy --workspace --all-targets -- -D warnings
coverage:
if: github.event.pull_request.draft == false
@@ -216,7 +216,7 @@ jobs:
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Collect coverage data
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend -F dashboard
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
RUST_BACKTRACE: 1
@@ -224,6 +224,7 @@ jobs:
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
GT_S3_REGION: ${{ secrets.S3_REGION }}
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Codecov upload
uses: codecov/codecov-action@v2

View File

@@ -5,25 +5,119 @@ on:
schedule:
# At 00:00 on Monday.
- cron: '0 0 * * 1'
# Manually trigger only builds binaries.
workflow_dispatch:
name: Release
env:
RUST_TOOLCHAIN: nightly-2023-02-26
RUST_TOOLCHAIN: nightly-2023-05-03
SCHEDULED_BUILD_VERSION_PREFIX: v0.2.0
SCHEDULED_BUILD_VERSION_PREFIX: v0.3.0
SCHEDULED_PERIOD: nightly
CARGO_PROFILE: nightly
## FIXME(zyy17): Enable it after the tests are stabled.
DISABLE_RUN_TESTS: true
# Controls whether to run tests, include unit-test, integration-test and sqlness.
DISABLE_RUN_TESTS: false
jobs:
build:
name: Build binary
build-macos:
name: Build macOS binary
strategy:
matrix:
# The file format is greptime-<os>-<arch>
include:
- arch: aarch64-apple-darwin
os: macos-latest
file: greptime-darwin-arm64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
file: greptime-darwin-amd64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: aarch64-apple-darwin
os: macos-latest
file: greptime-darwin-arm64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
file: greptime-darwin-amd64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.continue-on-error }}
if: github.repository == 'GreptimeTeam/greptimedb'
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Cache cargo assets
id: cache
uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
target/
key: ${{ matrix.arch }}-build-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install Protoc for macos
if: contains(matrix.arch, 'darwin')
run: |
brew install protobuf
- name: Install etcd for macos
if: contains(matrix.arch, 'darwin')
run: |
brew install etcd
brew services start etcd
- name: Install rust toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
targets: ${{ matrix.arch }}
- name: Output package versions
run: protoc --version ; cargo version ; rustc --version ; gcc --version ; g++ --version
# - name: Run tests
# if: env.DISABLE_RUN_TESTS == 'false'
# run: make unit-test integration-test sqlness-test
- name: Run cargo build
if: contains(matrix.arch, 'darwin') || contains(matrix.opts, 'pyo3_backend') == false
run: cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Calculate checksum and rename binary
shell: bash
run: |
cd target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}
chmod +x greptime
tar -zcvf ${{ matrix.file }}.tgz greptime
echo $(shasum -a 256 ${{ matrix.file }}.tgz | cut -f1 -d' ') > ${{ matrix.file }}.sha256sum
- name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.file }}
path: target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}/${{ matrix.file }}.tgz
- name: Upload checksum of artifacts
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.file }}.sha256sum
path: target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}/${{ matrix.file }}.sha256sum
build-linux:
name: Build linux binary
strategy:
matrix:
# The file format is greptime-<os>-<arch>
@@ -32,22 +126,22 @@ jobs:
os: ubuntu-2004-16-cores
file: greptime-linux-amd64
continue-on-error: false
opts: "-F pyo3_backend"
opts: "-F servers/dashboard"
- arch: aarch64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-arm64
continue-on-error: false
opts: "-F pyo3_backend"
- arch: aarch64-apple-darwin
os: macos-latest
file: greptime-darwin-arm64
opts: "-F servers/dashboard"
- arch: x86_64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-amd64-pyo3
continue-on-error: false
opts: "-F pyo3_backend"
- arch: x86_64-apple-darwin
os: macos-latest
file: greptime-darwin-amd64
opts: "-F pyo3_backend,servers/dashboard"
- arch: aarch64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-arm64-pyo3
continue-on-error: false
opts: "-F pyo3_backend"
opts: "-F pyo3_backend,servers/dashboard"
runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.continue-on-error }}
if: github.repository == 'GreptimeTeam/greptimedb'
@@ -75,11 +169,6 @@ jobs:
sudo cp protoc/bin/protoc /usr/local/bin/
sudo cp -r protoc/include/google /usr/local/include/
- name: Install Protoc for macos
if: contains(matrix.arch, 'darwin')
run: |
brew install protobuf
- name: Install etcd for linux
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: |
@@ -93,23 +182,18 @@ jobs:
sudo cp -a /tmp/etcd-download/etcd* /usr/local/bin/
nohup etcd >/tmp/etcd.log 2>&1 &
- name: Install etcd for macos
if: contains(matrix.arch, 'darwin')
run: |
brew install etcd
brew services start etcd
- name: Install dependencies for linux
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: |
sudo apt-get -y update
sudo apt-get -y install libssl-dev pkg-config g++-aarch64-linux-gnu gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu wget
- name: Compile Python 3.10.10 from source for Aarch64
if: contains(matrix.arch, 'aarch64-unknown-linux-gnu')
# FIXME(zyy17): Should we specify the version of python when building binary for darwin?
- name: Compile Python 3.10.10 from source for linux
if: contains(matrix.arch, 'linux') && contains(matrix.opts, 'pyo3_backend')
run: |
sudo chmod +x ./docker/aarch64/compile-python.sh
sudo ./docker/aarch64/compile-python.sh
sudo ./docker/aarch64/compile-python.sh ${{ matrix.arch }}
- name: Install rust toolchain
uses: dtolnay/rust-toolchain@master
@@ -124,18 +208,52 @@ jobs:
if: env.DISABLE_RUN_TESTS == 'false'
run: make unit-test integration-test sqlness-test
- name: Run cargo build for aarch64-linux
if: contains(matrix.arch, 'aarch64-unknown-linux-gnu')
- name: Run cargo build
if: contains(matrix.arch, 'darwin') || contains(matrix.opts, 'pyo3_backend') == false
run: cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Run cargo build with pyo3 for aarch64-linux
if: contains(matrix.arch, 'aarch64-unknown-linux-gnu') && contains(matrix.opts, 'pyo3_backend')
run: |
# TODO(zyy17): We should make PYO3_CROSS_LIB_DIR configurable.
export PYO3_CROSS_LIB_DIR=$(pwd)/python_arm64_build/lib
export PYTHON_INSTALL_PATH_AMD64=${PWD}/python-3.10.10/amd64
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
export PYO3_CROSS_LIB_DIR=${PWD}/python-3.10.10/aarch64
echo "PYO3_CROSS_LIB_DIR: $PYO3_CROSS_LIB_DIR"
alias python=python3
alias python=$PYTHON_INSTALL_PATH_AMD64/bin/python3
alias pip=$PYTHON_INSTALL_PATH_AMD64/bin/python3-pip
cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Run cargo build
if: contains(matrix.arch, 'aarch64-unknown-linux-gnu') == false
run: cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Run cargo build with pyo3 for amd64-linux
if: contains(matrix.arch, 'x86_64-unknown-linux-gnu') && contains(matrix.opts, 'pyo3_backend')
run: |
export PYTHON_INSTALL_PATH_AMD64=${PWD}/python-3.10.10/amd64
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
echo "implementation=CPython" >> pyo3.config
echo "version=3.10" >> pyo3.config
echo "implementation=CPython" >> pyo3.config
echo "shared=true" >> pyo3.config
echo "abi3=true" >> pyo3.config
echo "lib_name=python3.10" >> pyo3.config
echo "lib_dir=$PYTHON_INSTALL_PATH_AMD64/lib" >> pyo3.config
echo "executable=$PYTHON_INSTALL_PATH_AMD64/bin/python3" >> pyo3.config
echo "pointer_width=64" >> pyo3.config
echo "build_flags=" >> pyo3.config
echo "suppress_build_script_link_lines=false" >> pyo3.config
cat pyo3.config
export PYO3_CONFIG_FILE=${PWD}/pyo3.config
alias python=$PYTHON_INSTALL_PATH_AMD64/bin/python3
alias pip=$PYTHON_INSTALL_PATH_AMD64/bin/python3-pip
cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Calculate checksum and rename binary
shell: bash
@@ -159,9 +277,9 @@ jobs:
docker:
name: Build docker image
needs: [build]
needs: [build-linux, build-macos]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
steps:
- name: Checkout sources
uses: actions/checkout@v3
@@ -196,35 +314,33 @@ jobs:
- name: Download amd64 binary
uses: actions/download-artifact@v3
with:
name: greptime-linux-amd64
name: greptime-linux-amd64-pyo3
path: amd64
- name: Unzip the amd64 artifacts
run: |
cd amd64
tar xvf greptime-linux-amd64.tgz
rm greptime-linux-amd64.tgz
tar xvf amd64/greptime-linux-amd64-pyo3.tgz -C amd64/ && rm amd64/greptime-linux-amd64-pyo3.tgz
cp -r amd64 docker/ci
- name: Download arm64 binary
id: download-arm64
uses: actions/download-artifact@v3
with:
name: greptime-linux-arm64
name: greptime-linux-arm64-pyo3
path: arm64
- name: Unzip the arm64 artifacts
id: unzip-arm64
if: success() || steps.download-arm64.conclusion == 'success'
run: |
cd arm64
tar xvf greptime-linux-arm64.tgz
rm greptime-linux-arm64.tgz
tar xvf arm64/greptime-linux-arm64-pyo3.tgz -C arm64/ && rm arm64/greptime-linux-arm64-pyo3.tgz
cp -r arm64 docker/ci
- name: Build and push all
uses: docker/build-push-action@v3
if: success() || steps.unzip-arm64.conclusion == 'success' # Build and push all platform if unzip-arm64 succeeds
with:
context: .
context: ./docker/ci/
file: ./docker/ci/Dockerfile
push: true
platforms: linux/amd64,linux/arm64
@@ -236,7 +352,7 @@ jobs:
uses: docker/build-push-action@v3
if: success() || steps.download-arm64.conclusion == 'failure' # Only build and push amd64 platform if download-arm64 fails
with:
context: .
context: ./docker/ci/
file: ./docker/ci/Dockerfile
push: true
platforms: linux/amd64
@@ -247,9 +363,9 @@ jobs:
release:
name: Release artifacts
# Release artifacts only when all the artifacts are built successfully.
needs: [build,docker]
needs: [build-linux, build-macos, docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
steps:
- name: Checkout sources
uses: actions/checkout@v3
@@ -265,35 +381,50 @@ jobs:
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-${{ env.SCHEDULED_PERIOD }}-$buildTime
echo "SCHEDULED_BUILD_VERSION=${SCHEDULED_BUILD_VERSION}" >> $GITHUB_ENV
# Only publish release when the release tag is like v1.0.0, v1.0.1, v1.0.2, etc.
- name: Set whether it is the latest release
run: |
if [[ "${{ github.ref_name }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "prerelease=false" >> $GITHUB_ENV
echo "makeLatest=true" >> $GITHUB_ENV
else
echo "prerelease=true" >> $GITHUB_ENV
echo "makeLatest=false" >> $GITHUB_ENV
fi
- name: Create scheduled build git tag
if: github.event_name == 'schedule'
run: |
git tag ${{ env.SCHEDULED_BUILD_VERSION }}
- name: Publish scheduled release # configure the different release title and tags.
uses: softprops/action-gh-release@v1
uses: ncipollo/release-action@v1
if: github.event_name == 'schedule'
with:
name: "Release ${{ env.SCHEDULED_BUILD_VERSION }}"
tag_name: ${{ env.SCHEDULED_BUILD_VERSION }}
generate_release_notes: true
files: |
prerelease: ${{ env.prerelease }}
makeLatest: ${{ env.makeLatest }}
tag: ${{ env.SCHEDULED_BUILD_VERSION }}
generateReleaseNotes: true
artifacts: |
**/greptime-*
- name: Publish release
uses: softprops/action-gh-release@v1
uses: ncipollo/release-action@v1
if: github.event_name != 'schedule'
with:
name: "Release ${{ github.ref_name }}"
files: |
name: "${{ github.ref_name }}"
prerelease: ${{ env.prerelease }}
makeLatest: ${{ env.makeLatest }}
generateReleaseNotes: true
artifacts: |
**/greptime-*
docker-push-uhub:
name: Push docker image to UCloud Container Registry
docker-push-acr:
name: Push docker image to alibaba cloud container registry
needs: [docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb'
# Push to uhub may fail(500 error), but we don't want to block the release process. The failed job will be retried manually.
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
continue-on-error: true
steps:
- name: Checkout sources
@@ -305,12 +436,12 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to UCloud Container Registry
- name: Login to alibaba cloud container registry
uses: docker/login-action@v2
with:
registry: uhub.service.ucloud.cn
username: ${{ secrets.UCLOUD_USERNAME }}
password: ${{ secrets.UCLOUD_PASSWORD }}
registry: registry.cn-hangzhou.aliyuncs.com
username: ${{ secrets.ALICLOUD_USERNAME }}
password: ${{ secrets.ALICLOUD_PASSWORD }}
- name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}
shell: bash
@@ -327,9 +458,9 @@ jobs:
VERSION=${{ github.ref_name }}
echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV
- name: Push image to uhub # Use 'docker buildx imagetools create' to create a new image base on source image.
- name: Push image to alibaba cloud container registry # Use 'docker buildx imagetools create' to create a new image base on source image.
run: |
docker buildx imagetools create \
--tag uhub.service.ucloud.cn/greptime/greptimedb:latest \
--tag uhub.service.ucloud.cn/greptime/greptimedb:${{ env.IMAGE_TAG }} \
--tag registry.cn-hangzhou.aliyuncs.com/greptime/greptimedb:latest \
--tag registry.cn-hangzhou.aliyuncs.com/greptime/greptimedb:${{ env.IMAGE_TAG }} \
greptime/greptimedb:${{ env.IMAGE_TAG }}

4
.gitignore vendored
View File

@@ -35,3 +35,7 @@ benchmarks/data
# dotenv
.env
# dashboard files
!/src/servers/dashboard/VERSION
/src/servers/dashboard/*

View File

@@ -51,7 +51,7 @@ GreptimeDB uses the [Apache 2.0 license](https://github.com/GreptimeTeam/greptim
- To ensure that the community is free and confident in its ability to use your contributions, please sign the Contributor License Agreement (CLA), which will be incorporated in the pull request process.
- Make sure all your code is formatted and follows the [coding style](https://pingcap.github.io/style-guide/rust/).
- Make sure all unit tests pass (using `cargo test --workspace` or [nextest](https://nexte.st/index.html) `cargo nextest run`).
- Make sure all clippy warnings are fixed (you can check it locally by running `cargo clippy --workspace --all-targets -- -D warnings -D clippy::print_stdout -D clippy::print_stderr`).
- Make sure all clippy warnings are fixed (you can check it locally by running `cargo clippy --workspace --all-targets -- -D warnings`).
#### `pre-commit` Hooks
@@ -107,6 +107,6 @@ The core team will be thrilled if you participate in any way you like. When you
Also, see some extra GreptimeDB content:
- [GreptimeDB Docs](https://greptime.com/docs)
- [Learn GreptimeDB](https://greptime.com/products/db)
- [GreptimeDB Docs](https://docs.greptime.com/)
- [Learn GreptimeDB](https://greptime.com/product/db)
- [Greptime Inc. Website](https://greptime.com)

2832
Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -7,6 +7,7 @@ members = [
"src/cmd",
"src/common/base",
"src/common/catalog",
"src/common/datasource",
"src/common/error",
"src/common/function",
"src/common/function-macro",
@@ -14,6 +15,7 @@ members = [
"src/common/grpc-expr",
"src/common/mem-prof",
"src/common/procedure",
"src/common/procedure-test",
"src/common/query",
"src/common/recordbatch",
"src/common/runtime",
@@ -23,6 +25,7 @@ members = [
"src/common/time",
"src/datanode",
"src/datatypes",
"src/file-table-engine",
"src/frontend",
"src/log-store",
"src/meta-client",
@@ -45,38 +48,47 @@ members = [
]
[workspace.package]
version = "0.1.1"
version = "0.2.0"
edition = "2021"
license = "Apache-2.0"
[workspace.dependencies]
arrow = { version = "34.0" }
arrow-array = "34.0"
arrow-flight = "34.0"
arrow-schema = { version = "34.0", features = ["serde"] }
arrow = { version = "37.0" }
arrow-array = "37.0"
arrow-flight = "37.0"
arrow-schema = { version = "37.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
chrono = { version = "0.4", features = ["serde"] }
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "146a949218ec970784974137277cde3b4e547d0a" }
# TODO(ruihang): use arrow-datafusion when it contains https://github.com/apache/arrow-datafusion/pull/6032
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
futures = "0.3"
futures-util = "0.3"
parquet = "34.0"
parquet = "37.0"
paste = "1.0"
prost = "0.11"
rand = "0.8"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
snafu = { version = "0.7", features = ["backtraces"] }
sqlparser = "0.32"
sqlparser = "0.33"
tempfile = "3"
tokio = { version = "1.24.2", features = ["full"] }
tokio-util = "0.7"
tonic = { version = "0.8", features = ["tls"] }
tokio-util = { version = "0.7", features = ["io-util", "compat"] }
tonic = { version = "0.9", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
metrics = "0.20"
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "f0798c4c648d89f51abe63e870919c75dd463199" }
[workspace.dependencies.meter-macros]
git = "https://github.com/GreptimeTeam/greptime-meter.git"
rev = "f0798c4c648d89f51abe63e870919c75dd463199"
[profile.release]
debug = true

View File

@@ -21,6 +21,10 @@ fmt: ## Format all the Rust code.
.PHONY: fmt-toml
fmt-toml: ## Format all TOML files.
taplo format --option "indent_string= "
.PHONY: check-toml
check-toml: ## Check all TOML files.
taplo format --check --option "indent_string= "
.PHONY: docker-image
@@ -47,7 +51,7 @@ check: ## Cargo check all the targets.
.PHONY: clippy
clippy: ## Check clippy rules.
cargo clippy --workspace --all-targets -- -D warnings -D clippy::print_stdout -D clippy::print_stderr
cargo clippy --workspace --all-targets -- -D warnings
.PHONY: fmt-check
fmt-check: ## Check code format.

View File

@@ -1,14 +1,14 @@
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="/docs/logo-text-padding.png">
<source media="(prefers-color-scheme: dark)" srcset="/docs/logo-text-padding-dark.png">
<img alt="GreptimeDB Logo" src="/docs/logo-text-padding.png" width="400px">
<source media="(prefers-color-scheme: light)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding.png">
<source media="(prefers-color-scheme: dark)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding-dark.png">
<img alt="GreptimeDB Logo" src="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding.png" width="400px">
</picture>
</p>
<h3 align="center">
The next-generation hybrid timeseries/analytics processing database in the cloud
The next-generation hybrid time-series/analytics processing database in the cloud
</h3>
<p align="center">
@@ -23,6 +23,8 @@
<a href="https://twitter.com/greptime"><img src="https://img.shields.io/badge/twitter-follow_us-1d9bf0.svg"></a>
&nbsp;
<a href="https://www.linkedin.com/company/greptime/"><img src="https://img.shields.io/badge/linkedin-connect_with_us-0a66c2.svg"></a>
&nbsp;
<a href="https://greptime.com/slack"><img src="https://img.shields.io/badge/slack-GreptimeDB-0abd59?logo=slack" alt="slack" /></a>
</p>
## What is GreptimeDB
@@ -36,15 +38,23 @@ Our core developers have been building time-series data platform
for years. Based on their best-practices, GreptimeDB is born to give you:
- A standalone binary that scales to highly-available distributed cluster, providing a transparent experience for cluster users
- Optimized columnar layout for handling time-series data; compacted, compressed, stored on various storage backends
- Flexible index options, tackling high cardinality issues down
- Optimized columnar layout for handling time-series data; compacted, compressed, and stored on various storage backends
- Flexible indexes, tackling high-cardinality issues
- Distributed, parallel query execution, leveraging elastic computing resource
- Native SQL, and Python scripting for advanced analytical scenarios
- Widely adopted database protocols and APIs
- Widely adopted database protocols and APIs, with native PromQL support
- Extensible table engine architecture for extensive workloads
## Quick Start
### GreptimePlay
Try out the features of GreptimeDB right from your browser.
<a href="https://greptime.com/playground" target="_blank"><img
src="https://www.greptime.com/assets/greptime_play_button_colorful.1bbe2746.png"
alt="GreptimePlay" width="200px" /></a>
### Build
#### Build from Source
@@ -61,12 +71,12 @@ To compile GreptimeDB from source, you'll need:
find installation instructions [here](https://grpc.io/docs/protoc-installation/).
**Note that `protoc` version needs to be >= 3.15** because we have used the `optional`
keyword. You can check it with `protoc --version`.
- python3-dev or python3-devel(Optional, only needed if you want to run scripts
in cpython): this install a Python shared library required for running python
- python3-dev or python3-devel (optional, only needed if you want to run scripts in CPython; you also need to enable the `pyo3_backend` feature when compiling, either by `cargo run -F pyo3_backend` or by adding `pyo3_backend` to the `features.default` of src/script/Cargo.toml, like `default = ["python", "pyo3_backend"]`): this installs the Python shared library required for running the Python
scripting engine(In CPython Mode). This is available as `python3-dev` on
ubuntu, you can install it with `sudo apt install python3-dev`, or
`python3-devel` on RPM based distributions (e.g. Fedora, Red Hat, SuSE). Mac's
`Python3` package should have this shared library by default.
`Python3` package should have this shared library by default. More detail for compiling with PyO3 can be found in [PyO3](https://pyo3.rs/v0.18.1/building_and_distribution#configuring-the-python-version)'s documentation.
#### Build with Docker
@@ -129,16 +139,16 @@ about Kubernetes deployment, check our [docs](https://docs.greptime.com/).
SELECT * FROM monitor;
```
```TEXT
+-------+---------------------+------+--------+
| host | ts | cpu | memory |
+-------+---------------------+------+--------+
| host1 | 2022-08-19 08:32:35 | 66.6 | 1024 |
| host2 | 2022-08-19 08:32:36 | 77.7 | 2048 |
| host3 | 2022-08-19 08:32:37 | 88.8 | 4096 |
+-------+---------------------+------+--------+
3 rows in set (0.01 sec)
```
```TEXT
+-------+--------------------------+------+--------+
| host | ts | cpu | memory |
+-------+--------------------------+------+--------+
| host1 | 2022-08-19 16:32:35+0800 | 66.6 | 1024 |
| host2 | 2022-08-19 16:32:36+0800 | 77.7 | 2048 |
| host3 | 2022-08-19 16:32:37+0800 | 88.8 | 4096 |
+-------+--------------------------+------+--------+
3 rows in set (0.03 sec)
```
You can always cleanup test database by removing `/tmp/greptimedb`.
@@ -147,9 +157,9 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
### Installation
- [Pre-built Binaries](https://github.com/GreptimeTeam/greptimedb/releases):
downloadable pre-built binaries for Linux and MacOS
- [Docker Images](https://hub.docker.com/r/greptime/greptimedb): pre-built
Docker images
For Linux and macOS, you can easily download pre-built binaries that are ready to use. In most cases, downloading the version without PyO3 is sufficient. However, if you plan to run scripts in CPython (and use Python packages like NumPy and Pandas), you will need to download the version with PyO3 and install a Python interpreter whose version matches the one the PyO3 build was compiled against. We recommend using virtualenv to manage multiple Python versions during installation.
- [Docker Images](https://hub.docker.com/r/greptime/greptimedb) (**recommended**): pre-built Docker images; this is the easiest way to try GreptimeDB. By default the image runs CPython scripts with `pyo3_backend` enabled.
- [`gtctl`](https://github.com/GreptimeTeam/gtctl): the command-line tool for
Kubernetes deployment
@@ -158,6 +168,7 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
- GreptimeDB [User Guide](https://docs.greptime.com/user-guide/concepts.html)
- GreptimeDB [Developer
Guide](https://docs.greptime.com/developer-guide/overview.html)
- GreptimeDB [internal code document](https://greptimedb.rs)
### Dashboard
- [The dashboard UI for GreptimeDB](https://github.com/GreptimeTeam/dashboard)

19
SECURITY.md Normal file
View File

@@ -0,0 +1,19 @@
# Security Policy
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| >= v0.1.0 | :white_check_mark: |
| < v0.1.0 | :x: |
## Reporting a Vulnerability
We place great importance on the security of GreptimeDB code, software,
and cloud platform. If you come across a security vulnerability in GreptimeDB,
we kindly request that you inform us immediately. We will thoroughly investigate
all valid reports and make every effort to resolve the issue promptly.
To report any issues or vulnerabilities, please email us at info@greptime.com, rather than
posting publicly on GitHub. Be sure to provide us with the version identifier as well as details
on how the vulnerability can be exploited.

View File

@@ -21,12 +21,12 @@ use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::time::Instant;
use arrow::array::{ArrayRef, PrimitiveArray, StringArray, TimestampNanosecondArray};
use arrow::array::{ArrayRef, PrimitiveArray, StringArray, TimestampMicrosecondArray};
use arrow::datatypes::{DataType, Float64Type, Int64Type};
use arrow::record_batch::RecordBatch;
use clap::Parser;
use client::api::v1::column::Values;
use client::api::v1::{Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest, TableId};
use client::api::v1::{Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest};
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
@@ -61,7 +61,7 @@ struct Args {
#[arg(long = "skip-read")]
skip_read: bool,
#[arg(short, long, default_value_t = String::from("127.0.0.1:3001"))]
#[arg(short, long, default_value_t = String::from("127.0.0.1:4001"))]
endpoint: String,
}
@@ -97,6 +97,9 @@ async fn write_data(
for record_batch in record_batch_reader {
let record_batch = record_batch.unwrap();
if !is_record_batch_full(&record_batch) {
continue;
}
let (columns, row_count) = convert_record_batch(record_batch);
let request = InsertRequest {
table_name: TABLE_NAME.to_string(),
@@ -122,11 +125,17 @@ fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
let mut columns = vec![];
for (array, field) in record_batch.columns().iter().zip(fields.iter()) {
let values = build_values(array);
let (values, datatype) = build_values(array);
let column = Column {
column_name: field.name().to_owned(),
column_name: field.name().clone(),
values: Some(values),
null_mask: vec![],
null_mask: array
.to_data()
.nulls()
.map(|bitmap| bitmap.buffer().as_slice().to_vec())
.unwrap_or_default(),
datatype: datatype.into(),
// datatype and semantic_type are set to default
..Default::default()
};
@@ -136,7 +145,7 @@ fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
(columns, row_count as _)
}
fn build_values(column: &ArrayRef) -> Values {
fn build_values(column: &ArrayRef) -> (Values, ColumnDataType) {
match column.data_type() {
DataType::Int64 => {
let array = column
@@ -144,10 +153,13 @@ fn build_values(column: &ArrayRef) -> Values {
.downcast_ref::<PrimitiveArray<Int64Type>>()
.unwrap();
let values = array.values();
Values {
i64_values: values.to_vec(),
..Default::default()
}
(
Values {
i64_values: values.to_vec(),
..Default::default()
},
ColumnDataType::Int64,
)
}
DataType::Float64 => {
let array = column
@@ -155,29 +167,38 @@ fn build_values(column: &ArrayRef) -> Values {
.downcast_ref::<PrimitiveArray<Float64Type>>()
.unwrap();
let values = array.values();
Values {
f64_values: values.to_vec(),
..Default::default()
}
(
Values {
f64_values: values.to_vec(),
..Default::default()
},
ColumnDataType::Float64,
)
}
DataType::Timestamp(_, _) => {
let array = column
.as_any()
.downcast_ref::<TimestampNanosecondArray>()
.downcast_ref::<TimestampMicrosecondArray>()
.unwrap();
let values = array.values();
Values {
i64_values: values.to_vec(),
..Default::default()
}
(
Values {
ts_microsecond_values: values.to_vec(),
..Default::default()
},
ColumnDataType::TimestampMicrosecond,
)
}
DataType::Utf8 => {
let array = column.as_any().downcast_ref::<StringArray>().unwrap();
let values = array.iter().filter_map(|s| s.map(String::from)).collect();
Values {
string_values: values,
..Default::default()
}
(
Values {
string_values: values,
..Default::default()
},
ColumnDataType::String,
)
}
DataType::Null
| DataType::Boolean
@@ -204,7 +225,7 @@ fn build_values(column: &ArrayRef) -> Values {
| DataType::FixedSizeList(_, _)
| DataType::LargeList(_)
| DataType::Struct(_)
| DataType::Union(_, _, _)
| DataType::Union(_, _)
| DataType::Dictionary(_, _)
| DataType::Decimal128(_, _)
| DataType::Decimal256(_, _)
@@ -213,6 +234,10 @@ fn build_values(column: &ArrayRef) -> Values {
}
}
fn is_record_batch_full(batch: &RecordBatch) -> bool {
batch.columns().iter().all(|col| col.null_count() == 0)
}
fn create_table_expr() -> CreateTableExpr {
CreateTableExpr {
catalog_name: CATALOG_NAME.to_string(),
@@ -228,13 +253,13 @@ fn create_table_expr() -> CreateTableExpr {
},
ColumnDef {
name: "tpep_pickup_datetime".to_string(),
datatype: ColumnDataType::Int64 as i32,
datatype: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: true,
default_constraint: vec![],
},
ColumnDef {
name: "tpep_dropoff_datetime".to_string(),
datatype: ColumnDataType::Int64 as i32,
datatype: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: true,
default_constraint: vec![],
},
@@ -340,7 +365,8 @@ fn create_table_expr() -> CreateTableExpr {
create_if_not_exists: false,
table_options: Default::default(),
region_ids: vec![0],
table_id: Some(TableId { id: 0 }),
table_id: None,
engine: "mito".to_string(),
}
}

View File

@@ -37,14 +37,27 @@ type = "File"
data_dir = "/tmp/greptimedb/data/"
# Compaction options, see `standalone.example.toml`.
[compaction]
[storage.compaction]
max_inflight_tasks = 4
max_files_in_level0 = 8
max_purge_tasks = 32
# Storage manifest options
[storage.manifest]
# Region checkpoint actions margin.
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
# Procedure storage options, see `standalone.example.toml`.
# [procedure.store]
# type = "File"
# data_dir = "/tmp/greptimedb/procedure/"
# max_retry_times = 3
# retry_delay = "500ms"
[procedure]
max_retry_times = 3
retry_delay = "500ms"
# Log options, see `standalone.example.toml`
[logging]
dir = "/tmp/greptimedb/logs"
level = "info"

View File

@@ -56,3 +56,8 @@ metasrv_addrs = ["127.0.0.1:3002"]
timeout_millis = 3000
connect_timeout_millis = 5000
tcp_nodelay = true
# Log options, see `standalone.example.toml`
[logging]
dir = "/tmp/greptimedb/logs"
level = "info"

View File

@@ -13,3 +13,8 @@ datanode_lease_secs = 15
selector = "LeaseBased"
# Store data in memory, false by default.
use_memory_store = false
# Log options, see `standalone.example.toml`
[logging]
dir = "/tmp/greptimedb/logs"
level = "info"

View File

@@ -99,7 +99,7 @@ type = "File"
data_dir = "/tmp/greptimedb/data/"
# Compaction options.
[compaction]
[storage.compaction]
# Max task number that can concurrently run.
max_inflight_tasks = 4
# Max files in level 0 to trigger compaction.
@@ -107,14 +107,26 @@ max_files_in_level0 = 8
# Max task number for SST purge task after compaction.
max_purge_tasks = 32
# Storage manifest options
[storage.manifest]
# Region checkpoint actions margin.
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
# Procedure storage options.
# Uncomment to enable.
# [procedure.store]
# # Storage type.
# type = "File"
# # Procedure data path.
# data_dir = "/tmp/greptimedb/procedure/"
# # Procedure max retry time.
# max_retry_times = 3
# # Initial retry delay of procedures, increases exponentially
# retry_delay = "500ms"
[procedure]
# Procedure max retry time.
max_retry_times = 3
# Initial retry delay of procedures, increases exponentially
retry_delay = "500ms"
# Log options
[logging]
# Specify logs directory.
dir = "/tmp/greptimedb/logs"
# Specify the log level [info | debug | error | warn]
level = "debug"

View File

@@ -1,9 +1,36 @@
#!/usr/bin/env bash
set -e
# this script will download Python source code, compile it, and install it to /usr/local/lib
# then use this python to compile cross-compiled python for aarch64
ARCH=$1
PYTHON_VERSION=3.10.10
PYTHON_SOURCE_DIR=Python-${PYTHON_VERSION}
PYTHON_INSTALL_PATH_AMD64=${PWD}/python-${PYTHON_VERSION}/amd64
PYTHON_INSTALL_PATH_AARCH64=${PWD}/python-${PYTHON_VERSION}/aarch64
function download_python_source_code() {
wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz
tar -xvf Python-$PYTHON_VERSION.tgz
}
function compile_for_amd64_platform() {
mkdir -p "$PYTHON_INSTALL_PATH_AMD64"
echo "Compiling for amd64 platform..."
./configure \
--prefix="$PYTHON_INSTALL_PATH_AMD64" \
--enable-shared \
ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes \
ac_cv_have_long_long_format=yes \
--disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no
make
make install
}
wget https://www.python.org/ftp/python/3.10.10/Python-3.10.10.tgz
tar -xvf Python-3.10.10.tgz
cd Python-3.10.10
# explain Python compile options here a bit:
# --enable-shared: enable building a shared Python library (default is no) but we do need it for calling from rust
# CC, CXX, AR, LD, RANLIB: set the compiler, archiver, linker, and ranlib programs to use
@@ -14,33 +41,47 @@ cd Python-3.10.10
# ac_cv_have_long_long_format=yes: target platform supports long long type
# disable-ipv6: disable ipv6 support, we don't need it in here
# ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no: disable pty support, we don't need it in here
function compile_for_aarch64_platform() {
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
mkdir -p "$PYTHON_INSTALL_PATH_AARCH64"
echo "Compiling for aarch64 platform..."
echo "LD_LIBRARY_PATH: $LD_LIBRARY_PATH"
echo "LIBRARY_PATH: $LIBRARY_PATH"
echo "PATH: $PATH"
./configure --build=x86_64-linux-gnu --host=aarch64-linux-gnu \
--prefix="$PYTHON_INSTALL_PATH_AARCH64" --enable-optimizations \
CC=aarch64-linux-gnu-gcc \
CXX=aarch64-linux-gnu-g++ \
AR=aarch64-linux-gnu-ar \
LD=aarch64-linux-gnu-ld \
RANLIB=aarch64-linux-gnu-ranlib \
--enable-shared \
ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes \
ac_cv_have_long_long_format=yes \
--disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no
make
make altinstall
}
# Main script starts here.
download_python_source_code
# Enter the python source code directory.
cd $PYTHON_SOURCE_DIR || exit 1
# Build local python first, then build cross-compiled python.
./configure \
--enable-shared \
ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes \
ac_cv_have_long_long_format=yes \
--disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no && \
make
make install
cd ..
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/lib/
export PY_INSTALL_PATH=$(pwd)/python_arm64_build
cd Python-3.10.10 && \
make clean && \
make distclean && \
alias python=python3 && \
./configure --build=x86_64-linux-gnu --host=aarch64-linux-gnu \
--prefix=$PY_INSTALL_PATH --enable-optimizations \
CC=aarch64-linux-gnu-gcc \
CXX=aarch64-linux-gnu-g++ \
AR=aarch64-linux-gnu-ar \
LD=aarch64-linux-gnu-ld \
RANLIB=aarch64-linux-gnu-ranlib \
--enable-shared \
ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes \
ac_cv_have_long_long_format=yes \
--disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no && \
make && make altinstall && \
cd ..
compile_for_amd64_platform
# Clean the build directory.
make clean && make distclean
# Cross compile python for aarch64.
if [ "$ARCH" = "aarch64-unknown-linux-gnu" ]; then
compile_for_aarch64_platform
fi

View File

@@ -1,6 +1,14 @@
FROM ubuntu:22.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install ca-certificates
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates \
python3.10 \
python3.10-dev \
python3-pip
COPY requirements.txt /etc/greptime/requirements.txt
RUN python3 -m pip install -r /etc/greptime/requirements.txt
ARG TARGETARCH

View File

@@ -0,0 +1,5 @@
numpy>=1.24.2
pandas>=1.5.3
pyarrow>=11.0.0
requests>=2.28.2
scipy>=1.10.1

View File

@@ -0,0 +1,74 @@
This document introduces how to implement SQL statements in GreptimeDB.
The execution entry point for SQL statements is located in the Frontend `Instance`. You can see that it implements `SqlQueryHandler`:
```rust
impl SqlQueryHandler for Instance {
type Error = Error;
async fn do_query(&self, query: &str, query_ctx: QueryContextRef) -> Vec<Result<Output>> {
// ...
}
}
```
Normally, when a SQL query arrives at GreptimeDB, the `do_query` method is called. After some parsing work, the SQL is fed into `StatementExecutor`:
```rust
// in Frontend Instance:
self.statement_executor.execute_sql(stmt, query_ctx).await
```
That's where we handle our SQL statements. You can simply add a new match arm for your statement there, and the statement is then implemented for both GreptimeDB Standalone and Cluster. You can see how `DESCRIBE TABLE` is implemented as an example.
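As a minimal, runnable sketch of this dispatch (the enum, output type, and handler bodies below are simplified stand-ins, not the real GreptimeDB signatures, which are async and take a `QueryContextRef`):
```rust
// Simplified stand-in types: the point is only that supporting a new statement
// means adding one more match arm in the executor.
enum Statement {
    DescribeTable(String),
    ShowTables,
}

struct Output(String);

struct StatementExecutor;

impl StatementExecutor {
    fn execute_sql(&self, stmt: Statement) -> Output {
        match stmt {
            // Existing arm: how a statement such as DESCRIBE TABLE is routed for both modes.
            Statement::DescribeTable(table) => Output(format!("DESCRIBE {table}")),
            // A new statement gets its own arm (and handler) here.
            Statement::ShowTables => Output("SHOW TABLES".to_string()),
        }
    }
}

fn main() {
    let executor = StatementExecutor;
    for stmt in [Statement::DescribeTable("cpu".to_string()), Statement::ShowTables] {
        println!("{}", executor.execute_sql(stmt).0);
    }
}
```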
Now, what if a statement should be handled differently for GreptimeDB Standalone and Cluster? You can see there's a `SqlStatementExecutor` field in `StatementExecutor`. GreptimeDB Standalone and Cluster each have their own implementation of `SqlStatementExecutor`. If you are going to implement a statement differently in the two modes (like `CREATE TABLE`), you have to implement it in both `SqlStatementExecutor`s.
This is summarized in the diagram below:
```text
SQL query
|
v
+---------------------------+
| SqlQueryHandler::do_query |
+---------------------------+
|
| SQL parsing
v
+--------------------------------+
| StatementExecutor::execute_sql |
+--------------------------------+
|
| SQL execution
v
+----------------------------------+
| commonly handled statements like |
| "plan_exec" for selection or |
+----------------------------------+
| |
For Standalone | | For Cluster
v v
+---------------------------+ +---------------------------+
| SqlStatementExecutor impl | | SqlStatementExecutor impl |
| in Datanode Instance | | in Frontend DistInstance |
+---------------------------+ +---------------------------+
```
Note that some SQL statements can be executed in our QueryEngine, in the form of a `LogicalPlan`. You can follow the invocation path down to the `QueryEngine` implementation from `StatementExecutor::plan_exec`. For now, there's only one `DatafusionQueryEngine` for both GreptimeDB Standalone and Cluster. A single query engine works for both modes because GreptimeDB reads and writes data through the `Table` trait, and each mode has its own `Table` implementation.
We don't have any bias towards whether a statement should be handled in the query engine or in `StatementExecutor`. You can implement one kind of statement in both places. For example, `Insert` with selection is handled in the query engine, because we can easily do the query part there. However, `Insert` without selection is not, because the cost of parsing the statement into a `LogicalPlan` is not negligible. So generally, if the SQL query is simple enough, you can handle it in `StatementExecutor`; otherwise, if it is complex or contains a selection, it should be parsed into a `LogicalPlan` and handled in the query engine.

(Two binary image files added, not shown: one 78 KiB, one 47 KiB.)
View File

@@ -0,0 +1,174 @@
---
Feature Name: "File external table"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/1041
Date: 2023-03-08
Author: "Xu Wenkang <wenymedia@gmail.com>"
---
File external table
---
# Summary
Allows users to perform SQL queries on files
# Motivation
User data may already exist in other storage, e.g., file systems or S3, in formats such as CSV, Parquet, or JSON. We can provide users the ability to perform SQL queries on these files.
# Details
## Overview
The file external table provides users the ability to perform SQL queries on these files.
For example, a user has a CSV file on the local file system `/var/data/city.csv`:
```
Rank , Name , State , 2023 Population , 2020 Census , Annual Change , Density (mi²)
1 , New York City , New York , 8,992,908 , 8,804,190 , 0.7% , 29,938
2 , Los Angeles , California , 3,930,586 , 3,898,747 , 0.27% , 8,382
3 , Chicago , Illinois , 2,761,625 , 2,746,388 , 0.18% , 12,146
.....
```
Then the user can create a file external table with:
```sql
CREATE EXTERNAL TABLE city with(location='/var/data/city.csv', format="CSV", field_delimiter = ',', record_delimiter = '\n', skip_header = 1);
```
Then query the external table with:
```bash
MySQL> select * from city;
```
| Rank | Name | State | 2023 Population | 2020 Census | Annual Change | Density (mi²) |
| :--- | :------------ | :--------- | :-------------- | :---------- | :------------ | :------------ |
| 1 | New York City | New York | 8,992,908 | 8,804,190 | 0.7% | 29,938 |
| 2 | Los Angeles | California | 3,930,586 | 3,898,747 | 0.27% | 8,382 |
| 3 | Chicago | Illinois | 2,761,625 | 2,746,388 | 0.18% | 12,146 |
Drop the external table, if needed, with:
```sql
DROP EXTERNAL TABLE city
```
### Syntax
```
CREATE EXTERNAL TABLE [<database>.]<table_name>
[
(
<col_name> <col_type> [NULL | NOT NULL] [COMMENT "<comment>"]
)
]
[ WITH
(
LOCATION = 'url'
[,FIELD_DELIMITER = 'delimiter' ]
[,RECORD_DELIMITER = 'delimiter' ]
[,SKIP_HEADER = '<number>' ]
[,FORMAT = { csv | json | parquet } ]
[,PATTERN = '<regex_pattern>' ]
[,ENDPOINT = '<uri>' ]
[,ACCESS_KEY_ID = '<key_id>' ]
[,SECRET_ACCESS_KEY = '<access_key>' ]
[,SESSION_TOKEN = '<token>' ]
[,REGION = '<region>' ]
[,ENABLE_VIRTUAL_HOST_STYLE = '<boolean>']
..
)
]
```
### Supported File Formats
The external file table supports multiple formats; we divide them into row formats and columnar formats.
Row formats:
- CSV, JSON
Columnar formats:
- Parquet
Some of these formats support filter pushdown, and others don't. If users query very large files in a format that doesn't support pushdown, scanning the full files may consume a lot of IO and cause long-running queries.
### File Table Engine
![overview](external-table-engine-overview.png)
We implement a file table engine that creates an external table by accepting user-specified file paths and treating all records as immutable.
1. File Format Decoder: decodes files into a `RecordBatch` stream.
2. File Table Engine: implements the `TableProvider` trait, stores the necessary metadata in memory, and provides scan ability.
Our implementation is better suited to small files. For large files (e.g., a GB-level CSV file), we suggest users import the data into the database. A rough sketch of the two pieces above follows.
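The following is a hedged, self-contained sketch of that shape, not the actual GreptimeDB engine: `Batch`, `FileFormatDecoder`, `CsvDecoder`, and `FileTable` are illustrative stand-ins (the real engine decodes into arrow `RecordBatch`es and exposes scanning through the query engine's table traits).
```rust
use std::collections::HashMap;

/// Stand-in for a decoded batch of rows (the real engine uses arrow `RecordBatch`).
#[derive(Debug)]
struct Batch {
    rows: Vec<HashMap<String, String>>,
}

/// One decoder per supported format (CSV, JSON, Parquet).
trait FileFormatDecoder {
    fn decode(&self, contents: &str) -> Vec<Batch>;
}

/// Trivial CSV-like decoder, used only to exercise the sketch.
struct CsvDecoder {
    field_delimiter: char,
}

impl FileFormatDecoder for CsvDecoder {
    fn decode(&self, contents: &str) -> Vec<Batch> {
        let mut lines = contents.lines();
        // First line is the header; each following line becomes one row.
        let header: Vec<&str> = match lines.next() {
            Some(h) => h.split(self.field_delimiter).map(str::trim).collect(),
            None => return vec![],
        };
        let rows = lines
            .map(|line| {
                header
                    .iter()
                    .zip(line.split(self.field_delimiter).map(str::trim))
                    .map(|(k, v)| (k.to_string(), v.to_string()))
                    .collect()
            })
            .collect();
        vec![Batch { rows }]
    }
}

/// The "file table": records are immutable, metadata stays in memory,
/// and a scan simply re-decodes the underlying file.
struct FileTable<D: FileFormatDecoder> {
    contents: String, // in reality, a path into the object store
    decoder: D,
}

impl<D: FileFormatDecoder> FileTable<D> {
    fn scan(&self) -> Vec<Batch> {
        self.decoder.decode(&self.contents)
    }
}

fn main() {
    let table = FileTable {
        contents: "Rank, Name\n1, New York City\n2, Los Angeles".to_string(),
        decoder: CsvDecoder { field_delimiter: ',' },
    };
    println!("{:?}", table.scan());
}
```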
## Drawbacks
- Some formats don't support filter pushdown
- Hard to support indexing
## Life cycle
### Register a table
1. Write metadata to manifest.
2. Create the table via file table engine.
3. Register table to `CatalogProvider` and register table to `SystemCatalog`(persist tables to disk).
### Deregister a table (Drop a table)
1. Fetch the target table info (figure out table engine type).
2. Deregister the target table in `CatalogProvider` and `SystemCatalog`.
3. Find the target table engine.
4. Drop the target table.
### Recover a table when restarting
1. Collect tables name and engine type info.
2. Find the target tables in different engines.
3. Open and register tables.
# Alternatives
## Using DataFusion API
We can use datafusion API to register a file table:
```rust
let ctx = SessionContext::new();
ctx.register_csv("example", "tests/data/example.csv", CsvReadOptions::new()).await?;
// create a plan
let df = ctx.sql("SELECT a, MIN(b) FROM example WHERE a <= b GROUP BY a LIMIT 100").await?;
```
### Drawbacks
DataFusion implements its own `Object Store` abstraction and supports parsing partitioned directories, which lets it push filters down and skip some directories. However, this makes it impossible to use our `LruCacheLayer` (parsing the partitioned directories requires paths as input). If we want to manage memory entirely ourselves, we should implement our own `TableProvider` or `Table`.
- Impossible to use `CacheLayer`
## Introduce an intermediate representation layer
![overview](external-table-engine-way-2.png)
We convert all files into `parquet` as an intermediate representation. Then we only need to implement a `parquet` file table engine, and we already have a similar one. Also, it supports limited filter pushdown via the `parquet` row group stats.
### Drawbacks
- Computing overhead
- Storage overhead

527
docs/schema-structs.md Normal file
View File

@@ -0,0 +1,527 @@
# Schema Structs
# Common Schemas
The `datatypes` crate defines the elementary schema struct to describe the metadata.
## ColumnSchema
[ColumnSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/datatypes/src/schema/column_schema.rs#L36) represents the metadata of a column. It is equivalent to arrow's [Field](https://docs.rs/arrow/latest/arrow/datatypes/struct.Field.html) with additional metadata such as default constraint and whether the column is a time index. The time index is the column with a `TIME INDEX` constraint of a table. We can convert the `ColumnSchema` into an arrow `Field` and convert the `Field` back to the `ColumnSchema` without losing metadata.
```rust
pub struct ColumnSchema {
pub name: String,
pub data_type: ConcreteDataType,
is_nullable: bool,
is_time_index: bool,
default_constraint: Option<ColumnDefaultConstraint>,
metadata: Metadata,
}
```
## Schema
[Schema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/datatypes/src/schema.rs#L38) is an ordered sequence of `ColumnSchema`. It is equivalent to arrow's [Schema](https://docs.rs/arrow/latest/arrow/datatypes/struct.Schema.html) with additional metadata including the index of the time index column and the version of this schema. Same as `ColumnSchema`, we can convert our `Schema` from/to arrow's `Schema`.
```rust
use arrow::datatypes::Schema as ArrowSchema;
pub struct Schema {
column_schemas: Vec<ColumnSchema>,
name_to_index: HashMap<String, usize>,
arrow_schema: Arc<ArrowSchema>,
timestamp_index: Option<usize>,
version: u32,
}
pub type SchemaRef = Arc<Schema>;
```
We alias `Arc<Schema>` as `SchemaRef` since it is used frequently. Mostly, we use our `ColumnSchema` and `Schema` structs instead of Arrow's `Field` and `Schema` unless we need to invoke third-party libraries (like DataFusion or ArrowFlight) that rely on Arrow.
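As a small usage sketch, the snippet below assembles a `Schema` from a few `ColumnSchema`s and hands out a `SchemaRef`. `Schema::new` and `arrow_schema()` are used the same way as in the `information_schema` tables added in this change set; the concrete columns are just an example.
```rust
use std::sync::Arc;

use datatypes::prelude::ConcreteDataType;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};

// A minimal sketch: build a Schema and share it as a SchemaRef.
fn build_schema() -> SchemaRef {
    let schema = Schema::new(vec![
        ColumnSchema::new("host", ConcreteDataType::string_datatype(), false),
        ColumnSchema::new("region_count", ConcreteDataType::uint32_datatype(), true),
    ]);
    Arc::new(schema)
}

fn print_arrow(schema: &SchemaRef) {
    // The cached arrow schema is exposed via `arrow_schema()`.
    println!("{:?}", schema.arrow_schema());
}
```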
## RawSchema
`Schema` contains fields like a map from column names to their indices in the `ColumnSchema` sequence and a cached arrow `Schema`. We can construct these fields from the `ColumnSchema` sequence, so we don't want to serialize them. This is why we don't derive `Serialize` and `Deserialize` for `Schema`. We introduce a new struct [RawSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/datatypes/src/schema/raw.rs#L24) which keeps all required fields of a `Schema` and derives the serialization traits. To serialize a `Schema`, we need to convert it into a `RawSchema` first and serialize the `RawSchema`.
```rust
pub struct RawSchema {
pub column_schemas: Vec<ColumnSchema>,
pub timestamp_index: Option<usize>,
pub version: u32,
}
```
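The conversion figure at the end of this document shows a `From` impl from `Schema` to `RawSchema` and a `TryFrom` impl back. Assuming those impls, serde derives on `RawSchema`, and the module paths shown here, the round trip looks roughly like this sketch (not the exact production code):
```rust
use datatypes::schema::{RawSchema, Schema};

// A sketch of the round trip, assuming the From/TryFrom conversions from the
// conversion figure at the end of this document and serde derives on RawSchema.
fn roundtrip(schema: &Schema) -> Schema {
    // Schema -> RawSchema keeps only the fields worth persisting.
    let raw = RawSchema::from(schema);
    let json = serde_json::to_string(&raw).unwrap();
    // Deserialize and rebuild the derived fields (name index, cached arrow schema).
    let raw: RawSchema = serde_json::from_str(&json).unwrap();
    Schema::try_from(raw).unwrap()
}
```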
We want to keep the `Schema` simple and avoid putting too much business-related metadata in it as many different structs or traits rely on it.
# Schema of the Table
A table maintains its schema in [TableMeta](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/table/src/metadata.rs#L97).
```rust
pub struct TableMeta {
pub schema: SchemaRef,
pub primary_key_indices: Vec<usize>,
pub value_indices: Vec<usize>,
// ...
}
```
The order of columns in `TableMeta::schema` is the same as the order specified in the `CREATE TABLE` statement which users use to create this table.
The field `primary_key_indices` stores the indices of primary key columns. The field `value_indices` records the indices of value columns (columns that are neither primary key nor time index; we sometimes call them field columns).
Suppose we create a table with the following SQL
```sql
CREATE TABLE cpu (
ts TIMESTAMP,
host STRING,
usage_user DOUBLE,
usage_system DOUBLE,
datacenter STRING,
TIME INDEX (ts),
PRIMARY KEY(datacenter, host)) ENGINE=mito WITH(regions=1);
```
Then the table's `TableMeta` may look like this:
```json
{
"schema":{
"column_schemas":[
"ts",
"host",
"usage_user",
"usage_system",
"datacenter"
],
"time_index":0,
"version":0
},
"primary_key_indices":[
4,
1
],
"value_indices":[
2,
3
]
}
```
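As a sanity check on the JSON above, the purely illustrative helper below derives `value_indices` from the column count, the primary key indices, and the time index. It is not part of the codebase, just a restatement of the rule in code.
```rust
// Purely illustrative: derive `value_indices` for the `cpu` table above.
// Columns: ts(0), host(1), usage_user(2), usage_system(3), datacenter(4).
fn value_indices(num_columns: usize, primary_key: &[usize], time_index: usize) -> Vec<usize> {
    (0..num_columns)
        .filter(|i| *i != time_index && !primary_key.contains(i))
        .collect()
}

fn main() {
    // Primary key indices are [4, 1] and the time index is 0, so the value
    // columns are usage_user(2) and usage_system(3).
    assert_eq!(value_indices(5, &[4, 1], 0), vec![2, 3]);
}
```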
# Schemas of the storage engine
We split a table into one or more units with the same schema and then store these units in the storage engine. Each unit is a region in the storage engine.
The storage engine maintains schemas of regions in more complicated ways because it
- adds internal columns that are invisible to users to store additional metadata for each row
- provides a data model similar to the key-value model so it organizes columns in a different order
- maintains additional metadata like column id or column family
So the storage engine defines several schema structs:
- RegionSchema
- StoreSchema
- ProjectedSchema
## RegionSchema
A [RegionSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/storage/src/schema/region.rs#L37) describes the schema of a region.
```rust
pub struct RegionSchema {
user_schema: SchemaRef,
store_schema: StoreSchemaRef,
columns: ColumnsMetadataRef,
}
```
Each region reserves some columns called `internal columns` for internal usage:
- `__sequence`, sequence number of a row
- `__op_type`, operation type of a row, such as `PUT` or `DELETE`
- `__version`, user-specified version of a row, reserved but not used. We might remove this in the future
The table engine can't see the `__sequence` and `__op_type` columns, so the `RegionSchema` itself maintains two internal schemas:
- User schema, a `Schema` struct that doesn't have internal columns
- Store schema, a `StoreSchema` struct that has internal columns
The `ColumnsMetadata` struct keeps metadata about all columns, but most of the time we only need the metadata in the user schema and store schema, so we just ignore it here. We may remove this struct in the future.
`RegionSchema` organizes columns in the following order:
```
key columns, timestamp, [__version,] value columns, __sequence, __op_type
```
We can ignore the `__version` column because it is disabled now:
```
key columns, timestamp, value columns, __sequence, __op_type
```
Key columns are columns of a table's primary key. Timestamp is the time index column. A region sorts all rows by key columns, timestamp, sequence, and op type.
So the `RegionSchema` of our `cpu` table above looks like this:
```json
{
"user_schema":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system"
],
"store_schema":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system",
"__sequence",
"__op_type"
]
}
```
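The ordering rule above can be restated as a small, purely illustrative function that reorders the table columns of the `cpu` example into the region order; the real engine builds this from its column metadata, so treat this only as a description of the rule.
```rust
// Purely illustrative: reorder table columns into
// `key columns, timestamp, value columns, __sequence, __op_type`.
fn region_column_order(
    table_columns: &[&str],
    primary_key: &[usize], // indices in primary-key declaration order
    time_index: usize,
) -> Vec<String> {
    let mut ordered: Vec<String> = primary_key
        .iter()
        .map(|i| table_columns[*i].to_string())
        .collect();
    ordered.push(table_columns[time_index].to_string());
    for (i, name) in table_columns.iter().enumerate() {
        if i != time_index && !primary_key.contains(&i) {
            ordered.push(name.to_string());
        }
    }
    ordered.extend(["__sequence".to_string(), "__op_type".to_string()]);
    ordered
}

fn main() {
    let cols = ["ts", "host", "usage_user", "usage_system", "datacenter"];
    assert_eq!(
        region_column_order(&cols, &[4, 1], 0),
        ["datacenter", "host", "ts", "usage_user", "usage_system", "__sequence", "__op_type"]
    );
}
```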
## StoreSchema
As described above, a [StoreSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/storage/src/schema/store.rs#L36) is a schema that knows all internal columns.
```rust
struct StoreSchema {
columns: Vec<ColumnMetadata>,
schema: SchemaRef,
row_key_end: usize,
user_column_end: usize,
}
```
The columns in the `columns` and `schema` fields have the same order. The `ColumnMetadata` has metadata like column id, column family id, and comment. The `StoreSchema` also stores this metadata in `StoreSchema::schema`, so we can convert between the `StoreSchema` and arrow's `Schema`. We use this feature to persist the `StoreSchema` in the SST since our SST format is `Parquet`, which can take arrow's `Schema` as its schema.
The `StoreSchema` of the region above is similar to this:
```json
{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system",
"__sequence",
"__op_type"
],
"time_index":2,
"version":0
},
"row_key_end":3,
"user_column_end":5
}
```
The key columns and the timestamp column form the row key of a row. We put them together so we can use `row_key_end` to get the indices of all row key columns. Similarly, we can use `user_column_end` to get the indices of all user columns (non-internal columns).
```rust
impl StoreSchema {
#[inline]
pub(crate) fn row_key_indices(&self) -> impl Iterator<Item = usize> {
0..self.row_key_end
}
#[inline]
pub(crate) fn value_indices(&self) -> impl Iterator<Item = usize> {
self.row_key_end..self.user_column_end
}
}
```
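For the example `StoreSchema` above (`row_key_end = 3`, `user_column_end = 5`), the two iterators produce the following ranges; the snippet only restates the arithmetic.
```rust
// Illustrative only: the index ranges for the StoreSchema example above.
fn main() {
    let (row_key_end, user_column_end) = (3usize, 5usize);
    // row_key_indices(): datacenter, host, ts
    assert_eq!((0..row_key_end).collect::<Vec<_>>(), vec![0, 1, 2]);
    // value_indices(): usage_user, usage_system
    assert_eq!((row_key_end..user_column_end).collect::<Vec<_>>(), vec![3, 4]);
}
```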
Another useful feature of `StoreSchema` is that we ensure it always contains key columns, a timestamp column, and internal columns because we need them to perform merge, deduplication, and delete. Projection on `StoreSchema` only projects value columns.
## ProjectedSchema
To support arbitrary projection, we introduce the [ProjectedSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/storage/src/schema/projected.rs#L106).
```rust
pub struct ProjectedSchema {
projection: Option<Projection>,
schema_to_read: StoreSchemaRef,
projected_user_schema: SchemaRef,
}
```
We need to handle many cases while doing projection:
- The column order of the table and the region is different
- The projection can be in arbitrary order, e.g. `select usage_user, host from cpu` and `select host, usage_user from cpu` have different projection order
- We support `ALTER TABLE` so data files may have different schemas.
### Projection
Let's take an example to see how projection works. Suppose we want to select `ts`, `usage_system` from the `cpu` table.
```sql
CREATE TABLE cpu (
ts TIMESTAMP,
host STRING,
usage_user DOUBLE,
usage_system DOUBLE,
datacenter STRING,
TIME INDEX (ts),
PRIMARY KEY(datacenter, host)) ENGINE=mito WITH(regions=1);
select ts, usage_system from cpu;
```
The query engine uses the projection `[0, 3]` to scan the table. However, columns in the region have a different order, so the table engine adjusts the projection to `[2, 4]`.
```json
{
"user_schema":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system"
],
}
```
As you can see, the output order is still `[ts, usage_system]`. This is the schema users can see after projection so we call it `projected user schema`.
But the storage engine also needs to read key columns, a timestamp column, and internal columns. So we maintain a `StoreSchema` after projection in the `ProjectedSchema`.
The `Projection` struct is a helper struct to help compute the projected user schema and store schema.
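Conceptually, the adjustment described above is a name-based index lookup. The sketch below reproduces the `[0, 3]` → `[2, 4]` mapping for the `cpu` example; it is not the actual `Projection` implementation.
```rust
// Purely illustrative: map a table-level projection to indices in the region's
// user schema by matching column names.
fn adjust_projection(
    table_columns: &[&str],
    region_user_columns: &[&str],
    projection: &[usize],
) -> Vec<usize> {
    projection
        .iter()
        .map(|i| {
            let name = table_columns[*i];
            region_user_columns
                .iter()
                .position(|c| *c == name)
                .expect("projected column must exist in the region's user schema")
        })
        .collect()
}

fn main() {
    let table = ["ts", "host", "usage_user", "usage_system", "datacenter"];
    let region = ["datacenter", "host", "ts", "usage_user", "usage_system"];
    // `select ts, usage_system from cpu` => table projection [0, 3]
    assert_eq!(adjust_projection(&table, &region, &[0, 3]), vec![2, 4]);
}
```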
So we can construct the following `ProjectedSchema`:
```json
{
"schema_to_read":{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_system",
"__sequence",
"__op_type"
],
"time_index":2,
"version":0
},
"row_key_end":3,
"user_column_end":4
},
"projected_user_schema":{
"column_schemas":[
"ts",
"usage_system"
],
"time_index":0
}
}
```
As you can see, `schema_to_read` doesn't contain the column `usage_user` that is not intended to be read (not in projection).
### ReadAdapter
As mentioned above, we can alter a table so the underlying files (SSTs) and memtables in the storage engine may have different schemas.
To simplify the logic of `ProjectedSchema`, we handle the difference between schemas before projection (constructing the `ProjectedSchema`). We introduce [ReadAdapter](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/storage/src/schema/compat.rs#L90) that adapts rows with different source schemas to the same expected schema.
So we can always use the current `RegionSchema` of the region to construct the `ProjectedSchema`, and then create a `ReadAdapter` for each memtable or SST.
```rust
#[derive(Debug)]
pub struct ReadAdapter {
source_schema: StoreSchemaRef,
dest_schema: ProjectedSchemaRef,
indices_in_result: Vec<Option<usize>>,
is_source_needed: Vec<bool>,
}
```
For each column required by `dest_schema`, `indices_in_result` stores the index of that column in the row read from the source memtable or SST. If the source row doesn't contain that column, the index is `None`.
The field `is_source_needed` stores whether a column in the source memtable or SST is needed.
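To illustrate how these two fields work together, the sketch below adapts a single row read from a source (already filtered by `is_source_needed`) into the destination layout, filling missing columns with nulls. The real `ReadAdapter` operates on column vectors rather than individual rows, so this is only a simplification.
```rust
// Purely illustrative value type; the engine uses its own vector types.
#[derive(Clone, Debug, PartialEq)]
enum Value {
    Null,
    F64(f64),
}

// For each destination column, pick the value at `indices_in_result[i]` from
// the source row, or emit a null if the source schema lacks that column.
fn adapt_row(source_row: &[Value], indices_in_result: &[Option<usize>]) -> Vec<Value> {
    indices_in_result
        .iter()
        .map(|idx| match idx {
            Some(i) => source_row[*i].clone(),
            None => Value::Null, // e.g. `usage_idle` before the ALTER TABLE below
        })
        .collect()
}

fn main() {
    // Source row with two columns; the destination expects three, the middle one missing.
    let source = [Value::F64(1.0), Value::F64(2.0)];
    let out = adapt_row(&source, &[Some(0), None, Some(1)]);
    assert_eq!(out, [Value::F64(1.0), Value::Null, Value::F64(2.0)]);
}
```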
Suppose we add a new column `usage_idle` to the table `cpu`.
```sql
ALTER TABLE cpu ADD COLUMN usage_idle DOUBLE;
```
The new `StoreSchema` becomes:
```json
{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system",
"usage_idle",
"__sequence",
"__op_type"
],
"time_index":2,
"version":1
},
"row_key_end":3,
"user_column_end":6
}
```
Note that we bump the version of the schema to 1.
Suppose we want to select `ts`, `usage_system`, and `usage_idle`. While reading data written with the old schema, the storage engine creates a `ReadAdapter` like this:
```json
{
"source_schema":{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system",
"__sequence",
"__op_type"
],
"time_index":2,
"version":0
},
"row_key_end":3,
"user_column_end":5
},
"dest_schema":{
"schema_to_read":{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_system",
"usage_idle",
"__sequence",
"__op_type"
],
"time_index":2,
"version":1
},
"row_key_end":3,
"user_column_end":5
},
"projected_user_schema":{
"column_schemas":[
"ts",
"usage_system",
"usage_idle"
],
"time_index":0
}
},
"indices_in_result":[
0,
1,
2,
3,
null,
4,
5
],
"is_source_needed":[
true,
true,
true,
false,
true,
true,
true
]
}
```
We don't need to read `usage_user`, so `is_source_needed[3]` is false. The old schema doesn't have the column `usage_idle`, so `indices_in_result[4]` is `null`, and the `ReadAdapter` needs to insert a null column into the output row so the output schema still contains `usage_idle`.
The figure below shows the relationship between `RegionSchema`, `StoreSchema`, `ProjectedSchema`, and `ReadAdapter`.
```text
┌──────────────────────────────┐
│ │
│ ┌────────────────────┐ │
│ │ store_schema │ │
│ │ │ │
│ │ StoreSchema │ │
│ │ version 1 │ │
│ └────────────────────┘ │
│ │
│ ┌────────────────────┐ │
│ │ user_schema │ │
│ └────────────────────┘ │
│ │
│ RegionSchema │
│ │
└──────────────┬───────────────┘
┌──────────────▼───────────────┐
│ │
│ ┌──────────────────────────┐ │
│ │ schema_to_read │ │
│ │ │ │
│ │ StoreSchema (projected) │ │
│ │ version 1 │ │
│ └──────────────────────────┘ │
┌───┤ ├───┐
│ │ ┌──────────────────────────┐ │ │
│ │ │ projected_user_schema │ │ │
│ │ └──────────────────────────┘ │ │
│ │ │ │
│ │ ProjectedSchema │ │
dest schema │ └──────────────────────────────┘ │ dest schema
│ │
│ │
┌──────▼───────┐ ┌───────▼──────┐
│ │ │ │
│ ReadAdapter │ │ ReadAdapter │
│ │ │ │
└──────▲───────┘ └───────▲──────┘
│ │
│ │
source schema │ │ source schema
│ │
┌───────┴─────────┐ ┌────────┴────────┐
│ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ │ │ │ │ │ │
│ │ StoreSchema │ │ │ │ StoreSchema │ │
│ │ │ │ │ │ │ │
│ │ version 0 │ │ │ │ version 1 │ │
│ │ │ │ │ │ │ │
│ └─────────────┘ │ │ └─────────────┘ │
│ │ │ │
│ SST 0 │ │ SST 1 │
│ │ │ │
└─────────────────┘ └─────────────────┘
```
# Conversion
This figure shows the conversion between schemas:
```text
┌─────────────┐ schema From ┌─────────────┐
│ ├──────────────────┐ ┌────────────────────────────► │
│ TableMeta │ │ │ │ RawSchema │
│ │ │ │ ┌─────────────────────────┤ │
└─────────────┘ │ │ │ TryFrom └─────────────┘
│ │ │
│ │ │
│ │ │
│ │ │
│ │ │
┌───────────────────┐ ┌─────▼──┴──▼──┐ arrow_schema() ┌─────────────────┐
│ │ │ ├─────────────────────► │
│ ColumnsMetadata │ ┌─────► Schema │ │ ArrowSchema ├──┐
│ │ │ │ ◄─────────────────────┤ │ │
└────┬───────────▲──┘ │ └───▲───▲──────┘ TryFrom └─────────────────┘ │
│ │ │ │ │ │
│ │ │ │ └────────────────────────────────────────┐ │
│ │ │ │ │ │
│ columns │ user_schema() │ │ │
│ │ │ │ projected_user_schema() schema() │
│ │ │ │ │ │
│ ┌───┴─────────────┴─┐ │ ┌────────────────────┐ │ │
columns │ │ │ └─────────────────┤ │ │ │ TryFrom
│ │ RegionSchema │ │ ProjectedSchema │ │ │
│ │ ├─────────────────────────► │ │ │
│ └─────────────────┬─┘ ProjectedSchema::new() └──────────────────┬─┘ │ │
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
┌────▼────────────────────┐ │ store_schema() ┌────▼───────┴──┐ │
│ │ └─────────────────────────────────────────► │ │
│ Vec<ColumnMetadata> │ │ StoreSchema ◄─────┘
│ ◄──────────────────────────────────────────────┤ │
└─────────────────────────┘ columns └───────────────┘
```


@@ -1,2 +1,2 @@
[toolchain]
channel = "nightly-2023-02-26"
channel = "nightly-2023-05-03"


@@ -0,0 +1,42 @@
#!/usr/bin/env bash
# This script is used to download built dashboard assets from the "GreptimeTeam/dashboard" repository.
set -e
declare -r SCRIPT_DIR=$(cd $(dirname ${0}) >/dev/null 2>&1 && pwd)
declare -r ROOT_DIR=$(dirname ${SCRIPT_DIR})
declare -r STATIC_DIR="$ROOT_DIR/src/servers/dashboard"
OUT_DIR="${1:-$SCRIPT_DIR}"
RELEASE_VERSION="$(cat $STATIC_DIR/VERSION)"
echo "Downloading assets to dir: $OUT_DIR"
cd $OUT_DIR
# Download the SHA256 checksum attached to the release. To verify the integrity
# of the download, this checksum will be used to check the download tar file
# containing the built dashboard assets.
curl -Ls https://github.com/GreptimeTeam/dashboard/releases/download/$RELEASE_VERSION/sha256.txt --output sha256.txt
# Download the tar file containing the built dashboard assets.
curl -L https://github.com/GreptimeTeam/dashboard/releases/download/$RELEASE_VERSION/build.tar.gz --output build.tar.gz
# Verify the checksums match; exit if they don't.
case "$(uname -s)" in
FreeBSD | Darwin)
echo "$(cat sha256.txt)" | shasum --algorithm 256 --check \
|| { echo "Checksums did not match for downloaded dashboard assets!"; exit 1; } ;;
Linux)
echo "$(cat sha256.txt)" | sha256sum --check -- \
|| { echo "Checksums did not match for downloaded dashboard assets!"; exit 1; } ;;
*)
echo "The '$(uname -s)' operating system is not supported as a build host for the dashboard" >&2
exit 1
esac
# Extract the assets and clean up.
tar -xzf build.tar.gz -C "$STATIC_DIR"
rm sha256.txt
rm build.tar.gz
echo "Successfully download dashboard assets to $STATIC_DIR"


@@ -51,13 +51,17 @@ get_os_type
get_arch_type
if [ -n "${OS_TYPE}" ] && [ -n "${ARCH_TYPE}" ]; then
echo "Downloading ${BIN}, OS: ${OS_TYPE}, Arch: ${ARCH_TYPE}, Version: ${VERSION}"
# Use the latest nightly version.
if [ "${VERSION}" = "latest" ]; then
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/latest/download/${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz"
else
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz"
VERSION=$(curl -s -XGET "https://api.github.com/repos/${GITHUB_ORG}/${GITHUB_REPO}/releases" | grep tag_name | grep nightly | cut -d: -f 2 | sed 's/.*"\(.*\)".*/\1/' | uniq | sort -r | head -n 1)
if [ -z "${VERSION}" ]; then
echo "Failed to get the latest version."
exit 1
fi
fi
echo "Downloading ${BIN}, OS: ${OS_TYPE}, Arch: ${ARCH_TYPE}, Version: ${VERSION}"
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz"
tar xvf ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && rm ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && echo "Run './${BIN} --help' to get started"
fi


@@ -10,10 +10,10 @@ common-base = { path = "../common/base" }
common-error = { path = "../common/error" }
common-time = { path = "../common/time" }
datatypes = { path = "../datatypes" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "3a715150563b89d5dfc81a5838eac1f66a5658a1" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "e8abf8241c908448dce595399e89c89a40d048bd" }
prost.workspace = true
snafu = { version = "0.7", features = ["backtraces"] }
tonic.workspace = true
[build-dependencies]
tonic-build = "0.8"
tonic-build = "0.9"


@@ -18,7 +18,7 @@ use common_error::ext::ErrorExt;
use common_error::prelude::StatusCode;
use datatypes::prelude::ConcreteDataType;
use snafu::prelude::*;
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
pub type Result<T> = std::result::Result<T, Error>;
@@ -26,12 +26,12 @@ pub type Result<T> = std::result::Result<T, Error>;
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Unknown proto column datatype: {}", datatype))]
UnknownColumnDataType { datatype: i32, backtrace: Backtrace },
UnknownColumnDataType { datatype: i32, location: Location },
#[snafu(display("Failed to create column datatype from {:?}", from))]
IntoColumnDataType {
from: ConcreteDataType,
backtrace: Backtrace,
location: Location,
},
#[snafu(display(
@@ -66,9 +66,6 @@ impl ErrorExt for Error {
| Error::InvalidColumnDefaultConstraint { source, .. } => source.status_code(),
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self


@@ -7,6 +7,7 @@ license.workspace = true
[dependencies]
api = { path = "../api" }
arc-swap = "1.0"
arrow-schema.workspace = true
async-stream.workspace = true
async-trait = "0.1"
backoff = { version = "0.4", features = ["tokio"] }
@@ -23,8 +24,10 @@ datafusion.workspace = true
datatypes = { path = "../datatypes" }
futures = "0.3"
futures-util.workspace = true
key-lock = "0.1"
lazy_static = "1.4"
meta-client = { path = "../meta-client" }
metrics.workspace = true
parking_lot = "0.12"
regex = "1.6"
serde = "1.0"


@@ -19,13 +19,22 @@ use common_error::ext::{BoxedError, ErrorExt};
use common_error::prelude::{Snafu, StatusCode};
use datafusion::error::DataFusionError;
use datatypes::prelude::ConcreteDataType;
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
use tokio::task::JoinError;
use crate::DeregisterTableRequest;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display(
"Failed to re-compile script due to internal error, source: {}",
source
))]
CompileScriptInternal {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Failed to open system catalog table, source: {}", source))]
OpenSystemCatalog {
#[snafu(backtrace)]
@@ -50,7 +59,7 @@ pub enum Error {
},
#[snafu(display("System catalog is not valid: {}", msg))]
SystemCatalog { msg: String, backtrace: Backtrace },
SystemCatalog { msg: String, location: Location },
#[snafu(display(
"System catalog table type mismatch, expected: binary, found: {:?}",
@@ -58,61 +67,68 @@ pub enum Error {
))]
SystemCatalogTypeMismatch {
data_type: ConcreteDataType,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Invalid system catalog entry type: {:?}", entry_type))]
InvalidEntryType {
entry_type: Option<u8>,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Invalid system catalog key: {:?}", key))]
InvalidKey {
key: Option<String>,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Catalog value is not present"))]
EmptyValue { backtrace: Backtrace },
EmptyValue { location: Location },
#[snafu(display("Failed to deserialize value, source: {}", source))]
ValueDeserialize {
source: serde_json::error::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Table engine not found: {}, source: {}", engine_name, source))]
TableEngineNotFound {
engine_name: String,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Cannot find catalog by name: {}", catalog_name))]
CatalogNotFound {
catalog_name: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Cannot find schema {} in catalog {}", schema, catalog))]
SchemaNotFound {
catalog: String,
schema: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Table `{}` already exists", table))]
TableExists { table: String, backtrace: Backtrace },
TableExists { table: String, location: Location },
#[snafu(display("Table `{}` not exist", table))]
TableNotExist { table: String, backtrace: Backtrace },
#[snafu(display("Table not found: {}", table))]
TableNotExist { table: String, location: Location },
#[snafu(display("Schema {} already exists", schema))]
SchemaExists {
schema: String,
backtrace: Backtrace,
},
SchemaExists { schema: String, location: Location },
#[snafu(display("Operation {} not implemented yet", operation))]
Unimplemented {
operation: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Operation {} not supported", op))]
NotSupported { op: String, location: Location },
#[snafu(display("Failed to open table, table info: {}, source: {}", table_info, source))]
OpenTable {
table_info: String,
@@ -120,10 +136,13 @@ pub enum Error {
source: table::error::Error,
},
#[snafu(display("Failed to open table in parallel, source: {}", source))]
ParallelOpenTable { source: JoinError },
#[snafu(display("Table not found while opening table, table info: {}", table_info))]
TableNotFound {
table_info: String,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to read system catalog table records"))]
@@ -132,6 +151,12 @@ pub enum Error {
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to create recordbatch, source: {}", source))]
CreateRecordBatch {
#[snafu(backtrace)]
source: common_recordbatch::error::Error,
},
#[snafu(display(
"Failed to insert table creation record to system catalog, source: {}",
source
@@ -153,7 +178,7 @@ pub enum Error {
},
#[snafu(display("Illegal catalog manager state: {}", msg))]
IllegalManagerState { backtrace: Backtrace, msg: String },
IllegalManagerState { location: Location, msg: String },
#[snafu(display("Failed to scan system catalog table, source: {}", source))]
SystemCatalogTableScan {
@@ -219,6 +244,22 @@ pub enum Error {
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Invalid system table definition: {err_msg}"))]
InvalidSystemTableDef { err_msg: String, location: Location },
#[snafu(display("{}: {}", msg, source))]
Datafusion {
msg: String,
source: DataFusionError,
location: Location,
},
#[snafu(display("Table schema mismatch, source: {}", source))]
TableSchemaMismatch {
#[snafu(backtrace)]
source: table::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -231,7 +272,9 @@ impl ErrorExt for Error {
| Error::TableNotFound { .. }
| Error::IllegalManagerState { .. }
| Error::CatalogNotFound { .. }
| Error::InvalidEntryType { .. } => StatusCode::Unexpected,
| Error::InvalidEntryType { .. }
| Error::InvalidSystemTableDef { .. }
| Error::ParallelOpenTable { .. } => StatusCode::Unexpected,
Error::SystemCatalog { .. }
| Error::EmptyValue { .. }
@@ -239,14 +282,18 @@ impl ErrorExt for Error {
Error::SystemCatalogTypeMismatch { .. } => StatusCode::Internal,
Error::ReadSystemCatalog { source, .. } => source.status_code(),
Error::ReadSystemCatalog { source, .. } | Error::CreateRecordBatch { source } => {
source.status_code()
}
Error::InvalidCatalogValue { source, .. } | Error::CatalogEntrySerde { source } => {
source.status_code()
}
Error::TableExists { .. } => StatusCode::TableAlreadyExists,
Error::TableNotExist { .. } => StatusCode::TableNotFound,
Error::SchemaExists { .. } => StatusCode::InvalidArguments,
Error::SchemaExists { .. } | Error::TableEngineNotFound { .. } => {
StatusCode::InvalidArguments
}
Error::OpenSystemCatalog { source, .. }
| Error::CreateSystemCatalog { source, .. }
@@ -254,25 +301,24 @@ impl ErrorExt for Error {
| Error::OpenTable { source, .. }
| Error::CreateTable { source, .. }
| Error::DeregisterTable { source, .. }
| Error::RegionStats { source, .. } => source.status_code(),
| Error::RegionStats { source, .. }
| Error::TableSchemaMismatch { source } => source.status_code(),
Error::MetaSrv { source, .. } => source.status_code(),
Error::SystemCatalogTableScan { source } => source.status_code(),
Error::SystemCatalogTableScanExec { source } => source.status_code(),
Error::InvalidTableInfoInCatalog { source } => source.status_code(),
Error::SchemaProviderOperation { source } | Error::Internal { source } => {
source.status_code()
}
Error::Unimplemented { .. } => StatusCode::Unsupported,
Error::CompileScriptInternal { source }
| Error::SchemaProviderOperation { source }
| Error::Internal { source } => source.status_code(),
Error::Unimplemented { .. } | Error::NotSupported { .. } => StatusCode::Unsupported,
Error::QueryAccessDenied { .. } => StatusCode::AccessDenied,
Error::Datafusion { .. } => StatusCode::EngineExecuteQuery,
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
@@ -296,7 +342,7 @@ mod tests {
StatusCode::TableAlreadyExists,
Error::TableExists {
table: "some_table".to_string(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.status_code()
);
@@ -310,7 +356,7 @@ mod tests {
StatusCode::StorageUnavailable,
Error::SystemCatalog {
msg: "".to_string(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.status_code()
);
@@ -319,7 +365,7 @@ mod tests {
StatusCode::Internal,
Error::SystemCatalogTypeMismatch {
data_type: ConcreteDataType::binary_datatype(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.status_code()
);
@@ -333,7 +379,7 @@ mod tests {
pub fn test_errors_to_datafusion_error() {
let e: DataFusionError = Error::TableExists {
table: "test_table".to_string(),
backtrace: Backtrace::generate(),
location: Location::generate(),
}
.into();
match e {


@@ -191,6 +191,7 @@ impl TableRegionalKey {
pub struct TableRegionalValue {
pub version: TableVersion,
pub regions_ids: Vec<u32>,
pub engine_name: Option<String>,
}
pub struct CatalogKey {


@@ -0,0 +1,102 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod columns;
mod tables;
use std::any::Any;
use std::sync::Arc;
use async_trait::async_trait;
use datafusion::datasource::streaming::{PartitionStream, StreamingTable};
use snafu::ResultExt;
use table::table::adapter::TableAdapter;
use table::TableRef;
use self::columns::InformationSchemaColumns;
use crate::error::{DatafusionSnafu, Result, TableSchemaMismatchSnafu};
use crate::information_schema::tables::InformationSchemaTables;
use crate::{CatalogProviderRef, SchemaProvider};
const TABLES: &str = "tables";
const COLUMNS: &str = "columns";
pub(crate) struct InformationSchemaProvider {
catalog_name: String,
catalog_provider: CatalogProviderRef,
tables: Vec<String>,
}
impl InformationSchemaProvider {
pub(crate) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
Self {
catalog_name,
catalog_provider,
tables: vec![TABLES.to_string(), COLUMNS.to_string()],
}
}
}
#[async_trait]
impl SchemaProvider for InformationSchemaProvider {
fn as_any(&self) -> &dyn Any {
self
}
async fn table_names(&self) -> Result<Vec<String>> {
Ok(self.tables.clone())
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let table = match name.to_ascii_lowercase().as_ref() {
TABLES => {
let inner = Arc::new(InformationSchemaTables::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
));
Arc::new(
StreamingTable::try_new(inner.schema().clone(), vec![inner]).with_context(
|_| DatafusionSnafu {
msg: format!("Failed to get InformationSchema table '{name}'"),
},
)?,
)
}
COLUMNS => {
let inner = Arc::new(InformationSchemaColumns::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
));
Arc::new(
StreamingTable::try_new(inner.schema().clone(), vec![inner]).with_context(
|_| DatafusionSnafu {
msg: format!("Failed to get InformationSchema table '{name}'"),
},
)?,
)
}
_ => {
return Ok(None);
}
};
let table = TableAdapter::new(table).context(TableSchemaMismatchSnafu)?;
Ok(Some(Arc::new(table)))
}
async fn table_exist(&self, name: &str) -> Result<bool> {
let normalized_name = name.to_ascii_lowercase();
Ok(self.tables.contains(&normalized_name))
}
}


@@ -0,0 +1,184 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::{
SEMANTIC_TYPE_FIELD, SEMANTIC_TYPE_PRIMARY_KEY, SEMANTIC_TYPE_TIME_INDEX,
};
use common_query::physical_plan::TaskContext;
use common_recordbatch::RecordBatch;
use datafusion::datasource::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, DataType};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{StringVectorBuilder, VectorRef};
use snafu::ResultExt;
use crate::error::{CreateRecordBatchSnafu, Result};
use crate::CatalogProviderRef;
pub(super) struct InformationSchemaColumns {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
}
const TABLE_CATALOG: &str = "table_catalog";
const TABLE_SCHEMA: &str = "table_schema";
const TABLE_NAME: &str = "table_name";
const COLUMN_NAME: &str = "column_name";
const DATA_TYPE: &str = "data_type";
const SEMANTIC_TYPE: &str = "semantic_type";
impl InformationSchemaColumns {
pub(super) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
let schema = Arc::new(Schema::new(vec![
ColumnSchema::new(TABLE_CATALOG, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(DATA_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(SEMANTIC_TYPE, ConcreteDataType::string_datatype(), false),
]));
Self {
schema,
catalog_name,
catalog_provider,
}
}
fn builder(&self) -> InformationSchemaColumnsBuilder {
InformationSchemaColumnsBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_provider.clone(),
)
}
}
struct InformationSchemaColumnsBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
table_names: StringVectorBuilder,
column_names: StringVectorBuilder,
data_types: StringVectorBuilder,
semantic_types: StringVectorBuilder,
}
impl InformationSchemaColumnsBuilder {
fn new(schema: SchemaRef, catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
Self {
schema,
catalog_name,
catalog_provider,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
column_names: StringVectorBuilder::with_capacity(42),
data_types: StringVectorBuilder::with_capacity(42),
semantic_types: StringVectorBuilder::with_capacity(42),
}
}
/// Construct the `information_schema.columns` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
for schema_name in self.catalog_provider.schema_names().await? {
let Some(schema) = self.catalog_provider.schema(&schema_name).await? else { continue };
for table_name in schema.table_names().await? {
let Some(table) = schema.table(&table_name).await? else { continue };
let keys = &table.table_info().meta.primary_key_indices;
let schema = table.schema();
for (idx, column) in schema.column_schemas().iter().enumerate() {
let semantic_type = if column.is_time_index() {
SEMANTIC_TYPE_TIME_INDEX
} else if keys.contains(&idx) {
SEMANTIC_TYPE_PRIMARY_KEY
} else {
SEMANTIC_TYPE_FIELD
};
self.add_column(
&catalog_name,
&schema_name,
&table_name,
&column.name,
column.data_type.name(),
semantic_type,
);
}
}
}
self.finish()
}
fn add_column(
&mut self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
column_name: &str,
data_type: &str,
semantic_type: &str,
) {
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.column_names.push(Some(column_name));
self.data_types.push(Some(data_type));
self.semantic_types.push(Some(semantic_type));
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.catalog_names.finish()),
Arc::new(self.schema_names.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.column_names.finish()),
Arc::new(self.data_types.finish()),
Arc::new(self.semantic_types.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaColumns {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}


@@ -0,0 +1,176 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_query::physical_plan::TaskContext;
use common_recordbatch::RecordBatch;
use datafusion::datasource::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder};
use snafu::ResultExt;
use table::metadata::TableType;
use crate::error::{CreateRecordBatchSnafu, Result};
use crate::CatalogProviderRef;
pub(super) struct InformationSchemaTables {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
}
impl InformationSchemaTables {
pub(super) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
let schema = Arc::new(Schema::new(vec![
ColumnSchema::new("table_catalog", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_schema", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_name", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_type", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_id", ConcreteDataType::uint32_datatype(), true),
ColumnSchema::new("engine", ConcreteDataType::string_datatype(), true),
]));
Self {
schema,
catalog_name,
catalog_provider,
}
}
fn builder(&self) -> InformationSchemaTablesBuilder {
InformationSchemaTablesBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_provider.clone(),
)
}
}
/// Builds the `information_schema.TABLE` table row by row
///
/// Columns are based on <https://www.postgresql.org/docs/current/infoschema-columns.html>
struct InformationSchemaTablesBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
table_names: StringVectorBuilder,
table_types: StringVectorBuilder,
table_ids: UInt32VectorBuilder,
engines: StringVectorBuilder,
}
impl InformationSchemaTablesBuilder {
fn new(schema: SchemaRef, catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
Self {
schema,
catalog_name,
catalog_provider,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
table_types: StringVectorBuilder::with_capacity(42),
table_ids: UInt32VectorBuilder::with_capacity(42),
engines: StringVectorBuilder::with_capacity(42),
}
}
/// Construct the `information_schema.tables` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
for schema_name in self.catalog_provider.schema_names().await? {
if schema_name == INFORMATION_SCHEMA_NAME {
continue;
}
let Some(schema) = self.catalog_provider.schema(&schema_name).await? else { continue };
for table_name in schema.table_names().await? {
let Some(table) = schema.table(&table_name).await? else { continue };
let table_info = table.table_info();
self.add_table(
&catalog_name,
&schema_name,
&table_name,
table.table_type(),
Some(table_info.ident.table_id),
Some(&table_info.meta.engine),
);
}
}
self.finish()
}
fn add_table(
&mut self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
table_type: TableType,
table_id: Option<u32>,
engine: Option<&str>,
) {
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.table_types.push(Some(match table_type {
TableType::Base => "BASE TABLE",
TableType::View => "VIEW",
TableType::Temporary => "LOCAL TEMPORARY",
}));
self.table_ids.push(table_id);
self.engines.push(engine);
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.catalog_names.finish()),
Arc::new(self.schema_names.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.table_types.finish()),
Arc::new(self.table_ids.finish()),
Arc::new(self.engines.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaTables {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}


@@ -19,8 +19,8 @@ use std::fmt::{Debug, Formatter};
use std::sync::Arc;
use api::v1::meta::{RegionStat, TableName};
use common_telemetry::info;
use snafu::{OptionExt, ResultExt};
use common_telemetry::{info, warn};
use snafu::ResultExt;
use table::engine::{EngineContext, TableEngineRef};
use table::metadata::TableId;
use table::requests::CreateTableRequest;
@@ -31,62 +31,49 @@ pub use crate::schema::{SchemaProvider, SchemaProviderRef};
pub mod error;
pub mod helper;
pub(crate) mod information_schema;
pub mod local;
mod metrics;
pub mod remote;
pub mod schema;
pub mod system;
pub mod table_source;
pub mod tables;
/// Represent a list of named catalogs
pub trait CatalogList: Sync + Send {
/// Returns the catalog list as [`Any`](std::any::Any)
/// so that it can be downcast to a specific implementation.
fn as_any(&self) -> &dyn Any;
/// Adds a new catalog to this catalog list
/// If a catalog of the same name existed before, it is replaced in the list and returned.
fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>>;
/// Retrieves the list of available catalog names
fn catalog_names(&self) -> Result<Vec<String>>;
/// Retrieves a specific catalog by name, provided it exists.
fn catalog(&self, name: &str) -> Result<Option<CatalogProviderRef>>;
}
/// Represents a catalog, comprising a number of named schemas.
#[async_trait::async_trait]
pub trait CatalogProvider: Sync + Send {
/// Returns the catalog provider as [`Any`](std::any::Any)
/// so that it can be downcast to a specific implementation.
fn as_any(&self) -> &dyn Any;
/// Retrieves the list of available schema names in this catalog.
fn schema_names(&self) -> Result<Vec<String>>;
async fn schema_names(&self) -> Result<Vec<String>>;
/// Registers schema to this catalog.
fn register_schema(
async fn register_schema(
&self,
name: String,
schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>>;
/// Retrieves a specific schema from the catalog by name, provided it exists.
fn schema(&self, name: &str) -> Result<Option<SchemaProviderRef>>;
async fn schema(&self, name: &str) -> Result<Option<SchemaProviderRef>>;
}
pub type CatalogListRef = Arc<dyn CatalogList>;
pub type CatalogProviderRef = Arc<dyn CatalogProvider>;
#[async_trait::async_trait]
pub trait CatalogManager: CatalogList {
pub trait CatalogManager: Send + Sync {
/// Starts a catalog manager.
async fn start(&self) -> Result<()>;
async fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>>;
/// Registers a table within given catalog/schema to catalog manager,
/// returns whether the table registered.
async fn register_table(&self, request: RegisterTableRequest) -> Result<bool>;
@@ -106,7 +93,11 @@ pub trait CatalogManager: CatalogList {
async fn register_system_table(&self, request: RegisterSystemTableRequest)
-> error::Result<()>;
fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>>;
async fn catalog_names(&self) -> Result<Vec<String>>;
async fn catalog(&self, catalog: &str) -> Result<Option<CatalogProviderRef>>;
async fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>>;
/// Returns the table by catalog, schema and table name.
async fn table(
@@ -115,6 +106,8 @@ pub trait CatalogManager: CatalogList {
schema: &str,
table_name: &str,
) -> Result<Option<TableRef>>;
fn as_any(&self) -> &dyn Any;
}
pub type CatalogManagerRef = Arc<dyn CatalogManager>;
@@ -228,43 +221,32 @@ pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
/// The stat of regions in the datanode node.
/// The number of regions can be got from len of vec.
pub async fn region_stats(catalog_manager: &CatalogManagerRef) -> Result<Vec<RegionStat>> {
///
/// Ignores any errors occurred during iterating regions. The intention of this method is to
/// collect region stats that will be carried in Datanode's heartbeat to Metasrv, so it's a
/// "try our best" job.
pub async fn datanode_stat(catalog_manager: &CatalogManagerRef) -> (u64, Vec<RegionStat>) {
let mut region_number: u64 = 0;
let mut region_stats = Vec::new();
for catalog_name in catalog_manager.catalog_names()? {
let catalog =
catalog_manager
.catalog(&catalog_name)?
.context(error::CatalogNotFoundSnafu {
catalog_name: &catalog_name,
})?;
for schema_name in catalog.schema_names()? {
let schema = catalog
.schema(&schema_name)?
.context(error::SchemaNotFoundSnafu {
catalog: &catalog_name,
schema: &schema_name,
})?;
let Ok(catalog_names) = catalog_manager.catalog_names().await else { return (region_number, region_stats) };
for catalog_name in catalog_names {
let Ok(Some(catalog)) = catalog_manager.catalog(&catalog_name).await else { continue };
for table_name in schema.table_names()? {
let table =
schema
.table(&table_name)
.await?
.context(error::TableNotFoundSnafu {
table_info: &table_name,
})?;
let Ok(schema_names) = catalog.schema_names().await else { continue };
for schema_name in schema_names {
let Ok(Some(schema)) = catalog.schema(&schema_name).await else { continue };
region_stats.extend(
table
.region_stats()
.context(error::RegionStatsSnafu {
catalog: &catalog_name,
schema: &schema_name,
table: &table_name,
})?
.into_iter()
.map(|stat| RegionStat {
let Ok(table_names) = schema.table_names().await else { continue };
for table_name in table_names {
let Ok(Some(table)) = schema.table(&table_name).await else { continue };
let region_numbers = &table.table_info().meta.region_numbers;
region_number += region_numbers.len() as u64;
match table.region_stats() {
Ok(stats) => {
let stats = stats.into_iter().map(|stat| RegionStat {
region_id: stat.region_id,
table_name: Some(TableName {
catalog_name: catalog_name.clone(),
@@ -273,10 +255,16 @@ pub async fn region_stats(catalog_manager: &CatalogManagerRef) -> Result<Vec<Reg
}),
approximate_bytes: stat.disk_usage_bytes as i64,
..Default::default()
}),
);
});
region_stats.extend(stats);
}
Err(e) => {
warn!("Failed to get region status, err: {:?}", e);
}
};
}
}
}
Ok(region_stats)
(region_number, region_stats)
}


@@ -18,7 +18,7 @@ use std::sync::Arc;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MIN_USER_TABLE_ID,
SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
MITO_ENGINE, SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
};
use common_catalog::format_full_table_name;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
@@ -26,8 +26,10 @@ use common_telemetry::{error, info};
use datatypes::prelude::ScalarVector;
use datatypes::vectors::{BinaryVector, UInt8Vector};
use futures_util::lock::Mutex;
use metrics::increment_gauge;
use snafu::{ensure, OptionExt, ResultExt};
use table::engine::{EngineContext, TableEngineRef};
use table::engine::manager::TableEngineManagerRef;
use table::engine::EngineContext;
use table::metadata::TableId;
use table::requests::OpenTableRequest;
use table::table::numbers::NumbersTable;
@@ -37,7 +39,8 @@ use table::TableRef;
use crate::error::{
self, CatalogNotFoundSnafu, IllegalManagerStateSnafu, OpenTableSnafu, ReadSystemCatalogSnafu,
Result, SchemaExistsSnafu, SchemaNotFoundSnafu, SystemCatalogSnafu,
SystemCatalogTypeMismatchSnafu, TableExistsSnafu, TableNotFoundSnafu,
SystemCatalogTypeMismatchSnafu, TableEngineNotFoundSnafu, TableExistsSnafu, TableNotExistSnafu,
TableNotFoundSnafu,
};
use crate::local::memory::{MemoryCatalogManager, MemoryCatalogProvider, MemorySchemaProvider};
use crate::system::{
@@ -46,16 +49,16 @@ use crate::system::{
};
use crate::tables::SystemCatalog;
use crate::{
handle_system_table_request, CatalogList, CatalogManager, CatalogProvider, CatalogProviderRef,
DeregisterTableRequest, RegisterSchemaRequest, RegisterSystemTableRequest,
RegisterTableRequest, RenameTableRequest, SchemaProvider, SchemaProviderRef,
handle_system_table_request, CatalogManager, CatalogProviderRef, DeregisterTableRequest,
RegisterSchemaRequest, RegisterSystemTableRequest, RegisterTableRequest, RenameTableRequest,
SchemaProviderRef,
};
/// A `CatalogManager` consists of a system catalog and a bunch of user catalogs.
pub struct LocalCatalogManager {
system: Arc<SystemCatalog>,
catalogs: Arc<MemoryCatalogManager>,
engine: TableEngineRef,
engine_manager: TableEngineManagerRef,
next_table_id: AtomicU32,
init_lock: Mutex<bool>,
register_lock: Mutex<()>,
@@ -63,19 +66,20 @@ pub struct LocalCatalogManager {
}
impl LocalCatalogManager {
/// Create a new [CatalogManager] with given user catalogs and table engine
pub async fn try_new(engine: TableEngineRef) -> Result<Self> {
/// Create a new [CatalogManager] with given user catalogs and mito engine
pub async fn try_new(engine_manager: TableEngineManagerRef) -> Result<Self> {
let engine = engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
let table = SystemCatalogTable::new(engine.clone()).await?;
let memory_catalog_list = crate::local::memory::new_memory_catalog_list()?;
let system_catalog = Arc::new(SystemCatalog::new(
table,
memory_catalog_list.clone(),
engine.clone(),
));
let system_catalog = Arc::new(SystemCatalog::new(table));
Ok(Self {
system: system_catalog,
catalogs: memory_catalog_list,
engine,
engine_manager,
next_table_id: AtomicU32::new(MIN_USER_TABLE_ID),
init_lock: Mutex::new(false),
register_lock: Mutex::new(()),
@@ -85,7 +89,7 @@ impl LocalCatalogManager {
/// Scan all entries from system catalog table
pub async fn init(&self) -> Result<()> {
self.init_system_catalog()?;
self.init_system_catalog().await?;
let system_records = self.system.information_schema.system.records().await?;
let entries = self.collect_system_catalog_entries(system_records).await?;
let max_table_id = self.handle_system_catalog_entries(entries).await?;
@@ -100,31 +104,38 @@ impl LocalCatalogManager {
// Processing system table hooks
let mut sys_table_requests = self.system_table_requests.lock().await;
handle_system_table_request(self, self.engine.clone(), &mut sys_table_requests).await?;
let engine = self
.engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
handle_system_table_request(self, engine, &mut sys_table_requests).await?;
Ok(())
}
fn init_system_catalog(&self) -> Result<()> {
async fn init_system_catalog(&self) -> Result<()> {
let system_schema = Arc::new(MemorySchemaProvider::new());
system_schema.register_table(
system_schema.register_table_sync(
SYSTEM_CATALOG_TABLE_NAME.to_string(),
self.system.information_schema.system.clone(),
)?;
let system_catalog = Arc::new(MemoryCatalogProvider::new());
system_catalog.register_schema(INFORMATION_SCHEMA_NAME.to_string(), system_schema)?;
system_catalog.register_schema_sync(INFORMATION_SCHEMA_NAME.to_string(), system_schema)?;
self.catalogs
.register_catalog(SYSTEM_CATALOG_NAME.to_string(), system_catalog)?;
.register_catalog_sync(SYSTEM_CATALOG_NAME.to_string(), system_catalog)?;
let default_catalog = Arc::new(MemoryCatalogProvider::new());
let default_schema = Arc::new(MemorySchemaProvider::new());
// Add numbers table for test
let table = Arc::new(NumbersTable::default());
default_schema.register_table("numbers".to_string(), table)?;
default_schema.register_table_sync("numbers".to_string(), table)?;
default_catalog.register_schema(DEFAULT_SCHEMA_NAME.to_string(), default_schema)?;
default_catalog.register_schema_sync(DEFAULT_SCHEMA_NAME.to_string(), default_schema)?;
self.catalogs
.register_catalog(DEFAULT_CATALOG_NAME.to_string(), default_catalog)?;
.register_catalog_sync(DEFAULT_CATALOG_NAME.to_string(), default_catalog)?;
Ok(())
}
@@ -203,16 +214,17 @@ impl LocalCatalogManager {
info!("Register catalog: {}", c.catalog_name);
}
Entry::Schema(s) => {
let catalog =
self.catalogs
.catalog(&s.catalog_name)?
.context(CatalogNotFoundSnafu {
catalog_name: &s.catalog_name,
})?;
catalog.register_schema(
s.schema_name.clone(),
Arc::new(MemorySchemaProvider::new()),
)?;
self.catalogs
.catalog(&s.catalog_name)
.await?
.context(CatalogNotFoundSnafu {
catalog_name: &s.catalog_name,
})?
.register_schema(
s.schema_name.clone(),
Arc::new(MemorySchemaProvider::new()),
)
.await?;
info!("Registered schema: {:?}", s);
}
Entry::Table(t) => {
@@ -233,14 +245,16 @@ impl LocalCatalogManager {
}
async fn open_and_register_table(&self, t: &TableEntry) -> Result<()> {
let catalog = self
.catalogs
.catalog(&t.catalog_name)?
.context(CatalogNotFoundSnafu {
catalog_name: &t.catalog_name,
})?;
let catalog =
self.catalogs
.catalog(&t.catalog_name)
.await?
.context(CatalogNotFoundSnafu {
catalog_name: &t.catalog_name,
})?;
let schema = catalog
.schema(&t.schema_name)?
.schema(&t.schema_name)
.await?
.context(SchemaNotFoundSnafu {
catalog: &t.catalog_name,
schema: &t.schema_name,
@@ -253,9 +267,14 @@ impl LocalCatalogManager {
table_name: t.table_name.clone(),
table_id: t.table_id,
};
let engine = self
.engine_manager
.engine(&t.engine)
.context(TableEngineNotFoundSnafu {
engine_name: &t.engine,
})?;
let option = self
.engine
let option = engine
.open_table(&context, request)
.await
.with_context(|_| OpenTableSnafu {
@@ -271,39 +290,11 @@ impl LocalCatalogManager {
),
})?;
schema.register_table(t.table_name.clone(), option)?;
schema.register_table(t.table_name.clone(), option).await?;
Ok(())
}
}
impl CatalogList for LocalCatalogManager {
fn as_any(&self) -> &dyn Any {
self
}
fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>> {
self.catalogs.register_catalog(name, catalog)
}
fn catalog_names(&self) -> Result<Vec<String>> {
let mut res = self.catalogs.catalog_names()?;
res.push(SYSTEM_CATALOG_NAME.to_string());
Ok(res)
}
fn catalog(&self, name: &str) -> Result<Option<CatalogProviderRef>> {
if name.eq_ignore_ascii_case(SYSTEM_CATALOG_NAME) {
Ok(Some(self.system.clone()))
} else {
self.catalogs.catalog(name)
}
}
}
#[async_trait::async_trait]
impl TableIdProvider for LocalCatalogManager {
async fn next_table_id(&self) -> table::Result<TableId> {
@@ -334,10 +325,12 @@ impl CatalogManager for LocalCatalogManager {
let catalog = self
.catalogs
.catalog(catalog_name)?
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)?
.schema(schema_name)
.await?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
@@ -364,6 +357,7 @@ impl CatalogManager for LocalCatalogManager {
// Try to register table with same table id, just ignore.
Ok(false)
} else {
let engine = request.table.table_info().meta.engine.to_string();
// table does not exist
self.system
.register_table(
@@ -371,9 +365,17 @@ impl CatalogManager for LocalCatalogManager {
schema_name.clone(),
request.table_name.clone(),
request.table_id,
engine,
)
.await?;
schema.register_table(request.table_name, request.table)?;
schema
.register_table(request.table_name, request.table)
.await?;
increment_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(catalog_name, schema_name)],
);
Ok(true)
}
}
@@ -394,16 +396,33 @@ impl CatalogManager for LocalCatalogManager {
let catalog = self
.catalogs
.catalog(catalog_name)?
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)?
.schema(schema_name)
.await?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
let _lock = self.register_lock.lock().await;
ensure!(
!schema.table_exist(&request.new_table_name).await?,
TableExistsSnafu {
table: &request.new_table_name
}
);
let old_table = schema
.table(&request.table_name)
.await?
.context(TableNotExistSnafu {
table: &request.table_name,
})?;
let engine = old_table.table_info().meta.engine.to_string();
// rename table in system catalog
self.system
.register_table(
@@ -411,11 +430,15 @@ impl CatalogManager for LocalCatalogManager {
schema_name.clone(),
request.new_table_name.clone(),
request.table_id,
engine,
)
.await?;
Ok(schema
.rename_table(&request.table_name, request.new_table_name)
.is_ok())
let renamed = schema
.rename_table(&request.table_name, request.new_table_name.clone())
.await
.is_ok();
Ok(renamed)
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
@@ -464,13 +487,14 @@ impl CatalogManager for LocalCatalogManager {
let catalog = self
.catalogs
.catalog(catalog_name)?
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
{
let _lock = self.register_lock.lock().await;
ensure!(
catalog.schema(schema_name)?.is_none(),
catalog.schema(schema_name).await?.is_none(),
SchemaExistsSnafu {
schema: schema_name,
}
@@ -478,12 +502,18 @@ impl CatalogManager for LocalCatalogManager {
self.system
.register_schema(request.catalog, schema_name.clone())
.await?;
catalog.register_schema(request.schema, Arc::new(MemorySchemaProvider::new()))?;
catalog
.register_schema(request.schema, Arc::new(MemorySchemaProvider::new()))
.await?;
Ok(true)
}
}
async fn register_system_table(&self, request: RegisterSystemTableRequest) -> Result<()> {
let catalog_name = request.create_table_request.catalog_name.clone();
let schema_name = request.create_table_request.schema_name.clone();
ensure!(
!*self.init_lock.lock().await,
IllegalManagerStateSnafu {
@@ -493,17 +523,23 @@ impl CatalogManager for LocalCatalogManager {
let mut sys_table_requests = self.system_table_requests.lock().await;
sys_table_requests.push(request);
increment_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(&catalog_name, &schema_name)],
);
Ok(())
}
fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>> {
async fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>> {
self.catalogs
.catalog(catalog)?
.catalog(catalog)
.await?
.context(CatalogNotFoundSnafu {
catalog_name: catalog,
})?
.schema(schema)
.await
}
async fn table(
@@ -514,22 +550,50 @@ impl CatalogManager for LocalCatalogManager {
) -> Result<Option<TableRef>> {
let catalog = self
.catalogs
.catalog(catalog_name)?
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)?
.schema(schema_name)
.await?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
schema.table(table_name).await
}
async fn catalog(&self, catalog: &str) -> Result<Option<CatalogProviderRef>> {
if catalog.eq_ignore_ascii_case(SYSTEM_CATALOG_NAME) {
Ok(Some(self.system.clone()))
} else {
self.catalogs.catalog(catalog).await
}
}
async fn catalog_names(&self) -> Result<Vec<String>> {
self.catalogs.catalog_names().await
}
async fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>> {
self.catalogs.register_catalog(name, catalog).await
}
fn as_any(&self) -> &dyn Any {
self
}
}
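A hedged usage sketch of the now fully-async manager API: every lookup is awaited end to end. The catalog, schema, and table names below are illustrative.

use crate::CatalogManager;

// Resolve a table through the async CatalogManager API and print its id.
async fn print_table_id(manager: &dyn CatalogManager) -> crate::error::Result<()> {
    if let Some(table) = manager.table("greptime", "public", "numbers").await? {
        println!("table id: {}", table.table_info().ident.table_id);
    }
    Ok(())
}

Taking `&dyn CatalogManager` keeps the sketch independent of whether the local or the remote manager is behind it.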
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
use mito::engine::MITO_ENGINE;
use super::*;
use crate::system::{CatalogEntry, SchemaEntry};
@@ -541,6 +605,7 @@ mod tests {
schema_name: "S1".to_string(),
table_name: "T1".to_string(),
table_id: 1,
engine: MITO_ENGINE.to_string(),
}),
Entry::Catalog(CatalogEntry {
catalog_name: "C2".to_string(),
@@ -561,6 +626,7 @@ mod tests {
schema_name: "S1".to_string(),
table_name: "T2".to_string(),
table_id: 2,
engine: MITO_ENGINE.to_string(),
}),
];
let res = LocalCatalogManager::sort_entries(vec);


@@ -21,6 +21,7 @@ use std::sync::{Arc, RwLock};
use async_trait::async_trait;
use common_catalog::consts::MIN_USER_TABLE_ID;
use common_telemetry::error;
use metrics::{decrement_gauge, increment_gauge};
use snafu::{ensure, OptionExt};
use table::metadata::TableId;
use table::table::TableIdProvider;
@@ -31,7 +32,7 @@ use crate::error::{
};
use crate::schema::SchemaProvider;
use crate::{
CatalogList, CatalogManager, CatalogProvider, CatalogProviderRef, DeregisterTableRequest,
CatalogManager, CatalogProvider, CatalogProviderRef, DeregisterTableRequest,
RegisterSchemaRequest, RegisterSystemTableRequest, RegisterTableRequest, RenameTableRequest,
SchemaProviderRef,
};
@@ -51,10 +52,10 @@ impl Default for MemoryCatalogManager {
};
let default_catalog = Arc::new(MemoryCatalogProvider::new());
manager
.register_catalog("greptime".to_string(), default_catalog.clone())
.register_catalog_sync("greptime".to_string(), default_catalog.clone())
.unwrap();
default_catalog
.register_schema("public".to_string(), Arc::new(MemorySchemaProvider::new()))
.register_schema_sync("public".to_string(), Arc::new(MemorySchemaProvider::new()))
.unwrap();
manager
}
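The new `*_sync` helpers are what keep this non-async `Default` implementation possible; the async trait methods later in this diff simply delegate to them. A hedged sketch of building a manager outside of `Default`, using the types defined in this file (error handling is unwrap-for-brevity):

use std::sync::Arc;

// Build a MemoryCatalogManager with one extra catalog and schema, using the
// synchronous registration helpers introduced in this diff.
fn build_manager() -> MemoryCatalogManager {
    let manager = MemoryCatalogManager::default();
    let catalog = Arc::new(MemoryCatalogProvider::new());
    manager
        .register_catalog_sync("my_catalog".to_string(), catalog.clone())
        .unwrap();
    catalog
        .register_schema_sync("my_schema".to_string(), Arc::new(MemorySchemaProvider::new()))
        .unwrap();
    manager
}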
@@ -75,70 +76,81 @@ impl CatalogManager for MemoryCatalogManager {
}
async fn register_table(&self, request: RegisterTableRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get(&request.catalog)
let schema = self
.catalog(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.clone();
let schema = catalog
.schema(&request.schema)?
.with_context(|| SchemaNotFoundSnafu {
.schema(&request.schema)
.await?
.context(SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
increment_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(&request.catalog, &request.schema)],
);
schema
.register_table(request.table_name, request.table)
.await
.map(|v| v.is_none())
}
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get(&request.catalog)
let catalog = self
.catalog(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.clone();
let schema = catalog
.schema(&request.schema)?
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
let schema =
catalog
.schema(&request.schema)
.await?
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
Ok(schema
.rename_table(&request.table_name, request.new_table_name)
.await
.is_ok())
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get(&request.catalog)
let schema = self
.catalog(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.clone();
let schema = catalog
.schema(&request.schema)?
.schema(&request.schema)
.await?
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
decrement_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(&request.catalog, &request.schema)],
);
schema
.deregister_table(&request.table_name)
.await
.map(|v| v.is_some())
}
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get(&request.catalog)
let catalog = self
.catalog(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?;
catalog.register_schema(request.schema, Arc::new(MemorySchemaProvider::new()))?;
catalog
.register_schema(request.schema, Arc::new(MemorySchemaProvider::new()))
.await?;
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_SCHEMA_COUNT, 1.0);
Ok(true)
}
@@ -147,10 +159,9 @@ impl CatalogManager for MemoryCatalogManager {
Ok(())
}
fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>> {
let catalogs = self.catalogs.read().unwrap();
if let Some(c) = catalogs.get(catalog) {
c.schema(schema)
async fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>> {
if let Some(c) = self.catalog(catalog) {
c.schema(schema).await
} else {
Ok(None)
}
@@ -162,15 +173,31 @@ impl CatalogManager for MemoryCatalogManager {
schema: &str,
table_name: &str,
) -> Result<Option<TableRef>> {
let catalog = {
let c = self.catalogs.read().unwrap();
let Some(c) = c.get(catalog) else { return Ok(None) };
c.clone()
};
match catalog.schema(schema)? {
None => Ok(None),
Some(s) => s.table(table_name).await,
}
let Some(catalog) = self
.catalog(catalog) else { return Ok(None)};
let Some(s) = catalog.schema(schema).await? else { return Ok(None) };
s.table(table_name).await
}
async fn catalog(&self, catalog: &str) -> Result<Option<CatalogProviderRef>> {
Ok(self.catalogs.read().unwrap().get(catalog).cloned())
}
async fn catalog_names(&self) -> Result<Vec<String>> {
Ok(self.catalogs.read().unwrap().keys().cloned().collect())
}
async fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>> {
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_CATALOG_COUNT, 1.0);
self.register_catalog_sync(name, catalog)
}
fn as_any(&self) -> &dyn Any {
self
}
}
@@ -192,14 +219,8 @@ impl MemoryCatalogManager {
}
}
}
}
impl CatalogList for MemoryCatalogManager {
fn as_any(&self) -> &dyn Any {
self
}
fn register_catalog(
pub fn register_catalog_sync(
&self,
name: String,
catalog: CatalogProviderRef,
@@ -208,14 +229,8 @@ impl CatalogList for MemoryCatalogManager {
Ok(catalogs.insert(name, catalog))
}
fn catalog_names(&self) -> Result<Vec<String>> {
let catalogs = self.catalogs.read().unwrap();
Ok(catalogs.keys().map(|s| s.to_string()).collect())
}
fn catalog(&self, name: &str) -> Result<Option<CatalogProviderRef>> {
let catalogs = self.catalogs.read().unwrap();
Ok(catalogs.get(name).cloned())
fn catalog(&self, catalog_name: &str) -> Option<CatalogProviderRef> {
self.catalogs.read().unwrap().get(catalog_name).cloned()
}
}
@@ -237,19 +252,13 @@ impl MemoryCatalogProvider {
schemas: RwLock::new(HashMap::new()),
}
}
}
impl CatalogProvider for MemoryCatalogProvider {
fn as_any(&self) -> &dyn Any {
self
}
fn schema_names(&self) -> Result<Vec<String>> {
pub fn schema_names_sync(&self) -> Result<Vec<String>> {
let schemas = self.schemas.read().unwrap();
Ok(schemas.keys().cloned().collect())
}
fn register_schema(
pub fn register_schema_sync(
&self,
name: String,
schema: SchemaProviderRef,
@@ -259,15 +268,39 @@ impl CatalogProvider for MemoryCatalogProvider {
!schemas.contains_key(&name),
error::SchemaExistsSnafu { schema: &name }
);
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_SCHEMA_COUNT, 1.0);
Ok(schemas.insert(name, schema))
}
fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>> {
pub fn schema_sync(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>> {
let schemas = self.schemas.read().unwrap();
Ok(schemas.get(name).cloned())
}
}
#[async_trait::async_trait]
impl CatalogProvider for MemoryCatalogProvider {
fn as_any(&self) -> &dyn Any {
self
}
async fn schema_names(&self) -> Result<Vec<String>> {
self.schema_names_sync()
}
async fn register_schema(
&self,
name: String,
schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>> {
self.register_schema_sync(name, schema)
}
async fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>> {
self.schema_sync(name)
}
}
/// Simple in-memory implementation of a schema.
pub struct MemorySchemaProvider {
tables: RwLock<HashMap<String, TableRef>>,
@@ -280,31 +313,8 @@ impl MemorySchemaProvider {
tables: RwLock::new(HashMap::new()),
}
}
}
impl Default for MemorySchemaProvider {
fn default() -> Self {
Self::new()
}
}
#[async_trait]
impl SchemaProvider for MemorySchemaProvider {
fn as_any(&self) -> &dyn Any {
self
}
fn table_names(&self) -> Result<Vec<String>> {
let tables = self.tables.read().unwrap();
Ok(tables.keys().cloned().collect())
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let tables = self.tables.read().unwrap();
Ok(tables.get(name).cloned())
}
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
pub fn register_table_sync(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
let mut tables = self.tables.write().unwrap();
if let Some(existing) = tables.get(name.as_str()) {
// if table with the same name but different table id exists, then it's a fatal bug
@@ -322,28 +332,71 @@ impl SchemaProvider for MemorySchemaProvider {
}
}
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
pub fn rename_table_sync(&self, name: &str, new_name: String) -> Result<TableRef> {
let mut tables = self.tables.write().unwrap();
if tables.get(name).is_some() {
let table = tables.remove(name).unwrap();
tables.insert(new_name, table.clone());
Ok(table)
} else {
TableNotFoundSnafu {
let Some(table) = tables.remove(name) else {
return TableNotFoundSnafu {
table_info: name.to_string(),
}
.fail()?
}
.fail()?;
};
let e = match tables.entry(new_name) {
Entry::Vacant(e) => e,
Entry::Occupied(e) => {
return TableExistsSnafu { table: e.key() }.fail();
}
};
e.insert(table.clone());
Ok(table)
}
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
pub fn table_exist_sync(&self, name: &str) -> Result<bool> {
let tables = self.tables.read().unwrap();
Ok(tables.contains_key(name))
}
pub fn deregister_table_sync(&self, name: &str) -> Result<Option<TableRef>> {
let mut tables = self.tables.write().unwrap();
Ok(tables.remove(name))
}
}
fn table_exist(&self, name: &str) -> Result<bool> {
impl Default for MemorySchemaProvider {
fn default() -> Self {
Self::new()
}
}
#[async_trait]
impl SchemaProvider for MemorySchemaProvider {
fn as_any(&self) -> &dyn Any {
self
}
async fn table_names(&self) -> Result<Vec<String>> {
let tables = self.tables.read().unwrap();
Ok(tables.contains_key(name))
Ok(tables.keys().cloned().collect())
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let tables = self.tables.read().unwrap();
Ok(tables.get(name).cloned())
}
async fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
self.register_table_sync(name, table)
}
async fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
self.rename_table_sync(name, new_name)
}
async fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
self.deregister_table_sync(name)
}
async fn table_exist(&self, name: &str) -> Result<bool> {
self.table_exist_sync(name)
}
}
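The rewritten `rename_table_sync` refuses to overwrite a table already registered under the new name, where the old version inserted unconditionally. A standalone sketch of the same Entry-based pattern on a plain `HashMap`; note that, as in the code above, the source entry stays removed when the destination name is taken, which is why callers such as `LocalCatalogManager::rename_table` check for collisions first.

use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Rename `name` to `new_name`, returning an error instead of silently
// replacing an existing entry under `new_name`.
fn rename<V: Clone>(map: &mut HashMap<String, V>, name: &str, new_name: String) -> Result<V, String> {
    let Some(value) = map.remove(name) else {
        return Err(format!("table not found: {name}"));
    };
    let slot = match map.entry(new_name) {
        Entry::Vacant(slot) => slot,
        Entry::Occupied(slot) => return Err(format!("table already exists: {}", slot.key())),
    };
    slot.insert(value.clone());
    Ok(value)
}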
@@ -364,15 +417,20 @@ mod tests {
#[tokio::test]
async fn test_new_memory_catalog_list() {
let catalog_list = new_memory_catalog_list().unwrap();
let default_catalog = catalog_list.catalog(DEFAULT_CATALOG_NAME).unwrap().unwrap();
let default_catalog = CatalogManager::catalog(&*catalog_list, DEFAULT_CATALOG_NAME)
.await
.unwrap()
.unwrap();
let default_schema = default_catalog
.schema(DEFAULT_SCHEMA_NAME)
.await
.unwrap()
.unwrap();
default_schema
.register_table("numbers".to_string(), Arc::new(NumbersTable::default()))
.await
.unwrap();
let table = default_schema.table("numbers").await.unwrap();
@@ -384,19 +442,18 @@ mod tests {
async fn test_mem_provider() {
let provider = MemorySchemaProvider::new();
let table_name = "numbers";
assert!(!provider.table_exist(table_name).unwrap());
assert!(provider.deregister_table(table_name).unwrap().is_none());
assert!(!provider.table_exist_sync(table_name).unwrap());
provider.deregister_table_sync(table_name).unwrap();
let test_table = NumbersTable::default();
// register table successfully
assert!(provider
.register_table(table_name.to_string(), Arc::new(test_table))
.register_table_sync(table_name.to_string(), Arc::new(test_table))
.unwrap()
.is_none());
assert!(provider.table_exist(table_name).unwrap());
assert!(provider.table_exist_sync(table_name).unwrap());
let other_table = NumbersTable::new(12);
let result = provider.register_table(table_name.to_string(), Arc::new(other_table));
let result = provider.register_table_sync(table_name.to_string(), Arc::new(other_table));
let err = result.err().unwrap();
assert!(err.backtrace_opt().is_some());
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}
@@ -404,27 +461,27 @@ mod tests {
async fn test_mem_provider_rename_table() {
let provider = MemorySchemaProvider::new();
let table_name = "num";
assert!(!provider.table_exist(table_name).unwrap());
assert!(!provider.table_exist_sync(table_name).unwrap());
let test_table: TableRef = Arc::new(NumbersTable::default());
// register test table
assert!(provider
.register_table(table_name.to_string(), test_table.clone())
.register_table_sync(table_name.to_string(), test_table.clone())
.unwrap()
.is_none());
assert!(provider.table_exist(table_name).unwrap());
assert!(provider.table_exist_sync(table_name).unwrap());
// rename test table
let new_table_name = "numbers";
provider
.rename_table(table_name, new_table_name.to_string())
.rename_table_sync(table_name, new_table_name.to_string())
.unwrap();
// test old table name not exist
assert!(!provider.table_exist(table_name).unwrap());
assert!(provider.deregister_table(table_name).unwrap().is_none());
assert!(!provider.table_exist_sync(table_name).unwrap());
provider.deregister_table_sync(table_name).unwrap();
// test new table name exists
assert!(provider.table_exist(new_table_name).unwrap());
assert!(provider.table_exist_sync(new_table_name).unwrap());
let registered_table = provider.table(new_table_name).await.unwrap().unwrap();
assert_eq!(
registered_table.table_info().ident.table_id,
@@ -432,7 +489,9 @@ mod tests {
);
let other_table = Arc::new(NumbersTable::new(2));
let result = provider.register_table(new_table_name.to_string(), other_table);
let result = provider
.register_table(new_table_name.to_string(), other_table)
.await;
let err = result.err().unwrap();
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}
@@ -442,6 +501,7 @@ mod tests {
let catalog = MemoryCatalogManager::default();
let schema = catalog
.schema(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.await
.unwrap()
.unwrap();
@@ -457,10 +517,10 @@ mod tests {
table,
};
assert!(catalog.register_table(register_table_req).await.unwrap());
assert!(schema.table_exist(table_name).unwrap());
assert!(schema.table_exist(table_name).await.unwrap());
// rename table
let new_table_name = "numbers";
let new_table_name = "numbers_new";
let rename_table_req = RenameTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
@@ -469,8 +529,8 @@ mod tests {
table_id,
};
assert!(catalog.rename_table(rename_table_req).await.unwrap());
assert!(!schema.table_exist(table_name).unwrap());
assert!(schema.table_exist(new_table_name).unwrap());
assert!(!schema.table_exist(table_name).await.unwrap());
assert!(schema.table_exist(new_table_name).await.unwrap());
let registered_table = catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
@@ -504,6 +564,7 @@ mod tests {
let catalog = MemoryCatalogManager::default();
let schema = catalog
.schema(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.await
.unwrap()
.unwrap();
@@ -515,7 +576,7 @@ mod tests {
table: Arc::new(NumbersTable::default()),
};
catalog.register_table(register_table_req).await.unwrap();
assert!(schema.table_exist("numbers").unwrap());
assert!(schema.table_exist("numbers").await.unwrap());
let deregister_table_req = DeregisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
@@ -526,6 +587,6 @@ mod tests {
.deregister_table(deregister_table_req)
.await
.unwrap();
assert!(!schema.table_exist("numbers").unwrap());
assert!(!schema.table_exist("numbers").await.unwrap());
}
}


@@ -0,0 +1,26 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_catalog::build_db_string;
pub(crate) const METRIC_DB_LABEL: &str = "db";
pub(crate) const METRIC_CATALOG_MANAGER_CATALOG_COUNT: &str = "catalog.catalog_count";
pub(crate) const METRIC_CATALOG_MANAGER_SCHEMA_COUNT: &str = "catalog.schema_count";
pub(crate) const METRIC_CATALOG_MANAGER_TABLE_COUNT: &str = "catalog.table_count";
#[inline]
pub(crate) fn db_label(catalog: &str, schema: &str) -> (&'static str, String) {
(METRIC_DB_LABEL, build_db_string(catalog, schema))
}
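A hedged note on how these constants are consumed: the managers in this diff attach the `db` label to the table-count gauge so it can be broken down per catalog/schema pair (the exact label value comes from `build_db_string`, which is assumed to join the two names). A minimal sketch mirroring the `increment_gauge!` calls above:

use metrics::increment_gauge;

// Bump the per-database table count, as the register paths in this diff do.
pub(crate) fn record_table_registered(catalog: &str, schema: &str) {
    increment_gauge!(
        crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
        1.0,
        &[crate::metrics::db_label(catalog, schema)],
    );
}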

File diff suppressed because it is too large


@@ -18,7 +18,7 @@ use std::sync::Arc;
use async_trait::async_trait;
use table::TableRef;
use crate::error::Result;
use crate::error::{NotSupportedSnafu, Result};
/// Represents a schema, comprising a number of named tables.
#[async_trait]
@@ -28,27 +28,42 @@ pub trait SchemaProvider: Sync + Send {
fn as_any(&self) -> &dyn Any;
/// Retrieves the list of available table names in this schema.
fn table_names(&self) -> Result<Vec<String>>;
async fn table_names(&self) -> Result<Vec<String>>;
/// Retrieves a specific table from the schema by name, provided it exists.
async fn table(&self, name: &str) -> Result<Option<TableRef>>;
/// If supported by the implementation, adds a new table to this schema.
/// If a table with the same name already exists, it returns a "Table already exists" error.
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>>;
async fn register_table(&self, name: String, _table: TableRef) -> Result<Option<TableRef>> {
NotSupportedSnafu {
op: format!("register_table({name}, <table>)"),
}
.fail()
}
/// If supported by the implementation, renames an existing table from this schema and returns it.
/// If no table of that name exists, returns "Table not found" error.
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef>;
async fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
NotSupportedSnafu {
op: format!("rename_table({name}, {new_name})"),
}
.fail()
}
/// If supported by the implementation, removes an existing table from this schema and returns it.
/// If no table of that name exists, returns Ok(None).
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>>;
async fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
NotSupportedSnafu {
op: format!("deregister_table({name})"),
}
.fail()
}
/// If supported by the implementation, checks whether a table with the given name exists in this schema provider.
/// Returns false if there is no matching table, true otherwise.
fn table_exist(&self, name: &str) -> Result<bool>;
async fn table_exist(&self, name: &str) -> Result<bool>;
}
pub type SchemaProviderRef = Arc<dyn SchemaProvider>;
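With `register_table`, `rename_table`, and `deregister_table` now defaulting to a `NotSupported` error, a read-only provider only has to implement the lookup methods. A hedged sketch (the `StaticSchemaProvider` name is made up; the other types are the ones imported in this file):

use std::any::Any;
use std::collections::HashMap;

use async_trait::async_trait;
use table::TableRef;

use crate::error::Result;
use crate::schema::SchemaProvider;

/// A read-only schema provider: the mutating methods fall back to the
/// trait's default NotSupported implementations.
pub struct StaticSchemaProvider {
    tables: HashMap<String, TableRef>,
}

impl StaticSchemaProvider {
    pub fn new(tables: HashMap<String, TableRef>) -> Self {
        Self { tables }
    }
}

#[async_trait]
impl SchemaProvider for StaticSchemaProvider {
    fn as_any(&self) -> &dyn Any {
        self
    }

    async fn table_names(&self) -> Result<Vec<String>> {
        Ok(self.tables.keys().cloned().collect())
    }

    async fn table(&self, name: &str) -> Result<Option<TableRef>> {
        Ok(self.tables.get(name).cloned())
    }

    async fn table_exist(&self, name: &str) -> Result<bool> {
        Ok(self.tables.contains_key(name))
    }
}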


@@ -17,8 +17,8 @@ use std::collections::HashMap;
use std::sync::Arc;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, SYSTEM_CATALOG_NAME,
SYSTEM_CATALOG_TABLE_ID, SYSTEM_CATALOG_TABLE_NAME,
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MITO_ENGINE,
SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_ID, SYSTEM_CATALOG_TABLE_NAME,
};
use common_query::logical_plan::Expr;
use common_query::physical_plan::{PhysicalPlanRef, SessionContext};
@@ -80,6 +80,10 @@ impl Table for SystemCatalogTable {
async fn delete(&self, request: DeleteRequest) -> table::Result<usize> {
self.0.delete(request).await
}
fn statistics(&self) -> Option<table::stats::TableStatistics> {
self.0.statistics()
}
}
impl SystemCatalogTable {
@@ -112,6 +116,7 @@ impl SystemCatalogTable {
primary_key_indices: vec![ENTRY_TYPE_INDEX, KEY_INDEX],
create_if_not_exists: true,
table_options: TableOptions::default(),
engine: engine.name().to_string(),
};
let table = engine
@@ -194,12 +199,13 @@ pub fn build_table_insert_request(
schema: String,
table_name: String,
table_id: TableId,
engine: String,
) -> InsertRequest {
let entry_key = format_table_entry_key(&catalog, &schema, table_id);
build_insert_request(
EntryType::Table,
entry_key.as_bytes(),
serde_json::to_string(&TableEntryValue { table_name })
serde_json::to_string(&TableEntryValue { table_name, engine })
.unwrap()
.as_bytes(),
)
@@ -330,6 +336,7 @@ pub fn decode_system_catalog(
schema_name: table_parts[1].to_string(),
table_name: table_meta.table_name,
table_id,
engine: table_meta.engine,
}))
}
}
@@ -385,11 +392,19 @@ pub struct TableEntry {
pub schema_name: String,
pub table_name: String,
pub table_id: TableId,
pub engine: String,
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
pub struct TableEntryValue {
pub table_name: String,
#[serde(default = "mito_engine")]
pub engine: String,
}
fn mito_engine() -> String {
MITO_ENGINE.to_string()
}
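The `#[serde(default = "mito_engine")]` attribute keeps system-catalog entries written before the `engine` field existed deserializable: they fall back to the mito engine. A self-contained sketch of that behavior with serde_json, assuming `MITO_ENGINE` resolves to the string "mito":

use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
struct TableEntryValue {
    table_name: String,
    #[serde(default = "mito_engine")]
    engine: String,
}

fn mito_engine() -> String {
    "mito".to_string()
}

fn main() {
    // Entries written by an older version carry no `engine` field ...
    let old: TableEntryValue = serde_json::from_str(r#"{"table_name":"my_table"}"#).unwrap();
    // ... and deserialize with the default engine.
    assert_eq!(old.engine, "mito");

    // New entries record the engine explicitly.
    let new: TableEntryValue =
        serde_json::from_str(r#"{"table_name":"my_table","engine":"mito"}"#).unwrap();
    assert_eq!(new.engine, "mito");
}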
#[cfg(test)]
@@ -399,8 +414,8 @@ mod tests {
use datatypes::value::Value;
use log_store::NoopLogStore;
use mito::config::EngineConfig;
use mito::engine::MitoEngine;
use object_store::{ObjectStore, ObjectStoreBuilder};
use mito::engine::{MitoEngine, MITO_ENGINE};
use object_store::ObjectStore;
use storage::compaction::noop::NoopCompactionScheduler;
use storage::config::EngineConfig as StorageEngineConfig;
use storage::EngineImpl;
@@ -482,11 +497,9 @@ mod tests {
pub async fn prepare_table_engine() -> (TempDir, TableEngineRef) {
let dir = create_temp_dir("system-table-test");
let store_dir = dir.path().to_string_lossy();
let accessor = object_store::services::Fs::default()
.root(&store_dir)
.build()
.unwrap();
let object_store = ObjectStore::new(accessor).finish();
let mut builder = object_store::services::Fs::default();
builder.root(&store_dir);
let object_store = ObjectStore::new(builder).unwrap().finish();
let noop_compaction_scheduler = Arc::new(NoopCompactionScheduler::default());
let table_engine = Arc::new(MitoEngine::new(
EngineConfig::default(),
@@ -530,6 +543,7 @@ mod tests {
DEFAULT_SCHEMA_NAME.to_string(),
"my_table".to_string(),
1,
MITO_ENGINE.to_string(),
);
let result = catalog_table.insert(table_insertion).await.unwrap();
assert_eq!(result, 1);
@@ -550,6 +564,7 @@ mod tests {
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "my_table".to_string(),
table_id: 1,
engine: MITO_ENGINE.to_string(),
});
assert_eq!(entry, expected);


@@ -15,8 +15,9 @@
use std::collections::HashMap;
use std::sync::Arc;
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_catalog::format_full_table_name;
use datafusion::common::{OwnedTableReference, ResolvedTableReference, TableReference};
use datafusion::common::{ResolvedTableReference, TableReference};
use datafusion::datasource::provider_as_source;
use datafusion::logical_expr::TableSource;
use session::context::QueryContext;
@@ -26,10 +27,11 @@ use table::table::adapter::DfTableProviderAdapter;
use crate::error::{
CatalogNotFoundSnafu, QueryAccessDeniedSnafu, Result, SchemaNotFoundSnafu, TableNotExistSnafu,
};
use crate::CatalogListRef;
use crate::information_schema::InformationSchemaProvider;
use crate::CatalogManagerRef;
pub struct DfTableSourceProvider {
catalog_list: CatalogListRef,
catalog_manager: CatalogManagerRef,
resolved_tables: HashMap<String, Arc<dyn TableSource>>,
disallow_cross_schema_query: bool,
default_catalog: String,
@@ -38,12 +40,12 @@ pub struct DfTableSourceProvider {
impl DfTableSourceProvider {
pub fn new(
catalog_list: CatalogListRef,
catalog_manager: CatalogManagerRef,
disallow_cross_schema_query: bool,
query_ctx: &QueryContext,
) -> Self {
Self {
catalog_list,
catalog_manager,
disallow_cross_schema_query,
resolved_tables: HashMap::new(),
default_catalog: query_ctx.current_catalog(),
@@ -87,9 +89,8 @@ impl DfTableSourceProvider {
pub async fn resolve_table(
&mut self,
table_ref: OwnedTableReference,
table_ref: TableReference<'_>,
) -> Result<Arc<dyn TableSource>> {
let table_ref = table_ref.as_table_reference();
let table_ref = self.resolve_table_ref(table_ref)?;
let resolved_name = table_ref.to_string();
@@ -101,14 +102,30 @@ impl DfTableSourceProvider {
let schema_name = table_ref.schema.as_ref();
let table_name = table_ref.table.as_ref();
let catalog = self
.catalog_list
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog.schema(schema_name)?.context(SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
let schema = if schema_name != INFORMATION_SCHEMA_NAME {
let catalog = self
.catalog_manager
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
catalog
.schema(schema_name)
.await?
.context(SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?
} else {
let catalog_provider = self
.catalog_manager
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
Arc::new(InformationSchemaProvider::new(
catalog_name.to_string(),
catalog_provider,
))
};
let table = schema
.table(table_name)
.await?
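A hedged usage sketch of the branch above: a reference to `information_schema` now resolves against a provider built on the fly for the requested catalog, rather than a pre-registered schema. The names are illustrative, `DfTableSourceProvider` is the struct from this file, and the struct-variant construction of `TableReference` assumes the DataFusion version pinned by this tree.

use std::sync::Arc;

use datafusion::common::TableReference;
use datafusion::logical_expr::TableSource;

use crate::error::Result;

// Resolve `greptime.information_schema.tables` through the provider.
async fn resolve_information_schema(
    provider: &mut DfTableSourceProvider,
) -> Result<Arc<dyn TableSource>> {
    let table_ref = TableReference::Full {
        catalog: "greptime".into(),
        schema: "information_schema".into(),
        table: "tables".into(),
    };
    provider.resolve_table(table_ref).await
}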


@@ -15,28 +15,12 @@
// The `tables` table in system catalog keeps a record of all tables created by user.
use std::any::Any;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
use async_stream::stream;
use async_trait::async_trait;
use common_catalog::consts::{INFORMATION_SCHEMA_NAME, SYSTEM_CATALOG_TABLE_NAME};
use common_error::ext::BoxedError;
use common_query::logical_plan::Expr;
use common_query::physical_plan::PhysicalPlanRef;
use common_recordbatch::error::Result as RecordBatchResult;
use common_recordbatch::{RecordBatch, RecordBatchStream};
use datatypes::prelude::{ConcreteDataType, DataType};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::ValueRef;
use datatypes::vectors::VectorRef;
use futures::Stream;
use snafu::ResultExt;
use table::engine::TableEngineRef;
use table::error::TablesRecordBatchSnafu;
use table::metadata::{TableId, TableInfoRef};
use table::table::scan::SimpleTableScan;
use table::metadata::TableId;
use table::{Table, TableRef};
use crate::error::{self, Error, InsertCatalogRecordSnafu, Result as CatalogResult};
@@ -44,160 +28,9 @@ use crate::system::{
build_schema_insert_request, build_table_deletion_request, build_table_insert_request,
SystemCatalogTable,
};
use crate::{
CatalogListRef, CatalogProvider, DeregisterTableRequest, SchemaProvider, SchemaProviderRef,
};
/// Tables holds all tables created by user.
pub struct Tables {
schema: SchemaRef,
catalogs: CatalogListRef,
engine_name: String,
}
impl Tables {
pub fn new(catalogs: CatalogListRef, engine_name: String) -> Self {
Self {
schema: Arc::new(build_schema_for_tables()),
catalogs,
engine_name,
}
}
}
#[async_trait::async_trait]
impl Table for Tables {
fn as_any(&self) -> &dyn Any {
self
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn table_info(&self) -> TableInfoRef {
unreachable!("Tables does not support table_info method")
}
async fn scan(
&self,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
_limit: Option<usize>,
) -> table::error::Result<PhysicalPlanRef> {
let catalogs = self.catalogs.clone();
let schema_ref = self.schema.clone();
let engine_name = self.engine_name.clone();
let stream = stream!({
for catalog_name in catalogs
.catalog_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
{
let catalog = catalogs
.catalog(&catalog_name)
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.unwrap();
for schema_name in catalog
.schema_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
{
let mut tables_in_schema = Vec::with_capacity(
catalog
.schema_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.len(),
);
let schema = catalog
.schema(&schema_name)
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.unwrap();
for table_name in schema
.table_names()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
{
tables_in_schema.push(table_name);
}
let vec = tables_to_record_batch(
&catalog_name,
&schema_name,
tables_in_schema,
&engine_name,
);
let record_batch_res = RecordBatch::new(schema_ref.clone(), vec);
yield record_batch_res;
}
}
});
let stream = Box::pin(TablesRecordBatchStream {
schema: self.schema.clone(),
stream: Box::pin(stream),
});
Ok(Arc::new(SimpleTableScan::new(stream)))
}
}
/// Convert tables info to `RecordBatch`.
fn tables_to_record_batch(
catalog_name: &str,
schema_name: &str,
table_names: Vec<String>,
engine: &str,
) -> Vec<VectorRef> {
let mut catalog_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
let mut schema_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
let mut table_name_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
let mut engine_vec =
ConcreteDataType::string_datatype().create_mutable_vector(table_names.len());
for table_name in table_names {
// Safety: All these vectors are string type.
catalog_vec.push_value_ref(ValueRef::String(catalog_name));
schema_vec.push_value_ref(ValueRef::String(schema_name));
table_name_vec.push_value_ref(ValueRef::String(&table_name));
engine_vec.push_value_ref(ValueRef::String(engine));
}
vec![
catalog_vec.to_vector(),
schema_vec.to_vector(),
table_name_vec.to_vector(),
engine_vec.to_vector(),
]
}
pub struct TablesRecordBatchStream {
schema: SchemaRef,
stream: Pin<Box<dyn Stream<Item = RecordBatchResult<RecordBatch>> + Send>>,
}
impl Stream for TablesRecordBatchStream {
type Item = RecordBatchResult<RecordBatch>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
Pin::new(&mut self.stream).poll_next(cx)
}
}
impl RecordBatchStream for TablesRecordBatchStream {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
}
use crate::{CatalogProvider, DeregisterTableRequest, SchemaProvider, SchemaProviderRef};
pub struct InformationSchema {
pub tables: Arc<Tables>,
pub system: Arc<SystemCatalogTable>,
}
@@ -207,42 +40,20 @@ impl SchemaProvider for InformationSchema {
self
}
fn table_names(&self) -> Result<Vec<String>, Error> {
Ok(vec![
"tables".to_string(),
SYSTEM_CATALOG_TABLE_NAME.to_string(),
])
async fn table_names(&self) -> Result<Vec<String>, Error> {
Ok(vec![SYSTEM_CATALOG_TABLE_NAME.to_string()])
}
async fn table(&self, name: &str) -> Result<Option<TableRef>, Error> {
if name.eq_ignore_ascii_case("tables") {
Ok(Some(self.tables.clone()))
} else if name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME) {
if name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME) {
Ok(Some(self.system.clone()))
} else {
Ok(None)
}
}
fn register_table(
&self,
_name: String,
_table: TableRef,
) -> crate::error::Result<Option<TableRef>> {
panic!("System catalog & schema does not support register table")
}
fn rename_table(&self, _name: &str, _new_name: String) -> crate::error::Result<TableRef> {
unimplemented!("System catalog & schema does not support rename table")
}
fn deregister_table(&self, _name: &str) -> crate::error::Result<Option<TableRef>> {
panic!("System catalog & schema does not support deregister table")
}
fn table_exist(&self, name: &str) -> Result<bool, Error> {
Ok(name.eq_ignore_ascii_case("tables")
|| name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME))
async fn table_exist(&self, name: &str) -> Result<bool, Error> {
Ok(name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME))
}
}
@@ -251,13 +62,8 @@ pub struct SystemCatalog {
}
impl SystemCatalog {
pub fn new(
system: SystemCatalogTable,
catalogs: CatalogListRef,
engine: TableEngineRef,
) -> Self {
pub(crate) fn new(system: SystemCatalogTable) -> Self {
let schema = InformationSchema {
tables: Arc::new(Tables::new(catalogs, engine.name().to_string())),
system: Arc::new(system),
};
Self {
@@ -271,8 +77,9 @@ impl SystemCatalog {
schema: String,
table_name: String,
table_id: TableId,
engine: String,
) -> crate::error::Result<usize> {
let request = build_table_insert_request(catalog, schema, table_name, table_id);
let request = build_table_insert_request(catalog, schema, table_name, table_id, engine);
self.information_schema
.system
.insert(request)
@@ -309,16 +116,17 @@ impl SystemCatalog {
}
}
#[async_trait::async_trait]
impl CatalogProvider for SystemCatalog {
fn as_any(&self) -> &dyn Any {
self
}
fn schema_names(&self) -> Result<Vec<String>, Error> {
async fn schema_names(&self) -> Result<Vec<String>, Error> {
Ok(vec![INFORMATION_SCHEMA_NAME.to_string()])
}
fn register_schema(
async fn register_schema(
&self,
_name: String,
_schema: SchemaProviderRef,
@@ -326,7 +134,7 @@ impl CatalogProvider for SystemCatalog {
panic!("System catalog does not support registering schema!")
}
fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>, Error> {
async fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>, Error> {
if name.eq_ignore_ascii_case(INFORMATION_SCHEMA_NAME) {
Ok(Some(self.information_schema.clone()))
} else {
@@ -334,104 +142,3 @@ impl CatalogProvider for SystemCatalog {
}
}
}
fn build_schema_for_tables() -> Schema {
let cols = vec![
ColumnSchema::new(
"catalog".to_string(),
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
"schema".to_string(),
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
"table_name".to_string(),
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
"engine".to_string(),
ConcreteDataType::string_datatype(),
false,
),
];
Schema::new(cols)
}
#[cfg(test)]
mod tests {
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_query::physical_plan::SessionContext;
use futures_util::StreamExt;
use table::table::numbers::NumbersTable;
use super::*;
use crate::local::memory::new_memory_catalog_list;
use crate::CatalogList;
#[tokio::test]
async fn test_tables() {
let catalog_list = new_memory_catalog_list().unwrap();
let schema = catalog_list
.catalog(DEFAULT_CATALOG_NAME)
.unwrap()
.unwrap()
.schema(DEFAULT_SCHEMA_NAME)
.unwrap()
.unwrap();
schema
.register_table("test_table".to_string(), Arc::new(NumbersTable::default()))
.unwrap();
let tables = Tables::new(catalog_list, "test_engine".to_string());
let tables_stream = tables.scan(None, &[], None).await.unwrap();
let session_ctx = SessionContext::new();
let mut tables_stream = tables_stream.execute(0, session_ctx.task_ctx()).unwrap();
if let Some(t) = tables_stream.next().await {
let batch = t.unwrap();
assert_eq!(1, batch.num_rows());
assert_eq!(4, batch.num_columns());
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(0).data_type()
);
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(1).data_type()
);
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(2).data_type()
);
assert_eq!(
ConcreteDataType::string_datatype(),
batch.column(3).data_type()
);
assert_eq!(
"greptime",
batch.column(0).get_ref(0).as_string().unwrap().unwrap()
);
assert_eq!(
"public",
batch.column(1).get_ref(0).as_string().unwrap().unwrap()
);
assert_eq!(
"test_table",
batch.column(2).get_ref(0).as_string().unwrap().unwrap()
);
assert_eq!(
"test_engine",
batch.column(3).get_ref(0).as_string().unwrap().unwrap()
);
} else {
panic!("Record batch should not be empty!")
}
}
}


@@ -20,28 +20,32 @@ mod tests {
use catalog::{CatalogManager, RegisterTableRequest, RenameTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_telemetry::{error, info};
use common_test_util::temp_dir::TempDir;
use mito::config::EngineConfig;
use table::engine::manager::MemoryTableEngineManager;
use table::table::numbers::NumbersTable;
use table::TableRef;
use tokio::sync::Mutex;
async fn create_local_catalog_manager() -> Result<LocalCatalogManager, catalog::error::Error> {
let (_dir, object_store) =
async fn create_local_catalog_manager(
) -> Result<(TempDir, LocalCatalogManager), catalog::error::Error> {
let (dir, object_store) =
mito::table::test_util::new_test_object_store("setup_mock_engine_and_table").await;
let mock_engine = Arc::new(mito::table::test_util::MockMitoEngine::new(
EngineConfig::default(),
mito::table::test_util::MockEngine::default(),
object_store,
));
let catalog_manager = LocalCatalogManager::try_new(mock_engine).await.unwrap();
let engine_manager = Arc::new(MemoryTableEngineManager::new(mock_engine.clone()));
let catalog_manager = LocalCatalogManager::try_new(engine_manager).await.unwrap();
catalog_manager.start().await?;
Ok(catalog_manager)
Ok((dir, catalog_manager))
}
#[tokio::test]
async fn test_rename_table() {
common_telemetry::init_default_ut_logging();
let catalog_manager = create_local_catalog_manager().await.unwrap();
let (_dir, catalog_manager) = create_local_catalog_manager().await.unwrap();
// register table
let table_name = "test_table";
let table_id = 42;
@@ -79,7 +83,7 @@ mod tests {
#[tokio::test]
async fn test_duplicate_register() {
let catalog_manager = create_local_catalog_manager().await.unwrap();
let (_dir, catalog_manager) = create_local_catalog_manager().await.unwrap();
let request = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
@@ -116,8 +120,9 @@ mod tests {
fn test_concurrent_register() {
common_telemetry::init_default_ut_logging();
let rt = Arc::new(tokio::runtime::Builder::new_multi_thread().build().unwrap());
let catalog_manager =
Arc::new(rt.block_on(async { create_local_catalog_manager().await.unwrap() }));
let (_dir, catalog_manager) =
rt.block_on(async { create_local_catalog_manager().await.unwrap() });
let catalog_manager = Arc::new(catalog_manager);
let succeed: Arc<Mutex<Option<TableRef>>> = Arc::new(Mutex::new(None));


@@ -20,7 +20,9 @@ use std::sync::Arc;
use async_stream::stream;
use catalog::error::Error;
use catalog::helper::{CatalogKey, CatalogValue, SchemaKey, SchemaValue};
use catalog::remote::{Kv, KvBackend, ValueIter};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_recordbatch::RecordBatch;
use common_telemetry::logging::info;
use datatypes::data_type::ConcreteDataType;
@@ -34,11 +36,36 @@ use table::test_util::MemTable;
use table::TableRef;
use tokio::sync::RwLock;
#[derive(Default)]
pub struct MockKvBackend {
map: RwLock<BTreeMap<Vec<u8>, Vec<u8>>>,
}
impl Default for MockKvBackend {
fn default() -> Self {
let mut map = BTreeMap::default();
let catalog_value = CatalogValue {}.as_bytes().unwrap();
let schema_value = SchemaValue {}.as_bytes().unwrap();
let default_catalog_key = CatalogKey {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
}
.to_string();
let default_schema_key = SchemaKey {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
}
.to_string();
// create default catalog and schema
map.insert(default_catalog_key.into(), catalog_value);
map.insert(default_schema_key.into(), schema_value);
let map = RwLock::new(map);
Self { map }
}
}
impl Display for MockKvBackend {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
futures::executor::block_on(async {


@@ -26,10 +26,11 @@ mod tests {
use catalog::remote::{
KvBackend, KvBackendRef, RemoteCatalogManager, RemoteCatalogProvider, RemoteSchemaProvider,
};
use catalog::{CatalogList, CatalogManager, RegisterTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use catalog::{CatalogManager, RegisterTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO_ENGINE};
use datatypes::schema::RawSchema;
use futures_util::StreamExt;
use table::engine::manager::{MemoryTableEngineManager, TableEngineManagerRef};
use table::engine::{EngineContext, TableEngineRef};
use table::requests::CreateTableRequest;
@@ -77,32 +78,47 @@ mod tests {
async fn prepare_components(
node_id: u64,
) -> (KvBackendRef, TableEngineRef, Arc<RemoteCatalogManager>) {
) -> (
KvBackendRef,
TableEngineRef,
Arc<RemoteCatalogManager>,
TableEngineManagerRef,
) {
let backend = Arc::new(MockKvBackend::default()) as KvBackendRef;
let table_engine = Arc::new(MockTableEngine::default());
let engine_manager = Arc::new(MemoryTableEngineManager::alias(
MITO_ENGINE.to_string(),
table_engine.clone(),
));
let catalog_manager =
RemoteCatalogManager::new(table_engine.clone(), node_id, backend.clone());
RemoteCatalogManager::new(engine_manager.clone(), node_id, backend.clone());
catalog_manager.start().await.unwrap();
(backend, table_engine, Arc::new(catalog_manager))
(
backend,
table_engine,
Arc::new(catalog_manager),
engine_manager as Arc<_>,
)
}
#[tokio::test]
async fn test_remote_catalog_default() {
common_telemetry::init_default_ut_logging();
let node_id = 42;
let (_, _, catalog_manager) = prepare_components(node_id).await;
let (_, _, catalog_manager, _) = prepare_components(node_id).await;
assert_eq!(
vec![DEFAULT_CATALOG_NAME.to_string()],
catalog_manager.catalog_names().unwrap()
catalog_manager.catalog_names().await.unwrap()
);
let default_catalog = catalog_manager
.catalog(DEFAULT_CATALOG_NAME)
.await
.unwrap()
.unwrap();
assert_eq!(
vec![DEFAULT_SCHEMA_NAME.to_string()],
default_catalog.schema_names().unwrap()
default_catalog.schema_names().await.unwrap()
);
}
@@ -110,7 +126,7 @@ mod tests {
async fn test_remote_catalog_register_nonexistent() {
common_telemetry::init_default_ut_logging();
let node_id = 42;
let (_, table_engine, catalog_manager) = prepare_components(node_id).await;
let (_, table_engine, catalog_manager, _) = prepare_components(node_id).await;
// register a new table with a nonexistent catalog
let catalog_name = "nonexistent_catalog".to_string();
let schema_name = "nonexistent_schema".to_string();
@@ -131,6 +147,7 @@ mod tests {
primary_key_indices: vec![],
create_if_not_exists: false,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
},
)
.await
@@ -154,21 +171,22 @@ mod tests {
#[tokio::test]
async fn test_register_table() {
let node_id = 42;
let (_, table_engine, catalog_manager) = prepare_components(node_id).await;
let (_, table_engine, catalog_manager, _) = prepare_components(node_id).await;
let default_catalog = catalog_manager
.catalog(DEFAULT_CATALOG_NAME)
.await
.unwrap()
.unwrap();
assert_eq!(
vec![DEFAULT_SCHEMA_NAME.to_string()],
default_catalog.schema_names().unwrap()
default_catalog.schema_names().await.unwrap()
);
let default_schema = default_catalog
.schema(DEFAULT_SCHEMA_NAME)
.await
.unwrap()
.unwrap();
assert_eq!(vec!["numbers"], default_schema.table_names().unwrap());
// register a new table with a nonexistent catalog
let catalog_name = DEFAULT_CATALOG_NAME.to_string();
@@ -191,6 +209,7 @@ mod tests {
primary_key_indices: vec![],
create_if_not_exists: false,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
},
)
.await
@@ -204,37 +223,35 @@ mod tests {
};
assert!(catalog_manager.register_table(reg_req).await.unwrap());
assert_eq!(
HashSet::from([table_name, "numbers".to_string()]),
default_schema
.table_names()
.unwrap()
.into_iter()
.collect::<HashSet<_>>()
vec![table_name],
default_schema.table_names().await.unwrap()
);
}
#[tokio::test]
async fn test_register_catalog_schema_table() {
let node_id = 42;
let (backend, table_engine, catalog_manager) = prepare_components(node_id).await;
let (backend, table_engine, catalog_manager, engine_manager) =
prepare_components(node_id).await;
let catalog_name = "test_catalog".to_string();
let schema_name = "nonexistent_schema".to_string();
let catalog = Arc::new(RemoteCatalogProvider::new(
catalog_name.clone(),
backend.clone(),
engine_manager.clone(),
node_id,
));
// register catalog to catalog manager
catalog_manager
.register_catalog(catalog_name.clone(), catalog)
CatalogManager::register_catalog(&*catalog_manager, catalog_name.clone(), catalog)
.await
.unwrap();
assert_eq!(
HashSet::<String>::from_iter(
vec![DEFAULT_CATALOG_NAME.to_string(), catalog_name.clone()].into_iter()
),
HashSet::from_iter(catalog_manager.catalog_names().unwrap().into_iter())
HashSet::from_iter(catalog_manager.catalog_names().await.unwrap().into_iter())
);
let table_to_register = table_engine
@@ -251,6 +268,7 @@ mod tests {
primary_key_indices: vec![],
create_if_not_exists: false,
table_options: Default::default(),
engine: MITO_ENGINE.to_string(),
},
)
.await
@@ -274,24 +292,32 @@ mod tests {
let new_catalog = catalog_manager
.catalog(&catalog_name)
.await
.unwrap()
.expect("catalog should exist since it's already registered");
let schema = Arc::new(RemoteSchemaProvider::new(
catalog_name.clone(),
schema_name.clone(),
node_id,
engine_manager,
backend.clone(),
));
let prev = new_catalog
.register_schema(schema_name.clone(), schema.clone())
.await
.expect("Register schema should not fail");
assert!(prev.is_none());
assert!(catalog_manager.register_table(reg_req).await.unwrap());
assert_eq!(
HashSet::from([schema_name.clone()]),
new_catalog.schema_names().unwrap().into_iter().collect()
new_catalog
.schema_names()
.await
.unwrap()
.into_iter()
.collect()
)
}
}


@@ -23,7 +23,7 @@ enum_dispatch = "0.3"
futures-util.workspace = true
parking_lot = "0.12"
prost.workspace = true
rand = "0.8"
rand.workspace = true
snafu.workspace = true
tonic.workspace = true
@@ -37,4 +37,4 @@ prost.workspace = true
[dev-dependencies.substrait_proto]
package = "substrait"
version = "0.4"
version = "0.7"


@@ -14,7 +14,7 @@
use api::v1::{ColumnDataType, ColumnDef, CreateTableExpr, TableId};
use client::{Client, Database};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO_ENGINE};
use prost::Message;
use substrait_proto::proto::plan_rel::RelType as PlanRelType;
use substrait_proto::proto::read_rel::{NamedTable, ReadType};
@@ -64,6 +64,7 @@ async fn run() {
table_options: Default::default(),
table_id: Some(TableId { id: 1024 }),
region_ids: vec![0],
engine: MITO_ENGINE.to_string(),
};
let db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);


@@ -14,6 +14,10 @@
use std::sync::Arc;
use api::v1::greptime_database_client::GreptimeDatabaseClient;
use api::v1::health_check_client::HealthCheckClient;
use api::v1::prometheus_gateway_client::PrometheusGatewayClient;
use api::v1::HealthCheckRequest;
use arrow_flight::flight_service_client::FlightServiceClient;
use common_grpc::channel_manager::ChannelManager;
use parking_lot::RwLock;
@@ -23,6 +27,10 @@ use tonic::transport::Channel;
use crate::load_balance::{LoadBalance, Loadbalancer};
use crate::{error, Result};
pub(crate) struct DatabaseClient {
pub(crate) inner: GreptimeDatabaseClient<Channel>,
}
pub(crate) struct FlightClient {
addr: String,
client: FlightServiceClient<Channel>,
@@ -118,7 +126,7 @@ impl Client {
self.inner.set_peers(urls);
}
pub(crate) fn make_client(&self) -> Result<FlightClient> {
fn find_channel(&self) -> Result<(String, Channel)> {
let addr = self
.inner
.get_peer()
@@ -131,11 +139,35 @@ impl Client {
.channel_manager
.get(&addr)
.context(error::CreateChannelSnafu { addr: &addr })?;
Ok((addr, channel))
}
pub(crate) fn make_flight_client(&self) -> Result<FlightClient> {
let (addr, channel) = self.find_channel()?;
Ok(FlightClient {
addr,
client: FlightServiceClient::new(channel),
})
}
pub(crate) fn make_database_client(&self) -> Result<DatabaseClient> {
let (_, channel) = self.find_channel()?;
Ok(DatabaseClient {
inner: GreptimeDatabaseClient::new(channel),
})
}
pub fn make_prometheus_gateway_client(&self) -> Result<PrometheusGatewayClient<Channel>> {
let (_, channel) = self.find_channel()?;
Ok(PrometheusGatewayClient::new(channel))
}
pub async fn health_check(&self) -> Result<()> {
let (_, channel) = self.find_channel()?;
let mut client = HealthCheckClient::new(channel);
client.health_check(HealthCheckRequest {}).await?;
Ok(())
}
}
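A hedged usage sketch: the new health check reuses the same peer/channel lookup as the Flight and database clients, so it is a cheap way to verify connectivity before issuing real requests. Constructing and peering the `Client` is outside this diff.

use crate::{Client, Result};

pub async fn ensure_reachable(client: &Client) -> Result<()> {
    // Errors if no peer is configured or the HealthCheck RPC fails.
    client.health_check().await?;
    // The Prometheus gateway client is built over the same channel manager.
    let _prom = client.make_prometheus_gateway_client()?;
    Ok(())
}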
#[cfg(test)]


@@ -12,47 +12,67 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::str::FromStr;
use api::v1::auth_header::AuthScheme;
use api::v1::ddl_request::Expr as DdlExpr;
use api::v1::greptime_request::Request;
use api::v1::query_request::Query;
use api::v1::{
AlterExpr, AuthHeader, CreateTableExpr, DdlRequest, DropTableExpr, FlushTableExpr,
GreptimeRequest, InsertRequest, PromRangeQuery, QueryRequest, RequestHeader,
greptime_response, AffectedRows, AlterExpr, AuthHeader, CreateTableExpr, DdlRequest,
DeleteRequest, DropTableExpr, FlushTableExpr, GreptimeRequest, InsertRequest, PromRangeQuery,
QueryRequest, RequestHeader,
};
use arrow_flight::{FlightData, Ticket};
use common_error::prelude::*;
use common_grpc::flight::{flight_messages_to_recordbatches, FlightDecoder, FlightMessage};
use common_query::Output;
use common_telemetry::logging;
use common_telemetry::{logging, timer};
use futures_util::{TryFutureExt, TryStreamExt};
use prost::Message;
use snafu::{ensure, ResultExt};
use crate::error::{ConvertFlightDataSnafu, IllegalFlightMessagesSnafu};
use crate::{error, Client, Result};
use crate::error::{
ConvertFlightDataSnafu, IllegalDatabaseResponseSnafu, IllegalFlightMessagesSnafu,
};
use crate::{error, metrics, Client, Result};
#[derive(Clone, Debug)]
#[derive(Clone, Debug, Default)]
pub struct Database {
// The "catalog" and "schema" to be used in processing the requests at the server side.
// They are the "hint" or "context", just like how the "database" in "USE" statement is treated in MySQL.
// They will be carried in the request header.
catalog: String,
schema: String,
// The dbname follows the same naming rules as our MySQL, PostgreSQL and HTTP
// protocols. The server resolves dbname with higher priority than catalog/schema.
dbname: String,
client: Client,
ctx: FlightContext,
}
impl Database {
/// Create database service client using catalog and schema
pub fn new(catalog: impl Into<String>, schema: impl Into<String>, client: Client) -> Self {
Self {
catalog: catalog.into(),
schema: schema.into(),
client,
ctx: FlightContext::default(),
..Default::default()
}
}
/// Create database service client using dbname.
///
/// This API is designed for external usage. `dbname` is:
///
/// - the name of database when using GreptimeDB standalone or cluster
/// - the name provided by GreptimeCloud or other multi-tenant GreptimeDB
/// environment
pub fn new_with_dbname(dbname: impl Into<String>, client: Client) -> Self {
Self {
dbname: dbname.into(),
client,
..Default::default()
}
}
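A hedged usage sketch of the dbname-based constructor: multi-tenant callers address the database by name and let the server resolve it, instead of supplying catalog and schema. The database name and query are illustrative, `Database` is the struct in this file, and constructing the `Client` is assumed to happen elsewhere.

use common_query::Output;

use crate::{Client, Result};

async fn query_by_dbname(client: Client) -> Result<Output> {
    let db = Database::new_with_dbname("my_db", client);
    db.sql("SELECT 1").await
}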
@@ -72,17 +92,55 @@ impl Database {
self.schema = schema.into();
}
pub fn dbname(&self) -> &String {
&self.dbname
}
pub fn set_dbname(&mut self, dbname: impl Into<String>) {
self.dbname = dbname.into();
}
pub fn set_auth(&mut self, auth: AuthScheme) {
self.ctx.auth_header = Some(AuthHeader {
auth_scheme: Some(auth),
});
}
pub async fn insert(&self, request: InsertRequest) -> Result<Output> {
self.do_get(Request::Insert(request)).await
pub async fn insert(&self, request: InsertRequest) -> Result<u32> {
let _timer = timer!(metrics::METRIC_GRPC_INSERT);
self.handle(Request::Insert(request)).await
}
pub async fn delete(&self, request: DeleteRequest) -> Result<u32> {
let _timer = timer!(metrics::METRIC_GRPC_DELETE);
self.handle(Request::Delete(request)).await
}
async fn handle(&self, request: Request) -> Result<u32> {
let mut client = self.client.make_database_client()?.inner;
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
dbname: self.dbname.clone(),
}),
request: Some(request),
};
let response = client
.handle(request)
.await?
.into_inner()
.response
.context(IllegalDatabaseResponseSnafu {
err_msg: "GreptimeResponse is empty",
})?;
let greptime_response::Response::AffectedRows(AffectedRows { value }) = response;
Ok(value)
}
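A hedged usage sketch of the new unary path: `insert` (and `delete`) go through the GreptimeDatabase service via `handle` and return the affected row count directly, instead of a Flight-based `Output`. The default `InsertRequest` below is only a placeholder; a real request needs a table name and columns.

use api::v1::InsertRequest;

use crate::Result;

async fn write(db: &Database) -> Result<()> {
    let rows: u32 = db.insert(InsertRequest::default()).await?;
    println!("inserted {rows} rows");
    Ok(())
}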
pub async fn sql(&self, sql: &str) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_SQL);
self.do_get(Request::Query(QueryRequest {
query: Some(Query::Sql(sql.to_string())),
}))
@@ -90,6 +148,7 @@ impl Database {
}
pub async fn logical_plan(&self, logical_plan: Vec<u8>) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_LOGICAL_PLAN);
self.do_get(Request::Query(QueryRequest {
query: Some(Query::LogicalPlan(logical_plan)),
}))
@@ -103,6 +162,7 @@ impl Database {
end: &str,
step: &str,
) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_PROMQL_RANGE_QUERY);
self.do_get(Request::Query(QueryRequest {
query: Some(Query::PromRangeQuery(PromRangeQuery {
query: promql.to_string(),
@@ -115,6 +175,7 @@ impl Database {
}
pub async fn create(&self, expr: CreateTableExpr) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_CREATE_TABLE);
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::CreateTable(expr)),
}))
@@ -122,6 +183,7 @@ impl Database {
}
pub async fn alter(&self, expr: AlterExpr) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_ALTER);
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::Alter(expr)),
}))
@@ -129,6 +191,7 @@ impl Database {
}
pub async fn drop_table(&self, expr: DropTableExpr) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_DROP_TABLE);
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::DropTable(expr)),
}))
@@ -136,6 +199,7 @@ impl Database {
}
pub async fn flush_table(&self, expr: FlushTableExpr) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_FLUSH_TABLE);
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::FlushTable(expr)),
}))
@@ -143,11 +207,14 @@ impl Database {
}
async fn do_get(&self, request: Request) -> Result<Output> {
// FIXME(paomian): some labels should be added for metrics
let _timer = timer!(metrics::METRIC_GRPC_DO_GET);
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
dbname: self.dbname.clone(),
}),
request: Some(request),
};
@@ -155,7 +222,7 @@ impl Database {
ticket: request.encode_to_vec().into(),
};
let mut client = self.client.make_client()?;
let mut client = self.client.make_flight_client()?;
// TODO(LFC): Stream the Flight data instead of collecting it into a Vec.
let flight_data: Vec<FlightData> = client
@@ -164,22 +231,22 @@ impl Database {
.and_then(|response| response.into_inner().try_collect())
.await
.map_err(|e| {
let code = get_metadata_value(&e, INNER_ERROR_CODE)
.and_then(|s| StatusCode::from_str(&s).ok())
.unwrap_or(StatusCode::Unknown);
let msg = get_metadata_value(&e, INNER_ERROR_MSG).unwrap_or(e.to_string());
error::ExternalSnafu { code, msg }
let tonic_code = e.code();
let e: error::Error = e.into();
let code = e.status_code();
let msg = e.to_string();
error::ServerSnafu { code, msg }
.fail::<()>()
.map_err(BoxedError::new)
.context(error::FlightGetSnafu {
tonic_code: e.code(),
tonic_code,
addr: client.addr(),
})
.map_err(|error| {
logging::error!(
"Failed to do Flight get, addr: {}, code: {}, source: {}",
client.addr(),
e.code(),
tonic_code,
error
);
error
@@ -210,12 +277,6 @@ impl Database {
}
}
fn get_metadata_value(e: &tonic::Status, key: &str) -> Option<String> {
e.metadata()
.get(key)
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
}
#[derive(Default, Debug, Clone)]
pub struct FlightContext {
auth_header: Option<AuthHeader>,


@@ -13,18 +13,17 @@
// limitations under the License.
use std::any::Any;
use std::str::FromStr;
use common_error::prelude::*;
use tonic::Code;
use snafu::Location;
use tonic::{Code, Status};
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Illegal Flight messages, reason: {}", reason))]
IllegalFlightMessages {
reason: String,
backtrace: Backtrace,
},
IllegalFlightMessages { reason: String, location: Location },
#[snafu(display("Failed to do Flight get, code: {}, source: {}", tonic_code, source))]
FlightGet {
@@ -46,13 +45,10 @@ pub enum Error {
},
#[snafu(display("Illegal GRPC client state: {}", err_msg))]
IllegalGrpcClientState {
err_msg: String,
backtrace: Backtrace,
},
IllegalGrpcClientState { err_msg: String, location: Location },
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, backtrace: Backtrace },
MissingField { field: String, location: Location },
#[snafu(display(
"Failed to create gRPC channel, peer address: {}, source: {}",
@@ -65,9 +61,12 @@ pub enum Error {
source: common_grpc::error::Error,
},
/// Error deserialized from gRPC metadata
// Server error carried in Tonic Status's metadata.
#[snafu(display("{}", msg))]
ExternalError { code: StatusCode, msg: String },
Server { code: StatusCode, msg: String },
#[snafu(display("Illegal Database response: {err_msg}"))]
IllegalDatabaseResponse { err_msg: String },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -77,21 +76,37 @@ impl ErrorExt for Error {
match self {
Error::IllegalFlightMessages { .. }
| Error::ColumnDataType { .. }
| Error::MissingField { .. } => StatusCode::Internal,
| Error::MissingField { .. }
| Error::IllegalDatabaseResponse { .. } => StatusCode::Internal,
Error::Server { code, .. } => *code,
Error::FlightGet { source, .. } => source.status_code(),
Error::CreateChannel { source, .. } | Error::ConvertFlightData { source } => {
source.status_code()
}
Error::IllegalGrpcClientState { .. } => StatusCode::Unexpected,
Error::ExternalError { code, .. } => *code,
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
}
impl From<Status> for Error {
fn from(e: Status) -> Self {
fn get_metadata_value(e: &Status, key: &str) -> Option<String> {
e.metadata()
.get(key)
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
}
let code = get_metadata_value(&e, INNER_ERROR_CODE)
.and_then(|s| StatusCode::from_str(&s).ok())
.unwrap_or(StatusCode::Unknown);
let msg = get_metadata_value(&e, INNER_ERROR_MSG).unwrap_or(e.to_string());
Self::Server { code, msg }
}
}
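A hedged sketch of how the `From<Status>` conversion above behaves when the server did not attach the inner error metadata; `INNER_ERROR_CODE`/`INNER_ERROR_MSG` are the keys already referenced in this diff:
// Hypothetical sketch: when INNER_ERROR_CODE / INNER_ERROR_MSG are absent
// from the Status metadata, the conversion falls back to StatusCode::Unknown
// and the Status' own message.
let status = Status::internal("connection reset");
let err: Error = status.into();
assert!(matches!(err, Error::Server { code: StatusCode::Unknown, .. }));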


@@ -16,6 +16,7 @@ mod client;
mod database;
mod error;
pub mod load_balance;
mod metrics;
pub use api;
pub use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};

src/client/src/metrics.rs (new file, 25 lines)

@@ -0,0 +1,25 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! client metrics
pub const METRIC_GRPC_CREATE_TABLE: &str = "grpc.create_table";
pub const METRIC_GRPC_PROMQL_RANGE_QUERY: &str = "grpc.promql.range_query";
pub const METRIC_GRPC_INSERT: &str = "grpc.insert";
pub const METRIC_GRPC_DELETE: &str = "grpc.delete";
pub const METRIC_GRPC_SQL: &str = "grpc.sql";
pub const METRIC_GRPC_LOGICAL_PLAN: &str = "grpc.logical_plan";
pub const METRIC_GRPC_ALTER: &str = "grpc.alter";
pub const METRIC_GRPC_DROP_TABLE: &str = "grpc.drop_table";
pub const METRIC_GRPC_FLUSH_TABLE: &str = "grpc.flush_table";
pub const METRIC_GRPC_DO_GET: &str = "grpc.do_get";
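These names pair with the `timer!` calls added in database.rs; a hedged sketch of the pattern (the `timer!` macro is assumed to come from the metrics facade database.rs already uses, observing elapsed time when the guard is dropped):
// Hypothetical sketch mirroring the pattern added in database.rs.
async fn timed_sql(db: &Database, sql: &str) -> Result<Output> {
    let _timer = timer!(metrics::METRIC_GRPC_SQL); // observed on drop
    db.sql(sql).await
}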


@@ -36,6 +36,7 @@ query = { path = "../query" }
rustyline = "10.1"
serde.workspace = true
servers = { path = "../servers" }
session = { path = "../session" }
snafu.workspace = true
substrait = { path = "../common/substrait" }


@@ -12,20 +12,23 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![doc = include_str!("../../../../README.md")]
use std::fmt;
use clap::Parser;
use cmd::error::Result;
use cmd::options::{Options, TopLevelOptions};
use cmd::{cli, datanode, frontend, metasrv, standalone};
use common_telemetry::logging::{error, info};
#[derive(Parser)]
#[clap(name = "greptimedb", version = print_version())]
struct Command {
#[clap(long, default_value = "/tmp/greptimedb/logs")]
log_dir: String,
#[clap(long, default_value = "info")]
log_level: String,
#[clap(long)]
log_dir: Option<String>,
#[clap(long)]
log_level: Option<String>,
#[clap(subcommand)]
subcmd: SubCommand,
}
@@ -61,8 +64,20 @@ impl Application {
}
impl Command {
async fn build(self) -> Result<Application> {
self.subcmd.build().await
async fn build(self, opts: Options) -> Result<Application> {
self.subcmd.build(opts).await
}
fn load_options(&self) -> Result<Options> {
let top_level_opts = self.top_level_options();
self.subcmd.load_options(top_level_opts)
}
fn top_level_options(&self) -> TopLevelOptions {
TopLevelOptions {
log_dir: self.log_dir.clone(),
log_level: self.log_level.clone(),
}
}
}
@@ -81,28 +96,40 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self) -> Result<Application> {
match self {
SubCommand::Datanode(cmd) => {
let app = cmd.build().await?;
async fn build(self, opts: Options) -> Result<Application> {
match (self, opts) {
(SubCommand::Datanode(cmd), Options::Datanode(dn_opts)) => {
let app = cmd.build(*dn_opts).await?;
Ok(Application::Datanode(app))
}
SubCommand::Frontend(cmd) => {
let app = cmd.build().await?;
(SubCommand::Frontend(cmd), Options::Frontend(fe_opts)) => {
let app = cmd.build(*fe_opts).await?;
Ok(Application::Frontend(app))
}
SubCommand::Metasrv(cmd) => {
let app = cmd.build().await?;
(SubCommand::Metasrv(cmd), Options::Metasrv(meta_opts)) => {
let app = cmd.build(*meta_opts).await?;
Ok(Application::Metasrv(app))
}
SubCommand::Standalone(cmd) => {
let app = cmd.build().await?;
(SubCommand::Standalone(cmd), Options::Standalone(opts)) => {
let app = cmd.build(opts.fe_opts, opts.dn_opts).await?;
Ok(Application::Standalone(app))
}
SubCommand::Cli(cmd) => {
(SubCommand::Cli(cmd), Options::Cli(_)) => {
let app = cmd.build().await?;
Ok(Application::Cli(app))
}
_ => unreachable!(),
}
}
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
match self {
SubCommand::Datanode(cmd) => cmd.load_options(top_level_opts),
SubCommand::Frontend(cmd) => cmd.load_options(top_level_opts),
SubCommand::Metasrv(cmd) => cmd.load_options(top_level_opts),
SubCommand::Standalone(cmd) => cmd.load_options(top_level_opts),
SubCommand::Cli(cmd) => cmd.load_options(top_level_opts),
}
}
}
@@ -142,14 +169,15 @@ async fn main() -> Result<()> {
// TODO(dennis):
// 1. add ip/port to app
let app_name = &cmd.subcmd.to_string();
let log_dir = &cmd.log_dir;
let log_level = &cmd.log_level;
let opts = cmd.load_options()?;
let logging_opts = opts.logging_options();
common_telemetry::set_panic_hook();
common_telemetry::init_default_metrics_recorder();
let _guard = common_telemetry::init_global_logging(app_name, log_dir, log_level, false);
let _guard = common_telemetry::init_global_logging(app_name, logging_opts);
let mut app = cmd.build().await?;
let mut app = cmd.build(opts).await?;
tokio::select! {
result = app.run() => {


@@ -17,9 +17,11 @@ mod helper;
mod repl;
use clap::Parser;
use common_telemetry::logging::LoggingOptions;
pub use repl::Repl;
use crate::error::Result;
use crate::options::{Options, TopLevelOptions};
pub struct Instance {
repl: Repl,
@@ -31,7 +33,6 @@ impl Instance {
}
pub async fn stop(&self) -> Result<()> {
// TODO: handle cli shutdown
Ok(())
}
}
@@ -46,6 +47,17 @@ impl Command {
pub async fn build(self) -> Result<Instance> {
self.cmd.build().await
}
pub fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
let mut logging_opts = LoggingOptions::default();
if let Some(dir) = top_level_opts.log_dir {
logging_opts.dir = dir;
}
if let Some(level) = top_level_opts.log_level {
logging_opts.level = level;
}
Ok(Options::Cli(Box::new(logging_opts)))
}
}
#[derive(Parser)]
@@ -77,3 +89,46 @@ impl AttachCommand {
Ok(Instance { repl })
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_load_options() {
let cmd = Command {
cmd: SubCommand::Attach(AttachCommand {
grpc_addr: String::from(""),
meta_addr: None,
disable_helper: false,
}),
};
let opts = cmd.load_options(TopLevelOptions::default()).unwrap();
let logging_opts = opts.logging_options();
assert_eq!("/tmp/greptimedb/logs", logging_opts.dir);
assert_eq!("info", logging_opts.level);
assert!(!logging_opts.enable_jaeger_tracing);
}
#[test]
fn test_top_level_options() {
let cmd = Command {
cmd: SubCommand::Attach(AttachCommand {
grpc_addr: String::from(""),
meta_addr: None,
disable_helper: false,
}),
};
let opts = cmd
.load_options(TopLevelOptions {
log_dir: Some("/tmp/greptimedb/test/logs".to_string()),
log_level: Some("debug".to_string()),
})
.unwrap();
let logging_opts = opts.logging_options();
assert_eq!("/tmp/greptimedb/test/logs", logging_opts.dir);
assert_eq!("debug", logging_opts.level);
}
}


@@ -12,16 +12,17 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::time::Duration;
use clap::Parser;
use common_telemetry::logging;
use datanode::datanode::{
Datanode, DatanodeOptions, FileConfig, ObjectStoreConfig, ProcedureConfig,
};
use datanode::datanode::{Datanode, DatanodeOptions, FileConfig, ObjectStoreConfig};
use meta_client::MetaClientOptions;
use servers::Mode;
use snafu::ResultExt;
use crate::error::{Error, MissingConfigSnafu, Result, StartDatanodeSnafu};
use crate::error::{MissingConfigSnafu, Result, ShutdownDatanodeSnafu, StartDatanodeSnafu};
use crate::options::{Options, TopLevelOptions};
use crate::toml_loader;
pub struct Instance {
@@ -34,8 +35,10 @@ impl Instance {
}
pub async fn stop(&self) -> Result<()> {
// TODO: handle datanode shutdown
Ok(())
self.datanode
.shutdown()
.await
.context(ShutdownDatanodeSnafu)
}
}
@@ -46,8 +49,12 @@ pub struct Command {
}
impl Command {
pub async fn build(self) -> Result<Instance> {
self.subcmd.build().await
pub async fn build(self, opts: DatanodeOptions) -> Result<Instance> {
self.subcmd.build(opts).await
}
pub fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
self.subcmd.load_options(top_level_opts)
}
}
@@ -57,9 +64,15 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self) -> Result<Instance> {
async fn build(self, opts: DatanodeOptions) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.build().await,
SubCommand::Start(cmd) => cmd.build(opts).await,
}
}
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
match self {
SubCommand::Start(cmd) => cmd.load_options(top_level_opts),
}
}
}
@@ -83,49 +96,43 @@ struct StartCommand {
#[clap(long)]
wal_dir: Option<String>,
#[clap(long)]
procedure_dir: Option<String>,
http_addr: Option<String>,
#[clap(long)]
http_timeout: Option<u64>,
}
impl StartCommand {
async fn build(self) -> Result<Instance> {
logging::info!("Datanode start command: {:#?}", self);
let opts: DatanodeOptions = self.try_into()?;
logging::info!("Datanode options: {:#?}", opts);
let datanode = Datanode::new(opts).await.context(StartDatanodeSnafu)?;
Ok(Instance { datanode })
}
}
impl TryFrom<StartCommand> for DatanodeOptions {
type Error = Error;
fn try_from(cmd: StartCommand) -> Result<Self> {
let mut opts: DatanodeOptions = if let Some(path) = cmd.config_file {
toml_loader::from_file!(&path)?
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
let mut opts: DatanodeOptions = if let Some(path) = &self.config_file {
toml_loader::from_file!(path)?
} else {
DatanodeOptions::default()
};
if let Some(addr) = cmd.rpc_addr {
if let Some(dir) = top_level_opts.log_dir {
opts.logging.dir = dir;
}
if let Some(level) = top_level_opts.log_level {
opts.logging.level = level;
}
if let Some(addr) = self.rpc_addr.clone() {
opts.rpc_addr = addr;
}
if cmd.rpc_hostname.is_some() {
opts.rpc_hostname = cmd.rpc_hostname;
if self.rpc_hostname.is_some() {
opts.rpc_hostname = self.rpc_hostname.clone();
}
if let Some(addr) = cmd.mysql_addr {
if let Some(addr) = self.mysql_addr.clone() {
opts.mysql_addr = addr;
}
if let Some(node_id) = cmd.node_id {
if let Some(node_id) = self.node_id {
opts.node_id = Some(node_id);
}
if let Some(meta_addr) = cmd.metasrv_addr {
if let Some(meta_addr) = self.metasrv_addr.clone() {
opts.meta_client_options
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs = meta_addr
@@ -143,29 +150,44 @@ impl TryFrom<StartCommand> for DatanodeOptions {
.fail();
}
if let Some(data_dir) = cmd.data_dir {
opts.storage = ObjectStoreConfig::File(FileConfig { data_dir });
if let Some(data_dir) = self.data_dir.clone() {
opts.storage.store = ObjectStoreConfig::File(FileConfig { data_dir });
}
if let Some(wal_dir) = cmd.wal_dir {
if let Some(wal_dir) = self.wal_dir.clone() {
opts.wal.dir = wal_dir;
}
if let Some(procedure_dir) = cmd.procedure_dir {
opts.procedure = Some(ProcedureConfig::from_file_path(procedure_dir));
if let Some(http_addr) = self.http_addr.clone() {
opts.http_opts.addr = http_addr
}
if let Some(http_timeout) = self.http_timeout {
opts.http_opts.timeout = Duration::from_secs(http_timeout)
}
Ok(opts)
// Disable dashboard in datanode.
opts.http_opts.disable_dashboard = true;
Ok(Options::Datanode(Box::new(opts)))
}
async fn build(self, opts: DatanodeOptions) -> Result<Instance> {
logging::info!("Datanode start command: {:#?}", self);
logging::info!("Datanode options: {:#?}", opts);
let datanode = Datanode::new(opts).await.context(StartDatanodeSnafu)?;
Ok(Instance { datanode })
}
}
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
use std::io::Write;
use std::time::Duration;
use common_base::readable_size::ReadableSize;
use common_test_util::temp_dir::create_named_temp_file;
use datanode::datanode::{CompactionConfig, ObjectStoreConfig};
use datanode::datanode::{CompactionConfig, ObjectStoreConfig, RegionManifestConfig};
use servers::Mode;
use super::*;
@@ -201,10 +223,19 @@ mod tests {
type = "File"
data_dir = "/tmp/greptimedb/data/"
[compaction]
max_inflight_tasks = 4
max_files_in_level0 = 8
[storage.compaction]
max_inflight_tasks = 3
max_files_in_level0 = 7
max_purge_tasks = 32
[storage.manifest]
checkpoint_margin = 9
gc_duration = '7s'
checkpoint_on_startup = true
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
"#;
write!(file, "{}", toml_str).unwrap();
@@ -212,7 +243,10 @@ mod tests {
config_file: Some(file.path().to_str().unwrap().to_string()),
..Default::default()
};
let options: DatanodeOptions = cmd.try_into().unwrap();
let Options::Datanode(options) =
cmd.load_options(TopLevelOptions::default()).unwrap() else { unreachable!() };
assert_eq!("127.0.0.1:3001".to_string(), options.rpc_addr);
assert_eq!("127.0.0.1:4406".to_string(), options.mysql_addr);
assert_eq!(2, options.mysql_runtime_size);
@@ -235,9 +269,9 @@ mod tests {
assert_eq!(3000, timeout_millis);
assert!(tcp_nodelay);
match options.storage {
ObjectStoreConfig::File(FileConfig { data_dir }) => {
assert_eq!("/tmp/greptimedb/data/".to_string(), data_dir)
match &options.storage.store {
ObjectStoreConfig::File(FileConfig { data_dir, .. }) => {
assert_eq!("/tmp/greptimedb/data/", data_dir)
}
ObjectStoreConfig::S3 { .. } => unreachable!(),
ObjectStoreConfig::Oss { .. } => unreachable!(),
@@ -245,43 +279,75 @@ mod tests {
assert_eq!(
CompactionConfig {
max_inflight_tasks: 4,
max_files_in_level0: 8,
max_inflight_tasks: 3,
max_files_in_level0: 7,
max_purge_tasks: 32,
sst_write_buffer_size: ReadableSize::mb(8),
},
options.compaction
options.storage.compaction,
);
assert_eq!(
RegionManifestConfig {
checkpoint_margin: Some(9),
gc_duration: Some(Duration::from_secs(7)),
checkpoint_on_startup: true,
},
options.storage.manifest,
);
assert_eq!("debug".to_string(), options.logging.level);
assert_eq!("/tmp/greptimedb/test/logs".to_string(), options.logging.dir);
}
#[test]
fn test_try_from_cmd() {
assert_eq!(
Mode::Standalone,
DatanodeOptions::try_from(StartCommand::default())
.unwrap()
.mode
);
if let Options::Datanode(opt) = StartCommand::default()
.load_options(TopLevelOptions::default())
.unwrap()
{
assert_eq!(Mode::Standalone, opt.mode)
}
let mode = DatanodeOptions::try_from(StartCommand {
if let Options::Datanode(opt) = (StartCommand {
node_id: Some(42),
metasrv_addr: Some("127.0.0.1:3002".to_string()),
..Default::default()
})
.load_options(TopLevelOptions::default())
.unwrap()
.mode;
assert_matches!(mode, Mode::Distributed);
{
assert_eq!(Mode::Distributed, opt.mode)
}
assert!(DatanodeOptions::try_from(StartCommand {
assert!((StartCommand {
metasrv_addr: Some("127.0.0.1:3002".to_string()),
..Default::default()
})
.load_options(TopLevelOptions::default())
.is_err());
// Providing node_id while leaving metasrv_addr absent is ok since metasrv_addr has a default value
DatanodeOptions::try_from(StartCommand {
(StartCommand {
node_id: Some(42),
..Default::default()
})
.load_options(TopLevelOptions::default())
.unwrap();
}
#[test]
fn test_top_level_options() {
let cmd = StartCommand::default();
let options = cmd
.load_options(TopLevelOptions {
log_dir: Some("/tmp/greptimedb/test/logs".to_string()),
log_level: Some("debug".to_string()),
})
.unwrap();
let logging_opt = options.logging_options();
assert_eq!("/tmp/greptimedb/test/logs", logging_opt.dir);
assert_eq!("debug", logging_opt.level);
}
}


@@ -16,6 +16,7 @@ use std::any::Any;
use common_error::prelude::*;
use rustyline::error::ReadlineError;
use snafu::Location;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -26,12 +27,24 @@ pub enum Error {
source: datanode::error::Error,
},
#[snafu(display("Failed to shutdown datanode, source: {}", source))]
ShutdownDatanode {
#[snafu(backtrace)]
source: datanode::error::Error,
},
#[snafu(display("Failed to start frontend, source: {}", source))]
StartFrontend {
#[snafu(backtrace)]
source: frontend::error::Error,
},
#[snafu(display("Failed to shutdown frontend, source: {}", source))]
ShutdownFrontend {
#[snafu(backtrace)]
source: frontend::error::Error,
},
#[snafu(display("Failed to build meta server, source: {}", source))]
BuildMetaServer {
#[snafu(backtrace)]
@@ -44,24 +57,30 @@ pub enum Error {
source: meta_srv::error::Error,
},
#[snafu(display("Failed to shutdown meta server, source: {}", source))]
ShutdownMetaServer {
#[snafu(backtrace)]
source: meta_srv::error::Error,
},
#[snafu(display("Failed to read config file: {}, source: {}", path, source))]
ReadConfig {
path: String,
source: std::io::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to parse config, source: {}", source))]
ParseConfig {
source: toml::de::Error,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Missing config, msg: {}", msg))]
MissingConfig { msg: String, backtrace: Backtrace },
MissingConfig { msg: String, location: Location },
#[snafu(display("Illegal config: {}", msg))]
IllegalConfig { msg: String, backtrace: Backtrace },
IllegalConfig { msg: String, location: Location },
#[snafu(display("Illegal auth config: {}", source))]
IllegalAuthConfig {
@@ -82,13 +101,13 @@ pub enum Error {
#[snafu(display("Cannot create REPL: {}", source))]
ReplCreation {
source: ReadlineError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Error reading command: {}", source))]
Readline {
source: ReadlineError,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Failed to request database, sql: {sql}, source: {source}"))]
@@ -143,7 +162,10 @@ impl ErrorExt for Error {
match self {
Error::StartDatanode { source } => source.status_code(),
Error::StartFrontend { source } => source.status_code(),
Error::ShutdownDatanode { source } => source.status_code(),
Error::ShutdownFrontend { source } => source.status_code(),
Error::StartMetaServer { source } => source.status_code(),
Error::ShutdownMetaServer { source } => source.status_code(),
Error::BuildMetaServer { source } => source.status_code(),
Error::UnsupportedSelectorType { source, .. } => source.status_code(),
Error::ReadConfig { .. } | Error::ParseConfig { .. } | Error::MissingConfig { .. } => {
@@ -166,78 +188,7 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test)]
mod tests {
use super::*;
type StdResult<E> = std::result::Result<(), E>;
#[test]
fn test_start_node_error() {
fn throw_datanode_error() -> StdResult<datanode::error::Error> {
datanode::error::MissingNodeIdSnafu {}.fail()
}
let e = throw_datanode_error()
.context(StartDatanodeSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
#[test]
fn test_start_frontend_error() {
fn throw_frontend_error() -> StdResult<frontend::error::Error> {
frontend::error::InvalidSqlSnafu { err_msg: "failed" }.fail()
}
let e = throw_frontend_error()
.context(StartFrontendSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
#[test]
fn test_start_metasrv_error() {
fn throw_metasrv_error() -> StdResult<meta_srv::error::Error> {
meta_srv::error::StreamNoneSnafu {}.fail()
}
let e = throw_metasrv_error()
.context(StartMetaServerSnafu)
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::Internal);
}
#[test]
fn test_read_config_error() {
fn throw_read_config_error() -> StdResult<std::io::Error> {
Err(std::io::ErrorKind::NotFound.into())
}
let e = throw_read_config_error()
.context(ReadConfigSnafu { path: "test" })
.err()
.unwrap();
assert!(e.backtrace_opt().is_some());
assert_eq!(e.status_code(), StatusCode::InvalidArguments);
}
}


@@ -26,12 +26,12 @@ use frontend::postgres::PostgresOptions;
use frontend::prom::PromOptions;
use meta_client::MetaClientOptions;
use servers::auth::UserProviderRef;
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
use servers::{auth, Mode};
use snafu::ResultExt;
use crate::error::{self, IllegalAuthConfigSnafu, Result};
use crate::options::{Options, TopLevelOptions};
use crate::toml_loader;
pub struct Instance {
@@ -47,8 +47,10 @@ impl Instance {
}
pub async fn stop(&self) -> Result<()> {
// TODO: handle frontend shutdown
Ok(())
self.frontend
.shutdown()
.await
.context(error::ShutdownFrontendSnafu)
}
}
@@ -59,8 +61,12 @@ pub struct Command {
}
impl Command {
pub async fn build(self) -> Result<Instance> {
self.subcmd.build().await
pub async fn build(self, opts: FrontendOptions) -> Result<Instance> {
self.subcmd.build(opts).await
}
pub fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
self.subcmd.load_options(top_level_opts)
}
}
@@ -70,9 +76,15 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self) -> Result<Instance> {
async fn build(self, opts: FrontendOptions) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.build().await,
SubCommand::Start(cmd) => cmd.build(opts).await,
}
}
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
match self {
SubCommand::Start(cmd) => cmd.load_options(top_level_opts),
}
}
}
@@ -105,87 +117,75 @@ pub struct StartCommand {
tls_key_path: Option<String>,
#[clap(long)]
user_provider: Option<String>,
#[clap(long)]
disable_dashboard: Option<bool>,
}
impl StartCommand {
async fn build(self) -> Result<Instance> {
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
let opts: FrontendOptions = self.try_into()?;
let mut instance = FeInstance::try_new_distributed(&opts, plugins.clone())
.await
.context(error::StartFrontendSnafu)?;
instance
.build_servers(&opts, plugins)
.await
.context(error::StartFrontendSnafu)?;
Ok(Instance { frontend: instance })
}
}
pub fn load_frontend_plugins(user_provider: &Option<String>) -> Result<Plugins> {
let mut plugins = Plugins::new();
if let Some(provider) = user_provider {
let provider = auth::user_provider_from_option(provider).context(IllegalAuthConfigSnafu)?;
plugins.insert::<UserProviderRef>(provider);
}
Ok(plugins)
}
impl TryFrom<StartCommand> for FrontendOptions {
type Error = error::Error;
fn try_from(cmd: StartCommand) -> Result<Self> {
let mut opts: FrontendOptions = if let Some(path) = cmd.config_file {
toml_loader::from_file!(&path)?
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
let mut opts: FrontendOptions = if let Some(path) = &self.config_file {
toml_loader::from_file!(path)?
} else {
FrontendOptions::default()
};
let tls_option = TlsOption::new(cmd.tls_mode, cmd.tls_cert_path, cmd.tls_key_path);
if let Some(addr) = cmd.http_addr {
opts.http_options = Some(HttpOptions {
addr,
..Default::default()
});
if let Some(dir) = top_level_opts.log_dir {
opts.logging.dir = dir;
}
if let Some(addr) = cmd.grpc_addr {
if let Some(level) = top_level_opts.log_level {
opts.logging.level = level;
}
let tls_option = TlsOption::new(
self.tls_mode.clone(),
self.tls_cert_path.clone(),
self.tls_key_path.clone(),
);
if let Some(addr) = self.http_addr.clone() {
opts.http_options.get_or_insert_with(Default::default).addr = addr;
}
if let Some(disable_dashboard) = self.disable_dashboard {
opts.http_options
.get_or_insert_with(Default::default)
.disable_dashboard = disable_dashboard;
}
if let Some(addr) = self.grpc_addr.clone() {
opts.grpc_options = Some(GrpcOptions {
addr,
..Default::default()
});
}
if let Some(addr) = cmd.mysql_addr {
if let Some(addr) = self.mysql_addr.clone() {
opts.mysql_options = Some(MysqlOptions {
addr,
tls: tls_option.clone(),
..Default::default()
});
}
if let Some(addr) = cmd.prom_addr {
if let Some(addr) = self.prom_addr.clone() {
opts.prom_options = Some(PromOptions { addr });
}
if let Some(addr) = cmd.postgres_addr {
if let Some(addr) = self.postgres_addr.clone() {
opts.postgres_options = Some(PostgresOptions {
addr,
tls: tls_option,
..Default::default()
});
}
if let Some(addr) = cmd.opentsdb_addr {
if let Some(addr) = self.opentsdb_addr.clone() {
opts.opentsdb_options = Some(OpentsdbOptions {
addr,
..Default::default()
});
}
if let Some(enable) = cmd.influxdb_enable {
if let Some(enable) = self.influxdb_enable {
opts.influxdb_options = Some(InfluxdbOptions { enable });
}
if let Some(metasrv_addr) = cmd.metasrv_addr {
if let Some(metasrv_addr) = self.metasrv_addr.clone() {
opts.meta_client_options
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs = metasrv_addr
@@ -195,8 +195,34 @@ impl TryFrom<StartCommand> for FrontendOptions {
.collect::<Vec<_>>();
opts.mode = Mode::Distributed;
}
Ok(opts)
Ok(Options::Frontend(Box::new(opts)))
}
async fn build(self, opts: FrontendOptions) -> Result<Instance> {
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
let mut instance = FeInstance::try_new_distributed(&opts, plugins.clone())
.await
.context(error::StartFrontendSnafu)?;
instance
.build_servers(&opts)
.await
.context(error::StartFrontendSnafu)?;
Ok(Instance { frontend: instance })
}
}
pub fn load_frontend_plugins(user_provider: &Option<String>) -> Result<Plugins> {
let plugins = Plugins::new();
if let Some(provider) = user_provider {
let provider = auth::user_provider_from_option(provider).context(IllegalAuthConfigSnafu)?;
plugins.insert::<UserProviderRef>(provider);
}
Ok(plugins)
}
#[cfg(test)]
@@ -225,9 +251,12 @@ mod tests {
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: Some(false),
};
let opts: FrontendOptions = command.try_into().unwrap();
let Options::Frontend(opts) =
command.load_options(TopLevelOptions::default()).unwrap() else { unreachable!() };
assert_eq!(opts.http_options.as_ref().unwrap().addr, "127.0.0.1:1234");
assert_eq!(opts.mysql_options.as_ref().unwrap().addr, "127.0.0.1:5678");
assert_eq!(
@@ -270,6 +299,10 @@ mod tests {
[http_options]
addr = "127.0.0.1:4000"
timeout = "30s"
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
"#;
write!(file, "{}", toml_str).unwrap();
@@ -287,9 +320,11 @@ mod tests {
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: Some(false),
};
let fe_opts = FrontendOptions::try_from(command).unwrap();
let Options::Frontend(fe_opts) =
command.load_options(TopLevelOptions::default()).unwrap() else {unreachable!()};
assert_eq!(Mode::Distributed, fe_opts.mode);
assert_eq!(
"127.0.0.1:4000".to_string(),
@@ -299,6 +334,9 @@ mod tests {
Duration::from_secs(30),
fe_opts.http_options.as_ref().unwrap().timeout
);
assert_eq!("debug".to_string(), fe_opts.logging.level);
assert_eq!("/tmp/greptimedb/test/logs".to_string(), fe_opts.logging.dir);
}
#[tokio::test]
@@ -317,6 +355,7 @@ mod tests {
tls_cert_path: None,
tls_key_path: None,
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
disable_dashboard: Some(false),
};
let plugins = load_frontend_plugins(&command.user_provider);
@@ -327,8 +366,42 @@ mod tests {
let provider = provider.unwrap();
let result = provider
.authenticate(Identity::UserId("test", None), Password::PlainText("test"))
.authenticate(
Identity::UserId("test", None),
Password::PlainText("test".to_string().into()),
)
.await;
assert!(result.is_ok());
}
#[test]
fn test_top_level_options() {
let cmd = StartCommand {
http_addr: None,
grpc_addr: None,
mysql_addr: None,
prom_addr: None,
postgres_addr: None,
opentsdb_addr: None,
influxdb_enable: None,
config_file: None,
metasrv_addr: None,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: Some(false),
};
let options = cmd
.load_options(TopLevelOptions {
log_dir: Some("/tmp/greptimedb/test/logs".to_string()),
log_level: Some("debug".to_string()),
})
.unwrap();
let logging_opt = options.logging_options();
assert_eq!("/tmp/greptimedb/test/logs", logging_opt.dir);
assert_eq!("debug", logging_opt.level);
}
}


@@ -19,5 +19,6 @@ pub mod datanode;
pub mod error;
pub mod frontend;
pub mod metasrv;
pub mod options;
pub mod standalone;
mod toml_loader;


@@ -12,13 +12,16 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::time::Duration;
use clap::Parser;
use common_telemetry::{info, logging, warn};
use common_telemetry::logging;
use meta_srv::bootstrap::MetaSrvInstance;
use meta_srv::metasrv::MetaSrvOptions;
use snafu::ResultExt;
use crate::error::{Error, Result};
use crate::error::Result;
use crate::options::{Options, TopLevelOptions};
use crate::{error, toml_loader};
pub struct Instance {
@@ -30,13 +33,14 @@ impl Instance {
self.instance
.start()
.await
.context(error::StartMetaServerSnafu)?;
Ok(())
.context(error::StartMetaServerSnafu)
}
pub async fn stop(&self) -> Result<()> {
// TODO: handle metasrv shutdown
Ok(())
self.instance
.shutdown()
.await
.context(error::ShutdownMetaServerSnafu)
}
}
@@ -47,8 +51,12 @@ pub struct Command {
}
impl Command {
pub async fn build(self) -> Result<Instance> {
self.subcmd.build().await
pub async fn build(self, opts: MetaSrvOptions) -> Result<Instance> {
self.subcmd.build(opts).await
}
pub fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
self.subcmd.load_options(top_level_opts)
}
}
@@ -58,9 +66,15 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self) -> Result<Instance> {
async fn build(self, opts: MetaSrvOptions) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.build().await,
SubCommand::Start(cmd) => cmd.build(opts).await,
}
}
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
match self {
SubCommand::Start(cmd) => cmd.load_options(top_level_opts),
}
}
}
@@ -79,15 +93,64 @@ struct StartCommand {
selector: Option<String>,
#[clap(long)]
use_memory_store: bool,
#[clap(long)]
http_addr: Option<String>,
#[clap(long)]
http_timeout: Option<u64>,
}
impl StartCommand {
async fn build(self) -> Result<Instance> {
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
let mut opts: MetaSrvOptions = if let Some(path) = &self.config_file {
toml_loader::from_file!(path)?
} else {
MetaSrvOptions::default()
};
if let Some(dir) = top_level_opts.log_dir {
opts.logging.dir = dir;
}
if let Some(level) = top_level_opts.log_level {
opts.logging.level = level;
}
if let Some(addr) = self.bind_addr.clone() {
opts.bind_addr = addr;
}
if let Some(addr) = self.server_addr.clone() {
opts.server_addr = addr;
}
if let Some(addr) = self.store_addr.clone() {
opts.store_addr = addr;
}
if let Some(selector_type) = &self.selector {
opts.selector = selector_type[..]
.try_into()
.context(error::UnsupportedSelectorTypeSnafu { selector_type })?;
}
if self.use_memory_store {
opts.use_memory_store = true;
}
if let Some(http_addr) = self.http_addr.clone() {
opts.http_opts.addr = http_addr;
}
if let Some(http_timeout) = self.http_timeout {
opts.http_opts.timeout = Duration::from_secs(http_timeout);
}
// Disable dashboard in metasrv.
opts.http_opts.disable_dashboard = true;
Ok(Options::Metasrv(Box::new(opts)))
}
async fn build(self, opts: MetaSrvOptions) -> Result<Instance> {
logging::info!("MetaSrv start command: {:#?}", self);
let opts: MetaSrvOptions = self.try_into()?;
logging::info!("MetaSrv options: {:#?}", opts);
let instance = MetaSrvInstance::new(opts)
.await
.context(error::BuildMetaServerSnafu)?;
@@ -96,41 +159,6 @@ impl StartCommand {
}
}
impl TryFrom<StartCommand> for MetaSrvOptions {
type Error = Error;
fn try_from(cmd: StartCommand) -> Result<Self> {
let mut opts: MetaSrvOptions = if let Some(path) = cmd.config_file {
toml_loader::from_file!(&path)?
} else {
MetaSrvOptions::default()
};
if let Some(addr) = cmd.bind_addr {
opts.bind_addr = addr;
}
if let Some(addr) = cmd.server_addr {
opts.server_addr = addr;
}
if let Some(addr) = cmd.store_addr {
opts.store_addr = addr;
}
if let Some(selector_type) = &cmd.selector {
opts.selector = selector_type[..]
.try_into()
.context(error::UnsupportedSelectorTypeSnafu { selector_type })?;
info!("Using {} selector", selector_type);
}
if cmd.use_memory_store {
warn!("Using memory store for Meta. Make sure you are in running tests.");
opts.use_memory_store = true;
}
Ok(opts)
}
}
#[cfg(test)]
mod tests {
use std::io::Write;
@@ -149,10 +177,13 @@ mod tests {
config_file: None,
selector: Some("LoadBased".to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
let Options::Metasrv(options) =
cmd.load_options(TopLevelOptions::default()).unwrap() else { unreachable!() };
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
assert_eq!("127.0.0.1:3002".to_string(), options.server_addr);
assert_eq!("127.0.0.1:2380".to_string(), options.store_addr);
assert_eq!(SelectorType::LoadBased, options.selector);
}
@@ -167,6 +198,10 @@ mod tests {
datanode_lease_secs = 15
selector = "LeaseBased"
use_memory_store = false
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
"#;
write!(file, "{}", toml_str).unwrap();
@@ -177,12 +212,43 @@ mod tests {
selector: None,
config_file: Some(file.path().to_str().unwrap().to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
let Options::Metasrv(options) =
cmd.load_options(TopLevelOptions::default()).unwrap() else { unreachable!() };
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
assert_eq!("127.0.0.1:3002".to_string(), options.server_addr);
assert_eq!("127.0.0.1:2379".to_string(), options.store_addr);
assert_eq!(15, options.datanode_lease_secs);
assert_eq!(SelectorType::LeaseBased, options.selector);
assert_eq!("debug".to_string(), options.logging.level);
assert_eq!("/tmp/greptimedb/test/logs".to_string(), options.logging.dir);
}
#[test]
fn test_top_level_options() {
let cmd = StartCommand {
bind_addr: Some("127.0.0.1:3002".to_string()),
server_addr: Some("127.0.0.1:3002".to_string()),
store_addr: Some("127.0.0.1:2380".to_string()),
config_file: None,
selector: Some("LoadBased".to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
};
let options = cmd
.load_options(TopLevelOptions {
log_dir: Some("/tmp/greptimedb/test/logs".to_string()),
log_level: Some("debug".to_string()),
})
.unwrap();
let logging_opt = options.logging_options();
assert_eq!("/tmp/greptimedb/test/logs", logging_opt.dir);
assert_eq!("debug", logging_opt.level);
}
}

src/cmd/src/options.rs (new file, 49 lines)

@@ -0,0 +1,49 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_telemetry::logging::LoggingOptions;
use datanode::datanode::DatanodeOptions;
use frontend::frontend::FrontendOptions;
use meta_srv::metasrv::MetaSrvOptions;
pub struct MixOptions {
pub fe_opts: FrontendOptions,
pub dn_opts: DatanodeOptions,
pub logging: LoggingOptions,
}
pub enum Options {
Datanode(Box<DatanodeOptions>),
Frontend(Box<FrontendOptions>),
Metasrv(Box<MetaSrvOptions>),
Standalone(Box<MixOptions>),
Cli(Box<LoggingOptions>),
}
impl Options {
pub fn logging_options(&self) -> &LoggingOptions {
match self {
Options::Datanode(opts) => &opts.logging,
Options::Frontend(opts) => &opts.logging,
Options::Metasrv(opts) => &opts.logging,
Options::Standalone(opts) => &opts.logging,
Options::Cli(opts) => opts,
}
}
}
#[derive(Clone, Debug, Default)]
pub struct TopLevelOptions {
pub log_dir: Option<String>,
pub log_level: Option<String>,
}
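Every subcommand's `load_options` in this diff repeats the same merge of the global `--log-dir`/`--log-level` flags into its own config; a hedged sketch of that shared pattern (the helper name is illustrative only):
// Hypothetical helper showing the merge each subcommand performs.
fn merge_top_level(logging: &mut LoggingOptions, top: TopLevelOptions) {
    if let Some(dir) = top.log_dir {
        logging.dir = dir;
    }
    if let Some(level) = top.log_level {
        logging.level = level;
    }
}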


@@ -17,9 +17,8 @@ use std::sync::Arc;
use clap::Parser;
use common_base::Plugins;
use common_telemetry::info;
use datanode::datanode::{
CompactionConfig, Datanode, DatanodeOptions, ObjectStoreConfig, ProcedureConfig, WalConfig,
};
use common_telemetry::logging::LoggingOptions;
use datanode::datanode::{Datanode, DatanodeOptions, ProcedureConfig, StorageConfig, WalConfig};
use datanode::instance::InstanceRef;
use frontend::frontend::FrontendOptions;
use frontend::grpc::GrpcOptions;
@@ -36,8 +35,12 @@ use servers::tls::{TlsMode, TlsOption};
use servers::Mode;
use snafu::ResultExt;
use crate::error::{Error, IllegalConfigSnafu, Result, StartDatanodeSnafu, StartFrontendSnafu};
use crate::error::{
IllegalConfigSnafu, Result, ShutdownDatanodeSnafu, ShutdownFrontendSnafu, StartDatanodeSnafu,
StartFrontendSnafu,
};
use crate::frontend::load_frontend_plugins;
use crate::options::{MixOptions, Options, TopLevelOptions};
use crate::toml_loader;
#[derive(Parser)]
@@ -47,8 +50,16 @@ pub struct Command {
}
impl Command {
pub async fn build(self) -> Result<Instance> {
self.subcmd.build().await
pub async fn build(
self,
fe_opts: FrontendOptions,
dn_opts: DatanodeOptions,
) -> Result<Instance> {
self.subcmd.build(fe_opts, dn_opts).await
}
pub fn load_options(&self, top_level_options: TopLevelOptions) -> Result<Options> {
self.subcmd.load_options(top_level_options)
}
}
@@ -58,9 +69,15 @@ enum SubCommand {
}
impl SubCommand {
async fn build(self) -> Result<Instance> {
async fn build(self, fe_opts: FrontendOptions, dn_opts: DatanodeOptions) -> Result<Instance> {
match self {
SubCommand::Start(cmd) => cmd.build().await,
SubCommand::Start(cmd) => cmd.build(fe_opts, dn_opts).await,
}
}
fn load_options(&self, top_level_options: TopLevelOptions) -> Result<Options> {
match self {
SubCommand::Start(cmd) => cmd.load_options(top_level_options),
}
}
}
@@ -79,9 +96,9 @@ pub struct StandaloneOptions {
pub prometheus_options: Option<PrometheusOptions>,
pub prom_options: Option<PromOptions>,
pub wal: WalConfig,
pub storage: ObjectStoreConfig,
pub compaction: CompactionConfig,
pub procedure: Option<ProcedureConfig>,
pub storage: StorageConfig,
pub procedure: ProcedureConfig,
pub logging: LoggingOptions,
}
impl Default for StandaloneOptions {
@@ -98,9 +115,9 @@ impl Default for StandaloneOptions {
prometheus_options: Some(PrometheusOptions::default()),
prom_options: Some(PromOptions::default()),
wal: WalConfig::default(),
storage: ObjectStoreConfig::default(),
compaction: CompactionConfig::default(),
procedure: None,
storage: StorageConfig::default(),
procedure: ProcedureConfig::default(),
logging: LoggingOptions::default(),
}
}
}
@@ -118,6 +135,7 @@ impl StandaloneOptions {
prometheus_options: self.prometheus_options,
prom_options: self.prom_options,
meta_client_options: None,
logging: self.logging,
}
}
@@ -126,7 +144,6 @@ impl StandaloneOptions {
enable_memory_catalog: self.enable_memory_catalog,
wal: self.wal,
storage: self.storage,
compaction: self.compaction,
procedure: self.procedure,
..Default::default()
}
@@ -152,7 +169,17 @@ impl Instance {
}
pub async fn stop(&self) -> Result<()> {
// TODO: handle standalone shutdown
self.frontend
.shutdown()
.await
.context(ShutdownFrontendSnafu)?;
self.datanode
.shutdown_instance()
.await
.context(ShutdownDatanodeSnafu)?;
info!("Datanode instance stopped.");
Ok(())
}
}
@@ -188,21 +215,108 @@ struct StartCommand {
}
impl StartCommand {
async fn build(self) -> Result<Instance> {
fn load_options(&self, top_level_options: TopLevelOptions) -> Result<Options> {
let enable_memory_catalog = self.enable_memory_catalog;
let config_file = self.config_file.clone();
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
let fe_opts = FrontendOptions::try_from(self)?;
let dn_opts: DatanodeOptions = {
let mut opts: StandaloneOptions = if let Some(path) = config_file {
toml_loader::from_file!(&path)?
} else {
StandaloneOptions::default()
};
opts.enable_memory_catalog = enable_memory_catalog;
opts.datanode_options()
let config_file = &self.config_file;
let mut opts: StandaloneOptions = if let Some(path) = config_file {
toml_loader::from_file!(path)?
} else {
StandaloneOptions::default()
};
opts.enable_memory_catalog = enable_memory_catalog;
let mut fe_opts = opts.clone().frontend_options();
let mut logging = opts.logging.clone();
let dn_opts = opts.datanode_options();
if let Some(dir) = top_level_options.log_dir {
logging.dir = dir;
}
if let Some(level) = top_level_options.log_level {
logging.level = level;
}
fe_opts.mode = Mode::Standalone;
if let Some(addr) = self.http_addr.clone() {
fe_opts.http_options = Some(HttpOptions {
addr,
..Default::default()
});
}
if let Some(addr) = self.rpc_addr.clone() {
// The frontend gRPC addr conflicts with the datanode's default gRPC addr
let datanode_grpc_addr = DatanodeOptions::default().rpc_addr;
if addr == datanode_grpc_addr {
return IllegalConfigSnafu {
msg: format!(
"gRPC listen address conflicts with datanode reserved gRPC addr: {datanode_grpc_addr}",
),
}
.fail();
}
fe_opts.grpc_options = Some(GrpcOptions {
addr,
..Default::default()
});
}
if let Some(addr) = self.mysql_addr.clone() {
fe_opts.mysql_options = Some(MysqlOptions {
addr,
..Default::default()
})
}
if let Some(addr) = self.prom_addr.clone() {
fe_opts.prom_options = Some(PromOptions { addr })
}
if let Some(addr) = self.postgres_addr.clone() {
fe_opts.postgres_options = Some(PostgresOptions {
addr,
..Default::default()
})
}
if let Some(addr) = self.opentsdb_addr.clone() {
fe_opts.opentsdb_options = Some(OpentsdbOptions {
addr,
..Default::default()
});
}
if self.influxdb_enable {
fe_opts.influxdb_options = Some(InfluxdbOptions { enable: true });
}
let tls_option = TlsOption::new(
self.tls_mode.clone(),
self.tls_cert_path.clone(),
self.tls_key_path.clone(),
);
if let Some(mut mysql_options) = fe_opts.mysql_options {
mysql_options.tls = tls_option.clone();
fe_opts.mysql_options = Some(mysql_options);
}
if let Some(mut postgres_options) = fe_opts.postgres_options {
postgres_options.tls = tls_option;
fe_opts.postgres_options = Some(postgres_options);
}
Ok(Options::Standalone(Box::new(MixOptions {
fe_opts,
dn_opts,
logging,
})))
}
async fn build(self, fe_opts: FrontendOptions, dn_opts: DatanodeOptions) -> Result<Instance> {
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
info!(
"Standalone frontend options: {:#?}, datanode options: {:#?}",
fe_opts, dn_opts
@@ -215,7 +329,7 @@ impl StartCommand {
let mut frontend = build_frontend(plugins.clone(), datanode.get_instance()).await?;
frontend
.build_servers(&fe_opts, plugins)
.build_servers(&fe_opts)
.await
.context(StartFrontendSnafu)?;
@@ -228,149 +342,24 @@ async fn build_frontend(
plugins: Arc<Plugins>,
datanode_instance: InstanceRef,
) -> Result<FeInstance> {
let mut frontend_instance = FeInstance::new_standalone(datanode_instance.clone());
frontend_instance.set_script_handler(datanode_instance);
let mut frontend_instance = FeInstance::try_new_standalone(datanode_instance.clone())
.await
.context(StartFrontendSnafu)?;
frontend_instance.set_plugins(plugins.clone());
Ok(frontend_instance)
}
impl TryFrom<StartCommand> for FrontendOptions {
type Error = Error;
fn try_from(cmd: StartCommand) -> std::result::Result<Self, Self::Error> {
let opts: StandaloneOptions = if let Some(path) = cmd.config_file {
toml_loader::from_file!(&path)?
} else {
StandaloneOptions::default()
};
let mut opts = opts.frontend_options();
opts.mode = Mode::Standalone;
if let Some(addr) = cmd.http_addr {
opts.http_options = Some(HttpOptions {
addr,
..Default::default()
});
}
if let Some(addr) = cmd.rpc_addr {
// The frontend gRPC addr conflicts with the datanode's default gRPC addr
let datanode_grpc_addr = DatanodeOptions::default().rpc_addr;
if addr == datanode_grpc_addr {
return IllegalConfigSnafu {
msg: format!(
"gRPC listen address conflicts with datanode reserved gRPC addr: {datanode_grpc_addr}",
),
}
.fail();
}
opts.grpc_options = Some(GrpcOptions {
addr,
..Default::default()
});
}
if let Some(addr) = cmd.mysql_addr {
opts.mysql_options = Some(MysqlOptions {
addr,
..Default::default()
})
}
if let Some(addr) = cmd.prom_addr {
opts.prom_options = Some(PromOptions { addr })
}
if let Some(addr) = cmd.postgres_addr {
opts.postgres_options = Some(PostgresOptions {
addr,
..Default::default()
})
}
if let Some(addr) = cmd.opentsdb_addr {
opts.opentsdb_options = Some(OpentsdbOptions {
addr,
..Default::default()
});
}
if cmd.influxdb_enable {
opts.influxdb_options = Some(InfluxdbOptions { enable: true });
}
let tls_option = TlsOption::new(cmd.tls_mode, cmd.tls_cert_path, cmd.tls_key_path);
if let Some(mut mysql_options) = opts.mysql_options {
mysql_options.tls = tls_option.clone();
opts.mysql_options = Some(mysql_options);
}
if let Some(mut postgres_options) = opts.postgres_options {
postgres_options.tls = tls_option;
opts.postgres_options = Some(postgres_options);
}
Ok(opts)
}
}
#[cfg(test)]
mod tests {
use std::io::Write;
use std::time::Duration;
use common_test_util::temp_dir::create_named_temp_file;
use servers::auth::{Identity, Password, UserProviderRef};
use servers::Mode;
use super::*;
#[test]
fn test_read_config_file() {
let cmd = StartCommand {
http_addr: None,
rpc_addr: None,
mysql_addr: None,
prom_addr: None,
postgres_addr: None,
opentsdb_addr: None,
config_file: Some(format!(
"{}/../../config/standalone.example.toml",
std::env::current_dir().unwrap().as_path().to_str().unwrap()
)),
influxdb_enable: false,
enable_memory_catalog: false,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
};
let fe_opts = FrontendOptions::try_from(cmd).unwrap();
assert_eq!(Mode::Standalone, fe_opts.mode);
assert_eq!(
"127.0.0.1:4000".to_string(),
fe_opts.http_options.as_ref().unwrap().addr
);
assert_eq!(
Duration::from_secs(30),
fe_opts.http_options.as_ref().unwrap().timeout
);
assert_eq!(
"127.0.0.1:4001".to_string(),
fe_opts.grpc_options.unwrap().addr
);
assert_eq!(
"127.0.0.1:4002",
fe_opts.mysql_options.as_ref().unwrap().addr
);
assert_eq!(2, fe_opts.mysql_options.as_ref().unwrap().runtime_size);
assert_eq!(
None,
fe_opts.mysql_options.as_ref().unwrap().reject_no_database
);
assert!(fe_opts.influxdb_options.as_ref().unwrap().enable);
}
#[tokio::test]
async fn test_try_from_start_command_to_anymap() {
let command = StartCommand {
@@ -396,7 +385,10 @@ mod tests {
assert!(provider.is_some());
let provider = provider.unwrap();
let result = provider
.authenticate(Identity::UserId("test", None), Password::PlainText("test"))
.authenticate(
Identity::UserId("test", None),
Password::PlainText("test".to_string().into()),
)
.await;
assert!(result.is_ok());
}
@@ -407,4 +399,136 @@ mod tests {
let toml_string = toml::to_string(&opts).unwrap();
let _parsed: StandaloneOptions = toml::from_str(&toml_string).unwrap();
}
#[test]
fn test_read_from_config_file() {
let mut file = create_named_temp_file();
let toml_str = r#"
mode = "distributed"
enable_memory_catalog = true
[wal]
dir = "/tmp/greptimedb/test/wal"
file_size = "1GB"
purge_threshold = "50GB"
purge_interval = "10m"
read_batch_size = 128
sync_write = false
[storage]
type = "S3"
access_key_id = "access_key_id"
secret_access_key = "secret_access_key"
[storage.compaction]
max_inflight_tasks = 3
max_files_in_level0 = 7
max_purge_tasks = 32
[storage.manifest]
checkpoint_margin = 9
gc_duration = '7s'
checkpoint_on_startup = true
[http_options]
addr = "127.0.0.1:4000"
timeout = "30s"
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
"#;
write!(file, "{}", toml_str).unwrap();
let cmd = StartCommand {
http_addr: None,
rpc_addr: None,
prom_addr: None,
mysql_addr: None,
postgres_addr: None,
opentsdb_addr: None,
config_file: Some(file.path().to_str().unwrap().to_string()),
influxdb_enable: false,
enable_memory_catalog: false,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
};
let Options::Standalone(options) = cmd.load_options(TopLevelOptions::default()).unwrap() else {unreachable!()};
let fe_opts = options.fe_opts;
let dn_opts = options.dn_opts;
let logging_opts = options.logging;
assert_eq!(Mode::Standalone, fe_opts.mode);
assert_eq!(
"127.0.0.1:4000".to_string(),
fe_opts.http_options.as_ref().unwrap().addr
);
assert_eq!(
Duration::from_secs(30),
fe_opts.http_options.as_ref().unwrap().timeout
);
assert_eq!(
"127.0.0.1:4001".to_string(),
fe_opts.grpc_options.unwrap().addr
);
assert_eq!(
"127.0.0.1:4002",
fe_opts.mysql_options.as_ref().unwrap().addr
);
assert_eq!(2, fe_opts.mysql_options.as_ref().unwrap().runtime_size);
assert_eq!(
None,
fe_opts.mysql_options.as_ref().unwrap().reject_no_database
);
assert!(fe_opts.influxdb_options.as_ref().unwrap().enable);
assert_eq!("/tmp/greptimedb/test/wal", dn_opts.wal.dir);
match &dn_opts.storage.store {
datanode::datanode::ObjectStoreConfig::S3(s3_config) => {
assert_eq!(
"Secret([REDACTED alloc::string::String])".to_string(),
format!("{:?}", s3_config.access_key_id)
);
}
_ => {
unreachable!()
}
}
assert_eq!("debug".to_string(), logging_opts.level);
assert_eq!("/tmp/greptimedb/test/logs".to_string(), logging_opts.dir);
}
#[test]
fn test_top_level_options() {
let cmd = StartCommand {
http_addr: None,
rpc_addr: None,
prom_addr: None,
mysql_addr: None,
postgres_addr: None,
opentsdb_addr: None,
config_file: None,
influxdb_enable: false,
enable_memory_catalog: false,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
};
let Options::Standalone(opts) = cmd
.load_options(TopLevelOptions {
log_dir: Some("/tmp/greptimedb/test/logs".to_string()),
log_level: Some("debug".to_string()),
})
.unwrap() else {
unreachable!()
};
assert_eq!("/tmp/greptimedb/test/logs", opts.logging.dir);
assert_eq!("debug", opts.logging.level);
}
}


@@ -18,7 +18,7 @@ use std::io::{Read, Write};
use bytes::{Buf, BufMut, BytesMut};
use common_error::prelude::ErrorExt;
use paste::paste;
use snafu::{ensure, Backtrace, ErrorCompat, ResultExt, Snafu};
use snafu::{ensure, Location, ResultExt, Snafu};
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -31,29 +31,33 @@ pub enum Error {
Overflow {
src_len: usize,
dst_len: usize,
backtrace: Backtrace,
location: Location,
},
#[snafu(display("Buffer underflow"))]
Underflow { backtrace: Backtrace },
Underflow { location: Location },
#[snafu(display("IO operation reach EOF, source: {}", source))]
Eof {
source: std::io::Error,
backtrace: Backtrace,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}
fn location_opt(&self) -> Option<common_error::snafu::Location> {
match self {
Error::Overflow { location, .. } => Some(*location),
Error::Underflow { location, .. } => Some(*location),
Error::Eof { location, .. } => Some(*location),
}
}
}
macro_rules! impl_read_le {


@@ -92,6 +92,14 @@ impl StringBytes {
pub fn as_utf8(&self) -> &str {
unsafe { std::str::from_utf8_unchecked(&self.0) }
}
pub fn len(&self) -> usize {
self.0.len()
}
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
}
impl From<String> for StringBytes {
@@ -178,6 +186,17 @@ mod tests {
assert_eq!(world, &bytes);
}
#[test]
fn test_bytes_len() {
let hello = b"hello".to_vec();
let bytes = Bytes::from(hello.clone());
assert_eq!(bytes.len(), hello.len());
let zero = b"".to_vec();
let bytes = Bytes::from(zero);
assert!(bytes.is_empty());
}
#[test]
fn test_string_bytes_from() {
let hello = "hello".to_string();
@@ -191,6 +210,17 @@ mod tests {
assert_eq!(&bytes, world);
}
#[test]
fn test_string_bytes_len() {
let hello = "hello".to_string();
let bytes = StringBytes::from(hello.clone());
assert_eq!(bytes.len(), hello.len());
let zero = "".to_string();
let bytes = StringBytes::from(zero);
assert!(bytes.is_empty());
}
fn check_str(expect: &str, given: &str) {
assert_eq!(expect, given);
}


@@ -18,6 +18,60 @@ pub mod bytes;
#[allow(clippy::all)]
pub mod readable_size;
use core::any::Any;
use std::sync::{Arc, Mutex, MutexGuard};
pub use bit_vec::BitVec;
pub type Plugins = anymap::Map<dyn core::any::Any + Send + Sync>;
#[derive(Default, Clone)]
pub struct Plugins {
inner: Arc<Mutex<anymap::Map<dyn Any + Send + Sync>>>,
}
impl Plugins {
pub fn new() -> Self {
Self {
inner: Arc::new(Mutex::new(anymap::Map::new())),
}
}
fn lock(&self) -> MutexGuard<anymap::Map<dyn Any + Send + Sync>> {
self.inner.lock().unwrap()
}
pub fn insert<T: 'static + Send + Sync>(&self, value: T) {
self.lock().insert(value);
}
pub fn get<T: 'static + Send + Sync + Clone>(&self) -> Option<T> {
let binding = self.lock();
binding.get::<T>().cloned()
}
pub fn map_mut<T: 'static + Send + Sync, F, R>(&self, mapper: F) -> R
where
F: FnOnce(Option<&mut T>) -> R,
{
let mut binding = self.lock();
let opt = binding.get_mut::<T>();
mapper(opt)
}
pub fn map<T: 'static + Send + Sync, F, R>(&self, mapper: F) -> Option<R>
where
F: FnOnce(&T) -> R,
{
let binding = self.lock();
binding.get::<T>().map(mapper)
}
pub fn len(&self) -> usize {
let binding = self.lock();
binding.len()
}
pub fn is_empty(&self) -> bool {
let binding = self.lock();
binding.is_empty()
}
}
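A minimal usage sketch of the `Plugins` container above, assuming it is in scope; the `Config` type here is hypothetical and used only for illustration. Values are keyed by their type, cloned out on `get`, and mutated in place through `map_mut` while the internal lock is held:

#[derive(Clone)]
struct Config {
    // Hypothetical payload, not part of the crate.
    name: String,
}

fn plugins_usage_sketch() {
    let plugins = Plugins::new();
    // Store a value; the map is keyed by the value's type.
    plugins.insert(Config { name: "demo".to_string() });

    // `get` clones the stored value out of the map.
    let config: Option<Config> = plugins.get::<Config>();
    assert_eq!(config.unwrap().name, "demo");

    // `map_mut` grants temporary mutable access under the mutex.
    plugins.map_mut::<Config, _, _>(|cfg| {
        if let Some(cfg) = cfg {
            cfg.name = "updated".to_string();
        }
    });
    assert_eq!(plugins.len(), 1);
}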

View File

@@ -53,6 +53,10 @@ impl ReadableSize {
pub const fn as_mb(self) -> u64 {
self.0 / MIB
}
pub const fn as_bytes(self) -> u64 {
self.0
}
}
impl Div<u64> for ReadableSize {

View File

@@ -21,7 +21,16 @@ pub const DEFAULT_SCHEMA_NAME: &str = "public";
/// Reserves [0,MIN_USER_TABLE_ID) for internal usage.
/// User defined table id starts from this value.
pub const MIN_USER_TABLE_ID: u32 = 1024;
/// The maximum system table id
pub const MAX_SYS_TABLE_ID: u32 = MIN_USER_TABLE_ID - 1;
/// system_catalog table id
pub const SYSTEM_CATALOG_TABLE_ID: u32 = 0;
/// scripts table id
pub const SCRIPTS_TABLE_ID: u32 = 1;
pub const MITO_ENGINE: &str = "mito";
pub const IMMUTABLE_FILE_ENGINE: &str = "file";
pub const SEMANTIC_TYPE_PRIMARY_KEY: &str = "PRIMARY KEY";
pub const SEMANTIC_TYPE_FIELD: &str = "FIELD";
pub const SEMANTIC_TYPE_TIME_INDEX: &str = "TIME INDEX";

View File

@@ -16,29 +16,29 @@ use std::any::Any;
use common_error::ext::ErrorExt;
use common_error::prelude::{Snafu, StatusCode};
use snafu::{Backtrace, ErrorCompat};
use snafu::Location;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Invalid catalog info: {}", key))]
InvalidCatalog { key: String, backtrace: Backtrace },
InvalidCatalog { key: String, location: Location },
#[snafu(display("Failed to deserialize catalog entry value: {}", raw))]
DeserializeCatalogEntryValue {
raw: String,
backtrace: Backtrace,
location: Location,
source: serde_json::error::Error,
},
#[snafu(display("Failed to serialize catalog entry value"))]
SerializeCatalogEntryValue {
backtrace: Backtrace,
location: Location,
source: serde_json::error::Error,
},
#[snafu(display("Failed to parse node id: {}", key))]
ParseNodeId { key: String, backtrace: Backtrace },
ParseNodeId { key: String, location: Location },
}
impl ErrorExt for Error {
@@ -51,10 +51,6 @@ impl ErrorExt for Error {
}
}
fn backtrace_opt(&self) -> Option<&Backtrace> {
ErrorCompat::backtrace(self)
}
fn as_any(&self) -> &dyn Any {
self
}

View File

@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use consts::DEFAULT_CATALOG_NAME;
pub mod consts;
pub mod error;
@@ -20,3 +22,23 @@ pub mod error;
pub fn format_full_table_name(catalog: &str, schema: &str, table: &str) -> String {
format!("{catalog}.{schema}.{table}")
}
/// Build db name from catalog and schema string
pub fn build_db_string(catalog: &str, schema: &str) -> String {
if catalog == DEFAULT_CATALOG_NAME {
schema.to_string()
} else {
format!("{catalog}-{schema}")
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_db_string() {
assert_eq!("test", build_db_string(DEFAULT_CATALOG_NAME, "test"));
assert_eq!("a0b1c2d3-test", build_db_string("a0b1c2d3", "test"));
}
}

View File

@@ -0,0 +1,34 @@
[package]
name = "common-datasource"
version.workspace = true
edition.workspace = true
license.workspace = true
[dependencies]
arrow.workspace = true
arrow-schema.workspace = true
async-compression = { version = "0.3", features = [
"bzip2",
"gzip",
"xz",
"zstd",
"futures-io",
"tokio",
] }
async-trait.workspace = true
bytes = "1.1"
common-base = { path = "../base" }
common-error = { path = "../error" }
common-runtime = { path = "../runtime" }
datafusion.workspace = true
derive_builder = "0.12"
futures.workspace = true
object-store = { path = "../../object-store" }
regex = "1.7"
snafu.workspace = true
tokio.workspace = true
tokio-util.workspace = true
url = "2.3"
[dev-dependencies]
common-test-util = { path = "../test-util" }

View File

@@ -0,0 +1,138 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use arrow::record_batch::RecordBatch;
use async_trait::async_trait;
use datafusion::parquet::format::FileMetaData;
use object_store::Writer;
use snafu::{OptionExt, ResultExt};
use tokio::io::{AsyncWrite, AsyncWriteExt};
use tokio_util::compat::Compat;
use crate::error::{self, Result};
use crate::share_buffer::SharedBuffer;
pub struct BufferedWriter<T, U> {
writer: T,
/// `None` indicates the [`BufferedWriter`] has been closed.
encoder: Option<U>,
buffer: SharedBuffer,
bytes_written: u64,
flushed: bool,
threshold: usize,
}
pub trait DfRecordBatchEncoder {
fn write(&mut self, batch: &RecordBatch) -> Result<()>;
}
#[async_trait]
pub trait ArrowWriterCloser {
async fn close(mut self) -> Result<FileMetaData>;
}
pub type DefaultBufferedWriter<E> = BufferedWriter<Compat<Writer>, E>;
impl<T: AsyncWrite + Send + Unpin, U: DfRecordBatchEncoder + ArrowWriterCloser>
BufferedWriter<T, U>
{
pub async fn close_with_arrow_writer(mut self) -> Result<(FileMetaData, u64)> {
let encoder = self
.encoder
.take()
.context(error::BufferedWriterClosedSnafu)?;
let metadata = encoder.close().await?;
let written = self.try_flush(true).await?;
// It's important to shut down the writer: this flushes all pending writes.
self.close().await?;
Ok((metadata, written))
}
}
impl<T: AsyncWrite + Send + Unpin, U: DfRecordBatchEncoder> BufferedWriter<T, U> {
pub async fn close(&mut self) -> Result<()> {
self.writer.shutdown().await.context(error::AsyncWriteSnafu)
}
pub fn new(threshold: usize, buffer: SharedBuffer, encoder: U, writer: T) -> Self {
Self {
threshold,
writer,
encoder: Some(encoder),
buffer,
bytes_written: 0,
flushed: false,
}
}
pub fn bytes_written(&self) -> u64 {
self.bytes_written
}
pub async fn write(&mut self, batch: &RecordBatch) -> Result<()> {
let encoder = self
.encoder
.as_mut()
.context(error::BufferedWriterClosedSnafu)?;
encoder.write(batch)?;
self.try_flush(false).await?;
Ok(())
}
pub fn flushed(&self) -> bool {
self.flushed
}
pub async fn try_flush(&mut self, all: bool) -> Result<u64> {
let mut bytes_written: u64 = 0;
// Once the buffered data size reaches the threshold, split the data into chunks
// (typically 4MB) and write them to the underlying storage.
while self.buffer.buffer.lock().unwrap().len() >= self.threshold {
let chunk = {
let mut buffer = self.buffer.buffer.lock().unwrap();
buffer.split_to(self.threshold)
};
let size = chunk.len();
self.writer
.write_all(&chunk)
.await
.context(error::AsyncWriteSnafu)?;
bytes_written += size as u64;
}
if all {
bytes_written += self.try_flush_all().await?;
}
self.flushed = bytes_written > 0;
self.bytes_written += bytes_written;
Ok(bytes_written)
}
async fn try_flush_all(&mut self) -> Result<u64> {
let remain = self.buffer.buffer.lock().unwrap().split();
let size = remain.len();
self.writer
.write_all(&remain)
.await
.context(error::AsyncWriteSnafu)?;
Ok(size as u64)
}
}
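As a rough sketch of how `BufferedWriter` is driven (not part of the diff): encode batches into the `SharedBuffer`, let `try_flush` drain full chunks to the sink, then flush the remainder and shut the sink down. It assumes the `csv::Writer<SharedBuffer>` encoder impl from `file_format/csv.rs` below and a local `tokio::fs::File` as the `AsyncWrite` sink; the function name and path are illustrative.

use arrow::csv;
use arrow::record_batch::RecordBatch;
use snafu::ResultExt;
use tokio::fs::File;

use crate::buffered_writer::BufferedWriter;
use crate::error::{self, Result};
use crate::share_buffer::SharedBuffer;

// Illustrative sketch: stream record batches through BufferedWriter into a local
// CSV file, flushing whenever the shared buffer grows past a 4 MiB threshold.
async fn write_batches_as_csv(batches: &[RecordBatch], path: &str) -> Result<u64> {
    let threshold = 4 * 1024 * 1024;
    let buffer = SharedBuffer::with_capacity(threshold);
    // csv::Writer<SharedBuffer> implements DfRecordBatchEncoder (see csv.rs).
    let encoder = csv::Writer::new(buffer.clone());
    let sink = File::create(path).await.context(error::AsyncWriteSnafu)?;

    let mut writer = BufferedWriter::new(threshold, buffer, encoder, sink);
    for batch in batches {
        // Encodes into the shared buffer and drains full chunks to the sink.
        writer.write(batch).await?;
    }
    // Flush whatever remains, then shut the sink down.
    writer.try_flush(true).await?;
    writer.close().await?;
    Ok(writer.bytes_written())
}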

View File

@@ -0,0 +1,109 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::Display;
use std::io;
use std::str::FromStr;
use async_compression::tokio::bufread::{BzDecoder, GzipDecoder, XzDecoder, ZstdDecoder};
use bytes::Bytes;
use futures::Stream;
use tokio::io::{AsyncRead, BufReader};
use tokio_util::io::{ReaderStream, StreamReader};
use crate::error::{self, Error, Result};
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum CompressionType {
/// Gzip-ed file
Gzip,
/// Bzip2-ed file
Bzip2,
/// Xz-ed file (liblzma)
Xz,
/// Zstd-ed file
Zstd,
/// Uncompressed file
Uncompressed,
}
impl FromStr for CompressionType {
type Err = Error;
fn from_str(s: &str) -> Result<Self> {
let s = s.to_uppercase();
match s.as_str() {
"GZIP" | "GZ" => Ok(Self::Gzip),
"BZIP2" | "BZ2" => Ok(Self::Bzip2),
"XZ" => Ok(Self::Xz),
"ZST" | "ZSTD" => Ok(Self::Zstd),
"" => Ok(Self::Uncompressed),
_ => error::UnsupportedCompressionTypeSnafu {
compression_type: s,
}
.fail(),
}
}
}
impl Display for CompressionType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Self::Gzip => "GZIP",
Self::Bzip2 => "BZIP2",
Self::Xz => "XZ",
Self::Zstd => "ZSTD",
Self::Uncompressed => "",
})
}
}
impl CompressionType {
pub const fn is_compressed(&self) -> bool {
!matches!(self, &Self::Uncompressed)
}
pub fn convert_async_read<T: AsyncRead + Unpin + Send + 'static>(
&self,
s: T,
) -> Box<dyn AsyncRead + Unpin + Send> {
match self {
CompressionType::Gzip => Box::new(GzipDecoder::new(BufReader::new(s))),
CompressionType::Bzip2 => Box::new(BzDecoder::new(BufReader::new(s))),
CompressionType::Xz => Box::new(XzDecoder::new(BufReader::new(s))),
CompressionType::Zstd => Box::new(ZstdDecoder::new(BufReader::new(s))),
CompressionType::Uncompressed => Box::new(s),
}
}
pub fn convert_stream<T: Stream<Item = io::Result<Bytes>> + Unpin + Send + 'static>(
&self,
s: T,
) -> Box<dyn Stream<Item = io::Result<Bytes>> + Send + Unpin> {
match self {
CompressionType::Gzip => {
Box::new(ReaderStream::new(GzipDecoder::new(StreamReader::new(s))))
}
CompressionType::Bzip2 => {
Box::new(ReaderStream::new(BzDecoder::new(StreamReader::new(s))))
}
CompressionType::Xz => {
Box::new(ReaderStream::new(XzDecoder::new(StreamReader::new(s))))
}
CompressionType::Zstd => {
Box::new(ReaderStream::new(ZstdDecoder::new(StreamReader::new(s))))
}
CompressionType::Uncompressed => Box::new(s),
}
}
}
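A short sketch of the intended use (the file path and option string are illustrative): parse the user-supplied compression option with `FromStr`, then wrap a reader so decompression is transparent.

use std::str::FromStr;

use tokio::fs::File;
use tokio::io::AsyncReadExt;

// Sketch: decompress a local file according to a user-supplied option such as
// "gzip", "zstd", or "" (uncompressed).
async fn read_maybe_compressed(path: &str, compression: &str) -> std::io::Result<Vec<u8>> {
    let compression_type =
        CompressionType::from_str(compression).expect("unsupported compression type");

    let file = File::open(path).await?;
    // Boxed AsyncRead that transparently decompresses when needed.
    let mut reader = compression_type.convert_async_read(file);

    let mut decompressed = Vec::new();
    reader.read_to_end(&mut decompressed).await?;
    Ok(decompressed)
}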

View File

@@ -0,0 +1,227 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use arrow_schema::ArrowError;
use common_error::prelude::*;
use datafusion::parquet::errors::ParquetError;
use snafu::Location;
use url::ParseError;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Unsupported compression type: {}", compression_type))]
UnsupportedCompressionType {
compression_type: String,
location: Location,
},
#[snafu(display("Unsupported backend protocol: {}", protocol))]
UnsupportedBackendProtocol {
protocol: String,
location: Location,
},
#[snafu(display("Unsupported format protocol: {}", format))]
UnsupportedFormat { format: String, location: Location },
#[snafu(display("empty host: {}", url))]
EmptyHostPath { url: String, location: Location },
#[snafu(display("Invalid path: {}", path))]
InvalidPath { path: String, location: Location },
#[snafu(display("Invalid url: {}, error :{}", url, source))]
InvalidUrl {
url: String,
source: ParseError,
location: Location,
},
#[snafu(display("Failed to decompression, source: {}", source))]
Decompression {
source: object_store::Error,
location: Location,
},
#[snafu(display("Failed to build backend, source: {}", source))]
BuildBackend {
source: object_store::Error,
location: Location,
},
#[snafu(display("Failed to read object from path: {}, source: {}", path, source))]
ReadObject {
path: String,
location: Location,
source: object_store::Error,
},
#[snafu(display("Failed to write object to path: {}, source: {}", path, source))]
WriteObject {
path: String,
location: Location,
source: object_store::Error,
},
#[snafu(display("Failed to write: {}", source))]
AsyncWrite {
source: std::io::Error,
location: Location,
},
#[snafu(display("Failed to write record batch: {}", source))]
WriteRecordBatch {
location: Location,
source: ArrowError,
},
#[snafu(display("Failed to encode record batch: {}", source))]
EncodeRecordBatch {
location: Location,
source: ParquetError,
},
#[snafu(display("Failed to read record batch: {}", source))]
ReadRecordBatch {
location: Location,
source: datafusion::error::DataFusionError,
},
#[snafu(display("Failed to read parquet source: {}", source))]
ReadParquetSnafu {
location: Location,
source: datafusion::parquet::errors::ParquetError,
},
#[snafu(display("Failed to convert parquet to schema: {}", source))]
ParquetToSchema {
location: Location,
source: datafusion::parquet::errors::ParquetError,
},
#[snafu(display("Failed to infer schema from file, source: {}", source))]
InferSchema {
location: Location,
source: arrow_schema::ArrowError,
},
#[snafu(display("Failed to list object in path: {}, source: {}", path, source))]
ListObjects {
path: String,
location: Location,
source: object_store::Error,
},
#[snafu(display("Invalid connection: {}", msg))]
InvalidConnection { msg: String, location: Location },
#[snafu(display("Failed to join handle: {}", source))]
JoinHandle {
location: Location,
source: tokio::task::JoinError,
},
#[snafu(display("Failed to parse format {} with value: {}", key, value))]
ParseFormat {
key: &'static str,
value: String,
location: Location,
},
#[snafu(display("Failed to merge schema: {}", source))]
MergeSchema {
source: arrow_schema::ArrowError,
location: Location,
},
#[snafu(display("Missing required field: {}", name))]
MissingRequiredField { name: String, location: Location },
#[snafu(display("Buffered writer closed"))]
BufferedWriterClosed { location: Location },
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
use Error::*;
match self {
BuildBackend { .. }
| ListObjects { .. }
| ReadObject { .. }
| WriteObject { .. }
| AsyncWrite { .. } => StatusCode::StorageUnavailable,
UnsupportedBackendProtocol { .. }
| UnsupportedCompressionType { .. }
| UnsupportedFormat { .. }
| InvalidConnection { .. }
| InvalidUrl { .. }
| EmptyHostPath { .. }
| InvalidPath { .. }
| InferSchema { .. }
| ReadParquetSnafu { .. }
| ParquetToSchema { .. }
| ParseFormat { .. }
| MergeSchema { .. }
| MissingRequiredField { .. } => StatusCode::InvalidArguments,
Decompression { .. }
| JoinHandle { .. }
| ReadRecordBatch { .. }
| WriteRecordBatch { .. }
| EncodeRecordBatch { .. }
| BufferedWriterClosed { .. } => StatusCode::Unexpected,
}
}
fn as_any(&self) -> &dyn Any {
self
}
fn location_opt(&self) -> Option<common_error::snafu::Location> {
use Error::*;
match self {
BuildBackend { location, .. } => Some(*location),
ReadObject { location, .. } => Some(*location),
ListObjects { location, .. } => Some(*location),
InferSchema { location, .. } => Some(*location),
ReadParquetSnafu { location, .. } => Some(*location),
ParquetToSchema { location, .. } => Some(*location),
Decompression { location, .. } => Some(*location),
JoinHandle { location, .. } => Some(*location),
ParseFormat { location, .. } => Some(*location),
MergeSchema { location, .. } => Some(*location),
MissingRequiredField { location, .. } => Some(*location),
WriteObject { location, .. } => Some(*location),
ReadRecordBatch { location, .. } => Some(*location),
WriteRecordBatch { location, .. } => Some(*location),
AsyncWrite { location, .. } => Some(*location),
EncodeRecordBatch { location, .. } => Some(*location),
BufferedWriterClosed { location, .. } => Some(*location),
UnsupportedBackendProtocol { location, .. } => Some(*location),
EmptyHostPath { location, .. } => Some(*location),
InvalidPath { location, .. } => Some(*location),
InvalidUrl { location, .. } => Some(*location),
InvalidConnection { location, .. } => Some(*location),
UnsupportedCompressionType { location, .. } => Some(*location),
UnsupportedFormat { location, .. } => Some(*location),
}
}
}

View File

@@ -0,0 +1,207 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod csv;
pub mod json;
pub mod parquet;
#[cfg(test)]
pub mod tests;
pub const DEFAULT_SCHEMA_INFER_MAX_RECORD: usize = 1000;
use std::collections::HashMap;
use std::result;
use std::sync::Arc;
use std::task::Poll;
use arrow::record_batch::RecordBatch;
use arrow_schema::{ArrowError, Schema as ArrowSchema};
use async_trait::async_trait;
use bytes::{Buf, Bytes};
use datafusion::error::{DataFusionError, Result as DataFusionResult};
use datafusion::physical_plan::file_format::FileOpenFuture;
use datafusion::physical_plan::SendableRecordBatchStream;
use futures::StreamExt;
use object_store::ObjectStore;
use snafu::ResultExt;
use tokio_util::compat::FuturesAsyncWriteCompatExt;
use self::csv::CsvFormat;
use self::json::JsonFormat;
use self::parquet::ParquetFormat;
use crate::buffered_writer::{BufferedWriter, DfRecordBatchEncoder};
use crate::compression::CompressionType;
use crate::error::{self, Result};
use crate::share_buffer::SharedBuffer;
pub const FORMAT_COMPRESSION_TYPE: &str = "compression_type";
pub const FORMAT_DELIMITER: &str = "delimiter";
pub const FORMAT_SCHEMA_INFER_MAX_RECORD: &str = "schema_infer_max_record";
pub const FORMAT_HAS_HEADER: &str = "has_header";
pub const FORMAT_TYPE: &str = "format";
pub const FILE_PATTERN: &str = "pattern";
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Format {
Csv(CsvFormat),
Json(JsonFormat),
Parquet(ParquetFormat),
}
impl TryFrom<&HashMap<String, String>> for Format {
type Error = error::Error;
fn try_from(options: &HashMap<String, String>) -> Result<Self> {
let format = options
.get(FORMAT_TYPE)
.map(|format| format.to_ascii_uppercase())
.unwrap_or_else(|| "PARQUET".to_string());
match format.as_str() {
"CSV" => Ok(Self::Csv(CsvFormat::try_from(options)?)),
"JSON" => Ok(Self::Json(JsonFormat::try_from(options)?)),
"PARQUET" => Ok(Self::Parquet(ParquetFormat::default())),
_ => error::UnsupportedFormatSnafu { format: &format }.fail(),
}
}
}
#[async_trait]
pub trait FileFormat: Send + Sync + std::fmt::Debug {
async fn infer_schema(&self, store: &ObjectStore, path: &str) -> Result<ArrowSchema>;
}
pub trait ArrowDecoder: Send + 'static {
/// Decodes records from `buf`, returning the number of bytes read.
///
/// This method returns `Ok(0)` once `batch_size` objects have been parsed since the
/// last call to [`Self::flush`], or once `buf` is exhausted.
///
/// Any remaining bytes should be included in the next call to [`Self::decode`].
fn decode(&mut self, buf: &[u8]) -> result::Result<usize, ArrowError>;
/// Flushes the currently buffered data to a [`RecordBatch`].
///
/// This should only be called after [`Self::decode`] has returned `Ok(0)`;
/// otherwise it may return an error if it is partway through decoding a record.
///
/// Returns `Ok(None)` if there is no buffered data.
fn flush(&mut self) -> result::Result<Option<RecordBatch>, ArrowError>;
}
impl ArrowDecoder for arrow::csv::reader::Decoder {
fn decode(&mut self, buf: &[u8]) -> result::Result<usize, ArrowError> {
self.decode(buf)
}
fn flush(&mut self) -> result::Result<Option<RecordBatch>, ArrowError> {
self.flush()
}
}
impl ArrowDecoder for arrow::json::RawDecoder {
fn decode(&mut self, buf: &[u8]) -> result::Result<usize, ArrowError> {
self.decode(buf)
}
fn flush(&mut self) -> result::Result<Option<RecordBatch>, ArrowError> {
self.flush()
}
}
pub fn open_with_decoder<T: ArrowDecoder, F: Fn() -> DataFusionResult<T>>(
object_store: Arc<ObjectStore>,
path: String,
compression_type: CompressionType,
decoder_factory: F,
) -> DataFusionResult<FileOpenFuture> {
let mut decoder = decoder_factory()?;
Ok(Box::pin(async move {
let reader = object_store
.reader(&path)
.await
.map_err(|e| DataFusionError::External(Box::new(e)))?;
let mut upstream = compression_type.convert_stream(reader).fuse();
let mut buffered = Bytes::new();
let stream = futures::stream::poll_fn(move |cx| {
loop {
if buffered.is_empty() {
if let Some(result) = futures::ready!(upstream.poll_next_unpin(cx)) {
buffered = result?;
};
}
let decoded = decoder.decode(buffered.as_ref())?;
if decoded == 0 {
break;
} else {
buffered.advance(decoded);
}
}
Poll::Ready(decoder.flush().transpose())
});
Ok(stream.boxed())
}))
}
pub async fn infer_schemas(
store: &ObjectStore,
files: &[String],
file_format: &dyn FileFormat,
) -> Result<ArrowSchema> {
let mut schemas = Vec::with_capacity(files.len());
for file in files {
schemas.push(file_format.infer_schema(store, file).await?)
}
ArrowSchema::try_merge(schemas).context(error::MergeSchemaSnafu)
}
pub async fn stream_to_file<T: DfRecordBatchEncoder, U: Fn(SharedBuffer) -> T>(
mut stream: SendableRecordBatchStream,
store: ObjectStore,
path: &str,
threshold: usize,
encoder_factory: U,
) -> Result<usize> {
let writer = store
.writer(path)
.await
.context(error::WriteObjectSnafu { path })?
.compat_write();
let buffer = SharedBuffer::with_capacity(threshold);
let encoder = encoder_factory(buffer.clone());
let mut writer = BufferedWriter::new(threshold, buffer, encoder, writer);
let mut rows = 0;
while let Some(batch) = stream.next().await {
let batch = batch.context(error::ReadRecordBatchSnafu)?;
writer.write(&batch).await?;
rows += batch.num_rows();
}
// Flushes all pending writes
writer.try_flush(true).await?;
writer.close().await?;
Ok(rows)
}
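A small sketch of `infer_schemas`, which infers one schema per file and merges the results with `ArrowSchema::try_merge`; the file names are placeholders and the items of this module are assumed to be in scope.

// Sketch: infer a merged Arrow schema across several parquet files in one store.
async fn merged_parquet_schema(store: &ObjectStore) -> Result<ArrowSchema> {
    let files = vec!["a.parquet".to_string(), "b.parquet".to_string()];
    let format = ParquetFormat::default();
    infer_schemas(store, &files, &format).await
}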

View File

@@ -0,0 +1,319 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::str::FromStr;
use std::sync::Arc;
use arrow::csv;
use arrow::csv::reader::infer_reader_schema as infer_csv_schema;
use arrow::record_batch::RecordBatch;
use arrow_schema::{Schema, SchemaRef};
use async_trait::async_trait;
use common_runtime;
use datafusion::error::Result as DataFusionResult;
use datafusion::physical_plan::file_format::{FileMeta, FileOpenFuture, FileOpener};
use datafusion::physical_plan::SendableRecordBatchStream;
use derive_builder::Builder;
use object_store::ObjectStore;
use snafu::ResultExt;
use tokio_util::io::SyncIoBridge;
use super::stream_to_file;
use crate::buffered_writer::DfRecordBatchEncoder;
use crate::compression::CompressionType;
use crate::error::{self, Result};
use crate::file_format::{self, open_with_decoder, FileFormat};
use crate::share_buffer::SharedBuffer;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct CsvFormat {
pub has_header: bool,
pub delimiter: u8,
pub schema_infer_max_record: Option<usize>,
pub compression_type: CompressionType,
}
impl TryFrom<&HashMap<String, String>> for CsvFormat {
type Error = error::Error;
fn try_from(value: &HashMap<String, String>) -> Result<Self> {
let mut format = CsvFormat::default();
if let Some(delimiter) = value.get(file_format::FORMAT_DELIMITER) {
// TODO(weny): consider supporting delimiters given as escape sequences like "\t" (not only b'\t')
format.delimiter = u8::from_str(delimiter).map_err(|_| {
error::ParseFormatSnafu {
key: file_format::FORMAT_DELIMITER,
value: delimiter,
}
.build()
})?;
};
if let Some(compression_type) = value.get(file_format::FORMAT_COMPRESSION_TYPE) {
format.compression_type = CompressionType::from_str(compression_type)?;
};
if let Some(schema_infer_max_record) =
value.get(file_format::FORMAT_SCHEMA_INFER_MAX_RECORD)
{
format.schema_infer_max_record =
Some(schema_infer_max_record.parse::<usize>().map_err(|_| {
error::ParseFormatSnafu {
key: file_format::FORMAT_SCHEMA_INFER_MAX_RECORD,
value: schema_infer_max_record,
}
.build()
})?);
};
if let Some(has_header) = value.get(file_format::FORMAT_HAS_HEADER) {
format.has_header = has_header.parse().map_err(|_| {
error::ParseFormatSnafu {
key: file_format::FORMAT_HAS_HEADER,
value: has_header,
}
.build()
})?;
}
Ok(format)
}
}
impl Default for CsvFormat {
fn default() -> Self {
Self {
has_header: true,
delimiter: b',',
schema_infer_max_record: Some(file_format::DEFAULT_SCHEMA_INFER_MAX_RECORD),
compression_type: CompressionType::Uncompressed,
}
}
}
#[derive(Debug, Clone, Builder)]
pub struct CsvConfig {
batch_size: usize,
file_schema: SchemaRef,
#[builder(default = "None")]
file_projection: Option<Vec<usize>>,
#[builder(default = "true")]
has_header: bool,
#[builder(default = "b','")]
delimiter: u8,
}
impl CsvConfig {
fn builder(&self) -> csv::ReaderBuilder {
let mut builder = csv::ReaderBuilder::new()
.with_schema(self.file_schema.clone())
.with_delimiter(self.delimiter)
.with_batch_size(self.batch_size)
.has_header(self.has_header);
if let Some(proj) = &self.file_projection {
builder = builder.with_projection(proj.clone());
}
builder
}
}
#[derive(Debug, Clone)]
pub struct CsvOpener {
config: Arc<CsvConfig>,
object_store: Arc<ObjectStore>,
compression_type: CompressionType,
}
impl CsvOpener {
/// Return a new [`CsvOpener`]. The caller must ensure that the [`CsvConfig`]'s `file_schema` corresponds to the file being opened.
pub fn new(
config: CsvConfig,
object_store: ObjectStore,
compression_type: CompressionType,
) -> Self {
CsvOpener {
config: Arc::new(config),
object_store: Arc::new(object_store),
compression_type,
}
}
}
impl FileOpener for CsvOpener {
fn open(&self, meta: FileMeta) -> DataFusionResult<FileOpenFuture> {
open_with_decoder(
self.object_store.clone(),
meta.location().to_string(),
self.compression_type,
|| Ok(self.config.builder().build_decoder()),
)
}
}
#[async_trait]
impl FileFormat for CsvFormat {
async fn infer_schema(&self, store: &ObjectStore, path: &str) -> Result<Schema> {
let reader = store
.reader(path)
.await
.context(error::ReadObjectSnafu { path })?;
let decoded = self.compression_type.convert_async_read(reader);
let delimiter = self.delimiter;
let schema_infer_max_record = self.schema_infer_max_record;
let has_header = self.has_header;
common_runtime::spawn_blocking_read(move || {
let reader = SyncIoBridge::new(decoded);
let (schema, _records_read) =
infer_csv_schema(reader, delimiter, schema_infer_max_record, has_header)
.context(error::InferSchemaSnafu)?;
Ok(schema)
})
.await
.context(error::JoinHandleSnafu)?
}
}
pub async fn stream_to_csv(
stream: SendableRecordBatchStream,
store: ObjectStore,
path: &str,
threshold: usize,
) -> Result<usize> {
stream_to_file(stream, store, path, threshold, |buffer| {
csv::Writer::new(buffer)
})
.await
}
impl DfRecordBatchEncoder for csv::Writer<SharedBuffer> {
fn write(&mut self, batch: &RecordBatch) -> Result<()> {
self.write(batch).context(error::WriteRecordBatchSnafu)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::file_format::{
FileFormat, FORMAT_COMPRESSION_TYPE, FORMAT_DELIMITER, FORMAT_HAS_HEADER,
FORMAT_SCHEMA_INFER_MAX_RECORD,
};
use crate::test_util::{self, format_schema, test_store};
fn test_data_root() -> String {
test_util::get_data_dir("tests/csv").display().to_string()
}
#[tokio::test]
async fn infer_schema_basic() {
let csv = CsvFormat::default();
let store = test_store(&test_data_root());
let schema = csv.infer_schema(&store, "simple.csv").await.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec![
"c1: Utf8: NULL",
"c2: Int64: NULL",
"c3: Int64: NULL",
"c4: Int64: NULL",
"c5: Int64: NULL",
"c6: Int64: NULL",
"c7: Int64: NULL",
"c8: Int64: NULL",
"c9: Int64: NULL",
"c10: Int64: NULL",
"c11: Float64: NULL",
"c12: Float64: NULL",
"c13: Utf8: NULL"
],
formatted,
);
}
#[tokio::test]
async fn infer_schema_with_limit() {
let json = CsvFormat {
schema_infer_max_record: Some(3),
..CsvFormat::default()
};
let store = test_store(&test_data_root());
let schema = json
.infer_schema(&store, "schema_infer_limit.csv")
.await
.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec![
"a: Int64: NULL",
"b: Float64: NULL",
"c: Int64: NULL",
"d: Int64: NULL"
],
formatted
);
let json = CsvFormat::default();
let store = test_store(&test_data_root());
let schema = json
.infer_schema(&store, "schema_infer_limit.csv")
.await
.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec![
"a: Int64: NULL",
"b: Float64: NULL",
"c: Int64: NULL",
"d: Utf8: NULL"
],
formatted
);
}
#[test]
fn test_try_from() {
let mut map = HashMap::new();
let format: CsvFormat = CsvFormat::try_from(&map).unwrap();
assert_eq!(format, CsvFormat::default());
map.insert(
FORMAT_SCHEMA_INFER_MAX_RECORD.to_string(),
"2000".to_string(),
);
map.insert(FORMAT_COMPRESSION_TYPE.to_string(), "zstd".to_string());
map.insert(FORMAT_DELIMITER.to_string(), b'\t'.to_string());
map.insert(FORMAT_HAS_HEADER.to_string(), "false".to_string());
let format = CsvFormat::try_from(&map).unwrap();
assert_eq!(
format,
CsvFormat {
compression_type: CompressionType::Zstd,
schema_infer_max_record: Some(2000),
delimiter: b'\t',
has_header: false,
}
);
}
}

View File

@@ -0,0 +1,238 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::io::BufReader;
use std::str::FromStr;
use std::sync::Arc;
use arrow::datatypes::SchemaRef;
use arrow::json::reader::{infer_json_schema_from_iterator, ValueIter};
use arrow::json::writer::LineDelimited;
use arrow::json::{self, RawReaderBuilder};
use arrow::record_batch::RecordBatch;
use arrow_schema::Schema;
use async_trait::async_trait;
use common_runtime;
use datafusion::error::{DataFusionError, Result as DataFusionResult};
use datafusion::physical_plan::file_format::{FileMeta, FileOpenFuture, FileOpener};
use datafusion::physical_plan::SendableRecordBatchStream;
use object_store::ObjectStore;
use snafu::ResultExt;
use tokio_util::io::SyncIoBridge;
use super::stream_to_file;
use crate::buffered_writer::DfRecordBatchEncoder;
use crate::compression::CompressionType;
use crate::error::{self, Result};
use crate::file_format::{self, open_with_decoder, FileFormat};
use crate::share_buffer::SharedBuffer;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct JsonFormat {
pub schema_infer_max_record: Option<usize>,
pub compression_type: CompressionType,
}
impl TryFrom<&HashMap<String, String>> for JsonFormat {
type Error = error::Error;
fn try_from(value: &HashMap<String, String>) -> Result<Self> {
let mut format = JsonFormat::default();
if let Some(compression_type) = value.get(file_format::FORMAT_COMPRESSION_TYPE) {
format.compression_type = CompressionType::from_str(compression_type)?
};
if let Some(schema_infer_max_record) =
value.get(file_format::FORMAT_SCHEMA_INFER_MAX_RECORD)
{
format.schema_infer_max_record =
Some(schema_infer_max_record.parse::<usize>().map_err(|_| {
error::ParseFormatSnafu {
key: file_format::FORMAT_SCHEMA_INFER_MAX_RECORD,
value: schema_infer_max_record,
}
.build()
})?);
};
Ok(format)
}
}
impl Default for JsonFormat {
fn default() -> Self {
Self {
schema_infer_max_record: Some(file_format::DEFAULT_SCHEMA_INFER_MAX_RECORD),
compression_type: CompressionType::Uncompressed,
}
}
}
#[async_trait]
impl FileFormat for JsonFormat {
async fn infer_schema(&self, store: &ObjectStore, path: &str) -> Result<Schema> {
let reader = store
.reader(path)
.await
.context(error::ReadObjectSnafu { path })?;
let decoded = self.compression_type.convert_async_read(reader);
let schema_infer_max_record = self.schema_infer_max_record;
common_runtime::spawn_blocking_read(move || {
let mut reader = BufReader::new(SyncIoBridge::new(decoded));
let iter = ValueIter::new(&mut reader, schema_infer_max_record);
let schema = infer_json_schema_from_iterator(iter).context(error::InferSchemaSnafu)?;
Ok(schema)
})
.await
.context(error::JoinHandleSnafu)?
}
}
#[derive(Debug, Clone)]
pub struct JsonOpener {
batch_size: usize,
projected_schema: SchemaRef,
object_store: Arc<ObjectStore>,
compression_type: CompressionType,
}
impl JsonOpener {
/// Return a new [`JsonOpener`]. Any fields not present in `projected_schema` will be ignored.
pub fn new(
batch_size: usize,
projected_schema: SchemaRef,
object_store: ObjectStore,
compression_type: CompressionType,
) -> Self {
Self {
batch_size,
projected_schema,
object_store: Arc::new(object_store),
compression_type,
}
}
}
impl FileOpener for JsonOpener {
fn open(&self, meta: FileMeta) -> DataFusionResult<FileOpenFuture> {
open_with_decoder(
self.object_store.clone(),
meta.location().to_string(),
self.compression_type,
|| {
RawReaderBuilder::new(self.projected_schema.clone())
.with_batch_size(self.batch_size)
.build_decoder()
.map_err(DataFusionError::from)
},
)
}
}
pub async fn stream_to_json(
stream: SendableRecordBatchStream,
store: ObjectStore,
path: &str,
threshold: usize,
) -> Result<usize> {
stream_to_file(stream, store, path, threshold, |buffer| {
json::LineDelimitedWriter::new(buffer)
})
.await
}
impl DfRecordBatchEncoder for json::Writer<SharedBuffer, LineDelimited> {
fn write(&mut self, batch: &RecordBatch) -> Result<()> {
self.write(batch.clone())
.context(error::WriteRecordBatchSnafu)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::file_format::{FileFormat, FORMAT_COMPRESSION_TYPE, FORMAT_SCHEMA_INFER_MAX_RECORD};
use crate::test_util::{self, format_schema, test_store};
fn test_data_root() -> String {
test_util::get_data_dir("tests/json").display().to_string()
}
#[tokio::test]
async fn infer_schema_basic() {
let json = JsonFormat::default();
let store = test_store(&test_data_root());
let schema = json.infer_schema(&store, "simple.json").await.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec![
"a: Int64: NULL",
"b: Float64: NULL",
"c: Boolean: NULL",
"d: Utf8: NULL",
],
formatted
);
}
#[tokio::test]
async fn infer_schema_with_limit() {
let json = JsonFormat {
schema_infer_max_record: Some(3),
..JsonFormat::default()
};
let store = test_store(&test_data_root());
let schema = json
.infer_schema(&store, "schema_infer_limit.json")
.await
.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(
vec!["a: Int64: NULL", "b: Float64: NULL", "c: Boolean: NULL"],
formatted
);
}
#[test]
fn test_try_from() {
let mut map = HashMap::new();
let format = JsonFormat::try_from(&map).unwrap();
assert_eq!(format, JsonFormat::default());
map.insert(
FORMAT_SCHEMA_INFER_MAX_RECORD.to_string(),
"2000".to_string(),
);
map.insert(FORMAT_COMPRESSION_TYPE.to_string(), "zstd".to_string());
let format = JsonFormat::try_from(&map).unwrap();
assert_eq!(
format,
JsonFormat {
compression_type: CompressionType::Zstd,
schema_infer_max_record: Some(2000),
}
);
}
}

View File

@@ -0,0 +1,179 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::result;
use std::sync::Arc;
use arrow::record_batch::RecordBatch;
use arrow_schema::Schema;
use async_trait::async_trait;
use datafusion::error::Result as DatafusionResult;
use datafusion::parquet::arrow::async_reader::AsyncFileReader;
use datafusion::parquet::arrow::{parquet_to_arrow_schema, ArrowWriter};
use datafusion::parquet::errors::{ParquetError, Result as ParquetResult};
use datafusion::parquet::file::metadata::ParquetMetaData;
use datafusion::parquet::format::FileMetaData;
use datafusion::physical_plan::file_format::{FileMeta, ParquetFileReaderFactory};
use datafusion::physical_plan::metrics::ExecutionPlanMetricsSet;
use futures::future::BoxFuture;
use object_store::{ObjectStore, Reader};
use snafu::ResultExt;
use crate::buffered_writer::{ArrowWriterCloser, DfRecordBatchEncoder};
use crate::error::{self, Result};
use crate::file_format::FileFormat;
use crate::share_buffer::SharedBuffer;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub struct ParquetFormat {}
#[async_trait]
impl FileFormat for ParquetFormat {
async fn infer_schema(&self, store: &ObjectStore, path: &str) -> Result<Schema> {
let mut reader = store
.reader(path)
.await
.context(error::ReadObjectSnafu { path })?;
let metadata = reader
.get_metadata()
.await
.context(error::ReadParquetSnafuSnafu)?;
let file_metadata = metadata.file_metadata();
let schema = parquet_to_arrow_schema(
file_metadata.schema_descr(),
file_metadata.key_value_metadata(),
)
.context(error::ParquetToSchemaSnafu)?;
Ok(schema)
}
}
#[derive(Debug, Clone)]
pub struct DefaultParquetFileReaderFactory {
object_store: ObjectStore,
}
/// Returns an [`AsyncFileReader`] factory.
impl DefaultParquetFileReaderFactory {
pub fn new(object_store: ObjectStore) -> Self {
Self { object_store }
}
}
impl ParquetFileReaderFactory for DefaultParquetFileReaderFactory {
// TODO(weny): Support [`metadata_size_hint`].
// Upstream has an implementation that supports [`metadata_size_hint`],
// but it is coupled with `Box<dyn ObjectStore>`.
fn create_reader(
&self,
_partition_index: usize,
file_meta: FileMeta,
_metadata_size_hint: Option<usize>,
_metrics: &ExecutionPlanMetricsSet,
) -> DatafusionResult<Box<dyn AsyncFileReader + Send>> {
let path = file_meta.location().to_string();
let object_store = self.object_store.clone();
Ok(Box::new(LazyParquetFileReader::new(object_store, path)))
}
}
pub struct LazyParquetFileReader {
object_store: ObjectStore,
reader: Option<Reader>,
path: String,
}
impl LazyParquetFileReader {
pub fn new(object_store: ObjectStore, path: String) -> Self {
LazyParquetFileReader {
object_store,
path,
reader: None,
}
}
/// Initializes the reader if it has not been initialized yet; any error is propagated through the returned future.
async fn maybe_initialize(&mut self) -> result::Result<(), object_store::Error> {
if self.reader.is_none() {
let reader = self.object_store.reader(&self.path).await?;
self.reader = Some(reader);
}
Ok(())
}
}
impl AsyncFileReader for LazyParquetFileReader {
fn get_bytes(
&mut self,
range: std::ops::Range<usize>,
) -> BoxFuture<'_, ParquetResult<bytes::Bytes>> {
Box::pin(async move {
self.maybe_initialize()
.await
.map_err(|e| ParquetError::External(Box::new(e)))?;
// Safety: `maybe_initialize` succeeded, so the reader must be initialized.
self.reader.as_mut().unwrap().get_bytes(range).await
})
}
fn get_metadata(&mut self) -> BoxFuture<'_, ParquetResult<Arc<ParquetMetaData>>> {
Box::pin(async move {
self.maybe_initialize()
.await
.map_err(|e| ParquetError::External(Box::new(e)))?;
// Safety: `maybe_initialize` succeeded, so the reader must be initialized.
self.reader.as_mut().unwrap().get_metadata().await
})
}
}
impl DfRecordBatchEncoder for ArrowWriter<SharedBuffer> {
fn write(&mut self, batch: &RecordBatch) -> Result<()> {
self.write(batch).context(error::EncodeRecordBatchSnafu)
}
}
#[async_trait]
impl ArrowWriterCloser for ArrowWriter<SharedBuffer> {
async fn close(self) -> Result<FileMetaData> {
self.close().context(error::EncodeRecordBatchSnafu)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_util::{self, format_schema, test_store};
fn test_data_root() -> String {
test_util::get_data_dir("tests/parquet")
.display()
.to_string()
}
#[tokio::test]
async fn infer_schema_basic() {
let json = ParquetFormat::default();
let store = test_store(&test_data_root());
let schema = json.infer_schema(&store, "basic.parquet").await.unwrap();
let formatted: Vec<_> = format_schema(schema);
assert_eq!(vec!["num: Int64: NULL", "str: Utf8: NULL"], formatted);
}
}

View File

@@ -0,0 +1,228 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::assert_matches::assert_matches;
use std::collections::HashMap;
use std::sync::Arc;
use std::vec;
use datafusion::assert_batches_eq;
use datafusion::execution::context::TaskContext;
use datafusion::physical_plan::file_format::{FileOpener, FileScanConfig, FileStream, ParquetExec};
use datafusion::physical_plan::metrics::ExecutionPlanMetricsSet;
use datafusion::physical_plan::ExecutionPlan;
use datafusion::prelude::SessionContext;
use futures::StreamExt;
use super::FORMAT_TYPE;
use crate::compression::CompressionType;
use crate::error;
use crate::file_format::csv::{CsvConfigBuilder, CsvOpener};
use crate::file_format::json::JsonOpener;
use crate::file_format::parquet::DefaultParquetFileReaderFactory;
use crate::file_format::Format;
use crate::test_util::{self, scan_config, test_basic_schema, test_store};
struct Test<'a, T: FileOpener> {
config: FileScanConfig,
opener: T,
expected: Vec<&'a str>,
}
impl<'a, T: FileOpener> Test<'a, T> {
pub async fn run(self) {
let result = FileStream::new(
&self.config,
0,
self.opener,
&ExecutionPlanMetricsSet::new(),
)
.unwrap()
.map(|b| b.unwrap())
.collect::<Vec<_>>()
.await;
assert_batches_eq!(self.expected, &result);
}
}
#[tokio::test]
async fn test_json_opener() {
let store = test_store("/");
let schema = test_basic_schema();
let json_opener = JsonOpener::new(
100,
schema.clone(),
store.clone(),
CompressionType::Uncompressed,
);
let path = &test_util::get_data_dir("tests/json/basic.json")
.display()
.to_string();
let tests = [
Test {
config: scan_config(schema.clone(), None, path),
opener: json_opener.clone(),
expected: vec![
"+-----+-------+",
"| num | str |",
"+-----+-------+",
"| 5 | test |",
"| 2 | hello |",
"| 4 | foo |",
"+-----+-------+",
],
},
Test {
config: scan_config(schema.clone(), Some(1), path),
opener: json_opener.clone(),
expected: vec![
"+-----+------+",
"| num | str |",
"+-----+------+",
"| 5 | test |",
"+-----+------+",
],
},
];
for test in tests {
test.run().await;
}
}
#[tokio::test]
async fn test_csv_opener() {
let store = test_store("/");
let schema = test_basic_schema();
let path = &test_util::get_data_dir("tests/csv/basic.csv")
.display()
.to_string();
let csv_conf = CsvConfigBuilder::default()
.batch_size(test_util::TEST_BATCH_SIZE)
.file_schema(schema.clone())
.build()
.unwrap();
let csv_opener = CsvOpener::new(csv_conf, store, CompressionType::Uncompressed);
let tests = [
Test {
config: scan_config(schema.clone(), None, path),
opener: csv_opener.clone(),
expected: vec![
"+-----+-------+",
"| num | str |",
"+-----+-------+",
"| 5 | test |",
"| 2 | hello |",
"| 4 | foo |",
"+-----+-------+",
],
},
Test {
config: scan_config(schema.clone(), Some(1), path),
opener: csv_opener.clone(),
expected: vec![
"+-----+------+",
"| num | str |",
"+-----+------+",
"| 5 | test |",
"+-----+------+",
],
},
];
for test in tests {
test.run().await;
}
}
#[tokio::test(flavor = "multi_thread")]
async fn test_parquet_exec() {
let store = test_store("/");
let schema = test_basic_schema();
let path = &test_util::get_data_dir("tests/parquet/basic.parquet")
.display()
.to_string();
let base_config = scan_config(schema.clone(), None, path);
let exec = ParquetExec::new(base_config, None, None)
.with_parquet_file_reader_factory(Arc::new(DefaultParquetFileReaderFactory::new(store)));
let ctx = SessionContext::new();
let context = Arc::new(TaskContext::from(&ctx));
// The stream batch size can be set by ctx.session_config.batch_size
let result = exec
.execute(0, context)
.unwrap()
.map(|b| b.unwrap())
.collect::<Vec<_>>()
.await;
assert_batches_eq!(
vec![
"+-----+-------+",
"| num | str |",
"+-----+-------+",
"| 5 | test |",
"| 2 | hello |",
"| 4 | foo |",
"+-----+-------+",
],
&result
);
}
#[test]
fn test_format() {
let value = [(FORMAT_TYPE.to_string(), "csv".to_string())]
.into_iter()
.collect::<HashMap<_, _>>();
assert_matches!(Format::try_from(&value).unwrap(), Format::Csv(_));
let value = [(FORMAT_TYPE.to_string(), "Parquet".to_string())]
.into_iter()
.collect::<HashMap<_, _>>();
assert_matches!(Format::try_from(&value).unwrap(), Format::Parquet(_));
let value = [(FORMAT_TYPE.to_string(), "JSON".to_string())]
.into_iter()
.collect::<HashMap<_, _>>();
assert_matches!(Format::try_from(&value).unwrap(), Format::Json(_));
let value = [(FORMAT_TYPE.to_string(), "Foobar".to_string())]
.into_iter()
.collect::<HashMap<_, _>>();
assert_matches!(
Format::try_from(&value).unwrap_err(),
error::Error::UnsupportedFormat { .. }
);
let value = HashMap::new();
assert_matches!(Format::try_from(&value).unwrap(), Format::Parquet(_));
}

View File

@@ -0,0 +1,28 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(assert_matches)]
pub mod buffered_writer;
pub mod compression;
pub mod error;
pub mod file_format;
pub mod lister;
pub mod object_store;
pub mod share_buffer;
#[cfg(test)]
pub mod test_util;
#[cfg(test)]
pub mod tests;
pub mod util;

View File

@@ -0,0 +1,83 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use futures::{future, TryStreamExt};
use object_store::{Entry, ObjectStore};
use regex::Regex;
use snafu::ResultExt;
use crate::error::{self, Result};
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Source {
Filename(String),
Dir,
}
pub struct Lister {
object_store: ObjectStore,
source: Source,
path: String,
regex: Option<Regex>,
}
impl Lister {
pub fn new(
object_store: ObjectStore,
source: Source,
path: String,
regex: Option<Regex>,
) -> Self {
Lister {
object_store,
source,
path,
regex,
}
}
pub async fn list(&self) -> Result<Vec<Entry>> {
match &self.source {
Source::Dir => {
let streamer = self
.object_store
.list(&self.path)
.await
.context(error::ListObjectsSnafu { path: &self.path })?;
streamer
.try_filter(|f| {
let res = self
.regex
.as_ref()
.map(|x| x.is_match(f.name()))
.unwrap_or(true);
future::ready(res)
})
.try_collect::<Vec<_>>()
.await
.context(error::ListObjectsSnafu { path: &self.path })
}
Source::Filename(filename) => {
// Make sure this file exists.
let file_full_path = format!("{}{}", self.path, filename);
let _ = self.object_store.stat(&file_full_path).await.context(
error::ListObjectsSnafu {
path: &file_full_path,
},
)?;
Ok(vec![Entry::new(&file_full_path)])
}
}
}
}
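A usage sketch of `Lister` (the prefix and pattern are illustrative): with `Source::Dir` it lists everything under the path and keeps entries matching the optional regex, while `Source::Filename` stats a single file.

use regex::Regex;

// Sketch: enumerate every ".csv" entry under a directory prefix.
async fn list_csv_entries(store: ObjectStore) -> Result<Vec<Entry>> {
    let lister = Lister::new(
        store,
        Source::Dir,
        "/data/".to_string(),
        Some(Regex::new(r"\.csv$").unwrap()),
    );
    lister.list().await
}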

View File

@@ -0,0 +1,60 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod fs;
pub mod s3;
use std::collections::HashMap;
use object_store::ObjectStore;
use snafu::{OptionExt, ResultExt};
use url::{ParseError, Url};
use self::fs::build_fs_backend;
use self::s3::build_s3_backend;
use crate::error::{self, Result};
pub const FS_SCHEMA: &str = "FS";
pub const S3_SCHEMA: &str = "S3";
/// Parses `url` and returns `(schema, Option<host>, path)`.
pub fn parse_url(url: &str) -> Result<(String, Option<String>, String)> {
let parsed_url = Url::parse(url);
match parsed_url {
Ok(url) => Ok((
url.scheme().to_string(),
url.host_str().map(|s| s.to_string()),
url.path().to_string(),
)),
Err(ParseError::RelativeUrlWithoutBase) => {
Ok((FS_SCHEMA.to_string(), None, url.to_string()))
}
Err(err) => Err(err).context(error::InvalidUrlSnafu { url }),
}
}
pub fn build_backend(url: &str, connection: &HashMap<String, String>) -> Result<ObjectStore> {
let (schema, host, _path) = parse_url(url)?;
match schema.to_uppercase().as_str() {
S3_SCHEMA => {
let host = host.context(error::EmptyHostPathSnafu {
url: url.to_string(),
})?;
Ok(build_s3_backend(&host, "/", connection)?)
}
FS_SCHEMA => Ok(build_fs_backend("/")?),
_ => error::UnsupportedBackendProtocolSnafu { protocol: schema }.fail(),
}
}
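A sketch of how `parse_url` and `build_backend` behave (URLs are placeholders): a scheme-less path falls back to the FS backend, while an s3://bucket/key URL is routed to the S3 backend using the connection options.

use std::collections::HashMap;

// Sketch of the URL routing; values are placeholders.
fn backend_routing_sketch() -> Result<ObjectStore> {
    let (schema, host, path) = parse_url("s3://my-bucket/data/file.parquet")?;
    assert_eq!(schema, "s3");
    assert_eq!(host.as_deref(), Some("my-bucket"));
    assert_eq!(path, "/data/file.parquet");

    // A bare path has no scheme and parses as the FS schema.
    let (schema, _, _) = parse_url("/tmp/data/file.parquet")?;
    assert_eq!(schema, "FS");

    // Connection options are only consulted by remote backends (see s3.rs).
    let connection: HashMap<String, String> = HashMap::new();
    build_backend("/tmp/data/file.parquet", &connection)
}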

View File

@@ -0,0 +1,28 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use object_store::services::Fs;
use object_store::ObjectStore;
use snafu::ResultExt;
use crate::error::{BuildBackendSnafu, Result};
pub fn build_fs_backend(root: &str) -> Result<ObjectStore> {
let mut builder = Fs::default();
builder.root(root);
let object_store = ObjectStore::new(builder)
.context(BuildBackendSnafu)?
.finish();
Ok(object_store)
}

View File

@@ -0,0 +1,79 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use object_store::services::S3;
use object_store::ObjectStore;
use snafu::ResultExt;
use crate::error::{self, Result};
const ENDPOINT_URL: &str = "endpoint_url";
const ACCESS_KEY_ID: &str = "access_key_id";
const SECRET_ACCESS_KEY: &str = "secret_access_key";
const SESSION_TOKEN: &str = "session_token";
const REGION: &str = "region";
const ENABLE_VIRTUAL_HOST_STYLE: &str = "enable_virtual_host_style";
pub fn build_s3_backend(
host: &str,
path: &str,
connection: &HashMap<String, String>,
) -> Result<ObjectStore> {
let mut builder = S3::default();
builder.root(path);
builder.bucket(host);
if let Some(endpoint) = connection.get(ENDPOINT_URL) {
builder.endpoint(endpoint);
}
if let Some(region) = connection.get(REGION) {
builder.region(region);
}
if let Some(key_id) = connection.get(ACCESS_KEY_ID) {
builder.access_key_id(key_id);
}
if let Some(key) = connection.get(SECRET_ACCESS_KEY) {
builder.secret_access_key(key);
}
if let Some(session_token) = connection.get(SESSION_TOKEN) {
builder.security_token(session_token);
}
if let Some(enable_str) = connection.get(ENABLE_VIRTUAL_HOST_STYLE) {
let enable = enable_str.as_str().parse::<bool>().map_err(|e| {
error::InvalidConnectionSnafu {
msg: format!(
"failed to parse the option {}={}, {}",
ENABLE_VIRTUAL_HOST_STYLE, enable_str, e
),
}
.build()
})?;
if enable {
builder.enable_virtual_host_style();
}
}
Ok(ObjectStore::new(builder)
.context(error::BuildBackendSnafu)?
.finish())
}
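A sketch of the connection options `build_s3_backend` understands; every value below is a placeholder, not a real endpoint or credential.

use std::collections::HashMap;

// Sketch: build an S3-backed ObjectStore from placeholder connection options.
fn s3_backend_sketch() -> Result<ObjectStore> {
    let mut connection = HashMap::new();
    connection.insert("endpoint_url".to_string(), "http://127.0.0.1:9000".to_string());
    connection.insert("region".to_string(), "us-east-1".to_string());
    connection.insert("access_key_id".to_string(), "<access-key>".to_string());
    connection.insert("secret_access_key".to_string(), "<secret-key>".to_string());
    connection.insert("enable_virtual_host_style".to_string(), "false".to_string());

    // host is the bucket name, path is the root within the bucket.
    build_s3_backend("my-bucket", "/", &connection)
}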

View File

@@ -0,0 +1,46 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::io::Write;
use std::sync::{Arc, Mutex};
use bytes::{BufMut, BytesMut};
#[derive(Clone, Default)]
pub struct SharedBuffer {
pub buffer: Arc<Mutex<BytesMut>>,
}
impl SharedBuffer {
pub fn with_capacity(size: usize) -> Self {
Self {
buffer: Arc::new(Mutex::new(BytesMut::with_capacity(size))),
}
}
}
impl Write for SharedBuffer {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
let len = buf.len();
let mut buffer = self.buffer.lock().unwrap();
buffer.put_slice(buf);
Ok(len)
}
fn flush(&mut self) -> std::io::Result<()> {
// This flush implementation is intentionally left blank;
// the actual flush happens in `BufferedWriter::try_flush`.
Ok(())
}
}
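A short sketch of why `SharedBuffer` exists: the synchronous arrow encoders write through `std::io::Write`, while another clone of the same buffer is drained asynchronously (this is what `BufferedWriter::try_flush` does).

use std::io::Write;

// Sketch: two clones share one BytesMut; one side writes, the other drains.
fn shared_buffer_sketch() {
    let buffer = SharedBuffer::with_capacity(16);
    let mut encoder_side = buffer.clone();
    encoder_side.write_all(b"hello").unwrap();

    // Drain the bytes the encoder produced.
    let drained = buffer.buffer.lock().unwrap().split();
    assert_eq!(&drained[..], b"hello");
    assert!(buffer.buffer.lock().unwrap().is_empty());
}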

View File

@@ -0,0 +1,175 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::path::PathBuf;
use std::sync::Arc;
use arrow_schema::{DataType, Field, Schema, SchemaRef};
use common_test_util::temp_dir::{create_temp_dir, TempDir};
use datafusion::datasource::listing::PartitionedFile;
use datafusion::datasource::object_store::ObjectStoreUrl;
use datafusion::physical_plan::file_format::{FileScanConfig, FileStream};
use datafusion::physical_plan::metrics::ExecutionPlanMetricsSet;
use object_store::services::Fs;
use object_store::ObjectStore;
use crate::compression::CompressionType;
use crate::file_format::csv::{stream_to_csv, CsvConfigBuilder, CsvOpener};
use crate::file_format::json::{stream_to_json, JsonOpener};
use crate::test_util;
pub const TEST_BATCH_SIZE: usize = 100;
pub fn get_data_dir(path: &str) -> PathBuf {
// https://doc.rust-lang.org/cargo/reference/environment-variables.html
let dir = env!("CARGO_MANIFEST_DIR");
PathBuf::from(dir).join(path)
}
pub fn format_schema(schema: Schema) -> Vec<String> {
schema
.fields()
.iter()
.map(|f| {
format!(
"{}: {:?}: {}",
f.name(),
f.data_type(),
if f.is_nullable() { "NULL" } else { "NOT NULL" }
)
})
.collect()
}
pub fn test_store(root: &str) -> ObjectStore {
let mut builder = Fs::default();
builder.root(root);
ObjectStore::new(builder).unwrap().finish()
}
pub fn test_tmp_store(root: &str) -> (ObjectStore, TempDir) {
let dir = create_temp_dir(root);
let mut builder = Fs::default();
builder.root("/");
(ObjectStore::new(builder).unwrap().finish(), dir)
}
pub fn test_basic_schema() -> SchemaRef {
let schema = Schema::new(vec![
Field::new("num", DataType::Int64, false),
Field::new("str", DataType::Utf8, false),
]);
Arc::new(schema)
}
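/// Builds a `FileScanConfig` that scans a single file with the given schema and optional row limit.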
pub fn scan_config(file_schema: SchemaRef, limit: Option<usize>, filename: &str) -> FileScanConfig {
FileScanConfig {
object_store_url: ObjectStoreUrl::parse("empty://").unwrap(), // won't be used
file_schema,
file_groups: vec![vec![PartitionedFile::new(filename.to_string(), 10)]],
statistics: Default::default(),
projection: None,
limit,
table_partition_cols: vec![],
output_ordering: None,
infinite_source: false,
}
}
pub async fn setup_stream_to_json_test(origin_path: &str, threshold: impl Fn(usize) -> usize) {
let store = test_store("/");
let schema = test_basic_schema();
let json_opener = JsonOpener::new(
test_util::TEST_BATCH_SIZE,
schema.clone(),
store.clone(),
CompressionType::Uncompressed,
);
let size = store.read(origin_path).await.unwrap().len();
let config = scan_config(schema.clone(), None, origin_path);
let stream = FileStream::new(&config, 0, json_opener, &ExecutionPlanMetricsSet::new()).unwrap();
let (tmp_store, dir) = test_tmp_store("test_stream_to_json");
let output_path = format!("{}/{}", dir.path().display(), "output");
stream_to_json(
Box::pin(stream),
tmp_store.clone(),
&output_path,
threshold(size),
)
.await
.unwrap();
let written = tmp_store.read(&output_path).await.unwrap();
let origin = store.read(origin_path).await.unwrap();
// Compare contents, ignoring trailing newlines.
assert_eq!(
String::from_utf8_lossy(&written).trim_end_matches('\n'),
String::from_utf8_lossy(&origin).trim_end_matches('\n'),
)
}
pub async fn setup_stream_to_csv_test(origin_path: &str, threshold: impl Fn(usize) -> usize) {
let store = test_store("/");
let schema = test_basic_schema();
let csv_conf = CsvConfigBuilder::default()
.batch_size(test_util::TEST_BATCH_SIZE)
.file_schema(schema.clone())
.build()
.unwrap();
let csv_opener = CsvOpener::new(csv_conf, store.clone(), CompressionType::Uncompressed);
let size = store.read(origin_path).await.unwrap().len();
let config = scan_config(schema.clone(), None, origin_path);
let stream = FileStream::new(&config, 0, csv_opener, &ExecutionPlanMetricsSet::new()).unwrap();
let (tmp_store, dir) = test_tmp_store("test_stream_to_csv");
let output_path = format!("{}/{}", dir.path().display(), "output");
stream_to_csv(
Box::pin(stream),
tmp_store.clone(),
&output_path,
threshold(size),
)
.await
.unwrap();
let written = tmp_store.read(&output_path).await.unwrap();
let origin = store.read(origin_path).await.unwrap();
// Compare contents, ignoring trailing newlines.
assert_eq!(
String::from_utf8_lossy(&written).trim_end_matches('\n'),
String::from_utf8_lossy(&origin).trim_end_matches('\n'),
)
}

View File

@@ -0,0 +1,61 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::test_util;
#[tokio::test]
async fn test_stream_to_json() {
// A small threshold triggers a flush on every write.
test_util::setup_stream_to_json_test(
&test_util::get_data_dir("tests/json/basic.json")
.display()
.to_string(),
|size| size / 2,
)
.await;
// A large threshold only triggers a flush at the end.
test_util::setup_stream_to_json_test(
&test_util::get_data_dir("tests/json/basic.json")
.display()
.to_string(),
|size| size * 2,
)
.await;
}
#[tokio::test]
async fn test_stream_to_csv() {
// A small threshold triggers a flush on every write.
test_util::setup_stream_to_csv_test(
&test_util::get_data_dir("tests/csv/basic.csv")
.display()
.to_string(),
|size| size / 2,
)
.await;
// A large threshold only triggers a flush at the end.
test_util::setup_stream_to_csv_test(
&test_util::get_data_dir("tests/csv/basic.csv")
.display()
.to_string(),
|size| size * 2,
)
.await;
}

View File

@@ -0,0 +1,125 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
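/// Splits a path into its directory part (always ending with `/`, defaulting to `"/"`)
/// and an optional file name; the file name is `None` when the path denotes a directory.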
pub fn find_dir_and_filename(path: &str) -> (String, Option<String>) {
if path.is_empty() {
("/".to_string(), None)
} else if path.ends_with('/') {
(path.to_string(), None)
} else if let Some(idx) = path.rfind('/') {
(
path[..idx + 1].to_string(),
Some(path[idx + 1..].to_string()),
)
} else {
("/".to_string(), Some(path.to_string()))
}
}
#[cfg(test)]
mod tests {
use url::Url;
use super::*;
#[test]
fn test_parse_uri() {
struct Test<'a> {
uri: &'a str,
expected_path: &'a str,
expected_schema: &'a str,
}
let tests = [
Test {
uri: "s3://bucket/to/path/",
expected_path: "/to/path/",
expected_schema: "s3",
},
Test {
uri: "fs:///to/path/",
expected_path: "/to/path/",
expected_schema: "fs",
},
Test {
uri: "fs:///to/path/file",
expected_path: "/to/path/file",
expected_schema: "fs",
},
];
for test in tests {
let parsed_uri = Url::parse(test.uri).unwrap();
assert_eq!(parsed_uri.path(), test.expected_path);
assert_eq!(parsed_uri.scheme(), test.expected_schema);
}
}
#[test]
fn test_parse_path_and_dir() {
let parsed = Url::from_file_path("/to/path/file").unwrap();
assert_eq!(parsed.path(), "/to/path/file");
let parsed = Url::from_directory_path("/to/path/").unwrap();
assert_eq!(parsed.path(), "/to/path/");
}
#[test]
fn test_find_dir_and_filename() {
struct Test<'a> {
path: &'a str,
expected_dir: &'a str,
expected_filename: Option<String>,
}
let tests = [
Test {
path: "to/path/",
expected_dir: "to/path/",
expected_filename: None,
},
Test {
path: "to/path/filename",
expected_dir: "to/path/",
expected_filename: Some("filename".into()),
},
Test {
path: "/to/path/filename",
expected_dir: "/to/path/",
expected_filename: Some("filename".into()),
},
Test {
path: "/",
expected_dir: "/",
expected_filename: None,
},
Test {
path: "filename",
expected_dir: "/",
expected_filename: Some("filename".into()),
},
Test {
path: "",
expected_dir: "/",
expected_filename: None,
},
];
for test in tests {
let (path, filename) = find_dir_and_filename(test.path);
assert_eq!(test.expected_dir, path);
assert_eq!(test.expected_filename, filename)
}
}
}

View File

@@ -0,0 +1,24 @@
### Parquet
`parquet/basic.parquet` was converted from `csv/basic.csv` via [bdt](https://github.com/andygrove/bdt).
Contents of `parquet/basic.parquet`:
Data:
```
+-----+-------+
| num | str |
+-----+-------+
| 5 | test |
| 2 | hello |
| 4 | foo |
+-----+-------+
```
Schema:
```
+-------------+-----------+-------------+
| column_name | data_type | is_nullable |
+-------------+-----------+-------------+
| num | Int64 | YES |
| str | Utf8 | YES |
+-------------+-----------+-------------+
```

View File

@@ -0,0 +1,4 @@
num,str
5,test
2,hello
4,foo

View File

@@ -0,0 +1,5 @@
a,b,c,d
1,2,3,4
1,2,3,4
1,2.0,3,4
1,2,4,test

View File

@@ -0,0 +1,11 @@
c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13
c,2,1,18109,2033001162,-6513304855495910254,25,43062,1491205016,5863949479783605708,0.110830784,0.9294097332465232,6WfVFBVGJSQb7FhA7E0lBwdvjfZnSW
d,5,-40,22614,706441268,-7542719935673075327,155,14337,3373581039,11720144131976083864,0.69632107,0.3114712539863804,C2GT5KVyOPZpgKVl110TyZO0NcJ434
b,1,29,-18218,994303988,5983957848665088916,204,9489,3275293996,14857091259186476033,0.53840446,0.17909035118828576,AyYVExXK6AR2qUTxNZ7qRHQOVGMLcz
a,1,-85,-15154,1171968280,1919439543497968449,77,52286,774637006,12101411955859039553,0.12285209,0.6864391962767343,0keZ5G8BffGwgF2RwQD59TFzMStxCB
b,5,-82,22080,1824882165,7373730676428214987,208,34331,3342719438,3330177516592499461,0.82634634,0.40975383525297016,Ig1QcuKsjHXkproePdERo2w0mYzIqd
b,4,-111,-1967,-4229382,1892872227362838079,67,9832,1243785310,8382489916947120498,0.06563997,0.152498292971736,Sfx0vxv1skzZWT1PqVdoRDdO6Sb6xH
e,3,104,-25136,1738331255,300633854973581194,139,20807,3577318119,13079037564113702254,0.40154034,0.7764360990307122,DuJNG8tufSqW0ZstHqWj3aGvFLMg4A
a,3,13,12613,1299719633,2020498574254265315,191,17835,3998790955,14881411008939145569,0.041445434,0.8813167497816289,Amn2K87Db5Es3dFQO9cw9cvpAM6h35
d,1,38,18384,-335410409,-1632237090406591229,26,57510,2712615025,1842662804748246269,0.6064476,0.6404495093354053,4HX6feIvmNXBN7XGqgO4YVBkhu8GDI
a,4,-38,20744,762932956,308913475857409919,7,45465,1787652631,878137512938218976,0.7459874,0.02182578039211991,ydkwycaISlYSlEq3TlkS2m15I2pcp8

View File

@@ -0,0 +1,3 @@
{"num":5,"str":"test"}
{"num":2,"str":"hello"}
{"num":4,"str":"foo"}

View File

@@ -0,0 +1,4 @@
{"a":1}
{"a":-10, "b":-3.5}
{"a":2, "b":0.6, "c":false}
{"a":1, "b":2.0, "c":false, "d":"4"}

View File

@@ -0,0 +1,12 @@
{"a":1, "b":2.0, "c":false, "d":"4"}
{"a":-10, "b":-3.5, "c":true, "d":"4"}
{"a":2, "b":0.6, "c":false, "d":"text"}
{"a":1, "b":2.0, "c":false, "d":"4"}
{"a":7, "b":-3.5, "c":true, "d":"4"}
{"a":1, "b":0.6, "c":false, "d":"text"}
{"a":1, "b":2.0, "c":false, "d":"4"}
{"a":5, "b":-3.5, "c":true, "d":"4"}
{"a":1, "b":0.6, "c":false, "d":"text"}
{"a":1, "b":2.0, "c":false, "d":"4"}
{"a":1, "b":-3.5, "c":true, "d":"4"}
{"a":100000000000000, "b":0.6, "c":false, "d":"text"}

Binary file not shown.

View File

@@ -23,10 +23,12 @@ pub trait ErrorExt: std::error::Error {
StatusCode::Unknown
}
/// Get the reference to the backtrace of this error, None if the backtrace is unavailable.
// Add the `_opt` suffix to avoid confusion with the similar method in `std::error::Error`; once
// backtrace in std is stable, we can deprecate this method.
fn backtrace_opt(&self) -> Option<&crate::snafu::Backtrace>;
// TODO(ruihang): remove this default implementation
/// Get the location of this error, None if the location is unavailable.
/// Add the `_opt` suffix to avoid confusion with the similar method in `std::error::Error`.
fn location_opt(&self) -> Option<crate::snafu::Location> {
None
}
/// Returns the error as [Any](std::any::Any) so that it can be
/// downcast to a specific implementation.
@@ -71,8 +73,8 @@ impl crate::ext::ErrorExt for BoxedError {
self.inner.status_code()
}
fn backtrace_opt(&self) -> Option<&crate::snafu::Backtrace> {
self.inner.backtrace_opt()
fn location_opt(&self) -> Option<crate::snafu::Location> {
self.inner.location_opt()
}
fn as_any(&self) -> &dyn std::any::Any {
@@ -84,7 +86,7 @@ impl crate::ext::ErrorExt for BoxedError {
// via `ErrorCompat::backtrace()`.
impl crate::snafu::ErrorCompat for BoxedError {
fn backtrace(&self) -> Option<&crate::snafu::Backtrace> {
self.inner.backtrace_opt()
None
}
}
@@ -118,7 +120,7 @@ impl crate::ext::ErrorExt for PlainError {
self.status_code
}
fn backtrace_opt(&self) -> Option<&crate::snafu::Backtrace> {
fn location_opt(&self) -> Option<crate::snafu::Location> {
None
}
@@ -126,62 +128,3 @@ impl crate::ext::ErrorExt for PlainError {
self as _
}
}
#[cfg(test)]
mod tests {
use std::error::Error;
use snafu::ErrorCompat;
use super::*;
use crate::format::DebugFormat;
use crate::mock::MockError;
#[test]
fn test_opaque_error_without_backtrace() {
let err = BoxedError::new(MockError::new(StatusCode::Internal));
assert!(err.backtrace_opt().is_none());
assert_eq!(StatusCode::Internal, err.status_code());
assert!(err.as_any().downcast_ref::<MockError>().is_some());
assert!(err.source().is_none());
assert!(ErrorCompat::backtrace(&err).is_none());
}
#[test]
fn test_opaque_error_with_backtrace() {
let err = BoxedError::new(MockError::with_backtrace(StatusCode::Internal));
assert!(err.backtrace_opt().is_some());
assert_eq!(StatusCode::Internal, err.status_code());
assert!(err.as_any().downcast_ref::<MockError>().is_some());
assert!(err.source().is_none());
assert!(ErrorCompat::backtrace(&err).is_some());
let msg = format!("{err:?}");
assert!(msg.contains("\nBacktrace:\n"));
let fmt_msg = format!("{:?}", DebugFormat::new(&err));
assert_eq!(msg, fmt_msg);
let msg = err.to_string();
msg.contains("Internal");
}
#[test]
fn test_opaque_error_with_source() {
let leaf_err = MockError::with_backtrace(StatusCode::Internal);
let internal_err = MockError::with_source(leaf_err);
let err = BoxedError::new(internal_err);
assert!(err.backtrace_opt().is_some());
assert_eq!(StatusCode::Internal, err.status_code());
assert!(err.as_any().downcast_ref::<MockError>().is_some());
assert!(err.source().is_some());
let msg = format!("{err:?}");
assert!(msg.contains("\nBacktrace:\n"));
assert!(msg.contains("Caused by"));
assert!(ErrorCompat::backtrace(&err).is_some());
}
}

Some files were not shown because too many files have changed in this diff.