Compare commits

...

108 Commits

Author SHA1 Message Date
ShenJunkun
afac885c10 refactor: add schema column to the scripts table (#868) 2023-02-07 11:07:32 +08:00
Lei, HUANG
5d62e193bd feat: support multi regions on datanode (#653)
* wip: fix compile errors

* chore: move splitter to partition crate

* fix: remove useless variants in frontend errors

* chore: move more partition related code to partition manager

* fix: license header

* wip: move WriteSplitter to PartitionRuleManager

* fix: clippy warnings

* chore: remove useless error variant and format toml

* fix: cr comments

* chore: resolve conflicts

* chore: rebase develop

* fix: cr comments

* feat: support multi regions on datanode

* chore: rebase onto develop

* chore: rebase develop

* chore: rebase develop

* wip

* fix: compile errors

* feat: multi region

* fix: CR comments

* feat: allow stating existing regions without actually opening them

* fix: use table meta in manifest to recover region info
2023-02-07 10:46:18 +08:00
elijah
7d77913e88 chore: fix rfc typo (#952) 2023-02-07 08:47:06 +08:00
Lei, HUANG
3f45a0d337 docs: rfc for table compaction (#939)
* doc: rfc for table compaction

* docs: update compaction rfc
2023-02-06 22:15:53 +08:00
Zhizhen He
a1e97c990f chore: fix typo (#949) 2023-02-06 22:13:56 +08:00
Ning Sun
4ae63b7089 feat: Initial prepare statement support for Postgres protocol (#925)
* feat: add describe statement to query_engine

* feat: add ability to describe statement for sql handler

* refactor: return schema instead of wrapped ref

* test: resolve tests

* feat: add initial support for prepared statements

* feat: add parameter types to query statement

* test: fix parser test

* chore: add todo task

* fix: turn on integer_datetime for binary timestamp

* fix: format string using single quote

* test: add tests for prepared statement

* Apply suggestions from code review

Co-authored-by: LFC <bayinamine@gmail.com>

* refactor: use stream api from recordbatches

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-02-06 22:06:00 +08:00
Yingwen
b0925d94ed feat: Implement lock component for ProcedureManager (#937)
* feat: Add procedure meta

* feat: Implement lock for procedures

* chore: Allow dead code

* docs: Fix comment

* docs: Update docs of acquire_lock
2023-02-03 18:42:03 +08:00
Ruihang Xia
fc9276c79d feat: export promql service in server (#924)
* chore: some tiny typo/style fix

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: add promql server

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* works for mocked query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* integration test case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* expose promql api to our http server

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust router structure

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-03 08:28:56 +00:00
LFC
184ca78a4d revert: removed all "USE"s in sqlness tests introduced in #922 (#938) 2023-02-03 15:44:58 +08:00
discord9
ebbf1e43b5 feat: Query using sql inside python script (#884)
* feat: add weakref to QueryEngine in copr

* feat: sql query in python

* fix: make_class for Query Engine

* fix: use `Handle::try_current` instead

* fix: cache `Runtime`

* fix: lock file conflict

* fix: dedicated thread for blocking&fix test

* test: remove unnecessary print
2023-02-03 15:05:27 +08:00
dennis zhuang
54fe81dad9 docs: add dashboard to resources in README (#934) 2023-02-03 13:47:19 +08:00
LFC
af935671b2 feat: support "use" in GRPC requests (#922)
* feat: support "use catalog and schema"(behave like the "use" in MySQL) in GRPC requests

* fix: rebase develop
2023-02-02 20:02:56 +08:00
Yingwen
74adb077bc feat: Implement ProcedureStore (#927)
* test: Add more tests for ProcedureId

* feat: Add ObjectStore based state store

* feat: Implement ProcedureStore

* test: Add tests for ParsedKey

* refactor: Rename list to walk_top_down

* fix: Test ProcedureStore and handles unordered key values.

* style: Fix clippy

* docs: Update comment

* chore: Adjust log level for printing invalid key
2023-02-02 17:49:31 +08:00
Ruihang Xia
54c7a8be02 docs: document sqlness-runner usage (#931)
docs: paste doc from greptime-doc

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-02 15:56:51 +08:00
Ruihang Xia
ea5146762a chore(deps): bump promql-parser (#929)
* fix promql crate

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* migrate to new api

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix aggregator test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix styles

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-02-02 07:31:41 +00:00
Yingwen
788b5362a1 docs: Add procedure framework RFC (#836)
* docs: Add procedure framework RFC

* docs: Add dump, rollback and locking to procedure framework

* docs: Change ProcedureBuilder to ProcedureLoader

* docs: Add sub-procedures section

* docs: Add a link to explain idempotent

* docs: Add link to the tracking issue

* docs: Fix ProcedureLoader type alias

* docs: Update procedure API

* docs: Address CR comments

* docs: Update path and make the docs more clear
2023-02-02 11:28:56 +08:00
Lei, HUANG
028a69e349 refactor: move partition related code to partition manager (#906)
* wip: fix compile errors

* chore: move splitter to partition crate

* fix: remove useless variants in frontend errors

* chore: move more partition related code to partition manager

* fix: license header

* wip: move WriteSplitter to PartitionRuleManager

* fix: clippy warnings

* chore: remove useless error variant and format toml

* fix: cr comments

* chore: resolve conflicts

* chore: rebase develop

* fix: cr comments
2023-02-01 19:24:49 +08:00
elijah
9a30ba00c4 test: run sqlness test in distributed mode (#916)
* test: run sqlness test in distributed mode

* chore: fix ci test

* chore: improve the ci yaml

* chore: improve the code

* chore: fix conflicts
2023-01-31 15:00:11 +08:00
LFC
8149932bad feat: local catalog drop table (#913)
* feat: local catalog drop table

* Update src/catalog/src/local/manager.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* Update src/catalog/src/local/manager.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* fix: resolve PR comments

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-01-31 14:44:03 +08:00
Ruihang Xia
89e4084af4 build(ci): upload sqlness log files (#920)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-31 14:31:27 +08:00
Ning Sun
39df25a8f6 refactor: make postgres handler stateful (#914)
* feat: update pgwire to 0.8 and unify postgres handler

* fix: correct password message matching
2023-01-31 14:19:18 +08:00
Yingwen
b2ad0e972b feat: Define procedure related traits (#904)
* chore: Move uuid to workspace.dependencies

* feat: Define procedure related traits

* test: Add tests

* chore: Update imports

* feat: Submit ProcedureWithId to manager

* chore: pub ProcedureId::parse_str

* refactor: ProcedureId::parse_str returns Result

* chore: Address CR comments

Also implements FromStr for ProcedureId
2023-01-31 14:17:28 +08:00
shuiyisong
18e6740ac9 chore: add interceptor err in frontend::error::Error (#917)
* chore: add interceptor boxed err

* chore: rename

* chore: update err msg

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

---------

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-01-30 03:12:03 +00:00
Yun Chen
a7dc86ffe5 feat: oss storage support (#911)
* feat: add oss storage support

* fix: ci build format check

* fix: align OSS to Oss

* fix: cr comments

* fix: rename OSS to Oss in integration tests

* fix: clippy fix
2023-01-29 20:09:38 +08:00
Ruihang Xia
71482b38d7 feat: PromQL binary expr planner (#889)
* feat: PromQL binary expr planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* column & column test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* column & literal test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* mark literal-literal unsupported

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-29 17:02:11 +08:00
Ruihang Xia
dc9b5339bf feat: impl increase and irate/idelta in PromQL (#880)
* feat: impl increase and irate/idelta in PromQL

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add license header

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix styles

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add counter reset test case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-29 14:21:13 +08:00
Lei, HUANG
5e05c8f884 fix: TimestampRange::new_inclusive and strum dependency (#910)
fix: TimestampRange::new_inclusive; also fix strum dependency in common-error
2023-01-29 13:09:05 +08:00
shuiyisong
aafc26c788 feat: add mysql reject_no_database (#896)
* chore: update opensrv-mysql to main

* refactor: change mysql server struct

* feat: add option to reject no database mysql connection request

* chore: remove unused condition

* chore: rebase develop

* chore: make reject_no_database optional
2023-01-29 04:09:47 +00:00
LFC
64243e3a7d refactor: accommodate java flight client (#886)
* refactor: change how AffectedRows is carried in flight stream to accommodate Java Flight client

* fix: clippy
2023-01-29 11:27:13 +08:00
Ruihang Xia
36a13dafb7 build(deps): bump tokio to 1.24.2 (#900)
deps: bump tokio to 1.24.2

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-29 11:13:37 +08:00
shuiyisong
637837ae44 chore: return authorize err msg to mysql client (#905)
chore: refine authorize err msg to client
2023-01-29 10:53:36 +08:00
dependabot[bot]
ae8afd3711 build(deps): bump bzip2 from 0.4.3 to 0.4.4 (#898)
Bumps [bzip2](https://github.com/alexcrichton/bzip2-rs) from 0.4.3 to 0.4.4.
- [Release notes](https://github.com/alexcrichton/bzip2-rs/releases)
- [Commits](https://github.com/alexcrichton/bzip2-rs/commits/0.4.4)

---
updated-dependencies:
- dependency-name: bzip2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-28 21:08:03 +08:00
Yingwen
3db8f95169 ci: Skip status check on docs changed (#903)
* ci: Pass status check on docs changed

* ci: Remove coverage.yml
2023-01-28 16:37:47 +08:00
Lei, HUANG
43aefc5d74 feat: pruning sst files according to time range in filters (#887)
* 1. Reimplement Eq for Timestamp
2. Add and/or for GenericRange

* feat: extract time range from filters

* feat: select sst files according to time range

* fix: clippy

* fix: empty value in range

* fix: some cr comments

* fix: return optional timestamp range

* fix: cr comments
2023-01-28 15:16:41 +08:00
Ruihang Xia
b33937f48e test: sqlness test for alter table rename (#891)
* test: sqlness test for alter table rename

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change show create table to desc table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-28 11:35:38 +08:00
Ning Sun
9bc4c0d9c7 fix: mysql tests error (#897)
fix: mysql tests merge error
2023-01-20 16:15:16 +08:00
Ning Sun
302d7ec41b ci: use ubuntu 2004 to build weekly (#895)
feat: use ubuntu 2004 to build weekly
2023-01-20 08:36:41 +08:00
zyy17
cc46194f29 refactor: support TLS private key of RSA format and add the full test certificates generation (#885)
chore: add the full certificate generation

Signed-off-by: zyy17 <zyylsxm@gmail.com>

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-01-19 13:13:33 +08:00
elijah
5dfc24e4f6 fix: create table after rename table (#894)
* fix: create table after rename table

* chore: fix test
2023-01-19 13:13:09 +08:00
Zheming Li
4987136850 refactor: use rust-toolchain.toml to override toolchain (#882) 2023-01-19 13:11:36 +08:00
shuiyisong
6960739b3d feat: add authorize to UserProvider trait (#879)
* feat: add SchemaValidator

* feat: add schema validator to mysql shim

* chore: pass schema validator to http auth layer

* feat: add schema validator to http

* feat: add schema validator to pg

* feat: add schema validator to pg

* feat: add schema validator test

* chore: remove println in test

* chore: use !matches

* refactor: refac authenticate and authorize in http auth

* refactor: refac authenticate and authorize in http auth

* chore: typo

* chore: minor change

* refactor: merge schema_validator into user_provider

* chore: fix license issue

* refactor: change http query param from database to db

* chore: fix cr issue
2023-01-18 12:42:08 +08:00
fys
49d83abc0c chore: add an opaque error type in meta (#890)
add a boxed error type in meta
2023-01-18 11:30:54 +08:00
Ning Sun
ecb71f81be feat: add --rpc-hostname option to datanode for a persistent address to store in meta (#871)
* feat: add --rpc-hostname option

* fix: config file and hostname parsing

* Apply suggestions from code review

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-01-17 10:50:50 +08:00
fys
6f5639fccd feat: add load_based selector in meta (#874)
* fix: wrong error info

* add derive hash for StatKey

* add an attrs field in Context

* add load_based selector

* add license

* make Nodestat module public

* add meta startup config item about selector

* cr: remove attrs, add concrete type in context

* cr: change region_number type to Option<u64>

* cr: add comment in example.toml

* cr
2023-01-17 10:25:00 +08:00
Ruihang Xia
1e9d09099e feat: update promql-parser to commit fec3c8b (#881)
deps: update promql-parser to commit fec3c8b

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-16 17:55:44 +08:00
Lei, HUANG
daad38360f fix: impl total order for Timestamp (#878)
* 1. Reimplement Eq for Timestamp
2. Add and/or for GenericRange

* chore: add test for TimestampRange with diff unit

* chore: optimize split implementation

* fix: clippy

* fix: add fast path

* fix: CR comments
2023-01-16 17:37:30 +08:00
Ruihang Xia
bae0243959 test: sqlness test for insert default (#873)
* test: sqlness test for insert default

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* empty line

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more sqls

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test according to typo fix

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-13 20:22:55 +08:00
dennis zhuang
d162fbb598 fix: compile error in test (#872) 2023-01-13 15:12:49 +08:00
Zheming Li
0959c1d16b feat: support default value when inserting data (#854) 2023-01-13 14:49:05 +08:00
discord9
e428a84446 feat: use Python Script as UDF in SQL (#839)
* feat: reg PyScript as UDF

* refactor: use `ConcreteDataType` instead

* fix: accept `str` data type

* fix: allow binary to capture SIGINT

* test: add test for py udf

* Update src/servers/tests/py_script/mod.rs

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* style: clippy problem

* style: add newline

* chore: PR advices

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-01-13 14:35:03 +08:00
Ruihang Xia
58c37f588d feat: plan some aggregate expr in PromQL planner (#870)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-13 14:27:31 +08:00
dennis zhuang
d195a22f40 fix: parsing time index column option (#865)
* fix: parsing time index column option

* test: adds more cases for creating table

* chore: by CR comments

* feat: validate time index constraint in parser

* chore: improve error msg
2023-01-13 13:22:12 +08:00
elijah
6775c5be87 feat: support renaming table in the catalog manager (#824)
* feat: support renaming table in the catalog manager

* feat: implement rename table for local catalog manager

* chore: fmt code

* fix: update system catalog when renaming table in local catalog manager

* chore: add instance test for rename table

* chore: fix frontend test

* chore: fix comment

* chore: fix rename table test

* fix: renaming a table with an existing name

* fix: improve the system catalog's renaming process

* chore: improve the code

* chore: improve the comment

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: improve the code

* chore: fix tests

* chore: fix instance_test

* chore: improve the code

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-01-12 19:48:18 +08:00
Ruihang Xia
5e89f1ba4e ci: run tests on weekly release build (#869)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-12 19:10:23 +08:00
LFC
2664436194 feat: handle "USE <catalog>-<schema>" in MySQL (#857)
* feat: handle "USE <catalog>-<schema>" in MySQL

* fix: resolve PR comments
2023-01-12 11:12:11 +08:00
shuiyisong
b91c77b862 chore: add path check to http auth (#866)
* chore: add whitelist to http auth

* chore: use const instead of format every time
2023-01-12 10:20:18 +08:00
Lei, HUANG
4015dd8075 feat: record sst file time range in FileMeta (#860)
* feat: record sst file time range in FileMeta

* fix: clippy

* chore: add some log and doc
2023-01-11 21:16:07 +08:00
Yingwen
b39dbcbda9 fix: Fix deleting table with non null column (#849)
If the table has a non-null column, we need to use default value instead
of null to fill the value columns in the record batch for deletion.
Otherwise, we can't create the record batch since the schema check
doesn't allow null in the non-null column.
2023-01-11 20:06:46 +08:00
elijah
0e8411c2ff chore: add custom log level support for common_telemetry::init_default_ut_logging() (#864)
chore: improve default ut logging
2023-01-11 16:52:21 +08:00
Ruihang Xia
a9b42b436d feat: PromQL handler in query engine (#861)
* example promql test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* make the mock test works

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update planner test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippys

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add license header

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-11 11:31:07 +08:00
dennis zhuang
9428e70971 feat: integration test (#770)
* feat: add insert test cases

* fix: update results after rebase develop

* feat: supports unsigned integer types and big_insert test

* test: add insert_invalid test

* feat: supports time index constraint for bigint type

* chore: time index column at last

* test: adds more order, limit test

* fix: style

* feat: adds numbers table in standalone memory catalog mode

* feat: enable fail_fast and test_filter in sqlness

* feat: add more tests

* fix: test_filter

* test: add alter tests

* feat: supports if_not_exists when create database

* test: filter_push_down and catalog test

* fix: compile error

* fix: delete output file

* chore: ignore integration test output in git

* test: update all integration test results

* fix: by code review

* chore: revert .gitignore

* feat: sort the show tables/databases results

* chore: remove issue link

* fix: compile error and code format after rebase

* test: update all integration test results
2023-01-10 18:15:50 +08:00
Ruihang Xia
32d51947a4 refactor: adjust outermost error message (#859)
* refactor: adjust outermost error message

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* preserve tonic status code

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-10 17:28:27 +08:00
Ruihang Xia
5fb417ec7c feat: implement RangeManipulate (#843)
* basic impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl constructor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test printout

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* truncate tag columns

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* doc this plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix empty range

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* document behavior

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-10 16:27:09 +08:00
Lei, HUANG
90fcaa8487 feat: expose wal config (#852)
* feat: wal config

* fix: use human-readable string in wal config

* feat: copy ReadableSize and humanize size config items in toml files

* fix: clippy
2023-01-10 16:07:26 +08:00
Jiachun Feng
c609b193a1 feat: in memory storage on meta leader (#856)
* chore: minor change on election

* chore: refactor some from/into

* feat: add in_memory store for leader node

* refactor: make context mutable

* feat: add ResetableKvStore trait
2023-01-10 15:53:34 +08:00
Ruihang Xia
1305924423 ci: add sqlness job (#835)
* ci: add sqlness job

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness to official release

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* filter out backtrace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix error display

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* close once_cell feature gate

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-10 15:12:20 +08:00
Ning Sun
ea753b9ac0 ci: fix release task by correcting output dir (#853) 2023-01-10 14:37:35 +08:00
LFC
72f05a3137 feat: flight aboard (#840)
feat: replace old GRPC interface with Arrow Flight
2023-01-09 17:06:24 +08:00
fys
9e58311ecd feat: datanode support report number of regions to meta (#838)
* feat: dn support report number of regions to meta

* put the heartbeat batch to store

* cr: change region_number's parameter to &CatalogManagerRef

* cr: when dn failed to get region number, report region_num = -1 to meta
2023-01-09 16:13:53 +08:00
Ruihang Xia
2679faf911 refactor: move parse methods out of QueryEngine trait (#850)
* refactor: move parse methods out of QueryEngine trait

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix styles

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change style

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add license header

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test literal

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-09 15:44:20 +08:00
Lei, HUANG
fa54870197 fix: parquet native row group pruning support (#845)
* fix: parquet native row group pruning support

* fix: use filter_map instead of flat_map
2023-01-09 12:10:14 +08:00
Ning Sun
3988770266 feat: add catalog name resolution for postgres and http interface (#810)
* feat: add catalog name resolution for postgres and http interface

* test: add tests for catalog resolution on http and postgres

* feat: assign custom catalog for query

* chore: order code for better readability
2023-01-09 11:43:25 +08:00
Xuanwo
777a3182c5 feat: Bump OpenDAL to 0.24 for better seekable support (#847)
* deps: Bump OpenDAL to 0.24 for better seekable support

Signed-off-by: Xuanwo <github@xuanwo.io>

* fix: test

Signed-off-by: Xuanwo <github@xuanwo.io>
Co-authored-by: Lei, HUANG <mrsatangel@gmail.com>
2023-01-09 11:37:43 +08:00
Ning Sun
5b675f54a8 ci: add lto and strip to weekly build (#841) 2023-01-06 16:20:23 +08:00
Lei, HUANG
627d444723 fix: remove start from LogStore; fix error message (#837) 2023-01-06 12:21:00 +08:00
LFC
d1730a9577 refactor: simplify how Frontend instance handles other protocols (#831)
* refactor: make influxdb, opentsdb and prometheus read/write go through the GRPC interface, to unify and simplify the Frontend instance in either standalone or distributed mode
2023-01-06 12:19:38 +08:00
Jiachun Feng
ca7ed67dc5 feat: collect stats from heartbeats (#833)
* feat: collect stats from heartbeats

* chore: refactor and improve the keep_lease_handler

* Update src/meta-srv/src/handler/collect_stats_handler.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-01-06 11:30:23 +08:00
Yingwen
072e5f78b4 feat: Implement delete for table (#801)
* feat: Table default implementations for insert/alter return error

* feat: Implement delete for mito table

* docs: Fix comment
2023-01-05 20:03:40 +08:00
Lei, HUANG
8f5ecefc90 feat: use raft-engine crate to reimplement logstore (#799)
* chore: remove useless method in Entry trait, add proto definition for entry and namespace

* feat: add proto definition for raft-engine based logstore

* feat: introduce RaftEngineLogstore

* feat: impl read for raft engine log store

* feat: impl raft engine logstore

* feat: raft engine logstore start and stop

* feat: add purge bg task

* fix: license header

* fix: clippy

* fix: toml files

* feat: add some test cases

* fix: CR comments

* fix: CR comments

* fix: check namespace validity and state of logstore

* fix: CR comments; add config item to control sync/async flush per write

* fix: remove unused error variants

* fix: unit tests

* fix: use compare and exchange to stop logstore

* fix: CR comments
2023-01-05 17:18:51 +08:00
Ruihang Xia
afd9866709 feat: basic promql planner for single arg function call (#828)
* wip: draft planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle function args

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* a simple test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* list all operators that accept 1 instant vector as input

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update cargo lock

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* CR suggestions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* CR suggestions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change the way to handle metric name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-05 16:30:54 +08:00
LFC
89d5306740 feat: Impl Query and DDL functionality of Arrow Flight service for Frontend Instance (#827)
* feat: Implement Query and DDL functionality of Arrow Flight service for Frontend Instance
2023-01-05 14:17:57 +08:00
LFC
50cc0e9b51 feat: Impl Insert functionality of Arrow Flight service for Frontend Instance (#821)
* feat: Implement Insert functionality of Arrow Flight service for Frontend Instance

* fix: update license content

* Update src/common/grpc-expr/src/alter.rs

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

* fix: resolve PR comments

* fix: resolve PR comments

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-01-04 17:48:59 +08:00
dennis zhuang
7762873842 feat: endpoint and region config for s3 storage (#820)
* feat: adds serde default attribute to options

* feat: adds endpoint and region for s3 config
2023-01-04 11:24:24 +08:00
LFC
4aa24f0639 fix: test failure (#822) 2023-01-04 10:47:18 +08:00
LFC
f1b95e25a1 fix: remove boilerplate message from GRPC error output (#813)
* fix: remove boilerplate message from GRPC error output

* fix: rebase develop
2023-01-03 20:49:36 +08:00
Ning Sun
041cd422a1 refactor: do not call use upon mysql connection (#818) 2023-01-03 19:15:47 +08:00
Ruihang Xia
f907a93b97 feat: impl RangeArray based on DictionaryArray (#796)
* feat: impl RangeArray based on DictionaryArray

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippys

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply review suggs

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update license header

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* update doc to change i32 to u32

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-01-03 18:04:26 +08:00
elijah
a6eb213adf feat: implement rename table (#802)
* feat: support renaming tables in the mito table engine

* chore: add test for table engine

* chore: fix test
2023-01-03 17:37:27 +08:00
Ruihang Xia
5fcad7a175 fix: update license header for instant manipulate (#817)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-03 16:10:21 +08:00
Ruihang Xia
0566f812d3 refactor: remove macro define_opaque_error (#812)
* refactor: remove macro define_opaque_error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl BoxedError

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove open-region error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-03 15:50:27 +08:00
Ruihang Xia
334fd26bc5 feat: impl InstantManipulator for PromQL extension (#803)
* feat: impl InstantSelector for PromQL extension

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* make clippy happy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply review suggs

* rename manipulator to manipulate

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-01-03 15:32:12 +08:00
Lei, HUANG
8ffc078f88 fix: license header (#815) 2023-01-03 15:09:49 +08:00
shuiyisong
179ff728df refactor: merge servers::context into session (#811)
* refactor: move context to session

* chore: add unit test

* chore: add pg, opentsdb, influxdb and prometheus to channel enum
2022-12-31 00:00:04 +08:00
Yingwen
4d56d896ca feat: Implement delete for the storage engine (#777)
* docs: Fix incorrect comment of Vector::only_null

* feat: Add delete to WriteRequest and WriteBatch

* feat: Filter deleted rows

* fix: Fix panic after reopening engine

This is detected by adding a reopen step to the delete test for region.

* fix: Fix OpType::min_type()

* test: Add delete absent key test

* chore: Address CR comments
2022-12-30 17:12:18 +08:00
discord9
6fe205f3b5 chore: Update RustPython(With GC) (#809)
* chore: use newest RustPython

* chore: use Garbage collected RustPython Fork

* style: format toml
2022-12-30 16:55:43 +08:00
LFC
d13de0aeba refactor: remove AdminExpr, make DDL expressions as normal GRPC requests (#808)
* refactor: remove AdminExpr, make DDL expressions as normal GRPC requests
2022-12-30 16:47:45 +08:00
zyy17
11194f37d4 build: install ca-certificates in docker image building (#807)
refactor: install ca-certificates in docker image building

Signed-off-by: zyy17 <zyylsxm@gmail.com>

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2022-12-30 14:56:39 +08:00
LFC
de6803d253 feat: handle InsertRequest(formerly InsertExpr) in new Arrow Flight (#800)
feat: handle InsertRequest(formerly InsertExpr) in new Arrow Flight interface
2022-12-30 10:24:09 +08:00
Ruihang Xia
d0ef3aa9eb docs: align Jeremy Clarkson to the right side (#804)
docs: align Jeremy Clarkson to right side

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2022-12-29 16:55:38 +08:00
LFC
04df80e640 fix: further ease the restriction of executing SQLs in new GRPC interface (#797)
* fix: carry non-RecordBatch results in FlightData, to allow executing SQLs other than selection in the new GRPC interface

* Update src/datanode/src/instance/flight/stream.rs

Co-authored-by: Jiachun Feng <jiachun_feng@proton.me>
2022-12-28 16:43:21 +08:00
fys
76236646ef chore: extract some functions from "bootstrap_meta_srv" function (#795)
refactor: bootstrap of meta
2022-12-28 14:29:52 +08:00
LFC
26848f9f5c feat: Replace SelectResult with FlightData (#776)
* feat: replace SelectResult with FlightData

* Update tests/runner/src/env.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2022-12-28 10:22:46 +08:00
Ruihang Xia
90990584b7 feat: Prom SeriesNormalize plan (#787)
* feat: impl SeriesNormalize plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* some tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: add metrics

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add license header

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* make time index column a parameter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* precompute time index column index

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sign the TODO

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2022-12-27 22:59:53 +08:00
LFC
a14ec94653 fix: ease the restriction of the original "SelectExpr" (#794)
fix: ease the restriction of the original "SelectExpr" since we used to pass SQLs other than selection in the related GRPC interface
2022-12-27 16:50:12 +08:00
Ruihang Xia
26a3e93ca7 chore: util workspace deps in more places (#792)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2022-12-27 16:26:59 +08:00
elijah
3978931b8e feat: support parsing the RENAME TABLE statements in the parser (#780)
* feat: add parsing `alter rename table` syntax to the parser

* chore: fix clippy

* chore: add test for parser

* fix: add test for parsing RENAME keyword

* chore: remove unused code

* fix: parse table name object

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: fmt code

Co-authored-by: Yingwen <realevenyag@gmail.com>
2022-12-27 14:53:40 +08:00
shuiyisong
d589de63ef feat: pub auth_mysql & add auth boxed err (#788)
* chore: minor openup

* chore: open up auth_mysql and return ()

* chore: typo change

* chore: change according to ci

* chore: change according to ci

* chore: remove tonic status in auth error
2022-12-27 11:04:05 +08:00
LFC
7829e4a219 feat: Implement Arrow Flight Service (except gRPC server) for selection (#768)
* feat: Implement Arrow Flight Service (but not the GRPC server) for selection

Co-authored-by: luofucong <luofucong@greptime.com>
2022-12-26 16:41:10 +08:00
717 changed files with 26084 additions and 11835 deletions


@@ -2,3 +2,9 @@
GT_S3_BUCKET=S3 bucket
GT_S3_ACCESS_KEY_ID=S3 access key id
GT_S3_ACCESS_KEY=S3 secret access key
# Settings for oss test
GT_OSS_BUCKET=OSS bucket
GT_OSS_ACCESS_KEY_ID=OSS access key id
GT_OSS_ACCESS_KEY=OSS access key
GT_OSS_ENDPOINT=OSS endpoint


@@ -1,70 +0,0 @@
on:
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
paths-ignore:
- 'docs/**'
- 'config/**'
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
push:
branches:
- "main"
- "develop"
paths-ignore:
- 'docs/**'
- 'config/**'
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
workflow_dispatch:
name: Code coverage
env:
RUST_TOOLCHAIN: nightly-2022-12-20
jobs:
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: KyleMayes/install-llvm-action@v1
with:
version: "14.0"
- name: Install toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Install latest nextest release
uses: taiki-e/install-action@nextest
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Collect coverage data
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Codecov upload
uses: codecov/codecov-action@v2
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./lcov.info
flags: rust
fail_ci_if_error: true
verbose: true


@@ -7,6 +7,7 @@ on:
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
push:
branches:
- develop
@@ -110,6 +111,41 @@ jobs:
# GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
# UNITTEST_LOG_DIR: "__unittest_logs"
sqlness:
name: Sqlness Test
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Run etcd
run: |
ETCD_VER=v3.5.7
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p /tmp/etcd-download
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo cp -a /tmp/etcd-download/etcd* /usr/local/bin/
nohup etcd >/tmp/etcd.log 2>&1 &
- name: Run sqlness
run: cargo run --bin sqlness-runner && ls /tmp
- name: Upload sqlness logs
uses: actions/upload-artifact@v3
with:
name: sqlness-logs
path: /tmp/greptime-*.log
retention-days: 3
fmt:
name: Rustfmt
if: github.event.pull_request.draft == false
@@ -147,3 +183,45 @@ jobs:
uses: Swatinem/rust-cache@v2
- name: Run cargo clippy
run: cargo clippy --workspace --all-targets -- -D warnings -D clippy::print_stdout -D clippy::print_stderr
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: KyleMayes/install-llvm-action@v1
with:
version: "14.0"
- name: Install toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Install latest nextest release
uses: taiki-e/install-action@nextest
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Collect coverage data
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Codecov upload
uses: codecov/codecov-action@v2
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./lcov.info
flags: rust
fail_ci_if_error: true
verbose: true

.github/workflows/docs.yml (new file, 55 lines)

@@ -0,0 +1,55 @@
on:
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
paths:
- 'docs/**'
- 'config/**'
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
push:
branches:
- develop
- main
paths:
- 'docs/**'
- 'config/**'
- '**.md'
- '.dockerignore'
- 'docker/**'
- '.gitignore'
workflow_dispatch:
name: CI
# To pass the required status check, see:
# https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/troubleshooting-required-status-checks#handling-skipped-but-required-checks
jobs:
check:
name: Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- run: 'echo "No action required"'
fmt:
name: Rustfmt
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- run: 'echo "No action required"'
clippy:
name: Clippy
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- run: 'echo "No action required"'
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- run: 'echo "No action required"'


@@ -18,6 +18,8 @@ env:
# In the future, we can change SCHEDULED_PERIOD to nightly.
SCHEDULED_PERIOD: weekly
CARGO_PROFILE: weekly
jobs:
build:
name: Build binary
@@ -26,10 +28,10 @@ jobs:
# The file format is greptime-<os>-<arch>
include:
- arch: x86_64-unknown-linux-gnu
os: ubuntu-latest-16-cores
os: ubuntu-2004-16-cores
file: greptime-linux-amd64
- arch: aarch64-unknown-linux-gnu
os: ubuntu-latest-16-cores
os: ubuntu-2004-16-cores
file: greptime-linux-arm64
- arch: aarch64-apple-darwin
os: macos-latest
@@ -67,6 +69,25 @@ jobs:
run: |
brew install protobuf
- name: Install etcd for linux
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: |
ETCD_VER=v3.5.7
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p /tmp/etcd-download
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo cp -a /tmp/etcd-download/etcd* /usr/local/bin/
nohup etcd >/tmp/etcd.log 2>&1 &
- name: Install etcd for macos
if: contains(matrix.arch, 'darwin')
run: |
brew install etcd
brew services start etcd
- name: Install dependencies for linux
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: |
@@ -82,13 +103,16 @@ jobs:
- name: Output package versions
run: protoc --version ; cargo version ; rustc --version ; gcc --version ; g++ --version
- name: Run tests
run: make unit-test integration-test sqlness-test
- name: Run cargo build
run: cargo build ${{ matrix.opts }} --release --locked --target ${{ matrix.arch }}
run: cargo build ${{ matrix.opts }} --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }}
- name: Calculate checksum and rename binary
shell: bash
run: |
cd target/${{ matrix.arch }}/release
cd target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}
chmod +x greptime
tar -zcvf ${{ matrix.file }}.tgz greptime
echo $(shasum -a 256 ${{ matrix.file }}.tgz | cut -f1 -d' ') > ${{ matrix.file }}.sha256sum
@@ -97,13 +121,13 @@ jobs:
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.file }}
path: target/${{ matrix.arch }}/release/${{ matrix.file }}.tgz
path: target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}/${{ matrix.file }}.tgz
- name: Upload checksum of artifacts
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.file }}.sha256sum
path: target/${{ matrix.arch }}/release/${{ matrix.file }}.sha256sum
path: target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}/${{ matrix.file }}.sha256sum
release:
name: Release artifacts
needs: [build]


@@ -5,11 +5,11 @@ repos:
- id: conventional-pre-commit
stages: [commit-msg]
- repo: https://github.com/DevinR528/cargo-sort
rev: e6a795bc6b2c0958f9ef52af4863bbd7cc17238f
hooks:
- id: cargo-sort
args: ["--workspace"]
# - repo: https://github.com/DevinR528/cargo-sort
# rev: e6a795bc6b2c0958f9ef52af4863bbd7cc17238f
# hooks:
# - id: cargo-sort
# args: ["--workspace"]
- repo: https://github.com/doublify/pre-commit-rust
rev: v1.0

Cargo.lock (generated, 1303 lines)

File diff suppressed because it is too large.


@@ -12,6 +12,7 @@ members = [
"src/common/function-macro",
"src/common/grpc",
"src/common/grpc-expr",
"src/common/procedure",
"src/common/query",
"src/common/recordbatch",
"src/common/runtime",
@@ -26,6 +27,7 @@ members = [
"src/meta-srv",
"src/mito",
"src/object-store",
"src/partition",
"src/promql",
"src/query",
"src/script",
@@ -46,7 +48,10 @@ license = "Apache-2.0"
[workspace.dependencies]
arrow = "29.0"
arrow-flight = "29.0"
arrow-schema = { version = "29.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
# TODO(LFC): Use released Datafusion when it officially depends on Arrow 29.0
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
@@ -54,8 +59,24 @@ datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "4917235a398ae20145c87d20984e6367dc1a0c1e" }
futures = "0.3"
futures-util = "0.3"
parquet = "29.0"
paste = "1.0"
prost = "0.11"
serde = { version = "1.0", features = ["derive"] }
snafu = { version = "0.7", features = ["backtraces"] }
sqlparser = "0.28"
tokio = { version = "1.24.2", features = ["full"] }
tonic = "0.8"
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
[profile.release]
debug = true
[profile.weekly]
inherits = "release"
strip = true
lto = "thin"
debug = false
incremental = false


@@ -153,6 +153,9 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
- GreptimeDB [Developer
Guide](https://docs.greptime.com/developer-guide/overview.html)
### Dashboard
- [The dashboard UI for GreptimeDB](https://github.com/GreptimeTeam/dashboard)
### SDK
- [GreptimeDB Java
@@ -169,7 +172,7 @@ For future plans, check out [GreptimeDB roadmap](https://github.com/GreptimeTeam
## Community
Our core team is thrilled too see you participate in any ways you like. When you are stuck, try to
Our core team is thrilled to see you participate in any ways you like. When you are stuck, try to
ask for help by filling an issue with a detailed description of what you were trying to do
and what went wrong. If you have any questions or if you would like to get involved in our
community, please check out:


@@ -11,4 +11,4 @@ client = { path = "../src/client" }
indicatif = "0.17.1"
itertools = "0.10.5"
parquet.workspace = true
tokio = { version = "1.21", features = ["full"] }
tokio.workspace = true


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -25,15 +25,13 @@ use arrow::array::{ArrayRef, PrimitiveArray, StringArray, TimestampNanosecondArr
use arrow::datatypes::{DataType, Float64Type, Int64Type};
use arrow::record_batch::RecordBatch;
use clap::Parser;
use client::admin::Admin;
use client::api::v1::column::Values;
use client::api::v1::{Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertExpr, TableId};
use client::{Client, Database, Select};
use client::api::v1::{Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest, TableId};
use client::{Client, Database};
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
use tokio::task::JoinSet;
const DATABASE_NAME: &str = "greptime";
const CATALOG_NAME: &str = "greptime";
const SCHEMA_NAME: &str = "public";
const TABLE_NAME: &str = "nyc_taxi";
@@ -100,15 +98,14 @@ async fn write_data(
for record_batch in record_batch_reader {
let record_batch = record_batch.unwrap();
let (columns, row_count) = convert_record_batch(record_batch);
let insert_expr = InsertExpr {
schema_name: "public".to_string(),
let request = InsertRequest {
table_name: TABLE_NAME.to_string(),
region_number: 0,
columns,
row_count,
};
let now = Instant::now();
db.insert(insert_expr).await.unwrap();
db.insert(request).await.unwrap();
let elapsed = now.elapsed();
total_rpc_elapsed_ms += elapsed.as_millis();
progress_bar.inc(row_count as _);
@@ -362,13 +359,11 @@ fn query_set() -> HashMap<String, String> {
ret
}
async fn do_write(args: &Args, client: &Client) {
let admin = Admin::new("admin", client.clone());
async fn do_write(args: &Args, db: &Database) {
let mut file_list = get_file_list(args.path.clone().expect("Specify data path in argument"));
let mut write_jobs = JoinSet::new();
let create_table_result = admin.create(create_table_expr()).await;
let create_table_result = db.create(create_table_expr()).await;
println!("Create table result: {create_table_result:?}");
let progress_bar_style = ProgressStyle::with_template(
@@ -383,7 +378,7 @@ async fn do_write(args: &Args, client: &Client) {
let batch_size = args.batch_size;
for _ in 0..args.thread_num {
if let Some(path) = file_list.pop() {
let db = Database::new(DATABASE_NAME, client.clone());
let db = db.clone();
let mpb = multi_progress_bar.clone();
let pb_style = progress_bar_style.clone();
write_jobs.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
@@ -392,7 +387,7 @@ async fn do_write(args: &Args, client: &Client) {
while write_jobs.join_next().await.is_some() {
file_progress.inc(1);
if let Some(path) = file_list.pop() {
let db = Database::new(DATABASE_NAME, client.clone());
let db = db.clone();
let mpb = multi_progress_bar.clone();
let pb_style = progress_bar_style.clone();
write_jobs.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
@@ -405,7 +400,7 @@ async fn do_query(num_iter: usize, db: &Database) {
println!("Running query: {query}");
for i in 0..num_iter {
let now = Instant::now();
let _res = db.select(Select::Sql(query.clone())).await.unwrap();
let _res = db.sql(&query).await.unwrap();
let elapsed = now.elapsed();
println!(
"query {}, iteration {}: {}ms",
@@ -427,13 +422,13 @@ fn main() {
.unwrap()
.block_on(async {
let client = Client::with_urls(vec![&args.endpoint]);
let db = Database::with_client(client);
if !args.skip_write {
do_write(&args, &client).await;
do_write(&args, &db).await;
}
if !args.skip_read {
let db = Database::new(DATABASE_NAME, client.clone());
do_query(args.iter_num, &db).await;
}
})


@@ -1,12 +1,20 @@
node_id = 42
mode = 'distributed'
rpc_addr = '127.0.0.1:3001'
wal_dir = '/tmp/greptimedb/wal'
rpc_hostname = '127.0.0.1'
rpc_runtime_size = 8
mysql_addr = '127.0.0.1:4406'
mysql_runtime_size = 4
enable_memory_catalog = false
[wal]
dir = "/tmp/greptimedb/wal"
file_size = '1GB'
purge_interval = '10m'
purge_threshold = '50GB'
read_batch_size = 128
sync_write = false
[storage]
type = 'File'
data_dir = '/tmp/greptimedb/data/'


@@ -2,3 +2,5 @@ bind_addr = '127.0.0.1:3002'
server_addr = '127.0.0.1:3002'
store_addr = '127.0.0.1:2379'
datanode_lease_secs = 15
# selector: 'LeaseBased', 'LoadBased'
selector = 'LeaseBased'


@@ -1,12 +1,20 @@
node_id = 0
mode = 'standalone'
wal_dir = '/tmp/greptimedb/wal/'
enable_memory_catalog = false
[http_options]
addr = '127.0.0.1:4000'
timeout = "30s"
[wal]
dir = "/tmp/greptimedb/wal"
file_size = '1GB'
purge_interval = '10m'
purge_threshold = '50GB'
read_batch_size = 128
sync_write = false
[storage]
type = 'File'
data_dir = '/tmp/greptimedb/data/'


@@ -24,6 +24,8 @@ RUN cargo build --release
# TODO(zyy17): Maybe should use the more secure container image.
FROM ubuntu:22.04 as base
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install ca-certificates
WORKDIR /greptime
COPY --from=builder /greptimedb/target/release/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH


@@ -1,5 +1,7 @@
FROM ubuntu:22.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install ca-certificates
ARG TARGETARCH
ADD $TARGETARCH/greptime /greptime/bin/


@@ -0,0 +1,153 @@
---
Feature Name: "procedure-framework"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/286
Date: 2023-01-03
Author: "Yingwen <realevenyag@gmail.com>"
---
Procedure Framework
----------------------
# Summary
A framework for executing operations in a fault-tolerant manner.
# Motivation
Some operations in GreptimeDB require multiple steps to implement. For example, creating a table needs:
1. Check whether the table exists
2. Create the table in the table engine
   1. Create a region for the table in the storage engine
   2. Persist the metadata of the table to the table manifest
3. Add the table to the catalog manager
If the node dies or restarts in the middle of creating a table, it could leave the system in an inconsistent state. The procedure framework, inspired by [Apache HBase's ProcedureV2 framework](https://github.com/apache/hbase/blob/bfc9fc9605de638785435e404430a9408b99a8d0/src/main/asciidoc/_chapters/pv2.adoc) and [Apache Accumulo's FATE framework](https://accumulo.apache.org/docs/2.x/administration/fate), aims to provide a unified, fault-tolerant way to implement multi-step operations.
# Details
## Overview
The procedure framework consists of the following primary components:
- A `Procedure` represents an operation or a set of operations to be performed step-by-step
- `ProcedureManager`, the runtime that runs `Procedure`s. It executes the submitted procedures, stores procedures' states to the `ProcedureStore`, and restores procedures from the `ProcedureStore` when the database restarts.
- `ProcedureStore` is a storage layer for persisting the procedure state
## Procedures
The `ProcedureManager` keeps calling `Procedure::execute()` until the Procedure is done, so the operation of the Procedure should be [idempotent](https://developer.mozilla.org/en-US/docs/Glossary/Idempotent): it needs to be able to undo or replay a partial execution of itself.
```rust
trait Procedure {
fn execute(&mut self, ctx: &Context) -> Result<Status>;
fn dump(&self) -> Result<String>;
fn rollback(&self) -> Result<()>;
// other methods...
}
```
The `Status` is an enum that has the following variants:
```rust
enum Status {
Executing {
persist: bool,
},
Suspended {
subprocedures: Vec<ProcedureWithId>,
persist: bool,
},
Done,
}
```
A call to `execute()` can result in the following possibilities:
- `Ok(Status::Done)`: we are done
- `Ok(Status::Executing { .. })`: there are remaining steps to do
- `Ok(Status::Suspended { subprocedures, .. })`: execution is suspended and can be resumed later after the sub-procedures are done.
- `Err(e)`: error occurs during execution and the procedure is unable to proceed anymore.
Users need to assign a unique `ProcedureId` to the procedure and the procedure can get this id via the `Context`. The `ProcedureId` is typically a UUID.
```rust
struct Context {
id: ProcedureId,
// other fields ...
}
```
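To make the idempotency requirement concrete, here is a minimal, self-contained sketch. The types are simplified stand-ins for the interfaces above, and `CreateTableProcedure` with its boolean checkpoints is purely hypothetical:

```rust
// Simplified stand-ins for the framework types described above.
struct Context;
enum Status {
    Executing { persist: bool },
    Done,
}
type Result<T> = std::result::Result<T, String>;

// A hypothetical two-step procedure. Every call to `execute` first checks
// what has already happened, so replaying a partially finished run after a
// crash performs no step twice.
struct CreateTableProcedure {
    region_created: bool,
    metadata_persisted: bool,
}

impl CreateTableProcedure {
    fn execute(&mut self, _ctx: &Context) -> Result<Status> {
        if !self.region_created {
            // Create the region; creating an already existing region must be a no-op.
            self.region_created = true;
            return Ok(Status::Executing { persist: true });
        }
        if !self.metadata_persisted {
            // Persist the table metadata to the manifest.
            self.metadata_persisted = true;
            return Ok(Status::Executing { persist: true });
        }
        Ok(Status::Done)
    }
}
```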
The `ProcedureManager` calls `Procedure::dump()` to serialize the internal state of the procedure and writes to the `ProcedureStore`. The `Status` has a field `persist` to tell the `ProcedureManager` whether it needs persistence.
## Sub-procedures
A procedure may need to create some sub-procedures to process its subtasks. For example, creating a distributed table with multiple regions (partitions) needs to set up the regions in each node, thus the parent procedure should instantiate a sub-procedure for each region. The `ProcedureManager` makes sure that the parent procedure does not proceed till all sub-procedures are successfully finished.
The procedure can submit sub-procedures to the `ProcedureManager` by returning `Status::Suspended`. It needs to assign a procedure id to each sub-procedure manually so it can track their status.
```rust
struct ProcedureWithId {
id: ProcedureId,
procedure: BoxedProcedure,
}
```
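For illustration, a parent procedure might create one sub-procedure per region and suspend itself until they all finish. In the sketch below, `ProcedureId::random()` and `CreateRegionProcedure` are invented names (the RFC only says a `ProcedureId` is typically a UUID):

```rust
// Hypothetical parent step: spawn one sub-procedure per region, then suspend
// until the ProcedureManager reports all of them as done.
fn create_region_subprocedures(regions: &[u32]) -> Status {
    let subprocedures: Vec<ProcedureWithId> = regions
        .iter()
        .map(|region| ProcedureWithId {
            // Assumed constructor; a ProcedureId is typically a UUID.
            id: ProcedureId::random(),
            procedure: Box::new(CreateRegionProcedure { region: *region }),
        })
        .collect();
    Status::Suspended { subprocedures, persist: true }
}
```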
## ProcedureStore
We might need to provide two different ProcedureStore implementations:
- In standalone mode, it stores data on the local disk.
- In distributed mode, it stores data on the meta server or the object store service.
These implementations should share the same storage structure. They store each procedure's state in a unique path based on the procedure id:
```
Sample paths:
/procedures/{PROCEDURE_ID}/000001.step
/procedures/{PROCEDURE_ID}/000002.step
/procedures/{PROCEDURE_ID}/000003.commit
```
`ProcedureStore` behaves like a WAL. Before performing each step, the `ProcedureManager` can write the procedure's current state to the `ProcedureStore`, which stores it in a `.step` file. The `000001` in the path is a monotonically increasing sequence number of the step. After the procedure is done, the `ProcedureManager` puts a `.commit` file to indicate the procedure is finished (committed).
The `ProcedureManager` can remove the procedure's files once the procedure is done, but it needs to leave the `.commit` file as the last one to remove, in case of failure during removal.
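A small sketch of this key layout, assuming the sequence is rendered as a zero-padded six-digit number as in the sample paths above:
```rust
// Build the storage keys for a procedure's step and commit files.
fn step_path(procedure_id: &str, seq: u32) -> String {
    format!("/procedures/{procedure_id}/{seq:06}.step")
}

fn commit_path(procedure_id: &str, seq: u32) -> String {
    format!("/procedures/{procedure_id}/{seq:06}.commit")
}
```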
## ProcedureManager
`ProcedureManager` executes procedures submitted to it.
```rust
trait ProcedureManager {
    fn register_loader(&self, name: &str, loader: BoxedProcedureLoader) -> Result<()>;

    async fn submit(&self, procedure: ProcedureWithId) -> Result<()>;
}
```
It supports the following operations:
- Register a `ProcedureLoader` by the type name of the `Procedure`.
- Submit a `Procedure` to the manager and execute it.
When the `ProcedureManager` starts, it loads procedures from the `ProcedureStore` and restores them via the registered `ProcedureLoader`s. The manager stores the type name from `Procedure::type_name()` together with the data from `Procedure::dump()` in the `.step` file, and uses the type name to find a `ProcedureLoader` that can recover the procedure from its data.
```rust
type BoxedProcedureLoader = Box<dyn Fn(&str) -> Result<BoxedProcedure> + Send>;
```
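Recovery wiring might then look like the following sketch, which reuses the hypothetical `CreateTableProcedure` from the earlier example and assumes JSON-serialized state:
```rust
// Register a loader keyed by the procedure's type name; on restart the
// manager feeds each `.step` payload to the matching loader.
manager.register_loader(
    "CreateTableProcedure",
    Box::new(|data: &str| {
        let state: CreateTableState = serde_json::from_str(data).unwrap();
        Ok(Box::new(CreateTableProcedure { state }) as BoxedProcedure)
    }),
)?;
```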
## Rollback
The rollback step is supposed to clean up the resources created during the `execute()` step. When a procedure has failed, the `ProcedureManager` puts a `.rollback` file and calls the `Procedure::rollback()` method.
```text
/procedures/{PROCEDURE_ID}/000001.step
/procedures/{PROCEDURE_ID}/000002.rollback
```
Rollback is complicated to implement, so some procedures might not support rollback or only provide a best-effort approach.
## Locking
The `ProcedureManager` can provide a locking mechanism that gives a procedure read/write access to a database object, such as a table, so that other procedures are unable to modify the same table while the current one is executing.
Sub-procedures always inherit their parents' locks. The `ProcedureManager` only acquires locks for a procedure if its parent doesn't hold the lock.
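A minimal sketch of that rule, assuming a cloneable, comparable `ProcedureId` and ignoring read/write modes and waiter queues:
```rust
use std::collections::HashMap;

// Tracks which root procedure currently owns the lock on each table.
struct LockMap {
    owners: HashMap<String, ProcedureId>,
}

impl LockMap {
    /// Returns true if `id` may proceed: either an ancestor already holds
    /// the lock (sub-procedures inherit it) or the lock was free.
    fn acquire(&mut self, table: &str, id: ProcedureId, ancestors: &[ProcedureId]) -> bool {
        match self.owners.get(table) {
            Some(owner) if ancestors.contains(owner) => true,
            Some(_) => false, // held by an unrelated procedure: must wait
            None => {
                self.owners.insert(table.to_string(), id);
                true
            }
        }
    }
}
```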
# Drawbacks
The `Procedure` framework introduces additional complexity and overhead to our database.
- To execute a `Procedure`, we need to write to the `ProcedureStore` multiple times, which may slow down the server
- We need to rewrite the logic of creating/dropping/altering a table using the procedure framework
# Alternatives
Another approach is to tolerate failures during execution and allow users to retry the operation until it succeeds. But we would still need to:
- Make each step idempotent
- Record the status somewhere to check whether we are done


@@ -0,0 +1,92 @@
---
Feature Name: "table-compaction"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/930
Date: 2023-02-01
Author: "Lei, HUANG <mrsatangel@gmail.com>"
---
# Table Compaction
---
## Background
GreptimeDB uses an LSM-tree based storage engine that flushes memtables to SSTs for persistence.
But currently it only supports level 0, and SST files in level 0 are not guaranteed to contain only rows with disjoint time ranges.
That is to say, different SST files in level 0 may contain overlapping timestamps.
As a consequence, retrieving rows within some time range requires scanning all files, which brings a lot of IO overhead.
Also, just like in other LSMT engines, deletes and updates to existing primary keys are converted to new rows with delete/update marks and appended to SSTs on flush.
We need to merge the operations on the same primary keys so that we don't have to go through all SST files to find the final state of those keys.
## Goal
Implement a compaction framework to:
- maintain SSTs in timestamp order to accelerate queries with timestamp conditions;
- merge rows with the same primary key;
- purge expired SSTs;
- accommodate other tasks like data rollup and indexing.
## Overview
Table compaction involves the following components:
- Compaction scheduler: runs compaction tasks and limits the resources they consume;
- Compaction strategy: finds the SSTs to compact and determines the output files of a compaction;
- Compaction task: reads the rows from input SSTs and writes them to the output files.
## Implementation
### Compaction scheduler
`CompactionScheduler` is an executor that continuously polls compaction requests from a task queue and executes them.
```rust
#[async_trait]
pub trait CompactionScheduler {
    /// Schedules a compaction task.
    async fn schedule(&self, task: CompactionRequest) -> Result<()>;

    /// Stops compaction scheduler.
    async fn stop(&self) -> Result<()>;
}
```
### Compaction triggering
Currently, we can check whether tables need compaction whenever a memtable is flushed to an SST:
https://github.com/GreptimeTeam/greptimedb/blob/4015dd80752e1e6aaa3d7cacc3203cb67ed9be6d/src/storage/src/flush.rs#L245
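In other words, the flush path would only enqueue a request and return, roughly as sketched below; the field of `CompactionRequest` is an assumption for illustration.
```rust
// After a flush produces a new level-0 SST, hand the region to the
// scheduler instead of compacting inline, so flushes stay fast.
async fn on_flush_finished(
    scheduler: &dyn CompactionScheduler,
    region_id: u64,
) -> Result<()> {
    scheduler.schedule(CompactionRequest { region_id }).await
}
```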
### Compaction strategy
`CompactionStrategy` defines how to pick SSTs across all levels for compaction.
```rust
pub trait CompactionStrategy {
    fn pick(
        &self,
        ctx: CompactionContext,
        levels: &LevelMetas,
    ) -> Result<CompactionTask>;
}
```
The most suitable compaction strategy for time-series scenarios would be
a hybrid strategy that combines time-window compaction with size-tiered compaction, just like [Cassandra](https://cassandra.apache.org/doc/latest/cassandra/operating/compaction/twcs.html) and [ScyllaDB](https://docs.scylladb.com/stable/architecture/compaction/compaction-strategies.html#time-window-compaction-strategy-twcs) do.
We can first group SSTs in level n into buckets according to some predefined time window. Within each window,
SSTs are compacted in a size-tiered manner (find SSTs with similar sizes and compact them into level n+1).
SSTs from different time windows are never compacted together.
This strategy guarantees that SSTs in each level are mostly sorted in timestamp order, which boosts queries with
explicit timestamp conditions, while size-tiered compaction minimizes the impact on foreground writes.
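The two phases can be sketched as follows, under simplifying assumptions: `FileMeta` is a made-up stand-in for SST metadata, and a real picker would also consult level metadata and in-flight compactions.
```rust
use std::collections::HashMap;

// Hypothetical SST metadata: the max timestamp it covers and its size.
struct FileMeta {
    max_ts_secs: i64,
    size: u64,
}

// Phase 1: bucket SSTs by the time window containing their max timestamp;
// files from different windows are never compacted together.
fn group_by_window(files: &[FileMeta], window_secs: i64) -> HashMap<i64, Vec<&FileMeta>> {
    let mut buckets: HashMap<i64, Vec<&FileMeta>> = HashMap::new();
    for f in files {
        let bucket = f.max_ts_secs - f.max_ts_secs.rem_euclid(window_secs);
        buckets.entry(bucket).or_default().push(f);
    }
    buckets
}

// Phase 2: within one window, pick a run of files of similar size
// (size-tiered) to merge into the next level.
fn pick_similar_sized<'a>(
    mut files: Vec<&'a FileMeta>,
    ratio: f64,
    min_files: usize,
) -> Option<Vec<&'a FileMeta>> {
    files.sort_by_key(|f| f.size);
    files
        .windows(min_files)
        .find(|w| (w[w.len() - 1].size as f64) <= (w[0].size as f64) * ratio)
        .map(|w| w.to_vec())
}
```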
### Alternatives
Currently, GreptimeDB's storage engine [only supports two levels](https://github.com/GreptimeTeam/greptimedb/blob/43aefc5d74dfa73b7819cae77b7eb546d8534a41/src/storage/src/sst.rs#L32).
For level 0, we can start with a simple time-window based leveled compaction, which reads all SSTs in level 0,
aligns them to time windows of a fixed duration, and merges them with the SSTs in level 1 within the same time window
to ensure there is only one sorted run in level 1.


@@ -1 +0,0 @@
nightly-2022-12-20

rust-toolchain.toml Normal file

@@ -0,0 +1,2 @@
[toolchain]
channel = "nightly-2022-12-20"


@@ -5,13 +5,14 @@ edition.workspace = true
license.workspace = true
[dependencies]
arrow-flight.workspace = true
common-base = { path = "../common/base" }
common-error = { path = "../common/error" }
common-time = { path = "../common/time" }
datatypes = { path = "../datatypes" }
prost = "0.11"
prost.workspace = true
snafu = { version = "0.7", features = ["backtraces"] }
tonic = "0.8"
tonic.workspace = true
[build-dependencies]
tonic-build = "0.8"


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -12,16 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::path::PathBuf;
fn main() {
let default_out_dir = PathBuf::from(std::env::var("OUT_DIR").unwrap());
tonic_build::configure()
.file_descriptor_set_path(default_out_dir.join("greptime_fd.bin"))
.compile(
&[
"greptime/v1/select.proto",
"greptime/v1/greptime.proto",
"greptime/v1/database.proto",
"greptime/v1/meta/common.proto",
"greptime/v1/meta/heartbeat.proto",
"greptime/v1/meta/route.proto",


@@ -1,22 +0,0 @@
syntax = "proto3";
package greptime.v1;
message RequestHeader {
string tenant = 1;
}
message ExprHeader {
uint32 version = 1;
}
message ResultHeader {
uint32 version = 1;
uint32 code = 2;
string err_msg = 3;
}
message MutateResult {
uint32 success = 1;
uint32 failure = 2;
}


@@ -2,65 +2,51 @@ syntax = "proto3";
package greptime.v1;
import "greptime/v1/ddl.proto";
import "greptime/v1/column.proto";
import "greptime/v1/common.proto";
message DatabaseRequest {
string name = 1;
repeated ObjectExpr exprs = 2;
message RequestHeader {
// The `catalog` that is selected to be used in this request.
string catalog = 1;
// The `schema` that is selected to be used in this request.
string schema = 2;
}
message DatabaseResponse {
repeated ObjectResult results = 1;
}
message ObjectExpr {
ExprHeader header = 1;
oneof expr {
InsertExpr insert = 2;
SelectExpr select = 3;
UpdateExpr update = 4;
DeleteExpr delete = 5;
message GreptimeRequest {
RequestHeader header = 1;
oneof request {
InsertRequest insert = 2;
QueryRequest query = 3;
DdlRequest ddl = 4;
}
}
// TODO(fys): Only support sql now, and will support promql etc in the future
message SelectExpr {
oneof expr {
message QueryRequest {
oneof query {
string sql = 1;
bytes logical_plan = 2;
}
}
message InsertExpr {
string schema_name = 1;
string table_name = 2;
message InsertRequest {
string table_name = 1;
// Data is represented here.
repeated Column columns = 3;
// The row_count of all columns, which include null and non-null values.
//
// Note: the row_count of all columns in a InsertExpr must be same.
// Note: the row_count of all columns in a InsertRequest must be same.
uint32 row_count = 4;
// The region number of current insert request.
uint32 region_number = 5;
}
// TODO(jiachun)
message UpdateExpr {}
// TODO(jiachun)
message DeleteExpr {}
message ObjectResult {
ResultHeader header = 1;
oneof result {
SelectResult select = 2;
MutateResult mutate = 3;
}
message AffectedRows {
uint32 value = 1;
}
message SelectResult {
bytes raw_data = 1;
message FlightMetadata {
AffectedRows affected_rows = 1;
}


@@ -3,31 +3,16 @@ syntax = "proto3";
package greptime.v1;
import "greptime/v1/column.proto";
import "greptime/v1/common.proto";
message AdminRequest {
string name = 1;
repeated AdminExpr exprs = 2;
}
message AdminResponse {
repeated AdminResult results = 1;
}
message AdminExpr {
ExprHeader header = 1;
// "Data Definition Language" requests, that create, modify or delete the database structures but not the data.
// `DdlRequest` could carry more information than plain SQL, for example, the "table_id" in `CreateTableExpr`.
// So create a new DDL expr if you need it.
message DdlRequest {
oneof expr {
CreateDatabaseExpr create_database = 1;
CreateTableExpr create_table = 2;
AlterExpr alter = 3;
CreateDatabaseExpr create_database = 4;
DropTableExpr drop_table = 5;
}
}
message AdminResult {
ResultHeader header = 1;
oneof result {
MutateResult mutate = 2;
DropTableExpr drop_table = 4;
}
}
@@ -52,6 +37,7 @@ message AlterExpr {
oneof kind {
AddColumns add_columns = 4;
DropColumns drop_columns = 5;
RenameTable rename_table = 6;
}
}
@@ -64,6 +50,7 @@ message DropTableExpr {
message CreateDatabaseExpr {
//TODO(hl): maybe rename to schema_name?
string database_name = 1;
bool create_if_not_exists = 2;
}
message AddColumns {
@@ -74,6 +61,10 @@ message DropColumns {
repeated DropColumn drop_columns = 1;
}
message RenameTable {
string new_table_name = 1;
}
message AddColumn {
ColumnDef column_def = 1;
bool is_key = 2;


@@ -1,22 +0,0 @@
syntax = "proto3";
package greptime.v1;
import "greptime/v1/admin.proto";
import "greptime/v1/common.proto";
import "greptime/v1/database.proto";
service Greptime {
rpc Batch(BatchRequest) returns (BatchResponse) {}
}
message BatchRequest {
RequestHeader header = 1;
repeated AdminRequest admins = 2;
repeated DatabaseRequest databases = 3;
}
message BatchResponse {
repeated AdminResponse admins = 1;
repeated DatabaseResponse databases = 2;
}


@@ -26,7 +26,7 @@ message HeartbeatRequest {
TimeInterval report_interval = 4;
// Node stat
NodeStat node_stat = 5;
// Region stats in this node
// Region stats on this node
repeated RegionStat region_stats = 6;
// Follower nodes and stats, empty on follower nodes
repeated ReplicaStat replica_stats = 7;
@@ -34,19 +34,19 @@ message HeartbeatRequest {
message NodeStat {
// The read capacity units during this period
uint64 rcus = 1;
int64 rcus = 1;
// The write capacity units during this period
uint64 wcus = 2;
// Table number in this node
uint64 table_num = 3;
// Region number in this node
uint64 region_num = 4;
int64 wcus = 2;
// How many tables on this node
int64 table_num = 3;
// How many regions on this node
int64 region_num = 4;
double cpu_usage = 5;
double load = 6;
// Read disk I/O in the node
// Read disk IO on this node
double read_io_rate = 7;
// Write disk I/O in the node
// Write disk IO on this node
double write_io_rate = 8;
// Others
@@ -57,13 +57,13 @@ message RegionStat {
uint64 region_id = 1;
TableName table_name = 2;
// The read capacity units during this period
uint64 rcus = 3;
int64 rcus = 3;
// The write capacity units during this period
uint64 wcus = 4;
// Approximate region size
uint64 approximate_size = 5;
// Approximate number of rows
uint64 approximate_rows = 6;
int64 wcus = 4;
// Approximate bytes of this region
int64 approximate_bytes = 5;
// Approximate number of rows in this region
int64 approximate_rows = 6;
// Others
map<string, string> attrs = 100;


@@ -1,10 +0,0 @@
syntax = "proto3";
package greptime.v1.codec;
import "greptime/v1/column.proto";
message SelectResult {
repeated Column columns = 1;
uint32 row_count = 2;
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -15,7 +15,6 @@
pub mod error;
pub mod helper;
pub mod prometheus;
pub mod result;
pub mod serde;
pub mod v1;


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,203 +0,0 @@
// Copyright 2022 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_error::prelude::ErrorExt;
use crate::v1::codec::SelectResult;
use crate::v1::{
admin_result, object_result, AdminResult, MutateResult, ObjectResult, ResultHeader,
SelectResult as SelectResultRaw,
};
pub const PROTOCOL_VERSION: u32 = 1;
pub type Success = u32;
pub type Failure = u32;
#[derive(Default)]
pub struct ObjectResultBuilder {
version: u32,
code: u32,
err_msg: Option<String>,
result: Option<Body>,
}
pub enum Body {
Mutate((Success, Failure)),
Select(SelectResult),
}
impl ObjectResultBuilder {
pub fn new() -> Self {
Self {
version: PROTOCOL_VERSION,
..Default::default()
}
}
#[allow(dead_code)]
pub fn version(mut self, version: u32) -> Self {
self.version = version;
self
}
pub fn status_code(mut self, code: u32) -> Self {
self.code = code;
self
}
pub fn err_msg(mut self, err_msg: String) -> Self {
self.err_msg = Some(err_msg);
self
}
pub fn mutate_result(mut self, success: u32, failure: u32) -> Self {
self.result = Some(Body::Mutate((success, failure)));
self
}
pub fn select_result(mut self, select_result: SelectResult) -> Self {
self.result = Some(Body::Select(select_result));
self
}
pub fn build(self) -> ObjectResult {
let header = Some(ResultHeader {
version: self.version,
code: self.code,
err_msg: self.err_msg.unwrap_or_default(),
});
let result = match self.result {
Some(Body::Mutate((success, failure))) => {
Some(object_result::Result::Mutate(MutateResult {
success,
failure,
}))
}
Some(Body::Select(select)) => Some(object_result::Result::Select(SelectResultRaw {
raw_data: select.into(),
})),
None => None,
};
ObjectResult { header, result }
}
}
pub fn build_err_result(err: &impl ErrorExt) -> ObjectResult {
ObjectResultBuilder::new()
.status_code(err.status_code() as u32)
.err_msg(err.to_string())
.build()
}
#[derive(Debug)]
pub struct AdminResultBuilder {
version: u32,
code: u32,
err_msg: Option<String>,
mutate: Option<(Success, Failure)>,
}
impl AdminResultBuilder {
pub fn status_code(mut self, code: u32) -> Self {
self.code = code;
self
}
pub fn err_msg(mut self, err_msg: String) -> Self {
self.err_msg = Some(err_msg);
self
}
pub fn mutate_result(mut self, success: u32, failure: u32) -> Self {
self.mutate = Some((success, failure));
self
}
pub fn build(self) -> AdminResult {
let header = Some(ResultHeader {
version: self.version,
code: self.code,
err_msg: self.err_msg.unwrap_or_default(),
});
let result = if let Some((success, failure)) = self.mutate {
Some(admin_result::Result::Mutate(MutateResult {
success,
failure,
}))
} else {
None
};
AdminResult { header, result }
}
}
impl Default for AdminResultBuilder {
fn default() -> Self {
Self {
version: PROTOCOL_VERSION,
code: 0,
err_msg: None,
mutate: None,
}
}
}
#[cfg(test)]
mod tests {
use common_error::status_code::StatusCode;
use super::*;
use crate::error::UnknownColumnDataTypeSnafu;
use crate::v1::{object_result, MutateResult};
#[test]
fn test_object_result_builder() {
let obj_result = ObjectResultBuilder::new()
.version(101)
.status_code(500)
.err_msg("Failed to read this file!".to_string())
.mutate_result(100, 20)
.build();
let header = obj_result.header.unwrap();
assert_eq!(101, header.version);
assert_eq!(500, header.code);
assert_eq!("Failed to read this file!", header.err_msg);
let result = obj_result.result.unwrap();
assert_eq!(
object_result::Result::Mutate(MutateResult {
success: 100,
failure: 20,
}),
result
);
}
#[test]
fn test_build_err_result() {
let err = UnknownColumnDataTypeSnafu { datatype: 1 }.build();
let err_result = build_err_result(&err);
let header = err_result.header.unwrap();
let result = err_result.result;
assert_eq!(PROTOCOL_VERSION, header.version);
assert_eq!(StatusCode::InvalidArguments as u32, header.code);
assert!(result.is_none());
}
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -15,7 +15,6 @@
pub use prost::DecodeError;
use prost::Message;
use crate::v1::codec::SelectResult;
use crate::v1::meta::TableRouteValue;
macro_rules! impl_convert_with_bytes {
@@ -36,80 +35,4 @@ macro_rules! impl_convert_with_bytes {
};
}
impl_convert_with_bytes!(SelectResult);
impl_convert_with_bytes!(TableRouteValue);
#[cfg(test)]
mod tests {
use std::ops::Deref;
use crate::v1::codec::*;
use crate::v1::{column, Column};
const SEMANTIC_TAG: i32 = 0;
#[test]
fn test_convert_select_result() {
let select_result = mock_select_result();
let bytes: Vec<u8> = select_result.into();
let result: SelectResult = bytes.deref().try_into().unwrap();
assert_eq!(8, result.row_count);
assert_eq!(1, result.columns.len());
let column = &result.columns[0];
assert_eq!("foo", column.column_name);
assert_eq!(SEMANTIC_TAG, column.semantic_type);
assert_eq!(vec![1], column.null_mask);
assert_eq!(
vec![2, 3, 4, 5, 6, 7, 8],
column.values.as_ref().unwrap().i32_values
);
}
#[should_panic]
#[test]
fn test_convert_select_result_wrong() {
let select_result = mock_select_result();
let mut bytes: Vec<u8> = select_result.into();
// modify some bytes
bytes[0] = 0b1;
bytes[1] = 0b1;
let result: SelectResult = bytes.deref().try_into().unwrap();
assert_eq!(8, result.row_count);
assert_eq!(1, result.columns.len());
let column = &result.columns[0];
assert_eq!("foo", column.column_name);
assert_eq!(SEMANTIC_TAG, column.semantic_type);
assert_eq!(vec![1], column.null_mask);
assert_eq!(
vec![2, 3, 4, 5, 6, 7, 8],
column.values.as_ref().unwrap().i32_values
);
}
fn mock_select_result() -> SelectResult {
let values = column::Values {
i32_values: vec![2, 3, 4, 5, 6, 7, 8],
..Default::default()
};
let null_mask = vec![1];
let column = Column {
column_name: "foo".to_string(),
semantic_type: SEMANTIC_TAG,
values: Some(values),
null_mask,
..Default::default()
};
SelectResult {
columns: vec![column],
row_count: 8,
}
}
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -15,11 +15,5 @@
#![allow(clippy::derive_partial_eq_without_eq)]
tonic::include_proto!("greptime.v1");
pub const GREPTIME_FD_SET: &[u8] = tonic::include_file_descriptor_set!("greptime_fd");
pub mod codec {
tonic::include_proto!("greptime.v1.codec");
}
mod column_def;
pub mod meta;


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -7,7 +7,7 @@ license.workspace = true
[dependencies]
api = { path = "../api" }
arc-swap = "1.0"
async-stream = "0.3"
async-stream.workspace = true
async-trait = "0.1"
backoff = { version = "0.4", features = ["tokio"] }
common-catalog = { path = "../common/catalog" }
@@ -21,7 +21,7 @@ common-time = { path = "../common/time" }
datafusion.workspace = true
datatypes = { path = "../datatypes" }
futures = "0.3"
futures-util = "0.3"
futures-util.workspace = true
lazy_static = "1.4"
meta-client = { path = "../meta-client" }
regex = "1.6"
@@ -30,7 +30,7 @@ serde_json = "1.0"
snafu = { version = "0.7", features = ["backtraces"] }
storage = { path = "../storage" }
table = { path = "../table" }
tokio = { version = "1.18", features = ["full"] }
tokio.workspace = true
[dev-dependencies]
chrono = "0.4"
@@ -39,4 +39,4 @@ mito = { path = "../mito", features = ["test"] }
object-store = { path = "../object-store" }
storage = { path = "../storage" }
tempdir = "0.3"
tokio = { version = "1.0", features = ["full"] }
tokio.workspace = true


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -13,6 +13,7 @@
// limitations under the License.
use std::any::Any;
use std::fmt::Debug;
use common_error::ext::{BoxedError, ErrorExt};
use common_error::prelude::{Snafu, StatusCode};
@@ -21,6 +22,8 @@ use datatypes::prelude::ConcreteDataType;
use datatypes::schema::RawSchema;
use snafu::{Backtrace, ErrorCompat};
use crate::DeregisterTableRequest;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
@@ -86,27 +89,25 @@ pub enum Error {
backtrace: Backtrace,
},
#[snafu(display("Cannot find schema, schema info: {}", schema_info))]
#[snafu(display("Cannot find schema {} in catalog {}", schema, catalog))]
SchemaNotFound {
schema_info: String,
catalog: String,
schema: String,
backtrace: Backtrace,
},
#[snafu(display("Table `{}` already exists", table))]
TableExists { table: String, backtrace: Backtrace },
#[snafu(display("Table `{}` not exist", table))]
TableNotExist { table: String, backtrace: Backtrace },
#[snafu(display("Schema {} already exists", schema))]
SchemaExists {
schema: String,
backtrace: Backtrace,
},
#[snafu(display("Failed to register table"))]
RegisterTable {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Operation {} not implemented yet", operation))]
Unimplemented {
operation: String,
@@ -141,6 +142,17 @@ pub enum Error {
source: table::error::Error,
},
#[snafu(display(
"Failed to deregister table, request: {:?}, source: {}",
request,
source
))]
DeregisterTable {
request: DeregisterTableRequest,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Illegal catalog manager state: {}", msg))]
IllegalManagerState { backtrace: Backtrace, msg: String },
@@ -163,6 +175,12 @@ pub enum Error {
source: datatypes::error::Error,
},
#[snafu(display("Failure during SchemaProvider operation, source: {}", source))]
SchemaProviderOperation {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Failed to execute system catalog table scan, source: {}", source))]
SystemCatalogTableScanExec {
#[snafu(backtrace)]
@@ -174,15 +192,6 @@ pub enum Error {
source: common_catalog::error::Error,
},
#[snafu(display("IO error occurred while fetching catalog info, source: {}", source))]
Io {
backtrace: Backtrace,
source: std::io::Error,
},
#[snafu(display("Local and remote catalog data are inconsistent, msg: {}", msg))]
CatalogStateInconsistent { msg: String, backtrace: Backtrace },
#[snafu(display("Failed to perform metasrv operation, source: {}", source))]
MetaSrv {
#[snafu(backtrace)]
@@ -194,12 +203,6 @@ pub enum Error {
#[snafu(backtrace)]
source: datatypes::error::Error,
},
#[snafu(display("Catalog internal error: {}", source))]
Internal {
#[snafu(backtrace)]
source: BoxedError,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -212,35 +215,34 @@ impl ErrorExt for Error {
| Error::TableNotFound { .. }
| Error::IllegalManagerState { .. }
| Error::CatalogNotFound { .. }
| Error::InvalidEntryType { .. }
| Error::CatalogStateInconsistent { .. } => StatusCode::Unexpected,
| Error::InvalidEntryType { .. } => StatusCode::Unexpected,
Error::SystemCatalog { .. }
| Error::EmptyValue { .. }
| Error::ValueDeserialize { .. }
| Error::Io { .. } => StatusCode::StorageUnavailable,
| Error::ValueDeserialize { .. } => StatusCode::StorageUnavailable,
Error::RegisterTable { .. } | Error::SystemCatalogTypeMismatch { .. } => {
StatusCode::Internal
}
Error::SystemCatalogTypeMismatch { .. } => StatusCode::Internal,
Error::ReadSystemCatalog { source, .. } => source.status_code(),
Error::InvalidCatalogValue { source, .. } => source.status_code(),
Error::TableExists { .. } => StatusCode::TableAlreadyExists,
Error::TableNotExist { .. } => StatusCode::TableNotFound,
Error::SchemaExists { .. } => StatusCode::InvalidArguments,
Error::OpenSystemCatalog { source, .. }
| Error::CreateSystemCatalog { source, .. }
| Error::InsertCatalogRecord { source, .. }
| Error::OpenTable { source, .. }
| Error::CreateTable { source, .. } => source.status_code(),
| Error::CreateTable { source, .. }
| Error::DeregisterTable { source, .. } => source.status_code(),
Error::MetaSrv { source, .. } => source.status_code(),
Error::SystemCatalogTableScan { source } => source.status_code(),
Error::SystemCatalogTableScanExec { source } => source.status_code(),
Error::InvalidTableSchema { source, .. } => source.status_code(),
Error::InvalidTableInfoInCatalog { .. } => StatusCode::Unexpected,
Error::Internal { source, .. } => source.status_code(),
Error::SchemaProviderOperation { source } => source.status_code(),
Error::Unimplemented { .. } => StatusCode::Unsupported,
}
@@ -263,7 +265,6 @@ impl From<Error> for DataFusionError {
#[cfg(test)]
mod tests {
use common_error::mock::MockError;
use snafu::GenerateImplicitData;
use super::*;
@@ -284,22 +285,6 @@ mod tests {
InvalidKeySnafu { key: None }.build().status_code()
);
assert_eq!(
StatusCode::StorageUnavailable,
Error::OpenSystemCatalog {
source: table::error::Error::new(MockError::new(StatusCode::StorageUnavailable))
}
.status_code()
);
assert_eq!(
StatusCode::StorageUnavailable,
Error::CreateSystemCatalog {
source: table::error::Error::new(MockError::new(StatusCode::StorageUnavailable))
}
.status_code()
);
assert_eq!(
StatusCode::StorageUnavailable,
Error::SystemCatalog {


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -91,6 +91,7 @@ pub fn build_table_regional_prefix(
}
/// Table global info has only one key across all datanodes so it does not have `node_id` field.
#[derive(Clone)]
pub struct TableGlobalKey {
pub catalog_name: String,
pub schema_name: String,
@@ -131,7 +132,6 @@ impl TableGlobalKey {
pub struct TableGlobalValue {
/// Id of datanode that created the global table info kv. only for debugging.
pub node_id: u64,
// TODO(LFC): Maybe remove it?
/// Allocation of region ids across all datanodes.
pub regions_id_map: HashMap<u64, Vec<u32>>,
pub table_info: RawTableInfo,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,7 +19,7 @@ use std::fmt::{Debug, Formatter};
use std::sync::Arc;
use common_telemetry::info;
use snafu::ResultExt;
use snafu::{OptionExt, ResultExt};
use table::engine::{EngineContext, TableEngineRef};
use table::metadata::TableId;
use table::requests::CreateTableRequest;
@@ -97,6 +97,9 @@ pub trait CatalogManager: CatalogList {
/// schema registered.
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool>;
/// Rename a table to [RenameTableRequest::new_table_name], returns whether the table is renamed.
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool>;
/// Register a system table, should be called before starting the manager.
async fn register_system_table(&self, request: RegisterSystemTableRequest)
-> error::Result<()>;
@@ -142,7 +145,16 @@ impl Debug for RegisterTableRequest {
}
}
#[derive(Clone)]
#[derive(Debug, Clone)]
pub struct RenameTableRequest {
pub catalog: String,
pub schema: String,
pub table_name: String,
pub new_table_name: String,
pub table_id: TableId,
}
#[derive(Debug, Clone)]
pub struct DeregisterTableRequest {
pub catalog: String,
pub schema: String,
@@ -155,11 +167,6 @@ pub struct RegisterSchemaRequest {
pub schema: String,
}
/// Formats table fully-qualified name
pub fn format_full_table_name(catalog: &str, schema: &str, table: &str) -> String {
format!("{catalog}.{schema}.{table}")
}
pub trait CatalogProviderFactory {
fn create(&self, catalog_name: String) -> CatalogProviderRef;
}
@@ -186,8 +193,10 @@ pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
.create_table(&EngineContext::default(), req.create_table_request.clone())
.await
.with_context(|_| CreateTableSnafu {
table_info: format!(
"{catalog_name}.{schema_name}.{table_name}, id: {table_id}",
table_info: common_catalog::format_full_table_name(
catalog_name,
schema_name,
table_name,
),
})?;
manager
@@ -208,3 +217,38 @@ pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
}
Ok(())
}
/// The number of regions in the datanode node.
pub fn region_number(catalog_manager: &CatalogManagerRef) -> Result<u64> {
let mut region_number: u64 = 0;
for catalog_name in catalog_manager.catalog_names()? {
let catalog =
catalog_manager
.catalog(&catalog_name)?
.context(error::CatalogNotFoundSnafu {
catalog_name: &catalog_name,
})?;
for schema_name in catalog.schema_names()? {
let schema = catalog
.schema(&schema_name)?
.context(error::SchemaNotFoundSnafu {
catalog: &catalog_name,
schema: &schema_name,
})?;
for table_name in schema.table_names()? {
let table = schema
.table(&table_name)?
.context(error::TableNotFoundSnafu {
table_info: &table_name,
})?;
let region_numbers = &table.table_info().meta.region_numbers;
region_number += region_numbers.len() as u64;
}
}
}
Ok(region_number)
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -20,6 +20,7 @@ use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MIN_USER_TABLE_ID,
SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
};
use common_catalog::format_full_table_name;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_telemetry::{error, info};
use datatypes::prelude::ScalarVector;
@@ -34,9 +35,9 @@ use table::table::TableIdProvider;
use table::TableRef;
use crate::error::{
CatalogNotFoundSnafu, IllegalManagerStateSnafu, OpenTableSnafu, ReadSystemCatalogSnafu, Result,
SchemaExistsSnafu, SchemaNotFoundSnafu, SystemCatalogSnafu, SystemCatalogTypeMismatchSnafu,
TableExistsSnafu, TableNotFoundSnafu, UnimplementedSnafu,
self, CatalogNotFoundSnafu, IllegalManagerStateSnafu, OpenTableSnafu, ReadSystemCatalogSnafu,
Result, SchemaExistsSnafu, SchemaNotFoundSnafu, SystemCatalogSnafu,
SystemCatalogTypeMismatchSnafu, TableExistsSnafu, TableNotFoundSnafu,
};
use crate::local::memory::{MemoryCatalogManager, MemoryCatalogProvider, MemorySchemaProvider};
use crate::system::{
@@ -45,9 +46,9 @@ use crate::system::{
};
use crate::tables::SystemCatalog;
use crate::{
format_full_table_name, handle_system_table_request, CatalogList, CatalogManager,
CatalogProvider, CatalogProviderRef, DeregisterTableRequest, RegisterSchemaRequest,
RegisterSystemTableRequest, RegisterTableRequest, SchemaProvider, SchemaProviderRef,
handle_system_table_request, CatalogList, CatalogManager, CatalogProvider, CatalogProviderRef,
DeregisterTableRequest, RegisterSchemaRequest, RegisterSystemTableRequest,
RegisterTableRequest, RenameTableRequest, SchemaProvider, SchemaProviderRef,
};
/// A `CatalogManager` consists of a system catalog and a bunch of user catalogs.
@@ -241,7 +242,8 @@ impl LocalCatalogManager {
let schema = catalog
.schema(&t.schema_name)?
.context(SchemaNotFoundSnafu {
schema_info: format!("{}.{}", &t.catalog_name, &t.schema_name),
catalog: &t.catalog_name,
schema: &t.schema_name,
})?;
let context = EngineContext {};
@@ -250,7 +252,6 @@ impl LocalCatalogManager {
schema_name: t.schema_name.clone(),
table_name: t.table_name.clone(),
table_id: t.table_id,
region_numbers: vec![0],
};
let option = self
@@ -338,7 +339,8 @@ impl CatalogManager for LocalCatalogManager {
let schema = catalog
.schema(schema_name)?
.with_context(|| SchemaNotFoundSnafu {
schema_info: format!("{catalog_name}.{schema_name}"),
catalog: catalog_name,
schema: schema_name,
})?;
{
@@ -377,11 +379,75 @@ impl CatalogManager for LocalCatalogManager {
}
}
async fn deregister_table(&self, _request: DeregisterTableRequest) -> Result<bool> {
UnimplementedSnafu {
operation: "deregister table",
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool> {
let started = self.init_lock.lock().await;
ensure!(
*started,
IllegalManagerStateSnafu {
msg: "Catalog manager not started",
}
);
let catalog_name = &request.catalog;
let schema_name = &request.schema;
let catalog = self
.catalogs
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
// rename table in system catalog
self.system
.register_table(
catalog_name.clone(),
schema_name.clone(),
request.new_table_name.clone(),
request.table_id,
)
.await?;
Ok(schema
.rename_table(&request.table_name, request.new_table_name)
.is_ok())
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
{
let started = *self.init_lock.lock().await;
ensure!(started, IllegalManagerStateSnafu { msg: "not started" });
}
{
let _ = self.register_lock.lock().await;
let DeregisterTableRequest {
catalog,
schema,
table_name,
} = &request;
let table_id = self
.catalogs
.table(catalog, schema, table_name)?
.with_context(|| error::TableNotExistSnafu {
table: format!("{catalog}.{schema}.{table_name}"),
})?
.table_info()
.ident
.table_id;
if !self.system.deregister_table(&request, table_id).await? {
return Ok(false);
}
self.catalogs.deregister_table(request).await
}
.fail()
}
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool> {
@@ -452,7 +518,8 @@ impl CatalogManager for LocalCatalogManager {
let schema = catalog
.schema(schema_name)?
.with_context(|| SchemaNotFoundSnafu {
schema_info: format!("{catalog_name}.{schema_name}"),
catalog: catalog_name,
schema: schema_name,
})?;
schema.table(table_name)
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -20,16 +20,19 @@ use std::sync::{Arc, RwLock};
use common_catalog::consts::MIN_USER_TABLE_ID;
use common_telemetry::error;
use snafu::OptionExt;
use snafu::{ensure, OptionExt};
use table::metadata::TableId;
use table::table::TableIdProvider;
use table::TableRef;
use crate::error::{CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu};
use crate::error::{
self, CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu, TableNotFoundSnafu,
};
use crate::schema::SchemaProvider;
use crate::{
CatalogList, CatalogManager, CatalogProvider, CatalogProviderRef, DeregisterTableRequest,
RegisterSchemaRequest, RegisterSystemTableRequest, RegisterTableRequest, SchemaProviderRef,
RegisterSchemaRequest, RegisterSystemTableRequest, RegisterTableRequest, RenameTableRequest,
SchemaProviderRef,
};
/// Simple in-memory list of catalogs
@@ -81,13 +84,33 @@ impl CatalogManager for MemoryCatalogManager {
let schema = catalog
.schema(&request.schema)?
.with_context(|| SchemaNotFoundSnafu {
schema_info: format!("{}.{}", &request.catalog, &request.schema),
catalog: &request.catalog,
schema: &request.schema,
})?;
schema
.register_table(request.table_name, request.table)
.map(|v| v.is_none())
}
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.clone();
let schema = catalog
.schema(&request.schema)?
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
Ok(schema
.rename_table(&request.table_name, request.new_table_name)
.is_ok())
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
@@ -99,7 +122,8 @@ impl CatalogManager for MemoryCatalogManager {
let schema = catalog
.schema(&request.schema)?
.with_context(|| SchemaNotFoundSnafu {
schema_info: format!("{}.{}", &request.catalog, &request.schema),
catalog: &request.catalog,
schema: &request.schema,
})?;
schema
.deregister_table(&request.table_name)
@@ -226,6 +250,10 @@ impl CatalogProvider for MemoryCatalogProvider {
schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>> {
let mut schemas = self.schemas.write().unwrap();
ensure!(
!schemas.contains_key(&name),
error::SchemaExistsSnafu { schema: &name }
);
Ok(schemas.insert(name, schema))
}
@@ -288,6 +316,20 @@ impl SchemaProvider for MemorySchemaProvider {
}
}
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
let mut tables = self.tables.write().unwrap();
if tables.get(name).is_some() {
let table = tables.remove(name).unwrap();
tables.insert(new_name, table.clone());
Ok(table)
} else {
TableNotFoundSnafu {
table_info: name.to_string(),
}
.fail()?
}
}
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
let mut tables = self.tables.write().unwrap();
Ok(tables.remove(name))
@@ -352,6 +394,85 @@ mod tests {
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}
#[tokio::test]
async fn test_mem_provider_rename_table() {
let provider = MemorySchemaProvider::new();
let table_name = "num";
assert!(!provider.table_exist(table_name).unwrap());
let test_table: TableRef = Arc::new(NumbersTable::default());
// register test table
assert!(provider
.register_table(table_name.to_string(), test_table.clone())
.unwrap()
.is_none());
assert!(provider.table_exist(table_name).unwrap());
// rename test table
let new_table_name = "numbers";
provider
.rename_table(table_name, new_table_name.to_string())
.unwrap();
// test old table name not exist
assert!(!provider.table_exist(table_name).unwrap());
assert!(provider.deregister_table(table_name).unwrap().is_none());
// test new table name exists
assert!(provider.table_exist(new_table_name).unwrap());
let registered_table = provider.table(new_table_name).unwrap().unwrap();
assert_eq!(
registered_table.table_info().ident.table_id,
test_table.table_info().ident.table_id
);
let other_table = Arc::new(NumbersTable::new(2));
let result = provider.register_table(new_table_name.to_string(), other_table);
let err = result.err().unwrap();
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}
#[tokio::test]
async fn test_catalog_rename_table() {
let catalog = MemoryCatalogManager::default();
let schema = catalog
.schema(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.unwrap()
.unwrap();
// register table
let table_name = "num";
let table_id = 2333;
let table: TableRef = Arc::new(NumbersTable::new(table_id));
let register_table_req = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
table_id,
table,
};
assert!(catalog.register_table(register_table_req).await.unwrap());
assert!(schema.table_exist(table_name).unwrap());
// rename table
let new_table_name = "numbers";
let rename_table_req = RenameTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
new_table_name: new_table_name.to_string(),
table_id,
};
assert!(catalog.rename_table(rename_table_req).await.unwrap());
assert!(!schema.table_exist(table_name).unwrap());
assert!(schema.table_exist(new_table_name).unwrap());
let registered_table = catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
.unwrap()
.unwrap();
assert_eq!(registered_table.table_info().ident.table_id, table_id);
}
#[test]
pub fn test_register_if_absent() {
let list = MemoryCatalogManager::default();


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -43,7 +43,7 @@ use crate::remote::{Kv, KvBackendRef};
use crate::{
handle_system_table_request, CatalogList, CatalogManager, CatalogProvider, CatalogProviderRef,
DeregisterTableRequest, RegisterSchemaRequest, RegisterSystemTableRequest,
RegisterTableRequest, SchemaProvider, SchemaProviderRef,
RegisterTableRequest, RenameTableRequest, SchemaProvider, SchemaProviderRef,
};
/// Catalog manager based on metasrv.
@@ -324,7 +324,6 @@ impl RemoteCatalogManager {
schema_name: schema_name.clone(),
table_name: table_name.clone(),
table_id,
region_numbers: region_numbers.clone(),
};
match self
.engine
@@ -418,7 +417,8 @@ impl CatalogManager for RemoteCatalogManager {
catalog_provider
.schema(&schema_name)?
.with_context(|| SchemaNotFoundSnafu {
schema_info: format!("{}.{}", &catalog_name, &schema_name),
catalog: &catalog_name,
schema: &schema_name,
})?;
if schema_provider.table_exist(&request.table_name)? {
return TableExistsSnafu {
@@ -448,6 +448,13 @@ impl CatalogManager for RemoteCatalogManager {
Ok(true)
}
async fn rename_table(&self, _request: RenameTableRequest) -> Result<bool> {
UnimplementedSnafu {
operation: "rename table",
}
.fail()
}
async fn register_system_table(&self, request: RegisterSystemTableRequest) -> Result<()> {
let mut requests = self.system_table_requests.lock().await;
requests.push(request);
@@ -474,7 +481,8 @@ impl CatalogManager for RemoteCatalogManager {
let schema = catalog
.schema(schema_name)?
.with_context(|| SchemaNotFoundSnafu {
schema_info: format!("{catalog_name}.{schema_name}"),
catalog: catalog_name,
schema: schema_name,
})?;
schema.table(table_name)
}
@@ -737,6 +745,13 @@ impl SchemaProvider for RemoteSchemaProvider {
prev
}
fn rename_table(&self, _name: &str, _new_name: String) -> Result<TableRef> {
UnimplementedSnafu {
operation: "rename table",
}
.fail()
}
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
let table_name = name.to_string();
let table_key = self.build_regional_table_key(&table_name).to_string();


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -35,6 +35,10 @@ pub trait SchemaProvider: Sync + Send {
/// If a table of the same name existed before, it returns "Table already exists" error.
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>>;
/// If supported by the implementation, renames an existing table from this schema and returns it.
/// If no table of that name exists, returns "Table not found" error.
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef>;
/// If supported by the implementation, removes an existing table from this schema and returns it.
/// If no table of that name exists, returns Ok(None).
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>>;


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -25,29 +25,27 @@ use common_query::physical_plan::{PhysicalPlanRef, SessionContext};
use common_recordbatch::SendableRecordBatchStream;
use common_telemetry::debug;
use common_time::util;
use datatypes::prelude::{ConcreteDataType, ScalarVector};
use datatypes::prelude::{ConcreteDataType, ScalarVector, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaBuilder, SchemaRef};
use datatypes::vectors::{BinaryVector, TimestampMillisecondVector, UInt8Vector};
use serde::{Deserialize, Serialize};
use snafu::{ensure, OptionExt, ResultExt};
use table::engine::{EngineContext, TableEngineRef};
use table::metadata::{TableId, TableInfoRef};
use table::requests::{CreateTableRequest, InsertRequest, OpenTableRequest};
use table::requests::{CreateTableRequest, DeleteRequest, InsertRequest, OpenTableRequest};
use table::{Table, TableRef};
use crate::error::{
self, CreateSystemCatalogSnafu, EmptyValueSnafu, Error, InvalidEntryTypeSnafu, InvalidKeySnafu,
OpenSystemCatalogSnafu, Result, ValueDeserializeSnafu,
};
use crate::DeregisterTableRequest;
pub const ENTRY_TYPE_INDEX: usize = 0;
pub const KEY_INDEX: usize = 1;
pub const VALUE_INDEX: usize = 3;
pub struct SystemCatalogTable {
table_info: TableInfoRef,
pub table: TableRef,
}
pub struct SystemCatalogTable(TableRef);
#[async_trait::async_trait]
impl Table for SystemCatalogTable {
@@ -56,25 +54,29 @@ impl Table for SystemCatalogTable {
}
fn schema(&self) -> SchemaRef {
self.table_info.meta.schema.clone()
self.0.schema()
}
async fn scan(
&self,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
_limit: Option<usize>,
projection: Option<&Vec<usize>>,
filters: &[Expr],
limit: Option<usize>,
) -> table::Result<PhysicalPlanRef> {
panic!("System catalog table does not support scan!")
self.0.scan(projection, filters, limit).await
}
/// Insert values into table.
async fn insert(&self, request: InsertRequest) -> table::error::Result<usize> {
self.table.insert(request).await
self.0.insert(request).await
}
fn table_info(&self) -> TableInfoRef {
self.table_info.clone()
self.0.table_info()
}
async fn delete(&self, request: DeleteRequest) -> table::Result<usize> {
self.0.delete(request).await
}
}
@@ -85,7 +87,6 @@ impl SystemCatalogTable {
schema_name: INFORMATION_SCHEMA_NAME.to_string(),
table_name: SYSTEM_CATALOG_TABLE_NAME.to_string(),
table_id: SYSTEM_CATALOG_TABLE_ID,
region_numbers: vec![0],
};
let schema = Arc::new(build_system_catalog_schema());
let ctx = EngineContext::default();
@@ -95,10 +96,7 @@ impl SystemCatalogTable {
.await
.context(OpenSystemCatalogSnafu)?
{
Ok(Self {
table_info: table.table_info(),
table,
})
Ok(Self(table))
} else {
// system catalog table is not yet created, try to create
let request = CreateTableRequest {
@@ -118,8 +116,7 @@ impl SystemCatalogTable {
.create_table(&ctx, request)
.await
.context(CreateSystemCatalogSnafu)?;
let table_info = table.table_info();
Ok(Self { table, table_info })
Ok(Self(table))
}
}
@@ -128,7 +125,6 @@ impl SystemCatalogTable {
let full_projection = None;
let ctx = SessionContext::new();
let scan = self
.table
.scan(full_projection, &[], None)
.await
.context(error::SystemCatalogTableScanSnafu)?;
@@ -186,16 +182,56 @@ fn build_system_catalog_schema() -> Schema {
SchemaBuilder::try_from(cols).unwrap().build().unwrap()
}
pub fn build_table_insert_request(full_table_name: String, table_id: TableId) -> InsertRequest {
/// Formats key string for table entry in system catalog
#[inline]
pub fn format_table_entry_key(catalog: &str, schema: &str, table_id: TableId) -> String {
format!("{catalog}.{schema}.{table_id}")
}
pub fn build_table_insert_request(
catalog: String,
schema: String,
table_name: String,
table_id: TableId,
) -> InsertRequest {
let entry_key = format_table_entry_key(&catalog, &schema, table_id);
build_insert_request(
EntryType::Table,
full_table_name.as_bytes(),
serde_json::to_string(&TableEntryValue { table_id })
entry_key.as_bytes(),
serde_json::to_string(&TableEntryValue { table_name })
.unwrap()
.as_bytes(),
)
}
pub(crate) fn build_table_deletion_request(
request: &DeregisterTableRequest,
table_id: TableId,
) -> DeleteRequest {
let table_key = format_table_entry_key(&request.catalog, &request.schema, table_id);
DeleteRequest {
key_column_values: build_primary_key_columns(EntryType::Table, table_key.as_bytes()),
}
}
fn build_primary_key_columns(entry_type: EntryType, key: &[u8]) -> HashMap<String, VectorRef> {
let mut m = HashMap::with_capacity(3);
m.insert(
"entry_type".to_string(),
Arc::new(UInt8Vector::from_slice(&[entry_type as u8])) as _,
);
m.insert(
"key".to_string(),
Arc::new(BinaryVector::from_slice(&[key])) as _,
);
// Timestamp in the key part is intentionally set to 0
m.insert(
"timestamp".to_string(),
Arc::new(TimestampMillisecondVector::from_slice(&[0])) as _,
);
m
}
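
System catalog rows are addressed by the (entry_type, key, timestamp) primary key, with the timestamp pinned to 0 so each entry has a single addressable version; deletion therefore only needs these three columns. A minimal sketch of what build_table_deletion_request assembles, with an illustrative key:

// Sketch: a deletion request carries only the three primary key columns.
let request = DeleteRequest {
    key_column_values: build_primary_key_columns(EntryType::Table, b"greptime.public.42"),
};
assert_eq!(request.key_column_values.len(), 3);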
pub fn build_schema_insert_request(catalog_name: String, schema_name: String) -> InsertRequest {
let full_schema_name = format!("{catalog_name}.{schema_name}");
build_insert_request(
@@ -208,22 +244,10 @@ pub fn build_schema_insert_request(catalog_name: String, schema_name: String) ->
}
pub fn build_insert_request(entry_type: EntryType, key: &[u8], value: &[u8]) -> InsertRequest {
let primary_key_columns = build_primary_key_columns(entry_type, key);
let mut columns_values = HashMap::with_capacity(6);
columns_values.insert(
"entry_type".to_string(),
Arc::new(UInt8Vector::from_slice(&[entry_type as u8])) as _,
);
columns_values.insert(
"key".to_string(),
Arc::new(BinaryVector::from_slice(&[key])) as _,
);
// Timestamp in the key part is intentionally set to 0
columns_values.insert(
"timestamp".to_string(),
Arc::new(TimestampMillisecondVector::from_slice(&[0])) as _,
);
columns_values.extend(primary_key_columns.into_iter());
columns_values.insert(
"value".to_string(),
@@ -246,6 +270,7 @@ pub fn build_insert_request(entry_type: EntryType, key: &[u8], value: &[u8]) ->
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: SYSTEM_CATALOG_TABLE_NAME.to_string(),
columns_values,
region_number: 0, // system catalog table has only one region
}
}
@@ -285,8 +310,8 @@ pub fn decode_system_catalog(
}
EntryType::Table => {
// As for table entry, the key is a string with format: `<catalog_name>.<schema_name>.<table_name>`
// and the value is a JSON string with format: `{"table_id": <table_id>}`
// As for table entry, the key is a string with format: `<catalog_name>.<schema_name>.<table_id>`
// and the value is a JSON string with format: `{"table_name": <table_name>}`
let table_parts = key.split('.').collect::<Vec<_>>();
ensure!(
table_parts.len() >= 3,
@@ -298,11 +323,12 @@ pub fn decode_system_catalog(
debug!("Table meta value: {}", String::from_utf8_lossy(value));
let table_meta: TableEntryValue =
serde_json::from_slice(value).context(ValueDeserializeSnafu)?;
let table_id = table_parts[2].parse::<TableId>().unwrap();
Ok(Entry::Table(TableEntry {
catalog_name: table_parts[0].to_string(),
schema_name: table_parts[1].to_string(),
table_name: table_parts[2].to_string(),
table_id: table_meta.table_id,
table_name: table_meta.table_name,
table_id,
}))
}
}
@@ -362,12 +388,14 @@ pub struct TableEntry {
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
pub struct TableEntryValue {
pub table_id: TableId,
pub table_name: String,
}
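
Since the table name moved from the key into the value, a table entry's payload is now a one-field JSON object. A round-trip sketch, consistent with the decode test below:

// Sketch: the value serializes as {"table_name": ...} and deserializes back.
let value = TableEntryValue { table_name: "some_table".to_string() };
let json = serde_json::to_string(&value).unwrap();
assert_eq!(json, r#"{"table_name":"some_table"}"#);
assert_eq!(serde_json::from_str::<TableEntryValue>(&json).unwrap(), value);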
#[cfg(test)]
mod tests {
use log_store::fs::noop::NoopLogStore;
use common_recordbatch::RecordBatches;
use datatypes::value::Value;
use log_store::NoopLogStore;
use mito::config::EngineConfig;
use mito::engine::MitoEngine;
use object_store::ObjectStore;
@@ -415,8 +443,8 @@ mod tests {
pub fn test_decode_table() {
let entry = decode_system_catalog(
Some(EntryType::Table as u8),
Some("some_catalog.some_schema.some_table".as_bytes()),
Some("{\"table_id\":42}".as_bytes()),
Some("some_catalog.some_schema.42".as_bytes()),
Some("{\"table_name\":\"some_table\"}".as_bytes()),
)
.unwrap();
@@ -435,7 +463,7 @@ mod tests {
pub fn test_decode_mismatch() {
decode_system_catalog(
Some(EntryType::Table as u8),
Some("some_catalog.some_schema.some_table".as_bytes()),
Some("some_catalog.some_schema.42".as_bytes()),
None,
)
.unwrap();
@@ -487,4 +515,53 @@ mod tests {
assert_eq!(SYSTEM_CATALOG_NAME, info.catalog_name);
assert_eq!(INFORMATION_SCHEMA_NAME, info.schema_name);
}
#[tokio::test]
async fn test_system_catalog_table_records() {
let (_, table_engine) = prepare_table_engine().await;
let catalog_table = SystemCatalogTable::new(table_engine).await.unwrap();
let table_insertion = build_table_insert_request(
DEFAULT_CATALOG_NAME.to_string(),
DEFAULT_SCHEMA_NAME.to_string(),
"my_table".to_string(),
1,
);
let result = catalog_table.insert(table_insertion).await.unwrap();
assert_eq!(result, 1);
let records = catalog_table.records().await.unwrap();
let mut batches = RecordBatches::try_collect(records).await.unwrap().take();
assert_eq!(batches.len(), 1);
let batch = batches.remove(0);
assert_eq!(batch.num_rows(), 1);
let row = batch.rows().next().unwrap();
let Value::UInt8(entry_type) = row[0] else { unreachable!() };
let Value::Binary(key) = row[1].clone() else { unreachable!() };
let Value::Binary(value) = row[3].clone() else { unreachable!() };
let entry = decode_system_catalog(Some(entry_type), Some(&*key), Some(&*value)).unwrap();
let expected = Entry::Table(TableEntry {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "my_table".to_string(),
table_id: 1,
});
assert_eq!(entry, expected);
let table_deletion = build_table_deletion_request(
&DeregisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "my_table".to_string(),
},
1,
);
let result = catalog_table.delete(table_deletion).await.unwrap();
assert_eq!(result, 1);
let records = catalog_table.records().await.unwrap();
let batches = RecordBatches::try_collect(records).await.unwrap().take();
assert_eq!(batches.len(), 0);
}
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -38,10 +38,13 @@ use table::metadata::{TableId, TableInfoRef};
use table::table::scan::SimpleTableScan;
use table::{Table, TableRef};
use crate::error::{Error, InsertCatalogRecordSnafu};
use crate::system::{build_schema_insert_request, build_table_insert_request, SystemCatalogTable};
use crate::error::{self, Error, InsertCatalogRecordSnafu, Result as CatalogResult};
use crate::system::{
build_schema_insert_request, build_table_deletion_request, build_table_insert_request,
SystemCatalogTable,
};
use crate::{
format_full_table_name, CatalogListRef, CatalogProvider, SchemaProvider, SchemaProviderRef,
CatalogListRef, CatalogProvider, DeregisterTableRequest, SchemaProvider, SchemaProviderRef,
};
/// Tables holds all tables created by user.
@@ -233,6 +236,10 @@ impl SchemaProvider for InformationSchema {
panic!("System catalog & schema does not support register table")
}
fn rename_table(&self, _name: &str, _new_name: String) -> crate::error::Result<TableRef> {
unimplemented!("System catalog & schema does not support rename table")
}
fn deregister_table(&self, _name: &str) -> crate::error::Result<Option<TableRef>> {
panic!("System catalog & schema does not support deregister table")
}
@@ -269,8 +276,7 @@ impl SystemCatalog {
table_name: String,
table_id: TableId,
) -> crate::error::Result<usize> {
let full_table_name = format_full_table_name(&catalog, &schema, &table_name);
let request = build_table_insert_request(full_table_name, table_id);
let request = build_table_insert_request(catalog, schema, table_name, table_id);
self.information_schema
.system
.insert(request)
@@ -278,6 +284,21 @@ impl SystemCatalog {
.context(InsertCatalogRecordSnafu)
}
pub(crate) async fn deregister_table(
&self,
request: &DeregisterTableRequest,
table_id: TableId,
) -> CatalogResult<bool> {
self.information_schema
.system
.delete(build_table_deletion_request(request, table_id))
.await
.map(|x| x == 1)
.with_context(|_| error::DeregisterTableSnafu {
request: request.clone(),
})
}
pub async fn register_schema(
&self,
catalog: String,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -17,7 +17,7 @@ mod tests {
use std::sync::Arc;
use catalog::local::LocalCatalogManager;
use catalog::{CatalogManager, RegisterTableRequest};
use catalog::{CatalogManager, RegisterTableRequest, RenameTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_telemetry::{error, info};
use mito::config::EngineConfig;
@@ -38,6 +38,44 @@ mod tests {
Ok(catalog_manager)
}
#[tokio::test]
async fn test_rename_table() {
common_telemetry::init_default_ut_logging();
let catalog_manager = create_local_catalog_manager().await.unwrap();
// register table
let table_name = "test_table";
let table_id = 42;
let table = Arc::new(NumbersTable::new(table_id));
let request = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
table_id,
table: table.clone(),
};
assert!(catalog_manager.register_table(request).await.unwrap());
// rename table
let new_table_name = "table_t";
let rename_table_req = RenameTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
new_table_name: new_table_name.to_string(),
table_id,
};
assert!(catalog_manager
.rename_table(rename_table_req)
.await
.unwrap());
let registered_table = catalog_manager
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
.unwrap()
.unwrap();
assert_eq!(registered_table.table_info().ident.table_id, table_id);
}
#[tokio::test]
async fn test_duplicate_register() {
let catalog_manager = create_local_catalog_manager().await.unwrap();


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -6,8 +6,10 @@ license.workspace = true
[dependencies]
api = { path = "../api" }
async-stream = "0.3"
arrow-flight.workspace = true
async-stream.workspace = true
common-base = { path = "../common/base" }
common-catalog = { path = "../common/catalog" }
common-error = { path = "../common/error" }
common-grpc = { path = "../common/grpc" }
common-grpc-expr = { path = "../common/grpc-expr" }
@@ -17,15 +19,17 @@ common-time = { path = "../common/time" }
datafusion.workspace = true
datatypes = { path = "../datatypes" }
enum_dispatch = "0.3"
futures-util.workspace = true
parking_lot = "0.12"
prost.workspace = true
rand = "0.8"
snafu = { version = "0.7", features = ["backtraces"] }
tonic = "0.8"
snafu.workspace = true
tonic.workspace = true
[dev-dependencies]
datanode = { path = "../datanode" }
substrait = { path = "../common/substrait" }
tokio = { version = "1.0", features = ["full"] }
tokio.workspace = true
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }


@@ -1,106 +0,0 @@
// Copyright 2022 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::*;
use client::{Client, Database};
fn main() {
tracing::subscriber::set_global_default(tracing_subscriber::FmtSubscriber::builder().finish())
.unwrap();
run();
}
#[tokio::main]
async fn run() {
let client = Client::with_urls(vec!["127.0.0.1:3001"]);
let db = Database::new("greptime", client);
let (columns, row_count) = insert_data();
let expr = InsertExpr {
schema_name: "public".to_string(),
table_name: "demo".to_string(),
region_number: 0,
columns,
row_count,
};
db.insert(expr).await.unwrap();
}
fn insert_data() -> (Vec<Column>, u32) {
const SEMANTIC_TAG: i32 = 0;
const SEMANTIC_FIELD: i32 = 1;
const SEMANTIC_TS: i32 = 2;
let row_count = 4;
let host_vals = column::Values {
string_values: vec![
"host1".to_string(),
"host2".to_string(),
"host3".to_string(),
"host4".to_string(),
],
..Default::default()
};
let host_column = Column {
column_name: "host".to_string(),
semantic_type: SEMANTIC_TAG,
values: Some(host_vals),
null_mask: vec![0],
..Default::default()
};
let cpu_vals = column::Values {
f64_values: vec![0.31, 0.41, 0.2],
..Default::default()
};
let cpu_column = Column {
column_name: "cpu".to_string(),
semantic_type: SEMANTIC_FIELD,
values: Some(cpu_vals),
null_mask: vec![2],
..Default::default()
};
let mem_vals = column::Values {
f64_values: vec![0.1, 0.2, 0.3],
..Default::default()
};
let mem_column = Column {
column_name: "memory".to_string(),
semantic_type: SEMANTIC_FIELD,
values: Some(mem_vals),
null_mask: vec![4],
..Default::default()
};
let ts_vals = column::Values {
i64_values: vec![100, 101, 102, 103],
..Default::default()
};
let ts_column = Column {
column_name: "ts".to_string(),
semantic_type: SEMANTIC_TS,
values: Some(ts_vals),
null_mask: vec![0],
..Default::default()
};
(
vec![host_column, cpu_column, mem_column, ts_column],
row_count,
)
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -13,7 +13,6 @@
// limitations under the License.
use api::v1::{ColumnDataType, ColumnDef, CreateTableExpr, TableId};
use client::admin::Admin;
use client::{Client, Database};
use prost_09::Message;
use substrait_proto::protobuf::plan_rel::RelType as PlanRelType;
@@ -66,13 +65,12 @@ async fn run() {
region_ids: vec![0],
};
let admin = Admin::new("create table", client.clone());
let result = admin.create(create_table_expr).await.unwrap();
let db = Database::with_client(client);
let result = db.create(create_table_expr).await.unwrap();
event!(Level::INFO, "create table result: {:#?}", result);
let logical = mock_logical_plan();
event!(Level::INFO, "plan size: {:#?}", logical.len());
let db = Database::new("greptime", client);
let result = db.logical_plan(logical).await.unwrap();
event!(Level::INFO, "result: {:#?}", result);


@@ -1,34 +0,0 @@
// Copyright 2022 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use client::{Client, Database, Select};
use tracing::{event, Level};
fn main() {
tracing::subscriber::set_global_default(tracing_subscriber::FmtSubscriber::builder().finish())
.unwrap();
run();
}
#[tokio::main]
async fn run() {
let client = Client::with_urls(vec!["127.0.0.1:3001"]);
let db = Database::new("greptime", client);
let sql = Select::Sql("select * from demo".to_string());
let result = db.select(sql).await.unwrap();
event!(Level::INFO, "result: {:#?}", result);
}


@@ -1,137 +0,0 @@
// Copyright 2022 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::*;
use common_error::prelude::StatusCode;
use common_query::Output;
use snafu::prelude::*;
use crate::database::PROTOCOL_VERSION;
use crate::{error, Client, Result};
#[derive(Clone, Debug)]
pub struct Admin {
name: String,
client: Client,
}
impl Admin {
pub fn new(name: impl Into<String>, client: Client) -> Self {
Self {
name: name.into(),
client,
}
}
pub async fn create(&self, expr: CreateTableExpr) -> Result<AdminResult> {
let header = ExprHeader {
version: PROTOCOL_VERSION,
};
let expr = AdminExpr {
header: Some(header),
expr: Some(admin_expr::Expr::CreateTable(expr)),
};
self.do_request(expr).await
}
pub async fn do_request(&self, expr: AdminExpr) -> Result<AdminResult> {
// `remove(0)` is safe because of `do_requests`'s invariants.
Ok(self.do_requests(vec![expr]).await?.remove(0))
}
pub async fn alter(&self, expr: AlterExpr) -> Result<AdminResult> {
let header = ExprHeader {
version: PROTOCOL_VERSION,
};
let expr = AdminExpr {
header: Some(header),
expr: Some(admin_expr::Expr::Alter(expr)),
};
self.do_request(expr).await
}
pub async fn drop_table(&self, expr: DropTableExpr) -> Result<AdminResult> {
let header = ExprHeader {
version: PROTOCOL_VERSION,
};
let expr = AdminExpr {
header: Some(header),
expr: Some(admin_expr::Expr::DropTable(expr)),
};
self.do_request(expr).await
}
/// Invariants: the lengths of input vec (`Vec<AdminExpr>`) and output vec (`Vec<AdminResult>`) are equal.
async fn do_requests(&self, exprs: Vec<AdminExpr>) -> Result<Vec<AdminResult>> {
let expr_count = exprs.len();
let req = AdminRequest {
name: self.name.clone(),
exprs,
};
let resp = self.client.admin(req).await?;
let results = resp.results;
ensure!(
results.len() == expr_count,
error::MissingResultSnafu {
name: "admin_results",
expected: expr_count,
actual: results.len(),
}
);
Ok(results)
}
pub async fn create_database(&self, expr: CreateDatabaseExpr) -> Result<AdminResult> {
let header = ExprHeader {
version: PROTOCOL_VERSION,
};
let expr = AdminExpr {
header: Some(header),
expr: Some(admin_expr::Expr::CreateDatabase(expr)),
};
Ok(self.do_requests(vec![expr]).await?.remove(0))
}
}
pub fn admin_result_to_output(admin_result: AdminResult) -> Result<Output> {
let header = admin_result.header.context(error::MissingHeaderSnafu)?;
if !StatusCode::is_success(header.code) {
return error::DatanodeSnafu {
code: header.code,
msg: header.err_msg,
}
.fail();
}
let result = admin_result.result.context(error::MissingResultSnafu {
name: "result".to_string(),
expected: 1_usize,
actual: 0_usize,
})?;
let output = match result {
admin_result::Result::Mutate(mutate) => {
if mutate.failure != 0 {
return error::MutateFailureSnafu {
failure: mutate.failure,
}
.fail();
}
Output::AffectedRows(mutate.success as usize)
}
};
Ok(output)
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,8 +14,7 @@
use std::sync::Arc;
use api::v1::greptime_client::GreptimeClient;
use api::v1::*;
use arrow_flight::flight_service_client::FlightServiceClient;
use common_grpc::channel_manager::ChannelManager;
use parking_lot::RwLock;
use snafu::{OptionExt, ResultExt};
@@ -24,6 +23,21 @@ use tonic::transport::Channel;
use crate::load_balance::{LoadBalance, Loadbalancer};
use crate::{error, Result};
pub(crate) struct FlightClient {
addr: String,
client: FlightServiceClient<Channel>,
}
impl FlightClient {
pub(crate) fn addr(&self) -> &str {
&self.addr
}
pub(crate) fn mut_inner(&mut self) -> &mut FlightServiceClient<Channel> {
&mut self.client
}
}
#[derive(Clone, Debug, Default)]
pub struct Client {
inner: Arc<Inner>,
@@ -104,57 +118,23 @@ impl Client {
self.inner.set_peers(urls);
}
pub async fn admin(&self, req: AdminRequest) -> Result<AdminResponse> {
let req = BatchRequest {
admins: vec![req],
..Default::default()
};
let mut res = self.batch(req).await?;
res.admins.pop().context(error::MissingResultSnafu {
name: "admins",
expected: 1_usize,
actual: 0_usize,
})
}
pub async fn database(&self, req: DatabaseRequest) -> Result<DatabaseResponse> {
let req = BatchRequest {
databases: vec![req],
..Default::default()
};
let mut res = self.batch(req).await?;
res.databases.pop().context(error::MissingResultSnafu {
name: "database",
expected: 1_usize,
actual: 0_usize,
})
}
pub async fn batch(&self, req: BatchRequest) -> Result<BatchResponse> {
let peer = self
pub(crate) fn make_client(&self) -> Result<FlightClient> {
let addr = self
.inner
.get_peer()
.context(error::IllegalGrpcClientStateSnafu {
err_msg: "No available peer found",
})?;
let mut client = self.make_client(&peer)?;
let result = client
.batch(req)
.await
.context(error::TonicStatusSnafu { addr: peer })?;
Ok(result.into_inner())
}
fn make_client(&self, addr: impl AsRef<str>) -> Result<GreptimeClient<Channel>> {
let addr = addr.as_ref();
let channel = self
.inner
.channel_manager
.get(addr)
.context(error::CreateChannelSnafu { addr })?;
Ok(GreptimeClient::new(channel))
.get(&addr)
.context(error::CreateChannelSnafu { addr: &addr })?;
Ok(FlightClient {
addr,
client: FlightServiceClient::new(channel),
})
}
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -12,237 +12,173 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use std::str::FromStr;
use api::v1::codec::SelectResult as GrpcSelectResult;
use api::v1::column::SemanticType;
use api::v1::ddl_request::Expr as DdlExpr;
use api::v1::greptime_request::Request;
use api::v1::query_request::Query;
use api::v1::{
object_expr, object_result, select_expr, DatabaseRequest, ExprHeader, InsertExpr,
MutateResult as GrpcMutateResult, ObjectExpr, ObjectResult as GrpcObjectResult, SelectExpr,
AlterExpr, CreateTableExpr, DdlRequest, DropTableExpr, GreptimeRequest, InsertRequest,
QueryRequest, RequestHeader,
};
use common_error::status_code::StatusCode;
use common_grpc_expr::column_to_vector;
use arrow_flight::{FlightData, Ticket};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::prelude::*;
use common_grpc::flight::{flight_messages_to_recordbatches, FlightDecoder, FlightMessage};
use common_query::Output;
use common_recordbatch::{RecordBatch, RecordBatches};
use datatypes::prelude::*;
use datatypes::schema::{ColumnSchema, Schema};
use snafu::{ensure, OptionExt, ResultExt};
use futures_util::{TryFutureExt, TryStreamExt};
use prost::Message;
use snafu::{ensure, ResultExt};
use crate::error::{ColumnToVectorSnafu, ConvertSchemaSnafu, DatanodeSnafu, DecodeSelectSnafu};
use crate::error::{ConvertFlightDataSnafu, IllegalFlightMessagesSnafu};
use crate::{error, Client, Result};
pub const PROTOCOL_VERSION: u32 = 1;
#[derive(Clone, Debug)]
pub struct Database {
name: String,
// The "catalog" and "schema" to be used in processing the requests at the server side.
// They are the "hint" or "context", just like how the "database" in a "USE" statement is treated in MySQL.
// They will be carried in the request header.
catalog: String,
schema: String,
client: Client,
}
impl Database {
pub fn new(name: impl Into<String>, client: Client) -> Self {
pub fn new(catalog: impl Into<String>, schema: impl Into<String>, client: Client) -> Self {
Self {
name: name.into(),
catalog: catalog.into(),
schema: schema.into(),
client,
}
}
pub fn name(&self) -> &str {
&self.name
pub fn with_client(client: Client) -> Self {
Self::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client)
}
pub async fn insert(&self, insert: InsertExpr) -> Result<ObjectResult> {
let header = ExprHeader {
version: PROTOCOL_VERSION,
};
let expr = ObjectExpr {
header: Some(header),
expr: Some(object_expr::Expr::Insert(insert)),
};
self.object(expr).await?.try_into()
pub fn set_schema(&mut self, schema: impl Into<String>) {
self.schema = schema.into();
}
pub async fn batch_insert(&self, insert_exprs: Vec<InsertExpr>) -> Result<Vec<ObjectResult>> {
let header = ExprHeader {
version: PROTOCOL_VERSION,
};
let obj_exprs = insert_exprs
.into_iter()
.map(|expr| ObjectExpr {
header: Some(header.clone()),
expr: Some(object_expr::Expr::Insert(expr)),
})
.collect();
self.objects(obj_exprs)
.await?
.into_iter()
.map(|result| result.try_into())
.collect()
pub async fn insert(&self, request: InsertRequest) -> Result<Output> {
self.do_get(Request::Insert(request)).await
}
pub async fn select(&self, expr: Select) -> Result<ObjectResult> {
let select_expr = match expr {
Select::Sql(sql) => SelectExpr {
expr: Some(select_expr::Expr::Sql(sql)),
},
};
self.do_select(select_expr).await
pub async fn sql(&self, sql: &str) -> Result<Output> {
self.do_get(Request::Query(QueryRequest {
query: Some(Query::Sql(sql.to_string())),
}))
.await
}
pub async fn logical_plan(&self, logical_plan: Vec<u8>) -> Result<ObjectResult> {
let select_expr = SelectExpr {
expr: Some(select_expr::Expr::LogicalPlan(logical_plan)),
};
self.do_select(select_expr).await
pub async fn logical_plan(&self, logical_plan: Vec<u8>) -> Result<Output> {
self.do_get(Request::Query(QueryRequest {
query: Some(Query::LogicalPlan(logical_plan)),
}))
.await
}
async fn do_select(&self, select_expr: SelectExpr) -> Result<ObjectResult> {
let header = ExprHeader {
version: PROTOCOL_VERSION,
pub async fn create(&self, expr: CreateTableExpr) -> Result<Output> {
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::CreateTable(expr)),
}))
.await
}
pub async fn alter(&self, expr: AlterExpr) -> Result<Output> {
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::Alter(expr)),
}))
.await
}
pub async fn drop_table(&self, expr: DropTableExpr) -> Result<Output> {
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::DropTable(expr)),
}))
.await
}
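
Putting the new constructor and the query methods together, a caller now looks roughly like the following. This is a sketch only: it assumes an async context that returns this crate's Result, and reuses the example address and table from the removed client examples:

// Hypothetical usage of the reworked client; address and table are illustrative.
let client = Client::with_urls(vec!["127.0.0.1:3001"]);
let db = Database::new("greptime", "public", client);
match db.sql("select * from demo").await? {
    Output::AffectedRows(rows) => println!("affected rows: {rows}"),
    Output::RecordBatches(batches) => {
        for batch in batches.take() {
            println!("{} rows", batch.num_rows());
        }
    }
    _ => { /* streaming output is not handled in this sketch */ }
}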
async fn do_get(&self, request: Request) -> Result<Output> {
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
}),
request: Some(request),
};
let request = Ticket {
ticket: request.encode_to_vec(),
};
let expr = ObjectExpr {
header: Some(header),
expr: Some(object_expr::Expr::Select(select_expr)),
};
let mut client = self.client.make_client()?;
let obj_result = self.object(expr).await?;
obj_result.try_into()
}
pub async fn object(&self, expr: ObjectExpr) -> Result<GrpcObjectResult> {
let res = self.objects(vec![expr]).await?.pop().unwrap();
Ok(res)
}
async fn objects(&self, exprs: Vec<ObjectExpr>) -> Result<Vec<GrpcObjectResult>> {
let expr_count = exprs.len();
let req = DatabaseRequest {
name: self.name.clone(),
exprs,
};
let res = self.client.database(req).await?;
let res = res.results;
ensure!(
res.len() == expr_count,
error::MissingResultSnafu {
name: "object_results",
expected: expr_count,
actual: res.len(),
}
);
Ok(res)
}
}
#[derive(Debug)]
pub enum ObjectResult {
Select(GrpcSelectResult),
Mutate(GrpcMutateResult),
}
impl TryFrom<api::v1::ObjectResult> for ObjectResult {
type Error = error::Error;
fn try_from(object_result: api::v1::ObjectResult) -> std::result::Result<Self, Self::Error> {
let header = object_result.header.context(error::MissingHeaderSnafu)?;
if !StatusCode::is_success(header.code) {
return DatanodeSnafu {
code: header.code,
msg: header.err_msg,
}
.fail();
}
let obj_result = object_result.result.context(error::MissingResultSnafu {
name: "result".to_string(),
expected: 1_usize,
actual: 0_usize,
})?;
Ok(match obj_result {
object_result::Result::Select(select) => {
let result = (*select.raw_data).try_into().context(DecodeSelectSnafu)?;
ObjectResult::Select(result)
}
object_result::Result::Mutate(mutate) => ObjectResult::Mutate(mutate),
})
}
}
pub enum Select {
Sql(String),
}
impl TryFrom<ObjectResult> for Output {
type Error = error::Error;
fn try_from(value: ObjectResult) -> Result<Self> {
let output = match value {
ObjectResult::Select(select) => {
let vectors = select
.columns
.iter()
.map(|column| {
column_to_vector(column, select.row_count).context(ColumnToVectorSnafu)
// TODO(LFC): Streaming get flight data.
let flight_data: Vec<FlightData> = client
.mut_inner()
.do_get(request)
.and_then(|response| response.into_inner().try_collect())
.await
.map_err(|e| {
let code = get_metadata_value(&e, INNER_ERROR_CODE)
.and_then(|s| StatusCode::from_str(&s).ok())
.unwrap_or(StatusCode::Unknown);
let msg = get_metadata_value(&e, INNER_ERROR_MSG).unwrap_or(e.to_string());
error::ExternalSnafu { code, msg }
.fail::<()>()
.map_err(BoxedError::new)
.context(error::FlightGetSnafu {
tonic_code: e.code(),
addr: client.addr(),
})
.collect::<Result<Vec<VectorRef>>>()?;
.unwrap_err()
})?;
let column_schemas = select
.columns
.iter()
.zip(vectors.iter())
.map(|(column, vector)| {
let datatype = vector.data_type();
// nullable or not, does not affect the output
let mut column_schema =
ColumnSchema::new(&column.column_name, datatype, true);
if column.semantic_type == SemanticType::Timestamp as i32 {
column_schema = column_schema.with_time_index(true);
}
column_schema
})
.collect::<Vec<ColumnSchema>>();
let decoder = &mut FlightDecoder::default();
let flight_messages = flight_data
.into_iter()
.map(|x| decoder.try_decode(x).context(ConvertFlightDataSnafu))
.collect::<Result<Vec<_>>>()?;
let schema = Arc::new(Schema::try_new(column_schemas).context(ConvertSchemaSnafu)?);
let recordbatches = if vectors.is_empty() {
RecordBatches::try_new(schema, vec![])
} else {
RecordBatch::new(schema, vectors)
.and_then(|batch| RecordBatches::try_new(batch.schema.clone(), vec![batch]))
let output = if let Some(FlightMessage::AffectedRows(rows)) = flight_messages.get(0) {
ensure!(
flight_messages.len() == 1,
IllegalFlightMessagesSnafu {
reason: "Expected exactly one 'AffectedRows' Flight message!"
}
.context(error::CreateRecordBatchesSnafu)?;
Output::RecordBatches(recordbatches)
}
ObjectResult::Mutate(mutate) => {
if mutate.failure != 0 {
return error::MutateFailureSnafu {
failure: mutate.failure,
}
.fail();
}
Output::AffectedRows(mutate.success as usize)
}
);
Output::AffectedRows(*rows)
} else {
let recordbatches = flight_messages_to_recordbatches(flight_messages)
.context(ConvertFlightDataSnafu)?;
Output::RecordBatches(recordbatches)
};
Ok(output)
}
}
fn get_metadata_value(e: &tonic::Status, key: &str) -> Option<String> {
e.metadata()
.get(key)
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use api::helper::ColumnDataTypeWrapper;
use api::v1::Column;
use common_grpc::select::{null_mask, values};
use common_grpc_expr::column_to_vector;
use datatypes::prelude::{Vector, VectorRef};
use datatypes::vectors::{
BinaryVector, BooleanVector, DateTimeVector, DateVector, Float32Vector, Float64Vector,
Int16Vector, Int32Vector, Int64Vector, Int8Vector, StringVector, UInt16Vector,
UInt32Vector, UInt64Vector, UInt8Vector,
};
use super::*;
#[test]
fn test_column_to_vector() {
let mut column = create_test_column(Arc::new(BooleanVector::from(vec![true])));


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -13,67 +13,43 @@
// limitations under the License.
use std::any::Any;
use std::sync::Arc;
use api::serde::DecodeError;
use common_error::prelude::*;
use datafusion::physical_plan::ExecutionPlan;
use tonic::Code;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Connect failed to {}, source: {}", url, source))]
ConnectFailed {
url: String,
source: tonic::transport::Error,
#[snafu(display("Illegal Flight messages, reason: {}", reason))]
IllegalFlightMessages {
reason: String,
backtrace: Backtrace,
},
#[snafu(display("Missing {}, expected {}, actual {}", name, expected, actual))]
MissingResult {
name: String,
expected: usize,
actual: usize,
},
#[snafu(display("Missing result header"))]
MissingHeader,
#[snafu(display("Tonic internal error, addr: {}, source: {}", addr, source))]
TonicStatus {
#[snafu(display(
"Failed to do Flight get, addr: {}, code: {}, source: {}",
addr,
tonic_code,
source
))]
FlightGet {
addr: String,
source: tonic::Status,
backtrace: Backtrace,
tonic_code: Code,
source: BoxedError,
},
#[snafu(display("Fail to decode select result, source: {}", source))]
DecodeSelect { source: DecodeError },
#[snafu(display("Error occurred on the data node, code: {}, msg: {}", code, msg))]
Datanode { code: u32, msg: String },
#[snafu(display("Failed to encode physical plan: {:?}, source: {}", physical, source))]
EncodePhysical {
physical: Arc<dyn ExecutionPlan>,
#[snafu(display("Failed to convert FlightData, source: {}", source))]
ConvertFlightData {
#[snafu(backtrace)]
source: common_grpc::Error,
},
#[snafu(display("Mutate result has failure {}", failure))]
MutateFailure { failure: u32, backtrace: Backtrace },
#[snafu(display("Column datatype error, source: {}", source))]
ColumnDataType {
#[snafu(backtrace)]
source: api::error::Error,
},
#[snafu(display("Failed to create RecordBatches, source: {}", source))]
CreateRecordBatches {
#[snafu(backtrace)]
source: common_recordbatch::error::Error,
},
#[snafu(display("Illegal GRPC client state: {}", err_msg))]
IllegalGrpcClientState {
err_msg: String,
@@ -83,12 +59,6 @@ pub enum Error {
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, backtrace: Backtrace },
#[snafu(display("Failed to convert schema, source: {}", source))]
ConvertSchema {
#[snafu(backtrace)]
source: datatypes::error::Error,
},
#[snafu(display(
"Failed to create gRPC channel, peer address: {}, source: {}",
addr,
@@ -100,11 +70,9 @@ pub enum Error {
source: common_grpc::error::Error,
},
#[snafu(display("Failed to convert column to vector, source: {}", source))]
ColumnToVector {
#[snafu(backtrace)]
source: common_grpc_expr::error::Error,
},
/// Error deserialized from gRPC metadata
#[snafu(display("{}", msg))]
ExternalError { code: StatusCode, msg: String },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -112,21 +80,15 @@ pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::ConnectFailed { .. }
| Error::MissingResult { .. }
| Error::MissingHeader { .. }
| Error::TonicStatus { .. }
| Error::DecodeSelect { .. }
| Error::Datanode { .. }
| Error::EncodePhysical { .. }
| Error::MutateFailure { .. }
Error::IllegalFlightMessages { .. }
| Error::ColumnDataType { .. }
| Error::MissingField { .. } => StatusCode::Internal,
Error::ConvertSchema { source } => source.status_code(),
Error::CreateRecordBatches { source } => source.status_code(),
Error::CreateChannel { source, .. } => source.status_code(),
Error::FlightGet { source, .. } => source.status_code(),
Error::CreateChannel { source, .. } | Error::ConvertFlightData { source } => {
source.status_code()
}
Error::IllegalGrpcClientState { .. } => StatusCode::Unexpected,
Error::ColumnToVector { source, .. } => source.status_code(),
Error::ExternalError { code, .. } => *code,
}
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -12,7 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod admin;
mod client;
mod database;
mod error;
@@ -21,5 +20,5 @@ pub mod load_balance;
pub use api;
pub use self::client::Client;
pub use self::database::{Database, ObjectResult, Select};
pub use self::database::Database;
pub use self::error::{Error, Result};


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -18,17 +18,17 @@ common-telemetry = { path = "../common/telemetry", features = [
] }
datanode = { path = "../datanode" }
frontend = { path = "../frontend" }
futures = "0.3"
futures.workspace = true
meta-client = { path = "../meta-client" }
meta-srv = { path = "../meta-srv" }
serde = "1.0"
serde.workspace = true
servers = { path = "../servers" }
snafu = { version = "0.7", features = ["backtraces"] }
tokio = { version = "1.18", features = ["full"] }
snafu.workspace = true
tokio.workspace = true
toml = "0.5"
[dev-dependencies]
serde = "1.0"
serde.workspace = true
tempdir = "0.3"
[build-dependencies]


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,7 +14,7 @@
use clap::Parser;
use common_telemetry::logging;
use datanode::datanode::{Datanode, DatanodeOptions, ObjectStoreConfig};
use datanode::datanode::{Datanode, DatanodeOptions, FileConfig, ObjectStoreConfig};
use meta_client::MetaClientOpts;
use servers::Mode;
use snafu::ResultExt;
@@ -54,6 +54,8 @@ struct StartCommand {
#[clap(long)]
rpc_addr: Option<String>,
#[clap(long)]
rpc_hostname: Option<String>,
#[clap(long)]
mysql_addr: Option<String>,
#[clap(long)]
metasrv_addr: Option<String>,
@@ -94,6 +96,11 @@ impl TryFrom<StartCommand> for DatanodeOptions {
if let Some(addr) = cmd.rpc_addr {
opts.rpc_addr = addr;
}
if cmd.rpc_hostname.is_some() {
opts.rpc_hostname = cmd.rpc_hostname;
}
if let Some(addr) = cmd.mysql_addr {
opts.mysql_addr = addr;
}
@@ -121,11 +128,11 @@ impl TryFrom<StartCommand> for DatanodeOptions {
}
if let Some(data_dir) = cmd.data_dir {
opts.storage = ObjectStoreConfig::File { data_dir };
opts.storage = ObjectStoreConfig::File(FileConfig { data_dir });
}
if let Some(wal_dir) = cmd.wal_dir {
opts.wal_dir = wal_dir;
opts.wal.dir = wal_dir;
}
Ok(opts)
}
@@ -134,6 +141,7 @@ impl TryFrom<StartCommand> for DatanodeOptions {
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
use std::time::Duration;
use datanode::datanode::ObjectStoreConfig;
use servers::Mode;
@@ -151,7 +159,7 @@ mod tests {
};
let options: DatanodeOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3001".to_string(), options.rpc_addr);
assert_eq!("/tmp/greptimedb/wal".to_string(), options.wal_dir);
assert_eq!("/tmp/greptimedb/wal".to_string(), options.wal.dir);
assert_eq!("127.0.0.1:4406".to_string(), options.mysql_addr);
assert_eq!(4, options.mysql_runtime_size);
let MetaClientOpts {
@@ -167,10 +175,11 @@ mod tests {
assert!(!tcp_nodelay);
match options.storage {
ObjectStoreConfig::File { data_dir } => {
ObjectStoreConfig::File(FileConfig { data_dir }) => {
assert_eq!("/tmp/greptimedb/data/".to_string(), data_dir)
}
ObjectStoreConfig::S3 { .. } => unreachable!(),
ObjectStoreConfig::Oss { .. } => unreachable!(),
};
}
@@ -216,6 +225,11 @@ mod tests {
..Default::default()
})
.unwrap();
assert_eq!("/tmp/greptimedb/wal", dn_opts.wal.dir);
assert_eq!(Duration::from_secs(600), dn_opts.wal.purge_interval);
assert_eq!(1024 * 1024 * 1024, dn_opts.wal.file_size.0);
assert_eq!(1024 * 1024 * 1024 * 50, dn_opts.wal.purge_threshold.0);
assert!(!dn_opts.wal.sync_write);
assert_eq!(Some(42), dn_opts.node_id);
let MetaClientOpts {
metasrv_addrs: metasrv_addr,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -61,6 +61,13 @@ pub enum Error {
#[snafu(backtrace)]
source: servers::auth::Error,
},
#[snafu(display("Unsupported selector type, {} source: {}", selector_type, source))]
UnsupportedSelectorType {
selector_type: String,
#[snafu(backtrace)]
source: meta_srv::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -71,6 +78,7 @@ impl ErrorExt for Error {
Error::StartDatanode { source } => source.status_code(),
Error::StartFrontend { source } => source.status_code(),
Error::StartMetaServer { source } => source.status_code(),
Error::UnsupportedSelectorType { source, .. } => source.status_code(),
Error::ReadConfig { .. } | Error::ParseConfig { .. } | Error::MissingConfig { .. } => {
StatusCode::InvalidArguments
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -287,7 +287,7 @@ mod tests {
let provider = provider.unwrap();
let result = provider
.auth(Identity::UserId("test", None), Password::PlainText("test"))
.authenticate(Identity::UserId("test", None), Password::PlainText("test"))
.await;
assert!(result.is_ok());
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -13,7 +13,7 @@
// limitations under the License.
use clap::Parser;
use common_telemetry::logging;
use common_telemetry::{info, logging, warn};
use meta_srv::bootstrap;
use meta_srv::metasrv::MetaSrvOptions;
use snafu::ResultExt;
@@ -56,6 +56,10 @@ struct StartCommand {
store_addr: Option<String>,
#[clap(short, long)]
config_file: Option<String>,
#[clap(short, long)]
selector: Option<String>,
#[clap(long)]
use_memory_store: bool,
}
impl StartCommand {
@@ -91,6 +95,17 @@ impl TryFrom<StartCommand> for MetaSrvOptions {
if let Some(addr) = cmd.store_addr {
opts.store_addr = addr;
}
if let Some(selector_type) = &cmd.selector {
opts.selector = selector_type[..]
.try_into()
.context(error::UnsupportedSelectorTypeSnafu { selector_type })?;
info!("Using {} selector", selector_type);
}
if cmd.use_memory_store {
warn!("Using memory store for Meta. Make sure you are running tests.");
opts.use_memory_store = true;
}
Ok(opts)
}
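
The --selector flag is converted with TryFrom<&str> into a SelectorType. A sketch of the mapping, based on the values exercised in the tests below (the full set of accepted spellings is an assumption):

// Sketch: "LoadBased" parses; unknown names error out and are wrapped
// in UnsupportedSelectorType by the CLI (see the new error variant).
use meta_srv::selector::SelectorType;

let selector: SelectorType = "LoadBased".try_into().expect("supported selector");
assert_eq!(selector, SelectorType::LoadBased);
assert!(SelectorType::try_from("SomethingElse").is_err());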
@@ -98,6 +113,8 @@ impl TryFrom<StartCommand> for MetaSrvOptions {
#[cfg(test)]
mod tests {
use meta_srv::selector::SelectorType;
use super::*;
#[test]
@@ -107,11 +124,14 @@ mod tests {
server_addr: Some("127.0.0.1:3002".to_string()),
store_addr: Some("127.0.0.1:2380".to_string()),
config_file: None,
selector: Some("LoadBased".to_string()),
use_memory_store: false,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
assert_eq!("127.0.0.1:3002".to_string(), options.server_addr);
assert_eq!("127.0.0.1:2380".to_string(), options.store_addr);
assert_eq!(SelectorType::LoadBased, options.selector);
}
#[test]
@@ -120,15 +140,18 @@ mod tests {
bind_addr: None,
server_addr: None,
store_addr: None,
selector: None,
config_file: Some(format!(
"{}/../../config/metasrv.example.toml",
std::env::current_dir().unwrap().as_path().to_str().unwrap()
)),
use_memory_store: false,
};
let options: MetaSrvOptions = cmd.try_into().unwrap();
assert_eq!("127.0.0.1:3002".to_string(), options.bind_addr);
assert_eq!("127.0.0.1:3002".to_string(), options.server_addr);
assert_eq!("127.0.0.1:2379".to_string(), options.store_addr);
assert_eq!(15, options.datanode_lease_secs);
assert_eq!(SelectorType::LeaseBased, options.selector);
}
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,7 +16,7 @@ use std::sync::Arc;
use clap::Parser;
use common_telemetry::info;
use datanode::datanode::{Datanode, DatanodeOptions, ObjectStoreConfig};
use datanode::datanode::{Datanode, DatanodeOptions, ObjectStoreConfig, WalConfig};
use datanode::instance::InstanceRef;
use frontend::frontend::{Frontend, FrontendOptions};
use frontend::grpc::GrpcOptions;
@@ -26,6 +26,7 @@ use frontend::mysql::MysqlOptions;
use frontend::opentsdb::OpentsdbOptions;
use frontend::postgres::PostgresOptions;
use frontend::prometheus::PrometheusOptions;
use frontend::promql::PromqlOptions;
use frontend::Plugins;
use serde::{Deserialize, Serialize};
use servers::http::HttpOptions;
@@ -63,6 +64,7 @@ impl SubCommand {
}
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(default)]
pub struct StandaloneOptions {
pub http_options: Option<HttpOptions>,
pub grpc_options: Option<GrpcOptions>,
@@ -71,8 +73,9 @@ pub struct StandaloneOptions {
pub opentsdb_options: Option<OpentsdbOptions>,
pub influxdb_options: Option<InfluxdbOptions>,
pub prometheus_options: Option<PrometheusOptions>,
pub promql_options: Option<PromqlOptions>,
pub mode: Mode,
pub wal_dir: String,
pub wal: WalConfig,
pub storage: ObjectStoreConfig,
pub enable_memory_catalog: bool,
}
@@ -87,8 +90,9 @@ impl Default for StandaloneOptions {
opentsdb_options: Some(OpentsdbOptions::default()),
influxdb_options: Some(InfluxdbOptions::default()),
prometheus_options: Some(PrometheusOptions::default()),
promql_options: Some(PromqlOptions::default()),
mode: Mode::Standalone,
wal_dir: "/tmp/greptimedb/wal".to_string(),
wal: WalConfig::default(),
storage: ObjectStoreConfig::default(),
enable_memory_catalog: false,
}
@@ -105,6 +109,7 @@ impl StandaloneOptions {
opentsdb_options: self.opentsdb_options,
influxdb_options: self.influxdb_options,
prometheus_options: self.prometheus_options,
promql_options: self.promql_options,
mode: self.mode,
meta_client_opts: None,
}
@@ -112,7 +117,7 @@ impl StandaloneOptions {
fn datanode_options(self) -> DatanodeOptions {
DatanodeOptions {
wal_dir: self.wal_dir,
wal: self.wal,
storage: self.storage,
enable_memory_catalog: self.enable_memory_catalog,
..Default::default()
@@ -322,6 +327,10 @@ mod tests {
fe_opts.mysql_options.as_ref().unwrap().addr
);
assert_eq!(2, fe_opts.mysql_options.as_ref().unwrap().runtime_size);
assert_eq!(
None,
fe_opts.mysql_options.as_ref().unwrap().reject_no_database
);
assert!(fe_opts.influxdb_options.as_ref().unwrap().enable);
}
@@ -349,7 +358,7 @@ mod tests {
assert!(provider.is_some());
let provider = provider.unwrap();
let result = provider
.auth(Identity::UserId("test", None), Password::PlainText("test"))
.authenticate(Identity::UserId("test", None), Password::PlainText("test"))
.await;
assert!(result.is_ok());
}


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -37,12 +37,23 @@ mod tests {
use crate::error::Result;
#[derive(Clone, PartialEq, Debug, Deserialize, Serialize)]
#[serde(default)]
struct MockConfig {
path: String,
port: u32,
host: String,
}
impl Default for MockConfig {
fn default() -> Self {
Self {
path: "test".to_string(),
port: 0,
host: "localhost".to_string(),
}
}
}
#[test]
fn test_from_file() -> Result<()> {
let config = MockConfig {
@@ -63,6 +74,21 @@ mod tests {
let loaded_config: MockConfig = from_file!(&test_file)?;
assert_eq!(loaded_config, config);
// Only host in file
let mut file = File::create(&test_file).unwrap();
file.write_all("host='greptime.test'\n".as_bytes()).unwrap();
let loaded_config: MockConfig = from_file!(&test_file)?;
assert_eq!(loaded_config.host, "greptime.test");
assert_eq!(loaded_config.port, 0);
assert_eq!(loaded_config.path, "test");
// Truncate the file.
let file = File::create(&test_file).unwrap();
file.set_len(0).unwrap();
let loaded_config: MockConfig = from_file!(&test_file)?;
assert_eq!(loaded_config, MockConfig::default());
Ok(())
}
}


@@ -10,4 +10,7 @@ bytes = { version = "1.1", features = ["serde"] }
common-error = { path = "../error" }
paste = "1.0"
serde = { version = "1.0", features = ["derive"] }
snafu = { version = "0.7", features = ["backtraces"] }
snafu.workspace = true
[dev-dependencies]
toml = "0.5"


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -15,5 +15,7 @@
pub mod bit_vec;
pub mod buffer;
pub mod bytes;
#[allow(clippy::all)]
pub mod readable_size;
pub use bit_vec::BitVec;


@@ -0,0 +1,321 @@
// Copyright (c) 2017-present, PingCAP, Inc. Licensed under Apache-2.0.
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// This file is copied from https://github.com/tikv/raft-engine/blob/8dd2a39f359ff16f5295f35343f626e0c10132fa/src/util.rs without any modification.
use std::fmt;
use std::fmt::{Display, Write};
use std::ops::{Div, Mul};
use std::str::FromStr;

use serde::de::{Unexpected, Visitor};
use serde::{de, Deserialize, Deserializer, Serialize, Serializer};

const UNIT: u64 = 1;
const BINARY_DATA_MAGNITUDE: u64 = 1024;
pub const B: u64 = UNIT;
pub const KIB: u64 = B * BINARY_DATA_MAGNITUDE;
pub const MIB: u64 = KIB * BINARY_DATA_MAGNITUDE;
pub const GIB: u64 = MIB * BINARY_DATA_MAGNITUDE;
pub const TIB: u64 = GIB * BINARY_DATA_MAGNITUDE;
pub const PIB: u64 = TIB * BINARY_DATA_MAGNITUDE;

#[derive(Clone, Debug, Copy, PartialEq, Eq, PartialOrd)]
pub struct ReadableSize(pub u64);

impl ReadableSize {
    pub const fn kb(count: u64) -> ReadableSize {
        ReadableSize(count * KIB)
    }

    pub const fn mb(count: u64) -> ReadableSize {
        ReadableSize(count * MIB)
    }

    pub const fn gb(count: u64) -> ReadableSize {
        ReadableSize(count * GIB)
    }

    pub const fn as_mb(self) -> u64 {
        self.0 / MIB
    }
}

impl Div<u64> for ReadableSize {
    type Output = ReadableSize;

    fn div(self, rhs: u64) -> ReadableSize {
        ReadableSize(self.0 / rhs)
    }
}

impl Div<ReadableSize> for ReadableSize {
    type Output = u64;

    fn div(self, rhs: ReadableSize) -> u64 {
        self.0 / rhs.0
    }
}

impl Mul<u64> for ReadableSize {
    type Output = ReadableSize;

    fn mul(self, rhs: u64) -> ReadableSize {
        ReadableSize(self.0 * rhs)
    }
}

impl Serialize for ReadableSize {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        let size = self.0;
        let mut buffer = String::new();
        if size == 0 {
            write!(buffer, "{}KiB", size).unwrap();
        } else if size % PIB == 0 {
            write!(buffer, "{}PiB", size / PIB).unwrap();
        } else if size % TIB == 0 {
            write!(buffer, "{}TiB", size / TIB).unwrap();
        } else if size % GIB as u64 == 0 {
            write!(buffer, "{}GiB", size / GIB).unwrap();
        } else if size % MIB as u64 == 0 {
            write!(buffer, "{}MiB", size / MIB).unwrap();
        } else if size % KIB as u64 == 0 {
            write!(buffer, "{}KiB", size / KIB).unwrap();
        } else {
            return serializer.serialize_u64(size);
        }
        serializer.serialize_str(&buffer)
    }
}
impl FromStr for ReadableSize {
    type Err = String;

    // This method parses the value in binary units: `KB` and `KiB` are both treated as 1024 bytes.
    fn from_str(s: &str) -> Result<ReadableSize, String> {
        let size_str = s.trim();
        if size_str.is_empty() {
            return Err(format!("{:?} is not a valid size.", s));
        }

        if !size_str.is_ascii() {
            return Err(format!("ASCII string is expected, but got {:?}", s));
        }

        // size: digits, '.' as decimal separator, and scientific-notation characters
        let size_len = size_str
            .to_string()
            .chars()
            .take_while(|c| char::is_ascii_digit(c) || ['.', 'e', 'E', '-', '+'].contains(c))
            .count();

        // unit: alphabetic characters
        let (size, unit) = size_str.split_at(size_len);

        let unit = match unit.trim() {
            "K" | "KB" | "KiB" => KIB,
            "M" | "MB" | "MiB" => MIB,
            "G" | "GB" | "GiB" => GIB,
            "T" | "TB" | "TiB" => TIB,
            "P" | "PB" | "PiB" => PIB,
            "B" | "" => B,
            _ => {
                return Err(format!(
                    "only B, KB, KiB, MB, MiB, GB, GiB, TB, TiB, PB, and PiB are supported: {:?}",
                    s
                ));
            }
        };

        match size.parse::<f64>() {
            Ok(n) => Ok(ReadableSize((n * unit as f64) as u64)),
            Err(_) => Err(format!("invalid size string: {:?}", s)),
        }
    }
}
impl Display for ReadableSize {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        if self.0 >= PIB {
            write!(f, "{:.1}PiB", self.0 as f64 / PIB as f64)
        } else if self.0 >= TIB {
            write!(f, "{:.1}TiB", self.0 as f64 / TIB as f64)
        } else if self.0 >= GIB {
            write!(f, "{:.1}GiB", self.0 as f64 / GIB as f64)
        } else if self.0 >= MIB {
            write!(f, "{:.1}MiB", self.0 as f64 / MIB as f64)
        } else if self.0 >= KIB {
            write!(f, "{:.1}KiB", self.0 as f64 / KIB as f64)
        } else {
            write!(f, "{}B", self.0)
        }
    }
}

impl<'de> Deserialize<'de> for ReadableSize {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        struct SizeVisitor;

        impl<'de> Visitor<'de> for SizeVisitor {
            type Value = ReadableSize;

            fn expecting(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {
                formatter.write_str("valid size")
            }

            fn visit_i64<E>(self, size: i64) -> Result<ReadableSize, E>
            where
                E: de::Error,
            {
                if size >= 0 {
                    self.visit_u64(size as u64)
                } else {
                    Err(E::invalid_value(Unexpected::Signed(size), &self))
                }
            }

            fn visit_u64<E>(self, size: u64) -> Result<ReadableSize, E>
            where
                E: de::Error,
            {
                Ok(ReadableSize(size))
            }

            fn visit_str<E>(self, size_str: &str) -> Result<ReadableSize, E>
            where
                E: de::Error,
            {
                size_str.parse().map_err(E::custom)
            }
        }

        deserializer.deserialize_any(SizeVisitor)
    }
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_readable_size() {
        let s = ReadableSize::kb(2);
        assert_eq!(s.0, 2048);
        assert_eq!(s.as_mb(), 0);
        let s = ReadableSize::mb(2);
        assert_eq!(s.0, 2 * 1024 * 1024);
        assert_eq!(s.as_mb(), 2);
        let s = ReadableSize::gb(2);
        assert_eq!(s.0, 2 * 1024 * 1024 * 1024);
        assert_eq!(s.as_mb(), 2048);

        assert_eq!((ReadableSize::mb(2) / 2).0, MIB);
        assert_eq!((ReadableSize::mb(1) / 2).0, 512 * KIB);
        assert_eq!(ReadableSize::mb(2) / ReadableSize::kb(1), 2048);
    }

    #[test]
    fn test_parse_readable_size() {
        #[derive(Serialize, Deserialize)]
        struct SizeHolder {
            s: ReadableSize,
        }

        let legal_cases = vec![
            (0, "0KiB"),
            (2 * KIB, "2KiB"),
            (4 * MIB, "4MiB"),
            (5 * GIB, "5GiB"),
            (7 * TIB, "7TiB"),
            (11 * PIB, "11PiB"),
        ];
        for (size, exp) in legal_cases {
            let c = SizeHolder {
                s: ReadableSize(size),
            };
            let res_str = toml::to_string(&c).unwrap();
            let exp_str = format!("s = {:?}\n", exp);
            assert_eq!(res_str, exp_str);
            let res_size: SizeHolder = toml::from_str(&exp_str).unwrap();
            assert_eq!(res_size.s.0, size);
        }

        let c = SizeHolder {
            s: ReadableSize(512),
        };
        let res_str = toml::to_string(&c).unwrap();
        assert_eq!(res_str, "s = 512\n");
        let res_size: SizeHolder = toml::from_str(&res_str).unwrap();
        assert_eq!(res_size.s.0, c.s.0);

        let decode_cases = vec![
            (" 0.5 PB", PIB / 2),
            ("0.5 TB", TIB / 2),
            ("0.5GB ", GIB / 2),
            ("0.5MB", MIB / 2),
            ("0.5KB", KIB / 2),
            ("0.5P", PIB / 2),
            ("0.5T", TIB / 2),
            ("0.5G", GIB / 2),
            ("0.5M", MIB / 2),
            ("0.5K", KIB / 2),
            ("23", 23),
            ("1", 1),
            ("1024B", KIB),
            // units with binary prefixes
            (" 0.5 PiB", PIB / 2),
            ("1PiB", PIB),
            ("0.5 TiB", TIB / 2),
            ("2 TiB", TIB * 2),
            ("0.5GiB ", GIB / 2),
            ("787GiB ", GIB * 787),
            ("0.5MiB", MIB / 2),
            ("3MiB", MIB * 3),
            ("0.5KiB", KIB / 2),
            ("1 KiB", KIB),
            // scientific notation
            ("0.5e6 B", B * 500000),
            ("0.5E6 B", B * 500000),
            ("1e6B", B * 1000000),
            ("8E6B", B * 8000000),
            ("8e7", B * 80000000),
            ("1e-1MB", MIB / 10),
            ("1e+1MB", MIB * 10),
            ("0e+10MB", 0),
        ];
        for (src, exp) in decode_cases {
            let src = format!("s = {:?}", src);
            let res: SizeHolder = toml::from_str(&src).unwrap();
            assert_eq!(res.s.0, exp);
        }

        let illegal_cases = vec![
            "0.5kb", "0.5kB", "0.5Kb", "0.5k", "0.5g", "b", "gb", "1b", "B", "1K24B", " 5_KB",
            "4B7", "5M_",
        ];
        for src in illegal_cases {
            let src_str = format!("s = {:?}", src);
            assert!(toml::from_str::<SizeHolder>(&src_str).is_err(), "{}", src);
        }
    }
}
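Taken together, the `FromStr`, `Display`, and serde impls above make `ReadableSize` usable both inside TOML config files and as a plain string. A minimal usage sketch; the module path follows the lib.rs change above, while the `common_base` crate name is an assumption:

use common_base::readable_size::ReadableSize; // crate name assumed

fn demo() {
    // Parsing accepts binary units, with or without the `i` suffix.
    let size: ReadableSize = "1.5GiB".parse().unwrap();
    assert_eq!(size.0, 1536 * 1024 * 1024);
    assert_eq!(size.as_mb(), 1536);

    // Display picks the largest fitting unit with one decimal place.
    assert_eq!(ReadableSize::mb(1536).to_string(), "1.5GiB");

    // The Div impls support both scalar division and unit-free ratios.
    assert_eq!(ReadableSize::gb(1) / ReadableSize::mb(512), 2);
}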


@@ -11,11 +11,11 @@ common-telemetry = { path = "../telemetry" }
datatypes = { path = "../../datatypes" }
lazy_static = "1.4"
regex = "1.6"
serde = "1.0"
serde.workspace = true
serde_json = "1.0"
snafu = { version = "0.7", features = ["backtraces"] }
[dev-dependencies]
chrono = "0.4"
tempdir = "0.3"
tokio = { version = "1.0", features = ["full"] }
tokio.workspace = true


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,3 +14,9 @@
pub mod consts;
pub mod error;
/// Formats a table's fully qualified name.
#[inline]
pub fn format_full_table_name(catalog: &str, schema: &str, table: &str) -> String {
    format!("{catalog}.{schema}.{table}")
}
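A quick usage sketch of the new helper (the catalog, schema, and table names here are illustrative):

assert_eq!(
    format_full_table_name("greptime", "public", "metrics"),
    "greptime.public.metrics"
);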


@@ -6,3 +6,4 @@ license.workspace = true
[dependencies]
snafu = { version = "0.7", features = ["backtraces"] }
strum = { version = "0.24", features = ["std", "derive"] }


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -33,72 +33,99 @@ pub trait ErrorExt: std::error::Error {
    fn as_any(&self) -> &dyn Any;
}

/// A helper macro to define an opaque boxed error based on errors that implement the [ErrorExt] trait.
#[macro_export]
macro_rules! define_opaque_error {
    ($Error:ident) => {
        /// An error that behaves like `Box<dyn Error>`.
        ///
        /// Define this error as a new type instead of using `Box<dyn Error>` directly so we can
        /// implement more methods or traits for it.
        pub struct $Error {
            inner: Box<dyn $crate::ext::ErrorExt + Send + Sync>,
        }

        impl $Error {
            pub fn new<E: $crate::ext::ErrorExt + Send + Sync + 'static>(err: E) -> Self {
                Self {
                    inner: Box::new(err),
                }
            }
        }

        impl std::fmt::Debug for $Error {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                // Use the pretty debug format of the inner error for the opaque error.
                let debug_format = $crate::format::DebugFormat::new(&*self.inner);
                debug_format.fmt(f)
            }
        }

        impl std::fmt::Display for $Error {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                write!(f, "{}", self.inner)
            }
        }

        impl std::error::Error for $Error {
            fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
                self.inner.source()
            }
        }

        impl $crate::ext::ErrorExt for $Error {
            fn status_code(&self) -> $crate::status_code::StatusCode {
                self.inner.status_code()
            }

            fn backtrace_opt(&self) -> Option<&$crate::snafu::Backtrace> {
                self.inner.backtrace_opt()
            }

            fn as_any(&self) -> &dyn std::any::Any {
                self.inner.as_any()
            }
        }

        // Implement ErrorCompat for this opaque error so the backtrace is also available
        // via `ErrorCompat::backtrace()`.
        impl $crate::snafu::ErrorCompat for $Error {
            fn backtrace(&self) -> Option<&$crate::snafu::Backtrace> {
                self.inner.backtrace_opt()
            }
        }
    };
/// An opaque boxed error based on errors that implement the [ErrorExt] trait.
pub struct BoxedError {
    inner: Box<dyn crate::ext::ErrorExt + Send + Sync>,
}

// Define a general boxed error.
define_opaque_error!(BoxedError);

impl BoxedError {
    pub fn new<E: crate::ext::ErrorExt + Send + Sync + 'static>(err: E) -> Self {
        Self {
            inner: Box::new(err),
        }
    }
}

impl std::fmt::Debug for BoxedError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // Use the pretty debug format of the inner error for the opaque error.
        let debug_format = crate::format::DebugFormat::new(&*self.inner);
        debug_format.fmt(f)
    }
}

impl std::fmt::Display for BoxedError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.inner)
    }
}

impl std::error::Error for BoxedError {
    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
        self.inner.source()
    }
}

impl crate::ext::ErrorExt for BoxedError {
    fn status_code(&self) -> crate::status_code::StatusCode {
        self.inner.status_code()
    }

    fn backtrace_opt(&self) -> Option<&crate::snafu::Backtrace> {
        self.inner.backtrace_opt()
    }

    fn as_any(&self) -> &dyn std::any::Any {
        self.inner.as_any()
    }
}

// Implement ErrorCompat for this opaque error so the backtrace is also available
// via `ErrorCompat::backtrace()`.
impl crate::snafu::ErrorCompat for BoxedError {
    fn backtrace(&self) -> Option<&crate::snafu::Backtrace> {
        self.inner.backtrace_opt()
    }
}

/// Error type with a plain error message.
#[derive(Debug)]
pub struct PlainError {
    msg: String,
    status_code: StatusCode,
}

impl PlainError {
    pub fn new(msg: String, status_code: StatusCode) -> Self {
        Self { msg, status_code }
    }
}

impl std::fmt::Display for PlainError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.msg)
    }
}

impl std::error::Error for PlainError {
    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
        None
    }
}

impl crate::ext::ErrorExt for PlainError {
    fn status_code(&self) -> crate::status_code::StatusCode {
        self.status_code
    }

    fn backtrace_opt(&self) -> Option<&crate::snafu::Backtrace> {
        None
    }

    fn as_any(&self) -> &dyn std::any::Any {
        self as _
    }
}
#[cfg(test)]
mod tests {
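To see how the new concrete types compose, here is a hedged sketch of wrapping a `PlainError` in a `BoxedError` while preserving the status code. `StatusCode::AccessDenied` comes from the status-code diff further down; the `common_error` crate name and the `ext` module path for `PlainError` are assumptions:

use common_error::ext::{BoxedError, ErrorExt, PlainError}; // paths assumed
use common_error::status_code::StatusCode;

fn check_access(allowed: bool) -> Result<(), BoxedError> {
    if allowed {
        Ok(())
    } else {
        // PlainError carries only a message and a status code; BoxedError
        // erases the concrete type while keeping the ErrorExt behavior.
        Err(BoxedError::new(PlainError::new(
            "access to schema denied".to_string(),
            StatusCode::AccessDenied,
        )))
    }
}

fn demo() {
    let err = check_access(false).unwrap_err();
    // status_code() is forwarded to the inner error.
    assert_eq!(err.status_code(), StatusCode::AccessDenied);
}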


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -24,6 +24,9 @@ pub mod prelude {
    pub use crate::ext::{BoxedError, ErrorExt};
    pub use crate::format::DebugFormat;
    pub use crate::status_code::StatusCode;

    pub const INNER_ERROR_CODE: &str = "INNER_ERROR_CODE";
    pub const INNER_ERROR_MSG: &str = "INNER_ERROR_MSG";
}
pub use snafu;


@@ -1,10 +1,10 @@
// Copyright 2022 Greptime Team
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,8 +14,10 @@
use std::fmt;

use strum::EnumString;

/// Common status code for public API.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, EnumString)]
pub enum StatusCode {
    // ====== Begin of common status code ==============
    /// Success.
@@ -75,6 +77,8 @@ pub enum StatusCode {
    AuthHeaderNotFound = 7003,
    /// Invalid HTTP authorization header
    InvalidAuthHeader = 7004,
    /// Illegal request to connect to a catalog or schema
    AccessDenied = 7005,
    // ====== End of auth related status code =====
}
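The new `EnumString` derive (backed by the `strum` dependency added earlier) generates a `FromStr` impl over the variant names, so status codes can be parsed back from strings. A small sketch, with the crate path assumed:

use std::str::FromStr;

use common_error::status_code::StatusCode; // crate path assumed

fn demo() {
    // EnumString maps variant names to enum values.
    assert_eq!(StatusCode::from_str("AccessDenied"), Ok(StatusCode::AccessDenied));
    // Unknown names produce a strum::ParseError.
    assert!(StatusCode::from_str("NoSuchCode").is_err());
}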


@@ -15,5 +15,5 @@ syn = "1.0"
arc-swap = "1.0"
common-query = { path = "../query" }
datatypes = { path = "../../datatypes" }
snafu = { version = "0.7", features = ["backtraces"] }
snafu.workspace = true
static_assertions = "1.1.0"


@@ -18,7 +18,7 @@ num = "0.4"
num-traits = "0.2"
once_cell = "1.10"
paste = "1.0"
snafu = { version = "0.7", features = ["backtraces"] }
snafu.workspace = true
statrs = "0.15"
[dev-dependencies]


Some files were not shown because too many files have changed in this diff.